% arXiv:2205.03922v2
% http://arxiv.org/abs/2205.03922v2
% Computer assisted proofs for transverse collision and near collision orbits in the restricted three body problem
\documentclass[1p]{elsarticle} \def\bibsection{\section*{References}} \usepackage{lineno} \usepackage[colorlinks,linkcolor={blue}]{hyperref} \modulolinenumbers[5] \usepackage{amsfonts} \usepackage{graphicx} \usepackage{amsmath} \usepackage{graphicx} \usepackage{amssymb, nicefrac} \usepackage{mathtools} \DeclarePairedDelimiter{\setof}{\{}{\}} \DeclarePairedDelimiter{\abs}{\lvert}{\rvert} \DeclarePairedDelimiter{\norm}{\lVert}{\rVert} \DeclarePairedDelimiter{\dotp}{\langle}{\rangle} \DeclarePairedDelimiter{\paren}{(}{)} \renewcommand{\iff}{\text{if and only if}} \usepackage{xcolor} \definecolor{lightblue}{rgb}{0.8,0.8,1} \newcommand{\sk}[2]{{\color{lightblue}#1} {\color{blue}#2}} \newtheorem{theorem}{Theorem} \newtheorem{maintheorem}{Theorem} \newtheorem{acknowledgement}[theorem]{Acknowledgement} \newtheorem{algorithm}[theorem]{Algorithm} \newtheorem{axiom}[theorem]{Axiom} \newtheorem{case}[theorem]{Case} \newtheorem{claim}[theorem]{Claim} \newtheorem{conclusion}[theorem]{Conclusion} \newtheorem{condition}[theorem]{Condition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{criterion}[theorem]{Criterion} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{Example}[theorem]{Example} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{notation}[theorem]{Notation} \newtheorem{problem}[theorem]{Problem} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{rem}[theorem]{Remark} \newtheorem{solution}[theorem]{Solution} \newtheorem{summary}[theorem]{Summary} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{remark}[theorem]{Remark} \newtheorem{remarks}[theorem]{Remarks} \newenvironment{proof}[1][Proof]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}} \newcommand{\correction}[2]{#2} \newcommand{\cor}[1]{#1} \newcommand\corTypo[1]{#1} \newcommand{\comment}[1]{} \newcommand{\cover}[1]{\stackrel{#1}{\Longrightarrow}} 
\newcommand{\invcover}[1]{\stackrel{#1}{\Longleftarrow}} \newcommand{\dom}{\mathrm{dom}} \newcommand{\inter}{\mathrm{int}}\newcommand{\mymarginpar}[1]{} \newcommand{\id}{\mathrm{Id}} \makeatletter \usepackage{tikz} \newcommand*\circled[2][1.6]{\tikz[baseline=(char.base)]{ \node[shape=circle, draw, inner sep=1pt, minimum height={\f@size*#1},] (char) {\vphantom{WAH1g}#2};}} \newcommand\NoStart{\circled[0.0]{$N_0$} } \makeatother \bibliographystyle{elsarticle-num} \begin{document} \begin{frontmatter} \title{Computer assisted proofs for transverse collision \\ and near collision orbits in the restricted three body problem} \author{Maciej J. Capi\'nski\footnote{M. C. was partially supported by the NCN grants 2019/35/B/ST1/00655 and 2021/41/B/ST1/00407.}} \ead{[email protected]} \address{AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Krak\'ow, Poland} \author{Shane Kepley} \ead{[email protected]} \address{Vrije Universiteit Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam, Netherlands } \author{J.D. Mireles James\footnote{J.D.M.J. was partially supported by NSF Grant DMS 1813501}} \ead{[email protected]} \address{Florida Atlantic University, 777 Glades Road, Boca Raton, Florida, 33431} \begin{abstract} This paper considers two point boundary value problems for conservative systems defined in multiple coordinate systems, and develops a flexible a posteriori framework for computer assisted existence proofs. Our framework is applied to the study of collision and near collision orbits in the circular restricted three body problem. In this case the coordinate systems are the standard rotating coordinates, and the two Levi-Civita coordinate systems regularizing collisions with each of the massive primaries. The proposed framework is used to prove the existence of a number of orbits which have long been studied numerically in the celestial mechanics literature, but for which there are no existing analytical proofs at the mass and energy values considered here. 
These include transverse ejection/collisions from one primary body to the other, Str\"{o}mgren's asymptotic periodic orbits (transverse homoclinics for $L_{4,5}$), families of periodic orbits passing through collision, and orbits connecting $L_4$ to ejection or collision. \end{abstract} \begin{keyword} Celestial mechanics, collisions, transverse homoclinic, computer assisted proofs. \MSC[2010] 37C29, 37J46, 70F07. \end{keyword} \end{frontmatter} \section{Introduction} \label{sec:intro} \correction{comment 1}{ The present work develops computer assisted arguments for proving theorems about collision and near collision orbits in conservative systems. Using these arguments, we answer several open questions about the dynamics of the planar circular restricted three body problem (CRTBP), a simplified model of three body motion popular since the pioneering work of Poincar\'{e} \cite{MR1194622,MR1194623,MR1194624}. Our approach combines classical Levi-Civita regularization with a multiple shooting scheme for two point boundary value problems (BVPs) describing orbits which begin and end on parameterized curves/symmetry sets in an energy level. After numerically computing an approximate solution to the BVP, we use a Newton-Krawczyk theorem to establish the proof of the existence of a true solution nearby. The a posteriori argument makes extensive use of validated Taylor integrators for vector fields and variational equations. See also Remark \ref{rem:puttingItAllTogether} for several additional remarks about the relationship between the present work and the existing literature. } The PCRTBP, defined formally in Section \ref{sec:PCRTBP}, describes the motion of an infinitesimal particle like a satellite, asteroid, or comet moving in the field of two massive bodies which orbit their center of mass on Keplerian circles. The massive bodies are called the primaries, and one assumes that their orbits are not disturbed by the addition of the massless particle. 
Changing to a co-rotating frame of reference results in autonomous equations of motion, and choosing normalized units of distance, mass, and time reduces the number of parameters in the problem to one: the mass ratio of the primaries. The system has a single first integral referred to as the Jacobi constant, usually written as $C$. It is important to remember that for systems with a conserved quantity, periodic orbits occur in one parameter families -- or tubes -- parameterized by the value of the conserved quantity (the ``energy''). We note also that the CRTBP has an equilibrium solution, also called a Lagrange point or libration point, denoted $L_4$, in the upper half plane, forming an equilateral triangle with the two primaries. (Similarly, $L_5$ forms an equilateral triangle in the lower half plane). We are interested in the following questions for the CRTBP. \begin{itemize} \item \textbf{Q1:} \textit{Do there exist orbits of the infinitesimal body which collide with one primary in forward time, and the other primary in backward time?} We refer to such orbits as primary-to-primary ejection-collisions. \item \textbf{Q2:} \textit{Do there exist orbits of the infinitesimal body which are asymptotic to $L_4$ in backward time, but which collide with a primary in forward time?} (Or the reverse: from ejection to $L_4$.) We refer to these as $L_4$-to-collision orbits (or ejection-to-$L_4$ orbits). \item \textbf{Q3:} \textit{Do there exist orbits of the infinitesimal body which are asymptotic in both forward and backward time to $L_4$? } Such orbits are said to be homoclinic to $L_{4}$. \item \textbf{Q4:} \textit{Do there exist tubes of large amplitude periodic orbits for the infinitesimal body, which accumulate to an ejection-collision orbit with one of the primaries?} Such tubes are said to terminate at an ejection-collision orbit. 
\item \textbf{Q5:} \textit{Do there exist tubes of periodic orbits for the infinitesimal body which accumulate to a pair of ejection-collision orbits going from one primary to the other and back?} Such tubes are said to terminate at a consecutive ejection-collision. \end{itemize} Questions 1 and 4 are known to have affirmative answers in various perturbative situations, but they are open for many mass ratios and/or values of the Jacobi constant. There is also numerical evidence suggesting the existence of $L_4$ homoclinic orbits, and consecutive ejection-collisions. However, questions 2, 3, and 5 are not perturbative, and they have remained open until now. We review the literature in more detail in Section \ref{sec:collisionLit}. The following theorems, for non-perturbative mass and energy parameters of the planar CRTBP, constitute the main results of the present work. \correction{comment 9}{} \correction{comment 3}{ \begin{theorem}\label{thm:main-thm-1} Consider the PCRTBP with mass ratio $1/4$ and Jacobi constant $C = 3.2$. There exist at least two transverse ejection-collision orbits which transit between the primary bodies. One of these is ejected from the large primary and collides with the smaller, and the other is ejected from the smaller primary and collides with the larger. Both orbits make this transition in finite time as measured in the original/synodic coordinates. (See page \pageref{thm:ejectionCollision} for the precise statement). \end{theorem} \begin{theorem}\label{thm:main-thm-2} Consider the PCRTBP with equal masses and Jacobi constant $C_{L_4} = 3$. There exists at least one ejection-to-$L_4$ orbit, and at least one $L_4$-to-collision orbit. These orbits take infinite (forward or backward) time to reach $L_4$. (See page \pageref{thm:CAP-L4-to-collision} for the precise statement.) Analogous orbits exist for $L_5$ by symmetry. \end{theorem} \begin{theorem}\label{thm:main-thm-3} Consider the PCRTBP with equal masses and Jacobi constant $C_{L_4} = 3$. 
There exist at least three distinct transverse homoclinic orbits to $L_4$. (See page \pageref{thm:CAP-connections} for the precise statement). Analogous orbits exist for $L_5$ by symmetry considerations. These orbits take infinite time to accumulate to $L_4$. As a corollary of transversality (see also Remark \ref{rem:termination}), there exist chaotic subsystems in some neighborhood of each homoclinic orbit. \end{theorem} \begin{theorem}\label{thm:main-thm-4} Consider the PCRTBP with Earth-Moon mass ratio. There exists a one parameter family of periodic orbits which accumulate to an ejection-collision orbit originating from and terminating at the Earth. The ejection collision orbit has Jacobi constant $C \approx 1.434$, and has ``large amplitude'', in the sense that it passes near collision with the Moon. This ejection-collision occurs in finite time in synodic/unregularized coordinates. (See page \pageref{th:CAP-Lyap} for the precise statement.) \end{theorem} \begin{theorem} \label{thm:main-thm-5}Consider the PCRTBP with equal masses. There exists a family of periodic orbits which accumulate to a consecutive ejection-collision orbit involving both primaries. Each of the ejection-collisions occurs in finite time in synodic/unregularized coordinates. The Jacobi constant of the consecutive ejection-collision orbit is $C \approx 2.06$. (See page \pageref{thm:doubleCollision} for the precise statement.) \end{theorem} } \correction{comment 4}{Each of the theorems is interesting in its own right, as we elaborate on in the remarks below. Nevertheless, we note that the mass ratios and energies in the theorems have been chosen primarily to illustrate that our approach can be applied in many different settings. Similar theorems could be proven at other parameter values, or in other problems involving collisions, using the methodology developed here. We also remark that our results make no claims about global uniqueness. 
There could be many other such orbits for the given parameter values. However, due to the transversality, such orbits cannot be arbitrarily close to the orbits whose existence we prove. } \begin{remark}[Ballistic transport] \label{rem:ballisticTransport} {\em Theorem 1 establishes the existence of ballistic transport, or zero energy transfer, from one primary to the other in finite time (think of this as a ``free lunch'' trip between the primaries). In physical terms, ballistic transport allows debris to diffuse between a planet and its moon, or between a star and one of its planets, using only the natural dynamics of the system. This phenomenon is observed for example when Earth rocks, ejected into space after a meteor strike, are later discovered on the Moon \cite{earthMoonRock} (or vice versa). Martial applications of low energy Moon-to-Earth transfer are discussed in \cite{next100Years,theMoonIsHarsh}. Mathematically rigorous existence proofs for primary-to-primary ejection-collision orbits have until now required both small mass ratio and high velocity -- that is, negative enough Jacobi constant. See \cite{MR682839}. In a similar fashion, Theorem 2 establishes the existence of zero energy transfers involving $L_4$ and a primary, and could for example be used to design space missions which visit the triangular libration points. } \end{remark} \begin{remark}[Termination orbits] \label{rem:termination} {\em Theorems \ref{thm:main-thm-3}, \ref{thm:main-thm-4}, \ref{thm:main-thm-5} involve the termination of tubes of periodic orbits. \cor{ Indeed, a corollary of Theorem \ref{thm:main-thm-3} is that there are families/tubes of periodic orbits accumulating to each of our $L_4$ homoclinics. This follows from a theorem of Henrard \cite{MR0365628}. Another corollary is that, near each of the orbits of Theorem \ref{thm:main-thm-3}, there is an invariant chaotic subsystem in the $L_4$ energy level. This is due to a theorem of Devaney \cite{MR0442990}. 
} \cor{ Numerical evidence for the existence of $L_4$ homoclinics in the equal mass CRTBP appears already in the work of Str\"{o}mgren in the 1920's \cite{stromgrenMoulton,stromgrenRef}. See \cite{szebehelyTriangularPoints,onMoulton_Szebehely,theoryOfOrbits} for more discussion. Such orbits were once called \textit{asymptotic periodic orbits}, in light of the fact that they are closed loops with infinite period. Despite the fact that they appeared in the literature more than a hundred years ago, the present work provides -- to the best of our knowledge -- the first mathematically rigorous existence proof of transverse $L_4$ homoclinics in the CRTBP. } In Theorems \ref{thm:main-thm-4} and \ref{thm:main-thm-5}, we first prove the existence of the ejection-collision orbits, and then directly establish the existence of one parameter families of periodic orbits terminating at these ejection-collision orbits by an application of the implicit function theorem. Termination orbits have a long history in celestial mechanics, and are of fundamental importance in equivariant bifurcation theory. We refer the interested reader to the discussion of ``Str\"{o}mgren's termination principle'' in Chapter 9 of \cite{theoryOfOrbits}, and to the works of \cite{MR1879221,MR2042173,MR2969866} on equivariant families in the Hill three body and restricted three body problems. See also the works of \cite{MR3007103,MR2821620} on global continuation families in the restricted $N$-body problem.} \end{remark} \correction{comment 6}{ \begin{remark}[Final fate of the velocity variables] \label{rem:Chazy} {\em Chazy's 1922 paper \cite{MR1509241} proved an important classification result describing the possible asymptotic behavior of the position variables of three body orbits defined for all time. 
In the context of the CRTBP, Chazy's result says that orbits are either hyperbolic (massless particle goes to infinity with non-zero final velocity), parabolic (massless particle goes to infinity with zero final velocity), bounded (massless particle remains in a bounded region for all time), or oscillatory (the lim sup of the distance from the origin is infinite, while the lim inf is finite -- that is, the massless particle makes infinitely many excursions between neighbourhoods of the origin and infinity). An analogous complete classification theorem for the velocity variables does not exist; however, we note that our Theorems 2 and 3 establish the existence of orbits with interesting asymptotic velocities. For example, the $L_4$ to collision orbits of Theorem 2 have zero asymptotic velocity and reach infinite velocity in forward time (or vice versa), while the homoclinics of Theorem 3 have zero forward and backward asymptotic velocity. } \end{remark} } \begin{remark}[Moulton's $L_4$ periodic orbits] \label{rem:Moulton} {\em The family of periodic orbits whose existence is established in Theorem 5 is of Moulton's $L_4$ type, in the sense of \cite{moultonBook}. That is, these are periodic orbits which when projected into the $(x,y)$ plane (i.e. the configuration space) have non-trivial winding about $L_4$. See also Chapter 9 of \cite{theoryOfOrbits}, or the works of \cite{onMoulton_Szebehely,szebehelyTriangularPoints} for a more complete discussion of the history (and controversy) surrounding Moulton's orbits. The present work provides, to the best of our knowledge, the first proof that Moulton type $L_4$ periodic orbits exist. } \end{remark} \cor{The remainder of the paper is organized as follows. The next three subsections briefly discuss some literature on regularization of collision, numerical computational methods, and computer assisted methods of proof in celestial mechanics. 
We conclude these subsections with Remark \ref{rem:puttingItAllTogether}, which places our work in the context of the works discussed. These sections can be skimmed by the reader who wishes to dive right into the mathematical setup, which is described in Section \ref{sec:problem}. There we describe the problem setup in terms of an appropriate multiple shooting problem, and establish tools for solving it. In particular, we define the unfolding parameters which we use to isolate transverse solutions in energy level sets, and use this notion to formulate Theorem \ref{th:single-shooting} and Lemma \ref{lem:multiple-shooting-2}, which we later use for our computer assisted proofs. The role of the unfolding parameter is to add a missing variable, which is needed for solving the problems by means of zero finding with a Newton method. The unfolding parameter is an artificial variable added to the equations; it is added, though, in a way that ensures that we can recover the solution of the original problem from the appended one. (See Remark \ref{rem:unfolding} for more detailed comments.)\label{explanation-unfolding} } In Section \ref{sec:PCRTBP} we describe the planar CRTBP and its Levi-Civita regularization. Sections \ref{sec:ejectionToCollision}, \ref{sec:L4_to_collision}, and \ref{sec:symmetric-orbits} describe the formulation of the multiple shooting problems for primary-to-primary ejection-collision orbits, $L_4$ to ejection/collision orbits, $L_4$ homoclinic orbits, and periodic ejection-collision families. Section \ref{sec:CAP} describes our computer assisted proof strategy and illustrates how this strategy is used to prove our main theorems. Some technical details are given in the appendices. The codes implementing the computer assisted proofs discussed in this paper are available at the homepage of the first author (MC). 
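For readers who wish to experiment numerically before diving into the formal setup, the following sketch (our own illustration, not the validated Taylor integrator used in the proofs) implements the synodic-frame CRTBP vector field and the Jacobi constant. It assumes the common convention placing the primary of mass $1-\mu$ at $(-\mu,0)$ and the primary of mass $\mu$ at $(1-\mu,0)$, with the $\mu(1-\mu)$ normalization of the Jacobi constant under which $C_{L_4}=3$ in the equal mass case.

```python
import math

def crtbp_field(state, mu):
    """Planar CRTBP vector field in the rotating (synodic) frame.

    Convention (an assumption of this sketch): the primary of mass
    1-mu sits at (-mu, 0) and the primary of mass mu at (1-mu, 0).
    """
    x, y, vx, vy = state
    r1 = math.hypot(x + mu, y)        # distance to the larger primary
    r2 = math.hypot(x - (1 - mu), y)  # distance to the smaller primary
    # Gradient of the effective potential Omega.
    ox = x - (1 - mu) * (x + mu) / r1**3 - mu * (x - (1 - mu)) / r2**3
    oy = y - (1 - mu) * y / r1**3 - mu * y / r2**3
    # x'' - 2y' = Omega_x,  y'' + 2x' = Omega_y.
    return (vx, vy, 2 * vy + ox, -2 * vx + oy)

def jacobi_constant(state, mu):
    """C = 2*Omega - |v|^2, including the mu*(1-mu) normalization term."""
    x, y, vx, vy = state
    r1 = math.hypot(x + mu, y)
    r2 = math.hypot(x - (1 - mu), y)
    omega = (0.5 * (x**2 + y**2) + (1 - mu) / r1 + mu / r2
             + 0.5 * mu * (1 - mu))
    return 2 * omega - (vx**2 + vy**2)

def rk4_step(f, state, h, mu):
    """One classical Runge-Kutta step (non-rigorous, for illustration)."""
    k1 = f(state, mu)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)), mu)
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)), mu)
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)), mu)
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))
```

With $\mu = 1/2$ the field vanishes at $L_4 = (0, \sqrt{3}/2)$, where the Jacobi constant evaluates to $3$, matching the value $C_{L_4} = 3$ appearing in Theorems 2 and 3.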
\section*{Literature review} \subsection{Geometric approach to collision dynamics} \label{sec:collisionLit} \correction{comment 5}{ Suppose one were to choose, more or less arbitrarily, an initial configuration of the gravitational $N$-body problem. A fundamental question is to ask ``does this initial configuration lead to collision between two or more of the bodies in finite time?'' The question is delicate, and remains central to the theory even after generations of serious study. Saari, for example, has shown that the set of orbits which reach collision in finite time (the \textit{collision set}) has measure zero \cite{MR295648,MR321386}, so that $N$-body collisions are in some sense physically unlikely. On the other hand, results due to Kaloshin, Guardia, and Zhang prove that the collision set can be $\gamma$-dense in open sets \cite{MR3951693}. Though this notion of density is technical, the result shows that the embedding of the collision set may be topologically complicated. } \cor{ One of the main tools for studying collisions is to introduce coordinate transformations which regularize the singularities. The virtue of a regularizing coordinate change, from a geometric perspective, is that it transforms the singularity set in the original coordinates into a nicer geometric object. For example, after Levi-Civita regularization in the planar CRTBP, the singularity sets (restricted to a particular fixed energy level) are transformed into circles \cite{MR1555161}. We review the Levi-Civita coordinates for the CRTBP in Section \ref{sec:PCRTBP}, and refer the interested reader to Chapter 3 of \cite{theoryOfOrbits}, to the notes of \cite{cellettiCollisions,MR633766}, and to the works of \cite{MR562695,MR633766,MR359459,MR3069058,MR638060} for a much more complete overview of the pre-McGehee literature on different regularization techniques. 
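To fix ideas, the Levi-Civita transformation just mentioned can be sketched as follows. We stress that the scaling conventions below vary from author to author, and the precise form used in this paper appears in Section \ref{sec:PCRTBP}.

```latex
% Convention-dependent sketch of the Levi-Civita change of variables.
% Write $z = x + iy$ for the position in rotating coordinates, and let
% $z_p$ denote the position of the primary under consideration.  The
% transformation is the conformal double cover
\[
  z - z_p = w^2, \qquad \frac{dt}{d\tau} = |w|^2,
\]
% so that $|w|^2$ is the distance to the primary (some authors include
% an additional constant factor in the time rescaling).  On a fixed
% level set of the Jacobi constant, the transformed vector field
% extends regularly to $w = 0$, where the admissible regularized
% velocities $dw/d\tau$ form a circle whose radius depends on the mass
% ratio and on $C$; this is the regularized collision circle referred
% to throughout.
```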
} \cor{Advecting the regularized singularity set under the backward flow for a time $T$ leads to a smooth manifold of initial conditions whose orbits collide with the primary in forward time $T$ or less. This is referred to as a local collision manifold. Running time backwards leads to a local ejection manifold. Studying intersections between ejection and collision manifolds, and their intersections with other invariant objects, provides invaluable insights. } \cor{One of the first works to combine this geometric picture of collisions with techniques from the qualitative theory of dynamical systems is the paper by McGehee \cite{MR359459}. Here, a general method for regularizing singularities is developed and used to study triple collisions in an isosceles three body problem. As an illustration of the power of the method, the author proves the existence of an infinite set of initial conditions whose orbits achieve arbitrarily large velocities after near collision. } \cor{Building on these results, Devaney proved the existence of infinitely many ejection-collision orbits in the same model, when one of the masses is small \cite{Devaney1980TripleCI}. Further insights, based on similar techniques, are found in the works of Sim\'{o}, ElBialy, Lacomba and Losco, and Moeckel \cite{MR640127,ElBialy:1989td,MR571374,10.2307/24893242}. Using similar methods, Alvarez-Ram\'{i}rez, Barrab\'{e}s, Medina, and Oll\'{e} obtain numerical and analytical results for a related symmetric collinear four-body problem in \cite{MR3880194}. In \cite{MR638060}, Belbruno developed a new regularization technique for the spatial CRTBP and used it to prove the existence of families of periodic orbits which terminate at ejection-collision when the mass ratio is small enough. 
This is a perturbative analog of our Theorem \ref{thm:main-thm-4} (but in the spatial problem).} \cor{The paper \cite{MR682839} by Llibre is especially relevant to the present study, as the author establishes a number of theorems about ejection-collision orbits in the planar CRTBP. Taylor expansions for the local ejection/collision manifolds in Levi-Civita coordinates are given and used to show that the local collision sets are homeomorphic to cylinders for all values of energy and any mass ratio. Then, for mass ratio sufficiently small, the author proves that the ejection/collision manifolds intersect twice near the large primary. This gives the existence of a pair of ejection-collision orbits which depart from and return to the large body. For Jacobi constant sufficiently negative and mass ratio sufficiently small, he also proves the existence of an ejection from the large body which collides with the small body in finite time. This is a perturbative analogue of our Theorem \ref{thm:main-thm-1}. We note that the large primary to small primary ejection-collisions in \cite{MR682839} are ``fast'', in the sense that the relative velocity between the infinitesimal body and the large primary is never zero. Compare this to the orbits of our Theorem \ref{thm:main-thm-1}, which twice attain zero relative velocity with respect to the large primary (the orbits make a ``loop''). } \cor{A follow up paper by Lacomba and Llibre \cite{MR949626} shows that the ejection-collision orbits of \cite{MR682839} are transverse, and as a corollary the authors prove that the CRTBP has no $C^1$ extendable regular integrals. Heuristically speaking, this says that the Jacobi integral is the only conserved quantity in the CRTBP and hence the system is not integrable. Transversality is proven analytically for small values of the mass ratio in the CRTBP, and studied numerically for Hill's problem. 
In \cite{MR993819}, Delgado proves the transversality result for Hill's problem, using a perturbative argument for $1/C$ small (large Jacobi constant). We remark that the techniques developed in the present work could be applied to the Hill problem for non-perturbative values of $C$. We also mention the work of Pinyol \cite{MR1342132}, which uses similar techniques to prove the existence of ejection-collision orbits for the elliptic CRTBP.} \cor{A number of results for collision and near collision orbits have been established using KAM arguments in Levi-Civita coordinates. For example in \cite{MR967629} and for the planar CRTBP, Chenciner and Llibre prove the existence of invariant tori which intersect the regularized collision circle transversally. The argument works for $1/C$ small enough and for any mass ratio. The dynamics on the invariant tori are conjugate to irrational rotation, so that their existence implies that there are infinitely many orbits which pass arbitrarily close to collision infinitely many times. Returning to the original coordinates, the authors refer to these as punctured invariant tori (punctured by the collision set). Punctured tori in an averaged four body problem (weakly coupled double Kepler problem) are studied by F\'{e}joz in \cite{MR1849229}, and this work is extended by the same author to the CRTBP, for a parameter regime where the system can be viewed as a perturbation of two uncoupled Kepler problems \cite{MR1919782}. See also the work of Zhao \cite{MR3417880} for a proof that there exists a positive measure set of punctured tori in the spatial CRTBP.} \cor{In \cite{MR1805879}, Bolotin and Mackay use variational methods to prove the existence of chaotic collision and near collision dynamics in the planar CRTBP. The argument studies some normally hyperbolic invariant manifolds whose stable/unstable manifolds, in regularized coordinates, intersect one another near the local ejection/collision manifolds. 
The small parameter in this situation is the mass ratio, and the results hold in an explicit closed interval of energies. The same authors extend the result to the spatial CRTBP in \cite{MR2245344}, and Bolotin obtains the existence of chaotic near collision dynamics for the elliptic CRTBP in \cite{MR2331205}.} \cor{A more constructive (non-variational) approach to studying chaotic collision and near collision dynamics is found in Font, Nunes, and Sim\'{o} \cite{MR1877971}. Here, the authors prove the existence of chaotic invariant sets containing orbits which make infinitely many near collisions with the smaller body in the planar CRTBP. Again, $\mu$ is taken as the small parameter and the authors compute perturbative expansions for some Poincar\'e maps in Levi-Civita coordinates. Using these expansions they directly prove the existence of horseshoe dynamics. Since the argument is constructive, they are also able to show, via careful numerical calculations, that the expansions provide useful predictions for $\mu$ as large as $10^{-3}$. In a follow up paper \cite{MR2475705}, the same authors numerically compute all the near collision periodic orbits in a fixed energy level satisfying certain bounds on the return time, and present numerical evidence for the existence of chaotic dynamics between these.} \cor{In the paper \cite{MR3693390}, Oll\'{e}, Rodr\'{i}guez, and Soler introduce the notion of an $n$-ejection-collision orbit. This is an orbit which is ejected from the larger primary body, and which makes an excursion where it achieves a relative maximum distance from the large primary $n$-times before colliding with it: such orbits look like flowers with $n$ petals and the primary body at the center. Finding $n$-ejection-collision orbits necessitates studying the local ejection/collision manifolds at a greater distance from the regularized singularity set than in previous works. 
The authors also numerically study the bifurcation structure of these families for $1 \leq n \leq 10$ over a range of energies. } \cor{In a follow-up paper \cite{MR4110029}, the same authors prove the existence of four families of $n$-ejection-collision orbits for any value of $n \geq 1$, for $\mu$ small enough, and for values of the Jacobi constant sufficiently large. The argument exploits an analytic solution of the variational equations in a neighborhood of the regularized singularity set in the Levi-Civita coordinates. They also perform large scale numerical calculations which suggest that the $n$-ejection-collision orbits persist at all values of the mass ratio, and for large ranges of the Jacobi constant. Another paper by the same authors numerically studies the manifold of ejection orbits, for both the large and small primary bodies, over the whole range of mass ratios and for a number of different values of the Jacobi constant \cite{MR4162341}. The authors also propose a geometric mechanism for finding ejection orbits which transit from one primary to the neighborhood of the other. More precisely, they numerically compute intersections between the ejection manifold and the stable manifold of a periodic orbit in the $L_1$ Lyapunov family at the appropriate energy level, and study the resulting dynamics. } \cor{In the paper \cite{tereOlleCollisions}, Seara, Oll\'{e}, Rodr\'{i}guez, and Soler dramatically extend the results of \cite{MR4110029}. First, they show that the existence of an $n$-ejection-collision is equivalent to an orbit with $n$-zeros of the angular momentum: a scalar quantity. Using this advance they are able to remove the small mass condition, and prove that for either primary, at any value of the mass ratio, and for any $n \geq 1$, there exist four $n$-ejection-collision orbits. The argument is based on an application of the implicit function theorem, with $1/C$ as the small parameter. 
The authors also make a detailed numerical study of this enlarged family of $n$-ejection-collision orbits, taking $\mu \to 1$. Using insights obtained from these numerical explorations, they propose an analytical hypothesis which allows them to prove existence results for $n$-ejection-collision orbits for the Hill problem, again for large enough energies. We remark that, since $n$-ejection-collision orbits can be formulated as solutions of two point boundary value problems beginning and ending on the regularized collision circle, an interesting project could be to prove the existence of such orbits for smaller values of $C$ using the techniques developed in the present work. } \subsection{Numerical calculations, computational mathematics, and celestial mechanics} \label{sec:numerics} \correction{comment 7}{ Computational and observational tools for predicting the motions of celestial bodies have roots in antiquity, so that even a terse overview is beyond the scope of the present study. Nevertheless, we remark that numerical methods like integration techniques for solving initial value problems, and bisection/Newton schemes for solving nonlinear systems of equations, have been applied to the study of the CRTBP at least since G.H. Darwin's 1897 treatise on periodic orbits \cite{MR1554890}. The reader interested in the history of pen-and-paper calculations for the CRTBP will find the work of Moulton's group in Chicago, as well as Str\"{o}mgren's group in Copenhagen, from the 1910's to the 1930's of great interest. Detailed discussions of their accomplishments are found in \cite{stromgrenMoulton,moultonBook}, and in Chapter 9 of Szebehely's book \cite{theoryOfOrbits}. } \cor{The historic work of the mathematicians/human computers at the NACA, and subsequently at NASA, had a profound effect on the shape of twentieth century affairs, as chronicled in a number of books and films. 
See for example \cite{hidenHumanComputers,hiddenFigures,dorthyVaughn,nasaComputers,mcmastersPage}. The rise of digital computing and the dawn of the space race in the 1950s and 1960s led to an explosion of computational work in celestial mechanics. Again, the literature is vast and we refer to the books of Szebehely \cite{theoryOfOrbits}, Celletti and Perozzi \cite{alesanderaBook}, and Belbruno \cite{MR2391999} for more thorough discussion of historical developments and the surrounding literature. } \cor{In the context of the present work, it is important to discuss the idea of recasting transport problems into two point boundary value problems (BVPs). The main idea is to project the boundary conditions for an orbit segment onto a representation of the local stable/unstable manifold of some invariant object (both linear and higher order expansions of the stable/unstable manifolds are in frequent use). Then a homoclinic or heteroclinic connection is reconceptualized as an orbit beginning on the unstable manifold and terminating on the stable manifold, giving a clear example of a BVP. } \cor{The papers by Beyn, Friedman, Doedel, and Kunin \cite{MR618636,MR1068199,MR1007358,MR1205453,MR1456497} lay the foundations for such BVP methods. A BVP approach for computing periodic orbits in conservative systems is developed by Mu\~{n}oz-Almaraz, Freire, Gal\'{a}n, Doedel, and Vanderbauwhede in \cite{MR2003792}. In particular, they introduce an unfolding parameter for the periodic orbit problem, an idea we make extensive use of in Section \ref{sec:problem}.
Connections between periodic orbits are studied by Doedel, Kooi, Van Voorn, and Kuznetsov in \cite{MR2454068,MR2511084}, and Calleja, Doedel, Humphries, Lemus-Rodr\'{i}guez and Oldeman apply these techniques to the CRTBP in \cite{MR2989589}.} \cor{BVP methods for computing connections between invariant objects are central to the geometric approach to space mission design described in the four volume set of books by G\'{o}mez, Jorba, Sim\'{o}, and Masdemont \cite{MR1867240,MR1881823,MR1878993,MR1875754}, and also in the book of Koon, Lo, Marsden, and Ross \cite{MR1870302}. A focused (and shorter) research paper describing the role of connecting orbits in the spatial CRTBP is found in the paper \cite{MR2086140} by G\'{o}mez, Koon, Lo, Marsden, Masdemont, and Ross, and explicit discussion of the role of invariant manifolds in space missions which visit the moons of Jupiter is found in the paper \cite{MR1884895}, by Koon, Marsden, Ross, and Lo. A sophisticated multiple shooting scheme for computing families of connecting orbits between periodic orbits in Lyapunov families is developed by Barrab\'{e}s, Mondelo, and Oll\'{e} in \cite{Barrabes:2009ve}, and extended by the same authors to the general Hamiltonian setting in \cite{Barrabes:2013ws}. Recent extensions are found in the work of Kumar, Anderson, and de la Llave \cite{Kumar:2021vc,MR4361879} on connecting orbits between invariant tori in periodic perturbations of the CRTBP, and by Barcelona, Haro, and Mondelo \cite{https://doi.org/10.48550/arxiv.2301.08526} for studying families of connecting orbits between center manifolds. A broad overview of numerical techniques for studying transport phenomena in $N$-body problems is found in the review by Dellnitz, Junge, Koon, Lekien, Lo, Marsden, Padberg, Preis, Ross, and Thiere \cite{MR2136742}. } \cor{We must insist that the references given in this subsection in no way constitute a complete list.
Our aim is only to stress the importance of BVP techniques, and to suggest their rich history of application in the celestial mechanics literature, while possibly directing the reader to more definitive sources. } \subsection{Computer assisted proof in celestial mechanics} \label{sec:capLit} Constructive, computer assisted proofs are a valuable tool in celestial mechanics, as they facilitate the study of $N$-body dynamics far from any perturbative regime, and in the absence of any small parameters or variational structure. Computer assisted arguments usually begin with careful numerical calculation of some dynamically interesting orbits. From this starting point, one tries to construct a posteriori arguments which show that there are true orbits with the desired dynamics nearby. Invariant objects like equilibria, periodic orbits, quasi-periodic solutions (invariant tori), local stable/unstable manifolds, and connecting orbits between invariant sets can be either reformulated as solutions of appropriate functional equations, or expressed in terms of topological/geometric conditions in certain regions of phase space. Given a numerical candidate which approximately satisfies either the functional equation or the geometric conditions, fixed point or degree theoretical arguments are used to prove the existence of true solutions nearby. As an example, the Newton-Krawczyk Theorem \ref{thm:NK} from Section \ref{sec:CAP} is a tool for verifying the existence of a unique non-degenerate zero of a nonlinear map given a good enough approximate root. The theorem is proved using the contraction mapping theorem, and the interested reader will find many similar theorems discussed in the references below.
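To fix ideas, here is a minimal sketch of such an a posteriori argument for the toy scalar map $f(x) = x^2 - 2$ (our illustration only: the map, the candidate root, and the hand rolled rational interval arithmetic are not from the CRTBP setting, where a validated library such as CAPD is used instead):

```python
# A minimal sketch of an a posteriori (Krawczyk-type) verification for the
# scalar map f(x) = x^2 - 2.  Interval arithmetic is hand rolled over exact
# rationals for illustration; a real proof would use a validated library.
from fractions import Fraction

class Interval:
    def __init__(self, lo, hi=None):
        self.lo, self.hi = Fraction(lo), Fraction(hi if hi is not None else lo)
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(p), max(p))
    def strictly_inside(self, o): return o.lo < self.lo and self.hi < o.hi

f  = lambda X: X*X - Interval(2)          # the map whose zero we verify
Df = lambda X: Interval(2)*X              # its derivative

xbar = Fraction(141421356, 10**8)         # numerical candidate for sqrt(2)
r    = Fraction(1, 10**6)                 # radius of the candidate interval
X    = Interval(xbar - r, xbar + r)
A    = Fraction(1) / (2*xbar)             # approximate inverse of Df(xbar)

# Krawczyk operator K(X) = xbar - A f(xbar) + (Id - A Df(X)) (X - xbar):
K = (Interval(xbar) - Interval(A)*f(Interval(xbar))
     + (Interval(1) - Interval(A)*Df(X))*(X - Interval(xbar)))

# K(X) strictly inside X proves a unique zero of f in X.
print(K.strictly_inside(X))               # prints True
```

The finite, checkable inclusion $K(X) \subset \operatorname{int}(X)$ is exactly the kind of condition which a contraction mapping argument converts into an existence and uniqueness proof.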
To get a sense of the power of computer assisted methods of proof in celestial mechanics, we refer the reader to the works of Arioli, Barutello, Terracini, Kapela, Zgliczy\'{n}ski, Sim\'{o}, Burgos, Calleja, Garc\'{i}a-Azpeitia, Lessard, Mireles James, Walawska, and Wilczak \cite{MR2112702,MR2259202, MR2012847,MR2185163,MR3622273, MR2312391,MR3896998,MR4208440,MR3923486} on periodic orbits, the works of Arioli, Wilczak, Zgliczy\'{n}ski, Capi\'{n}ski, Kepley, Mireles James, Galante, and Kaloshin \cite{MR1947690,MR1961956,MR3032848,MR3906230,MR2824484} on transverse connecting orbits and chaos, the works of Capi\'{n}ski, Guardia, Mart\'{i}n, Seara, Zgliczy\'{n}ski, Roldan, Wodka, and Gidea \cite{oscillations,capinski_roldan,diffusionCRTBP,maciejMarianDiffusion} on oscillations to infinity, center manifolds, and Arnold diffusion, and to the works of Celletti, Chierchia, de la Llave, Rana, Figueras, Haro, Luque, Gabern, Jorba, Caracciolo, and Locatelli \cite{MR1101365,MR1101369,alexCAPKAM,MR2150352,MR4128817} on quasi-periodic orbits and KAM phenomena. We remark that while this list is by no means complete, consulting these papers and the papers cited therein will give the reader a reasonable impression of the state-of-the-art in this area. More general references on computer assisted proofs in dynamical systems and differential equations are found in the review articles by Lanford, Koch, Schenkel, Wittwer, van den Berg, Lessard, and G\'{o}mez-Serrano \cite{MR759197,MR1420838,jpjbReview,MR3990999}, and in the books by Tucker, and Nakao, Plum, and Watanabe \cite{MR2807595,MR3971222}. We also mention the recent review article by Kapela, Mrozek, Wilczak, and Zgliczy\'{n}ski \cite{CAPD_paper}, which describes the use of the CAPD library for validated numerical integration of ODEs and their variational equations. The CAPD library is a general purpose toolkit, and can be applied to any problem where explicit formulas for the vector field are known in closed form.
We make extensive use of this library throughout the present work. Additional details about CAPD algorithms are found in the papers by Zgliczy\'{n}ski and Wilczak \cite{MR1930946,cnLohner}, but the reader who is interested in the historical development of these ideas should consult the references of \cite{CAPD_paper}. Methods for computing validated enclosures of stable/unstable manifolds attached to equilibrium solutions for some restricted three and four body problems are discussed in \cite{MR3906230,MR3792792}, and these methods are used freely in the sequel. \begin{remark}[Relevance of the present work] \label{rem:puttingItAllTogether} {\em \correction{comment 2}{ In light of the discussion contained in Sections \ref{sec:collisionLit}, \ref{sec:numerics}, and \ref{sec:capLit}, a few somewhat more refined comments about the novelty of the present work are in order. First, note that our results make essential use of the geometric formulation of collision dynamics discussed in Section \ref{sec:collisionLit}. An important difference of perspective is that, since we work without small parameters, we formulate multiple shooting problems describing orbits with boundary conditions on the regularized collision set, rather than working with perturbative expansions for the local ejection/collision manifolds and studying intersections between them. Once we have a good enough numerical approximation of the solution of a BVP, we validate the existence of a true solution via a standard a posteriori argument. } \cor{ In this sense our work exploits the BVP approach for studying dynamical objects discussed in Section \ref{sec:numerics}. Our shooting templates, discussed in Section \ref{sec:problem}, are general enough to allow for any number of coordinate swaps in any order, and allow us to shoot from stable/unstable manifolds, regularized collision sets, or discrete symmetry subspaces.
In this way we have one BVP framework which covers all the theorems considered in this paper. We note that the setup applies also to higher dimensional problems involving stable/unstable manifolds of other geometric objects, as seen for example in \cite{jayAndMaxime}. Our setup incorporates an unfolding parameter approach to BVPs for conservative systems -- as discussed in \cite{MR1870260,MR2003792,MR1992054} for periodic orbits. An important feature of our setup is that the existence of a non-degenerate solution of the BVP implies transversality relative to the energy submanifold. One virtue of the abstract framework presented in Section \ref{sec:problem} is that we prove transversality results and properties of the unfolding parameter only once, and they apply to all problems considered later in the text -- rather than having to establish such results for each new problem considered. } \cor{Still, this shooting template framework is just a convenience. The main contribution of the present work is a flexible computer assisted approach to proving theorems about collision dynamics in celestial mechanics problems. We remark that, until now, collisions have been viewed largely as impediments to the implementation of successful computer assisted proofs. The present work demonstrates that well known tools from regularization theory can be combined with existing validated numerical tools and a posteriori analysis to prove interesting theorems about collisions in non-perturbative settings. As applications, we prove a number of new results for the CRTBP.} } \end{remark} \color{black} \section{Problem setup} \label{sec:problem} Consider an ODE with one or more first integrals or constants of motion. For such systems, the level sets of the integrals give rise to invariant sets. Indeed, the level sets are invariant manifolds except at critical points of the conserved quantities. 
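As a toy numerical illustration of this invariance, and of the unfolding mechanism exploited later in this section, consider the pendulum (our choice of example, not from the text): a first integral $E$ is conserved along the flow up to integrator error, while perturbing the field by $\alpha \nabla E$ makes $E$ drift at the rate $\alpha \| \nabla E \|^2$.

```python
# Toy illustration (pendulum; our example, not from the text): the level
# sets of a first integral E are invariant under the flow, while adding the
# term alpha * grad E forces E to drift at the rate alpha * ||grad E||^2.
import math

def rk4(f, z, t, n=2000):
    h = t / n
    for _ in range(n):
        k1 = f(z)
        k2 = f([z[j] + 0.5*h*k1[j] for j in range(2)])
        k3 = f([z[j] + 0.5*h*k2[j] for j in range(2)])
        k4 = f([z[j] + h*k3[j] for j in range(2)])
        z = [z[j] + h*(k1[j] + 2*k2[j] + 2*k3[j] + k4[j])/6 for j in range(2)]
    return z

E     = lambda z: 0.5*z[1]**2 - math.cos(z[0])     # first integral
gradE = lambda z: [math.sin(z[0]), z[1]]
pend  = lambda z: [z[1], -math.sin(z[0])]          # x' = y, y' = -sin(x)

def unfolded(alpha):                               # x' = f(x) + alpha grad E
    return lambda z: [pend(z)[j] + alpha*gradE(z)[j] for j in range(2)]

z0 = [1.0, 0.5]
drift0 = E(rk4(unfolded(0.0),  z0, 5.0)) - E(z0)   # integrator error only
drift1 = E(rk4(unfolded(0.01), z0, 5.0)) - E(z0)   # genuine O(alpha) drift
print(abs(drift0) < 1e-8, drift1 > 1e-3)           # prints: True True
```

Here $\alpha = 0$ recovers conservation, while any $\alpha \neq 0$ produces a monotone drift in $E$; this dichotomy is what the unfolding parameter introduced below formalizes.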
In this section we describe a shooting method for two point boundary value problems between submanifolds of the level set. To be more precise, we consider two manifolds, parameterized (locally) by some functions, which are contained in a level set. We present a method which allows us to find points on these manifolds which are linked by a solution of an ODE. This in particular implies that the two manifolds intersect. Our method will allow us to establish transversality of the intersection within the level set. We consider an ODE\begin{equation} x^{\prime}=f\left( x\right) , \label{eq:ode-1} \end{equation} where $f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$. Assume that the flow $\phi\left( x,t\right) $ induced by (\ref{eq:ode-1}) has an integral of motion expressed as\begin{equation*} E:\mathbb{R}^{d}\rightarrow\mathbb{R}^{k}, \end{equation*} which means that \begin{equation} E\left( \phi\left( x,t\right) \right) =E\left( x\right) , \label{eq:E-integral} \end{equation} for every $x\in\mathbb{R}^{d}$ and $t\in \mathbb{R}$. Fix $c\in\mathbb{R}^{k}$ and define the level set \begin{equation} M :=\left\{x \in \mathbb{R}^d : E(x)=c\right\} , \label{eq:M-level-set} \end{equation} and assume that $M$ is (except possibly at some degenerate points) a smooth manifold. Consider two open sets $D_{1}\subset\mathbb{R}^{d_{1}}$ and $D_{2}\subset\mathbb{R}^{d_{2}}$ and two chart maps \begin{equation} P_{i}: D_{i}\rightarrow M\subset\mathbb{R}^{d}\qquad\text{for }i=1,2, \label{eq:Pi-intro} \end{equation} parameterizing submanifolds of $M$. \begin{remark} One can for example think of $P_{1}$ and $P_{2}$ as parameterizations of the exit or entrance sets on some local unstable and stable manifolds, respectively, of some invariant object. However, in some of the applications to follow $P_{1,2}$ will parameterize collision sets in regularized coordinates or some surfaces of symmetry for $f$.
\end{remark} We seek points $\bar{x}_{i}\in D_{i}$ for $i=1,2$ and a time $\bar{\tau}\in\mathbb{R}$ such that \begin{equation} \phi\left( P_{1}(\bar{x}_{1}),\bar{\tau}\right) =P_{2}\left( \bar{x}_{2}\right) . \label{eq:problem-1} \end{equation} Note that if $P_1$ and $P_2$ parameterize some $\phi$-invariant manifolds, then Equation \eqref{eq:problem-1} implies that these manifolds intersect. The setup is depicted in Figure \ref{fig:setup}. \begin{figure}[ptb] \begin{center} \includegraphics[height=4cm]{Fig-1.pdf} \end{center} \caption{The left and right plots are in $\mathbb{R}^{d}$ with a $d-k$ dimensional manifold $M$ depicted in gray. The manifolds $P_{i}(D_{i})\subset M$, for $i=1,2$, are represented by curves inside of $M$. We seek $\bar x_{1} \in D_1, \bar x_{2} \in D_2$ and $\bar{\tau} \in \mathbb{R}$ such that $\protect\phi(P_{1}(\bar x_{1}),\bar{\tau})=P_{2}(\bar x_{2})$. The two points $P_{i}(\bar x_{i})$, for $i=1,2$, are represented by dots.} \label{fig:setup} \end{figure} \begin{remark} To avoid confusion with $x\in\mathbb{R}^{d}$, we denote points in $\mathbb{R}^{d_{1}}$ and $\mathbb{R}^{d_{2}}$ by $x_{1}$ and $x_{2}$, respectively. \end{remark} We introduce a general scheme which allows us to: \begin{enumerate} \item Establish the intersection of the manifolds parameterized by $P_{1}$ and $P_{2}$ by means of a suitable Newton operator. \item Establish that the intersection is transverse relative to the level set $M$. \item Provide a setup flexible enough for multiple shooting between charts in different coordinates. \end{enumerate} Our methodology is applied to establish connections between stable/unstable and collision manifolds in the PCRTBP. \subsection{Level set shooting} We now provide a more detailed formulation of problem (\ref{eq:problem-1}) which allows us to describe connections between multiple level sets in distinct coordinate systems (instead of just one coordinate system, as in \eqref{eq:M-level-set}).
This allows us to study applications to collision dynamics as boundary value problems joining points in different coordinate systems. Let $U_1, U_2 \subset \mathbb{R}^{d}$ be open sets and consider smooth functions $E_{1},E_{2}$\begin{equation*} E_{i}:U_{i}\rightarrow\mathbb{R}^{k}\qquad\text{for }i=1,2, \end{equation*} for which $DE_{i}\left( x\right) $ is of rank $k$ for every $x\in U_{i}$, for $i=1,2.$ We fix $c_{1},c_{2}\in\mathbb{R}^{k}$ and define the level sets \begin{equation*} M_{i}=\left\{ x\in U_{i}:E_{i}\left( x\right) =c_{i}\right\} \qquad\text{for }i=1,2, \end{equation*} and assume that $M_{i}\neq\emptyset$ for $i=1,2$. Observe that the $M_i$ are smooth $d-k$ dimensional manifolds by the assumption that $DE_{i}$ are of rank $k$, for $i=1,2$. Consider now a smooth function $R:U_{1}\times\mathbb{R}\times\mathbb{R}^{k}\rightarrow\mathbb{R}^{d}$. We introduce the following notation for coordinates\begin{equation*} \left( x,\tau,\alpha\right) \in\mathbb{R}^{d}\times\mathbb{R}\times \mathbb{R}^{k},\qquad y\in\mathbb{R}^{d}, \end{equation*} and define a parameter dependent family of maps $R_{\tau ,\alpha}:U_{1}\rightarrow\mathbb{R}^{d}$ by \begin{equation*} R_{\tau,\alpha}\left( x\right) :=R\left( x,\tau,\alpha\right) , \end{equation*} and assume that for each $(x, \tau, \alpha) \in U_{1} \times \mathbb{R} \times \mathbb{R}^{k}$, the $d \times d$ matrix \begin{equation*} \frac{\partial}{\partial x} R(x, \tau, \alpha), \end{equation*} is invertible, so that $R_{\tau, \alpha}$ is a local diffeomorphism. The following definition makes precise our assumptions about when $R_{\tau, \alpha}(x)$ takes values in $M_2$. \begin{definition} \label{def:unfolding}We say that $\alpha $ is an unfolding parameter for $R$ if the following two conditions are satisfied for every $x \in M_1$. \begin{enumerate} \item If $R_{\tau, \alpha}(x) \in M_2$, then $\alpha = 0$. \item If $R_{\tau, 0}(x) \in U_2$, then $R_{\tau, 0}(x) \in M_2$.
\end{enumerate} \end{definition} \medskip To emphasize that we are interested in points mapped from $M_{1}$ to $M_{2}$, we say that $\alpha$ is an unfolding parameter for $R$\emph{\ from }$M_{1}$\emph{\ to }$M_{2}$. Assume from now on that $\alpha$ is an unfolding parameter for $R$. We consider two open sets $D_{1}\subset \mathbb{R}^{d_{1}}$ and $D_{2}\subset\mathbb{R}^{d_{2}}$ where $d_{1},d_{2}\in\mathbb{N}$ and two smooth functions \begin{equation*} P_{i}:D_{i}\rightarrow M_{i},\qquad\text{for }i=1,2, \end{equation*} each of which is a diffeomorphism onto its image. Define\begin{equation*} F: D_{1}\times D_{2}\times\mathbb{R}\times\mathbb{R}^{k}\rightarrow\mathbb{R}^{d} \end{equation*} by the formula \begin{equation} F\left( x_{1},x_{2},\tau,\alpha\right) :=R_{\tau,\alpha}\left( P_{1}\left( x_{1}\right) \right) -P_{2}\left( x_{2}\right) . \label{eq:F-def-1-shooting} \end{equation} We require that\begin{equation} d_{1}+d_{2}+1+k=d, \label{eq:dimensions} \end{equation} and seek $\bar{x}_{1},\bar{x}_{2},\bar{\tau}$ such that\begin{equation} \label{eq:zero} F\left( \bar{x}_{1},\bar{x}_{2},\bar{\tau}, 0\right) = R_{\bar{\tau}, 0}\left( P_{1}\left( \bar{x}_{1}\right) \right) -P_{2}\left( \bar{x}_{2}\right) =0, \end{equation} with $DF\left( \bar{x}_{1},\bar{x}_{2},\bar{\tau},0\right)$ an isomorphism. In fact, we do more than simply solve (\ref{eq:zero}). For some open interval $I \subset\mathbb{R}$ containing $\bar{\tau}$ we establish a transverse intersection between the smooth manifolds $R\left( P_{1}\left( D_{1}\right) ,I,0\right) $ and $P_{2}\left( D_{2}\right) $ at $\bar{y} := P_{2}\left( \bar{x}_{2}\right) \in M_{2}$. \medskip \begin{remark}[Role of the unfolding parameter] \label{rem:unfolding} {\em \correction{comment 8}{ The setup above, and in particular the roles of the parameters $\alpha$ and $\tau$, might at first appear puzzling. In the applications we have in mind, $\tau$ is the time associated with the flow map of an ODE.
The unfolding parameter $\alpha$ deals with the fact that we solve a problem restricted to the level sets $M_i$ for $i=1,2$. Consider for example shooting from a 1D arc to a 1D arc in a 4D conservative vector field, where the arcs are in the same level set of the conserved quantity (think of the arc as an outflowing segment on the boundary of a local 2D stable/unstable manifold, or part of the collision set). Since we are working in a 3D level set, the 2D surfaces formed by advecting the arcs can intersect transversally relative to the level set. However, since the arcs are parameterized by one variable functions and the time of flight $\tau$ is unknown, while the vector field is 4 dimensional, we have 4 equations in 3 unknowns. Adding an unfolding parameter to the problem balances the equations, but it must be done carefully, so that the new variable does not change the solution set for the problem. The idea was first exploited for periodic orbits of conservative systems in \cite{MR2003792,MR1870260}. We adapt the idea to the shooting templates developed for the present work, and this is the purpose of the variable $\alpha$. } \cor{An alternative formulation would be to fix the energy and use its formula to eliminate one of the variables in the equations of motion, or to work with coordinates in which we can write the $M_i$ as graphs of some functions and use these functions and appropriate projections to enforce the constraints. Another possibility is to throw away one of the equations in the BVP formulation when applying Newton, and to check a posteriori that this equation is satisfied \cite{BDLM,MR3919451}. Yet another approach would be to directly apply a Newton scheme for the unbalanced BVP, exploiting the Moore-Penrose pseudoinverse at each step. While such approaches lead to excellent numerical methods, one encounters difficulties when translating them into computer assisted arguments.
We believe that the unfolding parameter is a good solution in this setting, as it leads to balanced equations with isolated solutions suitable for verification using fixed point theorems.}} \end{remark} We now give an example which informs the intuition. \begin{example} \label{ex:motivating}(Canonical unfolding.) Consider the ODE in Equation \eqref{eq:ode-1} and $E:\mathbb{R}^{d}\rightarrow\mathbb{R}$ satisfying Equation \eqref{eq:E-integral}. Suppose $c\in\mathbb{R}$ is fixed and denote its associated level set by $M := \left\{ E=c\right\}$. (In this example we have $k=1$ and $E_{1}=E_{2}=E$.) Assume there are smooth functions $P_{1},P_{2}$ as in \eqref{eq:Pi-intro} and that $d_{1}+d_{2}+2=d$. We construct a shooting operator for Equation \eqref{eq:problem-1} by choosing $R$ as follows. Consider the $\alpha$-parameterized family of ODEs \begin{equation*} x^{\prime}=f(x)+\alpha\nabla E\left( x\right). \end{equation*} Let $\phi_{\alpha}\left( x,t\right)$ denote the induced flow and note that $\phi_{0}=\phi$ is the flow induced by Equation \eqref{eq:ode-1}. Defining the shooting operator by the formula \begin{equation} R\left( x,\tau,\alpha\right) :=\phi_{\alpha}\left( x,\tau\right), \label{eq:R-alpha} \end{equation} we see that solving Equation \eqref{eq:problem-1} is equivalent to solving Equation \eqref{eq:zero}. Observe that $\alpha$ is unfolding for $R$ because $E$ is an integral of motion for $\phi$, from which it follows that \begin{align*} \frac{d}{dt}E\left( R_{\tau,\alpha}\left( x\right) \right) & = \frac{d}{dt} E(\phi_{\alpha}(x,t)) \\ & =\nabla E\left( \phi_{\alpha}\left( x,t\right) \right) \cdot\left( f(\phi _{\alpha}\left( x,t\right) )+\alpha\nabla E\left( \phi_{\alpha}\left( x,t\right) \right) \right) \\ & =\alpha\left\Vert \nabla E\left( \phi_{\alpha}\left( x,t\right) \right) \right\Vert ^{2}, \end{align*} where $\cdot$ denotes the standard scalar product.
Here we have used the fact that Equation \eqref{eq:E-integral} implies $\nabla E\left( x\right) \cdot f(x)=0$. Note also that $\nabla E(\phi_\alpha(x,t)) \neq 0$, since $DE$ is assumed to be of rank $1$ everywhere. \end{example} Returning to the general setup we have the following theorem. \begin{theorem} \label{th:single-shooting}Assume that $\alpha$ is an unfolding parameter for $R$ and $F$ is defined as in Equation \eqref{eq:F-def-1-shooting}. If \begin{equation} F\left( \bar{x}_{1},\bar{x}_{2},\bar{\tau},\bar{\alpha}\right) =0, \label{eq:F-zero-in-thm1} \end{equation} then $\bar{\alpha}=0$. Moreover, if $DF\left( \bar{x}_{1},\bar{x}_{2},\bar{\tau},0\right) $ is an isomorphism, then there exists an open interval $I\subset\mathbb{R}$ containing $\bar{\tau}$ such that the manifolds $R\left( P_{1}\left( D_{1}\right) ,I,0\right) $ and $P_{2}\left( D_{2}\right) $ intersect transversally in $M_{2}$ at $\bar{y}:=P_{2}\left( \bar{x}_{2}\right) $. Specifically, we have the splitting \begin{equation} T_{\bar{y}}R\left( P_{1}\left( D_{1}\right) ,I,0\right) \oplus T_{\bar{y}}P_{2}\left( D_{2}\right) =T_{\bar{y}}M_{2}, \label{eq:th1-transversality} \end{equation} and moreover, $\bar{y}$ is an isolated transverse point. \end{theorem} \begin{proof} Recalling the definition of $F$ in Equation \eqref{eq:F-def-1-shooting} and the hypothesis of Equation \eqref{eq:F-zero-in-thm1}, we have that $\bar{x}=P_{1}\left( \bar{x}_{1}\right) \in M_{1}$ and $\bar{y}=P_{2}\left( \bar{x}_{2}\right) \in M_{2}$. The fact that $\alpha$ is an unfolding parameter for $R$, combined with $R\left( \bar{x},\bar{\tau},\bar{\alpha}\right) =\bar{y}$, implies that $\bar{\alpha}=0$. Since $F(\bar x_1,\bar x_2, \bar \tau, 0)=0$, we see that $R(P_1(D_1),I,0)$ and $P_2(D_2)$ intersect at $\bar y$.
Our hypotheses on $P_{1,2}$ and $R$ imply that $R\left( P_{1}\left( D_{1}\right) ,I,0\right) $ and $P_{2}\left( D_{2}\right) $ are submanifolds of $M_{2}$, so evidently \begin{equation*} T_{\bar{y}}R\left( P_{1}\left( D_{1}\right) ,I,0\right) + T_{\bar{y}}P_{2}\left( D_{2}\right) \subset T_{\bar{y}}M_{2}. \end{equation*} However, from the assumption in Equation \eqref{eq:dimensions} we have $d-k=d_{1}+d_{2}+1$, and therefore it suffices to prove that $T_{\bar{y}}R\left( P_{1}\left( D_{1}\right) ,I,0\right) + T_{\bar{y}}P_{2}\left( D_{2}\right) $ is $d-k$ dimensional; the sum is then direct by a dimension count. Suppose $\setof*{e_{1},\ldots ,e_{d_{1}}}$ is a basis for $\mathbb{R}^{d_{1}}$ and $\setof*{\tilde{e}_{1},\ldots ,\tilde{e}_{d_{2}}}$ is a basis for $\mathbb{R}^{d_{2}}$. Define \begin{align*} v_{i}& :=\frac{\partial R}{\partial x_{1}}\left( \bar{x}_{1},\bar{\tau},0\right) DP_{1}\left( \bar{x}_{1}\right) e_{i}\qquad \text{for }i=1,\ldots ,d_{1} \\ v_{i}& :=DP_{2}\left( \bar{x}_{2}\right) \tilde{e}_{i-d_{1}}\qquad \text{for }i=d_{1}+1,\ldots ,d_{1}+d_{2} \\ v_{d_{1}+d_{2}+1}& :=\frac{\partial R}{\partial \tau }\left( \bar{x}_{1},\bar{\tau},0\right) . \end{align*} After differentiating Equation \eqref{eq:F-def-1-shooting} we obtain the formula \begin{equation*} DF=\left( \begin{array}{cccc} \frac{\partial F}{\partial x_{1}} & \frac{\partial F}{\partial x_{2}} & \frac{\partial F}{\partial \tau } & \frac{\partial F}{\partial \alpha }\end{array}\right) =\left( \begin{array}{cccc} \frac{\partial R}{\partial x_{1}}DP_{1} & -DP_{2} & \frac{\partial R}{\partial \tau } & \frac{\partial R}{\partial \alpha }\end{array}\right), \end{equation*} and since $DF$ is an isomorphism at $\left( \bar{x}_{1},\bar{x}_{2},\bar{\tau},0\right) $, it follows that the vectors $v_{1},\ldots ,v_{d_{1}+d_{2}+1}$ span a $d_{1}+d_{2}+1=d-k$ dimensional space.
Observe that \begin{align*} T_{\bar{y}}R\left( P_{1}(D_1) , I, 0\right) & =\text{span}\left( v_{1},\ldots,v_{d_{1}},v_{d_{1}+d_{2}+1}\right) , \\ T_{\bar{y}} P_{2} \left(D_2 \right) & =\text{span}\left( v_{d_{1}+1},\ldots,v_{d_{1}+d_{2}}\right), \end{align*} proving the claim in Equation \eqref{eq:th1-transversality}. Moreover, since \begin{equation*} \dim R\left( P_{1}\left( D_{1}\right) ,I,0\right) +\dim P_{2}\left( D_{2}\right) =\left( d_{1}+1\right) +d_{2}=d-k=\dim M_{2}, \end{equation*} it follows that $\bar{y}$ is an isolated transverse intersection point, which concludes the proof. \end{proof} We finish this section by defining an especially simple ``dissipative'' unfolding parameter which works in the setting of the PCRTBP. \begin{example} \label{ex:dissipative-unfolding}(Dissipative unfolding.) Let $x,y\in \mathbb{R}^{2k}$, let $\Omega:\mathbb{R}^{2k}\rightarrow\mathbb{R}$ and $J\in\mathbb{R}^{2k\times2k}$ be of the form \begin{equation*} J=\left( \begin{array}{cc} 0 & \operatorname{Id}_{k} \\ -\operatorname{Id}_{k} & 0\end{array} \right) , \end{equation*} where $\operatorname{Id}_{k}$ is a $k\times k$ identity matrix. Let us consider an ODE of the form \begin{equation*} \left(x', y'\right) = f\left( x,y\right) :=\left( y,2Jy+\frac {\partial}{\partial x}\Omega\left( x\right) \right) . \end{equation*} One can check that $E\left( x,y\right) =-\left\Vert y\right\Vert ^{2}+2\Omega\left( x\right) $ is an integral of motion. Consider the parameterized family of ODEs \begin{equation} \left(x',y'\right) = f_{\alpha}\left( x,y\right) :=f\left( x,y\right) +\left( 0,\alpha y\right) , \label{eq:dissipative-vect-alpha} \end{equation} and let $\phi_{\alpha}\left( \left( x,y\right) ,t\right) $ denote the flow induced by Equation \eqref{eq:dissipative-vect-alpha}. Define the shooting operator by \begin{equation} R\left( \left( x,y\right) ,\tau,\alpha\right) :=\phi_{\alpha}\left( \left( x,y\right) ,\tau\right).
\label{eq:R-alpha-dissipative} \end{equation} As in Example \ref{ex:motivating}, one can check the equivalence between Equations \eqref{eq:problem-1} and \eqref{eq:zero}. The fact that $\alpha$ is unfolding for $R$ follows as\begin{equation*} \frac{d}{dt}E\left( \phi_{\alpha}\left( \left( x,y\right) ,t\right) \right) =-2\alpha\left\Vert y\right\Vert ^{2}. \end{equation*} \end{example} \subsection{Level set multiple shooting\label{sec:multiple-shooting}} Consider a sequence of open sets $U_{1},\ldots,U_{n}\subset\mathbb{R}^{d}$ and a sequence of smooth maps \begin{equation*} E_{i}:U_{i}\rightarrow\mathbb{R}^{k}\qquad\text{for }i=1,\ldots,n \end{equation*} for which $DE_{i}\left( x\right) $ is of rank $k$ for every $x\in U_{i}$, for $i=1,\ldots,n$. Let $c_{1},\ldots,c_{n}\in\mathbb{R}^{k}$ be a fixed sequence with corresponding level sets \begin{equation*} M_{i}:=\left\{ x\in U_{i}:E_{i}\left( x\right) =c_{i}\right\} \qquad\text{for }i=1,\ldots,n. \end{equation*} Let\begin{equation*} R^{i}:U_{i}\times\mathbb{R}\times\mathbb{R}^{k}\rightarrow\mathbb{R}^{d}\qquad\text{for }i=1,\ldots,n-1 \end{equation*} be a sequence of smooth functions which defines a sequence of parameter dependent maps \begin{align*} R_{\tau,\alpha}^{i} & :U_{i}\rightarrow\mathbb{R}^{d}, \\ R_{\tau,\alpha}^{i}\left( x\right) & :=R^{i}\left( x,\tau,\alpha\right) ,\qquad\text{for }i=1,\ldots,n-1. \end{align*} We assume that for each fixed $\tau$ and $\alpha$, each of the maps is a local diffeomorphism on $\mathbb{R}^d$. Let $D_{0} \subset \mathbb{R}^{d_{0}}$ and $D_{n} \subset \mathbb{R}^{d_{n}}$ be open sets, and let \begin{equation*} P_{0}:D_{0}\rightarrow M_0\subset \mathbb{R}^{d},\qquad\qquad P_{n}:D_{n}\rightarrow M_n\subset \mathbb{R}^{d}, \end{equation*} be diffeomorphisms onto their image. 
Assume that\begin{equation} d_{0}+d_{n}+1+k=d \label{eq:dimensions-multiple-shooting} \end{equation}and consider the function\begin{equation*} \tilde{F}:\mathbb{R}^{nd}\supset D_{0}\times \underset{n-1}{\underbrace{\mathbb{R}^{d}\times \ldots \times \mathbb{R}^{d}}}\times D_{n}\times \mathbb{R}\times \mathbb{R}^{k}\rightarrow \underset{n}{\underbrace{\mathbb{R}^{d}\times \ldots \times \mathbb{R}^{d}}}, \end{equation*}defined by the formula \begin{equation} \tilde{F}\left( x_{0},\ldots ,x_{n},\tau ,\alpha \right) =\left( \begin{array}{r@{\,\,\,\,}l} P_{0}\left( x_{0}\right) & -\,\,\,x_{1} \\ R_{\tau ,\alpha }^{1}\left( x_{1}\right) & -\,\,\,x_{2} \\ & \, \vdots \\ R_{\tau ,\alpha }^{n-2}\left( x_{n-2}\right) & -\,\,\,x_{n-1} \\ R_{\tau ,\alpha }^{n-1}\left( x_{n-1}\right) & -\,\,\,P_{n}\left( x_{n}\right) \end{array}\right) \label{eq:multi-prob} \end{equation} We now define the following functions \begin{align*} R& :U_{1}\times \mathbb{R}\times \mathbb{R}^{k}\rightarrow \mathbb{R}^{d}, \\ F& : D_{0}\times D_{n}\times \mathbb{R}\times \mathbb{R}^{k}\rightarrow \mathbb{R}^{d} \end{align*}by the formulas \begin{align} R\left( x_{1},\tau ,\alpha \right) & =R_{\tau ,\alpha }\left( x_{1}\right) :=R_{\tau ,\alpha }^{n-1}\circ \ldots \circ R_{\tau ,\alpha }^{1}\left( x_{1}\right) , \notag \\ F\left( x_{0},x_{n},\tau ,\alpha \right) & :=R_{\tau ,\alpha }\left( P_{0}\left( x_{0}\right) \right) -P_{n}\left( x_{n}\right) . \label{eq:F-parallel} \end{align} \begin{definition} We say that $\alpha $ is an unfolding parameter for the sequence $R_{\tau ,\alpha }^{i}$ if it is unfolding for $R_{\tau ,\alpha }=R_{\tau ,\alpha }^{n-1}\circ \ldots \circ R_{\tau ,\alpha }^{1}.$ \end{definition} We now formulate the following lemma. 
\begin{lemma} \label{lem:multiple-shooting-2}If $\tilde{F}\left( \bar{x}_{0},\ldots,\bar{x}_{n},\bar{\tau},\bar{\alpha}\right) =0$ and $D\tilde{F}\left( \bar{x}_{0},\ldots,\bar{x}_{n},\bar{\tau},\bar{\alpha}\right) $ is an isomorphism, then $F\left( \bar{x}_{0},\bar{x}_{n},\bar{\tau},\bar{\alpha}\right) =0$ and $DF\left( \bar{x}_{0},\bar{x}_{n},\bar{\tau},\bar{\alpha}\right) $ is an isomorphism. \end{lemma} \begin{proof} The fact that $F\left( \bar{x}_{0},\bar{x}_{n},\bar{\tau},\bar{\alpha}\right) =0$ follows directly from the way $\tilde{F}$ and $F$ are defined in Equations \eqref{eq:multi-prob} and \eqref{eq:F-parallel} respectively. Before proving that $DF$ is an isomorphism, we set up some notation. We will write\begin{equation*} dR^{i}:=\frac{\partial R^{i}}{\partial x_{i}}\left( \bar{x}_{i},\bar{\tau},\bar{\alpha}\right)\qquad\text{for }i=1,\ldots,n-1. \end{equation*} It will be convenient for us to swap the order of the coordinates, so we define\begin{equation} \hat{F}\left( x_{1},\ldots, x_{n},x_{0},\tau,\alpha\right) :=\tilde{F}\left( x_{0},x_{1},\ldots, x_{n},\tau,\alpha\right), \label{eq:F-reordered} \end{equation} and write\begin{equation*} \hat{F}=\left( \hat{F}_{1},\ldots,\hat{F}_{n}\right) \qquad\text{where\qquad }\hat{F}_{i}:\mathbb{R}^{nd}\rightarrow\mathbb{R}^{d},\text{ for }i=1,\ldots, n. \end{equation*} The last notation we introduce is $z\in\mathbb{R}^{d}$, which combines the coordinates from the domain of $F$: \begin{equation*} z=\left( z_{1},\ldots,z_{d}\right) =\left( x_{n},x_{0},\tau,\alpha\right) \in\mathbb{R}^{d_{n}}\times\mathbb{R}^{d_{0}}\times\mathbb{R}\times \mathbb{R}^{k}=\mathbb{R}^{d}. \end{equation*} Note that $z$ is also the variable corresponding to the last $d$ coordinates from the domain of $\hat F$ (see Equation \eqref{eq:F-reordered}). Finally, we remark that all derivatives considered in the argument below are computed at the point $(\bar{x}_{0},\ldots,\bar{x}_{n},\bar{\tau},\bar{\alpha})$.
With the above notation we see that
\begin{equation*}
D\hat{F}=\left(
\begin{array}{ccccc}
-\operatorname{Id} & 0 & \cdots & 0 & \frac{\partial\hat{F}_{1}}{\partial z} \\
dR^{1} & -\operatorname{Id} & \ddots & \vdots & \frac{\partial\hat{F}_{2}}{\partial z} \\
0 & \ddots & \ddots & 0 & \vdots \\
\vdots & \ddots & dR^{n-2} & -\operatorname{Id} & \frac{\partial\hat{F}_{n-1}}{\partial z} \\
0 & \cdots & 0 & dR^{n-1} & \frac{\partial\hat{F}_{n}}{\partial z}
\end{array}
\right),
\end{equation*}
and $D\hat{F}$ is an isomorphism, since $D\tilde{F}$ is an isomorphism and $\hat{F}$ differs from $\tilde{F}$ only by a permutation of the variables. To establish that $DF$ is an isomorphism, define a sequence of vectors $v^{1},\ldots,v^{d}\in\mathbb{R}^{nd}$ of the form
\begin{equation*}
v^{i}=\left(
\begin{array}{c}
v_{1}^{i} \\
\vdots \\
v_{n}^{i}
\end{array}
\right) \in\mathbb{R}^{d}\times\ldots\times\mathbb{R}^{d}=\mathbb{R}^{nd}\qquad\text{for }i=1,\ldots,d,
\end{equation*}
with $v_{1}^{i},v_{n}^{i}\in\mathbb{R}^{d}$ chosen as
\begin{equation}
v_{1}^{i}=\frac{\partial\hat{F}_{1}}{\partial z_{i}},\qquad\qquad v_{n}^{i}=\left(
\begin{array}{ccccccc}
0 & \cdots & 0 & \overset{i}{1} & 0 & \cdots & 0
\end{array}
\right) ^{\top}, \label{eq:vin-choice}
\end{equation}
and $v_{2}^{i},\ldots,v_{n-1}^{i}\in\mathbb{R}^{d}$ defined inductively as
\begin{equation}
v_{k}^{i}=dR^{k-1}v_{k-1}^{i}+\frac{\partial\hat{F}_{k}}{\partial z_{i}}\quad\text{for }k=2,\ldots,n-1. \label{eq:v-ik}
\end{equation}
Note that from the choice of $v_{n}^{i}$ in (\ref{eq:vin-choice}) the vectors $v^{1},\ldots,v^{d}$ are linearly independent. By direct computation\footnote{The cancellations that arise when multiplying the vector $v^{i}$ by $D\hat{F}$ follow from (\ref{eq:multi-prob}) and (\ref{eq:v-ik}).} it follows that
\begin{equation}
D\hat{F}v^{i}=\left(
\begin{array}{c}
0 \\
dR^{n-1}v_{n-1}^{i}+\frac{\partial\hat{F}_{n}}{\partial z_{i}}
\end{array}
\right) \qquad\text{for }i=1,\ldots,d, \label{eq:proof-shooting-0}
\end{equation}
where the zero is in $\mathbb{R}^{\left( n-1\right) d}$.
Looking at (\ref{eq:multi-prob}), since $\hat{F}_{1},\ldots \hat{F}_{n-1}$ do not depend on $x_{n}$, we see that for $i\in \left\{ 1,\ldots ,d_{n}\right\} $ we have $\frac{\partial \hat{F}_{1}}{\partial z_{i}}=\ldots =\frac{\partial \hat{F}_{n-1}}{\partial z_{i}}=0,$ so \begin{eqnarray} dR^{n-1}v_{n-1}^{i}+\frac{\partial \hat{F}_{n}}{\partial z_{i}} &=&dR^{n-1}\left( dR^{n-2}v_{n-2}^{i}+\frac{\partial \hat{F}_{n-1}}{\partial z_{i}}\right) -\frac{\partial P_{n}}{\partial x_{n,i}} \label{eq:proof-shooting-1} \\ &=&dR^{n-1}\left( dR^{n-2}v_{n-2}^{i}+0\right) -\frac{\partial P_{n}}{\partial x_{n,i}} \notag \\ &=&\cdots \notag \\ &=&dR^{n-1}\ldots dR^{1}v_{1}^{i}-\frac{\partial P_{n}}{\partial x_{n,i}} \notag \\ &=&dR^{n-1}\ldots dR^{1}\frac{\partial \hat{F}_{1}}{\partial z_{i}}-\frac{\partial P_{n}}{\partial x_{n,i}} \notag \\ &=&-\frac{\partial P_{n}}{\partial x_{n,i}}\qquad \text{for }i=1,\ldots ,d_{n}. \notag \end{eqnarray} Similarly, for $j=i-d_{n}\in \left\{ 1,\ldots ,d_{0}\right\} $ from (\ref{eq:multi-prob}) we see that $\frac{\partial \hat{F}_{1}}{\partial z_{i}}=% \frac{\partial P_{0}}{\partial x_{0,j}}$ and $\frac{\partial \hat{F}_{2}}{\partial z_{i}}=\ldots =\frac{\partial \hat{F}_{n}}{\partial z_{i}}=0$, so \begin{align} dR^{n-1}v_{n-1}^{i}+\frac{\partial \hat{F}_{n}}{\partial z_{i}}& =dR^{n-1}dR^{n-2}\ldots dR^{1}\frac{\partial P_{0}}{\partial x_{0,j}}=\frac{\partial \left( R_{\bar{\tau},\bar{\alpha}}\circ P_{0}\right) }{\partial x_{0,j}} \label{eq:proof-shooting-2} \\ & \qquad \qquad \qquad \qquad \qquad \left. \text{for }i=d_{n}+1,\ldots ,d_{n}+d_{0}.\right. \notag \end{align} The index $i=d_{n}+d_{0}+1$ corresponds to $\tau $. Similarly to (\ref{eq:proof-shooting-1}), by inductively applying the chain rule, it follows that \begin{equation} dR^{n-1}v_{n-1}^{i}+\frac{\partial \hat{F}_{n}}{\partial z_{i}}=\frac{\partial R}{\partial \tau }\qquad \text{for }i=d_{n}+d_{0}+1. 
\label{eq:proof-shooting-3} \end{equation} Finally, for $j=i-d_{n}-d_{0}-1\in \left\{ 1,\ldots ,k\right\} $, the variable $z_i$ corresponds to $\alpha_j$, and also by applying the chain rule we obtain that\begin{equation} dR^{n-1}v_{n-1}^{i}+\frac{\partial \hat{F}_{n}}{\partial z_{i}}=\frac{\partial R}{\partial \alpha _{j}}\qquad \text{for }i=d_{n}+d_{0}+2,\ldots ,d. \label{eq:proof-shooting-4} \end{equation} Combining Equations \eqref{eq:proof-shooting-0}--\eqref{eq:proof-shooting-4} we see that\begin{equation} \left( \begin{array}{ccc} D\hat{F}v^{1} & \cdots & D\hat{F}v^{d}\end{array}\right) =\left( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ -\frac{\partial P_{n}}{\partial x_{n}} & \frac{\partial \left( R_{\bar{\tau},\bar{\alpha}}\circ P_{0}\right) }{\partial x_{0}} & \frac{\partial R}{\partial \tau } & \frac{\partial R}{\partial \alpha }\end{array}\right) . \label{eq:DF-multi-final} \end{equation}Since $v^{1},\ldots ,v^{d}$ are linearly independent and since $D\hat{F}$ is an isomorphism, the rank of the above matrix is $d$. Looking at Equation \eqref{eq:multi-prob} we see that the lower part of the matrix in Equation \eqref{eq:DF-multi-final} corresponds to $DF$ which implies that $DF$ is of rank $d$, hence is an isomorphism. \end{proof} We see that we can validate assumptions of Theorem \ref{th:single-shooting} by setting up a multiple shooting problem (\ref{eq:multi-prob}) and applying Lemma \ref{lem:multiple-shooting-2}. To do so, one needs to additionally check whether $\alpha $ is an unfolding parameter for the sequence $R_{\tau ,\alpha }^{i}.$ \section{Regularization of collisions in the PCRTBP} \label{sec:PCRTBP} \label{sec:CAPS} In this section we formally introduce the equations of motion for the PCRTBP as discussed in Section \ref{sec:intro}. 
\correction{comment 11}{We stress that all the material in this section, and in Subsections \ref{sec:reg1} and \ref{sec:regSecondPrimary}, is completely standard, and we follow the normalization conventions as in \cite{theoryOfOrbits}. That being said, since we use this material to implement computer assisted proofs, it is important to explicitly state every formula correctly and to recall some important but well known facts. The reader who is familiar with the CRTBP and its Levi-Civita regularization may want to skim this section while jumping ahead to Section \ref{sec:ejectionToCollision}.} Recall that the problem describes a three body system, where two massive primaries are on circular orbits about their center of mass, and a third massless particle moves in their field. The equations of motion for the massless particle are expressed in a frame co-rotating with the primaries. Writing Newton's laws in the co-rotating frame leads to
\begin{align}
x^{\prime \prime }& =2y^{\prime }+\partial _{x}\Omega (x,y), \label{eq:NewtonPCRTBP} \\
y^{\prime \prime }& =-2x^{\prime }+\partial _{y}\Omega (x,y), \notag
\end{align}
where
\begin{equation*}
\Omega (x,y)=(1-\mu )\left( \frac{r_{1}^{2}}{2}+\frac{1}{r_{1}}\right) +\mu \left( \frac{r_{2}^{2}}{2}+\frac{1}{r_{2}}\right) ,
\end{equation*}
\begin{equation*}
r_{1}^2=(x-\mu )^{2}+y^{2},\quad \quad \text{and}\quad \quad r_{2}^2=(x+1-\mu )^{2}+y^{2}.
\end{equation*}
Here $x,y$ are the positions of the massless particle on the plane. The parameters $\mu $ and $1-\mu $ are the masses of the primaries (normalized so that the total mass of the system is $1$). The rotating frame is oriented so that the primaries lie on the $x$-axis, with the center of mass at the origin. We take $\mu \in (0, \frac{1}{2}]$ so that the large body is always to the right of the origin. The larger primary has mass $m_{1}=1-\mu $ and is located at the position $(\mu ,0)$.
Similarly the smaller primary has mass $m_{2}=\mu $ and is located at position $(\mu -1,0)$. The top frame of Figure \ref{fig:PCRTBP_coordinates} provides a schematic for the positioning of the primaries and the massless particle. \begin{figure}[t] \begin{center} \includegraphics[height=7.5cm]{Fig-2.pdf} \end{center} \caption{Three coordinate frames for the PCRTBP: the center top image depicts the classical PCRTBP in the rotating frame. The bottom left and right frames depict the restricted three body problem in Levi-Civita coordinates: regularization of collisions with $m_{2}$ on the left and with $% m_{1}$ on the right. Observe that in these coordinates the regularized body has been moved to the origin. The Levi-Civita transformations $T_{1}$ and $% T_{2}$ provide double covers of the original system, so that in the regularized frames there are singularities at the two copies of the remaining body. } \label{fig:PCRTBP_coordinates} \end{figure} Let $U\subset \mathbb{R}^{4}$ denote the open set \begin{equation*} U:=\left\{ (x,p,y,q)\in \mathbb{R}^{4}\,|\,\left( x,y\right) \not\in \{\left( \mu ,0\right) ,\left( \mu -1,0\right) \}\right\} . \end{equation*}The vector field $f\colon U\rightarrow \mathbb{R}^{4}$ defined by \begin{equation} f(x,p,y,q):=\left( \begin{array}{c} p \\ 2q+x-\frac{(1-\mu )\left( x-\mu \right) }{((x-\mu )^{2}+y^{2})^{3/2}}-\frac{\mu \left( x+1-\mu \right) }{((x+1-\mu )^{2}+y^{2})^{3/2}} \\ q \\ -2p+y-\frac{(1-\mu )y}{((x-\mu )^{2}+y^{2})^{3/2}}-\frac{\mu y}{((x+1-\mu )^{2}+y^{2})^{3/2}}\end{array}\right) \label{eq:PCRTBP} \end{equation}is equivalent to the second order system given in \eqref{eq:NewtonPCRTBP}. Note that \begin{equation*} \Vert f(x,p,y,q)\Vert \rightarrow \infty \quad \quad \text{as either}\quad \quad (x,y)\rightarrow (\mu ,0)\quad \text{or}\quad (x,y)\rightarrow (\mu -1,0). \end{equation*} Let $\mathbf{x}=(x,p,y,q)$ denote the coordinates in $U$ and denote by $\phi (\mathbf{x},t)$ the flow generated by $f$ on $U$. 
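As a sanity check (not part of any proof), the vector field of Equation \eqref{eq:PCRTBP} can be implemented and compared against the second order form \eqref{eq:NewtonPCRTBP} using finite differences of $\Omega$. The mass ratio below is an arbitrary illustrative value.

```python
import numpy as np

MU = 0.25  # illustrative mass ratio mu in (0, 1/2]

def Omega(x, y, mu=MU):
    # effective potential of the rotating frame
    r1 = np.sqrt((x - mu) ** 2 + y ** 2)
    r2 = np.sqrt((x + 1 - mu) ** 2 + y ** 2)
    return (1 - mu) * (r1 ** 2 / 2 + 1 / r1) + mu * (r2 ** 2 / 2 + 1 / r2)

def f(state, mu=MU):
    # first order PCRTBP vector field, Equation (eq:PCRTBP)
    x, p, y, q = state
    d1 = ((x - mu) ** 2 + y ** 2) ** 1.5
    d2 = ((x + 1 - mu) ** 2 + y ** 2) ** 1.5
    return np.array([
        p,
        2 * q + x - (1 - mu) * (x - mu) / d1 - mu * (x + 1 - mu) / d2,
        q,
        -2 * p + y - (1 - mu) * y / d1 - mu * y / d2,
    ])

# cross-check against x'' = 2y' + Omega_x and y'' = -2x' + Omega_y,
# approximating Omega_x, Omega_y by central differences
s = np.array([0.5, 0.1, 0.7, -0.2])  # arbitrary point away from the primaries
h = 1e-6
Ox = (Omega(s[0] + h, s[2]) - Omega(s[0] - h, s[2])) / (2 * h)
Oy = (Omega(s[0], s[2] + h) - Omega(s[0], s[2] - h)) / (2 * h)
fs = f(s)
assert abs(fs[1] - (2 * s[3] + Ox)) < 1e-6
assert abs(fs[3] - (-2 * s[1] + Oy)) < 1e-6
```

The check confirms that the $p^{\prime}$ and $q^{\prime}$ components of $f$ are exactly the right-hand sides of the second order system, which is how the closed-form partials of $\Omega$ were obtained.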
The system (\ref{eq:PCRTBP}) has an integral of motion $E\colon U\rightarrow \mathbb{R}$ given by
\begin{equation}
E\left( \mathbf{x}\right) =-p^{2}-q^{2}+2\Omega (x,y), \label{eq:JacobiIntegral}
\end{equation}
which is referred to as the Jacobi integral. We are interested in orbits with initial conditions $\mathbf{x} \in U$ with the property that their positions limit to either $m_1 :=(\mu, 0)$ or $m_2 := (\mu -1, 0)$ in finite time. Such orbits, which reach a singularity of the vector field $f$ in finite time, are called collisions. It has long been known that if we restrict our attention to a single level set of the Jacobi integral, for some fixed $c\in \mathbb{R}$, then it is possible to make a change of coordinates which ``removes'' or regularizes the singularities. This idea is reviewed in the next sections.

\subsection{Regularization of collisions with $m_{1}$}
\label{sec:reg1}

To regularize a collision with $m_{1}$, define the complex variable $z=x+iy$ and the new ``regularized'' variable $\hat{z}=\hat{x}+i\hat{y}$, related to $z$ by the transformation
\begin{equation*}
\hat{z}^{2}=z-\mu .
\end{equation*}
One also rescales time in the regularized coordinates, with the rescaled time $\hat{t}$ related to the original time $t$ by the formula
\begin{equation*}
\frac{dt}{d\hat{t}}=4|\hat{z}|^{2}.
\end{equation*}
Let $U_{1}\subset \mathbb{R}^{4}$ denote the open set
\begin{equation*}
U_{1}=\left\{ \mathbf{\hat{x}}=(\hat{x},\hat{p},\hat{y},\hat{q})\in \mathbb{R}^{4} : \left( \hat{x},\hat{y}\right) \notin \left\{ \left( 0,-1\right) ,\left( 0,1\right) \right\} \right\} .
\end{equation*}
This set will be the domain of the regularized vector field which allows us to ``flow through'' collisions with $m_1$ but not with $m_{2}$.
A lengthy calculation (see \cite{theoryOfOrbits}), applying the change of coordinates and time rescaling just described to the vector field $f$ defined in Equation \eqref{eq:PCRTBP}, leads to the regularized Levi-Civita vector field $f_{1}^{c}\colon U_{1}\rightarrow \mathbb{R}^{4}$ with the ODE $\mathbf{\hat{x}}^{\prime }=f_{1}^{c}\left( \mathbf{\hat{x}}\right) $ given by
\begin{eqnarray}
\hat{x}^{\prime } &=&\hat{p}, \notag \\
\hat{p}^{\prime } &=&8\left( \hat{x}^{2}+\hat{y}^{2}\right) \hat{q}+12\hat{x}(\hat{x}^{2}+\hat{y}^{2})^{2}+16\mu \hat{x}^{3}+4(\mu -c)\hat{x} \notag\\
&&+\frac{8\mu (\hat{x}^{3}-3\hat{x}\hat{y}^{2}+\hat{x})}{((\hat{x}^{2}+\hat{y}^{2})^{2}+1+2(\hat{x}^{2}-\hat{y}^{2}))^{3/2}}, \notag\\
\hat{y}^{\prime } &=&\hat{q}, \label{eq:regularizedSystem_m1} \\
\hat{q}^{\prime } &=&-8\left( \hat{x}^{2}+\hat{y}^{2}\right) \hat{p}+12\hat{y}\left( \hat{x}^{2}+\hat{y}^{2}\right) ^{2}-16\mu \hat{y}^{3}+4\left( \mu -c\right) \hat{y} \notag \\
&&+\frac{8\mu (-\hat{y}^{3}+3\hat{x}^{2}\hat{y}+\hat{y})}{((\hat{x}^{2}+\hat{y}^{2})^{2}+1+2(\hat{x}^{2}-\hat{y}^{2}))^{3/2}}, \notag
\end{eqnarray}
where the parameter $c$ in the above ODE is $c=E(x,p,y,q)$. The main observation is that the regularized vector field is well defined at the origin $\left( \hat{x},\hat{y}\right) =\left( 0,0\right) $, and that the origin maps to the collision with $m_{1}$ when we invert the Levi-Civita coordinate transformation. Let $\psi _{1}^{c}(\mathbf{\hat{x}},\hat{t})$ denote the flow generated by $f_{1}^{c}$. The flow conserves the first integral $E_{1}^{c}\colon U_{1}\rightarrow \mathbb{R}$ given by
\begin{eqnarray}
E_{1}^{c}(\mathbf{\hat{x}}) &=&-\hat{q}^{2}-\hat{p}^{2}+4(\hat{x}^{2}+\hat{y}^{2})^{3}+8\mu (\hat{x}^{4}-\hat{y}^{4})+4(\mu -c)(\hat{x}^{2}+\hat{y}^{2}) \notag \\
&&+8(1-\mu )+8\mu \frac{(\hat{x}^{2}+\hat{y}^{2})}{\sqrt{(\hat{x}^{2}+\hat{y}^{2})^{2}+1+2(\hat{x}^{2}-\hat{y}^{2})}}.
\label{eq:reg_P_energy} \end{eqnarray} Note that the parameter $c$ appears both in the formulae for $f_{1}^{c}$ and $E_{1}^{c}$. We write $\psi _{1}^{c}$ to stress that the flow depends explicitly on the choice of $c$. We choose $c \in \mathbb{R}$ and then, after regularization, have new coordinates which allow us to study collisions only in the level set \begin{equation} M:=\left\{ \mathbf{x}\in U : E(\mathbf{x})=c\right\} . \label{eq:M-level-set-c} \end{equation} We define the linear subspace $\mathcal{C}_{1}\subset \mathbb{R}^{4}$ by \begin{equation*} \mathcal{C}_{1}=\left\{ (\hat{x},\hat{p},\hat{y},\hat{q})\in \mathbb{R}^{4}\,|\,\hat{x}=\hat{y}=0\right\} , \end{equation*}The change of coordinates between the two coordinate systems is given by the transform $T_{1}\colon U_{1}\backslash \mathcal{C}_{1}\rightarrow U$, \begin{equation} \mathbf{x}=T_{1}(\mathbf{\hat{x}}):=\left( \begin{array}{c} \hat{x}^{2}-\hat{y}^{2}+\mu \\ \frac{\hat{x}\hat{p}-\hat{y}\hat{q}}{2(\hat{x}^{2}+\hat{y}^{2})} \\ 2\hat{x}\hat{y} \\ \frac{\hat{y}\hat{p}+\hat{x}\hat{q}}{2(\hat{x}^{2}+\hat{y}^{2})}\end{array}\right) , \label{eq:T1-def} \end{equation}and is a local diffeomorphism on $U_{1}\backslash \mathcal{C}_{1}$. The following theorem collects results from \cite{theoryOfOrbits}, and relates the dynamics of the original and the regularized systems. \begin{theorem} \label{thm:LeviCivitta} Let $c$ be the fixed parameter determining the level set $M$ in Equation \eqref{eq:M-level-set-c}. Assume that $% \mathbf{x}_{0}\in U$ satisfies $E(\mathbf{x}_{0})=c,$ and assume that $\mathbf{\hat{x}}_{0}\in U_{1}\setminus \mathcal{C}_{1}$ is such that $\mathbf{x}_{0}=T_{1}\left( \mathbf{\hat{x}}_{0}\right) $. 
Then the curve
\begin{equation*}
\gamma \left( s\right) :=T_{1}\left( \psi _{1}^{c}(\hat{\mathbf{x}}_{0},s)\right)
\end{equation*}
parameterizes the following possible solutions of the PCRTBP in $M$:
\begin{enumerate}
\item If for every $\hat{t}\in \lbrack -\hat{T},\hat{T}]$ we have $\psi _{1}^{c}(\hat{\mathbf{x}}_{0},\hat{t})\in U_{1}\setminus \mathcal{C}_{1}$, then $\gamma \left( s\right) $, for $s\in \lbrack -\hat{T},\hat{T}]$, lies on a trajectory of the PCRTBP which avoids collisions. Moreover, the time $t$ in the original coordinates that corresponds to the time $\hat{t}\in \lbrack -\hat{T},\hat{T}]$ in the regularized coordinates is recovered by the integral
\begin{equation}
t=4\int_{0}^{\hat{t}}\left( \hat{x}(s)^{2}+\hat{y}(s)^{2}\right) ds, \label{eq:time-recovery}
\end{equation}
i.e.
\begin{equation*}
\phi \left( \mathbf{x}_{0},t\right) =T_{1}\left( \psi _{1}^{c}(\hat{\mathbf{x}}_{0},\hat{t})\right) .
\end{equation*}

\item If for $\hat{T}>0$, for every $\hat{t}\in \lbrack 0,\hat{T})$ we have $\psi _{1}^{c}(\hat{\mathbf{x}}_{0},\hat{t})\in U_{1}\setminus \mathcal{C}_{1}$ and $\psi _{1}^{c}(\hat{\mathbf{x}}_{0},\hat{T})\in \mathcal{C}_{1}$, then in the original coordinates the trajectory starting from $\mathbf{x}_{0}$ reaches the collision with $m_{1}$ at time $T>0$ given by
\begin{equation}
T=4\int_{0}^{\hat{T}}\left( \hat{x}(s)^{2}+\hat{y}(s)^{2}\right) \,ds. \label{eq:time-to-collision}
\end{equation}

\item If for $\hat{T}<0$, for every $\hat{t}\in (\hat{T},0]$ we have $\psi _{1}^{c}(\hat{\mathbf{x}}_{0},\hat{t})\in U_{1}\setminus \mathcal{C}_{1}$ and $\psi _{1}^{c}(\hat{\mathbf{x}}_{0},\hat{T})\in \mathcal{C}_{1}$, then in the original coordinates the backward trajectory starting from $\mathbf{x}_{0}$ reaches the collision with $m_{1}$ at time $T<0$ expressed in Equation \eqref{eq:time-to-collision}.
\end{enumerate}
\end{theorem}

Orbits satisfying condition 2 from Theorem \ref{thm:LeviCivitta} are called collision orbits, while orbits satisfying condition 3 from Theorem \ref{thm:LeviCivitta} are called ejection orbits. From Theorem \ref{thm:LeviCivitta} we see that for regularized orbits $\psi _{1}^{c}\left( \mathbf{\hat{x}}_{0},\hat{t}\right) $ to have a physical meaning in the original coordinates we need to choose $c=E\left( T_{1}\left( \mathbf{\hat{x}}_{0}\right) \right)$ for the regularization energy. The following lemma, whose proof is a standard calculation (see \cite{theoryOfOrbits}), addresses this choice.

\begin{lemma}
\label{lem:energies-cond}For every $\mathbf{\hat{x}}\in U_{1}$, we have
\begin{equation}
E\left( T_{1}\left( \mathbf{\hat{x}}\right) \right) =c\qquad \text{if and only if} \qquad E_{1}^{c}\left( \mathbf{\hat{x}}\right) =0. \label{eq:energies-cond-m1}
\end{equation}
\end{lemma}

The following corollary of Lemma \ref{lem:energies-cond} is obtained by evaluating the energy $E_{1}^{c}$ at points whose position coordinates vanish.

\begin{corollary}
\label{cor:collisions-m1}Consider $\mathbf{\hat{x}}=\left( \hat{x},\hat{p},\hat{y},\hat{q}\right) $ with $\hat{x}=\hat{y}=0$, which corresponds to a collision with $m_{1}$. From $E_{1}^{c}\left( \mathbf{\hat{x}}\right) =0$ we see that for a trajectory $\psi _{1}^{c}\left( \mathbf{\hat{x}},\hat{t}\right) $ starting from the collision point $\mathbf{\hat{x}}=\left( 0,\hat{p},0,\hat{q}\right) $ to have a physical meaning in the original coordinates, it is necessary and sufficient that
\begin{equation}
\hat{q}^{2}+\hat{p}^{2}=8(1-\mu ).
\label{eq:collision-m1}
\end{equation}
\end{corollary}

\begin{definition}
\label{def:ejection-collision-manifolds}We refer to
\begin{equation*}
\left\{ \psi _{1}^{c}\left( \mathbf{\hat{x}},\hat{t}\right) :\hat{q}^{2}+\hat{p}^{2}=8(1-\mu ),\,\hat{t}\geq 0\text{ and }\psi _{1}^{c}(\mathbf{\hat{x}},[0,\hat{t}])\cap \mathcal{C}_{1}=\emptyset \right\}
\end{equation*}
as the ejection manifold from $m_{1},$ and
\begin{equation*}
\left\{ \psi _{1}^{c}\left( \mathbf{\hat{x}},\hat{t}\right) :\hat{q}^{2}+\hat{p}^{2}=8(1-\mu ),\,\hat{t}\leq 0\text{ and }\psi _{1}^{c}(\mathbf{\hat{x}},[\hat{t},0])\cap \mathcal{C}_{1}=\emptyset \right\}
\end{equation*}
as the collision manifold to $m_{1}$.
\end{definition}

Note that both the collision and the ejection manifolds depend on the choice of $c$. That is, we have a family of collision/ejection manifolds, parameterized by the Jacobi constant $c$. For a fixed $c$ the collision manifold, when viewed in the original coordinates, consists of points with energy $c$ whose forward trajectory reaches the collision with $m_{1}$. Similarly, for fixed $c$, the ejection manifold, in the original coordinates, consists of points with energy $c$ whose backward trajectory collides with $m_{1}$. Thus, the circle defined in Corollary \ref{cor:collisions-m1} is a sort of ``fundamental domain'' for ejections/collisions to $m_1$ with energy $c$.

\subsection{Regularization of collisions with $m_{2}$}
\label{sec:regSecondPrimary}

To regularize at the second primary, we define the coordinates $\tilde{z}=\tilde{x}+i\tilde{y}$ through $\tilde{z}^{2}=z+1-\mu $ and consider the time rescaling $dt/d\tilde{t}=4|\tilde{z}|^{2}$.
As in the previous section, define
\begin{eqnarray*}
U_{2}&:=&\left\{ \mathbf{\tilde{x}}=(\tilde{x},\tilde{p},\tilde{y},\tilde{q})\in \mathbb{R}^{4}\,|\,\left( \tilde{x},\tilde{y}\right) \notin \left\{ \left( -1,0\right) ,\left( 1,0\right) \right\} \right\} , \\
\mathcal{C}_{2}&:=&\left\{ \mathbf{\tilde{x}}=(\tilde{x},\tilde{p},\tilde{y},\tilde{q})\in \mathbb{R}^{4}\,|\,\tilde{x}=\tilde{y}=0\right\},
\end{eqnarray*}
so that $U_{2}$ consists of points in the regularized coordinates which do not collide with $m_{1}$, and $\mathcal{C}_{2}$ consists of points which collide with $m_{2}$. The regularized Levi-Civita vector field $f_{2}^{c}:U_{2}\rightarrow \mathbb{R}^{4}$ with the ODE $\mathbf{\tilde{x}}^{\prime }=f_{2}^{c}\left( \mathbf{\tilde{x}}\right) $ is of the form (see \cite{theoryOfOrbits})
\begin{eqnarray}
\tilde{x}^{\prime } &=&\tilde{p}, \label{eq:reg_S_field} \\
\tilde{p}^{\prime } &=&8\left( \tilde{x}^{2}+\tilde{y}^{2}\right) \tilde{q}+12\tilde{x}(\tilde{x}^{2}+\tilde{y}^{2})^{2}-16(1-\mu )\tilde{x}^{3}+4\left( (1-\mu )-c\right) \tilde{x} \notag \\
&&+\frac{8(1-\mu )\left( -\tilde{x}^{3}+3\tilde{x}\tilde{y}^{2}+\tilde{x}\right) }{((\tilde{x}^{2}+\tilde{y}^{2})^{2}+1+2(\tilde{y}^{2}-\tilde{x}^{2}))^{3/2}}, \notag \\
\tilde{y}^{\prime } &=&\tilde{q}, \label{eq:regularizedSystem_m2} \\
\tilde{q}^{\prime } &=&-8\left( \tilde{x}^{2}+\tilde{y}^{2}\right) \tilde{p}+12\tilde{y}(\tilde{x}^{2}+\tilde{y}^{2})^{2}+16(1-\mu )\tilde{y}^{3}+4\left( (1-\mu )-c\right) \tilde{y} \notag \\
&&+\frac{8(1-\mu )\left( \tilde{y}^{3}-3\tilde{x}^{2}\tilde{y}+\tilde{y}\right) }{((\tilde{x}^{2}+\tilde{y}^{2})^{2}+1+2(\tilde{y}^{2}-\tilde{x}^{2}))^{3/2}}, \notag
\end{eqnarray}
with the integral of motion
\begin{align}
E_{2}^{c}\left( \mathbf{\tilde{x}}\right) & =-\tilde{p}^{2}-\tilde{q}^{2}+4(\tilde{x}^{2}+\tilde{y}^{2})^{3}+8(1-\mu )(\tilde{y}^{4}-\tilde{x}^{4})+4\left( (1-\mu )-c\right) (\tilde{x}^{2}+\tilde{y}^{2}) \notag \\
& \quad +8(1-\mu
)\frac{\tilde{x}^{2}+\tilde{y}^{2}}{\sqrt{(\tilde{x}^{2}+\tilde{y}^{2})^{2}+1+2(\tilde{y}^{2}-\tilde{x}^{2})}}+8\mu . \label{eq:E2}
\end{align}
We write $\psi_2^c( \mathbf{\tilde x},\tilde t)$ for the flow induced by (\ref{eq:reg_S_field}). The change of coordinates from the regularized coordinates $\mathbf{\tilde{x}}$ to the original coordinates $\mathbf{x}$ is given by $T_{2}:U_{2}\setminus \mathcal{C}_{2}\rightarrow \mathbb{R}^{4}$ of the form
\begin{equation}
\mathbf{x}=T_{2}\left( \mathbf{\tilde{x}}\right) =\left( \begin{array}{c} \tilde{x}^{2}-\tilde{y}^{2}+\mu-1 \\ \frac{\tilde{x}\tilde{p}-\tilde{y}\tilde{q}}{2(\tilde{x}^{2}+\tilde{y}^{2})} \\ 2\tilde{x}\tilde{y} \\ \frac{\tilde{y}\tilde{p}+\tilde{x}\tilde{q}}{2(\tilde{x}^{2}+\tilde{y}^{2})}\end{array}\right) . \label{eq:T2-def}
\end{equation}
A theorem analogous to Theorem \ref{thm:LeviCivitta} characterizes solution curves in the two coordinate systems and the collisions with the second primary $m_{2}$. Also, analogously to Lemma \ref{lem:energies-cond} and Corollary \ref{cor:collisions-m1}, for every $\mathbf{\tilde{x}}\in U_{2}$ we have
\begin{equation}
E\left( T_{2}\left( \mathbf{\tilde{x}}\right) \right) =c\qquad \text{if and only if} \qquad E_{2}^{c}\left( \mathbf{\tilde{x}}\right) =0, \label{eq:energies-cond-m2}
\end{equation}
and a trajectory $\psi _{2}^{c}\left( \mathbf{\tilde{x}},\tilde{t}\right) $ starting from a collision point $\mathbf{\tilde{x}}=\left( 0,\tilde{p},0,\tilde{q}\right) $ with $m_{2}$ has physical meaning in the original coordinates if and only if
\begin{equation}
\tilde{q}^{2}+\tilde{p}^{2}=8\mu . \label{eq:collision-m2}
\end{equation}
We introduce the notions of the ejection and collision manifolds for $m_{2}$ analogously to Definition \ref{def:ejection-collision-manifolds}.
\begin{figure}[t!]
\begin{center}
\includegraphics[height=6.0cm]{ejectionCollisionPic}
\end{center}
\caption{Ejection collision orbits in the PCRTBP when $\mu = 1/4$ and $C = 3.2$.
The grey curves at the top and bottom of the figure illustrate the zero velocity curves, i.e. the boundaries of the prohibited Hill's regions, for this value of $C$. The black dots at $x = \mu$ and $x = -1+\mu$ depict the locations of the primary bodies. The curves in the middle of the figure represent two ejection-collision orbits: $m_2$ to $m_1$ (bottom) and $m_1$ to $m_2$ (top). (Recall that $m_2$ is on the left and $m_1$ on the right; compare with Figure \ref{fig:PCRTBP_coordinates}.) These orbits are computed by numerically locating an approximate zero of the function defined in Equation \eqref{eq:collisionOperator}. In setting up the BVP we choose to spend $s = 0.35$ time units in each of the regularized coordinate systems (red and green orbit segments), but this transforms to unequal amounts of time in the original/synodic coordinates. The blue portion of the orbit is in the original coordinates. The curves are plotted by changing all points back to the original coordinates. The entire ejection-collision takes about $2.427$ time units in the original/synodic coordinates.}
\label{fig:ejectionCollisions}
\end{figure}

\section{Ejection-collision orbits}
\label{sec:ejectionToCollision}

We now define a level set multiple shooting operator whose zeros correspond to transverse ejection-collision orbits from the body $m_{k}$ to the body $m_{l}$, for $k,l\in\left\{ 1,2\right\}$, in the PCRTBP. \correction{comment 12}{ Two such orbits in the PCRTBP are illustrated in Figure \ref{fig:ejectionCollisions}. } Note that the PCRTBP has the form discussed in Example \ref{ex:dissipative-unfolding}, so that a dissipative unfolding is given by the one parameter family of ODEs
\begin{equation}
f_{\alpha}(x,p,y,q)=f(x,p,y,q)+\alpha\left( 0,p,0,q\right), \label{eq:unfoldedPCRTBP}
\end{equation}
where $f$ is as defined in Equation \eqref{eq:PCRTBP}. Let $\phi_{\alpha}(\mathbf{x},t)$ denote the flow generated by the vector field of Equation \eqref{eq:unfoldedPCRTBP}.
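To see numerically why the extra term in Equation \eqref{eq:unfoldedPCRTBP} unfolds the energy level sets, note that a chain rule computation (not taken from the text, but easily verified) gives $\frac{d}{dt}E=-2\alpha (p^{2}+q^{2})$ along $f_{\alpha}$: the Jacobi integral is conserved exactly when $\alpha =0$, and drifts at a sign-definite rate otherwise. A short sketch, with an illustrative mass ratio and test point:

```python
import numpy as np

MU = 0.25  # illustrative mass ratio

def f_alpha(state, alpha, mu=MU):
    # unfolded vector field, Equation (eq:unfoldedPCRTBP)
    x, p, y, q = state
    d1 = ((x - mu) ** 2 + y ** 2) ** 1.5
    d2 = ((x + 1 - mu) ** 2 + y ** 2) ** 1.5
    base = np.array([
        p,
        2 * q + x - (1 - mu) * (x - mu) / d1 - mu * (x + 1 - mu) / d2,
        q,
        -2 * p + y - (1 - mu) * y / d1 - mu * y / d2,
    ])
    return base + alpha * np.array([0.0, p, 0.0, q])

def E(state, mu=MU):
    # Jacobi integral, Equation (eq:JacobiIntegral)
    x, p, y, q = state
    r1 = np.sqrt((x - mu) ** 2 + y ** 2)
    r2 = np.sqrt((x + 1 - mu) ** 2 + y ** 2)
    Om = (1 - mu) * (r1 ** 2 / 2 + 1 / r1) + mu * (r2 ** 2 / 2 + 1 / r2)
    return -p ** 2 - q ** 2 + 2 * Om

# directional derivative of E along f_alpha, by central differences;
# it should equal -2*alpha*(p^2 + q^2)
s = np.array([0.5, 0.1, 0.7, -0.2])  # arbitrary test point
h = 1e-6
for alpha in (0.0, 0.05):
    v = f_alpha(s, alpha)
    dEdt = (E(s + h * v) - E(s - h * v)) / (2 * h)
    assert abs(dEdt - (-2 * alpha * (s[1] ** 2 + s[3] ** 2))) < 1e-6
```

In particular, for $\alpha \neq 0$ the flow moves transversally through the level sets $\left\{ E=c\right\}$ away from $\{p=q=0\}$, which is the mechanism exploited by the unfolding parameter.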
For $c\in\mathbb{R}$ consider the fixed energy level set $M$. Then $\alpha$ is an unfolding parameter for the mapping \begin{equation*} R_{\tau,\alpha}\left( \mathbf{x}\right) =\phi_{\alpha}(\mathbf{x},\tau) \end{equation*} from $M$ to $M$. (Here $R_{\tau,\alpha}:\mathbb{R}^{4}\rightarrow \mathbb{R}^{4}$ for fixed $\alpha,\tau\in\mathbb{R}$.) Define the functions $P_{i} \colon \mathbb{R} \to \mathbb{R}^4$ for $i = 1,2$ by \begin{equation} P_{i}\left( \theta \right) :=\left\{ \begin{array}{lll} (0,\sqrt{8\left( 1-\mu \right) }\cos \left( \theta \right) ,0,\sqrt{8\left( 1-\mu \right) }\sin \theta ) & & \text{for }i=1,\medskip \\ (0,\sqrt{8\mu }\cos \left( \theta \right) ,0,\sqrt{8\mu }\sin \theta ) & & \text{for }i=2.\end{array}\right. \label{eq:collisions-par-Pi} \end{equation}By Equations \eqref{eq:collision-m1} and \eqref{eq:collision-m2} the function $% P_{i}\left( \theta \right) $ parameterizes the collision set for the primary $m_{i}$, with $i=1,2$. Fix $k,l\in \left\{ 1,2\right\} $ and consider level sets $% M_{1},\ldots ,M_{6}\subset \mathbb{R}^{4}$ defined by % \begin{align*} M_{1}& =M_{2}=\left\{ E_{k}^{c}=0\right\} , \\ M_{3}& =M_{4}=\left\{ E=c\right\} , \\ M_{5}& =M_{6}=\left\{ E_{l}^{c}=0\right\}. \end{align*}Choose $s>0$, and for $i = 1,2$ recall the definition of the coordinate transformations $T_{i} \colon U_i \backslash \mathcal{C}_i \to \mathbb{R}^4$ defined in Equations \eqref{eq:T1-def} and \eqref{eq:T2-def}. 
Taking the maps $R_{\tau ,\alpha }^{1},\ldots ,R_{\tau ,\alpha }^{5}:\mathbb{R}^{4}\rightarrow \mathbb{R}^{4}$ as
\begin{align*}
R_{\tau ,\alpha }^{1}\left( x_{1}\right) & =\psi _{k}^{c}\left( x_{1},s\right) , \\
R_{\tau ,\alpha }^{2}\left( x_{2}\right) & =T_{k}\left( x_{2}\right) , \\
R_{\tau ,\alpha }^{3}\left( x_{3}\right) & =\phi _{\alpha }\left( x_{3},\tau \right) , \\
R_{\tau ,\alpha }^{4}\left( x_{4}\right) & =T_{l}^{-1}\left( x_{4}\right) , \\
R_{\tau ,\alpha }^{5}\left( x_{5}\right) & =\psi _{l}^{c}\left( x_{5},s\right),
\end{align*}
we let
\begin{equation*}
F:\mathbb{R}\times \underset{5\ \text{copies}}{\underbrace{\mathbb{R}^{4}\times \ldots \times \mathbb{R}^{4}}}\times \mathbb{R}\times \mathbb{R}\times \mathbb{R}\rightarrow \underset{6\ \text{copies}}{\underbrace{\mathbb{R}^{4}\times \ldots \times \mathbb{R}^{4}}}
\end{equation*}
be defined as
\begin{equation}\label{eq:collisionOperator}
F\left( x_{0},x_{1},\ldots ,x_{5},x_{6},\tau ,\alpha \right):= \left(
\begin{array}{r@{\,\,\,}l}
P_{k}\left( x_{0}\right) & -\,\,\,x_{1} \\
R_{\tau ,\alpha }^{1}\left(x_{1}\right) &- \,\,\, x_{2} \\
R_{\tau ,\alpha }^{2}\left(x_{2}\right) &- \,\,\, x_{3} \\
R_{\tau ,\alpha }^{3}\left(x_{3}\right) &- \,\,\, x_{4} \\
R_{\tau ,\alpha }^{4}\left( x_{4}\right) &- \,\,\, x_{5} \\
R_{\tau ,\alpha }^{5}\left( x_{5}\right) &- \,\,\, P_{l}\left( x_{6}\right)
\end{array}
\right),
\end{equation}
where $x_{0},x_{6},\tau ,\alpha \in \mathbb{R}$ and $x_{1},\ldots ,x_{5}\in \mathbb{R}^{4}$. We also write $\left( x_{k},p_{k},y_{k},q_{k}\right) $ and $\left( x_{l},p_{l},y_{l},q_{l}\right) $ to denote the regularized coordinates given by the coordinate transformations $T_{k}$ and $T_{l}$, respectively.

\begin{lemma}\label{lem:collision-connections} Let $\mathbf{x}^{\ast }=\left( x_{0}^{\ast },\ldots ,x_{6}^{\ast }\right) $ and $\tau ^{\ast }>0$.
If
\begin{equation*}
DF\left( \mathbf{x}^{\ast },\tau ^{\ast },0\right)
\end{equation*}
is an isomorphism and
\begin{equation*}
F\left( \mathbf{x}^{\ast },\tau ^{\ast },0\right) =0,
\end{equation*}
then the orbit of the point $x_3^{\ast}$ is ejected from the primary body $m_{k}$ and collides with the primary body $m_{l}.$ (The same is true of the orbit of the point $x_4^{\ast}$.) Moreover, the intersection of the ejection and collision manifolds is transverse on the energy level $\left\{ E=c\right\} $, and the time from the ejection to the collision is
\begin{equation}
\tau ^{\ast }+4\int_{0}^{s}\left\Vert \pi _{x_{k},y_{k}}\psi _{k}^{c}\left( x_{1}^{\ast },u\right) \right\Vert ^{2}du+4\int_{0}^{s}\left\Vert \pi _{x_{l},y_{l}}\psi _{l}^{c}\left( x_{5}^{\ast },u\right) \right\Vert ^{2}du. \label{eq:time-between-collisions}
\end{equation}
(Above we use the Euclidean norm.)
\end{lemma}

\begin{proof}
We have $d_{0}=d_{6}=1$, $d=4$, and a single unfolding parameter, so the condition in Equation \eqref{eq:dimensions-multiple-shooting} is satisfied. We now show that $\alpha $ is an unfolding parameter for $R_{\tau ,\alpha }=R_{\tau ,\alpha }^{5}\circ \ldots \circ R_{\tau ,\alpha }^{1}$.
Since $E_{i}^{c}$ is an integral of motion for the flow $\psi _{i}^{c}$, for $i=1,2$, we see that
\begin{equation*}
\begin{array}{rcl}
x_{1}\in M_{1}=\left\{ E_{k}^{c}=0\right\} & \qquad \iff \qquad & R_{\tau ,\alpha }^{1}\left( x_{1}\right) =\psi _{k}^{c}\left( x_{1},s\right) \in M_{2}=\left\{ E_{k}^{c}=0\right\} ,\medskip \\
x_{5}\in M_{5}=\left\{ E_{l}^{c}=0\right\} & \qquad \iff \qquad & R_{\tau ,\alpha }^{5}\left( x_{5}\right) =\psi _{l}^{c}\left( x_{5},s\right) \in M_{6}=\left\{ E_{l}^{c}=0\right\} .
\end{array}
\end{equation*}
Also, by Equations \eqref{eq:energies-cond-m1} and \eqref{eq:energies-cond-m2} we see that
\begin{equation*}
\begin{array}{rcl}
x_{2}\in M_{2}=\left\{ E_{k}^{c}=0\right\} & \qquad \iff \qquad & R_{\tau ,\alpha }^{2}\left( x_{2}\right) =T_{k}\left( x_{2}\right) \in M_{3}=\left\{ E=c\right\} ,\medskip \\
x_{4}\in M_{4}=\left\{ E=c\right\} & \qquad \iff \qquad & R_{\tau ,\alpha }^{4}\left( x_{4}\right) =T_{l}^{-1}\left( x_{4}\right) \in M_{5}=\left\{ E_{l}^{c}=0\right\} .
\end{array}
\end{equation*}
Moreover, $\alpha $ is an unfolding parameter for the PCRTBP, and hence for
\begin{equation*}
R_{\tau ,\alpha }^{3}\left( x_{3}\right) =\phi _{\alpha }\left( x_{3},\tau \right).
\end{equation*}
Note that for $i=1,2,4,5$ the map $R_{\tau ,\alpha }^{i}$ takes the level set $M_{i}$ into the level set $M_{i+1}$, and this does not depend on the choice of $\alpha$.
Then, since $\alpha $ is an unfolding parameter for $R_{\tau ,\alpha }^{3}$, it follows directly from Definition \ref{def:unfolding} that $\alpha $ is an unfolding parameter for $R_{\tau ,\alpha }=R_{\tau ,\alpha }^{5}\circ \ldots \circ R_{\tau ,\alpha }^{1}.$ By applying Lemma \ref{lem:multiple-shooting-2} to
\begin{equation*}
\tilde{F}\left( x_{0},x_{6},\tau ,\alpha \right) :=R_{\tau ,\alpha }\left( P_{k}\left( x_{0}\right) \right) -P_{l}\left( x_{6}\right)
\end{equation*}
we obtain that $D\tilde{F}\left( x_{0}^{\ast },x_{6}^{\ast },\tau ^{\ast },0\right) $ is an isomorphism and that $\tilde{F}\left( x_{0}^{\ast },x_{6}^{\ast },\tau ^{\ast },0\right) =0$. Since
\begin{equation*}
\tilde{F}\left( x_{0}^{\ast },x_{6}^{\ast },\tau ^{\ast },0\right) =\psi _{l}^{c}\left( T_{l}^{-1}\left( \phi \left( T_{k}\left( \psi _{k}^{c}\left( P_{k}(x_{0}^{\ast }),s\right) \right) ,\tau ^{\ast }\right) \right) ,s\right) -P_{l}\left( x_{6}^{\ast }\right) ,
\end{equation*}
we see that, by Theorem \ref{thm:LeviCivitta} (and its mirror counterpart for the collision with $m_{2}$), we have an orbit originating at the point $P_{k}(x_{0}^{\ast })$ on the collision set for $m_k$, and terminating at the point $P_{l}\left( x_{6}^{\ast }\right) $ on the collision set for $m_l$. The transversality of the intersection between the ejection manifold of $m_{k}$ and the collision manifold of $m_{l}$ follows from Theorem \ref{th:single-shooting}. The time between collisions in Equation \eqref{eq:time-between-collisions} follows from Equation \eqref{eq:time-to-collision}.
\end{proof}

\begin{remark}[Additional shooting steps]
\label{rem:additionalShooting} {\em We remark that in practice, computing accurate enclosures of flow maps requires shortening the time step. Consider for example the third and fourth components of $F$ as defined in Equation \eqref{eq:collisionOperator}, and suppose that a time step of length $\nicefrac{\tau}{N}$ is desired.
By the properties of the flow map, solving the sub-system of equations \begin{equation}\label{eq:colOp_comp3} \begin{aligned} R_{\alpha, \tau}^3(x_3) - x_4 = \phi_\alpha(x_3, \tau) - x_4 &= 0 \\ R_{\alpha, \tau}^4(x_4) - x_5 = T^{-1}_l(x_4) - x_5 &= 0 \end{aligned} \end{equation} is equivalent to solving \begin{align*} \phi_\alpha(x_3, \nicefrac{\tau}{N}) - y_1 &= 0\\ \phi_\alpha(y_1, \nicefrac{\tau}{N}) - y_2 &= 0\\ &\vdots \\ \phi_\alpha(y_{N-2}, \nicefrac{\tau}{N}) - y_{N-1} &= 0 \\ \phi_{\alpha}(y_{N-1}, \nicefrac{\tau}{N}) - x_4 &= 0 \\ T_l^{-1}(x_4) - x_5 &= 0, \end{align*} and we can append these new variables and components to the map $F$ defined in Equation \eqref{eq:collisionOperator} without changing the zeros of the operator. Moreover, by Lemma \ref{lem:multiple-shooting-2} the transversality result for the operator is not changed by the addition of such intermediate steps. Indeed, by the same reasoning we can (and do) add intermediate shooting steps in the regularized coordinates to reduce the time steps to any desired tolerance. } \end{remark} \section{Connections between collisions and libration points $L_{4}$, $L_{5}$}\label{sec:L4_to_collision} For each value of $\mu \in (0, 1/2]$, the PCRTBP has exactly five equilibrium solutions. Following tradition, these are referred to as libration points of the PCRTBP. Three of these are collinear with the primary bodies, and lie on the $x$-axis. These are referred to as $L_1, L_2$ and $L_3$, and they correspond to the collinear relative equilibrium solutions discovered by Euler. The remaining two libration points are located at the third vertex of the equilateral triangles whose other two vertices are the primary and secondary bodies. These are referred to as $L_4$ and $L_5$, and correspond to the equilateral triangle solutions of Lagrange. Figure \ref{fig:PCRTBP_librations} illustrates the locations of the libration points in the phase space.
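The locations of the libration points are simple to reproduce numerically. The following Python sketch (illustrative only, and not part of any proof in this paper) locates all five for a given mass ratio, assuming the convention used later in Theorem \ref{thm:ejectionCollision} that $m_1$, of mass $1-\mu$, sits at $(\mu,0)$ and $m_2$, of mass $\mu$, at $(\mu-1,0)$: the collinear points are roots of $\partial \Omega/\partial x$ on the $x$-axis, while $L_{4,5}$ are known in closed form.

```python
import math

MU = 0.25  # mass ratio; the value used later in Theorem 1

def omega_x(x, mu):
    """x-derivative of the effective potential Omega, restricted to y = 0."""
    d1 = x - mu        # signed offset from m1 at (mu, 0), mass 1 - mu
    d2 = x - mu + 1.0  # signed offset from m2 at (mu - 1, 0), mass mu
    return x - (1.0 - mu) * d1 / abs(d1) ** 3 - mu * d2 / abs(d2) ** 3

def bisect(f, a, b, tol=1e-13):
    """Root of f on [a, b], assuming a sign change between the endpoints."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def libration_points(mu, eps=1e-6):
    f = lambda x: omega_x(x, mu)
    L1 = bisect(f, mu - 1.0 + eps, mu - eps)  # between the primaries
    L2 = bisect(f, mu + eps, 2.0)             # beyond m1
    L3 = bisect(f, -2.0, mu - 1.0 - eps)      # beyond m2
    L4 = (mu - 0.5, math.sqrt(3.0) / 2.0)     # Lagrange's equilateral points
    L5 = (mu - 0.5, -math.sqrt(3.0) / 2.0)
    return L1, L2, L3, L4, L5

L1, L2, L3, L4, L5 = libration_points(MU)
```

For $\mu = 1/4$ this places $L_4$ at $(-1/4, \sqrt{3}/2)$, at unit distance from both primaries.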
\begin{figure}[!t] \centering \includegraphics[height=5cm]{Fig-3.pdf} \caption{The three collinear libration points $L_{1,2,3}$ and the equilateral triangle libration points $L_{4, 5}$, relative to the positions of the primary masses $m_1$ and $m_2$.} \label{fig:PCRTBP_librations} \end{figure} For all values of the mass ratio, the collinear libration points have saddle $\times$ center stability. The center manifolds give rise to important families of periodic orbits known as Lyapunov families. The stability of $L_4$ and $L_5$ depends on the mass ratio $\mu$. For \begin{equation*} 0 < \mu < \mu_* \approx 0.04, \end{equation*} where the exact value is $\mu_* = 2/(27 + \sqrt{621})$, the triangular libration points have center $\times$ center stability. That is, they are stable in the sense of Hamiltonian systems and exhibit the full ``zoo'' of nearby KAM objects. When $\mu > \mu_*$, the triangular libration points $L_4$ and $L_5$ have saddle-focus stability. That is, they have a complex conjugate pair of stable and a complex conjugate pair of unstable eigenvalues. The four eigenvalues then have the form \begin{equation*} \lambda = \pm \alpha \pm i \beta, \end{equation*} for some $\alpha, \beta > 0$. In this case, each libration point has an attached two dimensional stable and two dimensional unstable manifold. Since these two dimensional manifolds live in the three dimensional energy level set of $L_{4,5}$, there exists the possibility that they intersect the two dimensional collision or ejection manifolds of the primaries transversely. It is also possible that the stable/unstable manifolds of $L_{4,5}$ intersect one another transversely, giving rise to homoclinic or heteroclinic connecting orbits. In fact, in this paper we prove that both of these phenomena occur, and in this section we discuss our method for proving the existence of intersections between a stable/unstable manifold of $L_{4,5}$, and an ejection/collision manifold of a primary body.
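The dichotomy just described can be checked directly from the characteristic polynomial of the linearization at $L_{4,5}$, which is the classical quartic $\lambda^{4}+\lambda^{2}+\frac{27}{4}\mu(1-\mu)=0$; the Routh value $\mu_{*}=2/(27+\sqrt{621})$ is precisely the root of $27\mu(1-\mu)=1$. A short illustrative sketch (the quartic is quoted from the classical literature, not derived here):

```python
import cmath, math

MU_STAR = 2.0 / (27.0 + math.sqrt(621.0))  # Routh's critical ratio, ~0.0385

def l4_eigenvalues(mu):
    """Roots of lambda^4 + lambda^2 + (27/4) mu (1 - mu) = 0."""
    disc = cmath.sqrt(1.0 - 27.0 * mu * (1.0 - mu))  # discriminant in z = lambda^2
    return [s * cmath.sqrt(z)
            for z in ((-1.0 + disc) / 2.0, (-1.0 - disc) / 2.0)
            for s in (1.0, -1.0)]

def stability_type(mu):
    eigs = l4_eigenvalues(mu)
    if max(abs(lam.real) for lam in eigs) < 1e-12:
        return "center x center"   # two purely imaginary pairs
    return "saddle-focus"          # lambda = +/- alpha +/- i beta
```

Below $\mu_*$ the quartic has two purely imaginary pairs of roots; above it the roots form a genuine complex quadruple, which is the saddle-focus configuration used in the rest of this section.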
Any point of intersection between these manifolds gives rise to an orbit which is asymptotic to $L_4$, but which collides with or is ejected from one of the massive bodies. Two such orbits are illustrated in Figure \ref{fig:EC_to_collision}. \begin{figure}[!t] \centering \includegraphics[height=4.75cm]{L4_EC_pic1.pdf}\includegraphics[height=4.75cm]{L4_EC_pic2.pdf} \caption{Libration-to-collision and ejection-to-libration orbits for $\mu = 1/2$ and $c = 3$ (which is the $L_4$ value of the Jacobi constant in the equal mass problem). The left frame illustrates an ejection to $L_4$ orbit, and the right frame an $L_4$-to-collision orbit. In each frame $m_1$ is depicted as a black dot and $L_4$ as a red dot. The boundary of a parameterized local unstable manifold for $L_4$ is depicted as the red circle, and the boundary of the local stable manifold as the green circle. The orbits are found by computing an approximate zero of the map defined in Equation \eqref{eq:EC_to_L4_operator}. The green portion of the left curve and the red portion of the right curve are computed in regularized coordinates for the body $m_1$, where we have fixed $s = 0.5$ regularized time units before the change back to original/synodic coordinates. These orbit segments are transformed back to the original coordinates for the plot.} \label{fig:EC_to_collision} \end{figure} Let $\overline{B}\subset \mathbb{R}^{2}$ denote the closed ball of radius $1$. Assume that\begin{equation*} w_{j}^{\kappa }:\overline{B}\rightarrow \mathbb{R}^{4}\qquad \text{for }j\in \{4,5\}\text{ and }\kappa \in \left\{ u,s\right\}, \end{equation*}parameterize the two dimensional local stable/unstable manifolds of $L_{j}$. We assume that the charts are normalized so that $w_{j}^{\kappa }\left( 0\right) =L_{j}$. Then \begin{equation*} w_{j}^{\kappa }\left( \overline{B}\right) =W_{\text{loc}}^{\kappa }\left( L_{j}\right) \qquad \text{for }j\in \{4,5\},\text{ }\kappa \in \left\{ u,s\right\} .
\end{equation*}Define the functions\begin{equation*} P_{j}^{\kappa }:\mathbb{R}\rightarrow \mathbb{R}^{4}\qquad \text{for }j\in \{4,5\}\text{ and }\kappa \in \left\{ u,s\right\}, \end{equation*}by\begin{equation} P_{j}^{\kappa }\left( \theta \right) :=w_{j}^{\kappa }\left( \cos \theta ,\sin \theta \right) . \label{eq:Pj-lib} \end{equation} For $i\in \{1,2\}$ consider $P_{i}$ as defined in Equation \eqref{eq:collisions-par-Pi}. For \begin{equation*} \mathbf{x}=\left( x_{0},x_{1},x_{2},x_{3},x_{4}\right) \in \mathbb{R}^{14}, \end{equation*}where $x_{0},x_{4}\in \mathbb{R}, x_{1},x_{2},x_{3}\in \mathbb{R}^{4}$, and $j \in \left\{ 4,5\right\} $ we define \begin{equation*} F_{i,j}^{u},F_{i,j}^{s}:\mathbb{R}^{16}\rightarrow \mathbb{R}^{16}, \end{equation*}by the formulas \begin{equation} \label{eq:EC_to_L4_operator} F_{i,j}^{u}\left( \mathbf{x},\tau ,\alpha \right) =\left( \begin{array}{r@{\,\,-\,\,}l} P_{j}^{u}\left( x_{0}\right) & x_{1} \\ \phi _{\alpha }\left( x_{1},\tau \right) & x_{2} \\ T_{i}^{-1}(x_{2}) & x_{3} \\ \psi _{i}^{c_{j}}\left( x_{3},s\right) & P_{i}(x_{4})\end{array}\right) ,\quad F_{i,j}^{s}\left( \mathbf{x},\tau ,\alpha \right) =\left( \begin{array}{r@{\,\,-\,\,}l} P_{i}(x_{0}) & x_{1} \\ \psi _{i}^{c_{j}}\left( x_{1},s\right) & x_{2} \\ T_{i}(x_{2}) & x_{3} \\ \phi _{\alpha }\left( x_{3},\tau \right) & P_{j}^{s}\left( x_{4}\right)\end{array}\right). \end{equation}Here $\tau ,\alpha \in \mathbb{R}$ and the constant $c_{j}$ in $\psi _{i}^{c_{j}}$ is chosen as $c_{j}=E\left( L_{j}\right) $. Zeros of the operator $F_{i,j}^{u}$ correspond to intersections of the unstable manifold of $L_{j}$ with the collision manifold of mass $m_{i}.$ We also refer to this as a heteroclinic connection from $L_{j}$ to $m_{i}$. 
Similarly, zeros of the operator $F_{i,j}^{s}$ correspond to intersections between the stable manifold of $L_{j}$ with the ejection manifold of mass $m_{i}.$ In other words, they lead to heteroclinic connections ejected from $m_{i}$ and limiting to the libration point $L_{j}$ in forward time. This is expressed formally in the following lemma. \begin{lemma}\label{lem:Li-collisions} Fix $i\in \left\{ 1,2\right\} ,$ $j\in \{4,5\}$, and $\kappa \in \left\{ u,s\right\} $. Suppose there exists $\mathbf{x}^{\ast} = (x_0^{\ast}, x_1^{\ast}, x_2^{\ast}, x_3^{\ast}, x_4^{\ast}) \in \mathbb{R}^{14}$ and $\tau ^{\ast } > 0 $ satisfying \begin{equation*} F_{i,j}^{\kappa }\left( \mathbf{x}^{\ast },\tau ^{\ast },0\right) =0, \end{equation*} and such that \begin{equation*} DF_{i,j}^{\kappa }\left( \mathbf{x}^{\ast },\tau ^{\ast },0\right) \end{equation*}is an isomorphism. Then we have the following two cases. \begin{enumerate} \item If $\kappa = u$, then the orbit of $x_1^{\ast}$ is heteroclinic from the libration point $L_{j}$ to collision with $m_{i}$ and the intersection of $W^{u}\left( L_{j}\right)$ with the collision manifold of $m_{i}$ is transverse with respect to the energy level $\left\{ E=c_j\right\} $. \item If $\kappa = s$, then the orbit of $x_3^{\ast}$ is heteroclinic from ejection from $m_{i}$ to the libration point $L_{j}$ and the intersection of $W^{s}\left( L_{j}\right)$ with the ejection manifold of $m_i$ is transverse with respect to the energy level $\left\{ E=c_j\right\} $. \end{enumerate} \end{lemma} \begin{proof} The proof follows from an argument similar to the proof of Lemma \ref{lem:collision-connections}. \end{proof} \bigskip By a small modification of the operator just defined, we can study orbits homoclinic or heteroclinic to the libration points as well. Such orbits arise as intersections of the stable/unstable manifolds of the libration points, and lead naturally to two point BVPs.
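All of the shooting operators appearing in this and the preceding section share a single template: unknowns $x_{0},\ldots,x_{n}$ chained together by residuals of the form $f(x_{i})-x_{i+1}$, with the ends pinned to parameterized boundary sets, and solved by Newton's method. The following sketch shows that template with scalar placeholder maps; the toy maps stand in for $\phi_{\alpha}$, $\psi_{i}^{c}$, $T_{i}$ and the boundary parameterizations, which are not reproduced here.

```python
import math

def shooting_residual(maps, xs, b):
    """F(x) = (f_1(x_0) - x_1, ..., f_{n-1}(x_{n-2}) - x_{n-1}, f_n(x_{n-1}) - b)."""
    res = [f(x) - x_next for f, x, x_next in zip(maps, xs, xs[1:])]
    res.append(maps[-1](xs[-1]) - b)
    return res

def newton(maps, xs, b, steps=30, h=1e-7):
    """Newton's method with a finite-difference Jacobian and Gauss-Jordan solve."""
    xs, n = list(xs), len(xs)
    for _ in range(steps):
        F = shooting_residual(maps, xs, b)
        cols = []
        for j in range(n):                 # j-th Jacobian column by forward differences
            xp = list(xs)
            xp[j] += h
            Fp = shooting_residual(maps, xp, b)
            cols.append([(fp - f0) / h for fp, f0 in zip(Fp, F)])
        # augmented system [J | -F], eliminated by Gauss-Jordan with partial pivoting
        A = [[cols[j][i] for j in range(n)] + [-F[i]] for i in range(n)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[p] = A[p], A[c]
            for r in range(n):
                if r != c:
                    m = A[r][c] / A[c][c]
                    A[r] = [a - m * ac for a, ac in zip(A[r], A[c])]
        xs = [x + A[i][n] / A[i][i] for i, x in enumerate(xs)]
    return xs

# toy chain: x1 = x0^3, x2 = x1 + 2, sin(x2) = b; exact solution x0 = 0.5
maps = [lambda x: x ** 3, lambda x: x + 2.0, math.sin]
b = math.sin(0.5 ** 3 + 2.0)
sol = newton(maps, [0.4, 0.1, 2.0], b)
```

Adding intermediate shooting nodes, as in Remark \ref{rem:additionalShooting}, simply lengthens the chain without changing its zeros.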
Three such orbits, homoclinic to $L_4$ in the PCRTBP, are illustrated in Figure \ref{fig:PCRTBP_L4_homoclinics}. Note that homoclinic/heteroclinic connections between equilibrium solutions do not require changing to regularized coordinates, as such orbits exist for all forward and backward time and cannot have any collisions. While this claim is mathematically correct, any homoclinic/heteroclinic orbit which passes sufficiently close to a collision with $m_{i}$ for $i\in \left\{ 1,2\right\} $ becomes difficult to continue numerically. Consequently, these orbits may still be difficult or impossible to validate via computer assisted proof. In such cases regularization techniques are an asset even when studying orbits which only pass near a collision. The left and center homoclinic orbits in Figure \ref{fig:PCRTBP_L4_homoclinics} for example are computed entirely in the usual PCRTBP coordinates, while the right orbit was computed using both coordinate systems. With this in mind we express the homoclinic/heteroclinic problem in the framework set up in the previous sections. \begin{figure}[!t] \centering \includegraphics[height=3.8cm]{L4_homoclinics1.pdf}\includegraphics[height=3.8cm]{L4_homoclinics2.pdf}\includegraphics[height=3.8cm]{L4_homoclinics3.pdf} \caption{ Transverse homoclinic orbits at $L_4$ for $\mu = 1/2$ in the $C = 3$ energy level. Each orbit traverses the illustrated curves in a clockwise fashion. The left and center orbits were known to Str\"{o}mgren and Szebehely. The center and right orbits possess no symmetry, and the orbit on the right passes close to collision with $m_2$. Each orbit is found by approximately computing a zero of the map defined in Equation \eqref{eq:homoclinicOperator}. The left and center orbits are computed in only the standard coordinate system.
In the definition of the shooting template, we allow the orbit to spend $s_1 = 1.8635$ regularized time units in Levi-Civita coordinates and to flow for $s_2 =5$ time units in the original/synodic coordinates before reaching the stable manifold. } \label{fig:PCRTBP_L4_homoclinics} \end{figure} Let $P^{\kappa}_{j}:\mathbb{R}\rightarrow \mathbb{R}^{4}$, for $j\in \left\{ 4,5\right\} $, be the functions defined in Equation \eqref{eq:Pj-lib} and consider \begin{equation*} \mathbf{x}=\left( x_{0},\ldots ,x_{6}\right) \in \mathbb{R}^{22}, \end{equation*}where $x_{0},x_{6}\in \mathbb{R}$ and $x_{1},\ldots ,x_{5}\in \mathbb{R}^{4}$, and fix $s_{1},s_{2}>0$. Let \begin{equation*} F_{i,j,k}:\mathbb{R}^{24}\rightarrow \mathbb{R}^{24},\qquad \text{for }j,k\in \{4,5\},i\in \left\{ 1,2\right\} , \end{equation*}be defined as\begin{equation} \label{eq:homoclinicOperator} F_{i,j,k}\left( \mathbf{x},\tau ,\alpha \right) :=\left( \begin{array}{r@{\,\,-\,\,}l} P_{j}^{u}\left( x_{0}\right) & x_{1} \\ \phi _{\alpha }\left( x_{1},\tau \right) & x_{2} \\ T_{i}^{-1}(x_{2}) & x_{3} \\ \psi _{i}^{c_{j}}\left( x_{3},s_{1}\right) & x_{4} \\ T_{i}(x_{4}) & x_{5} \\ \phi _{\alpha }\left( x_{5},s_{2}\right) & P_{k}^{s}\left( x_{6}\right) \end{array}\right) . \end{equation} One can formulate a result analogous to Lemmas \ref{lem:collision-connections} and \ref{lem:Li-collisions}, so that \[ F_{i,j,k}\left( \mathbf{x}^{\ast },\tau ^{\ast },0\right) =0, \] together with $DF_{i,j,k}\left( \mathbf{x}^{\ast },\tau ^{\ast },0\right) $ an isomorphism, implies that the manifolds $W^{u}\left( L_{j}\right) $ and $W^{s}\left( L_{k}\right) $ intersect transversally. Again, the advantage of solving $F_{i,j,k}=0$ over parallel shooting in the original coordinates is that one can establish the existence of connections which pass arbitrarily close to a collision with $m_{1}$ and/or $m_2$.
Indeed, the operator defined in Equation \eqref{eq:homoclinicOperator} can be generalized to study homoclinic orbits which make any finite number of flybys of the primaries in any order before returning to $L_{4,5}$ by making additional changes of variables to regularized coordinates every time the orbit passes near collision. \section{Symmetric periodic orbits passing through collision\label{sec:symmetric-orbits}} In this section we show that our method applies to the study of families of periodic orbits which pass through a collision. By this we mean the following. We will prove the existence of a family of orbits parameterized by the value of the Jacobi constant on an interval. As in the introduction, we refer to this as a tube of periodic orbits. For all values in the interval except one, the intersection of the energy level set with the tube is a periodic orbit. For a single isolated value of the energy the intersection of the energy level set with the tube is an ejection-collision orbit involving $m_{1}$. The situation is depicted in Figure \ref{fig:Lyap}. \begin{figure}[tbp] \begin{center} \includegraphics[height=3.95cm]{Fig-4_0.pdf} \includegraphics[height=3.95cm]{Fig-4_1.pdf} \includegraphics[height=3.95cm]{Fig-4_2.pdf} \par \includegraphics[height=3.95cm]{Fig-4_0c.pdf} \includegraphics[height=3.95cm]{Fig-4_1c.pdf} \includegraphics[height=3.95cm]{Fig-4_2c.pdf} \end{center} \caption{A family of Lyapunov periodic orbits passing through a collision. The left two figures are in the original coordinates, the middle two are in the regularised coordinates at $m_{1}$ and the right two are in regularised coordinates at $m_{2}$. (Compare with Figure \protect\ref{fig:PCRTBP_coordinates}.) The trajectories computed in the original coordinates are in black, and the trajectories computed in the regularized coordinates are in red. The collision with $m_1$ is indicated by a cross. The mass $m_2$ is added in the closeup figures as a black dot. 
The operator (\protect\ref{eq:Fc-choice}) gives half of a periodic orbit in red and black. The second half, which follows from the symmetry, is depicted in grey. The plots are for the Earth-moon system.} \label{fig:Lyap} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[height=3.95cm]{Fig-4_0_detail_1} \includegraphics[height=3.95cm]{Fig-4_0_detail_2} \end{center} \caption{A closeup of a Lyapunov orbit before (left) and after (right) passing through the collision. The plot is in the original coordinates.} \label{fig:Lyap-closeup} \end{figure} To establish such a family of periodic orbits we make use of the time reversing symmetry of the PCRTBP. Recall that for\begin{equation*} S\left( x,p,y,q\right) :=\left( x,-p,-y,q\right) \end{equation*}and for the flow $\phi \left( \mathbf{x},t\right) $ of the PCRTBP we have that\begin{equation} S\left( \phi \left( \mathbf{x},t\right) \right) =\phi \left( S\left( \mathbf{x}\right) ,-t\right) . \label{eq:symmetry-prop} \end{equation}Let us introduce the notation $\mathcal{S}$ to stand for the set of self $S$-symmetric points\begin{equation*} \mathcal{S}:=\left\{ \mathbf{x}\in \mathbb{R}^{4}:\mathbf{x}=S\left( \mathbf{x}\right) \right\} . \end{equation*} The property in Equation \eqref{eq:symmetry-prop} is used to find periodic orbits as follows. Suppose $\mathbf{x},\mathbf{y}\in \mathcal{S} $ satisfy $\mathbf{y}=\phi \left( \mathbf{x},t\right)$. Then by Equation \eqref{eq:symmetry-prop}, we have \begin{equation} \phi \left( \mathbf{x},2t\right) =\phi \left( \mathbf{y},t\right) =\phi \left( S\left( \mathbf{y}\right) ,t\right) =S\left( \phi \left( \mathbf{y},-t\right) \right) =S\left( \mathbf{x}\right) =\mathbf{x}, \label{eq:S-symm-periodic} \end{equation}meaning that $\mathbf{x}$ lies on a periodic orbit. Our strategy is then to set up a boundary value problem which shoots from $\mathcal{S}$ to itself. The set $\mathcal{S}$ lies on the $x$-axis in the $\left( x,y\right)$ coordinate frame.
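The computation in Equation \eqref{eq:S-symm-periodic} is not special to the PCRTBP: any flow with a time reversing symmetry closes up an orbit which joins the fixed set of the reversor to itself. The following self-contained sketch illustrates this with the pendulum $\ddot{x}=-\sin x$, whose reversor $S(x,v)=(x,-v)$ has fixed set $\{v=0\}$; all numerical parameters here are for illustration only.

```python
import math

def vector_field(z):
    x, v = z
    return (v, -math.sin(x))   # pendulum: x' = v, v' = -sin(x)

def rk4_step(z, dt):
    def shift(z, k, c):
        return (z[0] + c * k[0], z[1] + c * k[1])
    k1 = vector_field(z)
    k2 = vector_field(shift(z, k1, 0.5 * dt))
    k3 = vector_field(shift(z, k2, 0.5 * dt))
    k4 = vector_field(shift(z, k3, dt))
    return (z[0] + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            z[1] + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def flow(z, T, dt=1e-3):
    n = int(T / dt)
    for _ in range(n):
        z = rk4_step(z, dt)
    return rk4_step(z, T - n * dt)   # final partial step

z0 = (1.0, 0.0)                      # a point on Fix(S) = {v = 0}
v_at = lambda t: flow(z0, t)[1]
a, b = 1.0, 4.0                      # bracket of the first return to Fix(S)
fa = v_at(a)
for _ in range(40):                  # bisection for t with v(t) = 0
    m = 0.5 * (a + b)
    fm = v_at(m)
    if fa * fm <= 0.0:
        b = m
    else:
        a, fa = m, fm
t_half = 0.5 * (a + b)
z_return = flow(z0, 2.0 * t_half)    # periodic by the symmetry argument
```

The trajectory hits $\{v=0\}$ again at $t_{\text{half}}\approx 3.35$ (half the pendulum period at amplitude $1$), and the state at time $2t_{\text{half}}$ returns to the initial point, exactly as in Equation \eqref{eq:S-symm-periodic}.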
From the left plot in Figure \ref{fig:Lyap} it is clear that we are interested in points on $\mathcal{S}$ which will pass through collision with $m_{1}$ and close to the collision with $m_{2}$. We therefore consider the set $\mathcal{S}$ transformed to the regularized coordinates of $m_1$ and $m_2$. \begin{lemma} Let $\mathcal{\hat{S}},\mathcal{\tilde{S}}\subset \mathbb{R}^{4}$ be defined as\begin{eqnarray*} \mathcal{\hat{S}} &=&\left\{ \left( 0,\hat{p},\hat{y},0\right) :\hat{p},\hat{y}\in \mathbb{R}\right\} , \\ \mathcal{\tilde{S}} &=&\left\{ \left( \tilde{x},0,0,\tilde{q}\right) :\tilde{x},\tilde{q}\in \mathbb{R}\right\} . \end{eqnarray*}Then $T_{1}(\mathcal{\hat{S}})=\mathcal{S}$ and $T_{2}(\mathcal{\tilde{S}})=\mathcal{S}$. \end{lemma} \begin{proof} The proof follows directly from the definition of $T_{1}$ and $T_{2}$. (See Equations \eqref{eq:T1-def} and \eqref{eq:T2-def}.) \end{proof} The intuition behind the choice of $\mathcal{\hat{S}},$ $\mathcal{\tilde{S}}$ is seen in Figure \ref{fig:PCRTBP_coordinates}. From the figure we see that the set $\mathcal{\hat{S}}$ is the vertical axis $\{\hat{x}=0\}$ and $\mathcal{\tilde{S}}$ is the horizontal axis $\left\{ \tilde{y}=0\right\} $, which join the primaries in the regularized coordinates. To find the desired symmetric periodic orbits we fix an energy level $c\in \mathbb{R}$ and introduce an appropriate shooting operator, whose zero implies the existence of an orbit with energy $c$. Slightly abusing notation, let us first define two functions $\hat{p},\tilde{q}:\mathbb{R}^{2}\rightarrow \mathbb{R}$ as\begin{eqnarray*} \hat{p}\left( \hat{y},c\right) & := &\sqrt{4\hat{y}^{6}-8\mu \hat{y}^{4}+4(\mu -c)\hat{y}^{2}+\frac{8\mu \hat{y}^{2}}{\sqrt{\hat{y}^{4}+1-2\hat{y}^{2}}}+8(1-\mu )}, \\ \tilde{q}\left( \tilde{x},c\right) &:=&\sqrt{4\tilde{x}^{6}-8(1-\mu )\tilde{x}^{4}+4\left( (1-\mu )-c\right) \tilde{x}^{2}+\frac{8(1-\mu )\tilde{x}^{2}}{\sqrt{\tilde{x}^{4}+1-2\tilde{x}^{2}}}+8\mu }.
\end{eqnarray*}Observe that from Equations \eqref{eq:reg_P_energy} and \eqref{eq:E2} we have \begin{align} E_{1}^{c}\left( 0,\hat{p}\left( \hat{y},c\right) ,\hat{y},0\right) & =0, \label{eq:pc-implicit} \\ E_{2}^{c}\left( \tilde{x},0,0,\tilde{q}\left( \tilde{x},c\right) \right) & =0. \label{eq:qc-implicit} \end{align}Next, we define $\hat{P}_{1}^{c},\tilde{P}_{2}^{c}:\mathbb{R}\rightarrow \mathbb{R}^{4}$ by \begin{align*} \hat{P}_{1}^{c}\left( \hat{y}\right) & :=\left( 0,\hat{p}\left( \hat{y},c\right) ,\hat{y},0\right) , \\ \tilde{P}_{2}^{c}\left( \tilde{x}\right) & :=\left( \tilde{x},0,0,\tilde{q}\left( \tilde{x},c\right) \right), \end{align*}and note that $\hat{P}_{1}^{c}\left( \mathbb{R}\right) \subset \mathcal{\hat{S}}$ and $\tilde{P}_{2}^{c}\left( \mathbb{R}\right) \subset \mathcal{\tilde{S}}$. Taking \begin{equation*} \mathbf{x}=(x_{0}, x_{1},\ldots ,x_{5},x_{6})\in \mathbb{R}\times \underset{5 \ \text{copies}}{\underbrace{\mathbb{R}^{4}\times \ldots \times \mathbb{R}^{4}}}\times \mathbb{R}=\mathbb{R}^{22}, \end{equation*}we define the shooting operator $F_{c}:\mathbb{R}^{24}\rightarrow \mathbb{R}^{24}$ as \begin{equation} F_{c}\left( \mathbf{x},\tau ,\alpha \right) =\left( \begin{array}{r@{\,\,-\,\,}l} \hat{P}_{1}^{c}\left( x_{0}\right) & x_{1} \\ \psi _{1}^{c}\left( x_{1},s\right) & x_{2} \\ T_{1}\left( x_{2}\right) & x_{3} \\ \phi _{\alpha }\left( x_{3},\tau \right) & x_{4} \\ T_{2}^{-1}\left( x_{4}\right) & x_{5} \\ \psi _{2}^{c}\left( x_{5},s\right) & \tilde{P}_{2}^{c}\left( x_{6}\right)\end{array}\right) . \label{eq:Fc-choice} \end{equation}We have the following result.
\begin{lemma} \label{lem:Lyap-existence} Suppose that for $c\in \mathbb{R}$ we have $\mathbf{x}\left( c\right) \in\mathbb{R}^{22}$ and $\tau \left( c\right) \in \mathbb{R}$ for which \begin{equation*} F_{c}\left( \mathbf{x}\left( c\right) ,\tau \left( c\right) ,0\right) =0, \end{equation*} then we have one of the following three cases: \begin{enumerate} \item If $x_{0}\left( c\right) \neq 0$ and $x_{6}\left( c\right) \neq 0$, then the orbit through $T_{1}( \hat{P}_{1}^{c}\left( x_{0}\left( c\right) \right) )$ is periodic. \item If $x_{0}\left( c\right) =0$ and $x_{6}\left( c\right) \neq 0$, then the orbit through $T_{1}( \hat{P}_{1}^{c}\left( x_{0}\left( c\right) \right) )$ is an ejection-collision orbit with $m_1$. \item If $x_{0}\left( c\right) \neq 0$ and $x_{6}\left( c\right) =0$, then the orbit through $T_{1}( \hat{P}_{1}^{c}\left( x_{0}\left( c\right) \right) )$ is an ejection-collision orbit with $m_2$. \end{enumerate} \end{lemma} \begin{proof} The result follows immediately from the definition of $F_{c}$ in Equation \eqref{eq:Fc-choice} and from Theorem \ref{thm:LeviCivitta} (or the analogous theorem for $m_2$). We highlight the fact that due to Equations \eqref{eq:pc-implicit}--\eqref{eq:qc-implicit} we have $E_{1}^{c}(\hat{P}_{1}^{c}\left( x_{0}\right) )=0$ and $E_{2}^{c}( \tilde{P}_{2}^{c}\left( x_{6}\right) ) =0$, so the trajectories in the regularized coordinates correspond to true trajectories in the physical coordinates of the PCRTBP. \end{proof} We can use the implicit function theorem to compute the derivative of $\mathbf{x}\left( c\right) $ with respect to $c$. Let us write $\mathbf{y}\left( c\right) :=\left( \mathbf{x}\left( c\right) ,\tau \left( c\right) ,\alpha \left( c\right) \right) $ and suppose $F_c(\mathbf{y}(c))=0$. (Note that in fact we must also have that $\alpha \left( c\right) =0$ since $\alpha $ is unfolding.)
Then $\frac{d}{dc}\mathbf{x}\left( c\right) $ is computed from the first coordinates of the vector $\frac{d}{dc}\mathbf{y}\left( c\right) $ and is given by the formula \begin{equation} \frac{d}{dc}\mathbf{y}\left( c\right) =-\left( \frac{\partial F_{c}}{\partial \mathbf{y}}\right) ^{-1}\frac{\partial F_{c}}{\partial c}. \label{eq:implicit-dx-dc} \end{equation} \begin{theorem} \label{th:Lyap-through-collision}Assume that for $c\in \left[ c_{1},c_{2}\right] $ the functions $\mathbf{x}\left( c\right) $ and $\tau \left( c\right) $ solve the implicit equation \begin{equation*} F_{c}\left( \mathbf{x}\left( c\right) ,\tau \left( c\right) ,0\right) =0. \end{equation*} If\begin{eqnarray} \label{eq:Bolzano-condition-Lyap} x_{0}\left( c_{1}\right) >0>x_{0}\left( c_{2}\right) , \\ \label{eq:x6-nonzero} x_{6}\left( c\right) \neq 0\qquad \text{for all }c\in \left[ c_{1},c_{2}\right], \end{eqnarray} and \begin{equation} \frac{d}{dc}x_{0}\left( c\right) <0\qquad \text{for all }c\in \left[ c_{1},c_{2}\right] , \label{eq:der-cond-Lyap} \end{equation}then there exists a unique energy parameter $c^{\ast }\in \left( c_{1},c_{2}\right) $ for which we have an intersection of the ejection and collision manifolds of $m_{1}$. Moreover, for all remaining $c\in \left[ c_{1},c_{2}\right] \setminus \left\{ c^{\ast }\right\} $ the orbit of the point $T_{1}( \hat{P}_{1}^{c}\left( x_{0}\left( c\right) \right) ) $ is periodic. \end{theorem} \begin{proof} The result follows directly from the Bolzano theorem and Lemma \ref{lem:Lyap-existence}. \end{proof} Theorem \ref{th:Lyap-through-collision} is deliberately formulated so that its hypotheses can be validated via computer assistance. Specifically, rigorous enclosures of the derivative in Equation \eqref{eq:implicit-dx-dc} are computed, and the conditions in Equations \eqref{eq:Bolzano-condition-Lyap}-\eqref{eq:der-cond-Lyap} are verified, using interval arithmetic.
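The structure of the argument in Theorem \ref{th:Lyap-through-collision}, an implicit branch, a sign change, and a monotonicity condition isolating a unique $c^{\ast}$, can be seen in miniature on a toy scalar family; the equation below is a stand-in chosen for illustration and is unrelated to $F_{c}$.

```python
def F(x, c):
    return x ** 3 + x + c - 3.0     # toy stand-in for the implicit equation

def dF_dx(x):
    return 3.0 * x ** 2 + 1.0

def x0(c, steps=60):
    x = 0.0
    for _ in range(steps):          # Newton's method in x, with c held fixed
        x -= F(x, c) / dF_dx(x)
    return x

def dx0_dc(c):
    # scalar version of the implicit derivative: dx/dc = -(dF/dx)^{-1} dF/dc
    return -1.0 / dF_dx(x0(c))

c1, c2 = 2.0, 4.0
sign_change = x0(c1) > 0.0 > x0(c2)                            # Bolzano condition
monotone = all(dx0_dc(c1 + 0.1 * k) < 0.0 for k in range(21))  # derivative condition
a, b = c1, c2
for _ in range(60):                 # isolate the unique c* with x0(c*) = 0
    m = 0.5 * (a + b)
    a, b = (a, m) if x0(m) < 0.0 else (m, b)
c_star = 0.5 * (a + b)
```

In the paper the same three checks are performed with interval enclosures rather than floating point samples, which is what upgrades the sketch to a proof.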
\medskip We finish this section with an example of a similar approach, which can be used for the proofs of double collisions in the case when $m_{1}=m_{2}=\frac{1}{2}$. That is, we establish the existence of a family of periodic orbits, parameterized by energy (the Jacobi constant), which are symmetric with respect to the $y$-axis, and such that for a single parameter from the family we have a double collision as in Figure \ref{fig:eq}. \begin{figure}[tbp] \begin{center} \includegraphics[height=3.95cm]{Fig-5_1.pdf} \includegraphics[height=3.95cm]{Fig-5_2.pdf} \end{center} \caption{A family of periodic orbits passing through a double collision. The left figure is in the original coordinates and the right figure is in the regularised coordinates at $m_{1}$. The trajectories computed in the original coordinates are in black, the trajectories computed in the regularized coordinates are in red, and the collision orbit is in blue. The second half of an orbit, which follows from the $R$-symmetry, is depicted in grey. The plots are for the system with equal masses.} \label{fig:eq} \end{figure} In this case consider $R:\mathbb{R}^{4}\rightarrow \mathbb{R}^{4}$ defined as\begin{equation*} R\left( x,p,y,q\right) =\left( -x,p,y,-q\right) . \end{equation*}For the case of two equal masses, we have the time reversing symmetry \begin{equation} R\left( \phi \left( \mathbf{x},t\right) \right) =\phi \left( R\left( \mathbf{x}\right) ,-t\right) . \label{eq:R-symmetry.} \end{equation}We denote by $\mathcal{R}$ the set of all points which are $R$-self symmetric, i.e. $\mathcal{R}=\{\mathbf{x}=R\left( \mathbf{x}\right) \}$. An argument mirroring Equation \eqref{eq:S-symm-periodic} shows that if two points $\mathbf{x},\mathbf{y}\in \mathcal{R}$ have $\mathbf{y}=\phi \left( \mathbf{x},t\right) ,$ then these points must lie on a periodic orbit.
To obtain the existence of the family of orbits depicted in Figure \ref{fig:eq}, define $p:\mathbb{R}^{2}\rightarrow \mathbb{R}$ and $P_{1}^{c},P_{2}^{c}:\mathbb{R}\rightarrow \mathbb{R}^{4}$ as\begin{eqnarray*} p\left( y,c\right) &:=&\sqrt{2\Omega (0,y)-c}, \\ P_{1}^{c}\left( y\right) &:=&\left( 0,p\left( y,c\right) ,y,0\right) , \\ P_{2}^{c}\left( y\right) &:=&\left( 0,-p\left( y,c\right) ,y,0\right) . \end{eqnarray*}Note that $P_{1}^{c}\left( y\right) ,P_{2}^{c}\left( y\right) \in \mathcal{R} $ and $E\left( P_{1}^{c}\left( y\right) \right) =E\left( P_{2}^{c}\left( y\right) \right) =c$ (see Equation \eqref{eq:JacobiIntegral}). Consider $x_{0},x_{7}\in \mathbb{R}$ and $x_{1},\ldots ,x_{6}\in \mathbb{R}^{4},$ where \begin{equation} x_{4}=\left( s_{4},\hat{p}_{4},\hat{y}_{4},\hat{q}_{4}\right) \in \mathbb{R}^{4}. \label{eq:s4} \end{equation}We emphasize that the first coordinate in $x_{4}$ will be used here in a slightly less standard way than in the previous examples. We define also \begin{equation*} \mathrm{\hat{x}}_{4}:=\left( 0,\hat{p}_{4},\hat{y}_{4},\hat{q}_{4}\right) \in \mathbb{R}^{4}.
\end{equation*}We now choose some fixed $s_{2},s_{5}\in \mathbb{R}$, $s_{2},s_{5}>0$, and for \begin{equation*} \mathbf{x}=\left( x_{0},\ldots ,x_{7}\right) \in \mathbb{R}\times \underset{6}{\underbrace{\mathbb{R}^{4}\times \mathbb{\ldots }\times \mathbb{R}^{4}}}\times \mathbb{R}=\mathbb{R}^{26} \end{equation*}define the operator $F_{c}:\mathbb{R}^{26}\times \mathbb{R}\times \mathbb{R}\rightarrow \mathbb{R}^{28}$ as \begin{equation} F_{c}\left( \mathbf{x},\tau ,\alpha \right) =\left( \begin{array}{r@{\,\,-\,\,}l} P_1^{c}\left( x_{0}\right) & x_{1} \\ \phi _{\alpha }\left( x_{1},s_{2}\right) & x_{2} \\ T_{1}^{-1}\left( x_{2}\right) & x_{3} \\ \psi _{1}^{c}\left( x_{3},s_{4}\right) & \mathrm{\hat{x}}_{4} \\ \psi _{1}^{c}\left( \mathrm{\hat{x}}_{4},s_{5}\right) & x_{5} \\ T_{1}\left( x_{5}\right) & x_{6} \\ \phi _{\alpha }\left( x_{6},\tau \right) & P_2^{c}\left( x_{7}\right) \end{array}\right) . \label{eq:Fc-equal} \end{equation} Note that in Equation \eqref{eq:Fc-equal} the $s_{2},s_{5}$ are some fixed parameters, and $s_{4}$ is one of the coordinates of $\mathbf{x}$. We claim that if $F_{c}\left( \mathbf{x},\tau ,0\right) =0$ and $\pi _{\hat{y}_{4}}\mathbf{x}=0$, then the orbit of $x_2$ passes through the collision with $m_{1}$. This is because $\mathrm{\hat{x}}_{4}=\left( 0,\hat{p}_{4},\hat{y}_{4},\hat{q}_{4}\right) $, so that $F_{c}=0$ ensures that the first ($\hat{x}$) coordinate of the point $\psi _{1}^{c}\left( x_{3},s_{4}\right)$ is zero. So, if $F_{c}(\mathbf{x},\tau ,0)=0$ and $\pi _{\hat{y}_{4}}\mathbf{x}=0$, then $\pi_{\hat x_4, \hat y_4}\psi _{1}^{c}\left( x_{3},s_{4}\right)=0$ and we arrive at the collision. Moreover, by the $R$-symmetry of the system in this case we also establish heteroclinic connections between collisions with $m_{1}$ and $m_{2}$ (see Figure \ref{fig:eq}). If on the other hand $F_{c}=0$ and $\pi _{\hat{y}_{4}}\mathbf{x}\neq 0,$ then we have a periodic orbit passing near the collisions with $m_{1}$ and $m_{2}$.
One can prove a result analogous to Theorem \ref{th:Lyap-through-collision}, with the minor difference that instead of using $x_{0}$ in Equations \eqref{eq:Bolzano-condition-Lyap} and \eqref{eq:der-cond-Lyap} we take $\hat{y}_{4}$. We omit the details in order not to repeat the same argument. \section{Computer assisted proofs for collision/near collision orbits} \label{sec:CAP} \subsection{Newton-Krawczyk method} For a smooth mapping $F : \mathbb{R}^n \to \mathbb{R}^n$, the following theorem provides sufficient conditions for the existence of a solution of $F(x)=0$ in a neighborhood of a \textquotedblleft good enough\textquotedblright\ approximate solution. The hypotheses of the theorem require measuring the defect associated with the approximate solution, as well as the quality of a certain condition number for an approximate inverse of the derivative. Theorems of this kind are used widely in computer assisted proofs, and we refer the interested reader to the works of \cite{MR0231516,MR1100928,MR1057685,MR2807595,MR2652784, MR3971222,MR3822720,jpjbReview} for a more complete overview. Let $\left\Vert \cdot \right\Vert $ be a norm in $\mathbb{R}^{n}$ and let $\overline{B}(x_{0},r)\subset \mathbb{R}^{n}$ denote the closed ball of radius $r \geq 0$ centered at $x_0$ in that norm. \begin{theorem}[Newton-Krawczyk] \label{thm:NK} \label{thm:aPosteriori} Let $U\subset \mathbb{R}^{n}$ be an open set and $F\colon U\rightarrow \mathbb{R}^{n}$ be at least of class $C^2$. Suppose that $x_{0}\in U$ and let $A$ be an $n\times n$ matrix. Suppose that $Y,Z,r>0$ are constants such that $\overline{B}(x_{0},r)\subset U$ and \begin{eqnarray} \Vert AF(x_{0})\Vert &\leq &Y, \label{eq:Krawczyk-Y} \\ \sup_{x\in \overline{B}(x_{0},r)}\Vert \mathrm{Id}-ADF(x)\Vert &\leq &Z.
\label{eq:Krawczyk-Z} \end{eqnarray}If \begin{equation} Zr-r+Y\leq 0, \label{eq:Krawczyk-ineq} \end{equation}then there is a unique $\hat{x}\in \overline{B}(x_{0},r)$ for which $F(\hat{x})=0.$ Moreover, $DF(\hat{x})$ is invertible. \end{theorem} \begin{proof} The proof is included in \ref{sec:proof} for the sake of completeness. \end{proof} \bigskip The theorem is well suited for applications to computer assisted proofs. To validate the assumptions it is enough to compute interval enclosures of the quantities $F(x_{0})$ and $DF(B)$, where $B$ is a suitable ball. These enclosures are done using interval arithmetic, and the results are returned as sets (cubes in $\mathbb{R}^{n}$ and $\mathbb{R}^{n\times n}$) enclosing the correct values. A good choice for the matrix $A$ is any floating point approximate inverse of the derivative of $F$ at $x_{0}$, computed with standard linear algebra packages. The advantage of working with such an approximation is that there is no need to compute a rigorous interval enclosure of a solution of a linear equation (as in the interval Newton method). In higher dimensional problems, solving linear equations can lead to large overestimation (the so called ``wrapping effect''). In our work the evaluation of $F$ and its derivative involves integrating ODEs and variational equations. There are well known general purpose algorithms for solving these problems, and we refer the interested reader to \cite{c1Lohner,cnLohner,MR2807595}. For parameterizing the invariant manifolds attached to $L_4$ with interval enclosures, we exploit the techniques discussed in \cite{myNotes} (validated integration is also discussed in this reference). We remark that our implementations use the IntLab toolbox running under MatLab\footnote{https://www.tuhh.de/ti3/rump/intlab/} and/or the CAPD\footnote{Computer Assisted Proofs in Dynamics, http://capd.ii.uj.edu.pl} C\texttt{++} library, and recall that the source codes are found at the homepage of MC.
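To make the bookkeeping in Theorem \ref{thm:NK} concrete, the following sketch carries out the one dimensional computation for $F(x)=x^{2}-2$ near $x_{0}\approx\sqrt{2}$. We stress that a genuine computer assisted proof evaluates these bounds with outward (directed) rounding, as IntLab and CAPD do; the naive floating point intervals below only sketch the shape of the computation.

```python
class Interval:
    """Toy interval arithmetic; real proofs require outward rounding."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Interval(min(p), max(p))
    def mag(self):
        return max(abs(self.lo), abs(self.hi))

x0 = 1.41421356                 # approximate zero of F(x) = x^2 - 2
A = 1.0 / (2.0 * x0)            # approximate inverse of DF(x0) = 2 x0
r = 1e-6                        # candidate validation radius

Y = abs(A * (x0 * x0 - 2.0))    # bound for ||A F(x0)||
ball = Interval(x0 - r, x0 + r) # the ball B(x0, r)
Z = (Interval(1.0) - Interval(A) * (Interval(2.0) * ball)).mag()  # sup ||Id - A DF||
verified = Z * r - r + Y <= 0.0 # the inequality Zr - r + Y <= 0
```

With outward rounding, $Y$ and $Z$ become rigorous bounds and the same inequality yields existence and uniqueness of a zero of $F$ in $\overline{B}(x_{0},r)$.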
See \cite{Ru99a} and \cite{CAPD_paper} as references for the usage and the functionality of the libraries. \subsection{Computer assisted existence proofs for ejection-collision orbits} \label{sec:EC} The methodology of Section \ref{sec:ejectionToCollision}, and especially Lemma \ref{lem:collision-connections}, is combined with Theorem \ref{thm:NK} to obtain the following. \begin{maintheorem}\label{thm:ejectionCollision} \label{thm:CAP-ejCol} Consider the planar PCRTBP with $\mu = 1/4$ and $c = 3.2$. Let \[ \overline{p} = \left( \begin{array}{c} -0.564897282072410 \\ \phantom{-}0.978399619177283 \\ -0.099609551141525 \\ -0.751696444982537 \end{array} \right), \] \[ r = 2.7 \times 10^{-13}, \] and \[ B_r = \left\{ x \in \mathbb{R}^4 \, : \|x - \overline{p}\| \leq r\right\}, \] where the norm is the maximum norm on components. Then, there exists a unique $p_* \in B_r$ such that the orbit of $p_*$ is ejected from $m_2$ (at $x = -1 + \mu, y= 0$), collides with $m_1$ (at $x = \mu, y= 0$), and the total time $T$ (in synodic/un-regularized coordinates) from ejection to collision satisfies \begin{equation*} 2.42710599795 \leq T \leq 2.42710599796. \end{equation*} In addition, the ejection manifold of $m_2$ intersects the collision manifold of $m_1$ transversely along the orbit of $p_*$, where transversality is relative to the level set $\setof*{E = 3.2}$. Moreover, there exists a transverse $S$-symmetric counterpart ejected from $m_1$ and colliding with $m_2$. \end{maintheorem} \begin{proof} The first step in the proof is to define an appropriate version of the map $F$ in Equation \eqref{eq:collisionOperator}, whose zeros correspond to ejection-collision orbits from $m_2$ to $m_1$. In particular, we set $k = 2$ and $l = 1$, and choose (somewhat arbitrarily) the parameter $s = 0.35$ in the definition of the component maps $R_{\tau, \alpha}^1$ and $R_{\tau, \alpha}^5$.
The parameter $s$ determines how long to integrate/flow in the regularized coordinates. Next we compute an approximate zero $\overline{x} \in \mathbb{R}^{24}$ of $F$ using Newton's method. Note that interval arithmetic is not required in this step. The resulting numerical data is recorded in Table \ref{table:th1}, and we note that $\overline{x}_3$ in the table corresponds to $\overline{p}$ in the hypothesis of the theorem. Note also that we take $\bar\alpha$ in the approximate solution to be zero. \begin{table}[tbp] {\scriptsize \begin{tabular}{cllll} \hline $\overline{x}_0 = $ & $\phantom{(-}2.945584780500716$ & & & \\ $\overline{x}_1 = $ & $(\phantom{-} 0.0,$ & $-1.387134030283961,$ & $\phantom{-}0.0,$ & $% \phantom{-}0.275425456390970)$ \\ $\overline{x}_2 = $ & $(-0.444581369966432,$ & $-1.038375926396089,$ & $\phantom{-}0.112026231721142,$ & $\phantom{-}0.449167625710802)$ \\ $\overline{x}_3 = $ & $(-0.564897282072410,$ & $\phantom{-}0.978399619177283,$ & $-0.099609551141525,$ & $-0.751696444982537)$ \\ $\overline{x}_4 = $ & $(-0.244097430449606,$ & $\phantom{-}0.878139982728136,$ & $-0.025435855606099,$ & $\phantom{-}0.543608549989376)$ \\ $\overline{x}_5 = $ & $( \phantom{-}0.018086991443589,$ & $-0.732714475912918,$ & $-0.703153304556756,$ & $\phantom{-}1.254598547822042)$ \\ $\overline{x}_6 = $ & $\phantom{(-}1.459760691418490$ & & & \\ $\overline{\tau} = $ & $\phantom{(-}2.051635871465197$ & & & \\ $\overline{\alpha} = $ & $\phantom{(-}0.0$ & & & \\ \hline \end{tabular} } \caption{ Numerical data used in the proof of Theorem \protect\ref{thm:CAP-ejCol}, giving the approximate solution of $F=0$ for the operator \eqref{eq:collisionOperator}, whose zeros correspond to the ejection-collision orbits from $m_2$ to $m_1$. We set the mass ratio to $\mu = 1/4$ and Jacobi constant to $c = 3.2$. The resulting orbit is illustrated in Figure \ref{fig:ejectionCollisions} (bottom curve). 
\label{table:th1}\label{tab:ejColTab1} } \end{table} We define $A$ to be the numerically computed approximate inverse of $DF(\overline{x})$, and let \begin{equation*} B = \overline{B}(\overline{x}, r_*), \end{equation*} denote the closed ball of radius \begin{equation*} r_* = 2\times 10^{-12}, \end{equation*} in the maximum norm about the numerical approximation. (The reader interested in the numerical entries of the matrix can run the accompanying computer program.) We note that the choice of $r_*$ is somewhat arbitrary. (It should be small enough that there is not too much ``wrapping'', but not so small that there is no $r \leq r_*$ satisfying the hypothesis of Theorem \ref{thm:NK}.) Using interval arithmetic and validated numerical integration we compute a length-$24$ interval vector $\mathbf{F}$ such that \begin{equation*} F(\overline{x}) \in \mathbf{F}, \end{equation*} and a $24 \times 24$ interval matrix $\mathbf{M}$ such that \begin{equation*} DF(x) \in \mathbf{M} \quad \quad \mbox{for all } x \in B. \end{equation*} We then check, again using interval arithmetic, that \begin{equation*} \|A \mathbf{F} \| \in 10^{-12} \times [ 0.0, 0.26850976470521] \end{equation*} and that \begin{equation*} \|\mbox{Id} - A \mathbf{M} \| \in 10^{-7} \times [ 0.0, 0.23119622467860]. \end{equation*} From these we have \begin{equation*} \|A F(\overline{x}) \| \leq Y < 0.269 \times 10^{-12} \end{equation*} and \begin{equation*} \sup_{x \in B} \|\mbox{Id} - A DF(x)\|\leq Z < 0.232 \times 10^{-7}, \end{equation*} though the actual bounds stored in the computer are tighter than those just reported (hence the inequality). We let \begin{equation*} r = \sup\left( \frac{Y}{1- Z}\right) \leq 2.7 \times 10^{-13}, \end{equation*} and note again that the actual bound stored in the computer is smaller than reported here. We then check, using interval arithmetic, that \begin{equation*} Z r - r + Y \leq - 5.048 \times 10^{-29} < 0.
\end{equation*} We also note that, since $r \leq r_*$, we have that $\overline{B}(\overline{x}, r) \subset B$, so that \begin{equation*} \sup_{x \in \overline{B}(\overline{x}, r)} \| \mbox{Id} - A DF(x)\| \leq Z, \end{equation*} on the smaller ball as well. From this we conclude, via Theorem \ref{thm:NK}, that there exists a unique $x_* \in \overline{B}(\overline{x}, r) \subset \mathbb{R}^{24}$ so that $F(x_*) =0$, and moreover that $DF(x_*)$ is invertible. Hence, it now follows from Lemma \ref{lem:collision-connections} that there exists a transverse ejection-collision orbit from $m_2$ to $m_1$ in the PCRTBP. Note that the integration time in the standard coordinates \begin{equation*} \bar \tau = 2.051635871465197, \end{equation*} is one of the variables of $F$ (we are simply reading this off the table). The rescaled integration time in the regularized coordinates is fixed to be $s = 0.35$. Our programs compute validated bounds on the integrals in Equation \eqref{eq:time-between-collisions} and provide interval enclosures for the time each orbit spends in the regularized coordinate systems of $m_1$ and $m_2$ respectively. These interval enclosures are \begin{equation*} T_1 + T_2 \in [ 0.27116751585137, 0.27116751585615] + [ 0.10430261063473, 0.10430261063793]. \end{equation*} Since the true integration time $\tau_*$ is in an $r$-neighborhood of $\bar\tau$ it follows that \begin{equation*} \tau_* \in [ 2.05163587146492, 2.05163587146547]. \end{equation*} Interval addition of the three time intervals containing $T_1$, $T_2$ and $\tau_*$ provides the desired final bound on the total time of flight given in the theorem. The connection in the other direction follows from the $S$-symmetry of the system (see Equation \eqref{eq:symmetry-prop}). The computational part of the proof is implemented in IntLab running under MatLab, and took 21 minutes to run on a standard desktop computer.
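The final interval addition can be reproduced independently of IntLab. The following illustrative Python snippet (one-ulp outward rounding via \texttt{math.nextafter} standing in for rigorous rounding; the decimal endpoints printed above are themselves outward-rounded) adds the three validated time enclosures and confirms that the sum is contained in the total-time bound stated in the theorem.

```python
# Sanity check (not part of the IntLab proof itself): re-add the three
# validated time intervals with one-ulp outward rounding and compare
# against the total-flight-time bound stated in the theorem.
import math

def iadd(a, b):
    """Outward-rounded interval sum: [a0+b0, a1+b1] padded by one ulp."""
    return (math.nextafter(a[0] + b[0], -math.inf),
            math.nextafter(a[1] + b[1], math.inf))

T1  = (0.27116751585137, 0.27116751585615)   # time in m_1-regularized chart
T2  = (0.10430261063473, 0.10430261063793)   # time in m_2-regularized chart
tau = (2.05163587146492, 2.05163587146547)   # time in synodic coordinates

total = iadd(iadd(T1, T2), tau)
# The resulting enclosure sits inside the bound stated in the theorem:
assert 2.42710599795 <= total[0] and total[1] <= 2.42710599796
```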
\end{proof} \medskip The orbit whose existence is proven in Theorem \ref{thm:ejectionCollision} is illustrated in Figure \ref{fig:ejectionCollisions} (lower orbit of the two orbits illustrated in the figure). The higher orbit follows from the $S$-symmetry of the PCRTBP. We remark that our implementation actually subdivides the time steps $s = 0.35$ in regularized coordinates 50 times, while the time step $\bar \tau$ is subdivided 200 times. This only enlarges the size of the system of equations, as discussed in Remark \ref{rem:additionalShooting}. Validation of the $50 + 200 + 50 = 300$ steps of Taylor integration, along with the spatial and parametric variational equations, takes most of the computational time for the proof. The choice of the mass $\mu = 1/4$ and the energy $c = 3.2$ was more or less arbitrary, and the existence of many similar orbits could be proven using the same method. \subsection{Connections between ejections/collisions and the libration points $L_4$, $L_5$} \label{sec:EC_to_L4_proofs} We apply the methodology of Section \ref{sec:L4_to_collision}, and especially Lemma \ref{lem:Li-collisions}, in conjunction with Theorem \ref{thm:NK} to obtain the following result. The local stable (or unstable) manifolds at $L_4$ are computed using the methods and implementation of \cite{MR3906230}. See \ref{sec:manifolds} for a few additional remarks concerning the parameterizations. \begin{maintheorem} \label{thm:CAP-L4-to-collision} Consider the planar PCRTBP with $\mu = 1/2$ and $c = 3$, the energy of $L_4$. Let \[ \overline{p} = \left( \begin{array}{c} \phantom{-}0.003213450375413 \\ \phantom{-}0.197716496638868 \\ -0.404375730348827 \\ \phantom{-}0.696149210661807 \\ \end{array} \right), \] \[ r = 8.2 \times 10^{-12}, \] and \[ B_r = \left\{ x \in \mathbb{R}^4 \, \colon \, \| x - \bar{p}\| \leq r \right\}.
\] Then there exists a unique point \[ p_* \in B_r \] such that the orbit of $p_*$ accumulates to $L_4$ as $t \to - \infty$, collides with $m_1$ (located at $x = \mu, y= 0$) in finite forward time, and the unstable manifold of $L_4$ intersects the collision set of $m_1$ transversely along the orbit of $p_*$, where transversality is relative to the level set $\setof*{E = 3}$. \end{maintheorem} \begin{table}[t!] {\scriptsize \begin{tabular}{cllll} \hline $\overline{x}_0 = $ & $\phantom{(-}0.329444389425640$ & & & \\ $\overline{x}_1 = $ & $( -0.032305434322402,$ & $-0.044152238388004,$ & $\phantom{-}0.843244687835647,$ & $0.005057045291404)$ \\ $\overline{x}_2 = $ & $( \phantom{-}0.003213450375413,$ & $\phantom{-}0.197716496638868,$ & $-0.404375730348827,$ & $0.696149210661807)$ \\ $\overline{x}_3 = $ & $ ( \phantom{-}0.268116630482827,$ & $-0.943915863314079,$ & $-0.754104155383092,$ & $0.671496024758153)$ \\ $\overline{x}_4 = $ & $\phantom{(-}1.696671399505923$ & & & \\ $\overline{\tau} = $ & $\phantom{(-}7.034349085576677$ & & & \\ $\overline{\alpha} = $ & $\phantom{(-}0.0$ & & & \\ \hline \end{tabular}} \caption{ Numerical data providing an approximate zero of the map $F^u_{i,j}$ defined in Equation \eqref{eq:EC_to_L4_operator}, for $i = 1$, $j= 4$, $c=3$, $\mu=1/2$ and $s = 0.5$. The data is used in the proof of Theorem \protect\ref{thm:CAP-L4-to-collision}, and results in the existence of the $L_4$ to collision orbit illustrated in the right frame of Figure \ref{fig:EC_to_collision}. \label{tab:L4-to-collisionTab1} } \end{table} \begin{proof} The proof is similar to the proof of Theorem \ref{thm:ejectionCollision}, and we only sketch the argument. Orbits accumulating to $L_4$ in backward time and colliding with $m_1$ are equivalent to zeros of the mapping $F^u_{i,j}$ defined in Equation \eqref{eq:EC_to_L4_operator} with $j = 4$ and $i = 1$. We also set the parameter $s = 0.5$, which is the integration time in the regularized coordinates.
The first step is to compute a numerical zero $\bar{x} = (\bar x_0,\bar x_1,\bar x_2,\bar x_3,\bar x_4,\bar \tau,\bar \alpha) \in \mathbb{R}^{16}$ of $F^u_{i,j}$. This step exploits Newton's method (no interval arithmetic necessary), and the resulting data is reported in Table \ref{tab:L4-to-collisionTab1}. Note that $\bar x_2 \in \mathbb{R}^4$ from the table is the initial condition $\bar{p}$ in the statement of the theorem. We take $A$ to be a numerically computed approximate inverse of the $16 \times 16$ matrix $DF_{i,j}^u(\bar{x})$. Again, the definition of $A$ does not require interval arithmetic. For the next step we compute interval enclosures of $F_{i,j}^u(\bar{x})$ and of $DF_{i,j}^u(x)$ for $x$ in a cube of radius $r_* = 5 \times 10^{-9}$ and obtain that \[ \|A F_{i,j}^u(\bar{x})\| \in 10^{-11} \times [ 0.0, 0.82147145471154], \] and that \[ \sup_{x \in B_{r_*}(\bar{x})} \| \mbox{Id} - A D F_{i,j}^u(x) \| \in [ 0.0, 0.00151459031904]. \] Using interval arithmetic we compute \[ r = \frac{Y}{1-Z} \leq 8.3 \times 10^{-12}, \] where the actual value stored in the computer is smaller than reported here (and hence the inequality). We then check, using interval arithmetic, that $Z r - r + Y < 0$. Since $r < r_*$, we have that there exists a unique $x_* \in B_r(\bar{x})$ so that $F_{i,j}^u(x_*) = 0$. Moreover, transversality follows from the non-degeneracy of the derivative of $F_{i,j}^u$. The proof is implemented in IntLab running under MatLab, and took about 30 minutes to run on a standard desktop computer. \end{proof} By replacing the operator $F_{i,j}^u$ with the operator $F^s_{i,j}$ defined in Equation \eqref{eq:EC_to_L4_operator}, again with $j = 4$ and $i = 1$, we obtain a nonlinear map whose zeros correspond to ejection-to-$L_4$ orbits. We compute an approximate numerical zero of the resulting operator (the numerical data is given in Table \ref{tab:m1-to-L4_Tab3}) and repeat a nearly identical argument to that above.
This results in the existence of a transverse ejection-to-$L_4$ orbit in the PCRTBP with $\mu = 1/2$ and $c=3$. The validated error bound for the numerical data has \[ r \leq 1.8 \times 10^{-11}, \] so that the desired orbit passes within an $r$-neighborhood of the point \[ \bar{p} = \left( \begin{array}{c} -0.112449038686947 \\ -0.553321424594493 \\ \phantom{-} 0.308527098616200 \\ \phantom{-}0.727049637558896 \\ \end{array} \right). \] In this way we prove the existence of both the orbits illustrated in Figure \ref{fig:EC_to_collision}. More precisely, the orbit whose existence is established in Theorem \ref{thm:CAP-L4-to-collision} is illustrated in the right frame of the figure, and the orbit discussed in the preceding remarks is illustrated in the left frame. \begin{table}[t!] {\scriptsize \begin{tabular}{cllll} \hline $\overline{x}_0 = $ & $\phantom{(-}1.561515178070094$ & & & \\ $\overline{x}_1 = $ & $( \phantom{-}0.0,$ & $ \phantom{-}0.018562030958889,$ & $0.0,$ & $ 1.999913860896684)$ \\ $\overline{x}_2 = $ & $( \phantom{-}0.191471460280817,$ & $\phantom{-}0.959639244531484,$ & $0.805673853857139,$ & $1.170011720749615)$ \\ $\overline{x}_3 = $ & $(-0.112449038686946,$ & $-0.553321424594493,$ & $0.308527098616200,$ & $0.727049637558895)$ \\ $\overline{x}_4 = $ & $\phantom{(-}5.229765599216696$ & & & \\ $\overline{\tau} = $ & $\phantom{(-}4.673109099822270$ & & & \\ $\overline{\alpha} = $ & $\phantom{(-}0.0$ & & & \\ \hline \end{tabular} } \caption{ Numerical data for an approximate zero of the map $F^s_{i,j}$ defined in Equation \eqref{eq:EC_to_L4_operator}, with $i = 1$, $j= 4$ and $s = 0.5$. An argument similar to the proof of Theorem \ref{thm:CAP-L4-to-collision}, using the data in the table, leads to an existence proof for the ejection-to-$L_4$ orbit illustrated in the left frame of Figure \ref{fig:EC_to_collision}.
\label{tab:m1-to-L4_Tab3}} \end{table} \subsection{Transverse homoclinics for $L_4$ and $L_5$} \label{sec:homoclinicProofs} Combining the methodology of Section \ref{sec:L4_to_collision}, and especially Lemma \ref{lem:Li-collisions}, with Theorem \ref{thm:NK} we obtain the following result. \begin{maintheorem} \label{thm:CAP-connections} Consider the planar PCRTBP with $\mu = 1/2$ and $c = 3$, the energy level of $L_4$. Let \[ \bar{p} = \left( \begin{array}{c} -0.037058535628028 \\ -0.007623220519232 \\ \phantom{-}0.873641524369283 \\ \phantom{-}0.033084516464648 \end{array} \right), \] and \[ B_r = \left\{ x \in \mathbb{R}^4 \, : \, \| x - \bar{p} \| \leq r \right\}, \] where \[ r = 1.6 \times 10^{-9}. \] Then there exists a unique $p_* \in B_r$ so that the orbit of $p_*$ is homoclinic to $L_4$ and $W^s(L_4)$ intersects $W^u(L_4)$ transversely along the orbit of $p_*$, where transversality is relative to the level set $\setof*{E = 3}$. \end{maintheorem} \begin{table}[t!] \label{tab:L4-homoclinic} {\scriptsize \begin{tabular}{cllll} \hline $\overline{x}_0 = $ & $ \phantom{(-}1.411845524482813$ & & & \\ $\overline{x}_1 = $ & $(-0.037058535628028,$ & $-0.007623220519232,$ & $\phantom{-}0.873641524369283,$ & $\phantom{-}0.033084516464648)$ \\ $\overline{x}_2 = $ & $( -0.243792823114517,$ & $-1.231115802740768,$ & $\phantom{-}0.191555403283542,$ & $-0.508371511645513)$ \\ $\overline{x}_3 = $ & $(\phantom{-} 0.536705934592082,$ & $-1.502936895854406,$ & $\phantom{-}0.178454709494811,$ & $-0.106295188690239)$ \\ $\overline{x}_4 = $ & $( -0.504618223339967,$ & $-0.258236025635830,$ & $-0.463683951257916,$ & $-1.155517796520023)$ \\ $\overline{x}_5 = $ & $( -0.460363255327369,$ &$-0.431694933697799,$ & $\phantom{-}0.467966743350051,$ & $\phantom{-}0.748266448178995)$ \\ $\overline{x}_6 = $ & $\phantom{(-} 5.988827136344083$ & & & \\ $\overline{\tau} = $ & $\phantom{(-}4.753189987600258$ & & & \\ $\overline{\alpha} = $ & $\phantom{(-}0.0$ & & & \\ \hline \end{tabular} }
\caption{ Numerical data for the proof of Theorem \protect\ref{thm:CAP-connections}, which provides an approximate zero of the $L_4$ homoclinic map $F_{i,j,k}$ defined in Equation \eqref{eq:homoclinicOperator}, when $i = k = 4$, $j = 2$, $s_1 = 1.8635$, and $s_2 = 5$. The orbit is depicted on the right plot in Figure \ref{fig:PCRTBP_L4_homoclinics}. } \end{table} \begin{proof} As in the earlier cases, the argument hinges on proving the existence of a zero of a suitable nonlinear mapping, in this case the map $F_{i,j,k}$ defined in Equation \eqref{eq:homoclinicOperator}, with $i = k = 4$ and $j = 2$. The integration time parameters are set as $s_1 = 1.8635$ and $s_2 = 5$; these are respectively the flow times in the regularized coordinates and in the original coordinates. With these choices, a zero of $F_{4,2,4}$ corresponds to an orbit homoclinic to $L_4$ which passes through the Levi-Civita coordinates regularized at $m_2$. The numerical data $\bar x \in \mathbb{R}^{24}$ providing an approximate zero of $F_{4,2,4}$ is reported in Table \ref{tab:L4-homoclinic}. Note that $\bar x_1$ corresponds to $\bar{p}$ in the hypothesis of the theorem. We let $A$ be a numerically computed approximate inverse of the matrix $DF_{4,2,4}(\bar{x})$. The table data and the matrix $A$ are computed using a numerical Newton scheme, and standard double precision floating point operations. Using validated numerical integration schemes, validated bounds on the local stable/unstable manifold parameterizations, and interval arithmetic, we compute interval enclosures of $F_{4,2,4}(\bar{x})$ and of $D F_{4,2,4}(B_r(\bar{x}))$, where $r = 1.659487745915747 \times 10^{-9}$. We then check that \[ \| A F_{4,2,4}(\bar{x}) \| \in 10^{-8} \times [ 0.0, 0.16432156145308], \] and that \[ \sup_{x \in B_r(\bar{x})} \| \mbox{Id} - A D F_{4,2,4}(x) \| \in [ 0.0, 0.00980551463848].
\] Finally, we use interval arithmetic to verify that $Z r - r + Y < 0$, and transversality follows as in the earlier cases, which completes the proof. \end{proof} \bigskip Note that, from a numerical perspective, this is the most difficult computer assisted argument presented so far. This is seen in the fact that $Z \approx 10^{-2}$ and $r \approx 10^{-9}$. That is, these constants are roughly three orders of magnitude less accurate than in the previous theorems. On the other hand, the orbit itself is more complicated than those in the previous theorems. We note that the accuracy of the result could be improved by taking smaller integration steps and/or using higher order Taylor approximation. However, this would also increase the required computational time. Now, by symmetry, the result above gives a transverse homoclinic orbit for $L_5$ which passes near $m_1$. We also observe that each of these transverse homoclinic orbits satisfies the hypotheses of the theorems of Devaney and Henrard discussed in Section \ref{sec:intro}. In particular, Theorem \ref{thm:CAP-connections} also proves the existence of a chaotic subsystem in the $c = 3$ energy level of the PCRTBP near the orbit of $p_*$, and a tube of periodic orbits parameterized by the Jacobi constant which accumulate to the homoclinic orbit through $p_*$. We remark that, using similar arguments, we are also able to prove the existence and transversality of the homoclinic orbits in the left and center frames of Figure \ref{fig:PCRTBP_L4_homoclinics}. More precisely, let \[ \bar{p}_1 = \left( \begin{array}{c} -0.033854025583296 \\ -0.043110876471418 \\ \phantom{-}0.844639632487862 \\ \phantom{-}0.007320747846173 \end{array} \right), \quad \quad \quad \bar{p}_2 = \left( \begin{array}{c} \phantom{-}0.029871559148065 \\ -0.006337684774610 \\ \phantom{-} 0.850175365286339 \\ -0.034734413580682 \end{array} \right), \] and \[ r_1 = 2.03 \times 10^{-10}, \quad \quad \quad r_2 = 1.84 \times 10^{-8}.
\] Then there exist unique points $p^1_* \in B(\bar{p}_1, r_1)$ and $p_*^2 \in B(\bar{p}_2, r_2)$ so that $W^{s,u}(L_4)$ intersect transversely along the orbits through these points. It is also interesting to note that $r_2$ is two orders of magnitude larger than $r_1$. This is caused by the fact that the time of flight (integration time) is longer in this case and, more importantly, the fact that the second orbit passes very close to $m_1$. Indeed, the error bounds for the second orbit would very likely be improved by changing to regularized coordinates near $m_1$, and this may even be necessary to validate some homoclinics passing even closer to $m_1$ or $m_2$. Nevertheless, since we were able to validate these orbits in standard coordinates, we have not done this here. The orbit of $p_*^1$, illustrated in the left frame of Figure \ref{fig:PCRTBP_L4_homoclinics}, appears to have $y$-axis symmetry; however, we do not use this symmetry nor do we rigorously prove its existence. The orbit of $p_*^2$, illustrated in the center frame of Figure \ref{fig:PCRTBP_L4_homoclinics}, has no apparent symmetry. The orbits illustrated in the left and center frames have appeared previously in the literature, as remarked in Section \ref{sec:intro}. However, to the best of our knowledge this is the first mathematically rigorous proof of their existence. \subsection{Periodic orbits passing through collision} \label{sec:PO_collisions} We apply the methodology of Section \ref{sec:symmetric-orbits}, namely Lemma \ref{lem:Lyap-existence} and Theorem \ref{th:Lyap-through-collision}, together with Theorem \ref{thm:NK}, to obtain the following result. We consider the Earth-Moon mass ratio largely for the sake of variety. \begin{maintheorem} \label{th:CAP-Lyap}Consider the Earth-Moon system\footnote{So named because this is the approximate mass ratio of the Moon relative to the Earth.} where $m_2$ has mass $\mu =0.0123/1.0123$ and $m_{1}$ has mass $1-\mu$.
Let\footnote{In fact, our numerical calculations suggest that a more accurate value of the Jacobi constant for which we have the collision is $1.434045949300768$. However, since in the theorem we obtain only interval results, we round $c_{0}$ so that digits smaller than the width of the interval are not used.}\begin{equation*} c_{0}=1.4340459493,\qquad \text{and\qquad }\delta =10^{-11}. \end{equation*}There exists a single value $c^{\ast }\in \left( c_{0}-\delta ,c_{0}+\delta \right) $ of the Jacobi integral, for which we have an orbit along the intersection of the ejection and collision manifolds of $m_{1}$. Moreover, for every $c\in \left[ c_{0}-\delta ,c_{0}+\delta \right] \setminus \left\{ c^{\ast }\right\} $ we have an $S$-symmetric Lyapunov orbit that passes close to the collision with $m_{1}$. In addition, for every $c\in \left\{ 1.2,1.25,1.3,\ldots ,1.65\right\} $ there exists a Lyapunov orbit, which passes close to the collision with $m_{1}$. (These orbits are depicted in Figure \ref{fig:Lyap}.)
\end{maintheorem} \begin{table} {\scriptsize \begin{tabular}{r l l l l} \hline $\bar x_0 = $ & \phantom{-(}0.0 & & & \\ $\bar x_1 = $ & (\phantom{-}0.0, & \phantom{-}2.8111911379251, & \phantom{-}0.0, &\phantom{-}0.0) \\ $\bar x_2 = $ & (\phantom{-}0.96886794638213, & -0.3219837525934, & -0.52587590839627, & -2.8644348266831) \\ $\bar x_3 = $ & (\phantom{-}0.67431017475157, & -0.74811608844773, & -1.0190086228395, & -1.0721803622694) \\ $\bar x_4 = $ & (-1.0199016713004, & \phantom{-}0.72482377063238, & -0.062207790440189, & \phantom{-}1.1639536137604) \\ $\bar x_5 = $ & (\phantom{-}0.1377088390491,& -0.32616835939217, & -0.22586709346235, & \phantom{-}0.6480010784062) \\ $\bar x_6 = $ & \phantom{(-}0.070375791076957 \\ $\bar \tau = $ & \phantom{(-}2.0972398526268 \\ $\bar \alpha = $ & \phantom{(-}0.0 \\ \hline \end{tabular} } \caption{Numerical data for the proof of Theorem \protect\ref{th:CAP-Lyap}, which gives an approximate solution to $F_c=0$ for the operator (\ref{eq:Fc-choice}), for which we have a collision of the family of Lyapunov orbits with $m_1$ for the Earth-Moon system (see Figure \ref{fig:Lyap}). This occurs for a unique value of the Jacobi constant $c^* \in \mathbf{c}$. \label{tabl:Lyap}} \end{table} \begin{proof} The orbits for the Jacobi integral values in $\mathbf{c}:=\left[ c_{0}-\delta ,c_{0}+\delta \right] $ were established by means of Theorems % \ref{th:Lyap-through-collision} and \ref{thm:aPosteriori}. We have first pre-computed numerically (through a standard, non-interval, numerical computation) an approximation $\mathbf{\bar{x}}\in \mathbb{R}^{22}$, $\bar{% \tau}\in \mathbb{R}$ for the functions $\mathbf{x}\left( c\right) $ and $\tau \left( c\right) $, for $c\in \mathbf{c}$. (The $\mathbf{\bar{x}}$ and $\bar \tau $ are written out in Table \ref{tabl:Lyap}.) 
We then took $\bar x:=\left( \mathbf{\bar x},\bar \tau ,0\right) \in \mathbb{R}^{24},$ and a ball $\overline{B}\left( \bar x,r\right) $, in the maximum norm, with $r=10^{-11}.$ We established using Theorem \ref{thm:aPosteriori} that $\mathbf{x}\left( c\right) $ and $\tau \left( c\right) $ satisfying \begin{equation*} F_{c}\left( \mathbf{x}\left( c\right) ,\tau \left( c\right) ,0\right) =0,\qquad \text{for }c\in \mathbf{c}, \end{equation*}are $r$-close to $\mathbf{\bar{x}}$ and $\bar{\tau}.$ To apply Theorem \ref{thm:aPosteriori} we took the matrix $A$ to be an approximation of $\left( DF_{c}(\mathbf{\bar{x}},\bar{\tau},0)\right) ^{-1}$ (computed with standard numerics, without interval arithmetic). We also checked using interval arithmetic that \begin{eqnarray*} x_{0}\left( c_0-\delta \right) &\in &[3.2261\cdot 10^{-12},5.2262\cdot 10^{-12}]>0, \\ x_{0}\left( c_0+\delta \right) &\in &[-4.6229\cdot 10^{-12},-2.6228\cdot 10^{-12}]<0. \end{eqnarray*} By using Equation \eqref{eq:implicit-dx-dc}, we have established the following interval arithmetic bound for the derivative of $x_0$ with respect to the parameter \begin{equation*} \frac{d}{dc}x_{0}\left( c\right) \in \left[ -0.53146,-0.25344\right] <0\qquad \text{for }c\in \mathbf{c}. \end{equation*}We also verified that \begin{equation*} x_{6}\left( c\right) \in \left[ 0.07037579,0.07037580\right] ,\qquad \text{for }c\in \mathbf{c}, \end{equation*}so $x_{6}\left( c\right) \neq 0$. This proves that all necessary hypotheses of Theorem \ref{th:Lyap-through-collision} are satisfied for the interval $\mathbf{c}$, which finishes the first part of the proof. The Lyapunov orbits for $c\in \left\{ 1.2,1.25,1.3,\ldots ,1.65\right\} $ were established in a similar way.
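The logic of this step can be checked mechanically. The following illustrative Python sketch (plain tuples standing in for rigorous intervals; the endpoint and derivative enclosures are exactly those reported above) verifies that the sign change at the endpoints, together with the derivative bound bounded away from zero, forces exactly one zero of $c \mapsto x_0(c)$ in $(c_0 - \delta, c_0 + \delta)$, and that the enclosures are mutually consistent with the mean value theorem.

```python
# Consistency check (illustration only, not the CAPD computation) of the
# Bolzano-plus-monotonicity argument locating c*: a validated sign change
# at the endpoints, together with a derivative enclosure bounded away from
# zero, yields exactly one zero of c |-> x0(c) in (c0 - delta, c0 + delta).
delta    = 1e-11
x0_left  = (3.2261e-12, 5.2262e-12)      # enclosure of x0(c0 - delta)
x0_right = (-4.6229e-12, -2.6228e-12)    # enclosure of x0(c0 + delta)
dx0_dc   = (-0.53146, -0.25344)          # enclosure of (d/dc) x0 on c in c

exists = x0_left[0] > 0 and x0_right[1] < 0   # Bolzano: some zero exists
unique = dx0_dc[1] < 0                        # strictly decreasing: unique
assert exists and unique

# Mean-value cross-check: x0(c0-delta) - x0(c0+delta) = -(d/dc)x0 * 2*delta
# for some intermediate c, so the two interval enclosures must overlap.
diff = (x0_left[0] - x0_right[1], x0_left[1] - x0_right[0])
mvt  = (-dx0_dc[1] * 2 * delta, -dx0_dc[0] * 2 * delta)
assert max(diff[0], mvt[0]) <= min(diff[1], mvt[1])
```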
For each value of the Jacobi constant we have non-rigorously computed an approximation of a point for which $F_{c}$ is close to zero, and validated that we have $F_{c}=0$ for a point in a given neighborhood of each approximation by means of Theorem \ref{thm:aPosteriori}. Then each Lyapunov orbit followed from Lemma \ref{lem:Lyap-existence}. The proof was conducted by using the CAPD library \cite{CAPD_paper} and took under 4 seconds on a standard laptop. \end{proof} In a similar way we have used the operator in Equation \eqref{eq:Fc-equal} to prove the following result. \begin{table} {\scriptsize \begin{tabular}{r l l l l} \hline $\bar x_0 = $ & \phantom{(-}2.1500812504263 & & & \\ $\bar x_1 = $ & (\phantom{-}0.0, &\phantom{-}1.9284591731628, & \phantom{-}2.1500812504263, &\phantom{-}0.0) \\ $\bar y_1 = $ & (\phantom{-}0.69048473611567, &\phantom{-}1.7931365837031, &\phantom{-}2.0235432631366, &-0.68131264815823) \\ $\bar y_2 = $ & (\phantom{-}1.2840491252838, &\phantom{-}1.4060903194974, &\phantom{-}1.6633024005717, &-1.2578372410208) \\ $\bar y_3 = $ &(\phantom{-}1.6975511373876, & \phantom{-}0.82331762641153, &\phantom{-}1.1255430505039, &-1.635312833307) \\ $\bar y_4 = $ &(\phantom{-}1.8749336204161, &\phantom{-}0.13626785074409, & \phantom{-}0.4974554541058, &-1.7408028751654) \\ $\bar y_5 = $ & (\phantom{-}1.7998279644685, &-0.53073278614628,& -0.11297480280335, & -1.5366473737295) \\ $\bar y_6 = $ &(\phantom{-}1.5061749347656, & -1.0305902992759,& -0.59342931060715, & -1.0405479042095) \\ $\bar y_7 = $ &(\phantom{-}1.0818972907729,& -1.2225719420862, &-0.85102013466618, &-0.34180581034401) \\ $\bar y_8 = $ & (\phantom{-}0.65897461363208,& -1.0129455565064, &-0.83705911740279, &\phantom{-}0.41484122714387) \\ $\bar x_2 = $ & (\phantom{-}0.39363679634804, &-0.35214129843918,& -0.55459777216455, &\phantom{-}1.118144276789) \\ $\bar x_3 = $ & (\phantom{-}0.47871801188109, &-1.6325298121847, &-0.5792530867862, &\phantom{-}0.66259374214967) \\ $\bar x_4
= $ & (\phantom{-}{\bf 0.40239981358785},& -1.0164469492932, &\phantom{-}0.0, &\phantom{-}1.7224504635177) \\ $\bar x_5 = $ & (-0.25865224139372,& -0.43561054122851,& \phantom{-}0.51876042853484, &\phantom{-}1.7861707478994) \\ $\bar x_6 = $ &(\phantom{-}0.29778859976434,& -1.2111468567795, &-0.2683570951738, &-1.0237309759288) \\ $\bar x_7 = $ & \phantom{(}-0.38367247647373 & & & \\ $\bar \tau = $ & \phantom{(-}0.24444305938687 & & & \\ $\bar \alpha = $ & \phantom{(-}0.0 & & & \\ \hline \end{tabular} } \caption{Numerical data for the proof of Theorem \protect\ref{thm:doubleCollision} giving an approximate solution to $F_c=0$, for the operator (\ref{eq:Fc-equal}), for $c=2.05991609689$, for which we have a double collision of a family of $R$-symmetric periodic orbits for the equal masses system; see Figure \ref{fig:eq}. In bold font we have singled out the first coefficient of $x_4$, which is the time $s_4$ and not the physical coordinate of the collision point, for which we have $\hat x=0$. (See Equations \eqref{eq:s4} and \eqref{eq:Fc-equal}.)\label{tabl:eq}} \end{table} \begin{maintheorem} \label{thm:doubleCollision} Consider the equal masses system where $\mu =\frac{1}{2}$. Let\footnote{We believe that a more accurate value of the Jacobi constant for which we have the double collision is $2.059916096889689$.}\begin{equation*} c_{0}=2.05991609689,\qquad \text{and\qquad }\delta =10^{-11}. \end{equation*}There exists a single value $c^{\ast }\in \left( c_{0}-\delta ,c_{0}+\delta \right) $ of the Jacobi integral, for which we have two intersections of the ejection and collision manifolds of $m_{1}$ and $m_{2}$ (a double collision). Moreover, for every $c\in \left[ c_{0}-\delta ,c_{0}+\delta \right] \setminus \left\{ c^{\ast }\right\} $ we have an $R$-symmetric periodic orbit that passes close to the collision with both $m_{1}$ and $m_{2}$.
In addition, for every $c\in \left\{ 2,2.05,2.1,2.15,2.2\right\} $ there exists an $R$-symmetric periodic orbit, which passes close to the collisions with $m_{1}$ and $m_{2}$. (See Figure \ref{fig:eq}.) \end{maintheorem} \begin{proof} The proof follows along the same lines as the proof of Theorem \ref{th:CAP-Lyap}. We do not write out the details of all the estimates since we feel that this brings little added value\footnote{The code for the proof is made available on the personal web page of Maciej Capi\'{n}ski.}. In the operator $F_{c}$ from Equation \eqref{eq:Fc-equal} we have taken $s_{2}=3.3$ and $s_{5}=0.3$. The fact that $s_{2}$ involves a long integration time caused a technical problem for us in obtaining an estimate for $\frac{d}{dc}\pi _{\hat{y}}\mathbf{x}\left( c\right) $. To get a good enough estimate to establish that $\frac{d}{dc}\pi _{\hat{y}}\mathbf{x}\left( c\right) >0$ we needed to include additional points $y_{1},\ldots ,y_{m}$ in the shooting scheme and extend $F_{c}$ to include \begin{equation*} \phi _{\alpha }\left( x_{1},s\right) -y_{1},\quad \phi _{\alpha }\left( y_{1},s\right) -y_{2},\quad \ldots \quad \phi _{\alpha }\left( y_{m-1},s\right) -y_{m},\quad \phi _{\alpha }\left( y_{m},s\right) -x_{2}, \end{equation*}where $s=s_{2}/\left( m+1\right) .$ We took $m=8$, and the point $X_{0}$ which serves as our approximation for $F_{c}=0$ is written out in Table \ref{tabl:eq}. The proof took under 10 seconds on a standard laptop. \end{proof} \begin{remark}[MatLab with IntLab versus CAPD] {\em We note that the computer programs implemented in C\texttt{++} using the CAPD library run much faster than the programs implemented in MatLab using IntLab to manage the interval arithmetic. This is not surprising, as compiled programs typically run several hundred times faster than MatLab programs, and the use of interval arithmetic only complicates things.
Moreover, CAPD is a well tested, optimized, general purpose package, while our IntLab codes were written specifically for this project; in particular, little time has been spent on optimizing these codes. The CAPD library, due to its efficient integrators, allowed us to perform almost all of the proofs without subdividing the time steps (the proof of Theorem \ref{thm:doubleCollision} being the exception; see Table \ref{tabl:eq}), while such subdivisions were needed for the MatLab code (see Remark \ref{rem:additionalShooting} and the comments at the end of Section \ref{sec:EC}). Nevertheless, it is nice to have rigorous integrators implemented in multiple languages, and the codes for validating the 2D stable/unstable manifolds at $L_4$ were written in IntLab and have not been ported to C\texttt{++}. } \end{remark} \section{Acknowledgments} The authors gratefully acknowledge conversations with Pau Martin, Immaculada Baldoma, and Marian Gidea at the $60^{\rm th}$ birthday conference of Rafael de la Llave, \textit{Llavefest} in Barcelona in the summer of 2017. We also offer our sincere thanks to an anonymous referee whose thorough review and thoughtful suggestions greatly improved the quality of the final manuscript. \appendix {\small \section{\label{sec:proof}} \begin{proof}[Proof of Theorem \protect\ref{thm:aPosteriori}] From Equation \eqref{eq:Krawczyk-ineq} and since $r>0$ we see that $Z+\frac{Y}{r}\leq 1,$ which since $Y,r>0$ gives \begin{equation} Z<1. \label{eq:Z-smaller-than-one} \end{equation} Now, define the Newton operator \begin{equation} T(x)=x-AF(x).
\label{eq:T-def-Krawczyk} \end{equation}For $x_{1},x_{2}\in \overline{B}(x_{0},r)$, by the mean value theorem and (\ref{eq:Krawczyk-Z}), we see that \begin{align*} \Vert T(x_{1})-T(x_{2})\Vert & \leq \sup_{x\in \overline{B}(x_{0},r)}\Vert DT(x)\Vert \Vert x_{1}-x_{2}\Vert \\ & =\sup_{x\in \overline{B}(x_{0},r)}\Vert \mathrm{Id}-ADF(x)\Vert \left\Vert x_{1}-x_{2}\right\Vert \\ & \leq Z\Vert x_{1}-x_{2}\Vert , \end{align*}and since $Z<1$ we conclude that $T$ is a contraction on $\overline{B}(x_{0},r)$. To see that $T$ maps $\overline{B}(x_{0},r)$ into itself, for $x\in \overline{B}(x_{0},r)$ by Equations \eqref{eq:Krawczyk-Y}--\eqref{eq:Krawczyk-ineq} we have \begin{align*} \Vert T(x)-x_{0}\Vert & \leq \Vert T(x)-T(x_{0})\Vert +\Vert T(x_{0})-x_{0}\Vert \\ & \leq \sup_{z\in \overline{B}(x_{0},r)}\Vert DT(z)\Vert \Vert x-x_{0}\Vert +\Vert AF(x_{0})\Vert \\ & \leq Zr+Y \\ & \leq r, \end{align*}hence $T(x)\in \overline{B}(x_{0},r)$. By the Banach contraction mapping theorem there is a unique $\hat{x}\in \overline{B}(x_{0},r)$ so that \begin{equation} T(\hat{x})=\hat{x}. \label{eq:x-hat-zero} \end{equation} Now observe that, by Equation \eqref{eq:Krawczyk-Z} applied at $\hat{x}\in \overline{B}(x_{0},r)$ together with \eqref{eq:Z-smaller-than-one}, we have \begin{equation*} \Vert \mathrm{Id}-ADF(\hat{x})\Vert \leq Z<1. \end{equation*}Then \begin{equation*} ADF(\hat{x})=\mathrm{Id}-\left( \mathrm{Id}-ADF(\hat{x})\right) =\mathrm{Id}-B \end{equation*}with $\Vert B\Vert <1$. By the Neumann series theorem we see that $ADF(\hat{x})$ is invertible. It therefore follows that both $A$ and $DF(\hat{x})$ are also invertible. From Equations \eqref{eq:T-def-Krawczyk} and \eqref{eq:x-hat-zero} we see that $AF(\hat{x})=0.$ But $A$ is invertible, so it follows that $F(\hat{x})=0$, as required.
\end{proof} \section{\label{sec:manifolds}} Here follows a terse description of the local stable/unstable manifold parameterizations used in the proofs in Sections \ref{sec:EC_to_L4_proofs} and \ref{sec:homoclinicProofs}. Much more complete information is found in \cite{MR3906230,MR2177465,mamotreto}. In the present discussion $f \colon U \to \mathbb{R}^d$ denotes the (real analytic) PCRTBP vector field, and $L_{j}$ is one of the equilateral triangle libration points -- so that $j = 4,5$. We are interested in parameter values where $Df(L_{4,5})$ has complex conjugate stable/unstable eigenvalues \[ \pm \alpha \pm i\beta, \] with $\alpha, \beta > 0$. We write $\lambda = -\alpha + i \beta$ when considering the stable manifold, and $\lambda = \alpha + i \beta$ when considering the unstable. Our goal is to develop a formal series expansion of the form \begin{equation} \label{eq:powerSeries} w_j^{\kappa}(z_1, z_2) = \sum_{m = 0}^\infty \sum_{n = 0}^\infty p_{mn} z_1^m z_2^n, \end{equation} where $j = 4$ or $5$ depending on whether we are based at $L_4$ or $L_5$, and $\kappa = s$ or $u$ depending on whether we are considering the stable or unstable manifold. Here $p_{mn} \in \mathbb{C}^4$ for all $(m,n) \in \mathbb{N}^2$. Moreover, we take \[ p_{00} = L_j, \] where $j = 4,5$, and \[ p_{10} = \xi, \quad \quad \mbox{and} \quad \quad p_{01} = \overline{\xi}, \] where $\xi, \overline{\xi} \in \mathbb{C}^4$ are complex conjugate eigenvectors associated with the complex conjugate eigenvalues $ \lambda, \bar\lambda \in \mathbb{C}$. We use the parameterization method to characterize $w_j^\kappa$.
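The first-order data above (the equilibrium, $\lambda$, and $\xi$) are easy to check numerically. The following Python sketch (not one of the validated C\texttt{++}/IntLab codes of this paper) evaluates the PCRTBP field at $L_4$ for the equal masses case $\mu = 1/2$ and extracts the saddle-focus eigenvalues $\pm\alpha \pm i\beta$ from a finite-difference Jacobian; the state ordering $(x, \dot x, y, \dot y)$ and the standard rotating-frame equations are assumptions made for this illustration.

```python
import numpy as np

mu = 0.5  # equal masses, as in the equal-mass proofs above

def f(u):
    # PCRTBP vector field in rotating coordinates; the state ordering
    # u = (x, vx, y, vy) is an assumption made for this illustration.
    x, vx, y, vy = u
    r1 = np.hypot(x + mu, y)          # distance to m1 at (-mu, 0)
    r2 = np.hypot(x - 1 + mu, y)      # distance to m2 at (1 - mu, 0)
    Ox = x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    Oy = y - (1 - mu) * y / r1**3 - mu * y / r2**3
    return np.array([vx, 2 * vy + Ox, vy, -2 * vx + Oy])

# L4 forms an equilateral triangle with the primaries
L4 = np.array([0.5 - mu, 0.0, np.sqrt(3) / 2, 0.0])
assert np.allclose(f(L4), 0.0)        # L4 is an equilibrium

# Jacobian Df(L4) by central finite differences
h = 1e-6
Df = np.column_stack([(f(L4 + h * e) - f(L4 - h * e)) / (2 * h)
                      for e in np.eye(4)])
eigs = np.linalg.eigvals(Df)
lam = max(eigs, key=lambda z: (z.real, z.imag))  # the value alpha + i beta
alpha, beta = lam.real, lam.imag
print(alpha, beta)  # both positive: eigenvalues come as +/-alpha +/- i beta
```

The computed eigenvalues satisfy the classical characteristic relation $\lambda^4 + \lambda^2 + \tfrac{27}{4}\mu(1-\mu) = 0$ for the triangular libration points, which serves as a sanity check on the finite-difference Jacobian.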
While we refer the interested reader to \cite{MR2177465,mamotreto} for a much more complete discussion of this method, we remark that the main idea is to solve the invariance equation \begin{equation}\label{eq:invEq} \lambda z_1 \frac{\partial}{\partial z_1} w_j^\kappa(z_1, z_2) + \overline{\lambda} z_2 \frac{\partial}{\partial z_2} w_j^\kappa (z_1,z_2) = f(w_j^\kappa(z_1,z_2)), \end{equation} subject to the constraints \[ w_j^\kappa (0,0) = L_j, \quad \quad \frac{\partial}{\partial z_1} w_j^\kappa (0,0) = \xi, \quad \quad \mbox{and} \quad \quad \frac{\partial}{\partial z_2} w_j^\kappa (0,0) = \overline{\xi}. \] It can be shown that if $w_j^{\kappa}$ solves Equation \eqref{eq:invEq} subject to these constraints, then it parameterizes a local stable/unstable manifold at $L_j$. To solve Equation \eqref{eq:invEq} numerically we insert the power series ansatz of Equation \eqref{eq:powerSeries}, expand the nonlinearities, and match like powers of $z_1$ and $z_2$. This procedure leads to homological equations of the form \[ \left(Df(p_{00}) - (m \lambda + n \overline{\lambda})\mbox{Id}\right) p_{mn} = \mathcal{R}_{mn}, \] describing the power series coefficients $p_{mn}$ for $m + n \geq 2$. Here $\mathcal{R}_{mn}$ is a nonlinear function of the coefficients of order less than $m + n$, whose computation in the case of the PCRTBP is discussed in more detail in \cite{MR3906230}. Note that if $f$ is real analytic, then the coefficients have the symmetry \[ p_{nm} = \overline{p_{mn}}, \] and we obtain the real image of the parameterization by evaluating on complex conjugate variables $z_2 = \overline{z_1}$. Since the order zero and order one coefficients are determined by $L_j$ and its eigendata, we can compute $p_{mn}$ for all $2 \leq m + n \leq N$ by recursively solving the linear homological equations to any desired order $N \geq 2$. We obtain the approximation \[ w_j^{\kappa, N}(z_1,z_2) = \sum_{m+n = 0}^N p_{mn} z_1^m z_2^n.
\] {\tiny \begin{table}[tbp] {\scriptsize \begin{tabular}{clll} \hline $m$ $\setminus$ $n$ & \qquad 0 &\qquad 1 &\qquad 2 \\ \hline $0$ & $\phantom{10^{-4}} \left( \begin{array}{l} \phantom{-}0.0 \\ \phantom{-}0.0 \\ \phantom{-}0.866 \\ \phantom{-}0.0 \end{array} \right) $ & $\phantom{10^{-4}} \left( \begin{array}{l} \phantom{-}0.012 + 0.018i \\ -0.025 \\ -0.015 + 0.0067i \\ \phantom{-}0.0034 - 0.019i \end{array} \right)$ & $ 10^{-3} \left( \begin{array}{l} -0.050 + 0.076i \\ -0.081 - 0.190i \\ -0.054 - 0.041i\\ \phantom{-}0.140 - 0.051i \end{array} \right) $ \\ $1$ & $\phantom{10^{-4}}\left( \begin{array}{l} \phantom{-}0.012 - 0.018i \\ -0.025 \\ -0.015 - 0.0067i \\ \phantom{-}0.0034 + 0.019i \end{array} \right)$ & $ 10^{-3} \left( \begin{array}{l} \phantom{-}0.37 \\ -0.47 \\ -0.09 \\ \phantom{-}0.12 \end{array} \right) $ & $ 10^{-4} \left( \begin{array}{l} \phantom{-}0.041 + 0.055 i \\ -0.130 - 0.070i \\ -0.036 - 0.048i \\ \phantom{-}0.110 + 0.060i \end{array} \right) $ \\ $2$ & $ 10^{-3} \left( \begin{array}{l} -0.050 - 0.076i \\ -0.081 + 0.190i \\ -0.054 + 0.041i\\ \phantom{-}0.140 + 0.051i \end{array} \right) $ & $ 10^{-4} \left( \begin{array}{l} \phantom{-}0.041 - 0.055 i \\ -0.130 + 0.070 i \\ -0.036 + 0.048i \\ \phantom{-}0.110 - 0.060i \end{array} \right) $ & 0 \\ \hline \end{tabular} \[ p_{03}= 10^{-5} \left( \begin{array}{l} -0.14 + 0.18i \\ -0.26 - 0.76i \\ \phantom{-}0.11 + 0.09i \\ -0.47 + 0.11i \end{array} \right) \qquad\qquad\qquad p_{30}=10^{-5} \left( \begin{array}{l} -0.14 - 0.18i \\ -0.26 + 0.76i \\ \phantom{-}0.11 - 0.09i \\ -0.47 - 0.11i \end{array} \right) \] } \caption{ Approximate power series coefficients $p_{mn}$ for the parameterization of the local stable manifold of $L_4$ for the equal masses case $\mu=1/2$.\label{tab:parm_coeff1} } \end{table} } For example, in the PCRTBP with $\mu = 1/2$, Table \ref{tab:parm_coeff1} shows approximate coefficients for the stable manifold at $L_4$, computed to order $N = 3$. 
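The recursive solution of the homological equations can be illustrated in a few lines for a toy planar vector field with a single pair of complex conjugate unstable eigenvalues; the field, its eigendata, and the truncation order below are invented for illustration and are not the PCRTBP, but the four-dimensional computation in \cite{MR3906230} follows the same pattern. The sketch writes each homological equation as $((m\lambda + n\overline{\lambda})\mathrm{Id} - Df(p_{00}))p_{mn} = \mathcal{R}_{mn}$, with the sign convention that $\mathcal{R}_{mn}$ is the order-$(m,n)$ coefficient of the nonlinearity.

```python
import numpy as np

# Toy planar vector field u' = A u + N(u), N(u) = (u1^2, 0), with an
# unstable focus at the origin; invented for illustration only.
alpha, beta = 0.5, 1.0
A = np.array([[alpha, -beta], [beta, alpha]])
lam = alpha + 1j * beta                      # unstable eigenvalue
xi = np.array([1.0, -1.0j])                  # eigenvector: A @ xi = lam * xi

N = 8                                        # truncation order
p = {(0, 0): np.zeros(2, complex), (1, 0): xi, (0, 1): np.conj(xi)}

for order in range(2, N + 1):
    for m in range(order + 1):
        n = order - m
        # R_mn: the z1^m z2^n coefficient of N(w), from lower orders only
        R1 = sum(p[(i, j)][0] * p[(m - i, n - j)][0]
                 for i in range(m + 1) for j in range(n + 1)
                 if 0 < i + j < order)
        # homological equation ((m lam + n conj(lam)) Id - A) p_mn = R_mn
        M = (m * lam + n * np.conj(lam)) * np.eye(2) - A
        p[(m, n)] = np.linalg.solve(M, np.array([R1, 0.0]))

# check the invariance equation on the conjugate diagonal z2 = conj(z1)
z1 = 0.02 + 0.01j
z2 = np.conj(z1)
w = sum(c * z1**m * z2**n for (m, n), c in p.items())
dw1 = sum(m * c * z1**(m - 1) * z2**n for (m, n), c in p.items() if m > 0)
dw2 = sum(n * c * z1**m * z2**(n - 1) for (m, n), c in p.items() if n > 0)
res = lam * z1 * dw1 + np.conj(lam) * z2 * dw2 - (A @ w + np.array([w[0]**2, 0]))
print(np.max(np.abs(res)))   # truncation error, of size O(|z|^{N+1})
```

The residual of the invariance equation is of the order of the first omitted terms, and the computed coefficients exhibit the conjugate symmetry $p_{nm} = \overline{p_{mn}}$ noted above.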
The data has been truncated at only two or three significant figures to make it fit in the table. Note that the complex conjugate structure of the coefficients is seen in the table. The table is included to give the reader a sense of the form of the data in these calculations, and could be used to very approximately reproduce some of the results in the present work. For the calculations in the main body of the text, we take $N= 12$ and compute the $p_{mn}$ by recursively solving the homological equations using interval arithmetic. Moreover, using the a-posteriori analysis developed in \cite{MR3792792}, we obtain a bound of the form \[ \sum_{m+n = 13}^\infty \| p_{mn} \| \leq 1.4 \times 10^{-13} \] on the norm of the tail of the parameterization. The analysis is very similar to the a-posteriori analysis of the Newton-Krawczyk Theorem \ref{thm:NK} promoted in the present work, adapted to the context of Banach spaces of infinite sequences. Note that this ``little ell one'' norm bounds the $C^0$ norm of the truncation error on the unit disk, and that Cauchy bounds can be used to estimate derivatives of the parameterization on any smaller disk. Thus we actually take \[ P_j^\kappa(\theta) = w_j^{\kappa}(0.9 \cos(\theta) + 0.9 \sin(\theta) i, 0.9 \cos(\theta) - 0.9 \sin(\theta) i) \] as our local parameterization, where \[ w_j^{\kappa}(z_1,z_2) = w_j^{\kappa, N}(z_1,z_2) + w_j^{\kappa, \infty}(z_1,z_2) \] is a polynomial plus a tail, with \[ w_j^{\kappa, \infty}(z_1,z_2) = \sum_{n + m = N+1}^\infty p_{mn} z_1^m z_2^n \] and \[ \sup_{|z_1|,|z_2| < 1} \left\| w_j^{\kappa, \infty}(z_1,z_2) \right\| \leq 1.4 \times 10^{-13}. \] The $0.9$ gives up a portion of the disk, allowing us to bound the derivatives needed in the Newton-Kantorovich argument. } \bibliographystyle{plain} \bibliography{papers} \end{document}
2205.03845v1
http://arxiv.org/abs/2205.03845v1
A System of Four simultaneous Recursions: Generalization of the Ledin-Shannon-Ollerton Identity
\documentclass[11pt,reqno]{amsart} \setlength{\voffset}{-.25in} \sloppy \usepackage{amssymb,latexsym} \usepackage{graphicx} \usepackage{url} \usepackage{setspace} \textwidth=6.175in \textheight=9.0in \headheight=13pt \calclayout \makeatletter \newcommand{\monthyear}[1]{ \def\@monthyear{\uppercase{#1}}} \newcommand{\volnumber}[1]{ \def\@volnumber{\uppercase{#1}}} \AtBeginDocument{\def\ps@plain{\ps@empty \def\@oddfoot{\@monthyear \hfil \thepage} \def\@evenfoot{\thepage \hfil \@volnumber}} \def\ps@firstpage{\ps@plain} \def\ps@headings{\ps@empty \def\@evenhead{ \setTrue{runhead} \def\thanks{\protect\thanks@warning} \uppercase{The Fibonacci Quarterly}\hfil} \def\@oddhead{ \setTrue{runhead} \def\thanks{\protect\thanks@warning} \hfill\uppercase{A System of 4 Recursions}} \let\@mkboth\markboth \def\@evenfoot{ \thepage \hfil \@volnumber} \def\@oddfoot{ \@monthyear \hfil \thepage} }\footskip=25pt \pagestyle{headings}} \makeatother \newcommand{\s}[2]{#1^{(#2)}} \newcommand{\R}{{\mathbb R}} \newcommand{\Q}{{\mathbb Q}} \newcommand{\C}{{\mathbb C}} \newcommand{\N}{{\mathbb N}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\minus}{\scalebox{0.75}[1.0]{$-$}} \newcommand{\Mod}[1]{\ (\mathrm{mod}\ #1)} \theoremstyle{plain} \numberwithin{equation}{section} \newtheorem{thm}{Theorem}[section] \newtheorem{theorem}[thm]{Theorem} \newtheorem{lemma}[thm]{Lemma} \newtheorem{example}[thm]{Example} \newtheorem{definition}[thm]{Definition} \newtheorem{proposition}[thm]{Proposition} \newcommand{\helv}{ \fontfamily{phv}\fontseries{m}\fontsize{9}{11}\selectfont} \doublespacing \begin{document} \monthyear{Month Year} \volnumber{Volume, Number} \setcounter{page}{1} \title[A System of Four Simultaneous Recursions]{A System of Four simultaneous Recursions \\ Generalization of the Ledin-Shannon-Ollerton Identity } \author{Russell Jay Hendel} \address{Towson University} \email{[email protected]} \begin{abstract} This paper further generalizes a recent result of Shannon and Ollerton who resurrected an old 
identity due to Ledin. This paper generalizes the Ledin-Shannon-Ollerton result to all the metallic sequences. The results give closed formulas for the sum of products of powers of the first $n$ integers with the first $n$ members of the metallic sequence. Three key innovations of this paper are i) reducing the proof of the generalization to the solution of a system of four simultaneous recursions; ii) use of the shift operation to prove equality of polynomials; and iii) new OEIS sequences arising from the coefficients of the four polynomial families satisfying the four simultaneous recursions. \end{abstract} \maketitle \section{Introduction} Shannon and Ollerton \cite{Shannon} in a beautiful paper recently resurrected an old result of Ledin \cite{Ledin}. Here are two identities, one from the Ledin paper and one from this paper. \begin{equation}\label{e1} \displaystyle \sum_{k=1}^n k^2 F_k = \bigl(n^2-2n+5\bigr) F_n + \bigl(n^2-4n+8\bigr) F_{n+1} -8, \end{equation} where $\{F_k\}_{k \ge 0}$ are, as usual, the Fibonacci numbers, and $n$ is an arbitrary positive integer. \begin{equation}\label{e2} \displaystyle \sum_{k=1}^n k^3 P_k = \frac{1}{2} \bigl(n^3+3n-3\bigr) P_n +\frac{1}{2} \bigl(n^3-3n^2+6n-7\bigr) P_{n+1} +\frac{7}{2}, \end{equation} where $\{P_k\}_{k \ge 0}$ are, as usual, the Pell numbers, and $n$ is an arbitrary positive integer. What intrigued Shannon, Ollerton, and Ledin are the polynomial coefficients in the above equations: $n^2-2n+5, n^2-4n+8, \frac{1}{2} ( n^3+3n-3 ), \text{ and } \frac{1}{2} ( n^3-3n^2+6n-7 ).$ These examples naturally motivate generalization. An outline of the rest of this paper is as follows. Section 2 describes the main result of this paper generalizing all previous results. The four integer triangular arrays used to produce the polynomial coefficients in \eqref{e1}-\eqref{e2} are presented in Section 3; they naturally correspond to new OEIS sequences.
Section 4 discusses the contributions of this paper relative to previous results. Section 5 begins the proof and naturally motivates defining the algebraic shift operator whose properties are explored in Section 6. Section 7 completes the proof. Section 8 presents fun facts, patterns, and identities on the four new OEIS triangular arrays. \section{The Main Result} We first recall the definition of the \emph{metallic} sequences \cite{Spinadel, Wikipedia}. \begin{definition} The metallic sequence of order $m \ge 1$ is defined recursively by \begin{equation}\label{e3} G_0=0, \quad G_1 =1, \quad G_n= m G_{n-1} + G_{n-2}, \quad n \ge 2. \end{equation} For $m=1,2$ the sequences have names, Golden and Silver, corresponding to the Fibonacci and Pell numbers. Sequences for $m \ge 3,$ depending on the author, may be called Bronze, or Copper, etc. When emphasis is needed, the symbol $G^{(m)}$ will indicate the metallic sequence with parameter $m$ \eqref{e3}. We will use the term \emph{metallicity of the sequence} to refer to $m.$ \end{definition} \textbf{Convention.} Throughout the paper, if $r(x)$ is any polynomial of degree $\deg(r),$ then the notation for the coefficients of $r(x)$ is given by \begin{equation}\label{e4} r(x) = \sum_{i=0}^{\deg(r)} r_i x^i. \end{equation} \begin{theorem}[Main Theorem] For $m \ge 1, n \ge 0,p \ge 1$ there exist two degree-$p$ polynomials with rational coefficients, $F(x), T(x),$ such that \begin{equation}\label{e5} S(n,m,p) = \displaystyle \sum_{k=0}^n k^p G^{(m)}_k = F(n) G^{(m)}_n + T(n) G^{(m)}_{n+1} - T_0. \end{equation} Moreover, we can give an explicit form for $F(n)$ and $T(n).$ We first define four integer-polynomial families satisfying the following system of simultaneous recursions.
\begin{multline}\label{e6} \s{f}{0}(X) =1; \qquad \s{t}{i}(X) = \sum_{j=0}^{j=i} \binom{i}{j} X^{i-j} \s{f}{j}(X), i \ge 0; \qquad \s{d}{i}(X) = \s{t}{i}(X) - X\s{f}{i}(X), i \ge 0; \\ \s{s}{i}(X) = \s{f}{i}(X)+\s{d}{i}(X), i \ge 0; \qquad \s{f}{i}(X)=\sum_{j=0}^{j=i-1} \binom{i}{j} X^{i-1-j} \s{s}{j}(X), i \ge 1. \end{multline} Then the formulas for $F(n)$ and $T(n)$ are \begin{equation}\label{e7} F(n) = \sum_{i=0}^p \frac{\s{f}{i}(m)}{ m^{i+1}} \times (-1)^i \binom{p}{i} n^{p-i}, \qquad T(n) = \sum_{i=0}^p \frac{ \s{t}{i}(m)}{m^{i+1}} \times (-1)^i \binom{p}{i} n^{p-i}. \end{equation} \end{theorem} For the proof of the Main Theorem we will also need a polynomial $D(n)$, defined by \begin{equation}\label{e8} D(n) = T(n) - m F(n) = \sum_{i=0}^p \frac{\s{d}{i}(m)}{ m^{i+1}} \times (-1)^i \binom{p}{i} n^{p-i}, \end{equation} where the last equality follows from \eqref{e6}. \section{Basics about the Main Theorem} Before proceeding, we present in this section some basics about the Main Theorem. \textbf{Comment.} There are many symbols in this theorem and throughout the paper. We have endeavored to make the notation mnemonic to facilitate readability. $F$ and $T$ correspond to the \emph{first} and \emph{second} (that is, \emph{the number ``t''wo}) polynomial in \eqref{e5} when read from left to right (or corresponding to increasing indices of $G$). $m$ corresponds to the \emph{metallicity} of the recursion. $p$ corresponds to the \emph{power} to which we raise $k$ in \eqref{e5}. $d$ corresponds to the \emph{difference} (of $t$ and a multiple of $f$) and $s$ corresponds to the \emph{sum} (of $f$ and $d$). The proof of \eqref{e5} will be by induction. We can immediately prove the base case. \begin{proposition}[Base Case]\label{basecase} For all $m \ge 1, p \ge 1,$ identity \eqref{e5} holds for $n=0.$ \end{proposition} \begin{proof} Clear: by our convention about polynomial coefficients \eqref{e4}, $T(0)=T_0,$ and by the defining recursion \eqref{e3}, $G_0=0$ and $G_1=1,$ so the right-hand side of \eqref{e5} is $T_0 - T_0 = 0 = S(0,m,p).$
\end{proof} Each of the four polynomial families, $f,t,d,s,$ naturally gives rise to a triangular integer array arising from listing coefficients in ascending powers of $X.$ These four arrays are presented in Tables \ref{tab:f} - \ref{tab:s}, whose captions illustrate the row and column conventions for these tables. In these tables, the capital $T$ mnemonically stands for \emph{triangle}, with the superscript indicating which of the four families of polynomials is being described. \begin{center} \begin{table}[ht] \begin{small} \caption {The table $\s{T}{f}.$ Column and row conventions are illustrated for example by i) $\s{T}{f}_{3,2}=8,$ the coefficient of $X^2$ in $\s{f}{3}(X),$ or by, ii) $\s{f}{1}(X)=2-X.$} \label{tab:f} { \renewcommand{\arraystretch}{1.3} \begin{center} \begin{tabular}{||c||c|c|c|c|c|c|c|c||} \hline \hline \;&$X^0$&$X^1$&$X^2$&$X^3$&$X^4$&$X^5$&$X^6$&$X^7$\\ \hline $ \s{f}{0}(X)$&$1$&\;&\;&$ $&\;&\;&\;&\;\\ $ \s{f}{1}(X)$&$2$&$-1$&\;&\;&\;&\;&\;&\;\\ $ \s{f}{2}(X)$&$8$&$-4$&$1$&\;&\;&\;&\;&\;\\ $ \s{f}{3}(X)$&$48$&$-24$&$8$&$-1$&\;&\;&\;&\;\\ $ \s{f}{4}(X)$&$384$&$-192$&$80$&$-16$&$1$&\;&\;&\;\\ $ \s{f}{5}(X)$&$3840$&$-1920$&$960$&$-240$&$32$&$-1$&\;&\;\\ $ \s{f}{6}(X)$&$46080$&$-23040$&$13440$&$-3840$&$728$&$-64$&$1$&\;\\ $ \s{f}{7}(X)$&$645120$&$-322560$&$215040$&$-67200$&$16128$&$-2184$&$128$&$-1$\\ \hline \hline \end{tabular} \end{center} } \end{small} \end{table} \end{center} \begin{center} \begin{table}[ht] \begin{small} \caption {The table $\s{T}{t}.$ Column and row conventions are illustrated for example by i) $\s{T}{t}_{6,2}=7680,$ the coefficient of $X^2$ in $ \s{t}{6}(X),$ or by, ii) $\s{t}{3}(X)=48+2X^2.$} \label{tab:t} { \renewcommand{\arraystretch}{1.3} \begin{center} \begin{tabular}{||c||c|c|c|c|c|c|c||} \hline \hline \;&$X^0$&$X^1$&$X^2$&$X^3$&$X^4$&$X^5$&$X^6$\\ \hline $ \s{t}{0}(X)$&$1$&\;&\;&\;&\;&\;&\;\\ $ \s{t}{1}(X)$&$2$&\;&\;&\;&\;&\;&\;\\ $ \s{t}{2}(X)$&$8$&\;&\;&\;&\;&\;&\;\\ $ \s{t}{3}(X)$&$48$&\;&$2$&\;&\;&\;&\;\\
$ \s{t}{4}(X)$&$384$&\;&$32$&\;&\;&\;&\;\\ $ \s{t}{5}(X)$&$3840$&\;&$480$&\;&$2$&$ $&\;\\ $ \s{t}{6}(X)$&$46080$&\;&$7680$&\;&$128$&\;&\;\\ $ \s{t}{7}(X)$&$645120$&\;&$134400$&\;&$4368$&\;&$2$\\ \hline \hline \end{tabular} \end{center} } \end{small} \end{table} \end{center} \begin{center} \begin{table}[ht] \begin{small} \caption {The table $\s{T}{d}.$ Column and row conventions are illustrated for example by i) $\s{T}{d}_{5,3}=-960,$ the coefficient of $X^3$ in $\s{d}{5}(X),$ or by, ii) $\s{d}{0}(X)=1-X.$} \label{tab:d} { \renewcommand{\arraystretch}{1.3} \begin{center} \begin{tabular}{||c||c|c|c|c|c|c|c|c||} \hline \hline \;&$X^0$&$X^1$&$X^2$&$X^3$&$X^4$&$X^5$&$X^6$&$X^7$\\ \hline $ \s{d}{0}(X)$&$1$&$-1$&\;&\;&\;&\;&\;&\;\\ $ \s{d}{1}(X)$&$2$&$-2$&$1$&\;&\;&\;&\;&\;\\ $ \s{d}{2}(X)$&$8$&$-8$&$4$&$-1$&\;&\;&\;&\;\\ $ \s{d}{3}(X)$&$48$&$-48$&$26$&$-8$&$1$&\;&\;&\;\\ $ \s{d}{4}(X)$&$384$&$-384$&$224$&$-80$&$16$&$-1$&\;&\;\\ $ \s{d}{5}(X)$&$3840$&$-3840$&$2400$&$-960$&$242$&$-32$&$1$&\;\\ $\s{d}{6}(X)$&$46080$&$-46080$&$30720$&$-13440$&$3968$&$-728$&$64$&$-1$\\ \hline \hline \end{tabular} \end{center} } \end{small} \end{table} \end{center} \begin{center} \begin{table}[ht] \begin{small} \caption {The table $\s{T}{s}.$ Column and row conventions are illustrated for example by i) $\s{T}{s}_{3,0}=96,$ the coefficient of $X^0$ in $\s{s}{3}(X),$ or by, ii) $\s{s}{0}(X)=2-X.$} \label{tab:s} { \renewcommand{\arraystretch}{1.3} \begin{center} \begin{tabular}{||c||c|c|c|c|c|c|c|c||} \hline \hline \;&$X^0$&$X^1$&$X^2$&$X^3$&$X^4$&$X^5$&$X^6$&$X^7$\\ \hline $\s{s}{0}(X)$&$2$&$-1$&\;&\;&\;&\;&\;&\;\\ $\s{s}{1}(X)$&$4$&$-3$&$1$&\;&\;&\;&\;&\;\\ $\s{s}{2}(X)$&$16$&$-12$&$5$&$-1$&\;&\;&\;&\;\\ $\s{s}{3}(X)$&$96$&$-72$&$34$&$-9$&$1$&\;&\;&\;\\ $\s{s}{4}(X)$&$768$&$-576$&$304$&$-96$&$17$&$-1$&\;&\;\\ $ \s{s}{5}(X)$&$7680$&$-5760$&$3360$&$-1200$&$274$&$-33$&$1$&\;\\ $\s{s}{6}(X)$&$92160$&$-69120$&$44160$&$-17280$&$4696$&$-792$&$65$&$-1$\\ \hline \hline \end{tabular} \end{center} }
\end{small} \end{table} \end{center} \begin{example} We illustrate \eqref{e5}-\eqref{e7} and Tables \ref{tab:f}-\ref{tab:s} by deriving the polynomial $T(n)$ in \eqref{e2}. In \eqref{e2}, $p=3,m=2.$ By Table \ref{tab:t}, we have $\s{t}{0}(X)=1, \s{t}{1}(X)=2, \s{t}{2}(X)=8, \s{t}{3}(X)=48 + 2 X^2.$ Hence, by \eqref{e7} $$ T(n) = \frac{1}{2^1}\binom{3}{0} n^3 - \frac{2}{2^2}\binom{3}{1} n^2 + \frac{8}{2^3} \binom{3}{2}n - \frac{48+2\cdot 2^2}{2^4} \binom{3}{3} n^0 = \frac{1}{2} n^3 - \frac{3}{2} n^2 + \frac{6}{2}n - \frac{7}{2}. $$ \end{example} The four triangular arrays have many obvious patterns; these will be explored in Section 8. For the proof of the Main Theorem we need the following identity. \begin{proposition}\label{fequald}For $i \ge 1,$ $$ \s{f}{i}(X) = \sum_{j=0}^i \binom{i}{j} X^{i-j} \s{d}{j}(X).$$ \end{proposition} \begin{proof} To make the notation clearer, we omit the arguments of polynomials. By \eqref{e6}, $\s{s}{i} = \s{d}{i} + \s{f}{i}.$ Hence, by \eqref{e6}, for $i \ge 1,$ \begin{equation}\label{temp1} \s{f}{i}=\sum_{j=0}^{j=i-1} \binom{i}{j} X^{i-1-j} \s{s}{j} = \sum_{j=0}^{j=i-1} \binom{i}{j} X^{i-1-j} (\s{f}{j} + \s{d}{j}). \end{equation} Again, by \eqref{e6}, for $i \ge 1,$ \begin{equation}\label{temp2} \s{t}{i}=\sum_{j=0}^{j=i} \binom{i}{j} X^{i-j} \s{f}{j} = \s{f}{i} + \sum_{j=0}^{j=i-1} \binom{i}{j} X^{i-j} \s{f}{j}. 
\end{equation} By \eqref{e6}, $\s{d}{i}= \s{t}{i} - X\s{f}{i}.$ Substituting \eqref{temp1} and \eqref{temp2} we have \begin{equation}\label{temp3} \s{d}{i}= \s{f}{i} + \sum_{j=0}^{j=i-1} \binom{i}{j} X^{i-j} \s{f}{j} - \sum_{j=0}^{j=i-1} \binom{i}{j} X^{i-j} (\s{f}{j} + \s{d}{j}). \end{equation} The proposition requires us to prove \begin{equation}\label{temp4} \s{f}{i}= \sum_{j=0}^i \binom{i}{j} X^{i-j} \s{d}{j} = \s{d}{i} + \sum_{j=0}^{i-1} \binom{i}{j} X^{i-j} \s{d}{j}. \end{equation} Substituting \eqref{temp3} into \eqref{temp4} and cancelling $\s{f}{i}$ from both sides of the resulting equation, we see we must prove $$ 0=\sum_{j=0}^{j=i-1} \binom{i}{j} X^{i-j} \s{f}{j} - \sum_{j=0}^{j=i-1} \binom{i}{j} X^{i-j} (\s{f}{j} + \s{d}{j}) + \sum_{j=0}^{i-1} \binom{i}{j} X^{i-j} \s{d}{j}, $$ which is clear, since the sums cancel term by term. This completes the proof. \end{proof} \section{Contributions of this paper} There are four main contributions of this paper over previous results. First, the obvious contribution: we generalize the Ledin and Shannon-Ollerton results to all metallic sequences. Second, this generalization introduces a new technique, the reduction of the proof to a simultaneous system of recursions. Simultaneous systems have not been explored much in the Fibonacci Quarterly, and they are a welcome avenue for future research. Additionally, this paper uses the shift operator to prove equality of polynomials. Third, while the Shannon-Ollerton paper delightfully connects the proofs to known and established OEIS sequences, this paper leads to four new integer triangular arrays. Finally, while the Shannon-Ollerton approach is also inductive, it reduces the proof to certain identities involving the Bernoulli numbers. The inductive proof in this paper avoids reduction to the Bernoulli numbers.
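The system \eqref{e6} and the formulas \eqref{e7} are straightforward to implement exactly. The following Python sketch (the helper names are mine, not part of the paper) rebuilds the four polynomial families as coefficient lists and verifies the Main Theorem \eqref{e5} directly against the defining sum for small $m$, $p$, and $n$; the special cases $(m,p)=(1,2)$ and $(2,3)$ recover the coefficients in \eqref{e1} and \eqref{e2}.

```python
from fractions import Fraction
from math import comb

def padd(a, b):
    # add two polynomials given as coefficient lists (index = power of X)
    n = max(len(a), len(b))
    return [(a[k] if k < len(a) else 0) + (b[k] if k < len(b) else 0)
            for k in range(n)]

def xshift(a, k):          # multiply polynomial a by X^k
    return [0] * k + list(a)

def peval(a, x):
    return sum(c * x**k for k, c in enumerate(a))

# Build f, t, d, s of (e6) up to superscript P
P = 6
f, t, d, s = [], [], [], []
for i in range(P + 1):
    if i == 0:
        fi = [1]
    else:                  # f^(i) = sum_{j<i} C(i,j) X^{i-1-j} s^(j)
        fi = [0]
        for j in range(i):
            fi = padd(fi, xshift([comb(i, j) * c for c in s[j]], i - 1 - j))
    f.append(fi)
    ti = [0]               # t^(i) = sum_{j<=i} C(i,j) X^{i-j} f^(j)
    for j in range(i + 1):
        ti = padd(ti, xshift([comb(i, j) * c for c in f[j]], i - j))
    t.append(ti)
    d.append(padd(ti, [-c for c in xshift(fi, 1)]))   # d = t - X f
    s.append(padd(fi, d[i]))                          # s = f + d

def FT(n, m, p):
    # F(n) and T(n) from (e7), computed exactly with Fractions
    F = sum(Fraction((-1)**i * comb(p, i) * peval(f[i], m), m**(i + 1))
            * n**(p - i) for i in range(p + 1))
    T = sum(Fraction((-1)**i * comb(p, i) * peval(t[i], m), m**(i + 1))
            * n**(p - i) for i in range(p + 1))
    return F, T

# Verify (e5) against the direct sum for several m, p, n
for m in (1, 2, 3):
    G = [0, 1]
    for _ in range(30):
        G.append(m * G[-1] + G[-2])
    for p in (1, 2, 3, 4):
        T0 = FT(0, m, p)[1]
        for n in range(15):
            Fn, Tn = FT(n, m, p)
            assert sum(k**p * G[k] for k in range(n + 1)) \
                == Fn * G[n] + Tn * G[n + 1] - T0
print("Main Theorem verified for m <= 3, p <= 4, n < 15")
```

For example, $F(n)$ at $(m,p)=(1,2)$ takes the values $5, 4, 5$ at $n=0,1,2$, matching $n^2-2n+5$ from \eqref{e1}.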
It was hoped that, since the proof in this paper avoids the Bernoulli numbers, the various conjectures about Bernoulli numbers made by Shannon and Ollerton could be proven by the results of this paper; but so far a straightforward proof has eluded me. \section{The Bar Algebraic-Shift Operator} Recall that we will prove the Main Theorem by showing that $F(n), T(n)$ as defined by \eqref{e7} satisfy \eqref{e5}. To accomplish this proof we will need an algebraic operator that shifts arguments of a target polynomial. Since the need for this algebraic operator arises naturally in the proof, we motivate the need, and then define this operator, by beginning the proof of the Main Theorem in this section. To begin the proof, we assume $m$ and $p$ fixed, allowing us to smooth the exposition by writing $S(n)$ instead of $S(n,m,p)$ and $G_n$ instead of $\s{G}{m}_n.$ The proof is by induction; the base case, $n=0,$ has already been established in Proposition \ref{basecase}. For the induction step we assume \eqref{e5} true for the case $n$ and proceed to prove it for the case $n+1.$ That is, we must prove \begin{equation}\label{e10} S(n+1) = F(n+1) G_{n+1} + T(n+1) G_{n+2} - T_0. \end{equation} But by the induction assumption, \eqref{e5}, $$ S(n+1) = S(n) + (n+1)^p G_{n+1} = F(n) G_{n} + T(n) G_{n+1} - T_0 + (n+1)^p G_{n+1}. $$ By \eqref{e3} we can write $G_n = G_{n+2} - m G_{n+1},$ reducing the last equation to the equivalent \begin{equation}\label{e11} S(n+1) =\bigl(T(n) - mF(n)\bigr) G_{n+1} + F(n) G_{n+2} -T_0 +(n+1)^p G_{n+1}= D(n) G_{n+1} + F(n) G_{n+2} - T_0 +(n+1)^p G_{n+1}, \end{equation} the last equality arising from \eqref{e8}. Equating the right-hand sides of \eqref{e10} and \eqref{e11}, we see that to accomplish the induction step it suffices to prove \begin{equation}\label{e13} F(n+1) G_{n+1} + T(n+1) G_{n+2} = D(n) G_{n+1} + F(n) G_{n+2} +(n+1)^p G_{n+1}.
\end{equation} To prove \eqref{e13} it suffices to equate the coefficients of $G_{n+1}$ and $G_{n+2}.$ That is, it suffices to prove \begin{equation}\label{e14} F(n+1) = D(n) + (n+1)^p; \qquad T(n+1) = F(n). \end{equation} At this point we get stuck. The traditional way of proving an equality of polynomials such as those in \eqref{e14}, by equating corresponding coefficients, does not work naturally here, since the values of the arguments on the two sides are different ($n+1$ vs. $n$); equating coefficients is therefore not justifiable. This motivates the following definition. For a given polynomial $R$ define $\bar{R}$ as the polynomial such that \begin{equation}\label{e15} \bar{R}(x+1) = R(x). \end{equation} The bar operator is simply a shift operator. Prior to proving the existence and the form of the bar-shift operator in the next section, we show how its existence simplifies the proof. To prove \eqref{e14}, it suffices, using \eqref{e15}, to prove \begin{equation}\label{e16} F(n+1) = D(n)+ (n+1)^p = \bar{D}(n+1)+ (n+1)^p, \qquad T(n+1) = F(n) = \bar{F}(n+1). \end{equation} In turn, to prove \eqref{e16}, and hence to complete the proof of the Main Theorem, it suffices to prove the two polynomial equalities \begin{equation}\label{e17} F(X) = \bar{D}(X) +X^p ; \qquad T(X) = \bar{F}(X), \end{equation} by showing that corresponding coefficients are equal. \section{Properties of the Bar Shift Operator} To prove \eqref{e17} we need properties of the bar-shift operator. \begin{lemma} $\bar{R}(X)$ always exists. \end{lemma} \begin{proof} Clear. Using \eqref{e4}, define $S(Y) = \sum_{i=0}^{\deg(R)} R_i (Y-1)^i.$ Then $S(X+1)=R(X),$ so $\bar{R}=S.$ \end{proof} \begin{example} Let $R(X)=X^2.$ Then $\bar{R}(X)=X^2-2X+1.$ We may verify that $\bar{R}(X+1) = (X+1)^2 - 2(X+1)+1 = X^2=R(X)$ as required by \eqref{e15}.
\end{example} As just pointed out, by writing $R(X)=R((X+1)-1)$ we can obtain the coefficients of $\bar{R}(X)$ from those of $R(X).$ For a general polynomial $R$ of degree $q$ we have, using our conventions about polynomial coefficients \eqref{e4}, that $\begin{pmatrix} \bar{R}_{q} \\ \bar{R}_{q-1} \\ \bar{R}_{q-2} \\ \vdots \\ \bar{R}_{0} \\ \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 &\dotsc &0 \\ -\binom{q}{1} & \binom{q-1}{0} &0 &\dotsc & 0 \\ \binom{q}{2} & \binom{q-1}{1} & \binom{q-2}{0} &\dotsc & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ (-1)^q \binom{q}{q} & (-1)^{q-1} \binom{q-1}{q-1} & (-1)^{q-2} \binom{q-2}{q-2} &\dotsc &1 \end{pmatrix} \begin{pmatrix} R_{q} \\ R_{q-1} \\ R_{q-2}\\ \vdots \\ R_{0} \end{pmatrix} $ By equating polynomial coefficients, this matrix equation generates $q+1$ equations. \begin{equation}\label{e18} \bar{R}_{q} = R_q; \qquad \bar{R}_{q-1} = -\binom{q}{1} R_q +\binom{q-1}{0} R_{q-1}; \qquad \bar{R}_{q-2}=\binom{q}{2} R_q -\binom{q-1}{1} R_{q-1}+ \binom{q-2} {0} R_{q-2}; \qquad \dotsc \end{equation} We close this section with the following obvious observation. \begin{lemma}\label{lem:linear} The polynomial shift operator is a linear operator (on polynomials). \end{lemma} \begin{proof} Clear. \end{proof} \section{Completion of the Proof of the Main Theorem } We continue the proof of the Main Theorem begun in Section 5. Recall we showed that to prove that \eqref{e7} satisfies \eqref{e5} it suffices to prove \eqref{e17}. Notice that the polynomial $X^p$ on the right-hand side of the first equation in \eqref{e17} only contributes to the coefficient of $X^i$ when $i=p.$ Accordingly we must deal with two cases. For each case we must prove coefficient equality of the two asserted identities in \eqref{e17}. \textbf{Case $X^i, i=p.$} Equation \eqref{e17} is stated using the bar-shift operator.
But by \eqref{e18} $\bar{D}_p=D_p$ and $\bar{F}_p=F_p.$ Thus it suffices to prove that $$ F_p=D_p+1, \qquad T_p=F_p.$$ These equations are immediately proven using \eqref{e6}-\eqref{e8}, which give $$ F_p=\frac{1}{m}, \qquad T_p=\frac{1}{m}, \qquad \text{ and } \qquad D_p = \frac{1-m}{m}.$$ \textbf{Case $X^{p-i},$ with $i<p.$} First we prove the second equation in \eqref{e17}, $$ T_{p-i}= \bar{F}_{p-i}. $$ By \eqref{e7} $$ T_{p-i} =(-1)^i \binom{p}{i} \frac{\s{t}{i}(m)}{m^{i+1}}.$$ By \eqref{e6} we therefore have \begin{equation}\label{e19} T_{p-i} =(-1)^i \binom{p}{i} \frac{1}{m^{i+1}} \sum_{j=0}^i \binom{i}{j} m^{i-j} \s{f}{j}(m). \end{equation} By \eqref{e18} $$ \bar{F}_{p-i} = \sum_{j=0}^i \binom{p-j}{i-j}(-1)^{i+j} F_{p-j}. $$ Applying \eqref{e7} to this last equation we obtain \begin{equation}\label{e20} \bar{F}_{p-i} = \sum_{j=0}^i \binom{p-j}{i-j}(-1)^{i+j} \binom{p}{j} \s{f}{j}(m)(-1)^j \frac{1}{m^{j+1}}. \end{equation} To prove the second equation in \eqref{e17}, we must show that the left-hand sides of \eqref{e19} and \eqref{e20} are equal; this follows since the right-hand sides of \eqref{e19} and \eqref{e20} are equal, by the binomial coefficient identity \cite{Wolfram} \begin{equation}\label{e23} \binom{p}{i} \binom{i}{j} = \binom{p-j}{i-j} \binom{p}{j}. \end{equation} Next, we prove the first equation in \eqref{e17}, $$ F_{p-i}= \bar{D}_{p-i}. $$ By \eqref{e7} $$ F_{p-i} =(-1)^i \binom{p}{i} \frac{\s{f}{i}(m)}{m^{i+1}}.$$ By Proposition \ref{fequald} we therefore have \begin{equation}\label{e21} F_{p-i} =(-1)^i \binom{p}{i} \frac{1}{m^{i+1}} \sum_{j=0}^i \binom{i}{j} m^{i-j} \s{d}{j}(m). \end{equation} By \eqref{e18} $$ \bar{D}_{p-i} = \sum_{j=0}^i \binom{p-j}{i-j}(-1)^{i+j} D_{p-j}. $$ By \eqref{e6} and \eqref{e8} we further have \begin{equation}\label{e22} \bar{D}_{p-i} = \sum_{j=0}^i \binom{p-j}{i-j}(-1)^{i+j} \s{d}{j}(m)(-1)^j \frac{1}{m^{j+1}} \binom{p}{j}.
\end{equation} Comparing the right-hand sides of \eqref{e21}--\eqref{e22} we see, by \eqref{e23}, that they are equal, and hence their left-hand sides are equal, completing the proof of the Main Theorem. \section{Fun Patterns and Identities} The four integer triangular arrays presented in Tables \ref{tab:f}--\ref{tab:s} are new, not previously found in the OEIS. Consistent with Fibonacci Quarterly tradition, we list a collection of patterns and identities found in these triangles. Patterns exist in both the columns and the diagonals. We confine ourselves to proving one of these, since the proofs are all similar and follow from the defining equation, \eqref{e6}. All identities hold for $i,j \ge 0,$ unless otherwise stated. \begin{itemize} \item $\mathrm{sign}(\s{T}{f}_{i,j}) = \mathrm{sign}(\s{T}{d}_{i,j}) = \mathrm{sign}(\s{T}{s}_{i,j})=(-1)^j$;\;\; $\mathrm{sign}(\s{T}{t}_{i,j}) = 1$ \item $\s{T}{f}_{i,i} =(-1)^i$;\;\; $\s{T}{d}_{i,i+1}=\s{T}{s}_{i,i+1}=(-1)^{i+1}$;\;\; $\s{T}{t}_{i,2j+1}=0$ \item $\s{T}{t}_{2i+1,2i}=2;\;\; \s{T}{t}_{2i+2,2i}=2^{2i+3}$ \item $\s{T}{f}_{i,0} = \s{T}{t}_{i,0}= \s{T}{d}_{i,0}= -\s{T}{d}_{i,1} = \frac{1}{2} \s{T}{s}_{i,0}= 2^i i!$ \item For $i \ge 1,$ $\s{T}{f}_{i,1} =2^{i-1} i!$;\;\; for $i \ge 1,$ $\s{T}{s}_{i,1} = -3 \cdot 2^{i-1} i!$ \item $\s{T}{d}_{i,i}=(-1)^i 2^i$;\;\; $\s{T}{f}_{i+1,i}=(-1)^i 2^{i+1}$;\;\; $\s{T}{s}_{i,i}=(-1)^i (2^i+1)$ \end{itemize} \begin{proof} We prove the fourth bulleted item. First note that, by the construction of Tables \ref{tab:f}--\ref{tab:s} and by \eqref{e4}, for any symbol $h \in \{f, t, d, s \}$ we have $$ \s{T}{h}_{i,j} = \s{h}{i}_j.
$$ Hence to prove $\s{T}{f}_{i,0} = 2^i i!$ it suffices to prove that $\s{f}{i}_0 = 2i \s{f}{i-1}_0$ for $i \ge 1.$ However, by \eqref{e6}, for $i \ge 1,$ we have $\s{f}{i-1}_0=\s{t}{i-1}_0=\s{d}{i-1}_0.$ Additionally, by \eqref{e6}, since $\s{s}{i-1} = \s{f}{i-1} +\s{d}{i-1},$ we have $\s{s}{i-1}_0=2\s{f}{i-1}_0.$ Finally, by \eqref{e6}, we have $$\s{f}{i}_0 = \binom{i}{i-1} \s{s}{i-1}_0 = 2i \s{f}{i-1}_0,$$ as required. \end{proof} \section{Conclusion} This paper further generalizes the result of Shannon and Ollerton, who in turn resurrected an oldie of Ledin. Besides yielding the four new triangular arrays, the proof was greatly facilitated by creating a system of four simultaneous recursions, two of which never enter the statement of the Main Theorem. We believe this approach of simultaneous systems is fruitful and applicable to other areas. \begin{thebibliography}{99} \bibitem{Shannon} A. G. Shannon and R. L. Ollerton, \emph{A Note on Ledin's Summation Problem}, The Fibonacci Quarterly, \textbf{59.1} (2021), 47--56. \bibitem{Ledin} G. Ledin, \emph{On a Certain Kind of Fibonacci Sums}, The Fibonacci Quarterly, \textbf{5.1} (1967), 45--48. \bibitem{Spinadel} V. Spinadel, \emph{The Family of Metallic Means}, \textbf{http://www.mi.sanu.ac.rs/vismath/spinadel/} \bibitem{Wikipedia} Wikipedia, \emph{Metallic Mean}, \textbf{https://en.wikipedia.org/wiki/Metallic\_mean} \bibitem{Wolfram} Wolfram, \emph{Binomial Coefficient Functional Identities, Distant Neighbors}, \textbf{https://functions.wolfram.com/GammaBetaErf/Binomial/17/02/02/} \end{thebibliography} \medskip \noindent MSC2020: 11B39 \end{document}
\begin{filecontents*}{example.eps} gsave newpath 20 20 moveto 20 220 lineto 220 220 lineto 220 20 lineto closepath 2 setlinewidth gsave .4 setgray fill grestore stroke grestore \end{filecontents*} \RequirePackage{fix-cm} \documentclass[smallextended]{svjour3} \smartqed \usepackage{graphicx} \usepackage{amsfonts} \usepackage{amsmath} \begin{document} \title{Viscosity solutions of Hamilton-Jacobi equations for neutral-type systems\thanks{This work is supported by a grant of the RSF no. 21-71-10070,\\ https://rscf.ru/project/21-71-10070/} } \author{Anton Plaksin} \institute{A. Plaksin \at N.N. Krasovskii Institute of Mathematics and Mechanics of the Ural Branch of the Russian Academy of Sciences, 16 S. Kovalevskaya Str., Yekaterinburg, 620108, Russia;\\ Ural Federal University, 19 Mira street, 620002 Ekaterinburg, Russia \\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} The paper deals with path-dependent Hamilton-Jacobi equations with a coinvariant derivative which arise in investigations of optimal control problems and differential games for neutral-type systems in Hale's form. A~viscosity (generalized) solution of a Cauchy problem for such equations is considered. The existence, uniqueness, and consistency of the viscosity solution are proved. Equivalent definitions of the viscosity solution, including the definitions of minimax and Dini solutions, are obtained. An application of the results to an optimal control problem for neutral-type systems in Hale's form is given.
\keywords{neutral-type systems \and Hamilton-Jacobi equations \and coinvariant derivatives \and viscosity solutions \and minimax solutions \and optimal control problems} \subclass{49L20 \and 49L25 \and 34K40} \end{abstract} \section{Introduction}\label{intro} The paper aims to develop the viscosity solution theory for path-dependent Hamilton-Jacobi (HJ) equations with a coinvariant derivative which arise in optimal control problems and differential games for neutral-type systems in Hale's form \cite{Hale_1977}. Initially, the notion of coinvariant derivatives and path-dependent HJ equations with such derivatives were considered in \cite{Kim_1999} to describe infinitesimal properties of value functionals in optimal control problems for time-delay systems. Similar derivative notions are also known, such as Clio derivatives \cite{Aubin_Haddad_2002} and horizontal and vertical derivatives \cite{Dupire_2009} (the connection between these notions was addressed in~\cite{Gomoyunov_Lukoyanov_Plaksin_2021}). As with HJ equations with partial derivatives, path-dependent HJ equations with coinvariant derivatives may not have a differentiable solution, and various approaches to the definition of a generalized solution need to be examined. The minimax approach \cite{Subbotin_1980,Subbotin_1984,Subbotin_1995} and its application to differential games \cite{Krasovskii_Subbotin_1988,Krasovskii_Krasovskii_1995} for time-delay systems were developed in \cite{Lukoyanov_2000,Lukoyanov_2003,Lukoyanov_2010a} (see also \cite{Bayraktar_Keller_2018} for an extension of these results to an infinite dimensional setting). Investigations of the viscosity approach \cite{Crandall_Lions_1983,Crandall_Evans_Lions_1984} to the generalized solution definition for path-dependent HJ equations associated with time-delay systems can be roughly divided into two groups.
In the first group \cite{Bayraktar_Keller_2018,Ekren_Touzi_Zhang_2016,Kaise_2015,Kaise_Kato_Takahashi_2018,Lukoyanov_2010b,Pham_Zhang_2014,Soner_1988,Zhou_2019}, modified definitions of the viscosity solution based on some parameterizations were studied. In the second group \cite{Plaksin_2020,Plaksin_2021,Zhou_2021}, more natural definitions of the viscosity solution are used, but the path-dependent HJ equations are considered on wider spaces of discontinuous functions. In contrast to path-dependent HJ equations arising in optimal control and differential game problems for time-delay systems, path-dependent HJ equations associated with the problems for more general neutral-type systems in Hale's form have a new term. The fact that this term is not defined at all points of the functional space and is discontinuous at some points of its domain does not allow us to apply the above results and requires us to construct a new theory of generalized solutions. The minimax solution theory for such equations and the application of this theory to differential games for neutral-type systems in Hale's form were investigated in \cite{Gomoyunov_Plaksin_2019,Lukoyanov_Plaksin_2020,Lukoyanov_Plaksin_2020b,Plaksin_2021b}. The present paper develops the viscosity solution theory for such equations. Following \cite{Plaksin_2020,Plaksin_2021}, we consider the path-dependent HJ equations on the space of discontinuous functions and introduce the definition of a viscosity solution of a Cauchy problem for this equation. We prove that the viscosity solution exists and is unique (see Theorem \ref{teo:viscosity_solution}). We obtain additional equivalent definitions of the viscosity (generalized) solution, including the definitions of minimax and Dini solutions (see Theorem \ref{teo:equivalent_solutions}).
We also establish that a coinvariantly differentiable solution of the Cauchy problem coincides with the viscosity solution and, on the other hand, the viscosity solution at the points of coinvariant differentiability satisfies the HJ equation (see Theorem \ref{teo:classical_and_viscosity_solutions}). Moreover, we consider an application of the results to an optimal control problem for a neutral-type system in Hale's form (see Theorem \ref{teo:application}). The main idea of the proofs is to obtain the existence and uniqueness of the minimax solution based on results from \cite{Plaksin_2021b} and to establish the equivalence of the minimax and viscosity solutions using the scheme from \cite{Clarke_Ledyaev_1994,Subbotin_1993,Subbotin_1995} (see also \cite{Plaksin_2020}). However, there are several obstacles to implementing this. Firstly, since the minimax solution from \cite{Plaksin_2021b} is defined on the space of Lipschitz continuous functions, we prove a certain Lipschitz continuity property of this solution and, using it, extend the minimax solution to the space of piecewise Lipschitz continuous functions (see Theorem \ref{teo:Lip_minimax_solution} and Theorem \ref{teo:minimax_solutions} $(a)\Leftrightarrow (b)$). Secondly, as noted above, the path-dependent HJ equations under consideration have a special term which is not defined on the whole space of piecewise Lipschitz continuous functions. Therefore, we introduce the definition of a viscosity solution (see Definition \ref{def:viscosity_solution}) only on a certain subspace on which this term is defined. Such a definition seems to be one of the main features of the present paper, since viscosity solution definitions, even for more particular path-dependent HJ equations, have usually been considered on whole functional spaces. Besides, such a definition prevents us from applying the scheme from \cite{Subbotin_1993,Subbotin_1995} directly.
Nonetheless, we overcome this obstacle by introducing an additional auxiliary definition of minimax solution on the space of continuously differentiable functions ($\mathrm{C}^1$-minimax solution in Definition \ref{def:C1_minimax_solution}). The fact that the $\mathrm{C}^1$-minimax solution coincides with the usual minimax solution (see Theorem \ref{teo:minimax_solutions} $(b)\Leftrightarrow (c)$) completes the proof of the equivalence of the minimax and viscosity solutions (see Theorem \ref{teo:equivalent_solutions}). Thirdly, the extension to the space of piecewise Lipschitz continuous functions does not allow us to expect a continuous solution, as opposed to \cite{Gomoyunov_Plaksin_2019,Lukoyanov_Plaksin_2020,Plaksin_2021b}. Nevertheless, the class of (generally speaking, discontinuous) functionals suggested in the paper is suitable for obtaining existence and uniqueness results. \section{Main results} \subsection{Functional spaces} Let $\mathbb R^n$ be the $n$-dimensional Euclidean space with the inner product $\langle \cdot, \cdot \rangle$ and the norm $\|\cdot\|$. A function $x(\cdot) \colon [a,b) \mapsto \mathbb R^n$ (or $x(\cdot) \colon [a,b] \mapsto \mathbb R^n$) is called piecewise Lipschitz continuous if there exist points $a = \xi_1 < \xi_2 < \ldots < \xi_k = b$ such that the function $x(\cdot)$ is Lipschitz continuous on the interval $[\xi_i,\xi_{i+1})$ for each $i \in \overline{1,k-1}$. Note that such a function $x(\cdot)$ is right continuous at each $\xi \in [a,b)$ and has a finite left limit $x(\xi-0)$ for any $\xi \in (a,b]$. Denote by $\mathrm{PLip}([a,b),\mathbb R^n)$ (or $\mathrm{PLip}([a,b],\mathbb R^n)$) the linear space of piecewise Lipschitz continuous functions $x(\cdot) \colon [a,b) \mapsto \mathbb R^n$ (or $x(\cdot) \colon [a,b] \mapsto \mathbb R^n$). Let $\vartheta, h > 0$.
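As a simple illustration (our example, not taken from the cited sources), a step function such as
\begin{equation*}
w(\xi) = \begin{cases} 0, & \xi \in [-h,-h/2), \\ 1, & \xi \in [-h/2,0), \end{cases}
\end{equation*}
is piecewise Lipschitz continuous on $[-h,0)$ (take $\xi_1=-h$, $\xi_2=-h/2$, $\xi_3=0$); it is right continuous and has the finite left limit $w(-h/2-0)=0$, but it is not continuous and hence not Lipschitz continuous.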
Denote \begin{equation*} \mathrm{PLip} = \mathrm{PLip}([-h,0),\mathbb R^n),\quad \mathrm{Lip} = \mathrm{Lip}([-h,0),\mathbb R^n),\quad \mathrm{C}^1 = \mathrm{C}^1([-h,0),\mathbb R^n), \end{equation*} where $\mathrm{Lip}([-h,0),\mathbb R^n)$ and $\mathrm{C}^1([-h,0),\mathbb R^n)$ are the linear spaces of Lipschitz continuous and continuously differentiable functions $x(\cdot) \colon [-h,0) \mapsto \mathbb R^n$, respectively. Denote \begin{equation}\label{def:G_s} \begin{array}{rl} \mathrm{PLip}_* = & \big\{w(\cdot) \in \mathrm{PLip} \colon \text{there exists } \delta_w > 0 \text{ such that }\\[0.2cm] & \ \,\, w(\cdot) \text{ is continuously differentiable on } [-h,-h+\delta_w] \big\}. \end{array} \end{equation} Note that the following inclusions are valid: \begin{equation}\label{space_inclusions} \mathrm{C}^1 \subset \mathrm{Lip} \subset \mathrm{PLip},\quad \mathrm{C}^1 \subset \mathrm{PLip}_* \subset \mathrm{PLip}. \end{equation} For the sake of brevity, for any $w(\cdot) \in \mathrm{PLip}$, we denote \begin{equation*} \|w(\cdot)\|_1 = \int_{-h}^0 \|w(\xi)\| \mathrm{d} \xi,\quad \|w(\cdot)\|_\infty = \sup\limits_{\xi\in [-h,0)} \|w(\xi)\|,\quad w(-0) = w(0 - 0). \end{equation*} Without loss of generality of the results presented below, we can suppose the existence of $I \in \mathbb N$ such that $\vartheta = I h$. Define the spaces \begin{equation}\label{def:G_G_s} \mathbb G = [0,\vartheta] \times \mathbb R^n \times \mathrm{PLip},\quad \mathbb G_* = \cup_{i=0}^{I-1} (i h, (i+1) h) \times \mathbb R^n \times \mathrm{PLip}_*. \end{equation} \subsection{Hamilton-Jacobi equation} For each $(\tau,z,w(\cdot)) \in \mathbb G$, denote \begin{equation*} \begin{array}{rl} \Lambda(\tau,z,w(\cdot)) = \big\{x(\cdot) \in \mathrm{PLip}([\tau-h,\vartheta],\mathbb R^n) \colon & x(\tau) = z, \\[0.2cm] & x(t) = w(t - \tau),\, t \in [\tau-h,\tau)\big\}.
\end{array} \end{equation*} Following \cite{Plaksin_2020,Plaksin_2021} (see also \cite{Kim_1999,Lukoyanov_2000}), a functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is called coinvariantly (ci-) differentiable at a point $(\tau,z,w(\cdot)) \in \mathbb G$, $\tau < \vartheta$ if there exist $\partial^{ci}_{\tau,w}\varphi(\tau,z,w(\cdot)) \in \mathbb R$ and $\nabla_z\varphi(\tau,z,w(\cdot)) \in \mathbb R^n$ such that, for every $t \in [\tau,\vartheta]$, $y \in \mathbb R^n$, and $x(\cdot) \in \Lambda(\tau,z,w(\cdot))$, the relation below holds \begin{equation}\label{def:ci-differentiability} \begin{array}{c} \varphi(t,y,x_t(\cdot)) - \varphi(\tau,z,w(\cdot)) = (t - \tau) \partial^{ci}_{\tau,w}\varphi(\tau,z,w(\cdot)) \\[0.2cm] + \langle y - z, \nabla_z \varphi(\tau,z,w(\cdot)) \rangle + o(|t - \tau| + \|y - z\|), \end{array} \end{equation} where $x_t(\cdot)$ denotes the function from $\mathrm{PLip}$ such that $x_t(\xi) = x(t + \xi)$, $\xi \in [-h,0)$ and the value $o(\delta)$ can depend on $x(\cdot)$ and $o(\delta)/\delta \to 0$ as $\delta \to +0$. Then $\partial^{ci}_{\tau,w}\varphi(\tau,z,w(\cdot))$ is called the ci-derivative of $\varphi$ with respect to $\{\tau,w(\cdot)\}$ and $\nabla_z \varphi(\tau,z,w(\cdot))$ is the gradient of $\varphi$ with respect to $z$. Similarly, the mapping $\mathbb G \ni (\tau,z,w(\cdot)) \mapsto \phi = (\phi_1, \ldots, \phi_n) \in \mathbb R^n$ is called ci-differentiable at a point $(\tau,z,w(\cdot)) \in \mathbb G$, $\tau < \vartheta$, if the functionals $\phi_i\colon \mathbb G \mapsto \mathbb R$, $i=\overline{1,n}$ are ci-differentiable at this point. 
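As a minimal illustration of this definition (our example, not from the cited works), consider the functional $\varphi(\tau,z,w(\cdot)) = \|z\|^2$, $(\tau,z,w(\cdot)) \in \mathbb G$. For every $x(\cdot) \in \Lambda(\tau,z,w(\cdot))$ we have
\begin{equation*}
\varphi(t,y,x_t(\cdot)) - \varphi(\tau,z,w(\cdot)) = \|y\|^2 - \|z\|^2 = \langle y - z, 2z \rangle + \|y - z\|^2,
\end{equation*}
and $\|y - z\|^2 = o(|t - \tau| + \|y - z\|)$, so relation (\ref{def:ci-differentiability}) holds with $\partial^{ci}_{\tau,w}\varphi(\tau,z,w(\cdot)) = 0$ and $\nabla_z\varphi(\tau,z,w(\cdot)) = 2z$.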
Let us fix the functions $g \colon [0,\vartheta] \times \mathbb R^n \mapsto \mathbb R^n$ and $H\colon [0,\vartheta] \times \mathbb R^n \times \mathbb R^n \times \mathbb R^n \mapsto \mathbb R$ and the mapping $\sigma \colon \mathbb R^n \times \mathrm{PLip} \mapsto \mathbb R$ satisfying the following conditions: \begin{itemize} \item[$(g)$] The function $g$ is continuously differentiable. \item[$(H_1)$] The function $H$ is continuous. \item[$(H_2)$] There exists a constant $c_H > 0$ such that \begin{equation*} \big|H(\tau,z,r,s) - H(\tau,z,r,s')\big| \leq c_H \big(1 + \|z\| + \|r\|\big) \|s - s'\| \end{equation*} for any $\tau \in [0,\vartheta]$, $z,r,s,s' \in \mathbb R^n$. \item[$(H_3)$] For every $\alpha > 0$, there exists $\lambda_H = \lambda_H(\alpha) > 0$ such that \begin{equation*} \big|H(\tau,z,r,s) - H(\tau,z',r',s)\big|\leq \lambda_H \big(\|z - z'\| + \|r - r'\|\big) \big(1 + \|s\|\big) \end{equation*} for any $\tau \in [0,\vartheta]$, $z,r,z',r',s \in \mathbb R^n$: $\max\{\|z\|,\|r\|,\|z'\|,\|r'\|\} \leq \alpha$. \item[$(\sigma)$] For every $\alpha > 0$, there exists $\lambda_\sigma = \lambda_\sigma(\alpha) > 0$ such that \begin{equation*} \big|\sigma(z,w(\cdot)) - \sigma(z',w'(\cdot))\big| \leq \lambda_\sigma \big(\|z - z'\| + \|w(\cdot) - w'(\cdot)\|_1\big) \end{equation*} for any $(z,w(\cdot)),(z',w'(\cdot)) \in P(\alpha)$, where \begin{equation}\label{def:P} P(\alpha) = \big\{(z,w(\cdot)) \in \mathbb R^n \times \mathrm{PLip} \colon \|z\| \leq \alpha,\, \|w(\cdot)\|_\infty \leq \alpha\big\}. \end{equation} \end{itemize} Consider the mapping $g_*(\tau,z,w(\cdot)) = g(\tau,w(-h))$, $(\tau,z,w(\cdot)) \in \mathbb G$.
Due to condition $(g)$, $g_*$ is ci-differentiable on $\mathbb G_*$ (see (\ref{def:G_G_s})) and \begin{equation}\label{def:dg_s} \partial^{ci}_{\tau,w} g_*(\tau,z,w(\cdot)) = G(\tau,w(-h),\mathrm{d}^+ w(-h) / \mathrm{d} \xi),\quad \nabla_z g_*(\tau,z,w(\cdot)) = 0 \end{equation} for any $(\tau,z,w(\cdot)) \in \mathbb G_*$, where $\mathrm{d}^+ w(-h) / \mathrm{d} \xi$ is the right derivative of the function $w(\xi)$, $\xi \in [-h,0)$ at the point $\xi=-h$ and $G(\tau,x,y) = \partial g(\tau,x) / \partial \tau + \nabla_x g(\tau,x) y$. Since the mapping $g_*$ is determined by the function $g$ and does not depend on $z$, for brevity, we denote \begin{equation}\label{def:derivative_g} \partial^{ci}_{\tau,w} g(\tau,w(\cdot)) = \partial^{ci}_{\tau,w} g_*(\tau,z,w(\cdot)). \end{equation} For the functional $\varphi \colon \mathbb G \mapsto \mathbb R$, let us consider the Cauchy problem for the HJ equation \begin{equation}\label{Hamilton-Jacobi_equation} \begin{array}{rcl} \partial^{ci}_{\tau,w} \varphi(\tau,z,w(\cdot)) & + & \langle\partial^{ci}_{\tau,w} g(\tau,w(\cdot)), \nabla_z \varphi(\tau,z,w(\cdot)) \rangle \\[0.3cm] & + & H(\tau,z,w(-h),\nabla_z \varphi(\tau,z,w(\cdot))) = 0, \end{array} \ (\tau,z,w(\cdot)) \in \mathbb G_*, \end{equation} and the terminal condition \begin{equation}\label{terminal_condition} \varphi(\vartheta,z,w(\cdot)) =\sigma(z,w(\cdot)),\quad (z,w(\cdot)) \in \mathbb R^n \times \mathrm{PLip}. \end{equation} \begin{remark} Such Cauchy problems arise in investigations of optimal control problems and differential games for neutral-type systems in Hale's form (see, e.g., \cite{Gomoyunov_Plaksin_2019}). In contrast to Cauchy problems corresponding to time-delay systems \cite{Lukoyanov_2000,Lukoyanov_2003,Lukoyanov_2010a,Lukoyanov_2010b}, the new term $\langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), \nabla_z \varphi(\tau,z,w(\cdot)) \rangle$ appears.
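For instance, if $g \equiv 0$, then $\partial^{ci}_{\tau,w} g(\tau,w(\cdot)) \equiv 0$ by (\ref{def:dg_s}), (\ref{def:derivative_g}), the new term vanishes, and (\ref{Hamilton-Jacobi_equation}) turns into the path-dependent HJ equation of the time-delay case:
\begin{equation*}
\partial^{ci}_{\tau,w} \varphi(\tau,z,w(\cdot)) + H(\tau,z,w(-h),\nabla_z \varphi(\tau,z,w(\cdot))) = 0.
\end{equation*}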
Since the functional $\partial^{ci}_{\tau,w} g(\tau,w(\cdot))$ depends on the derivative $\mathrm{d}^+ w(-h) / \mathrm{d} \xi$ (see (\ref{def:dg_s}), (\ref{def:derivative_g})), it is not defined at all points of $\mathbb G$ and is discontinuous with respect to the uniform norm. Thus, we cannot apply results from \cite{Lukoyanov_2000,Lukoyanov_2003,Lukoyanov_2010a,Lukoyanov_2010b} to Cauchy problem (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition}). Moreover, we need to consider the HJ equation only on a set on which $\partial^{ci}_{\tau,w} g(\tau,w(\cdot))$ is defined. We choose $\mathbb G_*$ as such a set. Due to condition $(g)$, the value $\partial^{ci}_{\tau,w} g(\tau,w(\cdot))$ is defined on $\mathbb G_*$. \end{remark} We seek a solution of this problem among the functionals $\varphi\colon \mathbb G \mapsto \mathbb R$ satisfying the following conditions: \begin{itemize} \item[$(\varphi_1)$] For each $(\tau,w(\cdot)) \in [0,\vartheta] \times \mathrm{Lip}$, the function $\overline{\varphi}(t) = \varphi(t,w(-0),w(\cdot))$, $t \in [\tau,\vartheta]$ is continuous. \item[$(\varphi_2)$] For every $\alpha > 0$, there exists $\lambda_\varphi = \lambda_\varphi(\alpha) > 0$ such that \begin{equation}\label{Phi_lip} |\varphi(\tau,z,w(\cdot)) - \varphi(\tau,z',w'(\cdot))| \leq \lambda_\varphi \upsilon(\tau,z-z',w(\cdot) - w'(\cdot)) \end{equation} for any $\tau \in [0,\vartheta]$ and $(z,w(\cdot)), (z',w'(\cdot)) \in P(\alpha)$, where \begin{equation}\label{def:upsilon} \upsilon(\tau,z,w(\cdot)) = \|z\| + \|w(\cdot)\|_1 + \|w(-h)\| + \|w(i h - \tau)\| \end{equation} in which $i \in \overline{-1,I-1}$ is such that $\tau \in (i h, (i+1) h]$.
\end{itemize} \begin{remark}\label{rem:conditions} Note that if $\varphi \colon \mathbb G \mapsto \mathbb R$ satisfies conditions $(\varphi_1)$, $(\varphi_2)$, then the functional $\hat{\varphi}(\tau,w(\cdot)) = \varphi(\tau,w(-0),w(\cdot))$, $(\tau,w(\cdot))\in [0,\vartheta] \times \mathrm{Lip}$ is continuous with respect to the uniform norm. Thus, these conditions are consistent with prior works \cite{Gomoyunov_Plaksin_2019,Lukoyanov_Plaksin_2020,Plaksin_2021b} in which solutions of HJ equations for neutral-type systems on the space of Lipschitz continuous functions were considered in the class of continuous functionals. Moreover, these conditions are an analog of the conditions suggested in \cite{Plaksin_2020,Plaksin_2021}, devoted to viscosity solutions of HJ equations for time-delay systems, with additional terms $\|w(-h)\|$ and $\|w(i h - \tau)\|$. \end{remark} \subsection{Viscosity solution} Denote \begin{equation*} \Lambda_0(\tau,z,w(\cdot)) = \big\{x(\cdot) \in \Lambda(\tau,z,w(\cdot)) \colon x(t) = z,\ t \in [\tau,\vartheta]\big\},\quad (\tau,z,w(\cdot)) \in \mathbb G. \end{equation*} \begin{remark} Similar to \cite{Plaksin_2021}, note that if a functional $\varphi \colon \mathbb G \mapsto \mathbb R$ satisfies conditions $(\varphi_1)$, $(\varphi_2)$ and is ci-differentiable at $(\tau,z,w(\cdot)) \in \mathbb G_*$, then the function \begin{equation}\label{tilde_phi} \tilde{\varphi}(t,x) = \varphi(t,x,\kappa_t(\cdot)),\quad (t,x) \in [\tau,\vartheta] \times \mathbb R^n,\quad \kappa(\cdot) \in \Lambda_0(\tau,z,w(\cdot)), \end{equation} has a right partial derivative $\partial^+ \tilde{\varphi}(\tau,z) / \partial \tau$ and a gradient $\nabla_z \tilde{\varphi}(\tau,z)$ at the point $(\tau,z)$, and satisfies the following HJ equation: \begin{equation*} \partial^+ \tilde{\varphi}(\tau,z) / \partial \tau + \langle\partial^{ci}_{\tau,w} g(\tau,w(\cdot)), \nabla_z \tilde{\varphi}(\tau,z) \rangle + H(\tau,z,w(-h),\nabla_z \tilde{\varphi}(\tau,z)) = 0.
\end{equation*} Thus, we might say that the HJ equation with a ci-derivative (\ref{Hamilton-Jacobi_equation}) is locally this HJ equation with partial derivatives. \end{remark} This observation, combined with the classical definition of viscosity solutions \cite{Crandall_Lions_1983}, leads us in a natural way to the following definition. \begin{definition}\label{def:viscosity_solution} A functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is called a viscosity solution of problem (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition}) if $\varphi$ satisfies conditions $(\varphi_1)$, $(\varphi_2)$, (\ref{terminal_condition}) and conditions \begin{subequations} \begin{gather} \left\{ \begin{array}{ll} \text{for every}\ (\tau,z,w(\cdot)) \in \mathbb G_*,\ \psi \in \mathrm{C}^1(\mathbb R \times \mathbb R^n,\mathbb R), \text{ and } \delta > 0,\\[0.2cm] \text{if}\ \varphi(\tau,z,w(\cdot)) - \psi(\tau,z) \leq \varphi(t,x,\kappa_t(\cdot)) - \psi(t,x)\\[0.2cm] \text{for any}\ (t,x) \in O_\delta^+(\tau,z),\ \kappa(\cdot) \in \Lambda_0(\tau,z,w(\cdot)), \\[0.2cm] \text{then}\ \partial \psi(\tau,z) / \partial \tau + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), \nabla_z \psi(\tau,z) \rangle \\[0.2cm] \hspace{4cm} + H(\tau,z,w(-h),\nabla_z \psi(\tau,z)) \leq 0, \end{array} \right.\label{sub_viscosity_solution}\\[0.2cm] \left\{ \begin{array}{ll} \text{for every}\ (\tau,z,w(\cdot)) \in \mathbb G_*,\ \psi \in \mathrm{C}^1(\mathbb R \times \mathbb R^n,\mathbb R), \text{ and } \delta > 0,\\[0.2cm] \text{if}\ \varphi(\tau,z,w(\cdot)) - \psi(\tau,z) \geq \varphi(t,x,\kappa_t(\cdot)) - \psi(t,x)\\[0.2cm] \text{for any}\ (t,x) \in O_\delta^+(\tau,z),\ \kappa(\cdot) \in \Lambda_0(\tau,z,w(\cdot)), \\[0.2cm] \text{then}\ \partial \psi(\tau,z) / \partial \tau + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), \nabla_z \psi(\tau,z) \rangle \\[0.2cm] \hspace{4cm} + H(\tau,z,w(-h),\nabla_z \psi(\tau,z)) \geq 0. \end{array}\label{super_viscosity_solution} \right.
\end{gather} \end{subequations} Here $O^+_\delta(\tau,z) = \{(t,x) \in [\tau,\tau+\delta] \times \mathbb R^n \colon \|x - z\| \leq \delta\}$, and $\mathrm{C}^1(\mathbb R \times \mathbb R^n, \mathbb R)$ is the space of continuously differentiable functions $\psi \colon \mathbb R \times \mathbb R^n \mapsto \mathbb R$. \end{definition} The main result of the paper is the following theorem. \begin{theorem}\label{teo:viscosity_solution} There exists a unique viscosity solution $\varphi$ of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}. \end{theorem} In order to prove the theorem, we consider the minimax approach to the definition of generalized solutions of problem (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition}) (see \cite{Plaksin_2021b}). \subsection{Minimax solution} Taking the constant $c_H > 0$ from $(H_2)$, denote \begin{equation}\label{def:F} F^\eta(x,y) = \big\{l \in \mathbb R^n \colon \|l\| \leq c_H (1 + \|x\| + \|y\|) + \eta \big\},\ \ x,y \in \mathbb R^n,\ \ \eta \in [0,1]. \end{equation} Let $(\tau,z,w(\cdot)) \in \mathbb G$. Denote by $X^\eta(\tau,z,w(\cdot))$ the set of functions $x(\cdot) \in \Lambda(\tau,z,w(\cdot))$ such that the function $y(t) = x(t) - g(t,x(t-h))$, $t \in [\tau,\vartheta]$ is Lipschitz continuous and the neutral-type differential inclusion \begin{equation}\label{def:inclusion} \frac{\mathrm{d}}{\mathrm{d} t}\Big(x(t) - g(t,x(t-h))\Big) \in F^\eta(x(t),x(t-h))\text{ for a.e. } t \in [\tau,\vartheta], \end{equation} holds. Note that the set $X^\eta(\tau,z,w(\cdot))$ is not empty. In particular, the function $x(\cdot) \in \Lambda(\tau,z,w(\cdot))$ satisfying $x(t) = g(t,x(t-h))$, $t \in (\tau,\vartheta]$, belongs to $X^\eta(\tau,z,w(\cdot))$.
The following inequalities for functionals $\varphi \colon \mathbb G \mapsto \mathbb R$ are key to defining the minimax solutions considered below: \begin{subequations} \begin{align} \inf\limits_{x(\cdot) \in X^\eta(\tau,z,w(\cdot))} \big(\varphi(t,x(t),x_t(\cdot)) + \omega(\tau,t,x(\cdot),s)\big) \leq \varphi(\tau,z,w(\cdot)), \label{def:upper_minmax_solution} \\[0.0cm] \sup\limits_{x(\cdot) \in X^\eta(\tau,z,w(\cdot))} \big(\varphi(t,x(t),x_t(\cdot)) + \omega(\tau,t,x(\cdot),s)\big) \geq \varphi(\tau,z,w(\cdot)), \label{def:lower_minmax_solution} \end{align} \end{subequations} where $(\tau,z,w(\cdot)) \in \mathbb G$, $\tau < \vartheta$, $t \in (\tau,\vartheta]$, $s \in \mathbb R^n$, $\eta \in [0,1]$, and \begin{equation}\label{def:omega} \begin{array}{rcl} \omega(\tau,t,x(\cdot),s) &=& \displaystyle\int_\tau^t H(\xi,x(\xi),x(\xi-h),s) \mathrm{d} \xi \\[0.4cm] &-& \big\langle \Big(x(t) - g(t,x(t-h))\Big) - \Big(x(\tau) - g(\tau,x(\tau-h))\Big), s \big\rangle. \end{array} \end{equation} \begin{definition} A functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is called a minimax solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}, if $\varphi$ satisfies conditions $(\varphi_1)$, $(\varphi_2)$, {\rm (\ref{terminal_condition})} and the inequalities (\ref{def:upper_minmax_solution}), (\ref{def:lower_minmax_solution}) for any $(\tau,z,w(\cdot)) \in \mathbb G$, $\tau < \vartheta$, $t \in (\tau,\vartheta]$, $s \in \mathbb R^n$, and $\eta = 0$. \end{definition} This definition of a minimax solution seems natural for problem (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition}) under consideration. Nevertheless, the theory of minimax solutions of problems similar to (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition}) was recently developed on the space of Lipschitz continuous functions (see \cite{Plaksin_2021b}). To apply these results, it is convenient to give the following auxiliary definition of a minimax solution.
\begin{definition} A functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is called a $\mathrm{Lip}$-minimax solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}, if $\varphi$ satisfies conditions $(\varphi_1)$, $(\varphi_2)$ and, taking $z = w(-0)$, satisfies condition {\rm (\ref{terminal_condition})} for any $w(\cdot) \in \mathrm{Lip}$ and inequalities (\ref{def:upper_minmax_solution}), (\ref{def:lower_minmax_solution}) for any $(\tau,w(\cdot)) \in [0,\vartheta) \times \mathrm{Lip}$, $t \in (\tau,\vartheta]$, $s \in \mathbb R^n$, and $\eta =0$. \end{definition} The first step in proving Theorem \ref{teo:viscosity_solution} is the following theorem, which follows from Lemmas \ref{lem:ex_un_minimax_sol} and \ref{lem:varphi_extension}, taking Remark \ref{rem:conditions} into account. \begin{theorem}\label{teo:Lip_minimax_solution} There exists a unique $\mathrm{Lip}$-minimax solution $\varphi$ of problem {\rm(\ref{Hamilton-Jacobi_equation}),~(\ref{terminal_condition})}. \end{theorem} Then, we introduce one more auxiliary definition of a minimax solution. \begin{definition}\label{def:C1_minimax_solution} A functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is called a $\mathrm{C^1}$-minimax solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}, if $\varphi$ satisfies conditions $(\varphi_1)$, $(\varphi_2)$, and, taking $z = w(-0)$, satisfies condition {\rm (\ref{terminal_condition})} for any $w(\cdot) \in \mathrm{C}^1$ and inequalities (\ref{def:upper_minmax_solution}), (\ref{def:lower_minmax_solution}) for any $i \in \overline{0,I-1}$, $(\tau,w(\cdot)) \in [i h, (i + 1) h) \times \mathrm{C}^1$, $t \in (\tau,(i+1)h]$, $s \in \mathbb R^n$, and $\eta \in (0,1]$. \end{definition} The following theorem establishes the equivalence of these three definitions.
\begin{theorem}\label{teo:minimax_solutions} The following statements are equivalent: \begin{description} \item[(a)] The functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is a minimax solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}. \vspace{0.1cm} \item[(b)] The functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is a $\mathrm{Lip}$-minimax solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}. \vspace{0.1cm} \item[(c)] The functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is a $\mathrm{C^1}$-minimax solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}. \end{description} \end{theorem} The implications $(a) \Rightarrow (b) \Rightarrow (c)$ are valid due to inclusions (\ref{space_inclusions}) and the inclusion $X^0(\tau,z,w(\cdot)) \subset X^\eta(\tau,z,w(\cdot))$ for any $\eta \in (0,1]$. The implication $(c) \Rightarrow (a)$ is proved in Lemma \ref{lem:C1_and_minimax_solutions}. Finally, in order to prove Theorem \ref{teo:viscosity_solution}, we follow the scheme from \cite{Clarke_Ledyaev_1994,Subbotin_1993,Subbotin_1995} (see also \cite{Plaksin_2020}). For that, we define the corresponding notions of lower and upper right directional derivatives as well as the notions of subdifferential and superdifferential of a functional $\varphi \colon \mathbb G \mapsto \mathbb R$.
By analogy with \cite{Plaksin_2021} (see also \cite{Lukoyanov_2010a}), lower and upper right directional derivatives of a functional $\varphi \colon \mathbb G \mapsto \mathbb R$ along $(l_0, l) \in [0,+\infty) \times \mathbb R^n$ at $(\tau,z,w(\cdot)) \in \mathbb G$, $\tau < \vartheta$ are defined by \begin{subequations}\label{directional_derivatives} \begin{align} \partial^-_{(l_0,l)} \varphi(\tau,z,w(\cdot)) & = \liminf\limits_{\delta \downarrow 0} \displaystyle\frac{\varphi(\tau + l_0 \delta, z + l \delta,\kappa_{\tau + l_0 \delta}(\cdot)) - \varphi(\tau,z,w(\cdot))}{\delta},\label{lower_directional_derivatives}\\[0.0cm] \partial^+_{(l_0,l)} \varphi(\tau,z,w(\cdot)) & = \limsup\limits_{\delta \downarrow 0} \displaystyle\frac{\varphi(\tau + l_0 \delta, z + l \delta,\kappa_{\tau + l_0 \delta}(\cdot)) - \varphi(\tau,z,w(\cdot))}{\delta},\label{ipper_directional_derivatives} \end{align} \end{subequations} where $\kappa(\cdot) \in \Lambda_0(\tau,z,w(\cdot))$. The subdifferential of a functional $\varphi \colon \mathbb G \mapsto \mathbb R$ at a point $(\tau,z,w(\cdot)) \in \mathbb G$, $\tau < \vartheta$ is the set, denoted by $D^-\varphi(\tau,z,w(\cdot))$, of pairs $(p_0,p) \in \mathbb R \times \mathbb R^n$ such that \begin{equation}\label{subdifferential} \lim\limits_{\delta \to 0} \inf\limits_{(t,x) \in O^+_\delta(\tau,z)} \!\!\frac{\varphi(t,x,\kappa_t(\cdot)) - \varphi(\tau,z,w(\cdot)) - (t - \tau) p_0 - \langle x - z, p \rangle}{|t - \tau| + \|x - z\|} \geq 0.
\end{equation} The superdifferential of a functional $\varphi \colon \mathbb G \mapsto \mathbb R$ at a point $(\tau,z,w(\cdot)) \in \mathbb G$, $\tau < \vartheta$ is a set, denoted by $D^+\varphi(\tau,z,w(\cdot))$, of pairs $(q_0,q) \in \mathbb R \times \mathbb R^n$ such that \begin{equation}\label{superdifferential} \lim\limits_{\delta \to 0} \sup\limits_{(t,x) \in O^+_\delta(\tau,z)} \!\!\frac{\varphi(t,x,\kappa_t(\cdot)) - \varphi(\tau,z,w(\cdot)) - (t - \tau) q_0 - \langle x - z, q \rangle}{|t - \tau| + \|x - z\|} \leq 0. \end{equation} The theorem below together with Theorem \ref{teo:Lip_minimax_solution} completes the proof of Theorem~\ref{teo:viscosity_solution}. \begin{theorem}\label{teo:equivalent_solutions} The following statements are equivalent: \begin{description} \item[(a)] The functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is a minimax solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}. \vspace{0.1cm} \item[(b)] The functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is a $\mathrm{C^1}$-minimax solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}.
\vspace{0.1cm} \item[(c)] The functional $\varphi \colon \mathbb G \mapsto \mathbb R$ satisfies conditions $(\varphi_1)$, $(\varphi_2)$, {\rm (\ref{terminal_condition})} and, for every $(\tau,z,w(\cdot)) \in \mathbb G_*$ and $s \in \mathbb R^n$, the following inequalities hold: \begin{subequations} \begin{gather} \begin{array}{l} \inf\limits_{l \in F^0(z,w(-h)) + \partial^{ci}_{\tau,w} g(\tau,w(\cdot))} \Big(\partial^-_{(1,l)} \varphi(\tau,z,w(\cdot)) + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)) ,s \rangle\\[0.0cm] \hspace{5cm} + H(\tau,z,w(-h),s) - \langle l,s \rangle\Big) \leq 0,\label{lower_directional_derivative_inequality} \end{array}\\[0.1cm] \begin{array}{l} \sup\limits_{l \in F^0(z,w(-h)) + \partial^{ci}_{\tau,w} g(\tau,w(\cdot))} \Big(\partial^+_{(1,l)} \varphi(\tau,z,w(\cdot)) + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)) ,s \rangle\\[0.0cm] \hspace{5cm} + H(\tau,z,w(-h),s) - \langle l,s \rangle\Big) \geq 0.\label{upper_directional_derivative_inequality} \end{array} \end{gather} \end{subequations} \vspace{0.1cm} \item[(d)] The functional $\varphi \colon \mathbb G \mapsto \mathbb R$ satisfies conditions $(\varphi_1)$, $(\varphi_2)$, {\rm (\ref{terminal_condition})} and, for every $(\tau,z,w(\cdot)) \in \mathbb G_*$, $(p_0,p) \in D^-\varphi(\tau,z,w(\cdot))$, and $(q_0,q) \in D^+\varphi(\tau,z,w(\cdot))$ the following inequalities hold: \begin{subequations} \begin{gather} p_0 + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), p \rangle + H(\tau,z,w(-h),p) \leq 0,\label{subdifferential_viscosity_inequality}\\[0.1cm] q_0 + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), q \rangle + H(\tau,z,w(-h),q) \geq 0.\label{superdifferential_viscosity_inequality} \end{gather} \end{subequations} \item[(e)] The functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is a viscosity solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}.
\end{description} \end{theorem} The implications $(a) \Rightarrow (e) \Rightarrow (d) \Rightarrow (c) \Rightarrow (b) \Rightarrow (a)$ follow from Lemmas \ref{minimax_and_viscosity_solutions}, \ref{lem:viscosity_and_D_solutions}, \ref{lem:D_and_dini_solutions}, \ref{lem:dini_and_C1_solutions}, and Theorem \ref{teo:minimax_solutions}, respectively. \begin{remark} Besides the proof of Theorem \ref{teo:viscosity_solution}, Theorem \ref{teo:equivalent_solutions} establishes other equivalent definitions of a generalized solution of (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition}), which have analogues in the classical theory \cite{Subbotin_1995}. Namely, $(c)$ is similar to the infinitesimal variant of the definition of minimax solutions (see \cite{Subbotin_1980}) or Dini solutions according to another terminology (see \cite{Clarke_Ledyaev_1994}). $(d)$ is an analog of the equivalent notion of viscosity solutions considered in \cite{Crandall_Evans_Lions_1984}. \end{remark} \subsection{Consistency} \begin{remark} Note that, directly from the definitions of ci-differentiability (see (\ref{def:ci-differentiability})) and sub- and superdifferentials (see (\ref{subdifferential}) and (\ref{superdifferential})), if a functional $\varphi \colon \mathbb G \mapsto \mathbb R$ satisfies conditions $(\varphi_1)$, $(\varphi_2)$ and is ci-differentiable at $(\tau,z,w(\cdot)) \in \mathbb G_*$, then \begin{equation*} \begin{array}{rcl} D^-\varphi(\tau,z,w(\cdot)) &=& \big\{(p_0,p) \colon p_0 \leq \partial^{ci}_{\tau,w}\varphi(\tau,z,w(\cdot)),\, p = \nabla_z\varphi(\tau,z,w(\cdot))\big\},\\[0.2cm] D^+\varphi(\tau,z,w(\cdot)) &=& \big\{(q_0,q) \colon q_0 \geq \partial^{ci}_{\tau,w}\varphi(\tau,z,w(\cdot)),\, q = \nabla_z\varphi(\tau,z,w(\cdot))\big\}.
\end{array} \end{equation*} \end{remark} Hence, the following statement about consistency of the viscosity solution definition (see Definition \ref{def:viscosity_solution}) and problem (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition}) can be obtained from Theorem~\ref{teo:equivalent_solutions}. \begin{theorem}\label{teo:classical_and_viscosity_solutions} a) Let a functional $\varphi \colon \mathbb G \mapsto \mathbb R$ be the viscosity solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}. If $\varphi$ is ci-differentiable at a point $(\tau,z,w(\cdot)) \in \mathbb G_*$, then it satisfies HJ equation {\rm (\ref{Hamilton-Jacobi_equation})} at this point. b) Let a functional $\varphi \colon \mathbb G \mapsto \mathbb R$ be ci-differentiable at every $(\tau,z,w(\cdot)) \in \mathbb G_*$, satisfy HJ equation {\rm (\ref{Hamilton-Jacobi_equation})} on $\mathbb G_*$ and satisfy conditions $(\varphi_1)$, $(\varphi_2)$, and {\rm (\ref{terminal_condition})}. Then $\varphi$ is the viscosity solution of problem {\rm (\ref{Hamilton-Jacobi_equation}),~(\ref{terminal_condition})}. \end{theorem} \subsection{Application for optimal control problems} Although prior works (see, e.g., \cite{Gomoyunov_Plaksin_2019,Lukoyanov_Plaksin_2020b}) consider dynamic optimization problems for neutral-type systems on the space of Lipschitz continuous functions, the results of this paper can also be applied to obtain characterizations of value functionals in such problems.
For example, consider the following optimal control problem: for each $(\tau,w(\cdot)) \in [0,\vartheta] \times \mathrm{Lip}$, it is required to minimize the Bolza cost functional \begin{equation}\label{J} J(\tau,w(\cdot),u(\cdot)) = \sigma(x(\vartheta),x_\vartheta(\cdot)) + \int_\tau^\vartheta f^0(t,x(t),x(t-h),u(t)) \mathrm{d} t, \end{equation} over all measurable functions $u(\cdot)\colon [\tau,\vartheta] \mapsto \mathbb U$, where $x(\cdot) \in \Lambda(\tau,w(-0),w(\cdot))$ is a Lipschitz continuous function satisfying the neutral-type equation \begin{equation}\label{def:neutral_type_equation} \frac{\mathrm{d}}{\mathrm{d} t}\Big(x(t) - g(t,x(t-h))\Big) = f(t,x(t),x(t-h),u(t))\text{ for a.e. } t \in [\tau,\vartheta]. \end{equation} Here $\mathbb U \subset \mathbb R^m$ is a compact set; the function $g$ and the functional $\sigma$ satisfy conditions $(g)$ and $(\sigma)$; the functions $f \colon [0,\vartheta] \times \mathbb R^n \times \mathbb R^n \times \mathbb U \mapsto \mathbb R^n$ and $f^0 \colon [0,\vartheta] \times \mathbb R^n \times \mathbb R^n \times \mathbb U \mapsto \mathbb R$ satisfy the following conditions: \begin{itemize} \item[$(f_1)$] The functions $f$ and $f^0$ are continuous. \item[$(f_2)$] There exists a constant $c_f > 0$ such that \begin{equation*} \big\|f(t,z,r,u)\big\| + \big|f^0(t,z,r,u)\big| \leq c_f\big(1 + \|z\| + \|r\|\big) \end{equation*} for any $t \in [0,\vartheta]$, $z,r \in \mathbb R^n$, and $u \in \mathbb U$. \item[$(f_3)$] For every $\alpha > 0$, there exists $\lambda_f = \lambda_f(\alpha) > 0$ such that \begin{equation*} \begin{array}{c} \big\|f(t,z,r,u) - f(t,z',r',u)\big\| + \big|f^0(t,z,r,u) - f^0(t,z',r',u)\big|\\[0.2cm] \leq \lambda_f\big(\|z - z'\| + \|r - r'\|\big) \end{array} \end{equation*} for any $t \in [0,\vartheta]$, $z,r,z',r' \in \mathbb R^n$: $\max\{\|z\|,\|r\|,\|z'\|,\|r'\|\} \leq \alpha$, and $u \in \mathbb U$.
\end{itemize} Note that, under these conditions, the function \begin{equation}\label{def:hamiltonian} H(t,z,r,s) = \min\limits_{u \in \mathbb U} \big(\langle f(t,z,r,u), s \rangle + f^0(t,z,r,u)\big),\ t \in [0,\vartheta],\ z,r,s \in \mathbb R^n, \end{equation} satisfies conditions $(H_1)$--$(H_3)$. The value functional of this problem is \begin{equation}\label{def:value_functional} \hat{\varphi}(\tau,w(\cdot)) = \inf\limits_{u(\cdot)} J(\tau,w(\cdot),u(\cdot)). \end{equation} Applying Lemma \ref{lem:value_functional} to show that $\hat{\varphi}$ is the functional from Lemma \ref{lem:ex_un_minimax_sol} and then using Lemma \ref{lem:varphi_extension} and Theorem \ref{teo:equivalent_solutions}, taking Remark \ref{rem:conditions} into account, we obtain the following result. \begin{theorem}\label{teo:application} There exists a unique functional $\varphi \colon \mathbb G \mapsto \mathbb R$ satisfying conditions $(\varphi_1)$, $(\varphi_2)$ and the equality \begin{equation*} \varphi(\tau,w(-0),w(\cdot)) = \hat{\varphi}(\tau,w(\cdot)),\quad (\tau,w(\cdot)) \in [0,\vartheta] \times \mathrm{Lip}. \end{equation*} Such a functional $\varphi$ is the unique viscosity solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}. \end{theorem} \section{Proofs} \subsection{Auxiliary statements} \begin{lemma}\label{lem:alpha_x_lambda_x} For every $\alpha > 0$, there exist $\alpha_X = \alpha_X(\alpha) > 0$, $\alpha_X^* = \alpha_X^*(\alpha) > 0$, and $\lambda_X^* = \lambda_X^*(\alpha) > 0$ such that, for each $\tau \in [0,\vartheta]$ and $(z,w(\cdot)) \in P(\alpha)$ {\em(}see {\em(\ref{def:P}))}, the functions $x(\cdot) \in X^1(\tau,z,w(\cdot))$ and $y(t) = x(t) - g(t,x(t-h))$, $t \in [\tau,\vartheta]$, satisfy the relations \begin{equation*} (x(t),x_t(\cdot)) \in P(\alpha_X),\ \ \|y(t)\|\leq \alpha_X^*,\ \ \|y(t) - y(t')\| \leq \lambda^*_X |t - t'|,\ \ t,t' \in [\tau,\vartheta].
\end{equation*} \end{lemma} \begin{proof} The existence of $\alpha_X$ satisfying the first relation can be proved in the same way as in Lemma 6.1 from \cite{Lukoyanov_Plaksin_2020}. Due to condition $(g)$, there exists $\alpha_g > 0$ such that $\|g(\tau,w(-h))\| \leq \alpha_g$ for any $\tau \in [0,\vartheta]$ and $(z,w(\cdot)) \in P(\alpha)$. Then, from (\ref{def:F}), (\ref{def:inclusion}), the second relation is obtained by \begin{equation*} \begin{array}{c} \|y(t)\| \leq \|z\| + \|g(\tau,w(-h))\| + \displaystyle\int_\tau^t \Big(c_H \big(1 + \|x(\xi)\| + \|x(\xi-h)\|\big) + 1\Big) \mathrm{d} \xi \\[0.4cm] \leq \alpha + \alpha_g + (c_H (1 + 2 \alpha_X) + 1) \vartheta =: \alpha_X^*,\quad t \in [\tau,\vartheta], \end{array} \end{equation*} and, taking $\lambda_X^* = c_H (1 + 2 \alpha_X) + 1$, the third relation is derived by \begin{equation*} \|y(t') - y(t)\| \leq \int_{t}^{t'} \Big(c_H \big(1 + \|x(\xi)\| + \|x(\xi-h)\|\big) + 1\Big) \mathrm{d} \xi \leq \lambda_X^* (t' - t), \end{equation*} where, without loss of generality, we assume $\tau \leq t \leq t' \leq \vartheta$. \end{proof} \begin{lemma}\label{lem:g_s} Let the function $g \colon [0,\vartheta] \times \mathbb R^n \mapsto \mathbb R^n$ satisfy $(g)$. Then, for every $\alpha > 0$ there exists $\lambda_g = \lambda_g(\alpha) > 0$ such that \begin{equation*} \big\|g(t,x) - g(t',x')\big\| \leq \lambda_g \big(|t - t'| + \|x - x'\|\big) \end{equation*} for any $(t,x), (t',x') \in [0,\vartheta] \times \mathbb R^n$: $\max\{\|x\|,\|x'\|\} \leq \alpha$. \end{lemma} \begin{lemma}\label{lem:lambda_x_local} Let $\alpha_0,\lambda_0 > 0$. There exists $\lambda_X = \lambda_X(\alpha_0, \lambda_0) > 0$ with the following property. Let $(\tau,z,w(\cdot)) \in \mathbb G_*$. Let $\delta_w > 0$ be such that $w(\cdot)$ is continuously differentiable on $[-h, -h + \delta_w]$ {\rm(}see {\em(\ref{def:G_G_s}))}.
Let the relations \begin{equation}\label{lem:lambda_x_local:condition} (z,w(\cdot)) \in P(\alpha_0),\quad \|w(\xi) - w(\xi')\| \leq \lambda_0 |\xi - \xi'|,\quad \xi,\xi' \in [-h,-h + \delta_w], \end{equation} hold. Then, every function $x(\cdot) \in X^1(\tau,z,w(\cdot))$ satisfies the inequality \begin{equation}\label{lem:lambda_x_local:statement} \|x(t) - x(t')\| \leq \lambda_X |t - t'|,\quad t,t' \in [\tau, \tau + \delta_w]. \end{equation} \end{lemma} \begin{proof} According to Lemmas \ref{lem:alpha_x_lambda_x} and \ref{lem:g_s}, define $\lambda_X^* = \lambda_X^*(\alpha_0)$ and $\lambda_g = \lambda_g(\alpha_0)$. Put $\lambda_X = \lambda_X^* + \lambda_g (1+ \lambda_0)$. Then inequality (\ref{lem:lambda_x_local:statement}) follows from the estimates \begin{equation*} \|x(t) - x(t')\| \leq \|y(t) - y(t')\| + \|g(t,w(t-\tau-h)) - g(t',w(t'-\tau-h))\| \leq \lambda_X |t - t'|. \end{equation*} The lemma is proved. \end{proof} Let $(z,w(\cdot)) \in \mathbb R^n \times \mathrm{PLip}$. Denote by $\Gamma(z,w(\cdot))$ the set of sequences $\{w^j(\cdot)\}_{j \in \mathbb N} \subset \mathrm{Lip}$ such that \begin{equation}\label{def:Gamma} \begin{array}{c} \|w(\cdot) - w^j(\cdot)\|_1 \to 0,\quad \|z - w^j(-0)\| \to 0,\\[0.2cm] \|w(\xi) - w^j(\xi)\| \to 0,\quad \xi \in [-h,0), \end{array} \quad \text{as } j \to \infty. \end{equation} \begin{lemma}\label{lem:Gamma_nonempty} For each $(z,w(\cdot)) \in \mathbb R^n \times \mathrm{PLip}$, there exists a sequence $\{w^j(\cdot)\}_{j \in \mathbb N} \in \Gamma(z,w(\cdot))$ such that \begin{description} \item[$(w^j_1)$] The inclusion $w^j(\cdot) \in \mathrm{C}^1$ holds for any $j \in \mathbb N$. \vspace{0.1cm} \item[$(w^j_2)$] The inequality $\|w^j(\cdot)\|_\infty \leq \max\{\|z\|,\|w(\cdot)\|_\infty\}$ holds for any $j \in \mathbb N$.
\vspace{0.1cm} \item[$(w^j_3)$] For every $\xi \in (-h,0)$ satisfying the equality $w(\xi-0) = w(\xi)$, there exists $\delta > 0$ such that \begin{equation*} \max\limits_{\zeta \in [-\delta,\delta]} \|w(\xi+\zeta) - w^j(\xi+\zeta)\| \to 0 \text{ as } j \to \infty. \end{equation*} \end{description} \end{lemma} \begin{proof} Let us take a continuously differentiable function $\beta(\cdot) \colon (- \infty, + \infty) \mapsto [0,+\infty)$ such that $$ \beta(\xi) = 0,\quad \xi \in (-\infty,0]\cup [1,+\infty),\quad \int_{-\infty}^{+\infty} \beta(\zeta) \mathrm{d}\zeta = 1. $$ Let $\overline{w}(\cdot)\colon [-h,1] \mapsto \mathbb R^n$ be such that $\overline{w}(\xi) = w(\xi)$, $\xi \in [-h,0)$ and $\overline{w}(\xi) = z$, $\xi \in [0,1]$. Then one can show that the functions $$ w^j(\xi) = j \int_{-\infty}^{+\infty} \beta(j \zeta) \overline{w}(\xi + \zeta) \mathrm{d} \zeta,\quad j \in \mathbb N, $$ satisfy the statements of the lemma. \end{proof} \begin{lemma}\label{lem:Gamma_motion} Let $(\tau,z,w(\cdot)) \in \mathbb G$. Let $\{w^j(\cdot)\}_{j \in \mathbb N} \in \Gamma(z,w(\cdot))$ satisfy conditions $(w^j_1)$--$(w^j_3)$. Let $x^j(\cdot) \in X^{1/j}(\tau,w^j(-0),w^j(\cdot))$, $j \in \mathbb N$. Then there exist a subsequence $x^{\,j_m}(\cdot)$, $m \in \mathbb N$ and a function $x(\cdot) \in X^0(\tau,z,w(\cdot))$ such that $\{x^{j_m}_t(\cdot)\}_{m \in \mathbb N} \in \Gamma(x(t),x_t(\cdot))$ for any $t \in [\tau, \min\{\tau + h,\vartheta\})$. \end{lemma} \begin{proof} Let $\alpha_0 = \max\{\|z\|, \|w(\cdot)\|_\infty\}$. In accordance with Lemma \ref{lem:alpha_x_lambda_x}, define $\alpha^*_X = \alpha^*_X(\alpha_0)$ and $\lambda^*_X = \lambda^*_X(\alpha_0)$. Then, for the functions $y^j(t) = x^j(t) - g(t,x^j(t-h))$, $t \in [\tau,\vartheta]$, $j \in \mathbb N$, we have \begin{equation}\label{lem:Gamma_motion:yi} \|y^j(t)\| \leq \alpha_X^*,\quad \|y^j(t) - y^j(t')\| \leq \lambda_X^* |t - t'|,\quad t,t' \in [\tau,\vartheta].
\end{equation} Due to these estimates and the Arzela--Ascoli theorem (see, e.g., \cite[p. 207]{Natanson_1960}), without loss of generality, we can suppose that there exists a continuous function $y(\cdot) \colon [\tau,\vartheta] \mapsto \mathbb R^n$ such that \begin{equation}\label{lem:Gamma_motion:y} \max\limits_{t \in [\tau,\vartheta]} \|y(t) - y^j(t)\| \to 0\text{ as } j \to \infty. \end{equation} From (\ref{lem:Gamma_motion:yi}) and (\ref{lem:Gamma_motion:y}), one can establish the Lipschitz continuity of $y(\cdot)$. Denote $\tau_h = \min\{\tau + h,\vartheta\}$. Define the function $x(\cdot)$ so that \begin{equation}\label{lem:Gamma_motion:x} x(t) = \left\{ \begin{array}{ll} w(t-\tau),\ \text{if } t \in [\tau-h, \tau),\\[0.2cm] y(t) + g(t,w(t-\tau-h)),\ \text{if } t \in [\tau, \tau_h),\\[0.2cm] g(t,x(t-h)),\ \text{if } t \in [\tau_h, \vartheta]. \end{array} \right. \end{equation} Choose $\lambda_g = \lambda_g(\alpha_0)$ according to Lemma \ref{lem:g_s}. Then, taking into account the definitions of $x^j(\cdot)$, $y^j(\cdot)$, and $x(\cdot)$ and condition $(w^j_2)$, we have \begin{equation*} \|x(t) - x^j(t)\| \leq \|y(t) - y^j(t)\| + \lambda_g \|w(t-\tau-h) - w^j(t-\tau-h)\|,\quad t \in [\tau, \tau_h), \end{equation*} \begin{equation}\label{lem:Gamma_motion:lambda_g} \|x(t) - x^j(t)\| = \|w(t - \tau) - w^j(t - \tau)\|,\quad t \in [\tau-h, \tau). \end{equation} From these estimates, taking into account the inclusion $\{w^j(\cdot)\}_{j \in \mathbb N} \in \Gamma(z, w(\cdot))$ and (\ref{lem:Gamma_motion:y}), we obtain $\{x^j_t(\cdot)\}_{j \in \mathbb N} \in \Gamma(x(t), x_t(\cdot))$ for any $t \in [\tau, \tau_h)$. Let us show the inclusion $x(\cdot) \in X^0(\tau,z,w(\cdot))$. First, according to (\ref{lem:Gamma_motion:x}), (\ref{lem:Gamma_motion:lambda_g}), the inclusion $x(\cdot) \in \Lambda(\tau,z,w(\cdot))$ holds.
Note that, due to (\ref{lem:Gamma_motion:x}), the function $x(\cdot)$ satisfies (\ref{def:inclusion}) with $\eta = 0$ for every $t \in (\tau_h, \vartheta]$. Let $t \in (\tau, \tau_h)$ be such that the derivative $\mathrm{d} y(t)/ \mathrm{d} t$ exists and $w(t-\tau-h-0) = w(t-\tau-h)$. Let $\varepsilon > 0$. Then, due to the inclusion $\{w^j(\cdot)\}_{j \in \mathbb N} \in \Gamma(z,w(\cdot))$ and relations (\ref{lem:Gamma_motion:yi}), (\ref{lem:Gamma_motion:y}), there exist $\delta \in (0,\min\{t-\tau, \tau_h-t, \varepsilon / (9 c_H \lambda_g)\})$ and $j_* > 0$ such that \begin{equation}\label{lem:Gamma_motion:delta} \begin{array}{c} \|w(t-\tau-h) - w^j(t-\tau-h+\zeta)\| \leq \varepsilon / (3 c_H\max\{1, 3 \lambda_g\}),\\[0.2cm] \|y(t) - y^j(t + \zeta)\| \leq \varepsilon / (9 c_H),\quad 1/j \leq \varepsilon / 3, \end{array} \end{equation} for any $\zeta \in [-\delta,\delta]$ and $j \geq j_*$, where $c_H$ is from condition $(H_2)$. From these estimates, the choice of $\lambda_g$, and the definitions of $x^j(\cdot)$, $y^j(\cdot)$, and $x(\cdot)$, we derive \begin{equation*} \begin{array}{c} \|x(t) - x^j(t + \zeta)\| \leq \|y(t) - y^j(t + \zeta)\| \\[0.2cm] + \lambda_g \big(|\zeta| + \|w(t-\tau-h) - w^j(t-\tau-h+\zeta)\|\big) \leq \varepsilon / (3 c_H). \end{array} \end{equation*} Then, due to the inclusion $x^j(\cdot) \in X^{1/j}(\tau,w^j(-0),w^j(\cdot))$ and the first and the third inequalities in (\ref{lem:Gamma_motion:delta}), we derive \begin{equation*} \|\mathrm{d}\, y^j(t + \zeta) / \mathrm{d} t\| \leq c_H \big(1 + \|x(t)\| + \|w(t-\tau-h)\|\big) + \varepsilon\ \text{for a.e. }\ \zeta \in [-\delta,\delta]. \end{equation*} Using Lemma 12 from \cite[p. 63]{Filippov_1988}, we obtain \begin{equation*} \|\mathrm{d} y(t) / \mathrm{d} t\|\leq c_H\big(1 + \|x(t)\| + \|w(t-\tau-h)\|\big) + \varepsilon. \end{equation*} Since this estimate holds for every $\varepsilon > 0$, it also holds with $\varepsilon = 0$. Thus, the inclusion $x(\cdot) \in X^0(\tau,z,w(\cdot))$ is proved.
\end{proof} \begin{lemma}\label{lem:dg} Let $(\tau,z,w(\cdot)) \in \mathbb G_*$. Let $\delta_w \in (0,h)$ be such that $w(\cdot)$ is continuously differentiable on $[-h, -h + \delta_w]$. Then the inclusion $(t,x(t),x_t(\cdot)) \in \mathbb G_*$ holds for any $t \in [\tau, \tau + \delta_w]$ {\rm(}see {\em(\ref{def:G_G_s}))}, and the following functions coincide and are continuous: \begin{equation*} \overline{g}(t) := \frac{\mathrm{d}}{\mathrm{d} t} \big(g(t,x(t-h))\big) = \partial^{ci}_{\tau,w} g(t,x_t(\cdot)) = \partial^{ci}_{\tau,w} g(t,\kappa_t(\cdot)),\quad t \in [\tau, \tau + \delta_w], \end{equation*} for any $x(\cdot) \in \Lambda(\tau,z,w(\cdot))$ and $\kappa(\cdot) \in \Lambda_0(\tau,z,w(\cdot))$. \end{lemma} \begin{proof} The statement follows directly from definition (\ref{def:derivative_g}) of $\partial^{ci}_{\tau,w} g(\tau,w(\cdot))$. \end{proof} \begin{lemma}\label{lem:kappa} Let $(z,w(\cdot)) \in \mathbb R^n \times \mathrm{PLip}$. Then, for every $\varepsilon > 0$, there exists $\nu = \nu(\varepsilon) > 0$ such that, for every $\tau \in [0,\vartheta]$ and $t, t' \in [\tau,\vartheta]$ satisfying $|t - t'| \leq \nu$, the following inequality holds: \begin{displaymath} \|\kappa_t(\cdot) - \kappa_{t'}(\cdot)\|_1 \leq \varepsilon,\quad \kappa(\cdot) \in \Lambda_0(\tau,z,w(\cdot)). \end{displaymath} \end{lemma} \begin{proof} The lemma can be proved using an approximation of $(z,w(\cdot))$ by Lipschitz continuous functions (see Lemma \ref{lem:Gamma_nonempty}).
\end{proof} \subsection{Proof of Theorem \ref{teo:Lip_minimax_solution}} According to Theorem 3 from \cite{Plaksin_2021b}, the following lemma holds: \begin{lemma}\label{lem:ex_un_minimax_sol} There exists a unique continuous (with respect to uniform norm) functional $\hat{\varphi} \colon [0,\vartheta] \times \mathrm{Lip} \mapsto \mathbb R$ satisfying the condition $\hat{\varphi}(\vartheta,w(\cdot)) = \sigma(w(-0),w(\cdot))$ for any $w(\cdot) \in \mathrm{Lip}$ and the inequalities \begin{subequations} \begin{align} \inf\limits_{x(\cdot) \in X^0(\tau,w(-0),w(\cdot))} \big(\hat{\varphi}(t,x_t(\cdot)) + \omega(\tau,t,x(\cdot),s)\big) \leq \hat{\varphi}(\tau,w(\cdot)), \label{def:upper_minmax_solution_} \\[0.0cm] \sup\limits_{x(\cdot) \in X^0(\tau,w(-0),w(\cdot))} \big(\hat{\varphi}(t,x_t(\cdot)) + \omega(\tau,t,x(\cdot),s)\big) \geq \hat{\varphi}(\tau,w(\cdot)), \label{def:lower_minmax_solution_} \end{align} \end{subequations} for any $(\tau,w(\cdot)) \in [0,\vartheta) \times \mathrm{Lip}$, $t \in (\tau,\vartheta]$, and $s \in \mathbb R^n$, where $\omega$ is from {\rm (\ref{def:omega})}. \end{lemma} Lemmas \ref{lem:lk} and \ref{lem:upper_lower_stability} below can be proved similarly to Lemmas 1 and 3 from \cite{Plaksin_2021b}. Let $\alpha > 0$. Define $\lambda_g = \lambda_g(\alpha) > 1$ and $\lambda_H = \lambda_H(\alpha) > 1$ according to Lemma \ref{lem:g_s} and condition $(H_3)$. For every $\gamma,\varepsilon > 0$ and $(\tau,z,w(\cdot)) \in \mathbb G$, denote \begin{equation}\label{def:nu_theta_kappa} \begin{array}{rcl} \theta^\alpha_\gamma(\tau) &=& (e^{-(4 \lambda_H + 2 \lambda_g / h) \tau} - \gamma) / \gamma,\\[0.2cm] \nu^\alpha_{\gamma,\varepsilon}(\tau,z,w(\cdot)) &=& \theta^\alpha_\gamma(\tau) \bigg(\!\sqrt{\varepsilon^4 + \|z\|^2} + 2 \lambda_H \displaystyle\int_{-h}^0\!\! \bigg(1 - \frac{2 \lambda_g \xi}{h}\bigg) \|w(\xi)\| \mathrm{d}\xi \! \bigg). \end{array} \end{equation} \begin{lemma}\label{lem:lk} Let $\alpha,\varepsilon > 0$ and $\tau \in [0,\vartheta]$.
Let $\gamma > 0$ be such that $\theta^\alpha_\gamma(t) > 1$ for any $t \in [0,\vartheta]$. Let $x(\cdot), x'(\cdot)$ be Lipschitz continuous functions from $[\tau-h,\vartheta]$ to $\mathbb R^n$ satisfying the inequalities \begin{equation}\label{lem:lk:condition} \|x(t)\| \leq \alpha,\quad \|x'(t)\| \leq \alpha,\quad t \in [\tau-h,\vartheta]. \end{equation} Then the following inequality holds: \begin{equation*} \nu^\alpha_{\gamma,\varepsilon}(t,\Delta y(t),\Delta x_t(\cdot)) - \nu^\alpha_{\gamma,\varepsilon}(\tau,\Delta y(\tau),\Delta x_\tau(\cdot)) \leq \int_\tau^t \Delta H^\alpha_{\gamma,\varepsilon}(\xi) \mathrm{d} \xi,\quad t \in [\tau,\vartheta], \end{equation*} where $\Delta x(t) = x(t) - x'(t)$, $\Delta y(t) = \Delta x(t) - g(t,x(t-h)) + g(t,x'(t-h))$, and \begin{equation*} \begin{array}{rcl} \Delta H^\alpha_{\gamma,\varepsilon}(t) &=& H(t,x(t),x(t-h),\nabla_z \nu^\alpha_{\gamma,\varepsilon}(t,\Delta y(t),\Delta x_t(\cdot))) \\[0.2cm] &-& H(t,x'(t),x'(t-h),\nabla_z \nu^\alpha_{\gamma,\varepsilon}(t,\Delta y(t),\Delta x_t(\cdot))) \\[0.2cm] &+& \langle \Delta y(t), \nabla_z \nu^\alpha_{\gamma,\varepsilon}(t,\Delta y(t),\Delta x_t(\cdot)) \rangle. \end{array} \end{equation*} \end{lemma} \begin{lemma}\label{lem:upper_lower_stability} Let $\hat{\varphi} \colon [0,\vartheta] \times \mathrm{Lip} \mapsto \mathbb R$ be taken from Lemma \ref{lem:ex_un_minimax_sol}. Let $\alpha, \gamma, \varepsilon > 0$, $(\tau,w(\cdot)), (\tau,w'(\cdot)) \in [0,\vartheta] \times \mathrm{Lip}$ and $t \in [\tau,\vartheta]$. Then there exist functions $x(\cdot) \in X^0(\tau,w(-0),w(\cdot))$ and $x'(\cdot) \in X^0(\tau,w'(-0),w'(\cdot))$ such that \begin{equation*} \begin{array}{c} \hat{\varphi}(t,x_t(\cdot)) - \hat{\varphi}(t,x'_t(\cdot)) + \displaystyle\int_\tau^t \Delta H_{\gamma,\varepsilon}^\alpha(\xi) \mathrm{d} \xi \leq \hat{\varphi}(\tau,w(\cdot)) - \hat{\varphi}(\tau,w'(\cdot)) + (t - \tau) \varepsilon, \end{array} \end{equation*} where we use the notation of Lemma \ref{lem:lk}.
\end{lemma} \begin{lemma}\label{lem:phi_lip} Let $\hat{\varphi} \colon [0,\vartheta] \times \mathrm{Lip} \mapsto \mathbb R$ be taken from Lemma \ref{lem:ex_un_minimax_sol}. For every $\alpha > 0$, there exists $\lambda_\varphi = \lambda_\varphi(\alpha) > 0$ such that \begin{equation*} |\hat{\varphi}(\tau,w(\cdot)) - \hat{\varphi}(\tau,w'(\cdot))| \leq \lambda_\varphi \upsilon(\tau,w(-0) - w'(-0),w(\cdot) - w'(\cdot)) \end{equation*} for any $\tau \in [0,\vartheta]$ and $(z,w(\cdot)), (z',w'(\cdot)) \in P(\alpha) \cap (\mathbb R^n \times \mathrm{Lip})$, where $P(\alpha)$ and $\upsilon$ are defined according to {\rm (\ref{def:P})} and {\rm (\ref{def:upsilon})}, respectively. \end{lemma} \begin{proof} Let us prove that, for each $i \in \overline{0,I}$ and $\alpha > 0$, there exists $\lambda_i = \lambda_i(\alpha) > 0$ such that \begin{equation}\label{lem:phi_lip:induction} \begin{array}{c} \hat{\varphi}(\tau,w(\cdot)) - \hat{\varphi}(\tau,w'(\cdot)) \leq \lambda_i \upsilon(\tau,w(-0) - w'(-0),w(\cdot) - w'(\cdot)) \end{array} \end{equation} for any $\tau \in [i h,\vartheta]$ and $(w(-0),w(\cdot)), (w'(-0),w'(\cdot)) \in P(\alpha) \cap (\mathbb R^n \times \mathrm{Lip})$. After that, taking $\lambda_\varphi = \lambda_0$, we will get the statement of the lemma. Note that, for $i = I$ and each $\alpha > 0$, inequality (\ref{lem:phi_lip:induction}) holds due to (\ref{def:upsilon}), conditions (\ref{terminal_condition}), and $(\sigma)$ if we take $\lambda_I = \lambda_I(\alpha) = \lambda_\sigma(\alpha)$. Assume that inequality (\ref{lem:phi_lip:induction}) holds for $i = j + 1 \leq I$ and prove it for $i = j$. Let $\alpha > 0$. According to Lemmas \ref{lem:alpha_x_lambda_x}, \ref{lem:g_s} and condition $(H_3)$, define $\alpha_X = \alpha_X(\alpha)$, $\lambda_g = \lambda_g(\alpha_X)$, and $\lambda_H = \lambda_H(\alpha_X) > 1$. Due to our assumption, there exists $\lambda_{j+1} = \lambda_{j+1}(\alpha_X)$ such that (\ref{lem:phi_lip:induction}) holds for $i = j + 1$.
In accordance with (\ref{def:nu_theta_kappa}), there exists $\gamma > 0$ such that \begin{equation}\label{lem:phi_lip:gamma} \theta^{\alpha_X}_{\gamma}(0) \geq \theta^{\alpha_X}_{\gamma}(t) \geq \max\{\lambda_{j+1},1\},\quad t \in [0,\vartheta]. \end{equation} Put \begin{equation}\label{lem:phi_lip:lambda_j} \lambda_j = 2 \theta^{\alpha_X}_{\gamma}(0) \lambda_H (1 + 2 \lambda_g) + \lambda_{j+1} (2 + \lambda_g). \end{equation} Since $\lambda_j > \lambda_{j+1}$, inequality (\ref{lem:phi_lip:induction}) already holds for $\lambda_j$ and $\tau \in [(j+1)h, \vartheta]$. Let $\tau \in [j h,(j+1) h)$ and $(w(-0),w(\cdot)), (w'(-0),w'(\cdot)) \in P(\alpha) \cap (\mathbb R^n \times \mathrm{Lip})$. Let us show, for every $\zeta > 0$, the following estimate: \begin{equation}\label{lem:phi_lip:zeta} \hat{\varphi}(\tau,w'(\cdot)) - \hat{\varphi}(\tau,w(\cdot)) \leq \lambda_j \upsilon(\tau,w'(-0) - w(-0),w'(\cdot) - w(\cdot)) + \zeta. \end{equation} Let $\zeta > 0$. Denote $\vartheta_* = (j + 1) h$. Choose $\varepsilon > 0$ such that \begin{equation}\label{lem:phi_lip:epsilon} \theta^{\alpha_X}_{\gamma}(\tau) \varepsilon^2 + (\vartheta_* - \tau) \varepsilon \leq \zeta. \end{equation} According to Lemma \ref{lem:upper_lower_stability}, where we take $\alpha = \alpha_X$, define the functions $x(\cdot) \in X^0(\tau,w(-0),w(\cdot))$ and $x'(\cdot) \in X^0(\tau,w'(-0),w'(\cdot))$. Due to the choice of $\alpha_X$, these functions satisfy (\ref{lem:lk:condition}) for $\alpha = \alpha_X$. Then, using Lemma \ref{lem:lk}, we have \begin{equation}\label{lem:phi_lip:1} \begin{array}{c} \hat{\varphi}(\tau,w'(\cdot)) - \hat{\varphi}(\tau,w(\cdot)) \leq \hat{\varphi}(\vartheta_*,x'_{\vartheta_*}(\cdot)) - \hat{\varphi}(\vartheta_*,x_{\vartheta_*}(\cdot)) \\[0.2cm] + \nu^{\alpha_X}_{\gamma,\varepsilon}(\tau,\Delta y(\tau),\Delta x_\tau(\cdot)) - \nu^{\alpha_X}_{\gamma,\varepsilon}(\vartheta_*,\Delta y(\vartheta_*),\Delta x_{\vartheta_*}(\cdot)) + (\vartheta_* - \tau) \varepsilon.
\end{array} \end{equation} Due to the choice of $\lambda_g$, $\lambda_H > 1$, and (\ref{def:upsilon}), (\ref{def:nu_theta_kappa}), (\ref{lem:lk:condition}), (\ref{lem:phi_lip:gamma}), we derive \begin{equation}\label{lem:phi_lip:2} \begin{array}{c} \nu^{\alpha_X}_{\gamma,\varepsilon}(\tau,\Delta y(\tau),\Delta x_\tau(\cdot))\\[0.2cm] \leq \theta^{\alpha_X}_\gamma(\tau) \big(\varepsilon^2 + \|\Delta x(\tau)\| + \lambda_g \|\Delta x(\tau-h)\| + 2 \lambda_H (1 + 2 \lambda_g) \|\Delta x_\tau(\cdot)\|_1\big) \\[0.2cm] \leq \theta^{\alpha_X}_\gamma(\tau) \varepsilon^2 + 2 \theta^{\alpha_X}_\gamma(0) \lambda_H( 1 + 2 \lambda_g) \upsilon(\tau,w(-0)-w'(-0),w(\cdot)-w'(\cdot)) \end{array} \end{equation} and, taking into account the choice of $\lambda_{j+1}$ and (\ref{lem:phi_lip:gamma}), we obtain \begin{equation}\label{lem:phi_lip:3} \begin{array}{c} \hat{\varphi}(\vartheta_*,x'_{\vartheta_*}(\cdot)) - \hat{\varphi}(\vartheta_*,x_{\vartheta_*}(\cdot)) \leq \lambda_{j+1} \upsilon(\vartheta_*,\Delta x(\vartheta_*), \Delta x_{\vartheta_*}(\cdot))\\[0.2cm] \leq \lambda_{j+1} \big(\|\Delta y(\vartheta_*)\| + \|\Delta x_{\vartheta_*}(\cdot)\|_1 + (2 + \lambda_g) \|\Delta x(j h)\|\big) \\[0.2cm] \leq \nu^{\alpha_X}_{\gamma,\varepsilon}(\vartheta_*,\Delta y(\vartheta_*),\Delta x_{\vartheta_*}(\cdot)) + \lambda_{j+1} (2 + \lambda_g) \|\Delta x(j h)\|. \end{array} \end{equation} Due to (\ref{def:upsilon}), the inequality $\|\Delta x(j h)\| \leq \upsilon(\tau,w(-0)-w'(-0),w(\cdot)-w'(\cdot))$ holds in both cases $\tau = j h$ and $\tau \in (j h,\vartheta_*)$. Thus, from (\ref{lem:phi_lip:lambda_j}), (\ref{lem:phi_lip:epsilon})--(\ref{lem:phi_lip:3}), we get (\ref{lem:phi_lip:zeta}). \end{proof} \begin{lemma}\label{lem:varphi_extension} Let $\hat{\varphi} \colon [0,\vartheta] \times \mathrm{Lip} \mapsto \mathbb R$ be taken from Lemma \ref{lem:ex_un_minimax_sol}.
Then there exists a unique functional $\varphi \colon \mathbb G \mapsto \mathbb R$ satisfying conditions $(\varphi_1)$, $(\varphi_2)$ and the equality \begin{equation}\label{lem:varphi_extension:statement} \varphi(\tau,w(-0),w(\cdot)) = \hat{\varphi}(\tau,w(\cdot)),\quad (\tau,w(\cdot)) \in [0,\vartheta] \times \mathrm{Lip}. \end{equation} \end{lemma} \begin{proof} Let $(\tau,z,w(\cdot)) \in \mathbb G$ and $\alpha_0 = \max\{\|z\|,\|w(\cdot)\|_\infty\}$. Due to Lemma \ref{lem:Gamma_nonempty}, there exists $\{w^j(\cdot)\}_{j \in \mathbb N} \in \Gamma(z,w(\cdot))$ satisfying condition $(w^j_2)$. Let us consider the sequence $A^j = \hat{\varphi}(\tau,w^j(\cdot))$, $j \in \mathbb N$. Then, taking $\lambda_\varphi = \lambda_\varphi(\alpha_0)$ according to Lemma \ref{lem:phi_lip} and (\ref{def:upsilon}), we have \begin{equation*} |A^j| \leq |\hat{\varphi}(\tau,w^j(\cdot)) - \hat{\varphi}(\tau,0(\cdot))| + |\hat{\varphi}(\tau,0(\cdot))| \leq \lambda_\varphi (3 + h) \alpha_0 + |\hat{\varphi}(\tau,0(\cdot))|. \end{equation*} Therefore, the sequence $A^j$ is bounded. Hence, without loss of generality, we can assume the existence of $A^*$ such that $A^{j} \to A^*$ as $j \to \infty$. Let us show that this limit does not depend on the choice of $\{w^j(\cdot)\}_{j \in \mathbb N} \in \Gamma(z,w(\cdot))$ satisfying condition $(w^j_2)$. Let $\{r^j(\cdot)\}_{j \in \mathbb N} \in \Gamma(z,w(\cdot))$ satisfy condition $(w^j_2)$. Then, for the $\lambda_\varphi$ defined above, using the definition of $\Gamma(z,w(\cdot))$ (see (\ref{def:Gamma})) and (\ref{def:upsilon}), we derive \begin{equation*} |\hat{\varphi}(\tau,w^j(\cdot)) - \hat{\varphi}(\tau,r^j(\cdot))| \leq \lambda_\varphi \upsilon(\tau, w^j(-0) - r^j(-0), w^j(\cdot) - r^j(\cdot)) \to 0 \end{equation*} as $j \to +\infty$. Thus, $\hat{\varphi}(\tau,r^j(\cdot)) \to A^*$ as $j \to +\infty$ for any $\{r^j(\cdot)\}_{j \in \mathbb N} \in \Gamma(z,w(\cdot))$ satisfying condition $(w^j_2)$. Put $\varphi(\tau,z,w(\cdot)) = A^*$.
In the same way, we can define the values $\varphi(\tau,z,w(\cdot))$ for any $(\tau,z,w(\cdot)) \in \mathbb G$. The equality (\ref{lem:varphi_extension:statement}) directly follows from the inclusion $\{w^j(\cdot) \equiv w(\cdot)\}_{j \in \mathbb N} \in \Gamma(w(-0),w(\cdot))$ for any $w(\cdot) \in \mathrm{Lip}$. Condition $(\varphi_1)$ holds for $\varphi$ due to (\ref{lem:varphi_extension:statement}) and the continuity of $\hat{\varphi}$. Let us show that condition $(\varphi_2)$ holds. Let $\alpha > 0$. Define $\lambda_\varphi = \lambda_\varphi(\alpha)$ according to Lemma \ref{lem:phi_lip}. Let $\tau \in [0,\vartheta]$ and $(z,w(\cdot)), (z',w'(\cdot)) \in P(\alpha)$. Due to Lemma~\ref{lem:Gamma_nonempty}, there exist $\{w^j(\cdot)\}_{j \in \mathbb N} \in \Gamma(z,w(\cdot))$ and $\{r^j(\cdot)\}_{j \in \mathbb N} \in \Gamma(z',w'(\cdot))$ satisfying condition $(w^j_2)$. Then we have \begin{equation*} |\hat{\varphi}(\tau,w^j(\cdot)) - \hat{\varphi}(\tau,r^j(\cdot))| \leq \lambda_\varphi \upsilon(\tau,w^j(-0)-r^j(-0),w^j(\cdot)-r^j(\cdot)),\quad j \in \mathbb N. \end{equation*} Passing to the limit as $j \to + \infty$, taking into account the construction of $\varphi$, we obtain (\ref{Phi_lip}). Let us prove the uniqueness. Let functionals $\varphi \colon \mathbb G \mapsto \mathbb R$ and $\varphi' \colon \mathbb G \mapsto \mathbb R$ satisfy $(\varphi_1)$, $(\varphi_2)$, and (\ref{lem:varphi_extension:statement}). Let $(\tau,z,w(\cdot)) \in \mathbb G$. According to Lemma~\ref{lem:Gamma_nonempty}, there exists $\{w^j(\cdot)\}_{j \in \mathbb N} \in \Gamma(z,w(\cdot))$ satisfying condition $(w^j_2)$. Let $\alpha_0 = \max\{\|z\|,\|w(\cdot)\|_\infty\}$.
Then, due to $(\varphi_2)$, there exists $\lambda_\varphi(\alpha_0)$ such that, taking (\ref{lem:varphi_extension:statement}) into account, the following estimate holds: \begin{equation*} \begin{array}{c} |\varphi(\tau,z,w(\cdot)) - \varphi'(\tau,z,w(\cdot))| \leq |\varphi(\tau,z,w(\cdot)) - \varphi(\tau,w^j(-0),w^j(\cdot))| \\[0.2cm] + |\varphi'(\tau,z,w(\cdot)) - \varphi'(\tau,w^j(-0),w^j(\cdot))| \leq 2 \lambda_\varphi \upsilon(\tau,z - w^j(-0),w(\cdot) - w^j(\cdot)). \end{array} \end{equation*} Passing to the limit as $j \to + \infty$, we get $\varphi(\tau,z,w(\cdot)) = \varphi'(\tau,z,w(\cdot))$. \end{proof} \subsection{Proof of Theorem \ref{teo:minimax_solutions}} \begin{lemma}\label{lem:C1_and_minimax_solutions} If the functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is a $\mathrm{C}^1$-minimax solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}, then it is the minimax solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}. \end{lemma} \begin{proof} Let us show that $\varphi$ satisfies inequalities (\ref{def:upper_minmax_solution}), (\ref{def:lower_minmax_solution}) for any $(\tau,z,w(\cdot)) \in \mathbb G$, $\tau < \vartheta$, $t \in (\tau,\vartheta]$, $s \in \mathbb R^n$, and $\eta = 0$, where, without loss of generality, we can suppose that $i h \leq \tau < t \leq (i+1) h$ for some $i \in \overline{0,I-1}$ and $t < \tau + h$. Let $i \in \overline{0,I-1}$, $(\tau,z,w(\cdot)) \in [ih, (i+1)h) \times \mathbb R^n \times \mathrm{PLip}$, $t \in (\tau,(i+1)h]$, and $s \in \mathbb R^n$. Due to Lemma \ref{lem:Gamma_nonempty}, there exists a sequence $\{w^j(\cdot)\}_{j \in \mathbb N} \in \Gamma(z,w(\cdot))$ such that conditions $(w^j_1)$--$(w^j_3)$ hold.
Since $\varphi$ is a $\mathrm{C}^1$-minimax solution of problem (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition}), for each $j \in \mathbb N$, there exists $x^j(\cdot) \in X^{1/j}(\tau,w^j(-0),w^j(\cdot))$ such that \begin{equation}\label{lem_cb:1} \varphi(t,x^j(t),x^j_t(\cdot)) + \omega(\tau,t,x^j(\cdot),s) \leq \varphi(\tau,w^j(-0),w^j(\cdot)) + 1 / j. \end{equation} In accordance with Lemma \ref{lem:Gamma_motion}, without loss of generality, we can suppose the existence of $x(\cdot) \in X^0(\tau,z,w(\cdot))$ such that $\{x^j_t(\cdot)\}_{j \in \mathbb N} \in \Gamma(x(t),x_t(\cdot))$. Let $\alpha_0 = \max\{\|z\|,\|w(\cdot)\|_\infty\}$. According to Lemmas \ref{lem:alpha_x_lambda_x}, \ref{lem:g_s} and conditions $(H_3)$, $(\varphi_2)$, define $\alpha_X = \alpha_X(\alpha_0)$, $\lambda_g = \lambda_g(\alpha_X)$, $\lambda_H = \lambda_H(\alpha_X)$, and $\lambda_\varphi = \lambda_\varphi(\alpha_X)$. Denote $\lambda_\omega = \lambda_H (1 + \|s\|) + (1 + \lambda_g) \|s\|$. Then, using (\ref{def:omega}) and (\ref{def:Gamma}), we derive \begin{equation*} \begin{array}{rcl} \big|\varphi(\tau,z,w(\cdot)) - \varphi(\tau,w^j(-0),w^j(\cdot))\big| &\leq& \lambda_\varphi \upsilon(\tau,z-w^j(-0),w(\cdot)-w^j(\cdot)),\\[0.2cm] \big|\varphi(t,x(t),x_t(\cdot)) - \varphi(t,x^j(t),x^j_t(\cdot))\big| &\leq& \lambda_\varphi \upsilon(t,x(t)-x^j(t),x_t(\cdot)-x^j_t(\cdot)),\\[0.2cm] \big|\omega(\tau,t,x(\cdot),s) - \omega(\tau,t,x^j(\cdot),s)\big| &\leq& \lambda_\omega \upsilon(t,x(t)-x^j(t),x_t(\cdot)-x^j_t(\cdot))\\[0.2cm] &+& \lambda_\omega \upsilon(\tau,z-w^j(-0),w(\cdot)-w^j(\cdot)). \end{array} \end{equation*} Thus, passing to the limit as $j \to + \infty$, from (\ref{lem_cb:1}), we derive \begin{equation*} \varphi(t,x(t),x_t(\cdot)) + \omega(\tau,t,x(\cdot),s) \leq \varphi(\tau,z,w(\cdot)). \end{equation*} This estimate implies (\ref{def:upper_minmax_solution}). Inequality (\ref{def:lower_minmax_solution}) can be proved in a similar way.
The functional $\varphi$ satisfies condition (\ref{terminal_condition}) for any $w(\cdot) \in \mathrm{PLip}$ since $\varphi$ satisfies condition (\ref{terminal_condition}) for any $w(\cdot) \in \mathrm{C}^1$ and due to Lemma \ref{lem:Gamma_nonempty} and condition $(\sigma)$. \end{proof} \subsection{Proof of Theorem \ref{teo:equivalent_solutions}} \begin{lemma}\label{lem:delta_s} Let $(\tau,z,w(\cdot)) \in \mathbb G_*$. Let $i \in \overline{0,I-1}$ satisfy $\tau \in (i h, (i + 1) h)$. Then there exists $\delta_* \in (0,h)$ such that $[\tau,\tau + \delta_*] \subset (i h, (i + 1) h)$ and the function $w(\cdot)$ is continuously differentiable on $[-h,-h+\delta_*]$. \end{lemma} \begin{proof} According to definition (\ref{def:G_G_s}) of $\mathbb G_*$, there exists $\delta_w > 0$ such that $w(\cdot)$ is continuously differentiable on $[-h,-h+\delta_w]$. Let $\delta_i > 0$ be such that $\tau + \delta_i < (i + 1) h$. Then, taking $\delta_* = \min\{\delta_w, \delta_i\}$, we obtain the statement of the lemma. \end{proof} \begin{lemma}\label{minimax_and_viscosity_solutions} If the functional $\varphi \colon \mathbb G \mapsto \mathbb R$ is a minimax solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}, then it is the viscosity solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}. \end{lemma} \begin{proof} Let us prove that $\varphi$ satisfies (\ref{sub_viscosity_solution}). Let $(\tau,z,w(\cdot)) \in \mathbb G_*$, $\psi \in \mathrm{C}^1(\mathbb R \times \mathbb R^n,\mathbb R)$, and $\delta > 0$ be such that \begin{equation*} \varphi(\tau,z,w(\cdot)) - \psi(\tau,z) \leq \varphi(t,x,\kappa_t(\cdot)) - \psi(t,x),\quad (t,x) \in O^+_\delta(\tau,z), \end{equation*} where $\kappa(\cdot) \in \Lambda_0(\tau,z,w(\cdot))$. Let $\alpha_0 = \max\{\|z\|,\|w(\cdot)\|_\infty\}$.
According to Lemmas \ref{lem:alpha_x_lambda_x}, \ref{lem:lambda_x_local}, \ref{lem:delta_s} and condition $(\varphi_2)$, define $\alpha_X = \alpha_X(\alpha_0)$, $\lambda_X = \lambda_X(\alpha_0)$, $\delta_*$, and $\lambda_\varphi = \lambda_\varphi(\alpha_X)$, respectively. Denote $t_* = \tau + \min\{\delta_*, \delta, \delta / \lambda_X\}$. Then, we have \begin{equation*} \big|\varphi(t,x(t),x_{t}(\cdot)) - \varphi(t,x(t),\kappa_{t}(\cdot))\big| \leq \lambda_\varphi \displaystyle\int_\tau^{t} \|x(\xi) - z\| \mathrm{d} \xi \leq \lambda_\varphi \lambda_X (t - \tau)^2, \end{equation*} \begin{equation} (t,x(t)) \in O^+_\delta(\tau,z),\quad t \in [\tau,t_*],\quad x(\cdot) \in X^1(\tau,z,w(\cdot)). \end{equation} Let $t_j = \tau + (t_* - \tau) / j$, $j \in \mathbb N$. Due to the inclusion $\psi \in \mathrm{C}^1(\mathbb R \times \mathbb R^n,\mathbb R)$, there exist $\varepsilon_j > 0$, $j \in \mathbb N$ such that $\varepsilon_j \to 0$ as $j \to \infty$ and \begin{equation} \frac{\psi(t_j,x(t_j)) - \psi(\tau,z)}{t_j - \tau} - \frac{\partial}{\partial \tau} \psi(\tau,z) - \langle \frac{x(t_j) - x(\tau)}{t_j - \tau}, \nabla_z \psi(\tau,z) \rangle \geq - \varepsilon_j \end{equation} for any $x(\cdot) \in X^1(\tau,z,w(\cdot))$. Since $\varphi$ is the minimax solution of problem (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition}), for each $j \in \mathbb N$, there exists $x^j(\cdot) \in X^1(\tau,z,w(\cdot))$ such that \begin{equation} \varphi(t_j,x^j(t_j),x^j_{t_j}(\cdot)) + \omega(\tau,t_j,x^j(\cdot),\nabla_z \psi(\tau,z)) \leq \varphi(\tau,z,w(\cdot)) + (t_j - \tau) \varepsilon_j. 
\end{equation} Thus, taking (\ref{def:omega}) into account, we derive \begin{equation*} \begin{array}{c} \displaystyle\frac{1}{t_j - \tau}\int_\tau^{t_j} H(\xi,x^j(\xi),x^j(\xi-h),\nabla_z \psi(\tau,z)) \mathrm{d} \xi \\[0.3cm] \displaystyle + \frac{1}{t_j - \tau} \big\langle g(t_j,x^j(t_j-h)) - g(\tau,x^j(\tau-h)), \nabla_z \psi(\tau,z) \big\rangle\\[0.3cm] \displaystyle\leq - \frac{\partial}{\partial \tau} \psi(\tau,z) + 2 \varepsilon_j + \lambda_\varphi \lambda_X (t_j - \tau). \end{array} \end{equation*} Passing to the limit as $j \to \infty$, taking into account condition $(H_1)$, inequalities (\ref{lem:lambda_x_local:condition}), (\ref{lem:lambda_x_local:statement}), and Lemma \ref{lem:dg}, we obtain \begin{equation*} \partial \psi(\tau,z) / \partial \tau + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), \nabla_z \psi(\tau,z) \rangle + H(\tau,z,w(\cdot),\nabla_z \psi(\tau,z)) \leq 0. \end{equation*} Thus, condition (\ref{sub_viscosity_solution}) is proved. Condition (\ref{super_viscosity_solution}) can be proved similarly. \end{proof} \begin{lemma}\label{lem:tilde_phi} Let $\varphi \colon \mathbb G \mapsto \mathbb R$ satisfy $(\varphi_1)$, $(\varphi_2)$, and let $(\tau,z,w(\cdot)) \in \mathbb G_*$. Let $\delta_* > 0$ be taken from Lemma \ref{lem:delta_s}. Then the function $\tilde{\varphi}$ defined by \begin{equation}\label{def:tilde_phi} \tilde{\varphi}(t,x) = \varphi(t,x,\kappa_t(\cdot)),\quad \kappa(\cdot) \in \Lambda_0(\tau,z,w(\cdot)), \end{equation} is continuous at every $(t,x) \in [\tau,\tau + \delta_*] \times \mathbb R^n$. \end{lemma} \begin{proof} Let $(t,x) \in [\tau,\tau + \delta_*] \times \mathbb R^n$ and $\varepsilon > 0$. Let $\alpha_0 = \max\{\|x\|,\|z\|,\|w(\cdot)\|_\infty\}$. Due to condition $(\varphi_2)$, define $\lambda_\varphi = \lambda_\varphi(\alpha_0 + 1)$. Let $\varepsilon_* = \varepsilon / (32 \lambda_\varphi)$.
Then, applying Lemma \ref{lem:Gamma_nonempty} to $(x,\kappa_t)$, we obtain the existence of $w^*(\cdot) \in \mathrm{C}^1$ such that \begin{equation}\label{lem:tilde_phi:overline_w} \begin{array}{c} \|w^*(\cdot)\|_\infty \leq \alpha_0,\quad \|x - w^*(-0)\| \leq \varepsilon_*,\quad \|\kappa_t(\cdot) - w^*(\cdot)\|_1 \leq \varepsilon_*,\\[0.3cm] \|\kappa_t(-h) - w^*(-h)\| \leq \varepsilon_*,\quad \|\kappa(i h) - w^*(i h - t)\| \leq \varepsilon_*, \end{array} \end{equation} where $i \in \overline{0,I-1}$ satisfies $\tau \in (i h, (i + 1) h)$, which implies $t \in (i h, (i + 1) h)$ according to the choice of $\delta_*$. Since $w^*(\cdot) \in \mathrm{C}^1$, there exists $\lambda^* > 0$ such that \begin{equation}\label{lem:tilde_phi:overline_lambda} \|w^*(\xi) - w^*(\xi')\| \leq \lambda^* |\xi - \xi'|,\quad \xi,\xi' \in [-h, 0). \end{equation} Due to Lemma \ref{lem:kappa} and condition $(\varphi_1)$, there exists $\nu_* > 0$ such that \begin{equation}\label{lem:tilde_phi:delta_s} \|\kappa_t(\cdot) - \kappa_{t'}(\cdot)\|_1 \leq \varepsilon_*,\ |\varphi(t,w^*(-0),w^*(\cdot)) - \varphi(t',w^*(-0),w^*(\cdot))| \leq \varepsilon/2 \end{equation} for every $t' \in [\tau, \tau + \delta_*]$ with $|t' - t| \leq \nu_*$. Due to the choice of $\delta_*$, there exists $\lambda_0 > 0$ such that \begin{equation}\label{lem:tilde_phi:lambda_0} \|w(\xi) - w(\xi')\| \leq \lambda_0 |\xi - \xi'|,\quad \xi,\xi' \in [-h, -h+\delta_*). \end{equation} Define \begin{equation}\label{lem:tilde_phi:delta} \nu = \min\{1, \varepsilon_*, \varepsilon_* / \lambda_0, \varepsilon_* / \lambda^*, \nu_*\}. \end{equation} For proving the lemma, it suffices to establish the inequality \begin{equation}\label{lem:tilde_phi:continuity} |\tilde{\varphi}(t,x) - \tilde{\varphi}(t',x')| \leq \varepsilon,\quad (t',x') \in O_* := O_\nu(t,x) \cap ([\tau,\tau + \delta_*] \times \mathbb R^n).
\end{equation} Firstly, due to the choice of $\delta_*$, $\alpha_0$, and $\lambda_\varphi$, from (\ref{lem:tilde_phi:overline_w}), (\ref{lem:tilde_phi:overline_lambda}), the first inequality in (\ref{lem:tilde_phi:delta_s}), and (\ref{lem:tilde_phi:lambda_0})--(\ref{lem:tilde_phi:continuity}), we derive \begin{equation*} \begin{array}{c} |\varphi(t',x',\kappa_{t'}(\cdot)) - \varphi(t',w^*(-0),w^*(\cdot))| \leq \lambda_\varphi\big(4 \varepsilon_* + \|x' - x\| + \|\kappa_{t'}(\cdot) - \kappa_t(\cdot)\|_1 \\[0.2cm] + \|\kappa_{t'}(-h) - \kappa_{t}(-h)\| + \|w^*(i h - t) - w^*(i h - t')\|\big) \leq 8 \lambda_\varphi \varepsilon_* \end{array} \end{equation*} for any $(t',x') \in O_*$. Applying this estimate for $(t,x)$ and arbitrary $(t',x') \in O_*$, taking (\ref{def:tilde_phi}) into account, we obtain \begin{equation*} |\tilde{\varphi}(t,x) - \tilde{\varphi}(t',x')| \leq |\varphi(t,w^*(-0),w^*(\cdot)) - \varphi(t',w^*(-0),w^*(\cdot))| + 16 \lambda_\varphi \varepsilon_*. \end{equation*} In accordance with the choice of $\varepsilon_*$, the second inequality in (\ref{lem:tilde_phi:delta_s}), and (\ref{lem:tilde_phi:delta}), from this inequality we conclude (\ref{lem:tilde_phi:continuity}). \end{proof} \begin{lemma}\label{lem:viscosity_and_D_solutions} If the functional $\varphi\colon \mathbb G \mapsto \mathbb R$ is a viscosity solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}, then it satisfies {\rm (\ref{subdifferential_viscosity_inequality})} and {\rm (\ref{superdifferential_viscosity_inequality})}. \end{lemma} \begin{proof} Let $(\tau,z,w(\cdot)) \in \mathbb G_*$ and $(p_0,p) \in D^-\varphi(\tau,z,w(\cdot))$. Choose the number $\delta_*$ in accordance with Lemma~\ref{lem:delta_s}. 
Taking the function $\tilde{\varphi}$ from (\ref{def:tilde_phi}), successively define the function $\tilde{\varphi}_*$ as follows: \begin{equation*} \tilde{\varphi}_*(t,x) = \min\{0,\tilde{\varphi}(t,x) - \tilde{\varphi}(\tau,z) - (t - \tau) p_0 - \langle x - z, p \rangle\} \end{equation*} for $(t, x) \in [\tau,\tau + \delta_*] \times \mathbb R^n$, $\tilde{\varphi}_*(t,x) = \tilde{\varphi}_*(\tau + \delta_*,x)$ for $(t, x) \in (\tau + \delta_*,+\infty) \times \mathbb R^n$, and $\tilde{\varphi}_*(t,x) = \tilde{\varphi}_*(\tau - (t - \tau),x)$ for $(t, x) \in (-\infty,\tau) \times \mathbb R^n$. Then, according to Lemma~\ref{lem:tilde_phi}, this function is continuous on $\mathbb R \times \mathbb R^n$. The rest of the proof that $\varphi$ satisfies (\ref{subdifferential_viscosity_inequality}) can be carried out in the same way as in Lemma 8.1 from \cite{Plaksin_2021}. In a similar way one can prove that (\ref{super_viscosity_solution}) implies (\ref{superdifferential_viscosity_inequality}). \end{proof} \begin{lemma}\label{lem:positive_part_deriv} Let $\varphi \colon \mathbb G \mapsto \mathbb R$ satisfy $(\varphi_2)$ and $(\tau,z,w(\cdot)) \in \mathbb G_*$. Let $L \subset \mathbb R^n$ be a nonempty compact set. Suppose that \begin{equation}\label{lem:positive_part_deriv:cond} \partial^-_{1,l} \varphi(\tau,z,w(\cdot)) > 0,\quad l \in L. \end{equation} Then, there exist $\epsilon_\circ, \delta_\circ > 0$ such that \begin{equation}\label{lem:positive_part_deriv:state} \varphi(t, z+l(t-\tau), \kappa_t(\cdot)) - \varphi(\tau,z,w(\cdot)) > \epsilon_\circ (t - \tau) \end{equation} for any $t \in (\tau, \tau + \delta_\circ]$ and $l \in L$, where $\kappa(\cdot) \in \Lambda_0(\tau,z,w(\cdot))$. \end{proof} \begin{proof} The proof of the lemma is based on definition (\ref{lower_directional_derivatives}) of lower directional derivatives and condition $(\varphi_2)$.
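In more detail, one possible scheme of this argument is as follows (the auxiliary constants $\epsilon_l$, $\delta_l$, and $c$ below are introduced only for this sketch). Due to definition (\ref{lower_directional_derivatives}) and condition (\ref{lem:positive_part_deriv:cond}), for every $l \in L$, there exist $\epsilon_l, \delta_l > 0$ such that \begin{equation*} \varphi(t, z + l (t - \tau), \kappa_t(\cdot)) - \varphi(\tau,z,w(\cdot)) > 2 \epsilon_l (t - \tau),\quad t \in (\tau, \tau + \delta_l]. \end{equation*} Further, due to condition $(\varphi_2)$ and (\ref{def:upsilon}), one can take $c > 0$ such that, for every $l' \in \mathbb R^n$, \begin{equation*} \big|\varphi(t, z + l (t - \tau), \kappa_t(\cdot)) - \varphi(t, z + l' (t - \tau), \kappa_t(\cdot))\big| \leq c \lambda_\varphi \|l - l'\| (t - \tau), \end{equation*} and, therefore, the first inequality holds with $\epsilon_l$ in place of $2 \epsilon_l$ for every $l'$ satisfying $\|l' - l\| \leq \epsilon_l / (c \lambda_\varphi)$. Choosing a finite subcover of the compact set $L$ by such neighborhoods and taking $\epsilon_\circ$ and $\delta_\circ$ as the minima of the corresponding $\epsilon_l$ and $\delta_l$, we arrive at (\ref{lem:positive_part_deriv:state}).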
\end{proof} \begin{lemma}\label{lem:mvi} Let $\varphi \colon \mathbb G \mapsto \mathbb R$ satisfy condition $(\varphi_2)$. Let $(\tau,z,w(\cdot)) \in \mathbb G_*$. Let $L \subset \mathbb R^n$ be a nonempty convex compact set. Suppose that {\em (\ref{lem:positive_part_deriv:cond})} holds. Then, for every $\delta > 0$, there exist \begin{equation}\label{lem:mvi:t_s_y_s_p0_p} \begin{array}{c} (t_*,x_*) \in \Omega_\delta := \big\{(t,x) \in [\tau, \tau + \delta] \times \mathbb R^n \colon \min\limits_{l \in L} \|x - z - l (t - \tau)\| \leq \delta\big\},\\[0.2cm] (p_0,p) \in D^-\varphi(t_*,x_*,\kappa_{t_*}(\cdot)),\quad \kappa(\cdot) \in \Lambda_0(\tau,z,w(\cdot)), \end{array} \end{equation} such that \begin{equation}\label{lem:mvi:state} p_0 + \langle l, p\rangle > 0,\quad l \in L. \end{equation} \end{lemma} \begin{proof} Take $\delta_* > 0$ from Lemma \ref{lem:delta_s}. According to (\ref{lem:mvi:t_s_y_s_p0_p}), one can take $\alpha_\Omega > 0$ such that $\max\{\|x\|,\|w(\cdot)\|_\infty\} \leq \alpha_\Omega$ for any $(t,x) \in \Omega_{\delta_*}$. Due to condition $(\varphi_2)$, define $\lambda_\varphi = \lambda_\varphi(\alpha_\Omega)$. Then, for every $(t,x), (t_*,x_*) \in \Omega_{\delta_*}$ satisfying $t \geq t_*$, and $\chi(\cdot) \in \Lambda_0(t_*,x_*,\kappa_{t_*}(\cdot))$, we derive \begin{equation}\label{lem:mvi:lambda_phi} |\varphi(t,x,\chi_t(\cdot)) - \varphi(t,x,\kappa_t(\cdot))| \leq \lambda_\varphi \|\chi_t(\cdot) - \kappa_t(\cdot)\|_1 \leq \lambda_\varphi \|x_* - z\| (t - t_*). \end{equation} According to Lemma \ref{lem:positive_part_deriv}, there exist $\epsilon_\circ,\delta_\circ > 0$ such that (\ref{lem:positive_part_deriv:state}) holds. Then, without loss of generality, we can suppose that \begin{equation}\label{lem:mvi:nu} \delta \leq \min\{\delta_*,\delta_\circ\},\quad \delta < \epsilon_\circ / (\lambda_\varphi (1 + \lambda_L)),\quad \lambda_L = \max\{\|l\|\,|\,l \in L\}.
\end{equation} For each $k \in \mathbb N$, define the function \begin{equation}\label{lem:mvi:gamma_k} \gamma_k(t,x,\xi,y) = \varphi(t,x,\kappa_t(\cdot)) + k \|x - y\|^2 + k (t - \xi)^2 - \epsilon_\circ (\xi - \tau), \end{equation} where $(t,x) \in \Omega_\delta$ and $(\xi,y) \in \Omega_\delta^* := \{(\xi,y) \in\Omega_\delta \colon \min\limits_{l \in L} \|y - z - l (\xi - \tau)\| = 0\}$. According to the choice of $\delta$ and Lemma \ref{lem:tilde_phi}, $\gamma_k$ is continuous. The set $\Omega_\delta \times \Omega_\delta^*$ is compact. Therefore, there exists $(t_k,x_k,\xi_k,y_k) \in \Omega_\delta \times \Omega_\delta^*$ such that \begin{equation}\label{lem:mvi:min_point} \gamma_k(t_k,x_k,\xi_k,y_k) = \min\limits_{(t,x,\xi,y) \in \Omega_\delta \times \Omega_\delta^*} \gamma_k(t,x,\xi,y). \end{equation} Furthermore, without loss of generality, we suppose that $(t_k,x_k,\xi_k,y_k) \to (\overline{t},\overline{x},\overline{\xi},\overline{y}) \in \Omega_\delta \times \Omega_\delta^*$ as $k \to \infty$. Due to (\ref{lem:mvi:min_point}), we have \begin{equation}\label{lem:mvi:min_and_init_point} \gamma_k(t_k,x_k,\xi_k,y_k) \leq \gamma_k(\tau,z,\tau,z) = \varphi(\tau,z,w(\cdot)). \end{equation} Hence, since the functional $(t,x) \mapsto \varphi(t,x,\kappa_t(\cdot))$ is bounded on the compact set $\Omega_\delta$ (see Lemma \ref{lem:tilde_phi}), the terms $k \|x_k - y_k\|^2 + k (t_k - \xi_k)^2$ are bounded, and we obtain \begin{equation}\label{lem:mvi:limit} \overline{t} = \overline{\xi},\quad \overline{x} = \overline{y}. \end{equation} Let us show that $\overline{t} < \tau + \delta$. For the sake of a contradiction, suppose that $\overline{t} = \tau + \delta$. Then, applying Lemma \ref{lem:tilde_phi} and (\ref{lem:positive_part_deriv:state}), (\ref{lem:mvi:gamma_k}), we derive \begin{equation*} \begin{array}{c} \liminf\limits_{k \to \infty} \gamma_k(t_k,x_k,\xi_k,y_k) \geq \lim\limits_{k \to \infty}\big(\varphi(t_k,x_k,\kappa_{t_k}(\cdot)) - \epsilon_\circ (\xi_k - \tau) \big) \\[0.2cm] = \varphi(\tau + \delta, \overline{y}, \kappa_{\tau + \delta}(\cdot)) - \epsilon_\circ \delta > \varphi(\tau,z,w(\cdot)).
\end{array} \end{equation*} This inequality contradicts (\ref{lem:mvi:min_and_init_point}). In accordance with $\overline{t} < \tau + \delta$ and (\ref{lem:mvi:limit}), one can take $k \in \mathbb N$ so that \begin{equation}\label{lem:mvi:k} t_k < \tau + \delta,\quad \xi_k < \tau + \delta,\quad \|x_k - y_k\| \leq \delta / 4,\quad \lambda_L |t_k - \xi_k| \leq \delta / 4, \end{equation} where the number $\lambda_L$ is defined in (\ref{lem:mvi:nu}). Put \begin{equation}\label{lem:mvi:p0_p} p_0 = - 2 k (t_k - \xi_k) - \lambda_\varphi \|x_k - z\|,\quad p = - 2 k (x_k - y_k). \end{equation} Let us prove the inclusion $(p_0,p) \in D^- \varphi (t_k,x_k,\kappa_{t_k}(\cdot))$. Since $(\xi_k,y_k) \in \Omega_\delta^*$, there exists $l_k$ such that $y_k = z + l_k (\xi_k - \tau)$. Then, due to the definition of $\lambda_L$ in (\ref{lem:mvi:nu}) and (\ref{lem:mvi:k}), we have \begin{equation*} \|x - z - l_k (t - \tau)\| \leq \|x - x_k\| + \|x_k - y_k\| + \lambda_L |t - t_k| + \lambda_L |t_k - \xi_k| \leq \delta \end{equation*} for any $(t,x) \in O^+_{\delta / (4 (1 + \lambda_L))}(t_k,x_k)$. This means that $(t,x) \in \Omega_\delta$ for any $(t,x) \in O^+_{\delta / (4 (1 + \lambda_L))}(t_k,x_k)$. Applying (\ref{lem:mvi:gamma_k}), (\ref{lem:mvi:min_point}), we obtain \begin{equation*} \begin{array}{c} 0 \leq \gamma_k(t, x, \xi_k, y_k) - \gamma_k(t_k, x_k, \xi_k, y_k) = \varphi(t,x,\kappa_t(\cdot)) - \varphi(t_k,x_k,\kappa_{t_k}(\cdot)) \\[0.2cm] + k \|x - x_k\|^2 + 2 k \langle x - x_k, x_k - y_k \rangle + k (t - t_k)^2 + 2 k (t - t_k) (t_k - \xi_k) \end{array} \end{equation*} for any $(t,x) \in O^+_{\delta / (4 (1 + \lambda_L))}(t_k,x_k)$. Then, taking into account (\ref{subdifferential}), (\ref{lem:mvi:lambda_phi}) for $(t_*,x_*) = (t_k,x_k)$, and (\ref{lem:mvi:p0_p}), we conclude $(p_0,p) \in D^-\varphi(t_k,x_k,\kappa_{t_k}(\cdot))$. Let us prove (\ref{lem:mvi:state}). Let $l \in L$.
Since $L$ is convex, we have \begin{equation*} l_\nu = (l \nu + l_k (\xi_k - \tau))/ (\nu + \xi_k - \tau) \in L,\quad \nu \in (0, \delta + \tau - \xi_k). \end{equation*} From this inclusion and $(\xi_k,y_k) \in \Omega_\delta^*$, we derive $\|y_k+l \nu - z - l_\nu (\xi_k+\nu - \tau)\| = 0$, which means the inclusion $(\xi_k+\nu,y_k+l \nu) \in \Omega^*_\delta$, $\nu \in (0, \delta + \tau - \xi_k)$. Then, according to (\ref{lem:mvi:gamma_k}), (\ref{lem:mvi:min_point}), for every $\nu \in (0,\delta+ \tau - \xi_k)$, we obtain \begin{equation*} \begin{array}{c} 0 \leq \gamma_k(t_k,x_k,\xi_k+\nu,y_k+l \nu) - \gamma_k(t_k,x_k,\xi_k,y_k) \\[0.2cm] = k \|l\|^2 \nu^2 - 2 k \langle l, x_k - y_k \rangle \nu + k \nu^2 - 2 k (t_k - \xi_k) \nu - \epsilon_\circ \nu. \end{array} \end{equation*} Dividing this inequality by $\nu$ and passing to the limit as $\nu \to + 0$, we get \begin{equation}\label{lem:mvi:xi_var} \epsilon_\circ \leq - 2 k \langle x_k - y_k, l \rangle - 2 k (t_k - \xi_k). \end{equation} Since $(t_k,x_k) \in \Omega_\delta$, there exists $l_x \in L$ such that $\|x_k - z - l_x(t_k - \tau)\| \leq \delta$. Then, using (\ref{lem:mvi:nu}), we derive \begin{equation}\label{lem:mvi:x_k_z} \|x_k - z\| \leq \|x_k - z - l_x(t_k - \tau)\| + \|l_x\| (t_k - \tau) \leq (1 + \lambda_L) \delta < \epsilon_\circ / \lambda_\varphi . \end{equation} From (\ref{lem:mvi:p0_p})--(\ref{lem:mvi:x_k_z}), we conclude (\ref{lem:mvi:state}). Thus, taking $(t_*,x_*) = (t_k,x_k)$, we obtain the statement of the lemma. \end{proof} \begin{lemma}\label{lem:directional_derivative_continuous} Let $\varphi \colon \mathbb G \mapsto \mathbb R$ satisfy $(\varphi_2)$ and $(\tau,z,w(\cdot)) \in \mathbb G_*$. If there exists $l_* \in \mathbb R^n$ such that $\partial^-_{1,l_*}\varphi(\tau,z,w(\cdot)) \in \mathbb R$, then $\partial^-_{1,l}\varphi(\tau,z,w(\cdot)) \in \mathbb R$ for every $l \in \mathbb R^n$, and the function $\phi(l) = \partial^-_{1,l}\varphi(\tau,z,w(\cdot))$, $l \in \mathbb R^n$, is continuous.
If there exists $l_* \in \mathbb R^n$ such that $\partial^-_{1,l_*}\varphi(\tau,z,w(\cdot)) = + \infty$, then $\partial^-_{1,l}\varphi(\tau,z,w(\cdot))= + \infty$ for every $l \in \mathbb R^n$. \end{lemma} \begin{proof} The statement follows directly from condition $(\varphi_2)$ and definition (\ref{directional_derivatives}). \end{proof} \begin{lemma}\label{lem:D_and_dini_solutions} If the functional $\varphi\colon \mathbb G \mapsto \mathbb R$ satisfies {\em (\ref{subdifferential_viscosity_inequality})} and {\em (\ref{superdifferential_viscosity_inequality})}, then it satisfies {\em (\ref{lower_directional_derivative_inequality})} and {\em (\ref{upper_directional_derivative_inequality})}. \end{lemma} \begin{proof} Let us prove (\ref{lower_directional_derivative_inequality}). For the sake of a contradiction, suppose that there exist $(\tau,z,w(\cdot)) \in \mathbb G_*$ and $s \in \mathbb R^n$ such that \begin{equation*} \partial^-_{1,l} \varphi(\tau,z,w(\cdot)) + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), s \rangle + H(\tau,z,w(-h),s) - \langle l,s \rangle > 0 \end{equation*} for any $l \in F(z,w(-h)) + \partial^{ci}_{\tau,w} g(\tau,w(\cdot))$. If there exists $l_* \in F(z,w(-h)) + \partial^{ci}_{\tau,w} g(\tau,w(\cdot))$ such that $\partial^-_{1,l_*} \varphi(\tau,z,w(\cdot)) \in \mathbb R$, then, taking into account Lemma \ref{lem:directional_derivative_continuous}, one can take $\eta, \varepsilon > 0$ so that \begin{equation}\label{lem:f_d:eta_epsilon} \partial^-_{1,l} \varphi(\tau,z,w(\cdot)) + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), s \rangle + H(\tau,z,w(-h),s) - \langle l,s \rangle > \varepsilon \end{equation} for any $l \in F^\eta(z,w(-h)) + \partial^{ci}_{\tau,w} g(\tau,w(\cdot))$. If there exists $l_0 \in F(z,w(-h))$ such that $\partial^-_{1,l_0} \varphi(\tau,z,w(\cdot)) = + \infty$, then, in accordance with Lemma \ref{lem:directional_derivative_continuous}, inequality (\ref{lem:f_d:eta_epsilon}) also holds.
Put $L = F^\eta(z,w(-h)) + \partial^{ci}_{\tau,w} g(\tau,w(\cdot))$. According to conditions $(H_1)$ and (\ref{def:F}), there exists $\delta_2 > 0$ such that \begin{equation}\label{lem:f_d:delta_2} \begin{array}{c} |H(t,x,w(t-\tau-h),s) - H(\tau,z,w(-h),s)| \leq \varepsilon,\\[0.2cm] F^0(x,w(t-\tau-h)) \subset F^\eta(z,w(-h)), \end{array} \quad (t,x) \in \Omega_{\delta_2}. \end{equation} Define the functional $\varphi_* \colon \mathbb G \mapsto \mathbb R$ by \begin{equation*} \begin{array}{c} \varphi_*(t,x,r(\cdot)) = \varphi(t,x,r(\cdot)) \\[0.2cm] + \big(\langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), s \rangle + H(\tau,z,w(-h),s) - \varepsilon\big) (t - \tau) - \langle x, s \rangle, \end{array} \ \ (t,x,r(\cdot)) \in \mathbb G. \end{equation*} Since $\varphi$ satisfies $(\varphi_2)$, the functional $\varphi_*$ also satisfies $(\varphi_2)$. Moreover, from (\ref{lem:f_d:eta_epsilon}), we derive $\partial^-_{1,l} \varphi_*(\tau,z,w(\cdot)) > 0$, $l \in L$. Applying Lemma \ref{lem:mvi} to the functional $\varphi_*$, the set $L$, and $\delta = \min\{\delta_1,\delta_2\}$, we obtain that there exist $(t_*,x_*) \in \Omega_\delta$ and $(p_0,p) \in D^-\varphi_*(t_*,x_*,\kappa_{t_*}(\cdot))$ such that \begin{equation}\label{lem:f_d:p0_p} p_0 + \langle l,p \rangle > 0,\quad l \in L = F^\eta(z,w(-h)) + \partial^{ci}_{\tau,w} g(\tau,w(\cdot)). \end{equation} Let us define \begin{equation*} p'_0 = p_0 - H(\tau,z,w(-h),s) - \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), s \rangle + \varepsilon,\quad p' = p + s. \end{equation*} Then, we have $(p'_0,p') \in D^-\varphi(t_*,x_*,\kappa_{t_*}(\cdot))$.
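Indeed, this inclusion can be checked directly from definition (\ref{subdifferential}): denoting, for brevity of this remark only, $a = \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), s \rangle + H(\tau,z,w(-h),s) - \varepsilon$, so that $p'_0 = p_0 - a$ and $\varphi_*(t,x,r(\cdot)) = \varphi(t,x,r(\cdot)) + a (t - \tau) - \langle x, s \rangle$, we have \begin{equation*} \begin{array}{c} \varphi(t,x,\kappa_t(\cdot)) - \varphi(t_*,x_*,\kappa_{t_*}(\cdot)) - p'_0 (t - t_*) - \langle x - x_*, p' \rangle\\[0.2cm] = \varphi_*(t,x,\kappa_t(\cdot)) - \varphi_*(t_*,x_*,\kappa_{t_*}(\cdot)) - p_0 (t - t_*) - \langle x - x_*, p \rangle, \end{array} \end{equation*} and the right-hand side admits the lower estimate required in (\ref{subdifferential}) due to the inclusion $(p_0,p) \in D^-\varphi_*(t_*,x_*,\kappa_{t_*}(\cdot))$.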
Thus, taking the choice of $\delta$ into account, from (\ref{def:F}), (\ref{subdifferential_viscosity_inequality}), (\ref{lem:f_d:delta_2}), and condition $(H_3)$, we obtain \begin{equation*} \begin{array}{c} 0 \geq p'_0 + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), p' \rangle + H(t_*,x_*,w(t_*-\tau-h),p') \\[0.3cm] \geq p_0 + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), p \rangle - c_H \big(1 + \|x_*\| + \|w(t_*-\tau-h)\|\big) \|p\| \\[0.3cm] = p_0 + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), p \rangle + \min\limits_{l \in F^0(x_*,w(t_*-\tau-h))} \langle l, p \rangle \\[0.3cm] \geq p_0 + \langle \partial^{ci}_{\tau,w} g(\tau,w(\cdot)), p \rangle + \min\limits_{l \in F^\eta(z,w(-h))} \langle l, p \rangle. \end{array} \end{equation*} This estimate contradicts (\ref{lem:f_d:p0_p}). Thus, (\ref{lower_directional_derivative_inequality}) holds. For $\partial^+_l \varphi(t,z,w(\cdot))$ and $D^+ \varphi(t,z,w(\cdot))$, one can establish statements similar to Lemmas \ref{lem:mvi}, \ref{lem:directional_derivative_continuous} and prove (\ref{upper_directional_derivative_inequality}). \end{proof} \begin{lemma}\label{lem:dini_and_C1_solutions} If the functional $\varphi \colon \mathbb G \mapsto \mathbb R$ satisfies $(\varphi_1)$, $(\varphi_2)$, {\rm (\ref{lower_directional_derivative_inequality})}, and {\rm (\ref{upper_directional_derivative_inequality})}, then it is a $\mathrm{C}^1$-minimax solution of problem {\rm (\ref{Hamilton-Jacobi_equation}), (\ref{terminal_condition})}. \end{lemma} \begin{proof} Let us prove (\ref{def:upper_minmax_solution}) for any $i \in \overline{0,I-1}$, $(\tau,w(\cdot)) \in [i h, (i + 1) h) \times \mathrm{C}^1$, $t \in (\tau,(i+1)h]$, $\eta \in (0,1]$, and $s \in \mathbb R^n$.
For the sake of a contradiction, suppose that there exist $i \in \overline{0,I-1}$, $(\tau,w(\cdot)) \in [i h, (i + 1) h) \times \mathrm{C}^1$, $t \in (\tau,(i+1)h]$, $\eta \in (0,1]$, $s \in \mathbb R^n$, and $\varepsilon > 0$ such that \begin{equation}\label{lem:d_c:contr} \varphi(t,x(t),x_t(\cdot)) - \varphi(\tau,w(-0),w(\cdot)) + \omega(\tau,t,x(\cdot),s) > \varepsilon \end{equation} for any $x(\cdot) \in X^\eta := X^\eta(\tau,w(-0),w(\cdot))$. Let \begin{equation*} \xi_* = \max \Big\{\xi \in [\tau, t] \colon \min\limits_{x(\cdot) \in X^\eta} \big(\varphi(\xi,x(\xi),x_\xi(\cdot)) + \omega(\tau,\xi,x(\cdot),s)\big) \leq \beta(\xi)\Big\}, \end{equation*} where $\beta(\xi) = \varphi(\tau,w(-0),w(\cdot)) + \varepsilon (\xi - \tau) / (t - \tau)$, $\xi \in [\tau, t]$. Similar to Assertion~2 from \cite{Plaksin_2021b}, one can show that the set $X^\eta$ is compact in the space of continuous functions and there exist $\alpha_X, \lambda_X > 0$ such that \begin{equation*} \|x(\xi)\| \leq \alpha_X,\quad \|x(\xi) - x(\xi')\| \leq \lambda_X |\xi - \xi'|,\quad \xi,\xi' \in [\tau-h,\vartheta],\quad x(\cdot) \in X^\eta. \end{equation*} Using conditions $(\varphi_1)$ and $(\varphi_2)$, one can show that the functional $\hat{\varphi}(\tau,w(\cdot)) = \varphi(\tau,w(-0),w(\cdot))$ is continuous at any $(\tau,w(\cdot)) \in [0,\vartheta] \times \mathrm{Lip}$. Thus, taking into account conditions $(g)$, $(H_1)$, and definition (\ref{def:omega}) of $\omega$, we can establish that the maximum and minimum above are attained. In accordance with (\ref{lem:d_c:contr}), we have $\xi_* < t$. Let the function $x(\cdot) \in X^\eta$ be such that \begin{equation}\label{lem:d_c:xi_s} \varphi(\xi_*,x(\xi_*),x_{\xi_*}(\cdot)) + \omega(\tau,\xi_*,x(\cdot),s) \leq \beta(\xi_*). \end{equation} Denote $\varepsilon_* = \varepsilon / (5 (t - \tau))$.
Due to (\ref{lower_directional_derivative_inequality}), there exists $l_* \in F^0(x(\xi_*),x(\xi_*-h)) + \partial^{ci}_{\tau,w} g(\xi_*,x_{\xi_*}(\cdot))$ satisfying \begin{equation} \begin{array}{rcl} \partial^-_{1,l_*} \varphi(\xi_*,x(\xi_*),x_{\xi_*}(\cdot)) & + & \langle \partial^{ci}_{\tau,w} g(\xi_*,x_{\xi_*}(\cdot)) ,s \rangle\\[0.2cm] & + & H(\xi_*,x(\xi_*),x(\xi_*-h),s) - \langle l_*,s \rangle \leq \varepsilon_*. \end{array} \end{equation} Redefine $x(\cdot)$ on the interval $[\xi_*,\vartheta]$ so that $\mathrm{d} x(\xi) / \mathrm{d} \xi = l_*$, $\xi \in [\xi_*, \vartheta]$. According to condition $(\varphi_2)$, define $\lambda_\varphi = \lambda_\varphi(\alpha_X)$. Then, due to (\ref{def:F}), (\ref{lower_directional_derivatives}), condition $(H_1)$, and Lemma \ref{lem:dg} in which, since $w(\cdot) \in \mathrm{C}^1$, we can take $\delta_w \in (\xi_* - \tau, t - \tau)$, there exists $t_* \in (\xi_*,\min\{\tau + \delta_w, \xi_* + \varepsilon_*/(\lambda_\varphi \lambda_X)\}]$ such that \begin{equation} \displaystyle\frac{\varphi(t_*,x(t_*),\kappa_{t_*}(\cdot)) - \varphi(\xi_*,x(\xi_*),x_{\xi_*}(\cdot))}{t_* - \xi_*} \leq \partial^-_{1,l_*} \varphi(\xi_*,x(\xi_*),x_{\xi_*}(\cdot)) + \varepsilon_*, \end{equation} where $\kappa(\cdot) \in \Lambda_0(\xi_*,x(\xi_*),x_{\xi_*}(\cdot))$, and \begin{equation}\label{lem:d_c:t_s} \begin{array}{c} F^0(x(\xi_*),x(\xi_*-h)) \subset F^{\eta/2}(x(\xi),x(\xi-h)),\\[0.3cm] \big|H(\xi,x(\xi),x(\xi-h),s) - H(\xi_*,x(\xi_*),x(\xi_*-h),s)\big| \leq \varepsilon_*,\\[0.3cm] \displaystyle\bigg|\frac{\mathrm{d}}{\mathrm{d} \xi}\Big(g(\xi,x_\xi(\cdot))\Big) - \partial^{ci}_{\tau,w} g(\xi_*,x_{\xi_*}(\cdot))\bigg| \leq \min\bigg\{\frac{\eta}{2}, \frac{\varepsilon_*}{\|s\| + 1}\bigg\} \end{array} \end{equation} for any $\xi \in [\xi_*,t_*]$. Then, we have \begin{equation*} \frac{\mathrm{d}}{\mathrm{d} \xi} \Big(x(\xi) - g(\xi,x(\xi-h))\Big) \in F^\eta(x(\xi),x(\xi-h))\quad {a.e. }\ \xi \in [\xi_*,t_*].
\end{equation*} Redefining the function $x(\cdot)$ on the interval $[t_*,\vartheta]$ so that $x(\cdot) \in X^\eta$, according to the choice of $\alpha_X$, $\lambda_X$, $\lambda_\varphi$, and $t_*$, we derive \begin{equation*} \big|\varphi(t_*,x(t_*),\kappa_{t_*}(\cdot)) - \varphi(t_*,x(t_*),x_{t_*}(\cdot))\big| \leq \lambda_\varphi \lambda_X (t_* - \xi_*)^2 \leq \varepsilon_* (t_* - \xi_*). \end{equation*} Using this estimate, from (\ref{lem:d_c:xi_s})--(\ref{lem:d_c:t_s}), taking the definition of $\varepsilon_*$ and $\beta$ into account, we obtain \begin{equation*} \varphi(t_*,x(t_*),x_{t_*}(\cdot)) + \omega(\tau,t_*,x(\cdot),s) \leq \beta(t_*). \end{equation*} This inequality contradicts the choice of $\xi_*$. \end{proof} \subsection{Proof of Theorem \ref{teo:application}} \begin{lemma}\label{lem:value_functional} The value functional $\hat{\varphi}$ defined in {\rm (\ref{def:value_functional})} is the functional from Lemma~\ref{lem:ex_un_minimax_sol}. \end{lemma} \begin{proof} Let us prove (\ref{def:upper_minmax_solution}). Let $(\tau,w(\cdot)) \in [0,\vartheta) \times \mathrm{Lip}$, $t \in (\tau,\vartheta]$, $s \in \mathbb R^n$, and $\varepsilon > 0$. Due to definition (\ref{def:value_functional}) of $\hat{\varphi}$, one can show (see, e.g., \cite[Chapter VIII, Theorem 1.9]{Bardi_Capuzzo-Dolcetta_1997}) that there exists $u(\cdot) \colon [\tau,\vartheta] \mapsto \mathbb U$ such that \begin{equation*} \hat{\varphi}(t,x_t(\cdot)) + \int_\tau^t f^0(\xi,x(\xi),x(\xi-h),u(\xi)) \mathrm{d} \xi \leq \hat{\varphi}(\tau,x_\tau(\cdot)), \end{equation*} where $x(\cdot) \in \Lambda(\tau,w(-0),w(\cdot))$ is the Lipschitz continuous function satisfying neutral-type equation (\ref{def:neutral_type_equation}). From this estimate, using (\ref{def:hamiltonian}), we get (\ref{def:upper_minmax_solution}). Let us prove (\ref{def:lower_minmax_solution}).
Let $(\tau,w(\cdot)) \in [0,\vartheta) \times \mathrm{Lip}$, $t \in (\tau,\vartheta]$, $s \in \mathbb R^n$, and $\varepsilon > 0$. Due to Assertion 2 from \cite{Plaksin_2021b}, there exist $\alpha_X, \lambda_X > 0$ such that \begin{equation*} \|x(\xi)\| \leq \alpha_X,\quad \|x(\xi) - x(\xi')\| \leq \lambda_X |\xi - \xi'|,\quad \xi,\xi' \in [\tau-h,\vartheta] \end{equation*} for any Lipschitz continuous function $x(\cdot) \in \Lambda(\tau,w(-0),w(\cdot))$ satisfying neutral-type equation (\ref{def:neutral_type_equation}). Then, taking into account condition $(H_1)$ and the continuity of the functions $f$ and $f^0$, there exists $\delta > 0$ such that \begin{equation*} \begin{array}{c} \big|H(\xi,x(\xi),x(\xi-h),s) - H(\xi',x(\xi'),x(\xi'-h),s)\big| \leq \varepsilon,\\[0.2cm] \big|f(\xi,x(\xi),x(\xi-h),u) - f(\xi',x(\xi'),x(\xi'-h),u)\big| \leq \varepsilon,\\[0.2cm] \big|f^0(\xi,x(\xi),x(\xi-h),u) - f^0(\xi',x(\xi'),x(\xi'-h),u)\big| \leq \varepsilon \end{array} \end{equation*} for any $\xi,\xi' \in [\tau,\vartheta]$ satisfying $|\xi - \xi'| \leq \delta$, any $u \in \mathbb U$, and any Lipschitz continuous $x(\cdot) \in \Lambda(\tau,w(-0),w(\cdot))$ satisfying (\ref{def:neutral_type_equation}). Let $k \in \mathbb N$ be such that $\Delta t := (t - \tau)/k \leq \delta$ and $t_i = \tau + i \Delta t$, $i \in \overline{0,k}$. Let the function $u(\cdot) \colon [\tau,\vartheta] \mapsto \mathbb U$ and the Lipschitz continuous function $x(\cdot)$ satisfy (\ref{def:neutral_type_equation}) and the following feedback rule: \begin{equation*} u(\xi) = u_i,\quad \xi \in [t_i,t_{i+1}),\quad i \in \overline{0,k-1}, \end{equation*} where, in accordance with (\ref{def:hamiltonian}), the value $u_i \in \mathbb U$ can be taken satisfying $H(t_i,x(t_i),x(t_i-h),s) = \langle f(t_i,x(t_i),x(t_i-h),u_i), s \rangle + f^0(t_i,x(t_i),x(t_i-h),u_i)$.
Then, taking into account (\ref{def:omega}) and (\ref{def:value_functional}), we derive \begin{equation*} \begin{array}{c} \hat{\varphi}(t_{i+1},x_{t_{i+1}}(\cdot)) + \omega(t_i,t_{i+1},x(\cdot),s) + 3 (t_{i+1} - t_i) \varepsilon\\[0.2cm] \geq \displaystyle\hat{\varphi}(t_{i+1},x_{t_{i+1}}(\cdot)) + \int_{t_i}^{t_{i+1}} f^0(\xi,x(\xi),x(\xi-h),u(\xi)) \mathrm{d} \xi \geq \hat{\varphi}(t_{i},x_{t_{i}}(\cdot)) \end{array} \end{equation*} for any $i \in \overline{0,k-1}$. Applying this estimate successively for $i = k-1, \ldots, 0$ and using the arbitrariness of $\varepsilon > 0$, we conclude (\ref{def:lower_minmax_solution}). \end{proof} \begin{thebibliography}{} \bibitem{Aubin_Haddad_2002} Aubin, J.P., Haddad, G.: History path dependent optimal control and portfolio valuation and management. Positivity. 6(3), 331--358 (2002). https://doi.org/10.1023/A:1020244921138 \bibitem{Bayraktar_Keller_2018} Bayraktar, E., Keller, C.: Path-dependent Hamilton--Jacobi equations in infinite dimensions. Journal of Functional Analysis. 275(8), 2096--2161 (2018). https://doi.org/10.1016/j.jfa.2018.07.010 \bibitem{Bardi_Capuzzo-Dolcetta_1997} Bardi, M., Capuzzo-Dolcetta, I.: Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, Birkh\"auser, Boston (1997) \bibitem{Clarke_Ledyaev_1994} Clarke, F.H., Ledyaev, Yu.S.: Mean Value Inequalities in Hilbert Space. Transactions of the American Mathematical Society. 344(1), 307--324 (1994). https://doi.org/10.2307/2154718 \bibitem{Crandall_Lions_1983} Crandall, M.G., Lions, P.-L.: Viscosity solutions of Hamilton-Jacobi equations. Transactions of the American Mathematical Society. 277(1), 1--42 (1983). https://doi.org/10.2307/1999343 \bibitem{Crandall_Evans_Lions_1984} Crandall, M.G., Evans, L.C., Lions, P.-L.: Some properties of viscosity solutions of Hamilton-Jacobi equations. Transactions of the American Mathematical Society. 282(2), 487--502 (1984). https://doi.org/10.2307/1999247 \bibitem{Dupire_2009} Dupire, B.: Functional {It\^{o}} calculus.
https://ssrn.com/abstract=1435551 (2009). Accessed 25 July 2009 \bibitem{Ekren_Touzi_Zhang_2016} Ekren, I., Touzi, N., Zhang, J.: Viscosity solutions of fully nonlinear parabolic path dependent PDEs: Part I. Annals of Probability. 44(2), 1212--1253 (2016). https://doi.org/10.1214/15-AOP1027 \bibitem{Filippov_1988} Filippov, A.F.: Differential equations with discontinuous Righthand Sides. Springer, Berlin (1988) \bibitem{Gomoyunov_Lukoyanov_Plaksin_2021} Gomoyunov, M.I., Lukoyanov, N.Yu., Plaksin, A.R.: Path-dependent Hamilton-Jacobi equations: the minimax solutions revised. Applied Mathematics and Optimization. 84(1), S1087--S1117 (2021) \bibitem{Gomoyunov_Plaksin_2019} Gomoyunov, M.I., Plaksin, A.R.: On basic equation of differential games for neutral-type systems. Mechanics of Solids. 54(2), 131--143 (2019) \bibitem{Hale_1977} Hale, J.: Theory of Functional Differential Equations. Springer-Verlag, New York (1977) \bibitem{Kaise_2015} Kaise, H.: Path-dependent differential games of inf-sup type and Isaacs partial differential equations. Proceedings of the 54th IEEE Conference on Decision and Control (CDC). 1972--1977 (2015). https://doi.org/10.1109/CDC.2015.7402496 \bibitem{Kaise_Kato_Takahashi_2018} Kaise, H., Kato, T., Takahashi, Y.: Hamilton-Jacobi partial differential equations with path-dependent terminal costs under superlinear Lagrangians. Proceedings of the 23rd International Symposium on Mathematical Theory of Networks and Systems (MTNS). 692--699 (2018) \bibitem{Kim_1999} Kim, A.V.: Functional Differential Equations: Application of $i$-Smooth Calculus. 
Kluwer Academic Publishers, Dordrecht, The Netherlands (1999) \bibitem{Krasovskii_Subbotin_1988} Krasovskii, N.N., Subbotin, A.I.: Game-Theoretical Control Problems, Springer, New York (1988) \bibitem{Krasovskii_Krasovskii_1995} Krasovskii, A.N., Krasovskii, N.N.: Control under Lack of Information, Birkh\"auser, Berlin (1995) \bibitem{Lukoyanov_2000} Lukoyanov, N.Yu.: A Hamilton-Jacobi type equation in control problems with hereditary information. Journal of Applied Mathematics and Mechanics. 64, 243--253 (2000). https://doi.org/10.1016/S0021-8928(00)00046-0 \bibitem{Lukoyanov_2003} Lukoyanov, N.Yu.: Functional Hamilton-Jacobi type equation in ci-derivatives for systems with distributed delays. Nonlinear Functional Analysis and Applications. 8(3), 365--397 (2003) \bibitem{Lukoyanov_2010a} Lukoyanov, N.Yu.: On optimality conditions for the guaranteed result in control problems for time-delay systems. Proceedings of the Steklov Institute of Mathematics. 1, 175--187 (2010) \bibitem{Lukoyanov_2010b} Lukoyanov, N.Yu.: Minimax and viscosity solutions in optimization problems for hereditary systems. Proceedings of the Steklov Institute of Mathematics. 2, 214--225 (2010). https://doi.org/10.1134/S0081543810060179 \bibitem{Lukoyanov_Plaksin_2020} Lukoyanov, N.Yu., Plaksin, A.R.: Hamilton-Jacobi Equations for Neutral-Type Systems: Inequalities for Directional Derivatives of Minimax Solutions. Minimax Theory and its Applications. 5(2), 369--381 (2020) \bibitem{Lukoyanov_Plaksin_2020b} Lukoyanov, N.Yu., Plaksin, A.R.: On the Theory of Positional Differential Games for Neutral-Type Systems. Proceedings of the Steklov Institute of Mathematics. 309(1), S83--S92 (2020). https://doi.org/10.1134/S0081543820040100 \bibitem{Natanson_1960} Natanson, I.P.: Theory of Functions of a Real Variable. Volume 2.
Frederick Ungar Publishing Co., New York (1960) \bibitem{Pham_Zhang_2014} Pham, T., Zhang, J.: Two person zero-sum game in weak formulation and path dependent {Bellman--Isaacs} equation. SIAM Journal on Control and Optimization. 52(4), 2090--2121 (2014). https://doi.org/10.1137/120894907 \bibitem{Plaksin_2020} Plaksin, A.: Minimax and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations for Time-Delay Systems. Journal of Optimization Theory and Applications. 187(1), 22--42 (2020). https://doi.org/10.1007/s10957-020-01742-6 \bibitem{Plaksin_2021} Plaksin, A.: Viscosity Solutions of Hamilton-Jacobi-Bellman-Isaacs Equations for Time-Delay Systems. SIAM Journal on Control and Optimization. 59(3), 1951--1972 (2021). https://doi.org/10.1137/20M1311880 \bibitem{Plaksin_2021b} Plaksin, A.: On the Minimax Solution of the Hamilton-Jacobi Equations for Neutral-Type Systems: the Case of an Inhomogeneous Hamiltonian. Differential Equations. 57(11), 1516--1526 (2021). https://doi.org/10.1134/S0012266121110100 \bibitem{Soner_1988} Soner, H.M.: On the Hamilton-Jacobi-Bellman equations in Banach spaces. Journal of Optimization Theory and Applications. 57(3), 429--437 (1988). https://doi.org/10.1007/BF02346162 \bibitem{Subbotin_1980} Subbotin, A. I.: A generalization of the basic equation of the theory of differential games. Soviet Mathematics - Doklady. 22, 358--362 (1980) \bibitem{Subbotin_1984} Subbotin, A.I.: Generalization of the main equation of differential game theory. Journal of Optimization Theory and Applications. 43(1), 151--162 (1984) \bibitem{Subbotin_1993} Subbotin, A.I.: On a property of the subdifferential. Mathematics of the USSR - Sbornik. 74(1), 63--78 (1993) \bibitem{Subbotin_1995} Subbotin, A.I.: Generalized Solutions of First Order PDEs: The Dynamical Optimization Perspective. Birkh\"{a}user, Boston (1995) \bibitem{Zhou_2019} Zhou, J.: Delay optimal control and viscosity solutions to associated Hamilton--Jacobi--Bellman equations.
International Journal of Control. 92(10), 2263--2273 (2019). https://doi.org/10.1080/00207179.2018.1436769 \bibitem{Zhou_2021} Zhou, J.: A notion of viscosity solutions to second-order Hamilton-Jacobi-Bellman equations with delays. International Journal of Control (2021). https://doi.org/10.1080/00207179.2021.1921279 \end{thebibliography} \end{document}
2205.03791v1
http://arxiv.org/abs/2205.03791v1
Harmonic Centrality and Centralization of Some Graph Products
\documentclass[11pt,a4paper, final, twoside]{article} \usepackage{amsmath} \usepackage{amssymb} \usepackage{mathrsfs} \usepackage{mathtools} \usepackage{fancyhdr} \usepackage{amsthm} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amscd} \usepackage{latexsym} \usepackage{amsthm} \usepackage{graphicx} \usepackage{graphics} \usepackage{graphtex} \usepackage{afterpage} \usepackage[colorlinks=true, urlcolor=blue, linkcolor=black, citecolor=black]{hyperref} \usepackage{color} \setcounter{MaxMatrixCols}{10} \renewcommand{\thefootnote}{} \setlength{\oddsidemargin}{1pt} \setlength{\evensidemargin}{1pt} \setlength{\hoffset}{-1in} \addtolength{\hoffset}{35mm} \setlength{\textwidth}{140mm} \setlength{\marginparsep}{0pt} \setlength{\marginparwidth}{0pt} \setlength{\topmargin}{0pt} \setlength{\voffset}{-2in} \addtolength{\voffset}{20mm} \setlength{\textheight}{200mm} \setlength{\headheight}{45mm} \setlength{\headsep}{5mm} \setlength{\footskip}{15mm} \pagestyle{fancy} \fancyhead{} \fancyfoot{} \def\bs{\blacksquare} \renewcommand{\headrulewidth}{1pt} \newcommand{\HRule}{\rule{\linewidth}{1pt}} \newcommand{\pf}{\noindent \textit{Proof}:\ } \newtheorem{thm}{Theorem}[section] \newtheorem{algorithm}[thm]{Algorithm} \newtheorem{axiom}[thm]{Axiom} \newtheorem{lem}[thm]{Lemma} \newtheorem{example}[thm]{Example} \newtheorem{exercise}[thm]{Exercise} \newtheorem{notation}[thm]{Notation} \newtheorem{problem}[thm]{Problem} \theoremstyle{proposition} \newtheorem{prop}{Proposition}[section] \newtheorem{case}[thm]{Case} \newtheorem{claim}[thm]{Claim} \newtheorem{conclusion}[thm]{Conclusion} \newtheorem{condition}[thm]{Condition} \newtheorem{conjecture}[thm]{Conjecture} \newtheorem{cor}[thm]{Corollary} \newtheorem{criterion}[thm]{Criterion} \theoremstyle{definition} \newtheorem{defn}{Definition}[section] \theoremstyle{remark} \newtheorem{solution}[thm]{Solution} \newtheorem{summary}[thm]{Summary} \numberwithin{equation}{section} \pagenumbering{arabic} \setcounter{page}{100} 
\begin{document} \rhead{\includegraphics[width=14cm]{ARJOMheader.jpg}} \hyphenpenalty=100000 \begin{flushright} {\Large \textbf{\\Harmonic Centrality and Centralization \\of Some Graph Products }}\\[5mm] {\large \textbf{Jose Mari E. Ortega$^\mathrm{1^*}$\footnote{\emph{*Corresponding author: E-mail: [email protected]}}, Rolito G. Eballe$^\mathrm{2}$}}\\[3mm] $^\mathrm{1}${\footnotesize \it Mathematics Unit, Philippine Science High School Southern Mindanao Campus \\ Brgy. Sto. Ni\~no, Davao City, Philippines\\ $^\mathrm{2}$Mathematics Department, College of Arts and Sciences, Central Mindanao University\\ Musuan, Maramag, Bukidnon, Philippines}\\ \end{flushright}\footnotesize \indent\quad\indent\quad\indent\quad\indent\quad\indent\quad\indent\quad\indent\quad\indent\quad\indent\indent\quad\quad~~~~~~~~{\it \textbf{Received: 20 February 2022\\ [1mm] \indent\quad\indent\quad\indent\quad\indent\quad\indent\quad\indent\quad\indent\quad\indent\quad\indent\quad\quad~~ {Accepted: 26 April 2022}\\ \fbox{\large{\bf{\slshape{Original Research Article}}}}~~~~~~~~~~~~~~~~~~~~~~~~~~~\quad Published: 05 May 2022 }}\\[2mm] \HRule\\[3mm] {\Large \textbf{Abstract}}\\[4mm] \fbox{\begin{minipage}{5.4in}{\footnotesize Harmonic centrality calculates the importance of a node in a network by adding the inverse of the geodesic distances of this node to all the other nodes. Harmonic centralization, on the other hand, is the graph-level centrality score based on the node-level harmonic centrality. In this paper, we present some results on both the harmonic centrality and harmonic centralization of graphs resulting from some graph products such as Cartesian and direct products of the path $P_2$ with any of the path $P_m$, cycle $C_m$, and fan $F_m$ graphs. 
} \end{minipage}}\\[4mm] \footnotesize{\it{Keywords:} harmonic centrality; harmonic centralization; graph products.}\\[1mm] \footnotesize{{2010 Mathematics Subject Classification:} 05C12; 05C82; 91D30} \section{Introduction}\label{I1} Centrality in graph theory and network analysis is based on the importance of a vertex in a graph. Freeman \cite{Fre} tackled the concept of centrality being a salient attribute of social networks which may relate to other properties and processes of said networks. Among the many measures of centrality in social network analysis is harmonic centrality. Introduced in 2000 by Marchiori and Latora \cite{Mar}, and developed independently by Dekker \cite{Dek} and Rochat \cite{Roc}, it sums the inverse of the geodesic distances of a particular node to all the other nodes, where it is zero if there is no path from that node to another. It is then normalized by dividing by $m-1$, where $m$ is the number of vertices in the graph. For a related work on harmonic centrality in some graph families, please see \cite{Ort1}. While centrality quantifies the importance of a node in a graph, centralization quantifies a graph-level centrality score based on the various centrality scores of the nodes. It sums up the differences in the centrality index of each vertex from the vertex with the highest centrality, with this sum normalized by the most centralized graph of the same order. Centralization may be used to compare how central graphs are. For a related work on harmonic centralization of some graph families, please see \cite{Ort2}. In this paper, we derive the harmonic centrality of the vertices and the harmonic centralization of products of some important families of graphs. Graphs considered in this study are the path $P_m$, cycle $C_m$, and fan $F_m$ with the Cartesian and direct product as our binary operations. 
Our motivation is to be able to obtain formulas that could be of use when one determines the harmonic centrality and harmonic centralization of more complex graphs. For a related study on graph products using betweenness centrality, see \cite{Kum}. All graphs considered here are simple, undirected, and finite. For graph theoretic terminologies not specifically defined or described in this study, please refer to \cite{Bal}. \section{Preliminary Notes}\label{I2} For formality, we present below the definitions of the concepts discussed in this paper. \begin{defn} \cite{Bol} Let $G=(V(G), E(G))$ be a nontrivial graph of order $m$. If $u\in V(G)$, then the harmonic centrality of $u$ is given by the expression $$ \mathcal{H}_G(u)=\frac{\mathcal{R}_G(u)}{m-1},$$ where $\mathcal{R}_G(u)=\sum_{x\neq u}\frac{1}{d(u,x)}$ is the sum of the reciprocals of the shortest distance $d(u,x)$ in $G$ between vertices $u$ and $x$, for each $x\in (V(G)\setminus\{u\})$, with $\frac{1}{d(u,x)}=0$ in case there is no path from $u$ to $x$ in $G$. \end{defn} \begin{defn} The harmonic centralization of a graph $G$ of order $m$ is given by $$ {C_\mathcal{H} (G) } = \frac{\sum\limits_{i=1}^{m}(\mathcal{H}_{G \text{max}}(u)-\mathcal{H}_{G}(u_i))}{\frac{m-2}{2}} $$ where $\mathcal{H}_{G \text{max}}(u)$ is the largest harmonic centrality among the vertices of $G$. \end{defn} For the graph $G$ in Figure 1, we have the harmonic centrality of each node calculated as $\mathcal{H}_{G}(a_1)=\frac{37}{72}, \mathcal{H}_{G}(a_2)=\frac{29}{36}, \mathcal{H}_{G}(a_3)=\frac{23}{36}, \mathcal{H}_{G}(a_4)=\frac{5}{6}, \mathcal{H}_{G}(a_5)=\frac{3}{4}, \mathcal{H}_{G}(a_6)=\frac{13}{18}$, and $\mathcal{H}_{G}(a_7)=\frac{35}{72}$.
Clearly, $\mathcal{H}_{G \text{max}}(u)=\frac{5}{6}$ from node $a_4$; thus, the harmonic centralization of graph $G$ has a value of ${C_\mathcal{H} (G)=\frac{(\frac{5}{6}-\frac{37}{72})+(\frac{5}{6}-\frac{29}{36})+(\frac{5}{6}-\frac{23}{36})+(\frac{5}{6}-\frac{3}{4})+(\frac{5}{6}-\frac{13}{18})+(\frac{5}{6}-\frac{35}{72})}{\frac{7-2}{2}} }= \frac{\frac{13}{12}}{\frac{5}{2}}=\frac{13}{30}$ \begin{figure}[!htb] \Magnify 0.8 $$\pic \Path (310,60) (310,100) (350,135) (390,100) (390,60) (310,60) \Path (310,100) (390,100) \Path (260,100) (310,100) \Path (390,60) (440,60) \Path (310,60) (390,100) \Align [c] ($a_{1}$) (260,110) \Align [c] ($a_{2}$) (303,110) \Align [c] ($a_{3}$) (360,140) \Align [c] ($a_{4}$) (400,110) \Align [c] ($a_5$) (300,60) \Align [c] ($a_{6}$) (400,70) \Align [c] ($a_{7}$) (440,70) \Align [c] ($\bullet$) (310,60) \cip$$ \caption{Graph $G$ with $u\in V(G)$, where $\mathcal{H}_G(a_5)=\frac{3}{4}$ and $C_\mathcal{H}(G)=\frac{13}{30}$} \end{figure} \vspace{10pt} \begin{defn} The $n^{th}$ harmonic number $H_n$ is the sum of the reciprocals of the first $n$ natural numbers, that is $ H_n=\sum_{k=1}^n\frac{1}{k}. $ \end{defn} The definitions for the binary operations of graphs considered in this study are given below. \begin{defn} [Cartesian product of graphs] Let $G$ and $H$ be graphs. The vertex set of the Cartesian product $G \square H$ is the Cartesian product $V(G) \times V(H)$; where two vertices $(u,v)$ and $(u', v')$ are adjacent in $G \square H$ if and only if either (i) $u=u'$ and $v$ is adjacent to $v'$ in $H$, or (ii) $v=v'$ and $u$ is adjacent to $u'$ in $G$. \cite{Bal} \end{defn} \begin{defn} [Direct product of graphs] Let $G$ and $H$ be graphs. The vertex set of the direct product $G \times H$ is the Cartesian product $V(G) \times V(H)$; where two vertices $(u,v)$ and $(u', v')$ are adjacent in $G \times H$ if and only if (i) $u$ and $u'$ are adjacent in $G$ and (ii) $v$ and $v'$ are adjacent in $H$. 
\cite{Bal} \end{defn} Drawn below are the paths $P_2$ and $P_3$, and the products $P_2 \square P_3$ and $P_2 \times P_3$. \vspace{-5mm} \begin{center} \begin{figure}[!htb] \Magnify 1.0 $$\pic \Path (-120,120) (-90,120) \Align [c] ($a$) (-120,130) \Align [c] ($b$) (-90,130) \Path (-20,150) (-20,120) (-20,90) \Align [c] ($w$) (-10,150) \Align [c] ($x$) (-10,120) \Align [c] ($y$) (-10,90) \Path (40,90) (40,120) (40,150) (70,150) (70, 120) (70, 90) (40,90) \Path (40,120) (70,120) \Path (120,90) (150,120) (120,150) \Path (150,90) (120, 120) (150, 150) \Align [c] ($(a)\text{Path} P_2$) (-105,75) \Align [c] ($(b)\text{Path} P_3$) (-25,75) \Align [c] ($(c)P_2 \square P_3$) (55,75) \Align [c] ($(d)P_2\times P_3$) (135,75) \cip$$ \caption[The Cartesian Product $P_2 \square P_3$; Direct Product $P_2 \times P_3$.]{(a) Path $P_2$ (b) Path $P_3$ (c) Cartesian Product $P_2 \square P_3$; and (d) Direct Product $P_2 \times P_3$.} \label{fig:Products} \end{figure} \end{center} \vspace{-40pt} \section{Main Results}\label{J} The Cartesian and direct products of special graph families considered in this paper are again that of path $P_2$ with path $P_m$ of order $m\geq 1$, path $P_2$ with cycle $C_m$ of order $m\geq 3$, and path $P_2$ with fan $F_m$ of order $m+1\geq 2$. 
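As a quick check of the two definitions in the preliminaries, the following Python sketch recomputes the values for the graph $G$ of Figure 1 with exact rational arithmetic. The edge list is transcribed from the drawing; all function and variable names are ours, not part of the paper, and the centralization is normalized by $(m-2)/2$, as in the worked example.

```python
from collections import deque
from fractions import Fraction

def distances(adj, src):
    """Geodesic distances from src by breadth-first search."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def harmonic_centrality(adj, u):
    """H_G(u) = R_G(u)/(m-1); unreachable vertices contribute 0."""
    d = distances(adj, u)
    r = sum(Fraction(1, dv) for v, dv in d.items() if v != u)
    return r / (len(adj) - 1)

def harmonic_centralization(adj):
    """Sum of deviations from the maximum centrality, normalized by (m-2)/2."""
    h = [harmonic_centrality(adj, u) for u in adj]
    return sum(max(h) - hi for hi in h) / Fraction(len(adj) - 2, 2)

# Edge list of the graph G of Figure 1, read off from the drawing
edges = [("a1", "a2"), ("a2", "a3"), ("a3", "a4"), ("a2", "a4"),
         ("a2", "a5"), ("a4", "a5"), ("a4", "a6"), ("a5", "a6"), ("a6", "a7")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

print(harmonic_centrality(adj, "a4"))  # 5/6
print(harmonic_centralization(adj))    # 13/30
```

Running it reproduces $\mathcal{H}_{G}(a_4)=\frac{5}{6}$ and $C_\mathcal{H}(G)=\frac{13}{30}$, the values obtained by hand above.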
\begin{center} \begin{figure}[!htb] \Magnify 1.1 $$\pic \Path (-200,20) (-180,20) (-160,20) \Path (-140,20) (-120,20) \Align[c] ($\ldots$) (-150,20) \Align [c] ($u_{1}$) (-200,10) \Align [c] ($u_{2}$) (-180,10) \Align [c] ($u_{3}$) (-160,10) \Align [c] ($u_{m-1}$) (-135,10) \Align [c] ($u_{m}$) (-113,10) \Path (-70,20) (-50,40) (-30,20) (-50,0) \Align[c] ($\ddots$) (-61,13) \Align [c] ($u_{1}$) (-50,50) \Align [c] ($u_{2}$) (-80,20) \Align [c] ($u_{m-1}$) (-50,-8) \Align [c] ($u_{m}$) (-20,20) \Path (50,50) (50,0) \Path (50,50) (70,0) \Path (50,50) (90,0) \Path (50,50) (30,0) \Path (50,50) (10,0) \Path (10,0) (30,0) \Path (50,0) (30,0) \Path (70,0) (50,0) \Align [c] ($u_0$) (40,50) \Align [c] ($u_1$) (10,-10) \Align [c] ($u_2$) (30,-10) \Align [c] ($u_3$) (50,-10) \Align [c] ($u_4$) (70,-10) \Align [c] ($u_m$) (95,-10) \Align [c] ($\ldots$) (80,0) \Align [c] ($(a)$) (-150,-20) \Align [c] ($(b)$) (-50,-20) \Align [c] ($(c)$) (50,-20) \cip$$ \caption[The Path $P_m$;\; Cycle $C_m$;\; and Fan $F_m$.]{(a) Path $P_m$;\;(b) Cycle $C_m$;\; and (c) Fan $F_m$.} \label{fig:Path} \end{figure} \end{center} \begin{thm} \normalfont For the Cartesian product of the path $P_2=[u_1, u_2]$ and the path $P_m=[v_1, v_2, ..., v_m]$, the harmonic centrality of any vertex $(u_i, v_j)$ is given by \[ \mathcal{H}_{P_2\square P_m}(u_i, v_j) = \begin{dcases*} \frac{1}{2m-1} \Big(2H_{m-1}+\frac{1}{m}\Big) & \text{for $1\leq i \leq 2$, $j=1$ or $j=m$} \\ \frac{1}{2m-1} \Big[2\Big(H_{j-1}+H_{m-j}\Big) & \\ \quad \quad + \frac{1}{j} +\frac{1}{m-j+1}-1\Big] & \text{for $1\leq i \leq 2$, $1<j<m$.} \\ \end{dcases*} \] \pf The product of $P_2$ and $P_m$ is also called a ladder graph $L_m$ of order $2m$. Considering the structure of the product $P_2\square P_m$ or $L_m$, we can partition its vertex set into two subsets $V_1(L_m)=\{(u_1, v_1), (u_1, v_2),..., (u_1, v_m)\}$ and $V_2(L_m)=\{(u_2, v_1), (u_2, v_2),..., (u_2, v_m)\}$ with $P_2=[u_1, u_2]$ and $P_m=[v_1, v_2, ..., v_m]$.
For $(u_i, v_1)$ and $(u_i, v_m)$ with $i=1, 2$ we have \\ \noindent \small $\begin{aligned} \mathcal{R}_{L_m}(u_i,v_1) & = \mathcal{R}_{L_m}(u_i,v_m) \ = \mathcal{R}_{L_m}(u_1,v_1) \ \\ & = \sum\limits_{x \in V_1(L_{m}),x\neq (u_1, v_1)} \frac{1}{\text{d}_{L_{m}} ((u_1,v_1), x)} +\sum\limits_{x \in V_2(L_{m})} \frac{1}{\text{d}_{L_{m}} ((u_1,v_1), x)} \\ & = \frac{1}{\text{d}((u_1,v_1), (u_1,v_2))}+\frac{1}{\text{d} ((u_1,v_1), (u_1,v_3))}+\\ & \quad \quad ...+ \frac{1}{\text{d}((u_1,v_1), (u_1,v_m))}+\frac{1}{\text{d} ((u_1,v_1), (u_2,v_1))}+\\ & \quad \quad ...+\frac{1}{\text{d} ((u_1,v_1), (u_2,v_{m-1}))}+\frac{1}{\text{d} ((u_1,v_1), (u_2,v_m))}\\ & = \Big[1+\frac{1}{2}+...+\frac{1}{m-1}\Big]+ \Big[1+\frac{1}{2}+...+\frac{1}{m-1}+\frac{1}{m}\Big] \\ & = 2\sum_{k=1}^{m-1} \frac{1}{k} + \frac{1}{m} \\ & = 2H_{m-1}+\frac{1}{m} \end{aligned}$ \\ \noindent As for $(u_i, v_j)\in V(L_m)$, where $i=1, 2,$ and $j=2, 3, ..., m-1$, \noindent \small $\begin{aligned} \mathcal{R}_{L_m}(u_i, v_j)\ & =\mathcal{R}_{L_m}(u_1, v_j)\ \\ & = \sum\limits_{x \in V_1(L_{m}),x\neq (u_1, v_j)} \frac{1}{\text{d}_{L_{m}} ((u_1,v_j), x)} +\sum\limits_{x \in V_2(L_{m})} \frac{1}{\text{d}_{L_{m}} ((u_1,v_j), x)} \\ & = \sum\limits_{1\leq k\leq m,k\neq j} \frac{1}{\text{d}_{L_{m}} ((u_1,v_j), (u_1, v_k))} +\sum\limits_{k=1}^{m} \frac{1}{\text{d}_{L_{m}} ((u_1,v_j), (u_2, v_k))} \\ & = \big[\frac{1}{j-1}+\frac{1}{j-2}+...+\frac{1}{3} +\frac{1}{2}+1\big]+\big[1+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{m-j}\big]\\ & \quad \quad +\big[\frac{1}{j}+\frac{1}{j-1}+...+\frac{1}{3} +\frac{1}{2}\big]+\big[1+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{m-j+1}\big]\\ & = \big[\frac{1}{j-1}+\frac{1}{j-2}+...+\frac{1}{3} +\frac{1}{2}+1\big]+\big[1+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{m-j}\big]\\ & \quad \quad +\big[\frac{1}{j-1}+\frac{1}{j-2}+...+\frac{1}{3} +\frac{1}{2}+1\big]+\big[1+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{m-j}\big] \\ & \quad \quad \quad + \frac{1}{j}+\frac{1}{m-j+1}-1\\ & = 2(H_{j-1}+H_{m-j}) +
\frac{1}{j} +\frac{1}{m-j+1}-1 \end{aligned}$ Thus, after normalizing we get \[ \mathcal{H}_{P_2\square P_m}(u_i, v_j) = \begin{dcases*} \frac{1}{2m-1} \Big(2H_{m-1}+\frac{1}{m}\Big) & \text{for $1\leq i \leq 2$, $j=1$ or $j=m$} \\ \frac{1}{2m-1} \Big[2\Big(H_{j-1}+H_{m-j}\Big) & \\ \quad \quad + \frac{1}{j} +\frac{1}{m-j+1}-1\Big] & \text{for $1\leq i \leq 2$, $1<j<m$.} \\ \end{dcases*} \]\\\\ [-40pt] \end{thm} \begin{thm} \normalfont For the Cartesian product of path $P_2=[u_1, u_2]$ of order 2 and cycle graph $C_m=[v_1, v_2, ..., v_m, v_1]$ of order $m$, the harmonic centrality of any vertex $(u_i, v_j)$ is given by \[ \mathcal{H}_{P_2 \square C_m}(u_i,v_j) = \begin{dcases*} \frac{1}{2m-1} \Big(4H_\frac{m-1}{2}+\frac{3-m}{m+1}\Big) & \text{if $m$ is odd}\\ \frac{1}{2m-1} \Big(4H_{\frac{m}{2}}+\frac{2}{m+2}-\frac{m+2}{m}\Big) & \text{if $m$ is even}. \end{dcases*} \] \begin{sloppypar} \pf The Cartesian product of $P_2$ and $C_m$ is also known as a prism $Y_m$ of order $2m$. Considering its structure, we can partition its vertex set into two subsets $V_1(Y_m)=\{(u_1, v_1), (u_1, v_2),..., (u_1, v_m)\}$ and $V_2(Y_m)=\{(u_2, v_1), (u_2, v_2),..., (u_2, v_m)\}$.
If $m$ is odd, we have \end{sloppypar} \noindent $\begin{aligned} \mathcal{R}_{Y_m}(u_i,v_j) \ & = \mathcal{R}_{Y_m}(u_1,v_j) \ \\ & = \sum\limits_{x \in V_1(Y_{m}), x\neq (u_1, v_j)} \frac{1}{\text{d}_{Y_m} ((u_i,v_j), x)} +\sum\limits_{x \in V_2(Y_m)} \frac{1}{\text{d}_{Y_m} ((u_i,v_j), x)} \\ & = \Big[1+1+...+\frac{1}{\frac{m-1}{2}}+\frac{1}{\frac{m-1}{2}}\Big]+ \Big[1+\frac{1}{2}+\frac{1}{2}+...+\frac{1}{\frac{m+1}{2}}+\frac{1}{\frac{m+1}{2}}\Big] \\ & = 4\sum_{k=1}^{\frac{m-1}{2}} \frac{1}{k} + 2\Big(\frac{2}{m+1}\Big) -1 \\ & = 4H_{\frac{m-1}{2}}+\frac{3-m}{m+1} \end{aligned}$ \\ \noindent If $m$ is even, we have \noindent $\begin{aligned} \mathcal{R}_{Y_m}(u_i,v_j) \ & = \sum\limits_{x \in V_1(Y_{m}), x\neq (u_i, v_j)} \frac{1}{\text{d}_{Y_m} ((u_i,v_j), x)} +\sum\limits_{x \in V_2(Y_m)} \frac{1}{\text{d}_{Y_m} ((u_i,v_j), x)} \\ & = \Big[1+1+\frac{1}{2}+\frac{1}{2}+...+\frac{1}{\frac{m}{2}}\Big]+ \Big[1+\frac{1}{2}+\frac{1}{2}+...+\frac{1}{\frac{m+2}{2}}\Big]\\ & = 4\sum_{k=1}^{\frac{m}{2}} \frac{1}{k} + \frac{1}{\frac{m+2}{2}} -1 -\frac{1}{\frac{m}{2}} \\ & = 4H_{\frac{m}{2}}+\frac{2}{m+2}-\frac{m+2}{m}\\ \end{aligned}$ \\ Normalizing and consolidating these results, we get \[ \mathcal{H}_{Y_m}(u_i,v_j) = \begin{dcases*} \frac{1}{2m-1} \Big(4H_\frac{m-1}{2}+\frac{3-m}{m+1}\Big) & \text{if $m$ is odd} \\ \frac{1}{2m-1} \Big(4H_{\frac{m}{2}}+\frac{2}{m+2}-\frac{m+2}{m}\Big) & \text{if $m$ is even.} $\quad \bs$ \end{dcases*} \] \\ \end{thm} \begin{thm} \normalfont For the Cartesian product of path $P_2$ of order 2 and fan graph $F_m$ of order $m+1$ in Figure 3c, the harmonic centrality of any vertex $(u_i, v_j)$ is given by \[ \mathcal{H}_{P_2\square F_m }(u_i, v_j) = \begin{dcases*} \frac{3m+2}{2(2m+1)} & \text{for $i=1,2$ and $j=0$} \\ \frac{5m+14}{6(2m+1)} & \text{for $i=1,2$ and $j=1,m$} \\ \frac{5m+18}{6(2m+1)} & \text{for $i=1,2$ and $1<j<m$} \\ \end{dcases*} \] \pf Vertices $(u_1, v_0)$ and $(u_2, v_0)$ are adjacent to $m+1$ vertices, and have a distance of 2 to $m$
vertices. Thus, $$\mathcal{H}_{P_2\square F_m}(u_i, v_0)\ = \frac{1}{2m+1}\Big[1(m+1) + \frac{1}{2} (m)\Big] = \frac{3m+2}{2(2m+1)} $$ For vertices $(u_1, v_1), (u_1, v_m), (u_2, v_1)$ and $(u_2, v_m)$, each is adjacent to three vertices, each has a distance of 2 to $m$ vertices, and each has a distance of 3 to $m-2$ vertices; therefore, $$\mathcal{H}_{P_2\square F_m}(u_i, v_1)\ = \mathcal{H}_{P_2\square F_m}(u_i, v_m)\ = \frac{1}{2m+1}\Big[1(3) + \frac{1}{2} (m) + \frac{1}{3} (m-2)\Big] = \frac{5m+14}{6(2m+1)} $$ As for vertices $(u_i, v_j)$ for $i=1,2$ and $1<j<m$, each is adjacent to four vertices, each has a distance of 2 to $m$ vertices, and each has a distance of 3 to $m-3$ vertices; thus, $$\mathcal{H}_{P_2\square F_m}(u_i, v_j)\ = \frac{1}{2m+1}\Big[1(4) + \frac{1}{2} (m) + \frac{1}{3} (m-3)\Big] = \frac{5m+18}{6(2m+1)}. \quad \bs $$ \end{thm} \begin{thm} \normalfont For the Cartesian product of $P_2=[u_1, u_2]$ of order 2 and path $P_m=[v_1, v_2, ..., v_m]$ of order $m$, the harmonic centralization is given by $$ {\footnotesize {C}_\mathcal{H}(L_m) = \begin{dcases*} \frac{4}{(m-1)(2m-1)}\Big[2(m-1)H_\frac{m-1}{2}-2H_{m-1} \\ \quad +\frac{2(m-1)}{m+1}-\frac{m-1}{2}-\frac{1}{m} & \text{if $m$ is odd } \\ \quad \quad -\sum\limits_{i=2}^{\frac{m-1}{2}}\Big(2H_{i-1}+2H_{m-i}+\frac{1-i}{i}+\frac{1}{m-i+1}\Big)\Big] \\ \\[-8pt] \frac{2}{(2m-1)(m-1)}\Big[4(m-2)H_\frac{m}{2}-4H_{m-1} \\ \quad -\frac{m^2-2}{m}+\frac{2m-4}{m+2} & \text{if $m$ is even.} \\ \quad -2\sum\limits_{i=2}^{\frac{m-2}{2}} \Big(2H_{i-1}+2H_{m-i}+ \frac{1-i}{i}+ \frac{1}{m-i+1}\Big)\Big] \end{dcases*} }$$ \pf The product of $P_2=[u_1, u_2]$ and $P_m=[v_1, v_2, ..., v_m]$ is the ladder graph $L_m$ of order $2m$. In a ladder graph, if $m$ is odd, then vertices $(u_1, v_{\frac{m+1}{2}})$ and $(u_2, v_{\frac{m+1}{2}})$ will have the maximum harmonic centrality of $\frac{4}{2m-1}\Big(H_\frac{m-1}{2}+\frac{1}{m+1}-\frac{1}{4}\Big)$.
So, $$ {\small \begin{aligned} {C}_\mathcal{H}(L_m) & = \frac{1}{\frac{2m-2}{2}}\Big[\Big(\frac{4(2(m-1))}{2m-1}\Big(H_\frac{m-1}{2}+\frac{1}{m+1}-\frac{1}{4}\Big)\Big)-\frac{4}{2m-1}\Big(2H_{m-1}+\frac{1}{m}\Big) \\ & \quad \quad -\frac{4}{2m-1}\sum\limits_{j=2}^{\frac{m-1}{2}}\Big(2H_{j-1}+2H_{m-j}+\frac{1-j}{j}+\frac{1}{m-j+1}\Big)\Big]\\ & = \frac{4}{(m-1)(2m-1)}\Big[2(m-1)H_\frac{m-1}{2}-2H_{m-1}+\frac{2(m-1)}{m+1}-\frac{m-1}{2}-\frac{1}{m} \\ & \quad \quad -\sum\limits_{j=2}^{\frac{m-1}{2}}\Big(2H_{j-1}+2H_{m-j}+\frac{1-j}{j}+\frac{1}{m-j+1}\Big)\Big].\\ \end{aligned} }$$ On the other hand, if $m$ is even in the product, vertices $(u_1, v_{\frac{m}{2}}), (u_2, v_{\frac{m}{2}}), (u_1, v_{\frac{m+2}{2}})$ and $(u_2, v_\frac{m+2}{2})$ will have the maximum harmonic centrality of $\frac{1}{2m-1}\Big(4H_\frac{m}{2}-\frac{m+2}{m}+\frac{2}{m+2}\Big)$. So, $$ {\small \begin{aligned} {C}_\mathcal{H}(L_m) & = \frac{1}{\frac{2m-2}{2}}\Big[\frac{4}{2m-1}\Big(4H_\frac{m}{2}-2H_{m-1}-\frac{m+3}{m}+\frac{2}{m+2}\Big) \\ & \quad \quad +\frac{2(m-4)}{2m-1}\Big(4H_\frac{m}{2}-\frac{m+2}{m}+\frac{2}{m+2}\Big) \\ & \quad \quad -\frac{4}{2m-1}\sum\limits_{j=2}^{\frac{m-2}{2}}\Big(2H_{j-1}+2H_{m-j}+\frac{1}{j}+\frac{1}{m-j+1}-1\Big)\Big]\\ & = \frac{2}{(m-1)(2m-1)}\Big[8H_\frac{m}{2}+4(m-4)H_\frac{m}{2}-4H_{m-1} \\ & \quad \quad -\frac{2(m+3)+(m+2)(m-4)}{m}+\frac{4+2(m-4)}{m+2} \\ & \quad \quad -2\sum\limits_{j=2}^{\frac{m-2}{2}}\Big(2H_{j-1}+2H_{m-j}+\frac{1-j}{j}+\frac{1}{m-j+1}\Big)\Big]\\ & =\frac{2}{(2m-1)(m-1)}\Big[4(m-2)H_\frac{m}{2}-4H_{m-1} \\ & \quad -\frac{(m^2-2)}{m}+\frac{2m-4}{m+2} \\ & \quad -2\sum\limits_{j=2}^{\frac{m-2}{2}} \Big(2H_{j-1}+2H_{m-j}+ \frac{1-j}{j}+ \frac{1}{m-j+1}\Big)\Big]. \quad \bs \end{aligned} }$$ \end{thm} \begin{thm} \normalfont For the Cartesian product of path $P_2$ of order 2 and cycle $C_m$ of order $m$, the harmonic centralization is zero.
\\ \pf Each vertex of the resulting graph $P_2\square C_m$ has the same harmonic centrality, namely $\mathcal{H}_{P_2\square C_m}(u_i, v_j)=\frac{1}{2m-1}(4H_\frac{m-1}{2}+\frac{3-m}{m+1})$ if $m$ is odd and $\mathcal{H}_{P_2\square C_m}(u_i, v_j)= \frac{1}{2m-1}(4H_\frac{m}{2}+\frac{2}{m+2}-\frac{m+2}{m})$ if $m$ is even. Therefore, the harmonic centralization is zero. $\bs$ \end{thm} \begin{thm} \normalfont For the Cartesian product of path $P_2$ of order 2 and fan graph $F_m$ of order $m+1$, the harmonic centralization is given by \[ {C}_\mathcal{H}(P_2\square F_m) = \frac{4(m-1)(m-2)}{3m(2m+1)} \] \pf The vertices with the maximum harmonic centrality are $(u_1, v_0)$ and $(u_2, v_0)$, each with a value of $\frac{3m+2}{2(2m+1)}$. The harmonic centrality of all the other vertices will be subtracted from this maximum harmonic centrality; thus $\begin{aligned} {C}_\mathcal{H}(P_2\square F_m)\ & = \Big(\frac{1}{\frac{2(m+1)-2}{2}}\Big)\Big[4\Big(\frac{3m+2}{4m+2}-\frac{5m+14}{6(2m+1)}\Big) \\ & \quad \quad \quad \quad +(2m-4)\Big(\frac{3m+2}{4m+2}-\frac{5m+18}{6(2m+1)}\Big)\Big] \\ & = \cfrac{4(m-1)(m-2)}{3m(2m+1)}. \quad \bs \\ \end{aligned}$ \\ \end{thm} \begin{thm} \normalfont For the direct product of path $P_2$ of order 2 and a path graph $P_m$ of order $m$, the harmonic centrality of any vertex $(u_i, v_j)$ is given by \[ \mathcal{H}_{P_2\times P_m }(u_i, v_j) = \begin{dcases*} \frac{H_{m-1}}{2m-1} & \text{for $i=1,2$ and $j=1$ or $m$} \\ \frac{H_{j-1}+H_{m-j}}{2m-1} & \text{for $i=1,2$ and $1<j<m$} \\ \end{dcases*} \] \pf The direct product $P_2\times P_m$ is a graph of order $2m$ consisting of two disjoint path graphs of order $m$ (since $P_m$ is bipartite, its bipartite double cover is disconnected). Thus, the endpoints will have a harmonic centrality of $\frac{H_{m-1}}{2m-1}$, while the inside vertices will have a harmonic centrality value of $\frac{H_{j-1}+H_{m-j}}{2m-1}$.
$\quad \bs$ \end{thm} \begin{thm} \normalfont For the direct product of path $P_2$ of order 2 and a cycle graph $C_m$ of order $m$, the harmonic centrality of any vertex $(u_i, v_j)$ is given by \[ \mathcal{H}_{P_2\times C_m }(u_i, v_j) = \begin{dcases*} \frac{1}{2m-1}\big(2H_{m-1}+\frac{1}{m}\big) & \text{if $m$ is odd} \\ \frac{1}{2m-1}\big(2H_{\frac{m-2}{2}}+\frac{2}{m}\big) & \text{if $m$ is even} \\ \end{dcases*} \] \pf The resulting graph $P_2\times C_m$ of order $2m$ is the bipartite double cover of $C_m$. If $m$ is odd, it is the single cycle $C_{2m}$, in which every vertex $u$ has $\mathcal{R}_{C_{2m}}(u)=2H_{m-1}+\frac{1}{m}$; if $m$ is even, it is a pair of disjoint cycle graphs of order $m$, in which every vertex $u$ has $\mathcal{R}_{C_m}(u)=2H_{\frac{m-2}{2}}+\frac{2}{m}$. $\quad \bs$ \end{thm} \begin{thm} \normalfont For the direct product of path $P_2$ of order 2 and a fan graph $F_m$ of order $m+1$, the harmonic centrality of any vertex $(u_i, v_j)$ is given by \[ \mathcal{H}_{P_2\times F_m }(u_i, v_j) = \begin{dcases*} \frac{m}{2m+1} & \text{for $i=1,2$ and $j=0$} \\ \frac{m+2}{2(2m+1)} & \text{for $i=1,2$ and $j=1$ or $m$} \\ \frac{m+3}{2(2m+1)} & \text{for $i=1,2$ and $1<j<m$} \\ \end{dcases*} \] \pf The resulting graph of $P_2\times F_m$ is of order $2m+2$ composed of a pair of fan graphs each with an order of $m+1$. So vertices $(u_1, v_0)$ and $(u_2, v_0)$ will be adjacent to $m$ vertices and normalized by $2m+1$, while vertices $(u_i, v_1)$ and $(u_i, v_m)$ will be adjacent to 2 vertices and have a distance of 2 to $m-2$ vertices. Therefore, $\frac{1}{2m+1}\Big(\frac{m-2}{2}+2\Big) = \frac{m+2}{2(2m+1)}$. All the other vertices will be adjacent to 3 other vertices and have a distance of 2 to $m-3$ vertices; therefore, $\frac{1}{2m+1}\Big(\frac{m-3}{2}+3\Big) = \frac{m+3}{2(2m+1)}$.
$\quad \bs$ \end{thm} \begin{thm} \normalfont For the direct product of path $P_2$ of order 2 and a path graph $P_m$ of order $m$, the harmonic centralization is given by \[ \footnotesize {C}_\mathcal{H}(P_2 \times P_m) = \begin{dcases*} \frac{4\big((m-1)H_{\frac{m-1}{2}}-H_{m-1}-\sum\limits_{j=2}^{\frac{m-1}{2}}(H_{j-1}+H_{m-j})\big)}{(m-1)(2m-1)} & \text{if $m$ is odd} \\ \frac{4\big((m-2)(H_{\frac{m-2}{2}}+\frac{1}{m})-H_{m-1}-\sum\limits_{j=2}^{\frac{m-2}{2}}(H_{j-1}+H_{m-j})\big)}{(m-1)(2m-1)} & \text{if $m$ is even} \\ \end{dcases*} \] \pf If $m$ is odd, then the maximum harmonic centrality will be $\frac{2H_{\frac{m-1}{2}}}{2m-1}$, attained at vertices $(u_1, v_{\frac{m+1}{2}})$ and $(u_2, v_{\frac{m+1}{2}})$. The remaining vertices are the endpoints, which have a harmonic centrality of $\frac{H_{m-1}}{2m-1}$, and all other vertices, which have harmonic centrality $\frac{H_{j-1}+H_{m-j}}{2m-1}$. Subtracting from the maximum and normalizing, we get $$\frac{4\big((m-1)H_{\frac{m-1}{2}}-H_{m-1}-\sum\limits_{j=2}^{\frac{m-1}{2}}(H_{j-1}+H_{m-j})\big)}{(m-1)(2m-1)}.$$ \\ On the other hand, if $m$ is even, then vertices $(u_1, v_{\frac{m}{2}}), (u_2, v_{\frac{m}{2}}), (u_1, v_{\frac{m+2}{2}})$ and $(u_2, v_{\frac{m+2}{2}})$ will have the maximum harmonic centrality of $\frac{2\big(H_{\frac{m-2}{2}}+\frac{1}{m}\big)}{2m-1}$. The harmonic centrality of all other vertices will be subtracted from this value and normalized to arrive at the harmonic centralization value of $$\frac{4\big((m-2)(H_{\frac{m-2}{2}}+\frac{1}{m})-H_{m-1}-\sum\limits_{j=2}^{\frac{m-2}{2}}(H_{j-1}+H_{m-j})\big)}{(m-1)(2m-1)}. \quad \bs$$ \end{thm} \begin{thm} \normalfont For the direct product of path $P_2$ of order 2 and a cycle graph $C_m$ of order $m$, the harmonic centralization is zero. \\ \pf Since all vertices in the direct product of path $P_2$ and cycle graph $C_m$ have the same harmonic centrality, the harmonic centralization is zero.
$\quad \bs$ \end{thm} \begin{thm} \normalfont For the direct product of Path $P_2$ of order 2 and Fan graph $F_m$ of order $m+1$, the harmonic centralization is given by \[ {C}_\mathcal{H}(P_2\times F_m) = \frac{(m-1)(m-2)}{m(2m+1)} \] \pf The maximum harmonic centrality value is $\frac{m}{2m+1}$ while four vertices have harmonic centrality values of $\frac{m+2}{2(2m+1)}$. As for the other vertices, they have a harmonic centrality of $\frac{m+3}{2(2m+1)}$, so dividing by $m$ to normalize, we have \\ $\begin{aligned} {C}_\mathcal{H}(P_2\times F_m)\ & = \frac{1}{m}\Big[2m\Big(\frac{m}{2m+1}\Big)-4\Big(\frac{m+2}{2(2m+1)}\Big)-(2m-4)\Big(\frac{m+3}{2(2m+1)}\Big)\Big] \\ & = \cfrac{(m-2)(m-1)}{m(2m+1)}. \quad \bs \\ \end{aligned}$ \\ \end{thm} \section{Conclusions} Harmonic centrality is one of the more recent centrality measures that identifies the importance of a node, while harmonic centralization quantifies how centralized a graph is based on the node-level harmonic centrality. In this paper, we introduced some results on the harmonic centrality of the nodes and harmonic centralization of graphs resulting from the Cartesian and direct products of the path $P_2$ with any of the path $P_m$, cycle $C_m$, and fan $F_m$ graphs. For further studies, results can be derived for other families of graphs and other binary operations. \\ \noindent \Large\textbf{Acknowledgement}\\[2mm] \footnotesize The authors would like to acknowledge the valuable comments and inputs made by the anonymous referees.\\[2mm] \noindent{\Large\bf Competing Interests}\\\\ Authors have declared that no competing interests exist. \begin{thebibliography}{99} \bibitem{Bal} {R. Balakrishnan \& K. Ranganathan, \em A Textbook of Graph Theory, } 2nd Ed. Springer, New York. (2012). \bibitem{Bol} {P. Boldi \& S. Vigna, Axioms for centrality, \em Internet Mathematics, } {\bf 10}, (2014) 222-262. https://doi.org/10.1080/15427951.2013.865686. \bibitem{Dek} {A.H. 
Dekker, Conceptual Distance in Social Network Analysis, \em Journal of Social Structure, }{\bf 6 (3)} (2005). \bibitem{Fre}{L. C. Freeman, Centrality in Social Networks: Conceptual Clarification, \em Social Networks, }{\bf 1(3)} (1979), 215 - 239. https://doi.org/10.1016/0378-8733(78)90021-7 \bibitem{Gom} {S. Gómez, Centrality in Networks: Finding the Most Important Nodes. \em In: Moscato P., de Vries N. (eds) Business and Consumer Analytics: New Ideas, } Springer, Cham. (2019) https://doi.org/10.1007/978-3-030-06222-4\_8 \bibitem{Kum}{S. Kumar \& K. Balakrishnan, Betweenness centrality in Cartesian product of graphs, \em AKCE International Journal of Graphs and Combinatorics, }{\bf 17(1)} (2020), 571 - 583. https://doi.org/10.1016/j.akcej.2019.03.012 \bibitem{Mar}{M. Marchiori \& V. Latora, Harmony in the small-world, \em Physica A: Statistical Mechanics and Its Applications, }{\bf 285 (3–4)} (2000), 539–546. https://doi.org/10.1016/s0378-4371(00)00311-3 \bibitem{Ort1}{J. M. E. Ortega \& R. G. Eballe, Harmonic centrality of some graph families, \em Advances and Applications in Mathematical Sciences, }{\bf 21 (5)} (2022), 2581-2598. https://doi.org/10.5281/zenodo.6396942 \bibitem{Ort2}{J. M. E. Ortega \& R. G. Eballe, \em Harmonic centralization of some graph families.} (2022). ArXiv. Pre-print. https://doi.org/10.48550/arXiv.2204.04381 \bibitem{Roc} {Y. Rochat, Closeness centrality extended to unconnected graphs: The harmonic centrality index, \em Applications of Social Network Analysis}, ASNA 2009. (2009) \bibitem{San} {C. Sanna, On the p-adic valuation of harmonic numbers, \em Journal of Number Theory, } {\bf 166} (2016) 41–46.
https://doi.org/10.1016/j.jnt.2016.02.020 \end{thebibliography} \end{document}
2205.03752v3
http://arxiv.org/abs/2205.03752v3
Efficient Representation of Large-Alphabet Probability Distributions
\documentclass[journal]{resources/IEEEtran_new} \synctex=1 \IEEEoverridecommandlockouts \usepackage{graphicx} \graphicspath{{images/}} \usepackage{amsthm, amsmath, amsfonts, amssymb} \usepackage{enumerate} \usepackage{graphicx} \usepackage{mathtools} \usepackage{thmtools} \usepackage{thm-restate} \usepackage{cleveref} \usepackage{subfigure} \usepackage{resources/custom_commands} \usepackage{resources/coloredboxes} \usepackage{float} \usepackage{enumerate} \usepackage{marginnote} \usepackage{autonum} \usepackage{scalerel,stackengine} \stackMath \newcommand\reallywidecheck[1]{\savestack{\tmpbox}{\stretchto{ \scaleto{ \scalerel*[\widthof{\ensuremath{#1}}]{\kern-.6pt\bigwedge\kern-.6pt} {\rule[-\textheight/2]{1ex}{\textheight}} }{\textheight}}{0.5ex}}\stackon[1pt]{#1}{\scalebox{-1}{\tmpbox}}} \usepackage{mathabx} \newtheorem*{theorem*}{Theorem} \declaretheoremstyle[ spaceabove=\topsep, spacebelow=\topsep, headfont=\normalfont\bfseries, notefont=\bfseries, notebraces={}{}, bodyfont=\normalfont\itshape, postheadspace=0.5em, name={\ignorespaces}, numbered=no, headpunct=.] 
{mystyle} \declaretheorem[style=mystyle]{namedthm*} \newcommand{\matn}{\ensuremath{\mathcal{N}}} \newcommand{\matx}{\ensuremath{\mathcal{X}}} \newcommand{\PP}{\ensuremath{\mathbb{P}}} \newcommand{\EE}{\ensuremath{\mathbb{E}}} \newcommand{\enc}{\mathrm{Enc}} \newcommand{\dec}{\mathrm{Dec}} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\TV}{TV} \newcommand{\Var}{\mathrm{Var}} \newcommand{\del}{\ensuremath{\partial}} \newcommand{\vol}{\ensuremath{\mathrm{vol}}} \def\eqdef{\stackrel{\scriptscriptstyle\triangle}{=}} \DeclareMathOperator{\kl}{{\scriptscriptstyle KL}} \DeclareMathOperator{\ID}{{\scriptscriptstyle ID}} \DeclareMathOperator{\SQ}{{\scriptscriptstyle SQ}} \newcommand{\idloss}{L_{\ID}} \newcommand{\sqloss}{L_{\SQ}} \DeclareMathOperator{\mult}{Mult} \newcommand{\Btdis}{\mathrm{Beta}} \newcommand{\eqlinebreak}{\ensuremath{\nonumber \\ & \quad \quad}} \newcommand{\eqlinebreakshort}{\ensuremath{\nonumber \\ & \quad \quad}} \newcommand{\eqstartshort}{\ensuremath{&}} \newcommand{\eqstartnonumshort}{\ensuremath{& \nonumber}} \newcommand{\eqbreakshort}{\ensuremath{ \\}} \newcommand{\ipp}{\mathrm{IP}} \newcommand{\arcsinh}{\ensuremath{\mathrm{ArcSinh}}} \newcommand{\annotate}[1]{\textcolor{red}{#1}} \def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}} \def\ypnb#1{\textcolor{red}{[\textbf{YP notes:} #1]}} \def\nbyp#1{\textcolor{red}{[\textbf{YP notes:} #1]}} \def\mreals{\mathbb{R}} \usepackage{blindtext} \newcommand{\dpfconst}{24} \newcommand{\shift}[2]{\ensuremath{T_{#2}(#1)}} \definecolor{olivedrab}{rgb}{0.42, 0.56, 0.14} \newcommand{\aviv}[1]{\marginnote{\textcolor{olivedrab}{Aviv says: #1}}} \definecolor{palatinatepurple}{rgb}{0.41, 0.16, 0.38} \newcommand{\jennifer}[1]{\marginnote{\textcolor{palatinatepurple}{Jennifer says: #1}}} \definecolor{princetonorange}{rgb}{1.0, 0.56, 0.0} \newcommand{\yury}[1]{{\reversemarginpar\marginnote{\textcolor{princetonorange}{Yury says: #1}}}} \usepackage{subfiles} 
\newif\iflong \longtrue \newcommand{\az}{\ensuremath{K}} \newcommand{\newvar}{\ensuremath{w}} \newcommand{\newVar}{\ensuremath{W}} \newcommand{\normvar}{\ensuremath{z}} \newcommand{\bnormvar}{\ensuremath{\boldsymbol{z}}} \newcommand{\normVar}{\ensuremath{Z}} \newcommand{\bnormVar}{\ensuremath{\boldsymbol{Z}}} \newcommand{\normset}{\ensuremath{\mathcal{Z}}} \newcommand{\rawvar}{\ensuremath{y}} \newcommand{\brawvar}{\ensuremath{\boldsymbol{y}}} \newcommand{\comp}{\ensuremath{f}} \newcommand{\compder}{\ensuremath{f'}} \newcommand{\compset}{\ensuremath{\cF}} \newcommand{\locloss}{\ensuremath{g}} \newcommand{\rawloss}{\ensuremath{\widetilde{\cL}}} \newcommand{\singleloss}{\ensuremath{\widetilde{L}}} \title{Efficient Representation of Large-Alphabet Probability Distributions{}\thanks{This work was supported in part by the NSF grant CCF-2131115 and sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. \indent This paper has supplementary downloadable material available at http://ieeexplore.ieee.org, provided by the authors. The material includes the appendices. Contact [email protected], [email protected], and [email protected] for further questions about this work. 
}} \author{\IEEEauthorblockN{Aviv Adler, Jennifer Tang, Yury Polyanskiy} \\ MIT EECS Department, Cambridge, MA, USA \\ [email protected], [email protected], [email protected] } \date{\today} \begin{document} \maketitle \begin{abstract} A number of engineering and scientific problems require representing and manipulating probability distributions over large alphabets, which we may think of as long vectors of reals summing to $1$. In some cases it is required to represent such a vector with only $b$ bits per entry. A natural choice is to partition the interval $[0,1]$ into $2^b$ uniform bins and quantize entries to each bin independently. We show that a minor modification of this procedure -- applying an entrywise non-linear function (compander) $f(x)$ prior to quantization -- yields an extremely effective quantization method. For example, for $b=8 (16)$ and $10^5$-sized alphabets, the quality of representation improves from a loss (under KL divergence) of $0.5 (0.1)$ bits/entry to $10^{-4} (10^{-9})$ bits/entry. Compared to floating point representations, our compander method improves the loss from $10^{-1}(10^{-6})$ to $10^{-4}(10^{-9})$ bits/entry. These numbers hold for both real-world data (word frequencies in books and DNA $k$-mer counts) and for synthetic randomly generated distributions. Theoretically, we analyze a minimax optimality criterion and show that the closed-form compander $f(x) ~\propto~ \arcsinh(\sqrt{c_\az (\az \log \az) x})$ is (asymptotically as $b\to\infty$) optimal for quantizing probability distributions over a $\az$-letter alphabet. Non-asymptotically, such a compander (substituting $1/2$ for $c_\az$ for simplicity) has KL-quantization loss bounded by $\leq 8\cdot 2^{-2b} \log^2 \az$. Interestingly, a similar minimax criterion for the quadratic loss on the hypercube shows optimality of the standard uniform quantizer. 
This suggests that the $\arcsinh$ quantizer is as fundamental for KL-distortion as the uniform quantizer for quadratic distortion. \end{abstract} \vspace{-0.5pc} \section{Compander Basics and Definitions} Consider the problem of \emph{quantizing} the probability simplex $\triangle_{\az-1} = \{\bx \in \bbR^\az : \bx \geq \bzero, \sum_i x_i = 1 \}$ of alphabet size $\az$,\footnote{While the alphabet has $\az$ letters, $\triangle_{\az-1}$ is $(\az-1)$-dimensional due to the constraint that the entries sum to $1$.} i.e. of finding a finite subset $\normset \subseteq \triangle_{\az-1}$ to represent the entire simplex. Each $\bx \in \triangle_{\az-1}$ is associated with some $\bnormvar = \bnormvar(\bx) \in \normset$, and the objective is to find a set $\normset$ and an assignment such that the difference between the values $\bx \in \triangle_{\az-1}$ and their representations $\bnormvar \in \normset$ is minimized; while this can be made arbitrarily small by making $\normset$ arbitrarily large, the goal is to do this efficiently for any given fixed size $|\normset| = M$. Since $\bx, \bnormvar \in \triangle_{\az-1}$, they both represent probability distributions over a size-$\az$ alphabet. Hence, a natural way to measure the quality of the quantization is to use the KL (Kullback-Leibler) divergence $D_{\kl}(\bx \| \bnormvar)$, which corresponds to the excess code length for lossless compression and is commonly used as a way to compare probability distributions. (Note that we want to minimize the KL divergence.) While one can consider how to best represent the vector $\bx$ as a whole, in this paper we consider only \emph{scalar quantization} methods in which each element $x_j$ of $\bx$ is handled separately, since we showed in \cite{adler_ratedistortion_2021} that for Dirichlet priors on the simplex, methods using scalar quantization perform nearly as well as optimal vector quantization.
Scalar quantization is also typically simpler and faster to use, and can be parallelized easily. Our scalar quantizer is based on \emph{companders} (portmanteau of `compressor' and `expander'), a simple, powerful and flexible technique first explored by Bennett in 1948 \cite{bennett1948} in which the value $x_j$ is passed through a nonlinear function $f$ before being uniformly quantized. We discuss the background in greater depth in \Cref{sec::previous_works}. In what follows, $\log$ is always base-$e$ unless otherwise specified. We denote $[N] := \{1,\dots, N\}$. \subsubsection{Encoding} Companders require two things: a monotonically increasing\footnote{We require increasing functions as a convention, so larger $x_i$ map to larger values in $[N]$. Note that $\comp$ does \emph{not} need to be \emph{strictly} increasing; if $f$ is flat over interval $I \subseteq [0,1]$ then all $x_i \in I$ will always be encoded by the same value. This is useful if no $x_i$ in $I$ ever occurs, i.e. $I$ has zero probability mass under the prior.} function $\comp:[0,1] \to [0, 1]$ (we denote the set of such functions as $\compset$) and an integer $N$ representing the number of quantization levels, or \emph{granularity}. To simplify the problem and algorithm, we use the same $\comp$ for each element of the vector $\bx = (x_1, \dots, x_\az) \in \triangle_{\az-1}$ (see \Cref{rmk::symmetric-distribution}). To quantize $x \in [0, 1]$, the compander computes $\comp(x)$ and applies a uniform quantizer with $N$ levels, i.e. encoding $x$ to $n_N(x) \in [N]$ if $\comp(x) \in (\frac{n-1}{N}, \frac{n}{N}]$; this is equivalent to $n_N(x) = \lceil \comp(x) N \rceil$. This encoding partitions $[0,1]$ into \emph{bins} $I^{(n)}$: \begin{align}\label{eq::bins} x \in I^{(n)} = \comp^{-1} \Big(\Big(\frac{n-1}{N}, \frac{n}{N}\Big] \Big) \iff n_N(x) = n \end{align} where $\comp^{-1}$ denotes the preimage under $f$. As an example, consider the function $f(x) = x^s$. 
Varying $s$ gives a natural class of functions from $[0,1]$ to $[0,1]$, which we call the class of \emph{power companders}. If we select $s = 1/2$ and $N = 4$, then the $4$ bins created by this encoding are \begin{align} I^{(1)} &= (0, 1/16], I^{(2)} = (1/16, 1/4], \\ I^{(3)} &= (1/4, 9/16], I^{(4)} = (9/16, 1]\,. \end{align} \subsubsection{Decoding} \label{sec::decoding} To decode $n \in [N]$, we pick some $\rawvar_{(n)} \in I^{(n)}$ to represent all $x \in I^{(n)}$; for a given $x$ (at granularity $N$), its representation is denoted $\rawvar(x) = \rawvar_{(n_N(x))}$. This is generally either the \emph{midpoint} of the bin or, if $x$ is drawn randomly from a known prior\footnote{Priors on $\triangle_{\az-1}$ induce priors over $[0,1]$ for each entry.} $p$, the \emph{centroid} (the mean within bin $I^{(n)}$). The midpoint and centroid of $I^{(n)}$ are defined, respectively, as \begin{align} \bar{y}_{(n)} &= {1\over2} \left(\comp^{-1}\left(\frac{n-1}{N}\right) + \comp^{-1}\left(\frac{n}{N}\right)\right)\\ \widetilde{y}_{(n)} &= \bbE_{X \sim p} [X \, | \, X \in I^{(n)}] \,. \end{align} We will discuss this in greater detail in \Cref{sec::x-from-prior}. Handling each element of $\bx$ separately means the decoded values may not sum to $1$, so we normalize the vector after decoding. Thus, if $\bx$ is the input, \begin{align}\label{eq::norm_step} \normvar_i(\bx) = \frac{\rawvar(x_i)}{\sum_{j = 1}^\az \rawvar(x_j)} \end{align} and the vector $\bnormvar = \bnormvar(\bx) = (\normvar_1(\bx), \dots, \normvar_\az(\bx)) \in \triangle_{\az-1}$ is the output of the compander. This notation reflects the fact that each entry of the normalized reconstruction depends on all of $\bx$ due to the normalization step. We refer to $\brawvar = \brawvar(\bx) = (\rawvar(x_1), \dots, \rawvar(x_\az))$ as the \emph{raw} reconstruction of $\bx$, and $\bnormvar$ as the \emph{normalized} reconstruction. 
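The encode/decode pipeline above is simple enough to sketch directly. The following minimal illustration (function names are ours, not from the paper) uses the power compander $f(x) = x^{1/2}$ with $N = 4$, reproducing the four bins listed above, and then applies midpoint decoding followed by the normalization step \eqref{eq::norm_step}:

```python
import math

def encode(x, f, N):
    # n_N(x) = ceil(f(x) * N), clamped to 1 so that x = 0 lands in the first bin
    return max(1, math.ceil(f(x) * N))

def bin_interval(n, f_inv, N):
    # I^(n) = f^{-1}(((n-1)/N, n/N]) for an invertible compander f
    return (f_inv((n - 1) / N), f_inv(n / N))

def decode_midpoint(n, f_inv, N):
    # midpoint decoding: average of the bin endpoints
    lo, hi = bin_interval(n, f_inv, N)
    return (lo + hi) / 2

def quantize_simplex(x_vec, f, f_inv, N):
    # quantize each entry separately, then renormalize to return to the simplex
    y = [decode_midpoint(encode(x, f, N), f_inv, N) for x in x_vec]
    s = sum(y)
    return [yi / s for yi in y]

# power compander f(x) = x^{1/2} with N = 4:
# bins (0,1/16], (1/16,1/4], (1/4,9/16], (9/16,1]
f, f_inv = (lambda x: math.sqrt(x)), (lambda u: u * u)
print([bin_interval(n, f_inv, 4) for n in (1, 2, 3, 4)])
print(quantize_simplex([0.7, 0.2, 0.06, 0.04], f, f_inv, 4))
```

Midpoint decoding is used here purely for simplicity; centroid decoding would additionally require the single-letter prior.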
If the raw reconstruction uses centroid decoding, we likewise denote it using $\widetilde{\by} = \widetilde{\by}(\bx) = (\widetilde{y}(x_1), \dots, \widetilde{y}(x_\az))$. For brevity we may sometimes drop the $\bx$ input in the notation, e.g. $\bz := \bz(\bx)$; if $\bX$ is random we will sometimes denote its quantization as $\bZ := \bz(\bX)$. Thus, any $\bx \in \triangle_{\az-1}$ requires $\az \lceil \log_2 N \rceil$ bits to store; to encode and decode, only $\comp$ and $N$ need to be stored (as well as the prior if using centroid decoding). Another major advantage is that a single $\comp$ can work well over many or all choices of $N$, making the design more flexible. \subsubsection{KL divergence loss} The loss incurred by representing $\bx$ as $\bnormvar := \bnormvar(\bx)$ is the KL divergence \begin{align}\label{eq::kl_loss_norm} D_{\kl}(\bx\| \bnormvar) = \sum_{i=1}^{\az} x_i \log \frac{x_i}{\normvar_i} \,. \end{align} Although this loss function has some unusual properties (for instance $D_{\kl}(\bx \| \bnormvar) \neq D_{\kl}(\bnormvar \| \bx)$ and it does not obey the triangle inequality), it measures the amount of `mis-representation' created by representing the probability vector $\bx$ by another probability vector $\bnormvar$, and is hence a natural quantity to minimize. In particular, it represents the excess code length created by trying to encode the output of a source distributed according to $\bx$ using a code built for $\bnormvar$, as well as having connections to hypothesis testing (a natural setting in which the `difference' between probability distributions is studied). \subsubsection{Distributions from a prior} \label{sec::x-from-prior} Much of our work concerns the case where $\bx \in \triangle_{\az-1}$ is drawn from some prior $P_\bx$ (to be commonly denoted as simply $P$). Using a single $\comp$ for each entry means we can WLOG assume that $P$ is symmetric over the alphabet, i.e. for any permutation $\sigma$, if $\bX \sim P$ then $\sigma(\bX) \sim P$ as well.
This is because for any prior $P$ over $\triangle_{\az-1}$, there is a symmetric prior $P'$ such that \begin{align} \bbE_{\bX \sim P} [D_{\kl}(\bX \| \bnormvar(\bX))] \hspace{-0.2pc} = \hspace{-0.2pc} \bbE_{\bX' \sim P'} [D_{\kl}(\bX' \| \bnormvar(\bX'))] \end{align} for all $f$, where $\bnormvar(\bX)$ is the result of quantizing (to any number of levels) with $f$ as the compander. To get $\bX' \sim P'$, generate $\bX \sim P$ and a uniformly random permutation $\sigma$, and let $\bX' = \sigma(\bX)$. We denote the set of symmetric priors as $\cP^\triangle_\az$. Note that a key property of symmetric priors is that their marginal distributions are the same across all entries, and hence we can speak of $P \in \cP^\triangle_\az$ having a single marginal $p$. \begin{remark} \label{rmk::symmetric-distribution} In principle, given a nonsymmetric prior $P_\bx$ over $\triangle_{\az-1}$ with marginals $p_1, \dots, p_\az$, we could quantize each letter's value with a different compander $f_1, \dots, f_\az$, giving more accuracy than using a single $f$ (at the cost of higher complexity). However, the symmetrization of $P_\bx$ over the letters (by permuting the indices randomly after generating $\bX \sim P_\bx$) yields a prior in $\cP^\triangle_\az$ on which any single $f$ will have the same (overall) performance and cannot be improved on by using varying $f_i$. Thus, considering symmetric $P_\bx$ suffices to derive our minimax compander. \end{remark} While the random probability vector comes from a prior $P \in \cP^\triangle_\az$, our analysis will rely on decomposing the loss so we can deal with one letter at a time. Hence, we work with the marginals $p$ of $P$ (which are identical since $P$ is symmetric), which we refer to as \emph{single-letter distributions} and are probability distributions over $[0,1]$. We let $\cP$ denote the class of probability distributions over $[0,1]$ that are absolutely continuous with respect to the Lebesgue measure. 
We denote elements of $\cP$ by their probability density functions (PDF), e.g. $p \in \cP$; the cumulative distribution function (CDF) associated with $p$ is denoted $F_p$ and satisfies $F'_p(x) = p(x)$ and $F_p(x) = \int_0^x p(t) \, dt$ (since $F_p$ is monotonic, its derivative exists almost everywhere). Note that while $p \in \cP$ does not have to be continuous, its CDF $F_p$ must be absolutely continuous. Following common terminology~\cite{grimmett2001}, we refer to such probability distributions as \emph{continuous}. Let $\cP_{1/\az} = \{p \in \cP : \bbE_{X\sim p}[X] = 1/\az\}$. Note that $P \in \cP^\triangle_\az$ implies its marginals $p$ are in $\cP_{1/\az}$. \subsubsection{Expected loss and preliminary results} For $P \in \cP^\triangle_\az$, $\comp \in \compset$ and granularity $N$, we define the \emph{expected loss}: \begin{equation}\label{eq::def_loss} \cL_\az(P, \comp, N) = \bbE_{\bX \sim P}[D_{\kl}(\bX \| \bnormvar(\bX))]\,. \end{equation} This is the value we want to minimize over $\comp$. \begin{remark} While $\bX$ and $\bnormvar(\bX)$ are random, they are also probability vectors. The KL divergence $D_{\kl}(\bX \| \bnormvar(\bX))$ is the divergence between $\bX$ and $\bnormvar(\bX)$ themselves, not the prior distributions over $\triangle_{\az-1}$ they are drawn from. \end{remark} Note that $\cL_\az(P,\comp,N)$ can almost be decomposed into a sum of $\az$ separate expected values, except the normalization step \eqref{eq::norm_step} depends on the random vector $\bX$ as a whole. Hence, we define the \emph{raw loss}: \begin{align} \label{eq::raw-loss} \rawloss_\az(P, \comp, N) \hspace{-0.2pc} = \hspace{-0.2pc} \bbE_{\bX \sim P}\Big[\sum_{i=1}^\az X_i \log(X_i/\widetilde{y}(X_i))\Big]\,. \end{align} We also define for $p \in \cP$, the \emph{single-letter loss} as \begin{align} \label{eq::raw-ssl} \singleloss(p, \comp, N) = \bbE_{X \sim p} \big[ X \log ( X/\widetilde{y}(X)) \big]\,. 
\end{align} The raw loss is useful because it bounds the (normalized) expected loss and is decomposable into single-letter losses. Note that both raw and single-letter loss are defined with centroid decoding. \begin{proposition}\label{lem::im-a-barby-girl} For $P \in \cP^\triangle_\az$ with marginals $p$, \begin{align} \cL_\az(P, \comp, N) \leq \rawloss_\az(P, \comp, N) = \az \, \singleloss(p,\comp,N)\,. \end{align} \end{proposition} \iflong\begin{proof} Separating out the normalization term gives \begin{align} \cL \eqstartnonumshort (P, \comp, N) = \bbE_{\bX \sim P} [D_{\kl}(\bX || \bnormvar(\bX))] \\ &= \rawloss_\az(P, \comp, N) + \bbE_{\bX \sim P} \left[ \log \left( \sum_{i=1}^\az \widetilde{y}(X_i) \right)\right] \,. \end{align} Since $\bbE[\widetilde{y}(X_i)] = \bbE[X_i]$ for all $i$, $\sum_{i = 1}^\az \bbE[\widetilde{y}(X_i)] =\sum_{i = 1}^\az \bbE[{X}_i] = 1 $. Because $\log$ is concave, by Jensen's Inequality \begin{align} \bbE_{\bX \sim P} \bigg[\log \Big( \sum_{i=1}^\az \widetilde{y}(X_i) \Big)\bigg] &\leq \log \Big( \bbE \Big[\sum_{i=1}^\az \widetilde{y}(X_i)\Big] \Big) \\&= \log(1) = 0 \end{align} and we are done.\footnote{An upper bound similar to \Cref{lem::im-a-barby-girl} can be found in \cite[Lemma 1]{benyishai2021}.} \end{proof} To derive our results about worst-case priors (for instance, \Cref{thm::minimax_compander}), we will also be interested in $\singleloss(p,\comp,N)$ even when $p$ is not known to be a marginal of some $P \in \cP^\triangle_\az$. \begin{remark} \label{rmk::centroid-needed} Though one can define raw and single-letter loss without centroid decoding (replacing $\widetilde{y}$ in \eqref{eq::raw-loss} or \eqref{eq::raw-ssl} with another decoding method $\widehat{y}$), this removes much of their usefulness. 
This is because the resulting expected loss can be dominated by the difference between $\bbE[X]$ and $\bbE[\widehat{y}(X)]$, potentially even making it negative; specifically, the Taylor expansion of $X \log(X/\widehat{y}(X))$ has $X - \widehat{y}(X)$ in its first term, which can have negative expectation. While this can make the expected `raw loss' negative under general decoding, it cannot be exploited to make the (normalized) expected loss negative because the normalization step $\normvar_i(\bX) = \widehat{y}(X_i)/\sum_j \widehat{y}(X_j)$ cancels out the problematic term. Centroid decoding avoids this problem by ensuring $\bbE[X] = \bbE[\widetilde{y}(X)]$, removing the issue. \end{remark} As we will show, when $N$ is large these values are roughly proportional to $N^{-2}$ (for well-chosen $\comp$) and so we define the \emph{asymptotic single-letter loss}: \begin{align} \label{eq::raw-assl} \singleloss(p,\comp) = \lim_{N \to \infty} N^2 \singleloss(p,\comp,N)\,. \end{align} We similarly define $\rawloss_\az(P,\comp)$ and $\cL_\az(P,\comp)$. While the limit in \eqref{eq::raw-assl} does not necessarily exist for every $p, \comp$, we will show that one can ensure it exists by choosing an appropriate $\comp$ (which works against any $p \in \cP$), and cannot gain much by not doing so. \section{Results} \label{sec::main-theorems} We demonstrate, theoretically and experimentally, the efficacy of companding for quantizing probability distributions with KL divergence loss. \subsection{Theoretical Results} \label{sec::theoretical-results} While we will occasionally give intuition for how the results here are derived, our primary concern in this section is to fully state the results and to build a clear framework for discussing them. 
Our main results concern the formulation and evaluation of a \emph{minimax compander} $\comp^*_\az$ for alphabet size $\az$, which satisfies \begin{align} \label{eq::minimax-condition} \comp^*_\az = \underset{\comp \, \in \, \compset}{\argmin} \underset{p \, \in \, \cP_{1/\az}}{\sup} \widetilde{L}(p,\comp) \,. \end{align} We require $p \in \cP_{1/\az}$ because if $P \in \cP^\triangle_\az$ and is symmetric, its marginals are in $\cP_{1/\az}$. The natural counterpart of the minimax compander $\comp^*_\az$ is the \emph{maximin density} $p^*_\az \in \cP_{1/\az}$, satisfying \begin{align} \label{eq::maximin-condition} p^*_\az = \underset{p \, \in \, \cP_{1/\az}}{\argmax} \underset{\comp \, \in \, \compset}{\inf} \widetilde{L}(p,\comp) \,. \end{align} We call \eqref{eq::minimax-condition} and \eqref{eq::maximin-condition}, respectively, the \emph{minimax condition} and the \emph{maximin condition}. In the same way that the minimax compander gives the best performance guarantee against an unknown single-letter prior $p \in \cP_{1/\az}$ (asymptotic as $N \to \infty$), the maximin density is the most difficult prior to quantize effectively as $N \to \infty$. Since they are highly related, we will define them together: \begin{proposition} \label{prop::maximin-density} For alphabet size $\az > 4$, there is a unique $c_{\az} \in [\frac{1}{4}, \frac{3}{4}]$ such that if $a_{\az} = (4/(c_{\az} \az \log \az + 1))^{1/3}$ and $b_{\az} = 4/a_{\az}^2 - a_{\az}$, then the following density is in $\cP_{1/\az}$: \begin{align} &p^*_{\az}(x) = (a_{\az} x^{1/3} + b_{\az} x^{4/3})^{-3/2} \label{eq::maximin-density}\,. \end{align} Furthermore, $\lim_{\az \to \infty} c_{\az} = 1/2$. \end{proposition} Note that this is both a result and a definition: we show that $a_\az, b_\az, c_\az$ exist which make the definition of $p^*_\az$ possible. 
With the constant $c_\az$, we define the minimax compander: \begin{definition} \label{def::minimax-compander} Given the constant $c_\az$ as shown to exist in \Cref{prop::maximin-density}, the \emph{minimax compander} is the function $f^*_\az : [0,1] \to [0,1]$ where \begin{align}\label{eq::minimax-compander} \comp^*_\az(x) = \frac{\arcsinh(\sqrt{c_\az (\az \log \az) \, x})}{\arcsinh(\sqrt{c_\az \az \log \az})}\,. \end{align} The \emph{approximate minimax compander} $f^{**}_\az$ is \begin{align} \label{eq::appx-minimax-compander} \comp^{**}_\az(x) = \frac{\arcsinh(\sqrt{(1/2) (\az \log \az) \, x})}{\arcsinh(\sqrt{(1/2) \az \log \az})}\,. \end{align} \end{definition} \begin{remark} \label{rmk::minimax-is-closed-form} While $\comp^*_\az$ and $\comp^{**}_\az$ might seem complex, $ \arcsinh(\sqrt{\newvar}) = \log(\sqrt{\newvar} + \sqrt{\newvar+1}) $ so they are relatively simple functions to work with. \end{remark} We will show that $f^*_\az, p^*_\az$ as defined above satisfy their respective conditions \eqref{eq::minimax-condition} and \eqref{eq::maximin-condition}: \begin{theorem}\label{thm::minimax_compander} The minimax compander $\comp^*_\az$ and maximin single-letter density $p^*_\az$ satisfy \begin{align} &\sup_{p \in \cP_{1/\az}} \singleloss(p,\comp^*_\az) = \inf_{\comp \in \compset} \sup_{p \in \cP_{1/\az}} \singleloss(p,\comp) \label{eq::minmax} \\ = & \sup_{p \in \cP_{1/\az}} \inf_{\comp \in \compset} \singleloss(p,\comp) = \inf_{\comp \in \compset} \singleloss(p^*_\az, \comp) \label{eq::maxmin} \end{align} which is equal to $\singleloss(p^*_\az, \comp^*_\az)$ and satisfies \begin{align} \label{eq::raw_loss_saddle} \singleloss(p^*_\az, \comp^*_\az) = \frac{1}{24} (1 + o(1)) \az^{-1}\log^2 \az. 
\end{align} \end{theorem} Since any symmetric $P \in \cP^\triangle_\az$ has marginals $p \in \cP_{1/\az}$, this (with \Cref{lem::im-a-barby-girl}) implies an important corollary for the normalized KL-divergence loss incurred by using the minimax compander: \begin{corollary}\label{cor::worstcase_prior} For any prior $P \in \cP^{\triangle}_\az$, \begin{align} \cL_\az(P,\comp^*_\az) \leq \rawloss_\az(P,\comp^*_\az) = \frac{1}{24} (1 + o(1))\log^2 \az \,. \end{align} \end{corollary} However, the set of symmetric $P \in \cP^\triangle_\az$ does not correspond exactly with $p \in \cP_{1/\az}$: while any symmetric $P \in \cP^\triangle_\az$ has marginals $p \in \cP_{1/\az}$, it is not true that any given $p \in \cP_{1/\az}$ has a corresponding symmetric prior $P \in \cP^\triangle_\az$. Thus, it is natural to ask: can the minimax compander's performance be improved by somehow taking these `shape' constraints into account? The answer is `not by more than a factor of $\approx 2$': \begin{proposition}\label{prop::bound_worstcase_prior_exist} There is a prior $P^* \in \cP^{\triangle}_\az$ such that for any $P \in \cP^\triangle_\az$ \begin{align}\label{eq::bound_worstcase_prior} \inf_{\comp \in \compset} \rawloss_\az(P^*, \comp) \geq \frac{\az - 1}{2\az} \rawloss_\az(P, \comp^*_\az) \,. \end{align} \end{proposition} While the minimax compander satisfies the minimax condition \eqref{eq::minimax-condition}, it requires working with the constant $c_\az$, which, while bounded, is tricky to compute or use exactly. Hence, in practice we advocate using the \emph{approximate minimax compander} \eqref{eq::appx-minimax-compander}, which yields very similar asymptotic performance without needing to know $c_\az$: \begin{proposition} \label{thm::approximate-minimax-compander} Suppose that $\az$ is sufficiently large so that $c_\az \in [\frac{1}{2 (1 + \varepsilon)}, \frac{1 + \varepsilon}{2}]$. 
Then for any $p \in \cP$, \begin{align} \singleloss(p,\comp^{**}_\az) \leq (1+ \varepsilon) \singleloss(p,\comp^*_\az)\,. \end{align} \end{proposition} Before we show how we get \Cref{thm::minimax_compander}, we make the following points: \begin{remark}\label{rmk::loss_with_uniform} If we use the uniform quantizer instead of the minimax compander, there exists a $P \in \cP^\triangle_\az$ where \begin{align}\label{eq::uniform_achieve} \bbE_{\bX \sim P}[D_{\kl}(\bX \| \bnormVar)] = \Theta\left(\az^2 N^{-2} \log N \right)\,. \end{align} This is achieved by using a marginal density $p$ uniform on $[0,2/\az]$. To get a prior $P \in \cP^\triangle_\az$ with these marginals, if $\az$ is even, we can pair up indices so that $x_{2j-1} = 2/\az - x_{2j}$ for all $j = 1, \dots, \az/2$ (for odd $\az$, set $x_\az = 1/\az$) and then symmetrize by permuting the indices. See \Cref{sec::uniform} for more details. The dependence on $N$ is worse than $N^{-2}$, resulting in $\widetilde{L}(p,f) = \infty$; this shows the theoretical suboptimality of the uniform quantizer. Note also that the quadratic dependence on $\az$ is significantly worse than the $\log^2 \az$ dependence achieved by the minimax compander. Incidentally, other single-letter priors such as $p(x) = (1-\alpha)x^{-\alpha}$ where $\alpha = \frac{\az-2}{\az-1}$ can achieve worse dependence on $N$ (specifically, $N^{-(2-\alpha)}$ for this prior). However, the example above achieves a bad dependence on both $N$ and $\az$ simultaneously, showing that in all regimes of $\az, N$ the uniform quantizer is vulnerable to bad priors. \end{remark} \begin{remark} Instead of the KL divergence loss on the simplex, we can do a similar analysis to find the minimax compander for $L_2^2$ loss on the unit hypercube. The solution is given by the identity function $\comp(x)=x$ corresponding to the standard (non-companded) uniform quantization. (See \Cref{sec::other_losses}.) 
\end{remark} To show \Cref{thm::minimax_compander} we formulate and show a number of intermediate results which are also of significant interest for a theoretical understanding of companding under KL divergence, in particular studying the asymptotic behavior of $\widetilde{L}(p,f,N)$ as $N \to \infty$. We define: \begin{definition} For $p \in \cP$ and $\comp \in \compset$, let \begin{align} L^\dagger(p,\comp) &= \frac{1}{24} \int_0^1 p(x) \compder(x)^{-2} x^{-1} \, dx \\ &= \bbE_{X \sim p}\Big[\frac{1}{24}\compder(X)^{-2} X^{-1}\Big] \label{eq::raw_loss} \,. \end{align} \end{definition} For full rigor, we also need to define a set of `well-behaved' companders: \begin{definition} Let $\compset^\dagger \subseteq \compset$ be the set of $\comp$ for which there exist constants $c > 0$ and $\alpha \in (0,1/2]$ such that $\comp(x) - c x^{\alpha}$ is still monotonically increasing. \end{definition} Then the following describes the asymptotic single-letter loss of compander $f$ on prior $p$ (with centroid decoding): \begin{theorem} \label{thm::asymptotic-normalized-expdiv} For any $p \in \cP$ and $\comp \in \compset$, \begin{align} \liminf_{N \to \infty} N^2 \singleloss(p,\comp,N) \geq L^\dagger(p,\comp) \,. \label{eq::fatou-bound} \end{align} Furthermore, if $\comp \in \compset^\dagger$ then an exact result holds: \begin{align} \singleloss(p,\comp) &= L^\dagger(p,\comp) < \infty \label{eq::norm_loss} \,. \end{align} \end{theorem} The intuition behind the formula for $L^\dagger(p,f)$ is that as $N \to \infty$, the density $p$ becomes roughly uniform within each bin $I^{(n)}$. Additionally, the bin containing a given $x \in [0,1]$ will have width $r_{(n)} \approx N^{-1} \compder(x)^{-1}$. 
Then, letting $\unif_{I^{(n)}}$ be the uniform distribution over $I^{(n)}$ and $\bar{y}_{(n)} \approx x$ be the midpoint of $I^{(n)}$ (which is also the centroid under the uniform distribution), we apply the approximation \begin{align} \bbE_{X \sim \unif_{I^{(n)}}}[X \log(X/\bar{y}_{(n)})] &\approx \frac{1}{24} r_{(n)}^2 \bar{y}_{(n)}^{-1} \\ &\approx \frac{1}{24} N^{-2} \compder(x)^{-2} x^{-1} \,. \end{align} Averaging over $X \sim p$ and multiplying by $N^2$ then gives \eqref{eq::raw_loss}. One wrinkle is that we need to use the Dominated Convergence Theorem to get the exact result \eqref{eq::norm_loss}, but we cannot necessarily apply it for all $\comp \in \compset$; instead, we can apply it for all $\comp \in \compset^\dagger$, and outside of $\compset^\dagger$ we get \eqref{eq::fatou-bound} using Fatou's Lemma. While limiting ourselves to $\comp \in \compset^\dagger$ might seem like a serious restriction, it does not lose anything essential because $\compset^\dagger$ is `dense' within $\compset$ in the following way: \begin{proposition} \label{prop::approximate-compander} For any $\comp \in \compset$ and $\delta \in (0,1]$, \begin{align} \comp_\delta (x) = (1-\delta) \comp(x) + \delta x^{1/2} \label{eq::approximate-compander} \end{align} satisfies $\comp_\delta \in \compset^\dagger$ and \begin{align} \lim_{\delta \to 0} \singleloss(p,\comp_\delta) = \lim_{\delta \to 0} L^\dagger(p,\comp_\delta) = L^\dagger(p,\comp) \label{eq::approximate-optimal-compander}\,. \end{align} \end{proposition} \begin{remark} It is important to note that strictly speaking the limit represented by $\widetilde{L}(p,\comp)$ may not always exist if $\comp \not \in \cF^\dagger$. 
However: (i) one can always guarantee that it exists by selecting $\comp \in \compset^\dagger$; (ii) by \eqref{eq::fatou-bound}, it is impossible to use $f$ outside $\compset^\dagger$ to get asymptotic performance better than $L^\dagger(p,\comp)$; and (iii) by \Cref{prop::approximate-compander}, given $f$ outside $\compset^\dagger$, one can get a compander in $\compset^\dagger$ whose performance is arbitrarily close to (or better than) that of $\comp$ by using $\comp_\delta(x) = (1-\delta)\comp(x) + \delta x^{1/2}$ for $\delta$ close to $0$. This suggests that considering only $\comp \in \compset^\dagger$ is sufficient since there is no real way to benefit by using $\comp \not \in \compset^\dagger$. Additionally, both $\comp^*_\az$ and $\comp^{**}_\az$ are in $\compset^\dagger$. Thus, in \Cref{thm::minimax_compander}, although the limit might not exist for certain $\comp \in \compset, p \in \cP_{1/\az}$, the minimax compander still performs better since it has less loss than even the $\liminf$ of the loss of other companders. \end{remark} Given \Cref{thm::asymptotic-normalized-expdiv}, it is natural to ask: for a given $p \in \cP$, what compander $f$ minimizes $L^\dagger(p,f)$? This yields the following by calculus of variations: \begin{theorem} \label{thm::optimal_compander_loss} The best loss against source $p \in \cP$ is \begin{align} \hspace{-0.75pc} \inf_{\comp \in \compset} \singleloss(p,\comp) &= \min_{\comp \in \compset} L^\dagger(p,\comp) \\ &= \frac{1}{24} \Big(\int_0^1 (p(x)x^{-1})^{1/3} dx\Big)^3 \label{eq::raw_overall_dist} \end{align} where the \emph{optimal compander against $p$} is \begin{align} &\comp_p(x) = \underset{\comp \in \compset}{\argmin} L^\dagger(p,\comp) = \frac{\int_0^x (p(t)t^{-1})^{1/3} \, dt}{\int_0^1 (p(t)t^{-1})^{1/3} \, dt} \label{eq::best_f_raw} \end{align} (satisfying $\compder_p(x) \, \propto \, (p(x) x^{-1})^{1/3}$). 
\end{theorem} Note that $f_p$ may not be in $\compset^\dagger$ (for instance, if $p$ assigns zero probability mass to an interval $I \subseteq [0,1]$, then $f_p$ will be constant over $I$). However, this can be corrected by taking a convex combination with $x^{1/2}$ as described in \Cref{prop::approximate-compander}. The expression \eqref{eq::raw_overall_dist} represents in a sense how hard $p \in \cP$ is to quantize with a compander, and the maximin density $p^*_\az$ is the density in $\cP_{1/\az}$ which maximizes it;\footnote{The maximizing density over all $p \in \cP$ happens to be $p(x) = \frac{1}{2} x^{-1/2}$; however, $\bbE_{X \sim p}[X] = 1/3$ so it cannot be the marginal of any symmetric $P \in \cP^\triangle_\az$ when $\az > 3$.} in turn, the minimax compander $f^*_\az$ is the optimal compander against $p^*_\az$, i.e. \begin{align} f^*_\az = f_{p^*_\az} \,. \end{align} So far we have considered quantization of a random probability vector with a known prior. We next consider the case where the quantization guarantee is given pointwise, i.e. we cover $\triangle_{\az-1}$ with a finite number of KL divergence balls of fixed radius. Note that since the prior is unknown, only the midpoint decoder can be used. \begin{theorem}[Divergence covering] \label{thm::worstcase_power_minimax} For alphabet size $\az > 4$ and $N \geq 8 \log(2\sqrt{\az \log \az} + 1)$ intervals, the minimax and approximate minimax companders with midpoint decoding achieve \emph{worst-case loss} over $\triangle_{\az-1}$ of \begin{align} \max_{\bx \in \triangle_{\az-1}}D_{\kl}(\bx\|\bnormvar) \leq (1 + \mathrm{err}(\az)) N^{-2} \log^2 \az \end{align} where $\mathrm{err}(\az)$ is an error term satisfying \begin{align} \mathrm{err}(\az) \leq 18 \frac{\log \log \az}{\log \az} \leq 7 \text{ when } \az > 4 \,. \end{align} \end{theorem} Note that the non-asymptotic worst-case bound matches (up to a constant factor) the known-prior asymptotic result~\eqref{eq::raw_loss_saddle}. 
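A small numerical sketch of this worst-case guarantee (illustrative only; the test distribution is an arbitrary random draw, $\mathrm{err}(\az) \leq 7$ is used for the bound, and midpoint decoding means decoding to the midpoint of the bin in the original domain):

```python
import math
import random

def f_star2(x, K):
    # Approximate minimax compander, eq. (eq::appx-minimax-compander)
    s = 0.5 * K * math.log(K)
    return math.asinh(math.sqrt(s * x)) / math.asinh(math.sqrt(s))

def f_star2_inv(u, K):
    # Closed-form inverse: asinh(sqrt(s*y)) = u * asinh(sqrt(s))
    s = 0.5 * K * math.log(K)
    return math.sinh(u * math.asinh(math.sqrt(s))) ** 2 / s

def quantize_simplex(p, N, K):
    # Entrywise: compand, keep one of N uniform bins, decode to the
    # midpoint of that bin (in the original domain), then renormalize.
    y = []
    for x in p:
        n = min(int(f_star2(x, K) * N), N - 1)
        y.append(0.5 * (f_star2_inv(n / N, K) + f_star2_inv((n + 1) / N, K)))
    t = sum(y)
    return [v / t for v in y]

def kl(p, q):
    return sum(x * math.log(x / z) for x, z in zip(p, q) if x > 0)

# One random distribution; K = 100, N = 256 satisfies the theorem's
# requirement N >= 8 log(2 sqrt(K log K) + 1) (about 31 here).
random.seed(1)
K, N = 100, 256
draws = [random.expovariate(1.0) for _ in range(K)]
total = sum(draws)
p = [d / total for d in draws]
loss = kl(p, quantize_simplex(p, N, K))
```

The observed loss sits far below the guaranteed ceiling $8 N^{-2} \log^2 \az$, as the worst-case bound is not tight for typical inputs.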
We remark that the condition on $N$ is mild: for example, if $N = 256$ (i.e. we are representing the probability vector with $8$ bits per entry), then $N > 8 \log(2\sqrt{\az \log \az}+1)$ for all $\az \leq 2.6 \times 10^{25}$. \begin{remark} When $b$ is the number of bits used to quantize each value in the probability vector, using the approximate minimax compander yields a worst-case loss on the order of $2^{-2b} \log ^2 \az$. In~\cite{phdthesis} we prove bounds on the optimal loss under arbitrary (vector) quantization of probability vectors and show that this loss is sandwiched between $2^{-2 b\frac{\az}{\az - 1}}$ (\cite[Proposition 2]{phdthesis}) and $2^{-2 b\frac{\az}{\az - 1}} \log \az$ (\cite[Theorem 2]{phdthesis}). Thus, the entrywise companders in this work are quite competitive. \end{remark} We also consider the natural family of \emph{power companders} $f(x)=x^s$, both in terms of average asymptotic raw loss and worst-case non-asymptotic normalized loss. By definition, $f \in \compset^\dagger$ and hence $\widetilde{L}(p,f)$ is well-defined and \Cref{thm::asymptotic-normalized-expdiv} applies. \begin{theorem}\label{thm::power_compander_results} The power compander $f(x) = x^s$ with exponent $s \in (0,1/2]$ has asymptotic loss \begin{align} \underset{p \in \cP_{1/\az}} \sup \widetilde{L}(p,f) = \frac{1}{24} s^{-2} \az^{2s-1}\label{eq::power_loss_s}\,. \end{align} For $\az > 7$, \eqref{eq::power_loss_s} is minimized by setting $s = \frac{1}{\log \az}$ (when $\az \leq 7$, $\frac{1}{\log \az} > 1/2$) and $f(x) = x^s$ achieves \begin{align} \underset{p \in \cP_{1/\az}} \sup \widetilde{L}(p,f) &= \frac{e^2}{24} \frac{1}{\az} \log^2 \az \\ \text{and }~~ \underset{P \in \cP^\triangle_\az} \sup \widetilde{\cL}(P,f) &= \frac{e^2}{24} \log^2 \az\,. 
\end{align} Additionally, when $s = \frac{1}{\log \az}$, it achieves the following worst-case bound with midpoint decoding for $\az > 7$ and $N > \frac{e}{2} \log \az$: \begin{align} \max_{\bx \in \triangle_{\az-1}} \hspace{-0.4pc} D_{\kl}(\bx\|\bnormvar) \hspace{-0.2pc} &\leq \hspace{-0.2pc} (1 + \mathrm{err}(\az,N)) \frac{e^2}{2} N^{-2} \log^2 \az \\ \text{where } \mathrm{err}&(\az,N) = \frac{e}{2} \frac{\log \az}{N - \frac{e}{2}\log \az} \,. \label{eq::power_worst_case_bound} \end{align} \end{theorem} Note in particular that when $N \geq e \log \az$, we have $\mathrm{err}(\az,N) \leq 1$, giving a bound of $\max_{\bx \in \triangle_{\az-1}}D_{\kl}(\bx\|\bnormvar) \leq e^2 N^{-2} \log^2 \az$. We can think of $s = \frac{1}{\log \az}$ as a `minimax' among the class of power companders. This result shows $f(x) = x^{\frac{1}{\log \az}}$ has performance within a constant factor of the minimax compander, and hence might be a good alternative. \iflong \else Due to space constraints, we omit the proofs of \Cref{rmk::best-constant,thm::worstcase_power_minimax}. We sketch the other proofs in \Cref{sec::asymt_single,sec::minimax}. \subsection{Experimental Results} \label{sec::experimental_results} We compare the performance of five quantizers, with granularities $N = 2^8$ and $N = 2^{16}$, on three types of datasets of various alphabet sizes: \begin{itemize}\item Random synthetic distributions drawn from the uniform prior over the simplex: {We draw and take the average over 1000 random samples for our results.} \item Frequency of words in books: {These frequencies are computed from text available through the Natural Language Toolkit (NLTK) libraries for Python. For each text, we extract tokens (single words or punctuation) and count the occurrences of each token.} \item Frequency of $k$-mers in DNA: {For a given sequence of DNA, the set of $k$-mers is the set of length-$k$ substrings which appear in the sequence. 
We use the human genome as the source for our DNA sequences. Parts of the sequence marked as repeats are removed.} \end{itemize} Our quantizers are: \begin{itemize} \item \textbf{Approximate Minimax Compander:} As given by equation \eqref{eq::appx-minimax-compander}. Using the approximate minimax compander is much simpler than using the minimax compander since the constant $c_\az$ does not need to be computed. \item \textbf{Truncation:} Uniform quantization (equivalent to $\comp(x) = x$), which truncates the least significant bits. This is the natural way of quantizing values in $[0,1]$. \item \textbf{Float and bfloat16:} For 8-bit encodings ($N = 2^8$), we use a floating point implementation which allocates 4 bits to the exponent and 4 bits to the mantissa. For 16-bit encodings ($N = 2^{16}$), we use bfloat16, a standard which is commonly used in machine learning~\cite{kalamkar2019study}. \item \textbf{Exponential Density Interval (EDI):} This is the quantization method we used in an achievability proof in \cite{adler_ratedistortion_2021}. It is designed for the uniform prior over the simplex. \item \textbf{Power Compander:} Recall that the compander is $ \comp(x) = x^{s}$. We optimize $s$ and find that $s = \frac{1}{\log \az}$ asymptotically minimizes KL divergence, and also gives close to the best performance among power companders empirically. To see the effects of different powers $s$ on the performance of the power compander, see \Cref{fig::books_power}. \end{itemize} Because a well-defined prior does not always exist for these datasets (and for simplicity) we use midpoint decoding for all the companders. When a probability value of exactly $0$ appears, we do not use companding and instead quantize the value to $0$, i.e. the value $0$ has its own bin. 
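The gap between truncation and companding reported below can be reproduced in a few lines. The following sketch (our own toy version of the comparison: one synthetic random distribution in place of the datasets, with midpoint decoding and the zero-bin convention above) quantizes the same vector with the power compander and with truncation:

```python
import math
import random

def compand_quantize(p, f, finv, N):
    # Midpoint decoding: each value maps to the midpoint (in the original
    # domain) of the companded bin it falls in; an exact zero keeps its own bin.
    y = []
    for x in p:
        if x == 0.0:
            y.append(0.0)
        else:
            n = min(int(f(x) * N), N - 1)
            y.append(0.5 * (finv(n / N) + finv((n + 1) / N)))
    t = sum(y)
    return [v / t for v in y]

def kl(p, q):
    return sum(x * math.log(x / z) for x, z in zip(p, q) if x > 0)

random.seed(2)
K, N = 1000, 256
draws = [random.expovariate(1.0) for _ in range(K)]
total = sum(draws)
p = [d / total for d in draws]

s = 1.0 / math.log(K)  # power-compander exponent
loss_power = kl(p, compand_quantize(p, lambda x: x ** s, lambda u: u ** (1.0 / s), N))
loss_trunc = kl(p, compand_quantize(p, lambda x: x, lambda u: u, N))
```

With $\az = 1000$ and $N = 256$, nearly all entries of $\bx$ fall into the lowest one or two uniform bins, so truncation loses most of the distribution's shape, while the power compander's loss stays on the $N^{-2}\log^2\az$ scale.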
\begin{figure} \centering \includegraphics[scale = .4, trim = {25 0 0 0} ]{figures/all_books_frequencies_256.pdf} \caption{Power compander $\comp(x) = x^s$ performance with different powers $s$ used to quantize frequency of words in books. The number $\az$ of distinct words in each book is shown in the legend. The theoretical optimal power $s = \frac{1}{\log \az}$ is plotted.} \label{fig::books_power} \end{figure} Our main experimental results are given in \Cref{fig::first_compare_compander}, showing the KL divergence between the empirical distribution $\bx$ and its quantized version $\bnormvar$ versus alphabet size $\az$. The approximate minimax compander performs well against all sources. \begin{figure} \centering \includegraphics[scale = .4]{figures/kl_ucsc_dna_books_dir_1_truncate_exp_power_sinh_float_256_fig.pdf} \includegraphics[scale = .4]{figures/kl_ucsc_dna_books_dir_1_truncate_exp_power_sinh_bfloat16_65536_fig.pdf} \caption{Plot comparing the performance of the truncation compander, the EDI compander, floating points, the power compander, and the approximate minimax compander \eqref{eq::appx-minimax-compander} on probability distributions of various sizes. } \label{fig::first_compare_compander} \vspace{-1pc} \end{figure} For truncation, the KL divergence increases with $\az$ and is generally fairly large. The EDI quantizer works well for the synthetic uniform prior (as it should), but for real-world datasets like word frequency in books, it performs badly (sometimes even worse than truncation). The loss of the power compander is similar to the minimax compander (only worse by a constant factor), as predicted by \Cref{thm::power_compander_results}. The experiments show that the approximate minimax compander achieves low loss on the entire ensemble of data (even for relatively small granularity, such as $N = 256$) and outperforms both truncation and floating-point implementations on the same number of bits. 
Additionally, its closed-form expression (and entrywise application) makes it simple to implement and computationally inexpensive, so it can be easily added to existing systems to lower storage requirements at little or no cost to fidelity. \subsection{Paper Organization} We provide background and discuss previous work on companders in \Cref{sec::previous_works}. We prove \Cref{thm::asymptotic-normalized-expdiv} in \Cref{sec::asymt_single} (though proofs of some lemmas and propositions leading up to it are given in Appendix~\ref{sec::appendix_proofs_asymptotic}). \Cref{prop::approximate-compander} is proved in Appendix~\ref{sec::approximate-compander-f-dagger}. In \Cref{sec::minimax}, we optimize over \eqref{eq::raw_loss} to get the maximin single-letter distribution (showing part of \Cref{prop::maximin-density} with other parts left to Appendix~\ref{sec::minimax_const_analysis}) and the minimax compander, thus showing \Cref{thm::optimal_compander_loss,thm::minimax_compander}, \Cref{cor::worstcase_prior} and \Cref{prop::bound_worstcase_prior_exist} (leaving \Cref{thm::approximate-minimax-compander} for Appendix~\ref{sec::proof_L_appx_minimax_compander}). We prove \Cref{thm::worstcase_power_minimax} and the worst-case part of \Cref{thm::power_compander_results} in Appendix~\ref{sec::worst-case_analysis}. Other parts of \Cref{thm::power_compander_results} are discussed in Appendix~\ref{sec::power_compander_analysis}. In \Cref{sec::other_losses} we discuss companders for losses other than KL divergence. Finally, in \Cref{sec::info_distillation_main} we discuss a connection of our problem to the problem of information distillation with proofs given in \Cref{sec::info_distillation_detail}. 
\section{Background} \label{sec::previous_works} Companders (also spelled ``compandors'') were introduced by Bennett in 1948 \cite{bennett1948} as a way to quantize speech signals, where it is advantageous to give finer quantization levels to weaker signals and coarser levels to larger signals. Bennett gives a first order approximation that the mean-square error in this system is given by \begin{align}\label{eq::bennett_compander} \frac{1}{12 N^2} \int_{a}^b \frac{p(x)}{(f'(x))^2} dx \end{align} where $N$ is the number of quantization levels, $a$ and $b$ are the minimum and maximum values of the input signal, $p$ is the probability density of the input signal, and $f'$ is the slope of the compressor function placed before the uniform quantization. This formula is similar to our \eqref{eq::raw_loss} except that we have an extra $x^{-1}$ since we are working with KL divergence. Others have expanded on this line of work. In \cite{panter_dite}, the authors studied the same problem and determined the optimal compressor under mean-square error, a result which parallels our result \eqref{eq::raw_overall_dist}. However, results like those in \cite{bennett1948, panter_dite} are stated either as first order approximations or make simplifying assumptions. For example, in \cite{panter_dite}, the authors state that they assume the values $\widehat{y}_{(n)}$ are close enough together that the probability density within any given bin can be treated as a constant. In contrast, we rigorously show that this fundamental logic holds under very general conditions ($f \in \compset^\dagger$). Generalizations of Bennett's formula are also studied when instead of mean-square error, the loss is the expected $r$th moment loss $\bbE\norm{\cdot}^r$. This is computed for vectors of length $\az$ in \cite{zador1982} and \cite{gersho1979}. The typical examples of companders used in engineering and signal processing are the $\mu$-law and $A$-law companders \cite{lewis_mu-law}. 
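Bennett's approximation \eqref{eq::bennett_compander} is easy to verify numerically. The following sketch (our own illustration; the compressor $f(x) = \sqrt{x}$ and the uniform source on $[0,1]$ are arbitrary choices, not from the cited works) compares the exact mean-square error of companded uniform quantization against the predicted value $\frac{1}{12N^2}\int_0^1 4x \, dx = \frac{1}{6N^2}$:

```python
def companded_mse_sqrt(N):
    # Mean-square error of companded uniform quantization with compressor
    # f(x) = sqrt(x) and a uniform source on [0, 1]. Bin n covers
    # x in [(n/N)^2, ((n+1)/N)^2] and decodes to ((n+0.5)/N)^2; the per-bin
    # integral of (x - xhat)^2 over the bin has the closed form below.
    mse = 0.0
    for n in range(N):
        l, r = (n / N) ** 2, ((n + 1) / N) ** 2
        xhat = ((n + 0.5) / N) ** 2
        mse += ((r - xhat) ** 3 - (l - xhat) ** 3) / 3.0
    return mse
```

For $N = 256$ the exact error agrees with Bennett's first-order prediction $\frac{1}{6N^2}$ to well under one percent.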
For the $\mu$-law compander, \cite{panter_dite} and \cite{smith1957} argue that for mean-squared error, for a large enough constant $\mu$ the distortion becomes independent of the signal. Quantizing probability distributions is a well-studied topic, though typically the loss function is a norm and not KL divergence \cite{graf2007}. Quantizing for KL divergence is considered in our earlier work \cite{adler_ratedistortion_2021}, focusing on average KL loss for Dirichlet priors. A similar problem to quantizing under KL divergence is \emph{information $k$-means}. This is the problem of clustering $n$ points $a_i$ to $k$ centers $\hat{a}_j$ to minimize the KL divergences between the points and their associated centers. Theoretical aspects of this are explored in \cite{slonim1999} and \cite{tishby2000}. Information $k$-means has been implemented for several different applications \cite{Pereira93, jiang2013, cao2013}. There are also other works that study clustering with a slightly different but related metric \cite{dhillon2003,nielsen2013, veldhuis2002}; however, the focus of these works is to analyze data rather than reduce storage. \begin{remark} A variant of the classic problem of prediction with log-loss is an equivalent formulation to quantizing the simplex with KL loss: let $\bx \in \triangle_{\az-1}$ and $A \sim \bx$ (in the alphabet $[\az]$); we want to predict $A$ by positing a distribution $\bnormvar \in \triangle_{\az-1}$, and our loss is $-\log \normvar_A$. In the standard version, the problem is to pick the best $\bnormvar$ given limited information about $\bx$; however, if we \emph{know} $\bx$ but are required to express $\bnormvar$ using only $\log_2 M$ bits, it is equivalent to quantizing the simplex with KL divergence loss. \end{remark} \section{Asymptotic Single-Letter Loss} \label{sec::asymt_single} In this section we give the proof of \Cref{thm::asymptotic-normalized-expdiv} (though the proofs of some lemmas must be sketched). 
We use the following notation: Given an interval $I$ we define $\bar{y}_I$ to be its midpoint and $r_I$ to be its width, so that by definition \begin{align} I = [\bar{y}_I - r_I/2, \bar{y}_I + r_I/2]\,. \end{align} Note that if $I \subseteq [0,1]$ then $r_I \leq 2 \bar{y}_I$. Given probability distribution $p$ and interval $I$, we denote the following: $p|_I$ is $p$ restricted to $I$; $\pi_{p,I} := \bbP_{X \sim p}[X \in I]$ is the probability mass of $I$; and the \emph{centroid of $I$ under $p$} is \begin{align} \widetilde{y}_{p,I} := \bbE_{X \sim p|_I} [X] = \bbE_{X \sim p}[X \, | \, X \in I]\,. \end{align} If they are undefined because $\bbP_{X \sim p}[X \in I] = 0$ then by convention $p|_I$ is uniform on $I$ and $\widetilde{y}_{p, I} = \bar{y}_I$. When $I = I^{(n)}$ is a bin of the compander, we can replace it with $(n)$ in the notation, i.e. $\bar{y}_{(n)} = \bar{y}_{I^{(n)}}$ (so the midpoint of the bin containing $x$ at granularity $N$ is denoted $\bar{y}_{(n_N(x))}$ and the width of the bin is $r_{(n_N(x))}$). When $I$ and/or $p$ are fixed, we sometimes drop them from the notation, i.e. $\widetilde{y}_I$ or even just $\widetilde{y}$ to denote the centroid of $I$ under $p$. \subsection{The Local Loss Function} One key to the proof is the following perspective: instead of considering $X \sim p$ directly, we (equivalently) first select bin $I^{(n)}$ with probability $\pi_{p, (n)}$, and then select $X \sim p|_{(n)}$. The expected loss can then be considered within bin $I^{(n)}$. This makes it useful to define: \begin{definition} Given probability measure $p$ and interval $I$, the \emph{single-interval loss of $I$ under $p$} is \begin{align} \ell_{p,I} = \bbE_{X \sim p|_{I}}[X \log(X/\widetilde{y}_{p,I})]\,. \end{align} \end{definition} As before, if $p$ and/or $I$ is fixed and clear, we can drop it from the notation (and if $I = I^{(n)}$ is a bin, we can denote the local loss as $\ell_{p, (n)}$). 
This can be interpreted as follows: if we quantize all $x \in I$ to the centroid $\widetilde{y}_I$, then $\ell_{p,I}$ is the expected loss of $X \sim p$ conditioned on $X \in I$. Thus the values of $\ell_{p,(n)}$ can be used as an alternate means of computing the single-letter loss: \begin{align} \singleloss(p,f,N) &= \bbE_{X \sim p} [X \log(X/\widetilde{y}(X))] \\ &= \sum_{n=1}^N \pi_{p, (n)} \bbE_{X \sim p|_{(n)}}[X \log(X/\widetilde{y}_{p,(n)})] \\ &= \sum_{n=1}^N \pi_{p, (n)} \ell_{p, (n)} = \int_{[0,1]} \ell_{p, (n_N(x))} \, dp\,. \end{align} Thus the normalized single-letter loss (whose limit is the asymptotic single-letter loss \eqref{eq::raw-assl}) is \begin{align} N^2 \, \singleloss(p,f,N) = \int_{[0,1]} N^2 \, \ell_{p, (n_N(x))} \, dp\,. \end{align} For single-letter density $p$ and compander $f$, we define the \emph{local loss function at granularity $N$}: \begin{align} \label{eq::loc-loss-fn} g_N(x) = N^2 \, \ell_{p, (n_N(x))}\,. \end{align} We also define the \emph{asymptotic local loss function}: \begin{align} g(x) = \frac{1}{24} f'(x)^{-2} x^{-1} \,. \end{align} \Cref{thm::asymptotic-normalized-expdiv} is therefore equivalent to: \begin{align} \liminf_{N \to \infty} \int g_N \, dp &\geq \int g \, dp ~\forall~ p \in \cP, f \in \cF \label{eq::fatou-restated} \\ \text{and} \, \lim_{N \to \infty} \int g_N \, dp &= \int g \, dp ~\forall~ p \in \cP, f \in \cF^\dagger \label{eq::raw-restated}. \end{align} To prove \eqref{eq::fatou-restated} and \eqref{eq::raw-restated}, we show: \begin{proposition} \label{prop::locloss-convergence} For all $p \in \cP$, $\comp \in \compset$, if $X \sim p$ then \begin{align} \lim_{N \to \infty} \locloss_N(X) = \locloss(X) ~~~\text{almost surely.} \end{align} \end{proposition} \begin{proposition} \label{prop::locloss-dominating} Let $f \in \cF^\dagger$ be a compander and $c > 0$ and $\alpha \in (0,1]$ such that $f(x) - c x^\alpha$ is monotonically increasing. 
Letting $g_N$ be the local loss functions as in \eqref{eq::loc-loss-fn} and \begin{align} h(x) = (2^{2/\alpha} + \alpha^2 2^{1/\alpha - 2}) (c \alpha)^{-2} x^{1-2\alpha} + c^{-1/\alpha} 2^{1/\alpha - 2} \end{align} then $g_N(x) \leq h(x)$ for all $x, N$. Additionally, if $\alpha \leq 1/2$ then $\int_{[0,1]} h \, dp < \infty$. \end{proposition} The lower bound \eqref{eq::fatou-restated} then follows immediately from \Cref{prop::locloss-convergence} and Fatou's Lemma; and when $f \in \cF^\dagger$, by \Cref{prop::locloss-dominating} there is some $h$ which is integrable over $p$ and dominates all $g_N$, thus showing \eqref{eq::raw-restated} by the Dominated Convergence Theorem. To prove \Cref{prop::locloss-convergence}, we use the following: \begin{itemize} \item For any $x$ at which $f$ is differentiable, when $N$ is large, the width of the interval $x$ falls in is \begin{align} r_{(n_N(x))} \approx N^{-1} f'(x)^{-1} \, . \end{align} \item For any $x$ at which $F_p$ is differentiable, $p|_I$ will be approximately uniform over any sufficiently small $I$ containing $x$. \item For a sufficiently small interval $I$ containing $x$ such that $p|_I$ is approximately uniform, \begin{align}\label{eq::local_loss_approx} \ell_{p,I} \approx \frac{1}{24} r_I^2 x^{-1}\,. \end{align} \end{itemize} Putting these together, we get that if $F_p$ and $f$ are both differentiable at $x$ then when $N$ is large, \begin{align} g_N(x) &= N^2 \, \ell_{p,(n_N(x))} \\ &\approx N^2 \frac{1}{24} r_{(n_N(x))}^2 x^{-1} \approx \frac{1}{24} f'(x)^{-2} x^{-1} = g(x) \end{align} as we wanted. We formally state each of these steps in \Cref{sec::locloss-convergence-preliminaries} and combine them to prove \Cref{prop::locloss-convergence} in \Cref{sec::locloss-convergence-pf}. The proof of \Cref{prop::locloss-dominating} is given in \Cref{sec::locloss-dominating-proof}, along with its own set of definitions and lemmas needed to show it. 
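The convergence $g_N \to g$ can also be observed numerically. For the illustrative choice $f(x) = \sqrt{x}$ (which is in $\cF^\dagger$) and $p$ uniform on $[0,1]$, we get $g(x) = \frac{1}{24} f'(x)^{-2} x^{-1} = \frac{1}{6}$ for every $x$, and the sketch below evaluates $g_N(x_0) = N^2 \, \ell_{p,(n_N(x_0))}$ directly (the sample points $x_0$ and the granularity are arbitrary):

```python
import math

def bin_interval(x, N):
    # Bin of x under the compander f(x) = sqrt(x): uniform bins in the
    # companded domain pull back to [ (n/N)^2, ((n+1)/N)^2 ].
    n = min(int(math.sqrt(x) * N), N - 1)
    return (n / N) ** 2, ((n + 1) / N) ** 2

def local_loss(x0, N, m=100_000):
    # g_N(x0) = N^2 * E[X log(X / centroid)] for X uniform on the bin of x0;
    # for uniform p the centroid is the bin midpoint. Midpoint rule over the bin.
    l, r = bin_interval(x0, N)
    c = 0.5 * (l + r)
    s = 0.0
    for i in range(m):
        x = l + (r - l) * (i + 0.5) / m
        s += x * math.log(x / c)
    return N * N * s / m
```

At $N = 1024$, `local_loss` already agrees with the limit $g(x_0) = \frac16$ to within a fraction of a percent at interior points.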
\section{Minimax Compander} \label{sec::minimax} \Cref{thm::asymptotic-normalized-expdiv} showed that for $f \in \cF^\dagger$, the asymptotic single-letter loss equals \begin{align} \singleloss(p, f) = \frac{1}{24} \int_0^1 p(x) f'(x)^{-2} x^{-1} dx\,. \end{align} Using this, we can determine the `best' compander $f$ and the `worst' single-letter density $p$ in order to show \Cref{thm::optimal_compander_loss,thm::minimax_compander} and their related results. \iflong \subsection{Optimizing the Compander} We show \Cref{thm::optimal_compander_loss}, which follows from \Cref{thm::asymptotic-normalized-expdiv} by finding $\comp \in \compset$ which minimizes $L^\dagger(p,\comp)$. This is achieved by optimizing over $\compder$; we will also use some concepts from \Cref{prop::approximate-compander} to connect it back to $\inf_{\comp \in \cF} \widetilde{L}(p,\comp)$ when the resulting $\comp$ is not in $\cF^\dagger$. Since $\comp : [0,1] \to [0,1]$ is monotonic, we use constraints $\compder(x) \geq 0$ and $\int_0^1 \compder(x) \, dx = 1$. \iflong We solve the following: \begin{align} \text{minimize } &L^\dagger(p,\comp) = \frac{1}{24}\int_0^1 p(x) \compder(x)^{-2} x^{-1} \, dx \\ \text{subject to } &\int_0^1 \compder(x) \, dx = 1 \\&\text{ and } \compder(x) \geq 0 \text{ for all } x \in [0,1] \,. \end{align} The function $L^\dagger(p,\comp)$ is convex in $\compder$, and thus first order conditions show optimality. Let $\lambda(x)$ satisfy $\int_0^1 \lambda(x) dx = 0$. 
If $\compder(x) \, \propto \, (p(x)x^{-1})^{1/3}$, we derive: \begin{align} &\frac{d}{dt} \frac{1}{24} \int_0^1 p(x) \big(\compder(x) + t \, \lambda(x) \big)^{-2} x^{-1} \, dx \label{eq::perturbed_func_min_L} \\ &= \frac{1}{24} \int_0^1 p(x) x^{-1} \frac{d}{dt} \big(\compder(x) + t \, \lambda(x) \big)^{-2} \, dx \\ &= -\frac{1}{12} \int_0^1 p(x) x^{-1} \big(\compder(x) + t \, \lambda(x) \big)^{-3} \lambda(x) \, dx \\ &= -\frac{1}{12} \int_0^1 p(x) x^{-1} \compder(x)^{-3} \lambda(x) \, dx ~~(\text{at } t = 0) \, \\ & \propto \, -\frac{1}{12} \int_0^1 \lambda(x) \, dx = 0 \label{eq::f_proportional_to}\,. \end{align} Thus, such $f$ satisfies the first-order optimality condition under the constraint $\int f'(x) \, dx = 1$. This gives \else Using calculus of variations, we get $ \compder_p(x) \, \propto \, (p(x)x^{-1})^{1/3} $ and $\comp(0) = 0$ and $\comp(1) = 1$, from which \eqref{eq::raw_overall_dist} and \eqref{eq::best_f_raw} follow. If $\comp_p \in \compset^\dagger$, then $\comp_p = \argmin_f \singleloss(p,\comp)$, and for any other $\comp \in \compset$, \begin{align} \hspace{-0.5pc} \singleloss(p,\comp_p) &= L^\dagger(p,\comp_p) \leq L^\dagger(p,\comp) \\&\leq \liminf_{N \to \infty} N^2 \singleloss(p,\comp,N) \,. \end{align} If $\comp_p \not \in \compset^\dagger$, for any $\delta > 0$ define $\comp_{p,\delta} = (1 - \delta) \comp_p + \delta x^{1/2}$ (as in \eqref{eq::approximate-compander}). Then $\comp_{p,\delta} - \delta x^{1/2} = (1 - \delta) \comp_p$ is monotonically increasing so $\comp_{p,\delta} \in \compset^\dagger$, so \Cref{thm::asymptotic-normalized-expdiv} applies to $\comp_{p,\delta}$; additionally, $\comp_{p,\delta} - (1-\delta) \comp_p = \delta x^{1/2}$ is monotonically increasing as well so $\comp'_{p,\delta} \geq (1-\delta) \comp'_p$. Hence, plugging into the $L^\dagger$ formula gives: \begin{align} &\singleloss(p,\comp_{p,\delta}) = L^\dagger(p,\comp_{p,\delta}) \leq L^\dagger(p,\comp_p)(1-\delta)^{-2} \, . 
\end{align} Taking $\delta \to 0$ (and since $\compset^\dagger \subseteq \compset$) shows that \begin{align} L^\dagger(p,\comp_p) = \inf_{\comp \in \compset^\dagger} \singleloss(p,\comp)\,, \end{align} finishing the proof of \Cref{thm::optimal_compander_loss}. \begin{remark} Since we know the corresponding single-letter source $p$ for a Dirichlet prior, using this $p$ with \Cref{thm::optimal_compander_loss} gives us the optimal compander for Dirichlet priors on any alphabet size. This gives us a better quantization method than EDI which was discussed in \Cref{{sec::experimental_results}}. This optimal compander for Dirichlet priors is called the \emph{beta compander} and its details are given in Appendix~\ref{sec::beta_companding}. \end{remark} \iflong \subsection{The Minimax Companders and Approximations} To prove \Cref{thm::minimax_compander} and \Cref{cor::worstcase_prior}, we first consider what density $p$ maximizes equation \eqref{eq::raw_overall_dist}: \begin{align} \frac{1}{24} \left(\int_0^1 (p(x)x^{-1})^{1/3} dx\right)^3 \end{align} i.e. is most difficult to quantize with a compander. Using calculus of variations to maximize \begin{align} \int_0^1 (p(x)x^{-1})^{1/3} \,dx \label{eq::loss-to-maximize} \end{align} (which of course maximizes \eqref{eq::raw_overall_dist}) subject to $p(x) \geq 0$ and $\int_0^1 p(x) \, dx = 1$, we find that maximizer is $p(x) = \frac{1}{2} x^{-1/2}$. However, while interesting, this is only for a single letter; and because $\bbE[X] = 1/3$ under this distribution, it is clearly impossible to construct a prior over $\triangle_{\az-1}$ (whose output vector \emph{must} sum to $1$) with this marginal (unless $\az = 3$). 
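As a numerical sanity check of this unconstrained maximizer (a sketch, not part of the proof; the perturbation $\lambda$ and quadrature scheme are chosen for illustration), one can verify that $p(x) = \frac{1}{2}x^{-1/2}$ is a density with mean $1/3$ and that mass-preserving perturbations decrease the objective \eqref{eq::loss-to-maximize}:

```python
import math

# candidate maximizer p(x) = (1/2) x^{-1/2}
p = lambda x: 0.5 * x ** -0.5

# midpoint-rule quadrature on (0, 1); midpoints avoid the singularity at 0
def integrate(g, n=200000):
    h = 1.0 / n
    return sum(g((i + 0.5) * h) for i in range(n)) * h

assert abs(integrate(p) - 1.0) < 2e-3                      # valid density
assert abs(integrate(lambda x: x * p(x)) - 1 / 3) < 1e-6   # E[X] = 1/3

# the objective integral of (p(x)/x)^{1/3} decreases under any
# mass-preserving perturbation p + t*lam with integral(lam) = 0
obj = lambda q: integrate(lambda x: (q(x) / x) ** (1 / 3))
lam = lambda x: x - 0.5
for t in (0.2, -0.2):
    q = lambda x, t=t: p(x) + t * lam(x)   # still positive on (0, 1)
    assert obj(q) < obj(p)
```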
\iflong Hence, we add an expected value constraint to the problem of maximizing \eqref{eq::loss-to-maximize}, giving: \begin{align} \text{maximize } &\int_0^1 \big(p(x)x^{-1}\big)^{1/3} \, dx \\ \text{subject to } &\int_0^1 p(x) \, dx = 1\label{eq::p_constraint_sum}; \\ &\int_0^1 p(x)x \,dx = \frac{1}{\az} \label{eq::p_constraint_mean};\\ &\text{and } p(x) \geq 0 \text{ for all } x\,. \end{align} We can solve this again using variational methods (we are maximizing a concave function so we only need to satisfy first-order optimality conditions). A function $p(x) > 0$ is optimal if, for any $\lambda(x)$ where \begin{align} \int_0^1 \lambda(x) \, dx = 0 \text{ and } \int_0^1 \lambda(x)x \, dx &= 0 \label{eq::variational_constraints} \end{align} the following holds: \begin{align} \frac{d}{dt} \int_0^1 x^{-1/3} \big(p(x) + t \, \lambda(x)\big)^{1/3} \, dx = 0\,. \end{align} We have by the same logic as before: \begin{align} \frac{d}{dt} \eqstartshort \int_0^1 x^{-1/3} \big(p(x) + t \, \lambda(x)\big)^{1/3} \, dx \eqbreakshort &= \frac{1}{3} \int_0^1 x^{-1/3} \big(p(x) + t \, \lambda(x)\big)^{-2/3} \lambda(x) \, dx \\ &= \frac{1}{3} \int_0^1 x^{-1/3} p(x)^{-2/3} \lambda(x) \, dx ~~ (\text{at } t = 0)\,. \label{eq::variational_condition} \end{align} Thus, if we can arrange things so that there are constants $a_\az, b_\az$ such that \begin{align} x^{-1/3} p(x)^{-2/3} = a_\az + b_\az x \end{align} this ensures \eqref{eq::variational_condition} equals zero. In that case, \begin{align} x^{-1/3} p(x)^{-2/3} &= a_\az + b_\az x \\ \iff \hspace{1pc} p(x)^{-2/3} &= a_\az x^{1/3} + b_\az x ^{4/3} \\ \iff \hspace{2.6pc} p(x) &= \big(a_\az x^{1/3} + b_\az x^{4/3}\big)^{-3/2}\label{eq::maximin-density_repeat}\,. \end{align} This is the maximin density $p^*_\az$ from \Cref{prop::maximin-density} \eqref{eq::maximin-density}, where $a_\az , b_\az $ are set to meet the constraints \eqref{eq::p_constraint_sum} and \eqref{eq::p_constraint_mean}. 
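Although closed forms for $a_\az, b_\az$ are hard to obtain, the constraints are easy to solve numerically. The sketch below (illustrative; the quadrature resolution and the bisection over the ratio $b_\az/a_\az$ are our choices, not part of the paper) finds the constants for $\az = 10$ by enforcing \eqref{eq::p_constraint_sum} and \eqref{eq::p_constraint_mean}:

```python
import math

K = 10  # alphabet size (example)

def integrate(g, n=20000):
    h = 1.0 / n
    return sum(g((i + 0.5) * h) for i in range(n)) * h

def density(rho):
    """Maximin-form density p(x) = (a x^{1/3} + b x^{4/3})^{-3/2} with
    b = rho * a and a chosen so that p integrates to 1."""
    base = lambda x: (x ** (1 / 3) + rho * x ** (4 / 3)) ** -1.5
    a = integrate(base) ** (2 / 3)  # so that a^{-3/2} * integral(base) = 1
    return lambda x: a ** -1.5 * base(x)

# bisect on rho = b/a (in log scale) until the mean constraint E[X] = 1/K
# holds; larger rho pushes mass toward 0 and lowers the mean
lo, hi = 1e-6, 1e4
for _ in range(50):
    rho = math.sqrt(lo * hi)
    p = density(rho)
    if integrate(lambda x: x * p(x)) > 1.0 / K:
        lo = rho
    else:
        hi = rho

p = density(rho)
assert abs(integrate(p) - 1.0) < 1e-9                       # normalization
assert abs(integrate(lambda x: x * p(x)) - 1.0 / K) < 1e-6  # mean = 1/K
```

The recovered ratio $b_\az/a_\az$ can then be compared with the parameterization by $c_\az \az \log \az$ described in the next step.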
Exact formulas for $a_\az, b_\az$ are difficult to find; we give more details after the next step. We want to determine the optimal compander for the maximin density \eqref{eq::maximin-density_repeat}. We know from \eqref{eq::f_proportional_to} that we need to first compute \begin{align} \phi(x) &= \int_0 ^{x} \newvar^{-1/3} \left(a_\az \newvar^{1/3} + b_\az \newvar^{4/3} \right)^{-1/2} \,d\newvar \\ & = \frac{2 \arcsinh \left(\sqrt{\frac{b_\az x}{a_\az }} \right)}{\sqrt{b_\az }}\,. \label{eq::minimax-compander-proportional} \end{align} The best compander $f(x)$ is proportional to \eqref{eq::minimax-compander-proportional} and is exactly given by $f(x) = \phi(x)/\phi(1)$. The resulting compander, which we call the \emph{minimax compander}, is \begin{align} \label{eq::arcsinh-general} f(x) &= \frac{\arcsinh \left(\sqrt{\frac{b_\az x}{a_\az }} \right)}{\arcsinh \left(\sqrt{\frac{b_\az }{a_\az }} \right)} \,. \end{align} Given the form of $f(x)$, it is natural to determine an expression for the ratio $b_\az / a_\az$. We can parameterize both $a_\az$ and $b_\az$ by $b_\az / a_\az$ and then examine how $b_\az / a_\az$ behaves as a function of $\az$. The constraints on $a_\az$ and $b_\az$ give that \begin{align} a_\az &= 4^{1/3}(b_\az / a_\az + 1)^{-1/3} \\b_\az &= 4 a_\az^{-2} - a_\az\,. \end{align} The ratio $b_\az / a_\az$ grows approximately as $\az \log \az$. Hence, we choose to parameterize \begin{align} b_\az / a_\az = c_\az \az \log \az\,. \end{align} To satisfy the constraints, we get $0.25 < c_\az < 0.75$ so long as $\az > 24$ (see \Cref{sec::minimax_const_analysis} for details), and \Cref{lem::c_k_half} in \Cref{sec::lim_c_k} shows that $c_\az \to 1/2$ as $\az \to \infty$. Combining these gives \Cref{prop::maximin-density}.
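The closed form \eqref{eq::minimax-compander-proportional} can be verified numerically. In this sketch (with placeholder values of $a_\az, b_\az$; any positive values work, since the identity holds for all $a, b > 0$), the integrand simplifies to $t^{-1/2}(a + bt)^{-1/2}$, and the substitution $t = u^2$ removes the integrable singularity at $0$:

```python
import math

a, b = 1.0, 5.0  # placeholder positive values for a_K, b_K

def phi(x, n=100000):
    """phi(x) = integral_0^x t^{-1/3} (a t^{1/3} + b t^{4/3})^{-1/2} dt
              = integral_0^x t^{-1/2} (a + b t)^{-1/2} dt.
    Substituting t = u^2 gives the smooth integrand 2 (a + b u^2)^{-1/2}."""
    h = math.sqrt(x) / n
    return sum(2.0 / math.sqrt(a + b * ((i + 0.5) * h) ** 2) for i in range(n)) * h

# closed form 2 arcsinh(sqrt(b x / a)) / sqrt(b)
closed = lambda x: 2.0 * math.asinh(math.sqrt(b * x / a)) / math.sqrt(b)

for x in (0.1, 0.5, 1.0):
    assert abs(phi(x) - closed(x)) < 1e-8

# normalizing gives the minimax compander f(x) = phi(x) / phi(1)
f = lambda x: math.asinh(math.sqrt(b * x / a)) / math.asinh(math.sqrt(b / a))
assert abs(f(0.5) - phi(0.5) / phi(1.0)) < 1e-7
```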
We can then express $a_\az$, $b_\az$ in terms of $c_\az$: \begin{align} a_\az &= 4^{1/3}(c_\az \az \log \az + 1)^{-1/3} \\b_\az &= 4 a_\az^{-2} - a_\az \\ &= 4^{1/3} (c_\az \az \log \az + 1)^{2/3} \eqlinebreakshort - 4^{1/3}(c_\az \az \log \az + 1)^{-1/3} \label{eq::value_c_2_worst+p} \\& = 4^{1/3}(c_\az \az \log \az)^{2/3} (1+o(1))\,. \end{align} When $\az$ is large, the second term in \eqref{eq::value_c_2_worst+p} is negligible compared to the first. Thus, plugging into \eqref{eq::arcsinh-general} we get the {minimax compander} and {approximate minimax compander}, respectively: \begin{align} \comp^*_\az(x) &= \frac{\arcsinh \left(\sqrt{(c_\az \az \log \az) x} \right)}{\arcsinh \left(\sqrt{c_\az \az \log \az} \right)} \\ \approx \comp^{**}_\az(x) &= \frac{\arcsinh (\sqrt{((1/2) \az \log \az) x} )}{\arcsinh (\sqrt{(1/2) \az \log \az} )} \,. \end{align} The minimax compander minimizes the maximum (raw) loss against all densities in $\cP_{1/\az}$, while the approximate minimax compander performs very similarly but is more practical since it can be used without computing $c_\az$. To compute the loss of the minimax compander, we can use \eqref{eq::raw_overall_dist} to get \begin{align} L^{\dagger}(p_\az^*, \comp^*_\az) & = \frac{1}{\dpfconst} \left(\frac{2 \arcsinh\left(\sqrt{c_\az \az \log \az} \right)} {\sqrt{b_\az }} \right)^3\,. \end{align} Substituting we get \begin{align} L^{\dagger}\eqstartshort(p_\az^*, \comp^*_\az) \\ &= \frac{1}{\dpfconst} \frac{8 \left(\log \left(\sqrt{c_\az \az \log \az} + \sqrt{c_\az \az \log \az + 1} \right)\right)^3}{2 c_\az \az \log \az (1+o(1))}\\ & = \frac{1}{\dpfconst} \frac{\left(\log \left(4 c_\az \az \log \az\right)\right)^3}{2 c_\az \az \log \az} (1+o(1)) \\ & = \frac{1}{24} \frac{\log^2 \az}{\az} (1 + o(1)) \label{eq::L_dagger_value_minimax}\,. \end{align} \else Hence, we add the constraint $p \in \cP_{1/\az}$, maximizing \eqref{eq::loss-to-maximize} subject to $p(x) \geq 0$, $\int_0^1 p(x) \, dx = 1$, and $\int_0^1 p(x) x \, dx = 1/\az$.
Solving this modified problem yields the \emph{maximin density} $p^*_\az$ \eqref{eq::maximin-density} from \Cref{thm::minimax_compander}; $\comp^*_\az$ \eqref{eq::minimax-compander} is simply the optimal compander (given by \eqref{eq::best_f_raw}) against $p^*_\az$. In fact, not only is $\comp^*_\az$ optimal against the maximin density $p^*_\az$, but (as alluded to in the name `minimax compander') it minimizes the maximum asymptotic loss over all $p \in \cP_{1/\az}$. More formally, we show that $(\comp^*_\az, p^*_\az)$ is a saddle point of $L^\dagger$. The function $L^\dagger(p,\comp)$ is concave (actually linear) in $p$ and convex in $\compder$, and we can show that the pair $(\comp^*_\az, p^*_\az)$ forms a saddle point, thus proving \eqref{eq::minmax}-\eqref{eq::maxmin} from \Cref{thm::minimax_compander}. \iflong We can compute that \begin{align} (\comp_\az^*)'(x) \, &\propto \, (p_\az^*(x) x^{-1})^{1/3} \\ & = x^{-1/3} (a_\az x^{1/3} + b_\az x^{4/3})^{-1/2} \\ & = \frac{1}{\sqrt{a_\az x + b_\az x^2}}\,. \end{align} Assume we set $a_\az $ and $b_\az $ to the appropriate values for $\az$. For any $p \in \cP_{1/\az}$, up to a multiplicative constant that does not depend on $p$ (absorbing the factor $\frac{1}{24}$ and the normalization of $(\comp_\az^*)'$), \begin{align} {L^{\dagger}}(p, \comp_\az^*) & = \int_0^1 p(x) x^{-1} ((\comp_\az^*)'(x))^{-2} dx\\ & = \int_0^1 p(x) x^{-1} (a_\az x + b_\az x^2) dx \\ & = a_\az + b_\az \frac{1}{\az} \end{align} i.e. $L^\dagger(p,\comp^*_\az)$ does not depend on $p$. Since $\comp^*_\az$ is the optimal compander against the maximin density $p^*_\az$ we can therefore conclude: \begin{align} \sup_{p \in \cP_{1/\az}} {L^{\dagger}}(p, \comp_\az^*) &= {L^{\dagger}}(p_\az^*, \comp_\az^*) \\ = \inf_{\comp \in \compset} {L^{\dagger}}(p_\az^*, \comp) &= \sup_{p \in \cP_{1/\az} }\inf_{\comp \in \compset} {L^{\dagger}}(p, \comp)\,.
\end{align} Since it is always true that \begin{align}\label{eq::minimax_ineq} \sup_{p \in \cP_{1/\az}} \inf_{\comp \in \compset} {L^{\dagger}}(p, \comp) \leq \inf_{\comp \in \compset} \sup_{p \in \cP_{1/\az}} {L^{\dagger}}(p, \comp) \,, \end{align} this shows that $(\comp_\az^*, p^*_\az)$ is a saddle point. Furthermore, $\comp_\az^* \in \compset^\dagger$ (specifically it behaves as a multiple of $x^{1/2}$ near $0$), so $\singleloss(p,\comp_\az^*) = L^\dagger(p,\comp_\az^*)$ for all $p$, thus showing that $\comp_\az^*$ performs well against any $p \in \cP_{1/\az}$. Using \eqref{eq::raw_loss} with the expressions for $p^*_\az$ and $\comp_\az^*$ and \eqref{eq::L_dagger_value_minimax} gives \eqref{eq::raw_loss_saddle}. This completes the proof of \Cref{thm::minimax_compander}. \begin{remark} While the power compander $\comp(x) = x^{1/\log \az}$ is not minimax optimal, it has similar properties to the minimax compander and differs in loss by at most a constant factor. We analyze the power compander in \Cref{sec::power_compander_analysis}. \end{remark} \iflong \subsection{Existence of Priors with Given Marginals} While $p^*_\az$ is the most difficult density in $\cP_{1/\az}$ to quantize, it is unclear whether a prior $P^*$ on $\triangle_{\az-1}$ exists with marginals $p^*_\az$ -- even though $\az$ copies of $p^*_\az$ will correctly sum to $1$ in expectation, it may not be possible to correlate them to guarantee they sum to $1$. However, it is possible to construct a prior $P^*$ whose marginals are as hard to quantize, up to a constant factor, as $p^*_\az$, by use of clever correlation between the letters. We start with a lemma: \begin{lemma}\label{lem::cramming2} Let $p \in \cP_{1/\az}$. Then there exists a joint distribution of $(X_1, \dots, X_\az)$ such that (i) $X_i \sim p$ for all $i \in [\az]$ and (ii) $\sum_{i \in [\az]} X_i \leq 2$ with probability $1$. \end{lemma} \iflong \begin{proof} Let $F$ be the cumulative distribution function of $p$.
Define the quantile function $F^{-1}$ as \begin{align} F^{-1}(u) = \inf \{x : F(x) \geq u\}. \end{align} We break $[0,1]$ into $\az$ uniform sub-intervals $I_i = ((i-1)/\az, i/\az]$ (let $I_1 = [0, 1/\az]$). We then generate $X_1, X_2, \dots, X_\az$ jointly by the following procedure: \begin{enumerate} \item Choose a permutation $\sigma : [\az] \to [\az]$ uniformly at random (from $\az!$ possibilities). \item Let $U_k \sim \unif_{I_{\sigma(k)}}$ independently for all $k$. \item Let $X_k = F^{-1}(U_k)$. \end{enumerate} Now we consider $\sum_k X_k$. Let $b_i = F^{-1}(i/\az)$ for $i = 0, 1, \dots, \az$. Note that if $\sigma(k) = i$ then $U_k \in ((i-1)/\az, i/\az]$ and hence $X_k = F^{-1}(U_k) \in [b_{i-1}, b_i]$. Therefore $X_{\sigma^{-1}(i)} \in [b_{i-1}, b_i]$ and thus for any permutation $\sigma$, \begin{align} \sum_{i=1}^\az b_{i-1} &\leq \sum_{i=1}^\az X_{\sigma^{-1}(i)} \leq \sum_{i=1}^\az b_i \\ &= \Big(\sum_{i=1}^\az b_{i-1}\Big) + b_\az - b_0\\ &\leq \Big(\sum_{i=1}^\az b_{i-1}\Big) + 1 \leq 2 \end{align} as $\sum_i b_{i-1} \leq \sum_i \bbE[X_{\sigma^{-1}(i)}] = \az \bbE_{X \sim p}[X] = 1$. \end{proof} \Cref{lem::cramming2} shows that there exists a joint distribution of $\newVar_1, \dots, \newVar_{\az-1}$ such that $\newVar_i \sim p^*_\az$ for all $i$ and $\sum_{i=1}^{\az-1} \newVar_i \leq 2$ with probability $1$. \iflong Then, if $X_i = \newVar_i/2$ for all $i \in [\az-1]$, we have $\sum_{i=1}^{\az-1} X_i \leq 1$. Then setting $X_\az = 1 - \sum_{i = 1}^{\az - 1} X_i \geq 0$ ensures that $(X_1, \dots, X_\az)$ is a probability vector. \else We scale by $1/2$ and add $X_\az$ so that all $\az$ variables sum to one.
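The coupling in the proof of \Cref{lem::cramming2} is easy to implement. Below is an illustrative Python sketch; the density is a toy example with $\bbE[X] = 1/\az$, namely $X = U^{\az-1}$ for uniform $U$ (so $F^{-1}(u) = u^{\az-1}$), not $p^*_\az$:

```python
import random

K = 10
F_inv = lambda u: u ** (K - 1)  # quantile function of X = U^{K-1}, E[X] = 1/K

def coupled_sample():
    """One draw of (X_1, ..., X_K): each X_k has the marginal p, yet the sum
    is deterministically at most 2."""
    sigma = list(range(K))
    random.shuffle(sigma)  # step 1: uniform random permutation
    # step 2: U_k uniform on the sub-interval I_{sigma(k)} of width 1/K
    us = [(sigma[k] + random.random()) / K for k in range(K)]
    return [F_inv(u) for u in us]  # step 3: apply the quantile function

random.seed(0)
samples = [coupled_sample() for _ in range(2000)]
assert all(sum(x) <= 2 for x in samples)  # the lemma's bound
# each marginal is still p: averaging over sigma makes U_k uniform on [0,1]
mean = sum(x[0] for x in samples) / len(samples)
assert abs(mean - 1.0 / K) < 0.02
```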
Denoting this prior $P^*_{\text{hard}}$ and letting $p^{**}_\az(x) = 2p^*_\az(2x)$ (so $\newVar_i \sim p^*_\az \implies X_i \sim p^{**}_\az$) we get that \begin{align} &\inf_{\comp \in \compset} \rawloss_\az(P^*_{\text{hard}}, \comp) \geq (\az - 1) \inf_{\comp \in \compset} \singleloss(p^{**}_\az, \comp) \label{eq::scale2_pstart} \\ &= (\az - 1) \frac{1}{2} {L^{\dagger}}(p^*_\az, \comp_\az^*) \label{eq::after_scale2_pstart} \geq \frac{1}{2} \frac{\az - 1}{\az} \sup_{P \in \cP^\triangle_\az} \rawloss_\az(P, \comp_\az^*) \,. \end{align} The last inequality holds because $p^*_\az$ is the maximin density (under expectation constraints). To make $P^*_{\text{hard}}$ symmetric, we permute the letter indices randomly without affecting the raw loss; thus we get \Cref{cor::worstcase_prior}. To get \eqref{eq::after_scale2_pstart} from \eqref{eq::scale2_pstart}, we have \begin{align} \inf_{\comp \in \compset} &\singleloss(2p^*_\az(2x), \comp) = \frac{1}{24} \left(\int_0^1 (2p^*_\az(2x) x^{-1})^{1/3} dx\right)^3\\ & = \frac{1}{24} \left(\int_0^1 (2p^*_\az(u) 2 u^{-1})^{1/3} \frac{1}{2}du\right)^3\\ & = \frac{1}{2} {L^{\dagger}}(p^*_\az, \comp^*_\az)\,. \end{align} \iflong This shows \Cref{prop::bound_worstcase_prior_exist}. In \Cref{fig:cramming_plot}, we validate the distribution $P^*_{\text{hard}}$ by showing the performance of each compander when quantizing random distributions drawn from $P^*_{\text{hard}}$. For the minimax compander, the KL divergence loss on the worst-case prior appears to be within a constant factor of that for the other datasets. \begin{figure} \centering \includegraphics[scale = .4]{figures/kl_ucsc_dna_books_worst-case-cramming__truncate_exp_sinh_bfloat16_65536_fig.pdf} \caption{Each compander (or quantization method) is used on random distributions drawn from the prior $P^*_{\text{hard}}$.
Comparison is given to when each compander is used on the books and DNA datasets.} \label{fig:cramming_plot} \end{figure} \section{Companding Other Metrics and Spaces} \label{sec::other_losses} While our primary focus has been KL divergence over the simplex, for context we compare our results to what the same compander analysis would give for other loss functions like squared Euclidean distance ($L_2^2$) and absolute distance ($L_1$ or $TV$ distance). For a vector $\bx$ and its representation $\bnormvar$ let \begin{align} L_2^2(\bx, \bnormvar) &= \sum_i (x_i - \normvar_i)^2\\ L_1(\bx, \bnormvar) &= \sum_i |x_i - \normvar_i|\,. \end{align} For squared Euclidean distance, asymptotic loss was already given by \eqref{eq::bennett_compander} in \cite{bennett1948}, and scales as $N^{-2}$. It turns out that the maximin single-letter distribution over a bounded interval is the uniform distribution. Thus, the minimax compander for $L_2^2$ is simply the identity function, i.e. uniform quantization is minimax for quantizing a hypercube in high-dimensional space under $L_2^2$ loss. (For unbounded spaces, $L_2^2$ loss does not scale with $N^{-2}$.) If we add the expected value constraint to the $L_2^2$ compander optimization problem, we can derive the best square distance compander for the probability simplex. For alphabet size $\az$, we get that the minimax compander for $L_2^2$ is given by \begin{align} f_{L_2^2, \az}(x) = \frac{\sqrt{1 + \az(\az-2)x} - 1}{\az-2} \end{align} and the total $L_2^2$ loss for probability vector $\bx$ and its quantization $\bnormvar$ satisfies \begin{align} \lim_{N \to \infty} N^2 L_2^2 (\bx, \bnormvar) \leq \frac{1}{3}\,. \end{align} For $L_1$, unlike KL divergence and $L_2^2$, the loss scales as $1/N$. Like $L_2^2$, the minimax single-letter compander for $L_1$ loss in the hypercube $[0,1]^\az$ is the identity function, i.e. uniform quantization.
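As a quick sanity check of the $L_2^2$ simplex compander above (a sketch; the grid test of monotonicity is our own), note that $\sqrt{1 + \az(\az-2)} = \az - 1$, so $f_{L_2^2,\az}$ maps $[0,1]$ onto $[0,1]$:

```python
import math

def f_l22(x, K):
    """Minimax L2^2 compander on the simplex for alphabet size K (K > 2)."""
    return (math.sqrt(1.0 + K * (K - 2) * x) - 1.0) / (K - 2)

for K in (3, 10, 100):
    # endpoints: sqrt(1 + K(K-2)) = K - 1, so f(1) = (K - 2)/(K - 2) = 1
    assert abs(f_l22(0.0, K)) < 1e-12 and abs(f_l22(1.0, K) - 1.0) < 1e-12
    xs = [i / 1000 for i in range(1001)]
    assert all(f_l22(u, K) < f_l22(v, K) for u, v in zip(xs, xs[1:]))  # increasing
```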
In general, the derivative of the optimal compander for single-letter density $p(x)$ has the form \begin{align} \comp_{L_1, \az} ' (x) \, \propto \, \sqrt{p(x)}\,. \end{align} On the probability simplex for alphabet size $\az$, the worst case prior $p(x)$ has the form \begin{align} p(x) = (\alpha_\az x + \beta_\az )^{-2} \end{align} where $\alpha_\az, \beta_\az$ are constants scaling to allow $\int_{[0,1]} \, dp = 1$ (i.e. $p$ is a valid probability density) and $\int_{[0,1]} x \, dp = 1/\az$ (i.e. $\bbE_{X \sim p}[X] = 1/\az$ so $\az$ copies of it are expected to sum to $1$). Thus, the minimax compander on the simplex for $L_1$ loss (and letting $\gamma_\az = \alpha_\az/\beta_\az$) satisfies \begin{align} \comp_{L_1, \az} ' (x) \, &\propto \, (\alpha_\az x + \beta_\az)^{-1} \\ \implies \comp_{L_1, \az} (x) \, &\propto \, \log((\alpha_\az/\beta_\az) x + 1) \\ \implies \comp_{L_1, \az} (x) &= \frac{\log(\gamma_\az x + 1)}{\log(\gamma_\az + 1)} \end{align} since $\comp_{L_1, \az} (x)$ has to be scaled to go from $0$ to $1$. The asymptotic $L_1$ loss for probability vector $\bx$ and its quantization $\bnormvar$ is bounded by \begin{align} \lim_{N \to \infty} N L_1 (\bx, \bnormvar) =O(\log \az) \,. \end{align} \renewcommand{\arraystretch}{1} \begin{figure*}[h!] \centering \begin{tabular}{|c|c|l|c|} \hline Loss & Space & Optimal Compander & Asymptotic Upper Bound \\ \hline KL & Simplex & $ \comp^*_\az(x) = \frac{\arcsinh(\sqrt{c_\az (\az \log \az) \, x})}{\arcsinh(\sqrt{c_\az \az \log \az})}$ & $N^{-2} \log^2 \az $ \\ $L_2^2$ & Simplex & $ f_{L_2^2, \az}(x) = \frac{\sqrt{1 + \az(\az-2)x} - 1}{\az-2}$ & $N^{-2}$ \\ $L_2^2$ & Hypercube & $ f_{L_2^2}(x) = x$ (uniform quantizer) & $N^{-2} \az $ \\ $L_1~ (TV)$ & Simplex & $f_{L_1, \az}(x) = \frac{\log(\gamma_\az x + 1)}{\log(\gamma_\az + 1)}$ & $N^{-1} \log \az$ \\ $L_1~ (TV) $ & Hypercube & $f_{L_1}(x) = x$ (uniform quantizer) & $N^{-1} \az$ \\ \hline \end{tabular} \caption{Summary of results for various losses and spaces. 
Asymptotic Upper Bound is an upper bound on how we expect the loss of the optimal compander to scale with $N$ and $\az$ (constant terms are neglected). } \label{tab::other_losses_spaces} \end{figure*} \section{Connection to Information Distillation} \label{sec::info_distillation_main} It turns out that the general problem of quantizing the simplex under the \textit{average} KL divergence loss, as defined in~\eqref{eq::def_loss}, is equivalent to the recently introduced problem of \emph{information distillation}. Information distillation has a number of applications, including in constructing polar codes~\cite{bhatt_distilling2021,tal2015}. In this section we establish this equivalence and also demonstrate how the compander-based solutions to the KL-quantization can lead to rather simple and efficient information distillers. \vspace{-0.5pc} \subsection{Information Distillation} In the information distillation problem we have two random variables $A \in \cA$ and $B \in \cB$, where $|\cA| = \az$ (and $\cB$ can be finite or infinite) under joint distribution $P_{A,B}$ with marginals $P_A, P_B$. The goal is, given some finite $M < |\cB|$, to find an \emph{information distiller} (which we will also refer to as a \emph{distiller}): a (deterministic) function $h : \cB \to [M]$ that minimizes the information loss \begin{align} \label{eq::inf-distillation} I(A;B) - I(A;h(B))\end{align} associated with quantizing $B \to h(B)$. The interpretation here is that $B$ is a (high-dimensional) noisy observation of some important random variable $A$ and we want to record observation $B$, but only have $\log_2 M$ bits to do so. The optimal $h$ minimizes the additive loss entailed by this quantization of $B$. To quantify the amount of loss incurred by this quantization, we use the \emph{degrading cost}~\cite{tal2015,bhatt_distilling2021} \begin{align} \mathrm{DC}(\az, M) = \sup_{P_{A,B}} \inf_{h} I(A; B) - I(A;h(B))\,.
\end{align} Note that in supremizing over $P_{A,B}$ there is no restriction on $\cB$, only on $|\cA|$ and the size of the range of $h$. It has been shown in \cite{tal2015} that there is a $P_{A,B}$ such that \begin{align} \inf_{h} I(A; B) - I(A;h(B)) = \Omega(M^{-2/(\az - 1)}) \end{align} giving a lower bound to $\mathrm{DC}(\az, M) $. For an upper bound, \cite{Kartowsky2017} showed that if $2\az < M < |\cB|$, then \begin{align} \mathrm{DC}(\az, M) = O(M^{-2/(\az - 1)})\,. \end{align} Specifically, $\mathrm{DC}(\az, M) \leq \nu(\az) M^{-2/(\az - 1)}$ where $\nu(\az) \approx 16 \pi e \az^2$ for large $\az$. While~\cite{bhatt_distilling2021} focused on multiplicative loss, their work also implied an improved bound on the additive loss as well; namely, for all $\az \geq 2$ and $M^{1/(\az - 1)} \geq 4$, we have \begin{align} \mathrm{DC}(\az, M) \leq 1268 (\az - 1) M^{-2/(\az - 1)}\,.\label{eq::distilling_result} \end{align} \subsection{Info Distillation Upper Bounds Via Companders} Using our KL divergence quantization bounds, we will show an upper bound to $\mathrm{DC}(\az, M)$ which improves on \eqref{eq::distilling_result} for $\az$ that is not too small and for $M$ that is not exceptionally large. First, we establish the relation between the two problems: \begin{proposition} \label{prop:infodist} For every $P_{A,B}$ define a random variable $\bX \in \triangle_{\az-1}$ by setting $X_a = P[A=a \, | \, B]$. Then, for every information distiller $h: \cB \to [M]$ there is a vector quantizer $\bnormvar: \triangle_{\az-1} \to \triangle_{\az-1}$ with range of cardinality $M$ such that \begin{align}\label{eq::id_kl} \hspace{-0.5pc} I(A;B) - I(A;h(B)) \geq \bbE[D_{\kl}(\bX \| \bnormvar(\bX))]\,. \end{align} Conversely, for any vector quantizer $\bnormvar$ there exists a distiller $h$ such that \begin{align}\label{eq::id_kl2} I(A;B) - I(A;h(B)) \leq \bbE[D_{\kl}(\bX \| \bnormvar(\bX))]\,.
\end{align} \end{proposition} The inequalities in \Cref{prop:infodist} can be replaced by equalities if the distiller $h$ and the quantizer $\bnormvar$ avoid certain trivial inefficiencies. If they do so, there is a clean `equivalent' quantizer $\bnormvar$ for any distiller $h$, and vice versa, which preserves the expected loss. This equivalence and \Cref{prop:infodist} are shown in \Cref{sec::info_distillation_detail}. Thus, we can use KL quantizers to bound the degrading cost above (see \Cref{sec::info_distillation_detail} for details): \begin{align} \mathrm{DC}(\az, M) &= \sup_{P_{A,B}} \inf_{h} I(A; B) - I(A;h(B)) \\ &= \sup_{P} \inf_{\bnormvar} \bbE_{\bX \sim P} [D_{\kl}(\bX \| \bnormVar)] \label{eq::infg_supP_KL_symPrior} \\&\leq \inf_{\bnormvar} \sup_{P} \bbE_{\bX \sim P} [D_{\kl}(\bX \| \bnormVar)]\,. \label{eq::infg_supP_KL} \end{align} We then use the approximate minimax compander results to give an upper bound to \eqref{eq::infg_supP_KL}. This yields: \begin{proposition}\label{prop::compander_worst_case_for_distilling} For any $\az \geq 5$ and $M^{1/\az} > \lceil 8 \log (2 \sqrt{\az \log \az} + 1) \rceil$ \begin{align} \label{eq::compander_worst_case_for_distilling} \mathrm{DC}(\az, M) \hspace{-0.2pc} \leq \hspace{-0.2pc} \bigg( \hspace{-0.2pc} 1 \hspace{-0.2pc} + \hspace{-0.2pc} 18 \frac{\log \log \az}{\log \az} \hspace{-0.1pc} \bigg) M^{-\frac{2}{\az}} \log ^2 \az\,. \end{align} \end{proposition} \begin{proof} Consider the right-hand side of~\eqref{eq::id_kl}. The compander-based quantizer from \Cref{thm::worstcase_power_minimax} gives a guaranteed bound on $D(\bX\|\bnormvar(\bX))$ (with $M = N^\az$ substituted), which also holds in expectation. \end{proof} \begin{remark} Similarly, an upper bound on the divergence covering problem~\cite[Thm 2]{phdthesis} implies \begin{equation}\label{eq::distilling_result2} \mathrm{DC}(\az, M) \le 800 (\log \az) M^{-2/(\az-1)} \,. \end{equation} (This appears to be the best known upper bound on $\mathrm{DC}$.)
The lower bound on the divergence covering, though, does not imply lower bounds on $\mathrm{DC}$, since divergence covering seeks one collection of $M$ points that are good for quantizing any $P$, whereas $\mathrm{DC}$ permits the collection to depend on $P$. For distortion measures that satisfy the triangle inequality, though, we have a provable relationship between the metric entropy and rate-distortion for the least-favorable prior, see~\cite[Section 27.7]{ITbook}. \end{remark} \section{Acknowledgements} We would like to thank Anthony Philippakis for his guidance on the DNA $k$-mer experiments. \bibliographystyle{resources/IEEEbib} \bibliography{ref_compander} \clearpage \appendices \crefalias{section}{appendix} \section*{Appendix Organization} \paragraph*{Appendix~\ref{sec::appendix_proofs_asymptotic}} We fill in the details of the proof of \Cref{thm::asymptotic-normalized-expdiv}. \paragraph*{Appendix~\ref{sec::approximate-compander-f-dagger}} We prove \Cref{prop::approximate-compander}. \paragraph*{Appendix~\ref{sec::appendix_other_companders}} We develop and analyze other types of companders, specifically \emph{beta companders}, which are optimized to quantize vectors from Dirichlet priors (\Cref{sec::beta_companding}), and \emph{power companders}, which have the form $f(x) = x^s$ and have properties similar to the minimax compander (\Cref{sec::power_compander_analysis}). Supplemental experimental results are also provided. \paragraph*{Appendix~\ref{sec::appendix_minimax}} We analyze the minimax compander and approximate minimax compander more deeply, showing that $c_\az \in [1/4, 3/4]$ (\Cref{sec::minimax_const_analysis}) and $\lim_{\az \to \infty} c_\az = 1/2$ (\Cref{sec::lim_c_k}), and show that when $c_\az \approx 1/2$, the approximate minimax compander performs similarly to the minimax compander against all priors $p \in \cP$ (\Cref{sec::proof_L_appx_minimax_compander}). Supplemental experimental results are also provided. 
\paragraph*{Appendix~\ref{sec::worst-case_analysis}} We prove \Cref{thm::worstcase_power_minimax}, showing bounds on the worst-case loss (adversarially selected $\bx$, rather than from a prior) for the power, minimax, and approximate minimax companders. \paragraph*{Appendix~\ref{sec::info_distillation_detail}} We discuss the connection to information distillation in detail. \section{Asymptotic Single-Letter Loss Proofs} \label{sec::appendix_proofs_asymptotic} In this appendix, we give all the proofs necessary for \Cref{thm::asymptotic-normalized-expdiv}, whose proof outline was discussed in \Cref{sec::asymt_single}. We begin with notation in \Cref{sec::locloss_notation}. In \Cref{sec::locloss-convergence-preliminaries}, we give some preliminaries for showing \Cref{prop::locloss-convergence} (which shows that the local loss functions $g_N$ converge to the asymptotic local loss function $g$ a.s. when the input $X$ is distributed according to $p \in \cP$). In \Cref{sec::locloss-convergence-pf}, we give the proof of \Cref{prop::locloss-convergence}. In \Cref{sec::locloss-dominating-proof}, we give the proof of \Cref{prop::locloss-dominating} (which shows the existence of an integrable $h$ dominating $g_N$ when the compander $f$ is from the `well-behaved' set $\cF^\dagger$). In order to focus on the main ideas, some of the more minor details needed for \Cref{prop::locloss-convergence} and \Cref{prop::locloss-dominating} are omitted and left for later sections. 
We fill in the details on the lemmas and propositions used in the proof of \Cref{prop::locloss-convergence}, including proofs for all results from \Cref{sec::locloss-convergence-preliminaries} (specifically \Cref{lem::asymptotic-interval-sizes,lem::increase-interval-loss-positive} and \Cref{prop::uniform-at-small-scales,prop::divergence-of-close-distributions,prop::loss-under-uniform-dist}) in \Cref{sec::proof_asymptotic-interval-sizes,sec::proof_uniform-at-small-scales,sec::proof_divergence-of-close-distributions,sec::proof_loss-under-uniform-dist,sec::additional_lemmas}. We then fill in the details of the lemmas for the proof of \Cref{prop::locloss-dominating}, specifically \Cref{lem::interval-loss-ub,lem::dominating-companders}. \subsection{Notation} \label{sec::locloss_notation} Given probability distribution $p$ and interval $I$, $p|_I$ denotes $p$ restricted to $I$, i.e. $X \sim p|_I$ is the same as $X \sim p$ conditioned on $X \in I$. We also define the probability mass of $I$ under $p$ as $\pi_{p,I} = \bbP_{X \sim p}[ X \in I]$. If $\pi_{p,I} = 0$, we let $p|_I$ be uniform on $I$ by default. Given two probability distributions $p, q$ (over the same domain), their \emph{Kolmogorov-Smirnov distance} (KS distance) is \begin{align} \hspace{-0.2pc} d_{KS}(p,q) \hspace{-0.2pc} = \hspace{-0.2pc} \norm{F_p - F_q}_{\infty} \hspace{-0.2pc} = \hspace{-0.2pc} \sup_x |F_p(x) - F_q(x)| \label{eq::infty-norm} \end{align} (recall that $F_p, F_q$ are the CDFs of $p,q$). We use standard order-of-growth notation (which is also used in \Cref{sec::main-theorems}). We review these definitions here for clarity, especially as we will use some of the rarer concepts (in particular, small-$\omega$). For a parameter $t$ and functions $a(t), b(t)$, we say: \begin{align} a(t) = O(b(t)) &\iff \limsup_{t \to \infty} |a(t)/b(t)| < \infty \\ a(t) = \Omega(b(t)) &\iff \liminf_{t \to \infty} |a(t)/b(t)| > 0 \\ a(t) = \Theta(b(t)) &\iff a(t) = O(b(t)), \, a(t) = \Omega(b(t))\,.
\end{align} We use small-$o$ notation to denote the strict versions of these: \begin{align} a(t) = o(b(t)) &\iff \lim_{t \to \infty} |a(t)/b(t)| = 0 \\ a(t) = \omega(b(t)) &\iff \lim_{t \to \infty} |a(t)/b(t)| = \infty\,. \end{align} Sometimes we will want to indicate order-of-growth as $t \to 0$ instead of $t \to \infty$; this will be explicitly mentioned in that case. \subsection{Preliminaries for \Cref{prop::locloss-convergence}} \label{sec::locloss-convergence-preliminaries} We first generalize the idea of \emph{bins}. The bin around $x \in [0,1]$ at granularity $N$ is the interval $I = I^{(n)}$ containing $x$ such that $f(I) = [(n-1)/N, n/N]$ for some $n \in [N]$. This notion inherently relies on the integers $n$ and $N$. We remove the dependence on integers while keeping the basic structure (an interval $I$ about $x$ whose image $f(I)$ is a given size): \begin{definition} \label{def::pseudobin} For any $x \in [0,1]$, $\theta \in [0,1]$, and $\varepsilon > 0$, we define the \emph{pseudo-bin} $I^{(x, \theta, \varepsilon)}$ as the interval satisfying: \begin{align} \hspace{-1pc} I^{(x,\theta,\varepsilon)} &= [x - \theta r^{(x, \theta, \varepsilon)}, x + (1-\theta) r^{(x, \theta, \varepsilon)}] \text{ where } \\ \hspace{-0.85pc} r^{(x, \theta, \varepsilon)} &= \inf \big\{r : f(x + (1-\theta) r) - f(x - \theta r) \geq \varepsilon \big\} \label{eq::defn-of-r}\,. \end{align} \end{definition} The interpretation of this is that $I^{(x,\theta,\varepsilon)}$ is the minimal interval containing $x$ such that $|f(I^{(x,\theta,\varepsilon)})| \geq \varepsilon$ and such that $x$ occurs at $\theta$ within $I^{(x,\theta,\varepsilon)}$, i.e. a $\theta$ fraction of $I^{(x,\theta,\varepsilon)}$ falls below $x$ and $1 - \theta$ falls above. Its width is $r^{(x,\theta,\varepsilon)}$. This implies that bins are a special type of pseudo-bin.
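A short numerical illustration of \Cref{def::pseudobin} (a sketch under the assumption $f(x) = x^{1/2}$; the bisection and the clamping of arguments to $[0,1]$ are implementation details of ours): computing $r^{(x,\theta,\varepsilon)}$ for shrinking $\varepsilon$ exhibits the limit $\varepsilon^{-1} r^{(x,\theta,\varepsilon)} \to f'(x)^{-1}$, uniformly over $\theta$.

```python
import math

f = math.sqrt  # example compander f(x) = x^{1/2}, so f'(x) = 1 / (2 sqrt(x))

def r_pseudobin(x, theta, eps):
    """Width r^{(x, theta, eps)}: the smallest r >= 0 with
    f(x + (1 - theta) r) - f(x - theta r) >= eps, found by bisection
    (arguments clamped to [0, 1], which is irrelevant for small eps)."""
    gap = lambda r: f(min(x + (1 - theta) * r, 1.0)) - f(max(x - theta * r, 0.0))
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if gap(mid) >= eps:
            hi = mid
        else:
            lo = mid
    return hi

x = 0.25
fprime = 1.0 / (2.0 * math.sqrt(x))  # f'(0.25) = 1
for theta in (0.0, 0.3, 1.0):
    for eps in (1e-2, 1e-4):
        # eps^{-1} r^{(x, theta, eps)} approaches 1 / f'(x) as eps -> 0
        assert abs(r_pseudobin(x, theta, eps) / eps - 1.0 / fprime) < 0.05
```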
Specifically, for any $x$ and $N$ (and any compander $f$), \begin{align} I^{(n_N(x))} = I^{(x,\theta,1/N)} \text{ for some } \theta \in [0,1] \,. \end{align} We now consider the size of pseudo-bins as $\varepsilon \to 0$: \begin{lemma} \label{lem::asymptotic-interval-sizes} If $f$ is differentiable at $x$, then \begin{align} \lim_{\varepsilon \to 0} \varepsilon^{-1} r^{(x,\theta,\varepsilon)} = f'(x)^{-1} \end{align} (including going to $\infty$ when $f'(x) = 0$). The limit converges uniformly over $\theta \in [0,1]$. \end{lemma} The proof is given in \Cref{sec::proof_asymptotic-interval-sizes}. Note that applying this to bins means $\lim_{N \to \infty} N r^{(n_N(x))} = f'(x)^{-1}$, and hence when $f'(x) > 0$ we have $r^{(n_N(x))} = N^{-1} f'(x)^{-1} + o(N^{-1})$. For any interval $I$, we want to measure how close $p$ is to uniform over $I$ using the distance measure $d_{KS}(p,q)$ from \eqref{eq::infty-norm}. We will show that when $F'_p(x) = p(x)$ is well-defined and positive at $x$, $p$ is approximately uniform on any sufficiently small interval $I$ around $x$. Formally: \begin{proposition} \label{prop::uniform-at-small-scales} If $p(x) = F'_p(x) > 0$ is well-defined, then for every $\varepsilon > 0$ there is a $\delta > 0$ such that for all intervals $I$ such that $x \in I$ and $r_I \leq \delta$, \begin{align} d_{KS}(p|_I, \unif_I) \leq \varepsilon \, . \end{align} \end{proposition} We give the proof in \Cref{sec::proof_uniform-at-small-scales}. This allows us to use the following: \begin{proposition} \label{prop::divergence-of-close-distributions} Let $p$ be a probability measure and $I$ be an interval containing $x$ such that $r_I \leq x/4$ and $d_{KS}(p|_I,\unif_I) \leq \varepsilon$ where $\varepsilon \leq 1/2$. Then \begin{align} | \ell_{p,I} - \ell_{\unif_I} | \leq 2 \varepsilon r_I^2 x^{-1} + O(r_I^3 x^{-2})\,. 
\end{align} \end{proposition} Recall that $\ell_{p,I}$ is the interval loss of $I$ under distribution $p$ when all points in $I$ are quantized to $\widetilde{y}_{p,I}$, the centroid of $I$ under $p$. We give the proof of \Cref{prop::divergence-of-close-distributions} in \Cref{sec::proof_divergence-of-close-distributions}. \begin{proposition} \label{prop::loss-under-uniform-dist} For any $x > 0$ and any sequence of intervals $I_1, I_2, \dots \subseteq [0,1]$ all containing $x$ such that $r_{I_i} \to 0$ as $i \to \infty$, \begin{align} \ell_{\unif_{I_i}} = \frac{1}{24} r_{I_i}^2 x^{-1} + O(r_{I_i}^3 x^{-2}) \,. \end{align} \end{proposition} The proof is in \Cref{sec::proof_loss-under-uniform-dist}. Note that the above results are all about asymptotic behavior as intervals shrink to $0$ in width; to deal with the (edge) case where they do not, we need the following lemma: \begin{lemma} \label{lem::increase-interval-loss-positive} For any $I$ such that $\bbP_{X \sim p} [X \in I] > 0$, there is some $a_I > 0$ such that \begin{align} \ell_{p,J} \geq a_I \text{ for any } J \supseteq I \,. \end{align} \end{lemma} We give the proof in \Cref{sec::additional_lemmas}. \subsection{Proof of \Cref{prop::locloss-convergence}} \label{sec::locloss-convergence-pf} We now combine the above results to prove \Cref{prop::locloss-convergence}, i.e. that $\lim_{N \to \infty} g_N(X) = g(X)$ almost surely when $X \sim p$. Because $p \in \cP$ (i.e. it is a continuous probability distribution) we will treat the bins as closed sets, i.e. $I^{(n)} = [f^{-1}(\frac{n-1}{N}), f^{-1}(\frac{n}{N})]$; this does not affect anything since the resulting overlap is only a finite set of points. \begin{proof} Since $p \in \cP$, when $X \sim p$ the following hold with probability $1$: \begin{enumerate} \item $0 < X < 1$; \item $f'(X)$ is well-defined; \item $p(X) = F'_p(X)$ is well-defined; \item $p(X) > 0$.
\end{enumerate} This is because if $p \in \cP$, and $|S|$ denotes the Lebesgue measure of set $S$, then \begin{align} |S| = 0 \implies \bbP_{X \sim p} [X \in S] = 0\,. \end{align} This implies (1) since $\{0,1\}$ is measure-$0$. Additionally, by Lebesgue's differentiation theorem for monotone functions, any monotonic function on $[0,1]$ is differentiable almost everywhere on $[0,1]$ (i.e. excluding at most a measure-$0$ set), and compander $f$ and CDF $F_p$ are monotonic. This implies (2) and (3). Finally, (4) follows because the set of $X$ such that $p(X) = 0$ has probability $0$ under $p$ by definition. Therefore, we can fix $X \sim p$ and assume it satisfies the above properties. We now consider the bin size $r_{(n_N(X))}$ as $N \to \infty$; there are two cases: (a) $\lim_{N \to \infty} r_{(n_N(X))} = 0$; and (b) $\limsup_{N \to \infty} r_{(n_N(X))} > 0$. For case (b), which we treat in detail at the end of the proof, the length of the interval does not go to zero and we will show that $g_N(X) = N^2 \ell_{p,(n_N(X))} \to \infty$; additionally, $g(X) = \infty$ by convention since case (b) requires that $f'(X) = 0$, and so $g_N(X) \to g(X)$ as we want. \emph{Case (a):} In this case (which holds for all $X$ if $f \in \cF^\dagger$), for any $\delta > 0$ there is some sufficiently large $N^*$ (which can depend on $X$) such that \begin{align} N \geq N^*\implies r_{(n_N(X))} \leq \delta \, . \end{align} By \Cref{prop::uniform-at-small-scales}, for any $\varepsilon > 0$ there is some $\delta > 0$ such that for all intervals $I$ where $X \in I$ and $r_I \leq \delta$, we have $d_{KS}(p|_I, \unif_I) \leq \varepsilon$. Putting this together implies that for any $\varepsilon > 0$, there is some sufficiently large $N^*_\varepsilon$ such that for all $N \geq N^*_\varepsilon$, \begin{align} d_{KS}(p|_{(n_N(X))}, \unif_{(n_N(X))}) \leq \varepsilon \,, \end{align} i.e. $p$ is $\varepsilon$-close to uniform on $I^{(n_N(X))}$.
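As a purely numerical illustration of this closeness (not part of the proof), one can compute the KS distance between $p|_I$ and $\unif_I$ on shrinking intervals for a concrete density; the density $p(x) = 2x$ on $[0,1]$ (with CDF $F_p(x) = x^2$) and the center point $0.5$ below are arbitrary choices:

```python
# Purely numerical illustration (not part of the proof): for the arbitrary
# density p(x) = 2x on [0,1] (CDF F_p(x) = x^2), the restriction of p to a
# shrinking interval around x0 = 0.5 approaches the uniform distribution
# on that interval in KS distance.
F = lambda t: t * t  # CDF of p

def ks_to_uniform(x0, r, grid=1000):
    a, b = x0 - r / 2, x0 + r / 2
    mass = F(b) - F(a)  # probability mass of the interval under p
    worst = 0.0
    for i in range(grid + 1):
        t = a + r * i / grid
        f_rest = (F(t) - F(a)) / mass  # CDF of p restricted to the interval
        f_unif = (t - a) / r           # CDF of the uniform distribution
        worst = max(worst, abs(f_rest - f_unif))
    return worst

for r in [0.2, 0.02, 0.002]:
    print(r, ks_to_uniform(0.5, r))  # distance shrinks with the width r
```

For this particular density the computed distance works out to roughly $r/4$, i.e. it vanishes as the interval shrinks, consistent with \Cref{prop::uniform-at-small-scales}.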
Furthermore, we can always choose $\varepsilon \leq 1/2$ and $N^*_\varepsilon$ sufficiently large that $r_{(n_N(X))} \leq X/4$ (since $\lim_{N \to \infty} r_{(n_N(X))} = 0$). Under these conditions, for $N > N^*_\varepsilon$ we can apply \Cref{prop::divergence-of-close-distributions} and get \begin{align} |&\ell_{p,(n_N(X))} - \ell_{\unif_{(n_N(X))}}| \nonumber \\ &\leq 2 \varepsilon r_{(n_N(X))}^2 X^{-1} + O(r_{(n_N(X))}^3 X^{-2}) \, . \end{align} Since this holds for every $\varepsilon > 0$, we can let $\varepsilon \to 0$ as $N \to \infty$, i.e. $\varepsilon = o(1)$ as $N \to \infty$, so \begin{align} \label{eq::p-close-to-unif} |\ell_{p,(n_N(X))} - \ell_{\unif_{(n_N(X))}}| = o(r_{(n_N(X))}^2 X^{-1}) \, . \end{align} We then apply \Cref{prop::loss-under-uniform-dist} (note that since $r_{(n_N(X))} \leq X/4$ and $X \leq 2\bar{y}_{(n_N(X))}$, we know automatically that $r_{(n_N(X))} \leq \bar{y}_{(n_N(X))}/2$) to get that \begin{align} \ell_{\unif_{(n_N(X))}} = \frac{1}{24} r_{(n_N(X))}^2 \bar{y}_{(n_N(X))}^{-1} + O(r_{(n_N(X))}^3 X^{-2})\,. \end{align} However, since $X$ is fixed and $r_{(n_N(X))} \to 0$ as $N \to \infty$ (and $|X - \bar{y}_{(n_N(X))}| \leq r_{(n_N(X))}$ since they are both in the bin $I^{(n_N(X))}$), we know that $\bar{y}_{(n_N(X))} = X (1 + o(1))$ where $o(1)$ is in terms of $N$ (as $N \to \infty$). Hence (noting that $(1 + o(1))^{-1}$ is still $1 + o(1)$ and $O(r_{(n_N(X))}^3 X^{-2})$ is $o(1) r_{(n_N(X))}^2 X^{-1}$) we can re-write the above and combine with \eqref{eq::p-close-to-unif} to get \begin{align} \ell_{\unif_{(n_N(X))}} &= \frac{1}{24} (1 + o(1)) r_{(n_N(X))}^2 X^{-1} \\ \implies \ell_{p, (n_N(X))} &= \frac{1}{24} (1 + o(1)) r_{(n_N(X))}^2 X^{-1} \, . \end{align} We now split things into two cases: (i) $f'(X) > 0$; (ii) $f'(X) = 0$. \emph{Case (i) ($f'(X) > 0$):} For all $N$ there is a $\theta \in [0,1]$ such that $I^{(n_N(X))} = I^{(X,\theta,1/N)}$ (bins are pseudo-bins, see \Cref{def::pseudobin}).
Thus, by \Cref{lem::asymptotic-interval-sizes} (which shows uniform convergence over $\theta$), \begin{align} \lim_{N \to \infty} N r_{(n_N(X))} = f'(X)^{-1}\,. \end{align} Thus, we may re-write this as a little-$o$ and plug into $g_N(X)$: \begin{align} r_{(n_N(X))} &= N^{-1} f'(X)^{-1} + o(N^{-1}) \\ &= N^{-1} f'(X)^{-1} (1 + o(1)) \\ \implies g_N(X) &= N^2 \ell_{p, (n_N(X))} \label{eq::gn-to-g-01} \\ &= N^2 \frac{1}{24} (1 + o(1)) r_{(n_N(X))}^2 X^{-1} \label{eq::gn-to-g-02} \\ &= N^2 \frac{1}{24} (1 + o(1)) N^{-2} f'(X)^{-2} X^{-1} \label{eq::gn-to-g-03} \\ &= \frac{1}{24} (1 + o(1)) f'(X)^{-2} X^{-1} \label{eq::gn-to-g-04} \end{align} implying $\lim_{N \to \infty} g_N(X) = g(X)$ as we wanted. \emph{Case (ii) ($f'(X) = 0$):} As before, for any $N$ there is some $\theta \in [0,1]$ such that $I^{(n_N(X))} = I^{(X,\theta,1/N)}$. Thus, by \Cref{lem::asymptotic-interval-sizes} and as $f'(X) = 0$, we have \begin{align} \lim_{N \to \infty} N r_{(n_N(X))} = \infty \end{align} since the convergence in \Cref{lem::asymptotic-interval-sizes} is uniform over $\theta$. We can then re-write this as a little-$\omega$: \begin{align} r_{(n_N(X))} = \omega(N^{-1}) \, . \end{align} This implies that \begin{align} g_N(X) &= N^2 \ell_{p, (n_N(X))} \\ &= N^2 \frac{1}{24} (1 + o(1)) r_{(n_N(X))}^2 X^{-1} \\ &= N^2 \frac{1}{24} (1 + o(1)) \omega(N^{-2}) X^{-1} \\ &= \omega(1) \end{align} where $\omega(1)$ means $\lim_{N \to \infty} g_N(X) = \infty$. But since $f'(X) = 0$, by convention we have $g(X) = \frac{1}{24} f'(X)^{-2} X^{-1} = \infty$ and so $\lim_{N \to \infty} g_N(X) = g(X)$ as we wanted. \emph{Case (b):} $\limsup_{N \to \infty} r_{(n_N(X))} > 0$. Note that this can only happen if $f'(X) = 0$, so $g(X) = \infty$; hence our goal is to show that $\lim_{N \to \infty} g_N(X) = \infty$. Related to the above, this only happens if $f$ is not strictly monotonic at $X$, i.e. if there is some $a < X$ or some $b > X$ such that $f(X) = f(a)$ or $f(X) = f(b)$ (or both).
If both exist, then $[a,b] \subseteq I^{(n_N(X))}$ for all $N$. Since $p(X)$ is well-defined and positive, any nonzero-width interval containing $X$ has positive probability mass under $p$. Thus, by \Cref{lem::increase-interval-loss-positive}, there exists some $\alpha > 0$ such that every $J \supseteq [a,b]$ satisfies $\ell_{p,J} \geq \alpha$. But then $g_N(X) \geq N^2 \alpha$ and goes to $\infty$. If only $a$ exists, we divide the granularities $N$ into two classes: first, $N$ such that $I^{(n_N(X))}$ has lower boundary exactly at $X$ (which can happen only if $f(X)$ is rational), and second, $N$ such that $I^{(n_N(X))}$ has lower boundary below $X$. Call the first class $N^{(1)}(1), N^{(1)}(2), \dots$ and the second $N^{(2)}(1), N^{(2)}(2), \dots$. Then, as no $b$ exists, $\lim_{i \to \infty} r^{(n_{N^{(1)}(i)}(X))} = 0$, i.e. the bins corresponding to the first class shrink to $0$ and the asymptotic argument applies to them, showing $g_{N^{(1)}(i)}(X) \to \infty$. For the second class, for any $i$, we have $I^{(n_{N^{(2)}(i)}(X))} \supseteq [a,X]$ and so we have an $\alpha > 0$ lower bound of the interval loss, and multiplying by $N^2$ takes it to $\infty$. Thus, since both subsequences of $N$ take $g_N(X)$ to $\infty$, the whole sequence does. An analogous argument holds if $b$ exists but not $a$. As this holds for any $X$ under conditions (1)--(4), which hold almost surely, we are done. \end{proof} \subsection{Proof of \Cref{prop::locloss-dominating}} \label{sec::locloss-dominating-proof} To finish our Dominated Convergence Theorem (DCT) argument, we need to prove \Cref{prop::locloss-dominating}, which gives an integrable function $h$ dominating all the local loss functions $g_N$. As with \Cref{prop::locloss-convergence}, we do this in stages. We first define: \begin{definition} For any interval $I$, let \begin{align} \ell^*_I = \sup_q \ell_{q,I} \end{align} where $q$ is a probability distribution over $[0,1]$. If $I = I^{(n)}$ we can denote this as $\ell^*_{(n)}$.
\end{definition} Since $\ell_{q,I}$ is only affected by $q|_I$ (i.e. what $q$ does outside of $I$ is irrelevant), we can restrict $q$ to be a probability distribution over $I$ without affecting the value of $\ell^*_I$. The question is thus: what is the maximum single-interval loss which can be produced on interval $I$? Then, we can use the upper bound \begin{align} \label{eq::locloss-upper-bound-1} g_N(x) = N^2 \ell_{p, (n_N(x))} \leq N^2 \ell^*_{(n_N(x))} \,. \end{align} This has the benefit of simplifying the term by removing $p$. We now bound $\ell^*_I$: \begin{lemma} \label{lem::interval-loss-ub} For any interval $I$, $\ell^*_I \leq \frac{1}{2} r_I^2 \bar{y}_I^{-1}$. \end{lemma} We give the proof in \Cref{sec::proof_interval-loss-ub}. We can then combine the above result with \eqref{eq::locloss-upper-bound-1} in order to obtain \begin{align} \label{eq::locloss-upper-bound-2} g_N(x) \leq N^2 \ell^*_{(n_N(x))} \leq N^2 \frac{1}{2} r_{(n_N(x))}^2 \bar{y}_{(n_N(x))}^{-1}\,. \end{align} However, this is hard to use directly, as the boundaries of $I^{(n_N(x))}$ in relation to $x$ are inconvenient. Instead, we use an interval which is `centered' at $x$ in some way, with the help of the following: \begin{lemma} \label{lem::ell-star} If $I \subseteq I'$, then $\ell^*_I \leq \ell^*_{I'}$. \end{lemma} \begin{proof} This follows as any $q$ over $I$ is also a distribution over $I'$ (giving $0$ probability to $I'\backslash I$). \end{proof} Thus, if we can find some interval $J$ such that $I^{(n_N(x))} \subseteq J$ (but of the right size) and which has more convenient boundaries, we can use that instead. We define: \begin{definition} For compander $f$ at scale $N$ and $x \in [0,1]$, define the interval \begin{align} J^{f,N,x} = f^{-1} \Big(\Big[f(x) - \frac{1}{N}, f(x) + \frac{1}{N}\Big] \cap [0,1] \Big)\,.
\end{align} \end{definition} As mentioned, we want this because it contains $I^{(n_N(x))}$: \begin{lemma} \label{lem::bin-ub} For any strictly monotonic $f$ and integer $N$, \begin{align} I^{(n_N(x))} \subseteq J^{f,N,x}\,. \end{align} \end{lemma} \begin{proof} Since $f$ is strictly monotonic, it has a well-defined inverse $f^{-1}$. By definition, the bin $I^{(n_N(x))}$, when passed through the compander $f$, maps to $[\frac{n-1}{N}, \frac{n}{N}]$ (with $n = n_N(x)$), i.e. \begin{align} f(I^{(n_N(x))}) = \Big[\frac{n-1}{N}, \frac{n}{N}\Big] \, . \end{align} Note that this interval has width $1/N$, contains $f(x)$, and (by definition) lies in $[0,1]$. Hence, \begin{align} &f(I^{(n_N(x))}) \subseteq \Big[f(x) - \frac{1}{N}, f(x) + \frac{1}{N}\Big] \cap [0,1] \\ &\implies f(I^{(n_N(x))}) \subseteq f(J^{f,N,x}) \\ &\implies I^{(n_N(x))} \subseteq J^{f,N,x} \end{align} and we are done. \end{proof} Now we can consider the importance of $f \in \cF^\dagger$: since $f$ dominates a monomial $c x^\alpha$ (i.e. $f - cx^\alpha$ is monotonically increasing), we can `upper bound' the interval $J^{f,N,x}$ by the equivalent interval with the compander $f_*(x) = c x^\alpha$ (i.e. $J^{f,N,x} \subseteq J^{f_*,N,x}$), which is then much nicer to work with.\footnote{While $f_*(x)$ may not map to all of $[0,1]$, it is a valid compander (but sub-optimal as it only uses some of the $N$ labels).} This also guarantees that $f$ is strictly monotonic. \begin{lemma} \label{lem::dominating-companders} If $f_1, f_2 \in \cF$ are strictly monotonic increasing companders such that $f_2 - f_1$ is also monotonically increasing (not necessarily strictly) and $f_1(0) = 0$, then for any $x \in [0,1]$ and $N$, \begin{align} J^{f_2, N, x} \subseteq J^{f_1, N, x}\,. \end{align} \end{lemma} The proof is given in \Cref{sec::proof_dominating-companders}.
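As a purely numerical sanity check of this containment (illustrative only; the two companders below are arbitrary choices satisfying the hypotheses of \Cref{lem::dominating-companders}, not ones used elsewhere in the paper), one can compute $J^{f,N,x}$ by inverting $f$ with bisection:

```python
# Numerical sanity check (illustration only) of the containment
# J^{f2,N,x} inside J^{f1,N,x}: f1(t) = t/2 and f2(t) = (t + t^2)/2 are
# arbitrary strictly increasing companders with f2 - f1 = t^2/2
# monotonically increasing and f1(0) = 0, so the lemma should apply.
def inverse(f, y):
    # bisection inverse of a strictly increasing f on [0, 1]
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def J(f, N, x):
    # J^{f,N,x} = f^{-1}([f(x) - 1/N, f(x) + 1/N] clipped to f's range)
    lo = max(f(x) - 1.0 / N, f(0.0))
    hi = min(f(x) + 1.0 / N, f(1.0))
    return inverse(f, lo), inverse(f, hi)

f1 = lambda t: t / 2
f2 = lambda t: (t + t * t) / 2

a1, b1 = J(f1, 10, 0.3)  # approximately [0.1, 0.5]
a2, b2 = J(f2, 10, 0.3)
assert a1 <= a2 and b2 <= b1  # J^{f2,N,x} is contained in J^{f1,N,x}
```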
Finally, we need a quick lemma guaranteeing that if $f \in \cF^\dagger$, the function $g(x) = \frac{1}{24} f'(x)^{-2} x^{-1}$ is integrable under any distribution $p$: \begin{lemma}\label{lem::h_finite} Let $f \in \cF^\dagger$, and let $g(x) = \frac{1}{24} f'(x)^{-2} x^{-1}$. Then for any probability distribution $p$ over $[0,1]$, \begin{align} \int_{[0,1]} g \, dp < \infty \,. \end{align} \end{lemma} \begin{proof} If $f \in \cF^\dagger$, then there is some $c > 0$ and $\alpha \in (0,1/2]$ such that $f(x) - cx^\alpha$ is monotonically increasing. Thus (whenever it is well-defined, which is almost everywhere by Lebesgue's differentiation theorem for monotone functions) we have $f'(x) \geq c \alpha x^{\alpha-1}$ and since $\alpha \in (0,1/2]$, we have $1 - 2\alpha \geq 0$. Thus, for all $x \in [0,1]$, \begin{align} 0 \leq g(x) \leq \frac{1}{24} c^{-2} \alpha^{-2} x^{1 - 2\alpha} \leq \frac{1}{24} c^{-2} \alpha^{-2} \end{align} which of course implies that $\int_{[0,1]} g \, dp < \infty$. \end{proof} We can now prove \Cref{prop::locloss-dominating}, which will complete the proof of \Cref{thm::asymptotic-normalized-expdiv}. \begin{proof}[Proof of \Cref{prop::locloss-dominating}] As before, let $f_*(x) = c x^\alpha$; thus $f_*(0) = 0$ so we can apply \Cref{lem::dominating-companders}. We begin, as outlined in \eqref{eq::locloss-upper-bound-2}, with: \begin{align} g_N(x) &= N^2 \ell_{p,(n_N(x))} \\ &\leq N^2 \ell^*_{(n_N(x))} \label{eq::1st-ineq} \\ &\leq N^2 \ell^*_{J^{f,N,x}} \label{eq::2nd-ineq} \\ &\leq N^2 \ell^*_{J^{f_*,N,x}} \label{eq::3rd-ineq} \end{align} where \eqref{eq::1st-ineq} follows from the definition of $\ell^*_I$; \eqref{eq::2nd-ineq} follows from \Cref{lem::ell-star,lem::bin-ub}; and \eqref{eq::3rd-ineq} follows from \Cref{lem::dominating-companders}. However, since $f_*(x) = c x^\alpha$, we have a specific formula we can work with.
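Before exploiting the closed form of $f_*$, here is a small numerical check of the bound in \Cref{lem::h_finite} (illustrative only, with the arbitrary choice $c = 1$, $\alpha = 1/2$, i.e. $f_*(x) = \sqrt{x}$, for which $g$ is exactly the constant $\frac{1}{6} = \frac{1}{24} c^{-2} \alpha^{-2}$):

```python
# Small numerical check (illustration only) of the bound from the lemma:
# for the monomial compander f(x) = c x^alpha with the arbitrary choice
# c = 1, alpha = 1/2, the function g(x) = f'(x)^(-2) x^(-1) / 24 is the
# constant 1/6, which matches the bound c^(-2) alpha^(-2) / 24 exactly.
c, alpha = 1.0, 0.5

def fprime(x):
    return c * alpha * x ** (alpha - 1)

def g(x):
    return fprime(x) ** (-2) * x ** (-1) / 24

bound = c ** (-2) * alpha ** (-2) / 24
for x in [0.1, 0.5, 0.9]:
    assert abs(g(x) - 1 / 6) < 1e-12
    assert g(x) <= bound + 1e-12
```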
We have $f'_*(x) = \alpha c x^{\alpha-1}$ and $f_*^{-1}(\newvar) = (\newvar/c)^{1/\alpha} = c^{-1/\alpha} \newvar^{1/\alpha}$. Note that this means we can re-write \begin{align} h(x) = (2^{2/\alpha} + \alpha^2 2^{1/\alpha - 2}) f'_*(x)^{-2} x^{-1} + c^{-1/\alpha} 2^{1/\alpha - 2} \end{align} which sheds some light on the structure of $h(x)$. Using \Cref{lem::h_finite} proves that $\int_{[0,1]} h \, dp$ is finite if $f \in \cF^\dagger$, which occurs when $\alpha \leq 1/2$. Fix a value of $x$. Let $r_N(x)$ be the width of $J^{f_*, N, x}$. We consider two cases: (i) $cx^{\alpha} < 1/N$; and (ii) $cx^{\alpha} \geq 1/N$. \emph{Case (i):} This implies $f_*(J^{f_*, N, x}) \subseteq [0, 2/N]$ so \begin{align} x &< c^{-1/\alpha} N ^{-1/\alpha} \\ \implies r_N(x) &\leq c^{-1/\alpha} (N/2) ^{-1/\alpha}\,. \end{align} Then, as $J^{f_*, N, x}$ has lower boundary $0$ in this case, $\bar{y}_{J^{f_*,N,x}} = r_N(x)/2$. Thus, using \eqref{eq::3rd-ineq} and \Cref{lem::interval-loss-ub}, \begin{align} g_N(x) &\leq N^2 \frac{1}{2} r_N(x)^2 \bar{y}_{J^{f_*,N,x}}^{-1} \\ &\leq c^{-1/\alpha} 2^{-1/\alpha} N ^{-1/\alpha + 2} \,. \end{align} If $\alpha \leq 1/2$, then $N ^{-1/\alpha + 2}$ is maximized at $N = 1$, and thus \begin{align} g_N(x) \leq c^{-1/\alpha} 2^{-1/\alpha}\,. \end{align} If $\alpha > 1/2$, the value $N ^{-1/\alpha + 2}$ is maximized for the largest possible $N$ still satisfying Case (i). Since $cx^{\alpha} < 1/N$, this implies that $N < c^{-1} x^{-\alpha}$. Then, \begin{align} g_N(x) &\leq c^{-1/\alpha} ( c^{-1} x^{-\alpha})^{-1/\alpha + 2} 2^{-1/\alpha} \\ & = c ^{-2 } x ^{1-2\alpha } 2^{-1/\alpha} \\& = \alpha^2 (c\alpha x^{\alpha- 1})^{-2} x^{-1} 2^{-1/\alpha} \\ & = \alpha^2 f_*'(x)^{-2} x^{-1} 2^{-1/\alpha} \,. \end{align} Thus, for Case (i) we have that for any $\alpha \in (0, 1]$, \begin{align} g_N(x) &\leq \alpha^2 f_*'(x)^{-2} x^{-1} 2^{-1/\alpha} + c^{-1/\alpha}2^{-1/\alpha}\,.
\end{align} \emph{Case (ii):} When $cx^{\alpha} \geq 1/N$, since $x \in I \implies \bar{y}_I \geq x/2$ (the midpoint of an interval cannot be less than half the largest element of the interval), we can upper-bound $g_N(x)$ (using \eqref{eq::3rd-ineq} and \Cref{lem::interval-loss-ub}) by \begin{align} \label{eq::4th-ineq} g_N(x) \leq N^2 \frac{1}{2} r_N(x)^2 \bar{y}^{-1}_{J^{f_*,N,x}} \leq N^2 r_N(x)^2 x^{-1}\,. \end{align} We then bound $r_N(x)$ using the Fundamental Theorem of Calculus: since $f$ is monotonically increasing, for any $a \leq b$, \begin{align} \int_a^b f'(t) \, dt \leq f(b) - f(a) \end{align} (any discontinuities can only make $f$ increase faster). Additionally $r_N(x) = b_1 - a_1$ where $f(b_1) = \min(f(x) + 1/N,1)$ and $f(a_1) = f(x) - 1/N$ (in Case (ii) we know $f(x) - 1/N \geq f_*(x) - 1/N \geq 0$, and since $f \in \cF^\dagger$ is strictly monotonic, $a_1$ and $b_1$ are unique). Thus, if we define $a_2, b_2$ such that \begin{align} \int_{a_2}^x f'(t) \, dt = 1/N \text{ and } \int_x^{b_2} f'(t) \, dt = 1/N \end{align} (or $a_2 = 0$ or $b_2 = 1$ if they exceed the $[0,1]$ bounds) we have $r_N(x) \leq b_2 - a_2$. Then, because $f - f_*$ is monotonically increasing, we can define $a_3, b_3$ where \begin{align} \int_{a_3}^x f'_*(t) \, dt = 1/N \text{ and } \int_x^{b_3} f'_*(t) \, dt = 1/N \end{align} and get that $r_N(x) \leq b_3 - a_3$ (also allowing $b_3 \geq 1$ if necessary).
This yields: \begin{align} &r_N(x) \leq \int_{\max(0, cx^\alpha-1/N)}^{\min(1, cx^\alpha+1/N)} (f^{-1}_*)'(\newvar) \, d\newvar \\ &= c^{-1/\alpha}\int_{\max(0, cx^\alpha-1/N)}^{\min(1, cx^\alpha+1/N)} \alpha^{-1} \newvar^{1/\alpha-1} \, d\newvar \\ &\leq c^{-1/\alpha}\int_{\max(0,cx^\alpha-1/N)}^{\min(1, cx^\alpha+1/N)} \alpha^{-1} (cx^\alpha+1/N)^{1/\alpha-1} \, d\newvar \\ &\leq c^{-1/\alpha}\int_{cx^\alpha-1/N}^{cx^\alpha+1/N} \alpha^{-1} (cx^\alpha+1/N)^{1/\alpha-1} \, d\newvar \\ &= (2/N) c^{-1/\alpha} \alpha^{-1} (cx^\alpha+1/N)^{1/\alpha-1} \end{align} \begin{align} \implies r_N(x) &\leq (2/N) c^{-1/\alpha} \alpha^{-1} (cx^\alpha+1/N)^{1/\alpha-1} \\ &\leq 2 N^{-1} c^{-1/\alpha} \alpha^{-1} (2 cx^\alpha)^{1/\alpha-1} \\ &= N^{-1} c^{-1/\alpha} \alpha^{-1} 2^{1/\alpha} (c x^\alpha)^{1/\alpha - 1} \\ &= 2^{1/\alpha} N^{-1} \big(c^{-1} \alpha^{-1} x^{1-\alpha}\big) \\ &= 2^{1/\alpha} N^{-1} f'_*(x)^{-1}\,. \end{align} Thus, we can incorporate this into our bound \eqref{eq::4th-ineq} \begin{align} g_N(x) &\leq N^2 r_N(x)^2 x^{-1} \\& \leq 2^{2/\alpha} f'_*(x)^{-2} x^{-1} \,. \end{align} So, $h(x)$, as the sum of the two cases, upper bounds $g_N(x)$ no matter what. We can also note that if $\alpha \leq 1/2$, then $x^{1-2 \alpha} \leq 1$ and hence we can upper-bound $h$ by a constant. Thus $\int_{[0,1]} h \, dp = \bbE_{X \sim p} [h(X)] < \infty$ trivially, for any $p$, and we are done. \end{proof} This completes the proof of \eqref{eq::norm_loss} in \Cref{thm::asymptotic-normalized-expdiv}. \subsection{Proof of \Cref{lem::asymptotic-interval-sizes}} \label{sec::proof_asymptotic-interval-sizes} \begin{proof} Note that for fixed $\theta$ and $x$, $r^{(x,\theta,\varepsilon)}$ is nonnegative and monotonically decreases as $\varepsilon$ decreases. Thus $\lim_{\varepsilon \to 0} r^{(x,\theta,\varepsilon)} \geq 0$ is well-defined. We first assume that $\lim_{\varepsilon \to 0} r^{(x,\theta,\varepsilon)} = 0$ for all $\theta \in [0,1]$.
Let $s_\theta(r)$ be defined as \begin{align} s_\theta(r) := \frac{ f(x + (1-\theta)r) - f(x - \theta r)}{r}\,. \end{align} We want to show that $\lim_{r \to 0} s_\theta(r) = f'(x)$ for all $\theta \in [0,1]$, and that this limit is uniform over $\theta \in [0,1]$. For $\theta \in \{0, 1\}$ we get respectively the right and left derivatives and since $f$ is differentiable at $x$ we are done for those cases. For $\theta \in (0,1)$ we write: \begin{align} s_\theta(r) &= \frac{ f(x + (1-\theta)r) - f(x - \theta r)}{r} \\ &= \frac{f(x + (1-\theta)r) - f(x)}{r} \eqlinebreakshort+ \frac{f(x) - f(x - \theta r)}{r} \\ &= (1-\theta)\frac{f(x + (1-\theta)r) - f(x)}{(1-\theta)r} \eqlinebreakshort+ \theta\frac{f(x - \theta r)-f(x)}{-\theta r} \,. \end{align} This implies \begin{align} \lim_{r \to 0} s_\theta(r) &= \lim_{r \to 0} \bigg( (1-\theta)\frac{f(x + (1-\theta)r) - f(x)}{(1-\theta)r} \eqlinebreakshort+ \theta\frac{f(x - \theta r)-f(x)}{-\theta r} \bigg) \\ &= (1-\theta) f'(x) + \theta f'(x) = f'(x)\,. \end{align} Furthermore we note that the convergence is uniform over $\theta \in [0,1]$. This is because for any $\alpha > 0$, there is a $\delta > 0$ such that for $|r| \leq \delta$, \begin{align} \bigg| \frac{f(x+r) - f(x)}{r} - f'(x) \bigg| \leq \alpha\,. \end{align} But $|r| \leq \delta \implies |-\theta r| \leq \delta$ and $|(1-\theta)r| \leq \delta$. Thus, \begin{align} \eqstartnonumshort|s_\theta(r) - f'(x)| \eqbreakshort &= \bigg|(1-\theta)\frac{f(x + (1-\theta)r) - f(x)}{(1-\theta)r} \eqlinebreakshort+ \theta\frac{f(x - \theta r)-f(x)}{-\theta r} - f'(x) \bigg| \\ &\leq \bigg| (1-\theta)\frac{f(x + (1-\theta)r) - f(x)}{(1-\theta)r} - (1-\theta) f'(x) \bigg| \eqlinebreakshort+ \bigg| \theta\frac{f(x - \theta r)-f(x)}{-\theta r} - \theta f'(x) \bigg| \\ &\leq (1-\theta) \alpha + \theta \alpha \\ &= \alpha\,. \end{align} Thus we have uniform convergence of $s_\theta(r)$ to $f'(x)$ over all $\theta \in [0,1]$ as $r \to 0$. 
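As a purely numerical illustration (not part of the proof) of this uniform convergence, consider the arbitrary smooth choice $f(t) = t^2$ at $x = 0.5$, where $f'(x) = 2x = 1$; here $s_\theta(r) = 2x + (1-2\theta)r$, so the worst-case deviation from $f'(x)$ over $\theta \in [0,1]$ is exactly $|r|$:

```python
# Purely numerical illustration (not part of the proof): for the smooth
# compander f(t) = t^2 at x = 0.5 (so f'(x) = 2x = 1), the slope
# s_theta(r) = (f(x + (1-theta) r) - f(x - theta r)) / r equals
# 1 + (1 - 2*theta) r, so its worst-case deviation from f'(x) over
# theta in [0,1] is exactly |r|, shrinking uniformly as r -> 0.
f = lambda t: t * t
x, deriv = 0.5, 1.0

def s(theta, r):
    return (f(x + (1 - theta) * r) - f(x - theta * r)) / r

for r in [0.1, 0.01, 0.001]:
    worst = max(abs(s(th / 100, r) - deriv) for th in range(101))
    print(r, worst)  # worst-case error over theta is |r| here
```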
Since $r^{(x,\theta,\varepsilon)} \to 0$ as $\varepsilon \to 0$, \begin{align} f'(x) &= \lim_{\varepsilon \to 0} s_\theta(r^{(x,\theta,\varepsilon)}) \\ &= \lim_{\varepsilon \to 0} \frac{f(x + (1-\theta)r^{(x,\theta,\varepsilon)}) - f(x - \theta r^{(x,\theta,\varepsilon)})}{r^{(x,\theta,\varepsilon)}} \\ &= \lim_{\varepsilon \to 0} \frac{\varepsilon}{ r^{(x,\theta,\varepsilon)}} \\ &\implies \lim_{\varepsilon \to 0} \varepsilon^{-1} \, r^{(x,\theta,\varepsilon)} = f'(x)^{-1} \end{align} as we wanted. The third equality comes from the definition of $r^{(x,\theta,\varepsilon)}$ \eqref{eq::defn-of-r} and the fact that $f'(x)$ is well-defined. Now we need to consider what happens if $\lim_{\varepsilon \to 0} r^{(x,\theta,\varepsilon)} \neq 0$ for some values of $\theta$; this can either be because the limit is positive or because the limit does not exist, but in either case it is clearly only possible if $f$ is not strictly monotonic at $x$ and hence only if $f'(x) = 0$. Additionally, it can only happen if $f$ is flat at $x$, i.e. there is either some $a < x$ or some $a > x$ such that $f(a) = f(x)$ (or both). In this case, for any $0 < \theta < 1$, $I^{(x,\theta,\varepsilon)}$ contains the interval between $a$ and $x$ and hence $r^{(x, \theta, \varepsilon)} \geq |x-a|$. For $\theta = 0$ and $\theta = 1$, either $r^{(x, \theta, \varepsilon)}$ is bounded away from $0$, or it approaches $0$; in the first case, $\varepsilon^{-1} r^{(x,\theta,\varepsilon)} \to \infty$ by default, while in the second the proof for the $\lim_{\varepsilon \to 0} r^{(x,\theta,\varepsilon)} = 0$ case holds. 
Thus, for all values of $\theta \in [0,1]$, we know that $\lim_{\varepsilon \to 0} \varepsilon^{-1} r^{(x,\theta,\varepsilon)} = \infty$ as we need; and this is uniform over $\theta$ because for any $\theta \in (0,1)$ we have $\varepsilon^{-1} r^{(x,\theta,\varepsilon)} \geq \varepsilon^{-1} |x - a|$, meaning that for any large $\alpha > 0$, we can choose $\varepsilon^*$ small enough so that for all $\varepsilon < \varepsilon^*$ all of the following hold: (i) $\varepsilon^{-1} |x - a| > \alpha$; (ii) $\varepsilon^{-1} r^{(x,0,\varepsilon)} > \alpha$; and (iii) $\varepsilon^{-1} r^{(x,1,\varepsilon)} > \alpha$. Thus, we have uniform convergence and we are done. \end{proof} \subsection{Proof of \Cref{prop::uniform-at-small-scales}} \label{sec::proof_uniform-at-small-scales} \begin{proof} We can assume that $\varepsilon \leq 1/2$ (if not, just use the value of $\delta$ corresponding to $\varepsilon = 1/2$). Let $\delta > 0$ be such that for all $x'$ with $|x' - x| \leq \delta$, \begin{align} \Big| \frac{F_p(x') - F_p(x)}{x' - x} - p(x) \Big| \leq p(x) \varepsilon/8\,. \end{align} Since the derivative $p(x) = F'_p(x)$ is well-defined, this $\delta$ must exist. Then for $x' \in I$, \begin{align} &\big| (F_p(x') - F_p(x)) - (x' - x) p(x) \big| \eqlinebreakshort\leq |x' - x| p(x) \varepsilon/8 \leq r_I p(x) \varepsilon/8\,. \end{align} Now let $x''$ also be such that $|x'' - x| \leq \delta$. Then \begin{align} \big| (F_p(x'') &- F_p(x')) - (x'' - x') p(x) \big| \\ &= \big| ((F_p(x'') - F_p(x)) - (x'' - x)p(x) ) \eqlinebreakshort - ~((F_p(x') - F_p(x)) - (x' - x)p(x)) \big| \\ & \leq r_I p(x)\varepsilon/4 \label{eq::unif-upper-numerator}\,. \end{align} Let $x'$ be the lower boundary of $I$, so $x' + r_I$ is the upper boundary of $I$ (for which the above of course applies). Then we get \begin{align} \big| (F_p(x' + r_I) - F_p(x')) - r_I p(x) \big| &\leq r_I p(x) \varepsilon/4 \\ \implies \Big| \frac{F_p(x' + r_I) - F_p(x')}{r_I p(x)} - 1 \Big| &\leq \varepsilon/4\,.
\label{eq::unif-upper-denominator} \end{align} Then we know that for any $x'' \in I$, \begin{align} F_{p|_I}(x'') = \frac{F_p(x'') - F_p(x')}{F_p(x' + r_I) - F_p(x')}\,. \end{align} By \eqref{eq::unif-upper-numerator} we know that \begin{align} (x'' - x')p(x) - &r_I p(x) \varepsilon/4 \leq F_p(x'') - F_p(x') \nonumber \\ &\leq (x'' - x')p(x) + r_I p(x) \varepsilon/4 \\ \implies r_I p(x) ((x'' - x')&/r_I - \varepsilon/4) \leq F_p(x'') - F_p(x') \nonumber \\ &\leq r_I p(x) ((x'' - x')/r_I + \varepsilon/4) \end{align} and by \eqref{eq::unif-upper-denominator} we know that \begin{align} r_I p(x) - r_I p(x) \varepsilon/4 &\leq F_p(x' + r_I) - F_p(x') \nonumber \\ &\leq r_I p(x) + r_I p(x) \varepsilon/4 \\ \implies r_I p(x) (1 - \varepsilon/4) &\leq F_p(x' + r_I) - F_p(x') \nonumber \\ &\leq r_I p(x) (1 + \varepsilon/4)\,. \end{align} Noting that $(x'' - x')/r_I = F_{\unif_I}(x'') \in [0,1]$ is the CDF of the uniform distribution on $I$, we get that \begin{align} F_{p|_I}(x'') &\geq \frac{r_I p(x) ((x'' - x')/r_I - \varepsilon/4) }{r_I p(x) (1 + \varepsilon/4)} \\ &= \frac{(x'' - x')/r_I - \varepsilon/4}{1 + \varepsilon/4} \\&\geq F_{\unif_I}(x'') - \varepsilon \end{align} and similarly that \begin{align} F_{p|_I}(x'') &\leq \frac{r_I p(x) ((x'' - x')/r_I + \varepsilon/4) }{r_I p(x) (1 - \varepsilon/4)} \\ &= \frac{(x'' - x')/r_I + \varepsilon/4}{1 - \varepsilon/4} \\&\leq F_{\unif_I}(x'') + \varepsilon \end{align} and hence, for such a $\delta > 0$ and all intervals $I$ containing $x$ with $r_I \leq \delta$, we have \begin{align} |F_{p|_I}(x'') - F_{\unif_I}(x'')| \leq \varepsilon \end{align} for all $x'' \in I$. For $x'' \not \in I = [x', x' + r_I]$ we then observe that \begin{align} F_{p|_I}(x'') = F_{\unif_I}(x'') = \begin{cases} 0 &\text{if } x'' < x' \\ 1 &\text{if } x'' > x' + r_I \end{cases} \end{align} thus finishing the proof.
\end{proof} \subsection{Proof of \Cref{prop::divergence-of-close-distributions}} \label{sec::proof_divergence-of-close-distributions} \begin{proof} Let $\xi = \widetilde{y}_{p,I} - \bar{y}_I$. Then: \begin{align} |\xi| &= \bigg| \int_{I} \big(\bbP_{X \sim p|_I}[X \geq x] - \bbP_{X \sim \unif_I}[X \geq x]\big) \, dx \bigg| \\ & \leq \int_{I} \big| \bbP_{X \sim p|_I}[X \geq x] - \bbP_{X \sim \unif_I}[X \geq x] \big| \, dx \\ & \leq r_I \varepsilon \,. \end{align} For any distribution $q$ and any fixed value $\newvar$, define the shift operator $\shift{q}{\newvar}$ to denote the distribution of $X - \newvar$ where $X \sim q$ (i.e. just shift it by $\newvar$). Note that $\shift{p|_I}{\widetilde{y}_{p,I}}$ and $\shift{\unif_I} { \bar{y}_I}$ are both constructed to have expectation $0$, and in particular $\shift{\unif_I} { \bar{y}_I}$ is the uniform distribution over an interval of width $r_I$ centered at $0$. Additionally, \begin{align} d_{KS}\eqstartshort(\shift{p|_I}{ \widetilde{y}_{p,I}}, \shift{\unif_I}{ \bar{y}_I }) \eqbreakshort &\leq d_{KS}(\shift{p|_I} { \widetilde{y}_{p,I}}, \shift{\unif_I}{\widetilde{y}_{p,I}}) \\ &~~~~+ d_{KS}(\shift{\unif_I}{ \widetilde{y}_{p,I}}, \shift{\unif_I}{ \bar{y}_I}) \\ &\leq 2\varepsilon \end{align} since $d_{KS}(\cdot, \cdot)$ is a metric, $d_{KS}(q_1, q_2) = d_{KS}(\shift{q_1}{\newvar}, \shift{q_2}{\newvar})$ for any $q_1, q_2$ and $\newvar$, and \begin{align} d_{KS}(\shift{\unif_I}{z_1}, \shift{\unif_I} {z_2}) \leq |z_2 - z_1|/r_I\,. \end{align} For convenience, let $q_1 = \shift{p|_I} { \widetilde{y}_{p,I}}$ and $q_2 = \shift{\unif_I}{ \bar{y}_I}$, and let $\newVar_1 \sim q_1$ and $\newVar_2 \sim q_2$. We know the following: $\bbE[\newVar_1] = \bbE[\newVar_2] = 0$; $d_{KS}(q_1, q_2) \leq 2\varepsilon$; and $q_1, q_2$ have support on $[-r_I, r_I]$. Let $\eta_i = \bbE[\newVar_1^i] - \bbE[\newVar_2^i]$. 
Then we can compute the following: \begin{align} |\eta_i| &= \bigg| \int_0^{r_I^i} (\bbP[\newVar_1^i \geq x] - \bbP[\newVar_2^i \geq x]) \, dx \eqlinebreakshort- \int_0^{r_I^i} (\bbP[\newVar_1^i \leq -x] - \bbP[\newVar_2^i \leq -x]) \, dx \bigg|\,. \end{align} If $i$ is odd, then we do a $u$-substitution with $u = x^{1/i}$ and get \begin{align} |\eta_i| &= \bigg| \int_0^{r_I^i} (\bbP[\newVar_1 \geq x^{1/i}] - \bbP[\newVar_2 \geq x^{1/i}]) \, dx \eqlinebreakshort- \int_0^{r_I^i} (\bbP[\newVar_1 \leq - x^{1/i}] - \bbP[\newVar_2 \leq -x^{1/i}]) \, dx \bigg| \\ &= i \, \bigg| \int_0^{r_I} u^{i-1} (\bbP[\newVar_1 \geq u] - \bbP[\newVar_2 \geq u]) \, du \eqlinebreakshort- \int_{-r_I}^0 u^{i-1} (\bbP[\newVar_1 \leq u] - \bbP[\newVar_2 \leq u]) \, du \bigg| \\ & \leq 2 \int_0^{r_I} i u^{i-1} 2\varepsilon \, du = 4\varepsilon r_I^i\,. \end{align} Similarly if $i$ is even we get \begin{align} |\eta_i| &= \bigg| \int_0^{r_I^i} (\bbP[\newVar_1 \geq x^{1/i}] - \bbP[\newVar_2 \geq x^{1/i}]) \, dx \eqlinebreakshort + \int_0^{r_I^i} (\bbP[\newVar_1 \leq - x^{1/i}] - \bbP[\newVar_2 \leq -x^{1/i}]) \, dx \bigg| \\ &= i \, \bigg| \int_0^{r_I} u^{i-1} (\bbP[\newVar_1 \geq u] - \bbP[\newVar_2 \geq u]) \, du \eqlinebreakshort + \int_{-r_I}^0 u^{i-1} (\bbP[\newVar_1 \leq u] - \bbP[\newVar_2 \leq u]) \, du \bigg| \\ & \leq 2 \int_0^{r_I} i u^{i-1} 2\varepsilon \, du = 4\varepsilon r_I^i \end{align} and we can conclude that $|\eta_i| \leq 4\varepsilon r_I^i$ in general. Then we can take the respective Taylor expansions: let $X_1 \sim p|_I$ and $X_2 \sim \unif_I$ (and $\newVar_1 \sim q_1, \newVar_2 \sim q_2$ as above).
We get \begin{align} \ell_{p,I} &= \bbE[X_1 \log(X_1/\widetilde{y}_{p,I})] \\ &= \widetilde{y}_{p,I}\bbE[(\newVar_1/\widetilde{y}_{p,I} + 1) \log(\newVar_1/\widetilde{y}_{p,I} + 1)] \\ & = \widetilde{y}_{p,I} \bbE\left[\newVar_1/\widetilde{y}_{p,I} + \frac{ (\newVar_1/\widetilde{y}_{p,I})^2}{2} - \frac{(\newVar_1/\widetilde{y}_{p,I})^3}{6(1+\eta)^2} \right]\label{eq::taylor_lagrange_ell_pI} \end{align} where $\eta$ is a number between $0$ and $\newVar_1/\widetilde{y}_{p,I}$ (we get this using Lagrange's formula for the error). Since $\newVar_1 + \widetilde{y}_{p,I} \in I$, we know that \begin{align} \widetilde{y}_{p,I} - r_I \leq \newVar_1 + \widetilde{y}_{p,I} \leq \widetilde{y}_{p,I} + r_I\,. \end{align} Since $r_I < x/ 4$ and $\widetilde{y}_{p,I} \geq x - r_I$ (as $x, \widetilde{y}_{p,I}$ share the width-$r_I$ interval $I$), we get that $\widetilde{y}_{p,I} > 3 r_I$, and therefore \begin{align} \frac{2}{3} \widetilde{y}_{p,I} &< \newVar_1 + \widetilde{y}_{p,I} < \frac{4}{3} \widetilde{y}_{p,I} \\ \implies \frac{-1}{3} &< \newVar_1/\widetilde{y}_{p,I} < \frac{1}{3}\,. \end{align} This gives that $|\eta| < 1/3$, so $\frac{1}{6(1+\eta)^2} < \frac{3}{8}$. Using this and the fact that $\bbE[\newVar_1] = 0$ by construction, we can write \eqref{eq::taylor_lagrange_ell_pI} as \begin{align} \ell_{p,I} &\leq \frac{1}{2} \bbE[\newVar_1^2]/\widetilde{y}_{p,I} + \frac{3}{8} \bbE[|\newVar_1|^3] (\widetilde{y}_{p,I})^{-2} \\ &\leq \frac{1}{2} \bbE[\newVar_1^2]/\widetilde{y}_{p,I} + \frac{3 r_I^3}{8 (x - r_I)^{2} } \,. \end{align} Since $r_I < x/4$, we know that $x - r_I > (3/4)x$, and hence \begin{align} \ell_{p,I} &\leq \frac{1}{2} \bbE[\newVar_1^2]/\widetilde{y}_{p,I} + (2/3) r_I^3 x^{-2}\,. \end{align} Hence we get \begin{align} \ell_{p,I} &= \frac{1}{2} \bbE[\newVar_1^2]/\widetilde{y}_{p,I} + O(r_I^3 x^{-2})\,.
\end{align} Because $x - r_I \leq \bar{y}_I$ as well (and $\newVar_2$ has support on $[-r_I, r_I]$) we can repeat the above arguments to conclude similarly that \begin{align} \label{eq::uniform-taylor} \ell_{\unif_I} = \frac{1}{2}\bbE[\newVar_2^2]/\bar{y}_I + O(r_I^3 x^{-2})\,. \end{align} Hence their difference is \begin{align} |\ell_{p,I} - \ell_{\unif_I}| \leq \frac{1}{2} \big| \bbE[\newVar_1^2]/\widetilde{y}_{p,I} - \bbE[\newVar_2^2]/\bar{y}_I \big| + O(r_I^3 x^{-2}) \label{eq::what-we-want}\,. \end{align} Taking the main term, we split it into three parts: \begin{align} \big| \bbE &[\newVar_1^2]/\widetilde{y}_{p,I} - \bbE[\newVar_2^2]/\bar{y}_I \big| \\&\leq \big| \bbE[\newVar_1^2]/\widetilde{y}_{p,I} - \bbE[\newVar_1^2]/x \big| \label{eq::first-part-bd} \\ &~~+\big| \bbE[\newVar_2^2]/\bar{y}_I - \bbE[\newVar_2^2]/x \big| \label{eq::second-part-bd} \\ &~~+\big| \bbE[\newVar_1^2]/x - \bbE[\newVar_2^2]/x \big| \label{eq::third-part-bd}\,. \end{align} The first part \eqref{eq::first-part-bd} can be bounded by \begin{align} \big| \bbE[\newVar_1^2]/\widetilde{y}_{p,I} - \bbE[\newVar_1^2]/x \big| &\leq |\bbE[\newVar_1^2]| \, |1/\widetilde{y}_{p,I} - 1/x| \\ &\leq r_I^2 \frac{|x - \widetilde{y}_{p,I}|}{\widetilde{y}_{p,I} x} \\ &\leq (4/3) r_I^3 x^{-2} \\ &= O(r_I^3 x^{-2})\,. \end{align} An analogous argument bounds \eqref{eq::second-part-bd}, giving \begin{align} \big| \bbE[\newVar_2^2]/\bar{y}_I - \bbE[\newVar_2^2]/x \big| = O(r_I^3 x^{-2})\,. \end{align} Finally, \eqref{eq::third-part-bd} follows from \begin{align} \big| \bbE[\newVar_1^2]/x - \bbE[\newVar_2^2]/x \big| = |\eta_2| x^{-1} \leq 4 \varepsilon r_I^2 x^{-1}\,. \end{align} Thus, plugging it all into \eqref{eq::what-we-want} we get \begin{align} |\ell_{p,I} - \ell_{\unif_I}| \leq 2 \varepsilon r_I^2 x^{-1} + O(r_I^3 x^{-2}) \,.
\end{align} \end{proof} \subsection{Proof of \Cref{prop::loss-under-uniform-dist}} \label{sec::proof_loss-under-uniform-dist} \begin{proof} Let $i^*$ be such that $r_{I_{i^*}} \leq x/4$ for all $i \geq i^*$ (since $\lim_{i \to \infty} r_{I_i} = 0$ this exists) and WLOG consider the sequence of $i \geq i^*$. The result then follows from the Taylor series of $\ell_{\unif_{I_i}}$, as shown by \eqref{eq::uniform-taylor} (see proof of \Cref{prop::divergence-of-close-distributions} in \Cref{sec::proof_divergence-of-close-distributions}). Keeping the definition from the proof of \Cref{prop::divergence-of-close-distributions}, we let $\newVar_2 \sim \shift{\unif_{I_i}}{\bar{y}_{I_i}}$, i.e. uniform over a width-$r_{I_i}$ interval centered at $0$. Thus we have $\bbE[\newVar_2^2] = \frac{1}{12} r_{I_i}^2$ and hence \eqref{eq::uniform-taylor} yields \begin{align} \ell_{\unif_{I_i}} &= \frac{1}{2}\bbE[\newVar_2^2]/\bar{y}_{I_i} + O(r_{I_i}^3 x^{-2}) \\ &= \frac{1}{24} r_{I_i}^2 \bar{y}_{I_i}^{-1} + O(r_{I_i}^3 x^{-2}) \label{eq::uniform-loss}\,. \end{align} But $\bar{y}_{I_i}$ and $x$ share the interval $I_i$ and hence as $r_{I_i} \to 0$, \begin{align} \bar{y}_{I_i} &= x + O(r_{I_i}) \\ &= x (1 + O(r_{I_i} x^{-1})) \\ \implies \bar{y}_{I_i}^{-1} &= x^{-1} (1 + O(r_{I_i} x^{-1})) \end{align} since when $r_{I_i}$ is very small, $O(r_{I_i} x^{-1})$ is very small, so $(1 + O(r_{I_i} x^{-1}))^{-1} = 1 + O(r_{I_i} x^{-1})$ (the inverse of a value close to $1$ is also close to $1$). Thus, we can replace $\bar{y}_{I_i}^{-1}$ in \eqref{eq::uniform-loss} to get \begin{align} \ell_{\unif_{I_i}} = \frac{1}{24} r_{I_i}^2 x^{-1} + O(r_{I_i}^3 x^{-2}) \end{align} as we wanted. \end{proof} \subsection{Single-Interval Loss Function Properties and Proof of \Cref{lem::increase-interval-loss-positive} } \label{sec::additional_lemmas} We prove \Cref{lem::increase-interval-loss-positive} here; to do so, we show a few lemmas concerning the single-interval loss function $\ell_{p,I}$.
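As a numerical companion to the proof above, the single-interval loss of a uniform distribution can be computed on a grid and compared against the expansion $\frac{1}{24} r_I^2 \bar{y}_I^{-1}$; the interval centers and widths below are arbitrary toy values:

```python
import numpy as np

def uniform_interval_loss(center, width, n=400001):
    # ell_{Unif_I} = E_{X ~ Unif(I)}[ X * log(X / ybar) ], with ybar = center;
    # for a uniform law the expectation is just the grid average of the integrand
    xs = np.linspace(center - width / 2, center + width / 2, n)
    return float(np.mean(xs * np.log(xs / center)))

for center, width in [(0.4, 0.01), (0.1, 0.002)]:
    predicted = width**2 / (24 * center)          # leading term of the expansion
    actual = uniform_interval_loss(center, width)
    assert abs(actual - predicted) <= 1e-3 * predicted   # O(r^3/x^2) term is tiny here
```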
First, we show an alternative formula for $\ell_{p,I}$ which sheds some light on it: \begin{lemma} \label{lem::single-interval-loss-form-2} For any $p, I$, \begin{align} \ell_{p,I} = \bbE_{X \sim p|_I}[X \log X] - \widetilde{y}_{p,I} \log(\widetilde{y}_{p,I})\,. \end{align} \end{lemma} \begin{proof} We compute $\ell_{p,I}$ as follows: \begin{align} \ell_{p,I} &= \bbE_{X \sim p}[X \log (X/\widetilde{y}_{p,I}) \, | \, X \in I] \\ &= \bbE_{X \sim p|_I}[X \log (X/\widetilde{y}_{p,I})] \\ &= \bbE_{X \sim p|_I}[X \log(X) - X \log(\widetilde{y}_{p,I})] \\ &= \bbE_{X \sim p|_I}[X \log X] - \bbE_{X \sim p|_I}[X] \log(\widetilde{y}_{p,I}) \\ &= \bbE_{X \sim p|_I}[X \log X] - \widetilde{y}_{p,I} \log(\widetilde{y}_{p,I}) \end{align} since $\widetilde{y}_{p,I} = \bbE_{X \sim p|_I}[X]$. \end{proof} We now want to show that it really does represent something resembling a loss function: first, that it is nonnegative, and second, that it equals zero if and only if $X \sim p$ on $I$ is known for sure (so the decoded value can be guaranteed to equal $X$). \begin{lemma} For any $p$ and $I \subseteq [0,1]$ (even if $p$ is not continuous), \begin{align} \ell_{p,I} \geq 0 \end{align} with equality if and only if there is some $\newvar \in I$ s.t. \begin{align} \bbP_{X \sim p}[X = \newvar \, | \, X \in I] = 1 \,. \end{align} \end{lemma} \begin{proof} Using \Cref{lem::single-interval-loss-form-2}, if we define the function $h(t) = t \log t$ then since $h$ is strictly convex, by Jensen's Inequality (where all expectations are over $X \sim p|_I$) \begin{align} \ell_{p,I} = \bbE[h(X)] - h(\bbE[X]) \geq 0 \end{align} with equality if and only if $X \sim p|_I$ is fixed with probability $1$. \end{proof} This yields the following corollary: \begin{corollary} \label{cor::distortion-nonzero} If $p \in \cP$ and $I$ has nonzero width, \begin{align} \ell_{p,I} > 0 \,.
\end{align} \end{corollary} This follows because $p \in \cP$ is continuous and so cannot have all its mass on a particular value in any nonzero-width $I$. If $I$ has zero probability mass under $p$, then $\ell_{p,I}$ defaults to the interval loss under a uniform distribution. Finally, we can prove \Cref{lem::increase-interval-loss-positive}. Recall that it states that if $I$ has nonzero probability mass under $p$, one cannot get the interval loss to approach $0$ by choosing $J \supseteq I$, i.e. if $p \in \cP$ and $I$ is such that $\bbP_{X \sim p}[X \in I] > 0$, then there is some $\alpha > 0$ (which can depend on $I$) such that \begin{align} \ell_{p,J} \geq \alpha \text{ for all } J \supseteq I \,. \end{align} \begin{proof}[Proof of \Cref{lem::increase-interval-loss-positive}] We can re-write $\ell_{p,J}$ as \begin{align} \ell_{p,J} &= \bbE_{X \sim p}[X \log(X/\widetilde{y}_{p,J}) \, | \, X \in J] \\ &= \int_J \frac{p(x)}{\int_J dp} x \log(x/\widetilde{y}_{p,J}) \, dx \end{align} where $\int_J \, dp$ is just the integral representation of $\bbP_{X \sim p}[X \in J]$. Therefore, since $p \in \cP$, $\ell_{p,J}$ is continuous with respect to the boundaries of $J$ (the inverse probability mass $(\int_J dp)^{-1}$ is continuous since $\int_J dp \geq \int_I dp > 0$). Thus, we can consider $\ell_{p,J}$ as a continuous function of the boundaries of $J$ on the domain where $I \subseteq J \subseteq [0,1]$; this domain can be represented as a closed subset of $[0,1]^2$ and hence is compact. Thus, by the Weierstrass extreme value theorem, $\ell_{p,J}$ achieves its minimum $\alpha$ on this domain, and by \Cref{cor::distortion-nonzero} it must be positive. Hence, we have shown that there is an $\alpha > 0$ such that for any $J \supseteq I$, $\ell_{p,J} \geq \alpha$. \end{proof} \subsection{Proof of \Cref{lem::interval-loss-ub}} \label{sec::proof_interval-loss-ub} \begin{proof} We WLOG restrict ourselves to $q$ which are probability distributions over $I$.
Let $\cP_I$ denote the set of probability distributions over $I$ (not necessarily continuous) and $\cP'_I$ denote the set of probability distributions over $I$ which place all the probability mass on the boundaries $\bar{y}_I - r_I/2$ and $\bar{y}_I + r_I/2$, i.e. for all $q' \in \cP'_I$ we have \begin{align} \bbP_{X \sim q'} [X \in \{\bar{y}_I - r_I/2, \bar{y}_I + r_I/2\}] = 1 \, . \end{align} We then make the following claim: \emph{Claim 1:} For all $q \in \cP_I$, there exists $q' \in \cP'_I$ such that $\ell_{q,I} \leq \ell_{q',I}$. This follows from the convexity of the function $x \log(x)$ and the definition of $\ell_{q,I}$, i.e. \begin{align} \ell_{q,I} = \bbE_{X \sim q} [X \log (X/\widetilde{y}_{q,I})] \end{align} (since $q$ in this case is a distribution over $I$, we removed the condition $X \in I$ as it is redundant). In particular, if $q'$ is the (unique) distribution in $\cP'_I$ such that $\bbE_{X \sim q'}[X] = \widetilde{y}_{q,I}$ (i.e. we move all the probability mass to the boundary but keep the expected value the same), then $\ell_{q',I}$ can be computed by considering the average over the linear function which connects the end points of $X \log (X/\widetilde{y}_{q,I})$ over $I$. Because of convexity, this linear function is always greater than or equal to $X \log (X/\widetilde{y}_{q,I})$ on $I$, and therefore $\ell_{q,I} \leq \ell_{q',I}$. Thus, Claim 1 holds and we can restrict our attention to $\cP'_I$. For simplicity we introduce a linear mapping $\newvar$ from $[-1/2,1/2]$ to $I$: for $\theta \in [-1/2,1/2]$, let $\newvar(\theta) = \bar{y}_I + \theta r_I$ (so $\newvar(-1/2) = \bar{y}_I - r_I/2$ is the lower boundary of $I$, $\newvar(1/2) = \bar{y}_I + r_I/2$ is the upper boundary, and $\newvar(0) = \bar{y}_I$ is the midpoint). We also specially denote $a = \newvar(-1/2)$ to be the lower boundary and $b = \newvar(1/2)$ to be the upper boundary.
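Claim 1 can be illustrated numerically: pushing all mass of a continuous density onto the endpoints $a$ and $b$ while preserving the mean can only increase the single-interval loss. The sketch below uses a toy interval and a toy (parabolic) density, and evaluates $\ell_{q,I}$ via the formula $\bbE[X \log X] - \widetilde{y}_{q,I}\log\widetilde{y}_{q,I}$ from \Cref{lem::single-interval-loss-form-2}:

```python
import numpy as np

aI, bI = 0.3, 0.5                        # a toy interval I = [aI, bI]
xs = np.linspace(aI, bI, 200001)
dx = xs[1] - xs[0]
dens = (xs - aI) * (bI - xs)             # some continuous density on I...
dens /= np.sum(dens) * dx                # ...normalized to integrate to 1

m = float(np.sum(xs * dens) * dx)        # mean of q
l_q = float(np.sum(xs * np.log(xs) * dens) * dx) - m * np.log(m)

# q': mass only on {aI, bI}, with weights chosen to keep the same mean m
w = (m - aI) / (bI - aI)                 # weight on the upper endpoint
l_qp = (1 - w) * aI * np.log(aI) + w * bI * np.log(bI) - m * np.log(m)

assert 0 <= l_q <= l_qp                  # Claim 1: endpoint distribution has larger loss
```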
Then, since any $q \in \cP'_I$ can only assign probabilities to $a$ and $b$, we can parametrize all $q \in \cP'_I$: let $q(\theta)$ denote the distribution assigning probability $1/2 + \theta$ to the upper boundary $b$ and $1/2 - \theta$ to the lower boundary $a$. Then this gives the nice formula: \begin{align} \widetilde{y}_{q(\theta), I} = \bar{y}_I + \theta r_I = \newvar(\theta) \end{align} i.e. $q(\theta)$ is the unique distribution in $\cP'_I$ with expectation $\newvar(\theta)$. This brings us to our next claim: \emph{Claim 2:} $\ell_{q(\theta), I} \leq 2 \ell_{q(0), I}$ for any $\theta \in [-1/2, 1/2]$. Ignoring the redundant condition $X \in I$, we use \begin{align} \ell_{q,I} &= \bbE_{X \sim q} [X \log(X)] - \widetilde{y}_{q,I} \log(\widetilde{y}_{q,I}) \label{eq::alt-loss-formula} \end{align} to re-write $\ell_{q(\theta),I}$ as follows: \begin{align} \ell_{q(\theta),I} &= (1/2 - \theta) a \log(a) + (1/2 + \theta) b \log(b) \eqlinebreakshort- \newvar(\theta) \log(\newvar(\theta))\,. \end{align} This implies that \begin{align} \ell_{q(\theta), I} &\leq \ell_{q(\theta), I} + \ell_{q(-\theta), I} \\ &= \big(a\log(a) + b \log(b) \big) \eqlinebreakshort- \big(\newvar(\theta) \log(\newvar(\theta)) + \newvar(-\theta) \log(\newvar(-\theta)) \big) \\ &\leq \big(a\log(a) + b \log(b) \big) - 2 \bar{y}_I \log(\bar{y}_I) \\ &= 2 \ell_{q(0),I} \end{align} where the inequality follows because $x \log(x)$ is convex and the mean of $\newvar(\theta)$ and $\newvar(-\theta)$ is $\newvar(0) = \bar{y}_I$, showing Claim 2. \emph{Claim 3:} $2 \ell_{q(0),I} \leq \frac{1}{2} r_I^2 \bar{y}_I^{-1}$. This comes from rewriting according to \eqref{eq::alt-loss-formula} and then applying the Taylor series expansion of $(1+t) \log(1+t)$. 
Define $t = r_I/(2 \bar{y}_I)$ and note $t \leq 1$ (otherwise $I \not\subseteq [0,1]$); we get: \begin{align} 2 &\ell_{q(0),I} \\&= \big(a\log(a) + b \log(b) \big) - 2 \bar{y}_I \log(\bar{y}_I) \\ &= (\bar{y}_I - r_I/2) \log(\bar{y}_I - r_I/2) \eqlinebreakshort + (\bar{y}_I + r_I/2) \log(\bar{y}_I + r_I/2) - 2 \bar{y}_I \log(\bar{y}_I) \\ &= (\bar{y}_I - r_I/2) (\log(\bar{y}_I - r_I/2) - \log(\bar{y}_I)) \eqlinebreakshort + (\bar{y}_I + r_I/2) (\log(\bar{y}_I + r_I/2) - \log(\bar{y}_I)) \\ &= \bar{y}_I \big((1 - t) \log(1 - t) + (1 + t) \log(1 + t) \big) \,. \end{align} We can use the inequality $(1 - t) \log(1 - t) + (1 + t) \log(1 + t) \leq 2 t^2$ for $|t| \leq 1$ to get \begin{align} 2 \ell_{q(0),I} \leq 2 \bar{y}_I t^2 = \frac{1}{2} r_I^2 \bar{y}_I^{-1}\,. \end{align} This resolves Claim 3. The lemma then follows from Claims 1, 2, and 3. \end{proof} \subsection{Proof of \Cref{lem::dominating-companders}} \label{sec::proof_dominating-companders} \begin{proof} First, note that the above conditions imply that $f_2(x) \geq f_1(x)$ and that $f'_2(x) \geq f'_1(x)$ for all $x$ where both are defined (almost everywhere). Let $J^{f_i, N, x} = [a_i, b_i]$ for $i = 1, 2$. We will prove that $a_1 \leq a_2$ and $b_1 \geq b_2$. Note that by definition if $f_1(x) - 1/N \leq 0$ then $a_1 = 0$ and $a_1 \leq a_2$ happens by default; this is also the case if $f_2(x) - 1/N \leq 0$, since $f_2 \geq f_1$ implies $f_1(x) - 1/N \leq 0$. Meanwhile, if $f_2(x) + 1/N \geq 1$ we have \begin{align} 1/N \geq 1 - f_2(x) \geq f_2(1) - f_2(x) &\geq f_1(1) - f_1(x) \end{align} meaning that $b_1 = 1$ (and $b_2 = 1$) so $b_1 \geq b_2$; and similarly $f_1(x) + 1/N \geq 1$ simply implies $b_1 = 1 \geq b_2$. Thus we do not need to worry about the boundaries hitting $0$ or $1$ (i.e. we can ignore the `$\cap [0,1]$' in the definition), as the needed result easily holds whenever it happens.
Then $a_1$ and $a_2$ are the values for which \begin{align} \int_{a_2}^x f'_2(t) \, dt = \int_{a_1}^x f'_1(t) \, dt = 1/N\,. \end{align} But since $0 \leq f'_1(t) \leq f'_2(t)$, we know that \begin{align} \int_{a_2}^x f'_2(t) \, dt = 1/N = \int_{a_1}^x f'_1(t) \, dt \leq \int_{a_1}^x f'_2(t) \, dt \end{align} which implies that $a_2 \geq a_1$. An analogous proof on the opposite side proves $b_1 \geq b_2$ and hence \begin{align} J^{f_2, N, x} = [a_2, b_2] \subseteq [a_1, b_1] = J^{f_1, N, x} \end{align} as we needed. \end{proof} \section{Proof of \Cref{prop::approximate-compander} } \label{sec::approximate-compander-f-dagger} \begin{proof} First, note that $\comp_\delta - \delta x^{1/2} = (1 - \delta) \comp$ is monotonically increasing so $\comp_\delta \in \compset^\dagger$. Furthermore, where the derivative $f'$ exists (which is almost everywhere since it is monotonic and bounded), \begin{align} f'_\delta (x) = (1-\delta)f'(x) + (\delta/2)x^{-1/2}\,. \end{align} Thus, pointwise, $\lim_{\delta \to 0} f'_\delta (x) = f'(x)$ for all $x$. Since for all $\delta > 0$ we have $\comp_\delta \in \compset^\dagger$, \Cref{thm::asymptotic-normalized-expdiv} applies to $\comp_\delta$. So, we have \begin{align} \lim_{\delta \to 0} \widetilde{L}(p,\comp_\delta) &= \lim_{\delta \to 0} L^\dagger(p,\comp_\delta) \\ &= \lim_{\delta \to 0} \frac{1}{24} \int_0^1 p(x) f'_\delta(x)^{-2} x^{-1} \, dx \end{align} and $\lim_{\delta \to 0} p(x) f'_\delta(x)^{-2} x^{-1} = p(x) f'(x)^{-2} x^{-1}$, i.e. pointwise convergence of the integrand. We now consider two possibilities: (i) $\int_0^1 p(x) f'(x)^{-2} x^{-1} \, dx < \infty$; (ii) $\int_0^1 p(x) f'(x)^{-2} x^{-1} \, dx = \infty$. In case (i), WLOG assume that $\delta \leq 1/2$; then $f'_\delta(x) > \frac{1}{2} f'(x)$, which implies $f'_\delta(x)^{-2} < 4 f'(x)^{-2}$. Thus, we have an integrable dominating function ($4 p(x) f'(x)^{-2} x^{-1}$) and we can apply the Dominated Convergence Theorem, which shows what we want.
In case (ii), we need to show $\lim_{\delta \to 0} \int_0^1 p(x) f'_\delta(x)^{-2} x^{-1} \, dx = \infty$. Let $\cX_\delta^+ = \{x \in [0,1] : f'(x) \geq \delta x^{-1/2}\}$ and $\cX_\delta^- = [0,1] \backslash \cX_\delta^+$, with $\bone_{\cdot}(\cdot)$ denoting their respective indicator functions. Then \begin{align} f'_\delta(x) &= (1-\delta) f'(x) + (\delta/2) x^{-1/2} \\ &\leq f'(x) + \delta x^{-1/2} \\ &\leq 2 f'(x) \, \bone_{\cX_\delta^+}(x) + 2 \delta x^{-1/2} \, \bone_{\cX_\delta^-}(x) \\ \implies f'_\delta(x)^{-2} &\geq \frac{1}{4} f'(x)^{-2} \, \bone_{\cX_\delta^+}(x) + \frac{1}{4} \delta^{-2} x \, \bone_{\cX_\delta^-}(x)\,. \end{align} This then shows that (switching to $\int \cdot dp$ notation) \begin{align} \int f'_\delta(x)^{-2} x^{-1} \, dp \geq &\frac{1}{4} \int \bone_{\cX_\delta^+}(x) f'(x)^{-2} x^{-1} \, dp \\ &+ \frac{1}{4} \int \bone_{\cX_\delta^-}(x) \delta^{-2} \, dp\,. \end{align} Note that $\cX_\delta^+$ expands as $\delta \to 0$. We then have two sub-cases: (a) $\lim_{\delta \to 0} \bbP_{X \sim p} [X \in \cX_\delta^+] = 1$; (b) $\lim_{\delta \to 0} \bbP_{X \sim p} [X \in \cX_\delta^+] < 1$, which implies that there is some $\beta > 0$ such that $\bbP_{X \sim p} [X \in \cX_\delta^-] > \beta$ for all $\delta$. Then in sub-case (a), we have \begin{align} &\lim_{\delta \to 0} \frac{1}{4} \int \bone_{\cX_\delta^+}(x) f'(x)^{-2} x^{-1} \, dp \\ = & \frac{1}{4} \lim_{\delta \to 0} \bbE_{X \sim p} [\bone_{\cX_\delta^+}(X) f'(X)^{-2} X^{-1}] = \infty\,. \end{align} This is infinite because $\cX_0^+ := \lim_{\delta \to 0} \cX_\delta^+$ is a set of probability measure $1$, and by monotone convergence, integration over $\cX_0^+$ is the limit of integration over $\cX_\delta^+$; since $\cX_0^+$ has probability measure $1$, integrating over it with respect to $p$ is equivalent to integrating over $[0,1]$, which is infinite by the case (ii) assumption.
Meanwhile in sub-case (b) we have \begin{align} \frac{1}{4} \int \bone_{\cX_\delta^-}(x) \delta^{-2} \, dp = \frac{\delta^{-2}}{4} \bbP_{X \sim p}[X \in \cX_\delta^-] \geq \frac{\delta^{-2}}{4} \beta \end{align} which goes to $\infty$ as $\delta \to 0$, and we are done. \end{proof} \section{Beta and Power Companders} \label{sec::appendix_other_companders} In this appendix, we analyze \emph{beta companders}, which are optimal companders for symmetric Dirichlet priors and are based on the normalized incomplete beta function (\Cref{sec::beta_companding}) and \emph{power companders}, which have the form $f(x) = x^s$ and which have properties similar to the minimax compander when $s = 1/\log \az$ (\Cref{sec::power_compander_analysis}). We also add supplemental experimental results. First, we compare the beta compander with truncation (identity compander) and the EDI (Exponential Density Interval) compander we developed in \cite{adler_ratedistortion_2021} in the case of the uniform prior on $\triangle_{\az-1}$ (which is equivalent to a Dirichlet prior with all parameters set to $1$), on book word frequencies, and on DNA $k$-mer frequencies. EDI was, in a sense, developed to minimize the expected KL divergence loss for the uniform prior (specifically to remove dependence on $\az$) as a means of proving a result in \cite{adler_ratedistortion_2021}; the beta compander was then directly developed for all Dirichlet priors. Second, we compare the theoretical prediction for the power compander against various data sets; this demonstrates a close match to the theoretical performance for synthetic (uniform on $\triangle_{\az-1}$) data and DNA $k$-mer frequencies, while the power compander performs better on book word frequencies. 
Note that this is not a contradiction, as the theoretical prediction is for its performance on the worst possible prior -- it instead indicates that book word frequencies are somehow more suited to power companders than the uniform distribution or DNA $k$-mer frequencies. Finally, we compare how quickly the beta and power companders converge to their theoretical limits (with uniform prior); specifically how quickly $N^2 \widetilde{L}(p,f,N)$ converges to $\widetilde{L}(p,f)$. The results show that for large $\az$ ($\approx 10^5$), both are already very close by $N = 2^8 = 256$, while for smaller values of $\az$, power companders still converge very quickly but beta companders may need $N = 2^{16} = 65536$ or beyond to be close. \subsection{Beta Companders for Symmetric Dirichlet Priors} \label{sec::beta_companding} \begin{definition} When $\bX$ is drawn from a Dirichlet distribution with parameters $\balpha = \alpha_1,\dots,\alpha_\az$, we use the notation $\bX \sim \dir(\balpha)$. When $\alpha_1 = \dots = \alpha_\az = \alpha$, then $\bX$ is drawn from a symmetric Dirichlet with parameter $\alpha$ and we use the notation $\bX \sim \dir_\az(\alpha)$. \end{definition} As a corollary to \Cref{thm::optimal_compander_loss}, we get that the optimal compander for the symmetric Dirichlet distribution is the following: \begin{corollary} When $\bX \sim \dir_\az(\alpha)$, let $p(x)$ be the associated single-letter density (same for all elements due to symmetry). The optimal compander for $p$ satisfies \begin{align} \compder(x) &= B \Big(\frac{\alpha + 1}{3}, \frac{(\az-1)\alpha + 2}{3} \Big)^{-1} \eqlinebreakshort x^{(\alpha - 2)/3} (1-x)^{((\az-1)\alpha - 1)/3} \label{eq::beta_compander_deriv} \end{align} where $B(a,b)$ is the Beta function. Therefore, $\comp(x)$ is the normalized incomplete Beta function $I_x((\alpha + 1)/3, ((\az-1)\alpha + 2)/3)$.
Then \begin{align} &\singleloss(p, \comp) \nonumber \\&= \frac{1}{2} B\Big(\frac{\alpha+1}{3}, \frac{(\az-1)\alpha + 2}{3} \Big)^3 B(\alpha, (\az-1)\alpha)^{-1}\label{eq::dir_Dpf}\,. \end{align} \end{corollary} This result uses the following fact: \begin{fact} \label{fact::dirichlet-marginals} For $\bX \sim \dir(\alpha_1, \dots, \alpha_\az)$, the marginal distribution on $X_k$ is $X_k \sim \Btdis(\alpha_k, \beta_k)$, where $\beta_k = \sum_{j \neq k} \alpha_j$. When the prior is symmetric with parameter $\alpha$, we get that all $X_k$ are distributed according to $\Btdis(\alpha, (\az-1)\alpha)$. \end{fact} \begin{remark} Since \eqref{eq::dir_Dpf} scales with $\az^{-1}$, this means that $\widehat\cL_\az(\dir_\az(\alpha), \comp)$ is constant with respect to $\az$. This is consistent with what we get with the EDI compander (see \cite{adler_ratedistortion_2021}). \end{remark} We will call the compander $f$ derived from integrating \eqref{eq::beta_compander_deriv} the \emph{beta compander}. (This is because integrating \eqref{eq::beta_compander_deriv} gives an incomplete beta function.) The beta compander naturally performs better than the EDI method since this compander is optimized to do so. We can see in \Cref{fig::beta_and_EDI} that on random uniform distributions, the beta compander is better than the EDI method by a constant amount for all $\az$. \begin{figure} \centering \includegraphics[scale = .4]{figures/kl_ucsc_dna_books_dir_1_truncate_exp_beta_256_fig.pdf} \includegraphics[scale = .4]{figures/kl_ucsc_dna_books_dir_1_truncate_exp_beta_65536_fig.pdf} \caption{Comparing the beta compander and the EDI method. The random data is drawn with $\dir_\az(1)$ (i.e. uniform).} \label{fig::beta_and_EDI} \end{figure} The beta compander is not the easiest algorithm to implement, however. It is necessary to compute an incomplete beta function, which is not known to have a closed form expression, in order to find the compander function $\comp$.
We reiterate \Cref{rmk::minimax-is-closed-form} that it is indeed interesting that the minimax compander, on the other hand, does have a closed form. \subsection{Analysis of the Power Compander} \label{sec::power_compander_analysis} \begin{figure} \centering \includegraphics[scale = .4]{figures/add_theoretical_kl_ucsc_dna_books_dir_1_power_65536_fig.pdf} \caption{Comparing theoretical performance \eqref{eq::power_p_f} of the power compander to experimental results. } \label{fig::theoretical_power} \end{figure} Starting with \Cref{thm::asymptotic-normalized-expdiv}, we can use the asymptotic analysis to understand why the power compander works well for all distributions. The following proposition proves the first set of results in \Cref{thm::power_compander_results}. \begin{proposition} \label{thm::power_r_loss} Let the single-letter density $p$ be the marginal distribution of one letter under any symmetric probability distribution $P$ over $\az$ letters. For the power compander $\comp(x) = x^s$ where $s \leq \frac{1}{2}$, \begin{align} \singleloss(p, \comp) \leq \frac{1}{\az} \frac{1}{\dpfconst} s^{-2} \az^{2 s} \end{align} and for any prior $P \in \cP^\triangle_\az$, \begin{align} \rawloss_\az(P, x^s) \leq \frac{1}{\dpfconst} s^{-2} \az^{2 s}\,. \end{align} Optimizing over $s$ gives \begin{align} \rawloss_\az(P, \comp) \leq \frac{e^2}{\dpfconst} \log^2 \az\label{eq::power_p_f}\,. \end{align} \end{proposition} \begin{proof} Since $\comp(x) = x^s$ we have that $\compder(x) = s x^{s-1}$. Using \Cref{thm::asymptotic-normalized-expdiv}, this gives \begin{align} &\singleloss(p, \comp) \nonumber \\ &= \frac{1}{\dpfconst} s^{-2} \int_0^1 x^{1-2 s} p(x) \, dx = \frac{1}{\dpfconst} s^{-2}\bbE_{X\sim p}[X^{1-2 s}]\,. \end{align} The function $x^{1-2 s}$ is increasing and concave.
We want to find the maximin prior distribution $P \in \cP^\triangle_\az$ (with marginals $p$) with the constraint \begin{align} \sum_{i} \bbE_{X_i \sim p}[X_i] &= 1 \end{align} (the full constraint is that the coordinates $(X_1, \dots, X_\az)$ must sum to one pointwise; we impose only this weaker constraint in expectation here). We want to choose $P$ to maximize \begin{align} \sum_{i} \bbE_{X_i \sim p}[X_i^{1-2 s}] & = \bbE_{ (X_1,...,X_\az) \sim P}\left[\sum_{i} X_i^{1-2 s}\right]\,. \end{align} By concavity (even ignoring any constraint that $P$ is symmetric), the maximum solution is given when $X_1 = \dots = X_\az$. Therefore, the maximin $P$ is such that the marginal $p$ on one letter places all its probability on $1/\az$, i.e. \begin{align} \bbP_{X \sim p}[X = 1/\az] = 1\,. \end{align} The probability mass function where $ 1/\az$ occurs with probability $1$ is a limit point of a sequence of continuous densities of the form \begin{align} p(x) = \frac{1}{2\varepsilon} \text{ on } x \in \left[ \frac{1}{\az} - \varepsilon, \frac{1}{\az} + \varepsilon\right] \end{align} as $\varepsilon \to 0$. We use this since we are restricting to continuous probability distributions. Evaluating with this gives \begin{align} \singleloss(p, \comp) &= \frac{1}{\dpfconst} s^{-2}\bbE_{X\sim p}[X^{1-2 s}] \\ &\leq \frac{1}{\dpfconst} s^{-2} \left(\frac{1}{\az} \right) ^{1-2 s} \\ &= \frac{1}{\az} \frac{1}{\dpfconst} s^{-2} \az^{2 s} \end{align} which shows the first bound in the proposition. Multiplying by $\az$ gives the bound on $\widetilde \cL_\az(P,\comp)$ for symmetric $P$. Note that for any non-symmetric $P$, we can always symmetrize $P$ to a symmetric prior $P_{sym}$ by averaging over all random permutations of the indices. Because the loss $\widetilde \cL_\az(P,\comp)$ is concave in $P$, the symmetrized prior $P_{sym}$ will give a higher value, that is $\widetilde \cL_\az(P,\comp) \leq \widetilde \cL_\az(P_{sym},\comp)$. Hence $\widetilde \cL_\az(P,\comp) \leq \frac{1}{\dpfconst}s^{-2}\az^{2s}$ holds for all priors.
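Both the concavity step above and the optimization over $s$ stated in the proposition can be confirmed numerically; the sketch below uses arbitrary toy sizes, with a Dirichlet sampler simply producing random points of the simplex:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
K, s = 1000, 0.25                            # toy alphabet size and exponent s <= 1/2
for _ in range(200):
    x = rng.dirichlet(np.ones(K))            # a random point on the simplex
    # concavity: sum_i x_i^(1-2s) is maximized by the uniform point x_i = 1/K,
    # where it equals K * (1/K)^(1-2s) = K^(2s)
    assert np.sum(x**(1 - 2 * s)) <= K**(2 * s) + 1e-9

# the minimizer of g(s) = s^(-2) K^(2s) is s* = 1/log(K), with g(s*) = (e log K)^2
g = lambda t: t**-2 * K**(2 * t)
s_star = 1 / math.log(K)
assert abs(g(s_star) - (math.e * math.log(K))**2) < 1e-9 * g(s_star)
assert g(s_star - 1e-4) > g(s_star) < g(s_star + 1e-4)   # s* is a local minimum
```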
Finding the $s$ which minimizes $\frac{1}{\dpfconst} s^{-2} \az^{2 s}$ is equivalent to finding the $s$ which minimizes $s \log \az - \log s$. \begin{align} 0 &= \frac{d}{ds} \left( s \log \az - \log s \right) = \log \az - \frac{1}{s}\\ &\implies s = \frac{1}{\log \az}\,. \end{align} We can plug this back into our equation, using the fact that $e^{\log \az} = \az$ implies that $\az^{\frac{1}{\log \az}} = e$. Thus, using $\comp(x) = x^{\frac{1}{\log \az}}$ gives that \begin{align} \rawloss_\az(P, \comp) \leq \frac{e^2}{\dpfconst} \log^2 \az \text{ for any } P \in \cP^\triangle_\az\,. \end{align} To generate a prior $P \in \cP^\triangle_\az$ that matches this upper bound, we note that this means we want its marginal $p$ to maximize $\frac{1}{\dpfconst} (\log^2 \az) \bbE_{X \sim p}[X^{1-2/\log \az}]$, and from before we know that fixing $X = 1/\az$ does this (since $\bbE_{X \sim p}[X] = 1/\az$ as $p$ is the marginal of $P$). While $p$ has to represent a probability density function, and therefore cannot be a point mass, we can restrict its support to an arbitrarily small neighborhood around $1/\az$ (and it is obvious that there are priors $P \in \cP^\triangle_\az$ with such a marginal), thus getting arbitrarily close to the bound and showing that \begin{align} \underset{P \in \cP^\triangle_\az}{\sup} \rawloss_\az(P, \comp) = \frac{e^2}{\dpfconst} \log^2 \az \,. \end{align} \end{proof} The power compander thus gives guaranteed bounds on the value of $\rawloss_\az(P, \comp)$ when $\comp$ is chosen so that $s = {1}/{\log \az}$. We compare this theoretical result on raw loss with the experimental results in \Cref{fig::theoretical_power}. \subsection{Converging to Theoretical} \label{sec::beta_power_convergence} For both the power compander and the beta compander, we show in \Cref{fig::theoretical_compare} how quickly the experimental results converge to the theoretical results. Experimental results have a fixed granularity $N$ whereas the theoretical results assume that $N \to \infty$.
The plots show that by $N = 2^{16}$ (each value gets $16$ bits), the experimental results for the power compander are very close to the theoretical results, and even for $N = 2^8$ they are not so far. For the beta compander, the experimental results are close to the theoretical when $\az$ is large. When $\az = 100$, the results for $N = 2^{16}$ are not that close to the theoretical result, which demonstrates the effect of using unnormalized (or raw) values. The difference between normalizing and not normalizing gets smaller as $\az$ increases. \begin{figure*} \centering \includegraphics[scale = .5]{figures/theoreticalCompander_power.pdf} \includegraphics[scale = .5]{figures/theoreticalCompander_beta.pdf} \caption{Comparing the theoretical expression $\singleloss(p, f)$ with experimental results. The KL divergence values of the experimental results are multiplied by $N^2$ in order to be comparable to $\singleloss(p, f)$.} \label{fig::theoretical_compare} \end{figure*} \section{Minimax and Approximate Minimax Companders} \label{sec::appendix_minimax} In this appendix, we analyze the minimax compander and approximate minimax compander. Specifically, we analyze the constant $c_\az$, to show that it falls in $[1/4,3/4]$ (\Cref{sec::minimax_const_analysis}) and that $\lim_{\az \to \infty} c_\az = 1/2$ (\Cref{sec::lim_c_k}). We also show that when $c_\az$ is close to $1/2$, the approximate minimax compander (which is the same as the minimax compander except it replaces $c_\az$ with $1/2$) has performance close to the minimax compander against all priors $p \in \cP$ (\Cref{sec::proof_L_appx_minimax_compander}).
\subsection{Analysis of Minimax Companding Constant} \label{sec::minimax_const_analysis} \begin{figure} \centering \includegraphics[scale = .4]{figures/add_theoretical_kl_ucsc_dna_books_dir_1_sinh_65536_fig.pdf} \caption{Comparing theoretical performance \eqref{eq::L_dagger_value_minimax} of the approximate minimax compander to experimental results.} \label{fig::theoretical_minimax} \end{figure} \subsubsection{Determining bounds on $c_\az$} \label{sec::bounds_c_k} If $a_\az, b_\az \geq 0$, then $p(x)$ is well-behaved (and bigger than $0$). We need $a_\az$ and $b_\az$ to be such that $p(x)$ is a density that integrates to $1$ and also that $p(x)$ has expected value of $1/\az$. To do this, first we compute that \begin{align} \bbE_{X\sim p}[X] &= \int_{0}^{1} x \left(a_\az x^{1/3} + b_\az x^{4/3}\right)^{-3/2} \, dx \\ &= \frac{-2}{b_\az\sqrt{a_\az + b_\az}} + \frac{2 \arcsinh\left(\sqrt{\frac{b_\az}{a_\az}} \right)}{b_\az^{3/2}}\,. \end{align} The constraint that $\int_0^1 p(x) \, dx = 1$ requires that $a_\az \sqrt{a_\az+b_\az} = 2$. We can use this to get \begin{align} \bbE_{X\sim p}[X] &= \frac{-a_\az}{b_\az} + \frac{a_\az \sqrt{\frac{a_\az}{b_\az} + 1} \arcsinh\left(\sqrt{\frac{b_\az}{a_\az}} \right)}{b_\az}\\ & = \frac{-1}{r} + \frac{ \sqrt{\frac{1}{r} + 1} \arcsinh\left(\sqrt{r} \right)}{r}\\ & = \frac{-1}{r} + \frac{ \sqrt{\frac{1}{r} + 1} \log\left( \sqrt{r} + \sqrt{r+1} \right)}{r}\label{eq::exact_expectation_r} \end{align} where we use $r = b_\az/ a_\az$. We will find upper and lower bounds in order to approximate what $r$ should be. Using \eqref{eq::exact_expectation_r}, we can get \begin{align} \bbE_{X\sim p}[X] & \leq \frac{1}{2} \frac{\log r}{r} \end{align} so long as $r > 3$. 
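The closed form for $\bbE_{X \sim p}[X]$ and the sandwich bounds $\frac{1}{3}\frac{\log r}{r} \leq \bbE_{X\sim p}[X] \leq \frac{1}{2}\frac{\log r}{r}$ used in this subsection (the upper bound for $r > 3$) can be verified numerically. In the midpoint-rule sketch below the $(a, b)$ pairs and the tested range of $r$ are arbitrary choices, and the integrand is simplified using $a x^{1/3} + b x^{4/3} = x^{1/3}(a + bx)$:

```python
import numpy as np

def mean_closed(a, b):
    # the closed form computed above for E[X] = int_0^1 x (a x^(1/3) + b x^(4/3))^(-3/2) dx
    return -2 / (b * np.sqrt(a + b)) + 2 * np.arcsinh(np.sqrt(b / a)) / b**1.5

def mean_numeric(a, b, n=2_000_000):
    # the integrand simplifies to sqrt(x) * (a + b x)^(-3/2); midpoint rule on [0, 1]
    x = (np.arange(n) + 0.5) / n
    return float(np.sum(np.sqrt(x) * (a + b * x) ** -1.5) / n)

for a, b in [(2.0, 1.0), (1.0, 5.0), (0.5, 20.0)]:
    assert abs(mean_numeric(a, b) - mean_closed(a, b)) < 1e-6

def expected_x(r):
    # E[X] as a function of r = b/a alone, under the normalization a*sqrt(a+b) = 2
    return (-1 + np.sqrt(1 / r + 1) * np.arcsinh(np.sqrt(r))) / r

rs = np.linspace(3.05, 10000.0, 20000)
assert np.all(expected_x(rs) <= np.log(rs) / (2 * rs))   # upper bound for r > 3
assert np.all(expected_x(rs) >= np.log(rs) / (3 * rs))   # lower bound on this range
```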
If we choose $r = c_1 \az \log \az$ and set $c_1 = .75$, then \begin{align} \bbE_{X\sim p}[X] & \leq \frac{1}{2} \frac{\log (c_1 \az \log \az)}{c_1 \az \log \az} \\ &\leq \frac{1}{2 c_1 \az} + \frac{\log \log \az}{2 c_1 \az \log \az} + \frac{\log c_1}{2 c_1 \az \log \az} \leq \frac{1}{\az} \end{align} so long as $\az > 4$. Similarly, we have \begin{align} \bbE_{X\sim p}[X] & \geq \frac{1}{3} \frac{\log r}{r} \end{align} for all $r$. If we choose $r = c_2 \az \log \az$ and set $c_2 = .25$, then \begin{align} \bbE_{X\sim p}[X] & \geq \frac{1}{3} \frac{\log (c_2 \az \log \az)}{c_2 \az \log \az} \geq \frac{1}{\az} \end{align} so long as $\az > \dpfconst$. Changing the value of $c$ in $r = c \az \log \az$ changes the value of $\bbE_{X\sim p}[X]$ continuously. Hence, for each $\az > \dpfconst$, there exists a $c_\az$ with $.25 < c_\az < .75$ so that if $r = c_\az \az \log \az$, then \begin{align} \bbE_{X\sim p}[X] = \frac{1}{\az}\,. \end{align} This proves the result for $\az > 24$; numerical evaluation of $c_\az$ for $\az = 5, 6, \dots, 24$ then confirms that the result holds for all $\az > 4$. \subsubsection{Limiting value of $c_\az$} \label{sec::lim_c_k} \begin{lemma}\label{lem::c_k_half} In the limit, $c_\az \to 1/2$. \end{lemma} \begin{proof} We start with $r = \frac{b_\az}{a_\az} = c_\az \az \log \az$, and we need to meet the condition that \begin{align} \frac{-1}{r} + \frac{ \sqrt{\frac{1}{r} + 1} \log\left( \sqrt{r} + \sqrt{r+1} \right)}{r} = \frac{1}{\az}\,. \end{align} Substituting, we get \begin{align} \frac{1}{\az} &= \frac{-1}{c_\az \az \log \az} + \sqrt{\frac{1}{c_\az \az \log \az} + 1} \nonumber \\ & \quad \quad \frac{ \log\left( \sqrt{c_\az \az \log \az} + \sqrt{c_\az \az \log \az+1} \right)}{c_\az \az \log \az} \\ \implies &c_\az = \frac{-1}{ \log \az} + \sqrt{\frac{1}{c_\az \az \log \az} + 1} \nonumber \\ & \quad \quad \frac{ \log\left( \sqrt{c_\az \az \log \az} + \sqrt{c_\az \az \log \az+1} \right)}{ \log \az} \,. \end{align} Let $c = \lim_{\az \to \infty} c_\az$.
We know that $\lim_{\az \to \infty} c_\az \az \log \az = \infty$ since $c_\az$ is bounded below by $1/4$; additionally, $\log c_\az$ is bounded (above and below) since for $\az > 4$ we have $c_\az \in [1/4, 3/4]$. \begin{align} c &= \lim_{\az \to \infty}\frac{-1}{ \log \az} \eqlinebreakshort + \sqrt{\frac{1}{c_\az \az \log \az} + 1} \eqlinebreakshort\frac{ \log\left( \sqrt{c_\az \az \log \az} + \sqrt{c_\az \az \log \az+1} \right)}{ \log \az} \\ &= 0 + 1 \cdot \lim_{\az \to \infty} \frac{ \log\left( 2\sqrt{c_\az \az \log \az} \right)}{ \log \az}\\ &= \lim_{\az \to \infty} \frac{\log 2 + \frac{1}{2} \log c_\az + \frac{1}{2} \log \az + \frac{1}{2} \log \log \az}{\log \az}\\ & = \frac{1}{2}\,. \end{align} \end{proof} \subsection{Approximate Minimax Compander vs. Minimax Compander} \label{sec::proof_L_appx_minimax_compander} For any $\az$, $c_\az$ can be approximated numerically. To simplify the quantizer, recall that we can use $c_\az \approx \frac{1}{2}$ for large $\az$ to get the approximate minimax compander \eqref{eq::appx-minimax-compander}. This is close to optimal without needing to compute $c_\az$. Here we prove \Cref{thm::approximate-minimax-compander}. \begin{proof} Since $\comp^*_\az, \comp^{**}_\az \in \cF^\dagger$, we know that \begin{align} \singleloss(p,\comp^*_\az) = L^\dagger(p,\comp^*_\az) ~\text{ and }~ \singleloss(p,\comp^{**}_\az) = L^\dagger(p,\comp^{**}_\az)\,. \end{align} We define the corresponding asymptotic local loss functions \begin{align} \locloss^*(x) &= \frac{1}{24} (\comp^*_\az)'(x)^{-2} x^{-1} \\\locloss^{**}(x) &= \frac{1}{24} (\comp^{**}_\az)'(x)^{-2} x^{-1} \end{align} so that our goal is to prove \begin{align} \int \locloss^{**} \, dp \leq (1+\varepsilon) \int \locloss^* \, dp\,.
\end{align} Let $\gamma^* = c_\az (\az \log \az)$ and $\gamma^{**} = \frac{1}{2} (\az \log \az)$ (the constants in $\comp^*_\az$ and $\comp^{**}_\az$ respectively) and let $\phi^*(x) = \arcsinh(\sqrt{\gamma^* x})$ and $\phi^{**}(x) = \arcsinh(\sqrt{\gamma^{**} x})$. Then \begin{align} (\phi^*)'(x) &= \frac{\sqrt{\gamma^*}}{2 \sqrt{x} \sqrt{\gamma^* x+1}} \\ \text{and } ~ (\phi^{**})'(x) &= \frac{\sqrt{\gamma^{**}}}{2 \sqrt{x} \sqrt{\gamma^{**} x+1}} \,. \end{align} Note that $\comp^*_\az(x) = \phi^*(x)/\phi^*(1)$ and $\comp^{**}_\az(x) = \phi^{**}(x)/\phi^{**}(1)$. We now split into two cases: (i) $c_\az > 1/2$ and (ii) $c_\az < 1/2$. In case (i) (which implies $\gamma^* > \gamma^{**}$, and note that $\gamma^*/\gamma^{**} = 2c_\az \leq 1 + \varepsilon$), we get for all $x \in [0,1]$, \begin{align} \frac{(\phi^*)'(x)}{(\phi^{**})'(x)} &= \sqrt{\frac{\gamma^*}{\gamma^{**}}} \sqrt{\frac{\gamma^{**} x + 1}{\gamma^* x + 1}} \\&\quad\quad \in [1, \sqrt{\gamma^*/\gamma^{**}}] \ \subseteq [1, \sqrt{1+\varepsilon}] \end{align} since $\sqrt{\frac{\gamma^{**} x+1}{\gamma^* x+1}} \in [\sqrt{\gamma^{**} / \gamma^*}, 1]$. Because $\gamma^* \geq \gamma^{**}$ and $\arcsinh$ is an increasing function, we know that $\phi^*(1) \geq \phi^{**}(1)$. Thus, for any $x \in [0,1]$, \begin{align} (\comp^{**}_\az)'(x) &= \frac{(\phi^{**})'(x)}{\phi^{**}(1)} \\ &\geq \frac{\frac{1}{\sqrt{1+\varepsilon}}(\phi^{*})'(x)}{\phi^{*}(1)} \\& = \frac{1}{\sqrt{1+\varepsilon}} (\comp^*_\az)'(x) \\ \implies (\comp^{**}_\az)'(x)^{-2} &\leq (1 + \varepsilon) (\comp^{*}_\az)'(x)^{-2} \\ \implies \locloss^{**}(x) &\leq (1 + \varepsilon) \locloss^*(x) \\ \implies \int \locloss^{**} \, dp &\leq (1 + \varepsilon) \int \locloss^* \, dp \end{align} which is what we wanted to prove.
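The pointwise derivative inequality concluding case (i) can be spot-checked numerically. The following Python sketch (ours, purely illustrative; $c = 0.55$ is an arbitrary stand-in for some value $c_\az > 1/2$) verifies $(\comp^{**}_\az)'(x)^{-2} \leq (1+\varepsilon)(\comp^*_\az)'(x)^{-2}$ on a fine grid.

```python
import math

def comp_deriv(x, gamma):
    # derivative of f(x) = arcsinh(sqrt(gamma * x)) / arcsinh(sqrt(gamma))
    return (math.sqrt(gamma)
            / (2.0 * math.sqrt(x) * math.sqrt(gamma * x + 1.0))
            / math.asinh(math.sqrt(gamma)))

K = 1000
c = 0.55                                 # stand-in for some c_K > 1/2 (case (i))
eps = 2.0 * c - 1.0                      # so that gamma*/gamma** = 2c = 1 + eps
gamma_star = c * K * math.log(K)         # minimax compander constant
gamma_dstar = 0.5 * K * math.log(K)      # approximate minimax constant
for i in range(1, 100001):
    x = i / 100000.0
    # check (f**)'(x)^{-2} <= (1 + eps) (f*)'(x)^{-2}
    lhs = comp_deriv(x, gamma_dstar) ** -2
    rhs = (1.0 + eps) * comp_deriv(x, gamma_star) ** -2
    assert lhs <= rhs + 1e-12
```

The inequality holds with slack on the whole grid, consistent with the proof: the ratio $(\phi^*)'/(\phi^{**})'$ peaks at $\sqrt{1+\varepsilon}$ only as $x \to 0$, where the normalization factor $\phi^{**}(1)/\phi^*(1) < 1$ still helps.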
Case (ii), where $c_\az < 1/2$ (implying $\gamma^{**} > \gamma^*$) can be proved analogously: \begin{align} \frac{(\phi^{**})'(x)}{(\phi^*)'(x)} & = \sqrt{\frac{\gamma^{**}}{\gamma^*}} \sqrt{\frac{\gamma^* x + 1}{\gamma^{**} x + 1}} \\ & \quad\quad \in [1, \sqrt{\gamma^{**}/\gamma^*}] \subseteq [1, \sqrt{1+\varepsilon}] \end{align} which then gives us $(\phi^{**})'(x) \geq (\phi^*)'(x)$ and \begin{align} \phi^{**}(1) &= \int_0^1 (\phi^{**})'(t) \, dt \\ &\leq \sqrt{1 + \varepsilon} \int_0^1 (\phi^{*})'(t) \, dt \\ &= (\sqrt{1 + \varepsilon}) \phi^{*}(1) \,. \end{align} Thus, for any $x \in [0,1]$, \begin{align} (\comp^{**}_\az)'(x) &= \frac{(\phi^{**})'(x)}{\phi^{**}(1)} \\ &\geq \frac{(\phi^{*})'(x)}{(\sqrt{1+\varepsilon})\phi^{*}(1)} \\& = \frac{1}{\sqrt{1+\varepsilon}} (\comp^*_\az)'(x) \\ \implies (\comp^{**}_\az)'(x)^{-2} &\leq (1 + \varepsilon) (\comp^{*}_\az)'(x)^{-2} \\ \implies \locloss^{**}(x) &\leq (1 + \varepsilon) \locloss^*(x) \\ \implies \int \locloss^{**} \, dp &\leq (1 + \varepsilon) \int \locloss^* \, dp \end{align} completing the proof for both cases. \end{proof} We compare the theoretical (asymptotic in $\az$) performance of the approximate minimax compander with the experimental results in \Cref{fig::theoretical_minimax}. \iflong \section{Worst-Case Analysis} \label{sec::worst-case_analysis} \newcommand{\len}{\text{len}} In this section, we prove \Cref{thm::worstcase_power_minimax} which applies both to the minimax compander and the power compander. Since we are dealing with the worst case (i.e.\ not a random $\bx$), the centroid is not defined; therefore this theorem works with the \emph{midpoint decoder}. Thus, the (raw) decoded value of $x$ is $\bar{y}_{(n_N(x))}$. Additionally, we are not using the raw reconstruction but the normalized reconstruction, and hence it does not suffice to deal with a single letter at a time. Thus, we will work with a full probability vector $\bx \in \triangle_{\az-1}$.
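As a concrete illustration of this midpoint-decoding setup, the following Python sketch (ours, with our own helper names; not part of the proof) quantizes vectors with the power compander, decodes midpoints, normalizes, and checks the bound $e^2 N^{-2} \log^2 \az$ of \Cref{thm::worstcase_power_minimax} for $N \geq e \log \az$. Random test vectors stand in for, but of course do not exhaust, the worst case.

```python
import math
import random

def power_quantize_kl(x, N):
    # Quantize probability vector x with the power compander f(t) = t**s,
    # s = 1/log(K); decode midpoints of the induced bins, normalize,
    # and return the KL divergence KL(x || z).
    K = len(x)
    s = 1.0 / math.log(K)
    def finv(u):                       # inverse compander f^{-1}(u) = u^{1/s}
        return u ** (1.0 / s)
    ybar = []
    for xi in x:
        n = min(N, max(1, math.ceil(N * xi ** s)))           # bin index n_N(x_i)
        ybar.append(0.5 * (finv((n - 1) / N) + finv(n / N)))  # midpoint decode
    tot = sum(ybar)                    # normalization constant
    return sum(xi * math.log(xi * tot / yi) for xi, yi in zip(x, ybar) if xi > 0)

random.seed(0)
K, N = 100, 1024                       # N = 1024 >= e log K (about 12.5)
bound = math.e ** 2 * math.log(K) ** 2 / N ** 2
for _ in range(50):
    w = [random.expovariate(1.0) for _ in range(K)]
    t = sum(w)
    assert power_quantize_kl([wi / t for wi in w], N) <= bound
```

In these random trials the realized loss sits well below the worst-case bound, as expected.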
\begin{proof}[Proof of \Cref{thm::worstcase_power_minimax} and \eqref{eq::power_worst_case_bound} in \Cref{thm::power_compander_results}] Let $\bx \in \triangle_{\az-1}$ be the vector we are quantizing, with $i$th element (out of $\az$, summing to $1$) $x_i$; since we are dealing with midpoint decoding, our (raw) decoded value of $x_i$ is $\bar{y}_{n_N(x_i)}$. For simplicity, let us denote it as $\bar{y}_i$, and the normalized value as $\normvar_i = \bar{y}_i / \big(\sum_j \bar{y}_j\big)$. Let $\delta_i = \bar{y}_i - x_i$ be the difference between the raw decoded value $\bar{y}_i$ and the original value $x_i$. Then: \begin{align} D_{\kl}&\left( \bx \| \bnormvar \right) = \sum_{i} x_i \log \frac{x_i}{\normvar_i} \\ & = \sum_{i} x_i \log \frac{x_i}{\bar{y}_i} + \log\Big(\sum_i \bar{y}_i\Big) \\ & = \sum_{i} (\bar{y}_i - \delta_i) \log \frac{\bar{y}_i - \delta_i}{\bar{y}_i} + \log\Big(1 + \sum_i \delta_i\Big) \,. \end{align} Next we use that $\log(1+\newvar) \leq \newvar$. \begin{align} D_{\kl}\left( \bx \| \bnormvar \right)& \leq \sum_{i} (\bar{y}_i - \delta_i) \frac{-\delta_i}{\bar{y}_i} + \sum_i \delta_i \label{eq::use_log_ineq} \\ &=\sum_i -\delta_i + \sum_i \frac{\delta_i^2}{\bar{y}_i} + \sum_i \delta_i\\ & =\sum_i \frac{(\bar{y}_i - x_i)^2}{\bar{y}_i} \end{align} (note that in \eqref{eq::use_log_ineq} we used the inequality $\log(1+\newvar) \leq \newvar$ on \emph{both} appearances of the logarithm, as well as the fact that $\bar{y}_i - \delta_i = x_i \geq 0$). We now consider each bin $I^{(n)}$ induced by $\comp$. For simplicity let the dividing points between the bins be denoted by \begin{align} \beta_{(n)} = f^{-1}\Big(\frac{n}{N}\Big) = \bar{y}_{(n)} + r_{(n)}/2 \end{align} (where $r_{(n)}$ is the width of the $n$th bin) so that $I^{(n)} = (\beta_{(n-1)}, \beta_{(n)}]$. Since all the companders we are discussing are strictly monotonic, there is no ambiguity. 
Then, the Mean Value Theorem (which we can use since the minimax compander, the approximate minimax compander, and the power compander are all continuous and differentiable) says that, for each $I^{(n)}$ there is some value $\newvar_{(n)}$ such that \begin{align} f'(\newvar_{(n)}) = \frac{f(\beta_{(n)}) - f(\beta_{(n-1)})}{\beta_{(n)} - \beta_{(n-1)}} = N^{-1} r_{(n)}^{-1} \end{align} (since $f(\beta_{(n)}) - f(\beta_{(n-1)}) = n/N - (n-1)/N = 1/N$ and $\beta_{(n)} - \beta_{(n-1)} = r_{(n)}$ by definition). Thus, we can re-write this as follows: \begin{align} r_{(n)} = N^{-1} f'(\newvar_{(n)})^{-1} \, . \end{align} We will also denote the following for simplicity: $I_i = I^{(n_N(x_i))}$; $r_i = r_{(n_N(x_i))}$; and $\newvar_i = \newvar_{(n_N(x_i))}$ (the bin, bin length, and bin mean value corresponding to $x_i$). Since $\newvar_i \in I_i$ and the left endpoint of $I_i$ is nonnegative (so that $\bar{y}_i \geq r_i/2$ and hence the right endpoint of $I_i$ is at most $2\bar{y}_i$), we know that $\frac{\newvar_i}{2} \leq \bar{y}_i$. Thus, we can derive (since $\bar{y}_i$ is the midpoint of $I_i$ and $x_i \in I_i$, we know that $|\bar{y}_i - x_i| \leq r_i/2$) that \begin{align} D_{\kl} \left( \bx \| \bnormvar \right)& \leq \sum_i \frac{(\bar{y}_i - x_i)^2}{\bar{y}_i}\\ & \leq \frac{1}{4}\sum_i \frac{r_i^2}{\bar{y}_i}\\ & \leq \frac{1}{4}\sum_i \frac{1}{N^2 (\newvar_i / 2) (f'(\newvar_i))^2}\\ & = \frac{1}{2} N^{-2} \sum_i \frac{1}{ \newvar_i (f'(\newvar_i))^2}\,. \label{eq::worst-case-general-mean-value-bound} \end{align} Note that while we are using midpoint decoding for our quantization, for the purposes of analysis, it is more convenient to express all the terms in the KL divergence loss using the mean value. We now examine the worst case performance of the three companders: the power compander, the minimax compander, and the approximate minimax compander. \emph{Power compander:} In this case, we have \begin{align} f(x) = x^{s} \text{ and } f'(x) = s x^{s-1} \end{align} for $s = \frac{1}{\log \az}$ (which is optimal for minimizing raw distortion against worst-case priors).
This yields \begin{align} D_{\kl}\left( \bx \| \bnormvar \right)& \leq \frac{1}{2} N^{-2} s^{-2} \sum_i \frac{1}{\newvar_i \newvar_i^{2s - 2}}\\ & = \frac{1}{2} N^{-2} s^{-2}\sum_i \newvar_i^{1 - 2s} \,. \end{align} So long as $s < 1/2$ (which occurs for $\az > 7$), the function $\newvar_i^{1 - 2s}$ is concave in $\newvar_i$. Thus, replacing all $\newvar_i$ by their average will increase the value. Furthermore, $\az^s = \az^{\frac{1}{\log \az}} = e$. Thus, we can derive: \begin{align} D_{\kl}\left( \bx \| \bnormvar \right)& \leq \frac{1}{2} N^{-2} s^{-2} \az \left(\frac{\sum_i \newvar_i}{\az}\right)^{1 - 2s} \\ & = \frac{1}{2} N^{-2} (\log^2 \az) e^2 \Big(\sum_i \newvar_i\Big)^{1 - 2s} \\ & \leq \frac{e^2}{2} N^{-2} (\log^2 \az) \max\Big\{1, \sum_i \newvar_i\Big\}\label{eq::worst_case_power_bound_max} \,. \end{align} Next, we need to bound $\max\left\{1, \sum_i \newvar_i\right\}$. Assume that $\sum_i \newvar_i > 1$ (otherwise our bound is just $1$). Then, we note the following: $\sum_i x_i = 1$ by definition; $s^{-1} = \log \az$; and \begin{align} r_i = N^{-1} f'(\newvar_i)^{-1} = N^{-1} s^{-1} \newvar_i^{1-s} \,. \end{align} This allows us to make the following derivation: \begin{align} \sum_i | \newvar_i - x_i | &\leq \frac{1}{2} \sum_i r_i \\ \implies \sum_i \newvar_i &\leq \sum_i x_i + \frac{1}{2} N^{-1} s^{-1} \sum_i \newvar_i^{1-s} \\ &\leq 1 + \frac{1}{2} N^{-1} \log(\az) \az \left(\frac{\sum_i \newvar_i}{\az}\right)^{1-s} \label{eq::concavity-saves-the-day}\\ &= 1 + \frac{e}{2} N^{-1} \log(\az) \Big(\sum_i \newvar_i\Big)^{1-s} \label{eq::zounds} \\ &\leq 1 + \frac{e}{2} N^{-1} \log(\az) \Big(\sum_i \newvar_i\Big) \,. \end{align} We get \eqref{eq::concavity-saves-the-day} by the same concavity trick: because $\newvar_i^{1-s}$ is concave in $\newvar_i$, replacing each individual $\newvar_i$ with their average can only increase the sum. We get \eqref{eq::zounds} because $\az^s = \az^{\frac{1}{\log \az}} = e$. We can combine terms with $\sum_i \newvar_i$. 
\begin{align} \left(1 - \frac{e}{2} N^{-1} \log \az \right) \sum_i \newvar_i \leq 1\,. \end{align} This implies that if $N > \frac{e}{2} \log \az$, then \begin{align} \sum_i \newvar_i &\leq \frac{1}{1 - \frac{e}{2} N^{-1} \log \az}\\ &= \frac{N}{N -\frac{e}{2} \log \az} = 1 + \frac{e}{2} \frac{\log \az}{N - \frac{e}{2}\log \az} \label{eq::worst_case_power_bound_sumy}\,. \end{align} Furthermore, if $N \geq e \log \az$, we get that $\sum_i \newvar_i \leq 2$. Combining \eqref{eq::worst_case_power_bound_max} with \eqref{eq::worst_case_power_bound_sumy}, we have \begin{align} \eqstartnonumshort D_{\kl}( \bx \| \bnormvar ) \eqbreakshort &\leq \frac{e^2}{2} N^{-2} (\log^2 \az) \max\left\{1, \left( 1 + \frac{e}{2} \frac{\log \az}{N - \frac{e}{2}\log \az}\right) \right\}\\ & = \frac{e^2}{2} N^{-2} (\log^2 \az) \left( 1 + \frac{e}{2} \frac{\log \az}{N - \frac{e}{2}\log \az}\right) \end{align} for $N > \frac{e}{2} \log \az$. When $N \geq e \log \az$, this becomes the pleasing bound \begin{align} D_{\kl}( \bx \| \bnormvar ) \leq e^2 N^{-2} \log^2 \az\,. \end{align} \emph{Minimax compander and approximate minimax compander:} Since they are very similar in form, it is convenient to do both at once. Let $c$ be a constant which is either $c_\az$ if we are considering the minimax compander, or $\frac{1}{2}$ if we are considering the approximate minimax compander; and let $\gamma = c \az \log \az$. Then our compander and its derivative will have the form \begin{align} f(x) &= \frac{\arcsinh(\sqrt{\gamma x})}{\arcsinh(\sqrt{\gamma})} \\ f'(x) &= \frac{1}{2 \arcsinh(\sqrt{\gamma})} \frac{\sqrt{\gamma}}{\sqrt{x}\sqrt{1 + \gamma x} } \\ \implies f'(x)^{-1} &= 2 \arcsinh(\sqrt{\gamma}) \sqrt{\frac{x}{\gamma} + x^2}\,. \end{align} This then yields that \begin{align} r_i &= N^{-1} f'(\newvar_i)^{-1} \\ &= 2 N^{-1} \arcsinh(\sqrt{\gamma}) \sqrt{\frac{\newvar_i}{\gamma} + \newvar_i^2}\,.
\end{align} Then we can derive from \eqref{eq::worst-case-general-mean-value-bound} that \begin{align} &D_{\kl}( \bx \| \bnormvar) \leq \frac{1}{2} N^{-2} (2 \arcsinh(\sqrt{\gamma}))^2 \sum_i \frac{\frac{\newvar_i}{\gamma} + \newvar_i^2}{\newvar_i} \\ &= 2 N^{-2} (\arcsinh(\sqrt{\gamma}))^2 \left(\frac{\az}{\gamma} + \sum_i \newvar_i \right) \\ &\leq 2 N^{-2} (\arcsinh(\sqrt{\gamma}))^2 \left(\frac{\az}{\gamma} + \max\left\{1,\sum_i \newvar_i\right\} \right) \label{eq::worst-case-kl-bound-minimax}\,. \end{align} Assuming that $\sum_i \newvar_i > 1$ (otherwise the max is just $1$), \begin{align} \sum_i | \newvar_i - x_i | &\leq \sum_i \frac{r_i}{2} \\ \implies \sum_i \newvar_i &\leq \sum_i x_i + N^{-1} \arcsinh(\sqrt{\gamma}) \sum_i \sqrt{\frac{\newvar_i}{\gamma} + \newvar_i^2} \\ &= 1 + N^{-1} \arcsinh(\sqrt{\gamma}) \sum_i \sqrt{\frac{\newvar_i}{\gamma} + \newvar_i^2} \label{eq::minimax_sum_yi_bound}\,. \end{align} To bound the sum in \eqref{eq::minimax_sum_yi_bound}, using the fact that $\sqrt{\cdot}$ is subadditive and concave (so averaging the inputs of a sum of square roots makes it bigger), we get \begin{align} \sum_i \sqrt{\frac{\newvar_i}{\gamma} + \newvar_i^2} & \leq \sum_i \left( \sqrt{\frac{\newvar_i}{\gamma}} + \sqrt{\newvar_i^2} \right) \\ &\leq \az \left(\frac{\sum_i \newvar_i}{\az (c \az \log \az)}\right)^{1/2} + \sum_i \newvar_i \\ &\leq \left(\frac{\sum_i \newvar_i}{c \log \az}\right)^{1/2} + \sum_i \newvar_i \\ &\leq \frac{\sum_i \newvar_i}{(c \log \az)^{1/2}} + \sum_i \newvar_i \\ &= \Big( \sum_i \newvar_i \Big) \left(1 + \frac{1}{(c \log \az)^{1/2}}\right) \\ &= \eta\Big( \sum_i \newvar_i\Big) \end{align} where $\eta = 1 +(c \log \az)^{-1/2}$. Then \eqref{eq::minimax_sum_yi_bound} becomes \begin{align} \sum_i \newvar_i &\leq 1 + \eta N^{-1} \arcsinh(\sqrt{\gamma}) \Big( \sum_i \newvar_i\Big) \,. \end{align} Since we have $\sum_i \newvar_i$ on both sides of the inequality, we can combine these terms like before.
\begin{align} &(1 - \eta N^{-1} \arcsinh(\sqrt{\gamma})) \sum_i \newvar_i \leq 1 \\ \implies &\sum_i \newvar_i \leq \frac{N}{N - \eta \arcsinh(\sqrt{\gamma})} \end{align} if $N > \eta \arcsinh (\sqrt{\gamma}) $. Combining these and using the expression $\arcsinh(\sqrt{\newvar}) = \log (\sqrt{\newvar + 1} + \sqrt{\newvar}) \leq \log(2 \sqrt{\newvar} + 1)$ we get from \eqref{eq::worst-case-kl-bound-minimax} that \begin{align} \eqstartnonumshort D_{\kl}( \bx\| \bnormvar) \eqbreakshort &\leq 2 N^{-2} (\arcsinh(\sqrt{\gamma}))^2 \eqlinebreakshort \left(\frac{\az}{\gamma} + \frac{N}{N - \eta \arcsinh(\sqrt{\gamma})}\right) \\ & = 2 N^{-2} (\arcsinh(\sqrt{c \az \log \az}))^2 \eqlinebreakshort \left(\frac{\az}{c \az \log \az} + \frac{N}{N - \eta \arcsinh(\sqrt{c \az \log \az})}\right) \\ & \leq 2N^{-2} (\log(2\sqrt{c \az \log \az} + 1))^2 \eqlinebreakshort\left(\frac{1}{c \log \az} + \frac{N}{N - \eta \log(2\sqrt{c \az \log \az} + 1)}\right) \,. \end{align} This holds for all $N > \eta \log(2\sqrt{c \az \log \az} + 1)$; furthermore, if $N > 3 \eta\log(2\sqrt{c \az \log \az} + 1)$, the second term in the parentheses is at most $3/2$ (and if $N$ is larger, this term goes to $1$). Recall $c$ is between $1/4$ and $3/4$ (as it is either $c_\az$ or $1/2$) when $\az >4$. Then, we know that for all $\az >4$ that $\eta < 2.57 \dots$ and $1/(c \log \az) < 5/2$. Thus, for \begin{align} N &> 8\log(2\sqrt{c \az \log \az} + 1) \\ &> 3 (2.6) \log(2\sqrt{c \az \log \az} + 1) \\ &> 3 \eta\log(2\sqrt{c \az \log \az} + 1) \end{align} we can bound the entire parenthesis term by $4$. Then, \begin{align} D_{\kl}&( \bx \| \bnormvar) \leq 8 N^{-2} (\log(2\sqrt{c \az \log \az} + 1))^2 \\ &\leq 8 N^{-2} (\log(3\sqrt{c \az \log \az}))^2 \\ &= 2 N^{-2} (\log (c \az \log \az) + 2\log 3)^2 \label{eq::numerical-bd-01} \\ & = 2 N^{-2} \Big(1 + O\Big(\frac{\log \log \az}{\log \az} \Big)\Big) \log^2 \az \,. 
\end{align} Note that whether $c$ is $c_\az$ or $1/2$, it is always between $1/4$ and $3/4$, and so it has no effect on the order of growth. We also note that the above (stated more crudely) is an order of growth within $O(N^{-2}\log^2 \az)$. We can obtain a relatively clean upper bound on the error term $O\big(\frac{\log \log \az}{\log \az}\big)$ by setting $c = 3/4$ (which is larger than the whole range of possible values); in this case, numerically computing \eqref{eq::numerical-bd-01}, we get that the error term is at most $18 \frac{\log \log \az}{\log \az}$ for $\az > 4$. The quantity $18 \frac{\log \log \az}{\log \az}$ has a maximum value of around $6.62183$. \end{proof} The statement above (which is used for \Cref{thm::worstcase_power_minimax}) computes constants for our bound which work for both the minimax compander and approximate minimax compander and only requires that $\az > 4$. If we are only concerned with large alphabet sizes, to improve the constants for the approximate minimax compander (where $c = 1/2$), we can instead use the following: For $\az \geq 55$ and $N > 6 \log(2\sqrt{c \az \log \az} + 1)$, \begin{align} D_{\kl}&( \bx \| \bnormvar) \leq N^{-2} \Big(1 + 6\frac{\log \log \az}{\log \az} \Big) \log^2 \az \,. \end{align} \section{Uniform Quantization} \label{sec::uniform} In this section, we examine the performance of uniform quantization under KL divergence loss. This is the same as applying the truncate compander. First, we will prove \eqref{eq::uniform_achieve} of \Cref{rmk::loss_with_uniform}. \begin{proof}[Proof of \eqref{eq::uniform_achieve}] Let $p$ be the single-letter distribution which is uniform over $\left[0, {2}/{\az}\right]$ for each symbol. Specifically, the probability density function is \begin{align} p(x) = \frac{\az}{2} \text{ for } x \in \left[0, \frac{2}{\az}\right] \end{align} and since the expected value under $p$ is $1/\az$, we have that $p \in \cP_{1/\az}$.
We want to compute the single-letter loss for $p$, but notice that we cannot use \Cref{thm::asymptotic-normalized-expdiv} to do so, since the quantity $L^\dagger(p, f)$ is not finite here (this is not surprising since we are showing a case where the dependence of $\widetilde{L}(p,f,N)$ on $N$ is larger than $\Theta(N^{-2})$). Thus we need to compute the single-letter loss starting with \eqref{eq::raw-ssl}. \begin{align} \singleloss(p, \comp, N) &= \bbE_{X \sim p} \big[ X \log ( X/\widetilde{y}(X)) \big] \\& = \sum_{n = 1}^N \int_{I^{(n)}} p(x) x \log \frac{x}{\tilde y_n} dx \\& = \sum_{n = 1}^N \int_{I^{(n)}} \bbI\{x < 2/\az\}\frac{\az}{2} x \log \frac{x}{\tilde y_n} dx \\ &\geq \frac{\az}{2}\sum_{n = 1}^{\lfloor 2N /\az \rfloor} \int_{(n-1)/N}^{n/N} x \log \frac{x}{\tilde y_n} dx \\ &= \frac{\az}{2}\sum_{n = 1}^{\lfloor 2N /\az \rfloor} \int_{\tilde y_n - \frac{r}{2}}^{\tilde y_n + \frac{r}{2}} x \log \frac{x}{\tilde y_n} dx \end{align} where we let $r = 1/N$. Using the Taylor expansion for $\log(1+w)$, we can get that \begin{align} \int_{\tilde y_n - \frac{r}{2}}^{\tilde y_n + \frac{r}{2}} x \log \frac{x}{\tilde y_n} dx = \frac{r^3}{24 \tilde y_n} + O\left(\frac{r^5}{\tilde y_n^3} \right)\,. \end{align} This gives that \begin{align} \singleloss(p, \comp, N) &\geq \frac{\az}{2}\sum_{n = 1}^{\lfloor 2N /\az \rfloor} \frac{r^3}{24 \tilde y_n} - O\left(\frac{r^5}{\tilde y_n^3} \right) \\& = \frac{\az}{48} \frac{1}{ N^3} \sum_{n = 1}^{\lfloor 2N /\az \rfloor} \frac{1}{ \tilde y_n} - \sum_{n = 1}^{\lfloor 2N /\az \rfloor} O\left(\frac{1}{N^5 \tilde y_n^3} \right)\,. \end{align} Because the intervals are uniform, the centroid is the midpoint of each interval, which means that \begin{align} \tilde y_n = \frac{n - 1/2}{N}\,.
\end{align} This gives that \begin{align} \sum_{n = 1}^{\lfloor 2N /\az \rfloor} \frac{1}{\tilde y_n} &= \sum_{n = 1}^{\lfloor 2N /\az \rfloor} \frac{1}{ \frac{n - 1/2}{N}} \\&> N \sum_{n = 1}^{\lfloor 2N /\az \rfloor} \frac{1}{ n} \\ & > C_1 N \log (2N /\az )\,. \end{align} We also need to bound the smaller order terms to make sure they are not too big: \begin{align} \sum_{n = 1}^{\lfloor 2N /\az \rfloor} \frac{1}{\tilde y_n^3} &< N^3\left(2^3 + \sum_{n = 2}^{\lfloor 2N /\az \rfloor} \frac{1}{ (n - 1)^3}\right) \\ & \leq N^3 C_3 \end{align} where $C_3$ is a constant (e.g.\ $C_3 = 2^3 + \sum_{n=1}^\infty n^{-3}$ suffices). Combining these gives \begin{align} \singleloss(p, \comp, N) & \geq \frac{\az}{48 N^3} C_1 N \log (2N /\az ) - O\left(\frac{1}{N^2} \right) \\& = \Omega\left(\frac{\az}{N^2} \log N \right)\,. \end{align} All the inequalities we used for the lower bound can easily be adjusted to make an upper bound. For instance, the floor function in the summation can be replaced with a ceiling function. The quantity $\tilde y_n$ can be rounded up or down and the inequalities approximating sums can have different multiplicative constants. This gives that for $p(x)$, we have \begin{align} \singleloss(p, \comp, N) = \Theta\left(\frac{\az}{N^2} \log N \right)\,. \end{align} Combining this single-letter density with the proof of \Cref{prop::bound_worstcase_prior_exist} gives a prior $P$ over the simplex so that \begin{align} \rawloss_\az(P, \comp, N) = \az \singleloss(p, \comp, N) = \Theta\left(\frac{\az^2}{N^2} \log N\right)\label{eq::raw_loss_showing_uniform_kover2} \end{align} when $f$ is the truncate compander. We want to relate the raw loss in \eqref{eq::raw_loss_showing_uniform_kover2} to the expected loss $\cL_\az(P, \comp, N)$. This requires us to look at the normalization constant.
\begin{align} \bbE_{\bX \sim P}& \left[\log \left(\sum_{k = 1}^\az \tilde y_k \right) \right] \\&= \bbE_{\bX \sim P} \left[\log \left(\sum_{k = 1}^\az \tilde y_k - \sum_{k = 1}^\az x_k + \sum_{k = 1}^\az x_k\right) \right] \\ & = \bbE_{\bX \sim P} \left[\log \left(\sum_{k = 1}^\az \delta_k + 1\right) \right] \end{align} where $\delta_k = \tilde y_k - x_k$. We can bound \begin{align} -\frac{1}{2N}\leq \delta_k \leq \frac{1}{2N} \\ -\frac{\az}{2N}\leq \sum_{k = 1}^\az \delta_k \leq \frac{\az}{2N}\,. \end{align} Additionally, we know that by construction, \begin{align} \bbE_{\bX \sim P} \left[\sum_{k = 1}^\az \delta_k \right] = \sum_{k = 1}^\az \bbE_{\bX \sim P}\left[\tilde{y}_k - X_k\right] = 0 \end{align} since $\tilde{y}_k$ is produced by the centroid decoder. Therefore, since $\log$ is concave, we have (for, say, $N \geq \az$) \begin{align} \bbE_{\bX \sim P} &\left[\log \left(\sum_{k = 1}^\az \delta_k + 1\right) \right] \\ &\geq \frac{1}{2} \left( \log \left(1 - \frac{\az}{2N}\right) + \log \left(1 + \frac{\az}{2N}\right) \right) \\ &= \frac{1}{2} \log \left(1 - \frac{\az^2}{4N^2}\right) \\ &\geq -\frac{1}{4} \az^2 N^{-2} \end{align} where the last inequality uses $\log(1-w) \geq -2w$ for $w \leq 3/4$. But this means that \begin{align} -\bbE_{\bX \sim P} \left[\log \left(\sum_{k = 1}^\az \delta_k + 1\right) \right] = O\left(\frac{\az^2}{N^2}\right) \end{align} and hence by the proof of \Cref{lem::im-a-barby-girl} \begin{align} \cL(P,f,&N) \\ &= \widetilde{\cL}(P,f,N) + \bbE_{\bX \sim P} \left[\log \left(\sum_{k = 1}^\az \delta_k + 1\right) \right] \\ &= \Theta\left(\frac{\az^2}{N^2} \log N\right) + O\left(\frac{\az^2}{N^2}\right) \\ &= \Theta\left(\frac{\az^2}{N^2} \log N\right) \end{align} since the extra $\log N$ factor causes the first term to dominate the second.
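The $\Theta\big(\frac{\az}{N^2}\log N\big)$ single-letter scaling just established can also be observed directly. The following Python sketch (an informal numerical check with our own helper names) evaluates $\singleloss(p, \comp, N)$ for the truncate compander exactly per bin, using the closed-form antiderivative of $x \log(x/c)$, and confirms that the ratio of losses at two granularities tracks the predicted growth.

```python
import math

def uniform_loss(K, N):
    # Single-letter loss of the truncate (uniform) compander against
    # p uniform on [0, 2/K], with centroid decoding; each bin's integral
    # of x * log(x / y_n) is computed in closed form.
    def F(x, c):
        # antiderivative of x * log(x / c)
        return 0.0 if x == 0.0 else x * x / 2.0 * math.log(x / c) - x * x / 4.0
    total = 0.0
    for n in range(1, N + 1):
        lo, hi = (n - 1) / N, min(n / N, 2.0 / K)
        if hi <= lo:
            break                     # no mass beyond 2/K
        y = 0.5 * (lo + hi)           # centroid of a uniform bin = its midpoint
        total += F(hi, y) - F(lo, y)
    return K / 2.0 * total            # density p(x) = K/2 on [0, 2/K]

K = 64
l1, l2 = uniform_loss(K, 2 ** 10), uniform_loss(K, 2 ** 14)
# Theta((K/N^2) log N): the loss ratio should track (N2/N1)^2 * log N1 / log N2
# up to an additive constant inside the logs -- check within a factor of 2
predicted = (2 ** 14 / 2 ** 10) ** 2 * math.log(2 ** 10) / math.log(2 ** 14)
assert 0.5 * predicted < l1 / l2 < 2.0 * predicted
```

The observed ratio lands close to the prediction, reflecting the extra $\log N$ factor that the uniform quantizer incurs.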
\end{proof} The density $p(x)$ which produces \eqref{eq::raw_loss_showing_uniform_kover2} is not necessarily the worst possible density function in terms of the dependence of raw loss on the granularity $N$; however, it achieves simultaneously a worse-than-$\Theta(N^{-2})$ dependence on $N$ and a very large dependence on the alphabet size $\az$ (namely $\Theta(\az^2)$) with the uniform quantizer (i.e. truncation), and is therefore an ideal example of why the uniform quantizer is vulnerable to having poor performance. For illustration, we will also sketch an analysis of the performance of the uniform quantizer against the prior $p(x) = (1-\alpha) x^{-\alpha}$ where $\alpha = \frac{\az-2}{\az-1}$ (as mentioned in \Cref{rmk::loss_with_uniform}); this is constructed so that $\bbE_{X \sim p}[X] = 1/\az$ and hence $p \in \cP_{1/\az}$. The analysis shows that the loss is proportional to $N^{-(2-\alpha)}$. Let $N$ be large; for this sketch we will treat $p$ as roughly uniform over any bin $I^{(n)} := ((n-1)/N, n/N]$. Note that this does not strictly hold for small $n$ (no matter how large $N$ gets, $p$ never becomes approximately uniform over e.g. $I^{(1)}$) but this inaccuracy is most pronounced on the first interval $I^{(1)} = (0,1/N]$. Additionally, $p$ on $(0,1/N]$ is a stretched and scaled version of $p$ on $(0,1]$; for $n = 2, 3, \dots, N$, the distribution $p$ over $I^{(n)}$ is closer to being uniform, and hence the distortion over any bin under $p$ can be bounded below (and above) by a constant multiple of the distortion under a uniform distribution (the constant can depend on $\az$ but not $N$). Thus for determining the dependence of the (raw) distortion on $N$, this simplification does not affect the result.
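In fact, the $N^{-(2-\alpha)}$ scaling stated above can be verified without this simplification, since the per-bin distortion with centroid decoding has a closed form for this $p$. The following Python sketch (ours, purely illustrative) computes the raw distortion exactly and estimates the exponent from two granularities.

```python
import math

def raw_distortion(K, N):
    # E_{X~p}[X log(X / y(X))] for p(x) = (1-a) x^{-a}, a = (K-2)/(K-1),
    # with uniform bins ((n-1)/N, n/N] and centroid decoding.
    a = (K - 2) / (K - 1)
    b1, b2 = 1.0 - a, 2.0 - a            # exponents for the two bin moments
    def I_log(x, c):
        # antiderivative of x^{1-a} * log(x / c)
        return x ** b2 / b2 * (math.log(x / c) - 1.0 / b2) if x > 0.0 else 0.0
    total = 0.0
    for n in range(1, N + 1):
        lo, hi = (n - 1) / N, n / N
        mass1 = (hi ** b2 - lo ** b2) / b2   # integral of x^{1-a} over the bin
        massx = (hi ** b1 - lo ** b1) / b1   # integral of x^{-a} over the bin
        y = mass1 / massx                    # centroid E[X | X in bin]
        total += I_log(hi, y) - I_log(lo, y)
    return (1.0 - a) * total

K = 10
a = (K - 2) / (K - 1)
d1, d2 = raw_distortion(K, 2 ** 8), raw_distortion(K, 2 ** 12)
# estimated decay exponent should be close to 2 - a (about 1.11), not 2
est = math.log(d1 / d2) / math.log(2 ** 12 / 2 ** 8)
assert abs(est - (2.0 - a)) < 0.1
```

By scale-homogeneity of the integrand, each bin's contribution is exactly $N^{-(2-\alpha)}$ times an $N$-independent quantity, so the estimated exponent converges quickly.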
Then, the expected distortion given that $X \in I^{(n)}$ is proportional (roughly) to $N^{-2} (n/N)^{-1} = n^{-1} N^{-1}$ (since the interval has width $\propto \, N^{-1}$ and is centered at a point $\propto \, n/N$), and the probability of falling into $I^{(n)}$ is proportional to $(n/N)^{1-\alpha} - ((n-1)/N)^{1-\alpha} \approx n^{-\alpha} N^{-(1-\alpha)}$; therefore (up to a multiplicative factor which is constant in $N$) the expected distortion is roughly \begin{align} \sum_{n=1}^N n^{-1} N^{-1} n^{-\alpha} N^{-(1-\alpha)} = N^{-(2-\alpha)} \sum_{n=1}^N n^{-(1+\alpha)} \,. \end{align} But, noting that $\sum n^{-(1+\alpha)}$ is a convergent series, we can apply an upper bound \begin{align} \sum_{n=1}^N n^{-(1+\alpha)} < \sum_{n=1}^\infty n^{-(1+\alpha)} \end{align} which is a (finite) constant which depends only on $\az$ (through $\alpha$) but not $N$. Hence, we obtain our $\Theta(N^{-(2-\alpha)}) = \Theta(N^{-2} \cdot N^{\alpha})$ order for the distortion. We note that as discussed this is worse than $\Theta(N^{-2} \log N)$. \section{Connection to Information Distillation Details} \label{sec::info_distillation_detail} In this section, we go over the technical results connecting quantizing probabilities with KL divergence and information distillation (discussed in \Cref{sec::info_distillation_main}), in particular the proof of \Cref{prop:infodist}, which shows that information distillers and quantizers under KL divergence have a close connection. In this section, we will use the notation $\widetilde{B}$ to denote $h(B)$. We denote by $P_A, P_B$ the marginals of $A$ and $B$ under the joint distribution $P_{A,B}$. 
\subsection{Equivalent Instances of Information Distillation and Simplex Quantization} We consider an information distillation instance, consisting of a joint probability distribution $P_{A,B}$ over $\cA \times \cB$ where $|\cA| = \az$ (and $\cB$ can be arbitrarily large or even uncountably infinite) and a number of labels $M$ to which we can distill; WLOG we will assume $\cA = [\az]$. The objective of information distillation is to find a distiller $h : \cB \to [M]$ which preserves as much mutual information with $A$ as possible, i.e. minimizes the loss \begin{align} \idloss(P_{A,B}, h) := I(A;B) - I(A;\widetilde{B}) \end{align} where $\widetilde{B} = h(B)$.\footnote{We do not include the parameter $M$ in the loss expression because it is already implicitly included as the range of the distiller $h$.} We denote an instance of the information distillation problem as $(P_{A,B}, M)_{\ID}$. What is important about $b \in \cB$ for information distillation is what $B = b$ implies about $A$. We therefore denote by $\bx(b) \in \triangle_{\az-1}$ the conditional probability of $A$ given $B = b$, i.e. \begin{align} x_a(b) = P_{A|B}(a|b) = \bbP[A = a \, | \, B = b] \,. \end{align} This then suggests a way to define the equivalent simplex quantization instance to a given information distillation instance. Recall that a simplex quantization instance (with average KL divergence loss) consists of a prior $P$ over $\triangle_{\az-1}$ and a number of quantization points $M$; the goal is to find a quantizer $\bnormvar : \triangle_{\az-1} \to \triangle_{\az-1}$ such that its range $\cZ$ has cardinality $M$ (or less) and which minimizes the expected KL divergence loss \begin{align} \sqloss(P,\bnormvar) := \bbE_{\bX \sim P}[D_{\kl}(\bX \| \bz(\bX) )]\,. \end{align} We denote an instance of the simplex quantization problem (with average KL divergence loss) as $(P,M)_{\SQ}$. 
\begin{definition} We call an information distillation instance $(P_{A,B},M)_{\ID}$ and a simplex quantization instance $(P,M)_{\SQ}$ \emph{equivalent} if they use the same value of $M$ and $P$ is the push-forward distribution induced by $\bx(\cdot)$ on $P_B$, i.e. \begin{align} B \sim P_B \implies \bX = \bx(B) \sim P\,. \end{align} We denote this $(P_{A,B},M)_{\ID} \equiv (P,M)_{\SQ}$. \end{definition} We show that any instance of one problem has at least one equivalent instance of the other. \begin{lemma} \label{lem::equiv_instances_01} For any information distillation instance $(P_{A,B},M)_{\ID}$, there is some $(P,M)_{\SQ}$ such that $(P_{A,B},M)_{\ID} \equiv (P,M)_{\SQ}$ and vice versa. \end{lemma} \begin{proof} In either direction, the equivalent instance uses the same number of labels/quantization points $M$. Given an information distillation instance with joint distribution $P_{A,B}$, we have a well-defined function $\bx : \cB \to \triangle_{\az-1}$ and therefore the push-forward distribution $P$ of $P_B$ under $\bx(\cdot)$ is well-defined, giving us the equivalent instance $(P,M)_{\SQ}$. Given a simplex quantization instance with prior $P$, we let $\cB = \triangle_{\az-1}$ and let $P_{A,B} = P_{A|B} P_B$ given by $P_B = P$ (a probability distribution over $\triangle_{\az-1}$) and $P_{A|B}(a|b) = x_a(b)$, i.e. $A$ is distributed on $\cA = [\az]$ according to $B \in \triangle_{\az-1}$. Then $\bx(\cdot)$ is just the identity function and therefore $P = P_B$ is the push-forward distribution as we need. \end{proof} Note that each information distillation instance $(P_{A,B},M)_{\ID}$ has a unique equivalent simplex quantization instance (since $P$ is determined by being the push-forward distribution of $P_B$), whereas each simplex quantization instance $(P,M)_{\SQ}$ may have many different equivalent information distillation instances, as $\cB$ can be arbitrarily large and elaborate.
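The equivalence can be made concrete on a toy instance. The following Python sketch (ours; the particular joint distribution is arbitrary) builds a small $(P_{A,B}, M)_{\ID}$ instance, forms the quantization view via $\bx(\cdot)$, and checks the standard identity $I(A;B) - I(A;\widetilde{B}) = \bbE_B\big[D_{\kl}\big(\bx(B) \,\|\, \bz^{(h(B))}\big)\big]$, where $\bz^{(j)}$ is the conditional mean of $\bx(B)$ given $h(B) = j$.

```python
import math
from collections import defaultdict

def mutual_info(joint):
    # joint: dict mapping (a, b) -> probability
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return sum(p * math.log(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# toy instance: A in {0,1}, B in {0,1,2,3}; h merges B-values into M = 2 labels
joint = {(0, 0): 0.20, (1, 0): 0.05, (0, 1): 0.15, (1, 1): 0.10,
         (0, 2): 0.05, (1, 2): 0.20, (0, 3): 0.10, (1, 3): 0.15}
h = {0: 0, 1: 0, 2: 1, 3: 1}

# distillation loss: I(A; B) - I(A; h(B))
joint_t = defaultdict(float)
for (a, b), p in joint.items():
    joint_t[(a, h[b])] += p
id_loss = mutual_info(joint) - mutual_info(dict(joint_t))

# equivalent quantization loss: E_B[ KL( x(B) || z^{(h(B))} ) ]
pb = defaultdict(float)
for (a, b), p in joint.items():
    pb[b] += p
x = {b: {a: joint[(a, b)] / pb[b] for a in (0, 1)} for b in pb}   # x(b)
pj = defaultdict(float)
for b, p in pb.items():
    pj[h[b]] += p
z = {j: {a: joint_t[(a, j)] / pj[j] for a in (0, 1)} for j in pj}  # centroids
sq_loss = sum(pb[b] * sum(x[b][a] * math.log(x[b][a] / z[h[b]][a])
                          for a in (0, 1)) for b in pb)

assert abs(id_loss - sq_loss) < 1e-12
```

The two losses agree exactly, since the conditional-mean (centroid) reconstruction makes the mutual-information drop equal to the expected KL divergence.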
The goal will be to show that if we have equivalent instances $(P_{A,B},M)_{\ID} \equiv (P,M)_{\SQ}$ then a distiller $h$ for $(P_{A,B},M)_{\ID}$ will have an `equivalent' quantizer $\bnormvar$ for $(P,M)_{\SQ}$ (achieving the same loss) and vice versa. This is generally achieved by the following scheme: we arbitrarily label the $M$ elements of $\cZ$ as $\bnormvar^{(j)}$ for $j \in [M]$, so \begin{align} \cZ = \{\bnormvar^{(1)}, \dots, \bnormvar^{(M)}\} \, . \end{align} Then we will generally have equivalence between $h$ and $\bnormvar$ if the following relation holds: \begin{align} \bnormvar(\bx(b)) = \bnormvar^{(h(b))} ~~\text{ for all } b \in \cB \,. \end{align} Then we will derive \begin{align} \idloss(P_{A,B}, h) = \sqloss(P, \bnormvar) \,. \end{align} However, as mentioned, this may be true (and/or possible) only if $h$ or $\bnormvar$ avoids certain trivial inefficiencies, hence the inequalities in \Cref{prop:infodist}. These will be formally defined and discussed in the following subsections. \subsection{Separable Information Distillers} We consider what happens when we have $b,b'$ such that $\bx(b) = \bx(b')$, i.e. $B = b$ and $B = b'$ induce the same conditional probability for $A$ over $\cA$. In this case, in the `equivalent' simplex quantization instance, the quantizer $\bz$ will quantize $\bx = \bx(b) = \bx(b')$ to a single value $\bnormvar^{(j)} \in \cZ$, while the distiller has the option of assigning $h(b) \neq h(b')$; if so, it is not clear what value the `equivalent' quantizer $\bz$ will assign to $\bx = \bx(b) = \bx(b')$. However, we will show that we can ignore such cases. We define: \begin{definition} \label{def:separable-distillers} We call a distiller $h$ \emph{separable} if for any $b,b' \in \cB$, \begin{align} \bx(b) = \bx(b') \implies h(b) = h(b') \end{align} i.e. if $b$ and $b'$ induce the same conditional probability vector for $A$, they are assigned the same label.
\end{definition} We denote by $\cH$ the set of information distillers and by $\cH_{\sf{sep}}$ the set of separable information distillers. Since the important attribute of any $b \in \cB$ (for information distillation) is how $B = b$ affects the distribution of $A$, there is no reason why $b, b' \in \cB$ should be assigned different labels by the distiller if $\bx(b) = \bx(b')$; thus, intuitively, it is clear that considering separable distillers is sufficient for discussing bounds on the performance of optimal distillers. We show this formally: \begin{lemma}\label{lem::opt_H_sep} For any $h \in \cH$ (inducing $\widetilde{B} = h(B)$), there is some $h^* \in \cH_{\sf{sep}}$ (inducing $\widetilde{B}^* = h^*(B)$) such that \begin{align} I(A;\widetilde{B}) \leq I(A;\widetilde{B}^*)\,. \end{align} This then implies: \begin{align} \sup_{h \in \cH} I(A;\widetilde{B}) = \sup_{h \in \cH_{\sf{sep}}} I(A;\widetilde{B}) \,. \end{align} \end{lemma} \begin{proof} This follows from the fact that it is optimal to only consider deterministic distiller (or quantization) functions, as shown in \cite{bhatt_distilling2021}. First, note that $P_B$ induces a push-forward distribution $P$ over $\triangle_{\az-1}$ through $\bx(b)$. If $h \in \cH_{\sf{sep}}$, this means there is a deterministic $h_{\triangle} : \triangle_{\az-1} \to [M]$ satisfying \begin{align} h(b) = h_\triangle(\bx(b)) \text{ for all } b \in \cB \,. \end{align} Then $I(A;h(B)) = I(A;h_{\triangle}(\bx(B)))$ and we may simply take $h^* = h$. If $h \not \in \cH_{\sf{sep}}$, we still have a joint distribution $P_{\bx(B) \widetilde{B}}$; then we consider the conditional probability distribution $P_{\widetilde{B}|\bx(B)}(\widetilde{b}|\bx(b))$.
This can be viewed as a \emph{non-deterministic} distiller $h_\triangle : \triangle_{\az-1} \to [M]$ (it returns a random output with distribution dependent on the input $\bx(b)$) under prior $P$, and similarly \begin{align} I(A;h(B)) = I(A;h_{\triangle}(\bx(B))) \end{align} since the joint distribution $P_{A\widetilde{B}}$ is the same either way. But by \cite{bhatt_distilling2021}, for $\bX \sim P$ over $\triangle_{\az-1}$ and any non-deterministic distiller $h_\triangle : \triangle_{\az-1} \to [M]$, there is a deterministic distiller $h^*_\triangle : \triangle_{\az-1} \to [M]$ such that \begin{align} I(A;h_\triangle(\bX)) \leq I(A;h^*_{\triangle}(\bX)) \,. \end{align} Finally, any deterministic $h^*_{\triangle} : \triangle_{\az-1} \to [M]$ has an equivalent (separable) $h^* : \cB \to [M]$ such that $h^*(b) = h^*_\triangle(\bx(b))$ for all $b \in \cB$, simply by definition. Thus, for any non-separable $h \in \cH$, there is an equivalent non-deterministic distiller $h_\triangle$ for $\bX \sim P$; for every non-deterministic distiller $h_\triangle$ for $\bX \sim P$, there is a better deterministic distiller $h^*_\triangle$; and for every deterministic distiller $h^*_\triangle$ for $\bX \sim P$, there is an equivalent $h^* \in \cH_{\sf{sep}}$, i.e. \begin{align} I(A;h(B)) &= I(A;h_\triangle(\bX)) \\ &\leq I(A;h^*_\triangle(\bX)) = I(A;h^*(B))\,. \end{align} This then implies that \begin{align} \sup_{h \in \cH} I(A;\widetilde{B}) \leq \sup_{h \in \cH_{\sf{sep}}} I(A;\widetilde{B}) \end{align} while the fact that $\cH_{\sf{sep}} \subseteq \cH$ implies \begin{align} \sup_{h \in \cH} I(A;\widetilde{B}) \geq \sup_{h \in \cH_{\sf{sep}}} I(A;\widetilde{B}) \end{align} thus producing the equality we want. \end{proof} This of course also implies that for any $h \in \cH$, there is some $h^* \in \cH_{\sf{sep}}$ such that \begin{align} \idloss(P_{A,B},h) \geq \idloss(P_{A,B},h^*).
\end{align} and furthermore that \begin{align} \inf_{h \in \cH} \idloss(P_{A,B},h) = \inf_{h \in \cH_{\sf{sep}}} \idloss(P_{A,B},h). \end{align} \subsection{Decoding-Optimal Simplex Quantizers} We now consider simplex quantizers under average KL divergence loss. In particular, we note an obvious potential inefficiency: letting $\cZ = \{\bnormvar^{(1)}, \dots, \bnormvar^{(M)}\}$ be the range of quantizer $\bnormvar$, we define $\cX^{(j)} := \{\bx \in \triangle_{\az-1} : \bz(\bx) = \bz^{(j)}\}$ for all $j$; then, given $\cX^{(j)}$, there will be some optimal choice for the value of $\bnormvar^{(j)}$ which minimizes the expected KL divergence. If $\bz$ does not use the optimal value (which will turn out to be the conditional expectation, i.e. the centroid, of $\cX^{(j)}$), for instance by using a value of $\bz^{(j)}$ which is completely unrelated to $\cX^{(j)}$, then there is an obvious and easily-fixed inefficiency. One way to frame this is by breaking the quantization process into two steps, an \emph{encoder} $g : \triangle_{\az-1} \to [M]$ and a \emph{decoder} $\dec : [M] \to \triangle_{\az-1}$, so that the quantization of $\bX$ is $\bnormVar = \bnormvar(\bX) = \dec(g(\bX))$; we WLOG label the elements of $\cZ$ such that $\bz^{(j)} = \dec(j)$. Then the encoder $g$ partitions $\triangle_{\az-1}$ into the $M$ `bins' (analogous to the compander bins) $\cX^{(1)}, \dots, \cX^{(M)}$ (the same as defined above): \begin{align} \cX^{(j)} = \{\bx \in \triangle_{\az-1} : g(\bx) = j\}\,. \end{align} \begin{lemma}\label{lem::centroid_optimal} Given encoder $g$ and prior $P$, the optimal decoder function (for $g$ on $P$), \begin{align} \dec^*_g = \underset{\dec}{\argmin} \bbE_{\bX \sim P} [D_{\kl}(\bX \| \dec(g(\bX)))]\,, \end{align} satisfies, for all $j \in [M]$, \begin{align} \dec^*_g(j) = \bbE_{\bX \sim P}[\bX \, | \, \bX \in \cX^{(j)}]\,. \end{align} We call any quantizer consisting of an encoder $g$ and the optimal decoder function $\dec^*_g$ \emph{decoding-optimal}.
This implies that for any quantizer $\bnormvar$ on prior $P$, there is a decoding-optimal $\bnormvar^*$ such that \begin{align} \sqloss(P,\bnormvar^*) \leq \sqloss(P,\bnormvar) \,. \end{align} \end{lemma} \begin{proof} This is proved by \cite[Corollary 4.2]{ITbook}. \end{proof} Note that the optimal $\dec^*_g(j)$ is the centroid (conditional expectation under $P$) of the bin $\cX^{(j)}$ induced by $g$. \subsection{Deriving the Connection} We now prove \Cref{prop:infodist}. We first define what it means for a distiller and a quantizer to be equivalent: \begin{definition} If we have equivalent information distillation and simplex quantization problems $(P_{A,B}, M)_{\ID} \equiv (P,M)_{\SQ}$, then the distiller $h$ and quantizer $\bnormvar$ are \emph{equivalent} for these problems if: \begin{itemize} \item $h$ is separable and $\bnormvar$ is decoding-optimal; \item there is a labeling $\bz^{(1)}, \dots, \bz^{(M)}$ of the elements of $\cZ$ such that $\bnormvar(\bx(b)) = \bnormvar^{(h(b))}$ for all $b \in \cB$. \end{itemize} We denote this as $h \equiv \bnormvar$. \end{definition} We then claim that all separable distillers and decoding-optimal quantizers have equivalent counterparts: \begin{lemma} \label{lem::equiv_instances_02} For any $(P_{A,B}, M)_{\ID} \equiv (P,M)_{\SQ}$, any separable $h$ for $(P_{A,B}, M)_{\ID}$ has an equivalent (decoding-optimal) $\bnormvar$, and any decoding-optimal $\bnormvar$ for $(P,M)_{\SQ}$ has an equivalent (separable) $h$. \end{lemma} \begin{proof} We handle the two directions separately: \emph{Any $h$ has an equivalent $\bnormvar$:} Since $h$ is separable, we know that $\bx(b) = \bx(b') \implies h(b) = h(b')$. Thus, we can define $\cX^{(j)}$ as \begin{align} \cX^{(j)} := \{ \bx \in \triangle_{\az-1} : h(b) = j ~ \forall b \text{ s.t. } \bx(b) = \bx\} \end{align} for all $j \in [M]$. 
Then we define $\bnormvar$ as follows: $\bnormvar(\bx) = \bnormvar^{(j)}$ for all $\bx \in \cX^{(j)}$, where \begin{align} \bnormvar^{(j)} = \bbE_{\bX \sim P}[\bX \, | \, \bX \in \cX^{(j)}] \,. \end{align} By construction of $\bnormvar^{(j)}$, the quantizer $\bnormvar$ is decoding-optimal; moreover, for any $\bx \in \cX^{(j)}$ and any $b$ such that $\bx(b) = \bx$, we have $h(b) = j$ and $\bnormvar(\bx) = \bnormvar^{(j)}$, hence $\bnormvar(\bx) = \bnormvar^{(h(b))}$, so $h$ and $\bnormvar$ are equivalent. \emph{Any $\bnormvar$ has an equivalent $h$:} We label the elements of $\cZ$ arbitrarily as $\bz^{(1)}, \dots, \bz^{(M)}$; then we let $h(b) = j$ for all $b$ such that $\bnormvar(\bx(b)) = \bnormvar^{(j)}$, which implies $\bnormvar(\bx(b)) = \bnormvar^{(h(b))}$. Since $h(b)$ depends on $b$ only through $\bx(b)$, this $h$ is separable. \end{proof} Now we show that equivalent solutions have the same loss: \begin{proposition}\label{prop::KL_and_distill_equal} If $(P_{A,B},M)_{\ID} \equiv (P,M)_{\SQ}$ and $h \equiv \bnormvar$, then \begin{align} \idloss(P_{A,B},h) = \sqloss(P,\bnormvar) \label{eq::kl_and_MI}\,. \end{align} \end{proposition} \begin{proof} Let $(A,B) \sim P_{A,B}$ and let $\bX = \bx(B)$ and $\bnormVar = \bnormvar(\bX)$. Then we know since $(P_{A,B},M)_{\ID} \equiv (P,M)_{\SQ}$ that $\bX \sim P$. Furthermore, defining \begin{align} \cX^{(j)} = \{\bx \in \triangle_{\az-1} : h(b) = j ~\forall b \text{ s.t. } \bx(b) = \bx\} \end{align} and $\bnormvar^{(j)} = \bbE[\bX \, | \, \bX \in \cX^{(j)}]$, we know that since $h \equiv \bnormvar$ we have $\bnormVar = \bnormvar^{(h(B))}$. We now let $\normVar_i$ refer to the $i$th element of vector $\bnormVar$, and let $\widetilde{B} = h(B)$ and $\widetilde{b} = h(b)$.
We then derive: \begin{align} \idloss(P_{A,B},h) &= I(A;B) - I(A;\widetilde{B}) \\& = \int \sum_{a} P_{A,B}(a ,b) \log \frac{P_{A|B}(a |b)}{P_A(a)} db \eqlinebreakshort - \sum_{a ,\widetilde b} P_{A, \widetilde B}(a ,\widetilde b) \log \frac{P_{A|\widetilde B}(a |\widetilde b) }{P_A(a)} \\& = \int \sum_{a} P_{A,B}(a ,b) \log \frac{P_{A|B}(a |b)}{P_A(a)} \eqlinebreakshort - P_{A,B}(a ,b) \log \frac{P_{A|\widetilde B}(a |\widetilde b) }{P_A(a)} db \\& = \int \sum_{a} P_{A,B}(a ,b) \log \frac{P_{A|B}(a |b)}{P_{A|\widetilde B}(a |\widetilde b) } db \\& = \int P_B(b) \sum_{a} P_{A|B}(a|b) \log \frac{P_{A|B}(a |b)}{P_{A|\widetilde B}(a |\widetilde b) } db \\& = \bbE_{B} \bigg[\sum_{a} P_{A|B}(a|b) \log \frac{P_{A|B}(a |b)}{P_{A|\widetilde B}(a |\widetilde b) } \bigg] \\& = \bbE_{B} \big[D_{\kl}((A|B) \| (A|\widetilde{B})) \big] \\& = \bbE_{\bX} \big[D_{\kl}(\bX \| \bnormVar) \big] \label{eq:to-sqloss} \\& = \sqloss(P, \bnormvar) \end{align} where \eqref{eq:to-sqloss} holds as $B \sim P_B \implies \bX \sim P$ and \begin{align} \bnormVar = \bnormvar(\bX) = \bnormvar^{(\widetilde{B})} = \bbE_{\bX \sim P}[\bX \, | \, \bX \in \cX^{(\widetilde{B})}] \end{align} and since $A$ is distributed according to $\bX = \bx(B)$, we know that $P_{A|\widetilde{B}}(a|\widetilde{b}) = \bbE_{\bX \sim P}[X_a \, | \, \bX \in \cX^{(\widetilde{b})}]$. \end{proof} \begin{proof}[Proof of \Cref{prop:infodist}] We get the proof of \Cref{prop:infodist} as a corollary to \Cref{prop::KL_and_distill_equal} and \Cref{lem::opt_H_sep,lem::centroid_optimal,lem::equiv_instances_02} (which show, respectively, that non-separable distillers can be replaced by separable distillers, that non-decoding-optimal quantizers can be replaced by decoding-optimal quantizers, and that any separable distiller has an equivalent decoding-optimal quantizer and vice versa). Note that \Cref{prop:infodist} ensures $(P_{A,B},M)_{\ID} \equiv (P,M)_{\SQ}$ through its definition of $\bX$.
Then, given a distiller $h \in \cH$, by \Cref{lem::opt_H_sep} we can find a separable $h^* \in \cH_{\sf{sep}}$ such that \begin{align} \idloss(P_{A,B}, h^*) \leq \idloss(P_{A,B}, h) \,. \end{align} By \Cref{prop::KL_and_distill_equal}, there is a quantizer $\bnormvar$ such that \begin{align} \sqloss(P, \bnormvar) \leq \idloss(P_{A,B}, h^*) \leq \idloss(P_{A,B}, h) \,. \end{align} This completes the first direction. Given a quantizer $\bnormvar$, by \Cref{lem::centroid_optimal} there exists a decoding-optimal $\bnormvar^*$ such that \begin{align} \sqloss(P, \bnormvar^*) \leq \sqloss(P, \bnormvar) \,. \end{align} By \Cref{prop::KL_and_distill_equal}, there is a distiller $h$ such that \begin{align} \idloss(P_{A,B}, h) \leq \sqloss(P, \bnormvar^*) \leq \sqloss(P, \bnormvar) \,. \end{align} This completes the second direction. \end{proof} Now that we have shown \Cref{prop:infodist}, we can use it to derive the connection between the performance of our companders and the Degrading Cost $\mathrm{DC}$: \begin{proposition}\label{prop::infsup_for_dc} For any $\az, M$: \begin{align} \mathrm{DC}(\az,M) = \sup_{P \text{ over } \triangle_{\az-1}} \inf_{\substack{\bnormvar \\ |\cZ| = M}} \sqloss(P,\bz) \label{eq::supP_kl_and_MI} \,. \end{align} \end{proposition} \begin{proof} We show inequalities in both directions to get the equality. First, note that for any joint distribution $P_{A,B}$ on $\cA \times \cB$ where $|\cA| = \az$ (WLOG we can assume $\cA = [\az]$), we know there is some prior $P$ over $\triangle_{\az-1}$ such that \begin{align} (P_{A,B}, M)_{\ID} \equiv (P,M)_{\SQ} \end{align} for all $M$, by \Cref{lem::equiv_instances_01}, and that for any distiller $h : \cB \to [M]$ there is some quantizer $\bnormvar$ with cardinality-$M$ range such that \begin{align} \sqloss(P, \bnormvar) \leq \idloss(P_{A,B}, h) \end{align} by \Cref{lem::equiv_instances_02} and \Cref{prop::KL_and_distill_equal}.
Thus for any $P_{A,B}$ and $M$, for the equivalent $P$, \begin{align} \inf_{h:\cB \to [M]} \idloss(P_{A,B}, h) \geq \inf_{\substack{\bnormvar \\ |\cZ| = M}} \sqloss(P,\bz) \end{align} and hence we have \begin{align} \mathrm{DC}(\az,M) &= \sup_{\substack{{P_{A,B}} \\ |\cA| = \az}} \inf_{h:\cB \to [M]} \idloss(P_{A,B}, h) \\ &\geq \sup_{P \text{ over } \triangle_{\az-1}} \inf_{\substack{\bnormvar \\ |\cZ| = M}} \sqloss(P,\bz)\,. \end{align} Then, for any $P$ over $\triangle_{\az-1}$, the same logic applies: by \Cref{lem::equiv_instances_01} there is an equivalent $P_{A,B}$, so for any $P,M$ we can find $P_{A,B}$ for which \begin{align} \inf_{\substack{\bnormvar \\ |\cZ| = M}} \sqloss(P,\bz) \geq \inf_{h:\cB \to [M]} \idloss(P_{A,B}, h)\,. \end{align} Then we get that \begin{align} \mathrm{DC}(\az,M) &= \sup_{\substack{{P_{A,B}} \\ |\cA| = \az}} \inf_{h:\cB \to [M]} \idloss(P_{A,B}, h) \\ &\leq \sup_{P \text{ over } \triangle_{\az-1}} \inf_{\substack{\bnormvar \\ |\cZ| = M}} \sqloss(P,\bz) \end{align} and hence the equality in \eqref{eq::supP_kl_and_MI} holds. \end{proof} \Cref{prop::infsup_for_dc} is used to show \eqref{eq::infg_supP_KL}. \subsection{Comparison} Compared to \eqref{eq::distilling_result}, our bound in \Cref{prop::compander_worst_case_for_distilling} which uses the approximate minimax compander has a worse dependence on $M$. Our dependence on $M$ is worse since our compander method performs scalar quantization on each entry, and the raw quantized values do not necessarily add up to $1$. Other quantization schemes can rely on the fact that the values add up to $1$ to avoid encoding one of the $\az$ values. Offsetting this are the improved dependence on $\az$ ($\log^2 \az$ versus $\az-1$, as stated) and the improved constant ($\leq 19$ and decreasing to $1$ as $\az \to \infty$ versus $1268$); this yields a better bound when $M$ is not exceptionally large.
For example, when $\az = 10$, our bound is better than \eqref{eq::distilling_result} so long as the conditions on $M^{1/\az}$ in \Cref{prop::compander_worst_case_for_distilling} are met (which requires $M > 16^{10}$) and if $M < 1.014 \times 10^{97}$. While these may both seem like very large numbers, the former corresponds with only $4$ bits to express each value in the probability vector, while the latter corresponds with more than $32$ bits per value. In general, the `crossing point' (at which both bounds give the same result) is at \begin{align} M = \Bigg(1268 \bigg( \hspace{-0.2pc} 1 \hspace{-0.2pc} + \hspace{-0.2pc} 18 \frac{\log \log \az}{\log \az} \hspace{-0.1pc} \bigg)^{-1} \frac{\az-1}{\log^2 \az}\Bigg)^{\frac{\az(\az-1)}{2}} \end{align} or, to put it in terms of `bits per vector entry' $b$ (taking $\log_2$ of the above to get bits and dividing by $\az$), \begin{align} b \approx \frac{\az-1}{2} \bigg(\log_2(\az) - 2 \log_2 \log \az + 10.3\bigg) \end{align} for large $\az$. The disadvantage is that our bound does not apply to the case of $\az < 5$ or $M$ which is not large. Note that scalar quantization in general only works with very large $M$, since even $2$ different encoded values per symbol requires $M = 2^\az$ different quantization values. \end{document}
\documentclass[11pt]{article} \usepackage{amsfonts} \usepackage{amscd} \usepackage{amsmath,amstext,amsthm,amsbsy,amssymb} \usepackage{mathrsfs} \usepackage{multirow} \usepackage{epsfig} \setlength{\oddsidemargin}{-0.32in} \setlength{\textwidth}{7.18in} \setlength{\textheight}{9.28 in} \setlength{\topmargin}{-.6in} \def\BBox{\vrule height 0.5em width 0.6em depth 0em} \newcommand{\bi}{{\bf i}} \newcommand{\bj}{{\bf j}} \newcommand{\bk}{{\bf k}} \newcommand{\bc}{{\mathbb C}} \newcommand{\br}{{\mathbb R}} \newcommand{\bh}{{\mathbb H}} \newcommand{\ba}{{\mathbb A}} \newcommand{\bs}{{\mathbb S}} \newcommand{\bb}{{\mathbb B}} \newcommand{\bm}{{\mathbb M}} \newcommand{\bp}{{\mathbb P}} \newcommand{\F}{{\mathbb F}} \newcommand{\bx}{{\mathbb X}} \def\BBox{$\square$} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}{Lemma}[section] \newtheorem{pro}{Proposition}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{rem}{Remark}[section] \newtheorem{exam}{Example}[section] \newtheorem{defi}{Definition}[section] \newtheorem{cond}{Condition} \begin{document} \title{Quadratic unilateral polynomials over split quaternions} \author{ Wensheng Cao \\ School of Mathematics and Computational Science,\\ Wuyi University, Jiangmen, Guangdong 529020, P.R. China\\ e-mail: {\tt [email protected]}} \date{} \maketitle \bigskip {\bf Abstract} \,\, In this paper, we derive explicit formulas for computing the roots of $ax^{2}+bx+c=0$ with $a$ not invertible in the split quaternion algebra. We also use the approach developed by Opfer, Janovska, Falcao and others to verify our results when the corresponding companion polynomial $C(x)\not\equiv 0$.
\vspace{2mm}\baselineskip 12pt {\bf Keywords and phrases:} \ Split quaternion, Quadratic formula, Zero divisor, Solving polynomial equation {\bf Mathematics Subject Classifications (2010):}\ \ {\rm 11C08; 11R52; 12Y05; 15A66} \section{Introduction} \subsection{Split quaternions} Let $\br$, $\bc$, $\bh$ and $\bh_s$ denote, respectively, the sets of real numbers, complex numbers, quaternions and split quaternions. The quaternion algebra $\bh$ and split quaternion algebra $\bh_s$ are non-commutative extensions of the complex numbers. $\bh_s$ can be represented as $$\bh_s=\{x=x_0+x_1\bi+x_2\bj+x_3\bk,x_i\in \br,i=0,1,2,3\},$$ where $1,\bi,\bj,\bk$ form a basis of $\bh_s$ satisfying the following multiplication rules: \begin{table}[h] \centering \caption{The multiplication table of $\bh_s$} \vspace{3mm} \begin{tabular}{c|cccc} $\bh_s$ &1& $\bi$ & $\bj$ & $\bk$\\ \hline 1 & 1& $\bi$ & $\bj$ & $\bk$\\ $\bi$ &$\bi$ &-1 & $\bk$ & -$\bj$ \\ $\bj$&$\bj$&-$\bk$ &1& -$\bi$ \\ $\bk$ &$\bk$&$\bj$ & $\bi$ & 1 \end{tabular} \end{table} Let $\bar{x}=x_0-x_1\bi-x_2\bj-x_3\bk$, $$\Re(x)=(x+\bar{x})/2=x_0,\ \Im(x)=(x-\bar{x})/2=x_1\bi+x_2\bj+x_3\bk$$ be respectively the conjugate, real part and imaginary part of $x\in\bh_s$. Each element in $\bh_s$ can be expressed as $$x=(x_0+x_1\bi)+(x_2+x_3\bi)\bj =z_1+z_2\bj,\mbox{ where }z_1=x_0+x_1\bi,z_2=x_2+x_3\bi\in \bc.$$ This implies that $$\bh_s=\bc\oplus\bc\bj \mbox{ and }\bj z=\bar{z}\bj, \forall z\in \bc.$$ For $x=x_0+x_1\bi+x_2\bj+x_3\bk=z_1+z_2\bj\in \bh_s$, we define \begin{equation}\label{Ix}I_x=\bar{x}x=x\bar{x}=x_0^2+x_1^2-x_2^2-x_3^2=|z_1|^2-|z_2|^2\end{equation} and \begin{equation}\label{kp}K(x)=\Im(x)^2=-x_1^2+x_2^2 + x_3^2=\Re(x)^2-I_x.\end{equation} It can be easily verified that \begin{equation} \overline{xy}=\bar{y}\bar{x},\ I_{yx}=I_yI_x, \forall x,y\in\bh_s.\end{equation} Unlike the Hamilton quaternion algebra, the split quaternion algebra contains nontrivial zero divisors.
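The multiplication rules of Table 1 and the identities above can be spot-checked numerically. The following minimal sketch (illustrative, not part of the paper) implements split-quaternion multiplication and verifies $I_{xy}=I_xI_y$, $\overline{xy}=\bar{y}\bar{x}$, and the existence of zero divisors:

```python
# Split-quaternion arithmetic following the multiplication table
# above (i^2 = -1, j^2 = k^2 = +1, ij = k, jk = -i, ki = j, ...),
# used to spot-check I_x = x0^2 + x1^2 - x2^2 - x3^2 and I_{xy} = I_x I_y.
class SplitQuaternion:
    def __init__(self, x0, x1, x2, x3):
        self.c = (x0, x1, x2, x3)            # coefficients of 1, i, j, k

    def __mul__(self, o):
        a0, a1, a2, a3 = self.c
        b0, b1, b2, b3 = o.c
        return SplitQuaternion(
            a0*b0 - a1*b1 + a2*b2 + a3*b3,   # real part: i^2=-1, j^2=k^2=+1
            a0*b1 + a1*b0 - a2*b3 + a3*b2,   # i part: jk=-i, kj=i
            a0*b2 - a1*b3 + a2*b0 + a3*b1,   # j part: ik=-j, ki=j
            a0*b3 + a1*b2 - a2*b1 + a3*b0)   # k part: ij=k, ji=-k

    def conj(self):
        x0, x1, x2, x3 = self.c
        return SplitQuaternion(x0, -x1, -x2, -x3)

    def I(self):                             # I_x = \bar{x} x
        x0, x1, x2, x3 = self.c
        return x0*x0 + x1*x1 - x2*x2 - x3*x3

x = SplitQuaternion(1, 2, 3, 4)
y = SplitQuaternion(-1, 0, 2, 5)
assert (x * y).I() == x.I() * y.I()                    # I is multiplicative
assert (x * y).conj().c == (y.conj() * x.conj()).c     # conj reverses products
assert SplitQuaternion(1, 0, 1, 0).I() == 0            # 1 + j is a zero divisor
```

The last assertion exhibits a nontrivial zero divisor, the phenomenon that distinguishes $\bh_s$ from the Hamilton quaternions.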
The set of zero divisors is denoted by \begin{equation} Z(\bh_s)=\{x\in \bh_s:I_x=0\}. \end{equation} Note that $$\Re(xy)=\Re(yx)=x_0y_0-x_1y_1+x_2y_2+x_3y_3,\forall x,y\in \bh_s.$$ For $x=x_0+x_1\bi+x_2\bj+x_3\bk,y=y_0+y_1\bi+y_2\bj+y_3\bk\in \bh_s$, we define \begin{equation}\label{inp2}\left\langle x,y \right\rangle=x_0y_0+x_1y_1-x_2y_2-x_3y_3.\end{equation} For the sake of simplification, we denote $$\left\langle x,y \right\rangle:=P_{x,y}=P_{xy}.$$ Accordingly we have $$I_x=\left\langle x,x\right\rangle=P_{xx},\ \Re({\bar{y}x})=\Re({\bar{x}y}) =\Re({y\bar{x}})=\left\langle x,y \right\rangle=P_{xy}=P_{yx}.$$ We say that two split quaternions $a,b\in \bh_s$ are similar if and only if there exists a $q\in \bh_s-Z(\bh_s)$ such that $qa=bq$. Similarity is an equivalence relation on the split quaternions. Let \begin{equation}[\lambda]_{\sim}=\{q^{-1}\lambda q: q\in \bh_s-Z(\bh_s)\}\end{equation} be the similarity class of $\lambda$. \begin{pro}(c.f. \cite[Theorem 4.3]{cao})\label{thmsim} Two split quaternions $a,b \in \bh_s-\br$ are similar if and only if $$\Re(a)=\Re(b),\, K(a)=K(b).$$ \end{pro} We mention that, for $a_0\in\br$, the two split quaternions $a_0$ and $a_0+\bi+\bj$ are not similar. We define the quasisimilar class of $q$ as the following set \begin{equation}[q]=\{p\in \bh_s: \Re(p)=\Re(q), I_p=I_q\}.\end{equation} \begin{pro} If $q\in \br$ then $[q]_{\sim}=\{q\}\subset [q]$; if $q\in \bh_s-\br$ then $[q]_{\sim}=[q].$ \end{pro} \subsection{Niven's algorithm over $\bh_s$} The problem of solving quadratic equations over the quaternions was first approached by Niven \cite{niven}. In \cite{Opfer10,sero01}, Niven's algorithm is tailored for finding zeros of unilateral polynomials $$f(x)=\sum_{j=0}^{n}a_jx^j, x,a_j\in \bh, \mbox{ where } a_n=1, a_0\neq 0.$$ Recently, Niven's algorithm has been developed by Falcao, Opfer, Janovska and others \cite{Ireneu,Irene,Opfer17,Opfer18} to solve unilateral polynomials over $\bh_s$. We sketch Niven's algorithm over $\bh_s$ as follows.
In the set of polynomials of the form \begin{equation}f(x)=\sum_{j=0}^{n}a_jx^j, a_j\in \bh_s,\end{equation} we define the addition and multiplication of such polynomials as in the commutative case where the variable commutes with the coefficients. With the two operations, this set becomes a ring, referred to as the ring of unilateral polynomials in $\bh_s$ and denoted by $\bh_s[x]$. For $f(x)\in \bh_s[x]$, we define the conjugate polynomial $$\overline{f(x)}=\sum_{j=0}^{n}\overline{a_j}x^j.$$ The companion polynomial $C(x)$ of $f(x)$ is defined as \begin{equation}C(x)=\sum_{j,k=0}^{n}\overline{a_j}a_k\,x^{j+k}.\end{equation} Note that, with respect to the two operations of the ring $\bh_s[x]$, \begin{equation}C(x)=f(x)\overline{f(x)}=\overline{f(x)}f(x).\end{equation} We mention that $C(x)$ is a polynomial with real coefficients of degree at most $2n$. For each quasisimilar class $[q]$, we define the real-coefficient quadratic polynomial \begin{equation}\Psi_{[q]}(x)=x^2-2\Re(q)x+I_q.\end{equation} Obviously \begin{equation}\Psi_{[q]}(p)=0,\forall p\in [q].\end{equation} In \cite{Ireneu,Irene,Opfer17,Opfer18}, Falcao, Opfer, Janovska and others considered the unilateral polynomials \begin{equation}\label{pz}f(x)=\sum_{j=0}^{n}a_j x^j, x,a_j\in \bh_s, \mbox{ where } a_n, a_0 \mbox{ are invertible}. \end{equation} The mechanism of Niven's algorithm can be described by the following proposition. \begin{pro}\label{pro1.1}(c.f.\cite[Theorem 3.7]{Irene}) Let \begin{equation}Z(f)=\{q\in\bh_s:f(q)=0\}.\end{equation} If $q\in Z(f)$, then $\Psi_{[q]}(x)$ is a divisor of $C(x)$ over the complex field. That is $$C(x)=h(x)\Psi_{[q]}(x),\,\, h(x)\in \bc[x].$$ \end{pro} For such a $[q]$, one can perform the division $$f(x)=h(x)\Psi_{[q]}(x)+A_q+B_qx.$$ Then any element $y\in [q]$ satisfying $A_q+B_qy=0$ belongs to $Z(f)$.
That is $$Z(f)=\bigcup_{[q]}\{y\in [q]:A_q+B_qy=0\}.$$ The essential principle of Niven's algorithm using the companion polynomial is that we can determine $\Re(q)$ and $I_q$ from the companion polynomial and then search for the solutions in these quasisimilar classes $[q]$. \subsection{Quadratic equation in $\bh_s$} The quadratic equation $$ax^2+bx+c=0,a,b,c\in\bh_s$$ can be reformulated as $$x^2+a^{-1}bx+a^{-1}c=0$$ if $a$ is invertible. Such quadratic equations have been considered in \cite{caoaxiom,Ireneu,Irene,Opfer18,Opfer17}. In this paper, we will focus on deriving explicit formulas for the roots of the quadratic equation \begin{equation}\label{eqcons}ax^2+bx+c=0, a\in Z(\bh_s)-\{0\}.\end{equation} The companion polynomial of $f(x)$ is $$C(x)=(a\bar{b}+b\bar{a})x^3+(a\bar{c}+c\bar{a}+I_b)x^2+(c\bar{b}+b\bar{c})x+I_c.$$ That is \begin{equation}\label{czpoly}C(x)=2P_{ab}x^3+(2P_{ac}+I_b)x^2+2P_{bc}x+I_c.\end{equation} \begin{exam} The companion polynomial of $$f(x)=(1+\bj)x^2 +(\bi-\bk)x-1+\bi-\bj-\bk$$ is $C(x)\equiv 0$. \end{exam} The above example shows that we must face the intricate problem (\ref{eqcons}) without the help of the corresponding companion polynomial. Since $a$ is not invertible and $c$ is arbitrary, the above quadratic equation has not been considered in \cite{caoaxiom,Ireneu,Irene,Opfer18,Opfer17}. To reduce the number of parameters in $ax^2+bx+c=0$ and simplify our consideration, we have the following proposition.
\begin{pro}\label{reducepro2}(c.f.\cite[Section 7]{Opfer14}) The quadratic equation $dy^2+ey+f=0$ with $d=d_1+d_2\bj\in Z(\bh_s)-\{0\}$, $d_1,d_2\in \bc$ is solvable if and only if the quadratic equation $$ax^2+bx+c=0$$ is solvable, where $$d_1^{-1}e=k_0+k_1\bi+k_2\bj+k_3\bk, k_i\in \br,i=0,\cdots,3$$ and $$a=1+d_1^{-1}d_2\bj,\, b=d_1^{-1}e-k_0(1+d_1^{-1}d_2\bj),\, c=d_1^{-1}f-\frac{d_1^{-1}ek_0}{2}+\frac{(1+d_1^{-1}d_2\bj)k_0^2}{4}.$$ If the quadratic equation $ax^2+bx+c=0$ is solvable and $x$ is a solution then $y= x-\frac{k_0}{2}$ is a solution of $dy^2+ey+f=0$. \end{pro} \begin{proof} Since $d=d_1+d_2\bj\in Z(\bh_s)-\{0\}$, we have $I_{d_1}=I_{d_2}\neq 0$ and $d_1$ is invertible. Hence $dy^2+ey+f=0$ is equivalent to $$(1+d_1^{-1}d_2\bj)y^2+d_1^{-1}ey+d_1^{-1}f=0.$$ Let $y=x-\frac{k_0}{2}.$ Then $dy^2+ey+f=0$ is equivalent to $$(1+d_1^{-1}d_2\bj)\big(x^2-k_0x+\frac{k_0^2}{4}\big)+d_1^{-1}e\big(x-\frac{k_0}{2}\big)+d_1^{-1}f=0.$$ That is $$(1+d_1^{-1}d_2\bj)x^2+[d_1^{-1}e-k_0(1+d_1^{-1}d_2\bj)]x+d_1^{-1}f -\frac{d_1^{-1}ek_0}{2}+\frac{(1+d_1^{-1}d_2\bj)k_0^2}{4}=0.$$ \end{proof} In the process of the above proof, let $$d_1^{-1}d_2=a_2+a_3\bi, a_2,a_3\in \br.$$ Then we have $a_2^2+a_3^2=1$ and $$a=1+d_1^{-1}d_2\bj=1+a_2\bj+a_3\bk\in Z(\bh_s).$$ Since $\Re[d_1^{-1}e-k_0(1+d_1^{-1}d_2\bj)]=0$, we have $$b=d_1^{-1}e-k_0(1+d_1^{-1}d_2\bj)=b_1\bi+b_2\bj+b_3\bk.$$ Hence we only need to solve the following equations: \begin{itemize} \item Equation I: \quad $ax^2+c=0, a=1+a_2\bj+a_3\bk\in Z(\bh_s)$; \item Equation II: \quad $ax^2+bx+c=0, a=1+a_2\bj+a_3\bk\in Z(\bh_s), b=b_1\bi+b_2\bj+b_3\bk\neq 0$. \end{itemize} We will show in Proposition \ref{proc=0} that if Equation I is solvable then the corresponding companion polynomial $$C(x)=0.$$ By the Moore-Penrose inverse property obtained in \cite{cao}, we will solve Equations I in Section 2 (Theorem \ref{thm2.1}). For Equation II, observe that \begin{equation}x^2=x(2x_0-\bar{x})=2x_0x-I_x. 
\end{equation} Therefore $ax^2+bx+c=0$ becomes \begin{equation}\label{linearx1} (2x_0a+b)x=aI_x-c. \end{equation} Let \begin{eqnarray}\label{Nf1}N&=&I_x=\bar{x}x,\\ \label{Tf1}T&=&\bar{x}+x=2x_0.\end{eqnarray} By $(2x_0a+b)x=aI_x-c$, we have $(2x_0a+b)x\overline{(2x_0a+b)x}=(aI_x-c)\overline{(aI_x-c)}$. That is \begin{equation}\label{enf1} N(2TP_{ab}+I_b+2P_{ac})-I_c=0.\end{equation} Any solution $x=x_0+x_1\bi+x_2\bj+x_3\bk$ of $ax^2+bx+c=0$ must fall into one of two categories: \begin{itemize} \item $2x_0a+b\in Z(\bh_s)$; \item $2x_0a+b\in \bh_s- Z(\bh_s)$. \end{itemize} For Equation II, we define $$SZ=\{x\in \bh_s:ax^2+bx+c=0 \mbox{ and }2x_0a+b\in Z(\bh_s)\}$$ and $$SI=\{x\in \bh_s:ax^2+bx+c=0 \mbox{ and }2x_0a+b\in \bh_s-Z(\bh_s)\}.$$ In order to solve Equation II, for technical reasons, we divide Equation II into the following two equations: \begin{itemize} \item Equation II for SZ, \item Equation II for SI. \end{itemize} If $2x_0a+b\in \bh_s-Z(\bh_s)$, then by (\ref{linearx1}) we have \begin{equation}x=(2x_0a+b)^{-1}(aI_x-c)=(Ta+b)^{-1}(aN-c)=\frac{(T\bar{a}+\bar{b})(aN-c)}{2TP_{ab}+I_b}\end{equation} and \begin{equation}\bar{x}=\frac{(N\bar{a}-\bar{c})(Ta+b)}{2TP_{ab}+I_b}.\end{equation} Substituting the above formulas of $x$ and $\bar{x}$ in (\ref{Tf1}), we obtain \begin{eqnarray} \label{enf2} x+\bar{x}&=&\frac{-2TP_{ac}+2NP_{ab}-2P_{bc}}{2TP_{ab}+I_b}=T. \end{eqnarray} Hence $(T,N)$ satisfies our first real nonlinear system: \begin{equation}\label{rsym1}\left\{ \begin{aligned} N(2TP_{ab}+I_b+2P_{ac})-I_c=0,\\ 2P_{ab}T^2+(2P_{ac}+I_b)T-2NP_{ab}+2P_{bc}=0. \end{aligned} \right.\end{equation} Since we aim to find a root of $ax^2+bx+c=0$, we do not know $x_0$ beforehand. For technical reasons, we may assume that $$2x_0a+b=Ta+b\in \bh_s-Z(\bh_s).$$ For Equation II, after obtaining the pair $(T,N)$ from the real nonlinear system (\ref{rsym1}), we need to test whether $Ta+b\in \bh_s-Z(\bh_s)$.
Only for pairs $(T,N)$ satisfying $Ta+b\in \bh_s-Z(\bh_s)$ do we obtain the corresponding solution $ x=(Ta+b)^{-1}(aN-c).$ If $2x_0a+b\in Z(\bh_s)$ then \begin{equation}\label{ncasezero}\left\langle 2x_0a+b,2x_0a+b \right\rangle=4x_0P_{ab}+I_b=0.\end{equation} Also we have \begin{equation}\label{nzacc}\left\langle aI_x-c,aI_x-c \right\rangle=-2I_xP_{ac}+I_c=0.\end{equation} By Eqs.(\ref{ncasezero}) and (\ref{nzacc}), we may obtain some information about $x_0$ and $I_x$. For example, if $P_{ab}\neq 0$ then $x_0=\frac{-I_b}{4P_{ab}}$; if $P_{ac}\neq 0$ then $I_x=\frac{I_c}{2P_{ac}}$. However, in general, we may obtain no information about $x_0$ and $I_x$, for example when $P_{ab}=0$ and $P_{ac}=0$. In such cases we resort to the natural real nonlinear system as follows. Let $a=1+a_2\bj+a_3\bk,$ $b=b_1\bi+b_2\bj+b_3\bk$, $c=c_0+c_1\bi+c_2\bj+c_3\bk \in \bh_s$. By the multiplication rules of Table 1, the equation $ax^2+bx+c=0$ can be reformulated as our second real nonlinear system: \begin{equation}\label{rsym2}\left\{ \begin{aligned} & x_0^2-x_1^2+x_2^2+x_3^2+2a_2x_0x_2+2a_3x_0x_3-b_1x_1+b_2x_2+b_3x_3+c_0=0,\\ & 2x_0x_1-2a_2x_0x_3+2a_3x_0x_2+b_1x_0-b_2x_3+b_3x_2+c_1=0,\\ &2x_0x_2+a_2(x_0^2-x_1^2+x_2^2+x_3^2)+2a_3x_0x_1-b_1x_3+b_2x_0+b_3x_1+c_2=0,\\ &2x_0x_3-2a_2x_0x_1+a_3(x_0^2-x_1^2+x_2^2+x_3^2)+b_1x_2-b_2x_1+b_3x_0+c_3=0. \end{aligned} \right.\end{equation} Roughly speaking, we will rely on the two real systems (\ref{rsym1}) or (\ref{rsym2}) to solve Equation II. Under some conditions, we can deduce linear relations among $x_i, i=0,\cdots,3$, which is helpful in treating Equation II. For example, under the condition $P_{ab}=0$ and $x\in SZ$, we can deduce linear relations among $x_1,x_2$ and $x_3$ from Eqs.(\ref{rsym2}) (Proposition \ref{prop5.1}). We list our problem solving process in Table 2.
\begin{table}[!htbp] \centering \caption{ Problem solving process} \begin{tabular}{|c|c|c|c|} \hline Section &Type of Equation & Result&Examples of Theorem\\ \hline 2&Equation I& Theorem 2.1&Example 2.1\\ \hline \multirow{3}*{3}&Equation II with $P_{ab}\neq 0$ for SZ& Theorem 3.1&Examples 3.1 and 3.2\\ \cline{2-4} &Equation II with $P_{ab}=0,P_{\bi a,b}=0$ for SZ& Theorem 3.2&Examples 3.3 and 3.4\\ \cline{2-4} &Equation II with $P_{ab}=0,P_{a \bi,b}=0$ for SZ& Theorem 3.3&Examples 3.5 and 3.6\\ \hline \multirow{3}*{4}&Equation II with $P_{ab}\neq 0$ for SI& Theorem 4.1&Example 4.1\\ \cline{2-4} &Equation II with $P_{ab}=0,I_b+2P_{ac}\neq 0$ for SI& Theorem 4.2&Example 4.2\\ \cline{2-4} &Equation II with $P_{ab}=0,I_b+2P_{ac}=0$ for SI& Theorem 4.3&Example 4.3\\ \hline \end{tabular} \end{table} We remark that all examples in Table 2 are carefully chosen to illustrate that all our formulas work. The author has checked that the equation in Example 3.1 has no solution in $SI$. The equation of Example 3.2 (which is the same as Example 4.1) has a solution in $SZ$ and $SI$, respectively. We have given all solutions of all examples in our paper. In Section 5, we will imitate the approach developed in \cite{Ireneu,Irene,Opfer18,Opfer17} using companion polynomials. We apply such an approach to our examples in Table 2 with $C(x)\not\equiv 0$ and obtain the same results. These examples provide a mutual verification of the two approaches. \section{Equation I} In this section, we consider $ax^2+c=0$ with $a=1+a_2\bj+a_3\bk\in Z(\bh_s)$.
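The need for a generalized inverse can be seen concretely: $a=1+\bj$ satisfies $I_a=0$, so it has no inverse in $\bh_s$, yet the element $a^{+}=a/4$ (the Moore-Penrose inverse defined next) behaves like one. A small numerical illustration, an aside assuming Table 1's multiplication rules and not part of the proofs:

```python
# Split-quaternion multiplication (i^2 = -1, j^2 = k^2 = 1, ij = k = -ji,
# jk = -i = -kj, ki = j = -ik); tuples are (x0, x1, x2, x3).
def mult(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 + p2*q2 + p3*q3,
            p0*q1 + p1*q0 - p2*q3 + p3*q2,
            p0*q2 + p2*q0 - p1*q3 + p3*q1,
            p0*q3 + p3*q0 + p1*q2 - p2*q1)

def conj(p):
    return (p[0], -p[1], -p[2], -p[3])

a = (1, 0, 1, 0)                     # a = 1 + j, a zero divisor
# I_a = a * conj(a) vanishes, so a has no two-sided inverse.
assert mult(a, conj(a)) == (0, 0, 0, 0)
# (1 + j)(1 - j) = 0: a annihilates a nonzero element.
assert mult(a, (1, 0, -1, 0)) == (0, 0, 0, 0)
# Nevertheless a+ = a/4 satisfies a a+ a = a and a+ a a+ = a+.
ap = (0.25, 0, 0.25, 0)
assert mult(a, mult(ap, a)) == (1.0, 0, 1.0, 0)
assert mult(ap, mult(a, ap)) == (0.25, 0, 0.25, 0)
```

This matches the identities $a^{+}=a/4$ and $aa^{+}=a^{+}a=a/2$ used in the proof of Proposition 2.1 below.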
The Moore-Penrose inverse \cite{cao} of $a=t_1+t_2\bj$, $t_1,t_2\in \bc$, is defined to be $$a^+=\left\{\begin{array}{ll} 0, & \hbox{if\, $a=0$;} \\ \frac{\overline{t_1}-t_2\bj}{|t_1|^2-|t_2|^2}=\frac{\overline{a}}{I_a}, & \hbox{if\, $I_a\neq 0$;} \\ \frac{\overline{t_1}+t_2\bj}{4|t_1|^2}, & \hbox{if\, $I_a=0$.} \\ \end{array} \right.$$ For $a=t_1+t_2\bj\in Z(\bh_s)-\{0\}$, we have the following equations: \begin{equation}aa^+a=a,\ a^+aa^+=a^+,\ aa^+=\frac{1}{2}\big(1+\frac{t_2}{\overline{t_1}}\bj\big), \ a^+a=\frac{1}{2}\big(1+\frac{t_2}{t_1}\bj\big).\end{equation} \begin{lem}(c.f.\cite[Corollary 3.1]{cao})\label{lemmoor} Let $a=t_1+t_2\bj\in Z(\bh_s)-\{0\}$. Then the equation $ax=d$ is solvable if and only if $aa^+d=\frac{1}{2}(1+\frac{t_2}{\overline{t_1}}\bj)d=d$, in which case all solutions are given by $$x=a^+d+(1-a^+a)y =\frac{\overline{t_1}+t_2\bj}{4|t_1|^2}d+\frac{1}{2}(1-\frac{t_2}{t_1}\bj)y,\forall y\in \bh_s.$$ \end{lem} \begin{pro}\label{proc=0} Let $f(x)=ax^2+c, a=1+a_2\bj+a_3\bk\in Z(\bh_s)$. If $f(x)=0$ is solvable, then its companion polynomial must vanish identically: $$C(x)\equiv 0.$$ \end{pro} \begin{proof} By (\ref{czpoly}), the companion polynomial of Equation I is $$C(x)=2P_{ac}x^2+I_c.$$ If $f(x)=0$ is solvable, then there exists $x$ such that $c=-ax^2$. Hence $$I_c=I_aI_x^2=0.$$ Because $a=1+(a_2+a_3\bi)\bj\in Z(\bh_s)$, we have \begin{equation}\label{ainvf}a^+=\frac{a}{4},\ \ aa^+=a^+a=\frac{a}{2}.\end{equation} By Lemma \ref{lemmoor}, $ax^2+c=0$ is solvable if and only if $aa^+c=c.$ That is $$ac=2c.$$ Thus $\Re(ac)=2\Re(c)$. Hence $$P_{ac}=c_0-a_2c_2-a_3c_3=2\Re(c)-\Re(ac)=0.$$ This concludes the proof.
\end{proof} \begin{thm}\label{thm2.1} The quadratic equation $ax^2+c=0$ is solvable if and only if $$ac=2c.$$ If $ax^2+c=0$ is solvable then $$x=\sqrt[s]{-\frac{c}{2}+\frac{\bar{a}}{2}y},$$ where $y\in \bh_s$ is such that $$\sqrt[s]{-\frac{c}{2}+\frac{\bar{a}}{2}y}\neq \emptyset.$$ \end{thm} \begin{proof} By the proof of Proposition \ref{proc=0}, $ax^2+c=0$ is solvable if and only if $ac=2c.$ Under this condition, $$x^2=a^+(-c)+(1-a^+a)y =-\frac{c}{2}+(1-\frac{a}{2})y=-\frac{c}{2}+\frac{\bar{a}}{2}y,\forall y\in \bh_s.$$ It follows from \cite[Theorem 2.1]{caoaxiom} that $$x=\sqrt[s]{-\frac{c}{2}+\frac{\bar{a}}{2}y},$$ where $y\in \bh_s$ is such that $\sqrt[s]{-\frac{c}{2}+\frac{\bar{a}}{2}y}\neq \emptyset.$ \end{proof} \begin{exam}\label{exam4.1} Consider the quadratic equation $(1+\bj)x^2 -1-\bj=0$. That is, $a=1+\bj,c=-1-\bj.$ $$x=\sqrt[s]{\frac{1}{2}(1+\bj)+\frac{1}{2}(1-\bj)y},$$ where $y\in \bh_s$ is such that $\sqrt[s]{\frac{1}{2}(1+\bj)+\frac{1}{2}(1-\bj)y}\neq \emptyset.$ \end{exam} \section{Equation II for SZ} We will find the necessary and sufficient conditions for Equation II to have a solution such that $2x_0a+b\in Z(\bh_s)$. For simplicity, we define the following three numbers related to Equation II: \begin{equation}\label{t1t2d}\delta=a_2b_3-a_3b_2+b_1,\,\,t_1=c_2-c_0a_2-a_3c_1,\,\,t_2=c_3-c_0a_3+a_2c_1.\end{equation} In fact, these numbers are \begin{equation}\label{t1t2d1}\delta=P_{ a \bi,b},\quad t_1=-P_{ a \bj,c},\quad t_2=-P_{\bk a,c}.\end{equation} We relabel the real nonlinear system (\ref{rsym2}) as follows.
\begin{eqnarray} x_0^2-x_1^2+x_2^2+x_3^2+2a_2x_0x_2+2a_3x_0x_3-b_1x_1+b_2x_2+b_3x_3+c_0=0,\label{ae1}\\ 2x_0x_1-2a_2x_0x_3+2a_3x_0x_2+b_1x_0-b_2x_3+b_3x_2+c_1=0,\label{ae2}\\ 2x_0x_2+a_2(x_0^2-x_1^2+x_2^2+x_3^2)+2a_3x_0x_1-b_1x_3+b_2x_0+b_3x_1+c_2=0,\label{ae3}\\ 2x_0x_3-2a_2x_0x_1+a_3(x_0^2-x_1^2+x_2^2+x_3^2)+b_1x_2-b_2x_1+b_3x_0+c_3=0.\label{ae4} \end{eqnarray} Suppose $x=x_0+x_1\bi+x_2\bj+x_3\bk\in SZ$ is a solution of Eq.(\ref{linearx1}). By Eq.(\ref{linearx1}), we have \begin{equation}\label{inpax0}\left\langle 2x_0a+b,2x_0a+b \right\rangle=4x_0P_{ab}+I_b=0.\end{equation} Based on this, we divide our consideration into two cases: $$P_{ab}\neq 0\mbox{ and }P_{ab}=0.$$ \subsection{Case $P_{ab}\neq 0$} If $P_{ab}\neq 0$ then by (\ref{inpax0}) we have \begin{equation}\label{slx0}x_0=\frac{-I_b}{4P_{ab}}.\end{equation} We reformulate Eqs.(\ref{ae1}) and (\ref{ae2}) as \begin{eqnarray} -x_1^2+x_2^2+x_3^2-b_1x_1+(b_2+2a_2x_0)x_2+(b_3+2a_3x_0)x_3+c_0+x_0^2&=&0,\label{1aeq1}\\ 2x_0x_1+(b_3+2a_3x_0)x_2-(b_2+2a_2x_0)x_3&=&-c_1-b_1x_0.\label{1aeq2} \end{eqnarray} Using Eq.(\ref{ae1})$\times a_2$+Eq.(\ref{ae2})$\times a_3- $ Eq.(\ref{ae3}) and Eq.(\ref{ae1})$\times a_3-$ Eq.(\ref{ae2})$\times a_2-$ Eq.(\ref{ae4}), we obtain \begin{eqnarray} (-a_2b_1-b_3)x_1+(a_2b_2+a_3b_3)x_2+(a_2b_3-a_3b_2+b_1)x_3&=&c_2-c_0a_2-a_3c_1+(b_2-a_3b_1)x_0,\label{1aeq3}\\ (-a_3b_1+b_2)x_1+(a_3b_2-a_2b_3-b_1)x_2+(a_2b_2+a_3b_3)x_3&=& c_3-c_0a_3+a_2c_1+(b_3+a_2b_1)x_0.\label{1aeq4} \end{eqnarray} Let $y=(x_1,x_2,x_3)^T$. 
Eqs.(\ref{1aeq2})-(\ref{1aeq4}) can be expressed as \begin{equation}\label{lineareqs5} Ay=u,\end{equation} where \begin{equation}\label{matrixA}A=\left( \begin{array}{ccc} 2x_0&b_3+2a_3x_0&-b_2-2a_2x_0\\ -a_2b_1-b_3 & a_2b_2+a_3b_3 &a_2b_3-a_3b_2+b_1\\ -a_3b_1+b_2 & a_3b_2-a_2b_3-b_1 & a_2b_2+a_3b_3 \\ \end{array} \right)\end{equation} and \begin{equation}\label{matrixu}u=\left( \begin{array}{c} -c_1-b_1x_0\\ t_1+(b_2-a_3b_1)x_0 \\ t_2+(b_3+a_2b_1)x_0\\ \end{array} \right).\end{equation} \begin{pro}\label{prop5.2} Let $x_0=\frac{-I_b}{4P_{ab}}$ and $a_2^2+a_3^2=1$. Let $A$ be given by (\ref{matrixA}). Then $$\det(A)=0.$$ \end{pro} \begin{proof} Let \begin{equation}\label{matrixB}B=\left( \begin{array}{ccc} 2x_0&b_3&-b_2\\ -a_2b_1-b_3 & a_2b_2+2a_3b_3+a_2a_3b_1 &-a_3b_2+a_3^2b_1 \\ -a_3b_1+b_2 & -a_2b_3-a_2^2b_1& 2a_2b_2+a_3b_3-a_2a_3b_1 \\ \end{array} \right).\end{equation} It is obvious that $B$ is obtained from $A$ by elementary column operations. It can be verified that $\det(B)=0$. Therefore $\det(A)=\det(B)=0$. \end{proof} Let \begin{equation}\label{matrixM}M=\left(\begin{array}{cc} a_2b_2+a_3b_3 &a_2b_3-a_3b_2+b_1 \\ a_3b_2-a_2b_3-b_1& a_2b_2+a_3b_3\\ \end{array}\right)=\left(\begin{array}{cc} -P_{ab} &\delta \\ -\delta& -P_{ab}\\ \end{array}\right).\end{equation} Since $P_{ab}\neq 0$, the subdeterminant $$m:=\det(M)=P_{ab}^2+\delta^2>0.$$ By Proposition \ref{prop5.2}, this means that $rank(A)=2$.
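Proposition \ref{prop5.2} can be spot-checked numerically: build $A$ from (\ref{matrixA}) with $x_0=\frac{-I_b}{4P_{ab}}$ and confirm that its determinant vanishes. A minimal sketch (an illustration only; the helper name is ours), using the data of Example 3.1 ($a=1+\bj$, $b=\bi+2\bj+\bk$) and one further sample with $a_2^2+a_3^2=1$:

```python
# Build the matrix A from a = 1 + a2 j + a3 k (with a2^2 + a3^2 = 1)
# and b = b1 i + b2 j + b3 k, using x0 = -I_b / (4 P_ab), then return det(A).
def det_A(a2, a3, b1, b2, b3):
    P_ab = -(a2*b2 + a3*b3)            # <a, b> for these coefficients
    I_b = b1**2 - b2**2 - b3**2
    x0 = -I_b / (4*P_ab)               # requires P_ab != 0
    A = [[2*x0,          b3 + 2*a3*x0,        -b2 - 2*a2*x0],
         [-a2*b1 - b3,   a2*b2 + a3*b3,       a2*b3 - a3*b2 + b1],
         [-a3*b1 + b2,   a3*b2 - a2*b3 - b1,  a2*b2 + a3*b3]]
    # Cofactor expansion along the first row.
    return (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))

# Example 3.1's coefficients and a second sample with a2^2 + a3^2 = 1.
assert abs(det_A(1.0, 0.0, 1.0, 2.0, 1.0)) < 1e-12
assert abs(det_A(0.6, 0.8, 1.0, 2.0, 3.0)) < 1e-12
```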
We reformulate Eqs.(\ref{1aeq3}) and (\ref{1aeq4}) as \begin{equation}Mz=v,\end{equation} where $$z=(x_2,x_3)^T,v=\left( \begin{array}{c} t_1+(b_2-a_3b_1)x_0+(a_2b_1+b_3)x_1 \\ t_2+(a_2b_1+b_3)x_0+(a_3b_1-b_2)x_1\\ \end{array} \right).$$ Let $$k_1:=-P_{ab}(a_2b_1+b_3)-\delta(a_3b_1-b_2)=2b_2\delta-a_3I_b,$$ $$k_2:=-P_{ab}(b_2-a_3b_1)-\delta(a_2b_1+b_3)=-2b_3\delta-a_2I_b$$ and \begin{equation*}\Delta_1=\frac{-P_{ab}t_1-\delta t_2}{m},\Delta_2=\frac{\delta t_1-P_{ab}t_2}{m}.\end{equation*} Note that $$ m=P_{ab}^2+\delta^2=b_1^2+b_2^2+b_3^2+2a_2b_1b_3-2a_3b_1b_2=2b_1\delta-I_b$$ and $$k_1^2+k_2^2=m^2.$$ Because $$M^{-1}=\frac{1}{m}\left(\begin{array}{cc} -P_{ab} & -\delta\\ \delta&-P_{ab}\\ \end{array}\right)\mbox{ and } z=M^{-1}v,$$ we have \begin{eqnarray}x_2&=&\frac{-P_{ab}[t_1+(b_2-a_3b_1)x_0+(a_2b_1+b_3)x_1] -\delta[t_2+(a_2b_1+b_3)x_0+(a_3b_1-b_2)x_1]}{m}\nonumber\\ &=&\frac{-P_{ab}(a_2b_1+b_3)-\delta(a_3b_1-b_2)}{m}x_1+\frac{-P_{ab}(b_2-a_3b_1)-\delta(a_2b_1+b_3)}{m}x_0 +\frac{-P_{ab}t_1-\delta t_2}{m}\nonumber\\ &=&\frac{k_1}{m}x_1+\frac{k_2}{m}x_0+\Delta_1\label{x2s5}\end{eqnarray} and \begin{eqnarray}x_3&=&\frac{\delta[t_1+(b_2-a_3b_1)x_0+(a_2b_1+b_3)x_1]-P_{ab}[t_2+(a_2b_1+b_3)x_0+(a_3b_1-b_2)x_1]}{m}\nonumber\\ &=&\frac{\delta(a_2b_1+b_3)-P_{ab}(a_3b_1-b_2)}{m}x_1 +\frac{-P_{ab}(a_2b_1+b_3)-\delta(a_3b_1-b_2)}{m}x_0+\frac{\delta t_1-P_{ab}t_2}{m}\nonumber\\ &=&-\frac{k_2}{m}x_1+\frac{k_1}{m}x_0+\Delta_2.\label{x3s5}\end{eqnarray} Substituting the above two formulas in Eq.(\ref{1aeq2}), we have \begin{eqnarray*}\Big(2x_0+\frac{b_3k_1+b_2k_2+2a_3k_1x_0+2a_2k_2x_0}{m}\Big)x_1+F=0, \end{eqnarray*} where \begin{eqnarray}\label{condthm5.1}F=\frac{2a_3k_2-2a_2k_1}{m}x_0^2 +\Big(\frac{b_3k_2-b_2k_1}{m}+2a_3\Delta_1-2a_2\Delta_2+b_1\Big)x_0+b_3\Delta_1-b_2\Delta_2+c_1.\end{eqnarray} Note that $$2x_0+\frac{b_3k_1+b_2k_2+2a_3k_1x_0+2a_2k_2x_0}{m}=0.$$ By the solvability of $Ay=u$, we should have $F=0$. 
We remark that the fact that the coefficient of $x_1$ is zero is guaranteed by $\det(A)=0$ and $F=0$ is just a restatement of $rank(A)=rank(A,u)=2$. Substituting $x_2$ and $x_3$ of (\ref{x2s5}) and (\ref{x3s5}) in Eq.(\ref{1aeq1}), we obtain $$Rx_1+L=0,$$ where \begin{eqnarray}R=\frac{2k_1\Delta_1-2k_2\Delta_2+b_2k_1-b_3k_2+2(a_2k_1-a_3k_2)x_0-mb_1}{m}\end{eqnarray} and \begin{eqnarray}L&=&b_2\Delta_1+b_3\Delta_2+\Delta_1^2+\Delta_2^2+c_0+\frac{2(a_2k_2+a_3k_1+m)}{m}x_0^2\nonumber\\ &&+\frac{(2k_2\Delta_1+2k_1\Delta_2+b_2k_2+b_3k_1+2a_2\Delta_1m+2a_3\Delta_2m)}{m}x_0.\end{eqnarray} If $R=0$ we should have $L=0$ and in this case, $x_1$ is arbitrary. If $R\neq 0$ then $$x_1=\frac{-L}{R}.$$ Summarizing our reasoning process, we figure out the following conditions. \begin{defi}\label{def5.1} For the coefficients $a,b,c$ in Equation II such that $P_{ab}\neq 0,$ we set \begin{equation}\label{x0s5thm}x_0=\frac{-I_b}{4P_{ab}},\end{equation} \begin{equation}\label{k1k2mthm}k_1=2b_2\delta-a_3I_b,k_2=-2b_3\delta-a_2I_b, m=2b_1\delta-I_b,\end{equation} \begin{equation}\label{Delta}\Delta_1=\frac{-P_{ab}t_1-\delta t_2}{m},\Delta_2=\frac{\delta t_1-P_{ab}t_2}{m},\end{equation} \begin{eqnarray}\label{thmR}R=\frac{2k_1\Delta_1-2k_2\Delta_2+b_2k_1-b_3k_2+2(a_2k_1-a_3k_2)x_0-mb_1}{m},\end{eqnarray} \begin{eqnarray}L&=&b_2\Delta_1+b_3\Delta_2+\Delta_1^2+\Delta_2^2+c_0+\frac{2(a_2k_2+a_3k_1+m)}{m}x_0^2\nonumber\\ &&+\frac{(2k_2\Delta_1+2k_1\Delta_2+b_2k_2+b_3k_1+2a_2\Delta_1m+2a_3\Delta_2m)}{m}x_0,\label{thmL}\end{eqnarray} and \begin{eqnarray}\label{condthm3.2F}F=\frac{2a_3k_2-2a_2k_1}{m}x_0^2 +\Big(\frac{b_3k_2-b_2k_1}{m}+2a_3\Delta_1-2a_2\Delta_2+b_1\Big)x_0+b_3\Delta_1-b_2\Delta_2+c_1.\end{eqnarray} We say $(a,b,c)$ satisfies {\bf Condition 1} if the following two conditions hold: \begin{itemize} \item [(1)] $F=0;$ \item [(2)] If $R=0$ then $L=0$. \end{itemize} \end{defi} Summarizing the previous results, we obtain the following theorem. 
\begin{thm}\label{thm5.1} Equation II with $P_{ab}\neq 0$ has a solution $x\in SZ$ if and only if Condition 1 holds. Let $x_0,k_1,k_2,m,\Delta_1,\Delta_2,R,L,F$ be given by Definition \ref{def5.1}. If Condition 1 holds, then we have the following cases: \begin{itemize} \item [(1)] If $R\neq 0$ then Equation II has a solution: $$x=x_0-\frac{L}{R}\bi+x_2\bj+x_3\bk,$$ where $$x_2= -\frac{k_1L}{mR}+\frac{k_2}{m}x_0+\Delta_1$$ and $$x_3=\frac{k_2L}{mR}+\frac{k_1}{m}x_0+\Delta_2.$$ \item [(2)] If $R=0$ then Equation II has solutions: $$x=x_0+x_1\bi+(\frac{k_1}{m}x_1+\frac{k_2}{m}x_0+\Delta_1)\bj+(-\frac{k_2}{m}x_1+\frac{k_1}{m}x_0+\Delta_2)\bk, \forall x_1\in \br.$$ \end{itemize} \end{thm} \begin{exam}\label{exam5.1} Consider the quadratic equation $(1+\bj)x^2 +(\bi+2\bj+\bk)x-\frac{1}{4}+\frac{5}{2}\bi+\frac{3}{4}\bj+\frac{5}{2}\bk= 0$. That is, $a=1+\bj,b=\bi+2\bj+\bk$ and $c=-\frac{1}{4}+\frac{5}{2}\bi+\frac{3}{4}\bj+\frac{5}{2}\bk$. In this case $$x_0=-\frac{1}{2},k_1=8,k_2=0,\Delta_1=-1,\Delta_2=\frac{3}{2},m=8,R=-2,L=2, F=0.$$ Therefore $(a,b,c)$ satisfies {\it Condition 1} and $x_1=-\frac{L}{R}=1, x_2=0,x_3=1$. Thus $$x=-\frac{1}{2}+\bi+\bk$$ is a solution of the given quadratic equation. \end{exam} \begin{exam}\label{exam5.2} Consider the quadratic equation $(1+\bj)x^2 +(\bi+\bj)x-1+\bi= 0$. That is, $a=1+\bj,b=\bi+\bj$ and $c=-1+\bi$. In this case $$x_0=0,k_1=2,k_2=0,\Delta_1=0,\Delta_2=1,m=2,R=L=0,F=0.$$ Therefore $(a,b,c)$ satisfies {\it Condition 1}. In this case $x_1$ is arbitrary, $x_2=x_1,x_3=1$. Thus $$x=x_1\bi+x_1\bj+\bk,\forall x_1\in \br$$ are solutions of the given quadratic equation. \end{exam} \subsection{Case $P_{ab}=0$} In this subsection, we will find the necessary and sufficient conditions for Equation II with $P_{ab}=0$ to have a solution $x\in SZ$. We begin with a proposition, which describes the linear relations among $x_i,i=0,\cdots, 3$. \begin{pro}\label{prop5.1} Suppose that $P_{ab}=0$.
Then the solution $x$ of $ax^2+bx+c=0$ satisfies the following linear equation: \begin{equation}\label{lineareqn} Ay=u,\end{equation} where $y=(x_0,x_1,x_2,x_3)^T$, \begin{equation}\label{matrixA1n}A=\left( \begin{array}{cccc} b_2-a_3b_1&a_2b_1+b_3&0&-\delta\\ a_2b_1+b_3&a_3b_1-b_2&\delta&0 \end{array} \right),u=\left( \begin{array}{c} -t_1\\ -t_2 \end{array} \right)\end{equation} and $t_1,t_2,\delta$ are given by (\ref{t1t2d1}). \end{pro} \begin{proof} Note that $a_2^2+a_3^2=1$ and $a_2b_2+a_3b_3=0$. Using Eq.(\ref{ae3})$-$Eq.(\ref{ae1})$\times a_2-$Eq.(\ref{ae2})$\times a_3$, we have \begin{equation}\label{new3}(b_2-a_3b_1)x_0+(a_2b_1+b_3)x_1+(a_3b_2-a_2b_3-b_1)x_3+c_2-a_2c_0-a_3c_1=0.\end{equation} Using Eq.(\ref{ae4})$-$Eq.(\ref{ae1})$\times a_3+$ Eq.(\ref{ae2})$\times a_2$, we have \begin{equation}\label{new4}(a_2b_1+b_3)x_0+(a_3b_1-b_2)x_1+(a_2b_3-a_3b_2+b_1)x_2+c_3+a_2c_1-a_3c_0=0.\end{equation} This completes the proof. \end{proof} Suppose that Equation II with $P_{ab}=0$ has a solution $x\in SZ$. By Proposition \ref{prop5.1}, under the condition $P_{ab}=0$, we have \begin{eqnarray} (b_2-a_3b_1)x_0+(a_2b_1+b_3)x_1+(a_3b_2-a_2b_3-b_1)x_3+t_1=0,\label{2new3}\\ (a_2b_1+b_3)x_0+(a_3b_1-b_2)x_1+(a_2b_3-a_3b_2+b_1)x_2+t_2=0.\label{2new4} \end{eqnarray} Since $$\left\langle 2x_0a+b,2x_0a+b \right\rangle=4x_0P_{ab}+I_b=0$$ and $P_{ab}=0$ by assumption, we must have $I_b=0$. By $P_{ab}=I_b=0$, we have $$b_1^2-(a_3b_2-a_2b_3)^2=0.$$ Thus we have $\delta=2b_1$ or $\delta=0$. We divide our consideration into two subcases: $$\delta=2b_1\mbox{ and }\delta=0.$$ We mention that $\delta=2b_1$ is equivalent to $P_{\bi a ,b}=0$ and $\delta=0$ is equivalent to $P_{a \bi ,b}=0$. \subsubsection{Subcase $P_{\bi a ,b}=0$} We begin with the case $\delta=2b_1$, that is $b_1=-a_3b_2+a_2b_3$. If $b_1=-a_3b_2+a_2b_3$, then by $P_{ab}=0$ and $I_a=0$ we have $b_2-a_3b_1=2b_2,a_2b_1+b_3=2b_3.$ Thus \begin{equation}\label{b3b2c1}b_3=a_2b_1,b_2=-a_3b_1.\end{equation} By our assumption $b\neq 0$, we have $b_1\neq 0$.
Hence Eqs. (\ref{2new3}) and (\ref{2new4}) become \begin{eqnarray*} 2b_2x_0+2b_3x_1-2b_1x_3+t_1=0,\\ 2b_3x_0-2b_2x_1+2b_1x_2+t_2=0. \end{eqnarray*} From the above and Eq.(\ref{b3b2c1}), we get \begin{eqnarray} x_2&=&-a_2x_0-a_3x_1-\frac{t_2}{2b_1},\label{s5newx2}\\ x_3&=&-a_3x_0+a_2x_1+\frac{t_1}{2b_1}.\label{s5newx3} \end{eqnarray} Substituting the above two formulas of $x_2$ and $x_3$ in Eq.(\ref{ae2}), we obtain \begin{eqnarray*} -\frac{a_2t_1+a_3t_2}{b_1}x_0-\frac{a_2t_2-a_3t_1}{2}+c_1=0. \end{eqnarray*} If $a_2t_1+a_3t_2=0$ then we must have $-\frac{a_2t_2-a_3t_1}{2}+c_1=0$ and in this case $x_0$ is arbitrary. If $a_2t_1+a_3t_2\neq 0$ then $$x_0=\frac{(a_3t_1-a_2t_2+2c_1)b_1}{2(a_2t_1+a_3t_2)}.$$ Substituting $x_2$ and $x_3$ of (\ref{s5newx2}) and (\ref{s5newx3}) in Eq.(\ref{ae1}), we obtain \begin{eqnarray*} \frac{a_2t_1+a_3t_2}{b_1}x_1+\frac{t_1^2+t_2^2}{4b_1^2}+\frac{a_2t_1+a_3t_2}{2}+c_0=0. \end{eqnarray*} If $a_2t_1+a_3t_2=0$ then we need $\frac{t_1^2+t_2^2}{4b_1^2}+c_0=0$ and $x_1$ is arbitrary. If $a_2t_1+a_3t_2\neq 0$ then $$x_1=-\frac{t_1^2+t_2^2+2b_1^2(a_2t_1+a_3t_2)+4b_1^2c_0}{4b_1(a_2t_1+a_3t_2)}.$$ By the above reasoning process, we figure out the following condition. \begin{defi}\label{def5.2} For the coefficients $a,b,c$ in Equation II such that $P_{ab}=0$ and $I_b=0$, $(a,b,c)$ satisfies {\bf Condition 2} if the following two conditions hold: \begin{itemize} \item [(1)] $P_{\bi a,b}=0;$ \item [(2)] If $a_2t_1+a_3t_2=0$ then $a_2t_2-a_3t_1-2c_1=0$ and $t_1^2+t_2^2+4b_1^2c_0=0$. \end{itemize} \end{defi} Summarizing the previous results, we obtain the following theorem. \begin{thm}\label{thm3.2} Equation II with $P_{ab}=0$ and $P_{\bi a,b}=0$ has a solution $x\in SZ$ if and only if Condition 2 holds. If Condition 2 holds, then we have the following cases. 
\begin{itemize} \item [(1)] If $a_2t_1+a_3t_2\neq 0$ then Equation II has a solution $$x=x_0+x_1\bi+x_2\bj+x_3\bk,$$ where \begin{eqnarray*}\label{s5x0}x_0=\frac{(a_3t_1-a_2t_2+2c_1)b_1}{2(a_2t_1+a_3t_2)} \end{eqnarray*} \begin{eqnarray*}\label{s5x1}x_1=-\frac{t_1^2+t_2^2+2b_1^2(a_2t_1+a_3t_2)+4b_1^2c_0}{4b_1(a_2t_1+a_3t_2)}, \end{eqnarray*} and \begin{eqnarray*} x_2&=&-a_2x_0-a_3x_1-\frac{t_2}{2b_1},\\ x_3&=&-a_3x_0+a_2x_1+\frac{t_1}{2b_1}. \end{eqnarray*} \item [(2)]If $a_2t_1+a_3t_2=0$ then Equation II has solutions $$x=x_0+x_1\bi+x_2\bj+x_3\bk,\forall x_0,x_1\in \br,$$ where \begin{eqnarray*} x_2&=&-a_2x_0-a_3x_1-\frac{t_2}{2b_1},\\ x_3&=&-a_3x_0+a_2x_1+\frac{t_1}{2b_1}. \end{eqnarray*} \end{itemize} \end{thm} \begin{exam}\label{examthm5.3} Consider the quadratic equation $(1+\bj)x^2 +(\bi+\bk)x+1-\bi=0$. That is, $a=1+\bj,b=\bi+\bk$ and $c=1-\bi$. In this case $t_1=t_2=-1$ and $a_2t_1+a_3t_2=-1$. The equation $ax^2+bx+c=0$ has a solution $x=\frac{1}{2}+\bi+\frac{1}{2}\bk.$ \end{exam} \begin{exam}\label{examthm5.4} Consider the quadratic equation $(1+\bj)x^2 +(\bi+\bk)x-1+\bi-\bj+\bk=0$. That is, $a=1+\bj,b=\bi+\bk$ and $c=-1+\bi-\bj+\bk$. In this case $t_1=0,t_2=2$ and $a_2t_1+a_3t_2=0$. The equation $ax^2+bx+c=0$ has solutions $x=x_0+x_1\bi-(1+x_0)\bj+x_1\bk,\forall x_0,x_1\in \br.$ \end{exam} \subsubsection{Subcase $P_{ a \bi,b}=0$} We now consider the second case $\delta=0$, that is, $b_1=a_3b_2-a_2b_3$. 
If $b_1=a_3b_2-a_2b_3$ then by $P_{ab}=0$ and $I_a=0$ we have $$b_2-a_3b_1=a_2(a_2b_2+a_3b_3)=0,\,a_2b_1+b_3=a_3(a_2b_2+a_3b_3)=0.$$ So we have \begin{equation}\label{b3b2c2}b_2=a_3b_1,b_3=-a_2b_1.\end{equation} Hence we have \begin{equation}\label{ab1i}b=ab_1\bi.\end{equation} From the above formulas, Eqs.(\ref{2new3}) and (\ref{2new4}) imply that \begin{equation}\label{ac1}c_2-a_2c_0-a_3c_1=0,\,c_3+a_2c_1-a_3c_0=0.\end{equation} By $I_a=0$ and the above two conditions, we have \begin{equation}\label{ac2}P_{ac}=\left\langle a,c\right\rangle=c_0-a_2c_2-a_3c_3=0.\end{equation} From this, we get \begin{equation}\label{ac3}c_1=a_3c_2-a_2c_3.\end{equation} Eqs. (\ref{ac1})-(\ref{ac3}) are equivalent to the condition \begin{equation}\label{acend}ac=2c.\end{equation} Under the condition $P_{ab}=I_a=I_b=0,b_2=a_3b_1,b_3=-a_2b_1$ and $ac=2c$, we have \begin{center}Eq.(\ref{ae3})$=$ Eq.(\ref{ae1})$\times a_2+$Eq.(\ref{ae2})$\times a_3$ and Eq.(\ref{ae4})$=$ Eq.(\ref{ae1})$\times a_3-$Eq.(\ref{ae2})$\times a_2$.\end{center} Hence in this case Equation II only has two independent equalities Eqs.(\ref{ae1}) and (\ref{ae2}), which can be reformulated as \begin{eqnarray} x_0^2+2(a_2x_2+a_3x_3)x_0-x_1^2+x_2^2+x_3^2-b_1x_1+a_3b_1x_2-a_2b_1x_3+c_0=0,\label{ab0eq1}\\ x_0(2x_1-2a_2x_3+2a_3x_2+b_1)=b_1(a_2x_2+a_3x_3)-c_1.\label{ab0eq2} \end{eqnarray} This is an underdetermined system of equations. Before going on, we make the following remark. \begin{rem} Note that $$ax^2+bx+c=ax^2+ab_1\bi x+\frac{ac}{2}=a(x^2+b_1\bi x+\frac{c}{2}).$$ We reformulate Equation II as $$a(x^2+b_1\bi x+\frac{c}{2})=0.$$ By Lemma \ref{lemmoor}, the above equation is equivalent to $$x^2+b_1\bi x+\frac{c}{2}=(1-a^+a)y=\frac{\bar{a}}{2}y, \forall y\in \bh_s.$$ For simplification, we express it as $$x^2+b_1\bi x+\frac{c}{2}+\bar{a}y=0, \forall y\in \bh_s.$$ In principle, we can solve the above equation for a specific $y$ by the approaches in \cite{caoaxiom,Ireneu,Irene,Opfer18,Opfer17}.
\end{rem} To avoid introducing the arbitrary $y$, we choose another method as follows. Note that $b_1\neq 0$. By Eq.(\ref{ab0eq2}), if $x_0=0$ then $$a_2x_2+a_3x_3=\frac{c_1}{b_1}.$$ We now treat the cases $a_2=0$ and $a_2\neq 0$ in turn. If $a_2=0$ then $a_3\neq 0$ and therefore $$x_3=\frac{c_1}{a_3b_1}.$$ Note that $a_3^2=1$. Substituting $x_0=0$ and $x_3=\frac{c_1}{a_3b_1}$ in Eq.(\ref{ab0eq1}), we obtain $$x_2^2+a_3b_1x_2+\frac{c_1^2}{b_1^2}+c_0-x_1^2-b_1x_1=0.$$ So we have solutions $$x=x_1\bi+x_2\bj+\frac{c_1}{a_3b_1}\bk,$$ where $$x_2=\frac{-a_3b_1\pm \sqrt{b_1^2-4(\frac{c_1^2}{b_1^2}+c_0-x_1^2-b_1x_1)}}{2}$$ and $x_1\in \br$ satisfies $$\frac{c_1^2}{b_1^2}+c_0-\frac{b_1^2}{4}-x_1^2-b_1x_1\leq 0.$$ If $a_2\neq 0$ then \begin{equation}\label{addx2}x_2=\frac{c_1}{a_2b_1}-\frac{a_3}{a_2}x_3.\end{equation} Substituting $x_0=0$ and the above formula in Eq.(\ref{ab0eq1}), we obtain $$x_1^2+b_1x_1+t=0,$$ where $$t=-\frac{1}{a_2^2}x_3^2+(\frac{2a_3c_1+a_2b_1^2}{a_2^2b_1})x_3-(c_0+\frac{a_3c_1}{a_2}+\frac{c_1^2}{a_2^2b_1^2}).$$ Hence $x_1$ can be expressed by $x_3$ as \begin{equation}\label{addx1}x_1=\frac{-b_1\pm \sqrt{b_1^2-4t}}{2}\end{equation} and $x_3\in \br$ satisfies \begin{equation}\label{addb1}b_1^2-4t=\frac{4}{a_2^2}[x_3^2-(\frac{2a_3c_1+a_2b_1^2}{b_1})x_3+\frac{4(a_2^2b_1^2c_0+a_2b_1^2a_3c_1+c_1^2)+b_1^4a_2^2}{4b_1^2}] \geq 0.\end{equation} So we have solutions $$x=x_1\bi+x_2\bj+x_3\bk,$$ where $x_1,x_2$ are given by (\ref{addx1}) and (\ref{addx2}), and $x_3\in \br$ satisfies (\ref{addb1}).
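As a numerical sanity check of the $x_0=0$, $a_2=0$ branch above, the following sketch (an illustration only, assuming Table 1's multiplication rules) substitutes the displayed formulas into $ax^2+bx+c$ for the concrete data $a=1+\bk$, $b=\bi+\bj$, $c=1+2\bi+2\bj+\bk$, which satisfy $P_{ab}=0$, $\delta=0$ and $ac=2c$:

```python
import math

# Split-quaternion multiplication (i^2 = -1, j^2 = k^2 = 1, ij = k = -ji,
# jk = -i = -kj, ki = j = -ik); tuples are (x0, x1, x2, x3).
def mult(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 + p2*q2 + p3*q3,
            p0*q1 + p1*q0 - p2*q3 + p3*q2,
            p0*q2 + p2*q0 - p1*q3 + p3*q1,
            p0*q3 + p3*q0 + p1*q2 - p2*q1)

def f(a, b, c, x):
    """Evaluate a x^2 + b x + c componentwise."""
    s = mult(a, mult(x, x))
    t = mult(b, x)
    return tuple(u + v + w for u, v, w in zip(s, t, c))

# Instance with a2 = 0, a3 = 1: a = 1 + k, b = i + j, c = 1 + 2i + 2j + k.
a, b, c = (1, 0, 0, 1), (0, 1, 1, 0), (1, 2, 2, 1)
b1, c0, c1, a3 = 1, 1, 2, 1
# Branch x0 = 0, a2 = 0: x3 = c1/(a3 b1), x2 from the quadratic formula,
# with x1 chosen so that the discriminant is nonnegative.
x1 = 2.0
disc = b1**2 - 4*(c1**2/b1**2 + c0 - x1**2 - b1*x1)
assert disc >= 0
for sign in (+1, -1):
    x2 = (-a3*b1 + sign*math.sqrt(disc)) / 2
    x = (0.0, x1, x2, c1/(a3*b1))
    assert all(abs(u) < 1e-12 for u in f(a, b, c, x))
```

Both signs of the square root yield roots of the quadratic, as the derivation predicts.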
If $x_0\neq 0$ then from Eq.(\ref{ab0eq2}) we have $$2x_1-2a_2x_3+2a_3x_2+b_1=\frac{a_3b_1x_3+a_2b_1x_2-c_1}{x_0}.$$ From this, we get $$x_1=(\frac{a_2b_1}{2x_0}-a_3)x_2+(\frac{a_3b_1}{2x_0}+a_2)x_3-\frac{c_1+b_1x_0}{2x_0}.$$ Substituting the above formula in Eq.(\ref{ab0eq1}) and rearranging the equation, we obtain \begin{eqnarray*} &&x_0^4+2(a_2x_2+a_3x_3)x_0^3+[(a_2x_2+a_3x_3)^2+b_1(a_3x_2-a_2x_3)+c_0+\frac{b_1^2}{4}]x_0^2\\ &&+[a_2a_3b_1(x_2^2-x_3^2)+b_1(a_3^2-a_2^2)x_2x_3+c_1(a_2x_3-a_3x_2)]x_0-\frac{[b_1(a_2x_2+a_3x_3)-c_1]^2}{4}=0. \end{eqnarray*} Let \begin{eqnarray*} f(z)&=&z^4+2(a_2x_2+a_3x_3)z^3+[(a_2x_2+a_3x_3)^2+b_1(a_3x_2-a_2x_3)+c_0+\frac{b_1^2}{4}]z^2\\ &&+[a_2a_3b_1(x_2^2-x_3^2)+b_1(a_3^2-a_2^2)x_2x_3+c_1(a_2x_3-a_3x_2)]z-\frac{[b_1(a_2x_2+a_3x_3)-c_1]^2}{4}. \end{eqnarray*} Then $$f(0)=-\frac{[b_1(a_2x_2+a_3x_3)-c_1]^2}{4}\leq 0,\lim_{z\to +\infty} f(z)=+\infty,\lim_{z\to -\infty} f(z)=+\infty.$$ If $a_2x_2+a_3x_3\neq \frac{c_1}{b_1}$ then $f(0)<0$, and $f(z)=0$ has at least two real solutions $z_1\in (-\infty,0)$ and $z_2\in (0,\infty)$. Let $T\in \br$ be a solution of $f(z)=0$ with $a_2x_2+a_3x_3\neq \frac{c_1}{b_1}$. Then Equation II has a solution $$x=T+x_1\bi+x_2\bj+x_3\bk,$$ where $$x_1=(\frac{a_2b_1}{2T}-a_3)x_2+(\frac{a_3b_1}{2T}+a_2)x_3-\frac{c_1+b_1T}{2T}.$$ If \begin{equation}\label{ab0eqx23}a_2x_2+a_3x_3=\frac{c_1}{b_1},\end{equation} then by (\ref{ab0eq2}) and our assumption $x_0\neq 0$ we have \begin{eqnarray} 2x_1-2a_2x_3+2a_3x_2+b_1=0.\label{ab0eq3} \end{eqnarray} By (\ref{ab0eqx23}) and (\ref{ab0eq3}), we obtain that \begin{eqnarray*} x_2=\frac{a_2c_1}{b_1}-\frac{a_3b_1}{2}-a_3x_1,\label{ab0eq4}\\ x_3=\frac{a_3c_1}{b_1}+\frac{a_2b_1}{2}+a_2x_1.\label{ab0eq5} \end{eqnarray*} Substituting the above formulas in (\ref{ab0eq1}), we obtain \begin{eqnarray} x_0^2+\frac{2c_1}{b_1}x_0-b_1x_1+\frac{c_1^2}{b_1^2}-\frac{b_1^2}{4}+c_0=0. 
\end{eqnarray} Hence \begin{eqnarray*} x_1= \frac{1}{b_1}x_0^2+\frac{2c_1}{b_1^2}x_0+\frac{c_1^2}{b_1^3}-\frac{b_1}{4}+\frac{c_0}{b_1}.\label{s5newx1} \end{eqnarray*} From the above description, Equation II has solutions $$x=x_0+x_1\bi+x_2\bj+x_3\bk,\forall x_0\neq 0,$$ where $x_1,x_2,x_3$ are expressed by formulas containing $x_0$ as above. Summarizing the previous results, we obtain the following theorem. \begin{thm}\label{thm5.3} Equation II with $P_{ab}=0$ and $P_{a \bi,b}=0$ has a solution $x\in SZ$ if and only if $ac=2c$. If Equation II is solvable then we have the following cases: \begin{itemize} \item [(1)] Case $x_0=0$: \begin{itemize} \item [(1.1)] If $a_2=0$ then Equation II has solutions: $$x=x_1\bi+x_2\bj+\frac{c_1}{a_3b_1}\bk,$$ where $$x_2=\frac{-a_3b_1\pm \sqrt{b_1^2-4(\frac{c_1^2}{b_1^2}+c_0-x_1^2-b_1x_1)}}{2}$$ and $x_1\in \br$ satisfies $$x_1^2+b_1x_1+\frac{b_1^2}{4}-\frac{c_1^2}{b_1^2}-c_0\geq 0.$$ \item [(1.2)] If $a_2\neq 0$ then Equation II has solutions: $$x=x_1\bi+x_2\bj+x_3\bk,$$ where $x_3\in \br$ satisfies $$w=x_3^2-(\frac{2a_3c_1+a_2b_1^2}{b_1})x_3 +\frac{4(a_2^2b_1^2c_0+a_2b_1^2a_3c_1+c_1^2)+b_1^4a_2^2}{4b_1^2}\ge 0$$ and $$x_1=\frac{-b_1}{2}\pm \frac{\sqrt{w}}{a_2},$$ $$x_2=\frac{c_1}{a_2b_1}-\frac{a_3}{a_2}x_3.$$ \end{itemize} \item [(2)] Case $x_0\neq 0$: \begin{itemize} \item [(2.1)] $a_2x_2+a_3x_3\neq \frac{c_1}{b_1}$: Equation II has solutions: $$x=T+x_1\bi+x_2\bj+x_3\bk,$$ where $T$ is a real solution of the following equation: \begin{eqnarray*} z^4+2(a_2x_2+a_3x_3)z^3+[(a_2x_2+a_3x_3)^2+b_1(a_3x_2-a_2x_3)+c_0+\frac{b_1^2}{4}]z^2\\ +[a_2a_3b_1(x_2^2-x_3^2)+b_1(a_3^2-a_2^2)x_2x_3+c_1(a_2x_3-a_3x_2)]z-\frac{[b_1(a_2x_2+a_3x_3)-c_1]^2}{4}=0 \end{eqnarray*} and $$x_1=(\frac{a_2b_1}{2T}-a_3)x_2+(\frac{a_3b_1}{2T}+a_2)x_3-\frac{c_1+b_1T}{2T}.$$ \item [(2.2)] $a_2x_2+a_3x_3=\frac{c_1}{b_1}$: Equation II has solutions: $$x=x_0+x_1\bi+x_2\bj+x_3\bk,\forall x_0\neq 0,$$ where \begin{eqnarray*} x_1&=&
\frac{1}{b_1}x_0^2+\frac{2c_1}{b_1^2}x_0+\frac{c_1^2}{b_1^3}-\frac{b_1}{4}+\frac{c_0}{b_1},\\ x_2&=&\frac{a_2c_1}{b_1}-\frac{a_3b_1}{2}-a_3x_1,\\ x_3&=&\frac{a_3c_1}{b_1}+\frac{a_2b_1}{2}+a_2x_1. \end{eqnarray*} \end{itemize} \end{itemize} \end{thm} \begin{exam}\label{exam5.5} Consider the quadratic equation \begin{equation}\label{exam5.5eq}(1+\bk)x^2 +(\bi+\bj)x+1+2\bi+2\bj+\bk= 0.\end{equation} That is, $a=1+\bk,b=\bi+\bj$ and $c=1+2\bi+2\bj+\bk$. Then we have the following cases: \begin{itemize} \item[(1.1)] Eq.(\ref{exam5.5eq}) has the following solutions $$x=x_1\bi-\Big(\frac{1}{2}\pm \sqrt{x_1^2+x_1-\frac{19}{4}}\Big)\bj+2\bk,$$ where $x_1$ is arbitrary but satisfies $x_1^2+x_1-\frac{19}{4}\geq 0$. \item[(2.1)] Eq.(\ref{exam5.5eq}) has the following solutions $$x=T+x_1\bi+x_2\bj+x_3\bk,\forall x_3\neq 2, x_2 \in \br$$ where $T$ is a real solution of the following equation: \begin{eqnarray}\label{examp5.5x0} z^4+2x_3z^3+(x_3^2+x_2+\frac{5}{4})z^2+(x_2x_3-2x_2)z-\frac{(x_3-2)^2}{4}=0 \end{eqnarray} and $$x_1=-x_2+\frac{1}{2T}x_3-\frac{2+T}{2T}.$$ For example, if we take $x_2=x_3=1$, then Eq. (\ref{examp5.5x0}) has real solutions $T_1=0.3914$ and $T_2=-0.1675$. So we have solutions $$x=0.3914-2.7773\bi+\bj+\bk,\mbox{ and } x=-0.1675+1.4857\bi+\bj+\bk.$$ \item[(2.2)] When $x_3=2$, Eq.(\ref{exam5.5eq}) has the following solutions $$x=x_0+x_1\bi-(x_1+\frac{1}{2})\bj+2\bk,\forall x_0\neq 0,$$ where \begin{eqnarray*} x_1= x_0^2+4x_0+\frac{19}{4}. \end{eqnarray*} \end{itemize} \end{exam} \begin{exam}\label{exam5.6} Consider the quadratic equation \begin{equation}\label{exam5.6eq}(1+\bj)x^2 +(-\bi+\bk)x-1+\bi-\bj-\bk= 0.\end{equation} That is, $a=1+\bj,b=-\bi+\bk$ and $c=-1+\bi-\bj-\bk$. Then we have the following cases.
\begin{itemize} \item[(1.2)] Eq.(\ref{exam5.6eq}) has the following solutions $$x=(1+x_3)\bi-\bj+x_3\bk\mbox{ and } x=-x_3\bi-\bj+x_3\bk, \forall x_3\in \br.$$ \item[(2.1)] Eq.(\ref{exam5.6eq}) has the following solutions $$x=T+x_1\bi+x_2\bj+x_3\bk,\forall x_2\neq -1,x_3\in \br$$ where $T$ is a real solution of the following equation: \begin{eqnarray}\label{examp5.6x0} z^4+2x_2z^3+(x_2^2+x_3-\frac{3}{4})z^2+(x_2x_3+x_3)z-\frac{(x_2+1)^2}{4}=0 \end{eqnarray} and $$x_1=-\frac{1}{2T}x_2+x_3+\frac{T-1}{2T}.$$ For example, if we take $x_2=x_3=1$, then Eq. (\ref{examp5.6x0}) has real solutions $T=-2$ and $T=0.3620$. So we have solutions $$x=-2+2\bi+\bj+\bk,\mbox{ and } x=0.3620-1.2621\bi+\bj+\bk.$$ \item[(2.2)] When $x_2=-1$, Eq.(\ref{exam5.6eq}) has the following solutions $$x=x_0+x_1\bi-\bj+(x_1-\frac{1}{2})\bk,\forall x_0\neq 0,$$ where \begin{eqnarray*} x_1= -x_0^2+2x_0+\frac{1}{4}. \end{eqnarray*} \end{itemize} \end{exam} \section{Equation II for SI} In this section we consider Equation II for $SI$. We relabel the real nonlinear system (\ref{rsym1}) as follows. \begin{eqnarray} N(2TP_{ab}+I_b+2P_{ac})-I_c=0,\label{1ieq1}\\ 2P_{ab}T^2+(2P_{ac}+I_b)T-2NP_{ab}+2P_{bc}=0.\label{1ieq2} \end{eqnarray} We treat the cases $P_{ab}\neq 0$ and $P_{ab}=0$ separately. \subsection{Case $P_{ab}\neq 0$} \begin{thm}\label{thm5.4}Equation II with $P_{ab}\neq 0$ has a solution $$x=(Ta+b)^{-1}(aN-c),$$ where $T$ is a real solution of the following cubic equation such that $Ta+b\in \bh_s-Z(\bh_s)$: \begin{equation}\label{Tab}4P_{ab}^2T^3+[4P_{ab}(2P_{ac}+I_b)]T^2+[4P_{ab}P_{bc} +(2P_{ac}+I_b)^2]T+2P_{bc}(2P_{ac}+I_b)-2P_{ab}I_c=0 \end{equation} and \begin{equation}\label{Nab}N=\frac{2P_{ab}T^2+(2P_{ac}+I_b)T+2P_{bc}}{2P_{ab}}. \end{equation} \end{thm} \begin{proof} If $P_{ab}\neq 0$ then by (\ref{1ieq2}) we get \begin{equation}\label{Nab2}N=\frac{2P_{ab}T^2+(2P_{ac}+I_b)T+2P_{bc}}{2P_{ab}}.
\end{equation} Substituting the above $N$ in (\ref{1ieq1}), we obtain \begin{equation}\label{Tab2}4P_{ab}^2T^3+[4P_{ab}(2P_{ac}+I_b)]T^2+[4P_{ab}P_{bc}+(2P_{ac}+I_b)^2]T +2P_{bc}(2P_{ac}+I_b)-2P_{ab}I_c=0. \end{equation} Let $T$ be a real solution of the above cubic equation. Then the corresponding solution is $$x=(Ta+b)^{-1}(aN-c).$$ \end{proof} \begin{exam}\label{exam5.7} Consider the quadratic equation $(1+\bj)x^2 +(\bi+\bj)x-1+\bi=0$. That is, $a=1+\bj,b=\bi+\bj$ and $c=-1+\bi$. $P_{ab}=-1$. In this case $T=-2, N=1$ and $$x=(Ta+b)^{-1}(aN-c)=-1.$$ \end{exam} Combining this example with Example 3.2, we know that the set of solution of the equation $$(1+\bj)x^2 +(\bi+\bj)x-1+\bi=0$$ is $$\{-1\}\cup \{x=x_1\bi+x_1\bj+\bk,\forall x_1\in \br\}.$$ \subsection{Case $P_{ab}=0$} \begin{thm}\label{thm5.5}Equation II with $P_{ab}=0$ and $I_b+2P_{ac}\neq 0$ is solvable and $$x=(Ta+b)^{-1}(aN-c),$$ where $$N=\frac{I_c}{I_b+2P_{ac}},T= \frac{-2P_{bc}}{I_b+2P_{ac}}.$$ \end{thm} \begin{proof} Since $\left\langle 2x_0a+b,2x_0a+b \right\rangle=4x_0P_{ab}+I_b\neq 0$ and $P_{ab}=0$, we have $I_b\neq 0$. If $P_{ab}=0$ then by (\ref{1ieq1}) and (\ref{1ieq2}), $(T,N)$ satisfies the real system \begin{eqnarray} N(I_b+2P_{ac})=I_c,\label{ieq1}\\ (2P_{ac}+I_b)T=-2P_{bc}.\label{ieq2} \end{eqnarray} If $I_b+2P_{ac}\neq 0$ then $$N=\frac{I_c}{I_b+2P_{ac}},T= \frac{-2P_{bc}}{I_b+2P_{ac}}.$$ So the corresponding solution is $$x=(Ta+b)^{-1}(aN-c).$$ \end{proof} \begin{exam}\label{exam5.8} Consider the quadratic equation $(1+\bj)x^2 +(2\bi+\bk)x+1+\bi+2\bj+\bk= 0$. That is, $a=1+\bj,b=2\bi+\bk$ and $c=1+\bi+2\bj+\bk$. $P_{ab}=0, I_b+2P_{ac}=1$. In this case $T=-2, N=-3$ and $$x=(Ta+b)^{-1}(aN-c)=-1+\frac{17}{3}\bi+\frac{1}{3}\bj+6\bk.$$ \end{exam} To treat the case of $I_b+2P_{ac}=0$, we need the following proposition. 
\begin{pro}\label{prop5.3} For the coefficients $a,b,c$ in Equation II, we assume that $$P_{ab}=0,I_a=0,I_c=0,P_{bc}=0,I_b+2P_{ac}=0, I_b\neq 0.$$ Then we have \begin{equation}\label{formu1}\frac{(a_2b_1+b_3)^2+(b_2-a_3b_1)^2}{\delta^2}=1,\end{equation} \begin{equation}\label{formu2}\frac{a_3(b_2-a_3b_1)-a_2(a_2b_1+b_3)}{\delta}=-1,\end{equation} \begin{equation}\label{formu3}\frac{a_3(a_2b_1+b_3)+a_2(b_2-a_3b_1)}{\delta}=0,\end{equation} \begin{equation}\label{formu4}\frac{2t_2(a_2b_1+b_3)+2t_1(b_2-a_3b_1)}{\delta^2} +\frac{b_3(b_2-a_3b_1)-b_2(a_2b_1+b_3)+2a_3t_1-2a_2t_2}{\delta}=0,\end{equation} \begin{equation}\label{formu5}\frac{2t_2(a_3b_1-b_2)+2t_1(a_2b_1+b_3)}{\delta^2} +\frac{(a_2b_1+b_3)b_3-b_2(a_3b_1-b_2)}{\delta}-b_1=0.\end{equation} \end{pro} \begin{proof} By $a_2^2+a_3^2=1$ and $a_2b_2+a_3b_3=0$, we can easily verify Eqs.(\ref{formu1})-(\ref{formu3}). Noting that $b_3(b_2-a_3b_1)-b_2(a_2b_1+b_3)=-b_1(a_2b_2+a_3b_3)=0$, $a_2b_1+b_3-a_2\delta=a_3(a_2b_2+a_3b_3)=0$ and $b_2-a_3b_1+a_3\delta=a_2(a_2b_2+a_3b_3)=0$, we have $$2t_2(a_2b_1+b_3)+2t_1(b_2-a_3b_1)+(2a_3t_1-2a_2t_2)\delta=2(a_2b_1+b_3-a_2\delta)t_2+2(b_2-a_3b_1+a_3\delta)t_1=0.$$ This proves Eq.(\ref{formu4}). It is obvious that $$\frac{(a_2b_1+b_3)b_3-b_2(a_3b_1-b_2)}{\delta}-b_1=\frac{b_2^2+b_3^2-b_1^2}{\delta}=\frac{-I_b}{\delta}.$$ By $a_2^2+a_3^2=1$ and $a_2b_2+a_3b_3=0$, we have $$b_3t_1-b_2t_2+P_{ac}(a_2b_3-a_3b_2)=0.$$ Noting $a_2t_1+a_3t_2=-P_{ac}$ and $-I_b=2P_{ac}$, we have $$t_2(a_3b_1-b_2)+t_1(a_2b_1+b_3)+P_{ac}\delta=(a_3t_2+a_2t_1)b_1+b_3t_1-b_2t_2+P_{ac}(a_2b_3-a_3b_2+b_1)=0.$$ This proves Eq.(\ref{formu5}). \end{proof} \begin{thm}\label{thm5.6}Consider Equation II with $P_{ab}=0$ and $I_b+2P_{ac}=0$. Let \begin{equation}\label{condpab02} F=t_1^2+t_2^2+(b_3t_1-b_2t_2)\delta+c_0\delta^2.\end{equation} Equation II is solvable if and only if $F=0$.
If $F=0$ then Equation II has solutions $$x=x_0+x_1\bi+x_2\bj+x_3\bk,\forall x_0,x_1\in \br,$$ where \begin{eqnarray} \label{sx2e} x_2&=&-\frac{t_2}{\delta}-\frac{(a_2b_1+b_3)}{\delta}x_0-\frac{(a_3b_1-b_2)}{\delta}x_1,\\ \label{sx3e}x_3&=&\frac{t_1}{\delta}+\frac{(b_2-a_3b_1)}{\delta}x_0+\frac{(a_2b_1+b_3)}{\delta}x_1. \end{eqnarray} \end{thm} \begin{proof} Suppose that there is a solution $x\in SI$. By Eq.(\ref{ieq1}) and Eq.(\ref{ieq2}), if $I_b+2P_{ac}=0$ then $$I_c=0,P_{bc}=0,I_b\neq 0.$$ In this special case, although $2x_0a+b\in \bh_s-Z(\bh_s)$, Eq.(\ref{ieq1}) and Eq.(\ref{ieq2}) provide no information about $N$ and $T$. So we return to the original equation. By Proposition \ref{prop5.1}, under the condition $P_{ab}=0$ and $I_a=0$, we have \begin{eqnarray} (b_2-a_3b_1)x_0+(a_2b_1+b_3)x_1-\delta x_3+t_1=0,\label{6new3}\\ (a_2b_1+b_3)x_0+(a_3b_1-b_2)x_1+\delta x_2+t_2=0.\label{6new4} \end{eqnarray} Since $P_{ab}=0,a_2^2+a_3^2=1$ and $I_b\neq 0$, we obtain $$b_1^2-(a_3b_2-a_2b_3)^2=b_1^2-b_2^2-b_3^2+(a_2b_2+a_3b_3)^2=I_b\neq 0.$$ This means $\delta=a_2b_3-a_3b_2+b_1\neq 0$. So we have \begin{eqnarray*} x_2=-\frac{(a_2b_1+b_3)}{\delta}x_0-\frac{(a_3b_1-b_2)}{\delta}x_1-\frac{t_2}{\delta},\\ x_3=\frac{(b_2-a_3b_1)}{\delta}x_0+\frac{(a_2b_1+b_3)}{\delta}x_1+\frac{t_1}{\delta}. \end{eqnarray*} Substituting the above two formulas of $x_2$ and $x_3$ in Eq. 
(8), that is, $$x_0^2-x_1^2+x_2^2+x_3^2+2a_2x_0x_2+2a_3x_0x_3-b_1x_1+b_2x_2+b_3x_3+c_0=0,$$ we obtain \begin{eqnarray*} &&\Big[1+\frac{(a_2b_1+b_3)^2+(b_2-a_3b_1)^2}{\delta^2}+2\frac{a_3(b_2-a_3b_1)-a_2(a_2b_1+b_3)}{\delta}\Big]x_0^2\\ &&+\Big[\frac{(a_2b_1+b_3)^2+(b_2-a_3b_1)^2}{\delta^2}-1\Big]x_1^2+2\frac{a_3(a_2b_1+b_3)+a_2(b_2-a_3b_1)}{\delta}x_0x_1\\ &&+\Big[\frac{2t_2(a_2b_1+b_3)+2t_1(b_2-a_3b_1)}{\delta^2}+\frac{b_3(b_2-a_3b_1)-b_2(a_2b_1+b_3)+2a_3t_1-2a_2t_2}{\delta}\Big]x_0\\ &&+\Big[\frac{2t_2(a_3b_1-b_2)+2t_1(a_2b_1+b_3)}{\delta^2}+\frac{(a_2b_1+b_3)b_3-b_2(a_3b_1-b_2)}{\delta}-b_1\Big]x_1\\ &&+\frac{t_1^2+t_2^2}{\delta^2}+\frac{b_3t_1-b_2t_2}{\delta}+c_0=0.\end{eqnarray*} By Proposition \ref{prop5.3}, if $F=0$ then the above equation is an identity. Thus Equation II has solutions $$x=x_0+x_1\bi+x_2\bj+x_3\bk,\forall x_0,x_1\in \br,$$ where $x_2$ and $x_3$ are given by (\ref{sx2e}) and (\ref{sx3e}). \end{proof} \begin{exam}\label{exam5.9} Consider the quadratic equation $(1+\bj)x^2 +(2\bi+\bk)x-\frac{3}{4}+\frac{3}{4}\bj= 0$. That is, $a=1+\bj,b=2\bi+\bk$ and $c=-\frac{3}{4}+\frac{3}{4}\bj$. It is obvious that $P_{ab}=0,I_c=0,P_{bc}=0,I_b+2P_{ac}=0,I_b\neq 0$. Then $\delta=3,t_1=\frac{3}{2},t_2=0, F=0$ and $$x=x_0+x_1\bi-x_0\bj+(x_1+\frac{1}{2})\bk,\forall x_0,x_1\in \br.$$ \end{exam} \section{Verification of our examples by companion polynomial approach} We mention that Proposition \ref{pro1.1} still holds. We restate it as the following lemma. \begin{lem}(cf. \cite[Theorem 3.8]{Irene}) If $q\in Z(f)$ with $p(z)$ given by (\ref{eqcons}), then $\Psi_{[q]}(x)$ is a divisor of $c(z)$ given by (\ref{czpoly}) over the complex field. That is, $$C(x)=Q(x)\Psi_{[q]}(x), Q(x)\in \bc[x].$$ \end{lem} \begin{proof} If $q\in Z(f)$ then by \cite[Section 3.4]{sch20} there exists $h(x)\in \bh_s[x]$ such that $f(x)=h(x)(x-q)$.
Hence $C(x)=f(x)\overline{f(x)}=h(x)(x-q)(x-\bar{q})\overline{h(x)}=h(x)\overline{h(x)}\Psi_{[q]}(x).$ Letting $Q(x)=h(x)\overline{h(x)}$ completes the proof. \end{proof} \begin{lem}\label{lemge} Let $a,d\in \bh_s$. Then the equation $ax=d$ is solvable if and only if $aa^+d=d$, in which case all solutions are given by $$x=a^+d+(1-a^+a)y,\quad \forall y=y_0+y_1\bi+y_2\bj+y_3\bk\in \bh_s \mbox{ with } y_i\in \br.$$ \end{lem} \begin{proof}If $a$ is invertible, then $1-a^+a=0$ and $x=a^{-1}d$. It is obvious for the case $a=0$. The case of $a$ being noninvertible is the same as Lemma \ref{lemmoor}. \end{proof} Let \begin{equation}\label{sab}S(a,d)=\{x\in \bh_s:ax=d\}.\end{equation} \begin{thm}\label{thmcom} Suppose that the companion polynomial of Equation II satisfies \begin{equation}\label{czpoly1}C(x)=2P_{ab}x^3+(2P_{ac}+I_b)x^2+2P_{bc}x+I_c\not\equiv 0.\end{equation} Let $\Psi_{[q]}(x)=x^2-Tx+N$ be a divisor of $C(x)$. Then the set of solutions of Equation II is $$Z(f)=\bigcup_{[q]} \{S(Ta+b,aN-c)\cap [q]\}.$$ \end{thm} \begin{proof} Let $q\in Z(f)$ and $T=2\Re(q),N=I_q=q\bar{q}$. Then we have $$(Ta+b)q=aN-c.$$ Thus $q\in S(Ta+b,aN-c)$. By Lemma \ref{lemge}, we get the result. \end{proof} By computation, we know that the companion polynomials of Equation II in Examples 3.4, 3.5, 3.6 and 4.3 are identically zero, $C(x)\equiv 0$. We will apply Theorem \ref{thmcom} to our Examples 3.1, 3.2, 3.3 and 4.2. In these examples, we have checked that for each pair $(T,N)$ the equations $(Ta+b)x=aN-c$ are solvable. We present our verification procedure as follows: \begin{itemize} \item[(1)] In Example 3.1, we have $C(x)=-4(x+\frac{1}{2})^3$, one pair $(T, N)=(-1,\frac{1}{4})$ and $$Ta+b=-1+\bi+\bj+\bk\in Z(\bh_s),\, Na-c=\frac{1}{2}-\frac{5}{2}\bi-\frac{1}{2}\bj-\frac{5}{2}\bk.$$ By Lemma \ref{lemge}, we have $$S(Ta+b,Na-c)=\{-\frac{3}{4}+\frac{1}{2}(y_0+y_3)+[\frac{1}{2}+\frac{1}{2}(y_1+y_2)]\bi+[-\frac{1}{2}+\frac{1}{2}(y_1+y_2)]\bj+[\frac{3}{4}+\frac{1}{2}(y_0+y_3)]\bk\}.
$$Hence $$Z(f)=S(Ta+b,Na-c)\cap \{x\in \bh_s: \Re(x)=\frac{-1}{2}, I_x=\frac{1}{4}\}=\{-\frac{1}{2}+\bi+\bk\}.$$ \item[(2)]In Example 3.2 (also in Example 4.1), we have $C(x)=-2(x-1)(x+1)^2$, two pairs $$(T, N)=(-2,1),\,\, (T, N)=(0,-1).$$ For the first pair, we have $$Ta+b=-2+\bi-\bj\in \bh_s- Z(\bh_s),\, Na-c=2-\bi+\bj.$$ By Lemma \ref{lemge}, we have $$S(Ta+b,Na-c)=\{-1\} $$ and $$S(Ta+b,Na-c)\cap \{x\in \bh_s: \Re(x)=-1, I_x=1\}=\{-1\}\subset Z(f).$$ For the second pair, we have $$Ta+b=\bi+\bj\in Z(\bh_s),\,Na-c=-\bi-\bj.$$ By Lemma \ref{lemge}, we have $$S(Ta+b,Na-c)=\{-\frac{1}{2}+\frac{1}{2}(y_0+y_3)+\frac{1}{2}(y_1+y_2)\bi+\frac{1}{2}(y_1+y_2)\bj+[\frac{1}{2}+\frac{1}{2}(y_0+y_3)]\bk\}. $$ Hence $$S(Ta+b,Na-c)\cap \{x\in \bh_s: \Re(x)=0, I_x=-1\}=\{x_1\bi+x_1\bj+\bk,\forall x_1\in \br\}.$$ Therefore we have $$Z(f)=\{-1\}\cup \{x=x_1\bi+x_1\bj+\bk,\forall x_1\in \br\}.$$ \item[(3)] In Example 3.3, we have $C(x)=2(x^2-x+1)$, one pair $(T, N)=(1,1)$ and $$Ta+b=1+\bi+\bj+\bk\in Z(\bh_s),\,Na-c=\bi+\bj.$$ By Lemma \ref{lemge}, we have $$S(Ta+b,Na-c)=\{\frac{1}{4}+\frac{1}{2}(y_0-y_2)+[\frac{1}{4}+\frac{1}{2}(y_1+y_3)]\bi+[\frac{1}{4}+\frac{1}{2}(y_2-y_0)]\bj+[-\frac{1}{4}+\frac{1}{2}(y_1+y_3)]\bk\}. $$ Hence $$Z(f)=S(Ta+b,Na-c)\cap \{x\in \bh_s: \Re(x)=\frac{1}{2}, I_x=1\}=\{\frac{1}{2}+\bi+\frac{1}{2}\bk \}.$$ \item[(4)] In Example 4.2, we have $C(x)=(x-1)(x+3)$, one pair $(T, N)=(-2,-3)$ and $$Ta+b=-2+2\bi-2\bj+\bk\in \bh_s-Z(\bh_s),\, Na-c=-4-\bi-5\bj-\bk.$$ By Lemma \ref{lemge}, we have $$S(Ta+b,Na-c)=\{-1+\frac{17}{3}\bi+\frac{1}{3}\bj+6\bk\}. $$ Hence $$Z(f)=\{-1+\frac{17}{3}\bi+\frac{1}{3}\bj+6\bk\}.$$ \end{itemize} \vspace{2mm} {\bf Acknowledgments.}\quad This work is supported by the Natural Science Foundation of China (11871379) and the Key Project of the National Natural Science Foundation of Guangdong Province Universities (2019KZDXM025). \begin{thebibliography}{99} \bibitem{cao} W. Cao, Z.
Chang, Moore-Penrose inverse of split quaternion, Linear and Multilinear Algebra, 2022, 70(9): 1631-1647. \bibitem{caoaxiom} W. Cao, Quadratic equation in split quaternions, Axioms, 2022, 11(5): 188. \bibitem{Ireneu} M. Falcao, F. Miranda, R. Severino, M. Soares, Polynomials over quaternions and coquaternions: a unified approach, Lecture Notes in Comput. Sci., 2017, 10405: 379-393. \bibitem{Irene} M. Falcao, F. Miranda, R. Severino, M. Soares, The number of zeros of unilateral polynomials over coquaternions revisited, arXiv:1703.10986. \bibitem{Opfer10} D. Janovska, G. Opfer, A note on the computation of all zeros of simple quaternionic polynomials, SIAM J. Numer. Anal., 2010, 48: 244-256. \bibitem{Opfer14} D. Janovska, G. Opfer, Zeros and singular points for one-sided coquaternionic polynomials with an extension to other $R^4$ algebras, Electron. Trans. Numer. Anal., 2014, 41: 133-158. \bibitem{Opfer18} D. Janovska, G. Opfer, The relation between the companion matrix and the companion polynomial in $R^4$ algebras, Adv. Appl. Clifford Algebras, 2018, 28: 76. \bibitem{niven} I. Niven, Equations in quaternions, American Math. Monthly, 1941, 48: 654-661. \bibitem{Opfer17} G. Opfer, Niven's algorithm applied to the roots of the companion polynomial over $R^4$ algebras, Adv. Appl. Clifford Algebras, 2017, 27: 2659-2675. \bibitem{sch20} F. Scharler, J. Siegele, P. Schrocker, Quadratic split quaternion polynomials: factorization and geometry, Adv. Appl. Clifford Algebras, 2020, 30: 11. \bibitem{sero01} R. Serodio, E. Pereira, J. Vitoria, Computing the zeros of quaternionic polynomials, Comput. Math. Appl., 2001, 42: 1229-1237. \end{thebibliography} \end{document}
2205.03577v1
http://arxiv.org/abs/2205.03577v1
Bounds on the Total Coefficient Size of Nullstellensatz Proofs of the Pigeonhole Principle and the Ordering Principle
\documentclass[12pt,letterpaper]{article} \usepackage{amsmath,amssymb,amsthm,amsfonts} \usepackage{accents} \usepackage{caption} \usepackage{comment} \usepackage[roman,full]{complexity} \usepackage{enumerate} \usepackage{fancyhdr} \usepackage{float} \usepackage{fullpage} \usepackage{graphicx} \usepackage{hyperref} \usepackage{parskip} \usepackage{todonotes} \usepackage[square,numbers]{natbib} \usepackage{dsfont} \renewcommand{\E}{\mathbb{E}} \newcommand{\F}{\mathbb{F}} \renewcommand{\R}{\mathbb{R}} \newcommand{\1}{\mathds{1}} \theoremstyle{definition} \newtheorem{theorem}{Theorem} \newtheorem{proposition}{Proposition} \newtheorem{corollary}{Corollary} \newtheorem{lemma}{Lemma} \newtheorem{definition}{Definition} \newtheorem{remark}{Remark} \setlength{\marginparwidth}{2cm} \setlength{\parindent}{0in} \setlength{\parskip}{12pt} \allowdisplaybreaks \title{Bounds on the Total Coefficient Size of Nullstellensatz Proofs of the Pigeonhole Principle and the Ordering Principle} \author{Aaron Potechin and Aaron Zhang} \date{} \begin{document} \maketitle \abstract{In this paper, we investigate the total coefficient size of Nullstellensatz proofs. We show that Nullstellensatz proofs of the pigeonhole principle on $n$ pigeons require total coefficient size $2^{\Omega(n)}$ and that there exist Nullstellensatz proofs of the ordering principle on $n$ elements with total coefficient size $2^n - n$.}\\ \\ \textbf{Acknowledgement:} This research was supported by NSF grant CCF-2008920 and NDSEG fellowship F-9422254702. \section{Introduction} Given a system $\{p_i = 0: i \in [m]\}$ of polynomial equations over an algebraically closed field, a Nullstellensatz proof of infeasibility is an equality of the form $1 = \sum_{i=1}^{m}{p_i{q_i}}$ for some polynomials $\{q_i: i \in [m]\}$. Hilbert's Nullstellensatz\footnote{Actually, this is the weak form of Hilbert's Nullstellensatz.
Hilbert's Nullstellensatz actually says that given polynomials $p_1,\ldots,p_m$ and another polynomial $p$, if $p(x) = 0$ for all $x$ such that $p_i(x) = 0$ for each $i \in [m]$ then there exists a natural number $r$ such that $p^r$ is in the ideal generated by $p_1,\ldots,p_m$. } says that the Nullstellensatz proof system is complete, i.e. a system of polynomial equations has no solutions over an algebraically closed field if and only if there is a Nullstellensatz proof of infeasibility. However, Hilbert's Nullstellensatz does not give any bounds on the degree or size needed for Nullstellensatz proofs. The degree of Nullstellensatz proofs has been extensively studied. Grete Hermann showed a doubly exponential degree upper bound for the ideal membership problem \cite{gretehermann} which implies the same upper bound for Nullstellensatz proofs. Several decades later, W. Dale Brownawell gave an exponential upper bound on the degree required for Nullstellensatz proofs over algebraically closed fields of characteristic zero \cite{10.2307/1971361}. A year later, J{\'a}nos Koll{\'a}r showed that this result holds for all algebraically closed fields \cite{kollar1988sharp}. For specific problems, the degree of Nullstellensatz proofs can be analyzed using designs \cite{DBLP:conf/dimacs/Buss96}. Using designs, Nullstellensatz degree lower bounds have been shown for many problems including the pigeonhole principle, the induction principle, the housesitting principle, and the mod $m$ matching principles \cite{365714, 10.1006/jcss.1998.1575, 10.1007/BF01294258, 507685, 10.1145/237814.237860}. More recent work showed that there is a close connection between Nullstellensatz degree and reversible pebbling games \cite{derezende_et_al:LIPIcs:2019:10840} and that lower bounds on Nullstellensatz degree can be lifted to lower bounds on monotone span programs, monotone comparator circuits, and monotone switching networks \cite{10.1145/3188745.3188914}.
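A Nullstellensatz certificate of the kind defined above can be checked mechanically by expanding $\sum_{i}{p_i q_i}$ and comparing it to the constant $1$. The following Python sketch illustrates this on a hypothetical toy system of our own (not one of the principles studied in this paper): the infeasible system $\{x = 0,\ 1 - x = 0\}$ has the certificate $x \cdot 1 + (1 - x) \cdot 1 = 1$, whose multipliers have total coefficient size $1 + 1 = 2$.

```python
# Minimal sketch (toy example, not from the paper): symbolically verify a
# Nullstellensatz certificate 1 = sum_i p_i * q_i.  A polynomial is a dict
# mapping a monomial -- a sorted tuple of variable names, repeated for
# powers -- to a rational coefficient.
from fractions import Fraction

def poly_mul(p, q):
    """Product of two polynomials in the dict representation."""
    out = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = tuple(sorted(m1 + m2))
            out[m] = out.get(m, Fraction(0)) + c1 * c2
    return {m: c for m, c in out.items() if c != 0}

def poly_add(p, q):
    """Sum of two polynomials, dropping cancelled monomials."""
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, Fraction(0)) + c
    return {m: c for m, c in out.items() if c != 0}

def total_coefficient_size(p):
    """T(p): the sum of the magnitudes of the coefficients of p."""
    return sum(abs(c) for c in p.values())

def check_certificate(axioms, multipliers):
    """Return (whether sum_i p_i*q_i == 1, total coefficient size of the q_i)."""
    acc = {}
    for p, q in zip(axioms, multipliers):
        acc = poly_add(acc, poly_mul(p, q))
    is_one = acc == {(): Fraction(1)}
    return is_one, sum(total_coefficient_size(q) for q in multipliers)

one = Fraction(1)
# Infeasible toy system: x = 0 and 1 - x = 0.
axioms = [{('x',): one},               # p_1 = x
          {(): one, ('x',): -one}]     # p_2 = 1 - x
qs = [{(): one}, {(): one}]            # q_1 = q_2 = 1, since x + (1 - x) = 1
valid, tcs = check_certificate(axioms, qs)
print(valid, tcs)                      # True 2
```

This sketch works over the rationals and does not model the conventions introduced later in the paper (negated literals in monomials and free Boolean axioms); it only illustrates that validity and total coefficient size of a proposed certificate are straightforward to compute.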
For analyzing the size of Nullstellensatz proofs, a powerful technique is the size-degree tradeoff shown by Russell Impagliazzo, Pavel Pudl\'{a}k, and Ji\v{r}\'{\i} Sgall for polynomial calculus \cite{10.1007/s000370050024}. This tradeoff says that if there is a size $S$ polynomial calculus proof then there is a polynomial calculus proof of degree $O(\sqrt{n\log{S}})$. Thus, if we have an $\Omega(n)$ degree lower bound for polynomial calculus, this implies a $2^{\Omega(n)}$ size lower bound for polynomial calculus (which also holds for Nullstellensatz as Nullstellensatz is a weaker proof system). However, the size-degree tradeoff does not give any size lower bound when the degree is $O(\sqrt{n})$ and we know of very few other techniques for analyzing the size of Nullstellensatz proofs. In this paper, we instead investigate the total coefficient size of Nullstellensatz proofs. We have two reasons for this. First, total coefficient size is interesting in its own right and, to the best of our knowledge, it has not yet been explored. Second, total coefficient size may give insight into proof size in settings where we cannot apply the size-degree tradeoff and thus do not have good size lower bounds. \begin{remark} Note that Nullstellensatz size lower bounds do not imply total coefficient size lower bounds because we could have a proof with many monomials but a small coefficient on each monomial. Thus, the exponential size lower bounds for the pigeonhole principle from Razborov's $\Omega(n)$ degree lower bound for polynomial calculus \cite{razborov1998lower} and the size-degree tradeoff \cite{10.1007/s000370050024} do not imply total coefficient size lower bounds for the pigeonhole principle. \end{remark} \subsection{Our results} In this paper, we consider two principles, the pigeonhole principle and the ordering principle.
We show an exponential lower bound on the total coefficient size of Nullstellensatz proofs of the pigeonhole principle and we show an exponential upper bound on the total coefficient size of Nullstellensatz proofs of the ordering principle. More precisely, we show the following bounds. \begin{theorem}\label{thm:pigeonholelowerbound} For all $n \geq 2$, any Nullstellensatz proof of the pigeonhole principle with $n$ pigeons and $n-1$ holes has total coefficient size $\Omega\left(n^{\frac{3}{4}}\left(\frac{2}{\sqrt{e}}\right)^{n}\right)$. \end{theorem} \begin{theorem} For all $n \geq 3$, there is a Nullstellensatz proof of the ordering principle on $n$ elements with size and total coefficient size $2^{n} - n$. \end{theorem} After showing these bounds, we discuss total coefficient size for stronger proof systems. We observe that if we consider a stronger proof system which we call resolution-like proofs, our lower bound proof for the pigeonhole principle no longer works. We also observe that even though resolution is a dynamic proof system, the $O(n^3)$ size resolution proof of the ordering principle found by Gunnar St{\aa}lmarck \cite{staalmarck1996short} can be captured by a one line sum of squares proof. \section{Nullstellensatz total coefficient size}\label{preliminaries} We start by defining total coefficient size for Nullstellensatz proofs and describing a linear program for finding the minimum total coefficient size of a Nullstellensatz proof. \begin{definition} Given a polynomial $f$, we define the total coefficient size $T(f)$ of $f$ to be the sum of the magnitudes of the coefficients of $f$. For example, if $f(x,y,z) = 2{x^2}y - 3xyz + 5z^5$ then $T(f) = 2 + 3 + 5 = 10$. \end{definition} \begin{definition} Given a system $\{p_i = 0: i \in [m]\}$ of $m$ polynomial equations, a Nullstellensatz proof of infeasibility is an equality of the form \[ 1 = \sum_{i=1}^{m}{p_i{q_i}} \] for some polynomials $\{q_i: i \in [m]\}$.
We define the total coefficient size of such a Nullstellensatz proof to be $\sum_{i=1}^{m}{T(q_i)}$. \end{definition} The following terminology will be useful. \begin{definition} Given a system $\{p_i = 0: i \in [m]\}$ of polynomial equations, we call each of the equations $p_i = 0$ an axiom. For each axiom $p_i = 0$, we define a weakening of this axiom to be an equation of the form $rp_i = 0$ for some monomial $r$. \end{definition} \begin{remark} We do not include the total coefficient size of $p_i$ in the total coefficient size of the proof as we want to focus on the complexity of the proof as opposed to the complexity of the axioms. That said, in this paper we only consider systems of polynomial equations where each $p_i$ is a monomial, so this choice does not matter. \end{remark} The minimum total coefficient size of a Nullstellensatz proof can be found using the following linear program. In general, this linear program will have infinite size, but as we discuss below, it has finite size when the variables are Boolean. \begin{enumerate} \item[] Primal: Minimize $\sum_{i=1}^{m}{T(q_i)}$ subject to $\sum_{i=1}^{m}{{p_i}{q_i}} = 1$. More precisely, writing $q_i = \sum_{\text{monomials } r}{c_{ir}r}$, we minimize $\sum_{i=1}^{m}{\sum_{\text{monomials } r}{b_{ir}}}$ subject to the constraints that \begin{enumerate} \item[1.] $b_{ir} \geq -c_{ir}$ and $b_{ir} \geq c_{ir}$ for all $i \in [m]$ and monomials $r$. \item[2.] $\sum_{i=1}^{m}{\sum_{\text{monomials } r}{c_{ir}{r}p_i}} = 1$. \end{enumerate} \item[] Dual: Maximize $D(1)$ subject to the constraints that \begin{enumerate} \item[1.] $D$ is a linear map from polynomials to $\mathbb{R}$. \item[2.] For each $i \in [m]$ and each monomial $r$, $|D(rp_i)| \leq 1$. \end{enumerate} \end{enumerate} Weak duality, which is what we need for our lower bound on the pigeonhole principle, can be seen directly as follows.
\begin{proposition} If $D$ is a linear map from polynomials to $\mathbb{R}$ such that $|D(rp_i)| \leq 1$ for all $i \in [m]$ and all monomials $r$ then any Nullstellensatz proof of infeasibility has total coefficient size at least $D(1)$. \end{proposition} \begin{proof} Given a Nullstellensatz proof $1 = \sum_{i=1}^{m}{{p_i}{q_i}}$, applying $D$ to it gives \[ D(1) = \sum_{i=1}^{m}{D({p_i}{q_i})} \leq \sum_{i=1}^{m}{T(q_i)} \] where the inequality holds because, writing $q_i = \sum_{\text{monomials } r}{c_{ir}r}$, linearity gives $D({p_i}{q_i}) = \sum_{\text{monomials } r}{c_{ir}D(rp_i)} \leq \sum_{\text{monomials } r}{|c_{ir}|} = T(q_i)$. \end{proof} \subsection{Special case: Boolean variables} In this paper, we only consider problems where all of our variables are Boolean, so we make specific definitions for this case. In particular, we allow monomials to contain terms of the form $(1-x_i)$ as well as $x_i$ and we allow the Boolean axioms $x_i^2 = x_i$ to be used for free. We also observe that we can define a linear map $D$ from polynomials to $\mathbb{R}$ by assigning a value $D(x)$ to each input $x$. \begin{definition} Given Boolean variables $x_1,\ldots,x_N$ where we have that $x_i = 1$ if $x_i$ is true and $x_i = 0$ if $x_i$ is false, we define a monomial to be a product of the form $\left(\prod_{i \in S}{x_i}\right)\left(\prod_{j \in T}{(1 - x_j)}\right)$ for some disjoint subsets $S,T$ of $[N]$. \end{definition} \begin{definition} Given a Boolean variable $x$, we use $\bar{x}$ as shorthand for the negation $1-x$ of $x$. \end{definition} \begin{definition} Given a set of polynomial equations $\{p_i = 0: i \in [m]\}$ together with Boolean axioms $\{x_j^2 - x_j = 0: j \in [N]\}$, we define the total coefficient size of a Nullstellensatz proof \[ 1 = \sum_{i = 1}^{m}{{p_i}{q_i}} + \sum_{j = 1}^{N}{{g_j}(x_j^2 - x_j)} \] to be $\sum_{i=1}^{m}{T(q_i)}$. In other words, we allow the Boolean axioms $\{x_j^2 - x_j = 0: j \in [N]\}$ to be used for free. \end{definition} \begin{remark} For the problems we consider in this paper, all of our non-Boolean axioms are monomials, so there is actually no need to use the Boolean axioms.
\end{remark} \begin{remark} We allow monomials to contain terms of the form $(1-x_i)$ and allow the Boolean axioms to be used for free in order to avoid spurious lower bounds coming from difficulties in manipulating the Boolean variables rather than handling the non-Boolean axioms. In particular, with these adjustments, when the non-Boolean axioms are monomials, the minimum total coefficient size of a Nullstellensatz proof is upper bounded by the minimum size of a tree-resolution proof. \end{remark} Since the Boolean axioms $\{x_j^2 - x_j = 0: j \in [N]\}$ can be used for free, to specify a linear map $D$ from polynomials to $\mathbb{R}$, it is necessary and sufficient to specify the value of $D$ on each input $x \in \{0,1\}^{N}$. \begin{definition} Given a function $D: \{0,1\}^{N} \to \mathbb{R}$, we can view $D$ as a linear map from polynomials to $\mathbb{R}$ by taking $D(f) = \sum_{x \in \{0,1\}^{N}}{f(x)D(x)}$ \end{definition} \section{Total coefficient size lower bound for the pigeonhole principle} In this section, we prove Theorem \ref{thm:pigeonholelowerbound}, our total coefficient size lower bound on the pigeonhole principle. We start by formally defining the pigeonhole principle. \begin{definition}[pigeonhole principle ($\mathrm{PHP}_n$)] Intuitively, the pigeonhole principle says that if $n$ pigeons are assigned to $n - 1$ holes, then some hole must have more than one pigeon. Formally, for $n \ge 1$, we define $\mathrm{PHP}_n$ to be the statement that the following system of axioms is infeasible: \begin{itemize} \item For each $i \in [n]$ and $j \in [n-1]$, we have a variable $x_{i, j}$. $x_{i, j} = 1$ represents pigeon $i$ being in hole $j$, and $x_{i, j} = 0$ represents pigeon $i$ not being in hole $j$. \item For each $i \in [n]$, we have the axiom $\prod_{j = 1}^{n - 1}{\bar{x}_{i, j}} = 0$ representing the constraint that each pigeon must be in at least one hole (recall that $\bar{x}_{i,j} = 1 - x_{i,j}$). 
\item For each pair of distinct pigeons $i_1, i_2 \in [n]$ and each hole $j \in [n-1]$, we have the axiom $x_{i_1, j}x_{i_2, j} = 0$ representing the constraint that pigeons $i_1$ and $i_2$ cannot both be in hole $j$. \end{itemize} \end{definition} We prove our lower bound on the total coefficient size complexity of $\text{PHP}_n$ by constructing and analyzing a dual solution $D$. In our dual solution, the only assignments $x$ for which $D(x) \neq 0$ are those where each pigeon goes to exactly one hole (i.e., for each pigeon $i$, exactly one of the $x_{i, j}$ is 1). Note that there are $(n - 1)^n$ such assignments. In the rest of this section, when we refer to assignments or write a summation or expectation over assignments $x$, we refer specifically to these $(n - 1)^n$ assignments. Recall that the dual constraints are \[ D(W) = \sum_{\text{assignments } x}{D(x)W(x)} \in [-1,1] \] for all weakenings $W$ of an axiom. Note that since $D(x)$ is only nonzero for assignments $x$ where each pigeon goes to exactly one hole, for any weakening $W$ of an axiom of the form $\prod_{j = 1}^{n - 1}{\bar{x}_{i, j}} = 0$, $D(W) = 0$. Thus, it is sufficient to consider weakenings $W$ of the axioms $x_{i_1, j}x_{i_2, j} = 0$. Further note that if $|D(W)| > 1$ for some weakening $W$ then we can rescale $D$ by dividing by $\max_{W}{|D(W)|}$. Thus, we can rewrite the objective value of the dual program as $\frac{D(1)}{\max_{W}{|D(W)|}}$. Letting $\E$ denote the expectation over a uniform assignment where each pigeon goes to exactly one hole, $\frac{D(1)}{\max_{W}{|D(W)|}} = \frac{\E(D)}{\max_{W}{|\E(DW)|}}$ so it is sufficient to construct $D$ and analyze $\E(D)$ and $\max_{W}{|\E(DW)|}$. Before constructing and analyzing $D$, we provide some intuition for our construction. The idea is that, if we consider a subset of $n - 1$ pigeons, then $D$ should behave like the indicator function for whether those $n - 1$ pigeons all go to different holes. 
More concretely, for any polynomial $p$ which does not depend on some pigeon $i$ (i.e. $p$ does not contain $x_{i,j}$ or $\bar{x}_{i,j}$ for any $j \in [n-1]$), \[ \E(Dp) = \frac{(n-1)!}{(n-1)^{n-1}}\E(p \mid \text{all pigeons in } [n] \setminus \{i\} \text{ go to different holes}) \] Given this intuition, we now present our construction. Our dual solution $D$ will be a linear combination of the following functions: \begin{definition}[functions $J_S$]\label{J} Let $S \subsetneq [n]$ be a subset of pigeons of size at most $n - 1$. We define the function $J_S$ that maps assignments to $\{0, 1\}$. For an assignment $x$, $J_S(x) = 1$ if all pigeons in $S$ are in different holes according to $x$, and $J_S(x) = 0$ otherwise. $\qed$ \end{definition} Note that if $|S| = 0$ or $|S| = 1$, then $J_S$ is the constant function 1. In general, the expectation of $J_S$ over a uniform assignment is $\E(J_S) = \left(\prod_{k = 1}^{|S|} (n - k)\right) / (n - 1)^{|S|}$.\\ \begin{definition}[dual solution $D$]\label{D} Our dual solution $D$ is: \begin{equation*} D = \sum_{S \subsetneq [n]} c_SJ_S, \end{equation*} where the coefficients $c_S$ are $c_S = \frac{(-1)^{n - 1 - |S|} (n - 1 - |S|)!}{(n - 1)^{n - 1 - |S|}}$. \end{definition} We will lower-bound the dual value $\E(D) / \max_W |\E(DW)|$ by computing $\E(D)$ and then upper-bounding $\max_W |\E(DW)|$. In both calculations, we will use the following key property of $D$, which we introduced in our intuition for the construction: \begin{lemma}\label{dual-intuition} If $p$ is a polynomial which does not depend on pigeon $i$ (i.e. $p$ does not contain any variables of the form $x_{i,j}$ or $\bar{x}_{i, j}$) then $\E(Dp) = \E(J_{[n] \setminus \{i\}}p)$. \end{lemma} \begin{proof} Without loss of generality, suppose $p$ does not contain any variables of the form $x_{1,j}$ or $\bar{x}_{1, j}$. Let $T$ be any subset of pigeons that does not contain pigeon 1 and that has size at most $n - 2$. 
Observe that \[ \E({J_{T \cup \{1\}}}p) = \frac{n - 1 - |T|}{n-1}\E({J_{T}}p) \] because regardless of the locations of the pigeons in $T$, the probability that pigeon $1$ goes to a different hole is $\frac{n - 1 - |T|}{n-1}$ and $p$ does not depend on the location of pigeon $1$. Since \begin{align*} c_{T \cup \{1\}} &= \frac{(-1)^{n - 2 - |T|} (n - 2 - |T|)!}{(n - 1)^{n - 2 - |T|}} \\ &= -\frac{n-1}{n-1-|T|} \cdot \frac{(-1)^{n - 1 - |T|} (n - 1 - |T|)!}{(n - 1)^{n - 1 - |T|}} = -\frac{n-1}{n-1-|T|}c_{T} \end{align*} we have that for all $T \subsetneq \{2, \dots, n\}$, \[ \E(c_{T \cup \{1\}}{J_{T \cup \{1\}}}p) + \E(c_{T}{J_{T}}p) = 0 \] Thus, all terms except for $J_{\{2,3,\ldots,n\}}$ cancel. Since $c_{\{2,3,\ldots,n\}} = 1$, we have that $\E(Dp) = \E(J_{\{2,3,\ldots,n\}}p)$, as needed. \end{proof} The value of $\E(D)$ follows immediately: \begin{corollary}\label{exp-d} \begin{equation*} \E(D) = \frac{(n - 2)!}{(n - 1)^{n - 2}}. \end{equation*} \end{corollary} \begin{proof} Let $p = 1$. By Lemma \ref{dual-intuition}, $\E(D) = \E(J_{\{2, \dots, n\}}) = (n - 2)!/(n - 1)^{n - 2}$. \end{proof} \subsection{Upper bound on $\max_W |\E(DW)|$} We introduce the following notation: \begin{definition}[$H_{W, i}$] Given a weakening $W$, we define a set of holes $H_{W, i} \subseteq [n-1]$ for each pigeon $i \in [n]$ so that $W(x) = 1$ if and only if each pigeon $i \in [n]$ is mapped to one of the holes in $H_{W, i}$. More precisely, \begin{itemize} \item If $W$ contains terms $x_{i, j_1}$ and $x_{i, j_2}$ for distinct holes $j_1, j_2$, then $H_{W, i} = \emptyset$ (i.e. it is impossible that $W(x) = 1$ because pigeon $i$ cannot go to both holes $j_1$ and $j_2$). Similarly, if $W$ contains both $x_{i,j}$ and $\bar{x}_{i,j}$ for some $j$ then $H_{W, i} = \emptyset$ (i.e. it is impossible for pigeon $i$ to both be in hole $j$ and not be in hole $j$). \item If $W$ contains exactly one term of the form $x_{i, j}$, then $H_{W, i} = \{j\}$.
(i.e., for all $x$ such that $W(x) = 1$, pigeon $i$ goes to hole $j$). \item If $W$ contains no terms of the form $x_{i, j}$, then $H_{W, i}$ is the subset of holes $j$ such that $W$ does \textit{not} contain the term $\bar{x}_{i, j}$. (i.e., if $W$ contains the term $\bar{x}_{i, j}$, then for all $x$ such that $W(x) = 1$, pigeon $i$ does not go to hole $j$.) \end{itemize} \end{definition} The key property we will use to bound $\max_W |\E(DW)|$ follows immediately from Lemma \ref{dual-intuition}: \begin{lemma}\label{exp-dw} Let $W$ be a weakening. If there exists some pigeon $i \in [n]$ such that $H_{W, i} = [n-1]$ (i.e., $W$ does not contain any terms of the form $x_{i, j}$ or $\bar{x}_{i, j}$), then $\E(DW) = 0$. \end{lemma} \begin{proof} Without loss of generality, suppose $W$ is a weakening of the axiom $x_{2, 1}x_{3, 1} = 0$ and $H_{W, 1} = [n-1]$. By Lemma \ref{dual-intuition}, $\E(DW) = \E(J_{\{2, \dots, n\}}W)$. However, $\E(J_{\{2, \dots, n\}}W) = 0$ because if $W = 1$ then pigeons 2 and 3 must both go to hole 1. \end{proof} We make the following definition and then state a corollary of Lemma \ref{exp-dw}. \begin{definition}[$W^{\mathrm{flip}}_S$] Let $W$ be a weakening of the axiom $x_{i_1, j}x_{i_2, j} = 0$ for pigeons $i_1, i_2$ and hole $j$. Let $S \subseteq [n] \setminus \{i_1, i_2\}$. We define $W^{\mathrm{flip}}_S$, which is also a weakening of the axiom $x_{i_1, j}x_{i_2, j} = 0$, as follows. \begin{itemize} \item For each pigeon $i_3 \in S$, we define $W^{\mathrm{flip}}_S$ so that $H_{W^{\mathrm{flip}}_S, i_3} = [n-1] \setminus H_{W, i_3}$. \item For each pigeon $i_3 \notin S$, we define $W^{\mathrm{flip}}_S$ so that $H_{W^{\mathrm{flip}}_S, i_3} = H_{W, i_3}$. \end{itemize} (Technically, there may be multiple ways to define $W^{\mathrm{flip}}_S$ to satisfy these properties; we can arbitrarily choose any such definition.)
$\qed$ \end{definition} In other words, $W^{\mathrm{flip}}_S$ is obtained from $W$ by flipping the sets of holes that the pigeons in $S$ can go to in order to make the weakening evaluate to 1. Now we state a corollary of Lemma \ref{exp-dw}:\\ \begin{corollary}\label{flip} Let $W$ be a weakening of the axiom $x_{i_1, j}x_{i_2, j} = 0$ for pigeons $i_1, i_2$ and hole $j$. Let $S \subseteq [n] \setminus \{i_1, i_2\}$. Then \begin{equation*} \E\left(DW^{\mathrm{flip}}_S\right) = (-1)^{|S|} \cdot \E(DW). \end{equation*} \end{corollary} \begin{proof} It suffices to show that for $i_3 \in [n] \setminus \{i_1, i_2\}$, we have $\E\left(DW^{\mathrm{flip}}_{\{i_3\}}\right) = -\E(DW)$. Indeed, $W + W^{\mathrm{flip}}_{\{i_3\}}$ is a weakening satisfying $H_{W + W^{\mathrm{flip}}_{\{i_3\}}, i_3} = [n-1]$. Therefore, by Lemma \ref{exp-dw}, $\E\left(D\left(W + W^{\mathrm{flip}}_{\{i_3\}}\right)\right) = 0$. \end{proof} Using Corollary \ref{flip}, we can bound $\max_W |\E(DW)|$ using Cauchy-Schwarz. We first show an approach that does not give a strong enough bound. We then show how to modify the approach to achieve a better bound. \subsubsection{Unsuccessful approach to upper bound $\max_W |\E(DW)|$}\label{unsuccessful} Consider $\max_W |\E(DW)|$. By Corollary \ref{flip}, it suffices to consider only weakenings $W$ such that, if $W$ is a weakening of the axiom $x_{i_1, j}x_{i_2, j} = 0$, then for all pigeons $i_3 \in [n] \setminus \{i_1, i_2\}$, we have $|H_{W, i_3}| \leq \lfloor (n - 1) / 2 \rfloor$. For any such $W$, we have \begin{align*} \lVert W \rVert &= \sqrt{\E(W^2)}\\ &\le \sqrt{\left(\frac{1}{n - 1}\right)^2\left(\frac{1}{2}\right)^{n - 2}}\\ &= (n - 1)^{-1} \cdot 2^{-(n - 2)/2}. \end{align*} By Cauchy-Schwarz, \begin{align*} |\E(DW)| &\le \lVert D \rVert \lVert W \rVert\\ &\le \lVert D \rVert (n - 1)^{-1}2^{-(n - 2)/2}.
\end{align*} Using the value of $\E(D)$ from Corollary \ref{exp-d}, the dual value $\E(D) / \max_W |\E(DW)|$ is at least: \[ \frac{(n - 2)!}{(n - 1)^{n - 2}} \cdot \frac{(n - 1)2^{(n - 2)/2}}{\lVert D \rVert} = \widetilde{\Theta}\left(\left(\frac{e}{\sqrt{2}}\right)^{-n} \cdot \frac{1}{\lVert D \rVert}\right) \] by Stirling's formula. Thus, in order to achieve an exponential lower bound on the dual value, we would need $1 / \lVert D \rVert \ge \Omega(c^n)$ for some $c > e/\sqrt{2}$. However, this requirement is too strong, as we will show that $1 / \lVert D \rVert = \widetilde{\Theta}\left(\left(\sqrt{e}\right)^n\right)$. Directly applying Cauchy-Schwarz results in too loose a bound on $\max_W |\E(DW)|$, so we now modify our approach. \subsubsection{Successful approach to upper bound $\max_W |\E(DW)|$} \begin{definition}[$W^{\{-1, 0, 1\}}$] Let $W$ be a weakening of the axiom $x_{i_1, j}x_{i_2, j} = 0$ for pigeons $i_1, i_2$ and hole $j$. We define the function $W^{\{-1, 0, 1\}}$ that maps assignments to $\{-1, 0, 1\}$. For an assignment $x$, \begin{itemize} \item If pigeons $i_1$ and $i_2$ do not both go to hole $j$, then $W^{\{-1, 0, 1\}}(x) = 0$. \item Otherwise, let $V(x) = |\{i_3 \in [n] \setminus \{i_1, i_2\} : \text{pigeon } i_3 \text{ does not go to a hole in } H_{W, i_3}\}|$. Then $W^{\{-1, 0, 1\}}(x) = (-1)^{V(x)}$. \end{itemize} \end{definition} Note that $W^{\{-1, 0, 1\}}$ is a linear combination of the $W^{\mathrm{flip}}_S$:\\ \begin{lemma}\label{exp-dw-plus-minus} Let $W$ be a weakening of the axiom $x_{i_1, j}x_{i_2, j} = 0$ for pigeons $i_1, i_2$ and hole $j$. We have: \begin{equation*} W^{\{-1, 0, 1\}} = \sum_{S \subseteq [n] \setminus \{i_1, i_2\}} (-1)^{|S|} \cdot W^{\mathrm{flip}}_S. \end{equation*} It follows that: \begin{equation*} \E\left(DW^{\{-1, 0, 1\}}\right) = 2^{n - 2} \cdot \E(DW). \end{equation*} \end{lemma} \begin{proof}To prove the first equation, consider any assignment $x$.
If pigeons $i_1$ and $i_2$ do not both go to hole $j$, then both $W^{\{-1, 0, 1\}}$ and all the $W^{\mathrm{flip}}_S$ evaluate to 0 on $x$. Otherwise, exactly one of the $W^{\mathrm{flip}}_S(x)$ equals 1, and for this choice of $S$, we have $W^{\{-1, 0, 1\}}(x) = (-1)^{|S|}$. The second equation follows because: \begin{align*} \E\left(DW^{\{-1, 0, 1\}}\right) &= \sum_{S \subseteq [n] \setminus \{i_1, i_2\}} (-1)^{|S|} \cdot \E\left(DW^{\mathrm{flip}}_S\right)\\ &= \sum_{S \subseteq [n] \setminus \{i_1, i_2\}} (-1)^{|S|}(-1)^{|S|} \cdot \E(DW) \tag{Corollary \ref{flip}}\\ &= 2^{n - 2} \cdot \E(DW). \end{align*} \end{proof} Using Lemma \ref{exp-dw-plus-minus}, we now improve on the approach to upper-bound $\max_W |\E(DW)|$ from section \ref{unsuccessful}: \begin{lemma}\label{exp-DW-successful} The dual value $\E(D) / \max_W |\E(DW)|$ is at least $\frac{(n - 2)!}{(n - 1)^{n - 2}} \cdot \frac{(n - 1)2^{n - 2}}{\lVert D \rVert}$ \end{lemma} \begin{proof} For any $W$, we have: \begin{align*} \E(DW) &= 2^{-(n - 2)} \cdot \E\left(DW^{\{-1, 0, 1\}}\right) \tag{Lemma \ref{exp-dw-plus-minus}}\\ &\le 2^{-(n - 2)} \cdot \lVert D \rVert \lVert W^{\{-1, 0, 1\}} \rVert \tag{Cauchy-Schwarz}\\ &= 2^{-(n - 2)} \cdot \lVert D \rVert \sqrt{\E\left(\left(W^{\{-1, 0, 1\}}\right)^2\right)}\\ &= (n - 1)^{-1}2^{-(n - 2)} \cdot \lVert D \rVert. \end{align*} Using the value of $\E(D)$ from Corollary \ref{exp-d}, the dual value $\E(D) / \max_W |\E(DW)|$ is at least $\frac{(n - 2)!}{(n - 1)^{n - 2}} \cdot \frac{(n - 1)2^{n - 2}}{\lVert D \rVert}$. \end{proof} It only remains to compute $\lVert D \rVert$:\\ \begin{lemma}\label{norm-D} \[ {\lVert D \rVert}^2 = \frac{(n - 2)!}{(n - 1)^{n - 2}} \cdot n! 
\cdot \sum_{c = 0}^{n - 1} \frac{(-1)^{n - 1 - c}}{n - c} \cdot \frac{1}{(n - 1)^{n - 1 - c}c!} \] \end{lemma} \begin{proof} Recall the definition of $D$ (Definition \ref{D}): \begin{align*} D &= \sum_{S \subsetneq [n]} c_SJ_S,\\ c_S &= \frac{(-1)^{n - 1 - |S|} (n - 1 - |S|)!}{(n - 1)^{n - 1 - |S|}}. \end{align*} We compute $\lVert D \rVert^2 = \E(D^2)$ as follows. \begin{equation*} \E(D^2) = \sum_{S \subsetneq [n]} \sum_{T \subsetneq [n]} c_Sc_T \cdot \E(J_SJ_T). \end{equation*} Given $S, T \subsetneq [n]$, we have: \begin{align*} \E(J_SJ_T) &= \E(J_S)\E(J_T \mid J_S = 1)\\ &= \left(\left(\prod_{i = 1}^{|S|} (n - i)\right) / (n - 1)^{|S|}\right)\left(\left(\prod_{j = |S \cap T| + 1}^{|T|} (n - j)\right) / (n - 1)^{|T \setminus S|}\right) \end{align*} Therefore, \begin{align*} c_Sc_T \cdot \E(J_SJ_T) &= \left(c_S\left(\prod_{i = 1}^{|S|} (n - i)\right) / (n - 1)^{|S|}\right)\left(c_T\left(\prod_{j = |S \cap T| + 1}^{|T|} (n - j)\right) / (n - 1)^{|T \setminus S|}\right). \end{align*} Note that the product of $(-1)^{n - 1 - |S|}$ (from the $c_S$) and $(-1)^{n - 1 - |T|}$ (from the $c_T$) equals $(-1)^{|S| - |T|}$, so the above equation becomes: \begin{align*} c_Sc_T \cdot \E(J_SJ_T) &= (-1)^{|S| - |T|} \left(\frac{(n - 2)!}{(n - 1)^{n - 2}}\right)\left(\frac{(n - 1 - |S \cap T|)!}{(n - 1)^{n - 1 - |S \cap T|}}\right). \end{align*} Now, we rearrange the sum for $\E(D^2)$ in the following way: \begin{align*} \E(D^2) &= \sum_{S \subsetneq [n]} \sum_{T \subsetneq [n]} c_Sc_T \cdot \E(J_SJ_T)\\ &= \frac{(n - 2)!}{(n - 1)^{n - 2}} \sum_{c = 0}^{n - 1} \frac{(n - 1 - c)!}{(n - 1)^{n - 1 - c}} \sum_{\substack{S, T \subsetneq [n],\\|S \cap T| = c}} (-1)^{|S| - |T|}. \end{align*} To evaluate this expression, fix $c \le n - 1$ and consider the inner sum. Consider the collection of tuples $\{(S, T) \mid S, T \subsetneq [n], |S \cap T| = c\}$. We can pair up (most of) these tuples in the following way.
For each $S$, let $m_S$ denote the minimum element in $[n]$ that is not in $S$ (note that $m_S$ is well defined because $S$ cannot be $[n]$). We pair up the tuple $(S, T)$ with the tuple $(S, T \triangle \{m_S\})$, where $\triangle$ denotes symmetric difference. The only tuples $(S, T)$ that cannot be paired up in this way are those where $|S| = c$ and $T = [n] \setminus \{m_S\}$, because $T$ cannot be $[n]$. There are $\binom{n}{c}$ unpaired tuples $(S, T)$, and for each of these tuples, we have $(-1)^{|S| - |T|} = (-1)^{n - 1 - c}$. On the other hand, each pair $(S, T), (S, T \triangle \{m_S\})$ contributes 0 to the inner sum. Therefore, the inner sum equals $(-1)^{n - 1 - c}\binom{n}{c}$, and we have: \begin{align*} \E(D^2) &= \frac{(n - 2)!}{(n - 1)^{n - 2}} \sum_{c = 0}^{n - 1} \frac{(-1)^{n - 1 - c}(n - 1 - c)!}{(n - 1)^{n - 1 - c}}\binom{n}{c}\\ &= \frac{(n - 2)!}{(n - 1)^{n - 2}} \sum_{c = 0}^{n - 1} \frac{(-1)^{n - 1 - c}(n - 1 - c)!}{(n - 1)^{n - 1 - c}} \cdot \frac{n!}{c!(n - c)!}\\ &= \frac{(n - 2)!}{(n - 1)^{n - 2}} \cdot n! \cdot \sum_{c = 0}^{n - 1} \frac{(-1)^{n - 1 - c}}{n - c} \cdot \frac{1}{(n - 1)^{n - 1 - c}c!}. \end{align*} \end{proof} \begin{corollary}\label{cor:roughnormbound} $\E(D^2) \leq \frac{n!}{(n-1)^{n-1}}$ \end{corollary} \begin{proof} Observe that the sum \[ \sum_{c = 0}^{n - 1} \frac{(-1)^{n - 1 - c}}{n - c} \cdot \frac{1}{(n - 1)^{n - 1 - c}c!} \] is an alternating series where the magnitudes of the terms decrease as $c$ decreases. The two largest magnitude terms are $1/(n - 1)!$ and $-(1/2) \cdot 1/(n - 1)!$. Therefore, the sum is at most $\frac{1}{(n - 1)!}$, and we conclude that \[ \E(D^2) \leq \frac{(n - 2)!}{(n - 1)^{n - 2}} \cdot \frac{n!}{(n-1)!} = \frac{n!}{(n-1)^{n-1}} \] as needed. 
\end{proof} We can now complete the proof of Theorem \ref{thm:pigeonholelowerbound}. \begin{proof}[Proof of Theorem \ref{thm:pigeonholelowerbound}] By Lemma \ref{exp-DW-successful}, any Nullstellensatz proof for $\text{PHP}_n$ has total coefficient size at least $\frac{(n - 2)!}{(n - 1)^{n - 2}} \cdot \frac{(n - 1)2^{n - 2}}{\lVert D \rVert}$. By Corollary \ref{cor:roughnormbound}, $\lVert D \rVert \leq \sqrt{\frac{n!}{(n-1)^{n-1}}}$. Combining these results, any Nullstellensatz proof for $\text{PHP}_n$ has total coefficient size at least \begin{align*} \frac{(n - 2)!}{(n - 1)^{n - 2}} \cdot \frac{(n - 1)2^{n - 2}}{\sqrt{\frac{n!}{(n-1)^{n-1}}}} &= \frac{2^{n-2}}{\sqrt{n}} \cdot \frac{\sqrt{(n-1)!}}{(n-1)^{\frac{n}{2} - \frac{3}{2}}} \\ &= \frac{2^{n-2}(n-1)}{\sqrt{n}}\sqrt{\frac{(n-1)!}{(n-1)^{n-1}}}.\end{align*} Using Stirling's approximation that $n!$ is approximately $\sqrt{2{\pi}n}\left(\frac{n}{e}\right)^n$, $\sqrt{\frac{(n-1)!}{(n-1)^{n-1}}}$ is approximately $\sqrt[4]{2{\pi}(n-1)}\left(\frac{1}{\sqrt{e}}\right)^{n-1}$, so this expression is $\Omega\left(n^{\frac{3}{4}}\left(\frac{2}{\sqrt{e}}\right)^{n}\right)$, as needed. \end{proof} \subsection{Experimental Results for $\text{PHP}_n$} For small $n$, we computed the optimal dual values shown below. The first column of values is the optimal dual value for $n = 3, 4$. The second column of values is the optimal dual value for $n = 3, 4, 5, 6$ under the restriction that the only nonzero assignments are those where each pigeon goes to exactly one hole.
\begin{center} \begin{tabular}{ |c|c|c| } \hline $n$ & dual value & dual value, each pigeon goes to exactly one hole \\ \hline 3 & 11 & 6 \\ 4 & $41.4\overline{69}$ & 27 \\ 5 & - & 100 \\ 6 & - & 293.75 \\ \hline \end{tabular} \end{center} For comparison, the table below shows the value we computed for our dual solution and the lower bound of $\frac{2^{n-2}(n-1)}{\sqrt{n}}\sqrt{\frac{(n-1)!}{(n-1)^{n-1}}}$ that we showed in the proof of Theorem \ref{thm:pigeonholelowerbound}. (Values are rounded to 3 decimals.) \begin{center} \begin{tabular}{ |c|c|c| } \hline $n$ & value of $D$ & proven lower bound on value of $D$ \\ \hline 3 & 4 & 1.633 \\ 4 & 18 & 2.828 \\ 5 & 64 & 4.382 \\ 6 & 210.674 & 6.4 \\ \hline \end{tabular} \end{center} It is possible that our lower bound on the value of $D$ can be improved. The following experimental evidence suggests that the dual value $\E(D) / \max_W |\E(DW)|$ of $D$ may actually be $\widetilde{\Theta}(2^n)$. For $n = 3, 4, 5, 6$, we found that the weakenings $W$ that maximize $|\E(DW)|$ are of the following form, up to symmetry. (By symmetry, we mean that we can permute pigeons/holes without changing $|\E(DW)|$, and we can flip sets of holes as in Corollary \ref{flip} without changing $|\E(DW)|$.) \begin{itemize} \item For odd $n$ ($n = 3, 5$): $W$ is the weakening of the axiom $x_{1, 1}x_{2, 1} = 0$ where, for $i = 3, \dots, n$, we have $H_{W, i} = \{2, \dots, (n + 1)/2\}$. \item For even $n$ ($n = 4, 6$): $W$ is the following weakening of the axiom $x_{1, 1}x_{2, 1} = 0$. For $i = 3, \dots, n/2 + 1$, we have $H_{W, i} = \{2, \dots, n/2\}$. For $i = n/2 + 2, \dots, n$, we have $H_{W, i} = \{n/2 + 1, \dots, n - 1\}$. \end{itemize} If this pattern continues to hold for larger $n$, then experimentally it seems that $\E(D) / \max_W |\E(DW)|$ is $\widetilde{\Theta}(2^n)$, although we do not have a proof of this.
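The combinatorial identity at the heart of Lemma \ref{norm-D} -- that the inner sum $\sum_{S, T \subsetneq [n],\, |S \cap T| = c} (-1)^{|S| - |T|}$ equals $(-1)^{n - 1 - c}\binom{n}{c}$ -- can also be checked by brute force for small $n$. The following Python sketch (an informal sanity check, not part of any proof) enumerates all pairs of proper subsets directly:

```python
from itertools import combinations
from math import comb

def proper_subsets(n):
    """All subsets of {1, ..., n} except {1, ..., n} itself."""
    elems = range(1, n + 1)
    return [frozenset(s) for r in range(n) for s in combinations(elems, r)]

def inner_sum(n, c):
    """Sum of (-1)^(|S| - |T|) over proper subsets S, T with |S intersect T| = c."""
    subsets = proper_subsets(n)
    return sum((-1) ** (len(S) - len(T))
               for S in subsets for T in subsets if len(S & T) == c)

# The pairing argument in the proof of Lemma norm-D predicts (-1)^(n-1-c) * C(n, c).
for n in range(2, 7):
    for c in range(n):
        assert inner_sum(n, c) == (-1) ** (n - 1 - c) * comb(n, c)
print("pairing identity verified for n = 2, ..., 6")
```

For example, with $n = 2$ and $c = 0$ the seven ordered pairs of proper subsets with empty intersection sum to $-1 = (-1)^{1}\binom{2}{0}$, matching the unpaired-tuple count.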
\section{Total coefficient size upper bound for the ordering principle} In this section, we construct an explicit Nullstellensatz proof of infeasibility for the ordering principle $\text{ORD}_n$ with size and total coefficient size $2^n - n$. We start by formally defining the ordering principle. \begin{definition}[ordering principle ($\mathrm{ORD}_n$)] Intuitively, the ordering principle says that any total ordering on $n$ elements must have a minimum element. Formally, for $n \ge 1$, we define $\mathrm{ORD}_n$ to be the statement that the following system of axioms is infeasible: \begin{itemize} \item We have a variable $x_{i, j}$ for each pair $i, j \in [n]$ with $i < j$. $x_{i, j} = 1$ represents element $i$ being less than element $j$ in the ordering, and $x_{i, j} = 0$ represents element $i$ being greater than element $j$ in the ordering. We write $x_{j,i}$ as shorthand for $1 - x_{i,j}$ (i.e. we take $x_{j,i} = \bar{x}_{i,j} = 1 - x_{i,j}$). \item For each $i \in [n]$, we have the axiom $\prod_{j \in [n] \setminus \{i\}}{x_{i,j}} = 0$ which represents the constraint that element $i$ is not a minimum element. We call these axioms non-minimality axioms. \item For each triple $i,j,k \in [n]$ where $i < j < k$, we have the two axioms $x_{i,j}x_{j,k}x_{k,i} = 0$ and $x_{k,j}x_{j,i}x_{i,k} = 0$ which represent the constraints that elements $i, j, k$ satisfy transitivity. We call these axioms transitivity axioms. \end{itemize} \end{definition} In our Nullstellensatz proof, for each weakening $W$ of an axiom, its coefficient $c_W$ will either be $1$ or $0$. Non-minimality axioms will appear with coefficient $1$ and the only weakenings of transitivity axioms which appear have a special form which we describe below. \begin{definition}[nice transitivity weakening] Let $W$ be a weakening of the axiom $x_{i,j}x_{j,k}x_{k,i}$ or the axiom $x_{k,j}x_{j,i}x_{i,k}$ for some $i < j < k$. Let $G(W)$ be the following directed graph. The vertices of $G(W)$ are $[n]$.
For distinct $i', j' \in [n]$, $G(W)$ has an edge from $i'$ to $j'$ if $W$ contains the term $x_{i', j'}$. We say that $W$ is a \textit{nice transitivity weakening} if $G(W)$ has exactly $n$ edges and all vertices are reachable from vertex $i$. \end{definition} In other words, if $W$ is a weakening of the axiom $x_{i,j}x_{j,k}x_{k,i}$ or the axiom $x_{k,j}x_{j,i}x_{i,k}$ then $G(W)$ contains a 3-cycle on vertices $\{i, j, k\}$. $W$ is a nice transitivity weakening if and only if contracting this 3-cycle results in a (directed) spanning tree rooted at the contracted vertex. Note that if $W$ is a nice transitivity weakening and $x$ is an assignment with a minimum element then $W(x) = 0$.\\ \begin{theorem}\label{ordering-primal} There is a Nullstellensatz proof of infeasibility for $\text{ORD}_n$ satisfying: \begin{enumerate} \item The total coefficient size is $2^n - n$. \item Each $c_W$ is either 0 or 1. \item If $A$ is a non-minimality axiom, then $c_A = 1$ and $c_W = 0$ for all other weakenings of $A$. \item If $W$ is a transitivity weakening but not a nice transitivity weakening then $c_W = 0$. \end{enumerate} \end{theorem} \textbf{Proof.} We prove Theorem \ref{ordering-primal} by induction on $n$. When $n = 3$, the desired Nullstellensatz proof sets $c_A = 1$ for each axiom $A$. It can be verified that $\sum_W c_WW$ evaluates to 1 on each assignment, and that this Nullstellensatz proof satisfies the properties of Theorem \ref{ordering-primal}. Now suppose we have a Nullstellensatz proof for $\text{ORD}_n$ satisfying Theorem \ref{ordering-primal}, and let $S_n$ denote the set of transitivity weakenings $W$ for which $c_W = 1$. The idea to obtain a Nullstellensatz proof for $\text{ORD}_{n + 1}$ is to use two ``copies'' of $S_n$, the first copy on elements $\{1, \dots, n\}$ and the second copy on elements $\{2, \dots, n + 1\}$. Specifically, we construct the Nullstellensatz proof for $\text{ORD}_{n + 1}$ by setting the following $c_W$ to 1 and all other $c_W$ to 0. 
\begin{enumerate} \item For each non-minimality axiom $A$ in $\text{ORD}_{n + 1}$, we set $c_A = 1$. \item For each $W \in S_n$, we define the transitivity weakening $W'$ on $n + 1$ elements by $W' = W \cdot x_{1, n + 1}$ and set $c_{W'} = 1$. \item For each $W \in S_n$, first we define the transitivity weakening $W''$ on $n + 1$ elements by replacing each variable $x_{i, j}$ that appears in $W$ by $x_{i + 1, j + 1}$. (e.g., if $W = x_{1, 2}x_{2, 3}x_{3,1}$, then $W'' = x_{2, 3}x_{3, 4}x_{4,2}$.) Then, we define $W''' = W''x_{n + 1,1}$ and set $c_{W'''} = 1$. \item For each $i \in \{2, \dots, n\}$, for each of the 2 transitivity axioms $A$ on $(1, i, n + 1)$, we set $c_W = 1$ for the following weakening $W$ of $A$: \begin{equation*} W = A\left(\prod_{j \in [n] \setminus \{1, i\}}{x_{i, j}}\right). \end{equation*} In other words, $W(x) = 1$ if and only if $A(x) = 1$ and $i$ is the minimum element among the elements $[n+1] \setminus \{1, n + 1\}$. \end{enumerate} The desired properties 1 through 4 in Theorem \ref{ordering-primal} can be verified by induction. It remains to show that for each assignment $x$, there is exactly one nonzero $c_W$ for which $W(x) = 1$. If $x$ has a minimum element $i \in [n+1]$, then the only nonzero $c_W$ for which $W(x) = 1$ is the non-minimality axiom for $i$. Now suppose that $x$ does not have a minimum element. Consider two cases: either $x_{1, n + 1} = 1$, or $x_{n + 1,1} = 1$. Suppose $x_{1, n + 1} = 1$. Consider the two subcases: \begin{enumerate} \item Suppose that, if we ignore element $n + 1$, then there is still no minimum element among the elements $\{1, \dots, n\}$. Then there is exactly one weakening $W$ in point 2 of the construction for which $W(x) = 1$, by induction. \item Otherwise, for some $i \in \{2, \dots, n\}$, we have that $i$ is a minimum element among $\{1, \dots, n\}$ and $x_{n + 1,i} = 1$.
Then there is exactly one weakening $W$ in point 4 of the construction for which $W(x) = 1$ (namely the weakening $W$ of the axiom $A = x_{i,1}x_{1, n + 1}x_{n+1,i}$). \end{enumerate} The case $x_{n + 1,1} = 1$ is handled similarly by considering whether there is a minimum element among $\{2, \dots, n + 1\}$. Assignments that have no minimum element among $\{2, \dots, n + 1\}$ are handled by point 3 of the construction, and assignments that do have a minimum element among $\{2, \dots, n + 1\}$ are handled by point 4 of the construction. $\qed$ \subsection{Restriction to instances with no minimum element} We now observe that for the ordering principle, we can restrict our attention to instances which have no minimum element. \begin{lemma} Suppose we have coefficients $c_W$ satisfying $\sum_W c_{W}W(x) = 1$ for all assignments $x$ that have no minimum element (but it is possible that $\sum_W c_{W}W(x) \neq 1$ on assignments $x$ that do have a minimum element). Then there exist coefficients $c'_{W}$ such that $\sum_W c'_{W}W = 1$ (i.e., the coefficients $c_W'$ are a valid primal solution) with \begin{equation*} \sum_{W}{|c'_W|} \leq (n + 1)\left(\sum_{W}{ |c_W|}\right) + n. \end{equation*} \end{lemma} This lemma says that, to prove upper or lower bounds for $\text{ORD}_n$ by constructing primal or dual solutions, it suffices to consider only assignments $x$ that have no minimum element, up to a factor of $O(n)$ in the solution value. \begin{proof} Let $C$ denote the function on weakenings that maps $W$ to $c_W$. For $i \in [n]$, we will define the function $C_i$ on weakenings satisfying the properties: \begin{itemize} \item If $x$ is an assignment where $i$ is a minimum element, then $\sum_{W}{C_i(W)W(x)} = \sum_{W}{C(W)W(x)}$. \item Otherwise, $\sum_{W}{C_i(W)W(x)} = 0$. \end{itemize} Let $A_i = \prod_{j\in [n] \setminus \{i\}}{x_{i, j}}$ be the non-minimality axiom for $i$. Intuitively, we want to define $C_i$ as follows: For all $W$, $C_i(A_iW) = C(W)$.
(If $W$ is a weakening that is not a weakening of $A_i$, then $C_i(W) = 0$.) The only technicality is that multiple weakenings $W$ may become the same when multiplied by $A_i$, so we actually define $C_i(A_iW) = \sum_{W': A_iW' = A_iW} C(W')$. Finally, we use the functions $C_i$ to define the function $C'$: \begin{equation*} C' = C - \left(\sum_{i = 1}^n C_i\right) + \left(\sum_{i = 1}^n A_i\right), \end{equation*} where $\sum_{i = 1}^n A_i$ denotes the function that assigns coefficient 1 to each non-minimality axiom $A_i$ and coefficient 0 to every other weakening. To see that this works, note that any assignment $x$ has at most one minimum element. If $x$ has no minimum element, then every $\sum_{W}{C_i(W)W(x)}$ and every $A_i(x)$ is 0, so $\sum_{W}{C'(W)W(x)} = \sum_{W}{C(W)W(x)} = 1$. If $x$ has minimum element $i$, then $\sum_{W}{C_i(W)W(x)}$ cancels $\sum_{W}{C(W)W(x)}$ and $A_i(x) = 1$, so again $\sum_{W}{C'(W)W(x)} = 1$. By taking $c'_W = C'(W)$, the $c'_W$ are a valid primal solution with the desired bound on the total coefficient size. \end{proof} \subsection{Experimental results} For small values of $n$, we have computed both the minimum total coefficient size of a Nullstellensatz proof of the ordering principle and the value of the linear program where we restrict our attention to instances $x$ which have no minimum element. We found that for $n = 3,4,5$, the minimum total coefficient size of a Nullstellensatz proof of the ordering principle is $2^n - n$ so the primal solution given by Theorem \ref{ordering-primal} is optimal. However, for $n = 6$ this solution is not optimal as the minimum total coefficient size is $52$ rather than $2^6 - 6 = 58$. If we restrict our attention to instances $x$ which have no minimum element then for $n = 3,4,5,6$, the value of the resulting linear program is equal to $2\binom{n}{3}$, which is the number of transitivity axioms. However, this is no longer true for $n = 7$, though we did not compute the exact value. \section{Analyzing Total Coefficient Size for Stronger Proof Systems} In this section, we consider the total coefficient size for two stronger proof systems, sum of squares proofs and a proof system which is between Nullstellensatz and sum of squares proofs which we call resolution-like proofs.
\begin{definition} Given a system of axioms $\{p_i = 0: i \in [m]\}$, we define a resolution-like proof of infeasibility to be an equality of the form \[ -1 = \sum_{i=1}^{m}{{p_i}{q_i}} + \sum_{j}{{c_j}g_j} \] where each $g_j$ is a monomial and each coefficient $c_j$ is non-negative. We define the total coefficient size of such a proof to be $\sum_{i=1}^{m}{T(q_i)} + \sum_{j}{c_j}$. \end{definition} We call this proof system resolution-like because it captures the resolution-like calculus introduced for Max-SAT by Mar\'{i}a Luisa Bonet, Jordi Levy, and Felip Many\`{a} \cite{BONET2007606}. The idea is that if we have deduced that $x{r_1} \leq 0$ and $\bar{x}{r_2} \leq 0$ for some variable $x$ and monomials $r_1$ and $r_2$ then we can deduce that ${r_1}{r_2} \leq 0$ as follows: \[ {r_1}{r_2} = x{r_1} - (1 - r_2)x{r_1} + \bar{x}{r_2} - (1 - r_1)\bar{x}{r_2} \] where we decompose $(1 - r_1)$ and $(1-r_2)$ into monomials using the observation that $1 - \prod_{i=1}^{k}{x_i} = \sum_{j = 1}^{k}{(1 - x_j)\left(\prod_{i=1}^{j-1}{x_i}\right)}$. The minimum total coefficient size of a resolution-like proof can be found using the following linear program. \begin{enumerate} \item[] Primal: Minimize $\sum_{i=1}^{m}{T(q_i)} + \sum_{j}{c_j}$ subject to $\sum_{i=1}^{m}{{p_i}{q_i}} + \sum_{j}{{c_j}g_j} = -1$ \item[] Dual: Maximize $D(1)$ subject to the constraints that \begin{enumerate} \item[1.] $D$ is a linear map from polynomials to $\mathbb{R}$. \item[2.] For each $i \in [m]$ and each monomial $r$, $|D(rp_i)| \leq 1$. \item[3.] For each monomial $r$, $D(r) \geq -1$. 
\end{enumerate} \end{enumerate} \begin{definition} Given a system of axioms $\{p_i = 0: i \in [m]\}$, a Positivstellensatz/sum of squares proof of infeasibility is an equality of the form \[ -1 = \sum_{i=1}^{m}{{p_i}{q_i}} + \sum_{j}{g_j^2} \] We define the total coefficient size of a Positivstellensatz/sum of squares proof to be $\sum_{i=1}^{m}{T(q_i)} + \sum_{j}{T(g_j)^2}$. \end{definition} The minimum total coefficient size of a Positivstellensatz/sum of squares proof can be analyzed using the following primal/dual pair. \begin{enumerate} \item[] Primal: Minimize $\sum_{i=1}^{m}{T(q_i)} + \sum_{j}{T(g_j)^2}$ subject to the constraint that $-1 = \sum_{i=1}^{m}{{p_i}{q_i}} + \sum_{j}{g_j^2}$. \item[] Dual: Maximize $D(1)$ subject to the constraints that \begin{enumerate} \item[1.] $D$ is a linear map from polynomials to $\mathbb{R}$. \item[2.] For each $i \in [m]$ and each monomial $r$, $|D(rp_i)| \leq 1$. \item[3.] For each polynomial $g$, $D(g^2) \geq -T(g)^2$.
\end{enumerate} For the first observation, observe that since the first pigeon is unrestricted, every term of the dual certificate cancels except $J_{\{2,3,\ldots,n\}}$ which is $0$ as none of these pigeons can go to hole $1$. For the second observation, observe that since the second pigeon is unrestricted, every term of the dual certificate cancels except $J_{\{1,3,4,\ldots,n\}}$ which gives value $\frac{\E(D)}{n-1} = \frac{(n-2)!}{(n-1)^{n-1}}$. For the third observation, observe that by Corollary \ref{flip}, the value of the dual certificate on the polynomial $x_{11}x_{21}\prod_{i=3}^{n}{\bar{x}_{i1}}$ is $(-1)^{n-2}$ times the value of the dual certificate on the polynomial $\prod_{i=1}^{n}{x_{i1}}$ which is \[ \frac{1}{(n-1)^{n}}\left(-\frac{(n-1)!}{(n-1)^{n-1}} + n\frac{(n-2)!}{(n-1)^{n-2}}\right) = \frac{(n-2)!}{(n-1)^{2n-3}} \] Putting these observations together, the value of the dual certificate for the polynomial \[ \prod_{i=1}^{n}{\bar{x}_{i1}} = \prod_{i=2}^{n}{\bar{x}_{i1}} - x_{11}\prod_{i=3}^{n}{\bar{x}_{i1}} + x_{11}x_{21}\prod_{i=3}^{n}{\bar{x}_{i1}} \] is $-\frac{(n-2)!}{(n-1)^{n-1}} + \frac{(-1)^{n - 2}(n-2)!}{(n-1)^{2n-3}} = -\frac{(n-2)!}{(n-1)^{n-1}}\left(1 - \frac{(-1)^{n - 2}}{(n-1)^{n-2}}\right)$. \end{proof} \subsection{Small total coefficient size sum of squares proof of the ordering principle} In this subsection, we show that the small size resolution proof of the ordering principle \cite{staalmarck1996short}, which seems to be dynamic in nature, can actually be mimicked by a sum of squares proof. Thus, while sum of squares requires degree $\tilde{\Theta}(\sqrt{n})$ to refute the negation of the ordering principle \cite{potechin:LIPIcs:2020:12590}, there is a sum of squares proof which has polynomial size and total coefficient size.
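Before giving the formal construction, we note that the equality underlying the sum of squares proof in the theorem below can be sanity-checked by brute force for small $n$. The following Python sketch (an informal check, not part of the proof) codes the monomials $F_{jm}$ and $T_{jmk}$ directly from their definitions and evaluates the right-hand side of the identity on every $0/1$ assignment to the variables $x_{ij}$ with $x_{ji} = 1 - x_{ij}$; on such assignments the axioms $x_{ij}^2 = x_{ij}$ and $x_{ij}x_{ji} = 0$ hold automatically, and the right-hand side always evaluates to $-1$:

```python
from itertools import product

def rhs_value(n, x):
    """Evaluate the right-hand side of the SoS identity on assignment x,
    where x[(i, j)] is in {0, 1} and x[(j, i)] = 1 - x[(i, j)]."""
    def F(j, m):
        # 1 iff element j comes first among elements 1, ..., m
        return int(all(x[(j, i)] for i in range(1, m + 1) if i != j))
    def T(j, m, k):
        # weakening of the transitivity axiom on the triple (j, k, m+1)
        return (F(j, m) * x[(m + 1, j)] * x[(k, m + 1)]
                * int(all(x[(m + 1, i)] for i in range(1, k) if i != j)))
    total = 0
    for m in range(1, n):
        square = F(m + 1, m + 1) - sum(F(j, m) * F(m + 1, m + 1)
                                       for j in range(1, m + 1))
        total += square * square
        total -= sum(T(j, m, k) for j in range(1, m + 1)
                     for k in range(1, m + 1) if k != j)
    return total - sum(F(j, n) for j in range(1, n + 1))

def check(n):
    """Check that the identity evaluates to -1 on every assignment."""
    pairs = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]
    for bits in product([0, 1], repeat=len(pairs)):
        x = {}
        for (i, j), b in zip(pairs, bits):
            x[(i, j)], x[(j, i)] = b, 1 - b
        assert rhs_value(n, x) == -1
    return True

for n in range(2, 5):
    assert check(n)
print("identity verified for n = 2, 3, 4")
```

This is only a pointwise check on the Boolean cube; the proof below establishes the identity as a polynomial equality modulo the stated axioms.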
To make our proof easier to express, we define the following monomials. \begin{definition} \ \begin{enumerate} \item Whenever $1 \leq j \leq m \leq n$, let $F_{jm} = \prod_{i \in [m] \setminus \{j\}}{x_{ji}}$ be the monomial which is $1$ if element $j$ is the first among elements $1,\dots,m$ and $0$ otherwise. \item For all $m \in [n-1]$ and all distinct $j,k \in [m]$, we define $T_{jmk}$ to be the monomial \[ T_{jmk} = F_{jm}x_{(m+1)j}x_{k(m+1)}\prod_{i \in [k-1] \setminus \{j\}}{x_{(m+1)i}} \] Note that $T_{jmk}$ is a multiple of $x_{(m+1)j}x_{jk}x_{k(m+1)}$, so it is a weakening of a transitivity axiom. \end{enumerate} \end{definition} With these definitions, we can now express our proof. \begin{theorem} The following equality (modulo the axioms that $x_{ij}^2 = x_{ij}$ and $x_{ij}x_{ji} = 0$ for all distinct $i,j \in [n]$) gives an SoS proof that the total ordering axioms are infeasible. \[ -1 = \sum_{m=1}^{n-1}{\left(\left(F_{(m+1)(m+1)} - \sum_{j=1}^{m}{F_{jm}F_{(m+1)(m+1)}}\right)^2 - \sum_{j=1}^{m}{\sum_{k \in [m] \setminus \{j\}}{T_{jmk}}}\right)} - \sum_{j=1}^{n}{F_{jn}} \] \end{theorem} \begin{proof} Our key building block is the following lemma. \begin{lemma}\label{lem:buildingblock} For all $m \in [1,n-1]$ and all $j \in [1,m]$, \[ F_{jm} = F_{j(m+1)} + \sum_{k \in [m] \setminus \{j\}}{T_{jmk}} + F_{jm}F_{(m+1)(m+1)} \] \end{lemma} \begin{proof} First note that \[ F_{jm} = F_{jm}x_{j(m+1)} + F_{jm} x_{(m+1)j} = F_{j(m+1)} + F_{jm}x_{(m+1)j} \] We now use the following proposition. \begin{proposition} \[ F_{jm}x_{(m+1)j} = \sum_{k \in [m] \setminus \{j\}}{F_{jm}x_{(m+1)j}x_{k(m+1)}\prod_{i \in [k-1] \setminus \{j\}}{x_{(m+1)i}}} + F_{jm}F_{(m+1)(m+1)}x_{(m+1)j} \] \end{proposition} \begin{proof} If $x_{k(m+1)} = 0$ for all $k \in [m]$ then $x_{k(m+1)}\prod_{i \in [k-1] \setminus \{j\}}{x_{(m+1)i}} = 0$ for all $k \in [m]$ and $F_{(m+1)(m+1)} = 1$. Otherwise, let $k'$ be the first index in $[m]$ such that $x_{k'(m+1)} = 1$.
Now observe that $F_{(m+1)(m+1)} = 0$ and $x_{k(m+1)}\prod_{i \in [k-1] \setminus \{j\}}{x_{(m+1)i}} = 1$ if $k = k'$ and is $0$ if $k \neq k'$. \end{proof} Finally, we observe that $F_{jm}F_{(m+1)(m+1)}x_{(m+1)j} = F_{jm}F_{(m+1)(m+1)}$ as $x_{(m+1)j}$ is contained in $F_{(m+1)(m+1)}$. Putting everything together, \[ F_{jm} = F_{j(m+1)} + \sum_{k \in [m] \setminus \{j\}}{T_{jmk}} + F_{jm}F_{(m+1)(m+1)} \] \end{proof} We now verify that our proof, which we restate here for convenience, is indeed an equality modulo the axioms that $x_{ij}^2 = x_{ij}$ and $x_{ij}x_{ji} = 0$ for all distinct $i,j \in [n]$. \[ -1 = \sum_{m=1}^{n-1}{\left(\left(F_{(m+1)(m+1)} - \sum_{j=1}^{m}{F_{jm}F_{(m+1)(m+1)}}\right)^2 - \sum_{j=1}^{m}{\sum_{k \in [m] \setminus \{j\}}{T_{jmk}}}\right)} - \sum_{j=1}^{n}{F_{jn}} \] Observe that $F_{(m+1)(m+1)}^2 = F_{(m+1)(m+1)}$, $F_{jm}^2 = F_{jm}$, and for all distinct $j,j' \in [m]$, $F_{jm}F_{j'm} = 0$. Thus, \[ \left(F_{(m+1)(m+1)} - \sum_{j=1}^{m}{F_{jm}F_{(m+1)(m+1)}}\right)^2 = \left(F_{(m+1)(m+1)} - \sum_{j=1}^{m}{F_{jm}F_{(m+1)(m+1)}}\right) \] By Lemma \ref{lem:buildingblock}, $F_{jm}F_{(m+1)(m+1)} + \sum_{k \in [m] \setminus \{j\}}{T_{jmk}} = F_{jm} - F_{j(m+1)} $. This implies that \begin{align*} -\sum_{m=1}^{n-1}{\sum_{j=1}^{m}{\left(F_{jm}F_{(m+1)(m+1)} + \sum_{k \in [m] \setminus \{j\}}{T_{jmk}} \right)}}&= -\sum_{j=1}^{n-1}{\sum_{m = j}^{n-1}{\left(F_{jm} - F_{j(m+1)} \right)}} \\ &= \sum_{j=1}^{n-1}{(F_{jn} - F_{jj})} \end{align*} Now observe that $\sum_{m=1}^{n-1}{F_{(m+1)(m+1)}} = \sum_{j = 2}^{n-1}{F_{jj}} + F_{nn} = -1 + \sum_{j=1}^{n-1}{F_{jj}} + F_{nn}$, using the fact that $F_{11} = 1$. Putting everything together, the equality holds. Since each $T_{jmk}$ is a weakening of a transitivity axiom and each $F_{jn}$ is a non-minimality axiom, this indeed gives a sum of squares proof that these axioms are unsatisfiable. \end{proof} \section{Open Problems} Our work raises a number of open problems.
For the pigeonhole principle, while we have proved an exponential total coefficient size lower bound on Nullstellensatz proofs, there is a lot of room for further work. Some questions are as follows. \begin{enumerate} \item For the pigeonhole principle, our lower bound is $2^{\Omega(n)}$ while the trivial upper bound is $O(n!)$. Can we improve the lower and/or upper bound? \item If we increase the number of pigeons from $n$ to $n+1$ while still having $n - 1$ holes, our lower bound proof no longer applies. Can we prove a total coefficient size lower bound on Nullstellensatz when there are $n+1$ or more pigeons? How does the minimum total coefficient size of a proof depend on the number of pigeons? \item Can we show total coefficient size lower bounds for resolution-like proofs of the pigeonhole principle? \item How much of an effect would adding the axioms that pigeons can only go to one hole have on the minimum total coefficient size needed to prove the pigeonhole principle? \end{enumerate} We are still far from understanding the total coefficient size of proofs for the ordering principle. Two natural questions are as follows. \begin{enumerate} \item Can we prove superpolynomial lower bounds on the total coefficient size of Nullstellensatz proofs for the ordering principle and/or improve the $O(2^n)$ upper bound? \item Are there resolution-like proofs for the ordering principle with polynomial total coefficient size? If so, this shows that the seemingly dynamic $O(n^3)$ size resolution proof of the ordering principle \cite{staalmarck1996short} can be captured by a one line resolution-like proof. If not, this gives a natural example separating resolution proof size and the total coefficient size of resolution-like proofs. \end{enumerate} Finally, we can ask what relationships and separations we can show between all of these different proof systems. Some questions are as follows. 
\begin{enumerate} \item Are there natural examples where the minimum total coefficient size is very different (either larger or smaller) than the minimum size for Nullstellensatz, resolution-like, or sum of squares proofs? \item Can the minimum total coefficient size of a strong proof system be used to lower bound the size of another proof system? For example, can resolution proof size be lower bounded by the minimum total coefficient size of a sum of squares proof or can we find an example where there is a polynomial size resolution proof but any sum of squares proof has superpolynomial total coefficient size? \end{enumerate} \bibliographystyle{alpha} \bibliography{main} \end{document}
2205.03456v3
http://arxiv.org/abs/2205.03456v3
Björk-Sjölin condition for strongly singular convolution operators on graded Lie groups
\documentclass[12pt, reqno]{amsart} \usepackage{amsmath, amsthm, amscd, amsfonts, amssymb, graphicx, color, mathrsfs} \usepackage[bookmarksnumbered, colorlinks, plainpages]{hyperref} \usepackage[all]{xy} \usepackage{slashed} \setlength{\textwidth}{15.2cm} \setlength{\textheight}{22.7cm} \setlength{\topmargin}{0mm} \setlength{\oddsidemargin}{3mm} \setlength{\evensidemargin}{3mm} \setlength{\footskip}{1cm} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{criterion}[theorem]{Criterion} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{ass}[theorem]{Assumption} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{conclusion}[theorem]{Conclusion} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{cri}[theorem]{Criterion} \newtheorem{summary}[theorem]{Summary} \newtheorem{axiom}[theorem]{Axiom} \newtheorem{problem}[theorem]{Problem} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \numberwithin{equation}{section} \begin{document} \setcounter{page}{1} \title[ Bj\"ork-Sj\"olin condition on graded Lie groups ]{Bj\"ork-Sj\"olin condition for strongly singular convolution operators on graded Lie groups} \author[D. Cardona]{Duv\'an Cardona} \address{ Duv\'an Cardona: \endgraf Department of Mathematics: Analysis, Logic and Discrete Mathematics \endgraf Ghent University, Belgium \endgraf {\it E-mail address} {\rm [email protected], [email protected]} } \author[M. 
Ruzhansky]{Michael Ruzhansky} \address{ Michael Ruzhansky: \endgraf Department of Mathematics: Analysis, Logic and Discrete Mathematics \endgraf Ghent University, Belgium \endgraf and \endgraf School of Mathematical Sciences \endgraf Queen Mary University of London \endgraf United Kingdom \endgraf {\it E-mail address} {\rm [email protected], [email protected]} } \thanks{The authors are supported by the FWO Odysseus 1 grant G.0H94.18N: Analysis and Partial Differential Equations and by the Methusalem programme of the Ghent University Special Research Fund (BOF) (Grant number 01M01021). Michael Ruzhansky is also supported by EPSRC grant EP/R003025/2. } \keywords{Calder\'on-Zygmund operator, Weak (1,1) inequality, Oscillating singular integrals} \subjclass[2010]{35S30, 42B20; Secondary 42B37, 42B35} \begin{abstract} In this work we extend the $L^1$-Bj\"ork-Sj\"olin theory of strongly singular convolution operators to arbitrary graded Lie groups. Our criteria are presented in terms of the oscillating H\"ormander condition of Bj\"ork and Sj\"olin on the kernel of the operator, while the decay of its group Fourier transform is measured in terms of the infinitesimal representation of an arbitrary Rockland operator. The historical result by Bj\"ork and Sj\"olin is recovered in the case of the Euclidean space. \end{abstract} \maketitle \tableofcontents \allowdisplaybreaks \section{Introduction} The aim of this manuscript is to extend the theory of strongly singular integrals by Bj\"ork and Sj\"olin \cite{Bjork,Sjolin} to arbitrary graded Lie groups. This family of Lie groups includes Heisenberg-type groups and stratified groups, and is characterised within the class of nilpotent Lie groups by the existence of hypoelliptic left-invariant homogeneous partial differential operators (Rockland operators), in view of the solution of the Rockland conjecture by Helffer and Nourrigat \cite{HelfferNourrigat}.
Oscillating singular integrals arise as generalisations of the oscillating Fourier multipliers. In the Euclidean setting they are used in PDE theory to estimate, in the scale of Sobolev spaces, hyperbolic problems associated with powers of elliptic operators, in particular with the fractional (positive) Laplacian $\Delta_x^{\frac{\gamma}{2}},$ where $0<\gamma<1.$ More precisely, oscillating Fourier multipliers are associated with symbols of the form \begin{equation}\label{OSC:F:M} \widehat{K}(\xi)=\psi(\xi)\frac{e^{i|\xi|^a}}{|\xi|^{\frac{n\alpha}{2}}},\,\psi\in C^{\infty}(\mathbb{R}^n),\quad 0<a<1, \end{equation} where $\psi$ vanishes near the origin and is equal to one for $|\xi|$ large. It was proved by Wainger \cite{Wainger1965} that $K(x)$ is essentially equal to $c_n|x|^{-n-\lambda}e^{ic_{n}'|x|^{a'}},$ where $ \lambda=\frac{n(a-\alpha)}{2(1-a)},$ and $ a'=\frac{a}{a-1}.$ From this one can deduce that $$|\nabla K(x)|\lesssim |x|^{-n-\lambda-1+a'}.$$ This gradient estimate shows that a kernel satisfying \eqref{OSC:F:M} lies outside the scope of the theory of singular integrals due to Calder\'on and Zygmund \cite{CalderonZygmund1952}. Nevertheless, the boundedness of singular integrals defined by kernels as in \eqref{OSC:F:M} was extensively investigated in the classical works of Hardy \cite{Hardy1913}, Hirschman \cite{Hirschman1956} and Wainger \cite{Wainger1965}, culminating in the endpoint estimates proved in \cite{Fefferman1970,FeffermanStein1972}. Further works on the subject in the setting of smooth manifolds and beyond can be found in Seeger \cite{Seeger,Seeger1990,Seeger1991} and Seeger and Sogge \cite{SeegerSogge}; for the setting of Fourier integral operators we refer the reader to Seeger, Sogge and Stein \cite{SSS} and Tao \cite{Tao}.
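The exponents in Wainger's asymptotics are simple arithmetic in $n$, $a$ and $\alpha$. The following short computation is our own illustrative sketch, with sample values $n=3$, $a=1/2$, $\alpha=1/4$ that are not taken from the text; it evaluates $\lambda=\frac{n(a-\alpha)}{2(1-a)}$ and $a'=\frac{a}{a-1}$:

```python
# Illustrative computation (not from the paper): Wainger's exponents for the
# kernel asymptotics K(x) ~ c_n |x|^{-n-lambda} e^{i c_n' |x|^{a'}}.
def wainger_exponents(n, a, alpha):
    # lambda = n(a - alpha) / (2(1 - a)),  a' = a / (a - 1)
    lam = n * (a - alpha) / (2 * (1 - a))
    a_prime = a / (a - 1)
    return lam, a_prime

# Sample values (our choice): n = 3, a = 1/2, alpha = 1/4.
lam, a_prime = wainger_exponents(3, 0.5, 0.25)
# Here lam = 0.75 and a_prime = -1.0; since a' < 0 whenever 0 < a < 1, the
# phase e^{i c'|x|^{a'}} oscillates increasingly fast as x -> 0.
```

In particular $a'<0$ for every $0<a<1$, which is the source of the rapid oscillation of the kernel near the origin.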
In \cite{Fefferman1970,FeffermanStein1972} Fefferman and Stein introduced a theory for oscillating Fourier multipliers, which are convolution operators with singular kernels satisfying the condition \begin{equation}\label{FeffCond} \textnormal{A}(\theta):\, \sup_{0<R<1}\Vert\, \smallint\limits_{|x|\geq 2R^{1-\theta}}|K(x-y)-K(x)|dx \Vert_{L^\infty(B(0,R),\,dy)} <\infty, \end{equation} for some $0\leq \theta<1,$ and whose Fourier transform has order $-n\theta/2,$ that is, \begin{equation}\label{decay} \textnormal{B}(\theta):\, |\widehat{K}(\xi)|=O((1+|\xi|)^{-\frac{n\theta}{2}}),\quad 0\leq \theta<1. \end{equation} With $\theta=0,$ the Fefferman-Stein conditions agree with the one introduced by H\"ormander \cite{Hormander1960} for the standard Calder\'on-Zygmund operators \cite{CalderonZygmund1952}. With $0<\theta<1,$ however, the conditions above also cover oscillating kernels as in \eqref{OSC:F:M}. The boundedness theory due to Fefferman and Stein can be summarised (for several reasons, including the real and complex interpolation theory of bounded linear operators on Lebesgue spaces) in the following theorem. \begin{theorem}[Fefferman and Stein \cite{Fefferman1970,FeffermanStein1972}, 1970-1972]\label{Fefferman:Stein} Assume that $K\in L^1_{loc}(\mathbb{R}^n\setminus\{0\})$ is a distribution with compact support satisfying the hypotheses $\textnormal{A}(\theta)$ and $\textnormal{B}(\theta)$ with $0\leq \theta<1.$ Then the convolution operator $$ T:f\mapsto f\ast K, $$ admits an extension of weak (1,1) type.
Moreover, $T$ admits a bounded extension from the Hardy space $H^1(\mathbb{R}^n)$ into $L^1(\mathbb{R}^n).$ \end{theorem} On the other hand, answering a question by Bj\"ork in \cite{Bjork}, Sj\"olin \cite{Sjolin} developed the $L^1$-theory for the convolution operators $T:f\mapsto f\ast K$ where the kernel $K$ satisfies the two conditions $\textnormal{A}(\theta)$ and $\textnormal{B}(\alpha)$ given by \begin{equation}\label{Sjolin:1} \textnormal{A}(\theta):\, \sup_{0<R<b}\Vert\, \smallint\limits_{|x|\geq 2R^{1-\theta}}|K(x-y)-K(x)|dx \Vert_{L^\infty(B(0,R),\,dy)} <\infty, \end{equation} where $0<b< 1,$ and \begin{equation}\label{Sjolin:2} \textnormal{B}(\alpha):\, |\widehat{K}(\xi)|=O((1+|\xi|)^{-\frac{n\alpha}{2}}),\quad 0\leq \alpha<1, \end{equation} where $0<\alpha<\theta<1.$ In the standard terminology of harmonic analysis, a convolution operator with kernel satisfying the conditions $ \textnormal{A}(\theta)$ and $\textnormal{B}(\alpha)$ with $0<\alpha<\theta<1$ is called a strongly singular integral. The result of Sj\"olin \cite{Sjolin} states the boundedness of this family of operators on $L^1(\mathbb{R}^n)$ as follows.
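As a sanity check (ours, not part of the original argument), the model symbol \eqref{OSC:F:M} with $\psi\equiv 1$ for $|\xi|\geq 1$ satisfies $\textnormal{B}(\alpha)$, since the oscillating factor has modulus one; numerically, $|\widehat{K}(\xi)|(1+|\xi|)^{n\alpha/2}$ stays bounded by $2^{n\alpha/2}$:

```python
import cmath

# Hedged numeric check (ours): the model multiplier
#   K_hat(xi) = e^{i|xi|^a} / |xi|^{n*alpha/2}   (for |xi| >= 1, where psi = 1)
# obeys the decay condition B(alpha): |K_hat(xi)| = O((1+|xi|)^{-n*alpha/2}).
def model_multiplier(xi_norm, n, a, alpha):
    return cmath.exp(1j * xi_norm ** a) / xi_norm ** (n * alpha / 2)

n, a, alpha = 3, 0.5, 0.25  # sample parameters, our choice
ratios = [
    abs(model_multiplier(r, n, a, alpha)) * (1 + r) ** (n * alpha / 2)
    for r in (1.0, 10.0, 100.0, 1e4, 1e6)
]
# Each ratio equals ((1+r)/r)^{n*alpha/2} <= 2^{n*alpha/2}, uniformly in r >= 1.
```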
Here, $\Delta_x=-\sum_{j=1}^n\partial_{x_j}^2$ is the positive Laplacian on $\mathbb{R}^n,$ and for any $s\in \mathbb{R},$ $L^1_{s}(\mathbb{R}^n)$ is the Sobolev space obtained as the closure of $C^{\infty}_0(\mathbb{R}^n)$ with respect to the norm $\Vert f\Vert_{L^1_s}:=\Vert(1+\Delta_x)^{\frac{s}{2}}f\Vert_{L^1}.$ \begin{theorem}[Sj\"olin \cite{Sjolin}, 1976] Assume that $K\in L^1_{loc}(\mathbb{R}^n\setminus\{0\})$ is a distribution with compact support satisfying the hypotheses $\textnormal{A}(\theta)$ and $\textnormal{B}(\alpha)$ with $0<\alpha< \theta<1.$ Then, $T:H^1(\mathbb{R}^n)\rightarrow L^1_{-\varkappa}(\mathbb{R}^n)$ extends to a bounded operator provided that \begin{equation} \varkappa\geq {n(\theta-\alpha)}/{[n(1-\theta)+2]}, \end{equation} or equivalently, $$(1+\Delta_x)^{-\frac{\varkappa}{2}}T:H^{1}(\mathbb{R}^n)\rightarrow L^1(\mathbb{R}^n)$$ admits a bounded extension. \end{theorem} In the recent works \cite{CR20221,CR20223} the authors generalised to graded Lie groups (with Fourier transform criteria formulated in terms of Rockland operators) the theory established by Fefferman and Stein in \cite{Fefferman1970,FeffermanStein1972}. The following extension of Theorem \ref{Fefferman:Stein} has been obtained as part of the investigation done in \cite{CR20221,CR20223}. \begin{theorem}[\cite{CR20221,CR20223}]\label{CR:0CZ:Graded:2022} Consider $G$ to be a graded Lie group, let $|\cdot|$ be a homogeneous quasi-norm on $G$ and let $Q$ be its homogeneous dimension.
Let $\mathcal{R}$ be a Rockland operator of homogeneous degree $\nu>0.$ Assume that the kernel $K$ of the convolution operator $T:f\mapsto f\ast K,$ satisfies the estimate \begin{equation}\label{Fourier:growth:1New} \sup_{\pi\in \widehat{G}}\Vert \widehat{K}(\pi) (1+\pi(\mathcal{R}))^{\frac{Q\theta}{2\nu}} \Vert_{\textnormal{op}}<\infty, \end{equation} and the kernel condition \begin{equation}\label{GS:CZ:cond:22} [K]_{H_{\infty,\theta}}':=\sup_{0<R<1}\sup_{|y|<R} \smallint\limits_{|x|\geq 2R^{1-\theta}}|K(y^{-1}x)-K(x)|dx <\infty. \end{equation} Then $T$ extends to a bounded operator from the Hardy space $H^1(G)$ into $L^1(G)$. Moreover, $T:L^1(G)\rightarrow L^{1,\infty}(G)$ admits an extension of weak (1,1) type. \end{theorem} In this work we are going to extend, in our main Theorem \ref{Sjolin:Th:graded:groups}, the conditions $\textnormal{A}(\theta)$ and $\textnormal{B}(\alpha)$ of \eqref{Sjolin:1} and \eqref{Sjolin:2} due to Bj\"ork and Sj\"olin to arbitrary graded Lie groups. To present the statement of the theorem we introduce some notation. Here, for any graded Lie group $G,$ and $s\in \mathbb{R},$ $L^1_{s}(G)$ is the closure of $C^{\infty}_0(G)$ with respect to the norm $$\Vert f\Vert_{L^1_s(G)}:=\Vert(1+\mathcal{R})^{\frac{s}{\nu}}f\Vert_{L^1},$$ where $\mathcal{R}$ is a positive Rockland operator of homogeneous degree $\nu>0,$ and $L^{1,\infty}_{s}(G)$ is the weak-$L^{1}_{s}(G)$ Sobolev space defined by the semi-norm $$ \Vert f \Vert_{ L^{1,\infty}_{s}(G)}:=\sup_{\lambda>0}\lambda|\{x\in G:|(1+\mathcal{R})^{\frac{s}{\nu}}f(x)|>\lambda\}|. $$ The main result of this work is the following. \begin{theorem}\label{Sjolin:Th:graded:groups} Consider $G$ to be a graded Lie group, let $|\cdot|$ be a homogeneous quasi-norm on $G$ and let $Q$ be its homogeneous dimension.
Let $\mathcal{R}$ be a Rockland operator of homogeneous degree $\nu>0.$ Let $K\in L^1_{\textnormal{loc}}(G\setminus \{e\})$ be a distribution of compact support and let $T:f\mapsto f\ast K$ be the corresponding convolution operator associated with $K.$ Assume that for $0<\alpha< \theta<1,$ $K$ satisfies the Fourier transform estimate \begin{equation}\label{A:alpha} \sup_{\pi\in \widehat{G}}\Vert \widehat{K}(\pi) (1+\pi(\mathcal{R}))^{\frac{Q\alpha}{2\nu}} \Vert_{\textnormal{op}}<\infty, \end{equation} and the kernel condition \begin{equation} [K]_{H_{\infty,\theta,b}}':=\sup_{0<R<b}\sup_{|y|<R} \smallint\limits_{|x|\geq 2R^{1-\theta}}|K(y^{-1}x)-K(x)|dx <\infty, \end{equation} where $0<b< 1.$ Then $T:H^1(G)\rightarrow L^1_{-\varkappa}(G)$ extends to a bounded operator provided that \begin{equation} \varkappa\geq {Q(\theta-\alpha)}/{[Q(1-\theta)+2]}, \end{equation} or equivalently, $$(1+\mathcal{R})^{-\frac{\varkappa}{\nu}}T:H^{1}(G)\rightarrow L^1(G)$$ admits a bounded extension. Moreover, $T:L^1(G)\rightarrow L^{1,\infty}_{-\varkappa}(G)$ extends to a bounded operator, or equivalently, $$(1+\mathcal{R})^{-\frac{\varkappa}{\nu}}T:L^{1}(G)\rightarrow L^{1,\infty}(G)$$ admits an extension of weak $(1,1)$ type. \end{theorem} The remainder of this work is dedicated to the proof of this statement. In Section \ref{preliminaries} we record the aspects of Fourier analysis on graded Lie groups and of the analysis of Rockland operators used in this work, and in Section \ref{proof:section} we prove Theorem \ref{Sjolin:Th:graded:groups}. \section{Fourier analysis on graded groups}\label{preliminaries} The notation and terminology of this paper on the analysis of homogeneous Lie groups are mostly taken from Folland and Stein \cite{FollandStein1982}. For the analysis of Rockland operators we will follow \cite[Chapter 4]{FischerRuzhanskyBook}. \subsection{Homogeneous and graded Lie groups} Let $G$ be a homogeneous Lie group.
This means that $G$ is a connected and simply connected Lie group whose Lie algebra $\mathfrak{g}$ is endowed with a family of dilations $D_{r}^{\mathfrak{g}},$ $r>0,$ which are automorphisms on $\mathfrak{g}$ satisfying the following two conditions: \begin{itemize} \item For every $r>0,$ $D_{r}^{\mathfrak{g}}$ is a map of the form $$ D_{r}^{\mathfrak{g}}=\textnormal{Exp}(\ln(r)A) $$ for some diagonalisable linear operator $A\equiv \textnormal{diag}[\nu_1,\cdots,\nu_n]$ on $\mathfrak{g}.$ \item $\forall X,Y\in \mathfrak{g}, $ and $r>0,$ $[D_{r}^{\mathfrak{g}}X, D_{r}^{\mathfrak{g}}Y]=D_{r}^{\mathfrak{g}}[X,Y].$ \end{itemize} We call the eigenvalues $\nu_1,\nu_2,\cdots,\nu_n$ of $A$ the dilation weights, or simply the weights, of $G$. The homogeneous dimension of a homogeneous Lie group $G$ is given by $$ Q=\textnormal{\textbf{Tr}}(A)=\nu_1+\cdots+\nu_n. $$ The dilations $D_{r}^{\mathfrak{g}}$ of the Lie algebra $\mathfrak{g}$ induce a family of maps on $G$ defined via $$ D_{r}:=\exp_{G}\circ D_{r}^{\mathfrak{g}} \circ \exp_{G}^{-1},\,\, r>0, $$ where $\exp_{G}:\mathfrak{g}\rightarrow G$ is the usual exponential mapping associated to the Lie group $G.$ We refer to the family $D_{r},$ $r>0,$ as dilations on the group. If we write $rx=D_{r}(x),$ $x\in G,$ $r>0,$ then a relation between the homogeneous structure of $G$ and the Haar measure $dx$ on $G$ is given by $$ \smallint\limits_{G}(f\circ D_{r})(x)dx=r^{-Q}\smallint\limits_{G}f(x)dx. $$ A Lie group is graded if its Lie algebra $\mathfrak{g}$ may be decomposed as the sum of subspaces $\mathfrak{g}=\mathfrak{g}_{1}\oplus\mathfrak{g}_{2}\oplus \cdots \oplus \mathfrak{g}_{s}$ such that $[\mathfrak{g}_{i},\mathfrak{g}_{j} ]\subset \mathfrak{g}_{i+j},$ and $ \mathfrak{g}_{i+j}=\{0\}$ if $i+j>s.$ Examples of such groups are the Heisenberg group $\mathbb{H}^n$ and, more generally, any stratified group, where the Lie algebra $ \mathfrak{g}$ is generated by $\mathfrak{g}_{1}$.
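For concreteness, on the Heisenberg group $\mathbb{H}^1$ one may take $A=\textnormal{diag}[1,1,2]$ in exponential coordinates, so that $D_r(x,y,t)=(rx,ry,r^2t)$ and $Q=\textnormal{\textbf{Tr}}(A)=4$. The following is a minimal numerical sketch of ours (the coordinates and weights are the standard ones for $\mathbb{H}^1$, not notation from this paper):

```python
import math

# Dilations on H^1 in exponential coordinates (x, y, t): weights nu = (1, 1, 2),
# i.e. A = diag(1, 1, 2), and D_r = Exp(ln(r) A) acts componentwise as r^{nu_j}.
weights = (1, 1, 2)
Q = sum(weights)  # homogeneous dimension Q = Tr(A) = 4 (> 3, the topological dim)

def dilate(r, point):
    # Exp(ln(r) A) for diagonal A multiplies the j-th coordinate by r^{nu_j}
    return tuple(math.exp(math.log(r) * nu) * p for nu, p in zip(weights, point))

p = dilate(2.0, (1.0, 1.0, 1.0))                   # ~ (2, 2, 4)
jacobian = math.prod(2.0 ** nu for nu in weights)  # det(D_2) = 2^Q = 16
# This Jacobian matches the Haar-measure scaling int f(D_r x) dx = r^{-Q} int f dx.
```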
Here, $n$ is the topological dimension of $G,$ $n=n_{1}+\cdots +n_{s},$ where $n_{k}=\mbox{dim}\mathfrak{g}_{k}.$ A Lie algebra admitting a family of dilations is nilpotent, and hence so is its associated connected, simply connected Lie group. The converse does not hold, i.e., not every nilpotent Lie group is homogeneous, although homogeneous groups exhaust a large class; see \cite{FischerRuzhanskyBook} for details. Indeed, the main class of Lie groups under our consideration is that of graded Lie groups. A graded Lie group $G$ is a homogeneous Lie group equipped with a family of weights $\nu_j,$ all of them positive rational numbers. Let us observe that if $\nu_{i}=\frac{a_i}{b_i}$ with $a_i,b_i$ integers, and $b$ is the least common multiple of the $b_i$'s, the family of dilations $$ \mathbb{D}_{r}^{\mathfrak{g}}=\textnormal{Exp}(\ln(r^b)A):\mathfrak{g}\rightarrow\mathfrak{g}, $$ has integer weights, $b\nu_{i}=\frac{a_i b}{b_i}. $ So, in this paper we always assume that the weights $\nu_j$ defining the family of dilations are positive integers, which allows us to assume that the homogeneous dimension $Q$ is a positive integer. This is a natural context for the study of Rockland operators (see Remark 4.1.4 of \cite{FischerRuzhanskyBook}). \subsection{Fourier analysis on nilpotent Lie groups} Let $G$ be a simply connected nilpotent Lie group. Then the adjoint representation $\textnormal{ad}:\mathfrak{g}\rightarrow\textnormal{End}(\mathfrak{g})$ is nilpotent. Let us assume that $\pi$ is a continuous, unitary and irreducible representation of $G;$ this means that: \begin{itemize} \item $\pi\in \textnormal{Hom}(G, \textnormal{U}(H_{\pi})),$ for some separable Hilbert space $H_\pi,$ i.e. $\pi(xy)=\pi(x)\pi(y)$ and for the adjoint of $\pi(x),$ $\pi(x)^*=\pi(x^{-1}),$ for every $x,y\in G.$ \item The map $(x,v)\mapsto \pi(x)v, $ from $G\times H_\pi$ into $H_\pi$ is continuous.
\item For every closed subspace $W_\pi\subset H_\pi,$ if $\pi(x)W_{\pi}\subset W_{\pi}$ for every $x\in G,$ then $W_\pi=H_\pi$ or $W_\pi=\{0\}.$ \end{itemize} Let $\textnormal{Rep}(G)$ be the set of unitary, continuous and irreducible representations of $G.$ The relation, {\small{ \begin{equation*} \pi_1\sim \pi_2\textnormal{ if and only if, there exists } A\in \mathscr{B}(H_{\pi_1},H_{\pi_2}),\textnormal{ such that }A\pi_{1}(x)A^{-1}=\pi_2(x), \end{equation*}}}for every $x\in G,$ is an equivalence relation, and the unitary dual of $G,$ denoted by $\widehat{G},$ is defined via $ \widehat{G}:={\textnormal{Rep}(G)}/{\sim}. $ Let us denote by $d\pi$ the Plancherel measure on $\widehat{G}.$ The Fourier transform of $f\in \mathscr{S}(G)$ (this means that $f\circ \textnormal{exp}_G\in \mathscr{S}(\mathfrak{g})$, with $\mathfrak{g}\simeq \mathbb{R}^{\dim(G)}$) at $\pi\in\widehat{G},$ is defined by \begin{equation*} \widehat{f}(\pi)=\smallint\limits_{G}f(x)\pi(x)^*dx:H_\pi\rightarrow H_\pi,\textnormal{ and }\mathscr{F}_{G}:\mathscr{S}(G)\rightarrow \mathscr{S}(\widehat{G}):=\mathscr{F}_{G}(\mathscr{S}(G)). \end{equation*} If we identify each representation $\pi$ with its equivalence class, $[\pi]=\{\pi':\pi\sim \pi'\}$, for every $\pi\in \widehat{G}, $ the Kirillov trace character $\Theta_\pi$ defined by $$(\Theta_{\pi},f): =\textnormal{\textbf{Tr}}(\widehat{f}(\pi)),$$ is a tempered distribution on $\mathscr{S}(G).$ In particular, the identity $ f(e_G)=\smallint\limits_{\widehat{G}}(\Theta_{\pi},f)d\pi, $ implies the Fourier inversion formula $f=\mathscr{F}_G^{-1}(\widehat{f}),$ where \begin{equation*} (\mathscr{F}_G^{-1}\sigma)(x):=\smallint\limits_{\widehat{G}}\textnormal{\textbf{Tr}}(\pi(x)\sigma(\pi))d\pi,\,\,x\in G,\,\,\,\,\mathscr{F}_G^{-1}:\mathscr{S}(\widehat{G})\rightarrow\mathscr{S}(G), \end{equation*}is the inverse Fourier transform.
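As a toy illustration of these formulas (our own; it concerns the abelian group $\mathbb{Z}_N$, where every irreducible representation is a one-dimensional character $\pi_k(x)=e^{2\pi i kx/N}$ and the Plancherel measure is the normalised counting measure), the trace identity $f(e_G)=\smallint_{\widehat{G}}(\Theta_\pi,f)d\pi$ reduces to DFT inversion evaluated at the identity:

```python
import cmath

# Toy abelian check (ours): on Z_N, f_hat(k) = sum_x f(x) pi_k(x)^* is the DFT,
# and f(0) = (1/N) sum_k Tr f_hat(k) mirrors f(e_G) = int Tr f_hat(pi) d(pi).
N = 8
f = [float(j * j % 5) for j in range(N)]  # an arbitrary test function

def fhat(k):
    # pi_k(x)^* = e^{-2 pi i k x / N}; the trace is trivial in dimension one
    return sum(f[x] * cmath.exp(-2j * cmath.pi * k * x / N) for x in range(N))

recovered = sum(fhat(k) for k in range(N)) / N  # should equal f[0]
```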
In this context, the Plancherel theorem takes the form $\Vert f\Vert_{L^2(G)}=\Vert \widehat{f}\Vert_{L^2(\widehat{G})}$, where $$L^2(\widehat{G}):=\smallint\limits_{\widehat{G}}H_\pi\otimes H_{\pi}^*d\pi$$ is the Hilbert space endowed with the norm: $\Vert \sigma\Vert_{L^2(\widehat{G})}=(\int_{\widehat{G}}\Vert \sigma(\pi)\Vert_{\textnormal{HS}}^2d\pi)^{\frac{1}{2}}.$ \subsection{Homogeneous linear operators and Rockland operators} A linear operator $T:C^\infty(G)\rightarrow \mathscr{D}'(G)$ is homogeneous of degree $\nu\in \mathbb{C}$ if for every $r>0$ the equality \begin{equation*} T(f\circ D_{r})=r^{\nu}(Tf)\circ D_{r} \end{equation*} holds for every $f\in \mathscr{D}(G). $ If for every representation $\pi\in\widehat{G},$ $\pi:G\rightarrow U({H}_{\pi}),$ we denote by ${H}_{\pi}^{\infty}$ the set of smooth vectors, that is, the space of elements $v\in {H}_{\pi}$ such that the function $x\mapsto \pi(x)v,$ $x\in G,$ is smooth, then a Rockland operator is a left-invariant differential operator $\mathcal{R}$ which is homogeneous of positive degree $\nu=\nu_{\mathcal{R}}$ and such that, for every unitary irreducible non-trivial representation $\pi\in \widehat{G},$ $\pi(\mathcal{R})$ is injective on ${H}_{\pi}^{\infty};$ here $\sigma_{\mathcal{R}}(\pi)=\pi(\mathcal{R})$ is the symbol associated to $\mathcal{R}.$ It coincides with the infinitesimal representation of $\mathcal{R}$ as an element of the universal enveloping algebra. It can be shown that a Lie group $G$ is graded if and only if there exists a differential Rockland operator on $G.$ If the Rockland operator is formally self-adjoint, then $\mathcal{R}$ and $\pi(\mathcal{R})$ admit self-adjoint extensions on $L^{2}(G)$ and ${H}_{\pi},$ respectively.
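The spectral calculus used below can be illustrated in finite dimensions (a sketch of ours, not the paper's setting): for a self-adjoint matrix with spectral projections $P_j$, the map $f\mapsto \sum_j f(\lambda_j)P_j$ plays the role of $f(\mathcal{R})=\smallint f(\lambda)\,dE(\lambda)$:

```python
# Finite-dimensional analogue (ours) of the functional calculus: the symmetric
# matrix A = [[2, 1], [1, 2]] has eigenvalues 1, 3 with rank-one spectral
# projections P1, P2, and f(A) = f(1) P1 + f(3) P2.
P1 = [[0.5, -0.5], [-0.5, 0.5]]   # projection onto span{(1, -1)}
P2 = [[0.5, 0.5], [0.5, 0.5]]     # projection onto span{(1, 1)}

def calculus(f):
    return [[f(1) * P1[i][j] + f(3) * P2[i][j] for j in range(2)] for i in range(2)]

A = calculus(lambda lam: lam)                      # reproduces A itself
bessel = calculus(lambda lam: (1 + lam) ** -1.0)   # (I + A)^{-1}, cf. (1 + R)^{-s/nu}
# Direct check: (I + A)^{-1} = (1/8) [[3, -1], [-1, 3]].
```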
Now, preserving the same notation for their self-adjoint extensions and denoting by $E$ and $E_{\pi}$ their spectral measures, we will denote by $$ f(\mathcal{R})=\smallint\limits_{-\infty}^{\infty}f(\lambda) dE(\lambda),\,\,\,\textnormal{and}\,\,\,\pi(f(\mathcal{R}))\equiv f(\pi(\mathcal{R}))=\smallint\limits_{-\infty}^{\infty}f(\lambda) dE_{\pi}(\lambda), $$ the operators defined by the functional calculus. In general, we will reserve the notation $\{dE_A(\lambda)\}_{0\leq\lambda<\infty}$ for the spectral measure associated with a positive and self-adjoint operator $A$ on a Hilbert space $H.$ We now recall a lemma on dilations on the unitary dual $\widehat{G},$ which will be useful in our analysis of spectral multipliers. For the proof, see Lemma 4.3 of \cite{FischerRuzhanskyBook}. \begin{lemma}\label{dilationsrepre} For every $\pi\in \widehat{G}$ let us define \begin{equation}\label{dilations:repre} \pi^{(r)}(x)= D_{r}(\pi)(x)= (r\cdot \pi)(x)=\pi(r\cdot x)\equiv \pi(D_r(x)), \end{equation} for every $r>0$ and all $x\in G.$ Then, for $f\in L^{\infty}(\mathbb{R})$ we have $f(\pi^{(r)}(\mathcal{R}))=f({r^{\nu}\pi(\mathcal{R})}).$ \end{lemma} \begin{remark}{{For instance, for any $\alpha\in \mathbb{N}_0^n,$ and for an arbitrary family $X_1,\cdots, X_n,$ of left-invariant vector-fields we will use the notation \begin{equation} [\alpha]:=\sum_{j=1}^n\nu_j\alpha_j, \end{equation}for the homogeneity degree of the operator $X^{\alpha}:=X_1^{\alpha_1}\cdots X_{n}^{\alpha_n},$ whose order is $|\alpha|:=\sum_{j=1}^n\alpha_j.$}} \end{remark} \begin{remark}\label{DilationLebesgue} By considering the dilation $r\cdot x=D_{r}(x),$ $x\in G,$ $r>0,$ a relation between the homogeneous structure of $G$ and the Haar measure $dx$ on $G$ is given by (see \cite[Page 100]{FischerRuzhanskyBook}) $$ \smallint\limits_{G}(f\circ D_{r})(x)dx=r^{-Q}\smallint\limits_{G}f(x)dx. $$ Note that if $f_{r}:=r^{-Q}f(r^{-1}\cdot),$ then \begin{equation}\label{Eq:dilatedFourier} \widehat{f}_{r}(\pi)=\smallint\limits_{G}r^{-Q}f(r^{-1}\cdot x)\pi(x)^*dx=\smallint\limits_{G}f(y)\pi(r\cdot y)^{*}dy=\widehat{f}(r\cdot \pi), \end{equation}for any $\pi\in \widehat{G}$ and all $r>0,$ with $(r\cdot \pi)(y)=\pi(r\cdot y),$ $y\in G,$ as in \eqref{dilations:repre}. \end{remark} \section{Proof of the main theorem}\label{proof:section} We will start our analysis for the proof of Theorem \ref{Sjolin:Th:graded:groups} by analysing the operator $ \mathfrak{G}_{a}:=\left(\frac{\mathcal{R}}{1+\mathcal{R}}\right)^{\frac{a}{\nu}} , $ for any $a>0,$ as a convolution operator associated with a finite measure. This analysis will be addressed in Lemma \ref{Lemma:quotient:Rockland} below, where we extend an observation made e.g. in Stein \cite[Page 133]{Stein1970} for the Laplace operator to general Rockland operators. Throughout this work we will denote by $\mathcal{B}_{s}$ the right convolution kernel of the operator $(1+\mathcal{R})^{-\frac{s}{\nu}},$ for any $s\in \mathbb{R}.$ \subsection{The quotient between the Riesz and the Bessel potential} There is an intimate connection between the Bessel and the Riesz potentials of Rockland operators. This connection is made precise in the following lemma. \begin{lemma}\label{Lemma:quotient:Rockland} Let $\alpha>0,$ and let $\mathcal{R}$ be a Rockland operator on $G$ of homogeneity degree $\nu>0.$ There exists a finite measure $\mu_\alpha$ on $G$ such that its Fourier transform is given by \begin{equation}\label{mu:alpha} \widehat{\mu}_\alpha(\pi)=\left(\frac{\pi(\mathcal{R})}{1+\pi(\mathcal{R})}\right)^{\frac{\alpha}{\nu}},\,\pi\in\widehat{G}.
\end{equation} \end{lemma} \begin{proof}For the proof of Lemma \ref{Lemma:quotient:Rockland} let us use the expansion \begin{equation} (1-t)^{\alpha/\nu}=1+\sum_{m=1}^{\infty}A_{m,\alpha/\nu}t^m,\,\,|t|<1,\,\sum_{m=1}^\infty|A_{m,\alpha/\nu}|<\infty, \end{equation}which remains valid as $t\rightarrow 1^{-}$: the coefficients $A_{m,\alpha/\nu}$ have constant sign for $m>\alpha/\nu,$ so the convergence of the series at $t=1$ (where $(1-t)^{\alpha/\nu}$ remains bounded for $\alpha>0$) gives $\sum_{m=1}^\infty|A_{m,\alpha/\nu}|<\infty$. Let $dE_{\pi(\mathcal{R})}$ be the spectral measure of the operator $\pi(\mathcal{R}).$ With $t=\frac{1}{1+\lambda},$ $\lambda\geq 0,$ we have that \begin{align*} \left(\frac{\lambda}{1+\lambda}\right)^{\frac{\alpha}{\nu}}= \left(1-\frac{1}{1+\lambda}\right)^{\frac{\alpha}{\nu}}=1+\sum_{m=1}^{\infty}A_{m,\alpha/\nu}(1+\lambda)^{-m}, \end{align*}and then the functional calculus of the operator $\pi(\mathcal{R})$ implies that \begin{align*} \left(\frac{\pi(\mathcal{R})}{1+\pi(\mathcal{R})}\right)^{\frac{\alpha}{\nu}} &=\smallint_{0}^{\infty} \left(\frac{\lambda}{1+\lambda}\right)^{\frac{\alpha}{\nu}}dE_{\pi(\mathcal{R})}(\lambda)=1+\sum_{m=1}^{\infty}\smallint_{0}^{\infty}A_{m,\alpha/\nu}(1+\lambda)^{-m}dE_{\pi(\mathcal{R})}(\lambda)\\ &=1+\sum_{m=1}^{\infty}A_{m,\alpha/\nu}(1+\pi(\mathcal{R}))^{-m}. \end{align*}Consequently, the required measure $\mu_\alpha$ is given by \begin{equation} \mu_\alpha=\delta+\sum_{m=1}^{\infty}A_{m,\alpha/\nu}\mathcal{B}_{m\nu}(x)dx. \end{equation}Indeed, $\mu_\alpha$ satisfies $\widehat{\mu}_\alpha(\pi)= \left({\pi(\mathcal{R})}/[{1+\pi(\mathcal{R})]}\right)^{\frac{\alpha}{\nu}},$ $\pi\in\widehat{G}.$ The proof of Lemma \ref{Lemma:quotient:Rockland} is complete.
\end{proof} \begin{corollary} Let $\alpha>0,$ and let $\mathcal{R}$ be a Rockland operator on $G$ of homogeneity degree $\nu>0.$ Then the operator \begin{equation} \left(\frac{\mathcal{R}}{1+\mathcal{R}}\right)^{\frac{\alpha}{\nu}}:L^p(G)\rightarrow L^p(G), \end{equation}extends to a bounded operator for all $1\leq p\leq \infty.$ \end{corollary} \begin{proof} The action of the operator $\left(\frac{\mathcal{R}}{1+\mathcal{R}}\right)^{\frac{\alpha}{\nu}}$ on functions in $L^1(G)$ is given by right convolution with the finite measure $\mu_\alpha$ in \eqref{mu:alpha}. So, $\left(\frac{\mathcal{R}}{1+\mathcal{R}}\right)^{\frac{\alpha}{\nu}}$ is bounded from $L^1(G)$ into $L^1(G).$ By duality, $\left(\frac{\mathcal{R}}{1+\mathcal{R}}\right)^{\frac{\alpha}{\nu}}$ is bounded from $L^\infty(G)$ into $L^\infty(G).$ Because $\left(\frac{\mathcal{R}}{1+\mathcal{R}}\right)^{\frac{\alpha}{\nu}}$ is bounded on $L^2(G),$ the Marcinkiewicz interpolation theorem implies the boundedness of $ \left(\frac{\mathcal{R}}{1+\mathcal{R}}\right)^{\frac{\alpha}{\nu}}:L^p(G)\rightarrow L^p(G),$ for all $1\leq p\leq \infty.$ \end{proof} \subsection{Boundedness of strongly singular integral operators} We are now going to prove our main Theorem \ref{Sjolin:Th:graded:groups}. For this, let us fix the notation. \begin{itemize} \item[-] Consider $G$ to be a graded Lie group, let $|\cdot|$ be a homogeneous quasi-norm on $G$ and let $Q$ be its homogeneous dimension.
\item[-] Let $\mathcal{R}$ be a Rockland operator of homogeneous degree $\nu>0.$ Let $K\in L^1_{\textnormal{loc}}(G\setminus \{e\})$ and let $T:f\mapsto f\ast K$ be the corresponding convolution operator associated with $K.$ \item[-] Assume that for $0<\alpha<\theta<1,$ $K$ satisfies the Fourier transform estimate \begin{equation}\label{A:alpha:2:2} \sup_{\pi\in \widehat{G}}\Vert \widehat{K}(\pi) (1+\pi(\mathcal{R}))^{\frac{Q\alpha}{2\nu}} \Vert_{\textnormal{op}}<\infty, \end{equation} and the kernel condition \begin{equation} [K]_{H_{\infty,\theta,b}}':=\sup_{0<R<b}\sup_{|y|<R} \smallint\limits_{|x|\geq 2R^{1-\theta}}|K(y^{-1}x)-K(x)|dx <\infty ,\,0<b<1. \end{equation} \end{itemize} We are going to prove that $T:H^1(G)\rightarrow L^1_{-\varkappa}(G)$ extends to a bounded operator provided that \begin{equation} \varkappa\geq {Q(\theta-\alpha)}/{[Q(1-\theta)+2]}, \end{equation} or equivalently, that $$(1+\mathcal{R})^{-\frac{\varkappa}{\nu}}T:H^{1}(G)\rightarrow L^1(G)$$ admits a bounded extension. In the same way, we have to prove that $$(1+\mathcal{R})^{-\frac{\varkappa}{\nu}}T:L^{1}(G)\rightarrow L^{1,\infty}(G)$$ admits a bounded extension, which proves that $T:L^1(G)\rightarrow L^{1,\infty}_{-\varkappa}(G)$ extends to a bounded operator. \begin{proof}[Proof of Theorem \ref{Sjolin:Th:graded:groups}] It suffices to consider the critical case $$ \varkappa:=\frac{Q(\theta-\alpha)}{Q(1-\theta)+2}.
$$ Indeed, having proved the boundedness of $T:H^1(G)\rightarrow L^1_{-\varkappa}(G)$ and of $T:L^1(G)\rightarrow L^{1,\infty}_{-\varkappa}(G),$ for any $\varkappa'>\varkappa=\frac{Q(\theta-\alpha)}{Q(1-\theta)+2},$ we have the continuous inclusions $$ L^{1}_{-\varkappa}(G)\hookrightarrow L^{1}_{-\varkappa'}(G),\,L^{1,\infty}_{-\varkappa}(G)\hookrightarrow L^{1,\infty}_{-\varkappa'}(G)$$ implying also the existence of the bounded extensions $T:H^1(G)\rightarrow L^1_{-\varkappa'}(G)$ and $T:L^1(G)\rightarrow L^{1,\infty}_{-\varkappa'}(G).$ Let us choose $\phi\in C^{\infty}_0(G)$ so that \begin{equation}\label{the:auxiliar:phi} \phi\geq 0,\quad \textnormal{ with } \textnormal{supp}[\phi]\subset\{x\in G: 1/2<|x|<2\},\,\,\phi(x)=1 \textnormal{ if } \frac{3}{4}<|x|<1, \end{equation} and such that $$\forall x\in G, \,0<|x|<1,\,\,\sum_{k=0}^{\infty}\phi(2^k\cdot x)=1.$$ Also, for $x\in G,$ define $\phi_k(x):=\phi(2^k\cdot x)$ and $$\psi(x)=\sum_{k=0}^\infty\phi_{k}(x).$$ The kernel of the operator $(1+\mathcal{R})^{-\frac{\varkappa}{\nu}}T$ is given by $K\ast \mathcal{B}_{\varkappa}.$ Let us use the decomposition $$ K\ast \mathcal{B}_{\varkappa}=K\ast(\mathcal{B}_{\varkappa}\psi)+K\ast(\mathcal{B}_{\varkappa}(1-\psi))=K_1+K_2,\,K_1:=K\ast(\mathcal{B}_{\varkappa}\psi). $$ Let us prove that $K_2=K\ast (\mathcal{B}_{\varkappa}(1-\psi))$ is a smooth function in $L^1(G).$ Indeed, from the properties of $\phi,$ we have that $\psi(x)=1$ for all $x\in G$ with $0<|x|<1,$ and then $1-\psi(x)\equiv 0$ when $0<|x|<1.$ On the other hand, the function $\mathcal{B}_{\varkappa}$ decreases rapidly for $|x|\geq 1.$ Indeed, for any $N\in \mathbb{N},$ there is $C_N>0,$ such that (see \cite[Theorem 5.4.1]{FischerRuzhanskyBook}) \begin{align*} |\mathcal{B}_{\varkappa}(x)|\leq C_N|x|^{-N},\,|x|>1.
\end{align*} Since $x\in \textnormal{supp}(\phi_k)$ implies that $|2^k\cdot x|\in (1/2,2),$ we have \begin{equation} \forall k\geq 0,\, \textnormal{supp}(\phi_k)\subset \{x\in G:2^{-k-1}<|x|<2^{-k+1}\} . \end{equation}So, for any $x\in G$ with $|x|>2,$ $\psi(x)=\sum_{k=0}^{\infty}\phi(2^k\cdot x)=0.$ In conclusion, the function $(1-\psi)\mathcal{B}_\varkappa$ has its support in the complement of the set $\{x\in G:|x|<1\}$ and $(1-\psi)\mathcal{B}_\varkappa \in L^1(G)\cap C^{\infty}(\{x\in G:|x|>1\}).$ So, the left convolution operator $T_{K_2}$ associated to $K_2$ is bounded from $L^1(G)$ into $L^1(G).$ The embedding $H^{1}(G)\hookrightarrow L^{1}(G)$ implies that $T_{K_2}:H^{1}(G)\rightarrow L^1(G)$ is bounded. Note also that the boundedness of $T_{K_2}$ from $L^{1}(G)$ into $L^1(G)$ implies its boundedness from $L^{1}(G)$ into $L^{1,\infty}(G)$ in view of the inclusion $L^{1}(G)\hookrightarrow L^{1,\infty}(G).$ Now, to continue with the proof it suffices to demonstrate the boundedness of $T_{K_1}=T-T_{K_2}$ from $H^{1}(G)$ into $ L^1(G),$ and from $L^{1}(G)$ into $L^{1,\infty}(G).$ For this, we will prove that $K_1=K\ast (\mathcal{B}_\varkappa\psi)$ satisfies the conditions \begin{equation}\label{Fourier:growth:1New:1} \sup_{\pi\in \widehat{G}}\Vert \widehat{K}_1(\pi) (1+\pi(\mathcal{R}))^{\frac{Q a}{2\nu}} \Vert_{\textnormal{op}}<\infty, \end{equation} and the kernel condition \begin{equation}\label{GS:CZ:cond:22:2} [K_1]_{H_{\infty,a}}':=\sup_{0<R<1}\sup_{|y|<R} \smallint\limits_{|x|\geq 2R^{1-a}}|K_1(y^{-1}x)-K_1(x)|dx <\infty, \end{equation} with \begin{equation} a:=\frac{Q\alpha(1-\theta)+2\theta}{Q(1-\theta)+2}\in (0,1).
\end{equation} Note also that the hypothesis $\alpha<\theta$ implies that $Q\alpha(1-\theta)+2\theta<Q\theta(1-\theta)+2\theta=\theta[Q(1-\theta)+2],$ which (by dividing both sides of this inequality by $Q(1-\theta)+2$) implies the estimate \begin{align}\label{a:less:than:theta} 0<a=\frac{Q\alpha(1-\theta)+2\theta}{Q(1-\theta)+2}<\theta, \end{align} allowing the use of Theorem \ref{CR:0CZ:Graded:2022}. For the proof of \eqref{Fourier:growth:1New:1} let us use that $\psi$ vanishes for $|x|>2.$ So, $\psi\mathcal{B}_\varkappa$ has compact support in $G,$ and for all $r\in \mathbb{R},$ $$ (1+\mathcal{R})^{r/\nu}[\psi\mathcal{B}_\varkappa]\in L^1(G). $$ Indeed, for any $r\in \mathbb{R},$ $(1+\mathcal{R})^{r/\nu}$ maps the Schwartz space $\mathscr{S}(G)$ into itself. So, for all $r\in \mathbb{R},$ $(1+\mathcal{R})^{r/\nu}[\psi\mathcal{B}_\varkappa]$ is the right-convolution kernel of a bounded operator on $L^2(G).$ Indeed, the Young convolution inequality gives \begin{equation} \forall f\in L^2(G),\,\,\Vert f\ast (1+\mathcal{R})^{r/\nu}[\psi\mathcal{B}_\varkappa] \Vert_{L^2(G)}\leq \Vert f\Vert_{L^2(G)}\Vert (1+\mathcal{R})^{r/\nu}[\psi\mathcal{B}_\varkappa] \Vert_{L^1(G)}. \end{equation}Consequently, the Plancherel theorem indicates that for any $r\in \mathbb{R},$ $$ \sup_{\pi\in \widehat{G}}\Vert \mathscr{F}[(1+\mathcal{R})^{r/\nu}[\psi\mathcal{B}_\varkappa]](\pi)\Vert_{\textnormal{op}}=\sup_{\pi\in \widehat{G}}\Vert (1+\pi(\mathcal{R}))^{r/\nu}\widehat{\psi\mathcal{B}}_\varkappa(\pi)\Vert_{\textnormal{op}}<\infty. $$ In a similar way, we have that $$\forall r\in \mathbb{R},\, \sup_{\pi\in \widehat{G}}\Vert\widehat{\psi\mathcal{B}}_\varkappa(\pi) (1+\pi(\mathcal{R}))^{r/\nu}\Vert_{\textnormal{op}}<\infty.
$$ In consequence, for any $s>0,$ \begin{align*} &\sup_{\pi\in \widehat{G}}\Vert \widehat{K}_1(\pi) (1+\pi(\mathcal{R}))^{\frac{Q a}{2\nu}} \Vert_{\textnormal{op}} \\ &= \sup_{\pi\in \widehat{G}}\Vert \widehat{\mathcal{B}_\varkappa\psi}(\pi)\widehat{K}(\pi) (1+\pi(\mathcal{R}))^{\frac{Q a}{2\nu}}\Vert_{\textnormal{op}}\\ &= \sup_{\pi\in \widehat{G}}\Vert \widehat{\mathcal{B}_\varkappa\psi}(\pi)(1+\pi(\mathcal{R}))^{s/\nu}(1+\pi(\mathcal{R}))^{-s/\nu}\widehat{K}(\pi) (1+\pi(\mathcal{R}))^{\frac{Q a}{2\nu}}\Vert_{\textnormal{op}}\\ &\leq \sup_{\pi\in \widehat{G}}\Vert \widehat{\mathcal{B}_\varkappa\psi}(\pi)(1+\pi(\mathcal{R}))^{s/\nu}\Vert_{\textnormal{op}} \Vert(1+\pi(\mathcal{R}))^{-s/\nu}\widehat{K}(\pi) (1+\pi(\mathcal{R}))^{\frac{Q a}{2\nu}}\Vert_{\textnormal{op}}\\ &\lesssim_{s} \sup_{\pi\in \widehat{G}}\Vert(1+\pi(\mathcal{R}))^{-s/\nu}\Vert_{\textnormal{op}}\Vert\widehat{K}(\pi) (1+\pi(\mathcal{R}))^{\frac{Q a}{2\nu}}\Vert_{\textnormal{op}}<\infty, \end{align*}which demonstrates \eqref{Fourier:growth:1New:1}. Now, we are going to prove \eqref{GS:CZ:cond:22:2}. Define $$ G_{\varkappa,k}:=\phi_k \mathcal{B}_\varkappa. $$ Then, $$K\ast (\psi \mathcal{B}_\varkappa)=\sum_{k=0}^{\infty}K\ast G_{\varkappa,k}.$$ To prove \eqref{GS:CZ:cond:22:2}, take $y\in G$ such that $|y|<\min\{b,\frac{1}{2}\}.$ For any $k\in \mathbb{N}_0,$ let \begin{equation} I_k:=\smallint\limits_{|x|>2|y|^{1-a}}|K\ast G_{\varkappa,k}(y^{-1}x)-K\ast G_{\varkappa,k}(x)|dx.
\end{equation} Now, let us analyse the last integral above when $t\in \textnormal{supp}(G_{\varkappa,k}).$ In that case $|2^{k}\cdot t|=2^{k}|t|\in (1/2,2),$ that is, $2^{-k-1}<|t|<2^{-k+1}.$ Note that the change of variables $z=xt^{-1}$ implies the inequalities \begin{align*} I_k&=\smallint\limits_{|x|>2|y|^{1-a}}|K\ast G_{\varkappa,k}(y^{-1}x)-K\ast G_{\varkappa,k}(x)|dx\\ &=\smallint\limits_{|x|>2|y|^{1-a}}|\smallint\limits_{G} (K(y^{-1}xt^{-1}) G_{\varkappa,k}(t)-K(xt^{-1}) G_{\varkappa,k}(t))dt|dx\\ &\leq \smallint\limits_{G} |G_{\varkappa,k}(t)| \smallint\limits_{|x|>2|y|^{1-a}} |K(y^{-1}xt^{-1}) -K(xt^{-1})|dxdt\\ &\leq \smallint\limits_{G} |G_{\varkappa,k}(t)| \smallint\limits_{|zt|>2|y|^{1-a}} |K(y^{-1}z)-K(z)|dzdt. \end{align*} So, we have proved the estimate \begin{equation}\label{case1:deep:analysis} I_k\leq \smallint\limits_{G}| G_{\varkappa,k}(t)| \smallint\limits_{|zt|>2|y|^{1-a}} |K(y^{-1}z)-K(z)|dz\,dt, \end{equation} where $|y|\leq b < 1.$ To continue, let us estimate the integral $$\|G_{\varkappa,k}\|_{L^1(G)}= \smallint\limits_{G} |G_{\varkappa,k}(t)|dt. $$ First, observe that $\mathcal{B}_{\varkappa}$ is the right-convolution kernel of the pseudo-differential operator $(1+\mathcal{R})^{-\frac{\varkappa}{\nu}}\in \Psi^{-\varkappa}_{1,0}(G\times \widehat{G}).$ Note that $0<\varkappa<Q,$ which can be proved by observing that $$Q(\theta-\alpha)<2Q<Q^{2}(1-\theta)+2Q=Q(Q(1-\theta)+2),$$ implying that $\varkappa=Q(\theta-\alpha)/[Q(1-\theta)+2]<Q.$ So, $\mathcal{B}_{\varkappa}$ satisfies the estimate (see \cite[Theorem 5.4.1]{FischerRuzhanskyBook}) $$ |\mathcal{B}_{\varkappa}(t)|\leq C_{\varkappa}|t|^{-(Q-\varkappa)},\,|t|\lesssim 1.
$$ In consequence, the change of variables $u=2^{k}\cdot t$ transforms the Haar measure according to $du=2^{kQ}dt,$ that is, $dt=2^{-kQ}du$, implying the following estimates $$ \smallint\limits_{G}| G_{\varkappa,k}(t)|dt=\smallint\limits_{G}| \mathcal{B}_{\varkappa}(t)\phi(2^{k}\cdot t)|dt\lesssim \smallint\limits_{|2^{k}\cdot t|<2}| \mathcal{B}_{\varkappa}(t)\phi(2^{k}\cdot t)|dt\lesssim \smallint\limits_{|2^{k}\cdot t|<2} |t|^{-(Q-\varkappa)}\phi(2^{k}\cdot t)\,dt$$ $$=\smallint\limits_{|u|<2} |2^{-k}\cdot u|^{-(Q-\varkappa)}\phi(u)2^{-kQ}\,du=2^{kQ-k\varkappa-kQ}\smallint\limits_{|u|<2}|u|^{-(Q-\varkappa)}\phi(u)du$$ $$\lesssim_{\phi}2^{-k\varkappa}.$$ The analysis above shows the validity of the inequality \begin{align}\label{G:B:K} \smallint\limits_{G}| G_{\varkappa,k}(t)|dt\leq C_{\phi} 2^{-k\varkappa}, \end{align} for some $C_{\phi}>0.$ In particular, as $0<1-a<1,$ we have that $|y|\leq |y|^{1-a}.$ Now, we will analyse \eqref{case1:deep:analysis} in three cases. Indeed, for any $k,$ we will analyse the situation where $r=2^{-k}$ lies in the interval $[0,|y|/2),$ in the interval $[|y|/2, |y|^{\frac{1-\theta}{1-\alpha}}),$ or, finally, in the set $[|y|^{\frac{1-\theta}{1-\alpha}},\infty).$ See Figure \ref{Fig2} below. \begin{figure}[h] \includegraphics[width=6cm]{Sectors.PNG}\\ \caption{} \label{Fig2} \centering \end{figure} \begin{itemize} \item[Case 1:] $2^{-k}<|y|/2.$ In consequence, for the integral in \eqref{case1:deep:analysis}, the inequality $|zt|>2|y|^{1-a}$ implies that $|z|+|t|>2|y|^{1-a},$ and then $$ |z|>2|y|^{1-a} -|t|>2|y|^{1-a}-2^{-k+1}. $$ The inequality $|y|^{1-a}-|y|\geq 0$ and the fact that $2^{-k+1}<|y|$ imply that $$2|y|^{1-a}-2^{-k+1}>|y|^{1-a}+(|y|^{1-a}-|y|)\geq |y|^{1-a}, $$ and in this case $|z|>|y|^{1-a}.$ We have proved that \begin{equation} \{z\in G:\forall t\in \textnormal{supp}(G_{\varkappa,k}),|zt|>2|y|^{1-a}\,\}\subset \{z\in G:|z|>|y|^{1-a}\,\}.
\end{equation}So, we can estimate \begin{align*} I_k&\leq \smallint\limits_{G} |G_{\varkappa,k}(t)| \smallint\limits_{|zt|>2|y|^{1-a}} |K(y^{-1}z)-K(z)|dzdt\\ &\leq \smallint\limits_{G} |G_{\varkappa,k}(t)|dt \smallint\limits_{|z|>|y|^{1-a}} |K(y^{-1}z)-K(z)|dz\\ &\lesssim_{\phi} 2^{-k\varkappa} \smallint\limits_{|z|>|y|^{1-a}} |K(y^{-1}z)-K(z)|dz. \end{align*} Let us consider a sequence of points $y_i,$ $0\leq i\leq m,$ $0<1/m<b,$ such that \begin{equation*} y_0=e,\cdots, y_{m}=y, \,d(y_i,y_{i+1})<1/m, \,0\leq i\leq m-1. \end{equation*} \begin{itemize} \item {\bf{The topological algorithm for the choice of the $y_i$'s.}} For constructing this family of points, we consider the curve \begin{equation} y:[0,m]\rightarrow G,\,y(t)=\frac{t}{m}\cdot y, \end{equation} and the $y_i$'s will belong to its graph. Note that $y(0)=e,$ $y(m)= y,$ and that the derivative $y'(t)$ of the function $y(t)$ is the constant function $$ y'(t)=\frac{1}{m}\cdot y. $$ We illustrate the choice of the points $y_i$ in Figure \ref{Fig1} below. \begin{figure}[h] \includegraphics[width=8cm]{Covering.PNG}\\ \caption{} \label{Fig1} \centering \end{figure} The topological algorithm to choose the points $y_i$ is as follows. Observe that the length $\ell$ of the curve is $\leq 1.$ Indeed, $$\ell:=\smallint\limits_{0}^{m}|y'(t)|dt=\smallint\limits_{0}^{m}|1/m\cdot y|dt\leq m(1/m)b\leq 1.$$ Note that we can cover the graph of $y(t)$ with $N_0$ balls $B_i=B(y_i,r_i)$ of radius $r_i=1/m,$ such that $y_0=e,$ $y_{i-1}\in B_{i} $ for $i\geq 2,$ $y_{m}=y,$ and $N_0\sim 2m.$ To guarantee that $d(y_i,y_{i+1})<1/m$ we can take $$ z_{i+1}\in \partial B_{i}\cap \{y(t):0\leq t\leq m\} $$ and choose $y_{i+1}\in B_i$ such that $d(y_{i+1},z_{i+1})<\frac{1}{2^m}.$ This inductive process ends when one of the balls $B_{i}$ contains the point $y$ in its interior and the distance between $y$ and the center of the ball is less than $1/m$.
\end{itemize} Having fixed the sequence $y_i,$ let us now choose a suitable $m.$ Indeed, consider $m\geq 2$ as the least positive integer such that $$ \frac{2}{m^{1-\theta}}<|y|^{1-a}-|y|<\frac{2}{(m-1)^{1-\theta}}. $$ Then we have that $$ |y|^{1-a}-|y|\sim \frac{2}{m^{1-\theta}}= 2\times \left(\frac{1}{m}\right)^{1-\theta}\sim 2d(y_{i},y_{i+1})^{1-\theta}=2|y_{i}^{-1}y_{i+1}|^{1-\theta}, $$ for all $0\leq i\leq m-1.$ The previous analysis and the change of variables $x=y_{i-1}^{-1}z$ imply that $$ I_k\leq \smallint\limits_{G} |G_{\varkappa,k}(t)|dt \smallint\limits_{|z|>|y|^{1-a}} |K(y^{-1}z)-K(z)|dz $$ $$\lesssim_{\phi}2^{-k\varkappa}\sum_{i=1}^{m}\smallint\limits_{|z|>|y|^{1-a}} |K\left(y_{i}^{-1}z\right)-K\left(y_{i-1}^{-1}z\right)|dz= 2^{-k\varkappa}\sum_{i=1}^m \smallint\limits_{|y_{i-1}\cdot x|>|y|^{1-a}}|K(y_{i}^{-1}y_{i-1}x)-K(x)|dx $$ \begin{align*} &\lesssim 2^{-k\varkappa}\sum_{i=1}^m \smallint\limits_{|x|>2|y_{i-1}^{-1}y_{i}|^{1-\theta}}|K(y_{i}^{-1}y_{i-1}x)-K(x)|dx \\ &=2^{-k\varkappa}\sum_{i=1}^m \smallint\limits_{|x|>2|y_{i-1}^{-1}y_{i}|^{1-\theta}}|K((y_{i-1}^{-1}y_{i})^{-1}x)-K(x)|dx \\ & \lesssim 2^{-k\varkappa}\sum_{i=1}^m[K]_{H_{\infty,\theta,b}}'=2^{-k\varkappa}m[K]_{H_{\infty,\theta,b}}'. \end{align*} Indeed, in the previous inequality we have used the estimate $$ \smallint\limits_{|y_{i-1}\cdot x|>|y|^{1-a}}|K(y_{i}^{-1}y_{i-1}x)-K(x)|dx\lesssim \smallint\limits_{|x|>2|y_{i-1}^{-1}y_{i}|^{1-\theta} }|K(y_{i}^{-1}y_{i-1}x)-K(x)|dx. $$ To see this, note that, estimating $|y_{i-1}|\sim |y|(i-1)/m<|y|,$ the inequality $|y_{i-1}x|\geq |y|^{1-a} $ implies that $$ |x|>|y|^{1-a}-|y_{i-1}|\succeq |y|^{1-a}-|y|\succeq 2|y_{i-1}^{-1}y_{i}|^{1-\theta}. $$ The choice of $m$ implies that $d(y_i,y_{i+1})\sim \frac{|y|}{m},$ and then $$ d(y_i,y_{i+1})^{1-\theta}\sim\left(\frac{|y|}{m}\right)^{1-\theta}\sim |y|^{1-a}-|y|.
$$ Then $1/m\sim (|y|^{1-a})^{\frac{1}{1-\theta}}/|y|,$ and we can then estimate $m\sim |y|^{1-\frac{1-a}{1-\theta}}.$ So, to finish our analysis in Case 1, note that $|y|^{-1}\lesssim 2^{k},$ which implies that \begin{align} \sum_{k:2^{-k}<|y|/2}I_{k}\lesssim \sum_{k:2^{-k}<|y|/2} 2^{-k\varkappa}m \lesssim |y|^{1-\frac{1-a}{1-\theta}}\sum_{k:2^{-k}<|y|/2}2^{-k\varkappa}\sim |y|^{1-\frac{1-a}{1-\theta}}|y|^{\varkappa}. \end{align} Since $\varkappa+1-\frac{1-a}{1-\theta}=0,$ we have that $ \sum_{k:2^{-k}<|y|/2}I_{k}\lesssim 1.$ \item[Case 2:] $|y|/2\leq 2^{-k}<|y|^{\frac{1-\theta}{1-\alpha}}.$ Define $$\delta_k:=5|y|^{2(1-\theta)/\lambda}2^{-kQ(1-\alpha)(1-\theta)/\lambda},$$ where $$ \lambda:=Q(1-\theta)+2. $$ Then we have the upper and lower bounds $$ 5\cdot {2^{-k}}<\delta_k<5|y|^{1-\theta}.$$ Split $I_k$ as follows, \begin{align*} I_k:=\smallint\limits_{|x|>2|y|^{1-a}}|K\ast G_{\varkappa,k}(y^{-1}x)-K\ast G_{\varkappa,k}(x)|dx=J_{1,k}+J_{2,k}, \end{align*} where \begin{equation} J_{1,k}=\smallint\limits_{\{|x|>2|y|^{1-a} \}\cap \{x:|x|\leq \delta_k\} }|K\ast G_{\varkappa,k}(y^{-1}x)-K\ast G_{\varkappa,k}(x)|dx \end{equation}and \begin{equation} J_{2,k}=\smallint\limits_{\{|x|>2|y|^{1-a} \}\cap \{x:|x|> \delta_k\} }|K\ast G_{\varkappa,k}(y^{-1}x)-K\ast G_{\varkappa,k}(x)|dx.
\end{equation}Now, let us estimate $J_{2,k}.$ Indeed, the change of variables $z=xt^{-1},$ for $t\in \textnormal{supp}(G_{\varkappa,k}),$ implies \begin{align*} J_{2,k}&=\smallint\limits_{ \{|x|>2|y|^{1-a} \}\cap \{x:|x|> \delta_k\} }|K\ast G_{\varkappa,k}(y^{-1}x)-K\ast G_{\varkappa,k}(x)|dx\\ &=\smallint\limits_{ \{|x|>2|y|^{1-a} \}\cap \{x:|x|> \delta_k\} }|\smallint\limits_{G} (K(y^{-1}xt^{-1}) G_{\varkappa,k}(t)-K(xt^{-1}) G_{\varkappa,k}(t))dt|dx\\ &\leq \smallint\limits_{G} |G_{\varkappa,k}(t)| \smallint\limits_{ \{|x|>2|y|^{1-a} \}\cap \{x:|x|> \delta_k\} } |K(y^{-1}xt^{-1}) -K(xt^{-1})|dxdt\\ &\leq \smallint\limits_{G} |G_{\varkappa,k}(t)| \smallint\limits_{\{|zt|>2|y|^{1-a} \}\cap \{z:|zt|> \delta_k\}} |K(y^{-1}z)-K(z)|dzdt\\ &\leq \smallint\limits_{G} |G_{\varkappa,k}(t)| \smallint\limits_{ \{z:|zt|> \delta_k\}} |K(y^{-1}z)-K(z)|dz dt. \end{align*} Note that when $|zt|>\delta_{k},$ we have $|t|+|z|\geq|zt|>\delta_{k}$ and, with $t\in \textnormal{supp}(G_{\varkappa,k}),$ $|t|<2^{-k+1},$ from which one deduces the inclusion of sets $$ \{z:|zt|> \delta_k \}\subset\{z: |z|>\delta_k-2^{-k+1}\}, $$ and the estimate $$ \smallint\limits_{ \{z:|zt|> \delta_k\}} |K(y^{-1}z)-K(z)|dz\leq \smallint\limits_{ \{z: |z|>\delta_k-2^{-k+1}\} } |K(y^{-1}z)-K(z)|dz. $$ So, the previous analysis together with \eqref{G:B:K} gives \begin{equation} J_{2,k}\lesssim 2^{-k\varkappa} \smallint\limits_{ \{z:|z|> \delta_k-2^{-k+1}\}} |K(y^{-1}z)-K(z)|dz. \end{equation} To continue, let us make use of the argument illustrated in Figure \ref{Fig1}.
Using this construction, we consider a sequence of points $y_i,$ $0\leq i\leq m,$ $0<1/m<b,$ such that \begin{equation*} y_0=e,\cdots, y_{m}=y, \,d(y_i,y_{i+1})\sim |y|/m, \,0\leq i\leq m-1, \end{equation*}on the curve $y(t)=\frac{t}{m}\cdot y,$ $t\in [0,m],$ and we consider again the topological construction done in Case 1 in order to obtain the required family of points $y_i.$ From now on, assume that $m$ is the least integer such that $$ 2d(y_{i},y_{i+1})^{1-\theta}\sim 2(|y|/m)^{1-\theta}<\delta_k-2^{-k+2}. $$ The change of variables $x=y_{i-1}^{-1}z$ in each term of the sums below implies that $$ J_{2,k}\lesssim 2^{-k\varkappa} \smallint\limits_{ \{z:|z|> \delta_k-2^{-k+1}\}} |K(y^{-1}z)-K(z)|dz $$ $$\lesssim_{\phi}2^{-k\varkappa}\sum_{i=1}^{m}\smallint\limits_{ \{z:|z|> \delta_k-2^{-k+1}\} } |K\left(y_{i}^{-1}z\right)-K\left(y_{i-1}^{-1}z\right)|dz$$ $$= 2^{-k\varkappa}\sum_{i=1}^m \smallint\limits_{ \{x:|y_{i-1}\cdot x|> \delta_k-2^{-k+1}\} }|K(y_{i}^{-1}y_{i-1}x)-K(x)|dx .$$ Note that for $|y_{i-1}\cdot x|> \delta_k-2^{-k+1}$ we have $|y|+|x|> \delta_k-2^{-k+1},$ and then the hypothesis $|y|/2\leq 2^{-k}$ implies $$ |x|> \delta_k-2^{-k+1}-|y|> \delta_k-2^{-k+1}-2^{-k+1}=\delta_k-2^{-k+2} \succeq 2d(y_{i},y_{i+1})^{1-\theta}, $$ from which we have proved that $$\smallint\limits_{ \{x:|y_{i-1}\cdot x|> \delta_k-2^{-k+1}\} }|K(y_{i}^{-1}y_{i-1}x)-K(x)|dx\lesssim \smallint\limits_{|x|>2|y_{i-1}^{-1}y_{i}|^{1-\theta}}|K(y_{i}^{-1}y_{i-1}x)-K(x)|dx.$$ In consequence, \begin{align*} J_{2,k}\lesssim & 2^{-k\varkappa}\sum_{i=1}^m \smallint\limits_{|x|>2|y_{i-1}^{-1}y_{i}|^{1-\theta}}|K(y_{i}^{-1}y_{i-1}x)-K(x)|dx \\ &=2^{-k\varkappa}\sum_{i=1}^m \smallint\limits_{|x|>2|y_{i-1}^{-1}y_{i}|^{1-\theta}}|K((y_{i-1}^{-1}y_{i})^{-1}x)-K(x)|dx \\ & \lesssim 2^{-k\varkappa}\sum_{i=1}^m[K]_{H_{\infty,\theta,b}}'=2^{-k\varkappa}m[K]_{H_{\infty,\theta,b}}'.
\end{align*}It follows that $m\lesssim |y|^{Q(1-\theta)/\lambda}2^{kQ(1-\theta)/\lambda},$ so that, in this Case 2, $$ J_{2,k}\lesssim 2^{-k\varkappa}|y|^{Q(1-\theta)/\lambda}2^{kQ(1-\theta)/\lambda}, $$ where, as before, $$ \lambda:=Q(1-\theta)+2. $$ Now, let us estimate $J_{1,k}.$ In view of the Cauchy--Schwarz inequality we have the estimate: \begin{align*} J_{1,k}&\leq 2\smallint_{|x|\leq \delta_k}|K\ast G_{\varkappa,k}(x)|dx\lesssim \delta_{k}^{\frac{Q}{2}}\|K\ast G_{\varkappa,k}\|_{L^2(G)}=\delta_{k}^{\frac{Q}{2}}\Vert \widehat{G}_{\varkappa,k}\widehat{K}\Vert_{L^2(\widehat{G})}\\ &\leq \delta_{k}^{\frac{Q}{2}}\Vert \widehat{G}_{\varkappa,k} \phi((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}\Vert_{L^2(\widehat{G})} + \delta_{k}^{\frac{Q}{2}} \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}\Vert_{L^2(\widehat{G})}, \end{align*} with $\phi$ as in \eqref{the:auxiliar:phi}. Since \begin{align*} \Vert \widehat{G}_{\varkappa,k} \phi((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}\Vert_{L^2(\widehat{G})}^2 &=\smallint\limits_{\widehat{G}}\| \widehat{G}_{\varkappa,k}(\pi) \phi((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}(\pi)\|^2_{\textnormal{HS}}d\pi\\ &\leq \Vert \widehat{G}_{\varkappa,k}\Vert_{L^\infty(\widehat{G})}^2\smallint\limits_{\widehat{G}}\| \phi((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}(\pi)\|^2_{\textnormal{HS}}d\pi, \end{align*}using \eqref{G:B:K} we have that $\Vert \widehat{G}_{\varkappa,k}\Vert_{L^\infty(\widehat{G})}^2\leq \Vert {G}_{\varkappa,k} \Vert^2_{L^1(G)}\lesssim 2^{-2k\varkappa},$ and then $$ \Vert \widehat{G}_{\varkappa,k} \phi((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}\Vert_{L^2(\widehat{G})}^2\lesssim 2^{-2k\varkappa} \| \phi((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}(\pi)\|^2_{L^2(\widehat{G})}.
$$ Using \eqref{A:alpha}, that is, \begin{equation} \sup_{\pi\in \widehat{G}}\Vert (1+\pi(\mathcal{R}))^{\frac{Q\alpha}{2\nu}}\widehat{K}(\pi) \Vert_{\textnormal{op}}<\infty, \end{equation} we have that $$ \| \phi((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}(\pi)\|_{L^2(\widehat{G})} =\| \phi((2^{-k}\cdot \pi)(\mathcal{R}))(1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}} (1+\pi(\mathcal{R}))^{\frac{Q\alpha}{2\nu}}\widehat{K}(\pi)\|_{L^2(\widehat{G})}$$ $$\leq \sup_{\pi\in \widehat{G}}\Vert (1+\pi(\mathcal{R}))^{\frac{Q\alpha}{2\nu}}\widehat{K}(\pi) \Vert_{\textnormal{op}}\times \| \phi((2^{-k}\cdot \pi)(\mathcal{R}))(1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}\|_{L^2(\widehat{G})}$$ $$\lesssim \| \phi((2^{-k}\cdot \pi)(\mathcal{R}))(1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}\|_{L^2(\widehat{G})}$$ $$\lesssim \| \phi((2^{-k}\cdot \pi)(\mathcal{R}))\pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}}\|_{L^2(\widehat{G})}$$ $$= \| \pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}} \phi((2^{-k}\cdot \pi)(\mathcal{R}))\|_{L^2(\widehat{G})}.$$ Note that in the last line we have used the commutativity identity $$ \phi((2^{-k}\cdot \pi)(\mathcal{R}))\pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}}= \pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}} \phi((2^{-k}\cdot \pi)(\mathcal{R})) $$ in view of the functional calculus of $\mathcal{R},$ and the estimate \begin{equation}\label{Quotient:L2:act} \| \phi((2^{-k}\cdot \pi)(\mathcal{R}))(1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}\|_{L^2(\widehat{G})} \lesssim \| \phi((2^{-k}\cdot \pi)(\mathcal{R}))\pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}}\|_{L^2(\widehat{G})}. 
\end{equation}Indeed, \begin{align*} & \| \phi((2^{-k}\cdot \pi)(\mathcal{R}))(1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}\|_{L^2(\widehat{G})}\\ & = \| \phi((2^{-k}\cdot \pi)(\mathcal{R}))\pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}}\pi(\mathcal{R})^{\frac{Q\alpha}{2\nu}}(1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}\|_{L^2(\widehat{G})}\\ &\leq \sup_{\pi\in \widehat{G}} \Vert \pi(\mathcal{R})^{\frac{Q\alpha}{2\nu}}(1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}\Vert_{\textnormal{op}} \| \phi((2^{-k}\cdot \pi)(\mathcal{R}))\pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}}\|_{L^2(\widehat{G})}\\ &\lesssim \| \phi((2^{-k}\cdot \pi)(\mathcal{R}))\pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}}\|_{L^2(\widehat{G})}. \end{align*} Note that we have used the fact that, in view of the $L^2(G)$-boundedness of the operator $ \mathcal{R}^{\frac{Q\alpha}{2\nu}}(1+\mathcal{R})^{-\frac{Q\alpha}{2\nu}},$ the sup \begin{equation}\label{sup:quotient} \sup_{\pi\in \widehat{G}} \Vert \pi(\mathcal{R})^{\frac{Q\alpha}{2\nu}}(1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}\Vert_{\textnormal{op}}<\infty, \end{equation} is finite. On the other hand, using the Plancherel theorem we get \begin{align} \| \pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}} \phi((2^{-k}\cdot \pi)(\mathcal{R})) \|_{L^2(\widehat{G})}=\Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}\mathscr{F}_{G}^{-1}[\phi((2^{-k}\cdot \pi)(\mathcal{R}))] \Vert_{L^2(G)}. 
\end{align}Set $r=2^{-k}$ and $\Phi_{r}:=r^{-Q}[\phi(\mathcal{R})\delta](r^{-1}\cdot),$ so that $\widehat{\Phi}_r(\pi)=\widehat{\Phi}_1(r\cdot \pi).$ In consequence $$ \phi((2^{-k}\cdot \pi)(\mathcal{R}))=\phi((r\cdot \pi)(\mathcal{R}))= \widehat{\phi(\mathcal{R})\delta}(r\cdot \pi)= \widehat{\Phi}_1(r\cdot \pi) $$ and \begin{align*} \mathcal{R}^{-\frac{Q\alpha}{2\nu}}\mathscr{F}_{G}^{-1}[\phi((2^{-k}\cdot \pi)(\mathcal{R}))]=\mathcal{R}^{-\frac{Q\alpha}{2\nu}}\mathscr{F}_{G}^{-1}[\phi((r\cdot \pi)(\mathcal{R}))]=\mathcal{R}^{-\frac{Q\alpha}{2\nu}}\mathscr{F}_{G}^{-1}[\widehat{\Phi}_r(\pi)] \end{align*} $$ =\mathcal{R}^{-\frac{Q\alpha}{2\nu}}\Phi_r. $$ As $0<Q\alpha/2<Q,$ in view of Corollary 4.3.11 of \cite{FischerRuzhanskyBook}, the right-convolution kernel of $\mathcal{R}^{-\frac{Q\alpha}{2\nu}}$ is homogeneous of order $\frac{Q\alpha}{2}-Q,$ and in consequence of \cite[Lemma 3.2.7]{FischerRuzhanskyBook} the operator $\mathcal{R}^{-\frac{Q\alpha}{2\nu}}$ is homogeneous of degree $-Q\alpha/2.$ So, we have that \begin{align*} \Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}\mathscr{F}_{G}^{-1}[\phi((2^{-k}\cdot \pi)(\mathcal{R}))] \Vert_{L^2(G)}&=\Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}\Phi_r \Vert_{L^2(G)}=r^{-Q}\Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}[\phi(\mathcal{R})\delta(r^{-1}\cdot)] \Vert_{L^2(G)}\\ &=r^{-Q} r^{\frac{Q\alpha}{2}}\Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}[\phi(\mathcal{R})\delta](r^{-1}\cdot) \Vert_{L^2(G)}\\ &=r^{-Q} r^{\frac{Q\alpha}{2}}r^{\frac{Q}{2}}\Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}[\phi(\mathcal{R})\delta](\cdot) \Vert_{L^2(G)}\\ &=2^{-k(\frac{Q\alpha}{2}-\frac{Q}{2})}\Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}[\phi(\mathcal{R})\delta](\cdot) \Vert_{L^2(G)}. \end{align*}In view of the Hulanicki theorem in \cite{FischerRuzhanskyBook}, $\phi(\mathcal{R})\delta\in \mathscr{S}(G),$ and then, by Corollary 4.3.11 in \cite{FischerRuzhanskyBook}, $$ \Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}[\phi(\mathcal{R})\delta] \Vert_{L^2(G)}<\infty.$$
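Before assembling the estimates, it may help the reader to record the elementary identity behind the exponent arithmetic used below; it follows directly from the definitions of $a$ and $\varkappa$:
\begin{align*}
a-\alpha=\frac{Q\alpha(1-\theta)+2\theta-\alpha[Q(1-\theta)+2]}{Q(1-\theta)+2}=\frac{2(\theta-\alpha)}{Q(1-\theta)+2},\quad \textnormal{so that}\quad \varkappa=\frac{Q(\theta-\alpha)}{Q(1-\theta)+2}=\frac{Q(a-\alpha)}{2}.
\end{align*}
In particular, $\varkappa+\frac{Q(\alpha-1)}{2}=\frac{Q(a-1)}{2}.$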
All the analysis above implies that \begin{align*} J_{1,k} &\leq \delta_{k}^{\frac{Q}{2}}\Vert \widehat{G}_{\varkappa,k} \phi((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}\Vert_{L^2(\widehat{G})} + \delta_{k}^{\frac{Q}{2}} \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}\Vert_{L^2(\widehat{G})}\\ &\lesssim \delta_{k}^{\frac{Q}{2}}2^{-k\varkappa}2^{-k(\frac{Q\alpha}{2}-\frac{Q}{2})}+ \delta_{k}^{\frac{Q}{2}} \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}\Vert_{L^2(\widehat{G})}\\ &\lesssim \delta_{k}^{\frac{Q}{2}}2^{-k(\varkappa+\frac{Q(\alpha-1)}{2})}+ \delta_{k}^{\frac{Q}{2}} \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}\Vert_{L^2(\widehat{G})}\\ &=\delta_{k}^{\frac{Q}{2}}2^{-\frac{kQ(a-1)}{2}}+ \delta_{k}^{\frac{Q}{2}} \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}\Vert_{L^2(\widehat{G})}. \end{align*}Now, we will prove the estimate \begin{align}\label{Aux:again:jik} \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}\Vert_{L^2(\widehat{G})}\lesssim 2^{-\frac{kQ(a-1)}{2}}, \end{align} in order to have the following upper bound for $J_{1,k}$: \begin{equation}\label{to:proof:jik} J_{1,k} \lesssim\delta_{k}^{\frac{Q}{2}}2^{-k(\varkappa+\frac{Q(\alpha-1)}{2})}=\delta_{k}^{\frac{Q}{2}}2^{-\frac{kQ(a-1)}{2}}.
\end{equation}For the proof of \eqref{Aux:again:jik} note that \begin{align*} &\Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}\Vert_{L^2(\widehat{G})}\\ &= \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))(1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}(1+\pi(\mathcal{R}))^{\frac{Q\alpha}{2\nu}} \widehat{K}\Vert_{L^2(\widehat{G})}\\ &\leq \sup_{\pi\in \widehat{G}}\Vert (1+\pi(\mathcal{R}))^{\frac{Q\alpha}{2\nu}} \widehat{K}(\pi)\Vert_{\textnormal{op}} \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))(1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}\Vert_{L^2(\widehat{G})}\\ &\lesssim \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))(1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}\Vert_{L^2(\widehat{G})}. \end{align*}Using again the estimate in \eqref{sup:quotient} we have that \begin{align*} & \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))(1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}\Vert_{L^2(\widehat{G})}\\ &=\Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R})) \pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}} \pi(\mathcal{R})^{\frac{Q\alpha}{2\nu}} (1+\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}\Vert_{L^2(\widehat{G})}\\ &\lesssim \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))\pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}}\Vert_{L^2(\widehat{G})}. \end{align*}Now, let us use the functional calculus of $\mathcal{R}.$ For any continuous function $\kappa$ on $\mathbb{R}^+$ one has \begin{equation} \forall r>0,\quad \kappa(r^{\nu}\mathcal{R})\delta=r^{-Q}[\kappa(\mathcal{R})\delta](r^{-1}\cdot). \end{equation}Taking the group Fourier transform of both sides, one obtains \begin{align*} \kappa(r^{\nu}\pi(\mathcal{R}))= \kappa((r\cdot \pi)(\mathcal{R})).
\end{align*}The previous identity with $\kappa(t)=t^{-\frac{Q\alpha}{2\nu}}$ gives \begin{equation} \forall r>0,\quad (r^{\nu}\pi(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}}=((r\cdot \pi)(\mathcal{R}) )^{-\frac{Q\alpha}{2\nu}}. \end{equation}Using the previous property and the change of variables $\pi'=2^{-k}\cdot \pi,$ which transforms the Plancherel measure on the unitary dual $\widehat{G}$ according to $d\pi'=2^{-kQ}d\pi,$ we can estimate \begin{align*} & \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))\pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}}\Vert_{L^2(\widehat{G})}^2\\ &=\smallint\limits_{\widehat{G}}\Vert \widehat{G}_{\varkappa,k}(\pi) (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))\pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}} \Vert_{\textnormal{HS}}^2d\pi\\ &=\smallint\limits_{\widehat{G}}\Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') (1-\phi)(\pi'(\mathcal{R}))((2^k\cdot\pi')(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}} \Vert_{\textnormal{HS}}^22^{kQ}d\pi'\\ &=\smallint\limits_{\widehat{G}}\Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') (1-\phi)(\pi'(\mathcal{R}))(2^{k\nu}\pi'(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}} \Vert_{\textnormal{HS}}^22^{kQ}d\pi'\\ &=\smallint\limits_{\widehat{G}}\Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') (1-\phi)(\pi'(\mathcal{R}))(\pi'(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}} \Vert_{\textnormal{HS}}^22^{k(Q-Q\alpha)}d\pi'\\ & \lesssim\smallint\limits_{\widehat{G}}\Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') (1-\phi)(\pi'(\mathcal{R}))(1+\pi'(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}} \Vert_{\textnormal{HS}}^22^{k(Q-Q\alpha)}d\pi'. \end{align*} Then, we have estimated \begin{align*} & \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))\pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}}\Vert_{L^2(\widehat{G})}^2\\ &\lesssim\smallint\limits_{\widehat{G}}\Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') (1-\phi)(\pi'(\mathcal{R}))(1+\pi'(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}} \Vert_{\textnormal{HS}}^22^{k(Q-Q\alpha)}d\pi'.
\end{align*} Now, let us use the identity \begin{align*} (1-\phi)=(1-\phi)^2+\phi(1-\phi). \end{align*}We have that \begin{align*} & \Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') (1-\phi)(\pi'(\mathcal{R}))(1+\pi'(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}} \Vert_{L^2(\widehat{G})}\\ &\leq \Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') (1-\phi)^2(\pi'(\mathcal{R}))(1+\pi'(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}} \Vert_{L^2(\widehat{G})}\\ &\hspace{3cm}+\Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') \phi(1-\phi)(\pi'(\mathcal{R}))(1+\pi'(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}} \Vert_{L^2(\widehat{G})}=R_{1}+R_{2}. \end{align*} Let us estimate $R_2,$ that is, the last term of the previous inequality. \begin{align*} R_2 &=\Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') \phi(1-\phi)(\pi'(\mathcal{R}))(1+\pi'(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}} \Vert_{L^2(\widehat{G})}\\ &\lesssim\Vert \widehat{G}_{\varkappa,k}\Vert_{L^\infty(\widehat{G})}\Vert (1+\pi'(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}} \phi(1-\phi)(\pi'(\mathcal{R})) \Vert_{L^2(\widehat{G})}\\ &\lesssim \Vert {G}_{\varkappa,k}\Vert_{L^{1}(G)} \Vert (1+\mathcal{R})^{-\frac{Q\alpha}{2\nu}}[\phi(1-\phi)](\mathcal{R})\delta \Vert_{L^2(G)}. \end{align*}In view of the Hulanicki theorem in \cite{FischerRuzhanskyBook}, we have that $[\phi(1-\phi)](\mathcal{R})\delta \in \mathscr{S}(G),$ and \begin{align*} \Vert (1+\mathcal{R})^{-\frac{Q\alpha}{2\nu}}[\phi(1-\phi)](\mathcal{R})\delta \Vert_{L^2(G)}=\Vert [\phi(1-\phi)](\mathcal{R})\delta \Vert_{L^2_{ -\frac{Q\alpha}{2}}(G)}<\infty. \end{align*}So, we have proved that $$ R_2\lesssim \Vert {G}_{\varkappa,k}\Vert_{L^{1}(G)}\lesssim2^{-k\varkappa}.
$$ Now, let $N=n_{0}\nu>Q/2,$ where $n_0\in \mathbb{N},$ and let $\mathcal{B}_N$ be the Bessel potential defined via $\widehat{\mathcal{B}}_N(\pi)=(1+\pi(\mathcal{R}))^{-\frac{N}{\nu}}.$ We can estimate \begin{align*} & R_1^2= \smallint\limits_{\widehat{G}}\Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') (1-\phi)^2(\pi'(\mathcal{R})) (1+\pi'(\mathcal{R}))^{-\frac{Q\alpha}{2\nu}} \|_{\textnormal{HS}}^2d\pi'\\ &\leq \smallint\limits_{\widehat{G}}\Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') (1-\phi)^2(\pi'(\mathcal{R})) \|_{\textnormal{HS}}^2d\pi'\\ &=\smallint\limits_{\widehat{G}}\Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') (1-\phi)(\pi'(\mathcal{R}))(1+\pi'(\mathcal{R}))^{\frac{N}{\nu}}(1-\phi)(\pi'(\mathcal{R}))(1+\pi'(\mathcal{R}))^{-\frac{N}{\nu}}\|_{\textnormal{HS}}^2d\pi'. \end{align*} Note that the pseudo-differential operator $(1-\phi)(\mathcal{R})(1+\mathcal{R})^{-\frac{N}{\nu}}$ is smoothing, and then its right-convolution kernel $k_{N}$ belongs to the Schwartz space $\mathscr{S}(G).$ Note also that $$ \| (1-\phi)(\pi'(\mathcal{R}))\|_{L^\infty(\widehat{G})}=\sup_{\pi'\in \widehat{G}} \Vert (1-\phi)(\pi'(\mathcal{R}))\Vert_{\textnormal{op}}\leq \Vert 1-\phi\Vert_{L^{\infty}(\mathbb{R}^+)}\lesssim 1, $$ in view of the functional calculus of the operator $\pi'(\mathcal{R}),$ $\pi'\in \widehat{G},$ and the properties of $\phi$ in \eqref{the:auxiliar:phi}.
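Let us also record, for later use in the estimate of $R_1,$ the following consequence of the Plancherel theorem: since $k_N\in \mathscr{S}(G),$ the quantity
\begin{align*}
\smallint\limits_{\widehat{G}}\Vert (1+\pi'(\mathcal{R}))^{\frac{N}{\nu}}\widehat{k}_N(\pi')\Vert_{\textnormal{HS}}^{2}\,d\pi'=\Vert (1+\mathcal{R})^{\frac{N}{\nu}}k_N\Vert_{L^2(G)}^{2}<\infty
\end{align*}
is finite.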
So, using the Plancherel theorem we estimate \begin{align*} & R_1^2\\ &=\smallint\limits_{\widehat{G}}\Vert \widehat{G}_{\varkappa,k}(2^{k}\cdot \pi') (1-\phi)(\pi'(\mathcal{R}))(1+\pi'(\mathcal{R}))^{\frac{N}{\nu}}(1-\phi)(\pi'(\mathcal{R}))(1+\pi'(\mathcal{R}))^{-\frac{N}{\nu}}\|_{\textnormal{HS}}^2d\pi'\\ &\leq \Vert \widehat{G}_{\varkappa,k}\Vert^2_{L^{\infty}(\widehat{G})}\| (1-\phi)(\pi'(\mathcal{R}))\|^2_{L^\infty(\widehat{G})}\smallint\limits_{\widehat{G}}\Vert (1+\pi'(\mathcal{R}))^{\frac{N}{\nu}} (1-\phi)(\pi'(\mathcal{R}))(1+\pi'(\mathcal{R}))^{-\frac{N}{\nu}}\|_{\textnormal{HS}}^2d\pi'\\ &\leq \Vert \widehat{G}_{\varkappa,k}\Vert^2_{L^{\infty}(\widehat{G})}\| (1-\phi)(\pi'(\mathcal{R}))\|^2_{L^\infty(\widehat{G})}\smallint\limits_{\widehat{G}}\Vert (1+\pi'(\mathcal{R}))^{\frac{N}{\nu}}\widehat{k}_N(\pi')\|_{\textnormal{HS}}^2d\pi'\\ &\lesssim 2^{-2k\varkappa}\smallint\limits_{\widehat{G}}\Vert(1+\pi'(\mathcal{R}))^{\frac{N}{\nu}}\widehat{k}_N(\pi')\|_{\textnormal{HS}}^2d\pi'= 2^{-2k\varkappa}\Vert (1+\mathcal{R})^{\frac{N}{\nu}}k_N\Vert_{L^2(G)}^2. \end{align*}So, we have proved that $$ R_1\lesssim \Vert {G}_{\varkappa,k}\Vert_{L^{1}(G)}\lesssim2^{-k\varkappa}. $$ The analysis above allows us to conclude that \begin{align*} \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R})) \widehat{K}\Vert_{L^2(\widehat{G})}\lesssim 2^{-k(\varkappa-\frac{Q(1-\alpha)}{2})}= 2^{\frac{kQ(1-a)}{2}}, \end{align*}as well as the estimate \eqref{to:proof:jik}. It follows then that $$ J_{1,k}\lesssim 2^{-k\varkappa}|y|^{Q(1-\theta)/\lambda}2^{kQ(1-\theta)/\lambda}. $$ So, to finish our proof in Case 2, note that $|y|^{-1}\lesssim 2^{k},$ which implies that \begin{align} \sum_{k:|y|/2\leq 2^{-k}<|y|^{\frac{1-\theta}{1-\alpha}}}I_{k}\lesssim \sum_{k:|y|/2\leq 2^{-k}<|y|^{\frac{1-\theta}{1-\alpha}}} 2^{-k\varkappa}|y|^{Q(1-\theta)/\lambda}2^{kQ(1-\theta)/\lambda}\lesssim 1.
\end{align} \item[Case 3:] $|y|^{\frac{1-\theta}{1-\alpha}}\leq 2^{-k}.$ Define $$\delta_k:=4\cdot2^{-k(1-\alpha)}.$$ Note that $$ \delta_k/2\geq 2|y|^{1-\theta}. $$ Split $I_k$ as follows, \begin{align*} I_k:=\smallint\limits_{|x|>2|y|^{1-a}}|K\ast G_{\varkappa,k}(y^{-1}x)-K\ast G_{\varkappa,k}(x)|dx=J_{1,k}+J_{2,k}, \end{align*} where \begin{equation} J_{1,k}=\smallint\limits_{\{|x|>2|y|^{1-a} \}\cap \{x:|x|\leq \delta_k\} }|K\ast G_{\varkappa,k}(y^{-1}x)-K\ast G_{\varkappa,k}(x)|dx \end{equation}and \begin{equation} J_{2,k}=\smallint\limits_{\{|x|>2|y|^{1-a} \}\cap \{x:|x|> \delta_k\} }|K\ast G_{\varkappa,k}(y^{-1}x)-K\ast G_{\varkappa,k}(x)|dx. \end{equation}Now, let us estimate $J_{2,k}.$ Indeed, the change of variable $z=xt^{-1},$ for $t\in \textnormal{supp}(G_{\varkappa,k})$ implies \begin{align*} J_{2,k}&=\smallint\limits_{ \{|x|>2|y|^{1-a} \}\cap \{x:|x|> \delta_k\} }|K\ast G_{\varkappa,k}(y^{-1}x)-K\ast G_{\varkappa,k}(x)|dx\\ &=\smallint\limits_{ \{|x|>2|y|^{1-a} \}\cap \{x:|x|> \delta_k\} }|\smallint\limits_{G} (K(y^{-1}xt^{-1}) G_{\varkappa,k}(t)-K(xt^{-1}) G_{\varkappa,k}(t))dt|dx\\ &\leq \smallint\limits_{G} |G_{\varkappa,k}(t)| \smallint\limits_{ \{|x|>2|y|^{1-a} \}\cap \{x:|x|> \delta_k\} } |K(y^{-1}xt^{-1}) -K(xt^{-1})|dxdt\\ &\leq \smallint\limits_{G} |G_{\varkappa,k}(t)| \smallint\limits_{\{|zt|>2|y|^{1-a} \}\cap \{z:|zt|> \delta_k\}} |K(y^{-1}z)-K(z)|dzdt\\ &\leq \smallint\limits_{G} |G_{\varkappa,k}(t)| \smallint\limits_{ \{z:|zt|> \delta_k\}} |K(y^{-1}z)-K(z)|dz dt. 
\end{align*} Note that when $|zt|>\delta_{k},$ we have $|t|+|z|\geq|zt|>\delta_{k},$ and with $t\in \textnormal{supp}(G_{\varkappa,k}),$ $|t|<2^{-k+1},$ from which one deduces the inclusion of sets $$ \{z:|zt|> \delta_k \}\subset\{z: |z|>\delta_k-2^{-k+1}\}, $$ and the estimate $$ \smallint\limits_{ \{z:|zt|> \delta_k\}} |K(y^{-1}z)-K(z)|dz\leq \smallint\limits_{ \{z: |z|>\delta_k-2^{-k+1}\} } |K(y^{-1}z)-K(z)|dz $$ $$ \leq \smallint\limits_{ \{z: |z|>\delta_k/2\} } |K(y^{-1}z)-K(z)|dz $$ $$ \leq \smallint\limits_{ \{z: |z|>2|y|^{1-\theta}\} } |K(y^{-1}z)-K(z)|dz , $$where we have used that $\delta_k/2\geq 2|y|^{1-\theta}.$ So, the previous analysis together with \eqref{G:B:K} gives \begin{equation} J_{2,k}\lesssim 2^{-k\varkappa} \smallint\limits_{ \{z: |z|>2|y|^{1-\theta}\} } |K(y^{-1}z)-K(z)|dz\lesssim 2^{-k\varkappa}. \end{equation}The same analysis as in Case 2 allows us to deduce the estimate $$ J_{1,k}\leq C\delta_{k}^{\frac{Q}{2}}2^{-kQ(a-1)/2} \leq C2^{-k\varkappa}. $$ So, to finish Case 3, note that \begin{align} \sum_{k:|y|^{\frac{1-\theta}{1-\alpha}}<2^{-k} }I_{k}\lesssim \sum_{k:|y|^{\frac{1-\theta}{1-\alpha}}<2^{-k}} 2^{-k\varkappa}\lesssim 1. \end{align} \end{itemize} The proof of Theorem \ref{Sjolin:Th:graded:groups} is complete. \end{proof} \bibliographystyle{amsplain} \begin{thebibliography}{99} \bibitem{Bjork} Bj\"ork, J.-E. $L^p$ estimates for convolution operators defined by compactly supported distributions in $\mathbb{R}^n$. Math. Scand., 34, 129--136, (1974). \bibitem{CalderonZygmund1952} Calder\'on, A. P., Zygmund, A. On the existence of certain singular integrals. Acta Math., 88, 85--139, (1952). \bibitem{CR20221} Cardona, D., Ruzhansky, M. Boundedness of oscillating singular integrals on Lie groups of polynomial growth, submitted. arXiv:2201.12883. \bibitem{CR20223} Cardona, D., Ruzhansky, M. Oscillating singular integral operators on graded Lie groups revisited. arXiv:2201.12881. \bibitem{Fefferman1970} Fefferman, C.
Inequalities for strongly singular convolution operators, Acta Math., 124, 9--36, (1970). \bibitem{FeffermanStein1972} Fefferman, C., Stein, E. M. $H^p$ spaces of several variables. Acta Math., 129, 137--193, (1972). \bibitem{Fefferman1973} Fefferman, C. $L^p$-bounds for pseudo-differential operators, Israel J. Math., 14, 413--417, (1973). \bibitem{FischerRuzhanskyBook} Fischer, V., Ruzhansky, M. Quantization on nilpotent Lie groups, Progress in Mathematics, Vol. 314, Birkh\"auser, 2016. xiii+557pp. \bibitem{FollandStein1982} Folland, G. B., Stein, E. M. Hardy Spaces on Homogeneous Groups, Princeton University Press, Princeton, N.J., 1982. \bibitem{Hardy1913} Hardy, G. H. A theorem concerning Taylor's series, Quart. J. Pure Appl. Math., 44, 147--160, (1913). \bibitem{HelfferNourrigat} Helffer, B., Nourrigat, J. Caract\'erisation des op\'erateurs hypoelliptiques homog\`enes invariants \`a gauche sur un groupe de Lie nilpotent gradu\'e, Comm. Partial Differential Equations, 4(8), 899--958, (1979). \bibitem{Hirschman1956} Hirschman, I. I. Multiplier transformations I, Duke Math. J., 222--242, (1956). \bibitem{Hormander1960} H\"ormander, L. Estimates for translation invariant operators in $L^p$ spaces, Acta Math., 104, 93--139, (1960). \bibitem{RothschildStein76} Rothschild, L. P., Stein, E. M. Hypoelliptic differential operators and nilpotent groups. Acta Math., 137(3--4), 247--320, (1976). \bibitem{Seeger} Seeger, A. Some inequalities for singular convolution operators in $L^p$-spaces, Trans. Amer. Math. Soc., 308(1), 259--272, (1988). \bibitem{Seeger1990} Seeger, A. Remarks on singular convolution operators. Studia Math., 97(2), 91--114, (1990). \bibitem{Seeger1991} Seeger, A. Endpoint estimates for multiplier transformations on compact manifolds. Indiana Univ. Math. J., 40(2), 471--533, (1991). \bibitem{SeegerSogge} Seeger, A., Sogge, C. D. On the boundedness of functions of (pseudo-) differential operators on compact manifolds. Duke Math. J., 59(3), 709--736, (1989).
\bibitem{SSS} Seeger, A., Sogge, C. D., Stein, E. M. \newblock Regularity properties of {F}ourier integral operators. \newblock { Ann. of Math.}, 134(2), 231--251, (1991). \bibitem{Sjolin} Sj\"olin, P. $L^p$ estimates for strongly singular convolution operators in $\mathbb{R}^n$. Ark. Mat., 14(1), 59--64, (1976). \bibitem{Stein1998Notices} Stein, E. M. Singular Integrals: The roles of Calder\'on and Zygmund. Notices Amer. Math. Soc., 45, 1130--1140, (1998). \bibitem{Stein1970} Stein, E. M. Singular integrals and differentiability properties of functions. Princeton University Press, Princeton, N.J., 1970. \bibitem{Tao} Tao, T. The weak-type $(1,1)$ of Fourier integral operators of order $-(n - 1)/2,$ J. Aust. Math. Soc., {76}(1), 1--21, (2004). \bibitem{Taylorbook1981} Taylor, M. Pseudodifferential Operators, Princeton Univ. Press, Princeton, N.J., 1981. \bibitem{Wainger1965} Wainger, S. Special trigonometric series in $k$-dimensions, Mem. Amer. Math. Soc., 59, (1965). \end{thebibliography} \end{document} Without loss of generality we can assume that $K\in L^1(\mathbb{R}^n).$ Indeed, by following the argument in Fefferman \cite{Fefferman1970}, we can replace $K$ by $K_{\epsilon}:=K\ast \phi_\epsilon,$ where $\phi\geq 0,$ $\phi_\epsilon:=\epsilon^{-n}\phi(\cdot/\epsilon),$ and $\smallint \phi=1.$ Our aim is to extend the celebrated Calder\'on-Zygmund theory in \cite{CalderonZygmund1952}, and the corresponding weak $(1,1)$ estimate for oscillating singular integrals due to Fefferman \cite{Fefferman1970}. Indeed, it is within this context that we generalise the works [..,..].
The program of Calder\'on and Zygmund, dedicated to generalising several fundamental results of one-dimensional harmonic analysis to higher dimensions, led them to generalise the Hilbert transform by introducing singular integrals of the form \begin{equation}\label{CZ:1952} Tf(x)=\textnormal{p.v.}\smallint_{\mathbb{R}^n}K(x-y)f(y)dy, \end{equation}where the kernel $K,$ having some regularity properties, is homogeneous of degree $-n$ and satisfies the cancellation property $\smallint_{|x|=1}K(x)d\sigma(x)=0.$ In particular, they exploited the $L^2$-theory, by imposing regularity properties on $K$ in such a way that its Fourier transform $\widehat{K}$ is a bounded function. Indeed, the general philosophy of the work [] was to prove that an operator $T$ as in \eqref{CZ:1952} is bounded from $L^1$ into $\textnormal{weak-}L^1,$ and then to make use of the Marcinkiewicz interpolation result [] together with its $L^2$-boundedness in order to conclude its $L^p$-boundedness for all $1<p<\infty.$ To continue with the estimation of \eqref{318}, let us use that $|I_j|\lesssim 1.$ Indeed, in view of the Calder\'on-Zygmund decomposition in Theorem \ref{CZ:G}, $I_j$ is the image of an open Euclidean set $\tilde{I}_{j}$ under a diffeomorphism $\omega_\ell,$ so $|I_j|\sim \|\textnormal{Det}(D\omega_\ell)\|_{L^\infty}|\tilde{I}_{j}|,$ which also implies that $|I_j|\sim |\tilde{I}_{j}|$ and that $\textnormal{diam}(I_j)\sim \textnormal{diam}(\tilde{I}_j). $ Because $|\tilde{I}_{j}|\sim (\frac{1}{\sqrt 2}\textnormal{diam}(\tilde{I}_j))^n\sim (\frac{1}{\sqrt 2}\textnormal{diam}({I}_j))^n, $ we have that $|I_j|\sim (\frac{1}{\sqrt{2}}\textnormal{diam}({I}_j))^n\lesssim 1;$ since we are considering $j$ such that $\textnormal{diam}({I}_j)<1,$ we deduce that $|I_j|\lesssim 1$ as desired.
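The model operator \eqref{CZ:1952} can be illustrated numerically in the simplest case $n=1$ by the Hilbert transform, whose kernel $K(x)=1/(\pi x)$ is homogeneous of degree $-1$ and has vanishing mean on $\{|x|=1\}.$ The sketch below (an illustration, not part of the argument; the grid parameters are arbitrary choices) realises the principal value by symmetric truncation and checks it against the classical identity $H[(1+y^2)^{-1}](x)=x/(1+x^2)$:

```python
import math

def hilbert_pv(f, x, h=0.01, R=2000.0):
    """Approximate Hf(x) = (1/pi) p.v. int f(x - s)/s ds by a midpoint rule
    on the symmetric grid s = +-(k + 1/2) h, |s| <= R. Pairing the points
    +s and -s encodes the cancellation of the odd kernel, i.e. the p.v."""
    total = 0.0
    for k in range(int(R / h)):
        s = (k + 0.5) * h
        total += (f(x - s) - f(x + s)) / s
    return total * h / math.pi

f = lambda y: 1.0 / (1.0 + y * y)
exact = lambda x: x / (1.0 + x * x)  # known Hilbert transform of f

approx = hilbert_pv(f, 1.0)
assert abs(approx - exact(1.0)) < 1e-2
```

The symmetric truncation is exactly the cancellation mechanism that makes the principal value in \eqref{CZ:1952} finite for smooth data.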
Returning to \eqref{318}, observe that \begin{align*} \sup_{y\in I_j}|\phi_j\ast k_\theta(y^{-1}x)| \smallint\limits_{I_j}|b_j(y)|dy &\lesssim \Vert \phi_j \Vert_{L^\infty}\Vert k_\theta \Vert_{L^1}|I_j| \frac{1}{|I_j|} \smallint\limits_{I_j}|b_j(y)|dy \\ &\lesssim (\textnormal{diam}(I_j))^{\frac{1}{1-\theta}}\Vert \phi\Vert_{L^\infty}\Vert k_\theta\Vert_{L^1}\frac{1}{|I_j|} \smallint\limits_{I_j}|b_j(y)|dy\\ &\lesssim \frac{1}{|I_j|} \smallint\limits_{I_j}|b_j(y)|dy. \end{align*} First, let us prove that \begin{equation} \Vert G_2\Vert_{L^1}\lesssim \Vert f\Vert_{L^1}. \end{equation} Indeed, by using \eqref{I:j} note that $$ \Vert G_2\Vert_{L^1} =\smallint\limits_{G}| \sum_{j:x\nsim I_j} \tilde{b}_{j}\ast k_\theta(x)|dx \leq \sum_{j:x\nsim I_j}\smallint\limits_{G}|\tilde{b}_{j}\ast k_\theta(x)|dx $$ $$=\sum_{j:x\nsim I_j}\smallint\limits_{G}|{b}_{j}\ast \phi_j\ast k_\theta(x)|dx.$$ In view of Young's convolution inequality, we get $$\Vert G_2\Vert_{L^1}\lesssim \sum_{j}\smallint\limits_{G}|{b}_{j}\ast \phi_j\ast k_\theta(x)|dx $$ $$\lesssim \sum_{j}\Vert {b}_{j}\Vert_{L^1(G)}\Vert \phi_j\ast k_\theta\Vert_{L^1(G)}\leq \sum_{j}\Vert {b}_{j}\Vert_{L^1(G)}\Vert \chi_{k}\Vert_{L^\infty}\Vert \phi_j\ast k_\theta\Vert_{L^1(G)}$$ $$\lesssim_{N_0} (\sup_{j} \Vert \phi_j\ast k_\theta\Vert_{L^1(G)}) \sum_{j}\Vert {b}_{j}\Vert_{L^1(G)}\lesssim (\sup_{j} \Vert \phi_j\ast k_\theta\Vert_{L^1(G)})\sum_{j}2^{n+1}(\gamma\alpha)|I_j|$$ $$\lesssim (\sup_{j} \Vert \phi_j\ast k_\theta\Vert_{L^1(G)}) \Vert f \Vert_{L^1}.
$$Now, by observing that the kernel of the pseudo-differential operator $(1+\mathcal{R})^{-\frac{Q\theta}{2\nu}},$ of order $-n\theta/2,$ satisfies the estimate $ |k_{\theta}(x)|\lesssim |x|^{-(n-\frac{n\theta}{2})}=|x|^{-n(1-\frac{\theta}{2})} ,$ we obtain that $k_\theta\in L^1(G).$ So, using that $\smallint|\phi_j(x)|dx=1$ and Young's convolution inequality we obtain \begin{equation} \sup_{j} \Vert \phi_j\ast k_\theta\Vert_{L^1(G)}\lesssim \Vert k_\theta\Vert_{L^1}\smallint|\phi_j(x)|dx=\Vert k_\theta\Vert_{L^1}. \end{equation}So, we have proved that $\Vert G_2\Vert_{L^1}\lesssim\Vert f\Vert_{L^1}.$ On the other hand, $\Vert G_2\Vert_{L^\infty}\lesssim \alpha.$ For proving this fact, observe that if $x\nsim I_j,$ one has \begin{equation}\label{318} | {b}_{j}\ast \phi_j\ast k_\theta(x)|\leq \smallint\limits_{I_j}|\phi_j\ast k_\theta(y^{-1}x)||b_j(y)|dy\leq \sup_{y\in I_j}|\phi_j\ast k_\theta(y^{-1}x)| \smallint\limits_{I_j}|b_j(y)|dy. \end{equation} \subsection{Calder\'on-Zygmund decomposition in local coordinate systems}\label{Micro:Cald:Zyg:Decompo} In Theorem \ref{CZ:G} below we microlocalise on compact Lie groups the Euclidean Calder\'on-Zygmund decomposition theorem. Although the Calder\'on-Zygmund decomposition theorem is a valid tool on any space of homogeneous type, see Coifman and Weiss \cite[Page 74]{CoifmanWeiss}, we have proved a variant of such a result where the decomposition is subordinated to any atlas $\{O_\ell\}_\ell$ of the group $G.$ One reason for introducing this alternative decomposition theorem is that the Lebesgue number for the covering $\{O_\ell\}_\ell$ will play a modest role in the proof of our main theorem, see Remark \ref{Lemma:Lebesgue:number} and the definition of \eqref{def:c:LebNum} in Section \ref{proof:section}. In contrast with other Calder\'on-Zygmund decomposition theorems (see e.g. Hebish \cite{Hebish}, Coifman and Weiss \cite[Page 74]{CoifmanWeiss}, etc.)
where the bad part $b$ of any function $f=g+b\in L^1(G)$ is decomposed at any altitude $\alpha>0$ in components $b_J$ supported in open sets $B_J$ with finite overlapping, we split in Theorem \ref{CZ:G} any $f\in L^1(G)$ in the usual way $f=g+b,$ where the bad part $b=\sum_J b_J$ of $f$ can be decomposed into functions $b_J$'s with disjoint supports $I_J$'s. However, it is of interest in itself that, as in the Euclidean case, one has the Calder\'on-Zygmund decomposition of any integrable function $f\in L^1(G)$ at any altitude $\alpha>0,$ including $\alpha$ in the range $0<\alpha\leq \frac{1}{|G|}\smallint|f(x)|dx,$ as we will prove in Theorem \ref{CZ:G}. Now we present the microlocalised version of the Calder\'on-Zygmund decomposition theorem on $G.$ \begin{theorem}[Calder\'on-Zygmund type decomposition on compact Lie groups]\label{CZ:G} Let $\alpha,\gamma>0$ and let $G$ be a compact Lie group of dimension $n.$ For any atlas $\{{O}_\ell\}_{\ell}$ of $G,$ and for any $f\in L^1(G)$ we have a decomposition of the form $$f=g+b=\sum_{\ell=1}^{M_0}g_\ell+\sum_{J}b_J, \quad g=\sum_{\ell=1}^{M_0}g_\ell,$$ at height $\gamma\alpha,$ $\gamma,\alpha>0,$ where the $b_J$'s are supported in disjoint measurable sets $I_J,$ each of them included in some open set ${O}_\ell.$ Moreover, each $g_\ell$ is supported in ${O}_\ell,$ and the ``good'' part $g$ and the ``bad'' part $b$ of $f$ satisfy \begin{itemize} \item[(1A)] $\Vert g_\ell \Vert_{L^\infty}\lesssim_{G} \gamma \alpha$ and $\Vert g_\ell\Vert_{L^1}\lesssim_{G} \Vert f\Vert_{L^1}.$ \item[(1B)] $\Vert g \Vert_{L^\infty}\lesssim_{G} \gamma \alpha$ and $\Vert g\Vert_{L^1}\lesssim_{G} \Vert f\Vert_{L^1}.$ \item[(2)] \begin{equation}\label{I:j} \sum_{J} |I_J|\lesssim_{G} (\gamma\alpha)^{-1} \Vert f\Vert_{L^1}, \textnormal{ and } \Vert b\Vert_{L^1}\lesssim_{G} 2^{n+1}\Vert f\Vert_{L^1}. \end{equation} \item[(3)] \begin{equation} \Vert b_J\Vert_{L^1}\lesssim_{G} 2^{n+1}(\gamma \alpha)|I_J|.
\end{equation} \item[(4)] There is a family of diffeomorphisms $\omega_\ell:\tilde{O}_\ell\rightarrow O_\ell$ such that any $I_{J}\subset{O_\ell}$ is the image under $\omega_\ell$ of a semi-open dyadic cube of $\mathbb{R}^n.$ \item[(5)] For $I_{J}\subset{O_\ell},$ $b_J$ satisfies the following cancellation property: \begin{equation*} \smallint\limits_{{I_J}}{b}_{J}(x)|\textnormal{det}(D\omega_\ell^{-1})(x)|dx=0, \end{equation*}where $D\omega_\ell^{-1}$ is the corresponding Jacobian matrix for the change of coordinates $\omega_\ell^{-1}:{O}_\ell\rightarrow \tilde{O}_\ell$. \end{itemize} \end{theorem} \begin{proof}Let $\{{O}_\ell\}_{\ell}$ be an atlas of $G.$ Because $G$ is a paracompact manifold, we can consider a partition of unity $\{\chi_\ell\}_{\ell=1}^{M_0}$ subordinated to a family of open subsets $V_{\ell},$ with $\textnormal{supp}{(\chi_\ell)}\subset {O_{\ell}},$ where each open subset ${O_{\ell}}$ has compact closure $\overline{O_{\ell}}\subset{V}_\ell,$ and let us consider the family of diffeomorphisms $$ \omega_\ell:\tilde{O}_\ell\rightarrow O_\ell,\,\,\omega_\ell:\tilde{V}_\ell\rightarrow V_\ell ,$$ where the $\tilde{O}_\ell$'s and the $\tilde{V}_\ell$'s are open subsets of $\mathbb{R}^n$ such that, for any $\ell,$ $\overline{O}_\ell$ and $\overline{\tilde{O}}_\ell$ are compact sets.
In other words, the partition of unity $\{\chi_\ell\}_{\ell=1}^{M_0}$ is subordinated to the atlas $(\{O_\ell,\omega_\ell\}),$ with this family of open sets being a refinement of the collection $\{V_\ell\}.$ Let $\tilde{\chi}_\ell:=\chi_\ell\circ \omega_\ell.$ Define $f_\ell=f\cdot \chi_\ell,$ and denote $\tilde{f}_{\ell }:=f_\ell\circ \omega_\ell.$ Motivated by the change of variables formula $$ \smallint\limits_{O_\ell}|f_\ell(g)|dg= \smallint\limits_{\tilde{O}_\ell}|{f}_\ell\circ \omega_\ell(x)||\det[D\omega_\ell(x)]|dx = \smallint\limits_{\tilde{O}_\ell}|\tilde{f}_{\ell }(x)||\det[D\omega_\ell(x)]|dx $$ we apply the Euclidean Calder\'on-Zygmund decomposition (see e.g. Theorem 4.3.1 of Grafakos \cite{GrafakosBook}) to any function $\tilde f_\ell$ in order to have the decomposition $$\tilde{f}_{\ell }=\tilde{g}_\ell+\tilde{b}_\ell=\tilde{g}_\ell+\sum_{j}\tilde{b}_{\ell j}$$ at height $\gamma\alpha,$ where $\gamma,\alpha>0.$ Then, the ``good'' part $\tilde{g}_\ell$ and the ``bad'' part $\tilde{b}_\ell$ of $\tilde{f}_\ell$ satisfy \begin{itemize} \item[(1)'] $\Vert \tilde{g}_\ell \Vert_{L^\infty}\leq \gamma \alpha$ and $\Vert \tilde{g}_\ell\Vert_{L^1}\leq \Vert \tilde f_\ell\Vert_{L^1}.$ \item[(2)'] The $\tilde{b}_{\ell j}$'s are supported in disjoint sets $\tilde{I}_{\ell j}$ such that \begin{equation*} \sum_{j} |\tilde{I}_{\ell j}|\leq (\gamma\alpha)^{-1} \Vert \tilde f_\ell\Vert_{L^1}, \textnormal{ and } \Vert \tilde{b}_\ell\Vert_{L^1}\lesssim\Vert \tilde f_\ell\Vert_{L^1}. \end{equation*} \item[(3)'] \begin{equation*} \Vert \tilde{b}_{\ell j}\Vert_{L^1}\leq 2^{n+1}(\gamma \alpha)|\tilde{I}_{\ell j }| . \end{equation*} \item[(4)'] \begin{equation*} \smallint\limits_{\tilde{I_{\ell j}}}\tilde{b}_{\ell j}(x)dx=0. \end{equation*} \end{itemize} Note that in general $\tilde{I}_{\ell j}$ can be taken to be a dyadic cube (see e.g. Theorem 4.3.1 of Grafakos \cite{GrafakosBook}).
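The Euclidean decomposition just invoked can be sketched in dimension $n=1$ by the classical dyadic stopping-time argument. The toy discretisation below (an illustrative script; the sample data and the height $\alpha$ are arbitrary choices, not objects from the proof) selects a maximal dyadic interval as soon as the average of $|f|$ over it exceeds the height, and then checks discrete analogues of the properties (1)'--(4)':

```python
# Discretise f on [0,1) by N = 2**L samples; dyadic intervals = index ranges.
N = 2 ** 10
f = [10.0 if i % 100 == 3 else 0.1 for i in range(N)]  # a few spikes
norm1 = sum(abs(v) for v in f) / N                     # discrete ||f||_1
alpha = 2.0 * norm1                                    # height >= mean(|f|)

selected = []  # the selected dyadic intervals [lo, hi)

def cz_select(lo, hi):
    avg = sum(abs(v) for v in f[lo:hi]) / (hi - lo)
    if avg > alpha:            # stopping time: select and stop
        selected.append((lo, hi))
    elif hi - lo > 1:          # otherwise subdivide into the two children
        mid = (lo + hi) // 2
        cz_select(lo, mid)
        cz_select(mid, hi)

cz_select(0, N)

# Good part g: the average of f on each selected interval, f elsewhere;
# bad part b: mean-zero pieces supported on the selected intervals.
g = list(f)
b = [0.0] * N
for lo, hi in selected:
    m = sum(f[lo:hi]) / (hi - lo)
    for i in range(lo, hi):
        g[i] = m
        b[i] = f[i] - m

# (2)': alpha * |I_j| < integral of |f| over I_j, and the I_j are disjoint.
assert sum(hi - lo for lo, hi in selected) / N <= norm1 / alpha + 1e-12
# (1)': parent average <= alpha, so ||g||_inf <= 2 * alpha (2^n with n = 1).
assert max(abs(v) for v in g) <= 2 * alpha + 1e-12
# (4)': each b_j has vanishing integral, and f = g + b pointwise.
for lo, hi in selected:
    assert abs(sum(b[lo:hi])) < 1e-9
assert all(abs(f[i] - g[i] - b[i]) < 1e-12 for i in range(N))
```
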
However, using that the function $\tilde f_\ell$ has its support on $\tilde{O}_{\ell},$ we can assume that $\tilde{I}_{\ell j}$ is a dyadic cube $D_{\ell j}$ inside of $ \tilde{O}_{\ell}.$ We are going to explain this point in more detail by making careful modifications to the classical proof of the Calder\'on-Zygmund covering lemma on $\mathbb{R}^n$ (see, e.g., the proof of Theorem 4.3.1 of Grafakos \cite{GrafakosBook}) concerning the choice of the dyadic cubes $D_j.$ Indeed, we need to subordinate the family of sets $\tilde{I}_{\ell j}$ to the open sets $\tilde{O}_\ell.$ For instance, decompose $\mathbb{R}^n$ into a mesh of disjoint dyadic cubes $D$ of the same size and consider those such that $D\cap \tilde{O}_\ell\neq \emptyset.$ We call the set of such cubes the family of order zero, and denote it by $\mathfrak{F}^{(0)}.$ From the family of cubes of order zero we extract those cubes $D$ contained in the interior of $\tilde{O}_\ell$ such that $$\frac{1}{|D|}\smallint\limits_{D}|\tilde{f}_{\ell }(x)|dx>\gamma\alpha. $$ We call these the cubes of zero generation, and let $\mathfrak{S}^{(0)}$ be the class of cubes of zero generation. Subdivide each cube in the family $\mathfrak{F}^{(0)}\setminus \mathfrak{S}^{(0)}$ into $2^n$ cubes of the same size by bisecting each of its sides. Then, from these cubes we will consider those satisfying $ D_j\cap \tilde{O}_\ell\neq \emptyset,$ and the inequality \begin{align}\label{elction:cubes} \frac{1}{|D_j |}\smallint\limits_{D_j}|\tilde{f}_\ell(x)|dx>\gamma\alpha. \end{align} We will say that these cubes are in the family of order one, $\mathfrak{F}^{(1)}.$ From this family we select those cubes $\tilde{I}_{\ell j}^{(1)}:=D_j$ that are totally included in $\tilde{O}_\ell,$ and this family will be called the family of generation one.
The class of cubes of generation one is denoted by $\mathfrak{S}^{(1)}.$ Now subdivide each nonselected cube in the family $\mathfrak{F}^{(1)}\setminus \mathfrak{S}^{(1)}$ into $2^n$ subcubes of the same size by bisecting each side; any of these cubes belongs to the family of order two if it satisfies \eqref{elction:cubes}. Such a family will be denoted by $\mathfrak{F}^{(2)}$. Let us consider those cubes $\tilde{I}_{\ell j}^{(2)}:=D_j$ in the family $\mathfrak{F}^{(2)}$ totally contained in $\tilde{O}_\ell.$ This class of cubes, $\mathfrak{S}^{(2)},$ is called the class of generation two. By repeating this process indefinitely one constructs a family $\mathfrak{S}:=\bigcup_{k=0}^{\infty} \mathfrak{S}^{(k)}.$ The family $\mathfrak{S}$ is the required family of sets $\tilde{I}_{\ell j}.$ Note that in some instances $\mathfrak{S}$ may be empty, and in such a case $\tilde{b}_\ell = 0$ and $\tilde{g}_\ell = \tilde{f}_{\ell }$. So, we always have for a set $\tilde{I}_{\ell j}$ of $\mathfrak{S}$ that $\tilde{I}_{\ell j}\subset \tilde{O}_\ell.$ So, by using standard arguments, it is easy to see that \begin{equation} \tilde{b}_{\ell j}:=\left( \tilde{f}_{\ell}-\frac{1}{|\tilde{I}_{\ell j}|}\smallint\limits_{\tilde{I}_{\ell j}}\tilde{f}_{\ell }(x)dx \right)\chi_{\tilde{I}_{\ell j}},\,\tilde{b}_{\ell }=\sum_{j}\tilde{b}_{\ell j},\,\tilde{g}_{\ell }=\tilde{f}_{\ell }-\tilde{b}_{\ell }, \end{equation}are the functions of the Calder\'on-Zygmund decomposition of $\tilde{f}_{\ell }.$ Indeed, let us start by proving the estimates for the $\tilde{b}_{\ell j}$'s. For any selected cube $\tilde{I}_{\ell j}=D_{j}$ there is a unique non-selected cube $D_{ j}'$ with twice its side length such that $D_{ j}\subset D_{ j}'$ and $D_j'\cap \tilde{O}_\ell \neq \emptyset$.
Usually, $D_{ j}'$ is called the parent of $\tilde{I}_{\ell j}=D_{j}.$ Since the parent of $D_j$ was not selected, one has that $D_{j}'$ is not totally contained in $\tilde{O}_\ell,$ but it does satisfy \begin{equation} \frac{1}{|D_j'|}\smallint\limits_{D_j'}|\tilde{f}_{\ell }(x)|dx\leq \gamma\alpha. \end{equation} If $D_{j}'$ is not totally contained in $\tilde{O}_\ell,$ as $D_{j}$ was selected, then $D_{j}\subset{D}_{j}'\cap \tilde{O}_\ell,$ and then \begin{equation} |D_{j}|\leq |{D}_{j}'\cap \tilde{O}_\ell|\leq |{D}_{j}'|=2^n|D_j|, \end{equation}and then \begin{align*} \frac{1}{|D_j |}\smallint\limits_{D_j}|\tilde{f}_{\ell }(x)|dx =\frac{2^n}{|D_j'|}\smallint\limits_{D_j}|\tilde{f}_{\ell}(x)|dx\leq \frac{2^n}{|D_j'|}\smallint\limits_{D_j'}|\tilde{f}_{\ell }(x)|dx \leq 2^n\alpha\gamma. \end{align*} The previous estimate implies (3)'. Indeed, for $\tilde{I}_{\ell j}=D_j$ one has \begin{align*} \smallint\limits_{D_j}|\tilde{b}_{\ell j }|dx&\leq\smallint_{\tilde{I}_{\ell j}}|\tilde{f}_{\ell }(x)|dx+|\tilde{I}_{\ell j}|\left(\frac{1}{|\tilde{I}_{\ell j}|}\smallint\limits_{\tilde{I}_{\ell j}}|\tilde{f}_{\ell }(x)|dx\right)\leq 2\smallint\limits_{\tilde{I}_{\ell j}}|\tilde{f}_{\ell }(x)|dx\\ &\leq 2^{n+1}\gamma\alpha|\tilde{I}_{\ell j}|. \end{align*} For the proof of (2)' note that \begin{align*} \sum_{j}|\tilde{I}_{\ell j}|\leq \frac{1}{\alpha\gamma}\sum_{j}\smallint\limits_{\tilde{I}_{\ell j}}|\tilde{f}_{\ell }(x)|dx\lesssim\frac{1}{\alpha\gamma }\Vert \tilde{f}_\ell\Vert_{L^1}.
\end{align*}Now, we are going to prove the estimates for the functions $\tilde{g}_{\ell}.$ Note that for $x\in \tilde{I}_{\ell j}$ one trivially has the estimate $|\tilde{g}_{\ell }(x)|\lesssim 2^n\gamma\alpha,$ since $\tilde{g}_\ell$ is constant, equal to ${|\tilde{I}_{\ell j}|}^{-1}\smallint_{\tilde{I}_{\ell j}}\tilde{f}_{\ell}(y)dy,$ on $\tilde{I}_{\ell j}.$ So, we have to prove the inequality $|\tilde{g}_{\ell}(x)|\lesssim2^n\gamma\alpha$ for $x$ outside of the sets $\tilde{I}_{\ell j}.$ To do this, take $$x\in \tilde{O}_\ell\setminus\cup_{j}\tilde{I}_{\ell j},$$ and note that for each $k$ there exists a unique non-selected dyadic cube $D_{(x)}^{(k)}$ of the $k$-th subdivision that contains the point $x.$ Because $D_{(x)}^{(k)}$ was not selected, we have that \begin{align*} \frac{1}{|D_{(x)}^{(k)}|}\smallint\limits_{D_{(x)}^{(k)}}|\tilde{f}_{\ell }(y)|dy\leq \alpha\gamma. \end{align*} Note that $\cap_{k}\overline{D_{(x)}^{(k)}}=\{x\}.$ In view of the Lebesgue differentiation theorem one has the equality \begin{equation} |\tilde{f}_{\ell}(x)|=\lim_{k\rightarrow\infty} \frac{1}{|D_{(x)}^{(k)}|}\smallint\limits_{D_{(x)}^{(k)}}|\tilde{f}_{\ell }(y)|dy\leq\gamma \alpha. \end{equation} So, we have proved that for a.e. $x\in \tilde{O}_\ell\setminus\cup_{j}\tilde{I}_{\ell j},$ $|\tilde{f}_{\ell }(x)|\leq \gamma\alpha.$ Finally, since $\tilde{g}_\ell$ is equal to ${|\tilde{I}_{\ell j}|}^{-1}\smallint_{\tilde{I}_{\ell j}}\tilde{f}_\ell(y)dy$ on $\tilde{I}_{\ell j},$ and equal to $\tilde{f}_{\ell }$ in $\tilde{O}_\ell\setminus\cup_{j}\tilde{I}_{\ell j},$ we have that $\Vert \tilde{g}_\ell\Vert_{L^1}\lesssim \Vert\tilde{f}_{\ell }\Vert_{L^1}.$ Thus, the proof of (1)' is complete. The property (4)' follows by a straightforward computation. Now, define the functions $$ b_{\ell j}:=\tilde{b}_{\ell j}\circ \omega_\ell^{-1}, \quad g_{\ell }:=\tilde{g}_{\ell}\circ \omega_\ell^{-1},\quad b_\ell:=\sum_{j}b_{\ell j}.
$$ Note that we have the following control of the $L^\infty$-norm of $g_\ell,$ $$ \Vert g_\ell\Vert_{L^\infty}= \Vert \tilde g_\ell\Vert_{L^\infty}\leq \gamma \alpha. $$ Also, observe that $$ \Vert g_\ell \Vert_{L^1}=\smallint_{G} |g_{\ell}(y)|dy=\smallint |\tilde{g}_{\ell}(x)||\textnormal{det}[D\omega_\ell](x)|dx \leq \sup_{1\leq \ell\leq M_0}\Vert \textnormal{det}[D\omega_\ell]\Vert_{L^\infty(\tilde{O}_\ell)}\smallint |\tilde{g}_{\ell}(x)|dx$$ $$ \lesssim \Vert \tilde{f}_{\ell }\Vert_{L^1}=\smallint|\tilde{f}_{\ell}(y)|dy \lesssim \Vert \textnormal{det}[D\omega_\ell^{-1}]\Vert_{L^{\infty}({O}_\ell)} \smallint_{O_\ell}|f_{\ell}(y)|dy\lesssim \Vert {f}_{\ell }\Vert_{L^1}, $$ and the validity of the following estimates $$ \Vert {b}_{\ell }\Vert_{L^1}=\smallint| {b}_{\ell }(x)|dx\leq \sup_{1\leq \ell\leq M_0}\Vert \textnormal{det}[D\omega_\ell]\Vert_{L^{\infty}(\tilde{O}_\ell)}\smallint| \tilde{b}_{\ell }(x)| dx \lesssim\smallint| \tilde{b}_{\ell }(x)| dx $$ $$ \lesssim \smallint|\tilde{f}_{\ell}(y)|dy \lesssim \Vert \textnormal{det}[D\omega_\ell^{-1}]\Vert_{L^{\infty}({O}_\ell)} \smallint_{O_\ell}|f_{\ell}(y)|dy.$$ Take into account that we have used the estimates $$\sup_{1\leq \ell\leq M_0}\Vert\textnormal{det}[D\omega_\ell^{-1}]\Vert_{L^{\infty}({O}_\ell)}, \quad\sup_{1\leq \ell\leq M_0} \Vert \textnormal{det}[D\omega_\ell] \Vert_{L^\infty(\tilde{O}_{\ell})} <\infty.$$ Indeed, the restriction of the mapping $$ [x\mapsto D\omega_\ell(x):T_x\tilde{V}_\ell\cong \mathbb{R}^n\rightarrow T_{\omega_\ell(x)}V_\ell]:\tilde{V}_\ell\rightarrow \textnormal{End}(T\tilde{O}_\ell, T{O}_\ell)$$ to $\overline{\tilde{O}_\ell}$ is smooth; its composition with $\textnormal{det}:\textnormal{End}(T\tilde{O}_\ell, T{O}_\ell)\rightarrow\mathbb{R}$ is a continuous mapping, and this together with the compactness of $\overline{\tilde{O}}_\ell$ implies that $$ \sup_{1\leq \ell\leq M_0} \Vert \textnormal{det}[D\omega_\ell] \Vert_{L^\infty(\tilde{O}_{\ell})}\leq \sup_{1\leq \ell\leq M_0} \Vert \textnormal{det}[{D\omega_\ell}]
\Vert_{L^\infty(\overline{\tilde{O}}_{\ell})} <\infty. $$ A similar argument shows that $\sup_{1\leq \ell\leq M_0}\Vert\textnormal{det}[D\omega_\ell^{-1}]\Vert_{L^{\infty}({O}_\ell)}<\infty.$ All the analysis above implies the following inequality of $L^1$-norms $$\Vert g_\ell \Vert_{L^1}\lesssim \Vert f_\ell \Vert_{L^1} .$$ By observing that $\sum_{\ell=1}^{M_0}\chi_\ell\equiv 1$ with $\chi_\ell\geq 0,$ the function $$g:=\sum_{\ell=1}^{M_0} g_\ell$$ satisfies the $L^1$-estimates $$ \Vert g\Vert_{L^1}\leq \sum_{\ell=1}^{M_0}\Vert g_\ell\Vert_{L^1}\lesssim \sum_{\ell=1}^{M_0}\Vert f_\ell\Vert_{L^1}=\Vert f\Vert_{L^1}. $$ Now, define $$b:=\sum_{\ell=1}^{M_0}\sum_jb_{\ell j}=\sum_{J}b_{J},$$ where the set of indices $\{J\}$ enumerates all the possible pairs $J=(\ell,j).$ Note that the $b_{\ell j}$'s are supported on the sets $ I_{\ell j}:=\omega_{\ell}(\tilde{I}_{\ell j})\subset O_{\ell }, $ whose measures satisfy \begin{align*} \sum_{\ell, j}|{I}_{\ell j}|\leq \sum_{\ell, j}|\tilde{I}_{\ell j}|\Vert\det[D\omega_\ell] \|_{L^{\infty}(\tilde{O}_\ell)}\lesssim (\gamma\alpha)^{-1}\sum_{\ell=1}^{M_0} \Vert \tilde f_\ell\Vert_{L^1}\lesssim (\gamma\alpha)^{-1}\Vert f \Vert_{L^1}. \end{align*} Indeed, in the previous estimates we have used that $$ |I_{\ell j}|=\smallint_{I_{\ell j}}dy=\smallint_{\tilde{I_{\ell j}}}|\textnormal{det}[D\omega_\ell(x)]|dx\leq \sup_{1\leq \ell\leq M_0}\Vert \textnormal{det}[D\omega_\ell] \Vert_{L^\infty(\tilde{O}_{\ell})}|\tilde{I}_{\ell j}|.
$$ Also, we have $$ \Vert {b}_{\ell j}\Vert_{L^1}=\smallint|b_{\ell j}(y)|dy\leq \sup_{1\leq \ell \leq M_{0}} \Vert\det[D\omega_\ell] \|_{L^{\infty}(\tilde{O}_\ell)}\smallint |\tilde{b}_{\ell j}(x)|dx\lesssim \Vert \tilde{b}_{\ell j}\Vert_{L^1} $$ $$ \leq 2^{n+1}(\gamma \alpha)|\tilde{I}_{\ell j }|= 2^{n+1}(\gamma \alpha)\smallint_{\tilde{I}_{\ell j }}dx= 2^{n+1}(\gamma \alpha)\smallint\limits_{I_{\ell j}} |\textnormal{det}[D\omega_\ell^{-1}(y)]|dy$$ $$ \leq \sup_{1\leq \ell\leq M_0}\Vert \textnormal{det}[D\omega_\ell^{-1}]\Vert_{L^\infty(O_\ell)} 2^{n+1}(\gamma \alpha)|I_{\ell j}| $$ $$ \lesssim 2^{n+1}(\gamma \alpha)|I_{\ell j}| . $$ Finally, observing that \begin{align*} \Vert b\Vert_{L^1}\leq \sum_{\ell=1}^{M_0}\sum_{j}\Vert b_{\ell j}\Vert_{L^1} \lesssim \sum_{\ell=1}^{M_0}\sum_{j} 2^{n+1}(\gamma \alpha)|I_{\ell j}|\lesssim \Vert f\Vert_{L^1} , \end{align*} we have proved that $g$ and $b$ as defined above satisfy the requirements of Theorem \ref{CZ:G}. Note that (5) follows from the change of variables theorem applied to (4)'. Thus, we end the proof. \end{proof} \begin{remark}\label{Lemma:Lebesgue:number} For our further analysis we will use the following property of $G,$ which holds true for arbitrary compact metric spaces: for any open covering $\{U_\ell\}_{\ell}$ of $G$ there exists a number $\varkappa>0$ such that any point of $G$ is contained in a ball of radius $\varkappa$ that is itself contained in some $U_\ell.$ Naturally, we will use this property for the family of sets $U_\ell=O_\ell$ as defined in the proof of Theorem \ref{CZ:G}. Usually, $\varkappa:=\textnormal{Leb}(\{U_\ell\}_{\ell})$ is called the Lebesgue number of the covering $\{U_\ell\}_{\ell}.$ \end{remark} Now, we will construct an exceptional function $\phi$ that will be instrumental for our proof.
Let $s<0.$ Let us take $N\in \mathbb{N}$ such that $N>\frac{|s|}{\nu}.$ Then, we have that (see \cite[Page 229]{FischerRuzhanskyBook}) \begin{equation} \mathcal{R}^{N}[C^\infty_0(G,\mathbb{R}^{+}_0)]\subset \mathcal{R}^{N}[C^\infty_0(G)]\subset \mathcal{R}^{N}[\mathscr{S}(G)]\subset \textnormal{Dom}[\mathcal{R}^{\frac{s}{\nu}}]. \end{equation} In particular, for $s=-Q\theta/2,$ we have that \begin{equation} \mathcal{R}^{[Q\theta/2\nu]+1}[C^\infty_0(G,\mathbb{R}^{+}_0)]\subset \mathcal{R}^{[Q\theta/2\nu]+1}[\mathscr{S}(G)]\subset \textnormal{Dom}[\mathcal{R}^{-\frac{Q\theta}{2\nu}}]. \end{equation}Let $\varkappa\in C^\infty_0(G)$ be such that $\textnormal{supp}[\varkappa]\subset B(e,1).$ Since $\mathcal{R}^{[Q\theta/2\nu]+1}$ is a left-invariant differential operator, we have that $$\textnormal{supp}[\mathcal{R}^{[Q\theta/2\nu]+1}(\varkappa)]\subset\textnormal{supp}(\varkappa)\subset B(e,1).$$ Then, in terms of $\alpha:=|\mathcal{R}^{[Q\theta/2\nu]+1}(\varkappa)|^2,$ which is compactly supported in $B(e,1)$ and non-negative, define \begin{equation} \phi:=\frac{\alpha}{\smallint\limits_G\alpha(x)dx}. \end{equation} \subsection{Example: the case of the Heisenberg group $\mathbb{H}_n$} In this subsection we illustrate our Theorem \ref{main:th} in the case of the Heisenberg group $G=\mathbb{H}_n,$ which can be realised as the manifold $\mathbb{R}^{2n+1}$ endowed with the product \begin{equation} (x,y,t)\cdot (x',y',t')=(x+x',y+y',t+t'+\frac{1}{2}(xy'-x'y)),\quad(x,y,t),(x',y',t')\in \mathbb{H}_n. \end{equation} The Lie algebra $\mathfrak{h}_n=\textnormal{Lie}(\mathbb{H}_n)$ is spanned by the vector fields \begin{equation} X_j=\partial_{x_j}-\frac{y_j}{2}\partial_{t},\,\quad Y_j=\partial_{y_j}+\frac{x_j}{2}\partial_{t},\,1\leq j\leq n,\,\quad T=\partial_t.
\end{equation}The canonical commutation relations are $[X_j,Y_j]=T,$ $1\leq j\leq n.$ The positive sub-Laplacian $\mathcal{L}_{sub}$ on the Heisenberg group $$ \mathcal{L}_{sub}:=-\sum_{j=1}^n(X_j^2+Y_j^2)=-\sum_{j=1}^n\left[\left(\partial_{x_j}-\frac{y_j}{2}\partial_{t}\right)^2+\left(\partial_{y_j}+\frac{x_j}{2}\partial_{t}\right)^2\right], $$ is a Rockland operator of homogeneous degree $\nu=2,$ and the homogeneous dimension of $\mathbb{H}_n$ is $Q=2n+2.$ We observe that if $\mathbb{H}_n$ is endowed with the Carnot-Carath\'eodory distance $|\cdot|$ then the local dimension $d_0$ agrees with the global dimension $d_\infty$ and $d_0=d_\infty=Q=2n+2.$ Moreover, on any stratified group one always has that $d_0=d_\infty.$ The unitary dual $\widehat{\mathbb{H}}_n$ of $\mathbb{H}_n$ admits the identification \begin{equation}\label{dual:of:Hn} \widehat{\mathbb{H}}_n=\{\pi_{\lambda}:\lambda\neq 0,\,\lambda\in \mathbb{R}\}\sim \mathbb{R}^{*}:=\mathbb{R}\setminus \{0\}. \end{equation} In \eqref{dual:of:Hn}, the family $\pi_{\lambda},$ $\lambda\in \mathbb{R}^*,$ consists of the Schr\"odinger representations, which are the unitary operators on $L^2(\mathbb{R}^n)$ defined via \begin{equation*} \pi_\lambda (x,y,t) h(u)=e^{i\lambda (t+\frac{1}{2}x\cdot y)}e^{i\sqrt{\lambda}y\cdot u}h(u+\sqrt{|\lambda|}x),\,\sqrt{\lambda}:=\frac{\lambda}{|\lambda|}\sqrt{|\lambda|},\,\lambda\in \mathbb{R}^*. \end{equation*} In the Heisenberg group setting, for $\mathcal{R}=\mathcal{L}_{sub},$ the Fourier transform of its right-convolution kernel takes the form of the dilated harmonic oscillator \begin{equation*} \pi_\lambda(\mathcal{L}_{sub})=|\lambda|(-\Delta_u+|u|^2), \, \Delta_u:=\sum_{j=1}^n\partial_{u_j}^2,\,u=(u_1,\cdots,u_n)\in \mathbb{R}^n.
\end{equation*} For a distribution $K$ on $G=\mathbb{H}_n,$ the condition in \eqref{LxiG} for the Fourier transform of $K,$ $$ \widehat{K}(\lambda)=\smallint_{\mathbb{H}_n}K(x,y,t)\pi_\lambda(x,y,t)^*dx\,dy\,dt:L^2(\mathbb{R}^n)\rightarrow L^2(\mathbb{R}^n), $$ becomes equivalent to the following Fourier transform condition \begin{equation}\label{FT:hn:cond} \sup_{\lambda\in \mathbb{R}^*}\Vert \pi_\lambda(1+\mathcal{L}_{sub})^{\frac{Q\theta}{2\nu}}\widehat{K}(\lambda) \Vert_{\textnormal{op}}=\sup_{\lambda\in \mathbb{R}^*}\Vert \left(1+|\lambda|(-\Delta_u+|u|^2)\right)^{\frac{(n+1)\theta}{2}}\widehat{K}(\lambda) \Vert_{\textnormal{op}}<\infty. \end{equation} In view of Theorem \ref{main:th}, a convolution operator $T$ (defined via $Tf=f\ast K,$ $f\in C^\infty(\mathbb{H}_n)$) associated to a right-convolution kernel $K$ satisfying \eqref{FT:hn:cond} and the kernel condition \begin{equation} \sup_{R>0}\,\sup_{\{Y\in \mathbb{H}_n:|Y|<R\}} \smallint\limits_{\{X\in \mathbb{H}_n: |X|\geq 2R^{1-\theta} \}}|K(Y^{-1}X)-K(X)|dX <\infty, \end{equation} admits a bounded extension from $L^1(\mathbb{H}_n) $ into $ L^{1,\infty}(\mathbb{H}_n),$ that is, the operator is of weak (1,1) type.
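As a quick consistency check on the generators above (note that $Y_j$ differentiates in the $y_j$-variable), the canonical relation $[X_j,Y_j]=T$ can be verified directly by applying both compositions to a smooth function $f$:

```latex
\begin{align*}
X_jY_jf &= \Big(\partial_{x_j}-\tfrac{y_j}{2}\partial_{t}\Big)\Big(\partial_{y_j}f+\tfrac{x_j}{2}\partial_{t}f\Big)
         = \partial_{x_j}\partial_{y_j}f+\tfrac{1}{2}\partial_{t}f+\tfrac{x_j}{2}\partial_{x_j}\partial_{t}f
           -\tfrac{y_j}{2}\partial_{t}\partial_{y_j}f-\tfrac{x_jy_j}{4}\partial_{t}^{2}f,\\
Y_jX_jf &= \Big(\partial_{y_j}+\tfrac{x_j}{2}\partial_{t}\Big)\Big(\partial_{x_j}f-\tfrac{y_j}{2}\partial_{t}f\Big)
         = \partial_{y_j}\partial_{x_j}f-\tfrac{1}{2}\partial_{t}f-\tfrac{y_j}{2}\partial_{y_j}\partial_{t}f
           +\tfrac{x_j}{2}\partial_{t}\partial_{x_j}f-\tfrac{x_jy_j}{4}\partial_{t}^{2}f.
\end{align*}
```

Subtracting, $[X_j,Y_j]f=\partial_tf=Tf$, while all other brackets of the generators vanish, consistent with the relations used above.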
\begin{align*} I_k &\leq \smallint\limits_{G} |G_{\varkappa,k}(t)|dt \smallint\limits_{|z|>|y|^{1-a}} |K(y^{-1}z)-K(z)|dz\\ &\lesssim_{\phi}2^{-k\varkappa}\sum_{k=1}^{m}\smallint\limits_{|z|>|y|^{1-a}} |K\left(\left[ \frac{k}{m}\cdot y\right]^{-1}z\right)-K\left(\left[ \frac{k-1}{m}\cdot y\right]^{-1}z\right)|dz, \end{align*} for any $m\in \mathbb{N}.$ Let $m$ be the least positive integer such that \begin{equation} 2\left|\frac{1}{m}\cdot y\right|^{1-\theta}<|y|^{1-a}-|y|. \end{equation} Let $t_{k}:=\frac{k}{m}.$ Then, \begin{align*} &\sum_{k=1}^{m}\smallint\limits_{|z|>|y|^{1-a}} |K\left(\left[ \frac{k}{m}\cdot y\right]^{-1}z\right)-K\left(\left[ \frac{k-1}{m}\cdot y\right]^{-1}z\right)|dz\\ &=\sum_{k=1}^{m}\smallint\limits_{|z|>|y|^{1-a}} |K\left(\left[t_k\cdot y\right]^{-1}z\right)-K\left(\left[ t_{k-1}\cdot y\right]^{-1}z\right)|dz\\ &=\sum_{k=1}^{m}\smallint\limits_{|z|>|y|^{1-a}} |K\left(\left[D_{t_k} (y)\right]^{-1}z\right)-K\left(\left[ D_{t_{k-1}}( y)\right]^{-1}z\right)|dz\\ &=\sum_{k=1}^{m}\smallint\limits_{|z|>|y|^{1-a}} |K\left(D_{t_k}\left[ y^{-1}\right]z\right)-K\left( D_{t_{k-1}}\left[ y\right]^{-1}z\right)|dz\\ &=\sum_{k=2}^{m}\smallint\limits_{|z|>|y|^{1-a}} |K\left(D_{t_k}\left[ y^{-1}\right]z\right)-K\left( D_{t_{k-1}/t_k}(D_{t_k}\left[ y\right]^{-1})z\right)|dz\\ &+\smallint\limits_{|z|>|y|^{1-a}} |K\left(\left[ \frac{1}{m}\cdot y\right]^{-1}z\right)-K\left(z\right)|dz.
\end{align*}For $k\geq 2,$ the change of variables $z=D_{t_{k-1}/t_k}(x)$ allows us to write \begin{align*} & \smallint\limits_{|z|>|y|^{1-a}} |K\left(D_{t_k}\left[ y^{-1}\right]z\right)-K\left( D_{t_{k-1}/t_k}D_{t_k}\left[ y\right]^{-1}z\right)|dz\\ &=\left(\frac{t_k}{t_{k-1}}\right)^Q \end{align*} \begin{align*} \Vert \widehat{G}_{\varkappa,k} (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))\pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}}\Vert_{L^2(\widehat{G})}=\Vert k_\alpha\ast (1-\phi)(\mathcal{R})\delta \ast G_{\varkappa,k}\Vert_{L^2(G)}. \end{align*}Using Young's convolution inequality, we have that \begin{align*} & \Vert k_\alpha\ast \mathscr{F}^{-1}[(1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))] \ast G_{\varkappa,k}\Vert_{L^2(G)}\\ &\leq C \Vert k_\alpha\ast \mathscr{F}^{-1}[(1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))]\Vert_{L^2(G)}\times \Vert G_{\varkappa,k}\Vert_{L^1(G)}\\ &\leq C\, 2^{-k\varkappa}\Vert k_\alpha\ast \mathscr{F}^{-1}[(1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))]\Vert_{L^2(G)}, \end{align*} where in the last line we have applied \eqref{G:B:K}. Using again the Plancherel theorem we deduce that \begin{align*} \Vert k_\alpha\ast \mathscr{F}^{-1}[(1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))]\Vert_{L^2(G)} &=\Vert(1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))\pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}} \Vert_{L^2(\widehat{G})} \\ &=\Vert \pi(\mathcal{R})^{-\frac{Q\alpha}{2\nu}}(1-\phi)((2^{-k}\cdot \pi)(\mathcal{R})) \Vert_{L^2(\widehat{G})} \\ &=\Vert\mathcal{R}^{-\frac{Q\alpha}{2\nu}}\mathscr{F}^{-1}[(1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))] \Vert_{L^2(G)}.
\end{align*}Now, let us make use of the homogeneity argument used above. As before, we know that the operator $\mathcal{R}^{-\frac{Q\alpha}{2\nu}}$ has homogeneity order equal to $-Q\alpha/2.$ With $r=2^{-k},$ and ${\tilde\Phi}_{r}=r^{-Q}[(1-\phi)(\mathcal{R})\delta](r^{-1}\cdot),$ we have $\widehat{{\tilde\Phi}}_r(\pi)=\widehat{{\tilde\Phi}}_1(r\cdot \pi).$ In consequence, $$ (1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))=(1-\phi)((r\cdot \pi)(\mathcal{R}))= \widehat{(1-\phi)(\mathcal{R})\delta}(r\cdot \pi)= \widehat{{\tilde\Phi}}_1(r\cdot \pi) $$ and \begin{align*} \mathcal{R}^{-\frac{Q\alpha}{2\nu}}\mathscr{F}_{G}^{-1}[(1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))]=\mathcal{R}^{-\frac{Q\alpha}{2\nu}}\mathscr{F}_{G}^{-1}[(1-\phi)((r\cdot \pi)(\mathcal{R}))]=\mathcal{R}^{-\frac{Q\alpha}{2\nu}}\mathscr{F}_{G}^{-1}[\widehat{\tilde{\Phi}}_r(\pi)] \end{align*} $$ =\mathcal{R}^{-\frac{Q\alpha}{2\nu}}\tilde{\Phi}_r. $$ So, we have that \begin{align*} &\Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}\mathscr{F}_{G}^{-1}[(1-\phi)((2^{-k}\cdot \pi)(\mathcal{R}))] \Vert_{L^2(G)}\\ &=\Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}\tilde{\Phi}_r \Vert_{L^2(G)}=r^{-Q}\Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}[(1-\phi)(\mathcal{R})\delta(r^{-1}\cdot)] \Vert_{L^2(G)}\\ &=r^{-Q} r^{\frac{Q\alpha}{2}}\Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}[(1-\phi)(\mathcal{R})\delta](r^{-1}\cdot) \Vert_{L^2(G)}=r^{-Q} r^{\frac{Q\alpha}{2}}r^{\frac{Q}{2}}\Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}[(1-\phi)(\mathcal{R})\delta] \Vert_{L^2(G)}\\ &=2^{-k(\frac{Q\alpha}{2}-\frac{Q}{2})}\Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}[(1-\phi)(\mathcal{R})\delta] \Vert_{L^2(G)}. \end{align*}In view of the Hulanicki theorem, $\phi(\mathcal{R})\delta\in \mathscr{S}(G),$ and then $$ \Vert \mathcal{R}^{-\frac{Q\alpha}{2\nu}}[(1-\phi)(\mathcal{R})\delta] \Vert_{L^2(G)}<\infty,$$ in view of Corollary 4.3.11 in \cite{FischerRuzhanskyBook}.
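The factors $r^{\frac{Q\alpha}{2}}$ and $r^{\frac{Q}{2}}$ in the last chain come from the homogeneity of $\mathcal{R}^{-\frac{Q\alpha}{2\nu}}$ and from the behaviour of the $L^2(G)$-norm under dilations: since $d(r\cdot x)=r^{Q}dx$ on a homogeneous group, for $g\in L^2(G)$ one has

```latex
\Vert g(r^{-1}\cdot)\Vert_{L^2(G)}^{2}
  =\smallint\limits_{G}|g(r^{-1}\cdot x)|^{2}\,dx
  =r^{Q}\smallint\limits_{G}|g(u)|^{2}\,du
  =r^{Q}\Vert g\Vert_{L^2(G)}^{2},
```

so that $\Vert g(r^{-1}\cdot)\Vert_{L^2(G)}=r^{Q/2}\Vert g\Vert_{L^2(G)}$; with $r=2^{-k}$ this combines with the factor $r^{-Q}$ from $\tilde\Phi_r$ and the homogeneity order $-Q\alpha/2$ to give the exponent $2^{-k(\frac{Q\alpha}{2}-\frac{Q}{2})}$ above.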
2205.03451v2
http://arxiv.org/abs/2205.03451v2
Random meander model for links
\documentclass[12pt, a4paper]{amsart} \usepackage{amsmath,amssymb} \usepackage{accents} \usepackage{graphicx} \usepackage{verbatim} \usepackage{enumitem} \usepackage{tikz} \usepackage{caption} \usepackage{subcaption} \usepackage{tabu} \usepackage{longtable} \usepackage{array} \usepackage{hyperref} \usepackage[normalem]{ulem} \usepackage{algorithm} \usepackage[noend]{algpseudocode} \everymath{\displaystyle} \usetikzlibrary{decorations.pathreplacing,angles,quotes} \newcommand{\n}[1]{{\color{blue}{#1}}} \newcommand{\owad}[1]{{\color{red}{#1}}} \textwidth = 6 in \textheight = 8 in \oddsidemargin = 0.0 in \evensidemargin = 0.0 in \hoffset = 0.2 in \headsep = 0.2 in \parskip = 0.04in \newlength{\dhatheight} \newcommand{\doublehat}[1]{ \settoheight{\dhatheight}{\ensuremath{\hat{#1}}} \addtolength{\dhatheight}{-0.35ex} \hat{\vphantom{\rule{1pt}{\dhatheight}} \smash{\hat{#1}}}} \theoremstyle{definition} \newtheorem{theorem}{Theorem}[section] \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{notation}[theorem]{Notation} \newtheorem{example}[theorem]{Example} \newtheorem{fact}[theorem]{Fact} \newtheorem{question}[theorem]{Question} \newtheorem{claim}[theorem]{Claim} \newenvironment{mylist}{\begin{list}{}{ \setlength{\parskip}{0mm} \setlength{\topsep}{2mm} \setlength{\parsep}{0mm} \setlength{\itemsep}{0mm} \setlength{\labelwidth}{7mm} \setlength{\labelsep}{3mm} \setlength{\itemindent}{0mm} \setlength{\leftmargin}{12mm} \setlength{\listparindent}{6mm} }}{\end{list}} \newcommand{\GAP}{{\sf GAP}} \newcommand{\KBMAG}{{\sf KBMAG}} \newcommand{\N}{{\mathbb N}} \newcommand{\R}{{\mathbb R}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\sub}{\mathrm{sub}} \newcommand{\pwsub}{\mathrm{pwsub}} \newcommand{\ifof}{if and only if\ } \newcommand{\ep}{\epsilon} \newcommand{\dd}{\Delta} 
\newcommand{\ra}{\rightarrow} \newcommand{\ras}{{\stackrel{~~*}{\ra}}} \newcommand{\ga}{\Gamma} \newcommand{\xx}{X^{(1)}} \newcommand{\pp}{{\mathcal{P}}} \newcommand{\bo}{{\partial}} \newcommand{\td}{\widetilde d} \newcommand{\tb}{\widetilde B} \newcommand{\tdd}{\widetilde \dd} \newcommand{\nn}{\N[\frac{1}{4}]} \newcommand{\ps}{\Psi} \newcommand{\ph}{\Sigma} \newcommand{\bul}{\bullet} \newcommand{\s}{\sigma} \newcommand{\dmc}{diameter} \newcommand{\gc}{{g,a}} \newcommand\hu{{\tilde u}} \newcommand\ct{{T}} \newcommand\hc{{f}} \newcommand\he{{\hat e}} \newcommand\hp{{\hat p}} \newcommand\te{{\tilde e}} \newcommand\pe{{e"}} \newcommand\hhe{{\widetilde e}} \newcommand\hhr{{\widetilde r}} \newcommand\hhs{{\widetilde s}} \newcommand\hhp{{\widetilde p}} \newcommand\tx{{\tilde x}} \newcommand\ve{{\vec e}} \newcommand{\kxi}{k_r^i} \newcommand{\kte}{k_\cc^e} \newcommand{\kxe}{k_r^e} \newcommand{\hr}{c(\ves)} \newcommand{\wa}{e_{w,a}} \newcommand{\xa}{e_{x,a}} \newcommand{\lbl}{{\mathsf{label}}}\newcommand{\rep}{\chi} \newcommand{\init}{{\mathsf{i}}} \newcommand{\term}{{\mathsf{t}}} \newcommand{\skz}{^{(0)}} \newcommand{\sko}{^{(1)}} \newcommand{\cay}{X} \newcommand{\ch}[3][2]{ {{#2} \choose{#3}} } \newcommand{\mstack}[3][2]{ {{#2} \choose{#3}} } \newlist{pcases}{enumerate}{1} \setlist[pcases]{ label={\em{Case~\arabic*:}}\protect\thiscase.~, ref=\arabic*, align=left, labelsep=0pt, leftmargin=0pt, labelwidth=0pt, parsep=0pt } \newcommand{\case}[1][]{ \if\relax\detokenize{#1}\relax \def\thiscase{} \else \def\thiscase{~#1} \item } \newcommand{\coment}[1]{\marginpar{\sffamily{\tiny #1\par}\normalfont}} \everymath{\displaystyle} \title{Random meander model for links} \author{Nicholas Owad} \author{Anastasiia Tsvietkova} \begin{document} \begin{abstract} We suggest a new random model for links based on meander diagrams and graphs. 
We then prove that trivial links appear with vanishing probability in this model, no link $L$ is obtained with probability 1, and there is a lower bound for the number of non-isotopic knots obtained for a fixed number of crossings. A random meander diagram is obtained through matching pairs of parentheses, a well-studied problem in combinatorics. Hence tools from combinatorics can be used to investigate properties of random links in this model, and, moreover, of the respective 3-manifolds that are link complements in the 3-sphere. We use this for exploring geometric properties of a link complement. Specifically, we give the expected twist number of a link diagram and use it to bound the expected hyperbolic and simplicial volume of random links. The tools from combinatorics that we use include Catalan and Narayana numbers, and Zeilberger's algorithm. \ \textbf{Keywords:} Random links, knots, meanders, link complement, hyperbolic volume \ \textbf{MSC 2020:} 57K10, 57K32, 05C80 \end{abstract} \maketitle \section{Introduction} In recent years, there has been increased interest in using probabilistic methods in low-dimensional geometry and topology. A number of models for 3-manifolds and links have appeared, and were used to study their topological properties and various invariants. For example, Dunfield and W. Thurston \cite{DunfieldThurston} studied finite covers of random 3-manifolds, and Even-Zohar, Hass, Linial and Nowik \cite{EZHLN4} studied the linking number and Casson invariants in the random petaluma model for knots. One of the benefits of using random models is that they often allow one to check the typical behavior of an object (e.g. of a link) beyond well-studied families. In this paper, we introduce a new model for links called the {\em random meander link} model. For an overview of previously existing models for random knots and links, we direct the reader to the exposition by Even-Zohar \cite{EvenZohar}. For our model, we use meander diagrams.
Informally, a meander is a pair of curves in the plane, where one curve is assumed to be a straight line and the other one ``meanders'' back and forth over it. In a meander diagram, we assume that the ends of these two curves are connected. The study of meanders dates back at least to Poincar\'e's work on dynamical systems on surfaces. Meanders naturally appear in various areas of mathematics (see, for example, the combinatorial study by Franz and Earnshaw \cite{FE}), as well as in the natural sciences. Every knot is known to have a meander diagram \cite[Theorem 1.2]{AST}, and we generalize these diagrams so that every link has one too. This is described in Section \ref{sec:model}. It does not follow, however, that a random model that produces various meander diagrams will produce all knots or links: indeed, many distinct meander diagrams may represent isotopic (i.e. equivalent) links. We address this separately, as explained below. The first important question about any random link model is whether it produces non-trivial links with high probability. The proof of this is often far from trivial. For example, for the \textit{grid walk model}, it was conjectured by Delbruck in 1961 that a random knot $K_n$ is knotted with high probability \cite{Delbruck}. Only in 1988 and 1989 did two proofs of this appear, one by Sumners and Whittington \cite{SW}, and another by Pippenger \cite{Pippenger}. It was also conjectured, by Frisch and Wasserman in 1961, that a different model, called the \textit{polygonal walk model}, produces unknots with vanishing probability \cite{FW}. The first proof of this appeared in 1994, for Gaussian-steps polygons, due to Diao, Pippenger, and Sumners \cite{DPS}. For another, more recent random knot model, the \textit{petaluma model} for knots, the paper from 2018 by Even-Zohar, Hass, Linial and Nowik \cite{EZHLN1} is mostly devoted to proving that the probability of obtaining every specific knot type decays to zero as the number of petals grows.
Yet another recent work, by Chapman from 2017 \cite{Chapman}, studies the \textit{random planar diagram model} and shows that such diagrams are non-trivial with high probability. This list of results is not exhaustive. In Sections \ref{SecCircles} and \ref{Unlinks}, we prove that as the number of crossings in a link diagram grows, we obtain a non-trivial knot or link with probability 1 in the \textit{random meander model}. This is the first main result of this paper, given in Theorem \ref{thm:Rare}. While we use a programmed worksheet for Zeilberger's algorithm as a shortcut (which is mathematically rigorous), the main part of the proof is theoretical rather than computer-assisted, and relies on observations concerning the topology of knots, graphs and combinatorics. In particular, we show that a certain fragment (``a pierced circle'') appears in our random meander link diagrams with high probability. This can be compared with Chapman's approach \cite{Chapman}: he shows that another fragment, similar to a trefoil, appears in random planar diagrams with high probability as well. As a corollary, we also show that as the number of crossings and components in a random diagram grows, no link is obtained with probability 1 in our random model, and therefore the model yields infinitely many non-isotopic links (Proposition \ref{nonisotopic}). We proceed to give a lower bound on the number of distinct (up to isotopy) knots produced by our model for a fixed number of crossings in Proposition \ref{prime}. In Section \ref{VolumeEtc}, we give an application of this random model to low-dimensional topology by finding the expected number of twists in a random link diagram. This is Theorem \ref{twists}. This number is related to geometric properties of the respective link complement in the 3-sphere, as is shown, for example, in the work on hyperbolic volume by Lackenby \cite{Lackenby} and Purcell \cite{Purcell}, or in the work on cusp volume by Lackenby and Purcell \cite{LackenbyPurcell2}.
Applying the former paper by Lackenby and its extension concerning simplicial volume by Dasbach and Tsvietkova \cite{DT}, we give bounds for the expected hyperbolic or simplicial volume of links in Corollary \ref{volume}, in the spirit of Obeidin's observations \cite{Obeidin}. Together, Theorem \ref{twists} and Corollary \ref{volume} can be seen as the second main result of the paper. We conclude with some open questions concerning hyperbolicity and volume of random meander links. One of the advantages of the random meander link model is that constructing meander diagrams corresponds in a certain way to a well-known combinatorial problem about pairs of parentheses which are correctly matched. Thus the number of random meander diagrams has a simple formula in terms of Catalan numbers. Moreover, topological and diagrammatic properties of random meander links translate into combinatorial identities, and can be investigated using tools from combinatorics. For example, to prove that unlinks are rare, we use Poincar\'e's theorem about recurrence relations, and Zeilberger's algorithm, which finds a polynomial recurrence for hypergeometric identities. Finding the mathematical expectation of the volume of a link complement involves summing Narayana numbers, another well-known sequence. This is perhaps an unexpected way to look at knot theory problems. \section{Random meander link model}\label{sec:model} We begin by providing the background required to define our random model. The following result is attributed to Gauss. \begin{theorem}\label{thm:AST}{\cite[Theorem 1.2]{AST}} Every knot has a projection that can be decomposed into two sub-arcs such that each sub-arc never crosses itself. \end{theorem} By planar isotopy, one of these sub-arcs can be taken to be a subset of the $x$-axis, which we will call the {\em axis}. The resulting diagram is called {\em straight}.
The other sub-arc we call the {\em meander curve.} We will call a segment of the meander curve between two consecutive crossings an {\em arc}. The {\em {complementary} axis} is the $x$-axis minus the axis. Then every arc that makes up the meander curve either crosses the complementary axis or not. If an arc does not cross the complementary axis, we call it a {\em contained arc} and if it does cross the complementary axis, it is an {\em uncontained arc}. A straight diagram is said to be a {\em meander diagram} if there are no uncontained arcs. See Figure \ref{fig:AST}. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=.4] \begin{scope}[xshift = -8cm, scale = .8] \draw [ultra thick] (-.2,.-.2) to [out=40, in=-90] (1.5,2.4); \draw [ultra thick] (-.2,-.2) to [out=180+40, in=90] (-1.2,-1.6) to [out=-90, in=180-30] (0,-3.2); \draw [line width=0.13cm, white] (0,-3.2) to [out=-30, in=-90] (3.5,-.6) to [out=90, in=-20] (1.6 ,2) to [out=160, in=20] (-1.3 ,2.1); \draw [ultra thick] (0,-3.2) to [out=-30, in=-90] (3.5,-.6) to [out=90, in=-20] (1.6 ,2) to [out=160, in=20] (-1.3 ,2.1); \draw [ultra thick] (.2,-.2) to [out=-40, in=120] (1.05,-1); \draw [ultra thick] (1.05,-1) to [out=300, in=70] (1.1,-2.3); \draw [line width=0.13cm, white] (1.1,-2.3) to [out=-110, in=27] (0,-3.25) to [out=207, in=-90] (-3.5,-.6) to [out=90, in=200] (-1.6 ,2); \draw [ultra thick] (1.1,-2.3) to [out=-110, in=27] (0,-3.25) to [out=207, in=-90] (-3.5,-.6) to [out=90, in=200] (-1.6 ,2); \draw [line width=0.13cm, white] (-.2,.2) to [out=140, in=-90] (-1.5,2.4); \draw [ultra thick] (-.2,.2) to [out=140, in=-90] (-1.5,2.4); \draw [ultra thick] (-1.5,2.4) arc [radius=1.5, start angle=180, end angle= 0]; \end{scope} \begin{scope}[xshift = -1cm, yshift=0cm, scale=1.1] \draw [ultra thick] (3,0) arc [radius=1/2, start angle=180, end angle= 0]; \draw [ultra thick] (2,0) arc [radius=3/2, start angle=180, end angle= 0]; \draw [ultra thick] (-1,0) arc [radius=1, start angle=180, end angle= 0]; \draw 
[ultra thick] (-1,0) arc [radius=5/2, start angle=180, end angle= 360]; \draw [ultra thick] (0,0) arc [radius=3/2, start angle=180, end angle= 360]; \draw [ultra thick] (1,0) arc [radius=1/2, start angle=180, end angle= 360]; \draw [line width=0.13cm, white] (0.5,0) to (1.85,0); \draw [line width=0.13cm, white] (2.15,0) to (3.5,0); \draw [ultra thick] (0,0) to (1.8,0); \draw [ultra thick] (2.2,0) to (3.8,0); \draw [ultra thick] (4.2,0) to (5,0); \draw [ultra thick, line cap = round] (5,0) to (5,0); \draw [ultra thick, line cap = round] (0,0) to (0,0); \end{scope} \begin{scope}[ xshift=8cm, scale=1.2] \draw [ultra thick] (0,0) arc [radius=5/2, start angle=180, end angle= 360]; \draw [ultra thick] (1,0) arc [radius=1/2, start angle=180, end angle= 0]; \draw [ultra thick] (1,0) arc [radius=3/2, start angle=180, end angle= 360]; \draw [ultra thick] (2,0) arc [radius=1/2, start angle=180, end angle= 360]; \draw [ultra thick] (3,0) arc [radius=3/2, start angle=180, end angle= 0]; \draw [ultra thick] (4,0) arc [radius=1/2, start angle=180, end angle= 0]; \draw [line width=0.13cm, white] (1.5,0) to (2.85,0); \draw [line width=0.13cm, white] (4.15,0) to (5.5,0); \draw [ultra thick] (0,0) to (.8,0); \draw [ultra thick] (1.2,0) to (2.8,0); \draw [ultra thick] (3.2,0) to (3.8,0); \draw [ultra thick] (4.2,0) to (6,0); \draw [ultra thick, line cap = round] (6,0) to (6,0); \draw [ultra thick, line cap = round] (0,0) to (0,0); \end{scope} \end{tikzpicture} \end{center} \caption{Three diagrams of the figure-eight knot. On the left, the standard diagram; in the center, a straight diagram; on the right, a meander diagram. }\label{fig:AST} \end{figure} For study of meander knots and links from the point of view of knot theory and low-dimensional topology, see for example \cite{JR, Owad1, Owad2}. 
As we will see, variations of the constructions that we describe below (meander knot, meander link, meander link with multiple parallel strands) appeared in the literature before, at times in a somewhat different form, but this is the first time they are used for a random model. The following fact was proved by Owad in \cite{Owad2}, Theorem 2.8, and was known to Adams, Shinjo, and Tanaka (see Fig. 2 in \cite{AST}). \begin{theorem}\label{thm:OwadMeander} Every knot has a meander diagram. \end{theorem} A {\em meander graph} is the 4-valent planar graph obtained by replacing each crossing of a meander diagram with a vertex. The graph's edges are the upper semi-circles above the axis, the lower semi-circles below the axis, and every segment from a vertex to a vertex along the axis. In addition, there can be an edge of the graph made up of either the leftmost or the rightmost segment of the axis and one semicircle adjacent to that segment. Such an edge may be a loop, i.e. its two vertex endpoints may coincide. By a pair of parentheses we mean two matched parentheses, left and right: \texttt{()}. Further, we will refer to any correctly matched string of $s$ pairs of parentheses as a \textit{p-string} of length $s$. Put a collection of upper semi-circles in correspondence with a $p$-string, as in Figure \ref{fig:parenLink}, where a pair $a$ is inside the pair $b$ if and only if the respective semicircle for $a$ is inside the respective semicircle for $b$. Do the same for lower semi-circles. To generate a random meander graph, we will use two $p$-strings, each of length $s$.
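A p-string is exactly a balanced string of parentheses, so the p-strings of length $s$ are counted by the Catalan number $C_s=\frac{1}{s+1}\binom{2s}{s}$, and a uniformly random one can be drawn by rejection sampling. A minimal sketch in Python (the function names are ours, for illustration only):

```python
import random
from math import comb

def catalan(s):
    # C_s = (1/(s+1)) * binom(2s, s): the number of p-strings of length s
    return comb(2 * s, s) // (s + 1)

def is_p_string(w):
    # a string of '(' and ')' is correctly matched iff the running
    # depth never goes negative and ends at zero
    depth = 0
    for c in w:
        depth += 1 if c == '(' else -1
        if depth < 0:
            return False
    return depth == 0

def random_p_string(s, rng=random):
    # uniform over the catalan(s) p-strings: a uniform shuffle of
    # s '(' and s ')' conditioned on validity is uniform, and the
    # acceptance probability is 1/(s+1), so s+1 tries on average
    chars = ['('] * s + [')'] * s
    while True:
        rng.shuffle(chars)
        if is_p_string(chars):
            return ''.join(chars)
```

Two independent samples of `random_p_string(s)`, one for the upper and one for the lower semi-circles, then determine a random meander graph $\Gamma_{2s-1}$.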
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale = .6] \draw [very thick] (-5.85,0) to (-2.45,0); \node [above] at (-4,0) {\texttt{(()())(())}}; \node [below] at (-4.29,0) { \texttt{()((()()))}}; \begin{scope}[scale = .8] \draw [very thick, line cap=round] (0,0) to (10,0); \draw [very thick] (10,0) arc [radius=1.5, start angle=0, end angle= 180]; \draw [very thick] (7,0) arc [radius=.5, start angle=0, end angle= -180]; \draw [very thick] (6,0) arc [radius=2.5, start angle=0, end angle= 180]; \draw [very thick] (1,0) arc [radius=.5, start angle=0, end angle= -180]; \draw [very thick] (9,0) arc [radius=.5, start angle=0, end angle= 180]; \draw [very thick] (8,0) arc [radius=2.5, start angle=0, end angle= -180]; \draw [very thick] (4,0) arc [radius=.5, start angle=180, end angle= 0]; \draw [very thick] (4,0) arc [radius=.5, start angle=180, end angle= 360]; \draw [very thick] (3,0) arc [radius=.5, start angle=0, end angle= 180]; \draw [very thick] (2,0) arc [radius=3.5, start angle=180, end angle= 360]; \foreach \a in {1,...,9}{ \draw [fill] (\a,0) circle [radius=0.1]; } \end{scope} \begin{scope}[xshift = 9cm, scale = .8] \draw [very thick] (10,0) arc [radius=1.5, start angle=0, end angle= 180]; \draw [very thick] (7,0) arc [radius=.5, start angle=0, end angle= -180]; \draw [very thick] (6,0) arc [radius=2.5, start angle=0, end angle= 180]; \draw [very thick] (1,0) arc [radius=.5, start angle=0, end angle= -180]; \draw [very thick] (9,0) arc [radius=.5, start angle=0, end angle= 180]; \draw [very thick] (8,0) arc [radius=2.5, start angle=0, end angle= -180]; \draw [very thick] (4,0) arc [radius=.5, start angle=180, end angle= 0]; \draw [very thick] (4,0) arc [radius=.5, start angle=180, end angle= 360]; \draw [very thick] (3,0) arc [radius=.5, start angle=0, end angle= 180]; \draw [very thick] (2,0) arc [radius=3.5, start angle=180, end angle= 360]; \foreach \a in {.2}{ \draw [line width = .13cm, white] (\a,0) to (3-\a,0); \draw [line width = .13cm, white] 
(3+\a,0) to (4-\a,0); \draw [line width = .13cm, white] (4+\a,0) to (7-\a,0); \draw [line width = .13cm, white] (7+\a,0) to (9-\a,0); \draw [line width = .13cm, white] (9+\a,0) to (10-\a,0); \draw [very thick] (0,0) to (3-\a,0); \draw [very thick] (3+\a,0) to (4-\a,0); \draw [very thick] (4+\a,0) to (7-\a,0); \draw [very thick] (7+\a,0) to (9-\a,0); \draw [very thick] (9+\a,0) to (10,0); } \draw [very thick, line cap=round] (0,0) to (0,0); \draw [very thick, line cap=round] (10,0) to (10,0); \end{scope} \end{tikzpicture} \caption{Left: two $p$-strings of length 5. Middle: the corresponding meander graph $\Gamma_9$. Right: the random meander link $L_9$ obtained by adding crossing information string $V_9 = OOUUOOUOU$ to $\Gamma_9$.} \label{fig:parenLink} \end{center} \end{figure} Let $\Gamma_{2s-1}$ be a meander graph generated with two $p$-strings of length $s$. The \textit{crossing information string} is a word $V_{2s-1}$ of length ${2s-1}$ in the letters $U$ and $O$, one letter for each vertex. The pair ($\Gamma_{2s-1}, V_{2s-1}$) defines a meander link $L_{2s-1}$ as follows: every letter in $V_{2s-1}$, read from left to right, corresponds to either an overpass ($O$) or an underpass ($U$) of the axis at the crossing drawn instead of the respective vertex of $\Gamma_{2s-1}$, taken from left to right. Call the link component that contains the axis the {\em axis component}. All crossings of the link involve this component. As a result, the other link components are unknots and are not linked with each other. Now we generalize this construction. For every link component of $L_{2s-1}$, take $r-1$ additional strands parallel to it. Instead of each crossing of $L_{2s-1}$ we have $r^2$ crossings in the new link. Substitute each of them by a vertex again. Denote the resulting graph by $\Gamma^r_{2s-1}$ and call it the $(r,{2s-1})$-meander graph. Denote the collection of $r$ words, where each word consists of $r({2s-1})$ letters $O$ and $U$, by $V^r_{2s-1}$, and call it the $(r,{2s-1})$-crossing information string.
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale = .7] \def \w {.4} \begin{scope}[xshift = 0cm] \draw [line width = \w mm, line cap=round] (0,0) to (4,0); \draw [line width = \w mm] (3,0) arc [radius=.5, start angle=0, end angle= 180]; \draw [line width = \w mm] (4,0) arc [radius=1.5, start angle=0, end angle= 180]; \draw [line width = \w mm] (3,0) arc [radius=1.5, start angle=0, end angle= -180]; \draw [line width = \w mm] (2,0) arc [radius=.5, start angle=0, end angle= -180]; \foreach \a in {1,2,3}{ \draw [fill] (\a,0) circle [radius=0.1]; } \end{scope} \begin{scope}[xshift = 6cm] \foreach \a in {0,.2,-.2}{ \draw [line width = \w mm, line cap=round] (-\a,\a) to (4-\a,\a); } \draw [line width = \w mm] (-.2,.2) to (-.2,-.2); \draw [line width = \w mm] (0,0) to (0,-.2); \draw [line width = \w mm] (4.2,.2) to (4.2,-.2); \draw [line width = \w mm] (4,0) to (4,.2); \foreach \b in {1,...,3}{ \foreach \a in {0,.2,-.2}{ \draw [line width = \w mm] (\a+\b,.2) to (\b+\a,-.2); } } \foreach \a in {0,-.2,.2}{ \foreach \b in {.2}{ \draw [line width = \w mm] (3+\a,\b) arc [radius=.5+\a, start angle=0, end angle= 180]; \draw [line width = \w mm] (4+\a,\b) arc [radius=1.5+\a, start angle=0, end angle= 180]; } \foreach \b in {-.2}{ \draw [line width = \w mm] (3+\a,\b) arc [radius=1.5+\a, start angle=0, end angle= -180]; \draw [line width = \w mm] (2+\a,\b) arc [radius=.5+\a, start angle=0, end angle= -180]; } } \end{scope} \begin{scope}[xshift = 17cm] \node [] at (-3.3,1) {\Small{$UUOOUUOUO$}}; \node [] at (-3.3,0) {\Small{$OOUUOUUOU$}}; \node [] at (-3.3,-1) {\Small{$UOOOOUUOO$}}; \foreach \a in {0,.2,-.2}{ \draw [line width = \w mm, line cap=round] (-\a,\a) to (4-\a,\a); } \draw [line width = \w mm] (-.2,.2) to (-.2,-.2); \draw [line width = \w mm] (0,0) to (0,-.2); \draw [line width = \w mm] (4.2,.2) to (4.2,-.2); \draw [line width = \w mm] (4,0) to (4,.2); \foreach \b in {1,...,3}{ \foreach \a in {0,.2,-.2}{ \draw [line width = \w mm] (\a+\b,.2) to (\b+\a,-.2); } } 
\foreach \a in {0,-.2,.2}{ \foreach \b in {.2}{ \draw [line width = \w mm] (3+\a,\b) arc [radius=.5+\a, start angle=0, end angle= 180]; \draw [line width = \w mm] (4+\a,\b) arc [radius=1.5+\a, start angle=0, end angle= 180]; } \foreach \b in {-.2}{ \draw [line width = \w mm] (3+\a,\b) arc [radius=1.5+\a, start angle=0, end angle= -180]; \draw [line width = \w mm] (2+\a,\b) arc [radius=.5+\a, start angle=0, end angle= -180]; } } \foreach \a in {1.2,1.8,2.8,3.2}{ \draw [line width = 2*\w mm, white] (\a - .1,.2) to (\a + .1,.2); \draw [line width = \w mm, black] (\a - .1,.2) to (\a + .1,.2); } \foreach \a in {.8,1,2,2.2,3}{ \draw [line width = 2*\w mm, white] (\a,.25) to (\a,.15); \draw [line width = \w mm] (\a,.25) to (\a,.15); } \foreach \a in {.8,1,2,3}{ \draw [line width = 2*\w mm, white] (\a - .1,0) to (\a + .1,0); \draw [line width = \w mm, black] (\a - .1,0) to (\a + .1,0); } \foreach \a in {1.2,1.8,2.2,2.8,3.2}{ \draw [line width = 2*\w mm, white] (\a,.05) to (\a,-.05); \draw [line width = \w mm] (\a,.05) to (\a,-.05); } \foreach \a in {1,1.2,1.8,2,3,3.2}{ \draw [line width = 2*\w mm, white] (\a - .1,-.2) to (\a + .1,-.2); \draw [line width = \w mm, black] (\a - .1,-.2) to (\a + .1,-.2); } \foreach \a in {.8,2.2,2.8}{ \draw [line width = 2*\w mm, white] (\a,-.25) to (\a,-.15); \draw [line width = \w mm] (\a,-.25) to (\a,-.15); } \end{scope} \end{tikzpicture} \caption{From left to right: an example of a graph $\Gamma_3$, a graph $\Gamma_3^3$, a crossing information string $V^3_3$, and the knot obtained as $(\Gamma_3^3, V^3_3)$.} \label{fig:3random} \end{center} \end{figure} The {\em $(r,{2s-1})$-meander link diagram or link} is the link diagram or the link obtained by adding the crossing information string $V^r_{2s-1}$ to the graph $\Gamma^r_{2s-1}$ as above. We denote the set of all $(r,{2s-1})$-meander link diagrams for a given $s$ and $r$ by $\mathcal{L}^r_{2s-1}$. Note that one link can be in several such sets for different $r, s$. 
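Reading off the construction above, an $(r,2s-1)$-meander link diagram is specified by two p-strings of length $s$ (the graph $\Gamma^r_{2s-1}$) together with $r$ words of $r(2s-1)$ letters $O$/$U$ each (the string $V^r_{2s-1}$), which suggests the diagram count $C_s^2\,2^{r^2(2s-1)}$, with $C_s$ the $s$-th Catalan number; this counts diagrams, not links, since distinct diagrams may represent isotopic links. A small sketch of this bookkeeping (our own reading of the definitions, not a formula stated in the paper):

```python
from math import comb

def catalan(s):
    # number of p-strings of length s
    return comb(2 * s, s) // (s + 1)

def num_meander_diagrams(s, r=1):
    # two p-strings of length s choose the (r, 2s-1)-meander graph;
    # r words of r*(2s-1) letters O/U choose the crossing information,
    # giving 2^(r^2 * (2s-1)) choices in total
    return catalan(s) ** 2 * 2 ** (r * r * (2 * s - 1))

# e.g. r = 1, s = 5: diagrams with 2s - 1 = 9 crossings
print(num_meander_diagrams(5))  # 42^2 * 2^9 = 903168
```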
An example of a meander graph, an $(r,{2s-1})$-meander graph, a crossing information string, and the $(r,{2s-1})$-meander link diagram obtained from them is given in Figure \ref{fig:3random}. A planar isotopy applied to an $(r,{2s-1})$-meander link diagram transforms it into a diagram called a \textit{potholder diagram} in \cite{EZHLN2}. We therefore can reformulate Theorem 1.4 from \cite{EZHLN2} as follows. \begin{theorem}\label{links} Any link has an $(r,{2s-1})$-meander link diagram. \end{theorem} Theorem \ref{links} means, in particular, that the set of all links is the same as the set of $(r,{2s-1})$-meander links for all natural $r$ and $s$, up to link isotopy. If we set $r=1$, the set $\bigcup_{s=1}^\infty\mathcal{L}^1_{2s-1}$ contains the set of all knots by Theorem \ref{thm:AST}. It also contains some links with more than one component: see Figure \ref{fig:parenLink} for an example of a link in this set. For positive integers $s$ and $r$, choose $\Gamma^r_{2s-1}$ and $V^r_{2s-1}$ uniformly at random from the set of all $(r,{2s-1})$-meander graphs and the set of all $(r,{2s-1})$-crossing information strings. The resulting $(r,{2s-1})$-meander link diagram or link is called a \textit{random $(r,{2s-1})$-meander link diagram or link}. \section{Links with pierced circles}\label{SecCircles} Here and further we consider link complements in $S^3$. When projected to a plane or $S^2$, certain fragments of a link diagram guarantee that the link is not trivial, i.e. is not an unlink in $S^3$. In meander link diagrams, we identify one such fragment below. Later in this section, we will compute the expected number of such fragments. A \textit{pierced circle} is a fragment of a meander graph or link depicted in Figure \ref{fig:pierced}. The circle component has exactly two consecutive vertices or crossings on the axis. See Figure \ref{fig:parenLink}, middle and right, for an example of a meander graph and a meander link with a pierced circle.
\begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale = .6] \draw [very thick, line cap=round] (0.25,0) to (2.75,0); \draw [very thick] (1,0) arc [radius=.5, start angle=-180, end angle= 180]; \foreach \a in {1,2, 5,6,7,8,9,11,12,13,14,15,16,17}{ \draw [fill] (\a,0) circle [radius=0.1]; } \draw [very thick, line cap=round] (5,0) to (9,0); \draw [very thick, line cap=round] (11,0) to (17,0); \draw [very thick] (13,0) arc [radius=.5, start angle=-180, end angle= 180]; \draw [very thick] (15,0) arc [radius=.5, start angle=-180, end angle= 180]; \foreach \a in {1,...,7}{ \node at (10+\a,-1) {\a}; } \draw[decoration={brace,mirror,raise=5pt},decorate] (11,-1.3) -- node[below=6pt] {$a=2$} (12,-1.3); \draw[decoration={brace,mirror,raise=5pt},decorate] (13,-1.3) -- node[below=6pt] {$b_1=2$} (14,-1.3); \draw[decoration={brace,mirror,raise=5pt},decorate] (15,-1.3) -- node[below=6pt] {$b_2=3$} (17,-1.3); \end{tikzpicture} \caption{From left to right: a pierced circle in a graph; the path graph $P_5$; a graph from $\mathcal{A}_7(2)$.} \label{fig:pierced} \end{center} \end{figure} We will determine the expected number of pierced circles in a random meander diagram. Consider the path graph $P_v$, as in Figure \ref{fig:pierced}, middle. Label the vertices 1 through $v$ from left to right. A pierced circle is \textit{at position} $i$ if the left vertex of the circle is labeled $i$, $1\leq i \leq v-1$. We will count the number of ways to add $k$ pierced circles to $P_v$ so that no two circles share a vertex. Let $\mathcal{A}_v(k)$ be the collection of the resulting graphs, with $k$ pierced circles added. Recall that the number of ordered partitions (or compositions) of a nonnegative integer $n$ into $k$ natural numbers is the number of ways to write $n$ as $a_1+a_2+\cdots +a_k$ for some natural numbers $a_1, a_2, \ldots, a_k \geq 1$.
The quantity $|\mathcal{A}_v(k)|$ can be seen as the number of ordered partitions of $v$ into one non-negative integer, say $a$, and $k$ integers that are two or greater, say $b_1, b_2, \ldots, b_k$, where $b_i\geq 2$, as follows. The non-negative integer $a$ represents the number of vertices before the position of the first pierced circle, which may be zero. Now label the $k$ pierced circles in $P_v$ by $1, 2, \ldots, k$ from left to right. For $m=1, \ldots, k-1,$ the integer $b_m$ is determined by the pair of the $m$-th and $(m+1)$-th consecutive circles in the graph $P_v$, say located at positions $i$ and $j$, with $b_m=\vert i-j \vert$. The last integer $b_k$ is the number of vertices from the position of the last circle, say position $q$, to the end of the path graph, that is, $b_k=v-q+1$. In Figure \ref{fig:pierced} with $v=7$ and $k=2$, we have $a=2, b_1=2, b_2=3$, i.e. $7 = 2 + 2 + 3$. \begin{lemma}\label{lem:numKcircles} There are ${v-k \choose k}$ graphs in the set $\mathcal{A}_v(k)$. \end{lemma} \begin{proof} Above, we observed that $|\mathcal{A}_v(k)|$ is the number of ordered partitions of $v$ into $a, b_1, b_2, \ldots, b_k$ where $a\geq 0$ and $b_i\geq 2$. We relabel the summands via the bijection $f$ given by $a = a_1-1$ and $b_i = a_{i+1} +1$ for $i = 1,2, \ldots, k$. Then $a_i\geq 1$ for all $i$. Using this relabeling, we have \begin{equation*} \begin{split} v & = (a_1-1) + (a_2+1)+(a_3+1) +\cdots +(a_{k+1}+1)\\ v & = a_1+a_2+\cdots +a_{k+1} +k-1\\ v-k+1& = a_1+a_2+\cdots +a_{k+1}. \end{split} \end{equation*} This is an ordered partition of $v-k+1$ into $k+1$ natural numbers. But every such ordered partition corresponds to an ordered partition of $v$ into $a, b_1, b_2, \ldots, b_k$, through the bijection $f$. Hence the number of such ordered partitions is the same as the number of ordered partitions of $v$ into $a, b_1, b_2, \ldots, b_k$, and is equal to $|\mathcal{A}_v(k)|$.
In general, the number of ordered partitions of a nonnegative integer $n$ into $j$ natural numbers is ${ n-1 \choose j-1}$ \cite{Stanley}. Therefore $|\mathcal{A}_v(k)|={v-k \choose k}$. \end{proof} Recall that the \textit{Catalan number} $C_s = \frac{1}{s+1}{2s \choose s}$ counts the number of valid strings of $s$ pairs of parentheses \cite{Stanley}, i.e. the number of $p$-strings of length $s$. A $p$-string has $s$ pairs of parentheses, so $2s$ positions for a parenthesis, which is either left or right. We call a matched pair of parentheses next to each other, \texttt{()}, a {\em nesting}. A nesting is \textit{at position} $i$ if the left parenthesis is at the $i$-th position in the $p$-string. For example, consider the $p$-string of length 6: \texttt{(())((()()))}. Replace the parentheses that are not a part of any nesting with a dash: \texttt{-()---()()--}. We see that the positions of the three nestings in this $p$-string are 2, 7, and 9. \begin{lemma}\label{lem:Csminusj} Fix a $p$-string $P$ of length $s, s\geq 2,$ with $j$ nestings, $j\geq0$. Vary the parentheses in $P$ that do not belong to the nestings. The number of resulting $p$-strings is $C_{s-j}$. \end{lemma} Note that each resulting $p$-string in the lemma contains nestings at the same positions as $P$, but also possibly other nestings. \begin{proof} Let the positions of the nestings in $P$ be $x_1, x_2, \ldots, x_j$. Then there are $2(s-j)$ parentheses that are not part of a nesting in $P$. We replace each of them with a dash, as above. Now choose a $p$-string $P'$ of length $s-j$. Fill in the dashes of $P$ by substituting the parentheses of $P'$, in the same order as in $P'$. This gives a bijection between $p$-strings of length $s-j$ (like $P'$) and the strings we are counting in the lemma statement. And there are $C_{s-j}$ distinct $p$-strings of length $s-j$. \end{proof} Recall that a meander graph $\Gamma^1_{2s-1}$ has no added parallel strands, and exactly ${2s-1}$ vertices of valence 4.
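Lemmas \ref{lem:numKcircles} and \ref{lem:Csminusj} are easy to confirm by exhaustive search for small parameters; the short Python check below is our own illustration, with the string and nesting positions taken from the example above:

```python
from itertools import combinations, product
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def count_A(v, k):
    """|A_v(k)|: placements of k pierced circles on the path P_v with no
    shared vertices (a circle at position i occupies vertices i and i+1)."""
    return sum(1 for S in combinations(range(1, v), k)
               if all(b - a >= 2 for a, b in zip(S, S[1:])))

def balanced(w):
    """Check that w is a p-string (a balanced parenthesis string)."""
    h = 0
    for c in w:
        h += 1 if c == '(' else -1
        if h < 0:
            return False
    return h == 0

def count_fillings(P, nesting_positions):
    """Fix the nestings of P, vary the remaining parentheses, and count the
    p-strings that result (the quantity in the second lemma)."""
    fixed = {i for p in nesting_positions for i in (p - 1, p)}  # 0-based
    free = [i for i in range(len(P)) if i not in fixed]
    count = 0
    for choice in product("()", repeat=len(free)):
        w = list(P)
        for i, c in zip(free, choice):
            w[i] = c
        count += balanced("".join(w))
    return count

# |A_v(k)| = binom(v - k, k) for all small v and k
assert all(count_A(v, k) == comb(v - k, k)
           for v in range(2, 10) for k in range(v))
# the example string, s = 6 with j = 3 nestings at positions 2, 7, 9,
# admits exactly C_{s-j} = C_3 = 5 fillings
assert count_fillings("(())((()()))", [2, 7, 9]) == catalan(3) == 5
```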
Let the set of such graphs that have exactly $k$ pierced circles be denoted by $\mathcal{E}_s(k)$, and let $|\mathcal{E}_s(k)|=E(s, k)$. Let $\mathcal{O}_s(k)$ be the set of ordered triples $$\mathcal{O}_s(k) = \{ (A, P, Q) | A\in\mathcal{A}_{2s-1}(k), \text{ and } P,Q \text{ are $p$-strings of length } s-k \}.$$ Denote $|\mathcal{O}_s(k)|=O(s, k)$. We now relate the numbers $O(s, k)$ and $E(s, k)$. \begin{lemma}\label{newEandO}Given $s\geq 1$ and $k$ for $0\leq k\leq s$, \begin{enumerate} \item $O(s, k) = {{2s-k-1 } \choose {k} } (C_{s-k})^2$\\ \item $O(s, k) = \sum_{m=k}^s \ch{m}{k} E(s, m)$\\ \end{enumerate} \end{lemma} \begin{proof} For part (1), recall that $\mathcal{A}_{2s-1}(k)$ denotes the set of path graphs with $2s-1$ vertices and $k$ pierced circles. The number of such graphs $|\mathcal{A}_{2s-1}(k)|$ is given by Lemma \ref{lem:numKcircles}, once we substitute $v=2s-1$ in the lemma statement, and it is $|\mathcal{A}_{2s-1}(k)|={{2s-k-1 } \choose {k} }$. Recall also that $\mathcal{O}_s(k)$ is the Cartesian product of $\mathcal{A}_{2s-1}(k)$, the set $\mathcal{P}$ of $p$-strings $P$ of length $s-k$, and one more such set $\mathcal{Q}$. Since $|\mathcal{P}|=|\mathcal{Q}|=C_{s-k}$, we obtain: \begin{equation}\label{EandO} O(s, k) = |\mathcal{O}_s(k)|= {{2s-k-1 } \choose {k} } (C_{s-k})^2. \end{equation} For part (2), consider a meander graph $\Gamma = \Gamma^1_{2s-1}$ with exactly $j\geq k$ pierced circles. We claim that $O(s, k)$ counts $\Gamma$ exactly $ j \choose {k}$ times. Indeed, take a subgraph $A$ of $\Gamma$ by keeping $k$ of the $j$ pierced circles of $\Gamma$ and the axis, and deleting all other edges of $\Gamma$. The graph $A$ is an element of $\mathcal{A}_{2s-1}(k)$. There are exactly $j \choose k$ such distinct subgraphs of $\Gamma$. For each such subgraph $A$, there is exactly one pair of $p$-strings of length $s-k$ that, combined with $A$, yields $\Gamma$.
Thus there are $\ch{j}{k}$ elements of $\mathcal{O}_s(k)$ that all correspond to $\Gamma$. The graphs with exactly $m$ pierced circles make up the set $\mathcal{E}_s(m)$, with $|\mathcal{E}_s(m)|=E(s, m)$ by definition. Summing all occurrences of such graphs over $m$ with $k\leq m\leq s$, we obtain \begin{center} $O(s, k) = E(s,k) + \ch{k+1}{k}E(s,k+1) +\cdots+ \ch{s}{k}E(s,s) = \sum_{m=k}^s \ch{m}{k} E(s, m)$. \end{center} \end{proof} Note that Lemma \ref{newEandO}(2) implies that $O(s, k) \geq E(s,k)$, with equality when $k=s$; moreover, $O(s, s)={s-1 \choose s}(C_0)^2=0$, so $O(s, s)=E(s, s)=0$. The next lemma inverts the relation of Lemma \ref{newEandO}(2); it will be useful in the next section. \begin{lemma}\label{Circles} Given $s\geq 1$ and $k$ for $0\leq k\leq s$, $$E(s, k) = \sum_{m=k}^s (-1)^{m+k}\ch{m}{k}O(s, m).$$ \end{lemma} \begin{proof} Rearrange Lemma \ref{newEandO}(2) as follows: \begin{equation}\label{Ek} E(s, k)=O(s, k) - \sum_{m=k+1}^s \ch{m}{k} E(s, m). \end{equation} Apply strong induction on $s-k$ for a fixed $s$, that is, fix $s$ and let $k$ decrease: $k=s, s-1, s-2, \ldots, 1, 0$. As noted before the lemma, $O(s, s) = E(s, s) = 0$. This is the base step. Assuming the lemma statement holds for $ E(s,s), E(s,s-1), \ldots, E(s,k+1)$, we prove it for $E(s,k)$. Plugging the formula from the lemma statement into equation (\ref{Ek}), we obtain \begin{center} $E(s, k) = O(s, k) - \sum_{m=k+1}^s \ch{m}{k} \sum_{j=m}^s (-1)^{m+j}\ch{j}{m}O(s, j)$. \end{center} Via rearranging and regrouping, we then have the following: \begin{equation}\label{star} E(s, k)=O(s, k) - \sum_{m=k+1}^s O(s, m) \ch{m}{k} \left[ \sum_{i=k+1}^{m} (-1)^{m-i} \ch{m-k}{m-i}\right].\tag{$*$} \end{equation} The rearranging above is elementary; we give full details, with sum expansions, in the Appendix, as Claim \ref{claim}. In the above formula, let $t=m-i$. Recall that the sum of alternating binomial coefficients is zero, i.e.
$ \sum_{t = 0}^{m-k} (-1)^{t} \ch{m-k}{t} = 0$. Thus the inner sum on the right in (\ref{star}) becomes $\sum_{t = 0}^{m-k-1} (-1)^{t} \ch{m-k}{t} = \left[ \sum_{t = 0}^{m-k} (-1)^{t} \ch{m-k}{t} \right] -(-1)^{m-k} \ch{m-k}{m-k} = (-1)^{m-k+1}$. With this, (\ref{star}) is now \begin{center} $E(s, k)=O(s, k) - \sum_{m=k+1}^s \ch{m}{k}O(s, m)\left[(-1)^{m-k+1}\right]=\sum_{m=k}^s (-1)^{m+k}\ch{m}{k}O(s, m).$ \end{center} \end{proof} \begin{proposition}\label{circles} For a fixed $s$, the expected number of pierced circles in $\Gamma^1_{2s-1}$ is $\mathbb{E} = \frac{ O(s, 1) }{C_s^2}.$ \end{proposition} \begin{proof} For a random graph $\Gamma^1_{2s-1}$, introduce an indicator random variable $X_i$ that takes the value $1$ when a pierced circle appears at position $i$ in $\Gamma^1_{2s-1}$, and the value 0 otherwise. There are $2s-2$ possible positions for a pierced circle in a graph $\Gamma^1_{2s-1}$ by construction. Note that the variables $X_1, X_2, \ldots, X_{2s-2}$ are not independent: e.g. if a pierced circle appeared at position 1, it cannot appear at position 2 in the same graph. We claim that the probability $P(X_i=1)=(C_{s-1}/C_s)^2$. Indeed, the graph $\Gamma^1_{2s-1}$ corresponds to two $p$-strings, and each string must have a nesting at position $i$. By Lemma \ref{lem:Csminusj}, the number of options for the pair of $p$-strings, each with a nesting at position $i$, is $C_{s-1}^2$, while the number of options for an arbitrary pair of $p$-strings is $C_s^2$. Now by the linearity of expectation, the expectation of the number of pierced circles in $\Gamma^1_{2s-1}$ is $\mathbb{E}=P(X_1=1)+P(X_2=1)+\cdots+P(X_{2s-2}=1)=(2s-2)(C_{s-1}/C_s)^2$. According to formula (\ref{EandO}), $(2s-2)C^2_{s-1}=O(s, 1)$, and the expectation is $\mathbb{E}=\frac{ O(s, 1) }{C_s^2}$. \end{proof} A straightforward simplification of the definitions gives the closed form $ \frac{ O(s, 1) }{C_s^2} = \frac{s^3+s^2-s-1}{8s^2-8s+2}$.
This asymptotically approaches $ \frac{s+2}{8}$ from above for large $s$. In fact, by $s=6$, the error between $ \frac{s+2}{8}$ and $ \frac{ O(s, 1) }{C_s^2}$ is less than $0.013$, and we expect about one pierced circle. \section{Unlinks are rare}\label{Unlinks} A natural question to ask is whether the new model is likely to produce a non-trivial link. Our goal now is to show that the random links generated in our model are nontrivial. We will do this by first showing that, as $s \to \infty$, the number of meander graphs $\Gamma_{2s-1}^1$ with no pierced circles is small compared to the total number of such graphs. A hypergeometric series is a series of the form $\sum t_k$, where the ratio of consecutive terms $\frac{t_{k+1}}{t_k}$ is a rational function of $k$. To prove Proposition \ref{thm:circlesPresent}, we will consider the ratio $\frac{E(s, 0)}{C_s^2}$ of the number of meander graphs with no pierced circles to the number of all meander graphs for a fixed $s$; by Lemma \ref{Circles}, $E(s, 0)$ is an alternating sum of hypergeometric terms. We will obtain a recurrence relation for $E(s, 0)$. Once we have a recurrence relation, we can apply the following classic result of Poincar\'e to show that this ratio becomes small, and therefore almost all meander graphs have a pierced circle. Suppose $\{u_n\}$ is a sequence of numbers indexed by natural $n$. Consider a linear recurrence relation in $k+1$ terms of this sequence for some fixed positive natural $k$, i.e. $\alpha_{0, n}u_{n+k} +\alpha_{1,n}u_{n+k-1} +\alpha_{2,n}u_{n+k-2} +\cdots +\alpha_{k,n}u_{n}+c =0$. Here $c$ and $\alpha_{i,n} \in \mathbb{R}$ for $i=0, 1, \ldots, k$ are coefficients, and the relation holds for all natural $n$. We can assume $\alpha_{0, n}=1$. In \cite{Poincare}, such a relation is called a difference equation. It is called homogeneous if the constant term $c=0$. We will use the following theorem; in its statement, $u$ denotes a solution sequence $\{u_n\}$.
\begin{theorem}\label{thm:poincare}{ [{Poincar\'e} \cite{Poincare}]} Suppose the coefficients $\alpha_{i,n}$ for $i = 1, 2, \ldots, k$, of a linear homogeneous difference equation \begin{equation}\label{eq:poincare} u_{n+k} +\alpha_{1,n}u_{n+k-1} +\alpha_{2,n}u_{n+k-2} +\cdots +\alpha_{k,n}u_{n} =0 \end{equation} \noindent have limits $\lim_{n\to \infty} \alpha_{i,n}= \alpha_i$ for $i = 1, 2, \ldots, k$, and the roots $\lambda_1, \ldots, \lambda_k$ of the characteristic equation $t^k+\alpha_1t^{k-1}+\cdots + \alpha_k = 0$ have distinct absolute values. Then for any solution $u$ of equation (\ref{eq:poincare}), either $u_n = 0 $ for all sufficiently large $n$ or $\lim_{n\to\infty} \frac{u_{n+1}}{u_n} = \lambda_i$ for some $i$. \end{theorem} \begin{lemma}\label{lem:Zeilbergers} The sequence $\{E(s, 0) \}_{s=1}^\infty$ satisfies the recurrence relation $$\sum_{k=0}^3 P_k(s) E(s+k, 0) = 0,$$ with polynomials $P_k$ given by \begin{table}[ht] \begin{center} \begin{tabular}{ l c l } $P_0(s) = 2 s^3+s^2-8 s+5$, & &$P_1(s) = -26 s^3-93 s^2-82 s-30$,\\ $P_2(s) = -26 s^3-141 s^2-226 s-81$, & & $P_3(s) = 2 s^3+17 s^2+40 s+16$. \end{tabular} \end{center} \end{table} \end{lemma} \begin{proof} This was verified using the computer algebra software {Maple}\texttrademark\ \cite{Maple} by applying the command \texttt{Zeilberger} from the subpackage \texttt{SumTools[Hypergeometric]}. For a short explanation of Zeilberger's algorithm, recall that we are looking for a recurrence relation on $E(s, 0)$. By Lemma \ref{Circles}, $E(s, 0)=O(s, 0) - O(s, 1) +O(s, 2)-\cdots +(-1)^s O(s, s)$. Recall also that $O(s, k) = {{2s-k-1 } \choose {k} } (C_{s-k})^2$ by formula (\ref{EandO}). The polynomials $P_0, P_1, P_2, P_3$ are obtained and verified using Zeilberger's algorithm, described, for example, in \cite{aeqb}. In particular, the command \texttt{Zeilberger} returns these polynomials and a certificate verifying the calculation.
We will outline the mathematics behind the command \texttt{Zeilberger} and the verification, to show that this leads to a rigorous proof. Using the {Maple}\texttrademark\ notation, the command \texttt{Zeilberger} takes as input the function $T(s,m) = (-1)^m O(s, m)$, so that $E(s, 0)=\sum_{m=0}^s T(s,m) $. The command then outputs the functions $L$ and $G$, which we describe below. Let $x$ denote the {\em shift} operator in $s$, that is, the operator defined by $xT(s,m) = T(s+1,m)$. In our notation, $x(-1)^m O(s, m) = (-1)^m O(s+1, m)$. We need only this defining property of $x$ for the proof. It is also important for the final simplification below that we can extend the definition of $E(s, 0)$ as follows. The sum $E(s, 0)$ has summands that range from $m=0$ to $s$. We can increase the number of summands by increasing the largest value of $m$. This will not change $E(s, 0)$, since the terms $O(s, m)$ are zero if $m\geq s$. Thus, in the remaining part of the proof we may consider $E(s, 0), E(s+1, 0), E(s+2, 0)$, and $E(s+3, 0)$ to have summands up to $m=s+3$. Then we obtain, using the shift operator $x$, \begin{center} $\sum_{k=0}^3 P_k(s) E(s+k, 0) = \sum_{k=0}^3 \left( P_k(s) \sum_{m=0}^{s+3}(-1)^m O(s+k, m) \right) = $ $= \sum_{m=0}^{s+3}\left( \sum_{k=0}^3 P_k(s)(-1)^m O(s+k, m) \right) = \sum_{m=0}^{s+3}\left( \sum_{k=0}^3 x^kP_k(s)(-1)^m O(s, m) \right) =$ $= \sum_{m=0}^{s+3}\left( (-1)^m O(s, m) \sum_{k=0}^3 x^kP_k(s)\right).$ \end{center} One part of the output of the \texttt{Zeilberger} command is $L = P_0(s) + xP_1(s) +x^2P_2(s) + x^3P_3(s) $, where $P_k$ are the four explicit polynomials Zeilberger's algorithm provides. The other part of the output is $G$, which denotes a function $G(s,m)$ such that $L(T(s,m) ) = G(s,m+1) - G(s,m)$. We omit the explicit expression for $G$ for brevity; it is provided by Maple. See \cite{MapleFile} for the explicit expression and the details of the computation.
The function $G$ is what allows us to apply creative telescoping. In particular, for $0\leq m \leq s+3$, \begin{center} $\sum_{m=0}^{s+3}\left( (-1)^m O(s, m) \sum_{k=0}^3 x^kP_k(s)\right) = \sum_{m=0}^{s+3}\left( T(s,m) L \right) =$ $= \sum_{m=0}^{s+3}\left( G(s,m+1)-G(s,m)\right) = G(s,s+4)-G(s,0) = 0$, \end{center} where the last equality follows from inspecting the explicit expression for $G$. \end{proof} Now we have all the pieces for the next proposition. \begin{proposition}\label{thm:circlesPresent} The ratio of $(1, 2s-1)$-meander graphs without any pierced circles to all $(1, 2s-1)$-meander graphs in the random meander model asymptotically approaches zero, i.e. $\frac{E(s, 0)}{C_s^2} \to 0$ as $s\to \infty$. \end{proposition} \begin{proof} Let $a_s = \frac{ E(s, 0) }{C_s^2}$. We will use the ratio test for sequences and show that $\lim_{s\to\infty} \frac{a_{s+1}}{a_{s}}<1$. It is straightforward to see that $\lim_{s\to\infty} \frac{C_{s}}{C_{s+1}} = \lim_{s\to\infty} \frac{s+2}{2(2s+1)}=\frac{1}{4}$, and notice that $$\frac{a_{s+1}}{a_{s}} = \frac{E(s+1, 0) {C_s^2}}{E(s, 0) C_{s+1}^2}.$$ This allows us to reduce our problem to showing that \begin{equation}\label{eq:ratio} \lim_{s\to\infty} \frac{E(s+1, 0)}{E(s, 0)} < 16. \end{equation} We divide the recurrence relation from Lemma \ref{lem:Zeilbergers} by $P_3(s)$: $$ E(s+3, 0)+ \frac{P_2(s)}{P_3(s)}E(s+2, 0)+\frac{P_1(s)}{P_3(s)}E(s+1, 0)+\frac{P_0(s)}{P_3(s)}E(s, 0) = 0 .$$ The above is a linear homogeneous difference equation in $u_s= E(s, 0) $ with 4 terms. We can apply Poincar\'e's theorem to it, with $\alpha_{1,s}=\frac{P_{2}(s)}{P_3(s)} , \alpha_{2,s}=\frac{P_{1}(s)}{P_3(s)}$, and $\alpha_{3,s}=\frac{P_{0}(s)}{P_3(s)}$. Then, as defined in the statement of Poincar\'e's theorem, $ \alpha_i = \lim_{s\to\infty} \alpha_{i,s} $, and taking these limits we see $\alpha_1 = -13, \alpha_2 = -13, $ and $\alpha_3 = 1$. Thus, the characteristic equation of this recurrence is $t^3-13t^2-13t+1=0$.
There are three real solutions to the characteristic equation, $z=-1, 7-4\sqrt{3}\approx 0.0718,$ and $7+4\sqrt{3} \approx 13.928$, all of which are less than 16. By Theorem \ref{thm:poincare}, either $E(s, 0) = 0$ for all sufficiently large $s$, or $\lim_{s\to \infty} \frac{E(s+1, 0)}{E(s, 0)}$ equals one of these solutions. In either case inequality (\ref{eq:ratio}) is verified, finishing the proof. \end{proof} Consider a single circle $C$ in a diagram, with $r$ horizontal strands crossing it, {as in} Figure \ref{fig:unlinks}. In our model, the $r$ horizontal strands are parallel copies of a knot. The crossing information is chosen at random. \begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale = .5] \foreach \a in {.5}{ \foreach \b in {1.3}{ \draw [line width = \a mm, line cap=round] (0,.75) to (4,.75); \draw [line width = \a mm, line cap=round] (0,-.75) to (2,-.75); \draw [line width = \b mm, white] (3.5,0) arc [radius=1.5, start angle=0, end angle= 360]; \draw [line width = \a mm] (3.5,0) arc [radius=1.5, start angle=0, end angle= 360]; \draw [line width = \b mm, line cap=round, white] (0,0) to (4,0); \draw [line width = \a mm, line cap=round] (0,0) to (4,0); \draw [line width = \b mm ,white] (2,-.75) to (4,-.75); \draw [line width = \a mm, line cap=round] (2,-.75) to (4,-.75); \node at (3.25,2) {$C$}; } } \end{tikzpicture} \caption{The circle $C$ meeting three parallel copies of the axis {component}. The top two are {not linked with other link components shown, while} the bottom one is linked {with $C$}. } \label{fig:unlinks} \end{center} \end{figure} \begin{lemma}\label{lem:srlink} The component $C$, which is a circle in a meander link diagram, is unlinked from the other components in an $(r, 2s-1)$-random link with probability $\frac{1}{2^r}$. \end{lemma} \begin{proof} For an axis component, consider its two crossings with $C$.
There are four options for the crossing information string for such a pair of crossings: (over, over), (over, under), (under, over), and (under, under). Note that $C$ is unlinked from an axis component if the axis component passes under $C$ at both its crossings with $C$ or passes over $C$ at both (see Figure \ref{fig:unlinks}). Hence half of the options for the crossing information imply that $C$ and the axis component are not linked. Therefore, if there are $r$ axis components, the probability that all of them are unlinked from $C$ is $\frac{1}{2^r}$. \end{proof} The random model described above is easily modified to produce only links that are alternating. Each $(r, 2s-1)$-meander graph yields the same number of alternating link diagrams. Indeed, every link projection can be made into an alternating link diagram with the correct choice of under- and overpasses (i.e. of the crossing information). Therefore, this is also true for any $(r, 2s-1)$-meander graph. Moreover, there are exactly two options for alternating crossing information once a meander graph is given: choose an arbitrary graph vertex $v$, and connect two horizontal edges adjacent to $v$ by an underpass at $v$. This determines the first alternating link. The second link is obtained by making an overpass between horizontal edges at $v$. Such a (restricted) model that takes any meander graph together with a choice of alternating crossing information is what we will refer to whenever we discuss random alternating links. We can now state the main result of this section. \begin{theorem}\label{thm:Rare} Let $L$ be an $(r, 2s-1)$-random meander link. If $L$ is alternating and $s$ tends to infinity, then $L$ is nontrivial with probability one. If $L$ has its crossings chosen at random and $s$ and $r$ tend to infinity, then $L$ is nontrivial with probability one.
\end{theorem} \begin{proof} The first statement follows from Proposition \ref{thm:circlesPresent}, since an alternating link with a pierced circle cannot be an unlink. Similarly in the second statement, we are guaranteed there is a pierced circle as $s\to\infty$. But with the crossing information string chosen at random, the circle might be unlinked from other link components. Letting $r\to\infty$, Lemma \ref{lem:srlink} ensures that at least one pierced circle will be linked to an axis component. \end{proof} \subsection{More on the distribution of links in the random meander model.} We conclude this section with several further observations and questions about obtaining different links and knots in this model. The above theorem shows that the probability of obtaining an unlink in this random model approaches zero as $r, s$ grow. This in fact remains true if we replace the unlink with any other fixed link. For a link $L$, denote its number of components by $t_L$. Note that if $L$ is an $(r, 2s-1)$-random link, then $r\leq t_L \leq sr$. \begin{proposition}\label{nonisotopic} For a given link $K$ and an $(r, 2s-1)$-random link $L$, the probability $P(L=K)$ that $K$ and $L$ are isotopic tends to $0$ as $r\to \infty$. \end{proposition} \begin{proof} As $t_K$ is finite, for every $r > t_K$ each $(r, 2s-1)$-random link $L$ has more components than the chosen link $K$, since $t_L\geq r$. \end{proof} The above ensures that we can produce infinitely many pairwise non-isotopic links using the random meander link model. It is also interesting to ask how many distinct links this model produces for given $r, s$. This can be answered for knots, which only occur when $r=1$. \begin{proposition}\label{prime} There are $\Omega({2.68}^n)$ distinct prime knots with meander diagrams of at most $2^{n+1} -12$ crossings. \end{proposition} \begin{proof} Suppose we have a knot $K$ with crossing number $c(K)$. Denote by $m(K)$ the minimal number of crossings in a $(1, 2s-1)$-meander diagram of $K$.
Theorem \ref{thm:OwadMeander} says that every knot has a meander diagram. Moreover, for random meander knots, Theorem 3.2 and Proposition 3.4 in \cite{Owad2} combined yield $m(K)\leq 4(2^{c(K) -1} - 1) -8 = 4(2)^{c(K) -1} -12 = 2^{c(K)+1} -12$. Therefore, once we generate meander knot diagrams with at most $2^{n+1} -12$ crossings, we also generate all knots with $n$ crossings. The number of prime knots with $n$ crossings was estimated by Welsh \cite{Welsh} and is $\Omega (2.68^n)$. \end{proof} To make a similar observation for links ($r>1$), a bound on the number of crossings in a meander diagram of a link is needed in terms of the crossing number of the link. Then the number of prime links with $n$ crossings can be used similarly, which can also be found in Welsh \cite{Welsh}. \begin{question} Given an $(r, 2s-1)$-meander link $L$ with crossing number $m(L)$, what is an upper bound on the minimal crossing number $c(L)$ of this link? \end{question} \section{Expected number of twists and expected volume}\label{VolumeEtc} In this section, we investigate some properties of links in our model. We find the expected number of twist regions in a random diagram, and give bounds on the expected hyperbolic and simplicial volume of link complements. \subsection{Expected number of twists in a random link diagram.} In Section \ref{SecCircles}, we discussed how Catalan numbers are related to strings of parentheses. Consider \texttt{()}, an opening parenthesis adjacent to its matching closing parenthesis in a string, i.e. a nesting. Now let us count the number of $p$-strings of length $n$ with exactly $k$ such substrings \texttt{()}. This number is the Narayana number, denoted by $N(n,k)$. By definition, $\sum_{k=1}^n N(n,k) = C_n$. Explicitly, $N(n,k) = \frac{1}{n} {{n}\choose k}{{n}\choose k-1}$. Note that there is always an innermost pair of parentheses, so $N(n,k)\geq 1$ for integers $k\in [1,n]$, and $N(n,k)=0$ otherwise.
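For small $n$, the Narayana identities above can be confirmed by enumerating all $p$-strings directly; the short Python check below is our own illustration:

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def narayana(n, k):
    # N(n, k) = (1/n) C(n, k) C(n, k-1)
    return comb(n, k) * comb(n, k - 1) // n

def p_strings(n):
    """Generate all p-strings of length n (2n characters)."""
    def rec(prefix, height, remaining):
        if remaining == 0:
            if height == 0:
                yield prefix
            return
        if height + 2 <= remaining:      # room to open and still close all
            yield from rec(prefix + "(", height + 1, remaining - 1)
        if height > 0:                   # may close
            yield from rec(prefix + ")", height - 1, remaining - 1)
    yield from rec("", 0, 2 * n)

def nestings(w):
    # every occurrence of the substring "()" is a matched adjacent pair
    return w.count("()")

n = 6
strings = list(p_strings(n))
assert len(strings) == catalan(n)                     # C_6 = 132
for k in range(1, n + 1):                             # N(n,k) counts k-nesting strings
    assert sum(nestings(w) == k for w in strings) == narayana(n, k)
assert sum(narayana(n, k) for k in range(1, n + 1)) == catalan(n)
# the average number of nestings over all p-strings of length n is (n+1)/2
assert sum(nestings(w) for w in strings) * 2 == (n + 1) * catalan(n)
```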
For a more in-depth discussion of Narayana numbers, see Peterson \cite{Peterson}. We will also use the fact that $N(n,k) = N(n,n-k+1)$. The following is well-known; since we did not find the exact reference, we give one possible short proof. \begin{lemma}\label{lem:narayana} The expected number of nestings in a random $p$-string of length $n$ is $\frac{n+1}{2}$. \end{lemma} \begin{proof} We perform the straightforward calculation below. \begin{center} $\mathbb{E}(\# {\text{ of }} \texttt{()}~) = \sum_{k=1}^{n} k\frac{N(n,k)}{C_n} =\frac{1}{2} \left[ \sum_{k=1}^{n} k\frac{N(n,k)}{C_n}+\sum_{k=1}^{n} k\frac{N(n,k)}{C_n} \right] = $ $ =\frac{1}{2} \left[ 1\frac{N(n,1)}{C_n}+ 2\frac{N(n,2)}{C_n}+\quad \cdots\quad + n\frac{N(n,n)}{C_n} \right.$ $ \qquad + \left. n\frac{N(n,n)}{C_n} + (n-1)\frac{N(n,n-1)}{C_n} + \cdots + 1\frac{N(n,1)}{C_n} \right]. $ \end{center} Since $N(n, k)=N(n, n-k+1)$ for $k=1, 2, \ldots, n$, the above is equal to \begin{center} $ \frac{1}{2} \left[ 1\frac{N(n,1)}{C_n}+ 2\frac{N(n,2)}{C_n}+\quad \cdots\quad + n\frac{N(n,n)}{C_n} \right.$ $+ \qquad \left. n\frac{N(n,1)}{C_n} + (n-1)\frac{N(n,2)}{C_n} + \cdots + 1\frac{N(n,n)}{C_n} \right] =$ $ =\frac{1}{2} \left[ (n+1)\frac{N(n,1)}{C_n}+ (n+1)\frac{N(n,2)}{C_n}+\cdots+ (n+1)\frac{N(n,n)}{C_n} \right]=$ $ =\frac{n+1}{2} \left[ \frac{C_n}{C_n} \right] =\frac{n+1}{2}$. \end{center}\end{proof} \begin{lemma}\label{ExpBigons} Given an $(r, 2s-1)$-random meander link diagram $L^r_{2s-1}$, the expected number of bigons is $s+1$. \end{lemma} \begin{proof} A link diagram $L^r_{2s-1}$ is created from a string of parentheses above the axis and another below the axis. By Lemma \ref{lem:narayana}, the expected number of nestings in each is $\frac{s+1}{2}$, and expectations add, giving $s+1$. \end{proof} \begin{theorem}\label{twists} Let $D$ be the diagram of an $(r, 2s-1)$-random meander link in $S^3$. Then the expected number of twist regions is $(2s-1)r^2-s-1$. In particular, when $r=1$, the expected number of twist regions is $s-2$.
\end{theorem} \begin{proof} Given a link diagram $D$ with $c$ crossings, $b$ bigons, and $t$ twist regions, we have $t=c-b$, since a twist region with $m$ crossings contains exactly $m-1$ bigons. Together with Lemma \ref{ExpBigons} and the fact that $D$ has $(2s-1)r^2$ crossings, this observation yields $$\mathbb{E}(t) = (2s-1)r^2 - (s+1) = (2s-1)r^2-s-1.$$ Setting $r = 1$, we obtain $\mathbb{E}(t)=s-2$. \end{proof} \subsection{Expected hyperbolic and simplicial volume.} We now apply this to obtain bounds on the mathematical expectation of volume for links. \begin{theorem}\label{bound} [Lackenby \cite{Lackenby}, Agol-Thurston] Let $D$ be a prime alternating diagram of a hyperbolic link $L$ in $S^3$ with twist number $t(D)$. Then $$v_3(t(D) -2)/2 \leq Volume(S^3 -L) < 10v_3(t(D)-1),$$ where $v_3$ is the volume of a regular hyperbolic ideal 3-simplex. \end{theorem} \begin{remark}\label{simplicial} The above upper bound was originally proved for all hyperbolic links, including non-alternating ones. It is also shown in \cite{DT} that it holds for simplicial volume of non-hyperbolic links. \end{remark} In the spirit of Obeidin's observations on volume in another random link model \cite{Obeidin}, we apply this to random meander links: \begin{corollary}\label{volume} Let $L$ be an $(r, 2s-1)$-random meander link in $S^3$. Then the mathematical expectation for its volume, hyperbolic or simplicial, satisfies the upper bound: $$\mathbb{E}(Volume(S^3 -L)) \leq 10v_3((2s-1)r^2-s-2).$$ In particular, when $r = 1$, $L$ has $2s-1$ crossings, and $$\mathbb{E}(Volume(S^3 -L) ) \leq 10v_3(s-3).$$ Additionally, for any $(r, 2s-1)$-random alternating meander link $L$ with $r \neq 1$, and $L$ hyperbolic, the mathematical expectation for its hyperbolic volume satisfies the lower bound: \begin{center} $v_3((2s-1)r^2-s-5)/2 \leq \mathbb{E}(Volume(S^3 -L) ).$ \end{center} \end{corollary} \begin{proof} Both the upper and the lower bound follow from Theorems \ref{twists} and \ref{bound}, and Remark \ref{simplicial}.
Note however that the lower bound in Theorem \ref{bound} is for prime alternating diagrams only, while our model produces all alternating diagrams once the crossing information is chosen correctly. We claim that every alternating $(r, 2s-1)$-meander diagram is a prime link diagram under two conditions: with $r$ not equal to 1, and with no nugatory self-crossings. Indeed, in the absence of nugatory self-crossings, there are at least four strands going out of every non-trivial tangle when there are $r > 1$ identical copies of every strand, and hence the diagram is prime. A nugatory crossing can only occur in the upper rightmost or bottom leftmost end of a $(r, 2s-1)$-meander graph. This happens when a nesting occurs at the leftmost position on the top or at the rightmost position on the bottom. Such a link diagram is not reduced. If the diagram is alternating, one can reduce it simply by removing the nugatory self-crossings (untwisting it), and then count the remaining twists and apply the lower volume bound from Theorem \ref{bound} . To account for this potential untwisting, we subtract 2 from the expected twist number in the lower bound: this results in the constant $-5$ in our lower bound, compared to $-3$ in the bound of Theorem \ref{bound}. Also note that when we consider the restricted random model that produces only alternating links, this does not change Lemma \ref{ExpBigons} and Theorem \ref{twists}. Indeed, the expected number of twists is computed there for meander graphs, not involving crossing information, and hence is the same for this (restricted) model. \end{proof} The above bounds are all linear in $s$ and quadratic in $r$, but there are $(2s-1)r^2$ crossings in the links. Thus, we have linear bounds in the number of crossings for the expected volume of a random meander link. \subsection{Some related questions.}One of the difficulties with some random link and knot models is that they rarely yield hyperbolic links. 
For example, links obtained using random walks in the plane or in space are often composite, and cannot be hyperbolic by W. Thurston's results \cite{Thurston}. At the same time, hyperbolic links are often the ones that possess deep geometric and topological structure, and interesting properties. Hyperbolicity of random meander links is therefore a natural question. \begin{question} What is the probability for an $(r, 2s-1)$-random meander link diagram to represent a hyperbolic link? What about the probability for the family of alternating $(r, 2s-1)$-random meander link diagrams? \end{question} One way of approaching the above question might be through tracking the presence of certain fragments in meander diagrams and meander graphs, as we did above for detecting unlinks. Note that a $(1,{2s-1})$-meander link $L$ with pierced circle $C$ is a satellite link and thus is not hyperbolic by \cite{Thurston}. Indeed, if $C$ and the axis component $A$ are linked, there is an embedded essential torus $T$ following $A$ in the complement of $L$. If $C$ and $A$ are not linked, $L$ is a split link, and is also not hyperbolic. \begin{question} What fragments of a meander link diagram or meander graph guarantee that the respective link complement in the 3-sphere is not hyperbolic? What is the probability for each such fragment to appear? \end{question} We provided bounds for the mathematical expectation of the volume of link complements above. In \cite{DT0, DT}, the upper bound for hyperbolic and simplicial volume from \cite{Lackenby} is refined based on differentiating between twists with 1, 2, 3, or at least 4 crossings. It would be interesting to find the probability of having such twists in an $(r, 2s-1)$-random meander link diagram and to apply it to these volume bounds similarly to the above. \begin{question} Can the upper bound for the mathematical expectation of the volume of an $(r, 2s-1)$-random meander link complement be refined?
\end{question} \section{Appendix} \begin{claim}\label{claim} With the notation as in the rest of the paper, the following holds: \begin{equation*} \label{eq1} \begin{split} E(s, k) &= O(s, k) - \sum_{m=k+1}^s \ch{m}{k} \sum_{j=m}^s (-1)^{m+j}\ch{j}{m}O(s, j) \\ & = O(s, k) - \sum_{m=k+1}^s O(s, m) \ch{m}{k} \left[ \sum_{i=k+1}^{m} (-1)^{m-i} \ch{m-k}{m-i}\right] \end{split} \end{equation*} \end{claim} \begin{proof} Starting with the first equality \begin{center} $E(s, k) = O(s, k) - \sum_{m=k+1}^s \ch{m}{k} \sum_{j=m}^s (-1)^{m+j}\ch{j}{m}O(s, j)$, \end{center} expand the sums and group the terms with $O(s, k)$, then with $O(s, k+1)$, then $O(s, k+2)$, and so on up to $O(s, s)$. We obtain \noindent $ E(s, k)=O(s, k) - O(s, k+1) \ch{k+1}{k} +O(s, k+2)\left[-\ch{k+2}{k} + \ch{k+1}{k}\ch{k+2}{k+1}\right] +...$ \noindent $+O(s, s) \left[-\ch{s}{k} +\ch{s-1}{k}\ch{s}{s-1} - ... \mp \ch{k+2}{k}\ch{s}{k+2} \pm \ch{k+1}{k}\ch{s}{k+1} \right]$. Rewrite the latter expression using the summation notation for the sums next to $O(s, k+1)$, $O(s, k+2)$, ..., $O(s, s)$: \noindent $E(s, k)=O(s, k) - O(s, k+1) \sum_{i=k+1}^{k+1} (-1)^{k+1+i} \ch{i}{k}\ch{k+1}{i}-$ \noindent $-O(s, k+2) \sum_{i=k+1}^{k+2} (-1)^{k+2+i} \ch{i}{k}\ch{k+2}{i} - O(s, k+3) \sum_{i=k+1}^{k+3} (-1)^{k+3+i} \ch{i}{k}\ch{k+3}{i}-$ \noindent $-...-O(s, s) \sum_{i=k+1}^{s} (-1)^{s+i} \ch{i}{k}\ch{s}{i}=O(s, k) - \sum_{m=k+1}^s O(s, m) \left[ \sum_{i=k+1}^{m} (-1)^{m+i} \ch{i}{k}\ch{m}{i}\right]$. Using that $\ch{i}{k}\ch{m}{i}=\ch{m}{k}\ch{m-k}{m-i}$, we have \begin{equation}\label{star} E(s, k)=O(s, k) - \sum_{m=k+1}^s O(s, m) \ch{m}{k} \left[ \sum_{i=k+1}^{m} (-1)^{m-i} \ch{m-k}{m-i}\right].\tag{$*$} \end{equation} \end{proof} \textbf{Acknowledgments} \ Tsvietkova was partially supported by the National Science Foundation (NSF) of the United States, grants DMS-1664425 (previously 1406588) and DMS-2005496, and by the Institute for Advanced Study under NSF grant DMS-1926686.
Both authors thank the Okinawa Institute of Science and Technology for its support. We thank Kasper Anderson from Lund University for drawing our attention to Zeilberger's algorithm and Poincar\'e's theorem, which played an important role in Section \ref{Unlinks}. We are grateful to the referees, whose suggestions made this a better paper. \begin{thebibliography}{99} \bibitem{AST} C.~Adams, R.~Shinjo and K.~Tanaka, {\em Complementary Regions for Knot and Link Complements}, Annals of Combinatorics {\textbf {15}} (2011), no. 4, 549--563 \bibitem{Chapman} H. Chapman, \textit{Asymptotic laws for random knot diagrams}, J. Phys. A 50 (2017), no. 22, 225001, 32 pp. \bibitem{DT0} O. Dasbach, A. Tsvietkova, {\em A refined upper bound for the hyperbolic volume of alternating links and the colored Jones polynomial}, Mathematical Research Letters 22 (2015), no. 4, 1047--1060 \bibitem{DT} O. Dasbach, A. Tsvietkova, {\em Simplicial volume of links from link diagrams}, Mathematical Proceedings of the Cambridge Philosophical Society 166 (2019), no. 1, 75--81 \bibitem{Delbruck} M. Delbruck, \textit{Knotting problems in biology}, Plant Genome Data and Information Center collection on computational molecular biology and genetics, 1961 \bibitem{DPS} Y. Diao, N. Pippenger, D. W. Sumners, \textit{On random knots}, J. Knot Theory Ramifications (1994), no. 3, 419--429 \bibitem{DunfieldThurston} N. M. Dunfield, W. P. Thurston, {\em Finite covers of random 3-manifolds}, Invent. Math. 166 (2006), no. 3, 457--521 \bibitem{EvenZohar} C. Even-Zohar, {\em Models of random knots}, Journal of Applied and Computational Topology {\bf 1(2)} (2017), 263--296 \bibitem{EZHLN1} C. Even-Zohar, J. Hass, N. Linial, T. Nowik, {\em The distribution of knots in the petaluma model}, Algebr. Geom. Topol. 18 (2018), no. 6, 3647--3667 \bibitem{EZHLN2} C. Even-Zohar, J. Hass, N. Linial, T. Nowik, {\em Universal Knot Diagrams}, J. Knot Theory Ramifications 28 (2019), no. 7, 30 pp. \bibitem{EZHLN4} C. Even-Zohar, J. Hass, N.
Linial, T. Nowik, {\em Invariants of random knots and links,} Discrete and Computational Geometry, 56 (2), 2016, 274--314 \bibitem{FE} R. O. W. Franz, B. A. Earnshaw, {\em A constructive enumeration of meanders, } Ann. Combin. 6 (2002), 7--17 \bibitem{FW} H. L. Frisch, E. Wasserman, \textit{Chemical topology}, J. of the American Chemical Soc., 83(18) : 3789--3795, 1961. \bibitem{JR} S. Jablan and Lj. Radovi\'c, {\em Meander Knots and Links,} Filomat {\textbf {(29)(10)}} (2015), pp. 2381--2392 \bibitem{Lackenby} M. Lackenby, {\em The volume of hyperbolic alternating link complements. With an appendix by I. Agol and D. Thurston,} Proc. London Math. Soc. {\textbf {88}} (2004), 204--224 \bibitem{LackenbyPurcell2} M. Lackenby, J. S. Purcell, {\em Cusp volumes of alternating knots,} Geometry and Topology, Vol. 20 (2016), No. 4, 2053--2078 \bibitem{ProbCompScTextbook} E. Lehman, F. T. Leighton, A. R. Meyer, \textit{Mathematics for Computer Science}, Samurai Media Limited, 2017, 988 pp. \bibitem{Maple} {\em Maple} (2021.2), Maplesoft, a division of Waterloo Maple Inc., Waterloo, Ontario \bibitem{Owad1} N. Owad, {\em Families of not perfectly straight knots}, J. Knot Theory Ramifications 28 (2019), no. 3, 1950027, 12 pp. \bibitem{MapleFile} N. Owad, {\em Maple worksheet for computing and verifying polynomials P\_i in random meander link model}, (2022) \url{https://nick.owad.org/Zeilbergers.mw.zip} \bibitem{Owad2} N. Owad, {\em Straight Knots}, preprint (2018), \url{https://arxiv.org/abs/1801.10428} \bibitem{Obeidin} M. Obeidin, {\em Volumes of Random Alternating Link Diagrams}, preprint (2017), \url{https://arxiv.org/abs/1611.04944} \bibitem{Peterson} K. Petersen, {\em Eulerian Numbers,} Birkhäuser Advanced Texts Basler Lehrbücher (2015), 456 pp. \bibitem{aeqb} M. Petkovsek, H. Wilf and D. Zeilberger, {\em A=B}, A K Peters, Wellesley, MA (1996), 212 pp. \bibitem{Pippenger} N. Pippenger, \textit{Knots in random walks}, Discrete Appl. Math. 
25 (1989), no. 3, 273--278 \bibitem{Poincare} H. Poincar\'e, {\em Sur les \'Equations Lin\'eaires aux Diff\'erentielles Ordinaires et aux Diff\'erences Finies} (French), Amer. J. Math. {\bf {7(3)}} (1885), 203--258 \bibitem{Purcell} J. S. Purcell, {\em Volumes of highly twisted knots and links}, Algebraic and Geometric Topology, Vol. 7 (2007), pp. 93--108 \bibitem{Stanley} R. P. Stanley, {\em Enumerative Combinatorics: Volume 1}, Cambridge Studies in Advanced Mathematics, 49, Cambridge University Press, Cambridge (2012), 626 pp. \bibitem{SW} D. W. Sumners, G. S. Whittington, \textit{Knots in self-avoiding walks}, J. Phys. A 21 (1988), no. 7, 1689--1694 \bibitem{Thurston} W. P. Thurston, {\em The geometry and topology of three-manifolds}, Princeton Univ. Math. Dept. Notes, 1979 \bibitem{Welsh} D. J. A. Welsh, {\em On the number of knots and links}, Colloq. Math. Soc. J\'anos Bolyai 59 (1991), 1--6 \end{thebibliography} \ Anastasiia Tsvietkova Rutgers University, Newark [email protected] \ Nicholas Owad [email protected] \end{document}
\documentclass[12pt,letterpaper,american]{article} \usepackage[T1]{fontenc} \usepackage[latin9]{inputenc} \synctex=-1 \usepackage{xcolor} \usepackage{pdfcolmk} \usepackage{babel} \usepackage{mathrsfs} \usepackage{amsmath} \usepackage{amsthm} \usepackage{amssymb} \usepackage{graphicx} \usepackage[all]{xy} \PassOptionsToPackage{normalem}{ulem} \usepackage{ulem} \usepackage{nomencl} \providecommand{\printnomenclature}{\printglossary} \providecommand{\makenomenclature}{\makeglossary} \makenomenclature \usepackage[unicode=true, bookmarks=true,bookmarksnumbered=false,bookmarksopen=false, breaklinks=true,pdfborder={0 0 0},pdfborderstyle={},backref=false,colorlinks=false] {hyperref} \hypersetup{pdftitle={Biorthogonal Approach to Infinite Dimensional Fractional Poisson Measure}, pdfauthor={Jerome B. Bendong and Sheila M. Menchavez and Jose Luis da Silva}, pdfsubject={Non Gaussian Measures}, pdfkeywords={Fractional Poisson measure, Appell polynomials}, pdfborderstyle=} \makeatletter \pdfpageheight\paperheight \pdfpagewidth\paperwidth \providecolor{lyxadded}{rgb}{0,0,1} \providecolor{lyxdeleted}{rgb}{1,0,0} \DeclareRobustCommand{\lyxadded}[3]{{\texorpdfstring{\color{lyxadded}{}}{}#3}} \DeclareRobustCommand{\lyxdeleted}[3]{{\texorpdfstring{\color{lyxdeleted}\lyxsout{#3}}{}}} \theoremstyle{plain} \newtheorem{thm}{\protect\theoremname}[section] \theoremstyle{remark} \newtheorem{rem}[thm]{\protect\remarkname} \theoremstyle{plain} \newtheorem{lem}[thm]{\protect\lemmaname} \theoremstyle{plain} \newtheorem{cor}[thm]{\protect\corollaryname} \theoremstyle{definition} \newtheorem{defn}[thm]{\protect\definitionname} \theoremstyle{plain} \newtheorem{prop}[thm]{\protect\propositionname} \theoremstyle{definition} \newtheorem{example}[thm]{\protect\examplename} \usepackage{babel} \usepackage{xcolor} \usepackage{pdfcolmk} \usepackage{bm} \usepackage{datetime} \usepackage{currfile} \usepackage{pdfsync} \usepackage{bbm,geometry} \newcommand{\zx}{\color{red}} \newcommand{\xz}{\color{black}}
\numberwithin{equation}{section} \providecommand{\corollaryname}{Corollary} \providecommand{\definitionname}{Definition} \providecommand{\examplename}{Example} \providecommand{\remarkname}{Remark} \providecommand{\theoremname}{Theorem} \newcommand{\A}{\mathbb{A}} \newcommand{\E}{\mathbb{E}} \newcommand{\ds}{\displaystyle} \newcommand{\nid}{\noindent} \newcommand{\np}{\newpage} \newcommand{\lb}{\linebreak} \geometry{a4paper, top=2cm, bottom=2cm, left=2.0cm, right=2.cm, } \newcommand{\vertiii}[1]{{\left\vert\kern-0.4ex\left\vert\kern-0.4ex\left\vert #1 \right\vert\kern-.4ex\right\vert\kern-0.4ex\right\vert}} \providecommand{\propositionname}{Proposition} \newcommand{\xyR}[1]{ \xydef@\xymatrixrowsep@{#1}} \newcommand{\xyC}[1]{ \xydef@\xymatrixcolsep@{#1}} \makeatother \providecommand{\corollaryname}{Corollary} \providecommand{\definitionname}{Definition} \providecommand{\examplename}{Example} \providecommand{\lemmaname}{Lemma} \providecommand{\propositionname}{Proposition} \providecommand{\remarkname}{Remark} \providecommand{\theoremname}{Theorem} \begin{document} \title{A Biorthogonal Approach to the Infinite Dimensional Fractional Poisson Measure} \author{\textbf{Jerome B. Bendong}\\ DMS, MSU-Iligan Institute of Technology,\\ Tibanga, 9200 Iligan City, Philippines\\ Email: [email protected]\and \textbf{Sheila M. Menchavez}\\ DMS, MSU-Iligan Institute of Technology,\\ Tibanga, 9200 Iligan City, Philippines\\ Email: [email protected]\and \textbf{Jos{\'e} Lu{\'\i}s da Silva}\\ Faculdade de Ci{\^e}ncias Exatas e da Engenharia,\\ CIMA, Universidade da Madeira,\\ Campus Universit{\'a}rio da Penteada,\\ 9020-105 Funchal, Portugal\\ Email: [email protected]} \date{\today} \maketitle \begin{abstract} In this paper we use a biorthogonal approach to the analysis of the infinite dimensional fractional Poisson measure $\pi_{\sigma}^{\beta}$, $0<\beta\leq1$, on the dual of Schwartz test function space $\mathcal{D}'$. 
The Hilbert space $L^{2}(\pi_{\sigma}^{\beta})$ of complex-valued functions is described in terms of a system of generalized Appell polynomials $\mathbb{P}^{\sigma,\beta,\alpha}$ associated to the measure $\pi_{\sigma}^{\beta}$. The kernels $C_{n}^{\sigma,\beta}(\cdot)$, $n\in\mathbb{N}_{0}$, of the monomials may be expressed in terms of the Stirling operators of the first and second kind as well as the falling factorials in infinite dimensions. Associated to the system $\mathbb{P}^{\sigma,\beta,\alpha}$, there is a generalized dual Appell system $\mathbb{Q}^{\sigma,\beta,\alpha}$ that is biorthogonal to $\mathbb{P}^{\sigma,\beta,\alpha}$. The test and generalized function spaces associated to the measure $\pi_{\sigma}^{\beta}$ are completely characterized, via an integral transform, as entire functions. \\ \\ \textbf{Keywords}: fractional Poisson measure, generalized Appell system, Wick exponential, test functions, generalized functions, Stirling operators, $S$-transform. \end{abstract} \tableofcontents{} \section{Introduction} \label{sec:Introduction}In this paper we develop a biorthogonal approach to the analysis of the infinite dimensional fractional Poisson measure (fPm) on the configuration space $\Gamma$ or over $\mathcal{D}'$ (the dual of the Schwartz test function space $\mathcal{D}$). As a special case of a non-Gaussian measure (for which this biorthogonal approach was developed in \cite{ADKS96,KSWY95,KdSS98}), the fPm reveals an interesting connection with the Stirling operators and falling factorials in the context of the infinite dimensional analysis introduced recently in \cite{Finkelshtein2022}. To describe our results more precisely, let us recall that there are different ways to introduce a total set of orthogonal polynomials in the Hilbert space of square integrable functions with respect to (wrt) a probability measure. For example, one may apply the Gram-Schmidt method to an independent sequence of functions, or use generating functions.
In the case at hand, that is, the fPm $\pi_{\sigma}^{\beta}$ ($0<\beta\le1$, $\sigma$ a non-degenerate and non-atomic measure in $\mathbb{R}^{d}$), we have chosen the generating function procedure because the Gram-Schmidt method is not practical. In addition, the generating function is picked in a way such that at $\beta=1$ we recover the classical Charlier polynomials, that is, $\pi_{\sigma}^{1}$ coincides with the standard Poisson measure $\pi_{\sigma}$ on $\Gamma$, see \cite{AKR97a} for more details. Explicitly, given the map \[ \alpha:\mathcal{D}_{\mathbb{C}}\longrightarrow\mathcal{D}_{\mathbb{C}},\;\varphi\mapsto\alpha(\varphi)(x):=\log(1+\varphi(x)),\quad x\in\mathbb{R}^{d}, \] we define the modified Wick exponential \[ \mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);w):=\frac{\exp(\langle w,\alpha(\varphi)\rangle)}{l_{\pi_{\sigma}^{\beta}}(\alpha(\varphi))}=\sum_{n=0}^{\infty}\frac{1}{n!}\langle C_{n}^{\sigma,\beta}(w),\varphi^{\otimes n}\rangle,\quad w\in\mathcal{D}'_{\mathbb{C}}, \] where $\varphi$ is properly chosen from a neighborhood of zero in $\mathcal{D}_{\mathbb{C}}$. The monomials $\langle C_{n}^{\sigma,\beta}(w),\varphi^{\otimes n}\rangle$, $n\in\mathbb{N}_{0}$, generate a system of polynomials $\mathbb{P}^{\sigma,\beta,\alpha}$ which forms a total set in the space $L^{2}(\pi_{\sigma}^{\beta})$ of square $\pi_{\sigma}^{\beta}$-integrable complex functions. The kernels $C_{n}^{\sigma,\beta}(\cdot)$, $n\in\mathbb{N}_{0}$, possess certain remarkable properties involving the Stirling operators of the first and second kind as well as the falling factorials $(w)_{n}$, $w\in\mathcal{D}'_{\mathbb{C}}$, introduced in \cite{Finkelshtein2022}. We refer to Proposition~\ref{prop:generalized-appell-polynomials-inf-dim} and Appendix \ref{sec:Stirling-Operators} for more details and results.
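To make the statement about $\beta=1$ concrete in one dimension: for $\beta=1$ and intensity $\lambda>0$, the generating function above reduces to $\mathrm{e}^{-\lambda t}(1+t)^{x}$, the classical generating function of the Charlier polynomials. The following Python sketch (our illustration, not part of the paper; the truncation cutoff and the choice $\lambda=1$ are ours) extracts the polynomials as Taylor coefficients and numerically checks their orthogonality with respect to the one-dimensional Poisson measure.

```python
from math import comb, exp, factorial

def charlier(n, x, lam):
    """C_n(x) = n! * [t^n] e^{-lam*t} (1+t)^x, for integer x >= 0."""
    return factorial(n) * sum(
        comb(x, j) * (-lam) ** (n - j) / factorial(n - j) for j in range(n + 1)
    )

def poisson_inner(n, m, lam, cutoff=80):
    """<C_n, C_m> in L^2 of the Poisson measure with intensity lam (truncated sum)."""
    return sum(
        exp(-lam) * lam**k / factorial(k) * charlier(n, k, lam) * charlier(m, k, lam)
        for k in range(cutoff)
    )

lam = 1.0
# Orthogonality: <C_n, C_m> = 0 for n != m, and <C_n, C_n> = n! * lam^n.
for n in range(4):
    for m in range(4):
        expected = factorial(n) * lam**n if n == m else 0.0
        assert abs(poisson_inner(n, m, lam) - expected) < 1e-8
```

The same coefficient-extraction idea is what the infinite dimensional generating function performs symbolically, with $\varphi^{\otimes n}$ in place of $t^{n}$.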
Other choices of generating function, such as $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\varphi;\cdot)$, are also possible (see the beginning of Section \ref{sec:Appell-System}), but at $\beta=1$ the corresponding system of polynomials does not coincide with the classical Charlier polynomials. Thus, our natural choice is the modified Wick exponential generating function $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);\cdot)$. On the other hand, the construction of the generalized dual Appell system $\mathbb{Q}^{\sigma,\beta,\alpha}$ turns out to be very appealing since it involves a differential operator of infinite order on the space of polynomials $\mathcal{P}(\mathcal{D}')$ over $\mathcal{D}'$ and the adjoints of the Stirling operators. This careful choice of the system $\mathbb{Q}^{\sigma,\beta,\alpha}$ leads us to the so-called biorthogonal property between the two systems $\mathbb{P}^{\sigma,\beta,\alpha}$ and $\mathbb{Q}^{\sigma,\beta,\alpha}$, see Theorem \ref{thm:biorthogonal-property-inf-dim}. The generalized Appell system $\mathbb{A}^{\sigma,\beta,\alpha}:=(\mathbb{P}^{\sigma,\beta,\alpha},\mathbb{Q}^{\sigma,\beta,\alpha})$ is used to introduce a family of test function spaces $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}$, $0\le\kappa\le1$, which are nuclear spaces and continuously embedded in $L^{2}(\pi_{\sigma}^{\beta})$. The dual space of $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}$ is given by the general duality theory as $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-\kappa}$. In this way, we obtain the chain of continuous embeddings \[ (\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}\subset L^{2}(\pi_{\sigma}^{\beta})\subset(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-\kappa}.
\] A typical example of a test function is the modified Wick exponential $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);\cdot)\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}$ (see Example \ref{exa:normalized-exp-as-test-function}), given as a convergent series in terms of the system $\mathbb{P}^{\sigma,\beta,\alpha}$, while a particular element in $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-1}$ is given by the generalized Radon-Nikodym derivative $\rho_{\pi_{\sigma}^{\beta}}^{\alpha}(w,\cdot)$, $w\in\mathcal{N}'_{\mathbb{C}}$. Moreover, the generalized function $\rho_{\pi_{\sigma}^{\beta}}^{\alpha}(w,\cdot)$ plays the role of the generating function of the system $\mathbb{Q}^{\sigma,\beta,\alpha}$, that is, \[ \rho_{\pi_{\sigma}^{\beta}}^{\alpha}(w,\cdot)=\sum_{k=0}^{\infty}\frac{1}{k!}Q_{k}^{\text{\ensuremath{\pi_{\sigma}^{\beta},\alpha}}}((-w)_{k}). \] The spaces $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\pm\kappa}$ may be characterized in terms of an integral transform, called the $S_{\pi_{\sigma}^{\beta}}$-transform. It turns out that all these spaces $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\pm\kappa}$, $0\le\kappa\le1$, are universal in the sense that the $S_{\pi_{\sigma}^{\beta}}$-transforms of their elements are entire functions (for $0\le\kappa<1$) or holomorphic functions ($\kappa=1$), independent of the measure $\pi_{\sigma}^{\beta}$; see Theorem~\ref{thm:characterizations}. This feature is well known in non-Gaussian analysis. The paper is organized as follows. In Section \ref{sec:Nuclear-spaces}, we recall some known concepts of nuclear spaces and their tensor products. As a motivation to the generalization of fPm to infinite dimensions, we discuss its finite dimensional version in Section \ref{sec:FPm-finite-dim}.
We show that the monic polynomials $C_{n}^{\beta}(x)$, $n\in\mathbb{N}_{0}$, obtained by applying the Gram-Schmidt orthogonalization process to the monomials $x^{n}$, $n\in\mathbb{N}_{0}$, are orthogonal in $L^{2}(\pi_{\vec{\lambda},\beta}^{2})$ (where $\pi_{\vec{\lambda},\beta}^{2}$ is the fractional Poisson measure in two dimensions) if, and only if, $\beta=1$. In Section \ref{sec:Inf-Dim-fPm}, we define the fPm $\pi_{\sigma}^{\beta}$ in infinite dimensions as a probability measure on $(\mathcal{D}',\mathcal{C}_{\sigma}(\mathcal{D}'))$, where $\mathcal{C}_{\sigma}(\mathcal{D}')$ is the $\sigma$-algebra generated by the cylinder sets. We also discuss the concept of the configuration space $\Gamma$ and then, using the Kolmogorov extension theorem, we define a unique measure $\pi_{\sigma}^{\beta}$ on the configuration space $(\Gamma,\mathcal{B}(\Gamma))$ whose characteristic function coincides with that of $\pi_{\sigma}^{\beta}$ on the distribution space $\mathcal{D}'$. In Section \ref{sec:Appell-System}, we introduce the generalized Appell system associated with the fPm $\pi_{\sigma}^{\beta}$. This includes the system of generalized Appell polynomials and the dual Appell system, which are biorthogonal with respect to the fPm $\pi_{\sigma}^{\beta}$. Finally, in Section \ref{sec:test-and-generalized-function-spaces}, we construct the test and generalized function spaces associated to the fPm $\pi_{\sigma}^{\beta}$ and provide some of their properties as well as their characterization theorems. For completeness, in the Appendices \ref{sec:Kolmogorov-extension-theorem-on-config-space}--\ref{sec:alternative-proof-biorthogonal-property} we provide certain concepts and results already known in the literature, particularly the Kolmogorov extension theorem on the configuration space and the Stirling operators in infinite dimensions.
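The failure of orthogonality for $\beta<1$ can already be seen at degree one: from the moment formulas of Section \ref{sec:FPm-finite-dim}, the covariance of the two coordinates under $\pi_{\vec{\lambda},\beta}^{2}$ equals $\lambda_{1}\lambda_{2}\big(2/\Gamma(2\beta+1)-1/\Gamma(\beta+1)^{2}\big)$, which vanishes precisely when $\beta=1$. The short Python sketch below (our illustration, not part of the paper) evaluates this expression.

```python
from math import gamma

def fpm_covariance(beta, lam1=1.0, lam2=1.0):
    """Cov(x1, x2) under the 2-d fractional Poisson measure, computed from
    m(1,1) = 2*lam1*lam2/Gamma(2*beta+1) and m(1) = lam/Gamma(beta+1)."""
    m11 = 2.0 * lam1 * lam2 / gamma(2.0 * beta + 1.0)
    m1 = lam1 / gamma(beta + 1.0)
    m2 = lam2 / gamma(beta + 1.0)
    return m11 - m1 * m2

# At beta = 1 the coordinates are uncorrelated (classical product Poisson case) ...
assert abs(fpm_covariance(1.0)) < 1e-12
# ... but for beta < 1 the degree-one Gram-Schmidt polynomials x_i - E[x_i]
# are no longer orthogonal: the covariance is strictly positive.
assert fpm_covariance(0.5) > 0.1
```

Since $x_{i}-\mathbb{E}[x_{i}]$ are exactly the degree-one monic Gram-Schmidt polynomials, a nonzero covariance already rules out orthogonality for $\beta<1$.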
\section{Tensor Powers of Nuclear Spaces} \label{sec:Nuclear-spaces}We first consider nuclear Fr\'{e}chet spaces (i.e., complete metrizable locally convex spaces) which may be characterized in terms of projective limits of a countable number of Hilbert spaces, see e.g., \cite{BK88}, \cite{BSU96}, \cite{GV68}, \cite{HKPS93} and \cite{O94} for more details and proofs. Let $\mathcal{H}$ be a real separable Hilbert space with inner product $(\cdot,\cdot)$ and corresponding norm $|\cdot|$. Consider a family of real separable Hilbert spaces $\mathcal{H}_{p}$, $p\in\mathbb{N}$, with Hilbert norms $|\cdot|_{p}$ such that the space $\bigcap_{p\in\mathbb{N}}\mathcal{H}_{p}$ is dense in each $\mathcal{H}_{p}$, and \[ \dots\subset\mathcal{H}_{p}\subset\dots\subset\mathcal{H}_{1}\subset\mathcal{H} \] with the corresponding system of norms being ordered, i.e., \[ |\cdot|\leq|\cdot|_{1}\leq\dots\leq|\cdot|_{p}\leq\dots,\quad p\in\mathbb{N}. \] Now we assume that the space $\mathcal{N}=\bigcap_{p\in\mathbb{N}}\mathcal{H}_{p}$ is nuclear (i.e., for each $p\in\mathbb{N}$ there is a $q>p$ such that the canonical embedding $\mathcal{H}_{q}\hookrightarrow\mathcal{H}_{p}$ is of Hilbert-Schmidt class) and on $\mathcal{N}$ we fix the \emph{projective limit topology}, i.e., the coarsest topology on $\mathcal{N}$ with respect to which each canonical embedding $\mathcal{N}\hookrightarrow\mathcal{H}_{p}$, $p\in\mathbb{N}$, is continuous. With respect to this topology, $\mathcal{N}$ is a Fr\'{e}chet space and we use the notation \[ \mathcal{N}=\underset{p\in\mathbb{N}}{\mathrm{pr\,lim}}\mathcal{H}_{p} \] to denote the space $\mathcal{N}$ endowed with the corresponding projective limit topology. Such a topological space is called a \emph{projective limit} or a \emph{countable limit of the family} $(\mathcal{H}_{p})_{p\in\mathbb{N}}$.
Let us denote by $\mathcal{H}_{-p}$, $p\in\mathbb{N}$, the dual of $\mathcal{H}_{p}$ with respect to the space $\mathcal{H}$, with the corresponding Hilbert norm $|\cdot|_{-p}$. By the general duality theory, the dual space $\mathcal{N}'$ of $\mathcal{N}$ with respect to $\mathcal{H}$ can then be written as \[ \mathcal{N}':=\bigcup_{p\in\mathbb{N}}\mathcal{H}_{-p} \] with the \emph{inductive limit topology}, i.e., the finest topology on $\mathcal{N}'$ with respect to which all the embeddings $\mathcal{H}_{-p}\hookrightarrow\mathcal{N}'$ are continuous. This topological space is denoted by \[ \mathcal{N}'=\underset{p\in\mathbb{N}}{\mathrm{ind\,lim}}\mathcal{H}_{-p} \] and is called an \emph{inductive limit of the family} $(\mathcal{H}_{-p})_{p\in\mathbb{N}}$. In this way we have obtained the chain of spaces \[ \mathcal{N}\subset\mathcal{H}\subset\mathcal{N}' \] called a \emph{nuclear triple} or \emph{Gelfand triple}. The dual pairing $\langle\cdot,\cdot\rangle$ between $\mathcal{N}$ and $\mathcal{N}'$ is then realized as an extension of the inner product $(\cdot,\cdot)$ on $\mathcal{H}$, i.e., \[ \langle g,\xi\rangle=(g,\xi),\quad g\in\mathcal{H},\xi\in\mathcal{N}. \] The $n$-th tensor power of the Hilbert space $\mathcal{H}_{p}$, $p\in\mathbb{N}$, is denoted by $\mathcal{H}_{p}^{\otimes n}$. We keep the notation $|\cdot|_{p}$ for the Hilbert norm on this space. The subspace of $\mathcal{H}_{p}^{\otimes n}$ of symmetric elements is denoted by $\mathcal{H}_{p}^{\hat{\otimes}n}$. The \emph{$n$}-th tensor power $\mathcal{N}^{\otimes n}$ of $\mathcal{N}$ and the \emph{$n$}-th symmetric tensor power $\mathcal{N}^{\hat{\otimes}n}$ of $\mathcal{N}$ are the nuclear Fr\'{e}chet spaces given by \[ \mathcal{N}^{\otimes n}:=\underset{p\in\mathbb{N}}{\mathrm{pr\,lim}}\mathcal{H}_{p}^{\otimes n}\mathrm{\quad and}\quad\mathcal{N}^{\hat{\otimes}n}:=\underset{p\in\mathbb{N}}{\mathrm{pr\,lim}}\mathcal{H}_{p}^{\hat{\otimes}n}.
\] Furthermore, if $\mathcal{H}_{-p}^{\otimes n}$ (resp., $\mathcal{H}_{-p}^{\hat{\otimes}n}$) denotes the dual space of $\mathcal{H}_{p}^{\otimes n}$ (resp., $\mathcal{H}_{p}^{\hat{\otimes}n}$) with respect to $\mathcal{H}^{\otimes n}$, then the dual space $(\mathcal{N}{}^{\otimes n})'$ of $\mathcal{N}^{\otimes n}$ with respect to $\mathcal{H}^{\otimes n}$ and the dual space $(\mathcal{N}{}^{\hat{\otimes}n})'$ of $\mathcal{N}^{\hat{\otimes}n}$ with respect to $\mathcal{H}^{\otimes n}$ can be written as \[ (\mathcal{N}{}^{\otimes n})'=\underset{p\in\mathbb{N}}{\mathrm{ind\,lim}}\mathcal{H}_{-p}^{\otimes n}\mathrm{\quad and}\quad(\mathcal{N}{}^{\hat{\otimes}n})'=\underset{p\in\mathbb{N}}{\mathrm{ind\,lim}}\mathcal{H}_{-p}^{\hat{\otimes}n}, \] respectively. As before, we use the notation $|\cdot|_{-p}$ for the norm on $\mathcal{H}_{-p}^{\otimes n}$, $p\in\mathbb{N}$, and $\langle\cdot,\cdot\rangle$ for the dual pairing between $(\mathcal{N}{}^{\otimes n})'$ and $\mathcal{N}^{\otimes n}$. Thus we have defined the nuclear triples \[ \mathcal{N}^{\otimes n}\subset\mathcal{H}^{\otimes n}\subset(\mathcal{N}{}^{\otimes n})'\quad\mathrm{and}\quad\mathcal{N}^{\hat{\otimes}n}\subset\mathcal{H}^{\hat{\otimes}n}\subset(\mathcal{N}{}^{\hat{\otimes}n})'. \] For all the real spaces in this section we also consider their complexifications, which will be distinguished by a subscript $\mathbb{C}$, i.e., the complexification of $\mathcal{H}$ is $\mathcal{H}_{\mathbb{C}}$ and so on. This means that for $h\in\mathcal{H}_{\mathbb{C}}$, we have $h=h_{1}+ih_{2}$ where $h_{1},h_{2}\in\mathcal{H}$. Let us now introduce spaces of entire functions which will be used later in the characterization theorems in Section \ref{sec:test-and-generalized-function-spaces}.
Let $\mathcal{E}_{2^{-l}}^{k}(\mathcal{H}_{-p,\mathbb{C}})$ denote the set of all entire functions on $\mathcal{H}_{-p,\mathbb{C}}$ of growth $k\in[1,2]$ and type $2^{-l}$, $p,l\in\mathbb{Z}$. This is a linear space with norm \[ n_{p,l,k}(\varphi)=\sup_{w\in\mathcal{H}_{-p,\mathbb{C}}}|\varphi(w)|\exp(-2^{-l}|w|_{-p}^{k}),\;\varphi\in\mathcal{E}_{2^{-l}}^{k}(\mathcal{H}_{-p,\mathbb{C}}). \] The space of entire functions on $\mathcal{N}'_{\mathbb{C}}$ of growth $k$ and minimal type is naturally introduced by \[ \mathcal{E}_{\min}^{k}(\mathcal{N}'_{\mathbb{C}}):=\underset{p,l\in\mathbb{N}}{\mathrm{pr\,lim}}\mathcal{E}_{2^{-l}}^{k}(\mathcal{H}_{-p,\mathbb{C}}), \] see e.g., \cite{Ko80a,BK88}. We will also need the space of entire functions on $\mathcal{N}_{\mathbb{C}}$ of growth $k$ and finite type given by \[ \mathcal{E}_{\max}^{k}(\mathcal{N}{}_{\mathbb{C}}):=\underset{p,l\in\mathbb{N}}{\mathrm{ind\,lim}}\mathcal{E}_{2^{l}}^{k}(\mathcal{H}_{p,\mathbb{C}}). \] \section{Finite Dimensional Fractional Poisson Measure} \label{sec:FPm-finite-dim}In this section we discuss the finite dimensional version of the fractional Poisson measure as a motivation for its generalization to infinite dimensions. The one dimensional version of fractional Poisson analysis was studied in \cite{Bendong2022}. First we introduce the Mittag-Leffler function $E_{\beta}$ with parameter $\beta\in(0,1]$. The Mittag-Leffler function is an entire function defined on the complex plane by the power series \begin{equation} E_{\beta}(z):=\sum_{n=0}^{\infty}\frac{z^{n}}{\Gamma(\beta n+1)},\quad z\in\mathbb{C}.\label{eq:ML-function} \end{equation} It plays the same role for the fPm as the exponential function plays for the Poisson measure. Note that for $\beta=1$ we have $E_{1}(z)=\mathrm{e}^{z}$.
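As a quick numerical illustration of \eqref{eq:ML-function} (a sketch with our own truncation choices, not part of the paper), the series can be evaluated by truncation for moderate arguments; for $\beta=1$ it reproduces the exponential function.

```python
from math import exp, gamma

def mittag_leffler(z, beta, terms=120):
    """Truncated power series E_beta(z) = sum_{n>=0} z^n / Gamma(beta*n + 1).

    Adequate for moderate |z|; large arguments need more careful algorithms.
    """
    total, zn = 0.0, 1.0
    for n in range(terms):
        total += zn / gamma(beta * n + 1.0)
        zn *= z
    return total

# E_beta(0) = 1 for every beta, and E_1(z) = e^z.
assert abs(mittag_leffler(0.0, 0.7) - 1.0) < 1e-12
assert abs(mittag_leffler(1.5, 1.0) - exp(1.5)) < 1e-9
```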
For any $0<\beta\le1$, the fPm $\pi_{\lambda,\beta}$ on $\mathbb{N}_{0}$ (or $\mathbb{R}$) with rate $\lambda>0$ is defined for any $B\in\mathscr{P}(\mathbb{N}_{0})$ by \[ \pi_{\lambda,\beta}(B):=\sum_{k\in B}\frac{\lambda^{k}}{k!}E_{\beta}^{(k)}(-\lambda), \] where $E_{\beta}^{(k)}(z):=\frac{d^{k}}{dz^{k}}E_{\beta}(z)$ is the $k$-th derivative of the Mittag-Leffler function $E_{\beta}$. In particular, if $B=\{k\}\in\mathscr{P}(\mathbb{N}_{0})$, $k\in\mathbb{N}_{0}$, we obtain \[ \pi_{\lambda,\beta}(\{k\})=\frac{\lambda^{k}}{k!}E_{\beta}^{(k)}(-\lambda). \] The Laplace transform of the measure $\pi_{\lambda,\beta}$ is given for any $z\in\mathbb{C}$ by \begin{equation} l_{\pi_{\lambda,\beta}}(z)=\int_{\mathbb{R}}\mathrm{e}^{zx}\,\mathrm{d}\pi_{\lambda,\beta}(x)=\sum_{k=0}^{\infty}\frac{\big(\mathrm{e}^{z}\lambda\big)^{k}}{k!}E_{\beta}^{(k)}\big(-\lambda\big)=E_{\beta}\big(\lambda(\mathrm{e}^{z}-1)\big).\label{eq:LT-fPm} \end{equation} \begin{rem} The measure $\pi_{\lambda t^{\beta},\beta}$ corresponds to the marginal distribution of the fractional Poisson process $N_{\lambda,\beta}=(N_{\lambda,\beta}(t))_{t\ge0}$ with parameter $\lambda t^{\beta}>0$ defined on a probability space $(\Omega,\mathcal{F},P)$. Thus, we obtain \[ \pi_{\lambda t^{\beta},\beta}(\{k\})=P(N_{\lambda,\beta}(t)=k)=\frac{(\lambda t^{\beta})^{k}}{k!}E_{\beta}^{(k)}(-\lambda t^{\beta}),\quad k\in\mathbb{N}_{0}. \] \end{rem} \begin{rem} The fractional Poisson process $N_{\lambda,\beta}$ was proposed by O.\ N.\ Repin and A.\ I.\ Saichev \cite{Repin-Saichev00}. Since then, it has been studied by many authors; see for example \cite{L03,MGS04,Mainardi-Gorenflo-Vivoli-05,Gorenflo2015,Uchaikin2008,Beghin:2009fi,Politi-Kaizoji-2011,Meerschaert2011,Biard2014} and references therein. \end{rem} A remarkable property of the fPm is that $\pi_{\lambda,\beta}$ is given as a mixture of Poisson measures with respect to a probability measure $\nu_{\beta}$ on $\mathbb{R}_{+}:=[0,\infty)$.
That probability measure $\nu_{\beta}$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}_{+}$, with probability density given by the Wright function $W_{-\beta,1-\beta}$. The Laplace transform of the measure $\nu_{\beta}$ (or of its density $W_{-\beta,1-\beta}$) is given by \begin{equation} \int_{0}^{\infty}\mathrm{e}^{-\tau z}\,\mathrm{d}\nu_{\beta}(\tau)=\int_{0}^{\infty}\mathrm{e}^{-z\tau}W_{-\beta,1-\beta}(\tau)\,\mathrm{d}\tau=E_{\beta}(-z),\label{eq:monotonicity-Mittag} \end{equation} for any $z\in\mathbb{C}$ such that $\mathrm{Re}(z)\geq0$, see \cite[Cor.~A.5]{GJRS14}. Equation \eqref{eq:monotonicity-Mittag} is called the complete monotonicity property of the Mittag-Leffler function, see \cite{Pollard48}. More precisely, we have the following lemma. \begin{lem} \label{lem:mixture-1d}For $0<\beta\leq1$, the fPm $\pi_{\lambda,\beta}$ is an integral (or mixture) of the Poisson measures $\pi_{\lambda\tau}$ with respect to the probability measure $\nu_{\beta}$, i.e., \begin{equation} \pi_{\lambda,\beta}=\int_{0}^{\infty}\pi_{\lambda\tau}\,\mathrm{d}\nu_{\beta}(\tau),\quad\forall\lambda>0.\label{eq:fPm-mixture} \end{equation} \end{lem} \begin{proof} For $\beta=1$, we have $\nu_{1}=\delta_{1}$, the Dirac measure at 1, and the result is clear. For $0<\beta<1$, we denote the right hand side of \eqref{eq:fPm-mixture} by $\mu:=\int_{0}^{\infty}\pi_{\lambda\tau}W_{-\beta,1-\beta}(\tau)\,\mathrm{d}\tau$. We compute the Laplace transform of $\mu$ and use Fubini's theorem to obtain \begin{align*} \int_{0}^{\infty}\mathrm{e}^{zx}\,\mathrm{d}\mu(x) & =\int_{0}^{\infty}\mathrm{e}^{zx}\int_{0}^{\infty}\mathrm{d}\pi_{\lambda\tau}(x)W_{-\beta,1-\beta}(\tau)\,\mathrm{d}\tau\\ & =\int_{0}^{\infty}\left(\int_{0}^{\infty}\mathrm{e}^{zx}\,\mathrm{d}\pi_{\lambda\tau}(x)\right)W_{-\beta,1-\beta}(\tau)\,\mathrm{d}\tau\\ & =\int_{0}^{\infty}\mathrm{e}^{\tau\lambda(\mathrm{e}^{z}-1)}W_{-\beta,1-\beta}(\tau)\,\mathrm{d}\tau\\ & =E_{\beta}(\lambda(\mathrm{e}^{z}-1)).
\end{align*}
Thus, the Laplace transforms of $\mu$ and $\pi_{\lambda,\beta}$ (cf.~\eqref{eq:LT-fPm}) coincide, and the result follows by the uniqueness of the Laplace transform.
\end{proof}
\begin{thm}[Moments of $\pi_{\lambda,\beta},$ cf.~\cite{L09}]
\label{thm:fPmm}The fPm $\pi_{\lambda,\beta}$ has moments of all orders. More precisely, the $n$-th moment of the measure $\pi_{\lambda,\beta}$ is given by
\begin{equation}
m_{\lambda,\beta}(n):=\int_{\mathbb{R}}x^{n}\,\mathrm{d}\pi_{\lambda,\beta}(x)=\sum_{m=0}^{n}\frac{m!}{\Gamma(m\beta+1)}S(n,m)\lambda^{m},\label{cnn}
\end{equation}
where $S(n,m)$ is the Stirling number of the second kind.
\end{thm}
Here are the first few moments of the measure $\pi_{\lambda,\beta}$:
\begin{center}
$\begin{array}{rcl}
m_{\lambda,\beta}(0) & = & 1,\\
m_{\lambda,\beta}(1) & = & {\displaystyle \frac{\lambda}{\Gamma(\beta+1)},}\\
m_{\lambda,\beta}(2) & = & {\displaystyle \frac{\lambda}{\Gamma(\beta+1)}+\frac{2\lambda^{2}}{\Gamma(2\beta+1)},}\\
m_{\lambda,\beta}(3) & = & {\displaystyle \frac{\lambda}{\Gamma(\beta+1)}+\frac{6\lambda^{2}}{\Gamma(2\beta+1)}+\frac{6\lambda^{3}}{\Gamma(3\beta+1)}.}
\end{array}$
\par\end{center}
\begin{flushleft}
When $\beta=1$, these moments reduce to the moments of the Poisson measure.
\par\end{flushleft}
In addition to the Poisson measure $\pi_{\lambda}$ and the fPm $\pi_{\lambda,\beta}$ on $\mathbb{N}_{0}$, we also need the two-dimensional versions of both measures on $\mathbb{N}_{0}^{2}$ (or $\mathbb{R}^{2}$); the reason for this becomes clear after Corollary \ref{cor:orthogonal-for-beta1}. The $d$-dimensional Poisson measure is given by
\[
\pi_{\vec{\lambda}}^{d}(\{k_{1},\dots,k_{d}\})=\prod_{i=1}^{d}\frac{\lambda_{i}^{k_{i}}}{k_{i}!}\mathrm{e}^{-\lambda_{i}}.
\]
The Laplace transform of $\pi_{\vec{\lambda}}^{2}$ is given by
\begin{equation}
l_{\pi_{\vec{\lambda}}^{2}}(s)=\int_{\mathbb{R}^{2}}\mathrm{e}^{(x,s)}\,\mathrm{d}\pi_{\vec{\lambda}}^{2}(x)=\exp\big(\lambda_{1}(\mathrm{e}^{s_{1}}-1)+\lambda_{2}(\mathrm{e}^{s_{2}}-1)\big),\label{eq:LT-Pm-dim2}
\end{equation}
where $s=(s_{1},s_{2})\in\mathbb{R}^{2}$. For any $0<\beta\le1$ and $\vec{\lambda}\in(\mathbb{R}_{+}^{*})^{2}$, a possible fractional generalization of $\pi_{\vec{\lambda}}^{2}$, denoted by $\pi_{\vec{\lambda},\beta}^{2}$, is given, via its Laplace transform, by replacing the exponential function on the right-hand side of \eqref{eq:LT-Pm-dim2} by the Mittag-Leffler function. More precisely, the Laplace transform of $\pi_{\vec{\lambda},\beta}^{2}$ is given by
\begin{equation}
l_{\pi_{\vec{\lambda},\beta}^{2}}(s)=\int_{\mathbb{R}^{2}}\mathrm{e}^{(x,s)}\,\mathrm{d}\pi_{\vec{\lambda},\beta}^{2}(x)=E_{\beta}\big(\lambda_{1}(\mathrm{e}^{s_{1}}-1)+\lambda_{2}(\mathrm{e}^{s_{2}}-1)\big),\label{eq:LT-fPm-2d}
\end{equation}
where $s=(s_{1},s_{2})\in\mathbb{R}^{2}$. The moments of the measure $\pi_{\vec{\lambda},\beta}^{2}$, denoted by $m_{\vec{\lambda},\beta}^{2}(n_{1},n_{2})$, can be obtained by applying $\frac{\mathrm{d}^{n_{1}}}{\mathrm{d}s_{1}^{n_{1}}}\frac{\mathrm{d}^{n_{2}}}{\mathrm{d}s_{2}^{n_{2}}}$, $n_{1},n_{2}\in\mathbb{N}_{0}$, to Equation \eqref{eq:LT-fPm-2d} and then evaluating at $s_{1}=s_{2}=0$. As an example, we compute the moments $m_{\vec{\lambda},\beta}^{2}(1,1)$ and $m_{\vec{\lambda},\beta}^{2}(1,2)$ of the measure $\pi_{\vec{\lambda},\beta}^{2}$, needed later on:
\begin{align*}
m_{\vec{\lambda},\beta}^{2}(1,1)=\int_{\mathbb{R}^{2}}x_{1}x_{2}\,\mathrm{d}\pi_{\vec{\lambda},\beta}^{2}(x_{1},x_{2}) & =\frac{2\lambda_{1}\lambda_{2}}{\Gamma(2\beta+1)},\\
m_{\vec{\lambda},\beta}^{2}(1,2)=\int_{\mathbb{R}^{2}}x_{1}x_{2}^{2}\,\mathrm{d}\pi_{\vec{\lambda},\beta}^{2}(x_{1},x_{2}) & =\frac{2\lambda_{1}\lambda_{2}}{\Gamma(2\beta+1)}+\frac{6\lambda_{1}\lambda_{2}^{2}}{\Gamma(3\beta+1)}.
\end{align*}
We apply the Gram-Schmidt orthogonalization process to the monomials $x^{n}$, $n\in\mathbb{N}_{0}$, to obtain monic polynomials $C_{n}^{\beta}(x)$, $\deg C_{n}^{\beta}(x)=n$, orthogonal with respect to the inner product
\[
(p,q)_{\pi_{\lambda,\beta}}:=\int_{\mathbb{R}}p(x)q(x)\,\mathrm{d}\pi_{\lambda,\beta}(x).
\]
These polynomials are determined by the moments of the measure $\pi_{\lambda,\beta}$. The first few of these polynomials are given by
\begin{align*}
C_{0}^{\beta}(x) & =1,\\
C_{1}^{\beta}(x) & =x-(x,C_{0}^{\beta})_{\pi_{\lambda,\beta}}C_{0}^{\beta}(x)=x-m_{\lambda,\beta}(1),\\
C_{2}^{\beta}(x) & =x^{2}-(x^{2},C_{0}^{\beta})_{\pi_{\lambda,\beta}}C_{0}^{\beta}(x)-\left(x^{2},\frac{C_{1}^{\beta}}{\|C_{1}^{\beta}\|_{\pi_{\lambda,\beta}}^{2}}\right)_{\pi_{\lambda,\beta}}C_{1}^{\beta}(x)\\
 & =x^{2}-A(\beta,\lambda)x-m_{\lambda,\beta}(2)+A(\beta,\lambda)m_{\lambda,\beta}(1),
\end{align*}
where
\[
A(\beta,\lambda)=\frac{m_{\lambda,\beta}(3)-m_{\lambda,\beta}(1)m_{\lambda,\beta}(2)}{m_{\lambda,\beta}(2)-(m_{\lambda,\beta}(1))^{2}}.
\]
When $\beta=1$, the measure $\pi_{\lambda,1}$ becomes the Poisson measure $\pi_{\lambda}$ and the polynomials $C_{n}^{1}(x)$, $n\in\mathbb{N}_{0}$, are the classical Charlier polynomials.
\begin{cor}
\label{cor:orthogonal-for-beta1}For $\beta\in(0,1]$ it holds that
\[
\int_{\mathbb{R}^{2}}C_{1}^{\beta}(x_{1})C_{2}^{\beta}(x_{2})\,\mathrm{d}\pi_{\vec{\lambda},\beta}^{2}(x_{1},x_{2})=0
\]
if, and only if, $\beta=1.$
\end{cor}
\begin{proof}
When $\beta=1$, we have the well-known orthogonality property of the Charlier polynomials, that is,
\[
\int_{\mathbb{R}^{2}}C_{1}(x_{1})C_{2}(x_{2})\,\mathrm{d}\pi_{\vec{\lambda}}^{2}(x_{1},x_{2})=\int_{\mathbb{R}}C_{1}(x_{1})\,\mathrm{d}\pi_{\lambda_{1}}(x_{1})\int_{\mathbb{R}}C_{2}(x_{2})\,\mathrm{d}\pi_{\lambda_{2}}(x_{2})=0.
\]
On the other hand, for $\beta\in(0,1)$ we have
\begin{align}
 & \int_{\mathbb{R}^{2}}C_{1}^{\beta}(x_{1})C_{2}^{\beta}(x_{2})\,\mathrm{d}\pi_{\vec{\lambda},\beta}^{2}(x_{1},x_{2})\nonumber \\
 & =\int_{\mathbb{R}^{2}}\left(x_{1}-m_{\lambda_{1},\beta}(1)\right)\big(x_{2}^{2}-A(\beta,\lambda_{2})x_{2}-m_{\lambda_{2},\beta}(2)+A(\beta,\lambda_{2})m_{\lambda_{2},\beta}(1)\big)\,\mathrm{d}\pi_{\vec{\lambda},\beta}^{2}(x_{1},x_{2})\nonumber \\
 & =m_{\vec{\lambda},\beta}^{2}(1,2)-A(\beta,\lambda_{2})m_{\vec{\lambda},\beta}^{2}(1,1)-m_{\lambda_{1},\beta}(1)m_{\lambda_{2},\beta}(2)+A(\beta,\lambda_{2})m_{\lambda_{1},\beta}(1)m_{\lambda_{2},\beta}(1).\label{eq:moments-12}
\end{align}
Equation \eqref{eq:moments-12} defines a function $F(\beta,\lambda_{1},\lambda_{2})$ which is not equal to zero for any $\beta\in(0,1)$, see Figure \ref{fig:beta1}.
\end{proof}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.4]{beta2}
\par\end{centering}
\caption{\label{fig:beta1}The graph of the function $F(\cdot,\lambda_{1},\lambda_{2})$ for $\vec{\lambda}=(1,1),(2,3),(1,2)$.}
\end{figure}
The above results motivate us to introduce a biorthogonal system for the fPm in higher dimensions.
\section{Infinite Dimensional Fractional Poisson Measure}
\label{sec:Inf-Dim-fPm}After the above preparation, we are ready to define the fPm in infinite dimensions. We first define the fPm on the linear space $\mathcal{D}'$; a more careful analysis then shows that the fPm is indeed a probability measure on the configuration space $\Gamma$ over $\mathbb{R}^{d}$.
\subsection{Fractional Poisson Measure on the Linear Space $\mathcal{D}'$}
\label{sec:fPm-on-D'}Let $\vec{\lambda}=(\lambda_{1},\dots,\lambda_{d})\in(\mathbb{R}_{+}^{*})^{d}$ and $z=(z_{1},\dots,z_{d})\in\mathbb{R}^{d}$ be given.
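Although no code accompanies the construction, the moment formula \eqref{cnn} and the function $F(\beta,\lambda_{1},\lambda_{2})$ from the proof of Corollary \ref{cor:orthogonal-for-beta1} are straightforward to evaluate numerically. The following Python sketch (function names are ours; recursions and series are truncated) reproduces the qualitative content of Figure \ref{fig:beta1}: $F$ vanishes at $\beta=1$ and is nonzero for sample values of $\beta\in(0,1)$.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, m):
    """Stirling numbers of the second kind S(n, m), via the usual recurrence."""
    if n == m:
        return 1
    if m == 0 or m > n:
        return 0
    return m * stirling2(n - 1, m) + stirling2(n - 1, m - 1)

def moment(n, lam, beta):
    """n-th moment of the fPm: sum_m m! S(n,m) lam^m / Gamma(m*beta + 1)."""
    return sum(math.factorial(m) * stirling2(n, m) * lam**m
               / math.gamma(m * beta + 1) for m in range(n + 1))

def F(beta, lam1, lam2):
    """Mixed-moment expression from the corollary's proof (Eq. moments-12)."""
    g = math.gamma
    m2_11 = 2 * lam1 * lam2 / g(2 * beta + 1)               # m^2(1,1)
    m2_12 = m2_11 + 6 * lam1 * lam2**2 / g(3 * beta + 1)    # m^2(1,2)
    m1 = moment(1, lam1, beta)
    a = ((moment(3, lam2, beta) - moment(1, lam2, beta) * moment(2, lam2, beta))
         / (moment(2, lam2, beta) - moment(1, lam2, beta) ** 2))  # A(beta, lam2)
    return (m2_12 - a * m2_11 - m1 * moment(2, lam2, beta)
            + a * m1 * moment(1, lam2, beta))
```

For $\beta=1$ the value of $F$ is zero up to rounding, matching the Charlier orthogonality, while for $\beta=1/2$ it is visibly nonzero.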
The $d$-dimensional Poisson measure has characteristic function given by
\begin{equation}
C_{\pi_{\vec{\lambda}}^{d}}(z)=\int_{\mathbb{R}^{d}}\mathrm{e}^{\mathrm{i}(x,z)}\,\mathrm{d}\pi_{\vec{\lambda}}^{d}(x)=\exp\left(\sum_{k=1}^{d}\lambda_{k}(\mathrm{e}^{\mathrm{i}z_{k}}-1)\right).\label{eq:CF-Pm-fd}
\end{equation}
Let us consider a \emph{Radon measure} $\sigma$ on $(\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))$, that is, $\sigma(\Lambda)<\infty$ for every $\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})$, where $\mathcal{B}_{c}(\mathbb{R}^{d})$ denotes the family of all $\mathcal{B}(\mathbb{R}^{d})$-measurable sets with compact closure. Elements of $\mathcal{B}_{c}(\mathbb{R}^{d})$ are called finite volumes. Here, we assume $\sigma$ to be non-degenerate (i.e., $\sigma(O)>0$ for all non-empty open sets $O\subset\mathbb{R}^{d}$) and non-atomic (i.e., $\sigma(\left\{ x\right\} )=0$ for every $x\in\mathbb{R}^{d}$). In addition, we always assume that $\sigma(\mathbb{R}^{d})=\infty$. Let $\mathcal{D}:=\mathcal{D}(\mathbb{R}^{d})$ be the space of $C^{\infty}$-functions with compact support in $\mathbb{R}^{d}$ and $\mathcal{D}':=\mathcal{D}'(\mathbb{R}^{d})$ be the dual of $\mathcal{D}$ with respect to the Hilbert space $L^{2}(\sigma):=L^{2}(\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}),\sigma)$.
In this way, we obtain the triple
\begin{equation}
\mathcal{D}\subset L^{2}(\sigma)\subset\mathcal{D}'.\label{eq:triple-L2sigma}
\end{equation}
The infinite-dimensional generalization of the Poisson measure with intensity measure $\sigma$, denoted by $\pi_{\sigma}$, is obtained by generalizing the characteristic function \eqref{eq:CF-Pm-fd} to
\begin{equation}
C_{\pi_{\sigma}}(\varphi):=\int_{\mathcal{D}'}\mathrm{e}^{\mathrm{i}\langle w,\varphi\rangle}\,\mathrm{d}\pi_{\sigma}(w)=\exp\left(\int_{\mathbb{R}^{d}}(\mathrm{e}^{\mathrm{i}\varphi(x)}-1)\mathrm{\,d}\sigma(x)\right),\;\varphi\in\mathcal{D}.\label{eq:CF-Pm-inf}
\end{equation}
This is achieved via the Bochner-Minlos theorem (see e.g.~\cite{BK88}) by showing that $C_{\pi_{\sigma}}$ is the Fourier transform of a measure on the distribution space $\mathcal{D}'$, see \cite{AKR97} and references therein. Now, using the fact that the Mittag-Leffler function is a natural generalization of the exponential function, one conjectures that the characteristic functional
\begin{equation}
C_{\pi_{\sigma}^{\beta}}(\varphi):=\int_{\mathcal{D}'}\mathrm{e}^{\mathrm{i}\langle w,\varphi\rangle}\,\mathrm{d}\pi_{\sigma}^{\beta}(w)=E_{\beta}\left(\int_{\mathbb{R}^{d}}(\mathrm{e}^{\mathrm{i}\varphi(x)}-1)\mathrm{\,d}\sigma(x)\right),\quad\varphi\in\mathcal{D},\label{eq:CF-fPm-inf}
\end{equation}
defines an infinite-dimensional version of the fPm, denoted by $\pi_{\sigma}^{\beta}$. However, since the Mittag-Leffler function does not satisfy the semigroup property of the exponential, it is not obvious that this is the Fourier transform of a measure on $\mathcal{D}'$. Hence, we use the Bochner-Minlos theorem to show that $C_{\pi_{\sigma}^{\beta}}$ is indeed the Fourier transform of a probability measure $\pi_{\sigma}^{\beta}$ on the distribution space $\mathcal{D}'$.
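For numerical experiments with the objects above, a truncated power series for the Mittag-Leffler function is adequate at moderate arguments. The sketch below (ours, with a crude truncation) checks that $E_{1}$ coincides with the exponential and that $x\mapsto E_{\beta}(-x)$ is positive and decreasing, as the complete monotonicity property \eqref{eq:monotonicity-Mittag} requires.

```python
import math

def mittag_leffler(z, beta, terms=150):
    """Truncated power series E_beta(z) = sum_{k>=0} z^k / Gamma(beta*k + 1).

    Adequate for moderate |z| and 0 < beta <= 1; large arguments would
    require a different evaluation scheme.
    """
    return sum(z**k / math.gamma(beta * k + 1) for k in range(terms))
```

With this helper, $E_{1}(-1)$ agrees with $\mathrm{e}^{-1}$ to machine precision, and $E_{1/2}(-x)$ decreases monotonically through positive values on a sample grid.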
\begin{thm}
\label{thm:characteristic-functional-of-fPm-on-D'}For each $0<\beta\leq1$ fixed, the functional $C_{\pi_{\sigma}^{\beta}}$ in Equation \eqref{eq:CF-fPm-inf} is the characteristic functional on $\mathcal{D}$ of a probability measure $\pi_{\sigma}^{\beta}$ on the distribution space $\mathcal{D}'$.
\end{thm}
\begin{proof}
Using the properties of the Mittag-Leffler function, it follows directly that the functional $C_{\pi_{\sigma}^{\beta}}$ is continuous and that $C_{\pi_{\sigma}^{\beta}}(0)=1$. To show that the functional $C_{\pi_{\sigma}^{\beta}}$ is positive definite, we use the complete monotonicity property of $E_{\beta}$, $0<\beta<1$, see \eqref{eq:monotonicity-Mittag}. For any $\varphi_{i}\in\mathcal{D}$, $z_{i}\in\mathbb{C}$, $i=1,\dots,n$, using \eqref{eq:monotonicity-Mittag} and Equation~\eqref{eq:CF-Pm-inf}, we obtain
\begin{align*}
\sum_{k,j=1}^{n}C_{\pi_{\sigma}^{\beta}}(\varphi_{k}-\varphi_{j})z_{k}\bar{z}_{j} & =\int_{0}^{\infty}\sum_{k,j=1}^{n}\mathrm{e}^{\tau\int_{\mathbb{R}^{d}}(\mathrm{e}^{\mathrm{i}(\varphi_{k}-\varphi_{j})(x)}-1)\,\mathrm{d}\sigma(x)}z_{k}\bar{z}_{j}\,\mathrm{d}\nu_{\beta}(\tau)\\
 & =\int_{0}^{\infty}\sum_{k,j=1}^{n}C_{\pi_{\tau\sigma}}(\varphi_{k}-\varphi_{j})z_{k}\bar{z}_{j}\,\mathrm{d}\nu_{\beta}(\tau).
\end{align*}
Using the definition of $C_{\pi_{\tau\sigma}}$, the integrand of the last integral may be written as
\[
\sum_{k,j=1}^{n}C_{\pi_{\tau\sigma}}(\varphi_{k}-\varphi_{j})z_{k}\bar{z}_{j}=\int_{\mathcal{D}'}\left|\sum_{k=1}^{n}\mathrm{e}^{\mathrm{i}\langle w,\varphi_{k}\rangle}z_{k}\right|^{2}\mathrm{d}\pi_{\tau\sigma}(w)\geq0.
\]
This implies that $C_{\pi_{\sigma}^{\beta}}$ is positive definite. Thus, by the Bochner-Minlos theorem, $C_{\pi_{\sigma}^{\beta}}$ is the characteristic functional of a probability measure $\pi_{\sigma}^{\beta}$ on the measurable space $(\mathcal{D}',\mathcal{C}_{\sigma}(\mathcal{D}'))$.
\end{proof}
\begin{rem}
Since the Mittag-Leffler function is entire, one may rewrite \eqref{eq:CF-fPm-inf}, for any $\varphi\in\mathcal{D}$ with $\mathrm{supp}\,\varphi\subset\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})$, as
\begin{align*}
C_{\pi_{\sigma}^{\beta}}(\varphi) & =E_{\beta}\left(\int_{\mathbb{R}^{d}}(\mathrm{e}^{\mathrm{i}\varphi(x)}-1)\mathrm{\,d}\sigma(x)\right)=E_{\beta}\left(\int_{\Lambda}(\mathrm{e}^{\mathrm{i}\varphi(x)}-1)\mathrm{\,d}\sigma(x)\right)\\
 & =E_{\beta}\left(\int_{\Lambda}\mathrm{e}^{\mathrm{i}\varphi(x)}\mathrm{\,d}\sigma(x)-\sigma(\Lambda)\right)\\
 & =\sum_{n=0}^{\infty}\frac{E_{\beta}^{(n)}\left(-\sigma(\Lambda)\right)}{n!}\left(\int_{\Lambda}\mathrm{e}^{\mathrm{i}\varphi(x)}\,\mathrm{d}\sigma(x)\right)^{n}\\
 & =\sum_{n=0}^{\infty}\frac{E_{\beta}^{(n)}\left(-\sigma(\Lambda)\right)}{n!}\int_{\Lambda^{n}}\mathrm{e}^{\mathrm{i}(\varphi(x_{1})+\dots+\varphi(x_{n}))}\,\mathrm{d}\sigma^{\otimes n}(x_{1},\dots,x_{n}),
\end{align*}
where $\sigma^{\otimes n}=\sigma\otimes\dots\otimes\sigma$ is the product measure on the Cartesian space $(\mathbb{R}^{d})^{n}:=\mathbb{R}^{d}\times\dots\times\mathbb{R}^{d}$. In the Poisson case, the coefficients $E_{\beta}^{(n)}\left(-\sigma(\Lambda)\right)$ are replaced by $\exp\left(-\sigma(\Lambda)\right)$, for all $n\in\mathbb{N}_{0}$, while the remaining terms are the same. Hence, the main difference between the measures $\pi_{\sigma}^{\beta}$ and $\pi_{\sigma}$ lies in the different weights given to the $n$-particle spaces. In Subsection \ref{sec:fPm-cspace} we show that, indeed, the support of the measure $\pi_{\sigma}^{\beta}$ is a subset of $\mathcal{D}'$, namely the configuration space over $\mathbb{R}^{d}$.
\end{rem}
We may now generalize the result of Lemma \ref{lem:mixture-1d} to the present infinite-dimensional setting.
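The weights $E_{\beta}^{(n)}(-\sigma(\Lambda))/n!$ in the expansion above are nonnegative by complete monotonicity, and for $\beta=1$ every derivative collapses to $\mathrm{e}^{-\sigma(\Lambda)}$, recovering the Poisson weights. A numerical check (a sketch of ours, by term-wise differentiation of a truncated series):

```python
import math

def ml_deriv(n, z, beta, terms=160):
    """n-th derivative of E_beta at z, differentiating the series term-wise:
    E_beta^{(n)}(z) = sum_{k>=n} k!/(k-n)! * z^(k-n) / Gamma(beta*k + 1)."""
    return sum(math.factorial(k) // math.factorial(k - n) * z ** (k - n)
               / math.gamma(beta * k + 1) for k in range(n, terms))
```

For small $n$ one observes $E_{\beta}^{(n)}(-1)>0$ for $\beta\in(0,1)$, while for $\beta=1$ all derivatives equal $\mathrm{e}^{-1}$.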
\begin{lem}
For $0<\beta\leq1$, the fPm $\pi_{\sigma}^{\beta}$ is an integral (or mixture) of the Poisson measures $\pi_{\tau\sigma}$, $\tau>0$, with respect to the probability measure $\nu_{\beta}$, i.e.,
\begin{equation}
\pi_{\sigma}^{\beta}=\int_{0}^{\infty}\pi_{\tau\sigma}\,\mathrm{d}\nu_{\beta}(\tau).\label{eq:fPm-mixture-inf}
\end{equation}
\end{lem}
\begin{proof}
For $\beta=1$, the result is clear as in the proof of Lemma \ref{lem:mixture-1d}. For $0<\beta<1$, using the representation \eqref{eq:monotonicity-Mittag} of the Mittag-Leffler function, the characteristic functional \eqref{eq:CF-fPm-inf} of $\pi_{\sigma}^{\beta}$ can be rewritten as
\[
C_{\pi_{\sigma}^{\beta}}(\varphi)=\int_{0}^{\infty}\exp\left(-\tau\int_{\mathbb{R}^{d}}(1-\mathrm{e}^{\mathrm{i}\varphi(x)})\mathrm{\,d}\sigma(x)\right)\mathrm{d}\nu_{\beta}(\tau),
\]
with the integrand being the characteristic functional of the Poisson measure $\pi_{\tau\sigma}$, $\tau>0$. This implies that the characteristic functional \eqref{eq:CF-fPm-inf} coincides with the characteristic functional of the measure $\int_{0}^{\infty}\pi_{\tau\sigma}\,\mathrm{d}\nu_{\beta}(\tau)$. The result follows by the uniqueness of the characteristic functional.
\end{proof}
The fPm $\pi_{\sigma}^{\beta}$ is thus a probability measure on $(\mathcal{D}',\mathcal{C}_{\sigma}(\mathcal{D}'))$. In what follows, we find an appropriate support for $\pi_{\sigma}^{\beta}$.
\subsection{Configuration Space}
\label{subsec:configuration-space} Recall that $\mathcal{B}(\mathbb{R}^{d})$ denotes the Borel $\sigma$-algebra on $\mathbb{R}^{d}$ and $\mathcal{B}_{c}(\mathbb{R}^{d})$ the system of all sets in $\mathcal{B}(\mathbb{R}^{d})$ with compact closure. Below we recall the configuration space over $\mathbb{R}^{d}$ and related concepts, see \cite{AKR97,KK02} for more details.
\begin{defn}
The \textit{infinite configuration space} $\Gamma:=\Gamma_{\mathbb{R}^{d}}$ over $\mathbb{R}^{d}$ is defined as the set of all locally finite subsets of $\mathbb{R}^{d}$, that is,
\[
\Gamma:=\big\{\gamma\subset\mathbb{R}^{d}:|\gamma\cap\Lambda|<\infty\thinspace\text{for every }\thinspace\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})\big\},
\]
where $|B|$ denotes the cardinality of the set $B$. The elements of the space $\Gamma$ are called \emph{configurations}.
\end{defn}
Let $C_{0}(\mathbb{R}^{d})$ denote the class of all real-valued continuous functions on $\mathbb{R}^{d}$ with compact support and let $\mathcal{M}^{+}:=\mathcal{M}^{+}(\mathbb{R}^{d})$ (resp.~$\mathcal{M}_{\mathbb{N}_{0}}^{+}:=\mathcal{M}_{\mathbb{N}_{0}}^{+}(\mathbb{R}^{d})$) denote the space of all positive (resp.~positive integer-valued) Radon measures on $\mathcal{B}(\mathbb{R}^{d})$.
\begin{defn}
Each configuration $\gamma\in\Gamma$ can be identified with a non-negative integer-valued Radon measure as follows:
\[
\Gamma\ni\gamma\mapsto\sum_{x\in\gamma}\delta_{x}\in\mathcal{M}_{\mathbb{N}_{0}}^{+}\subset\mathcal{M}^{+},
\]
where $\delta_{x}$ is the Dirac measure at $x\in\mathbb{R}^{d}$ and $\sum_{x\in\emptyset}\delta_{x}:=0$ (the zero measure). The space $\Gamma$ can be endowed with the topology induced by the vague topology on $\mathcal{M}^{+}$, i.e., the weakest topology on $\Gamma$ with respect to which all mappings
\[
\Gamma\ni\gamma\mapsto\langle\gamma,f\rangle:=\langle f\rangle_{\gamma}:=\int_{\mathbb{R}^{d}}f(x)\,\mathrm{d}\gamma(x)=\sum_{x\in\gamma}f(x)\in\mathbb{R}
\]
are continuous for any $f\in C_{0}(\mathbb{R}^{d})$.
\end{defn}
\begin{defn}
Let $\mathcal{B}(\Gamma)$ be the Borel $\sigma$-algebra corresponding to the vague topology on $\Gamma$.
\begin{enumerate}
\item The $\sigma$-algebra $\mathcal{B}(\Gamma)$ is generated by the sets of the form
\begin{equation}
C_{\Lambda,n}=\left\{ \gamma\in\Gamma\,|\,|\gamma\cap\Lambda|=n\right\} \!,\label{eq:cylinder-sets}
\end{equation}
where $\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})$ and $n\in\mathbb{N}_{0}$; each set $C_{\Lambda,n}$ is a Borel set of $\Gamma$, that is, $C_{\Lambda,n}\in\mathcal{B}(\Gamma)$. Sets of the form \eqref{eq:cylinder-sets} are called \emph{cylinder sets}.
\item For any $B\subset\mathbb{R}^{d}$, we introduce the function $N_{B}:\Gamma\rightarrow\mathbb{N}_{0}$ defined by
\[
N_{B}(\gamma):=|\gamma\cap B|,\quad\gamma\in\Gamma.
\]
Then $\mathcal{B}(\Gamma)$ is the minimal $\sigma$-algebra with respect to which all functions $N_{\Lambda}$, $\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})$, are measurable.
\end{enumerate}
\end{defn}
\begin{defn}
Let $Y\in\mathcal{B}(\mathbb{R}^{d})$ be given. The space of configurations contained in $Y$ is denoted by $\Gamma_{Y}$, i.e.,
\[
\Gamma_{Y}:=\left\{ \gamma\in\Gamma\mid|\gamma\cap(\mathbb{R}^{d}\setminus Y)|=0\right\} .
\]
The $\sigma$-algebra $\mathcal{B}(\Gamma_{Y})$ may be introduced in a similar way:
\[
\mathcal{B}(\Gamma_{Y}):=\sigma\left(\left\{ N_{\Lambda}{\upharpoonright}_{\Gamma_{Y}}\mid\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})\right\} \right),
\]
where $N_{\Lambda}{\upharpoonright}_{\Gamma_{Y}}$ denotes the restriction of the mapping $N_{\Lambda}$ to $\Gamma_{Y}$.
\end{defn}
\begin{defn}
Let $Y\in\mathcal{B}(\mathbb{R}^{d})$ be given. The \textit{space of $n$-point configurations} $\Gamma_{Y}^{(n)}$ over a set $Y$ is the subset of $\Gamma_{Y}$ defined by
\[
\Gamma_{Y}^{(n)}:=\{\gamma\in\Gamma_{Y}\mid|\gamma|=n\},\quad n\in\mathbb{N},\qquad\Gamma_{Y}^{(0)}:=\{\emptyset\}.
\]
\end{defn}
A topological structure may be introduced on $\Gamma_{Y}^{(n)}$, $n\in\mathbb{N}$, through the natural surjective mapping from
\[
\widetilde{Y^{n}}=\{(x_{1},\dotsc,x_{n})\mid x_{k}\in Y,\,x_{k}\neq x_{j}\thinspace\text{if}\thinspace k\neq j\}
\]
onto $\Gamma_{Y}^{(n)}$ defined by
\[
\mathrm{sym}_{Y}^{n}:\widetilde{Y^{n}}\longrightarrow\Gamma_{Y}^{(n)},\quad(x_{1},\dotsc,x_{n})\mapsto\{x_{1},\dotsc,x_{n}\}.
\]
Indeed, using the mapping $\mathrm{sym}_{Y}^{n}$, one constructs a bijective mapping between $\Gamma_{Y}^{(n)}$ and the symmetrization $\widetilde{Y^{n}}/S_{n}$ of $\widetilde{Y^{n}}$, where $S_{n}$ is the permutation group over $\{1,\dotsc,n\}$. In this way, $\mathrm{sym}_{Y}^{n}$ induces a metric on $\Gamma_{Y}^{(n)}$: a set $U\subset\Gamma_{Y}^{(n)}$ is open in the resulting topology if, and only if, the inverse image $(\mathrm{sym}_{Y}^{n})^{-1}(U)$ is open in $\widetilde{Y^{n}}$. We denote by $\mathcal{B}(\Gamma_{Y}^{(n)})$ the corresponding Borel $\sigma$-algebra and by $\mathcal{T}_{Y}^{(n)}$ the associated topology on $\Gamma_{Y}^{(n)}$. For $\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})$, each space $\Gamma_{\Lambda}$ can be described by the disjoint union
\[
\Gamma_{\Lambda}=\bigsqcup_{n=0}^{\infty}\Gamma_{\Lambda}^{(n)}.
\]
In particular, this representation provides an equivalent description of the $\sigma$-algebra $\mathcal{B}(\Gamma_{\Lambda})$ as the $\sigma$-algebra of the disjoint union of the $\sigma$-algebras $\mathcal{B}(\Gamma_{\Lambda}^{(n)})$, $n\in\mathbb{N}_{0}$. The corresponding topology is denoted by $\mathcal{T}_{\Lambda}$, so that $(\Gamma_{\Lambda},\mathcal{T}_{\Lambda})$ is a topological space for each $\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})$.
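The combinatorial objects just introduced are elementary to emulate: a local configuration is a finite set of distinct points, $N_{B}$ counts the points in a window, the pairing $\langle\gamma,f\rangle$ sums a test function over the configuration, and $\mathrm{sym}_{Y}^{n}$ forgets the ordering of a tuple. A toy illustration in Python (all names are ours):

```python
# A toy model of local configurations on R: gamma is a finite set of
# distinct points, N_B counts the points in a half-open window B = [a, b),
# and <gamma, f> = sum_{x in gamma} f(x) is the pairing with a test function.
gamma = frozenset([0.5, 1.25, 3.0, 4.75])

def N(B, gamma):
    """Occupation number N_B(gamma) = |gamma ∩ B| for B = [a, b)."""
    a, b = B
    return sum(1 for x in gamma if a <= x < b)

def pair(gamma, f):
    """Dual pairing <gamma, f>: sum of f over the points of gamma."""
    return sum(f(x) for x in gamma)

def sym(points):
    """sym^n: an n-tuple of pairwise distinct points -> an n-point
    configuration, forgetting the order."""
    assert len(set(points)) == len(points), "points must be pairwise distinct"
    return frozenset(points)
```

Note that `sym` identifies all $n!$ orderings of the same point set, mirroring the quotient $\widetilde{Y^{n}}/S_{n}$.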
For each $\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})$ and any pair $\Lambda_{1},\Lambda_{2}\in\mathcal{B}_{c}(\mathbb{R}^{d})$ such that $\Lambda_{1}\subset\Lambda_{2}$, let us consider the natural measurable projections
\begin{align}
p_{\Lambda}:\Gamma\longrightarrow\Gamma_{\Lambda} &  & p_{\Lambda_{1},\Lambda_{2}}:\Gamma_{\Lambda_{2}}\longrightarrow\Gamma_{\Lambda_{1}}\label{eq:projections}\\
\gamma\mapsto\gamma\cap\Lambda &  & \gamma\mapsto\gamma\cap\Lambda_{1}.\nonumber
\end{align}
We now use the concept of the projective limit to show that the measurable space $(\Gamma,\mathcal{B}(\Gamma))$ is the projective limit of the family of spaces $(\Gamma_{\Lambda},\mathcal{B}(\Gamma_{\Lambda}))$, $\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})$. More precisely, we have the following theorem.
\begin{thm}
The family $\left\{ (\Gamma_{\Lambda},\mathcal{B}(\Gamma_{\Lambda})),p_{\Lambda_{1},\Lambda_{2}},\mathcal{B}_{c}(\mathbb{R}^{d})\right\} $ is a projective system of measurable spaces with ordered index set $(\mathcal{B}_{c}(\mathbb{R}^{d}),\subset)$, and the measurable space $(\Gamma,\mathcal{B}(\Gamma))$, together with the family of maps $p_{\Lambda}:\Gamma\longrightarrow\Gamma_{\Lambda}$, $\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})$, is (up to an isomorphism) its projective limit. In addition, we have the following commutative diagram.
\[
\xyC{6pc}\xyR{4pc}\xymatrix{ & \Gamma'\ar@{->}[d]_{I}\ar[ddl]_{P_{\Lambda_{2}}}\ar[ddr]^{P_{\Lambda_{1}}}\\ & \Gamma\ar[dl]^{p_{\Lambda_{2}}}\ar[dr]_{p_{\Lambda_{1}}}\\ \Gamma_{\Lambda_{2}}\ar[rr]_{p_{\Lambda_{1},\Lambda_{2}}} & & \Gamma_{\Lambda_{1}} }
\]
\end{thm}
\begin{proof}
It is clear from the construction above that the maps $p_{\Lambda_{1},\Lambda_{2}}$, $\Lambda_{1},\Lambda_{2}\in\mathcal{B}_{c}(\mathbb{R}^{d})$, are measurable and satisfy
\[
p_{\Lambda_{1},\Lambda_{2}}\circ p_{\Lambda_{2},\Lambda_{3}}=p_{\Lambda_{1},\Lambda_{3}},\quad\Lambda_{1}\subset\Lambda_{2}\subset\Lambda_{3}\:\:\mathrm{in\:\:}\mathcal{B}_{c}(\mathbb{R}^{d}).
\]
As a result, $\left\{ (\Gamma_{\Lambda},\mathcal{B}(\Gamma_{\Lambda})),p_{\Lambda_{1},\Lambda_{2}},\mathcal{B}_{c}(\mathbb{R}^{d})\right\} $ is a projective system. On the other hand, it is easy to see from \eqref{eq:projections} that the relation
\[
p_{\Lambda_{1}}=p_{\Lambda_{1},\Lambda_{2}}\circ p_{\Lambda_{2}},\quad\Lambda_{1}\subset\Lambda_{2}\:\:\mathrm{in\:\:}\mathcal{B}_{c}(\mathbb{R}^{d})
\]
holds. By the definition of $\mathcal{B}(\Gamma)$, the family of maps $p_{\Lambda}$, $\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})$, satisfies the defining conditions of a projective limit of measurable spaces. This concludes the proof.
\end{proof}
\subsection{Fractional Poisson Measure on $\Gamma$}
\label{sec:fPm-cspace}Recall from Section \ref{sec:fPm-on-D'} the measure $\sigma$ on the underlying measurable space $(\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))$ and the product measure $\sigma^{\otimes n}$ on $((\mathbb{R}^{d})^{n},\mathcal{B}((\mathbb{R}^{d})^{n}))$, for each $n\in\mathbb{N}$. Since $\sigma$ is non-atomic, we have $\sigma^{\otimes n}((\mathbb{R}^{d})^{n}\setminus\widetilde{(\mathbb{R}^{d})^{n}})=0$. It follows that $\sigma^{\otimes n}(B^{n}\setminus\widetilde{B^{n}})=0$ for every $B\in\mathcal{B}(\mathbb{R}^{d})$. For each $\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})$, let us consider the restriction of $\sigma^{\otimes n}$ to $(\widetilde{\Lambda^{n}},\mathcal{B}(\widetilde{\Lambda^{n}}))$, which is a finite measure, and define the image measure $\sigma_{\Lambda}^{(n)}$ on $(\Gamma_{\Lambda}^{(n)},\mathcal{B}(\Gamma_{\Lambda}^{(n)}))$ under the mapping $\mathrm{sym}_{\Lambda}^{n}$ by
\[
\sigma_{\Lambda}^{(n)}:=\sigma^{\otimes n}\circ(\mathrm{sym}_{\Lambda}^{n})^{-1}.
\]
For $n=0$, we set $\sigma_{\Lambda}^{(0)}(\left\{ \emptyset\right\} ):=1$.
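The normalization of the measures defined next rests on the Taylor identity $\sum_{n\geq0}E_{\beta}^{(n)}(-t)\,t^{n}/n!=E_{\beta}(0)=1$, valid since $E_{\beta}$ is entire. A quick numerical check (a sketch of ours, truncating both series):

```python
import math

def ml_deriv(n, z, beta, terms=160):
    """n-th derivative of the Mittag-Leffler series at z (term-wise)."""
    return sum(math.factorial(k) // math.factorial(k - n) * z ** (k - n)
               / math.gamma(beta * k + 1) for k in range(n, terms))

def total_mass(t, beta, nmax=30):
    """Truncation of sum_n E_beta^{(n)}(-t) * t^n / n!, which equals
    E_beta(0) = 1 for every t >= 0 and 0 < beta <= 1."""
    return sum(ml_deriv(n, -t, beta) * t**n / math.factorial(n)
               for n in range(nmax))
```

The same cancellation, with $t=\sigma(\Lambda)$, is what makes $\pi_{\sigma,\Lambda}^{\beta}(\Gamma_{\Lambda})=1$ below.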
Now, for each $0<\beta\leq1$, one may define a probability measure $\pi_{\sigma,\Lambda}^{\beta}$ on $(\Gamma_{\Lambda},\mathcal{B}(\Gamma_{\Lambda}))$ by
\begin{equation}
\pi_{\sigma,\Lambda}^{\beta}:=\sum_{n=0}^{\infty}\frac{E_{\beta}^{(n)}(-\sigma(\Lambda))}{n!}\sigma_{\Lambda}^{(n)}.\label{eq:fPm-on-cs}
\end{equation}
Note that $E_{\beta}^{(n)}(-\sigma(\Lambda))\geq0$, $n\in\mathbb{N}_{0}$, due to \eqref{eq:monotonicity-Mittag}. In addition, we have
\begin{align*}
\pi_{\sigma,\Lambda}^{\beta}(\Gamma_{\Lambda}) & =\pi_{\sigma,\Lambda}^{\beta}\left(\bigsqcup_{n=0}^{\infty}\Gamma_{\Lambda}^{(n)}\right)=\sum_{n=0}^{\infty}\frac{E_{\beta}^{(n)}(-\sigma(\Lambda))}{n!}\sigma_{\Lambda}^{(n)}(\Gamma_{\Lambda}^{(n)})\\
 & =\sum_{n=0}^{\infty}\frac{E_{\beta}^{(n)}(-\sigma(\Lambda))}{n!}\sigma(\Lambda)^{n}=E_{\beta}(0)=1.
\end{align*}
The family $\{\pi_{\sigma,\Lambda}^{\beta}\mid\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})\}$ of probability measures yields a probability measure on $(\Gamma,\mathcal{B}(\Gamma))$. In fact, this family is consistent in the sense that the measure $\pi_{\sigma,\Lambda_{1}}^{\beta}$ is the image measure of $\pi_{\sigma,\Lambda_{2}}^{\beta}$ under $p_{\Lambda_{1},\Lambda_{2}}$, that is,
\[
\pi_{\sigma,\Lambda_{1}}^{\beta}=\pi_{\sigma,\Lambda_{2}}^{\beta}\circ p_{\Lambda_{1},\Lambda_{2}}^{-1},\quad\forall\Lambda_{1},\Lambda_{2}\in\mathcal{B}_{c}(\mathbb{R}^{d}),\,\Lambda_{1}\subset\Lambda_{2}.
\]
By the Kolmogorov extension theorem on configuration space (see Appendix~\ref{sec:Kolmogorov-extension-theorem-on-config-space}), the family $\big\{\pi_{\sigma,\Lambda}^{\beta}\mid\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})\big\}$ uniquely determines a measure $\pi_{\sigma}^{\beta}$ on $(\Gamma,\mathcal{B}(\Gamma))$ such that
\[
\pi_{\sigma,\Lambda}^{\beta}=\pi_{\sigma}^{\beta}\circ p_{\Lambda}^{-1},\quad\forall\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d}).
\]
In fact, we do not need the whole family $\mathcal{B}_{c}(\mathbb{R}^{d})$ of local sets; a sub-family $\mathcal{J}_{\mathbb{R}^{d}}$ satisfying an abstract notion of local sets suffices, see (I1)--(I3) in Appendix \ref{sec:Kolmogorov-extension-theorem-on-config-space}. Let us now compute the characteristic functional of the measure $\pi_{\sigma}^{\beta}$. Given $\varphi\in\mathcal{D}$, we have $\mathrm{supp\,}\varphi\subset\Lambda$ for some $\Lambda\in\mathcal{B}_{c}(\mathbb{R}^{d})$, so that
\[
\langle\gamma,\varphi\rangle=\langle p_{\Lambda}(\gamma),\varphi\rangle,\hspace{0.1in}\forall~\gamma\in\Gamma.
\]
Thus,
\[
\int_{\Gamma}\mathrm{e}^{\mathrm{i}\langle\gamma,\varphi\rangle}\mathrm{d}\pi_{\sigma}^{\beta}(\gamma)=\int_{\Gamma_{\Lambda}}\mathrm{e}^{\mathrm{i}\langle\gamma,\varphi\rangle}\mathrm{d}\pi_{\sigma,\Lambda}^{\beta}(\gamma),
\]
and the decomposition \eqref{eq:fPm-on-cs} of the measure $\pi_{\sigma,\Lambda}^{\beta}$ yields for the right-hand side of the above equality
\[
\sum_{n=0}^{\infty}\frac{E_{\beta}^{(n)}(-\sigma(\Lambda))}{n!}\int_{\Lambda^{n}}\mathrm{e}^{\mathrm{i}(\varphi(x_{1})+\dots+\varphi(x_{n}))}\,\mathrm{d}\sigma^{\otimes n}(x_{1},\dots,x_{n})=\sum_{n=0}^{\infty}\frac{E_{\beta}^{(n)}(-\sigma(\Lambda))}{n!}\left(\int_{\Lambda}\mathrm{e}^{\mathrm{i}\varphi(x)}\,\mathrm{d}\sigma(x)\right)^{n},
\]
which corresponds to the Taylor expansion of the function
\[
E_{\beta}\left(\int_{\Lambda}(\mathrm{e}^{\mathrm{i}\varphi(x)}-1)\,\mathrm{d}\sigma(x)\right)=E_{\beta}\left(\int_{\mathbb{R}^{d}}(\mathrm{e}^{\mathrm{i}\varphi(x)}-1)\,\mathrm{d}\sigma(x)\right).
\] Hence, for any $\varphi\in\mathcal{D}$, we obtain \begin{equation} \int_{\Gamma}\mathrm{e}^{\mathrm{i}\langle\gamma,\varphi\rangle}\,\mathrm{d}\pi_{\sigma}^{\beta}(\gamma)=E_{\beta}\left(\int_{\mathbb{R}^{d}}(\mathrm{e}^{\mathrm{i}\varphi(x)}-1)\,\mathrm{d}\sigma(x)\right).\label{eq:characteristic-function-Gamma} \end{equation} \begin{rem} \label{rem:fPm-Gamma-Dprime} \begin{enumerate} \item The characteristic functional of the measure $\pi_{\sigma}^{\beta}$ given in \eqref{eq:characteristic-function-Gamma} coincides with the characteristic functional \eqref{eq:CF-fPm-inf} of the measure $\pi_{\sigma}^{\beta}$ on the distribution space $\mathcal{D}'$. The functional \eqref{eq:characteristic-function-Gamma} shows that the measure $\pi_{\sigma}^{\beta}$ is supported on generalized functions of the form $\sum_{x\in\gamma}\delta_{x}\in\mathcal{D}',$ $\gamma\in\Gamma$. \item Note that $\Gamma\subset\mathcal{D}'$ but in contrast to $\Gamma,$ $\mathcal{D}'$ is a linear space. Since $\pi_{\sigma}^{\beta}(\Gamma)=1$, the measure space $(\mathcal{D}',\mathcal{C}_{\sigma}(\mathcal{D}'),\pi_{\sigma}^{\beta})$ can, in this way, be regarded as a linear extension of the fractional Poisson space $(\Gamma,\mathcal{B}(\Gamma),\pi_{\sigma}^{\beta}).$ \end{enumerate} \end{rem} \section{Generalized Appell System} \label{sec:Appell-System}In this section we introduce the generalized Appell system associated with the fPm $\pi_{\sigma}^{\beta}$. First we consider the analytic continuation of the characteristic functional $C_{\pi_{\sigma}^{\beta}}$ to $\mathcal{D}_{\mathbb{C}}:=\mathcal{D}\oplus\mathrm{i}\mathcal{D}$. By definition, an element $\varphi\in\mathcal{D}_{\mathbb{C}}$ decomposes into $\varphi=\varphi_{1}+\mathrm{i}\varphi_{2}$, $\varphi_{1},\varphi_{2}\in\mathcal{D}$. 
Hence, computing $C_{\pi_{\sigma}^{\beta}}(-\mathrm{i}\varphi)$, $\varphi\in\mathcal{D}$, yields the Laplace transform of the measure $\pi_{\sigma}^{\beta}$, that is,
\[
l_{\pi_{\sigma}^{\beta}}(\varphi):=C_{\pi_{\sigma}^{\beta}}(-\mathrm{i}\varphi)=E_{\beta}\left(\int_{\mathbb{R}^{d}}(\mathrm{e}^{\varphi(x)}-1)\,\mathrm{d}\sigma(x)\right).
\]
In particular, choosing $\beta=1$ we obtain the Laplace transform of the classical Poisson measure $\pi_{\sigma}:=\pi_{\sigma}^{1}$ with intensity $\sigma$ on the configuration space $\Gamma$. For more details, we refer to \cite{GV68,KMM78,I88,IK88,AKR97a} and references therein. The following two properties are satisfied by the fPm $\pi_{\sigma}^{\beta}$, $0<\beta\leq1$.
\begin{description}
\item [{(A1)}] The measure $\pi_{\sigma}^{\beta}$ has an analytic Laplace transform in a neighborhood of zero, that is, the mapping
\[
\mathcal{D_{\mathbb{C}}}\ni\varphi\mapsto l_{\pi_{\sigma}^{\beta}}(\varphi)=\int_{\mathcal{D}'}\mathrm{e}^{\langle w,\varphi\rangle}\,\mathrm{d}\pi_{\sigma}^{\beta}(w)=E_{\beta}\left(\int_{\mathbb{R}^{d}}(\mathrm{e}^{\varphi(x)}-1)\,\mathrm{d}\sigma(x)\right)\in\mathbb{C}
\]
is holomorphic in a neighborhood $\mathcal{U}\subset\mathcal{D}_{\mathbb{C}}$ of zero.
\item [{(A2)}] For any nonempty open subset $\mathcal{U}\subset\mathcal{D}'$ it holds that $\pi_{\sigma}^{\beta}(\mathcal{U})>0$.
\end{description}
Assumption (A1) guarantees the existence of moments of all orders of the measure $\pi_{\sigma}^{\beta}$, while (A2) guarantees the embedding of the test function space into $L^{2}(\pi_{\sigma}^{\beta})$, see e.g.~Section~3 in \cite{KaKo99}.
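In the one-dimensional case, the analytic Laplace transform of (A1) can be cross-checked against Theorem \ref{thm:fPmm}: near zero, the moment series $\sum_{n}m_{\lambda,\beta}(n)z^{n}/n!$ must reproduce $E_{\beta}(\lambda(\mathrm{e}^{z}-1))$, cf.~\eqref{eq:LT-fPm}. A numerical sketch (truncations and names are ours):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, m):
    """Stirling numbers of the second kind S(n, m)."""
    if n == m:
        return 1
    if m == 0 or m > n:
        return 0
    return m * stirling2(n - 1, m) + stirling2(n - 1, m - 1)

def moment(n, lam, beta):
    """n-th moment of the one-dimensional fPm (theorem on moments)."""
    return sum(math.factorial(m) * stirling2(n, m) * lam**m
               / math.gamma(m * beta + 1) for m in range(n + 1))

def laplace_via_moments(z, lam, beta, nmax=25):
    """Truncated moment expansion sum_n m_{lam,beta}(n) z^n / n!."""
    return sum(moment(n, lam, beta) * z**n / math.factorial(n)
               for n in range(nmax))

def laplace_closed_form(z, lam, beta, terms=150):
    """Closed form E_beta(lam * (e^z - 1)) via the truncated ML series."""
    x = lam * (math.exp(z) - 1.0)
    return sum(x**k / math.gamma(beta * k + 1) for k in range(terms))
```

Both expressions agree to high accuracy for small $z$, for $\beta=1$ (Poisson) as well as for fractional $\beta$.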
In addition, the Laplace transform $l_{\pi_{\sigma}^{\beta}}(\varphi)$ of the measure $\pi_{\sigma}^{\beta}$ admits a decomposition in terms of the moment kernels $M_{n}^{\sigma,\beta}$ (by the kernel theorem) given by \begin{equation} l_{\pi_{\sigma}^{\beta}}(\varphi)=\sum_{n=0}^{\infty}\frac{1}{n!}\langle M_{n}^{\sigma,\beta},\varphi^{\otimes n}\rangle,\qquad\varphi\in\mathcal{D}_{\mathbb{C}},\,M_{n}^{\sigma,\beta}\in\big(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}\big)'.\label{eq:Laplace-transform-inf-dim} \end{equation} \subsection{Generalized Appell Polynomials} \label{sec:Generalized-Appell-Polynomials}In this subsection we follow \cite{KSWY95} to introduce the system of Appell polynomials associated with the fPm $\pi_{\sigma}^{\beta}$. Let us consider the triple \eqref{eq:triple-L2sigma} such that \begin{equation} \mathcal{D}\subset\mathcal{N}\subset L^{2}(\sigma)\subset\mathcal{N}'\subset\mathcal{D}'\label{eq:triple-L2sigma-with-N} \end{equation} as described in Section~\ref{sec:Nuclear-spaces}. The chain \eqref{eq:triple-L2sigma-with-N} also holds for the tensor powers of these spaces. Then we introduce the normalized exponential $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\varphi;w)$ by \begin{equation} \mathrm{e}_{\pi_{\sigma}^{\beta}}(\varphi;w)=\frac{\mathrm{e}^{\langle w,\varphi\rangle}}{l_{\pi_{\sigma}^{\beta}}(\varphi)},\quad w\in\mathcal{D}'_{\mathbb{C}},\,\varphi\in\mathcal{D}_{\mathbb{C}}.\label{eq:normalized-expo-inf-dim} \end{equation} Since $l_{\pi_{\sigma}^{\beta}}(0)=1$ and $l_{\pi_{\sigma}^{\beta}}$ is holomorphic, there exists a neighborhood $\mathcal{U}_{0}\subset\mathcal{D}_{\mathbb{C}}$ of zero, such that $l_{\pi_{\sigma}^{\beta}}(\varphi)\neq0$ for all $\varphi\in\mathcal{U}_{0}$.
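In a one-dimensional caricature (our reduction, not the paper's): take $\beta=1$ and $\varphi=t\,1_{A}$ with $\sigma(A)=\lambda$, so that $l_{\pi_{\sigma}^{\beta}}$ reduces to the scalar function $l(t)=\exp(\lambda(\mathrm{e}^{t}-1))$ and the kernels $M_{n}^{\sigma,\beta}$ in \eqref{eq:Laplace-transform-inf-dim} reduce to the raw moments of a $\mathrm{Poisson}(\lambda)$ random variable (the Touchard polynomials in $\lambda$). A sympy sketch:

```python
import sympy as sp

t, lam = sp.symbols("t lam")

# scalar beta = 1 reduction of the Laplace transform:
# l(t) = E_1(lam * (e^t - 1)) = exp(lam * (e^t - 1))
laplace = sp.exp(lam * (sp.exp(t) - 1))

# moment kernels M_n = n-th derivative of l at t = 0 (raw Poisson moments)
moments = [sp.expand(sp.diff(laplace, t, n).subs(t, 0)) for n in range(4)]
print(moments)  # [1, lam, lam**2 + lam, lam**3 + 3*lam**2 + lam]
```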
For $\varphi\in\mathcal{U}_{0}$, the normalized exponential $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\varphi;w)$ can be expanded in a power series; using the polarization identity and the kernel theorem, we obtain \begin{equation} \mathrm{e}_{\pi_{\sigma}^{\beta}}(\varphi;w)=\sum_{n=0}^{\infty}\frac{1}{n!}\langle P_{n}^{\sigma,\beta}(w),\varphi^{\otimes n}\rangle,\quad w\in\mathcal{D}'_{\mathbb{C}},\;\varphi\in\mathcal{U}_{0},\label{eq:Wick-Appell-inf-dim} \end{equation} for suitable $P_{n}^{\sigma,\beta}(w)\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$. The family \begin{equation} \mathbb{P}^{\sigma,\beta}=\left\{ \langle P_{n}^{\sigma,\beta}(\cdot),\varphi^{(n)}\rangle\mid\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n},n\in\mathbb{N}_{0}\right\} \label{eq:P-system} \end{equation} is called the \emph{Appell system} associated to the fPm $\pi_{\sigma}^{\beta}$. Let us now consider the transformation $\alpha:\mathcal{D}_{\mathbb{C}}\longrightarrow\mathcal{D}_{\mathbb{C}}$ defined on a neighborhood $\mathcal{U}_{\alpha}\subset\mathcal{D}_{\mathbb{C}}$ of zero, by \[ \alpha(\varphi)(x)=\log(1+\varphi(x)),\quad\varphi\in\mathcal{U}_{\alpha},\:x\in\mathbb{R}^{d}. \] Note that for $\varphi=0\in\mathcal{D}_{\mathbb{C}}$, we have $\alpha(\varphi)=0$. Also, $\mathcal{U}_{\alpha}$ is chosen in such a way that $\alpha$ is invertible and holomorphic on $\mathcal{U}_{\alpha}$. Then $\alpha$ can be expanded as \begin{equation} \alpha(\varphi)=\sum_{n=1}^{\infty}\frac{1}{n!}\widehat{\mathrm{d}^{n}\alpha(0)}(\varphi)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}\varphi^{n}}{n},\label{eq:alpha-Taylor} \end{equation} where \[ \widehat{\mathrm{d}^{n}\alpha(0)}(\varphi)=\frac{\partial^{n}}{\mathrm{\partial}t_{1}\dots\mathrm{\partial}t_{n}}\alpha(t_{1}\varphi+\dots+t_{n}\varphi)|_{t_{1}=\dots=t_{n}=0} \] for all $n\in\mathbb{N}$.
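The kernels $P_{n}^{\sigma,\beta}$ form an Appell-type system. In the scalar $\beta=1$ caricature used here only for illustration (our reduction: $\varphi=t$, $l(t)=\exp(\lambda(\mathrm{e}^{t}-1))$), the polynomials $P_{n}(w)$ generated by $\mathrm{e}^{wt}/l(t)$ as in \eqref{eq:Wick-Appell-inf-dim} satisfy the classical lowering property $P_{n}'=nP_{n-1}$, which a sympy sketch confirms:

```python
import sympy as sp

w, t, lam = sp.symbols("w t lam")
N = 5

# scalar beta = 1 normalized exponential e^{wt} / l(t)
gen = sp.exp(w * t - lam * (sp.exp(t) - 1))

# P_n = n-th t-derivative at t = 0 (coefficients of the exponential
# generating function, as in the expansion above)
P = [sp.expand(sp.diff(gen, t, n).subs(t, 0)) for n in range(N + 1)]

# Appell lowering property: d/dw P_n = n * P_{n-1}
for n in range(1, N + 1):
    assert sp.expand(sp.diff(P[n], w) - n * P[n - 1]) == 0
print(P[1])
```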
For the inverse function $g_{\alpha}$ of $\alpha$, we have \[ (g_{\alpha}\varphi)(x)=\mathrm{e}^{\varphi(x)}-1,\qquad\varphi\in\mathcal{V}_{\alpha}\subset\mathcal{D}_{\mathbb{C}},\:x\in\mathbb{R}^{d} \] for some neighborhood $\mathcal{V}_{\alpha}$ of zero in $\mathcal{D}_{\mathbb{C}}$. A similar procedure as before yields the decomposition \begin{equation} g_{\alpha}(\varphi)=\sum_{n=1}^{\infty}\frac{1}{n!}\widehat{\mathrm{d}^{n}g_{\alpha}(0)}(\varphi)=\sum_{n=1}^{\infty}\frac{\varphi^{n}}{n!}.\label{eq:g-decomposition} \end{equation} Now using the function $\alpha$, we introduce the modified normalized exponential $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);w)$ as \begin{equation} \mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);w):=\frac{\exp(\langle w,\alpha(\varphi)\rangle)}{l_{\pi_{\sigma}^{\beta}}(\alpha(\varphi))}=\frac{\exp(\langle w,\log(1+\varphi)\rangle)}{E_{\beta}\left(\int_{\mathbb{R}^{d}}\varphi(x)\mathrm{\,d}\sigma(x)\right)}=\frac{\exp(\langle w,\log(1+\varphi)\rangle)}{E_{\beta}\left(\langle\varphi\rangle_{\sigma}\right)}\label{eq:normalized-expo-inf-dim-alpha} \end{equation} for $\varphi\in\mathcal{U}'_{\alpha}\subset\mathcal{U}_{\alpha}$, $w\in\mathcal{D}'_{\mathbb{C}}$. Since $l_{\pi_{\sigma}^{\beta}}$ is holomorphic on a neighborhood of zero, for each fixed $w\in\mathcal{D}'_{\mathbb{C}}$, $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\cdot);w)$ is a holomorphic function on some neighborhood $\mathcal{U}'_{\alpha}\subset\mathcal{U}_{\alpha}$ of zero.
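In one scalar variable, the expansions \eqref{eq:alpha-Taylor} and \eqref{eq:g-decomposition}, the fact that $\alpha$ and $g_{\alpha}$ are mutually inverse, and the resulting inverse relation between the Stirling numbers of the first and second kind (the scalar shadows of the operators $\mathbf{s}(n,m)$ and $\mathbf{S}(n,m)$ appearing below) can all be checked with sympy; a hedged sketch, with all names ours:

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols("x", real=True)
alpha = sp.log(1 + x)       # alpha(phi)   = log(1 + phi)
g_alpha = sp.exp(x) - 1     # g_alpha(phi) = e^phi - 1

# Taylor coefficients, as in eq:alpha-Taylor and eq:g-decomposition
assert sp.expand(sp.series(alpha, x, 0, 7).removeO()
                 - sum((-1)**(n + 1) * x**n / sp.Integer(n) for n in range(1, 7))) == 0
assert sp.expand(sp.series(g_alpha, x, 0, 7).removeO()
                 - sum(x**n / sp.factorial(n) for n in range(1, 7))) == 0

# compositional inverses: g_alpha(alpha(x)) = x and alpha(g_alpha(x)) = x
assert sp.simplify(g_alpha.subs(x, alpha) - x) == 0
assert sp.simplify(alpha.subs(x, g_alpha) - x) == 0

# scalar Stirling numbers form inverse triangular families:
#   sum_k s(m, k) * S(k, n) = delta_{m, n}   (s signed, first kind)
assert all(
    sum(stirling(m, k, kind=1, signed=True) * stirling(k, n, kind=2)
        for k in range(m + 1)) == (1 if m == n else 0)
    for m in range(6) for n in range(6)
)
print("series and Stirling identities verified")
```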
Then the map $\mathcal{D}_{\mathbb{C}}\ni\varphi\mapsto\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);w)$ admits a power series expansion \begin{equation} \mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);w)=\sum_{n=0}^{\infty}\frac{1}{n!}\langle C_{n}^{\sigma,\beta}(w),\varphi^{\otimes n}\rangle,\quad\varphi\in\mathcal{U}'_{\alpha},\;w\in\mathcal{D}'_{\mathbb{C}},\label{eq:normalized-exp-fPm-inf-dim} \end{equation} where the kernels $C_{n}^{\sigma,\beta}:\mathcal{D}'_{\mathbb{C}}\rightarrow(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$, $n\in\mathbb{N}$, are determined by this expansion and $C_{0}^{\sigma,\beta}=1$. By Equation \eqref{eq:normalized-exp-fPm-inf-dim}, it follows that for any $\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}$, $n\in\mathbb{N}_{0}$, the function \[ \mathcal{D}'_{\mathbb{C}}\ni w\mapsto\langle C_{n}^{\sigma,\beta}(w),\varphi^{(n)}\rangle \] is a polynomial of order $n$ on $\mathcal{D}'_{\mathbb{C}}$. \begin{defn} The family \[ \mathbb{P}^{\sigma,\beta,\alpha}=\left\{ \langle C_{n}^{\sigma,\beta}(\cdot),\varphi^{(n)}\rangle\mid\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n},n\in\mathbb{N}_{0}\right\} \] is called the \emph{generalized Appell system} associated to the fPm $\pi_{\sigma}^{\beta}$ or the \emph{$\mathbb{P}^{\sigma,\beta,\alpha}$-system}. \end{defn} In the following proposition we collect some properties of the kernels $C_{n}^{\sigma,\beta}(\cdot)$, which appeared in \cite{KdSS98}, here specialized to the measure $\pi_{\sigma}^{\beta}$. \begin{prop} \label{prop:generalized-appell-polynomials-inf-dim}For $z,w\in\mathcal{D}'_{\mathbb{C}}$, $n\in\mathbb{N}_{0}$, the following properties hold: \begin{description} \item [{(P1)}] $C_{n}^{\sigma,\beta}(w)=\sum_{m=0}^{n}\mathbf{s}(n,m)^{*}P_{m}^{\sigma,\beta}(w)$, where $\mathbf{s}(n,m)$ is the Stirling operator of the first kind defined in \eqref{eq:Stirling-first-D-property} in Appendix~\ref{sec:Stirling-Operators}.
\item [{(P2)}] $w^{\otimes n}=\sum_{k=0}^{n}\sum_{m=0}^{k}\binom{n}{k}\mathbf{S}(k,m)^{*}C_{m}^{\sigma,\beta}(w)\hat{\otimes}M_{n-k}^{\sigma,\beta}$, where $\mathbf{S}(n,m)$ is the Stirling operator of the second kind defined in \eqref{eq:Stirling-second-D-property} in Appendix~\ref{sec:Stirling-Operators} and $M_{n}^{\sigma,\beta}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$ are the moment kernels of $\pi_{\sigma}^{\beta}$ given in \eqref{eq:Laplace-transform-inf-dim}. \item [{(P3)}] $C_{n}^{\sigma,\beta}(z+w)=\sum_{k+l+m=n}\frac{n!}{k!l!m!}C_{k}^{\sigma,\beta}(z)\hat{\otimes}C_{l}^{\sigma,\beta}(w)\hat{\otimes}M_{m}^{\sigma,\beta,\alpha}$, where $M_{m}^{\sigma,\beta,\alpha}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}m})'$ is determined by \begin{equation} l_{\pi_{\sigma}^{\beta}}(\alpha(\varphi))=\sum_{m=0}^{\infty}\frac{1}{m!}\langle M_{m}^{\sigma,\beta,\alpha},\varphi^{\otimes m}\rangle,\quad\varphi\in\mathcal{D}_{\mathbb{C}}.\label{eq:moments-alpha} \end{equation} \item [{(P4)}] $C_{n}^{\sigma,\beta}(z+w)=\sum_{k=0}^{n}\binom{n}{k}C_{k}^{\sigma,\beta}(z)\hat{\otimes}(w)_{n-k}$, where $(w)_{n}$ is the falling factorial on $\mathcal{D}'_{\mathbb{C}}$ determined by \eqref{eq:generating-function-f-factorial-inf-dim}. \item [{(P5)}] $C_{n}^{\sigma,\beta}(w)=\sum_{k=0}^{n}\binom{n}{k}\sum_{m=0}^{n-k}C_{k}^{\sigma,\beta}(0)\hat{\otimes}\big(\mathbf{s}(n-k,m)^{*}w^{\otimes m}\big)$.
\item [{(P6)}] $\mathbb{E}_{\pi_{\sigma}^{\beta}}(\langle C_{n}^{\sigma,\beta}(\cdot),\varphi^{(n)}\rangle)=\delta_{n,0}$, where $\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}$, $\delta_{n,k}$ is the Kronecker delta and $\mathbb{E}_{\pi_{\sigma}^{\beta}}(\cdot)$ is the expectation with respect to the measure $\pi_{\sigma}^{\beta}$. \item [{(P7)}] For all $p'>p$ such that the embedding $\mathcal{H}_{p'}\hookrightarrow\mathcal{H}_{p}$ is a Hilbert-Schmidt operator and for all $\varepsilon>0$ there exists $C_{\varepsilon}>0$ such that \[ |C_{n}^{\sigma,\beta}(w)|_{-p'}\leq C_{\varepsilon}n!\varepsilon^{-n}\exp(\varepsilon|w|_{-p}),\quad w\in\mathcal{H}_{-p,\mathbb{C}},\,n\in\mathbb{N}_{0}. \] \end{description} \end{prop} \begin{proof} (P1) In view of Equation~\eqref{eq:Wick-Appell-inf-dim}, we have \begin{equation} \mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);w)=\frac{\exp(\langle w,\alpha(\varphi)\rangle)}{l_{\pi_{\sigma}^{\beta}}(\alpha(\varphi))}=\sum_{m=0}^{\infty}\frac{1}{m!}\langle P_{m}^{\sigma,\beta}(w),\alpha(\varphi)^{\otimes m}\rangle.\label{eq:normalized-exp-P-alpha} \end{equation} Using Equation \eqref{eq:decomposition-alpha^k}, we obtain \begin{align*} \mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);w) & =\sum_{m=0}^{\infty}\Bigg\langle P_{m}^{\sigma,\beta}(w),\sum_{n=m}^{\infty}\frac{1}{n!}\mathbf{s}(n,m)\varphi^{\otimes n}\Bigg\rangle\\ & =\sum_{m=0}^{\infty}\sum_{n=m}^{\infty}\frac{1}{n!}\Bigg\langle\mathbf{s}(n,m)^{*}P_{m}^{\sigma,\beta}(w),\varphi^{\otimes n}\Bigg\rangle\\ & =\sum_{n=0}^{\infty}\frac{1}{n!}\Bigg\langle\sum_{m=0}^{n}\mathbf{s}(n,m)^{*}P_{m}^{\sigma,\beta}(w),\varphi^{\otimes n}\Bigg\rangle. \end{align*} On the other hand, using the equality \eqref{eq:normalized-exp-fPm-inf-dim} and comparing both series for $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);w)$ gives \[ C_{n}^{\sigma,\beta}(w)=\sum_{m=0}^{n}\mathbf{s}(n,m)^{*}P_{m}^{\sigma,\beta}(w).
\] \noindent (P2) As in the proof of (P1), we use Equation \eqref{eq:normalized-exp-fPm-inf-dim} and the fact that $g_{\alpha}$ is the inverse of $\alpha$ to obtain \begin{equation} \mathrm{e}_{\pi_{\sigma}^{\beta}}(\varphi;w)=\sum_{m=0}^{\infty}\frac{1}{m!}\langle C_{m}^{\sigma,\beta}(w),g_{\alpha}(\varphi)^{\otimes m}\rangle.\label{eq:normalized-exp-C-g} \end{equation} Using Equation \eqref{eq:decomposition-g-alpha^k} to replace $g_{\alpha}(\varphi)^{\otimes m}$ in Equation \eqref{eq:normalized-exp-C-g}, standard manipulations yield \[ \mathrm{e}_{\pi_{\sigma}^{\beta}}(\varphi;w)=\sum_{m=0}^{\infty}\Bigg\langle C_{m}^{\sigma,\beta}(w),\sum_{n=m}^{\infty}\frac{1}{n!}\mathbf{S}(n,m)\varphi^{\otimes n}\Bigg\rangle=\sum_{n=0}^{\infty}\frac{1}{n!}\Bigg\langle\sum_{m=0}^{n}\mathbf{S}(n,m)^{*}C_{m}^{\sigma,\beta}(w),\varphi^{\otimes n}\Bigg\rangle. \] On the other hand, comparing the above series for $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\varphi;w)$ with Equation~\eqref{eq:Wick-Appell-inf-dim}, we obtain \begin{equation} P_{n}^{\sigma,\beta}(w)=\sum_{m=0}^{n}\mathbf{S}(n,m)^{*}C_{m}^{\sigma,\beta}(w).\label{eq:P-C-polynomials} \end{equation} By Equation \eqref{eq:normalized-expo-inf-dim}, we have the equality \begin{equation} \mathrm{e}^{\langle w,\varphi\rangle}=\mathrm{e}_{\pi_{\sigma}^{\beta}}(\varphi;w)l_{\pi_{\sigma}^{\beta}}(\varphi).\label{eq:normalized-expo-inf-dim-Laplace} \end{equation} Now using the equations \eqref{eq:Wick-Appell-inf-dim} and \eqref{eq:Laplace-transform-inf-dim}, we obtain the equation \[ \sum_{n=0}^{\infty}\frac{1}{n!}\langle w^{\otimes n},\varphi^{\otimes n}\rangle=\sum_{n=0}^{\infty}\frac{1}{n!}\Bigg\langle\sum_{k=0}^{n}\binom{n}{k}P_{k}^{\sigma,\beta}(w)\hat{\otimes}M_{n-k}^{\sigma,\beta},\varphi^{\otimes n}\Bigg\rangle \] which implies that \begin{equation} w^{\otimes n}=\sum_{k=0}^{n}\binom{n}{k}P_{k}^{\sigma,\beta}(w)\hat{\otimes}M_{n-k}^{\sigma,\beta}.\label{eq:z-P-M-polynomials} \end{equation} The
claim follows by applying Equation~\eqref{eq:P-C-polynomials} to Equation~\eqref{eq:z-P-M-polynomials}. \noindent (P3) By definition of the modified normalized exponential, we have \[ \mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);z+w)=\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);z)\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);w)l_{\pi_{\sigma}^{\beta}}(\alpha(\varphi)). \] For $l_{\pi_{\sigma}^{\beta}}(\alpha(\varphi))$, we use the decomposition \eqref{eq:moments-alpha} such that the above equation yields \begin{align*} \sum_{n=0}^{\infty}\frac{1}{n!}\langle C_{n}^{\sigma,\beta}(z+w),\varphi^{\otimes n}\rangle & =\sum_{k=0}^{\infty}\frac{1}{k!}\langle C_{k}^{\sigma,\beta}(z),\varphi^{\otimes k}\rangle\sum_{l=0}^{\infty}\frac{1}{l!}\langle C_{l}^{\sigma,\beta}(w),\varphi^{\otimes l}\rangle\sum_{m=0}^{\infty}\frac{1}{m!}\langle M_{m}^{\sigma,\beta,\alpha},\varphi^{\otimes m}\rangle\\ & =\sum_{n=0}^{\infty}\frac{1}{n!}\Bigg\langle\sum_{k+l+m=n}\frac{n!}{k!l!m!}C_{k}^{\sigma,\beta}(z)\hat{\otimes}C_{l}^{\sigma,\beta}(w)\hat{\otimes}M_{m}^{\sigma,\beta,\alpha},\varphi^{\otimes n}\Bigg\rangle. \end{align*} Thus, the result follows by comparing the coefficients in both sides of the equation. \noindent (P4) Again, by definition of the modified normalized exponential, we have \[ \mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);z+w)=\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);z)\exp(\langle w,\alpha(\varphi)\rangle). \] By Equations \eqref{eq:generating-function-f-factorial-inf-dim} and \eqref{eq:normalized-exp-fPm-inf-dim}, we have \begin{align*} \sum_{n=0}^{\infty}\frac{1}{n!}\langle C_{n}^{\sigma,\beta}(z+w),\varphi^{\otimes n}\rangle & =\sum_{k=0}^{\infty}\frac{1}{k!}\langle C_{k}^{\sigma,\beta}(z),\varphi^{\otimes k}\rangle\sum_{m=0}^{\infty}\frac{1}{m!}\langle(w)_{m},\varphi^{\otimes m}\rangle\\ & =\sum_{n=0}^{\infty}\frac{1}{n!}\Bigg\langle\sum_{k=0}^{n}\binom{n}{k}C_{k}^{\sigma,\beta}(z)\hat{\otimes}(w)_{n-k},\varphi^{\otimes n}\Bigg\rangle. 
\end{align*} Thus the assertion follows immediately by comparing the coefficients on both sides of the equation. \noindent (P5) The result follows from (P4) at $z=0$ and \eqref{eq:falling-factorial-Stirling}. \noindent (P6) Note that for $\varphi\in\mathcal{D}_{\mathbb{C}}$, we have \[ \sum_{n=0}^{\infty}\frac{1}{n!}\mathbb{E}_{\pi_{\sigma}^{\beta}}(\langle C_{n}^{\sigma,\beta}(w),\varphi^{\otimes n}\rangle)=\mathbb{E}_{\pi_{\sigma}^{\beta}}(\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);w))=\frac{\mathbb{E}_{\pi_{\sigma}^{\beta}}(\exp(\langle\cdot,\alpha(\varphi)\rangle))}{l_{\pi_{\sigma}^{\beta}}(\alpha(\varphi))}=1. \] By the polarization identity and comparison of coefficients, we obtain the result. \noindent (P7) Let $\varepsilon>0$ be given. Then let $C_{\varepsilon},\sigma_{\varepsilon}>0$ be chosen in such a way that $|\alpha(\varphi)|_{p}\leq\varepsilon$ and $C_{\varepsilon}\geq1/|l_{\pi_{\sigma}^{\beta}}(\alpha(\varphi))|$ for $|\varphi|_{p}=\sigma_{\varepsilon}.$ By definition of $C_{n}^{\sigma,\beta}(w)$ and the Cauchy formula, we have \begin{align*} |\langle C_{n}^{\sigma,\beta}(w),\varphi^{\otimes n}\rangle| & =|\widehat{\mathrm{d}^{n}\mathrm{e}_{\pi_{\sigma}^{\beta}}(0;w)}(\varphi)|\\ & \leq n!\frac{1}{\sigma_{\varepsilon}^{n}}\left(\sup_{|\varphi|_{p}=\sigma_{\varepsilon}}\frac{\exp\left(|\alpha(\varphi)|_{p}|w|_{-p}\right)}{|l_{\pi_{\sigma}^{\beta}}(\alpha(\varphi))|}\right)|\varphi|_{p}^{n}\\ & \leq n!\frac{1}{\sigma_{\varepsilon}^{n}}\left(\sup_{|\varphi|_{p}=\sigma_{\varepsilon}}\frac{1}{|l_{\pi_{\sigma}^{\beta}}(\alpha(\varphi))|}\right)\exp\left(\varepsilon|w|_{-p}\right)|\varphi|_{p}^{n}\\ & \leq C_{\varepsilon}n!\sigma_{\varepsilon}^{-n}\exp\left(\varepsilon|w|_{-p}\right)|\varphi|_{p}^{n}. \end{align*} Let $p'>p$ be such that $i_{p',p}$ is a Hilbert-Schmidt operator.
Then by the kernel theorem, we have \[ |C_{n}^{\sigma,\beta}(w)|_{-p'}\leq n!C_{\varepsilon}\exp\left(\varepsilon|w|_{-p}\right)\left(\frac{1}{\sigma_{\varepsilon}}\|i_{p',p}\|_{HS}\right)^{n},\quad w\in\mathcal{H}_{-p,\mathbb{C}}. \] For sufficiently small $\varepsilon$, we fix $\sigma_{\varepsilon}=\varepsilon\|i_{p',p}\|_{HS}$ so that \[ |C_{n}^{\sigma,\beta}(w)|_{-p'}\leq n!C_{\varepsilon}\varepsilon^{-n}\exp\left(\varepsilon|w|_{-p}\right). \] This concludes the proof. \end{proof} \subsection{Generalized Dual Appell System} \label{sec:Generalized-Dual-Appell-System}In what follows, we again use the non-Gaussian analysis approach of \cite{KSWY95} to introduce the generalized dual Appell system associated with the fPm $\pi_{\sigma}^{\beta}$. \begin{defn} The \emph{space of smooth polynomials} $\mathcal{P}(\mathcal{D}')$ on $\mathcal{D}'$ is the space consisting of finite linear combinations of monomial functions, that is, \[ \mathcal{P}(\mathcal{D}'):=\left\{ \varphi(w)=\sum_{n=0}^{N(\varphi)}\langle w^{\otimes n},\varphi^{(n)}\rangle\,\bigg|\,\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n},\;w\in\mathcal{D}',\;N(\varphi)\in\mathbb{N}_{0}\right\} . \] \end{defn} The space $\mathcal{P}(\mathcal{D}')$ is equipped with the natural topology such that the mapping \[ I:\mathcal{P}(\mathcal{D}')\longrightarrow\bigoplus_{n=0}^{\infty}\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n} \] defined for any $\varphi(\cdot)=\sum_{n=0}^{N(\varphi)}\langle\cdot^{\otimes n},\varphi^{(n)}\rangle\in\mathcal{P}(\mathcal{D}')$ by \[ I\varphi=\vec{\varphi}=(\varphi^{(0)},\varphi^{(1)},\dots,\varphi^{(n)},\dots) \] becomes a topological isomorphism from $\mathcal{P}(\mathcal{D}')$ to the topological direct sum of symmetric tensor powers $\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}$ (see \cite{BK88,S71}). Note that only a finite number of the $\varphi^{(n)}$ are non-zero.
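In one dimension, the isomorphism $I$ simply sends a polynomial to its finite coefficient sequence; a trivial sympy illustration (names are ours):

```python
import sympy as sp

w = sp.symbols("w")

def I_map(poly):
    """1-d caricature of I: a smooth polynomial is mapped to its finite
    coefficient vector (phi^(0), phi^(1), ..., phi^(N))."""
    p = sp.Poly(poly, w)
    return tuple(reversed(p.all_coeffs()))

print(I_map(3 + 2*w + 5*w**3))  # (3, 2, 0, 5)
```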
With respect to this topology, a sequence $(\varphi_{m})_{m\in\mathbb{N}}$ of smooth polynomials $\varphi_{m}(w)=\sum_{n=0}^{N(\varphi_{m})}\langle w^{\otimes n},\varphi_{m}^{(n)}\rangle$ converges to $\varphi(w)=\sum_{n=0}^{N(\varphi)}\langle w^{\otimes n},\varphi^{(n)}\rangle\in\mathcal{P}(\mathcal{D}')$ if, and only if, the sequence $(N(\varphi_{m}))_{m\in\mathbb{N}}$ is bounded and $(\varphi_{m}^{(n)})_{m\in\mathbb{N}}$ converges to $\varphi^{(n)}$ in $\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}$ for all $n\in\mathbb{N}_{0}$. Using Proposition~\ref{prop:generalized-appell-polynomials-inf-dim}-(P2), the space of smooth polynomials $\mathcal{P}(\mathcal{D}')$ can also be expressed in terms of the generalized Appell polynomials associated with the measure $\pi_{\sigma}^{\beta}$, namely \[ \mathcal{P}(\mathcal{D}')=\left\{ \varphi(w)=\sum_{n=0}^{N(\varphi)}\langle C_{n}^{\sigma,\beta}(w),\varphi^{(n)}\rangle\,\bigg|\,\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n},w\in\mathcal{D}',N(\varphi)\in\mathbb{N}_{0}\right\} .
\] We denote by $\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$ the dual space of $\mathcal{P}(\mathcal{D}')$ with respect to $L^{2}(\pi_{\sigma}^{\beta}):=L^{2}(\mathcal{D}',\mathcal{C}_{\sigma}(\mathcal{D}'),\pi_{\sigma}^{\beta};\mathbb{C})$ and obtain the triple \begin{equation} \mathcal{P}(\mathcal{D}')\subset L^{2}(\pi_{\sigma}^{\beta})\subset\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}').\label{eq:triple-P(D)-L2} \end{equation} The (bilinear) dual pairing $\langle\!\langle\cdot,\cdot\rangle\!\rangle_{\pi_{\sigma}^{\beta}}$ between $\mathcal{P}(\mathcal{D}')$ and $\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$ is then related to the (sesquilinear) inner product on $L^{2}(\pi_{\sigma}^{\beta})$ by \[ \langle\!\langle F,\varphi\rangle\!\rangle_{\pi_{\sigma}^{\beta}}=(\!(F,\bar{\varphi})\!)_{L^{2}(\pi_{\sigma}^{\beta})},\quad F\in L^{2}(\pi_{\sigma}^{\beta}),\;\varphi\in\mathcal{P}(\mathcal{D}'), \] where $\bar{\varphi}$ denotes the complex conjugate function of $\varphi$. Further we introduce the constant function $\boldsymbol{1}\in L^{2}(\pi_{\sigma}^{\beta})\subset\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$ such that $\boldsymbol{1}(w)=1$ for all $w\in\mathcal{D}'$, so for any polynomial $\varphi\in\mathcal{P}(\mathcal{D}')$, \[ \mathbb{E_{\pi_{\sigma}^{\beta}}}(\varphi):=\int_{\mathcal{D}'}\varphi(w)\,\mathrm{d}\pi_{\sigma}^{\beta}(w)=\langle\!\langle\boldsymbol{1},\varphi\rangle\!\rangle_{\pi_{\sigma}^{\beta}}. 
\] Now, we describe the distributions in $\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$ in a similar way to the smooth polynomials in $\mathcal{P}(\mathcal{D}')$, that is, for any $\Phi\in\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$, we find elements $\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$ and operators $Q_{n}^{\sigma,\beta,\alpha}$ on $(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$, such that \[ \Phi=\sum_{n=0}^{\infty}Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)})\in\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}'). \] To this end, we first define a differential operator $D(\Phi^{(n)})$ depending on $\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$ which, applied to the monomials $\langle w^{\otimes m},\varphi^{(m)}\rangle,$ $\varphi^{(m)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}m}$, $m\in\mathbb{N}_{0}$, gives \[ D(\Phi^{(n)})\langle w^{\otimes m},\varphi^{(m)}\rangle:=\begin{cases} \frac{m!}{(m-n)!}\langle w^{\otimes(m-n)}\hat{\otimes}\Phi^{(n)},\varphi^{(m)}\rangle, & \mathrm{for}\:m\geq n\\ 0, & \mathrm{otherwise} \end{cases} \] and extend by linearity from the monomials to elements in $\mathcal{P}(\mathcal{D}')$. If we consider the space of Schwartz test functions $\mathcal{S}(\mathbb{R})$ instead of the space $\mathcal{D}$, with the triple \[ \mathcal{S}(\mathbb{R})\subset L^{2}(\mathbb{R},\mathrm{d}x)\subset\mathcal{S}'(\mathbb{R}), \] then for $n=1$ and $\Phi^{(1)}=\delta_{t}\in\mathcal{S}_{\mathbb{C}}'(\mathbb{R})$, the differential operator $D(\delta_{t})$ coincides with the Hida derivative, see \cite{HKPS93}. Note that $D(\Phi^{(n)})$ is a continuous linear operator from $\mathcal{P}(\mathcal{D}')$ to $\mathcal{P}(\mathcal{D}')$ (see \cite[Lemma 4.13]{KSWY95}) and this enables us to define the dual operator \[ D(\Phi^{(n)})^{*}:\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')\longrightarrow\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}').
\] Below we need the evaluation of the operator $D(\Phi^{(n)})$ on the monomials $\langle P_{m}^{\sigma,\beta}(w),\varphi^{(m)}\rangle$, $m\in\mathbb{N}_{0}$ in \eqref{eq:P-system}. We state this result in the next proposition and the proof can be found in \cite[Lemma 4.14]{KSWY95}. \begin{prop} \label{prop:operator-D-on-P}For $\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$ and $\varphi^{(m)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}m}$ we have \[ D(\Phi^{(n)})\langle P_{m}^{\sigma,\beta}(w),\varphi^{(m)}\rangle=\begin{cases} {\displaystyle \frac{m!}{(m-n)!}\langle P_{m-n}^{\sigma,\beta}(w)\hat{\otimes}\Phi^{(n)},\varphi^{(m)}\rangle,} & \mathit{for}\;m\geq n\\ 0, & \mathit{for}\;m<n. \end{cases} \] \end{prop} Now, we set $Q_{n}^{\sigma,\beta}(\Phi^{(n)}):=D(\Phi^{(n)})^{*}\mathbf{1}$ for $\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$ and denote the so-called $\mathbb{Q}^{\sigma,\beta}$-system in $\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$ by \[ \mathbb{Q}^{\sigma,\beta}:=\left\{ Q_{n}^{\sigma,\beta}(\Phi^{(n)})\mid\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})',\,n\in\mathbb{N}_{0}\right\} . \] The pair $\mathbb{A}^{\sigma,\beta}=(\mathbb{P}^{\sigma,\beta},\mathbb{Q}^{\sigma,\beta})$ is called the \emph{Appell system generated by the measure} $\pi_{\sigma}^{\beta}$. This system satisfies the biorthogonal property, see \cite{KSWY95}, given in the following theorem. 
\begin{thm} For $\Phi^{(m)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}m})'$ and $\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}$ we have \begin{equation} \langle\!\langle Q_{m}^{\sigma,\beta}(\Phi^{(m)}),\langle P_{n}^{\sigma,\beta},\varphi^{(n)}\rangle\rangle\!\rangle_{\pi_{\sigma}^{\beta}}=\delta_{m,n}n!\langle\Phi^{(n)},\varphi^{(n)}\rangle,\quad n,m\in\mathbb{N}_{0}.\label{eq:biorthogonal-P} \end{equation} \end{thm} However, our aim is to construct the generalized dual Appell system $\mathbb{Q}^{\sigma,\beta,\alpha}$ such that $\mathbb{P}^{\sigma,\beta,\alpha}$ and $\mathbb{Q}^{\sigma,\beta,\alpha}$ are biorthogonal. The reason is that for $\beta=1$ we obtain only one system of orthogonal polynomials, the so-called Charlier polynomials; see \cite{IK88}. First, recall the function $g_{\alpha}(\varphi)$, $\varphi\in\mathcal{D}_{\mathbb{C}}$ from Section \ref{sec:Generalized-Appell-Polynomials}. By Equation \eqref{eq:decomposition-g-alpha^k}, we have \[ g_{\alpha}(\varphi)^{\otimes n}=\sum_{k=n}^{\infty}\frac{n!}{k!}\mathbf{S}(k,n)\varphi^{\otimes k}. \] Then for any $\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$, we have \begin{equation} \langle\Phi^{(n)},g_{\alpha}(\varphi)^{\otimes n}\rangle=\sum_{k=n}^{\infty}\frac{n!}{k!}\langle\Phi^{(n)},\mathbf{S}(k,n)\varphi^{\otimes k}\rangle=\sum_{k=n}^{\infty}\frac{n!}{k!}\langle\mathbf{S}(k,n)^{*}\Phi^{(n)},\varphi^{\otimes k}\rangle,\label{eq:Phi-g-alpha-n-Stirling} \end{equation} where $\mathbf{S}(k,n)^{*}\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}k})'$. Now, we define the operator $G(\Phi^{(n)})$ by \[ G(\Phi^{(n)}):\mathcal{P}(\mathcal{D}')\longrightarrow\mathcal{P}(\mathcal{D}'),\;\varphi\mapsto G(\Phi^{(n)})\varphi:=\sum_{k=n}^{\infty}\frac{n!}{k!}D(\mathbf{S}(k,n)^{*}\Phi^{(n)})\varphi.
\] Since $D(\mathbf{S}(k,n)^{*}\Phi^{(n)})$ is continuous for any $\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$, it is easy to see that $G(\Phi^{(n)})$ is also continuous and so its adjoint $G(\Phi^{(n)})^{*}:\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')\longrightarrow\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$ exists. \begin{defn} For any $\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$, we define the \emph{generalized function} $Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)})\in\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$, $n\in\mathbb{N}_{0}$, by \begin{equation} Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)}):=G(\Phi^{(n)})^{*}\boldsymbol{1}.\label{eq:generalized-function-inf-dim} \end{equation} The family \[ \mathbb{Q}^{\sigma,\beta,\alpha}:=\left\{ Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)})\mid\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})',\,n\in\mathbb{N}_{0}\right\} \] is said to be the \emph{generalized dual Appell system} $\mathbb{Q}^{\sigma,\beta,\alpha}$ associated with $\pi_{\sigma}^{\beta}$ or the $\mathbb{Q}^{\sigma,\beta,\alpha}$-system and the pair $\mathbb{A}^{\sigma,\beta,\alpha}:=(\mathbb{P}^{\sigma,\beta,\alpha},\mathbb{Q}^{\sigma,\beta,\alpha})$ is called the \emph{generalized Appell system} generated by the measure $\pi_{\sigma}^{\beta}$. \end{defn} The following theorem states the biorthogonal property of the generalized Appell system $\mathbb{A}^{\sigma,\beta,\alpha}$. \begin{thm} \label{thm:biorthogonal-property-inf-dim}For $\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$ and $\varphi^{(m)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}m}$ we have \[ \langle\!\langle Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)}),\langle C_{m}^{\sigma,\beta},\varphi^{(m)}\rangle\rangle\!\rangle_{\pi_{\sigma}^{\beta}}=\delta_{n,m}n!\langle\Phi^{(n)},\varphi^{(n)}\rangle,\quad n,m\in\mathbb{N}_{0}. 
\] \end{thm} \begin{proof} By Proposition \ref{prop:generalized-appell-polynomials-inf-dim}-(P1), we have \[ \langle C_{m}^{\sigma,\beta},\varphi^{(m)}\rangle=\bigg\langle\sum_{i=0}^{m}\mathbf{s}(m,i)^{*}P_{i}^{\sigma,\beta},\varphi^{(m)}\bigg\rangle=\sum_{i=0}^{m}\langle P_{i}^{\sigma,\beta},\mathbf{s}(m,i)\varphi^{(m)}\rangle. \] Then it follows from Proposition \ref{prop:operator-D-on-P} (noted below with $\star$) and Proposition 11-(P4) in \cite{KSWY95} ($\star$$\star$) that \begin{align*} \langle\!\langle Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)}),\langle C_{m}^{\sigma,\beta},\varphi^{(m)}\rangle\rangle\!\rangle_{\pi_{\sigma}^{\beta}} & =\langle\!\langle\mathbf{1},G(\Phi^{(n)})\langle C_{m}^{\sigma,\beta},\varphi^{(m)}\rangle\rangle\!\rangle_{\pi_{\sigma}^{\beta}}\\ & =\sum_{i=0}^{m}\langle\!\langle\mathbf{1},G(\Phi^{(n)})\langle P_{i}^{\sigma,\beta},\mathbf{s}(m,i)\varphi^{(m)}\rangle\rangle\!\rangle_{\pi_{\sigma}^{\beta}}\\ & \overset{\star}{=}\sum_{i=0}^{m}\sum_{k=n}^{i}\frac{n!}{k!}\frac{i!}{(i-k)!}\langle\!\langle\mathbf{1},\langle P_{i-k}^{\sigma,\beta}\hat{\otimes}\mathbf{S}(k,n)^{*}\Phi^{(n)},\mathbf{s}(m,i)\varphi^{(m)}\rangle\rangle\!\rangle_{\pi_{\sigma}^{\beta}}\\ & =\sum_{i=0}^{m}\sum_{k=n}^{i}\frac{n!i!}{k!(i-k)!}\mathbb{E}_{\pi_{\sigma}^{\beta}}(\langle P_{i-k}^{\sigma,\beta}\hat{\otimes}\mathbf{S}(k,n)^{*}\Phi^{(n)},\mathbf{s}(m,i)\varphi^{(m)}\rangle)\\ & \overset{\star\star}{=}\sum_{i=0}^{m}\sum_{k=n}^{i}\frac{n!i!}{k!(i-k)!}\delta_{i,k}\langle\mathbf{S}(k,n)^{*}\Phi^{(n)},\mathbf{s}(m,i)\varphi^{(m)}\rangle\\ & =\sum_{k=n}^{m}n!\langle\Phi^{(n)},\mathbf{S}(k,n)\mathbf{s}(m,k)\varphi^{(m)}\rangle\\ & =\delta_{n,m}n!\langle\Phi^{(n)},\varphi^{(n)}\rangle, \end{align*} where the last equality is obtained using Proposition \ref{prop:Stirling-1st-2nd-kind-inf-dim} in Appendix \ref{sec:Stirling-Operators}.
\end{proof} \begin{rem} In Appendix \ref{sec:alternative-proof-biorthogonal-property}, we provide an alternative proof for the biorthogonal property of the generalized Appell system $\mathbb{A}^{\sigma,\beta,\alpha}$ using the $S_{\pi_{\sigma}^{\beta}}$-transform (to be introduced in Section \ref{sec:test-and-generalized-function-spaces}) of the generalized function $Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)})\in\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$. It is based on the fact that $\exp(\langle z,\varphi\rangle)$ is an eigenfunction of the operator $G(\Phi^{(n)})$. \end{rem} Using Theorem \ref{thm:biorthogonal-property-inf-dim}, the space $\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$ can now be characterized in a similar way to the space $\mathcal{P}(\mathcal{D}')$. See \cite{KdSS98} for the proof of the following theorem. \begin{thm} For every $\Phi\in\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$, there exists a unique sequence $(\Phi^{(n)})_{n\in\mathbb{N}_{0}}$, $\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$ such that \[ \Phi=\sum_{n=0}^{\infty}Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)}) \] and, conversely, every such series generates a generalized function in $\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$. \end{thm} \section{Test and Generalized Function Spaces} \label{sec:test-and-generalized-function-spaces}In this section, we construct the test function space and the generalized function space associated to the fPm $\pi_{\sigma}^{\beta}$ and study some of their properties. Here, we consider a nuclear triple \[ \underset{p\in\mathbb{N}}{\mathrm{pr\,lim}}\mathcal{H}_{p}=\mathcal{N}\subset L^{2}(\sigma)\subset\mathcal{N}'=\underset{p\in\mathbb{N}}{\mathrm{ind\,lim}}\mathcal{H}_{-p}, \] as described in Section~\ref{sec:Nuclear-spaces} such that \[ \mathcal{D}\subset\mathcal{N}\subset L^{2}(\sigma)\subset\mathcal{N}'\subset\mathcal{D}'.
\] Let $\varphi=\sum_{n=0}^{N}\langle C_{n}^{\sigma,\beta}(w),\varphi^{(n)}\rangle\in\mathcal{P}(\mathcal{D}')$ be given. Then we use the fact that $\mathcal{D}\subset\mathcal{N}$ so that $\varphi^{(n)}\in\mathcal{N}_{\mathbb{C}}^{\hat{\otimes}n}$. Note that \[ \mathcal{N}_{\mathbb{C}}^{\hat{\otimes}n}=\underset{p\in\mathbb{N}}{\mathrm{pr\,lim}}\,\mathcal{H}_{p,\mathbb{C}}^{\hat{\otimes}n} \] and so $\varphi^{(n)}\in\mathcal{H}_{p,\mathbb{C}}^{\hat{\otimes}n}$ for all $p\in\mathbb{N}$. For each $p,q\in\mathbb{N}$ and $\kappa\in[0,1]$, we introduce a norm $\|\cdot\|_{p,q,\kappa,\pi_{\sigma}^{\beta}}$ on $\mathcal{P}(\mathcal{D}')$ by \[ \|\varphi\|_{p,q,\kappa,\pi_{\sigma}^{\beta}}^{2}:=\sum_{n=0}^{\infty}(n!)^{1+\kappa}2^{nq}|\varphi^{(n)}|_{p}^{2}. \] Let $(\mathcal{H}_{p})_{q,\pi_{\sigma}^{\beta}}^{\kappa}$ be the Hilbert space obtained by completing the space $\mathcal{P}(\mathcal{D}')$ with respect to the norm $\|\cdot\|_{p,q,\kappa,\pi_{\sigma}^{\beta}}$. The Hilbert space $(\mathcal{H}_{p})_{q,\pi_{\sigma}^{\beta}}^{\kappa}$ has inner product given by \[ (\!(\varphi,\psi)\!)_{q,\pi_{\sigma}^{\beta}}^{\kappa}:=\sum_{n=0}^{\infty}(n!)^{1+\kappa}2^{nq}(\varphi^{(n)},\overline{\psi^{(n)}})_{p}, \] and admits the representation \[ (\mathcal{H}_{p})_{q,\pi_{\sigma}^{\beta}}^{\kappa}=\left\{ \varphi=\sum_{n=0}^{\infty}\langle C_{n}^{\sigma,\beta}(\cdot),\varphi^{(n)}\rangle\in L^{2}(\pi_{\sigma}^{\beta})\;\bigg|\;\|\varphi\|_{p,q,\kappa,\pi_{\sigma}^{\beta}}^{2}=\sum_{n=0}^{\infty}(n!)^{1+\kappa}2^{nq}|\varphi^{(n)}|_{p}^{2}<\infty\right\} . \] Then the test function space $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}$ is defined by \[ (\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}:=\underset{p,q\in\mathbb{N}}{\mathrm{pr\,lim}}(\mathcal{H}_{p})_{q,\pi_{\sigma}^{\beta}}^{\kappa}. \] The test function space $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}$ is a nuclear space which is continuously embedded in $L^{2}(\pi_{\sigma}^{\beta})$.
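For instance, for the modified normalized exponential the squared norm is $\sum_{n}(n!)^{1+\kappa}2^{nq}|\varphi|_{p}^{2n}/(n!)^{2}=\sum_{n}(n!)^{\kappa-1}(2^{q}|\varphi|_{p}^{2})^{n}$: for $\kappa=0$ this sums to $\exp(2^{q}|\varphi|_{p}^{2})$, and for $\kappa=1$ it is a geometric series, finite only when $2^{q}|\varphi|_{p}^{2}<1$. A small numerical check (names are ours):

```python
import math

def exp_norm_sq(q: int, phi_norm: float, kappa: float = 0.0, terms: int = 60) -> float:
    """Truncation of sum_n (n!)^{1+kappa} 2^{nq} |phi|_p^{2n} / (n!)^2,
    simplified to sum_n (n!)^{kappa-1} (2^q |phi|_p^2)^n."""
    return sum(
        math.factorial(n) ** (kappa - 1.0) * (2.0**q * phi_norm**2) ** n
        for n in range(terms)
    )

# kappa = 0: the series sums to exp(2^q |phi|_p^2)
print(exp_norm_sq(2, 0.8), math.exp(2**2 * 0.8**2))
# kappa = 1: geometric series, finite only when 2^q |phi|_p^2 < 1
print(exp_norm_sq(0, 0.5, kappa=1.0), 1 / (1 - 0.25))
```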
\begin{example} \label{exa:normalized-exp-as-test-function}The modified normalized exponential given in \eqref{eq:normalized-expo-inf-dim-alpha} has the norm \[ \|\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);\cdot)\|_{p,q,\kappa,\pi_{\sigma}^{\beta}}^{2}=\sum_{n=0}^{\infty}(n!)^{1+\kappa}2^{nq}\frac{|\varphi|_{p}^{2n}}{(n!)^{2}},\quad\varphi\in\mathcal{N}_{\mathbb{C}}. \] \begin{enumerate} \item If $\kappa=0$, we have \[ \|\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);\cdot)\|_{p,q,0,\pi_{\sigma}^{\beta}}^{2}=\exp(2^{q}|\varphi|_{p}^{2})<\infty,\qquad\forall\varphi\in\mathcal{N}_{\mathbb{C}}. \] \item For $\kappa\in(0,1)$, we use the H{\"o}lder inequality with the conjugate exponents $(\frac{1}{\kappa},\frac{1}{1-\kappa})$ and obtain \begin{align*} \|\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);\cdot)\|_{p,q,\kappa,\pi_{\sigma}^{\beta}}^{2} & \leq\left(\sum_{n=0}^{\infty}\left(\frac{1}{2^{n\kappa}}\right)^{\frac{1}{\kappa}}\right)^{\kappa}\left(\sum_{n=0}^{\infty}\left(\frac{\left(2^{\kappa}2^{q}|\varphi|_{p}^{2}\right)^{n}}{(n!)^{1-\kappa}}\right)^{\frac{1}{1-\kappa}}\right)^{1-\kappa}\\ & =2^{\kappa}\exp\left((1-\kappa)2^{\frac{\kappa+q}{1-\kappa}}|\varphi|_{p}^{\frac{2}{1-\kappa}}\right)<\infty, \end{align*} for all $\varphi\in\mathcal{N}_{\mathbb{C}}.$ Thus, $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);\cdot)\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}$, $\kappa\in[0,1)$. \item For $\kappa=1$, we have \[ \|\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);\cdot)\|_{p,q,1,\pi_{\sigma}^{\beta}}^{2}=\sum_{n=0}^{\infty}2^{nq}|\varphi|_{p}^{2n},\quad\varphi\in\mathcal{N}_{\mathbb{C}}. \] Hence, we have $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);\cdot)\notin(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1}$ if $\varphi\neq0$, but $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);\cdot)\in(\mathcal{H}_{p})_{q,\pi_{\sigma}^{\beta}}^{1}$ if $2^{q}|\varphi|_{p}^{2}<1$.
Moreover, the set \[ \{\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);\cdot)\mid2^{q}|\varphi|_{p}^{2}<1,\varphi\in\mathcal{N}_{\mathbb{C}}\} \] is total in $(\mathcal{H}_{p})_{q,\pi_{\sigma}^{\beta}}^{1}$. \end{enumerate} \end{example} For $\kappa=1$, we collect below the most important properties of the space $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1}$; see \cite{KdSS98} for the proofs. \begin{thm} \begin{description} \item [{$(i)$}] $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1}$ is a nuclear space. \item [{$(ii)$}] The topology in $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1}$ is uniquely determined by the topology on $\mathcal{N}$, i.e., it does not depend on the choice of the family of norms $\{|\cdot|_{p}\}$, $p\in\mathbb{N}$. \item [{$(iii)$}] There exist $p',q'>0$ such that for all $p\geq p'$, $q\geq q'$ the topological embedding $(\mathcal{H}_{p})_{q,\pi_{\sigma}^{\beta}}^{1}\subset L^{2}(\pi_{\sigma}^{\beta})$ holds. Moreover, $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1}$ is continuously and densely embedded in $L^{2}(\pi_{\sigma}^{\beta})$. \end{description} \end{thm} \begin{prop} \label{prop:test-function-estimate}Any test function $\varphi$ in $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1}$ has a uniquely defined extension to $\mathcal{N}'_{\mathbb{C}}$ as an element of $\mathcal{E}_{\min}^{1}(\mathcal{N}'_{\mathbb{C}})$. For all $p>p'$ such that the embedding $\mathcal{H}_{p}\hookrightarrow\mathcal{H}_{p'}$ is of Hilbert-Schmidt class and for all $\varepsilon>0$, we obtain the following bound: \[ |\varphi(w)|\leq C\|\varphi\|_{p,q,1,\pi_{\sigma}^{\beta}}\mathrm{e}^{\varepsilon|w|_{-p'}},\qquad\varphi\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1},\:w\in\mathcal{H}_{-p,\mathbb{C}}, \] where $2^{q}>(\varepsilon\|i_{p',p}\|_{HS})^{-2}$ and \[ C=C_{\varepsilon}(1-2^{-q}\varepsilon^{-2})^{-1/2}.
\] \end{prop} For each $p,q\in\mathbb{N}$ and $\kappa\in[0,1]$, we denote by $(\mathcal{H}_{-p})_{-q,\pi_{\sigma}^{\beta}}^{-\kappa}$ the Hilbert space dual of the space $(\mathcal{H}_{p})_{q,\pi_{\sigma}^{\beta}}^{\kappa}$ with respect to $L^{2}(\pi_{\sigma}^{\beta})$ with the corresponding (Hilbert) norm $\|\cdot\|_{-p,-q,\kappa,\pi_{\sigma}^{\beta}}$. This space admits the following representation \[ (\mathcal{H}_{-p})_{-q,\pi_{\sigma}^{\beta}}^{-\kappa}:=\left\{ \Phi=\sum_{n=0}^{\infty}Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)})\in\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')\,\middle|\,\|\Phi\|_{-p,-q,\kappa,\pi_{\sigma}^{\beta}}^{2}:=\sum_{n=0}^{\infty}(n!)^{1-\kappa}2^{-nq}|\Phi^{(n)}|_{-p}^{2}<\infty\right\} . \] By the general duality theory, the dual space $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-\kappa}$ of $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}$ with respect to $L^{2}(\pi_{\sigma}^{\beta})$ is then given by \[ (\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-\kappa}:=\bigcup_{p,q\in\mathbb{N}}(\mathcal{H}_{-p})_{-q,\pi_{\sigma}^{\beta}}^{-\kappa}. \] Since $\mathcal{P}(\mathcal{D}')\subset(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}$, the space $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-\kappa}$ can be viewed as a subspace of $\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$ and so we extend the triple in \eqref{eq:triple-P(D)-L2} to the chain of spaces \[ \mathcal{P}(\mathcal{D}')\subset(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}\subset L^{2}(\pi_{\sigma}^{\beta})\subset(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-\kappa}\subset\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}'). 
\] The action of a distribution \[ \Phi=\sum_{n=0}^{\infty}Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)})\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-\kappa} \] on a test function \[ \varphi=\sum_{n=0}^{\infty}\langle C_{n}^{\sigma,\beta}(w),\varphi^{(n)}\rangle\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}, \] using the biorthogonal property in Theorem \ref{thm:biorthogonal-property-inf-dim}, is given by \[ \langle\!\langle\Phi,\varphi\rangle\!\rangle_{\pi_{\sigma}^{\beta}}=\sum_{n=0}^{\infty}n!\langle\Phi^{(n)},\varphi^{(n)}\rangle. \] Now we give two examples of generalized functions in $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-1}$. For a more general case, see \cite{KdSS98}. \begin{example}[Generalized Radon-Nikodym derivative] \label{exa:RND} We define a generalized function $\rho_{\pi_{\sigma}^{\beta}}^{\alpha}(w,\cdot)\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-1}$, $w\in\mathcal{N}'_{\mathbb{C}}$, with the following property: \[ \langle\!\langle\rho_{\pi_{\sigma}^{\beta}}^{\alpha}(w,\cdot),\varphi\rangle\!\rangle_{\pi_{\sigma}^{\beta}}=\int_{\mathcal{N}'}\varphi(x-w)\,\mathrm{d}\pi_{\sigma}^{\beta}(x),\quad\varphi\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1}. \] First, we have to establish the continuity of $\rho_{\pi_{\sigma}^{\beta}}^{\alpha}(w,\cdot)$. Let $w\in\mathcal{H}_{-p,\mathbb{C}}$ be given.
Then, for $p\geq p'$ sufficiently large and $\varepsilon>0$ small enough, Proposition~\ref{prop:test-function-estimate} yields $q\in\mathbb{N}$ and $C>0$ such that \begin{align*} {\displaystyle \left|\int_{\mathcal{N}'}\varphi(x-w)\,\mathrm{d}\pi_{\sigma}^{\beta}(x)\right|} & \leq{\displaystyle C\|\varphi\|_{p,q,1,\pi_{\sigma}^{\beta}}\int_{\mathcal{N}'}\exp(\varepsilon|x-w|_{-p'})\,\mathrm{d}\pi_{\sigma}^{\beta}(x)}\\ & {\displaystyle \leq C\|\varphi\|_{p,q,1,\pi_{\sigma}^{\beta}}\exp(\varepsilon|w|_{-p'})\int_{\mathcal{N}'}\exp(\varepsilon|x|_{-p'})\,\mathrm{d}\pi_{\sigma}^{\beta}(x).} \end{align*} For $\varepsilon$ sufficiently small, the last integral is finite by Lemma 9 of \cite{KSWY95}. This implies that $\rho_{\pi_{\sigma}^{\beta}}^{\alpha}(w,\cdot)\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-1}$. Let us show that in $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-1}$ the generalized function $\rho_{\pi_{\sigma}^{\beta}}^{\alpha}(w,\cdot)$ admits the canonical expansion \begin{equation} \rho_{\pi_{\sigma}^{\beta}}^{\alpha}(w,\cdot)=\sum_{k=0}^{\infty}\frac{1}{k!}Q_{k}^{\sigma,\beta,\alpha}((-w)_{k}).\label{eq:rho-alpha-expansion} \end{equation} Note that the right-hand side of \eqref{eq:rho-alpha-expansion} defines an element in $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-1}$. Hence it suffices to compare the action of both sides of \eqref{eq:rho-alpha-expansion} on a total set from $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1}$.
For $\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}$, we use the biorthogonal property of the $\mathbb{P}^{\sigma,\beta,\alpha}$ and $\mathbb{Q}^{\sigma,\beta,\alpha}$-systems and obtain \begin{align*} \langle\!\langle\rho_{\pi_{\sigma}^{\beta}}^{\alpha}(w,\cdot),\langle C_{n}^{\sigma,\beta},\varphi^{(n)}\rangle\rangle\!\rangle_{\pi_{\sigma}^{\beta}} & =\bigg\langle\!\!\!\bigg\langle\sum_{k=0}^{\infty}\frac{1}{k!}Q_{k}^{\sigma,\beta,\alpha}((-w)_{k}),\langle C_{n}^{\sigma,\beta},\varphi^{(n)}\rangle\bigg\rangle\!\!\!\bigg\rangle_{\pi_{\sigma}^{\beta}}\\ & =\langle(-w)_{n},\varphi^{(n)}\rangle. \end{align*} On the other hand, by Proposition \ref{prop:generalized-appell-polynomials-inf-dim}-(P4) and (P6), \begin{align*} \langle\!\langle\rho_{\pi_{\sigma}^{\beta}}^{\alpha}(w,\cdot),\langle C_{n}^{\sigma,\beta},\varphi^{(n)}\rangle\rangle\!\rangle_{\pi_{\sigma}^{\beta}} & ={\displaystyle \int}_{\mathcal{D}'}\langle C_{n}^{\sigma,\beta}(x-w),\varphi^{(n)}\rangle\,\mathrm{d}\pi_{\sigma}^{\beta}(x)\\ & {\displaystyle \negthickspace\overset{(P4)}{=}\sum_{k=0}^{n}}\binom{n}{k}{\displaystyle \int}_{\mathcal{D}'}\langle C_{k}^{\sigma,\beta}(x)\hat{\otimes}(-w)_{n-k},\varphi^{(n)}\rangle\,\mathrm{d}\pi_{\sigma}^{\beta}(x)\\ & {\displaystyle =\sum_{k=0}^{n}}\binom{n}{k}\mathbb{E}_{\pi_{\sigma}^{\beta}}(\langle C_{k}^{\sigma,\beta}(x)\hat{\otimes}(-w)_{n-k},\varphi^{(n)}\rangle)\\ & \negthickspace\negthickspace\overset{(P6)}{=}\langle(-w)_{n},\varphi^{(n)}\rangle. \end{align*} Thus, we have shown that $\rho_{\pi_{\sigma}^{\beta}}^{\alpha}(w,\cdot)$ is the generating function of the $\mathbb{Q}^{\sigma,\beta,\alpha}$-system, i.e., \[ \rho_{\pi_{\sigma}^{\beta}}^{\alpha}(-w,\cdot)=\sum_{k=0}^{\infty}\frac{1}{k!}Q_{k}^{\sigma,\beta,\alpha}((w)_{k}).
\] \end{example} \begin{example}[Delta function] For $w\in\mathcal{N}'_{\mathbb{C}}$, we define a distribution by the following $\mathbb{Q}^{\sigma,\beta,\alpha}$-decomposition: \[ \delta_{w}=\sum_{n=0}^{\infty}Q_{n}^{\sigma,\beta,\alpha}(C_{n}^{\sigma,\beta}(w)). \] If $p\in\mathbb{N}$ is large enough and $\varepsilon>0$ is sufficiently small, by Proposition \ref{prop:generalized-appell-polynomials-inf-dim}-(P7), for any $w\in\mathcal{H}_{-p,\mathbb{C}}$ we have \[ \|\delta_{w}\|_{-p,-q,1,\pi_{\sigma}^{\beta}}^{2}=\sum_{n=0}^{\infty}2^{-nq}|C_{n}^{\sigma,\beta}(w)|_{-p}^{2}\overset{(P7)}{\leq}C_{\varepsilon}^{2}\exp(2\varepsilon|w|_{-p})\sum_{n=0}^{\infty}\varepsilon^{-2n}2^{-nq}, \] which is finite for sufficiently large $q\in\mathbb{N}$. This implies that $\delta_{w}\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-1}$. Now, for \[ \varphi=\sum_{n=0}^{\infty}\langle C_{n}^{\sigma,\beta},\varphi^{(n)}\rangle\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1}, \] the action of $\delta_{w}$ is given by \[ \langle\!\langle\delta_{w},\varphi\rangle\!\rangle_{\pi_{\sigma}^{\beta}}=\sum_{n=0}^{\infty}\langle C_{n}^{\sigma,\beta}(w),\varphi^{(n)}\rangle=\varphi(w), \] using the biorthogonal property of the $\mathbb{P}^{\sigma,\beta,\alpha}$ and $\mathbb{Q}^{\sigma,\beta,\alpha}$-systems. This means that $\delta_{w}$ (in particular for $w$ real) plays the role of a $\delta$-function (evaluation map) in the calculus we discuss. \end{example} Recall from Example \ref{exa:normalized-exp-as-test-function} that the modified normalized exponential $\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi);\cdot)$ belongs to $(\mathcal{H}_{p})_{q,\pi_{\sigma}^{\beta}}^{1}$ only if $2^{q}|\varphi|_{p}^{2}<1$ for $\varphi\in\mathcal{N}_{\mathbb{C}}$.
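The finiteness condition in the estimate for $\|\delta_{w}\|$ above is simply the convergence of a geometric series with ratio $\varepsilon^{-2}2^{-q}$. A minimal numeric check (the values of $\varepsilon$ and $q$ are illustrative choices of our own):

```python
# The tail bound for ||delta_w|| reduces to the geometric series
# sum_n (eps^-2 * 2^-q)^n, convergent precisely when 2^q * eps^2 > 1.
eps, q = 0.5, 4                     # illustrative: 2^q * eps^2 = 4 > 1
ratio = eps ** -2 * 2 ** -q
partial = sum(ratio ** n for n in range(200))
closed = 1 / (1 - ratio)
assert ratio < 1
assert abs(partial - closed) < 1e-12
```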
We define the $S_{\pi_{\sigma}^{\beta}}$-transform of a distribution $\Phi\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-1}\subset\mathcal{P}'_{\pi_{\sigma}^{\beta}}(\mathcal{D}')$ by \[ S_{\pi_{\sigma}^{\beta}}\Phi(\varphi):=\langle\!\langle\Phi,\mathrm{e}_{\pi_{\sigma}^{\beta}}(\varphi;\cdot)\rangle\!\rangle_{\pi_{\sigma}^{\beta}}, \] with $\varphi$ chosen as above. By the biorthogonal property of the $\mathbb{P}^{\sigma,\beta,\alpha}$ and $\mathbb{Q}^{\sigma,\beta,\alpha}$-systems, we have \[ S_{\pi_{\sigma}^{\beta}}\Phi(\varphi)=\sum_{n=0}^{\infty}\langle\Phi^{(n)},g_{\alpha}(\varphi)^{\otimes n}\rangle. \] Now we introduce the convolution of a function $\varphi\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1}$ with respect to the measure $\pi_{\sigma}^{\beta}$, given by \[ C_{\pi_{\sigma}^{\beta}}\varphi(w)=\langle\!\langle\rho_{\pi_{\sigma}^{\beta}}^{\alpha}(-w,\cdot),\varphi\rangle\!\rangle_{\pi_{\sigma}^{\beta}}, \] where $\rho_{\pi_{\sigma}^{\beta}}^{\alpha}(-w,\cdot)\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-1}$ is the generalized Radon-Nikodym derivative (see Example~\ref{exa:RND}) for any $w\in\mathcal{N}'_{\mathbb{C}}$. If $\varphi$ has the representation \[ \varphi=\sum_{n=0}^{\infty}\langle C_{n}^{\sigma,\beta}(w),\varphi^{(n)}\rangle\in(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1}, \] then the action of $C_{\pi_{\sigma}^{\beta}}$ on $\varphi$ is given, for every $w\in\mathcal{N}'_{\mathbb{C}}$, by \[ C_{\pi_{\sigma}^{\beta}}\varphi(w)=\sum_{n=0}^{\infty}\langle(-w)_{n},\varphi^{(n)}\rangle\overset{\eqref{eq:Stirling-1st-inner-product}}{=}\sum_{n=0}^{\infty}\sum_{k=0}^{n}(-1)^{k}\langle w^{\otimes k},\mathbf{s}(n,k)\varphi^{(n)}\rangle. \] The following characterization theorem for the test and generalized function spaces associated to the fPm is a standard result for this approach. For the proof, we refer to \cite{KSWY95,KdSS98}.
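The last identity has a scalar shadow, $(-w)_{n}=\sum_{k}(-1)^{k}s(n,k)w^{k}$ with the classical Stirling numbers of the first kind, which can be verified with exact rational arithmetic (a sketch of our own; the helpers `s1` and `falling` are illustrative):

```python
from fractions import Fraction

def s1(n, k):
    """Signed Stirling numbers of the first kind: (z)_n = sum_k s(n,k) z^k."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return s1(n - 1, k - 1) - (n - 1) * s1(n - 1, k)

def falling(z, n):
    """Falling factorial (z)_n = z (z-1) ... (z-n+1), with (z)_0 = 1."""
    out = Fraction(1)
    for j in range(n):
        out *= z - j
    return out

# scalar shadow of the kernel identity behind C_pi:
# <(-w)_n, phi^(n)> = sum_k (-1)^k <w^k, s(n,k) phi^(n)>
w = Fraction(7, 3)
for n in range(9):
    lhs = falling(-w, n)
    rhs = sum(s1(n, k) * (-1) ** k * w ** k for k in range(n + 1))
    assert lhs == rhs
```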
\begin{thm} \label{thm:characterizations} \begin{description} \item [{$(i)$}] The convolution $C_{\pi_{\sigma}^{\beta}}$ is a topological isomorphism from $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{1}$ onto $\mathcal{E}_{\min}^{1}(\mathcal{N}'_{\mathbb{C}})$. \item [{$(ii)$}] The $S_{\pi_{\sigma}^{\beta}}$-transform is a topological isomorphism from $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-1}$ onto $\mathrm{Hol}_{0}(\mathcal{N}_{\mathbb{C}})$. \item [{$(iii)$}] The $S_{\pi_{\sigma}^{\beta}}$-transform is a topological isomorphism from $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-\kappa}$, $\kappa\in[0,1)$, onto $\mathcal{E}_{\max}^{2/(1-\kappa)}(\mathcal{N}_{\mathbb{C}})$. \end{description} \end{thm} \section{Conclusion and Outlook} In this paper, we constructed the generalized Appell system $\mathbb{A}^{\sigma,\beta,\alpha}=(\mathbb{P}^{\sigma,\beta,\alpha},\mathbb{Q}^{\sigma,\beta,\alpha})$ associated to the fPm $\pi_{\sigma}^{\beta}$ in infinite dimension. The Appell polynomials $\mathbb{P}^{\sigma,\beta,\alpha}$ (generated by the modified Wick exponential) and the dual Appell system $\mathbb{Q}^{\sigma,\beta,\alpha}$ are biorthogonal to each other, see Theorem~\ref{thm:biorthogonal-property-inf-dim}. It turns out that the kernels $C_{n}^{\sigma,\beta}(\cdot)$ of the system $\mathbb{P}^{\sigma,\beta,\alpha}$ are given in terms of the Stirling operators or, equivalently, in terms of the falling factorials on $\mathcal{D}'_{\mathbb{C}}$, see Proposition~\ref{prop:generalized-appell-polynomials-inf-dim}. The system $\mathbb{P}^{\sigma,\beta,\alpha}$ is used to define the spaces of test functions $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}$, $0\leq\kappa\leq1$, while $\mathbb{Q}^{\sigma,\beta,\alpha}$ is suitable for describing the generalized function spaces $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-\kappa}$ arising from $\pi_{\sigma}^{\beta}$, see Section~\ref{sec:test-and-generalized-function-spaces}.
The spaces $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{\kappa}$ and $(\mathcal{N})_{\pi_{\sigma}^{\beta}}^{-\kappa}$ are universal in the sense that their characterization via the $S_{\pi_{\sigma}^{\beta}}$-transform is independent of the measure $\pi_{\sigma}^{\beta}$ (see Theorem~\ref{thm:characterizations}), as is well known from non-Gaussian analysis, \cite{KSWY95,KdSS98}. In a future work we plan to investigate the stochastic counterpart associated to the fPm, namely the fractional Poisson process $N_{\lambda}^{\beta}$ in one and infinite dimensions, in particular its representation in terms of known processes as well as possible applications. \appendix \section{Appendix} \subsection{Kolmogorov extension theorem on configuration space} \label{sec:Kolmogorov-extension-theorem-on-config-space} In this section, we discuss a version of the Kolmogorov extension theorem for the configuration space $(\Gamma,\mathcal{B}(\Gamma))$. The following definitions and properties of measurable spaces can be found in \cite{C83}, \cite{G88} and \cite{P67}. \begin{defn} Let $(X,\mathcal{A})$ and $(X',\mathcal{A}')$ be two measurable spaces. \begin{enumerate} \item The spaces $(X,\mathcal{A})$ and $(X',\mathcal{A}')$ are called isomorphic if, and only if, there exists a measurable bijective mapping $f:X\longrightarrow X'$ such that its inverse $f^{-1}$ is also measurable. \item $(X,\mathcal{A})$ and $(X',\mathcal{A}')$ are called $\sigma$-isomorphic if, and only if, there exists a bijective mapping $F:\mathcal{A}\longrightarrow\mathcal{A}'$ between the $\sigma$-algebras which preserves the operations in a $\sigma$-algebra. \item $(X,\mathcal{A})$ is said to be countably generated if, and only if, there exists a denumerable class $\mathscr{D}\subset\mathcal{A}$ such that $\mathscr{D}$ generates $\mathcal{A}$. \item $(X,\mathcal{A})$ is said to be separable if, and only if, it is countably generated and for each $x\in X$ the set $\left\{ x\right\} \in\mathcal{A}$.
\end{enumerate} \end{defn} \begin{defn} Let $(X,\mathcal{A})$ be a countably generated measurable space. Then $(X,\mathcal{A})$ is called a standard Borel space if, and only if, there exists a Polish space $X'$ (i.e., a complete metrizable space satisfying the second axiom of countability) with Borel $\sigma$-algebra $\mathcal{B}(X')$ such that $(X,\mathcal{A})$ and $(X',\mathcal{B}(X'))$ are $\sigma$-isomorphic. \end{defn} \begin{example} \label{exa:standard-Borel-spaces} \begin{enumerate} \item Every locally compact, $\sigma$-compact space is a standard Borel space. \item Polish spaces are standard Borel spaces. \end{enumerate} \end{example} \begin{prop} \label{prop:X-X'-sigma-isomorphic} \begin{enumerate} \item If $(X,\mathcal{A})$ is a countably generated measurable space, then there exists $E\subset\left\{ 0,1\right\} ^{\mathbb{N}}$ such that $(X,\mathcal{A})$ is $\sigma$-isomorphic to $(E,\mathcal{B}(E))$. Thus $(X,\mathcal{A})$ is $\sigma$-isomorphic to a separable measurable space. \item Let $(X,\mathcal{A})$ and $(X',\mathcal{A}')$ be separable measurable spaces. Then $(X,\mathcal{A})$ is $\sigma$-isomorphic to $(X',\mathcal{A}')$ if, and only if, they are isomorphic. \end{enumerate} \end{prop} The following theorem states some operations under which separable standard Borel spaces are closed, see \cite{P67} and \cite{C83}. \begin{thm} \label{thm:separable-standard-Borel-space} \begin{enumerate} \item Countable products, sums, and unions of separable standard Borel spaces are separable standard Borel spaces. \item The projective limit of separable standard Borel spaces is a separable standard Borel space. \item Any measurable subset of a separable standard Borel space is also a separable standard Borel space. \end{enumerate} \end{thm} We also need a version of Kolmogorov's extension theorem for separable standard Borel spaces. \begin{thm}[{cf. \cite[Chap.
V, Theorem 3.2]{P67}}] \label{thm:Kolmogorov-extension-thm} Let $(X_{n},\mathcal{A}_{n})$, $n\in\mathbb{N}$, be separable standard Borel spaces. Let $(X,\mathcal{A})$ be the projective limit of the spaces $(X_{n},\mathcal{A}_{n})$ relative to the maps $p_{m,n}:X_{n}\longrightarrow X_{m}$, $m\leq n$. If $\left\{ \mu_{n}\right\} _{n\in\mathbb{N}}$ is a sequence of probability measures such that $\mu_{n}$ is a measure on $(X_{n},\mathcal{A}_{n})$ and $\mu_{m}=\mu_{n}\circ p_{m,n}^{-1}$ for $m\leq n$, then there exists a unique measure $\mu$ on $(X,\mathcal{A})$ such that $\mu_{n}=\mu\circ p_{n}^{-1}$ for all $n\in\mathbb{N}$, where $p_{n}$ is the projection map from $X$ onto $X_{n}$. \end{thm} This theorem can be extended to an index set $I$ which is a directed set with an order generating sequence, i.e., there exists a sequence $(\alpha_{n})_{n\in\mathbb{N}}$ in $I$ such that for every $\alpha\in I$ there exists $n\in\mathbb{N}$ with $\alpha<\alpha_{n}$. We apply this general framework to our configuration space $\Gamma$. Assume that $(X,\mathfrak{X})$ is a separable standard Borel space. In this generality the family $\mathcal{B}_{c}(X)$ is no longer available, hence we introduce an abstract concept of local sets. Let $\mathcal{J}_{X}$ be a subset of $\mathfrak{X}$ with the properties: \begin{description} \item [{(I1)}] $\Lambda_{1}\cup\Lambda_{2}\in\mathcal{J}_{X}$ for all $\Lambda_{1},\Lambda_{2}\in\mathcal{J}_{X}.$ \item [{(I2)}] If $\Lambda\in\mathcal{J}_{X}$ and $A\in\mathfrak{X}$ with $A\subset\Lambda$ then $A\in\mathcal{J}_{X}$. \item [{(I3)}] There exists a sequence $\left\{ \Lambda_{n}\mid n\in\mathbb{N}\right\} $ from $\mathcal{J}_{X}$ with $X=\bigcup_{n\in\mathbb{N}}\Lambda_{n}$ such that if $\Lambda\in\mathcal{J}_{X}$ then $\Lambda\subset\Lambda_{n}$ for some $n\in\mathbb{N}$.
\end{description} We can then construct the configuration space as in Subsection~\ref{subsec:configuration-space} taking $X=\mathbb{R}^{d}$ and replacing $\mathcal{B}(\mathbb{R}^{d})$ by $\mathcal{J}_{\mathbb{R}^{d}}$. Our aim is to show that $(\Gamma,\mathcal{B}(\Gamma))$ is a separable standard Borel space and thus by Theorem~\ref{thm:Kolmogorov-extension-thm} the measure $\pi_{\sigma}^{\beta}$ in Subsection~\ref{sec:fPm-cspace} exists. It follows from Theorem~\ref{thm:separable-standard-Borel-space} that for any $\Lambda\in\mathcal{J}_{\mathbb{R}^{d}}$ and for any $n\in\mathbb{N}$, the set $\Lambda^{n}$ is a separable standard Borel space. Thus, by the same argument $\widetilde{\Lambda^{n}}/S_{n}$ is also a separable standard Borel space, see e.g. \cite{S94}. Now taking into account the isomorphism between $\widetilde{\Lambda^{n}}/S_{n}$ and $\Gamma_{\Lambda}^{(n)}$, $\Gamma_{\Lambda}^{(n)}$ is also a separable standard Borel space as well as $\Gamma_{\Lambda}$ by Theorem~\ref{thm:separable-standard-Borel-space}-(1). Therefore, given $(\Gamma,\mathcal{B}(\Gamma))$ as the projective limit of the projective system $\left\{ (\Gamma_{\Lambda},\mathcal{B}(\Gamma_{\Lambda})),p_{\Lambda_{1},\Lambda_{2}},\mathcal{J}_{\mathbb{R}^{d}}\right\} $ of separable standard Borel spaces, by Theorem~\ref{thm:separable-standard-Borel-space}-(2), $(\Gamma,\mathcal{B}(\Gamma))$ is a separable standard Borel space. \subsection{Stirling Operators} \label{sec:Stirling-Operators}In this appendix we discuss the Stirling operators which we use in Section~\ref{sec:Appell-System} related to the Taylor expansion of a holomorphic function typical in Poisson analysis. For more details and other applications, see \cite{Finkelshtein2019b,Finkelshtein2022}. For $n\in\mathbb{N}$ and $k\in\mathbb{N}_{0}$, we define the \emph{falling factorial} by \[ (k)_{n}:=n!\binom{k}{n}=k(k-1)\cdots(k-n+1). 
\] The latter expression allows us to define the falling factorial as a polynomial in a variable $z\in\mathbb{C}$ replacing $k$: \[ (z)_{n}:=z(z-1)\cdots(z-n+1). \] The generating function of the falling factorials is \[ \sum_{n=0}^{\infty}\frac{u^{n}}{n!}(z)_{n}=\exp[z\log(1+u)]. \] The \emph{Stirling numbers of the first kind,} denoted by $s(n,k)$, are defined as the coefficients of the expansion of $(z)_{n}$ in powers of $z$, explicitly, \[ (z)_{n}:=\sum_{k=1}^{n}s(n,k)z^{k}, \] while the \emph{Stirling numbers of the second kind,} denoted by $S(n,k)$, are defined as the coefficients of the expansion of $z^{n}$ in the polynomials $(z)_{k}$, that is, \[ z^{n}=\sum_{k=1}^{n}S(n,k)(z)_{k}. \] Let us consider lifting the polynomials $(z)_{n}$ to $\mathcal{D}_{\mathbb{C}}'$. We call these polynomials the falling factorials on $\mathcal{D}_{\mathbb{C}}'$, denoted by $(w)_{n}$, for $w\in\mathcal{D}_{\mathbb{C}}'$ (see \cite{Finkelshtein2019b}). The generating function of the falling factorials on $\mathcal{D}_{\mathbb{C}}'$ is given by \begin{equation} \exp(\langle w,\log(1+\varphi)\rangle)=\sum_{n=0}^{\infty}\frac{1}{n!}\langle(w)_{n},\varphi^{\otimes n}\rangle,\quad\varphi\in\mathcal{D}_{\mathbb{C}},\;w\in\mathcal{D}_{\mathbb{C}}'.\label{eq:generating-function-f-factorial-inf-dim} \end{equation} The falling factorial may be written recursively (see Proposition 5.4 in \cite{Finkelshtein2019b}) as follows: \begin{align*} & (w)_{0}=1,\\ & (w)_{1}=w,\\ & (w)_{n}(x_{1},\dots,x_{n})=w(x_{1})(w(x_{2})-\delta_{x_{1}}(x_{2}))\\ & \qquad\qquad\times\dots\times(w(x_{n})-\delta_{x_{1}}(x_{n})-\delta_{x_{2}}(x_{n})-\dots-\delta_{x_{n-1}}(x_{n})), \end{align*} for $n\geq2$ and $(x_{1},\dots,x_{n})\in(\mathbb{R}^{d})^{n}$.
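The scalar definitions above are easy to sanity-check numerically; the sketch below (our own illustration, with `falling` a hypothetical helper) verifies $(k)_{n}=n!\binom{k}{n}$ and the generating function $\sum_{n}\frac{u^{n}}{n!}(z)_{n}=\exp[z\log(1+u)]$:

```python
import math

def falling(z, n):
    """Falling factorial (z)_n = z (z-1) ... (z-n+1), with (z)_0 = 1."""
    out = 1.0
    for j in range(n):
        out *= z - j
    return out

# (k)_n = n! * binom(k, n) for integers k >= 0
assert falling(7, 3) == math.factorial(3) * math.comb(7, 3)
assert falling(4, 6) == 0.0   # (k)_n vanishes when n > k >= 0

# generating function: sum_n u^n/n! (z)_n = exp[z log(1+u)], |u| < 1
z, u = 2.5, 0.25
series = sum(u ** n / math.factorial(n) * falling(z, n) for n in range(40))
assert abs(series - math.exp(z * math.log(1 + u))) < 1e-12
```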
Now we define the \emph{Stirling operators of the first kind} as the linear operators $\mathbf{s}(n,k):\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}\longrightarrow\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}k}$, $n\geq k$, satisfying \begin{align} \langle(w)_{n},\varphi^{(n)}\rangle & =\sum_{k=1}^{n}\langle w^{\otimes k},\mathbf{s}(n,k)\varphi^{(n)}\rangle,\qquad\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n},\;w\in\mathcal{D}_{\mathbb{C}}',\label{eq:Stirling-1st-inner-product} \end{align} and the \emph{Stirling operators of the second kind} as the linear operators $\mathbf{S}(n,k):\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}\longrightarrow\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}k}$, $n\geq k$, satisfying \begin{align} \langle w^{\otimes n},\varphi^{(n)}\rangle & =\sum_{k=1}^{n}\langle(w)_{k},\mathbf{S}(n,k)\varphi^{(n)}\rangle,\qquad\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n},\;w\in\mathcal{D}_{\mathbb{C}}'.\label{eq:Stirling-2nd-inner-product} \end{align} \begin{rem} The Stirling operators $\mathbf{s}(n,k)$ and $\mathbf{S}(n,k)$ introduced in \cite{Finkelshtein2022} are defined on the space of measurable, bounded, compactly supported, symmetric functions; in this paper, however, we define these operators on the space $\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}$ as a consequence of extending the falling factorials to the space of generalized functions $\mathcal{D}_{\mathbb{C}}'$ rather than using the space of Radon measures. \end{rem} Let $n,k\in\mathbb{N}$, $k\leq n$ and $i_{1},\dots,i_{k}\in\mathbb{N}$ such that $i_{1}+\dots+i_{k}=n$.
We define the operator $\mathbb{D}_{i_{1},\dots,i_{k}}^{(n)}\in\mathcal{L}(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n},\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}k})$ (the space of linear operators from $\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}$ into $\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}k}$) by \begin{equation} (\mathbb{D}_{i_{1},\dots,i_{k}}^{(n)}\varphi^{(n)})(x_{1},\dots,x_{k}):=\frac{1}{k!}\sum_{\iota\in S_{k}}\varphi^{(n)}(\underbrace{x_{\iota(1)},\dots,x_{\iota(1)}}_{i_{1}},\dots,\underbrace{x_{\iota(k)},\dots,x_{\iota(k)}}_{i_{k}}),\label{eq:derivative-of-order-n} \end{equation} for $\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}$ and $(x_{1},\dots,x_{k})\in(\mathbb{R}^{d})^{k}$. In particular, we have \begin{equation} \mathbb{D}_{i_{1},\dots,i_{k}}^{(n)}\varphi^{\otimes n}=\varphi^{i_{1}}\hat{\otimes}\dots\hat{\otimes}\varphi^{i_{k}},\quad\varphi\in\mathcal{D}_{\mathbb{C}}.\label{eq:derivative-tensor-product} \end{equation} When $k=1$, we write $\mathbb{D}^{(n)}:=\mathbb{D}_{n}^{(n)}$, so that for $\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}$, \[ (\mathbb{D}^{(n)}\varphi^{(n)})(x)=\varphi^{(n)}(x,\dots,x),\quad x\in\mathbb{R}^{d}. \] The operator $\mathbb{D}_{i_{1},\dots,i_{k}}^{(n)}$ is continuous (see \cite{Finkelshtein2019b} and \cite{Finkelshtein2022}); hence its adjoint $(\mathbb{D}_{i_{1},\dots,i_{k}}^{(n)})^{*}:(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}k})'\longrightarrow(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$ exists and is well-defined.
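In a discrete toy model (functions sampled at finitely many points, an illustration of our own, not the paper's setting), the defining formula \eqref{eq:derivative-of-order-n} and the identity \eqref{eq:derivative-tensor-product} can be checked directly:

```python
from itertools import product, permutations
from math import factorial

d = 3                      # number of discrete "points" in the toy model
phi = [0.5, -1.0, 2.0]     # a test function phi, sampled at the d points

def tensor_power(v, n):
    """phi^{otimes n} as a dict: index tuple -> product of entries."""
    return {idx: prod_entries(v, idx) for idx in product(range(len(v)), repeat=n)}

def prod_entries(v, idx):
    p = 1.0
    for i in idx:
        p *= v[i]
    return p

def D(A, blocks):
    """Diagonal operator D^{(n)}_{i_1,...,i_k}:
    (D A)(x_1,...,x_k) = (1/k!) sum_{perm p} A(x_{p(1)} repeated i_1 times, ...)."""
    k = len(blocks)
    out = {}
    for idx in product(range(d), repeat=k):
        acc = 0.0
        for p in permutations(range(k)):
            full = []
            for j, i_j in enumerate(blocks):
                full += [idx[p[j]]] * i_j
            acc += A[tuple(full)]
        out[idx] = acc / factorial(k)
    return out

def sym(A, k):
    """Symmetrization of a k-tensor given as an index dict."""
    return {idx: sum(A[tuple(idx[p[j]] for j in range(k))]
                     for p in permutations(range(k))) / factorial(k)
            for idx in A}

# check D phi^{otimes 3} = phi^2 hat-otimes phi  for (i_1, i_2) = (2, 1)
A = tensor_power(phi, 3)
lhs = D(A, (2, 1))
outer = {(x, y): phi[x] ** 2 * phi[y] for x in range(d) for y in range(d)}
rhs = sym(outer, 2)
assert all(abs(lhs[i] - rhs[i]) < 1e-12 for i in lhs)
```

For $i_{1}=\dots=i_{k}=1$ the operator reduces to plain symmetrization, which leaves a symmetric tensor unchanged.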
In fact, the operators $\mathbf{s}(n,k)$ and $\mathbf{S}(n,k)$ can be written explicitly in terms of the operators $\mathbb{D}_{i_{1},\dots,i_{k}}^{(n)}$ (see Proposition 3.7 in \cite{Finkelshtein2022}): for any $n,k\in\mathbb{N}$, $k\leq n$, \begin{equation} \mathbf{s}(n,k)=\frac{n!}{k!}\sum_{i_{1}+\dots+i_{k}=n}\frac{(-1)^{n-k}}{i_{1}\dots i_{k}}\mathbb{D}_{i_{1},\dots,i_{k}}^{(n)}\label{eq:Stirling-first-D-property} \end{equation} and \begin{equation} \mathbf{S}(n,k)=\frac{n!}{k!}\sum_{i_{1}+\dots+i_{k}=n}\frac{1}{i_{1}!\dots i_{k}!}\mathbb{D}_{i_{1},\dots,i_{k}}^{(n)}.\label{eq:Stirling-second-D-property} \end{equation} Hence, the Stirling operators are continuous (see Proposition 3.7 in \cite{Finkelshtein2022}) and so their adjoints $\mathbf{s}(n,k)^{*}$ and $\mathbf{S}(n,k)^{*}$ are well defined, that is, \begin{equation} \mathbf{s}(n,k)^{*}:(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}k})'\longrightarrow(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'\quad\mathrm{and\quad}\mathbf{S}(n,k)^{*}:(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}k})'\longrightarrow(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'\label{eq:adjoint-Stirling-operator} \end{equation} and satisfy \[ \langle w^{(k)},\mathbf{s}(n,k)\varphi^{(n)}\rangle=\langle\mathbf{s}(n,k)^{*}w^{(k)},\varphi^{(n)}\rangle\quad\mathrm{and\quad}\langle w^{(k)},\mathbf{S}(n,k)\varphi^{(n)}\rangle=\langle\mathbf{S}(n,k)^{*}w^{(k)},\varphi^{(n)}\rangle, \] for all $w^{(k)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}k})'$ and $\varphi^{(n)}\in\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n}$.
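In the scalar case, \eqref{eq:Stirling-first-D-property} and \eqref{eq:Stirling-second-D-property} reduce to classical summation formulas for the Stirling numbers over compositions of $n$ into $k$ positive parts. The following sketch (our own, exact rational arithmetic; all helper names are illustrative) checks them against the standard recurrences, together with the scalar counterpart of the biorthogonality relation in Proposition~\ref{prop:Stirling-1st-2nd-kind-inf-dim}:

```python
from fractions import Fraction
from itertools import product
from math import factorial

def compositions(n, k):
    """All ordered tuples (i_1,...,i_k), i_j >= 1, with i_1 + ... + i_k = n."""
    return [c for c in product(range(1, n - k + 2), repeat=k) if sum(c) == n]

def s1_formula(n, k):
    """Scalar shadow of the first explicit formula: (n!/k!) sum (-1)^(n-k)/(i_1...i_k)."""
    tot = Fraction(0)
    for c in compositions(n, k):
        denom = 1
        for i in c:
            denom *= i
        tot += Fraction((-1) ** (n - k), denom)
    return Fraction(factorial(n), factorial(k)) * tot

def S2_formula(n, k):
    """Scalar shadow of the second explicit formula: (n!/k!) sum 1/(i_1!...i_k!)."""
    tot = Fraction(0)
    for c in compositions(n, k):
        denom = 1
        for i in c:
            denom *= factorial(i)
        tot += Fraction(1, denom)
    return Fraction(factorial(n), factorial(k)) * tot

def s1(n, k):
    """Stirling numbers of the first kind, classical recurrence."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return s1(n - 1, k - 1) - (n - 1) * s1(n - 1, k)

def S2(n, k):
    """Stirling numbers of the second kind, classical recurrence."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return S2(n - 1, k - 1) + k * S2(n - 1, k)

for n in range(1, 8):
    for k in range(1, n + 1):
        assert s1_formula(n, k) == s1(n, k)
        assert S2_formula(n, k) == S2(n, k)
    # scalar counterpart of sum_k s(k,i) S(n,k) = delta_{n,i}
    for i in range(1, 8):
        assert sum(s1(k, i) * S2(n, k)
                   for k in range(1, n + 1)) == (1 if n == i else 0)
```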
Hence, Equations \eqref{eq:Stirling-1st-inner-product} and \eqref{eq:Stirling-2nd-inner-product} imply that \begin{align} (w)_{n} & =\sum_{k=1}^{n}\mathbf{s}(n,k)^{*}w^{\otimes k},\label{eq:falling-factorial-Stirling}\\ w^{\otimes n} & =\sum_{k=1}^{n}\mathbf{S}(n,k)^{*}(w)_{k}.\nonumber \end{align} \begin{prop}[{see \cite[Prop.~3.15]{Finkelshtein2022}}] For each $k\in\mathbb{N}$ and $\xi\in\mathcal{D}_{\mathbb{C}},$ \begin{equation} \sum_{n=k}^{\infty}\frac{1}{n!}\mathbf{S}(n,k)\xi^{\otimes n}=\frac{1}{k!}(\mathrm{e}^{\xi}-1)^{\otimes k}\label{eq:decomposition-g-alpha^k} \end{equation} and \begin{equation} \sum_{n=k}^{\infty}\frac{1}{n!}\mathbf{s}(n,k)\xi^{\otimes n}=\frac{1}{k!}(\log(1+\xi))^{\otimes k}.\label{eq:decomposition-alpha^k} \end{equation} \end{prop} \begin{prop}[{see \cite[Prop. 3.19]{Finkelshtein2022}}] \label{prop:Stirling-1st-2nd-kind-inf-dim}For any $i,n\in\mathbb{N}$, \[ \sum_{k=1}^{n}\mathbf{s}(k,i)\mathbf{S}(n,k)=\sum_{k=1}^{n}\mathbf{S}(k,i)\mathbf{s}(n,k)=\delta_{n,i}\mathbf{1}^{(i)}, \] where $\mathbf{1}^{(i)}$ denotes the identity operator on $\mathcal{D}_{\mathbb{C}}^{\otimes i}$. \end{prop} \subsection{An Alternative Proof of Theorem \ref{thm:biorthogonal-property-inf-dim} (Biorthogonal Property)} \label{sec:alternative-proof-biorthogonal-property}In Section \ref{sec:Generalized-Dual-Appell-System}, we proved the biorthogonal property of $\mathbb{A}^{\sigma,\beta,\alpha}$ using the definitions of the $\mathbb{P}^{\sigma,\beta,\alpha}$ and $\mathbb{Q}^{\sigma,\beta,\alpha}$-systems. Here, we use a property of the generalized function $Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)})$ under the $S_{\pi_{\sigma}^{\beta}}$-transform (see Theorem \ref{thm:generalized-function-S-transform} below) to provide an alternative proof of Theorem \ref{thm:biorthogonal-property-inf-dim}.
\begin{lem} \label{lem:operator-G-on-exp-z-phi}For every $\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$, $z\in\mathcal{D}'_{\mathbb{C}}$ and $\varphi\in\mathcal{D}_{\mathbb{C}}$, we have \[ G(\Phi^{(n)})(\exp\langle z,\varphi\rangle)=\langle\Phi^{(n)},g_{\alpha}(\varphi)^{\otimes n}\rangle\exp\langle z,\varphi\rangle. \] In other words, the function $\exp\langle z,\varphi\rangle$ is an eigenfunction of the operator $G(\Phi^{(n)})$. \end{lem} \begin{proof} It follows from \eqref{eq:adjoint-Stirling-operator} that $\mathbf{S}(k,n)^{*}\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}k})'$. We apply the differential operator $D(\mathbf{S}(k,n)^{*}\Phi^{(n)})$ to the monomial $\langle z,\varphi\rangle^{m}$, $m\ge k$, to obtain \begin{align*} D(\mathbf{S}(k,n)^{*}\Phi^{(n)})\langle z,\varphi\rangle^{m} & =D(\mathbf{S}(k,n)^{*}\Phi^{(n)})\langle z^{\otimes m},\varphi^{\otimes m}\rangle\\ & =\frac{m!}{(m-k)!}\langle z^{\otimes(m-k)}\hat{\otimes}\mathbf{S}(k,n)^{*}\Phi^{(n)},\varphi^{\otimes m}\rangle\\ & =\frac{m!}{(m-k)!}\langle z,\varphi\rangle^{m-k}\langle\mathbf{S}(k,n)^{*}\Phi^{(n)},\varphi^{\otimes k}\rangle. \end{align*} Now, we apply this result ($\star$) to the Taylor series of the function $\exp\langle z,\varphi\rangle$ and obtain \begin{align*} D(\mathbf{S}(k,n)^{*}\Phi^{(n)})(\exp\langle z,\varphi\rangle) & =D(\mathbf{S}(k,n)^{*}\Phi^{(n)})\sum_{m=0}^{\infty}\frac{\langle z,\varphi\rangle^{m}}{m!}\\ & \overset{\star}{=}\langle\mathbf{S}(k,n)^{*}\Phi^{(n)},\varphi^{\otimes k}\rangle\sum_{m=k}^{\infty}\frac{1}{(m-k)!}\langle z,\varphi\rangle^{m-k}\\ & =\langle\mathbf{S}(k,n)^{*}\Phi^{(n)},\varphi^{\otimes k}\rangle\exp\langle z,\varphi\rangle.
\end{align*} Thus, applying the operator $G(\Phi^{(n)})$ to $\exp\langle z,\varphi\rangle$, we obtain \begin{align*} G(\Phi^{(n)})(\exp\langle z,\varphi\rangle) & =\sum_{k=n}^{\infty}\frac{n!}{k!}D(\mathbf{S}(k,n)^{*}\Phi^{(n)})(\exp\langle z,\varphi\rangle)\\ & =\sum_{k=n}^{\infty}\frac{n!}{k!}\langle\mathbf{S}(k,n)^{*}\Phi^{(n)},\varphi^{\otimes k}\rangle\exp\langle z,\varphi\rangle\\ & =\langle\Phi^{(n)},g_{\alpha}(\varphi)^{\otimes n}\rangle\exp\langle z,\varphi\rangle, \end{align*} where the last equality is a consequence of Equation \eqref{eq:Phi-g-alpha-n-Stirling}. \end{proof} \begin{thm} \label{thm:generalized-function-S-transform}For $\Phi^{(n)}\in(\mathcal{D}_{\mathbb{C}}^{\hat{\otimes}n})'$, the generalized function $Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)})$ satisfies \[ S_{\pi_{\sigma}^{\beta}}(Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)}))(\varphi)=\langle\Phi^{(n)},g_{\alpha}(\varphi)^{\otimes n}\rangle,\quad\varphi\in\mathcal{V}_{\alpha}\subset\mathcal{D}_{\mathbb{C}}.
\] \end{thm} \begin{proof} Using Lemma \ref{lem:operator-G-on-exp-z-phi}, the $S_{\pi_{\sigma}^{\beta}}$-transform of $Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)})$ is given by \begin{align*} S_{\pi_{\sigma}^{\beta}}(Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)}))(\varphi) & =\langle\!\langle G(\Phi^{(n)})^{*}\boldsymbol{1},\mathrm{e}_{\pi_{\sigma}^{\beta}}(\varphi,\cdot)\rangle\!\rangle_{\pi_{\sigma}^{\beta}}\\ & =\langle\!\langle\boldsymbol{1},G(\Phi^{(n)})\mathrm{e}_{\pi_{\sigma}^{\beta}}(\varphi,\cdot)\rangle\!\rangle_{\pi_{\sigma}^{\beta}}\\ & ={\displaystyle \frac{1}{l_{\pi_{\sigma}^{\beta}}(\varphi)}\int_{\mathcal{D}'}G(\Phi^{(n)})}(\exp\langle z,\varphi\rangle)\,\mathrm{d}\pi_{\sigma}^{\beta}(z)\\ & ={\displaystyle \frac{\langle\Phi^{(n)},g_{\alpha}(\varphi)^{\otimes n}\rangle}{l_{\pi_{\sigma}^{\beta}}(\varphi)}\int_{\mathcal{D}'}}\exp\langle z,\varphi\rangle\,\mathrm{d}\pi_{\sigma}^{\beta}(z)\\ & =\langle\Phi^{(n)},g_{\alpha}(\varphi)^{\otimes n}\rangle.\qedhere \end{align*} \end{proof} Now using the above result, we provide an alternative proof of Theorem \ref{thm:biorthogonal-property-inf-dim}. \begin{proof}[Proof of Theorem \ref{thm:biorthogonal-property-inf-dim} \emph{(}Alternative\emph{)}] The $S_{\pi_{\sigma}^{\beta}}$-transform of $Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)})$ at $\alpha(\varphi)$ is given by \begin{align*} S_{\pi_{\sigma}^{\beta}}(Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)}))(\alpha(\varphi)) & =\langle\!\langle Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)}),\mathrm{e}_{\pi_{\sigma}^{\beta}}(\alpha(\varphi),\cdot)\rangle\!\rangle_{\pi_{\sigma}^{\beta}}\\ & =\sum_{m=0}^{\infty}\frac{1}{m!}\langle\!\langle Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)}),\langle C_{m}^{\sigma,\beta},\varphi^{\otimes m}\rangle\rangle\!\rangle_{\pi_{\sigma}^{\beta}}. 
\end{align*} By Theorem \ref{thm:generalized-function-S-transform} with $\varphi$ replaced by $\alpha(\varphi)$, and since $g_{\alpha}(\alpha(\varphi))=\varphi$, we obtain \[ S_{\pi_{\sigma}^{\beta}}(Q_{n}^{\sigma,\beta,\alpha}(\Phi^{(n)}))(\alpha(\varphi))=\langle\Phi^{(n)},\varphi^{\otimes n}\rangle. \] The result follows by a comparison of coefficients and the polarization identity. \end{proof} \subsubsection*{Acknowledgments} This work was partially supported by the Center for Research in Mathematics and Applications (CIMA-UMa), related to the Statistics, Stochastic Processes and Applications (SSPA) group, through the grant UIDB/MAT/04674/2020 of FCT-Funda{\c c\~a}o para a Ci{\^e}ncia e a Tecnologia, Portugal, and by the Complex systems group of the Premiere Research Institute of Science and Mathematics (PRISM), MSU-Iligan Institute of Technology. The financial support of the Department of Science and Technology -- Accelerated Science and Technology Human Resource Development Program (DOST-ASTHRDP) of the Philippines under the Research Enrichment (Sandwich) Program is gratefully acknowledged. Special thanks also to Prof. Ludwig Streit for recommending relevant reading resources. \begin{thebibliography}{10} \bibitem{ADKS96} S.~Albeverio, Y.~Daletzky, Y.~G. Kondratiev and L.~Streit, Non-{G}aussian infinite dimensional analysis, {\em J.\ Funct.\ Anal.} {\bf 138} (1996) 311--350. \bibitem{AKR97} S.~Albeverio, Y.~G. Kondratiev and M.~R{\"o}ckner, Analysis and geometry on configuration spaces, {\em J.\ Funct.\ Anal.} {\bf 154} (1998) 444--500. \bibitem{AKR97a} S.~Albeverio, Y.~G. Kondratiev and M.~R{\"o}ckner, Analysis and geometry on configuration spaces: The {G}ibbsian case, {\em J.\ Funct.\ Anal.} {\bf 157} (1998) 242--291. \bibitem{Beghin:2009fi} L.~Beghin and E.~Orsingher, Fractional {P}oisson processes and related planar random motions, {\em Electron.\ J.\ Probab.} {\bf 14}(61) (2009) 1790--1827. \bibitem{Bendong2022} J.~Bendong, S.~Menchavez and J.~L.
da~Silva, Fractional {P}oisson analysis in dimension one (2022) arXiv:2205.00059v1. \bibitem{BK88} Y.~M. Berezansky and Y.~G. Kondratiev, {\em Spectral Methods in Infinite-Dimensional Analysis} (Kluwer Academic Publishers, Dordrecht, 1995). \bibitem{BSU96} Y.~M. Berezansky, Z.~G. Sheftel and G.~F. Us, {\em Functional Analysis} (Birkh{\"a}user, Boston, Basel, Berlin, 1996). \bibitem{Biard2014} R.~Biard and B.~Saussereau, Fractional {P}oisson process: long-range dependence and applications in ruin theory, {\em J.\ Appl.\ Probab.} {\bf 51}(3) (2014) 727--740. \bibitem{C83} D.~L. Cohn, {\em Measure Theory} (Birkh{\"a}user, Boston, Basel, Stuttgart, 2013). \bibitem{Finkelshtein2019b} D.~Finkelshtein, Y.~Kondratiev, E.~Lytvynov and M.~J. Oliveira, An infinite dimensional umbral calculus, {\em J.\ Funct.\ Anal.} {\bf 276}(12) (2019) 3714--3766. \bibitem{Finkelshtein2022} D.~Finkelshtein, Y.~Kondratiev, E.~Lytvynov and M.~J. Oliveira, Stirling operators in spatial combinatorics, {\em J.\ Funct.\ Anal.} {\bf 282} (2022) 1--45. \bibitem{GV68} I.~M. Gel'fand and N.~Y. Vilenkin, {\em Generalized Functions} (Academic Press, Inc., New York and London, 1968). \bibitem{G88} H.-O. Georgii, {\em Gibbs Measures and Phase Transitions} (de Gruyter, Berlin, 1988). \bibitem{Gorenflo2015} R.~Gorenflo and F.~Mainardi, On the fractional {P}oisson process and the discretized stable subordinator, {\em Axioms} {\bf 4}(3) (2015) 321--344. \bibitem{GJRS14} M.~Grothaus, F.~Jahnert, F.~Riemann and J.~L. Silva, {Mittag-Leffler Analysis I: Construction and characterization}, {\em J.\ Funct.\ Anal.} {\bf 268}(7) (2015) 1876--1903. \bibitem{HKPS93} T.~Hida, H.-H. Kuo, J.~Potthoff and L.~Streit, {\em White Noise. An Infinite Dimensional Calculus} (Kluwer Academic Publishers, Dordrecht, 1993). \bibitem{I88} Y.~Ito, Generalized {P}oisson functionals, {\em Probab.\ Theory Related Fields} {\bf 77} (1988) 1--28.
\bibitem{IK88} Y.~Ito and I.~Kubo, Calculus on {G}aussian and {P}oisson white noises, {\em Nagoya Math.\ J.} {\bf 111} (1988) 41--84. \bibitem{KaKo99} N.~A. Kachanovsky and S.~V. Koshkin, Minimality of {A}ppell-like systems and embeddings of test function spaces in a generalization of white noise analysis, {\em Methods Funct.\ Anal.\ Topology} {\bf 5}(3) (1999) 13--25. \bibitem{KMM78} J.~Kerstan, K.~Matthes and J.~Mecke, {\em Infinitely Divisible Point Processes} (Wiley, Berlin, 1978). \bibitem{KdSS98} Y.~G. Kondratiev, J.~L. da~Silva and L.~Streit, Generalized {A}ppell systems, {\em Methods Funct.\ Anal.\ Topology} {\bf 3}(3) (1997) 28--61. \bibitem{KK02} Y.~G. Kondratiev and T.~Kuna, Harmonic analysis on configuration spaces {I}. {G}eneral theory, {\em Infin.\ Dimens.\ Anal.\ Quantum Probab.\ Relat.\ Top.} {\bf 5}(2) (2002) 201--233. \bibitem{KSWY95} Y.~G. Kondratiev, L.~Streit, W.~Westerkamp and J.-A. Yan, Generalized functions in infinite dimensional analysis, {\em Hiroshima Math.\ J.} {\bf 28}(2) (1998) 213--260. \bibitem{Ko80a} Y.~G. Kondratiev, Spaces of entire functions of an infinite number of variables, connected with the rigging of a {F}ock space, {\em Selecta Mathematica Sovietica} {\bf 10}(2) (1991) 165--180. \bibitem{L03} N.~Laskin, Fractional {P}oisson process, {\em Commun. Nonlinear Sci. Numer. Simul.} {\bf 8}(3-4) (2003) 201--213. \bibitem{L09} N.~Laskin, Some applications of the fractional {P}oisson probability distribution, {\em J.\ Math.\ Phys.} {\bf 50}(11) (2009) 113513, 12. \bibitem{MGS04} F.~Mainardi, R.~Gorenflo and E.~Scalas, A fractional generalization of the {P}oisson processes, {\em Vietnam J.\ Math.} {\bf 32}(Special Issue) (2004) 53--64. \bibitem{Mainardi-Gorenflo-Vivoli-05} F.~Mainardi, R.~Gorenflo and A.~Vivoli, Renewal processes of {M}ittag-{L}effler and {W}right type, {\em Fract.\ Calc.\ Appl.\ Anal.} {\bf 8}(1) (2005) 7--38. \bibitem{Meerschaert2011} M.~M. 
Meerschaert, E.~Nane and P.~Vellaisamy, The fractional {P}oisson process and the inverse stable subordinator, {\em Electron.\ J.\ Probab.} {\bf 16}(59) (2011) 1600--1620. \bibitem{O94} N.~Obata, {\em White Noise Calculus and Fock Space}, Lecture Notes in Math., Vol.~1577 (Springer-Verlag, Berlin, Heidelberg and New York, 1994). \bibitem{P67} K.~R. Parthasarathy, {\em Probability Measures on Metric Spaces}, Probability and Mathematical Statistics (Academic Press, Inc., 1967). \bibitem{Politi-Kaizoji-2011} M.~Politi and T.~Kaizoji, Full characterization of the fractional {P}oisson process, {\em Europhys.\ Lett.} {\bf 96}(2) (2011) 20004. \bibitem{Pollard48} H.~Pollard, The completely monotonic character of the {M}ittag-{L}effler function {$E_a(-x)$}, {\em Bull.\ Amer.\ Math.\ Soc.} {\bf 54} (1948) 1115--1116. \bibitem{Repin-Saichev00} O.~N. Repin and A.~I. Saichev, Fractional {P}oisson law, {\em Radiophys. Quantum Electron.} {\bf 43}(9) (2000) 738--741. \bibitem{S71} H.~H. Schaefer, {\em Topological Vector Spaces} (Springer-Verlag, Berlin, Heidelberg and New York, 1971). \bibitem{S94} H.~Shimomura, {P}oisson measures on the configuration space and unitary representations of the group of diffeomorphisms, {\em J.\ Math.\ Kyoto Univ.} {\bf 34} (1994) 599--614. \bibitem{Uchaikin2008} V.~V. Uchaikin, D.~O. Cahoy and R.~T. Sibatov, {Fractional processes: from Poisson to branching one}, {\em Internat.\ J.\ Bifur.\ Chaos Appl.\ Sci.\ Engrg.} {\bf 18}(09) (2008) 2717--2725. \end{thebibliography} \end{document}
2205.03260v3
http://arxiv.org/abs/2205.03260v3
Convex Analysis at Infinity: An Introduction to Astral Space
\documentclass[10pt]{article} \usepackage[textheight=538pt, textwidth=346pt, hmarginratio=1:1]{geometry} \usepackage[utf8]{inputenc} \usepackage{layout} \usepackage{eucal} \usepackage{tocloft} \usepackage{appendix} \newif\ifdraft \draftfalse \newif\iflinenums \linenumsfalse \ifdraft \linenumstrue \fi \usepackage[round]{natbib} \usepackage{mathtools} \usepackage{booktabs} \usepackage{xspace} \usepackage{makecell} \usepackage{longtable} \usepackage{relsize} \usepackage[dvipsnames]{xcolor} \usepackage{bigstrut} \renewcommand{\cellalign}{tl} \usepackage{subcaption} \usepackage{framed} \usepackage{amsmath,amssymb,amsthm,bm,microtype,thmtools} \usepackage{enumitem} \let\oldpart\part \renewcommand{\part}[1]{\clearpage\oldpart{#1}\clearpage} \let\oldsection\section \renewcommand{\section}{\clearpage\oldsection} \cftpagenumbersoff{part} \renewcommand{\cftpartfont}{\scshape} \renewcommand\cftpartpresnum{Part~} \renewcommand{\cftpartpagefont}{\bfseries} \usepackage[pdfencoding=unicode]{hyperref} \usepackage{bookmark} \usepackage{lineno} \iflinenums \linenumbers \fi \usepackage{cleveref} \Crefname{equation}{Eq.}{Eqs.} \Crefname{proposition}{Proposition}{Propositions} \crefname{proposition}{proposition}{propositions} \crefname{item}{part}{parts} \Crefname{item}{Part}{Parts} \Crefname{lemma}{Lemma}{Lemmas} \crefname{lemma}{lemma}{lemmas} \Crefname{figure}{Figure}{Figures} \definecolor{cerulean}{rgb}{0.10, 0.58, 0.75} \hyphenation{Ada-Boost} \hyphenation{half-space} \hyphenation{half-spaces} \hyphenation{sub-differential} \hyphenation{sub-gradient} \renewcommand{\rmdefault}{ptm} \newcommand{\mathopfont}[1]{#1} \DeclareFontFamily{U}{FdSymbolA}{} \DeclareFontShape{U}{FdSymbolA}{m}{n}{ <-> FdSymbolA-Book }{} \DeclareSymbolFont{FdSymbolA}{U}{FdSymbolA}{m}{n} \DeclareMathSymbol{\altdiamond}{2}{FdSymbolA}{"82} \DeclareMathSymbol{\altlozenge}{2}{FdSymbolA}{"8E} \DeclareMathSymbol{\altsquare}{2}{FdSymbolA}{"74} \DeclareFontFamily{U}{stix}{} \DeclareFontShape{U}{stix}{m}{n}{ <-> stix-mathscr }{}
\DeclareSymbolFont{stix}{U}{stix}{m}{n} \DeclareMathSymbol{\altcircup}{2}{stix}{"EA} \DeclareSymbolFont{bbold}{U}{bbold}{m}{n} \DeclareMathSymbol{\altplus}{2}{bbold}{"2B} \newcommand{\me}{e} \newcommand{\zero}{{\bf 0}} \newcommand{\aaa}{{\bf a}} \newcommand{\bb}{{\bf b}} \newcommand{\cc}{{\bf c}} \newcommand{\dd}{{\bf d}} \newcommand{\ee}{{\bf e}} \newcommand{\xx}{{\bf x}} \newcommand{\xing}{\hat{\xx}} \newcommand{\ying}{\hat{y}} \newcommand{\uhat}{\hat{\uu}} \newcommand{\vhat}{\hat{\vv}} \newcommand{\what}{\hat{\ww}} \newcommand{\bhati}{{\hat{b}}} \newcommand{\bhat}{{\hat{\bf b}}} \newcommand{\chat}{{\hat{\bf c}}} \newcommand{\hatc}{{\hat{c}}} \newcommand{\qhat}{{\hat{\bf q}}} \newcommand{\xhat}{{\hat{\bf x}}} \newcommand{\yhat}{{\hat{\bf y}}} \newcommand{\zhat}{{\hat{\bf z}}} \newcommand{\xstar}{{{\bf x}^*}} \newcommand{\zstar}{{{\bf z}^*}} \newcommand{\fstar}{{f^*}} \newcommand{\fistar}{{f_i^*}} \newcommand{\Fstar}{{F^*}} \newcommand{\Gstar}{{G^*}} \newcommand{\fpstar}{{f'^*}} \newcommand{\gstar}{{g^*}} \newcommand{\gistar}{{g_i^*}} \newcommand{\gkstar}{{g_k^*}} \newcommand{\gimstar}{{g_{i-1}^*}} \newcommand{\gpstar}{{g'^*}} \newcommand{\hstar}{{h^*}} \newcommand{\pstar}{{p^*}} \newcommand{\rstar}{{r^*}} \newcommand{\sstar}{{s^*}} \newcommand{\gtil}{{\tilde{g}}} \newcommand{\gtilbar}{\bar{\tilde{g}}} \newcommand{\htil}{{\tilde{h}}} \newcommand{\ghat}{{\hat{g}}} \newcommand{\sbar}{\overline{\sS}} \newcommand{\ubar}{\overline{\uu}} \newcommand{\vbar}{\overline{\vv}} \newcommand{\wbar}{\overline{\ww}} \newcommand{\xbar}{\overline{\xx}}\newcommand{\bbar}{\overline{\bb}} \newcommand{\dbar}{\overline{\dd}} \newcommand{\ebar}{\overline{\ee}} \newcommand{\bara}{\bar{a}} \newcommand{\barb}{\bar{b}} \newcommand{\barc}{\bar{c}} \newcommand{\bare}{\bar{e}} \newcommand{\barw}{\bar{w}} \newcommand{\barx}{\bar{x}} \newcommand{\bary}{\bar{y}} \newcommand{\barz}{\bar{z}} \newcommand{\baralpha}{\bar{\alpha}} \newcommand{\barbeta}{\bar{\beta}} \newcommand{\ybar}{\overline{\yy}}
\newcommand{\ybart}{\ybar_{\!t}} \newcommand{\zbar}{\overline{\zz}} \newcommand{\Cbar}{\overline{C}} \newcommand{\Ebar}{\overline{E}} \newcommand{\Jbar}{\overline{J}} \newcommand{\Kbar}{\overlineKernIt{K}} \newcommand{\Khat}{\hat{K}} \newcommand{\Qbar}{\overlineKernIt{Q}} \newcommand{\Lbar}{\overlineKernIt{L}} \newcommand{\Mbar}{\overlineKernIt{M}} \newcommand{\KLbar}{\overline{K\cap L}} \newcommand{\Rbar}{\overlineKernR{R}} \newcommand{\Rbari}{\overlineKernR{R}_i} \newcommand{\Ralpha}{R_{\alpha}} \newcommand{\Rbeta}{R_{\beta}} \newcommand{\negKern}{\mkern-1.5mu} \newcommand{\overlineKernIt}[1]{{}\mkern2mu\overline{\mkern-2mu#1}} \newcommand{\overlineKernR}[1]{{}\mkern3mu\overline{\mkern-3mu#1\mkern2mu}\mkern-2mu} \newcommand{\overlineKernS}[1]{{}\mkern3mu\overline{\mkern-3mu#1\mkern1.5mu}\mkern-1.5mu} \newcommand{\overlineKernf}[1]{{}\mkern4.5mu\overline{\mkern-4.5mu#1\mkern1.5mu}\mkern-1.5mu} \newcommand{\Abar}{\overlineKernf{A}} \newcommand{\Abarinv}{\bigParens{\mkern-1.5mu\Abar\mkern2mu}^{\mkern-1.5mu-1}}\newcommand{\Alininv}{A^{-1}} \newcommand{\Fbar}{\overline{F}} \newcommand{\Sbar}{\overlineKernS{S}} \newcommand{\Sbarp}{\overlineKernS{S'}} \newcommand{\Pbar}{\overlineKernIt{P}} \newcommand{\Ubar}{\overlineKernIt{U}} \newcommand{\Nbar}{\overlineKernIt{N}} \newcommand{\PNbar}{\overline{P\cap N}} \newcommand{\SUbar}{\overline{S\cap U}} \newcommand{\Zbar}{\overlineKernIt{Z}} \newcommand{\Xbar}{\overlineKernIt{X}} \newcommand{\Ybar}{\overlineKernIt{Y}} \newcommand{\Xbari}{\overlineKernIt{X}_{i}} \newcommand{\Xbarit}{\overline{X}_{it}} \newcommand{\Xbarp}{\overline{X'}} \newcommand{\Xbarpt}{\overlineKernIt{X'_t}} \newcommand{\Xbarpi}{\overlineKernIt{X'_i}} \newcommand{\Xbarpit}{\overlineKernIt{X'_{it}}} \newcommand{\Ybarpt}{\overline{Y'}_t} \newcommand{\zp}{{\zz^0}} \newcommand{\zpbar}{{\overline{\zz}^0}} \newcommand{\sS}{{\bf s}} \newcommand{\pp}{\mathbf{p}} \newcommand{\qq}{{\bf q}} \newcommand{\rr}{{\bf r}} \newcommand{\uu}{{\bf u}} \newcommand{\vv}{{\bf v}} 
\newcommand{\ww}{{\bf w}} \newcommand{\yy}{\mathbf{y}} \newcommand{\xtil}{\tilde{\xx}} \newcommand{\ytil}{\tilde{\yy}} \newcommand{\zz}{{\bf z}} \newcommand{\A}{\mathbf{A}} \newcommand{\Ainv}{{\A^{-1}}} \newcommand{\B}{{\bf B}} \newcommand{\J}{\mathbf{J}} \newcommand{\EE}{\mathbf{E}} \newcommand{\PP}{\mathbf{P}} \newcommand{\QQ}{\mathbf{Q}} \newcommand{\QQy}{\QQ_y} \newcommand{\QQz}{\QQ_z} \newcommand{\RR}{\mathbf{R}} \newcommand{\Rinv}{{\RR^{-1}}} \newcommand{\Ss}{{\bf S}} \newcommand{\VV}{\mathbf{V}} \newcommand{\WW}{{\bf W}} \newcommand{\ZZ}{{\bf Z}} \newcommand{\Adag}{\A^\dagger} \newcommand{\VVdag}{\VV^\dagger} \newcommand{\Bpseudoinv}{\B^\dagger} \newcommand{\xpar}{\xx^L}\newcommand{\xparperp}{\xx^{\Lperp}}\newcommand{\PPf}{\PP\mkern-1.5mu f} \newcommand{\PPi}{\PPsub{i}} \newcommand{\PPsub}[1]{\PP_{\mkern-2.5mu #1}} \newcommand{\PPx}{\PP}\newcommand{\PPy}{\EE} \newcommand{\zerov}[1]{\zero_{#1}} \newcommand{\zerovec}{\zerov{0}} \newcommand{\zeromat}[2]{\zero_{{#1}\times {#2}}} \newcommand{\Iden}{\mathbf{I}} \newcommand{\Idn}[1]{\Iden_{#1}} \newcommand{\Idnn}{\Idn{n}} \newcommand{\zalph}{\zz_{\alpha}} \newcommand{\ellbar}{\overlineKernS{\ell}} \newcommand{\abar}{\overlineKernIt{a}} \newcommand{\abarstar}{{\overlineKernIt{a}}^{*}\negKern} \newcommand{\hardcore}[1]{{H_{#1}}} \newcommand{\indset}{I} \newcommand{\uset}[1]{\uu[#1]} \newcommand{\topo}{{\cal T}} \newcommand{\expex}{{\overline{{\exp}\ministrut}}} \newcommand{\invex}{\overline{\mathrm{inv}}} \newcommand{\logex}{{\overline{\ln}}} \newcommand{\sqp}{\mathrm{sq}} \newcommand{\Vtilde}{\tilde{\VV}} \newcommand{\qtilde}{\tilde{\qq}} \newcommand{\Rtilde}{\tilde{\RR}} \newcommand{\qperp}{{\qq}^{\bot}} \newcommand{\slinmap}[1]{L_{#1}} \newcommand{\alinmap}[1]{\Lbar_{#1}} \newcommand{\slinmapinv}[1]{L^{-1}_{#1}} \newcommand{\alinmapinv}[1]{\Lbar^{\;-1}_{#1}} \newcommand{\slinmapA}{A}\newcommand{\alinmapA}{\Abar}\newcommand{\slinmapAinv}{\Alininv}\newcommand{\alinmapAinv}{\Abarinv} \newcommand{\Hcompi}{H^c_i} 
\newcommand{\clHcompi}{\overline{H^c_i}} \newcommand{\Hcomp}{H^c} \newcommand{\clHcomp}{\overline{H^c}} \newcommand{\Ncomp}{{N^c}} \newcommand{\Uv}{{U_{\vv}}} \newcommand{\Uzero}{{U_{\zero}}} \newcommand{\Unormv}{{U_{\vv/\norm{\vv}}}} \newcommand{\Uw}{{U_{\ww}}} \newcommand{\US}{{U_{S}}} \newcommand{\UV}{{U_{V}}} \newcommand{\UVj}{{U_{V_j}}} \newcommand{\Upv}{{U'_{\vv}}} \newcommand{\btildet}{{\tilde{\bf b}_{t}}} \newcommand{\btilde}{{\tilde{b}}} \newcommand{\rats}{{\mathbb{Q}}} \newcommand{\nats}{{\mathbb{N}}} \newcommand{\R}{{\mathbb{R}}} \newcommand{\Rn}{\R^n} \newcommand{\RN}{\R^N} \newcommand{\Rm}{\R^m} \newcommand{\Rmn}{{\R^{m\times n}}} \newcommand{\Rnk}{{\R^{n\times k}}} \newcommand{\Rnp}{{\R^{n+1}}} \newcommand{\Rnpnp}{{\R^{(n+1)\times (n+1)}}} \newcommand{\Rr}{{\R^r}} \newcommand{\Rk}{{\R^k}} \newcommand{\Rnmo}{{\R^{n-1}}} \newcommand{\Qn}{{\rats^n}} \newcommand{\Rpos}{\mathbb{R}_{\geq 0}} \newcommand{\Rposgen}[1]{\mathbb{R}^{#1}_{\geq 0}} \newcommand{\Rposn}{\Rposgen{n}} \newcommand{\Rstrictpos}{\mathbb{R}_{> 0}} \newcommand{\Rneg}{\mathbb{R}_{\leq 0}} \newcommand{\Rstrictneg}{\mathbb{R}_{< 0}} \newcommand{\Rextpos}{{\Rpos\cup\{\oms\}}} \newcommand{\Rextstrictpos}{{\Rstrictpos\cup\{\oms\}}} \newcommand{\Trans}[1]{{#1}^{\top}} \newcommand{\trans}[1]{#1^\top} \newcommand{\transKern}[1]{#1^\top\!} \newcommand{\sign}{{\rm sign}} \newcommand{\pseudinv}[1]{{#1}^{+}} \newcommand{\amatu}{{\bf A}} \newcommand{\transamatu}{{\trans{\amatu}}} \DeclareMathOperator{\resc}{\mathopfont{rec}} \newcommand{\represc}[1]{{(\resc{#1})^{\triangle}}} \newcommand{\rescperp}[1]{{(\resc{#1})^{\bot}}} \newcommand{\rescperperp}[1]{{(\resc{#1})^{\bot\bot}}} \newcommand{\rescbar}[1]{{\overline{(\resc{#1})}}} \newcommand{\rescpol}[1]{{(\resc{#1})^{\circ}}} \newcommand{\rescdubpol}[1]{{(\resc{#1})^{\circ\circ}}} \DeclareMathOperator{\conssp}{\mathopfont{cons}} \newcommand{\consspperp}[1]{{(\conssp{#1})^{\bot}}} \newcommand{\consspperperp}[1]{{(\conssp{#1})^{\bot\bot}}} 
\DeclareMathOperator{\ri}{\mathopfont{ri}} \newcommand{\ric}[1]{{\ri(\resc{#1})}} \DeclareMathOperator{\arescone}{\mathopfont{rec}} \newcommand{\aresconeF}{{\arescone F}} \newcommand{\aresconef}{{\arescone \fext}} \newcommand{\aresconefp}{{\arescone \fpext}} \newcommand{\aresconeg}{{\arescone \gext}} \newcommand{\aresconeh}{{\arescone \hext}} \newcommand{\aresconegsub}[1]{{\arescone \gext_{#1}}} \newcommand{\ddarc}[1]{{\tilde{\cal R}_{#1}}} \newcommand{\perpresf}{{(\aresconef)^{\bot}}} \newcommand{\perpresfp}{{(\aresconefp)^{\bot}}} \newcommand{\perpresg}{{(\aresconeg)^{\bot}}} \newcommand{\perpresgsub}[1]{{(\aresconegsub{#1})^{\bot}}} \newcommand{\perperpresf}{{(\aresconef)^{\bot\bot}}} \newcommand{\perperperpresf}{{(\aresconef)^{\bot\bot\bot}}} \newcommand{\perperpresfp}{{(\aresconefp)^{\bot\bot}}} \newcommand{\perperpresg}{{(\aresconeg)^{\bot\bot}}} \newcommand{\fullshad}[1]{{{#1}^{\diamond}}} \newcommand{\fullshadnoparen}[1]{{#1^{\diamond}}} \newcommand{\fullshadgk}{{g^{\diamond}_k}} \DeclareMathOperator{\contset}{\mathopfont{cont}} \DeclareMathOperator{\unimin}{\mathopfont{univ}} \newcommand{\contsetf}{\contset{\fext}} \newcommand{\aintdom}[1]{{{\cal I}_{#1}}} \DeclareMathOperator{\intr}{\mathopfont{int}} \newcommand{\intdom}[1]{{\intr(\dom{#1})}} \newcommand{\ball}{B} \DeclareMathOperator{\range}{\mathopfont{range}} \newcommand{\rangedif}[1]{\range \partial {#1}} \newcommand{\calJ}{{\cal J}} \newcommand{\calK}{{\cal K}} \newcommand{\calKpol}{{{\cal K}^{\circ}}} \newcommand{\calU}{{\cal U}} \newcommand{\calUpol}{{{\cal U}^{\circ}}} \newcommand{\calA}{{\cal A}} \newcommand{\calApol}{{{\cal A}^{\circ}}} \newcommand{\calB}{\mathcal{B}} \newcommand{\calBpol}{{{\cal B}^{\circ}}} \newcommand{\calC}{{\cal C}} \newcommand{\calP}{{\cal P}} \newcommand{\calS}{{\cal S}} \newcommand{\polar}[1]{{#1}^{\circ}} \newcommand{\dubpolar}[1]{{#1}^{\circ\circ}} \newcommand{\apol}[1]{#1^{\bar{\circ}}} \newcommand{\apolconedomfstar}{\apol{(\cone(\dom{\fstar}))}} 
\newcommand{\apolconedomFstar}{\apol{(\cone(\dom{\Fstar}))}} \newcommand{\apolconedomfpstar}{\apol{(\cone(\dom{\fpstar}))}} \newcommand{\apolslopesf}{\apol{(\slopes{f})}} \newcommand{\polapol}[1]{{#1}^{\circ\bar{\circ}}} \newcommand{\polpol}[1]{{#1}^{\circ\circ}} \newcommand{\apolpol}[1]{{#1}^{\bar{\circ}\circ}} \newcommand{\polapolpol}[1]{{#1}^{\circ\bar{\circ}\circ}} \newcommand{\polpolapol}[1]{{#1}^{\circ\circ\bar{\circ}}} \newcommand{\polmap}{\Pi} \newcommand{\polmapinv}{\Pi^{-1}} \newcommand{\Jpol}{\polar{J}} \newcommand{\Kpol}{\polar{K}} \newcommand{\Japol}{\apol{J}} \newcommand{\Kapol}{\apol{K}} \newcommand{\Kpolbar}{\clbar{\Kpol}} \newcommand{\Kppolbar}{\clbar{\polar{K'}}} \newcommand{\Jpolapol}{\polapol{J}} \newcommand{\Kpolapol}{\polapol{K}} \newcommand{\Kapolpol}{\apolpol{K}} \newcommand{\Lpol}{\polar{L}} \newcommand{\Lpolbar}{\clbar{\Lpol}} \newcommand{\Rpol}{\polar{R}} \newcommand{\Spol}{\polar{S}} \newcommand{\Upol}{\polar{U}} \newcommand{\Khatpol}{\polar{\Khat}} \newcommand{\Khatdubpol}{\dubpolar{\Khat}} \newcommand{\aperp}[1]{#1^{\bar{\bot}}} \newcommand{\perpaperp}[1]{#1^{\bot\bar{\bot}}} \newcommand{\perperp}[1]{#1^{\bot\bot}} \newcommand{\aperperp}[1]{#1^{\bar{\bot}\bot}} \newcommand{\aperpaperp}[1]{#1^{\bar{\bot}\bar{\bot}}} \newcommand{\Lperpaperp}{\perpaperp{L}} \newcommand{\Laperp}{\aperp{L}} \newcommand{\Lapol}{\apol{L}} \newcommand{\Uaperp}{\aperp{U}} \newcommand{\Uperpaperp}{\perpaperp{U}} \newcommand{\Saperp}{\aperp{S}} \newcommand{\Uaperperp}{\aperperp{U}} \newcommand{\Uaperpaperp}{\aperpaperp{U}} \newcommand{\calX}{{\cal X}} \DeclareMathOperator{\cl}{\mathopfont{cl}} \DeclareMathOperator{\lsc}{\mathopfont{lsc}} \newcommand{\clbar}{\overline} \newcommand{\clgen}[1]{{\cl_{#1}}} \newcommand{\clrn}{\clgen{{\mathrm r}}} \newcommand{\cles}{\clgen{{\mathrm a}}} \newcommand{\clmx}{\clgen{{\mathrm m}}} \newcommand{\mxspace}{M} \DeclareMathOperator{\cone}{\mathopfont{cone}} \DeclareMathOperator{\scc}{\mathopfont{cone}_{\circ}} 
\newcommand{\clcone}[1]{{\overline{\cone{#1}}}} \newcommand{\scone}[1]{\Rstrictpos\, {#1}} \newcommand{\repcl}[1]{{{#1}^{\triangle}}} \newcommand{\rpair}[2]{{\langle{#1}, {#2}\rangle}} \newcommand{\rpairf}[2]{#1, #2} \newcommand{\mtuple}[1]{{\langle{#1}\rangle}} \DeclareMathOperator{\epiOp}{\mathopfont{epi}} \DeclareMathOperator{\domOp}{\mathopfont{dom}} \DeclareMathOperator{\relintOp}{\mathopfont{ri}} \let\epi=\epiOp \let\dom=\domOp \let\relint=\relintOp \DeclareMathOperator{\hypo}{\mathopfont{hypo}} \DeclareMathOperator{\ray}{\mathopfont{ray}} \DeclareMathOperator{\aray}{\mathopfont{ray}_{\star}} \DeclareMathOperator{\acone}{\mathopfont{cone}_{\star}} \DeclareMathOperator{\oconich}{\widetilde{\mathopfont{cone}}_{\star}} \DeclareMathOperator{\acolspace}{\mathopfont{col}_{\star}} \DeclareMathOperator{\aspan}{\mathopfont{span}_{\star}} \DeclareMathOperator{\ahlineop}{\mathopfont{hline}_{\star}} \DeclareMathOperator{\hlineop}{\mathopfont{hline}} \newcommand{\hfline}[2]{\hlineop({#1},{#2})} \newcommand{\ahfline}[2]{\ahlineop({#1},{#2})} \newcommand{\rec}[1]{{\mbox{\rm rec}({#1})}} \newcommand{\intrec}[1]{{\mbox{\rm irec}({#1})}} \newcommand{\epibar}[1]{{\clmx{(\epi{#1})}}} \newcommand{\epibarbar}[1]{{\overline{\epi{#1}}}} \newcommand{\clepi}{\epibarbar} \newcommand{\cldom}[1]{\overline{\dom{#1}}} \newcommand{\cldomfext}{\cldom{\efspace}} \newcommand{\clepifext}{\clepi{\efspace}} \newcommand{\clepif}{\overline{\epi f}} \DeclareMathOperator{\csmOp}{\mathopfont{csm}} \newcommand{\cosmspn}{\csmOp \Rn} \DeclareMathOperator{\rwdirOp}{\mathopfont{dir}} \newcommand{\rwdir}{\rwdirOp} \newcommand{\quof}{p} \newcommand{\quofinv}{p^{-1}} \newcommand{\xrw}{\hat{\xx}} \newcommand{\fext}{\ef}\newcommand{\fpext}{\efp}\newcommand{\fpextsub}[1]{\efpsub{#1}}\newcommand{\fextsub}[1]{\efsub{#1}} \newcommand{\gext}{\eg} \newcommand{\eginv}{\eg^{\mkern2mu-1}}\newcommand{\ginv}{g^{-1}} \newcommand{\gpext}{\overline{g'}} \newcommand{\hext}{\eh}\newcommand{\hpext}{\overline{h'}} 
\newcommand{\cext}{\overline{c}} \newcommand{\dext}{\overline{d}} \newcommand{\rext}{\overline{r}} \newcommand{\pext}{\overline{p}} \newcommand{\sext}{\overline{s}} \newcommand{\lscfext}{\overline{(\lsc f)}} \newcommand{\lschtilext}{\overline{(\lsc \htil)}} \newcommand{\htilext}{\overline{\htil}} \newcommand{\Af}{({\A f})} \newcommand{\Afext}{(\overline{\A f})} \newcommand{\Afextnp}{\overline{\A f}} \newcommand{\fAext}{(\overline{\fA})} \newcommand{\gAext}{(\overline{g \A})} \newcommand{\gfext}{(\overline{g\circ f})} \newcommand{\lamfext}{(\overline{\lambda f})} \newcommand{\fplusgext}{(\overline{f+g})} \newcommand{\Gofext}{(\overline{G\circ f})} \newcommand{\Gofextnp}{\overline{G\circ f}} \newcommand{\fgAext}{(\overline{f + g \A})} \newcommand{\vfext}{(\overline{v f})} \newcommand{\sfprodext}[2]{(\overline{\sfprod{#1}{#2}})} \newcommand{\gex}{\tilde{g}} \newcommand{\ginfval}{g_{\infty}} \newcommand{\fomin}{g} \newcommand{\fominext}{\overline{\fomin}} \newcommand{\normfcn}{\ell} \newcommand{\normfcnext}{\overline{\normfcn}} \newcommand{\dualstar}{\bar{*}} \newcommand{\phantomdualstar}{\vphantom{\dualstar}} \newcommand{\fdubs}{f^{**}} \newcommand{\fdub}{f^{*\dualstar}} \newcommand{\fidub}{f_i^{*\dualstar}} \newcommand{\gdub}{g^{*\dualstar}} \newcommand{\hdub}{h^{*\dualstar}} \newcommand{\psistar}{\psi^{*}} \newcommand{\psistarb}{\psi^{\dualstar}} \newcommand{\Fdub}{F^{*\dualstar}} \newcommand{\Gdub}{G^{*\dualstar}} \newcommand{\affuv}{\ell_{\uu,v}} \newcommand{\fextstar}{{\efspace}^{*}\mkern-1.5mu} \newcommand{\fextdub}{{\efspace}^{*\dualstar}} \newcommand{\dlin}[1]{\xi_{#1}} \newcommand{\dlinxbuz}{\dlin{\xbar,\uu_0,\beta}} \newcommand{\dlinxbu}{\dlin{\xbar,\uu,\beta}} \newcommand{\transexp}[1]{({#1})} \newcommand{\transu}[1]{{#1^{\transexp{\uu}}}} \newcommand{\psitransu}{\transu{\psi}} \newcommand{\Fstartransu}{F^{*\transexp{\uu}}} \newcommand{\Fstartransustar}{F^{*\transexp{\uu}\dualstar}} \newcommand{\Fstarustarxbar}{\Fstartransustar(\xbar)}
\newcommand{\fstartransustar}{f^{*\transexp{\uu}\dualstar}} \newcommand{\fstarustarxbar}{\fstartransustar(\xbar)} \newcommand{\fminussym}[1]{#1} \newcommand{\fminusgen}[2]{{#1}_{\fminussym{#2}}} \newcommand{\fminusu}{\fminusgen{f}{\uu}} \newcommand{\fminusw}{\fminusgen{f}{\ww}} \newcommand{\fminusU}{\fminusgen{f}{u}} \newcommand{\fminusuext}{\fminusgen{\fext}{\uu}} \newcommand{\fminuswext}{\fminusgen{\fext}{\ww}} \newcommand{\fminusUext}{\fminusgen{\fext}{u}} \newcommand{\fminusustar}{\fminusu^{*}} \newcommand{\fminusuextstar}{(\fminusuext)^{*}} \newcommand{\fminusudub}{\fminusu^{*\dualstar}} \newcommand{\gminusminusu}{\fminusgen{g}{-\uu}} \newcommand{\gminusminusuext}{\fminusgen{\gext}{-\uu}} \newcommand{\gminusu}{\fminusgen{g}{\uu}} \newcommand{\gminusU}{\fminusgen{g}{u}} \newcommand{\gminusuext}{\fminusgen{\gext}{\uu}} \newcommand{\gminusUext}{\fminusgen{\gext}{u}} \newcommand{\hminusu}{\fminusgen{h}{\uu}} \newcommand{\hminusw}{\fminusgen{h}{\ww}} \newcommand{\hminusU}{\fminusgen{h}{u}} \newcommand{\hminuslamu}{\fminusgen{h}{\lambda\uu}} \newcommand{\hminusuext}{\fminusgen{\hext}{\uu}} \newcommand{\hminuswext}{\fminusgen{\hext}{\ww}} \newcommand{\hminusUext}{\fminusgen{\hext}{u}} \newcommand{\sminuswz}{\fminusgen{s}{\rpair{\ww}{0}}} \newcommand{\sminuswzext}{\fminusgen{\sext}{\rpair{\ww}{0}}} \newcommand{\fminussub}[2]{f_{{#1},\fminussym{#2}}} \newcommand{\fminussubext}[2]{\efsub{{#1},\fminussym{#2}}} \newcommand{\fminususubd}[1]{\fminussub{#1}{\uu_{#1}}} \newcommand{\fminususubdext}[1]{\fminussubext{#1}{\uu_{#1}}} \newcommand{\fminususub}[1]{\fminussub{#1}{\uu}} \newcommand{\fminususubext}[1]{\fminussubext{#1}{\uu}} \newcommand{\fminuswsub}[1]{\fminussub{#1}{\ww}} \newcommand{\fminuswsubext}[1]{\fminussubext{#1}{\ww}} \newcommand{\fminusgenstar}[2]{{#1}^*_{\fminussym{#2}}} \newcommand{\fminussubstar}[2]{f^*_{{#1},\fminussym{#2}}} \newcommand{\fiminusustar}{\fminussubstar{i}{\uu}} \newcommand{\hminusustar}{\fminusgenstar{h}{\uu}} \newcommand{\fVgen}[2]{{#1}_{#2}}
\newcommand{\fV}{\fVgen{f}{\VV}} \newcommand{\fVp}{\fVgen{f}{\VV'}} \newcommand{\asubsym}{\partial} \newcommand{\asubdifplain}[1]{{\asubsym{#1}}} \newcommand{\asubdif}[2]{{\asubsym{#1}({#2})}} \newcommand{\asubdiflagranglmext}[1]{\asubsym\lagranglmext{#1}} \newcommand{\asubdiff}[1]{\asubdif{f}{#1}} \newcommand{\asubdiffext}[1]{\asubdif{\fext}{#1}} \newcommand{\asubdiffAext}[1]{\asubdif{\fAext}{#1}} \newcommand{\asubdifAfext}[1]{\asubdif{\Afext}{#1}} \newcommand{\asubdifgext}[1]{\asubdif{\gext}{#1}} \newcommand{\asubdifhext}[1]{\asubdif{\hext}{#1}} \newcommand{\asubdifrext}[1]{\asubdif{\rext}{#1}} \newcommand{\asubdifsext}[1]{\asubdif{\sext}{#1}} \newcommand{\asubdiffplusgext}[1]{\asubdif{\fplusgext}{#1}} \newcommand{\asubdiffextsub}[2]{\asubdif{\fextsub{#1}}{#2}} \newcommand{\asubdiffminusuext}[1]{\asubdif{\fminusuext}{#1}} \newcommand{\asubdiffminususubext}[2]{\asubdif{\fminususubext{#1}}{#2}} \newcommand{\asubdifgminusUext}[1]{\asubdif{\gminusUext}{#1}} \newcommand{\asubdifhminusUext}[1]{\asubdif{\hminusUext}{#1}} \newcommand{\asubdifhminusuext}[1]{\asubdif{\hminusuext}{#1}} \newcommand{\asubdifinddomfext}[1]{\asubdif{\inddomfext}{#1}} \newcommand{\asubdifindaS}[1]{\asubdif{\indfa{S}}{#1}} \newcommand{\asubdifindepifext}[1]{\asubdif{\indepifext}{#1}} \newcommand{\asubdifindsext}[1]{\asubdif{\indsext}{#1}} \newcommand{\asubdifindMext}[1]{\asubdif{\indMext}{#1}} \newcommand{\asubdifGofext}[1]{\asubdif{\Gofext}{#1}} \newcommand{\asubdifgAext}[1]{\asubdif{\gAext}{#1}} \newcommand{\asubdifF}[1]{\asubdif{F}{#1}} \newcommand{\asubdifh}[1]{\asubdif{h}{#1}} \newcommand{\asubdifpsi}[1]{\asubdif{\psi}{#1}} \newcommand{\asubdifpsistarb}[1]{\asubdif{\psistarb}{#1}} \newcommand{\asubdifgminusminusuext}[1]{\asubdif{\gminusminusuext}{#1}} \newcommand{\asubdifelliext}[1]{\asubdif{\ellbar_i}{#1}} \newcommand{\asubdifsfprodext}[3]{\asubdif{\sfprodext{#1}{#2}}{#3}} \newcommand{\asubdiflamgiext}[1]{\asubdifsfprodext{\lambda_i}{g_i}{#1}} 
\newcommand{\asubdiflamgext}[1]{\asubdifsfprodext{\lambda}{g}{#1}} \newcommand{\asubdifmuhext}[1]{\asubdif{\muhext}{#1}} \newcommand{\asubdifmuhjext}[1]{\asubdif{\muhjext}{#1}} \newcommand{\asubdifnegogext}[1]{\asubdif{\negogext}{#1}} \newcommand{\asubdifnegogiext}[1]{\asubdif{\negogiext}{#1}} \newcommand{\asubdifnegext}[1]{\asubdif{\negfext}{#1}} \newcommand{\asubdifNegogext}[1]{\asubdif{\Negogext}{#1}} \newcommand{\asubdifNegogiext}[1]{\asubdif{\Negogiext}{#1}} \newcommand{\asubdiflamhext}[1]{\asubdif{\lamhext}{#1}} \newcommand{\asubdifindzohext}[1]{\asubdif{\indzohext}{#1}} \newcommand{\asubdifindzohjext}[1]{\asubdif{\indzohjext}{#1}} \newcommand{\asubdifindzext}[1]{\asubdif{\indzext}{#1}} \newcommand{\asubdifpext}[1]{\asubdif{\pext}{#1}} \newcommand{\adsubsym}{\bar{\partial}} \newcommand{\adsubdifplain}[1]{{\adsubsym{#1}}} \newcommand{\adsubdif}[2]{{\adsubsym{#1}({#2})}} \newcommand{\adsubdiffstar}[1]{\adsubdif{\fstar}{#1}} \newcommand{\adsubdifgstar}[1]{\adsubdif{\gstar}{#1}} \newcommand{\adsubdifFstar}[1]{\adsubdif{\Fstar}{#1}} \newcommand{\adsubdifhstar}[1]{\adsubdif{\hstar}{#1}} \newcommand{\adsubdifpsi}[1]{\adsubdif{\psi}{#1}} \newcommand{\adsubdifpsidub}[1]{\adsubdif{\psidub}{#1}} \newcommand{\adsubdiffminusustar}[1]{\adsubdif{\fminusustar}{#1}} \newcommand{\psidubs}{\psi^{**}} \newcommand{\psidub}{\psi^{\dualstar*}} \newcommand{\bpartial}{\check{\partial}} \newcommand{\basubdifplain}[1]{{\bpartial{#1}}} \newcommand{\basubdif}[2]{{\bpartial{#1}({#2})}} \newcommand{\basubdiflagranglmext}[1]{\bpartial\lagranglmext{#1}} \newcommand{\basubdiflagrangext}[2]{\bpartial\lagrangext{#1}{#2}} \newcommand{\basubdifF}[1]{\basubdif{F}{#1}} \newcommand{\basubdiffext}[1]{\basubdif{\fext}{#1}} \newcommand{\basubdiffAext}[1]{\basubdif{\fAext}{#1}} \newcommand{\basubdifAfext}[1]{\basubdif{\Afext}{#1}} \newcommand{\basubdifgext}[1]{\basubdif{\gext}{#1}} \newcommand{\basubdifhext}[1]{\basubdif{\hext}{#1}} \newcommand{\basubdifrext}[1]{\basubdif{\rext}{#1}} 
\newcommand{\basubdifsext}[1]{\basubdif{\sext}{#1}} \newcommand{\basubdiffextsub}[2]{\basubdif{\fextsub{#1}}{#2}} \newcommand{\basubdifGofext}[1]{\basubdif{\Gofext}{#1}} \newcommand{\basubdifinddomfext}[1]{\basubdif{\inddomfext}{#1}} \newcommand{\basubdifindepifext}[1]{\basubdif{\indepifext}{#1}} \newcommand{\basubdifelliext}[1]{\basubdif{\ellbar_i}{#1}} \newcommand{\basubdifsfprodext}[3]{\basubdif{\sfprodext{#1}{#2}}{#3}} \newcommand{\basubdiflamgiext}[1]{\basubdifsfprodext{\lambda_i}{g_i}{#1}} \newcommand{\basubdiflamgext}[1]{\basubdifsfprodext{\lambda}{g}{#1}} \newcommand{\basubdifmuhext}[1]{\basubdif{\muhext}{#1}} \newcommand{\basubdifmuhjext}[1]{\basubdif{\muhjext}{#1}} \newcommand{\basubdifnegogext}[1]{\basubdif{\negogext}{#1}} \newcommand{\basubdifnegogiext}[1]{\basubdif{\negogiext}{#1}} \newcommand{\basubdifnegext}[1]{\basubdif{\negfext}{#1}} \newcommand{\basubdifNegogext}[1]{\basubdif{\Negogext}{#1}} \newcommand{\basubdifNegogiext}[1]{\basubdif{\Negogiext}{#1}} \newcommand{\basubdiflamhext}[1]{\basubdif{\lamhext}{#1}} \newcommand{\basubdifindzohext}[1]{\basubdif{\indzohext}{#1}} \newcommand{\basubdifindzohjext}[1]{\basubdif{\indzohjext}{#1}} \newcommand{\basubdifindzext}[1]{\basubdif{\indzext}{#1}} \newcommand{\basubdifpext}[1]{\basubdif{\pext}{#1}} \newcommand{\nbpartial}{\hat{\partial}} \newcommand{\nbasubdif}[2]{{\nbpartial{#1}({#2})}} \newcommand{\nbasubdifF}[1]{\nbasubdif{F}{#1}} \newcommand{\nbasubdiffext}[1]{\nbasubdif{\fext}{#1}} \newcommand{\gradf}{\nabla f} \newcommand{\gradh}{\nabla h} \newcommand{\dder}[3]{#1'(#2;#3)} \newcommand{\dderf}[2]{\dder{f}{#1}{#2}} \newcommand{\dderh}[2]{\dder{h}{#1}{#2}} \newcommand{\rightlim}[2]{{#1}\rightarrow {#2}^{+}} \newcommand{\shadexp}[1]{[{#1}]} \newcommand{\genshad}[2]{{#1^{\shadexp{#2}}}} \newcommand{\fshad}[1]{\genshad{f}{#1}} \newcommand{\fpshad}[1]{\genshad{f'}{#1}} \newcommand{\gshad}[1]{\genshad{g}{#1}} \newcommand{\gishadvi}{\genshad{g_{i-1}}{\limray{\vv_i}}} \newcommand{\hshad}[1]{\genshad{h}{#1}} 
\newcommand{\fshadd}{\fshad{\ebar}} \newcommand{\fshadv}{\fshad{\limray{\vv}}} \newcommand{\fshadvb}{\fshad{\limray{\vbar}}} \newcommand{\fpshadv}{\fpshad{\limray{\vv}}} \newcommand{\fshadext}[1]{\overline{\fshad{#1}}} \newcommand{\fshadextd}{\fshadext{\ebar}} \newcommand{\fminusushadd}{\genshad{\fminusu}{\ebar}} \newcommand{\htilshad}[1]{\genshad{\htil}{#1}} \newcommand{\astF}{F} \newcommand{\fcnclset}{{{\cal M}_n}} \newcommand{\homf}{\mu} \newcommand{\homfinv}{\mu^{-1}} \newcommand{\homat}{{\bf P}} \newcommand{\huv}{h_{\uu,v}} \newcommand{\hbaruv}{\overline{h}_{\uu,v}} \newcommand{\slhypuab}{H_{\uu,\alpha,\beta}} \newcommand{\projperpv}[1]{{\Pi_{\vv}^{\bot}({#1})}} \newcommand{\xseq}[2]{{\xx^{({#1})}_{#2}}} \newcommand{\oms}{\omega} \newcommand{\omsf}[1]{\oms {#1}} \newcommand{\lmset}[1]{{\oms{#1}}} \newcommand{\omm}{\boldsymbol{\omega}}\newcommand{\ommsub}[1]{\omm_{#1}} \newcommand{\ommk}{\ommsub{k}} \newcommand{\limray}[1]{\omsf{#1}} \newcommand{\limrays}[1]{{[{#1}] \omm}} \newcommand{\Finv}{F^{-1}} \newcommand{\uperp}{{\uu^\bot}} \newcommand{\uperpt}{{\uu_t^\bot}} \newcommand{\vperp}[1]{{\vv_{#1}^\bot}} \newcommand{\wprp}[1]{{\ww_{#1}^\bot}} \newcommand{\wperp}{{\ww^\bot}} \newcommand{\xperp}{{\xx^\bot}} \newcommand{\yperp}{{\yy^\bot}} \newcommand{\xperpt}{{\xx_t^\bot}} \newcommand{\xperpjt}{{\xx_{jt}^\bot}} \newcommand{\yperpt}{{\yy_t^\bot}} \newcommand{\zperp}{{\zz^\bot}} \newcommand{\ebarperp}{\ebar^{\bot}} \newcommand{\bbarperp}{\bbar^{\bot}} \newcommand{\wbarperpj}{\wbar_j^{\bot}} \newcommand{\xbarperp}{\xbar^{\bot}} \newcommand{\xbarperpi}{\xbar_i^{\bot}} \newcommand{\xbarperpt}{\xbar_t^{\bot}} \newcommand{\ybarperp}{\ybar^{\bot}} \newcommand{\zbarperp}{\zbar^{\bot}} \newcommand{\norm}[1]{\lVert#1\rVert} \newcommand{\norms}[1]{\left\|{#1}\right\|_2} \newcommand{\normp}[1]{\left\|{#1}\right\|_{p}} \newcommand{\normq}[1]{\left\|{#1}\right\|_{q}} \DeclareMathOperator{\colspace}{\mathopfont{col}} \DeclareMathOperator{\spn}{\mathopfont{span}} \newcommand{\spnfin}[1]{{\spn\{{#1}\}}} 
\newcommand{\comColV}{(\colspace\VV)^\perp} \DeclareMathOperator{\rspan}{\mathopfont{rspan}} \newcommand{\rspanset}[1]{\rspan{\set{#1}}} \newcommand{\rspanxbar}{\rspanset{\xbar}} \newcommand{\rspanxbarperp}{\rspanset{\xbarperp}} \DeclareMathOperator{\columns}{\mathopfont{columns}} \newcommand{\Vxbar}{\VV^{\xbar}} \newcommand{\qqxbar}{\qq^{\xbar}} \newcommand{\kxbar}{k^{\xbar}} \newcommand{\Rxbar}{R^{\xbar}} \newcommand{\Rxbari}{R^{\xbar_i}} \let\amseqref\eqref \renewcommand{\eqref}[1]{Eq.~\textup{(\ref{#1})}} \newcommand{\countsetgen}[1]{\set{#1}} \newcommand{\countset}[1]{\countsetgen{#1_t}} \newcommand{\refequiv}[3]{\ref{#1}(\ref{#2},\ref{#3})} \newcommand{\refequivnp}[3]{\ref{#1}\ref{#2},\ref{#3}} \newcommand{\paren}[1]{\left({#1}\right)} \newcommand{\brackets}[1]{\left[{#1}\right]} \newcommand{\braces}[1]{\{#1\}} \newcommand{\abs}[1]{\left|{#1}\right|} \newcommand{\card}[1]{\left|{#1}\right|} \newcommand{\angles}[1]{\left\langle{#1}\right\rangle} \newcommand{\ceil}[1]{\left\lceil{#1}\right\rceil} \newcommand{\extspac}[1]{\overline{\R^{#1}}} \newcommand{\extspace}{\extspac{n}} \newcommand{\extspacnp}{\extspac{n+1}} \newcommand{\corez}[1]{\mathcal{E}_{#1}} \newcommand{\corezn}{\corez{n}} \newcommand{\coreznp}{\corez{n+1}} \newcommand{\clcorezn}{\overlineKernIt{\mathcal{E}}_n} \newcommand{\galax}[1]{{{\cal G}_{{#1}}}} \newcommand{\galaxd}{\galax{\ebar}} \newcommand{\galaxdp}{\galax{\ebar'}} \newcommand{\galcl}[1]{\overlineKernIt{\cal G}_{{#1}}} \newcommand{\galcld}{\galcl{\ebar}} \newcommand{\galcldt}{\galcl{\ebar_t}} \newcommand{\galcldp}{\galcl{\ebar'}} \newcommand{\hogal}{\gamma} \newcommand{\hogalinv}{\gamma'} \newcommand{\fcnsp}[1]{{\cal F}_{#1}} \newcommand{\fcnspn}{{\cal F}} \newcommand{\neigh}{{B}} \newcommand{\neighbase}{{\cal N}} \newcommand{\nbfinint}{{\neighbase^*}} \newcommand{\indfalet}{I} \newcommand{\indflet}{\iota} \newcommand{\indfa}[1]{\indfalet_{#1}} \newcommand{\indfastar}[1]{\indfalet^*_{#1}} 
\newcommand{\indfadub}[1]{\indfalet^{*\dualstar}_{#1}} \newcommand{\indfdub}[1]{\indflet^{*\dualstar}_{#1}} \newcommand{\indfdstar}[1]{\indflet^{\dualstar}_{#1}} \newcommand{\indaS}{\indfa{S}} \newcommand{\indaSstar}{\indfastar{S}} \newcommand{\indaz}{\indfa{0}} \newcommand{\indab}{\indfa{b}} \newcommand{\lb}[2]{\seg(#1,#2)} \newcommand{\indf}[1]{\indflet_{#1}} \newcommand{\indfstar}[1]{\indflet^{*}_{#1}} \newcommand{\inds}{\indf{S}} \newcommand{\indC}{\indf{C}} \newcommand{\indK}{\indf{K}} \newcommand{\indL}{\indf{L}} \newcommand{\indM}{\indf{M}} \newcommand{\indz}{\indf{0}} \newcommand{\indZ}{\indf{\zero}} \newcommand{\indb}{\indf{b}} \newcommand{\indgenext}[1]{\ibar_{#1}} \newcommand{\indgenextstar}[1]{\ibar^*_{#1}} \newcommand{\indsext}{\indgenext{S}} \newcommand{\indsextdub}{\indgenext{S}^{\mkern1mu*\dualstar}} \newcommand{\indMext}{\indgenext{M}} \newcommand{\indsextstar}{\indgenextstar{S}} \newcommand{\indzext}{\indgenext{0}} \newcommand{\indbext}{\indgenext{b}} \newcommand{\indzohext}{(\overline{\indz\circ h})} \newcommand{\indzohjext}{(\overline{\indz\circ h_j})} \newcommand{\inddomf}{\indf{\dom{f}}} \newcommand{\inddomfext}{\indgenext{\dom{f}}} \newcommand{\inddomgiext}{\indgenext{\dom{g_i}}} \newcommand{\indsdub}{\indflet^{*\dualstar}_S} \newcommand{\indstar}[1]{{\indflet^*_{#1}}} \newcommand{\indstars}{\indstar{S}} \newcommand{\indstarM}{\indstar{M}} \newcommand{\indzstar}{\indstar{0}} \newcommand{\inddubstar}[1]{{\indflet^{**}_{#1}}} \newcommand{\indepif}{\indf{\epi f}} \newcommand{\indepifext}{\indgenext{\epi f}} \newcommand{\indepifstar}{\indstar{\epi f}} \newcommand{\inddomfstar}{\indstar{\dom f}} \DeclareMathOperator{\conv}{\mathopfont{conv}} \DeclareMathOperator{\seg}{\mathopfont{seg}} \DeclareMathOperator{\rank}{\mathopfont{rank}} \DeclareMathOperator{\ohull}{\widetilde{\mathopfont{conv}}} \newcommand{\simplex}{\ohull} \newcommand{\ohs}[2]{H_{{#1},{#2}}} \newcommand{\ohsua}{\ohs{\uu}{a}} \newcommand{\chs}[2]{{H}_{{#1},{#2}}} 
\newcommand{\chsub}{\chs{\uu}{\beta}} \newcommand{\chsua}{\chsub} \newcommand{\chsuz}{\chs{\uu}{0}} \newcommand{\allchs}{{\cal H}} \newcommand{\calI}{{\cal I}} \newcommand{\lambar}{\hat{\lambda}} \newcommand{\ahfsp}[3]{H_{{#1},{#2},{#3}}} \DeclareMathOperator{\genhull}{\mathopfont{hull}} \DeclareMathOperator{\midptop}{\mathopfont{mid}} \newcommand{\midpt}[3]{{#1}\text{-}\midptop({#2},\,{#3})} \newcommand{\lammid}[2]{\midpt{\lambda}{#1}{#2}} \newcommand{\zrmid}[2]{\midpt{0}{#1}{#2}} \newcommand{\lexless}{<_L} \newcommand{\compgen}[3]{\;{#1}^{#2}_{#3}\;} \newcommand{\leqxy}{\preceq} \newcommand{\geqxy}{\succeq} \newcommand{\ltxy}{\prec} \newcommand{\gtxy}{\succ} \newcommand{\leqxyrn}{\leqxy} \newcommand{\leqxyp}{\preceq'} \newcommand{\lcmin}{\lambda_{\mathrm{min}}} \newcommand{\lcmax}{\lambda_{\mathrm{max}}} \newcommand{\mphom}{\xi} \newcommand{\mphominv}{\xi^{-1}} \newcommand{\mphomu}{\xi_{\uu}} \newcommand{\mphompu}{\xi'_{\uu}} \newcommand{\mphomj}[1]{\xi^{#1}} \newcommand{\mphomuj}[1]{\mphomu^{#1}} \newcommand{\nuj}[1]{\nu^{#1}} \newcommand{\Lj}[1]{L^{#1}} \newcommand{\Pj}[1]{P^{#1}} \newcommand{\xbarj}[1]{\xbar^{#1}} \newcommand{\leqj}{\leq^{j}} \newcommand{\leqk}{\leq^{k}} \newcommand{\ltj}{<^{j}} \newcommand{\lcminj}[1]{\lcmin^{#1}} \newcommand{\lcmaxj}[1]{\lcmax^{#1}} \newcommand{\circf}{\vv} \newcommand{\circfinv}{\ww} \newcommand{\cfarga}{\alpha} \newcommand{\cfargb}{\beta} \newcommand{\circfval}{\circf_{\cfarga}} \newcommand{\circfinvval}{\circfinv_{\cfarga}} \newcommand{\circfvalz}{\circf_{\cfarga_0}} \newcommand{\circfinvvalz}{\circfinv_{\cfarga_0}} \newcommand{\sfprodsym}{\mathbin{\mkern-1mu\varcirc\mkern-3mu}} \newcommand{\sfprod}[2]{#1\sfprodsym#2} \newcommand{\sfprodstar}[2]{(\sfprod{#1}{#2})^*} \newcommand{\rhoinv}{\rho^{-1}} \newcommand{\vepsilon}{\boldsymbol{\epsilon}} \newcommand{\lamvec}{\boldsymbol{\lambda}} \newcommand{\muvec}{\boldsymbol{\mu}} \newcommand{\lagrangfcn}{\ell} \newcommand{\lagrang}[2]{\lagrangfcn({#1} \mid {#2})} 
\newcommand{\lagrangext}[2]{\overline{\lagrangfcn}({#1} \mid {#2})} \newcommand{\lagrangstar}[2]{\lagrangfcn^*({#1} \mid {#2})} \newcommand{\lagranglm}[1]{\lagrang{#1}{\lamvec,\muvec}} \newcommand{\lagranglmd}[1]{\lagrang{#1}{2\lamvec,2\muvec}} \newcommand{\lagranglmz}[1]{\lagrang{#1}{\lamvec_0,\muvec_0}} \newcommand{\lagranglmext}[1]{\lagrangext{#1}{\lamvec,\muvec}} \newcommand{\lagranglmdext}[1]{\lagrangext{#1}{2\lamvec,2\muvec}} \newcommand{\lagranglmzext}[1]{\lagrangext{#1}{\lamvec_0,\muvec_0}} \newcommand{\lagranglmstar}[1]{\lagrangstar{#1}{\lamvec,\muvec}} \newcommand{\lagrangl}[1]{\lagrang{#1}{\lamvec}} \newcommand{\lagranglext}[1]{\lagrangext{#1}{\lamvec}} \newcommand{\lagrangflm}{\lagranglm{\cdot}} \newcommand{\lagpair}[2]{{\langle{#1}, {#2}\rangle}} \newcommand{\lammupair}{\lagpair{\lamvec}{\muvec}} \newcommand{\lammupairz}{\lagpair{\lamvec_0}{\muvec_0}} \newcommand{\lammupairp}{\lagpair{\lamvec'}{\muvec'}} \newcommand{\Lampairs}{\Gamma_{r,s}} \newcommand{\sumitor}{\sum_{i=1}^r} \newcommand{\sumjtos}{\sum_{j=1}^s} \newcommand{\negf}{\mathrm{neg}} \newcommand{\Negf}{\mathrm{Neg}} \newcommand{\negfext}{\overline{\negf}} \newcommand{\negogiext}{(\overline{\negf\circ g_i})} \newcommand{\negogext}{(\overline{\negf\circ g})} \newcommand{\Negogiext}{(\overline{\Negf\circ g_i})} \newcommand{\Negogext}{(\overline{\Negf\circ g})} \newcommand{\lamhext}{(\overline{\lambda h})} \newcommand{\lamgextgen}[1]{(\overline{\lambda_{#1} g_{#1}})} \newcommand{\lamgiext}{\lamgextgen{i}} \newcommand{\muhextgen}[1]{(\overline{\mu_{#1} h_{#1}})} \newcommand{\muhext}{(\overline{\mu h})} \newcommand{\muhjext}{\muhextgen{j}} \newcommand{\equi}[1]{[{#1}]} \newcommand{\Sperp}{{S^{\bot}}} \newcommand{\Sperperp}{\perperp{S}} \newcommand{\Sperpaperp}{\perpaperp{S}} \newcommand{\Sbarperp}{{(\Sbar)^\bot}} \newcommand{\Uperp}{{U^{\bot}}} \newcommand{\Uperperp}{{U^{\bot\bot}}} \newcommand{\Kperp}{{K^{\bot}}} \newcommand{\Lperp}{{L^{\bot}}} \newcommand{\Lperperp}{{L^{\bot\bot}}} 
\newcommand{\Mperp}{{M^{\bot}}} \newcommand{\Mperperp}{{M^{\bot\bot}}} \newcommand{\Bperp}{{B^{\bot}}} \DeclareMathOperator{\vertsl}{\mathopfont{vert}} \DeclareMathOperator{\slopes}{\mathopfont{bar}} \DeclareMathOperator{\barr}{\mathopfont{bar}} \newcommand{\hlfcn}[2]{{\psi({#1}, {#2})}} \newcommand{\falph}{{f_{\alpha}}} \newcommand{\galph}{{g_{\alpha}}} \newcommand{\fbara}{{\overline{f}_{\alpha}}} \newcommand{\gbara}{{\overline{g}_{\alpha}}} \newcommand{\intmu}[1]{{\int {#1} \, d\mu(\alpha)}} \newcommand{\alpspace}{{\cal A}} \newcommand{\Rext}{\overline{\R}} \newcommand{\ph}[1]{\phi_{#1}} \newcommand{\phstar}[1]{{\phi^{\dualstar}_{#1}}} \newcommand{\phstars}[1]{{\phi^{*}_{#1}}} \newcommand{\phdub}[1]{{\phi^{\dualstar *}_{#1}}} \newcommand{\phfcn}{{\varphi}} \newcommand{\phx}{\ph{\xx}} \newcommand{\phimg}{\Phi}\newcommand{\phimgcl}{\overline{\Phi}}\newcommand{\phimgm}{\phfcn(\Rm)} \newcommand{\phimgclm}{\overline{\phimgm}} \newcommand{\phimgA}{\Phi'}\newcommand{\calF}{{\cal F}} \newcommand{\calH}{{\cal H}} \newcommand{\calQ}{{\cal Q}} \newcommand{\calL}{{\cal L}} \newcommand{\calM}{{\cal M}} \newcommand{\Lambdainv}{\Lambda^{-1}} \newcommand{\Ep}{{E_{+}}} \newcommand{\Em}{{E_{-}}} \newcommand{\Hp}{H_{+}} \newcommand{\Hm}{H_{-}} \newcommand{\Mp}{{M_{+}}} \newcommand{\Mm}{{M_{-}}} \newcommand{\Mz}{{M_0}} \newcommand{\Hsep}{H} \newcommand{\allseq}{{\cal S}} \newcommand{\explet}{d} \newcommand{\true}{{\bf true}} \newcommand{\false}{{\bf false}} \newcommand{\distset}{I} \newcommand{\distelt}{i} \newcommand{\distalt}{j} \newcommand{\distaltii}{\ell} \newcommand{\featmap}{{\boldsymbol{\phi}}} \newcommand{\featmapsc}{\phi} \newcommand{\featmapj}{\phi_j} \newcommand{\featmapu}{\featmap_{\uu}} \newcommand{\featmapw}{\featmap_{\ww}} \newcommand{\qsub}[1]{p_{#1}} \newcommand{\qx}{\qsub{\xx}} \newcommand{\qsx}{\qsub{x}} \newcommand{\qbarx}{\qsub{\barx}} \newcommand{\qxbar}{\qsub{\xbar}} \newcommand{\qxbarp}{\qsub{\xbar'}} \newcommand{\qxt}{\qsub{\xx_t}} 
\newcommand{\qebar}{\qsub{\ebar}} \newcommand{\medists}{Q} \newcommand{\mldists}{P} \newcommand{\sumex}{z} \newcommand{\sumexu}{\sumex_{\uu}} \newcommand{\sumexw}{\sumex_{\ww}} \newcommand{\sumexext}{\overline{\sumex}} \newcommand{\sumexextfi}{\overline{\sumex}_{\uu}} \newcommand{\sumexextu}{\overline{\sumex}_{\uu}} \newcommand{\fullshadsumexu}{\fullshadnoparen{\sumexu}} \newcommand{\lpart}{a} \newcommand{\lpartu}{\lpart_{\uu}} \newcommand{\lpartfi}{\lpart_{\featmap(\distelt)}} \newcommand{\lpartext}{\overline{\lpart}} \newcommand{\lpartextu}{\overline{\lpart}_{\uu}} \newcommand{\lpartextfi}{\overline{\lpart}_{\featmap(\distelt)}} \newcommand{\lpartstar}{\lpart^*} \newcommand{\lpartustar}{\lpartu^*} \newcommand{\meanmap}{M} \newcommand{\meanmapu}{\meanmap_{\uu}} \newcommand{\entropy}{\mathrm{H}} \newcommand{\convfeat}{\conv{\featmap(\distset)}} \newcommand{\convfeatu}{\conv{\featmapu(\distset)}} \newcommand{\subdiflpart}{\partial\lpart} \newcommand{\subdiflpartu}{\partial\lpartu} \newcommand{\subdiflpartstar}{\partial\lpartstar} \newcommand{\asubdiflpart}{\asubdifplain{\lpartext}} \newcommand{\asubdiflpartu}{\asubdifplain{\lpartextu}} \newcommand{\basubdiflpart}{\basubdifplain{\lpartext}} \newcommand{\basubdiflpartu}{\basubdifplain{\lpartextu}} \newcommand{\adsubdiflpartstar}{\adsubdifplain{\lpartstar}} \newcommand{\adsubdiflpartustar}{\adsubdifplain{\lpartustar}} \newcommand{\Nj}[1]{m_{#1}} \newcommand{\Nelt}{\Nj{\distelt}} \newcommand{\numsamp}{m} \newcommand{\loglik}{\ell} \newcommand{\sfrac}[2]{\mbox{$\frac{#1}{#2}$}} \newcommand{\phat}{\hat{p}} \newcommand{\popdist}{\pi} \newcommand{\Exp}[2]{{\bf E}_{#1}\!\brackets{#2}} \DeclareMathOperator{\support}{\mathopfont{supp}} \DeclareMathOperator{\affh}{\mathopfont{aff}} \newcommand{\plusr}{\dotplus} \newcommand{\plusl}{\varplusl}\newcommand{\plusd}{\varplusd}\newcommand{\plusu}{\varplusu} \newcommand{\lowplus}{\plusd} \newcommand{\seqsum}{\varhash} \newcommand{\Seqsum}{\Sigma_{\seqsum}} 
\newcommand{\matusplusl}{{\mkern -2mu \text{\begin{tikzpicture}[baseline=(plus.base)] \node (plus) {$+$}; \fill (-0.085,0.0) circle (0.035); \end{tikzpicture}}\mkern -2mu }} \newcommand{\matusplusd}{{\mkern -2mu \begin{tikzpicture}[baseline=(plus.base)] \node (plus) {$+$}; \fill (0.0,-0.085) circle (0.035); \end{tikzpicture}\mkern -2mu }} \renewcommand{\matusplusd}{\plusd} \newcommand{\matuspluslb}{{\mkern -2mu \text{\begin{tikzpicture}[baseline=(plus.base)] \node (plus) {$+$}; \fill (-0.0925,0.0) circle (0.025); \end{tikzpicture}}\mkern -2mu }} \newcommand{\matusplusdb}{{\mkern -2mu \begin{tikzpicture}[baseline=(plus.base)] \node (plus) {$+$}; \fill (0.0,-0.0925) circle (0.025); \end{tikzpicture}\mkern -2mu }} \newcommand{\singerphi}{\varphi} \newcommand{\scriptontop}[2]{\substack{{#1} \\ {#2}}} \newcommand{\infseq}[3]{\inf_{\scriptontop{{#1}\textit{ in }{#2}:}{{#3}}}} \newcommand{\InfseqLiminf}[4]{\inf_{\substack{\vphantom{t\to\infty}#1 \textit{ in } #2:\\ #3}} \; \liminf_{\substack{t\to\infty\vphantom{#1 \textit{ in } #2:}\\ \vphantom{#3}}} {#4}} \newcommand{\InfseqLim}[4]{\inf_{\substack{\vphantom{t\to\infty}#1 \textit{ in } #2:\\ #3}} \; \lim_{\substack{t\to\infty\vphantom{#1 \textit{ in } #2:}\\ \vphantom{#3}}} {#4}} \renewcommand{\theenumi}{\alph{enumi}} \numberwithin{equation}{section} \newtheoremstyle{myplain} {8pt plus 2pt minus 4pt} {8pt plus 2pt minus 4pt} {\itshape} {0pt} {\bfseries} {.} {.5em} {} \newtheoremstyle{myremark} {4pt plus 1pt minus 2pt} {4pt plus 1pt minus 2pt} {\normalfont} {0pt} {\itshape} {.} {.5em} {} \newtheoremstyle{myproof} {8pt plus 2pt minus 4pt} {8pt plus 2pt minus 4pt} {\normalfont} {0pt} {\itshape} {.} {.5em} {} \newtheoremstyle{mysubproof} {8pt plus 2pt minus 4pt} {8pt plus 2pt minus 4pt} {\normalfont} {0pt} {\itshape} {.} {.5em} {} \newtheoremstyle{mysubclaim} {8pt plus 2pt minus 4pt} {8pt plus 2pt minus 4pt} {\normalfont} {0pt} {\itshape} {.} {.5em} {} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] 
\newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem*{proposition*}{Proposition} \theoremstyle{remark} \newtheorem{claimp}{Claim} \newtheorem*{mainclaimp*}{Main claim} \declaretheorem[title=Claim,within=theorem,style=mysubclaim]{claimpx} \renewcommand{\theclaimpx}{\arabic{claimpx}} \declaretheorem[title={\hskip0pt Proof},numbered=no,style=mysubproof,qed=$\lozenge$]{proofx} \newcommand{\clqedsymbol}{\lozenge} \newcommand{\clqed}{\leavevmode\unskip\penalty9999 \hbox{}\nobreak\hfill \quad\hbox{\ensuremath{\clqedsymbol}}} \theoremstyle{definition} \declaretheorem[title=Example,numberlike=theorem,style=definition,qed=$\blacksquare$]{example} \newtheorem{definition}[theorem]{Definition} \newtheorem{idefinition}[theorem]{Informal Definition} \newcounter{tmpenumcounter} \newcommand{\saveenumcounter}{ \setcounter{tmpenumcounter}{\arabic{enumi}} } \newcommand{\restoreenumcounter}{ \setcounter{enumi}{\thetmpenumcounter} } \DeclareMathOperator{\argmin}{arg\,min} \DeclareMathOperator{\argmax}{arg\,max} \newcommand{\Eq}[1]{\eqref{eq:#1}} \newcommand{\Ex}[1]{Example~\ref{ex:#1}} \newcommand{\Sec}[1]{\S\ref{sec:#1}} \newcommand{\Subsec}[1]{\S\ref{subsec:#1}} \newcommand{\MarkYes}{} \newcommand{\MarkMaybe}{} \newcommand{\Confirmed}{\color{Green}{\hfill\xspace\mbox{$\blacktriangleright$~\textit{confirmed}}}} \newcommand{\ToDiscuss}{\color{Red}{\hfill\xspace\mbox{$\blacktriangleright$~\textit{to discuss}}}} \newcommand{\Change}{\color{Orange}{\mbox{$\blacktriangleright$~\textit{change:}}}\xspace} \newcommand{\Tentative}{\color{Fuchsia}{\hfill\xspace\mbox{$\blacktriangleright$~\textit{tentative}}}} \newcommand{\vtheta}{\boldsymbol{\theta}} \newcommand{\vmu}{\boldsymbol{\mu}} \newcommand{\vmubar}{\overlineKernIt{\boldsymbol{\mu}}} \newcommand{\vlambda}{\boldsymbol{\lambda}} \newcommand{\vlambdabar}{\overlineKernIt{\boldsymbol{\lambda}}} \newcommand{\mubar}{\overlineKernIt{\mu}} 
\newcommand{\hpp}{\hat{\pp}} \newcommand{\lsec}{\hat{p}} \newcommand{\lsevec}{\hpp} \newcommand{\probsimgen}[1]{\Delta_{#1}} \newcommand{\probsim}{\probsimgen{n}} \newcommand{\probsimm}{\probsimgen{m}} \newcommand{\eRn}{\overline{\R^n}}\newcommand{\eRd}{\overline{\R^d}}\newcommand{\eRm}{\overline{\R^m}}\newcommand{\eR}{\overline{\R}} \newcommand{\eRf}[1]{\overline{\R^{#1}}} \newcommand{\e}[1]{{}\mkern3mu\overline{\mkern-3mu#1}} \newcommand{\ef}{\overlineKernf{f}} \newcommand{\efp}{{}\mkern3mu\overline{\mkern-3mu f'\mkern1mu}\mkern-1mu} \newcommand{\efpsub}[1]{{}\mkern3mu\overline{\mkern-3mu f'_{#1}\mkern1mu}\mkern-1mu} \newcommand{\efspace}{{}\mkern4.5mu\overline{\mkern-4.5mu f\mkern1.5mu}} \newcommand{\efsub}[1]{{}\mkern2mu\e{\mkern-2mu f\mkern1mu}\mkern-1mu\mkern-2mu{}_{#1}} \newcommand{\ep}{{}\mkern2mu\overline{\mkern-2mu p\mkern1mu}\mkern-1mu} \newcommand{\eg}{{}\mkern2mu\overline{\mkern-2mu g\mkern1mu}\mkern-1mu} \newcommand{\eh}{{}\mkern1mu\overline{\mkern-1mu h}} \newcommand{\ibar}{\ibarii} \newcommand{\ministrut}{\hbox{\vbox to 1.07ex{}}} \newcommand{\ibari}{\mkern1mu\overline{\mkern-1mu\iota\mkern1mu}\mkern-1mu} \newcommand{\ibarii}{\mkern1mu\overline{\mkern-1mu\iota\ministrut\mkern1mu}\mkern-1mu} \newcommand{\ibariii}{\mkern1mu\overline{\mkern-1mu i\mkern1mu}\mkern-1mu} \newcommand{\epp}{\overline{\pp}} \newcommand{\ex}{\e{x}} \newcommand{\ey}{\e{y}} \newcommand{\enegf}{\overline{-f}} \newcommand{\epartial}{\e{\partial}} \newcommand{\exx}{\overline{\xx}} \newcommand{\eyy}{\overline{\yy}} \newcommand{\cF}{\mathcal{F}} \newcommand{\ecF}{\e{\cF}} \newcommand{\cL}{\mathcal{L}} \newcommand{\ecL}{\e{\cL}} \newcommand{\cQ}{\mathcal{Q}} \newcommand{\veps}{\boldsymbol{\varepsilon}} \newcommand{\vphi}{\boldsymbol{\phi}} \newcommand{\inprod}{\cdot} \newcommand{\iad}{i.a.d.\@\xspace} \newcommand{\Iad}{I.a.d.\@\xspace} \newcommand{\iadto}{\xrightarrow{\textup{\iad}}}\newcommand{\iadtau}{\xrightarrow{\tau}}\newcommand{\wo}{\backslash} \makeatletter 
\newcommand{\varlozenge}{\mathpalette\v@rlozenge\relax} \newcommand{\varcirc}{\mathpalette\v@rcirc\relax} \newcommand{\varcircup}{\mathpalette\v@rcircup\relax} \newcommand{\varsquare}{\mathpalette\v@rsquare\relax} \newcommand\v@rlozenge[2]{\setbox0=\hbox{$\m@th#1+$}\setbox1=\hbox{$\m@th#1\vcenter{\hbox{\resizebox{!}{0.45\wd0}{$\m@th#1\blacklozenge\vphantom{+}$}}}$}\mathbin{\ooalign{\box1}}} \newcommand\v@rcirc[2]{\setbox0=\hbox{$\m@th#1+$}\setbox1=\hbox{$\m@th#1\vcenter{\hbox{\resizebox{!}{0.5\wd0}{$\m@th#1\bullet\vphantom{+}$}}}$}\mathbin{\ooalign{\box1}}} \newcommand\v@rcircup[2]{\setbox0=\hbox{$\m@th#1+$}\setbox1=\hbox{$\m@th#1\vcenter{\hbox{\resizebox{!}{0.35\wd0}{$\m@th#1\altcircup\vphantom{+}$}}}$}\mathbin{\ooalign{\box1}}} \newcommand\v@rsquare[2]{\setbox0=\hbox{$\m@th#1+$}\setbox1=\hbox{$\m@th#1\vcenter{\hbox{\resizebox{!}{0.35\wd0}{$\m@th#1\blacksquare\vphantom{+}$}}}$}\mathbin{\ooalign{\box1}}} \newcommand{\varplusl}{\mathpalette\v@rplusl\relax} \newcommand{\varplusr}{\mathpalette\v@rplusr\relax} \newcommand{\varpluslr}{\mathpalette\v@rpluslr\relax} \newcommand{\varplusd}{\mathpalette\v@rplusd\relax} \newcommand{\varplusu}{\mathpalette\v@rplusu\relax} \newcommand{\varhash}{\mathpalette\v@rhash\relax} \newlength{\v@rhashtot} \newlength{\v@rhashraise} \newcommand\v@rhashvar[2]{\setbox0=\hbox{$\m@th#1=$}\setbox1=\hbox{$\m@th#1\parallel$}\setbox2=\hbox{$\m@th#1+$}\setlength{\v@rhashtot}{\ht2}\addtolength{\v@rhashtot}{\dp2}\setbox3=\hbox{$\m@th#1\vbox{\hbox{\resizebox{\wd0}{\ht0}{$\m@th#1=$}}}$}\setbox4=\hbox{$\m@th#1\vcenter{\hbox{\resizebox*{\wd1}{\v@rhashtot}{$\m@th#1\parallel$}}}$}\mathbin{\ooalign{\hidewidth\box4\hidewidth\cr \box3\cr}}} 
\newcommand\v@rhashvarred[2]{\setbox0=\hbox{$\m@th#1=$}\setbox1=\hbox{$\m@th#1\parallel$}\setbox2=\hbox{$\m@th#1+$}\setlength{\v@rhashtot}{\ht2}\addtolength{\v@rhashtot}{\dp2}\setlength{\v@rhashraise}{0.1875\ht0}\setbox3=\hbox{$\m@th#1\vbox{\hbox{\resizebox{\wd0}{0.75\ht0}{$\m@th#1=$}}}$}\setbox4=\hbox{$\m@th#1\vcenter{\hbox{\resizebox*{0.75\wd1}{\v@rhashtot}{$\m@th#1\parallel$}}}$}\mathbin{\ooalign{\hidewidth\box4\hidewidth\cr \raise\v@rhashraise\box3\cr}}} \newcommand\v@rhash[2]{\setbox0=\hbox{$\m@th#1-$}\setbox1=\hbox{$\m@th#1\mid$}\setbox2=\hbox{$\m@th#1+$}\setlength{\v@rhashtot}{\ht2}\addtolength{\v@rhashtot}{\dp2}\setbox3=\copy0 \setbox4=\copy3 \setbox5=\hbox{$\m@th#1\vcenter{\hbox{\resizebox*{\wd1}{\v@rhashtot}{$\m@th#1\mid$}}}$}\setbox6=\copy5 \mathbin{\ooalign{\hidewidth\box5\kern.23\v@rhashtot\hidewidth\cr \hidewidth\kern.23\v@rhashtot\box6\hidewidth\cr \raise.115\v@rhashtot\box3\cr \lower.115\v@rhashtot\box4}}} \newlength{\v@rplusraise} \newcommand\v@rplusl[2]{\setbox0=\hbox{$\m@th#1+$}\setbox1=\hbox{$\m@th#1\vcenter{\hbox{\resizebox{!}{0.3\wd0}{$\m@th#1\bullet\vphantom{+}$}}}$}\mathbin{\ooalign{\box1\hidewidth\cr \box0\cr}}} \newcommand\v@rplusr[2]{\setbox0=\hbox{$\m@th#1+$}\setbox1=\hbox{$\m@th#1\vcenter{\hbox{\resizebox{!}{0.3\wd0}{$\m@th#1\bullet\vphantom{+}$}}}$}\mathbin{\ooalign{\hidewidth\box1\cr \box0\cr}}} \newcommand\v@rpluslr[2]{\setbox0=\hbox{$\m@th#1+$}\setbox1=\hbox{$\m@th#1\vcenter{\hbox{\resizebox{!}{0.3\wd0}{$\m@th#1\bullet\vphantom{+}$}}}$}\setbox2=\copy1 \mathbin{\ooalign{\hidewidth\box1\cr \box2\hidewidth\cr \box0\cr}}} \newcommand\v@rplusd[2]{\setbox0=\hbox{$\m@th#1+$}\setbox1=\hbox{\resizebox{!}{0.3\wd0}{$\m@th#1\bullet\vphantom{+}$}}\setlength{\v@rplusraise}{-\dp0}\addtolength{\v@rplusraise}{-0.25\ht1}\mathbin{\ooalign{\hidewidth\raise\v@rplusraise\box1\hidewidth\cr \box0\cr}}} 
\newcommand\v@rplusu[2]{\setbox0=\hbox{$\m@th#1+$}\setbox1=\hbox{\resizebox{!}{0.3\wd0}{$\m@th#1\bullet\vphantom{+}$}}\setlength{\v@rplusraise}{\ht0}\addtolength{\v@rplusraise}{\dp1}\addtolength{\v@rplusraise}{-0.75\ht1}\mathbin{\ooalign{\hidewidth\raise\v@rplusraise\box1\hidewidth\cr \box0\cr}}} \makeatother \newcommand{\bracks}[1]{[#1]} \newcommand{\Bracks}[1]{\left[#1\right]} \newcommand{\bigBracks}[1]{\bigl[#1\bigr]} \newcommand{\BigBracks}[1]{\Bigl[#1\Bigr]} \newcommand{\BigBraces}[1]{\Bigl\{#1\Bigr\}} \newcommand{\biggBraces}[1]{\biggl\{#1\biggr\}} \newcommand{\biggBracks}[1]{\biggl[#1\biggr]} \newcommand{\bigBraces}[1]{\bigl\{#1\bigr\}} \newcommand{\Braces}[1]{\left\{#1\right\}} \newcommand{\parens}{\Parens} \newcommand{\Parens}[1]{\left(#1\right)} \newcommand{\bigParens}[1]{\bigl(#1\bigr)} \newcommand{\BigParens}[1]{\Bigl(#1\Bigr)} \newcommand{\biggParens}[1]{\biggl(#1\biggr)} \newcommand{\BiggParens}[1]{\Biggl(#1\Biggr)} \newcommand{\set}[1]{\{#1\}} \newcommand{\bigSet}[1]{\bigl\{#1\bigr\}} \newcommand{\Set}[1]{\left\{#1\right\}} \newcommand{\Norm}[1]{\left\lVert#1\right\rVert} \newcommand{\seq}[1]{{(#1)}} \newcommand{\bigSeq}[1]{\bigl(#1\bigr)} \newcommand{\bigAbs}[1]{\bigl\lvert#1\bigr\rvert} \newcommand{\tupset}[2]{\langle#1\rangle_{#2}} \newcommand{\tprojsym}{\pi} \newcommand{\tproja}{\tprojsym_{\alpha}} \newcommand{\tprojb}{\tprojsym_{\beta}} \newcommand{\tprojinva}{\tprojsym^{-1}_{\alpha}} \newcommand{\tprojz}{\tprojsym_{z}} \newcommand{\tprojinvz}{\tprojsym^{-1}_{z}} \newcommand{\dif}{\mathrm{d}} \newcommand{\RefsImplication}[2]{(\ref{#1})~$\Rightarrow$~(\ref{#2})} \newcommand{\nablaf}{\nabla\mkern-1.5mu f} \newcommand{\fA}{f\mkern-1mu\A} \newcommand{\transA}{\trans{\A\mkern-3mu}} \newcommand{\lambdaUni}{^^f0^^9d^^9c^^86}\newcommand{\pmUni}{^^c2^^b1}\newcommand{\inftyUni}{^^e2^^88^^9e} \newlist{letter-compact}{enumerate}{1} \setlist[letter-compact]{noitemsep, label={\upshape(\alph*)}, ref=\alph*} \newlist{letter}{enumerate}{1} 
\setlist[letter]{label={\upshape(\alph*)}, ref=\alph*} \newlist{letter-compact-prime}{enumerate}{1} \setlist[letter-compact-prime]{noitemsep, label={\upshape(\alph*)\hphantom{$'$}}, ref=\alph*} \makeatletter \newcommand{\itemprime}{\item[\upshape(\alph{letter-compact-primei}$'$)]\protected@edef\@currentlabel{\alph{letter-compact-primei}$'$}\ignorespaces } \makeatother \newcounter{tmplccounter} \newcommand{\savelccounter}{ \setcounter{tmplccounter}{\arabic{letter-compacti}} } \newcommand{\restorelccounter}{ \setcounter{letter-compacti}{\thetmplccounter} } \newlist{roman-compact}{enumerate}{1} \setlist[roman-compact]{noitemsep, label={\upshape(\roman*)}, ref=\roman*} \newlist{item-compact}{itemize}{1} \setlist[item-compact]{noitemsep, label=$\bullet$} \newlist{proof-parts}{description}{2} \setlist[proof-parts]{nosep, wide, font=\rmfamily\mdseries\itshape} \newcommand{\pfpart}[1]{\item[#1]} \newcommand{\overviewsecheading}[1]{\subsubsection*{\ref{#1}: \nameref{#1}}} \begin{document} \title{{\huge Convex Analysis at Infinity} \\ {\LARGE An Introduction to Astral Space}\footnote{Intermediate draft. Comments and feedback are very welcome. Special thanks to Ziwei Ji for numerous ideas and helpful suggestions. } \\ {\emph{\normalsize (draft version 3)}}\bigskip } \author{Miroslav Dud\'ik\footnote{Microsoft Research, New York City. \texttt{\href{mailto:[email protected]}{[email protected]}}, \texttt{\href{mailto:[email protected]}{[email protected]}}. } \and Robert E. Schapire\footnotemark[2] \and Matus Telgarsky\footnote{Courant Institute, New York University. Supported partially by NSF grant IIS-1750051. Part of this research was conducted while visiting Microsoft Research, and also while at University of Illinois, Urbana-Champaign, Department of Computer Science. \texttt{\href{mailto:[email protected]}{[email protected]}}. 
} ~~~ \bigskip } \pdfbookmark[1]{Convex Analysis at Infinity: An Introduction to Astral Space}{title} \pagenumbering{roman} \begin{titlepage} \maketitle \thispagestyle{empty} \vspace{2em} \begin{abstract} Not all convex functions on $\R^n$ have finite minimizers; some can only be minimized by a sequence as it heads to infinity. In this work, we aim to develop a theory for understanding such minimizers at infinity. We study \emph{astral space}, a compact extension of $\R^n$ to which such points at infinity have been added. Astral space is constructed to be as small as possible while still ensuring that all linear functions can be continuously extended to the new space. Although astral space includes all of $\R^n$, it is not a vector space, nor even a metric space. However, it is sufficiently well-structured to allow useful and meaningful extensions of such concepts as convexity, conjugacy, and subdifferentials. We develop these concepts and analyze various properties of convex functions on astral space, including the detailed structure of their minimizers, exact characterizations of continuity, and convergence of descent algorithms. \end{abstract} \end{titlepage} \pagenumbering{arabic} \clearpage \pdfbookmark[1]{\contentsname}{toc} \tableofcontents \section{Introduction} \label{sec:intro} \subsection{Motivation} \label{sec:motivation} Convex functions are analytically and algorithmically very well-behaved. For example, every local minimizer of a convex function is also a global minimizer, and there are many efficient algorithms that can be used to find minima of convex functions under mild assumptions~\citep{nesterov_book,nemirovsky_yudin}. However, there are important examples in statistics, economics, and machine learning where standard assumptions do not hold, specifically, where the underlying objective function is convex, continuous, and bounded below, but nevertheless fails to attain its minimum at any finite point in $\Rn$. 
When this happens, to approach its infimum, the function's argument must somehow be taken ``to infinity.'' This can happen even in the simplest of cases: \begin{example}[Exponential] \label{ex:exp} Consider minimization of $f(x)=e^x$ over $x\in\R$. This function can only be ``minimized'' in a limit as $x\to-\infty$. \end{example} This example is by no means unusual. In statistics, for instance, many common parametric distributions, such as a multivariate Gaussian or a multinomial, can be expressed as elements of a suitable ``exponential family'' and their parameters can be fitted by maximizing log likelihood~\citep[Equation 3.38]{WainwrightJo08}. This modeling approach is both analytically and algorithmically appealing, because the log-likelihood function for exponential families is concave. However, there are cases when the log likelihood is maximized only as the parameter goes to infinity. Since the parameter is, in general, a multi-dimensional vector, there are a variety of ways in which it can go to infinity. The analysis of these cases relies on the machinery of convex duality~\citep[Section 3.6]{WainwrightJo08}. At the crux of the analysis, optimality conditions are developed for sequences of parameters diverging to infinity. The analysis is highly tailored to exponential families. Another example, drawn from machine learning, is the problem of binary classification. Here, a learning algorithm is provided with a set of training examples (such as photographs, represented as vectors of pixels); some of the examples are labeled as positive and some as negative (indicating, for example, if the photograph is or is not of a person's face). The goal then is to find a rule for predicting if a new instance should be labeled as positive or negative. Many standard algorithms, including variants of boosting and logistic regression, can be recast as minimization algorithms for specific convex objective functions~\citep[Chapter 7]{schapire_freund_book_final}.
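For instance, the logistic objective on a linearly separable dataset already exhibits this behavior: its infimum is zero, but it is approached only as the weight grows without bound. The following is a minimal numerical sketch of this phenomenon; the two-point dataset and the function name are our own illustrative choices, not drawn from the references above.

```python
import math

# Logistic loss for a 1-D classifier with weight w on a linearly
# separable toy dataset: the point x = +1 labeled +1 and x = -1
# labeled -1.  Then
#   Loss(w) = sum_i log(1 + exp(-y_i * w * x_i)) = 2*log(1 + exp(-w)).
def logistic_loss(w):
    return 2 * math.log1p(math.exp(-w))

losses = [logistic_loss(w) for w in (1.0, 10.0, 100.0)]
# The loss strictly decreases toward its infimum 0, but no finite w
# attains it: the "minimizer" lies at w -> +infinity.
assert losses[0] > losses[1] > losses[2] > 0
```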
Again, there are cases when the minimum is only attained as the underlying parameter goes to infinity. The analysis of these cases is highly specialized to the classification setting, and, similarly to the analysis of exponential families, relies on the machinery of convex duality \citep{zhang_yu_boosting,mjt_margins,nati_logistic,riskparam,GLSS18}. These highly-tailored approaches suggest that there is a structure among the divergent sequences that could perhaps be described by a unified theory. Motivated by that observation, we ask in this book: \emph{How can we extend $\Rn$ to include ``points at infinity'' that would lead to a more complete theory of convex functions?} Ideally, with such an extension, there would be no need to analyze various types of divergent sequences; instead, we could work directly in the extended space, which would contain ``infinite points'' to which these sequences converge. \bigskip The task of adding ``points at infinity'' is deceptively simple in one dimension, where we only need to add $+\infty$ and $-\infty$. The resulting extension is typically referred to as the \emph{(affinely) extended real line} and denoted $\eR=[-\infty,+\infty]$. But it is far from obvious how to generalize this concept to multiple dimensions. To develop some intuition about what is required of such an extension, we next look at several examples of convex functions and their minimizing sequences. Before proceeding, some brief notational comments: Vectors $\xx$ in $\Rn$ are usually written in bold, with components $x_i$ given in italics so that $\xx=\trans{[x_1,\dotsc,x_n]}$. We use $t=1,2,\dotsc$ as the index of sequences, unless noted otherwise, and write $\seq{\xx_t}$ to mean the sequence $\xx_1,\xx_2,\dotsc$. Limits and convergence are taken as $t\rightarrow+\infty$, unless stated otherwise. For example, $\lim \xx_t$ means $\lim_{t\to\infty}\xx_t$, and $\xx_t\rightarrow\xx$ means $\xx_t$ converges to~$\xx$ as $t\rightarrow+\infty$.
(More will be said about notation in Section~\ref{sec:prelim-notation}.) \begin{example}[Log-sum-exp] \label{ex:log-sum-exp} We first consider the log-sum-exp function, which comes up, for example, when fitting a multinomial model by maximum likelihood. Specifically, let \begin{equation} \label{eqn:probsim-defn} \probsim = \Braces{ \xx\in [0,1]^n : \sum_{i=1}^n x_i = 1 } \end{equation} be the probability simplex in $\Rn$, and let $\lsevec$ be any point in $\probsim$. Consider minimization of the function $f:\Rn\rightarrow\R$ defined, for $\xx\in\Rn$, by \[ \textstyle f(\xx)=\ln\bigParens{\sum_{i=1}^n e^{x_i}}-\lsevec\inprod\xx. \] As is standard, $f$ can be rewritten as \begin{equation} \label{eqn:log-sum-exp-fcn} f(\xx)=-\sum_{i=1}^n \lsec_i\ln\Parens{\frac{e^{x_i}}{\sum_{j=1}^n e^{x_j}}} =-\sum_{i=1}^n \lsec_i\ln p_i, \end{equation} where $p_i=e^{x_i}/\sum_{j=1}^n e^{x_j}$, so $\pp$ is a probability vector in $\probsim$. From this reformulation of $f$, it can be proved that \[ \inf_{\xx\in\Rn} f(\xx) = -\sum_{i=1}^n \lsec_i\ln\lsec_i, \] and that the minimum is attained if and only if $\pp=\lsevec$ (see, for example, Theorem 2.6.3 of \citealp{CoverTh91}). So, if $\lsec_i>0$ for all $i$, the minimum is attained by the vector $\xx$ with entries $x_i=\ln\lsec_i$. However, if $\lsec_i=0$ for some $i$, then there is no $\xx\in\Rn$ that attains the minimum. The infimum $-\sum_{i=1}^n \lsec_i\ln\lsec_i$ is in this case reached by any sequence $\seq{\xx_t}$ whose corresponding sequence $\seq{\pp_t}$ converges to $\lsevec$. For example, if $n=3$ and $\lsevec=\trans{[0,\frac13,\frac23]}$, then, along the sequence $\xx_t=\trans{[-t,\ln\frac13,\ln\frac23]}$, the function values $f(\xx_t)$ converge to the infimum as $t\to+\infty$. \end{example} This example suggests that it perhaps would suffice to allow individual coordinates to take on values $\pm\infty$. In other words, we could consider a Cartesian product of extended reals, $(\eR)^n$.
The minimizer of \Ex{log-sum-exp} would then be $\trans{[-\infty,\ln\frac13,\ln\frac23]}$. But the next example shows that this is not enough because minimization may take us in a direction that is not aligned with coordinate axes. \begin{example}[Diagonal valley] \label{ex:diagonal-valley} Let $f:\R^2\to\R$ be defined, for $\xx\in\R^2$, as \begin{equation} \label{eqn:eg-diag-val} f(\xx)=f(x_1,x_2)=e^{-x_1}+(x_2-x_1)^2, \end{equation} as plotted in Figure~\ref{fig:diagonalvalley}. The infimum is obtained in the limit of any sequence $\seq{\xx_t}$ that satisfies $x_{t,1}\to+\infty$ while also $x_{t,2}-x_{t,1}\to 0$. One such sequence is $\xx_t=\trans{[t,t]}$, which goes to infinity in the direction of the vector $\vv=\trans{[1,1]}$. On this sequence, $f(\xx_t)=e^{-t}\rightarrow 0$. If we were to just work in $(\Rext)^2$, we would find that $\xx_t\to\trans{[+\infty,+\infty]}$. However, to minimize $f$, the direction in which $\seq{\xx_t}$ goes to infinity is critical, and that direction is not represented by limit points in $(\Rext)^2$. For instance, the sequence $\xx'_t=\trans{[2t,t]}$, which goes to infinity in the direction $\trans{[2,1]}$, also converges to $\trans{[+\infty,+\infty]}$ in $(\Rext)^2$, but it fails to minimize $f$ since $f(\xx'_t)\rightarrow+\infty$. \end{example} So perhaps we should extend $\R^n$ with ``limit points'' of sequences going to infinity in various directions in $\R^n$. For instance, the sequences $\seq{\xx_t}$ and $\seq{\xx'_t}$, which follow along rays with different directions, would then also have different limits. Such limit points are considered by \citet[Section 3]{rock_wets}, who call them ``direction points,'' while developing the topology and properties of the resulting ``cosmic space.'' See also the closely related ``enlarged space'' of \citet{hansen_dupin99}. 
Another related concept is that of ``ideal points'' in the real projective plane, where a separate ideal point is introduced for each class of parallel lines~\citep[see][Chapter 8]{cox_little_oshea__ideals_varieties_algorithms}. However, these abstractions do not capture the minimizing sequences in the diagonal valley example. For example, in the cosmic space formalism, the sequence $\xx''_t=\trans{[t+1,t]}$ converges to the same limit point as $\xx_t=\trans{[t,t]}$, since they both go to infinity in the same direction, but the sequence $\seq{\xx''_t}$ fails to minimize $f$ since $f(\xx''_t)\rightarrow 1$. \begin{figure}[t!] \begin{center} \includegraphics[width=\textwidth]{figs/bw/contours__3d2_1__diagonalvalley.jpg} \end{center} \caption{A plot of the function $f$ given in Example~\ref{ex:diagonal-valley}. The top plot shows the surface of the function. The bottom plot shows a contour plot in which each curve is the set of points $\xx\in\R^2$ for which $f(\xx)$ is equal to some constant. Also depicted are the first few points of the sequence $\xx_t=\trans{[t,t]}$, which follow a halfline to infinity. } \label{fig:diagonalvalley} \end{figure} Maybe we just need to consider an ``offset'' in addition to the direction of a sequence, and consider limit points corresponding to following a ray with a specified starting point and direction. All the minimizing sequences in the preceding examples can be written as $\xx_t=t\vv+\qq$ for a suitable choice of $\vv$ and $\qq$, so each indeed proceeds along a ray with a starting point $\qq$ and a direction $\vv$. It turns out that this is still not enough, as seen in the next example: \begin{example}[Two-speed exponential] \label{ex:two-speed-exp} Let $f:\R^2\to\R$ be defined, for $\xx\in\R^2$, as \[ f(\xx)=f(x_1,x_2)=e^{-x_1} + e^{-x_2+x_1^2/2}, \] as plotted in Figure~\ref{fig:twospeedexponential}. 
The infimum is obtained in the limit of any sequence $\seq{\xx_t}$ that satisfies $x_{t,1}\to+\infty$ while also $-x_{t,2}+x_{t,1}^2/2\to-\infty$. This means $f$ cannot be minimized along any ray; rather, $x_{t,2}$ must go to $+\infty$ at least quadratically faster than $x_{t,1}$. One such sequence is $\smash{\xx_t=\trans{[t,t^2]}}$ on which $f(\xx_t)\rightarrow 0$. \end{example} The above examples show that the task of adding ``infinite points'' is subtle already in $\R^2$. So perhaps we should just stick to sequences to maintain the broadest flexibility. The downside is that sequences seem to only be indirect proxies for the underlying ``infinite points,'' so working with sequences makes it harder to discover their structural properties. Moreover, we hope that by adding the extra ``infinite points,'' the theory of convex functions will become more ``regular,'' for example, by ensuring that all lower-bounded and continuous convex functions (like all those from our examples) have minima. Our hope is to extend $\R^n$ analogously to how the set of rational numbers is extended to obtain reals, or the set of reals is extended to obtain complex numbers. When moving from rational numbers to reals to complex numbers, basic arithmetic operations naturally extend to the enlarged space, and the enlarged space has a more regular structure; for example, bounded sets of reals have real-valued infima (which is not true for rational numbers), and all polynomials with complex coefficients have complex roots (whereas polynomials with real coefficients might have no real roots). Here, we seek an extension of $\R^n$ which would lend itself to natural extensions of the key concepts in convex analysis, like convex sets and functions, conjugacy, subdifferentials, and optimality conditions, but which would also exhibit more regularity, for example, when it comes to the existence of minimizers. Our aim is to reveal the structure of how convex functions behave at infinity.
We seek to build up the foundations of an expanded theory of convex analysis that will, for instance, make proving the convergence of optimization algorithms rather routine, even if the function being optimized has no finite minimizer. \begin{figure}[t!] \begin{center} \includegraphics[width=\textwidth]{figs/bw/contours__3d2_5__twospeedexponential.jpg} \end{center} \caption{A plot of the function $f$ given in Example~\ref{ex:two-speed-exp}. Also depicted are the first few points of the sequence $\xx_t=\trans{[t,t^2]}$, which follow a parabola to infinity. } \label{fig:twospeedexponential} \end{figure} \bigskip So, what might a ``more complete'' theory of convex functions look like, and what is there to gain from such a theory? To start, and as just suggested, there might be some algorithmic benefits. For example, if the extension of $\R^n$ were a \emph{sequentially compact} topological space, then any sequence would have a convergent subsequence, making it much easier to establish convergence of optimization algorithms. But such a theory might exhibit additional regularities. To get an initial glimpse of what these might be, we study a continuous extension of the exponential function to $\eR$. \begin{example}[Exponential, continued] \label{ex:exp:cont} Let $f(x)=e^x$ over $x\in\R$. A natural, continuous, extension of $f$ to $\eR$ is the function $\ef:\eR\to\eR$ defined as \begin{align*} &\ef(x)=e^x\text{ when $x\in\R$}, \\ &\textstyle \ef(-\infty)=\lim_{x\to-\infty}e^x=0, \\ &\textstyle \ef(+\infty)=\lim_{x\to+\infty}e^x=+\infty. \end{align*} Unlike $f$, which has no minimum, the extension $\ef$ is more ``regular'' because it does attain its minimum (at $-\infty$). Now, let us attempt to generalize the notion of conjugacy to the extension $\ef$. Recall that each convex function $f:\Rn\to\Rext$ has an associated conjugate $f^*:\Rn\to\Rext$ defined as \[ f^*(\uu)=\sup_{\xx\in\Rn}\bigBracks{\xx\inprod\uu-f(\xx)}, \] which is itself a convex function. 
The conjugate is closely related to the problem of minimization of $f$: if the conjugate is finite (or equal to $-\infty$) at the origin, it means that $f$ is bounded below; and if the conjugate is differentiable at the origin, then $\nabla f^*(\zero)$ must be the minimizer of $f$. More generally, any subgradient of $f^*$ at the origin is a minimizer of $f$. Since the subgradients at the origin are exactly the slopes of nonvertical tangents of the (epi)graph of $f^*$ at the origin, the analysis of minimizers of $f$ can be conducted by analyzing the tangents of the epigraph of $f^*$ at the origin. ({Subgradients} are generalizations of gradients, and will be discussed further in Example~\ref{ex:log-sum-exp:cont}; see also Section~\ref{sec:prelim:subgrads}. And the {epigraph} of a function, defined formally in Eq.~\ref{eqn:epi-def}, is the set of points above its graph.) Is it possible to lift this style of reasoning---a conjugate, subgradients at the origin, tangents of the epigraph at the origin---to the function $\ef$ we defined above? Let's try! First, the conjugate of $\ef$ could perhaps be defined as $\fextstar:\R\to\Rext$ such that \[ \fextstar(u) =\sup_{\ex\in\eR:\:\ef(\ex)\in\R} [\ex\cdot u - \ef(\ex)]. \] We restricted the supremum to the set where $\ef(\ex)$ is finite to ensure that the expression $\ex\cdot u - \ef(\ex)$ is defined even when $\ex\in\set{-\infty,+\infty}$. This definition does not completely match the definition we provide in Chapter~\ref{sec:conjugacy}, but for the presently studied function (and more generally, whenever $f>-\infty$) it gives rise to the same $\fextstar$. The resulting $\fextstar$ coincides with the standard conjugate $f^*$, that is, \[ \fextstar(u)= f^*(u)= \begin{cases} u\ln u-u &\text{if $u\ge0$,} \\ +\infty &\text{if $u<0$,} \end{cases} \] with the convention $0\ln 0=0$. The function $\fextstar$ is differentiable at all $u>0$, with the derivative $\bigParens{\fextstar}{\strut}'(u)=\ln u$.
The function is finite at $0$, but not differentiable, because its epigraph has no nonvertical tangents at $0$ as shown in Figure~\ref{fig:u-ln-u}. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figs/xlnxminusx2.pdf} \caption{The function $\fextstar(u)$ from Example~\ref{ex:exp:cont}. The function equals $u\ln u-u$ for $u\ge 0$ and $+\infty$ for $u<0$. It is differentiable for $u>0$, but not at $u=0$, where the graph of $\fextstar$ has no nonvertical tangents.} \label{fig:u-ln-u} \end{figure} However, its epigraph has a \emph{vertical} tangent at $0$. We could represent this vertical tangent by positing the ``derivative'' equal to $-\infty$, corresponding to the observation that $\fextstar$ is ``decreasing infinitely fast at $0$'' (faster than any finite slope). If we could now apply a theorem that says that subgradients of $\fextstar$ at $0$ are minimizers of $\ef$, we would obtain that $-\infty$ is the minimizer of $\ef$, as is actually the case. \end{example} The above example suggests just one area, besides the existence of minimizers, in which the theory of an extended space and extended functions might give rise to a ``more complete'' convex analysis. In this case, by allowing subgradients to take on infinite values, extended subdifferentials would represent not only nonvertical, but also vertical tangents of the (epi)graphs of convex functions. As a result, convex functions with closed epigraphs would be subdifferentiable everywhere where they are finite, which is not the case in standard convex analysis. The extended real line, $\Rext$, appears suitable for developing such a more complete theory of convex functions in one dimension. But what might such an extension look like in multiple dimensions? 
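Before moving on, the conjugate computed in Example~\ref{ex:exp:cont} is easy to sanity-check numerically: for each $u>0$, the supremum $\sup_x\,[xu-e^x]$ can be approximated by brute force over a grid and compared to $u\ln u-u$. The sketch below is purely illustrative; the grid bounds, resolution, and function name are arbitrary choices of ours.

```python
import math

def conjugate_numeric(u, lo=-50.0, hi=50.0, steps=200001):
    # Brute-force approximation of sup_x [x*u - e^x] over a uniform
    # grid on [lo, hi].  For u > 0 the supremum is attained at
    # x = ln(u), well inside this interval.
    best = -math.inf
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        best = max(best, x * u - math.exp(x))
    return best

# Compare against the closed form u*ln(u) - u from the text.
for u in (0.5, 1.0, 2.0):
    closed_form = u * math.log(u) - u
    assert abs(conjugate_numeric(u) - closed_form) < 1e-3
```

The agreement is no surprise: the supremand $xu-e^x$ is concave in $x$, so the grid maximum sits near the true maximizer $x=\ln u$.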
Our goal here is not to merely extend the subgradients to include vertical tangents; that goal is already achieved by other frameworks, for example, by Rockafellar's ``horizon subgradients''~(\citealp{rockafellar1985extensions}; \citealp[Chapter 8]{rock_wets}). Horizon subgradients have a well-developed variational theory, but beyond two dimensions, the approach misses some structures that our theory captures, and these structures are required to analyze Examples~\ref{ex:log-sum-exp}, \ref{ex:diagonal-valley}, and~\ref{ex:two-speed-exp}. We seek to capture these structures by developing an expansive theory of convex analysis into which extended subgradients would fit like a puzzle piece, together with extended conjugates, optimality conditions, and an array of other fundamental concepts. \bigskip In this book, we propose such a theory, extending $\R^n$ to an enlarged topological space called \emph{astral space}. We study the new space's properties, and develop a theory of convex functions on this space. Although astral space includes all of $\R^n$, it is not a vector space, nor even a metric space. However, it is sufficiently well-structured to allow useful and meaningful extensions of such concepts as convexity, conjugacy, and subgradients. We develop these concepts and analyze various properties of convex functions on astral space, including the structure of their minimizers, characterization of continuity, and convergence of descent algorithms. Although the conceptual underpinnings of astral space are simple, the full formal development is somewhat involved since key topological properties need to be carefully established. As a teaser of what the book is about, we next present a condensed development of astral space. This will allow us to revisit our earlier multi-dimensional examples and situate them within the theory of functions on astral space. We then finish this introductory chapter with a high-level overview of the entire book. 
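As a small empirical preview, the minimizing (and non-minimizing) sequences from Examples~\ref{ex:log-sum-exp}, \ref{ex:diagonal-valley}, and~\ref{ex:two-speed-exp} can be checked directly. The sketch below evaluates each objective along the sequences discussed above; the particular index $t=30$ is an arbitrary illustrative choice.

```python
import math

def log_sum_exp(x, b):
    # f(x) = ln(sum_i e^{x_i}) - b.x   (log-sum-exp example)
    return math.log(sum(math.exp(xi) for xi in x)) - sum(
        bi * xi for bi, xi in zip(b, x))

def diagonal_valley(x1, x2):
    return math.exp(-x1) + (x2 - x1) ** 2

def two_speed(x1, x2):
    return math.exp(-x1) + math.exp(-x2 + x1 ** 2 / 2)

b = (0.0, 1.0 / 3.0, 2.0 / 3.0)
# Infimum of log-sum-exp: the entropy expression -sum_i b_i ln b_i.
entropy_inf = -sum(bi * math.log(bi) for bi in b if bi > 0)

t = 30.0
# x_t = (-t, ln 1/3, ln 2/3) drives f to its infimum, even though no
# finite x attains it (since b_1 = 0).
x_t = (-t, math.log(b[1]), math.log(b[2]))
assert abs(log_sum_exp(x_t, b) - entropy_inf) < 1e-6

# Diagonal valley: (t, t) minimizes; (2t, t) and (t+1, t) do not.
assert diagonal_valley(t, t) < 1e-12
assert diagonal_valley(2 * t, t) > t ** 2 / 2
assert abs(diagonal_valley(t + 1, t) - 1.0) < 1e-6

# Two-speed exponential: (t, t^2) minimizes, with x_2 growing
# quadratically faster than x_1, as required.
assert two_speed(t, t ** 2) < 1e-12
```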
\subsection{A quick introduction to astral space} \label{sec:intro:astral} In constructing astral space, our aim is to derive a topological extension of $\R^n$ in which various ``points at infinity'' have been added, corresponding to limits of sequences such as those from Examples~\ref{ex:exp}, \ref{ex:log-sum-exp}, \ref{ex:diagonal-valley}, and~\ref{ex:two-speed-exp}. In fact, we seek a \emph{compactification} of $\R^n$, in which every sequence has a convergent subsequence. There are many possible compactifications, which differ in how many new points they add and how finely they differentiate among different kinds of convergence to infinity. We would like to add as few points as possible, but the fewer points we add, the fewer functions will be continuous in the new space, because there will be more sequences converging to each new point. In this work, we choose a tradeoff in which we add as few points as possible while still ensuring that all \emph{linear} functions, which are the bedrock of convex analysis, remain continuous. As a consequence, we will see that ``anything linear'' is likely to behave nicely when extended to astral space, including such notions as linear maps, hyperplanes, and halfspaces, as well as convex sets, convex functions, conjugates, and subgradients, all of which are based, in their definitions, on linear functions. \paragraph{Convergence in all directions.} To formalize this idea, we note that linear functions on $\R^n$ all take the form $\xx\mapsto\xx\inprod\uu$ for some $\uu\in\R^n$, and so each can be viewed as realizing a ``projection along a vector $\uu$'' for some $\uu\in\R^n$. We define a notion of a well-behaved sequence with respect to linear maps by saying that a sequence $\seq{\xx_t}$ in $\R^n$ \emph{converges in all directions} if its projection $\seq{\xx_t\inprod\uu}$ along any vector $\uu\in\R^n$ converges in $\eR$, meaning that $\lim(\xx_t\inprod\uu)$ exists in $\eR$ for all $\uu\in\Rn$. 
For example, every sequence that converges in $\R^n$ also converges in all directions, because $\xx_t\to\xx$ implies that $\xx_t\inprod\uu\to\xx\inprod\uu$. There are additional sequences that converge in all directions, such as all those appearing in Examples~\ref{ex:exp}, \ref{ex:log-sum-exp}, \ref{ex:diagonal-valley}, and~\ref{ex:two-speed-exp}.\looseness=-1 If two sequences $\seq{\xx_t}$ and $\seq{\yy_t}$ both converge in all directions and also ${\lim(\xx_t\inprod\uu)}={\lim(\yy_t\inprod\uu)}$ for all $\uu\in\R^n$, we say that $\seq{\xx_t}$ and $\seq{\yy_t}$ are \emph{all-directions equivalent}. \paragraph{Astral space.} All-directions equivalence partitions the set of sequences that converge in all directions into equivalence classes. As formally defined in Section~\ref{sec:astral:construction}, to construct \emph{astral space}, we associate an \emph{astral point} with each such equivalence class; this point will then be exactly the common limit of every sequence in the associated class. We write $\eRn$ for $n$-dimensional astral space, and use bar or overline notation to denote its elements, such as $\exx$ or $\eyy$.\looseness=-1 This definition allows us to naturally extend the inner product. Specifically, for all $\xbar\in\extspace$ and all $\uu\in\R^n$, we define the \emph{coupling function} \[ \exx\inprod\uu=\lim(\xx_t\inprod\uu)\in\eR, \] where $\seq{\xx_t}$ is any sequence in the equivalence class associated with $\exx$. Note that the value of $\exx\inprod\uu$ does not depend on the choice of $\seq{\xx_t}$ because all the sequences associated with $\exx$ are all-directions equivalent and therefore have identical limits $\lim(\xx_t\inprod\uu)$. In fact, the values of these limits uniquely identify each astral point, so every astral point $\xbar$ is uniquely identified by the values of $\xbar\cdot\uu$ over all $\uu\in\R^n$.
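The coupling values can be approximated numerically by evaluating a representative sequence at a large index. The crude sketch below contrasts the sequences $\xx_t=\trans{[t,t]}$ and $\xx''_t=\trans{[t+1,t]}$ from the diagonal valley discussion: their projections agree along $\uu=\trans{[1,1]}$ but disagree along $\uu=\trans{[1,-1]}$, so they define distinct astral points. The cutoff and the index $t=10^6$ are arbitrary heuristics of ours, not part of the theory.

```python
def coupling_limit(seq_at, u, t_big=1e6, cutoff=1e5):
    # Numerically guess lim_t <x_t, u> by evaluating the sequence at a
    # single large index and treating huge values as +/- infinity.
    # This is a crude stand-in for the true limit, for illustration.
    val = sum(c * ui for c, ui in zip(seq_at(t_big), u))
    if val > cutoff:
        return float("inf")
    if val < -cutoff:
        return float("-inf")
    return val

x = lambda t: (t, t)          # heads to infinity in direction (1, 1)
y = lambda t: (t + 1.0, t)    # same direction, different "offset"

# Projections along u = (1, 1) agree (both are +infinity) ...
assert coupling_limit(x, (1, 1)) == float("inf")
assert coupling_limit(y, (1, 1)) == float("inf")
# ... but along u = (1, -1) they differ, so the two sequences are not
# all-directions equivalent: they correspond to distinct astral points,
# even though cosmic space would identify them.
assert coupling_limit(x, (1, -1)) == 0.0
assert coupling_limit(y, (1, -1)) == 1.0
```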
The space $\R^n$ is naturally included in $\eRn$, coinciding with equivalence classes of sequences that converge to points in $\R^n$. But there are additional elements, which are said to be \emph{infinite} since they must satisfy $\exx\inprod\uu\in\set{-\infty,+\infty}$ for at least one vector $\uu$. The simplest astral points (other than those in $\Rn$) are called \emph{astrons}, each obtained as the limit of a sequence of the form $\seq{t\vv}$ for some $\vv\in\R^n$. The resulting astron is denoted $\limray{\vv}$. By reasoning about the limit of the sequence $\seq{t\vv\inprod\uu}$, for $\uu\in\Rn$, it can be checked that \[ (\limray{\vv})\cdot\uu = \begin{cases} +\infty & \text{if $\vv\cdot\uu>0$,} \\ 0 & \text{if $\vv\cdot\uu=0$,} \\ -\infty & \text{if $\vv\cdot\uu<0$.} \end{cases} \] If $\vv\neq\zero$, then the associated astron $\limray{\vv}$ is infinite, and is exactly the limit of a sequence that follows a ray from the origin in the direction $\vv$. In one-dimensional astral space, $\overline{\R^1}$, the only astrons are $0$ and two others, corresponding to $+\infty$ and $-\infty$. In multiple dimensions, besides $\zero$, there is a distinct astron $\omega\vv$ associated with every unit vector $\vv$. The astron construction can be generalized in a way that turns out to yield all additional astral points, including the limits in Examples~\ref{ex:log-sum-exp}, \ref{ex:diagonal-valley}, and~\ref{ex:two-speed-exp}. \begin{example}[Polynomial-speed sequence] \label{ex:poly-speed-intro} Let $\qq\in\Rn$, let $\vv_1,\dotsc,\vv_k\in\Rn$, for some $k\geq 0$, and consider the sequence \begin{equation} \label{eqn:xt-poly-seq} \xx_t = t^k \vv_1 + t^{k-1} \vv_2 + \cdots + t \vv_k + \qq = \sum_{i=1}^k t^{k-i+1} \vv_i + \qq. \end{equation} We can verify that this sequence converges in all directions, and therefore corresponds to an astral point, by calculating the limit of $\seq{\xx_t\cdot\uu}$ along any direction $\uu\in\Rn$.
To calculate this limit, note that the evolution of the sequence $(\xx_t)$ is dominated by its overwhelmingly rapid growth in the direction of $\vv_1$. As a result, if $\vv_1\inprod\uu>0$ then $\xx_t\cdot\uu\rightarrow +\infty$, and if ${\vv_1\inprod\uu<0}$ then $\xx_t\cdot\uu\rightarrow -\infty$. However, if $\vv_1\inprod\uu=0$, then the term involving $\vv_1$ vanishes when considering $\xx_t\cdot\uu$. So, when projecting the sequence along vectors $\uu$ perpendicular to $\vv_1$, the direction $\vv_2$ becomes dominant. Considering these vectors $\uu$, we find once again that $\xx_t\cdot\uu$ converges to $+\infty$ or $-\infty$ if $\vv_2\inprod\uu>0$ or $\vv_2\inprod\uu<0$, respectively. This analysis can be continued, so that we next consider vectors $\uu$ in the subspace orthogonal to both $\vv_1$ and $\vv_2$, where $\vv_3$ is dominant. And so on. Eventually, for vectors $\uu$ that are orthogonal to all $\vv_1,\dotsc,\vv_k$, we find that $\xx_t\cdot\uu$ converges to the finite value $\qq\cdot\uu$.\looseness=-1 In summary, this argument shows that the sequence $\seq{\xx_t}$ converges in all directions, and its corresponding astral point $\exx$ is described by \begin{equation} \label{eq:intro-astral-point} \newcommand{\ttinprod}{\mkern-.8mu\inprod\mkern-.8mu} \!\!\! \xbar\ttinprod\uu= \begin{cases} +\infty &\text{if $\vv_i\ttinprod\uu>0$ for some $i$, and $\vv_j\ttinprod\uu=0$ for $j=1,\dotsc,i-1$,} \\ -\infty &\text{if $\vv_i\ttinprod\uu<0$ for some $i$, and $\vv_j\ttinprod\uu=0$ for $j=1,\dotsc,i-1$,} \\ \qq\ttinprod\uu &\text{if $\vv_i\ttinprod\uu=0$ for $i=1,\dotsc,k$.} \end{cases} \end{equation} \end{example} Although this example analyzes only one type of a sequence, we will later see (by a different style of reasoning) that every sequence that converges in all directions must have a similar structure. 
Namely, every such sequence must have a most dominant direction $\vv_1$ in which it is tending to infinity most rapidly, followed by a second most dominant direction~$\vv_2$, and so on, with residual convergence to some finite point $\qq$. As a result, as we prove in \Cref{sec:astral:space:summary}, every astral point in $\extspace$ is characterized by \eqref{eq:intro-astral-point} for some choice of $\vv_1,\dotsc,\vv_k,\qq\in\Rn$ (for some finite $k\geq 0$). \paragraph{Astral topology.} We endow astral space with an \emph{astral topology} (formally defined in Section~\ref{subsec:astral-pts-as-fcns}), which turns out to be the ``coarsest'' topology under which all linear maps $\xx\mapsto\xx\inprod\uu$ defined over $\R^n$ can be extended continuously to $\extspace$, specifically, to yield the maps $\exx\mapsto\exx\inprod\uu$ (for all $\uu\in\Rn$). In this topology, astral space has the following key properties (see Theorems~\ref{thm:i:1} and~\ref{thm:first-count-and-conseq}): \begin{letter-compact} \item \label{intro:prop:compact} $\extspace$ is compact.\item \label{intro:prop:first} $\extspace$ is a first-countable topological space. \item \label{intro:prop:iad:1} $\xbar_t\rightarrow\xbar$ if and only if $\xbar_t\cdot\uu\rightarrow\xbar\cdot\uu$ for all $\uu\in\Rn$. \item \label{intro:prop:iad:2} If $\seq{\xbar_t\cdot\uu}$ converges in $\Rext$ for all $\uu\in\Rn$, then $\xbar_t\rightarrow\xbar$ for some $\xbar\in\extspace$. \item \label{intro:prop:dense} $\Rn$ is dense in $\extspace$. \item \label{intro:prop:R1} $\extspac{1}$ is homeomorphic with $\eR$. \end{letter-compact} The most important property is compactness, a key aspect in how astral space is more ``regular'' than $\R^n$. Because of compactness, every continuous function on astral space attains its minimum. Also, every closed subset is compact. 
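Returning to Example~\ref{ex:poly-speed-intro}, the case analysis in \eqref{eq:intro-astral-point} is mechanical enough to transcribe directly: scan the dominant directions in order, and fall through to the finite part only when every $\vv_i\inprod\uu$ vanishes. The sketch below is our own illustrative transcription (the function, variable names, and the tolerance are assumptions, not notation from the book).

```python
import math

def astral_coupling(vs, q, u, tol=1e-12):
    # Coupling value <xbar, u> for the astral point described by the
    # dominant directions vs = [v1, ..., vk] and finite part q,
    # following the case analysis of the polynomial-speed example:
    # the first v_i with v_i.u != 0 decides the sign of infinity;
    # if all v_i.u vanish, the finite part q takes over.
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    for v in vs:
        d = dot(v, u)
        if d > tol:
            return math.inf
        if d < -tol:
            return -math.inf
        # d == 0: this direction vanishes; try the next one.
    return dot(q, u)

vs = [(1.0, 0.0), (0.0, 1.0)]   # v1 most dominant, then v2
q = (0.5, -2.0)
assert astral_coupling(vs, q, (1.0, 0.0)) == math.inf     # v1.u > 0
assert astral_coupling(vs, q, (-1.0, 5.0)) == -math.inf   # v1.u < 0
assert astral_coupling(vs, q, (0.0, -1.0)) == -math.inf   # v1.u = 0, v2.u < 0
assert astral_coupling(vs, q, (0.0, 0.0)) == 0.0          # all vanish: q.u
```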
First-countability is another crucial structural property of astral space, allowing us to work with limits of sequences much as in $\R^n$, although astral space is not metrizable for $n\ge 2$ (see Section~\ref{sec:not-second}). First-countability implies, for example, that the closure of a set $S\subseteq\eRn$ coincides with the set of limits of all sequences with elements in~$S$. Also, a map $f:\eRn\to Y$ is continuous if and only if $\exx_t\to\exx$ implies $f(\exx_t)\to f(\exx)$. Since astral space is both compact and first-countable, it is also sequentially compact, meaning every sequence in $\eRn$ has a convergent subsequence, greatly simplifying the analysis of optimization algorithms. Many of our proofs depend critically on compactness and first-countability. Properties~(\ref{intro:prop:iad:1}) and~(\ref{intro:prop:iad:2}) establish that convergence in astral space is the same as convergence in all directions. Indeed, convergence in all directions is how we often analyze convergence in astral space. Finally, that $\R^n$ is dense in $\eRn$ means that $\eRn$ is the closure of $\R^n$, which is part of what it means for $\extspace$ to be a compactification. As a result, every astral point $\xbar\in\extspace$ is arbitrarily ``near'' to $\Rn$ in the topological sense that every neighborhood of $\xbar$ must include a point in $\Rn$. In one dimension, $\extspac{1}$ is homeomorphic with $\eR$, so we can write $\eR$ instead of $\extspac{1}$ and work with the standard topology on $\eR$. In fact, we later define $\extspac{1}$ to be \emph{equal} to $\Rext$. \paragraph{Representing astral points.} Although astral space is a topological extension of the vector space $\R^n$, it is not a vector space itself, and astral points cannot be added.
The root problem is that the sum of $-\infty$ and $+\infty$ is undefined, making it impossible, for example, to establish the identity $(\xbar + \ybar)\inprod\uu=\xbar\inprod\uu+\ybar\inprod\uu$, which is meaningless when $-\infty$ and $+\infty$ appear on the right-hand side. While standard addition does not generalize to astral space, a noncommuta\-tive variant does. For $\barx,\bary\in\Rext$, we write this operation, called \emph{leftward addition}, as $\barx\plusl \bary$. It is the same as ordinary addition except that, when adding $-\infty$ and $+\infty$, the argument on the \emph{left} dominates. Thus, \begin{align} (+\infty) \plusl (-\infty) &= +\infty, \nonumber \\ (-\infty) \plusl (+\infty) &= -\infty, \label{eqn:intro-left-sum-defn} \\ \barx \plusl \bary &= \barx + \bary \;\;\mbox{in all other cases.} \nonumber \end{align} This operation can be extended from $\Rext$ to $\extspace$: For $\xbar,\ybar\in\extspace$, the leftward sum, written $\xbar\plusl\ybar$, is defined to be that unique point in $\extspace$ for which \[ (\xbar\plusl\ybar)\cdot\uu = \xbar\cdot\uu \plusl \ybar\cdot\uu \] for all $\uu\in\Rn$. Such a point must always exist (see Proposition~\ref{pr:i:6}). While leftward addition is not commutative, it is associative. Scalar multiplication of a vector and standard matrix-vector multiplication also extend to astral space, and are distributive with leftward addition. All astral points can be decomposed as the leftward sum of astrons and a finite part, corresponding exactly to the sequence of dominant directions that were seen in Example~\ref{ex:poly-speed-intro}. In that example, \begin{equation} \label{eq:intro-sum-astrons} \xbar = \limray{\vv_1}\plusl\dotsb\plusl\limray{\vv_k}\plusl\qq. \end{equation} This can be seen by comparing the definitions and observations above with \eqref{eq:intro-astral-point}. Such a representation of an astral point is not unique. 
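The three cases in \eqref{eqn:intro-left-sum-defn} translate directly into code. The minimal sketch below implements leftward addition on $\Rext$, using Python's IEEE infinities as stand-ins for $\pm\infty$, and checks the noncommutativity and associativity just noted (the function name is our own).

```python
import math

def plusl(a, b):
    # Leftward addition on the extended reals: ordinary addition,
    # except that when adding -inf and +inf the LEFT argument wins.
    if math.isinf(a) and math.isinf(b) and a != b:
        return a
    return a + b

inf = math.inf
assert plusl(inf, -inf) == inf       # left argument dominates
assert plusl(-inf, inf) == -inf
assert plusl(3.0, 4.0) == 7.0        # agrees with + in all other cases
assert plusl(-inf, -inf) == -inf

# Noncommutative, but associative (spot-checked on a small set):
assert plusl(inf, -inf) != plusl(-inf, inf)
vals = (inf, -inf, 1.0)
assert all(plusl(plusl(a, b), c) == plusl(a, plusl(b, c))
           for a in vals for b in vals for c in vals)
```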
Nevertheless, every astral point does have a unique \emph{canonical representation} in which the vectors $\vv_1,\dotsc, \vv_k$ are orthonormal, and $\qq$ is orthogonal to all of them (see Theorem~\ref{pr:uniq-canon-rep}). Every astral point has an intrinsic \emph{astral rank}, which is the smallest number of astrons needed to represent it, and which is also equal to the number of astrons appearing in its canonical representation. In particular, this means that every astral point in $\extspace$ has astral rank at most $n$. Points in $\Rn$ have astral rank~$0$. Astral points of the form $\omega\vv\plusl\qq$ with $\vv\ne\zero$ have astral rank $1$; these are the points obtained as limits of sequences that go to infinity along a halfline with starting point $\qq$ and direction $\vv$, such as $\xx_t= t\vv+\qq$. \subsection{Minimization revisited} \label{sec:intro-min-revisit} In each of the examples from \Cref{sec:motivation}, the function being considered was minimized only via a sequence of points going to infinity. The preceding development allows us now to take the limit of each of those sequences in astral space. The next step is to extend the functions themselves to astral space so that they can be evaluated at such an infinite limit point. To do so, we focus especially on a natural extension of a function $f:\R^n\to\eR$ to astral space called the \emph{lower semicontinuous extension} (or simply, \emph{extension}) of $f$ to $\eRn$, which we introduce in Section~\ref{sec:lsc:ext}. This function, written $\fext:\extspace\rightarrow\Rext$, is defined at any $\xbar\in\eRn$ as \[ \fext(\xbar) = \InfseqLiminf{\seq{\xx_t}}{\Rn}{\xx_t\rightarrow \xbar} {f(\xx_t)}, \] where the infimum is taken over all sequences $\seq{\xx_t}$ in $\Rn$ that converge to $\xbar$. In words, $\ef(\xbar)$ is the infimum across all possible limit values achievable by any sequence $\bigParens{f(\xx_t)}$ for which $\xx_t\to\xbar$. 
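To see this definition in action in one dimension, consider the following numerical sketch (ours, for illustration; we assume the exponential example has the standard form $f(x)=e^x$, which is consistent with the surrounding discussion). Every sequence tending to $-\infty$ drives $f$ to $0$, so the infimum over such sequences, namely $\fext(-\infty)$, is $0$, even though $f(x)>0$ for every real $x$:

```python
import math

# f(x) = e^x on R; its lower semicontinuous extension satisfies
# fbar(-inf) = 0, since f(x_t) -> 0 along every sequence x_t -> -inf.
f = math.exp

# Three different sequences converging to -infinity:
seqs = [
    [-t for t in range(1, 60)],                 # x_t = -t
    [-t * t for t in range(1, 60)],             # x_t = -t^2
    [-t + math.sin(t) for t in range(1, 60)],   # drift to -inf with oscillation
]

# Along each sequence, the limit value of f is 0 (we inspect the tail):
tails = [f(s[-1]) for s in seqs]
assert all(v < 1e-10 for v in tails)
# So the infimum over sequences is 0 = fbar(-inf), though f > 0 on R.
```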
For example, this definition yields the extension of the exponential function given in Example~\ref{ex:exp:cont}. In general, the extension $\ef$ is a lower semicontinuous function on a compact space, and therefore always has a minimizer. Moreover, by $\fext$'s construction, an astral point $\xbar$ minimizes the extension $\fext$ if and only if there exists a sequence in $\Rn$ converging to $\xbar$ that minimizes the original function $f$. Additionally, if $\ef$ is not just lower semicontinuous, but actually continuous at $\xbar$, and if $\xbar$ minimizes $\fext$, then \emph{every} sequence in $\Rn$ converging to $\xbar$ minimizes $f$. This makes $\fext$'s continuity properties algorithmically appealing; understanding when and where the extension $\fext$ is continuous is therefore one important focus of this book. \bigskip Having extended both $\Rn$ and functions on $\Rn$, we are now ready to return to the minimization problems from \Cref{sec:motivation} as an illustration of the notions discussed above, as well as a few of the general results appearing later in the book (though somewhat specialized for simpler presentation). In what follows, $\ee_1,\dotsc,\ee_n$ denote the standard basis vectors in $\Rn$ (so $\ee_i$ is all $0$'s except for a $1$ in the $i$-th component). \begin{example}[Diagonal valley, continued] \label{ex:diagonal-valley:cont} We derive the extension $\fext$ of the diagonal valley function from Example~\ref{ex:diagonal-valley}, which turns out to be continuous everywhere. First, we rewrite the function using inner products, since these can be continuously extended to astral space: \[ f(\xx)=e^{-x_1}+(x_2-x_1)^2=e^{\xx\inprod(-\ee_1)}+\bigBracks{\xx\inprod(\ee_2-\ee_1)}^2. \] Then, to obtain the continuous extension, we can simply rely on the continuity of the coupling function, yielding \[ \ef(\exx)=e^{\exx\inprod(-\ee_1)}+\bigBracks{\exx\inprod(\ee_2-\ee_1)}^2.
\] Here, implicitly, the functions $e^x$ and $x^2$ have been extended in the natural way to $\Rext$ according to their limits as $x\rightarrow\pm\infty$. Note that both terms of the summation are nonnegative (though possibly $+\infty$), so the sum is always defined. The minimizing sequence $\xx_t=\trans{[t,t]}$ from Example~\ref{ex:diagonal-valley} follows a ray from the origin, and converges in astral space to the point $\xbar=\limray{\vv}$ where $\vv=\trans{[1,1]}$. This point is an astron, and its astral rank is one. That $\fext$ is continuous everywhere and has a rank-one minimizer is not a coincidence: In Section~\ref{subsec:rank-one-minimizers}, we prove that if the extension $\fext$ is continuous everywhere, then it must have a minimizer of astral rank at most one. \looseness=-1 \end{example} \begin{example}[Two-speed exponential, continued] \label{ex:two-speed-exp:cont} Recall the two-speed exponential function from Example~\ref{ex:two-speed-exp}: \[ f(\xx)=e^{-x_1} + e^{-x_2+x_1^2/2}. \] Unlike the previous example, this function's extension $\fext$ is not continuous everywhere. Earlier, we argued that the sequence $\xx_t=\trans{[t,t^2]}$ minimizes $f$, satisfying $f(\xx_t)\to 0$. The sequence $\seq{\xx_t}$ converges to the astral point $\xbar=\omega\ee_2\plusl\omega\ee_1$ with astral rank~2. On the other hand, the sequence $\xx'_t=\trans{[2t,t^2]}$ also converges to $\omega\ee_2\plusl\omega\ee_1$, but $f(\xx'_t)\to +\infty$. This shows that $\fext$ is not continuous at $\xbar$, and means, more specifically, that the extension $\ef$ satisfies $\ef(\xbar)=0$, but not all sequences converging to $\xbar$ minimize $f$.\looseness=-1 It turns out that $\xbar$ is the only minimizer of $\ef$, so $\ef$ is also an example of a function that does not have a rank-one minimizer, and so cannot be minimized by following a ray to infinity.
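The contrasting behavior of the two sequences is easy to observe numerically. The sketch below (ours, for illustration) simply evaluates $f(\xx)=e^{-x_1}+e^{-x_2+x_1^2/2}$ at a moderately large index $t$ along each sequence:

```python
import math

def f(x1, x2):
    # The two-speed exponential function from the example.
    return math.exp(-x1) + math.exp(-x2 + x1 ** 2 / 2)

# Along x_t = (t, t^2):  f = e^{-t} + e^{-t^2/2}  -> 0.
assert f(20, 400) < 1e-8

# Along x'_t = (2t, t^2):  f = e^{-2t} + e^{t^2}  -> +infinity,
# even though both sequences converge to the same astral point.
assert f(40, 400) > 1e80
```

So $\ef$ takes the value $0$ at the common astral limit (the infimum over sequences), yet a sequence converging to that same point can drive $f$ to $+\infty$, exactly the discontinuity described above.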
As discussed in the previous example, the fact that $\fext$ has no rank-one minimizer implies generally that it cannot be continuous everywhere. Of course, for minimizing a function, we mainly care about continuity at its minimizers; in this case, the function $\fext$ is discontinuous exactly at its only minimizer. In general, we might hope that functions that are not continuous everywhere are at least continuous at their minimizers. For functions $f$ that are finite everywhere, such as this one, however, this turns out to be impossible. In Theorem~\ref{thm:cont-conds-finiteev}, we prove that, in this case, $\fext$ is continuous everywhere if and only if it is continuous at all its minimizers. So if $\fext$ is discontinuous anywhere, then it must be discontinuous at at least one of its minimizers, as in this example. In Section~\ref{subsec:charact:cont}, we determine the exact set of astral points where $\fext$ is both continuous and not $+\infty$. In this case, that characterization implies directly that $\fext$ is both finite and continuous exactly at the points in $\R^2$ and at all points of the form $\limray{\ee_2}\plusl\qq$ with $\qq\in\R^2$. In particular, this means it is not continuous at its minimizer, $\xbar=\limray{\ee_2}\plusl\limray{\ee_1}$, as we determined earlier.\end{example} \begin{example}[Log-sum-exp, continued] \label{ex:log-sum-exp:cont} The log-sum-exp function from Example~\ref{ex:log-sum-exp} has a continuous extension similar to the diagonal valley function, but a direct construction is slightly more involved. Instead, we take a different route that showcases an exact dual characterization of continuity in terms of a geometric property related to $f$'s conjugate, $\fstar$.
In Section~\ref{subsec:charact:cont:duality}, we show that when $f$ is finite everywhere, its extension $\ef$ is continuous everywhere if and only if the effective domain of $\fstar$ has a conic hull that is \emph{polyhedral}, meaning that it is equal to the intersection of finitely many halfspaces in $\Rn$. (The \emph{effective domain} of a function with range $\Rext$ is the set of points where the function is not $+\infty$. The \emph{conic hull} of a set $S\subseteq\Rn$ is obtained by taking all of the nonnegative combinations of points in $S$.) For the log-sum-exp function, we have \[ \fstar(\uu)=\begin{cases} \sum_{i=1}^n (u_i+\lsec_i)\ln(u_i+\lsec_i) &\text{if $\uu+\lsevec\in\probsim$,} \\ +\infty &\text{otherwise,} \end{cases} \] where, as before, $\probsim$ is the probability simplex in $\Rn$. Hence, the effective domain of $\fstar$ is $\probsim-\lsevec$, which, being a translation of a simplex, is polyhedral. Its conic hull is therefore also polyhedral, so $\ef$ is continuous everywhere (by Corollary~\ref{cor:thm:cont-conds-finiteev}). Earlier, we considered the particular case that $n=3$ and $\lsevec=\trans{[0,\frac13,\frac23]}$. We saw, in this case, that $f$ is minimized by the sequence $\xx_t=t\vv+\qq$, where $\vv=\trans{[-1,0,0]}$ and $\qq=\trans{[0,\ln\frac13,\ln\frac23]}$. In astral terms, the sequence $\seq{\xx_t}$ converges to $\xbar=\limray{\vv}\plusl\qq$, which does indeed minimize $\fext$ (as does every sequence converging to $\xbar$, since $\fext$ is continuous everywhere). These facts can be related to differential properties of $f^*$. In Example~\ref{ex:exp:cont}, we discussed that any subgradient of $\fstar$ at $\zero$ minimizes $f$ (with mild conditions on $f$). 
By its standard definition, $\xx\in\Rn$ is a \emph{subgradient} of $\fstar$ at $\uu_0\in\Rn$ if \begin{equation} \label{eqn:intro-stand-subgrad} \fstar(\uu) \geq \fstar(\uu_0) + \xx\cdot (\uu-\uu_0) \end{equation} for all $\uu\in\Rn$, so that $\fstar(\uu)$ is supported at $\uu_0$ by the affine function (in $\uu$) on the right-hand side of the inequality. In this case, as in Example~\ref{ex:exp:cont}, $\fstar$ has \emph{no} subgradients at $\zero$, even though $\zero$ is in the effective domain of $\fstar$, corresponding to $f$ having no finite minimizers. This irregularity goes away when working in astral space. In Section~\ref{sec:astral-dual-subgrad}, we will see how infinite points can be added as subgradients of $f^*$ in such cases. Although our formulation there is more general, when $\fstar(\uu_0)\in\R$, we simply need to replace the inner product in \eqref{eqn:intro-stand-subgrad} with the astral coupling function; thus, a point $\xbar\in\extspace$ is an \emph{astral dual subgradient} of $\fstar$ at $\uu_0$ if \[ \fstar(\uu) \geq \fstar(\uu_0) + \xbar\cdot (\uu-\uu_0) \] for all $\uu\in\Rn$. Whereas, as just seen, it is possible for a function to have no standard subgradients at a particular point, in astral space, \emph{every} point must have an astral dual subgradient (even points not in the function's effective domain). In this case, from the preceding development, it can be checked that $\xbar=\limray{\vv}\plusl\qq$, as defined above, does indeed satisfy this condition at $\uu_0=\zero$, and so is an astral dual subgradient. By general results given in Chapter~\ref{sec:gradients}, this implies that $\zero$ is a (primal) astral subgradient of $\fext$ at $\xbar$, which in turn implies that $\xbar$ minimizes $\fext$. 
Thus, whereas the conjugate of the original function $f$ has no subgradient at $\zero$ corresponding to $f$ having no finite minimizer, in astral space, the function's extension $\fext$ does attain its minimum at an astral point that, correspondingly, is an astral dual subgradient of the conjugate at $\zero$. \end{example} These examples illustrate some of the key questions and topics that we study in astral space, including continuity, conjugacy, convexity, the structure of minimizers, and differential theory. \subsection{Overview of the book} \label{sec:intro:overview} We end this chapter with a brief summary of the different parts of the book. At the highest level, this book defines astral space, studies its properties, then develops the theory of convex sets and convex functions on astral space. We follow the conceptual framework developed for convex analysis on $\R^n$ by \citet{ROC}, which in turn grew out of Fenchel's lecture notes~\citeyearpar{fenchel1953convex}. \overviewsecheading{part:prelim} We begin with preliminaries and a review of relevant topics in linear algebra, topology, and convex analysis. This review is meant to make the book self-contained, and to provide a convenient reference, not as a substitute for a beginning introduction to these areas. \overviewsecheading{part:astral-space} In the first main part of the book, we formally construct astral space along the lines of the outline given in Section~\ref{sec:intro:astral}. We then define its topology, and prove its main properties, including compactness and first countability. Next, we extend linear maps to astral space and study in detail how astral points can be represented using astrons and leftward addition, or even more conveniently using matrices. We directly link these representations to the sequences converging to a particular astral point. 
We then explore how astral points can be decomposed using their representations in ways that are not only useful, but also revealing of the structure of astral space itself. We end this part with a comparison to the cosmic space of \citet[Section 3]{rock_wets}. \overviewsecheading{part:extending-functions} In the next part, we explore how ordinary functions $f$ on $\Rn$ can be extended to astral space. We first define the lower semicontinuous extension $\fext$, as briefly seen in Section~\ref{sec:intro-min-revisit}, and prove some of its properties. We then define astral conjugates. We especially focus on the astral \emph{biconjugate} of a function $f$ on $\Rn$, which can be seen as an alternative way of extending $f$ to astral space. We give precise conditions for when this biconjugate is the same as the lower semicontinuous extension. In the last chapter of this part, we derive rules for computing lower semicontinuous extensions (for instance, for the sum of two functions, or for the composition of a function with an affine map). \overviewsecheading{part:convex-sets} The next part of the book explores convexity in astral space. We begin by defining astral convex sets, along with such related notions as astral halfspaces and astral convex hull. We then look at how astral convex sets can be constructed or operated upon, for instance, under an affine map. We introduce here an especially useful operation on sets, called \emph{sequential sum}, which is the astral analogue of taking the sum of two sets in $\Rn$. Astral cones are another major topic of this part, which we here define and study in detail. We especially look at useful operations which relate standard cones in $\Rn$ to astral cones, such as the analogue of the polar operation. This study of astral cones turns out to be particularly relevant when later characterizing continuity of functions. 
With these notions in hand, we are able to prove separation theorems for astral space, for instance, showing that any two disjoint, closed, convex sets in astral space can be strongly separated by an astral hyperplane. As in standard convex analysis, such theorems are fundamental, with many consequences. We next define and explore astral linear subspaces, which are a kind of cone. Finally, we study convex functions defined on astral space (which might not be extensions of functions on $\Rn$), giving characterizations of convexity as well as operations for their construction. \overviewsecheading{part:minimizers-and-cont} The next part of the book studies in detail the structure of minimizers of astral convex functions, especially the extensions of convex functions on $\Rn$. We relate the ``infinite part'' of all such minimizers to a set called the \emph{astral recession cone}, and their ``finite part'' to a particularly well-behaved convex function on $\Rn$ called the \emph{universal reduction}. We give characterizations and a kind of procedure for enumerating all minimizers of a given function. We also focus especially on minimizers of a particular canonical form. We then explore the structure of minimizers in specific cases, focusing especially on the astral rank of minimizers, which turns out to be related to continuity properties of the function. In the final chapter of this part, we precisely characterize the set of points where the extension of a convex function on $\Rn$ is continuous. We further characterize exactly when that extension is continuous everywhere. \overviewsecheading{part:differential-theory} In the last part of the book, we study differential theory extended to astral space. We define two forms of astral subgradient: the first assigns (finite) subgradients to points at infinity; the second assigns possibly infinite subgradients to finite points (for instance, to points with vertical tangents).
We show how these two forms of subgradient are related via conjugacy. We also show that every convex function has subgradients of the second form at every point, unlike in standard convex analysis. We derive calculus rules for astral subgradients, for instance, for the sum of two functions. We then explore how important optimality conditions, such as the KKT conditions, extend to astral space, making for a more complete characterization of the solutions of a convex program. We also study iterative methods for minimizing a convex function which might have no finite minimizer, and apply these to a range of commonly encountered methods and problems. Finally, we take a close look at how these ideas can be applied to the well-studied exponential family of distributions which are commonly fit to data by minimizing a particular convex function. \part{Preliminaries and Background} \label{part:prelim} \section{General preliminaries and review of linear algebra} \label{sec:more-prelim} This chapter provides some notational conventions that we adopt, and some general background, including a review of a few topics from linear algebra. \subsection{Notational conventions} \label{sec:prelim-notation} We write $\R$ for the set of reals, $\rats$ for the set of rational numbers, and $\nats$ for the set of (strictly) positive integers. We let $\eR$ denote the set of extended reals: \[\Rext=[-\infty,+\infty]=\R\cup\set{-\infty,+\infty}.\] Also, $\Rpos=[0,+\infty)$ is the set of all nonnegative reals, and $\Rstrictpos=(0,+\infty)$ is the set of all strictly positive reals; likewise, $\Rneg=(-\infty,0]$ and $\Rstrictneg=(-\infty,0)$. In general, we mostly adhere to the following conventions: Scalars are denoted like $x$, in italics. Vectors in $\Rn$ are denoted like $\xx$, in bold. Points in astral space are denoted like $\xbar$, in bold with a bar. Matrices in $\R^{m\times n}$ are denoted like $\A$, in bold uppercase, with the transpose written as $\trans{\A\!}$. 
Scalars in $\Rext$ are sometimes written like $\barx$, with a bar. Greek letters, like $\alpha$ and $\lambda$, are also used for scalars. Vectors $\xx\in\Rn$ are usually understood to have components $x_i$, and are taken to have a column shape, so $\xx=\trans{[x_1,\dotsc,x_n]}$. A matrix in $\R^{n\times k}$ with columns $\vv_ 1,\dotsc,\vv_k\in\R^n$ is written $[\vv_1,\dotsc,\vv_k]$. We also use this notation to piece together a larger matrix from other matrices and vectors in the natural way. For instance, if $\VV\in\R^{n\times k}$, $\ww\in\Rn$, and $\VV'\in\R^{n\times k'}$, then $[\VV,\ww,\VV']$ is a matrix in $\R^{n\times (k+1+k')}$ whose first $k$ columns are a copy of $\VV$, whose next column is $\ww$, and whose last $k'$ columns are a copy of $\VV'$. For a matrix $\VV\in\Rnk$, we write $\columns(\VV)$ for the set of columns of $\VV$; thus, if $\VV=[\vv_1,\ldots,\vv_k]$ then $\columns(\VV)=\{\vv_1,\ldots,\vv_k\}$. We write $\ee_1,\dotsc,\ee_n\in\Rn$ for the \emph{standard basis vectors} (where the dimension $n$ is provided or implied by the context). Thus, $\ee_i$ has $i$-th component equal to $1$, and all other components equal to $0$. The $n\times n$ identity matrix is written $\Idnn$, or without the subscript when clear from context. We write $\zerov{n}$ for the all-zeros vector in $\Rn$, dropping the subscript when clear from context, and also write $\zeromat{m}{n}$ for the $m\times n$ matrix with entries that are all equal to zero. The inner product of vectors $\xx,\yy\in\Rn$ is defined as $\xx\inprod\yy=\sum_{i=1}^n x_i y_i$; the corresponding (Euclidean) norm is $\norm{\xx}=\sqrt{\xx\inprod\xx}$, also called the length of $\xx$. The \emph{(Euclidean) open ball} with center $\xx\in\Rn$ and radius $\epsilon>0$ is defined as \begin{equation} \label{eqn:open-ball-defn} \ball(\xx,\epsilon) = \{ \zz\in\Rn : \norm{\zz-\xx} < \epsilon \}. \end{equation} Throughout this book, the variable $t$ has special meaning as the usual index of all sequences. 
Thus, we write \emph{$\seq{x_t}$ in $X$} for the sequence $x_1,x_2,\ldots$ with all elements in $X$. Limits and convergence are taken as $t\to +\infty$, unless stated otherwise. For example, $\lim f(x_t)$ means $\lim_{t\to\infty} f(x_t)$. We say a property of a sequence holds for all $t$ to mean that it holds for all $t\in \nats$. We say that it holds for all sufficiently large $t$ to mean that it holds for all but finitely many values of $t$ in $\nats$. Likewise, a family of sets $\{S_1,S_2,\ldots\}$ indexed by $t\in\nats$ is written more succinctly as $\countset{S}$. For a function $f:X\to Y$, the \emph{image of a set $A\subseteq X$ under $f$} is the set \[ f(A)=\set{f(x):\:{x\in A}},\] while the \emph{inverse image of a set $B\subseteq Y$ under $f$} is the set \[f^{-1}(B)=\set{x\in X:\:f(x)\in B}.\] The \emph{set difference} of sets $X$ and $Y$, written $X\setminus Y$, is the set \[ X\setminus Y = \Braces{x \in X : x\not\in Y}. \] Let $X,Y\subseteq\Rn$ and $\Lambda\subseteq\R$. We define \[ X+Y = \Braces{\xx + \yy : \xx \in X, \yy\in Y}, \mbox{~~~and~~~} \Lambda X = \Braces{\lambda \xx : \lambda\in\Lambda, \xx \in X}. \] For $\xx,\yy\in\Rn$ and $\lambda\in\R$, we also define $\xx+Y=\{\xx\}+Y$, $X+\yy=X+\{\yy\}$, $\lambda X=\{\lambda\}X$, $-X=(-1)X$, and $X-Y = X+(-Y)$. For a matrix $\A\in\Rmn$, we likewise define \[ \A X = \Braces{ \A \xx : \xx\in X}. \] We write (ordered) pairs as $\rpair{x}{y}$ (which should not be confused with similar notation commonly used by other authors for inner product). The set of all such pairs over $x\in X$ and $y\in Y$ is the set $X\times Y$. In particular, if $\xx\in\Rn$ and $y\in\R$, then $\rpair{\xx}{y}$ is a point in $\Rn\times\R$, which we also interpret as the equivalent $(n+1)$-dimensional (column) vector in $\R^{n+1}$ whose first $n$ coordinates are given by $\xx$, and whose last coordinate is given by $y$. Likewise, functions $f$ that are defined on $\R^{n+1}$ can be viewed equivalently as functions on $\Rn\times\R$. 
In such cases, to simplify notation, for $\xx\in\Rn$ and $y\in\R$, we write simply $f(\xx,y)$ as shorthand for $f(\rpair{\xx}{y})$. The \emph{epigraph} of a function $f:X\rightarrow \Rext$, denoted $\epi{f}$, is the set of pairs $\rpair{x}{y}$, with $x\in X$ and $y\in\R$, for which $f(x)\leq y$: \begin{equation} \label{eqn:epi-def} \epi{f} = \{ \rpair{x}{y} \in X\times\R:\: f(x) \leq y \}. \end{equation} The \emph{effective domain} of $f$ (or simply its \emph{domain}, for short) is the set of points where $f(x)<+\infty$: \[ \dom{f} = \{ x \in X:\: f(x) < +\infty \}. \] We write $\inf f$ for the function's infimum, $\inf_{x\in X} f(x)$, and similarly define $\sup f$, as well as $\min f$ and $\max f$, when these are attained. For any $\alpha\in\Rext$, we write $f\equiv \alpha$ to mean $f(x)=\alpha$ for all $x\in X$. Likewise, $f>\alpha$ means $f(x)>\alpha$ for all $x\in X$, with $f\geq \alpha$, $f<\alpha$, $f\leq \alpha$ defined similarly. For any $f:X\rightarrow \Rext$ and $g:X\rightarrow \Rext$, we write $f=g$ to mean $f(x)=g(x)$ for all $x\in X$, and similarly define $f<g$, $f\leq g$, etc. We say that $f$ \emph{majorizes} $g$ if $f\geq g$. \subsection{Working with \texorpdfstring{$\pm\infty$}{\pmUni\inftyUni}} \label{sec:prelim-work-with-infty} The sum of $-\infty$ and $+\infty$ is undefined, but other sums and products involving $\pm\infty$ are defined as usual (see, for example, \citealp[Section~4]{ROC}): \begin{align*} & \alpha+(+\infty)=(+\infty)+\alpha=+\infty &&\text{if $\alpha\in(-\infty,+\infty]$} \\ & \alpha+(-\infty)=(-\infty)+\alpha=-\infty &&\text{if $\alpha\in[-\infty,+\infty)$} \\[\medskipamount] & \alpha\cdot(+\infty)=(+\infty)\cdot\alpha=(-\alpha)\cdot(-\infty)=(-\infty)\cdot(-\alpha)=+\infty &&\text{if $\alpha\in(0,+\infty]$} \\ & \alpha\cdot(-\infty)=(-\infty)\cdot\alpha=(-\alpha)\cdot(+\infty)=(+\infty)\cdot(-\alpha)=-\infty &&\text{if $\alpha\in(0,+\infty]$} \\ & 0\cdot(+\infty)=(+\infty)\cdot0=0\cdot(-\infty)=(-\infty)\cdot0=0. 
\end{align*} Note importantly that $0\cdot(\pm\infty)$ is defined to be $0$. We also define the symbol $\oms$ to be equal to $+\infty$. We introduce this synonymous notation for readability and to emphasize $\oms$'s primary role as a scalar multiplier. Evidently, for $\barx\in\Rext$, \[ \omsf{\barx} = \begin{cases} +\infty & \text{if $\barx>0$,}\\ 0 & \text{if $\barx=0$,}\\ -\infty & \text{if $\barx<0$.} \end{cases} \] Both $\oms$ and its negation, $-\oms=-\infty$, are of course included in $\Rext$. Later, we will extend this notation along the lines discussed in Section~\ref{sec:intro:astral}. We say that extended reals $\alpha,\beta\in\eR$ are \emph{summable} if their sum is defined, that is, if it is not the case that one of them is $+\infty$ and the other is $-\infty$. More generally, extended reals $\alpha_1,\dotsc,\alpha_m\in\eR$ are {summable} if their sum $\alpha_1+\cdots+\alpha_m$ is defined, that is, if $-\infty$ and $+\infty$ are not both included in $\{\alpha_1,\dotsc,\alpha_m\}$. Also, letting $f_i:X\rightarrow\Rext$ for $i=1,\ldots,m$, we say that $f_1,\ldots,f_m$ are summable if $f_1(x),\ldots,f_m(x)$ are summable for all $x\in X$ (so that the function sum $f_1+\cdots+f_m$ is defined). Finally, $\alpha,\beta\in\Rext$ are \emph{subtractable} if their difference is defined, that is, if they are not both $+\infty$ and not both $-\infty$. Since the ordinary sum of $-\infty$ and $+\infty$ is undefined, we will make frequent use of modified versions of addition in which that sum is defined. The modifications that we make are meant only to resolve the ``tie'' that occurs when combining $-\infty$ and $+\infty$; these operations are the same as ordinary addition in all other situations. We define three different such operations, corresponding to the different ways of resolving such ties. The first and most important of these, called \emph{leftward addition}, was previously introduced in Section~\ref{sec:intro:astral}. 
For this operation, when adding $-\infty$ and $+\infty$, the leftward sum is defined to be equal to the argument on the left. Thus, the leftward sum of extended reals $\barx,\bary\in\Rext$, written $\barx\plusl \bary$, is defined as in \eqref{eqn:intro-left-sum-defn}, or equivalently, as \begin{equation} \label{eqn:left-sum-alt-defn} \barx \plusl \bary = \begin{cases} \barx & \text{if $\barx\in\{-\infty,+\infty\}$,}\\ \barx + \bary & \text{otherwise.} \end{cases} \end{equation} This operation is not commutative, but it is associative and distributive. It is, nonetheless, commutative (and equivalent to ordinary addition) when the arguments are summable, and in particular, if either is in $\R$. These and other basic properties are summarized in the next proposition, whose proof is straightforward. \begin{proposition} \label{pr:i:5} For all $\barx,\bary,\barz,\barx',\bary'\in\Rext$, the following hold: \begin{letter-compact} \item \label{pr:i:5a} $(\barx\plusl \bary)\plusl \barz=\barx\plusl (\bary\plusl \barz)$. \item \label{pr:i:5b} $\lambda(\barx\plusl \bary)=\lambda\barx\plusl \lambda\bary$, for $\lambda\in\R$. \item \label{pr:i:5bb} $\alpha\barx\plusl \beta\barx=\alpha\barx+\beta\barx =(\alpha+\beta)\barx$, for $\alpha,\beta\in\Rext$ with $\alpha\beta\geq 0$. \item \label{pr:i:5c} If $\barx$ and $\bary$ are summable then $\barx\plusl\bary=\barx+\bary=\bary+\barx=\bary\plusl\barx$. \item \label{pr:i:5ord} If $\barx\leq \barx'$ and $\bary\leq \bary'$ then $\barx\plusl \bary \leq \barx'\plusl \bary'$. \item \label{pr:i:5lim} $\barx\plusl\bary_t\to\barx\plusl\bary$, for any sequence $\seq{\bary_t}$ in $\eR$ with $\bary_t\to\bary$. \end{letter-compact} \end{proposition} Note that part~(\ref{pr:i:5b}) of Proposition~\ref{pr:i:5} holds for $\lambda\in\R$, but does not hold in general if $\lambda=\oms$. For instance, if $x=1$ and $y=-2$ then $\oms(x\plusl y)=-\infty$ but $\oms x \plusl \oms y = +\infty$. 
Also, part~(\ref{pr:i:5lim}) does not hold in general if the arguments are reversed. That is, $\bary_t\plusl\barx$ need not converge to $\bary\plusl\barx$ (where $\bary_t\rightarrow\bary$). For instance, suppose $\barx=-\infty$, $\bary=+\infty$, and $\bary_t=t$ for all $t$. Then $\bary_t\rightarrow\bary$, $\bary_t\plusl\barx=-\infty$ for all $t$, but $\bary\plusl\barx=+\infty$. Next, we define the \emph{downward sum} (or \emph{lower sum}) of $\ex, \bary\in\Rext$, denoted $\ex \plusd \bary$, to be equal to $-\infty$ if \emph{either} $\ex$ or $\bary$ is $-\infty$ (and is like ordinary addition otherwise). Thus, \begin{equation} \label{eq:down-add-def} \ex \plusd \bary = \begin{cases} -\infty & \text{if $\ex=-\infty$ or $\bary=-\infty$,} \\ \ex + \bary & \text{otherwise.} \end{cases} \end{equation} The next proposition summarizes some properties of this operation: \begin{proposition} \label{pr:plusd-props} For all $\barx,\bary,\barz,\barx',\bary'\in\Rext$, the following hold: \begin{letter-compact} \item \label{pr:plusd-props:a} $\barx\plusd \bary = \bary\plusd \barx$. \item \label{pr:plusd-props:b} $(\barx\plusd \bary) \plusd \barz = \barx\plusd (\bary \plusd \barz)$. \item \label{pr:plusd-props:lambda} $\lambda(\barx\plusd \bary) = \lambda\barx \plusd \lambda\bary$, for $\lambda\in\Rpos$. \item \label{pr:plusd-props:c} If $\barx$ and $\bary$ are summable, then $\barx\plusd \bary = \barx+\bary$. \item \label{pr:plusd-props:d} $\sup \braces{\barx - w:\:w\in\R,\,w\geq \bary} = -\bary \plusd \barx$. \item \label{pr:plusd-props:e} $\barx \geq \bary \plusd \barz$ if and only if $-\bary \geq -\barx \plusd \barz$. \item \label{pr:plusd-props:f} If $\barx\leq\barx'$ and $\bary\leq\bary'$ then $\barx\plusd\bary\leq\barx'\plusd\bary'$. 
\end{letter-compact} \end{proposition} \begin{proof} Part~(\ref{pr:plusd-props:d}) can be checked by separately considering the following cases: $\bary=+\infty$ (noting $\sup\emptyset = -\infty$); $\bary<+\infty$ and $\barx\in\{-\infty,+\infty\}$; and $\bary<+\infty$ and $\barx\in\R$. The other parts can be checked in a similar fashion. \end{proof} Finally, we define the \emph{upward sum} (or \emph{upper sum}) of $\barx,\bary\in\Rext$, denoted $\barx\plusu\bary$, to be \begin{equation} \label{eq:up-add-def} \barx \plusu \bary = \begin{cases} +\infty & \text{if $\barx=+\infty$ or $\bary=+\infty$,} \\ \barx + \bary & \text{otherwise.} \end{cases} \end{equation} Analogous properties to those given in \Cref{pr:plusd-props} can be proved for upward sum. A sequence $\seq{\alpha_t}$ in $\eR$ converges to $+\infty$ if for every $M\in\R$ the sequence eventually stays in $(M,+\infty]$. It converges to $-\infty$ if for every $M\in\R$ the sequence eventually stays in $[-\infty,M)$. Finally, it converges to $x\in\R$ if for every $\epsilon>0$ the sequence eventually stays in $(x-\epsilon,x+\epsilon)$. Limits in $\eR$ satisfy the following: \begin{proposition} \label{prop:lim:eR} Let $\seq{\alpha_t}$ and $\seq{\beta_t}$ be sequences in $\eR$. Then: \begin{letter-compact} \item \label{i:liminf:eR:sum} $\liminf(\alpha_t\plusd\beta_t)\ge(\liminf\alpha_t)\plusd(\liminf\beta_t)$. \item \label{i:liminf:eR:mul} $\liminf(\lambda\alpha_t)=\lambda(\liminf\alpha_t)$ for all $\lambda\in\Rpos$. \item \label{i:liminf:eR:min} $\liminf\bigParens{\min\set{\alpha_t,\beta_t}} = \min\bigBraces{\liminf\alpha_t,\,\liminf\beta_t}$. \item \label{i:limsup:eR:max} $\limsup\bigParens{\max\set{\alpha_t,\beta_t}} = \max\bigBraces{\limsup\alpha_t,\,\limsup\beta_t}$. \end{letter-compact} Furthermore, if $\alpha_t\to\alpha$ and $\beta_t\to\beta$ in $\eR$, then: \begin{letter-compact}[resume] \item \label{i:lim:eR:sum} $\alpha_t+\beta_t\to\alpha+\beta$ if $\alpha$ and $\beta$ are summable. 
\item \label{i:lim:eR:mul} $\lambda\alpha_t\to\lambda\alpha$ for all $\lambda\in\R$.
\end{letter-compact}
\end{proposition}

\subsection{Linear algebra}

Every matrix $\A\in\Rmn$ is associated with a linear map $A:\Rn\rightarrow\Rm$ defined by $A(\xx)=\A\xx$ for $\xx\in\Rn$. A nonempty subset $L\subseteq\Rn$ is a \emph{linear subspace} (also called simply a \emph{subspace}) if it is closed under vector addition and multiplication by any scalar, that is, if $\xx+\yy\in L$ and $\lambda\xx\in L$ for all $\xx,\yy\in L$ and $\lambda\in\R$. The intersection of an arbitrary collection of linear subspaces is also a linear subspace. The \emph{span} of a set $S\subseteq\Rn$, denoted $\spn{S}$, is the smallest linear subspace that includes $S$, that is, the intersection of all linear subspaces that include $S$. A point $\zz\in\Rn$ is a \emph{linear combination} of points $\xx_1,\ldots,\xx_k\in\Rn$ if $\zz=\lambda_1\xx_1+\cdots+\lambda_k\xx_k$ for some $\lambda_1,\ldots,\lambda_k\in\R$.

\begin{proposition} \label{pr:span-is-lin-comb} Let $S\subseteq\Rn$. Then $\spn{S}$ consists of all linear combinations of zero or more points in $S$. \end{proposition}

A set of vectors $\xx_1,\ldots,\xx_k\in\Rn$ is \emph{linearly dependent} if there exist $\lambda_1,\ldots,\lambda_k\in\R$, not all $0$, such that $\lambda_1\xx_1+\cdots+\lambda_k\xx_k=\zero$, or equivalently, if one of the points is a linear combination of the others. If $\xx_1,\ldots,\xx_k$ are not linearly dependent, then they are \emph{linearly independent}. A \emph{basis} for a linear subspace $L\subseteq\Rn$ is a linearly independent subset $B\subseteq\Rn$ that spans $L$ (meaning $L=\spn{B}$). Every basis for $L$ has the same cardinality, called the \emph{dimension} of $L$ and denoted $\dim{L}$. The standard basis vectors, $\ee_1,\ldots,\ee_n$, form the \emph{standard basis} for $\Rn$. We say that vectors $\xx,\yy\in\Rn$ are \emph{orthogonal} and write $\xx\perp\yy$ if $\xx\inprod\yy=0$.
We say that $\xx\in\Rn$ is orthogonal to a matrix $\A\in\R^{n\times k}$ and write $\xx\perp\A$ if $\xx$ is orthogonal to all columns of $\A$, that is, if $\trans{\xx}\A=\zero$. Still more generally, two matrices $\A\in\R^{n\times k}$ and $\A'\in\R^{n\times k'}$ are said to be orthogonal, written $\A\perp\A'$, if every column of $\A$ is orthogonal to every column of $\A'$. A set $B\subseteq\Rn$ is \emph{orthogonal} if every vector in $B$ is orthogonal to every other vector in $B$. The set is \emph{orthonormal} if it is orthogonal and if also all vectors in the set have unit length.

The \emph{column space} of a matrix $\A\in\R^{n\times k}$ is the span of its columns, denoted $\colspace\A$. The \emph{rank} of $\A$ is the dimension of its column space. If the columns of $\A$ are linearly independent, then we say that $\A$ has \emph{full column rank}. We say that $\A$ is \emph{column-orthogonal} if its columns are orthonormal, that is, if the columns all have unit length and are orthogonal to one another.

If $L_1$ and $L_2$ are linear subspaces of $\Rn$, then $L_1+L_2$ and $L_1\cap L_2$ are as well.

\begin{proposition} \label{pr:subspace-intersect-dim} Let $L_1$ and $L_2$ be linear subspaces of $\Rn$. Then
\[ \dim(L_1+L_2) + \dim(L_1\cap L_2) = \dim{L_1} + \dim{L_2}. \]
\end{proposition}

\begin{proof} See \citet[Eq.~0.1.7.1]{horn_johnson_2nd}. \end{proof}

\subsection{Orthogonal complement, projection matrices, pseudoinverse}
\label{sec:prelim:orth-proj-pseud}

For a set $S\subseteq\Rn$, we define the \emph{orthogonal complement} of $S$, denoted $\Sperp$, to be the set of vectors orthogonal to all points in $S$, that is,
\begin{equation} \label{eq:std-ortho-comp-defn} \Sperp = \Braces{ \xx\in\Rn : \xx\cdot\uu=0 \mbox{ for all }\uu\in S }. \end{equation}
Note that this definition is sometimes applied only when $S$ is a linear subspace, but we allow $S$ to be an arbitrary subset. We write $\Sperperp$ for $(\Sperp)^\bot$.
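A small computation illustrates this definition:

\begin{example}[Orthogonal complement of a finite set in $\R^3$]
Let $S=\set{\ee_1,\,\ee_1+\ee_2}\subseteq\R^3$. Then $\xx\in\Sperp$ if and only if $\xx\cdot\ee_1=0$ and $\xx\cdot(\ee_1+\ee_2)=0$, which holds if and only if the first two coordinates of $\xx$ vanish. Thus, $\Sperp=\spn\set{\ee_3}$ and $\Sperperp=\spn\set{\ee_1,\ee_2}=\spn{S}$, even though $S$ is not itself a linear subspace.
\end{example}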
\begin{proposition} \label{pr:std-perp-props} Let $S,U\subseteq\Rn$. \begin{letter-compact} \item \label{pr:std-perp-props:a} $\Sperp$ is a linear subspace. \item \label{pr:std-perp-props:b} If $S\subseteq U$ then $\Uperp\subseteq\Sperp$. \item \label{pr:std-perp-props:c} $\Sperperp = \spn{S}$. In particular, if $S$ is a linear subspace then $\Sperperp=S$. \item \label{pr:std-perp-props:d} $(\spn{S})^{\bot} = \Sperp$. \end{letter-compact} \end{proposition} If $L\subseteq\Rn$ is a linear subspace of dimension $k$, then its orthogonal complement $L^\perp$ is a linear subspace of dimension $n-k$. Furthermore, every point can be uniquely decomposed as the sum of some point in $L$ and some point in $\Lperp$: \begin{proposition} \label{pr:lin-decomp} Let $L\subseteq\Rn$ be a linear subspace, and let $\xx\in\Rn$. Then there exist unique vectors $\xpar\in L$ and $\xperp\in \Lperp$ such that $\xx=\xpar+\xperp$. \end{proposition} The vector $\xpar$ appearing in this proposition is called the \emph{orthogonal projection of $\xx$ onto $L$}. The mapping $\xx\mapsto\xpar$, called \emph{orthogonal projection onto $L$}, is a linear map described by a unique matrix $\PP\in\R^{n\times n}$ called the \emph{(orthogonal) projection matrix onto $L$}; that is, $\xpar=\PP\xx$ for all $\xx\in\Rn$. Specifically, if $\set{\vv_1,\dotsc,\vv_k}$ is any orthonormal basis of $L$, then the matrix $\PP$ is given by \[ \PP=\sum_{i=1}^k \vv_i\trans{\vv_i}=\VV\trans{\VV}, \] where $\VV=[\vv_1,\dotsc,\vv_k]$, which is column-orthogonal. Here are properties of orthogonal projection matrices: \begin{proposition} \label{pr:proj-mat-props} Let $\PP\in\R^{n\times n}$ be the orthogonal projection matrix onto some linear subspace $L\subseteq\Rn$. Let $\xx\in\Rn$. Then: \begin{letter-compact} \item \label{pr:proj-mat-props:a} $\PP$ is symmetric. \item \label{pr:proj-mat-props:b} $\PP^2=\PP$. \item \label{pr:proj-mat-props:c} $\PP\xx\in L$. \item \label{pr:proj-mat-props:d} If $\xx\in L$ then $\PP\xx=\xx$. 
\item \label{pr:proj-mat-props:e} If $\xx\in\Lperp$ then $\PP\xx=\zero$.
\end{letter-compact}
\end{proposition}

Orthogonal projection onto $L^\perp$ is described by the matrix $\PP'=\Iden-\PP$ (where $\Iden$ is the $n\times n$ identity matrix). We call this projection the \emph{projection orthogonal to~$L$}. When $L=\spn\set{\vv}$ for some $\vv\in\Rn$, we call this the \emph{projection orthogonal to $\vv$}. In that case, for $\vv\ne\zero$,
\[ \PP'=\Iden-\frac{\vv\trans{\vv}}{\norm{\vv}^2}, \]
and for $\vv=\zero$, we have $\PP'=\Iden$.

For every matrix $\A\in\R^{m\times n}$, there exists a unique matrix $\Adag\in\R^{n\times m}$, called $\A$'s \emph{pseudoinverse} (or \emph{Moore--Penrose generalized inverse}), satisfying the following conditions:
\begin{item-compact}
\item $\A\Adag\A=\A$
\item $\Adag\A\Adag=\Adag$
\item $\trans{(\A\Adag)}=\A\Adag$
\item $\trans{(\Adag\A)}=\Adag\A$.
\end{item-compact}

\begin{proposition} \label{pr:pseudoinv-props} Let $\A\in\R^{m\times n}$. Then:
\begin{letter-compact}
\item \label{pr:pseudoinv-props:a} $\colspace\bigParens{\trans{(\Adag)}}=\colspace\A$.
\item \label{pr:pseudoinv-props:b} $\A\Adag$ is the orthogonal projection matrix onto $\colspace\A$.
\item \label{pr:pseudoinv-props:c} If $\A$ has full column rank then $\Adag\A=\Idn{n}$.
\item \label{pr:pseudoinv-props:d} If $\A$ is column-orthogonal then $\Adag=\trans{\A}$.
\end{letter-compact}
\end{proposition}

\subsection{Positive upper triangular matrices}
\label{sec:prelim:pos-up-tri-mat}

A square matrix $\RR\in\R^{n\times n}$ with entries $r_{ij}$, where $i$ indexes rows and $j$ columns, is \emph{upper triangular} if $r_{ij} = 0$ for all $i>j$. We say $\RR$ is \emph{positive upper triangular} if it is upper triangular and if all of its diagonal entries are strictly positive.

\begin{proposition} \label{prop:pos-upper} ~
\begin{letter-compact}
\item \label{prop:pos-upper:prod} The product of two positive upper triangular matrices is also positive upper triangular.
\item \label{prop:pos-upper:inv} Every positive upper triangular matrix $\RR$ is invertible, and its inverse $\Rinv$ is also positive upper triangular. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{prop:pos-upper:prod}):} This is straightforward to check. (See also \citealp[Section~0.9.3]{horn_johnson_2nd}). \pfpart{Part~(\ref{prop:pos-upper:inv}):} Let $\RR\in\R^{n\times n}$ be positive upper triangular with entries $r_{ij}$ and columns $\rr_1,\ldots,\rr_n$. By induction on $j=1,\ldots,n$, we claim that each standard basis vector $\ee_j$ can be written as \begin{equation} \label{eq:prop:pos-upper:1} \ee_j=\sum_{i=1}^j s_{ij} \rr_i \end{equation} for some $s_{1j},\ldots,s_{jj}\in\R$ with $s_{jj}>0$. This is because $\rr_j=\sum_{i=1}^j r_{ij} \ee_i$, so \[ \ee_j = \frac{1}{r_{jj}} \brackets{ \rr_j - \sum_{i=1}^{j-1} r_{ij}\ee_i } = \frac{1}{r_{jj}} \brackets{ \rr_j - \sum_{i=1}^{j-1} r_{ij} \sum_{k=1}^i s_{ki} \rr_k }, \] where the second equality is by inductive hypothesis. Thus, $\ee_j$ can be written as a linear combination of $\rr_1,\ldots,\rr_j$, and therefore as in \eqref{eq:prop:pos-upper:1}; in particular, $s_{jj}=1/r_{jj}>0$. Setting $s_{ij}=0$ for $i>j$ and letting $\Ss$ denote the $n\times n$ matrix with entries $s_{ij}$, it follows that $\Ss$ is positive upper triangular, and that $\RR\Ss=\Iden$, that is, $\Ss=\Rinv$. \qedhere \end{proof-parts} \end{proof} \begin{proposition}[QR factorization] \label{prop:QR} Let $\A\in \R^{m\times n}$ have full column rank. Then there exists a column-orthogonal matrix $\QQ\in\R^{m\times n}$ and a positive upper triangular matrix $\RR\in\R^{n\times n}$ such that $\A=\QQ\RR$. Furthermore, the factors $\QQ$ and $\RR$ are uniquely determined. \end{proposition} \begin{proof} See Theorem~2.1.14(a,b,e) of \citet{horn_johnson_2nd}. 
\end{proof}

\subsection{Zero vectors, zero matrices, zero-dimensional Euclidean space}
\label{sec:prelim-zero-dim-space}

As noted already, $\zerov{n}$ denotes the all-zeros vector in $\Rn$, and $\zeromat{m}{n}$ denotes the $m\times n$ all-zeros matrix, corresponding to the linear map mapping all of $\R^n$ to $\zerov{m}$. We write $\R^0$ for the zero-dimensional Euclidean space, consisting of a single point, the origin, denoted $\zerov{0}$ or simply $\zero$. As the zero of a vector space, this point satisfies the standard identities $\zerovec+\zerovec=\zerovec$ and $\lambda\zerovec=\zerovec$ for all $\lambda\in\R$. The inner product is also defined as $\zerovec\cdot\zerovec=0$. Conceptually, $\zerovec$ can be viewed as the unique ``empty'' vector.

Interpreting matrices in $\Rmn$ as linear maps from $\R^n$ to $\R^m$, for any $m,n\ge 0$, we find that $\R^{m\times 0}$ contains only a single matrix, denoted $\zeromat{m}{0}$, corresponding to the only linear map from $\R^0$ to $\R^m$, which maps $\zerov{0}$ to $\zerov{m}$. Similarly, $\R^{0\times n}$ contains only one matrix, denoted $\zeromat{0}{n}$, corresponding to the linear map from $\R^n$ to $\R^0$ mapping all of $\R^n$ to $\zerov{0}$. The matrix $\zeromat{0}{0}$ is both a zero matrix and an identity matrix, because it is the identity map on $\R^0$; thus, $\Idn{0}=\zeromat{0}{0}$. Vacuously, $\zeromat{0}{0}$ is positive upper triangular, and $\zeromat{m}{0}$ is column-orthogonal for $m\geq 0$.

Since $\R^{m\times n}$, for any $m,n\ge 0$, is itself a vector space with $\zeromat{m}{n}$ as its zero, we obtain the standard identities $\zeromat{m}{n}+\zeromat{m}{n}=\zeromat{m}{n}$ and $\lambda\zeromat{m}{n}=\zeromat{m}{n}$ for all $\lambda\in\R$. Interpreting matrix product as composition of linear maps, we further obtain the following identities for $m,n,k\geq 0$, and $\A\in \R^{m\times k}$ and $\B\in\R^{k\times n}$:
\[
\A \zeromat{k}{n} = \zeromat{m}{k} \B = \zeromat{m}{n}.
\] Finally, just as vectors in $\Rn$ are interpreted as $n\times 1$ matrices, we identify $\zerovec$ with $\zeromat{0}{1}$ so that, for example, we can write $ \zerov{n}=\zeromat{n}{1}=\zeromat{n}{0}\zeromat{0}{1}=\zeromat{n}{0}\zerov{0}. $ \section{Review of topology} \label{sec:prelim:topology} Convex analysis in $\Rn$ uses the standard Euclidean norm to define topological concepts such as convergence, closure, and continuity. We will see that astral space is not a normed space, and is not even metrizable, so instead of building the topology of astral space from a metric, we will need a more general topological framework, which we briefly review in this section. Depending on background, this section can be skimmed or skipped, and revisited only as needed. A more complete introduction to topology can be found, for instance, in \citet{hitchhiker_guide_analysis} or \citet{munkres}. \subsection{Topological space, open sets, base, subbase} \begin{definition} \label{def:open} A \emph{topology} $\topo$ on a set $X$ is a collection of subsets of $X$, called \emph{open sets}, that satisfy the following conditions: \begin{letter-compact} \item\label{i:open:a} $\emptyset\in\topo$ and $X\in\topo$. \item\label{i:open:b} $\topo$ is closed under finite intersections, that is, $U\cap V\in\topo$ for all $U,V\in\topo$. \item\label{i:open:c} $\topo$ is closed under arbitrary unions; that is, if $U_\alpha\in\topo$ for all $\alpha\in\indset$ (where $\indset$ is any index set), then $\bigcup_{\alpha\in\indset} U_\alpha\in\topo$. \end{letter-compact} The set~$X$ with topology $\topo$ is called a \emph{topological space}. \end{definition} \begin{example}[Topology on $\R$] The standard topology on $\R$ consists of all sets $U$ such that for every $x\in U$ there exists $\epsilon>0$ such that $(x-\epsilon,x+\epsilon)\subseteq U$. 
\end{example} \begin{example}[Euclidean topology on $\Rn$] \label{ex:topo-on-rn} The Euclidean topology on $\Rn$ consists of all sets $U$ such that for every $\xx\in U$, there exists $\epsilon>0$ such that $B(\xx,\epsilon)\subseteq U$. Except where explicitly stated otherwise, we will always assume this topology on $\Rn$. \end{example} Instead of specifying the topology $\topo$ directly, it is usually more convenient to specify a suitable subfamily of $\topo$ and use it to generate $\topo$ by means of unions and intersections. As such, a subfamily $\calB$ of $\topo$ is called a \emph{base} for $\topo$ if every element of $\topo$ can be written as a union of elements of $\calB$. A subfamily $\calS$ of $\topo$ is called a \emph{subbase} for~$\topo$ if the collection of all finite intersections of elements of $\calS$ forms a base for $\topo$. (Some authors, including \citealp{munkres}, instead use the terms \emph{basis} and \emph{subbasis}.) Note that any collection $\calS$ of subsets of $X$ whose union is all of $X$ is a subbase for a topology on $X$ obtained by taking all unions of all finite intersections of elements of $\calS$. In this case, we say that $\calS$ \emph{generates} the topology $\topo$. \begin{proposition} \label{pr:base-equiv-topo} Let $X$ be a topological space with topology $\topo$, let $\calB$ be a base for $\topo$, and let $U\subseteq X$. Then $U$ is open in $\topo$ if and only if for all $x\in U$, there exists a set $B\in\calB$ such that $x\in B\subseteq U$. \end{proposition} \begin{proof} See \citet[Lemma~13.1]{munkres}. \end{proof} \begin{example}[Base and subbase for topology on $\R$] A base for the standard topology on $\R$ consists of all open intervals $(a,b)$, where $a,b\in\R$. A subbase consists of sets of the form $(-\infty,b)$ and $(b,+\infty)$ for $b\in\R$. 
\end{example}

\begin{example}[Base and subbase for topology on $\eR$] \label{ex:topo-rext} On $\eR$, we will always assume the topology with a base consisting of all intervals $(a,b)$, $[-\infty,b)$, and $(b,+\infty]$, for $a,b\in\R$. This topology is generated by a subbase consisting of sets $[-\infty,b)$ and $(b,+\infty]$ for all $b\in\R$. \end{example}

\begin{example}[Base for the Euclidean topology on $\Rn$] A base for the Euclidean topology on $\Rn$ consists of open balls $\ball(\xx,\epsilon)$ for all $\xx\in\Rn$ and $\epsilon>0$. \end{example}

For a metric $d:X\times X\rightarrow\R$, the \emph{metric topology} on $X$ induced by $d$ is the topology whose base consists of all balls $\{ y\in X: d(x,y)<\epsilon \}$, for $x\in X$ and $\epsilon>0$. Thus, the Euclidean topology on $\Rn$ is the same as the metric topology on $\Rn$ with metric given by the Euclidean distance, $d(\xx,\yy)=\norm{\xx-\yy}$. A topological space $X$ is \emph{metrizable} if there exists a metric that induces its topology.

\subsection{Closed sets, neighborhoods, dense sets, separation properties}
\label{sec:prelim:topo:closed-sets}

Complements of open sets are called \emph{closed sets}. Their basic properties are symmetric to the properties of open sets from Definition~\ref{def:open}:
\begin{letter-compact}
\item $\emptyset$ and $X$ are closed in any topology on $X$.
\item Finite unions of closed sets are closed.
\item Arbitrary intersections of closed sets are closed.
\end{letter-compact}

The \emph{closure} of a set $A\subseteq X$, denoted $\Abar$, is the intersection of all closed sets containing~$A$. The \emph{interior of $A$}, denoted $\intr A$, is the union of all open sets contained in $A$. A~\emph{neighborhood} of a point $x\in X$ is an open set that contains $x$. Note that some authors, including \citet{hitchhiker_guide_analysis}, define a neighborhood of $x$ as any set that contains an open set containing $x$, that is, any set that is a superset of our notion of a neighborhood.
They would refer to our neighborhoods as open neighborhoods.

\begin{proposition} \label{pr:closure:intersect} Let $X$ be a topological space, let $A\subseteq X$, and let $x\in X$.
\begin{letter}
\item \label{pr:closure:intersect:s1} $A=\Abar$ if and only if $A$ is closed.
\item \label{pr:closure:intersect:s2} For every closed set $C$ in $X$, if $A\subseteq C$ then $\Abar\subseteq C$.
\item \label{pr:closure:intersect:a} $x\in\Abar$ if and only if $A\cap U\neq\emptyset$ for every neighborhood $U$ of $x$.
\item \label{pr:closure:intersect:b} $x\in\intr{A}$ if and only if $U\subseteq A$ for some neighborhood $U$ of $x$.
\item \label{pr:closure:intersect:comp} $\intr{A}=X\setminus(\clbar{X\setminus A})$.
\end{letter}
\end{proposition}

\begin{proof} ~
\begin{proof-parts}
\pfpart{Parts~(\ref{pr:closure:intersect:s1}) and~(\ref{pr:closure:intersect:s2}):} These are straightforward from the definition of closure.
\pfpart{Part~(\ref{pr:closure:intersect:a}):} See \citet[Theorem 17.5]{munkres}.
\pfpart{Part~(\ref{pr:closure:intersect:b}):} Let $x\in X$. Suppose $x\in U$ where $U$ is open and $U\subseteq A$. Then $U\subseteq\intr A$, by the definition of interior, so $x\in\intr A$. Conversely, if $x\in\intr A$ then $x$ is in an open set that is included in $A$, namely, $\intr A$.
\pfpart{Part~(\ref{pr:closure:intersect:comp}):} See \citet[Lemma~2.4]{hitchhiker_guide_analysis}.
\qedhere
\end{proof-parts}
\end{proof}

A subset $Z$ of a topological space $X$ is said to be \emph{dense} in $X$ if $\Zbar=X$. By \Cref{pr:closure:intersect}(\ref{pr:closure:intersect:a}), this means that $Z$ is dense in $X$ if and only if every nonempty open set in $X$ has a nonempty intersection with $Z$.

Two nonempty subsets $A$ and $B$ of a topological space are said to be \emph{separated by open sets} if they are included in disjoint open sets, that is, if there exist disjoint open sets $U$ and $V$ such that $A\subseteq U$ and $B\subseteq V$.
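For instance:

\begin{example}[Separation by open sets in $\R$]
In $\R$, the disjoint closed sets $[0,1]$ and $[2,3]$ are separated by the disjoint open sets $(-1,\nicefrac{3}{2})$ and $(\nicefrac{3}{2},4)$. In contrast, the disjoint sets $(0,1)$ and $[1,2]$ are not separated by open sets: every open set that includes $[1,2]$ contains an interval $(1-\delta,1+\delta)$ for some $\delta>0$, and therefore intersects every open set that includes $(0,1)$.
\end{example}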
A topological space is called:
\begin{item-compact}
\item \emph{Hausdorff} if any two distinct points are separated by open sets;
\item \emph{regular} if any nonempty closed set and any point disjoint from it are separated by open sets;
\item \emph{normal} if any two disjoint nonempty closed sets are separated by open sets.
\end{item-compact}

\begin{proposition} \label{prop:hausdorff:normal} In a Hausdorff space, all singleton sets are closed, so every normal Hausdorff space is regular. \end{proposition}

\begin{proof} That singletons in a Hausdorff space are closed follows from \citet[Theorem~17.8]{munkres}. That a normal Hausdorff space is regular then follows from the definitions above. \end{proof}

\subsection{Subspace topology}
\label{sec:subspace}

Let $X$ be a space with topology $\topo$, and let $Y\subseteq X$. Then the collection $\topo_Y=\set{U\cap Y:\:U\in\topo}$ defines a topology on $Y$, called the \emph{subspace topology}. The set $Y$ with topology $\topo_Y$ is called a \emph{(topological) subspace} of $X$. Furthermore, for any base $\calB$ for $\topo$, the collection $\set{U\cap Y:\:U\in\calB}$ is a base for $\topo_Y$. Likewise, for any subbase $\calS$ for $\topo$, the collection $\set{V\cap Y:\:V\in\calS}$ is a subbase for $\topo_Y$.

\begin{example}[Topology on \mbox{$[0,1]$}] \label{ex:topo-zero-one} The topology on $[0,1]$, as a subspace of $\R$, has as subbase all sets $[0,b)$ and $(b,1]$ for $b\in [0,1]$. \end{example}

Note that, when working with subspaces, it can be important to specify which topology we are referencing since, for instance, a set might be closed in subspace $Y$ (meaning in the subspace topology $\topo_Y$), but not closed in $X$ (meaning in the original topology $\topo$).

\begin{proposition} \label{prop:subspace} Let $X$ be a topological space, $Y$ its subspace, and $Z\subseteq Y$.
\begin{letter-compact}
\item \label{i:subspace:closed} $Z$ is closed in $Y$ if and only if there exists a set $C$ closed in $X$ such that $Z=C\cap Y$.
\item \label{i:subspace:closure} The closure of $Z$ in $Y$ equals $\Zbar\cap Y$, where $\Zbar$ is the closure of $Z$ in $X$. \item \label{i:subspace:Hausdorff} If $X$ is Hausdorff, then so is $Y$. \item \label{i:subspace:dense} If $Z$ is dense in $Y$ and $Y$ is dense in $X$, then $Z$ is dense in $X$. \item \label{i:subspace:subspace} The topology on $Z$ as a subspace of $Y$ coincides with the topology on $Z$ as a subspace of $X$. \end{letter-compact} \end{proposition} \begin{proof} Parts~(\ref{i:subspace:closed}, \ref{i:subspace:closure}, \ref{i:subspace:Hausdorff}) follow by \citet[Theorems 17.2, 17.4, 17.11]{munkres}. To prove part~(\ref{i:subspace:dense}), note that for every nonempty open set $U\subseteq X$, the intersection $U\cap Y$ is nonempty and open in $Y$, and therefore, the intersection $(U\cap Y)\cap Z=U\cap Z$ is nonempty, meaning that $Z$ is dense in $X$. Finally, part~(\ref{i:subspace:subspace}) follows from the definition of subspace topology. \end{proof} \subsection{Continuity, homeomorphism, compactness} A function $f:X\to Y$ between topological spaces is said to be \emph{continuous} if $f^{-1}(V)$ is open in $X$ for every $V$ that is open in~$Y$. We say that $f$ is \emph{continuous at a point $x$} if $f^{-1}(V)$ contains a neighborhood of $x$ for every $V$ that is a neighborhood of $f(x)$. \begin{proposition} \label{prop:cont} For a function $f:X\to Y$ between topological spaces the following are equivalent: \begin{letter-compact} \item \label{prop:cont:a} $f$ is continuous. \item \label{prop:cont:b} $f$ is continuous at every point in $X$. \item \label{prop:cont:inv:closed} $f^{-1}(C)$ is closed in $X$ for every $C$ that is a closed subset of $Y$. \item \label{prop:cont:sub} $f^{-1}(V)$ is open in $X$ for every element $V$ of some subbase for the topology on $Y$. \item \label{prop:cont:c} $f(\Abar)\subseteq\overlineKernIt{f(A)}$ for every $A\subseteq X$. 
\end{letter-compact}
\end{proposition}

\begin{proof} See \citet[Theorem 2.27]{hitchhiker_guide_analysis}, or Theorem~18.1 and the preceding discussion of \citet{munkres}. \end{proof}

Let $f:X\rightarrow Y$. The function $f$ is \emph{injective} (or \emph{one-to-one}) if for all $x,x'\in X$, if $x\neq x'$ then $f(x)\neq f(x')$. The function is \emph{surjective} (or is said to map \emph{onto} $Y$) if $f(X)=Y$. The function is \emph{bijective}, or is a \emph{bijection}, if it is both injective and surjective. Every bijective function has an inverse $f^{-1}:Y\rightarrow X$ (meaning $f^{-1}(f(x))=x$ for all $x\in X$ and $f(f^{-1}(y))=y$ for all $y\in Y$).

Let $X$ and $Y$ be topological spaces. A function $f:X\rightarrow Y$ is a \emph{homeomorphism} if $f$ is a bijection, and if both $f$ and $f^{-1}$ are continuous. When such a function exists, we say that $X$ and $Y$ are \emph{homeomorphic}. From the topological perspective, spaces $X$ and $Y$ are identical; only the names of the points have been changed. Any topological property of $X$, meaning a property derived from the topology alone, is also a property of $Y$ (and vice versa).

\begin{example}[Homeomorphism of $\Rext$ with \mbox{$[0,1]$}] \label{ex:homeo-rext-finint} We assume topologies as in Examples~\ref{ex:topo-rext} and~\ref{ex:topo-zero-one}. Let $f:\Rext \rightarrow [0,1]$ be defined by
\[
f(\barx) =
\begin{cases}
0 & \text{if $\barx=-\infty$}, \\
1/(1+e^{-\barx}) & \text{if $\barx\in\R$}, \\
1 & \text{if $\barx=+\infty$}
\end{cases}
\]
for $\barx\in\Rext$. This function is bijective and continuous with a continuous inverse. Therefore, it is a homeomorphism, so $\Rext$ and $[0,1]$ are homeomorphic. For instance, since $[0,1]$ is metrizable (using ordinary distance between points), this shows that $\Rext$ is also metrizable. \end{example}

Let $X$ be a topological space.
An \emph{open cover} of a set $K\subseteq X$ is any family of open sets whose union includes $K$, that is, any family $\set{U_\alpha}_{\alpha\in\indset}$ of open sets such that $K\subseteq\bigcup_{\alpha\in\indset} U_\alpha$. We say that a set $K$ is \emph{compact} if every open cover of $K$ has a finite open subcover, that is, if for every open cover $\calU$ of $K$, there exist $U_1,\ldots,U_m\in\calU$ such that $K\subseteq \bigcup_{i=1}^m U_i$. If $X$ is itself compact, then we say that $X$ is a \emph{compact space}. A set $S\subseteq\Rn$ is \emph{bounded} if $S\subseteq\ball(\zero,R)$ for some $R\in\R$. \begin{proposition} \label{pr:compact-in-rn} A set in $\Rn$ is compact if and only if it is closed and bounded. \end{proposition} \begin{proof} See \citet[Theorem~27.3]{munkres}. \end{proof} In particular, this proposition implies that $\Rn$ itself is not compact (for $n\geq 1$). Although $\R$ is not compact, its extension $\Rext$ is compact: \begin{example}[Compactness of $\Rext$] \label{ex:rext-compact} As seen in Example~\ref{ex:homeo-rext-finint}, $\Rext$ is homeomorphic with $[0,1]$, which is compact, being closed and bounded. Therefore, $\Rext$ is also compact. \end{example} Here are some compactness properties: \begin{proposition}~ \label{prop:compact} \begin{letter-compact} \item \label{prop:compact:cont-compact} The image of a compact set under a continuous function is compact. \item \label{prop:compact:closed-subset} Every closed subset of a compact set is compact. \item \label{prop:compact:closed} Every compact subset of a Hausdorff space is closed. \item \label{prop:compact:subset-of-Hausdorff} Every compact Hausdorff space is normal, and therefore regular. \end{letter-compact} \end{proposition} \begin{proof} See \citet[Theorems 26.5, 26.2, 26.3, 32.3]{munkres}. Regularity in part (\ref{prop:compact:subset-of-Hausdorff}) follows by \Cref{prop:hausdorff:normal}. 
\end{proof}

\begin{proposition} \label{pr:cont-compact-attains-max} Let $f:X\to\eR$ be continuous and $X$ compact. Then $f$ attains both its minimum and maximum on $X$. \end{proposition}

\begin{proof} This is a special case of \citet[Theorem~27.4]{munkres}. \end{proof}

For topological spaces $X$ and $Y$, we say that $Y$ is a \emph{compactification} of $X$ if $Y$ is compact, $X$ is a subspace of $Y$, and $X$ is dense in $Y$. (In defining compactification, others, including \citealp{munkres}, also require that $Y$ be Hausdorff.)

\subsection{Sequences, first countability, second countability}
\label{sec:prelim:countability}

Let $X$ be a topological space. We say that a sequence $\seq{x_t}$ in $X$ \emph{converges} to a point $x$ in $X$ if for every neighborhood $U$ of $x$, $x_t\in U$ for all $t$ sufficiently large (that is, there exists an index $t_0$ such that $x_t\in U$ for $t\ge t_0$). When this occurs, we write $x_t\to x$ and refer to $x$ as a \emph{limit} of $\seq{x_t}$. We say that a sequence $\seq{x_t}$ \emph{converges in $X$} if it has a limit $x\in X$. A sequence in a Hausdorff space can have at most one limit; in that case, we write $\lim x_t=x$ when $x_t\to x$. A \emph{subsequence} of a sequence $\seq{x_t}$ is any sequence $\seq{x_{s(t)}}$ where $s:\nats\rightarrow\nats$ and $s(1)<s(2)<\cdots$. We say that $x$ is a \emph{subsequential limit} of a sequence if $x$ is the limit of one of its subsequences.

A set $S$ is \emph{countably infinite} if there exists a bijection mapping $\nats$ to $S$. It is \emph{countable} if it is either finite or countably infinite.

\begin{proposition} \label{pr:count-equiv} Let $S$ be a nonempty set. Then the following are equivalent:
\begin{letter-compact}
\item \label{pr:count-equiv:a} $S$ is countable.
\item \label{pr:count-equiv:b} There exists a surjective function from $\nats$ to $S$.
\item \label{pr:count-equiv:c} There exists an injective function from $S$ to $\nats$.
\end{letter-compact} \end{proposition} \begin{proof} See \citet[Theorem~7.1]{munkres}. \end{proof} \begin{proposition} \label{pr:uncount-interval} Let $a,b\in\R$ with $a<b$. Then the interval $[a,b]$ is uncountable. \end{proposition} \begin{proof} See \citet[Theorem~27.8]{munkres}. \end{proof} A \emph{neighborhood base} at a point $x$ is a collection $\calB$ of neighborhoods of $x$ such that each neighborhood of $x$ contains at least one of the elements of $\calB$. A topological space $X$ is said to be \emph{first-countable} if there exists a countable neighborhood base at every point $x\in X$. \begin{example}[First countability of $\Rn$] \label{ex:rn-first-count} Let $\xx\in\Rn$, and for each $t$, let $B_t=\ball(\xx,1/t)$. Then $\countset{B}$ is a countable neighborhood base for $\xx$. This is because if $U$ is a neighborhood of $\xx$, then there exist $\yy\in\Rn$ and $\epsilon>0$ such that $\xx\in \ball(\yy,\epsilon)\subseteq U$. Letting $t$ be so large that $\norm{\xx-\yy}+1/t<\epsilon$, it then follows that $B_t\subseteq \ball(\yy,\epsilon)\subseteq U$. Thus, in the Euclidean topology, $\Rn$ is first-countable. \end{example} A countable neighborhood base $\countset{B}$ at a point $x$ is \emph{nested} if $B_1\supseteq B_2\supseteq\cdots$. For instance, the countable neighborhood base in Example~\ref{ex:rn-first-count} is nested. Any countable neighborhood base $\countset{B}$ can be turned into a nested one, $\countset{B'}$, by setting $B'_t=B_1\cap\dotsc\cap B_t$ for all $t$. If $\countset{B}$ is a nested countable neighborhood base at $x$, then every sequence $\seq{x_t}$ with each point $x_t$ in $B_t$ must converge to $x$: \begin{proposition} \label{prop:nested:limit} Let $x$ be a point in a topological space $X$. Let $\countset{B}$ be a nested countable neighborhood base at $x$, and let $x_t\in B_t$ for all $t$. Then $x_t\to x$. \end{proposition} \begin{proof} Let $U$ be a neighborhood of $x$. 
Then there exists $t_0$ such that $B_{t_0}\subseteq U$, and hence, by the nested property, $x_t\in U$ for all $t\ge t_0$. Thus, $x_t\to x$. \end{proof}

First countability allows us to work with sequences as we would in $\Rn$, for instance, for characterizing the closure of a set or the continuity of a function:

\begin{proposition} \label{prop:first:properties} Let $X$ and $Y$ be topological spaces, and let $x\in X$.
\begin{letter-compact}
\item \label{prop:first:closure} Let $A\subseteq X$. If there exists a sequence $\seq{x_t}$ in $A$ with $x_t\rightarrow x$ then $x\in\Abar$. If $X$ is first-countable, then the converse holds as well.
\item \label{prop:first:cont} Let $f:X\to Y$. If $f$ is continuous at $x$ then $f(x_t)\rightarrow f(x)$ for every sequence $\seq{x_t}$ in $X$ that converges to $x$. If $X$ is first-countable, then the converse holds as well.
\end{letter-compact}
\end{proposition}

\begin{proof} ~
\begin{proof-parts}
\pfpart{Part~(\ref{prop:first:closure}):} See \citet[Theorem~30.1a]{munkres}.
\pfpart{Part~(\ref{prop:first:cont}):} Assume first that $f$ is continuous at $x$, and let $\seq{x_t}$ be any sequence in $X$ that converges to $x$. Then for any neighborhood $V$ of $f(x)$, there exists a neighborhood $U$ of $x$ such that $f(U)\subseteq V$. Since $x_t\to x$, there exists $t_0$ such that $x_t\in U$ for all $t\ge t_0$, and so also $f(x_t)\in V$ for all $t\ge t_0$. Thus, $f(x_t)\to f(x)$.

For the converse, assume $X$ is first-countable, and that $x_t\to x$ implies $f(x_t)\to f(x)$. Let $\countset{B}$ be a nested countable neighborhood base at $x$, which exists by first countability. For contradiction, suppose there exists a neighborhood $V$ of $f(x)$ such that $f^{-1}(V)$ does not contain any neighborhood of $x$, and so in particular does not contain any $B_t$. Then for all $t$, there exists $x_t\in B_t\setminus f^{-1}(V)$.
Thus, $f(x_t)\not\in V$ for all $t$, but also $x_t\to x$, by \Cref{prop:nested:limit}, implying $f(x_t)\to f(x)$, by assumption. The set $V^c=Y\setminus V$ is closed, and $f(x_t)\in V^c$ for all $t$, so by part~(\ref{prop:first:closure}), we also have that $f(x)\in V^c$. This contradicts that $V$ is a neighborhood of $f(x)$. \qedhere \end{proof-parts} \end{proof} The next proposition proves a kind of converse to \Cref{prop:nested:limit}. In particular, it shows that if, in some topological space $X$, a family of neighborhoods $\countset{B}$ of a point $x$ has the property given in \Cref{prop:nested:limit} (so that if each $x_t$ is in $B_t$ then $x_t\rightarrow x$), then those neighborhoods must constitute a neighborhood base at $x$. The proposition shows more generally that this holds even if we only assume the property from \Cref{prop:nested:limit} to be true for sequences in a dense subset of the space $X$ (rather than all of $X$). Note that the proposition does not assume that $X$ is first-countable. \begin{proposition} \label{thm:first:fromseq} Let $X$ be a regular topological space with a dense subset $Z$. Let $x\in X$, and let $\countset{B}$ be a countable family of neighborhoods of $x$. Suppose that for every sequence $\seq{z_t}$ with each $z_t\in Z\cap B_t$, we have $z_t\rightarrow x$. Then the family $\countset{B}$ is a countable neighborhood base at $x$. \end{proposition} \begin{proof} Contrary to the claim, suppose $\countset{B}$ is not a countable neighborhood base at $x$, meaning there exists a neighborhood $N$ of $x$ that does not contain any $B_t$. Then for each $t$, there is a point $x_t\in B_t\cap \Ncomp$ where $\Ncomp=X \setminus N$ is the complement of $N$. Since $\Ncomp$ is closed (being the complement of an open set) and $x\not\in\Ncomp$, the fact that the space $X$ is regular implies that there must exist disjoint open sets $U$ and $V$ such that $\Ncomp\subseteq U$ and $x\in V$. 
For each $t$, $x_t\in B_t\cap\Ncomp \subseteq B_t\cap U$, meaning $B_t\cap U$ is a neighborhood of~$x_t$. Therefore, since $Z$ is dense in $X$, there exists a point $z_t\in B_t\cap U\cap Z$. By assumption, the resulting sequence $\seq{z_t}$ converges to $x$. Since $V$ is a neighborhood of $x$, this means that all but finitely many of the points $z_t$ are in $V$. But this is a contradiction since every $z_t\in U$, and $U$ and $V$ are disjoint. \end{proof} A subset $A$ of a topological space $X$ is said to be \emph{sequentially compact} if every sequence in $A$ has a subsequence converging to an element of $A$. \begin{proposition} \label{prop:first:subsets} ~ \begin{letter-compact} \item \label{i:first:subspace} Every subspace of a first-countable space is first-countable. \item \label{i:first:compact} Every compact subset of a first-countable space is sequentially compact. \end{letter-compact} \end{proposition} A topological space is said to be \emph{second-countable} if it has a countable base. \begin{proposition} \label{prop:sep:metrizable} Let $X$ be a topological space. If $X$ is metrizable and if $X$ includes a countable dense subset, then $X$ is second-countable. \end{proposition} \begin{proof} See \citet[Theorem~3.40]{hitchhiker_guide_analysis}, or \citet[Exercise~30.5a]{munkres}. \end{proof} \begin{example}[Countable dense subset and countable base for $\Rn$] \label{ex:2nd-count-rn} The set $\rats$ is countable, as is its $n$-fold Cartesian product $\rats^n$ \citep[Example~7.3 and Theorem~7.6]{munkres}. Further, $\rats^n$ is dense in $\Rn$ (as can be seen, for instance, by constructing, for each $\xx\in\Rn$, a sequence in $\rats^n$ that converges to $\xx$). Since $\Rn$ is a metric space, \Cref{prop:sep:metrizable} then implies that it is second-countable, and so has a countable base. Explicitly, such a base is given by all sets $\ball(\yy,1/k)$ for all $\yy\in\rats^n$ and all $k\in\nats$. 
\end{example} \subsection{Product topology, Tychonoff's theorem, weak topology} \label{sec:prod-top} The \emph{Cartesian product} of an indexed family of sets $\{X_\alpha\}_{\alpha\in\indset}$, where $\indset$ is some index set, is the set \begin{equation} \label{eqn:cart-prod-notation} \prod_{\alpha\in\indset} X_\alpha = \bigBraces{\tupset{x_\alpha}{\alpha\in\indset} : x_\alpha \in X_\alpha \mbox{ for all } \alpha\in\indset }. \end{equation} Here, $x=\tupset{x_\alpha}{\alpha\in\indset}$ is a \emph{tuple}, which, as it appears in this equation, is formally the function $x:\indset\rightarrow \bigcup_{\alpha\in\indset} X_\alpha$ with $x(\alpha)=x_\alpha$ for $\alpha\in\indset$. The product $\prod_{\alpha\in\indset} X_\alpha$ thus consists of all such tuples with $x_\alpha\in X_\alpha$ for $\alpha\in\indset$. Since tuples are functions, we sometimes write the components of a tuple $x$ as $x(\alpha)$ rather than $x_\alpha$, depending on context. When $X_\alpha=X$ for all $\alpha\in\indset$, the Cartesian product in \eqref{eqn:cart-prod-notation} is written $X^\indset$, and every tuple in this set is simply a function mapping $\indset$ into $X$. Thus, in general, $X^\indset$ denotes the set of all functions from $\indset$ to $X$. When $\indset=\{1,2\}$, the Cartesian product in \eqref{eqn:cart-prod-notation} is simply $X_1\times X_2$, with elements given by all pairs $\rpair{x_1}{x_2}$ with $x_1\in X_1$ and $x_2\in X_2$. Suppose each $X_\alpha$ is a topological space with topology $\topo_\alpha$, for all $\alpha\in\indset$. Let $X=\prod_{\alpha\in\indset} X_\alpha$. For each $\beta\in\indset$, let $\tprojb:X\rightarrow X_\beta$ denote the \emph{projection map} defined by $\tprojb(x)=\tprojb(\tupset{x_\alpha}{\alpha\in\indset})=x_\beta$ for all tuples $x\in X$.
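\begin{example}[Sequences as tuples] To illustrate these definitions, let $\indset=\nats$ and $X_\alpha=\R$ for all $\alpha\in\indset$. Then $\prod_{\alpha\in\nats} X_\alpha = \R^\nats$ is the set of all functions $x:\nats\rightarrow\R$, that is, the set of all infinite sequences of real numbers. For each $\beta\in\nats$, the projection map $\tprojb$ extracts the $\beta$-th element of such a sequence: $\tprojb(x)=x(\beta)=x_\beta$. \end{example}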
The \emph{product topology} on $X$ is then the topology generated by a subbase consisting of all sets \begin{equation} \label{eqn:prod-topo-subbase} \tprojinva(U_\alpha) = \Braces{ x \in X : x_\alpha \in U_\alpha }, \end{equation} for all $\alpha\in\indset$ and all $U_\alpha\in\topo_\alpha$, that is, all sets $U_\alpha$ that are open in $X_\alpha$. The resulting topological space is called a \emph{product space}. Unless explicitly stated otherwise, we always assume this product topology when working with Cartesian products. If each topology $\topo_\alpha$ is generated by a subbase $\calS_\alpha$, then the same product topology is generated if only subbase elements are used, that is, if a subbase for the product space is constructed using only sets $\tprojinva(U_\alpha)$ for $\alpha\in\indset$ and $U_\alpha\in\calS_\alpha$. Equivalently, the product topology is generated by a base consisting of all sets $\prod_{\alpha\in\indset} U_\alpha$ where each $U_\alpha$ is open in $X_\alpha$ for all $\alpha\in\indset$, and where $U_\alpha=X_\alpha$ for all but finitely many values of $\alpha$ \citep[Theorem~19.1]{munkres}. In particular, $X\times Y$, where $X$ and $Y$ are topological spaces, has as base all sets $U\times V$ where $U$ and $V$ are open in $X$ and $Y$, respectively. Here are some properties of product spaces in the product topology. Note especially that the product of compact spaces is compact. \begin{proposition} \label{pr:prod-top-props} Let $\indset$ be an index set, and for each $\alpha\in\indset$, let $X_\alpha$ be a topological space. Let $X=\prod_{\alpha\in\indset} X_\alpha$ be in the product topology. Then: \begin{letter-compact} \item \label{pr:prod-top-props:a} If each $X_\alpha$ is Hausdorff, then $X$ is also Hausdorff. \item \label{pr:prod-top-props:b} \textup{(Tychonoff's theorem)~} If each $X_\alpha$ is compact, then $X$ is also compact. 
\item \label{pr:prod-top-props:c} If each $X_\alpha$ is first-countable, and if $\indset$ is countable, then $X$ is also first-countable. \item \label{pr:prod-top-props:d} Let $A_\alpha\subseteq X_\alpha$, for each $\alpha\in\indset$. Then \[ \clbar{\prod_{\alpha\in\indset} A_\alpha} = \prod_{\alpha\in\indset} \Abar_\alpha, \] where the closure on the left is in $X$, and where $\Abar_\alpha$ denotes closure of $A_\alpha$ in $X_\alpha$. \item \label{pr:prod-top-props:e} Let $\seq{x_t}$ be a sequence in $X$, and let $x\in X$. Then $x_t\rightarrow x$ in $X$ if and only if $x_t(\alpha)\rightarrow x(\alpha)$ in $X_\alpha$ for all $\alpha\in\indset$. \end{letter-compact} \end{proposition} \begin{proof} See, respectively, Theorems~19.4,~37.3,~30.2,~19.5, and Exercise~19.6 of \citet{munkres}. \end{proof} In particular, if $X$ and $Y$ are topological spaces, then \Cref{pr:prod-top-props}(\ref{pr:prod-top-props:a},\ref{pr:prod-top-props:b},\ref{pr:prod-top-props:c}) show that if both $X$ and $Y$ are, respectively, Hausdorff or compact or first-countable, then so is $X\times Y$. Further, \Cref{pr:prod-top-props}(\ref{pr:prod-top-props:e}) shows that for all sequences $\seq{x_t}$ in $X$ and $\seq{y_t}$ in $Y$, and for all $x\in X$ and $y\in Y$, the sequence $\seq{\rpair{x_t}{y_t}}$ converges to $\rpair{x}{y}$ (in $X\times Y$) if and only if $x_t\rightarrow x$ and $y_t\rightarrow y$. If $\topo$ and $\topo'$ are topologies on some space $X$ with $\topo\subseteq\topo'$, then we say that $\topo$ is \emph{coarser} (or \emph{smaller} or \emph{weaker}) than $\topo'$, and likewise, that $\topo'$ is \emph{finer} (or \emph{larger} or \emph{stronger}) than $\topo$. The intersection $\bigcap_{\alpha\in\indset}\topo_\alpha$ of a family of topologies $\set{\topo_\alpha}_{\alpha\in\indset}$ on $X$ is also a topology on~$X$.
Consequently, for any family $\calS$ of sets whose union is $X$, there exists a unique coarsest topology that contains $\calS$, namely, the intersection of all topologies that contain $\calS$. This is precisely the topology generated by $\calS$ as a subbase. Let $\calH$ be a family of functions from a set $X$ to a topological space $Y$. The \emph{weak topology} with respect to $\calH$ is the coarsest topology on $X$ under which all the functions in $\calH$ are continuous. By \Cref{prop:cont}(\ref{prop:cont:sub}), the weak topology with respect to $\calH$ has a subbase consisting of all sets $h^{-1}(U)$ for all $h\in\calH$ and all $U$ open in $Y$ (or, alternatively, all elements of a subbase for $Y$). In particular, the product topology on $X^\indset$ is the same as the weak topology on this set with respect to the set of all projection maps $\tproja$ for $\alpha\in\indset$. Let $Z$ be a set, and let $Y$ be a topological space. As noted earlier, $Y^Z$ is the set of all functions mapping $Z$ into $Y$, that is, all functions $f:Z\rightarrow Y$. This set is thus an important special case of a Cartesian product. In this case, the projection maps $\tprojz$ simply become function evaluation maps so that $\tprojz(f)=f(z)$, for $z\in Z$ and $f:Z\rightarrow Y$. As such, a subbase for the product topology on $Y^Z$ consists of all sets \begin{equation} \label{eq:fcn-cl-gen-subbase} \tprojinvz(U) = \Braces{ f\in Y^Z : f(z)\in U }, \end{equation} for all $z\in Z$ and $U$ open in $Y$, or alternatively, all $U$ that are elements of a subbase for $Y$. The product topology on the function space $Y^Z$ is also called the \emph{topology of pointwise convergence} due to the following property: \begin{proposition} \label{pr:prod-top-ptwise-conv} Let $Z$ be a set, let $Y$ be a topological space, let $f:Z\rightarrow Y$, and let $\seq{f_t}$ be a sequence of functions in $Y^Z$. Then $f_t\rightarrow f$ in the product topology on $Y^Z$ if and only if $f_t(z)\rightarrow f(z)$ for all $z\in Z$.
\end{proposition} \begin{proof} This is a special case of \Cref{pr:prod-top-props}(\ref{pr:prod-top-props:e}). \end{proof} \subsection{Lower semicontinuity on first-countable spaces} \label{sec:prelim:lower-semicont} A function $f:X\to\eR$ on a first-countable space $X$ is said to be \emph{lower semicontinuous} at $x\in X$ if $f(x)\le\liminf f(x_t)$ for every sequence $\seq{x_t}$ in $X$ with $x_t\to x$. Likewise, $f$ is \emph{upper semicontinuous} at $x\in X$ if $f(x)\ge\limsup f(x_t)$ for every sequence $\seq{x_t}$ in $X$ with $x_t\to x$. (We only define semicontinuity on first-countable spaces, but it is also possible to give definitions for general topological spaces. See \citealp[Section~2.10]{hitchhiker_guide_analysis}.) By \Cref{prop:first:properties}(\ref{prop:first:cont}), $f$ is continuous at~$x$ if and only if it is both lower semicontinuous and upper semicontinuous at $x$. Below, we focus on lower semicontinuity, but symmetric results hold for upper semicontinuous functions as well since $f$ is upper semicontinuous at $x$ if and only if $-f$ is lower semicontinuous at~$x$. A function $f:X\rightarrow\Rext$ is \emph{lower semicontinuous} if it is lower semicontinuous at every point in $X$. These are precisely the functions whose epigraphs are closed in $X\times\R$, or equivalently, the functions whose sublevel sets are closed: \begin{proposition} \label{prop:lsc} Let $f:X\to\eR$ where $X$ is a first-countable space. Then the following are equivalent: \begin{letter-compact} \item $f$ is lower semicontinuous. \label{prop:lsc:a} \item The epigraph of $f$ is closed in $X\times\R$. \label{prop:lsc:b} \item The sublevel set $\set{x\in X:\:f(x)\le\alpha}$ is closed for every $\alpha\in\eR$.
\label{prop:lsc:c} \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{prop:lsc:a}) $\Rightarrow$ (\ref{prop:lsc:b}):} Note first that $X\times\R$ is first-countable by \Cref{pr:prod-top-props}(\ref{pr:prod-top-props:c}) since both $X$ and $\R$ are first-countable. Assume $f$ is lower semicontinuous, and let $\rpair{x}{y}\in X\times\R$ be in the closure of $\epi f$. We aim to show that $\rpair{x}{y}\in\epi f$, proving that $\epi f$ is closed. By \Cref{prop:first:properties}(\ref{prop:first:closure}), there exists a sequence $\bigSeq{\rpair{x_t}{y_t}}$ in $\epi f$ converging to $\rpair{x}{y}$, implying $x_t\rightarrow x$ and $y_t\rightarrow y$ by \Cref{pr:prod-top-props}(\ref{pr:prod-top-props:e}). By lower semicontinuity, and the fact that $f(x_t)\le y_t$, we have \[ f(x)\le\liminf f(x_t)\le\liminf y_t=y. \] Thus, $\rpair{x}{y}\in\epi f$, so $\epi f$ is closed. \pfpart{(\ref{prop:lsc:b}) $\Rightarrow$ (\ref{prop:lsc:c}):} Assume $\epi{f}$ is closed, and for $\alpha\in\eR$, let \[ S_{\alpha}=\set{x\in X:\:f(x)\le\alpha}. \] We need to show that $S_\alpha$ is closed for every $\alpha\in\eR$. This is trivially true for $\alpha=+\infty$, since $S_{+\infty}=X$. For $\alpha\in\R$, let $x\in X$ be in the closure of $S_\alpha$, implying, since $X$ is first-countable, that there exists a sequence $\seq{x_t}$ in $S_\alpha$ with $x_t\rightarrow x$. Then $\rpair{x_t}{\alpha}\in\epi f$ for all $t$, and furthermore, the sequence $\seq{\rpair{x_t}{\alpha}}$ in $\epi f$ converges to $\rpair{x}{\alpha}$ (by \Cref{pr:prod-top-props}\ref{pr:prod-top-props:e}). Since $\epi f$ is closed, this implies that $\rpair{x}{\alpha}$ is in $\epi f$. Thus, $x\in S_\alpha$, and therefore $S_\alpha$ is closed. 
The remaining case, $\alpha=-\infty$, follows because $S_{-\infty}=\bigcap_{\alpha\in\R} S_\alpha$, which must be closed as an intersection of closed sets.\looseness=-1 \pfpart{(\ref{prop:lsc:c}) $\Rightarrow$ (\ref{prop:lsc:a}):} Assume all sublevel sets are closed. Let $\seq{x_t}$ be a sequence in $X$ converging to some $x\in X$. We will argue that $f(x)\le\liminf f(x_t)$. If $\liminf f(x_t)=+\infty$, then this trivially holds. Otherwise, let $\alpha\in\R$ be such that $\alpha>\liminf f(x_t)$. Then the sequence $\seq{x_t}$ has infinitely many elements $x_t$ with $f(x_t)\le \alpha$. Let $\seq{x'_t}$ be the subsequence consisting of those elements. The set $S=\set{x\in X:f(x)\le \alpha}$ is closed, by assumption, and includes all $x'_t$. Since $x'_t\to x$, we must have $x\in S$ by \Cref{prop:first:properties}(\ref{prop:first:closure}), so $f(x)\le \alpha$. Since this is true for all $\alpha>\liminf f(x_t)$, we obtain $f(x)\le\liminf f(x_t)$. \qedhere \end{proof-parts} \end{proof} Lower semicontinuous functions on compact sets are particularly well-behaved because they always attain a minimum: \begin{proposition} \label{thm:weierstrass} Let $f:X\to\eR$ be a lower semicontinuous function on a first-countable compact space $X$. Then $f$ attains a minimum on $X$. \end{proposition} \begin{proof} Let $\alpha=\inf f$. If $\alpha=+\infty$, then every $x\in X$ attains the minimum, so we assume henceforth that $\alpha<+\infty$. We construct a sequence $\seq{x_t}$ such that $f(x_t)\to\alpha$: If $\alpha=-\infty$, then let each $x_t$ be such that $f(x_t)<-t$. Otherwise, if $\alpha\in\R$, then let $x_t$ be such that $f(x_t)<\alpha+1/t$. In either case, such points $x_t$ must exist since $\alpha=\inf f$. By compactness (and first-countability), the sequence $\seq{x_t}$ has a convergent subsequence, which we denote $\seq{x'_t}$. Let $x$ be a limit of $\seq{x'_t}$. Then by lower semicontinuity we have \[ f(x)\le\liminf f(x'_t)=\lim f(x'_t)=\alpha. 
\] Since $\alpha=\inf f$, the point $x$ must be a minimizer of~$f$. \end{proof} The \emph{lower semicontinuous hull} of $f:X\to\eR$, where $X$ is first-countable, is the function $(\lsc f):X\to\eR$ defined, for $x\in X$, as \begin{equation} \label{eq:lsc:liminf:X:prelims} (\lsc f)(x) = \InfseqLiminf{\seq{x_t}}{X}{x_t\rightarrow x} {f(x_t)}, \end{equation} where the notation means that the infimum is over all sequences $\seq{x_t}$ in $X$ converging to~$x$. The function $\lsc f$ can be characterized as the greatest lower semicontinuous function that is majorized by $f$, and its epigraph as the closure of $\epi f$ in $X\times\R$: \begin{proposition} \label{prop:lsc:characterize} Let $f:X\to\eR$ where $X$ is a first-countable space. Then $\lsc f$ is the greatest lower semicontinuous function on $X$ that is majorized by $f$, and $\epi(\lsc f)$ is the closure of $\epi f$ in $X\times\R$. \end{proposition} \begin{proof} Let $E=\clbar{\epi f}$, the closure of $\epi{f}$ in $X\times\R$. For $x\in X$, let \[ Y_x=\set{y:\:\rpair{x}{y}\in E}, \] and let $g(x)=\inf Y_x$. As the first step in our proof, we will show that $E=\epi g$ by arguing that $Y_x=\set{y\in\R:\: y\ge g(x)}$ for every $x\in X$. If $g(x)=+\infty$ then $Y_x=\emptyset$, proving the claim in this case. Otherwise, let $x\in X$ be such that $g(x)<+\infty$. If $y\in Y_x$, then $y\geq \inf Y_x = g(x)$; thus, $Y_x\subseteq [g(x),+\infty)$. For the reverse inclusion, let $y\in\R$ be such that $y>g(x)$. Since $g(x)=\inf Y_x$, there exists $y'\in Y_x$, such that $g(x)\le y'< y$. This means that $\rpair{x}{y'}\in E$, so $\rpair{x_t}{y'_t}\to\rpair{x}{y'}$ for some sequence $\seq{\rpair{x_t}{y'_t}}$ in $\epi f$. For any $s>0$, we then also have $\rpair{x}{y'+s}\in E$ since the points $\rpair{x_t}{y'_t+s}$ are also in $\epi f$ and converge to $\rpair{x}{y'+s}$. In particular, setting $s=y-y'$, we obtain that $\rpair{x}{y}\in E$, and hence also $y\in Y_x$. Thus, we have shown that $(g(x),+\infty)\subseteq Y_x$. 
It remains to argue that $g(x)\in Y_x$ when $g(x)\in\R$. (If $g(x)=-\infty$, then the inclusion $(g(x),+\infty)\subseteq Y_x$ established above already shows that $Y_x=\R$, as claimed.) So suppose $g(x)\in\R$, and let $y_t=g(x)+1/t$ for each $t$, so that $y_t\in Y_x$ and $y_t\to g(x)$. Since $y_t\in Y_x$, we have $\rpair{x}{y_t}\in E$. And since $E$ is closed, the point $\rpair{x}{g(x)}$, which is a limit of $\bigSeq{\rpair{x}{y_t}}$, is also in $E$. Thus, $g(x)\in Y_x$. This means that in all cases we have shown that $Y_x=\set{y\in\R:\: y\ge g(x)}$, proving that $E$ is the epigraph of $g$. Since the epigraph of $g$ is closed, $g$ is lower semicontinuous by \Cref{prop:lsc}(\ref{prop:lsc:a},\ref{prop:lsc:b}). Also, $\epi f\subseteq E = \epi g$, so $g$ is majorized by $f$. If $h$ is any lower semicontinuous function majorized by $f$, meaning that $\epi h\supseteq\epi f$ and $\epi h$ is closed (by \Cref{prop:lsc}\ref{prop:lsc:a},\ref{prop:lsc:b}), then $\epi h\supseteq\clbar{\epi f}=\epi g$. Thus, $g\ge h$. This proves that $g$ is the greatest lower semicontinuous function on $X$ that is majorized by $f$. It remains to show that $g=\lsc f$. Let $x\in X$. Then by lower semicontinuity of $g$ and the fact that $g\le f$, \[ g(x) \le \InfseqLiminf{\seq{x_t}}{X}{x_t\rightarrow x} {g(x_t)} \le \InfseqLiminf{\seq{x_t}}{X}{x_t\rightarrow x} {f(x_t)} = (\lsc f)(x). \] To prove the reverse inequality, $g(x)\ge(\lsc f)(x)$, it suffices to focus on the case $g(x)<+\infty$ (since it holds trivially if $g(x)=+\infty$). So assume that $g(x)<+\infty$ and let $y\in\R$ be such that $y>g(x)$. Then $\rpair{x}{y}\in E$, meaning that $\rpair{x_t}{y_t}\to\rpair{x}{y}$ for some sequence $\seq{\rpair{x_t}{y_t}}$ in $\epi f$. Therefore, \[ y = \liminf y_t \ge \liminf f(x_t) \ge (\lsc f)(x), \] with the last inequality from \eqref{eq:lsc:liminf:X:prelims}. Since this holds for all $y>g(x)$, we obtain $g(x)\ge(\lsc f)(x)$, finishing the proof.
\end{proof} \section{Review of convex analysis} \label{sec:rev-cvx-analysis} We briefly review some fundamentals of convex analysis that will be used later in this work. For a more complete introduction, the reader is referred to a standard text, such as \citet{ROC}, on which this review is mainly based. As with the last section, depending on background, readers may prefer to skip or skim this section and only refer back to it as needed. \subsection{Convex sets} \label{sec:prelim:convex-sets} The \emph{line segment joining points $\xx$ and $\yy$} in $\Rn$ is the set $\braces{(1-\lambda)\xx+\lambda\yy : \lambda\in [0,1]}$. A set $S\subseteq\Rn$ is \emph{convex} if the line segment joining any two points in $S$ is also entirely included in $S$. If $\lambda_1,\ldots,\lambda_m$ are nonnegative and sum to $1$, then the point $\sum_{i=1}^m \lambda_i \xx_i$ is said to be a \emph{convex combination} of $\xx_1,\ldots,\xx_m\in\Rn$. The arbitrary intersection of a family of convex sets is also convex. As such, for any set $S\subseteq\Rn$, there exists a smallest convex set that includes $S$, namely, the intersection of all convex sets that include $S$, which is called the \emph{convex hull} of $S$, and denoted $\conv{S}$. Here are some facts about convex hulls: \begin{proposition} \label{roc:thm2.3} Let $S\subseteq\Rn$. Then $\conv{S}$ consists of all convex combinations of points in $S$. \end{proposition} \begin{proof} See \citet[Theorem~2.3]{ROC}. \end{proof} Carath\'{e}odory's theorem is more specific: \begin{proposition} \label{roc:thm17.1} Let $S\subseteq\Rn$. If $\xx\in\conv{S}$ then $\xx$ is a convex combination of at most $n+1$ points in $S$. \end{proposition} \begin{proof} See \citet[Theorem~17.1]{ROC}. \end{proof} \begin{proposition} \label{roc:thm3.3} Let $C_1,\ldots,C_m\subseteq\Rn$ be convex and nonempty, and let \[ C = \conv\Parens{\bigcup_{i=1}^m C_i}.
\] Then $C$ consists exactly of all points of the form $\sum_{i=1}^m \lambda_i \xx_i$ for some $\lambda_1,\ldots,\lambda_m\in [0,1]$ which sum to $1$, and some $\xx_i\in C_i$ for $i=1,\ldots,m$. \end{proposition} \begin{proof} This follows as a special case of \citet[Theorem~3.3]{ROC} (with $I=\{1,\ldots,m\}$). \end{proof} \begin{proposition} \label{roc:thm3.1} Let $C,D\subseteq\Rn$ be convex. Then $C+D$ is also convex. \end{proposition} \begin{proof} See \citet[Theorem~3.1]{ROC}. \end{proof} \begin{proposition} \label{pr:aff-preserves-cvx} Let $\A\in\Rmn$ and $\bb\in\Rm$, and let $F:\Rn\rightarrow\Rm$ be defined by $F(\xx)=\A\xx+\bb$ for $\xx\in\Rn$. Let $C\subseteq\Rn$ and $D\subseteq\Rm$ both be convex. Then $F(C)$ and $F^{-1}(D)$ are also both convex. \end{proposition} \begin{proof} Let $A:\Rn\rightarrow\Rm$ be the linear map associated with $\A$ so that $A(\xx)=\A\xx$ for $\xx\in\Rn$. Then $F(C)=A(C)+\bb$ and $F^{-1}(D)=\Alininv(D-\bb)$. From the definition of convexity, it is straightforward to show that the translation of a convex set is also convex. Combined with \citet[Theorem~3.4]{ROC}, the claim follows. \end{proof} \subsection{Hull operators} \label{sec:prelim:hull-ops} The convex hull operation is an example of a more generic hull operation that arises repeatedly, as we now discuss. Let $X$ be any set and let $\calC$ be a collection of subsets of $X$ that includes $X$ itself and that is closed under arbitrary intersection (meaning that if $\calS\subseteq\calC$ then $\bigcap_{S\in\calS} S$ is also in $\calC$). For a set $S\subseteq X$, we then define $\genhull S$ to be the smallest set in $\calC$ that includes $S$, that is, the intersection of all sets in $\calC$ that include $S$: \[ \genhull S = \bigcap_{C\in\calC: S\subseteq C} C. \] The mapping $S\mapsto \genhull S$ is called the \emph{hull operator for $\calC$}. For example, the convex hull operation $S\mapsto\conv{S}$ for $S\subseteq\Rn$ is the hull operator for the set of all convex sets in $\Rn$.
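Similarly, the collection of all linear subspaces of $\Rn$ contains $\Rn$ itself and is closed under arbitrary intersection, so it too admits a hull operator, namely the span operation $S\mapsto\spn S$, which maps each set $S\subseteq\Rn$ to the smallest linear subspace that includes $S$.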
As another example, for any topological space $X$, the closure operation $S\mapsto \Sbar$ is the hull operator for the set of all closed sets, as seen in Section~\ref{sec:prelim:topo:closed-sets}. We will shortly see several other examples. Here are some general properties of hull operators: \begin{proposition} \label{pr:gen-hull-ops} Let $\calC$ be a collection of subsets of a set $X$ that includes $X$ itself and that is closed under arbitrary intersection. Let $\genhull$ be the hull operator for $\calC$. Let $S,U\subseteq X$. Then the following hold: \begin{letter-compact} \item \label{pr:gen-hull-ops:a} $S\subseteq \genhull S$ and $\genhull S\in\calC$. Also, $S= \genhull S$ if and only if $S\in\calC$. \item \label{pr:gen-hull-ops:b} If $S\subseteq U$ and $U\in\calC$, then $\genhull S \subseteq U$. \item \label{pr:gen-hull-ops:c} If $S\subseteq U$, then $\genhull S\subseteq \genhull U$. \item \label{pr:gen-hull-ops:d} If $S\subseteq U\subseteq \genhull S$, then $\genhull U = \genhull S$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Parts~(\ref{pr:gen-hull-ops:a}) and~(\ref{pr:gen-hull-ops:b}):} These are immediate from definitions. \pfpart{Part~(\ref{pr:gen-hull-ops:c}):} Since $S\subseteq U\subseteq\genhull{U}$, and since $\genhull{U}\in\calC$, $\genhull{S}\subseteq\genhull{U}$ by part~(\ref{pr:gen-hull-ops:b}). \pfpart{Part~(\ref{pr:gen-hull-ops:d}):} $\genhull{S}\subseteq\genhull U$ by part~(\ref{pr:gen-hull-ops:c}). And since $U\subseteq\genhull{S}$ and $\genhull{S}\in\calC$, $\genhull U\subseteq\genhull{S}$ by part~(\ref{pr:gen-hull-ops:b}). \qedhere \end{proof-parts} \end{proof} \subsection{Affine sets} \label{sec:prelim:affine-sets} A set $A\subseteq\Rn$ is \emph{affine} if it includes $(1-\lambda)\xx+\lambda\yy$ for all $\xx,\yy\in A$ and for all $\lambda\in\R$. The linear subspaces of $\Rn$ are precisely the affine sets that include the origin \citep[Theorem 1.1]{ROC}.
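For instance, the line $A=\set{\rpair{x_1}{x_2}\in\R^2 :\: x_2=2x_1+1}$ is affine: if $\xx=\rpair{x_1}{x_2}$ and $\yy=\rpair{y_1}{y_2}$ are in $A$ and $\lambda\in\R$, then the second coordinate of $(1-\lambda)\xx+\lambda\yy$ is $(1-\lambda)(2x_1+1)+\lambda(2y_1+1)=2\bigl((1-\lambda)x_1+\lambda y_1\bigr)+1$, so this point is again in $A$. On the other hand, $A$ is not a linear subspace since $\zero\not\in A$.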
Affine sets $A$ and $A'$ in $\Rn$ are said to be \emph{parallel} if $A'=A+\uu$ for some $\uu\in\Rn$. \begin{proposition} \label{roc:thm1.2} Let $A\subseteq\Rn$ be a nonempty affine set. Then there exists a unique linear subspace parallel to $A$. \end{proposition} \begin{proof} See \citet[Theorem~1.2]{ROC}. \end{proof} The \emph{dimension} of a nonempty affine set $A$ is defined to be the dimension of the unique linear subspace that is parallel to it. The \emph{affine hull} of a set $S\subseteq\Rn$, denoted $\affh{S}$, is the smallest affine set that includes $S$; that is, $\affh{S}$ is the intersection of all affine sets that include $S$. Equivalently, $\affh{S}$ consists of all \emph{affine combinations} of finitely many points in $S$, that is, all combinations $\sum_{i=1}^m \lambda_i \xx_i$ where $\xx_1,\dotsc,\xx_m\in S$, $\lambda_1,\dotsc,\lambda_m\in\R$, and $\sum_{i=1}^m \lambda_i = 1$. An expression for the linear subspace parallel to $\affh{S}$ is given in the next proposition: \begin{proposition} \label{pr:lin-aff-par} Let $S\subseteq\Rn$, and let $\uu\in\affh S$. Then $\spn(S-\uu)$ is the linear subspace parallel to $\affh S$. \end{proposition} \begin{proof} Let $A=\affh{S}$ and let $L=\spn(S-\uu)$. Suppose first that $\zz\in A$. Then $\zz=\sum_{i=1}^m \lambda_i \xx_i$ for some $\xx_1,\dotsc,\xx_m\in S$, $\lambda_1,\dotsc,\lambda_m\in\R$, $\sum_{i=1}^m \lambda_i = 1$. Thus, by algebra, \[ \zz = \uu + \sum_{i=1}^m \lambda_i (\xx_i - \uu). \] The sum on the right is evidently in $L=\spn(S-\uu)$, since each $\xx_i\in S$. Therefore, $\zz\in L+\uu$, so $A\subseteq L+\uu$. Suppose now that $\zz\in L$. Then \[ \zz = \sum_{i=1}^m \lambda_i (\xx_i - \uu) \] for some $\xx_1,\dotsc,\xx_m\in S$, $\lambda_1,\dotsc,\lambda_m\in\R$. By algebra, \[ \zz+\uu = \sum_{i=1}^m \lambda_i \xx_i + \paren{1-\sum_{i=1}^m \lambda_i} \uu. \] Thus, $\zz+\uu$ is an affine combination of $\xx_1,\dotsc,\xx_m$ and $\uu$, all of which are in $A=\affh{S}$, so $\zz+\uu$ is as well. 
Thus, $L+\uu\subseteq A$. \end{proof} \subsection{Closure, interior, relative interior} When working in $\Rn$, it is understood that we are always using the standard Euclidean topology, as defined in Example~\ref{ex:topo-on-rn}, unless explicitly stated otherwise. Let $S\subseteq\Rn$. The closure of $S$ in $\Rn$ (that is, the smallest closed set in $\Rn$ that includes $S$) is denoted $\cl S$. As in Section~\ref{sec:prelim:topo:closed-sets}, the interior of $S$ (that is, the largest open set in $\Rn$ included in $S$) is denoted $\intr S$. The \emph{boundary} of $S$ is the set difference $(\cl{S})\setminus(\intr S)$. Often, it is more useful to consider $S$ in the subspace topology induced by its affine hull, $\affh{S}$. The interior of $S$ in $\affh S$ (when viewed as a topological subspace of $\Rn$) is called its \emph{relative interior}, and is denoted $\ri S$. Thus, a point $\xx\in\Rn$ is in $\ri S$ if and only if $\ball(\xx,\epsilon)\cap(\affh{S})\subseteq S$ for some $\epsilon>0$ (where $\ball(\xx,\epsilon)$ is as in Eq.~\ref{eqn:open-ball-defn}). From definitions, it follows that \begin{equation} \label{eq:ri-in-cl} \intr S \subseteq \ri S \subseteq S \subseteq \cl S \end{equation} for all $S\subseteq\Rn$. The set difference $(\cl S)\setminus(\ri S)$ is called the \emph{relative boundary} of $S$. If $S=\ri S$, then $S$ is said to be \emph{relatively open}. Here are some facts about the relative interior of a convex set: \begin{proposition} \label{pr:ri-props} Let $C,D\subseteq\Rn$ be convex. \begin{letter-compact} \item \label{pr:ri-props:roc-thm6.2a} $\ri C$ and $\cl C$ are convex. \item \label{pr:ri-props:roc-thm6.2b} If $C\neq\emptyset$ then $\ri{C} \neq\emptyset$. \item \label{pr:ri-props:roc-thm6.3} $\ri(\cl C)=\ri C$ and $\cl(\ri C) = \cl C$. \item \label{pr:ri-props:roc-cor6.6.2} $\ri(C+D)=(\ri C) + (\ri D)$. \item \label{pr:ri-props:intC-nonemp-implies-eq-riC} If $\intr C\neq\emptyset$ then $\intr C = \ri C$. 
\item \label{pr:ri-props:intC-D-implies-riC-riD} If $(\intr C)\cap(\cl D)\neq\emptyset$ then $(\ri C)\cap(\ri D)\neq\emptyset$. \item \label{pr:ri-props:roc-cor6.3.1} The following are equivalent: \begin{roman-compact} \item $\ri C = \ri D$. \item $\cl C = \cl D$. \item $\ri C \subseteq D \subseteq \cl C$. \end{roman-compact} \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Parts~(\ref{pr:ri-props:roc-thm6.2a}) and~(\ref{pr:ri-props:roc-thm6.2b}):} See \citet[Theorem~6.2]{ROC}. \pfpart{Part~(\ref{pr:ri-props:roc-thm6.3}):} See \citet[Theorem~6.3]{ROC}. \pfpart{Part~(\ref{pr:ri-props:roc-cor6.6.2}):} See \citet[Corollary~6.6.2]{ROC}. \pfpart{Part~(\ref{pr:ri-props:intC-nonemp-implies-eq-riC}):} Suppose $\xx\in\intr C$. Then, for some $\epsilon>0$, the ball $\ball(\xx,\epsilon)$ is included in $C$, implying that $\affh{C}=\Rn$, and so that $\ri C = \intr C$ by definition of relative interior. \pfpart{Part~(\ref{pr:ri-props:intC-D-implies-riC-riD}):} Suppose $(\intr C)\cap(\cl D)\neq\emptyset$. Since $\intr C$ is open, it follows that $(\intr C)\cap(\ri D)\neq\emptyset$, by \citet[Corollary 6.3.2]{ROC}. Therefore, $(\ri C)\cap(\ri D)\neq\emptyset$, by \eqref{eq:ri-in-cl}. \pfpart{Part~(\ref{pr:ri-props:roc-cor6.3.1}):} See \citet[Corollary~6.3.1]{ROC}. \qedhere \end{proof-parts} \end{proof} \begin{proposition} \label{roc:thm6.1} Let $C\subseteq\Rn$ be convex, let $\xx\in\ri C$, let $\yy\in\cl C$, and let $\lambda\in (0,1]$. Then $\lambda\xx + (1-\lambda)\yy \in \ri C$. \end{proposition} \begin{proof} See \citet[Theorem~6.1]{ROC}. \end{proof} \begin{proposition} \label{roc:thm6.4} Let $C\subseteq\Rn$ be convex and nonempty, and let $\xx\in\Rn$. Then $\xx\in\ri C$ if and only if for all $\yy\in C$ there exists $\delta\in\Rstrictpos$ such that $\xx+\delta (\xx-\yy) \in C$. \end{proposition} \begin{proof} See \citet[Theorem~6.4]{ROC}.
\end{proof} For points in a convex set $C$'s relative boundary, there always exist points not in $C$'s closure that are arbitrarily close: \begin{proposition} \label{pr:bnd-near-cl-comp} Let $C\subseteq\Rn$ be convex, let $\xx\in(\cl C)\setminus(\ri C)$, and let $\epsilon>0$. Then there exists a point $\zz\in\Rn\setminus(\cl C)$ such that $\norm{\xx-\zz}<\epsilon$. \end{proposition} \begin{proof}[Alternate proof] For any set $S\subseteq\Rn$, let $S^c=\Rn\wo S$ denote its complement in $\Rn$. Since $C$ is convex, $\intr(\cl C)\subseteq\ri(\cl C)=\ri C$ by \Cref{pr:ri-props}(\ref{pr:ri-props:roc-thm6.3}). Since $\xx\in(\ri C)^c$, we therefore have \[ \xx\in[\intr(\cl C)]^c =\cl[(\cl C)^c], \] where the equality holds because the complement of a set's interior is the closure of its complement. Thus, there exists $\zz\in(\cl C)^c$ such that $\norm{\zz-\xx}<\epsilon$. \end{proof} \begin{proof} Let $C'=\cl C$. By \Cref{pr:ri-props}(\ref{pr:ri-props:roc-thm6.3}), $\ri C' = \ri C$, so $\xx\not\in\ri C'$. By \Cref{roc:thm6.4} (applied to $C'$), it follows that there must exist a point $\yy\in C'$ such that for all $\delta>0$, $\zz_{\delta}\not\in C'$, where $\zz_{\delta}=\xx+\delta(\xx-\yy)$. Letting $\zz=\zz_{\delta}$ for $\delta>0$ sufficiently small (so that $\norm{\xx-\zz}=\delta \norm{\xx-\yy} < \epsilon$), the claim now follows. \end{proof} \begin{proposition} \label{roc:thm6.5} Let $C_1,\ldots,C_m\subseteq\Rn$ be convex, and assume $\bigcap_{i=1}^m (\ri C_i)\neq\emptyset$. Then \[ \ri\Parens{\bigcap_{i=1}^m C_i} = \bigcap_{i=1}^m (\ri C_i). \] \end{proposition} \begin{proof} See \citet[Theorem~6.5]{ROC}. \end{proof} \begin{proposition} \label{roc:thm6.7} Let $C\subseteq\Rm$ be convex, let $\A\in\Rmn$, and let $A(\xx)=\A\xx$ for $\xx\in\Rn$. Assume there exists $\xing\in\Rn$ for which $\A\xing\in\ri C$. Then $\Alininv(\ri C) = \ri(\Alininv(C))$. \end{proposition} \begin{proof} See \citet[Theorem~6.7]{ROC}.
\end{proof} \begin{proposition} \label{pr:ri-conv-finite} For any finite set $\xx_1,\ldots,\xx_m\in\Rn$, \[ \ri(\conv \{\xx_1,\ldots,\xx_m\}) = \Braces{ \sum_{i=1}^m \lambda_i \xx_i : \lambda_1,\ldots,\lambda_m > 0, \sum_{i=1}^m \lambda_i = 1 }. \] \end{proposition} \begin{proof} This follows from \citet[Theorem~6.9]{ROC} (with each set $C_i$, in his notation, set to the singleton $\{\xx_i\}$). \end{proof} \subsection{Cones} \label{sec:prelim:cones} A set $K\subseteq\Rn$ is a \emph{cone} if $\zero\in K$ and if $K$ is closed under multiplication by positive scalars, that is, if $\lambda \xx\in K$ for all $\xx\in K$ and $\lambda\in\Rstrictpos$. Equivalently, $K$ is a cone if it is nonempty and closed under multiplication by nonnegative scalars. Note importantly that some authors, such as \citet{rock_wets}, require that a cone include the origin, as we have done here, but others, including \citet{ROC}, do not. Every linear subspace is a closed convex cone. The arbitrary intersection of cones is itself a cone, and therefore the same holds for convex cones. Consequently, for any set $S\subseteq\Rn$, there exists a smallest convex cone that includes $S$, which is the intersection of all convex cones containing $S$. The resulting convex cone, denoted $\cone{S}$, is called the \emph{conic hull of $S$} or the \emph{convex cone generated by $S$}. For $m\geq 0$ and $\lambda_1,\ldots,\lambda_m\in\Rpos$, the point $\sum_{i=1}^m \lambda_i \xx_i$ is said to be a \emph{conic combination} of the points $\xx_1,\ldots,\xx_m\in\Rn$. \begin{proposition} \label{pr:scc-cone-elts} Let $S\subseteq\Rn$. \begin{letter-compact} \item \label{pr:scc-cone-elts:b} The conic hull $\cone{S}$ consists of all conic combinations of points in $S$. \item \label{pr:scc-cone-elts:c:new} Suppose $S$ is convex. Then \[ \cone{S} = \{\zero\} \cup \Braces{ \lambda\xx : \xx\in S, \lambda\in\Rstrictpos }. \] \item \label{pr:scc-cone-elts:d} Suppose $S$ is a cone. 
Then $S$ is convex if and only if it is closed under vector addition (so that $\xx+\yy\in S$ for all $\xx,\yy\in S$). \item \label{pr:scc-cone-elts:span} $\spn{S}=\cone{(S \cup -S)}$. \end{letter-compact} \end{proposition} \begin{proof} For parts~(\ref{pr:scc-cone-elts:b}) and~(\ref{pr:scc-cone-elts:c:new}), see Corollaries~2.6.2 and~2.6.3 of \citet{ROC} as well as the immediately following discussion (keeping in mind, as mentioned above, that Rockafellar's definition of a cone differs slightly from ours). For part~(\ref{pr:scc-cone-elts:d}), see \citet[Theorem~2.6]{ROC}. Part~(\ref{pr:scc-cone-elts:span}): By Proposition~\ref{pr:span-is-lin-comb}, the set $\spn{S}$ consists of all linear combinations $\sum_{i=1}^m \lambda_i \xx_i$ where each $\lambda_i\in\R$ and each $\xx_i\in S$. From part~(\ref{pr:scc-cone-elts:b}), $\cone{(S\cup -S)}$ consists of all such linear combinations with each $\lambda_i\in\Rpos$ and each $\xx_i\in S\cup -S$. Using the simple fact that $\lambda_i\xx_i=(-\lambda_i)(-\xx_i)$, it follows that these two sets are the same. \end{proof} \begin{proposition} \label{prop:cone-linear} Let $K\subseteq\Rn$ be a convex cone and let $A:\Rn\to\Rm$ be a linear map defined by $A(\xx)=\A\xx$ for some $\A\in\Rmn$. Then $A(K)$ is also a convex cone. \end{proposition} \begin{proof} Let $\yy\in A(K)$ and $\lambda\in\Rstrictpos$. Then $\yy=\A\xx$ for some $\xx\in K$, so also $\lambda\xx\in K$ and thus $\lambda\yy=\A(\lambda\xx)\in A(K)$. Also, $A(K)$ includes the origin because $\A\zero=\zero$. Hence, $A(K)$ is a cone, and it is convex by \Cref{pr:aff-preserves-cvx}. \end{proof} Let $K\subseteq\Rn$ be a convex cone. Its \emph{polar}, denoted $\Kpol$, is the set of points whose inner product with every point in $K$ is nonpositive, that is, \begin{equation} \label{eqn:polar-def} \Kpol = \Braces{\uu\in\Rn : \xx\cdot\uu\leq 0 \mbox{ for all } \xx\in K}.
\end{equation} We write $\dubpolar{K}$ for the polar of $\Kpol$; that is, $\dubpolar{K}=\polar{(\Kpol)}$. \begin{proposition} \label{pr:polar-props} Let $K,J\subseteq\Rn$ be convex cones. \begin{letter-compact} \item \label{pr:polar-props:a} $\Kpol=\polar{(\cl K)}$. \item \label{pr:polar-props:b} $\Kpol$ is a closed (in $\Rn$), convex cone. \item \label{pr:polar-props:c} $\dubpolar{K}=\cl K$. \item \label{pr:polar-props:d} If $J\subseteq K$ then $\Kpol\subseteq\Jpol$. \item \label{pr:polar-props:e} $J+K$ is also a convex cone, and $\polar{(J+K)} = \Jpol \cap \Kpol$. \item \label{pr:polar-props:f} If $K$ is a linear subspace then $\Kpol = \Kperp$. \end{letter-compact} \end{proposition} \begin{proof} For parts~(\ref{pr:polar-props:a}) and~(\ref{pr:polar-props:f}), see the discussion preceding and following Theorem~14.1 of \citet{ROC}. Parts~(\ref{pr:polar-props:b}) and~(\ref{pr:polar-props:c}) follow from that same theorem applied to $\cl K$ (and using part~\ref{pr:polar-props:a}). It is straightforward to argue from definitions both part~(\ref{pr:polar-props:d}) and that $J+K$ is also a cone, and therefore a convex cone by \Cref{roc:thm3.1}. For the expression for $\polar{(J+K)}$ given in part~(\ref{pr:polar-props:e}), see \citet[Corollary~16.4.2]{ROC}. \end{proof} \subsection{Separation theorems} \label{sec:prelim-sep-thms} A \emph{hyperplane} $H$ in $\Rn$ is defined by a vector $\vv\in\Rn\setminus\{\zero\}$ and scalar $\beta\in\R$, and consists of the set of points \begin{equation} \label{eqn:std-hyp-plane} H=\braces{\xx\in\Rn : \xx\cdot\vv = \beta}. \end{equation} The hyperplane is associated with two \emph{closed halfspaces} consisting of those points $\xx\in\Rn$ for which $\xx\cdot\vv\leq\beta$, and those for which $\xx\cdot\vv\geq\beta$. It is similarly associated with two \emph{open halfspaces} defined by corresponding strict inequalities. 
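As a simple illustration of these definitions:
\begin{example}
In $\R^2$, let $\vv$ be the second standard basis vector, and let $\beta=0$. Then the hyperplane in \eqref{eqn:std-hyp-plane} is the horizontal coordinate axis. Its two closed halfspaces, $\braces{\xx\in\R^2 : \xx\cdot\vv\geq 0}$ and $\braces{\xx\in\R^2 : \xx\cdot\vv\leq 0}$, are the closed upper and lower halfplanes, and its two open halfspaces are the corresponding open halfplanes.
\end{example}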
If $\beta=0$, so that the hyperplane (or the boundary of the halfspace) includes the origin, we say that it is \emph{homogeneous}. Let $C$ and $D$ be nonempty subsets of $\Rn$. We say that a hyperplane $H$ \emph{separates} $C$ and $D$ if $C$ is included in one of the closed halfspaces associated with $H$, and $D$ is included in the other; thus, for $H$ as defined above, $\xx\cdot\vv\leq\beta$ for all $\xx\in C$ and $\xx\cdot\vv\geq\beta$ for all $\xx\in D$. (This is without loss of generality since if the reverse inequalities hold, we can simply negate $\vv$ and $\beta$.) We say that $H$ \emph{properly} separates $C$ and $D$ if, in addition, $C$ and $D$ are not both included in $H$ (that is, $C\cup D \not\subseteq H$). We say that $H$ \emph{strongly} separates $C$ and $D$ if for some $\epsilon>0$, $C+\ball(\zero,\epsilon)$ is included in one of the open halfspaces associated with $H$, and $D+\ball(\zero,\epsilon)$ is included in the other (where $\ball(\zero,\epsilon)$ is as defined in Eq.~\ref{eqn:open-ball-defn}). Equivalently, with $H$ as defined in \eqref{eqn:std-hyp-plane}, this means that, for some (possibly different) $\epsilon>0$, $\xx\cdot\vv<\beta-\epsilon$ for all $\xx\in C$, and $\xx\cdot\vv>\beta+\epsilon$ for all $\xx\in D$. Here are several facts about separating convex sets: \begin{proposition} \label{roc:thm11.1} Let $C,D\subseteq\Rn$ be nonempty. Then there exists a hyperplane that strongly separates $C$ and $D$ if and only if there exists $\vv\in\Rn$ such that \[ \sup_{\xx\in C} \xx\cdot \vv < \inf_{\xx\in D} \xx\cdot \vv. \] \end{proposition} \begin{proof} See \citet[Theorem~11.1]{ROC}. \end{proof} \begin{proposition} \label{roc:cor11.4.2} Let $C,D\subseteq\Rn$ be convex and nonempty. Assume $(\cl{C})\cap(\cl{D})=\emptyset$ and that $C$ is bounded. Then there exists a hyperplane that strongly separates $C$ and $D$. \end{proposition} \begin{proof} See \citet[Corollary~11.4.2]{ROC}.
\end{proof} \begin{proposition} \label{roc:thm11.2} Let $C\subseteq\Rn$ be nonempty, convex and relatively open, and let $A\subseteq\Rn$ be nonempty and affine. Assume $C\cap A=\emptyset$. Then there exists a hyperplane $H$ that includes $A$ and such that $C$ is included in one of the open halfspaces associated with $H$. That is, there exists $\vv\in\Rn$ and $\beta\in\R$ such that $\xx\cdot\vv=\beta$ for all $\xx\in A$, and $\xx\cdot\vv<\beta$ for all $\xx\in C$. \end{proposition} \begin{proof} See \citet[Theorem~11.2]{ROC}. \end{proof} \begin{proposition} \label{roc:thm11.3} Let $C,D\subseteq\Rn$ be nonempty and convex. Then there exists a hyperplane separating $C$ and $D$ properly if and only if $(\ri C)\cap(\ri D)=\emptyset$. \end{proposition} \begin{proof} See \citet[Theorem~11.3]{ROC}. \end{proof} \begin{proposition} \label{pr:con-int-halfspaces} Let $S\subseteq\Rn$. \begin{letter-compact} \item \label{roc:cor11.5.1} The closure of the convex hull of $S$, $\cl(\conv{S})$, is equal to the intersection of all closed halfspaces in $\Rn$ that include~$S$. \item \label{roc:cor11.7.2} The closure of the conic hull of $S$, $\cl(\cone{S})$, is equal to the intersection of all homogeneous closed halfspaces in $\Rn$ that include $S$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{roc:cor11.5.1}):} See \citet[Corollary~11.5.1]{ROC}. \pfpart{Part~(\ref{roc:cor11.7.2}):} See \citet[Corollary~11.7.2]{ROC}. \qedhere \end{proof-parts} \end{proof} Let $K_1$ and $K_2$ be convex cones in $\Rn$, and let $L_i=K_i\cap -K_i$, for $i\in\{1,2\}$. 
Then $K_1$ and $K_2$ are said to be \emph{sharply discriminated} by a vector $\uu\in\Rn$ if: \[ \xx\cdot\uu \begin{cases} < 0 & \text{for all $\xx\in K_1\setminus L_1$,} \\ = 0 & \text{for all $\xx\in L_1 \cup L_2$,} \\ > 0 & \text{for all $\xx\in K_2\setminus L_2$.} \end{cases} \] Except when $\uu=\zero$, this condition means that $K_1$ and $K_2$ are separated by the hyperplane $H=\{\xx\in\Rn : \xx\cdot\uu=0\}$ (since $\xx\cdot\uu\leq 0$ for $\xx\in K_1$, and $\xx\cdot\uu\geq 0$ for $\xx\in K_2$), and moreover that $K_1\setminus L_1$ is included in one of the open halfspaces associated with $H$, while $K_2\setminus L_2$ is included in the other. (Nonetheless, we use the phrase ``discriminated by a vector'' rather than ``separated by a hyperplane'' since in fact we do allow $\uu=\zero$, in which case, the set $H$ above is not technically a hyperplane. If $K_1\neq L_1$ or $K_2\neq L_2$, then this possibility is ruled out.) The next proposition and the main ideas of its proof are due to \citet[Theorem~2.7]{Klee55}. \begin{proposition} \label{pr:cones-sharp-sep} Let $K_1,K_2\subseteq\Rn$ be closed convex cones, and assume $K_1\cap K_2=\{\zero\}$. Then there exists a vector $\uu\in\Rn$ that sharply discriminates $K_1$ and $K_2$. \end{proposition} \begin{proof} Let $K=K_1-K_2$, and let $L=K\cap-K$. Then $K$ is a closed convex cone. To see this, note first that $-K_2$ is a convex cone (by \Cref{prop:cone-linear}), and is closed (by \Cref{prop:cont}\ref{prop:cont:inv:closed}, since the map $\xx\mapsto-\xx$ is continuous and is its own inverse). Therefore, $K=K_1+(-K_2)$ is a convex cone by \Cref{pr:polar-props}(\ref{pr:polar-props:e}), and is closed by \citet[Corollary~9.1.3]{ROC} since $K_1$ and $K_2$ are closed convex cones whose intersection is $\{\zero\}$. Let $\uu$ be any point in $\ri \Kpol$ (which must exist by Propositions~\ref{pr:polar-props}\ref{pr:polar-props:b} and~\ref{pr:ri-props}\ref{pr:ri-props:roc-thm6.2b}). 
We first show that $\xx\cdot\uu\leq 0$ for $\xx\in K$ and that $\xx\cdot\uu<0$ for $\xx\in K\setminus L$ (which is equivalent to $\uu$ sharply discriminating $K$ and $\{\zero\}$). Then below, we show this implies that $\uu$ sharply discriminates $K_1$ and $K_2$. First, $\uu\in\Kpol$, meaning $\xx\cdot\uu\leq 0$ for all $\xx\in K$. Let $\xx\in K\setminus L$; we aim to show $\xx\cdot\uu<0$. Then $\xx\not\in -K$, so $-\xx\not\in K = \dubpolar{K}$, where the equality is by \Cref{pr:polar-props}(\ref{pr:polar-props:c}). Therefore, there exists $\ww\in\Kpol$ such that $-\xx\cdot\ww>0$, that is, $\xx\cdot\ww<0$. Since $\uu\in\ri{\Kpol}$, by \Cref{roc:thm6.4}, there exists $\delta\in\Rstrictpos$ such that the point $\vv=\uu+\delta(\uu-\ww)$ is in $\Kpol$. We then have \[ \xx\cdot\uu = \frac{\xx\cdot\vv + \delta \xx\cdot\ww} {1+\delta} < 0. \] The equality is by algebra from $\vv$'s definition. The inequality is because $\xx\cdot\ww<0$ and $\xx\cdot\vv\leq 0$ (since $\xx\in K$ and $\vv\in \Kpol$). Next, we show $\uu$ sharply discriminates $K_1$ and $K_2$. Observe first that $K_1=K_1-\{\zero\}\subseteq K_1-K_2=K$, since $\zero\in K_2$. We claim $K_1\cap L\subseteq L_1$. Let $\xx\in K_1\cap L$, which we aim to show is in $L_1$. Then $\xx\in -K$ so $\xx=\yy_2-\yy_1$ for some $\yy_1\in K_1$ and $\yy_2\in K_2$. This implies $\yy_2=\xx+\yy_1$. Since $\xx$ and $\yy_1$ are both in the convex cone $K_1$, their sum $\xx+\yy_1$ is as well. Therefore, since $K_1\cap K_2=\{\zero\}$, we must have $\yy_2=\zero=\xx+\yy_1$. Thus, $\xx=-\yy_1\in -K_1$, so $\xx\in L_1$, as claimed. Consequently, \begin{equation} \label{eq:pr:cones-sharp-sep:1} K_1\setminus L_1 \subseteq K_1\setminus (K_1\cap L) = K_1 \setminus L \subseteq K\setminus L, \end{equation} with the first inclusion from the preceding argument, and the second from $K_1\subseteq K$. Thus, if $\xx\in K_1$ then $\xx\in K$ so $\xx\cdot\uu\leq 0$, from the argument above. 
Hence, if $\xx\in L_1$, then $\xx\in K_1$ and $-\xx\in K_1$, together implying that $\xx\cdot\uu=0$. And if $\xx\in K_1\setminus L_1$ then $\xx\in K\setminus L$ (from Eq.~\ref{eq:pr:cones-sharp-sep:1}), implying $\xx\cdot\uu<0$, as argued earlier. By similar arguments, $K_2\subseteq -K$ and $K_2\setminus L_2 \subseteq (-K)\setminus L$. So if $\xx\in K_2$ then $-\xx\in K$ implying $\xx\cdot\uu\geq 0$. Thus, if $\xx\in L_2$ then $\xx\cdot\uu=0$. And if $\xx\in K_2\setminus L_2$ then $-\xx\in K\setminus L$ implying $\xx\cdot\uu>0$. \end{proof} \subsection{Faces and polyhedra} \label{sec:prelim:faces} Let $C\subseteq\Rn$ be convex. A convex subset $F\subseteq C$ is said to be \emph{a face of $C$} if for all points $\xx,\yy\in C$, and for all $\lambda\in (0,1)$, if $(1-\lambda)\xx+\lambda\yy$ is in $F$, then $\xx$ and $\yy$ are also in $F$. For instance, the faces of the cube $[0,1]^3$ in $\R^3$ are the cube's eight vertices, twelve edges, six square faces, the entire cube, and the empty set. \begin{proposition} \label{pr:face-props} Let $C\subseteq\Rn$ be convex, and let $F$ be a face of $C$. \begin{letter-compact} \item \label{pr:face-props:cor18.1.1} If $C$ is closed (in $\Rn$), then so is $F$. \item \label{pr:face-props:thm18.1} Let $D\subseteq C$ be convex with $(\ri D)\cap F\neq\emptyset$. Then $D\subseteq F$. \item \label{pr:face-props:cor18.1.2} Let $E$ be a face of $C$ with $(\ri E)\cap (\ri F)\neq\emptyset$. Then $E=F$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:face-props:cor18.1.1}):} See \citet[Corollary~18.1.1]{ROC}. \pfpart{Part~(\ref{pr:face-props:thm18.1}):} See \citet[Theorem~18.1]{ROC}. \pfpart{Part~(\ref{pr:face-props:cor18.1.2}):} See \citet[Corollary~18.1.2]{ROC}. \end{proof-parts} \end{proof} \begin{proposition} \label{roc:thm18.2} Let $C\subseteq\Rn$ be convex. Then the relative interiors of the nonempty faces of $C$ form a partition of $C$. 
That is, for all $\xx\in C$, there exists a unique face $F$ of $C$ whose relative interior, $\ri F$, includes $\xx$. \end{proposition} \begin{proof} See \citet[Theorem~18.2]{ROC}. \end{proof} \begin{proposition} \label{roc:thm18.3} Let $S\subseteq\Rn$, let $C=\conv{S}$, and let $F$ be a face of $C$. Then $F=\conv(S\cap F)$. \end{proposition} \begin{proof} See \citet[Theorem~18.3]{ROC}. \end{proof} A convex set $C$ is \emph{polyhedral} if it is equal to the intersection of finitely many halfspaces. The set is \emph{finitely generated} if there exist vectors $\vv_1,\ldots,\vv_m\in\Rn$ and $k\in\{1,\ldots,m\}$ such that \[ C = \Braces{ \sum_{i=1}^m \lambda_i \vv_i : \lambda_1,\ldots,\lambda_m\in\Rpos, \sum_{i=1}^k \lambda_i = 1 }. \] Thus, a convex cone $K$ is finitely generated if and only if $K=\cone{V}$ for some finite $V\subseteq\Rn$ (so that $k=1$ and $\vv_1=\zero$ in the expression above). A convex set $C$ is both bounded and finitely generated if and only if it is a \emph{polytope}, that is, the convex hull of finitely many points. \begin{proposition} \label{roc:thm19.1} Let $C\subseteq\Rn$ be convex. Then the following are equivalent: \begin{letter-compact} \item $C$ is polyhedral. \item $C$ is finitely generated. \item $C$ is closed (in $\Rn$) and has finitely many faces. \end{letter-compact} \end{proposition} \begin{proof} See \citet[Theorem~19.1]{ROC}. \end{proof} \begin{proposition} \label{roc:cor19.2.2} Let $K\subseteq\Rn$ be a polyhedral convex cone. Then $\Kpol$ is also polyhedral. \end{proposition} \begin{proof} See \citet[Corollary~19.2.2]{ROC}. \end{proof} \begin{proposition} \label{roc:thm19.6} Let $C_1,\ldots,C_m\subseteq\Rn$ be convex and polyhedral. Then \[ \cl\Parens{\conv \bigcup_{i=1}^m C_i} \] is also polyhedral. \end{proposition} \begin{proof} See \citet[Theorem~19.6]{ROC}. 
\end{proof} \subsection{Convex functions} \label{sec:prelim:cvx-fcns} The epigraph of a function $f:\Rn\rightarrow\Rext$, as defined in \eqref{eqn:epi-def}, is a subset of $\Rn\times\R$, which thus can be viewed as a subset of $\R^{n+1}$. We say that $f$ is \emph{convex} if its epigraph, $\epi{f}$, is a convex set in $\R^{n+1}$. The function is \emph{concave} if $-f$ is convex. The function is \emph{finite everywhere} if $f>-\infty$ and $f<+\infty$. The function is \emph{proper} if $f > -\infty$ and $f\not\equiv+\infty$. If $f$ is convex, then its domain, $\dom{f}$, is also convex, as are all its sublevel sets: \begin{proposition} \label{roc:thm4.6} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\alpha\in\Rext$. Then the sets \[ \{\xx\in\Rn : f(\xx)\leq \alpha\} \;\;\mbox{ and }\;\; \{\xx\in\Rn : f(\xx) < \alpha\} \] are both convex. In particular, $\dom{f}$ is convex. \end{proposition} \begin{proof} See \citet[Theorem~4.6]{ROC}. \end{proof} Here are some useful characterizations for when a function is convex. These are similar to other standard characterizations, such as \citet[Theorems~4.1 and~4.2]{ROC}. The ones given here are applicable even when the function is improper. \begin{proposition} \label{pr:stand-cvx-fcn-char} Let $f:\Rn\rightarrow\Rext$. Then the following are equivalent: \begin{letter-compact} \item \label{pr:stand-cvx-fcn-char:a} $f$ is convex. \item \label{pr:stand-cvx-fcn-char:b} For all $\xx_0,\xx_1\in\dom f$ and all $\lambda\in [0,1]$, \begin{equation} \label{eq:stand-cvx-fcn-char:b} f\bigParens{(1-\lambda)\xx_0 + \lambda \xx_1} \leq (1-\lambda) f(\xx_0) + \lambda f(\xx_1). \end{equation} \item \label{pr:stand-cvx-fcn-char:c} For all $\xx_0,\xx_1\in\Rn$ and all $\lambda\in [0,1]$, if $f(\xx_0)$ and $f(\xx_1)$ are summable then \begin{equation} \label{eq:stand-cvx-fcn-char:c} f\bigParens{(1-\lambda)\xx_0 + \lambda \xx_1} \leq (1-\lambda) f(\xx_0) + \lambda f(\xx_1). 
\end{equation} \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{pr:stand-cvx-fcn-char:a}) $\Rightarrow$ (\ref{pr:stand-cvx-fcn-char:b}): } Suppose $f$ is convex, and let $\xx_0,\xx_1\in\dom f$ and $\lambda\in [0,1]$. We aim to show that \eqref{eq:stand-cvx-fcn-char:b} holds. For $i\in\{0,1\}$, let $y_i\in\R$ with $f(\xx_i)\leq y_i$ so that $\rpair{\xx_i}{y_i}\in\epi{f}$. Let $\xx=(1-\lambda)\xx_0+\lambda\xx_1$ and $y=(1-\lambda)y_0+\lambda y_1$. Since $f$ is convex, its epigraph is convex, so the point \begin{equation} \label{eq:pr:stand-cvx-fcn-char:2} \rpair{\xx}{y} = (1-\lambda)\rpair{\xx_0}{y_0}+\lambda\rpair{\xx_1}{y_1} \end{equation} is in $\epi{f}$ as well. Hence, $f(\xx)\leq y$, that is, $f((1-\lambda)\xx_0 + \lambda \xx_1)\leq (1-\lambda) y_0 + \lambda y_1$. Since this holds for all $y_0\geq f(\xx_0)$ and $y_1\geq f(\xx_1)$, this implies \eqref{eq:stand-cvx-fcn-char:b}. \pfpart{(\ref{pr:stand-cvx-fcn-char:b}) $\Rightarrow$ (\ref{pr:stand-cvx-fcn-char:a}): } Suppose statement~(\ref{pr:stand-cvx-fcn-char:b}) holds. Let $\rpair{\xx_i}{y_i}\in\epi{f}$, for $i\in\{0,1\}$, implying $\xx_i\in\dom{f}$ and $f(\xx_i)\leq y_i$. Let $\lambda\in [0,1]$, and let $\xx=(1-\lambda)\xx_0+\lambda\xx_1$ and $y=(1-\lambda)y_0+\lambda y_1$, implying \eqref{eq:pr:stand-cvx-fcn-char:2}. Then, by our assumption, \[ f(\xx) \leq (1-\lambda)f(\xx_0) + \lambda f(\xx_1) \leq (1-\lambda) y_0 + \lambda y_1 = y. \] Therefore, $\rpair{\xx}{y}\in\epi{f}$, so $\epi{f}$ is convex, and so is $f$. \pfpart{(\ref{pr:stand-cvx-fcn-char:b}) $\Rightarrow$ (\ref{pr:stand-cvx-fcn-char:c}):} Suppose statement (\ref{pr:stand-cvx-fcn-char:b}) holds. To prove (\ref{pr:stand-cvx-fcn-char:c}), we only need to consider cases when either $f(\xx_0)=+\infty$ or $f(\xx_1)=+\infty$ since otherwise the implication is immediate. If $\lambda\in\set{0,1}$ then \eqref{eq:stand-cvx-fcn-char:c} trivially holds. 
If $\lambda\in(0,1)$ then \eqref{eq:stand-cvx-fcn-char:c} must again hold since its right-hand side is $+\infty$. \pfpart{(\ref{pr:stand-cvx-fcn-char:c}) $\Rightarrow$ (\ref{pr:stand-cvx-fcn-char:b}):} The implication is immediate since for any $\xx_0,\xx_1\in\dom f$ the function values $f(\xx_0)$ and $f(\xx_1)$ are summable. \qedhere \end{proof-parts} \end{proof} A proper function $f:\Rn\rightarrow\Rext$ is \emph{strictly convex} if \eqref{eq:stand-cvx-fcn-char:b} holds with strict inequality for all $\lambda\in (0,1)$ and all $\xx_0,\xx_1\in\dom{f}$ with $\xx_0\neq\xx_1$. We next review some natural ways of constructing functions that are convex. For a set $S\subseteq\Rn$, the \emph{indicator function} $\inds:\Rn\rightarrow\Rext$ is defined, for $\xx\in\Rn$, by \begin{equation} \label{eq:indf-defn} \inds(\xx) = \begin{cases} 0 & \text{if $\xx\in S$,} \\ +\infty & \text{otherwise.} \end{cases} \end{equation} If $S$ is a convex set, then $\inds$ is a convex function. Also, when $S$ is a singleton $\{\zz\}$, we often write $\indf{\zz}$ as shorthand for $\indf{\{\zz\}}$. For a function $f:X\rightarrow\Rext$ and $\lambda\in\R$, we define the function $\lambda f:X\rightarrow\Rext$ by $(\lambda f)(x)=\lambda [f(x)]$ for $x\in X$. If $f:\Rn\rightarrow\Rext$ is convex and $\lambda\geq 0$ then $\lambda f$ is convex. It is sometimes useful to consider a slight variant in which only function values for points in the effective domain are scaled. Thus, for a function $f:\Rn\rightarrow\Rext$ and a nonnegative scalar $\lambda\in\Rpos$, we define the function $(\sfprod{\lambda}{f}):\Rn\rightarrow\Rext$, for $\xx\in\Rn$, by \begin{equation} \label{eq:sfprod-defn} (\sfprod{\lambda}{f})(\xx) = \begin{cases} \lambda f(\xx) & \text{if $f(\xx) < +\infty$,} \\ +\infty & \text{if $f(\xx) = +\infty$.} \end{cases} \end{equation} If $\lambda>0$ then $\sfprod{\lambda}{f}$ is the same as the standard scalar multiple $\lambda f$. 
However, if $\lambda = 0$ then $\sfprod{0}{f}$ zeros out only the set $\dom{f}$, resulting in an indicator function on that set, whereas $0 f\equiv 0$. Thus, \begin{equation} \label{eq:sfprod-identity} \sfprod{\lambda}{f} = \begin{cases} \lambda f & \text{if $\lambda>0$,} \\ \inddomf & \text{if $\lambda=0$.} \end{cases} \end{equation} If $f$ is convex, then $\sfprod{\lambda}{f}$ is as well, for $\lambda\in\Rpos$. The sum of summable convex functions is also convex (slightly generalizing \citealp[Theorem~5.2]{ROC}): \begin{proposition} Let $f:\Rn\rightarrow\Rext$ and $g:\Rn\rightarrow\Rext$ be convex and summable. Then $f+g$ is convex. \end{proposition} \begin{proof} Let $h=f+g$, let $\xx_0,\xx_1\in\dom{h}$, and let $\lambda\in[0,1]$. Then $\xx_0$ and $\xx_1$ are also in $\dom{f}$ and $\dom{g}$, so, by \Cref{pr:stand-cvx-fcn-char}, \eqref{eq:stand-cvx-fcn-char:b} holds for both $f$ and $g$. Adding these inequalities and again applying that proposition yield the claim. \end{proof} For a function $f:\Rm\rightarrow\Rext$ and matrix $\A\in\Rmn$, the function $\fA:\Rn\rightarrow\Rext$ is defined, for $\xx\in\Rn$, by \begin{equation} \label{eq:fA-defn} (\fA)(\xx)=f(\A\xx). \end{equation} Thus, $\fA$ is the composition of $f$ with the linear map associated with $\A$. \begin{proposition} \label{roc:thm5.7:fA} Let $f:\Rm\rightarrow\Rext$ be convex and let $\A\in\Rmn$. Then $\fA$ is convex. \end{proposition} \begin{proof} See \citet[Theorem~5.7]{ROC}. \end{proof} For a function $f:\Rn\rightarrow\Rext$ and matrix $\A\in\Rmn$, the function $\A f:\Rm\rightarrow\Rext$ is defined, for $\xx\in\Rm$, by \begin{equation} \label{eq:lin-image-fcn-defn} (\A f)(\xx) = \inf\bigBraces{f(\zz):\:\zz\in\Rn,\,\A\zz=\xx}. \end{equation} The function $\A f$ is called the \emph{image of $f$ under $\A$}. \begin{proposition} \label{roc:thm5.7:Af} Let $f:\Rn\rightarrow\Rext$ be convex and let $\A\in\Rmn$. Then $\A f$ is convex. \end{proposition} \begin{proof} See \citet[Theorem~5.7]{ROC}. 
\end{proof} \begin{proposition} \label{roc:thm5.5} Let $f_{i}:\Rn\rightarrow\Rext$ be convex for all $i\in\indset$, where $\indset$ is any index set. Let $h:\Rn\rightarrow\Rext$ be their pointwise supremum, that is, \[ h(\xx) = \sup_{i\in\indset} f_i(\xx) \] for $\xx\in\Rn$. Then $h$ is convex. \end{proposition} \begin{proof} See \citet[Theorem~5.5]{ROC}. \end{proof} Let $g:\R\to\eR$ be nondecreasing. We say that $G:\eR\to\eR$ is a \emph{monotone extension} of~$g$ if $G$ is nondecreasing and agrees with $g$ on $\R$, that is, if $G(-\infty)\le\inf g$, $G(x)=g(x)$ for $x\in\R$, and $G(+\infty)\ge\sup g$. The next proposition shows that a function obtained by composing a convex function $f:\Rn\to\eR$ with a nondecreasing convex function $g:\R\to\Rext$, appropriately extended to $\eR$, is also convex. This slightly generalizes \citet[Theorem~5.1]{ROC} by now allowing both $f$ and $g$ to include $-\infty$ in their ranges. \begin{proposition} \label{prop:nondec:convex} Let $f:\Rn\rightarrow\Rext$ be convex, let $g:\R\to\eR$ be convex and nondecreasing, and let $G:\eR\to\eR$ be a monotone extension of $g$ such that $G(+\infty)=+\infty$. Then the function $h=G\circ f$ is convex. \end{proposition} \begin{proof} Let $\xx_0,\xx_1\in\dom{h}$ and let $\lambda\in[0,1]$. We will show \begin{equation} \label{eq:prop:nondec:convex:1} h((1-\lambda)\xx_0 + \lambda \xx_1) \leq (1-\lambda) h(\xx_0) + \lambda h(\xx_1), \end{equation} and so that $h$ satisfies condition (\ref{pr:stand-cvx-fcn-char:b}) of \Cref{pr:stand-cvx-fcn-char}. \eqref{eq:prop:nondec:convex:1} holds trivially if $\lambda\in\set{0,1}$, so we assume henceforth that $\lambda\in(0,1)$. 
Since $G(+\infty)=+\infty$ and $\xx_0,\xx_1\in\dom{h}$, we must have $\xx_0,\xx_1\in\dom f$, so \begin{align} h\bigParens{(1-\lambda)\xx_0+\lambda\xx_1} &= G\bigParens{f\bigParens{(1-\lambda)\xx_0+\lambda\xx_1}} \nonumber \\ &\le G\bigParens{(1-\lambda)f(\xx_0)+\lambda f(\xx_1)}, \label{eq:prop:nondec:convex:2} \end{align} where the inequality follows by monotonicity of $G$ and convexity of $f$ (using \Cref{pr:stand-cvx-fcn-char}\ref{pr:stand-cvx-fcn-char:b}). Now, if either $f(\xx_0)=-\infty$ or $f(\xx_1)=-\infty$, then \begin{align*} G\bigParens{(1-\lambda)f(\xx_0)+\lambda f(\xx_1)} = G(-\infty) &= (1-\lambda)G(-\infty)+\lambda G(-\infty) \\ &\le (1-\lambda)G\bigParens{f(\xx_0)}+\lambda G\bigParens{f(\xx_1)}, \end{align*} where the inequality follows by monotonicity of $G$. Combined with \eqref{eq:prop:nondec:convex:2} and since $h=G\circ f$, this proves \eqref{eq:prop:nondec:convex:1} in this case. In the remaining case, $f(\xx_0)$ and $f(\xx_1)$ are in $\R$, implying $g(f(\xx_i))=G(f(\xx_i))=h(\xx_i)<+\infty$ for $i\in\{0,1\}$, and so that $f(\xx_i)\in\dom{g}$. Thus, \begin{align*} G\bigParens{(1-\lambda)f(\xx_0)+\lambda f(\xx_1)} &= g\bigParens{(1-\lambda)f(\xx_0)+\lambda f(\xx_1)} \\ &\le (1-\lambda)g\bigParens{f(\xx_0)}+\lambda g\bigParens{f(\xx_1)}, \end{align*} where the inequality follows by convexity of $g$ (using \Cref{pr:stand-cvx-fcn-char}\ref{pr:stand-cvx-fcn-char:b}). Again combining with \eqref{eq:prop:nondec:convex:2}, this proves \eqref{eq:prop:nondec:convex:1}, completing the proof. \end{proof} \subsection{Lower semicontinuity and continuity} \label{sec:prelim:lsc} As was defined more generally in Section~\ref{sec:prelim:lower-semicont}, a function $f:\Rn\rightarrow\Rext$ is \emph{lower semicontinuous at a point $\xx\in\Rn$} if \[ \liminf f(\xx_t)\geq f(\xx) \] for every sequence $\seq{\xx_t}$ in $\Rn$ that converges to $\xx$. The function is \emph{lower semicontinuous} if it is lower semicontinuous at every point $\xx\in\Rn$.
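As a simple one-dimensional illustration:
\begin{example}
Let $f:\R\rightarrow\Rext$ be defined, for $x\in\R$, by $f(x)=0$ if $x\leq 0$, and $f(x)=1$ if $x>0$. If $\seq{x_t}$ is any sequence in $\R$ converging to a point $x\in\R$, then $\liminf f(x_t)\geq f(x)$: this is immediate if $x\neq 0$, since $f$ is constant on a neighborhood of $x$; and if $x=0$, then $\liminf f(x_t)\geq 0=f(0)$ since $f$ is nonnegative. Thus, $f$ is lower semicontinuous. If instead we set $f(0)=1$, then the resulting function is not lower semicontinuous at $0$ since, for the sequence $x_t=-1/t$, we would then have $\liminf f(x_t)=0<1=f(0)$.
\end{example}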
Lower semicontinuity of $f$ is equivalent to the epigraph of $f$ being closed as a subset of $\R^{n+1}$ (a special case of \Cref{prop:lsc}): \begin{proposition} \label{roc:thm7.1} Let $f:\Rn\rightarrow\Rext$. Then the following are equivalent: \begin{letter-compact} \item \label{roc:thm7.1:a} $f$ is lower semicontinuous. \item \label{roc:thm7.1:b} $\epi{f}$ is closed (in $\R^{n+1}$). \item \label{roc:thm7.1:c} For all $\beta\in\R$, the sublevel set $\braces{\xx\in\Rn : f(\xx)\leq\beta}$ is closed (in $\Rn$). \end{letter-compact} \end{proposition} \begin{proof} See \citet[Theorem~7.1]{ROC} (or \Cref{prop:lsc}). \end{proof} If $f$ is lower semicontinuous, convex and improper, then it is infinite at every point: \begin{proposition} \label{pr:improper-vals} Let $f:\Rn\rightarrow\Rext$ be convex and improper. \begin{letter-compact} \item \label{pr:improper-vals:thm7.2} For all $\xx\in\ri(\dom{f})$, $f(\xx)=-\infty$. \item \label{pr:improper-vals:cor7.2.1} Suppose, in addition, that $f$ is lower semicontinuous. Then $f(\xx)\in\{-\infty,+\infty\}$ for all $\xx\in\Rn$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:improper-vals:thm7.2}):} See \citet[Theorem~7.2]{ROC}. \pfpart{Part~(\ref{pr:improper-vals:cor7.2.1}):} See \citet[Corollary~7.2.1]{ROC}. \end{proof-parts} \end{proof} As a special case of \eqref{eq:lsc:liminf:X:prelims}, the {lower semicontinuous hull} of a function $f:\Rn\rightarrow\Rext$ is that function $(\lsc f):\Rn\rightarrow\Rext$ defined, for $\xx\in\Rn$, by \begin{equation} \label{eq:prelim:lsc-defn} (\lsc f)(\xx) = \InfseqLiminf{\seq{\xx_t}}{\Rn}{\xx_t\rightarrow \xx}{f(\xx_t)}, \end{equation} where the infimum is over all sequences $\seq{\xx_t}$ in $\Rn$ converging to $\xx$. \begin{proposition} \label{pr:lsc-equiv-char-rn} Let $f:\Rn\rightarrow\Rext$. Then $\lsc{f}$ is the greatest lower semicontinuous function on $\Rn$ that is majorized by $f$. 
Furthermore, $\lsc{f}$ is that function whose epigraph is the closure of $f$'s epigraph in $\R^{n+1}$; that is, $\epi(\lsc f)=\cl(\epi f)$. \end{proposition} \begin{proof} This is a special case of \Cref{prop:lsc:characterize}. \end{proof} For a convex function $f:\Rn\rightarrow\Rext$, its \emph{closure} $(\cl f):\Rn\rightarrow\Rext$ is defined to be the same as its lower semicontinuous hull, $\lsc f$, if $f>-\infty$, and is identically $-\infty$ otherwise. A convex function $f$ is \emph{closed} if $f=\cl f$, that is, if it is lower semicontinuous and either $f > -\infty$ or $f\equiv-\infty$. Thus, if $f$ is proper, then it is closed if and only if it is lower semicontinuous; if it is improper, then it is closed if and only if either $f\equiv+\infty$ or $f\equiv-\infty$. This definition of a closed convex function follows \citet[Section~7]{ROC}, but note that there is not full agreement in the literature on this terminology (see, for instance, \citealp{Bertsekas2009}, Section~1.1.2). \begin{proposition} \label{pr:lsc-props} Let $f:\Rn\rightarrow\Rext$ be convex. \begin{letter-compact} \item \label{pr:lsc-props:a} $\lsc f$ is convex and lower semicontinuous. \item \label{pr:lsc-props:b} If $\xx\in\Rn$ is not in the relative boundary of $\dom{f}$, then $(\lsc f)(\xx)=f(\xx)$. \item \label{pr:lsc-props:c} $\dom(\lsc f)$ and $\dom{f}$ have the same closure and relative interior. \item \label{pr:lsc-props:d} If, in addition, $f$ is proper, then $\cl f (=\lsc f)$ is closed and proper. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:lsc-props:a}):} By definition, a function is convex if and only if its epigraph is convex, so $\epi{f}$ is convex, and so also is its closure (by \Cref{pr:ri-props}\ref{pr:ri-props:roc-thm6.2a}), which is the same as $\epi(\lsc f)$. Therefore, $\lsc f$ is convex. \pfpart{Part~(\ref{pr:lsc-props:b}):} If $f$ is proper, then this follows from \citet[Theorem~7.4]{ROC}. 
Suppose instead that $f$ is not proper, and let $\xx\in\Rn$ be a point not in the relative boundary of $\dom f$. If $\xx\in\ri(\dom f)$, then $f(\xx)=-\infty$ by \Cref{pr:improper-vals}(\ref{pr:improper-vals:thm7.2}), implying $(\lsc f)(\xx)=-\infty$ since $(\lsc f)(\xx)\leq f(\xx)$ always (as can be seen by considering the trivial sequence with $\xx_t=\xx$ for all $t$). Otherwise, we must have $\xx\not\in\cl(\dom f)$, which means there must exist a neighborhood $U\subseteq\Rn$ of $\xx$ that is disjoint from $\dom f$. Let $\seq{\xx_t}$ be any sequence in $\Rn$ converging to $\xx$. Then all but finitely many elements of the sequence must be in $U$, being a neighborhood of $\xx$, and none of these are in $\dom{f}$. Thus, $f(\xx_t)\rightarrow+\infty=f(\xx)$. \pfpart{Part~(\ref{pr:lsc-props:c}):} From part~(\ref{pr:lsc-props:b}), \[ \ri(\dom f) \subseteq \dom(\lsc f) \subseteq \cl(\dom f). \] The claim then follows from \Cref{pr:ri-props}(\ref{pr:ri-props:roc-cor6.3.1}). \pfpart{Part~(\ref{pr:lsc-props:d}):} See \citet[Theorem~7.4]{ROC}. \end{proof-parts} \end{proof} \begin{proposition} \label{roc:thm7.6-mod} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\beta\in\R$ with $\inf f < \beta$. Let \begin{align*} L &= \Braces{\xx\in\Rn : f(\xx) \leq \beta} \\ M &= \Braces{\xx\in\Rn : f(\xx) < \beta}. \end{align*} Then \begin{align*} \cl L &= \cl M = \Braces{\xx\in\Rn : (\lsc f)(\xx) \leq \beta} \\ \ri L &= \ri M = \Braces{\xx\in \ri(\dom f) : f(\xx) < \beta}. \end{align*} \end{proposition} \begin{proof} This follows by the same proof as in \citet[Theorem~7.6]{ROC} with $\cl f$ replaced by $\lsc f$. \end{proof} \begin{proposition} \label{roc:lem7.3} Let $f:\Rn\rightarrow\Rext$ be convex. Then $\ri(\epi{f})$ consists of all pairs $\rpair{\xx}{y}\in\Rn\times\R$ with $\xx\in\ri(\dom{f})$ and $f(\xx)<y$. \end{proposition} \begin{proof} See \citet[Lemma~7.3]{ROC}. 
\end{proof} By \Cref{prop:first:properties}(\ref{prop:first:cont}) (and since $\Rn$ is first-countable), a function $f:\Rn\rightarrow\Rext$ is continuous at a point $\xx\in\Rn$ if and only if $f(\xx_t)\rightarrow f(\xx)$ for every sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xx$. The function is continuous (or continuous everywhere) if it is continuous at every point $\xx\in\Rn$. Convex functions are continuous at all points in $\Rn$ except possibly those on the boundary of $\dom{f}$: \begin{proposition} \label{pr:stand-cvx-cont} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\xx\in\Rn$. If $\xx$ is not on the boundary of $\dom{f}$, then $f$ is continuous at $\xx$. Consequently, if $f$ is finite everywhere, then $f$ is continuous everywhere. \end{proposition} \begin{proof} Suppose $\xx$ is not on the boundary of $\dom{f}$. This means that either $\xx\in\Rn\setminus\cl(\dom f)$ or $\xx\in\intdom{f}$. In the former case, since $\Rn\setminus\cl(\dom f)$ is an open set, there exists a neighborhood of $\xx$ where $f$ equals $+\infty$ and so $f$ is continuous at $\xx$. In the latter case, $\intdom{f}$ is nonempty, and thus it is equal to $\ri(\dom{f})$ by \Cref{pr:ri-props}(\ref{pr:ri-props:intC-nonemp-implies-eq-riC}), and so $f$ is continuous at $\xx$ by \citet[Theorem~10.1]{ROC}. \end{proof} \subsection{Conjugacy} \label{sec:prelim:conjugate} The \emph{conjugate} of a function $f:\Rn\rightarrow\Rext$ is the function $\fstar:\Rn\rightarrow\Rext$ defined, for $\uu\in\Rn$, by \[ \fstar(\uu) = \sup_{\xx\in\Rn} \bracks{\xx\cdot\uu - f(\xx)}. \] In general, a pair $\rpair{\uu}{v}\in\Rn\times\R$ is in $\epi{\fstar}$, meaning that $\fstar(\uu)\leq v$, if and only if $\xx\cdot\uu - f(\xx)\leq v$ for all $\xx\in\Rn$, or equivalently, if and only if $f(\xx)\geq \xx\cdot\uu - v$ for all $\xx\in\Rn$. Thus, $\fstar$ encodes all affine functions $\xx\mapsto \xx\cdot\uu - v$ that are majorized by $f$. 
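Two simple examples illustrate the definition:
\begin{example}
Let $f(\xx)=\frac{1}{2}\norm{\xx}^2$ for $\xx\in\Rn$. For each $\uu\in\Rn$, the function $\xx\mapsto \xx\cdot\uu-\frac{1}{2}\norm{\xx}^2$ is maximized at $\xx=\uu$, yielding $\fstar(\uu)=\frac{1}{2}\norm{\uu}^2$; thus, $f$ is its own conjugate. Next, let $f(\xx)=\xx\cdot\ww-b$ for some $\ww\in\Rn$ and $b\in\R$. Then
\[
  \fstar(\uu)
  = \sup_{\xx\in\Rn} \bracks{\xx\cdot(\uu-\ww)+b}
  = \begin{cases}
      b & \text{if $\uu=\ww$,} \\
      +\infty & \text{otherwise,}
    \end{cases}
\]
so $\epi{\fstar}=\{\ww\}\times[b,+\infty)$, matching the fact that the affine functions majorized by $f$ are exactly those of the form $\xx\mapsto\xx\cdot\ww-v$ with $v\geq b$.
\end{example}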
We write $\fdubs$ for the \emph{biconjugate} of $f$, the conjugate of its conjugate; that is, $\fdubs=(\fstar)^*$. From the preceding observation, it can be argued that the biconjugate of $f$ is equal to the pointwise supremum of all affine functions that are majorized by $f$. Here are facts about conjugates: \begin{proposition} \label{pr:conj-props} Let $f:\Rn\rightarrow\Rext$. \begin{letter-compact} \item \label{pr:conj-props:a} $\inf f = -\fstar(\zero)$. \item \label{pr:conj-props:b} Let $g:\Rn\rightarrow\Rext$. If $f\geq g$ then $\fstar \leq \gstar$. \item \label{pr:conj-props:c1} If $f\equiv+\infty$ then $\fstar\equiv-\infty$; otherwise, if $f\not\equiv+\infty$ then $\fstar>-\infty$. \item \label{pr:conj-props:c2} If $f(\xx)=-\infty$ for some $\xx\in\Rn$ then $\fstar\equiv+\infty$. \item \label{pr:conj-props:d} $\fstar$ is closed and convex. \item \label{pr:conj-props:e} $\fstar = (\lsc{f})^* = (\cl{f})^*$. \item \label{pr:conj-props:f} Suppose $f$ is convex. Then $f$ is proper if and only if $\fstar$ is proper. Also, $\fdubs=\cl{f}$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Parts~(\ref{pr:conj-props:a}), (\ref{pr:conj-props:b}), (\ref{pr:conj-props:c1}), (\ref{pr:conj-props:c2}):} These are straightforward from definitions. \pfpart{Part~(\ref{pr:conj-props:d}):} For $\xx\in\Rn$, let $\ell_{\xx}(\uu)=\xx\cdot\uu-f(\xx)$ for $\uu\in\Rn$. Then $\epi{\ell_\xx}$ is a closed halfspace in $\R^{n+1}$. Since $\fstar$ is the pointwise supremum of all $\ell_\xx$, its epigraph is \[ \epi{\fstar} = \bigcap_{\xx\in\Rn} \epi{\ell_\xx}, \] and so is closed and convex. Therefore, $\fstar$ is convex and lower semicontinuous (by \Cref{roc:thm7.1}\ref{roc:thm7.1:a},\ref{roc:thm7.1:b}). If $\fstar>-\infty$, then this also shows that $\fstar$ is closed. Otherwise, if $\fstar(\uu)=-\infty$ for some $\uu\in\Rn$, then $\xx\cdot\uu-f(\xx)\leq -\infty$ for all $\xx\in\Rn$, implying $f\equiv+\infty$ and so that $\fstar\equiv-\infty$ (by part c). 
Thus, $\fstar$ is closed in this case as well.

\pfpart{Part~(\ref{pr:conj-props:e}):}
As noted above, a pair $\rpair{\uu}{v}\in\Rn\times\R$ is in $\epi{\fstar}$ if and only if the affine function $h(\xx)=\xx\cdot\uu-v$ is majorized by $f$, that is, if and only if $\epi{f}\subseteq\epi{h}$. Likewise for $\lsc{f}$. Since $\epi(\lsc{f})=\cl(\epi{f})$ and since $\epi{h}$ is a closed halfspace in $\R^{n+1}$, it follows that $\epi{f}\subseteq\epi{h}$ if and only if $\epi(\lsc{f})\subseteq\epi{h}$. Thus, $\fstar$ and $(\lsc{f})^*$ have the same epigraphs, and therefore are equal. If $f>-\infty$, then $\cl{f}=\lsc{f}$, implying their conjugates are also equal. Otherwise, if $f\not>-\infty$, then $\cl{f}\equiv-\infty$ implying both $\fstar$ and $(\cl f)^*$ are identically $+\infty$.

\pfpart{Part~(\ref{pr:conj-props:f}):}
See \citet[Theorem~12.2]{ROC}.
\qedhere
\end{proof-parts}
\end{proof}

We next look at the conjugates of functions of particular forms. As defined in \eqref{eq:indf-defn}, let $\inds$ be the indicator function for some set $S\subseteq\Rn$. Its conjugate, $\indstars$, is called the \emph{support function} for $S$, and is equal to
\begin{equation}
\label{eq:e:2}
\indstars(\uu) = \sup_{\xx\in S} \xx\cdot\uu,
\end{equation}
for $\uu\in\Rn$.

\begin{proposition}
\label{roc:thm14.1-conj}
Let $K\subseteq\Rn$ be a closed convex cone. Then $\indstar{K}=\indf{\Kpol}$ and $\indstar{\Kpol}=\indf{K}$.
\end{proposition}

\begin{proof}
See \citet[Theorem~14.1]{ROC}.
\end{proof}

\begin{proposition}
\label{roc:thm16.4}
Let $f_i:\Rn\rightarrow\Rext$ be convex and proper, for $i=1,\ldots,m$. Assume $\bigcap_{i=1}^m \ri(\dom{f_i})\neq\emptyset$. Let $\uu\in\Rn$. Then
\[
(f_1+\cdots+f_m)^*(\uu)
=
\inf\Braces{\sum_{i=1}^m f^*_i(\uu_i):\:
\uu_1,\ldots,\uu_m\in\Rn,
\sum_{i=1}^m \uu_i = \uu}.
\]
Furthermore, the infimum is attained.
\end{proposition}

\begin{proof}
See \citet[Theorem~16.4]{ROC}.
\end{proof}

\begin{proposition}
\label{roc:thm16.3:fA}
Let $f:\Rm\rightarrow\Rext$ be convex, and let $\A\in\Rmn$. Assume there exists $\xing\in\Rn$ such that $\A\xing\in\ri(\dom f)$. Let $\uu\in\Rn$. Then
\[
(\fA)^*(\uu) = \inf\Braces{ \fstar(\ww):\: \ww\in\Rm,\,\transA\ww = \uu }.
\]
Furthermore, if $\uu\in\colspace{\transA}$, then the infimum is attained. (Otherwise, if $\uu\not\in\colspace{\transA}$, then the infimum is vacuously equal to $+\infty$.)
\end{proposition}

\begin{proof}
See \citet[Theorem~16.3]{ROC}.
\end{proof}

\begin{proposition}
\label{roc:thm16.3:Af}
Let $f:\Rn\rightarrow\Rext$ be convex and let $\A\in\Rmn$. Then
\[
(\A f)^* = \fstar \transA.
\]
\end{proposition}

\begin{proof}
See \citet[Theorem~16.3]{ROC}.
\end{proof}

\begin{proposition}
\label{pr:prelim-conj-max}
Let $f_i:\Rn\rightarrow\R$ be convex, for $i=1,\dotsc,m$, and let
\[
h(\xx)=\max\Braces{f_1(\xx),\dotsc,f_m(\xx)}
\]
for $\xx\in\Rn$. Let $\uu\in\Rn$. Then
\[
\hstar(\uu)
=
\inf\Braces{\sum_{i=1}^m \lambda_i f^*_i(\uu_i):\:
\sum_{i=1}^m \lambda_i \uu_i = \uu },
\]
where the infimum is over all $\uu_1,\ldots,\uu_m\in\Rn$ and all nonnegative $\lambda_1,\ldots,\lambda_m$ which sum to~$1$, subject to $\sum_{i=1}^m \lambda_i \uu_i = \uu$. Furthermore, the infimum is attained.
\end{proposition}

\begin{proof}
See \citet[Theorem~X.2.4.7]{HULL-big-v2}.
\end{proof}

We next give an expression for the conjugate of the composition of a nondecreasing convex function on $\R$, suitably extended to $\Rext$, with a convex function $f:\Rn\rightarrow\Rext$. This expression is in terms of $\indepifstar$, the support function for the epigraph of $f$, for which an explicit expression in terms of $\fstar$ is given below in \Cref{pr:support-epi-f-conjugate}. This material, including the proof, is based closely on \citet[Theorem~E.2.5.1]{HULL}.

\begin{proposition}
\label{thm:conj-compose-our-version}
Let $f:\Rn\rightarrow\Rext$ be convex, and let $g:\R\rightarrow\Rext$ be convex, proper and nondecreasing.
Assume there exists $\xing\in\dom{f}$ such that $f(\xing)\in\intdom{g}$. Let $G:\eR\to\eR$ be the monotone extension of $g$ with $G(-\infty)=\inf g$ and $G(+\infty)=+\infty$. Let $\uu\in\Rn$. Then
\[
(G\circ f)^*(\uu)
=
\min_{v\in\R} \bigBracks{g^*(v)+\indepifstar(\rpairf{\uu}{-v})},
\]
with the minimum attained.
\end{proposition}

\begin{proof}
In the proof, we consider points $\zz\in\R^{n+1}$ of the form $\rpair{\xx}{y}$ where $\xx\in\Rn$ and $y\in\R$. To extract $\xx$ and $y$ from $\zz$, we introduce two matrices: The first matrix, in $\R^{n\times(n+1)}$, is $\PPx=[\Iden,\zero]$, where $\Iden$ is the $n\times n$ identity matrix and $\zero$ is the all-zeros vector in $\Rn$. The other matrix, in $\R^{1\times(n+1)}$, is $\PPy=[0,\dotsc,0,1]=\trans{\ee_{n+1}}$, where $\ee_{n+1}=\rpair{\zero}{1}$ is the $(n+1)$-st standard basis vector in $\R^{n+1}$. Thus, for $\zz=\rpair{\xx}{y}$, we obtain $\PPx\zz=\xx$ and $\PPy\zz=y$.

Let $h=G\circ f$. We aim to derive an expression for $\hstar(\uu)$. Because $g$ is nondecreasing, we can rewrite $h$, for $\xx\in\Rn$, as
\begin{align}
\label{eq:thm:conj-compose-our-version:1a}
h(\xx)
&=
\inf\Braces{ g(y):\:y\in\R, y\geq f(\xx) }
\\
\notag
&=
\inf\Braces{ g(y) + \indepif(\rpairf{\xx}{y}):\:y\in\R }
\\
\label{eq:thm:conj-compose-our-version:1}
&=
\inf\Braces{ g(\PPy\zz) + \indepif(\zz):\:\zz\in\R^{n+1}, \PPx\zz=\xx }.
\end{align}
Note that the definition of $G$ at $\pm\infty$ implies that \eqref{eq:thm:conj-compose-our-version:1a} is valid even if $f(\xx)$ is $\pm\infty$.

Let $r=g\PPy$; in other words, for $\xx\in\Rn$ and $y\in\R$, $r(\rpairf{\xx}{y})=g(y)$. Furthermore, let $s=r+\indepif=g\PPy+\indepif$. Then \eqref{eq:thm:conj-compose-our-version:1} shows that $h=\PPx s$. Consequently, by \Cref{roc:thm16.3:Af}, $\hstar=\sstar \trans{\PPx}$, that is,
\begin{equation}
\label{eq:thm:conj-compose-our-version:4}
\hstar(\uu) = \sstar(\trans{\PPx} \uu) = \sstar(\rpairf{\uu}{0}).
\end{equation} It only remains then to compute $\sstar(\rpairf{\uu}{0})$. For this, we will make use of the next claim, stated as a lemma for later reference. \begin{lemma} \label{lem:thm:conj-compose-our-version:1} Let $f,g,\xing$ be as stated in \Cref{thm:conj-compose-our-version}, and let $r=g\PPy$. Then $\ri(\dom{r})\cap\ri(\epi f)\neq\emptyset$. \end{lemma} \begin{proofx} Let $\ying=f(\xing)$, which, by assumption, is in $\intdom{g}$, implying there exists an open set $U\subseteq\dom{g}$ that includes $\ying$. Then $\Rn\times U$ is open in $\Rn\times\R=\R^{n+1}$ and includes the point $\rpair{\xing}{\ying}$. Furthermore, $\Rn\times U\subseteq\dom{r}$ since $r(\rpairf{\xx}{y})=g(y)<+\infty$ for $\xx\in\Rn$ and $y\in U$. Therefore, $\rpair{\xing}{\ying}\in\intdom{r}$. Also, $\rpair{\xing}{\ying}\in\epi{f}$. Thus, $\intdom{r}\cap\epi{f}\neq\emptyset$. The claim now follows by \Cref{pr:ri-props}(\ref{pr:ri-props:intC-D-implies-riC-riD}). \end{proofx} In light of Lemma~\ref{lem:thm:conj-compose-our-version:1}, we can now compute $\sstar(\rpairf{\uu}{0})$ using \Cref{roc:thm16.4}, yielding (with Eq.~\ref{eq:thm:conj-compose-our-version:4}) that \begin{equation} \label{eq:thm:conj-compose-our-version:3} \hstar(\uu) = \sstar(\rpairf{\uu}{0}) = \inf\BigBraces{\rstar(\ww) + \indepifstar\bigParens{\rpair{\uu}{0} - \ww} :\:\ww\in\R^{n+1} }, \end{equation} and furthermore that the infimum is attained. Recall that $r = g\PPy$. Note that $(\colspace \PPy) \cap \ri(\dom{g})\neq\emptyset$ since $\PPy\rpair{\zero}{f(\xing)}=f(\xing)\in\ri(\dom{g})$. We can therefore apply \Cref{roc:thm16.3:fA} to compute $\rstar$, yielding, for $\ww\in\R^{n+1}$, that \[ \rstar(\ww) = \inf\BigBraces{ \gstar(v) :\: v\in\R,\,\trans{\PPy} v = \ww }. \] This means that if $\ww$ has the form $\ww= \trans{\PPy} v=\rpair{\zero}{v}$, for some $v\in\R$, then this $v$ must be unique and we must have $\rstar(\ww)=\gstar(v)$. Otherwise, if $\ww$ does not have this form, then vacuously $\rstar(\ww)=+\infty$. 
This last fact implies that, in calculating the infimum in \eqref{eq:thm:conj-compose-our-version:3}, we need only consider $\ww$ of the form $\ww=\rpair{\zero}{v}$, for $v\in\R$. Thus, that equation becomes \begin{align} \notag \hstar(\uu) &= \inf\BigBraces{\rstar(\rpairf{\zero}{v}) + \indepifstar\bigParens{\rpair{\uu}{0} - \rpair{\zero}{v}} :\: v\in\R } \\ \label{eq:thm:conj-compose-our-version:6} &= \inf\BigBraces{\gstar(v) + \indepifstar(\rpairf{\uu}{-v}) :\: v\in\R }. \end{align} It remains to show that the infimum is always attained. If $\hstar(\uu)<+\infty$, then, as previously noted, the infimum in \eqref{eq:thm:conj-compose-our-version:3} is attained by some $\ww\in\R^{n+1}$, which must be of the form $\ww=\rpair{\zero}{v}$, for some $v\in\R$ (since otherwise $\rstar(\ww)=+\infty$), implying that the infimum in \eqref{eq:thm:conj-compose-our-version:6} is attained as well. Otherwise, if $\hstar(\uu)=+\infty$, then \eqref{eq:thm:conj-compose-our-version:6} implies that the infimum is attained at every point $v\in\R$. \end{proof} The support function $\indepifstar$ that appears in \Cref{thm:conj-compose-our-version} can be expressed as follows (where $\sfprod{v}{f}$ is as defined in Eq.~\ref{eq:sfprod-defn}): \begin{proposition} \label{pr:support-epi-f-conjugate} Let $f:\Rn\rightarrow\Rext$, let $\uu\in\Rn$, and let $v\in\R$. If $v\geq 0$ then \[ \indepifstar(\rpairf{\uu}{-v}) = \sfprodstar{v}{f}(\uu) = \begin{cases} v \fstar(\uu/v) & \text{if $v>0$,} \\ \inddomfstar(\uu) & \text{if $v=0$.} \end{cases} \] Otherwise, if $v<0$ and $f\not\equiv+\infty$ then $\indepifstar(\rpairf{\uu}{-v})=+\infty$. \end{proposition} \begin{proof} Suppose first that $v\geq 0$. 
Then \begin{align*} \indepifstar(\rpairf{\uu}{-v}) &= \sup_{\rpair{\xx}{y}\in\epi f} \bigBracks{ \xx\cdot\uu - yv } \\ &= \adjustlimits\sup_{\xx\in\dom f} \sup_{\;y\in\R:\:y\geq f(\xx)\;} \bigBracks{ \xx\cdot\uu - yv } \\ &= \sup_{\xx\in\dom f} \bigBracks{ \xx\cdot\uu - v f(\xx) } \\ &= \sup_{\xx\in\Rn} \bigBracks{ \xx\cdot\uu - (\sfprod{v}{f})(\xx) } \\ &= \sfprodstar{v}{f}(\uu). \end{align*} The first equality is by definition of support function (Eq.~\ref{eq:e:2}). The fourth equality is because $(\sfprod{v}{f})(\xx)$ equals $v f(\xx)$ if $\xx\in\dom{f}$, and is $+\infty$ otherwise. And the last equality is by definition of conjugate (Eq.~\ref{eq:fstar-def}). If $v=0$ then $\sfprodstar{0}{f}=\inddomfstar$ by \eqref{eq:sfprod-identity}. And if $v>0$ then \[ \sfprodstar{v}{f}(\uu) = (v f)^*(\uu) = \sup_{\xx\in\Rn} \bigBracks{ \xx\cdot\uu - v f(\xx) } = v\!\sup_{\xx\in\Rn} \BigBracks{ \xx\cdot\frac{\uu}{v} - f(\xx) } = v \fstar\Parens{\frac{\uu}{v}}, \] using \eqref{eq:fstar-def}. Finally, suppose $v<0$ and $f\not\equiv+\infty$. Let $\xing$ be any point with $f(\xing)<+\infty$. Then \[ \indepifstar(\rpairf{\uu}{-v}) = \sup_{\rpair{\xx}{y}\in\epi f} \bigBracks{ \xx\cdot\uu - yv } \geq \sup_{\scriptontop{y\in\R:}{y\geq f(\xing)}} \bigBracks{ \xing\cdot\uu - yv } = +\infty \] since $\rpair{\xing}{y}\in\epi f$ for all $y\geq f(\xing)$. \end{proof} \Cref{thm:conj-compose-our-version} can be simplified if the function $g$ only takes values in $\R$ and $f$ is proper: \begin{corollary} \label{cor:conj-compose-our-version} Let $f:\Rn\rightarrow\Rext$ be convex and proper, and let $g:\R\rightarrow\R$ be convex and nondecreasing. Let $G:\eR\to\eR$ be the monotone extension of $g$ with $G(-\infty)=\inf g$ and $G(+\infty)=+\infty$. Then, for every $\uu\in\Rn$, \[ (G\circ f)^*(\uu) = \min_{v\ge 0} \bigBracks{g^*(v)+\indepifstar(\rpairf{\uu}{-v})}, \] with the minimum attained. 
\end{corollary} \begin{proof} Since $f$ is proper, there exists $\xing\in\Rn$ such that $f(\xing)\in\R=\intr(\dom g)$, satisfying the condition of \Cref{thm:conj-compose-our-version}. Moreover, by \Cref{pr:support-epi-f-conjugate} it suffices to restrict minimization to $v\ge 0$. \end{proof} \subsection{Subgradients and subdifferentials} \label{sec:prelim:subgrads} We say that $\uu\in\Rn$ is a \emph{subgradient} of a function $f:\Rn\rightarrow\Rext$ at a point $\xx\in\Rn$ if \begin{equation} \label{eqn:prelim-standard-subgrad-ineq} f(\xx') \geq f(\xx) + (\xx'-\xx)\cdot\uu \end{equation} for all $\xx'\in\Rn$. The \emph{subdifferential} of $f$ at $\xx$, denoted $\partial f(\xx)$, is the set of all subgradients of $f$ at $\xx$. It is immediate from this definition that $\zero\in\partial f(\xx)$ if and only if $\xx$ minimizes $f$. Subdifferentials and their extension to astral space will be explored in considerable detail beginning in Chapters~\ref{sec:gradients} and~\ref{sec:calc-subgrads}. Here are some properties and characterizations of the subgradients of a convex function: \begin{proposition} \label{roc:thm23.4} Let $f:\Rn\rightarrow\Rext$ be convex and proper, and let $\xx\in\Rn$. \begin{letter-compact} \item \label{roc:thm23.4:a} If $\xx\in\ri(\dom{f})$ then $\partial f(\xx)\neq\emptyset$. \item \label{roc:thm23.4:b} If $\partial f(\xx)\neq\emptyset$ then $\xx\in\dom{f}$. \end{letter-compact} \end{proposition} \begin{proof} See \citet[Theorem~23.4]{ROC}. \end{proof} \begin{proposition} \label{pr:stan-subgrad-equiv-props} Let $f:\Rn\rightarrow\Rext$, and let $\xx,\uu\in\Rn$. Then the following are equivalent: \begin{letter-compact} \item \label{pr:stan-subgrad-equiv-props:a} $\uu\in\partial f(\xx)$. \item \label{pr:stan-subgrad-equiv-props:b} $\fstar(\uu) = \xx\cdot\uu - f(\xx)$. 
\savelccounter
\end{letter-compact}
In addition, if $f$ is convex and proper, then the statements above and the statements below are also all equivalent to one another:
\begin{letter-compact}
\restorelccounter
\item \label{pr:stan-subgrad-equiv-props:c}
$(\cl{f})(\xx) = f(\xx)$ and $\xx\in\partial\fstar(\uu)$.
\item \label{pr:stan-subgrad-equiv-props:d}
$(\cl{f})(\xx) = f(\xx)$ and $\uu\in\partial(\cl{f})(\xx)$.
\end{letter-compact}
\end{proposition}

\begin{proof}
By simple algebra, the definition of standard subgradient given in \eqref{eqn:prelim-standard-subgrad-ineq} holds if and only if $\xx\cdot\uu-f(\xx) \geq \xx'\cdot\uu-f(\xx')$ for all $\xx'\in\Rn$, that is, if and only if
\[
\xx\cdot\uu-f(\xx) = \sup_{\xx'\in\Rn} [\xx'\cdot\uu-f(\xx')].
\]
Since the term on the right is exactly $\fstar(\uu)$, this proves the equivalence of (\ref{pr:stan-subgrad-equiv-props:a}) and (\ref{pr:stan-subgrad-equiv-props:b}).

When $f$ is convex and proper, the remaining equivalences follow directly from Theorem~23.5 and Corollary~23.5.2 of \citet{ROC}.
\end{proof}

\begin{proposition}
\label{roc:thm24.4}
Let $f:\Rn\rightarrow\Rext$ be closed, proper and convex. Let $\seq{\xx_t}$ and $\seq{\uu_t}$ be sequences in $\Rn$, and let $\xx,\uu\in\Rn$. Suppose $\xx_t\rightarrow\xx$, $\uu_t\rightarrow\uu$, and that $\uu_t\in\partial f(\xx_t)$ for all $t$. Then $\uu\in\partial f(\xx)$.
\end{proposition}

\begin{proof}
See \citet[Theorem~24.4]{ROC}.
\end{proof}

\begin{proposition}
\label{roc:thm25.1}
Let $f:\Rn\rightarrow\Rext$ be convex, and let $\xx\in\Rn$ with $f(\xx)\in\R$.
\begin{letter-compact}
\item \label{roc:thm25.1:a}
If $f$ is differentiable at $\xx$, then $\nabla f(\xx)$ is $f$'s only subgradient at $\xx$; that is, $\partial f(\xx) = \{ \nabla f(\xx) \}$.
\item \label{roc:thm25.1:b}
Conversely, if $\partial f(\xx)$ is a singleton, then $f$ is differentiable at $\xx$.
\end{letter-compact}
\end{proposition}

\begin{proof}
See \citet[Theorem~25.1]{ROC}.
\end{proof} Here are some rules for computing subdifferentials, which extend standard calculus rules for finding ordinary gradients: If $f:\Rn\rightarrow\Rext$ and $\lambda\in\Rstrictpos$, then for $\xx\in\Rn$, $\partial (\lambda f)(\xx) = \lambda \partial f(\xx)$ (as can be shown straightforwardly from definitions). As is often the case, the next rule, for sums of functions, depends partially on a topological condition: \begin{proposition} \label{roc:thm23.8} Let $f_i:\Rn\rightarrow\Rext$ be convex and proper, for $i=1,\ldots,m$. Let $h=f_1+\cdots+f_m$, and let $\xx\in\Rn$. \begin{letter-compact} \item $\partial f_1(\xx)+\cdots+\partial f_m(\xx) \subseteq \partial h(\xx)$. \item If, in addition, $\bigcap_{i=1}^m \ri(\dom{f_i})\neq\emptyset$, then $\partial f_1(\xx)+\cdots+\partial f_m(\xx) = \partial h(\xx)$. \end{letter-compact} \end{proposition} \begin{proof} See \citet[Theorem~23.8]{ROC}. \end{proof} \begin{proposition} \label{roc:thm23.9} Let $f:\Rn\rightarrow\Rext$ be convex and proper, and let $\A\in\Rmn$. Let $\xx\in\Rn$. \begin{letter-compact} \item $\transA\negKern\partial f(\A \xx) \subseteq \partial (\fA)(\xx)$. \item If, in addition, there exists $\xing\in\Rn$ such that $\A \xing\in\ri(\dom{f})$, then $\transA\negKern\partial f(\A \xx) = \partial (\fA)(\xx)$. \end{letter-compact} \end{proposition} \begin{proof} See \citet[Theorem~23.9]{ROC}. \end{proof} The following is based on \citet[Theorem~VI.4.5.1]{HULL-big-v1}: \begin{proposition} \label{pr:stan-subgrad-lin-img} Let $f:\Rn\rightarrow\Rext$ and $\A\in\Rmn$. Let $\xx,\uu\in\Rm$. Then the following are equivalent: \begin{letter-compact} \item \label{pr:stan-subgrad-lin-img:a} $\uu\in\partial (\A f)(\xx)$, and also the infimum defining $\A f$ in \eqref{eq:lin-image-fcn-defn} is attained by some $\zz\in\Rn$ with $\A\zz=\xx$. \item \label{pr:stan-subgrad-lin-img:b} There exists $\zz\in\Rn$ such that $\A \zz = \xx$ and $\transA \uu \in \partial f(\zz)$. 
\end{letter-compact}
\end{proposition}

\begin{proof}
Let $h=\A f$.
\begin{proof-parts}
\pfpart{(\ref{pr:stan-subgrad-lin-img:a}) $\Rightarrow$ (\ref{pr:stan-subgrad-lin-img:b}): }
Suppose $\uu\in\partial h(\xx)$ and that there exists $\zz\in\Rn$ attaining the infimum in \eqref{eq:lin-image-fcn-defn}, implying $\A\zz=\xx$ and $h(\xx)=f(\zz)$. Then for all $\zz'\in\Rn$,
\[
f(\zz')
\geq h(\A \zz')
\geq h(\xx) + \uu\cdot(\A\zz' - \xx)
= f(\zz) + (\transA\uu)\cdot(\zz'-\zz).
\]
The first inequality is by $\A f$'s definition. The second is because $\uu\in\partial h(\xx)$. The equality is by algebra, using $h(\xx)=f(\zz)$ and $\xx=\A\zz$. Thus, $\transA\uu\in\partial f(\zz)$, as claimed.

\pfpart{(\ref{pr:stan-subgrad-lin-img:b}) $\Rightarrow$ (\ref{pr:stan-subgrad-lin-img:a}): }
Suppose there exists $\zz\in\Rn$ with $\A \zz = \xx$ and $\transA \uu \in \partial f(\zz)$. Let $\zz'\in\Rn$, and let $\xx'=\A\zz'$. Then
\[
f(\zz')
\geq f(\zz) + (\transA \uu)\cdot (\zz' - \zz)
= f(\zz) + \uu\cdot (\xx' - \xx).
\]
The inequality is because $\transA \uu \in \partial f(\zz)$. The equality is by algebra (since $\A\zz'=\xx'$ and $\A\zz=\xx$). Since this holds for all $\zz'$ with $\A\zz'=\xx'$, it follows that
\[
h(\xx')
\geq f(\zz) + \uu\cdot (\xx' - \xx)
\geq h(\xx) + \uu\cdot (\xx' - \xx),
\]
where the second inequality is because $f(\zz)\geq h(\xx)$ by $\A f$'s definition. This shows that $\uu\in\partial h(\xx)$. Furthermore, applied with $\xx'=\xx$, it shows that $f(\zz)=h(\xx)$ and thus that the infimum in \eqref{eq:lin-image-fcn-defn} is attained (by $\zz$).
\qedhere
\end{proof-parts}
\end{proof}

For subgradients of a pointwise supremum, we have the following general inclusion, followed by a special case in which the inclusion holds with equality. A more general result is given by \citet[Theorem~VI.4.4.2]{HULL-big-v1}.
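Before stating these results, it may be helpful to work out a simple one-dimensional example directly from the definition in \eqref{eqn:prelim-standard-subgrad-ineq}:
\begin{example}
For $x\in\R$, let $h(x)=\abs{x}=\max\{x,\,-x\}$. Then $u\in\partial h(0)$ if and only if $\abs{x'}\geq u x'$ for all $x'\in\R$, which holds if and only if $u\in[-1,1]$. Thus, $\partial h(0)=[-1,1]=\conv\{-1,1\}$, the convex hull of the gradients at $0$ of the two functions $x\mapsto x$ and $x\mapsto -x$, both of which attain the maximum at $0$. At every $x\neq 0$, only one of the two functions attains the maximum, and $\partial h(x)$ is the singleton $\{1\}$ (if $x>0$) or $\{-1\}$ (if $x<0$), consistent with \Cref{roc:thm25.1}(\ref{roc:thm25.1:a}) since $h$ is differentiable there.
\end{example}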
\begin{proposition} \label{pr:stnd-sup-subgrad} Let $f_i:\Rn\rightarrow\Rext$ for all $i\in\indset$, and let \[ h(\xx) = \sup_{i\in\indset} f_i(\xx) \] for $\xx\in\Rn$. Let $\xx\in\Rn$, and let $J = \Braces{i \in \indset : h(\xx)=f_i(\xx)}$. Then \[ \cl\conv\Parens{ \bigcup_{i\in J} \partial f_i(\xx) } \subseteq \partial h(\xx). \] \end{proposition} \begin{proof} This can be proved as in Lemma~VI.4.4.1 of \citet{HULL-big-v1} (even though that lemma itself is stated for a more limited case). \end{proof} \begin{proposition} \label{pr:std-max-fin-subgrad} Let $f_i:\Rn\rightarrow\R$ be convex, for $i=1,\dotsc,m$, and let \[ h(\xx)=\max\Braces{f_1(\xx),\dotsc,f_m(\xx)} \] for $\xx\in\Rn$. Let $\xx\in\Rn$, and let $ J = \Braces{i \in \{1,\ldots,m\} : h(\xx)=f_i(\xx)}$. Then \[ \partial h(\xx) = \conv\Parens{ \bigcup_{i\in J} \partial f_i(\xx) }. \] \end{proposition} \begin{proof} See \citet[Corollary~VI.4.3.2]{HULL-big-v1}. \end{proof} \begin{proposition} \label{pr:std-subgrad-comp-inc} Let $f:\Rn\rightarrow\R$ be convex, and let $g:\R\rightarrow\R$ be convex and nondecreasing. Let $h=g\circ f$ and let $\xx\in\Rn$. Then \[ \partial h(\xx) = \Braces{ v \uu : \uu\in\partial f(\xx), v \in \partial g(f(\xx)) }. \] \end{proposition} \begin{proof} A special case of \citet[Theorem~VI.4.3.1]{HULL-big-v1}. \end{proof} \subsection{Recession cone and constancy space} \label{sec:prelim:rec-cone} The \emph{recession cone} of a function $f:\Rn\rightarrow\Rext$, denoted $\resc{f}$, is the set of directions in which the function never increases: \begin{equation} \label{eqn:resc-cone-def} \resc{f} = \Braces{\vv\in\Rn : \forall \xx\in\Rn, \forall \lambda\in\Rpos, f(\xx+\lambda\vv)\leq f(\xx) }. \end{equation} \begin{proposition} \label{pr:resc-cone-basic-props} Let $f:\Rn\rightarrow\Rext$. Then $f$'s recession cone, $\resc{f}$, is a convex cone. In addition, if $f$ is lower semicontinuous, then $\resc{f}$ is closed in~$\Rn$. 
\end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Cone:} This is immediate from definitions. \pfpart{Convex:} Let $\vv$ and $\ww$ be in $\resc{f}$, and let $\lambda\in\Rpos$. Then for all $\xx\in\Rn$, \[ f(\xx+\lambda(\vv+\ww)) = f(\xx+\lambda\vv+\lambda\ww) \leq f(\xx+\lambda\vv) \leq f(\xx). \] The first inequality is because $\ww\in\resc{f}$, and the second is because $\vv\in\resc{f}$. Thus, $\vv+\ww\in\resc{f}$. Therefore, $\resc{f}$ is convex by \Cref{pr:scc-cone-elts}(\ref{pr:scc-cone-elts:d}). \pfpart{Closed:} Assume $f$ is lower semicontinuous. Let $\seq{\vv_t}$ be any convergent sequence in $\resc{f}$, and suppose its limit is $\vv$. Then for any $\xx\in\Rn$ and $\lambda\in\Rpos$, \[ f(\xx) \geq \liminf f(\xx+\lambda\vv_t) \geq f(\xx+\lambda\vv) \] since $\xx+\lambda\vv_t\rightarrow\xx+\lambda\vv$ and $f$ is lower semicontinuous. Thus, $\vv\in\resc{f}$, so $\resc{f}$ is closed. \qedhere \end{proof-parts} \end{proof} \begin{proposition} \label{pr:stan-rec-equiv} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous, and let $\vv\in\Rn$. Then the following are equivalent: \begin{letter-compact} \item \label{pr:stan-rec-equiv:a} $\vv\in\resc{f}$. \item \label{pr:stan-rec-equiv:b} For all $\xx\in\Rn$, $f(\xx+\vv)\leq f(\xx)$. \item \label{pr:stan-rec-equiv:c} Either $f\equiv+\infty$ or \begin{equation} \label{eq:pr:stan-rec-equiv:1} \liminf_{\lambda\rightarrow+\infty} f(\xx+\lambda\vv) < +\infty \end{equation} for some $\xx\in\Rn$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{pr:stan-rec-equiv:a}) $\Rightarrow$ (\ref{pr:stan-rec-equiv:b}): } This is immediate from the definition of $\resc{f}$ (Eq.~\ref{eqn:resc-cone-def}). \pfpart{(\ref{pr:stan-rec-equiv:b}) $\Rightarrow$ (\ref{pr:stan-rec-equiv:c}): } Suppose statement~(\ref{pr:stan-rec-equiv:b}) holds and that $f\not\equiv+\infty$. Let $\xx$ be any point in $\dom{f}$. 
Then for all $t$, $f(\xx+t\vv)\leq f(\xx+(t-1)\vv)$, so by an inductive argument, $f(\xx+t\vv)\leq f(\xx)$ for $t=1,2,\ldots$. Therefore, $\liminf_{\lambda\rightarrow+\infty} f(\xx+\lambda\vv) \leq f(\xx) < +\infty$, proving \eqref{eq:pr:stan-rec-equiv:1}. \pfpart{(\ref{pr:stan-rec-equiv:c}) $\Rightarrow$ (\ref{pr:stan-rec-equiv:a}): } If $f$ is proper, then this follows directly from \citet[Theorem~8.6]{ROC}. Also, if $f\equiv+\infty$, then $\resc{f}=\Rn$, again proving the claim. So assume henceforth that $f$ is improper and that \eqref{eq:pr:stan-rec-equiv:1} holds for some $\xx\in\Rn$. Then $f(\zz)\in\{-\infty,+\infty\}$ for all $\zz\in\Rn$, by \Cref{pr:improper-vals}(\ref{pr:improper-vals:cor7.2.1}). Consequently, there exists a sequence $\seq{\lambda_t}$ in $\R$ with $\lambda_t\rightarrow+\infty$ and $f(\xx+\lambda_t\vv)=-\infty$ for all $t$. Let $S=\dom{f}$, which is convex and closed (by \Cref{roc:thm7.1}\ref{roc:thm7.1:a},\ref{roc:thm7.1:c}). Let $f'=\inds$, the indicator function for $S$ (Eq.~\ref{eq:indf-defn}), which is convex, closed and proper. Then $f'(\xx+\lambda_t\vv)=0$ for all $t$, so \eqref{eq:pr:stan-rec-equiv:1} holds for $f'$, implying $\vv\in\resc{f'}$ by the argument above. Since $f(\yy)\leq f(\zz)$ if and only if $f'(\yy)\leq f'(\zz)$ for all $\yy,\zz\in\Rn$, it follows from the definition of recession cone that $\resc{f'}=\resc{f}$. Therefore, $\vv\in\resc{f}$. \qedhere \end{proof-parts} \end{proof} The recession cone of a closed, proper convex function $f:\Rn\rightarrow\Rext$ can be expressed as a polar of $\cone(\dom{\fstar})$, the cone generated by the effective domain of its conjugate: \begin{proposition} \label{pr:rescpol-is-con-dom-fstar} Let $f:\Rn\rightarrow\Rext$ be closed, proper and convex. Then \begin{equation} \label{eq:pr:rescpol-is-con-dom-fstar:1} \rescpol{f} = \cl\bigParens{\cone(\dom{\fstar})}. 
\end{equation} Consequently, \begin{equation} \label{eq:pr:rescpol-is-con-dom-fstar:2} \resc{f} = \polar{\bigParens{\cone(\dom{\fstar})}} = \Braces{\vv\in\Rn : \uu\cdot\vv\leq 0 \textup{ for all } \uu\in\dom{\fstar}}. \end{equation} \end{proposition} \begin{proof} For \eqref{eq:pr:rescpol-is-con-dom-fstar:1}, see \citet[Theorem~14.2]{ROC}. Then \[ \resc{f} = \rescdubpol{f} = \polar{\bigBracks{\cl\bigParens{\cone(\dom{\fstar})}}} = \polar{\bigParens{\cone(\dom{\fstar})}}. \] The first equality is from Proposition~\ref{pr:polar-props}(\ref{pr:polar-props:c}) and since $\resc{f}$ is a closed (in $\Rn$) convex cone (Proposition~\ref{pr:resc-cone-basic-props}). The second equality is from \eqref{eq:pr:rescpol-is-con-dom-fstar:1}. The last equality is by Proposition~\ref{pr:polar-props}(\ref{pr:polar-props:a}). This proves the first equality of \eqref{eq:pr:rescpol-is-con-dom-fstar:2}. The second equality then follows from \Cref{pr:scc-cone-elts}(\ref{pr:scc-cone-elts:c:new}) (since if $\vv\cdot\uu\leq 0$ for all $\uu\in \dom{\fstar}$, then so also $\vv\cdot(\lambda\uu)\leq 0$ for all $\lambda\in\Rstrictpos$). \end{proof} The \emph{constancy space} of a function $f:\Rn\rightarrow\Rext$, denoted $\conssp{f}$, consists of those directions in which the value of $f$ remains constant: \begin{equation} \label{eq:conssp-defn} \conssp{f} = \Braces{\vv\in\Rn : \forall \xx\in\Rn, \forall \lambda\in\R, f(\xx+\lambda\vv) = f(\xx) }. \end{equation} \begin{proposition} \label{pr:prelim:const-props} Let $f:\Rn\rightarrow\Rext$. \begin{letter-compact} \item \label{pr:prelim:const-props:a} $\conssp{f}=(\resc{f}) \cap (-\resc{f})$. Consequently, $\zero\in\conssp{f}\subseteq\resc{f}$. \item \label{pr:prelim:const-props:b} $\conssp{f}$ is a linear subspace. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:prelim:const-props:a}):} If $\vv\in\conssp{f}$ then from definitions, both $\vv$ and $-\vv$ are in $\resc{f}$. 
Thus, $\conssp{f}\subseteq(\resc{f}) \cap (-\resc{f})$. For the reverse inclusion, suppose $\vv\in(\resc{f}) \cap (-\resc{f})$. Then for all $\xx\in\Rn$ and all $\lambda\in\Rpos$, $f(\xx+\lambda\vv)\leq f(\xx)$ and $f(\xx)=f((\xx+\lambda\vv)-\lambda\vv)\leq f(\xx+\lambda\vv)$ since $-\vv\in\resc{f}$. Thus, $f(\xx+\lambda\vv) = f(\xx)$. Therefore, $\vv\in\conssp{f}$. \pfpart{Part~(\ref{pr:prelim:const-props:b}):} Since $\resc{f}$ is a convex cone (\Cref{pr:resc-cone-basic-props}), this follows from part~(\ref{pr:prelim:const-props:a}) and \citet[Theorem~2.7]{ROC}. \qedhere \end{proof-parts} \end{proof} \begin{proposition} \label{pr:cons-equiv} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous, and let $\vv\in\Rn$. Then the following are equivalent: \begin{letter-compact} \item \label{pr:cons-equiv:a} $\vv\in\conssp{f}$. \item \label{pr:cons-equiv:b} For all $\xx\in\Rn$, $f(\xx+\vv) = f(\xx)$. \item \label{pr:cons-equiv:c} Either $f\equiv+\infty$ or \begin{equation} \label{eq:pr:cons-equiv:1} \sup_{\lambda\in\R} f(\xx+\lambda\vv) < +\infty \end{equation} for some $\xx\in\Rn$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{pr:cons-equiv:a}) $\Rightarrow$ (\ref{pr:cons-equiv:b}): } This is immediate from the definition of $\conssp{f}$ (Eq.~\ref{eq:conssp-defn}). \pfpart{(\ref{pr:cons-equiv:b}) $\Rightarrow$ (\ref{pr:cons-equiv:a}): } Suppose~(\ref{pr:cons-equiv:b}) holds, implying $\vv\in\resc{f}$ by \Cref{pr:stan-rec-equiv}(\ref{pr:stan-rec-equiv:a},\ref{pr:stan-rec-equiv:b}). Then also $f(\xx-\vv)=f(\xx)$ for all $\xx\in\Rn$, similarly implying $-\vv\in\resc{f}$. Thus, $\vv\in(\resc{f})\cap(-\resc{f})=\conssp{f}$ by \Cref{pr:prelim:const-props}(\ref{pr:prelim:const-props:a}). \pfpart{(\ref{pr:cons-equiv:a}) $\Rightarrow$ (\ref{pr:cons-equiv:c}): } Suppose $\vv\in\conssp{f}$ and $f\not\equiv+\infty$. Let $\xx$ be any point in $\dom{f}$. 
Then $f(\xx+\lambda\vv)=f(\xx)<+\infty$ for all $\lambda\in\R$, implying \eqref{eq:pr:cons-equiv:1}. \pfpart{(\ref{pr:cons-equiv:c}) $\Rightarrow$ (\ref{pr:cons-equiv:a}): } If $f\equiv+\infty$ then $\conssp{f}=\Rn$, implying the claim. Otherwise, \eqref{eq:pr:cons-equiv:1} holds for some $\xx\in\Rn$, so there exists $\beta\in\R$ with $f(\xx+\lambda \vv) \leq \beta$ for all $\lambda\in\R$. This implies that \eqref{eq:pr:stan-rec-equiv:1} holds for both $\vv$ and $-\vv$, so both points are in $\resc{f}$, by \Cref{pr:stan-rec-equiv}(\ref{pr:stan-rec-equiv:a},\ref{pr:stan-rec-equiv:c}). Thus, as above, $\vv\in(\resc{f})\cap(-\resc{f})=\conssp{f}$. \qedhere \end{proof-parts} \end{proof} We say that a vector $\vv\in\Rn$ is \emph{strictly recessive} if it is in $(\resc{f})\setminus(\conssp{f})$, meaning that the function never increases in direction $\vv$, but is strictly decreasing somewhere in that direction. \part{Astral Space and Astral Points} \label{part:astral-space} \section{Constructing astral space} \label{sec:astral-space-intro} We begin our development with the formal construction of {astral space}. Earlier, in Section~\ref{sec:intro:astral}, we gave a quick introduction to astral space, including the main ideas of its construction. Nonetheless, in this chapter, we start afresh so as to bring out more fully the intuition and rationale for our construction, and to fill in its details more formally and precisely while also establishing some first basic properties of astral points. At a high level, our aim is to construct astral space as a compact extension of Euclidean space in which various points at infinity have been added. We require compactness because it is such a powerful property, and because its absence is at the heart of what makes $\Rn$ difficult to work with when sequences tending to infinity are involved. There are numerous ways of compactifying $\Rn$, and the choice of which to study should be based on how it will be used.
Our purpose here is to construct a compactification that is naturally compatible with convex analysis, which we also hope to extend to the enlarged space. For that reason, we specifically aim for a compactification to which all linear functions can be extended continuously, since linear functions are so fundamental to convex analysis. We will see through the course of our development how this property helps to ensure that notions and properties that are essentially linear in nature tend to extend reasonably to astral space. \subsection{Motivation} \label{subsec:astral-intro-motiv} Unbounded sequences in $\Rn$ cannot converge because there is nothing ``out there'' for them to converge to. The main idea in constructing astral space is to add various points at infinity to $\Rn$ so that such sequences can have limits. Astral points thus represent such limits or ``destinations'' for sequences in $\Rn$, including those that are unbounded. Therefore, to construct the space, we need to decide which sequences in $\Rn$ should have limits so that new points can be added accordingly, and which sequences should be regarded as having the same or different limits in the new space. To motivate our approach, we consider a few simple examples. In the simplest case of $n=1$ dimension, the only reasonable limits of an unbounded sequence $\seq{x_t}$ of points in $\R$ are $+\infty$ and $-\infty$. Adding these two points to $\R$ yields the extended reals, $\Rext$, a standard compactification of $\R$. When moving to $n=2$ dimensions, we encounter many more possibilities and issues. Consider first the sequence $\seq{\xx_t}$ in $\R^2$ with elements \begin{equation} \label{eq:ray-no-offset-ex} \xx_t=\trans{[2t,\,t]}=t\vv, \mbox{~~where~} \vv=\trans{[2,1]}. \end{equation} This sequence follows a ray from the origin in the direction of~$\vv$.
In~$\R^2$, this sequence does not converge, but in our compactification, sequences of this form along rays from the origin do converge, with a particular limit that we will later denote $\limray{\vv}$. Intuitively, ``similar'' sequences that ``eventually follow'' this same ray should have the same limit; for instance, we might expect sequences like $\sqrt{t}\vv$ or $\trans{[2t,t+1/t]}$ to have the same limit (as will be the case in astral space). Consider next the sequence $\seq{\yy_t}$ in $\R^2$ with elements \begin{equation*} \label{eq:ray-with-offset-ex} \yy_t=\trans{[2t-1,\,t+1]}=t\vv+\ww, \mbox{~~where~} \ww=\trans{[-1,1]}. \end{equation*} This sequence is certainly similar to the one in \eqref{eq:ray-no-offset-ex}. Both follow along parallel halflines in the direction $\vv$; however, these halflines differ in their starting points, with the $\xx_t$'s following along a halfline that begins at the origin, while the $\yy_t$'s halfline begins at $\ww$. Does this seemingly small difference matter as $t$ gets large? In other words, do we want to regard these two sequences as having the same or different limits? To see the issues involved, let us consider how the sequences progress in various directions. In the direction of $\vv$, both sequences are clearly heading to $+\infty$; that is, $\xx_t\cdot\vv$, the projection of $\xx_t$ in the direction of $\vv$, is converging to $+\infty$, and likewise for $\yy_t$. So in this direction, the sequences' limiting behaviors are the same. We can similarly consider projections of the sequences in other directions $\uu\in\R^2$. Indeed, the direction $\vv$ is so dominant that if $\vv\cdot\uu>0$, then both $\xx_t\cdot\uu$ and $\yy_t\cdot\uu$ will converge to $+\infty$ so that, in all these directions, the sequences again appear asymptotically the same. 
But if we project along a direction $\uu$ that is perpendicular to $\vv$, say, $\uu=\trans{[1,-2]}$, the situation changes, with $\xx_t\cdot\uu\rightarrow 0$ and $\yy_t\cdot\uu\rightarrow \ww\cdot\uu = -3$. So viewed in this direction, the sequences appear rather different in terms of their limits. As a consequence, if $\seq{\xx_t}$ and $\seq{\yy_t}$ are regarded as having the same limit in our extended space, then the simple function $f(x_1,x_2)= x_1 - 2 x_2$ (that is, $f(\xx)=\xx\cdot\uu$, with $\uu$ as above) cannot be extended continuously to this space since $\lim f(\xx_t) \neq \lim f(\yy_t)$. This is because, no matter how we define the value of an extended version of $f$ at the common limit of the two sequences, at least one of the sequences $\smash{\bigSeq{f(\xx_t)}}$ or $\smash{\bigSeq{f(\yy_t)}}$ must fail to converge to that value so that the extended function will not be continuous. Therefore, and more generally, if we want linear functions to have continuous extensions in the extended space we are constructing, then we need to treat sequences as having distinct limits if they differ in their limit in any direction. In this example, this means that $\seq{\yy_t}$ should have a different limit from $\seq{\xx_t}$, as will be the case in astral space. As a last example, consider the sequence $\seq{\zz_t}$ with elements \[ \zz_t= t \vv + \sqrt{t} \ww = \trans{\bigBracks{2 t - \sqrt{t},\,t + \sqrt{t}\,}}. \] Like the last two sequences, this one is moving to infinity most rapidly in the direction of~$\vv$. But this sequence is also growing to infinity, though at a slower rate, in the direction of~$\ww$. As before, we can examine how the sequence evolves in various directions. As was the case earlier, since $\vv$ is so dominant, for $\uu\in\R^2$, if $\vv\cdot\uu>0$, then $\zz_t\cdot\uu\rightarrow+\infty$, and if $\vv\cdot\uu<0$, then $\zz_t\cdot\uu\rightarrow -\infty$. In these cases, the secondary direction $\ww$ is irrelevant (asymptotically). 
But if we look in a direction $\uu$ that is perpendicular to $\vv$, then $\vv$'s effect on the sequence vanishes, and $\ww$'s secondary effect becomes apparent. In particular, for such a vector $\uu\in\R^2$ with $\uu\perp\vv$, we have $\zz_t\cdot\uu=\sqrt{t}\,\ww\cdot\uu$, so the limit of $\zz_t\cdot\uu$ is determined by the sign of $\ww\cdot\uu$. For example, if $\uu=\trans{[1,-2]}$, as before, then $\zz_t\cdot\uu\rightarrow-\infty$ (since $\ww\cdot\uu=-3<0$), which differs from the limits of either $\xx_t\cdot\uu$ or $\yy_t\cdot\uu$. Accordingly, the limit of $\seq{\zz_t}$ should be considered distinct from that of either $\seq{\xx_t}$ or $\seq{\yy_t}$. As suggested by this example, astral space will include not only the limits of sequences along rays, but also the limits of sequences that grow to infinity in multiple directions at varying rates. \subsection{The construction} \label{sec:astral:construction} Our construction of astral space is based centrally on the asymptotic behavior of sequences when projected in various directions. The space will consist of $\Rn$ together with newly constructed points at infinity representing the limits of sequences. Moreover, our construction will ensure that two convergent sequences will have different limits in astral space if and only if their projections differ asymptotically in one or more directions. Formally, let $\seq{\xx_t}$ be a sequence in $\Rn$. We say that $\seq{\xx_t}$ \emph{converges in all directions} if for all $\uu\in\Rn$, the sequence $\xx_t\cdot\uu$ converges to a limit in $\Rext$. All of the example sequences in Section~\ref{subsec:astral-intro-motiv} converge in all directions. This is also true for any sequence~$\seq{\xx_t}$ which has a limit $\xx\in\Rn$ since in that case, $\xx_t\cdot\uu\rightarrow\xx\cdot\uu$ for all $\uu$. Let $\allseq$ denote the set of all sequences that converge in all directions.
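The directional behavior of the three example sequences can also be checked numerically. The following sketch is purely illustrative (the helper names \texttt{dot}, \texttt{x}, \texttt{y}, \texttt{z} are ours, not notation from the text); it evaluates the projections at a large value of $t$, reproducing the limits $0$, $-3$, and $-\infty$ discussed above.

```python
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# v = (2,1) is the dominant direction, w = (-1,1) the offset/secondary
# direction, and u = (1,-2) is perpendicular to v.
v, w, u = (2.0, 1.0), (-1.0, 1.0), (1.0, -2.0)

def x(t): return (2 * t, t)                                # t*v: ray from the origin
def y(t): return (2 * t - 1, t + 1)                        # t*v + w: parallel, offset ray
def z(t): return (2 * t - math.sqrt(t), t + math.sqrt(t))  # t*v + sqrt(t)*w

# In the dominant direction v, all three projections head to +infinity:
for seq in (x, y, z):
    assert dot(seq(10_000), v) > dot(seq(100), v) > 0

# Perpendicular to v, the sequences separate, so their limits must differ:
t = 10_000
print(dot(x(t), u))   # 0.0     (limit 0)
print(dot(y(t), u))   # -3.0    (equals w.u = -3 for every t)
print(dot(z(t), u))   # -300.0  (equals sqrt(t)*(w.u), heading to -infinity)
```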
In accord with our approach, we say that two sequences $\seq{\xx_t}$ and $\seq{\yy_t}$, both in $\allseq$, are \emph{all-directions equivalent} (or just \emph{equivalent}, when the context is clear) if they have the same limits in every direction, that is, if for all $\uu\in\Rn$, $\lim (\xx_t\cdot\uu) = \lim (\yy_t\cdot\uu)$. This defines an equivalence relation on $\allseq$. As discussed above, we want two sequences to have the same limit in astral space if and only if they are equivalent in exactly this sense. As such, we can use all-directions equivalence to partition $\allseq$ into equivalence classes, and use those classes to define astral space. Let $\Pi$ be the resulting collection of equivalence classes; that is, the sets in $\Pi$ are all nonempty, their union is all of $\allseq$, and two sequences are in the same set in $\Pi$ if and only if they are all-directions equivalent. Each equivalence class will correspond to a common limit of the sequences included in that class. As such, we define astral space so that every point is effectively identified with one of the equivalence classes of $\Pi$. Note also that for every point $\xx\in\Rn$, there exists one equivalence class in $\Pi$ consisting exactly of all sequences that converge to $\xx$; naturally, we will want to identify $\xx$ with this class so that $\Rn$ is included in the space. Formally, astral space is defined as follows: \begin{definition} \label{def:astral-space} \emph{Astral space} is a set denoted $\extspace$ such that the following hold: \begin{item-compact} \item There exists a bijection $\pi:\extspace\rightarrow\Pi$ identifying each element of $\extspace$ with an equivalence class in $\Pi$ (where $\Pi$ is defined above). \item $\Rn\subseteq\extspace$. \item For all $\xx\in\Rn$, $\pi(\xx)$ is the equivalence class consisting of all sequences that converge to $\xx$, establishing the natural correspondence discussed above. 
\end{item-compact} \end{definition} In the special case that $n=1$, we choose $\extspac{1}=\Rext$, with $\pi$ defined in the most natural way. This is possible because, for every $\barx\in\Rext$ (including $\pm\infty$), there is one equivalence class consisting of all sequences $\seq{x_t}$ in $\R$ that converge to $\barx$; naturally, we define $\pi(\barx)$ to be equal to this class. Furthermore, these are the only equivalence classes in $\Pi$. When $n=0$, it follows from definitions that $\extspac{0}=\R^0=\{\zerovec\}$ since the only possible sequence has every element equal to $\R^0$'s only point, $\zerovec$. We will later define a natural topology for $\extspace$. For every point $\xbar\in\extspace$, we will later see that, in this topology, every sequence $\seq{\xx_t}$ in the associated equivalence class $\pi(\xbar)$ converges to $\xbar$, so that the astral point $\xbar$ truly can be understood as the limit of sequences, as previously discussed. We will also see that $\extspace$ is indeed a compactification of $\Rn$. \subsection{The coupling function} Let $\xbar\in\extspace$, and let $\uu\in\Rn$. By construction, $\lim (\xx_t\cdot\uu)$ exists and is the same for every sequence $\seq{\xx_t}$ in $\pi(\xbar)$. We use the notation $\xbar\cdot\uu$ to denote this common limit. That is, we define \begin{equation} \label{eqn:coupling-defn} \xbar\cdot\uu = \lim (\xx_t\cdot\uu), \end{equation} where $\seq{\xx_t}$ is any sequence in $\pi(\xbar)$, noting that the same value will result regardless of which one is selected. If $\xbar=\xx$ for some $\xx\in\Rn$, we will then have $\xbar\cdot\uu=\xx\cdot\uu$ since $\xx_t\rightarrow\xx$ for every $\seq{\xx_t}$ in $\pi(\xx)$; in other words, this notation is compatible with the usual definition of $\xx\cdot\uu$ as the inner product between $\xx$ and $\uu$. Note also that $\xbar\cdot\zero=0$ for all $\xbar\in\extspace$. 
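The coupling value $\xbar\cdot\uu$ is well defined in the sense that $\lim(\xx_t\cdot\uu)$ does not depend on which representative sequence of $\pi(\xbar)$ is chosen. As a purely numerical illustration of this (the names \texttt{seq\_a}, \texttt{seq\_b}, and \texttt{limit\_sign} below are hypothetical helpers of ours), the following sketch compares two equivalent sequences along the ray through $\vv$ from Section~\ref{subsec:astral-intro-motiv}, using the sign of the projection at a very large $t$ as a stand-in for the limit in $\Rext$.

```python
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(r):
    return (r > 0) - (r < 0)

v = (2.0, 1.0)

# Two different sequences in the same equivalence class pi(xbar), where
# xbar is the astral limit along the ray through v:
def seq_a(t): return (t * v[0], t * v[1])                      # t*v
def seq_b(t): return (math.sqrt(t) * v[0], math.sqrt(t) * v[1])  # sqrt(t)*v

def limit_sign(seq, u, t=1e12):
    # Sign of seq(t).u at a very large t, standing in for the limit in
    # the extended reals: +1 for +infinity, -1 for -infinity, 0 for 0.
    return sign(dot(seq(t), u))

# The coupling depends only on the equivalence class, not on the
# representative sequence used to evaluate it:
for u in [(1.0, 0.0), (-1.0, 3.0), (1.0, -2.0)]:
    assert limit_sign(seq_a, u) == limit_sign(seq_b, u)
```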
The operation $\xbar\cdot\uu$, when viewed as a function mapping $\eRn\times\Rn$ to $\eR$, is called the \emph{(astral) coupling function}, or simply the \emph{coupling}. This coupling function is critically central to our development; indeed, we will see that all properties of astral points $\xbar\in\extspace$ can be expressed in terms of the values of $\xbar\cdot\uu$ over all $\uu\in\Rn$. As a start, these values uniquely determine $\xbar$'s identity: \begin{proposition} \label{pr:i:4} Let $\xbar$ and $\xbar'$ be in $\extspace$. Then $\xbar=\xbar'$ if and only if $\xbar\cdot\uu=\xbar'\cdot\uu$ for all $\uu\in\Rn$. \end{proposition} \begin{proof} Suppose $\xbar\cdot\uu=\xbar'\cdot\uu$ for all $\uu\in\Rn$. If $\seq{\xx_t}$ is a sequence in $\pi(\xbar)$, then $\xx_t\cdot\uu\rightarrow \xbar\cdot\uu=\xbar'\cdot\uu$, for all $\uu\in\Rn$, implying $\seq{\xx_t}$ is also in $\pi(\xbar')$. Thus, $\pi(\xbar)$ and $\pi(\xbar')$ are equivalence classes with a nonempty intersection, which implies that they must actually be equal. Therefore, $\xbar=\xbar'$ since $\pi$ is a bijection. The reverse implication is immediate. \end{proof} Despite the suggestiveness of the notation, the coupling function is not actually an inner product. Nonetheless, it does have some similar properties, as we show in the next two propositions. In the first, we show that it is partially distributive, except when adding $-\infty$ with $+\infty$ would be involved. \begin{proposition} \label{pr:i:1} Let $\xbar\in\extspace$ and $\uu,\vv\in\Rn$. Suppose $\xbar\cdot\uu$ and $\xbar\cdot\vv$ are summable. Then \[ \xbar\cdot(\uu+\vv) = \xbar\cdot\uu + \xbar\cdot\vv. \] \end{proposition} \begin{proof} Let $\xx_t\in\Rn$ be any sequence in $\pi(\xbar)$. Then \begin{align*} \xbar\cdot\uu + \xbar\cdot\vv &= \lim (\xx_t\cdot\uu) + \lim (\xx_t\cdot\vv) \\ &= \lim (\xx_t\cdot\uu+\xx_t\cdot\vv) \\ &= \lim \bigParens{\xx_t\cdot(\uu+\vv)} \\ &= \xbar\cdot(\uu+\vv). 
\end{align*} The first and last equalities are because $\seq{\xx_t}$ is in $\pi(\xbar)$, and the second equality is by continuity of addition in $\eR$ (\Cref{prop:lim:eR}\ref{i:lim:eR:sum}). \end{proof} For any point $\xbar\in\extspace$ and scalar $\lambda\in\R$, we define the scalar product $\lambda\xbar$ to be the unique point in $\extspace$ for which $(\lambda\xbar)\cdot\uu=\lambda(\xbar\cdot\uu)$ for all $\uu\in\Rn$. The next proposition proves that such a point exists. Note that when $\xbar=\xx$ is in $\Rn$, $\lambda\xbar$ is necessarily equal to the usual product $\lambda\xx$ of scalar $\lambda$ with vector $\xx$. For the case $\lambda=0$, this proposition (combined with Proposition~\ref{pr:i:4}) implies $0\xbar=\zero$ for all $\xbar\in\extspace$ (keeping in mind that $0\cdot(\pm\infty)=0$). By definition, we let $-\xbar=(-1)\xbar$. \begin{proposition} \label{pr:i:2} Let $\xbar\in\extspace$ and let $\lambda\in\R$. Then there exists a unique point in $\extspace$, henceforth denoted $\lambda\xbar$, for which \[ (\lambda\xbar)\cdot\uu = \lambda (\xbar\cdot\uu) = \xbar\cdot (\lambda\uu) \] for all $\uu\in\Rn$. \end{proposition} \begin{proof} Let $\seq{\xx_t}$ in $\Rn$ be any sequence in $\pi(\xbar)$. Then \begin{align} \xbar\cdot (\lambda\uu) &= \lim \bigParens{\xx_t\cdot (\lambda\uu)} \nonumber \\ &= \lim \bigParens{(\lambda\xx_t) \cdot \uu} \nonumber \\ &= \lim \lambda (\xx_t\cdot\uu) \nonumber \\ &= \lambda \lim (\xx_t\cdot\uu) \label{eq:i:2a} \\ &= \lambda (\xbar\cdot\uu), \nonumber \end{align} where \eqref{eq:i:2a} is by continuity of scalar multiplication (Proposition~\ref{prop:lim:eR}\ref{i:lim:eR:mul}). Thus, the sequence $\seq{\lambda \xx_t}$ is in $\allseq$ since, for every $\uu\in\Rn$, $(\lambda \xx_t) \cdot \uu$ has a limit, namely, $\xbar\cdot (\lambda\uu)=\lambda(\xbar\cdot\uu)$. As a result, the sequence $\seq{\lambda \xx_t}$ is in the equivalence class $\pi(\ybar)$ for some $\ybar\in\extspace$, which must be unique by Proposition~\ref{pr:i:4}. 
Defining $\lambda\xbar$ to be $\ybar$ proves the result. \end{proof} Points in $\Rn$ are characterized by being finite in every direction $\uu$, or equivalently, by being finite along every coordinate axis. \begin{proposition} \label{pr:i:3} Let $\xbar\in\extspace$. Then the following are equivalent: \begin{letter-compact} \item\label{i:3a} $\xbar\in\Rn$. \item\label{i:3b} $\xbar\cdot\uu\in\R$ for all $\uu\in\Rn$. \item\label{i:3c} $\xbar\cdot\ee_i\in\R$ for $i=1,\dotsc,n$. \end{letter-compact} \end{proposition} \begin{proof} Implications (\ref{i:3a})$\,\Rightarrow\,$(\ref{i:3b}) and (\ref{i:3b})$\,\Rightarrow\,$(\ref{i:3c}) are immediate. It remains to prove (\ref{i:3c})$\,\Rightarrow\,$(\ref{i:3a}). Suppose that $\xbar\cdot\ee_i\in\R$ for $i=1,\dotsc,n$. Let $x_i=\xbar\cdot\ee_i$, and $\xx=\trans{[x_1,\dotsc,x_n]}$. For any $\uu\in\Rn$, we can write $\uu=\sum_{i=1}^n u_i \ee_i$. Then by repeated application of Propositions~\ref{pr:i:1} and~\ref{pr:i:2}, \[ \xbar\cdot\uu = \sum_{i=1}^n u_i (\xbar\cdot\ee_i) = \sum_{i=1}^n u_i x_i = \xx\cdot\uu. \] Since this holds for all $\uu\in\Rn$, $\xbar$ must be equal to $\xx$, by Proposition~\ref{pr:i:4}, and therefore is in $\Rn$. \end{proof} Although projections along coordinate axes fully characterize points in $\Rn$, they are not sufficient to characterize astral points outside $\Rn$. In other words, astral space $\extspace$ is distinct from $(\Rext)^n$, the $n$-fold Cartesian product of $\Rext$ with itself. Points in either space can be regarded as the limits of possibly unbounded sequences in $\Rn$. But astral points embody far more information. To see this, suppose some sequence $\seq{\xx_t}$ in $\Rn$ converges to a point $\xhat$ in $(\Rext)^n$. Then $\xhat$, by its form, encodes the limit of $(\xx_t\cdot\ee_i)$, for each standard basis vector $\ee_i$, since this limit is exactly the $i$-th component of $\xhat$.
In comparison, if instead the sequence $\seq{\xx_t}$ is in the equivalence class of an astral point $\xbar$ in $\extspace$, then $\xbar$ encodes the limit of $(\xx_t\cdot\uu)$ for \emph{all} directions $\uu\in\Rn$, not just along the coordinate axes. For instance, if $n=2$ and $\xhat=\trans{[+\infty,+\infty]}$ in $(\Rext)^2$, then $\xx_t\cdot\ee_i\rightarrow+\infty$, for $i=1,2$. From this information, if $\uu=\ee_1-\ee_2$, for example, there is no way to deduce the limit of $\seq{\xx_t\cdot\uu}$, or even if the limit exists. On the other hand, this limit, or the limit in any other direction $\uu\in\R^2$, would be readily available from $\xbar\in\extspac{2}$. Thus, as limits of sequences, astral points $\xbar\in\extspace$ retain all of the information embodied by points $\xhat$ in $(\Rext)^n$, and usually far more. This results in astral space having a remarkably rich, powerful structure, as will be developed throughout this work. \subsection{Astral points} \label{subsec:astral-pt-form} What is the nature of points comprising astral space? The space includes all of $\Rn$, of course. All of the other ``new'' points correspond to equivalence classes of unbounded sequences $\seq{\xx_t}$ in $\Rn$ (that is, for which $\norm{\xx_t}\rightarrow+\infty$). This follows from Proposition~\ref{pr:i:3} which shows that any point $\xbar$ in $\extspace\setminus\Rn$ must be infinite in some direction $\uu$, and therefore $\xx_t\cdot\uu$ converges to $\pm\infty$ for any sequence $\seq{\xx_t}$ in $\pi(\xbar)$. Thus, points in $\Rn$ are said to be \emph{finite}, and all other points are \emph{infinite}. We will see that all points in $\extspace$ have a specific structure, as suggested by the examples in Section~\ref{subsec:astral-intro-motiv}. Every astral point $\xbar$, outside those in $\Rn$, corresponds to the equivalence class of sequences which have a particular dominant direction $\vv$ in which they grow to infinity most rapidly. 
In addition, these sequences may be growing to infinity in other directions that are secondary, tertiary,~etc. These sequences may also have a finite part in the sense of converging to a finite value in some directions. Importantly, the details of this structure are entirely determined by the point $\xbar$ itself so that every sequence that converges to $\xbar$ will have the same dominant direction, same finite part,~etc. To demonstrate this structure, we introduced, in Example~\ref{ex:poly-speed-intro}, the polynomial-speed sequence \begin{equation} \label{eq:h:6} \xx_t = t^k \vv_1 + t^{k-1} \vv_2 + \cdots + t \vv_k + \qq, \end{equation} where $\qq,\vv_1,\dotsc,\vv_k\in\Rn$. We argued that $\seq{\xx_t}$ converges in all directions and is in the equivalence class of the unique astral point $\xbar$ that satisfies, for $\uu\in\Rn$, \begin{equation} \label{eqn:pf:2} \newcommand{\ttinprod}{\mkern-.8mu\inprod\mkern-.8mu} \!\!\! \xbar\ttinprod\uu= \begin{cases} +\infty &\text{if $\vv_i\ttinprod\uu>0$ for some $i$, and $\vv_j\ttinprod\uu=0$ for $j=1,\dotsc,i-1$,} \\ -\infty &\text{if $\vv_i\ttinprod\uu<0$ for some $i$, and $\vv_j\ttinprod\uu=0$ for $j=1,\dotsc,i-1$,} \\ \qq\ttinprod\uu &\text{if $\vv_i\ttinprod\uu=0$ for $i=1,\dotsc,k$.} \end{cases} \end{equation} The same astral point $\xbar$ would be obtained if the polynomial coefficients in \eqref{eq:h:6} were replaced by any coefficient sequences going to infinity at decreasing rates and $\qq$ was replaced by any sequence of $\qq_t$'s converging to $\qq$ (see Theorem~\ref{thm:i:seq-rep}). Under natural conditions, this turns out to characterize all the sequences in the equivalence class of $\xbar$ (see Theorem~\ref{thm:seq-rep}). Moreover, as we show in Section~\ref{sec:astral-as-fcns}, every point in astral space must have exactly the form of \eqref{eqn:pf:2}, for some $\qq,\vv_1,\dotsc,\vv_k\in\Rn$.
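The three-case formula in \eqref{eqn:pf:2} amounts to a short procedure: scan $\vv_1,\dotsc,\vv_k$ in order, return $\pm\infty$ at the first nonzero projection, and fall through to the finite part $\qq$ otherwise. The sketch below is illustrative only (\texttt{astral\_coupling} and the list-plus-vector representation are our own devices, not notation from the text); it implements this rule and compares it against the polynomial-speed sequence \eqref{eq:h:6} at a large value of $t$.

```python
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def astral_coupling(vs, q, u):
    """Evaluate xbar.u via the three-case rule, for the astral point
    represented by dominant directions vs = [v1, ..., vk] and finite
    part q."""
    for v in vs:                  # scan v1, v2, ... in priority order
        p = dot(v, u)
        if p > 0:
            return math.inf       # first nonzero projection is positive
        if p < 0:
            return -math.inf      # first nonzero projection is negative
    return dot(q, u)              # all vi.u = 0: the finite part decides

# Compare against the polynomial-speed sequence x_t = t^2 v1 + t v2 + q:
v1, v2, q = (1.0, 0.0), (0.0, 1.0), (3.0, -1.0)
t = 10.0**6
x_t = (t**2 * v1[0] + t * v2[0] + q[0],
       t**2 * v1[1] + t * v2[1] + q[1])

for u in [(1.0, 0.0), (-1.0, 0.0), (0.0, -1.0), (0.0, 0.0)]:
    val = astral_coupling([v1, v2], q, u)
    if math.isinf(val):
        # an infinite coupling value predicts the sign of x_t.u at large t
        assert (dot(x_t, u) > 0) == (val > 0)
    else:
        # a finite coupling value is the actual limit of x_t.u
        assert abs(dot(x_t, u) - val) < 1e-6
```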
This specific structure is fundamental to our understanding of $\extspace$, and will be central to the foundation on which all of the later results are based. \section{Astral topology} \label{sec:astral-as-fcns} We next study a topology of astral space. We will see that astral space, in this topology, is compact, an extremely useful property. We will also see that astral space is first-countable, which will mean we can work in the space mainly using sequences, as in $\Rn$. Nonetheless, we will also see that astral space is not second-countable and not metrizable. In Section~\ref{sec:astral:construction}, astral space was constructed to consist of points representing equivalence classes of sequences. We begin this chapter by presenting an alternative view of the space in which astral points are in one-to-one correspondence with certain functions, a perspective that will allow us to derive a topology for the space, and to prove properties like compactness. \subsection{Astral points as functions} \label{subsec:astral-pts-as-fcns} Every point $\xbar\in\extspace$ defines a function $\ph{\xbar}:\Rn\rightarrow\Rext$ corresponding to the evaluation of the coupling function with the first argument fixed to $\xbar$, namely, \begin{equation} \label{eq:ph-xbar-defn} \ph{\xbar}(\uu) = \xbar\cdot\uu \end{equation} for $\uu\in\Rn$. Every such function is included in the set of all functions mapping $\Rn$ to $\Rext$, \[ \fcnspn=\Rext^{\Rn}, \] which we endow with the product topology, that is, the topology of pointwise convergence, as in Section~\ref{sec:prod-top}. Since $\Rext$ is compact (Example~\ref{ex:rext-compact}), $\fcnspn$ is also compact by Tychonoff's theorem (\Cref{pr:prod-top-props}\ref{pr:prod-top-props:b}). We consider two topological subspaces of $\fcnspn$. 
The first consists of the functions $\ph{\xx}$ defined by $\xx\in\Rn$, while the second, larger one, consists of the functions $\ph{\xbar}$ defined by $\xbar\in\eRn$: \begin{align*} \phimg&= \set{\ph{\xx}:\: \xx\in\Rn}, \\ \phimgA&= \set{\ph{\xbar}:\: \xbar\in\extspace}. \end{align*} Astral space $\extspace$ is in a natural correspondence with $\phimgA$, as given by the map $\phfcn:\extspace\to\phimgA$ where $\phfcn(\xbar)=\ph{\xbar}$ for $\xbar\in\extspace$. This map is a bijection by Proposition~\ref{pr:i:4}. Furthermore, we will soon define a topology on $\extspace$ under which $\phfcn$ is also a homeomorphism, so topological properties proved for $\phimgA$ will apply to $\extspace$ as well. In particular, we will prove below that $\phimgA$ is closed, and therefore compact (since $\fcnspn$ is compact), which will show that $\extspace$ is compact as well. \begin{example} \label{ex:fcn-classes-n=1} When $n=1$, $\fcnspn$ consists of all functions $\psi:\R\rightarrow\Rext$. The set $\phimg$ includes all functions $\ph{x}$, for $x\in\R$, where $\ph{x}(u)=xu$ for $u\in\R$, in other words, all lines that pass through the origin with arbitrary (finite) slope. The set $\phimgA$ includes all of these functions as well as $\ph{\barx}$ for $\barx\in\{-\infty,+\infty\}$. Let $\seq{x_t}$ be any sequence in $\R$ with $x_t\rightarrow+\infty$. Then from the definitions given in Section~\ref{sec:astral:construction} and \eqref{eqn:coupling-defn}, for $u\in\R$, \[ \ph{+\infty}(u) = (+\infty)\cdot u = \lim (x_t u) = \begin{cases} +\infty & \text{if $u>0$},\\ 0 & \text{if $u=0$},\\ -\infty & \text{if $u<0$.} \end{cases} \] Similarly, $\ph{-\infty}=-\ph{+\infty}$. In this way, $\phfcn$ defines a bijection from points in $\extspac{1}=\Rext$ onto the functions in $\phimgA$. \end{example} Along the way to deriving topological properties, we will also prove that functions in $\phimgA$ have specific convexity properties and a particular functional form. 
When translated back to $\extspace$, this will characterize the structural form of all points in $\extspace$ as discussed in Section~\ref{subsec:astral-pt-form}. As already mentioned, we assume a product topology on $\fcnspn$. As seen in Section~\ref{sec:prod-top}, the elements of a subbase for the product topology on a general function class are as given in \eqref{eq:fcn-cl-gen-subbase}. In the current setting, since $\Rext$ has as subbase all sets $[-\infty,b)$ and $(b,+\infty]$ for $b\in\R$ (Example~\ref{ex:topo-rext}), this means that a subbase for the product topology on $\fcnspn$ consists of all sets of the form \[ \set{\psi\in\fcnspn:\: \psi(\uu) < b} \mbox{~~or~~} \set{\psi\in\fcnspn:\: \psi(\uu) > b} \] for all $b\in\R$, $\uu\in\Rn$. Or, more succinctly, this subbase consists of all sets \[ \set{\psi\in\fcnspn:\: s\psi(\uu) < b} \] for all $s\in\set{-1,+1}$, $b\in\R$, $\uu\in\Rn$. In this topology, as is generally true for the product topology, if $\seq{\psi_t}$ is any sequence in $\fcnspn$, and $\psi\in\fcnspn$, then $\psi_t\rightarrow\psi$ if and only if $\psi_t(\uu)\rightarrow\psi(\uu)$ for all $\uu\in\Rn$ (\Cref{pr:prod-top-ptwise-conv}). The topology on the subspace $\phimgA$ has subbase elements \begin{equation} \label{eq:h:3:sub:phimgA} \set{\ph{\xbar}\in\phimgA:\: s\ph{\xbar}(\uu) < b} \end{equation} for $s\in\set{-1,+1}$, $b\in\R$, $\uu\in\Rn$. We noted that $\phfcn$ defines a bijection between $\extspace$ and $\phimgA$. To ensure that $\phfcn$ defines a homeomorphism as well, we define our topology for astral space so that a set $U\subseteq\extspace$ is open if and only if its image $\phfcn(U)$ is open in~$\phimgA$. This can be assured by defining a subbase for $\extspace$ with those elements $S\subseteq\extspace$ whose images $\phfcn(S)$ form a subbase for~$\phimgA$. 
From \eqref{eq:h:3:sub:phimgA}, such a subbase consists of elements of the form given in the next definition: \begin{definition} \label{def:astral-topology} The \emph{astral topology} on $\extspace$ is that topology generated by a subbase consisting of all sets \begin{equation} \label{eq:h:3a:sub} \set{\xbar\in\extspace:\: s (\xbar\cdot\uu) < b} \end{equation} for all $s\in\{-1,+1\}$, $\uu\in\Rn$, and $b\in\R$. \end{definition} By Proposition~\ref{pr:i:2}, we have $s(\xbar\cdot\uu)=\xbar\cdot(s\uu)$, so, when convenient, we can take $s=+1$ when working with sets as in \eqref{eq:h:3a:sub}. Furthermore, when $\uu=\zero$, the set in \eqref{eq:h:3a:sub} is either the empty set or the entire space $\extspace$. Both of these can be discarded from the subbase without changing the generated topology since $\emptyset$ and $\extspace$ will be included anyway in the topology as, respectively, the empty union and empty intersection of subbase elements. Thus, simplifying Definition~\ref{def:astral-topology}, we can say that the astral topology is equivalently generated by a subbase consisting of all sets \begin{equation} \label{eq:h:3a:sub-alt} \set{\xbar\in\extspace:\: \xbar\cdot\uu < b} \end{equation} for all $\uu\in\Rn\setminus\{\zero\}$ and $b\in\R$. Moreover, by taking finite intersections, this subbase generates a base consisting of all sets \begin{equation} \label{eq:h:3a} \set{\xbar\in\extspace:\: \xbar\cdot\uu_i < b_i \text{ for all } i=1,\dotsc,k}, \end{equation} for some finite $k\geq 0$, and some $\uu_i\in\Rn\setminus\{\zero\}$ and $b_i\in\R$, for $i=1,\dotsc,k$. When $n=1$, the astral topology for $\extspac{1}$ is the same as the usual topology for $\Rext$ since, in this case, the subbase elements given in \eqref{eq:h:3a:sub} coincide with those in Example~\ref{ex:topo-rext} for $\Rext$ (along with $\emptyset$ and $\R$). 
This definition says that astral topology is precisely the weak topology (as defined in Section~\ref{sec:prod-top}) on $\eRn$ with respect to the family of all maps $\xbar\mapsto\xbar\inprod\uu$, for all $\uu\in\Rn$. In other words, astral topology is the coarsest topology on $\extspace$ under which all such maps are continuous. Since these maps are the natural astral extensions of standard linear functions on $\Rn$, this choice of topology is then fully consistent with our intention to construct astral space in a way that allows all linear functions to be extended continuously. As we show next, the topology on $\Rn$ as a subspace of $\extspace$ (in the astral topology) is the same as the standard Euclidean topology on $\Rn$. Furthermore, any set $U\subseteq\Rn$ is open in the Euclidean topology on $\Rn$ if and only if it is open in the astral topology on $\eRn$. For the purposes of this proposition, we say that a set $U$ is open in $\extspace$ if it is open in the astral topology, and open in $\Rn$ if it is open in the Euclidean topology. \begin{proposition} \label{pr:open-sets-equiv-alt} Let $U\subseteq\Rn$. Then the following are equivalent: \begin{letter-compact} \item \label{pr:open-sets-equiv-alt:a} $U$ is open in $\Rn$. \item \label{pr:open-sets-equiv-alt:b} $U$ is open in $\extspace$. \item \label{pr:open-sets-equiv-alt:c} $U=V\cap\Rn$ for some set $V\subseteq\extspace$ that is open in $\extspace$. (That is, $U$ is open in the topology on $\Rn$ as a subspace of $\extspace$.) \end{letter-compact} Consequently, the topology on $\Rn$ as a subspace of $\extspace$ is the same as the Euclidean topology on $\Rn$. \end{proposition} \begin{proof} We assume $U\neq\emptyset$ since otherwise the claimed equivalence is trivial. \begin{proof-parts} \pfpart{(\ref{pr:open-sets-equiv-alt:a}) $\Rightarrow$ (\ref{pr:open-sets-equiv-alt:b}): } Suppose $U$ is open in $\Rn$. Let $\yy\in U$. Then there exists $\epsilon>0$ such that $\ball(\yy,\epsilon) \subseteq U$. 
Let $C$ be the following base element (in $\extspace$): \[ C = \BigBraces{ \xbar\in\extspace : |\xbar\cdot\ee_i - \yy\cdot\ee_i| < \delta \text{ for } i=1,\dotsc,n } \] where $\delta = \epsilon/\sqrt{n}$. Then $C\subseteq\Rn$ since if $\xbar\in C$, then we must have $\xbar\cdot\ee_i\in\R$, for $i=1,\dotsc,n$, implying $\xbar\in\Rn$ by Proposition~\refequiv{pr:i:3}{i:3a}{i:3c}. Further, $C\subseteq\ball(\yy,\epsilon)$ since if $\xx\in C\subseteq\Rn$ then $|(\xx-\yy)\cdot\ee_i|<\delta$, for $i=1,\dotsc,n$, implying $\norm{\xx-\yy}<\delta\sqrt{n} = \epsilon$. Thus, $\yy\in C\subseteq\ball(\yy,\epsilon)\subseteq U$, proving $U$ is open in $\extspace$ (by \Cref{pr:base-equiv-topo}). \pfpart{(\ref{pr:open-sets-equiv-alt:b}) $\Rightarrow$ (\ref{pr:open-sets-equiv-alt:c}): } Since $U=U\cap\Rn$, if $U$ satisfies (\ref{pr:open-sets-equiv-alt:b}), then it also satisfies (\ref{pr:open-sets-equiv-alt:c}). \pfpart{(\ref{pr:open-sets-equiv-alt:c}) $\Rightarrow$ (\ref{pr:open-sets-equiv-alt:a}): } Suppose $U=V\cap\Rn$ for some $V\subseteq\extspace$ that is open in $\extspace$. Let $\yy\in U$. Then $\yy$ is also in $V$, which is open in $\extspace$, so by \Cref{pr:base-equiv-topo}, there exists a base element $C$ as in \eqref{eq:h:3a} such that $\yy\in C \subseteq V$. Since each $\uu_i$ in that equation is not $\zero$, we can assume further without loss of generality that $\norm{\uu_i}=1$ (since otherwise we can divide both sides of the inequality by $\norm{\uu_i}$). Let $\epsilon\in\R$ be such that \[ 0 < \epsilon < \min \Braces{b_i - \yy\cdot\uu_i : i=1,\dotsc,k}, \] which must exist since $\yy\in C$. If $\xx\in \ball(\yy,\epsilon)$, then for $i=1,\dotsc,k$, \[ \xx\cdot\uu_i = \yy\cdot\uu_i + (\xx-\yy)\cdot\uu_i < \yy\cdot\uu_i + \epsilon < \yy\cdot\uu_i + (b_i - \yy\cdot\uu_i) = b_i. \] The first inequality is by the Cauchy--Schwarz inequality, and the second is by our choice of $\epsilon$. Thus, $\xx\in C$, so $\ball(\yy,\epsilon)\subseteq C\cap\Rn \subseteq V\cap\Rn = U$.
Therefore, $U$ is open in $\Rn$. \pfpart{Equivalent topologies:} The equivalence of~(\ref{pr:open-sets-equiv-alt:a}) and~(\ref{pr:open-sets-equiv-alt:c}) shows that these two topologies are identical. \qedhere \end{proof-parts} \end{proof} \subsection{Characterizing functional form} \label{sec:char:func} Let $\phimgcl$ denote the closure of $\phimg$ in $\fcnspn$. We will study the properties of $\phimgcl$ and of the functions comprising it. Eventually, we will prove that $\phimgcl=\phimgA$, which will imply that $\phimgA$ is closed and therefore compact (so that $\extspace$ is as well). Additionally, we will show that $\phimgA$ consists exactly of those functions $\psi:\Rn\rightarrow\Rext$ that are convex, concave, and that vanish at the origin (meaning $\psi(\zero)=0$). Indeed, for every $\xx\in\Rn$, the linear function $\ph{\xx}(\uu)=\xx\cdot\uu$ is convex, concave, and vanishes at the origin, which shows that every function in $\phimg\subseteq\phimgA$ has these properties. Nonetheless, these are not the only ones; there are other functions, all improper, that have these properties as well. Here is an example: \begin{example}[Cliff ramp] \label{ex:cliff-ramp} In $\R^2$, let \begin{align} \nonumber \psi(\uu) = \psi(u_1,u_2) &= \begin{cases} -\infty & \text{if $2u_1+u_2<0$}, \\ u_2-u_1 & \text{if $2u_1+u_2=0$}, \\ +\infty & \text{if $2u_1+u_2>0$}. \end{cases} \end{align} This function coincides with the linear function $u_2-u_1$ along the line $2u_1+u_2=0$ (the ``ramp'' portion of the function), but equals $-\infty$ on one side of that line, and $+\infty$ on the other side (the ``cliff''). It can be checked that the epigraphs of both $\psi$ and $-\psi$ are convex sets. Therefore, the function is convex, concave, and satisfies $\psi(\zero)=0$. \end{example} As we will see, the fact that the cliff ramp function $\psi$ is convex, concave, and vanishes at the origin means that it must belong to~$\phimgA$. 
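The claim in Example~\ref{ex:cliff-ramp} that the cliff ramp is convex can also be spot-checked numerically via the midpoint inequality on its effective domain (where $\psi<+\infty$, so that the extended-real averages are always well defined). The following Python sketch is ours and purely illustrative:

```python
import math
import itertools

def psi(u):
    """Cliff ramp on R^2 (Example ex:cliff-ramp): u2 - u1 on the line
    2*u1 + u2 = 0, -infinity on one side of it, +infinity on the other."""
    u1, u2 = u
    s = 2*u1 + u2
    if s < 0:
        return -math.inf
    if s > 0:
        return math.inf
    return u2 - u1

# Midpoint convexity on dom(psi) = {u : 2*u1 + u2 <= 0}, where psi < +inf;
# infinities never cancel there, so the average below never produces nan.
pts = [(u1, u2) for u1 in range(-2, 3) for u2 in range(-2, 3)
       if 2*u1 + u2 <= 0]
for u, v in itertools.product(pts, pts):
    w = ((u[0] + v[0]) / 2, (u[1] + v[1]) / 2)
    assert psi(w) <= (psi(u) + psi(v)) / 2
```

A symmetric check on $-\psi$, over the region $2u_1+u_2\ge 0$, likewise supports concavity.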
We begin our development with the following inclusion: \begin{proposition} \label{pr:h:7} Let $\xbar\in\extspace$. Then $\ph{\xbar}\in\phimgcl$. Therefore, $\phimgA\subseteq\phimgcl$. \end{proposition} \begin{proof} By construction of $\extspace$, there exists a sequence $\seq{\xx_t}$ in $\Rn$ such that, for all $\uu\in\Rn$, $\xx_t\cdot\uu\rightarrow\xbar\cdot\uu$, that is, $\ph{\xx_t}(\uu)\rightarrow\ph{\xbar}(\uu)$. By \Cref{pr:prod-top-ptwise-conv}, this implies that $\ph{\xx_t}\rightarrow\ph{\xbar}$. Since $\ph{\xx_t}\in\phimg$ for all $t$, this shows that $\ph{\xbar}\in\phimgcl$ (by \Cref{prop:first:properties}\ref{prop:first:closure}). \end{proof} The next proposition gives some properties of functions in $\phimgcl$, in particular, showing that they all vanish at the origin, and that the set is closed under negation: \begin{proposition} \label{pr:h:2} Let $\psi\in\phimgcl$. Then: \begin{letter-compact} \item \label{pr:h:2:a} $\psi(\zero)=0$. \item \label{pr:h:2:b} $-\psi\in\phimgcl$. \end{letter-compact} \end{proposition} \begin{proof}~ \begin{proof-parts} \pfpart{Part~(\ref{pr:h:2:a}):} First note that the set $C=\{\xi\in\fcnspn : \xi(\zero)=0\}$ is closed, since it is the complement of the set \[ \{\xi\in\fcnspn:\:\xi(\zero) < 0\} \cup \{\xi\in\fcnspn:\:\xi(\zero) > 0\}, \] which is a union of two subbase elements and therefore open. Since $\phx\in C$ for all $\xx\in\Rn$, we have $\phimg\subseteq C$, implying $\phimgcl\subseteq C$ since $C$ is closed. Therefore, $\psi(\zero)=0$. \pfpart{Part~(\ref{pr:h:2:b}):} For any element $S$ of our subbase for $\fcnspn$, the set $-S=\set{-\xi:\:\xi\in S}$ is also an element of the subbase. Thus, a set $U$ is open if and only if $-U$ is open, and a set $C$ is closed if and only if $-C$ is closed. Next, note that $\phimg=-\phimg$, because $-\phx=\ph{-\xx}$ for all $\xx\in\Rn$. 
Thus, $\phimg\subseteq\phimgcl$ implies $\phimg\subseteq-\phimgcl$, and hence $\phimg\subseteq\phimgcl\cap(-\phimgcl)$, which is an intersection of two closed sets and therefore closed. Since $\phimgcl$ is the smallest closed set containing $\phimg$, we therefore have \[ \phimgcl\subseteq\phimgcl\cap(-\phimgcl)\subseteq -\phimgcl. \] By taking a negative, we also have $-\phimgcl\subseteq\phimgcl$, and so $\phimgcl=-\phimgcl$, meaning that $\psi\in\phimgcl$ if and only if $-\psi\in\phimgcl$. \qedhere \end{proof-parts} \end{proof} Next, we show that all functions in $\phimgcl$ are both convex and concave. Consequently, if $\seq{\xx_t}$ is a sequence with $\ph{\xx_t}\rightarrow\psi$, then $\psi$, which must be in $\phimgcl$, must be convex and concave. \begin{example}[Cliff ramp, continued] \label{ex:cliff-ramp-cont} The cliff ramp function $\psi$ from Example~\ref{ex:cliff-ramp} can be written as \[ \psi(\uu) =\begin{cases} -\infty & \text{if $\vv\cdot\uu<0$,} \\ \qq\cdot\uu & \text{if $\vv\cdot\uu=0$,} \\ +\infty & \text{if $\vv\cdot\uu>0$,} \end{cases} \] where $\vv=\trans{[2,1]}$ and $\qq=\trans{[-1,1]}$. This is a special case of the right-hand side of \eqref{eqn:pf:2}, which resulted from the analysis of the polynomial-speed sequence in \eqref{eq:h:6}. Following that example, we obtain that for the sequence $\xx_t=t\vv+\qq$, we must have $\xx_t\cdot\uu\rightarrow\psi(\uu)$ for all $\uu\in\R^2$. Thus, $\ph{\xx_t}\to\psi$. The next theorem shows this implies that $\psi$ is convex and concave, as was previously noted in Example~\ref{ex:cliff-ramp}. \end{example} \begin{theorem} \label{thm:h:3} Let $\psi\in\phimgcl$. Then $\psi$ is both convex and concave. \end{theorem} \begin{proof} To show that $\psi$ is convex, we prove that it satisfies the condition in Proposition~\ref{pr:stand-cvx-fcn-char}(\ref{pr:stand-cvx-fcn-char:b}). Let $\uu,\vv\in\dom\psi$ and let $\lambda\in [0,1]$. 
We aim to show that \begin{equation} \label{eq:thm:h:3:1} \psi(\ww)\leq (1-\lambda)\psi(\uu) + \lambda\psi(\vv) \end{equation} where $\ww = (1-\lambda)\uu + \lambda\vv$. Let $\alpha,\beta\in\R$ be such that $\psi(\uu)<\alpha$ and $\psi(\vv)<\beta$ (which exist since $\uu,\vv\in\dom\psi$). We claim that $\psi(\ww)\leq \gamma$ where $\gamma = (1-\lambda)\alpha + \lambda\beta$. Suppose to the contrary that $\psi(\ww)>\gamma$. Let \[ U = \Braces{ \xi\in\fcnspn : \xi(\ww)>\gamma, \xi(\uu)<\alpha, \xi(\vv)<\beta }. \] Then $U$ is open (being a finite intersection of subbase elements) and $\psi\in U$, so $U$ is a neighborhood of $\psi$. Therefore, since $\psi\in\phimgcl$, there exists a function $\phx\in U\cap\phimg$, for some $\xx\in\Rn$ (by \Cref{pr:closure:intersect}\ref{pr:closure:intersect:a}), implying $\phx(\ww)>\gamma$, $\phx(\uu)<\alpha$ and $\phx(\vv)<\beta$. By definition of $\phx$, it follows that \[ \gamma < \xx\cdot\ww = (1-\lambda) \xx\cdot\uu + \lambda \xx\cdot\vv \leq (1-\lambda) \alpha + \lambda \beta = \gamma, \] with the first equality following from $\ww$'s definition. This is clearly a contradiction. Thus, \[ \psi(\ww)\leq (1-\lambda) \alpha + \lambda \beta. \] Since this holds for all $\alpha>\psi(\uu)$ and $\beta>\psi(\vv)$, this proves \eqref{eq:thm:h:3:1}, and therefore that $\psi$ is convex by Proposition~\refequiv{pr:stand-cvx-fcn-char}{pr:stand-cvx-fcn-char:a}{pr:stand-cvx-fcn-char:b}. Since this holds for all of $\phimgcl$, this also means that $-\psi$ is convex by Proposition~\ref{pr:h:2}(\ref{pr:h:2:b}). Therefore, $\psi$ is concave as well. \end{proof} We come next to a central theorem showing that any function $\psi:\Rn\rightarrow\Rext$ that is convex, concave and that vanishes at the origin must have a particular structural form, which we express using the $\oms$ notation and leftward sum operation that were introduced in Section~\ref{sec:prelim-work-with-infty}. 
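Concretely, the $\oms$ notation and leftward sum behave as in the following minimal Python sketch, which assumes the definitions from Section~\ref{sec:prelim-work-with-infty} ($\oms x$ is $+\infty$, $0$, or $-\infty$ according to the sign of $x$, and $a\plusl b$ equals $a$ whenever $a$ is infinite, and $a+b$ otherwise); the helper names are ours, and the last function verifies that this notation recovers the three cases of the cliff ramp from Example~\ref{ex:cliff-ramp}.

```python
import math

def omf(x):
    """omega * x for extended-real x: +infinity, 0, or -infinity
    according to the sign of x (assumed definition of the omega notation)."""
    return math.inf if x > 0 else (-math.inf if x < 0 else 0.0)

def plusl(a, b):
    """Leftward sum: a if a is infinite, else a + b (assumed definition;
    note the asymmetry: plusl(inf, -inf) == inf)."""
    return a if math.isinf(a) else a + b

def cliff_ramp(u1, u2):
    """The cliff ramp of Example ex:cliff-ramp in structural form:
    omega-times (v . u), leftward-added to q . u, with v = (2, 1) and
    q = (-1, 1); the three cases of the example are recovered."""
    return plusl(omf(2*u1 + u2), u2 - u1)
```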
By Theorem~\ref{thm:h:3}, this will apply to every function in $\phimgcl$, and so also to every function in $\phimgA$ (by Proposition~\ref{pr:h:7}). Since $\phimgA$ consists of all functions $\ph{\xbar}$, we obtain that every function $\ph{\xbar}(\uu)=\xbar\inprod\uu$ must have the form given in Theorem~\ref{thm:h:4}, which is exactly the form we derived in Example~\ref{ex:poly-speed-intro} for a specific sequence, though expressed more succinctly in Theorem~\ref{thm:h:4}. For instance, the cliff ramp function $\psi$ from Examples~\ref{ex:cliff-ramp} and~\ref{ex:cliff-ramp-cont}, which we have already seen is convex, concave and vanishes at the origin, can be written as \[ \psi(\uu) = \omsf{(\vv\inprod\uu)} \plusl \qq\inprod\uu = \omsf{(2u_1+u_2)} \plusl (u_2 - u_1). \] \begin{theorem} \label{thm:h:4} Let $\psi:\Rn\rightarrow\Rext$ be convex and concave with $\psi(\zero)=0$. Then, for some $k\ge 0$, there exist orthonormal vectors $\vv_1,\dotsc,\vv_k\in\Rn$, and $\qq\in\Rn$ that is orthogonal to them such that for all $\uu\in\Rn$, \begin{equation} \label{thm:h:4:eq1} \psi(\uu) = \omsf{(\vv_1\cdot\uu)} \plusl \dotsb \plusl \omsf{(\vv_k\cdot\uu)} \plusl \qq\cdot\uu. \end{equation} \end{theorem} \begin{proof} We prove the following \emph{main claim} by induction on $d=0,\dotsc,n$: \begin{mainclaimp*} For every linear subspace $L$ of $\Rn$ of dimension $d$, there exists a function $\xi\in\fcnspn$ of the form given on the right-hand side of \eqref{thm:h:4:eq1}, where $\vv_1,\dotsc,\vv_k\in L$ are orthonormal and $\qq\in L$ is orthogonal to them, such that $\psi(\uu)=\xi(\uu)$ for all $\uu\in L$. \end{mainclaimp*} The theorem will follow by letting $L=\Rn$. In the base case that $d=0$, we must have $L=\{\zero\}$ and so we can simply define $\xi$ by choosing $k=0$ and $\qq=\zero$ so that $\xi(\zero)=0=\psi(\zero)$. For the inductive step, let $d>0$, and assume the main claim holds for all subspaces of dimension $d-1$.
Let $L\subseteq\Rn$ be any linear subspace of dimension $d$. Since $\psi$ is both convex and concave, it must have a convex epigraph as well as a convex \emph{hypograph}, $\hypo\psi=\set{\rpair{\uu}{z} \in \Rn\times\R : z\leq \psi(\uu)}$, which is the set of points on and below the graph of a function. Let us define the sets corresponding to the epigraph and hypograph of $\psi$ on the set $L$: \begin{align*} \Ep &= \{\rpair{\uu}{z} \in L\times\R : z\geq \psi(\uu)\}, \\ \Em &= \{\rpair{\uu}{z} \in L\times\R : z\leq \psi(\uu)\}. \end{align*} These sets are intersections of the epigraph and hypograph of $\psi$ with the convex set $L\times\R$, and therefore themselves convex. They are also nonempty since they contain $\rpair{\zero}{0}$. The main idea of the proof is to separate these sets with a hyperplane, which, together with our inductive hypothesis, will allow us to derive a representation of $\psi$. To start, we claim that $\Ep$ and $\Em$ have disjoint relative interiors, that is, that $(\relint{\Ep})\cap(\relint{\Em})=\emptyset$. Note that $\Ep$ is itself an epigraph of a convex function, equal to $\psi$ on $L$ and $+\infty$ outside~$L$, and so, by \Cref{roc:lem7.3}, any point $\rpair{\uu}{z}\in\relint{\Ep}$ must satisfy $z>\psi(\uu)$, and so cannot be in $\Em$, nor its relative interior. Since $\Ep$ and $\Em$ are nonempty convex subsets of $\Rnp$ with disjoint relative interiors, by \Cref{roc:thm11.3}, there exists a hyperplane that properly separates them, meaning that $\Ep$ is included in the closed halfspace $\Hp$ on one side of the hyperplane, $\Em$ is included in the opposite closed halfspace $\Hm$, and the sets $\Ep$ and $\Em$ are not both entirely included in the separating hyperplane itself. 
Thus, there exist $\rpair{\vv}{b}\in\R^{n+1}$ and $c\in\R$, with $\rpair{\vv}{b}\ne\rpair{\zero}{0}$, such that \begin{gather} \label{eq:sep:1} \Ep \subseteq\Hp = \set{ \rpair{\uu}{z}\in \Rnp:\: \vv\cdot\uu + bz\le c }, \\ \label{eq:sep:2} \Em \subseteq\Hm = \set{ \rpair{\uu}{z}\in \Rnp:\: \vv\cdot\uu + bz\ge c }, \\ \label{eq:sep:3} \Ep\cup\Em\not\subseteq\Hp\cap\Hm. \end{gather} We assume, without loss of generality, that $\vv\in L$. Otherwise, we can replace $\vv$ by $\vv^\prime=\PP\vv$ where $\PP$ is the projection matrix onto the subspace $L$. To see this, let $\Hp'$ and $\Hm'$ be defined like $\Hp$ and $\Hm$, but with $\vv$ replaced by $\vv'$. For all $\uu\in L$, we have \[ \vv'\!\inprod\uu=\trans{\vv}\PP\uu=\trans{\vv}\uu=\vv\inprod\uu \] (using \Cref{pr:proj-mat-props}\ref{pr:proj-mat-props:a},\ref{pr:proj-mat-props:d}). Therefore, if $\rpair{\uu}{z}\in\Ep$, then $\vv'\!\inprod\uu+bz=\vv\inprod\uu+bz\le c$, so $\Ep\subseteq\Hp'$. Symmetrically, $\Em\subseteq\Hm'$. Also, since $\Ep$ and $\Em$ are separated properly, there must exist $\rpair{\uu}{z}\in\Ep\cup\Em$ such that $\rpair{\uu}{z}\not\in\Hp\cap\Hm$, that is, $\vv\inprod\uu+bz\ne c$. For this same $\rpair{\uu}{z}$ we also have $\vv'\!\inprod\uu+bz=\vv\inprod\uu+bz\ne c$ and thus $\rpair{\uu}{z}\not\in\Hp'\cap\Hm'$. Hence, Eqs.~(\ref{eq:sep:1}--\ref{eq:sep:3}) continue to hold when $\vv$ is replaced by $\vv'$. Also, we must have $c=0$ in Eqs.~(\ref{eq:sep:1}) and~(\ref{eq:sep:2}) since $\rpair{\zero}{0}\in\Ep\cap\Em\subseteq \Hp\cap\Hm$. Furthermore, $b\le c=0$ since $\rpair{\zero}{1}\in\Ep\subseteq\Hp$. The remainder of the proof considers separately the cases when $b<0$, corresponding to a non-vertical separating hyperplane, and $b=0$, corresponding to a vertical separating hyperplane. 
\begin{proof-parts} \pfpart{Case $b<0$:} In this case, the inequalities defining the halfspaces $\Hp$ and $\Hm$ can be rearranged as \begin{align*} \Hp &= \set{ \rpair{\uu}{z}\in \Rnp:\: z \ge -(\vv/b)\cdot\uu} = \set{ \rpair{\uu}{z}:\: z \ge \xi(\uu)}, \\ \Hm &= \set{ \rpair{\uu}{z}\in \Rnp:\: z \le -(\vv/b)\cdot\uu} = \set{ \rpair{\uu}{z}:\: z \le \xi(\uu)}, \end{align*} where $\xi$ is the linear function $\xi(\uu)=-(\vv/b)\cdot\uu$. Thus, $\Hp$ is the epigraph and $\Hm$ is the hypograph of $\xi$. The sets $\Ep$ and $\Em$ are also an epigraph and a hypograph of a function, namely, $\psi$ restricted to $L$. We argue that inclusions $\Ep\subseteq\Hp$ and $\Em\subseteq\Hm$ imply that $\psi(\uu)\ge\xi(\uu)$ and $\psi(\uu)\le\xi(\uu)$ for $\uu\in L$. For $\uu\in L$, we first argue that $\psi(\uu)\geq\xi(\uu)$. This is immediate if $\psi(\uu)=+\infty$. Otherwise, for all $z\in\R$, if $z\geq \psi(\uu)$ then $\rpair{\uu}{z}\in\Ep\subseteq\Hp$, so $z\geq \xi(\uu)$. Since $z\geq\xi(\uu)$ for all $z\geq \psi(\uu)$, it follows that $\psi(\uu)\geq \xi(\uu)$. By a symmetric argument, using $\Em\subseteq\Hm$, we obtain that $\psi(\uu)\leq \xi(\uu)$ for $\uu\in L$. Therefore, $\psi(\uu)=\xi(\uu)$ for all $\uu\in L$. Since $\xi(\uu)=\qq\inprod\uu$ for $\qq=-(\vv/b)\in L$, the function $\xi$ satisfies the conditions of the main claim with $k=0$. \pfpart{Case $b=0$:} In this case, we have $\vv\ne\zero$. The halfspaces $\Hp$ and $\Hm$ can be written as \begin{align*} \Hp&= \set{ \rpair{\uu}{z}\in \Rnp:\: \vv\cdot\uu\le 0 }, \\ \Hm&= \set{ \rpair{\uu}{z}\in \Rnp:\: \vv\cdot\uu\ge 0 }. \end{align*} We assume, without loss of generality, that $\norm{\vv}=1$; otherwise we can replace $\vv$ by $\vv/\norm{\vv}$. We examine the function $\psi(\uu)$ for $\uu\in L$, separately for when $\vv\inprod\uu>0$, $\vv\inprod\uu<0$, and $\vv\inprod\uu=0$. First, suppose $\uu\in L$ with $\vv\inprod\uu>0$. 
Then we must have $\psi(\uu)=+\infty$, because if $\psi(\uu)<+\infty$, then there exists $\rpair{\uu}{z}\in\Ep\subseteq\Hp$, contradicting $\vv\inprod\uu>0$. By a symmetric argument, if $\uu\in L$ with $\vv\inprod\uu<0$ then $\psi(\uu)=-\infty$. It remains to consider the case $\uu\in L$ with $\vv\inprod\uu=0$, that is, $\uu\in L\cap M$, where $M=\set{\uu\in \Rn : \vv\cdot\uu=0}$. The separating hyperplane can be expressed in terms of~$M$ as \[ \Hp\cap\Hm=\{\rpair{\uu}{z}\in \Rnp : \vv\cdot\uu=0\} = M\times\R. \] Because $\Ep$ and $\Em$ are \emph{properly} separated, they are not both entirely contained in this hyperplane; that is, \[ L\times\R = \Ep\cup\Em \not\subseteq M\times\R, \] so $L\not\subseteq M$. We next apply our inductive hypothesis to the linear subspace $L' = L\cap M$. Since $L\not\subseteq M$, this space has dimension $d-1$. Thus, there exists a function $\xi':\Rn\to\eR$ of the form \[ \xi'(\uu) = \omsf{(\vv_1\cdot\uu)} \plusl \cdots \plusl \omsf{(\vv_k\cdot\uu)} \plusl \qq\cdot\uu \] for some orthonormal $\vv_1,\dotsc,\vv_k\in L'$ and $\qq\in L'$ that is orthogonal to them, and such that $\psi(\uu)=\xi'(\uu)$ for all $\uu\in L'$. Let us define $\xi:\Rn\rightarrow\Rext$ by \begin{align*} \xi(\uu) &= \omsf{(\vv\cdot\uu)} \plusl \xi'(\uu) \\ &= \omsf{(\vv\cdot\uu)} \plusl \omsf{(\vv_1\cdot\uu)} \plusl \cdots \plusl \omsf{(\vv_k\cdot\uu)} \plusl \qq\cdot\uu \end{align*} for $\uu\in\Rn$. Then for $\uu\in L$, \[ \xi(\uu) = \begin{cases} +\infty & \text{if $\vv\cdot\uu>0$,}\\ \xi'(\uu) & \text{if $\vv\cdot\uu=0$,}\\ -\infty & \text{if $\vv\cdot\uu<0$.} \end{cases} \] As we argued above, if $\vv\cdot\uu>0$, then $\psi(\uu)=+\infty=\xi(\uu)$, and if $\vv\cdot\uu<0$, then $\psi(\uu)=-\infty=\xi(\uu)$. Finally, if $\vv\cdot\uu=0$, then $\uu\in L'$ so $\psi(\uu)=\xi'(\uu)=\xi(\uu)$ by inductive hypothesis. Also by inductive hypothesis, vectors $\vv_1,\dotsc,\vv_k$ are orthonormal and in $L'\subseteq L$, and $\qq$ is orthogonal to them and in $L'\subseteq L$.
The vector $\vv$ is of unit length, in~$L$, and orthogonal to $L'$, because $L'\subseteq M$. Thus, $\xi$ satisfies the main claim. \qedhere \end{proof-parts} \end{proof} The form of the function $\psi$ in \Cref{thm:h:4} is the same as the one that arose in the analysis of the polynomial-speed sequence in Example~\ref{ex:poly-speed-intro}: \begin{lemma} \label{lemma:chain:psi} For any vectors $\vv_1,\dotsc,\vv_k,\qq\in\Rn$ and any $\uu\in\Rn$, let \begin{align*} \psi(\uu) &= \omsf{(\vv_1\cdot\uu)} \plusl \cdots \plusl \omsf{(\vv_k\cdot\uu)}\plusl \qq\cdot\uu, \\ \xi(\uu) &= \newcommand{\ttinprod}{\mkern-.8mu\inprod\mkern-.8mu} \begin{cases} +\infty &\text{if $\vv_i\ttinprod\uu>0$ for some $i$, and $\vv_j\ttinprod\uu=0$ for $j=1,\dotsc,i-1$,} \\ -\infty &\text{if $\vv_i\ttinprod\uu<0$ for some $i$, and $\vv_j\ttinprod\uu=0$ for $j=1,\dotsc,i-1$,} \\ \qq\ttinprod\uu &\text{if $\vv_i\ttinprod\uu=0$ for $i=1,\dotsc,k$.} \end{cases} \end{align*} Then $\psi(\uu)=\xi(\uu)$ for all $\uu\in\Rn$. \end{lemma} \begin{proof} The proof follows by verifying that $\xi(\uu)=\psi(\uu)$ in each of the cases in the definition of $\xi(\uu)$. \end{proof} Pulling these results together, we can now conclude that the sets $\phimgA$ and $\phimgcl$ are actually identical, each consisting exactly of all of the functions on $\Rn$ that are convex, concave and that vanish at the origin, and furthermore, that these are exactly the functions of the form given in \Cref{thm:h:4}. \begin{theorem} \label{thm:h:5} Let $\psi\in\fcnspn$, that is, $\psi:\Rn\rightarrow\Rext$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:h:5a0} $\psi\in\phimgA$. That is, for some $\xbar\in\extspace$, $\psi(\uu)=\ph{\xbar}(\uu)=\xbar\cdot\uu$ for all $\uu\in\Rn$. \item \label{thm:h:5a} $\psi\in\phimgcl$. \item \label{thm:h:5b} $\psi$ is convex, concave and $\psi(\zero)=0$. 
\item \label{thm:h:5c} There exist $\qq,\vv_1,\dotsc,\vv_k\in\Rn$, for some $k\geq 0$, such that for all $\uu\in\Rn$, \[ \psi(\uu) = \omsf{(\vv_1\cdot\uu)} \plusl \cdots \plusl \omsf{(\vv_k\cdot\uu)}\plusl \qq\cdot\uu. \] \end{letter-compact} Furthermore, the same equivalence holds if in part~(\ref{thm:h:5c}) we additionally require that the vectors $\vv_1,\dotsc,\vv_k$ are orthonormal and that $\qq$ is orthogonal to them. \end{theorem} \begin{proof} Implication (\ref{thm:h:5a0}) $\Rightarrow$ (\ref{thm:h:5a}) follows by \Cref{pr:h:7}. Implication (\ref{thm:h:5a}) $\Rightarrow$ (\ref{thm:h:5b}) follows from \Cref{pr:h:2} and \Cref{thm:h:3}. Implication (\ref{thm:h:5b}) $\Rightarrow$ (\ref{thm:h:5c}) follows by Theorem~\ref{thm:h:4}, including with the more stringent requirement on the form of $\qq$ and the $\vv_i$'s. It remains then only to show that (\ref{thm:h:5c}) $\Rightarrow$ (\ref{thm:h:5a0}). Let $\psi$ have the form given in part~(\ref{thm:h:5c}), and let $\seq{\xx_t}$ be the polynomial-speed sequence from Example~\ref{ex:poly-speed-intro}. Then, as argued in Example~\ref{ex:poly-speed-intro}, there exists an astral point $\xbar$ such that $\xx_t\cdot\uu\rightarrow \xbar\inprod\uu$ for all $\uu\in\Rn$. Moreover, the expression for $\xbar\inprod\uu$ in \eqref{eqn:pf:2} coincides with the expression for $\xi(\uu)$ in \Cref{lemma:chain:psi}, and therefore $\xbar\inprod\uu=\xi(\uu)=\psi(\uu)$ for all $\uu\in\Rn$. That is, $\ph{\xbar}=\psi$, proving $\psi\in\phimgA$. \end{proof} \citet{waggoner21} defines \emph{linear extended functions} as functions $\psi:\Rn\rightarrow\Rext$ that satisfy $\psi(\lambda\uu)=\lambda \psi(\uu)$ for all $\uu\in\Rn$, $\lambda\in\R$, and also $\psi(\uu+\vv)=\psi(\uu)+\psi(\vv)$ for all $\uu,\vv\in\Rn$ such that $\psi(\uu)$ and $\psi(\vv)$ are summable.
\citet[Proposition~2.5]{waggoner21} shows essentially that a function $\psi:\Rn\rightarrow\Rext$ is linear extended if and only if it has the form given in part~(\ref{thm:h:5c}) of Theorem~\ref{thm:h:5} (though stated somewhat differently). As a result, the set $\phimgA$ of all functions $\ph{\xbar}$, for $\xbar\in\extspace$, comprises exactly all linear extended functions. Indeed, Propositions~\ref{pr:i:1} and~\ref{pr:i:2} show that every function $\ph{\xbar}$ has the two properties defining such functions. \subsection{Astral space} \label{sec:astral:space:summary} As already discussed, we can translate results about $\phimg$ and $\phimgA$ back to $\Rn$ and $\extspace$ via the homeomorphism $\phfcn$. We summarize these topological implications in the following theorem. Most importantly, we have shown that $\extspace$ is a compactification of $\Rn$: \begin{theorem} \label{thm:i:1} ~ \begin{letter-compact} \item \label{thm:i:1a} $\extspace$ is compact and Hausdorff. \item \label{thm:i:1aa} $\extspace$ is normal and regular. \item \label{thm:i:1b} $\Rn$ is dense in $\extspace$. \item \label{thm:i:1-compactification} $\extspace$ is a compactification of $\Rn$. \end{letter-compact} Furthermore, let $\seq{\xbar_t}$ be a sequence in $\extspace$ and let $\xbar\in\extspace$. Then: \begin{letter-compact}[resume] \item \label{thm:i:1c} $\xbar_t\rightarrow\xbar$ if and only if for all $\uu\in\Rn$, $\xbar_t\cdot\uu\rightarrow\xbar\cdot\uu$. \item \label{thm:i:1e} $\seq{\xbar_t}$ converges in $\extspace$ if and only if for all $\uu\in\Rn$, $\seq{\xbar_t\cdot\uu}$ converges in $\eR$. \item \label{thm:i:1d} There exists a sequence $\seq{\xx_t}$ in $\Rn$ with $\xx_t\rightarrow\xbar$. \end{letter-compact} \end{theorem} \begin{proof}~ \begin{proof-parts} \pfpart{Part~(\ref{thm:i:1a}):} As earlier noted, the product space $\fcnspn$ is compact by Tychonoff's theorem (\Cref{pr:prod-top-props}\ref{pr:prod-top-props:b}), since $\Rext$ is compact (Example~\ref{ex:rext-compact}). 
By Theorem~\refequiv{thm:h:5}{thm:h:5a0}{thm:h:5a}, $\phimgA$ equals $\phimgcl$ and is therefore compact, being a closed subset of the compact space $\fcnspn$ (\Cref{prop:compact}\ref{prop:compact:closed-subset}). Since $\Rext$ is Hausdorff, $\fcnspn$ and its subspace $\phimgA$ are also Hausdorff by Propositions~\ref{pr:prod-top-props}(\ref{pr:prod-top-props:a}) and~\ref{prop:subspace}(\ref{i:subspace:Hausdorff}). Therefore, $\extspace$, which is homeomorphic with $\phimgA$, is also compact and Hausdorff. \pfpart{Part~(\ref{thm:i:1aa}):} Since $\extspace$ is compact and Hausdorff, by \Cref{prop:compact}(\ref{prop:compact:subset-of-Hausdorff}) it is also normal and regular. \pfpart{Part~(\ref{thm:i:1b}):} That $\phimgA=\phimgcl$ means exactly that $\phimg$ is dense in $\phimgA$. So, by our homeomorphism, $\Rn$ is dense in $\extspace$. \pfpart{Part~(\ref{thm:i:1-compactification}):} This follows from parts~(\ref{thm:i:1a}) and~(\ref{thm:i:1b}), and from Proposition~\refequiv{pr:open-sets-equiv-alt}{pr:open-sets-equiv-alt:a}{pr:open-sets-equiv-alt:c} which shows that $\Rn$, in the Euclidean topology, is a subspace of $\extspace$. \pfpart{Part~(\ref{thm:i:1c}):} By our homeomorphism, ${\xbar_t}\rightarrow{\xbar}$ if and only if $\ph{\xbar_t}\rightarrow\ph{\xbar}$, which, because we are using product topology in $\fcnspn$, in turn holds if and only if $\xbar_t\cdot\uu=\ph{\xbar_t}(\uu)\rightarrow\ph{\xbar}(\uu)=\xbar\cdot\uu$ for all $\uu\in\Rn$ (by \Cref{pr:prod-top-ptwise-conv}). \pfpart{Part~(\ref{thm:i:1e}):} If ${\xbar_t}\rightarrow{\xbar}$ then part~(\ref{thm:i:1c}) implies that $\seq{\xbar_t\cdot\uu}$ converges in $\eR$. For the converse, let $\psi:\Rn\rightarrow\Rext$ be defined by $\psi(\uu)=\lim (\xbar_t\cdot\uu)$ for $\uu\in\Rn$. Then $\ph{\xbar_t}(\uu)\rightarrow\psi(\uu)$ for all $\uu\in\Rn$, so $\ph{\xbar_t}\rightarrow\psi$.
Since $\ph{\xbar_t}\in\phimgA$, for all $t$, and since $\phimgA$ is closed, $\psi$ must also be in $\phimgA$ (\Cref{prop:first:properties}\ref{prop:first:closure}). Thus, $\psi=\ph{\xbar}$ for some $\xbar\in\extspace$, so $\ph{\xbar_t}\rightarrow\ph{\xbar}$, implying, by our homeomorphism, that ${\xbar_t}\rightarrow{\xbar}$. \pfpart{Part~(\ref{thm:i:1d}):} Let $\seq{\xx_t}$ be any sequence in $\pi(\xbar)$ (where $\pi$ is as given in Definition~\ref{def:astral-space}). Then for all $\uu\in\Rn$, $\xx_t\cdot\uu\rightarrow\xbar\cdot\uu$ by definition of the coupling function (Eq.~\ref{eqn:coupling-defn}), so $\xx_t\rightarrow\xbar$ by part~(\ref{thm:i:1c}). \qedhere \end{proof-parts} \end{proof} As an immediate corollary, we obtain the following property of continuous maps on astral space: \begin{corollary} \label{cor:cont:closed} Let $F:\eRn\to\eRm$ be a continuous map and let $A$ be a closed subset of $\eRn$. Then $F(A)$ is also closed. \end{corollary} \begin{proof} As a closed subset of a compact space, $A$ is compact (\Cref{prop:compact}\ref{prop:compact:closed-subset}), and so its image under $F$ is also compact (\Cref{prop:compact}\ref{prop:compact:cont-compact}), and therefore closed (\Cref{prop:compact}\ref{prop:compact:closed}). \end{proof} In Sections~\ref{sec:intro:astral} and~\ref{subsec:astral-intro-motiv}, we encountered an important example of an astral point not in $\Rn$: For $\vv\in\Rn$, the point $\limray{\vv}$, which is the product of the scalar $\oms=+\infty$ with the vector $\vv$, is called an \emph{astron}, and is defined to be that point in $\extspace$ that is the limit of the sequence $\seq{t\vv}$. More generally, for $\xbar\in\extspace$, we define the product $\limray{\xbar}$ analogously, as the limit of the sequence $\seq{t\xbar}$. (Note that a point $\limray{\xbar}$ is \emph{not} called an astron unless it is equal to $\limray{\vv}$ for some $\vv\in\Rn$.)
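Since an astron $\limray{\vv}$ is defined as the limit of the sequence $\seq{t\vv}$, its coupling with any $\uu\in\Rn$ can be read off from the defining sequence: $(t\vv)\cdot\uu = t(\vv\cdot\uu)$ tends to $+\infty$, stays at $0$, or tends to $-\infty$ according to the sign of $\vv\cdot\uu$, a fact made precise by the next proposition. A small numerical sketch (Python; illustrative only, and the function name is ours):

```python
import math

def astron_coupling(v, u):
    """Coupling of the astron (omega v) with u, computed from the defining
    limit of (t v) . u = t * (v . u) as t -> +infinity: the sign of the
    single number v . u determines the limit."""
    dot = sum(vi * ui for vi, ui in zip(v, u))
    if dot > 0:
        return math.inf
    if dot < 0:
        return -math.inf
    return 0.0

# For example, with v = (2, 0, -1) the coupling depends on u only through
# the one-dimensional quantity v . u = 2*u1 - u3.
```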
The next simple proposition proves the existence and general form of such points, including all astrons. As will be seen shortly, astrons act as fundamental building blocks which, together with real vectors, generate all of astral space using a generalized form of leftward addition. \begin{proposition} \label{pr:astrons-exist} Let $\xbar\in\extspace$. Then the sequence $\seq{t\xbar}$ has a limit in $\extspace$, henceforth denoted $\limray{\xbar}$. Furthermore, for all $\uu\in\Rn$, \[ (\limray{\xbar})\cdot\uu = \omsf{(\xbar\cdot\uu)} = \begin{cases} +\infty & \text{if $\xbar\cdot\uu>0$,} \\ 0 & \text{if $\xbar\cdot\uu=0$,} \\ -\infty & \text{if $\xbar\cdot\uu<0$.} \end{cases} \] \end{proposition} \begin{proof} For $\uu\in\Rn$, \[ (t\xbar)\cdot\uu = t(\xbar\cdot\uu) \rightarrow \omsf{(\xbar\cdot\uu)} \] (with the equality following from Proposition~\ref{pr:i:2}). Since this limit exists for each $\uu\in\Rn$, by Theorem~\ref{thm:i:1}(\ref{thm:i:1e}), the sequence $\seq{t\xbar}$ must have a limit in $\extspace$, namely, $\limray{\xbar}$. This also proves $(\limray{\xbar})\cdot\uu$ has the form given in the proposition. \end{proof} When multiplying a scalar $\lambda\in\R$ by a real vector $\xx=\trans{[x_1,\ldots,x_n]}$, the product $\lambda\xx$ can of course be computed simply by multiplying each component $x_i$ by $\lambda$ to obtain $\trans{[\lambda x_1,\ldots,\lambda x_n]}$. The same is \emph{not} true when multiplying by $\oms$. Indeed, multiplying each component $x_i$ by $\oms$ would yield a vector in $(\Rext)^n$, not an astral point in $\extspace$. For instance, if we multiply each component of $\xx=\trans{[2,0,-1]}$ by $\oms$, we obtain the point $\trans{[+\infty,0,-\infty]}$ in $(\Rext)^3$, which is very different from the astron $\limray{\xx}$ in $\extspac{3}$. In general, in $\extspace$, the astrons $\limray{\vv}$ are all distinct from one another for $\vv\in\Rn$ with $\norm{\vv}=1$. 
Together with the origin, $\limray{\zero}=\zero$, these comprise all of the astrons in $\extspace$. For example, $\extspac{1}=\Rext$ has three astrons: $\limray{(1)}=\oms=+\infty$, $\limray{(-1)}=-\oms=-\infty$, and $\limray{(0)}=0$. Although used less often, for completeness, we also define scalar multiplication by $-\oms$ as $(-\oms)\xbar=\limray{(-\xbar)}$ for $\xbar\in\extspace$. The next proposition summarizes properties of scalar multiplication: \begin{proposition} \label{pr:scalar-prod-props} Let $\xbar\in\extspace$, and let $\alpha,\beta\in\Rext$. Then the following hold: \begin{letter-compact} \item \label{pr:scalar-prod-props:a} $(\alpha\xbar)\cdot\uu = \alpha(\xbar\cdot\uu)$ for all $\uu\in\Rn$. \item \label{pr:scalar-prod-props:b} $\alpha(\beta\xbar)=(\alpha \beta) \xbar$. \item \label{pr:scalar-prod-props:c} $0 \xbar = \zero$. \item \label{pr:scalar-prod-props:d} $1 \xbar = \xbar$. \item \label{pr:scalar-prod-props:e} $\lambda_t \xbar_t \rightarrow \lambda \xbar$ for any sequence $\seq{\xbar_t}$ in $\extspace$ with $\xbar_t\rightarrow\xbar$, and any sequence $\seq{\lambda_t}$ in $\R$ and $\lambda\in\R\setminus\{0\}$ with $\lambda_t\rightarrow\lambda$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:scalar-prod-props:a}):} If $\alpha\in\R$, then the claim follows from Proposition~\ref{pr:i:2}. If $\alpha=\oms$, then it follows from Proposition~\ref{pr:astrons-exist}. If $\alpha=-\oms$, then for $\uu\in\Rn$, \[ [(-\oms)\xbar]\cdot\uu = [\limray{(-\xbar)}]\cdot\uu = \limray{(-\xbar\cdot\uu)} = (-\oms)(\xbar\cdot\uu). \] The first equality is by definition of multiplication by $-\oms$. The second is by Proposition~\ref{pr:astrons-exist}. And the last is by commutativity of multiplication over $\Rext$. 
\pfpart{Part~(\ref{pr:scalar-prod-props:b}):} By part~(\ref{pr:scalar-prod-props:a}), for all $\uu\in\Rn$, \[ [\alpha(\beta\xbar)]\cdot\uu = \alpha [(\beta\xbar)\cdot\uu] = \alpha [\beta(\xbar\cdot\uu)] = (\alpha\beta)(\xbar\cdot\uu) = [(\alpha\beta)\xbar]\cdot\uu. \] The claim then follows by Proposition~\ref{pr:i:4}. The proofs of the remaining parts are similar. \qedhere \end{proof-parts} \end{proof} Next, we extend leftward addition, previously defined in Eqs.~(\ref{eqn:intro-left-sum-defn}) and~(\ref{eqn:left-sum-alt-defn}) for scalars in $\Rext$, to all points in $\extspace$. For $\xbar,\ybar\in\extspace$, we define $\xbar\plusl\ybar$ to be that unique point in $\extspace$ for which \[ (\xbar\plusl\ybar)\cdot\uu = \xbar\cdot\uu \plusl \ybar\cdot\uu \] for all $\uu\in\Rn$. The next proposition shows that such a point must exist. This operation will turn out to be useful for describing and working with points in $\extspace$. It is much like taking the vector sum of two points in $\Rn$, except that in those directions $\uu$ where this would lead to adding $+\infty$ and $-\infty$, the left summand dominates. \begin{proposition} \label{pr:i:6} Let $\xbar$ and $\ybar$ be in $\extspace$. Then there exists a unique point in $\extspace$, henceforth denoted $\xbar\plusl\ybar$, for which \[ (\xbar\plusl\ybar)\cdot\uu = \xbar\cdot\uu \plusl \ybar\cdot\uu \] for all $\uu\in\Rn$. \end{proposition} \begin{proof} By Theorem~\refequiv{thm:h:5}{thm:h:5a0}{thm:h:5c}, since $\ph{\xbar}$ and $\ph{\ybar}$ are both in $\phimgA$, there exist $\qq,\vv_1,\dotsc,\vv_k\in\Rn$, for some $k\geq 0$, and $\qq',\vv'_1,\dotsc,\vv'_{k'}\in\Rn$, for some $k'\geq 0$, such that for all $\uu\in\Rn$, \begin{align} \ph{\xbar}(\uu) &= \omsf{(\vv_1\cdot\uu)} \plusl \cdots \plusl \omsf{(\vv_k\cdot\uu)} \plusl \qq\cdot\uu, \notag \\ \ph{\ybar}(\uu) &= \omsf{(\vv'_1\cdot\uu)} \plusl \cdots \plusl \omsf{(\vv'_{k'}\cdot\uu)} \plusl \qq'\cdot\uu.
\label{eq:i:3} \end{align} Let $\psi:\Rn\rightarrow\Rext$ be defined by \begin{align*} \psi(\uu) &= \xbar\cdot\uu \plusl \ybar\cdot\uu \\ &= \ph{\xbar}(\uu)\plusl \ph{\ybar}(\uu) \\ &= \omsf{(\vv_1\cdot\uu)} \plusl \cdots \plusl \omsf{(\vv_k\cdot\uu)} \plusl \omsf{(\vv'_1\cdot\uu)} \plusl \cdots \plusl \omsf{(\vv'_{k'}\cdot\uu)} \plusl (\qq+\qq')\cdot\uu, \end{align*} for $\uu\in\Rn$, where the last equality uses \eqref{eq:i:3} and Proposition~\ref{pr:i:5}(\ref{pr:i:5c}). Because $\psi$ has this form, Theorem~\refequiv{thm:h:5}{thm:h:5a0}{thm:h:5c} implies it is in $\phimgA$ and therefore equal to $\ph{\zbar}$ for some $\zbar\in\extspace$. Setting $\xbar\plusl\ybar$ equal to $\zbar$ proves the proposition, with uniqueness following from Proposition~\ref{pr:i:4}. \end{proof} Just as was the case for scalars, leftward addition of astral points is associative, distributive and partially commutative (specifically, if either of the summands is in $\Rn$). It is also the same as vector addition when both summands are in $\Rn$. We summarize these and other properties in the next proposition. \begin{proposition} \label{pr:i:7} Let $\xbar,\ybar,\zbar\in\extspace$, and $\xx,\yy\in\Rn$. Then: \begin{letter-compact} \item \label{pr:i:7a} $(\xbar\plusl \ybar)\plusl \zbar=\xbar\plusl (\ybar\plusl \zbar)$. \item \label{pr:i:7b} $\lambda(\xbar\plusl \ybar)=\lambda\xbar\plusl \lambda\ybar$, for $\lambda\in\R$. \item \label{pr:i:7c} For all $\alpha,\beta\in\Rext$, if $\alpha\beta\geq 0$ then $\alpha\xbar\plusl \beta\xbar=(\alpha+\beta)\xbar$. \item \label{pr:i:7d} $\xbar\plusl \yy = \yy\plusl \xbar$. In particular, $\xbar\plusl \zero = \zero\plusl \xbar = \xbar$. \item \label{pr:i:7e} $\xx\plusl \yy = \xx + \yy$. \item \label{pr:i:7f} $\xbar\plusl \ybart\to\xbar\plusl\ybar$, for any sequence $\seq{\ybart}$ in $\eRn$ with $\ybart\to\ybar$.
\item \label{pr:i:7g} $\xbar_t\plusl \yy_t\to\xbar\plusl\yy$, for any sequences $\seq{\xbar_t}$ in $\eRn$ and $\seq{\yy_t}$ in $\Rn$ with $\xbar_t\to\xbar$ and $\yy_t\rightarrow\yy$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:i:7a}):} By Propositions~\ref{pr:i:6} and~\ref{pr:i:5}(\ref{pr:i:5a}), for $\uu\in\Rn$, \begin{align*} \bigParens{(\xbar\plusl \ybar)\plusl \zbar}\cdot\uu &= (\xbar\plusl \ybar)\cdot\uu \plusl \zbar\cdot\uu \\ &= (\xbar\cdot\uu\plusl \ybar\cdot\uu) \plusl \zbar\cdot\uu \\ &= \xbar\cdot\uu\plusl (\ybar\cdot\uu \plusl \zbar\cdot\uu) \\ &= \xbar\cdot\uu\plusl (\ybar \plusl \zbar) \cdot\uu \\ &= \bigParens{\xbar \plusl (\ybar \plusl \zbar)} \cdot\uu. \end{align*} The claim now follows by Proposition~\ref{pr:i:4}. The proofs of the other parts are similar. \qedhere \end{proof-parts} \end{proof} The leftward sum of two sets $X$ and $Y$ in $\extspace$ is defined analogously to ordinary addition: \[ X\plusl Y = \{ \xbar\plusl\ybar:\: \xbar\in X, \ybar\in Y \}. \] We also define $X\plusl\ybar = X\plusl \{\ybar\}$ and $\ybar\plusl X = \{\ybar\} \plusl X$ for $X\subseteq\extspace$ and $\ybar\in\extspace$. Analogously, if $A\subseteq\Rext$ and $X\subseteq\extspace$, then we define the product \[ A X = \{ \alpha \xbar:\: \alpha\in A, \xbar\in X \}, \] with $\alpha X = \{\alpha\} X$ and $A \xbar = A \{\xbar\}$ for $\alpha\in\Rext$ and $\xbar\in\extspace$. Using astrons and leftward addition, and based on Theorem~\ref{thm:h:5}, we can now succinctly state and prove the form of every point in $\extspace$. \begin{corollary} \label{cor:h:1} Astral space $\extspace$ consists of all points $\xbar$ of the form \[ \xbar=\limray{\vv_1} \plusl \cdots \plusl \limray{\vv_k} \plusl \qq \] for $\qq,\vv_1,\dotsc,\vv_k\in\Rn$, $k\geq 0$. Furthermore, the same statement holds if we require the vectors $\vv_1,\dotsc,\vv_k$ to be orthonormal and $\qq$ to be orthogonal to them.
\end{corollary} \begin{proof} Every expression of the form given in the corollary is in $\extspace$ since astral space includes all astrons and all of $\Rn$, and since the space is closed under leftward addition (by Propositions~\ref{pr:astrons-exist} and~\ref{pr:i:6}). To show every point has this form, suppose $\xbar\in\extspace$. Then $\ph{\xbar}\in\phimgA$, so by Theorem~\refequiv{thm:h:5}{thm:h:5a0}{thm:h:5c}, there exist orthonormal $\vv_1,\dotsc,\vv_k\in\Rn$ and $\qq\in\Rn$ that is orthogonal to them, such that \begin{align*} \xbar\cdot\uu &= \ph{\xbar}(\uu) \\ &= \omsf{(\vv_1\cdot\uu)} \plusl \cdots \plusl \omsf{(\vv_k\cdot\uu)} \plusl \qq\cdot\uu \\ &= (\limray{\vv_1})\cdot\uu \plusl \cdots \plusl (\limray{\vv_k})\cdot\uu \plusl \qq\cdot\uu \\ &= \bigParens{\limray{\vv_1} \plusl \cdots \plusl \limray{\vv_k} \plusl\qq}\cdot\uu \end{align*} for $\uu\in\Rn$. Here, the third and fourth equalities are by Propositions~\ref{pr:astrons-exist} and~\ref{pr:i:6}, respectively. Thus, by Proposition~\ref{pr:i:4}, \[ \xbar = {\limray{\vv_1} \plusl \cdots \plusl \limray{\vv_k} \plusl\qq}. \qedhere \] \end{proof} We refer to the expression $\xbar=\limray{\vv_1} \plusl \cdots \plusl \limray{\vv_k} \plusl \qq$ as an \emph{(astral) decomposition} of $\xbar$; it is not necessarily unique. If two astral decompositions correspond to the same point, they are said to be \emph{equivalent}. When vectors $\vv_1,\dotsc,\vv_k$ are independent, the astral decomposition is said to be \emph{nondegenerate}. When vectors $\vv_1,\dotsc,\vv_k$ are orthonormal and $\qq$ is orthogonal to them, then the astral decomposition is said to be \emph{orthogonal}. In \Cref{sec:canonical:rep}, we show that each astral point has a unique orthogonal decomposition. 
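To illustrate that decompositions need not be unique, consider the following example in $\extspac{2}$, writing $\ee_1,\ee_2$ for the standard basis vectors. By Propositions~\ref{pr:astrons-exist} and~\ref{pr:i:6}, for all $\uu=\trans{[u_1,u_2]}\in\R^2$,
\[
\bigParens{\limray{\ee_1} \plusl \limray{(\ee_1+\ee_2)}}\cdot\uu
= \omsf{u_1} \plusl \omsf{(u_1+u_2)}
= \omsf{u_1} \plusl \omsf{u_2}
= \bigParens{\limray{\ee_1} \plusl \limray{\ee_2}}\cdot\uu.
\]
The middle equality can be checked by cases: if $u_1\neq 0$, then the left summand $\omsf{u_1}$ is infinite and dominates both leftward sums; if $u_1=0$, then both sums reduce to $\omsf{u_2}$. By Proposition~\ref{pr:i:4}, the two decompositions therefore describe the same astral point, so they are equivalent, even though only the second is orthogonal.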
Using \Cref{lemma:chain:psi}, we can also characterize the value of $\xbar\inprod\uu$ based on the astral decomposition of $\xbar$: \begin{lemma}[Case Decomposition Lemma] \label{lemma:case} Let $\xbar=\limray{\vv_1} \plusl \cdots \plusl \limray{\vv_k} \plusl \qq$ \ for some $k\ge 0$ and $\vv_1,\dotsc,\vv_k,\qq\in\Rn$. Then for all $\uu\in\Rn$, \[ \xbar\inprod\uu = \newcommand{\ttinprod}{\mkern-.8mu\inprod\mkern-.8mu} \begin{cases} +\infty &\text{if $\vv_i\ttinprod\uu>0$ for some $i$, and $\vv_j\ttinprod\uu=0$ for $j=1,\dotsc,i-1$,} \\ -\infty &\text{if $\vv_i\ttinprod\uu<0$ for some $i$, and $\vv_j\ttinprod\uu=0$ for $j=1,\dotsc,i-1$,} \\ \qq\ttinprod\uu &\text{if $\vv_i\ttinprod\uu=0$ for $i=1,\dotsc,k$.} \end{cases} \] \end{lemma} \begin{proof} The result follows by \Cref{lemma:chain:psi}. \end{proof} Theorem~\ref{thm:h:5} and Corollary~\ref{cor:h:1} tell us that elements of astral space can be viewed from two main perspectives. First, as discussed in Chapter~\ref{sec:astral-space-intro}, we can think of astral points $\xbar\in\extspace$ as limits of sequences in $\Rn$. Such sequences can have a finite limit $\qq$ in $\Rn$ exactly when they converge to $\qq$ in the Euclidean topology. But more interestingly, Corollary~\ref{cor:h:1} expresses precisely the form of every astral point, and so also all of the ways in which convergent sequences in astral space can ``go to infinity'': Every such sequence has a primary dominant direction~$\vv_1$, and may also have a secondary direction $\vv_2$, a tertiary direction $\vv_3$, etc., and a remaining finite part $\qq\in\Rn$. This is exactly the form discussed in Section~\ref{subsec:astral-pt-form}; the polynomial-speed sequence constructed in Example~\ref{ex:poly-speed-intro} is an example of a sequence with just these properties.
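For instance, with $k=2$ and any $\vv_1,\vv_2,\qq\in\Rn$, a sequence of this kind is the polynomial-speed sequence
\[
\xx_t = t^2 \vv_1 + t \vv_2 + \qq,
\]
which heads to infinity fastest in the dominant direction $\vv_1$, more slowly in the secondary direction $\vv_2$, and retains the finite part $\qq$. Applying Theorem~\ref{thm:i:seq-rep} below with $b_{t,1}=t^2$ and $b_{t,2}=t$ (so that $b_{t,2}/b_{t,1}=1/t\rightarrow 0$) shows that $\xx_t\rightarrow\limray{\vv_1}\plusl\limray{\vv_2}\plusl\qq$.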
Alternatively, as we saw in Sections~\ref{subsec:astral-pts-as-fcns} and~\ref{sec:char:func}, astral points have a dual representation as functions mapping $\Rn$ to $\Rext$, and indeed their topology and other properties are largely derived from this correspondence. Astral space, we proved, is a homeomorphic copy of the function space $\phimgA=\phimgcl$, a space that consists of those functions on $\Rn$ that are convex, concave and that vanish at the origin. We also proved that these functions have a very particular functional form. In the foregoing, we followed an approach in which astral space was constructed based on the first perspective. But it would have been possible to instead construct this same space based on the second perspective. Such an approach would begin with $\phimg$, which is exactly the space of all linear functions, and which is equivalent topologically and as a vector space to $\Rn$. The next step would be to form the closure of this space, $\phimgcl$, which is compact as a closed subset of the compact space $\fcnspn$. The actual astral space $\extspace$ could then be constructed as a homeomorphic copy of $\phimgcl$. This is very much like the approach used, for instance, in constructing the Stone-\v{C}ech compactification~\citep[Section 38]{munkres}. More significantly, this view shows that astral space is a special case of the $\calQ$-compacti\-fications studied by~\citet{q_compactification}, in which we set $\calQ$ to be the set of all linear functions on $\Rn$. In particular, this connection implies that astral space is the smallest compactification of $\Rn$ under which all linear functions can be extended continuously, which was a primary aim for its construction. \subsection{First countability} We next show that astral space is \emph{first-countable}, meaning that every point $\xbar\in\extspace$ has a {countable neighborhood base} (see \Cref{sec:prelim:countability}). 
This fundamental topological property allows us to work with sequential characterizations of closure and continuity (see \Cref{prop:first:properties}), and also implies that astral space is {sequentially compact} (\Cref{prop:first:subsets}\ref{i:first:compact}). To prove that a given astral point $\xbar$ has a countable neighborhood base, we rely on \Cref{thm:first:fromseq}, which in our setting states that a collection of $\xbar$'s neighborhoods $\countset{B}$ is a countable neighborhood base at $\xbar$ if any sequence with elements chosen from $B_t\cap\Rn$ converges to~$\xbar$. In order to construct such a collection, we prove a sufficient condition on the convergence of sequences in $\Rn$ to an arbitrary astral point~$\xbar$. Our proof generalizes the analysis from Example~\ref{ex:poly-speed-intro}, where we showed convergence of a polynomial-speed sequence. Here we show that polynomial coefficients can be replaced by arbitrary sequences going to infinity at different speeds: \begin{theorem} \label{thm:i:seq-rep} Let $\xbar=\limray{\vv_1} \plusl \cdots \plusl \limray{\vv_k} \plusl \qq$ \ for some $k\ge 0$ and $\vv_1,\dotsc,\vv_k,\qq\in\Rn$. Consider the sequence \begin{equation} \label{eq:i:seq-rep} \xx_t = b_{t,1} \vv_1 + b_{t,2} \vv_2 + \dotsb + b_{t,k}\vv_k + \qq_t = \sum_{i=1}^k b_{t,i} \vv_i + \qq_t, \end{equation} where $b_{t,i}\in\R$ and $\qq_t\in\Rn$ satisfy all of the following: \begin{letter-compact} \item \label{thm:i:seq-rep:a} $b_{t,i}\rightarrow+\infty$, for $i=1,\dotsc,k$. \item \label{thm:i:seq-rep:b} $b_{t,i+1}/b_{t,i}\rightarrow 0$, for $i=1,\dotsc,k-1$. \item \label{thm:i:seq-rep:c} $\qq_t\rightarrow\qq$. \end{letter-compact} Then $\xx_t\to\xbar$. \end{theorem} \begin{proof} The condition (\ref{thm:i:seq-rep:a}) implies, for each $i=1,\dotsc,k$, that $b_{t,i}>0$ for all but finitely many values of $t$. 
We discard from the sequence any element $\xx_t$ for which some coefficient $b_{t,i}$ is nonpositive, and assume in what follows that $b_{t,i}>0$ for all $t$ and all $i=1,\dotsc,k$. We need to show $\xx_t\cdot\uu\rightarrow\xbar\cdot\uu$ for all $\uu\in\Rn$, which, by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}), will imply $\xx_t\rightarrow\xbar$. Let $\uu\in\Rn$. Suppose first that $\vv_i\inprod\uu=0$ for all $i=1,\dotsc,k$. Then \[ \xx_t\cdot\uu = \qq_t\cdot\uu \rightarrow \qq\cdot\uu = \xbar\cdot\uu, \] where the convergence in the middle follows from (\ref{thm:i:seq-rep:c}), and the last equality from the case decomposition of $\xbar\inprod\uu$ in \Cref{lemma:case}. Otherwise, let $j\in\set{1,\dotsc,k}$ be the first index for which $\vv_j\inprod\uu\neq 0$, so that $\vv_i\inprod\uu=0$ for $i=1,\dotsc,j-1$. Assume further that $\vv_j\inprod\uu>0$; the case $\vv_j\inprod\uu<0$ can be proved symmetrically. Then $\xbar\inprod\uu=+\infty$ by \Cref{lemma:case}. Also, \begin{align} \xx_t\cdot\uu &= \sum_{i=1}^k b_{t,i} \vv_i \cdot \uu + \qq_t\cdot\uu \label{eqn:thm:i:seq-rep:1} = b_{t,j} \biggBracks{ \vv_j\cdot\uu + \sum_{i=j+1}^k \frac{b_{t,i}}{b_{t,j}} \vv_i\cdot\uu } + {\qq_t\cdot\uu}. \end{align} By condition~(\ref{thm:i:seq-rep:b}), for each $i=j+1,\dotsc,k$, \begin{equation} \label{eqn:thm:i:seq-rep:2} \frac{b_{t,i}}{b_{t,j}} = \prod_{\ell=j}^{i-1} {\frac{b_{t,\ell+1}}{b_{t,\ell}}} \rightarrow 0. \end{equation} Therefore, the bracketed expression in \eqref{eqn:thm:i:seq-rep:1} is converging to $\vv_j\cdot\uu>0$, and so is at least $\vv_j\cdot\uu / 2 > 0$ for $t$ sufficiently large. Since $b_{t,j}\rightarrow+\infty$, by (\ref{thm:i:seq-rep:a}), it follows that the first term on the right-hand side of \eqref{eqn:thm:i:seq-rep:1} is converging to $+\infty$, while $\qq_t\cdot\uu$ is converging to $\qq\cdot\uu\in\R$ from (\ref{thm:i:seq-rep:c}). Therefore, $\xx_t\cdot\uu\rightarrow+\infty=\xbar\cdot\uu$.
Thus, $\xx_t\cdot\uu\rightarrow\xbar\cdot\uu$ for all $\uu\in\Rn$, so $\xx_t\rightarrow\xbar$. \end{proof} We next use conditions from \Cref{thm:i:seq-rep} to construct a countable neighborhood base at an arbitrary astral point $\xbar$. Following \Cref{cor:h:1}, any $\xbar$ has an orthonormal decomposition. We use the vectors from this decomposition to define a family of neighborhoods $\countset{B}$ such that any sequence with elements $\xx_t\in B_t$ will satisfy the conditions of \Cref{thm:i:seq-rep}. The result will then follow by \Cref{thm:first:fromseq}. \Cref{thm:first:local} explicitly lists the sets composing the countable base for $\xbar$. For illustration, the first few elements of that neighborhood base are plotted in Figure~\ref{fig:first-countable-base} for three different points in $\extspac{2}$. \begin{figure}[t] \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figs/fig_first_countable_base_1} \caption{} \label{fig:first-countable-base:1} \end{subfigure}\hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figs/fig_first_countable_base_2} \caption{} \label{fig:first-countable-base:2} \end{subfigure}\hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figs/fig_first_countable_base_3} \caption{} \label{fig:first-countable-base:3} \end{subfigure}\caption{ Illustration of the countable neighborhood base from Theorem~\ref{thm:first:local} for a point $\xbar\in\extspac{2}$ in the following cases: (a): $\xbar=\qq$ for some $\qq\in\R^2$; (b): $\xbar=\limray{\ee_1}\protect\plusl\lambda\ee_2$ for some $\lambda\in\R$; (c): $\xbar=\limray{\ee_1}\protect\plusl\limray{\ee_2}$. 
} \label{fig:first-countable-base} \end{figure} \begin{theorem} \label{thm:first:local} Let $\xbar\in\extspace$ with an orthonormal decomposition $\xbar=\limray{\vv_1} \plusl \cdots \plusl \limray{\vv_k} \plusl \qq$, and let $\vv_{k+1},\dotsc,\vv_{n}$ be orthonormal vectors that are orthogonal to $\vv_1,\dotsc,\vv_k$, so that $\vv_1,\dotsc,\vv_n$ is an orthonormal basis for $\Rn$. Consider the family of sets $\countset{B}$ where $B_t$ consists of all points $\zbar\in\extspace$ satisfying all of the following conditions: \begin{letter-compact} \item \label{it:count-base-cond:a} $\zbar\cdot\vv_i > t \;$ for $i=1,\dotsc,k$. \item \label{it:count-base-cond:b} $\zbar\cdot(\vv_i - t\vv_{i+1}) > 0 \;$ for $i=1,\dotsc,k-1$. \item \label{it:count-base-cond:c} $|\zbar\cdot\vv_j - \qq\cdot\vv_j| < 1/t \;$ for $j=k+1,\dotsc,n$. \end{letter-compact} Then the collection $\countset{B}$ is a nested countable neighborhood base for $\xbar$. \end{theorem} \begin{proof} By \Cref{thm:i:1}(\ref{thm:i:1aa},\ref{thm:i:1b}), $\eRn$ is a regular topological space and $\Rn$ is dense in~$\eRn$. Therefore, in order to use \Cref{thm:first:fromseq} to show that $\countset{B}$ is a countable neighborhood base, it suffices to check that the sets $B_t$ are neighborhoods of $\xbar$ and then verify that any sequence with elements $\xx_t\in B_t\cap\Rn$ converges to $\xbar$. We will then separately argue that the collection $\countset{B}$ is in fact nested. To start, note that each $B_t$ is open since it is a finite intersection of subbase elements. Also, using the orthogonality of $\vv_1,\dotsc,\vv_k,\qq$, the case decomposition of $\xbar\inprod\uu$ in \Cref{lemma:case} implies that: \begin{itemize}[noitemsep] \item $\xbar\cdot\vv_i=+\infty$ for $i=1,\dotsc,k$. \item $\xbar\cdot(\vv_i - t\vv_{i+1}) = +\infty$ for $i=1,\dotsc,k-1$. \item $\xbar\cdot\vv_j = \qq\cdot\vv_j$ for $j=k+1,\dotsc,n$. \end{itemize} Thus, each $B_t$ is a neighborhood of $\xbar$. Now let $\xx_t\in B_t\cap\Rn$ for all $t$. 
We prove that the sequence $\seq{\xx_t}$ satisfies the conditions of Theorem~\ref{thm:i:seq-rep} and hence converges to $\xbar$. Since vectors $\vv_1,\dotsc,\vv_n$ form an orthonormal basis, we can write each $\xx_t$ as \[ \xx_t = \sum_{i=1}^k b_{t,i} \vv_i + \qq_t, \] where $b_{t,i}=\xx_t\cdot\vv_i$ and $\qq_t$ is orthogonal to $\vv_1,\dotsc,\vv_k$. The fact that $\xx_t\in B_t$ implies that $b_{t,i}=\xx_t\cdot\vv_i > t$, for $i=1,\dotsc,k$, so $b_{t,i}\to+\infty$, and hence condition~(\ref{thm:i:seq-rep:a}) of Theorem~\ref{thm:i:seq-rep} is satisfied. We also have $\xx_t\cdot(\vv_i - t\vv_{i+1}) > 0$, for $i=1,\dotsc,k-1$, so \[ b_{t,i}=\xx_t\cdot\vv_i > t(\xx_t\cdot\vv_{i+1}) = t b_{t,i+1}. \] As a result, $0<b_{t,i+1}/b_{t,i} < 1/t$, and thus, $b_{t,i+1}/b_{t,i}\rightarrow 0$. Therefore, condition~(\ref{thm:i:seq-rep:b}) of Theorem~\ref{thm:i:seq-rep} is satisfied as well. Finally, for $j=k+1,\dotsc,n$, we have $\xx_t\cdot\vv_j=\qq_t\cdot\vv_j$. Since $\xx_t\in B_t$, this implies \[ \bigAbs{\qq_t\cdot\vv_j - \qq\cdot\vv_j} = \bigAbs{\xx_t\cdot\vv_j - \qq\cdot\vv_j} < \frac{1}{t}, \] so $\qq_t\cdot\vv_j \to \qq\cdot\vv_j$. Since $\qq_t$ and $\qq$ are orthogonal to $\vv_1,\dotsc,\vv_k$, we also have $\qq_t\cdot\vv_i=\qq\cdot\vv_i=0$ for $i=1,\dotsc,k$. Thus, $\qq_t\cdot\vv_i\to \qq\cdot\vv_i$ for all basis vectors $\vv_i$, $i=1,\dotsc,n$, and so $\qq_t\to\qq$, satisfying the condition~(\ref{thm:i:seq-rep:c}) of Theorem~\ref{thm:i:seq-rep}. Having satisfied \Cref{thm:i:seq-rep}'s three conditions, we conclude that $\xx_t\to\xbar$. Thus, by \Cref{thm:first:fromseq}, the collection $\countset{B}$ is a countable neighborhood base at $\xbar$. It remains to show that the collection $\countset{B}$ is nested. Let $\zbar\in B_{t+1}$ for any $t\ge 1$. We will show that $\zbar\in B_t$. First, note that conditions (\ref{it:count-base-cond:a}) and (\ref{it:count-base-cond:c}) for the membership in $B_t$ immediately follow from the corresponding conditions for $B_{t+1}$. 
Furthermore, by conditions (\ref{it:count-base-cond:b}) and (\ref{it:count-base-cond:a}) for $B_{t+1}$, the expressions $\zbar\cdot\bigParens{\vv_i - (t+1)\vv_{i+1}}$ and $\zbar\cdot\vv_{i+1}$ are both positive and therefore summable, so by \Cref{pr:i:1}, \[ \zbar\cdot(\vv_i - t\vv_{i+1}) = \zbar\cdot\BigParens{\vv_i - (t+1)\vv_{i+1}} + \zbar\cdot\vv_{i+1} > t+1 > 0. \] Hence, condition (\ref{it:count-base-cond:b}) for $\zbar\in B_t$ also holds. Thus, $\zbar\in B_t$, finishing the proof. \end{proof} As an immediate consequence, astral space is first-countable, and thanks to its compactness, it is also sequentially compact: \begin{theorem} \label{thm:first-count-and-conseq} $\eRn$ is first-countable and sequentially compact. \end{theorem} \begin{proof} By \Cref{thm:first:local}, every point in $\extspace$ has a countable neighborhood base, so $\eRn$ is first-countable. Since $\eRn$ is compact and first-countable, it is also sequentially compact, by \Cref{prop:first:subsets}(\ref{i:first:compact}). \end{proof} Furthermore, as a property of a nested countable neighborhood base, any sequence whose points are selected respectively from the sets $B_t$ must converge to $\xbar$. \begin{corollary} \label{cor:first:local:conv} Let $\xbar\in\extspace$, and let $\countset{B}$ be the collection of neighborhoods of $\xbar$ constructed in Theorem~\ref{thm:first:local}. Then any sequence $\seq{\xbar_t}$ with $\xbar_t\in B_t$ converges to $\xbar$. \end{corollary} \begin{proof} This is immediate from \Cref{prop:nested:limit}. \end{proof} \subsection{Not second-countable and not metrizable} \label{sec:not-second} Although $\extspace$ is first-countable, we show next that it is not second-countable (for $n\geq 2$), meaning its topology does not have a countable base. By \Cref{prop:sep:metrizable}, this will further imply that it also is not metrizable. 
To prove this, for each $\vv\in\Rn$, we will define an open set $\Uv$ with the property that $\Uv$ includes the astron $\limray{\vv}$, but does not include any other astron. This will imply that a countable base must include a different set for every astron, which is impossible since the set of all astrons is uncountable when $n\geq 2$. The sets $\Uv$ will be used again at a later point; we therefore prove properties for these sets that are a bit stronger than necessary for our current purposes. In particular, we prove that all of the points in $\Uv$ must either be in $\Rn$ or have the form $\limray{\vv}\plusl\qq$ for some $\qq\in\Rn$. This allows us to exclude the possibility of $\Uv$ including any astrons other than $\limray{\vv}$. \begin{theorem} \label{thm:formerly-lem:h:1:new} For every $\vv\in\Rn$, there exists an open set $\Uv\subseteq\extspace$ that includes $\limray{\vv}$ and for which the following hold: \begin{letter-compact} \item \label{thm:formerly-lem:h:1:b:new} If $\xbar\in\Uv$, then either $\xbar\in\Rn$ or $\xbar=\limray{\vv}\plusl\qq$ for some $\qq\in\Rn$. \item \label{thm:formerly-lem:h:1:c:new} If $\ww\in\Rn$ and $\norm{\vv}=\norm{\ww}=1$, then $\limray{\ww}\in\Uv$ if and only if $\ww=\vv$. \end{letter-compact} \end{theorem} \begin{proof} Let $\vv\in\Rn$. If $\vv=\zero$, we can choose $\Uzero=\Rn$, which satisfies part~(\ref{thm:formerly-lem:h:1:b:new}) trivially and part~(\ref{thm:formerly-lem:h:1:c:new}) vacuously. When $\vv\neq\zero$, it suffices to consider only the case that $\vv$ is a unit vector since if $\norm{\vv}\neq 1$, then we can choose $\Uv$ to be the same as the corresponding set for a normalized version of $\vv$; that is, we can choose $\Uv=\Unormv$. Therefore, we assume henceforth that $\norm{\vv}=1$. Let $\uu_1,\dotsc,\uu_{n-1}$ be any orthonormal basis for the linear space orthogonal to $\vv$; thus, $\vv,\uu_1,\dotsc,\uu_{n-1}$ form an orthonormal basis for all of $\Rn$. 
We define $\Uv$ to be the set of all $\xbar\in\extspace$ which satisfy both of the following conditions: \begin{itemize}[noitemsep] \item $\xbar\cdot\vv>0$. \item $|\xbar\cdot\uu_j| < 1$ for all $j=1,\dotsc,n-1$. \end{itemize} Then $\Uv$ is open, since it is a finite intersection of subbase elements. Furthermore, $\limray{\vv}\cdot\vv=+\infty$ and $\limray{\vv}\cdot\uu_j=0$ for $j=1,\dotsc,n-1$; therefore, $\limray{\vv}\in\Uv$. We now prove that $\Uv$ satisfies the two parts of the theorem. \begin{proof-parts} \pfpart{Part~(\ref{thm:formerly-lem:h:1:b:new}):} Suppose $\xbar\in\Uv$, and let $\xbar=\limray{\vv_1}\plusl\dotsc\plusl\limray{\vv_k}\plusl\qq$ be an orthonormal decomposition of $\xbar$. We first argue that vectors $\vv_i$ must be orthogonal to vectors $\uu_j$. Suppose this is not the case and $\vv_i\cdot\uu_j\neq 0$ for some $i$, $j$. Then the case decomposition of $\xbar\inprod\uu_j$ from \Cref{lemma:case} yields $\xbar\cdot\uu_j\in\{-\infty,+\infty\}$; but this is a contradiction since $\xbar\in\Uv$, implying $|\xbar\cdot\uu_j|<1$. Therefore, $\vv_i\cdot\uu_j=0$ for all $i$, $j$. Since $\vv_i$ is a unit vector, and since $\vv,\uu_1,\dotsc,\uu_{n-1}$ form an orthonormal basis for $\Rn$, the only possibility is that each $\vv_i$ is either equal to $\vv$ or $-\vv$. Furthermore, since the $\vv_i$'s are orthogonal to one another, this further implies that $k\leq 1$. If $k=0$, then $\xbar=\qq\in\Rn$. Otherwise, if $k=1$, then $\vv_1$ is either $\vv$ or $-\vv$, as just argued. But if $\vv_1=-\vv$, then $\xbar\cdot\vv=\limray{\vv_1}\cdot\vv\plusl\qq\cdot\vv=-\infty$, contradicting that $\xbar\cdot\vv>0$ since $\xbar\in\Uv$. Thus, if $k=1$ then $\xbar=\limray{\vv_1}\plusl\qq=\limray{\vv}\plusl\qq$. We conclude that $\xbar$ is either in $\Rn$ or in $\limray{\vv}\plusl\Rn$, as claimed. \pfpart{Part~(\ref{thm:formerly-lem:h:1:c:new}):} Let $\ww\in\Rn$ be such that $\norm{\ww}=1$. If $\ww=\vv$ then $\limray{\ww}\in\Uv$, as observed above. 
For the converse, let $\xbar=\limray{\ww}$ and assume that $\xbar\in\Uv$. As we argued in part~(\ref{thm:formerly-lem:h:1:b:new}), this means that $\ww\inprod\uu_j=0$ for all $j$, and since $\ww$ is a unit vector, this implies either $\ww=\vv$ or $\ww=-\vv$. Since $\xbar\inprod\vv>0$, we cannot have $\ww=-\vv$, so we must have $\ww=\vv$. \qedhere \end{proof-parts} \end{proof} \begin{theorem} \label{thm:h:8:new} For $n\geq 2$, astral space $\extspace$ is not second-countable. \end{theorem} \begin{proof} Let $n\geq 2$, and let $S=\{ \vv\in\Rn : \norm{\vv}=1 \}$ be the unit sphere in $\Rn$. For each $\vv\in S$, let $\Uv$ be as in Theorem~\ref{thm:formerly-lem:h:1:new}. Suppose, contrary to the theorem, that there exists a countable base $B_1,B_2,\dotsc$ for $\extspace$. Since for each $\vv\in S$, $\Uv$ is a neighborhood of $\limray{\vv}$, by \Cref{pr:base-equiv-topo}, there must exist an index $i(\vv)\in\nats$ such that $\limray{\vv}\in B_{i(\vv)} \subseteq \Uv$. The resulting function $i: S\rightarrow\nats$ is injective since if $i(\vv)=i(\ww)$ for some $\vv,\ww\in S$, then \[ \limray{\ww}\in B_{i(\ww)} = B_{i(\vv)} \subseteq \Uv, \] which implies $\vv=\ww$ by Theorem~\ref{thm:formerly-lem:h:1:new}(\ref{thm:formerly-lem:h:1:c:new}). Let $c:[0,1]\rightarrow S$ be defined by \[ c(x)=\trans{\Bracks{x, \sqrt{1-x^2}, 0, \cdots, 0}} \] for $x\in [0,1]$; that is, the first two components of $c(x)$ are $x$ and $\sqrt{1-x^2}$, and all other components are zero. This function is injective. Therefore, $i\circ c$, its composition with $i$, maps $[0,1]$ to $\nats$, and is injective (since, for $x,x'\in [0,1]$, if $x\neq x'$ then $c(x)\neq c(x')$, implying $i(c(x))\neq i(c(x'))$). By Proposition~\refequiv{pr:count-equiv}{pr:count-equiv:a}{pr:count-equiv:c}, the existence of such a function implies that $[0,1]$ is countable, contradicting that this interval is in fact uncountable (\Cref{pr:uncount-interval}). Thus, no such countable base can exist. 
\end{proof} As a corollary, $\extspace$ also cannot be metrizable when $n\ge 2$: \begin{corollary} For $n\geq 2$, astral space $\extspace$ is not metrizable. \end{corollary} \begin{proof} Assume $n\geq 2$. We previously saw that $\Qn$ is a countable dense subset of $\Rn$ (Example~\ref{ex:2nd-count-rn}), and that $\Rn$ is dense in $\extspace$ (Theorem~\ref{thm:i:1}\ref{thm:i:1b}). Therefore, $\Qn$ is a countable dense subset of $\eRn$ by \Cref{prop:subspace}(\ref{i:subspace:dense}) (and since $\Rn$ is a subspace of $\extspace$ by Proposition~\ref{pr:open-sets-equiv-alt}). Every metrizable space with a countable dense subset is second-countable (\Cref{prop:sep:metrizable}), but $\extspace$ is not second-countable (by \Cref{thm:h:8:new}). Therefore, $\extspace$ is not metrizable. \end{proof} Theorem~\ref{thm:formerly-lem:h:1:new} highlights an important topological property regarding astrons. In $\Rn$, any neighborhood of a point $\xx\in\Rn$, no matter how tiny and constrained, must include other points in $\Rn$. Topologically, this means no point can be isolated from all others. Intuitively, one might expect astrons to behave in a similar way so that any neighborhood of an astron $\limray{\vv}$ will include other ``nearby'' astrons $\limray{\ww}$, likely including those for which $\ww$ is exceedingly close to $\vv$ (in $\Rn$). Theorem~\ref{thm:formerly-lem:h:1:new} shows that this is not true for any astron. Rather, every astron $\limray{\vv}$ has a neighborhood $\Uv$ that excludes \emph{all} other astrons $\limray{\ww}$, no matter how tiny the distance between $\vv$ and $\ww$. So although it might be tempting to picture astrons intuitively as points ``in the sky,'' a seemingly continuous set in direct correspondence with points on the unit sphere, in fact, astrons are quite topologically discrete and isolated from one another. Related issues will be considered again in Section~\ref{sec:conv-in-dir}.
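To make this isolation phenomenon concrete, the following small numerical sketch (illustrative only, with ad hoc names; not part of the formal development) works in $\R^2$ with $\vv=\ee_1$ and $\uu_1=\ee_2$: every point on the ray $t\vv$ satisfies both defining conditions of $\Uv$, while for a nearby unit vector $\ww\neq\vv$ the points $t\ww$ eventually violate the condition $|\xx\cdot\uu_1|<1$, so $\limray{\ww}\notin\Uv$.

```python
import math

# Illustration in R^2: v = e1 and u1 = e2 form an orthonormal basis, and
# U_v = {x : x.v > 0 and |x.u1| < 1} is the neighborhood from the theorem.
v = (1.0, 0.0)
u1 = (0.0, 1.0)

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

# Along the ray t*v, both defining conditions of U_v hold for every t > 0 ...
for t in [1.0, 1e3, 1e6]:
    x = (t * v[0], t * v[1])
    assert dot(x, v) > 0 and abs(dot(x, u1)) < 1

# ... but for a nearby unit vector w != v (angle 0.01 radians from v),
# |t*w . u1| = t*sin(0.01) grows without bound, so the ray t*w eventually
# leaves U_v; hence the astron limray(w) does not lie in U_v.
theta = 0.01
w = (math.cos(theta), math.sin(theta))
escape_times = [t for t in [1.0, 1e3, 1e6]
                if abs(dot((t * w[0], t * w[1]), u1)) >= 1]
print(escape_times)
```

Only the smallest value of $t$ keeps $t\ww$ inside $\Uv$; the larger ones have already escaped, no matter how small the fixed angle $\theta$ is chosen.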
\subsection{Closed sets and closures} We saw in Proposition~\refequiv{pr:open-sets-equiv-alt}{pr:open-sets-equiv-alt:a}{pr:open-sets-equiv-alt:b} that a set in $\Rn$ is open in the astral topology on $\eRn$ if and only if it is open in the standard Euclidean topology on $\Rn$. For this reason, we can simply refer to a set in $\Rn$ as being open without specifying a topology. The same is \emph{not} true for closed sets: A set $U\subseteq\Rn$ can be closed with respect to the standard Euclidean topology, but not closed in the astral topology. We use the phrase \emph{closed in $\Rn$} or \emph{closed in $\extspace$} to make clear which topology is being referred to, although usually the topology will be clear from context, especially since this ambiguity only arises when considering sets that are necessarily in $\Rn$. For instance, if $\vv\neq\zero$, then the ray $\{ \lambda \vv : \lambda\in\Rpos \}$ is closed in $\Rn$, but is not closed in $\extspace$ since it does not include~$\limray{\vv}$, the limit of the sequence $\seq{t\vv}$. Indeed, $\Rn$ itself is closed in $\Rn$ but not in $\extspace$. In the same way, the closure of a set in $\Rn$ can in general differ depending on whether the closure is taken with respect to the topology on $\Rn$ or on $\extspace$. We use the notation $\Ubar$ for the closure of a set $U$ in $\eRn$, and $\cl U$ for its closure in $\Rn$. Here are some useful facts about closures and closed sets, all standard results specialized to our current setting: \begin{proposition} \label{pr:closed-set-facts} \mbox{} \begin{letter-compact} \item \label{pr:closed-set-facts:a} Let $S \subseteq \Rn$. Then $\cl S = \Sbar \cap \Rn$. \item \label{pr:closed-set-facts:aa} Let $S \subseteq \Rn$. Then $\clbar{\cl S} = \Sbar$. \item \label{pr:closed-set-facts:b} Let $U \subseteq \extspace$ be open. Then $\clbar{(U \cap \Rn)} = \Ubar$.
\end{letter-compact} \end{proposition} \begin{proof}~ \begin{proof-parts} \pfpart{Part~(\ref{pr:closed-set-facts:a}):} This is a special case of \Cref{prop:subspace}(\ref{i:subspace:closure}). \pfpart{Part~(\ref{pr:closed-set-facts:aa}):} On the one hand, $\Sbar\subseteq \clbar{\cl S}$ since $S\subseteq \cl S$. On the other hand, $\cl S = \Sbar\cap\Rn\subseteq \Sbar$ by part~(\ref{pr:closed-set-facts:a}). Since $\Sbar$ is closed, this implies $\clbar{\cl S} \subseteq \Sbar$. \pfpart{Part~(\ref{pr:closed-set-facts:b}):} First, $\clbar{(U \cap \Rn)} \subseteq \Ubar$ since $U \cap \Rn \subseteq U$. To prove the reverse inclusion, suppose $\xbar\in\Ubar$. Let $V$ be any neighborhood of $\xbar$. Then there exists a point $\ybar$ in the open set $U\cap V$, which in turn, since $\Rn$ is dense in $\extspace$, implies that there is a point $\zz\in U\cap V\cap \Rn$. Therefore, by \Cref{pr:closure:intersect}(\ref{pr:closure:intersect:a}), $\xbar\in \clbar{(U \cap \Rn)}$ since $U\cap\Rn$ intersects every neighborhood $V$ of $\xbar$. \qedhere \end{proof-parts} \end{proof} \section{Linear maps and the matrix representation of astral points} \label{sec:representing} Astral space was specifically constructed so that linear functions, mapping $\Rn$ to $\R$, would extend continuously to it. As a consequence, we will see in this chapter that linear maps, from $\Rn$ to $\Rm$, also extend continuously to astral space. This fact turns out to be tremendously useful, allowing basic notions and operations from linear algebra, suitably adapted, to be applied to all of astral space, including the points at infinity. Indeed, we will see that every astral point can be represented using standard matrices, with properties of those matrices related directly to properties of the astral point being represented. We will also give a complete characterization of the exact set of sequences in $\Rn$ converging to a particular astral point in terms of the point's matrix representation. 
\subsection{Linear and affine maps} \label{sec:linear-maps} The coupling function, $\xbar\inprod\uu$, viewed as a function of~$\xbar$, was constructed to be a continuous extension of the standard inner product (see \Cref{thm:i:1}(\ref{thm:i:1c})). We begin this chapter by showing that it is similarly possible to extend all linear maps from $\Rn$ to $\Rm$ to maps from $\eRn$ to $\eRm$. \begin{theorem} \label{thm:mat-mult-def} Let $\A\in\Rmn$, and let $\xbar\in\extspace$. Then there exists a unique point in $\extspac{m}$, henceforth denoted $\A\xbar$, with the property that for every sequence $\seq{\xx_t}$ in $\Rn$, if $\xx_t\rightarrow\xbar$, then $\A\xx_t\rightarrow \A\xbar$. Furthermore, for all $\uu\in\Rm$, \begin{equation} \label{eq:i:4} (\A\xbar)\cdot\uu = \xbar\cdot(\trans{\A} \uu). \end{equation} \end{theorem} \begin{proof} Let $\seq{\xx_t}$ be any sequence in $\Rn$ that converges to $\xbar$, which must exist by Theorem~\ref{thm:i:1}(\ref{thm:i:1d}). Then for all $\uu\in\Rm$, \begin{equation} \label{eq:i:5} (\A\xx_t)\cdot\uu = \trans{(\A\xx_t)}\uu = \trans{\xx}_t \trans{\A} \uu = \xx_t\cdot(\trans{\A} \uu) \rightarrow \xbar\cdot (\trans{\A} \uu), \end{equation} where the last step follows by continuity of the coupling function in its first argument, that is, by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}). Thus, for all $\uu\in\Rm$, the sequence $(\A\xx_t)\cdot\uu$ converges in $\eR$, and therefore, by Theorem~\ref{thm:i:1}(\ref{thm:i:1e}), the sequence $\seq{\A\xx_t}$ has a limit in $\eRm$, which we denote $\A\xbar$. Since $\A\xx_t\to\A\xbar$, we must have $(\A\xx_t)\cdot\uu\rightarrow(\A\xbar)\cdot\uu$ for all $\uu\in\Rm$ (again by Theorem~\ref{thm:i:1}\ref{thm:i:1c}). As seen in \eqref{eq:i:5}, this same sequence, $(\A\xx_t)\cdot\uu$, also converges to $\xbar\cdot (\trans{\A} \uu)$. Since these limits must be equal, this proves \eqref{eq:i:4}. 
If $\seq{\xx'_t}$ is any other sequence in $\Rn$ converging to $\xbar$, then this same reasoning shows that $\seq{\A\xx'_t}$ must have some limit $\zbar'\in\eRm$ satisfying $\zbar'\cdot\uu=\xbar\cdot (\trans{\A} \uu)$ for all $\uu\in\Rm$. Combined with \eqref{eq:i:4}, it follows that $\zbar'=\A\xbar$ by Proposition~\ref{pr:i:4}. Thus, as claimed, $\A\xbar$ is the unique common limit of $\seq{\A\xx_t}$ for every sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$. \end{proof} If $\A\in\Rmn$ and $\xx\in\Rn$, then the notation $\A\xx$ as defined in \Cref{thm:mat-mult-def} is fully compatible with its usual meaning. In other words, $\A\xx$ refers to the same point in $\Rm$ whether interpreted as a standard matrix-vector product or as the product of matrix $\A$ with astral point $\xx$, as in \Cref{thm:mat-mult-def}. To see this, let $\xbar=\xx$, let $\A\xbar$ be as in \Cref{thm:mat-mult-def}, and, for the moment, let $\A\xx$ denote standard matrix-vector product. Let $\seq{\xx_t}$ be the constant sequence with $\xx_t=\xx$ for all $t$. Then $\A\xx_t\rightarrow\A\xx$ since $\A\xx_t=\A\xx$ for all $t$, but also $\xx_t\rightarrow\xbar$ so $\A\xx_t\rightarrow\A\xbar$ by \Cref{thm:mat-mult-def}. Thus, $\A\xbar=\A\xx$. In standard linear algebra, a matrix $\A\in\Rmn$ is associated with a (standard) linear map $A:\Rn\rightarrow\Rm$ given by $A(\xx)=\A\xx$ for $\xx\in\Rn$. \Cref{thm:mat-mult-def} shows that $\A$ is also associated with an \emph{astral linear map} $\Abar:\eRn\rightarrow\eRm$ given by $\Abar(\xbar)=\A\xbar$ for $\xbar\in\extspace$. The next theorem shows that $\Abar$ is the unique continuous extension of $A$ to astral space: \begin{theorem} \label{thm:linear:cont} Let $\A\in\Rmn$, and let $A:\Rn\rightarrow\Rm$ and $\Abar:\eRn\rightarrow\eRm$ be the associated standard and astral linear maps (so that $A(\xx)=\A\xx$ for $\xx\in\Rn$, and $\Abar(\xbar)=\A\xbar$ for $\xbar\in\extspace$).
Then: \begin{letter-compact} \item \label{thm:linear:cont:a} $\Abar(\xx)=A(\xx)$ for $\xx\in\Rn$. \item \label{thm:linear:cont:b} $\Abar$ is continuous. \end{letter-compact} Moreover, $\Abar$ is the only function mapping $\eRn$ to $\eRm$ with both of these properties. \end{theorem} \begin{proof} Part~(\ref{thm:linear:cont:a}) was argued just above. For part~(\ref{thm:linear:cont:b}), let $\seq{\xbar_t}$ be any sequence in $\extspace$ converging to some point $\xbar\in\extspace$. Then for all $\uu\in\Rm$, \[ \Abar(\xbar_t)\cdot\uu = (\A\xbar_t)\cdot\uu = \xbar_t\cdot(\trans{\A}\uu) \rightarrow \xbar\cdot(\trans{\A}\uu) = (\A\xbar)\cdot\uu = \Abar(\xbar)\cdot\uu, \] where the second and third equalities are by \Cref{thm:mat-mult-def}, and the convergence is by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}). It follows that $\Abar(\xbar_t)\rightarrow\Abar(\xbar)$ (again by Theorem~\ref{thm:i:1}\ref{thm:i:1c}). Thus, $\Abar$ is continuous (by \Cref{prop:first:properties}\ref{prop:first:cont} and first countability). For uniqueness, suppose $F:\eRn\rightarrow\eRm$ is another function with these same properties, and let $\xbar\in\extspace$. Let $\seq{\xx_t}$ be any sequence in $\Rn$ converging to $\xbar$. Then \[ F(\xbar) = \lim F(\xx_t) = \lim A(\xx_t) = \lim \Abar(\xx_t) = \Abar(\xbar), \] where the first and last equalities are by continuity. Thus, $F=\Abar$. \end{proof} The (standard) column space of $\A$ is the span of its columns, or equivalently, the image of $\Rn$ under $A$: \[ \colspace\A = A(\Rn) = \Braces{\A\xx : \xx\in\Rn}. \] Analogously, we define the \emph{astral column space} of $\A$ to be the image of $\eRn$ under $\Abar$: \begin{equation*} \acolspace\A = \Abar(\eRn) = \set{\A\xbar:\:\xbar\in\eRn}. 
\end{equation*} If $\uu\in\Rn$ and $\xbar\in\extspace$, then Theorem~\ref{thm:mat-mult-def} implies that $\trans{\uu}\xbar$, which is a point in $\extspac{1}=\Rext$, is the same as $\xbar\cdot\uu$: \begin{proposition} \label{pr:trans-uu-xbar} Let $\xbar\in\extspace$ and $\uu\in\Rn$. Then $\trans{\uu}\xbar=\xbar\cdot\uu$. \end{proposition} \begin{proof} Let $\seq{\xx_t}$ be any sequence in $\Rn$ converging to $\xbar$. Then \[ \trans{\uu}\xbar = \lim (\trans{\uu} \xx_t) = \lim (\xx_t\cdot\uu) = \xbar\cdot\uu, \] with the first equality from Theorem~\ref{thm:mat-mult-def} (with $\A=\trans{\uu}$), and the third from Theorem~\ref{thm:i:1}(\ref{thm:i:1c}). \end{proof} Here are some basic properties of astral linear maps. Note that the first three generalize analogous properties of the coupling function (Propositions~\ref{pr:i:6},~\ref{pr:i:2} and~\ref{pr:astrons-exist}). \begin{proposition} \label{pr:h:4} Let $\A\in\Rmn$, and let $\xbar\in\extspace$. Then: \begin{letter-compact} \item \label{pr:h:4c} $\A(\xbar\plusl\ybar) = \A\xbar \plusl \A\ybar$, for $\ybar\in\extspace$. \item \label{pr:h:4g} \label{pr:h:4e} $\A (\alpha\,\xbar) = \alpha(\A\xbar)$, for $\alpha\in\Rext$. \item \label{pr:h:4lam-mat} $\lambda(\A\xbar) = (\lambda \A) \xbar$, for $\lambda\in\R$. \item \label{pr:h:4d} $\B(\A\xbar) = (\B\A)\xbar$, for $\B\in\R^{\ell\times m}$. \item \label{pr:h:4:iden} $\Iden\,\xbar = \xbar$, where $\Iden$ is the $n\times n$ identity matrix. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:h:4c}):} By Theorem~\ref{thm:mat-mult-def} and Proposition~\ref{pr:i:6}, for all $\uu\in\Rn$, \begin{align*} \smash{\bigParens{\A(\xbar\plusl\ybar)}\cdot\uu} &= (\xbar\plusl\ybar)\cdot(\trans{\A}\uu) \\ &= \xbar\cdot(\trans{\A}\uu) \plusl \ybar\cdot(\trans{\A}\uu) \\ &= (\A\xbar)\cdot\uu \plusl (\A\ybar)\cdot\uu \\ &= (\A\xbar \plusl \A\ybar)\cdot\uu. \end{align*} The claim now follows by Proposition~\ref{pr:i:4}. 
\end{proof-parts} The proofs of the other parts are similar. \end{proof} When multiplying a vector $\xx\in\Rn$ by a matrix $\A\in\Rmn$, the result $\A\xx$ is a vector in $\Rm$ whose $j$-th component is computed by multiplying $\xx$ by the $j$-th row of~$\A$. In the same way, for an astral point $\xbar\in\extspace$, we might expect that $\A\xbar$ in $\extspac{m}$ can be viewed as the result of multiplying $\xbar$ by the individual rows of~$\A$. However, this is not the case. Matrix multiplication of astral points is far more holistic, retaining much more information about the astral points $\xbar$ being multiplied, as shown in the next example. \begin{example} \label{ex:mat-prod-not-just-row-prods} With $m=n=2$, let $\zbar=\A\xbar$ with \[ \A = \begin{bmatrix*}[r] 1 & 0 \\ -1 & 1 \\ \end{bmatrix*} \] and $\xbar=\limray{\ee_1}\plusl \beta \ee_2$, for some $\beta\in\R$ (with $\ee_1,\ee_2$ the standard basis vectors). Let $\trans{\aaa}_1=[1, 0]$ and $\trans{\aaa}_2=[-1,1]$ denote the rows of $\A$. Multiplying these rows separately by $\xbar$, we obtain by the Case Decomposition Lemma~(\Cref{lemma:case}) that $\trans{\aaa}_1\xbar=\xbar\cdot\aaa_1=+\infty$ and $\trans{\aaa}_2\xbar=\xbar\cdot\aaa_2=-\infty$. From this, it might appear that information about $\beta$ has been entirely erased by the process of multiplying by $\A$. But this is incorrect. Indeed, using Proposition~\ref{pr:h:4}, we can compute \[ \zbar=\A\xbar=\limray{(\A\ee_1)}\plusl\beta(\A\ee_2) =\limray{\vv}\plusl\beta\ee_2 \] where $\vv=\trans{[1, -1]}$. As a result, the value of $\beta$ can be readily extracted from $\zbar$ by coupling it with $\uu=\trans{[1,1]}$ since \[ \zbar\cdot\uu = \limray{(\vv\cdot\uu)}\plusl\beta\ee_2\cdot\uu = \beta. 
\] Alternatively, since this matrix $\A$ is invertible, $\xbar=\A^{-1} \zbar$, by Proposition~\ref{pr:h:4}(\ref{pr:h:4d},\ref{pr:h:4:iden}), which means $\xbar$ (including the value of $\beta$) can be entirely recovered from $\zbar$, though clearly not from the row products, $\trans{\aaa}_1\xbar$ and $\trans{\aaa}_2\xbar$. \end{example} Thus, in general, $\A\xbar$ need not be fully determined by the corresponding row products, as described above. Nevertheless, as shown next, $\A\xbar$ \emph{is} fully determined by those row products if, unlike in Example~\ref{ex:mat-prod-not-just-row-prods}, they are all finite, or if $\A\xbar$ is finite. \begin{proposition} \label{pr:mat-prod-is-row-prods-if-finite} Let $\A\in\Rmn$, let $\xbar\in\extspace$, and let $\bb\in\Rm$. For $j=1,\ldots,m$, let $\trans{\aaa}_j$ be the $j$-th row of $\A$ (so that $\trans{\A}=[\aaa_1,\ldots,\aaa_m]$ and each $\aaa_j\in\Rn$). Then $\A\xbar=\bb$ if and only if $\xbar\cdot\aaa_j=b_j$ for $j=1,\ldots,m$. \end{proposition} \begin{proof} Suppose first that $\A\xbar=\bb$. Then for $j=1,\ldots,m$, \[ b_j = \bb\cdot\ee_j = (\A\xbar)\cdot\ee_j = \xbar\cdot(\trans{\A}\ee_j) = \xbar\cdot\aaa_j \] where the third equality is by Theorem~\ref{thm:mat-mult-def} (and $\ee_1,\ldots,\ee_m$ are the standard basis vectors in $\Rm$). For the converse, suppose now that $\xbar\cdot\aaa_j=b_j$ for $j=1,\ldots,m$. Let $\uu=\trans{[u_1,\ldots,u_m]}\in\Rm$. Then for all $j$, \begin{equation} \label{eq:pr:mat-prod-is-row-prods-if-finite:1} \xbar\cdot(u_j \aaa_j) = u_j (\xbar\cdot\aaa_j) = u_j b_j. \end{equation} Since these values are all in $\R$, $\xbar\cdot(u_1 \aaa_1),\ldots,\xbar\cdot(u_m \aaa_m)$ are summable. Consequently, \begin{eqnarray} (\A\xbar)\cdot\uu = \xbar\cdot(\trans{\A} \uu) &=& \xbar\cdot\Parens{ \sum_{j=1}^m u_j \aaa_j } \nonumber \\ &=& \sum_{j=1}^m \xbar\cdot(u_j \aaa_j) = \sum_{j=1}^m u_j b_j = \bb\cdot\uu. 
\label{eq:pr:mat-prod-is-row-prods-if-finite:2} \end{eqnarray} The first and third equalities are by Theorem~\ref{thm:mat-mult-def} and Proposition~\ref{pr:i:1}, and the fourth is by \eqref{eq:pr:mat-prod-is-row-prods-if-finite:1}. Since \eqref{eq:pr:mat-prod-is-row-prods-if-finite:2} holds for all $\uu\in\Rm$, it follows that $\A\xbar=\bb$ (by Proposition~\ref{pr:i:4}). \end{proof} Proposition~\ref{pr:h:4} lists some basic properties of the astral linear map $\xbar\mapsto \A\xbar$. As we show next, if a function $F:\eRn\rightarrow\eRm$ satisfies a subset of these properties, then actually $F$ must be an astral linear map, that is, we must have $F(\xbar)=\A\xbar$ for some matrix $\A\in\Rmn$. Thus, the properties listed in the next theorem are both necessary and sufficient for a map to be astral linear. \begin{theorem} \label{thm:linear-trans-is-matrix-map} Let $F:\eRn\to\eRm$ satisfy the following: \begin{letter-compact} \item \label{thm:linear-trans-is-matrix-map:a} $F(\xbar\plusl\ybar)=F(\xbar)\plusl F(\ybar)$ for all $\xbar,\ybar\in\extspace$. \item \label{thm:linear-trans-is-matrix-map:b} $F(\alpha \xx) = \alpha F(\xx)$ for all $\alpha\in\Rext$ and $\xx\in\Rn$. \end{letter-compact} Then there exists a unique matrix $\A\in\Rmn$ for which $F(\xbar)=\A\xbar$ for all $\xbar\in\extspace$. \end{theorem} \begin{proof} We prove first that $F(\Rn)\subseteq \Rm$. Suppose to the contrary that $\ybar=F(\xx)$ for some $\xx\in\Rn$, and some $\ybar\in\extspac{m}\setminus\Rm$. Then \begin{equation} \label{eqn:thm:linear-trans-is-matrix-map:2} \zerov{m} = F(\zerov{n}) = F\bigParens{\xx\plusl (-\xx)} = F(\xx) \plusl F(-\xx) = \ybar \plusl F(-\xx). \end{equation} The first equality is because, by condition~(\ref{thm:linear-trans-is-matrix-map:b}), ${F(\zerov{n})=F(0\cdot\zerov{n})=0\cdot F(\zerov{n}) = \zerov{m}}$.
The second equality follows because leftward addition extends standard addition (Proposition~\ref{pr:i:7}\ref{pr:i:7e}), and the third equality follows from condition~(\ref{thm:linear-trans-is-matrix-map:a}). Since $\ybar\not\in\Rm$, we must have $\ybar\cdot\uu\not\in\R$ for some $\uu\in\Rm$ (by Proposition~\ref{pr:i:3}). Without loss of generality, we assume $\ybar\cdot\uu=+\infty$ (since if $\ybar\cdot\uu=-\infty$, we can replace $\uu$ with $-\uu$). Thus, by \eqref{eqn:thm:linear-trans-is-matrix-map:2}, \[ 0 = \zerov{m} \cdot\uu = \ybar\cdot\uu \plusl F(-\xx)\cdot\uu = +\infty, \] a clear contradiction. Having established $F(\Rn)\subseteq \Rm$, let $\A\in\Rmn$ be the matrix \[ \A=[F(\ee_1),\dotsc,F(\ee_n)], \] that is, whose $i$-th column is $F(\ee_i)$, where $\ee_1,\dotsc,\ee_n$ are the standard basis vectors in $\Rn$. Then for any vector $\xx=\trans{[x_1,\dotsc,x_n]}$ in $\Rn$, we have \begin{equation} \label{eqn:thm:linear-trans-is-matrix-map:1} F(\xx) = F\paren{\sum_{i=1}^n x_i \ee_i} = \sum_{i=1}^n F(x_i \ee_i) = \sum_{i=1}^n x_i F(\ee_i) = \A\xx \end{equation} with the second equality following from repeated application of condition~(\ref{thm:linear-trans-is-matrix-map:a}), the third from condition~(\ref{thm:linear-trans-is-matrix-map:b}), and the last equality from $\A$'s definition. Next, let $\xbar\in\extspace$. By Corollary~\ref{cor:h:1}, we can write $\xbar= \limray{\vv_1} \plusl \dotsb \plusl \limray{\vv_k} \plusl \qq$ for some $\qq,\vv_1,\dotsc,\vv_k\in\Rn$. 
Then \begin{align*} F(\xbar) &= F(\limray{\vv_1} \plusl \dotsb \plusl \limray{\vv_k} \plusl \qq) \\ &= F(\limray{\vv_1}) \plusl \dotsb \plusl F(\limray{\vv_k}) \plusl F(\qq) \\ &= \limray{\bigParens{F(\vv_1)}} \plusl \dotsb \plusl \limray{\bigParens{F(\vv_k)}} \plusl F(\qq) \\ &= \limray{(\A\vv_1)} \plusl \dotsb \plusl \limray{(\A\vv_k)} \plusl \A\qq \\ &= \A(\limray{\vv_1}) \plusl \dotsb \plusl \A(\limray{\vv_k}) \plusl \A\qq \\ &= \A\paren{ \limray{\vv_1} \plusl \dotsb \plusl \limray{\vv_k} \plusl \qq } \\ &= \A\xbar. \end{align*} The second and third equalities are by conditions~(\ref{thm:linear-trans-is-matrix-map:a}) and~(\ref{thm:linear-trans-is-matrix-map:b}), respectively. The fourth equality is by \eqref{eqn:thm:linear-trans-is-matrix-map:1}. The fifth and sixth equalities are by Proposition~\ref{pr:h:4}(\ref{pr:h:4e},\ref{pr:h:4c}). Thus, $F(\xbar)=\A\xbar$ for all $\xbar\in\extspace$. If, for some matrix $\B\in\Rmn$, we also have $F(\xbar)=\B\xbar$ for all $\xbar\in\extspace$, then $\A\ee_i=F(\ee_i)=\B\ee_i$ for $i=1,\dotsc,n$, meaning the $i$-th columns of $\A$ and $\B$ are identical, and therefore $\A=\B$. Hence, $\A$ is also unique. \end{proof} The two conditions of Theorem~\ref{thm:linear-trans-is-matrix-map} are roughly analogous to the standard definition of a linear transformation between two vector spaces: noncommutative leftward addition replaces vector addition in condition~(\ref{thm:linear-trans-is-matrix-map:a}), and condition~(\ref{thm:linear-trans-is-matrix-map:b}) is required only for points $\xx$ in $\Rn$, rather than all of $\extspace$, but with the scalar $\alpha$ ranging over all of $\Rext$, rather than only $\R$.
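The construction at the heart of the proof, recovering $\A$ column by column from the values $F(\ee_1),\dotsc,F(\ee_n)$, can be illustrated on the finite part of the map. The following Python sketch (illustrative only; the particular map $F$ below is a hypothetical example, not taken from the text) rebuilds the matrix from a black-box linear map and checks that $F(\xx)=\A\xx$ on a sample point.

```python
# Hypothetical black-box linear map F : R^3 -> R^2 (for illustration only).
def F(x):
    return [2 * x[0] - x[1], x[1] + 3 * x[2]]

n, m = 3, 2

# Recover A column by column from the standard basis, as in the proof:
# the i-th column of A is F(e_i).
basis = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
A_cols = [F(e) for e in basis]

def matvec(x):
    # A x computed from the reconstructed columns
    return [sum(A_cols[i][j] * x[i] for i in range(n)) for j in range(m)]

x = [1.0, -2.0, 0.5]
print(F(x), matvec(x))  # the two agree: F(x) = A x
```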
If we instead only require that condition~(\ref{thm:linear-trans-is-matrix-map:b}) hold for finite $\alpha$ (in $\R$, not $\Rext$), then Theorem~\ref{thm:linear-trans-is-matrix-map} is no longer true, in general, as seen in the next example: \begin{example} With $m=n=1$, suppose $F:\Rext\rightarrow\Rext$ is defined, for $\barx\in\Rext$, by \[ F(\barx) = \begin{cases} +\infty & \text{if $\barx=-\infty$,}\\ \barx & \text{if $\barx\in\R$,}\\ -\infty & \text{if $\barx=+\infty$.} \end{cases} \] Then it can be checked that $F$ satisfies condition~(\ref{thm:linear-trans-is-matrix-map:a}) of Theorem~\ref{thm:linear-trans-is-matrix-map}, and also condition~(\ref{thm:linear-trans-is-matrix-map:b}), but only for $\alpha\in\R$. That is, $F(\lambda x) = \lambda F(x)$ for all $\lambda\in\R$ and $x\in\R$, but, for instance, $F(\limray{(1)}) \neq \limray{(F(1))}$. Furthermore, we cannot write $F$ as a linear map $\barx\mapsto A\barx$ for any matrix (scalar, actually) $A\in\R^{1\times 1}$. \end{example} Using astral linear maps as building blocks, we define \emph{astral affine maps} as functions that map each point $\xbar$ in $\extspace$ to $\bbar\plusl \A\xbar$ in $\extspac{m}$, for some matrix $\A\in\R^{m\times n}$ and point $\bbar\in\extspac{m}$. As a corollary of \Cref{thm:linear:cont}, all such maps are continuous: \begin{corollary} \label{cor:aff-cont} Let $\A\in\Rmn$ and $\bbar\in\extspac{m}$, and let $F:\extspace\rightarrow\extspac{m}$ be defined by $F(\xbar)=\bbar\plusl\A\xbar$ for $\xbar\in\extspace$. Then $F$ is continuous. \end{corollary} \begin{proof} Let $\seq{\xbar_t}$ be a sequence in $\extspace$ that converges to some point $\xbar\in\extspace$. Then $\A\xbar_t\to\A\xbar$ by \Cref{thm:linear:cont}(\ref{thm:linear:cont:b}), and thus $\bbar\plusl\A\xbar_t\to\bbar\plusl\A\xbar$ by \Cref{pr:i:7}(\ref{pr:i:7f}). The continuity of $F$ now follows from $\extspace$ being first-countable (and \Cref{prop:first:properties}\ref{prop:first:cont}). 
\end{proof} Astral affine maps are precisely the maps $F:\eRn\to\extspac{m}$ that satisfy $F(\xbar\plusl\ybar)=F(\xbar)\plusl L(\ybar)$ for some astral linear map $L$. (Astral affine maps clearly satisfy this condition, and any map that satisfies the condition can be seen to be astral affine by setting $\bbar=F(\zero)$ and noting that $L(\xbar)=\A\xbar$ for a suitable $\A$.) This generalizes an analogous characterization of affine maps from $\Rn$ to $\Rm$. Astral affine maps include continuous extensions of the standard affine maps, corresponding to $\bbar\in\R^{m}$, but also additional maps, corresponding to infinite $\bbar$. Note that the map formed by adding a point $\bbar$ on the \emph{right}, that is, a map of the form $\xbar\mapsto\A\xbar\plusl\bbar$, need not be continuous. For example, with $n=m=1$, the function $F(\barx)=\barx\plusl(-\infty)$, where $\barx\in\Rext$, is not continuous: On the sequence $x_t=t$, whose limit is $+\infty$, we see that $F(x_t)=-\infty$ for all $t$, but $F(+\infty)=+\infty$. \subsection{Astral points in matrix form} \label{sec:matrix:form} We have seen already, for instance in Corollary~\ref{cor:h:1}, that leftward sums of the form \begin{equation} \label{eqn:pf:4} \limray{\vv_1} \plusl \dotsb \plusl \limray{\vv_k} \end{equation} emerge naturally in studying astral space. We will next express such sums using matrices, which will result in more compact expressions that can be analyzed and manipulated with tools from linear algebra. Specifically, we rewrite \eqref{eqn:pf:4} in a compact form as $\VV\omm$ where $\VV=[\vv_1,\dotsc,\vv_k]$ denotes the matrix in $\R^{n\times k}$ whose $i$-th column is $\vv_i$, and where $\omm$ is a suitable astral point.
To derive a suitable $\omm$, note first that the astral point appearing in \eqref{eqn:pf:4} is the limit of any sequence whose elements have the form \[ \VV\bb_t = b_{t,1} \vv_1 + \dotsb + b_{t,k} \vv_k, \] where $\bb_t = \trans{[b_{t,1},\dotsc,b_{t,k}]}$ and, for $i=1,\dotsc,k$, the sequence $\seq{b_{t,i}}$ grows to $+\infty$, with $\seq{b_{t,1}}$ growing fastest and each next sequence $\seq{b_{t,i}}$ growing slower than the preceding one, so that $b_{t,i+1}/b_{t,i}\rightarrow 0$ (see Theorem~\ref{thm:i:seq-rep}). Thus, to express \eqref{eqn:pf:4} as $\VV\omm$, the limit of a sequence $\seq{\VV\bb_t}$ as above, we informally want to think of $\omm$ as being like a column vector, all of whose elements are infinite, with the first element being the ``most infinite'' and each following element being ``less infinite'' than the previous one. Formally, we define $\ommk$ to be the point in $\extspac{k}$ defined by \begin{equation} \label{eq:i:7} \ommk = \limray{\ee_1} \plusl \dotsb \plusl \limray{\ee_k}, \end{equation} where $\ee_i\in\R^k$ is the $i$-th standard basis vector. Then by Proposition~\ref{pr:h:4}, \begin{align} \notag \VV \ommk &= \VV \bigParens{ \limray{\ee_1} \plusl \dotsb \plusl \limray{\ee_k} } \\ \notag &= \limray{(\VV\ee_1)} \plusl \dotsb \plusl \limray{(\VV\ee_k)} \\ \label{eqn:mat-omm-defn} &= \limray{\vv_1} \plusl \dotsb \plusl \limray{\vv_k}. \end{align} In other words, $\VV \ommk$, with $\ommk$ defined as in \eqref{eq:i:7}, is equal to \eqref{eqn:pf:4}, now stated in matrix form. When clear from context, we omit $\ommk$'s subscript and write simply $\omm$. When $k=0$, we define $\ommsub{0}=\zerovec$, the only element of $\extspac{0}=\R^0$. In this case, $\VV=\zeromat{n}{0}$, the only matrix in $\R^{n\times 0}$. Thus, $\VV\ommsub{0}=\zeromat{n}{0} \zerovec = \zerov{n}$, or more simply, $\VV\omm=\zero$. (Zero-dimensional Euclidean space $\R^0$ is reviewed in Section~\ref{sec:prelim-zero-dim-space}.) 
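The behavior of such sequences can be illustrated numerically (this sketch is illustrative only, not part of the formal development). With $k=2$ columns in $\R^3$, taking $b_{t,1}=t^2$ and $b_{t,2}=t$, so that $b_{t,2}/b_{t,1}\rightarrow 0$, the inner products $(\VV\bb_t)\cdot\uu$ are eventually dominated by the first column of $\VV$ not orthogonal to $\uu$, in agreement with the sign rule for $\omm$.

```python
def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

# V = [v1, v2] with v1 = e1, v2 = e2 in R^3 (stored by rows: V is 3 x 2).
V = [[1.0, 0.0],
     [0.0, 1.0],
     [0.0, 0.0]]
u = [-1.0, 5.0, 7.0]       # V^T u = (-1, 5): first nonzero entry is negative

# b_t = (t^2, t): b_{t,1} grows fastest and b_{t,2}/b_{t,1} -> 0.
vals = []
for t in [10, 100, 1000]:
    b = (t ** 2, t)
    x = [dot(row, b) for row in V]      # x = V b_t = t^2 v1 + t v2
    vals.append(dot(x, u))

print(vals)  # decreasing without bound, matching (V omega).u = -infinity
```

Here $(\VV\bb_t)\cdot\uu=-t^2+5t$, which tends to $-\infty$ because the dominant first column couples negatively with $\uu$, even though the second column couples positively.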
In matrix notation, \Cref{cor:h:1} states that astral space consists exactly of all points of the form $\VV\omm\plusl \qq$ for some matrix $\VV$ and vector $\qq$. That is, \begin{equation} \label{eq:i:6} \extspace = \{ \VV\omm\plusl\qq : \VV\in\R^{n\times k}, \qq\in\Rn, k\geq 0 \}. \end{equation} Thus, every point $\xbar\in\extspace$ can be written as $\xbar=\VV\omm\plusl\qq$. We refer to $\VV\omm\plusl\qq$, or more formally, the pair $(\VV,\qq)$, as a \emph{matrix representation}, or \emph{matrix form}, of the point $\xbar$. In matrix form, matrix-vector product is simple to compute: If $\A\in\Rmn$ and $\xbar\in\extspace$, then we can write $\xbar$ in the form $\xbar=\VV\omm\plusl\qq$ as above, and can then immediately compute the corresponding form of $\A\xbar$ using Proposition~\ref{pr:h:4} as \[ \A\xbar = (\A\VV)\omm \plusl (\A\qq). \] We next derive various properties of matrix representation and rules for manipulating it. We begin with a simple generalization of \eqref{eqn:mat-omm-defn}, which shows how to decompose $\VV\omm$ when $\VV$ is written as a concatenation of matrices rather than just columns: \begin{proposition} \label{prop:split} Consider matrices $\VV_i\in\R^{n\times k_i}$ for $i=1,\dots,\ell$, with $k_i\ge 0$, and let $\VV=[\VV_1,\dotsc,\VV_\ell]\in\R^{n\times k}$, where $k=k_1+\dotsb+k_\ell$, be their concatenation. Then \[ \VV\omm=\VV_1\omm\plusl\dotsb\plusl\VV_\ell\omm. \] \end{proposition} \begin{proof} If $k=0$, we must have $k_i=0$ for all $i$, and so $\VV\omm=\zero$ as well as $\VV_i\omm=\zero$ for all $i$, and hence the proposition follows. Otherwise, begin by omitting terms with $k_i=0$ from the right-hand side, for which we must have $\VV_i\omm=\zero$. The proposition then follows by expanding $\VV\omm$ on the left-hand side as well as the remaining terms $\VV_i\omm$ on the right-hand side using \eqref{eqn:mat-omm-defn}. 
\end{proof} For the next few results, it will be useful to explicitly characterize $\omm\inprod\uu$: \begin{proposition} \label{prop:omm:inprod} Let $\uu\in\R^k$. Then \[ \omm\inprod\uu= \begin{cases} 0&\text{if $\uu=\zero$,} \\ +\infty&\text{if the first nonzero coordinate of $\uu$ is positive,} \\ -\infty&\text{if the first nonzero coordinate of $\uu$ is negative.} \end{cases} \] \end{proposition} \begin{proof} Immediate from the definition of $\omm$ in \eqref{eq:i:7} and the Case Decomposition Lemma (\Cref{lemma:case}). \end{proof} A key consequence of \Cref{prop:omm:inprod} is that $\omm\inprod\uu\in\set{\pm\infty}$ whenever $\uu\ne\zero$. We use this to show how to express the value of $\xbar\cdot\uu$ for $\xbar=\VV\omm\plusl\qq$, depending on whether $\uu$ is orthogonal to the columns of $\VV$. In particular, we show that $\xbar\cdot\uu\in\R$ if and only if $\uu\perp\VV$. \begin{proposition} \label{pr:vtransu-zero} Let $\xbar=\VV\omm\plusl \qq$ where $\VV\in\R^{n\times k}$, for some $k\geq 0$, and $\qq\in\Rn$. For all $\uu\in\Rn$, if $\uu\perp\VV$ then $\xbar\cdot\uu=\qq\cdot\uu\in\R$; otherwise, $\xbar\cdot\uu=(\VV\omm)\cdot\uu \in \{-\infty,+\infty\}$. \end{proposition} \begin{proof} If $\uu\perp\VV$, then $\trans{\uu}\VV=\zero$, and hence \begin{align*} \xbar\cdot\uu=\trans{\uu}\xbar & =(\trans{\uu}\VV)\omm\plusl(\trans{\uu}\qq) =\zero\omm\plusl(\trans{\uu}\qq) =\qq\cdot\uu. \intertext{Otherwise, $\trans{\uu}\VV\ne\zero$ and thus, by \Cref{prop:omm:inprod}, $(\trans{\uu}\VV)\omm\in\set{-\infty,+\infty}$, implying } \xbar\cdot\uu=\trans{\uu}\xbar & =(\trans{\uu}\VV)\omm\plusl(\trans{\uu}\qq) =(\trans{\uu}\VV)\omm =(\VV\omm)\inprod\uu. \qedhere \end{align*} \end{proof} In words, \Cref{pr:vtransu-zero} states that if $\uu$ has a nonzero projection onto the column space of $\VV$ then the value of $(\VV\omm\plusl\qq)\inprod\uu$ is already determined by $\VV\omm$; only when $\uu$ is orthogonal to $\VV$ does the evaluation proceed to the next term, which is $\qq$.
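These case rules are simple enough to mechanize. The following Python sketch (illustrative only; the helper names are ad hoc) evaluates $(\VV\omm\plusl\qq)\cdot\uu$ by scanning the columns of $\VV$ for the first one not orthogonal to $\uu$, in accordance with \Cref{prop:omm:inprod} and \Cref{pr:vtransu-zero}.

```python
import math

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

def couple(cols, q, u):
    # ([v1,...,vk] omega +l q) . u: the first column not orthogonal to u
    # decides the value (+inf or -inf); if u is orthogonal to all columns,
    # the evaluation falls through to the finite part q . u.
    for v in cols:
        s = dot(v, u)
        if s > 0:
            return math.inf
        if s < 0:
            return -math.inf
    return dot(q, u)

v1, v2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
q = [0.0, 0.0, 2.0]

print(couple([v1, v2], q, [0.0, 0.0, 1.0]))   # u perp V: finite value q.u
print(couple([v1, v2], q, [0.0, -3.0, 1.0]))  # u not perp V: infinite
```

As the proposition predicts, the value is finite exactly when $\uu$ is orthogonal to every column of $\VV$.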
A similar case analysis underlies the following important property of leftward sums: \begin{lemma}[Projection Lemma] \label{lemma:proj} Let $\VV\in\R^{n\times k}$, $\zbar\in\eRn$, and let $\PP$ be the projection matrix onto $(\colspace\VV)^\perp$. Then $\VV\omm\plusl\zbar=\VV\omm\plusl\PP\zbar$. \end{lemma} \begin{proof} Let $\uu\in\Rn$. If $\uu\perp\VV$, then $\PP\uu=\uu$, and hence \begin{align*} \trans{\uu}\bigParens{\VV\omm\plusl\zbar} & = \trans{\uu}(\VV\omm)\plusl\trans{\uu}\zbar \\ & = \trans{\uu}(\VV\omm)\plusl\trans{\uu}\PP\zbar = \trans{\uu}\bigParens{\VV\omm\plusl\PP\zbar} \end{align*} by \Cref{pr:proj-mat-props}(\ref{pr:proj-mat-props:a},\ref{pr:proj-mat-props:d}). Otherwise, $\trans{\uu}\VV\ne\zero$ so that, by \Cref{prop:omm:inprod}, $(\trans{\uu}\VV)\omm\in\set{-\infty,+\infty}$, implying \begin{align*} \trans{\uu}\bigParens{\VV\omm\plusl\zbar} & = (\trans{\uu}\VV)\omm\plusl\trans{\uu}\zbar = (\trans{\uu}\VV)\omm \\ & = (\trans{\uu}\VV)\omm\plusl\trans{\uu}\PP\zbar = \trans{\uu}\bigParens{\VV\omm\plusl\PP\zbar}. \end{align*} Thus, for all $\uu\in\Rn$, $(\VV\omm\plusl\zbar)\inprod\uu=(\VV\omm\plusl\PP\zbar)\inprod\uu$, so $\VV\omm\plusl\zbar=\VV\omm\plusl\PP\zbar$ (by \Cref{pr:i:4}). \end{proof} The lemma says that in leftward sums of the form $\VV\omm\plusl\zbar$, we can replace $\zbar$ by its projection onto $\comColV$. Stated differently, the values $\zbar\inprod\uu$ along vectors $\uu$ that are orthogonal to $\VV$ fully determine the astral point corresponding to the leftward sum $\VV\omm\plusl\zbar$, and so any transformation of $\zbar$ that does not affect those values yields the same astral point. For example, if $\xbar=[\vv_1,\dotsc,\vv_k]\omm\plusl\qq$, then we can add any multiple of $\vv_1$ to vectors $\vv_2,\dotsc,\vv_k$ or $\qq$ without affecting the value of $\xbar$. This follows by the Projection Lemma with $\VV=[\vv_1]$ and $\zbar=[\vv_2,\dotsc,\vv_k]\omm\plusl\qq$. 
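The Projection Lemma can also be spot-checked numerically. The sketch below (illustrative only, with ad hoc helper names) works in $\R^3$ with $\VV=[\ee_1]$: replacing the tail $\zbar=\limray{\ww}\plusl\qq$ by its projection onto $\{\ee_1\}^\perp$ leaves every coupling value unchanged on a grid of test vectors $\uu$.

```python
import itertools
import math

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

def couple(cols, q, u):
    # evaluate ([v1,...,vk] omega +l q) . u by the case rule
    for v in cols:
        s = dot(v, u)
        if s != 0:
            return math.inf if s > 0 else -math.inf
    return dot(q, u)

e1 = [1.0, 0.0, 0.0]
w = [2.0, 1.0, 0.0]        # the tail is zbar = limray(w) +l q
q = [5.0, 0.0, 3.0]

def proj(x):
    # projection onto span{e1}^perp: zero out the first coordinate
    return [0.0, x[1], x[2]]

# Compare e1 omega +l zbar with e1 omega +l P zbar on a grid of u's.
grid = [list(u) for u in itertools.product([-1.0, 0.0, 1.0], repeat=3)]
same = all(couple([e1, w], q, u) == couple([e1, proj(w)], proj(q), u)
           for u in grid)
print(same)  # True
```

Whenever $\uu\cdot\ee_1\neq 0$ the first column already decides the value, and whenever $\uu\perp\ee_1$ the projection does not change $\ww\cdot\uu$ or $\qq\cdot\uu$, which is exactly the case analysis in the proof.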
More generally, we can add any linear combination of vectors $\vv_1,\dotsc,\vv_{i-1}$ to vectors $\vv_i,\dotsc,\vv_k$ or $\qq$ without affecting the value of $\xbar$. To study such equivalent transformations more generally, it will be convenient to represent them using {positive upper triangular} matrices, as defined in Section~\ref{sec:prelim:pos-up-tri-mat}. The next proposition shows that positive upper triangular matrices represent the operation of adding a linear combination of vectors $\vv_1,\dotsc,\vv_{i-1}$ to vectors $\vv_i,\dotsc,\vv_k$, while possibly also multiplying each $\vv_i$ by a positive scalar, which has no effect on $\VV\omm$, since $\lambda(\limray{\vv})=\limray(\lambda\vv)$ for $\vv\in\Rn$ and $\lambda\in\Rstrictpos$. \begin{proposition} \label{prop:pos-upper:descr} Let $k\ge 0$ and let $\VV=[\vv_1,\dotsc,\vv_k]$ and $\VV'=[\vv'_1,\dotsc,\vv'_k]$, where $\vv_i,\vv'_i\in\R^n$ for $i=1,\dotsc,k$. Then the following are equivalent: \begin{letter-compact} \item \label{prop:pos-upper:descr:a} There exists a positive upper triangular matrix $\RR\in\R^{k\times k}$ such that $\VV'=\VV\RR$. \item \label{prop:pos-upper:descr:b} For $j=1,\ldots,k$, each column $\vv'_j$ is the sum of a positive multiple of $\vv_j$ and a linear combination of the vectors $\vv_1,\dotsc,\vv_{j-1}$. \end{letter-compact} \end{proposition} \begin{proof} Let $\RR\in\R^{k\times k}$ be a positive upper triangular matrix with entries $r_{ij}$ and columns denoted $\rr_j$. Let $\VV''=[\vv''_1,\dotsc,\vv''_k]$ be the matrix $\VV''=\VV\RR$. Then, from the definition of matrix multiplication, \[ \vv''_j=\VV\rr_j=\sum_{i=1}^j r_{ij}\vv_i \] for $j=1,\ldots,k$. The implication \RefsImplication{prop:pos-upper:descr:a}{prop:pos-upper:descr:b} then follows by setting $\VV'=\VV''$. 
The implication \RefsImplication{prop:pos-upper:descr:b}{prop:pos-upper:descr:a} follows by setting $r_{ij}$ so that $\vv'_j=\sum_{i=1}^j r_{ij}\vv_i$, which is possible, by assumption, with $r_{ii}>0$ (and $r_{ij}=0$ for $i>j$). \end{proof} We next show that the astral point $\omm$ is unchanged when multiplied by a positive upper triangular matrix $\RR$, and when any finite vector $\bb$ is added to it. This will immediately imply that an astral point with representation $\VV\omm\plusl\qq$ can be expressed alternatively with $\VV$ replaced by $\VV\RR$ and $\qq$ replaced by $\qq+\VV\bb$, corresponding to the operation of adding linear combinations of vectors appearing earlier in the matrix representation to those appearing later. The fact that this operation has no effect on the value of an astral point is stated formally below as the \emph{Push Lemma}. \begin{lemma} \label{lemma:RR_omm} Let $k\ge 0$, let $\RR\in\R^{k\times k}$ be a positive upper triangular matrix, and let $\bb\in\R^k$. Then $\RR\omm\plusl\bb=\omm$. \end{lemma} \begin{proof} First note that $\omm\plusl\bb=\Iden\omm\plusl\bb$, so by the Projection Lemma (\Cref{lemma:proj}), $\Iden\omm\plusl\bb=\Iden\omm\plusl\zero=\omm$. Thus, $\omm\plusl\bb=\omm$. It remains to show that $\RR\omm=\omm$. Denote the entries of $\RR$ as $r_{ij}$ and its columns as $\rr_j$. To prove that $\RR\omm=\omm$, we need to show that $(\RR\omm)\cdot\uu = \omm\cdot\uu$ for all $\uu\in\R^{k}$. As such, let $\uu\in\R^k$, with entries $u_j$, and let $\uu'=\trans{\RR}\uu$, with entries $u'_j$. Then by matrix algebra, and since $\RR$ is upper triangular, \begin{equation} \label{eq:RR_omm:2} u'_j=\sum_{i=1}^j u_i r_{ij}. \end{equation} We claim that $\omm\cdot\uu' = \omm\cdot\uu$. If $\uu=\zero$ then also $\uu'=\zero$, implying the claim. Otherwise, let $\ell$ be the smallest index for which $u_{\ell}\neq 0$, so that $u_j=0$ for $j<\ell$.
By \eqref{eq:RR_omm:2}, $u'_j=0$ for $j<\ell$ and $u'_{\ell}=u_{\ell}r_{\ell\ell}$. Thus, $\ell$ is also the smallest index for which $u'_{\ell}\neq 0$, and $u'_{\ell}$ has the same sign as $u_{\ell}$. Therefore, by \Cref{prop:omm:inprod}, $\omm\cdot\uu = \omm\cdot\uu'=\omm\cdot(\trans{\RR}\uu)$. By \Cref{thm:mat-mult-def}, this further implies that $\omm\cdot\uu=(\RR\omm)\cdot\uu$. Since this holds for all $\uu\in\R^k$, we conclude that $\omm=\RR\omm$ (by \Cref{pr:i:4}). \end{proof} \begin{lemma}[Push Lemma] \label{pr:g:1} Let $\VV\in\R^{n\times k}$ and $\qq\in\Rn$. Further, let $\RR\in\R^{k\times k}$ be a positive upper triangular matrix and $\bb\in\R^k$. Then $\VV\omm\plusl\qq=(\VV\RR)\omm\plusl(\qq+\VV\bb)$. \end{lemma} \begin{proof} By \Cref{lemma:RR_omm}, we have \begin{align*} (\VV\RR)\omm\plusl(\qq+\VV\bb) &= \VV(\RR\omm)\plusl\VV\bb\plusl\qq \\ &= \VV(\RR\omm\plusl\bb)\plusl\qq \\ &= \VV\omm\plusl\qq. \qedhere \end{align*} \end{proof} The next proposition, following directly from the Projection Lemma (\Cref{lemma:proj}), shows that any point $\VV\omm$ can be rewritten using a linearly independent subset of the columns of $\VV$ (an empty set of columns is considered linearly independent). \begin{proposition} \label{prop:V:indep} Let $\xbar= [\vv_1,\dotsc,\vv_k]\omm$ with $k\ge 0$. If $\vv_i\in\spn\set{\vv_1,\dotsc,\vv_{i-1}}$, for some $i\ge 1$, then $\xbar=[\vv_1,\dotsc,\vv_{i-1},\vv_{i+1},\dotsc,\vv_k]\omm$. Therefore, there exists $\ell\ge 0$ and a subset of indices $ 1\leq i_1 < i_2 < \dotsb < i_{\ell} \leq k $ such that $\xbar=[\vv_{i_1},\dotsc,\vv_{i_{\ell}}]\omm$ and $\vv_{i_1},\dotsc,\vv_{i_{\ell}}$ are linearly independent. \end{proposition} \begin{proof} Suppose $\vv_i\in\spnfin{\vv_1,\dotsc,\vv_{i-1}}$, and let $\VV=[\vv_1,\dotsc,\vv_{i-1}]$ and $\VV'=[\vv_{i+1},\dotsc,\vv_k]$. Let $\PP$ be the projection matrix onto $(\colspace\VV)^\perp$, the orthogonal complement of the column space of $\VV$. 
Since $\vv_i$ is in $\colspace{\VV}$, we have $\PP\vv_i=\zero$ (\Cref{pr:proj-mat-props}\ref{pr:proj-mat-props:e}). Thus, using \Cref{prop:split} and the Projection Lemma (\Cref{lemma:proj}), \begin{align*} \xbar =[\VV,\vv_i,\VV']\omm & =\VV\omm\plusl\vv_i\omm\plusl\VV'\omm \\ & =\VV\omm\plusl(\PP\vv_i)\omm\plusl\VV'\omm =\VV\omm\plusl\zero\plusl\VV'\omm =[\VV,\VV']\omm. \end{align*} By repeatedly removing linearly dependent vectors in this way, we eventually must end up with a linearly independent subset, as claimed in the proposition. \end{proof} \begin{definition} The \emph{astral rank} of a point $\xbar\in\extspace$ is the smallest number of columns $k$ for which there exist a matrix $\VV\in\R^{n\times k}$ and $\qq\in\Rn$ with $\xbar=\VV\omm\plusl\qq$. \end{definition} Astral rank captures the dimensionality of the set of directions in which a sequence with limit $\xbar$ is going to infinity. The set of points with astral rank $0$ is exactly~$\Rn$. Points with astral rank $1$ take the form $\limray{\vv}\plusl\qq$ with $\vv\ne\zero$ and arise as limits of sequences that go to infinity along a halfline. Points with higher astral ranks arise as limits of sequences that go to infinity in one dominant direction, but also in some secondary direction, and possibly a tertiary direction, and so on, as discussed in Section~\ref{subsec:astral-pt-form}. \begin{theorem} \label{thm:ast-rank-is-mat-rank} Let $\xbar=\VV\omm\plusl\qq$ for some $\VV\in\R^{n\times k}$ and $\qq\in\Rn$. Then the astral rank of $\xbar$ is equal to the matrix rank of $\VV$. \end{theorem} \begin{proof} Let $r$ be the astral rank of $\xbar$, which means that there exist a matrix $\VV'\in\R^{n\times r}$ and a vector $\qq'\in\R^n$ such that $\xbar=\VV'\omm\plusl\qq'$. The columns of $\VV'$ must be linearly independent, for otherwise we could remove some of them (by \Cref{prop:V:indep}), yielding a representation of $\xbar$ with fewer than $r$ columns and contradicting the definition of astral rank.
Since $\VV'$ consists of $r$ linearly independent columns, its rank is $r$. To prove the theorem, it suffices to show that the rank of $\VV$ is equal to the rank of~$\VV'$. Let $L$ be the set of vectors $\uu\in\Rn$ such that $\xbar\inprod\uu\in\R$. Since $\xbar=\VV\omm\plusl\qq=\VV'\omm\plusl\qq'$, we obtain by \Cref{pr:vtransu-zero} that $L=(\colspace\VV)^\perp=(\colspace\VV')^\perp$. Thus, $L^\perp=\colspace\VV=\colspace\VV'$ (by \Cref{pr:std-perp-props}\ref{pr:std-perp-props:c}). Therefore, the rank of $\VV$ is equal to the rank of $\VV'$. \end{proof} As a corollary, we can show that astral rank is subadditive in the following sense: \begin{corollary} \label{cor:rank:subadd} Let $\xbar,\xbar_1,\xbar_2\in\eRn$ be such that $\xbar=\xbar_1\plusl\xbar_2$, and let $r,r_1,r_2$ be their respective astral ranks. Then \[ \max\{r_1, r_2\} \leq r\le r_1+r_2. \] \end{corollary} \begin{proof} Let $\xbar_1=\VV_1\omm\plusl\qq_1$ and $\xbar_2=\VV_2\omm\plusl\qq_2$. By Propositions~\ref{pr:i:7}(\ref{pr:i:7d}) and~\ref{prop:split}, \[ \xbar = \xbar_1\plusl\xbar_2 = (\VV_1\omm\plusl\qq_1) \plusl (\VV_2\omm\plusl\qq_2) = [\VV_1,\VV_2]\omm\plusl(\qq_1+\qq_2). \] The column space of $[\VV_1,\VV_2]$ is equal to the sum $\colspace{\VV_1}+\colspace{\VV_2}$ since any linear combination over the columns of $[\VV_1,\VV_2]$ naturally decomposes into the sum of a linear combination over the columns of $\VV_1$ plus another over the columns of $\VV_2$. Therefore, \[ r = \dim(\colspace[\VV_1,\VV_2]) \leq \dim(\colspace{\VV_1}) + \dim(\colspace{\VV_2}) = r_1 + r_2, \] where the inequality is by \Cref{pr:subspace-intersect-dim}, and the two equalities are both by \Cref{thm:ast-rank-is-mat-rank} (and definition of a matrix's rank as the dimension of its column space). Similarly, the linear subspace $\colspace[\VV_1,\VV_2]$ includes $\colspace{\VV_i}$, for $i=1,2$, so $r_i=\dim(\colspace{\VV_i})\leq \dim(\colspace[\VV_1,\VV_2])=r$.
\end{proof} \subsection{Canonical representation} \label{sec:canonical:rep} Although the same astral point may have multiple representations, it is possible to obtain a unique representation of a particularly natural form: \begin{definition} For a matrix $\VV\in\Rnk$ and $\qq\in\Rn$, we say that $\VV\omm\plusl\qq$ is a \emph{canonical representation} if $\VV$ is column-orthogonal (so that the columns of $\VV$ are orthonormal), and if $\qq\perp\VV$. \end{definition} \Cref{cor:h:1} showed that every astral point can be represented in this canonical form. We show next that the canonical representation of every point is in fact unique. \begin{theorem} \label{pr:uniq-canon-rep} Every point in $\extspace$ has a unique canonical representation. \end{theorem} Before proving the theorem, we give a lemma that will be one of the main steps in proving uniqueness. We state the lemma in a form that is a bit more general than needed at this point since it will get used again shortly. To state the lemma, we need some additional terminology. We say that a matrix $\VV'\in\R^{n\times k'}$ is a \emph{prefix} of a matrix $\VV\in\R^{n\times k}$ if $k'\le k$ and if the first $k'$ columns of $\VV$ are identical to $\VV'$, that is, if $\VV=[\VV',\VV'']$ for some matrix $\VV''$. \begin{lemma} \label{lem:uniq-ortho-prefix} Let $\VV\in\R^{n\times k}$ and $\VV'\in\R^{n\times k'}$ be column-orthogonal matrices, and let $\zbar,\,\zbar'\in\extspace$. If $\VV\omm\plusl\zbar=\VV'\omm\plusl\zbar'$ then one of the matrices must be a prefix of the other. Consequently: \begin{letter-compact} \item \label{lem:uniq-ortho-prefix:a} If either $k\geq k'$ or $\zbar\in\Rn$, then $\VV'$ is a prefix of $\VV$. \item \label{lem:uniq-ortho-prefix:b} If either $k=k'$ or $\zbar,\zbar'\in\Rn$, then $\VV=\VV'$. \end{letter-compact} \end{lemma} \begin{proof} Let $\xbar = \VV\omm\plusl\zbar$ and $\xbar' = \VV'\omm\plusl\zbar'$. 
To prove the lemma, we assume that neither of the matrices $\VV$ and $\VV'$ is a prefix of the other, and show this implies $\xbar\neq\xbar'$. Since neither is a prefix of the other, we can write $\VV=[\VV_0,\vv,\VV_1]$ and $\VV'=[\VV_0,\vv',\VV_1']$ where $\VV_0$ is the initial (possibly empty) set of columns on which $\VV$ and $\VV'$ agree, and $\vv,\vv'\in\Rn$ are the first columns at which $\VV$ and $\VV'$ disagree (so $\vv\ne\vv'$). Let $\uu=\vv-\vv'$. Note that $\vv,\vv'\perp\VV_0$ since the matrices $\VV$ and $\VV'$ are column-orthogonal, and therefore also $\uu\perp\VV_0$. We will show that $\xbar\inprod\uu\ne\xbar'\inprod\uu$. We have \begin{align} \notag \xbar\inprod\uu &= (\VV\omm\plusl\zbar)\inprod\uu = ([\VV_0,\vv,\VV_1]\omm\plusl\zbar)\inprod\uu \\ \label{eq:ortho-prefix:1} &= (\VV_0\omm)\inprod\uu \plusl (\limray{\vv})\inprod\uu \plusl (\VV_1\omm)\inprod\uu \plusl \zbar\inprod\uu \\ \label{eq:ortho-prefix:2} &= 0\plusl(+\infty) \plusl (\VV_1\omm)\inprod\uu \plusl \zbar\inprod\uu = +\infty. \end{align} Here, \eqref{eq:ortho-prefix:1} follows by \Cref{prop:split}. In \eqref{eq:ortho-prefix:2}, we first used $\uu\perp\VV_0$, implying $(\VV_0\omm)\inprod\uu=0$ (by \Cref{pr:vtransu-zero}). We then used that $\vv\inprod\uu=\vv\inprod(\vv-\vv')=1-\vv\inprod\vv'>0$, which holds because $\norm{\vv}=\norm{\vv'}=1$ and $\vv\ne\vv'$. This then implies that $(\limray{\vv})\inprod\uu=+\infty$. Similarly, \begin{align} \xbar'\inprod\uu \notag &= (\VV_0\omm)\inprod\uu \plusl (\limray{\vv'})\inprod\uu \plusl (\VV'_1\omm)\inprod\uu \plusl \zbar'\inprod\uu \\ \label{eq:ortho-prefix:3} &= 0\plusl(-\infty) \plusl (\VV'_1\omm)\inprod\uu \plusl \zbar'\inprod\uu = -\infty. \end{align} This time, in \eqref{eq:ortho-prefix:3}, we used that $\vv'\inprod\uu=\vv'\inprod(\vv-\vv')=\vv'\inprod\vv-1<0$, so $(\limray{\vv'})\inprod\uu=-\infty$. Thus, $\xbar\inprod\uu\ne\xbar'\inprod\uu$; hence, $\xbar\neq\xbar'$. Therefore, one of the matrices $\VV$ and $\VV'$ must be a prefix of the other.
\begin{proof-parts} \pfpart{Consequence~(\ref{lem:uniq-ortho-prefix:a}):} If $k\geq k'$ then the only possibility is that $\VV'$ is a prefix of $\VV$. If $\zbar=\zz\in\Rn$, then, by \Cref{thm:ast-rank-is-mat-rank}, the astral rank of $\VV\omm\plusl\zz$ is $k$, while the astral rank of $\VV'\omm\plusl\zbar'$ is at least $k'$ by \Cref{cor:rank:subadd}. Being equal, these two points must have the same astral rank, so $k\geq k'$, implying, as just noted, that $\VV'$ is a prefix of $\VV$. \pfpart{Consequence~(\ref{lem:uniq-ortho-prefix:b}):} If $k=k'$ or $\zbar,\zbar'\in\Rn$, then applying (\ref{lem:uniq-ortho-prefix:a}) in both directions yields that $\VV$ and $\VV'$ are prefixes of each other, and so are equal. \qedhere \end{proof-parts} \end{proof} \begin{proof}[Proof of Theorem~\ref{pr:uniq-canon-rep}] Existence was proved in \Cref{cor:h:1}. To prove uniqueness, consider canonical representations of two points, $\xbar = \VV\omm\plusl\qq$ and $\xbar' = \VV'\omm\plusl\qq'$, and assume that $\xbar=\xbar'$. We show that this implies that the two representations are identical. To start, \Cref{lem:uniq-ortho-prefix}(\ref{lem:uniq-ortho-prefix:b}) implies that $\VV=\VV'$. Next, note that $\qq$ and $\qq'$ are both orthogonal to all the columns of $\VV=\VV'$, so Proposition~\ref{pr:vtransu-zero} yields \begin{align*} \xbar\cdot(\qq-\qq') &= \qq \cdot(\qq - \qq') \\ \xbar'\cdot(\qq-\qq') &= \qq' \cdot(\qq - \qq'). \end{align*} Since $\xbar=\xbar'$, these quantities must be equal. Taking their difference yields $\norm{\qq-\qq'}^2=0$, and therefore $\qq=\qq'$. Thus, the representations are identical. \end{proof} We say that a matrix representation is \emph{nonsingular} if the columns of $\VV$ are linearly independent, that is, if $\VV$ has full column rank. Building on the uniqueness of the canonical representation, we can now show that every nonsingular representation can be obtained from the canonical representation by means of a push operation. 
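As a numerical aside (our own sketch, for the case of a finite $\qq$), the passage between a nonsingular representation $\VV\omm\plusl\qq$ and the canonical one can be carried out with a QR factorization; the sign normalization below makes the triangular factor positive upper triangular, and the recovered $\bb$ exhibits the push operation explicitly:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 3
V = rng.standard_normal((n, k))   # full column rank with probability 1
q = rng.standard_normal(n)

# QR factorization V = W R with W column-orthogonal; flip signs so that
# the diagonal of R is positive, making R positive upper triangular
W, R = np.linalg.qr(V)
signs = np.sign(np.diag(R))
W, R = W * signs, signs[:, None] * R
assert np.all(np.diag(R) > 0) and np.allclose(W @ R, V)

# canonical representation: W omega plusl q_canon, with q_canon the
# projection of q onto the orthogonal complement of colspace(W) = colspace(V)
P = np.eye(n) - W @ W.T
q_canon = P @ q
assert np.allclose(W.T @ q_canon, 0)    # q_canon is orthogonal to W

# the push back from the canonical pair (W, q_canon) to (V, q):
# V = W R with R positive upper triangular, and q = q_canon + W b
b = W.T @ q
assert np.allclose(q, q_canon + W @ b)
print("canonical representation recovered via QR")
```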
In fact, we can show a stronger statement, that every nonsingular representation can be obtained from any other nonsingular representation by means of a push operation. \begin{theorem} \label{thm:i:2} Let $\xbar=\VV\omm\plusl \qq$ and $\xbar'=\VV'\omm\plusl \qq'$ be two points in $\extspace$, for some $\VV,\VV'\in\R^{n\times k}$, $k\geq 0$, and $\qq,\qq'\in\Rn$. Assume $\VV$ and $\VV'$ both have full column rank. Then $\xbar=\xbar'$ if and only if $\VV'=\VV\Rtilde$ and $\qq'=\qq+\VV\tilde{\bb}$ for some positive upper triangular matrix $\Rtilde\in\R^{k\times k}$ and some $\tilde{\bb}\in\R^{k}$. \end{theorem} \begin{proof} The fact that the given conditions imply $\xbar=\xbar'$ is proved by the Push Lemma (\Cref{pr:g:1}). To prove the converse, assume $\xbar=\xbar'$. By \Cref{prop:QR}, we can write $\VV=\WW\RR$ for some column-orthogonal $\WW\in\Rnk$ and positive upper triangular $\RR\in\R^{k\times k}$, and similarly $\VV'=\WW'\RR'$. Thus, by \Cref{lemma:RR_omm}, \begin{align*} \xbar&=\VV\omm\plusl\qq=\WW(\RR\omm)\plusl\qq=\WW\omm\plusl\qq, \\ \xbar'&=\VV'\omm\plusl\qq'=\WW'(\RR'\omm)\plusl\qq'=\WW'\omm\plusl\qq'. \end{align*} Since $\xbar=\xbar'$, \Cref{lem:uniq-ortho-prefix}(\ref{lem:uniq-ortho-prefix:b}) implies that $\WW=\WW'$. Thus, \[ \VV'=\WW\RR'=(\VV\Rinv)\RR'=\VV(\Rinv\RR'), \] so $\VV'=\VV\Rtilde$ where $\Rtilde=\Rinv\RR'$; the invertibility of $\RR$ and the fact that $\Rtilde$ is positive upper triangular both follow from \Cref{prop:pos-upper}. Next, let $\PP$ be the projection matrix onto the orthogonal complement of the column space of $\WW$. By the Projection Lemma~(\Cref{lemma:proj}), \begin{align*} \xbar&=\WW\omm\plusl\qq=\WW\omm\plusl\PP\qq, \\ \xbar'&=\WW\omm\plusl\qq'=\WW\omm\plusl\PP\qq'. \end{align*} Since $\xbar=\xbar'$ and the canonical representation is unique, this means that $\PP\qq=\PP\qq'$. The column space of $\WW$ is the same as the column space of $\VV$, because $\VV=\WW\RR$ and $\RR$ is invertible.
Therefore, we can write $\qq=\PP\qq+\VV\bb$ and $\qq'=\PP\qq'+\VV\bb'$ for suitable $\bb,\bb'\in\R^k$. Since $\PP\qq=\PP\qq'$, we obtain \[ \qq'=\PP\qq+\VV\bb'=\qq-\VV\bb+\VV\bb', \] proving that $\qq'=\qq+\VV\tilde{\bb}$ with $\tilde{\bb}=\bb'-\bb$. \end{proof} \subsection{Representation and sequences} \label{sec:rep-seq} We next characterize the sequences that converge to an astral point with a particular representation. Theorem~\ref{thm:i:seq-rep} showed that if a sequence satisfies certain conditions relative to the representation of some astral point $\xbar\in\extspace$, then it must in fact converge to that point. Here, we will see that those conditions fully characterize convergence in a sense that will be made precise, thus providing a direct connection between the representation of astral points and the convergence of sequences. To begin, suppose the columns of some matrix $\VV\in\R^{n\times k}$ are linearly independent. Let $\xbar=\VV\omm\plusl\qq$ for some $\qq\in\Rn$; without loss of generality, we can assume $\qq\perp\VV$, because $\VV\omm\plusl\qq=\VV\omm\plusl\PP\qq$, where $\PP\in\R^{n\times n}$ is the projection matrix onto $(\colspace\VV)^\perp$ (by the Projection Lemma). Let $\seq{\xx_t}$ be any sequence in $\Rn$. Then by linear algebra, each point $\xx_t$ can be uniquely represented in the form $\xx_t=\VV \bb_t + \qq_t$ for some $\bb_t\in\Rk$ and some $\qq_t\in\Rn$ with $\qq_t\perp\VV$. The next theorem provides necessary and sufficient conditions for when the sequence $\seq{\xx_t}$ will converge to $\xbar$, in terms of this representation. As usual, $b_{t,i}$ denotes the $i$-th component of $\bb_t$. \begin{theorem} \label{thm:seq-rep} Let $\VV\in\R^{n\times k}$ be a matrix with $k\geq 0$ columns that are linearly independent. Let $\xbar=\VV\omm\plusl\qq$ where $\qq\in\Rn$ and $\qq\perp \VV$. Let $\seq{\xx_t}$ be a sequence in $\Rn$ with $\xx_t=\VV \bb_t + \qq_t$ for some $\bb_t\in\Rk$ and $\qq_t\in\Rn$ with $\qq_t \perp \VV$, for all $t$.
Then $\xx_t\rightarrow\xbar$ if and only if all of the following hold: \begin{letter-compact} \item \label{thm:seq-rep:a} $b_{t,i}\rightarrow+\infty$, for $i=1,\dotsc,k$. \item \label{thm:seq-rep:b} $b_{t,i+1}/b_{t,i}\rightarrow 0$, for $i=1,\dotsc,k-1$. \item \label{thm:seq-rep:c} $\qq_t\rightarrow\qq$. \end{letter-compact} \end{theorem} \begin{proof} In \Cref{thm:i:seq-rep}, we showed that if the sequence $\seq{\xx_t}$ satisfies conditions (\ref{thm:seq-rep:a}), (\ref{thm:seq-rep:b}), and (\ref{thm:seq-rep:c}), then $\xx_t\rightarrow\xbar$. It remains to prove the converse. Assume that $\xx_t\rightarrow\xbar$. Let $\VVdag$ be $\VV$'s pseudoinverse (as discussed in Section~\ref{sec:prelim:orth-proj-pseud}), and let $\ZZ=\trans{(\VVdag)}$. Since $\VV$ has full column rank, $\trans{\ZZ}\VV=\VVdag \VV = \Iden$ (where $\Iden$ is the $k\times k$ identity matrix) by \Cref{pr:pseudoinv-props}(\ref{pr:pseudoinv-props:c}). Also, the column space of $\ZZ$ is the same as that of $\VV$, by \Cref{pr:pseudoinv-props}(\ref{pr:pseudoinv-props:a}). In particular, this means that every column of $\ZZ$ is a linear combination of the columns of $\VV$. Since $\qq$ is orthogonal to all columns of $\VV$, it follows that $\qq$ is also orthogonal to all columns of $\ZZ$, that is, $\qq\perp\ZZ$. Likewise, $\qq_t\perp\ZZ$ for all $t$. Thus, we have \[ \trans{\ZZ} \xx_t = \trans{\ZZ} (\VV \bb_t + \qq_t) = (\trans{\ZZ}\VV) \bb_t + \trans{\ZZ}\qq_t =\bb_t \] where the last equality follows because $\trans{\ZZ}\VV=\Iden$ and $\qq_t\perp\ZZ$. Similarly, \[ \trans{\ZZ} \xbar = \trans{\ZZ} (\VV \omm \plusl \qq) = (\trans{\ZZ}\VV) \omm \plusl \trans{\ZZ}\qq =\omm. \] Thus, \begin{equation} \label{eqn:thm:seq-rep:1} \bb_t = \trans{\ZZ}\xx_t \rightarrow \trans{\ZZ}\xbar = \omm \end{equation} by \Cref{thm:mat-mult-def}, since $\xx_t\rightarrow\xbar$. 
For $i=1,\ldots,k$, since $\bb_t\to\omm$, we must also have \[b_{t,i} = \bb_t \cdot \ee_i \rightarrow \omm \cdot \ee_i = +\infty,\] with convergence from Theorem~\ref{thm:i:1}(\ref{thm:i:1c}) and the last equality from \Cref{prop:omm:inprod} (or \Cref{lemma:case}). This proves part~(\ref{thm:seq-rep:a}). Next, let $\epsilon>0$. Then, by similar reasoning, for $i=1,\dotsc,k-1$, \[ \epsilon b_{t,i} - b_{t,i+1} = \bb_t \cdot (\epsilon \ee_i - \ee_{i+1}) \rightarrow \omm \cdot (\epsilon \ee_i - \ee_{i+1}) = +\infty, \] where the last step again follows from \Cref{prop:omm:inprod}. Thus, for all $t$ sufficiently large, $\epsilon b_{t,i} - b_{t,i+1} > 0$, and also $b_{t,i}>0$ and $b_{t,i+1}>0$ by part~(\ref{thm:seq-rep:a}). That is, $0 < b_{t,i+1}/b_{t,i} < \epsilon$. Since this holds for all $\epsilon>0$, this proves part~(\ref{thm:seq-rep:b}). Finally, let $\PP\in\R^{n\times n}$ be the projection matrix onto $(\colspace\VV)^\perp$. Then \[ \PP\xx_t = \PP (\VV \bb_t + \qq_t) = (\PP\VV) \bb_t + \PP\qq_t =\qq_t, \] where the last equality follows because, by \Cref{pr:proj-mat-props}(\ref{pr:proj-mat-props:d},\ref{pr:proj-mat-props:e}), $\PP\VV=\zeromat{n}{k}$ and $\PP\qq_t=\qq_t$ since $\qq_t\perp\VV$. Similarly, $\PP\xbar=\qq$, and therefore, $\qq_t=\PP\xx_t\rightarrow\PP\xbar=\qq$, proving part~(\ref{thm:seq-rep:c}). \end{proof} Theorem~\ref{thm:seq-rep} characterizes convergence of a sequence to a point $\xbar=\VV\omm\plusl\qq$ when the columns of $\VV$ are linearly independent and when $\qq\perp\VV$. If those columns are \emph{not} linearly independent, then a sequence point $\xx_t\in\Rn$ can still be expressed as $\xx_t=\VV \bb_t + \qq_t$ for some $\bb_t\in\Rk$ and $\qq_t\in\Rn$, but this way of writing $\xx_t$ is no longer {unique}, even with the additional requirement that $\qq_t\perp\VV$. Nevertheless, as the next theorem shows, there always \emph{exists} a choice of $\bb_t$ and $\qq_t$ so that the conditions of Theorem~\ref{thm:seq-rep} hold. 
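To make conditions (\ref{thm:seq-rep:a})--(\ref{thm:seq-rep:c}) concrete, here is a small numerical sketch (our own illustration) with $\xbar=[\ee_1,\ee_2]\omm\plusl\ee_3$ in $\R^3$ and the sequence $\xx_t=t^2\ee_1+t\ee_2+(1+1/t)\ee_3$, recovering the decomposition $\xx_t=\VV\bb_t+\qq_t$ and checking all three conditions along the sequence:

```python
import numpy as np

# xbar = [e1, e2] omega plusl e3: the sequence goes to infinity fastest
# along e1, more slowly along e2, and converges to e3 in the remaining direction
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
q = np.array([0.0, 0.0, 1.0])

def decompose(x):
    # unique b_t, q_t with x = V b_t + q_t and q_t perp V
    # (V is column-orthogonal, so b_t is just V^T x)
    b = V.T @ x
    return b, x - V @ b

for t in [10, 100, 1000]:
    x_t = (t ** 2) * V[:, 0] + t * V[:, 1] + (1 + 1 / t) * q
    b, q_t = decompose(x_t)
    # condition (a): b components diverge; (b): ratio -> 0; (c): q_t -> q
    print(b[0], b[1], b[1] / b[0], np.linalg.norm(q_t - q))
```

As $t$ grows, both components of $\bb_t$ diverge, the ratio $b_{t,2}/b_{t,1}=1/t$ vanishes, and $\qq_t$ approaches $\qq$, matching the theorem.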
Combined with Theorem~\ref{thm:i:seq-rep}, this then provides both a necessary and sufficient condition for a sequence to converge to a point with a representation as above, even when $\VV$'s columns are not linearly independent. Note that we also no longer require $\qq\perp \VV$, nor that $\qq_t\perp \VV$. \begin{theorem} \label{thm:seq-rep-not-lin-ind} Let $\VV\in\R^{n\times k}$ with $k\geq 0$, let $\qq\in\Rn$, and let $\xbar=\VV\omm\plusl\qq$. Let $\seq{\xx_t}$ be a sequence in $\Rn$. Then $\xx_t\rightarrow\xbar$ if and only if there exist sequences $\seq{\bb_t}$ in $\Rk$ and $\seq{\qq_t}$ in $\Rn$ such that $\xx_t=\VV\bb_t + \qq_t$ for all $t$, and such that all three conditions (\ref{thm:seq-rep:a}), (\ref{thm:seq-rep:b}), and (\ref{thm:seq-rep:c}) of Theorem~\ref{thm:seq-rep} are satisfied. \end{theorem} \begin{proof} If there exist sequences $\seq{\bb_t}$ and $\seq{\qq_t}$ satisfying the conditions of the theorem, then Theorem~\ref{thm:i:seq-rep} shows that $\xx_t\rightarrow\xbar$. Informally, the main idea in proving the converse is to first apply Theorem~\ref{thm:seq-rep} to show the existence of the needed sequences for a linearly independent subset of the columns of $\VV$. Then, one column at a time, we adjust the sequence so that additional linearly dependent columns can be ``squeezed into'' the matrix. Although straightforward in principle, the details of how to adjust the sequence at each step become somewhat involved. Formally, the proof is by induction on $k$, the number of columns of $\VV$. More precisely, let us fix $\xbar\in\extspace$ and a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$.
We then prove by induction on $k$ that for every matrix $\VV\in\Rnk$ and for every $\qq\in\Rn$, if $\xbar=\VV\omm\plusl\qq$ then there exist sequences $\seq{\bb_t}$ and $\seq{\qq_t}$ with $\xx_t=\VV\bb_t+\qq_t$ for all $t$, and satisfying conditions (\ref{thm:seq-rep:a}), (\ref{thm:seq-rep:b}), and (\ref{thm:seq-rep:c}) of Theorem~\ref{thm:seq-rep}. Let $r$ be the astral rank of $\xbar$. In what follows, let $\VV\in\Rnk$ and $\qq\in\Rn$ be such that $\xbar=\VV\omm\plusl\qq$. Then the matrix rank of $\VV$ must also be equal to $r$ by \Cref{thm:ast-rank-is-mat-rank}. Thus, $k\geq r$, which means that the inductive hypothesis is vacuous if $k<r$. Therefore, the base case for our induction is when $k=r$, the minimum possible value for $k$. We begin with that base case in which the matrix $\VV$ has full column rank, $k=r$, implying that its columns are linearly independent. Note, however, that $\qq$ might not be orthogonal to $\VV$. Nonetheless, by linear algebra, there exist (unique) $\cc\in\Rk$ and $\qq'\in\Rn$ such that $\qq=\VV\cc+\qq'$ and $\qq'\perp\VV$. Likewise, for each $t$, there exist $\bb'_t\in\Rk$ and $\qq'_t\in\Rn$ such that $\xx_t=\VV\bb'_t+\qq'_t$ and $\qq'_t\perp\VV$. By Theorem~\ref{thm:seq-rep}, since $\xx_t\rightarrow\xbar$, the sequences $\seq{\bb'_t}$ and $\seq{\qq'_t}$ must satisfy conditions (\ref{thm:seq-rep:a}), (\ref{thm:seq-rep:b}), and (\ref{thm:seq-rep:c}) of that theorem. For each $t$, let $\qq_t=\VV\cc+\qq'_t$ and let $\bb_t=\bb'_t-\cc$. Then by algebra, \[ \xx_t = \VV\bb'_t + \qq'_t = \VV(\bb'_t - \cc) + (\VV\cc + \qq'_t) = \VV\bb_t + \qq_t. \] For $i=1,\ldots,k$, $b_{t,i}=b'_{t,i}-c_i\rightarrow+\infty$ (since $b'_{t,i}\rightarrow+\infty$), proving part~(\ref{thm:seq-rep:a}) of Theorem~\ref{thm:seq-rep}. 
Further, for $i=1,\ldots,k-1$, \[ \frac{b_{t,i+1}}{b_{t,i}} = \frac{b'_{t,i+1} - c_{i+1}} {b'_{t,i} - c_i} = \frac{b'_{t,i+1}} {b'_{t,i}} \cdot \frac{1 - c_{i+1} / b'_{t,i+1}} {1 - c_i / b'_{t,i}} \rightarrow 0 \] since $b'_{t,i}\rightarrow+\infty$, $b'_{t,i+1}\rightarrow+\infty$, and ${b'_{t,i+1}}/{b'_{t,i}}\rightarrow 0$. This proves part~(\ref{thm:seq-rep:b}). Finally, by continuity of vector addition, \[ \qq_t = \VV\cc+\qq'_t \rightarrow \VV\cc+\qq' = \qq, \] proving part~(\ref{thm:seq-rep:c}). For the inductive step, let $k>r$ and assume that the claim holds for $k-1$. Since $k>r$, the columns of $\VV$ are not linearly independent; therefore, there exists $\dd\in\Rk$ such that $\VV\dd=\zero$ but $\dd\neq\zero$. Let $\ell$ be the index of the last nonzero component of $\dd$ (so that $d_\ell\neq 0$ and $d_i=0$ for $i>\ell$). Further, replacing $\dd$ by $\dd/d_\ell$, we can assume henceforth that $d_\ell=1$. Let $\vv_1,\ldots,\vv_k\in\Rn$ be the columns of $\VV$ so that $\VV=[\vv_1,\ldots,\vv_k]$. Let $\VV_0=[\vv_1,\ldots,\vv_{\ell-1},\zero,\vv_{\ell+1},\ldots,\vv_k]$ be the same as $\VV$ but with the $\ell$-th column replaced by $\zero$, and let $\VV'=[\vv_1,\ldots,\vv_{\ell-1},\vv_{\ell+1},\ldots,\vv_k]$ be the result of deleting the $\ell$-th column. Then $\VV_0=\VV\RR$ where $\RR\in\R^{k\times k}$ is the same as the $k\times k$ identity matrix but whose $\ell$-th column equals $\dd$. Then \[ \VV'\omm = \VV_0 \omm = \VV\RR\omm = \VV\omm, \] where the last equality is by Lemma~\ref{lemma:RR_omm} since $\RR$ is positive upper triangular. Therefore, $\xbar=\VV\omm\plusl\qq=\VV'\omm\plusl\qq$. Since $\VV'$ has only $k-1$ columns, by inductive hypothesis, there exist sequences $\seq{\bb'_t}$ in $\R^{k-1}$ and $\seq{\qq_t}$ in $\Rn$ such that $\xx_t=\VV'\bb'_t + \qq_t$ for all $t$, and satisfying conditions (\ref{thm:seq-rep:a}), (\ref{thm:seq-rep:b}), and (\ref{thm:seq-rep:c}) of Theorem~\ref{thm:seq-rep}. 
We will use these sequences to construct a sequence $\seq{\bb_t}$ in $\Rk$ that, with $\seq{\qq_t}$, satisfies all of the claimed conditions. For each $t$, other than the $\ell$-th component, $\bb_t$ will be only a slightly modified version of $\bb'_t$, as explained shortly. We will then set its ``new'' $\ell$-th component, $b_{t,\ell}$, to the value $s_t$ where \[ s_t = \begin{cases} t b'_{t,1} & \text{if $\ell=1$},\\ \sqrt{b'_{t,\ell-1} b'_{t,\ell}} & \text{if $1<\ell<k$},\\ \sqrt{b'_{t,k-1}} & \text{if $\ell=k$}. \end{cases} \] (If any of the arguments to the square root are negative, we define $s_t$ arbitrarily, for instance, $s_t=0$. Since $b'_{t,i}\rightarrow+\infty$ for all $i$, by inductive hypothesis, this can only happen for finitely many values of $t$.) Note that, in every case, $s_t\rightarrow+\infty$ by inductive hypothesis. We will see further that this choice for $b_{t,\ell}=s_t$ grows to infinity at an appropriate rate for satisfying all of the needed conditions. In particular, we have the following: \begin{claimpx} \label{cl:thm:seq-rep-not-lin-ind:1} Let $i\in\{1,\ldots,\ell-1\}$. Then $s_t / b'_{t,i}\rightarrow 0$. \end{claimpx} \begin{proofx} Suppose first that $\ell<k$. Then \[ \frac{s_t}{b'_{t,i}} = \sqrt{ \frac{b'_{t,\ell-1}}{b'_{t,i}} \cdot \frac{b'_{t,\ell}}{b'_{t,i}} } \rightarrow 0. \] This is because, for all $t$ sufficiently large, $0<b'_{t,\ell-1} \leq b'_{t,i}$, while ${b'_{t,\ell}}/{b'_{t,i}}\rightarrow 0$ by inductive hypothesis using an argument similar to \eqref{eqn:thm:i:seq-rep:2}. If instead $\ell=k$, then, by similar reasoning, \[ \frac{s_t}{b'_{t,i}} = \frac{ \sqrt{b'_{t,k-1}} } {b'_{t,i}} \rightarrow 0 \] since $b'_{t,i}\geq b'_{t,k-1}$ for all $t$ sufficiently large and $b'_{t,k-1}\rightarrow+\infty$. \end{proofx} We can now define $\bb_t$. For $i=1,\ldots,k$, let \[ b_{t,i} = \begin{cases} b'_{t,i} + d_i s_t & \text{if $i<\ell$},\\ s_t & \text{if $i=\ell$},\\ b'_{t,i-1} & \text{if $i>\ell$}. 
\end{cases} \] Then \begin{eqnarray*} \VV \bb_t &=& \sum_{i=1}^k b_{t,i} \vv_i \\ &=& \sum_{i=1}^{\ell-1} \Parens{b'_{t,i} + d_i s_t} \vv_i + s_t \vv_{\ell} + \sum_{i=\ell+1}^k b'_{t,i-1} \vv_i \\ &=& \sum_{i=1}^{\ell-1} b'_{t,i} \vv_i + \sum_{i=\ell+1}^k b'_{t,i-1} \vv_i + s_t \Parens{\sum_{i=1}^{\ell-1} d_i \vv_i + \vv_{\ell}} \\ &=& \VV' \bb'_t + s_t (\VV \dd) = \VV' \bb'_t. \end{eqnarray*} The second equality is from $b_{t,i}$'s definition. The third and fourth equalities are by algebra. The last equality is because $\VV\dd = \zero$. Thus, $\xx_t = \VV' \bb'_t + \qq_t = \VV \bb_t + \qq_t$. It remains to prove conditions (\ref{thm:seq-rep:a}), (\ref{thm:seq-rep:b}), and (\ref{thm:seq-rep:c}). The last condition that $\qq_t\rightarrow\qq$ follows by inductive hypothesis. To prove condition~(\ref{thm:seq-rep:a}), let $i\in\{1,\ldots,k\}$. If $i>\ell$ then $b_{t,i} = b'_{t,i-1} \rightarrow +\infty$ by inductive hypothesis. If $i=\ell$ then $b_{t,\ell} = s_t \rightarrow +\infty$, as noted above. And if $i<\ell$ then \[ b_{t,i} = b'_{t,i} + d_i s_t = b'_{t,i} \Parens{1 + d_i \frac{s_t}{b'_{t,i}}} \rightarrow +\infty \] since $b'_{t,i}\rightarrow+\infty$ and by Claim~\ref{cl:thm:seq-rep-not-lin-ind:1}. Finally, we prove condition~(\ref{thm:seq-rep:b}). Let $i\in\{1,\ldots,k-1\}$. If $i>\ell$ then $b_{t,i+1}/b_{t,i} = b'_{t,i}/b'_{t,i-1}\rightarrow 0$, by inductive hypothesis. If $i=\ell$ then \[ \frac{b_{t,\ell+1}}{b_{t,\ell}} = \frac{b'_{t,\ell}}{s_t} = \begin{cases} \sqrt{{b'_{t,\ell}}/{b'_{t,\ell-1}}} & \text{if $\ell>1$} \\ 1/t & \text{if $\ell=1$.} \\ \end{cases} \] In either case, the expression on the right converges to $0$. If $i=\ell-1$ then \[ \frac{b_{t,\ell}}{b_{t,\ell-1}} = \frac{s_t}{b'_{t,\ell-1} + d_{\ell-1} s_t} = \frac{1}{d_{\ell-1} + b'_{t,\ell-1} / s_t} \rightarrow 0 \] by Claim~\ref{cl:thm:seq-rep-not-lin-ind:1}. 
And if $i<\ell-1$ then \[ \frac{b_{t,i+1}}{b_{t,i}} = \frac{b'_{t,i+1} + d_{i+1} s_t}{b'_{t,i} + d_{i} s_t} = \frac{b'_{t,i+1}}{b'_{t,i}} \cdot \frac{1+ d_{i+1} s_t/b'_{t,i+1}}{1+ d_{i} s_t/b'_{t,i}} \rightarrow 0 \] since ${b'_{t,i+1}}/{b'_{t,i}}\rightarrow 0$, and by Claim~\ref{cl:thm:seq-rep-not-lin-ind:1}. Thus, in all cases, condition~(\ref{thm:seq-rep:b}) holds, completing the induction and the proof. \end{proof} For a matrix $\A\in\Rmn$, Theorem~\ref{thm:mat-mult-def} shows that if $\seq{\xx_t}$ is a sequence in $\Rn$ converging to a point $\xbar\in\extspace$, then $\A\xx_t\rightarrow\A\xbar$; in other words, the original sequence can be converted into another sequence, $\seq{\A\xx_t}$, converging to $\A\xbar$. Using Theorem~\ref{thm:seq-rep-not-lin-ind}, we can prove that a kind of inverse operation exists, namely, that if $\seq{\yy_t}$ is a sequence in $\Rm$ converging to $\A\xbar$, then there exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$ such that $\yy_t$ and $\A\xx_t$ are asymptotically equal: \begin{theorem} \label{thm:inv-lin-seq} Let $\A\in\Rmn$, let $\seq{\yy_t}$ be a sequence in $\Rm$, and let $\xbar\in\extspace$. Suppose $\yy_t\rightarrow\A\xbar$. Then there exists a sequence $\seq{\xx_t}$ in $\Rn$ such that $\xx_t\rightarrow\xbar$ and $\A\xx_t - \yy_t \rightarrow \zero$. \end{theorem} \begin{proof} We can write $\xbar=\VV\omm\plusl\qq$ for some $\VV\in\Rnk$ and $\qq\in\Rn$, implying $\A\xbar=\A\VV\omm\plusl\A\qq$. Since $\yy_t\rightarrow\A\xbar$, by Theorem~\ref{thm:seq-rep-not-lin-ind}, there exist sequences $\seq{\bb_t}$ in $\Rk$ and $\seq{\rr_t}$ in $\Rm$ such that: $\yy_t=\A\VV\bb_t + \rr_t$; conditions (\ref{thm:seq-rep:a}) and (\ref{thm:seq-rep:b}) of Theorem~\ref{thm:seq-rep} hold; and $\rr_t\rightarrow\A\qq$. Let $\xx_t=\VV\bb_t + \qq$. Then $\xx_t\rightarrow\xbar$ by Theorem~\ref{thm:i:seq-rep} (or Theorem~\ref{thm:seq-rep-not-lin-ind}).
Furthermore, \[ \A\xx_t - \yy_t = (\A\VV\bb_t + \A\qq) - (\A\VV\bb_t + \rr_t) = \A\qq - \rr_t \rightarrow \zero \] since $\rr_t\rightarrow\A\qq$. \end{proof} Theorem~\ref{thm:inv-lin-seq} is especially useful when working with projection matrices, as seen next. \begin{corollary} \label{cor:inv-proj-seq} Let $\WW\in\Rnk$, and let $\PP\in\R^{n\times n}$ be the projection matrix onto $(\colspace{\WW})^\perp$. Let $\xbar\in\extspace$ and let $\seq{\yy_t}$ be a sequence in $\Rn$ converging to $\PP\xbar$. Then there exists a sequence $\seq{\cc_t}$ in $\Rk$ such that $\yy_t + \WW\cc_t \rightarrow \xbar$. \end{corollary} \begin{proof} By Theorem~\ref{thm:inv-lin-seq} (applied with $\A=\PP$), there exists a sequence $\seq{\xx_t}$ in $\Rn$ with $\xx_t\rightarrow\xbar$ and $\PP\xx_t - \yy_t\rightarrow\zero$. By linear algebra, there exists $\cc_t\in\Rk$ such that $\xx_t = \PP\xx_t + \WW\cc_t$. Thus, \[ \yy_t+\WW\cc_t = \yy_t + (\xx_t - \PP\xx_t) = \xx_t + (\yy_t - \PP\xx_t) \rightarrow \xbar \plusl \zero = \xbar \] using Proposition~\ref{pr:i:7}(\ref{pr:i:7g}) for the convergence. \end{proof} In particular, if $\xbar\in\extspace$, $\vv\in\Rn$, and $\seq{\yy_t}$ is a sequence in $\Rn$ converging to $\xbarperp$, the projection of $\xbar$ orthogonal to $\vv$, then Corollary~\ref{cor:inv-proj-seq} implies there exists a sequence $\seq{\alpha_t}$ in $\R$ such that $\yy_t + \alpha_t \vv\rightarrow\xbar$. Let $\xbar=\VV\omm\plusl\qq$ where $\VV=[\vv_1,\ldots,\vv_k]$ for some $\vv_1,\ldots,\vv_k,\qq\in\Rn$. In considering sequences $\seq{\xx_t}$ in $\Rn$ that converge to $\xbar$, it is sometimes convenient to require that each element $\xx_t$ is constructed as a linear combination of only $\vv_1,\ldots,\vv_k,\qq$, that is, so that each $\xx_t$ is in the column space of $[\VV,\qq]$. This would seem natural since, in all other directions, the sequence is converging to zero. This was the case, for instance, for the polynomial-speed sequence of Example~\ref{ex:poly-speed-intro}. 
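As a simple illustration, suppose $\xbar=\limray{\vv}$ in $\extspac{2}$, where $\vv\in\R^2$ is a unit vector, so that the column space in question is the line spanned by $\vv$. Let $\ww\in\R^2$ be a unit vector orthogonal to $\vv$. Then the sequence with elements $\xx_t = t\vv + (1/t)\ww$ converges to $\xbar$, since \[ \xx_t\cdot\uu = t\, (\vv\cdot\uu) + (1/t)\, (\ww\cdot\uu) \rightarrow \limray{\vv}\cdot\uu \] for every $\uu\in\R^2$, and yet no element $\xx_t$ lies on that line. By contrast, the sequence with elements $\xx'_t=t\vv$ also converges to $\xbar$ while remaining on the line for every $t$.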
To study such sequences, we give a name to the column space just mentioned: \begin{definition} \label{def:rep-span-snglton} Let $\xbar=\VV\omm\plusl\qq$ where $\VV\in\Rnk$, $k\geq 0$, and $\qq\in\Rn$. Then the \emph{representational span} of the singleton $\{\xbar\}$, denoted $\rspanxbar$, is equal to $\colspace{[\VV,\qq]}$, the column space of the matrix $[\VV,\qq]$, or equivalently, the span of $\columns(\VV)\cup\{\qq\}$. \end{definition} For now, we define representational span only for singletons; later, in Section~\ref{sec:astral-span}, we extend this definition to arbitrary astral subsets. Superficially, this definition would appear to depend on a particular representation of $\xbar$, which might seem problematic since such representations are not unique. However, the next proposition, which gives an equivalent formulation for the representational span of $\{\xbar\}$, implies that this set is determined by $\xbar$, and so is not dependent on the specific representation chosen for it. \begin{proposition} \label{pr:rspan-sing-equiv-dual} Let $\xbar=\VV\omm\plusl\qq$ where $\VV\in\Rnk$, $k\geq 0$, and $\qq\in\Rn$. Let $\zz\in\Rn$. Then the following are equivalent: \begin{letter-compact} \item \label{pr:rspan-sing-equiv-dual:a} $\zz\in\colspace{[\VV,\qq]}$ (that is, $\zz\in\rspanxbar$). \item \label{pr:rspan-sing-equiv-dual:b} For all $\uu\in\Rn$, if $\xbar\cdot\uu=0$ then $\zz\cdot\uu=0$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{pr:rspan-sing-equiv-dual:a}) $\Rightarrow$ (\ref{pr:rspan-sing-equiv-dual:b}): } Suppose $\zz\in\colspace{[\VV,\qq]}$. Let $\uu\in\Rn$ be such that $\xbar\cdot\uu=0$. Then, by Proposition~\ref{pr:vtransu-zero}, $\uu\perp\VV$ and $\qq\cdot\uu=0$. Consequently, $\zz\cdot\uu=0$ since $\zz$ is a linear combination of $\qq$ and the columns of $\VV$. \pfpart{(\ref{pr:rspan-sing-equiv-dual:b}) $\Rightarrow$ (\ref{pr:rspan-sing-equiv-dual:a}): } Suppose~(\ref{pr:rspan-sing-equiv-dual:b}) holds.
Let $L=\colspace{[\VV,\qq]}$, which is a linear subspace. By linear algebra, we can write $\zz=\yy+\uu$ where $\yy\in L$ and $\uu$ is in $\Lperp$, the set of vectors orthogonal to every point in $L$. In particular, $\uu\perp\VV$ and $\uu\perp\qq$ so $\xbar\cdot\uu=0$ by Proposition~\ref{pr:vtransu-zero}. Therefore, \[ 0 = \zz\cdot\uu = \yy\cdot\uu + \uu\cdot\uu = \norm{\uu}^2, \] where the first equality is by assumption (since $\xbar\cdot\uu=0$), and the third is because $\yy\perp\uu$. Thus, $\uu=\zero$ so $\zz=\yy\in L$. \qedhere \end{proof-parts} \end{proof} \begin{definition} A convergent sequence $\seq{\xx_t}$ in $\Rn$ is \emph{span-bound} if every element $\xx_t$ is in $\rspanxbar$, where $\xbar$ is the sequence's limit, $\lim \xx_t$. \end{definition} Not every convergent sequence is span-bound. For instance, the sequence in $\R$ with elements $x_t=1/t$ converges to $0$, but it is not span-bound since $\rspanset{0}=\{0\}$. Nevertheless, as we show next, for every convergent sequence, there exists a span-bound sequence whose elements are asymptotically equal to those of the original sequence: \begin{theorem} \label{thm:spam-limit-seqs-exist} Let $\seq{\xx_t}$ be a sequence in $\Rn$ converging to some point $\xbar\in\extspace$. Then there exists a span-bound sequence $\seq{\xx'_t}$ in $\Rn$ such that $\xx'_t\rightarrow\xbar$ and $\xx_t - \xx'_t \rightarrow \zero$. \end{theorem} \begin{proof} We can write $\xbar=\VV\omm\plusl\qq$ for some $\VV\in\Rnk$, $k\geq 0$, and $\qq\in\Rn$. Let $L=\rspanxbar=\colspace{[\VV,\qq]}$. Let $\PP\in\R^{n\times n}$ be the projection matrix onto $L$, and let $\Iden$ be the $n\times n$ identity matrix. Then for all $\ww\in L$, $\PP\ww=\ww$ (\Cref{pr:proj-mat-props}\ref{pr:proj-mat-props:d}), so $(\Iden-\PP)\ww=\zero$. These imply $\PP\xbar=\PP\VV\omm\plusl\PP\qq=\VV\omm\plusl\qq=\xbar$, and similarly that $(\Iden-\PP)\xbar=\zero$. For each $t$, let $\xx'_t=\PP \xx_t$. 
Then $\xx'_t=\PP\xx_t\rightarrow\PP\xbar=\xbar$ by Theorem~\ref{thm:mat-mult-def}. Likewise, $\xx_t-\xx'_t=(\Iden-\PP)\xx_t\rightarrow(\Iden-\PP)\xbar=\zero$. Further, $\seq{\xx'_t}$ is span-bound since each $\xx'_t\in L$. \end{proof} \section{Astral-point decompositions and the structure of astral space} As seen in \Cref{cor:h:1}, every astral point $\xbar\in\extspace$ can be fully decomposed into a leftward sum of astrons, together with a finite vector in $\Rn$. Based on that decomposition, this chapter explores how astral points can be split apart in useful ways. Specifically, we look at how every astral point can be split into the leftward sum of a ``purely infinite'' part, called an icon, and a finite part, and will see how this decomposition reveals the structure of astral space as a whole. Next, for an infinite astral point, we focus on the first, or most dominant, astron in its decomposition, which we will see captures the strongest direction in which every sequence converging to that point is going to infinity. This will lay the groundwork for a technique in which an astral point can be pulled apart and analyzed, one astron at a time, which will be heavily used going forward, particularly in analyzing astral functions. \subsection{Icons and galaxies} \label{sec:core:zero} We first look at how every astral point can be decomposed into a finite part, and a part that is purely infinite in a sense that will be made precise shortly. In algebraic terms, we have seen that astral space $\extspace$ is closed under leftward addition (Proposition~\ref{pr:i:6}), and that this operation is associative (\Cref{pr:i:7}\ref{pr:i:7a}). This shows that astral space is a semigroup under this operation, and furthermore is a monoid since $\zero$ is an identity element. We will especially be interested in idempotent elements of this semigroup, that is, those points that are unchanged when leftwardly added to themselves. 
Such points are called icons: \begin{definition} A point $\ebar\in\extspace$ is an \emph{icon}, or is said to be \emph{iconic}, if $\ebar\plusl\ebar=\ebar$. The set of all icons is denoted \[ \corezn = \Braces{ \ebar\in\extspace:\: \ebar\plusl\ebar = \ebar }. \] \end{definition} The term ``icon'' is derived as a contraction of ``idempotent'' and ``cone,'' with the latter referring to a cone-like property of such points that will be discussed below and again in Section~\ref{sec:cones-basic}. For example, in $\Rext=\extspac{1}$, the only icons are $\corez{1}=\{-\infty,0,+\infty\}$. The next proposition gives three equivalent characterizations for this property. \begin{proposition} \label{pr:icon-equiv} Let $\ebar\in\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{pr:icon-equiv:a} $\ebar$ is an icon; that is, $\ebar\plusl\ebar=\ebar$. \item \label{pr:icon-equiv:b} For all $\uu\in\Rn$, $\ebar\cdot\uu\in\{-\infty,0,+\infty\}$. \item \label{pr:icon-equiv:c} $\ebar=\VV\omm$ for some $\VV\in\R^{n\times k}$, $k\geq 0$. \end{letter-compact} Furthermore, the same equivalence holds if the matrix in part~(\ref{pr:icon-equiv:c}) is required to be column-orthogonal. \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{\RefsImplication{pr:icon-equiv:a}{pr:icon-equiv:b}:} Suppose $\ebar\plusl\ebar=\ebar$. Then for all $\uu\in\Rn$, by Proposition~\ref{pr:i:6}, $\ebar\cdot\uu\plusl\ebar\cdot\uu=\ebar\cdot\uu$, which is impossible if $\ebar\cdot\uu$ is a nonzero real number, that is, unless $\ebar\cdot\uu$ is in $\{-\infty,0,+\infty\}$. \pfpart{\RefsImplication{pr:icon-equiv:b}{pr:icon-equiv:c}:} Suppose $\ebar$ satisfies (\ref{pr:icon-equiv:b}). Let $\ebar=\VV\omm\plusl\qq$ be its canonical representation (which exists by Theorem~\ref{pr:uniq-canon-rep}). Then because $\qq$ is orthogonal to all the columns of $\VV$, by Proposition~\ref{pr:vtransu-zero}, $\ebar\cdot\qq=\qq\cdot\qq=\norm{\qq}^2$.
Since $\ebar\cdot\qq\in\{-\infty,0,+\infty\}$, this quantity must be $0$, implying $\qq=\zero$. Note that the matrix $\VV$ is column-orthogonal, thereby satisfying the additional requirement stated at the end of the proposition. \pfpart{\RefsImplication{pr:icon-equiv:c}{pr:icon-equiv:a}:} Suppose $\ebar=\VV\omm$ for some $\VV\in\R^{n\times k}$, $k\geq 0$, and let $\PP\in\R^{n\times n}$ be the projection matrix onto $(\colspace\VV)^\perp$. Then, by the Projection Lemma (\Cref{lemma:proj}), \[ \VV\omm \plusl \VV\omm = \VV\omm \plusl \PP\VV\omm = \VV\omm\plusl\zerov{n} = \VV\omm. \] Therefore, $\ebar$ is an icon. \qedhere \end{proof-parts} \end{proof} We saw in Corollary~\ref{cor:h:1} that every point $\xbar\in\extspace$ can be represented as $\VV\omm\plusl\qq$. As a result of Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:c}, $\VV\omm$ is an icon, which means that $\xbar$ can be written $\xbar=\ebar\plusl\qq$ for some icon $\ebar\in\corezn$ and some finite vector $\qq\in\Rn$. In other words, $\xbar$ can be decomposed into an \emph{iconic part}, $\ebar$, and a \emph{finite part}, $\qq$. This decomposition will be used very heavily in later sections. Furthermore, as shown next, the iconic part is uniquely determined by $\xbar$. The finite part, on the other hand, is not in general uniquely determined. For example, in $\Rext$, $+\infty=+\infty\plusl q$ for all $q\in\R$, so $+\infty$'s decomposition is not unique. \begin{theorem} \label{thm:icon-fin-decomp} Let $\xbar\in\extspace$. Then $\xbar=\ebar\plusl\qq$ for some $\ebar\in\corezn$ and $\qq\in\Rn$. Thus, $\extspace = \corezn\plusl\Rn$. Furthermore, $\ebar$ is uniquely determined by $\xbar$; that is, if it also holds that $\xbar=\ebar'\plusl\qq'$ for some $\ebar'\in\corezn$ and $\qq'\in\Rn$, then $\ebar=\ebar'$. \end{theorem} \begin{proof} By Corollary~\ref{cor:h:1}, $\xbar$ can be written as $\xbar=\VV\omm\plusl\qq$ for some $\qq\in\Rn$ and some $\VV\in\Rnk$, $k\geq 0$. 
By Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:c}, $\ebar=\VV\omm$ is an icon, so indeed $\xbar=\ebar\plusl\qq$ for $\ebar\in\corezn$ and $\qq\in\Rn$. To show uniqueness of $\xbar$'s iconic part, suppose $\xbar=\ebar\plusl\qq=\ebar'\plusl\qq'$ for some $\ebar,\ebar'\in\corezn$ and $\qq,\qq'\in\Rn$. Then by Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:c}, there are column-orthogonal matrices $\VV\in\R^{n\times k}$ and $\VV'\in\R^{n\times k'}$, for some $k,k'\geq 0$, such that $\ebar=\VV\omm$ and $\ebar'=\VV'\omm$. Further, by \Cref{thm:ast-rank-is-mat-rank}, $\VV$ and $\VV'$ must both have matrix rank equal to $\xbar$'s astral rank. Since both matrices are column-orthogonal, they must both have full column rank, implying that $\xbar$'s astral rank must be equal to both $k$ and $k'$, so $k=k'$. Since $\xbar=\VV\omm\plusl\qq=\VV'\omm\plusl\qq'$, it then follows by \Cref{lem:uniq-ortho-prefix}(\ref{lem:uniq-ortho-prefix:b}) that $\VV=\VV'$. Hence, $\ebar=\VV\omm=\VV'\omm=\ebar'$. \end{proof} Here are additional properties of icons. Note specifically that all astrons are iconic. \begin{proposition} \label{pr:i:8} Let $\ebar\in\corezn$ be an icon. \begin{letter-compact} \item \label{pr:i:8b} The only icon in $\Rn$ is $\zero$; that is, $\corezn\cap\Rn=\{\zero\}$. \item \label{pr:i:8d} Let $\alpha\in\Rextstrictpos$ and $\zbar\in\extspace$. Then $\alpha (\ebar \plusl \zbar) = \ebar \plusl \alpha \zbar$. In particular, $\alpha\ebar=\ebar$. \item \label{pr:i:8-infprod} $\limray{\xbar}$ is an icon for all $\xbar\in\extspace$. In particular, all astrons are icons. \item \label{pr:i:8-matprod} $\A \ebar$ is an icon for all $\A\in\Rmn$, and $\alpha \ebar$ is an icon for all $\alpha\in\Rext$. \item \label{pr:i:8-leftsum} The set of icons is closed under leftward addition; that is, if $\dbar$ is some icon in $\corezn$, then so is $\ebar\plusl\dbar$. 
\item \label{pr:i:8e} The set of all icons, $\corezn$, is topologically closed in $\extspace$, and therefore compact. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:i:8b}):} Let $\ee\in\Rn$. Then $\ee$ is an icon if and only if $\ee=\ee\plusl\ee=\ee+\ee$, which, by algebra, holds if and only if $\ee=\zero$. \pfpart{Part~(\ref{pr:i:8d}):} Suppose first that $\zbar=\zero$. Then for all $\uu\in\Rn$, $(\alpha\ebar)\cdot\uu = \alpha(\ebar\cdot\uu) = \ebar\cdot\uu$ by Proposition~\ref{pr:scalar-prod-props}(\ref{pr:scalar-prod-props:a}), and since $\ebar\cdot\uu\in\{-\infty,0,+\infty\}$ and $\alpha>0$. Therefore, $\alpha\ebar=\ebar$ (by Proposition~\ref{pr:i:4}). Next, suppose $\alpha\in\Rstrictpos$. Then $\alpha(\ebar\plusl\zbar)=\alpha\ebar\plusl\alpha\zbar=\ebar\plusl\alpha\zbar$ by Proposition~\ref{pr:i:7}(\ref{pr:i:7b}) and the preceding argument. Finally, for the case $\alpha=\oms$, we now have \[ \limray{(\ebar\plusl\zbar)} = \lim [t(\ebar\plusl\zbar)] = \lim [\ebar\plusl t\zbar] = \ebar\plusl\limray{\zbar} \] where the second equality applies the argument just given, and the third is by Proposition~\ref{pr:i:7}(\ref{pr:i:7f}). \pfpart{Part~(\ref{pr:i:8-infprod}):} Let $\xbar\in\extspace$ and let $\uu\in\Rn$. Then $(\limray{\xbar})\cdot\uu\in\{-\infty,0,+\infty\}$ by Proposition~\ref{pr:astrons-exist}. Therefore, $\limray{\xbar}$ is an icon by Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:b}. \pfpart{Part~(\ref{pr:i:8-matprod}):} Let $\A\in\Rmn$. Since $\ebar$ is an icon, by Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:c}, $\ebar=\VV\omm$ for some $\VV\in\Rnk$, $k\geq 0$. Then $\A\ebar = \A(\VV\omm) = (\A\VV)\omm$. Thus, $\A\ebar$ is also an icon, again by Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:c}. 
If $\alpha\in\R$, then $\alpha\ebar=\alpha(\Iden\ebar)=(\alpha\Iden)\ebar$ is an icon, as just argued (where $\Iden$ is the $n\times n$ identity matrix). Finally, $\limray{\ebar}$ and $(-\oms)\ebar=\limray{(-\ebar)}$ are icons by part~(\ref{pr:i:8-infprod}). \pfpart{Part~(\ref{pr:i:8-leftsum}):} Let $\dbar\in\corezn$, and let $\uu\in\Rn$. Then $\ebar\cdot\uu$ and $\dbar\cdot\uu$ are both in $\{-\infty,0,+\infty\}$, by Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:b}. Combining with Proposition~\ref{pr:i:6}, this implies that $(\ebar\plusl\dbar)\cdot\uu=\ebar\cdot\uu\plusl\dbar\cdot\uu$ is in $\{-\infty,0,+\infty\}$ as well. Therefore, $\ebar\plusl\dbar\in\corezn$, again by Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:b}. \pfpart{Part~(\ref{pr:i:8e}):} Let $\seq{\ebar_t}$ be a sequence in $\corezn$ with a limit $\xbar\in\eRn$. Let $\uu\in\Rn$. Then $\ebar_t\inprod\uu\in\set{-\infty,0,+\infty}$ for all $t$, and therefore also $\xbar\inprod\uu=\lim(\ebar_t\inprod\uu)\in\set{-\infty,0,+\infty}$. Since this holds for all $\uu\in\Rn$, we obtain $\xbar\in\corezn$, by Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:b}. Thus, $\corezn=\clcorezn$, so $\corezn$ is closed. \qedhere \end{proof-parts} \end{proof} Proposition~\ref{pr:i:8}(\ref{pr:i:8d}) shows in particular that for any icon $\ebar\in\corezn$, the singleton set $\{\ebar\}$ acts superficially like a cone (as defined in Section~\ref{sec:prelim:cones}) in the sense of being closed under multiplication by a positive scalar. The same holds for any set of icons $K\subseteq\corezn$. We will have much more to say about cones in astral space in Chapter~\ref{sec:cones}. Theorem~\ref{thm:icon-fin-decomp} shows that every astral point has a uniquely determined iconic part. 
As a result, astral space itself can be naturally partitioned into disjoint sets called \emph{galaxies} consisting of all points with a common iconic part, that is, into sets \[ \galax{\ebar} = \Braces{ \ebar \plusl \qq : \qq\in\Rn } = \ebar \plusl \Rn \] defined for each $\ebar\in\corezn$. The set of finite vectors, $\Rn$, is the galaxy $\galax{\zero}$ with icon $\zero$. In algebraic terms, each galaxy $\galax{\ebar}$ is a commutative subgroup of $\eRn$ under leftward addition, which acts like vector addition on elements of the same galaxy; this is because if $\ebar\in\corezn$ and $\qq,\qq'\in\Rn$ then $(\ebar\plusl\qq)\plusl(\ebar\plusl\qq') = \ebar\plusl(\qq+\qq')$ using Proposition~\ref{pr:i:7}. We next explore properties of galaxies, their closures and relationships to one another, and how these relate to the overall topological structure of astral space. We first show that the closure of a galaxy $\galaxd$ includes exactly those astral points that can be written in the form $\ebar\plusl\zbar$, for some $\zbar\in\extspace$. For readability, we write the closure of galaxy $\galaxd$ as $\galcld$, rather than $\clbar{\galaxd}$. \begin{proposition} \label{pr:galaxy-closure} Let $\ebar\in\corezn$ be an icon. Then the closure of galaxy $\galaxd$ is \[ \galcld = \Braces{ \ebar \plusl \zbar : \zbar\in\extspace } = \ebar \plusl \extspace. \] \end{proposition} \begin{proof} Let $\xbar\in\extspace$. We prove $\xbar\in\galcld$ if and only if $\xbar\in\ebar\plusl\extspace$. Suppose $\xbar\in\ebar\plusl\extspace$, meaning $\xbar=\ebar\plusl\zbar$ for some $\zbar\in\extspace$. Then by Theorem~\ref{thm:i:1}(\ref{thm:i:1d}), there exists a sequence $\seq{\zz_t}$ in $\Rn$ that converges to $\zbar$. By \Cref{pr:i:7}(\ref{pr:i:7f}), $\ebar\plusl\zz_t\rightarrow\ebar\plusl\zbar=\xbar$. Since each $\ebar\plusl\zz_t$ is in $\galaxd$, we must have $\xbar\in\galcld$. Conversely, suppose $\xbar\in\galcld$. 
Then there exists a sequence $\seq{\xbar_t}$ of elements of $\galaxd$ converging to $\xbar$. We can write $\xbar_t=\ebar\plusl\zz_t$ for some $\zz_t\in\Rn$. By sequential compactness of $\eRn$, there exists a subsequence of $\seq{\zz_t}$ that converges to some element $\zbar\in\eRn$. Discarding all of the other sequence elements, and the corresponding elements in the sequence $\seq{\xbar_t}$, we then obtain $\xbar_t=\ebar\plusl\zz_t\to\ebar\plusl\zbar$. Thus, $\xbar=\ebar\plusl\zbar\in\ebar\plusl\extspace$. \end{proof} A point $\xbar\in\extspace$ is in the closure $\galcld$ if and only if, for all $\uu\in\Rn$, $\xbar\cdot\uu$ and $\ebar\cdot\uu$ are equal whenever $\ebar\cdot\uu\neq 0$ (or equivalently, whenever $\ebar\cdot\uu$ is infinite): \begin{proposition} \label{pr:cl-gal-equiv} Let $\ebar\in\corezn$ be an icon, and let $\xbar\in\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{pr:cl-gal-equiv:a} $\xbar\in\ebar\plusl\extspace$ (that is, $\xbar\in\galcld$). \item \label{pr:cl-gal-equiv:c} $\xbar=\ebar\plusl\xbar$. \item \label{pr:cl-gal-equiv:b} For all $\uu\in\Rn$, if $\ebar\cdot\uu\neq 0$ then $\xbar\cdot\uu=\ebar\cdot\uu$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{pr:cl-gal-equiv:a}) $\Rightarrow$ (\ref{pr:cl-gal-equiv:b}): } Suppose $\xbar=\ebar\plusl\zbar$ for some $\zbar\in\extspace$. Let $\uu\in\Rn$, and suppose $\ebar\cdot\uu\neq 0$. Since $\ebar$ is an icon, we then must have $\ebar\cdot\uu\in\{-\infty,+\infty\}$ by Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:b}. Therefore, $\xbar\cdot\uu=\ebar\cdot\uu\plusl\zbar\cdot\uu=\ebar\cdot\uu$. \pfpart{(\ref{pr:cl-gal-equiv:b}) $\Rightarrow$ (\ref{pr:cl-gal-equiv:c}): } Suppose statement~(\ref{pr:cl-gal-equiv:b}) holds. For all $\uu\in\Rn$, we claim \begin{equation} \label{eq:pr:cl-gal-equiv:1} \xbar\cdot\uu=\ebar\cdot\uu \plusl \xbar\cdot\uu. \end{equation} This is immediate if $\ebar\cdot\uu= 0$.
Otherwise, if $\ebar\cdot\uu\neq 0$, then $\ebar\cdot\uu\in\{-\infty,+\infty\}$ since $\ebar$ is an icon, and also $\xbar\cdot\uu=\ebar\cdot\uu$ by assumption. Together, these imply \eqref{eq:pr:cl-gal-equiv:1}. Therefore, $\xbar=\ebar\plusl\xbar$ by Proposition~\ref{pr:i:4}. \pfpart{(\ref{pr:cl-gal-equiv:c}) $\Rightarrow$ (\ref{pr:cl-gal-equiv:a}): } This is immediate. \qedhere \end{proof-parts} \end{proof} For any icons $\ebar,\,\ebar'\in\corezn$, the galaxies $\galaxd$ and $\galaxdp$ are disjoint, unless $\ebar=\ebar'$, as noted already. Nevertheless, the \emph{closures} of these galaxies might not be disjoint. Indeed, the next proposition shows that if $\galcld$ and $\galcldp$ intersect at even a single point, then one must fully contain the other. Furthermore, $\galcld$ is entirely included in $\galcldp$ if and only if $\ebar'$ is a \emph{prefix} of $\ebar$, meaning $\ebar=\ebar'\plusl\dbar$ for some icon $\dbar\in\corezn$. This can be re-stated equivalently in terms of canonical representations since if $\ebar=\VV\omm$ and $\ebar'=\VV'\omm$ are canonical representations, then \Cref{lem:uniq-ortho-prefix}(\ref{lem:uniq-ortho-prefix:a}) implies that $\ebar'$ is a prefix of $\ebar$ if and only if $\VV'$ is a prefix of $\VV$. \begin{proposition} \label{pr:galaxy-closure-inclusion} Let $\ebar,\,\ebar'\in\corezn$ be icons. Then the following hold: \begin{letter} \item \label{pr:galaxy-closure-inclusion:a} $\galcld\subseteq\galcldp$ if and only if $\ebar=\ebar'\plusl\dbar$ for some $\dbar\in\corezn$. \item \label{pr:galaxy-closure-inclusion:b} If $\galcld\cap\galcldp\neq\emptyset$ then either $\galcld\subseteq\galcldp$ or $\galcldp\subseteq\galcld$. \end{letter} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:galaxy-closure-inclusion:a}):} Suppose first that $\ebar=\ebar'\plusl\dbar$ for some $\dbar\in\corezn$. Then, by \Cref{pr:galaxy-closure}, \[ \galcld =\ebar\plusl\eRn =\ebar'\plusl\dbar\plusl\eRn \subseteq\ebar'\plusl\eRn =\galcldp. 
\] Conversely, suppose now that $\galcld\subseteq\galcldp$. Then, in particular, $\ebar\in\galcldp$, so $\ebar=\ebar'\plusl\zbar$ for some $\zbar\in\extspace$, by Proposition~\ref{pr:galaxy-closure}. By Theorem~\ref{thm:icon-fin-decomp}, $\zbar=\dbar\plusl\qq$ for some $\dbar\in\corezn$ and $\qq\in\Rn$, so $\ebar=(\ebar'\plusl\dbar)\plusl\qq$. Since the iconic part of this point is unique, it follows that $\ebar=\ebar'\plusl\dbar$ (by Theorem~\ref{thm:icon-fin-decomp} and Proposition~\ref{pr:i:8}\ref{pr:i:8-leftsum}). \pfpart{Part~(\ref{pr:galaxy-closure-inclusion:b}):} Suppose there exists a point $\xbar$ in $\galcld\cap\galcldp$. Then by Proposition~\ref{pr:galaxy-closure}, $\xbar=\ebar\plusl\zbar=\ebar'\plusl\zbar'$, for some $\zbar,\,\zbar'\in\extspace$. By Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:c}, $\ebar=\VV\omm$ and $\ebar'=\VV'\omm$ for some column-orthogonal matrices $\VV\in\R^{n\times k}$ and $\VV'\in\R^{n\times k'}$. By \Cref{lem:uniq-ortho-prefix}, one of the matrices must be a prefix of the other. Without loss of generality, assume that $\VV'$ is a prefix of $\VV$, so $\VV=[\VV',\VV'']$ for some matrix $\VV''$, and thus by \Cref{prop:split}, \[ \ebar=[\VV',\VV'']\omm=\VV'\omm\plusl\VV''\omm=\ebar'\plusl\VV''\omm, \] so $\galcld\subseteq\galcldp$ by part~(\ref{pr:galaxy-closure-inclusion:a}). \qedhere \end{proof-parts} \end{proof} As a result, we can arrange the galaxies in a directed rooted tree capturing these inclusion relationships. The vertices of the tree consist exactly of all icons in $\corezn$ (corresponding to galaxies), with $\zero$ as the root. For all icons $\ebar\in\corezn$ and for all $\vv\in\Rn$, an edge is directed from $\ebar$ to $\ebar\plusl\limray{\vv}$ (which is also an icon, by Proposition~\ref{pr:i:8}\ref{pr:i:8-leftsum}), unless this would result in a self-loop, that is, unless $\ebar=\ebar\plusl\limray{\vv}$. 
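For example, in $\extspac{2}$, the root of the tree is $\zero$, whose galaxy is $\R^2$ itself. Its children are the astrons $\limray{\vv}$, one for each unit vector $\vv\in\R^2$. Each such astron has exactly two children, namely the icons $\limray{\vv}\plusl\limray{\ww}$ of astral rank~2, where $\ww$ is one of the two unit vectors orthogonal to $\vv$; these icons are leaves, since leftwardly adding any further astron leaves them unchanged.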
That the resulting structure is a tree rather than a directed acyclic graph can be proved using Lemma~\ref{lem:uniq-ortho-prefix}. Equivalently, the tree can be formulated in terms of canonical representations, so that, using Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:c}, for column-orthogonal matrices $\VV\in\R^{n\times k}$ and $\VV'\in\R^{n\times (k+1)}$, an edge is directed from $\VV\omm$ to $\VV'\omm$ if and only if $\VV$ is a prefix of $\VV'$. Thus, the children of $\zero$, the root, are exactly the nonzero astrons. Their children are the icons of astral rank~2, and so on. In general, the depth (distance from the root) of every icon $\ebar$ is exactly equal to its astral rank, so the height of the tree is $n$. Further, the tree captures all galactic inclusions since a path exists from $\ebar$ to $\ebar'$ if and only if $\ebar$ is a prefix of $\ebar'$, that is, if and only if $\galcld\supseteq\galcldp$ (by Proposition~\ref{pr:galaxy-closure-inclusion}\ref{pr:galaxy-closure-inclusion:a}). The prefix structure also plays a role in characterizing when two elements $\xbar,\ybar\in\eRn$ \emph{commute} with respect to leftward addition, that is, when they satisfy $\xbar\plusl\ybar=\ybar\plusl\xbar$: \begin{proposition} \label{prop:commute} Let $\xbar,\ybar\in\eRn$. Let $\xbar=\ebar\plusl\qq$ and $\ybar=\ebar'\plusl\qq'$ for some $\ebar,\ebar'\in\corezn$ and $\qq,\qq'\in\Rn$. Then the following are equivalent: \begin{letter-compact} \item \label{prop:commute:a} $\xbar\plusl\ybar=\ybar\plusl\xbar$. \item \label{prop:commute:b} $\ebar$ is a prefix of $\ebar'$, or $\ebar'$ is a prefix of $\ebar$. \item \label{prop:commute:c} For all $\uu\in\Rn$, $\xbar\inprod\uu$ and $\ybar\inprod\uu$ are summable. \item \label{prop:commute:d} For all $\uu\in\Rn$, if $\xbar\inprod\uu=+\infty$ then $\ybar\inprod\uu>-\infty$. 
\end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{\RefsImplication{prop:commute:a}{prop:commute:b}:} Let $\zbar=\xbar\plusl\ybar=\ebar\plusl(\qq\plusl\ybar)$, so $\zbar\in\galcl{\ebar}$ by \Cref{pr:galaxy-closure}. Since also $\zbar=\ybar\plusl\xbar$, we similarly obtain $\zbar\in\galcl{\ebar'}$. Thus, $\galcl{\ebar}\cap\galcl{\ebar'}\ne\emptyset$, so by \Cref{pr:galaxy-closure-inclusion}, $\ebar$ is a prefix of $\ebar'$, or $\ebar'$ is a prefix of $\ebar$. \pfpart{\RefsImplication{prop:commute:b}{prop:commute:c}:} Without loss of generality, assume that $\ebar$ is a prefix of $\ebar'$, so $\ebar'=\ebar\plusl\dbar$ for some $\dbar\in\corezn$. Thus, $\ybar=\ebar\plusl\zbar'$ with $\zbar'=\dbar\plusl\qq'$. Let $\uu\in\Rn$. We will show that $\xbar\inprod\uu$ and $\ybar\inprod\uu$ are summable. If $\ebar\inprod\uu=0$ then $\xbar\inprod\uu=\qq\inprod\uu\in\R$, so $\xbar\inprod\uu$ and $\ybar\inprod\uu$ are summable regardless of the value of $\ybar\inprod\uu$. Otherwise, if $\ebar\inprod\uu\ne 0$ then $\xbar\cdot\uu=\ebar\cdot\uu=\ybar\cdot\uu$ by \Cref{pr:cl-gal-equiv}(\ref{pr:cl-gal-equiv:a},\ref{pr:cl-gal-equiv:b}). Since $\xbar\inprod\uu$ and $\ybar\inprod\uu$ are equal, they are also summable, finishing the proof. \pfpart{\RefsImplication{prop:commute:c}{prop:commute:a}:} Let $\uu\in\Rn$. Then \[ (\xbar\plusl\ybar)\inprod\uu= (\xbar\inprod\uu)\plusl(\ybar\inprod\uu) = (\ybar\inprod\uu)\plusl(\xbar\inprod\uu) = (\ybar\plusl\xbar)\inprod\uu, \] where the middle equality follows by \Cref{pr:i:5}(\ref{pr:i:5c}). Since this holds for all $\uu\in\Rn$, we obtain $\xbar\plusl\ybar=\ybar\plusl\xbar$. \pfpart{\RefsImplication{prop:commute:c}{prop:commute:d}:} Immediate. \pfpart{\RefsImplication{prop:commute:d}{prop:commute:c}:} Let $\uu\in\Rn$. If $\xbar\inprod\uu=+\infty$ then, by assumption, $\ybar\inprod\uu>-\infty$, so $\xbar\inprod\uu$ and $\ybar\inprod\uu$ are summable. 
If $\xbar\inprod\uu\in\R$, then $\xbar\inprod\uu$ and $\ybar\inprod\uu$ are also summable. Finally, if $\xbar\inprod\uu=-\infty$, then $\xbar\inprod(-\uu)=+\infty$, so by assumption, $\ybar\inprod(-\uu)>-\infty$. Hence, $\ybar\inprod\uu<+\infty$, so again $\xbar\inprod\uu$ and $\ybar\inprod\uu$ are summable. \qedhere \end{proof-parts} \end{proof} Here are some consequences of commutativity. The first of these generalizes \Cref{pr:i:7}(\ref{pr:i:7g}): \begin{proposition} \label{pr:sum-seq-commuting-limits} Let $\xbar,\ybar\in\extspace$, and let $\seq{\xbar_t},\seq{\ybar_t}$ be sequences in $\extspace$ with $\xbar_t\rightarrow\xbar$ and $\ybar_t\rightarrow\ybar$. Assume $\xbar\plusl\ybar=\ybar\plusl\xbar$. Then $\xbar_t\plusl\ybar_t \rightarrow\xbar\plusl\ybar$. \end{proposition} \begin{proof} Let $\uu\in\Rn$. Then by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}), $\xbar_t\cdot\uu\rightarrow\xbar\cdot\uu$ and $\ybar_t\cdot\uu\rightarrow\ybar\cdot\uu$. Also, by Proposition~\refequiv{prop:commute}{prop:commute:a}{prop:commute:c}, $\xbar\cdot\uu$ and $\ybar\cdot\uu$ are summable. This implies that $\xbar_t\cdot\uu$ and $\ybar_t\cdot\uu$ must also be summable for all sufficiently large $t$. This is because if $\xbar\cdot\uu$ and $\ybar\cdot\uu$ are both not $+\infty$, then $\xbar_t\cdot\uu$ and $\ybar_t\cdot\uu$ also both must not be $+\infty$ for $t$ sufficiently large, and so are summable; likewise if $\xbar\cdot\uu$ and $\ybar\cdot\uu$ are both not $-\infty$. Thus, except for finitely many values of $t$, $\xbar_t\cdot\uu$ and $\ybar_t\cdot\uu$ are summable so that $(\xbar_t\plusl\ybar_t)\cdot\uu = \xbar_t\cdot\uu + \ybar_t\cdot\uu$. By Proposition~\ref{prop:lim:eR}, this latter expression converges to $\xbar\cdot\uu + \ybar\cdot\uu = (\xbar\plusl\ybar)\cdot\uu$. Since this holds for all $\uu\in\Rn$, the claim follows by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}). \end{proof} Proposition~\ref{pr:sum-seq-commuting-limits} need not hold if $\xbar$ and $\ybar$ do not commute. 
For instance, in $\Rext$, if $x_t=t$ and $y_t=-t$ then $x_t\rightarrow+\infty$, $y_t\rightarrow-\infty$, but $x_t+y_t\rightarrow 0\neq (+\infty)\plusl(-\infty)=+\infty$. We next use commutativity to derive a sufficient condition for when $\A\xbar\plusl\B\xbar=(\A+\B)\xbar$. \begin{proposition} \label{prop:commute:AB} Let $\xbar\in\eRn$, let $\A,\B\in\Rmn$, and assume that $\A\xbar\plusl\B\xbar = \B\xbar\plusl\A\xbar$. Then $\A\xbar\plusl\B\xbar=(\A+\B)\xbar$. \end{proposition} \begin{proof} Let $\uu\in\Rm$. Then by Proposition~\refequiv{prop:commute}{prop:commute:a}{prop:commute:c}, $(\A\xbar)\cdot\uu$ and $(\B\xbar)\cdot\uu$ are summable. Therefore, \begin{align*} (\A\xbar\plusl\B\xbar)\cdot\uu &= (\A\xbar)\cdot\uu + (\B\xbar)\cdot\uu \\ &= \xbar\cdot\Parens{\trans{\A}\uu} + \xbar\cdot\Parens{\trans{\B}\uu} \\ &= \xbar\cdot\Parens{\trans{\A}\uu + \trans{\B}\uu} \\ &= \xbar\cdot\BigParens{\Parens{\trans{\A} + \trans{\B}}\uu} \\ &= \BigParens{(\A+\B)\xbar}\cdot\uu. \end{align*} The first equality is by Propositions~\ref{pr:i:6} and~\ref{pr:i:5}(\ref{pr:i:5c}). The second and fifth are by \Cref{thm:mat-mult-def}. The third is by \Cref{pr:i:1}, since the terms are summable. Since this holds for all $\uu\in\Rm$, the claim follows by \Cref{pr:i:4}. \end{proof} Returning to galaxies, we next consider their topology and the topology of their closures. The next theorem shows that the galaxy $\galaxd$ of an icon $\ebar$ of astral rank $k$ is homeomorphic to $(n-k)$-dimensional Euclidean space, $\R^{n-k}$, and furthermore, the closure of that galaxy is homeomorphic to $(n-k)$-dimensional astral space, $\extspac{n-k}$. In other words, all galaxies and their closures are topological copies of lower-dimensional Euclidean spaces and astral spaces (respectively). \begin{theorem} \label{thm:galaxies-homeo} Let $\ebar\in\corezn$ be an icon of astral rank $k$. Then the following hold: \begin{letter-compact} \item \label{thm:galaxies-homeo:a} $\galcld$ is homeomorphic to $\extspac{n-k}$.
\item \label{thm:galaxies-homeo:b} $\galaxd$ is homeomorphic to $\R^{n-k}$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:galaxies-homeo:a}):} By \Cref{thm:ast-rank-is-mat-rank} and Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:c}, $\ebar=\VV\omm$, for some column-orthogonal matrix $\VV\in\R^{n\times k}$. Let $\ww_1,\dotsc,\ww_{n-k}\in\Rn$ be an orthonormal basis of $(\colspace\VV)^\perp$, and let $\WW=[\ww_1,\dotsc,\ww_{n-k}]$. We construct continuous maps $\hogal: \galcld\rightarrow\extspac{n-k}$ and $\hogalinv: \extspac{n-k}\rightarrow\galcld$ and show they are inverses of each other, thus establishing that they are in fact homeomorphisms. Specifically, we let \begin{equation} \label{eq:galaxies-homeo:0} \hogal(\xbar) = \trans{\WW} \xbar, \qquad \hogalinv(\ybar)=\ebar\plusl \WW\ybar \end{equation} for $\xbar\in\galcld$ and $\ybar\in\extspac{n-k}$. Both maps are affine and therefore continuous by \Cref{cor:aff-cont}. Let $\xbar\in\galcld$, so $\xbar=\ebar\plusl\zbar$ for some $\zbar\in\extspace$ by \Cref{pr:galaxy-closure}. We will show that $\hogalinv(\hogal(\xbar))=\xbar$ and thus establish that $\hogalinv$ is a left inverse of $\hogal$. Let $\PP=\WW\trans{\WW}$, which is the projection matrix onto the column space of $\WW$ (see Section~\ref{sec:prelim:orth-proj-pseud}). We calculate: \begin{align} \notag \hogalinv(\hogal(\xbar)) &= \ebar\plusl\WW\trans{\WW}(\ebar\plusl\zbar) \\ \notag &= \ebar\plusl\WW(\trans{\WW}\ebar)\plusl\PP\zbar \\ \label{eq:galaxies-homeo:1} &= \ebar\plusl\PP\zbar \\ \notag &= \VV\omm\plusl\PP\zbar \\ \label{eq:galaxies-homeo:2} &= \VV\omm\plusl\zbar = \ebar\plusl\zbar = \xbar. \end{align} \eqref{eq:galaxies-homeo:1} uses that \begin{equation} \label{eq:galaxies-homeo:3} \trans{\WW}\ebar=(\trans{\WW}\VV)\omm=\zeromat{(n-k)}{k}\omm=\zero \end{equation} since $\WW\perp\VV$. In \eqref{eq:galaxies-homeo:2}, we applied the Projection Lemma (\Cref{lemma:proj}).
Hence, $\hogalinv(\hogal(\xbar))=\xbar$, proving that $\hogalinv$ is a left inverse of $\hogal$. Next, let $\ybar\in\extspac{n-k}$. Then \[ \hogal(\hogalinv(\ybar)) = \trans{\WW}(\ebar\plusl\WW\ybar) = \trans{\WW}\ebar\plusl \trans{\WW}\WW\ybar = \ybar \] with the last equality following from \eqref{eq:galaxies-homeo:3} and that $\trans{\WW}\WW=\Iden_{(n-k)}$ since $\WW$ is column-orthogonal. Thus, $\hogalinv$ is also a right inverse of $\hogal$, completing the proof. \pfpart{Part~(\ref{thm:galaxies-homeo:b}):} We redefine $\hogal$ and $\hogalinv$ according to the same rules given in \eqref{eq:galaxies-homeo:0}, but now with restricted domain and range so that $\hogal: \galaxd\rightarrow\R^{n-k}$ and $\hogalinv: \R^{n-k}\rightarrow\galaxd$. If $\yy\in\R^{n-k}$, then $\hogalinv(\yy)=\ebar\plusl \WW\yy$, which is in $\galaxd$. And if $\xbar\in\galaxd$ then $\xbar=\ebar\plusl\zz$ for some $\zz\in\Rn$, so $\hogal(\xbar)=\trans{\WW}\ebar\plusl\trans{\WW}\zz=\trans{\WW}\zz$, by \eqref{eq:galaxies-homeo:3}; thus, $\hogal(\xbar)$ is in $\R^{n-k}$.\looseness=-1 Even with restricted domain and range, the arguments used in part~(\ref{thm:galaxies-homeo:a}) can be applied, proving that $\hogal$ and $\hogalinv$ are continuous maps and inverses of each other and therefore homeomorphisms. \qedhere \end{proof-parts} \end{proof} \subsection{Dominant directions} \label{sec:dom:dir} When an infinite astral point is decomposed into astrons and a finite part, as in \Cref{cor:h:1}, the astrons are ordered by dominance, as previously discussed. 
In this section, we focus on the first, most dominant, astron appearing in such a representation, as well as the natural decomposition of any infinite astral point into its first astron and ``everything else.'' This dominant astron has an important interpretation in terms of sequences: We will see that for every sequence $\seq{\xx_t}$ in $\Rn$ converging to an infinite point $\xbar\in\extspace\setminus\Rn$, the directions of the sequence elements, $\xx_t/\norm{\xx_t}$, must have a limit $\vv\in\Rn$ whose associated astron, $\limray{\vv}$, is always the first in any representation of $\xbar$. In more detail, a \emph{direction} in $\Rn$ is represented by a unit vector, that is, a vector $\vv\in\Rn$ with $\norm{\vv}=1$. The direction of a nonzero vector $\xx\in\Rn$ is $\xx/\norm{\xx}$. An infinite astral point's dominant direction is the direction associated with its first astron: \begin{definition} A vector $\vv\in\Rn$ is a \emph{dominant direction} of an infinite point $\xbar\in\extspace\setminus\Rn$ if $\norm{\vv}=1$ and if $\xbar=\limray{\vv}\plusl\zbar$ for some $\zbar\in\extspace$. \end{definition} The next theorem proves the fact just mentioned, that if $\seq{\xx_t}$ is any sequence in $\Rn$ converging to $\xbar$, then the directions of the vectors $\xx_t$ must converge to $\xbar$'s dominant direction. Moreover, every infinite astral point has a unique dominant direction. As a preliminary step, we give the following proposition, to be used in the proof, regarding the limit of $\norm{\xx_t}$ for an astrally convergent sequence. \begin{proposition} \label{pr:seq-to-inf-has-inf-len} Let $\seq{\xx_t}$ be a sequence in $\Rn$ that converges to some point $\xbar\in\extspace$. Then \[ \norm{\xx_t} \rightarrow \begin{cases} \norm{\xbar} & \text{if $\xbar\in\Rn$,}\\ +\infty & \text{otherwise.} \end{cases} \] \end{proposition} \begin{proof} If $\xbar=\xx\in\Rn$, then $\xx_t\rightarrow\xx$, implying $\norm{\xx_t}\rightarrow\norm{\xx}<+\infty$ by continuity. 
Otherwise, if $\xbar\not\in\Rn$, then $\xbar\cdot\uu\not\in\R$ for some $\uu\in\Rn$ (by Proposition~\ref{pr:i:3}), implying $|\xx_t\cdot\uu|\rightarrow|\xbar\cdot\uu|=+\infty$ (by Theorem~\ref{thm:i:1}\ref{thm:i:1c}). Since $|\xx_t\cdot\uu|\leq\norm{\xx_t}\norm{\uu}$ by the Cauchy-Schwarz inequality, this proves the claim in this case. \end{proof} \begin{theorem} \label{thm:dom-dir} Let $\xbar\in\extspace\setminus\Rn$, and let $\vv\in\Rn$ with $\norm{\vv}=1$. Also, let $\seq{\xx_t}$ and $\seq{\dd_t}$ be sequences in $\Rn$ such that $\xx_t\rightarrow\xbar$, and $\dd_t=\xx_t / \norm{\xx_t}$ whenever $\xx_t\neq \zero$ (or equivalently, $\xx_t =\norm{\xx_t}\dd_t$ for all $t$). Then the following are equivalent: \begin{letter-compact} \item \label{thm:dom-dir:a} $\xbar=\limray{\vv}\plusl\zbar$ for some $\zbar\in\extspace$. That is, $\vv$ is a dominant direction of $\xbar$. \item \label{thm:dom-dir:b} For all $\uu\in\Rn$, if $\vv\cdot\uu>0$ then $\xbar\cdot\uu=+\infty$, and if $\vv\cdot\uu<0$ then $\xbar\cdot\uu=-\infty$. \item \label{thm:dom-dir:c} $\dd_t \rightarrow \vv$. \end{letter-compact} Furthermore, every point $\xbar\in\extspace\setminus\Rn$ has a unique dominant direction. \end{theorem} \begin{proof} We begin by establishing the existence and uniqueness of the dominant direction for any point $\xbar\in\extspace\setminus\Rn$, and then show the equivalence of conditions (\ref{thm:dom-dir:a}), (\ref{thm:dom-dir:b}), (\ref{thm:dom-dir:c}). \begin{proof-parts} \pfpart{Existence:} Being in $\extspace$, $\xbar$ must have the form given in Corollary~\ref{cor:h:1}. In the notation of that corollary, $k\geq 1$ (since $\xbar\not\in\Rn$), and $\vv_1$ must be a dominant direction (assuming, without loss of generality, that $\norm{\vv_1}=1$). \pfpart{Uniqueness:} Suppose both $\vv$ and $\vv'$ are dominant directions of $\xbar$. Then $\norm{\vv}=\norm{\vv'}=1$, and $\xbar=\limray{\vv}\plusl\zbar=\limray{\vv'}\plusl\zbar'$, for some $\zbar,\,\zbar'\in\extspace$. 
That is, $\VV\omm\plusl\zbar=\VV'\omm\plusl\zbar'$, where $\VV=[\vv]$ and $\VV'=[\vv']$. Since these matrices are column-orthogonal, we can apply \Cref{lem:uniq-ortho-prefix}(\ref{lem:uniq-ortho-prefix:b}), yielding $\VV=\VV'$. Thus, $\vv=\vv'$. \pfpart{(\ref{thm:dom-dir:b}) $\Rightarrow$ (\ref{thm:dom-dir:a}):} Suppose (\ref{thm:dom-dir:b}) holds. Let $\uu\in\Rn$. If $\limray{\vv}\cdot\uu=+\infty$ then $\vv\cdot\uu>0$ so $\xbar\cdot\uu=+\infty$ by assumption. Likewise, if $\limray{\vv}\cdot\uu=-\infty$ then $\xbar\cdot\uu=-\infty$. Thus, $\xbar\cdot\uu=\limray{\vv}\cdot\uu$ whenever $\limray{\vv}\cdot\uu\neq 0$. Therefore, we can apply Proposition~\refequiv{pr:cl-gal-equiv}{pr:cl-gal-equiv:a}{pr:cl-gal-equiv:b} with $\ebar=\limray{\vv}$, implying (\ref{thm:dom-dir:a}). \pfpart{(\ref{thm:dom-dir:c}) $\Rightarrow$ (\ref{thm:dom-dir:b}):} Suppose (\ref{thm:dom-dir:c}) holds. Let $\uu\in\Rn$, and suppose $\vv\cdot\uu>0$. Then $\xx_t\cdot\uu = \norm{\xx_t}(\dd_t\cdot\uu) \rightarrow +\infty$ since $\dd_t\cdot\uu\rightarrow \vv\cdot\uu >0$ and since $\norm{\xx_t}\rightarrow+\infty$ by Proposition~\ref{pr:seq-to-inf-has-inf-len}. Therefore, $\xbar\cdot\uu=+\infty$, by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}), since $\xx_t\rightarrow\xbar$. By a symmetric argument, if $\vv\cdot\uu<0$ then $\xbar\cdot\uu=-\infty$. \pfpart{(\ref{thm:dom-dir:a}) $\Rightarrow$ (\ref{thm:dom-dir:c}):} Let $\xbar=\VV\omm\plusl\qq$ be the canonical representation of $\xbar$, with $\VV=[\vv_1,\dotsc,\vv_k]$ for some $k\ge 1$ (since $\xbar\not\in\Rn$) and $\qq\in\Rn$. By the uniqueness of the dominant direction, we must have $\vv_1=\vv$. We can further write $\xx_t=\VV\bb_t+\qq_t$ for some $\bb_t\in\R^k$ and $\qq_t\in\Rn$ with $\qq_t\perp\VV$. By \Cref{thm:seq-rep}, $b_{t,1}\to+\infty$, so $b_{t,1}\le 0$ for at most finitely many values of $t$; by discarding these sequence elements (which does not affect limits), we can assume henceforth that $b_{t,1}>0$ for all $t$. 
Then \[ \xx_t = b_{t,1}\vv_1+b_{t,2}\vv_2+\dotsb+b_{t,k}\vv_k+\qq_t = b_{t,1}\bigParens{\vv_1+\vepsilon_t} \] where \[ \vepsilon_t = \frac{b_{t,2}}{b_{t,1}}\vv_2 +\dotsb +\frac{b_{t,k}}{b_{t,1}}\vv_k +\frac{\qq_t}{b_{t,1}}. \] Then $\vepsilon_t\to\zero$ since, by \Cref{thm:seq-rep}, $b_{t,j}/b_{t,1}\to 0$ for $j>1$, and $\qq_t\to\qq$ while $b_{t,1}\to+\infty$, so $\qq_t/b_{t,1}\to\zero$. Moreover, we also have $\norm{\vv_1}=1$ and $\vepsilon_t\perp\vv_1$, so \begin{align*} \dd_t &=\frac{\xx_t}{\norm{\xx_t}} =\frac{\vv_1+\vepsilon_t}{\norm{\vv_1+\vepsilon_t}} =\frac{\vv_1+\vepsilon_t}{\sqrt{1+\norm{\vepsilon_t}^2}}. \end{align*} Since $\vepsilon_t\to\zero$, we therefore obtain $\dd_t\to\vv_1=\vv$. \qedhere \end{proof-parts} \end{proof} Thus, every infinite astral point $\xbar\in\extspace\setminus\Rn$ has a unique dominant direction associated with the first astron in any representation for $\xbar$. We next turn our attention to the rest of $\xbar$'s representation, the part following that first astron. Let $\vv$ be $\xbar$'s dominant direction. By definition, this means that $\xbar=\limray{\vv}\plusl\zbar$ for some $\zbar\in\extspace$, implying, by Proposition~\refequiv{pr:cl-gal-equiv}{pr:cl-gal-equiv:a}{pr:cl-gal-equiv:c}, that actually $\xbar=\limray{\vv}\plusl\xbar$. Applying the Projection Lemma (\Cref{lemma:proj}), it then follows that $\xbar=\limray{\vv}\plusl\PP\xbar$ where $\PP$ is the projection matrix onto the linear subspace orthogonal to $\vv$. Thus, in general, $\xbar$ can be decomposed as \begin{equation} \label{eqn:domdir-xperp-decomp} \xbar = \limray{\vv}\plusl\xbarperp, \end{equation} where $\xbarperp$, both here and throughout, is shorthand we often use for $\PP\xbar$ whenever the vector $\vv$ (and hence also the matrix $\PP$) is clear from context (for any point $\xbar$, including points in $\Rn$). 
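The decomposition in \eqref{eqn:domdir-xperp-decomp} can also be checked numerically on a concrete sequence. The following is an informal illustrative sketch, not part of the formal development; the specific vectors $\vv_1,\vv_2,\qq$ and the sequence $\xx_t=t^2\vv_1+t\vv_2+\qq$ are our own hypothetical choices.

```python
import numpy as np

# Illustrative sketch (not part of the formal development): we check the
# decomposition xbar = omega(v) .+ xbar^perp on a concrete sequence in R^3
# converging to the astral point [v1, v2] omm .+ q.

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
q = np.array([0.0, 0.0, 5.0])

def x(t):
    # x_t = t^2 v1 + t v2 + q: v1 dominates, then v2, then the finite part q.
    return t**2 * v1 + t * v2 + q

t = 1e6

# Dominant direction: x_t / ||x_t|| -> v1, as in the equivalence of (a) and
# (c) in the dominant-direction theorem above.
d = x(t) / np.linalg.norm(x(t))
assert np.allclose(d, v1, atol=1e-5)

# Projecting orthogonally to v1 (P = I - v1 v1^T) peels off the first astron:
# P x_t = t v2 + q, whose dominant direction is v2, the second astron.
P = np.eye(3) - np.outer(v1, v1)
xp = P @ x(t)
assert np.allclose(xp / np.linalg.norm(xp), v2, atol=1e-5)

# Projecting away v2 as well leaves exactly the finite part q.
P2 = P - np.outer(v2, v2)
assert np.allclose(P2 @ x(t), q)
```

Each projection lowers the astral rank of the limit by one, mirroring the induction on astral rank used in later proofs.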
This decomposition will be used in many of our proofs, which are often by induction on astral rank, since $\xbarperp$ has lower astral rank than $\xbar$. The next proposition provides the basis for such proofs: \begin{proposition} \label{pr:h:6} Let $\xbar$ be an infinite point in $\extspace\setminus\Rn$. Let $\vv$ be its dominant direction, and let $\xbarperp$ be $\xbar$'s projection orthogonal to $\vv$. Then $\xbar=\limray{\vv}\plusl\xbarperp$. Further, suppose $\xbar$'s canonical form is $\xbar=[\vv_1,\dotsc,\vv_k]\omm\plusl\qq$, so that its astral rank is $k\ge 1$. Then $\xbarperp$'s canonical form is $\xbarperp=[\vv_2,\dotsc,\vv_k]\omm\plusl\qq$, implying its astral rank is $k-1$. \end{proposition} \begin{proof} That $\xbar=\limray{\vv}\plusl\xbarperp$ was argued above. Let $\VV'=[\vv_2,\dotsc,\vv_k]$ so that $\xbar=[\vv_1,\VV']\omm\plusl\qq$, and let $\PP$ be the projection matrix onto the space orthogonal to $\vv_1$. Since $\vv$ is the unique dominant direction of $\xbar$, we must have $\vv=\vv_1$. Thus, \[ \xbarperp =\PP\bigParens{[\vv_1,\VV']\omm\plusl\qq} =[\PP\vv_1,\PP\VV']\omm\plusl\PP\qq =[\zero_n,\VV']\omm\plusl\qq =\VV'\omm\plusl\qq, \] where the second equality follows from \Cref{pr:h:4}(\ref{pr:h:4c},\ref{pr:h:4d}), the third from \Cref{pr:proj-mat-props}(\ref{pr:proj-mat-props:d},\ref{pr:proj-mat-props:e}) since $\VV'\perp\vv_1$ and $\qq\perp\vv_1$, and the final equality from \Cref{prop:V:indep}. \end{proof} Let $\seq{\xx_t}$ in $\Rn$ be any sequence converging to $\xbar\in\extspace$, and let $\xbar$'s canonical representation be as given in \Cref{pr:h:6}. As already seen in Theorem~\refequiv{thm:dom-dir}{thm:dom-dir:a}{thm:dom-dir:c}, if $\xbar\not\in\Rn$, we can determine its dominant direction, $\vv_1$, from the sequence as the limit of $\xx_t/\norm{\xx_t}$. In fact, from any such sequence, using the decomposition given in \eqref{eqn:domdir-xperp-decomp}, we can determine the rest of $\xbar$'s canonical representation as well. 
To do so, we next project the sequence using $\PP$, forming the sequence with elements $\xperpt=\PP\xx_t$, which, by \Cref{thm:mat-mult-def}, converges to $\xbarperp=\PP\xbar$. From this sequence, we can determine $\xbarperp$'s dominant direction (as the limit of the directions of the projected points $\xperpt$), which is $\vv_2$ by \Cref{pr:h:6}, thus yielding the second astron in $\xbar$'s canonical representation. This process can be repeated to determine one astron after another, until these repeated projections finally yield a sequence converging to a finite point $\qq\in\Rn$. Thus, any astral point $\xbar$'s entire canonical representation is determined by this process for any sequence converging to $\xbar$. As already noted, the projections $\xbar\mapsto\xbarperp$ are special cases of astral linear maps. As such, the next proposition summarizes some properties of those projections. \begin{proposition} \label{pr:h:5} Let $\vv\in\Rn$. Let $\xbar\in\extspace$, and let $\xbarperp$ be its projection orthogonal to $\vv$. Then the following hold: \begin{letter-compact} \item \label{pr:h:5a} $\xbarperp\cdot\uu=\xbar\cdot\uperp$ for all $\uu\in\Rn$. \item \label{pr:h:5b} For any sequence $\seq{\xbar_t}$ in $\extspace$, if $\xbar_t\rightarrow\xbar$ then $\xbarperpt\rightarrow \xbarperp$. \item \label{pr:h:5c} $(\xbar\plusl\ybar)^\bot = \xbarperp \plusl \ybarperp$ for $\ybar\in\extspace$. \item \label{pr:h:5e} $(\alpha\ww)^\bot = \alpha(\ww^\bot)$ for $\ww\in\Rn$ and $\alpha\in\Rext$. \end{letter-compact} \end{proposition} \begin{proof} Let $\PP$ be the projection onto the linear space orthogonal to $\vv$. Then for part~(\ref{pr:h:5a}), we have $\xbarperp\cdot\uu=(\PP\xbar)\cdot\uu=\xbar\cdot(\PP\uu)=\xbar\cdot\uperp$ by \Cref{thm:mat-mult-def} and since $\trans{\PP}=\PP$ (\Cref{pr:proj-mat-props}\ref{pr:proj-mat-props:a}). The other parts follow respectively from \Cref{thm:linear:cont}(\ref{thm:linear:cont:b}) and \Cref{pr:h:4}(\ref{pr:h:4c},\ref{pr:h:4e}). 
\end{proof} Finally, given a sequence $\seq{\xx_t}$ in $\Rn$ converging to some $\xbar\in\extspace$, it is sometimes useful to modify the sequence to obtain a new sequence converging to $\limray{\vv}\plusl\xbar$, for some $\vv\in\Rn$. Intuitively, this can be done simply by adding a large scalar multiple of $\vv$ to each sequence element $\xx_t$ so that $\vv$'s direction will be the most dominant. The next lemma makes this intuition precise. \begin{lemma} \label{lem:mod-seq-for-new-dom-dir} Let $\seq{\xx_t}$ be a sequence in $\Rn$ that converges to some point $\xbar\in\extspace$, and let $\vv\in\Rn$. Let $\seq{\alpha_t}$ be a sequence in $\Rstrictpos$ such that $\alpha_t\rightarrow+\infty$ and $\norm{\xx_t}/\alpha_t\rightarrow 0$. (For instance, these conditions are satisfied if $\alpha_t \geq t (1 + \norm{\xx_t})$ for all $t$.) Then $\alpha_t \vv + \xx_t \rightarrow \limray{\vv}\plusl\xbar$. \end{lemma} \begin{proof} Let $\zbar = \limray{\vv}\plusl\xbar$, and for each $t$, let $\zz_t=\alpha_t \vv + \xx_t$. Let $\uu\in\Rn$. We aim to show $\zz_t\cdot\uu\rightarrow\zbar\cdot\uu$, which will prove the claim by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}). If $\vv\cdot\uu>0$ then \begin{equation} \label{eq:lem:mod-seq-for-new-dom-dir:1} \zz_t\cdot\uu = \alpha_t \Parens{ \vv\cdot \uu + \frac{\xx_t\cdot\uu}{\alpha_t} } \geq \alpha_t \Parens{ \vv\cdot \uu - \frac{\norm{\xx_t} \norm{\uu}} {\alpha_t} }. \end{equation} The equality is by algebra, and the inequality is by the Cauchy-Schwarz inequality. Since $\norm{\xx_t}/\alpha_t\rightarrow 0$, the parenthesized expression on the right-hand side of \eqref{eq:lem:mod-seq-for-new-dom-dir:1} is converging to $\vv\cdot\uu>0$. Since $\alpha_t\rightarrow+\infty$, this shows that $\zz_t\cdot\uu\rightarrow+\infty=\zbar\cdot\uu$. The case $\vv\cdot\uu<0$ can be handled symmetrically, or by applying the preceding argument to $-\uu$. 
And if $\vv\cdot\uu=0$ then $\zz_t\cdot\uu = \xx_t\cdot\uu \rightarrow\xbar\cdot\uu = \zbar\cdot\uu$ (by Theorem~\ref{thm:i:1}\ref{thm:i:1c}), completing the proof. \end{proof} \subsection{Convergence in direction} \label{sec:conv-in-dir} Suppose $\seq{\vv_t}$ is a sequence in $\Rn$ converging to some point $\vv\in\Rn$. What can we say about the limit of the associated sequence of astrons $\limray{\vv_t}$? We might at first guess that this sequence should converge to $\limray{\vv}$. Such a conjecture might be based on a naive view of astral space in which astrons are pictured as points on a very large sphere at the outer edge of $\Rn$. By now, however, we know this view to be incorrect. Indeed, as was seen in Section~\ref{sec:not-second}, every astron is topologically isolated from all other astrons. Specifically, Theorem~\ref{thm:formerly-lem:h:1:new} proved that there exists a neighborhood $\Uv$ that includes $\limray{\vv}$, but no other astron. Therefore, it is impossible for the sequence $\seq{\limray{\vv_t}}$ to converge to $\limray{\vv}$, unless $\limray{\vv_t}$ is actually equal to $\limray{\vv}$ for all but finitely many values of $t$. Thus, a convergent sequence of astrons need not converge to an astron. In contrast, a convergent sequence of icons (and in particular, of astrons) will always converge to an icon (by Proposition~\ref{pr:i:8}\ref{pr:i:8e}). Here is a concrete example: \begin{example}[Limits of astrons] In $\R^2$, let $\vv_t = \ee_1 + (1/t) \ee_2$. Then $\vv_t\rightarrow\ee_1$, but $\limray{\vv_t}\not\rightarrow\limray{\ee_1}$. Rather, $\limray{\vv_t}\rightarrow\zbar$ where $\zbar=\limray{\ee_1}\plusl\limray{\ee_2}$. To see this, let $\uu\in\Rn$. If $\ee_1\cdot\uu>0$ then $\vv_t\cdot\uu\rightarrow\ee_1\cdot\uu$, so for all sufficiently large $t$, $\vv_t\cdot\uu>0$, implying $\limray{\vv_t}\cdot\uu=+\infty=\zbar\cdot\uu$. The case $\ee_1\cdot\uu<0$ is symmetric. 
And if $\ee_1\cdot\uu=0$, then $\vv_t\cdot\uu=(1/t)\ee_2\cdot\uu$ so, in this case, $\limray{\vv_t}\cdot\uu=\limray{\ee_2}\cdot\uu=\zbar\cdot\uu$ for all $t$. In every case $\limray{\vv_t}\cdot\uu\rightarrow\zbar\cdot\uu$ so $\limray{\vv_t}\rightarrow\zbar$ (by \Cref{thm:i:1}\ref{thm:i:1c}). \end{example} We can try to give some (very) informal intuition for what is happening in this example. A naive guess might have been that $\seq{\limray{\vv_t}}$ would converge to $\limray{\ee_1}$, since $\vv_t\rightarrow\ee_1$, but that would be impossible, as already discussed. We might then expect the sequence to get ``close'' to $\limray{\ee_1}$, for instance, to converge to a point in its galaxy, $\galax{\limray{\ee_1}}$, but this also does not happen (and could not happen since, as already mentioned, the limit must be an icon). The galaxy $\galax{\limray{\ee_1}}$ is topologically a line, consisting of all points $\limray{\ee_1}\plusl \lambda\ee_2$, for $\lambda\in\R$. The sequence includes points $\limray{\vv_t}=\limray{(\ee_1+(1/t)\ee_2)}$ that get ever ``closer'' to the galaxy, approaching it from ``above'' (that is, in the direction of $\ee_2$). Eventually, the sequence converges to $\limray{\ee_1}\plusl\limray{\ee_2}$, a point in the galaxy's closure at its ``upper'' edge. So the sequence approaches but never enters the galaxy, rather converging to a point at its outer limit. In this example, although the sequence does not converge to $\limray{\ee_1}$, as might have been expected, we did see that the sequence converges to a point $\zbar$ whose dominant direction is $\ee_1$, that is, whose first astron is $\limray{\ee_1}$. This kind of convergence holds always, and in much greater generality, as shown in the next theorem. For instance, if $\vv_t\rightarrow\vv$, as in the discussion above, this theorem implies that $\limray{\vv_t}$ must converge to a point of the form $\limray{\vv}\plusl\ybar$ for some $\ybar\in\extspace$. 
\begin{theorem} \label{thm:gen-dom-dir-converg} Let $\seq{\vbar_t}$ be a sequence in $\extspace$ converging to some point $\vbar\in\extspace$. Let $\seq{\xbar_t}$ be a sequence with each element $\xbar_t$ in $\limray{\vbar_t}\plusl\extspace$ and which converges to some point $\xbar\in\extspace$. Then $\xbar\in\limray{\vbar}\plusl\extspace$. \end{theorem} \begin{proof} We prove the theorem using Proposition~\ref{pr:cl-gal-equiv}. As such, let $\uu\in\Rn$. If $\limray{\vbar}\cdot\uu=+\infty$ then $\vbar\cdot\uu>0$. Since $\vbar_t\cdot\uu\rightarrow\vbar\cdot\uu$ (by Theorem~\ref{thm:i:1}\ref{thm:i:1c}), this implies that for all $t$ sufficiently large, $\vbar_t\cdot\uu>0$, and so that $\limray{\vbar_t}\cdot\uu=+\infty$, implying $\xbar_t\cdot\uu=+\infty$ (by Proposition~\ref{pr:cl-gal-equiv}\ref{pr:cl-gal-equiv:a},\ref{pr:cl-gal-equiv:b}). Since $\xbar_t\cdot\uu\rightarrow\xbar\cdot\uu$, it follows that $\xbar\cdot\uu=+\infty$. Likewise, if $\limray{\vbar}\cdot\uu=-\infty$ then $\xbar\cdot\uu=-\infty$. Since $\limray{\vbar}$ is an icon (by Proposition~\ref{pr:i:8}\ref{pr:i:8-infprod}), we conclude that $\xbar\cdot\uu=\limray{\vbar}\cdot\uu$ whenever $\limray{\vbar}\cdot\uu\neq 0$. By Proposition~\refequiv{pr:cl-gal-equiv}{pr:cl-gal-equiv:a}{pr:cl-gal-equiv:b}, this proves the theorem. \end{proof} In the special case that each $\vbar_t$ is equal to an icon $\ebar_t\in\corezn$, Theorem~\ref{thm:gen-dom-dir-converg} implies that if $\ebar_t\rightarrow\ebar$ for some $\ebar\in\corezn$ and each $\xbar_t\in\galcldt$ then their limit $\xbar$ is in $\galcld$. Using Theorem~\ref{thm:gen-dom-dir-converg}, we can now prove that the dominant direction of the limit of a sequence is identical to the limit of the dominant directions of sequence elements: \begin{theorem} \label{thm:dom-dirs-continuous} Let $\seq{\xbar_t}$ be a sequence in $\extspace\setminus\Rn$ that converges to some point $\xbar\in\extspace$ (which cannot be in $\Rn$). 
For each $t$, let $\vv_t$ be the dominant direction of $\xbar_t$, and let $\vv\in\Rn$. Then $\vv$ is the dominant direction of $\xbar$ if and only if $\vv_t\rightarrow\vv$. \end{theorem} \begin{proof} First, $\xbar$ cannot be in $\Rn$ since otherwise $\Rn$ would be a neighborhood of $\xbar$, implying infinitely many of the points in the sequence are in $\Rn$, a contradiction. \begin{proof-parts} \pfpart{``If'' ($\Leftarrow$):} Suppose $\vv_t\rightarrow\vv$. Then Theorem~\ref{thm:gen-dom-dir-converg}, applied with $\vbar_t=\vv_t$ and $\vbar=\vv$, implies that $\xbar\in\limray{\vv}\plusl\extspace$, that is, that $\vv$ is the dominant direction of $\xbar$. \pfpart{``Only if'' ($\Rightarrow$):} Suppose $\vv$ is the dominant direction of $\xbar$, and that, contrary to the theorem's claim, $\vv_t\not\rightarrow\vv$. Then there exists a neighborhood $U\subseteq\Rn$ of $\vv$ that excludes infinitely many $\vv_t$. By discarding all other sequence elements, we can assume $\vv_t\not\in U$ for all $t$. Furthermore, because each $\vv_t$ is on the unit sphere in $\Rn$, which is compact, the sequence must have a subsequence that converges to some unit vector $\ww\in\Rn$. By again discarding all other sequence elements, we can assume the entire sequence converges so that $\vv_t\rightarrow\ww$, and still $\xbar_t\rightarrow\xbar$. Since $U$'s complement, $\Rn\setminus U$, is closed (in $\Rn$) and includes each $\vv_t$, it must also include their limit $\ww$, but not $\vv$; thus, $\ww\neq \vv$. Since $\vv_t\rightarrow\ww$, as argued above, $\ww$ must be the dominant direction of $\xbar$. However, by assumption, $\vv$ is also $\xbar$'s dominant direction, a contradiction since $\xbar$'s dominant direction is unique (by Theorem~\ref{thm:dom-dir}), but $\vv\neq\ww$. Having reached a contradiction, we conclude that $\vv_t\rightarrow\vv$. 
\qedhere \end{proof-parts} \end{proof} \subsection{Comparison to cosmic space} \label{sec:cosmic} As earlier mentioned, \citet[Section~3A]{rock_wets} study a different compactification of $\Rn$ called \emph{cosmic space}, which is closely related to the \emph{enlarged space} of \citet{hansen_dupin99}. Here, we explore how cosmic space and astral space are related, which we will see is also of direct relevance to the preceding discussion. Cosmic space consists of $\Rn$ together with \emph{direction points}, one for every ray from the origin. Rockafellar and Wets denote such points by the notation $\rwdir{\vv}$, for $\vv\in\Rn\setminus\{\zero\}$. Thus, $n$-dimensional cosmic space, written $\cosmspn$, is $\Rn$ together with all direction points: \[ \cosmspn = \Rn \cup \bigBraces{\rwdir{\vv} : \vv\in\Rn\setminus\{\zero\}}. \] Note that for vectors $\vv,\vv'\in\Rn\setminus\{\zero\}$, $\rwdir{\vv}=\rwdir{\vv'}$ if and only if $\vv$ and $\vv'$ have the same direction (so that $\vv'=\lambda\vv$ for some $\lambda\in\Rstrictpos$). As discussed by Rockafellar and Wets, the topology on cosmic space is homeomorphic to the closed unit ball in $\Rn$, with $\Rn$ itself mapped to the interior of the ball, and the direction points mapped to its surface. (The topology on this unit ball is simply as a subspace of Euclidean space $\Rn$.) More concretely, this homeomorphism can be given by the map $\xx\mapsto\xx/(1+\norm{\xx})$ for $\xx\in\Rn$, and with each direction point $\rwdir{\vv}$ mapped to $\vv$, for $\vv\in\Rn$ with $\norm{\vv}=1$. Thus, we can picture cosmic space being formed by shrinking $\Rn$ down to the open unit ball in $\Rn$, and then taking its closure so that the points on the surface of the ball correspond exactly to the ``new'' direction points. Since the closed unit ball is compact, cosmic space is as well. 
\citet[Definition~3.1]{rock_wets} assert that in this topology, a sequence $\seq{\xrw_t}$ in $\cosmspn$ converges to a direction point $\rwdir{\vv}$, where $\vv\in\Rn\setminus\{\zero\}$, exactly under the following conditions: \begin{itemize} \item If the sequence is entirely in $\Rn$, so that $\xrw_t=\xx_t\in\Rn$ for each $t$, then $\xx_t\rightarrow\rwdir{\vv}$ in $\cosmspn$ if and only if there exists a sequence $\seq{\lambda_t}$ in $\Rstrictpos$ with $\lambda_t\rightarrow 0$ and $\lambda_t \xx_t \rightarrow \vv$. \item If the sequence is entirely outside $\Rn$, so that $\xrw_t=\rwdir{\vv_t}$ for some $\vv_t\in\Rn\setminus\{\zero\}$ for each $t$, then $\rwdir{\vv_t}\rightarrow\rwdir{\vv}$ in $\cosmspn$ if and only if there exists a sequence $\seq{\lambda_t}$ in $\Rstrictpos$ with $\lambda_t \vv_t \rightarrow \vv$. \item If the sequence $\seq{\xrw_t}$ is a mix that includes infinitely many finite points in $\Rn$ and infinitely many direction points, then it converges to $\rwdir{\vv}$ if and only if the subsequence of all its finite points and the subsequence of all its direction points each converge separately to $\rwdir{\vv}$. (If the sequence only includes finitely many of either type, then these can be disregarded.) \end{itemize} Cosmic space captures the intuitive view, discussed in Sections~\ref{sec:not-second} and~\ref{sec:conv-in-dir}, of points at infinity forming a kind of continuum, with every neighborhood of a direction point $\rwdir{\vv}$ necessarily including nearby direction points. This contrasts starkly with astral space, for which this is not the case: as was seen in Theorem~\ref{thm:formerly-lem:h:1:new}, every individual astron in astral space is topologically isolated from all other astrons. As seen in Theorem~\ref{thm:i:1}(\ref{thm:i:1c}), every linear function $f(\xx)=\xx\cdot\uu$, for $\xx\in\Rn$ and some $\uu\in\Rn$, can be extended continuously to astral space (namely, by the function $\xbar\mapsto\xbar\cdot\uu$, for $\xbar\in\extspace$).
The same is \emph{not} true for cosmic space. To see this, suppose $n>1$ and $\uu\neq\zero$. Let $\ww\in\Rn$ be any nonzero vector with $\uu\cdot\ww=0$, and let $\alpha\in\R$. Finally, define the sequence $\seq{\xx_t}$ by $\xx_t=\alpha \uu + t \ww$. Regardless of $\alpha$, this sequence converges to $\rwdir{\ww}$ in $\cosmspn$ (since $\lambda_t \xx_t\rightarrow \ww$, with $\lambda_t=1/t$). On the other hand, $f(\xx_t)=\alpha \norm{\uu}^2$ for all $t$, implying $\lim f(\xx_t)$ has this same value, and so is \emph{not} independent of $\alpha$. Thus, for different choices of $\alpha$, the sequence $\seq{\xx_t}$ always has the same limit $\rwdir{\ww}$, but the function values $f(\xx_t)$ have different limits. As a result, no extension of $f$ to cosmic space can be continuous at $\rwdir{\ww}$. Indeed, this argument shows that \emph{no} linear function on $\Rn$ can be extended continuously to cosmic space, except for the identically zero function, or when $n=1$. We next look at the particular relationship between the topologies on cosmic space and astral space. To explain this connection, let us first define the map $\quof:\extspace\rightarrow\cosmspn$ as follows: For $\xx\in\Rn$, the map is simply the identity, so $\quof(\xx)=\xx$. For all other points $\xbar\in\extspace\setminus\Rn$, we let $\quof(\xbar)=\rwdir{\vv}$ where $\vv\in\Rn$ is $\xbar$'s dominant direction (which exists and is unique by Theorem~\ref{thm:dom-dir}). Thus, $\quof$ maps all infinite points $\xbar\in\extspace$ with the same dominant direction $\vv$ to the same direction point $\rwdir{\vv}\in\cosmspn$. In other words, $\quofinv(\rwdir{\vv})$ consists exactly of those astral points of the form $\limray{\vv}\plusl\zbar$, for some $\zbar\in\extspace$, which means $\quofinv(\rwdir{\vv})$ is exactly $\galcl{\limray{\vv}}$, the closure of $\limray{\vv}$'s galaxy.
In this sense, applying the map $\quof$ to astral space causes every such set $\galcl{\limray{\vv}}$ to ``collapse'' down to a single point, namely, $\rwdir{\vv}$. We claim that the topology on $\cosmspn$ inherited from $\extspace$ as a result of this collapsing operation is exactly the cosmic topology defined earlier. Formally, as shown in the next theorem, we are claiming that $\quof$ is a \emph{quotient map}, meaning that it is surjective, and that, for all subsets $U\subseteq\cosmspn$, $\quofinv(U)$ is open in $\extspace$ if and only if $U$ is open in $\cosmspn$. As a result, the topology on $\cosmspn$ is exactly the \emph{quotient topology} induced by $\quof$, and so also $\cosmspn$ is a \emph{quotient space} of $\extspace$. \begin{theorem} Let $\quof:\extspace\rightarrow\cosmspn$ be the map defined above. Then $\quof$ is a quotient map. Therefore, $\cosmspn$ is a quotient space of $\extspace$. \end{theorem} \begin{proof} First, $\quof$ is surjective since $\quof(\xx)=\xx$ for all $\xx\in\Rn$ and $\quof(\limray{\vv})=\rwdir{\vv}$ for all $\vv\in\Rn\setminus\{\zero\}$. Next, we claim $\quof$ is continuous. Let $\seq{\xbar_t}$ be a sequence in $\extspace$ that converges to $\xbar\in\extspace$. We aim to show $\quof(\xbar_t)\rightarrow\quof(\xbar)$ in $\cosmspn$ (which, by \Cref{prop:first:properties}\ref{prop:first:cont}, is sufficient for proving continuity since $\extspace$ is first-countable). If $\xbar=\xx\in\Rn$, then, because $\Rn$ is a neighborhood of $\xx$, all but finitely many of the elements $\xbar_t$ must also be in $\Rn$. Since $\quof$ is the identity function on $\Rn$, and since the topologies on $\cosmspn$ and $\extspace$ are the same when restricted to $\Rn$, the claim follows directly in this case. Suppose then that $\xbar\not\in\Rn$, and therefore has some dominant direction $\vv\in\Rn$ so that $\quof(\xbar)=\rwdir{\vv}$. We consider cases based on the elements of the sequence $\seq{\xbar_t}$. 
First, suppose the sequence includes at most finitely many elements not in $\Rn$. Discarding these finitely many elements, we can assume without loss of generality that the entire sequence is in $\Rn$ so that $\xbar_t=\xx_t\in\Rn$ for all $t$, and $\xx_t\rightarrow\xbar$ in $\extspace$. Let $\lambda_t=1/\norm{\xx_t}$ if $\xx_t\neq\zero$, and $\lambda_t=1$ otherwise. Then $\xx_t=(\lambda_t \xx_t) \norm{\xx_t}$ for all $t$, so $\lambda_t \xx_t \rightarrow \vv$ by Theorem~\refequiv{thm:dom-dir}{thm:dom-dir:a}{thm:dom-dir:c} (with $\dd_t=\lambda_t\xx_t$, and since $\vv$ is $\xbar$'s dominant direction). Further, $\norm{\xx_t}\rightarrow+\infty$ (by Proposition~\ref{pr:seq-to-inf-has-inf-len}), so $\lambda_t\rightarrow 0$. Thus, the sequence $\quof(\xx_t)=\xx_t$ converges in $\cosmspn$ to $\quof(\xbar)=\rwdir{\vv}$, as follows from the characterization of convergence in cosmic space given by \citet[Definition~3.1]{rock_wets}, as discussed above. Next, suppose the sequence includes at most finitely many elements that are in $\Rn$. Then as before, we can assume without loss of generality that none of the sequence elements $\xbar_t$ are in $\Rn$. For each $t$, let $\vv_t$ be the dominant direction of $\xbar_t$ so that $\quof(\xbar_t)=\rwdir{\vv_t}$. Then by Theorem~\ref{thm:dom-dirs-continuous}, it follows immediately that $\vv_t\rightarrow\vv$. Therefore, the sequence $\quof(\xbar_t)=\rwdir{\vv_t}$ converges in $\cosmspn$ to $\quof(\xbar)=\rwdir{\vv}$, as follows again from \citet[Definition~3.1]{rock_wets} as discussed above (with $\lambda_t=1$ for all $t$). If the sequence $\seq{\xbar_t}$ is a mix of infinitely many elements in $\Rn$ and infinitely many elements not in $\Rn$, then we can treat the two subsequences of elements in or not in $\Rn$ separately. The arguments above show that the images of each of these subsequences under $\quof$ converge to $\quof(\xbar)$. Therefore, the image of the entire sequence converges to $\quof(\xbar)$.
(This is because for any neighborhood $U$ of $\quof(\xbar)$ in $\cosmspn$, all elements of each subsequence must eventually be in $U$; therefore, all elements of the entire sequence must eventually be in $U$.) Thus, in all cases, $\quof(\xbar_t)\rightarrow\quof(\xbar)$. Therefore, $\quof$ is continuous. We next claim that $\quof$ is a \emph{closed map}, meaning that it maps every closed set $V$ in $\extspace$ to a closed set $\quof(V)$ in $\cosmspn$. Indeed, suppose $V\subseteq\extspace$ is closed. Then $V$ is compact since $\extspace$ is compact (by \Cref{prop:compact}\ref{prop:compact:closed-subset}). Therefore, by \Cref{prop:compact}(\ref{prop:compact:cont-compact},\ref{prop:compact:closed}), its image $\quof(V)$ is compact, since $\quof$ is continuous, and so is closed in $\cosmspn$, since $\cosmspn$ is Hausdorff (being homeomorphic to a subspace of Euclidean space). Thus, $\quof$ is a surjective, continuous, closed map. Together, these properties imply that it is a quotient map: To see this, let $U\subseteq\cosmspn$. If $U$ is open, then $\quofinv(U)$ is open, since $\quof$ is continuous. Conversely, if $\quofinv(U)$ is open, then $\extspace\setminus\quofinv(U)$ is closed. Its image is $\quof(\extspace\setminus\quofinv(U))=(\cosmspn)\setminus U$, since $\quof$ is surjective. This image must be closed, since $\quof$ is a closed map; therefore, $U$ is open. \end{proof} \part{Extending functions to astral space} \label{part:extending-functions} \section{Lower semicontinuous extension} \label{sec:functions} We are now ready to begin the study of functions that have been extended to astral space. We are especially motivated by the fundamental problem of minimizing a convex function $f$ on $\Rn$. In general, such a function might not be minimized at any finite point in its domain, and its infimum might only be attained ``at infinity,'' by following a sequence.
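As a one-dimensional illustration (a minimal Python sketch; the choice $f(x)=e^{-x}$ is ours, not from the text), consider a convex function whose infimum is approached only along a divergent sequence:

```python
import numpy as np

# f(x) = exp(-x) is convex with inf f = 0, yet no finite x attains the
# infimum: it is approached only along sequences x_t -> +infinity.
f = lambda x: np.exp(-x)

xs = np.array([0.0, 10.0, 100.0, 700.0])
vals = f(xs)
print(vals)  # strictly positive and decreasing toward 0
```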
To study this situation within our framework, we focus particularly on an extension $\fext$ of $f$ to $\extspace$, which is constructed in such a way that $\ef$ is lower semicontinuous, and so that $f$'s minimum over sequences in $\Rn$ coincides with $\fext$'s minimum over points in $\extspace$. Further, $\fext$ always attains its minimum thanks to its lower semicontinuity and the compactness of $\eRn$. Much of the rest of this book studies $\fext$'s properties, for example, its continuity and the structure of its minimizers. \subsection{Definition and basic properties} \label{sec:lsc:ext} To begin, we define the extension $\fext$, prove its lower semicontinuity, and derive several related properties. {As discussed in Section~\ref{sec:prelim:lower-semicont}, a function $f:X\to\eR$ on a first-countable space $X$ is {lower semicontinuous} at $x\in X$ if $f(x)\le\liminf f(x_t)$ whenever $x_t\to x$. The function is lower semicontinuous if it is lower semicontinuous at every point in $X$. Lower semicontinuous functions are precisely those whose epigraphs are closed in $X\times\R$ (\Cref{prop:lsc}). On compact sets, such functions always attain their minimum~(\Cref{thm:weierstrass}). When $X$ is first-countable, the {lower semicontinuous hull} of $f:X\to\eR$, denoted $\lsc{f}$, is as defined in \eqref{eq:lsc:liminf:X:prelims}. As seen in \Cref{prop:lsc:characterize}, this is the greatest lower semicontinuous function majorized by $f$; further, its epigraph is the closure of $\epi f$ in $X\times\R$. The lower semicontinuous hull of a function $f:\Rn\rightarrow\Rext$ is given by \begin{equation} \label{eq:lsc:liminf} (\lsc f)(\xx) = \InfseqLiminf{\seq{\xx_t}}{\Rn}{\xx_t\rightarrow \xx} {f(\xx_t)} \end{equation} for $\xx\in\Rn$, as was seen in Section~\ref{sec:prelim:lsc}. 
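The right-hand side of the formula above for $\lsc f$ can be approximated numerically by sampling near $\xx$; the following is a minimal Python sketch (the function $f$ and the grid-sampling scheme are illustrative choices of ours):

```python
import numpy as np

# A function that fails to be lower semicontinuous only at x = 0.
def f(x):
    return 1.0 if x == 0 else x * x

# Numeric stand-in for the lsc hull: take the infimum of f over a fine
# grid in a small neighborhood of x (together with f(x) itself, which
# corresponds to the constant sequence x_t = x).
def lsc_f(x, eps=1e-6, samples=1001):
    grid = x + np.linspace(-eps, eps, samples)
    return min(f(x), min(f(g) for g in grid))

print(lsc_f(0.0))   # ~0: the hull lowers the value at the discontinuity
print(lsc_f(1.0))   # ~1: f is already lsc at 1
```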
The operation of a lower semicontinuous hull is particularly natural for convex functions $f$ since they are already lower semicontinuous on the relative interior of $\dom{f}$, so only the function values on the relative boundary of $\dom{f}$ may need to be adjusted; further, the resulting function, $\lsc f$, remains convex (\Cref{pr:lsc-props}\ref{pr:lsc-props:a},\ref{pr:lsc-props:b}). To extend $f$ to astral space, we use the same idea but now considering all sequences converging to points in $\extspace$, not just $\Rn$: \begin{definition} \label{def:lsc-ext} The \emph{lower semicontinuous extension} of a function $f:\Rn\rightarrow\Rext$ (or simply the \emph{extension} of $f$) is the function $\fext:\eRn\to\eR$ defined by \begin{equation} \label{eq:e:7} \fext(\xbar) = \InfseqLiminf{\seq{\xx_t}}{\Rn}{\xx_t\rightarrow \xbar} {f(\xx_t)}, \end{equation} for all $\xbar\in\extspace$, where (as usual for such notation) the infimum is over all sequences $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$. \end{definition} Clearly $\ef(\xx)=(\lsc f)(\xx)$ for all $\xx\in\Rn$, but $\ef$ is also defined at points $\xbar\in\eRn\wo\Rn$. The next proposition shows that $\ef$ is the greatest lower semicontinuous function that, when restricted to $\Rn$, is majorized by $f$. Moreover, the epigraph of $\ef$ is the closure of the epigraph of $f$ in $\eRn\times\R$. Thus, $\ef$ can be viewed, in a sense, as a lower semicontinuous hull of $f$ over all of $\eRn$. \begin{proposition} \label{prop:ext:F} Let $f:\Rn\to\eR$. Then its extension $\ef:\eRn\to\eR$ is the greatest lower semicontinuous function on $\eRn$ which, when restricted to $\Rn$, is majorized by $f$. Furthermore, $\epi\ef$ is the closure of $\epi f$ in $\eRn\times\R$. 
\end{proposition} \begin{proof} Let $F:\eRn\to\eR$ be a ``naive'' extension of $f$ defined by \begin{equation} \label{eq:naive:ext} F(\xbar)= \begin{cases} f(\xbar) &\text{if $\xbar\in\Rn$,} \\ +\infty &\text{otherwise.} \end{cases} \end{equation} From this definition, $\epi f=\epi F$. Moreover, for any function $G:\eRn\to\eR$, we have $G\le F$ if and only if $G(\xx)\le f(\xx)$ for all $\xx\in\Rn$, that is, $G$ is majorized by $F$ if and only if it is majorized by $f$ when restricted to $\Rn$. Thus, using the characterization of the lower semicontinuous hull (\Cref{prop:lsc:characterize}), $\lsc F$ is the greatest lower semicontinuous function on $\eRn$ that is majorized by $f$ when restricted to $\Rn$, and $\epi(\lsc F)$ is the closure of $\epi f$ in $\eRn\times\R$. We will show that $\lsc F=\ef$, which will complete the proof. Let $\xbar\in\eRn$. Then $(\lsc F)(\xbar)\le\ef(\xbar)$ directly from the definitions (Eqs.~\ref{eq:lsc:liminf:X:prelims} and~\ref{eq:e:7}), \begin{align*} (\lsc F)(\xbar) = \InfseqLiminf{\seq{\xbar_t}}{\eRn}{\xbar_t\rightarrow\xbar} {F(\xbar_t)} & \le \InfseqLiminf{\seq{\xx_t}}{\Rn\vphantom{\eRn}}{\xx_t\rightarrow\xbar} {F(\xx_t)} \\ & = \InfseqLiminf{\seq{\xx_t}}{\Rn}{\xx_t\rightarrow \xbar} {f(\xx_t)} = \fext(\xbar). \end{align*} It remains to show that $(\lsc F)(\xbar)\ge\ef(\xbar)$. This holds trivially if $(\lsc F)(\xbar)=+\infty$, so assume that $(\lsc F)(\xbar)<+\infty$ and consider any $\beta\in\R$ such that $(\lsc F)(\xbar)<\beta$. Then, from the definition of $\lsc F$, there must exist a sequence $\seq{\xbar_t}$ in $\eRn$ that converges to $\xbar$ and such that $F(\xbar_t)<\beta$ infinitely often. Let $\seq{\xbar'_t}$ denote the subsequence consisting of those elements. Since, for all elements $\xbar'_t$, $F(\xbar'_t)<+\infty$, we must in fact have $\xbar'_t\in\Rn$, and so the sequence $\seq{\xbar'_t}$ is included in the infimum appearing in the definition of $\ef(\xbar)$, meaning that $\ef(\xbar)\le\beta$.
This is true for all $\beta>(\lsc F)(\xbar)$, so we obtain that $\ef(\xbar)\le(\lsc F)(\xbar)$. Thus, $\ef=\lsc F$, completing the proof. \end{proof} \begin{example}[Extension of an affine function] \label{ex:ext-affine} Let $\uu\in\Rn$, $b\in\R$ and consider $f(\xx)=\xx\cdot\uu+b$ for all $\xx\in\Rn$. In this simple case, \eqref{eq:e:7} yields $\fext(\xbar)=\xbar\cdot\uu+b$ since if $\seq{\xx_t}$ is any sequence in $\Rn$ that converges to $\xbar\in\extspace$, then $f(\xx_t)=\xx_t\cdot\uu+b\rightarrow\xbar\cdot\uu+b$ by continuity (see \Cref{thm:i:1}\ref{thm:i:1c}). In fact, $\ef$ is an instance of an astral affine function from \Cref{sec:linear-maps}. \end{example} \begin{example}[Product of hyperbolas] \label{ex:recip-fcn-eg} Suppose \begin{equation} \label{eqn:recip-fcn-eg} f(\xx) = f(x_1, x_2) = \begin{cases} \dfrac{1}{x_1 x_2} & \text{if $x_1>0$ and $x_2>0$,} \\[1em] +\infty & \text{otherwise,} \end{cases} \end{equation} for $\xx=\trans{[x_1,x_2]}\in\R^2$. This function is convex, closed, proper and continuous everywhere. Suppose $\beta\in\R$ and $\xbar=\limray{\ee_1}\plusl\beta\ee_2$ (where $\ee_1$ and $\ee_2$ are standard basis vectors). If $\beta>0$, then $\fext(\xbar)=0$ since on any sequence $\seq{\xx_t}$ converging to $\xbar$, the first component $\xx_t\cdot\ee_1=x_{t1}$ converges to $\xbar\cdot\ee_1=+\infty$, while the second component $\xx_t\cdot\ee_2=x_{t2}$ converges to $\xbar\cdot\ee_2=\beta>0$, implying that $f(\xx_t)\rightarrow 0$. If $\beta<0$, then a similar argument shows that $\fext(\xbar)=+\infty$. And if $\beta=0$, so that $\xbar=\limray{\ee_1}$, then $\fext(\xbar)$ is again equal to $0$, although more care is now needed in finding a sequence that shows this. One example is a sequence $\xx_t=t^2\ee_1 + (1/t)\ee_2$, for which $f(\xx_t)=1/t\rightarrow 0$. This implies $\fext(\xbar)\leq 0$, and since $f$ is nonnegative everywhere, $\fext$ is as well, so $\fext(\xbar)=0$. 
\end{example} For the remainder of this subsection, we establish some basic properties of $\fext$. \begin{proposition} \label{pr:h:1} Let $f:\Rn\rightarrow\Rext$. Then the following hold for $f$'s extension, $\fext$: \begin{letter-compact} \item \label{pr:h:1a} For all $\xx\in\Rn$, $\fext(\xx)=(\lsc f)(\xx)\leq f(\xx)$. Thus, $\fext(\xx)=f(\xx)$ if $f$ is already lower semicontinuous at $\xx$. \item \label{pr:h:1aa} The extension of $f$ is the same as that of its lower semicontinuous hull. That is, $\fext=\lscfext$. \item \label{pr:h:1b} Let $U\subseteq\extspace$ be a neighborhood of some point $\xbar\in\extspace$, and suppose $\fext(\xbar)<b$ for some $b\in\R$. Then there exists a point $\xx\in U\cap \Rn$ with $f(\xx)<b$. \item \label{pr:h:1b2} Let $U\subseteq\extspace$ be a neighborhood of some point $\xbar\in\extspace$, and suppose $\fext(\xbar)>b$ for some $b\in\R$. Then there exists a point $\xx\in U\cap \Rn$ with $f(\xx)>b$. \item \label{pr:h:1c} In $\extspace$, the closures of the effective domains of $f$ and $\fext$ are identical. That is, $\cldom{f}=\cldomfext$. \item \label{pr:h:1:geq} Let $g:\Rn\rightarrow\Rext$. If $f\geq g$ then $\fext\geq\gext$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:h:1a}):} The equality follows from the definitions of $\lsc f$ and $\ef$ (Eqs.~\ref{eq:lsc:liminf} and~\ref{eq:e:7}). The inequality follows from the fact that $f$ majorizes $\lsc f$. \pfpart{Part~(\ref{pr:h:1aa}):} We need to show that $\ef=\eg$ where $g=\lsc f$. Since $g\le f$, we have $\eg\le\ef$. To show that $\ef\le\eg$, note that $\ef$ is lower semicontinuous (by \Cref{prop:ext:F}) and majorized by $g$ on $\Rn$ by part~(\ref{pr:h:1a}), and so it must be majorized by $\eg$ (by \Cref{prop:ext:F} applied to $g$). \pfpart{Part~(\ref{pr:h:1b}):} Since $\fext(\xbar)<b$, there exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$ such that $f(\xx_t) < b$ for infinitely many values of $t$. 
Since $U$ is a neighborhood of $\xbar$, $\xx_t\in U$ for all $t$ sufficiently large. So, among the sequence elements with $f(\xx_t) < b$, it suffices to pick $\xx_t$ with a sufficiently large index $t$. \pfpart{Part~(\ref{pr:h:1b2}):} Let $\seq{\xx_t}$ be any sequence in $\Rn$ converging to $\xbar$. By the definition of $\ef$, $\liminf f(\xx_t)\geq \fext(\xbar)>b$, which implies that $f(\xx_t)>b$ for infinitely many values of~$t$. Since $U$ is a neighborhood of $\xbar$, $\xx_t\in U$ for all $t$ sufficiently large. So, among the sequence elements with $f(\xx_t) > b$, it suffices to pick $\xx_t$ with a sufficiently large index $t$. \pfpart{Part~(\ref{pr:h:1c}):} By part~(\ref{pr:h:1a}), $\fext(\xx)\leq f(\xx)$ for all $\xx\in\Rn$. Therefore, $\dom f \subseteq \dom \fext$, implying $\cldom{f} \subseteq \cldomfext$. For the reverse inclusion, suppose $\xbar\in\dom{\fext}$ and let $b\in \R$ be such that $\fext(\xbar)<b$. From the definition of $\fext$, there exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$ and such that $f(\xx_t)<b$ for all $t$. But this means that $\xx_t\in\dom{f}$ for all $t$, and since $\xx_t\to\xbar$, we obtain $\xbar\in\cldom{f}$. Thus, $\dom{\fext}\subseteq\cldom{f}$, so also $\cldomfext\subseteq\cldom{f}$. \pfpart{Part~(\ref{pr:h:1:geq}):} Let $\xbar\in\extspace$, and suppose $f\geq g$. Let $\seq{\xx_t}$ be any sequence converging to $\xbar$. Then \[ \liminf f(\xx_t) \geq \liminf g(\xx_t) \geq \gext(\xbar), \] where the last inequality is by definition of $\gext$ (Definition~\ref{def:lsc-ext}). Since this holds for all such sequences, the claim follows (by definition of $\fext$). \qedhere \end{proof-parts} \end{proof} We next show that the infimum appearing in \eqref{eq:e:7} must be realized by some sequence for which $f(\xx_t)$ converges to $\fext(\xbar)$: \begin{proposition} \label{pr:d1} Let $f:\Rn\rightarrow\Rext$, and let $\xbar\in\extspace$. 
Then there exists a sequence $\seq{\xx_t}$ in $\Rn$ such that $\xx_t\rightarrow\xbar$ and $f(\xx_t)\rightarrow \fext(\xbar)$. \end{proposition} \begin{proof} If $\fext(\xbar)=+\infty$ then any sequence $\seq{\xx_t}$ in $\Rn$ with $\xx_t\to\xbar$ satisfies $\liminf f(\xx_t)=+\infty$, which implies that $f(\xx_t)\to+\infty$. Consider then the case $\fext(\xbar)<+\infty$, and let $\countset{B}$ be a nested countable neighborhood base for $\xbar$ (which exists by \Cref{thm:first:local}). By Proposition~\ref{pr:h:1}(\ref{pr:h:1b}), for each $t$, there must exist a point $\xx_t\in B_t\cap \Rn$ for which $f(\xx_t) < b_t$ where \[ b_t = \begin{cases} -t & \text{if $\fext(\xbar) = -\infty$,}\\ \fext(\xbar) + 1/t & \text{if $\fext(\xbar) \in \R$.}\\ \end{cases} \] The resulting sequence $\seq{\xx_t}$ converges to $\xbar$ (by \Cref{cor:first:local:conv}), so \[ \liminf f(\xx_t) \geq \fext(\xbar). \] On the other hand, \[ \limsup f(\xx_t) \leq \limsup b_t = \fext(\xbar). \] Thus, $\lim f(\xx_t) = \fext(\xbar)$. \end{proof} Because astral space is compact and $\fext$ is lower semicontinuous, the minimum of $\fext$ is always realized at some point $\xbar\in\extspace$. The next proposition shows that $\fext(\xbar)=\inf f$. Thus, minimizing a function $f$ on $\Rn$ is equivalent to finding a minimizer of its extension $\fext$: \begin{proposition} \label{pr:fext-min-exists} Let $f:\Rn\rightarrow\Rext$. Then there exists a point $\xbar\in\extspace$ for which $\fext(\xbar)=\inf f$. Thus, \[ \min_{\xbar\in\extspace} \fext(\xbar) = \inf_{\xx\in\Rn\vphantom{\extspace}} f(\xx) = \inf f. \] \end{proposition} \begin{proof} Since $\ef$ is a lower semicontinuous function on a compact space, its minimum is attained at some point $\xbar\in\eRn$ (\Cref{thm:weierstrass}). Also, since $f$ majorizes $\ef$ on $\Rn$, we must have $\inf f\ge\inf\ef=\ef(\xbar)$. It remains to show that $\inf f\le\ef(\xbar)$.
By \Cref{pr:d1}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ such that $\xx_t\to\xbar$ and $f(\xx_t)\to\ef(\xbar)$, so $\inf f\le\lim f(\xx_t)=\ef(\xbar)$, completing the proof. \end{proof} Analogous to the standard indicator function $\inds$ defined in \eqref{eq:indf-defn}, for an astral set $S\subseteq\eRn$, we define the \emph{astral indicator function} $\indfa{S}:\extspace\rightarrow\Rext$, for $\xbar\in\eRn$, as \begin{equation} \label{eq:indfa-defn} \indfa{S}(\xbar) = \begin{cases} 0 & \text{if $\xbar\in S$,} \\ +\infty & \text{otherwise.} \end{cases} \end{equation} Also, as with standard indicator functions, for a single point $\zbar\in\extspace$, we sometimes write $\indfa{\zbar}$ as shorthand for $\indfa{\{\zbar\}}$. As shown in the next proposition, for $S\subseteq\Rn$, the extension of a standard indicator function $\inds$ is the astral indicator function for $\Sbar$, the closure of $S$ in~$\extspace$: \begin{proposition} \label{pr:inds-ext} Let $S\subseteq\Rn$. Then $\indsext=\indfa{\Sbar}$. \end{proposition} \begin{proof} By \Cref{prop:ext:F}, the epigraph of $\indsext$ is the closure of $\epi\inds$ in $\eRn\times\R$. Since $\epi\inds=S\times\Rpos$, and the closure of a product is the product of the closures (\Cref{pr:prod-top-props}\ref{pr:prod-top-props:d}), we obtain $\epi\indsext=\Sbar\times\Rpos=\epi{\indfa{\Sbar}}$. \end{proof} \subsection{Continuity} We next look more closely at what continuity means for an extension $\fext$. Although $\fext$ must always be lower semicontinuous (\Cref{prop:ext:F}), it need not be continuous everywhere. Later, we will exactly characterize at what points $\fext$ is continuous and when it is continuous everywhere (see especially Chapter~\ref{sec:continuity}). 
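In one dimension, the identity $\indsext=\indfa{\Sbar}$ can be sanity-checked numerically; the following is a minimal Python sketch, with the open interval $S=(0,1)$ an illustrative choice of ours:

```python
import numpy as np

# Indicator of the open interval S = (0,1): 0 on S, +infinity elsewhere.
def ind_S(x):
    return 0.0 if 0.0 < x < 1.0 else np.inf

# ind_S itself jumps to +infinity at the boundary point 1...
print(ind_S(1.0))   # inf

# ...but the extension at 1 is an infimum over all sequences x_t -> 1,
# and a sequence approaching 1 from inside S witnesses the value 0,
# matching the indicator of the closure [0,1].
inside = 1.0 - 1.0 / np.arange(2.0, 100.0)   # sequence in S tending to 1
witness = max(ind_S(x) for x in inside)
print(witness)      # 0.0
```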
Because $\extspace$ is first-countable, an astral function $F:\extspace\rightarrow\Rext$, including an extension $\fext$, is {continuous} at some point $\xbar$ if and only if, for every sequence of points $\seq{\xbar_t}$ in $\extspace$, if $\xbar_t\rightarrow\xbar$ then $F(\xbar_t)\rightarrow F(\xbar)$ (\Cref{prop:first:properties}\ref{prop:first:cont}). For extensions, there is also another natural notion of continuity: specifically, we define a function $f:\Rn\rightarrow\Rext$ to be \emph{extensibly continuous at $\xbar\in\extspace$} if for every sequence $\seq{\xx_t}$ in $\Rn$, if $\xx_t\rightarrow\xbar$ then $f(\xx_t)\rightarrow\fext(\xbar)$. Thus, extensible continuity only involves sequences in $\Rn$ and the values on this sequence of the original function $f$ (rather than sequences in $\extspace$ and the values of $\fext$). The next theorem shows that this property, which is often easier to work with, implies ordinary continuity of $\fext$ at $\xbar$. For instance, we earlier argued that the function $f$ in \Cref{ex:recip-fcn-eg} is extensibly continuous at $\xbar=\limray{\ee_1}\plusl\beta\ee_2$ if $\beta\neq 0$, implying that $\fext$ is continuous at this same point. In general, continuity of $\fext$ at a point~$\xbar$ need not imply extensible continuity of $f$ at~$\xbar$. For example, consider a nonconvex function $f:\R\to\eR$ defined, for $x\in\R$, as \[ f(x)= \begin{cases} 1 & \text{if $x=0$,} \\ 0 & \text{otherwise.} \end{cases} \] Then $\fext\equiv 0$, which is continuous everywhere, but $f$ is not extensibly continuous at $0$ since, for instance, the sequence with elements $x_t=0$ converges trivially to $0$, but $\lim f(x_t) = 1 \neq \fext(0)$. Nevertheless, as shown in the next theorem, for a function $f$ that is either convex or lower semicontinuous, extensible continuity of $f$ at $\xbar$ is equivalent to ordinary continuity of $\fext$ at $\xbar$: \begin{theorem} \label{thm:ext-cont-f} Let $f:\Rn\rightarrow\Rext$, and let $\xbar\in\extspace$. 
\begin{letter-compact} \item \label{thm:ext-cont-f:a} If $f$ is extensibly continuous at $\xbar$ then $\fext$ is continuous at $\xbar$. \item \label{thm:ext-cont-f:b} Suppose $f$ is either convex or lower semicontinuous (or both). Then $f$ is extensibly continuous at $\xbar$ if and only if $\fext$ is continuous at $\xbar$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:ext-cont-f:a}):} Suppose $f$ is extensibly continuous at $\xbar$. Let $\seq{\xbar_t}$ be any sequence in $\extspace$ converging to $\xbar$. Since $\ef$ is lower semicontinuous (\Cref{prop:ext:F}), $\liminf \fext(\xbar_t) \geq \fext(\xbar)$. So, to prove continuity of $\fext$ at $\xbar$, we only need to show \begin{equation} \label{eqn:thm:ext-cont-f:1} \limsup \fext(\xbar_t) \leq \fext(\xbar) . \end{equation} Assume that $\limsup\fext(\xbar_t)>-\infty$ (since otherwise \eqref{eqn:thm:ext-cont-f:1} is immediate), and let ${\beta\in\R}$ be such that $\limsup \fext(\xbar_t) > \beta$. Let $\countset{B}$ be a nested countable neighborhood base for $\xbar$. For each $t$, there must exist some $s$ with $\xbar_{s}\in B_t$ and $\fext(\xbar_{s})>\beta$ (since all but finitely many of the sequence elements $\xbar_s$ must be included in $B_t$, and since $\fext(\xbar_s)>\beta$ for infinitely many values of $s$). Therefore, by Proposition~\ref{pr:h:1}(\ref{pr:h:1b2}), there exists $\xx_t\in B_t\cap\Rn$ with $f(\xx_t)>\beta$. By \Cref{cor:first:local:conv}, the resulting sequence $\seq{\xx_t}$ converges to $\xbar$. Therefore, by extensible continuity, $\fext(\xbar)=\lim f(\xx_t) \geq \beta$, since $f(\xx_t)>\beta$ for all $t$. Since this holds for all $\beta<\limsup \fext(\xbar_t)$, this proves \eqref{eqn:thm:ext-cont-f:1}, completing the proof. \pfpart{Part~(\ref{thm:ext-cont-f:b}):} Suppose that $f$ is either convex or lower semicontinuous. 
In light of part~(\ref{thm:ext-cont-f:a}), it suffices to prove, under these assumptions, that if $\fext$ is continuous at $\xbar$ then $f$ is extensibly continuous at $\xbar$. Therefore, we assume henceforth that $\fext$ is continuous at~$\xbar$.\looseness=-1 First, suppose $f$ is lower semicontinuous. Let $\seq{\xx_t}$ be any sequence in $\Rn$ converging to $\xbar$. Then $f(\xx_t)=\fext(\xx_t)$, for all $t$, by Proposition~\ref{pr:h:1}(\ref{pr:h:1a}), since $f$ is lower semicontinuous. Also, $\fext(\xx_t)\rightarrow\fext(\xbar)$ since $\fext$ is continuous at $\xbar$. Therefore, $f(\xx_t)\rightarrow\fext(\xbar)$, so $f$ is extensibly continuous at $\xbar$. In the remaining case, we assume $f$ is convex (and not necessarily lower semicontinuous). Suppose, by way of contradiction, that contrary to the claim, there exists a sequence $\seq{\xx_t}$ in $\Rn$ that converges to $\xbar$, but for which $f(\xx_t)\not\rightarrow\fext(\xbar)$. This implies $\fext(\xbar)<+\infty$ since otherwise we would have $\liminf f(\xx_t)\geq\fext(\xbar)=+\infty$, implying $f(\xx_t)\rightarrow+\infty=\fext(\xbar)$. Also, since $\fext(\xx_t)\rightarrow\fext(\xbar)$ (by continuity of $\fext$ at $\xbar$), we must have $f(\xx_t)\neq\fext(\xx_t)$ for infinitely many values of $t$. By discarding all other elements of the sequence, we assume henceforth that $f(\xx_t)\neq\fext(\xx_t)$ for all $t$, while retaining the property that $\xx_t\rightarrow\xbar$. This implies further that, for all $t$, $f(\xx_t)\neq(\lsc f)(\xx_t)$ (by Proposition~\ref{pr:h:1}\ref{pr:h:1a}), and so, by \Cref{pr:lsc-props}(\ref{pr:lsc-props:b}), that $\xx_t\in\cl(\dom{f})\setminus\ri(\dom{f})$, the relative boundary of $\dom{f}$. Consequently, by \Cref{pr:bnd-near-cl-comp} (applied to $\dom{f}$), for each $t$, there exists a point $\xx'_t\in\Rn\setminus\bigParens{\cl(\dom{f})}$ with $\norm{\xx'_t-\xx_t} < 1/t$. Thus, $\xx'_t=\xx_t+\vepsilon_t$ for some $\vepsilon_t\in\Rn$ with $\norm{\vepsilon_t} < 1/t$. 
Since $\vepsilon_t\rightarrow\zero$ and $\xx_t\rightarrow\xbar$, we also have $\xx'_t\rightarrow\xbar$ (by Proposition~\ref{pr:i:7}\ref{pr:i:7g}). We have \[ \fext(\xx'_t) = (\lsc f)(\xx'_t) = f(\xx'_t) = +\infty, \] where the first equality is from Proposition~\ref{pr:h:1}(\ref{pr:h:1a}), the second is from \Cref{pr:lsc-props}(\ref{pr:lsc-props:b}) since $\xx'_t\not\in\cl(\dom{f})$, and the last follows also from this latter fact. Thus, by continuity of $\ef$ at $\xbar$, $\fext(\xbar)=\lim \fext(\xx'_t)=+\infty$, which contradicts $\fext(\xbar)<+\infty$. \qedhere \end{proof-parts} \end{proof} \subsection{Extension of a convex nondecreasing function} \label{sec:ext:nondec} As a simple illustration, we next study properties of an extension of a convex nondecreasing function on $\R$: \begin{proposition} \label{pr:conv-inc:prop} Let $g:\R\to\eR$ be convex and nondecreasing. Then: \begin{letter-compact} \item \label{pr:conv-inc:infsup} $\eg$ is continuous at $\pm\infty$ with $\eg(-\infty)=\inf g$ and $\eg(+\infty)=\sup g$. Moreover, if $g$ is continuous at a point $x\in\R$ then $\eg(x)=g(x)$ and $\eg$ is continuous at $x$. \item \label{pr:conv-inc:nondec} For $x\in\R$, $\eg(x)=\sup_{y<x} g(y)$, so $\eg$ is nondecreasing. \item \label{pr:conv-inc:discont} If $g$ is not continuous, then there exists a unique point $z\in\R$ at which $g$ is not continuous. In that case, $\eg(x)=g(x)$ for $x\in\R\setminus\set{z}$, $\eg(z)=\sup_{x<z} g(x)$, and $\dom \eg=[-\infty,z]$. Thus, $\eg$ is continuous on $\eR\setminus\set{z}$, but not at $z$. \item \label{pr:conv-inc:nonconst} If $g$ is not constant then neither is $\eg$, and $\eg(+\infty)=+\infty$. \item \label{pr:conv-inc:strictly} If $g$ is strictly increasing then so is $\eg$, and $\eg(+\infty)=+\infty$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:conv-inc:infsup}):} Let $\seq{x_t}$ be a sequence in $\R$ converging to $+\infty$.
Then for any $y\in\R$, we have $x_t>y$ if $t$ is sufficiently large, so $\liminf g(x_t)\ge g(y)$. Therefore, \[ \sup_{y\in\R} g(y)\le\liminf g(x_t)\le\limsup g(x_t)\le\sup_{y\in\R} g(y), \] showing that $g(x_t)\to\sup g$. Thus, $\eg(+\infty)=\sup g$ and $g$ is extensibly continuous at $+\infty$, so $\eg$ is continuous at $+\infty$ (by \Cref{thm:ext-cont-f}\ref{thm:ext-cont-f:a}). By a symmetric argument, $\eg(-\infty)=\inf g$ and $\eg$ is continuous at $-\infty$. For the second part of the claim, let $x\in\R$ be a point where $g$ is continuous. Then $g$ is both lower semicontinuous and extensibly continuous at $x$, so $\eg(x)=g(x)$ (by \Cref{pr:h:1}\ref{pr:h:1a}) and $\eg$ is continuous at~$x$ (by \Cref{thm:ext-cont-f}\ref{thm:ext-cont-f:a}). \pfpart{Part~(\ref{pr:conv-inc:nondec}):} Let $x\in\R$ and let $x_t\to x$ be a sequence in $\R$ such that $g(x_t)\to\eg(x)$ (such a sequence exists by \Cref{pr:d1}). Then for any $y<x$, we have $x_t>y$ if $t$ is sufficiently large, so $\lim g(x_t)\ge g(y)$. Taking a supremum over $y<x$ yields \[ \sup_{y<x} g(y)\le\lim g(x_t)=\eg(x) \le\liminf g(x-1/t) \le\sup_{y<x} g(y), \] where the second inequality follows because $x-1/t\to x$. Thus, $\eg(x)=\sup_{y<x} g(y)$ as claimed. Since $\eg(-\infty)=\inf g$ and $\eg(+\infty)=\sup g$, we obtain that $\eg$ is nondecreasing. \pfpart{Part~(\ref{pr:conv-inc:discont}):} Assume that $g$ is not continuous and let $z=\sup(\dom g)$. We claim that $z\in\R$. If $z=\pm\infty$ then $\dom g=\R$ or $\dom g=\emptyset$, so in both cases the boundary of $\dom g$ is empty and $g$ is continuous everywhere (by \Cref{pr:stand-cvx-cont}). Thus, we must have $z\in\R$. By monotonicity of $g$, $\dom g$ is either $(-\infty,z)$ or $(-\infty,z]$. Its boundary consists of a single point $z$, so $z$ must be the sole point where $g$ is discontinuous (by \Cref{pr:stand-cvx-cont}), and $g$ is continuous everywhere else.
Therefore, by part~(\ref{pr:conv-inc:infsup}), $\eg(x)=g(x)$ for $x\in\R\setminus\set{z}$ and $\eg$ is continuous on $\eR\setminus\set{z}$. By part~(\ref{pr:conv-inc:nondec}), $\eg(z)=\sup_{x<z} g(x)$. Moreover, we must have $\eg(z)<+\infty$. Otherwise, $g(z)\ge\eg(z)=+\infty$ (by \Cref{pr:h:1}\ref{pr:h:1a}), which would imply continuity of $g$ at~$z$, because any sequence $z_t\to z$ would satisfy $\liminf g(z_t)\ge\eg(z)=+\infty$, and hence $g(z_t)\to+\infty=g(z)$. Thus, $\eg(z)<+\infty$, but $\lim\eg(z+1/t)=+\infty$, so $\eg$ is not continuous at $z$. \pfpart{Part~(\ref{pr:conv-inc:nonconst}):} If $g$ is not constant, then $\eg(-\infty)=\inf g\ne\sup g=\eg(+\infty)$, so $\eg$ is not constant either. If $g$ is improper and not constant then $\eg(+\infty)=\sup g=+\infty$, and the same is true if $g$ is proper and $\dom g\ne\R$. Henceforth, assume that $g$ is proper and $\dom g=\R$, so $g:\R\to\R$. Since $g$ is not constant, there exist $x,x'\in\R$, $x<x'$, such that $g(x)<g(x')$. Let $y_t=x+t(x'-x)$. By \Cref{pr:stand-cvx-fcn-char}, \[ g(x')=g\bigParens{\tfrac{t-1}{t}x+\tfrac{1}{t}y_t}\le\tfrac{t-1}{t}g(x)+\tfrac{1}{t}g(y_t), \] which can be rearranged to \[ g(x')+(t-1)\bigParens{g(x')-g(x)}\le g(y_t). \] Since $g(x')>g(x)$, we have $g(y_t)\to+\infty$, and hence $\eg(+\infty)=\sup g=+\infty$. \pfpart{Part~(\ref{pr:conv-inc:strictly}):} By parts~(\ref{pr:conv-inc:infsup}) and~(\ref{pr:conv-inc:nondec}), $\eg(-\infty)=\inf g$, $\eg(x)=\sup_{y<x} g(y)$ for $x\in\R$, and $\eg(+\infty)=\sup g$, which together with the fact that $g$ is strictly increasing implies that $\eg$ is strictly increasing as well. Since $g$ is strictly increasing on $\R$, it is not constant, so $\eg(+\infty)=+\infty$ by part~(\ref{pr:conv-inc:nonconst}). 
\qedhere \end{proof-parts} \end{proof} \subsection{Working with epigraphs} \label{sec:work-with-epis} When working with a function $f:\Rn\rightarrow\Rext$, we have seen that a point in the function's epigraph can be regarded either as a pair in $\Rn\times\R$ or equivalently as a vector in $\R^{n+1}$. This equivalence is seamless and straightforward. When working with the epigraph of an extension $\fext$, or more generally, of any astral function, we will see in this section that an analogous kind of equivalence can be established, but in a way that formally requires somewhat greater care. Let $F:\extspace\rightarrow\Rext$. Then $F$'s epigraph is a subset of $\extspace\times\R$, namely, the set of pairs $\rpair{\xbar}{y}\in\extspace\times\R$ for which $F(\xbar)\leq y$. It will often be beneficial to regard $\extspace\times\R$ as a subset of $\extspac{n+1}$, so that $\epi{F}$ also becomes a subset of $\extspac{n+1}$, an astral space with numerous favorable properties, such as compactness. Although $\extspace\times\R$ need not formally be a subset of $\extspac{n+1}$, we show next that it can nonetheless be embedded in $\extspac{n+1}$, that is, shown to be homeomorphic to a subset of the larger space. Going forward, this will allow us to treat $\extspace\times\R$, and so also the epigraph of any astral function, effectively as a subset of $\extspac{n+1}$. For a point $\zz=\rpair{\xx}{y}$, where $\xx\in\Rn$ and $y\in\R$, we can extract $y$ simply by taking $\zz$'s inner product with $\rpair{\zero}{1}=\ee_{n+1}$, the vector in $\Rnp$ that is all zeros except the last coordinate which is $1$. That is, $\zz\cdot\rpair{\zero}{1}=y$. Intuitively then, $\extspace\times\R$ should be identified with those points $\zbar$ in $\extspac{n+1}$ whose ``last coordinate'' (corresponding to $y$) is in $\R$. We can extract that coordinate from $\zbar$ by coupling it with $\rpair{\zero}{1}$.
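For instance, with $n=2$ (a small illustrative case of our own), a point $\zz=\rpair{\xx}{y}\in\R^3$ and the extraction of its last coordinate look as follows:
\[
\zz = \rpair{\xx}{y} = \begin{bmatrix} x_1 \\ x_2 \\ y \end{bmatrix},
\qquad
\zz\cdot\rpair{\zero}{1} = x_1\cdot 0 + x_2\cdot 0 + y\cdot 1 = y.
\]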
Thus, the set of all such points is \begin{equation} \label{eq:finclset-defn} \finclset = \{ \zbar\in \extspac{n+1} : \zbar\cdot \rpair{\zero}{1}\in \R \}. \end{equation} The sets $\extspace\times\R$ and $\finclset$ are homeomorphic in a natural way that maps each point in $\Rn\times\R=\Rnp$ to itself, as we show in the next theorem. In what follows, $\homat$ denotes the $n\times (n+1)$ matrix whose first $n$ columns form the $n\times n$ identity matrix $\Iden$, and whose last column is $\zero$, the all-zeros vector in $\Rn$. Thus, in block form, \begin{equation} \label{eqn:homat-def} \homat = \left[ \begin{array}{ccc|c} & & & \\ ~ & \Iden & ~ & \zero \\ & & & \\ \end{array} \right], \end{equation} or more succinctly, $\homat=[\Iden,\zero]$. Note that $\homat \rpair{\xx}{y} = \xx$ for all $\xx\in\Rn$ and $y\in\R$. As a result, multiplying by $\homat$ has the effect of extracting the first $n$ elements of a vector in $\R^{n+1}$. \begin{theorem} \label{thm:homf} Define $\homf:\extspace\times\R\rightarrow\finclset$ to be the function \begin{equation} \label{eq:thm:homf:1} \homf(\xbar,y) = \trans{\homat} \xbar \plusl \rpair{\zero}{y} \end{equation} for $\xbar\in\extspace$, $y\in\R$, where $\homat=[\Iden,\zero]$ (as in Eq.~\ref{eqn:homat-def}). Then $\homf$ has the following properties: \begin{letter-compact} \item \label{thm:homf:a} For all $\xbar\in\extspace$, $y\in\R$, \[ \homf(\xbar,y) \cdot \rpair{\uu}{v} = \xbar\cdot\uu + y v \] for all $\rpair{\uu}{v}\in\Rn\times\R=\Rnp$ (implying, in particular, that $\homf(\xbar,y)$ is indeed in $\finclset$). \item \label{thm:homf:aa} $\homf$ is bijective with inverse \begin{equation} \label{eqn:homfinv-def} \homfinv(\zbar) = \rpair{\homat\zbar}{\;\;\zbar\cdot\rpair{\zero}{1}} \end{equation} for $\zbar\in\extspacnp$. \item \label{thm:homf:b} $\homf$ is a homeomorphism (that is, both $\homf$ and its inverse are continuous). \item \label{thm:homf:c} $\homf(\xx,y)=\rpair{\xx}{y}$ for all $\rpair{\xx}{y}\in\Rn\times\R$.
\end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:homf:a}):} Let $\xbar\in\extspace$, $y\in\R$, and $\rpair{\uu}{v}\in\Rn\times\R$. Then \begin{eqnarray*} \homf(\xbar,y) \cdot \rpair{\uu}{v} &=& (\trans{\homat} \xbar) \cdot \rpair{\uu}{v} \plusl \rpair{\zero}{y} \cdot \rpair{\uu}{v} \\ &=& \xbar \cdot (\homat \rpair{\uu}{v}) \plusl y v \\ &=& \xbar \cdot \uu + y v. \end{eqnarray*} The first equality is by $\homf$'s definition and Proposition~\ref{pr:i:6}; the second by Theorem~\ref{thm:mat-mult-def}; the third by $\homat$'s definition (and since $y v\in\R$). Taking $\rpair{\uu}{v}=\rpair{\zero}{1}$ then shows that $\homf(\xbar,y)\in\finclset$. \pfpart{Part~(\ref{thm:homf:aa}):} Let $\gamma:\finclset\rightarrow\extspace\times\R$ be the function given in \eqref{eqn:homfinv-def}, which we aim to show is the functional inverse of $\homf$. Let $\xbar\in\extspace$ and $y\in\R$. We first show $ \gamma(\homf(\xbar,y)) = \rpair{\xbar}{y} $. Let $\zbar=\homf(\xbar,y)$. Then by $\homf$'s definition, \[ \homat \zbar = \homat \trans{\homat} \xbar \plusl \homat \rpair{\zero}{y} = \xbar \] since $\homat\trans{\homat}$ is the identity matrix and $\homat \rpair{\zero}{y} = \zero$. That $\zbar\cdot \rpair{\zero}{1}=y$ follows from part~(\ref{thm:homf:a}). Thus, $\gamma$ is a left inverse of $\homf$. Next, let $\zbar\in\finclset$, and let $\zbar'=\homf(\gamma(\zbar))$. We aim to show $\zbar'=\zbar$. Let $\rpair{\uu}{v}\in\Rn\times\R$. Then \begin{eqnarray} \zbar'\cdot\rpair{\uu}{v} &=& (\homat\zbar)\cdot\uu + (\zbar\cdot\rpair{\zero}{1}) v \nonumber \\ &=& \zbar\cdot(\trans{\homat} \uu) + \zbar\cdot\rpair{\zero}{v} \nonumber \\ &=& \zbar\cdot\rpair{\uu}{0} + \zbar\cdot\rpair{\zero}{v} \nonumber \\ &=& \zbar\cdot\rpair{\uu}{v}. \label{eqn:thm:homf:1} \end{eqnarray} The first equality is by \eqref{eqn:homfinv-def} combined with part~(\ref{thm:homf:a}), noting that $\zbar\cdot\rpair{\zero}{1}\in\R$ since $\zbar\in\finclset$. The second equality is by Theorem~\ref{thm:mat-mult-def} and Proposition~\ref{pr:i:2}.
The third is a simple matrix calculation. And the last is by Proposition~\ref{pr:i:1} (since $\zbar\cdot\rpair{\zero}{v}\in\R$). Since \eqref{eqn:thm:homf:1} holds for all $\rpair{\uu}{v}\in\Rn\times\R$, this implies $\zbar'=\zbar$ (by Proposition~\ref{pr:i:4}). Thus, $\gamma$ is also a right inverse of $\homf$. Therefore, $\homf$ is bijective with inverse $\homfinv=\gamma$, as claimed. \pfpart{Part~(\ref{thm:homf:b}):} The function $\homf$ is continuous since if $\seq{\rpair{\xbar_t}{y_t}}$ is any sequence in $\extspace\times\R$ that converges to $\rpair{\xbar}{y}\in\extspace\times\R$, then for all $\rpair{\uu}{v}\in\Rn\times\R$, \[ \homf(\xbar_t,y_t)\cdot\rpair{\uu}{v} = \xbar_t\cdot\uu + v y_t \rightarrow \xbar\cdot\uu + v y = \homf(\xbar,y)\cdot\rpair{\uu}{v}, \] by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}) and continuity (and the equalities following from part~\ref{thm:homf:a}). This implies $\homf(\xbar_t,y_t)\rightarrow\homf(\xbar,y)$, again by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}). The function $\homfinv$ is continuous by \Cref{cor:aff-cont} and Theorem~\ref{thm:i:1}(\ref{thm:i:1c}). Thus, $\homf$ is a homeomorphism. \pfpart{Part~(\ref{thm:homf:c}):} Let $\xx\in\Rn$, $y\in\R$, and $\zbar=\homf(\xx,y)$. Then $\zbar\cdot\rpair{\uu}{v}=\rpair{\xx}{y}\cdot\rpair{\uu}{v}$ for all $\rpair{\uu}{v}\in\Rn\times\R$, by part~(\ref{thm:homf:a}). Combined with Proposition~\ref{pr:i:4}, this implies $\zbar=\rpair{\xx}{y}$. (Alternatively, this could be proved directly from the definition of $\homf$.) \qedhere \end{proof-parts} \end{proof} For the rest of this section, $\homf$ denotes the function given in Theorem~\ref{thm:homf}. Also, we write $\clmx{S}$ for the closure in $\extspace\times\R$ of any set $S$ in that space. (For a set $S\subseteq\extspacnp$, we continue to write $\clbar{S}$ for the closure of $S$ in $\extspacnp$.) Let $f:\Rn\rightarrow\Rext$. 
In Proposition~\ref{prop:ext:F}, it was seen that the epigraph of its extension, $\fext$, is equal to $\epibar{f}$, the closure of $\epi{f}$ in $\extspace\times\R$. As we show next, $\homf(\epi{\fext})$, its homeomorphic image in $\extspacnp$, is equal to $\epi{f}$'s closure in $\extspacnp$, intersected with $\finclset$. As a consequence, the closures of the epigraphs of $f$ and $\fext$ are (homeomorphically) the same: \begin{proposition} \label{pr:wasthm:e:3} \MarkMaybe Let $f:\Rn\rightarrow\Rext$, and let $\fext$ be its extension. Then: \begin{letter} \item \label{pr:wasthm:e:3a} $\epi{\fext}=\epibar{f}$. \item \label{pr:wasthm:e:3b} $\homf(\epi{\fext}) = \epibarbar{f} \cap \finclset$. \item \label{pr:wasthm:e:3c} $\clbar{\homf(\epi{\fext})} = \epibarbar{f}$. \end{letter} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:wasthm:e:3a}):} This was previously proved in Proposition~\ref{prop:ext:F}. \pfpart{Part~(\ref{pr:wasthm:e:3b}):} We relate $\epibar{f}$, the closure of $\epi{f}$ in $\extspace\times\R$, to the closure of $\epi{f}$ in two other sets. Specifically, since $\epi{f}$ is a subset of $\finclset\subseteq\extspacnp$, we consider $\epibarbar{f}$, the closure of $\epi{f}$ in $\extspacnp$, and also the closure of $\epi{f}$ in $\finclset$, which we denote by $E$. Because $\finclset$ is a subspace of $\extspacnp$, the closure of $\epi{f}$ in $\finclset$ is equal to its closure in $\extspacnp$ intersected with $\finclset$ (by \Cref{prop:subspace}\ref{i:subspace:closure}); that is, $E=\epibarbar{f}\cap\finclset$. On the other hand, $\epi{f}$ is a subset of $\Rn\times\R$ and so is equal to $\homf(\epi{f})$, its own image under $\homf$, by Theorem~\ref{thm:homf}(\ref{thm:homf:c}). Thus, $E$ is also the closure in $\finclset$ of $\homf(\epi{f})=\epi f$. Because $\homf$ is a homeomorphism, this implies that $E$, the closure of the image of $\epi{f}$ under $\homf$, is equal to the image of its closure in $\extspace\times\R$, namely, $\homf(\epibar{f})$. In other words, $E=\homf(\epibar{f})$.
Combining the above observations with part~(\ref{pr:wasthm:e:3a}) (and since $\homf$ is a bijection) yields \[ \homf(\epi{\fext}) = \homf(\epibar{f}) = E = \epibarbar{f}\cap\finclset, \] as claimed. \pfpart{Part~(\ref{pr:wasthm:e:3c}):} By Proposition~\ref{pr:h:1}(\ref{pr:h:1a}), if $\rpair{\xx}{y}\in\epi f$, then $y\geq f(\xx)\geq \fext(\xx)$, so $\rpair{\xx}{y}$ is also in $\epi \fext$. Thus, $\epi f = \homf(\epi f) \subseteq \homf(\epi{\fext})$ (by Theorem~\ref{thm:homf}\ref{thm:homf:aa},\ref{thm:homf:c}), so $\epibarbar{f} \subseteq \clbar{\homf(\epi{\fext})}$. For the reverse inclusion, part~(\ref{pr:wasthm:e:3b}) immediately implies $\homf(\epi{\fext}) \subseteq \epibarbar{f}$, yielding $\clbar{\homf(\epi{\fext})} \subseteq \epibarbar{f}$ since $\epibarbar{f}$ is closed (in $\extspacnp$). \qedhere \end{proof-parts} \end{proof} In Theorem~\ref{thm:homf}, we showed that the set of pairs $\extspace\times\R$ is homeomorphic to the set $\finclset\subseteq\extspacnp$ given in \eqref{eq:finclset-defn}, with each point $\rpair{\xbar}{y}\in\extspace\times\R$ mapped to its image in $\finclset$ under the function $\homf$. This means that we can very much think of $\extspace\times\R$ as a subset of $\extspacnp$. To simplify notation, for the rest of this book, we therefore identify each point $\rpair{\xbar}{y}$ with its homeomorphic image $\homf(\xbar,y)$ so that, when clear from context, $\rpair{\xbar}{y}$ may denote either the given pair in $\extspace\times\R$ or the point $\homf(\xbar,y)\in\finclset\subseteq\extspacnp$. Note importantly that this convention only applies when $y$ is finite (in $\R$, not $\pm\infty$). In some cases, it may be that the interpretation cannot be determined from context; usually, this means that either interpretation can be used. Nonetheless, in the vast majority of cases, $\rpair{\xbar}{y}$ should be regarded as the point $\homf(\xbar,y)$ in $\extspacnp$.
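As a small concrete instance of this convention (our own, with $n=1$ and $y\in\R$), applying the definition of $\homf$ in \eqref{eq:thm:homf:1} to $\xbar=+\infty=\limray{1}$ gives
\[
\rpair{+\infty}{y} \;=\; \homf(+\infty,y) \;=\; \trans{\homat}(+\infty) \plusl \rpair{\zero}{y} \;=\; \limray{\ee_1} \plusl y\ee_2 \;\in\; \extspac{2},
\]
a point whose last coordinate, $\rpair{+\infty}{y}\cdot\rpair{\zero}{1}=y$, is finite, as required by \eqref{eq:finclset-defn}.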
We also apply this simplification to subsets $S\subseteq\extspace\times\R$, such as the epigraph $\epi F$ of an astral function $F$, writing simply $S$, when clear from context, to denote its image $\homf(S)\subseteq\finclset$. For instance, we can now write $\extspace\times\R\subseteq\extspacnp$, which formally means that $\homf(\extspace\times\R)\subseteq\extspacnp$. The next proposition summarizes properties of such pairs $\rpair{\xbar}{y}$, and is largely a mere restatement of Theorem~\ref{thm:homf}. In this proposition, the notation $\rpair{\xbar}{y}$ always refers to a point in $\extspacnp$. \begin{proposition} \label{pr:xy-pairs-props} Let $\xbar,\xbar'\in\extspace$, and let $y,y'\in\R$. Also, let $\homat=[\Iden,\zero]$ (as in Eq.~\ref{eqn:homat-def}). Then: \begin{letter-compact} \item \label{pr:xy-pairs-props:a} $\rpair{\xbar}{y} = \trans{\homat} \xbar \plusl \rpair{\zero}{y}$. \item \label{pr:xy-pairs-props:b} For all $\uu\in\Rn$ and $v\in\R$, $\rpair{\xbar}{y}\cdot\rpair{\uu}{v} = \xbar\cdot\uu + yv$. In particular, $\rpair{\xbar}{y}\cdot\rpair{\zero}{1} = y$. \item \label{pr:xy-pairs-props:c} $\homat\rpair{\xbar}{y} = \xbar$. \item \label{pr:xy-pairs-props:d} Let $\zbar\in\extspacnp$. If $\zbar\cdot\rpair{\zero}{1}\in\R$ then $\zbar = \rpair{\homat\zbar}{\;\;\zbar\cdot\rpair{\zero}{1}}$. \item \label{pr:xy-pairs-props:h} $\rpair{\xbar}{y}=\rpair{\xbar'}{y'}$ if and only if $\xbar=\xbar'$ and $y=y'$. \item \label{pr:xy-pairs-props:e} $\rpair{\xbar}{y}\plusl\rpair{\xbar'}{y'} = \rpair{\xbar\plusl\xbar'}{y+y'}$. \item \label{pr:xy-pairs-props:f} $\lambda \rpair{\xbar}{y} = \rpair{\lambda\xbar}{\lambda y}$ for all $\lambda\in\R$. \item \label{pr:xy-pairs-props:g} Let $\seq{\xbar_t}$ be a sequence in $\extspace$, let $\seq{y_t}$ be a sequence in $\R$, let $\zbar=\rpair{\xbar}{y}$, and for each $t$, let $\zbar_t=\rpair{\xbar_t}{y_t}$. Then $\zbar_t\rightarrow\zbar$ if and only if $\xbar_t\rightarrow\xbar$ and $y_t\rightarrow y$.
\end{letter-compact} \end{proposition} \begin{proof} Part~(\ref{pr:xy-pairs-props:a}) follows from \eqref{eq:thm:homf:1} of Theorem~\ref{thm:homf}. Part~(\ref{pr:xy-pairs-props:b}) is from Theorem~\ref{thm:homf}(\ref{thm:homf:a}). Parts~(\ref{pr:xy-pairs-props:c}) and~(\ref{pr:xy-pairs-props:d}) follow from Theorem~\ref{thm:homf}(\ref{thm:homf:aa}). Part~(\ref{pr:xy-pairs-props:h}) is because $\homf$ is a bijection (Theorem~\ref{thm:homf}\ref{thm:homf:aa}). For part~(\ref{pr:xy-pairs-props:e}), we have \begin{align*} \rpair{\xbar}{y}\plusl\rpair{\xbar'}{y'} &= \Parens{\trans{\homat} \xbar \plusl \rpair{\zero}{y}} \plusl \Parens{\trans{\homat} \xbar' \plusl \rpair{\zero}{y'}} \\ &= \trans{\homat} (\xbar\plusl\xbar') \plusl \rpair{\zero}{y+y'} \\ &= \rpair{\xbar\plusl\xbar'}{y+y'}. \end{align*} The first and third equalities are by part~(\ref{pr:xy-pairs-props:a}). The second equality uses Proposition~\ref{pr:h:4}(\ref{pr:h:4c}). The proof of part~(\ref{pr:xy-pairs-props:f}) is similar. Part~(\ref{pr:xy-pairs-props:g}) is because $\homf$ is a homeomorphism (Theorem~\ref{thm:homf}\ref{thm:homf:b}). \end{proof} \subsection{Reductions} \label{sec:shadow} We next begin the study of reductions, a core technique for analyzing astral functions that will be used throughout the remainder of the book. In Proposition~\ref{pr:h:6}, we saw how every infinite astral point $\xbar$ can be decomposed into the astron associated with its dominant direction $\vv$ and the projection $\exx^\perp$ orthogonal to $\vv$, whose astral rank is lower than that of $\exx$. This decomposition forms the basis of proofs by induction on astral rank. For the purpose of applying this technique when analyzing an extension $\ef$, we next introduce a kind of projection operation, which effectively reduces the dimensionality of the domain of $\ef$ while preserving its key properties. \begin{definition} \label{def:astron-reduction} Let $f:\Rn\rightarrow\Rext$, and let $\vv\in\Rn$. 
The \emph{reduction of $f$ at astron $\limray{\vv}$} is the function $\fshadv:\Rn\rightarrow\Rext$ defined, for $\xx\in\Rn$, by \[ \fshadv(\xx) = \fext(\limray{\vv} \plusl \xx). \] \end{definition} We refer to this type of reduction as \emph{astronic}; more general reductions will be introduced later in Section~\ref{sec:ent-closed-fcn}. Let $g=\fshadv$ be such a reduction. This function is constant in the direction of $\vv$ (that is, $g(\xx)=g(\xx+\lambda\vv)$ for all $\xx\in\Rn$ and $\lambda\in\R$), which means that the reduction~$g$ can be regarded informally as a function only over the space orthogonal to $\vv$, even though it is formally defined over all of $\Rn$. In this sense, $f$ has been ``reduced'' in forming $g$. \begin{example}[Reduction of the product of hyperbolas] \label{ex:recip-fcn-eg-cont} Suppose that $f$ is the product of hyperbolas from \Cref{ex:recip-fcn-eg}, and let $\vv=\ee_1$. Then, for $\xx\in\R^2$, \begin{equation} \label{eqn:recip-fcn-eg:g} g(\xx) = g(x_1,x_2) = \fext(\limray{\ee_1}\plusl\xx) = \begin{cases} 0 & \text{if $x_2\geq 0$,} \\ +\infty & \text{otherwise,} \end{cases} \end{equation} as can be seen by plugging in the values $\fext(\limray{\ee_1}\plusl\xx)$ derived in \Cref{ex:recip-fcn-eg}. \end{example} When forming a reduction $\fshadv$, the vector $\vv$ is usually (but not always) assumed to be in the {recession cone} of $f$, $\resc{f}$, as defined in \Cref{sec:prelim:rec-cone}. For instance, for the product of hyperbolas (\Cref{ex:recip-fcn-eg,ex:recip-fcn-eg-cont}), the recession cone consists of all vectors $\vv=\trans{[v_1,v_2]}$ with $v_1,v_2\geq 0$, that is, vectors in $\Rpos^2$. If $\vv\in\resc{f}$, then, as shown next, the minimum of the reduction $g$ is the same as the minimum of~$f$. This suggests that $f$ can be minimized by first minimizing $g$ and then adjusting the resulting solution appropriately.
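As a further illustration, consider a simple function of our own choosing (not appearing elsewhere in the text): let $f(x_1,x_2)=e^{-x_1}+x_2^2$ for $\xx\in\R^2$, and let $\vv=\ee_1$. Since $f(\xx+\lambda\ee_1)$ is nonincreasing in $\lambda$, we have $\ee_1\in\resc{f}$. Moreover, any sequence $\xx_t\rightarrow\limray{\ee_1}\plusl\xx$ has first coordinate tending to $+\infty$ and second coordinate tending to $x_2$, so
\[
g(\xx) = \fext(\limray{\ee_1}\plusl\xx)
= \lim_{\lambda\rightarrow+\infty} \bigParens{e^{-(x_1+\lambda)} + x_2^2}
= x_2^2.
\]
Consistent with the discussion above, $\inf g = 0 = \inf f$; note that $g$ attains this minimum (at any point with $x_2=0$), while $f$ does not attain it.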
Later, in Section~\ref{sec:minimizers}, we will develop ideas along these lines which constructively characterize the minimizers of $f$ by defining and recursively minimizing an astronic reduction. Here is a statement of some simple properties of such reductions: \begin{proposition} \label{pr:d2} Let $f:\Rn\rightarrow\Rext$. Let $\vv\in\resc{f}$, and let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Let $\xx\in\Rn$ (with projection $\xperp$ orthogonal to $\vv$). Then: \begin{letter-compact} \item \label{pr:d2:a} $g(\xperp) = g(\xx)$. \item \label{pr:d2:b} $g(\xx) \leq f(\xx)$. \item \label{pr:d2:c} $\inf g = \inf f$. Consequently, $g\equiv+\infty$ if and only if $f\equiv+\infty$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:d2:a}):} The statement follows because $\limray{\vv}\plusl\xx=\limray{\vv}\plusl\xperp$ by the Projection Lemma (\Cref{lemma:proj}). \pfpart{Part~(\ref{pr:d2:b}):} Since $\vv\in\resc{f}$, for all $t$, $ f(\xx) \geq f(\xx + t \vv) $. Further, the sequence $\seq{\xx+t\vv}$ converges to $\limray{\vv}\plusl\xx$ (by \Cref{pr:i:7}\ref{pr:i:7f}), so \[ f(\xx) \geq \liminf f(\xx+t\vv) \geq \fext(\limray{\vv}\plusl\xx) = g(\xx). \] \pfpart{Part~(\ref{pr:d2:c}):} Since $g(\xx)=\ef(\limray{\vv}\plusl\xx)$, we have $\inf g\ge\inf\ef=\inf f$, where the equality follows by \Cref{pr:fext-min-exists}. On the other hand, by part~(\ref{pr:d2:b}), $\inf g \leq \inf f$. \qedhere \end{proof-parts} \end{proof} The reduction $g=\fshadv$ has its own extension $\gext$. In a moment, Theorem~\ref{thm:d4} will show that the basic properties for $g$'s behavior given in Proposition~\ref{pr:d2} and Definition~\ref{def:astron-reduction} carry over in a natural way to its extension. 
Before proving the theorem, we give the following more general proposition which will be used in the proof, specifically in showing that the kind of property given in Proposition~\ref{pr:d2}(\ref{pr:d2:a}) generally carries over to extensions. \begin{proposition} \label{pr:PP:ef} Let $f:\Rn\to\eR$ and $\PP\in\R^{n\times n}$ be an orthogonal projection matrix such that $f(\xx)=f(\PP\xx)$ for all $\xx\in\Rn$. Then $\ef(\xbar)=\ef(\PP\xbar)$ for all $\xbar\in\eRn$. \end{proposition} \begin{proof} Let $\xbar\in\eRn$. By \Cref{pr:d1}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ with $\xx_t\to\xbar$ and $f(\xx_t)\to\ef(\xbar)$. Then \[ \ef(\xbar)=\lim f(\xx_t)=\lim f(\PP\xx_t)\ge\ef(\PP\xbar) \] where the inequality follows from the definition of $\ef$ and the fact that $\PP\xx_t\to\PP\xbar$ by continuity of linear maps (\Cref{thm:mat-mult-def}). It remains to prove the reverse inequality. By \Cref{pr:d1}, there exists a sequence $\seq{\yy_t}$ in $\Rn$ with $\yy_t\to\PP\xbar$ and $f(\yy_t)\to\ef(\PP\xbar)$. Let $\ww_1,\dotsc,\ww_k$, for some $k\ge 0$, be an orthonormal basis of $(\colspace\PP)^\perp$ and let $\WW=[\ww_1,\dotsc,\ww_k]$. By \Cref{cor:inv-proj-seq}, there exists a sequence $\seq{\cc_t}$ in $\R^k$ such that $\yy_t+\WW\cc_t\to\xbar$. Therefore, \begin{align} \notag \ef(\PP\xbar)=\lim f(\yy_t) & =\lim f(\PP\yy_t) \\ \label{eq:PP:ef} & =\lim f\bigParens{\PP(\yy_t+\WW\cc_t)} =\lim f(\yy_t+\WW\cc_t) \ge\ef(\xbar) \end{align} where the first equality in \eqref{eq:PP:ef} follows because $\PP\WW=\zero_{n\times k}$ (by \Cref{pr:proj-mat-props}\ref{pr:proj-mat-props:e}). Thus, $\ef(\xbar)=\ef(\PP\xbar)$ for all $\xbar\in\eRn$. \end{proof} \begin{theorem} \label{thm:d4} Let $f:\Rn\rightarrow\Rext$. Let $\vv\in\resc{f}$, and let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Let $\xbar\in\extspace$ (with projection $\xbarperp$ orthogonal to $\vv$). Then: \begin{letter-compact} \item \label{thm:d4:a} $\gext(\xbarperp) = \gext(\xbar)$. 
\item \label{thm:d4:b} $\gext(\xbar)\leq \fext(\xbar)$. \item \label{thm:d4:c} $\gext(\xbar)=\fext(\limray{\vv}\plusl\xbar)$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:d4:a}):} Immediate from \Cref{pr:d2}(\ref{pr:d2:a}) and \Cref{pr:PP:ef}. \pfpart{Part~(\ref{thm:d4:b}):} By \Cref{pr:d2}(\ref{pr:d2:b}), we have $g\le f$, so also $\eg\le\ef$ (from the definition of an extension). \pfpart{Part~(\ref{thm:d4:c}):} First, \begin{equation} \label{eq:thm:d4:1} \gext(\xbar) = \gext(\xbarperp) = \gext\paren{(\limray{\vv}\plusl\xbar)^{\bot}} = \gext(\limray{\vv}\plusl\xbar) \leq \fext(\limray{\vv}\plusl\xbar). \end{equation} The first and third equalities are by part~(\ref{thm:d4:a}), and the second is because $(\limray{\vv}\plusl\xbar)^{\bot} = \xbarperp$ by \Cref{pr:h:5}(\ref{pr:h:5c},\ref{pr:h:5e}). The inequality is by part~(\ref{thm:d4:b}). To show the reverse inequality, by \Cref{pr:d1}, there exists a sequence $\seq{\xx_t}$ in~$\Rn$ with $\xx_t\rightarrow\xbar$ and $g(\xx_t)\rightarrow\gext(\xbar)$. For each $t$, let $\ybar_t=\limray{\vv}\plusl\xx_t$. Then $\ybar_t\rightarrow\limray{\vv}\plusl\xbar$ (by \Cref{pr:i:7}\ref{pr:i:7f}). Thus, \[ \gext(\xbar) = \lim g(\xx_t) = \lim \fext(\limray{\vv}\plusl \xx_t) = \lim \fext(\ybar_t) \geq \fext(\limray{\vv}\plusl \xbar) \] where the last inequality follows by lower semicontinuity of $\ef$ (\Cref{prop:ext:F}). \qedhere \end{proof-parts} \end{proof} The next theorem proves that the reduction $g=\fshadv$ is convex and lower semicontinuous, assuming $f$ is convex and that $\vv$ is in $f$'s recession cone. The theorem also relates $g$ to another function $\gtil:\Rn\rightarrow\Rext$ that is given by \begin{equation} \label{eqn:gtil-defn} \gtil(\xx)=\inf_{\lambda\in\R} f(\xx+\lambda \vv) \end{equation} for $\xx\in\Rn$. This function can be viewed as a kind of ``shadow'' of $f$ in the direction of~$\vv$. 
Specifically, the theorem shows that $g$ is the {lower semicontinuous hull} of $\gtil$, making use of a succinct and convenient representation of $\gtil$. (The notation that is used here was defined in Eqs.~\ref{eq:fA-defn} and~\ref{eq:lin-image-fcn-defn}.) \begin{example}[Shadow of the product of hyperbolas] Consider again the product of hyperbolas from \Cref{ex:recip-fcn-eg} and let $\vv=\ee_1$. Then \[ \gtil(\xx) = \gtil(x_1,x_2) = \begin{cases} 0 & \text{if $x_2 > 0$,} \\ +\infty & \text{otherwise,} \end{cases} \] which differs from the reduction $g$ derived in \Cref{ex:recip-fcn-eg-cont} only when $x_2=0$. Evidently, $g=\lsc\gtil$, as is true in general. \end{example} \begin{theorem} \label{thm:a10-nunu} Let $f:\Rn\rightarrow\Rext$ be convex. Let $\vv\in\resc{f}$, and let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Also, let $\gtil:\Rn\rightarrow\Rext$ be defined, for $\xx\in\Rn$, as \[ \gtil(\xx)=\inf_{\lambda\in\R} f(\xx+\lambda \vv). \] Then $\gtil=(\PPf)\PP$ where $\PP\in\R^{n\times n}$ is the projection matrix orthogonal to $\vv$, so $\gtil$ is convex. Moreover, $g=\lsc\gtil$. Therefore, $g$ is convex and lower semicontinuous. \end{theorem} \begin{proof} Throughout the proof we write $h=\gtil$. For $\xx\in\Rn$, let \[ Z(\xx)=\set{\xx+\lambda\vv:\:\lambda\in\R}. \] By standard linear algebra, the set $Z(\xx)$, as an affine set, can be described by a system of linear equalities, specifically, $Z(\xx)=\set{\zz\in\Rn:\:\PP\zz=\PP\xx}$. Thus, \begin{equation} \label{eq:h:Pz:Px} h(\xx) = \inf_{\lambda\in\R} f(\xx+\lambda\vv) = \inf_{\zz\in Z(\xx)} f(\zz) = \inf_{\zz\in\Rn:\:\PP\zz=\PP\xx} f(\zz). \end{equation} We will argue that $h=(\PPf)\PP$. From the definition of $\PPf$ (see Eq.~\ref{eq:lin-image-fcn-defn}), for $\yy\in\Rn$, \[ (\PPf)(\yy)=\inf_{\zz\in\Rn:\:\PP\zz=\yy} f(\zz).
\] Therefore, for $\xx\in\Rn$, \[ [(\PPf)\PP](\xx)=(\PPf)(\PP\xx) = \inf_{\zz\in\Rn:\:\PP\zz=\PP\xx} f(\zz) = h(\xx), \] where the last equality follows by \eqref{eq:h:Pz:Px}. Thus, $h=(\PPf)\PP$. Since $f$ is convex, convexity of $h$ now follows by \Cref{roc:thm5.7:fA,roc:thm5.7:Af}. Since $\vv\in\resc{f}$, $f(\xx+\lambda\vv)$ is nonincreasing as a function of $\lambda\in\R$, for any fixed $\xx\in\Rn$. Therefore, the expression for $h=\gtil$ from \eqref{eqn:gtil-defn} can be strengthened to \begin{equation} \label{eq:j:4:h} h(\xx)= \lim_{\lambda\rightarrow+\infty} f(\xx+\lambda\vv). \end{equation} We will prove $\eh=\eg$, which will imply that $\lsc h=g$, because $\lsc h$ agrees with $\eh$ on~$\Rn$ (by \Cref{pr:h:1}\ref{pr:h:1a}) and $g$ agrees with $\eg$ on $\Rn$ (by \Cref{thm:d4}\ref{thm:d4:c}). Let $\xx\in\Rn$. Consider the sequence $\seq{\xx+t\vv}$ converging to $\limray{\vv}\plusl\xx$ (by \Cref{pr:i:7}\ref{pr:i:7f}). By \eqref{eq:j:4:h} and by definitions of $\ef$ and $g$, \[ h(\xx)=\lim f(\xx+t\vv)\ge\ef(\limray{\vv}\plusl\xx)=g(\xx). \] Thus, $h\ge g$, and hence also $\eh\ge\eg$. For the reverse inequality, the definition of $\gtil=h$ in \eqref{eqn:gtil-defn} implies that $h\le f$, and so $\eh\le\ef$. Let $\xbarperp$ denote the projection of any $\xbar\in\extspace$ orthogonal to $\vv$. The definition of $\gtil=h$ then also implies that $h(\xx)=h(\xx^\perp)$ for all $\xx\in\Rn$, and so $\eh(\xbar)=\eh(\xbar^\perp)$ for all $\xbar\in\eRn$ (by \Cref{pr:PP:ef}). Thus, for all $\xbar\in\eRn$, \[ \eh(\xbar) =\eh(\xbar^\perp) =\eh\bigParens{(\limray{\vv}\plusl\xbar)^\perp} =\eh(\limray{\vv}\plusl\xbar) \le\ef(\limray{\vv}\plusl\xbar) =\eg(\xbar), \] by the same reasoning used for \eqref{eq:thm:d4:1}. Thus, $\eh=\eg$, and so, as we argued above, $\lsc h=g$. Therefore, $g$ is convex and lower semicontinuous by \Cref{pr:lsc-props}(\ref{pr:lsc-props:a}). 
\end{proof} \section{Conjugacy} \label{sec:conjugacy} As discussed briefly in Section~\ref{sec:prelim:conjugate}, the {conjugate} of a function $f:\Rn\rightarrow\Rext$ is the function $\fstar:\Rn\rightarrow\Rext$ given by \begin{equation} \label{eq:fstar-def} \fstar(\uu) = \sup_{\xx\in\Rn} \bracks{\xx\cdot\uu - f(\xx)} \end{equation} for $\uu\in\Rn$. This is a centrally important notion in standard convex analysis. The conjugate $\fstar$ encodes exactly those affine functions that are majorized by $f$ in the sense that any pair $\rpair{\uu}{v}\in\Rn\times\R$ is in the epigraph of $\fstar$ if and only if $v\geq \xx\cdot\uu - f(\xx)$ for all $\xx\in\Rn$, that is, if and only if $f(\xx)\geq \xx\cdot\uu - v$ so that $f$ majorizes the affine function $\xx \mapsto \xx\cdot\uu - v$. Moreover, as reviewed in \Cref{pr:conj-props}(\ref{pr:conj-props:f}), functions $f$ that are closed and convex are equal to their own biconjugate, that is, the conjugate of $\fstar$, so that $f=(\fstar)^* = \fdubs$. This means that $f$ can be represented as the supremum over all affine functions that it majorizes, and furthermore, that the original function $f$ can be fully reconstructed from the dual representation afforded by~$\fstar$. In this section, we extend these fundamental concepts to astral space. \subsection{Conjugates and biconjugates} \label{sec:conjugacy-def} Let $F:\extspace\rightarrow\Rext$. For now, we allow $F$ to be any function defined on astral space, although later we will focus especially on the case that $F$ is the extension $\fext$ of some convex function $f$. How can the definition of the conjugate $\fstar$ given in \eqref{eq:fstar-def} be extended to a function $F$? A natural idea is to simply replace $f$ by $F$ and $\xx$ by $\xbar$ so that the expression inside the supremum (now over $\xbar\in\extspace$) becomes $\xbar\cdot\uu - F(\xbar)$. 
The problem, of course, is that $\xbar\cdot\uu$ and $F(\xbar)$ might both be $+\infty$ (or both $-\infty$) so that this expression is undefined, being equal to the sum of $+\infty$ and $-\infty$. To address this, we can re-express the conjugate in a way that generalizes more easily. In particular, since the epigraph of $f$ consists of all pairs $\rpair{\xx}{y}$ in $\Rn\times\R$ with $y\geq f(\xx)$, we can rewrite \eqref{eq:fstar-def} as \begin{equation} \label{eq:fstar-mod-def} \fstar(\uu) = \sup_{\rpair{\xx}{y}\in\epi f} (\xx\cdot\uu - y). \end{equation} This expression generalizes directly to astral space by making the simple substitutions suggested earlier, and specifically replacing $\xx\cdot\uu$ with its astral analogue $\xbar\cdot\uu$. This yields the following definition for the conjugate of an astral function. Note that although $\xbar\cdot\uu$ might be infinite, $y$ is always in $\R$, so the earlier issue is entirely avoided. \begin{definition} The \emph{conjugate} of a function $F:\extspace\rightarrow\Rext$ is that function $\Fstar:\Rn\rightarrow\Rext$ defined by \begin{equation} \label{eq:Fstar-def} \Fstar(\uu) = \sup_{\rpair{\xbar}{y}\in\epi F} (\xbar\cdot\uu - y) \end{equation} for $\uu\in\Rn$. \end{definition} The resulting conjugate function $\Fstar$ is always convex: \begin{proposition} \label{pr:conj-is-convex} Let $F:\extspace\rightarrow\Rext$. Then its conjugate $\Fstar$ is convex. \end{proposition} \begin{proof} For any fixed $\xbar\in\extspace$ and $y\in\R$, $\xbar\cdot\uu$, viewed as a function of $\uu\in\Rn$, is convex by Theorem~\refequiv{thm:h:5}{thm:h:5a0}{thm:h:5b}, so $\xbar\cdot\uu-y$ is convex as well. Therefore, $\Fstar$ is convex since it is the pointwise supremum of convex functions (\Cref{roc:thm5.5}). \end{proof} The definition for $\Fstar$ can be rewritten using the downward sum operation (as defined in Eq.~\ref{eq:down-add-def}).
In particular, using Proposition~\ref{pr:plusd-props}(\ref{pr:plusd-props:d}), we can rewrite \eqref{eq:Fstar-def} as \begin{align} \Fstar(\uu) &= \sup_{\xbar\in\extspace} \BigBracks{ \sup \bigBraces{\xbar\cdot\uu - y:\:y\in\R,\,y\geq F(\xbar)} } \notag \\ &= \sup_{\xbar\in\extspace} \BigBracks{ - F(\xbar) \plusd \xbar\cdot\uu }. \label{eq:Fstar-down-def} \end{align} In this form, the conjugate $\Fstar$ (as well as the dual conjugate to be defined shortly) is the same as that defined by \citet[Definition~8.2]{singer_book} in his abstract treatment of convex analysis, which can be instantiated to the astral setting by letting his variables $X$ and $W$ be equal to $\extspace$ and $\Rn$, respectively, and defining his coupling function~$\singerphi$ to coincide with ours, $\singerphi(\xbar,\uu)=\xbar\cdot\uu$ for $\xbar\in\extspace$ and $\uu\in\Rn$. The equivalence of Eqs.~(\ref{eq:Fstar-down-def}) and~(\ref{eq:Fstar-def}) is mentioned by \citet[Eq.~8.34]{singer_book}. This general form of conjugate was originally proposed by \citet[Eq.~14.7]{moreau__convexity}. \begin{example}[Conjugate of an astral affine function] Consider the affine function $F(\xbar)=\xbar\cdot\ww+b$, for $\xbar\in\extspace$, where $\ww\in\Rn$ and $b\in\R$. Then \[ \Fstar(\uu) = \begin{cases} -b & \text{if $\uu=\ww$,} \\ +\infty & \text{otherwise.} \end{cases} \] This can be shown directly using \eqref{eq:Fstar-down-def}. Alternatively, since $F=\fext$, where $f(\xx)=\xx\cdot\ww+b$ is the corresponding function on $\Rn$, we can apply Proposition~\ref{pr:fextstar-is-fstar}, proved below, which implies that in this case the astral conjugate $\Fstar$ is the same as $\fstar$, the standard conjugate of $f$. \end{example} \begin{example}[Conjugate of an astral point indicator] \label{ex:ind-astral-point} Let $\zbar\in\extspace$. As mentioned earlier, $\indfa{\zbar}$ is shorthand for $\indfa{\set{\zbar}}$, the astral indicator of the singleton $\set{\zbar}$, as defined in \eqref{eq:indfa-defn}. 
This function's conjugate is $\indfastar{\zbar}(\uu)=\zbar\cdot\uu$, for $\uu\in\Rn$; that is, $\indfastar{\zbar}=\ph{\zbar}$, where $\ph{\zbar}$ is the function corresponding to evaluation of the coupling function in the second argument, introduced earlier in the construction of astral space (see Eq.~\ref{eq:ph-xbar-defn}). Thus, whereas the standard conjugate of any function on $\Rn$ is always closed and convex, this example shows that the astral conjugate of a function defined on $\extspace$ might not be closed or even lower semicontinuous (although it must be convex by Proposition~\ref{pr:conj-is-convex}). For example, if $n=1$ and $\barz=+\infty$, then $\indfastar{\barz}(u)=\limray{u}$, for $u\in\R$, which is not proper or closed or lower semicontinuous. \end{example} The conjugation $F\mapsto \Fstar$ maps a function defined on $\extspace$ to one defined on $\Rn$. We next define a dual operation that maps in the reverse direction, from functions on $\Rn$ to functions on $\extspace$. In standard convex analysis, both a function $f$ and its conjugate $\fstar$ are defined on $\Rn$ so that the same conjugation can be used in either direction. But in the astral setting, as is the case more generally in the abstract setting of~\citet{singer_book}, a different dual conjugation is required. This asymmetry reflects that the coupling function $\xbar\cdot\uu$ is defined with $\xbar\in\extspace$ and $\uu\in\Rn$ belonging to different spaces. Let $\psi: \Rn\rightarrow\Rext$ be any function on $\Rn$. We use a Greek letter to emphasize that we think of $\psi$ as operating on the dual variable $\uu\in\Rn$; later, we will often take $\psi$ to itself be the conjugate of some other function. 
By direct analogy with the preceding definition of $\Fstar$, we have: \begin{definition} The \emph{dual conjugate} of a function $\psi:\Rn\rightarrow\Rext$ is that function $\psistarb:\extspace\rightarrow\Rext$ defined by \begin{equation} \label{eq:psistar-def} \psistarb(\xbar) = \sup_{\rpair{\uu}{v}\in\epi \psi} (\xbar\cdot\uu - v), \end{equation} for $\xbar\in\extspace$. \end{definition} By Proposition~\ref{pr:plusd-props}(\ref{pr:plusd-props:d}), this definition can be stated equivalently as \begin{equation} \label{eq:psistar-def:2} \psistarb(\xbar) = \sup_{\uu\in\Rn} \bigBracks{ - \psi(\uu) \plusd \xbar\cdot\uu }. \end{equation} We use the notation $\psistarb$ rather than $\psistar$ because the latter denotes the standard conjugate of $\psi$, so $\psistar$ is a function defined on $\Rn$ while $\psistarb$ is defined on $\extspace$. We will be especially interested in the \emph{biconjugate} of a function $F:\extspace\rightarrow\Rext$, that is, $\Fdub=(\Fstar)^{\dualstar}$, the dual conjugate of the conjugate of $F$. In standard convex analysis, the biconjugate $\fdubs$ for a function $f:\Rn\rightarrow\Rext$ is equal to its closure, $\cl f$, if $f$ is convex (\Cref{pr:conj-props}\ref{pr:conj-props:f}). Thus, as already discussed, if $f$ is closed and convex, then $f=\fdubs$. Furthermore, $\fdubs$ is in general equal to the pointwise supremum over all affine functions that are majorized by~$f$. An analogous result holds in the astral setting, as we show now. For $\uu\in\Rn$ and $v\in\R$, let $\affuv(\xbar)=\xbar\cdot\uu-v$, for $\xbar\in\extspace$. This is the extension of the affine function $\xx\mapsto\xx\cdot\uu-v$ on $\Rn$, and is also a special case of the affine maps considered in Section~\ref{sec:linear-maps}. Then $\Fdub$ is exactly the pointwise supremum over all such functions that are majorized by $F$, as we show in the next theorem, which is a special case of \citet[Theorem~8.5]{singer_book}. 
\begin{theorem} \label{thm:fdub-sup-afffcns} Let $F:\extspace\rightarrow\Rext$, and let $\affuv(\xbar)=\xbar\cdot\uu-v$, for all $\xbar\in\extspace$, and for $\uu\in\Rn$ and $v\in\R$. Then for $\xbar\in\extspace$, \begin{equation} \label{eq:thm:fdub-sup-afffcns:1} \Fdub(\xbar) = \sup\bigBraces{\affuv(\xbar) :\: \uu\in\Rn,\,v\in\R,\,\affuv\leq F}. \end{equation} Consequently, $F\geq \Fdub$, and furthermore, $F=\Fdub$ if and only if $F$ is the pointwise supremum of some collection of affine functions $\affuv$. \end{theorem} \begin{proof} Let $\uu\in\Rn$ and $v\in\R$. Then $v\geq \Fstar(\uu)$ if and only if $\xbar\cdot\uu - y \leq v$ for all $\rpair{\xbar}{y}\in \epi F$ (by Eq.~\ref{eq:Fstar-def}), which in turn holds if and only if $\xbar\cdot\uu - v \leq F(\xbar)$ for all $\xbar\in\extspace$. In other words, $\rpair{\uu}{v}\in\epi \Fstar$ if and only if $\affuv\leq F$. The result in \eqref{eq:thm:fdub-sup-afffcns:1} therefore follows directly from \eqref{eq:psistar-def} with $\psi=\Fstar$. That $F\geq \Fdub$ always holds is now immediate, as is the claim that if $F=\Fdub$ then $F$ is the pointwise supremum over a collection of affine functions. For the converse of the latter statement, suppose, for some set $A\subseteq \Rn\times\R$, that $F(\xbar) = \sup_{\rpair{\uu}{v}\in A} \affuv(\xbar)$ for $\xbar\in\extspace$. If $\rpair{\uu}{v}\in A$, then this implies $\affuv\leq F$, so that $\affuv\leq\Fdub$ by \eqref{eq:thm:fdub-sup-afffcns:1}. Since this is true for every such pair in $A$, it follows that $F\leq\Fdub$. \end{proof} We mainly focus on the biconjugates $\Fdub$ (and later, $\fdub$, where $f$ is defined over~$\Rn$). But it is also possible to form a dual form of the biconjugate $\psidub=(\psi^{\bar{*}})^*$ from a function $\psi:\Rn\rightarrow\Rext$ by applying the conjugations in the reverse order. Analogous properties to those shown in Theorem~\ref{thm:fdub-sup-afffcns} apply to $\psidub$. We prove just one of these, which will be used at a later point. 
The proof is largely symmetric to the one in Theorem~\ref{thm:fdub-sup-afffcns}, but we include it here for completeness: \begin{theorem} \label{thm:psi-geq-psidub} Let $\psi:\Rn\rightarrow\Rext$. Then $\psi\geq\psidub$. \end{theorem} \begin{proof} Let $\uu\in\Rn$ and suppose $v\geq \psi(\uu)$, with $v\in\R$. For any point $\rpair{\xbar}{y}\in\epi \psistarb$, we have \[ y \geq \psistarb(\xbar) \geq \xbar\cdot\uu - v, \] where the second inequality is by \eqref{eq:psistar-def}. Thus, $v \geq \xbar\cdot\uu - y$. Since this holds for all $\rpair{\xbar}{y}\in\epi \psistarb$, it follows that $v \geq \psidub(\uu)$, by \eqref{eq:Fstar-def}. And since this holds for all $v\geq\psi(\uu)$, we conclude that $\psi(\uu)\geq \psidub(\uu)$. \end{proof} \begin{example}[Dual conjugate of $\ph{\zbar}$.] \label{ex:dual:ph:zbar} For $\zbar\in\extspace$, consider the function $\ph{\zbar}:\Rn\to\eR$, $\ph{\zbar}(\uu)=\zbar\cdot\uu$, introduced earlier in the construction of astral space (see Eq.~\ref{eq:ph-xbar-defn}). Its dual conjugate is \[ \phstar{\zbar}(\xbar) = \sup_{\uu\in\Rn} \parens{ - \zbar\cdot\uu \plusd \xbar\cdot\uu } = \indfa{\zbar}^{\phantomdualstar}(\xbar), \] with $\indfa{\zbar}$ as in \Cref{ex:ind-astral-point}. To see this, note that $-\zbar\cdot\uu\plusd\zbar\cdot\uu$ is equal to~$0$ if $\zbar\cdot\uu\in\R$ (including if $\uu=\zero$), and otherwise is equal to $-\infty$; thus, $\phstar{\zbar}(\zbar)=0$. If $\xbar\neq\zbar$ then, for some $\uu\in\Rn$, $\xbar\cdot\uu\neq\zbar\cdot\uu$ (by Proposition~\ref{pr:i:4}), which means that $\xbar\inprod\uu$ and $\zbar\inprod\uu$ must be subtractable, and $-\zbar\inprod\uu+\xbar\inprod\uu\ne 0$. Therefore, the expression $- \zbar\cdot(\lambda\uu) \plusd \xbar\cdot(\lambda\uu) = \lambda(- \zbar\cdot\uu +\xbar\cdot\uu )$ can be made arbitrarily large for an appropriate choice of $\lambda\in\R$. Thus, $\phstar{\zbar}=\indfa{\zbar}^{\phantomdualstar}$. 
Also, as we saw in \Cref{ex:ind-astral-point}, $\indfastar{\zbar}=\ph{\zbar}^{\phantomdualstar}$, and hence $\indfadub{\zbar}=\indfa{\zbar}^{\phantomdualstar}$ and $\phdub{\zbar}=\ph{\zbar}^{\phantomdualstar}$. If $\zbar$ is not in $\Rn$, then $\ph{\zbar}$'s standard conjugate is $\phstars{\zbar}\equiv+\infty$ (since it can be shown using Proposition~\ref{pr:i:3} that $\ph{\zbar}(\uu)=-\infty$ for some $\uu\in\Rn$). Thus, the standard conjugation entirely erases the identity of $\zbar$. On the other hand, from the astral dual conjugate $\phstar{\zbar}$, we have just seen that it is possible to reconstruct $\ph{\zbar}^{\phantomdualstar}=\phdub{\zbar}$, and thus recover $\zbar$. This shows that the astral dual conjugate $\psistarb$ can retain more information about the original function $\psi:\Rn\rightarrow\Rext$ than the standard conjugate $\psistar$. \end{example} Next, we turn to the particular case that $F$ is the extension $\fext$ of some function $f$ defined on $\Rn$. In this case, the (astral) conjugate of $\fext$, denoted $\fextstar=\bigParens{\fext\,}^{\!*}$, is the same as the (standard) conjugate of $f$: \begin{proposition} \label{pr:fextstar-is-fstar} Let $f:\Rn\rightarrow\Rext$. Then $\fextstar=\fstar$. \end{proposition} \begin{proof} For any $\uu\in\Rn$, \[ \fstar(\uu) = \sup_{\rpair{\xx}{y}\in\epi f} (\xx\cdot\uu - y) \le \sup_{\rpair{\xbar}{y}\in\epi\ef} (\xbar\cdot\uu - y) = \fextstar(\uu), \] where the equalities follow from the definitions of conjugates (Eqs.~\ref{eq:fstar-mod-def} and \ref{eq:Fstar-def}), and the inequality is because $\epi f\subseteq\epi\ef$ (by \Cref{prop:ext:F}). For the reverse inequality, let $\uu\in\Rn$ and let $\rpair{\xbar}{y}$ be any point in $\epi \fext$. Since $\epi\fext$ is the closure of $\epi f$ in $\eRn\times\R$ (by \Cref{prop:ext:F}), there exists a sequence of points $\rpair{\xx_t}{y_t}\in\epi f$ such that $\xx_t\to\xbar$ and $y_t\to y$. 
Thus, \[ \xbar\cdot\uu-y=\lim\bigParens{\xx_t\cdot\uu-y_t}\le\sup_{\rpair{\xx}{y}\in\epi f} (\xx\cdot\uu - y)=\fstar(\uu). \] Taking the supremum over $\rpair{\xbar}{y}\in\epi\ef$ then yields $\fextstar(\uu)\le \fstar(\uu)$. \end{proof} Applied to the extension $\fext$ of a function $f:\Rn\rightarrow\Rext$, this shows that $\fext$'s biconjugate is \begin{equation} \label{eqn:fdub-form-lower-plus} \fextdub(\xbar) = \fdub(\xbar) = \sup_{\uu\in\Rn} \bigParens{-\fstar(\uu) \plusd \xbar\cdot\uu} \end{equation} for $\xbar\in\extspace$, where the second equality is from \eqref{eq:psistar-def:2}. This expression is very close in form to the standard biconjugate $\fdubs$, and shows that $\fdubs(\xx)=\fdub(\xx)$ for all $\xx\in\Rn$. In several later proofs, we decompose $\xbar\in\extspace$ in \Cref{eq:psistar-def:2,eqn:fdub-form-lower-plus} as $\xbar={\limray{\vv}\plusl\zbar}$ for some $\vv\in\Rn$ and $\zbar\in\extspace$. In such cases, it will be convenient to use variants of those equations with leftward addition instead of the downward addition. The next proposition provides sufficient conditions for rewriting \eqref{eq:psistar-def:2} using leftward addition, which will be satisfied, for instance, when $\psi$ is convex and closed. \begin{proposition} \label{pr:psi-with-plusl} Let $\psi:\Rn\rightarrow\Rext$, and assume that either $\psi>-\infty$ or $\psi(\zero)=-\infty$. Then for $\xbar\in\extspace$, \begin{equation} \label{eq:pr:psi-with-plusl:1} \psistarb(\xbar) = \sup_{\uu\in\Rn} \bigParens{ - \psi(\uu) \plusl \xbar\cdot\uu }. \end{equation} \end{proposition} \begin{proof} Note that $\barx\plusd \bary = \barx\plusl \bary$ for all $\barx,\bary\in\Rext$, except if $\barx=+\infty$ and $\bary=-\infty$. Therefore, if $\psi>-\infty$ then $ - \psi(\uu) \plusd \xbar\cdot\uu = - \psi(\uu) \plusl \xbar\cdot\uu $ for all $\uu\in\Rn$ and $\xbar\in\extspace$, proving the claim in this case. 
Otherwise, if $\psi(\zero)=-\infty$ then for all $\xbar\in\extspace$, \[ -\psi(\zero)\plusd \xbar\cdot\zero = +\infty = -\psi(\zero)\plusl \xbar\cdot\zero, \] implying that both $\psistarb(\xbar)$ and the right-hand side of \eqref{eq:pr:psi-with-plusl:1} are equal to $+\infty$, proving the claim in this case as well. \end{proof} The next theorem summarizes results for $\fext$ and its biconjugate: \begin{theorem} \label{thm:fext-dub-sum} Let $f:\Rn\rightarrow\Rext$. Then \begin{letter-compact} \item \label{thm:fext-dub-sum:a} $\fext\geq\fextdub = \fdub$. \item \label{thm:fext-dub-sum:b} For all $\xbar\in\extspace$, \[ \fdub(\xbar) = \sup_{\uu\in\Rn} \bigParens{-\fstar(\uu) \plusd \xbar\cdot\uu} = \sup_{\uu\in\Rn} \bigParens{-\fstar(\uu) \plusl \xbar\cdot\uu}. \] \item \label{thm:fext-dub-sum:fdubs} For all $\xx\in\Rn$, $\fdub(\xx)=\fdubs(\xx)$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:fext-dub-sum:a}):} This is direct from Theorem~\ref{thm:fdub-sup-afffcns} (applied to $\fext$) and Proposition~\ref{pr:fextstar-is-fstar}. \pfpart{Part~(\ref{thm:fext-dub-sum:b}):} The first equality is from \eqref{eq:psistar-def:2}. The second equality holds by Proposition~\ref{pr:psi-with-plusl} since $f^*$ is closed (by \Cref{pr:conj-props}\ref{pr:conj-props:d}). \pfpart{Part~(\ref{thm:fext-dub-sum:fdubs}):} From the definition of standard conjugate (Eq.~\ref{eq:fstar-def}), \[ \fdubs(\xx) = \sup_{\uu\in\Rn} \bigParens{-\fstar(\uu) + \xx\cdot\uu} \] for $\xx\in\Rn$. This coincides with the expression in part~(\ref{thm:fext-dub-sum:b}) when $\xbar=\xx$. \qedhere \end{proof-parts} \end{proof} In \Cref{sec:ent-closed-fcn,sec:dual-char-ent-clos}, we give necessary and sufficient conditions (substantially more informative than those given in Theorem~\ref{thm:fdub-sup-afffcns}) for when $\fext=\fdub$. These results build on properties of reductions. 
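To make the preceding formulas concrete, we pause for a small worked example; the computation is elementary and uses only the definitions above.

\begin{example}[Biconjugate of the absolute value function]
Let $n=1$ and let $f(x)=\abs{x}$ for $x\in\R$, which is closed, proper and convex. Its standard conjugate is $\fstar(u)=0$ if $\abs{u}\leq 1$ and $\fstar(u)=+\infty$ otherwise, so for $\abs{u}>1$ the term $-\fstar(u)\plusd \xbar\cdot u$ is identically $-\infty$, and Theorem~\ref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:b}) gives
\[
\fdub(\xbar)
=
\sup_{u\in[-1,1]} \xbar\cdot u
\]
for $\xbar\in\Rext$. If $\xbar=x\in\R$, this supremum equals $\abs{x}$; if $\xbar=+\infty$, then taking $u=1$ shows that $\fdub(+\infty)=+\infty$, and symmetrically $\fdub(-\infty)=+\infty$. Thus, in this example, $\fdub$ agrees with $\fext$ at every point of $\Rext$.
\end{example}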
As we saw in \Cref{thm:a10-nunu}, the form of a reduction $\fshadv$ can depend on whether $\vv\in\resc f$. For that and other reasons that will later become apparent, the analysis of $\resc f$ is at the core of our development. However, standard characterizations of $\resc f$ focus on closed, proper and convex functions. For example, by \Cref{pr:rescpol-is-con-dom-fstar}, the recession cone of a closed, proper convex function $f:\Rn\rightarrow\Rext$ can be expressed as \begin{equation} \label{eq:rescpol-is-con-dom-fstar} \resc{f} = \polar{\bigParens{\cone(\dom{\fstar})}}. \end{equation} This is insufficient for our needs, because functions that arise in the analysis of astral space, such as astronic reductions, can be improper even when they are derived from functions on $\Rn$ that are closed, convex and proper. Therefore, before we can characterize when $\fext=\fdub$, we need to generalize results like the one in \eqref{eq:rescpol-is-con-dom-fstar} to improper convex functions. There are two components in our generalization approach. First, we replace $\cone(\dom{\fstar})$ with a slightly larger set, called the \emph{barrier cone of~$f$}, which we introduce and analyze in \Cref{sec:slopes}. Second, we give a technique for replacing any improper function $f$ by a proper function $f'$ using a monotone transformation (namely, composition with the exponential function) which preserves convexity and many of $f$'s other properties. We can then apply existing results to the modified function $f'$ to obtain results for the original function $f$. This technique is developed in \Cref{sec:exp-comp}. \subsection{Barrier cone of a function} \label{sec:slopes} The \emph{barrier cone} of a set $S\subseteq\Rn$, denoted $\barr S$, is the set \begin{equation} \label{eqn:barrier-cone-defn} \barr S = \Braces{ \uu\in\Rn :\: \sup_{\xx\in S} \xx\cdot\uu < +\infty }, \end{equation} which can be viewed as the set of directions in which the set $S$ is eventually bounded. 
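Before proceeding, we illustrate this definition with a simple two-dimensional example, which also shows that a barrier cone need not be closed.

\begin{example}[Barrier cone of a parabolic region]
In $\R^2$, let $S=\Braces{\xx\in\R^2 :\: x_2\geq x_1^2}$, and let $\uu\in\R^2$. If $u_2>0$, then letting $x_2\rightarrow+\infty$ shows that $\sup_{\xx\in S}\xx\cdot\uu=+\infty$. If $u_2=0$, the supremum is $\sup_{x_1\in\R} u_1 x_1$, which is finite only when $u_1=0$. If $u_2<0$, then for each $x_1$ the supremum over admissible $x_2$ is attained at $x_2=x_1^2$, so
\[
\sup_{\xx\in S}\xx\cdot\uu
=
\sup_{x_1\in\R} \bigParens{u_1 x_1 + u_2 x_1^2}
=
-\frac{u_1^2}{4 u_2}
< +\infty,
\]
the supremum of a concave quadratic. Therefore, $\barr S = \set{\zero} \cup \Braces{\uu\in\R^2 :\: u_2<0}$, a convex cone that is not closed.
\end{example}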
The barrier cone can also be expressed in terms of the support function for $S$ (as defined in Eq.~\ref{eq:e:2}), specifically, as the effective domain of $\indstars$: \begin{equation} \label{eqn:bar-cone-defn} \barr S = \bigBraces{ \uu\in\Rn :\: \indstar{S}(\uu) < +\infty } = \dom{\indstar{S}}. \end{equation} Note that the barrier cone of the empty set is $\Rn$ since then the supremum appearing in \eqref{eqn:barrier-cone-defn} is over an empty set and so equals $-\infty$. (This also follows from \eqref{eqn:bar-cone-defn} since $\indstar{\emptyset}\equiv-\infty$.) \begin{proposition} \label{prop:bar:conv-pointed} Let $S\subseteq\Rn$. Then $\barr S$ is a convex cone. \end{proposition} \begin{proof} If $S=\emptyset$, then $\barr{S}=\Rn$, which is a convex cone. So assume henceforth that $S$ is nonempty. From \eqref{eqn:barrier-cone-defn}, it is clear that $\barr{S}$ includes $\zero$ and is closed under multiplication by a positive scalar. Thus, $\barr{S}$ is a cone. Moreover, by \eqref{eqn:bar-cone-defn}, $\barr{S}$ is the effective domain of the convex function $\indstar{S}$, so it must be convex (by Propositions~\ref{roc:thm4.6} and~\ref{pr:conj-props}\ref{pr:conj-props:d}). \end{proof} We will especially be interested in the barrier cone of the epigraph of a function $f:\Rn\rightarrow\Rext$, which is the set \[ \barr(\epi f) = \biggBraces{ \rpair{\uu}{v}\in\Rn\times\R : \sup_{\rpair{\xx}{y}\in\epi f} (\xx\cdot\uu + yv) < +\infty } = \dom{\indstar{\epi f}}. \] The form of $\indstar{\epi f}$ was given precisely in \Cref{pr:support-epi-f-conjugate}. Using that proposition, we can relate the set $\cone(\dom{\fstar})$, appearing in the standard characterization of $\resc{f}$ in~\eqref{eq:rescpol-is-con-dom-fstar}, to the barrier cone $\barr(\epi f)=\dom{\indepifstar}$: \begin{proposition} \label{pr:conedomfstar-and-barrepif} Let $f:\Rn\rightarrow\Rext$. Let $\uu\in\Rn\setminus\{\zero\}$. 
Then $\uu\in\cone(\dom{\fstar})$ if and only if $\rpair{\uu}{-v}\in\barr(\epi f)$ for some $v>0$. \end{proposition} \begin{proof} By \Cref{pr:support-epi-f-conjugate}, for any $v>0$, \[ \indepifstar(\rpairf{\uu}{-v}) = v\fstar(\uu/v). \] Therefore, $\rpair{\uu}{-v}\in\barr(\epi f)=\dom{\indepifstar}$ if and only if $(\uu/v)\in\dom\fstar$. This proves that if $\rpair{\uu}{-v}\in\barr(\epi f)$ for some $v>0$ then $\uu\in\cone(\dom\fstar)$. For the reverse implication, suppose $\uu\in\cone(\dom{\fstar})$. Since $\uu\ne\zero$ and $\dom\fstar$ is convex, there exists $v>0$ such that $(\uu/v)\in\dom\fstar$ (by \Cref{pr:scc-cone-elts}\ref{pr:scc-cone-elts:c:new}); hence, $\rpair{\uu}{-v}\in\barr(\epi f)$. \end{proof} Thus, except possibly for the origin, $\cone(\dom{\fstar})$ consists of vectors $\uu$ that correspond to the elements of $\barr(\epi f)$ taking the form $\rpair{\uu}{v}$ with $v<0$. Nevertheless, to analyze improper functions, we need to consider the \emph{entire} set $\barr(\epi f)$, including the elements $\rpair{\uu}{v}$ with $v=0$. Accordingly, we define the barrier cone of a function: \begin{definition} Let $f:\Rn\rightarrow\Rext$. Then the \emph{barrier cone of $f$}, denoted $\slopes{f}$, is the set \begin{equation} \label{eq:bar-f-defn} \slopes{f} = \BigBraces{ \uu\in\Rn :\: \exists v\in\R,\, \rpair{\uu}{v}\in\barr(\epi f) }. \end{equation} \end{definition} In other words, $\slopes{f}$ is the projection of $\barr(\epi f)$ onto $\Rn$; that is, \begin{equation} \label{eq:bar-f-defn-alt} \slopes{f} = P\bigParens{\barr(\epi f)}, \end{equation} where $P:\R^{n+1}\rightarrow\Rn$ is the projection map $P(\uu,v)=\uu$ for $\rpair{\uu}{v}\in\Rn\times\R$. Note in particular that if $f\equiv+\infty$ then $\barr{f}=\Rn$. The pairs $\rpair{\uu}{v}$ in $\barr(\epi f)$ with $v=0$ are of particular interest, as they do not generally correspond to vectors in $\cone(\dom\fstar)$. 
We call the set of all such vectors $\uu\in\Rn$ with $\rpair{\uu}{0}\in\barr(\epi f)$ the \emph{vertical barrier cone of $f$}, denoted $\vertsl{f}$, so called because each vector $\uu$ in this set defines a vertical halfspace in $\R^{n+1}$ that includes all of $\epi f$. This set can also be expressed as the barrier cone of $\dom f$: \begin{proposition} \label{pr:vert-bar-is-bar-dom} Let $f:\Rn\rightarrow\Rext$. Then $\vertsl{f}$ is a convex cone, and \[ \vertsl{f}=\dom{\indstar{\dom{f}}}=\barr(\dom{f}). \] \end{proposition} \begin{proof} We have \begin{align*} \vertsl{f} &= \bigBraces{ \uu\in\Rn : \rpair{\uu}{0}\in\barr(\epi f) } \\ &= \bigBraces{ \uu\in\Rn : \indstar{\epi f}(\uu,0) < +\infty } \\ &= \dom{\indstar{\dom{f}}} = \barr(\dom{f}), \end{align*} where the second and fourth equalities are by \eqref{eqn:bar-cone-defn}, and the third equality is because $\indepifstar(\rpairf{\uu}{0})=\inddomfstar(\uu)$ by \Cref{pr:support-epi-f-conjugate}. In particular, this shows that $\vertsl{f}$ is a convex cone, by \Cref{prop:bar:conv-pointed}. \end{proof} When $\vertsl{f}$ is combined with $\cone(\dom{\fstar})$, we obtain exactly the barrier cone of $f$, as shown in the next theorem. Thus, $\cone(\dom{\fstar})$ is always included in the barrier cone of $f$, but this latter set might also include additional elements from the vertical barrier cone. \begin{theorem} \label{thm:slopes-equiv} Let $f:\Rn\rightarrow\Rext$. Then $\slopes{f}$ is a convex cone, and \[ \slopes{f} = \cone(\dom{\fstar}) \cup (\vertsl{f}). \] \end{theorem} \begin{proof} If $f\equiv+\infty$, then $\fstar\equiv-\infty$ so $\slopes{f}=\cone(\dom{\fstar})=\Rn$, implying the claim. We therefore assume henceforth that $f\not\equiv+\infty$. As a projection of the convex cone $\barr(\epi f)$, the set $\slopes{f}$ is also a convex cone (by \Cref{prop:cone-linear}). By \Cref{pr:support-epi-f-conjugate}, all the points $\rpair{\uu}{v}\in\barr(\epi f)=\dom{\indstar{\epi f}}$ satisfy $v\le 0$. 
Let $B_0$ denote the set of points $\rpair{\uu}{v}\in\barr(\epi f)$ with $v=0$ and $B_{-}$ the set of points with $v<0$; thus, $\barr(\epi f)=B_0\cup B_{-}$. We then have (with $P$ as used in Eq.~\ref{eq:bar-f-defn-alt}): \begin{align*} \slopes{f} = P\bigParens{\barr(\epi f)} &= P(B_0)\cup P(B_{-}) \\ &= (\vertsl{f})\cup P(B_{-}) \\ &= (\vertsl{f})\cup \bigParens{P(B_{-})\setminus\set{\zero}} \\ &= (\vertsl{f})\cup \bigParens{\cone(\dom{\fstar})\setminus\set{\zero}} \\ &= (\vertsl{f})\cup\cone(\dom{\fstar}). \end{align*} The third equality is by definition of $\vertsl{f}$. The fourth and last equalities are because $\vertsl{f}$ includes $\zero$, being a cone (\Cref{pr:vert-bar-is-bar-dom}). And the fifth equality is by \Cref{pr:conedomfstar-and-barrepif}. \end{proof} The barrier cone of a function is the same as that of its lower semicontinuous hull: \begin{proposition} \label{pr:slopes-same-lsc} Let $f:\Rn\rightarrow\Rext$. Then $\slopes(\lsc f)=\slopes{f}$. \end{proposition} \begin{proof} We assume $f\not\equiv+\infty$ since otherwise $f$ is already lower semicontinuous. As a general observation, note that for any nonempty set $S\subseteq\Rn$, and for all $\uu\in\Rn$, \[ \sup_{\xx\in S} \xx\cdot\uu = \sup_{\xx\in \cl{S}} \xx\cdot\uu \] by continuity of the map $\xx\mapsto\xx\cdot\uu$. Thus, the barrier cone of a set is the same as that of its closure in $\Rn$; that is, $\barr S = \barr(\cl{S})$. In particular, this means that $\barr(\epi f)=\barr\bigParens{\cl(\epi f)}=\barr\bigParens{\epi{(\lsc f)}}$, since the epigraph of the lower semicontinuous hull of $f$ is exactly the closure of $f$'s epigraph in $\R^{n+1}$ (\Cref{pr:lsc-equiv-char-rn}). This proves the claim. 
\end{proof} In general, $\slopes{f}$ need not be the same as $\cone(\dom{\fstar})$, even when $f$ is convex, closed and proper, as shown by the next example: \begin{example}[Restricted linear function] \label{ex:negx1-else-inf} In $\R^2$, let \begin{equation} \label{eqn:ex:1} f(x_1,x_2) = \begin{cases} -x_1 & \text{if $x_2\geq 0$,} \\ +\infty & \text{otherwise.} \end{cases} \end{equation} This function is convex, closed and proper. Its conjugate can be computed to be \begin{equation} \label{eqn:ex:1:conj} \fstar(u_1,u_2) = \begin{cases} 0 & \mbox{if $u_1=-1$ and $u_2\leq 0$,} \\ +\infty & \mbox{otherwise.} \end{cases} \end{equation} Thus, \begin{equation} \label{eqn:ex:1:cone-dom-fstar} \cone(\dom{\fstar})= \{\zero\} \cup \{\uu\in\R^2: u_1<0, u_2\leq 0\}, \end{equation} but $\vertsl{f} = \{\uu\in\R^2: u_1=0, u_2\leq 0\}$, so $\slopes{f} = \{\uu\in\R^2: u_1,u_2\le 0\}$. \end{example} Later, in Section~\ref{sec:dual-char-ent-clos}, we will study when $\slopes{f}=\cone(\dom{\fstar})$, a property that we will see characterizes when a function's biconjugate $\fdub$ is the same as its extension~$\fext$. \subsection{Exponential composition} \label{sec:exp-comp} We next introduce a simple trick that will allow translating many results for closed, proper convex functions to analogous results for improper lower semicontinuous functions. Specifically, we transform a possibly improper convex function $f\not\equiv+\infty$ into a proper function~$f'$ via a composition with the exponential function. We show that $f'$ retains many of $f$'s properties, so we can apply existing results to the modified function $f'$ to obtain results for the original function $f$. Let $g:\R\to\R$ be any strictly increasing convex function that is bounded below, and let $\eg:\eR\to\eR$ be its extension. Then $\eg$ must be continuous and strictly increasing over its entire domain (by \Cref{pr:conv-inc:prop}\ref{pr:conv-inc:infsup},\ref{pr:conv-inc:strictly}). 
For example, when $g(x)=e^x=\exp(x)$, then the extension $\eg=\expex$ is given, for $\ex\in\Rext$, by \begin{equation} \label{eq:expex-defn} \expex(\ex) = \begin{cases} 0 & \text{if $\ex=-\infty$,} \\ \exp(\ex) & \text{if $\ex\in\R$,} \\ +\infty & \text{if $\ex=+\infty$.} \end{cases} \end{equation} Any convex function $f:\Rn\rightarrow\Rext$ can be composed with $\eg$ to derive a new convex function $f'=\eg\circ f$, which is bounded below. This and other properties of $f'$ are given in the next proposition. Canonically, we will invoke this proposition with $\eg=\expex$. \begin{proposition} \label{pr:j:2} Let $f:\Rn\rightarrow\Rext$ be convex, let $g:\R\to\R$ be a strictly increasing convex function bounded below, and let $f'=\eg\circ f$, that is, $f'(\xx)=\eg(f(\xx))$ for $\xx\in\Rn$. Then the following hold: \begin{letter-compact} \item \label{pr:j:2a} $f'$ is convex and bounded below, with $\inf f'\ge \inf g$. \item \label{pr:j:2c} $\fpext=\overline{\eg\circ f}=\eg\circ\ef$. \item \label{pr:j:2lsc} $\lsc{f'}=\lsc(\eg\circ f)=\eg\circ(\lsc f)$. Therefore, if $f$ is lower semicontinuous, then $f'$ is as well. \item \label{pr:j:2b} For all $\xx,\yy\in\Rn$, $f'(\xx)\leq f'(\yy)$ if and only if $f(\xx)\leq f(\yy)$. So, $\resc{f'}=\resc{f}$. \item \label{pr:j:2d} For all $\xbar,\ybar\in\extspace$, $\fpext(\xbar)\leq \fpext(\ybar)$ if and only if $\fext(\xbar)\leq \fext(\ybar)$. \item \label{pr:j:2lim} Let $\xbar\in\eRn$ and let $\seq{\xx_t}$ be a sequence in $\Rn$ such that $\xx_t\to\xbar$. Then $f(\xx_t)\to\ef(\xbar)$ if and only if $f'(\xx_t)\to\fpext(\xbar)$. \end{letter-compact} In particular, the above statements hold when $g=\exp$ and $\eg=\expex$. \end{proposition} \begin{proof} As a real-valued convex function, $g$ must be continuous (\Cref{pr:stand-cvx-cont}), so \Cref{pr:conv-inc:prop}(\ref{pr:conv-inc:infsup}) implies that $\eg$ is continuous on $\eR$ with the values $\eg(-\infty)=\inf g$, $\eg(x)=g(x)$ for $x\in\R$, and $\eg(+\infty)=\sup g$. 
Moreover, by \Cref{pr:conv-inc:prop}(\ref{pr:conv-inc:strictly}), $\eg$ is strictly increasing and $\eg(+\infty)=+\infty$. \begin{proof-parts} \pfpart{Part~(\ref{pr:j:2a}):} From the definition of $f'$, we have $\inf f'\geq\inf\eg=\inf g$, which is in $\R$ since $g$ is lower bounded. Convexity of $f'$ follows by \Cref{prop:nondec:convex}. \pfpart{Part~(\ref{pr:j:2c}):} Let $\xbar\in\eRn$. By Proposition~\ref{pr:d1}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$ with $f(\xx_t)\rightarrow \fext(\xbar)$. This implies \[ \eg\bigParens{\fext(\xbar)} = \eg\bigParens{\lim f(\xx_t)} = \lim \eg\bigParens{f(\xx_t)} = \lim f'(\xx_t) \geq \fpext(\xbar), \] where the second equality is by continuity of $\eg$, and the inequality is by definition of $\fpext$. By a similar argument, there exists a sequence $\seq{\xx'_t}$ in $\Rn$ converging to $\xbar$ with $f'(\xx'_t)\rightarrow \fpext(\xbar)$. Further, since $\Rext$ is sequentially compact, there exists a subsequence of the sequence $\seq{f(\xx'_t)}$ that converges; by discarding all other elements, we can assume that the entire sequence converges in $\Rext$. Then \[ \fpext(\xbar) = \lim f'(\xx'_t) = \lim \eg\bigParens{f(\xx'_t)} = \eg\bigParens{\lim f(\xx'_t)} \geq \eg\bigParens{\fext(\xbar)}. \] The third equality is because $\eg$ is continuous. The inequality is because $\lim f(\xx'_t)\geq \fext(\xbar)$ by definition of $\fext$, and because $\eg$ is strictly increasing. \pfpart{Part~(\ref{pr:j:2lsc}):} For all $\xx\in\Rn$, \[ (\lsc f')(\xx) = \fpext(\xx) = \eg\bigParens{\fext(\xx)} = \eg\bigParens{(\lsc f)(\xx)} \] by part~(\ref{pr:j:2c}) and Proposition~\ref{pr:h:1}(\ref{pr:h:1a}). If $f$ is lower semicontinuous then the right-hand side is equal to $\eg(f(\xx))=f'(\xx)$, proving $f'$ is lower semicontinuous as well. \pfpart{Part~(\ref{pr:j:2b}):} The first claim is immediate from the fact that $\eg$ is strictly increasing. 
The second claim follows from the first claim and the definition of the recession cone. \pfpart{Part~(\ref{pr:j:2d}):} This follows from part~(\ref{pr:j:2c}) since $\eg$ is strictly increasing. \pfpart{Part~(\ref{pr:j:2lim}):} If $f(\xx_t)\to\ef(\xbar)$, then by continuity of $\eg$ and part~(\ref{pr:j:2c}), we also have \[ \lim f'(\xx_t) = \lim \eg\bigParens{f(\xx_t)} = \eg\bigParens{\lim f(\xx_t)} = \eg\bigParens{\ef(\xbar)} = \fpext(\xbar). \] For the converse, assume that $f'(\xx_t)\to\fpext(\xbar)$. From the definition of $\ef$, $\liminf f(\xx_t)\ge\ef(\xbar)$. For a contradiction, suppose $\limsup f(\xx_t)>\ef(\xbar)$. Then there exists $\beta\in\R$ such that $\beta>\ef(\xbar)$ and $f(\xx_t)\geq\beta$ for infinitely many values of $t$. For all such $t$, by the strict monotonicity of $\eg$, we have $ f'(\xx_t) = \eg\bigParens{f(\xx_t)} \geq \eg(\beta)$, and furthermore that $ \eg(\beta) > \eg\bigParens{\ef(\xbar)} = \fpext(\xbar)$, with the last equality from part~(\ref{pr:j:2c}). This contradicts that $f'(\xx_t)\to\fpext(\xbar)$. \qedhere \end{proof-parts} \end{proof} The effective domain of $\fpstar$ turns out to be exactly equal to $f$'s barrier cone, $\slopes{f}$: \begin{theorem} \label{thm:dom-fpstar} Let $f:\Rn\rightarrow\Rext$ be convex, and let $f'=\expex\circ f$. Then \[ \slopes{f} = \dom{\fpstar} = \cone(\dom{\fpstar}). \] \end{theorem} \begin{proof} Since $\slopes{f}$ is a convex cone (by \Cref{thm:slopes-equiv}), it suffices to prove $\slopes{f}=\dom{\fpstar}$. We assume without loss of generality that $f$ is lower semicontinuous, since otherwise it suffices to prove the result for its lower semicontinuous hull, $\lsc f$. This is because $\slopes(\lsc f)=\slopes{f}$ by Proposition~\ref{pr:slopes-same-lsc}, and also, by Proposition~\ref{pr:j:2}(\ref{pr:j:2lsc}), $(\expex\circ(\lsc f))^*=(\lsc f')^*=\fpstar$. If $f\equiv+\infty$ then $f'\equiv+\infty$ and $\fpstar\equiv-\infty$, so $\barr{f}=\Rn=\dom{\fpstar}$ in this case. 
If $f(\xx)=-\infty$ for some $\xx\in\Rn$, then $f$ is improper and lower semicontinuous, and therefore equal to $-\infty$ or $+\infty$ everywhere (by \Cref{pr:improper-vals}\ref{pr:improper-vals:cor7.2.1}). Thus, $f'$ is actually the indicator function for $\dom f$, that is, $f' = \indf{\dom f}$, so \[ \dom{\fpstar} = \dom{\indstar{\dom f}} = \vertsl{f} = \slopes{f}. \] The second equality is by \Cref{pr:vert-bar-is-bar-dom}. The last equality is from \Cref{thm:slopes-equiv}, noting that $\cone(\dom{\fstar})=\set{\zero}$ (since $\fstar\equiv+\infty$) and $\zero\in\vertsl{f}$ (since $\vertsl{f}$ is a cone). In the remaining case, $f$ is proper. We apply \Cref{cor:conj-compose-our-version}, setting $g=\exp$ and $G=\expex$, to obtain, for $\uu\in\Rn$, that \begin{equation} \label{eq:from:conj-compose} \fpstar(\uu)=(G\circ f)^*(\uu) = \min_{v\ge 0} \bigBracks{g^*(v)+\indepifstar(\rpairf{\uu}{-v})}. \end{equation} By a standard calculation, for $v\in\R$, \begin{equation} \label{eq:expstar} g^*(v)= \begin{cases} v\ln v - v &\text{if $v\ge 0$,} \\ +\infty &\text{if $v<0$,} \end{cases} \end{equation} with the convention $0\ln 0=0$. Let $\uu\in\Rn$. Then we have the following sequence of equivalences: \begin{align*} \uu\in\dom{\fpstar} &\Leftrightarrow \exists v\ge 0 \text{ such that } v\in\dom{\gstar} \text{ and } \rpair{\uu}{-v}\in\dom{\indf{\epi f}^*} \\ &\Leftrightarrow \exists v\ge 0 \text{ such that } \rpair{\uu}{-v}\in\dom{\indf{\epi f}^*} \\ &\Leftrightarrow \exists v\in\R \text{ such that } \rpair{\uu}{-v}\in\barr(\epi f) \\ &\Leftrightarrow \uu\in\barr f. \end{align*} The first equivalence is by \eqref{eq:from:conj-compose}. The second equivalence is because $\dom\gstar=[0,+\infty)$ by \eqref{eq:expstar}. The third equivalence is because $\indf{\epi f}^*(\uu,-v)=+\infty$ if $v<0$, by \Cref{pr:support-epi-f-conjugate}, and since $\dom{\indf{\epi f}^*}=\barr(\epi f)$ by \eqref{eqn:barrier-cone-defn}. 
The final equivalence is from the definition of $\barr f$ (Eq.~\ref{eq:bar-f-defn}). Hence $\dom{\fpstar}=\barr f$. \end{proof} \subsection{Reductions and conjugacy} \label{sec:conjugacy:reductions} In \Cref{sec:shadow} we studied properties of reductions $g=\fshadv$ under the assumption that $\vv$ is in $f$'s recession cone, $\resc{f}$. We next use exponential composition to show that if $\vv$ is \emph{not} in $\resc{f}$, then $g$ is identically equal to $+\infty$: \begin{theorem} \label{thm:i:4} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Let $\vv\in\Rn$ and let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Assume $\vv\not\in\resc{f}$. Then \[ \gext(\xbar) =\fext(\limray{\vv}\plusl\xbar) =+\infty \] for all $\xbar\in\extspace$. If, in addition, $f$ is closed, then $\fdub(\limray{\vv}\plusl\xbar) =+\infty$ for all $\xbar\in\extspace$. \end{theorem} \begin{proof} The cases $f\equiv +\infty$ or $f\equiv -\infty$ are both impossible since then we would have $\resc{f}=\Rn$, but $\vv\not\in\resc{f}$. Consequently, being lower semicontinuous, $f$ is closed if and only if it is proper. Let us assume for the moment that $f$ is closed, and so also proper. Then by \Cref{pr:rescpol-is-con-dom-fstar}, \[ \resc f = \bigSet{\vv\in\Rn:\: \forall \uu\in\dom{\fstar},\, \uu\cdot\vv\leq 0}. \] Since $\vv\not\in\resc{f}$, there must exist a point $\uu\in\dom{\fstar}$ with $\vv\cdot\uu>0$. For all $\xbar\in\extspace$, we then have \begin{align*} \fext(\limray{\vv}\plusl\xbar) &\geq \fdub(\limray{\vv}\plusl\xbar) \\ &\geq -\fstar(\uu) \plusl (\limray{\vv}\plusl \xbar)\cdot\uu \\ &= -\fstar(\uu) \plusl \limray{\vv}\cdot\uu \plusl \xbar\cdot\uu = +\infty. \end{align*} The inequalities are by Theorem~\ref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:a},\ref{thm:fext-dub-sum:b}). The last equality is because $-\fstar(\uu)>-\infty$ (since $\uu\in\dom{\fstar}$), and $\limray{\vv}\cdot\uu=+\infty$ (since $\vv\cdot\uu>0$). 
Thus, $\fext(\limray{\vv}\plusl\xbar)=\fdub(\limray{\vv}\plusl\xbar)=+\infty$ for all $\xbar\in\extspace$. Returning to the general case, suppose $f\not\equiv +\infty$ but $f$ is not necessarily closed and proper. Let $f'=\expex\circ f$. Then $f'$ is convex, lower-bounded and lower semicontinuous by Proposition~\ref{pr:j:2}(\ref{pr:j:2a},\ref{pr:j:2lsc}), so $f'$ is also proper and closed. Also, $\resc{f'}=\resc{f}$ (by Proposition~\ref{pr:j:2}\ref{pr:j:2b}), so $\vv\not\in\resc{f'}$. Therefore, for all $\xx\in\Rn$, \[ \expex\bigParens{g(\xx)} = \expex\bigParens{\fext(\limray{\vv}\plusl\xx)} = \fpext(\limray{\vv}\plusl\xx)=+\infty, \] with the second equality from Proposition~\ref{pr:j:2}(\ref{pr:j:2c}), and the third from the argument above, applied to $f'$. This implies $g(\xx)=+\infty$. Thus, $g\equiv+\infty$ so $\gext\equiv+\infty$. \end{proof} As a summary, the next corollary combines Theorem~\ref{thm:i:4} with Theorems~\ref{thm:d4} and~\ref{thm:a10-nunu}, making no assumption about $\vv$: \begin{corollary} \label{cor:i:1} Let $f:\Rn\rightarrow\Rext$ be convex, let $\vv\in\Rn$, and let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Then the following hold: \begin{letter-compact} \item \label{cor:i:1a} $g$ is convex and lower semicontinuous. \item \label{cor:i:1b} $\gext(\xbarperp)=\gext(\xbar)=\fext(\limray{\vv}\plusl\xbar)$ for all $\xbar\in\extspace$. \item \label{cor:i:1c} If $\vv\in\resc{f}$, then $\gext\leq\fext$. \end{letter-compact} \end{corollary} \begin{proof} Part~(\ref{cor:i:1c}) is exactly what is stated in Theorem~\ref{thm:d4}(\ref{thm:d4:b}). Assume for the remainder of the proof that $f$ is lower semicontinuous. This is without loss of generality since if it is not, we can replace $f$ with $\lsc f$ and apply \Cref{pr:h:1}(\ref{pr:h:1aa}). If $\vv\in\resc{f}$, then parts~(\ref{cor:i:1a}) and~(\ref{cor:i:1b}) follow from Theorem~\ref{thm:d4}(\ref{thm:d4:a},\ref{thm:d4:c}) and Theorem~\ref{thm:a10-nunu}. 
If $\vv\not\in\resc{f}$, then $\gext(\xbar)=\fext(\limray{\vv}\plusl\xbar)=+\infty$ for all $\xbar\in\extspace$, by Theorem~\ref{thm:i:4}. This implies part~(\ref{cor:i:1b}). This also implies that $g$ is the constant function $+\infty$, which is convex and lower semicontinuous, proving part~(\ref{cor:i:1a}). \end{proof} We next explore how conjugacy and astronic reductions relate to one another. We start by relating the conjugate of a function $f$ to that of its reduction $g=\fshadv$: \begin{theorem} \label{thm:e1} Let $f:\Rn\rightarrow\Rext$ be convex, $f\not\equiv+\infty$, and let $\vv\in\resc{f}$. Let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$, and let \[ L=\{\uu\in\Rn : \uu\cdot\vv=0\} \] be the linear subspace orthogonal to $\vv$. Then: \begin{letter-compact} \item \label{thm:e1:a} $g^*=f^*+\indf{L}$; that is, for $\uu\in\Rn$, \[ \gstar(\uu) = \begin{cases} \fstar(\uu) & \text{if $\uu\cdot\vv=0$,} \\ +\infty & \text{otherwise.} \end{cases} \] \item \label{thm:e1:b} $\dom{\gstar} = (\dom{\fstar}) \cap L$. \item \label{thm:e1:c} $\cone(\dom{\gstar}) = \cone(\dom{\fstar}) \cap L$. \item \label{thm:e1:d} $\slopes{g} = (\slopes{f}) \cap L$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:e1:a}):} Let $\PP$ be the projection matrix onto $L$. By \Cref{thm:a10-nunu}, $g=\lsc\gtil$ where $\gtil=(\PPf)\PP$. We first claim there exists $\xhat\in\Rn$ such that $\PP\xhat\in\ri(\dom{\PPf})$. This is because, by assumption, there exists some point $\xx\in\dom{f}$ implying, by definition of a function's image under a linear map (Eq.~\ref{eq:lin-image-fcn-defn}), that $(\PPf)(\PP\xx)\leq f(\xx)<+\infty$. Thus, $\dom{\PPf}$ is nonempty, and is also convex, being the domain of a convex function (Propositions~\ref{roc:thm4.6} and~\ref{roc:thm5.7:Af}). Therefore, there exists a point $\zz$ in its relative interior, $\ri(\dom{\PPf})$ (\Cref{pr:ri-props}\ref{pr:ri-props:roc-thm6.2b}). 
Since $(\PPf)(\zz)<+\infty$, this further implies $\zz=\PP\xhat$ for some $\xhat\in\Rn$, again by \eqref{eq:lin-image-fcn-defn}. This proves the claim. Thus, \[ \gtil^*=[(\PPf)\PP]^*=\PP[(\PPf)^*]=\PP(f^*\PP). \] The second equality is from \Cref{roc:thm16.3:fA} which can be applied here having just shown that $\PP\xhat\in\ri(\dom{\PPf})$ for some $\xhat\in\Rn$. The last equality is from \Cref{roc:thm16.3:Af}. It then follows that, for $\uu\in\Rn$, \begin{align*} g^*(\uu) = \gtil^*(\uu) &= [\PP(f^*\PP)](\uu) \\ &= \inf\bigBraces{ f^*(\PP\ww) : \ww\in\Rn,\, \PP\ww=\uu } \\ &= \begin{cases} f^*(\uu) &\text{if $\uu\in\colspace\PP=L$,} \\ +\infty &\text{otherwise.} \end{cases} \end{align*} The first equality is because $g=\lsc\gtil$ and by \Cref{pr:conj-props}(\ref{pr:conj-props:e}). The third equality follows by expanding definitions (Eqs.~\ref{eq:fA-defn} and~\ref{eq:lin-image-fcn-defn}). \pfpart{Part~(\ref{thm:e1:b}):} This follows directly from part~(\ref{thm:e1:a}). \pfpart{Part~(\ref{thm:e1:c}):} Since $\dom{\gstar}\subseteq \dom{\fstar}$, we have $\cone(\dom{\gstar})\subseteq\cone(\dom{\fstar})$ and since $L$ is a convex cone and $\dom{\gstar}\subseteq L$, we have $\cone(\dom{\gstar})\subseteq L$. Thus, $ \cone(\dom{\gstar})\subseteq\cone(\dom{\fstar})\cap L $. For the reverse inclusion, let $\ww\in\cone(\dom{\fstar})\cap L$. Clearly, $\zero$ is in $\cone(\dom{\gstar})$. Otherwise, if $\ww\neq\zero$, then since $\ww\in \cone(\dom{\fstar})$, by \Cref{pr:scc-cone-elts}(\ref{pr:scc-cone-elts:c:new}), we can write $\ww=\lambda\uu$ for some $\lambda\in\Rstrictpos$ and $\uu\in\dom{\fstar}$. Moreover, since $\ww\in L$ and $L$ is a linear subspace, we have $\uu=\ww/\lambda\in L$, implying $\uu\in (\dom{\fstar})\cap L = \dom{\gstar}$. Thus, $\ww=\lambda\uu\in\cone(\dom{\gstar})$, and so $\cone(\dom{\fstar})\cap L\subseteq \cone(\dom{\gstar})$. 
\pfpart{Part~(\ref{thm:e1:d}):} Let $f'=\expex\circ f$, which is convex by \Cref{pr:j:2}(\ref{pr:j:2a}), and $f'\not\equiv+\infty$. In addition, let $g'=\fpshadv$ be the reduction of $f'$ at $\limray{\vv}$. Then for all $\xx\in\Rn$, \[ g'(\xx) = \fpext(\limray{\vv}\plusl\xx) = \expex\bigParens{\fext(\limray{\vv}\plusl\xx)} = \expex(g(\xx)), \] with the second equality following from Proposition~\ref{pr:j:2}(\ref{pr:j:2c}). Reductions $g$ and $g'$ are convex by \Cref{thm:a10-nunu}, and not identically $+\infty$ (by \Cref{pr:d2}\ref{pr:d2:c}). Therefore, \[ \slopes{g} = \dom{\gpstar} = (\dom{\fpstar}) \cap L = (\slopes{f}) \cap L, \] where the second equality is by part~(\ref{thm:e1:b}) applied to $f'$, and the first and third equalities are by \Cref{thm:dom-fpstar}. \qedhere \end{proof-parts} \end{proof} We can also relate the biconjugate $\gdub$ of a reduction $g=\fshadv$ to that of the function $f$ from which it was derived. This fact will be used soon in characterizing when $\fext=\fdub$. \begin{theorem} \label{thm:i:5} Let $f:\Rn\rightarrow\Rext$ be convex and closed, let $\vv\in\Rn$, and let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Then $\gdub(\xbar)=\fdub(\limray{\vv}\plusl\xbar)$ for all $\xbar\in\extspace$. \end{theorem} \begin{proof} We proceed in cases. Suppose first that $f$ is not proper. In this case, since $f$ is convex, closed and improper, it must be either identically $+\infty$ or identically $-\infty$. If $f\equiv +\infty$, then it can be checked that $g\equiv +\infty$, $\fstar=\gstar\equiv -\infty$ and $\fdub=\gdub\equiv +\infty$. Similarly, if $f\equiv -\infty$ then $\fdub=\gdub\equiv -\infty$. Either way, the theorem's claim holds. For the next case, suppose $\vv\not\in\resc{f}$. Then Theorem~\ref{thm:i:4} implies that $\fdub(\limray{\vv}\plusl\xbar) =+\infty$ for all $\xbar\in\extspace$, and also that $g\equiv +\infty$, so $\gdub\equiv +\infty$. Thus, the claim holds in this case as well. 
We are left only with the case that $f$ is closed, convex and proper, and that $\vv\in\resc{f}$, which we assume for the remainder of the proof. Let $\uu\in\Rn$. We argue next that \begin{equation} \label{eq:e:9} -\gstar(\uu) = -\fstar(\uu) \plusl \limray{\vv}\cdot\uu. \end{equation} If $\uu\cdot\vv=0$ then by Theorem~\ref{thm:e1}(\ref{thm:e1:a}), $\gstar(\uu)=\fstar(\uu)$ implying \eqref{eq:e:9} in this case. If $\uu\cdot\vv<0$, then $\gstar(\uu)=+\infty$ by Theorem~\ref{thm:e1}(\ref{thm:e1:a}), and $\fstar(\uu)>-\infty$ since $f$ is proper (so that $\fstar$ is as well). These imply that both sides of \eqref{eq:e:9} are equal to $-\infty$ in this case. And if $\uu\cdot\vv>0$, then $\uu\not\in\dom{\fstar}$ (by \Cref{pr:rescpol-is-con-dom-fstar}, since $\vv\in\resc{f}$), and so $\uu\not\in\dom{\gstar}$ by Theorem~\ref{thm:e1}(\ref{thm:e1:b}). Therefore, in this case as well, both sides of \eqref{eq:e:9} are equal to $-\infty$. Thus, for all $\xbar\in\extspace$, \begin{align*} \gdub(\xbar) &= \sup_{\uu\in\Rn} \bigBracks{- \gstar(\uu) \plusl \xbar\cdot\uu } \\ &= \sup_{\uu\in\Rn} \bigBracks{- \fstar(\uu) \plusl \limray{\vv}\cdot\uu \plusl \xbar\cdot\uu } \\ &= \sup_{\uu\in\Rn} \bigBracks{- \fstar(\uu) \plusl (\limray{\vv} \plusl \xbar)\cdot\uu } \\ &= \fdub(\limray{\vv}\plusl \xbar), \end{align*} where the first and fourth equalities are by Theorem~\ref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:b}), and the second equality is by \eqref{eq:e:9}. This completes the proof. \end{proof} \subsection{Reduction-closedness} \label{sec:ent-closed-fcn} We next develop a general and precise characterization for when $\fext=\fdub$. As a step in that direction, we first generalize the reductions introduced in Section~\ref{sec:shadow} in a straightforward way, allowing reductions now to be at any icon, not just astrons. \begin{definition} \label{def:iconic-reduction} Let $f:\Rn\rightarrow\Rext$, and let $\ebar\in\corezn$ be an icon. 
The \emph{reduction of $f$ at icon $\ebar$} is the function $\fshadd:\Rn\rightarrow\Rext$ defined, for $\xx\in\Rn$, by \begin{equation*} \fshadd(\xx) = \fext(\ebar\plusl \xx). \end{equation*} Such a reduction is said to be \emph{iconic}. \end{definition} Clearly, a reduction $\fshadv$ at astron $\limray{\vv}$ is a special case of such an iconic reduction in which $\ebar=\limray{\vv}$. When $\ebar=\zero$, the resulting reduction at $\zero$ is $\fshad{\zero}=\lsc f$, the lower semicontinuous hull of $f$, by Proposition~\ref{pr:h:1}(\ref{pr:h:1a}). The function $\fshadd$ captures the behavior of $\fext$ on the single galaxy $\galax{\ebar}=\ebar\plusl\Rn$, but it can also be viewed as a kind of composition of multiple astronic reductions, as shown in the next proposition. It is the closedness of all reductions at all icons in $\corezn$ that will characterize when $\fext=\fdub$. \begin{proposition} \label{pr:icon-red-decomp-astron-red} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\vv_1,\ldots,\vv_k\in\Rn$. Let $g_0 = \lsc f$ and, for $i=1,\dotsc,k$, let $g_i = \gishadvi$. Then for $i=0,\ldots,k$, each function $g_i$ is convex and lower semicontinuous, and furthermore, \begin{align} g_i(\xx) &= \fext([\vv_1,\dotsc,\vv_i]\omm\plusl\xx), \label{eq:pr:icon-red-decomp-astron-red:1} \\ \intertext{for all $\xx\in\Rn$, and} \gext_i(\xbar) &= \fext([\vv_1,\dotsc,\vv_i]\omm\plusl\xbar), \label{eq:pr:icon-red-decomp-astron-red:2} \end{align} for all $\xbar\in\extspace$. In particular, $g_k = \fshadd$ where $\ebar=\limrays{\vv_1,\ldots,\vv_k}$. \end{proposition} \begin{proof} The proof is by induction on $i=0,\dotsc,k$. In the base case that $i=0$, we have, by Proposition~\ref{pr:h:1}(\ref{pr:h:1a},\ref{pr:h:1aa}), that $g_0(\xx)=(\lsc{f})(\xx)=\fext(\xx)$ for $\xx\in\Rn$, and that $\gext_0=\lscfext=\fext$. Further, $g_0$ is convex and lower semicontinuous by \Cref{pr:lsc-props}(\ref{pr:lsc-props:a}). For the inductive step when $i>0$, suppose the claim holds for $i-1$.
Then $g_i$ is convex and lower semicontinuous by Corollary~\ref{cor:i:1}(\ref{cor:i:1a}). Further, for $\xbar\in\extspace$, \begin{align*} \gext_i(\xbar) = \gext_{i-1}(\limray{\vv_i}\plusl\xbar) &= \fext([\vv_1,\dotsc,\vv_{i-1}]\omm \plusl \limray{\vv_i}\plusl\xbar) \\ &= \fext([\vv_1,\dotsc,\vv_i]\omm\plusl\xbar). \end{align*} The first equality is by Corollary~\ref{cor:i:1}(\ref{cor:i:1b}). The second is by inductive hypothesis. This proves \eqref{eq:pr:icon-red-decomp-astron-red:2}. Since $g_i$ is lower semicontinuous, it follows then by Proposition~\ref{pr:h:1}(\ref{pr:h:1a}), that $g_i(\xx)=(\lsc{g_i})(\xx)=\gext_i(\xx)$ for $\xx\in\Rn$, proving \eqref{eq:pr:icon-red-decomp-astron-red:1}. \end{proof} \Cref{pr:icon-red-decomp-astron-red} then implies the following simple properties of iconic reductions: \begin{proposition} \label{pr:i:9} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\ebar\in\corezn$. Then the following hold: \begin{letter-compact} \item \label{pr:i:9a} $\fshadd$ is convex and lower semicontinuous. \item \label{pr:i:9b} $\fshadextd(\xbar) = \fext(\ebar\plusl\xbar)$ for all $\xbar\in\extspace$. \item \label{pr:i:9c} Either $\fshadextd\leq \fext$ or $\fshadd\equiv +\infty$. \end{letter-compact} \end{proposition} \begin{proof} By Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:c}, $\ebar=\limrays{\vv_1,\ldots,\vv_k}$ for some $\vv_1,\dotsc,\vv_k\in\Rn$. Let $g_0 = \lsc f$ and let $g_i = \gishadvi$ for $i=1,\dotsc,k$. \Cref{pr:icon-red-decomp-astron-red} then implies that $g_k=\fshadd$, and that, for $i=0,\ldots,k$, each $g_i$ is convex and lower semicontinuous, and furthermore that \eqref{eq:pr:icon-red-decomp-astron-red:2} holds for $\xbar\in\extspace$. In particular, this proves part~(\ref{pr:i:9a}). It also shows that, for $\xbar\in\extspace$, $\fshadextd(\xbar)=\gext_k(\xbar)=\fext(\ebar\plusl\xbar)$, proving part~(\ref{pr:i:9b}). 
For part~(\ref{pr:i:9c}), suppose first that $\vv_i\not\in\resc{g_{i-1}}$ for some $i\in\{1,\dotsc,k\}$. Then $g_i\equiv+\infty$, by Theorem~\ref{thm:i:4}, so $g_j\equiv+\infty$ for $j\geq i$, and in particular $\fshadd=g_k\equiv+\infty$. Otherwise, $\vv_i\in\resc{g_{i-1}}$ for all $i\in\{1,\dotsc,k\}$, so \[ \fshadextd = \gext_k \leq \gext_{k-1} \leq \dotsb \leq \gext_0 = \fext, \] with each inequality following from Corollary~\ref{cor:i:1}(\ref{cor:i:1c}). \end{proof} So all of $f$'s iconic reductions are lower semicontinuous. However, they are not necessarily closed. The property that \emph{all} of the iconic reductions are closed, called reduction-closedness, turns out to characterize exactly when $\fext=\fdub$, as we will see shortly. \begin{definition} A convex function $f:\Rn\rightarrow\Rext$ is \emph{reduction-closed} if all of its iconic reductions are closed, that is, if $\fshadd$ is closed for every icon $\ebar\in\corezn$. \end{definition} Here are some useful facts about this property: \begin{proposition} \label{pr:j:1} Let $f:\Rn\rightarrow\Rext$ be convex. Then the following hold: \begin{letter-compact} \item \label{pr:j:1a} Let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$, for some $\vv\in\Rn$. Then $\gshad{\ebar}=\fshad{\limray{\vv}\plusl\ebar}$ for all $\ebar\in\corezn$. Therefore, if $f$ is reduction-closed, then so is $g$. \item \label{pr:j:1b} If $f$ is \emph{not} reduction-closed then there exists an icon $\ebar\in\corezn$ and $\qq,\qq'\in\Rn$ such that $\fshadd(\qq)=\fext(\ebar\plusl\qq)=+\infty$ and $\fshadd(\qq')=\fext(\ebar\plusl\qq')=-\infty$. \item \label{pr:j:1c} If $\inf f > -\infty$ then $f$ is reduction-closed. \item \label{pr:j:1d} If $f < +\infty$ then $f$ is reduction-closed.
\end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:j:1a}):} For all icons $\ebar\in\corezn$ and for all $\xx\in\Rn$, \[ \gshad{\ebar}(\xx) = \gext(\ebar\plusl\xx) = \fext(\limray{\vv}\plusl\ebar\plusl\xx) = \fshad{\limray{\vv}\plusl\ebar}(\xx) \] by Corollary~\ref{cor:i:1}(\ref{cor:i:1b}). Thus, $\gshad{\ebar}=\fshad{\limray{\vv}\plusl\ebar}$. Therefore, if $f$ is reduction-closed, then for all $\ebar\in\corezn$, $\gshad{\ebar}=\fshad{\limray{\vv}\plusl\ebar}$ is closed, which implies that $g$ is also reduction-closed. \pfpart{Part~(\ref{pr:j:1b}):} By definition, $f$ is not reduction-closed if and only if $\fshadd$ is not closed for some $\ebar\in\corezn$. Since $\fshadd$ is lower semicontinuous (by Proposition~\ref{pr:i:9}\ref{pr:i:9a}), it is not closed if and only if it is equal to $-\infty$ at some point, and equal to $+\infty$ at some other point. This is because, as discussed in Section~\ref{sec:prelim:lsc}, a function that is lower semicontinuous, but not closed, must be improper. Therefore, it only takes on the values of $+\infty$ and $-\infty$ (by \Cref{pr:improper-vals}\ref{pr:improper-vals:cor7.2.1}), but is not identically equal to either (which would mean that it is closed). \pfpart{Part~(\ref{pr:j:1c}):} If $f \geq \beta$ for some $\beta > -\infty$, then $\fshadd(\xx)=\fext(\ebar\plusl\xx)\geq \beta$ for all $\ebar\in\corezn$ and $\xx\in\Rn$. Therefore, by part~(\ref{pr:j:1b}), $f$ must be reduction-closed. \pfpart{Part~(\ref{pr:j:1d}):} If $f<+\infty$, then for all $\ebar\in\corezn$, by Proposition~\ref{pr:i:9}(\ref{pr:i:9c}), either $\fshadd\equiv+\infty$ or $\fshadd(\xx)\leq\fext(\xx)\leq f(\xx)<+\infty$ for all $\xx\in\Rn$. In either case, the condition of part~(\ref{pr:j:1b}) is ruled out, and therefore $f$ is reduction-closed. \qedhere \end{proof-parts} \end{proof} We can now characterize precisely when $\fext=\fdub$: \begin{theorem} \label{thm:dub-conj-new} Let $f:\Rn\rightarrow\Rext$ be convex. 
Then $f$ is reduction-closed if and only if $\fext=\fdub$. \end{theorem} \begin{proof} Since $\fext=\lscfext$ (\Cref{pr:h:1}\ref{pr:h:1aa}), and since $\fstar=(\lsc f)^*$, we assume without loss of generality that $f$ is lower semicontinuous (replacing it with $\lsc f$ if it is not).\looseness=-1 We first prove that if $f$ is reduction-closed then $\fext(\xbar)=\fdub(\xbar)$, by induction on the astral rank of $\xbar$. More precisely, we show by induction on $k=0,\dotsc,n$ that for every lower semicontinuous reduction-closed convex function $f$, and for all $\xbar\in\extspace$, if $\xbar$ has astral rank $k$ then $\fext(\xbar)=\fdub(\xbar)$. So suppose that $f$ is convex, lower semicontinuous, and reduction-closed. In particular, this implies that $\fshad{\zero}=\lsc f = f$ is closed. Also, let $\xbar\in\extspace$ have astral rank $k$.\looseness=-1 In the base case that $k=0$, $\xbar$ must be some point $\xx\in\Rn$. Since $f$ is closed, $\fdubs=f$ (\Cref{pr:conj-props}\ref{pr:conj-props:f}), so \begin{equation} \label{eq:thm:dub-conj-new:1} \fdub(\xx)=\fdubs(\xx)=f(\xx)=\fext(\xx), \end{equation} where the first equality follows from \Cref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:fdubs}) and the last from lower semicontinuity of $f$ (by \Cref{pr:h:1}\ref{pr:h:1a}). For the inductive step when $k>0$, we can write $\xbar=\limray{\vv}\plusl\xbarperp$ where $\vv$ is $\xbar$'s dominant direction, and $\xbarperp$ has astral rank $k-1$ (Proposition~\ref{pr:h:6}). Let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Then $g$ is convex and lower semicontinuous by Corollary~\ref{cor:i:1}(\ref{cor:i:1a}), and is reduction-closed by Proposition~\ref{pr:j:1}(\ref{pr:j:1a}). Thus, we can apply our inductive hypothesis to $g$ at $\xbarperp$, yielding \[ \fext(\xbar) = \gext(\xbarperp) = \gdub(\xbarperp) = \fdub(\xbar), \] where the equalities follow, respectively, from Corollary~\ref{cor:i:1}(\ref{cor:i:1b}), our inductive hypothesis, and Theorem~\ref{thm:i:5}. 
This completes the induction. Conversely, suppose now that $\fext=\fdub$. Further, suppose by way of contradiction that $f$ is not reduction-closed. Then by Proposition~\ref{pr:j:1}(\ref{pr:j:1b}), there exists an icon $\ebar\in\corezn$ and $\qq,\qq'\in\Rn$ such that $\fext(\ebar\plusl\qq)=+\infty$ and $\fext(\ebar\plusl\qq')=-\infty$. By our assumption, for any $\xx\in\Rn$, \begin{equation} \label{eq:i:8} \fext(\ebar\plusl\xx) = \fdub(\ebar\plusl\xx) = \sup_{\uu\in\Rn} \bigBracks{-\fstar(\uu) \plusl \ebar\cdot\uu \plusl \xx\cdot\uu} \end{equation} by Theorem~\ref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:b}). In particular, when $\xx=\qq'$, the left-hand side is equal to $-\infty$, so the expression inside the supremum is equal to $-\infty$ for all $\uu\in\Rn$. Since $\qq'\cdot\uu\in\R$, this means that $-\fstar(\uu) \plusl \ebar\cdot\uu = -\infty$ for all $\uu\in\Rn$. In turn, this implies, by \eqref{eq:i:8}, that actually $\fext(\ebar\plusl\xx)=-\infty$ for all $\xx\in\Rn$. But this contradicts that $\fext(\ebar\plusl\qq)=+\infty$. Thus, $f$ is reduction-closed. \end{proof} Combined with Proposition~\ref{pr:j:1}(\ref{pr:j:1c},\ref{pr:j:1d}), we immediately obtain the following corollary: \begin{corollary} \label{cor:all-red-closed-sp-cases} Let $f:\Rn\rightarrow\Rext$ be convex. If either $\inf f > -\infty$ or $f<+\infty$ then $\fext=\fdub$. \end{corollary} The next example shows that it is indeed possible that $\fext\neq\fdub$: \begin{example}[Astral biconjugate of the restricted linear function.] \label{ex:biconj:notext} Consider the restricted linear function from \Cref{ex:negx1-else-inf}, \[ f(x_1,x_2) = \begin{cases} -x_1 & \text{if $x_2\geq 0$,} \\ +\infty & \text{otherwise,} \end{cases} \] and its conjugate $\fstar$, which is the indicator of the set $\set{\uu\in\R^2:\:u_1=-1,u_2\le 0}$. Let $\ebar=\limray{\ee_1}$.
By \Cref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:b}), for all $\xx\in\R^2$, \[ \fdub(\ebar\plusl\xx) = \sup_{\uu\in\R^2:\:u_1=-1,u_2\le 0} (\ebar\plusl\xx)\cdot\uu =-\infty. \] On the other hand, \begin{align} \label{eq:j:2} \fext(\ebar\plusl\xx) &= \begin{cases} -\infty & \text{if $x_2\geq 0$,} \\ +\infty & \text{otherwise.} \end{cases} \end{align} Thus, $+\infty=\fext(\ebar\plusl\xx)\neq\fdub(\ebar\plusl\xx)=-\infty$ if $x_2<0$. Equation~\eqref{eq:j:2} also shows that the reduction $\fshadd$ is not closed; thus, consistent with Theorem~\ref{thm:dub-conj-new}, $f$ is not reduction-closed. \end{example} In general, even if $f$ is not reduction-closed, $\fext$ and $\fdub$ must agree at every point in the closure of the effective domain of $\fext$ (which is the same as the closure of $\dom f$, by Proposition~\ref{pr:h:1}\ref{pr:h:1c}) and at every point $\xbar\in\extspace$ with $\fdub(\xbar)>-\infty$. This means that at points $\xbar$ where $\fext$ and $\fdub$ differ, we can say exactly what values each function will take, namely, $\fext(\xbar)=+\infty$ and $\fdub(\xbar)=-\infty$, as was the case in \Cref{ex:biconj:notext}. \begin{theorem} \label{thm:fext-neq-fdub} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\xbar\in\extspace$. If $\xbar\in\cldom{f}$ or $\fdub(\xbar)>-\infty$ then $\fext(\xbar)=\fdub(\xbar)$. Consequently, if $\fext(\xbar)\neq\fdub(\xbar)$ then $\fext(\xbar)=+\infty$ and $\fdub(\xbar)=-\infty$. \end{theorem} \begin{proof} The proof is similar to the first part of the proof of Theorem~\ref{thm:dub-conj-new}. By \Cref{pr:h:1}(\ref{pr:h:1c}) and \Cref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:b}), $\cldom{f}=\cldomfext$ and $\fdub=\fextdub$, which means the theorem's claim can be stated entirely in terms of $\fext$. Thus, since $\fext=\lscfext$ (by \Cref{pr:h:1}\ref{pr:h:1aa}), it suffices to prove the result for $\lsc f$. Therefore, without loss of generality, we assume henceforth that $f$ is lower semicontinuous.
First, suppose that $f$ is not closed, and that either $\xbar\in\cldom{f}$ or $\fdub(\xbar)>-\infty$. Under these assumptions, we show that $\fext(\xbar)=\fdub(\xbar)$. In this case, as noted in the proof of Proposition~\ref{pr:j:1}(\ref{pr:j:1b}), since $f$ is lower semicontinuous but not closed, $f$ must be infinite everywhere so that $f(\xx)\in\{-\infty,+\infty\}$ for all $\xx\in\Rn$, and furthermore, $f\not\equiv+\infty$. These facts imply $\fstar\equiv+\infty$ so $\fdub\equiv-\infty$. From our assumptions, this further implies that $\xbar\in\cldom{f}$, which means there exists a sequence $\seq{\xx_t}$ in $\dom{f}$ with $\xx_t\rightarrow\xbar$. For all $t$, since $f(\xx_t)<+\infty$, the foregoing implies we must actually have $f(\xx_t)=-\infty$, so \[ -\infty = \liminf f(\xx_t) \geq \fext(\xbar) \] by definition of $\fext$. Therefore, as claimed, $\fext(\xbar)=-\infty=\fdub(\xbar)$ in this case. As in the proof of Theorem~\ref{thm:dub-conj-new}, we prove the theorem for the case when $f$ is closed by induction on the astral rank of $\xbar$. Specifically, we prove by induction on $k=0,\dotsc,n$ that for every lower semicontinuous, convex function $f$, and for all $\xbar\in\extspace$, if $\xbar$ has astral rank $k$, and if either $\xbar\in\cldom{f}$ or $\fdub(\xbar)>-\infty$, then $\fext(\xbar)=\fdub(\xbar)$. Let $f$ be such a function and let $\xbar$ have astral rank $k$ with $\xbar\in\cldom{f}$ or $\fdub(\xbar)>-\infty$. We further assume that $f$ is closed, since the case that $f$ is not closed was handled above. In the base case that $k=0$, $\xbar$ is a point $\xx\in\Rn$. Since $f$ is closed, $\ef(\xx)=f(\xx)=\fdubs(\xx)=\fdub(\xx)$, where the first equality follows from lower semicontinuity of $f$ (by \Cref{pr:h:1}\ref{pr:h:1a}) and the last from \Cref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:fdubs}). 
For the inductive step when $k>0$, by Proposition~\ref{pr:h:6}, we can write $\xbar=\limray{\vv}\plusl\xbarperp$ where $\vv\in\Rn$ is $\xbar$'s dominant direction, and $\xbarperp$, the projection of $\xbar$ orthogonal to~$\vv$, has astral rank $k-1$. If $\vv\not\in\resc{f}$, then \Cref{thm:i:4} immediately implies that $\fext(\xbar)=+\infty=\fdub(\xbar)$. Therefore, we assume henceforth that $\vv\in\resc{f}$. Let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$, which is convex and lower semicontinuous (\Cref{cor:i:1}\ref{cor:i:1a}). Then $ \fext(\xbar) = \gext(\xbarperp)$ by Corollary~\ref{cor:i:1}(\ref{cor:i:1b}), and $ \gdub(\xbarperp) = \fdub(\xbar) $ by Theorem~\ref{thm:i:5}. Therefore, to prove $\fext(\xbar)=\fdub(\xbar)$, it suffices to show $\gext(\xbarperp)=\gdub(\xbarperp)$. If $\fdub(\xbar)>-\infty$ then $ \gdub(\xbarperp) > -\infty $, so that, by the inductive hypothesis, $\gext(\xbarperp)=\gdub(\xbarperp)$. Otherwise, we must have $\xbar\in\cldom{f}$, meaning there exists a sequence $\seq{\xx_t}$ in $\dom f$ with $\xx_t\rightarrow\xbar$. Thus, for each $t$, by Proposition~\ref{pr:d2}(\ref{pr:d2:a},\ref{pr:d2:b}), $g(\xperpt)\leq f(\xx_t)<+\infty$, so $\xperpt\in\dom g$. Since $\xperpt\rightarrow\xbarperp$ (by \Cref{pr:h:5}\ref{pr:h:5b}), this means $\xbarperp\in\cldom{g}$. Therefore, by the inductive hypothesis, $\gext(\xbarperp)=\gdub(\xbarperp)$ in this case as well. This completes the induction and the proof. \end{proof} \subsection{A dual characterization of reduction-closedness} \label{sec:dual-char-ent-clos} We next give a dual characterization of when a function $f$ is reduction-closed, and thus when $\fext=\fdub$. In Section~\ref{sec:slopes}, we defined $\slopes{f}$, the barrier cone of $f$, which includes all of $\cone(\dom{\fstar})$ as well as $f$'s vertical barrier cone, $\vertsl{f}$.
We show now that $f$ is reduction-closed if and only if $\slopes{f}=\cone(\dom{\fstar})$, that is, if and only if $f$'s vertical barrier cone is already entirely included in $\cone(\dom{\fstar})$. Whereas reduction-closedness would appear to be an astral property involving the behavior of $\fext$ on every galaxy, this shows that actually the property can be precisely characterized just in terms of $f$ and its conjugate. We begin by generalizing the standard result relating $\resc f$ and $\cone(\dom\fstar)$ when $f$ is closed, proper and convex (\Cref{pr:rescpol-is-con-dom-fstar}), to also cover improper lower semicontinuous functions that are not identically $+\infty$: \begin{corollary} \label{cor:rescpol-is-slopes} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous with $f\not\equiv+\infty$. Then \[ \resc{f} = \polar{(\slopes{f})} \] and \[ \rescpol{f} = \cl(\slopes{f}). \] \end{corollary} \begin{proof} Let $f'=\expex\circ f$. Then \[ \resc{f} = \resc{f'} = \polar{\bigParens{\cone(\dom{\fpstar})}} = \polar{(\slopes{f})}. \] The first equality is by Proposition~\ref{pr:j:2}(\ref{pr:j:2b}). The second is by \Cref{pr:rescpol-is-con-dom-fstar} applied to $f'$ (which is convex and lower-bounded by Proposition~\ref{pr:j:2}(\ref{pr:j:2a}), and so also closed and proper). The third is by \Cref{thm:dom-fpstar}. This proves the first claim of the corollary. Taking the polar of both sides then yields the second claim (using \Cref{pr:polar-props}\ref{pr:polar-props:c}). \end{proof} In particular, combined with \Cref{pr:rescpol-is-con-dom-fstar}, when $f$ is closed, proper and convex, this shows that $\slopes{f}$ and $\cone(\dom{\fstar})$ have the same closures (in $\Rn$) and therefore can differ only in their relative boundaries. Our dual characterization of reduction-closedness states that $f$ is reduction-closed precisely when these two sets coincide. We prove each direction of the characterization separately. 
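Before proving the two directions, it may help to see the characterization at work on a concrete function. The following example is an illustrative sketch: it revisits \Cref{ex:biconj:notext}, and the set computations are routine calculations from the definitions above (the label is new and used nowhere else).

```latex
\begin{example}[Dual characterization for the restricted linear function.]
\label{ex:dual-char-restricted-linear}
Let $f$ be the restricted linear function of \Cref{ex:biconj:notext},
whose conjugate $\fstar$ is the indicator of
$\set{\uu\in\R^2:\:u_1=-1,u_2\le 0}$.
Then
\[
  \cone(\dom{\fstar})
  = \set{\zero} \cup \set{\uu\in\R^2:\: u_1<0,\, u_2\le 0},
\]
which is not closed.
Since $\dom{f}=\set{\xx\in\R^2:\:x_2\ge 0}$, the vertical barrier cone is
\[
  \vertsl{f}
  = \dom{\indstar{\dom f}}
  = \set{\uu\in\R^2:\: u_1=0,\, u_2\le 0},
\]
using \Cref{pr:vert-bar-is-bar-dom}.
Also, $\resc{f}=\set{\vv\in\R^2:\:v_1\ge 0,\, v_2\ge 0}$, so
$\cl(\slopes{f})=\rescpol{f}=\set{\uu\in\R^2:\:u_1\le 0,\, u_2\le 0}$
by \Cref{cor:rescpol-is-slopes}.
Because $\slopes{f}$ includes both $\cone(\dom{\fstar})$ and $\vertsl{f}$,
whose union is already this entire closed quadrant, it follows that
\[
  \slopes{f}
  = \set{\uu\in\R^2:\: u_1\le 0,\, u_2\le 0}
  \neq \cone(\dom{\fstar}),
\]
with the two cones differing exactly on the relative-boundary ray
$\set{\uu\in\R^2:\:u_1=0,\, u_2<0}$.
By (the contrapositive of) \Cref{thm:entclosed-implies-slopescone}, $f$
therefore cannot be reduction-closed, matching what was observed directly
in \Cref{ex:biconj:notext}.
\end{example}
```

This makes concrete the remark above that, for closed, proper, convex $f$, the cones $\slopes{f}$ and $\cone(\dom{\fstar})$ share the same closure and can disagree only on their relative boundaries.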
\begin{theorem} \label{thm:slopescone-implies-entclosed} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. If $\slopes{f}=\cone(\dom{\fstar})$ then $f$ is reduction-closed. \end{theorem} \begin{proof} We need to show that $\fshadd$ is closed for all $\ebar\in\corezn$. The proof is by induction on the astral rank of $\ebar$ (which, like that of all points in $\extspace$, cannot exceed $n$). More precisely, we prove the following by induction on $k=0,\dotsc,n$: For every convex, lower semicontinuous function $f$, and for every $\ebar\in\corezn$ of astral rank at most $k$, if $\slopes{f}=\cone(\dom{\fstar})$ then $\fshadd$ is closed. In the base case that $k=0$, the only icon of astral rank $0$ is $\ebar=\zero$ (by \Cref{pr:i:8}\ref{pr:i:8b}). Consider the corresponding reduction $\fshad{\zero}=\lsc f=f$. If $f>-\infty$ then $f$ is closed, because it is lower semicontinuous. Otherwise, $f(\qq)=-\infty$ for some $\qq\in\Rn$. In that case, $\fstar\equiv+\infty$, so $\dom{\fstar}=\emptyset$ and $\cone(\dom{\fstar})=\set{\zero}$. By the theorem's assumption, $\slopes{f}=\cone(\dom{\fstar})=\set{\zero}$, implying that $\resc{f}=\polar{(\slopes{f})}=\Rn$, by Corollary~\ref{cor:rescpol-is-slopes}. This in turn implies that $f$ is constant (since $f(\xx)\leq f(\yy)$ for all $\xx,\yy\in\Rn$), so $f\equiv-\infty$, which is closed, finishing the proof of the base case. For the inductive step, we assume $k>0$ and that the inductive hypothesis holds for $k-1$. Consider $\ebar$ of rank $k$, and write $\ebar=\limray{\vv}\plusl\ebarperp$, where $\vv$ is the dominant direction of $\ebar$, and $\ebarperp$ is the projection of $\ebar$ orthogonal to $\vv$. Let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Note that $\gshad{\ebarperp}=\fshad{\ebar}$ by Proposition~\ref{pr:j:1}(\ref{pr:j:1a}). If $\vv\not\in\resc{f}$ then $g\equiv+\infty$ (by \Cref{thm:i:4}), so $\gshad{\ebarperp}\equiv+\infty$; therefore, $\fshad{\ebar}=\gshad{\ebarperp}\equiv+\infty$ is closed. 
In the remaining case, $\vv\in\resc{f}$. If $f\equiv+\infty$ then $\fshad{\ebar}\equiv+\infty$, which is closed. Otherwise, $f\not\equiv+\infty$ so $g\not\equiv+\infty$ by \Cref{pr:d2}(\ref{pr:d2:c}). Moreover, by \Cref{thm:e1}(\ref{thm:e1:c},\ref{thm:e1:d}), \[ \slopes{g}=(\slopes{f})\cap L=\bigParens{\cone(\dom{\fstar})}\cap L=\cone(\dom{\gstar}). \] Since $\ebarperp$ has astral rank $k-1$ (by~\Cref{pr:h:6}), we can apply our inductive hypothesis to $g$ and $\ebarperp$, and obtain that $\fshad{\ebar}=\gshad{\ebarperp}$ is closed. \end{proof} Next, we prove the converse: \begin{theorem} \label{thm:entclosed-implies-slopescone} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. If $f$ is reduction-closed then $\slopes{f}=\cone(\dom{\fstar})$. \end{theorem} \begin{proof} If $f\equiv+\infty$ then $\fstar\equiv-\infty$ so $\slopes{f}=\Rn=\cone(\dom{\fstar})$. We therefore assume henceforth that $f\not\equiv+\infty$. We proceed by induction on the dimension of $\cone(\dom{\fstar})$. That is, we prove by induction on $k=0,\dotsc,n$ that for every convex, lower semicontinuous function $f$ with $f\not\equiv+\infty$, if $f$ is reduction-closed and $\cone(\dom{\fstar})\subseteq L$ for some linear subspace $L\subseteq\Rn$ of dimension at most $k$, then $\slopes{f}=\cone(\dom{\fstar})$. We begin with the base case that $k=0$. Then $\cone(\dom{\fstar})\subseteq L =\{\zero\}$, which means $\dom{\fstar}$ is either $\emptyset$ or $\set{\zero}$. We need to show that in both cases, $\barr f=\set{\zero}$. If $\dom{\fstar}=\{\zero\}$ then $\inf f = -\fstar(\zero) > -\infty$, implying that $f$ is closed and proper. Therefore, \[ \{\zero\} = \cl\bigParens{\cone(\dom{\fstar})} = \rescpol{f} = \cl(\slopes{f}), \] where the second and third equalities are by \Cref{pr:rescpol-is-con-dom-fstar} and Corollary~\ref{cor:rescpol-is-slopes}, respectively. This implies that $\slopes{f}=\set{\zero}$. 
In the remaining case, $\dom{\fstar}=\emptyset$, so $\fstar\equiv+\infty$ and $\fdubs\equiv-\infty$. Since $f$ is reduction-closed, $f=\lsc f=\fshad{\zero}$ is closed, and therefore $f=\fdubs\equiv-\infty$. Hence, $\resc{f}=\Rn$, and $\barr f=\set{\zero}$ by \Cref{cor:rescpol-is-slopes}, completing the proof of the base case. For the inductive step, let $f$ and $L$ satisfy the conditions of the induction with $k>0$. If $\cone(\dom{\fstar})$ is contained in a linear space of dimension $k-1$, then $\barr f=\cone(\dom{\fstar})$ by inductive hypothesis, so we assume henceforth that this is not the case. In particular, this means that $\cone(\dom{\fstar})\ne\emptyset$ and thus $\dom{\fstar}\ne\emptyset$, which implies that $f>-\infty$, and therefore $f$ is closed and proper. For the sake of contradiction, assume that $f$ is reduction-closed, but that $\barr f\ne\cone(\dom{\fstar})$. Since $\cone(\dom{\fstar})\subseteq\barr f$ (by \Cref{thm:slopes-equiv}), this means that there exists some point $\uu$ in $(\slopes{f})\setminus\cone(\dom{\fstar})$. Since $f$ is closed and proper, \[ \uu \in \slopes{f} \subseteq \cl(\slopes{f}) = \rescpol{f} = \cl(\cone(\dom{\fstar})), \] where the two equalities are respectively from Corollary~\ref{cor:rescpol-is-slopes} and \Cref{pr:rescpol-is-con-dom-fstar}. On the other hand, $\uu\not\in\cone(\dom{\fstar})$. Therefore, $\uu$ is a relative boundary point of $\cone(\dom{\fstar})$. Before continuing, we pause to prove the following lemma regarding convex cones and their relative boundary points: \begin{lemma} \label{lem:rel-bnd-convcone} Let $K\subseteq\Rn$ be a convex cone, and let $\uu\in(\cl K) \setminus(\ri K)$ be a relative boundary point of $K$. Then there exists a vector $\vv\in\Rn$ such that all of the following hold: \begin{letter-compact} \item $\uu\cdot\vv = 0$. \item $\xx\cdot\vv < 0$ for all $\xx\in\ri K$ (implying that there exists $\yy\in K$ with $\yy\cdot\vv<0$). \item $\xx\cdot\vv \leq 0$ for all $\xx\in K$. 
That is, $\vv\in \Kpol$. \end{letter-compact} \end{lemma} \begin{proofx} Since $\uu\in\cl K$, $K$ is not empty. Thus, $\ri K$ is nonempty (by \Cref{pr:ri-props}\ref{pr:ri-props:roc-thm6.2b}) and relatively open, and $\set{\uu}$ is an affine set disjoint from $\ri K$. Therefore, by \Cref{roc:thm11.2}, there exists a hyperplane that contains $\set{\uu}$, and such that $\ri K$ is contained in one of the open halfspaces associated with the hyperplane. In other words, there exist $\vv\in\Rn$ and $\beta\in\R$ such that $\uu\cdot\vv=\beta$ and $\xx\cdot\vv<\beta$ for all $\xx\in\ri K$. We claim that $\xx\cdot\vv\leq\beta$ for all $\xx\in K$. This is because if $\xx\in K$ then it is also in $\cl{K} = \cl(\ri{K})$ (\Cref{pr:ri-props}\ref{pr:ri-props:roc-thm6.3}), implying there exists a sequence $\seq{\xx_t}$ in $\ri{K}$ with $\xx_t\rightarrow\xx$. Since $\xx_t\cdot\vv<\beta$ for all $t$, and the map $\zz\mapsto\zz\cdot\vv$ is continuous, it follows that $\xx\cdot\vv\leq\beta$. Next, we claim that $\beta=0$. To see this, let $\epsilon>0$. Then the open set $\braces{\xx\in\Rn : \xx\cdot\vv > \beta - \epsilon}$ is a neighborhood of $\uu$. Since $\uu\in\cl{K}$, this neighborhood must intersect $K$ at some point $\zz$ (\Cref{pr:closure:intersect}\ref{pr:closure:intersect:a}); that is, $\zz\in K$ and $\zz\cdot\vv>\beta-\epsilon$. Since $K$ is a cone, $2\zz$ is also in $K$, implying $2(\beta-\epsilon) < (2\zz)\cdot\vv \leq \beta$, and therefore that $\beta < 2\epsilon$. Similarly, $\zz/2$ is in~$K$, implying that $(\beta-\epsilon)/2 < (\zz/2)\cdot\vv \leq \beta$, and therefore that $-\epsilon < \beta$. Thus, $-\epsilon<\beta<2\epsilon$ for all $\epsilon>0$, which implies that $\beta=0$. We have shown that $\uu\cdot\vv=0$; $\xx\cdot\vv < 0$ for all $\xx\in\ri K$; and $\xx\cdot\vv \leq 0$ for all $\xx\in K$. By definition, this last fact means that $\vv\in\Kpol$. And since $\ri K$ is not empty, there exists $\yy\in \ri K \subseteq K$ with $\yy\cdot\vv<0$. 
\end{proofx} Since $\uu$ is a relative boundary point of $\cone(\dom{\fstar})$, Lemma~\ref{lem:rel-bnd-convcone} implies that there exists $\vv\in\Rn$ with $\uu\cdot\vv=0$ and $\yy\cdot\vv<0$ for some $\yy\in\cone(\dom{\fstar})$. Furthermore, $\vv\in \polar{\bigParens{\cone(\dom{\fstar})}} = \resc{f}$ (by \Cref{pr:rescpol-is-con-dom-fstar}). Let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Also, let $M=\{\xx\in\Rn : \xx\cdot\vv=0\}$ be the linear subspace orthogonal to $\vv$, and let $L'=L\cap M$. Note that $\yy\in\cone(\dom{\fstar})\subseteq L$ but $\yy\not\in M$, because $\yy\inprod\vv<0$, and so $\yy\not\in L'$. Thus, $L'\subseteq L$ but $L'\neq L$, so the dimension of $L'$ is strictly smaller than that of $L$. The function $g$ is convex and lower semicontinuous (by \Cref{thm:a10-nunu}) with $g\not\equiv+\infty$ (by \Cref{pr:d2}\ref{pr:d2:c}), and is also reduction-closed (by \Cref{pr:j:1}\ref{pr:j:1a}). Moreover, by \Cref{thm:e1}(\ref{thm:e1:c}), \[ \cone(\dom{\gstar}) = \cone(\dom{\fstar}) \cap M \subseteq (L \cap M) = L'. \] Therefore, by inductive hypothesis applied to $g$ and $L'$, and by \Cref{thm:e1}(\ref{thm:e1:d}), \[ \cone(\dom{\gstar}) = \barr g = (\barr f) \cap M. \] Since $\uu\in\barr f$ and $\uu\in M$, we obtain \[ \uu\in(\barr f) \cap M=\cone(\dom{\gstar})\subseteq\cone(\dom{\fstar}), \] which contradicts the assumption that $\uu\not\in\cone(\dom{\fstar})$. Thus, we must have $\barr f=\cone(\dom{\fstar})$. This completes the induction and the proof. \end{proof} Combined with Theorem~\ref{thm:dub-conj-new}, we thus have proved: \begin{corollary} \label{cor:ent-clos-is-slopes-cone} Let $f:\Rn\rightarrow\Rext$ be convex. Then the following are equivalent: \begin{letter-compact} \item \label{cor:ent-clos-is-slopes-cone:b} $\fext=\fdub$. \item \label{cor:ent-clos-is-slopes-cone:a} $f$ is reduction-closed. \item \label{cor:ent-clos-is-slopes-cone:c} $\slopes{f}=\cone(\dom{\fstar})$.
\end{letter-compact} \end{corollary} \begin{proof} By Propositions~\ref{pr:h:1}(\ref{pr:h:1aa}) and~\ref{pr:slopes-same-lsc}, and since $\fstar=(\lsc f)^*$, we can assume without loss of generality that $f$ is lower semicontinuous (replacing it with $\lsc f$ if it is not). In this case, the equivalence of (\ref{cor:ent-clos-is-slopes-cone:b}) and (\ref{cor:ent-clos-is-slopes-cone:a}) was proved in Theorem~\ref{thm:dub-conj-new}. The equivalence of (\ref{cor:ent-clos-is-slopes-cone:a}) and (\ref{cor:ent-clos-is-slopes-cone:c}) is proved by Theorems~\ref{thm:slopescone-implies-entclosed} and~\ref{thm:entclosed-implies-slopescone}. \end{proof} \section{Calculus rules for extensions} In this section, we develop some simple tools for computing extensions of functions constructed using standard operations like addition of two functions or the composition of a function with a linear map. \subsection{Scalar multiple} The extension of a nonnegative scalar multiple $\lambda f$ of a function $f:\Rn\rightarrow\Rext$ is straightforward to compute: \begin{proposition} \label{pr:scal-mult-ext} Let $f:\Rn\rightarrow\Rext$, let $\lambda\geq 0$, and let $\xbar\in\extspace$. Then $\lamfext(\xbar)=\lambda\fext(\xbar)$. That is, $\lamfext=\lambda \fext$. \end{proposition} \begin{proof} If $\lambda=0$ then $\lambda f\equiv 0$ so $\lamfext(\xbar)=0=\lambda \fext(\xbar)$. Otherwise, $\lambda>0$. From the definition of extension (Eq.~\ref{eq:e:7}), and since $\ex\mapsto \lambda\ex$ is strictly increasing over $\ex\in\Rext$, \begin{align*} \lamfext(\xbar) &= \InfseqLiminf{\seq{\xx_t}}{\Rn}{\xx_t\rightarrow \xbar} {\lambda f(\xx_t)} \\ &= \lambda \InfseqLiminf{\seq{\xx_t}}{\Rn}{\xx_t\rightarrow \xbar} {f(\xx_t)} = \lambda\fext(\xbar). \qedhere \end{align*} \end{proof} \subsection{Sums of functions} Suppose $h=f+g$, where $f:\Rn\rightarrow\Rext$ and $g:\Rn\rightarrow\Rext$. 
We might expect $h$'s extension to be the sum of the extensions of $f$ and $g$, so that $\hext=\fext+\gext$ (provided, of course, that the sums involved are defined). We will see that this is indeed the case under various conditions. First, however, we give two examples showing that, in general, $\hext(\xbar)$ need not be equal to $\fext(\xbar)+\gext(\xbar)$, even when $\fext(\xbar)$ and $\gext(\xbar)$ are summable: \begin{example}[Positives and negatives] \label{ex:pos-neg} Let $P=\R_{>0}$ and $N=\R_{<0}$ denote, respectively, the sets of positive and negative reals, and let $f=\indf{P}$ and $g=\indf{N}$ be the associated indicator functions (see Eq.~\ref{eq:indf-defn}). Then $P\cap N=\emptyset$, so $h=f+g\equiv+\infty$, and therefore also $\eh\equiv+\infty$. On the other hand, $\Pbar=[0,+\infty]$ and $\Nbar=[-\infty,0]$, so by \Cref{pr:inds-ext}, \[ \ef+\eg=\indfa{\Pbar}+\indfa{\Nbar}=\indfa{\Pbar\cap\Nbar}=\indfa{\set{0}}, \] so $\ef(0)+\eg(0)=0$ but $\eh(0)=+\infty$. \end{example} In the previous example, the functions $f$ and $g$ are not lower semicontinuous at $0$, which is also the point where $\ef+\eg$ disagrees with $\eh$. The next example shows that even when $f$ and $g$ are lower semicontinuous, we do not necessarily have $\eh=\ef+\eg$. \begin{figure} \includegraphics[width=\textwidth]{figs/sideways-cone.pdf} \caption{The cone $K$ from Example~\ref{ex:KLsets:extsum-not-sum-exts}.} \label{fig:sideways-cone} \end{figure} \begin{example}[Sideways cone] \label{ex:KLsets:extsum-not-sum-exts} The standard second-order cone in $\R^3$ is the set \begin{equation} \label{eq:stan-2nd-ord-cone} \Braces{ \zz\in\R^3 : \sqrt{z_1^2 + z_2^2} \leq z_3 }, \end{equation} which is a classic, upward-oriented ``ice-cream cone'' in which every horizontal slice, with $z_3$ held to any nonnegative constant, is a disc of radius $z_3$. 
Let $K$ be this same cone rotated so that its axis is instead pointing in the direction of the vector $\uu=\trans{[0,1,1]}$ (see \Cref{fig:sideways-cone}). To derive an analytic description of $K$, note that it consists of all points $\xx\in\R^3$ whose angle $\theta$ with $\uu$ is at most $45^\circ$ so that \[ \xx\inprod\uu = \norm{\xx}\norm{\uu} \cos \theta \ge \norm{\xx}\norm{\uu} \cos\Parens{\frac{\pi}{4}} = \norm{\xx}. \] Squaring, rearranging, and restricting to $\xx$ with $\xx\inprod\uu\ge 0$ then yields \begin{equation} \label{eqn:bad-set-eg-K} K =\set{\xx\in\R^3:\: x_1^2\le 2 x_2 x_3\text{ and }x_2,x_3\ge 0}. \end{equation} (In the same way, \eqref{eq:stan-2nd-ord-cone} can be derived by instead setting $\uu=\ee_3$.) Next, let $L\subseteq\R^3$ be the plane \begin{equation} \label{eqn:bad-set-eg-L} L = \Braces{ \xx\in\R^3 : x_3 = 0 }, \end{equation} and let $f=\indK$ and $g=\indL$ be the corresponding indicator functions (as defined in Eq.~\ref{eq:indf-defn}). Then $h=f+g$ is equal to $\indf{K\cap L}$, the indicator function of the ray \[ K\cap L = \Braces{ \xx\in\R^3 : x_1 = x_3 = 0, x_2 \geq 0 }, \] depicted as a dashed line in \Cref{fig:sideways-cone}. Let $\xbar=\limray{\ee_2}\plusl\ee_1$. Then $\xbar\in\Kbar$ since, for each $t$, the point $\xx_t=\transKern{[1,\,t,\,1/(2t)]}\in K$, so $\xbar=\lim \xx_t$ is in~$\Kbar$. Likewise, $\xbar\in\Lbar$ since, for each $t$, the point $\xx'_t=\transKern{[1,\,t,\,0]}\in L$, so $\xbar=\lim \xx'_t$ is in~$\Lbar$. On the other hand, $\xbar\not\in \KLbar$ since if $\seq{\zz_t}$ is any sequence in $K\cap L$ with a limit $\zbar\in\extspac{3}$, then $\zz_t\cdot\ee_1=0$ for all $t$, so $\zbar\cdot\ee_1=\lim \zz_t\cdot\ee_1=0$ (by Theorem~\ref{thm:i:1}\ref{thm:i:1c}), whereas $\xbar\cdot\ee_1=1$. Therefore, by Proposition~\ref{pr:inds-ext}, $\fext(\xbar)=\gext(\xbar)=0$, but $\hext(\xbar)=+\infty$. 
\end{example} With these counterexamples in mind, we proceed to give sufficient conditions for when in fact $\hext(\xbar)=\fext(\xbar)+\gext(\xbar)$. The next proposition provides such conditions based on continuity and lower semicontinuity. \begin{proposition} \label{pr:ext-sum-fcns} Let $f:\Rn\rightarrow\Rext$ and $g:\Rn\rightarrow\Rext$, and assume $f$ and $g$ are summable. Let $h=f+g$, let $\xbar\in\extspace$, and assume $\fext(\xbar)$ and $\gext(\xbar)$ are summable. Then the following hold: \begin{letter-compact} \item \label{pr:ext-sum-fcns:a} $\hext(\xbar)\geq\fext(\xbar)+\gext(\xbar)$. \item \label{pr:ext-sum-fcns:b} If either $f$ or $g$ is extensibly continuous at $\xbar$, then $\hext(\xbar)=\fext(\xbar)+\gext(\xbar)$. \item \label{pr:ext-sum-fcns:c} If both $f$ and $g$ are extensibly continuous at $\xbar$, then $h$ is as well. \item \label{pr:ext-sum-fcns:d} If $\xbar=\xx\in\Rn$ and both $f$ and $g$ are lower semicontinuous at $\xx$, then $h$ is as well, and $\hext(\xx)=\fext(\xx)+\gext(\xx)$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:ext-sum-fcns:a}):} By \Cref{pr:d1}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$ such that $h(\xx_t)\to\eh(\xbar)$. Then \begin{align*} \fext(\xbar) + \gext(\xbar) &\le \liminf f(\xx_t) \plusd \liminf g(\xx_t) \\ &\le \liminf \bigBracks{f(\xx_t) + g(\xx_t)} \\ &= \liminf h(\xx_t) = \eh(\xbar). \end{align*} The first inequality follows by monotonicity of downward addition (\Cref{pr:plusd-props}\ref{pr:plusd-props:f}), since $\fext(\xbar)\le\liminf f(\xx_t)$ and similarly for~$\eg(\xbar)$. The second inequality follows by superadditivity of $\liminf$ (\Cref{prop:lim:eR}\ref{i:liminf:eR:sum}), and the final equality is by our choice of $\seq{\xx_t}$. \pfpart{Part~(\ref{pr:ext-sum-fcns:b}):} Without loss of generality, assume $f$ is extensibly continuous at $\xbar$ (swapping $f$ and $g$ otherwise). 
By Proposition~\ref{pr:d1}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ that converges to $\xbar$ and for which $g(\xx_t)\rightarrow\gext(\xbar)$. Also, since $f$ is extensibly continuous at~$\xbar$, $f(\xx_t)\rightarrow\fext(\xbar)$. Thus, $h(\xx_t) = f(\xx_t) + g(\xx_t)\rightarrow \fext(\xbar) + \gext(\xbar)$ (\Cref{prop:lim:eR}\ref{i:lim:eR:sum}), so $\hext(\xbar)\leq \lim h(\xx_t) = \fext(\xbar) + \gext(\xbar)$. Combined with part~(\ref{pr:ext-sum-fcns:a}), this proves the claim. \pfpart{Part~(\ref{pr:ext-sum-fcns:c}):} Suppose $f$ and $g$ are extensibly continuous at $\xbar$, and let $\seq{\xx_t}$ be any sequence in $\Rn$ converging to $\xbar$. Then $f(\xx_t)\rightarrow\fext(\xbar)$ and $g(\xx_t)\rightarrow\gext(\xbar)$, so \[ h(\xx_t) = f(\xx_t) + g(\xx_t) \rightarrow \fext(\xbar) + \gext(\xbar) = \hext(\xbar) \] with the last equality following from part~(\ref{pr:ext-sum-fcns:b}). \pfpart{Part~(\ref{pr:ext-sum-fcns:d}):} Suppose $\xbar=\xx\in\Rn$ and $f$ and $g$ are lower semicontinuous at $\xx$. Let $\seq{\xx_t}$ be any sequence in $\Rn$ converging to $\xx$. Then \begin{align*} h(\xx)= f(\xx)+g(\xx) &\le \liminf f(\xx_t) \plusd \liminf g(\xx_t) \\ &\le \liminf \bigBracks{f(\xx_t) + g(\xx_t)} = \liminf h(\xx_t), \end{align*} where the first inequality is by lower semicontinuity of $f$ and $g$ at $\xx$ and monotonicity of downward addition, and the second inequality is by \Cref{prop:lim:eR}(\ref{i:liminf:eR:sum}). Thus, $h$ is lower semicontinuous at $\xx$ as claimed. Since $f$, $g$, and $h$ are all lower semicontinuous at $\xx$, we have, by \Cref{pr:h:1}(\ref{pr:h:1a}), \[ \eh(\xx)=h(\xx)=f(\xx)+g(\xx)=\ef(\xx)+\eg(\xx). \qedhere \] \end{proof-parts} \end{proof} In particular, \Cref{pr:ext-sum-fcns}(\ref{pr:ext-sum-fcns:d}) shows that if $f$ and $g$ are lower semicontinuous then $\eh$ can differ from $\ef+\eg$ only at infinite points, as was the case in \Cref{ex:KLsets:extsum-not-sum-exts}.
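To make the failure mode at points of discontinuity concrete, Example~\ref{ex:pos-neg} can be replayed in floating-point arithmetic. In this small sketch (the particular sequences $1/t$ and $-1/t$ are our choice, not the text's), $f$ and $g$ separately attain their extension values at $0$ along suitable sequences, while $h=f+g$ is identically $+\infty$:

```python
# Numeric replay of the positives-and-negatives example (sequence choices
# are ours): f, g are the indicators of P = (0, oo) and N = (-oo, 0), so
# h = f + g is the indicator of the empty set.
INF = float("inf")
f = lambda x: 0.0 if x > 0 else INF   # indicator of P
g = lambda x: 0.0 if x < 0 else INF   # indicator of N
h = lambda x: f(x) + g(x)             # identically +oo

pos_seq = [1.0 / t for t in range(1, 100)]    # -> 0 from the right
neg_seq = [-1.0 / t for t in range(1, 100)]   # -> 0 from the left

# liminf f = 0 along pos_seq and liminf g = 0 along neg_seq, so the
# extensions satisfy f-bar(0) = g-bar(0) = 0 ...
assert min(f(x) for x in pos_seq) == 0.0
assert min(g(x) for x in neg_seq) == 0.0
# ... while h-bar(0) = +oo, since h is +oo along every sequence.
assert all(h(x) == INF for x in pos_seq + neg_seq + [0.0])
```

No single sequence approaching $0$ keeps both $f$ and $g$ finite, which is exactly the gap between $\eh(0)$ and $\ef(0)+\eg(0)$.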
In Proposition~\ref{pr:ext-sum-fcns}, we made continuity and lower semicontinuity assumptions, but did not assume convexity. We will next give different sufficient conditions for when the extension of a sum is the sum of the extensions, based on convexity and the assumption that the relative interiors of the domains of the functions being added have at least one point in common. We first prove a lemma for when all of the functions are nonnegative, and later, in Theorem~\ref{thm:ext-sum-fcns-w-duality}, give a more general result. The proof is based on duality. \begin{lemma} \label{lem:ext-sum-nonneg-fcns-w-duality} For $i=1,\dotsc,m$, let $f_i:\Rn\rightarrow\Rext$ be convex with $f_i\geq 0$. Assume $ \bigcap_{i=1}^m \ri(\dom{f_i}) \neq \emptyset $. Let $h=f_1+\dotsb+f_m$. Then \[ \hext(\xbar)=\efsub{1}(\xbar)+\dotsb+\efsub{m}(\xbar) \] for all $\xbar\in\extspace$. \end{lemma} \begin{proof} Let $\xbar\in\extspace$. It suffices to prove \begin{equation} \label{eqn:thm:ext-sum-fcns-w-duality:1} \hext(\xbar) \leq \sum_{i=1}^m \efsub{i}(\xbar) \end{equation} since the reverse inequality follows from Proposition~\ref{pr:ext-sum-fcns}(\ref{pr:ext-sum-fcns:a}), applied inductively. We assume henceforth that $\efsub{i}(\xbar)<+\infty$ for all $i$, since otherwise \eqref{eqn:thm:ext-sum-fcns-w-duality:1} is immediate. This implies each $f_i$ is proper. Let $\uu\in\Rn$. We will prove \begin{equation} \label{eqn:thm:ext-sum-fcns-w-duality:2} -\hstar(\uu)\plusd \xbar\cdot\uu \leq \sum_{i=1}^m \efsub{i}(\xbar). \end{equation} Once proved, this will imply \eqref{eqn:thm:ext-sum-fcns-w-duality:1} since then \[ \hext(\xbar) = \hdub(\xbar) = \sup_{\uu\in\Rn} \bigParens{ -\hstar(\uu)\plusd \xbar\cdot\uu } \leq \sum_{i=1}^m \efsub{i}(\xbar), \] with the equalities following respectively from Corollary~\ref{cor:all-red-closed-sp-cases} (since $h\geq 0$) and Theorem~\ref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:b}). 
We assume $\hstar(\uu)<+\infty$ since otherwise \eqref{eqn:thm:ext-sum-fcns-w-duality:2} is immediate. By the standard characterization of the conjugate of a sum of proper convex functions (\Cref{roc:thm16.4}), and using the fact that $ \bigcap_{i=1}^m \ri(\dom{f_i}) \neq \emptyset $, there exist $\uu_1,\dotsc,\uu_m\in\Rn$ with \[ \sum_{i=1}^m \uu_i = \uu \] and \[ \sum_{i=1}^m \fstarsub{i}(\uu_i) = \hstar(\uu). \] Since $\hstar(\uu)<+\infty$, it follows that $\fstarsub{i}(\uu_i)<+\infty$ for $i=1,\dotsc,m$, and so actually $\fstarsub{i}(\uu_i)\in\R$, since each $f_i$, and therefore each $\fstarsub{i}$, is proper. By Theorem~\ref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:a},\ref{thm:fext-dub-sum:b}), \begin{equation} \label{eqn:thm:ext-sum-fcns-w-duality:3} \efsub{i}(\xbar) \geq \fdubsub{i}(\xbar) \geq -\fstarsub{i}(\uu_i)\plusd \xbar\cdot\uu_i \end{equation} for $i=1,\dotsc,m$. Since $-\fstarsub{i}(\uu_i)\in\R$ and since we assumed $\efsub{i}(\xbar) < +\infty$, it follows that $\xbar\cdot\uu_i<+\infty$. Thus, $\xbar\cdot\uu_1,\dotsc,\xbar\cdot\uu_m$ are summable. By Proposition~\ref{pr:i:1}, applied repeatedly, this implies \[ \xbar\cdot\uu = \xbar\cdot\paren{\sum_{i=1}^m \uu_i} = \sum_{i=1}^m \xbar\cdot\uu_i. \] Combining now yields \begin{align*} -\hstar(\uu)\plusd\xbar\cdot\uu &= \bigBracks{-\sum_{i=1}^m \fstarsub{i}(\uu_i)} + \sum_{i=1}^m \xbar\cdot\uu_i \\ &= \sum_{i=1}^m \bigBracks{ -\fstarsub{i}(\uu_i) + \xbar\cdot\uu_i } \\ &\leq \sum_{i=1}^m \efsub{i}(\xbar), \end{align*} with the last line following from \eqref{eqn:thm:ext-sum-fcns-w-duality:3}. This proves \eqref{eqn:thm:ext-sum-fcns-w-duality:2}, completing the lemma. \end{proof} As a consequence of \Cref{lem:ext-sum-nonneg-fcns-w-duality}, we can derive a useful \emph{simultaneous} characterization of a finite collection of extensions. More precisely, in Proposition~\ref{pr:d1}, we saw that for any function $f:\Rn\rightarrow\Rext$ and point $\xbar\in\extspace$, there must exist a sequence $\seq{\xx_t}$ in $\Rn$ that converges to $\xbar$ and for which $f(\xx_t)\rightarrow\fext(\xbar)$.
Using Lemma~\ref{lem:ext-sum-nonneg-fcns-w-duality}, we show next that, for any finite collection of convex functions $f_1,\dotsc,f_m$ with effective domains whose relative interiors overlap, there must exist a {single} sequence $\seq{\xx_t}$ converging to $\xbar$ for which this property holds simultaneously for each of the functions, so that $f_i(\xx_t)\rightarrow\efsub{i}(\xbar)$ for $i=1,\dotsc,m$. This will also help in proving the more general form of Lemma~\ref{lem:ext-sum-nonneg-fcns-w-duality} given afterwards in Theorem~\ref{thm:ext-sum-fcns-w-duality}. \begin{theorem} \label{thm:seq-for-multi-ext} Let $f_i:\Rn\rightarrow\Rext$ be convex, for $i=1,\dotsc,m$, and let $\xbar\in\extspace$. Assume $\bigcap_{i=1}^m \ri(\dom{f_i}) \neq \emptyset$. Then there exists a sequence $\seq{\xx_t}$ in $\Rn$ such that $\xx_t\rightarrow\xbar$ and $f_i(\xx_t)\rightarrow\efsub{i}(\xbar)$ for $i=1,\dotsc,m$. \end{theorem} \begin{proof} First, by possibly re-arranging the indices of the functions, we can assume without loss of generality that $\efsub{i}(\xbar)<+\infty$ for $i=1,\dotsc,\ell$ and $\efsub{i}(\xbar)=+\infty$ for $i=\ell+1,\dotsc,m$, for some $\ell\in\{0,\dotsc,m\}$. Let $f'_i=\expex\circ f_i$, for $i=1,\dotsc,m$ (where $\expex$ is as in Eq.~\ref{eq:expex-defn}), and let $h=f'_1+\dotsb + f'_\ell$. Then for $i=1,\dotsc,\ell$, each $f'_i$ is convex and nonnegative (by Proposition~\ref{pr:j:2}\ref{pr:j:2a}). Also, $\dom{f'_i}=\dom{f_i}$, so $\bigcap_{i=1}^\ell \ri(\dom{f'_i}) \neq \emptyset$. Therefore, by Lemma~\ref{lem:ext-sum-nonneg-fcns-w-duality}, \begin{equation} \label{eqn:thm:seq-for-multi-ext:1} \hext(\xbar) = \sum_{i=1}^\ell \fpextsub{i}(\xbar). \end{equation} By Proposition~\ref{pr:d1}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$ with $h(\xx_t)\rightarrow\hext(\xbar)$. For $i=1,\ldots,\ell$, we consider each sequence $\seq{f'_i(\xx_t)}$, one by one. 
By sequential compactness of $\eR$, the sequence $\seq{f'_i(\xx_t)}$ must have a convergent subsequence; discarding all other elements of $\seq{\xx_t}$ then yields a sequence along which $\seq{f'_i(\xx_t)}$ converges. Repeating for each $i$, we obtain a sequence $\seq{\xx_t}$ that converges to $\xbar$ and for which $h(\xx_t)\rightarrow\hext(\xbar)$ and $\seq{f'_i(\xx_t)}$ converges in $\eR$ for all $i=1,\ldots,\ell$. We then have \begin{equation} \label{eq:seq-multi-ext} \sum_{i=1}^\ell \fpextsub{i}(\xbar) = \hext(\xbar) = \lim h(\xx_t) =\lim\BigBracks{\sum_{i=1}^\ell f'_i(\xx_t)} =\sum_{i=1}^\ell\BigBracks{\lim f'_i(\xx_t)}, \end{equation} where the last equality is by continuity (\Cref{prop:lim:eR}\ref{i:lim:eR:sum}), noting that $f'_i(\xx_t)\ge 0$, so also $\lim f'_i(\xx_t)\ge 0$, implying that the limits are summable. Furthermore, for $i=1,\ldots,\ell$, $\lim f'_i(\xx_t)\ge\fpextsub{i}(\xbar)$ (by definition of extensions) and also $\fpextsub{i}(\xbar)\in\R$. Combined with \eqref{eq:seq-multi-ext}, these facts imply that actually $\lim f'_i(\xx_t)=\fpextsub{i}(\xbar)$ for all $i$, since otherwise the rightmost expression in \eqref{eq:seq-multi-ext} would be strictly greater than the leftmost expression in that equality. By \Cref{pr:j:2}(\ref{pr:j:2lim}), this implies that $ \lim f_i(\xx_t) =\efsub{i}(\xbar) $ for $i=1,\dotsc,\ell$. Also, for $i=\ell+1,\dotsc,m$, since $\efsub{i}(\xbar)=+\infty$, every sequence converging to $\xbar$, and so in particular $\seq{\xx_t}$, satisfies $\lim f_i(\xx_t)=+\infty=\efsub{i}(\xbar)$. \end{proof} We can now prove the more general form of Lemma~\ref{lem:ext-sum-nonneg-fcns-w-duality}: \begin{theorem} \label{thm:ext-sum-fcns-w-duality} Let $f_i:\Rn\rightarrow\Rext$ be convex, for $i=1,\dotsc,m$. Assume $f_1,\dotsc,f_m$ are summable, and that $\bigcap_{i=1}^m \ri(\dom{f_i}) \neq \emptyset$. Let $h=f_1+\dotsb+f_m$, and let $\xbar\in\extspace$. Assume also that $\efsub{1}(\xbar),\dotsc,\efsub{m}(\xbar)$ are summable. Then \[ \hext(\xbar)=\efsub{1}(\xbar)+\dotsb+\efsub{m}(\xbar).
\] \end{theorem} \begin{proof} By Theorem~\ref{thm:seq-for-multi-ext}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ with $\xx_t\rightarrow\xbar$ and $f_i(\xx_t)\rightarrow\efsub{i}(\xbar)$ for $i=1,\dotsc,m$. For this sequence, \[ \hext(\xbar) \ge \efsub{1}(\xbar)+\dotsb+\efsub{m}(\xbar) = \lim\bigBracks{ f_1(\xx_t)+\dotsb+f_m(\xx_t) } = \lim h(\xx_t) \ge \hext(\xbar). \] The first inequality is by repeated application of \Cref{pr:ext-sum-fcns}(\ref{pr:ext-sum-fcns:a}), and the first equality is by \Cref{prop:lim:eR}(\ref{i:lim:eR:sum}). The last inequality is by definition of extensions. This proves the theorem. \end{proof} When applied to indicator functions, Theorem~\ref{thm:ext-sum-fcns-w-duality} (or \Cref{lem:ext-sum-nonneg-fcns-w-duality}) implies that for any finite collection of convex sets in $\Rn$ with overlapping relative interiors, the intersection of their (astral) closures is equal to the closure of their intersection: \begin{corollary} \label{cor:clos-int-sets} Let $S_1,\dotsc,S_m\subseteq\Rn$ be convex, and let $U=\bigcap_{i=1}^m S_i$. Assume $\bigcap_{i=1}^m \ri{S_i}\neq\emptyset$. Then \[ \Ubar = \bigcap_{i=1}^m \Sbar_i. \] \end{corollary} \begin{proof} Let $f_i=\indf{S_i}$ be the indicator functions of the sets $S_i$, for $i=1,\dotsc,m$. These functions are convex and nonnegative. Let $h= f_1 + \dotsb + f_m$, implying $h=\indf{U}$. Since $\bigcap_{i=1}^m \ri(\dom{f_i})= \bigcap_{i=1}^m (\ri{S_i})\neq\emptyset$, we can apply Lemma~\ref{lem:ext-sum-nonneg-fcns-w-duality} yielding $\hext= \efsub{1} + \dotsb + \efsub{m}$. We then have \[ I_{\Ubar}=\hext = \efsub{1} + \dotsb + \efsub{m} =I_{\Sbar_1}+\dotsb+I_{\Sbar_m} =I_{\cap_{i=1}^m\Sbar_i}, \] where the first and third equalities are by Proposition~\ref{pr:inds-ext}. This proves the claim. \end{proof} Without assuming overlapping relative interiors, Corollary~\ref{cor:clos-int-sets} need not hold in general. 
For instance, the sets $P=\R_{>0}$ and $N=\R_{<0}$ from \Cref{ex:pos-neg} do not intersect, but their astral closures do, so $\PNbar\ne\Pbar\cap\Nbar$. Similarly, in \Cref{ex:KLsets:extsum-not-sum-exts}, we constructed sets $K,L\subseteq\Rn$ and a point $\xbar\in\eRn$, such that $\xbar\in\Kbar\cap\Lbar$, but $\xbar\not\in\KLbar$, so $\Kbar\cap\Lbar\ne\KLbar$. \subsection{Composition with an affine map} We next consider functions $h:\Rn\rightarrow\Rext$ of the form $h(\xx)=f(\bb+\A\xx)$, where $f:\Rm\rightarrow\Rext$, $\A\in\Rmn$ and $\bb\in\Rm$, so that $h$ is the composition of $f$ with an affine map. In this case, we might expect $h$'s extension to be $\hext(\xbar)=\fext(\bb\plusl\A\xbar)$. As before, this will be the case under some fairly mild conditions. First, however, we show that this need not hold in general: \begin{example}[Positives and negatives, continued] Let $f=i_P$ where $P=\R_{>0}$ as in \Cref{ex:pos-neg}, let $\A=\zero_{1\times 1}$ be the zero matrix, and let $h(x)=f(\A x)$ for $x\in\R$. Thus, $h(x)=f(0)=+\infty$ for all $x\in\R$, so also $\eh(\ex)=+\infty$ for all $\ex\in\eR$. However, as we saw in \Cref{ex:pos-neg}, $\ef=\indfa{\Pbar}$ where $\Pbar=[0,+\infty]$, so $\ef(\A\ex)=\ef(0)=0$ for all $\ex$. \end{example} The culprit in the previous example is the lack of lower semicontinuity of $f$ at $0$. The next example shows that lower semicontinuity of $f$ is not sufficient to guarantee that $\eh(\xbar)=\ef(\bb\plusl\A\xbar)$ for all $\xbar$. \begin{example}[Sideways cone, continued] \label{ex:fA-ext-countereg} Let $f=\indK$ where $K\subseteq\R^3$ is the sideways cone from \Cref{ex:KLsets:extsum-not-sum-exts}, let \[ \A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \] and let $h(\xx)=f(\A\xx)$ for $\xx\in\R^3$. Let $\xbar=\limray{\ee_2}\plusl\ee_1$, where $\ee_1,\ee_2,\ee_3$ are the standard basis vectors in $\R^3$. Then $\A\xbar=\limray{\ee_2}\plusl\ee_1$ (using Proposition~\ref{pr:h:4}). 
In Example~\ref{ex:KLsets:extsum-not-sum-exts}, we argued that $\fext(\A\xbar)=0$. For $\xx\in\R^3$, $h(\xx) = h(x_1,x_2,x_3) = f(x_1,x_2,0)$, which is an indicator function~$\indf{M}$ for the set $M=\{\xx\in\R^3 :\: x_1=0,\,x_2\geq 0\}$. Therefore, by \Cref{pr:inds-ext}, if $\hext(\xbar)=0$ then there exists a sequence $\seq{\xx_t}$ in $M$ that converges to $\xbar$, implying $0=\xx_t\cdot\ee_1\rightarrow\xbar\cdot\ee_1$, a contradiction since $\xbar\cdot\ee_1=1$. Thus, $\hext(\xbar)=+\infty$ but $\fext(\A\xbar)=0$. \end{example} \begin{proposition} \label{pr:ext-affine-comp} Let $f:\Rm\rightarrow\Rext$, $\A\in\Rmn$, and $\bb\in\Rm$. Let $h:\Rn\rightarrow\Rext$ be defined by $h(\xx) = f(\bb + \A\xx)$ for $\xx\in\Rn$. Let $\xbar\in\extspace$. Then the following hold: \begin{letter-compact} \item \label{pr:ext-affine-comp:a} $\hext(\xbar)\geq\fext(\bb\plusl\A\xbar)$. \item \label{pr:ext-affine-comp:b} If $m=n$ and $\A$ is invertible then $\hext(\xbar)=\fext(\bb\plusl\A\xbar)$. \item \label{pr:ext-affine-comp:c} If $f$ is extensibly continuous at $\bb\plusl\A\xbar$ then $\hext(\xbar)=\fext(\bb\plusl\A\xbar)$ and $h$ is extensibly continuous at $\xbar$. \item \label{pr:ext-affine-comp:d} If $\xbar=\xx\in\Rn$ and $f$ is lower semicontinuous at $\bb+\A\xx$ then $\hext(\xx)=\fext(\bb+\A\xx)$ and $h$ is lower semicontinuous at $\xx$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:ext-affine-comp:a}):} Suppose $\seq{\xx_t}$ is a sequence in $\Rn$ that converges to $\xbar$. Then $\bb+\A\xx_t=\bb\plusl\A\xx_t\rightarrow \bb\plusl\A\xbar$ by continuity (\Cref{cor:aff-cont}). Therefore, \[ \liminf h(\xx_t) = \liminf f(\bb+\A\xx_t) \geq \fext(\bb\plusl\A\xbar). \] Since this holds for all such sequences, the claim follows. \pfpart{Part~(\ref{pr:ext-affine-comp:b}):} Suppose $m=n$ and $\A$ is invertible. Then by part~(\ref{pr:ext-affine-comp:a}), $\hext(\xbar)\geq\fext(\bb\plusl\A\xbar)$. 
For the reverse inequality, note that, for $\xx,\yy\in\Rn$, $\yy=\bb+\A\xx$ if and only if $\xx=\bb'+\Ainv\yy$, where $\bb' = -\Ainv\bb$. Therefore, $f(\yy)=h(\bb'+\Ainv\yy)$ for all $\yy\in\Rn$. Applying part~(\ref{pr:ext-affine-comp:a}) with $f$ and $h$ swapped, it follows that $\fext(\ybar)\geq\hext(\bb'\plusl\Ainv\ybar)$ for $\ybar\in\extspace$. In particular, setting $\ybar=\bb\plusl\A\xbar$, this yields $\fext(\bb\plusl\A\xbar)\geq\hext(\xbar)$. \pfpart{Part~(\ref{pr:ext-affine-comp:c}):} Suppose $f$ is extensibly continuous at $\bb\plusl\A\xbar$. Let $\seq{\xx_t}$ be any sequence in $\Rn$ converging to $\xbar$, implying $\bb+\A\xx_t\rightarrow \bb\plusl\A\xbar$ by continuity (\Cref{cor:aff-cont}). Since $f$ is extensibly continuous at $\bb\plusl\A\xbar$, it follows that $h(\xx_t)=f(\bb+\A\xx_t)\rightarrow\fext(\bb\plusl\A\xbar)$. Thus, $\lim h(\xx_t)$ exists and has the same value $\fext(\bb\plusl\A\xbar)$ for every such sequence. Furthermore, this limit must be equal to $\hext(\xbar)$ for at least one such sequence, by Proposition~\ref{pr:d1}. This proves both parts of the claim. \pfpart{Part~(\ref{pr:ext-affine-comp:d}):} Suppose $\xbar=\xx\in\Rn$ and $f$ is lower semicontinuous at $\bb+\A\xx$. Let $\seq{\xx_t}$ be any sequence in $\Rn$ converging to $\xx$. Then $\bb+\A\xx_t\to\bb+\A\xx$ by continuity, so \[ h(\xx) = f(\bb+\A\xx) \le \liminf f(\bb+\A\xx_t) = \liminf h(\xx_t) \] by lower semicontinuity of $f$. Thus, $h$ is lower semicontinuous at $\xx$. Therefore, by \Cref{pr:h:1}(\ref{pr:h:1a}), \[ \eh(\xx) = h(\xx) = f(\bb+\A\xx) = \ef(\bb+\A\xx). \qedhere \] \end{proof-parts} \end{proof} Henceforth, we consider only composition with a linear map, that is, with $\bb=\zero$. As in \eqref{eq:fA-defn}, for a function $f:\Rm\rightarrow\Rext$ and matrix $\A\in\Rmn$, we write $\fA$ for the function $(\fA)(\xx)=f(\A\xx)$, for $\xx\in\Rn$. 
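Before turning to the positive result, the sideways-cone obstruction of \Cref{ex:fA-ext-countereg} is easy to verify computationally. In this sketch (the witness sequence and the roundoff tolerance are ours), the points $\xx_t=\transKern{[1,\,t,\,1/(2t)]}$ stay in $K$, certifying that $\A\xbar\in\Kbar$, while their images $\A\xx_t$ leave $K$, so $h=\fA$ is $+\infty$ along the sequence:

```python
import numpy as np

# Sketch for the sideways cone K = {x : x1^2 <= 2*x2*x3, x2 >= 0, x3 >= 0}
# and the matrix A that zeroes the third coordinate (the tolerance 1e-12
# is ours, to absorb floating-point roundoff).
def in_K(x):
    return x[0] ** 2 <= 2.0 * x[1] * x[2] + 1e-12 and x[1] >= 0 and x[2] >= 0

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

for t in (1.0, 10.0, 100.0, 1000.0):
    x_t = np.array([1.0, t, 1.0 / (2.0 * t)])
    assert in_K(x_t)           # the sequence stays in K ...
    assert not in_K(A @ x_t)   # ... but its image (1, t, 0) does not
```

The third coordinate $1/(2t)$ shrinks just fast enough to keep $x_1^2\le 2x_2x_3$ tight; projecting it away destroys membership in $K$, which is why $\fAext$ and $\fext(\A\,\cdot\,)$ can disagree at infinite points.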
We show next that if $f$ is convex and if the column space of the matrix $\A$ intersects the relative interior of $\dom f$, then $\fAext(\xbar)=\fext(\A\xbar)$. \begin{theorem} \label{thm:ext-linear-comp} Let $f:\Rm\rightarrow\Rext$ be convex and let $\A\in\Rmn$. Assume there exists $\xing\in\Rn$ such that $\A\xing\in\ri(\dom f)$. Then \[ \fAext(\xbar)=\fext(\A\xbar) \] for all $\xbar\in\extspace$. \end{theorem} \begin{proof} Let $\xbar\in\eRn$, and let $h=\fA$. It suffices to prove $\hext(\xbar)\leq\fext(\A\xbar)$ since the reverse inequality was proved in Proposition~\ref{pr:ext-affine-comp}(\ref{pr:ext-affine-comp:a}). Let us assume for now that $f\geq 0$, later returning to the general case. Then $h\geq 0$, so both $f$ and $h$ are reduction-closed (Proposition~\ref{pr:j:1}\ref{pr:j:1c}). Furthermore, by the assumption of the theorem, $\dom f\ne\emptyset$ and $\dom h\ne\emptyset$, so both $f$ and $h$ are proper. Let $\uu\in\Rn$. We will show $-\hstar(\uu)\plusd\xbar\cdot\uu \leq \fext(\A\xbar)$, which we will see is sufficient to prove the main claim. This is immediate if $\hstar(\uu)=+\infty$, so we assume henceforth that $\hstar(\uu)<+\infty$. Since $h$ is proper, so is $\hstar$, and hence in fact $\hstar(\uu)\in\R$. Using the characterization of the conjugate $(\fA)^*$ (\Cref{roc:thm16.3:fA}) and the fact that $\hstar(\uu)\in\R$, we obtain that $\hstar(\uu)=(\fA)^*(\uu)=\fstar(\ww)$ for some $\ww\in\Rm$ such that $\transA\ww=\uu$. Thus, \begin{align*} -\hstar(\uu)\plusd\xbar\cdot\uu &= -\fstar(\ww)\plusd\xbar\cdot(\transA\ww) \\ &= -\fstar(\ww)\plusd(\A\xbar)\cdot\ww \\ &\leq \fdub(\A\xbar) \leq \fext(\A\xbar). \end{align*} The second equality is by \Cref{thm:mat-mult-def}, and the inequalities are by \Cref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:a},\ref{thm:fext-dub-sum:b}).
Taking the supremum over $\uu\in\Rn$ then yields \begin{equation} \label{eq:ext-linear-comp:final} \hext(\xbar) = \hdub(\xbar) = \sup_{\uu\in\Rn}\bigParens{-\hstar(\uu)\plusd\xbar\cdot\uu} \leq \fext(\A\xbar) \end{equation} with the equalities following from Corollary~\ref{cor:all-red-closed-sp-cases} and Theorem~\ref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:b}). This proves the claim when $f\geq 0$. For the general case, we apply the exponential composition trick (Section~\ref{sec:exp-comp}). Let $f$ be as stated in the theorem (not necessarily nonnegative). Let $f'=\expex\circ f$ and $h'=\expex\circ h$. Then for $\xx\in\Rn$, $h'(\xx)=\expex(h(\xx))=\expex(f(\A\xx))=f'(\A\xx)$. Also, $\dom f'=\dom f$, so if $\A\xing\in\ri(\dom f)$ then also $\A\xing\in\ri(\dom f')$. Thus, for $\xbar\in\extspac{n}$, \[ \expex(\hext(\xbar)) = \hpext(\xbar) \leq \fpext(\A\xbar) = \expex(\fext(\A\xbar)). \] The inequality follows from the argument above (namely, Eq.~\ref{eq:ext-linear-comp:final}) applied to $f'$ (which is nonnegative). The two equalities are from Proposition~\ref{pr:j:2}(\ref{pr:j:2c}). Since $\expex$ is strictly increasing, it follows that $\hext(\xbar)\leq\fext(\A\xbar)$ as claimed. \end{proof} As before, we can apply this theorem to an indicator function, yielding the next corollary which shows that, under suitable conditions, the astral closure of the inverse image of a convex set $S\subseteq\Rm$ under a linear map is the same as the inverse image of the corresponding astral closure $\Sbar$ under the corresponding astral linear map. \begin{corollary} \label{cor:lin-map-inv} Let $\A\in\Rmn$ and let $\slinmapA:\Rn\rightarrow\Rm$ and $\alinmapA:\extspace\rightarrow\extspac{m}$ be the associated standard and astral linear maps (so that $\slinmapA(\xx)=\A\xx$ for $\xx\in\Rn$ and $\alinmapA(\xbar)=\A\xbar$ for $\xbar\in\extspace$). Let $S\subseteq\Rm$ be convex, and assume there exists some point $\xing\in\Rn$ with $\A\xing\in\ri{S}$.
Then \[ \clbar{\slinmapAinv(S)}=\alinmapAinv(\Sbar). \] \end{corollary} \begin{proof} Let $f=\indf{S}$ be $S$'s indicator function, which is convex, and let $h=\fA$. Then $h(\xx)=0$ if and only if $\A\xx\in S$ (and otherwise is $+\infty$); that is, $h=\indf{U}$ where \[ U = \{ \xx\in\Rn :\: \A\xx\in S \} = \slinmapAinv(S). \] Since $f$ and $h$ are indicators of convex sets $S$ and $U$, their extensions are astral indicators of the closures of $S$ and $U$ (by \Cref{pr:inds-ext}), that is, $\fext=\indset_{\Sbar}$ and $\hext=\indset_{\Ubar}$. By assumption, there exists a point $\xing$ with $\A\xing\in\ri{S}=\ri(\dom f)$, so we can apply Theorem~\ref{thm:ext-linear-comp} yielding $\hext(\xbar)=\fext(\A\xbar)$ for $\xbar\in\extspace$. Since $\fext$ is the indicator of $\Sbar$ and $\hext$ is the indicator of $\Ubar$, the set $\Ubar$ satisfies \[ \Ubar = \{ \xbar\in\extspace :\: \A\xbar\in\Sbar \} = \alinmapAinv(\Sbar), \] which concludes the proof, because $U=\slinmapAinv(S)$. \end{proof} As an additional corollary, we can compute the astral closure of any affine set: \begin{corollary} \label{cor:closure-affine-set} Let $\A\in\Rmn$ and $\bb\in\Rm$, and let $M = \{ \xx\in\Rn :\: \A\xx = \bb \}$. Then \begin{equation} \label{eq:cor:closure-affine-set:1} \Mbar = \Braces{ \xbar\in\extspace :\: \A\xbar = \bb }. \end{equation} \end{corollary} \begin{proof} Let $U$ denote the set on the right-hand side of \eqref{eq:cor:closure-affine-set:1}. Suppose first that $M$ is empty, so that $\Mbar$ is empty as well. To prove the corollary in this case, we need to show that $U$ is empty. Suppose to the contrary that $\A\xbar=\bb$ for some $\xbar\in\extspace$. We can write $\xbar=\ebar\plusl\qq$ for some icon $\ebar\in\corezn$ (\Cref{thm:icon-fin-decomp}). Then \begin{equation} \label{eq:cor:closure-affine-set:2} \bb = \A(\ebar\plusl\qq) = (\A\ebar)\plusl(\A\qq), \end{equation} where the second equality is by \Cref{pr:h:4}(\ref{pr:h:4c}). 
Since $\A\ebar$ is an icon (by \Cref{pr:i:8}\ref{pr:i:8-matprod}), and since the icons for the expressions on the left- and right-hand sides of \eqref{eq:cor:closure-affine-set:2} must be the same (by Theorem~\ref{thm:icon-fin-decomp}), it follows that $\A\ebar=\zero$. Thus, $\A\qq=\bb$, contradicting that $M=\emptyset$. In the alternative case, $M$ includes some point $\xing\in\Rn$. We apply Corollary~\ref{cor:lin-map-inv} with $S=\{\bb\}$, noting that $\A\xing=\bb\in\{\bb\}=\ri S$ and that $\Sbar=\{\bb\}$. Thus, in the notation of that corollary, $M = \slinmapAinv(S)$ and $U = \smash{\alinmapAinv(\Sbar)}$. Therefore, $\Mbar=U$. \end{proof} \subsection{Linear image of a function} As in \eqref{eq:lin-image-fcn-defn}, for a matrix $\A\in\Rmn$ and a function $f:\Rn\rightarrow\Rext$, the image of $f$ under $\A$, denoted $\A f$, is defined, for any $\xx\in\Rm$, as \begin{equation*} (\A f)(\xx) = \inf\bigBraces{f(\zz):\:\zz\in\Rn,\,\A\zz=\xx}. \end{equation*} When $f$ is convex, the constructed function $\A f$ is as well (\Cref{roc:thm5.7:Af}). The extension $\Afextnp$ always has the expected form: \begin{proposition} \label{pr:inf-lin-ext} Let $f:\Rn\rightarrow\Rext$ and $\A\in\Rmn$, and let $\xbar\in\extspac{m}$. Then \begin{equation} \label{eqn:pr:inf-lin-ext:1} \Afext(\xbar) = \inf\bigBraces{\fext(\zbar):\:\zbar\in\extspace,\,\A\zbar=\xbar}. \end{equation} Furthermore, if $\xbar$ is in the astral column space of $\A$, then the infimum is attained by some $\zbar\in\extspace$ with $\A\zbar=\xbar$ (and otherwise is vacuous). \end{proposition} \begin{proof} For $\xbar\in\extspac{m}$, let $G(\xbar)$ denote the right-hand side of \eqref{eqn:pr:inf-lin-ext:1}, and let $h=\A f$. We aim to prove $\hext(\xbar)=G(\xbar)$. We first show $G(\xbar)\geq \hext(\xbar)$. If the set on the right-hand side of \eqref{eqn:pr:inf-lin-ext:1} is empty, then $G(\xbar)=+\infty$ and the claim holds. 
Otherwise, consider any $\zbar\in\extspace$ such that $\A\zbar=\xbar$, and let $\seq{\zz_t}$ be a sequence in $\Rn$ converging to $\zbar$ with $f(\zz_t)\rightarrow\fext(\zbar)$ (which exists by Proposition~\ref{pr:d1}). Then \[ \fext(\zbar) = \lim f(\zz_t) \geq \liminf h(\A\zz_t) \geq \hext(\xbar), \] where the first inequality is because $f(\zz_t)\geq h(\A\zz_t)$, by $h$'s definition, and the second inequality is because $\A\zz_t\rightarrow\A\zbar=\xbar$ by continuity (\Cref{thm:mat-mult-def}). Taking infimum over all $\zbar\in\extspace$ such that $\A\zbar=\xbar$ then yields $G(\xbar)\geq \hext(\xbar)$. It remains to show $\hext(\xbar)\geq G(\xbar)$, while also proving the infimum in \eqref{eqn:pr:inf-lin-ext:1} is attained under the stated condition. If $\hext(\xbar)=+\infty$, then this inequality is immediate. In that case, since $G(\xbar)\geq\hext(\xbar)$ (as proved above), we must have $G(\xbar)=+\infty$, meaning that the value of the infimum in \eqref{eqn:pr:inf-lin-ext:1} is $+\infty$, and any $\zbar\in\extspace$ such that $\A\zbar=\xbar$ attains the infimum. Such $\zbar$ exists precisely when $\xbar$ is in $\A$'s astral column space. Henceforth we assume that $\hext(\xbar)<+\infty$. Let $\seq{\xx_t}$ be a sequence in $\Rm$ converging to $\xbar$ with $h(\xx_t)\rightarrow\hext(\xbar)$. Since $\hext(\xbar)<+\infty$, we can have $h(\xx_t)=+\infty$ for at most finitely many sequence elements; by discarding these, we can assume without loss of generality that $h(\xx_t)<+\infty$ for all $t$. For each $t$, let $\beta_t\in\R$ be chosen so that $\beta_t>h(\xx_t)$, and so that $\beta_t\rightarrow\hext(\xbar)$. For instance, we can choose \[ \beta_t = \begin{cases} h(\xx_t) + 1/t & \text{if $h(\xx_t)\in\R$,} \\ -t & \text{if $h(\xx_t)=-\infty$,} \end{cases} \] which has both these properties, since $h(\xx_t)\rightarrow\hext(\xbar)$. For each $t$, by $h$'s definition and since $h(\xx_t)<+\infty$, there exists $\zz_t\in\Rn$ with $\A\zz_t=\xx_t$ and $f(\zz_t)<\beta_t$. 
By sequential compactness, the resulting sequence $\seq{\zz_t}$ must have a convergent subsequence. By discarding all other elements, we can assume the entire sequence converges to some point $\zbar\in\extspace$. Note that $\xbar=\A\zbar$ since $\A\zz_t\rightarrow\A\zbar$ (by \Cref{thm:mat-mult-def}), but on the other hand, $\A\zz_t=\xx_t\rightarrow\xbar$. Thus, \[ G(\xbar) \geq \hext(\xbar) = \liminf \beta_t \geq \liminf f(\zz_t) \geq \fext(\zbar) \geq G(\xbar). \] The first inequality was proved above. The second and third inequalities are because $\beta_t>f(\zz_t)$ and because $\zz_t\rightarrow\zbar$. The last inequality is by $G$'s definition, since $\A\zbar=\xbar$. Therefore, $\Afext(\xbar)=G(\xbar)=\fext(\zbar)$, and $\zbar$ attains the infimum in \eqref{eqn:pr:inf-lin-ext:1}. \end{proof} The next proposition provides a different expression for the extension of $\A f$, along the lines of the original definition of $\fext$ given in \eqref{eq:e:7}: \begin{proposition} \label{pr:inf-lin-ext-seq} Let $f:\Rn\rightarrow\Rext$, and $\A\in\Rmn$. Let $\xbar\in\extspac{m}$. Then \begin{equation} \label{eqn:pr:inf-lin-ext-seq:1} \Afext(\xbar) = \InfseqLiminf{\seq{\zz_t}}{\Rn}{\A\zz_t\rightarrow \xbar}{f(\zz_t)}. \end{equation} Furthermore, if $\xbar$ is in the astral column space of $\A$, then there exists a sequence $\seq{\zz_t}$ in $\Rn$ with $\A\zz_t\rightarrow\xbar$ and $f(\zz_t)\rightarrow\Afext(\xbar)$. \end{proposition} \begin{proof} Let $R$ denote the value of the expression on the right-hand side of \eqref{eqn:pr:inf-lin-ext-seq:1}, which we aim to show is equal to $\Afext(\xbar)$. We first show $R\geq \Afext(\xbar)$. If the infimum in \eqref{eqn:pr:inf-lin-ext-seq:1} is over an empty set then $R=+\infty$ and the claim holds. Otherwise, consider any sequence $\seq{\zz_t}$ in $\Rn$ such that $\A\zz_t\to\xbar$. 
Then \[ \liminf f(\zz_t) \ge \liminf{(\A f)(\A\zz_t)} \ge \Afext(\xbar), \] where the first inequality follows from the definition of $\A f$ and the second by definition of extensions since $\A\zz_t\to\xbar$. Taking infimum over all sequences $\seq{\zz_t}$ in $\Rn$ such that $\A\zz_t\to\xbar$ then yields $R\ge\Afext(\xbar)$. For the reverse inequality, note first that if $\xbar$ is not in $\A$'s astral column space, then $\Afext(\xbar)=+\infty$ by Proposition~\ref{pr:inf-lin-ext}, implying $R=+\infty$ as well, so $\Afext(\xbar)=R$. We therefore assume henceforth that $\xbar$ is in $\A$'s astral column space. By Proposition~\ref{pr:inf-lin-ext}, there exists $\zbar\in\extspace$ with $\A\zbar=\xbar$ that attains the infimum in \eqref{eqn:pr:inf-lin-ext:1} so that $\Afext(\xbar)=\fext(\zbar)$. By Proposition~\ref{pr:d1}, there exists a sequence $\seq{\zz_t}$ in $\Rn$ with $\zz_t\rightarrow\zbar$ and $f(\zz_t)\rightarrow\fext(\zbar)$. Also, $\A\zz_t\rightarrow\A\zbar=\xbar$ (by \Cref{thm:mat-mult-def}), so \[ \Afext(\xbar) = \fext(\zbar) = \lim f(\zz_t) \geq R \geq \Afext(\xbar). \] The first inequality is by $R$'s definition, and the second was shown above. Thus, $\Afext(\xbar)=R$ and $\seq{\zz_t}$ has the stated properties. \end{proof} The \emph{infimal convolution} of two proper functions $f:\Rn\rightarrow\Rext$ and $g:\Rn\rightarrow\Rext$ is that function $h:\Rn\rightarrow\Rext$ defined by \[ h(\zz) = \inf\BigBraces{f(\xx) + g(\zz-\xx) :\: \xx\in\Rn }. \] As an application of the foregoing results, we can characterize the extension of such a function. To compute $\hext$, define the function $s:\R^{2n}\to\eR$ by $s(\rpairf{\xx}{\yy})=f(\xx)+g(\yy)$ for $\xx,\yy\in\Rn$, and let $\A=[\Idnn,\Idnn]$ be the $n\times 2n$ matrix such that $\A\rpair{\xx}{\yy}=\xx+\yy$ for $\xx,\yy\in\Rn$. 
We can then rewrite $h$ as \begin{align*} h(\zz) &= \inf\BigBraces{s(\rpairf{\xx}{\yy}) :\: \xx,\yy\in\Rn,\,\A\rpair{\xx}{\yy}=\zz } \\ &= \inf\BigBraces{s(\ww) :\: \ww\in\R^{2n},\,\A\ww=\zz }. \intertext{By Propositions~\ref{pr:inf-lin-ext} and~\ref{pr:inf-lin-ext-seq}, it then follows that, for $\zbar\in\extspace$,} \hext(\zbar) &= \inf\BigBraces{\sext(\wbar) :\: \wbar\in\extspac{2n},\,\A\wbar=\zbar } \\[4pt] &= \InfseqLiminf{\seq{\xx_t},\seq{\yy_t}}{\Rn}{\xx_t+\yy_t\rightarrow \zbar} {\bigBracks{f(\xx_t) + g(\yy_t)}}. \end{align*} \subsection{Maxima and suprema of functions} We next consider the extension of the pointwise maximum or supremum of a collection of functions: \[ h(\xx) = \sup_{i\in\indset} f_i(\xx) \] for $\xx\in\Rn$, where $f_i:\Rn\rightarrow\Rext$ for all $i$ in some index set $\indset$. If each $f_i$ is convex, then $h$ is as well (\Cref{roc:thm5.5}). In this case, we expect $h$'s extension to be \begin{equation} \label{eq:hext-sup-fexti} \hext(\xbar) = \sup_{i\in\indset} \efsub{i}(\xbar) \end{equation} for $\xbar\in\extspace$. Below, we establish sufficient conditions for when this holds. First note that this is not always the case. For instance, for $f$ and $g$ as defined in \Cref{ex:pos-neg} (or as in \Cref{ex:KLsets:extsum-not-sum-exts}), we can set $h=\max\set{f,g}=f+g$, and obtain $\hext\ne\ef+\eg=\max\set{\ef,\eg}$. Nevertheless, we can show that $\hext(\xbar) \geq \sup_{i\in\indset} \efsub{i}(\xbar)$ for all $\xbar\in\eRn$, and that the equality holds, under lower semicontinuity assumptions, for $\xbar=\xx\in\Rn$. \begin{proposition} \label{pr:ext-sup-of-fcns-bnd} Let $f_i:\Rn\rightarrow\Rext$ for $i\in\indset$, and let $h(\xx) = \sup_{i\in\indset} f_i(\xx)$ for $\xx\in\Rn$. Let $\xbar\in\extspace$. Then \begin{letter-compact} \item \label{pr:ext-sup:a} $\hext(\xbar) \geq \sup_{i\in\indset} \efsub{i}(\xbar)$. 
\item \label{pr:ext-sup:b} If $\xbar=\xx\in\Rn$ and each $f_i$ is lower semicontinuous at $\xx$, then $h$ is as well, and $\hext(\xx)=\sup_{i\in\indset}\efsub{i}(\xx)$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:ext-sup:a}):} For all $i\in\indset$, $h\geq f_i$, implying $\hext\geq\efsub{i}$ by \Cref{pr:h:1}(\ref{pr:h:1:geq}), proving the claim. \pfpart{Part~(\ref{pr:ext-sup:b}):} Assume that $\xbar=\xx\in\Rn$ and let $\seq{\xx_t}$ be any sequence in $\Rn$ that converges to $\xx$. Then by part~(\ref{pr:ext-sup:a}) and the fact that each $f_i$ is lower semicontinuous at $\xx$, \begin{align} \label{eq:ext-sup:b} \liminf h(\xx_t) \ge \eh(\xx) \ge \sup_{i\in\indset} \efsub{i}(\xx) = \sup_{i\in\indset} f_i(\xx) = h(\xx), \end{align} so $h$ is lower semicontinuous at $\xx$. Since $\eh(\xx)\le h(\xx)$ (by \Cref{pr:h:1}\ref{pr:h:1a}), and since \eqref{eq:ext-sup:b} shows $\eh(\xx)\ge h(\xx)$, we obtain $\eh(\xx)=h(\xx)$. \qedhere \end{proof-parts} \end{proof} When $\indset$ is finite, we can prove sufficient conditions under which the equality holds for general $\xbar\in\eRn$. These will all follow from the next lemma: \begin{lemma} \label{lem:ext-sup-fcns-seq} Let $f_i:\Rn\rightarrow\Rext$ for $i=1,\dotsc,m$, and let \[ h(\xx)=\max\Braces{f_1(\xx),\dotsc,f_m(\xx)} \] for $\xx\in\Rn$. Let $\xbar\in\extspace$, and let $\seq{\xx_t}$ be a sequence in $\Rn$ that converges to $\xbar$. Suppose $f_i(\xx_t)\rightarrow\efsub{i}(\xbar)$ for all $i=1,\dotsc,m$. Then $h(\xx_t)\rightarrow\hext(\xbar)$ and \begin{equation} \label{eqn:lem:ext-sup-fcns-seq:1} \hext(\xbar) = \max\Braces{\efsub{1}(\xbar),\dotsc,\efsub{m}(\xbar)}. \end{equation} \end{lemma} \begin{proof} Let $\indset=\set{1,\dots,m}$ be the index set for $i$.
Then \begin{align} \notag \limsup h(\xx_t) &= \limsup \BigBracks{\max_{i\in\indset} f_i(\xx_t)} \\ \label{eq:ext-sup:lemma} &= \max_{i\in\indset} \BigBracks{\limsup f_i(\xx_t)} = \max_{i\in\indset}\efsub{i}(\xbar), \end{align} where the second equality is by repeated application of \Cref{prop:lim:eR}(\ref{i:limsup:eR:max}), and the last equality is by the assumption in the lemma. Therefore, \[ \hext(\xbar) \leq \liminf h(\xx_t) \leq \limsup h(\xx_t) = \max_{i\in\indset}\efsub{i}(\xbar) \leq \hext(\xbar). \] The first inequality is by the definition of extensions since $\xx_t\rightarrow\xbar$. The equality is by \eqref{eq:ext-sup:lemma}. The last inequality follows from \Cref{pr:ext-sup-of-fcns-bnd}(\ref{pr:ext-sup:a}). This proves both of the lemma's claims. \end{proof} With this lemma, we can relate continuity to the form of the extension of the maximum of a pair of functions: \begin{proposition} \label{pr:ext-sup-fcns-cont} Let $f:\Rn\rightarrow\Rext$ and $g:\Rn\rightarrow\Rext$, and let $h(\xx)=\max\{f(\xx),g(\xx)\}$ for $\xx\in\Rn$. Let $\xbar\in\eRn$. Then \begin{letter-compact} \item \label{pr:ext-sup-fcns-cont:a} If either $f$ or $g$ is extensibly continuous at $\xbar$, then $\hext(\xbar)=\max\{\fext(\xbar),\gext(\xbar)\}$. \item \label{pr:ext-sup-fcns-cont:b} If both $f$ and $g$ are extensibly continuous at $\xbar$, then $h$ is as well. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:ext-sup-fcns-cont:a}):} By symmetry, we may assume that $g$ is extensibly continuous at $\xbar$. By Proposition~\ref{pr:d1}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$ and for which $f(\xx_t)\rightarrow\fext(\xbar)$. Since $g$ is extensibly continuous, it also follows that $g(\xx_t)\rightarrow\gext(\xbar)$. Therefore, the claim follows by Lemma~\ref{lem:ext-sup-fcns-seq}. \pfpart{Part~(\ref{pr:ext-sup-fcns-cont:b}):} Suppose $f$ and $g$ are extensibly continuous at $\xbar$.
Let $\seq{\xx_t}$ be any sequence in $\Rn$ converging to $\xbar$. Then $f(\xx_t)\rightarrow\fext(\xbar)$ and $g(\xx_t)\rightarrow\gext(\xbar)$, since the two functions are extensibly continuous. Therefore, $h(\xx_t)\rightarrow\hext(\xbar)$ by Lemma~\ref{lem:ext-sup-fcns-seq}. Since this holds for every such sequence, the claim follows. \qedhere \end{proof-parts} \end{proof} We next give a condition based on convexity that holds when the relative interiors of the functions' effective domains have a point in common: \begin{theorem} \label{thm:ext-finite-max-convex} Let $f_i:\Rn\rightarrow\Rext$ be convex, for $i=1,\dotsc,m$, and let \[ h(\xx)=\max\Braces{f_1(\xx),\dotsc,f_m(\xx)} \] for $\xx\in\Rn$. Let $\xbar\in\extspace$, and assume $\bigcap_{i=1}^m \ri(\dom{f_i}) \neq \emptyset$. Then \[ \hext(\xbar) = \max\Braces{\efsub{1}(\xbar),\dotsc,\efsub{m}(\xbar)}. \] \end{theorem} \begin{proof} By \Cref{thm:seq-for-multi-ext}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$ and with $f_i(\xx_t)\rightarrow\efsub{i}(\xbar)$ for $i=1,\dotsc,m$. The claim then follows from \Cref{lem:ext-sup-fcns-seq}. \end{proof} \Cref{pr:ext-sup-fcns-cont} and \Cref{thm:ext-finite-max-convex} cannot be generalized to work with infinite collections of functions without making further assumptions. In fact, any example of a closed proper convex function $h:\Rn\to\eR$ such that $\eh\ne\hdub$ can be turned into a counterexample, because $h(\xx)=\sup_{\uu\in\dom h^*}[-h^*(\uu)+\xx\inprod\uu]$ and $\hdub(\xbar)=\sup_{\uu\in\dom h^*}[-h^*(\uu)+\xbar\inprod\uu]$ (by \Cref{thm:fext-dub-sum}\ref{thm:fext-dub-sum:b}). Here, $h$ is expressed as a supremum over affine functions $f_\uu(\xx)=-h^*(\uu)+\xx\inprod\uu$ for $\xx\in\Rn$, with $\uu\in\dom h^*$. We demonstrate this with an example based on the restricted linear function from \Cref{ex:negx1-else-inf}, whose extension does not coincide with its astral biconjugate, as we saw in \Cref{ex:biconj:notext}.
\begin{example} Let $U=\set{\uu\in\R^2:\:u_1=-1,\,u_2\le 0}$ and consider the collection of linear functions $f_\uu(\xx)=\xx\inprod\uu$ for $\xx\in\R^2$, with $\uu\in U$. Define $h:\R^2\to\eR$, for $\xx\in\R^2$, as \[ h(\xx)=\sup_{\uu\in U} f_\uu(\xx) =\sup_{u_2\in\R_{\le 0}} [-x_1 + u_2 x_2] = \begin{cases} -x_1 & \text{if $x_2\geq 0$,} \\ +\infty & \text{otherwise,} \end{cases} \] so $h$ is the restricted linear function from \Cref{ex:negx1-else-inf,ex:biconj:notext}. Let $\ebar=\limray{\ee_1}$ be the astron associated with the first standard basis vector. As we argued in \Cref{ex:biconj:notext}, for all $\xx\in\R^2$, \[ \sup_{\uu\in U} \efsub{\uu}(\ebar\plusl\xx) = \sup_{\uu\in\R^2:\:u_1=-1,\,u_2\le 0} (\ebar\plusl\xx)\cdot\uu =-\infty, \] but \[ \eh(\ebar\plusl\xx) = \begin{cases} -\infty & \text{if $x_2\geq 0$,} \\ +\infty & \text{otherwise.} \end{cases} \] Thus, $\eh(\ebar\plusl\xx)\ne\sup_{\uu\in U} \fextsub{\uu}(\ebar\plusl\xx)$ if $x_2<0$. \end{example} As a simple corollary of \Cref{thm:ext-finite-max-convex}, we can compute the extension of a function obtained by restricting a function to a convex subregion: \begin{corollary} \label{cor:ext-restricted-fcn} Let $f:\Rn\rightarrow\Rext$ be convex, and let $S\subseteq\Rn$ be convex. Assume $\ri(\dom{f})\cap (\ri S)\neq\emptyset$. Let $h:\Rn\rightarrow\Rext$ be defined, for $\xx\in\Rn$, by \begin{align*} h(\xx) &= \begin{cases} f(\xx) & \text{if $\xx\in S$,} \\ +\infty & \text{otherwise} \end{cases} \\ &= \indf{S}(\xx)\plusu f(\xx). \end{align*} Let $\xbar\in\extspace$. Then \begin{align*} \hext(\xbar) &= \begin{cases} \fext(\xbar) & \text{if $\xbar\in \Sbar$,} \\ +\infty & \text{otherwise} \end{cases} \\ &= \indfa{\Sbar}(\xbar)\plusu\fext(\xbar). \end{align*} \end{corollary} \begin{proof} For $\xx\in\Rn$, let \[ g(\xx) = \begin{cases} -\infty & \text{if $\xx\in S$,} \\ +\infty & \text{otherwise,} \end{cases} \] which is convex. To compute $g$'s extension, let $g'=\expex\circ g$.
Then $g'=\inds$ so $\gpext=\indfa{\Sbar}$ by Proposition~\ref{pr:inds-ext}. Further, by Proposition~\ref{pr:j:2}(\ref{pr:j:2c}), $\gpext=\expex\circ\gext$. Hence, \begin{equation} \label{eq:cor:ext-restricted-fcn:1} \gext(\xbar) = \begin{cases} -\infty & \text{if $\xbar\in\Sbar$,} \\ +\infty & \text{otherwise.} \end{cases} \end{equation} Note that $h(\xx)=\max\{f(\xx),g(\xx)\}$ for $\xx\in\Rn$. Also, $\ri(\dom{f})\cap\ri(\dom{g})\neq\emptyset$, by assumption, since $\dom{g}=S$. Therefore, we can apply Theorem~\ref{thm:ext-finite-max-convex}, yielding $\hext(\xbar)=\max\{\fext(\xbar),\gext(\xbar)\}$. Combined with \eqref{eq:cor:ext-restricted-fcn:1}, this proves the claim. \end{proof} \subsection{Composition with a nondecreasing function} \label{sec:calc-ext-comp-inc-fcn} Next, we consider the composition $G(f(\xx))$ of a function $f:\Rn\rightarrow\Rext$ with a nondecreasing function $G:\eR\rightarrow\eR$. In \Cref{sec:exp-comp}, we saw that when $G$ is an extension of a suitable convex function (say $G=\expex$) then $\Gofextnp=G\circ\ef$. Here we generalize that result to a wider range of functions $G$. First, we give two examples showing that it is not enough to assume that $G$ is lower semicontinuous: \begin{example} \label{ex:Gof:pos-neg} Let $G=\indfa{\le 0}$ be the indicator of the set $[-\infty,0]$ and let \[ f(x)= \begin{cases} x &\text{if $x>0$,} \\ +\infty &\text{otherwise,} \end{cases} \] for $x\in\R$. Let $h=G\circ f$. Then $h\equiv+\infty$, so also $\eh\equiv+\infty$. However, \[ \ef(\ex)= \begin{cases} \ex &\text{if $\ex\ge0$,} \\ +\infty &\text{otherwise,} \end{cases} \] for $\ex\in\eR$, so $G\circ\ef=\indfa{\set{0}}$. Thus, $\eh\ne G\circ\ef$. \end{example} Above, the issue is the lack of lower semicontinuity of $f$ at $0$. Next we show that even when $f$ is closed, convex and proper, and $G$ is an extension of a closed proper convex function, we might have $\eh\ne G\circ\ef$. 
\begin{example} \label{ex:Gof:sideways} For $\xx\in\R^3$, let \[ f(\xx) = f(x_1,x_2,x_3)= \begin{cases} x_3 & \text{if $\xx\in K$,} \\ +\infty & \text{otherwise,} \end{cases} \] where $K$ is the sideways cone from \Cref{ex:KLsets:extsum-not-sum-exts}, and let $G=\indfa{\le 0}$ as above, which is the extension of the corresponding standard indicator function, $\indf{\leq 0}$. The composition $h=G\circ f$ is then the indicator $\indf{K\cap L}$, where $L=\set{\xx\in\R^3:x_3=0}$ is the plane from \Cref{ex:KLsets:extsum-not-sum-exts}. As in Example~\ref{ex:KLsets:extsum-not-sum-exts}, let $\xbar=\limray{\ee_2}\plusl\ee_1$, and let $\xx_t=\trans{[1,\,t,\,1/(2t)]}$, which converges to $\xbar$. Then $f(\xx_t)=1/(2t)\rightarrow 0$. Since $\inf f\geq 0$, it follows that $\fext(\xbar)=0$. Thus, $G\bigParens{\fext(\xbar)}=G(0)=0$. On the other hand, as was argued earlier, $\hext(\xbar)=+\infty$. \end{example} Although lower semicontinuity of $G$ does not suffice to ensure that $\Gofextnp=G\circ\ef$, it does suffice to ensure an inequality, and the continuity of $G$ suffices to ensure equality: \begin{proposition} \label{pr:Gf-cont} Let $f:\Rn\rightarrow\eR$, let $G:\eR\rightarrow\eR$ be nondecreasing, and let $h=G\circ f$. Let $\xbar\in\extspace$. Then the following hold: \begin{letter-compact} \item \label{pr:Gf-cont:a} If $G$ is lower semicontinuous at $\ef(\xbar)$ then $\eh(\xbar)\geq G\bigParens{\ef(\xbar)}$. \item \label{pr:Gf-cont:b} If $G$ is continuous at $\ef(\xbar)$ then $\eh(\xbar)= G\bigParens{\ef(\xbar)}$. If additionally $f$ is extensibly continuous at $\xbar$ then $h$ is extensibly continuous at $\xbar$ as well. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part (\ref{pr:Gf-cont:a}):} Assume that $G$ is lower semicontinuous at $\ef(\xbar)$, and let $\seq{\xx_t}$ be a sequence in $\Rn$ such that $\xx_t\to\xbar$ and $h(\xx_t)\to\eh(\xbar)$ (whose existence follows by \Cref{pr:d1}). 
Then \begin{equation} \label{eq:Gf-cont:a} \eh(\xbar) = \lim h(\xx_t) = \lim G\bigParens{f(\xx_t)} \ge \liminf G\bigParens{\ef(\xx_t)}, \end{equation} where the inequality is by monotonicity of $G$ and because $f\ge\ef$ (\Cref{pr:h:1}\ref{pr:h:1a}). Let $\seq{\xx'_t}$ be a subsequence of $\seq{\xx_t}$ such that $\lim G\bigParens{\ef(\xx'_t)}=\liminf G\bigParens{\ef(\xx_t)}$. Furthermore, by sequential compactness of $\eR$, we can pick a subsequence of $\seq{\xx'_t}$, which we denote $\seq{\xx''_t}$, such that the sequence $\bigSeq{\ef(\xx''_t)}$ converges in $\eR$. Thus, \[ \liminf G\bigParens{\ef(\xx_t)} = \lim G\bigParens{\ef(\xx'_t)} = \lim G\bigParens{\ef(\xx''_t)} \ge G\bigParens{\lim \ef(\xx''_t)} \ge G\bigParens{\ef(\xbar)}. \] The first equality is by our choice of $\seq{\xx'_t}$. The second is because $\seq{\xx''_t}$ is a subsequence of $\seq{\xx'_t}$. The first inequality is by lower semicontinuity of~$G$. And the final inequality is because $G$ is nondecreasing and $\lim \ef(\xx''_t)\ge\ef(\xbar)$, where the latter follows because $\ef$ is lower semicontinuous (\Cref{prop:ext:F}) and $\xx''_t\to\xbar$. Combining with \eqref{eq:Gf-cont:a}, we obtain $\eh(\xbar)\ge G\bigParens{\ef(\xbar)}$. \pfpart{Part (\ref{pr:Gf-cont:b}):} Assume that $G$ is continuous (so also lower semicontinuous) at $\ef(\xbar)$, and let $\seq{\xx_t}$ be a sequence in $\Rn$ such that $\xx_t\to\xbar$ and $f(\xx_t)\to\ef(\xbar)$. Then \[ \eh(\xbar) \le \liminf h(\xx_t) = \liminf G\bigParens{f(\xx_t)}. \] Let $\seq{\xx'_t}$ be a subsequence of $\seq{\xx_t}$ such that $\lim G\bigParens{f(\xx'_t)}=\liminf G\bigParens{f(\xx_t)}$. Then \[ \liminf G\bigParens{f(\xx_t)} = \lim G\bigParens{f(\xx'_t)} = G\bigParens{\ef(\xbar)}, \] where the second equality follows by continuity of $G$ and because $f(\xx'_t)\to\ef(\xbar)$. Thus, we have shown $\eh(\xbar)\le G\bigParens{\ef(\xbar)}$. Combining with part~(\ref{pr:Gf-cont:a}) yields $\eh(\xbar)= G\bigParens{\ef(\xbar)}$. 
For the second part of the claim, suppose $f$ is extensibly continuous at $\xbar$. Let $\seq{\xx_t}$ be any sequence in $\Rn$ such that $\xx_t\to\xbar$. Then \[ \eh(\xbar) = G\bigParens{\ef(\xbar)} = G\bigParens{\lim f(\xx_t)} = \lim G\bigParens{f(\xx_t)} = \lim h(\xx_t). \] The first equality was proved above. The second is by extensible continuity of $f$. The third is by continuity of $G$. \qedhere \end{proof-parts} \end{proof} Next we focus on composition of the form $h=\eg\circ f$ where $f$ is convex and $\eg$ is an extension of a nondecreasing convex function $g$. We will show that $\eh=\eg\circ\ef$ without assuming that $\eg$ is continuous, as long as $f$ attains a value in the interior of $\dom g$ (which was not the case in \Cref{ex:Gof:pos-neg,ex:Gof:sideways}). \begin{theorem} \label{thm:Gf-conv} Let $f:\Rn\rightarrow\Rext$ be convex, let $g:\R\rightarrow\Rext$ be convex and nondecreasing, and assume there exists a point $\xing\in\dom{f}$ such that $f(\xing)<\sup(\dom g)$. Let $h=\eg\circ f$. Then $\eh=\eg\circ\ef$. \end{theorem} \begin{proof} Let $\xbar\in\eRn$. We will show that $\eh(\xbar)=\eg(\ef(\xbar))$. This holds if $\eg$ is continuous at~$\ef(\xbar)$ (by \Cref{pr:Gf-cont}\ref{pr:Gf-cont:b}), so we assume henceforth that $\eg$ is \emph{not} continuous at $\ef(\xbar)$. By \Cref{pr:conv-inc:prop}(\ref{pr:conv-inc:infsup},\ref{pr:conv-inc:discont}), this can only happen when $g$ and $\eg$ are both discontinuous at a single point $z=\ef(\xbar)\in\R$. By \Cref{pr:conv-inc:prop}(\ref{pr:conv-inc:discont}), $\dom g$ is then equal to either $(-\infty,z)$ or $(-\infty,z]$; in either case, we have $\sup(\dom g)=z$. In this case, $f$ must be proper. Otherwise we would have $(\lsc f)(\xx)\in\set{-\infty,+\infty}$ for all $\xx$ (\Cref{pr:improper-vals}\ref{pr:improper-vals:cor7.2.1}), and so also $\ef(\xbar)=\lscfext(\xbar)\in\set{-\infty,+\infty}$, so $\eg$ would be continuous at~$\ef(\xbar)$. Since $f$ is proper, we have $f(\xing)\in(-\infty,z)$. 
\begin{claimpx} There exists a sequence $\seq{\xx_t}$ in $\Rn$ that converges to $\xbar$ and with $f(\xx_t)\in (-\infty,z)$ for all $t$. \end{claimpx} \begin{proofx} By Proposition~\ref{pr:d1}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$ and with $f(\xx_t)\rightarrow\fext(\xbar)=z$. Since $z\in\R$, there can be at most finitely many values of $t$ for which $f(\xx_t)$ is infinite; discarding these, we can assume $f(\xx_t)\in\R$ for all $t$. If $f(\xx_t)<z$ for infinitely many values of $t$, then discarding all other elements results in a sequence with the properties stated in the claim. Therefore, we focus henceforth on the alternative case in which $f(\xx_t)<z$ for only finitely many sequence elements. Further, by discarding these, we can assume henceforth that $f(\xx_t) \geq z$ for all $t$. For each $t$, let \[ \lambda_t = \frac{\bigParens{1-\frac{1}{t}} \bigParens{z - f(\xing)}} {f(\xx_t) - f(\xing)}. \] Then $0\leq \lambda_t < 1$ since $f(\xing)<z\leq f(\xx_t)$. Also, $\lambda_t\rightarrow 1$ since $f(\xx_t)\rightarrow z$. Let $ \yy_t = \lambda_t \xx_t + (1-\lambda_t) \xing $. Then \[ f(\yy_t) \leq \lambda_t f(\xx_t) + (1-\lambda_t) f(\xing) = \bigParens{1-\sfrac{1}{t}} z + \sfrac{1}{t} f(\xing) < z. \] The first inequality is because $f$ is convex. The equality is by our choice of $\lambda_t$. The last inequality is because $f(\xing)<z$. Further, since $\lambda_t\xx_t\to\xbar$ (by \Cref{pr:scalar-prod-props}\ref{pr:scalar-prod-props:e}) and $(1-\lambda_t) \xing\to\zero$, we must have $\yy_t\rightarrow\xbar$ (by \Cref{pr:i:7}\ref{pr:i:7g}). Finally, note that $z=\fext(\xbar)\leq \liminf f(\yy_t)$, so $f(\yy_t)>-\infty$ for all but finitely many values of $t$, which can be discarded. Thus, the resulting sequence $\seq{\yy_t}$ satisfies all the claimed properties. \end{proofx} Let $\seq{\xx_t}$ be a sequence with the properties stated in the preceding claim. 
Then \begin{align*} \hext(\xbar) \leq \liminf h(\xx_t) &= \liminf \gext(f(\xx_t)) \\ &= \liminf g(f(\xx_t)) \leq \sup_{y<z} g(y) = \gext(z) = \gext(\fext(\xbar)) \leq \hext(\xbar). \end{align*} The first inequality is by definition of extensions. The second equality is because $\eg$ agrees with $g$ on $(-\infty,z)$ (\Cref{pr:conv-inc:prop}\ref{pr:conv-inc:discont}), and since $f(\xx_t)\in(-\infty,z)$. The second inequality is also because of this latter fact. The third equality is by \Cref{pr:conv-inc:prop}(\ref{pr:conv-inc:nondec}). And the last inequality is by \Cref{pr:Gf-cont}(\ref{pr:Gf-cont:b}). This completes the proof. \end{proof} As an application, we can use \Cref{thm:Gf-conv} to relate the sublevel sets of a function and its extension: \begin{theorem} \label{thm:closure-of-sublev-sets} Let $f:\Rn\rightarrow\Rext$ be convex, let $\beta\in\R$ be such that $\beta>\inf f$, and let \begin{align*} L &= \set{\xx\in\Rn :\: f(\xx) \leq \beta}, \\ M &= \set{\xx\in\Rn :\: f(\xx) < \beta}. \end{align*} Then \begin{equation} \label{eq:closure-of-sublev-sets} \Lbar = \Mbar = \set{\xbar\in\extspace :\: \fext(\xbar) \leq \beta}. \end{equation} \end{theorem} \begin{proof} Let $g=\indf{\le\beta}$ be the indicator of the set $(-\infty,\beta]$, so $\eg=\indfa{\le\beta}$ is the indicator of the set $[-\infty,\beta]$ (by \Cref{pr:inds-ext}). Let $R$ denote the set on the right-hand side of \eqref{eq:closure-of-sublev-sets} and let $h=\eg\circ f$. Then $h$ is exactly $\indf{L}$, the indicator for~$L$, and $\eh=\indfa{\Lbar}$ (by \Cref{pr:inds-ext}). Since $\beta>\inf f$, there exists a point $\xing$ with $f(\xing)<\beta=\sup(\dom g)$. Therefore, \[ \indfa{\Lbar} = \eh=\eg\circ\ef=\indfa{\le\beta}\circ\ef=\indfa{R}, \] where the second equality is by \Cref{thm:Gf-conv}. Thus, $\Lbar=R$, proving the claim for $\Lbar$. 
This also proves the claim for $\Mbar$ since \[ \Mbar = \clbar{\cl M} = \clbar{\cl L} = \Lbar, \] where the first and third equalities are by Proposition~\ref{pr:closed-set-facts}(\ref{pr:closed-set-facts:aa}), and the second equality is because $\cl M = \cl L$ (\Cref{roc:thm7.6-mod}). \end{proof} As a corollary, we can compute the closures of closed and open halfspaces. \begin{corollary} \label{cor:halfspace-closure} Let $\uu\in\Rn\setminus\{\zero\}$, let $\beta\in\R$, and let \begin{align*} L &= \set{\xx\in\Rn :\: \xx\cdot\uu \leq \beta}, \\ M &= \set{\xx\in\Rn :\: \xx\cdot\uu < \beta}. \end{align*} Then \[ \Lbar = \Mbar = \set{\xbar\in\extspace :\: \xbar\cdot\uu \leq \beta}. \] \end{corollary} \begin{proof} Let $f(\xx)=\xx\cdot\uu$, for $\xx\in\Rn$, whose extension is $\fext(\xbar)=\xbar\cdot\uu$ (Example~\ref{ex:ext-affine}). In addition, \[ \inf f \leq \lim_{\lambda\rightarrow-\infty} f(\lambda\uu) = -\infty < \beta. \] Therefore, we can apply Theorem~\ref{thm:closure-of-sublev-sets} to $f$, yielding the claim. \end{proof} \part{Convex Sets, Cones, and Functions} \label{part:convex-sets} \section{Convex sets} \label{sec:convexity} We next study how the notion of convexity can be extended to astral space. In $\Rn$, a set is convex if for every pair of points $\xx,\yy$ in the set, their convex combination $\lambda\xx+(1-\lambda)\yy$ is also in the set, for all $\lambda\in [0,1]$. Said differently, a set is convex if it includes the line segment connecting every pair of points in the set. As a natural first attempt at extending this notion to astral space, we might try to define line segments and convexity in astral space using some kind of convex combination $\zbar=\lambda\xbar+(1-\lambda)\ybar$ of two points $\xbar,\ybar\in\extspace$. We have not actually defined the ordinary sum of two astral points, as in this expression, nor will we define it. 
But if we could, then for any $\uu\in\Rn$, we would naturally want it to be the case that \[ \zbar\cdot\uu = \lambda \xbar\cdot\uu + (1-\lambda) \ybar\cdot\uu. \] Such an expression, however, would be problematic since $\xbar\inprod\uu$ and $\ybar\inprod\uu$ are not necessarily summable across all $\uu$ (see \Cref{prop:commute}), so that the right-hand side might be undefined. To avoid this difficulty, we take a different approach below. Rather than generalizing the notion of a convex combination, we will instead directly generalize the notion of a line segment and define convexity using such generalized line segments. This kind of convexity is called \emph{interval convexity} by \citet[Chapter 4]{vandeVel} and an \emph{inner approach} by \citet[Section 0.1a]{singer_book}. \subsection{Halfspaces, segments, outer hull, convexity} \label{sec:def-convexity} In this section, we develop a sequence of fundamental notions, eventually culminating in a definition of convexity for astral sets. We begin by extending the standard definitions of hyperplanes and halfspaces to astral space. A (standard) closed halfspace is the set of points $\xx\in\Rn$ for which $\xx\cdot\uu\leq \beta$, for some $\uu\in\Rn\setminus\{\zero\}$ and $\beta\in\R$. We can immediately extend this to astral space, defining an \emph{(astral) closed halfspace}, denoted $\chsua$, to be the set \begin{equation} \label{eq:chsua-defn} \chsua = \Braces{\xbar\in\extspace :\: \xbar\cdot\uu \leq \beta}. \end{equation} In the same way, \emph{(astral) open halfspaces} and \emph{(astral) hyperplanes} are sets of the form \begin{equation} \label{eqn:open-hfspace-defn} \Braces{\xbar\in\extspace :\: \xbar\cdot\uu<\beta} \end{equation} and \begin{equation} \label{eqn:hyperplane-defn} \Braces{\xbar\in\extspace :\: \xbar\cdot\uu=\beta}, \end{equation} respectively. As usual, these forms accommodate halfspaces defined by reverse inequalities, by substituting $-\uu$ for $\uu$ and $-\beta$ for $\beta$.
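Although no code accompanies the formal development, the membership tests behind these definitions can be sketched numerically. For an astral point of the simple rank-one form $\limray{\vv}\plusl\qq$, with $\vv\in\R^2\setminus\{\zero\}$ and $\qq\in\R^2$, one has $(\limray{\vv}\plusl\qq)\cdot\uu=+\infty$ if $\vv\cdot\uu>0$, $-\infty$ if $\vv\cdot\uu<0$, and $\qq\cdot\uu$ if $\uu$ is orthogonal to $\vv$. The Python sketch below (the function names are our own, and only this simple form of astral point is handled) applies this rule to test membership in the closed halfspace of \eqref{eq:chsua-defn}.

```python
import math

def astral_dot(v, q, u):
    """Evaluate (omega*v plusl q) . u for a rank-one astral point:
    +inf if v.u > 0, -inf if v.u < 0, and q.u when u is orthogonal to v."""
    vu = sum(vi * ui for vi, ui in zip(v, u))
    if vu > 0:
        return math.inf
    if vu < 0:
        return -math.inf
    return sum(qi * ui for qi, ui in zip(q, u))

def in_closed_halfspace(v, q, u, beta):
    """Membership of omega*v plusl q in {xbar : xbar . u <= beta}."""
    return astral_dot(v, q, u) <= beta

# The point omega*e2 plusl e1: infinite in direction e2, finite along e1.
print(astral_dot((0.0, 1.0), (1.0, 0.0), (1.0, 0.0)))  # 1.0
print(astral_dot((0.0, 1.0), (1.0, 0.0), (0.0, 1.0)))  # inf
print(in_closed_halfspace((0.0, 1.0), (1.0, 0.0), (1.0, 0.0), 1.0))  # True
```

Note that the same helper also decides membership in open halfspaces and hyperplanes by replacing the final comparison with a strict inequality or an equality.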
In fact, the subbase for the astral topology, as given in \eqref{eq:h:3a:sub-alt}, consists exactly of all astral open halfspaces. Likewise, the base elements for the astral topology (as in Eq.~\ref{eq:h:3a}) consist of all finite intersections of astral open halfspaces. In particular, this means that all open halfspaces are indeed open, and that all closed halfspaces are closed, being complements of open halfspaces. The intersection of an astral closed or open halfspace with $\Rn$ is just the corresponding standard closed or open halfspace in $\Rn$; for instance, \begin{equation} \label{eqn:chsua-cap-rn} \chsua\cap\Rn = \Braces{\xx\in\Rn : \xx\cdot\uu \leq \beta}. \end{equation} The closure (in $\extspace$) of a closed or open halfspace like this one in $\Rn$ is exactly the closed astral halfspace $\chsua$ (Corollary~\ref{cor:halfspace-closure}). Similar facts hold for hyperplanes. As expected, the interior of an astral closed halfspace is the corresponding open halfspace: \begin{proposition} \label{prop:intr:H} Let $\uu\in\Rn\wo\set{\zero}$, $\beta\in\R$, and $H = \braces{\xbar\in\extspace :\: \xbar\cdot\uu \leq \beta}$. Then \[ \intr H=\braces{\xbar\in\extspace :\: \xbar\cdot\uu < \beta}. \] \end{proposition} \begin{proof} Let $\Hcomp=\eRn\wo H$. Since $H$ is closed, $\Hcomp$ is open. Let \[ M=\Hcomp\cap\Rn=\braces{\xx\in\Rn :\: \xx\cdot\uu > \beta}. \] By \Cref{pr:closed-set-facts}(\ref{pr:closed-set-facts:b}) and \Cref{cor:halfspace-closure}, $\clHcomp=\Mbar=\braces{\xbar\in\extspace :\: \xbar\cdot\uu\ge\beta}$. Hence, by \Cref{pr:closure:intersect}(\ref{pr:closure:intersect:comp}), $\intr H=\extspace\setminus(\clHcomp)=\braces{\xbar\in\extspace :\: \xbar\cdot\uu < \beta}$. \end{proof} The starting point for standard convex analysis is the line segment joining two points $\xx$ and $\yy$. As discussed above, in $\Rn$, this is the set of all convex combinations of the two points, a perspective that does not immediately generalize to astral space. 
However, there is another way of thinking about the line segment joining $\xx$ and $\yy$, namely, as the intersection of all halfspaces that include both of the endpoints. This interpretation generalizes directly to astral space, leading, in a moment, to a definition of the segment joining two astral points as the intersection of all halfspaces that include both of the points. To state this more formally, we first give a more general definition for the intersection of all closed halfspaces that include an arbitrary set $S\subseteq\extspace$, called the outer convex hull. A segment will then be a special case in which $S$ has two elements: \begin{definition} Let $S\subseteq\extspace$. The \emph{outer convex hull} of $S$ (or \emph{outer hull}, for short), denoted $\ohull S$, is the intersection of all astral closed halfspaces that include $S$; that is, \begin{equation} \label{eqn:ohull-defn} \ohull S = \bigcap{\BigBraces{ \chsua:\: \uu\in\Rn\setminus\{\zero\},\, \beta\in\R,\, S\subseteq \chsua}}. \end{equation} \end{definition} Note that if $S$ is not included in any halfspace $\chsua$, then the intersection on the right-hand side of \eqref{eqn:ohull-defn} is vacuous and therefore equal to $\extspace$. Thus, in this case, $\ohull S = \extspace$. (Alternatively, we could allow $\uu=\zero$ in \eqref{eqn:ohull-defn} so that this intersection is never empty since $\chs{\zero}{0}=\extspace$ will always be included.) In these terms, we can now define astral segments: \begin{definition} Let $\xbar,\ybar\in\extspace$. The \emph{segment joining} $\xbar$ and $\ybar$, denoted $\lb{\xbar}{\ybar}$, is the intersection of all astral closed halfspaces that include both $\xbar$ and $\ybar$; thus, \[\lb{\xbar}{\ybar} = \ohull\{\xbar,\ybar\}.\] \end{definition} Here are some properties of outer convex hull: \begin{proposition} \label{pr:ohull:hull} Outer convex hull is a hull operator. Moreover, for any $S\subseteq\eRn$, the set $\ohull S$ is closed (in $\eRn$).
Consequently, $\ohull\Sbar=\ohull S$. \end{proposition} \begin{proof} Let $\calC$ be the collection consisting of all possible intersections of closed halfspaces in $\eRn$ (including the empty intersection, which is all of $\eRn$). Then $\ohull$ is the hull operator for $\calC$ (as defined in Section~\ref{sec:prelim:hull-ops}). Let $S\subseteq\extspace$. Then $\ohull S$ is an intersection of closed halfspaces and is therefore closed. As such, $S\subseteq\Sbar\subseteq\ohull S$. Since $\ohull$ is a hull operator, it follows, by \Cref{pr:gen-hull-ops}(\ref{pr:gen-hull-ops:a},\ref{pr:gen-hull-ops:c}), that \[ \ohull S \subseteq \ohull\Sbar\subseteq\ohull(\ohull S)=\ohull S. \qedhere \] \end{proof} The outer hull of any set $S\subseteq\extspace$ can be characterized in a way that is often more useful, as shown next: \begin{proposition} \label{pr:ohull-simplify} Let $S\subseteq\extspace$, and let $\zbar\in\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{pr:ohull-simplify:a} $\zbar\in\ohull{S}$. \item \label{pr:ohull-simplify:b} For all $\uu\in\Rn$, \begin{equation} \label{eq:pr:ohull-simplify:1} \zbar\cdot\uu \leq \sup_{\xbar\in S} \xbar\cdot\uu. \end{equation} \item \label{pr:ohull-simplify:c} For all $\uu\in\Rn$, \begin{equation} \label{eq:pr:ohull-simplify:2} \inf_{\xbar\in S} \xbar\cdot\uu \leq \zbar\cdot\uu \leq \sup_{\xbar\in S} \xbar\cdot\uu. \end{equation} \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{pr:ohull-simplify:a}) $\Rightarrow$ (\ref{pr:ohull-simplify:b}): } Suppose $\zbar\in\ohull{S}$. Let $\uu\in\Rn$, and let $\gamma$ be equal to the right-hand side of \eqref{eq:pr:ohull-simplify:1}; we aim to show that $\zbar\cdot\uu\leq\gamma$. We assume $\uu\neq\zero$ and that $\gamma<+\infty$ since otherwise the claim is immediate. Let $\beta\in\R$ be such that $\beta\geq\gamma$. 
Then $\sup_{\xbar\in S} \xbar\cdot\uu = \gamma\leq\beta$, implying that $S\subseteq\chsua$, and so that $\zbar\in\chsua$ by definition of outer hull (Eq.~\ref{eqn:ohull-defn}). Therefore, $\zbar\cdot\uu\leq\beta$. Since this holds for all $\beta\geq\gamma$, it follows that $\zbar\cdot\uu\leq\gamma$. \pfpart{(\ref{pr:ohull-simplify:b}) $\Rightarrow$ (\ref{pr:ohull-simplify:a}): } Assume \eqref{eq:pr:ohull-simplify:1} holds for all $\uu\in\Rn$. Suppose $S\subseteq\chsua$ for some $\uu\in\Rn\setminus\{\zero\}$ and $\beta\in\R$. This means $\xbar\cdot\uu\leq\beta$ for all $\xbar\in S$, so, combined with \eqref{eq:pr:ohull-simplify:1}, $\zbar\cdot\uu\leq\sup_{\xbar\in S}\xbar\cdot\uu\leq\beta$. Therefore, $\zbar\in\chsua$. Since this holds for all closed halfspaces that include $S$, $\zbar$ must be in $\ohull{S}$, by its definition. \pfpart{(\ref{pr:ohull-simplify:c}) $\Rightarrow$ (\ref{pr:ohull-simplify:b}): } This is immediate. \pfpart{(\ref{pr:ohull-simplify:b}) $\Rightarrow$ (\ref{pr:ohull-simplify:c}): } \eqref{eq:pr:ohull-simplify:1} immediately implies the second inequality of \eqref{eq:pr:ohull-simplify:2}, and also implies the first inequality by substituting $\uu$ with $-\uu$. \qedhere \end{proof-parts} \end{proof} As an immediate consequence, we obtain the following characterization of segments, which, by definition, are outer hulls of pairs of points: \begin{proposition} \label{pr:seg-simplify} Let $\xbar,\ybar,\zbar\in\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{pr:seg-simplify:a} $\zbar\in\lb{\xbar}{\ybar}$. \item \label{pr:seg-simplify:b} For all $\uu\in\Rn$, $\zbar\cdot\uu \leq \max\{\xbar\cdot\uu,\, \ybar\cdot\uu\}$. \item \label{pr:seg-simplify:c} For all $\uu\in\Rn$, $ \min\{\xbar\cdot\uu,\, \ybar\cdot\uu\} \leq \zbar\cdot\uu \leq \max\{\xbar\cdot\uu,\, \ybar\cdot\uu\} $. 
\end{letter-compact} \end{proposition} Analogous to the standard support function $\indstars$ given in \eqref{eq:e:2}, the \emph{astral support function} for a set $S\subseteq\extspace$ is the conjugate $\indaSstar$ of the astral indicator function $\indaS$ (defined in Eq.~\ref{eq:indfa-defn}). From \eqref{eq:Fstar-down-def}, this is the function \begin{equation} \label{eqn:astral-support-fcn-def} \indaSstar(\uu) = \sup_{\xbar\in S} \xbar\cdot\uu, \end{equation} for $\uu\in\Rn$. Note that this function is the same as the expression that appears on the right-hand side of \eqref{eq:pr:ohull-simplify:1}. As a result, the outer convex hull of $S$ is directly linked to the biconjugate $\indfadub{S}$: \begin{theorem} \label{thm:ohull-biconj} Let $S\subseteq\extspace$. Then \[ \indfadub{S} = \indfa{\ohull S}. \] \end{theorem} \begin{proof} Let $\zbar\in\eRn$. Then we have \begin{align} \zbar\in\ohull S &\Leftrightarrow \forall\uu\in\Rn,\,\zbar\inprod\uu\le\indfastar{S}(\uu) \notag \\ &\Leftrightarrow \forall\uu\in\Rn,\,-\indfastar{S}(\uu)\plusd\zbar\inprod\uu\le 0 \notag \\ &\Leftrightarrow \indfadub{S}(\zbar)\le 0. \label{eq:thm:ohull-biconj:1} \end{align} The first equivalence is by \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}) and \eqref{eqn:astral-support-fcn-def}. The second equivalence is by \Cref{pr:plusd-props}(\ref{pr:plusd-props:e}). The last equivalence is because \begin{equation} \label{eq:thm:ohull-biconj:2} \indfadub{S}(\zbar) = \sup_{\uu\in\Rn}\bigBracks{-\indfastar{S}(\uu)\plusd\zbar\inprod\uu} \end{equation} by definition of the dual conjugate (Eq.~\ref{eq:psistar-def:2}). We claim that $\indfadub{S}(\zbar)\in\set{0,+\infty}$, which, in light of \eqref{eq:thm:ohull-biconj:1}, will suffice to prove the proposition. If $S=\emptyset$ then $\indaS\equiv+\infty$ implying $\indaSstar\equiv-\infty$ and $\indfadub{S}\equiv+\infty$, so the claim holds. 
Otherwise, $\indfastar{S}(\zero)=0$, so $\indfadub{S}(\zbar)\ge-\indfastar{S}(\zero)\plusd\zbar\inprod\zero=0$. If $\indfadub{S}(\zbar)=0$ then the claim holds, so in the rest of the proof we consider the case $\indfadub{S}(\zbar)>0$. Since $\indfadub{S}(\zbar)>0$, \eqref{eq:thm:ohull-biconj:2} implies there exists $\uhat\in\Rn$ such that $-\indfastar{S}(\uhat)\plusd\zbar\inprod\uhat>0$. For any $\lambda\in\Rstrictpos$, \[ \indfastar{S}(\lambda\uhat)= \sup_{\xbar\in S}\bigBracks{\xbar\cdot(\lambda\uhat)}= \lambda\sup_{\xbar\in S}\bigBracks{\xbar\cdot\uhat}= \lambda\indfastar{S}(\uhat), \] so \begin{align*} \indfadub{S}(\zbar) &\ge \bigBracks{-\indfastar{S}(\lambda\uhat)\plusd\zbar\inprod(\lambda\uhat)} = \lambda\bigBracks{-\indfastar{S}(\uhat)\plusd\zbar\inprod\uhat}. \end{align*} Taking $\lambda\to+\infty$, we obtain $\indfadub{S}(\zbar)=+\infty$, completing the proof. \end{proof} To get a sense of what astral segments are like, we give a few examples in $\eRf{2}$, thereby also demonstrating how \Cref{pr:seg-simplify} can be used for this purpose. Later, we will develop additional tools (specifically, Theorems~\ref{thm:lb-with-zero} and~\ref{thm:conv-lmset-char}) that can be applied more directly for these same examples. \begin{example} \label{ex:seg-zero-e1} First, consider $S=\lb{\zero}{\ee_1}$. We instantiate \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:c}) with values $\uu$ of the form $\trans{[1,\alpha]}\negKern$ and $\trans{[0,1]}\negKern$, for $\alpha\in\R$. Then any $\zbar\in S$ must satisfy \begin{align*} 0 = \min\set{0,\ee_1\inprod\trans{[1,\alpha]}} \le &\;\; \zbar\inprod\trans{[1,\alpha]} \le\max\set{0,\ee_1\inprod\trans{[1,\alpha]}}=1, \\ 0 = \min\set{0,\ee_1\inprod\trans{[0,1]}} \le &\;\; \zbar\inprod\trans{[0,1]} \le\max\set{0,\ee_1\inprod\trans{[0,1]}}=0. \end{align*} Every vector $\uu\in\R^2$ is a scalar multiple of one of the ones used above. 
Consequently, every constraint as given in \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:c}) must be equivalent to one of the constraints above. Therefore, $\zbar\in S$ if and only if it satisfies these two constraints. We summarize these as \[ \textup{(C1)}\quad \zbar\inprod\trans{[1,\alpha]}\!\!\in[0,1] \text{ for all }\alpha\in\R, \qquad \textup{(C2)}\quad \zbar\inprod\trans{[0,1]}\!\!=0. \] The set of points $\zbar$ that satisfy these is the standard line segment connecting $\zero$ and $\ee_1$: \[ \lb{\zero}{\ee_1} = \set{\beta\ee_1:\:\beta\in[0,1]}. \qedhere \] \end{example} \begin{example} \label{ex:seg-zero-oe2} Now consider $S=\lb{\zero}{\limray{\ee_2}}$, and instantiate \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:c}) with $\uu$ of the form $\trans{[\alpha,1]}\negKern$ and $\trans{[1,0]}\negKern$, for $\alpha\in\R$. Scalar multiples of these vectors include all of~$\R^2$, so $\zbar\in S$ if and only if it satisfies \[ \textup{(C1)}\quad \zbar\inprod\trans{[\alpha,1]}\!\!\in[0,+\infty] \text{ for all }\alpha\in\R, \qquad \textup{(C2)}\quad \zbar\inprod\trans{[1,0]}\!\!=0. \] The set of points that satisfy these is the astral closure of the ray in the direction $\ee_2$: \[ \lb{\zero}{\limray{\ee_2}} = \set{\beta\ee_2:\:\beta\in[0,+\infty]}. \qedhere \] \end{example} \begin{example} \label{ex:seg-zero-oe2-plus-e1} Next, consider $S=\lb{\zero}{\limray{\ee_2}\plusl\ee_1}$. As in the previous example, instantiate \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:c}) with $\uu$ of the form $\trans{[\alpha,1]}\negKern$ and $\trans{[1,0]}\negKern$, for $\alpha\in\R$. Scalar multiples of these vectors include all of~$\R^2$, so $\zbar\in S$ if and only if it satisfies \[ \textup{(C1)}\quad \zbar\inprod\trans{[\alpha,1]}\!\!\in[0,+\infty] \text{ for all }\alpha\in\R, \qquad \textup{(C2)}\quad \zbar\inprod\trans{[1,0]}\!\!\in[0,1]. 
\] This means that if $\zbar$ is infinite, its dominant direction must be $\ee_2$, and its projection orthogonal to $\ee_2$ must be of the form $\beta\ee_1$ with $\beta\in[0,1]$. On the other hand, if $\zbar$ is finite, say $\zbar=\trans{[z_1,z_2]}\negKern$, then (C1) implies that $\alpha z_1+z_2\ge 0$ for all $\alpha\in\R$, which is only possible if $z_1=0$ and $z_2\ge 0$. Thus, the set of points that satisfy (C1) and (C2) is \[ \lb{\zero}{\limray{\ee_2}\plusl\ee_1} = \set{\beta\ee_2:\:\beta\in[0,+\infty)} \,\cup\, \set{\omega\ee_2\plusl\beta\ee_1:\:\beta\in[0,1]}. \qedhere \] \end{example} \begin{example} \label{ex:seg-oe1-oe2} The final example in $\eRf{2}$ is $S=\lb{\limray{\ee_1}}{\limray{\ee_2}}$. We instantiate \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:c}) with $\uu$ of the form $\trans{[1,\alpha]}\negKern$, $\trans{[1,-\alpha]}\negKern$, $\trans{[1,0]}\negKern$, and $\trans{[0,1]}\negKern$, where $\alpha\in\R_{>0}$. Scalar multiples of these vectors include all of~$\R^2$, so $\zbar\in S$ if and only if it satisfies \begin{gather*} \begin{aligned} & \textup{(C1)}\quad \zbar\inprod\trans{[1,\alpha]}\!\!=+\infty \text{ for all }\alpha\in\R_{>0}, \\& \textup{(C2)}\quad \zbar\inprod\trans{[1,-\alpha]}\!\!\in[-\infty,+\infty] \text{ for all }\alpha\in\R_{>0}, \end{aligned} \\ \textup{(C3)}\quad \zbar\inprod\trans{[1,0]}\!\!\in[0,+\infty], \qquad \textup{(C4)}\quad \zbar\inprod\trans{[0,1]}\!\!\in[0,+\infty]. \end{gather*} By (C1), $\zbar$ must be infinite. Let $\vv=\trans{[v_1,v_2]}$ be its dominant direction (implying $\vv\neq\zero$). By~(C1), $v_1+\alpha v_2\ge 0$ for all $\alpha\in\R_{>0}$, so $v_1,v_2\ge 0$. If $v_1,v_2>0$, then $\zbar=\limray{\vv}\plusl\zbar'$ satisfies (C1--C4) for any $\zbar'\in\eRf{2}$. By the Projection Lemma (\Cref{lemma:proj}), it suffices to consider $\zbar'=\beta\ww$, where $\beta\in\eR$ and $\ww=\trans{[v_2,-v_1]}\negKern$ spans the linear space orthogonal to $\vv$. 
If $v_1=0$ then $\zbar$ can be written as $\limray{\ee_2}\plusl\beta\ee_1$ with $\beta\in\eR$. By (C3), we must actually have $\beta\in[0,+\infty]$. Symmetrically, if $v_2=0$ then $\zbar=\limray{\ee_1}\plusl\beta\ee_2$ where $\beta\in[0,+\infty]$. Altogether, we thus obtain \begin{align*} \lb{\limray{\ee_1}}{\limray{\ee_2}} &= \bigSet{\limray{\vv}\plusl\beta\ww:\: \vv=\trans{[v_1,v_2]}\!\!\in\R^2_{>0},\, \ww=\trans{[v_2,-v_1]}\negKern,\, \beta\in\eR} \\ &\qquad{} \cup\, \bigSet{\limray{\ee_1}\plusl\beta\ee_2:\:\beta\in[0,+\infty]} \\ &\qquad{} \cup\, \bigSet{\limray{\ee_2}\plusl\beta\ee_1:\:\beta\in[0,+\infty]}. \end{align*} This set can be visualized as the concatenation of closed galaxies associated with astrons in the directions $\vv=\trans{[\cos\varphi,\sin\varphi]}\negKern$ with the polar angle $\varphi\in(0,\pi/2)$. The two ``boundary astrons'' $\limray{\ee_1}$ and $\limray{\ee_2}$ ($\varphi=0$ and $\varphi=\pi/2$) only contribute a closed ``half-galaxy'' that attaches to the other included galaxies. \end{example} \begin{example} \label{ex:seg-negiden-iden} We next consider a less intuitive example in $\eRn$. Let $\Iden$ be the $n\times n$ identity matrix. Then the segment joining the points $-\Iden \omm$ and $\Iden \omm$ turns out to be all of $\extspace$; that is, $ \lb{-\Iden \omm}{\Iden \omm}=\extspace $. To see this, let $\zbar\in\extspace$ and let $\uu\in\Rn$. If $\uu\neq\zero$, then $\Iden\omm\cdot\uu\in\{-\infty,+\infty\}$ by \Cref{pr:vtransu-zero}, so \[ \zbar\cdot\uu\leq +\infty=\max\{-\Iden\omm\cdot\uu,\,\Iden\omm\cdot\uu\}. \] Otherwise, if $\uu=\zero$, then $\zbar\cdot\zero=0=\max\{-\Iden\omm\cdot\zero,\,\Iden\omm\cdot\zero\}$. Thus, in all cases, the inequality appearing in \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:b}) is satisfied, so $\zbar\in\lb{-\Iden \omm}{\Iden \omm}$. 
\end{example} This last example shows, in an extreme way, that the ``segment'' joining two infinite astral points can be very different from the standard, one-dimensional segment joining points in $\Rn$. In Appendix~\ref{sec:mono-passages}, we explore a different way of understanding segments (and so also convex sets) which turn out to be composed of structures that are, in certain respects, more like ordinary line segments. Continuing our general development, we are now ready to define convexity for subsets of astral space. Having defined segments, this definition is entirely analogous to the standard one for subsets of $\Rn$: \begin{definition} A set $S\subseteq\extspace$ is \emph{astrally convex} (or simply \emph{convex}) if the entire segment joining every pair of points in $S$ is also included in $S$, that is, if \[ \lb{\xbar}{\ybar} \subseteq S \quad \text{for all $\xbar,\ybar\in S$.} \] \end{definition} We next show that several important examples of sets are astrally convex, and derive some fundamental properties of astral convex sets. One of these concerns {updirected} families of sets, where an indexed collection of sets $\set{S_i:\:i\in\indset}$ is said to be \emph{updirected} if for any $i,j\in\indset$ there exists $k\in\indset$ such that $S_i\cup S_j\subseteq S_k$. The union of an updirected family of convex sets in $\Rn$ is convex. We show the same is true for an updirected family of astrally convex sets. We also show that intersections of arbitrary collections of astrally convex sets are astrally convex, and that the empty set and the entire space $\eRn$ are astrally convex. Altogether, these properties, stated as \Cref{pr:e1}(\ref{pr:e1:univ},\ref{pr:e1:b},\ref{pr:e1:union}) below, imply that astral convexity is an instance of abstract convexity as defined by \citet{vandeVel}. 
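As an informal sanity check on this definition, the characterization in \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:b}) can be tested numerically for finite points. The Python sketch below is our own illustration (the helper name is ours, and the sampled directions are only a finite proxy for the quantifier over all $\uu\in\R^2$): a convex combination of $\xx$ and $\yy$ satisfies $\zz\cdot\uu\le\max\{\xx\cdot\uu,\yy\cdot\uu\}$ for every sampled direction, while a point off the segment is exposed by some direction.

```python
import math

def seg_condition_holds(x, y, z, num_dirs=360):
    """Check the condition z.u <= max(x.u, y.u) from the segment
    characterization, over directions u sampled on the unit circle
    (a finite stand-in for "all u in R^2")."""
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    for k in range(num_dirs):
        theta = 2 * math.pi * k / num_dirs
        u = (math.cos(theta), math.sin(theta))
        if dot(z, u) > max(dot(x, u), dot(y, u)) + 1e-12:
            return False  # this u witnesses z outside the segment
    return True

x, y = (0.0, 0.0), (1.0, 0.0)
print(seg_condition_holds(x, y, (0.3, 0.0)))  # True: z lies on [x, y]
print(seg_condition_holds(x, y, (0.5, 0.5)))  # False: u = e2 separates z
```

By \Cref{pr:e1}(\ref{pr:e1:a}), for finite points this condition holding for all directions is equivalent to membership in the standard line segment, which is what the sampled check approximates.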
\begin{proposition} \label{pr:e1} ~ \begin{letter-compact} \item \label{pr:e1:a} If $\xx,\yy\in\Rn$, then $\lb{\xx}{\yy}$ is the standard line segment joining $\xx$ and $\yy$ in $\Rn$: \[ \lb{\xx}{\yy} = \bigBraces{(1-\lambda)\xx + \lambda\yy :\: \lambda\in [0,1]}. \] Therefore, a subset of $\Rn$ is astrally convex if and only if it is convex in the standard sense.\looseness=-1 \item \label{pr:e1:univ} The empty set $\emptyset$ and the full astral space $\eRn$ are astrally convex. \item \label{pr:e1:b} The intersection of an arbitrary collection of astrally convex sets is astrally convex. \item \label{pr:e1:union} Let $\set{S_i:\:i\in\indset}$ be an {updirected} family of astrally convex sets $S_i\subseteq\eRn$ (where $\indset$ is any index set). Then their union $\bigcup_{i\in\indset} S_i$ is also astrally convex. \item \label{pr:e1:c} Every set of the form given in Eqs.~(\ref{eq:chsua-defn}),~(\ref{eqn:open-hfspace-defn}) or~(\ref{eqn:hyperplane-defn}), with $\uu\in\Rn$ and $\beta\in\Rext$, is astrally convex. Therefore, every astral hyperplane and astral closed or open halfspace is astrally convex. \item \label{pr:e1:ohull} For any $S\subseteq\extspace$, the outer hull, $\ohull S$, is astrally convex. Also, for all $\xbar\in\extspace$, $\ohull \{\xbar\} = \lb{\xbar}{\xbar} = \{\xbar\}$. Therefore, all segments and singletons are astrally convex. \item \label{pr:e1:d} Every base element in the astral topology, of the form given in \eqref{eq:h:3a}, is astrally convex. \item \label{pr:e1:base} Every point $\xbar\in\extspace$ has a nested countable neighborhood base consisting of astrally convex sets. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:e1:a}):} Suppose $\xx,\yy\in\Rn$. 
If $\zz\in\lb{\xx}{\yy}$, then for all $\uu\in\Rn$, by \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:c}), \[ -\infty < \min\{\xx\cdot\uu,\,\yy\cdot\uu\} \leq \zz\cdot\uu \leq \max\{\xx\cdot\uu,\,\yy\cdot\uu\} < +\infty, \] implying that $\zz\in\Rn$ (by Proposition~\ref{pr:i:3}\ref{i:3a},\ref{i:3b}). Thus, $\lb{\xx}{\yy}\subseteq\Rn$, so $\lb{\xx}{\yy}$ is included in some halfspace $\chsua$ in $\extspace$ if and only if it is included in the corresponding halfspace in $\Rn$ given in \eqref{eqn:chsua-cap-rn}. It follows that $\lb{\xx}{\yy}$ is the intersection of all closed halfspaces in $\Rn$ containing both $\xx$ and $\yy$. By \Cref{pr:con-int-halfspaces}(\ref{roc:cor11.5.1}), this is exactly the convex hull of $\{\xx,\yy\}$, namely, the set of all their convex combinations as given in the proposition. This implies that, when restricted to subsets of $\Rn$, astral convexity coincides exactly with the standard definition of convexity in $\Rn$. \pfpart{Part~(\ref{pr:e1:univ}):} Immediate from the definition of astral convexity. \pfpart{Part~(\ref{pr:e1:b}):} Let \[ M = \bigcap_{i\in\indset} S_i \] where each $S_i\subseteq\extspace$ is convex, and $\indset$ is an arbitrary index set. Let $\xbar$, $\ybar$ be in $M$. Then for all $i\in\indset$, $\xbar,\ybar\in S_i$, so $\lb{\xbar}{\ybar}\subseteq S_i$. Since this holds for all $i\in\indset$, $\lb{\xbar}{\ybar}\subseteq M$, and $M$ is convex. \pfpart{Part~(\ref{pr:e1:union}):} Let $\set{S_i:\:i\in\indset}$ be an updirected collection of astrally convex sets in $\eRn$, and let $M=\bigcup_{i\in\indset} S_i$. We will show that $M$ is convex. Let $\xbar$, $\ybar\in M$. Then there exist $i,j\in\indset$ such that $\xbar\in S_i$ and $\ybar\in S_j$, and so also $k\in\indset$ such that $\xbar,\ybar\in S_k$. By astral convexity of $S_k$, $\lb{\xbar}{\ybar}\subseteq S_k\subseteq M$, so $M$ is convex. 
\pfpart{Part~(\ref{pr:e1:c}):} Let $\uu\in\Rn$ and $\beta\in\Rext$, and let $H=\set{\xbar\in\extspace :\: \xbar\cdot\uu \leq \beta}$, which we aim to show is convex. Suppose $\xbar,\ybar\in H$, and that $\zbar\in\lb{\xbar}{\ybar}$. Then \[ \zbar\cdot\uu \leq \max\{\xbar\cdot\uu,\,\ybar\cdot\uu\} \leq \beta, \] where the first inequality is from \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}), and the second is because $\xbar,\ybar\in H$. Thus, $\zbar\in H$, so $H$ is convex. Therefore, sets of the form given in \eqref{eq:chsua-defn}, including astral closed halfspaces, are convex. The proof is the same for sets as given in \eqref{eqn:open-hfspace-defn}, but using strict inequalities; these include astral open halfspaces. The set of points for which $\xbar\cdot\uu=\beta$, as in \eqref{eqn:hyperplane-defn}, is convex by the above and part~(\ref{pr:e1:b}), since this set is the intersection of the two convex sets defined by $\xbar\cdot\uu\le\beta$ and $\xbar\cdot\uu\ge\beta$, respectively. Thus, astral hyperplanes are also convex. \pfpart{Part~(\ref{pr:e1:ohull}):} An outer hull is an intersection of closed halfspaces and therefore convex by parts~(\ref{pr:e1:b}) and~(\ref{pr:e1:c}). Segments, as outer hulls, are thus convex. It remains to argue that $\ohull \{\xbar\} = \{\xbar\}$ for all $\xbar\in\eRn$, which will also establish that singletons are convex. Let $\xbar,\zbar\in\extspace$. Then by \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:c}), $\zbar\in\ohull\{\xbar\}$ if and only if, for all $\uu\in\Rn$, $\xbar\cdot\uu\leq\zbar\cdot\uu\leq\xbar\cdot\uu$, which, by Proposition~\ref{pr:i:4}, holds if and only if $\zbar=\xbar$. Therefore, $\ohull\{\xbar\}=\{\xbar\}$. \pfpart{Part~(\ref{pr:e1:d}):} Since base elements, as in \eqref{eq:h:3a}, are intersections of astral open halfspaces, they are convex by parts~(\ref{pr:e1:b}) and~(\ref{pr:e1:c}). 
\pfpart{Part~(\ref{pr:e1:base}):} Elements of the nested countable neighborhood base from \Cref{thm:first:local} are also base elements in the astral topology (of the form given in Eq.~\ref{eq:h:3a}), and are therefore convex by part~(\ref{pr:e1:d}). \qedhere \end{proof-parts} \end{proof} Thanks to \Cref{pr:e1}(\ref{pr:e1:a}), there is no ambiguity in referring to an astrally convex set $S\subseteq\eRn$ simply as convex; we use this terminology from now on. We next show that if $S$ is any convex subset of $\Rn$, then its astral closure $\Sbar$ is also convex, and more specifically, is exactly equal to the outer hull of $S$, the intersection of all astral closed halfspaces that contain $S$. \begin{theorem} \label{thm:e:6} \MarkMaybe Let $S\subseteq\Rn$ be convex. Then $\Sbar$, its closure in $\extspace$, is exactly equal to its outer hull; that is, $\Sbar=\ohull S$. Consequently, $\Sbar$ is convex. \end{theorem} \begin{proof} Let $S\subseteq\Rn$ be convex, and let $\inds$ be its indicator function (see Eq.~\ref{eq:indf-defn}). Then \[ \indfa{\Sbar} = \indsext = \indsdub = \indsextdub = \indfadub{\Sbar} = \indfa{\ohull\Sbar} = \indfa{\ohull S} \] where the equalities follow by \Cref{pr:inds-ext}, \Cref{cor:all-red-closed-sp-cases} (since $\inds\ge 0$), \Cref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:a}), \Cref{pr:inds-ext} again, \Cref{thm:ohull-biconj}, and finally by \Cref{pr:ohull:hull}. Thus, $\Sbar=\ohull S$, so the convexity of $\Sbar$ follows by \Cref{pr:e1}(\ref{pr:e1:ohull}). \end{proof} In general, Theorem~\ref{thm:e:6} does not hold for arbitrary convex sets in $\extspace$ (rather than in $\Rn$), as will be demonstrated in Section~\ref{sec:closure-convex-set}. \subsection{Outer hull of finite sets} \label{sec:simplices} We next study the outer hull, $\ohull V$, of any finite set $V=\{\xbar_1,\dotsc,\xbar_m\}$ in $\extspace$. 
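As a warm-up, outer-hull membership can be probed numerically at finite points. By \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:b}), a point $\zz\in\Rn$ lies in $\ohull S$ exactly when $\zz\cdot\uu\le\sup_{\xbar\in S}\xbar\cdot\uu$ for all $\uu$; for the closed unit disk in $\R^2$, the right-hand side is the standard support function $\uu\mapsto\norm{\uu}$. The Python sketch below is our own illustration (the sampled directions are only a finite proxy for the quantifier over all $\uu$), consistent with Theorem~\ref{thm:e:6}, under which the finite part of the outer hull of the disk is the disk itself.

```python
import math

def in_ohull_disk(z, num_dirs=720):
    """For S the closed unit disk in R^2, sup_{x in S} x.u = |u|, so a
    finite point z lies in ohull S iff z.u <= |u| for all u.  Sampling
    unit directions u makes the test z.u <= 1 over the circle."""
    for k in range(num_dirs):
        theta = 2 * math.pi * k / num_dirs
        u = (math.cos(theta), math.sin(theta))
        if z[0] * u[0] + z[1] * u[1] > 1 + 1e-9:
            return False
    return True

print(in_ohull_disk((0.6, 0.6)))  # True:  |z| = 0.6*sqrt(2) <= 1
print(in_ohull_disk((0.8, 0.8)))  # False: |z| = 0.8*sqrt(2) > 1
```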
In standard convex analysis, the convex hull of a finite set of points in $\Rn$ can be viewed in two ways: either as the intersection of all halfspaces containing the points, or as the set of all convex combinations of the points. Thus, there is both an ``outside'' and ``inside'' way of characterizing convexity in this case. The astral outer hull that we consider here has so far been described in ``outside'' terms, as the intersection of closed halfspaces. We next give an alternative ``inside'' description of this same set. Specifically, we show that $\ohull V$ can be characterized in terms of sequences via a formulation which says that a point is in the outer hull of $V$ if and only if it is the limit of points in $\Rn$ that are themselves convex combinations of points converging to the points in $V$. (See the illustration in Figure~\ref{fig:thm:e:7}.) More precisely: \begin{figure}[t!] \centering \includegraphics[width=0.5\textwidth]{figs/thm_e_7} \caption{An illustration of the sequences in Theorem~\ref{thm:e:7} (and Corollary~\ref{cor:e:1}). Here, the two sequences $\seq{t\ee_1}$ and $\seq{t\ee_2}$ in $\R^2$ are shown along the axes, converging respectively to $\xbar_1=\limray{\ee_1}$ and $\xbar_2=\limray{\ee_2}$. The dashed segments show the (standard) segments between each pair, $t\ee_1$ and $t\ee_2$. Each line or curve converges to some astral point $\zbar\in\extspac{2}$, as indicated in the figure, so the points of intersection of that line or curve with the dashed segments also converge to $\zbar$. Since that sequence satisfies the conditions of Theorem~\ref{thm:e:7}, this proves that $\zbar$ is in $\ohull\{\xbar_1,\xbar_2\}$. } \label{fig:thm:e:7} \end{figure} \begin{theorem} \label{thm:e:7} Let $V=\{\xbar_1,\dotsc,\xbar_m\}\subseteq\extspace$, and let $\zbar\in\extspace$. 
Then $\zbar\in\ohull{V}$ if and only if there exist sequences $\seq{\xx_{it}}$ in $\Rn$ and $\seq{\lambda_{it}}$ in $\Rpos$, for $i=1,\dotsc,m$, such that: \begin{itemize}[noitemsep] \item $\xx_{it}\rightarrow\xbar_i$ for $i=1,\dotsc,m$. \item $\sum_{i=1}^m \lambda_{it} = 1$ for all $t$. \item The sequence $\zz_t=\sum_{i=1}^m \lambda_{it} \xx_{it}$ converges to $\zbar$. \end{itemize} The same equivalence holds if we additionally require either or both of the following: \begin{letter-compact} \item \label{thm:e:7:add:a} $\lambda_{it}\to\lambar_i$ for $i=1,\dotsc,m$, for some $\lambar_i\in [0,1]$ with $\sum_{i=1}^m \lambar_i = 1$. \item \label{thm:e:7:add:b} Each of the sequences $\seq{\xx_{it}}$ is span-bound, for $i=1,\dotsc,m$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{``If'' ($\Leftarrow$):} Suppose there exist sequences of the form given in the theorem. Then for each $\uu\in\Rn$, \begin{align*} \zbar\cdot\uu = \lim {(\zz_t\cdot\uu)} &= \lim \Bracks{\sum_{i=1}^m \lambda_{it} \xx_{it} \cdot\uu} \\ &\leq \lim \max\{\xx_{1t}\cdot\uu,\dotsc,\xx_{mt}\cdot\uu\} \\ &= \max\{\xbar_1\cdot\uu,\dotsc,\xbar_m\cdot\uu\}. \end{align*} The first equality is by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}), since $\zz_t\rightarrow\zbar$. The last equality (as well as the fact that the last limit exists) is by continuity of the function which computes the maximum of $m$ numbers in $\Rext$, and because $\xx_{it}\rightarrow\xbar_i$, implying $\xx_{it}\cdot\uu\rightarrow\xbar_i\cdot\uu$, for $i=1,\dotsc,m$ (again by Theorem~\ref{thm:i:1}(\ref{thm:i:1c})). Since this holds for all $\uu\in\Rn$, we obtain $\zbar\in\simplex{V}$ by \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}). \pfpart{``Only if'' ($\Rightarrow$):} Assume $\zbar\in\simplex{V}$. We will construct the needed sequences explicitly.
For this purpose, we first prove the following lemma showing that within any neighborhood of $\zbar$, there must exist a point that is a convex combination (in $\Rn$) of finite points from any collection of neighborhoods of the points in $V$. \begin{lemma} \label{lem:e:1mod} Let $\zbar\in\simplex{V}$, where $V=\{\xbar_1,\dotsc,\xbar_m\}$. Furthermore, let $Z\subseteq\extspace$ be any neighbor\-hood of $\zbar$, and let $X_i\subseteq\eRn$ be any neighborhood of $\xbar_i$, for $i=1,\dotsc,m$. Then there exist $\zz\in\Rn\cap Z$ and $\xx_i\in\Rn\cap X_i$, for $i=1,\dotsc,m$, such that $\zz$ is a convex combination of $\xx_1,\dotsc,\xx_m$, that is, \[ \zz = \sum_{i=1}^m \lambda_i \xx_i \] for some $\lambda_1,\dotsc,\lambda_m\in [0,1]$ with $\sum_{i=1}^m \lambda_i = 1$. \end{lemma} \begin{proofx} Let $i\in\{1,\ldots,m\}$. Since $X_i$ is a neighborhood of $\xbar_i$, by \Cref{pr:base-equiv-topo}, there exists a base element $X'_i$ in the astral topology (of the form given in Eq.~\ref{eq:h:3a}) such that $\xbar_i\in X'_i\subseteq X_i$. Furthermore, $X'_i$ is convex by \Cref{pr:e1}(\ref{pr:e1:d}). Let $R_i=\Rn\cap X'_i$. Then $R_i$ is convex (by \Cref{pr:e1}\ref{pr:e1:a},\ref{pr:e1:b}), and $\xbar_i\in X'_i\subseteq\Xbarpi=\Rbari$, where the last equality follows by \Cref{pr:closed-set-facts}(\ref{pr:closed-set-facts:b}). Let \[ R= \conv\Parens{\bigcup_{i=1}^m R_i}. \] Then, for $i=1,\dotsc,m$, $R_i\subseteq R$, so also $\xbar_i\in\Rbari\subseteq\Rbar$. Therefore, \[ \zbar\in\simplex{V}\subseteq\ohull\Rbar=\ohull R=\Rbar. \] The first inclusion follows because $\ohull$ is a hull operator and $V\subseteq\Rbar$. The equalities follow by \Cref{pr:ohull:hull} and \Cref{thm:e:6}. Thus, $\zbar\in\Rbar$. Let $\zz$ be any point in $Z\cap R$; this set cannot be empty, because $Z$ is a neighborhood of $\zbar$ and $R$ intersects every neighborhood of $\zbar$, since $\zbar\in\Rbar$ (\Cref{pr:closure:intersect}). 
Since $\zz$ is in $R$, which is the convex hull of the union of the convex sets $R_i$, \Cref{roc:thm3.3} implies that $\zz$ can be written as a convex combination of points, one in each set $R_i\subseteq\Rn\cap X_i$. Thus, $\zz$~has the properties stated in the lemma. \end{proofx} To complete the proof of \Cref{thm:e:7}, let $\countset{Z}$ be a nested countable neighborhood base for $\zbar$, and for each $i=1,\dotsc,m$, let $\countsetgen{X_{it}}$ be a nested countable neighborhood base for $\xbar_i$. (These must exist by \Cref{thm:first:local}.) By \Cref{lem:e:1mod}, applied to $Z_t$ and the $X_{it}\negKern$'s, there exist $\zz_t\in\Rn\cap Z_t$, and $\xx_{it}\in\Rn\cap X_{it}$, $\lambda_{it}\in [0,1]$, for $i=1,\dotsc,m$, such that $\sum_{i=1}^m \lambda_{it} = 1$ and \[ \zz_t = \sum_{i=1}^m \lambda_{it} \xx_{it}. \] Then $\zz_t\rightarrow \zbar$ (by \Cref{cor:first:local:conv}), and likewise, $\xx_{it}\rightarrow\xbar_i$ for each $i$. \pfpart{Additional requirement~(\ref{thm:e:7:add:a}):} Suppose sequences as stated in the theorem exist. Note that the vectors $\trans{[\lambda_{1t},\dotsc,\lambda_{mt}]}$ all lie in the compact set $[0,1]^m$, so there must exist a convergent subsequence. Discarding those $t$ outside this subsequence yields a sequence that still satisfies all of the properties stated in the theorem, and in addition, ensures that $\lambda_{it}\rightarrow\lambar_i$, for some $\lambar_i\in [0,1]$ with $\sum_{i=1}^m \lambar_i = 1$. \pfpart{Additional requirement~(\ref{thm:e:7:add:b}):} Suppose sequences as stated in the theorem exist. Then by Theorem~\ref{thm:spam-limit-seqs-exist}, for $i=1,\dotsc,m$, there exist span-bound sequences $\seq{\xx'_{it}}$ with $\xx'_{it}\rightarrow\xbar_i$ and $\xx_{it}-\xx'_{it}\rightarrow\zero$. For all $t$, let $\zz'_t=\sum_{i=1}^m \lambda_{it} \xx'_{it}$.
Then $\zz_t-\zz'_t=\sum_{i=1}^m \lambda_{it} (\xx_{it}-\xx'_{it})$, which converges to $\zero$ since $\xx_{it}-\xx'_{it}\rightarrow\zero$ for all $i$. Thus, $\zz'_t = \zz_t - (\zz_t - \zz'_t) \rightarrow\zbar$ by Proposition~\ref{pr:i:7}(\ref{pr:i:7g}) since $\zz_t\rightarrow\zbar$. \qedhere \end{proof-parts} \end{proof} Since the segment joining two points is the same as the outer hull of the two points, we immediately obtain the following corollary which shows, in a sense, that the segment joining points $\xbar$ and $\ybar$ in $\extspace$ is the union of all limits of sequences of line segments in $\Rn$ whose endpoints converge to $\xbar$ and $\ybar$. \begin{corollary} \label{cor:e:1} Let $\xbar,\ybar\in\extspace$, and let $\zbar\in\extspace$. Then $\zbar\in\lb{\xbar}{\ybar}$ if and only if there exist sequences $\seq{\xx_t}$ and $\seq{\yy_t}$ in $\Rn$, and $\seq{\lambda_t}$ in $[0,1]$ such that $\xx_t\rightarrow\xbar$ and $\yy_t\rightarrow\ybar$, and the sequence \[ \zz_t= (1-\lambda_t) \xx_t + \lambda_t \yy_t \] converges to $\zbar$. The same equivalence holds if the sequence $\seq{\lambda_t}$ is additionally required to converge to a limit in $[0,1]$, and the sequences $\seq{\xx_t}$ and $\seq{\yy_t}$ are required to be span-bound. \end{corollary} By combining Theorem~\ref{thm:e:7} with Carath\'{e}odory's theorem, we obtain the following: \begin{theorem} \label{thm:carath} \MarkMaybe Suppose $\zbar\in\simplex{V}$ for some finite set $V\subseteq\extspace$. Then $\zbar\in\simplex{V'}$ for some $V'\subseteq V$ with $|V'|\leq n+1$. \end{theorem} \begin{proof} Let $V=\{\xbar_1,\dotsc,\xbar_m\}$. Since $\zbar\in\simplex{V}$, there exist sequences as given in Theorem~\ref{thm:e:7}. For each $t$, let $I_t$ be the nonzero indices of the $\lambda_{it}\negKern$'s, that is, \[ I_t = \bigBraces{ i\in\{1,\dotsc,m\} :\: \lambda_{it}>0 }. 
\] By Carath\'{e}odory's theorem (\Cref{roc:thm17.1}), we can assume without loss of generality that the $\lambda_{it}\negKern$'s have been chosen in such a way that $|I_t|\leq n+1$, for all $t$. Since there are only finitely many subsets of $m$ items, there must exist some subset $I\subseteq\{1,\dotsc,m\}$ for which $I_t=I$ for infinitely many values of $t$ (implying $|I|\leq n+1$). On the subsequence consisting of all such values of $t$, all conditions of the theorem are satisfied with each $\zz_t$ a convex combination of only points $\xx_{it}$ with $i\in I$. Applying Theorem~\ref{thm:e:7} to this subsequence then shows that $\zbar\in\simplex{V'}$ where \[ V' = \set{ \xbar_i :\: i\in I }. \qedhere \] \end{proof} The following theorem shows that if $V$ is a finite subset of some convex set, then the outer hull of $V$ must also be entirely included in that set. This is useful, for instance, for characterizing the convex hull, as we will see shortly. \begin{theorem} \label{thm:e:2} Let $S\subseteq\extspace$ be convex, and let $V\subseteq S$ be a finite subset. Then $\simplex{V}\subseteq S$. \end{theorem} \begin{proof} Let $V=\{\xbar_1,\dotsc,\xbar_m\}\subseteq S$. Proof is by induction on $m=|V|$. In the base case that $m=1$, we have $\simplex{V}=\{\xbar_1\}\subseteq S$, by Proposition~\ref{pr:e1}(\ref{pr:e1:ohull}). For the inductive step, assume $m\geq 2$, and that the claim holds for $m-1$. Let $\zbar\in\simplex{V}$. Let $\zz_t$, $\xx_{it}$ and $\lambda_{it}$ be sequences as given in Theorem~\ref{thm:e:7} with $\lambda_{it}$ converging to $\lambar_i$. Since no more than one of these limits $\lambar_i$ can be equal to $1$, assume without loss of generality that $\lambar_m<1$. Then for all $t$ sufficiently large, $\lambda_{mt}<1$; let us assume that is the case for all $t$ (by discarding all others from the sequence). 
Define a new sequence that is, for all $t$, a convex combination of just the $\xx_{it}\negKern$'s for $i=1,\dotsc,m-1$: \[ \yy_t = \sum_{i=1}^{m-1} \frac{\lambda_{it}}{1-\lambda_{mt}} \xx_{it}. \] Since $\extspace$ is sequentially compact, the $\yy_t\negKern$'s must have a convergent subsequence. By discarding all indices $t$ that are not part of this subsequence, let us assume that the entire sequence of $\yy_t\negKern$'s converges to some point $\ybar\in\extspace$. Since all the conditions of Theorem~\ref{thm:e:7} are now satisfied for $\ybar$, it must be the case that $\ybar\in\simplex\set{\xbar_1,\dotsc,\xbar_{m-1}}$, and so $\ybar\in S$ by the inductive hypothesis. Further, $\zz_t = (1-\lambda_{mt}) \yy_t + \lambda_{mt} \xx_{mt}$, which converges to $\zbar$. Thus, the conditions of Corollary~\ref{cor:e:1} are satisfied, and so $\zbar\in\lb{\ybar}{\xbar_m}$. Therefore, $\zbar\in S$ since $\ybar$ and $\xbar_m$ are. \end{proof} \subsection{Convex hull} \label{sec:convex-hull} The astral convex hull of any set in $\extspace$ is defined in the same way as for sets in $\Rn$: \begin{definition} Let $S\subseteq\extspace$. The \emph{convex hull} of $S$, denoted $\conv{S}$, is the intersection of all convex sets in $\extspace$ that include $S$. \end{definition} By \Cref{pr:e1}(\ref{pr:e1:b}), the convex hull $\conv{S}$ of any set $S\subseteq\extspace$ is convex. Thus, $\conv{S}$ is the \emph{smallest} convex set that includes $S$. In fact, this definition and the notation $\conv{S}$ are really an extension of the standard definition for sets in $\Rn$ (as in Section~\ref{sec:prelim:convex-sets}), which is possible since astral and standard convexity have the same meaning for sets in $\Rn$ (\Cref{pr:e1}\ref{pr:e1:a}). The astral convex hull operation is the hull operator on the set of all astral convex sets.
As such, basic properties of standard convex hulls in $\Rn$ carry over easily, as stated in the next proposition: \begin{proposition} \label{pr:conhull-prop} Let $S, U\subseteq\extspace$. \begin{letter-compact} \item \label{pr:conhull-prop:aa} If $S\subseteq U$ and $U$ is convex, then $\conv{S}\subseteq U$. \item \label{pr:conhull-prop:b} If $S\subseteq U$, then $\conv{S}\subseteq\conv{U}$. \item \label{pr:conhull-prop:c} If $S\subseteq U\subseteq\conv{S}$, then $\conv{U} = \conv{S}$. \item \label{pr:conhull-prop:a} $\conv{S}\subseteq\simplex{S}$ with equality if $|S|<+\infty$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Parts~(\ref{pr:conhull-prop:aa}),~(\ref{pr:conhull-prop:b}) and~(\ref{pr:conhull-prop:c}):} Since the astral convex hull operation is a hull operator, these follow from Proposition~\ref{pr:gen-hull-ops}(\ref{pr:gen-hull-ops:b},\ref{pr:gen-hull-ops:c},\ref{pr:gen-hull-ops:d}). \pfpart{Part~(\ref{pr:conhull-prop:a}):} The outer hull $\simplex{S}$ is convex and includes $S$ (\Cref{pr:e1}\ref{pr:e1:ohull}). Therefore, $\conv{S}\subseteq\simplex{S}$ by part~(\ref{pr:conhull-prop:aa}). On the other hand, if $S$ is finite, then $\simplex{S}\subseteq\conv{S}$ by Theorem~\ref{thm:e:2} since $\conv{S}$ is convex and includes $S$. \qedhere \end{proof-parts} \end{proof} \Cref{pr:conhull-prop}(\ref{pr:conhull-prop:a}) states that the convex hull and outer hull of a set $S\subseteq\extspace$ are guaranteed to be the same only when $S$ is finite. Indeed, they may be different, since, for instance, the outer hull is always closed, but the convex hull need not be. Later (in Section~\ref{sec:sep-cvx-sets}), we will establish a more specific relationship between these sets: whereas the convex hull $\conv{S}$ of any set $S\subseteq\extspace$ is the smallest convex set that includes $S$, the outer hull $\ohull{S}$ is the smallest \emph{closed} convex set that includes $S$.
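For instance, the two hulls already differ for $S=\R$ (viewed as a subset of $\eRf{1}$), a set that is convex but not closed in astral space. Its convex hull is $\R$ itself, while by Theorem~\ref{thm:e:6} its outer hull is its astral closure:
\[
\conv{\R} = \R,
\qquad\text{whereas}\qquad
\ohull{\R} = \Rext = [-\infty,+\infty].
\]
Thus, the outer hull adjoins the two infinite points, which the convex hull does not include.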
The convex hull $\conv{V}$ of any finite set of points $V\subseteq\extspace$ is called the \emph{polytope formed by $V$}. By Proposition~\ref{pr:conhull-prop}(\ref{pr:conhull-prop:a}), this is the same as its outer hull, $\ohull V$. In general, the convex hull of any set is the union of all polytopes formed by all its finite subsets, as we show next: \begin{theorem} \label{thm:convhull-of-simpices} Let $S\subseteq\extspace$. Then its convex hull is equal to the union of all polytopes formed by finite subsets of $S$, that is, \begin{align} \label{eq:e:6} \conv{S} &= \bigcup_{\substack{V\subseteq S:\\ \card{V}<+\infty}} \!\!\simplex{V} = \bigcup_{\substack{V\subseteq S:\\ \card{V}\leq n+1}} \!\!\simplex{V} \end{align} \end{theorem} \begin{proof} Consider the collection of sets appearing in the first union, that is, \[ \calF =\set{\simplex{V}:\: V\subseteq S,\,\card{V}<+\infty}. \] This collection is updirected, because for any finite sets $V_1,V_2\subseteq S$, their union is also finite and a subset of $S$, so $\ohull(V_1\cup V_2)$ is in $\calF$. Moreover, \[ (\ohull V_1)\cup(\ohull V_2)\subseteq\ohull(V_1\cup V_2), \] since $\ohull$ is a hull operator. Thus, by \Cref{pr:e1}(\ref{pr:e1:union}), $\bigcup{\calF}$ is convex. Also, for any $\xbar\in S$, $\set{\xbar}=\ohull\set{\xbar}\in\calF$, so $S\subseteq\bigcup{\calF}$, and therefore $\conv S\subseteq\bigcup{\calF}$ by \Cref{pr:conhull-prop}(\ref{pr:conhull-prop:aa}). Furthermore, for any finite set $V\subseteq S$, $\ohull V\subseteq\conv S$ by \Cref{thm:e:2}, so $\bigcup{\calF}\subseteq\conv S$. Thus, $\bigcup{\calF}=\conv S$, proving the first equality of \eqref{eq:e:6}. The second equality then follows from \Cref{thm:carath}. \end{proof} \section{Constructing and operating on convex sets} \label{sec:convex-set-ops} We next study some operations for constructing or manipulating convex sets. 
These include the analogues of operations commonly applied to convex sets in $\Rn$, such as mapping under a linear or affine map, or adding together two convex sets. \subsection{Convexity under affine transformations} We first consider affine transformations. We begin by showing that the image of a polytope under an affine map is equal to the polytope formed by the images of the points that formed the original polytope. Among its consequences, this will imply that the image or pre-image of a convex set under an affine map is also convex. \begin{theorem} \label{thm:e:9} Let $\A\in\R^{m\times n}$, $\bbar\in\extspac{m}$, and let $F:\extspace\rightarrow\extspac{m}$ be the affine map $F(\zbar)=\bbar\plusl \A\zbar$ for $\zbar\in\extspace$. Let $V=\{\xbar_1,\dotsc,\xbar_\ell\}\subseteq\extspace$. Then \[ \simplex{F(V)} = F(\simplex{V}). \] In particular, $\lb{F(\xbar)}{F(\ybar)}=F(\lb{\xbar}{\ybar})$ for all $\xbar,\ybar\in\extspace$. \end{theorem} Later, in \Cref{cor:ohull-fs-is-f-ohull-s}, we will show that Theorem~\ref{thm:e:9} holds more generally if the finite set $V$ is replaced by an arbitrary set $S\subseteq\extspace$. The next lemma argues one of the needed inclusions, proved in this broader context, as a first step in the proof of Theorem~\ref{thm:e:9}: \begin{lemma} \label{lem:f-conv-S-in-conv-F-S} Let $\A$, $\bbar$ and $F$ be as given in Theorem~\ref{thm:e:9}, and let $S\subseteq\extspace$. Then \[ F(\ohull{S}) \subseteq \ohull{F(S)}. \] \end{lemma} \begin{proof} Let $\zbar\in\ohull{S}$; we aim to show that $F(\zbar)\in\ohull{F(S)}$. Let $\uu\in\R^m$. For all $\xbar\in\extspace$, we have \begin{equation} \label{eq:lem:f-conv-S-in-conv-F-S:1} F(\xbar)\cdot\uu = (\bbar\plusl\A\xbar)\cdot\uu = \bbar\cdot \uu \plusl (\A\xbar)\cdot\uu = \bbar\cdot \uu \plusl \xbar\cdot(\transA \uu), \end{equation} with the second and third equalities following from Proposition~\ref{pr:i:6} and Theorem~\ref{thm:mat-mult-def}, respectively. 
We claim that \begin{equation} \label{eq:lem:f-conv-S-in-conv-F-S:2} F(\zbar)\cdot\uu \leq \sup_{\xbar\in S} [F(\xbar)\cdot\uu]. \end{equation} If $\bbar\cdot\uu\in\{-\infty,+\infty\}$, then, by \eqref{eq:lem:f-conv-S-in-conv-F-S:1}, $F(\xbar)\cdot\uu=\bbar\cdot\uu$ for all $\xbar\in\extspace$, implying that \eqref{eq:lem:f-conv-S-in-conv-F-S:2} must hold (with equality) in this case. Otherwise, $\bbar\cdot\uu\in\R$, so \begin{align*} F(\zbar)\cdot\uu &= \bbar\cdot\uu + \zbar\cdot(\transA \uu) \\ &\leq \bbar\cdot\uu + \sup_{\xbar\in S} [\xbar\cdot(\transA \uu)] \\ &= \sup_{\xbar\in S} [\bbar\cdot\uu + \xbar\cdot(\transA \uu)] = \sup_{\xbar\in S} [F(\xbar)\cdot\uu]. \end{align*} The first and last equalities are by \eqref{eq:lem:f-conv-S-in-conv-F-S:1}. The inequality is by \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}) since $\zbar\in\ohull{S}$. Thus, \eqref{eq:lem:f-conv-S-in-conv-F-S:2} holds for all $\uu\in\R^m$. Therefore, $F(\zbar)\in \ohull{F(S)}$, again by \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}). \end{proof} To prove the reverse inclusion, we first give lemmas for two special cases; we then combine these to give the general result. We begin with the special case that $\bbar=\qq\in\Rm$: \begin{lemma} \label{thm:e:9:lem:b} Assume the setup of Theorem~\ref{thm:e:9} with the additional assumption that $\bbar=\qq$ for some $\qq\in\Rm$ (so that $F(\zbar)=\qq\plusl\A\zbar$ for $\zbar\in\extspace$). Then \[ \ohull{F(V)} \subseteq F(\ohull{V}). \] \end{lemma} \begin{proof} Let $\zbar'\in \ohull F(V)$. We will show there exists $\zbar\in\ohull V$ such that $\zbar'=F(\zbar)$. By \Cref{thm:e:7}, for $i=1,\dotsc,\ell$, there exist sequences $\seq{\yy_{it}}$ in $\Rm$ and $\seq{\lambda_{it}}$ in $[0,1]$ such that $\yy_{it}\to F(\xbar_i)=\qq\plusl \A\xbar_i$ for all $i$, $\sum_{i=1}^\ell\lambda_{it}=1$ for all $t$, and $\sum_{i=1}^\ell\lambda_{it}\yy_{it}\to\zbar'$. 
For each $i$, we further have that $\yy_{it}-\qq\rightarrow \A\xbar_i$ (by Proposition~\ref{pr:i:7}\ref{pr:i:7f}), implying, by \Cref{thm:inv-lin-seq}, that there exists a sequence $\seq{\xx_{it}}$ in $\Rn$ such that $\xx_{it}\to\xbar_i$ and $\A\xx_{it}+\qq-\yy_{it}\to\zero$. Let $\zz_t=\sum_{i=1}^\ell\lambda_{it}\xx_{it}$. By sequential compact\-ness of $\eRn$, the sequence $\seq{\zz_t}$ must have a subsequence that converges to some $\zbar\in\eRn$. Discarding all other sequence elements (as well as the corresponding elements of the other sequences), we obtain sequences $\seq{\xx_{it}}$ and $\seq{\lambda_{it}}$ that satisfy the conditions of \Cref{thm:e:7}, with $\sum_{i=1}^\ell\lambda_{it}\xx_{it}=\zz_t\to\zbar$, so $\zbar\in\ohull V$. It remains to show that $\zbar'=F(\zbar)$. We have \begin{align} F(\zz_t) = \A\zz_t+\qq &= \sum_{i=1}^\ell \lambda_{it} (\A\xx_{it} + \qq) \notag \\ &= \sum_{i=1}^\ell \lambda_{it} \yy_{it} + \sum_{i=1}^\ell \lambda_{it} (\A\xx_{it} + \qq - \yy_{it}) \rightarrow \zbar'. \label{eq:thm:e:9:lem:b:1} \end{align} The second equality is by linearity, noting that $\sum_{i=1}^\ell \lambda_{it}= 1$. The convergence is by Proposition~\ref{pr:i:7}(\ref{pr:i:7g}), and because $\sum_{i=1}^\ell \lambda_{it}(\A\xx_{it}+\qq-\yy_{it})\to\zero$, since $\lambda_{it}\in[0,1]$ and $\A\xx_{it}+\qq-\yy_{it}\to\zero$ for all~$i$. On the other hand, $F(\zz_t)\rightarrow F(\zbar)$ by \Cref{cor:aff-cont}. Combined with \eqref{eq:thm:e:9:lem:b:1}, it follows that $F(\zbar)=\zbar'$, so $\zbar'\in F(\ohull{V})$. \end{proof} The next lemma handles the special case in which $\A$ is the identity matrix and $\bbar$ is an icon. \begin{lemma} \label{thm:e:9:lem:c} Let $\ebar\in\corezn$ and let $V=\{\xbar_1,\dotsc,\xbar_\ell\}\subseteq\extspace$. Then \[ \ohull{(\ebar\plusl V)} \subseteq \ebar\plusl(\ohull{V}). \] \end{lemma} \begin{proof} Let $\zbar\in\ohull{(\ebar\plusl V)}$, which we aim to show is in $\ebar\plusl(\ohull{V})$. 
We claim first that $\zbar=\ebar\plusl\zbar$. Otherwise, if this were not the case, then by Proposition~\refequiv{pr:cl-gal-equiv}{pr:cl-gal-equiv:c}{pr:cl-gal-equiv:b}, there must exist $\uu\in\Rn$ with $\ebar\cdot\uu\neq 0$ and $\zbar\cdot\uu\neq\ebar\cdot\uu$. Since $\ebar$ is an icon, this implies $\ebar\cdot\uu\in\{-\infty,+\infty\}$; without loss of generality, we assume $\ebar\cdot\uu=-\infty$ (otherwise replacing $\uu$ with $-\uu$). We then have \[ -\infty < \zbar\cdot\uu \leq \max\Braces{(\ebar\plusl\xbar_i)\cdot\uu :\: i=1,\ldots,\ell} = -\infty. \] The first inequality is because $\zbar\cdot\uu\neq\ebar\cdot\uu=-\infty$. The second is by \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}), since $\zbar\in\ohull{(\ebar\plusl V)}$. The equality is because, for all $i$, $(\ebar\plusl\xbar_i)\cdot\uu=\ebar\cdot\uu\plusl\xbar_i\cdot\uu=-\infty$. Having reached a contradiction, we conclude that $\zbar=\ebar\plusl\zbar$. We can write $\ebar=\WW\omm$ for some matrix $\WW\in\Rnk$ with $k\geq 0$. Let $\PP$ be the projection matrix onto $(\colspace\WW)^\perp$, the linear subspace orthogonal to the columns of $\WW$. Also, let $P$ be the associated astral linear map, $P(\xbar)=\PP\xbar$ for $\xbar\in\extspace$. Then \begin{align*} \PP\zbar &\in P\Parens{\ohull\Braces{\ebar\plusl\xbar_i : i=1,\ldots,\ell}} \\ &\subseteq \ohull\Braces{\PP\ebar\plusl\PP\xbar_i : i=1,\ldots,\ell} \\ &= \ohull\Braces{\PP\xbar_i : i=1,\ldots,\ell} \\ &\subseteq P\parens{\ohull\Braces{\xbar_i : i=1,\ldots,\ell}} \\ &= P(\ohull{V}). \end{align*} The first inclusion is because $\zbar\in\ohull{(\ebar\plusl V)}$. The second is by Lemma~\ref{lem:f-conv-S-in-conv-F-S} (applied with $F=P$ and $S=V$). The first equality is because $\PP\WW=\zeromat{n}{k}$ (by \Cref{pr:proj-mat-props}\ref{pr:proj-mat-props:e}), implying $\PP\ebar=\zero$. The third inclusion is by \Cref{thm:e:9:lem:b} (with $F=P$). 
Thus, $\PP\zbar\in P(\ohull{V})$, so there exists $\ybar\in\ohull{V}$ such that $\PP\zbar=\PP\ybar$. Consequently, \[ \zbar = \ebar\plusl\zbar = \ebar\plusl\PP\zbar = \ebar\plusl\PP\ybar = \ebar\plusl\ybar, \] where the second and fourth equalities are both by the Projection Lemma (\Cref{lemma:proj}). Therefore, $\zbar\in \ebar\plusl(\ohull{V})$, proving the claim. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:e:9}] We return now to the fully general setup of Theorem~\ref{thm:e:9}. We can write $\bbar=\ebar\plusl\qq$ for some icon $\ebar\in\corez{m}$ and $\qq\in\Rm$. Also, let $G:\extspace\rightarrow\extspac{m}$ be defined by $G(\zbar)=\qq\plusl\A\zbar$ for $\zbar\in\extspace$, implying $F(\zbar)=\bbar\plusl\A\zbar=\ebar\plusl G(\zbar)$. Then we have \begin{align*} \ohull{F(V)} = \ohull(\ebar\plusl G(V)) &\subseteq \ebar\plusl\ohull{G(V)} \\ &\subseteq \ebar\plusl G(\ohull{V}) = F(\ohull{V}) \subseteq \ohull{F(V)}, \end{align*} where the inclusions follow, respectively, from Lemmas~\ref{thm:e:9:lem:c},~\ref{thm:e:9:lem:b} and~\ref{lem:f-conv-S-in-conv-F-S}. This proves the theorem. \end{proof} Theorem~\ref{thm:e:9} does not hold in general if an astral point $\bbar$ is added on the \emph{right} rather than the left, that is, for mappings of the form $\xbar\mapsto \A\xbar\plusl\bbar$. For instance, in~$\eRf{1}$, suppose $F(\ex)=\ex\plusl(-\infty)$ and $V=\set{0,+\infty}$. Then $\ohull V=[0,+\infty]$, so $F(\ohull V)=\set{-\infty,+\infty}$, but $\ohull F(V)=[-\infty,+\infty]$. The next two corollaries of Theorem~\ref{thm:e:9} show that both the image and inverse image of a convex set under an affine map are convex. \begin{corollary} \label{cor:thm:e:9} Let $F:\extspace\rightarrow\extspac{m}$ be an affine map (as in Theorem~\ref{thm:e:9}), and let $S$ be a convex subset of $\extspace$. Then $F(S)$ is also convex. \end{corollary} \begin{proof} Let $F(\xbar)$ and $F(\ybar)$ be any two points of $F(S)$, where $\xbar,\ybar\in S$.
Then \[ \seg\bigParens{F(\xbar),F(\ybar)} = F\bigParens{\seg(\xbar,\ybar)} \subseteq F(S), \] with the equality from Theorem~\ref{thm:e:9} and the inclusion following from the convexity of~$S$. Thus, $F(S)$ is convex. \end{proof} \begin{corollary} \label{cor:inv-image-convex} Let $F:\extspace\rightarrow\extspac{m}$ be an affine map (as in Theorem~\ref{thm:e:9}), and let $S$ be a convex subset of $\extspac{m}$. Then $F^{-1}(S)$ is also convex. \end{corollary} \begin{proof} Let $\xbar,\ybar\in F^{-1}(S)$, so that $F(\xbar),F(\ybar)\in S$. Then \[ F\bigParens{\seg(\xbar,\ybar)} = \seg\bigParens{F(\xbar),F(\ybar)} \subseteq S, \] with the equality from Theorem~\ref{thm:e:9} and the inclusion from the convexity of $S$. This implies $\seg(\xbar,\ybar)\subseteq F^{-1}(S)$. Thus, $F^{-1}(S)$ is convex. \end{proof} The next corollary shows that Theorem~\ref{thm:e:9} holds for the convex hull of arbitrary sets. (Theorem~\ref{thm:e:9} is then a special case of the corollary in which $S$ is finite.) \begin{corollary} \label{cor:thm:e:9b} Let $F:\extspace\rightarrow\extspac{m}$ be an affine map (as in Theorem~\ref{thm:e:9}), and let $S\subseteq\extspace$. Then \[ \conv{F(S)} = F(\conv{S}). \] \end{corollary} \begin{proof} Since $F(S)$ is included in $F(\conv{S})$, and since the latter set is convex by Corollary~\ref{cor:thm:e:9}, we must have $\conv{F(S)}\subseteq F(\conv{S})$ (by \Cref{pr:conhull-prop}\ref{pr:conhull-prop:aa}). For the reverse inclusion, suppose $F(\xbar)$ is any point in $F(\conv{S})$, where $\xbar\in\conv{S}$. Then $\xbar\in\simplex{V}$ for some finite $V\subseteq S$, by Theorem~\ref{thm:convhull-of-simpices}. Thus, \[ F(\xbar) \in F(\simplex{V}) = \simplex{F(V)} \subseteq \conv{F(S)}. \] The equality is by Theorem~\ref{thm:e:9}. The inclusion again uses Theorem~\ref{thm:convhull-of-simpices}. \end{proof} \subsection{The segment joining a point and the origin} \label{sec:seg-join-point-origin} We next explicitly work out $\lb{\zero}{\xbar}$, the segment joining the origin and an arbitrary point $\xbar\in\extspace$.
We saw instances of this type of segment in Examples~\ref{ex:seg-zero-e1}, \ref{ex:seg-zero-oe2}, and~\ref{ex:seg-zero-oe2-plus-e1}. \begin{theorem} \label{thm:lb-with-zero} Suppose $\xbar=\limrays{\vv_1,\dotsc,\vv_k}\plusl\qq$ where $\vv_1,\dotsc,\vv_k,\qq\in\Rn$. Then \begin{align} \notag \lb{\zero}{\xbar} &= \bigSet{ \limrays{\vv_1,\dotsc,\vv_{j-1}}\plusl\lambda\vv_j :\: j\in\{1,\dotsc,k\},\, \lambda\in\R_{\ge 0} } \\ \label{eq:seg:zero} &\qquad{} \cup\, \bigSet{ \limrays{\vv_1,\dotsc,\vv_k}\plusl\lambda\qq :\: \lambda\in [0,1] }. \end{align} \end{theorem} The representation of $\xbar$ given in the theorem need not be canonical. The theorem states that the segment $\seg(\zero,\xbar)$ consists of several blocks. The first block starts at the origin and passes along the ray $\set{\lambda\vv_1:\:\lambda\in\R_{\ge0}}$ to the astron $\limray{\vv_1}$, which is where the second block starts, continuing along $\set{\limray{\vv_1}\plusl\lambda\vv_2:\:\lambda\in\R_{\ge0}}$, and so on, until the final block, $\set{\limrays{\vv_1,\dotsc,\vv_k}\plusl\lambda\qq :\: \lambda\in [0,1] }$. We saw this block structure in \Cref{ex:seg-zero-oe2-plus-e1}. Before giving the general proof, we give some lemmas for special cases. The first allows us to determine the finite points in a segment joining $\zero$ and any infinite point, which turn out to depend only on the point's dominant direction. \begin{lemma} \label{lem:lb-with-zero:fin-part} Let $\vv\in\Rn\setminus\{\zero\}$ and $\ybar\in\extspace$. Then \begin{equation} \label{eq:lem:lb-with-zero:fin-part:2} \lb{\zero}{\limray{\vv}\plusl\ybar} \cap \Rn = \set{ \lambda\vv : \lambda\in\Rpos }. 
\end{equation} \end{lemma} \begin{proof} From \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}), a point $\zz\in\Rn$ is in $\lb{\zero}{\limray{\vv}\plusl\ybar}$ if and only if \begin{equation} \label{eq:lem:lb-with-zero:fin-part:1} \zz\cdot\uu \leq \max\{0,\, \limray{\vv}\cdot\uu\plusl\ybar\cdot\uu\} \end{equation} for all $\uu\in\Rn$. We aim to show that $\zz$ satisfies this inequality if and only if it has the stated form. Suppose first that $\zz=\lambda\vv$ for some $\lambda\in\Rpos$, and let $\uu\in\Rn$. If $\vv\cdot\uu>0$, then \eqref{eq:lem:lb-with-zero:fin-part:1} is satisfied since its right-hand side is $+\infty$. Otherwise, if $\vv\cdot\uu\leq 0$, then \eqref{eq:lem:lb-with-zero:fin-part:1} is also satisfied since its right-hand side is at least $0$, while $\zz\cdot\uu\leq 0$. Thus, $\zz\in\lb{\zero}{\limray{\vv}\plusl\ybar}$. Suppose now that $\zz$ does not have the stated form. In general, we can write $\zz=\alpha\vv+\zperp$ for some $\alpha\in\R$ and $\zperp\in\Rn$ with $\vv\cdot\zperp=0$. Let $\uu=\beta\zperp-\vv$ for some $\beta\in\R$. For this choice of $\uu$, the right-hand side of \eqref{eq:lem:lb-with-zero:fin-part:1} is equal to $0$ (regardless of $\beta$) since $\vv\cdot\uu=-\norm{\vv}^2<0$. On the other hand, $\zz\cdot\uu = \beta\norm{\zperp}^2 - \alpha\norm{\vv}^2$. Thus, if $\zperp\neq\zero$, then $\zz\cdot\uu>0$ for $\beta$ sufficiently large. Otherwise, if $\zperp=\zero$ and $\alpha<0$, then also $\zz\cdot\uu>0$ (for any $\beta$). In either case, there exists $\uu\in\Rn$ for which \eqref{eq:lem:lb-with-zero:fin-part:1} is not satisfied, so $\zz\not\in\lb{\zero}{\limray{\vv}\plusl\ybar}$. 
\end{proof} Next, we characterize all the infinite points on a segment joining $\zero$ and an astral point in terms of the canonical representations of the points: \begin{lemma} \label{lem:lb-with-zero:icon-part} Let $\xbar,\zbar\in\extspace$ have canonical representations $\xbar=\VV\omm\plusl\qq$ and $\zbar=\VV'\omm\plusl\rr$, where $\VV\in\Rnk$, $\VV'\in\R^{n\times k'}$ and $\qq,\rr\in\Rn$. Then the following are equivalent: \begin{letter-compact} \item \label{lem:lb-with-zero:icon-part:a} $\zbar\in\lb{\zero}{\xbar}$. \item \label{lem:lb-with-zero:icon-part:b} There exists $\WW\in\R^{n\times (k-k')}$ (implying $k\geq k'$) such that $\VV=[\VV',\WW]$ and $\rr\in\lb{\zero}{\WW\omm\plusl\qq}$. \end{letter-compact} \end{lemma} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{lem:lb-with-zero:icon-part:b}) $\Rightarrow$ (\ref{lem:lb-with-zero:icon-part:a}): } Suppose statement~(\ref{lem:lb-with-zero:icon-part:b}) holds. From \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}), for all $\uu\in\Rn$, we have \begin{equation} \label{eq:lem:lb-with-zero:icon-part:1} \rr\cdot\uu \leq \max\{0,\, \WW\omm\cdot\uu \plusl \qq\cdot\uu\}, \end{equation} and we aim to show \begin{equation} \label{eq:lem:lb-with-zero:icon-part:2} \VV'\omm\cdot\uu \plusl \rr\cdot\uu \leq \max\{0,\, \VV\omm\cdot\uu \plusl \qq\cdot\uu\}. \end{equation} Let $\uu\in\Rn$. Note that $\VV\omm=\VV'\omm\plusl\WW\omm$ (\Cref{prop:split}), implying \begin{equation} \label{eq:lem:lb-with-zero:icon-part:3} \VV\omm\cdot\uu=\VV'\omm\cdot\uu\plusl\WW\omm\cdot\uu. \end{equation} By Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:b}{pr:icon-equiv:c}, $\VV'\omm\cdot\uu\in\{-\infty,0,+\infty\}$. If $\VV'\omm\cdot\uu=-\infty$, then \eqref{eq:lem:lb-with-zero:icon-part:2} holds since its left-hand side is $-\infty$. 
If $\VV'\omm\cdot\uu=+\infty$, then \eqref{eq:lem:lb-with-zero:icon-part:2} also holds, since in that case $\VV\omm\cdot\uu=+\infty$ (by Eq.~\ref{eq:lem:lb-with-zero:icon-part:3}), so that \eqref{eq:lem:lb-with-zero:icon-part:2}'s right-hand side is $+\infty$. Finally, if $\VV'\omm\cdot\uu=0$ then $\VV\omm\cdot\uu=\WW\omm\cdot\uu$ so that \eqref{eq:lem:lb-with-zero:icon-part:2} follows from \eqref{eq:lem:lb-with-zero:icon-part:1}. \pfpart{(\ref{lem:lb-with-zero:icon-part:a}) $\Rightarrow$ (\ref{lem:lb-with-zero:icon-part:b}): } Suppose $\zbar\in\lb{\zero}{\xbar}$. Let $\ebar=\VV'\omm$. We claim first that $\xbar$ is in $\ebar \plusl \extspace$. If this is not the case, then by \Cref{pr:cl-gal-equiv}(\ref{pr:cl-gal-equiv:a},\ref{pr:cl-gal-equiv:b}), there exists $\uu\in\Rn$ such that $\ebar\cdot\uu\neq 0$ and $\xbar\cdot\uu\neq\ebar\cdot\uu$. Since $\ebar$ is an icon, this implies that $\ebar\cdot\uu$ is $\pm\infty$. Without loss of generality, assume $\ebar\cdot\uu=+\infty$ (otherwise, we can replace $\uu$ with $-\uu$). Then $\xbar\cdot\uu<+\infty$, so \[ \zbar\cdot\uu = \ebar\cdot\uu + \rr\cdot\uu = +\infty > \max\{0,\, \xbar\cdot\uu\}. \] By \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}), this contradicts that $\zbar\in\lb{\zero}{\xbar}$. Thus, $\VV\omm\plusl\qq=\xbar=\VV'\omm\plusl\ybar$ for some $\ybar\in\extspace$. From \Cref{lem:uniq-ortho-prefix}(\ref{lem:uniq-ortho-prefix:a}), this implies that $\VV'$ is a prefix of $\VV$, so $k'\leq k$ and $\VV=[\VV',\WW]$, where $\WW$ is the matrix of the remaining $k-k'$ columns of $\VV$. It remains to show that $\rr\in\lb{\zero}{\WW\omm\plusl\qq}$. Let $\PP$ be the projection matrix onto $(\colspace{\VV'})^{\perp}$. Let $\uu\in\Rn$. Since $\rr\perp\VV'$, we have $\rr\cdot(\PP\uu)=(\PP\rr)\cdot\uu=\rr\cdot\uu$ by \Cref{pr:proj-mat-props}(\ref{pr:proj-mat-props:a},\ref{pr:proj-mat-props:d}). Likewise, $\qq\perp\VV$, implying $\qq\perp\VV'$, so $\qq\cdot(\PP\uu)=\qq\cdot\uu$. 
Since $\VV'\perp\WW$, we similarly have that \[ \VV'\omm\cdot(\PP\uu)=(\PP\VV')\omm\cdot\uu=0, \mbox{~~and~~} \WW\omm\cdot(\PP\uu)=(\PP\WW)\omm\cdot\uu=\WW\omm\cdot\uu, \] by \Cref{thm:mat-mult-def} and \Cref{pr:proj-mat-props}(\ref{pr:proj-mat-props:a},\ref{pr:proj-mat-props:d},\ref{pr:proj-mat-props:e}). These further imply that $\VV\omm\cdot(\PP\uu)=\VV'\omm\cdot(\PP\uu)+\WW\omm\cdot(\PP\uu)=\WW\omm\cdot\uu$. Combining, we thus have that \begin{align*} \rr\cdot\uu = \VV'\omm\cdot(\PP\uu)\plusl\rr\cdot(\PP\uu) &= \zbar\cdot(\PP\uu) \\ &\leq \max\Braces{0,\, \xbar\cdot(\PP\uu)} \\ &= \max\Braces{0,\, \VV\omm\cdot(\PP\uu)\plusl\qq\cdot(\PP\uu)} \\ &= \max\Braces{0,\, \WW\omm\cdot\uu\plusl\qq\cdot\uu}, \end{align*} where the inequality is by \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}). Since this holds for all $\uu\in\Rn$, this proves $\rr\in\lb{\zero}{\WW\omm\plusl\qq}$ (again by \Cref{pr:seg-simplify}\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}). \qedhere \end{proof-parts} \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:lb-with-zero}] Let $\VV=[\vv_1,\ldots,\vv_k]$ so that $\xbar=\VV\omm\plusl\qq$. For the moment, let us assume that this representation of $\xbar$ is canonical. For $j=1,\ldots,k+1$, let $\VV'_j=[\vv_1,\ldots,\vv_{j-1}]$ and $\WW_j=[\vv_j,\ldots,\vv_k]$. Also let $R_j = \lb{\zero}{\WW_j\omm\plusl\qq}\cap\Rn$. By \Cref{lem:lb-with-zero:icon-part}, considering all possible prefixes of $\VV$, we have \[ \lb{\zero}{\xbar} = \bigcup_{j=1}^{k+1} \Parens{ \VV'_j\omm \plusl R_j }. \] If $j\leq k$ then $R_j = \set{ \lambda\vv_j : \lambda\in\Rpos }$ by \Cref{lem:lb-with-zero:fin-part}. And if $j=k+1$ then $R_{k+1} = \lb{\zero}{\qq} = \set{ \lambda\qq : \lambda\in[0,1] }$ by ordinary convexity (\Cref{pr:e1}\ref{pr:e1:a}). Combining, these yield the expression given in the theorem. Now suppose $\xbar=\VV\omm\plusl\qq$ is no longer necessarily $\xbar$'s canonical representation. 
Let $\xbar'=[\ee_1,\dotsc,\ee_k]\omm\plusl\ee_{k+1}$, where $\ee_1,\dotsc,\ee_{k+1}$ are the standard basis vectors in $\R^{k+1}$. Let $\A=[\VV,\qq]$, and let $A:\eRf{k+1}\to\eRf{n}$ be the associated astral linear map, $A(\zbar)=\A\zbar$ for $\zbar\in\eRf{k+1}$. Then $A(\xbar')=\xbar$, so by \Cref{thm:e:9}, $\seg(\zero,\xbar)=\seg(\zero,\A\xbar')=A\bigParens{\seg(\zero,\xbar')}$. By the argument above, \eqref{eq:seg:zero} holds for the given canonical representation of $\xbar'$. Applying the astral linear map $A$ to the right-hand side of \eqref{eq:seg:zero} for $\xbar'$, we then obtain \eqref{eq:seg:zero} for $\xbar$, since $\A\ee_j=\vv_j$ for $j=1,\dotsc,k$, and $\A\ee_{k+1}=\qq$, completing the proof. \end{proof} As an immediate corollary, we can also compute the segment joining an arbitrary point in $\Rn$ and any other point in $\extspace$: \begin{corollary} \label{cor:lb-with-finite} Let $\yy\in\Rn$, and suppose that $\xbar=\limrays{\vv_1,\dotsc,\vv_k}\plusl\qq$ where $\vv_1,\dotsc,\vv_k,\qq\in\Rn$. Then \begin{align*} \lb{\yy}{\xbar} &= \bigSet{ \limrays{\vv_1,\dotsc,\vv_{j-1}}\plusl \parens{\lambda\vv_j+\yy} :\: j\in\{1,\dotsc,k\},\, \lambda\in\R_{\ge 0} } \\ &\qquad{} \cup\, \bigSet{ \limrays{\vv_1,\dotsc,\vv_k}\plusl \parens{\lambda\qq + (1-\lambda)\yy} :\: \lambda\in [0,1] }. \end{align*} \end{corollary} \begin{proof} By \Cref{thm:e:9}, \begin{equation} \label{eqn:cor:lb-with-finite:1} \lb{\yy}{\xbar} = \yy \plusl \seg\bigParens{\zero,\xbar \plusl (-\yy)}. \end{equation} The result now follows by evaluating the right-hand side using \Cref{thm:lb-with-zero}. \end{proof} Here is a simple but useful consequence of Corollary~\ref{cor:lb-with-finite}: \begin{corollary} \label{cor:d-in-lb-0-dplusx} Let $\ebar\in\corezn$, and let $\xbar\in\extspace$. Then \begin{equation} \label{eq:cor:d-in-lb-0-dplusx:1} \ebar \in \lb{\zero}{\ebar} \subseteq \lb{\zero}{\ebar\plusl\xbar}. 
\end{equation} \end{corollary} \begin{proof} The first inclusion of \eqref{eq:cor:d-in-lb-0-dplusx:1} is immediate. By Proposition~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:c}, $\ebar=\limrays{\vv_1,\dotsc,\vv_k}$ for some $\vv_1,\dotsc,\vv_k\in\Rn$, while $\xbar=\limrays{\ww_1,\dotsc,\ww_\ell}\plusl\qq$ for some $\ww_1,\dotsc,\ww_\ell,\qq\in\Rn$. Since \[ \ebar\plusl\xbar=\limrays{\vv_1,\dotsc,\vv_k, \ww_1,\dotsc,\ww_\ell}\plusl\qq, \] it follows from a direct application of Theorem~\ref{thm:lb-with-zero} that $\ebar \in \lb{\zero}{\ebar\plusl\xbar}$. Since $\zero$ is also in $\lb{\zero}{\ebar\plusl\xbar}$, this implies the second inclusion of \eqref{eq:cor:d-in-lb-0-dplusx:1} by definition of convexity (and \Cref{pr:e1}\ref{pr:e1:ohull}). \end{proof} As another corollary, we derive next an inductive description of $\lb{\zero}{\xbar}$. We show that if $\xbar\in\extspace$ has dominant direction $\vv$, and thus, if $\xbar=\limray{\vv}\plusl\xbar'$ for some $\xbar'$, then the points in $\lb{\zero}{\xbar}$ are of two types: the finite points in $\Rn$, which are exactly all of the nonnegative multiples of $\vv$; and the infinite points, which are the points of the form $\limray{\vv}\plusl\zbar'$ where $\zbar'$ is in the segment joining $\zero$ and $\xbar'$. \begin{corollary} \label{cor:seg:zero} Suppose $\xbar=\limray{\vv}\plusl\xbar'$, for some $\vv\in\Rn$ and $\xbar'\in\extspace$. Then \begin{equation} \label{eq:cor:seg:zero:1} \lb{\zero}{\xbar} = \bigBraces{ \lambda \vv :\: \lambda \in \Rpos } \,\cup\, \bigBracks{\limray{\vv} \plusl \lb{\zero}{\xbar'}}. \end{equation} \end{corollary} \begin{proof} We can write $\xbar'=\limrays{\vv_1,\ldots,\vv_k}\plusl\qq$ for some $\vv_1,\ldots,\vv_k,\qq\in\Rn$. Then $\xbar=\limrays{\vv,\vv_1,\ldots,\vv_k}\plusl\qq$. 
Computing $\lb{\zero}{\xbar}$ and $\lb{\zero}{\xbar'}$ according to Theorem~\ref{thm:lb-with-zero}, the claim then follows by direct examination of the points comprising each side of \eqref{eq:cor:seg:zero:1}. \end{proof} An ordinary segment $\lb{\xx}{\yy}$ joining finite points $\xx$ and $\yy$ in $\Rn$ can be split into two segments at any point $\zz\in\lb{\xx}{\yy}$. The same is true for the segment joining a finite point and any other astral point, as shown next. \begin{theorem} \label{thm:decomp-seg-at-inc-pt} Let $\xbar\in\extspace$, let $\yy\in\Rn$, and let $\zbar\in\lb{\yy}{\xbar}$. Then \begin{equation} \label{eq:thm:decomp-seg-at-inc-pt:2} \lb{\yy}{\xbar} = \lb{\yy}{\zbar} \cup \lb{\zbar}{\xbar}, \end{equation} and \begin{equation} \label{eq:thm:decomp-seg-at-inc-pt:3} \lb{\yy}{\zbar} \cap \lb{\zbar}{\xbar} = \{\zbar\}. \end{equation} \end{theorem} \begin{proof} We first consider the special case that $\yy=\zero$, and then return to the general case. Let $\xbar=[\vv_1,\dotsc,\vv_k]\omm\plusl\vv_{k+1}$ be the canonical representation of $\xbar$. Since $\zbar\in\seg(\zero,\xbar)$, by \Cref{thm:lb-with-zero} it can be written as $\zbar=[\vv_1,\dotsc,\vv_{j-1}]\omm\plusl\lambda\vv_j$ for some $1\le j\le k+1$ and $\lambda\in\R_{\ge0}$, with $\lambda\le 1$ if $j=k+1$. Let $\ebar=[\vv_1,\dotsc,\vv_{j-1}]\omm$ and $\vv=\vv_j$, so $\zbar=\ebar\plusl\lambda\vv$. We first prove \eqref{eq:thm:decomp-seg-at-inc-pt:2}, distinguishing two cases. First, if $j=k+1$ then $\xbar=\ebar\plusl\vv$ and $\lambda\le 1$. In this case, \begin{align} \seg(\zbar,\xbar) &= \seg(\ebar\plusl\lambda\vv,\,\ebar\plusl\vv) \notag \\ &= \ebar\plusl\lambda\vv\plusl \seg\bigParens{\zero,(1-\lambda)\vv} \notag \\ &= \ebar\plusl\lambda\vv\plusl \Braces{\lambda'\vv :\: \lambda'\in [0,1-\lambda]} \notag \\ &= \Braces{\ebar\plusl\lambda'\vv :\: \lambda'\in [\lambda,1]}. 
\label{eq:thm:decomp-seg-at-inc-pt:1} \end{align} The second equality is by \Cref{thm:e:9}, and the third by \Cref{pr:e1}(\ref{pr:e1:a}). Evaluating $\lb{\zero}{\xbar}$ and $\lb{\zero}{\zbar}$ using \Cref{thm:lb-with-zero}, and combining with \eqref{eq:thm:decomp-seg-at-inc-pt:1}, we see that \eqref{eq:thm:decomp-seg-at-inc-pt:2} holds in this case. In the alternative case, $j\le k$, so $\xbar=\ebar\plusl\limray{\vv}\plusl\xbar'$ where $\xbar'=[\vv_{j+1},\dotsc,\vv_k]\omm\plusl\vv_{k+1}$. Then \begin{align} \seg(\zbar,\xbar) &= \seg(\ebar\plusl\lambda\vv,\,\ebar\plusl\limray{\vv}\plusl\xbar') \notag \\ &= \ebar\plusl\lambda\vv\plusl \seg(\zero,\limray{\vv}\plusl\xbar') \notag \\ &= \ebar\plusl\lambda\vv\plusl \bigBracks{ \Braces{\lambda'\vv :\: \lambda'\in\Rpos} \,\cup\, \bigParens{\limray{\vv}\plusl \lb{\zero}{\xbar'}} } \notag \\ &= \bigBraces{\ebar\plusl\lambda'\vv :\: \lambda'\in [\lambda,+\infty)} \,\cup\, \bigBracks{ \ebar\plusl\limray{\vv}\plusl \lb{\zero}{\xbar'} }. \label{eq:thm:decomp-seg-at-inc-pt:4} \end{align} The second equality is by \Cref{thm:e:9} (and since $\lambda\vv\plusl\limray{\vv}=\limray{\vv}$). The third is by \Cref{cor:seg:zero}. As before, we can then see that \eqref{eq:thm:decomp-seg-at-inc-pt:2} holds in this case by evaluating $\lb{\zero}{\xbar}$, $\lb{\zero}{\zbar}$ and $\lb{\zero}{\xbar'}$ using \Cref{thm:lb-with-zero}, and combining with \eqref{eq:thm:decomp-seg-at-inc-pt:4}. We turn next to proving \eqref{eq:thm:decomp-seg-at-inc-pt:3}. For either of the cases above, let $\beta=\lambda\norm{\vv}^2$, and let $H=\set{\zbar'\in\eRn:\:\zbar'\inprod\vv\le\beta}$, which is convex (by \Cref{pr:e1}\ref{pr:e1:c}). Then both $\zero$ and $\zbar$ are in $H$ (since $\zero\cdot\vv=0\leq\beta$ and $\zbar\cdot\vv=\lambda\norm{\vv}^2=\beta$ as a result of $\vv_1,\ldots,\vv_{k+1}$ being orthogonal to one another). Therefore, $\seg(\zero,\zbar)\subseteq H$. 
We will show further that \begin{equation} \label{eq:thm:decomp-seg-at-inc-pt:5} H\cap\seg(\zbar,\xbar)\subseteq\set{\zbar}, \end{equation} which will imply \[ \set{\zbar} \subseteq \lb{\zero}{\zbar} \cap \lb{\zbar}{\xbar} \subseteq H\cap\lb{\zbar}{\xbar} \subseteq \set{\zbar}, \] thereby proving \eqref{eq:thm:decomp-seg-at-inc-pt:3}. Returning to the cases above, if $j\leq k$, then from \eqref{eq:thm:decomp-seg-at-inc-pt:4}, it is apparent that the only point $\zbar'\in\seg(\zbar,\xbar)$ that satisfies $\zbar'\inprod\vv\le\beta$ is $\zbar=\ebar\plusl\lambda\vv$ (noting again the orthogonality of $\vv_1,\ldots,\vv_{k+1}$, and also that $\norm{\vv}=1$), thus proving \eqref{eq:thm:decomp-seg-at-inc-pt:5}. The argument is similar if $j=k+1$. If $\zbar'\in\seg(\zbar,\xbar)$ then by \eqref{eq:thm:decomp-seg-at-inc-pt:1}, $\zbar'=\ebar\plusl\lambda'\vv$ for some $\lambda'\in[\lambda,1]$, implying $\zbar'\cdot\vv=\lambda'\norm{\vv}^2$, which is at most $\beta=\lambda\norm{\vv}^2$ if and only if either $\lambda=\lambda'$ or $\vv=\zero$. In either case, $\zbar'=\ebar\plusl\lambda'\vv=\ebar\plusl\lambda\vv=\zbar$, proving \eqref{eq:thm:decomp-seg-at-inc-pt:5}. This completes the proof when $\yy=\zero$. For the general case (when $\yy$ is not necessarily $\zero$), let $\xbar'=-\yy\plusl\xbar$ and $\zbar'=-\yy\plusl\zbar$. Since $\zbar\in\lb{\yy}{\xbar}$, we also have \[ \zbar' = -\yy\plusl\zbar \in \bigBracks{ -\yy\plusl\lb{\yy}{\xbar} } = \lb{\zero}{\xbar'}, \] where the last equality is by \Cref{thm:e:9}. Thus, \begin{align*} \lb{\yy}{\zbar} \,\cup\, \lb{\zbar}{\xbar} &= \bigBracks{\yy \plusl \lb{\zero}{\zbar'}} \,\cup\, \bigBracks{\yy \plusl \lb{\zbar'}{\xbar'}} \\ &= \yy \plusl \bigBracks{ \lb{\zero}{\zbar'} \,\cup\, \lb{\zbar'}{\xbar'} } \\ &= \yy \plusl \lb{\zero}{\xbar'} = \lb{\yy}{\xbar}, \end{align*} where the first and fourth equalities are by \Cref{thm:e:9}, and the third is by the proof above. 
Similarly, \begin{align*} \lb{\yy}{\zbar} \,\cap\, \lb{\zbar}{\xbar} &= \bigBracks{\yy \plusl \lb{\zero}{\zbar'}} \,\cap\, \bigBracks{\yy \plusl \lb{\zbar'}{\xbar'}} \\ &= \yy \plusl \bigBracks{\lb{\zero}{\zbar'} \,\cap\, \lb{\zbar'}{\xbar'} } \\ &= \yy \plusl \set{\zbar'} = \set{\zbar}. \qedhere \end{align*} \end{proof} Theorem~\ref{thm:decomp-seg-at-inc-pt} does not hold in general for the segment joining any two astral points; in other words, the theorem is false if $\yy$ is replaced by an arbitrary point in $\extspace$. For instance, in $\extspac{2}$, let $\xbar=\limray{\ee_1}\plusl\limray{\ee_2}$. Then $\lb{-\xbar}{\xbar}=\extspac{2}$, as we saw in \Cref{ex:seg-negiden-iden}. This is different from $\lb{-\xbar}{\zero}\cup\lb{\zero}{\xbar}$, which, for instance, does not include $\ee_2$, as follows from \Cref{thm:lb-with-zero}. \subsection{Sequential sum of sets} \label{sec:seq-sum} Suppose $\seq{\xx_t}$ and $\seq{\yy_t}$ are two sequences in $\Rn$ converging, respectively, to astral points $\xbar$ and $\ybar$. What can be said about the limit (if it exists) of the sum of the sequences, $\seq{\xx_t+\yy_t}$? More generally, if $\seq{\xx_t}$ and $\seq{\yy_t}$ each have limits, respectively, in some sets $X$ and $Y$ in $\extspace$, then what can be said about the limit of $\seq{\xx_t+\yy_t}$? To answer this, we study the set of all such limits, called the sequential sum: \begin{definition} The \emph{sequential sum} of two sets $X,Y\subseteq\extspace$, denoted $X\seqsum Y$, is the set of all points $\zbar\in\extspace$ for which there exist sequences $\seq{\xx_t}$ and $\seq{\yy_t}$ in $\Rn$ and points $\xbar\in X$ and $\ybar\in Y$ such that $\xx_t\rightarrow\xbar$, $\yy_t\rightarrow\ybar$, and $\xx_t+\yy_t\rightarrow \zbar$. \end{definition} If $X$ and $Y$ are in $\Rn$, then $X\seqsum Y$ is simply the sum of the sets, $X+Y$, so we can view sequential addition of astral sets as a generalization of ordinary addition of sets in~$\Rn$. 
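The next example, which can be checked directly from the definition, illustrates how a sequential sum can include points captured by neither ordinary nor leftward addition.

\begin{example}
In $\extspac{2}$, let $X=\set{\limray{\ee_1}}$ and $Y=\set{\limray{-\ee_1}}$. For any $\lambda\in\R$, consider the sequences $\xx_t=t\ee_1$ and $\yy_t=(\lambda-t)\ee_1$. Then $\xx_t\rightarrow\limray{\ee_1}$ and $\yy_t\rightarrow\limray{-\ee_1}$, while $\xx_t+\yy_t=\lambda\ee_1$ for all $t$. Thus, $X\seqsum Y$ includes every point $\lambda\ee_1$ with $\lambda\in\R$. By contrast, the leftward sum of the two points is the single point $\limray{\ee_1}\plusl\limray{-\ee_1}=\limray{\ee_1}$.
\end{example}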
Here are some simple properties of sequential addition, whose proofs are routine from its definition: \begin{proposition} \label{pr:seqsum-props} ~ \begin{letter-compact} \item \label{pr:seqsum-props:finite} For $X,Y\subseteq\Rn$, $X \seqsum Y = X + Y$. \item \label{pr:seqsum-props:a} For $X,Y\subseteq\extspace$, $X \seqsum Y = Y \seqsum X$. \item \label{pr:seqsum-props:b} Suppose $X\subseteq X'\subseteq\extspace$ and $Y\subseteq Y'\subseteq\extspace$. Then $X \seqsum Y \subseteq X' \seqsum Y'$. \item \label{pr:seqsum-props:c} Let $X_i$, for $i\in I$, and $Y_j$, for $j\in J$, be arbitrary families of sets in $\extspace$, where $I$ and $J$ are index sets. Then \[ \BiggParens{\,\bigcup_{i\in I} X_i} \seqsum \BiggParens{\,\bigcup_{j\in J} Y_j} = \bigcup_{i\in I,\,j\in J} (X_i \seqsum Y_j). \] \end{letter-compact} \end{proposition} To establish several key properties of sequential addition, we consider its generalization to finite collections of sets. For $m\ge 1$ and sets $X_1,\dotsc,X_m\subseteq\eRn$, we define their \emph{$m$-ary sequential sum} $\Seqsum(X_1,\dotsc,X_m)$ as the set of all points $\zbar\in\extspace$ for which there exist $\xbar_i\in X_i$ and sequences $\seq{\xx_{it}}$, for $i=1,\dotsc,m$, such that $\xx_{it}\to\xbar_i$ for all $i$, and $\sum_{i=1}^m\xx_{it}\to\zbar$. When $m=0$, we define the $0$-ary sequential sum $\Seqsum(\,)$ to be $\{\zero\}$. For $m=1$ and $m=2$, this definition yields $\Seqsum(X)=X$ and $\Seqsum(X,Y)=X\seqsum Y$. Later, we will show, as expected, that $\Seqsum(X_1,\dotsc,X_m)=X_1\seqsum \cdots \seqsum X_m$ for $m\geq 1$, but this will require proof. (Indeed, we have not yet even established that sequential sum is associative, so this latter expression is, at this point, somewhat ambiguous.) We next characterize $m$-ary sequential sums using linear maps. 
For vectors $\xx_1,\dotsc,\xx_m\in\Rn$, we write $\mtuple{\xx_1,\dotsc,\xx_{m}}$ for the (column) vector in $\R^{mn}$ obtained by concatenating the vectors together in the natural way. For $i=1,\dotsc,m$, we then define the \emph{$i$-th component projection matrix} $\PPi\in\R^{n\times mn}$ as \begin{equation} \label{eq:comp-proj-mat-defn} \PPi = [ \underbrace{\zero,\dotsc,\zero}_{i-1}, \Iden, \underbrace{\zero,\dotsc,\zero}_{m-i} ], \end{equation} where $\Iden$ is the $n\times n$ identity matrix, and, in this context, $\zero$ is the $n\times n$ all-zeros matrix. Thus, $\PPi \mtuple{\xx_1,\dotsc,\xx_{m}} = \xx_i$, for $i=1,\dotsc,m$. We also define the \emph{addition matrix} $\A\in\R^{n\times mn}$ as $\A=[\Iden,\dotsc,\Iden]$. Thus, \begin{equation} \label{eq:add-mat-defn} \A=\sum_{i=1}^m\PPi, \end{equation} so $\A \mtuple{\xx_1,\dotsc,\xx_{m}} =\sum_{i=1}^m\xx_i$. Using the language of linear maps, the next \lcnamecref{thm:seqsum-mat-char} shows how sequential addition can be viewed as an extension of standard vector addition: \begin{theorem} \label{thm:seqsum-mat-char} Let $X_1,\dotsc,X_m\subseteq\eRn$, let $\PPsub{1},\ldots,\PPsub{m}$ be the component projection matrices, and let $\A$ be the addition matrix (as in Eqs.~\ref{eq:comp-proj-mat-defn} and~\ref{eq:add-mat-defn}). Then \begin{align} \notag &\Seqsum(X_1,\dotsc,X_m) \\ \label{eq:prop:seqsum} &\qquad{} =\bigSet{\A\wbar:\:\wbar\in\eRf{mn} \text{ such that }\PPi\wbar\in X_i\text{ for }i=1,\dotsc,m }. \end{align} \end{theorem} \begin{proof} Let $N=mn$, let $Z=\Seqsum(X_1,\dotsc,X_m)$, and let $U$ denote the right-hand side of \eqref{eq:prop:seqsum}. We will show that $Z=U$. First, let $\zbar\in Z$. Then there exist sequences $\seq{\xx_{it}}$ in $\Rn$, for $i=1,\dotsc,m$, such that $\xx_{it}\to\xbar_i$ for some $\xbar_i\in X_i$, and such that $\sum_{i=1}^m \xx_{it}\to\zbar$. Let $\ww_t=\mtuple{\xx_{1t},\dotsc,\xx_{mt}}$. 
By sequential compactness, the sequence $\seq{\ww_t}$ must have a convergent subsequence; by discarding all other elements as well as the corresponding elements of the sequences~$\seq{\xx_{it}}$, we can assume the entire sequence converges to some point $\wbar\in\eRf{N}$. By continuity of linear maps (\Cref{thm:mat-mult-def}), $\PPi \wbar = \lim(\PPi\ww_t)=\lim \xx_{it} = \xbar_i \in X_i$ for all $i$, while $\zbar=\lim(\sum_{i=1}^m \xx_{it})=\lim(\A\ww_t)=\A\wbar$. Therefore, $\zbar\in U$, so $Z\subseteq U$. For the reverse inclusion, let $\zbar\in U$. Then there exists $\wbar\in\eRf{N}$ such that $\A\wbar=\zbar$ and $\PPi \wbar=\xbar_i$ for some $\xbar_i \in X_i$, for $i=1,\dotsc,m$. Since $\wbar\in\eRf{N}$, there exists a sequence $\seq{\ww_t}$ in $\R^N$ such that $\ww_t\rightarrow\wbar$. For $i=1,\dotsc,m$, setting $\xx_{it}=\PPi \ww_t$, we obtain $\xx_{it}=\PPi \ww_t \to \PPi \wbar = \xbar_i\in X_i$. Also, $\sum_{i=1}^m\xx_{it}=\A\ww_t\to\A\wbar=\zbar$. Thus, $\zbar\in Z$, so $Z=U$. \end{proof} Using the characterization in \Cref{thm:seqsum-mat-char}, we can show that sequential addition is associative, as an immediate corollary of the following lemma. \begin{lemma} \label{lemma:seqsum:assoc} Let $X_1,\dotsc,X_m,Y\subseteq\eRn$. Then \[ \Seqsum(X_1,\dotsc,X_m)\seqsum Y= \Seqsum(X_1,\dotsc,X_m,Y). \] \end{lemma} \begin{proof} Let $Z=\Seqsum(X_1,\dotsc,X_m)$, let $V=Z\seqsum Y$, and let $Z'=\Seqsum(X_1,\dotsc,X_m,Y)$. We will show that $V=Z'$. First, let $\zbar'\in Z'$. Then there exist sequences $\seq{\xx_{it}}$ in $\Rn$, for $i=1,\dotsc,m$, such that $\xx_{it}\to\xbar_i$ for some $\xbar_i\in X_i$, and a sequence $\seq{\yy_t}$ in $\Rn$ such that $\yy_t\to\ybar$ for some $\ybar\in Y$, and such that $(\sum_{i=1}^m \xx_{it})+\yy_t\to\zbar'$. Let $\zz_t=\sum_{i=1}^m \xx_{it}$. 
By sequential compactness, the sequence $\seq{\zz_t}$ must have a convergent subsequence; by discarding all other elements as well as the corresponding elements of the sequences $\seq{\xx_{it}}$ and~$\seq{\yy_t}$, we can assume the entire sequence converges to some point $\zbar\in\eRn$. Since $\xx_{it}\to\xbar_i\in X_i$ and $(\sum_{i=1}^m \xx_{it})=\zz_t\to\zbar$, we have $\zbar\in Z$. Moreover, $\yy_t\to\ybar\in Y$, and $\zz_t+\yy_t=(\sum_{i=1}^m \xx_{it})+\yy_t\to\zbar'$, and so $\zbar'\in Z\seqsum Y=V$. Therefore, $Z'\subseteq V$. For the reverse inclusion, let $\vbar\in V$. By \Cref{thm:seqsum-mat-char}, \[ V=\bigSet{\B\ubar:\:\ubar\in\eRf{2n}\text{ such that } \QQz\ubar\in Z\text{ and }\QQy\ubar\in Y}, \] where $\B=[\Iden,\Iden]$, $\QQz=[\Iden,\zero]$, and $\QQy=[\zero,\Iden]$ (writing $\Iden=\Idnn$ and $\zero=\zero_{n\times n}$ in this context). Therefore, there exists $\ubar\in\eRf{2n}$ such that $\B\ubar=\vbar$, and also $\QQz\ubar=\zbar$ for some $\zbar\in Z$, and $\QQy\ubar=\ybar$ for some $\ybar\in Y$. Let $N=mn$. Then, by \Cref{thm:seqsum-mat-char}, we also have \[ Z=\bigSet{\A\wbar:\:\wbar\in\eRf{N} \text{ such that }\PPi\wbar\in X_i\text{ for }i=1,\dotsc,m }, \] with the addition matrix $\A\in\R^{n\times N}$ and the component projection matrices $\PPi\in\R^{n\times N}$ for $i=1,\dotsc,m$. Since $\zbar\in Z$, there exists $\wbar\in\eRf{N}$ such that $\A\wbar=\zbar$ and $\PPi\wbar=\xbar_i$ for some $\xbar_i\in X_i$, for $i=1,\dotsc,m$. To finish the proof, we construct sequences $\seq{\xx_{it}}$ in $\Rn$, for $i=1,\dotsc,m$, and $\seq{\yy_t}$ in $\Rn$ whose properties will imply that $\vbar\in Z'$. To start, let $\seq{\uu_t}$ be a sequence in $\R^{2n}$ such that $\uu_t\to\ubar$. By continuity of linear maps (\Cref{thm:mat-mult-def}), $\QQz\uu_t\to\QQz\ubar=\zbar=\A\wbar$. Therefore, by \Cref{thm:inv-lin-seq}, there exists a sequence $\seq{\ww_t}$ in $\R^N$ such that $\ww_t\to\wbar$, and $\A\ww_t-\QQz\uu_t\to\zero$. 
Let $\veps_t=\A\ww_t-\QQz\uu_t$, and set $\xx_{it}=\PPi\ww_t$, for $i=1,\dotsc,m$, and $\yy_t=\QQy\uu_t$. By the foregoing construction and continuity of linear maps, \begin{align*} & \xx_{it}=\PPi\ww_t\to\PPi\wbar=\xbar_i\quad\text{for $i=1,\dotsc,m$,} \\ & \yy_t=\QQy\uu_t\to\QQy\ubar=\ybar, \\ & \bigParens{\textstyle\sum_{i=1}^m\xx_{it}}+\yy_t = \A\ww_t+\yy_t = \QQz\uu_t+\veps_t+\QQy\uu_t = \B\uu_t+\veps_t \to \B\ubar = \vbar, \end{align*} where the convergence in the last line follows by \Cref{pr:i:7}(\ref{pr:i:7g}). Thus, $\vbar\in Z'$, so $V=Z'$. \end{proof} \begin{corollary} \label{cor:seqsum:assoc} Let $X,Y,Z\subseteq\eRn$. Then \[ (X\seqsum Y)\seqsum Z = X\seqsum(Y\seqsum Z). \] \end{corollary} \begin{proof} By \Cref{lemma:seqsum:assoc} and commutativity of sequential addition (\Cref{pr:seqsum-props}\ref{pr:seqsum-props:a}) \[ (X\seqsum Y)\seqsum Z = \Seqsum(X,Y,Z) = \Seqsum(Y,Z,X) = (Y\seqsum Z)\seqsum X = X\seqsum(Y\seqsum Z), \] where the second equality follows from the definition of the $m$-ary sequential sum. \end{proof} Having established associativity of (binary) sequential addition, we can now omit parentheses and simply write $X\seqsum Y\seqsum Z$, or more generally $X_1\seqsum\dotsb\seqsum X_m$ for any sets $X_1,\dotsc,X_m\subseteq\eRn$. When $m=0$, such an expression is understood to be equal to~$\{\zero\}$. The next theorem shows that such a chain of binary sequential sums is equal to the $m$-ary sequential sum. It also shows that a sequential sum can be decomposed into sequential sums of singletons, and that sequential addition preserves closedness and convexity of its arguments. When taking sequential sums involving singletons, as in the theorem below, we usually streamline notation and omit braces, for instance, writing $\xbar\seqsum \ybar$ for $\{\xbar\} \seqsum \{\ybar\}$. (It is important, nonetheless, to keep in mind that sequential sum always denotes a set in~$\extspace$, even if that set happens to be a singleton.) 
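Even when its arguments are singletons, a sequential sum can contain more than one point, as the following example, which can be verified directly from the definition, shows.

\begin{example}
In $\extspac{2}$, consider $\limray{\ee_1}\seqsum\limray{\ee_2}$. The sequences $\xx_t=t\ee_1$ and $\yy_t=t\ee_2$ satisfy $\xx_t\rightarrow\limray{\ee_1}$ and $\yy_t\rightarrow\limray{\ee_2}$, with $\xx_t+\yy_t=t(\ee_1+\ee_2)\rightarrow\limray{\ee_1+\ee_2}$. Choosing instead $\xx_t=t^2\ee_1$ and $\yy_t=t\ee_2$ yields $\xx_t+\yy_t=t^2\ee_1+t\ee_2\rightarrow\limray{\ee_1}\plusl\limray{\ee_2}$. Thus, $\limray{\ee_1}\seqsum\limray{\ee_2}$ contains both $\limray{\ee_1+\ee_2}$ and $\limray{\ee_1}\plusl\limray{\ee_2}$ (as well as, by symmetry, $\limray{\ee_2}\plusl\limray{\ee_1}$), and so is not a singleton.
\end{example}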
\begin{theorem} \label{prop:seqsum-multi} Let $Z=X_1\seqsum\dotsb\seqsum X_m$ for some $X_1,\dotsc,X_m\subseteq\eRn$. Then: \begin{letter-compact} \item \label{prop:seqsum-multi:equiv} $Z = \Seqsum(X_1,\dotsc,X_m)$. \item \label{prop:seqsum-multi:decomp} \[ \displaystyle Z = \bigcup_{\xbar_1\in X_1, \dotsc, \xbar_m\in X_m} (\xbar_1 \seqsum \dotsb \seqsum \xbar_m). \] \item \label{prop:seqsum-multi:closed} $Z$ is closed if the sets $X_1,\dotsc,X_m$ are closed. \item \label{prop:seqsum-multi:convex} $Z$ is convex if the sets $X_1,\dotsc,X_m$ are convex. \end{letter-compact} \end{theorem} \begin{proof} The claims are straightforward to check when $m=0$, so we assume $m\geq 1$. \begin{proof-parts} \pfpart{Part~(\ref{prop:seqsum-multi:equiv}):} Proof is by induction on $m$. By definition of $m$-ary sequential sum, the claim holds for $m=1$. Let $m>1$ and assume the claim holds for $m-1$. Then \[ Z=(X_1\seqsum\dotsb\seqsum X_{m-1})\seqsum X_m =\Seqsum(X_1,\dotsc,X_{m-1})\seqsum X_m =\Seqsum(X_1,\dotsc,X_m). \] The second equality is by inductive hypothesis and the third is by \Cref{lemma:seqsum:assoc}. \pfpart{Part~(\ref{prop:seqsum-multi:decomp}):} Proof is again by induction on $m$. The base case that $m=1$ is immediate. For the inductive step, let $m>1$ and assume that the claim holds for $m-1$. Then \begin{align*} Z&= \parens{ X_1 \seqsum \dotsb \seqsum X_{m-1} } \seqsum X_m \\ &= \BiggParens{\,\bigcup_{\xbar_1\in X_1, \dotsc, \xbar_{m-1}\in X_{m-1}} (\xbar_1 \seqsum \dotsb \seqsum \xbar_{m-1}) } \seqsum \BiggParens{\,\bigcup_{\xbar_{m}\in X_{m}} \{ \xbar_m \} } \\ &= \bigcup_{\xbar_1\in X_1, \dotsc, \xbar_{m-1}\in X_{m-1}, \xbar_m\in X_m} \BigParens{(\xbar_1 \seqsum \dotsb \seqsum \xbar_{m-1}) \seqsum \xbar_m}. \end{align*} The second equality is by inductive hypothesis and the third is by \Cref{pr:seqsum-props}(\ref{pr:seqsum-props:c}). 
\pfpart{Part~(\ref{prop:seqsum-multi:closed}):} By part (a) and \Cref{thm:seqsum-mat-char}, \begin{equation} \label{eq:seqsum:1} Z = \bigSet{\A\wbar:\:\wbar\in\eRf{mn} \text{ such that }\PPi\wbar\in X_i\text{ for }i=1,\dotsc,m }, \end{equation} where $\A$ is the addition matrix and $\PPsub{1},\ldots,\PPsub{m}$ are the component projection matrices (as in Eqs.~\ref{eq:comp-proj-mat-defn} and~\ref{eq:add-mat-defn}). Let $A:\eRf{mn}\to\eRn$ and $P_i:\eRf{mn}\to\eRn$ denote the associated astral linear maps (so that $A(\wbar)=\A\wbar$ and $P_i(\wbar)=\PPi\wbar$ for $\wbar\in\eRf{mn}$). Then \eqref{eq:seqsum:1} can be rewritten as \begin{equation} \label{eq:seqsum:2} Z = A\BiggParens{\,\bigcap_{i=1}^m P_i^{-1}(X_i)}. \end{equation} Suppose that each $X_i$ for $i=1,\dotsc,m$ is closed. Then, by continuity of $P_i$ (\Cref{thm:linear:cont}), each of the sets $P_i^{-1}(X_i)$ is closed as well (\Cref{prop:cont}\ref{prop:cont:a},\ref{prop:cont:inv:closed}), and so is their intersection, and also the image of the intersection under $A$, by \Cref{cor:cont:closed}, since $A$ is a continuous map between astral spaces. \pfpart{Part~(\ref{prop:seqsum-multi:convex}):} Write $Z$ as in \eqref{eq:seqsum:2} and suppose that each of the sets $X_i$ is convex. Then each of the sets $P_i^{-1}(X_i)$ is also convex (\Cref{cor:inv-image-convex}), so their intersection is convex (\Cref{pr:e1}\ref{pr:e1:b}), implying that $Z$, the image of that intersection under $A$, is as well (\Cref{cor:thm:e:9}). \qedhere \end{proof-parts} \end{proof} Thus, if $X$ and $Y$ are convex subsets of $\extspace$, then $X\seqsum Y$ is also convex. The same is not true, in general, for $X\plusl Y$, as the next example shows. We will say more about the relationship between $X\seqsum Y$ and $X\plusl Y$ later in this section. \begin{example} In $\R^2$, let $X=\lb{\zero}{\limray{\ee_1}}$ and $Y=\lb{\zero}{\limray{\ee_2}}$, which are both convex. 
By Theorem~\ref{thm:lb-with-zero}, $X=\{\lambda \ee_1 : \lambda\in\Rpos\} \cup \{\limray{\ee_1}\}$ and similarly for $Y$, so $\limray{\ee_1}$ and $\limray{\ee_2}$ are in $X\plusl Y$, but $\limray{\ee_2}\plusl\limray{\ee_1}$ is not. On the other hand, this latter point is on the segment joining $\limray{\ee_1}$ and $\limray{\ee_2}$, as we saw in \Cref{ex:seg-oe1-oe2}. Therefore, $X\plusl Y$ is not convex. \end{example} The next theorem gives various characterizations of the sequential sum of a finite collection of astral points, $\xbar_1,\dotsc,\xbar_m$ (or, more precisely, of the singletons associated with these points). Implicitly, it also characterizes the sequential sum of arbitrary sets, since these can be decomposed in terms of singletons using \Cref{prop:seqsum-multi}(\ref{prop:seqsum-multi:decomp}). The first characterization is in terms of an addition matrix and component projection matrices, following immediately from \Cref{thm:seqsum-mat-char}. The second characterization is in terms of the behavior of the associated functions $\uu\mapsto \xbar_i\cdot\uu$. Finally, the third shows that the sequential sum is actually an astral polytope whose vertices are formed by permuting the ordering of the $\xbar_i$'s, in all possible ways, and then combining them using leftward addition. (In the theorem, a \emph{permutation of a set $S$} is simply a bijection $\pi:S\rightarrow S$.) \begin{theorem} \label{thm:seqsum-equiv-mod} Let $\xbar_1,\dotsc,\xbar_m\in\extspace$, and let $\zbar\in\extspace$. Also, let $\PPsub{1},\ldots,\PPsub{m}$ be the component projection matrices, and let $\A$ be the addition matrix (as in Eqs.~\ref{eq:comp-proj-mat-defn} and~\ref{eq:add-mat-defn}). Then the following are equivalent: \begin{letter-compact} \item \label{thm:seqsum-equiv-mod:a} $\zbar \in \xbar_1 \seqsum \dotsb \seqsum \xbar_m$. \item \label{thm:seqsum-equiv-mod:b} $\zbar=\A\wbar$ for some $\wbar\in\eRf{mn}$ such that $\PPi\wbar=\xbar_i$ for $i=1,\dotsc,m$. 
\item \label{thm:seqsum-equiv-mod:c} For all $\uu\in\Rn$, if $\xbar_1\cdot\uu,\dotsc,\xbar_m\cdot\uu$ are summable, then $\zbar\cdot\uu = \sum_{i=1}^m \xbar_i\cdot\uu$. \item \label{thm:seqsum-equiv-mod:d} $ \zbar \in \ohull\bigBraces{ \xbar_{\pi(1)}\plusl \dotsb \plusl \xbar_{\pi(m)} :\: \pi\in \Pi_m } $ where $\Pi_m$ is the set of all permutations of $\{1,\dotsc,m\}$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:seqsum-equiv-mod:a}) $\Leftrightarrow$ (\ref{thm:seqsum-equiv-mod:b}): } Immediate by \Cref{prop:seqsum-multi}(\ref{prop:seqsum-multi:equiv}) and \Cref{thm:seqsum-mat-char}. \pfpart{(\ref{thm:seqsum-equiv-mod:b}) $\Rightarrow$ (\ref{thm:seqsum-equiv-mod:c}): } Suppose, for some $\wbar\in\eRf{mn}$, that $\zbar=\A\wbar$ and $\xbar_i=\PPi\wbar$ for $i=1,\dotsc,m$. Let $\uu\in\Rn$, and suppose that $\xbar_1\cdot\uu,\dotsc,\xbar_m\cdot\uu$ are summable. By \Cref{thm:mat-mult-def}, $\xbar_i\cdot\uu=(\PPi\wbar)\cdot\uu=\wbar\cdot(\trans{\PPi}\uu)$ for all $i$, and $\zbar\inprod\uu=(\A\wbar)\inprod\uu=\wbar\inprod(\transA\uu)$. Thus, \begin{align*} \sum_{i=1}^m \xbar_i\inprod\uu &= \sum_{i=1}^m \wbar\inprod(\trans{\PPi}\uu) =\wbar\inprod\BiggParens{\sum_{i=1}^m \trans{\PPi}\uu} =\wbar\inprod(\transA\uu) =\zbar\inprod\uu, \end{align*} where the second equality is by an iterated application of \Cref{pr:i:1} (having assumed summability), and the third by \eqref{eq:add-mat-defn}. \pfpart{(\ref{thm:seqsum-equiv-mod:c}) $\Rightarrow$ (\ref{thm:seqsum-equiv-mod:d}): } Suppose part~(\ref{thm:seqsum-equiv-mod:c}) holds. Let $\uu\in\Rn$. To prove part~(\ref{thm:seqsum-equiv-mod:d}), by \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}), it suffices to show that \begin{equation} \label{eq:thm:seqsum-equiv-mod:1} \zbar\cdot\uu \leq \max\BigBraces{ \bigParens{\xbar_{\pi(1)}\plusl \dotsb \plusl \xbar_{\pi(m)}}\cdot\uu :\: \pi\in \Pi_m }. 
\end{equation} Suppose first that $\xbar_i\cdot\uu=+\infty$ for some $i\in\{1,\dotsc,m\}$. In this case, let $\pi$ be any permutation in $\Pi_m$ for which $\pi(1)=i$. Then $(\xbar_{\pi(1)}\plusl \dotsb \plusl \xbar_{\pi(m)})\cdot\uu =+\infty$, implying \eqref{eq:thm:seqsum-equiv-mod:1} since its right-hand side must also be $+\infty$. Otherwise, we must have $\xbar_i\cdot\uu<+\infty$ for $i=1,\dotsc,m$, and so ${\xbar_1\cdot\uu},\dotsc,{\xbar_m\cdot\uu}$ are summable, with their sum being equal to $\zbar\cdot\uu$ (by assumption). Thus, for every $\pi\in\Pi_m$, we must have \[ \zbar\cdot\uu = \sum_{i=1}^m \xbar_i\cdot\uu = \xbar_{\pi(1)}\cdot\uu \plusl \dotsb \plusl \xbar_{\pi(m)}\cdot\uu = \bigParens{\xbar_{\pi(1)} \plusl \dotsb \plusl \xbar_{\pi(m)}}\cdot\uu, \] implying \eqref{eq:thm:seqsum-equiv-mod:1}, and completing the proof. \pfpart{(\ref{thm:seqsum-equiv-mod:d}) $\Rightarrow$ (\ref{thm:seqsum-equiv-mod:b}): } Let $Z$ be the set of all points $\zbar\in\extspace$ which satisfy part~(\ref{thm:seqsum-equiv-mod:b}), and let \[ V=\bigBraces{ \xbar_{\pi(1)}\plusl \dotsb \plusl \xbar_{\pi(m)} :\: \pi\in\Pi_m }. \] We need to show that $\ohull V\subseteq Z$. The set $Z$ is convex by \Cref{prop:seqsum-multi}(\ref{prop:seqsum-multi:convex}), using the equivalence of parts (\ref{thm:seqsum-equiv-mod:a}) and (\ref{thm:seqsum-equiv-mod:b}) (and since all singletons are convex). Therefore, it suffices to show that $V\subseteq Z$, which will imply $\ohull V\subseteq Z$ by \Cref{thm:e:2}. Let $\vbar\in V$, so $\vbar=\xbar_{\pi(1)}\plusl \dotsb \plusl \xbar_{\pi(m)}$ for some $\pi\in\Pi_m$. Let \[ \wbar = \trans{\PPsub{\pi(1)}}\xbar_{\pi(1)} \plusl\dotsb\plusl \trans{\PPsub{\pi(m)}}\xbar_{\pi(m)}. 
\] Note that for $i,j\in\set{1,\dotsc,m}$ we have \begin{equation} \label{eq:PPiPPj} \PPi\trans{\PPsub{j}}= \begin{cases} \zero_{n\times n} &\text{if $i\ne j$,} \\ \Idnn &\text{if $i=j$.} \end{cases} \end{equation} Thus, for each $i$, \[ \PPi\wbar = \PPi\trans{\PPsub{\pi(1)}}\xbar_{\pi(1)} \plusl\dotsb\plusl \PPi\trans{\PPsub{\pi(m)}}\xbar_{\pi(m)} = \xbar_i, \] where the first equality is by \Cref{pr:h:4}(\ref{pr:h:4c}) and the second by \eqref{eq:PPiPPj}. Also, \[ \A\wbar = \A\trans{\PPsub{\pi(1)}}\xbar_{\pi(1)} \plusl\dotsb\plusl \A\trans{\PPsub{\pi(m)}}\xbar_{\pi(m)} = \xbar_{\pi(1)}\plusl \dotsb \plusl \xbar_{\pi(m)} = \vbar. \] The first equality is again by \Cref{pr:h:4}(\ref{pr:h:4c}) and the second by \eqref{eq:PPiPPj}, using \eqref{eq:add-mat-defn}. Thus, $\vbar=\A\wbar$ and $\PPi\wbar=\xbar_i$ for all $i$, and so $\vbar\in Z$. \qedhere \end{proof-parts} \end{proof} As we show next, \Cref{thm:seqsum-equiv-mod} implies that sequential addition exhibits a certain continuity property called the \emph{closed graph} property, which states that the set of tuples $\mtuple{\xbar,\ybar,\zbar}$ such that $\zbar\in\xbar\seqsum\ybar$ is closed in $\eRn\times\eRn\times\eRn$. In our setting, this in turn implies another property called \emph{upper hemicontinuity}~\citep[see, for instance,][Theorem 7.11]{hitchhiker_guide_analysis}. \begin{theorem} \label{thm:seqsum-cont-prop} Let $\seq{\xbar_t}$, $\seq{\ybar_t}$, $\seq{\zbar_t}$ be sequences in $\extspace$ converging, respectively, to some points $\xbar,\ybar,\zbar\in\extspace$. Suppose $\zbar_t\in\xbar_t\seqsum\ybar_t$ for all $t$. Then $\zbar\in\xbar\seqsum\ybar$. \end{theorem} \begin{proof} Let $\A\in\R^{n\times 2n}$ be the addition matrix, and $\PPsub{1},\PPsub{2}\in\R^{n\times 2n}$ the component projection matrices (as in Eqs.~\ref{eq:comp-proj-mat-defn} and~\ref{eq:add-mat-defn}). 
Then, by \Cref{thm:seqsum-equiv-mod}(\ref{thm:seqsum-equiv-mod:a},\ref{thm:seqsum-equiv-mod:b}), for each $t$, there exists $\wbar_t\in\eRf{2n}$ such that $\PPsub{1}\wbar_t=\xbar_t$, $\PPsub{2}\wbar_t=\ybar_t$, and $\A\wbar_t=\zbar_t$. By sequential compactness, the sequence $\seq{\wbar_t}$ must have a convergent subsequence; by discarding all other elements (as well as the corresponding elements of the other sequences), we can assume the entire sequence converges to some point $\wbar\in\eRf{2n}$. By continuity of linear maps (\Cref{thm:linear:cont}), $\PPsub{1}\wbar=\lim\PPsub{1}\wbar_t=\lim\xbar_t=\xbar$, and similarly $\PPsub{2}\wbar=\ybar$ and $\A\wbar=\zbar$. Therefore, $\zbar\in\xbar\seqsum\ybar$ by \Cref{thm:seqsum-equiv-mod}(\ref{thm:seqsum-equiv-mod:a},\ref{thm:seqsum-equiv-mod:b}). \end{proof} Here are several additional consequences of \Cref{thm:seqsum-equiv-mod}: \begin{corollary} \label{cor:seqsum-conseqs} Let $\xbar,\ybar,\zbar\in\extspace$, and let $X,Y\subseteq\extspace$. \begin{letter-compact} \item \label{cor:seqsum-conseqs:a} $\xbar\seqsum \ybar = \lb{\xbar\plusl\ybar}{\,\ybar\plusl\xbar}$. \item \label{cor:seqsum-conseqs:b} $\zbar\in\xbar\seqsum\ybar$ if and only if, for all $\uu\in\Rn$, if $\xbar\cdot\uu$ and $\ybar\cdot\uu$ are summable then $\zbar\cdot\uu = \xbar\cdot\uu + \ybar\cdot\uu$. \item \label{cor:seqsum-conseqs:c} If $\xbar\plusl\ybar=\ybar\plusl\xbar$ then $\xbar\seqsum\ybar = \{\xbar\plusl\ybar\} = \{\ybar\plusl\xbar\}$. \item \label{cor:seqsum-conseqs:d} If $X$ and $Y$ are convex then \begin{equation} \label{eq:cor:seqsum-conseqs:1} X\seqsum Y = \conv\bigParens{(X\plusl Y) \cup (Y\plusl X)}. \end{equation} \item \label{cor:seqsum-conseqs:e} Suppose $\xbar'\plusl\ybar'=\ybar'\plusl\xbar'$ for all $\xbar'\in X$ and all $\ybar'\in Y$ (as will be the case, for instance, if either $X$ or $Y$ is included in $\Rn$). Then $X\seqsum Y = X\plusl Y = Y\plusl X$.
\end{letter-compact} \end{corollary} \begin{proof} ~ \begin{proof-parts} \pfpart{Parts~(\ref{cor:seqsum-conseqs:a}) and~(\ref{cor:seqsum-conseqs:b}):} Immediate from \Cref{thm:seqsum-equiv-mod} applied to $\xbar$ and $\ybar$. \pfpart{Part~(\ref{cor:seqsum-conseqs:c}):} This follows from part~(\ref{cor:seqsum-conseqs:a}) and Proposition~\ref{pr:e1}(\ref{pr:e1:ohull}). \pfpart{Part~(\ref{cor:seqsum-conseqs:d}):} Let $C$ denote the set on the right-hand side of \eqref{eq:cor:seqsum-conseqs:1}. From \Cref{prop:seqsum-multi}(\ref{prop:seqsum-multi:decomp}) and part~(\ref{cor:seqsum-conseqs:a}), we have \begin{equation} \label{eq:cor:seqsum-conseqs:2} X \seqsum Y = \bigcup_{\xbar\in X, \ybar\in Y} (\xbar\seqsum\ybar) = \bigcup_{\xbar\in X, \ybar\in Y} \lb{\xbar\plusl\ybar}{\,\ybar\plusl\xbar}. \end{equation} If $\xbar\in X$ and $\ybar\in Y$ then $\xbar\plusl\ybar\in X\plusl Y\subseteq C$, and similarly, $\ybar\plusl\xbar\in C$. Since $C$ is convex, it follows that $\lb{\xbar\plusl\ybar}{\,\ybar\plusl\xbar}\subseteq C$. Thus, $X \seqsum Y \subseteq C$. For the reverse inclusion, since $\xbar\plusl\ybar\in\lb{\xbar\plusl\ybar}{\,\ybar\plusl\xbar}$, the rightmost expression in \eqref{eq:cor:seqsum-conseqs:2} includes $X\plusl Y$. Thus $X\plusl Y\subseteq X\seqsum Y$. Likewise, $Y\plusl X\subseteq X\seqsum Y$. Since $X\seqsum Y$ is convex by \Cref{prop:seqsum-multi}(\ref{prop:seqsum-multi:convex}), it follows that $C\subseteq X\seqsum Y$ (by Proposition~\ref{pr:conhull-prop}\ref{pr:conhull-prop:aa}). \pfpart{Part~(\ref{cor:seqsum-conseqs:e}):} Immediate from \Cref{prop:seqsum-multi}(\ref{prop:seqsum-multi:decomp}) and part~(\ref{cor:seqsum-conseqs:c}). \qedhere \end{proof-parts} \end{proof} We finish this section by deriving several useful properties of sequential addition. 
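As a quick illustration of Corollary~\ref{cor:seqsum-conseqs}(\ref{cor:seqsum-conseqs:a}), consider $\Rext=\extspac{1}$ with $\xbar=+\infty$ and $\ybar=-\infty$. Then $\xbar\plusl\ybar=+\infty$ while $\ybar\plusl\xbar=-\infty$, so
\[
(+\infty)\seqsum(-\infty)
=
\lb{+\infty}{-\infty}
=
\Rext.
\]
This can also be seen directly from the definition of sequential addition: for any $\beta\in\R$, the sequences given by $x_t=t$ and $y_t=\beta-t$ converge to $+\infty$ and $-\infty$, respectively, while $x_t+y_t=\beta$ for all $t$; choosing instead $y_t=-2t$ (or $x_t=2t$ and $y_t=-t$) yields sums converging to $-\infty$ (or $+\infty$).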
The first result relates sequential sums to convex hulls using the characterization of an outer convex hull via limits (\Cref{thm:e:7}): \begin{theorem} \label{thm:seqsum-in-union} Let $X_1,\dotsc,X_m\subseteq\extspace$, and let $\lambda_1,\dotsc,\lambda_m\in\Rpos$, where $m\geq 1$. Then \begin{equation} \label{eq:thm:seqsum-in-union:2} (\lambda_1 X_1) \seqsum \dotsb \seqsum (\lambda_m X_m) \subseteq \Parens{\sum_{i=1}^m \lambda_i} \conv{\Parens{\bigcup_{i=1}^m X_i}}. \end{equation} \end{theorem} \begin{proof} If any of the sets $X_i$ is empty, then the left-hand side of \eqref{eq:thm:seqsum-in-union:2} is also empty and the claim holds. We therefore assume henceforth that $X_i\neq\emptyset$ for $i=1,\dotsc,m$. Let $s=\sum_{i=1}^m \lambda_i$. If $s=0$, so that $\lambda_i=0$ for all $i$, then both sides of \eqref{eq:thm:seqsum-in-union:2} are equal to $\{\zero\}$, again implying the claim. We therefore assume henceforth that $s>0$. Also, by possibly re-arranging the indices, we assume without loss of generality that $\lambda_i>0$ for $i=1,\dotsc,\ell$ and $\lambda_i=0$ for $i=\ell+1,\dotsc,m$, for some $\ell\in\set{1,\dotsc,m}$. Let $\zbar\in (\lambda_1 X_1) \seqsum \dotsb \seqsum (\lambda_m X_m) =(\lambda_1 X_1) \seqsum \dotsb \seqsum (\lambda_\ell X_\ell) =\Seqsum(\lambda_1 X_1,\dotsc,\lambda_\ell X_\ell)$. Thus, for $i=1,\dotsc,\ell$, there exist sequences $\seq{\xx'_{it}}$ in $\Rn$ such that $\xx'_{it}\to\lambda_i\xbar_i$ for some $\xbar_i\in X_i$, and $\sum_{i=1}^\ell \xx'_{it}\to\zbar$. Let $\xx_{it}=\xx'_{it}/\lambda_i$ and $\lambar_i=\lambda_i/s$. Then $\xx_{it}\to\xbar_i$ for $i=1,\dotsc,\ell$, $\sum_{i=1}^\ell\lambar_i=1$, and $\sum_{i=1}^\ell\lambar_i\xx_{it}\to\zbar/s$. Therefore, \[ {\zbar}/{s} \in\ohull\set{\xbar_1,\dotsc,\xbar_\ell} \subseteq\conv{\Parens{\bigcup_{i=1}^m X_i}}, \] where the inclusions are, respectively, by \Cref{thm:e:7} and \Cref{thm:convhull-of-simpices}. Multiplying by $s$ completes the proof.
\end{proof} The next proposition relates sequential addition to an analogous generalization of subtraction: \begin{proposition} \label{pr:swap-seq-sum} Let $\xbar,\ybar,\zbar\in\extspace$. Then $\zbar\in\xbar\seqsum\ybar$ if and only if $\xbar\in\zbar\seqsum(-\ybar)$. \end{proposition} \begin{proof} Suppose $\zbar\in\xbar\seqsum\ybar$. Then there exist sequences $\seq{\xx_t}$, $\seq{\yy_t}$, and $\seq{\zz_t}$ in $\Rn$ such that $\xx_t\rightarrow\xbar$, $\yy_t\rightarrow\ybar$, $\zz_t\rightarrow\zbar$, and $\zz_t=\xx_t+\yy_t$ for all $t$. Since $-\yy_t\rightarrow-\ybar$ and $\zz_t-\yy_t=\xx_t\rightarrow\xbar$, this also shows that $\xbar\in\zbar\seqsum(-\ybar)$. The converse can then be derived by applying the above with $\xbar$ and $\zbar$ swapped, and $\ybar$ and $-\ybar$ swapped. \end{proof} Finally, we show that sequential addition commutes with linear maps: \begin{theorem} \label{thm:distrib-seqsum} Let $\A\in\Rmn$, and let $A:\eRn\to\eRm$ be the associated astral linear map (so that $A(\xbar)=\A\xbar$ for $\xbar\in\extspace$). Let $X,Y\subseteq\extspace$. Then \[ A (X\seqsum Y) = A(X) \seqsum A(Y). \] \end{theorem} \begin{proof} For any $\xbar,\ybar\in\extspace$, we have \begin{align} \notag A (\xbar\seqsum\ybar) &= A\bigParens{\lb{\xbar\plusl\ybar}{\,\ybar\plusl\xbar}} \\ \notag &= \seg\bigParens{A(\xbar\plusl\ybar),\,A(\ybar\plusl\xbar)} \\ \notag &= \seg\bigParens{A(\xbar)\plusl A(\ybar),\, A(\ybar)\plusl A(\xbar)} \\ \label{eq:thm:distrib-seqsum:1} &= A(\xbar)\seqsum A(\ybar). \end{align} The first and fourth equalities are by Corollary~\ref{cor:seqsum-conseqs}(\ref{cor:seqsum-conseqs:a}). The second and third are by Theorem~\ref{thm:e:9} and Proposition~\ref{pr:h:4}(\ref{pr:h:4c}), respectively. 
Thus, \begin{align*} A (X \seqsum Y) &= A\BiggParens{\,\bigcup_{\xbar\in X, \ybar\in Y} (\xbar\seqsum\ybar)} \\ &= \bigcup_{\xbar\in X, \ybar\in Y} A(\xbar\seqsum\ybar) \\ &= \bigcup_{\xbar\in X, \ybar\in Y} \bigBracks{A(\xbar) \seqsum A(\ybar)} \\ &= A(X) \seqsum A(Y). \end{align*} The first and fourth equalities are by Proposition~\ref{pr:seqsum-props}(\ref{pr:seqsum-props:c}). The third is by \eqref{eq:thm:distrib-seqsum:1}. \end{proof} \subsection{Interior of a convex set and convex hull of an open set} We prove next that the interior of a convex set in $\extspace$ is also convex. For the proof, recall that a convex set in $\Rn$ is \emph{polyhedral} if it is the intersection of a finite collection of closed halfspaces. \begin{theorem} \label{thm:interior-convex} Let $S$ be a convex subset of $\extspace$. Then its interior, $\intr S$, is also convex. \end{theorem} \begin{proof} We assume that $\intr S$ is not empty, since otherwise the claim holds trivially. Let $\xbar,\ybar\in \intr S$. We aim to prove convexity by showing that the segment joining them, $\lb{\xbar}{\ybar}$, is entirely contained in $\intr S$. \begin{claimpx} \label{cl:thm:interior-convex:1} There exists a topological base element $X\subseteq\extspace$ (of the form given in Eq.~\ref{eq:h:3a}) that includes $\xbar$, and whose closure $\Xbar$ is included in $S$. \end{claimpx} \begin{proofx} Let $C=\extspace \setminus (\intr S)$, which is closed, being the complement of $\intr S$. Furthermore, $\xbar\not\in C$. Therefore, by Theorem~\ref{thm:i:1}(\ref{thm:i:1aa}), there exist disjoint open sets $X$ and $V$ such that $\xbar\in X$ and $C\subseteq V$. Without loss of generality, we can assume $X$ is a base element (since otherwise we can replace it by a base element containing $\xbar$ and included in $X$, by \Cref{pr:base-equiv-topo}). 
Since $X$ and $V$ are disjoint, $X$ is included in the closed set $\extspace \setminus V$, and therefore, \[ \Xbar \subseteq \extspace \setminus V \subseteq \extspace\setminus C = \intr S \subseteq S. \qedhere \] \end{proofx} Let $X$ be as in Claim~\ref{cl:thm:interior-convex:1}, and let $Y$ be a similar base element for $\ybar$ (so $\ybar\in Y$ and $\Ybar\subseteq S$). Next, let $X'=\cl(X\cap\Rn)$ and $Y'=\cl(Y\cap\Rn)$. Since $X$ is a base element of the form given in \eqref{eq:h:3a}, $X\cap\Rn$ is an intersection of open halfspaces in $\Rn$, which means that $X'$, its closure in $\Rn$, is an intersection of closed halfspaces in $\Rn$; in other words, $X'$ is a polyhedral convex set, as is $Y'$ by the same argument. Furthermore, none of these sets can be empty since $\Rn$ is dense in $\extspace$. Let $R=\conv(X' \cup Y')$ be the convex hull of their union. Since $X'$ and $Y'$ are polyhedral, $\cl R$, the closure of $R$ in $\Rn$, is also polyhedral (by \Cref{roc:thm19.6}). Thus, \begin{equation} \label{eqn:thm:interior-convex:1} \cl R = \Braces{ \zz\in\Rn :\: \zz\cdot\uu_i \leq b_i \mbox{~for~} i=1,\dotsc,k } \end{equation} for some $\uu_1,\dotsc,\uu_k\in\Rn$ and $b_1,\dotsc,b_k\in\R$, and some $k\geq 0$. Furthermore, without loss of generality, we can assume $\norm{\uu_i}=1$ for $i=1,\dotsc,k$ (since an inequality with $\uu_i=\zero$ can simply be discarded, and for all others, we can divide both sides of the inequality by $\norm{\uu_i}$). The closure of $R$ in $\extspace$ is exactly the convex hull of $\Xbar\cup\Ybar$: \begin{claimpx} \label{cl:thm:interior-convex:2} $\Rbar = \conv(\Xbar\cup\Ybar)$. \end{claimpx} \begin{proofx} By construction, $X\cap\Rn \subseteq X' \subseteq R$. Therefore, by Proposition~\ref{pr:closed-set-facts}(\ref{pr:closed-set-facts:b}), $\Xbar = \clbar{X\cap\Rn}\subseteq \Rbar$. Likewise, $\Ybar\subseteq\Rbar$. 
Therefore, since $\Rbar$ is convex (by Theorem~\ref{thm:e:6}), $\conv(\Xbar\cup\Ybar)\subseteq\Rbar$ (by \Cref{pr:conhull-prop}\ref{pr:conhull-prop:aa}). For the reverse inclusion, suppose $\zbar\in\Rbar$, implying there exists a sequence $\seq{\zz_t}$ in $R$ that converges to $\zbar$. Then by $R$'s definition, and since $X'$ and $Y'$ are convex, for each $t$, we can write $ \zz_t = (1-\lambda_t) \xx_t + \lambda_t \yy_t $ for some $\xx_t\in X'$, $\yy_t\in Y'$ and $\lambda_t\in [0,1]$ (by \Cref{roc:thm3.3}). By sequential compactness, the sequence $\seq{\xx_t}$ must have a convergent subsequence; by discarding all other elements, we can assume that the entire sequence converges to some point $\xbar'$, which thereby must be in $\Xbarp$. Furthermore, $\Xbarp=\clbar{X\cap\Rn}=\Xbar$ by \Cref{pr:closed-set-facts}(\ref{pr:closed-set-facts:aa},\ref{pr:closed-set-facts:b}). Repeating this argument, we can take the sequence $\seq{\yy_t}$ to converge to some point $\ybar'\in\Ybar$. Therefore, applying \Cref{cor:e:1}, $\zbar\in\lb{\xbar'}{\ybar'}\subseteq\conv(\Xbar\cup\Ybar)$. \end{proofx} Let $Q$ be the set \[ Q = \Braces{ \zbar\in\extspace : \zbar\cdot\uu_i < b_i \mbox{~for~} i=1,\dotsc,k }, \] which is the intersection of open halfspaces in $\extspace$ corresponding to the closed halfspaces (in $\Rn$) whose intersection defines $\cl R$ in \eqref{eqn:thm:interior-convex:1}. This set is clearly open (and actually is a base element). Moreover, \begin{equation} \label{eq:cl:thm:interior-convex:3} Q\subseteq\Qbar=\overline{Q\cap\Rn}\subseteq\clbar{\cl R}=\Rbar, \end{equation} where the first equality is by \Cref{pr:closed-set-facts}(\ref{pr:closed-set-facts:b}) (since $Q$ is open), the second inclusion is because $Q\cap\Rn\subseteq\cl R$, and the final equality is by \Cref{pr:closed-set-facts}(\ref{pr:closed-set-facts:aa}). \begin{claimpx} \label{cl:thm:interior-convex:4} $\xbar\in Q$. 
\end{claimpx} \begin{proofx} To prove the claim, we show $\xbar\cdot\uu_i < b_i$ for each $i\in\{1,\dotsc,k\}$. Let $H_i$ be the closed halfspace \[ H_i=\{\zbar\in\extspace :\: \zbar\cdot\uu_i \leq b_i\}. \] Then $R\subseteq \cl R \subseteq H_i$, so $\Rbar\subseteq H_i$ since $H_i$ is closed. Therefore, using Claim~\ref{cl:thm:interior-convex:2}, $X \subseteq \Xbar \subseteq \Rbar \subseteq H_i$. Since $X$ is open, we have $X\subseteq\intr H_i=\{\zbar\in\extspace :\: \zbar\cdot\uu_i < b_i\}$ with the equality following by \Cref{prop:intr:H}. Since $\xbar\in X$, this means $\xbar\cdot\uu_i < b_i$. \end{proofx} By \Cref{cl:thm:interior-convex:1}, $\Xbar\subseteq S$, and similarly $\Ybar\subseteq S$, so $\conv(\Xbar\cup\Ybar)\subseteq S$ by \Cref{pr:conhull-prop}(\ref{pr:conhull-prop:aa}). Together with \eqref{eq:cl:thm:interior-convex:3} and \Cref{cl:thm:interior-convex:2} this implies $Q\subseteq \Rbar=\conv(\Xbar\cup\Ybar)\subseteq S$. Since $Q$ is open, this further implies that $Q$ is included in the interior of $S$. By Claim~\ref{cl:thm:interior-convex:4}, $\xbar\in Q$, and by the same argument, $\ybar\in Q$. Since $Q$ is convex (by \Cref{pr:e1}\ref{pr:e1:d}), it follows that $\lb{\xbar}{\ybar}\subseteq Q \subseteq \intr S$, completing the proof. \end{proof} As a consequence, the convex hull of an open set is also open: \begin{corollary} \label{cor:convhull-open} Let $U\subseteq\extspace$ be open. Then its convex hull, $\conv{U}$, is also open. \end{corollary} \begin{proof} Let $S=\conv{U}$. Then $U\subseteq S$, implying, since $U$ is open, that $U\subseteq \intr S$. By Theorem~\ref{thm:interior-convex}, $\intr S$ is convex. Therefore, $S=\conv{U}\subseteq \intr S$ (by \Cref{pr:conhull-prop}\ref{pr:conhull-prop:aa}). Thus, $S= \intr S$ (since $\intr S\subseteq S$ always), so $S$ is open. 
\end{proof} \subsection{Closure of a convex set and convex hull of a closed set} \label{sec:closure-convex-set} In standard convex analysis, the closure of any convex set is also convex and equal to the intersection of closed halfspaces that contain it. In \Cref{thm:e:6}, we saw that the \emph{astral} closure of any convex set in $\Rn$ is also convex, and is moreover equal to the outer hull of the set, that is, the intersection of {astral} closed halfspaces that contain it. As we show in the next theorem, this is not true for arbitrary convex sets in $\extspace$. In particular, we show that for $n\geq 2$, there exist sets in $\extspace$ that are convex, but whose closures are not convex. This also means that the closure of such a set cannot be equal to its outer hull, since the outer hull of any set is always convex. \begin{theorem} \label{thm:closure-not-always-convex} For $n\geq 2$, there exists a convex set $S\subseteq\extspace$, whose closure $\Sbar$ is not convex. Consequently, its closure is not equal to its outer hull; that is, $\Sbar\neq\ohull S$. \end{theorem} \begin{proof} Let $n\geq 2$, and let $\ee_1$ and $\ee_2$ be the first two standard basis vectors in $\Rn$. For $\alpha\in\R$, let $\Ralpha$ be the segment joining $\alpha \ee_1$ and $\limray{\ee_1}\plusl\limray{\ee_2}$, \[\Ralpha=\lb{\alpha \ee_1}{\,\limray{\ee_1}\plusl\limray{\ee_2}},\] and let $S$ be their union over all $\alpha\in\R$: \[ S = \bigcup_{\alpha\in\R} \Ralpha. \] We will show that $S$ is convex, but that its closure, $\Sbar$, is not (implying $\Sbar\neq\ohull S$ by \Cref{pr:e1}\ref{pr:e1:ohull}).
First, for $\alpha\in\R$, we can compute $\Ralpha$ explicitly using Corollary~\ref{cor:lb-with-finite}: \begin{align} \Ralpha &= \Braces{ \lambda \ee_1 + \alpha \ee_1 :\: \lambda \in \Rpos } \nonumber \\ &\qquad{} \cup\, \Braces{ \limray{\ee_1} \plusl (\lambda\ee_2 + \alpha\ee_1) :\: \lambda \in \Rpos } \nonumber \\ &\qquad{} \cup\, \Braces{ \limray{\ee_1} \plusl \limray{\ee_2} \plusl \lambda\alpha \ee_1 :\: \lambda\in [0,1] } \nonumber \\ &= \{ \lambda \ee_1 :\: \lambda \in [\alpha,+\infty) \} \,\cup\, \{ \limray{\ee_1} \plusl \lambda\ee_2 :\: \lambda \in \Rpos \} \,\cup\, \{ \limray{\ee_1} \plusl \limray{\ee_2} \}, \label{eq:thm:closure-not-always-convex:1} \end{align} using that $\limray{\ee_1}\plusl\zbar\plusl\beta\ee_1=\limray{\ee_1}\plusl\zbar$ for any $\zbar\in\eRn$ and any $\beta\in\R$ (by the Projection Lemma \ref{lemma:proj}). Thus, \begin{equation} \label{eq:thm:closure-not-always-convex:2} S = \{ \lambda \ee_1 :\: \lambda\in\R \} \,\cup\, \{ \limray{\ee_1} \plusl \lambda\ee_2 :\: \lambda\in\Rpos \} \,\cup\, \{ \limray{\ee_1} \plusl \limray{\ee_2} \}. \end{equation} The set $S$ is convex by \Cref{pr:e1}(\ref{pr:e1:union}), because it is the union over the collection of sets $\{R_\alpha : \alpha\in\R\}$, which is updirected since $R_{\alpha}\cup R_{\alpha'}=R_{\min\set{\alpha,\alpha'}}$ for any $\alpha,\alpha'\in\R$. We next show that $\Sbar$ is \emph{not} convex. Let $\xbar=\limray{(-\ee_1)}$ and let $\ybar=\limray{\ee_1}\plusl\limray{\ee_2}$. From \eqref{eq:thm:closure-not-always-convex:2}, we see that both $\xbar$ and $\ybar$ are in $\Sbar$ since $\ybar\in S$ and $\xbar$ is the limit of the sequence $\seq{-t \ee_1}$, all of whose elements are in $S$. Let $\zz=\ee_2$. To prove $\Sbar$ is not convex, we will show that $\zz$ is on the segment joining $\xbar$ and $\ybar$, but that $\zz$ is not itself in $\Sbar$. 
Indeed, $\zz\not\in\Sbar$ since, for instance, the open set $\{ \xx \in \Rn :\: \xx\cdot\ee_2 > 1/2 \}$ includes $\zz$ but is entirely disjoint from $S$. It remains to show that $\zz\in\lb{\xbar}{\ybar}$. To do so, we construct sequences satisfying the conditions of Corollary~\ref{cor:e:1}. Specifically, for each $t$, let \begin{align*} \xx_t &= -t \ee_1, \\ \yy_t &= t(t-1) \ee_1 + t \ee_2, \\ \zz_t &= \paren{1-\frac{1}{t}} \xx_t + \frac{1}{t} \yy_t. \end{align*} Then $\xx_t\rightarrow\xbar$ and $\yy_t\rightarrow\ybar$ (by Theorem~\ref{thm:seq-rep}). Also, by plugging the expressions for $\xx_t$ and $\yy_t$ into the expression for $\zz_t$ and simplifying, we obtain $\zz_t=\ee_2$ for all $t$, so $\zz_t\rightarrow\zz$. Thus, as claimed, $\zz\in\lb{\xbar}{\ybar}$ by \Cref{cor:e:1}, so $\Sbar$ is not convex. \end{proof} Thus, the closure of a convex set need not be convex. On the other hand, the convex hull of a closed set is always closed: \begin{theorem} \label{thm:cnv-hull-closed-is-closed} Let $S\subseteq\extspace$ be closed (in $\extspace$). Then $\conv{S}$ is also closed. \end{theorem} \begin{proof} If $S=\emptyset$, then the claim is immediate, so we assume $S\neq\emptyset$. Let $\zbar\in\clbar{\conv{S}}$; we aim to show $\zbar\in\conv{S}$. Since $\zbar\in\clbar{\conv{S}}$, there exists a sequence $\seq{\zbar_t}$ in $\conv{S}$ with $\zbar_t\rightarrow\zbar$. For each $t$, by Theorem~\ref{thm:convhull-of-simpices}, $\zbar_t$ is in the outer hull of at most $n+1$ elements of $S$. That is, $\zbar_t\in\ohull\{\xbar_{1t},\dotsc,\xbar_{(n+1)t}\}$ for some $\xbar_{1t},\dotsc,\xbar_{(n+1)t}\in S$ (where we allow the same point to appear repeatedly so that each $\zbar_t$ is in the outer hull of exactly $n+1$ not necessarily distinct points). We can consider each sequence $\seq{\xbar_{it}}$ in turn, for $i=1,\dotsc,n+1$.
By sequential compactness, the sequence $\seq{\xbar_{it}}$ must have a convergent subsequence; by discarding the other elements (and the corresponding elements of the other sequences), we can ensure the entire sequence converges so that $\xbar_{it}\rightarrow\xbar_i$ for some $\xbar_i\in\extspace$. Moreover, since $S$ is closed, $\xbar_i\in S$. Let $\uu\in\Rn$. By \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}), for each $t$, \begin{equation} \label{eq:thm:cnv-hull-closed-is-closed:1} \zbar_t\cdot\uu \leq \max\{\xbar_{1t}\cdot\uu,\dotsc,\xbar_{(n+1)t}\cdot\uu\}. \end{equation} By Theorem~\ref{thm:i:1}(\ref{thm:i:1c}), $\zbar_t\cdot\uu\rightarrow\zbar\cdot\uu$ and $\xbar_{it}\cdot\uu\rightarrow\xbar_i\cdot\uu$ for $i=1,\dotsc,n+1$. Consequently, taking limits of both sides of \eqref{eq:thm:cnv-hull-closed-is-closed:1} yields \[ \zbar\cdot\uu \leq \max\{\xbar_{1}\cdot\uu,\dotsc,\xbar_{n+1}\cdot\uu\} \] since the maximum of $n+1$ numbers, as a function, is continuous. Therefore, $\zbar\in\ohull\{\xbar_1,\dotsc,\xbar_{n+1}\}\subseteq\conv{S}$ by \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}) and Theorem~\ref{thm:convhull-of-simpices}. \end{proof} \section{Astral cones} \label{sec:cones} In standard convex analysis, convex cones arise naturally and play important roles. For instance, both the recession cone and barrier cone of a function are convex cones. In this chapter, we extend the standard notion of a cone to astral space. As will be seen in later chapters, convex astral cones play a critical role in understanding properties of convex functions extended to astral space. For instance, in Chapter~\ref{sec:minimizers}, we will see how a convex astral cone that is an analog of the recession cone is central to how such functions are minimized. Astral cones will also play an important part in deriving some of the main separation theorems given in Chapter~\ref{sec:sep-thms}. 
\subsection{Definition and basic properties} \label{sec:cones-basic} Recall that a set $K\subseteq\Rn$ is a cone if it includes the origin and is closed under positive scalar multiplication, that is, if $\lambda\xx\in K$ for all $\lambda\in\Rstrictpos$ and all $\xx\in K$. To extend this notion to astral space, we might, as a first attempt, consider an exactly analogous definition. Accordingly, we say that a set $K\subseteq\extspace$ is a \emph{naive cone} if $\zero\in K$ and if $\lambda\xbar\in K$ for all $\lambda\in\Rstrictpos$ and all $\xbar\in K$. Clearly, a set $K\subseteq\Rn$ is a cone if and only if it is a naive cone. By its definition, an ordinary cone $K$ in $\Rn$ has the property that if a point $\xx$ is in $K$, then there exists a ray emanating from the origin that passes through $\xx$ and continues forth to infinity. This is not necessarily so for naive cones in astral space. For instance, in $\Rext=\extspac{1}$, the set $\{0,+\infty\}$ is a naive cone, but evidently, no ray or path connects the origin with $+\infty$. More generally, in $\extspace$, every set of icons $K\subseteq\corezn$ that includes the origin is a naive cone (as follows from Proposition~\ref{pr:i:8}\ref{pr:i:8d}), but every point in the set $K$ (other than $\zero$) is topologically disconnected from the origin. This suggests that perhaps this definition does not properly generalize the standard notion of a cone. Stepping back, we can say that a nonempty set $K\subseteq\Rn$ is a cone if for every point $\xx\in K$, the entire ray in the direction of $\xx$ is also included in $K$, meaning that $K$ includes the set $\ray{\xx}$ where \begin{equation} \label{eq:std-ray-defn} \ray{\xx} = \Braces{\lambda\xx : \lambda\in\Rpos} = \cone{\{\xx\}}. \end{equation} This is the usual way of defining such a ray, which, when generalized directly to astral space, leads to the notion of a naive cone, as just discussed.
Alternatively, however, we can view $\ray{\xx}$ in a different way, namely, as the intersection of all homogeneous closed halfspaces that include $\xx$, as follows from \Cref{pr:con-int-halfspaces}(\ref{roc:cor11.7.2}). This latter view of what is meant by a ray generalizes to astral space, leading also to a different definition for an astral cone, very much like how we defined convexity for astral sets in Section~\ref{sec:def-convexity}. There, the outer convex hull of a set $S\subseteq\extspace$ was defined as the intersection of all astral closed halfspaces $\chsua$, as defined in \eqref{eq:chsua-defn}, that include $S$. Convexity was then defined in terms of segments, the outer convex hull of pairs of points. For astral cones, we follow an analogous sequence of definitions. First, we say that an astral closed halfspace $\chsua$ is \emph{homogeneous} if $\beta=0$, that is, if it has the form \begin{equation} \label{eqn:homo-halfspace} \chsuz=\{\xbar\in\extspace : \xbar\cdot\uu \leq 0\} \end{equation} for some $\uu\in\Rn\setminus\{\zero\}$. The outer conic hull of a set is then defined like the outer convex hull, but now using homogeneous halfspaces: \begin{definition} Let $S\subseteq\extspace$. The \emph{outer conic hull} of $S$, denoted $\oconich S$, is the intersection of all homogeneous astral closed halfspaces that include $S$; that is, \[ \oconich S = \bigcap{\BigBraces{ \chsuz:\: \uu\in\Rn\setminus\{\zero\},\, S\subseteq \chsuz}}. \] \end{definition} Astral rays are then defined by analogy with standard rays: \begin{definition} Let $\xbar\in\extspace$. The \emph{astral ray through $\xbar$}, denoted $\aray{\xbar}$, is the intersection of all homogeneous astral closed halfspaces that include $\xbar$; that is, \[ \aray{\xbar} = \oconich\{\xbar\}.
\] \end{definition} Finally, we can define astral cones: \begin{definition} A nonempty set $K\subseteq\extspace$ is an \emph{astral cone} if $K$ includes the astral ray through $\xbar$ for all $\xbar\in K$, that is, if \[ \aray{\xbar}\subseteq K \quad \text{for all $\xbar\in K$.} \] \end{definition} We study properties of astral cones and the outer conic hull. To begin, the outer conic hull of any set can be expressed in terms of an outer convex hull, as shown next. Consequently, the astral ray $\aray{\xbar}$ is equal to the segment joining $\zero$ and $\limray{\xbar}$. Informally, this ray follows a path from the origin through $\xbar$, continuing on in the ``direction'' of $\xbar$, and reaching $\limray{\xbar}$ in the limit (see Theorem~\ref{thm:y-xbar-is-path}). For a point $\xx\in\Rn$, $\aray{\xx}$ is thus the standard ray, $\ray{\xx}$, together with $\limray{\xx}$, the one additional point in its astral closure; that is, $\aray{\xx}=(\ray{\xx})\cup\{\limray{\xx}\}$. For example, in $\Rext$, the only astral cones are $\{ 0 \}$, $[0,+\infty]$, $[-\infty,0]$, and $\Rext$. \begin{theorem} \label{thm:oconichull-equals-ocvxhull} Let $S\subseteq\extspace$. Then \begin{equation} \label{eq:thm:oconichull-equals-ocvxhull:2} \ohull{S} \subseteq \oconich{S} = \ohull{(\{\zero\} \cup \lmset{S})}. \end{equation} In particular, for all $\xbar\in\extspace$, $\aray{\xbar}=\lb{\zero}{\limray{\xbar}}$. \end{theorem} \begin{proof} For the inclusion appearing in \eqref{eq:thm:oconichull-equals-ocvxhull:2}, by definition, the outer conic hull of $S$ is the intersection of all homogeneous astral closed halfspaces that include $S$, while its outer convex hull is the intersection of all astral closed halfspaces that include $S$, whether homogeneous or not. Therefore, the latter is included in the former. It remains to prove the equality in \eqref{eq:thm:oconichull-equals-ocvxhull:2}. Let $U=\ohull{(\{\zero\} \cup \lmset{S})}$. We aim to show that $\oconich{S}=U$. 
Let $\xbar\in\oconich{S}$, and let $\uu\in\Rn$. We claim that \begin{equation} \label{eq:thm:oconichull-equals-ocvxhull:1} \xbar\cdot\uu \leq \max\Braces{ 0, \; \sup_{\zbar\in S} \limray{\zbar}\cdot\uu }. \end{equation} This is immediate if $\uu=\zero$. Further, if $\zbar\cdot\uu>0$ for some $\zbar\in S$, then $\limray{\zbar}\cdot\uu=+\infty$, implying that the right-hand side of \eqref{eq:thm:oconichull-equals-ocvxhull:1} is also $+\infty$ so that the equation holds trivially. Otherwise, $\uu\neq\zero$ and $\zbar\cdot\uu\leq 0$ for all $\zbar\in S$. In this case, $\limray{\zbar}\cdot\uu\leq 0$ for all $\zbar\in S$ so that the right-hand side of \eqref{eq:thm:oconichull-equals-ocvxhull:1} is equal to~$0$. Furthermore, in this case, $S$ is included in the homogeneous astral closed halfspace $\chsuz$ (Eq.~\ref{eqn:homo-halfspace}). Therefore, by definition of outer conic hull, $\xbar\in\oconich{S}\subseteq\chsuz$, so $\xbar\cdot\uu\leq 0$. Together with the preceding, this proves \eqref{eq:thm:oconichull-equals-ocvxhull:1} in this case as well. Since \eqref{eq:thm:oconichull-equals-ocvxhull:1} holds for all $\uu\in\Rn$, it follows by \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}) that $\xbar\in U$. Thus, $\oconich{S}\subseteq U$. To prove the reverse inclusion, suppose now that $\xbar\in U$. Suppose $S$ is included in some homogeneous astral closed halfspace $\chsuz$ where $\uu\in\Rn\setminus\{\zero\}$. Then for all $\zbar\in S$, $\zbar\cdot\uu\leq 0$, implying $\limray{\zbar}\cdot\uu\leq 0$. Since $\xbar\in U$, by \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}), \eqref{eq:thm:oconichull-equals-ocvxhull:1} must hold. Furthermore, the preceding remarks imply that its right-hand side is equal to~$0$. Therefore, $\xbar\cdot\uu\leq 0$, so $\xbar\in\chsuz$.
Since this holds for all homogeneous astral closed halfspaces $\chsuz$ that include $S$, it follows that $\xbar\in\oconich{S}$, and thus that $U\subseteq\oconich{S}$, completing the proof of \eqref{eq:thm:oconichull-equals-ocvxhull:2}. Finally, for all $\xbar\in\extspace$, by the foregoing, \[ \aray{\xbar} = \oconich\{\xbar\} = \ohull{(\{\zero\}\cup\{\limray{\xbar}\})} = \ohull\{\zero,\limray{\xbar}\} = \lb{\zero}{\limray{\xbar}}, \] proving the final claim. \end{proof} Astral cones have many of the same properties as ordinary cones. In particular, every astral cone is also a naive cone, and its intersection with $\Rn$ is a standard cone: \begin{proposition} \label{pr:ast-cone-is-naive} Let $K\subseteq\extspace$ be an astral cone, and let $\xbar\in\extspace$. Suppose either $\xbar\in K$ or $\limray{\xbar}\in K$. Then $\alpha \xbar\in K$ for all $\alpha\in\Rextpos$. Consequently, $\zero\in K$, $K$ is a naive cone, and $K\cap\Rn$ is a cone in $\Rn$. \end{proposition} \begin{proof} We can write $\xbar=\VV\omm\plusl\qq$ for some $\VV\in\Rnk$, $k\geq 0$, and $\qq\in\Rn$. Then by Proposition~\ref{pr:i:8}(\ref{pr:i:8d}), \begin{equation} \label{eq:pr:ast-cone-is-naive:1} \limray{\xbar} =\VV\omm\plusl\limray{\qq}=[\VV,\qq]\omm. \end{equation} If $\xbar\in K$ then again \[ \lb{\zero}{\limray{\xbar}} = \aray{\xbar} \subseteq K, \] with the equality from Theorem~\ref{thm:oconichull-equals-ocvxhull}, and the inclusion from $K$ being an astral cone. If instead $\limray{\xbar}\in K$, then \[ \lb{\zero}{\limray{\xbar}} = \aray{\limray{\xbar}} \subseteq K \] by the same reasoning (since $\limray{(\limray{\xbar})}=\limray{\xbar}$). Thus, in either case, for $\alpha\in\Rextpos$, \[ \alpha\xbar \in \lb{\zero}{\limray{\xbar}} \subseteq K. \] Here, the first inclusion is by an application of Theorem~\ref{thm:lb-with-zero} to $\limray{\xbar}$, whose form is given in \eqref{eq:pr:ast-cone-is-naive:1} (and noting that $\alpha\xbar = \VV\omm\plusl\alpha\qq$ if $\alpha>0$). 
In particular, taking $\alpha=0$, this shows that $\zero\in K$ (since $K$ is nonempty), and taking $\alpha\in\Rstrictpos$, this shows that $K$ is a naive cone. Applied to points in $K\cap\Rn$, it follows that this set is a cone. \end{proof} Here are some closure properties of astral cones: \begin{theorem} Let $K\subseteq\extspace$ be an astral cone, let $\A\in\Rmn$, and let $A:\extspace\rightarrow\extspac{m}$ be the associated linear map (so that $A(\xbar)=\A\xbar$ for $\xbar\in\extspace$). Then $A(K)$ is also an astral cone. \end{theorem} \begin{proof} Since $K$ is nonempty, so is $A(K)$. Let $\zbar\in A(K)$, implying $\zbar=A(\xbar)=\A\xbar$ for some $\xbar\in K$. Then \begin{align*} \aray{\zbar} = \lb{\zero}{\limray{\zbar}} &= \lb{\zero}{\limray{(A(\xbar))}} \\ &= \lb{A(\zero)}{A(\limray{\xbar})} \\ &= A\bigParens{\lb{\zero}{\limray{\xbar}}} = A(\aray{\xbar}) \subseteq A(K). \end{align*} The first and last equalities are by Theorem~\ref{thm:oconichull-equals-ocvxhull}, and the fourth is by Theorem~\ref{thm:e:9}. The inclusion is because $K$ is an astral cone. Thus, $A(K)$ is an astral cone. \end{proof} \begin{theorem} \label{thm:affine-inv-ast-cone} Let $\A\in\R^{m\times n}$, $\ebar\in\corez{m}$, and let $F:\extspace\rightarrow\extspac{m}$ be the affine map $F(\zbar)=\ebar\plusl \A\zbar$ for all $\zbar\in\extspace$. Let $K\subseteq\extspac{m}$ be an astral cone, and assume $\ebar\in K$. Then $F^{-1}(K)$ is also an astral cone. \end{theorem} \begin{proof} Since $F(\zero)=\ebar\in K$, the set $F^{-1}(K)$ is not empty. Let $\zbar\in F^{-1}(K)$, so that $F(\zbar)\in K$. Let \[ S = \aray{F(\zbar)} = \lb{\zero}{\limray{F(\zbar)}} \] (using Theorem~\ref{thm:oconichull-equals-ocvxhull} for the second equality). Note that \begin{equation} \label{eq:thm:affine-inv-ast-cone:1} \limray{F(\zbar)} = \limray{(\ebar\plusl \A \zbar)} = \ebar \plusl \A (\limray\zbar) = F(\limray{\zbar}), \end{equation} where the second equality uses Propositions~\ref{pr:i:8}(\ref{pr:i:8d}) and~\ref{pr:h:4}(\ref{pr:h:4g}).
Then \[ F(\aray{\zbar}) = F\Parens{\lb{\zero}{\limray{\zbar}}} = \lb{F(\zero)}{F(\limray{\zbar})} = \lb{\ebar}{\limray{F(\zbar)}} \subseteq S \subseteq K. \] The second equality is by Theorem~\ref{thm:e:9}, and the third is by \eqref{eq:thm:affine-inv-ast-cone:1}. The first inclusion is because $S$ is convex and includes both $\limray{F(\zbar)}$ and $\ebar$ (by Corollary~\ref{cor:d-in-lb-0-dplusx} with $\yy=\zero$ and $\xbar= \A (\limray\zbar)$, again using Eq.~\ref{eq:thm:affine-inv-ast-cone:1}). The last inclusion is because $K$ is an astral cone that includes $F(\zbar)$. Thus, $\aray{\zbar}\subseteq F^{-1}(K)$. Therefore, $F^{-1}(K)$ is an astral cone. \end{proof} Theorem~\ref{thm:affine-inv-ast-cone} is false in general if $\ebar$ is replaced by a general astral point that is not necessarily an icon. For instance, in $\Rext$, let $K=[0,+\infty]$, which is an astral cone, and let $F(\barx)=\barx+1$ for $\barx\in\Rext$. Then $F^{-1}(K)=[-1,+\infty]$, which is not an astral cone. \subsection{Convex astral cones} Of particular importance are convex astral cones, that is, astral cones that are convex. \begin{proposition} \label{pr:astral-cone-props} Let $S\subseteq\extspace$. \begin{letter-compact} \item \label{pr:astral-cone-props:b} The intersection of an arbitrary collection of astral cones is an astral cone. \item \label{pr:astral-cone-props:c} Suppose $S$ is convex. Then $S$ is a convex astral cone if and only if $\zero\in S$ and $\lmset{S}\subseteq S$. \item \label{pr:astral-cone-props:e} Any intersection of homogeneous astral closed halfspaces in $\extspace$ is a closed convex astral cone. Therefore, $\oconich{S}$ is a closed convex astral cone, as is $\aray{\xbar}$ for all $\xbar\in\extspace$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:astral-cone-props:b}):} Similar to the proof of Proposition~\ref{pr:e1}(\ref{pr:e1:b}).
\pfpart{Part~(\ref{pr:astral-cone-props:c}):} If $S$ is an astral cone then $\aray{\xbar}=\lb{\zero}{\limray{\xbar}}$ is included in $S$ for all $\xbar\in S$; in particular, $\zero\in S$ and $\limray{\xbar}\in S$, proving the ``only if'' direction. For the converse, suppose $\zero\in S$ and $\lmset{S}\subseteq S$. Then for all $\xbar\in S$, $\limray{\xbar}\in S$, implying $\aray{\xbar}=\lb{\zero}{\limray{\xbar}}\subseteq S$ since $S$ is convex. Therefore, $S$ is an astral cone. \pfpart{Part~(\ref{pr:astral-cone-props:e}):} Let $\chsuz$ be a homogeneous astral closed halfspace (as in Eq.~\ref{eqn:homo-halfspace}), for some $\uu\in\Rn\setminus\{\zero\}$. Then $\chsuz$ is convex by Proposition~\ref{pr:e1}(\ref{pr:e1:c}). Further, $\chsuz$ includes $\zero$, and if $\xbar\in\chsuz$ then $\xbar\cdot\uu\leq 0$, implying $\limray{\xbar}\cdot\uu\leq 0$, so that $\limray{\xbar}\in\chsuz$. Therefore, $\chsuz$ is a convex astral cone by part~(\ref{pr:astral-cone-props:c}). That an intersection of homogeneous astral closed halfspaces is a convex astral cone now follows by part~(\ref{pr:astral-cone-props:b}) and Proposition~\ref{pr:e1}(\ref{pr:e1:b}). By definition, this includes $\oconich{S}$ and $\aray{\xbar}$ for any $\xbar\in\extspace$. \qedhere \end{proof-parts} \end{proof} The convex hull of an astral cone is a convex astral cone: \begin{theorem} Let $K\subseteq\extspace$ be an astral cone. Then its convex hull, $\conv{K}$, is also an astral cone (and therefore a convex astral cone). \end{theorem} \begin{proof} First, $\zero\in K$ (by Proposition~\ref{pr:ast-cone-is-naive}), so $\zero\in\conv{K}$. Let $\xbar\in \conv{K}$. In light of Proposition~\ref{pr:astral-cone-props}(\ref{pr:astral-cone-props:c}), to prove the theorem, it suffices to show that $\limray{\xbar}\in\conv{K}$. By Theorem~\ref{thm:convhull-of-simpices}, there exists a finite subset $V\subseteq K$ such that $\xbar\in\ohull{V}$. Thus, \[ \xbar \in \ohull{V} \subseteq \oconich{V} = \ohull{(\{\zero\}\cup \lmset{V})} \subseteq \conv{K}.
\] The second inclusion and the equality are by Theorem~\ref{thm:oconichull-equals-ocvxhull}. The last inclusion is by Theorem~\ref{thm:convhull-of-simpices} since $\{\zero\}\cup \lmset{V}$ is of finite cardinality and, by Proposition~\ref{pr:ast-cone-is-naive}, is included in $K$ since $K$ is an astral cone. Since $\oconich{V}$ is an astral cone (by Proposition~\ref{pr:astral-cone-props}\ref{pr:astral-cone-props:e}), it follows that $\limray{\xbar}\in\oconich{V}\subseteq\conv{K}$ by Proposition~\ref{pr:ast-cone-is-naive}. \end{proof} Analogous to similar results in standard convex analysis, the sequential sum of two astral cones is also an astral cone. Furthermore, the sequential sum of two or more convex astral cones is equal to the convex hull of their union: \begin{theorem} \label{thm:seqsum-ast-cone} ~ \begin{letter-compact} \item \label{thm:seqsum-ast-cone:a} Let $K_1$ and $K_2$ be astral cones in $\extspace$. Then $K_1\seqsum K_2$ is also an astral cone. \item \label{thm:seqsum-ast-cone:b} Let $K_1,\ldots,K_m$ be convex astral cones in $\extspace$, with $m\geq 1$. Then \[ K_1\seqsum\dotsb\seqsum K_m = \conv{\Parens{K_1\cup\dotsb\cup K_m}}, \] which is also a convex astral cone. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:seqsum-ast-cone:a}):} Let $Z=K_1\seqsum K_2$, and let $\zbar\in Z$. We aim to show that $\aray{\zbar}\subseteq Z$, which will prove that $Z$ is an astral cone. Since $\zbar\in Z$, we must have $\zbar\in \xbar_1\seqsum\xbar_2$ for some $\xbar_1\in K_1$ and $\xbar_2\in K_2$. For $i\in\{1,2\}$, let $K'_i=\aray{\xbar_i}=\lb{\zero}{\limray{\xbar_i}}$, and let $Z'=K'_1\seqsum K'_2$. Note that $K'_i\subseteq K_i$ since $K_i$ is an astral cone, so $Z'\subseteq Z$ (by Proposition~\ref{pr:seqsum-props}\ref{pr:seqsum-props:b}). 
Further, each $K'_i$ is closed and convex (Proposition~\ref{pr:e1}\ref{pr:e1:ohull}), implying $Z'$ is as well, by \Cref{prop:seqsum-multi}(\ref{prop:seqsum-multi:closed},\ref{prop:seqsum-multi:convex}). Therefore, for $t\geq 1$, \[ t\zbar\in (t\xbar_1)\seqsum(t\xbar_2)\subseteq K'_1\seqsum K'_2. \] The first inclusion is by Theorem~\ref{thm:distrib-seqsum} (applied with $\A= t\Iden$, where $\Iden$ is the $n\times n$ identity matrix, and with $X=\{\xbar_1\}$ and $Y=\{\xbar_2\}$). The second is because, for $i\in\{1,2\}$, $\xbar_i\in K'_i$ and each $K'_i$ is an astral cone and therefore also a naive cone (by Propositions~\ref{pr:ast-cone-is-naive} and~\ref{pr:astral-cone-props}\ref{pr:astral-cone-props:e}). Taking limits, it follows that $\limray{\zbar}=\lim(t\zbar)\in Z'$, since $Z'$ is closed. In addition, $\zero\in Z'$ since $\zero$ is in both $K'_1$ and $K'_2$. Thus, $\aray{\zbar}=\lb{\zero}{\limray{\zbar}}\subseteq Z'\subseteq Z$ since $Z'$ is convex. \pfpart{Part~(\ref{thm:seqsum-ast-cone:b}):} Let $J=K_1\seqsum\dotsb\seqsum K_m$ and let $U=K_1\cup \dotsb \cup K_m$. Then $J$ is convex by \Cref{prop:seqsum-multi}(\ref{prop:seqsum-multi:convex}), and is an astral cone by part~(\ref{thm:seqsum-ast-cone:a}) (applying both repeatedly). We aim to show that $J=\conv{U}$. We have \[ J \subseteq m \conv{\Parens{\bigcup_{i=1}^m K_i}} = \conv{\Parens{\bigcup_{i=1}^m (m K_i)}} \subseteq \conv{\Parens{\bigcup_{i=1}^m K_i}} = \conv{U}. \] The first inclusion is by Theorem~\ref{thm:seqsum-in-union}. The first equality is by Corollary~\ref{cor:thm:e:9b} (applied to the linear map $\xbar\mapsto m\xbar$). The last inclusion is because each $K_i$ is an astral cone (and by Proposition~\ref{pr:ast-cone-is-naive}), implying $m K_i\subseteq K_i$ for $i=1,\ldots,m$. On the other hand, since $\zero$ is in each astral cone $K_i$, \[ K_1 = K_1 \seqsum \underbrace{ \zero \seqsum \dotsb \seqsum \zero }_{m-1} \subseteq K_1 \seqsum (K_2\seqsum \dotsb \seqsum K_m) = J. 
\] By symmetry, this also shows that $K_i\subseteq J$ for $i=1,\ldots,m$. Thus, $U\subseteq J$, implying $\conv{U}\subseteq J$, since $J$ is convex. This completes the proof. \qedhere \end{proof-parts} \end{proof} We next show that an astral cone is convex if and only if it is closed under the sequential sum operation. An analogous fact from standard convex analysis is given in \Cref{pr:scc-cone-elts}(\ref{pr:scc-cone-elts:d}). \begin{theorem} \label{thm:ast-cone-is-cvx-if-sum} Let $K\subseteq\extspace$ be an astral cone. Then $K$ is convex if and only if $K\seqsum K\subseteq K$ (that is, if and only if $\xbar\seqsum\ybar\subseteq K$ for all $\xbar,\ybar\in K$). \end{theorem} \begin{proof} Suppose $K$ is convex. Then \[ K\seqsum K = \conv{(K\cup K)} = K, \] where the first equality is by Theorem~\ref{thm:seqsum-ast-cone}(\ref{thm:seqsum-ast-cone:b}), and the second is because $K$ is convex. For the converse, suppose $K\seqsum K\subseteq K$. Let $\xbar,\ybar\in K$. To show that $K$ is convex, we aim to show that $\lb{\xbar}{\ybar}\subseteq K$. Let $J = (\aray{\xbar})\seqsum(\aray{\ybar})$. Then $J\subseteq K\seqsum K\subseteq K$ since $K$ is an astral cone (so that $\aray{\xbar}\subseteq K$ and $\aray{\ybar}\subseteq K$). Further, $\xbar\in\aray{\xbar}$ and $\zero\in\aray{\ybar}$ so $\xbar\in\xbar\seqsum\zero\subseteq J$; similarly, $\ybar\in J$. We also have that $\aray{\xbar}$ and $\aray{\ybar}$ are convex (by Proposition~\ref{pr:astral-cone-props}\ref{pr:astral-cone-props:e}), so $J$ is convex by \Cref{prop:seqsum-multi}(\ref{prop:seqsum-multi:convex}). Consequently, $\lb{\xbar}{\ybar}\subseteq J \subseteq K$, completing the proof. \end{proof} Theorem~\ref{thm:ast-cone-is-cvx-if-sum} is false in general if, instead of sequential sum, we were to use leftward addition. 
In other words, it is not the case, in general, that an astral cone is convex if and only if it is closed under leftward addition, as shown in the next example: \begin{example} Let $K$ be the set of all points in $\extspac{3}$ that can be written in the form $\limray{\vv_1}\plusl\dotsb\plusl\limray{\vv_k}\plusl\vv_{k+1}$ where $\vv_1,\ldots,\vv_{k+1}\in\R^3$, and such that for all $j\in\{1,\ldots,k+1\}$, if $\vv_j\cdot\ee_3\neq 0$ then $\vv_i\in\{\ee_1,\ee_2\}$ for some $i<j$. For instance, $\limray{\ee_2}\plusl\ee_3$ is in $K$, but $\limray{\ee_3}\plusl\ee_2$ is not. It can be checked using Theorem~\ref{thm:lb-with-zero} that $K$ is an astral cone. It also can be checked, by direct argument, that $K$ is closed under leftward addition (so that $\xbar\plusl\ybar\in K$ for all $\xbar,\ybar\in K$). Nevertheless, $K$ is not convex. For instance, let $\xbar=\limray{\ee_1}\plusl\ee_3$ and $\ybar=\limray{\ee_2}\plusl\ee_3$, which are both in $K$. Also, let $\zbar=\limray{(\ee_1+\ee_2)}\plusl\ee_3$. Then $\zbar\in\lb{\xbar}{\ybar}$, as can be seen from Example~\ref{ex:seg-oe1-oe2} combined with Theorem~\ref{thm:e:9}. However, $\zbar\not\in K$, so $K$ is not convex. \end{example} As mentioned earlier, every set of icons that includes the origin is a naive cone. This is certainly not the case for astral cones; indeed, the only set of icons that is an astral cone is the singleton $\{\zero\}$. Nevertheless, as we show next, the convex hull of any set of icons that includes the origin is a convex astral cone. Moreover, every convex astral cone can be expressed in this way, and more specifically, as the convex hull of all the icons that it includes. \begin{theorem} \label{thm:ast-cvx-cone-equiv} Let $K\subseteq\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:ast-cvx-cone-equiv:a} $K$ is a convex astral cone. \item \label{thm:ast-cvx-cone-equiv:b} $K=\conv{E}$ for some set of icons $E\subseteq\corezn$ that includes the origin. 
\item \label{thm:ast-cvx-cone-equiv:c} $\zero\in K$ and $K=\conv{(K \cap \corezn)}$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:ast-cvx-cone-equiv:c}) $\Rightarrow$ (\ref{thm:ast-cvx-cone-equiv:b}): } Set $E=K\cap \corezn$, which must include $\zero$ by assumption; the implication then follows immediately. \pfpart{(\ref{thm:ast-cvx-cone-equiv:b}) $\Rightarrow$ (\ref{thm:ast-cvx-cone-equiv:a}): } Suppose $K=\conv{E}$ for some $E\subseteq\corezn$ with $\zero\in E$. Then $K$ is convex and includes $\zero$. Let $\xbar\in K$. By Proposition~\ref{pr:astral-cone-props}(\ref{pr:astral-cone-props:c}), to show $K$ is an astral cone, it suffices to show that $\limray{\xbar}\in K$. Since $\xbar\in\conv{E}$, by Theorem~\ref{thm:convhull-of-simpices}, there exists a finite subset $V\subseteq E$ such that $\xbar\in\ohull{V}$. Without loss of generality, we assume $\zero\in V$. We claim that $\limray{\xbar}$ is also in $\ohull{V}$. To see this, let $\uu\in\Rn$. If $\xbar\cdot\uu\leq 0$, then \[ \limray{\xbar}\cdot\uu \leq 0 \leq \max_{\ebar\in V} \ebar\cdot\uu \] where the second inequality is because $\zero\in V$. Otherwise, if $\xbar\cdot\uu>0$ then \[ 0 < \xbar\cdot\uu \leq \max_{\ebar\in V} \ebar\cdot\uu, \] with the second inequality following from \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}). Since all of the points in $V$ are icons, this implies that $\ebar\cdot\uu=+\infty$ for some $\ebar\in V$ (by Proposition~\ref{pr:icon-equiv}\ref{pr:icon-equiv:a},\ref{pr:icon-equiv:b}). Therefore, \[ \limray{\xbar}\cdot\uu \leq \max_{\ebar\in V} \ebar\cdot\uu \] in this case as well. From \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}), it follows that $\limray{\xbar}\in\ohull{V}\subseteq\conv{E}$, as claimed, and thus that $K$ is an astral cone.
\pfpart{(\ref{thm:ast-cvx-cone-equiv:a}) $\Rightarrow$ (\ref{thm:ast-cvx-cone-equiv:c}): } Suppose $K$ is a convex astral cone; then $\zero\in K$ by Proposition~\ref{pr:ast-cone-is-naive}. Let $E=K\cap\corezn$. Since $E\subseteq K$ and $K$ is convex, $\conv{E}\subseteq K$ (by Proposition~\ref{pr:conhull-prop}\ref{pr:conhull-prop:aa}). For the reverse inclusion, let $\xbar\in K$. Then both $\zero$ and $\limray{\xbar}$ are in $K$ by Proposition~\ref{pr:ast-cone-is-naive}. Since they are both icons (Proposition~\ref{pr:i:8}\ref{pr:i:8-infprod}), they also are both in $E$. Therefore, \[ \xbar \in \aray{\xbar} = \lb{\zero}{\limray{\xbar}} \subseteq \conv{E}, \] where the equality is by Theorem~\ref{thm:oconichull-equals-ocvxhull}, and the last inclusion is because $\conv{E}$ is convex and includes $\zero$ and $\limray{\xbar}$. Thus, $K\subseteq\conv{E}$, completing the proof. \qedhere \end{proof-parts} \end{proof} \subsection{Conic hull operations} As shown next, the outer conic hull of a set in $\Rn$ is simply equal to the astral closure of its (standard) conic hull. If the set is already a convex cone, then this is simply its closure. \begin{theorem} \label{thm:out-conic-h-is-closure} Let $S\subseteq\Rn$. Then $\oconich{S}=\clcone{S}$. Consequently, if $K\subseteq\Rn$ is a convex cone in $\Rn$, then $\oconich{K}=\Kbar$. \end{theorem} \begin{proof} Let $\chsuz$ be a homogeneous astral closed halfspace (as in Eq.~\ref{eqn:homo-halfspace}), for some $\uu\in\Rn\setminus\{\zero\}$. If $S\subseteq\chsuz$, then $S$ is also included in the standard homogeneous closed halfspace $\chsuz\cap\Rn$ in $\Rn$. By \Cref{pr:con-int-halfspaces}(\ref{roc:cor11.7.2}), this implies that \[ \cone{S} \subseteq \cl(\cone{S}) \subseteq \chsuz\cap\Rn \subseteq \chsuz. \] Since this holds for all homogeneous astral closed halfspaces that include $S$, it follows that $\cone{S}\subseteq\oconich{S}$, by definition of outer conic hull.
Therefore, $\clcone{S}\subseteq\oconich{S}$ since $\oconich{S}$ is closed in $\extspace$ (being an intersection of closed halfspaces). For the reverse inclusion, we show that $\oconich{S}$ is included in $\ohull{(\cone{S})}$, which is the same as $\clcone{S}$ by Theorem~\ref{thm:e:6}. To do so, let $\chsub$ be an astral closed halfspace (as in Eq.~\ref{eq:chsua-defn}) that includes $\cone{S}$, for some $\uu\in\Rn\setminus\{\zero\}$ and $\beta\in\R$. Since $\zero\in\cone{S}$, this implies that $0=\zero\cdot\uu\leq\beta$. Furthermore, for all $\xx\in\cone{S}$, we claim that $\xx\cdot\uu\leq 0$. This is because for all $\lambda\in\Rstrictpos$, $\lambda\xx$ is also in $\cone{S}$, implying $\lambda\xx\cdot\uu\leq \beta$. Therefore, if $\xx\cdot\uu>0$ then the left-hand side of this inequality could be made arbitrarily large in the limit $\lambda\rightarrow+\infty$, contradicting that it is bounded by $\beta$. Thus, \[ S \subseteq \cone{S} \subseteq \chsuz. \] This implies that \[ \oconich{S} \subseteq \chsuz \subseteq \chsub. \] The first inclusion is by definition of outer conic hull since $S\subseteq \chsuz$. The second inclusion is because $\beta\geq 0$, implying the stated inclusion of halfspaces. Since this holds for all astral closed halfspaces $\chsub$ that include $\cone{S}$, it follows, by definition of outer convex hull, that \[ \oconich{S} \subseteq \ohull{(\cone{S})} = \clcone{S}, \] with the equality from Theorem~\ref{thm:e:6}. This completes the proof that $\oconich{S}=\clcone{S}$. If $K$ is a convex cone in $\Rn$, then $\cone{K}=K$, yielding the final claim. \end{proof} Analogous to the conic hull operation (Section~\ref{sec:prelim:cones}), for any set $S\subseteq\extspace$, we define the \emph{astral conic hull of $S$}, denoted $\acone{S}$, to be the smallest convex astral cone that includes $S$, or equivalently, the intersection of all convex astral cones that include $S$. 
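To illustrate the definition in the simplest case, consider $\Rext$, where (as noted earlier) the only astral cones are $\{0\}$, $[0,+\infty]$, $[-\infty,0]$, and $\Rext$, all of which are convex. The astral conic hull of a set $S\subseteq\Rext$ is then simply the smallest of these four sets that includes $S$; for instance, \[ \acone{\{1\}} = [0,+\infty], \qquad \acone{\{-1\}} = [-\infty,0], \qquad \acone{\{-1,1\}} = \Rext. \]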
This operation is the hull operator for the set of all convex astral cones, and so has properties like those given in Proposition~\ref{pr:conhull-prop} for convex hull, as listed in the next proposition. \begin{proposition} \label{pr:acone-hull-props} Let $S,U\subseteq\extspace$. \begin{letter-compact} \item \label{pr:acone-hull-props:a} If $S\subseteq U$ and $U$ is a convex astral cone, then $\acone{S}\subseteq U$. \item \label{pr:acone-hull-props:b} If $S\subseteq U$ then $\acone{S}\subseteq\acone{U}$. \item \label{pr:acone-hull-props:c} If $S\subseteq U\subseteq\acone{S}$, then $\acone{U}=\acone{S}$. \item \label{pr:acone-hull-props:d} $\acone{S}\subseteq\oconich{S}$ with equality if $|S|<+\infty$. \end{letter-compact} \end{proposition} \begin{proof} Similar to that of Proposition~\ref{pr:conhull-prop}. \end{proof} In standard convex analysis, the conic hull of a set $S\subseteq\Rn$ is equal to the convex hull of the origin together with the union of all rays $\ray{\xx}$ for all $\xx\in S$. The astral conic hull of a set $S\subseteq\extspace$ can be expressed analogously, namely, as the convex hull of the origin together with all of the astral rays $\aray{\xbar}$ for all $\xbar\in S$. More simply, as given in the next theorem, this is the same as the convex hull of the origin with all of the endpoints of such astral rays, that is, with all of the points $\limray{\xbar}$ for all $\xbar\in S$; this is because, by including $\zero$ and $\limray{\xbar}$ in the convex hull, we also implicitly include the segment between these points, which is exactly the astral ray through $\xbar$. Also, if the set $S$ is already an astral cone, then its astral conic hull is the same as its convex hull. \begin{theorem} \label{thm:acone-char} Let $S\subseteq\extspace$. \begin{letter-compact} \item \label{thm:acone-char:a} $\acone{S}=\conv{(\{\zero\}\cup \lmset{S})}$. \item \label{thm:acone-char:b} If $S$ is an astral cone, then $\acone{S}=\conv{S}$. 
\end{letter-compact} \end{theorem} \begin{proof} Let $E=\{\zero\}\cup \lmset{S}$, which is a set of icons (by Proposition~\ref{pr:i:8}\ref{pr:i:8-infprod}) that includes the origin. \begin{proof-parts} \pfpart{Part~(\ref{thm:acone-char:a}):} By Theorem~\refequiv{thm:ast-cvx-cone-equiv}{thm:ast-cvx-cone-equiv:a}{thm:ast-cvx-cone-equiv:b}, $\conv{E}$ is a convex astral cone. Furthermore, for all $\xbar\in S$, we have $\limray{\xbar}\in E\subseteq\conv{E}$, implying that $\xbar$ is also in the astral cone $\conv{E}$ by Proposition~\ref{pr:ast-cone-is-naive}. Therefore, $S\subseteq\conv{E}$, implying $\acone{S}\subseteq\conv{E}$ by Proposition~\ref{pr:acone-hull-props}(\ref{pr:acone-hull-props:a}). For the reverse inclusion, since $\acone{S}$ is an astral cone, by Proposition~\ref{pr:ast-cone-is-naive}, it must include $\zero$ and $\limray{\xbar}$ for all $\xbar\in\acone{S}$, and so also for all $\xbar\in S$. Therefore, $E\subseteq\acone{S}$. Since $\acone{S}$ is convex, it follows that $\conv{E}\subseteq\acone{S}$ (by Proposition~\ref{pr:conhull-prop}\ref{pr:conhull-prop:aa}), completing the proof. \pfpart{Part~(\ref{thm:acone-char:b}):} The set $S$ is included in $\acone{S}$, which is convex. Therefore, $\conv{S}\subseteq\acone{S}$. On the other hand, since $S$ is an astral cone, it includes $\zero$ and $\limray{\xbar}$ for $\xbar\in S$ (Proposition~\ref{pr:ast-cone-is-naive}). Therefore, $E\subseteq S$, implying $\acone{S}=\conv{E}\subseteq\conv{S}$ by part~(\ref{thm:acone-char:a}). \qedhere \end{proof-parts} \end{proof} Combining Theorems~\ref{thm:seqsum-ast-cone}(\ref{thm:seqsum-ast-cone:b}) and~\ref{thm:acone-char}(\ref{thm:acone-char:a}) yields the following identity for the astral conic hull of a union: \begin{theorem} \label{thm:decomp-acone} Let $S_1,S_2\subseteq\extspace$. Then \[ (\acone{S_1}) \seqsum (\acone{S_2}) = \acone{(S_1\cup S_2)}. \] \end{theorem} \begin{proof} Let $K_i=\acone{S_i}$ for $i\in\{1,2\}$.
Then \begin{align*} K_1 \seqsum K_2 &= \conv(K_1 \cup K_2) \\ &= \conv\bigParens{\conv\Parens{\{\zero\}\cup\lmset{S_1}} \cup \conv\Parens{\{\zero\}\cup\lmset{S_2}} } \\ &= \conv\Parens{\{\zero\}\cup \lmset{S_1} \cup\lmset{S_2}} \\ &= \acone(S_1\cup S_2). \end{align*} The first equality is by Theorem~\ref{thm:seqsum-ast-cone}(\ref{thm:seqsum-ast-cone:b}) since $K_1$ and $K_2$ are convex astral cones. The second and fourth equalities are both by Theorem~\ref{thm:acone-char}(\ref{thm:acone-char:a}). The third equality is by Proposition~\ref{pr:conhull-prop}(\ref{pr:conhull-prop:c}) since \begin{align*} \{\zero\}\cup \lmset{S_1} \cup\lmset{S_2} &\subseteq \conv\Parens{\{\zero\}\cup\lmset{S_1}} \cup \conv\Parens{\{\zero\}\cup\lmset{S_2}} \\ &\subseteq \conv\Parens{\{\zero\}\cup\lmset{S_1}\cup\lmset{S_2}} \end{align*} (with the second inclusion following from Proposition~\ref{pr:conhull-prop}\ref{pr:conhull-prop:b}). \end{proof} For example, if $S=\{\xbar_1,\ldots,\xbar_m\}$ is a finite subset of $\extspace$, then Theorem~\ref{thm:decomp-acone} shows that $S$'s astral conic hull can be decomposed as a sequential sum of astral rays: \[ \acone{S} = (\aray{\xbar_1})\seqsum\dotsb\seqsum(\aray{\xbar_m}). \] Earlier, in Theorem~\ref{thm:e:7}, we gave a characterization in terms of sequences for when a point $\zbar\in\extspace$ is in the outer convex hull of a finite set of points $V=\{\xbar_1,\ldots,\xbar_m\}$. Specifically, we saw that $\zbar\in\ohull{V}$ if and only if there exist sequences $\seq{\xx_{it}}$ converging to each point $\xbar_i$ such that some convex combination $\sum_{i=1}^m \lambda_{it} \xx_{it}$ of points in those sequences converges to $\zbar$ (where $\lambda_{it}\in\Rpos$ with $\sum_{i=1}^m \lambda_{it}=1$). Here, we give a corresponding theorem for the outer conic hull of a finite set of points.
This characterization for when $\zbar$ is in $\oconich{V}$ is the same except that now we consider \emph{conic} combinations $\sum_{i=1}^m \lambda_{it} \xx_{it}$ with each $\lambda_{it}$ nonnegative, but no longer required to sum to $1$ for all $t$. Additionally, this characterization now requires that each of the sequences $\seq{\xx_{it}}$ is span-bound. \begin{theorem} \label{thm:oconic-hull-and-seqs} Let $V=\{\xbar_1,\dotsc,\xbar_m\}\subseteq\extspace$, and let $\zbar\in\extspace$. Then $\zbar\in\oconich{V}$ if and only if there exist sequences $\seq{\lambda_{it}}$ in $\Rpos$ and span-bound sequences $\seq{\xx_{it}}$ in $\Rn$, for $i=1,\dotsc,m$, such that: \begin{itemize} \item $\xx_{it}\rightarrow\xbar_i$ for $i=1,\dotsc,m$. \item The sequence $\zz_t=\sum_{i=1}^m \lambda_{it} \xx_{it}$ converges to $\zbar$. \end{itemize} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{``If'' ($\Leftarrow$):} Suppose sequences as stated in the theorem exist. We aim to show $\zbar\in\oconich{V}$. Let $\uu\in\Rn\setminus\{\zero\}$ and suppose that $V$ is included in the homogeneous astral closed halfspace $\chsuz$ (as in Eq.~\ref{eqn:homo-halfspace}). Let $i\in\{1,\ldots,m\}$. We claim that $\xx_{it}\cdot\uu\leq 0$ for all sufficiently large $t$. To see this, note first that $\xbar_i\in\chsuz$ so $\xbar_i\cdot\uu\leq 0$. If $\xbar_i\cdot\uu=0$ then $\xx_{it}\cdot\uu=0$ for all $t$ by Proposition~\ref{pr:rspan-sing-equiv-dual} since $\seq{\xx_{it}}$ is span-bound. Otherwise, if $\xbar_i\cdot\uu<0$ then $\{\ybar\in\extspace : \ybar\cdot\uu<0\}$ is a neighborhood of $\xbar_i$ which therefore must include all but finitely many of the sequence elements $\xx_{it}$. Thus, $\xx_{it}\cdot\uu<0$ for all $t$ sufficiently large. Therefore, for all $t$ sufficiently large, we must have $\xx_{it}\cdot\uu\leq 0$ for all $i=1,\ldots,m$. Since all of the $\lambda_{it}$'s are nonnegative, it follows that \[ \zz_t\cdot\uu = \sum_{i=1}^m \lambda_{it} \xx_{it}\cdot \uu \leq 0. 
\] That is, $\zz_t\in\chsuz$, implying that $\zbar=\lim\zz_t$ is also in $\chsuz$, being closed. Thus, $\zbar$ belongs to every homogeneous astral closed halfspace that includes $V$. Therefore, $\zbar$ is in $\oconich{V}$, the intersection of all such halfspaces. \pfpart{``Only if'' ($\Rightarrow$):} Assume $\zbar\in\oconich{V}$. Then $\zbar\in\ohull{(\{\zero\}\cup\lmset{V})}$ by Theorem~\ref{thm:oconichull-equals-ocvxhull}. Therefore, by Theorem~\ref{thm:e:7}, there exist sequences $\seq{\gamma_{it}}$ in $\Rpos$ and span-bound sequences $\seq{\yy_{it}}$ in $\Rn$, for $i=0,1,\ldots,m$, such that \begin{itemize} \item $\yy_{0t}\rightarrow\zero$; \item $\yy_{it}\rightarrow\limray{\xbar_i}$ for $i=1,\ldots,m$; \item $\sum_{i=0}^m \gamma_{it} = 1$ for all $t$; and \item $\zz_t=\sum_{i=0}^m \gamma_{it} \yy_{it} \rightarrow \zbar$. \end{itemize} Note that, because $\seq{\yy_{0t}}$ is span-bound and converges to $\zero$, we must have $\yy_{0t}=\zero$ for all $t$ since $\rspanset{\zero}=\{\zero\}$. To fulfill the requirements of the theorem, we will want to convert each sequence $\seq{\yy_{it}}$ converging to $\limray{\xbar_i}$ into a corresponding sequence converging to $\xbar_i$. For this, we prove the following lemma: \begin{lemma} \label{lem:seq-to-omegaxbar-to-xbar} Let $\xbar\in\extspace$, and let $\seq{\yy_t}$ be a span-bound sequence in $\Rn$ that converges to $\limray{\xbar}$. Then there exists a sequence $\seq{\alpha_t}$ in $\Rstrictpos$ such that the sequence $\xx_t=\yy_t / \alpha_t \rightarrow \xbar$. Furthermore, $\seq{\xx_t}$ is span-bound. \end{lemma} \begin{proofx} Let $\xbar=\VV\omm\plusl\qq$ be the canonical representation of $\xbar$, where $\VV\in\Rnk$, $k\geq 0$, $\qq\in\Rn$. As a first case, suppose $\qq=\zero$. Then $\xbar$ is an icon so $\limray{\xbar}=\xbar$ (by Propositions~\ref{pr:icon-equiv}\ref{pr:icon-equiv:a},\ref{pr:icon-equiv:c} and~\ref{pr:i:8}\ref{pr:i:8d}). 
Therefore, in this case, we can let $\alpha_t=1$ for all $t$ so that $\xx_t=\yy_t\rightarrow \limray{\xbar}=\xbar$, which is span-bound by assumption. Otherwise, $\qq\neq\zero$. Since $\seq{\yy_t}$ is span-bound, each point $\yy_t$ is in $\colspace{[\VV,\qq]}$ and so can be written in the form $\yy_t = \VV \bb_t + c_t \qq$ for some $\bb_t\in\Rk$ and $c_t\in\R$. Applying Theorem~\ref{thm:seq-rep} to this sequence, which converges to $\limray{\xbar}=\VV\omm\plusl \limray{\qq}=[\VV,\qq]\omm$, it follows from part~(\ref{thm:seq-rep:a}) of that theorem that $b_{t,i}\rightarrow+\infty$ for $i=1,\ldots,k$, and $c_t\rightarrow+\infty$; and from part~(\ref{thm:seq-rep:b}) that $b_{t,i+1}/b_{t,i}\rightarrow 0$ for $i=1,\ldots,k-1$, and, if $k>0$, that $c_t/b_{t,k}\rightarrow 0$. Let $\alpha_t=\max\{1,c_t\}$, which is equal to $c_t$ for all $t$ sufficiently large since $c_t\rightarrow+\infty$. Also, let $\xx_t=\yy_t/\alpha_t$ and $\bb'_t=\bb_t/\alpha_t$. We then have that $b'_{t,i+1}/b'_{t,i} = b_{t,i+1}/b_{t,i}\rightarrow 0$ for $i=1,\ldots,k-1$. Furthermore, for $i=1,\ldots,k$, \[ \frac{\alpha_t}{b_{t,i}} = \frac{c_t}{b_{t,k}} \cdot \prod_{j=i}^{k-1} \frac{b_{t,j+1}}{b_{t,j}} \rightarrow 0. \] The equality holds when $\alpha_t=c_t$ and when $b_{t,j}>0$ for all $t$ and $j=1,\ldots,k$, as will be the case for all $t$ sufficiently large. The convergence follows from the sequences' properties noted above. Consequently, $b'_{t,i}=b_{t,i}/\alpha_t\rightarrow+\infty$. Thus, for all $t$ sufficiently large (so that $\alpha_t=c_t$), \[ \xx_t = \frac{\yy_t}{\alpha_t} = \frac{\VV \bb_t + c_t \qq}{\alpha_t} = \VV \bb'_t + \qq \rightarrow \VV \omm \plusl \qq = \xbar \] where the convergence follows from Theorem~\ref{thm:seq-rep} having satisfied all its conditions. Furthermore, $\seq{\xx_t}$ is span-bound since $\yy_t$ is in $\rspanset{\limray{\xbar}}=\colspace{[\VV,\qq]}=\rspanxbar$, implying $\xx_t$ is as well. 
\end{proofx} Thus, by Lemma~\ref{lem:seq-to-omegaxbar-to-xbar}, for each $i=1,\ldots,m$, there exists a sequence $\seq{\alpha_{it}}$ in $\Rstrictpos$ such that the sequence $\xx_{it}=\yy_{it}/\alpha_{it}$ converges to $\xbar_i$ and is also span-bound. Let $\lambda_{it}=\gamma_{it} \alpha_{it}$ for all $t$; each $\lambda_{it}$ is nonnegative. Then by algebra (and since $\yy_{0t}=\zero$), \[ \zz_t = \sum_{i=0}^m \gamma_{it} \yy_{it} = \sum_{i=1}^m \lambda_{it} \xx_{it}. \] Thus, the sequences $\seq{\xx_{it}}$ and $\seq{\lambda_{it}}$ fulfill all the parts of the theorem. \qedhere \end{proof-parts} \end{proof} Since $\aray{\xbar}=\oconich{\{\xbar\}}$, we immediately obtain the following corollary: \begin{corollary} \label{cor:aray-and-seqs} Let $\xbar\in\extspace$ and let $\zbar\in\extspace$. Then $\zbar\in\aray{\xbar}$ if and only if there exist a sequence $\seq{\lambda_t}$ in $\Rpos$ and a span-bound sequence $\seq{\xx_t}$ in $\Rn$ such that $\xx_t\rightarrow\xbar$ and $\lambda_t\xx_t\rightarrow\zbar$. \end{corollary} Theorem~\ref{thm:oconic-hull-and-seqs} and Corollary~\ref{cor:aray-and-seqs} do not hold in general without the requirement that the sequences be span-bound. For instance, in $\R$, suppose $x_t=1/t$ and $\lambda_t=t$. Then $x_t\rightarrow 0$ and $\lambda_t x_t \rightarrow 1$, but $1\not\in\aray{0}=\oconich\{0\}=\{0\}$. Analogous to Theorem~\ref{thm:convhull-of-simpices}, the astral conic hull of a set $S$ is equal to the union of the outer conic hulls of all finite subsets of $S$: \begin{theorem} \label{thm:ast-conic-hull-union-fin} Let $S\subseteq\extspace$. Then $S$'s astral conic hull is equal to the union of the outer conic hulls of all finite subsets of $S$. That is, \begin{equation} \label{eq:thm:ast-conic-hull-union-fin:1} \acone{S} = \bigcup_{\scriptontop{V\subseteq S:}{|V|<+\infty}} \oconich{V}. \end{equation} \end{theorem} \begin{proof} Let $U$ denote the union on the right-hand side of \eqref{eq:thm:ast-conic-hull-union-fin:1}.
First, if $V\subseteq S$ and $|V|<+\infty$ then $\oconich{V}=\acone{V}\subseteq\acone{S}$ by Proposition~\ref{pr:acone-hull-props}(\ref{pr:acone-hull-props:b},\ref{pr:acone-hull-props:d}). Thus, $U\subseteq\acone{S}$. For the reverse inclusion, we have \begin{align*} \acone{S} &= \conv{(\{\zero\}\cup\lmset{S})} \\ &= \bigcup_{\scriptontop{V\subseteq S\cup\{\zero\}:}{|V|<+\infty}} \ohull(\lmset{V}) \\ &\subseteq \bigcup_{\scriptontop{V\subseteq S:}{|V|<+\infty}} \ohull(\{\zero\}\cup\lmset{V}) \\ &= \bigcup_{\scriptontop{V\subseteq S:}{|V|<+\infty}} \oconich{V} = U. \end{align*} The first equality is by Theorem~\ref{thm:acone-char}(\ref{thm:acone-char:a}). The second is by Theorem~\ref{thm:convhull-of-simpices}. The inclusion is because if $V\subseteq S\cup\{\zero\}$ then $V\subseteq \{\zero\}\cup V'$ where $V'=V\cap S\subseteq S$, implying $\ohull(\lmset{V})\subseteq \ohull(\{\zero\}\cup\lmset{V'})$. The third equality is by Theorem~\ref{thm:oconichull-equals-ocvxhull}. \end{proof} \section{Separation theorems} \label{sec:sep-thms} Separation theorems are at the foundation of convex analysis, proving, for instance, the intuitive fact that a hyperplane exists separating any closed convex set from any point outside the set. Separation theorems have many implications and applications, showing, for example, that every closed convex set is equal to the intersection of all closed halfspaces that include it. In this section, we prove analogous separation theorems for astral space. We show that any two closed convex sets in astral space can be separated in a strong sense using an astral hyperplane. We will see that this has several consequences. For instance, it implies that the outer convex hull of any set $S\subseteq\extspace$ is equal to the smallest closed convex set that includes $S$. These results use astral hyperplanes, defined by vectors $\uu$ in $\Rn$, to separate one set in $\extspace$ from another. 
In a dual fashion, we also show how ordinary convex sets in $\Rn$ can be separated using a kind of generalized hyperplane that is defined by an astral point in $\extspace$. \subsection{Separating convex sets} \label{sec:sep-cvx-sets} As defined in Section~\ref{sec:prelim-sep-thms}, two nonempty sets $X,Y\subseteq\Rn$ are strongly separated by a (standard) hyperplane $H$ if the two sets are included in the halfspaces on opposite sides of the hyperplane with all points in $X\cup Y$ a distance at least some $\epsilon>0$ from the separating hyperplane. In \Cref{roc:thm11.1}, we saw that this condition is equivalent to \begin{equation} \label{eqn:std-str-sep-crit1} \sup_{\xx\in X} \xx\cdot\uu < \inf_{\yy\in Y} \yy\cdot\uu, \end{equation} for some $\uu\in\Rn$. Taking a step further, it can be argued that this is also equivalent to the condition \begin{equation} \label{eqn:std-str-sep-crit2} \sup_{\xx\in X,\yy\in Y} (\xx-\yy)\cdot\uu < 0. \end{equation} Extending these definitions, we can now define an analogous notion of separation in which two astral sets are strongly separated if they are included in the astral halfspaces on opposite sides of some astral hyperplane by some positive margin: \begin{definition} Let $X$ and $Y$ be nonempty subsets of $\extspace$. Let $H=\{\xbar\in\extspace : \xbar\cdot\uu=\beta\}$ be an astral hyperplane defined by some $\uu\in\Rn\setminus\{\zero\}$ and $\beta\in\R$. We say that $H$ \emph{strongly separates} $X$ and $Y$ if there exists $\epsilon>0$ such that $\xbar\cdot\uu < \beta-\epsilon$ for all $\xbar\in X$, and $\ybar\cdot\uu > \beta+\epsilon$ for all $\ybar\in Y$. 
\end{definition} More simply, analogous to \eqref{eqn:std-str-sep-crit1}, $X$ and $Y$ are strongly separated by some hyperplane if and only if \begin{equation} \label{eqn:ast-strong-sep-defn} \sup_{\xbar\in X} \xbar\cdot \uu < \inf_{\ybar\in Y} \ybar\cdot \uu \end{equation} for some $\uu\in\Rn$ (where the possibility that $\uu=\zero$ is ruled out since $X$ and $Y$ are nonempty). When \eqref{eqn:ast-strong-sep-defn} holds, we say simply that $X$ and $Y$ are strongly separated by the vector $\uu\in\Rn$, or that $\uu$ strongly separates them (rather than referring to an astral hyperplane, such as the one given above). We say that $X$ and $Y$ are strongly separated if there exists $\uu\in\Rn$ that strongly separates them. The condition for (standard) strong separation given in \eqref{eqn:std-str-sep-crit2} means that $\zz\cdot\uu$ is bounded above by some negative constant for all $\zz\in X-Y$. We next give an analogous criterion for strong separation in astral space in which, for sets $X,Y\subseteq\extspace$, the set difference $X-Y$ is replaced by $\Xbar\seqsum(-\Ybar)$, using sequential sum after taking closures. \begin{theorem} \label{thm:ast-str-sep-seqsum-crit} Let $X,Y\subseteq\extspace$ be nonempty, and let $\uu\in\Rn$. Then \begin{equation} \label{eq:thm:ast-str-sep-seqsum-crit:1} \sup_{\xbar\in X} \xbar\cdot \uu < \inf_{\ybar\in Y} \ybar\cdot \uu \end{equation} if and only if \begin{equation} \label{eq:thm:ast-str-sep-seqsum-crit:2} \sup_{\zbar\in \Xbar\seqsum(-\Ybar)} \zbar\cdot \uu < 0. \end{equation} Consequently, $X$ and $Y$ are strongly separated if and only if \eqref{eq:thm:ast-str-sep-seqsum-crit:2} holds for some $\uu\in\Rn$. \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{``Only if'' ($\Rightarrow$):} Suppose \eqref{eq:thm:ast-str-sep-seqsum-crit:1} holds, implying $\uu\neq\zero$ (since $X$ and $Y$ are nonempty). Then there exist $\beta,\gamma\in\R$ such that \[ \sup_{\xbar\in X} \xbar\cdot \uu \leq \beta < \gamma \leq \inf_{\ybar\in Y} \ybar\cdot \uu.
\] Let $\zbar\in \Xbar\seqsum(-\Ybar)$, implying that $\zbar\in\xbar\seqsum(-\ybar)$ for some $\xbar\in \Xbar$ and $\ybar\in \Ybar$. Then $\xbar\cdot\uu\leq\beta<+\infty$ since $X$, and therefore also $\Xbar$, is included in the astral closed halfspace $\{\xbar'\in\extspace : \xbar'\cdot\uu\leq\beta\}$. Similarly, $\ybar\cdot\uu\geq\gamma$, so $-\ybar\cdot\uu\leq-\gamma<+\infty$. Thus, $\xbar\cdot\uu$ and $-\ybar\cdot\uu$ are summable, implying \[ \zbar\cdot\uu = \xbar\cdot\uu - \ybar\cdot\uu \leq \beta - \gamma \] where the equality is by Corollary~\ref{cor:seqsum-conseqs}(\ref{cor:seqsum-conseqs:b}). Since this holds for all $\zbar\in \Xbar\seqsum(-\Ybar)$, and since $\beta-\gamma$ is a negative constant, this proves the claim. \pfpart{``If'' ($\Leftarrow$):} Suppose \eqref{eq:thm:ast-str-sep-seqsum-crit:2} holds. Let $\xbar'\in\Xbar$ maximize $\xbar\cdot\uu$ over all $\xbar\in\Xbar$. Such a point attaining this maximum must exist by Proposition~\ref{pr:cont-compact-attains-max} since $\Xbar$ is closed and therefore compact (Proposition~\ref{prop:compact}\ref{prop:compact:closed-subset}), and since the map $\xbar\mapsto\xbar\cdot\uu$ is continuous (Theorem~\ref{thm:i:1}\ref{thm:i:1c}). Likewise, let $\ybar'\in\Ybar$ minimize $\ybar\cdot\uu$ over all $\ybar\in\Ybar$. Let $\zbar'=\xbar'\plusl(-\ybar')$, and let $\zbar''=-\ybar'\plusl\xbar'$. Then $\zbar'$ and $\zbar''$ are both in $\xbar'\seqsum(-\ybar')\subseteq \Xbar\seqsum(-\Ybar)$ by Corollary~\ref{cor:seqsum-conseqs}(\ref{cor:seqsum-conseqs:a}). Consequently, \begin{equation} \label{eq:thm:ast-str-sep-seqsum-crit:3} \xbar'\cdot\uu\plusl(-\ybar'\cdot\uu)=\zbar'\cdot\uu<0, \end{equation} implying specifically that $\xbar'\cdot\uu<+\infty$. Likewise, $(-\ybar'\cdot\uu)\plusl\xbar'\cdot\uu=\zbar''\cdot\uu<0$, implying that $-\ybar'\cdot\uu<+\infty$. 
Thus, $\xbar'\cdot\uu$ and $-\ybar'\cdot\uu$ are summable, so \begin{equation} \label{eq:thm:ast-str-sep-seqsum-crit:4} \xbar'\cdot\uu\plusl(-\ybar'\cdot\uu)=\xbar'\cdot\uu - \ybar'\cdot\uu. \end{equation} Combining, we now have that \[ \sup_{\xbar\in X} \xbar\cdot\uu \leq \xbar'\cdot\uu < \ybar'\cdot\uu \leq \inf_{\ybar\in Y} \ybar\cdot\uu. \] The first and last inequalities are by our choice of $\xbar'$ and $\ybar'$. The middle (strict) inequality follows from Eqs.~(\ref{eq:thm:ast-str-sep-seqsum-crit:3}) and~(\ref{eq:thm:ast-str-sep-seqsum-crit:4}). \pfpart{Strong separation equivalence:} If $X$ and $Y$ are both nonempty, then $X$ and $Y$ are strongly separated if and only if \eqref{eq:thm:ast-str-sep-seqsum-crit:1} holds for some $\uu\in\Rn$, which, as shown above, holds if and only if \eqref{eq:thm:ast-str-sep-seqsum-crit:2} holds for some $\uu\in\Rn$. Otherwise, if $X$ is empty, then \eqref{eq:thm:ast-str-sep-seqsum-crit:2} holds vacuously. Setting $\uu=\zero$, the left-hand side of \eqref{eq:thm:ast-str-sep-seqsum-crit:1} is vacuously $-\infty$ while the right-hand side is either $+\infty$ (if $Y$ is empty) or $0$ (otherwise). In either case, this equation holds as well so that $X$ and $Y$ are strongly separated. Likewise if $Y$ is empty. \qedhere \end{proof-parts} \end{proof} The equivalence given in Theorem~\ref{thm:ast-str-sep-seqsum-crit} is false in general if, in \eqref{eq:thm:ast-str-sep-seqsum-crit:2}, we use the sequential sum $X\seqsum(-Y)$, without taking closures, rather than $\Xbar\seqsum(-\Ybar)$. For instance, in $\Rext$, suppose $X=\{-\infty\}$, $Y=\R$, and $u=1$. Then $X\seqsum(-Y)=\{-\infty\}$, so $\sup_{\barz\in X\seqsum(-Y)} \barz u = -\infty<0$, but \eqref{eq:thm:ast-str-sep-seqsum-crit:1} does not hold in this case (since both sides of the inequality are equal to $-\infty$). We next give the central theorem of this section, showing every pair of disjoint, closed convex sets in $\extspace$ is strongly separated. 
\begin{theorem} \label{thm:sep-cvx-sets} Let $X,Y\subseteq\extspace$ be nonempty, closed (in $\extspace$), convex, and disjoint. Then $X$ and $Y$ are strongly separated. \end{theorem} The main steps in proving this theorem are to show that it holds in certain special cases, and then to show how these special cases imply the general case. To begin, we prove the next lemma showing how a closed convex astral cone $K\subseteq\extspace$ can be separated from a single, finite point $\zz$ in $\Rn\setminus K$, and more specifically, that there must always exist a homogeneous astral closed halfspace that includes $K$ but not $\zz$. The main idea of the lemma's proof is to first separate $\zz$ from the (ordinary) convex cone $K'=K\cap\Rn$, and then to show that this separation also leads to a separation of $\zz$ from the entire astral cone $K$. For this to work, care must be taken in how this separation is done; in the proof, we use sharp discrimination, as introduced in Section~\ref{sec:prelim-sep-thms}. \begin{lemma} \label{lem:acone-sep-from-fin-pt} Let $K\subseteq\extspace$ be a closed convex astral cone, and let $\zz\in\Rn\setminus K$. Then there exists $\uu\in\Rn\setminus\{\zero\}$ such that \[ \sup_{\xbar\in K} \xbar\cdot\uu \leq 0 < \zz\cdot\uu. \] \end{lemma} \begin{proof} Note that $\zz\neq\zero$ since $\zero\in K$ (by Proposition~\ref{pr:ast-cone-is-naive}) and $\zz\not\in K$. Let $K'=K\cap\Rn$. Then $K'$ is a convex cone (being the intersection of two convex sets, and by Proposition~\ref{pr:ast-cone-is-naive}). Also, $K'$ is closed: if $\xx\in \cl K'$ then there exists a sequence $\seq{\xx_t}$ in $K'\subseteq K$ converging to $\xx$, implying $\xx$ is in $K$ (since $K$ is closed), and so in $K'$. Let $J=\cone \{\zz\}$, which is a closed convex cone. Then $J\cap K'=\{\zero\}$, since otherwise we would have $\lambda \zz \in K'\subseteq K$ for some $\lambda\in\Rstrictpos$, implying $\zz\in K$ (by Proposition~\ref{pr:ast-cone-is-naive}), a contradiction.
Thus, we can apply \Cref{pr:cones-sharp-sep} to $K'$ and $J$, yielding their sharp discrimination. Letting $L=K'\cap -K'$, this means there exists $\uu\in\Rn$ such that $\xx\cdot\uu= 0$ for $\xx\in L$; $\xx\cdot\uu< 0$ for $\xx\in K'\setminus L$; and $\zz\cdot\uu > 0$ (noting that $J\cap-J=\{\zero\}$). In particular, this implies $\uu\neq\zero$. To prove the lemma, we will show that $K\subseteq\chsuz$ (where $\chsuz$ is as defined in Eq.~\ref{eqn:homo-halfspace}). To prove this, we will see that it is sufficient to show that all of $K$'s icons are in $\chsuz$, since $K$ is the convex hull of its icons (Theorem~\ref{thm:ast-cvx-cone-equiv}\ref{thm:ast-cvx-cone-equiv:a},\ref{thm:ast-cvx-cone-equiv:c}). As such, let $\ebar\in K\cap\corezn$ be an icon in $K$; we aim to show that $\ebar\cdot\uu\leq 0$. We can write $\ebar=[\vv_1,\ldots,\vv_k]\omm$ for some $\vv_1,\ldots,\vv_k\in\Rn$. If $\vv_i\in L$ for all $i=1,\ldots,k$, then $\vv_i\cdot\uu=0$ for all $i$, implying in this case that $\ebar\cdot\uu=0$ (by Proposition~\ref{pr:vtransu-zero}). Otherwise, let $j\in\{1,\ldots,k\}$ be the smallest index for which $\vv_j\not\in L$ (implying $\vv_i\in L$ for $i=1,\ldots,j-1$). \begin{claimpx} $\vv_j\in K'$. \end{claimpx} \begin{proofx} Let $\ybar=[\vv_1,\ldots,\vv_j]\omm$. Then \[ \ybar \in \lb{\zero}{\ebar} = \aray{\ebar} \subseteq K. \] The first inclusion is by Theorem~\ref{thm:lb-with-zero} (or Corollary~\ref{cor:d-in-lb-0-dplusx}), the equality is by Theorem~\ref{thm:oconichull-equals-ocvxhull}, and the second inclusion is because $K$ is an astral cone that includes $\ebar$. If $j=1$, then $\limray{\vv_1}=\ybar\in K$, implying $\vv_1\in K$ by Proposition~\ref{pr:ast-cone-is-naive}. So suppose henceforth that $j>1$. Then for $i=1,\ldots,j-1$, $-\vv_i\in K'$ since $\vv_i\in L$, implying $\limray{(-\vv_i)}=-\limray{\vv_i}$ is in $K$ (by Proposition~\ref{pr:ast-cone-is-naive}). 
To show $\vv_j\in K$, we will argue that it is in the outer convex hull of $\ybar$ and $-\limray{\vv_1},\ldots,-\limray{\vv_{j-1}}$, all of which are in $K$, using sequences so that Theorem~\ref{thm:e:7} can be applied. As such, let \[ \yy_t = \sum_{i=1}^j t^{j-i+1} \vv_i \;\;\;\;\mbox{ and }\;\;\;\; \xx_{it} = -\frac{t^{j-i}}{\lambda_t} \vv_i, \] for $i=1,\ldots,j-1$, where \[ \gamma_t = \frac{1}{t} \;\;\;\;\mbox{ and }\;\;\;\; \lambda_t = \frac{1}{j-1}\Parens{1-\gamma_t}. \] Finally, let \[ \zz_t = \gamma_t \yy_t + \sum_{i=1}^{j-1} \lambda_t \xx_{it}. \] Note that $\gamma_t + (j-1)\lambda_t = 1$. Then $\yy_t\rightarrow\ybar$, and $\xx_{it}\rightarrow -\limray{\vv_i}$ for $i=1,\ldots,j-1$ (by Theorem~\ref{thm:i:seq-rep}). Further, by algebra, $\zz_t=\vv_j$ for all $t$, and so trivially converges to $\vv_j$. It follows that \[ \vv_j \in \ohull\Braces{\ybar, -\limray{\vv_1},\ldots, -\limray{\vv_{j-1}}} \subseteq K, \] where the first inclusion is by Theorem~\ref{thm:e:7}, applied to the sequences above, and the second by Theorem~\ref{thm:e:2} since each of the points in braces is in $K$. Thus, since $\vv_j\in\Rn$, we have $\vv_j\in K\cap\Rn=K'$, as claimed. \end{proofx} Since $\uu$ sharply discriminates $K'$ and $J$, it follows that $\vv_j\cdot\uu<0$ since $\vv_j$ is in $K'$ but not in $L$ by our choice of $j$. Further, $\vv_i\cdot\uu=0$ for $i=1,\ldots,j-1$ since each $\vv_i\in L$. Consequently, $\ebar\cdot\uu=-\infty<0$ (by Lemma~\ref{lemma:case}), so $\ebar\in\chsuz$. Thus, $E\subseteq\chsuz$, where $E=K\cap\corezn$. Therefore, $K=\conv{E}\subseteq\chsuz$ by Theorem~\refequiv{thm:ast-cvx-cone-equiv}{thm:ast-cvx-cone-equiv:a}{thm:ast-cvx-cone-equiv:c} and Proposition~\ref{pr:conhull-prop}(\ref{pr:conhull-prop:aa}) since $\chsuz$ is convex (Proposition~\ref{pr:e1}\ref{pr:e1:c}). Since $\zz\not\in\chsuz$, this completes the proof.
\end{proof} Next, we extend the preceding lemma, which applies to an astral cone, to prove Theorem~\ref{thm:sep-cvx-sets} in the special case that one set is a closed convex set and the other is the singleton $\{\zero\}$. In standard convex analysis, there is a technique that is sometimes used to extend a result for convex cones to general convex sets. The idea, given a convex set $X\subseteq\Rn$, is to create a convex cone $K\subseteq\Rnp$, and then to apply the result for cones to $K$, which sometimes yields something useful regarding $X$. One way to define the cone $K$ is as the conic hull of the pairs $\rpair{\xx}{1}$ for $\xx\in X$; that is, $K = \cone\{\rpair{\xx}{1} : \xx\in X\}$. In the proof of the next lemma, we use the same idea, adapted to astral space, constructing a convex astral cone from the given astral convex set, and then applying Lemma~\ref{lem:acone-sep-from-fin-pt} to find a separating hyperplane. \begin{lemma} \label{lem:sep-cvx-set-from-origin} Let $X\subseteq\extspace$ be nonempty, convex and closed (in $\extspace$), and assume $\zero\not\in X$. Then there exists $\uu\in\Rn\setminus\{\zero\}$ such that \[ \sup_{\xbar\in X} \xbar\cdot\uu < 0. \] \end{lemma} \begin{proof} As just discussed, the main idea of the proof is to use $X$ to construct a closed convex astral cone in $\extspacnp$, and then to use Lemma~\ref{lem:acone-sep-from-fin-pt} to find a separating hyperplane. As in Section~\ref{sec:work-with-epis}, for $\xbar\in\extspace$ and $y\in\R$, the notation $\rpair{\xbar}{y}$ denotes, depending on context, either a pair in $\extspace\times\R$ or a point in $\extspacnp$, namely, the image of that pair under the homeomorphism $\homf$ given in Theorem~\ref{thm:homf}. We also make use of the matrix $\homat=[\Idnn,\zerov{n}]$ from \eqref{eqn:homat-def}. Let $S\subseteq\extspacnp$ be the set \[ S = \Braces{ \rpair{\xbar}{1} : \xbar\in X }.
\] Equivalently, $S=F(X)$ where $F:\extspace\rightarrow\extspacnp$ is the affine map $\xbar\mapsto \rpair{\zero}{1}\plusl\trans{\homat}\xbar$ for $\xbar\in\extspace$ (Proposition~\ref{pr:xy-pairs-props}\ref{pr:xy-pairs-props:a}). Therefore, $S$ is convex (by Corollary~\ref{cor:thm:e:9}). Also, $S$ is closed since $X$ is closed and since $F$ is continuous (by \Cref{cor:cont:closed} and \Cref{cor:aff-cont}). The next step is to use $S$ to construct a closed convex astral cone $K\subseteq\extspacnp$, to which Lemma~\ref{lem:acone-sep-from-fin-pt} can be applied. It might seem natural to choose $K$ simply to be $\acone{S}$, the astral conic hull of $S$. The problem, however, is that $\acone{S}$ might not be closed. Instead, we use a variant on $\acone{S}$ that ensures that $K$ has all the required properties. Specifically, we define \begin{equation} \label{eq:lem:sep-cvx-set-from-origin:2} K = \conv\Parens{ \{\zero\} \cup E } \;\mbox{ where }\; E = \clbar{\lmset{S}}. \end{equation} Thus, comparing to the expression for $\acone{S}$ given in Theorem~\ref{thm:acone-char}(\ref{thm:acone-char:a}), we have simply replaced $\lmset{S}$ with its closure. (To be clear, $\zero$, as it appears in this expression, is $\zerov{n+1}$.) The set $E$ is clearly closed, so $K$ is as well, by Theorem~\ref{thm:cnv-hull-closed-is-closed}. Also, since $\lmset{S}$ is included in the closed set $\coreznp$ (Proposition~\ref{pr:i:8}\ref{pr:i:8-infprod},\ref{pr:i:8e}), $E$ is also included in $\coreznp$, and so consists only of icons. Therefore, $K$ is a convex astral cone by Theorem~\refequiv{thm:ast-cvx-cone-equiv}{thm:ast-cvx-cone-equiv:a}{thm:ast-cvx-cone-equiv:b}. Although $E$ may differ from $\lmset{S}$, every element of $E$ has a prefix that is an icon in $\lmset{S}$, as shown in the next claim: \begin{claimpx} \label{cl:lem:sep-cvx-set-from-origin:1} Let $\ebar\in E$. Then $\ebar=\limray{\vbar}\plusl\ybar$ for some $\vbar\in S$ and $\ybar\in\extspacnp$. 
That is, $E\subseteq\lmset{S}\plusl\extspacnp$. \end{claimpx} \begin{proofx} Since $\ebar\in E$, there exists a sequence $\seq{\vbar_t}$ in $S$ such that $\limray{\vbar_t}\rightarrow\ebar$. By sequential compactness, this sequence must have a convergent subsequence; by discarding all other elements, we can assume the entire sequence $\seq{\vbar_t}$ converges to some point $\vbar$, which must also be in $S$ since $S$ is closed. Therefore, $\ebar\in\limray{\vbar}\plusl\extspacnp$ by Theorem~\ref{thm:gen-dom-dir-converg}. \end{proofx} Since $\zero\not\in X$, we also have that $\rpair{\zero}{1}\not\in S$. The next several steps of the proof are directed toward showing that $\rpair{\zero}{1}$ also is not in $K$, which will allow us to apply Lemma~\ref{lem:acone-sep-from-fin-pt} to separate it from $K$. The astral cone $K$ includes $S$, and so includes all positive scalar multiples of points in $S$. Thus, if $\xbar\in X$ and $\lambda\in\Rstrictpos$, then $\rpair{\xbar}{1}$ is in $S\subseteq K$ so $\lambda\rpair{\xbar}{1}=\rpair{\lambda\xbar}{\lambda}$ is also in $K$. In the next claim, we take a step toward showing that these are the only points in $K$ whose ``last component'' is equal to $\lambda$. Here, we can think of $\limray{\zbar}\plusl\ybar$ as a point in $E$, which, as just shown in Claim~\ref{cl:lem:sep-cvx-set-from-origin:1}, must have the given form. \begin{claimpx} \label{cl:lem:sep-cvx-set-from-origin:2} Let $\xbar\in X$, $\ybar\in\extspacnp$, $\zbar=\rpair{\xbar}{1}$, and let $\wbar\in\aray{(\limray{\zbar}\plusl\ybar)}$. Suppose $\wbar\cdot\rpair{\zero}{1}=\lambda$ where $\lambda\in\Rstrictpos$. Then $\wbar=\lambda\zbar=\rpair{\lambda\xbar}{\lambda}$. 
\end{claimpx} \begin{proofx} We have \begin{align} \aray{(\limray{\zbar}\plusl\ybar)} &= \lb{\zero}{\limray{\zbar}\plusl\limray{\ybar}} \nonumber \\ &= \lb{\zero}{\limray{\zbar}} \cup \lb{\limray{\zbar}}{\limray{\zbar}\plusl\limray{\ybar}} \nonumber \\ &= (\aray{\zbar}) \cup \lb{\limray{\zbar}}{\limray{\zbar}\plusl\limray{\ybar}}. \label{eq:lem:sep-cvx-set-from-origin:1} \end{align} The first and third equalities are by Theorem~\ref{thm:oconichull-equals-ocvxhull} (and Proposition~\ref{pr:i:8}\ref{pr:i:8d}). The second equality is by Theorem~\ref{thm:decomp-seg-at-inc-pt} since $\limray{\zbar}\in\lb{\zero}{\limray{\zbar}\plusl\limray{\ybar}}$ (by Corollary~\ref{cor:d-in-lb-0-dplusx}). We claim that $\wbar\not\in\lb{\limray{\zbar}}{\limray{\zbar}\plusl\limray{\ybar}}$. This is because $\limray{\zbar}\cdot\rpair{\zero}{1}=+\infty$, implying $(\limray{\zbar}\plusl\limray{\ybar})\cdot\rpair{\zero}{1}=+\infty$ as well. Thus, if $\wbar$ were on the segment joining these points, we would have $\wbar\cdot\rpair{\zero}{1}=+\infty$ by \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:c}), a contradiction. Thus, in light of \eqref{eq:lem:sep-cvx-set-from-origin:1}, it follows that $\wbar\in\aray{\zbar}$. Therefore, by Corollary~\ref{cor:aray-and-seqs}, there must exist a sequence $\seq{\lambda_t}$ in $\Rpos$ and a (span-bound) sequence $\seq{\zz_t}$ in $\Rnp$ such that $\zz_t\rightarrow\zbar$ and $\lambda_t \zz_t \rightarrow \wbar$. For each $t$, we can write $\zz_t=\rpair{\xx_t}{y_t}$ where $\xx_t\in\Rn$ and $y_t\in\R$. So $\rpair{\xx_t}{y_t}\rightarrow\zbar=\rpair{\xbar}{1}$, implying $\xx_t\rightarrow\xbar$ and $y_t\rightarrow 1$, by Proposition~\ref{pr:xy-pairs-props}(\ref{pr:xy-pairs-props:g}). Also, since $\wbar\cdot\rpair{\zero}{1}=\lambda\in\R$, by Proposition~\ref{pr:xy-pairs-props}(\ref{pr:xy-pairs-props:d}), $\wbar=\rpair{\homat \wbar}{\lambda}$. 
Since $\lambda_t\zz_t = \rpair{\lambda_t\xx_t}{\lambda_t y_t}\rightarrow\wbar$, it therefore follows that $\lambda_t\xx_t\rightarrow\homat\wbar$ and $\lambda_t y_t\rightarrow \lambda$ (again by Proposition~\ref{pr:xy-pairs-props}\ref{pr:xy-pairs-props:g}). Combining, these imply that $\lambda_t = (\lambda_t y_t)/y_t \rightarrow \lambda$ (by continuity of division), and so that $\lambda_t\xx_t \rightarrow \lambda\xbar$ by Proposition~\ref{pr:scalar-prod-props}(\ref{pr:scalar-prod-props:e}) and since $\xx_t\rightarrow\xbar$. Since $\lambda\xbar$ and $\homat\wbar$ are both the limit of the sequence $\seq{\lambda_t\xx_t}$, they must be equal. Thus, $\wbar=\rpair{\lambda\xbar}{\lambda}=\lambda\zbar$, as claimed (with the last equality from Proposition~\ref{pr:xy-pairs-props}\ref{pr:xy-pairs-props:f}). \end{proofx} We can now show that the only points in $K$ with ``last coordinate'' equal to $1$ are the points in $S$: \begin{claimpx} \label{cl:lem:sep-cvx-set-from-origin:3} Let $\xbar\in\extspace$, and suppose $\zbar=\rpair{\xbar}{1}\in K$. Then $\xbar\in X$. \end{claimpx} \begin{proofx} The set $K$, as defined in \eqref{eq:lem:sep-cvx-set-from-origin:2}, is equal to $\acone{E}$ by Theorem~\ref{thm:acone-char}(\ref{thm:acone-char:a}) (and since $\lmset{E}=E$). Therefore, since $\zbar\in K$, Theorem~\ref{thm:ast-conic-hull-union-fin} implies that there must exist icons $\ebar_1,\ldots,\ebar_m\in E$ for which \[ \zbar \in \oconich\{\ebar_1,\ldots,\ebar_m\} = \acone\{\ebar_1,\ldots,\ebar_m\} = (\aray{\ebar_1})\seqsum\dotsb\seqsum(\aray{\ebar_m}), \] where the first equality is by Proposition~\ref{pr:acone-hull-props}(\ref{pr:acone-hull-props:d}) and the second equality is by repeated application of Theorem~\ref{thm:decomp-acone} (and by definition of astral rays). Thus, \begin{equation} \label{eq:lem:sep-cvx-set-from-origin:3} \zbar \in \wbar_1\seqsum\dotsb\seqsum\wbar_m \end{equation} for some $\wbar_1,\ldots,\wbar_m$ with $\wbar_i\in\aray{\ebar_i}$ for $i=1,\ldots,m$. 
Let $ H = \{ \ybar\in\extspacnp : \ybar\cdot\rpair{\zero}{1} \geq 0 \} $. This is a homogeneous astral closed halfspace, and therefore also a convex astral cone (Proposition~\ref{pr:astral-cone-props}\ref{pr:astral-cone-props:e}). Note that $S\subseteq H$ (since $\ybar\cdot\rpair{\zero}{1}=1$ for $\ybar\in S$), so $\lmset{S}\subseteq H$ by Proposition~\ref{pr:ast-cone-is-naive} since $H$ is an astral cone. Therefore, $E\subseteq H$ since $H$ is closed. Consequently, for $i=1,\ldots,m$, $\ebar_i\in H$ implying $\aray{\ebar_i}\subseteq H$ (by definition of astral rays), and thus that $\wbar_i\in H$. Let $\lambda_i=\wbar_i\cdot\rpair{\zero}{1}$ for $i=1,\ldots,m$. Then, as just argued, $\lambda_1,\ldots,\lambda_m$ are all nonnegative and therefore summable. Combined with \eqref{eq:lem:sep-cvx-set-from-origin:3}, \Cref{thm:seqsum-equiv-mod}(\ref{thm:seqsum-equiv-mod:a},\ref{thm:seqsum-equiv-mod:c}) therefore implies that \begin{equation} \label{eq:lem:sep-cvx-set-from-origin:4} 1 = \zbar\cdot\rpair{\zero}{1} = \sum_{i=1}^m \wbar_i\cdot\rpair{\zero}{1} = \sum_{i=1}^m \lambda_i. \end{equation} Thus, for $i=1,\ldots,m$, each $\lambda_i\in [0,1]$. By Claim~\ref{cl:lem:sep-cvx-set-from-origin:1}, $\ebar_i\in\limray{(\rpair{\xbar_i}{1})}\plusl\extspacnp$ for some $\xbar_i\in X$. Therefore, by Claim~\ref{cl:lem:sep-cvx-set-from-origin:2}, $\wbar_i=\rpair{\lambda_i \xbar_i}{\lambda_i}$. We then have that \begin{align*} \xbar = \homat \zbar &\in (\homat \wbar_1) \seqsum \dotsb \seqsum (\homat \wbar_m) \\ &= (\lambda_1 \xbar_1) \seqsum \dotsb \seqsum (\lambda_m \xbar_m) \\ &\subseteq \conv\{\xbar_1,\ldots,\xbar_m\} \subseteq X. \end{align*} The two equalities are by Proposition~\ref{pr:xy-pairs-props}(\ref{pr:xy-pairs-props:c}). The first inclusion is by \eqref{eq:lem:sep-cvx-set-from-origin:3} and Theorem~\ref{thm:distrib-seqsum} (applied repeatedly). The second inclusion is by Theorem~\ref{thm:seqsum-in-union} and \eqref{eq:lem:sep-cvx-set-from-origin:4}. 
The last inclusion is by Proposition~\ref{pr:conhull-prop}(\ref{pr:conhull-prop:aa}) since $X$ is convex. \end{proofx} In particular, since $\zero\not\in X$, Claim~\ref{cl:lem:sep-cvx-set-from-origin:3} implies that $\rpair{\zero}{1}$ is not in $K$. Therefore, we can finally apply Lemma~\ref{lem:acone-sep-from-fin-pt} to $K$ and $\rpair{\zero}{1}$, yielding that there exist $\uu\in\Rn$ and $v\in\R$ such that $v=\rpair{\zero}{1}\cdot\rpair{\uu}{v} > 0$ and $\zbar\cdot\rpair{\uu}{v}\leq 0$ for all $\zbar\in K$. In particular, for all $\xbar\in X$, since $\rpair{\xbar}{1}\in K$, this means that \[ \xbar\cdot\uu + v=\rpair{\xbar}{1}\cdot\rpair{\uu}{v}\leq 0 \] (with the equality from Proposition~\ref{pr:xy-pairs-props}\ref{pr:xy-pairs-props:b}). Thus, \begin{equation} \label{eq:lem:sep-cvx-set-from-origin:5} \sup_{\xbar\in X} \xbar\cdot\uu \leq -v < 0, \end{equation} completing the proof (after noting, by Eq.~\ref{eq:lem:sep-cvx-set-from-origin:5}, that $\uu\neq\zero$ since $X$ is nonempty). \end{proof} We can now prove the general case: \begin{proof}[Proof of Theorem~\ref{thm:sep-cvx-sets}] Let $Z=X\seqsum(-Y)$. The set $-Y$ is convex (by Corollary~\ref{cor:thm:e:9}) and closed (by \Cref{cor:cont:closed}, since the map $\xbar\mapsto-\xbar$ is continuous by \Cref{thm:linear:cont}\ref{thm:linear:cont:b}). Therefore, $Z$ is convex and closed (by \Cref{prop:seqsum-multi}\ref{prop:seqsum-multi:closed}\ref{prop:seqsum-multi:convex}). (Also, $Z$ is nonempty since $X$ and $Y$ are; for instance, $\xbar\plusl(-\ybar)\in Z$ for all $\xbar\in X$ and $\ybar\in Y$.) We further claim $\zero\not\in Z$. For if $\zero$ were in $Z$, then we would have $\zero\in\xbar\seqsum(-\ybar)$ for some $\xbar\in X$ and $\ybar\in Y$, implying $\xbar\in\zero\seqsum\ybar=\{\ybar\}$ by Proposition~\ref{pr:swap-seq-sum}, and thus that $\xbar=\ybar$, a contradiction since $X$ and $Y$ are disjoint.
Thus, we can apply Lemma~\ref{lem:sep-cvx-set-from-origin} to $Z$, implying that there exists $\uu\in\Rn\setminus\{\zero\}$ for which \[ \sup_{\zbar\in X\seqsum(-Y)} \zbar\cdot\uu < 0. \] By Theorem~\ref{thm:ast-str-sep-seqsum-crit} (and since $X$ and $Y$ are closed), this proves the theorem. \end{proof} Theorem~\ref{thm:sep-cvx-sets} has several direct consequences, as we summarize in the next corollary. Note especially that the outer convex hull of any set $S\subseteq\extspace$ can now be seen to be the smallest closed convex set that includes $S$. \begin{corollary} \label{cor:sep-cvx-sets-conseqs} Let $S\subseteq\extspace$, and let $X, Y\subseteq\extspace$ be nonempty. \begin{letter-compact} \item \label{cor:sep-cvx-sets-conseqs:d} The following are equivalent: \begin{roman-compact} \item \label{cor:sep-cvx-sets-conseqs:d:1} $S$ is closed and convex. \item \label{cor:sep-cvx-sets-conseqs:d:2} $S$ is equal to the intersection of some collection of closed astral halfspaces. \item \label{cor:sep-cvx-sets-conseqs:d:3} $S$ is equal to the intersection of all closed astral halfspaces that include $S$; that is, $S=\ohull{S}$. \end{roman-compact} \item \label{cor:sep-cvx-sets-conseqs:a} The outer convex hull of $S$, $\ohull{S}$, is the smallest closed convex set that includes $S$. (That is, $\ohull{S}$ is equal to the intersection of all closed convex sets in $\extspace$ that include $S$.) \item \label{cor:sep-cvx-sets-conseqs:b} $\ohull{S}=\conv{\Sbar}$. \item \label{cor:sep-cvx-sets-conseqs:c} $X$ and $Y$ are strongly separated if and only if $(\ohull{X})\cap(\ohull{Y})=\emptyset$. \end{letter-compact} \end{corollary} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{cor:sep-cvx-sets-conseqs:d}):} That (\ref{cor:sep-cvx-sets-conseqs:d:3}) implies (\ref{cor:sep-cvx-sets-conseqs:d:2}) is immediate. 
Every closed astral halfspace is closed and convex, so the arbitrary intersection of such sets is also closed and convex (Proposition~\ref{pr:e1}\ref{pr:e1:b},\ref{pr:e1:c}), proving that (\ref{cor:sep-cvx-sets-conseqs:d:2}) implies (\ref{cor:sep-cvx-sets-conseqs:d:1}). To prove (\ref{cor:sep-cvx-sets-conseqs:d:1}) implies (\ref{cor:sep-cvx-sets-conseqs:d:3}), suppose $S$ is closed and convex. We assume $S$ is nonempty since otherwise, $\ohull{S}=\emptyset=S$ (since the intersection of all astral closed halfspaces is empty). Clearly, $S\subseteq\ohull{S}$. For the reverse inclusion, suppose $\zbar\in\extspace\setminus S$. Then Theorem~\ref{thm:sep-cvx-sets}, applied to $S$ and $\{\zbar\}$, yields that there exist $\uu\in\Rn\setminus\{\zero\}$ and $\beta\in\R$ such that $\xbar\cdot\uu<\beta$ for $\xbar\in S$, but $\zbar\cdot\uu>\beta$. Thus, $S\subseteq\chsua$ but $\zbar\not\in\chsua$, implying that $\zbar\not\in\ohull{S}$. Therefore, $\ohull{S}\subseteq S$. \pfpart{Part~(\ref{cor:sep-cvx-sets-conseqs:a}):} Let $U$ be equal to the intersection of all closed convex sets in $\extspace$ that include $S$. Then $U\subseteq\ohull{S}$ since $\ohull{S}$ is such a set. On the other hand, \[ \ohull{S} \subseteq \ohull{U} = U \] where the inclusion is because $S\subseteq U$, and the equality is by part~(\ref{cor:sep-cvx-sets-conseqs:d}). \pfpart{Part~(\ref{cor:sep-cvx-sets-conseqs:b}):} The set $\conv{\Sbar}$ includes $S$ and is closed and convex by Theorem~\ref{thm:cnv-hull-closed-is-closed}. Therefore, $\ohull{S}\subseteq\conv{\Sbar}$ by part~(\ref{cor:sep-cvx-sets-conseqs:a}). On the other hand, $\ohull{S}$ is closed and includes $S$, so $\Sbar\subseteq\ohull{S}$. Since $\ohull{S}$ is also convex, this implies that $\conv{\Sbar}\subseteq\ohull{S}$ (Proposition~\ref{pr:conhull-prop}\ref{pr:conhull-prop:aa}). \pfpart{Part~(\ref{cor:sep-cvx-sets-conseqs:c}):} Suppose $\ohull{X}$ and $\ohull{Y}$ are disjoint. 
Since they are also closed and convex, Theorem~\ref{thm:sep-cvx-sets} then implies that they are strongly separated. Therefore, $X$ and $Y$, being subsets of $\ohull{X}$ and $\ohull{Y}$, are also strongly separated. Conversely, suppose $X$ and $Y$ are strongly separated. Then there exist $\uu\in\Rn\setminus\{\zero\}$ and $\beta,\gamma\in\R$ such that \[ \sup_{\xbar\in X} \xbar\cdot\uu \leq \beta < \gamma \leq \inf_{\ybar\in Y} \ybar\cdot\uu. \] That is, $X\subseteq H$ and $Y\subseteq H'$, where \[ H=\{\zbar\in\extspace : \zbar\cdot\uu\leq\beta\} \mbox{ and } H'=\{\zbar\in\extspace : \zbar\cdot\uu\geq\gamma\}. \] Since $H$ and $H'$ are astral closed halfspaces, by definition of outer hull, $\ohull{X}\subseteq H$ and $\ohull{Y}\subseteq H'$. Since $H$ and $H'$ are disjoint, this proves that $\ohull{X}$ and $\ohull{Y}$ are as well. \qedhere \end{proof-parts} \end{proof} Corollary~\ref{cor:sep-cvx-sets-conseqs}(\ref{cor:sep-cvx-sets-conseqs:b}) shows that the convex hull of the closure of any set $S\subseteq\extspace$ is equal to $S$'s outer hull, which is also the smallest closed convex set that includes $S$. As seen in Section~\ref{sec:closure-convex-set}, this is not generally true if these operations are applied in the reverse order; that is, $\clbar{\conv{S}}$, the closure of the convex hull of $S$, need not in general be equal to $\ohull{S}$ (nor must it even be convex). Theorem~\ref{thm:sep-cvx-sets} shows that two convex astral sets are strongly separated if and only if they are closed (in $\extspace$) and disjoint. The same is not true when working only in $\Rn$. In other words, it is not true, in general, that two convex sets $X,Y\subseteq\Rn$ are strongly separated if and only if they are disjoint and closed in $\Rn$. Nevertheless, Corollary~\ref{cor:sep-cvx-sets-conseqs}(\ref{cor:sep-cvx-sets-conseqs:c}) provides a simple test for determining whether they are strongly separated, namely, whether their outer hulls are disjoint.
Equivalently, in this case, $X$ and $Y$ are strongly separated if and only if their astral closures are disjoint, that is, if and only if $\Xbar\cap\Ybar=\emptyset$, as follows from Theorem~\ref{thm:e:6} since the sets are convex and in $\Rn$. Here is an example: \begin{example} In $\R^2$, let $X=\{\xx\in\Rpos^2 : x_1 x_2 \geq 1\}$, and let $Y=\R\times\{0\}$. These sets are convex, closed (in $\R^2$), and disjoint. Nonetheless, they are not strongly separated. To prove this, note that the sequence $\seq{\trans{[t,1/t]}}$ in $X$ and the sequence $\seq{\trans{[t,0]}}$ in $Y$ both converge to $\limray{\ee_1}$, implying $\limray{\ee_1}\in\Xbar\cap\Ybar$. Therefore, by Corollary~\ref{cor:sep-cvx-sets-conseqs}(\ref{cor:sep-cvx-sets-conseqs:c}) (and Theorem~\ref{thm:e:6}), $X$ and $Y$ are not strongly separated. \end{example} Thus, although the strong separation of sets in $\Rn$ involves only notions from standard convex analysis, we see that concepts from astral space can still provide insight. In general, convex sets $X$ and $Y$ in $\Rn$ are strongly separated if and only if $X-Y$ and the singleton $\{\zero\}$ are strongly separated (as reflected in the equivalence of Eqs.~\ref{eqn:std-str-sep-crit1} and~\ref{eqn:std-str-sep-crit2}). Therefore, by the preceding argument, $X$ and $Y$ are strongly separated if and only if $\{\zero\}$ and $\clbar{X-Y}$ are disjoint, that is, if and only if $\zero\not\in \clbar{(X-Y)}\cap\Rn=\cl{(X-Y)}$. This is equivalent to \citet[Theorem~11.4]{ROC}. In Theorem~\ref{thm:e:9}, we saw that $\ohull{F(S)}=F(\ohull{S})$ for any affine function $F$ and any finite set $S\subseteq\extspace$. Using Corollary~\ref{cor:sep-cvx-sets-conseqs}, we can now prove that the same holds even when $S$ is not necessarily finite. \begin{corollary} \label{cor:ohull-fs-is-f-ohull-s} Let $\A\in\R^{m\times n}$, $\bbar\in\extspac{m}$, and let $F:\extspace\rightarrow\extspac{m}$ be the affine map $F(\zbar)=\bbar\plusl \A\zbar$ for $\zbar\in\extspace$.
Let $S\subseteq\extspace$. Then \[ \ohull{F(S)} = F(\ohull{S}). \] \end{corollary} \begin{proof} That $F(\ohull{S}) \subseteq \ohull{F(S)}$ was proved in Lemma~\ref{lem:f-conv-S-in-conv-F-S}. For the reverse inclusion, $\ohull{S}$ is closed and convex, so $F(\ohull{S})$ is also convex (by \Cref{cor:thm:e:9}) and closed (by \Cref{cor:cont:closed} since $F$ is continuous, by \Cref{cor:aff-cont}). Furthermore, $F(S)\subseteq F(\ohull{S})$ since $S\subseteq\ohull{S}$. Therefore, $\ohull{F(S)}\subseteq F(\ohull{S})$ since, by \Cref{cor:sep-cvx-sets-conseqs}(\ref{cor:sep-cvx-sets-conseqs:a}), $\ohull{F(S)}$ is the smallest closed convex set that includes $F(S)$. \end{proof} We next consider separating a closed convex astral cone from a disjoint closed convex set. In this case, it suffices to use a homogeneous astral hyperplane for the separation, as we show next (thereby generalizing Lemma~\ref{lem:acone-sep-from-fin-pt}). \begin{theorem} \label{thm:sep-cone-from-cvx} Let $K\subseteq\extspace$ be a closed convex astral cone, and let $Y\subseteq\extspace$ be a nonempty closed convex set with $K\cap Y=\emptyset$. Then there exists $\uu\in\Rn\setminus\{\zero\}$ such that \[ \sup_{\xbar\in K} \xbar\cdot\uu \leq 0 < \inf_{\ybar\in Y} \ybar\cdot \uu. \] \end{theorem} \begin{proof} Since $K$ and $Y$ are nonempty, closed, convex and disjoint, we can apply Theorem~\ref{thm:sep-cvx-sets} yielding that there exist $\uu\in\Rn\setminus\{\zero\}$ and $\beta\in\R$ such that \[ \sup_{\xbar\in K} \xbar\cdot\uu < \beta < \inf_{\ybar\in Y} \ybar\cdot \uu. \] Since $K$ is an astral cone, we have that $\zero\in K$ (by Proposition~\ref{pr:ast-cone-is-naive}), implying that $0=\zero\cdot\uu<\beta$. Also, for all $\xbar\in K$, we claim that $\xbar\cdot\uu\leq 0$. Otherwise, if $\xbar\cdot\uu>0$ then $\limray{\xbar}\cdot\uu=+\infty$. 
On the other hand, $\limray{\xbar}\in K$ (by Proposition~\ref{pr:ast-cone-is-naive}, since $\xbar\in K$), implying $\limray{\xbar}\cdot\uu < \beta$, a contradiction. Together, these prove the theorem. \end{proof} Here are some consequences of Theorem~\ref{thm:sep-cone-from-cvx}. In particular, we can now see that the outer conic hull of any set $S\subseteq\extspace$ is the smallest closed convex astral cone that includes $S$. \begin{corollary} \label{cor:sep-ast-cone-conseqs} Let $S\subseteq\extspace$. \begin{letter-compact} \item \label{cor:sep-ast-cone-conseqs:b} The following are equivalent: \begin{roman-compact} \item \label{cor:sep-ast-cone-conseqs:b:1} $S$ is a closed convex astral cone. \item \label{cor:sep-ast-cone-conseqs:b:2} $S$ is equal to the intersection of some collection of homogeneous, closed astral halfspaces. \item \label{cor:sep-ast-cone-conseqs:b:3} $S$ is equal to the intersection of all homogeneous, closed astral halfspaces that include $S$; that is, $S=\oconich{S}$. \end{roman-compact} \item \label{cor:sep-ast-cone-conseqs:a} The outer conic hull of $S$, $\oconich{S}$, is the smallest closed convex astral cone that includes $S$. (That is, $\oconich{S}$ is equal to the intersection of all closed convex astral cones in $\extspace$ that include $S$.) \end{letter-compact} \end{corollary} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{cor:sep-ast-cone-conseqs:b}):} The proof is very similar to that of Corollary~\ref{cor:sep-cvx-sets-conseqs}(\ref{cor:sep-cvx-sets-conseqs:d}), but in proving that (\ref{cor:sep-ast-cone-conseqs:b:1}) implies (\ref{cor:sep-ast-cone-conseqs:b:3}), we instead use Theorem~\ref{thm:sep-cone-from-cvx} to yield a homogeneous, closed astral halfspace that includes $S$ but not any particular point outside $S$. \pfpart{Part~(\ref{cor:sep-ast-cone-conseqs:a}):} Similar to the proof of Corollary~\ref{cor:sep-cvx-sets-conseqs}(\ref{cor:sep-cvx-sets-conseqs:a}).
\qedhere \end{proof-parts} \end{proof} \subsection{Astral-defined separation of sets in $\Rn$} \label{sec:ast-def-sep-thms} So far, we have focused on how convex sets in astral space can be separated using astral halfspaces defined by a vector $\uu\in\Rn$. Next, we consider the dual question of how convex sets in $\Rn$ can be separated using a generalized form of halfspaces, each defined now by an astral point $\xbar\in\extspace$. Of course, as seen in Section~\ref{sec:prelim-sep-thms}, under certain conditions, a pair of disjoint convex sets in $\Rn$ can be separated according to various definitions using ordinary hyperplanes. But this is not always the case. For instance, in $\R$, the sets $\Rneg$ and $\Rstrictpos$ are disjoint and convex, but cannot be strongly separated. In this section, using astral points to define the halfspaces, we will see shortly that this limitation vanishes entirely, admitting the separation of any two disjoint convex sets in $\Rn$. Much of what follows is closely related to the previous work of \citet{lexicographic_separation,hemispaces}, though using a rather different formalism. Here, we give a development based on astral space. We considered the strong separation of sets in $\Rn$ in its standard form in Section~\ref{sec:prelim-sep-thms}. In Proposition~\ref{roc:thm11.1} and again in \eqref{eqn:std-str-sep-crit1}, we saw (using slightly different notation) that two nonempty sets $U$ and $V$ in $\Rn$ are strongly separated if, for some $\xx\in\Rn\setminus\{\zero\}$, \begin{equation} \label{eqn:std-str-sep-crit} \sup_{\uu\in U} \xx\cdot\uu < \inf_{\vv\in V} \xx\cdot\vv. \end{equation} In this case, the sets are separated by the hyperplane $H=\{\uu\in\Rn : \xx\cdot\uu = \beta\}$, for some $\beta\in\R$. Geometrically, nothing changes if we translate everything by any vector $\rr\in\Rn$. That is, the translations $U-\rr$ and $V-\rr$ are still strongly separated by a translation of this same hyperplane, namely, $H-\rr$. 
Algebraically, this is saying simply that \begin{equation} \label{eqn:std-str-sep-crit3} \sup_{\uu\in U} \xx\cdot(\uu-\rr) < \inf_{\vv\in V} \xx\cdot(\vv-\rr), \end{equation} which is of course equivalent to \eqref{eqn:std-str-sep-crit} since $\xx\cdot\rr$ is a constant. Thus, $U$ and $V$ are strongly separated if \eqref{eqn:std-str-sep-crit3} holds for some $\xx\in\Rn\setminus\{\zero\}$ and some (or all) $\rr\in\Rn$. In this form, we can generalize to separation of sets in $\Rn$ using separating sets defined by points in $\extspace$ simply by replacing $\xx$ in \eqref{eqn:std-str-sep-crit3} with an astral point $\xbar$, leading to: \begin{definition} Let $U$ and $V$ be nonempty subsets of $\Rn$. Then $U$ and $V$ are \emph{astral strongly separated} if there exist a point $\xbar\in\extspace$ and a point $\rr\in\Rn$ such that \begin{equation} \label{eqn:ast-str-sep-defn} \sup_{\uu\in U} \xbar\cdot(\uu-\rr) < \inf_{\vv\in V} \xbar\cdot(\vv-\rr). \end{equation} (Note that $\xbar$ cannot be $\zero$ since $U$ and $V$ are nonempty.) \end{definition} Equivalently, \eqref{eqn:ast-str-sep-defn} holds if and only if, for some $\beta\in\R$ and $\epsilon>0$, $\xbar\cdot(\uu-\rr)<\beta-\epsilon$ for all $\uu\in U$, and $\xbar\cdot(\vv-\rr)>\beta+\epsilon$ for all $\vv\in V$. Intuitively, $\rr\in\Rn$ is acting to translate the sets $U$ and $V$, and $\xbar$ is specifying a kind of astral-defined separation set $\{\uu\in\Rn : \xbar\cdot\uu=\beta\}$, for some $\beta\in\R$, analogous to a standard hyperplane. \begin{example} For instance, in $\R^2$, let $\bb\in\R^2$, and let \begin{align*} U &= \Braces{\uu\in\R^2 : u_1 \leq b_1 \mbox{ and } u_2 \leq b_2} \\ V &= \Braces{\vv\in\R^2 : v_1 \geq b_1} \setminus U. \end{align*} Thus, $U$ includes all points in the plane that are ``below and to the left'' of $\bb$, while $V$ includes all points ``to the right'' of $\bb$, except those in $U$. These sets are convex and disjoint. The set $V$ is neither open nor closed. 
These sets cannot be strongly separated in the standard sense using ordinary hyperplanes. Indeed, suppose $\xx\in\R^2$ satisfies \eqref{eqn:std-str-sep-crit}. If $x_2\neq 0$ then the right-hand side of that equation would be $-\infty$, which is impossible. Thus, $\xx=\lambda\ee_1$ for some $\lambda\in\R$. If $\lambda<0$ then the left-hand side would be $+\infty$, which is impossible. And if $\lambda\geq 0$, then both sides of the equation would be equal to $\lambda b_1$, which also violates the strict inequality. Thus, a contradiction is reached in all cases. It can further be shown that there is no closed or open halfspace that entirely includes one set while entirely excluding the other. These sets are, however, astral strongly separated. In particular, setting $\xbar=\limray{\ee_1}\plusl\limray{\ee_2}$ and $\rr=\bb$, it can be checked that \eqref{eqn:ast-str-sep-defn} holds, specifically, that $\xbar\cdot(\uu-\rr)\leq 0$ for all $\uu\in U$ while $\xbar\cdot(\vv-\rr) = +\infty$ for all $\vv\in V$. On the other hand, if $\bb=\trans{[2,2]}$, and if we require $\rr=\zero$, then \eqref{eqn:ast-str-sep-defn} does not hold for any $\xbar\in\extspac{2}$. To see this, note first that, as just argued, this equation cannot hold for any $\xbar\in\R^2$. So suppose it holds for some $\xbar\in\extspac{2}\setminus\R^2$, and let $\ww\in\R^2$ be $\xbar$'s dominant direction. Then $\ww \in U$ (since $\norm{\ww}=1$, both of $\ww$'s components are at most $1<2$), implying the left-hand side of \eqref{eqn:ast-str-sep-defn} is equal to $+\infty$ (since $\xbar\cdot\ww=+\infty$), a contradiction.
In this formulation, we instead require that $\xbar\cdot(\uu-\vv)$ is bounded below $0$ for all $\uu\in U$ and $\vv\in V$, that is, that \[ \sup_{\uu\in U, \vv\in V} \xbar\cdot(\uu-\vv) < 0. \] The next theorem shows that this condition is equivalent to the one given in \eqref{eqn:ast-str-sep-defn}, spelling out the specific relationship between these two notions: \begin{theorem} \label{thm:ast-str-sep-equiv-no-trans} Let $U,V\subseteq\Rn$ and let $\xbar\in\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:ast-str-sep-equiv-no-trans:a} There exist $\ybar\in\lb{\zero}{\xbar}$ and $\rr\in\Rn$ such that \begin{equation} \label{eq:thm:ast-str-sep-equiv-no-trans:1} \sup_{\uu\in U} \ybar\cdot(\uu-\rr) < \inf_{\vv\in V} \ybar\cdot(\vv-\rr). \end{equation} \item \label{thm:ast-str-sep-equiv-no-trans:b} We have \begin{equation} \label{eq:thm:ast-str-sep-equiv-no-trans:3} \sup_{\uu\in U, \vv\in V} \xbar\cdot(\uu-\vv) < 0. \end{equation} \end{letter-compact} Therefore, if $U$ and $V$ are nonempty, then they are astral strongly separated if and only if \eqref{eq:thm:ast-str-sep-equiv-no-trans:3} holds for some $\xbar\in\extspace$. \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:ast-str-sep-equiv-no-trans:a}) $\Rightarrow$ (\ref{thm:ast-str-sep-equiv-no-trans:b}): } Suppose that \eqref{eq:thm:ast-str-sep-equiv-no-trans:1} holds for some $\ybar\in\lb{\zero}{\xbar}$ and $\rr\in\Rn$. Then there exist $\beta,\gamma\in\R$ such that \[ \sup_{\uu\in U} \ybar\cdot(\uu-\rr) \leq \beta < \gamma \leq \inf_{\vv\in V} \ybar\cdot(\vv-\rr). \] Suppose $\uu\in U$, $\vv\in V$. Then $\ybar\cdot(\vv-\rr)\geq\gamma>-\infty$ and $\ybar\cdot(\uu-\rr)\leq\beta$, implying $\ybar\cdot(\rr-\uu)\geq -\beta>-\infty$. 
Consequently, by Proposition~\ref{pr:i:1}, \begin{equation} \label{eq:thm:ast-str-sep-equiv-no-trans:2} \ybar\cdot(\vv-\uu) = \ybar\cdot(\vv-\rr) + \ybar\cdot(\rr-\uu) \geq \gamma-\beta, \end{equation} since the expressions being added are summable. This further implies that \[ 0 < \gamma-\beta \leq \ybar\cdot(\vv-\uu) \leq \max\Braces{ 0, \xbar\cdot(\vv-\uu) } = \xbar\cdot(\vv-\uu). \] The second inequality is from \eqref{eq:thm:ast-str-sep-equiv-no-trans:2}. The third inequality is because $\ybar\in\lb{\zero}{\xbar}$ (by \Cref{pr:seg-simplify}\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}). The equality is because the maximum that appears here must be equal to one of its two arguments, but cannot be $0$ since we have already established that the maximum is strictly positive. Thus, $\xbar\cdot(\uu-\vv)\leq\beta-\gamma<0$ for all $\uu\in U$ and $\vv\in V$, proving the claim. \pfpart{(\ref{thm:ast-str-sep-equiv-no-trans:b}) $\Rightarrow$ (\ref{thm:ast-str-sep-equiv-no-trans:a}): } Proof is by induction on the astral rank of $\xbar$. More specifically, we prove the following by induction on $k=0,\ldots,n$: For all $U,V\subseteq\Rn$ and for all $\xbar\in\extspace$, if $\xbar$ has astral rank $k$ and if \eqref{eq:thm:ast-str-sep-equiv-no-trans:3} holds, then there exist $\ybar\in\lb{\zero}{\xbar}$ and $\rr\in\rspanxbar$ for which \eqref{eq:thm:ast-str-sep-equiv-no-trans:1} holds. Before beginning the induction, we consider the case that either $U$ or $V$ (or both) are empty. In this case, we can choose $\ybar=\zero$ (which is in $\lb{\zero}{\xbar}$) and $\rr=\zero$ (which is in $\rspanxbar$). If $V=\emptyset$, then \eqref{eq:thm:ast-str-sep-equiv-no-trans:1} holds since the right-hand side is $+\infty$ and the left-hand side is either $-\infty$ (if $U=\emptyset$) or $0$ (otherwise). The case $U=\emptyset$ is similar. 
In the induction argument that follows, we suppose that $\xbar$ has astral rank $k$ and that \eqref{eq:thm:ast-str-sep-equiv-no-trans:3} holds, implying that there exists $\beta\in\R$ such that $\sup_{\uu\in U,\vv\in V} \xbar\cdot(\uu-\vv) \leq -\beta < 0$. In the base case that $k=0$, $\xbar=\qq$ for some $\qq\in\Rn$. The case that either $U$ or $V$ is empty was handled above, so we assume both are nonempty. Then for all $\uu\in U$ and $\vv\in V$, $\qq\cdot(\uu-\vv)\leq-\beta$, implying $\qq\cdot\uu \leq \qq\cdot\vv-\beta$. Thus, \[ -\infty < \sup_{\uu\in U} \qq\cdot\uu \leq \inf_{\vv\in V} \qq\cdot\vv - \beta < +\infty, \] where the first and last inequalities are because $U$ and $V$ are nonempty. Letting $\ybar=\qq$ and $\rr=\zero$, the claim thereby follows in this case. For the inductive step, let $k>0$ and assume the claim holds for $k-1$. We again assume $U$ and $V$ are both nonempty, since otherwise we can apply the argument above. Then we can write $\xbar=\limray{\ww}\plusl\xbarperp$ where $\ww\in\Rn$ is $\xbar$'s dominant direction, and $\xbarperp$ is the projection of $\xbar$ orthogonal to $\ww$ (Proposition~\ref{pr:h:6}). For all $\uu\in U$ and $\vv\in V$, we must have $\ww\cdot\uu\leq\ww\cdot\vv$; otherwise, if $\ww\cdot(\uu-\vv)>0$, then we would have $\xbar\cdot(\uu-\vv)=+\infty$, a contradiction. Thus, \[ -\infty < \sup_{\uu\in U} \ww\cdot\uu \leq \inf_{\vv\in V} \ww\cdot\vv < +\infty, \] where the first and last inequalities are because $U$ and $V$ are nonempty. Let $\lambda\in\R$ be such that $\sup_{\uu\in U} \ww\cdot\uu \leq \lambda \leq \inf_{\vv\in V} \ww\cdot\vv$. Also, let \[ J=\{\uu\in\Rn : \ww\cdot\uu = \lambda\}, \] let $U'=U\cap J$, and let $V'=V\cap J$. Then for all $\uu\in U'$ and $\vv\in V'$, we have $\ww\cdot(\uu-\vv)=\ww\cdot\uu-\ww\cdot\vv=0$ (since $\uu,\vv\in J$). Thus, \[ \xbarperp\cdot(\uu-\vv) = (\limray{\ww}\plusl\xbarperp)\cdot(\uu-\vv) = \xbar\cdot(\uu-\vv) \leq -\beta < 0.
\] Therefore, we can apply our inductive hypothesis to $U'$, $V'$ and $\xbarperp$ (whose astral rank is $k-1$, by Proposition~\ref{pr:h:6}), implying that there exist $\ybar'\in\lb{\zero}{\xbarperp}$ and $\rr'\in\rspanxbarperp$ such that \[ \sup_{\uu\in U'} \ybar'\cdot(\uu-\rr') < \inf_{\vv\in V'} \ybar'\cdot(\vv-\rr'). \] Let $\ybar=\limray{\ww}\plusl\ybar'$ and let $\rr=\lambda\ww+\rr'$. Then $\ybar\in\limray{\ww}\plusl\lb{\zero}{\xbarperp}\subseteq\lb{\zero}{\xbar}$ by \Cref{cor:seg:zero}. Further, $\rr\in\rspanxbar$ since if $\xbarperp=\VV'\omm\plusl\qq'$ for some $\VV'\in\R^{n\times k'}$, $k'\geq 0$, $\qq'\in\Rn$ then $\rr'\in\rspanxbarperp=\colspace{[\VV',\qq']}$, so $\rr\in\colspace{[\ww,\VV',\qq']}=\rspanxbar$. Also, $\xbarperp\cdot\ww=0$, implying $\rr'\cdot\ww=0$ by Proposition~\ref{pr:rspan-sing-equiv-dual}. Moreover, $\ybar'\cdot\ww=0$ since $\zero$ and $\xbarperp$ are both in $\{\zbar\in\extspace : \zbar\cdot\ww=0\}$, which, being convex (Proposition~\ref{pr:e1}\ref{pr:e1:c}), must include $\lb{\zero}{\xbarperp}$, specifically, $\ybar'$. Let $\uu\in U$. We next argue that \begin{equation} \label{eq:thm:ast-str-sep-equiv-no-trans:5} \ybar\cdot(\uu-\rr) = \begin{cases} \ybar'\cdot(\uu-\rr') & \text{if $\uu\in U'$} \\ -\infty & \text{otherwise.} \\ \end{cases} \end{equation} We have \begin{equation} \label{eq:thm:ast-str-sep-equiv-no-trans:4} \ww\cdot(\uu-\rr) = \ww\cdot(\uu-\lambda\ww-\rr') = \ww\cdot\uu - \lambda \end{equation} since $\ww\cdot\rr'=0$ and $\norm{\ww}=1$. Also, by $\lambda$'s definition, $\ww\cdot\uu\leq \lambda$. Therefore, if $\uu\not\in U'$ then $\uu\not\in J$ so $\ww\cdot\uu<\lambda$ and $\ww\cdot(\uu-\rr)<0$ (by Eq.~\ref{eq:thm:ast-str-sep-equiv-no-trans:4}), implying \[ \ybar\cdot(\uu-\rr) = \limray{\ww}\cdot(\uu-\rr)\plusl\ybar'\cdot(\uu-\rr) = -\infty. 
\] Similarly, if $\uu\in U'$ then $\ww\cdot\uu=\lambda$ so $\ww\cdot(\uu-\rr)=0$ (again by Eq.~\ref{eq:thm:ast-str-sep-equiv-no-trans:4}), implying \[ \ybar\cdot(\uu-\rr) = \ybar'\cdot(\uu-\rr) = \ybar'\cdot(\uu-\lambda\ww-\rr') = \ybar'\cdot(\uu-\rr') \] where the last equality is because $\ybar'\cdot\ww=0$ (and using Proposition~\ref{pr:i:1}). By a similar argument, for $\vv\in V$, \begin{equation} \label{eq:thm:ast-str-sep-equiv-no-trans:6} \ybar\cdot(\vv-\rr) = \begin{cases} \ybar'\cdot(\vv-\rr') & \text{if $\vv\in V'$} \\ +\infty & \text{otherwise.} \\ \end{cases} \end{equation} Thus, combining, we have \[ \sup_{\uu\in U} \ybar\cdot(\uu-\rr) \leq \sup_{\uu\in U'} \ybar'\cdot(\uu-\rr') < \inf_{\vv\in V'} \ybar'\cdot(\vv-\rr') \leq \inf_{\vv\in V} \ybar\cdot(\vv-\rr). \] The first and third inequalities are by Eqs.~(\ref{eq:thm:ast-str-sep-equiv-no-trans:5}) and~(\ref{eq:thm:ast-str-sep-equiv-no-trans:6}), respectively. The second inequality is by inductive hypothesis. This completes the induction and the proof. \qedhere \end{proof-parts} \end{proof} The equivalence given in Theorem~\ref{thm:ast-str-sep-equiv-no-trans} does not hold in general if, in part~(\ref{thm:ast-str-sep-equiv-no-trans:a}) of that theorem, we require that $\ybar=\xbar$. Here is an example: \begin{example} In $\R^2$, let $U$ be the closed halfplane $U=\{\uu\in\R^2 : u_1 \leq 0\}$, and let $V$ be the open halfplane that is its complement, $V=\R^2\setminus U$. Let $\xbar=\limray{\ee_1}\plusl\limray{\ee_2}$. Then \eqref{eq:thm:ast-str-sep-equiv-no-trans:3} is satisfied since $\xbar\cdot(\uu-\vv)=-\infty$ for all $\uu\in U$ and $\vv\in V$. However, if $\ybar=\xbar$, then \eqref{eq:thm:ast-str-sep-equiv-no-trans:1} is not satisfied for any $\rr\in\R^2$. This is because if $r_1>0$ then there exists $\vv\in V$ with $r_1>v_1>0$, implying $\xbar\cdot(\vv-\rr)=-\infty$ so that \eqref{eq:thm:ast-str-sep-equiv-no-trans:1} is unsatisfied since its right-hand side is $-\infty$. 
Similarly, if $r_1\leq 0$ then there exists $\uu\in U$ with $u_1=r_1$ and $u_2>r_2$ so that $\xbar\cdot(\uu-\rr)=+\infty$, again implying that \eqref{eq:thm:ast-str-sep-equiv-no-trans:1} is unsatisfied. On the other hand, \eqref{eq:thm:ast-str-sep-equiv-no-trans:1} is satisfied if we choose $\rr=\zero$ and $\ybar=\limray{\ee_1}$, which is in $\lb{\zero}{\xbar}$. \end{example} Every pair of nonempty, disjoint convex sets in $\Rn$ is astral strongly separated: \begin{theorem} \label{thm:ast-def-sep-cvx-sets} Let $U,V\subseteq\Rn$ be nonempty, convex and disjoint. Then $U$ and $V$ are astral strongly separated. \end{theorem} Before proving the theorem, we give a lemma handling the special case that one set is the singleton $\{\zero\}$. We then show how this implies the general case. \begin{lemma} \label{lem:ast-def-sep-from-orig} Let $U\subseteq\Rn\setminus\{\zero\}$ be convex. Then there exists $\xbar\in\extspace$ for which \begin{equation} \label{eq:lem:ast-def-sep-from-orig:1} \sup_{\uu\in U} \xbar\cdot\uu < 0. \end{equation} \end{lemma} \begin{proof} Proof is by induction on the dimension of a linear subspace that includes $U$. More precisely, we prove by induction on $d=0,\ldots,n$ that for all convex sets $U\subseteq\Rn$ and for all linear subspaces $L\subseteq\Rn$, if $U\subseteq L\setminus\{\zero\}$ and $\dim{L}\leq d$ then there exists $\xbar\in\extspace$ satisfying \eqref{eq:lem:ast-def-sep-from-orig:1}. As a preliminary step, we note that if $U=\emptyset$, then we can choose $\xbar=\zero$ so that \eqref{eq:lem:ast-def-sep-from-orig:1} holds vacuously. In the base case that $d=0$, we must have $L=\{\zero\}$, implying $U=\emptyset$. Thus, as just discussed, we can choose $\xbar=\zero$ in this case. For the inductive step, let $d>0$ and assume the claim holds for $d-1$. Let $U$ and $L$ be as described in the claim. Since the case $U=\emptyset$ was handled above, we assume $U$ is nonempty. 
Since $U$ and the singleton $\{\zero\}$ are disjoint, their relative interiors are as well. Therefore, by Proposition~\ref{roc:thm11.3}, there exists a (standard) hyperplane \[ H=\{ \uu\in\Rn : \ww\cdot\uu = \beta \} \] that properly separates them, for some $\ww\in\Rn$ and $\beta\in\R$. That is, $\ww\cdot\uu\leq \beta$ for all $\uu\in U$, and $0=\ww\cdot\zero\geq \beta$. Furthermore, there is some point in $U\cup \{\zero\}$ that is not in $H$. If $\zero\not\in H$ then $0=\ww\cdot\zero>\beta$, implying $\sup_{\uu\in U} \ww\cdot\uu \leq \beta < 0$, and therefore that \eqref{eq:lem:ast-def-sep-from-orig:1} is satisfied if we choose $\xbar=\ww$. Otherwise, $\zero\in H$, implying $\beta=\ww\cdot\zero=0$, and furthermore, since the separation is proper, that there exists a point $\uhat\in U\setminus H$. Thus, $\ww\cdot\uu\leq 0$ for all $\uu\in U$, and $\ww\cdot\uhat< 0$. Let $U'=U\cap H$, and let $L'=L\cap H$. Then $L'$ is a linear subspace, and $U'$ is convex and included in $L'\setminus\{\zero\}$. Further, $\uhat\in U\subseteq L$, but $\uhat\not\in H$, implying $\uhat\not\in L'$. Thus, $L'$ is a proper subset of $L$, so $\dim{L'}<\dim{L}$. Therefore, we can apply our inductive hypothesis (to $U'$ and $L'$), yielding that there exists $\xbar'\in\extspace$ such that $\gamma<0$ where $\gamma=\sup_{\uu\in U'}\xbar'\cdot\uu$. Let $\xbar=\limray{\ww}\plusl\xbar'$. Let $\uu\in U$. If $\uu\not\in H$ then $\ww\cdot\uu<0$ so $\xbar\cdot\uu=\limray{\ww}\cdot\uu\plusl\xbar'\cdot\uu=-\infty$. Otherwise, if $\uu\in H$ then $\uu\in U'$ and $\ww\cdot\uu=0$ so $\xbar\cdot\uu=\xbar'\cdot\uu\leq \gamma$. Thus, \[ \sup_{\uu\in U} \xbar\cdot\uu \leq \gamma < 0, \] completing the induction and the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:ast-def-sep-cvx-sets}] Let $W=U-V$. Then $-V$ is convex by \Cref{pr:aff-preserves-cvx}, so $W$ is convex by \Cref{roc:thm3.1}. Further, $\zero\not\in W$ since $U$ and $V$ are disjoint. 
Therefore, by Lemma~\ref{lem:ast-def-sep-from-orig}, applied to $W$, there exists $\xbar\in\extspace$ such that $\sup_{\uu\in U,\vv\in V} \xbar\cdot(\uu-\vv)<0$. By Theorem~\ref{thm:ast-str-sep-equiv-no-trans}, it follows that $U$ and $V$ are astral strongly separated. \end{proof} An \emph{astral dual halfspace} (or \emph{dual halfspace} for short) is a set of the form \begin{equation} \label{eqn:ast-def-hfspace-defn} \ahfsp{\xbar}{\rr}{\beta} = \Braces{\uu\in\Rn : \xbar\cdot(\uu-\rr)\leq \beta} \end{equation} for some $\xbar\in\extspace\setminus\{\zero\}$, $\rr\in\Rn$, $\beta\in\R$. Such a set is always convex, as shown next. However, in general, it need not be either closed or open. \begin{proposition} \label{pr:ast-def-hfspace-convex} Let $\xbar\in\extspace$, $\rr\in\Rn$, $\beta\in\R$, and let \begin{align*} H &= \Braces{\uu\in\Rn : \xbar\cdot(\uu-\rr)\leq \beta}, \\ H' &= \Braces{\uu\in\Rn : \xbar\cdot(\uu-\rr) < \beta}. \end{align*} Then both $H$ and $H'$ are convex. Furthermore, if $\rr=\zero$ and $\beta=0$, then $H$ is a convex cone. \end{proposition} \begin{proof} Setting $\uu'=\uu-\rr$, we can write $H$ as \begin{equation} \label{eq:ast-def-hfspace-convex:1} H = \rr + \{\uu'\in\Rn : \xbar\cdot\uu' \leq \beta\}. \end{equation} Thus, $H$ is the translation of a sublevel set of the function $\uu\mapsto\xbar\cdot\uu$. This function is convex by Theorem~\refequiv{thm:h:5}{thm:h:5a0}{thm:h:5b}. Therefore, the sublevel set appearing on the far right of \eqref{eq:ast-def-hfspace-convex:1} is also convex (by \Cref{roc:thm4.6}), so $H$ is as well. The proof for $H'$ is similar. If $\rr=\zero$ and $\beta=0$, then $H$ is also a cone since it includes $\zero$, and since if $\uu\in H$ then $\xbar\cdot\uu\leq 0$, implying, for $\lambda\in\Rpos$, that $\xbar\cdot(\lambda\uu)=\lambda(\xbar\cdot\uu)\leq 0$ so that $\lambda\uu$ is in $H$ as well. 
\end{proof} In this terminology, if two nonempty sets $U,V\subseteq\Rn$ are astral strongly separated, then they must be included respectively in a pair of disjoint astral dual halfspaces. More specifically, if $U,V$ satisfy \eqref{eqn:ast-str-sep-defn}, then \begin{align*} U &\subseteq \Braces{\uu\in\Rn : \xbar\cdot(\uu-\rr)\leq \beta} \\ V &\subseteq \Braces{\uu\in\Rn : \xbar\cdot(\uu-\rr)\geq \gamma}, \end{align*} where $\beta,\gamma\in\R$ are such that $\sup_{\uu\in U} \xbar\cdot(\uu-\rr)\leq\beta<\gamma\leq \inf_{\uu\in V} \xbar\cdot(\uu-\rr)$. Moreover, these sets are disjoint since $\beta<\gamma$. Thus, according to Theorem~\ref{thm:ast-def-sep-cvx-sets}, any two disjoint convex sets are included in disjoint astral dual halfspaces. The closure of the convex hull of any set $S\subseteq\Rn$ is equal to the intersection of all the closed halfspaces that include it (Proposition~\ref{pr:con-int-halfspaces}\ref{roc:cor11.5.1}). When working with astral dual halfspaces, the same holds simply for the convex hull of the set, without taking its closure, as we show next. In particular, this means that every convex set in $\Rn$ is equal to the intersection of all the astral dual halfspaces that include it. This and other consequences of Theorem~\ref{thm:ast-def-sep-cvx-sets} are summarized next: \begin{corollary} \label{cor:ast-def-sep-conseqs} Let $S, U, V\subseteq\Rn$. \begin{letter-compact} \item \label{cor:ast-def-sep-conseqs:a} The following are equivalent: \begin{roman-compact} \item \label{cor:ast-def-sep-conseqs:a1} $S$ is convex. \item \label{cor:ast-def-sep-conseqs:a2} $S$ is equal to the intersection of some collection of astral dual halfspaces. \item \label{cor:ast-def-sep-conseqs:a3} $S$ is equal to the intersection of all astral dual halfspaces that include $S$.
\end{roman-compact} \item \label{cor:ast-def-sep-conseqs:b} The convex hull of $S$, $\conv{S}$, is equal to the intersection of all astral dual halfspaces that include $S$. \item \label{cor:ast-def-sep-conseqs:c} $U$ and $V$ are astral strongly separated if and only if $(\conv{U})\cap(\conv{V})=\emptyset$. \end{letter-compact} \end{corollary} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{cor:ast-def-sep-conseqs:a}):} That (\ref{cor:ast-def-sep-conseqs:a3}) implies (\ref{cor:ast-def-sep-conseqs:a2}) is immediate. That (\ref{cor:ast-def-sep-conseqs:a2}) implies (\ref{cor:ast-def-sep-conseqs:a1}) follows from Propositions~\ref{pr:e1}(\ref{pr:e1:b}) and~\ref{pr:ast-def-hfspace-convex}. To show that (\ref{cor:ast-def-sep-conseqs:a1}) implies (\ref{cor:ast-def-sep-conseqs:a3}), suppose that $S$ is convex, and let $U$ be the intersection of all astral dual halfspaces that include $S$; we aim to show $S=U$. We assume $S$ is not empty since otherwise $U=\emptyset=S$. Clearly, $S\subseteq U$. Suppose $\ww\in\Rn\setminus S$. Then by Theorem~\ref{thm:ast-def-sep-cvx-sets}, applied to $S$ and $\{\ww\}$, there exists $\xbar\in\extspace$, $\rr\in\Rn$ and $\beta\in\R$ such that $\xbar\cdot(\ww-\rr)>\beta$, while $\xbar\cdot(\uu-\rr)\leq\beta$ for all $\uu\in S$ (implying $\xbar\neq\zero$ since $S$ is nonempty). Thus, $S$ is in the astral dual halfspace $\ahfsp{\xbar}{\rr}{\beta}$ (as defined in Eq.~\ref{eqn:ast-def-hfspace-defn}), but $\ww$ is not. Therefore, $\ww\not\in U$, proving $S=U$. \pfpart{Part~(\ref{cor:ast-def-sep-conseqs:b}):} For any astral dual halfspace $H\subseteq\Rn$, since $H$ is convex (Proposition~\ref{pr:ast-def-hfspace-convex}), $S\subseteq H$ if and only if $\conv{S}\subseteq H$ (by Proposition~\ref{pr:conhull-prop}\ref{pr:conhull-prop:aa}). 
Therefore, the intersection of all astral dual halfspaces that include $S$ is the same as the intersection of all astral dual halfspaces that include $\conv{S}$, which, by part~(\ref{cor:ast-def-sep-conseqs:a}), is simply $\conv{S}$. \pfpart{Part~(\ref{cor:ast-def-sep-conseqs:c}):} The proof is analogous to that of Corollary~\ref{cor:sep-cvx-sets-conseqs}(\ref{cor:sep-cvx-sets-conseqs:c}). \qedhere \end{proof-parts} \end{proof} A set $H\subseteq\Rn$ is said to be a \emph{hemispace} if both $H$ and its complement, $\Rn\setminus H$, are convex. For example, ordinary open and closed halfspaces are hemispaces. In fact, except for $\emptyset$ and $\Rn$, a set is a hemispace if and only if it is an astral dual halfspace, as we show next as a corollary of Theorem~\ref{thm:ast-def-sep-cvx-sets}. Closely related equivalences for hemispaces were proved by \citet[Theorem~1.1]{hemispaces}. \begin{corollary} \label{cor:hemi-is-ast-dual-halfspace} Let $H\subseteq\Rn$. Then the following are equivalent: \begin{letter-compact} \item \label{cor:hemi-is-ast-dual-halfspace:a} $H$ is a hemispace \item \label{cor:hemi-is-ast-dual-halfspace:b} $H$ is either $\emptyset$ or $\Rn$ or an astral dual halfspace. \end{letter-compact} \end{corollary} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{cor:hemi-is-ast-dual-halfspace:b}) $\Rightarrow$ (\ref{cor:hemi-is-ast-dual-halfspace:a}): } If $H$ is $\emptyset$ or $\Rn$, then it is clearly a hemispace. Suppose $H$ is the astral dual halfspace $\ahfsp{\xbar}{\rr}{\beta}$ given in \eqref{eqn:ast-def-hfspace-defn}. Then by Proposition~\ref{pr:ast-def-hfspace-convex}, $H$ is convex, as is its complement, \[ \Braces{\uu\in\Rn : \xbar\cdot(\uu-\rr) > \beta} = \Braces{\uu\in\Rn : -\xbar\cdot(\uu-\rr) < -\beta}. \] Thus, $H$ is a hemispace. \pfpart{(\ref{cor:hemi-is-ast-dual-halfspace:a}) $\Rightarrow$ (\ref{cor:hemi-is-ast-dual-halfspace:b}): } Suppose $H$ is a hemispace and is neither $\emptyset$ nor $\Rn$. 
Then both $H$ and its complement, $J=\Rn\setminus H$, are convex and nonempty. Since they are also disjoint, by Theorem~\ref{thm:ast-def-sep-cvx-sets}, they are astral strongly separated, so that \[ \sup_{\uu\in H} \xbar\cdot(\uu-\rr) < \beta < \inf_{\vv\in J} \xbar\cdot(\vv-\rr), \] for some $\xbar\in\extspace\setminus\{\zero\}$, $\rr\in\Rn$, and $\beta\in\R$. It follows that $H$ is included in the astral dual halfspace $H'=\ahfsp{\xbar}{\rr}{\beta}$, while its complement, $J$, is in the complement of $H'$. That is, $\Rn\setminus H = J \subseteq \Rn\setminus H'$, so $H'\subseteq H$. Thus, $H=H'$, so $H$ is an astral dual halfspace. \qedhere \end{proof-parts} \end{proof} Suppose $U,V\subseteq\Rn$ are astral strongly separated. Then by Theorem~\ref{thm:ast-str-sep-equiv-no-trans}, there exists $\xbar\in\extspace$ such that $\xbar\cdot(\uu-\vv)<0$ for all $\uu\in U$, $\vv\in V$, which in turn implies that $\limray{\xbar}\cdot(\uu-\vv)=-\infty$. Since $\limray{\xbar}$ is an icon, we can further write it in the form $\limray{\xbar}=\WW\omm$, for some $\WW\in\Rnk$, so that $\WW \omm\cdot(\uu-\vv) = -\infty$ for all $\uu\in U$, $\vv\in V$. We remark that this latter condition can be expressed in terms of {lexicographic ordering}. Specifically, for any two vectors $\aaa,\bb\in\Rk$, we say that $\aaa$ is \emph{lexicographically less than} $\bb$, written $\aaa \lexless \bb$, if $\aaa\neq\bb$ and if, in the first component where they differ, $\aaa$ is less than $\bb$; that is, if for some $j\in\{1,\ldots,k\}$, $a_j < b_j$ and $a_i = b_i$ for $i=1,\ldots,j-1$. Then it can be checked (for instance, using Lemma~\ref{lemma:case}) that the condition above, that $\WW \omm\cdot(\uu-\vv) = -\infty$, holds if and only if $\trans{\WW}(\uu-\vv) \lexless \zero$, or equivalently, $\trans{\WW}\uu \lexless \trans{\WW}\vv$. In this sense, since this holds for all $\uu\in U$, and all $\vv\in V$, the sets $U$ and $V$ are \emph{lexicographically separated}. 
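As a concrete illustration (with sets chosen here purely for the example): in $\R^2$, let $U=\{\uu\in\R^2 : u_1<0\}\cup\{\uu\in\R^2 : u_1=0, u_2\leq 0\}$ and $V=\R^2\setminus U$. Both sets are convex (indeed, they are complementary hemispaces), and no ordinary linear functional strongly separates them, since the origin belongs to the closure of each. Nevertheless, taking $\WW=[\ee_1,\ee_2]$, so that $\trans{\WW}\uu=\uu$, we have $\trans{\WW}\uu \lexless \trans{\WW}\vv$ for all $\uu\in U$ and $\vv\in V$: either $u_1<v_1$, or else $u_1=v_1=0$ and $u_2\leq 0<v_2$. Equivalently, $\WW\omm\cdot(\uu-\vv)=-\infty$ for all such pairs, so $U$ and $V$ are lexicographically separated.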
Thus, Theorem~\ref{thm:ast-def-sep-cvx-sets} shows that any two disjoint convex sets in $\Rn$ can be lexicographically separated. This latter fact had been shown previously by \citet[Theorem~2.1]{lexicographic_separation}. Indeed, the preceding argument can be used to show that our notion of astral strong separation is essentially equivalent to lexicographic separation, so in this sense, Theorem~\ref{thm:ast-def-sep-cvx-sets} and Corollary~\ref{cor:ast-def-sep-conseqs}(\ref{cor:ast-def-sep-conseqs:c}) were already implicit in their work. We consider next separation involving convex cones. We say that an astral dual halfspace, as in \eqref{eqn:ast-def-hfspace-defn}, is \emph{homogeneous} if both $\rr=\zero$ and $\beta=0$, that is, if it has the form \begin{equation} \label{eqn:ast-def-homo-hfspace-defn} \ahfsp{\xbar}{\zero}{0} = \Braces{\uu\in\Rn : \xbar\cdot\uu\leq 0}. \end{equation} When separating a convex cone $K\subseteq\Rn$ from a disjoint convex set $V\subseteq\Rn$, we show next that there always exists a homogeneous astral dual halfspace that includes $K$ while excluding all of $V$. \begin{theorem} \label{thm:ast-def-sep-cone-sing} Let $K\subseteq\Rn$ be a convex cone, and let $V\subseteq\Rn$ be nonempty and convex with $K\cap V=\emptyset$. Then there exists $\xbar\in\extspace$ such that \[ \sup_{\uu\in K} \xbar\cdot\uu \leq 0 < \inf_{\vv\in V} \xbar\cdot\vv. \] \end{theorem} Before proving the theorem, we give a lemma that will be used in its proof. The lemma supposes that we are given some $\xbar\in\extspace$ and sets $U,W\subseteq\Rn$ such that $\xbar\cdot\uu\leq 0$ for all $\uu\in U$ and $\xbar\cdot\ww<0$ for all $\ww\in W$. In particular, it is possible that $\xbar\cdot\ww=-\infty$ for all $\ww\in W$. The lemma shows that there then must exist some point $\ybar\in\extspace$ that preserves all these same properties with the additional condition that $\ybar\cdot\ww\in\R$ for at least one point $\ww\in W$. 
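As a simple illustration of the lemma (with points chosen purely for the example): in $\R^2$, take $\xbar=\limray{\ee_1}$, $U=\{\zero\}$, and $W=\{-\ee_1\}$. Then $\xbar\cdot\zero=0$ and $\xbar\cdot(-\ee_1)=-\infty<0$, so the hypotheses hold, but $\xbar$ evaluates to $-\infty$ at every point of $W$. The conclusion is witnessed by $\ybar=\ee_1$: indeed, $\ybar\cdot\zero=0$, and $\ybar\cdot(-\ee_1)=-1$, which is both negative and finite.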
\begin{lemma} \label{lem:sep-one-pt-finite} Let $U,W\subseteq\Rn$, with $W$ nonempty, and let $\xbar\in\extspace$. Suppose $\xbar\cdot\uu\leq 0$ for all $\uu\in U$, and that $\xbar\cdot\ww < 0$ for all $\ww\in W$. Then there exists a point $\ybar\in\extspace$ such that $\ybar\cdot\uu\leq 0$ for all $\uu\in U$, $\ybar\cdot\ww < 0$ for all $\ww\in W$, and such that there exists some $\what\in W$ with $\ybar\cdot\what \in \R$. \end{lemma} \begin{proof} Since $\limray{\xbar}$ is an icon, we can write it as $\limray{\xbar}=\limrays{\vv_1,\ldots,\vv_k}$ for some $\vv_1,\ldots,\vv_k\in\Rn$ (Propositions~\ref{pr:icon-equiv}\ref{pr:icon-equiv:a},\ref{pr:icon-equiv:c} and~\ref{pr:i:8}\ref{pr:i:8-infprod}). For $i=0,\ldots,k$, let $ \ebar_i = \limrays{\vv_1,\ldots,\vv_i} $. For all $\uu\in U\cup W$ and all $i\in\{0,\ldots,k\}$, we claim $\ebar_i\cdot\uu\leq 0$. Otherwise, if $\ebar_i\cdot\uu>0$ then $\ebar_i\cdot\uu=+\infty$ (by Proposition~\ref{pr:icon-equiv}\ref{pr:icon-equiv:a},\ref{pr:icon-equiv:b}, since $\ebar_i$ is an icon), implying $\limray{\xbar}\cdot\uu = (\ebar_i\plusl\limrays{\vv_{i+1},\ldots,\vv_k})\cdot\uu = +\infty $, so that $\xbar\cdot\uu>0$, a contradiction. Let $j\in\{1,\ldots,k\}$ be the largest index for which there exists $\what\in W$ such that $\ebar_{j-1}\cdot\what=0$. In particular, this means that $\ebar_j\cdot\ww\neq 0$ for all $\ww\in W$. Note that $1\leq j\leq k$ since for all $\ww\in W$, $\ebar_0\cdot\ww=\zero\cdot\ww=0$ and $\ebar_k\cdot\ww=\limray{\xbar}\cdot\ww<0$ (since $\xbar\cdot\ww<0$). Let $\ybar=\ebar\plusl\vv$ where $\ebar=\ebar_{j-1}$ and $\vv=\vv_j$. Note that $\limray{\ybar}=\ebar\plusl\limray{\vv}=\ebar_j$ (Proposition~\ref{pr:i:8}\ref{pr:i:8d}). Therefore, for all $\uu\in U\cup W$, $\limray{\ybar}\cdot\uu=\ebar_j\cdot\uu\leq 0$, as argued above, implying $\ybar\cdot\uu\leq 0$. Further, by $j$'s choice, for all $\ww\in W$, $\limray{\ybar}\cdot\ww=\ebar_j\cdot\ww\neq 0$, so $\ybar\cdot\ww\neq 0$, implying $\ybar\cdot\ww<0$. 
Finally, $\what$ and $j$ were chosen so that $\ebar\cdot\what=0$. Therefore, $\ybar\cdot\what=\ebar\cdot\what\plusl\vv\cdot\what=\vv\cdot\what\in\R$, completing the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:ast-def-sep-cone-sing}] The sets $K$ and $V$ are nonempty, convex and disjoint, so by Theorems~\ref{thm:ast-str-sep-equiv-no-trans} and~\ref{thm:ast-def-sep-cvx-sets}, there exists $\zbar\in\extspace$ such that $\zbar\cdot(\uu-\vv)<0$ for all $\uu\in K$ and $\vv\in V$. Since $\zero\in K$, this implies $\zbar\cdot(-\vv)<0$ for all $\vv\in V$. Therefore, by Lemma~\ref{lem:sep-one-pt-finite} (applied with $\xbar$, $U$ and $W$, as they appear in the lemma, set to $\zbar$, $K-V$ and $-V$), there exist $\ybar\in\extspace$ and $\vhat\in V$ such that, for all $\uu\in K$ and $\vv\in V$, $\ybar\cdot(\uu-\vv)\leq 0$ and $\ybar\cdot\vv>0$, and furthermore, $\ybar\cdot\vhat\in\R$. For $\uu\in K$, we thus have that $\ybar\cdot\uu$ and $-\ybar\cdot\vhat$ are summable, so $\ybar\cdot\uu-\ybar\cdot\vhat=\ybar\cdot(\uu-\vhat)\leq 0$ (by Proposition~\ref{pr:i:1} and since $\vhat\in V$). Therefore, for all $\uu\in K$, $\ybar\cdot\uu\leq \ybar\cdot\vhat$. Since $K$ is a cone, this further implies, for all $\lambda\in\Rpos$, that $\lambda\uu\in K$ so that \begin{equation*} \label{eq:thm:ast-def-sep-cone-sing:1} \lambda(\ybar\cdot\uu) = \ybar\cdot(\lambda\uu) \leq \ybar\cdot\vhat. \end{equation*} This means that $\ybar\cdot\uu\leq 0$; otherwise, if $\ybar\cdot\uu>0$, then the left-hand side of this inequality could be made arbitrarily large as $\lambda$ becomes large, while the right-hand side remains bounded (since $\ybar\cdot\vhat\in\R$). Finally, let $\xbar=\limray{\ybar}$. Then for all $\uu\in K$, $\xbar\cdot\uu=\limray{\ybar}\cdot\uu\leq 0$ (since, as just argued, $\ybar\cdot\uu\leq 0$). And for all $\vv\in V$, $\ybar\cdot\vv>0$, implying $\xbar\cdot\vv=\limray{\ybar}\cdot\vv=+\infty$, completing the proof. 
\end{proof} Here are some consequences of Theorem~\ref{thm:ast-def-sep-cone-sing}, analogous to Corollary~\ref{cor:ast-def-sep-conseqs}. In particular, every (standard) convex cone in $\Rn$ is equal to the intersection of all homogeneous astral dual halfspaces that include it. \begin{corollary} \label{cor:ast-def-sep-cone-conseqs} Let $S\subseteq\Rn$. \begin{letter-compact} \item \label{cor:ast-def-sep-cone-conseqs:a} The following are equivalent: \begin{roman-compact} \item \label{cor:ast-def-sep-cone-conseqs:a1} $S$ is a convex cone. \item \label{cor:ast-def-sep-cone-conseqs:a2} $S$ is equal to the intersection of some collection of homogeneous astral dual halfspaces. \item \label{cor:ast-def-sep-cone-conseqs:a3} $S$ is equal to the intersection of all homogeneous astral dual halfspaces that include $S$. \end{roman-compact} \item \label{cor:ast-def-sep-cone-conseqs:b} The conic hull of $S$, $\cone{S}$, is equal to the intersection of all homogeneous astral dual halfspaces that include $S$. \end{letter-compact} \end{corollary} \begin{proof} The proof is analogous to that of Corollary~\ref{cor:ast-def-sep-conseqs}, after making the additional observation that every homogeneous astral dual halfspace, as in \eqref{eqn:ast-def-homo-hfspace-defn}, is itself a convex cone (convex by Proposition~\ref{pr:ast-def-hfspace-convex}, a cone by Proposition~\ref{pr:i:2}). \end{proof} \section{Linking astral cones and standard cones} This chapter looks at some ways in which astral cones can be constructed or operated upon, focusing especially on the linking of convex astral cones with ordinary convex cones in $\Rn$, for instance, how a standard convex cone can be extended to astral space, and what the properties are of that extension. We also study astral polar cone operations that provide a one-to-one correspondence between standard convex cones in $\Rn$ and closed convex astral cones in $\extspace$. 
\subsection{Extending a standard convex cone to astral space} \label{sec:cones:astrons} We begin by considering natural ways in which a standard convex cone $K\subseteq\Rn$ can be extended to form a convex astral cone. As seen already, one way is simply to take its closure, $\Kbar$, which, by Theorem~\ref{thm:out-conic-h-is-closure}, is the same as $\oconich{K}$, its outer conic hull. By Corollary~\ref{cor:sep-ast-cone-conseqs}(\ref{cor:sep-ast-cone-conseqs:a}), this set is the smallest closed convex astral cone that includes $K$. An alternative means of extending $K$ is to take its astral conic hull, $\acone{K}$, which is the smallest convex astral cone (whether closed or not) that includes $K$. In this section, we consider the form of this set. We will see that every point in this set has a representation of a particularly natural and convenient form, namely, one involving only vectors in $K$. To make this precise, we define the following set: \begin{definition} Let $K\subseteq\Rn$ be a convex cone. The \emph{representational closure of $K$}, denoted $\repcl{K}$, is the set of all points in $\extspace$ that can be represented using vectors in $K$; that is, \begin{equation} \label{eq:rep-cl-defn} \repcl{K} = \bigBraces{ \limrays{\vv_1,\ldots,\vv_k}\plusl \qq : \qq,\vv_1,\ldots,\vv_k\in K, k\geq 0 }. \end{equation} \end{definition} Note that the points appearing in \eqref{eq:rep-cl-defn} need {not} be canonical representations. For example, in this notation, Corollary~\ref{cor:h:1} states exactly that $\extspace=\repcl{(\Rn)}$. As suggested above, the astral conic hull of any standard convex cone is the same as its representational closure. More generally, as shown next, the astral conic hull of any set in $\Rn$ is equal to the representational closure of its (standard) conic hull. \begin{theorem} \label{thm:acone-is-repclos} Let $S\subseteq\Rn$. Then \[ \acone{S} = \repcl{(\cone{S})}. \] Consequently, $\cone{S} = (\acone{S})\cap\Rn$. 
\end{theorem} Before proving the theorem, we give a key lemma that is central to its proof. The lemma shows that the topological closure of a finitely generated cone (that is, of the form $\cone{V}$, for some finite set $V\subseteq\Rn$) is always included in its representational closure. \begin{lemma} \label{lem:conv-lmset-char:a} Let $V$ be a finite subset of $\Rn$. Then $\clcone{V}\subseteq \repcl{(\cone V)}$. \end{lemma} \begin{proof} If $V$ is empty, then $\cone V = \{\zero\}$, so $\clcone{V} = \{\zero\} = \repcl{(\cone V)}$. For $V$ nonempty, we prove the result by induction on astral rank. More precisely, we prove, by induction on $k=0,1,\ldots,n$, that for all $\xbar\in\extspace$ and for every finite, nonempty set $V\subseteq\Rn$, if $\xbar$ has astral rank at most $k$ and if $\xbar\in\clcone{V}$ then $\xbar\in\repcl{(\cone V)}$. For the base case that $k=0$, suppose $\xbar$ has astral rank zero and that $\xbar\in\clcone{V}$ where $V\subseteq\Rn$ is finite and nonempty. Then $\xbar$ is equal to some $\qq\in\Rn$. Since $\xbar\in\clcone{V}$, there exists a sequence $\seq{\xx_t}$ in $\cone V$ converging to $\xbar=\qq$. Thus, $\qq\in \cone V$ since $\cone V$ is closed in $\Rn$ by \Cref{roc:thm19.1}. Therefore, $\xbar\in\repcl{(\cone V)}$. For the inductive step, suppose $\xbar$ has astral rank $k>0$ and that $\xbar\in\clcone{V}$ for some $V=\{\zz_1,\ldots,\zz_m\}\subseteq\Rn$ with $m\geq 1$. Furthermore, assume inductively that the claim holds for all points with astral rank strictly less than $k$. Then by Proposition~\ref{pr:h:6}, we can write $\xbar=\limray{\ww}\plusl\xbarperp$ where $\ww$ is $\xbar$'s dominant direction, and $\xbarperp$ is the projection of $\xbar$ perpendicular to $\ww$. Since $\xbar\in\clcone{V}$, there exists a sequence $\seq{\xx_t}$ in $\cone V$ that converges to $\xbar$. 
Since $\norm{\xx_t}\rightarrow+\infty$ (by Proposition~\ref{pr:seq-to-inf-has-inf-len}), we can discard all elements of the sequence with $\xx_t=\zero$, of which there can be at most finitely many. Then by Theorem~\refequiv{thm:dom-dir}{thm:dom-dir:a}{thm:dom-dir:c}, \[ \ww = \lim \frac{\xx_t}{\norm{\xx_t}}. \] Since $\cone V$ is a cone, $\xx_t/\norm{\xx_t}\in\cone V$, for all $t$, which implies that $\ww\in \cone V$ since $\cone V$ is closed in $\Rn$ (\Cref{roc:thm19.1}). By linear algebra, for each $t$, we can write $\xx_t = b_t \ww + \xperpt$ where $\xperpt\cdot\ww=0$ and $b_t=\xx_t\cdot\ww\in\R$. Since $\xx_t\rightarrow\xbar$ and since $\ww$ is $\xbar$'s dominant direction, $\xx_t\cdot\ww\rightarrow \xbar\cdot\ww = +\infty$ (by Theorems~\ref{thm:i:1}\ref{thm:i:1c} and~\ref{thm:dom-dir}\ref{thm:dom-dir:a},\ref{thm:dom-dir:b}). Thus, $b_t$ must be positive for all but finitely many values of $t$; by discarding these, we can assume that $b_t>0$ for all $t$. By \Cref{pr:scc-cone-elts}(\ref{pr:scc-cone-elts:b}), each $\xx_t$, being in $\cone V$, is a nonnegative linear combination of $\zz_1,\ldots,\zz_m$, the points comprising $V$. Therefore, each point $\xperpt=\xx_t-b_t\ww$ is a nonnegative linear combination of the points in the expanded set \[ V' = V \cup \{ -\ww \} = \{ \zz_1,\ldots,\zz_m,\; -\ww \}. \] In other words, $\xperpt\in\cone V'$, for all $t$. Furthermore, $\xbarperp$ must be in the astral closure of this cone, $\clcone{V'}$, since $\xperpt\rightarrow\xbarperp$ (by Proposition~\ref{pr:h:5}\ref{pr:h:5b}). Since $\xbarperp$ has strictly lower astral rank than $\xbar$ (by Proposition~\ref{pr:h:6}), we can apply our inductive assumption which implies that \[ \xbarperp = [\vv'_1,\ldots,\vv'_{k'}]\omm \plusl \qq' \] for some $\qq',\vv'_1,\ldots,\vv'_{k'}\in \cone V'$. 
For $i=1,\ldots,k'$, since $\vv'_i\in \cone V'$, we can write $\vv'_i$ as a nonnegative linear combination over the points in $V'$ so that $\vv'_i=\vv_i - a_i \ww$ where \[ \vv_i = \sum_{j=1}^m c_{ij} \zz_j \] for some $a_i\geq 0$, $c_{ij}\geq 0$, $j=1,\ldots,m$. Note that $\vv_i\in\cone V$. Similarly we can write $\qq'=\qq-b\ww$ for some $\qq\in \cone V$ and $b\geq 0$. Thus, \begin{eqnarray*} \xbar &=& \limray{\ww} \plusl \xbarperp \\ &=& \limray{\ww} \plusl [\vv'_1,\ldots,\vv'_{k'}]\omm \plusl \qq' \\ &=& [\ww,\; \vv'_1,\ldots,\vv'_{k'}]\omm \plusl \qq' \\ &=& [\ww,\; \vv_1,\ldots,\vv_{k'}]\omm \plusl \qq. \end{eqnarray*} The last equality follows from \Cref{pr:g:1}, combined with Proposition~\ref{prop:pos-upper:descr}, since $\vv_i=\vv'_i+a_i\ww$ for $i=1,\ldots,k'$, and since $\qq=\qq'+b\ww$. Since $\ww$, $\qq$, and $\vv_1,\ldots,\vv_{k'}$ are all in $\cone V$, this shows that $\xbar\in\repcl{(\cone V)}$, completing the induction and the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:acone-is-repclos}] Let $K=\cone{S}$. We first prove that $\repcl{K}\subseteq\acone{S}$. The set $(\acone{S})\cap\Rn$ includes $S$ and is a convex cone (by Proposition~\ref{pr:ast-cone-is-naive} and since $\acone{S}$ and $\Rn$ are both convex). Therefore, it must include $S$'s conic hull, so \[ K=\cone{S}\subseteq(\acone{S})\cap\Rn\subseteq\acone{S}. \] This further implies that $\lmset{K}\subseteq\acone{S}$ (again by Proposition~\ref{pr:ast-cone-is-naive}). Since $\acone{S}$ is a convex astral cone, it is closed under sequential sum, by Theorem~\ref{thm:ast-cone-is-cvx-if-sum}. That is, if $\xbar,\ybar\in\acone{S}$, then $\xbar\seqsum\ybar\subseteq\acone{S}$, and so, in particular, $\xbar\plusl\ybar\in\acone{S}$ (by Corollary~\ref{cor:seqsum-conseqs}\ref{cor:seqsum-conseqs:a}). Therefore, $\acone{S}$ is closed under leftward addition. Let $\xbar\in\repcl{K}$. Then $\xbar=\limray{\vv_1}\plusl\dotsb\plusl\limray{\vv_k}\plusl\qq$ for some $\qq,\vv_1,\ldots,\vv_k\in K$. 
By the foregoing, $\qq$ and $\limray{\vv_i}$, for $i=1,\ldots,k$, are all in $\acone{S}$, implying $\xbar$ is as well. Therefore, $\repcl{K}\subseteq\acone{S}$. For the reverse inclusion, let $\xbar\in\acone{S}$. Then by Theorem~\ref{thm:ast-conic-hull-union-fin}, $\xbar\in\oconich{V}$ for some finite subset $V\subseteq S$. We then have \[ \xbar \in \oconich{V} = \clcone{V} \subseteq \repcl{(\cone V)} \subseteq \repcl{(\cone S)} = \repcl{K}. \] The first equality is by Theorem~\ref{thm:out-conic-h-is-closure}. The inclusion $\clcone{V}\subseteq\repcl{(\cone V)}$ is by Lemma~\ref{lem:conv-lmset-char:a}; the inclusion $\repcl{(\cone V)}\subseteq\repcl{(\cone S)}$ holds because $V\subseteq S$, a relation that is preserved when generating cones and taking representational closures; and the final equality is by the definition of $K$. Thus, $\acone{S} \subseteq \repcl{K}$, completing the proof. \end{proof} As discussed earlier, there are two natural ways of extending a convex cone $K\subseteq\Rn$ to astral space: The first is the astral conic hull of $K$, the smallest convex astral cone that includes $K$, which is the same as $K$'s representational closure. The other is the topological closure of $K$, the smallest closed set that includes $K$, which is the same as $K$'s outer conic hull, and so is itself a convex astral cone; it therefore includes $K$'s astral conic hull. We summarize these relations in the next corollary. Later, in Section~\ref{sec:repcl-and-polyhedral}, we will give necessary and sufficient conditions for when these two constructions yield the same convex astral cones, that is, for when $\repcl{K}=\Kbar$. \begin{corollary} \label{cor:a:1} Let $K\subseteq\Rn$ be a convex cone. Then $\repcl{K}$ is a convex astral cone, and \[ K \subseteq \repcl{K} = \acone{K} = \conv(\lmset{K}) \subseteq \oconich{K} = \Kbar. \] \end{corollary} \begin{proof} Since $K$ is already a convex cone, $\cone K = K$. The first equality therefore follows from Theorem~\ref{thm:acone-is-repclos} (which also shows that $\repcl{K}$ is a convex astral cone). 
The second equality is from Theorem~\ref{thm:acone-char}(\ref{thm:acone-char:a}) (and since $\zero\in K$). The second inclusion is by Proposition~\ref{pr:acone-hull-props}(\ref{pr:acone-hull-props:d}), and the final equality is by Theorem~\ref{thm:out-conic-h-is-closure}. \end{proof} Suppose $S\subseteq\Rn$ includes the origin. Then as seen in Theorem~\ref{thm:acone-char}(\ref{thm:acone-char:a}), the astral conic hull of $S$ is equal to the convex hull of the set of all astrons associated with points in $S$. Theorem~\ref{thm:acone-is-repclos} therefore implies that $\conv(\lmset{S})=\repcl{(\cone{S})}$. In other words, the theorem succinctly characterizes the representations of all points in $\conv(\lmset{S})$. As we show in the next theorem, this can be generalized to provide a similarly simple characterization for the convex hull of any set of astrons associated with a set $S\subseteq\Rn$, whether or not that set includes the origin. In particular, we show that if the origin is in the convex hull of $S$, then $\conv(\lmset{S})$ is equal to $\repcl{(\cone{S})}$. Otherwise, if the origin is not in $S$'s convex hull, then $\conv(\lmset{S})$ consists of just the infinite points in $\repcl{(\cone{S})}$. \begin{example} In \Cref{ex:seg-oe1-oe2}, we gave a somewhat involved calculation of the representations of all points on the segment joining $\limray{\ee_1}$ and $\limray{\ee_2}$ in $\extspac{2}$. That segment, which is the same as the convex hull of the two astrons, can now be computed directly from \Cref{thm:conv-lmset-char} with $S=\{\ee_1,\ee_2\}$ so that $\cone{S}=\Rpos^2$, yielding \[ \lb{\limray{\ee_1}}{\limray{\ee_2}} = \conv{\{\limray{\ee_1},\limray{\ee_2}\}} = \repcl{(\Rpos^2)}\setminus\R^2. \] Thus, this segment consists of all infinite points in $\extspac{2}$ with representations that only involve vectors in $\Rpos^2$. Although these representations are not canonical as they were in \Cref{ex:seg-oe1-oe2}, they nonetheless represent the same set of points. 
\end{example} \begin{theorem} \label{thm:conv-lmset-char} Let $S\subseteq\Rn$. \begin{letter-compact} \item \label{thm:conv-lmset-char:a} If $\zero\in\conv{S}$, then $ \conv(\lmset{S}) = \repcl{(\cone{S})} $. \item \label{thm:conv-lmset-char:b} If $\zero\not\in\conv{S}$, then $ \conv(\lmset{S}) = \repcl{(\cone{S})} \setminus \Rn$. \end{letter-compact} \end{theorem} Before proving the theorem, we first give a simple lemma to be used in its proof (stated in a bit more generality than is needed at this point). \begin{lemma} \label{lem:conv-lmset-disj-rn} Let $S\subseteq\extspace$. \begin{letter-compact} \item \label{lem:conv-lmset-disj-rn:a} If $\zero\in\conv{S}$, then $\zero\in\conv{(\lmset{S})}$. \item \label{lem:conv-lmset-disj-rn:b} If $\zero\not\in\conv{S}$, then $\conv{(\lmset{S})}\cap\Rn=\emptyset$. \end{letter-compact} \end{lemma} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{lem:conv-lmset-disj-rn:a}):} Suppose $\zero\in\conv{S}$. Then by Theorem~\ref{thm:convhull-of-simpices}, $\zero\in\ohull{V}$ for some finite subset $V\subseteq S$. Let $\uu\in\Rn$. Then \[ 0 = \zero\cdot\uu \leq \max\Braces{\xbar\cdot\uu : \xbar\in V} \] by \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}). Thus, for some $\xbar\in V$, $\xbar\cdot\uu\geq 0$, implying $\limray{\xbar}\cdot\uu\geq 0$. Therefore, \[ \zero\cdot\uu = 0 \leq \max\Braces{\limray{\xbar}\cdot\uu : \xbar\in V}, \] so $\zero\in\ohull{(\lmset{V})}\subseteq\conv{(\lmset{S})}$ (again by \Cref{pr:ohull-simplify}\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b} and Theorem~\ref{thm:convhull-of-simpices}). \pfpart{Part~(\ref{lem:conv-lmset-disj-rn:b}):} We prove the contrapositive with a proof similar to the preceding part. Suppose there exists a point $\qq\in\conv(\lmset{S})\cap\Rn$; we aim to show this implies $\zero\in\conv{S}$. By Theorem~\ref{thm:convhull-of-simpices}, $\qq\in \ohull(\lmset{V})$ for some finite subset $V\subseteq S$. Let $\uu\in\Rn$. 
Then by \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}), \begin{equation} \label{eq:h:8a} \qq\cdot\uu \leq \max\Braces{\limray{\xbar}\cdot\uu : \xbar\in V}. \end{equation} This implies that \begin{equation*} \label{eq:h:8b} \zero\cdot\uu = 0 \leq \max\Braces{\xbar\cdot\uu : \xbar\in V}, \end{equation*} since otherwise, if $\xbar\cdot\uu<0$ for all $\xbar\in V$, then the right-hand side of \eqref{eq:h:8a} would be equal to $-\infty$, which is impossible since $\qq\cdot\uu\in\R$. Again applying \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}) and Theorem~\ref{thm:convhull-of-simpices}, this shows that $\zero\in\ohull{V}\subseteq\conv{S}$. \qedhere \end{proof-parts} \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:conv-lmset-char}] ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:conv-lmset-char:a}):} Suppose $\zero\in\conv{S}$. Then \[ \conv(\lmset{S}) = \conv{(\{\zero\}\cup\lmset{S})} = \acone{S} = \repcl{(\cone{S})}. \] The first equality is by Proposition~\ref{pr:conhull-prop}(\ref{pr:conhull-prop:c}) since, by Lemma~\ref{lem:conv-lmset-disj-rn}(\ref{lem:conv-lmset-disj-rn:a}), $\zero\in\conv(\lmset{S})$. The second equality is by Theorem~\ref{thm:acone-char}(\ref{thm:acone-char:a}), and the third is by Theorem~\ref{thm:acone-is-repclos}. \pfpart{Part~(\ref{thm:conv-lmset-char:b}):} Suppose $\zero\not\in\conv{S}$. Then, similar to the foregoing, \[ \conv(\lmset{S}) \subseteq \conv{(\{\zero\}\cup\lmset{S})} = \acone{S} = \repcl{(\cone{S})}, \] where the equalities follow again from Theorems~\ref{thm:acone-char}(\ref{thm:acone-char:a}) and~\ref{thm:acone-is-repclos}. Also, by Lemma~\ref{lem:conv-lmset-disj-rn}(\ref{lem:conv-lmset-disj-rn:b}), $\conv(\lmset{S})\cap\Rn=\emptyset$. Therefore, $\conv(\lmset{S})\subseteq\repcl{(\cone{S})}\setminus\Rn$. For the reverse inclusion, let $\zbar\in\repcl{(\cone{S})}\setminus\Rn$, which we aim to show is in $\conv(\lmset{S})$. 
Then by Theorem~\ref{thm:acone-is-repclos}, $\zbar\in\acone{S}$, implying, by Theorem~\ref{thm:ast-conic-hull-union-fin}, that $\zbar\in\oconich{V}$ for some set $V=\{\xx_1,\ldots,\xx_m\}\subseteq S$. Note that each $\xx_i$ is in $\Rn\setminus\{\zero\}$ since $S\subseteq\Rn$ and $\zero\not\in\conv{S}$. Therefore, by Theorem~\ref{thm:oconic-hull-and-seqs}, there exist sequences $\seq{\lambda_{it}}$ in $\Rpos$ and span-bound sequences $\seq{\xx_{it}}$ in $\Rn$ such that $\xx_{it}\rightarrow\xx_i$ for $i=1,\ldots,m$, and $\zz_t=\sum_{i=1}^m \lambda_{it} \xx_{it} \rightarrow \zbar$. Since, for each $i$, the sequence $\seq{\xx_{it}}$ is span-bound, each element $\xx_{it}$ is in $\rspanset{\xx_i}=\spn\{\xx_i\}$; thus, $\xx_{it}=\gamma_{it}\xx_i$ for some $\gamma_{it}\in\R$. Furthermore, since $\xx_{it}\rightarrow\xx_i$ and $\xx_i\neq\zero$, we must have $\gamma_{it}\rightarrow 1$. Thus, without loss of generality, we can assume henceforth that $\gamma_{it}>0$ for all $i$ and all $t$, since all other elements can be discarded. For each $t$ and $i=1,\ldots,m$, let $\lambda'_{it}=\lambda_{it} \gamma_{it}$, which is nonnegative since $\lambda_{it}$ and $\gamma_{it}$ are. We then have that \[ \zz_t = \sum_{i=1}^m \lambda'_{it} \xx_i. \] Also, let \begin{equation} \label{eq:thm:conv-lmset-char:2} \sigma_t = \sum_{i=1}^m \lambda'_{it}. \end{equation} We claim that $\sigma_t\rightarrow+\infty$. Otherwise, if $\sigma_t$ remains bounded on any subsequence, then $\zz_t$ would also remain in a compact subregion of $\Rn$ on that subsequence, implying that $\zbar$, its limit, is in that same subregion, a contradiction since $\zbar\not\in\Rn$. As a consequence, we can assume $\sigma_t>0$ for all $t$ since all other elements can be discarded. Rewriting \eqref{eq:thm:conv-lmset-char:2}, we then have that \[ \zz_t = \sum_{i=1}^m \Parens{\frac{\lambda'_{it}}{\sigma_t}} (\sigma_t \xx_i). 
\] Since $\sigma_t \xx_i\rightarrow\limray{\xx_i}$ for all $i$ (by Theorem~\ref{thm:i:seq-rep}), and since $\sum_{i=1}^m (\lambda'_{it}/\sigma_t)=1$ for all $t$, this then implies that \[ \zbar \in \ohull\{\limray{\xx_1},\ldots,\limray{\xx_m}\} \subseteq \conv(\lmset{S}), \] where the membership is by Theorem~\ref{thm:e:7} and the inclusion is by Theorem~\ref{thm:convhull-of-simpices}, completing the proof. \qedhere \end{proof-parts} \end{proof} \subsection{The astral conic hull of a polyhedral cone} \label{sec:repcl-and-polyhedral} In the last section, we discussed two possible extensions of a convex cone $K\subseteq\Rn$ to astral space. The first is $\repcl{K}$, its representational closure, which is the same as its astral conic hull (by Theorem~\ref{thm:acone-is-repclos}). The other is $\Kbar$, its topological closure, which is the same as its outer conic hull (by Theorem~\ref{thm:out-conic-h-is-closure}). In this section, we consider when these extensions, both convex astral cones, are the same. We begin with an example showing that it is indeed possible for them to be different: \begin{example} Let $K$ be the subset of $\R^3$ given in \eqref{eqn:bad-set-eg-K}, which was studied as part of Example~\ref{ex:KLsets:extsum-not-sum-exts}. It can be checked that $K$ is a closed (in $\R^3$) convex cone. As before, let $\xbar=\limray{\ee_2}\plusl\ee_1$. It was earlier argued that $\xbar\in\Kbar$. On the other hand, $\xbar\not\in\repcl{K}$. To see this, observe first that if $\xx\in K$ and $x_3=0$ then it must also be the case that $x_1=0$. Suppose $\xbar\in\repcl{K}$ so that $\xbar=\limrays{\vv_1,\ldots,\vv_k}\plusl\qq$ for some $\vv_1,\ldots,\vv_k,\qq\in K$. Note that $\xbar\cdot\ee_3=0$. This implies that $\vv_i\cdot\ee_3=0$ for $i=1,\ldots,k$, and so also that $\qq\cdot\ee_3=0$ (by Proposition~\ref{pr:vtransu-zero}). But then, by the preceding observation, it must also be the case that $\qq\cdot\ee_1=0$, and that $\vv_i\cdot\ee_1=0$ for $i=1,\ldots,k$. 
These imply that $\xbar\cdot\ee_1=0$, a contradiction, since in fact, $\xbar\cdot\ee_1=1$. We conclude that $\repcl{K}\neq\Kbar$ in this case. \end{example} Thus, it is possible for $\repcl{K}$ and $\Kbar$ to be different, but it is also possible for them to be the same. Indeed, if $K$ is {finitely generated}, that is, if $K=\cone V$ for some finite set $V\subseteq\Rn$, then Lemma~\ref{lem:conv-lmset-char:a}, together with Corollary~\ref{cor:a:1}, proves that $\repcl{K}$ and $\Kbar$ must be equal, implying also that $\repcl{K}$ must be closed. In fact, as we show in the next theorem, these properties of $K$ and its extensions to astral space turn out to be equivalent; that is, $\repcl{K}$ is closed and equal to $\Kbar$ if and only if $K$ is finitely generated. Later, in Section~\ref{subsec:charact:cont:duality}, this fact will turn out to be especially important in characterizing the continuity of a function's extension. As seen in \Cref{roc:thm19.1}, a convex set in $\Rn$ is finitely generated if and only if it is {polyhedral}, meaning that the set is the intersection of finitely many closed halfspaces. Thus, when discussing convex sets in $\Rn$, particularly convex cones, we can use these terms interchangeably. \begin{theorem} \label{thm:repcl-polyhedral-cone} Let $K\subseteq\Rn$ be a convex cone. Then the following are equivalent: \begin{letter-compact} \item \label{thm:repcl-polyhedral-cone:a} $\repcl{K}$ is closed (in $\extspace$). \item \label{thm:repcl-polyhedral-cone:b} $\repcl{K}=\Kbar$. \item \label{thm:repcl-polyhedral-cone:c} $K$ is finitely generated (or equivalently, polyhedral); that is, $K=\cone V$ for some finite set $V\subseteq\Rn$. \end{letter-compact} \end{theorem} \begin{proof} That (\ref{thm:repcl-polyhedral-cone:b}) implies (\ref{thm:repcl-polyhedral-cone:a}) is immediate. Corollary~\ref{cor:a:1} shows that $\repcl{K}\subseteq\Kbar$ in general.
If $K$ is finitely generated, then Lemma~\ref{lem:conv-lmset-char:a} proves $\Kbar\subseteq\repcl{K}$, thus proving (\ref{thm:repcl-polyhedral-cone:c}) implies (\ref{thm:repcl-polyhedral-cone:b}). In the remainder of the proof, we show that (\ref{thm:repcl-polyhedral-cone:a}) implies (\ref{thm:repcl-polyhedral-cone:c}). We assume henceforth that $\repcl{K}$ is closed, and therefore compact, being a closed subset of the compact space $\extspace$. To prove the result, we construct an open cover of $\repcl{K}$, which, since $\repcl{K}$ is compact, must include a finite subcover. From this, we show that a finite set of points can be extracted that are sufficient to generate the cone $K$. We make use of the open sets $\Uv$ that were shown to exist in Theorem~\ref{thm:formerly-lem:h:1:new}. For $\vv\in\Rn$, recall that the set $\Uv$ includes the astron $\limray{\vv}$, and is entirely contained in $\Rn \cup [\limray{\vv}\plusl\Rn]$, meaning all points in $\Uv$ are either in $\Rn$ or have the form $\limray{\vv}\plusl\qq$ for some $\qq\in\Rn$. Slightly overloading notation, for any set $S\subseteq\Rn$, we further define $\US$ to be the convex hull of the union of all sets $\Uv$ over $\vv\in S$: \[ \US = \conv\paren{\bigcup_{\vv\in S} \Uv}. \] The parenthesized union is open (being the union of open sets), implying, by Corollary~\ref{cor:convhull-open}, that its convex hull, $\US$, is also open, for all $S\subseteq\Rn$. Using compactness, we show next that $\repcl{K}$ is included in one of these sets $\UV$ for some {finite} set $V\subseteq K$. First, for all $\vv\in\Rn$, $\limray{\vv}\in\Uv$; therefore, for all $S\subseteq\Rn$, $\lmset{S}\subseteq\US$, and so $\conv(\lmset{S})\subseteq\US$ by Proposition~\ref{pr:conhull-prop}(\ref{pr:conhull-prop:aa}) since $\US$ is convex. As a result, we can cover $\repcl{K}$ using the collection of all open sets $\UV$ for all finite subsets $V\subseteq K$.
This is a cover because \[ \repcl{K} = \conv(\lmset{K}) = \bigcup_{\scriptontop{V\subseteq K:}{|V|<+\infty}} \conv(\lmset{V}) \subseteq \bigcup_{\scriptontop{V\subseteq K:}{|V|<+\infty}} \UV. \] The equalities follow respectively from Corollary~\ref{cor:a:1} and Theorem~\ref{thm:convhull-of-simpices} (and also by Proposition~\ref{pr:conhull-prop}\ref{pr:conhull-prop:a}). Since $\repcl{K}$ is compact, there exists a finite subcover, that is, a finite collection of sets $V_1,\ldots,V_\ell$ where each $V_j$ is a finite subset of $K$, for $j=1,\ldots,\ell$, and such that \begin{equation} \label{eqn:thm:repcl-polyhedral-cone:1} \repcl{K} \subseteq \bigcup_{j=1}^{\ell} \UVj. \end{equation} Let \[ V=\{\zero\}\cup \bigcup_{j=1}^{\ell} V_j \] be their union, along with the origin (added for later convenience), which is also a finite subset of $K$. Furthermore, \eqref{eqn:thm:repcl-polyhedral-cone:1} implies $\repcl{K}\subseteq \UV$ since, for $j=1,\ldots,\ell$, $V_j\subseteq V$, implying $\UVj\subseteq\UV$ by Proposition~\ref{pr:conhull-prop}(\ref{pr:conhull-prop:b}). Summarizing, $\repcl{K}\subseteq \UV$, where $\zero\in V\subseteq K$ and $|V|<+\infty$. Let $\Khat=\cone V$ be the cone generated by $V$. To complete the proof that $K$ is finitely generated, we show that $K=\Khat$. Actually, $\Khat=\cone V\subseteq K$ since $V$ is included in $K$, which is a convex cone. So it remains only to prove that $K\subseteq\Khat$. Let $\ww$ be any point in $K$, which we aim to show is in $\Khat$. Since $\ww\in K$, \[ \limray{\ww}\in\repcl{K}\subseteq \UV = \conv\paren{\bigcup_{\vv\in V} \Uv}. \] Therefore, by Theorem~\ref{thm:convhull-of-simpices}, \begin{equation} \label{eqn:thm:repcl-polyhedral-cone:5} \limray{\ww} \in \simplex{ \{ \xbar_1,\ldots,\xbar_m \} } \end{equation} for some $\xbar_1,\ldots,\xbar_m\in\bigcup_{\vv\in V} \Uv$. 
From the form of points in $\Uv$ (Theorem~\ref{thm:formerly-lem:h:1:new}\ref{thm:formerly-lem:h:1:b:new}), this means, for $j=1,\ldots,m$, that we can write $\xbar_j=\limray{\vv_j}\plusl\qq_j$ for some $\vv_j\in V$ and some $\qq_j\in\Rn$. (Note that this takes into account the possibility that $\xbar_j$ might be in $\Rn$ since in that case we can choose $\vv_j=\zero$, which is in $V$.) We claim that $\ww$ is in the polar (defined in Eq.~\ref{eqn:polar-def}) of the polar of $\Khat$, that is, that $\ww$ is in $(\Khatpol)^{\circ}=\Khatdubpol$, which is the same as $\Khat$. To see this, let $\uu$ be any point in $\Khatpol$. Then in light of \eqref{eqn:thm:repcl-polyhedral-cone:5}, \Cref{pr:ohull-simplify}(\ref{pr:ohull-simplify:a},\ref{pr:ohull-simplify:b}) implies that \begin{equation} \label{eqn:thm:repcl-polyhedral-cone:4} \limray{\ww}\cdot\uu \leq \max\{ \xbar_1\cdot\uu,\ldots,\xbar_m\cdot\uu \}. \end{equation} Also, for $j=1,\ldots,m$, $\vv_j\in V \subseteq \cone V = \Khat$, so $\vv_j\cdot\uu \leq 0$. Therefore, \[ \xbar_j\cdot\uu = \limray{\vv_j}\cdot\uu \plusl \qq_j\cdot\uu < +\infty \] since $\limray{\vv_j}\cdot\uu\leq 0$ and $\qq_j\cdot\uu\in\R$. Combined with \eqref{eqn:thm:repcl-polyhedral-cone:4}, this means $\limray{\ww}\cdot\uu<+\infty$, and therefore $\limray{\ww}\cdot\uu\leq 0$ (since $\limray{\ww}$ is an icon) so $\ww\cdot\uu\leq 0$. Since this holds for all $\uu\in\Khatpol$, it follows that $\ww\in\Khatdubpol$, and so that $K\subseteq\Khatdubpol$. Furthermore, because $\Khat$ is a finitely generated convex cone in $\Rn$, it must be closed in $\Rn$ (\Cref{roc:thm19.1}). Therefore, $\Khatdubpol=\Khat$ (by \Cref{pr:polar-props}\ref{pr:polar-props:c}). Thus, $K=\Khat$, so $K$ is finitely generated. \end{proof} Applied to linear subspaces, Theorem~\ref{thm:repcl-polyhedral-cone} yields the following corollary: \begin{corollary} \label{cor:lin-sub-bar-is-repcl} Let $L\subseteq\Rn$ be a linear subspace. Then $\Lbar=\repcl{L}$. \end{corollary} \begin{proof} Let $B\subseteq\Rn$ be any basis for $L$.
Then $|B|$ is finite and $L=\spn{B}=\cone(B\cup -B)$ (by Proposition~\ref{pr:scc-cone-elts}\ref{pr:scc-cone-elts:span}). Thus, $L$ is a finitely generated convex cone. The claim then follows directly from Theorem~\ref{thm:repcl-polyhedral-cone}. \end{proof} \subsection{Astral polar cones} \label{sec:ast-pol-cones} We next study natural extensions of the standard polar cone defined in \eqref{eqn:polar-def} to astral space. In later chapters, we will see applications of such astral polar cones, for instance, in the critical role they play in the minimization and continuity properties of convex functions. Recall from Section~\ref{sec:prelim:cones} that the standard polar $\Kpol$ of a convex cone $K\subseteq\Rn$ is the set \begin{equation} \label{eq:std-pol-cone-rvw} \Kpol = \Braces{\uu\in\Rn : \xx\cdot\uu\leq 0 \mbox{ for all } \xx\in K}. \end{equation} We will consider two extensions of this notion. In the first of these, we simply extend this definition to apply to any naive cone in $\extspace$ (including any astral cone or any standard cone in $\Rn$): \begin{definition} Let $K\subseteq\extspace$ be a naive cone. Then the \emph{polar} (or \emph{polar cone}) of $K$, denoted $\Kpol$, is the set \begin{equation} \label{eq:pol-cone-ext-defn} \Kpol = \Braces{\uu\in\Rn : \xbar\cdot\uu\leq 0 \mbox{ for all } \xbar\in K}. \end{equation} \end{definition} Clearly, if $K\subseteq\Rn$, then this definition matches the one given in \eqref{eq:std-pol-cone-rvw}, so the new definition of $\Kpol$ subsumes and is consistent with the old one for standard convex cones. In the polar-cone definition given in \eqref{eq:pol-cone-ext-defn}, $K$ is a subset of $\extspace$, and its polar, $\Kpol$, is in $\Rn$. We also consider a dual notion, which can be viewed as another kind of generalization of the standard polar cone. In this definition, $K$ is in $\Rn$, and $\apol{K}$ is in $\extspace$. \begin{definition} Let $K\subseteq\Rn$ be a cone.
Then the \emph{astral polar} (or \emph{astral polar cone}) of $K$, denoted $\apol{K}$, is the set \[ \apol{K} = \Braces{\xbar\in\extspace : \xbar\cdot\uu\leq 0 \mbox{ for all } \uu\in K }. \] \end{definition} Here are some properties of each of these operations: \begin{proposition} \label{pr:ast-pol-props} Let $J,K\subseteq\Rn$ be cones. Then the following hold: \begin{letter-compact} \item \label{pr:ast-pol-props:c} $\apol{K}$ is a closed (in $\extspace$) convex astral cone. \item \label{pr:ast-pol-props:b} If $J\subseteq K$, then $\apol{K}\subseteq\apol{J}$. \item \label{pr:ast-pol-props:f} $\apol{K}=\apol{(\cone{K})}$. \item \label{pr:ast-pol-props:e} $\apol{(J+K)}=\Japol\cap\Kapol$. \item \label{pr:ast-pol-props:a} $\Kpol=\apol{K}\cap \Rn$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:ast-pol-props:c}):} The set $\apol{K}$ can be expressed as \[ \apol{K} = \bigcap_{\uu\in K} \Braces{\xbar\in\extspace : \xbar\cdot\uu\leq 0}. \] Each set in this intersection is either a homogeneous astral closed halfspace or all of $\extspace$ (if $\uu=\zero$). Therefore, $\apol{K}$ is a closed convex astral cone by Proposition~\ref{pr:astral-cone-props}(\ref{pr:astral-cone-props:e}). \pfpart{Part~(\ref{pr:ast-pol-props:b}):} If $\xbar\in\apol{K}$ then $\xbar\cdot\uu\leq 0$ for all $\uu\in K$, and therefore all $\uu\in J$; thus, $\xbar\in\apol{J}$. \pfpart{Part~(\ref{pr:ast-pol-props:f}):} From part~(\ref{pr:ast-pol-props:b}), $\apol{(\cone{K})} \subseteq \apol{K}$ since $K\subseteq\cone{K}$. For the reverse inclusion, suppose $\xbar\in\apol{K}$, meaning $\xbar\cdot\uu\leq 0$ for $\uu\in K$, and so that $K\subseteq C$ where $C=\{\uu\in\Rn : \xbar\cdot\uu \leq 0\}$. The set $C$ is a convex cone by Proposition~\ref{pr:ast-def-hfspace-convex}. Therefore, $\cone{K}\subseteq C$, implying $\xbar\in\apol{(\cone{K})}$. Thus, $\apol{K}=\apol{(\cone{K})}$.
\pfpart{Part~(\ref{pr:ast-pol-props:e}):} Since $\zero\in K$, $J\subseteq J+K$, so $\apol{(J+K)}\subseteq\Japol$ by part~(\ref{pr:ast-pol-props:b}). Similarly, $\apol{(J+K)}\subseteq\Kapol$, so $\apol{(J+K)}\subseteq\Japol\cap\Kapol$. For the reverse inclusion, let $\xbar\in\Japol\cap\Kapol$. Let $\uu\in J$ and $\vv\in K$. Then, by definition of polar, $\xbar\cdot\uu\leq 0$ and $\xbar\cdot\vv\leq 0$. Thus, $\xbar\cdot\uu$ and $\xbar\cdot\vv$ are summable, so $\xbar\cdot(\uu+\vv) = \xbar\cdot\uu + \xbar\cdot\vv \leq 0$ by Proposition~\ref{pr:i:1}. Since this holds for all $\uu\in J$ and $\vv\in K$, it follows that $\xbar\in\apol{(J+K)}$, proving the claim. \pfpart{Part~(\ref{pr:ast-pol-props:a}):} This follows immediately from definitions. \qedhere \end{proof-parts} \end{proof} \begin{proposition} \label{pr:ext-pol-cone-props} Let $J,K\subseteq\extspace$ be naive cones. Then: \begin{letter-compact} \item \label{pr:ext-pol-cone-props:a} $\Kpol$ is a convex cone in $\Rn$. \item \label{pr:ext-pol-cone-props:d} If $J\subseteq K$ then $\Kpol\subseteq\Jpol$. \item \label{pr:ext-pol-cone-props:b} $\Kpol=\polar{(\Kbar)}=\polar{(\oconich{K})}$. \item \label{pr:ext-pol-cone-props:e} $\polar{(J\seqsum K)}=\Jpol\cap\Kpol$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:ext-pol-cone-props:a}):} We can write \[ \Kpol = \bigcap_{\xbar\in K} \Braces{\uu\in\Rn : \xbar\cdot\uu\leq 0}. \] Each of the sets appearing on the right is a convex cone by Proposition~\ref{pr:ast-def-hfspace-convex}. Therefore, $\Kpol$ is a convex cone. \pfpart{Part~(\ref{pr:ext-pol-cone-props:d}):} Proof is analogous to that of Proposition~\ref{pr:ast-pol-props}(\ref{pr:ast-pol-props:b}). \pfpart{Part~(\ref{pr:ext-pol-cone-props:b}):} We show first that $\Kpol\subseteq\polar{(\oconich{K})}$. Let $\uu\in\Kpol$. If $\uu=\zero$ then it is also in $\polar{(\oconich{K})}$, being a cone (by part~\ref{pr:ext-pol-cone-props:a}). 
Otherwise, $\xbar\cdot\uu\leq 0$ for all $\xbar\in K$, meaning $K\subseteq H$ where $H=\{\xbar\in\extspace : \xbar\cdot\uu \leq 0\}$. Since $H$ is a homogeneous closed astral halfspace that includes $K$, this implies $\oconich{K}\subseteq H$ by definition of outer conic hull. Therefore, $\uu\in\polar{(\oconich{K})}$. Thus, \[ \Kpol \subseteq \polar{(\oconich{K})} \subseteq \polar{(\Kbar)} \subseteq \Kpol. \] The first inclusion is by the preceding argument. The others are from part~(\ref{pr:ext-pol-cone-props:d}) since $K\subseteq\Kbar\subseteq\oconich{K}$ (since $\oconich{K}$ is closed and includes $K$). (We also note that $\Kbar$ is a naive cone since $K$ is. This is because if $\xbar\in\Kbar$ then there exists a sequence $\seq{\xbar_t}$ in $K$ with $\xbar_t\rightarrow\xbar$, implying for all $\lambda\in\Rpos$ that $\lambda\xbar_t\rightarrow\lambda\xbar$ and thus that $\lambda\xbar\in\Kbar$ since each $\lambda\xbar_t\in K$.) \pfpart{Part~(\ref{pr:ext-pol-cone-props:e}):} That $\polar{(J\seqsum K)}\subseteq\Jpol\cap\Kpol$ follows by an argument analogous to that given in proving Proposition~\ref{pr:ast-pol-props}(\ref{pr:ast-pol-props:e}). For the reverse inclusion, suppose $\uu\in\Jpol\cap\Kpol$. Let $\zbar\in J\seqsum K$. Then $\zbar\in\xbar\seqsum\ybar$ for some $\xbar\in J$ and $\ybar\in K$, implying $\xbar\cdot\uu\leq 0$ and $\ybar\cdot\uu\leq 0$ (since $\uu\in\Jpol\cap\Kpol$), and so that $\xbar\cdot\uu$ and $\ybar\cdot\uu$ are summable. Therefore, $\zbar\cdot\uu = \xbar\cdot\uu + \ybar\cdot\uu \leq 0$ by Corollary~\ref{cor:seqsum-conseqs}(\ref{cor:seqsum-conseqs:b}). Thus, $\uu\in\polar{(J\seqsum K)}$, completing the proof. \qedhere \end{proof-parts} \end{proof} Applying the standard polar operation twice to a convex cone $K\subseteq\Rn$ yields its closure; that is, $\dubpolar{K}=\cl K$ (\Cref{pr:polar-props}\ref{pr:polar-props:c}). More generally, it can be shown that if $K\subseteq\Rn$ is a cone (not necessarily convex), then $\dubpolar{K}=\cl(\cone{K})$. 
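As a simple illustration of this last fact, we work out the double polar of a non-convex cone:
\begin{example}
In $\R^2$, let $K$ be the non-convex cone consisting of the two nonnegative coordinate rays,
\[ K = \{ \lambda \ee_1 : \lambda\in\Rpos\} \cup \{ \lambda \ee_2 : \lambda\in\Rpos\}. \]
A point $\uu\in\R^2$ satisfies $\xx\cdot\uu\leq 0$ for all $\xx\in K$ if and only if $\ee_1\cdot\uu\leq 0$ and $\ee_2\cdot\uu\leq 0$; thus, $\Kpol$ is the nonpositive quadrant,
\[ \Kpol = \{\uu\in\R^2 : u_1\leq 0,\, u_2\leq 0\}. \]
Taking the polar once more yields the nonnegative quadrant,
\[ \dubpolar{K} = \{\xx\in\R^2 : x_1\geq 0,\, x_2\geq 0\}, \]
which is exactly $\cone{K}$, the conic hull of the two rays; since this set is closed in $\R^2$, indeed $\dubpolar{K}=\cl(\cone{K})$.
\end{example}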
We next consider the analogous effect of applying these astral polar operations twice: \begin{theorem} \label{thm:dub-ast-polar} ~ \begin{letter-compact} \item \label{thm:dub-ast-polar:a} Let $J\subseteq\extspace$ be a naive cone. Then $\Jpolapol=\oconich{J}$. Therefore, if $J$ is a closed convex astral cone, then $\Jpolapol=J$. \item \label{thm:dub-ast-polar:b} Let $K\subseteq\Rn$ be a cone. Then $\Kapolpol=\cone{K}$. Therefore, if $K$ is a convex cone, then $\Kapolpol=K$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:dub-ast-polar:a}):} For $\uu\in\Rn\setminus\{\zero\}$, let $\chsuz$ be a homogeneous astral closed halfspace, as in \eqref{eqn:homo-halfspace}. Then \begin{align*} \Jpolapol &= \bigcap_{\uu\in\Jpol} \Braces{\xbar\in\extspace : \xbar\cdot\uu\leq 0} \\ &= \bigcap \bigBraces{\chsuz :\: \uu\in\Rn\setminus\{\zero\}, J\subseteq\chsuz} = \oconich{J}. \end{align*} The first equality is by definition of astral polar, and the third is by definition of outer conic hull. The second equality is because, for $\uu\in\Rn\setminus\{\zero\}$, $\uu\in\Jpol$ if and only if $J\subseteq\chsuz$ (and because $\{\xbar\in\extspace : \xbar\cdot\uu\leq 0\}=\extspace$ if $\uu=\zero$). Thus, if $J$ is a closed convex astral cone, then $\Jpolapol=\oconich{J}=J$ by Corollary~\ref{cor:sep-ast-cone-conseqs}(\ref{cor:sep-ast-cone-conseqs:b}). \pfpart{Part~(\ref{thm:dub-ast-polar:b}):} First, if $\uu\in K$ then $\xbar\cdot\uu\leq 0$ for all $\xbar\in\Kapol$, by its definition, so $\uu\in\Kapolpol$. Therefore, $K$ is included in $\Kapolpol$, which is a convex cone, by Proposition~\ref{pr:ext-pol-cone-props}(\ref{pr:ext-pol-cone-props:a}). Thus, $\cone{K}\subseteq\Kapolpol$. For the reverse inclusion, suppose $\vv\not\in \cone{K}$. Then, by Theorem~\ref{thm:ast-def-sep-cone-sing}, there exists $\xbar\in\extspace$ such that \[ \sup_{\uu\in \cone{K}} \xbar\cdot\uu \leq 0 < \xbar\cdot\vv.
\] Since $K\subseteq\cone{K}$, the first inequality implies that $\xbar\in\Kapol$. Combined with the second inequality, this shows that $\vv\not\in\Kapolpol$. Thus, $\Kapolpol\subseteq \cone{K}$. \qedhere \end{proof-parts} \end{proof} In standard convex analysis, the polar operation $K\mapsto\Kpol$ maps the set of all closed convex cones bijectively onto itself. In a similar way, Theorem~\ref{thm:dub-ast-polar} shows that the astral polar operations we have been working with define a one-to-one correspondence between the set $\calJ$ of all closed convex astral cones in $\extspace$, and the set $\calK$ of all convex cones in $\Rn$ (whether closed or not). More precisely, let $\polmap:\calJ\rightarrow\calK$ be the map defined by $\polmap(J)=\Jpol$ for $J\in\calJ$. Proposition~\ref{pr:ext-pol-cone-props}(\ref{pr:ext-pol-cone-props:a}) shows that $\polmap$ does indeed map into $\calK$. Then Theorem~\ref{thm:dub-ast-polar} shows that $\polmap$ is a bijection with inverse $\polmapinv(K)=\Kapol$ for $K\in\calK$. Thus, $\calJ$ and $\calK$ are in one-to-one correspondence via the map $J\mapsto \Jpol$ for $J\in\calJ$, and its inverse $K\mapsto \Kapol$ for $K\in\calK$. In this way, these operations act as duals of each other. This duality is also reflected in the conjugates of the indicator functions associated with standard or astral cones. In standard convex analysis, the indicator functions of a closed convex cone $K\subseteq\Rn$ and its standard polar $\Kpol$ are conjugates of one another, as seen in \Cref{roc:thm14.1-conj}. The analogous relationship between polars and conjugates in astral space is given next: \begin{proposition} \label{pr:ind-apol-conj-ind} ~ \begin{letter-compact} \item \label{pr:ind-apol-conj-ind:a} Let $J\subseteq\extspace$ be a naive cone. Then $\indfastar{J}=\indf{\Jpol}$. \item \label{pr:ind-apol-conj-ind:b} Let $K\subseteq\Rn$ be a cone. Then $\indfdstar{K} = \indfa{\apol{K}}$.
\end{letter-compact} \end{proposition} \begin{proof} We prove only part~(\ref{pr:ind-apol-conj-ind:a}); the proof for part~(\ref{pr:ind-apol-conj-ind:b}) is entirely analogous. Let $\uu\in\Rn$. Then by the form of the conjugate given in \eqref{eq:Fstar-down-def}, \[ \indfastar{J}(\uu) = \sup_{\xbar\in\extspace} \BigBracks{ - \indfa{J}(\xbar) \plusd \xbar\cdot\uu } = \sup_{\xbar\in J} \xbar\cdot\uu, \] which we aim to show is equal to $\indf{\Jpol}(\uu)$. If $\uu\in\Jpol$, then $\xbar\cdot\uu\leq 0$ for all $\xbar\in J$, so $\indfastar{J}(\uu)\leq 0$. On the other hand, $\zero\in J$, so $\indfastar{J}(\uu)\geq 0$. Thus, $\indfastar{J}(\uu) = 0 = \indf{\Jpol}(\uu)$ in this case. Otherwise, if $\uu\not\in\Jpol$, then $\xbar\cdot\uu>0$ for some $\xbar\in J$. Since $J$ is a naive cone, $\lambda\xbar$ is also in $J$ for all $\lambda\in\Rstrictpos$. This implies that $\indfastar{J}(\uu)\geq (\lambda\xbar)\cdot\uu = \lambda (\xbar\cdot\uu)$, which tends to $+\infty$ as $\lambda\rightarrow+\infty$. Thus, in this case, $\indfastar{J}(\uu) = +\infty = \indf{\Jpol}(\uu)$. \end{proof} A basic property of the standard polar cone is that the polar of a convex cone $K\subseteq\Rn$ is the same as the polar of its closure, that is, $\Kpol=\polar{(\cl K)}$. Extending to astral space, it was further seen in Proposition~\ref{pr:ext-pol-cone-props}(\ref{pr:ext-pol-cone-props:b}) that $\Jpol=\polar{(\Jbar)}$ if $J$ is a naive cone in $\extspace$. Nevertheless, the analogous property does not hold, in general, for the astral polar; in other words, it is possible that $\apol{K}\neq\apol{(\cl K)}$, as seen in the next example. \begin{example} In $\R^2$, suppose $K$ is the open left halfplane adjoined with the origin: \[ K = \{\zero\} \cup \{(u_1,u_2) \in \R^2 : u_1 < 0\}. \] The standard polar of this convex cone is the ray $\Kpol = \{ \lambda \ee_1 : \lambda\in\Rpos\}$. 
Its astral polar, $\apol{K}$, includes $\Kpol$, and also includes all infinite points in $\extspac{2}$ whose dominant direction is $\ee_1$. This is because, for every such point $\xbar$, $\xbar\cdot\uu=\limray{\ee_1}\cdot\uu=-\infty$, for all $\uu\in K\setminus \{\zero\}$. Thus, \[ \apol{K} = \{ \lambda \ee_1 : \lambda \in \Rpos\} \cup \Bracks{ \limray{\ee_1} \plusl \extspac{2} }. \] On the other hand, the astral polar $\apol{(\cl K)}$ of $\cl K$, the closed left halfplane of $\R^2$, includes only $\Kpol=\polar{(\cl K)}$ together with the single infinite point $\limray{\ee_1}$; that is, \[ \apol{(\cl K)} = \{ \lambda \ee_1 : \lambda \in \Rpos\} \cup \{ \limray{\ee_1} \}. \] Note that this set is exactly the closure (in $\extspac{2}$) of $\Kpol$. \end{example} Indeed, this last observation turns out to be general: As we show next, if $K\subseteq\Rn$ is a convex cone, then $\apol{(\cl K)}$, the astral polar of its closure in $\Rn$, is always equal to $\Kpolbar$, the closure in $\extspace$ of its standard polar. Consequently, $\apol{K}=\Kpolbar$ if and only if $K$ is closed in $\Rn$. \begin{theorem} \label{thm:apol-of-closure} Let $K\subseteq\Rn$ be a convex cone. Then: \begin{letter-compact} \item \label{thm:apol-of-closure:a} $\Kpolbar = \apol{(\cl K)} \subseteq \apol{K}$. \item \label{thm:apol-of-closure:b} $\apol{K} = \Kpolbar$ if and only if $K$ is closed in $\Rn$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:apol-of-closure:a}):} We have \[ \Kpolbar = \oconich{\Kpol} = \polapol{(\Kpol)} = \apol{(\polpol{K})} = \apol{(\cl K)} \subseteq \apol{K}. \] The first equality is by Theorem~\ref{thm:out-conic-h-is-closure} (since $\Kpol$ is a convex cone). The second equality is by Theorem~\ref{thm:dub-ast-polar}(\ref{thm:dub-ast-polar:a}). The fourth equality is by Proposition~\ref{pr:polar-props}(\ref{pr:polar-props:c}). 
The inclusion is by Proposition~\ref{pr:ast-pol-props}(\ref{pr:ast-pol-props:b}) since $K\subseteq \cl{K}$. \pfpart{Part~(\ref{thm:apol-of-closure:b}):} If $K$ is closed in $\Rn$, then part~(\ref{thm:apol-of-closure:a}) immediately implies that $\apol{K} = \Kpolbar$. For the converse, suppose $\apol{K} = \Kpolbar$, implying, by part~(\ref{thm:apol-of-closure:a}), that $\apol{K} = \apol{(\cl K)}$. Taking the polar of both sides then yields \[ K = \apolpol{K} = \apolpol{(\cl K)} = \cl K, \] where the first and third equalities are by Theorem~\ref{thm:dub-ast-polar}(\ref{thm:dub-ast-polar:b}) (since $K$ and $\cl K$ are both convex cones). Thus, $K$ is closed. \qedhere \end{proof-parts} \end{proof} \section{Astral linear subspaces} \label{sec:ast-lin-subspaces} In this chapter, we study the astral analogue of linear subspaces. There are several natural ways of extending the notion of a linear subspace to astral space; we will see that all of these are equivalent. We will also see how such notions as span, basis and orthogonal complement carry over to astral space. \subsection{Definition and characterizations} We begin with definitions. A nonempty set $L\subseteq\Rn$ is a linear subspace if it is closed under vector addition (so that $\xx+\yy\in L$ for all $\xx,\yy\in L$) and multiplication by any scalar (so that $\lambda\xx\in L$ for all $\xx\in L$ and $\lambda\in\R$). However, a direct attempt to generalize this particular definition to astral space is likely to be problematic, leading to the kind of issues that arose in the definition of a naive cone in Section~\ref{sec:cones-basic}. 
Nonetheless, these conditions can be restated to say equivalently that $L$ is a linear subspace if and only if $L$ is a convex cone (which, by Proposition~\ref{pr:scc-cone-elts}\ref{pr:scc-cone-elts:d}, holds if and only if $L$ includes the origin and is closed under vector addition and multiplication by positive scalars) and is also closed under negation (so that $-\xx\in L$ for all $\xx\in L$). In this form, this definition extends straightforwardly to astral space: \begin{definition} A set $M\subseteq\extspace$ is an \emph{astral linear subspace} if $M$ is a convex astral cone and if $M=-M$ (so that $M$ is closed under negation). \end{definition} There are several ways in which a standard linear subspace can be constructed, for instance, as the column space of a matrix or as the set of solutions to a system of homogeneous linear equations. In a similar way, there are numerous natural means of constructing or characterizing astral linear subspaces. The next theorem enumerates several of these. In words, a set $M\subseteq\extspace$ is an astral linear subspace if and only if: it is the (topological or representational) closure of a standard linear subspace; it is the astral column space of some matrix; it is the astral null space of some matrix; it is the astral orthogonal complement of some set; it is the astral conic hull of some set that is closed under negation; it is the convex hull of a set of icons, closed under negation; it is the segment joining an icon and its negation. The condition that $M$ is an astral conic hull can be further restricted to require that it is the astral conic hull of only finitely many points, all in $\Rn$. Likewise, the condition that $M$ is the convex hull of a set of icons can be restricted to require that it is the convex hull of finitely many astrons. \begin{theorem} \label{thm:ast-lin-sub-equivs} Let $M\subseteq\extspace$.
Then the following are equivalent: \begin{letter-compact} \item \label{thm:ast-lin-sub-equivs:a} $M$ is an astral linear subspace. \item \label{thm:ast-lin-sub-equivs:b} $M=\Lbar=\repcl{L}$ for some (standard) linear subspace $L\subseteq\Rn$. \item \label{thm:ast-lin-sub-equivs:c} $M=\acolspace \A = \{\A\zbar : \zbar\in\extspac{k}\}$ for some matrix $\A\in\R^{n\times k}$, $k\geq 0$. \item \label{thm:ast-lin-sub-equivs:d} $M=\{\xbar\in\extspace : \B\xbar = \zero\}$ for some matrix $\B\in\R^{m\times n}$, $m\geq 0$. \item \label{thm:ast-lin-sub-equivs:e} $M=\{\xbar\in\extspace : \xbar\cdot\uu=0 \textup{ for all } \uu\in U\}$, for some $U\subseteq\Rn$. \item \label{thm:ast-lin-sub-equivs:f} $M=\acone{S}$ for some $S\subseteq\extspace$ with $S=-S$. \item \label{thm:ast-lin-sub-equivs:ff} $M=\acone{S}$ for some $S\subseteq\Rn$ with $S=-S$ and $|S|<+\infty$. \item \label{thm:ast-lin-sub-equivs:g} $M=\conv{E}$ for some nonempty set of icons $E\subseteq\corezn$ with $E=-E$. \item \label{thm:ast-lin-sub-equivs:gg} $M=\conv{(\lmset{S})}$ for some nonempty set $S\subseteq\Rn$ with $S=-S$ and $|S|<+\infty$. \item \label{thm:ast-lin-sub-equivs:h} $M=\lb{-\ebar}{\ebar}$ for some icon $\ebar\in\corezn$. \end{letter-compact} \end{theorem} Before proving the theorem, we give a lemma that spells out in detail the nature of some of these equivalences. \begin{lemma} \label{lem:ast-subspace-equivs} Let $\vv_1,\ldots,\vv_k\in\Rn$, and also define $V=\{\vv_1,\ldots,\vv_k\}$, $L=\spn{V}$, $\VV=[\vv_1,\ldots,\vv_k]$, and $\ebar=\VV\omm$. Then \[ \Lbar = \repcl{L} = \conv\Parens{\{\zero\}\cup\lmset{V}\cup-\lmset{V}} = \acone(V\cup-V) = \acolspace \VV = \lb{-\ebar}{\ebar}. \] \end{lemma} \begin{proof} ~ \begin{proof-parts} \pfpart{$\Lbar=\repcl{L}$: } This is immediate from Corollary~\ref{cor:lin-sub-bar-is-repcl}. 
\pfpart{$\repcl{L} = \conv\Parens{\{\zero\}\cup\lmset{V}\cup-\lmset{V}}$: } Since the latter expression is the convex hull of a set of astrons, we have \begin{align*} \conv\Parens{\{\zero\}\cup\lmset{V}\cup-\lmset{V}} &= \repcl{\Parens{\cone(\{\zero\}\cup V\cup -V)}} \\ &= \repcl{\Parens{\cone(V\cup -V)}} = \repcl{(\spn{V})}. \end{align*} The first equality is by Theorem~\ref{thm:conv-lmset-char}. The second is because $\zero$ is included in every cone. And the last is by Proposition~\ref{pr:scc-cone-elts}(\ref{pr:scc-cone-elts:span}). \pfpart{$\conv\Parens{\{\zero\}\cup\lmset{V}\cup-\lmset{V}} = \acone(V\cup-V)$: } This is immediate from Theorem~\ref{thm:acone-char}(\ref{thm:acone-char:a}). \pfpart{$\acolspace \VV \subseteq \Lbar$: } Let $F:\extspac{k}\rightarrow\extspace$ be the astral linear map associated with $\VV$ so that $F(\zbar)=\VV\zbar$ for $\zbar\in\extspac{k}$. We then have \[ \acolspace \VV = F\Parens{\extspac{k}} \subseteq \clbar{F(\Rk)} = \Lbar. \] The first equality is by definition of astral column space. The inclusion is by Proposition~\refequiv{prop:cont}{prop:cont:a}{prop:cont:c} since $F$ is continuous (\Cref{thm:linear:cont}\ref{thm:linear:cont:b}). The last equality is because $L$, the set of linear combinations of columns of $\VV$, is exactly $F(\Rk)$. \pfpart{$\Lbar \subseteq \lb{-\ebar}{\ebar}$: } We first show that $L \subseteq \lb{-\ebar}{\ebar}$. Let $\xx\in L$ and let $\uu\in\Rn$. We claim that \begin{equation} \label{eq:lem:ast-subspace-equivs:1} \xx\cdot\uu \leq |\ebar\cdot\uu| = \max\{-\ebar\cdot\uu,\ebar\cdot\uu\}, \end{equation} which, by \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}), will imply that $\xx\in \lb{-\ebar}{\ebar}$. If $\xx\cdot\uu=0$, then \eqref{eq:lem:ast-subspace-equivs:1} must hold since the right-hand side of that equation is always nonnegative. Otherwise, suppose $\xx\cdot\uu\neq 0$. 
Since $\xx$ is a linear combination of $\vv_1,\ldots,\vv_k$, this implies that $\vv_i\cdot\uu\neq 0$, for some $i\in\{1,\ldots,k\}$. Consequently, $\ebar\cdot\uu\in\{-\infty,+\infty\}$ by Proposition~\ref{pr:vtransu-zero}, so \eqref{eq:lem:ast-subspace-equivs:1} must hold since the expression on the right-hand side is $+\infty$. Thus, $\xx\in \lb{-\ebar}{\ebar}$, so $L\subseteq\lb{-\ebar}{\ebar}$. Since $\lb{-\ebar}{\ebar}$ is closed in $\extspace$ (Proposition~\ref{pr:e1}\ref{pr:e1:ohull}), this further shows that $\Lbar\subseteq\lb{-\ebar}{\ebar}$. \pfpart{$\lb{-\ebar}{\ebar} \subseteq \acolspace \VV$: } We have $\ebar=\VV \omm\in\acolspace \VV$ since $\omm\in\extspac{k}$. Likewise, $-\ebar=-\VV \omm = \VV (-\omm)\in\acolspace \VV$. Further, with $F$ as defined above, $\acolspace \VV = F(\extspac{k})$ is convex by Corollary~\ref{cor:thm:e:9}, being the image of the convex set $\extspac{k}$ under the linear map $F$. Therefore, $\lb{-\ebar}{\ebar} \subseteq \acolspace \VV$. \qedhere \end{proof-parts} \end{proof} \begin{figure} \centering \includegraphics[height=1.5in,trim={1in 3in 5in 1.5in},clip]{figs/linsubspace-equiv-thm-graph.pdf} \caption{The graph structure of the implications shown in the proof of Theorem~\ref{thm:ast-lin-sub-equivs}. } \label{fig:linsubspace-equiv-thm-graph} \end{figure} \begin{proof}[Proof of Theorem~\ref{thm:ast-lin-sub-equivs}] We prove several implications, each shown as an edge in the graph in Figure~\ref{fig:linsubspace-equiv-thm-graph}. As can be seen from the graph, together these implications prove all the claimed equivalences. \begin{proof-parts} \pfpart{(\ref{thm:ast-lin-sub-equivs:a}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:b}): } Suppose $M$ is an astral linear subspace. Let $L=M\cap\Rn$. We claim first that $L$ is a (standard) linear subspace. Since $M$ is a convex astral cone, $L$ is a (standard) convex cone by Propositions~\ref{pr:e1}(\ref{pr:e1:a},\ref{pr:e1:b}) and~\ref{pr:ast-cone-is-naive}. 
And since $M$ is closed under negation, $L$ is as well. It follows that $L$ is closed under vector addition by Proposition~\ref{pr:scc-cone-elts}(\ref{pr:scc-cone-elts:d}). Further, since $L$ is a cone, if $\xx\in L$ and $\lambda\in\R$ with $\lambda\geq 0$, then $\lambda\xx\in L$; and if $\lambda<0$ then $\lambda\xx=(-\lambda)(-\xx)\in L$ since $-\xx\in L$. Thus, $L$ is closed under multiplication by any scalar. Together, these facts show that $L$ is a linear subspace. Next, we show that $M=\repcl{L}$. Suppose $\xbar\in\repcl{L}$. Then we can write \begin{equation} \label{eq:thm:ast-lin-sub-equivs:1} \xbar=\limray{\vv_1}\plusl\dotsb\plusl\limray{\vv_k}\plusl\vv_{k+1} \end{equation} for some vectors $\vv_1,\ldots,\vv_{k+1}\in L\subseteq M$. For $i=1,\ldots,k$, we then have that $\limray{\vv_i}\in M$ by Proposition~\ref{pr:ast-cone-is-naive}, since $M$ is an astral cone. And since $M$ is a convex astral cone, it is closed under sequential sum by Theorem~\ref{thm:ast-cone-is-cvx-if-sum} (meaning $M\seqsum M\subseteq M$), and so also under leftward addition by Corollary~\ref{cor:seqsum-conseqs}(\ref{cor:seqsum-conseqs:a}) (meaning $M\plusl M\subseteq M$). Therefore, from its form in \eqref{eq:thm:ast-lin-sub-equivs:1}, $\xbar\in M$, so $\repcl{L}\subseteq M$. For the reverse inclusion, suppose now that $\xbar\in M$. Then $\limray{\xbar}$ is also in the astral cone $M$ (Proposition~\ref{pr:ast-cone-is-naive}), as is $-\limray{\xbar}=\limray{(-\xbar)}$ since $-\xbar$ is also in $M$. We can again write $\xbar$ as in \eqref{eq:thm:ast-lin-sub-equivs:1} for some $\vv_1,\ldots,\vv_{k+1}\in \Rn$. We claim that each $\vv_i$ is in $L$, implying $\xbar\in\repcl{L}$. To see this, let $V=\{\vv_1,\ldots,\vv_{k+1}\}$ and let $\VV=[\vv_1,\ldots,\vv_{k+1}]$. Then \[ V \subseteq \spn{V} \subseteq \clbar{\spn{V}} = \lb{-\VV\omm}{\VV\omm} = \lb{-\limray{\xbar}}{\limray{\xbar}} \subseteq M. 
\] The first equality is by Lemma~\ref{lem:ast-subspace-equivs} (applied to $\vv_1,\ldots,\vv_{k+1}$). The second equality is because $\limray{\xbar}=\VV \omm$ (Proposition~\ref{pr:i:8}\ref{pr:i:8d}). The last inclusion is because, as noted above, $-\limray{\xbar}$ and $\limray{\xbar}$ are in $M$, and since $M$ is convex. Thus, $V\subseteq M\cap\Rn=L$, so $\xbar\in\repcl{L}$. Therefore, $M=\repcl{L}$. That also $\repcl{L}=\Lbar$ follows from Corollary~\ref{cor:lin-sub-bar-is-repcl}. \pfpart{(\ref{thm:ast-lin-sub-equivs:b}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:c}), (\ref{thm:ast-lin-sub-equivs:b}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:gg}), (\ref{thm:ast-lin-sub-equivs:b}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:h}): } Suppose $M=\Lbar$ where $L\subseteq\Rn$ is a linear subspace. Let $V=\{\vv_1,\ldots,\vv_k\}\subseteq\Rn$ be a basis for $L$ (so that $L=\spn{V}$), and let $\VV=[\vv_1,\ldots,\vv_k]$. Then by Lemma~\ref{lem:ast-subspace-equivs}, $M=\acolspace \VV$, proving statement~(\ref{thm:ast-lin-sub-equivs:c}); $M=\conv(\{\zero\}\cup\lmset{V}\cup-\lmset{V})$, proving statement~(\ref{thm:ast-lin-sub-equivs:gg}) with $S=\{\zero\}\cup V\cup -V$; and $M=\lb{-\VV\omm}{\VV\omm}$, proving statement~(\ref{thm:ast-lin-sub-equivs:h}) with $\ebar=\VV\omm$. \pfpart{(\ref{thm:ast-lin-sub-equivs:c}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:ff}): } Suppose $M=\acolspace \A$ where $\A=[\vv_1,\ldots,\vv_k]$ for some $\vv_1,\ldots,\vv_k\in\Rn$. Then by Lemma~\ref{lem:ast-subspace-equivs}, $M=\acone{S}$ where $S=V\cup -V$ and $V=\{\vv_1,\ldots,\vv_k\}$. \pfpart{(\ref{thm:ast-lin-sub-equivs:ff}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:f}): } This is immediate. \pfpart{(\ref{thm:ast-lin-sub-equivs:f}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:g}): } Suppose $M=\acone{S}$ for some $S\subseteq\extspace$ with $S=-S$. 
Then by Theorem~\ref{thm:acone-char}(\ref{thm:acone-char:a}), $M=\conv{E}$ where $E=\{\zero\}\cup\lmset{S}$, which is a nonempty set of icons with $E=-E$. \pfpart{(\ref{thm:ast-lin-sub-equivs:gg}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:g}): } This is immediate since the astrons in $\lmset{S}$, for any $S\subseteq\Rn$, are also icons. \pfpart{(\ref{thm:ast-lin-sub-equivs:h}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:g}): } If $M=\lb{-\ebar}{\ebar}$ for some icon $\ebar\in\corezn$, then $M=\conv{E}$ where $E=\{-\ebar,\ebar\}$. \pfpart{(\ref{thm:ast-lin-sub-equivs:g}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:a}): } Suppose $M=\conv{E}$ where $E\subseteq\corezn$ is nonempty with $E=-E$. We will show that $M$ is a convex astral cone using Theorem~\refequiv{thm:ast-cvx-cone-equiv}{thm:ast-cvx-cone-equiv:a}{thm:ast-cvx-cone-equiv:b}. To do so, we first argue that $M$ can be expressed in the same form if the origin is adjoined to $E$. Let $\ebar$ be any point in $E$ (which must exist), implying $-\ebar$ is also in $E$. Then $ \zero \in \lb{-\ebar}{\ebar} \subseteq \conv{E} $. The first inclusion follows from \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}) since, for all $\uu\in\Rn$, $\zero\cdot\uu=0\leq \max\{-\ebar\cdot\uu,\ebar\cdot\uu\}$. The second inclusion is because $\ebar$ and $-\ebar$ are in $E$, and so also in $\conv{E}$, which is convex. Thus, by Proposition~\ref{pr:conhull-prop}(\ref{pr:conhull-prop:c}), $M=\conv{(\{\zero\}\cup E)}$. Therefore, $M$ is a convex astral cone by Theorem~\refequiv{thm:ast-cvx-cone-equiv}{thm:ast-cvx-cone-equiv:a}{thm:ast-cvx-cone-equiv:b}. Next, we have \[ -M = -(\conv{E}) = \conv{(-E)} = \conv{E} = M, \] where the second equality follows from Corollary~\ref{cor:thm:e:9} (applied with the linear map $\xbar\mapsto-\xbar$). Thus, $M$ is closed under negation. Therefore, $M$ is an astral linear subspace. 
\pfpart{(\ref{thm:ast-lin-sub-equivs:b}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:d}): } Suppose $M=\Lbar$ for some linear subspace $L\subseteq\Rn$. Let $U=\{\uu_1,\ldots,\uu_m\}\subseteq \Rn$ be a basis for $\Lperp$ so that $\Lperp=\spn{U}$. Let $\B=\trans{[\uu_1,\ldots,\uu_m]}$; that is, the $i$-th row of matrix $\B\in\Rmn$ is $\trans{\uu}_i$ for $i=1,\ldots,m$. Then \[ L = \Lperperp = \Uperp = \{ \xx\in\Rn : \B \xx = \zero \}, \] where the first and second equalities are by Proposition~\ref{pr:std-perp-props}(\ref{pr:std-perp-props:c},\ref{pr:std-perp-props:d}), and the third is by the definitions of $\B$ and $\Uperp$. Thus, \[ M = \Lbar = \{ \xbar\in\extspace : \B \xbar = \zero \} \] by Corollary~\ref{cor:closure-affine-set}. \pfpart{(\ref{thm:ast-lin-sub-equivs:d}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:e}): } Suppose $M=\{\xbar\in\extspace : \B\xbar = \zero\}$ for some matrix $\B\in\Rmn$. Then $\trans{\B} = [\uu_1,\ldots,\uu_m]$ for some $\uu_1,\ldots,\uu_m\in\Rn$ (that is, $\trans{\uu}_1,\ldots,\trans{\uu}_m$ are the rows of $\B$). Let $U=\{\uu_1,\ldots,\uu_m\}$. Then for all $\xbar\in\extspace$, by Proposition~\ref{pr:mat-prod-is-row-prods-if-finite}, $\B\xbar=\zero$ if and only if $\xbar\cdot\uu_i=0$ for $i=1,\ldots,m$. Therefore, $M=\{\xbar\in\extspace : \xbar\cdot\uu=0 \mbox{ for all } \uu\in U\}$. \pfpart{(\ref{thm:ast-lin-sub-equivs:e}) $\Rightarrow$ (\ref{thm:ast-lin-sub-equivs:a}): } Suppose $M=\{\xbar\in\extspace : \xbar\cdot\uu=0 \mbox{ for all } \uu\in U\}$, for some $U\subseteq\Rn$. Then we can write \[ M = \bigcap_{\uu\in U} \BigParens{ \Braces{\xbar\in\extspace : \xbar\cdot\uu \leq 0} \cap \Braces{\xbar\in\extspace : \xbar\cdot\uu \geq 0} }. \] Each set appearing in this intersection is either a homogeneous closed astral halfspace or all of $\extspace$ (if $\uu=\zero$). Therefore, $M$ is a convex astral cone by Proposition~\ref{pr:astral-cone-props}(\ref{pr:astral-cone-props:e}). 
Furthermore, $M=-M$ since clearly $\xbar\cdot\uu=0$ if and only if $-\xbar\cdot\uu=0$, for all $\xbar\in\extspace$ and $\uu\in U$. Therefore, $M$ is an astral linear subspace. \qedhere \end{proof-parts} \end{proof} The characterization of astral linear subspaces given in part~(\ref{thm:ast-lin-sub-equivs:b}) of Theorem~\ref{thm:ast-lin-sub-equivs} implies a one-to-one correspondence between astral linear subspaces and standard linear subspaces. According to this correspondence, every astral linear subspace $M\subseteq\extspace$ is mapped to a standard linear subspace by taking its intersection with $\Rn$, that is, via the map $M\mapsto M\cap\Rn$. Likewise, every linear subspace $L\subseteq\Rn$ is mapped to an astral linear subspace via the closure operation $L\mapsto \Lbar$. As expressed in the next corollary, these operations are bijective and are inverses of one another. \begin{corollary} \label{cor:std-ast-linsub-corr} ~ \begin{letter-compact} \item \label{cor:std-ast-linsub-corr:L} Let $L\subseteq\Rn$ be a linear subspace. Then $\Lbar$ is an astral linear subspace, and moreover, $\Lbar\cap\Rn=L$. \item \label{cor:std-ast-linsub-corr:M} Let $M\subseteq\extspace$ be an astral linear subspace. Then $M\cap\Rn$ is a (standard) linear subspace, and moreover, $\clbar{M\cap\Rn}=M$. \item \label{cor:std-ast-linsub-corr:Lambda} Let $\calL$ be the set of all linear subspaces in $\Rn$, and let $\calM$ be the set of all astral linear subspaces in $\extspace$. Let $\Lambda:\calL\rightarrow\calM$ be defined by $\Lambda(L)=\Lbar$ for $L\in\calL$. Then $\Lambda$ is bijective with inverse $\Lambdainv(M)=M\cap\Rn$ for $M\in\calM$. \end{letter-compact} \end{corollary} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{cor:std-ast-linsub-corr:L}):} Theorem~\refequiv{thm:ast-lin-sub-equivs}{thm:ast-lin-sub-equivs:a}{thm:ast-lin-sub-equivs:b} shows that $\Lbar$ is an astral linear subspace. 
Further, by Proposition~\ref{pr:closed-set-facts}(\ref{pr:closed-set-facts:a}), $\Lbar\cap\Rn=\cl{L}=L$ since $L$ is closed in $\Rn$. \pfpart{Part~(\ref{cor:std-ast-linsub-corr:M}):} By Theorem~\refequiv{thm:ast-lin-sub-equivs}{thm:ast-lin-sub-equivs:a}{thm:ast-lin-sub-equivs:b}, since $M$ is an astral linear subspace, there exists a linear subspace $L\subseteq\Rn$ such that $M=\Lbar=\repcl{L}$. Consequently, $M\cap\Rn=\repcl{L}\cap\Rn=L$. Thus, $M\cap\Rn$ is a linear subspace with closure $\clbar{M\cap\Rn}=\Lbar=M$. \pfpart{Part~(\ref{cor:std-ast-linsub-corr:Lambda}):} This is really just a restatement of the preceding parts. Let $\Lambda':\calM\rightarrow\calL$ be defined by $\Lambda'(M)=M\cap\Rn$ for $M\in\calM$. By part~(\ref{cor:std-ast-linsub-corr:L}), $\Lambda$ is indeed a mapping into $\calM$, and $\Lambda'(\Lambda(L))=L$ for all $L\in\calL$, so $\Lambda'$ is a left inverse of $\Lambda$. Similarly, by part~(\ref{cor:std-ast-linsub-corr:M}), $\Lambda'$ always maps into $\calL$, and $\Lambda(\Lambda'(M))=M$ for all $M\in\calM$, so $\Lambda'$ is a right inverse of $\Lambda$. Together, these prove that $\Lambda$ is bijective with inverse $\Lambdainv=\Lambda'$. \qedhere \end{proof-parts} \end{proof} Here are some closure properties of astral linear subspaces, analogous to similar properties for standard linear subspaces: \begin{proposition} \label{pr:ast-lin-sub-clos-props} ~ \begin{letter-compact} \item \label{pr:ast-lin-sub-clos-props:a} The intersection of an arbitrary collection of astral linear subspaces is an astral linear subspace. \item \label{pr:ast-lin-sub-clos-props:b} Let $M_1$ and $M_2$ be astral linear subspaces. Then $M_1\seqsum M_2$ is also an astral linear subspace. 
\end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:ast-lin-sub-clos-props:a}):} The intersection of an arbitrary collection of astral linear subspaces is a convex astral cone by Propositions~\ref{pr:e1}(\ref{pr:e1:b}) and~\ref{pr:astral-cone-props}(\ref{pr:astral-cone-props:b}). That it is also closed under negation follows by a proof similar to that of Proposition~\ref{pr:e1}(\ref{pr:e1:b}). Thus, it is an astral linear subspace. \pfpart{Part~(\ref{pr:ast-lin-sub-clos-props:b}):} Let $M=M_1\seqsum M_2$. Since $M_1$ and $M_2$ are convex astral cones, $M$ is as well, by Theorem~\ref{thm:seqsum-ast-cone}(\ref{thm:seqsum-ast-cone:b}). Also, \[ -M = -(M_1\seqsum M_2) = (-M_1)\seqsum (-M_2) = M_1 \seqsum M_2 = M. \] Here, the second equality is by Theorem~\ref{thm:distrib-seqsum} (applied with $\A=-\Iden$), and the third is because $M_1$ and $M_2$ are closed under negation. Thus, $M$ is as well, so $M$ is an astral linear subspace. \qedhere \end{proof-parts} \end{proof} \subsection{Astral span and bases} \label{sec:astral-span} In light of Proposition~\ref{pr:ast-lin-sub-clos-props}(\ref{pr:ast-lin-sub-clos-props:a}), we can define an astral span hull operator, analogous to the standard span of a set: \begin{definition} The \emph{astral span} of a set $S\subseteq\extspace$, denoted $\aspan{S}$, is the smallest astral linear subspace that includes $S$, or equivalently, the intersection of all astral linear subspaces that include $S$. \end{definition} Astral span has the usual properties of a hull operator as in Proposition~\ref{pr:gen-hull-ops}: \begin{proposition} \label{pr:aspan-hull-props} Let $S,U\subseteq\extspace$. \begin{letter-compact} \item \label{pr:aspan-hull-props:a} If $S\subseteq U$ and $U$ is an astral linear subspace, then $\aspan{S}\subseteq U$. \item \label{pr:aspan-hull-props:b} If $S\subseteq U$ then $\aspan{S}\subseteq\aspan{U}$. 
\item \label{pr:aspan-hull-props:c} If $S\subseteq U\subseteq\aspan{S}$, then $\aspan{U}=\aspan{S}$. \end{letter-compact} \end{proposition} The astral span of any set can always be re-expressed using the astral conic hull or convex hull operations: \begin{proposition} \label{pr:aspan-ito-acone-conv} Let $S\subseteq\extspace$. Then \begin{equation} \label{eq:pr:aspan-ito-acone-conv:1} \aspan{S} = \acone{(S\cup-S)} = \conv{\Parens{\{\zero\}\cup\lmset{S}\cup-\lmset{S}}}. \end{equation} If $S\neq\emptyset$, then also \begin{equation} \label{eq:pr:aspan-ito-acone-conv:2} \aspan{S} = \conv{\Parens{\lmset{S}\cup-\lmset{S}}}. \end{equation} \end{proposition} \begin{proof} We begin with the first equality of \eqref{eq:pr:aspan-ito-acone-conv:1}. The astral linear subspace $\aspan{S}$ is a convex astral cone that includes $S$, and so also $-S$, being closed under negation. Therefore, $\acone{(S\cup-S)} \subseteq \aspan{S}$ (Proposition~\ref{pr:acone-hull-props}\ref{pr:acone-hull-props:a}). On the other hand, $\acone{(S\cup-S)}$ includes $S$ and is an astral linear subspace by Theorem~\refequiv{thm:ast-lin-sub-equivs}{thm:ast-lin-sub-equivs:a}{thm:ast-lin-sub-equivs:f}. Thus, $\aspan{S} \subseteq \acone{(S\cup-S)}$. The second equality of \eqref{eq:pr:aspan-ito-acone-conv:1} is immediate from Theorem~\ref{thm:acone-char}(\ref{thm:acone-char:a}). Suppose henceforth that $S$ is not empty. Then $\conv{\Parens{\lmset{S}\cup-\lmset{S}}}$ is an astral linear subspace by Theorem~\refequiv{thm:ast-lin-sub-equivs}{thm:ast-lin-sub-equivs:a}{thm:ast-lin-sub-equivs:g} and so includes the origin (for instance, by Proposition~\ref{pr:ast-cone-is-naive}). Therefore, $\conv{\Parens{\lmset{S}\cup-\lmset{S}}} = \conv{\Parens{\{\zero\}\cup\lmset{S}\cup-\lmset{S}}}$ by Proposition~\ref{pr:conhull-prop}(\ref{pr:conhull-prop:c}), proving \eqref{eq:pr:aspan-ito-acone-conv:2}. 
\end{proof} In Section~\ref{sec:rep-seq}, we encountered the representational span of a singleton $\{\xbar\}$, where $\xbar\in\extspace$, which is the span of the vectors in $\xbar$'s representation. In a moment, we will see that representational span is closely related to astral span. But first, we extend our earlier definition for the representational span of singletons to arbitrary sets $S\subseteq\extspace$. Specifically, the representational span of $S$ is the span (in $\Rn$) of the union of all vectors appearing in the representations of all the points in $S$: \begin{definition} \label{def:rspan-gen} Let $S\subseteq\extspace$. For each $\xbar\in S$, let $\xbar = \Vxbar\, \omm \plusl \qqxbar$ be any representation of $\xbar$ where $\Vxbar\in\R^{n\times \kxbar}$, $\kxbar\geq 0$, and $\qqxbar\in\Rn$. Also, let $ \Rxbar=(\columns{\Vxbar})\cup\{\qqxbar\} $ be the vectors appearing in $\xbar$'s representation, and let $R=\bigcup_{\xbar\in S} \Rxbar$ be the union of all of these. Then the \emph{representational span} of $S$, denoted $\rspan{S}$, is equal to $\spn{R}$, the (standard) span of $R$. \end{definition} Clearly, the definition for singletons given in Definition~\ref{def:rep-span-snglton} is a special case of the one given here. This definition would appear to depend very much on the specific representations of the points in $S$, which are not unique, and may be quite arbitrary; in particular, they need not be canonical. Nonetheless, as we show next, the representational span of any set $S\subseteq\extspace$ is equal to the astral span of $S$, restricted to $\Rn$. In other words, $\rspan{S}$ and $\aspan{S}$ are related to each other in the same way as in \Cref{cor:std-ast-linsub-corr}, implying further that $\aspan{S}$ is the astral closure of $\rspan{S}$. Since $\aspan{S}$ does not depend on the representations of the points in $S$, this also shows that $\rspan{S}$ is likewise independent of those representations. 
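For concreteness, here is a small illustrative example (ours, not part of the surrounding development) applying Definition~\ref{def:rspan-gen} to a singleton; it relies only on the absorption property of leftward addition, namely that $\limray{\vv}\plusl(\lambda\vv+\ww)=\limray{\vv}\plusl\ww$ for $\vv,\ww\in\Rn$ and $\lambda\in\R$:

```latex
\begin{example}
In $\extspac{3}$, let $S=\{\xbar\}$ where $\xbar=\limray{\ee_1}\plusl\ee_2$.
Using the representation $\xbar=\VV\omm\plusl\qq$ with $\VV=[\ee_1]$ and
$\qq=\ee_2$, the set of representing vectors is $R=\{\ee_1,\ee_2\}$, so
$\rspan{S}=\spn\{\ee_1,\ee_2\}$.
The same point can also be written
$\xbar=\limray{\ee_1}\plusl(\ee_1+\ee_2)$, since leftward addition absorbs
finite components along $\ee_1$; this representation gives a different set
$R'=\{\ee_1,\ee_1+\ee_2\}$, and yet $\spn{R'}=\spn{R}$, consistent with the
representation-independence just discussed. By the theorem that follows,
$\aspan{S}=\clbar{\spn{R}}$, a proper astral linear subspace of
$\extspac{3}$.
\end{example}
```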
\begin{theorem} \label{thm:rspan-is-aspan-in-rn} Let $S\subseteq\extspace$, and assume the notation of Definition~\ref{def:rspan-gen}. Then \begin{equation} \label{eq:thm:rspan-is-aspan-in-rn:1} \rspan{S}=\spn{R}=(\aspan{S})\cap\Rn, \end{equation} and \begin{equation} \label{eq:thm:rspan-is-aspan-in-rn:2} \clbar{\rspan{S}} = \clbar{\spn{R}} = \aspan{S}. \end{equation} \end{theorem} \begin{proof} The first equality in \eqref{eq:thm:rspan-is-aspan-in-rn:1} is by definition of $\rspan{S}$. For the second equality, let $L=\spn{R}$ and $L'=(\aspan{S})\cap\Rn$; we aim to prove $L=L'$. We show first that $L'\subseteq L$. Let $\xbar\in S$. Then every vector in $\xbar$'s representation, $\Vxbar\omm\plusl\qqxbar$, is in $\Rxbar\subseteq R\subseteq\spn{R}=L$. Therefore, $\xbar\in\repcl{L}$. Thus, $S\subseteq\repcl{L}$. Since $\repcl{L}$ is an astral linear subspace (by Theorem~\ref{thm:ast-lin-sub-equivs}\ref{thm:ast-lin-sub-equivs:a},\ref{thm:ast-lin-sub-equivs:b}), it follows that $\aspan{S}\subseteq\repcl{L}$ (by Proposition~\ref{pr:aspan-hull-props}\ref{pr:aspan-hull-props:a}). Thus, $L'=(\aspan{S})\cap\Rn\subseteq\repcl{L}\cap\Rn=L$. For the reverse inclusion, let $\xbar\in S$. Then \begin{align*} \Rxbar \subseteq \spn{\Rxbar} \subseteq \clbar{\spn{\Rxbar}} &= \lb{-\limray{\xbar}}{\limray{\xbar}} \\ &= \conv\{-\limray{\xbar},\limray{\xbar}\} = \aspan\{\xbar\} \subseteq \aspan{S}. \end{align*} The first equality is by Lemma~\ref{lem:ast-subspace-equivs}, applied with $V=\Rxbar$ and $\VV=[\Vxbar,\qqxbar]$ so that $\ebar=\VV\omm=\limray{\xbar}$ by Proposition~\ref{pr:i:8}(\ref{pr:i:8d}). The third equality is by Proposition~\ref{pr:aspan-ito-acone-conv}. And the final inclusion is by Proposition~\ref{pr:aspan-hull-props}(\ref{pr:aspan-hull-props:b}). Since this holds for all $\xbar\in S$, it follows that $R\subseteq(\aspan{S})\cap\Rn=L'$. 
Therefore, $L=\spn{R}\subseteq L'$ since $L'$ is a linear subspace (by Corollary~\ref{cor:std-ast-linsub-corr}\ref{cor:std-ast-linsub-corr:M}). This completes the proof of \eqref{eq:thm:rspan-is-aspan-in-rn:1}. Taking closures then yields \eqref{eq:thm:rspan-is-aspan-in-rn:2}, again by Corollary~\ref{cor:std-ast-linsub-corr}(\ref{cor:std-ast-linsub-corr:M}). \end{proof} When $S$ is included in $\Rn$, we can express its astral span and representational span in terms of its standard span: \begin{proposition} \label{pr:aspan-for-rn} Let $S\subseteq\Rn$. Then \[ \aspan{S}=\clbar{\spn{S}}=\repcl{(\spn{S})}, \] and \[ \rspan{S}=\spn{S}. \] \end{proposition} \begin{proof} We have \[ \aspan{S} = \acone{(S\cup-S)} = \repcl{\Parens{\cone(S\cup-S)}} = \repcl{(\spn{S})} = \clbar{\spn{S}}. \] The first equality is by Proposition~\ref{pr:aspan-ito-acone-conv}. The second is by Theorem~\ref{thm:acone-is-repclos}. The third is by Proposition~\ref{pr:scc-cone-elts}(\ref{pr:scc-cone-elts:span}). And the fourth is by Corollary~\ref{cor:lin-sub-bar-is-repcl}. Since $\rspan{S}$ is the restriction of $\aspan{S}$ to $\Rn$ (Theorem~\ref{thm:rspan-is-aspan-in-rn}), it then follows that $\rspan{S}=\spn{S}$ by Corollary~\ref{cor:std-ast-linsub-corr}(\ref{cor:std-ast-linsub-corr:L}). \end{proof} There is an intuitive sense in which the (standard) span of a nonempty set of vectors $S\subseteq\Rn$ is like the convex hull of those vectors and their negations as they are extended to infinity. Indeed, taking the convex hull of scalar multiples of $S$ and $-S$ will, in the limit, encompass all of the linear subspace spanned by $S$; that is, \[ \spn{S} = \bigcup_{\lambda\in\Rstrictpos} \conv{(\lambda S \cup -\lambda S)}. 
\] In astral space, this can be expressed exactly, without resorting to limits: As seen in \eqref{eq:pr:aspan-ito-acone-conv:2} of Proposition~\ref{pr:aspan-ito-acone-conv}, the astral span of $S$ in this case is equal to the convex hull of all the astrons associated with $S$ and $-S$. Consequently, by Proposition~\ref{pr:aspan-for-rn} combined with Theorem~\ref{thm:rspan-is-aspan-in-rn}, $\spn{S}$ is simply the intersection of this convex hull of astrons with $\Rn$. Applying Theorem~\ref{thm:decomp-acone} yields the next identity for the astral span of a union: \begin{theorem} \label{thm:decomp-aspan} Let $S_1,S_2\subseteq\extspace$. Then \[ (\aspan{S_1}) \seqsum (\aspan{S_2}) = \aspan{(S_1\cup S_2)}. \] \end{theorem} \begin{proof} We have: \begin{align*} (\aspan{S_1}) \seqsum (\aspan{S_2}) &= \acone(S_1\cup -S_1) \seqsum \acone(S_2\cup -S_2) \\ &= \acone(S_1\cup -S_1 \cup S_2\cup -S_2) \\ &= \aspan(S_1\cup S_2). \end{align*} The first and third equalities are by Proposition~\ref{pr:aspan-ito-acone-conv}. The second equality is by Theorem~\ref{thm:decomp-acone}. \end{proof} A set $B\subseteq\Rn$ is a basis for a linear subspace $L\subseteq\Rn$ if $B$ spans $L$ and if $B$ is linearly independent. In the same way, we say that $B\subseteq\Rn$ is an \emph{astral basis} (or simply a \emph{basis}) for an astral linear subspace $M\subseteq\extspace$ if $M=\aspan{B}$ and $B$ is linearly independent. As we show next, $B$ is an astral basis for $M$ if and only if it is a standard basis for the linear subspace $M\cap\Rn$. We also give another characterization for when a set is an astral basis. In standard linear algebra, a set $B$ is a basis for the linear subspace $L$ if the coefficients defining the linear combinations of $B$ are in one-to-one correspondence with the elements of $L$. More precisely, in matrix form, the linear combinations of $B$ are exactly the vectors $\B\zz$ where $\B$ is a matrix with columns matching the $d$ vectors in $B$, and $\zz\in\R^d$. 
Then $B$ is a basis if and only if the map $\zz\mapsto\B\zz$ is a bijection from $\R^d$ onto $L$. Likewise, as we show next, $B$ is an astral basis for $M$ if and only if the corresponding astral map $\zbar\mapsto\B\zbar$ is a bijection from $\extspac{d}$ onto $M$. \begin{theorem} \label{thm:basis-equiv} Let $B=\{\vv_1,\ldots,\vv_d\}\subseteq\Rn$ where $d=|B|<+\infty$, and let $\B=[\vv_1,\ldots,\vv_d]$. Let $M\subseteq\extspace$ be an astral linear subspace. Then the following are equivalent: \begin{letter-compact} \item \label{thm:basis-equiv:a} $B$ is an astral basis for $M$. \item \label{thm:basis-equiv:b} $B$ is a (standard) basis for the linear subspace $M\cap\Rn$. \item \label{thm:basis-equiv:c} The map $\zbar\mapsto\B\zbar$ defines a bijection from $\extspac{d}$ onto $M$. \end{letter-compact} Consequently, an astral basis must exist for every astral linear subspace $M$. Furthermore, every astral basis for $M$ must have cardinality equal to $\dim(M\cap\Rn)$. \end{theorem} \begin{proof} Let $L=M\cap\Rn$, which is a linear subspace with $M=\Lbar$ (by Corollary~\ref{cor:std-ast-linsub-corr}\ref{cor:std-ast-linsub-corr:M}). \begin{proof-parts} \pfpart{(\ref{thm:basis-equiv:b}) $\Rightarrow$ (\ref{thm:basis-equiv:a}): } Suppose $B$ is a standard basis for $L$. Then $B$ is linearly independent, and \[ M = \Lbar = \clbar{\spn{B}} = \aspan{B} \] where the second equality is because $B$ is a basis for $L$, and the third is by Proposition~\ref{pr:aspan-for-rn}. Thus, $B$ is an astral basis for $M$. \pfpart{(\ref{thm:basis-equiv:a}) $\Rightarrow$ (\ref{thm:basis-equiv:c}): } Suppose $B$ is an astral basis for $M$. Then \begin{equation} \label{eq:thm:basis-equiv:1} M = \aspan{B} = \clbar{\spn{B}} = \acolspace{\B}, \end{equation} where the second equality is by Proposition~\ref{pr:aspan-for-rn}, and the third is by Lemma~\ref{lem:ast-subspace-equivs}. 
Let $\Fbar:\extspac{d}\rightarrow M$ be the astral linear map associated with $\B$ so that $\Fbar(\zbar)=\B\zbar$ for $\zbar\in\extspac{d}$. Note that the restriction of $\Fbar$'s range to $M$ is justified by \eqref{eq:thm:basis-equiv:1}. Indeed, $\Fbar(\extspac{d})$ is exactly the astral column space of $\B$; therefore, \eqref{eq:thm:basis-equiv:1} also shows that $\Fbar$ is surjective. To see that $\Fbar$ is injective, suppose $\Fbar(\zbar)=\Fbar(\zbar')$ for some $\zbar,\zbar'\in\extspac{d}$. Then \[ \zbar = \Bpseudoinv \B\zbar = \Bpseudoinv \B\zbar' = \zbar' \] where $\Bpseudoinv$ is $\B$'s pseudoinverse. The second equality is because, by assumption, $\B\zbar=\B\zbar'$. The first and third equalities follow from the identity $\Bpseudoinv\B=\Iden$ (where $\Iden$ is the $d\times d$ identity matrix), which holds by Proposition~\ref{pr:pseudoinv-props}(\ref{pr:pseudoinv-props:c}) since $B$ is linearly independent (being a basis), implying that $\B$ has full column rank. Thus, $\Fbar$ is a bijection, as claimed. \pfpart{(\ref{thm:basis-equiv:c}) $\Rightarrow$ (\ref{thm:basis-equiv:b}): } Suppose $\Fbar$, as defined above, is a bijection. Let $F:\R^d\rightarrow L$ be the restriction of $\Fbar$ to $\R^d$ so that $F(\zz)=\Fbar(\zz)=\B\zz$ for $\zz\in\R^d$. Note that the restriction of $F$'s range to $L$ is justified because, for $\zz\in\R^d$, $\Fbar(\zz)=\B\zz$ is in $M\cap\Rn=L$. We claim $F$ is a bijection. To see this, observe first that $F$ is injective since if $F(\zz)=F(\zz')$ for some $\zz,\zz'\in\R^d$ then $\Fbar(\zz)=F(\zz)=F(\zz')=\Fbar(\zz')$, implying $\zz=\zz'$ since $\Fbar$ is injective. To see that $F$ is surjective, let $\xx\in L$. Then because $\Fbar$ is surjective, there exists $\zbar\in\extspac{d}$ for which $\Fbar(\zbar)=\xx$. We can write $\zbar=\ebar\plusl\qq$ for some icon $\ebar\in\corez{d}$ and some $\qq\in\R^d$. Thus, $\xx=\B\zbar=\B\ebar\plusl \B\qq$. 
Evidently, $\B\ebar$, which is an icon, is the iconic part of the expression on the right-hand side of this equality, while $\zero$ is the iconic part of $\xx$, the expression on the left. Since the iconic part of any point is unique (Theorem~\ref{thm:icon-fin-decomp}), it follows that $\Fbar(\ebar)=\B\ebar=\zero$. Since $\Fbar(\zero)=\zero$ and since $\Fbar$ is injective, this implies that $\ebar=\zero$. Thus, $\zbar=\qq$ and $F(\qq)=\Fbar(\zbar)=\xx$, so $F$ is surjective. Since $F$ is surjective, $\spn{B}=\colspace{\B}=F(\R^d)=L$, so $B$ spans $L$. And since $F$ is injective, $B$ must be linearly independent. This is because if some linear combination of $B$ equals $\zero$, meaning $\B\zz=\zero$ for some $\zz\in\R^d$, then $F(\zz)=\zero=F(\zero)$, implying $\zz=\zero$ since $F$ is injective. Thus, $B$ is a basis for $L$. \pfpart{Existence and equal cardinality:} The linear subspace $L=M\cap\Rn$ must have a basis, and every basis of $L$ must have cardinality $\dim{L}$. Since, as shown above, a set $B$ is an astral basis for $M$ if and only if it is a standard basis for $L$, the same properties hold for $M$. \qedhere \end{proof-parts} \end{proof} Thus, every astral linear subspace $M\subseteq\extspace$ must have an astral basis, and the cardinality of all such bases must be the same, namely, $\dim(M\cap\Rn)$, the usual dimension of the standard linear subspace $M\cap\Rn$. Accordingly, we define the \emph{dimension of $M$}, denoted $\dim{M}$, to be the cardinality of any basis of $M$. Thus, in general, $\dim{M}=\dim(M\cap\Rn)$. We show next that every set $S\subseteq\extspace$ includes a finite subset $V$ whose astral span is the same as that of the full set $S$; moreover, $V$'s cardinality need not exceed $\aspan{S}$'s dimension. \begin{theorem} \label{thm:aspan-finite-subset} Let $S\subseteq\extspace$, and let $d=\dim(\aspan{S})$. Then there exists a subset $V\subseteq S$ with $|V|\leq d$ such that $\aspan{V}=\aspan{S}$.
\end{theorem} \begin{proof} Let $\Rxbar$ and $R$ be as in Definition~\ref{def:rspan-gen} (for $\xbar\in S$), implying that $\rspan{S}=\spn{R}$. Let $B$ be a linearly independent subset of $R$ of maximum cardinality among all such subsets. Then every vector in $R$ is equal to a linear combination of the vectors in $B$; otherwise, if some vector $\vv\in R$ is not such a linear combination, then $B\cup\{\vv\}$ would be a strictly larger linearly independent subset of $R$. Thus, $B\subseteq R\subseteq \spn{B}$, so $\spn{R}=\spn{B}$ (by Proposition~\ref{pr:gen-hull-ops}\ref{pr:gen-hull-ops:d} since linear span is a hull operator). This shows that $B$, being linearly independent, is a basis for $\spn{R}$, which, by Theorem~\ref{thm:rspan-is-aspan-in-rn}, is the same as $(\aspan{S})\cap\Rn$. Therefore, $B$ is an astral basis for $\aspan{S}$ as well by Theorem~\refequiv{thm:basis-equiv}{thm:basis-equiv:a}{thm:basis-equiv:b}. Consequently, $|B|=d$. Let $\bb_1,\ldots,\bb_d$ be the elements of $B$. Since each $\bb_i\in R$, there must exist $\xbar_i\in S$ such that $\bb_i\in\Rxbari$, for $i=1,\ldots,d$. Let $V=\{\xbar_1,\ldots,\xbar_d\}$, whose cardinality is at most $d$, and let \[ R'=\bigcup_{i=1}^d \Rxbari. \] Then $B\subseteq R'\subseteq R$ so \begin{equation} \label{eq:thm:aspan-finite-subset:1} \spn{B} \subseteq \spn{R'} \subseteq \spn{R} = \spn{B}. \end{equation} Therefore, \[ \aspan{V} = \clbar{\spn{R'}} = \clbar{\spn{R}} = \aspan{S}. \] The second equality is because $\spn{R'}=\spn{R}$ by \eqref{eq:thm:aspan-finite-subset:1}. The first and third equalities are by Theorem~\ref{thm:rspan-is-aspan-in-rn} (applied to $V$ and $S$ respectively). \end{proof} We previously saw in Theorems~\ref{thm:e:7} and~\ref{thm:oconic-hull-and-seqs} how the outer convex hull and outer conic hull of a finite set of points can be characterized in terms of sequences. The astral span of a finite set can be similarly characterized, as shown next. 
Indeed, this follows simply from the fact that the astral span of any set can always be expressed as a conic hull. \begin{theorem} \label{thm:aspan-and-seqs} Let $V=\{\xbar_1,\dotsc,\xbar_m\}\subseteq\extspace$, and let $\zbar\in\extspace$. Then $\zbar\in\aspan{V}$ if and only if there exist sequences $\seq{\lambda_{it}}$ and $\seq{\lambda'_{it}}$ in $\Rpos$, and span-bound sequences $\seq{\xx_{it}}$ and $\seq{\xx'_{it}}$ in $\Rn$, for $i=1,\dotsc,m$, such that: \begin{itemize} \item $\xx_{it}\rightarrow\xbar_i$ and $\xx'_{it}\rightarrow -\xbar_i$ for $i=1,\dotsc,m$. \item The sequence $\zz_t=\sum_{i=1}^m (\lambda_{it} \xx_{it} + \lambda'_{it} \xx'_{it})$ converges to $\zbar$. \end{itemize} \end{theorem} \begin{proof} By Propositions~\ref{pr:aspan-ito-acone-conv} and~\ref{pr:acone-hull-props}(\ref{pr:acone-hull-props:d}), \[ \aspan{V} = \acone(V\cup -V) = \oconich(V\cup -V). \] Therefore, applying Theorem~\ref{thm:oconic-hull-and-seqs} to $V\cup -V$ yields the claim. \end{proof} The characterization given in Theorem~\ref{thm:aspan-and-seqs} requires twin sequences for every point $\xbar_i$ in $V$: one sequence, $\seq{\xx_{it}}$, converging to $\xbar_i$, and the other, $\seq{\xx'_{it}}$, converging to its negation, $-\xbar_i$. The sequence elements $\zz_t$ are then conic (nonnegative) combinations of the elements of all these sequences. It seems reasonable to wonder if such twin sequences are really necessary, or if it would suffice to have a \emph{single} sequence $\seq{\xx_{it}}$ for each $\xbar_i$, and then for each $\zz_t$ to be a \emph{linear} combination of the $\xx_{it}$'s (with possibly negative coefficients). Thus, the characterization would say that $\zbar\in\aspan{V}$ if and only if there exist span-bound sequences $\seq{\xx_{it}}$ converging to $\xbar_i$ and (arbitrary) sequences $\seq{\lambda_{it}}$ in $\R$ such that $\zz_t=\sum_{i=1}^m \lambda_{it} \xx_{it}$ converges to $\zbar$. 
The existence of such sequences is indeed sufficient for $\zbar$ to be in $\aspan{V}$, as can be shown to follow from Theorem~\ref{thm:aspan-and-seqs}. However, it is not necessary, as the next example shows: \begin{example} In $\extspac{2}$, let $V=\{\xbar\}$ where $\xbar=\limray{\ee_1}\plusl\limray{\ee_2}$, and let $\zbar=\zz=\ee_2$. Then $\aspan{V}=\extspac{2}$ (as follows, for instance, from Theorem~\ref{thm:rspan-is-aspan-in-rn}). In particular, $\zz\in\aspan{V}$. Suppose there exist sequences as described above, namely, a sequence $\seq{\lambda_t}$ in $\R$ and a span-bound sequence $\seq{\xx_t}$ in $\R^2$ such that $\zz_t=\lambda_t \xx_t \rightarrow \zz$. We must then have that either $\lambda_t\geq 0$ for infinitely many values of $t$, or that $\lambda_t\leq 0$ for infinitely many $t$ (or both). Suppose the first case holds. Then by discarding all other sequence elements, we can assume $\lambda_t\geq 0$ for all $t$. By Theorem~\ref{thm:oconic-hull-and-seqs}, the existence of such sequences implies that $\zz\in\oconich{V}=\aray{\xbar}=\lb{\zero}{\limray{\xbar}}$. However, this is a contradiction since $\zz$ is not in this set (by Theorem~\ref{thm:lb-with-zero}). In the alternative case that $\lambda_t\leq 0$ for infinitely many $t$, a similar argument shows that $\zz\in\aray{(-\xbar)}$, which is again a contradiction. Thus, no such sequences can exist, even though $\zz\in\aspan{V}$. To construct twin sequences as in Theorem~\ref{thm:aspan-and-seqs} for this case, we can choose $\xx_t=\trans{[t^2,t+1]}$, $\xx'_t=\trans{[-t^2,-t]}$, and $\lambda_t=\lambda'_t=1$, for all $t$. Then $\xx_t\rightarrow\xbar$, $\xx'_t\rightarrow -\xbar$, and $\zz_t=\lambda_t\xx_t+\lambda'_t\xx'_t=\ee_2\rightarrow\zz$. \end{example} \subsection{Orthogonal complements} We next look at extensions of the orthogonal complement operation to astral space. 
Recall that, as given in \eqref{eq:std-ortho-comp-defn}, the orthogonal complement of a set $S\subseteq\Rn$ is the set \begin{equation} \label{eq:std-ortho-comp-defn-rev} \Sperp = \Braces{\uu\in\Rn : \xx\cdot\uu = 0 \mbox{ for all } \xx\in S }. \end{equation} As was done with the standard polar operation in Section~\ref{sec:ast-pol-cones}, we extend orthogonal complement to astral space in two ways. In the first of these, we simply allow $S$ to be any subset of $\extspace$: \begin{definition} The \emph{orthogonal complement} of a set $S\subseteq\extspace$, denoted $\Sperp$, is the set \begin{equation} \label{eqn:sperp-def} \Sperp = \Braces{\uu\in\Rn : \xbar\cdot\uu=0 \mbox{ for all } \xbar\in S}. \end{equation} \end{definition} Thus, $\Sperp$ is the set of vectors in $\Rn$ that are orthogonal to every point in $S\subseteq\extspace$. Clearly, for sets in $\Rn$, this definition is consistent with the old one that it generalizes. In the second extension to astral space, we reverse the roles of $\Rn$ and $\extspace$: \begin{definition} The \emph{astral orthogonal complement} of a set $U\subseteq\Rn$, denoted $\aperp{U}$, is the set \begin{equation} \label{eq:aperp-defn} \aperp{U} = \Braces{\xbar\in\extspace : \xbar\cdot\uu=0 \mbox{ for all } \uu\in U}. \end{equation} \end{definition} Thus, $\aperp{U}$ is the set of points in $\extspace$ that are orthogonal to all points in $U\subseteq\Rn$. As with the polar operations defined in Section~\ref{sec:ast-pol-cones}, we will see that these orthogonal complement operations complement one another, each acting, to a degree, as the dual of the other. Note that one operation maps any set $S\subseteq\extspace$ to a set $\Sperp$ in $\Rn$, while the other maps any set $U\subseteq\Rn$ to a set $\Uaperp$ in $\extspace$. The next theorems summarize various properties of these operations: \begin{proposition} \label{pr:aperp-props} Let $S,U\subseteq\Rn$. 
\begin{letter-compact} \item \label{pr:aperp-props:d} $\Uaperp$ is an astral linear subspace. \item \label{pr:aperp-props:a} If $S\subseteq U$ then $\Uaperp\subseteq\Saperp$. \item \label{pr:aperp-props:b} $\Uaperp=\aperp{(\cl{U})}=\aperp{(\spn{U})}$. \item \label{pr:aperp-props:c} If $U$ is a linear subspace then $\apol{U} = \aperp{U}$. \item \label{pr:aperp-props:e} $\Uaperp=\clbar{\Uperp}$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:aperp-props:d}):} This is immediate from Theorem~\refequiv{thm:ast-lin-sub-equivs}{thm:ast-lin-sub-equivs:a}{thm:ast-lin-sub-equivs:e} and \eqref{eq:aperp-defn}. \pfpart{Part~(\ref{pr:aperp-props:a}):} If $\xbar\in\Uaperp$, then $\xbar\cdot\uu=0$ for all $\uu\in U$, and therefore also for all $\uu\in S$. Thus, $\xbar\in\Saperp$. \pfpart{Part~(\ref{pr:aperp-props:b}):} We show first that $\Uaperp\subseteq\aperp{(\spn{U})}$. Suppose $\xbar\in\Uaperp$, and let $\uu\in\spn{U}$. Then $\uu=\sum_{i=1}^m \lambda_i \uu_i$ for some $\lambda_1,\ldots,\lambda_m\in\R$ and $\uu_1,\ldots,\uu_m\in U$. For $i=1,\ldots,m$, we then have $\xbar\cdot\uu_i=0$, implying that $\xbar\cdot(\lambda_i\uu_i)=\lambda_i(\xbar\cdot\uu_i)=0$, and so that \[ \xbar\cdot\uu = \xbar\cdot\Parens{\sum_{i=1}^m \lambda_i \uu_i} = \sum_{i=1}^m \xbar\cdot(\lambda_i\uu_i) = 0 \] where the second equality is by Proposition~\ref{pr:i:1}. Since this holds for all $\uu\in\spn{U}$, it follows that $\xbar\in\aperp{(\spn{U})}$. Thus, \[ \Uaperp \subseteq \aperp{(\spn{U})} \subseteq \aperp{(\cl{U})} \subseteq \Uaperp. \] The first inclusion was just shown. The second and third inclusions follow from part~(\ref{pr:aperp-props:a}) since $U\subseteq \cl{U} \subseteq\spn{U}$ (the second of these inclusions holding because the linear span of any set in $\Rn$ is closed). \pfpart{Part~(\ref{pr:aperp-props:c}):} Suppose $U$ is a linear subspace and that $\xbar\in\aperp{U}$. Then for all $\uu\in U$, $\xbar\cdot\uu=0\leq 0$. Thus, $\xbar\in\apol{U}$, so $\aperp{U}\subseteq\apol{U}$.
For the reverse inclusion, suppose $\xbar\in\apol{U}$. Then for all $\uu\in U$, $\xbar\cdot\uu\leq 0$, implying that $-\xbar\cdot\uu=\xbar\cdot(-\uu)\leq 0$ since $-\uu$ is also in $U$. Thus, $\xbar\cdot\uu=0$ so $\xbar\in\aperp{U}$, proving $\apol{U}\subseteq\aperp{U}$. \pfpart{Part~(\ref{pr:aperp-props:e}):} Let $L=\spn{U}$, which is a linear subspace and therefore also a closed (in $\Rn$) convex cone. We then have \[ \Uaperp = \Laperp = \Lapol = \Lpolbar = \clbar{\Lperp} = \clbar{\Uperp}. \] The first equality is by part~(\ref{pr:aperp-props:b}). The second is by part~(\ref{pr:aperp-props:c}). The third is by Theorem~\ref{thm:apol-of-closure}(\ref{thm:apol-of-closure:b}). The fourth is by Proposition~\ref{pr:polar-props}(\ref{pr:polar-props:f}). And the last is by Proposition~\ref{pr:std-perp-props}(\ref{pr:std-perp-props:d}). \qedhere \end{proof-parts} \end{proof} \begin{proposition} \label{pr:perp-props-new} Let $S, U\subseteq\extspace$. \begin{letter-compact} \item \label{pr:perp-props-new:a} $\Sperp$ is a linear subspace of $\Rn$. \item \label{pr:perp-props-new:b} If $S\subseteq U$ then $\Uperp\subseteq\Sperp$. \item \label{pr:perp-props-new:c} $\Sperp = \Sbarperp = (\aspan{S})^\bot = (\rspan{S})^\bot$. \item \label{pr:perp-props-new:d} If $S$ is an astral linear subspace then $\polar{S}=\Sperp$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:perp-props-new:a}):} Let $L=\Sperp$. Clearly, $\zero\in L$. If $\uu,\vv\in L$, then for all $\xbar\in S$, $\xbar\cdot\uu=\xbar\cdot\vv=0$ so $\xbar\cdot(\uu+\vv)=0$ by Proposition~\ref{pr:i:1}; therefore, $\uu+\vv\in L$. Finally, if $\uu\in L$ and $\lambda\in\R$, then for all $\xbar\in S$, $\xbar\cdot(\lambda\uu)=\lambda(\xbar\cdot\uu)=0$, by Proposition~\ref{pr:i:2}, so $\lambda\uu\in L$. Thus, $L$ is a linear subspace. \pfpart{Part~(\ref{pr:perp-props-new:b}):} Proof is similar to that of Proposition~\ref{pr:aperp-props}(\ref{pr:aperp-props:a}). 
\pfpart{Part~(\ref{pr:perp-props-new:c}):} We show first that $\Sperp\subseteq(\aspan{S})^\bot$. Let $\uu\in\Sperp$. This means $\uu$ is orthogonal to every point in $S$, and thus that $S\subseteq C$ where $C=\aperp{\{\uu\}}$. By Proposition~\ref{pr:aperp-props}(\ref{pr:aperp-props:d}), $C$ is an astral linear subspace, so $\aspan{S}\subseteq C$ (Proposition~\ref{pr:aspan-hull-props}\ref{pr:aspan-hull-props:a}), that is, every point in $\aspan{S}$ is orthogonal to $\uu$. Therefore, $\uu\in(\aspan{S})^\bot$. We thus have \[ \Sperp \subseteq (\aspan{S})^\bot \subseteq \Sbarperp \subseteq \Sperp \] with the first inclusion following from the argument above, and the other inclusions from part~(\ref{pr:perp-props-new:b}) since $S\subseteq \Sbar\subseteq\aspan{S}$ (since astral linear subspaces are closed, for instance, by Theorem~\ref{thm:ast-lin-sub-equivs}\ref{thm:ast-lin-sub-equivs:a},\ref{thm:ast-lin-sub-equivs:b}). This shows that $\Sperp = \Sbarperp = (\aspan{S})^\bot$. Replacing $S$ with $\rspan{S}$, this same argument shows that $(\rspan{S})^\bot = (\clbar{\rspan{S}})^\bot=(\aspan{S})^\bot$ since $\aspan{S} = \clbar{\rspan{S}}$ (by \Cref{thm:rspan-is-aspan-in-rn}). \pfpart{Part~(\ref{pr:perp-props-new:d}):} The proof is analogous to that of Proposition~\ref{pr:aperp-props}(\ref{pr:aperp-props:c}). \qedhere \end{proof-parts} \end{proof} We next consider the effect of applying orthogonal complement operations twice, generalizing the standard result that $\Uperperp=\spn{U}$ for $U\subseteq\Rn$ (Proposition~\ref{pr:std-perp-props}\ref{pr:std-perp-props:c}): \begin{theorem} \label{thm:dub-perp} Let $S\subseteq\extspace$ and let $U\subseteq\Rn$. \begin{letter-compact} \item \label{thm:dub-perp:d} $\Uperpaperp = \clbar{\spn{U}}$. Thus, if $U$ is a linear subspace, then $\Uperpaperp = \Ubar$. \item \label{thm:dub-perp:c} $\Uaperperp = \spn{U}$. Thus, if $U$ is a linear subspace, then $\Uaperperp = U$. \item \label{thm:dub-perp:a} $\Sperpaperp = \aspan{S}$. 
Thus, if $S$ is an astral linear subspace, then $\Sperpaperp = S$. \item \label{thm:dub-perp:b} $\Sperperp = \rspan{S}$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:dub-perp:d}):} We have \[ \Uperpaperp = \clbar{\Uperperp} = \clbar{\spn{U}}. \] The first equality is by Proposition~\ref{pr:aperp-props}(\ref{pr:aperp-props:e}), applied to $\Uperp$. The second is by Proposition~\ref{pr:std-perp-props}(\ref{pr:std-perp-props:c}). \pfpart{Part~(\ref{thm:dub-perp:c}):} We have \[ \Uaperperp = \aperperp{(\spn{U})} = \Parens{\clbar{(\spn{U})^\bot}}^\bot = \perperp{(\spn{U})} = \spn{U}. \] The first equality is by Proposition~\ref{pr:aperp-props}(\ref{pr:aperp-props:b}). The second is by Proposition~\ref{pr:aperp-props}(\ref{pr:aperp-props:e}). The third is by Proposition~\ref{pr:perp-props-new}(\ref{pr:perp-props-new:c}). And the fourth is by Proposition~\ref{pr:std-perp-props}(\ref{pr:std-perp-props:c}). \pfpart{Part~(\ref{thm:dub-perp:a}):} Let $L=\rspan{S}$. Then \[ \aspan{S} = \Lbar = \Lperpaperp = \Sperpaperp. \] The first equality is from \Cref{thm:rspan-is-aspan-in-rn}. The second is by part~(\ref{thm:dub-perp:d}) since $L$ is a linear subspace. And the third is by Proposition~\ref{pr:perp-props-new}(\ref{pr:perp-props-new:c}). \pfpart{Part~(\ref{thm:dub-perp:b}):} We have \[ \Sperperp = \Sperpaperp \cap \Rn = (\aspan{S}) \cap \Rn = \rspan{S}, \] where the first equality follows from definitions (Eqs.~\ref{eq:std-ortho-comp-defn-rev} and~\ref{eq:aperp-defn}), the second is by part~(\ref{thm:dub-perp:a}), and the third is by \Cref{thm:rspan-is-aspan-in-rn}.
\qedhere \end{proof-parts} \end{proof} Parts~(\ref{thm:dub-perp:c}) and~(\ref{thm:dub-perp:a}) of Theorem~\ref{thm:dub-perp} show that the orthogonal complement operations define a one-to-one correspondence between astral linear subspaces and standard linear subspaces with $M\mapsto \Mperp$ mapping astral linear subspaces $M\subseteq\extspace$ to linear subspaces in $\Rn$, and $L\mapsto \Laperp$ mapping standard linear subspaces $L\subseteq\Rn$ to astral linear subspaces. The theorem shows that these operations are inverses of each other, and therefore are bijective. Theorem~\ref{thm:dub-perp}(\ref{thm:dub-perp:a}) can be rephrased to say that an astral point $\zbar$ is in $\aspan{S}$ if and only if, for all $\uu\in\Rn$, if $\xbar\cdot\uu=0$ for all $\xbar\in S$, then $\zbar\cdot\uu=0$. As such, Proposition~\ref{pr:rspan-sing-equiv-dual} can now be seen to be a special case of this theorem. \section{Convex functions} \label{sec:conv:fct} Having defined and studied convex sets in astral space, we next consider how the notion of a convex function can be extended using similar ideas. In this chapter, we give a definition and characterizations of what convexity means for astral functions. We also look at operations for constructing astral convex functions. \subsection{Definition and characterization with \texorpdfstring{$\lambda$-midpoints}{\lambdaUni-midpoints} } Recall that a function $f:\Rn\rightarrow\Rext$ is convex if its epigraph, which is a subset of $\Rn\times\R=\Rnp$, is convex. In an analogous way, we say that an astral function $F:\extspace\rightarrow\Rext$ is \emph{convex} if its epigraph, $\epi{F}$, is convex when viewed as a subset of $\extspacnp$. As a first observation, the extension $\fext$ of any convex function $f:\Rn\rightarrow\Rext$ is convex: \begin{theorem} \label{thm:fext-convex} Let $f:\Rn\rightarrow\Rext$ be convex. Then $\fext$, its lower semicontinuous extension, is also convex.
\end{theorem} \begin{proof} Since $f$ is convex, its epigraph, $\epi{f}$, is a convex subset of $\R^{n+1}$, so $\epibarbar{f}$, its closure in $\extspacnp$, is convex by Theorem~\ref{thm:e:6}. Furthermore, the set \[ \finclset = \{\zbar\in\extspacnp : \zbar\cdot \rpair{\zero}{1} > -\infty \} \cap \{\zbar\in\extspacnp : \zbar\cdot \rpair{\zero}{1} < +\infty \} \] is convex by Proposition~\ref{pr:e1}(\ref{pr:e1:b},\ref{pr:e1:c}); here, $\finclset$ is as given in \eqref{eq:finclset-defn}. Thus, the intersection $\epibarbar{f}\cap\finclset$ is convex, implying that $\epi{\fext}$, viewed as a subset of $\extspacnp$, is convex by Proposition~\ref{pr:wasthm:e:3}(\ref{pr:wasthm:e:3b}). Therefore, $\fext$ is convex. \end{proof} In studying standard convexity of functions, it is often convenient and more intuitive to work with a characterization like the one in Proposition~\ref{pr:stand-cvx-fcn-char}(\ref{pr:stand-cvx-fcn-char:b}), which states that $f:\Rn\rightarrow\Rext$ is convex if and only if \begin{equation} \label{eqn:stand-f-cvx-ineq} f\Parens{(1-\lambda)\xx + \lambda \yy} \leq (1-\lambda) f(\xx) + \lambda f(\yy) \end{equation} for all $\xx,\yy\in\dom{f}$ and all $\lambda\in [0,1]$. Indeed, when working exclusively with proper functions, \eqref{eqn:stand-f-cvx-ineq} is sometimes presented as a \emph{definition} of convexity. Astral convex functions can be similarly characterized in a useful way. To generalize \eqref{eqn:stand-f-cvx-ineq} to astral functions, we naturally need to replace $f$ with an astral function $F:\extspace\rightarrow\Rext$, and $\xx,\yy$ with astral points $\xbar,\ybar\in\dom{F}\subseteq\extspace$. In this way, the right-hand side straightforwardly generalizes to astral functions. But what about $(1-\lambda)\xx + \lambda \yy$, which appears on the left-hand side? Substituting directly with astral points, this expression becomes $(1-\lambda)\xbar + \lambda\ybar$, which does not make sense since astral points cannot be added using ordinary vector addition. What then should replace this expression?
Surely, this inequality should apply to points on the segment joining $\xbar$ and $\ybar$, as in the standard case; but which point or points precisely? To answer this, we return to Corollary~\ref{cor:e:1}, which characterized the segment joining two points $\xbar,\ybar\in\extspace$, showing specifically that a point is in $\lb{\xbar}{\ybar}$ if and only if there exist sequences with the properties stated in that corollary. Motivated by that characterization, we focus in on points on the segment associated with sequences with a certain property that will allow us to address the issue above: \begin{definition} Let $\xbar,\ybar\in\extspace$, and let $\lambda\in[0,1]$. We say that a point $\zbar\in\extspace$ is a \emph{$\lambda$-midpoint of $\xbar$ and $\ybar$} if there exist sequences $\seq{\xx_t}$, $\seq{\yy_t}$ in $\Rn$, and $\seq{\lambda_t}$ in $[0,1]$ with the properties stated in Corollary~\ref{cor:e:1} (that is, $\xx_t\rightarrow\xbar$, $\yy_t\rightarrow\ybar$ and $(1-\lambda_t) \xx_t + \lambda_t \yy_t \rightarrow \zbar$), and additionally with $\lambda_t\rightarrow\lambda$. We write $\lammid{\xbar}{\ybar}$ for the set of all $\lambda$-midpoints of $\xbar$ and $\ybar$. \end{definition} Thus, Corollary~\ref{cor:e:1} states that a point $\zbar$ is in $\lb{\xbar}{\ybar}$ if and only if it is a $\lambda$-midpoint of $\xbar$ and $\ybar$ for some $\lambda\in [0,1]$; that is, \begin{equation} \label{eq:seg-as-lam-midpts} \lb{\xbar}{\ybar} = \bigcup_{\lambda\in [0,1]} \lammid{\xbar}{\ybar}. \end{equation} The only $\lambda$-midpoint of finite points $\xx,\yy\in\Rn$ is $(1-\lambda) \xx + \lambda \yy$: \begin{proposition} \label{pr:lam-mid-finite} Let $\xx,\yy\in\Rn$, and let $\lambda\in[0,1]$. Then \[ \lammid{\xx}{\yy} = \Braces{ (1-\lambda) \xx + \lambda \yy }. 
\] \end{proposition} \begin{proof} For any sequences $\seq{\xx_t}$, $\seq{\yy_t}$ in $\Rn$ and $\seq{\lambda_t}$ in $[0,1]$ with $\xx_t\rightarrow\xx$, $\yy_t\rightarrow\yy$ and $\lambda_t\rightarrow\lambda$, we must have $(1-\lambda_t)\xx_t+\lambda_t \yy_t\rightarrow(1-\lambda)\xx+\lambda \yy$ by continuity of vector operations. Further, there exist such sequences (such as $\xx_t=\xx$, $\yy_t=\yy$, $\lambda_t=\lambda$). Therefore, $\lammid{\xx}{\yy}$ consists of exactly this one point. \end{proof} So a pair of finite points has exactly one $\lambda$-midpoint. In general, however, a pair of astral points might have many $\lambda$-midpoints. As an extreme example, in $\Rext$, it can be argued (either directly or using Theorem~\ref{thm:lammid-char-seqsum} below) that $\lammid{-\infty}{+\infty} = \Rext$ for all $\lambda\in [0,1]$. The expression appearing on the left-hand side of \eqref{eqn:stand-f-cvx-ineq}, $(1-\lambda) \xx + \lambda \yy$, is, as just seen, the unique $\lambda$-midpoint of $\xx$ and $\yy$, suggesting the plausibility of using $\lambda$-midpoints to generalize that equation and thereby Proposition~\ref{pr:stand-cvx-fcn-char}(\ref{pr:stand-cvx-fcn-char:b}) to astral functions. Indeed, in a moment, we will present just such a generalization in Theorem~\ref{thm:ast-F-char-fcn-vals}. Before proving that theorem, we give in the next proposition some general rules for deriving $\lambda$-midpoints. \begin{proposition} \label{pr:lam-mid-props} Let $\xbar_0,\xbar_1\in\extspace$, let $\lambda\in[0,1]$, and let $\xbar\in\lammid{\xbar_0}{\xbar_1}$. \begin{letter-compact} \item \label{pr:lam-mid-props:aff} Let $\A\in\R^{m\times n}$, let $\bbar\in\extspac{m}$, and let $F(\zbar)=\bbar\plusl\A\zbar$ for $\zbar\in\extspace$. Then \[ F(\xbar) \in \lammid{F(\xbar_0)}{F(\xbar_1)}. \] \item \label{pr:lam-mid-props:c} Let $\yy_0,\yy_1\in\Rn$ and let $\yy=(1-\lambda) \yy_0 + \lambda \yy_1$.
Then \[\xbar\plusl\yy \in \lammid{\xbar_0\plusl\yy_0}{\xbar_1\plusl\yy_1}.\] \item \label{pr:lam-mid-props:d} Let $y_0,y_1\in\R$ and let $y=(1-\lambda) y_0 + \lambda y_1$. Then \[\rpair{\xbar}{y} \in \lammid{\rpair{\xbar_0}{y_0}}{\rpair{\xbar_1}{y_1}}.\] \end{letter-compact} \end{proposition} \begin{proof} Since $\xbar\in\lammid{\xbar_0}{\xbar_1}$, there exist sequences $\seq{\xx_{it}}$ in $\Rn$, for $i\in\{0,1\}$, and $\seq{\lambda_t}$ in $[0,1]$, such that $\xx_{it}\rightarrow\xbar_i$, $\lambda_t\rightarrow\lambda$, and such that $\xhat_t\rightarrow\xbar$ where $\xhat_t=(1-\lambda_t)\xx_{0t}+\lambda_t\xx_{1t}$. \begin{proof-parts} \pfpart{Part~(\ref{pr:lam-mid-props:aff}):} We can write $\bbar=\limrays{\vv_1,\ldots,\vv_k}\plusl\qq$ for some $\qq,\vv_1,\ldots,\vv_k\in\Rm$. For $j=1,\ldots,k+1$, let \[G_j(\zbar)=\limrays{\vv_{j},\vv_{j+1},\ldots,\vv_k}\plusl\qq\plusl\A\zbar\] for $\zbar\in\extspace$. We prove \begin{equation} \label{eq:pr:lam-mid-props:1} G_j(\xbar)\in\lammid{G_j(\xbar_0)}{G_j(\xbar_1)} \end{equation} by backwards induction on $j$. Since $F=G_1$, this will give the desired result. We begin with the base case that $j=k+1$. For $i\in\{0,1\}$, and for all $t$, let $\xx'_{it}=\qq+\A\xx_{it}$. Then by continuity of affine maps (\Cref{cor:aff-cont}), \[ \xx'_{it}\rightarrow\qq\plusl\A\xbar_i=G_{k+1}(\xbar_i), \] and also, \[ (1-\lambda_t)\xx'_{0t}+\lambda_t\xx'_{1t} = \qq + \A \xhat_t \rightarrow \qq\plusl\A\xbar = G_{k+1}(\xbar), \] proving the claim in this case. For the inductive step, suppose $j\leq k$ and that $G_{j+1}(\xbar)$ is a $\lambda$-midpoint of $G_{j+1}(\xbar_0)$ and $G_{j+1}(\xbar_1)$. Then there exist sequences $\seq{\zz_{it}}$ in $\Rm$, for $i\in\{0,1\}$, and $\seq{\lambda'_t}$ in $[0,1]$ such that $\zz_{it}\rightarrow G_{j+1}(\xbar_i)$, $\lambda'_t\rightarrow\lambda$, and such that $\zhat_t\rightarrow G_{j+1}(\xbar)$ where $\zhat_t=(1-\lambda'_t)\zz_{0t}+\lambda'_t\zz_{1t}$.
For all $t$, let \[ \alpha_t = t \Parens{1+\max\{\norm{\zz_{0t}}, \norm{\zz_{1t}}, \norm{\zhat_t}\}}. \] Also, for $i\in\{0,1\}$, let $\zz'_{it}=\alpha_t \vv_j + \zz_{it}$. Then by Lemma~\ref{lem:mod-seq-for-new-dom-dir}, \[ \zz'_{it} \rightarrow \limray{\vv_j}\plusl G_{j+1}(\xbar_i) = G_j(\xbar_i). \] Furthermore, \[ (1-\lambda'_t) \zz'_{0t} + \lambda'_t \zz'_{1t} = \alpha_t \vv_j + \zhat_t \rightarrow \limray{\vv_j} \plusl G_{j+1}(\xbar) = G_j(\xbar), \] with convergence again following from Lemma~\ref{lem:mod-seq-for-new-dom-dir}. This proves \eqref{eq:pr:lam-mid-props:1}, completing the induction. \pfpart{Part~(\ref{pr:lam-mid-props:c}):} For $i\in\{0,1\}$, and for all $t$, let $\ww_{it}=\xx_{it} + \yy_i$. Then $\ww_{it}\rightarrow\xbar_i\plusl\yy_i$, by Proposition~\ref{pr:i:7}(\ref{pr:i:7f}) (or Proposition~\ref{pr:sum-seq-commuting-limits}). Also, by continuity of vector operations, $(1-\lambda_t)\yy_0+\lambda_t \yy_1\rightarrow \yy$. Thus, \[ (1-\lambda_t) \ww_{0t} + \lambda_t \ww_{1t} = \xhat_t + \Bracks{(1-\lambda_t) \yy_0 + \lambda_t \yy_1} \rightarrow \xbar\plusl\yy \] by Proposition~\ref{pr:sum-seq-commuting-limits}, proving the claim. \pfpart{Part~(\ref{pr:lam-mid-props:d}):} Let $\homat=[\Idnn,\zerov{n}]$ (as in Eq.~\ref{eqn:homat-def}). From part~(\ref{pr:lam-mid-props:aff}), $\trans{\homat}\xbar \in \lammid{\trans{\homat}\xbar_0}{\trans{\homat}\xbar_1}$. Therefore, \begin{align*} \rpair{\xbar}{y} &= \trans{\homat} \xbar \plusl \rpair{\zero}{y} \\ &\in \lammid{\trans{\homat} \xbar_0 \plusl \rpair{\zero}{y_0}} {\trans{\homat} \xbar_1 \plusl \rpair{\zero}{y_1}} \\ &= \lammid{\rpair{\xbar_0}{y_0}}{\rpair{\xbar_1}{y_1}}, \end{align*} with the inclusion following from part~(\ref{pr:lam-mid-props:c}) (and since $\rpair{\zero}{y}=(1-\lambda) \rpair{\zero}{y_0} + \lambda \rpair{\zero}{y_1} $), and both equalities from Proposition~\ref{pr:xy-pairs-props}(\ref{pr:xy-pairs-props:a}). 
\qedhere \end{proof-parts} \end{proof} \begin{theorem} \label{thm:ast-F-char-fcn-vals} Let $F:\extspace\rightarrow\Rext$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:ast-F-char-fcn-vals:a} $F$ is convex. \item \label{thm:ast-F-char-fcn-vals:b} For all $\xbar_0,\xbar_1\in\dom{F}$, for all $\lambda\in [0,1]$, and for all $\xbar\in\lammid{\xbar_0}{\xbar_1}$, \begin{equation} \label{eq:thm:ast-F-char-fcn-vals:1} F(\xbar) \leq (1-\lambda) F(\xbar_0) + \lambda F(\xbar_1). \end{equation} \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:ast-F-char-fcn-vals:a}) $\Rightarrow$ (\ref{thm:ast-F-char-fcn-vals:b}): } Suppose $F$ is convex, and let $\xbar_0,\xbar_1\in\dom{F}$, $\lambda\in [0,1]$, and $\xbar\in\lammid{\xbar_0}{\xbar_1}$. We aim to prove \eqref{eq:thm:ast-F-char-fcn-vals:1}. For $i\in\{0,1\}$, let $y_i\in\R$ with $y_i\geq F(\xbar_i)$ so that $\rpair{\xbar_i}{y_i}\in\epi{F}$. Let $y=(1-\lambda) y_0 + \lambda y_1$. Then \begin{eqnarray*} \rpair{\xbar}{y} &\in& \lammid{\rpair{\xbar_0}{y_0}}{\rpair{\xbar_1}{y_1}} \\ &\subseteq& \lb{\rpair{\xbar_0}{y_0}}{\rpair{\xbar_1}{y_1}} \\ &\subseteq& \epi{F}. \end{eqnarray*} The first inclusion is by Proposition~\ref{pr:lam-mid-props}(\ref{pr:lam-mid-props:d}). The second is by Corollary~\ref{cor:e:1} (or equivalently, Eq.~\ref{eq:seg-as-lam-midpts}). The last is because $\epi{F}$ is convex (since $F$ is). Thus, $\rpair{\xbar}{y}\in\epi{F}$, so $F(\xbar)\leq y = (1-\lambda) y_0 + \lambda y_1$. Since this holds for all $y_0\geq F(\xbar_0)$ and $y_1\geq F(\xbar_1)$, this proves \eqref{eq:thm:ast-F-char-fcn-vals:1}. \pfpart{(\ref{thm:ast-F-char-fcn-vals:b}) $\Rightarrow$ (\ref{thm:ast-F-char-fcn-vals:a}): } Suppose statement~(\ref{thm:ast-F-char-fcn-vals:b}) holds. Let $\rpair{\xbar_i}{y_i}\in\epi{F}$, for $i\in\{0,1\}$. Let $\zbar\in\lb{\rpair{\xbar_0}{y_0}}{\rpair{\xbar_1}{y_1}}$. To prove convexity, we aim to show $\zbar\in\epi{F}$. 
By \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:c}) (and Proposition~\ref{pr:xy-pairs-props}\ref{pr:xy-pairs-props:b}), \[ -\infty < \min\{y_0,y_1\} \leq \zbar\cdot\rpair{\zero}{1} \leq \max\{y_0,y_1\} < +\infty. \] Thus, $\zbar\cdot\rpair{\zero}{1}\in\R$, so by Proposition~\ref{pr:xy-pairs-props}(\ref{pr:xy-pairs-props:d}), we can write $\zbar=\rpair{\xbar}{y}$ for some $\xbar\in\extspace$ and $y\in\R$. By Corollary~\ref{cor:e:1}, $\rpair{\xbar}{y}$ must be a $\lambda$-midpoint of $\rpair{\xbar_0}{y_0}$ and $\rpair{\xbar_1}{y_1}$, for some $\lambda\in[0,1]$. Letting $\homat=[\Idnn,\zerov{n}]$ (as in Eq.~\ref{eqn:homat-def}), it follows that \[ \xbar = \homat \rpair{\xbar}{y} \in \lammid{\homat \rpair{\xbar_0}{y_0}}{\homat \rpair{\xbar_1}{y_1}} = \lammid{\xbar_0}{\xbar_1}, \] with the inclusion from Proposition~\ref{pr:lam-mid-props}(\ref{pr:lam-mid-props:aff}), and the equalities from Proposition~\ref{pr:xy-pairs-props}(\ref{pr:xy-pairs-props:c}). By the same reasoning, letting $\EE=\trans{\rpair{\zero}{1}}$, we have \[ y = \EE \rpair{\xbar}{y} \in \lammid{\EE \rpair{\xbar_0}{y_0}}{\EE \rpair{\xbar_1}{y_1}} = \lammid{y_0}{y_1}, \] with the equalities from Proposition~\ref{pr:xy-pairs-props}(\ref{pr:xy-pairs-props:b}) (as well as \Cref{pr:trans-uu-xbar}). From Proposition~\ref{pr:lam-mid-finite}, this then implies that $y=(1-\lambda)y_0+\lambda y_1$. Since $\xbar_0,\xbar_1\in\dom{F}$ and since $\xbar\in\lammid{\xbar_0}{\xbar_1}$, \eqref{eq:thm:ast-F-char-fcn-vals:1} must hold so that \[ F(\xbar) \leq (1-\lambda) F(\xbar_0) + \lambda F(\xbar_1) \leq (1-\lambda) y_0 + \lambda y_1 = y. \] Thus, $\zbar=\rpair{\xbar}{y}\in\epi{F}$, completing the proof. \qedhere \end{proof-parts} \end{proof} Theorem~\ref{thm:ast-F-char-fcn-vals} generalizes the characterization of standard convex functions that was given in part~(\ref{pr:stand-cvx-fcn-char:b}) of \Cref{pr:stand-cvx-fcn-char}. 
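For finite points, the two characterizations coincide: by Proposition~\ref{pr:lam-mid-finite}, the only $\lambda$-midpoint of $\xx,\yy\in\Rn$ is $(1-\lambda)\xx+\lambda\yy$, so in that case \eqref{eq:thm:ast-F-char-fcn-vals:1} reduces to the standard inequality \eqref{eqn:stand-f-cvx-ineq}. The following is a purely illustrative numerical sketch of this finite-point specialization; the stand-in convex function $f(\xx)=\norm{\xx}^2$ and the sampled points are our own choices, not part of the formal development.

```python
# Purely illustrative check of the standard convexity inequality
#   f((1-lam)*x + lam*y) <= (1-lam)*f(x) + lam*f(y),
# the finite-point specialization of the midpoint characterization.
# The function f(x) = ||x||^2 is a stand-in convex example.

def f(x):
    """Squared Euclidean norm: a simple convex function on R^n."""
    return sum(xi * xi for xi in x)

def convex_combo(lam, x, y):
    """The point (1-lam)*x + lam*y, the unique lam-midpoint of x and y."""
    return [(1.0 - lam) * xi + lam * yi for xi, yi in zip(x, y)]

x, y = [1.0, -2.0], [3.0, 0.5]
for lam in [0.0, 0.25, 0.5, 0.75, 1.0]:
    lhs = f(convex_combo(lam, x, y))
    rhs = (1.0 - lam) * f(x) + lam * f(y)
    assert lhs <= rhs + 1e-12, (lam, lhs, rhs)
```

Any convex $f$ and any finite sample of points would do here; the check is only meant to make the finite-point case concrete.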
Part~(\ref{pr:stand-cvx-fcn-char:c}) of that theorem provides a slightly different characterization whose natural astral analogue would say that a function $F:\extspace\rightarrow\Rext$ is convex if and only if \eqref{eq:thm:ast-F-char-fcn-vals:1} holds whenever $F(\xbar_0)$ and $F(\xbar_1)$ are summable, for all $\xbar_0,\xbar_1\in\extspace$, for all $\lambda\in [0,1]$, and for all $\xbar\in\lammid{\xbar_0}{\xbar_1}$. Such a characterization, however, does not hold in general; in other words, it is possible for $F$ to be convex, but for this latter property not to hold, as shown in the next example: \begin{example} Let $f:\R\rightarrow\R$ be the linear function $f(x)=x$ for $x\in\R$, and let $F=\fext$ be its extension, which is convex (by Theorem~\ref{thm:fext-convex}). Let $\barx_0=0$, $\barx_1=+\infty$, $\lambda=0$, and let $\barx=1$. Then $\barx$ is a $0$-midpoint of $\barx_0$ and $\barx_1$. (To see this, let $x_{0t}=0$, $x_{1t}=t$ and let $\lambda_t=1/t$. Then $x_{0t}\rightarrow 0$, $x_{1t}\rightarrow+\infty$, $\lambda_t\rightarrow 0$, and $(1-\lambda_t)x_{0t}+\lambda_t x_{1t}\rightarrow 1$.) Also, $F(\barx_0)=0$ and $F(\barx_1)=+\infty$ are summable. Nonetheless, $ F(\barx)=1 > 0 = (1-\lambda) F(\barx_0) + \lambda F(\barx_1) $. Thus, \eqref{eq:thm:ast-F-char-fcn-vals:1} does not hold in this case. \end{example} Here is a simple corollary of Theorem~\ref{thm:ast-F-char-fcn-vals}: \begin{corollary} \label{cor:F-conv-max-seg} Let $F:\extspace\rightarrow\Rext$ be convex. Let $\xbar,\ybar\in\extspace$, and let $\zbar\in\lb{\xbar}{\ybar}$. Then $F(\zbar)\leq \max\{F(\xbar),F(\ybar)\}$. \end{corollary} \begin{proof} If either $F(\xbar)=+\infty$ or $F(\ybar)=+\infty$, then the claim is immediate, so we assume $\xbar,\ybar\in\dom{F}$. 
Then by Corollary~\ref{cor:e:1}, $\zbar$ is a $\lambda$-midpoint of $\xbar,\ybar$, for some $\lambda\in [0,1]$, implying, by Theorem~\ref{thm:ast-F-char-fcn-vals}, that \[ F(\zbar) \leq (1-\lambda) F(\xbar) + \lambda F(\ybar) \leq \max\{ F(\xbar), F(\ybar) \}. \] \end{proof} Consequently, as in standard convex analysis, the effective domain and all sublevel sets of an astrally convex function are convex: \begin{theorem} \label{thm:f:9} Let $F:\extspace\rightarrow\Rext$ be convex, let $\alpha\in\Rext$, and let \begin{align*} S &= \Braces{\xbar\in\extspace : F(\xbar) \leq \alpha} \\ S' &= \Braces{\xbar\in\extspace : F(\xbar) < \alpha}. \end{align*} Then $S$ and $S'$ are both convex. Consequently, $\dom F$, the effective domain of $F$, is convex as well. \end{theorem} \begin{proof} Let $\xbar,\ybar\in S$, and let $\zbar\in\lb{\xbar}{\ybar}$. Then, by Corollary~\ref{cor:F-conv-max-seg}, $F(\zbar)\leq\max\set{F(\xbar),F(\ybar)}\leq\alpha$, so $\zbar\in S$. Therefore, $S$ is convex, as claimed. The proof that $S'$ is convex is similar. In particular, taking $\alpha=+\infty$, this shows that $\dom{F}$ is convex. \end{proof} We show next how $\lammid{\xbar}{\ybar}$, the set of $\lambda$-midpoints between $\xbar,\ybar\in\extspace$, can be expressed precisely in terms of sequential sums. For $\lambda\in (0,1)$, this set is simply $((1-\lambda)\xbar) \seqsum (\lambda\ybar)$, a natural generalization of Proposition~\ref{pr:lam-mid-finite} for finite points, and one that might be expected from the similarity between the definitions of sequential sums and $\lambda$-midpoints. For $\lambda=0$, the characterization is more subtle: $\zrmid{\xbar}{\ybar}$ does indeed include $\xbar\seqsum\zero=\{\xbar\}$, but it also includes $\xbar\seqsum\zbar$ for every point $\zbar$ on the segment joining $\zero$ and $\ybar$'s icon. (We do not give an expression for $1$-midpoints since in general $\midpt{1}{\xbar}{\ybar}=\zrmid{\ybar}{\xbar}$ for all $\xbar,\ybar\in\extspace$.) 
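The $0$-midpoint example given earlier (with $\barx_0=0$, $\barx_1=+\infty$ in $\Rext$) can be replayed numerically. The following sketch, again purely illustrative, evaluates the combinations $(1-\lambda_t)x_{0t}+\lambda_t x_{1t}$ for the sequences $x_{0t}=0$, $x_{1t}=t$, $\lambda_t=1/t$ from that example, confirming that they are constantly $1$ even though $\lambda_t\rightarrow 0$.

```python
# Replays the sequences from the earlier example: x0_t = 0, x1_t = t,
# lambda_t = 1/t.  Every combination (1 - lambda_t)*x0_t + lambda_t*x1_t
# equals 1, so the limit is 1 even though lambda_t -> 0, illustrating
# that 1 is a 0-midpoint of 0 and +infinity.

def combination(t):
    """The t-th term (1 - lambda_t)*x0_t + lambda_t*x1_t."""
    lam_t = 1.0 / t
    x0_t = 0.0
    x1_t = float(t)
    return (1.0 - lam_t) * x0_t + lam_t * x1_t

terms = [combination(t) for t in range(1, 1001)]
assert all(abs(z - 1.0) < 1e-9 for z in terms)
```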
\begin{theorem} \label{thm:lammid-char-seqsum} Let $\xbar,\ybar\in\extspace$. \begin{letter-compact} \item \label{thm:lammid-char-seqsum:a} Let $\lambda\in (0,1)$. Then \[ \lammid{\xbar}{\ybar} = \Bracks{(1-\lambda)\xbar} \seqsum \Bracks{\lambda\ybar}. \] \item \label{thm:lammid-char-seqsum:b} Let $\ebar\in\corezn$ be the iconic part of $\ybar$ (meaning $\ybar\in\ebar\plusl\Rn$). Then \begin{equation} \label{eq:thm:lammid-char-seqsum:2} \zrmid{\xbar}{\ybar} = \xbar \seqsum \lb{\zero}{\ebar}. \end{equation} \item \label{thm:lammid-char-seqsum:c} Consequently, for all $\lambda\in[0,1]$, \[ \Bracks{(1-\lambda)\xbar} \seqsum \Bracks{\lambda\ybar} \subseteq \lammid{\xbar}{\ybar}. \] \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:lammid-char-seqsum:a}):} Let $\zbar\in\lammid{\xbar}{\ybar}$. Then there exist sequences $\seq{\xx_t},\seq{\yy_t}$ in $\Rn$ and $\seq{\lambda_t}$ in $[0,1]$ with $\xx_t\rightarrow\xbar$, $\yy_t\rightarrow\ybar$, $\lambda_t\rightarrow\lambda$, and $(1-\lambda_t)\xx_t + \lambda_t \yy_t \rightarrow \zbar$. For each $t$, let $\xx'_t = (1-\lambda_t)\xx_t$ and $\yy'_t = \lambda_t\yy_t$. Then $\xx'_t\rightarrow(1-\lambda)\xbar$ and $\yy'_t\rightarrow \lambda \ybar$ by Proposition~\ref{pr:scalar-prod-props}(\ref{pr:scalar-prod-props:e}). Since $\xx'_t+\yy'_t\rightarrow\zbar$, this proves $\zbar\in [(1-\lambda)\xbar] \seqsum [\lambda\ybar]$, as claimed. For the reverse inclusion, suppose now that $\zbar\in [(1-\lambda)\xbar] \seqsum [\lambda\ybar]$. Then there exists $\seq{\xx_t}$ and $\seq{\yy_t}$ in $\Rn$ with $\xx_t\rightarrow(1-\lambda)\xbar$, $\yy_t\rightarrow\lambda\ybar$, and $\xx_t+\yy_t\rightarrow\zbar$. For each $t$, let $\xx'_t=\xx_t / (1-\lambda)$ and $\yy'_t=\yy_t / \lambda$. Then $\yy'_t\rightarrow\ybar$ (as follows from Theorem~\ref{thm:mat-mult-def} with $\A=(1/\lambda)\Iden$). Similarly, $\xx'_t\rightarrow\xbar$. 
Since $(1-\lambda)\xx'_t+\lambda\yy'_t\rightarrow\zbar$, it follows that $\zbar$ is a $\lambda$-midpoint of $\xbar,\ybar$. \pfpart{Part~(\ref{thm:lammid-char-seqsum:b}):} Let $\VV\omm\plusl\qq$ be $\ybar$'s canonical representation where $\VV=[\vv_1,\dotsc,\vv_k]$ and $\qq,\vv_1,\ldots,\vv_k\in\Rn$. This implies $\ebar=\VV\omm$ (by Theorem~\ref{thm:icon-fin-decomp}). We first show $\zrmid{\xbar}{\ybar}\subseteq \xbar \seqsum \lb{\zero}{\ebar}$. Let $\zbar\in\zrmid{\xbar}{\ybar}$. Then there exist sequences $\seq{\xx_t},\seq{\yy_t}$ in $\Rn$ and $\seq{\lambda_t}$ in $[0,1]$ with $\xx_t\rightarrow\xbar$, $\yy_t\rightarrow\ybar$, $\lambda_t\rightarrow 0$, and $(1-\lambda_t)\xx_t + \lambda_t \yy_t \rightarrow \zbar$. For each $t$, let $\xx'_t = (1-\lambda_t)\xx_t$ and $\yy'_t = \lambda_t\yy_t$. Then $\xx'_t\rightarrow\xbar$ by Proposition~\ref{pr:scalar-prod-props}(\ref{pr:scalar-prod-props:e}). By sequential compactness, the sequence $\seq{\yy'_t}$ must have a convergent subsequence. By discarding all other elements (as well as corresponding elements of the other sequences), we can assume that the entire sequence converges to some point $\wbar$. We claim this point is in $\lb{\zero}{\ebar}$. Suppose, by way of contradiction, that it is not. Note first that $(1-\lambda_t)\zero + \lambda_t \yy_t\rightarrow\wbar$. Since $\yy_t\rightarrow\ybar$, this implies that $\wbar\in\lb{\zero}{\ybar}$ by Corollary~\ref{cor:e:1}. Further, since we have assumed $\wbar\not\in\lb{\zero}{\ebar}$, by Theorem~\ref{thm:lb-with-zero} (applied to $\ybar=\VV\omm\plusl\qq$, and again to $\ebar=\VV\omm$), the only possibility is that $\wbar=\VV\omm+\alpha\qq$ for some $\alpha>0$, and that $\qq\neq\zero$. Thus, $\yy'_t\cdot\qq\rightarrow\wbar\cdot\qq=\alpha\norm{\qq}^2>0$ (by Theorem~\ref{thm:i:1}\ref{thm:i:1c} and Proposition~\ref{pr:vtransu-zero}, since $\qq\perp\VV$). 
On the other hand, by similar reasoning, $\yy_t\cdot\qq\rightarrow\ybar\cdot\qq=\norm{\qq}^2$, so $\yy'_t\cdot\qq=\lambda_t\yy_t\cdot\qq\rightarrow 0$ by continuity of multiplication, and since $\lambda_t\rightarrow 0$. Having reached a contradiction, we conclude $\wbar\in\lb{\zero}{\ebar}$. Since $\xx'_t\rightarrow\xbar$, $\yy'_t\rightarrow\wbar$, and $\xx'_t+\yy'_t\rightarrow\zbar$, it then follows that $\zbar\in\xbar\seqsum\lb{\zero}{\ebar}$, proving the claim. For the reverse inclusion, suppose now that $\zbar\in\xbar\seqsum\lb{\zero}{\ebar}$. Then $\zbar\in\xbar\seqsum\wbar$ for some $\wbar\in\lb{\zero}{\ebar}$. Thus, there exist sequences $\seq{\xx_t},\seq{\ww_t}$ in $\Rn$ with $\xx_t\rightarrow\xbar$, $\ww_t\rightarrow\wbar$, and $\xx_t+\ww_t\rightarrow\zbar$. By Theorem~\ref{thm:lb-with-zero}, and since $\wbar\in\lb{\zero}{\VV\omm}$, we must have \[ \wbar = \limrays{\vv_1,\ldots,\vv_\ell} \plusl \alpha \vv_{\ell+1} \] for some $\ell\in\{0,\ldots,k\}$ and some $\alpha\geq 0$, where we define $\vv_{k+1}=\zero$, and where it is understood that $\alpha=0$ if $\ell=k$. Let $\VV'=[\vv_1,\ldots,\vv_\ell]$ and let $\rr= \alpha \vv_{\ell+1}$. For each $t$, by linear algebra, we can write $\ww_t = \VV' \bb_t + \rr_t$ for some (unique) $\bb_t\in\R^{\ell}$ and $\rr_t\in\Rn$ with $\rr_t\perp\VV'$. Further, these sequences must satisfy the conditions of Theorem~\ref{thm:seq-rep}. In particular, $\rr_t\rightarrow\rr$. Letting $\ww'_t=\VV'\bb_t+\rr=\ww_t+(\rr-\rr_t)$, it follows from Proposition~\ref{pr:i:7}(\ref{pr:i:7g}) that $\ww'_t\rightarrow\wbar$ and $ \xx_t+\ww'_t\rightarrow\zbar $.
Let \[ d= \begin{cases} \ell+1 & \text{if $\alpha=0$}\\ \ell+2 & \text{otherwise,} \end{cases} \] and let \begin{eqnarray*} \yy_t &=& e^t \ww'_t + \sum_{i=d}^k t^{k-i+1} \vv_i + \qq \\ &=& \begin{cases} \sum_{i=1}^\ell e^t b_{t,i} \vv_i + \sum_{i=\ell+1}^k t^{k-i+1} \vv_i + \qq & \text{if $\alpha=0$} \\ \sum_{i=1}^\ell e^t b_{t,i} \vv_i + \alpha e^t \vv_{\ell+1} + \sum_{i=\ell+2}^k t^{k-i+1} \vv_i + \qq & \text{otherwise.} \end{cases} \end{eqnarray*} Considering separately when $\alpha$ is zero or positive, it can be checked that the conditions of Theorem~\ref{thm:seq-rep} are satisfied, implying $\yy_t\rightarrow\ybar$. Also, let $\lambda_t=e^{-t}$ and $\xx'_t = \xx_t/(1-\lambda_t)$. Then $\lambda_t\rightarrow 0$ and $\xx'_t\rightarrow\xbar$ (by Proposition~\ref{pr:scalar-prod-props}\ref{pr:scalar-prod-props:e}). Further, \[ (1-\lambda_t) \xx'_t + \lambda_t \yy_t = \xx_t + \ww'_t + e^{-t} \Parens{\sum_{i=d}^k t^{k-i+1} \vv_i + \qq} \rightarrow \zbar \] since $\xx_t+\ww'_t\rightarrow\zbar$, and using Proposition~\ref{pr:i:7}(\ref{pr:i:7g}). Thus, $\zbar\in\zrmid{\xbar}{\ybar}$, completing the proof. \pfpart{Part~(\ref{thm:lammid-char-seqsum:c}):} If $\lambda\in(0,1)$, then the claim is immediate from part~(\ref{thm:lammid-char-seqsum:a}). The case $\lambda=0$ follows because $ \xbar \seqsum \zero \subseteq \zrmid{\xbar}{\ybar} $ by part~(\ref{thm:lammid-char-seqsum:b}) (and Proposition~\ref{pr:seqsum-props}\ref{pr:seqsum-props:b}). And for $\lambda=1$, as just argued, $ \zero \seqsum \ybar \subseteq \zrmid{\ybar}{\xbar} = \midpt{1}{\xbar}{\ybar} $. \end{proof-parts} \end{proof} \subsection{Astral halflines} \label{sec:ast-halflines} The form of the set on the right-hand side of \eqref{eq:thm:lammid-char-seqsum:2} may seem a bit obscure, but turns out to be related to the astral analogue of a halfline, as we pause now to discuss. 
For $\xx,\vv\in\Rn$, the \emph{halfline with endpoint $\xx$ in direction $\vv$}, denoted $\hfline{\xx}{\vv}$, is the set \begin{equation} \label{eq:std-halfline-defn} \hfline{\xx}{\vv} = \Braces{ \lambda \vv + \xx : \lambda \in \Rpos } = \xx+(\ray{\vv}). \end{equation} How can we derive an astral analogue of this set, for an astral ``endpoint'' $\xbar\in\extspace$, and an astral direction $\vbar\in\extspace$? Naturally, we can replace the standard ray, $\ray{\vv}$, with an astral ray through $\vbar$, $\aray{\vbar}$, as was defined on analogy to standard rays in Section~\ref{sec:cones-basic}. For the ordinary addition of sets used on the right-hand side of \eqref{eq:std-halfline-defn} (for the sum of the singleton $\{\xx\}$ with $\ray{\vv}$), we would argue that the sequential sum operation from Section~\ref{sec:seq-sum} is the right astral analogue since it has many similar, favorable properties; for instance, it is commutative and preserves convexity and closedness (\Cref{prop:seqsum-multi}\ref{prop:seqsum-multi:closed}\ref{prop:seqsum-multi:convex}), unlike leftward addition. All this leads to: \begin{definition} Let $\xbar,\vbar\in\extspace$. The \emph{astral halfline with endpoint $\xbar$ in direction $\vbar$}, denoted $\ahfline{\xbar}{\vbar}$, is the set \begin{equation} \label{eq:ast-hline-defn} \ahfline{\xbar}{\vbar} = \xbar\seqsum (\aray{\vbar}). \end{equation} \end{definition} By Theorem~\ref{thm:oconichull-equals-ocvxhull}, this is equivalent to \begin{equation} \label{eq:ast-hline-defn-alt} \ahfline{\xbar}{\vbar} = \xbar\seqsum \lb{\zero}{\limray{\vbar}}. \end{equation} Note that, since $\limray{\vbar}$ is an icon, the expression on the right has the same form as in \eqref{eq:thm:lammid-char-seqsum:2}. 
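To make the definition concrete, we record a simple one-dimensional instance; the next two facts used here ($\aray{1}=[0,+\infty]$, and that the sequential sum of a finite point with any single astral point is a singleton) follow by direct computation from the definitions:
\begin{example}
Let $n=1$, let $\xbar=x\in\R$, and let $\vbar=1$. The astral ray through $1$ is $\aray{1}=[0,+\infty]$, and for each $\ybar\in[0,+\infty]$, since $x$ is finite, $x\seqsum\ybar=\{x\plusl\ybar\}$. Thus,
\[
\ahfline{x}{1}
=
x \seqsum [0,+\infty]
=
[x,+\infty],
\]
which is the ordinary halfline $\hfline{x}{1}=[x,+\infty)$ adjoined with its limit point $+\infty=\limray{1}\plusl x$.
\end{example}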
Similar to earlier theorems relating astral sets to sequences of points in their standard analogues (such as Corollaries~\ref{cor:e:1} and~\ref{cor:aray-and-seqs}), we show next that a point is included in the astral halfline with endpoint $\xbar$ in direction $\vbar$ if and only if it is the limit of points on standard halflines with endpoints and directions converging respectively to $\xbar$ and $\vbar$. \begin{theorem} \label{thm:ahfline-seq} Let $\xbar,\vbar,\zbar\in\extspace$. Then $\zbar\in\ahfline{\xbar}{\vbar}$ if and only if there exist sequences $\seq{\xx_t}$ in $\Rn$, $\seq{\lambda_t}$ in $\Rpos$, and a span-bound sequence $\seq{\vv_t}$ in $\Rn$ such that $\xx_t\rightarrow\xbar$, $\vv_t\rightarrow\vbar$, and $\xx_t+\lambda_t\vv_t\rightarrow\zbar$. \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{``If'' ($\Leftarrow$):} Suppose sequences as stated in the theorem exist. For all $t$, let $\yy_t=\lambda_t\vv_t$. Then by sequential compactness, the sequence $\seq{\yy_t}$ must have a convergent subsequence; by discarding all other elements (as well as corresponding elements of the other sequences), we can assume that the entire sequence converges to some point $\ybar\in\extspace$. Then $\ybar\in\aray{\vbar}$ by Corollary~\ref{cor:aray-and-seqs}. Since $\xx_t+\yy_t\rightarrow\zbar$, it follows that \[ \zbar \in \xbar\seqsum\ybar \subseteq \xbar\seqsum(\aray{\vbar}) = \ahfline{\xbar}{\vbar}. \] \pfpart{``Only if'' ($\Rightarrow$):} Suppose $\zbar\in\ahfline{\xbar}{\vbar}$. Then $\zbar\in\zrmid{\xbar}{\limray{\vbar}}$ by Theorem~\ref{thm:lammid-char-seqsum}(\ref{thm:lammid-char-seqsum:b}) and \eqref{eq:ast-hline-defn-alt}. Therefore, there exist sequences $\seq{\xx_t}$ and $\seq{\yy_t}$ in $\Rn$, and $\seq{\gamma_t}$ in $[0,1]$ such that $\xx_t\rightarrow\xbar$, $\yy_t\rightarrow\limray{\vbar}$, $\gamma_t\rightarrow 0$, and $\zz_t=(1-\gamma_t)\xx_t + \gamma_t \yy_t \rightarrow \zbar$. 
Without loss of generality, we assume $\gamma_t<1$ for all $t$, since otherwise we can discard all elements $t$ from all sequences for which $\gamma_t=1$, of which there can only be finitely many. Also, we can assume without loss of generality that $\seq{\yy_t}$ is span-bound. This is because if it is not, then by Theorem~\ref{thm:spam-limit-seqs-exist}, there exists a span-bound sequence $\seq{\yy'_t}$ converging to $\limray{\vbar}$ with $\yy_t-\yy'_t\rightarrow\zero$, implying that \[ (1-\gamma_t)\xx_t + \gamma_t \yy'_t = \zz_t + \gamma_t (\yy'_t - \yy_t) \rightarrow \zbar \] by Proposition~\ref{pr:i:7}(\ref{pr:i:7g}). Thus, we can replace $\seq{\yy_t}$ with the span-bound sequence $\seq{\yy'_t}$. By Lemma~\ref{lem:seq-to-omegaxbar-to-xbar}, there exists a sequence $\seq{\alpha_t}$ in $\Rstrictpos$ such that the sequence $\vv_t=\yy_t/\alpha_t$ is span-bound and converges to $\vbar$. For all $t$, let $\lambda_t = \gamma_t \alpha_t / (1-\gamma_t)$, and let $\zz'_t = \xx_t + \lambda_t \vv_t$. Then $\lambda_t\in\Rpos$ and \[ \zz'_t = \frac{1}{1-\gamma_t} \Bracks{(1-\gamma_t) \xx_t + \gamma_t \yy_t} \rightarrow \zbar. \] The equality is by algebra. The convergence is by Proposition~\ref{pr:scalar-prod-props}(\ref{pr:scalar-prod-props:e}) since $\gamma_t\rightarrow 0$ and the bracketed expression converges to $\zbar$. This proves the claim. \qedhere \end{proof-parts} \end{proof} As seen in \Cref{pr:stan-rec-equiv}(\ref{pr:stan-rec-equiv:a},\ref{pr:stan-rec-equiv:c}), for a convex, lower semicontinuous function $f:\Rn\rightarrow\Rext$, it is known that if $\liminf_{\lambda\rightarrow+\infty} f(\yy+\lambda\vv)<+\infty$, for any points $\vv,\yy\in\Rn$, then we must have $f(\xx+\lambda\vv)\leq f(\xx)$ for all $\xx\in\Rn$ and all $\lambda\in\Rpos$; that is, $f(\zz)\leq f(\xx)$ for all $\zz\in\hfline{\xx}{\vv}$, or more succinctly, \[ \sup f\bigParens{\hfline{\xx}{\vv}} \leq f(\xx).
\] Combining Theorems~\ref{thm:ast-F-char-fcn-vals} and~\ref{thm:lammid-char-seqsum} yields next a kind of analogue for astral convex functions. (The standard result just quoted can then be derived by setting $F=\fext$, $\ybar=\yy$, $\vbar=\vv$, and $\xbar=\xx$.) \begin{theorem} \label{thm:F-conv-res} Let $F:\extspace\rightarrow\Rext$ be convex, let $\vbar\in\extspace$, and assume $F(\limray{\vbar}\plusl\ybar)<+\infty$ for some $\ybar\in\extspace$. Then for all $\xbar\in\extspace$, \[ \sup F\bigParens{\ahfline{\xbar}{\vbar}} \leq F(\xbar); \] that is, $F(\zbar)\leq F(\xbar)$ for all $\zbar\in\ahfline{\xbar}{\vbar}$. \end{theorem} \begin{proof} Let $\xbar\in\extspace$, let $\ebar=\limray{\vbar}$ (which is an icon by Proposition~\ref{pr:i:8}\ref{pr:i:8-infprod}), and let $\zbar\in\ahfline{\xbar}{\vbar}$. We aim to prove $F(\zbar)\leq F(\xbar)$. If $F(\xbar)=+\infty$ then this holds trivially, so we assume henceforth that $\xbar\in\dom{F}$. We can write $\ybar=\dbar\plusl\qq$ for some $\dbar\in\corezn$ and $\qq\in\Rn$. Then $\ebar\plusl\dbar$ is the iconic part of $\ebar\plusl\ybar$. By Corollary~\ref{cor:d-in-lb-0-dplusx}, $\lb{\zero}{\ebar}\subseteq\lb{\zero}{\ebar\plusl\dbar}$. Thus, \[ \zbar \in \ahfline{\xbar}{\vbar} = \xbar\seqsum\lb{\zero}{\ebar} \subseteq \xbar\seqsum\lb{\zero}{\ebar\plusl\dbar}, \] where the equality is by \eqref{eq:ast-hline-defn-alt} (and the second inclusion uses Proposition~\ref{pr:seqsum-props}\ref{pr:seqsum-props:b}). By Theorem~\ref{thm:lammid-char-seqsum}, it follows that $\zbar$ is a $0$-midpoint of $\xbar$ and $\ebar\plusl\ybar$, implying \[F(\zbar) \leq F(\xbar) + 0\cdot F(\ebar\plusl\ybar) = F(\xbar)\] by Theorem~\ref{thm:ast-F-char-fcn-vals} (and since $F(\ebar\plusl\ybar)<+\infty$ by assumption). 
\end{proof} In standard convex analysis, it is known that, for $\vv\in\Rn$, if the halfline $\hfline{\xx}{\vv}$ is included in some closed (in $\Rn$) and convex set $C\subseteq\Rn$ for even one point $\xx\in C$, then the same holds for \emph{every} point $\xx\in C$ \citep[Theorem~8.3]{ROC}. We show next how this fact generalizes for astral convex sets, using Theorem~\ref{thm:F-conv-res}. For $\xx,\vv\in\Rn$, the halfline $\hfline{\xx}{\vv}$ has an astral limit at infinity, namely, $\limray{\vv}\plusl\xx$. Moreover, the astral halfline with endpoint $\xx$ in direction $\vv$ is simply the associated standard halfline adjoined with this limit point, that is, \begin{align} \ahfline{\xx}{\vv} &= \xx\plusl\lb{\zero}{\limray{\vv}} \nonumber \\ &= \lb{\xx}{\limray{\vv}\plusl\xx} = \hfline{\xx}{\vv} \cup \{\limray{\vv}\plusl\xx\}, \label{eqn:ahfline-fin-pts} \end{align} as follows using Theorem~\ref{thm:e:9} and Corollary~\ref{cor:lb-with-finite}. Intuitively then, the inclusion of the standard halfline $\hfline{\xx}{\vv}$ in the closed, convex set $C\subseteq\Rn$ should be related, by taking limits, to $\limray{\vv}\plusl\xx$'s inclusion in $\Cbar$. This is shown in the next proposition: \begin{proposition} \label{pr:hfline-equiv-ahfline} Let $C\subseteq\Rn$ be convex and closed (in $\Rn$). Then the following are equivalent: \begin{letter-compact} \item \label{pr:hfline-equiv-ahfline:a} $\hfline{\xx}{\vv}\subseteq C$. \item \label{pr:hfline-equiv-ahfline:b} $\ahfline{\xx}{\vv}\subseteq \Cbar$. \item \label{pr:hfline-equiv-ahfline:c} $\xx\in C$ and $\limray{\vv}\plusl\xx\in\Cbar$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{pr:hfline-equiv-ahfline:a}) $\Rightarrow$ (\ref{pr:hfline-equiv-ahfline:c}): } If $\hfline{\xx}{\vv}\subseteq C$, then $\xx\in C$ and $t\vv+\xx\in C$ for all $t$, implying that $\limray{\vv}\plusl\xx\in\Cbar$. 
\pfpart{(\ref{pr:hfline-equiv-ahfline:c}) $\Rightarrow$ (\ref{pr:hfline-equiv-ahfline:b}): } If $\xx\in C\subseteq\Cbar$ and $\limray{\vv}\plusl\xx\in\Cbar$, then $\ahfline{\xx}{\vv}=\lb{\xx}{\limray{\vv}\plusl\xx}\subseteq\Cbar$ by \eqref{eqn:ahfline-fin-pts} and since $\Cbar$ is convex by Theorem~\ref{thm:e:6}. \pfpart{(\ref{pr:hfline-equiv-ahfline:b}) $\Rightarrow$ (\ref{pr:hfline-equiv-ahfline:a}): } If $\ahfline{\xx}{\vv}\subseteq\Cbar$ then $\hfline{\xx}{\vv}\subseteq\Cbar\cap\Rn=\cl C=C$ by \eqref{eqn:ahfline-fin-pts} (and Proposition~\ref{pr:closed-set-facts}\ref{pr:closed-set-facts:a}). \qedhere \end{proof-parts} \end{proof} The preceding fact from standard convex analysis states that if $\hfline{\yy}{\vv}\subseteq C$ for some point $\yy\in C$, then $\hfline{\xx}{\vv}\subseteq C$ for all $\xx\in C$. Using Proposition~\ref{pr:hfline-equiv-ahfline}, we can now restate that fact equivalently as saying that if $\limray{\vv}\plusl\yy\in\Cbar$ for some point $\yy\in C$, then $\ahfline{\xx}{\vv}\subseteq\Cbar$ for all $\xx\in C$. In these terms, we can obtain a generalization of this fact for astral convex sets simply by applying Theorem~\ref{thm:F-conv-res} to indicator functions, as shown in the next corollary. In particular, the just quoted claim follows from this corollary by letting $S=\Cbar$, $\vbar=\vv$, $\ybar=\yy$, and $\xbar=\xx$. \begin{corollary} \label{cor:gen-one-in-set-implies-all} Let $S\subseteq\extspace$ be convex and let $\vbar\in\extspace$. Suppose there exists a point $\ybar\in\extspace$ for which $\limray{\vbar}\plusl\ybar\in S$. Then $\ahfline{\xbar}{\vbar} \subseteq S$ for all $\xbar\in S$. Thus, \begin{equation*} \label{eq:cor:gen-one-in-set-implies-all:1} S \seqsum (\aray{\vbar}) = S. \end{equation*} \end{corollary} \begin{proof} Let $F=\indaS$, the indicator function for $S$, which is convex since $S$ is (as will be proved momentarily in Proposition~\ref{pr:ast-ind-fcn-cvx}). Then $F(\limray{\vbar}\plusl\ybar)=0<+\infty$.
Therefore, by Theorem~\ref{thm:F-conv-res}, for all $\zbar\in\ahfline{\xbar}{\vbar}$, we have $F(\zbar)\leq F(\xbar)=0$, implying $F(\zbar)=0$, and so $\zbar\in S$. Using \eqref{eq:ast-hline-defn} and Proposition~\ref{pr:seqsum-props}(\ref{pr:seqsum-props:c}), it then follows that $S \seqsum (\aray{\vbar}) \subseteq S$. The reverse inclusion holds because $\zero\in\aray{\vbar}$. \end{proof} \subsection{Constructing and operating on convex functions} \label{sec:op-ast-cvx-fcns} We next present various ways in which astral convex functions can be constructed, operated upon, and generally understood. Many of these directly generalize functional operations in standard convex analysis that preserve convexity, for instance, by adding two functions, or by composing with an affine function. As a first illustration, we characterize exactly those astral functions on $\Rext$ that are convex: \begin{proposition} \label{pr:1d-cvx} Let $F:\Rext\rightarrow\Rext$. Then $F$ is convex if and only if all of the following hold: \begin{roman-compact} \item \label{pr:1d-cvx:a} $f$ is convex, where $f:\R\rightarrow\Rext$ is $F$'s restriction to $\R$ (so that $f(x)=F(x)$ for all $x\in\R$). \item \label{pr:1d-cvx:b} If $F(+\infty)<+\infty$ then $F$ is nonincreasing. \item \label{pr:1d-cvx:c} If $F(-\infty)<+\infty$ then $F$ is nondecreasing. \end{roman-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{``Only if'' ($\Rightarrow$):} Suppose $F$ is convex. We aim to prove the three stated conditions. First, a point $\rpair{x}{y}\in\R\times\R=\R^2$ is clearly in $f$'s epigraph if and only if it is in $F$'s epigraph; thus, $\epi{f} = (\epi{F}) \cap \R^2$. Since the latter two sets are convex (in $\extspac{2}$), $\epi{f}$ is as well, proving condition~(\ref{pr:1d-cvx:a}). For condition~(\ref{pr:1d-cvx:b}), suppose $F(+\infty)<+\infty$.
Then for all $\barw\in [0,+\infty]$, and for all $\barx\in\Rext$, we have \[ \barw\plusl\barx \in \barx\seqsum\barw \subseteq \barx\seqsum(\aray{1}) = \ahfline{\barx}{1} \] by Corollary~\ref{cor:seqsum-conseqs}(\ref{cor:seqsum-conseqs:a}), since $\aray{1}=[0,+\infty]$, and by \eqref{eq:ast-hline-defn}. Applying Theorem~\ref{thm:F-conv-res} (with $\vbar=1$, $\ybar=0$, $\xbar=\barx$, $\zbar=\barw\plusl\barx$), it then follows that $F(\barw\plusl\barx)\leq F(\barx)$. In particular, taking $\barw=+\infty$, this shows that $F(+\infty)=\inf F$. Also, taking $\barw=w\in\Rpos$ and $\barx=x\in\R$, this shows that $F(x+w)\leq F(x)$. Furthermore, for all $\barx\in\Rext$, since $\barx\in\lb{-\infty}{+\infty}=\Rext$, by Corollary~\ref{cor:F-conv-max-seg}, $F(\barx)\leq\max\{F(-\infty),F(+\infty)\}=F(-\infty)$ (since $F(+\infty)=\inf F$). Thus, $F(-\infty)=\sup F$. We conclude that $F$ is nonincreasing. The proof of condition~(\ref{pr:1d-cvx:c}) is symmetric. \pfpart{``If'' ($\Leftarrow$):} Suppose conditions~(\ref{pr:1d-cvx:a}),~(\ref{pr:1d-cvx:b}) and~(\ref{pr:1d-cvx:c}) hold. We aim to show that $F$ is convex. If $F(-\infty)=F(+\infty)=+\infty$, then $\epi F = \epi f$, which is convex since $f$ is convex, implying $F$ is as well. Otherwise, let $\lambda\in [0,1]$, let $\barx,\bary\in\dom{F}$, and let $\barz$ be a $\lambda$-midpoint of $\barx,\bary$. We aim to show \begin{equation} \label{eq:pr:1d-cvx:1} F(\barz) \leq (1-\lambda) F(\barx) + \lambda F(\bary), \end{equation} implying convexity by Theorem~\ref{thm:ast-F-char-fcn-vals}. If $F(-\infty)<+\infty$ and $F(+\infty)<+\infty$, then conditions~(\ref{pr:1d-cvx:b}) and~(\ref{pr:1d-cvx:c}) together imply that $F$ is identically equal to some constant in $\R\cup\{-\infty\}$, which then implies \eqref{eq:pr:1d-cvx:1}. Assume then that $F(-\infty)=+\infty$ and $F(+\infty)<+\infty$, implying that $F$ is nonincreasing. (The remaining case that $F(-\infty)<+\infty$ and $F(+\infty)=+\infty$ can be handled symmetrically.) 
In particular, this implies $\barx,\bary>-\infty$. Suppose both $\barx$ and $\bary$ are in $\R$. Then $\barz=(1-\lambda)\barx+\lambda\bary$ by Proposition~\ref{pr:lam-mid-finite}. Since $f$ is convex, $f(\barz)\leq (1-\lambda) f(\barx) + \lambda f(\bary)$ (Proposition~\ref{pr:stand-cvx-fcn-char}), implying \eqref{eq:pr:1d-cvx:1} since $F$ equals $f$ on all points in $\R$. Otherwise, suppose one of $\barx$ or $\bary$ is $+\infty$. Without loss of generality, suppose $\barx\in\R$ and $\bary=+\infty$. Then $\barz\in\lb{\barx}{\bary}=[\barx,+\infty]$. If $\barz=+\infty$ then $F(\barz)\leq F(\barx)$ since $F$ is nonincreasing. This implies $(1-\lambda)F(\barz)\leq (1-\lambda)F(\barx)$, further implying \eqref{eq:pr:1d-cvx:1} by algebra and since $\barz=\bary$. Otherwise, $\barz\in [\barx,+\infty)$. This further implies $\lambda=0$ since if $\lambda>0$ then $\lammid{\barx}{+\infty}=\{+\infty\}$ (as can be seen, for instance, from Theorem~\ref{thm:lammid-char-seqsum}\ref{thm:lammid-char-seqsum:a}, combined with Corollary~\ref{cor:seqsum-conseqs}\ref{cor:seqsum-conseqs:c}). Since $F$ is nonincreasing, this implies $F(\barz)\leq F(\barx) = (1-\lambda) F(\barx) + \lambda F(\bary)$. \qedhere \end{proof-parts} \end{proof} An astral indicator function on any convex subset of $\extspace$ is convex: \begin{proposition} \label{pr:ast-ind-fcn-cvx} Let $S\subseteq\extspace$ be convex. Then the astral indicator function $\indaS$ on $S$ (as defined in Eq.~\ref{eq:indfa-defn}) is convex. \end{proposition} \begin{proof} Let $\xbar_0,\xbar_1\in S=\dom{\indaS}$, let $\lambda\in [0,1]$, and let $\xbar\in\lammid{\xbar_0}{\xbar_1}$. Then $\xbar\in\lb{\xbar_0}{\xbar_1}$ so $\xbar\in S$ since $S$ is convex. Therefore, $\indaS(\xbar)=0=(1-\lambda)\indaS(\xbar_0)+\lambda\indaS(\xbar_1)$, proving the claim by Theorem~\ref{thm:ast-F-char-fcn-vals}.
\end{proof} The upward sum (as defined in Eq.~\ref{eq:up-add-def}) of two convex functions is also convex: \begin{theorem} \label{thm:sum-ast-cvx-fcns} Let $F:\extspace\rightarrow\Rext$ and $G:\extspace\rightarrow\Rext$ be convex. Then $F\plusu G$ is also convex. \end{theorem} \begin{proof} Let $H=F\plusu G$. Let $\xbar_0,\xbar_1\in\dom{H}$, and let $\lambda\in[0,1]$. Suppose $\xbar\in\lammid{\xbar_0}{\xbar_1}$. Then $\xbar_0,\xbar_1$ are also both in $\dom{F}$ and $\dom{G}$ (by definition of upward sum). Therefore, from Theorem~\ref{thm:ast-F-char-fcn-vals}, \begin{align*} F(\xbar) &\leq (1-\lambda) F(\xbar_0) + \lambda F(\xbar_1) \\ G(\xbar) &\leq (1-\lambda) G(\xbar_0) + \lambda G(\xbar_1). \end{align*} None of the terms appearing in either inequality can be $+\infty$. Therefore, the two inequalities can be added, yielding \[ H(\xbar) \leq (1-\lambda) H(\xbar_0) + \lambda H(\xbar_1), \] proving $H$ is convex by Theorem~\ref{thm:ast-F-char-fcn-vals}. \end{proof} In general, neither the leftward nor the downward sum of two astral convex functions is necessarily convex, as the next example shows: \begin{example} For $\barx\in\Rext$, let \[ F(\barx) = \left\{ \begin{array}{cl} -\infty & \text{if $\barx=-\infty$} \\ 0 & \text{if $\barx\in\R$} \\ +\infty & \text{if $\barx=+\infty$,} \end{array} \right. \] and let \[ G(\barx) = \left\{ \begin{array}{cl} \barx^2 & \text{if $\barx\in\R$} \\ +\infty & \text{otherwise.} \end{array} \right. \] Then both $F$ and $G$ are convex, by Proposition~\ref{pr:1d-cvx}. However, if either $H=F\plusl G$ or $H=F\plusd G$, then \[ H(\barx) = \left\{ \begin{array}{cl} -\infty & \text{if $\barx=-\infty$} \\ \barx^2 & \text{if $\barx\in\R$} \\ +\infty & \text{if $\barx=+\infty$} \end{array} \right. \] for $\barx\in\Rext$, which is not convex (again by Proposition~\ref{pr:1d-cvx}, since condition~(\ref{pr:1d-cvx:c}) is violated).
\end{example} The composition of an astral convex function with a linear or affine map is convex: \begin{theorem} \label{thm:cvx-compose-affine-cvx} Let $F:\extspac{m}\rightarrow\Rext$ be convex, let $\A\in\Rmn$ and let $\bbar\in\extspac{m}$. Let $H:\extspace\rightarrow\Rext$ be defined by $H(\xbar)=F(\bbar\plusl\A\xbar)$ for $\xbar\in\extspace$. Then $H$ is convex. \end{theorem} \begin{proof} Let $\xbar_0,\xbar_1\in\dom{H}$, let $\lambda\in[0,1]$, and let $\xbar\in\lammid{\xbar_0}{\xbar_1}$. Let $G(\zbar)=\bbar\plusl\A\zbar$ for $\zbar\in\extspace$. Then by Proposition~\ref{pr:lam-mid-props}(\ref{pr:lam-mid-props:aff}), $G(\xbar)\in\lammid{G(\xbar_0)}{G(\xbar_1)}$. Also, for $i\in\{0,1\}$, $F(G(\xbar_i))=H(\xbar_i)<+\infty$ so $G(\xbar_i)\in\dom{F}$. Thus, using Theorem~\ref{thm:ast-F-char-fcn-vals}, \[ H(\xbar) = F(G(\xbar)) \leq (1-\lambda) F(G(\xbar_0)) + \lambda F(G(\xbar_1)) = (1-\lambda) H(\xbar_0) + \lambda H(\xbar_1), \] so $H$ is convex. \end{proof} Let $F:\extspace\rightarrow\Rext$ and $\A\in\Rmn$. As in standard convex analysis, the \emph{image of $F$ under $\A$} is the function $\A F:\extspac{m}\rightarrow\Rext$ defined by \[ (\A F)(\xbar) = \inf\Braces{F(\zbar) : \zbar\in\extspace, \A\zbar=\xbar} \] for $\xbar\in\extspac{m}$. If $F$ is convex, then so is $\A F$: \begin{theorem} \label{th:lin-img-ast-fcn-cvx} Let $F:\extspace\rightarrow\Rext$ be convex and let $\A\in\Rmn$. Then $\A F$ is also convex. \end{theorem} \begin{proof} Let $H=\A F$. Let $\xbar_0,\xbar_1\in\dom{H}$, let $\lambda\in[0,1]$, and let $\xbar\in\lammid{\xbar_0}{\xbar_1}$. We aim to show that \begin{equation} \label{eq:th:lin-img-ast-fcn-cvx:1} H(\xbar) \leq (1-\lambda) H(\xbar_0) + \lambda H(\xbar_1). \end{equation} Let $\B\in\R^{(m+1)\times(n+1)}$ be the matrix \[ \B = \left[ \begin{array}{ccc|c} & & & \\ ~ & \A & ~ & \zerov{m} \\ & & & \\ \hline \rule{0pt}{2.5ex} & \trans{\zerov{n}} & & 1 \end{array} \right]. 
\] Thus, the upper left $(m\times n)$-submatrix is a copy of $\A$, the bottom right entry is $1$, and all other entries are $0$. Then $\B \rpair{\zz}{y} = \rpair{\A\zz}{y}$ for all $\zz\in\Rn$ and $y\in\R$. Further, with $\homat=[\Idnn,\zerov{n}]$ (as in Eq.~\ref{eqn:homat-def}), \begin{equation} \label{eq:th:lin-img-ast-fcn-cvx:2} \B \rpair{\zbar}{y} = \B \trans{\homat} \zbar \plusl \B \rpair{\zero}{y} = \trans{\homat} \A \zbar \plusl \rpair{\zero}{y} = \rpair{\A \zbar}{y} \end{equation} for all $\zbar\in\extspace$ and $y\in\R$. The first and third equalities are both by Proposition~\ref{pr:xy-pairs-props}(\ref{pr:xy-pairs-props:a}). The second equality is by matrix algebra. Let $U=\B (\epi{F})$, the image of $\epi{F}$ under $\B$. Since $F$ is convex, so is its epigraph, implying that $U$ is as well (by Corollary~\ref{cor:thm:e:9}). For $i\in\{0,1\}$, let $y_i\in\R$ with $y_i > H(\xbar_i)$. Then by $H$'s definition, there exists $\zbar_i\in\extspace$ such that $\A\zbar_i=\xbar_i$ and $F(\zbar_i)<y_i$. Since $\rpair{\zbar_i}{y_i}\in\epi{F}$, $\rpair{\xbar_i}{y_i}=\B\rpair{\zbar_i}{y_i}\in U$, with the equality following from \eqref{eq:th:lin-img-ast-fcn-cvx:2}. Let $y=(1-\lambda)y_0 + \lambda y_1$. Then \[ \rpair{\xbar}{y} \in \lammid{\rpair{\xbar_0}{y_0}}{\rpair{\xbar_1}{y_1}} \subseteq \lb{\rpair{\xbar_0}{y_0}}{\rpair{\xbar_1}{y_1}} \subseteq U. \] The first inclusion is by Proposition~\ref{pr:lam-mid-props}(\ref{pr:lam-mid-props:d}). The second is by Corollary~\ref{cor:e:1}. The last is because $U$ is convex. Since $\rpair{\xbar}{y}\in U$, there exist $\zbar\in\extspace$ and $y'\in\R$ such that $\rpair{\zbar}{y'}\in\epi{F}$ and $\B\rpair{\zbar}{y'}=\rpair{\xbar}{y}$. By \eqref{eq:th:lin-img-ast-fcn-cvx:2}, this implies $\rpair{\A\zbar}{y'}=\rpair{\xbar}{y}$, and so $y'=y$ and $\A\zbar=\xbar$ (Proposition~\ref{pr:xy-pairs-props}\ref{pr:xy-pairs-props:h}).
Therefore, \[ H(\xbar) \leq F(\zbar) \leq y = (1-\lambda)y_0 + \lambda y_1, \] with the first inequality from $H$'s definition, and the second because $\rpair{\zbar}{y}\in\epi{F}$. Since this holds for all $y_0>H(\xbar_0)$ and $y_1>H(\xbar_1)$, \eqref{eq:th:lin-img-ast-fcn-cvx:1} must also hold, proving convexity by Theorem~\ref{thm:ast-F-char-fcn-vals}. \end{proof} The pointwise supremum of any collection of convex functions is convex: \begin{theorem} \label{thm:point-sup-is-convex} Let $F_\alpha:\extspace\rightarrow\Rext$ be convex for all $\alpha\in\indset$, where $\indset$ is any index set. Let $H$ be their pointwise supremum, that is, \[ H(\xbar) = \sup_{\alpha\in\indset} F_\alpha(\xbar) \] for $\xbar\in\extspace$. Then $H$ is convex. \end{theorem} \begin{proof} If $\indset$ is empty, then $H\equiv-\infty$, which is convex (for instance, by Theorem~\ref{thm:ast-F-char-fcn-vals}). So we assume henceforth that $\indset$ is not empty. The epigraph of $H$ is exactly the intersection of the epigraphs of the functions $F_\alpha$; that is, \[ \epi{H} = \bigcap_{\alpha\in\indset} \epi{F_\alpha}. \] This is because a pair $\rpair{\xbar}{y}$ is in $\epi{H}$, meaning $y\geq H(\xbar)$, if and only if $y\geq F_\alpha(\xbar)$ for all $\alpha\in\indset$, that is, if and only if $\rpair{\xbar}{y}$ is in $\epi{F_\alpha}$ for all $\alpha\in\indset$. Since $F_\alpha$ is convex, its epigraph, $\epi{F_\alpha}$, is convex, for $\alpha\in\indset$. Thus, $\epi{H}$ is convex by Proposition~\ref{pr:e1}(\ref{pr:e1:b}), and therefore $H$ is as well. \end{proof} We had earlier shown (Proposition~\ref{pr:conj-is-convex}) that the conjugate $\Fstar$ of any function $F:\extspace\rightarrow\Rext$ must always be convex. We can now show that the dual conjugate $\psistarb$ of any function $\psi:\Rn\rightarrow\Rext$ must also always be convex. In particular, this immediately implies that the biconjugate $\Fdub$ is always convex, as well as $\fdub$, for any function $f:\Rn\rightarrow\Rext$. 
\begin{theorem} \label{thm:dual-conj-cvx} Let $\psi:\Rn\rightarrow\Rext$. Then its dual conjugate, $\psistarb$, is convex. \end{theorem} \begin{proof} For $\uu\in\Rn$ and $v\in\R$, let us define the affine function \[ \huv(\xx) = \xx\cdot\uu - v \] for $\xx\in\Rn$. This function is convex, and, as seen in Example~\ref{ex:ext-affine}, its extension is \[ \hbaruv(\xbar) = \xbar\cdot\uu - v, \] which is convex by Theorem~\ref{thm:fext-convex}. The dual conjugate $\psistarb$, defined in \eqref{eq:psistar-def}, can be written \[ \psistarb(\xbar) = \sup_{\rpair{\uu}{v}\in\epi \psi} \hbaruv(\xbar). \] Thus, $\psistarb$ is a pointwise supremum of convex functions and therefore is convex by Theorem~\ref{thm:point-sup-is-convex}. \end{proof} The composition of a nondecreasing, convex astral function with a convex astral function is convex: \begin{theorem} \label{thm:comp-nondec-fcn-cvx} Let $F:\extspace\rightarrow\Rext$ be convex, and let $G:\Rext\rightarrow\Rext$ be convex and nondecreasing. Then $G\circ F$ is convex. \end{theorem} \begin{proof} Let $H=G\circ F$. If $G(+\infty)<+\infty$, then $G$ must be nonincreasing, by Proposition~\ref{pr:1d-cvx}. Since $G$ is also nondecreasing, this implies $G\equiv\barc$ for some $\barc\in\Rext$, further implying $H\equiv\barc$, which is convex (for instance, by Theorem~\ref{thm:ast-F-char-fcn-vals}). Otherwise, $G(+\infty)=+\infty$. Let $\xbar_0,\xbar_1\in\dom{H}$, let $\lambda\in[0,1]$, and let $\xbar\in\lammid{\xbar_0}{\xbar_1}$. For $i\in\{0,1\}$, $G(F(\xbar_i))=H(\xbar_i)<+\infty$ so $F(\xbar_i)\in\dom{G}$, implying also that $F(\xbar_i)<+\infty$. By Theorem~\ref{thm:ast-F-char-fcn-vals}, \begin{equation} \label{eq:thm:comp-nondec-fcn-cvx:1} F(\xbar) \leq (1-\lambda) F(\xbar_0) + \lambda F(\xbar_1).
\end{equation} Further, the expression on the right-hand side is equal to $(1-\lambda) F(\xbar_0) \plusl \lambda F(\xbar_1)$, which is in $[(1-\lambda) F(\xbar_0)] \seqsum [\lambda F(\xbar_1)]$ by Corollary~\ref{cor:seqsum-conseqs}(\ref{cor:seqsum-conseqs:a}), and so is a $\lambda$-midpoint of $F(\xbar_0)$ and $F(\xbar_1)$ by Theorem~\ref{thm:lammid-char-seqsum}(\ref{thm:lammid-char-seqsum:c}). Thus, \begin{eqnarray*} H(\xbar) &=& G(F(\xbar)) \\ &\leq& G((1-\lambda) F(\xbar_0) + \lambda F(\xbar_1)) \\ &\leq& (1-\lambda) G(F(\xbar_0)) + \lambda G(F(\xbar_1)) \\ &=& (1-\lambda) H(\xbar_0) + \lambda H(\xbar_1). \end{eqnarray*} The first inequality follows from \eqref{eq:thm:comp-nondec-fcn-cvx:1} since $G$ is nondecreasing. The second inequality is by Theorem~\ref{thm:ast-F-char-fcn-vals} applied to $G$. This proves $H$ is convex. \end{proof} \part{Minimizers and Continuity} \label{part:minimizers-and-cont} \section{Minimizers and their structure} \label{sec:minimizers} We next study the general nature of astral points that minimize the extension $\fext$ of a convex function $f:\Rn\rightarrow\Rext$. Determining the astral \emph{points} that minimize $\fext$ is, to a large degree, equivalent to finding or characterizing the \emph{sequences} that minimize the original function $f$ (see Proposition~\ref{pr:fext-min-exists}). As seen in Theorem~\ref{thm:icon-fin-decomp}, every astral point $\xbar\in\extspace$ can be decomposed as $\xbar=\ebar\plusl\qq$ for some icon $\ebar\in\corezn$ and finite $\qq\in\Rn$. In the same way, the problem of minimizing the function $\fext$ decomposes into the separate issues of how to minimize $\fext$ over the choice of $\ebar$, and how to minimize it over $\qq$, both of which will be studied in detail. In this chapter, we will see that if $\xbar$ minimizes $\fext$, then its icon $\ebar$ must belong to a particular set called the astral recession cone, which will be our starting point. 
We study the properties of this set and the structure of its elements, leading to a procedure that, in a sense described below, enumerates all of the minimizers of $\fext$. \subsection{Astral recession cone} \label{sec:arescone} The standard recession cone is the set of directions in which a function (on $\Rn$) is never increasing. We begin by studying an extension of this notion to astral space, which will be centrally important to our understanding of minimizers, continuity, and more. For a function $f:\Rn\rightarrow\Rext$, the definition given in \eqref{eqn:resc-cone-def} states that a vector $\vv\in\Rn$ is in $f$'s standard recession cone, $\resc{f}$, if $f(\xx+\lambda\vv)\leq f(\xx)$ for all $\xx\in\Rn$ and all $\lambda\in\Rpos$. This is equivalent to the condition that $f(\yy)\leq f(\xx)$ for all points $\yy$ along the halfline $\hfline{\xx}{\vv}$ with endpoint $\xx$ in direction $\vv$ (as defined in Eq.~\ref{eq:std-halfline-defn}). Thus, \[ \resc{f} = \Braces{\vv\in\Rn : \forall \xx\in\Rn,\, \forall \yy\in \hfline{\xx}{\vv},\, f(\yy)\leq f(\xx) }. \] Simplifying further, this means that \begin{equation} \label{eq:std-rescone-w-sup} \resc{f} = \Braces{\vv\in\Rn : \forall \xx\in\Rn,\, \sup f(\hfline{\xx}{\vv})\leq f(\xx) }, \end{equation} where, as usual, $f(\hfline{\xx}{\vv})$ is the set of all values of $f$ along points on the halfline $\hfline{\xx}{\vv}$. These forms of the definition can be extended analogously to astral space simply by replacing the standard halfline with its astral analogue, \[\ahfline{\xbar}{\vbar}=\xbar\seqsum\aray{\vbar},\] as was defined in \eqref{eq:ast-hline-defn}. This leads to: \begin{definition} Let $F:\extspace\rightarrow\Rext$. The \emph{astral recession cone} of $F$, denoted $\aresconeF$, is the set of points $\vbar\in\extspace$ with the property that $F(\ybar)\leq F(\xbar)$ for all $\xbar\in\extspace$ and all $\ybar\in\ahfline{\xbar}{\vbar}$.
That is, \begin{align} \aresconeF &= \Braces{ \vbar\in\extspace :\: \forall \xbar\in\extspace,\, \forall \ybar\in\ahfline{\xbar}{\vbar},\, F(\ybar) \leq F(\xbar) }. \nonumber \\ \intertext{Or, more simply,} \aresconeF &= \Braces{ \vbar\in\extspace :\: \forall \xbar\in\extspace,\, \sup F(\ahfline{\xbar}{\vbar}) \leq F(\xbar) } \nonumber \\ &= \Braces{ \vbar\in\extspace :\: \forall \xbar\in\extspace,\, \sup F(\xbar\seqsum\aray{\vbar}) \leq F(\xbar) }. \label{eqn:aresconeF-def} \end{align} \end{definition} This definition is rather strong in the sense that it implies many other relations in terms of $F$'s behavior. The next simple proposition summarizes some of the useful ones that follow from a point $\vbar$ being in $\aresconeF$. Shortly, we will see that, under convexity assumptions, some of the simplest of these necessary conditions are in fact sufficient as well. \begin{proposition} \label{pr:arescone-def-ez-cons} Let $F:\extspace\rightarrow\Rext$ and let $\vbar\in\aresconeF$. Then the following hold: \begin{letter-compact} \item \label{pr:arescone-def-ez-cons:a} $F(\zbar\plusl\xbar)\leq F(\xbar)$ and $F(\xbar\plusl\zbar)\leq F(\xbar)$ for all $\xbar\in\extspace$ and all $\zbar\in\aray{\vbar}$. \item \label{pr:arescone-def-ez-cons:b} $F(\alpha\vbar\plusl\xbar)\leq F(\xbar)$ and $F(\xbar\plusl\alpha\vbar)\leq F(\xbar)$ for all $\xbar\in\extspace$ and all $\alpha\in\Rextpos$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:arescone-def-ez-cons:a}):} Let $\xbar\in\extspace$ and $\zbar\in\aray{\vbar}$. Then $\zbar\plusl\xbar \in \zbar\seqsum\xbar \subseteq \xbar\seqsum\aray{\vbar}$ by Corollary~\ref{cor:seqsum-conseqs}(\ref{cor:seqsum-conseqs:a}). Similarly, $\xbar\plusl\zbar\in\xbar\seqsum\aray{\vbar}$. The claim then follows by definition of $\aresconeF$ (Eq.~\ref{eqn:aresconeF-def}). \pfpart{Part~(\ref{pr:arescone-def-ez-cons:b}):} Let $\alpha\in\Rextpos$. 
Since $\alpha\vbar\in\aray{\vbar}$ (by Propositions~\ref{pr:ast-cone-is-naive} and~\ref{pr:astral-cone-props}\ref{pr:astral-cone-props:e}), this follows from part~(\ref{pr:arescone-def-ez-cons:a}). \qedhere \end{proof-parts} \end{proof} Just as the standard recession cone of any function on $\Rn$ is a convex cone (\Cref{pr:resc-cone-basic-props}), so the astral recession cone of any astral function is always a convex astral cone: \begin{theorem} \label{thm:arescone-is-ast-cvx-cone} Let $F:\extspace\rightarrow\Rext$. Then $\aresconeF$ is a convex astral cone. \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Astral cone:} Let $\vbar\in\aresconeF$. We aim to show that $\aray{\vbar}\subseteq\aresconeF$. As such, let $\ybar\in\aray{\vbar}$. Then, for all $\xbar\in\extspace$, \[ \sup F(\xbar\seqsum\aray{\ybar}) \leq \sup F(\xbar\seqsum\aray{\vbar}) \leq F(\xbar). \] The first inequality is because $\aray{\ybar}\subseteq\aray{\vbar}$ since $\aray{\vbar}$ is an astral cone (Proposition~\ref{pr:astral-cone-props}\ref{pr:astral-cone-props:e}). The second inequality is because $\vbar\in\aresconeF$. Thus, $\ybar\in\aresconeF$, so $\aresconeF$ is an astral cone. \pfpart{Convex:} In light of Theorem~\ref{thm:ast-cone-is-cvx-if-sum}, to show that the astral cone $\aresconeF$ is convex, it suffices to show that it is closed under sequential sum. As such, let $\vbar_0,\vbar_1\in\aresconeF$, and let $\vbar\in\vbar_0\seqsum\vbar_1$. We aim to show that $\vbar\in\aresconeF$. Let $K_i=\aray{\vbar_i}$ for $i\in\{0,1\}$. Then $K_0$ and $K_1$ are astral cones (by Proposition~\ref{pr:astral-cone-props}\ref{pr:astral-cone-props:e}), so $K_0\seqsum K_1$ is also an astral cone by Theorem~\ref{thm:seqsum-ast-cone}(\ref{thm:seqsum-ast-cone:a}). Further, $\vbar\in\vbar_0\seqsum\vbar_1\subseteq K_0\seqsum K_1$, so $\aray{\vbar}\subseteq K_0\seqsum K_1$. 
Thus, for all $\xbar\in\extspace$, \begin{align*} \sup F(\xbar\seqsum\aray{\vbar}) &\leq \sup F\Parens{\xbar\seqsum (\aray{\vbar_0}) \seqsum (\aray{\vbar_1})} \\ &\leq \sup F\Parens{\xbar\seqsum \aray{\vbar_0}} \\ &\leq F(\xbar). \end{align*} The first inequality is because, as just argued, $\aray{\vbar}\subseteq (\aray{\vbar_0}) \seqsum (\aray{\vbar_1})$. The second is because $\vbar_1\in\aresconeF$, implying $\sup F(\ybar\seqsum \aray{\vbar_1}) \leq F(\ybar)$ for all $\ybar\in\xbar\seqsum \aray{\vbar_0}$. The third is because $\vbar_0\in\aresconeF$. Therefore, $\vbar\in\aresconeF$, so $\aresconeF$ is convex. \qedhere \end{proof-parts} \end{proof} As suggested above, if $F:\extspace\rightarrow\Rext$ is convex, then much simpler conditions than those used in defining $\aresconeF$ are necessary and sufficient for inclusion in that set, as we show next. First, if we merely have that $F(\limray{\vbar}\plusl\xbar)\leq F(\xbar)$ for all $\xbar\in\extspace$ then $\vbar$ must be in $\aresconeF$ (and conversely), so in this sense, this one inequality is as strong as all the others that are implicit in \eqref{eqn:aresconeF-def}. Furthermore, if $F(\limray{\vbar}\plusl\xbar)<+\infty$ at even a single point $\xbar\in\extspace$, then $\vbar$ must be in $\aresconeF$. \begin{theorem} \label{thm:recF-equivs} Let $F:\extspace\rightarrow\Rext$ be convex, and let $\vbar\in\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:recF-equivs:a} $\vbar\in\aresconeF$. \item \label{thm:recF-equivs:b} For all $\xbar\in\extspace$, $F(\limray{\vbar}\plusl\xbar)\leq F(\xbar)$. \item \label{thm:recF-equivs:c} Either $F\equiv+\infty$ or there exists $\xbar\in\extspace$ such that $F(\limray{\vbar}\plusl\xbar) < +\infty$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:recF-equivs:a}) $\Rightarrow$ (\ref{thm:recF-equivs:b}): } This is immediate from Proposition~\ref{pr:arescone-def-ez-cons}(\ref{pr:arescone-def-ez-cons:b}). 
\pfpart{(\ref{thm:recF-equivs:b}) $\Rightarrow$ (\ref{thm:recF-equivs:c}): } Suppose condition~(\ref{thm:recF-equivs:b}) holds. If $F\not\equiv+\infty$ then there exists $\xbar\in\dom{F}$ implying $F(\limray{\vbar}\plusl\xbar)\leq F(\xbar)<+\infty$. \pfpart{(\ref{thm:recF-equivs:c}) $\Rightarrow$ (\ref{thm:recF-equivs:a}): } If $F\equiv+\infty$ then $\aresconeF=\extspace$ which trivially includes $\vbar$. Otherwise, suppose there exists $\ybar\in\extspace$ such that $F(\limray{\vbar}\plusl\ybar)<+\infty$. Then Theorem~\ref{thm:F-conv-res} immediately implies that $\vbar\in\aresconeF$. \qedhere \end{proof-parts} \end{proof} We will be especially interested in the astral recession cone of the extension $\fext$ of a convex function $f:\Rn\rightarrow\Rext$. In this case, even simpler conditions are necessary and sufficient for a point to be in $\aresconef$: \begin{theorem} \label{thm:rec-ext-equivs} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\vbar\in\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:rec-ext-equivs:a} $\vbar\in\aresconef$. \item \label{thm:rec-ext-equivs:b} For all $\xx\in\Rn$, $\fext(\vbar\plusl\xx)\leq f(\xx)$. \item \label{thm:rec-ext-equivs:c} For all $\xx\in\Rn$, $\fext(\limray{\vbar}\plusl\xx)\leq f(\xx)$. \item \label{thm:rec-ext-equivs:d} Either $f\equiv+\infty$ or $\fshadvb\not\equiv+\infty$ (that is, $\fext(\limray{\vbar}\plusl\yy)<+\infty$ for some $\yy\in\Rn$). \item \label{thm:rec-ext-equivs:e} Either $f\equiv+\infty$ or $\fext(\limray{\vbar}\plusl\ybar)<+\infty$ for some $\ybar\in\extspace$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:rec-ext-equivs:a}) $\Rightarrow$ (\ref{thm:rec-ext-equivs:b}): } Suppose $\vbar\in\aresconef$. Then for all $\xx\in\Rn$, $ \fext(\vbar\plusl\xx) \leq \fext(\xx) \leq f(\xx)$ by Propositions~\ref{pr:arescone-def-ez-cons}(\ref{pr:arescone-def-ez-cons:b}) and~\ref{pr:h:1}(\ref{pr:h:1a}). 
\pfpart{(\ref{thm:rec-ext-equivs:b}) $\Rightarrow$ (\ref{thm:rec-ext-equivs:c}): } Suppose condition~(\ref{thm:rec-ext-equivs:b}) holds. Let $\xbar\in\extspace$. Then there exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$ and with $f(\xx_t)\rightarrow\fext(\xbar)$ (Proposition~\ref{pr:d1}). Thus, \begin{equation} \label{eq:thm:rec-ext-equivs:1} \fext(\xbar) = \lim f(\xx_t) \geq \liminf \fext(\vbar\plusl\xx_t) \geq \fext(\vbar\plusl\xbar). \end{equation} The first inequality is by our assumption, and the second is because $\fext$ is lower semicontinuous and $\vbar\plusl\xx_t\rightarrow\vbar\plusl\xbar$ (Proposition~\ref{pr:i:7}\ref{pr:i:7f}). Let $\xx\in\Rn$. Then for all $t$, we now have \[ \fext(\xx\plusl t \vbar) = \fext(t\vbar \plusl \xx) \leq \fext(\xx) \leq f(\xx) \] where the first inequality follows by repeated application of \eqref{eq:thm:rec-ext-equivs:1}, and the second by Proposition~\ref{pr:h:1}(\ref{pr:h:1a}). Thus, \[ \fext(\limray{\vbar}\plusl\xx) = \fext(\xx\plusl\limray{\vbar}) \leq \liminf \fext(\xx\plusl t\vbar) \leq f(\xx), \] where the first inequality is by lower semicontinuity of $\fext$ and since $\xx\plusl t\vbar\rightarrow \xx\plusl\limray{\vbar}$ (by Proposition~\ref{pr:i:7}\ref{pr:i:7f}). \pfpart{(\ref{thm:rec-ext-equivs:c}) $\Rightarrow$ (\ref{thm:rec-ext-equivs:d}): } Suppose condition~(\ref{thm:rec-ext-equivs:c}) holds. If $f\not\equiv+\infty$ then there exists $\yy\in\dom{f}$ so $\fext(\limray{\vbar}\plusl\yy)\leq f(\yy)<+\infty$. \pfpart{(\ref{thm:rec-ext-equivs:d}) $\Rightarrow$ (\ref{thm:rec-ext-equivs:e}): } This is immediate. \pfpart{(\ref{thm:rec-ext-equivs:e}) $\Rightarrow$ (\ref{thm:rec-ext-equivs:a}): } Suppose condition~(\ref{thm:rec-ext-equivs:e}) holds. Then condition~(\ref{thm:recF-equivs:c}) of Theorem~\ref{thm:recF-equivs} must hold as well with $F=\fext$ (since if $f\equiv+\infty$ then $\fext\equiv+\infty$). 
By that theorem, it then follows that $\vbar\in\aresconef$ (noting that $\fext$ is convex by Theorem~\ref{thm:fext-convex}). \qedhere \end{proof-parts} \end{proof} Here is an immediate corollary for icons: \begin{corollary} \label{cor:a:4} Let $f:\Rn\rightarrow\Rext$ be convex and not identically $+\infty$, and let $\ebar\in\corezn$. Then $\ebar\in\aresconef$ if and only if $\fext(\ebar\plusl\qq)<+\infty$ for some $\qq\in\Rn$. \end{corollary} \begin{proof} This follows immediately from Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:d}, noting that $\limray{\ebar}=\ebar$ by Proposition~\ref{pr:i:8}(\ref{pr:i:8d}). \end{proof} If $F:\extspace\rightarrow\Rext$ is convex and lower semicontinuous, then its astral recession cone must be closed: \begin{theorem} \label{thm:rescone-closed} Let $F:\extspace\rightarrow\Rext$ be convex and lower semicontinuous. Then $\aresconeF$ is a closed (in $\extspace$) convex astral cone. \end{theorem} \begin{proof} That $\aresconeF$ is a convex astral cone was shown in Theorem~\ref{thm:arescone-is-ast-cvx-cone}, so it only remains to show that it is also closed. If $F\equiv+\infty$, then $\aresconeF=\extspace$, which is closed. Therefore, we assume henceforth that $F\not\equiv+\infty$. Let $\vbar$ be any point in $\clbar{\aresconeF}$, implying there exists a sequence $\seq{\vbar_t}$ in $\aresconeF$ with $\vbar_t\rightarrow\vbar$. To show that $\aresconeF$ is closed, we aim to show that $\vbar\in\aresconeF$. Let $\ybar$ be any point in $\dom{F}$, which exists since $F\not\equiv+\infty$. For all $t$, let $\xbar_t=\limray{\vbar_t}\plusl\ybar$. Then by sequential compactness, the sequence $\seq{\xbar_t}$ must have a subsequence converging to some point $\xbar\in\extspace$. By discarding all other sequence elements, we can assume that $\xbar_t\rightarrow\xbar$. We then have \[ F(\xbar) \leq \liminf F(\xbar_t) = \liminf F(\limray{\vbar_t}\plusl\ybar) \leq F(\ybar) < +\infty. 
\] The first inequality is because $F$ is lower semicontinuous, and the second is by Proposition~\ref{pr:arescone-def-ez-cons}(\ref{pr:arescone-def-ez-cons:b}) since $\vbar_t\in\aresconeF$. Furthermore, by Theorem~\ref{thm:gen-dom-dir-converg}, $\xbar\in\limray{\vbar}\plusl\extspace$ since $\vbar_t\rightarrow\vbar$; that is, $\xbar=\limray{\vbar}\plusl\zbar$ for some $\zbar\in\extspace$. Since $F(\limray{\vbar}\plusl\zbar)<+\infty$, it now follows by Theorem~\refequiv{thm:recF-equivs}{thm:recF-equivs:a}{thm:recF-equivs:c} that $\vbar\in\aresconeF$, completing the proof. \end{proof} Theorem~\ref{thm:rescone-closed} applies specifically to the extension of any convex function on $\Rn$: \begin{corollary} \label{cor:res-fbar-closed} Let $f:\Rn\rightarrow\Rext$ be convex. Then $\aresconef$ is a closed (in $\extspace$) convex astral cone. \end{corollary} \begin{proof} This follows immediately from Theorem~\ref{thm:rescone-closed} since $\fext$ is convex by Theorem~\ref{thm:fext-convex}, and lower semicontinuous by Proposition~\ref{prop:ext:F}. \end{proof} We next look at the relationship between standard and astral recession cones when working with a convex, lower semicontinuous function $f:\Rn\rightarrow\Rext$. We first show that $\fext$'s astral recession cone, when restricted to $\Rn$, is the same as $f$'s standard recession cone. \begin{proposition} \label{pr:f:1} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Then $(\aresconef)\cap\Rn = \resc{f}$. \end{proposition} \begin{proof} Let $\vv\in\resc{f}\subseteq\Rn$. Then for all $\xx\in\Rn$, $\fext(\vv+\xx)=f(\vv+\xx)\leq f(\xx)$, where the equality is by Proposition~\ref{pr:h:1}(\ref{pr:h:1a}) since $f$ is lower semicontinuous, and the inequality is because $\vv\in\resc{f}$. Therefore, $\vv\in\aresconef$ by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:b}. Thus, $\resc{f}\subseteq(\aresconef)\cap\Rn$. 
For the reverse inclusion, suppose now that $\vv\in(\aresconef)\cap\Rn$. Then for all $\xx\in\Rn$ and for all $\lambda\in\Rpos$, we have \[ f(\lambda\vv+\xx) = \fext(\lambda\vv+\xx) \leq \fext(\xx) = f(\xx). \] The two equalities are by Proposition~\ref{pr:h:1}(\ref{pr:h:1a}). The inequality is by Proposition~\ref{pr:arescone-def-ez-cons}(\ref{pr:arescone-def-ez-cons:b}) since $\vv\in\aresconef$. Therefore, $\vv\in\resc{f}$, completing the proof. \end{proof} Combined with general properties of astral cones, this then shows that the following inclusions also hold: \begin{proposition} \label{pr:repres-in-arescone} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Then \[ \resc{f} \subseteq \represc{f} \subseteq \rescbar{f} \subseteq \aresconef. \] \end{proposition} \begin{proof} Proposition~\ref{pr:f:1} implies that $\resc{f} \subseteq \aresconef$. Since $\aresconef$ is closed in $\extspace$ (by Corollary~\ref{cor:res-fbar-closed}), this further implies that this set must also include $\resc{f}$'s closure, $\rescbar{f}$. By Proposition~\ref{pr:resc-cone-basic-props}, $\resc{f}$ is a convex cone in $\Rn$. Therefore, Corollary~\ref{cor:a:1} immediately yields that $\resc{f} \subseteq \represc{f} \subseteq \rescbar{f}$, completing the proof. \end{proof} By Corollary~\ref{cor:a:1}, $\represc{f}$ is the same as $\acone{(\resc{f})}$, the astral conic hull of $\resc{f}$. Since $\aresconef$ is a convex astral cone that includes $\resc{f}$, and since $\represc{f}$ is thus the smallest set with this property, we can think of $\represc{f}$ as the ``smallest'' that $\aresconef$ can be. Later, in Section~\ref{subsec:cond-for-cont}, we will see that $\represc{f}$ and its relationship to $\aresconef$ play critically important roles in characterizing exactly where $\fext$ is continuous.
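For a simple illustration of these inclusions, consider an affine function $f(\xx)=\xx\cdot\uu+b$ for some $\uu\in\Rn$ and $b\in\R$, whose extension is $\fext(\xbar)=\xbar\cdot\uu+b$ (Example~\ref{ex:ext-affine}). It can be checked that
\[
\resc{f} = \Braces{\vv\in\Rn : \vv\cdot\uu\leq 0}
\qquad \text{and} \qquad
\aresconef = \Braces{\vbar\in\extspace : \vbar\cdot\uu\leq 0},
\]
the latter by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:b}, since $\fext(\vbar\plusl\xx)=\vbar\cdot\uu\plusl\xx\cdot\uu+b$ is at most $f(\xx)$ for all $\xx\in\Rn$ exactly when $\vbar\cdot\uu\leq 0$. Since $\resc{f}$ is a closed halfspace, and so polyhedral, its closure $\rescbar{f}$ is equal to $\aresconef$ by Corollary~\ref{cor:halfspace-closure}, and is equal to $\represc{f}$ by Theorem~\refequiv{thm:repcl-polyhedral-cone}{thm:repcl-polyhedral-cone:b}{thm:repcl-polyhedral-cone:c}. Thus, for affine functions, all of the inclusions above hold with equality.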
When these sets are equal to each other, so that $\aresconef=\represc{f}$ (implying $\represc{f} = \rescbar{f} = \aresconef$ by Proposition~\ref{pr:repres-in-arescone}), we say that $f$ is \emph{recessive complete}. When this property holds, $f$'s standard recession cone $\resc{f}$ is, in a sense, sufficiently ``complete'' to entirely fill or determine the astral recession cone $\aresconef$ (via its representational closure, $\represc{f}$). Examples will be given shortly. We can now state precisely which points minimize $\fext$, namely, those points of the form $\ebar\plusl\qq$ where $\ebar$ is an icon in the astral recession cone, and $\qq\in\Rn$ minimizes the reduction $\fshadd$ (defined in Definition~\ref{def:iconic-reduction}). In later sections, especially in Chapter~\ref{sec:univ-red-and-min}, we will develop a much more detailed analysis of the minimizers of $\fext$, but this theorem provides a start: \begin{theorem} \label{thm:arescone-fshadd-min} Let $f:\Rn\rightarrow\Rext$ be convex. Let $\xbar=\ebar\plusl\qq$ where $\ebar\in\corezn$ and $\qq\in\Rn$. Then $\xbar$ minimizes $\fext$ if and only if $\ebar\in\aresconef$ and $\qq$ minimizes $\fshadd$. \end{theorem} \begin{proof} If $f\equiv+\infty$ then $\fext\equiv+\infty$, $\fshadd\equiv+\infty$, and $\aresconef=\extspace$, so the claim follows trivially. Therefore, we assume $f\not\equiv+\infty$, so $\min \fext = \inf f < +\infty$ (by Proposition~\ref{pr:fext-min-exists}). Suppose $\xbar$ minimizes $\fext$. Then $\fext(\ebar\plusl\qq)<+\infty$, so $\ebar\in\aresconef$ by Corollary~\ref{cor:a:4}. If, contrary to the claim, $\qq$ does not minimize $\fshadd$, then there exists $\qq'\in\Rn$ with \[ \fext(\ebar\plusl\qq') = \fshadd(\qq') < \fshadd(\qq) = \fext(\xbar), \] contradicting that $\xbar$ minimizes $\fext$. Conversely, suppose $\ebar\in\aresconef$ and $\qq$ minimizes $\fshadd$. Let $\beta$ be any number in $\R$ with $\beta > \inf f$, and let $\yy\in\Rn$ be such that $f(\yy) < \beta$. 
Then \[ \inf f \leq \fext(\xbar) = \fshadd(\qq) \leq \fshadd(\yy) = \fext(\ebar\plusl\yy) \leq f(\yy) < \beta. \] The first inequality is by Proposition~\ref{pr:fext-min-exists}, the second is because $\qq$ minimizes $\fshadd$, and the third is by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:b}. Since this holds for all $\beta>\inf f$, $\fext(\xbar)=\inf f$, so $\xbar$ minimizes $\fext$. \end{proof} We end this section with two examples of functions and their astral recession cones as an illustration of some of the notions presented above. \begin{example} \label{ex:negx1-else-inf-cont} In $\R^2$, let $f$ be the function given in \eqref{eqn:ex:1} as part of Example~\ref{ex:negx1-else-inf}. It can be checked that this function's standard recession cone is the set \[ \resc{f} = \Rpos^2 = \Braces{ \vv\in\R^2 : \vv\cdot\ee_1 \geq 0 \textrm{ and } \vv\cdot\ee_2 \geq 0 }. \] The function's extension is \[ \fext(\xbar) = \begin{cases} \xbar\cdot(-\ee_1) & \text{if $\xbar\cdot\ee_2\geq 0$} \\ +\infty & \text{otherwise,} \end{cases} \] as can be seen by combining Example~\ref{ex:ext-affine}, and Corollaries~\ref{cor:ext-restricted-fcn} and~\ref{cor:halfspace-closure}. We claim that $\fext$'s astral recession cone is \begin{equation} \label{eq:ex:negx1-else-inf-cont:1} \aresconef = \Braces{ \vbar\in\extspac{2} : \vbar\cdot\ee_1 \geq 0 \textrm{ and } \vbar\cdot\ee_2 \geq 0 }. \end{equation} This is because if $\vbar\cdot\ee_1$ and $\vbar\cdot\ee_2$ are both nonnegative, then for all $\xx\in\R^2$, it can be argued that $\fext(\vbar\plusl\xx)\leq f(\xx)$, implying that $\vbar\in\aresconef$ by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:b}. 
Otherwise, if either $\vbar\cdot\ee_1$ or $\vbar\cdot\ee_2$ is negative, then it can be argued that $\fext(\limray{\vbar}\plusl\xx)=+\infty$ for all $\xx\in\R^2$, implying that $\vbar\not\in\aresconef$ by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:d}. The set $\resc{f}$ is the intersection of two closed halfspaces. Therefore, combining Corollaries~\ref{cor:clos-int-sets} and~\ref{cor:halfspace-closure}, its closure can be seen to be the same as $\aresconef$. Further, since $\resc{f}$ is polyhedral, its closure is equal to $\represc{f}$, by Theorem~\refequiv{thm:repcl-polyhedral-cone}{thm:repcl-polyhedral-cone:b}{thm:repcl-polyhedral-cone:c}. Thus, $\represc{f} = \rescbar{f} = \aresconef$, so $f$ is recessive complete. The extension $\fext$ is minimized, for instance, by $\limray{\ee_1}\plusl\ee_2$, $\limray{\ee_2}\plusl\limray{\ee_1}$, and $\limray{(\ee_1+\ee_2)}\plusl\ee_1$. Consistent with Theorem~\ref{thm:arescone-fshadd-min}, the iconic part of each of these is included in $\aresconef$, with its finite part minimizing the corresponding reduction. Also consistent with that theorem, $\limray{\ee_2}\plusl\ee_1$ does not minimize $\fext$, even though its iconic part, $\limray{\ee_2}$, is included in $\aresconef$, since $\ee_1$ does not minimize $\fshad{\limray{\ee_2}}$. \end{example} \begin{example} \label{ex:x1sq-over-x2} For $\xx\in\R^2$, let \begin{equation} \label{eqn:curve-discont-finiteev-eg} f(\xx) = f(x_1,x_2) = \left\{ \begin{array}{cl} {x_1^2}/{x_2} & \mbox{if $x_2 > |x_1|$} \\ 2|x_1| - x_2 & \mbox{otherwise.} \\ \end{array} \right. \end{equation} This function is convex, closed, proper, finite everywhere, and continuous everywhere. It is also nonnegative everywhere. It can be checked that $f$'s standard recession cone is the closed ray $\resc{f}=\{ \lambda \ee_2 : \lambda\in\Rpos \}$. (This can be seen by noting, for fixed $x_1\in\R$, that $f(x_1,x_2)$ is nonincreasing as a function of $x_2$.
On the other hand, if $\vv$ is not a nonnegative multiple of $\ee_2$, then $f(\vv)>0=f(\zero)$, so $\vv\not\in\resc{f}$.) In this case, $\resc{f}$'s representational and topological closures are the same, namely, \begin{equation} \label{eq:ex:x1sq-over-x2:1} \represc{f} = \rescbar{f} = (\resc{f})\cup\{\limray{\ee_2}\} = \Braces{ \alpha \ee_2 : \alpha\in\Rextpos }, \end{equation} which, by Proposition~\ref{pr:repres-in-arescone}, is included in $\aresconef$. It can further be checked that $\fshad{\limray{\ee_2}}(\xx)=\fext(\limray{\ee_2}\plusl\xx)=0$ for all $\xx\in\R^2$. For all $\ybar\in\extspac{2}$, this implies, by Theorem~\ref{thm:d4}(\ref{thm:d4:c}), that $\fext(\limray{\ee_2}\plusl\ybar\plusl\xx) = 0 \leq f(\xx)$, for $\xx\in\R^2$. This means, by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:b}, that any astral point of the form $\limray{\ee_2}\plusl\ybar$ is in $\aresconef$. Moreover, it can be argued that $\aresconef$ cannot include any additional points without violating Proposition~\ref{pr:f:1}. Thus, in summary, \[ \aresconef = \Braces{ \lambda \ee_2 : \lambda \in\Rpos } \cup \Bracks{\limray{\ee_2} \plusl \extspac{2}}. \] This set includes points that are outside the set given in \eqref{eq:ex:x1sq-over-x2:1}, such as $\limray{\ee_2}\plusl\ee_1$ and $\limray{\ee_2}\plusl\limray{\ee_1}$. Therefore, $f$ is not recessive complete. Consistent with Theorem~\ref{thm:arescone-fshadd-min}, $\fext$ is minimized, for instance, by $\ee_2$, $\limray{\ee_2}\plusl\ee_1$, and $\limray{\ee_2}\plusl\limray{\ee_1}$, but not by $\limray{\ee_1}\plusl\limray{\ee_2}$ or $\limray{\ee_1+\ee_2}$. \end{example} \subsection{A dual characterization} \label{sec:astral-cone-dual} We will often find it useful to rely on another fundamental characterization of the astral recession cone in terms of the function's dual properties. 
As seen in \Cref{pr:rescpol-is-con-dom-fstar}, the standard recession cone of a closed, proper, convex function $f$ is the (standard) polar of $\cone(\dom{\fstar})$. This fact extends analogously to astral functions and their astral recession cones via the astral polar operation: \begin{theorem} \label{thm:arescone-is-conedomFstar-pol} Let $F:\extspace\rightarrow\Rext$ with $F\not\equiv+\infty$. Then the following hold: \begin{letter-compact} \item \label{thm:arescone-is-conedomFstar-pol:a} $\aresconeF\subseteq\apolconedomFstar$. \item \label{thm:arescone-is-conedomFstar-pol:b} If, in addition, $F=\Fdub$, then $\aresconeF=\apolconedomFstar$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:arescone-is-conedomFstar-pol:a}):} Suppose, by way of contradiction, that some point $\vbar$ is in $\aresconeF$ but not $\apolconedomFstar$. Since $\Fstar$ is convex (Proposition~\ref{pr:conj-is-convex}) and so has a convex domain, and by \Cref{pr:scc-cone-elts}(\ref{pr:scc-cone-elts:c:new}), $\cone(\dom{\Fstar})$ consists of the origin together with all positive scalar multiples of points in $\dom{\Fstar}$. Thus, because $\vbar\not\in\apolconedomFstar$, there must exist $\uu\in\dom{\Fstar}$ and $\lambda\in\Rstrictpos$ with $\vbar\cdot(\lambda\uu)>0$, implying $\vbar\cdot\uu>0$. We then have, for $\xbar\in\extspace$, \begin{equation} \label{eq:thm:arescone-is-conedomFstar-pol:1} F(\limray{\vbar}\plusl\xbar) \geq \Fdub(\limray{\vbar}\plusl\xbar) \geq -\Fstar(\uu) \plusd (\limray{\vbar}\cdot\uu\plusl\xbar\cdot\uu) = +\infty. \end{equation} The first inequality is by Theorem~\ref{thm:fdub-sup-afffcns}. The second is by definition of dual conjugate (Eq.~\ref{eq:psistar-def:2}), applied to $\Fstar$. The equality is because $-\Fstar(\uu)>-\infty$ (since $\uu\in\dom{\Fstar}$) and because $\limray{\vbar}\cdot\uu=+\infty$, as $\vbar\cdot\uu>0$. Let $\ybar$ be any point in $\dom{F}$, which exists since $F\not\equiv+\infty$.
We then obtain the following contradiction: \[ +\infty = F(\limray{\vbar}\plusl\ybar) \leq F(\ybar) < +\infty. \] The equality is from \eqref{eq:thm:arescone-is-conedomFstar-pol:1}. The first inequality is by Proposition~\ref{pr:arescone-def-ez-cons}(\ref{pr:arescone-def-ez-cons:b}) since $\vbar\in\aresconeF$. And the last inequality is because $\ybar\in\dom{F}$. This completes the proof. \pfpart{Part~(\ref{thm:arescone-is-conedomFstar-pol:b}):} Suppose $F=\Fdub$, which is convex by Theorem~\ref{thm:dual-conj-cvx}. Let $\vbar\in\apolconedomFstar$; we aim to show $\vbar\in\aresconeF$. Combined with part~(\ref{thm:arescone-is-conedomFstar-pol:a}), this will prove the claim. Let $\xbar\in\extspace$ and $\uu\in\Rn$. We claim that \begin{equation} \label{eq:thm:arescone-is-conedomFstar-pol:2} -\Fstar(\uu) \plusd (\limray{\vbar}\cdot\uu\plusl\xbar\cdot\uu) \leq -\Fstar(\uu) \plusd \xbar\cdot\uu. \end{equation} If $\Fstar(\uu)=+\infty$, then this is immediate since both sides of the inequality are $-\infty$. Otherwise, $\uu\in\dom{\Fstar}\subseteq\cone(\dom{\Fstar})$ so $\vbar\cdot\uu\leq 0$ (since $\vbar\in\apolconedomFstar$), implying $\limray{\vbar\cdot\uu}\leq 0$ and so that \eqref{eq:thm:arescone-is-conedomFstar-pol:2} holds in this case as well (by Propositions~\ref{pr:i:5}\ref{pr:i:5ord} and~\ref{pr:plusd-props}\ref{pr:plusd-props:f}). Thus, for all $\xbar\in\extspace$, \begin{align*} F(\limray{\vbar}\plusl\xbar) &= \Fdub(\limray{\vbar}\plusl\xbar) \\ &= \sup_{\uu\in\Rn} \Bracks{ -\Fstar(\uu) \plusd (\limray{\vbar}\cdot\uu\plusl\xbar\cdot\uu) } \\ &\leq \sup_{\uu\in\Rn} \Bracks{ -\Fstar(\uu) \plusd \xbar\cdot\uu } \\ &= \Fdub(\xbar) = F(\xbar). \end{align*} The first and last equalities are by our assumption that $F=\Fdub$. The second and third equalities are by definition of dual conjugate (Eq.~\ref{eq:psistar-def:2}). The inequality is by \eqref{eq:thm:arescone-is-conedomFstar-pol:2}. 
From Theorem~\refequiv{thm:recF-equivs}{thm:recF-equivs:a}{thm:recF-equivs:b}, it follows that $\vbar\in\aresconeF$, completing the proof. \qedhere \end{proof-parts} \end{proof} As a corollary, we immediately obtain: \begin{corollary} \label{cor:res-pol-char-red-clsd} Let $f:\Rn\rightarrow\Rext$ be convex and reduction-closed, with $f\not\equiv+\infty$. Then $\aresconef=\apolconedomfstar$. \end{corollary} \begin{proof} Since $f$ is convex and reduction-closed, we have $\fext=\fdub$ by Theorem~\ref{thm:dub-conj-new}. Noting also that $\fextstar=\fstar$ (Proposition~\ref{pr:fextstar-is-fstar}), the claim then follows directly from Theorem~\ref{thm:arescone-is-conedomFstar-pol}(\ref{thm:arescone-is-conedomFstar-pol:b}) applied with $F=\fext$. \end{proof} If $f$ is not reduction-closed, then the sets $\aresconef$ and $\apolconedomfstar$ need not be equal. Here is an example: \begin{example} Consider the function $f$ given in \eqref{eqn:ex:1} as in Examples~\ref{ex:negx1-else-inf} and~\ref{ex:negx1-else-inf-cont}. Let $\vbar=\limray{\ee_1}\plusl\limray{(-\ee_2)}$. Then $\vbar\in\apolconedomfstar$ (since, from Eq.~\ref{eqn:ex:1:conj}, if $\uu\in\dom{\fstar}$ then $\uu\cdot\ee_1=-1$, implying $\vbar\cdot\uu=-\infty$). But $\vbar$ is not in $\fext$'s astral recession cone, $\aresconef$, which was given in \eqref{eq:ex:negx1-else-inf-cont:1} (since $\vbar\cdot\ee_2=-\infty$). \end{example} Nonetheless, we can generalize Corollary~\ref{cor:res-pol-char-red-clsd} so that it holds for all convex functions not identically $+\infty$, even if they are not reduction-closed, using the technique developed in Section~\ref{sec:exp-comp}, with $\cone(\dom{\fstar})$ replaced by $\slopes{f}$: \begin{corollary} \label{cor:ares-is-apolslopes} Let $f:\Rn\rightarrow\Rext$ be convex with $f\not\equiv+\infty$. Then $\aresconef=\apolslopesf$. \end{corollary} \begin{proof} Let $f'=\expex\circ f$. Then \[ \aresconef = \aresconefp = \apolconedomfpstar = \apolslopesf.
\] The first equality is by \Cref{pr:j:2}(\ref{pr:j:2d}). The second is by Corollary~\ref{cor:res-pol-char-red-clsd} applied to $f'$ (which is convex and lower-bounded by Proposition~\ref{pr:j:2}\ref{pr:j:2a}, and therefore is reduction-closed by \Cref{pr:j:1}\ref{pr:j:1c}). The third is by \Cref{thm:dom-fpstar}. \end{proof} \subsection{Recursive formulation for the astral recession cone} \label{sec:astral-cone} As seen in Theorem~\ref{thm:arescone-fshadd-min}, every minimizer of $\fext$ must involve points in its astral recession cone. Thus, to minimize $\fext$ (as well as $f$), it will be helpful to understand the structure of such points, and how to construct them. We will see how this is done in this section using the methods developed earlier based on projections and reductions. Let $\vv\in\Rn$, and let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. We begin by showing how points in $\aresconef$, the astral recession cone of $\fext$, relate to points in $\aresconeg$, the astral recession cone of $\gext$. \begin{theorem} \label{thm:f:3} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Let $\vv\in\Rn$ and let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Then: \begin{letter-compact} \item \label{thm:f:3a} $\aresconef \subseteq \aresconeg$. (Consequently, $\resc{f} \subseteq \resc{g}$.) \item \label{thm:f:3b} Suppose $\ybar=\limray{\vv}\plusl\zbar$ for some $\zbar\in\extspace$. Then $\ybar\in\aresconef$ if and only if $\vv\in\resc{f}$ and $\zbar\in\aresconeg$. \end{letter-compact} \end{theorem} Part~(\ref{thm:f:3b}) of this theorem provides a kind of recursive characterization of all of the points comprising $\aresconef$: The points in $\aresconef$ that are also in $\Rn$ are exactly those in the standard recession cone $\resc{f}$, by Proposition~\ref{pr:f:1}.
All of the other points in $\aresconef$ can be entirely enumerated by considering each point $\vv\in\resc{f}$, forming the reduction $g$ of $f$ at $\limray{\vv}$, finding $\gext$'s astral recession cone $\aresconeg$, and then adding $\limray{\vv}$ to each element in $\aresconeg$. Thus, \[ \aresconef = (\resc{f}) \cup \bigcup_{\vv\in\resc{f}} \paren{\limray{\vv}\plusl \arescone{\fshadext{\limray{\vv}}}}. \] Alternatively, we can think of part~(\ref{thm:f:3b}), together with Proposition~\ref{pr:f:1}, as providing a test for determining if a given point $\ybar$ is in $\aresconef$: If $\ybar$ is in $\Rn$, then it is in $\aresconef$ if and only if it is in $\resc{f}$. Otherwise, it is in $\aresconef$ if and only if its dominant direction $\vv$ is in $\resc{f}$ and its projection $\ybarperp$ is in $\aresconeg$, as can be determined in a recursive manner. This characterization can also be interpreted in terms of sequences. Suppose some sequence converges to a point $\xbar=\ebar\plusl\qq$ where $\ebar\in\corezn$ and $\qq\in\Rn$, and such that $f$ is eventually bounded above on the sequence (as will be the case if $f$ is actually minimized by the sequence). Then Corollary~\ref{cor:a:4} implies $\ebar\in\aresconef$. So Theorem~\ref{thm:f:3} tells us that, unless $\xbar\in\Rn$, the sequence must have a dominant direction $\vv$ in $\resc{f}$. Moreover, we can project the sequence to the space perpendicular to $\vv$ and form the associated reduction $g$ of $f$ at $\limray{\vv}$. The projected sequence now must converge to $\ebarperp\plusl\qperp$. According to Theorem~\ref{thm:f:3}, $\ebarperp\in\aresconeg$, so we can apply this same reasoning again to the projected sequence, so that either the projected sequence converges to a point in $\Rn$, or its dominant direction is in $\resc{g}$. Continuing in this fashion, we can effectively characterize all of the dominant directions of the sequence. 
\begin{proof}[Proof of \Cref{thm:f:3}] If $f\equiv +\infty$, then $g\equiv+\infty$, so $\fext=\gext\equiv +\infty$, $\aresconef=\aresconeg=\extspace$, and $\resc{f}=\resc{g}=\Rn$, implying all the parts of the theorem. We therefore assume henceforth that $f\not\equiv+\infty$. \begin{proof-parts} \pfpart{Part~(\ref{thm:f:3a}):} If $\vv\not\in\resc{f}$, then $\gext\equiv +\infty$ by Theorem~\ref{thm:i:4}, again implying $\aresconeg=\extspace$ and trivially yielding the claim. So suppose $\vv\in\resc{f}$ and $f\not\equiv+\infty$. Let $\ybar\in\aresconef$. Then by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:d}, there exists $\rr\in\Rn$ for which $\fext(\limray{\ybar} \plusl \rr)<+\infty$. This implies \[ \gext(\limray{\ybar} \plusl \rr) \leq \fext(\limray{\ybar} \plusl \rr) < +\infty, \] where the first inequality is by Corollary~\ref{cor:i:1}(\ref{cor:i:1c}). Therefore, $\ybar\in\aresconeg$, again by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:d}. Having proved $\aresconef \subseteq \aresconeg$, it now follows, when combined with Proposition~\ref{pr:f:1}, that \[ \resc{f} = (\aresconef)\cap\Rn \subseteq (\aresconeg)\cap\Rn = \resc{g}. \] \pfpart{Part~(\ref{thm:f:3b}):} Suppose first that $\vv\in\resc{f}$ and that $\zbar\in\aresconeg$. Then for all $\xx\in\Rn$, \[ \fext(\ybar\plusl \xx) = \fext(\limray{\vv}\plusl\zbar\plusl \xx) = \gext(\zbar\plusl \xx) \leq g(\xx) \leq f(\xx). \] The second equality is by Corollary~\ref{cor:i:1}(\ref{cor:i:1b}). The first inequality is by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:b}, since $\zbar\in\aresconeg$. The second inequality is by Proposition~\ref{pr:d2}(\ref{pr:d2:b}) (since $\vv\in\resc{f}$). Therefore, $\ybar\in\aresconef$, again by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:b}. For the converse, suppose for the rest of the proof that $\ybar\in\aresconef$. 
We argue separately that $\vv\in\resc{f}$ and $\zbar\in\aresconeg$. Let $\qq\in\dom{f}$, which exists since $f\not\equiv+\infty$. Then \[ \fext(\limray{\vv}\plusl\zbar\plusl\qq) = \fext(\ybar\plusl\qq) \leq f(\qq) <+\infty, \] where the first inequality is by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:b}, since $\ybar\in\aresconef$. Therefore, $\vv\in(\aresconef)\cap\Rn = \resc{f}$ by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:e} and Proposition~\ref{pr:f:1}. Finally, we have \[ \gext(\limray{\zbar}\plusl\qq) = \fext(\limray{\vv}\plusl\limray{\zbar}\plusl\qq) = \fext(\limray{\ybar}\plusl\qq) \leq f(\qq) < +\infty. \] The first equality is by Corollary~\ref{cor:i:1}(\ref{cor:i:1b}). The second equality is because $\limray{\ybar}=\limray{\vv}\plusl\limray{\zbar}$ by Proposition~\ref{pr:i:8}(\ref{pr:i:8d},\ref{pr:i:8-infprod}). The first inequality is by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:c}, since $\ybar\in\aresconef$. Thus, $\zbar\in\aresconeg$ by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:d}. \qedhere \end{proof-parts} \end{proof} Using the recursive formulation for points in the astral recession cone given in Theorem~\ref{thm:f:3}(\ref{thm:f:3b}), we can generalize some of our earlier results for astronic reductions at an astron associated with a point in $\resc{f}$ to iconic reductions at an icon in $\aresconef$. The next theorem generalizes Theorem~\ref{thm:e1}(\ref{thm:e1:a}) to compute the conjugate of the reduction $\fshadd$ for any icon $\ebar$ in the astral recession cone $\aresconef$. \begin{theorem} \label{thm:conj-of-iconic-reduc} Let $f:\Rn\rightarrow\Rext$ be convex, $f\not\equiv+\infty$, and let $\ebar\in\corezn\cap (\aresconef)$. Let $g=\fshad{\ebar}$ be the reduction of $f$ at $\ebar$. 
Then, for all $\uu\in\Rn$, \[ \gstar(\uu) = \begin{cases} \fstar(\uu) & \text{if $\ebar\cdot\uu=0$,} \\ +\infty & \text{otherwise.} \end{cases} \] \end{theorem} \begin{proof} Proof is by induction on the astral rank of $\ebar$. That is, by induction on $k=0,\ldots,n$, we show that if $\ebar$ has astral rank $k$, then the theorem holds. In the base case that $k=0$, we must have $\ebar\in\Rn$, implying $\ebar=\zero$ (by Proposition~\ref{pr:i:8}\ref{pr:i:8b}). Thus, $g(\xx)=\fext(\xx)=(\lsc f)(\xx)$, for $\xx\in\Rn$ (by Proposition~\ref{pr:h:1}\ref{pr:h:1a}), so $\gstar=(\lsc f)^*=\fstar$ (by \Cref{pr:conj-props}\ref{pr:conj-props:e}), yielding the claim in this case. For the inductive step, suppose $k\geq 1$ and that $\ebar$ has astral rank $k$. Then we can write $\ebar=\limray{\vv}\plusl\ebarperp$ where $\vv$ is $\ebar$'s dominant direction, and $\ebarperp$ is $\ebar$'s projection orthogonal to $\vv$. Let $h=\fshadv$. Note that by Proposition~\ref{pr:j:1}(\ref{pr:j:1a}), \begin{equation} \label{eq:thm:conj-of-iconic-reduc:1} \hshad{\ebarperp} = \fshad{\limray{\vv}\plusl\ebarperp} = \fshad{\ebar} = g. \end{equation} By Theorem~\ref{thm:f:3}(\ref{thm:f:3b}), $\vv\in\resc{f}$ and $\ebarperp\in\aresconeh$. Therefore, by Theorem~\ref{thm:e1}(\ref{thm:e1:a}), \begin{equation} \label{eq:thm:conj-of-iconic-reduc:2} \hstar(\uu) = \begin{cases} \fstar(\uu) & \text{if $\vv\cdot\uu=0$} \\ +\infty & \text{otherwise} \end{cases} \end{equation} for $\uu\in\Rn$. Also, $\ebarperp$ has astral rank $k-1$ (by Proposition~\ref{pr:h:6}). Thus, for $\uu\in\Rn$, \begin{eqnarray*} \gstar(\uu) = (\hshad{\ebarperp})^*(\uu) &=& \begin{cases} \hstar(\uu) & \text{if $\ebarperp\cdot\uu=0$} \\ +\infty & \text{otherwise} \end{cases} \\ &=& \begin{cases} \fstar(\uu) & \text{if $\vv\cdot\uu=0$ and $\ebarperp\cdot\uu=0$} \\ +\infty & \text{otherwise.} \end{cases} \end{eqnarray*} The first equality is by \eqref{eq:thm:conj-of-iconic-reduc:1}. 
The second is by inductive hypothesis, applied to $h$ (which is convex with $h\not\equiv+\infty$ by Theorem~\ref{thm:a10-nunu} and Proposition~\ref{pr:d2}\ref{pr:d2:c}). And the last equality is by \eqref{eq:thm:conj-of-iconic-reduc:2}. Since $\ebar\cdot\uu=\limray{\vv}\cdot\uu\plusl\ebarperp\cdot\uu=0$ if and only if $\vv\cdot\uu=0$ and $\ebarperp\cdot\uu=0$, this completes the induction and the proof. \end{proof} Along similar lines, we next generalize Theorem~\ref{thm:a10-nunu} to show that, if $\ebar=\VV\omm$ is an icon in $\aresconef$, then $\fshadd$ is the lower semicontinuous hull of a function that can be expressed explicitly as an infimum analogous to \eqref{eqn:gtil-defn}. \begin{theorem} \label{thm:icon-reduc-lsc-inf} Let $f:\Rn\rightarrow\Rext$ be convex. Let $\VV\in\R^{n\times k}$, let $\ebar=\VV\omm$, and assume $\ebar\in\aresconef$. Let $g=\fshadd$, and let $\gtil:\Rn\rightarrow\Rext$ be defined by \begin{equation} \label{eq:thm:icon-reduc-lsc-inf:3} \gtil(\xx)=\inf_{\yy\in\colspace \VV} f(\xx+\yy) \end{equation} for $\xx\in\Rn$. Then $\gtil$ is convex, and $g = \lsc \gtil$. \end{theorem} \begin{proof} As a function of $\rpair{\bb}{\xx}\in\Rk\times\Rn$, $f(\VV\bb+\xx)$ is convex by \Cref{roc:thm5.7:fA}. Since, for $\xx\in\Rn$, $\gtil(\xx)= \inf_{\bb\in\Rk} f(\VV\bb+\xx)$, it then follows that $\gtil$ is convex by \Cref{roc:thm5.7:Af}. For the remainder, we prove by induction on $k=0,\ldots,n$ that $g=\lsc \gtil$ for every convex function $f:\Rn\rightarrow\Rext$ and every matrix $\VV\in\R^{n\times k}$ with $\ebar=\VV\omm\in\aresconef$. In the base case that $k=0$, we must have $\ebar=\zero$ and $\colspace \VV = \{\zero\}$. Hence, $\gtil=f$ and $g = \fshad{\zero} = \lsc f = \lsc \gtil$ by Proposition~\ref{pr:h:1}(\ref{pr:h:1a}). For the inductive step, suppose $k\geq 1$. Then $\VV=[\vv,\VV']$ where $\vv$ is the first column of $\VV$, and $\VV'$ are the remaining columns. Thus, $\ebar=\VV\omm=\limray{\vv}\plusl \ebar'$ where $\ebar'=\VV' \omm$. 
Let $h=\fshadv$. Then, similar to the proof of Theorem~\ref{thm:conj-of-iconic-reduc}, $\vv\in\resc{f}$ and $\ebar'\in\aresconeh$ by Theorem~\ref{thm:f:3}(\ref{thm:f:3b}), and \begin{equation} \label{eq:thm:icon-reduc-lsc-inf:1} \hshad{\ebar'} = \fshad{\limray{\vv}\plusl\ebar'} = \fshad{\ebar} = g \end{equation} by Proposition~\ref{pr:j:1}(\ref{pr:j:1a}). Let $\htil:\Rn\rightarrow\Rext$ be defined by \begin{equation} \label{eq:thm:icon-reduc-lsc-inf:2} \htil(\xx)=\inf_{\lambda\in\R} f(\xx+\lambda \vv) \end{equation} for $\xx\in\Rn$. Then by Theorem~\ref{thm:a10-nunu}, $\htil$ is convex and $h=\lsc \htil$. Therefore, $\hext = \lschtilext = \htilext$ (by \Cref{pr:h:1}\ref{pr:h:1aa}). Consequently, by \eqref{eq:thm:icon-reduc-lsc-inf:1}, for $\xx\in\Rn$, \[ g(\xx) = \hext(\ebar'\plusl\xx) = \htilext(\ebar'\plusl\xx) = \htilshad{\ebar'}(\xx). \] That is, $g=\htilshad{\ebar'}$. Consequently, we can apply our inductive hypothesis to $\htil$ and $\VV'$ yielding that $g=\lsc \gtil$ where \begin{eqnarray*} \gtil(\xx) &=& \inf_{\yy\in\colspace \VV'} \htil(\xx+\yy) \\ &=& \inf_{\yy\in\colspace \VV'} \inf_{\lambda\in\R} f(\xx+\yy+\lambda\vv) \\ &=& \inf_{\yy\in\colspace \VV} f(\xx+\yy). \end{eqnarray*} The second equality is by \eqref{eq:thm:icon-reduc-lsc-inf:2}. The last equality combines the infima which together range over all of $\colspace \VV$. \end{proof} As a consequence, we can express the relative interior of the effective domain of any reduction at an icon in $\aresconef$: \begin{theorem} \label{thm:ri-dom-icon-reduc} Let $f:\Rn\rightarrow\Rext$ be convex. Let $\VV\in\R^{n\times k}$, and let $\ebar=\VV\omm$ be an icon in $\aresconef$. Then \[ \ri(\dom{\fshadd}) = \ri(\dom{f}) + (\colspace\VV) \supseteq \ri(\dom{f}). \] \end{theorem} \begin{proof} Let $g=\fshadd$ and let $\gtil$ be as defined in \eqref{eq:thm:icon-reduc-lsc-inf:3}. Then $g=\lsc \gtil$ by Theorem~\ref{thm:icon-reduc-lsc-inf}. 
Note that, by \Cref{pr:lsc-props}(\ref{pr:lsc-props:c}), \begin{equation} \label{eq:thm:ri-dom-icon-reduc:1} \ri(\dom{g}) = \ri(\dom{\gtil}). \end{equation} From its form, for $\xx\in\Rn$, $\gtil(\xx)<+\infty$ if and only if there exists $\yy\in\colspace\VV$ with $f(\xx+\yy)<+\infty$. Thus, \[ \dom{\gtil} = (\dom{f}) + (\colspace\VV), \] and therefore, \[ \ri(\dom{\gtil}) = \ri(\dom{f}) + \ri(\colspace\VV) = \ri(\dom{f}) + (\colspace\VV) \] by \Cref{pr:ri-props}(\ref{pr:ri-props:roc-cor6.6.2}), and since $\colspace\VV$, being a linear subspace, is its own relative interior. Combined with \eqref{eq:thm:ri-dom-icon-reduc:1}, this completes the proof. \end{proof} \subsection{Finding all minimizers} \label{sec:find-all-min} \begin{figure}[t] \underline{Given:} \begin{itemize} \item function $f:\Rn\rightarrow\Rext$ that is convex and lower semicontinuous \item test point $\xbar = \limrays{\vv_1,\ldots,\vv_k} \plusl \qq$ where $\vv_1,\ldots,\vv_k,\qq\in\Rn$ \end{itemize} \smallskip \underline{Return:} \true\ if $\xbar$ minimizes $\fext$; \false\ otherwise \smallskip \underline{Procedure:} \begin{itemize} \item if $k=0$ then \begin{itemize} \item if $\qq$ minimizes $f$ then return \true \item else return \false \end{itemize} \item else \begin{itemize} \item if $\vv_1\not\in \resc{f}$ then return \false \item else \begin{itemize} \item let $g = \fshadv$ \item recursively test if $\limrays{\vv_2,\ldots,\vv_k} \plusl \qq$ minimizes $\gext$, and return the result \end{itemize} \end{itemize} \end{itemize} \caption{A procedure for testing if a given point minimizes $\fext$.} \label{fig:min-test-proc} \end{figure} Combining the results developed above, we can now provide a procedure for testing if a given astral point minimizes $\fext$. Such a procedure is shown in Figure~\ref{fig:min-test-proc}. The input is a function $f$ and an explicitly represented test point $\xbar = \limrays{\vv_1,\ldots,\vv_k} \plusl \qq$. 
The procedure determines if $\xbar$ minimizes $\fext$ using only two more basic primitives, both operating on standard points and functions over $\Rn$: one for testing if a point in $\Rn$ minimizes an ordinary convex function $f:\Rn\rightarrow\Rext$, and one for testing if a vector in $\Rn$ is in the standard recession cone of such a function. The operation and correctness of this procedure follow directly from our development regarding minimizers and reductions: If $k=0$ then $\xbar=\qq\in\Rn$, so $\xbar$ minimizes $\fext$ if and only if $\qq$ minimizes the standard function $f$ (which we have assumed is lower semicontinuous). Otherwise, if $k>0$, then $\xbar=\limray{\vv_1} \plusl \zbar$ where $\zbar = \limrays{\vv_2,\ldots,\vv_k} \plusl \qq$. If $\vv_1\not\in\resc{f}$, then $\limrays{\vv_1,\ldots,\vv_k}$ cannot be in $\aresconef$, by Theorem~\ref{thm:f:3}(\ref{thm:f:3b}), and therefore $\xbar$ cannot minimize $\fext$, by Theorem~\ref{thm:arescone-fshadd-min}. Otherwise, with $g$ as defined in the figure, if $\vv_1\in\resc{f}$, then $\gext(\zbar)=\fext(\limray{\vv_1}\plusl\zbar)=\fext(\xbar)$ and $\min \gext = \inf g = \inf f = \min \fext$ by Theorem~\ref{thm:d4} and Proposition~\ref{pr:fext-min-exists}. Therefore, $\xbar$ minimizes $\fext$ if and only if $\zbar$ minimizes $\gext$. 
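The recursion implemented by this procedure can be sketched in ordinary code. In the sketch below, each function is represented by three hypothetical oracles (\texttt{minimizes}, \texttt{in\_resc}, \texttt{reduce}) standing in for the two primitives just described together with the formation of reductions; the toy instance $f(x_1,x_2)=e^{-x_1}+x_2^2$, for which $\resc{f}=\{\zz : z_1\geq 0,\, z_2=0\}$, $f$ has no finite minimizer, and the reduction at $\limray{\ee_1}$ is $g(\xx)=x_2^2$, is our own illustration and not part of the formal development.

```python
# Sketch of the recursive test of Figure "min-test-proc".  Each "function" is
# a dict of three hypothetical oracles over ordinary points of R^n:
#   'minimizes(q)' -- does q minimize the function?
#   'in_resc(v)'   -- is v in its standard recession cone?
#   'reduce(v)'    -- its reduction at the astron omega*v (for v in resc f).

def minimizes_ext(f, dirs, q):
    """Does xbar = omega*v1 (+) ... (+) omega*vk (+) q minimize fext?"""
    if not dirs:                       # k = 0: xbar = q is an ordinary point
        return f['minimizes'](q)
    v, rest = dirs[0], dirs[1:]
    if not f['in_resc'](v):            # v1 must lie in resc f ...
        return False
    g = f['reduce'](v)                 # ... else form g = reduction at omega*v1
    return minimizes_ext(g, rest, q)   # and recurse on the remaining point

# Toy instance (assumed, for illustration): f(x1, x2) = exp(-x1) + x2**2.
# Here inf f = 0 is not attained; resc f = {z : z1 >= 0, z2 = 0}; and the
# reduction of f at omega*e1 is g(x) = x2**2, minimized exactly when x2 = 0.
g_oracles = {
    'minimizes': lambda q: q[1] == 0,
    'in_resc':   lambda v: v[1] == 0,
    'reduce':    lambda v: g_oracles,  # reducing g again leaves it unchanged
}
f_oracles = {
    'minimizes': lambda q: False,      # no finite minimizer
    'in_resc':   lambda v: v[0] >= 0 and v[1] == 0,
    'reduce':    lambda v: g_oracles,  # reduction at omega*v for v = (a,0), a > 0 (assumed)
}

print(minimizes_ext(f_oracles, [(1, 0)], (0, 0)))   # omega*e1 (+) 0 minimizes fext
print(minimizes_ext(f_oracles, [], (0, 0)))         # a finite point does not
print(minimizes_ext(f_oracles, [(0, 1)], (0, 0)))   # e2 is not a recession direction
```

Replacing the oracle callbacks with concrete tests gives the procedure for any function whose reductions and recession cones can be computed in closed form.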
\begin{figure}[t] \underline{Given:} function $f:\Rn\rightarrow\Rext$ that is convex and lower semicontinuous \smallskip \underline{Process:} \begin{itemize} \item $i \leftarrow 0$ \item $g_0 = f$ \item repeat \emph{at least} until $g_i$ has a finite minimizer \begin{itemize} \item $i \leftarrow i+1$ \item let $\vv_i$ be \emph{any} point in $\resc{g_{i-1}}$ \item $g_i = \gishadvi$ \end{itemize} \item $k \leftarrow i$ \item $\ebar=\limrays{\vv_1,\ldots,\vv_k}$ \item let $\qq\in\Rn$ be \emph{any} finite minimizer of $g_k$ \item $\xbar=\ebar\plusl\qq$ \end{itemize} \medskip \underline{Properties:} \begin{itemize} \item $g_k = \fshadd$ \item $\ebar\in (\aresconef)\cap\corezn$ \item $\qq$ minimizes $\fshadd$ \item $\xbar$ minimizes $\fext$ \end{itemize} \caption{\MarkYes A process for finding all astral minimizers of $\fext$.} \label{fig:all-min-proc} \end{figure} Our study so far of the astral recession cone and the minimizers of $\fext$ also yields a general, iterative procedure that, in a sense described below, can find {all} of the minimizers of $\fext$, as we present next. By Theorem~\ref{thm:arescone-fshadd-min}, every minimizer $\xbar=\ebar\plusl\qq$ has an iconic part $\ebar$ that is in the astral recession cone $\aresconef$ and a finite part $\qq$ that minimizes $\fshadd$. To find an icon in $\aresconef$, by Theorem~\ref{thm:f:3}(\ref{thm:f:3b}), we can first find a point $\vv\in\resc{f}$, form the associated reduction $g$ of $f$ at $\limray{\vv}$, and then repeat the process to find a point in $\aresconeg$, eventually choosing an appropriate time to stop. More precisely, and using a bit more notation, we initially let $g_0=f$. On each iteration $i$, we find a vector $\vv_i$ in $\resc{g_{i-1}}$, the standard recession cone of $g_{i-1}$. Then we define $g_i = \gishadvi$ to be the next reduction, in this way, ensuring that the resulting icon $\ebar=\limrays{\vv_1,\ldots,\vv_k}$ formed by the $\vv_i$'s must be in $\aresconef$. 
We can continue this process until we manage to form a reduction $g_k$ that has some finite minimizer $\qq\in\Rn$. By such a construction, $g_k$ actually is equal to the reduction $\fshadd$ at icon $\ebar$, so in fact, $\qq$ minimizes $\fshadd$, which, combined with $\ebar$ being in $\aresconef$, ensures that the point $\xbar=\ebar\plusl\qq$ minimizes $\fext$. We summarize this process in Figure~\ref{fig:all-min-proc}. Although we describe the process in the form of an algorithm, we do not literally mean to suggest that it be implemented on a computer, at least not in this generality. The point, rather, is to reveal the structure of the minimizers of a function in astral space, and how that structure can be related to standard notions from convex analysis. \begin{example} \label{ex:simple-eg-exp-exp-sq} Suppose $f$ is the convex function \begin{equation} \label{eq:simple-eg-exp-exp-sq} f(\xx) = f(x_1,x_2,x_3) = \me^{x_3-x_1} + \me^{-x_2} + (2+x_2-x_3)^2 \end{equation} for $\xx\in\R^3$. For $\zz\in\R^3$ to be in $f$'s standard recession cone, it must satisfy $z_3-z_1\leq 0$, $z_2\geq 0$ and $z_2=z_3$, so that a change in direction $\zz$ cannot cause any term in \eqref{eq:simple-eg-exp-exp-sq} to increase; thus, \begin{equation} \label{eq:simple-eg-exp-exp-sq:rescone} \resc{f} = \Braces{ \zz\in\R^3 : 0\leq z_2=z_3\leq z_1 }. \end{equation} The function's extension can be shown to be continuous everywhere, and is specifically, \begin{equation} \label{eq:simple-eg-exp-exp-sq:fext} \fext(\xbar) = \expex\paren{\xbar\cdot (\ee_3-\ee_1)} + \expex\paren{\xbar\cdot (-\ee_2)} + \paren{2+\xbar\cdot (\ee_2-\ee_3)}^2 \end{equation} for $\xbar\in\extspac{3}$. (Here, $\expex$ is as given in \eqref{eq:expex-defn}; $(\pm\infty)^2=+\infty$ by standard arithmetic over $\Rext$; and $\ee_1,\ee_2,\ee_3$ are the standard basis vectors.) Suppose we apply the process of Figure~\ref{fig:all-min-proc} to $f$. 
On the first iteration, the process chooses any vector $\vv_1$ in the standard recession cone of $g_0=f$, say $\vv_1=\trans{[1,1,1]}$. Next, the reduction $g_1=\genshad{g_0}{\limray{\vv_1}}$ is formed, which is \[ g_1(\xx) = \fext(\limray{\vv_1}\plusl \xx) = \me^{x_3-x_1} + (2+x_2-x_3)^2 \] for $\xx\in\R^3$. Its recession cone is \[ \resc{g_1} = \Braces{ \zz\in\R^3 : z_2=z_3\leq z_1 }, \] so on the next iteration, we can choose any $\vv_2$ in this set, say $\vv_2=\trans{[1,-1,-1]}$. The next reduction $g_2=\genshad{g_1}{\limray{\vv_2}}$ is $g_2(\xx)=(2+x_2-x_3)^2$. This function has finite minimizers, such as $\qq=\trans{[0,0,2]}$. The resulting minimizer of $\fext$ is $\xbar=\ebar\plusl\qq$ where $\ebar=\limray{\vv_1}\plusl\limray{\vv_2}$ is indeed an icon in $\aresconef$ with $\qq$ minimizing $g_2=\fshadd$. \end{example} Returning to our general discussion, the process of Figure~\ref{fig:all-min-proc} is \emph{nondeterministic} in the sense that at various points, choices are made in a way that is entirely arbitrary. This happens at three different points: First, on each iteration of the main loop, an \emph{arbitrary} point $\vv_i$ is selected from $\resc{g_{i-1}}$. Second, this loop must iterate \emph{at least} until $g_i$ has a finite minimizer, but can continue to iterate arbitrarily beyond that point. Third, after terminating the loop, an \emph{arbitrary} finite minimizer $\qq$ of $g_k$ is selected. Clearly, the point $\xbar$ that is eventually computed by the process depends on these arbitrary choices. Nevertheless, in all cases, the resulting point $\xbar$ must be a minimizer of $\fext$. Conversely, if $\xbar$ minimizes $\fext$, then it must be possible for these arbitrary choices to be made in such a way that $\xbar$ is produced (while still respecting the constraints imposed at each step of the process). It is in this sense that the process computes \emph{all} of the minimizers of $\fext$. 
When there exists such a sequence of choices that results in $\xbar$ as the final output or product of the computation, we say that $\xbar$ is a \emph{potential product} of the process. Thus, we are claiming that a point $\xbar\in\extspace$ minimizes $\fext$ if and only if $\xbar$ is a potential product of the process. This is shown formally by the next theorem, whose condition~(\ref{thm:all-min-proc-correct:b}) captures exactly when the point $\xbar$ is a potential product. \begin{theorem} \label{thm:all-min-proc-correct} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Let $\xbar = \limrays{\vv_1,\ldots,\vv_k}\plusl\qq$ where $\vv_1,\ldots,\vv_k,\qq\in\Rn$. Let $g_0=f$ and $g_i = \gishadvi$ for $i=1,\ldots,k$. Then $g_k=\fshadd$, and the following are equivalent: \begin{letter-compact} \item \label{thm:all-min-proc-correct:a} $\xbar$ minimizes $\fext$. \item \label{thm:all-min-proc-correct:b} $\qq$ minimizes $g_k$ and $\vv_i\in\resc{g_{i-1}}$ for $i=1,\ldots,k$. \end{letter-compact} \end{theorem} \begin{proof} Let $\ebar=\limrays{\vv_1,\ldots,\vv_k}$. As preliminary steps, we note by \Cref{pr:icon-red-decomp-astron-red} that each $g_i$ is convex and lower semicontinuous, and that $g_k=\fshadd$. Also, for $i=0,\ldots,k$, let $\ebar_i=\limrays{\vv_{i+1},\ldots,\vv_k}$. Since $\ebar_{i-1}=\limray{\vv_i}\plusl\ebar_i$, and by $g_i$'s definition, Theorem~\ref{thm:f:3}(\ref{thm:f:3b}) implies that, for $i=1,\ldots,k$, $\ebar_{i-1}\in\aresconegsub{{i-1}}$ if and only if $\vv_i\in\resc{g_{i-1}}$ and $\ebar_{i}\in\aresconegsub{{i}}$. \begin{proof-parts} \pfpart{(\ref{thm:all-min-proc-correct:a}) $\Rightarrow$ (\ref{thm:all-min-proc-correct:b}): } Suppose $\xbar$ minimizes $\fext$. Then Theorem~\ref{thm:arescone-fshadd-min} implies that $\qq$ minimizes $\fshadd=g_k$, and also that $\ebar\in\aresconef$, or equivalently, that $\ebar_0\in\aresconegsub{0}$. 
From the preceding remarks, it now follows by a straightforward induction that $\vv_i\in\resc{g_{i-1}}$ and $\ebar_{i}\in\aresconegsub{{i}}$ for $i=1,\ldots,k$. \pfpart{(\ref{thm:all-min-proc-correct:b}) $\Rightarrow$ (\ref{thm:all-min-proc-correct:a}): } Suppose $\qq$ minimizes $g_k=\fshadd$ and that $\vv_i\in\resc{g_{i-1}}$ for $i=1,\ldots,k$. Then by backwards induction on $i=0,\ldots,k$, $\ebar_{i}\in\aresconegsub{{i}}$. The base case, when $i=k$, holds because $\ebar_k=\zero\in\aresconegsub{k}$. For the inductive step, when $i<k$, $\ebar_{i+1}\in\aresconegsub{{i+1}}$ by inductive hypothesis, and $\vv_{i+1}\in\resc{g_i}$ by assumption, so the earlier remark implies $\ebar_{i}\in\aresconegsub{{i}}$. Thus, $\qq$ minimizes $\fshadd$ and $\ebar=\ebar_0\in\aresconegsub{0}=\aresconef$. Therefore, $\xbar$ minimizes $\fext$ by Theorem~\ref{thm:arescone-fshadd-min}. \qedhere \end{proof-parts} \end{proof} Theorem~\ref{thm:all-min-proc-correct} shows that if this process terminates, then the computed point $\xbar$ must minimize $\fext$. But what if the process never terminates? Indeed, it is possible for the process to never terminate, or even to never reach the loop's (optional) termination condition. For instance, the same point $\vv_i=\zero$ (which is in every recession cone) might be chosen on every iteration so that the process never makes any progress toward a solution at all. Also, superficially, it might seem plausible that the process could make poor choices early on that make it impossible to eventually reach a point at which the termination condition is satisfied. We will address these issues later in Section~\ref{sec:ensure-termination}. 
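As a numerical sanity check on this development, the minimizer produced in Example~\ref{ex:simple-eg-exp-exp-sq} can be probed directly: along a sequence converging to $\xbar=\limray{\vv_1}\plusl\limray{\vv_2}\plusl\qq$, such as $\xx_t=t^2\vv_1+t\vv_2+\qq$, the values $f(\xx_t)$ should tend to $\min\fext=0$. The following sketch does just that (the particular sequence and the choice of $t$ values are our own):

```python
import math

# f from Example "simple-eg-exp-exp-sq":
#   f(x) = e^{x3 - x1} + e^{-x2} + (2 + x2 - x3)^2
def f(x1, x2, x3):
    return math.exp(x3 - x1) + math.exp(-x2) + (2 + x2 - x3) ** 2

v1, v2, q = (1, 1, 1), (1, -1, -1), (0, 0, 2)

# x_t = t^2 v1 + t v2 + q converges astrally to omega*v1 (+) omega*v2 (+) q
def x(t):
    return tuple(t * t * a + t * b + c for a, b, c in zip(v1, v2, q))

vals = [f(*x(t)) for t in (1, 5, 20)]
print(vals)   # strictly decreasing toward min fext = 0
```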
\section{Universal reduction and canonical minimizers} \label{sec:univ-red-and-min} For a convex function $f:\Rn\rightarrow\Rext$, we have seen so far that if $\xbar=\ebar\plusl\qq$ minimizes $\fext$, where $\ebar\in\corezn$ and $\qq\in\Rn$, then $\xbar$'s iconic part $\ebar$ must be in $\fext$'s astral recession cone (Theorem~\ref{thm:arescone-fshadd-min}). In this section, we delve further into the structure of $\fext$'s minimizers, both their iconic and finite parts. We will see that all finite parts $\qq$ of all minimizers are exactly captured as the (finite) minimizers of one particular convex function called the universal reduction, defined in a moment. Furthermore, all of the minimizers of this function are, in a sense explained below, necessarily in a bounded region of $\Rn$, thereby eliminating the need to consider minimizers at infinity. Thus, the problem of finding the finite parts $\qq$ of all minimizers of $\fext$ can be reduced to minimizing a standard convex function in the most favorable setting, namely, one in which finite minimizers exist and occur only within some compact region. In addition, we will see that there exist choices for the icon $\ebar$ that minimize $\fext(\ebar\plusl\xx)$ over choices for $\ebar$ \emph{simultaneously} for all $\xx\in\Rn$. We will discuss how to find such points, their properties, and how they combine naturally with the universal reduction function just described, yielding minimizers of $\fext$ that are, informally, the most canonical or extreme of minimizers. \subsection{The universal reduction} \label{sec:fullshad} We begin by defining the function briefly described above: \begin{definition} \label{def:univ-reduction} Let $f:\Rn\rightarrow\Rext$ be convex. The \emph{universal reduction} of $f$ is the function $\fullshad{f}:\Rn\rightarrow \Rext$ defined, for $\xx\in\Rn$, by \begin{equation} \label{eqn:fullshad-def} \fullshad{f}(\xx) = \inf_{\ebar\in\corezn} \fext(\ebar\plusl\xx) = \inf_{\ebar\in\corezn} \fshadd(\xx). 
\end{equation} \end{definition} Thus, $\fullshad{f}$ computes the minimum possible value of $\fext$ when some point $\xx\in\Rn$ is combined with any icon $\ebar\in\corezn$. Said differently, $\fullshad{f}$ is the pointwise infimum of all reductions $\fshadd$ over all icons $\ebar\in\corezn$. In this sense, $\fullshad{f}$ can be viewed as itself a reduction of $f$ across the entire universe of astral space; it is for this reason that it is called the {universal reduction}. Intuitively, in minimizing $f$, this function ``washes out'' what is possible by pursuing the trajectory of a sequence to infinity beginning at $\xx$ and following the path defined by any icon $\ebar$. Alternatively, $\fullshad{f}$ can be viewed informally as bringing in $f$'s behavior at infinity to a compact region of $\Rn$. The definition of $\fullshad{f}$ remains the same if we consider adding points $\ebar$ that are instead in the astral recession cone of $\fext$, whether or not restricted to those that are icons, as stated in the next proposition. Furthermore, in all cases, including \eqref{eqn:fullshad-def}, the respective infima are always realized by some point $\ebar$, which means we can state these expressions in terms of minima rather than infima. \begin{proposition} \label{pr:fullshad-equivs} Let $f:\Rn\rightarrow\Rext$ be convex. Then for all $\xx\in\Rn$, \[ \fullshad{f}(\xx) = \min_{\ebar\in\corezn} \fext(\ebar\plusl\xx) = \min_{\ebar\in\aresconef} \fext(\ebar\plusl\xx) = \min_{\ebar\in(\aresconef)\cap\corezn} \fext(\ebar\plusl\xx). \] In particular, this means that each of these minima is attained. \end{proposition} \begin{proof} Let $\xx\in\Rn$. By Corollary~\ref{cor:a:4}, if $\ebar\in\corezn\setminus(\aresconef)$, then $\fext(\ebar\plusl\xx)=+\infty$. 
Therefore, \begin{equation} \label{eqn:pr:fullshad-equivs:2} \fullshad{f}(\xx) = \inf_{\ebar\in(\aresconef)\cap\corezn} \fext(\ebar\plusl\xx) \geq \inf_{\ebar\in\aresconef} \fext(\ebar\plusl\xx), \end{equation} where the inequality is simply because $\aresconef$ is a superset of $(\aresconef)\cap\corezn$. Let \begin{equation} \label{eqn:pr:fullshad-equivs:3} \beta = \inf_{\ebar\in\aresconef} \fext(\ebar\plusl\xx), \end{equation} and, for $t=1,2,\ldots$, let \[ b_t = \left\{ \begin{array}{cl} -t & \mbox{if $\beta = -\infty$}\\ \beta + 1/t & \mbox{if $\beta \in \R$}\\ +\infty & \mbox{if $\beta = +\infty$.} \end{array} \right. \] Let $\ybar_t$ be any point in $\aresconef$ with $\fext(\ybar_t\plusl\xx)\leq b_t$. Then the sequence $\seq{\ybar_t}$ has a convergent subsequence (by sequential compactness); discarding all other elements, we can assume the entire sequence converges to some point $\ybar\in\extspace$. Further, $\ybar$ must be in $\aresconef$ since each $\ybar_t$ is in $\aresconef$, which is closed by Corollary~\ref{cor:res-fbar-closed}. Thus, \begin{equation} \label{eqn:pr:fullshad-equivs:4} \beta = \lim b_t \geq \liminf \fext(\ybar_t\plusl\xx) \geq \fext(\ybar\plusl\xx), \end{equation} where the second inequality is because $\xx\plusl\ybar_t\rightarrow\xx\plusl\ybar$ (by Proposition~\ref{pr:i:7}\ref{pr:i:7f}), and by lower semicontinuity of $\fext$ (Proposition~\ref{prop:ext:F}). Note that $\limray{\ybar}$ is an icon (by Proposition~\ref{pr:i:8}\ref{pr:i:8-infprod}) and also is in $\aresconef$ by Proposition~\ref{pr:ast-cone-is-naive} since $\aresconef$ is an astral cone (Theorem~\ref{thm:arescone-is-ast-cvx-cone}). Thus, \begin{equation} \label{eqn:pr:fullshad-equivs:7} \beta \geq \fext(\ybar\plusl\xx) \geq \fext(\limray{\ybar}\plusl\ybar\plusl\xx) = \fext(\limray{\ybar}\plusl\xx). \end{equation} The first inequality is \eqref{eqn:pr:fullshad-equivs:4}. 
The second is by Proposition~\ref{pr:arescone-def-ez-cons}(\ref{pr:arescone-def-ez-cons:b}) since $\limray{\ybar}\in\aresconef$. The equality is because $\limray{\ybar}\plusl\ybar = \limray{\ybar}$ by Proposition~\ref{pr:i:7}(\ref{pr:i:7c}). Combining now yields \begin{eqnarray} \beta \geq \fext(\limray{\ybar}\plusl\xx) &\geq& \inf_{\ebar\in(\aresconef)\cap\corezn} \fext(\ebar\plusl\xx) \nonumber \\ &=& \inf_{\ebar\in\corezn} \fext(\ebar\plusl\xx) \nonumber \\ &=& \fullshad{f}(\xx) \nonumber \\ &\geq& \inf_{\ebar\in\aresconef} \fext(\ebar\plusl\xx) = \beta. \label{eqn:pr:fullshad-equivs:5} \end{eqnarray} The first inequality is from \eqref{eqn:pr:fullshad-equivs:7}. The second is because, as argued above, $\limray{\ybar}$ is an icon in $\aresconef$. The first two equalities and the last inequality are from \eqref{eqn:fullshad-def} and \eqref{eqn:pr:fullshad-equivs:2}. And the last equality is \eqref{eqn:pr:fullshad-equivs:3}. Thus, equality holds across all of \eqref{eqn:pr:fullshad-equivs:5}. Furthermore, this shows that $\limray{\ybar}$, which is in $(\aresconef)\cap\corezn$, realizes each of the three infima. \end{proof} As noted above, the set of all minimizers of $\fullshad{f}$ is exactly equal to the set of all finite parts of all minimizers of $\fext$: \begin{proposition} \label{pr:min-fullshad-is-finite-min} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\qq\in\Rn$. Then $\qq$ minimizes $\fullshad{f}$ if and only if there exists $\ebar\in\corezn$ such that $\ebar\plusl\qq$ minimizes $\fext$. Consequently, $\fullshad{f}$ attains its minimum. \end{proposition} \proof Suppose first that $\qq$ minimizes $\fullshad{f}$. By Proposition~\ref{pr:fullshad-equivs}, there exists $\ebar\in\corezn$ such that $\fext(\ebar\plusl\qq)=\fullshad{f}(\qq)$. Let $\xbar'=\ebar'\plusl\qq'$ be any astral point with iconic part $\ebar'\in\corezn$ and finite part $\qq'\in\Rn$. 
Then \[ \fext(\ebar\plusl\qq) = \fullshad{f}(\qq) \leq \fullshad{f}(\qq') \leq \fext(\ebar'\plusl\qq') = \fext(\xbar'). \] The first inequality is because $\qq$ minimizes $\fullshad{f}$, and the second is from $\fullshad{f}$'s definition (Definition~\ref{def:univ-reduction}). Therefore, $\ebar\plusl\qq$ minimizes $\fext$. Conversely, suppose now that $\ebar\plusl\qq$ minimizes $\fext$, for some $\ebar\in\corezn$. Then for all $\xx\in\Rn$, \begin{equation} \label{eqn:pr:min-fullshad-is-finite-min:1} \fullshad{f}(\qq) \leq \fext(\ebar\plusl\qq) = \min \fext \leq \fullshad{f}(\xx). \end{equation} Both inequalities follow from $\fullshad{f}$'s definition (Definition~\ref{def:univ-reduction}), and the equality is by assumption. Therefore, $\qq$ minimizes $\fullshad{f}$. Finally, by Proposition~\ref{pr:fext-min-exists}, such a minimizer $\xbar=\ebar\plusl\qq$ of $\fext$ must exist, for some $\ebar\in\corezn$ and $\qq\in\Rn$. As just argued, this implies that $\qq$ attains the minimum of $\fullshad{f}$. \qed An important property of the universal reduction $\fullshad{f}$ is that it is invariant to reducing at an astron $\limray{\vv}$, if $\vv\in\resc{f}$; in other words, if $g$ is a reduction of $f$ at such an astron, then $\fullshad{g}=\fullshad{f}$. Because our approach to minimizing $\fext$ is based on such reductions, this will be very useful since it will mean that, to find $\fullshad{f}$, we can form a reduction $g$ of $f$ at some astron, and instead focus on the possibly easier problem of finding $\fullshad{g}$. \begin{theorem} \label{thm:f:5} \MarkMaybe Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous, let $\vv\in\resc{f}$, and let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Then the universal reductions of $f$ and $g$ are identical; that is, $\fullshad{g}=\fullshad{f}$. \end{theorem} \proof Let $\xx\in\Rn$. 
Using Proposition~\ref{pr:fullshad-equivs}, we have \begin{eqnarray*} \fullshad{g}(\xx) &=& \min_{\ebar\in\corezn} \gext(\ebar\plusl\xx) \nonumber \\ &=& \min_{\ebar\in\corezn} \fext(\limray{\vv}\plusl \ebar \plusl \xx) \label{eq:f:3a} \\ &\geq& \min_{\ebar\in\corezn} \fext(\ebar \plusl \xx) \label{eq:f:3b} \\ &=& \fullshad{f}(\xx). \nonumber \end{eqnarray*} The second equality is by Corollary~\ref{cor:i:1}(\ref{cor:i:1b}), and the inequality is because $\limray{\vv}\plusl\ebar$ is an icon since $\ebar$ is (by Proposition~\ref{pr:i:8}\ref{pr:i:8-leftsum}). On the other hand, \[ \fullshad{f}(\xx) = \min_{\ebar\in\corezn} \fext(\ebar \plusl \xx) \geq \min_{\ebar\in\corezn} \gext(\ebar \plusl \xx) = \fullshad{g}(\xx), \] where the inequality is by Corollary~\ref{cor:i:1}(\ref{cor:i:1c}). \qed We will see that the universal reduction $\fullshad{f}$ is effectively restricted to a linear subspace of $\Rn$ in the sense that it is constant in all directions that are orthogonal to that subspace. More specifically, in analyzing and minimizing $\fullshad{f}$, we will see that we can safely focus our attention exclusively on $\perpresf$, the linear subspace consisting of just those points that are orthogonal to all of the points in $\aresconef$. This is because $\fullshad{f}$ is constant in all directions orthogonal to $\perpresf$, which means that in minimizing $\fullshad{f}$, we can ignore points not in $\perpresf$. Moreover, considering only this restricted domain, we will see that $\fullshad{f}$'s sublevel sets are all bounded so that its minimizers must all be within a bounded (and compact) region of $\Rn$. By Proposition~\ref{pr:min-fullshad-is-finite-min}, these same comments apply to the set of all finite parts of all minimizers of $\fext$, which is identical to the set of all minimizers of $\fullshad{f}$.
\begin{example} \label{ex:simple-eg-exp-exp-sq-2} For instance, let $f$ be the function given in Example~\ref{ex:simple-eg-exp-exp-sq}, specifically, \eqref{eq:simple-eg-exp-exp-sq}. The astral recession cone of this function's extension, $\fext$, turns out to be $\aresconef=\represc{f}$, as can be checked using Proposition~\ref{pr:f:1} and Theorem~\ref{thm:f:3} (or alternatively using some of the general results that will be proved later, specifically, Theorem~\ref{thm:cont-conds-finiteev}). As a result, $\perpresf$ can be shown to be the line \begin{equation} \label{eq:simple-eg-exp-exp-sq:perpresf} \perpresf = \Braces{ \trans{[0,\lambda,-\lambda]} : \lambda\in\R }. \end{equation} The universal reduction of $f$ works out to be \begin{equation} \label{eq:simple-eg-exp-exp-sq:fullshad} \fullshad{f}(\xx)=(2+x_2-x_3)^2, \end{equation} which is the same as the function $g_2$ computed earlier when simulating the process of Figure~\ref{fig:all-min-proc} (for reasons to be developed shortly). The function $\fullshad{f}(\xx)$ is evidently constant in the direction $\trans{[1,0,0]}$ (being independent of $x_1$) and also $\trans{[0,1,1]}$ (since a change in this direction leaves $x_2-x_3$ unaffected). Consequently, $\fullshad{f}$ is constant in every direction in the span of these two vectors, which is exactly the space $\perperpresf$ of points perpendicular to the line in \eqref{eq:simple-eg-exp-exp-sq:perpresf}. In this sense, $\fullshad{f}$ is effectively a function just of points in $\perpresf$. Further, $\fullshad{f}$ has bounded sublevel sets when restricted to $\perpresf$ since \[ \fullshad{f}\paren{\trans{[0,\lambda,-\lambda]}} = (2+2\lambda)^2 \] for $\lambda\in\R$. 
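The minimizing value of $\lambda$ can be read off directly since this restriction is a perfect square: \[ \fullshad{f}\paren{\trans{[0,\lambda,-\lambda]}} = (2+2\lambda)^2 = 4(1+\lambda)^2, \] which is nonnegative and vanishes exactly when $\lambda=-1$.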
The only minimizer of $\fullshad{f}$ in $\perpresf$ is $\trans{[0,-1,1]}$, so the entire set of minimizers of $\fullshad{f}$ (and therefore the set of all finite parts of all minimizers of $\fext$) is exactly \[ \trans{[0,-1,1]} + \perperpresf = \Braces{ \trans{[\alpha,\beta-1,\beta+1]} : \alpha,\beta \in \R }. \] \end{example} Returning to the general case, as a next step in proving the properties discussed above, we show that, like $\fullshad{f}$, $\perpresf$ is invariant to reducing at an astron $\limray{\vv}$, provided $\vv\in\resc{f}$. \begin{theorem} \label{thm:f:6} \MarkMaybe Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous, let $\vv\in\resc{f}$, and let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Then $\perpresg=\perpresf$. \end{theorem} \proof By Theorem~\ref{thm:f:3}, $\aresconef\subseteq\aresconeg$, which implies $\perpresg\subseteq\perpresf$ by Proposition~\ref{pr:perp-props-new}(\ref{pr:perp-props-new:b}). To prove the reverse inclusion, suppose $\uu\in\perpresf$. Let $\ybar\in\aresconeg$. Then $\limray{\vv}\plusl\ybar\in\aresconef$ by Theorem~\ref{thm:f:3}(\ref{thm:f:3b}), so $\limray{\vv}\cdot\uu\plusl\ybar\cdot\uu=0$. This can only be possible if $\vv\cdot\uu=0$ and $\ybar\cdot\uu=0$. Thus, $\uu\in\perpresg$, proving $\perpresf\subseteq\perpresg$. \qed \subsection{Constructing the universal reduction} \label{sec:construct-univ-reduct} As discussed in Section~\ref{sec:prelim:rec-cone}, the constancy space of a function $f$, denoted $\conssp{f}$ and defined in \eqref{eq:conssp-defn}, is the set of directions in which $f$ remains constant. As we show next, the universal reduction $\fullshad{f}$ can be constructed using a process very similar to the one in Figure~\ref{fig:all-min-proc}, modified only in the termination condition for the main loop. Recall that that process constructs a sequence of reductions $g_i$, each the reduction of $g_{i-1}$ at astron $\limray{\vv_i}$, for some $\vv_i\in\resc{g_{i-1}}$. 
Before reducing, $g_{i-1}$ is decreasing, or at least not increasing, in the direction $\vv_i$; after reducing, the new function $g_i$ is constant in direction $\vv_i$, by Proposition~\ref{pr:d2}(\ref{pr:d2:a}). In this way, directions in which the original function $f$ is recessive are successively replaced by reductions that are constant in those same directions. At some point, this process might yield a reduction $g_k$ whose only recessive directions are those in which the function is constant, meaning $\resc{g_k}=\conssp{g_k}$. When this happens, $g_k$ must increase to $+\infty$ in any direction in which it is not a constant, implying that its minimizers are all finite when all directions in its constancy space are disregarded. Indeed, at this point, $g_k$ is exactly $\fullshad{f}$. \begin{figure}[tp] \underline{Given:} function $f:\Rn\rightarrow\Rext$ that is convex and lower semicontinuous \smallskip \underline{Process:} \begin{itemize} \item $i \leftarrow 0$ \item $g_0 = f$ \item repeat \emph{at least} until $\resc{g_i} = \conssp{g_i}$ \begin{itemize} \item $i \leftarrow i+1$ \item let $\vv_i$ be \emph{any} point in $\resc{g_{i-1}}$ \item $g_i = \gishadvi$ \end{itemize} \item $k \leftarrow i$ \item $\ebar=\limrays{\vv_1,\ldots,\vv_k}$ \item let $\qq\in\Rn$ be \emph{any} finite minimizer of $g_k$ \item $\xbar=\ebar\plusl\qq$ \end{itemize} \medskip \underline{Properties:} \begin{itemize} \item $g_k = \fullshad{f}$ \item $\ebar\in \unimin{f} \subseteq (\aresconef)\cap\corezn$ \item $\qq$ minimizes $\fullshad{f}$ \item $\xbar$ minimizes $\fext$ \end{itemize} \caption{A process for finding the universal reduction $\fullshad{f}$ and all canonical minimizers of $\fext$.} \label{fig:min-proc} \end{figure} Thus, as shown in Figure~\ref{fig:min-proc}, to find the universal reduction $\fullshad{f}$, we use exactly the same process as in Figure~\ref{fig:all-min-proc}, except with a modified termination condition for the main loop. 
Previously, this loop could optionally terminate once the current reduction $g_i$ has a finite minimizer. Now, in the new process, the loop can terminate once $\resc{g_i}=\conssp{g_i}$. As before, the process is nondeterministic with a similar set of choices that can be made arbitrarily. We show later (Corollary~\ref{cor:fig-cons-main-props}\ref{cor:fig-cons-main-props:b}) that if $\resc{g_k}=\conssp{g_k}$, then $g_k$ must have a finite minimizer $\qq$ as required by the process upon termination of the main loop. This also shows that if all the conditions of Figure~\ref{fig:min-proc} are satisfied for some execution of the process, then so will be those of Figure~\ref{fig:all-min-proc}, implying that properties proved for the latter immediately carry over to the former. In particular, this shows that in constructing $\fullshad{f}$, the process of Figure~\ref{fig:min-proc} also yields a point $\xbar=\ebar\plusl\qq$ that minimizes $\fext$. This point's finite part, $\qq\in\Rn$, is an arbitrary minimizer of $\fullshad{f}$, which, as discussed above, could be selected by considering only a compact region of $\Rn$. Its iconic part, $\ebar\in\corezn$, is in $\aresconef$, as was the case in Figure~\ref{fig:all-min-proc}, but also has an important property that will be explored in detail in Section~\ref{subsec:univ-min}. (In this regard, the figure mentions universal minimizers and the set $\unimin{f}$, which will both be introduced in Section~\ref{subsec:univ-min}, and so can be disregarded for now.) \begin{example} \label{ex:simple-eg-exp-exp-sq-part2} In Example~\ref{ex:simple-eg-exp-exp-sq}, we considered a run of the process in Figure~\ref{fig:all-min-proc} on the function in \eqref{eq:simple-eg-exp-exp-sq}. 
In fact, that identical run could also have occurred using instead the process of Figure~\ref{fig:min-proc} since, on that example, the function $g_2$ is constant in every direction in which it is nonincreasing, so that the (optional) termination condition $\resc{g_2} = \conssp{g_2}$ is satisfied. Thus, $g_2=\fullshad{f}$, as previously noted. \end{example} We proceed now to prove the claims made in the figure and the preceding discussion. We begin by showing that, for a convex function $f$, the termination condition, namely, that the recession cone equals the constancy space, is equivalent to the function being equal to its own universal reduction. In addition, if $f$ is closed, proper, and reduction-closed, then these two conditions are also equivalent to the domain of $\fstar$ being entirely included in $\perpresf$. A more general version of this condition is given shortly as a corollary. We prove these results first for a function $f$ by itself; we then apply these results to get a more general statement regarding the process in Figure~\ref{fig:min-proc}. \begin{theorem} \label{thm:f:4} \MarkMaybe Let $f:\Rn\rightarrow\Rext$ be convex, closed, proper, and reduction-closed. Then the following are equivalent: \begin{letter-compact} \item \label{thm:f:4a} $\resc{f}=\conssp{f}$. \item \label{thm:f:4c} $\fullshad{f} = f$. \item \label{thm:f:4b} $\dom{\fstar}\subseteq\perpresf$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:f:4a}) $\Rightarrow$ (\ref{thm:f:4b}):} Assume $\resc{f}=\conssp{f}$. To show $\dom{\fstar}\subseteq\perpresf$, we prove the following by induction on $k=0,\ldots,n$: for all $\zbar\in\aresconef$, and for all $\uu\in\dom{\fstar}$, if the astral rank of $\zbar$ is at most $k$ then $\zbar\cdot\uu=0$. In the base case that $k=0$, $\zbar=\zz$ must be in $\Rn$ so $\zz\in(\aresconef)\cap\Rn=\resc{f}$ by Proposition~\ref{pr:f:1}.
Therefore, by \Cref{pr:rescpol-is-con-dom-fstar}, $\zz\cdot\uu\leq 0$ for all $\uu\in\dom{\fstar}$. Since $\resc{f}=\conssp{f}$, this applies to $-\zz$ as well, implying $(-\zz)\cdot\uu\leq 0$, and therefore, $\zbar\cdot\uu=\zz\cdot\uu=0$, as claimed. For the inductive case that $k>0$, let $\zbar\in\aresconef$ have astral rank $k$. Let $\vv$ be $\zbar$'s dominant direction so that $ \zbar = \limray{\vv} \plusl \zbarperp$ (by Proposition~\ref{pr:h:6}), where $\zbarperp$ is $\zbar$'s projection orthogonal to $\vv$. Let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Since $\zbar\in\aresconef$, $\vv\in\resc{f}=\conssp{f}$ by Theorem~\ref{thm:f:3}(\ref{thm:f:3b}). We claim $g=f$. To see this, let $\gtil$ be the ``shadow'' function given in \eqref{eqn:gtil-defn}. Then $\gtil=f$, since $\vv\in\conssp{f}$. Therefore, $g=\lsc f = f$ by Theorem~\ref{thm:a10-nunu} and since $f$ is closed. Thus, $\zbarperp\in\aresconeg=\aresconef$, by a second application of Theorem~\ref{thm:f:3}(\ref{thm:f:3b}). Let $\uu\in\dom{\fstar}$. Since $\vv\in\resc{f}\subseteq\aresconef$ (by Proposition~\ref{pr:f:1}), and since $\vv$ has astral rank $0<k$, $\vv\cdot\uu=0$, by inductive hypothesis. Also, $\zbarperp\in\aresconef$ and has astral rank $k-1$ (by Proposition~\ref{pr:h:6}), so $\zbarperp\cdot\uu=0$, again by inductive hypothesis. Therefore, $\zbar\cdot\uu=\limray{\vv}\cdot\uu\plusl\zbarperp\cdot\uu=0$, completing the induction and the proof. \pfpart{(\ref{thm:f:4b}) $\Rightarrow$ (\ref{thm:f:4c}):} Assume $\dom{\fstar}\subseteq\perpresf$. Let $\ebar\in(\aresconef)\cap \corezn$. We claim first that $-\fstar(\uu)\plusl \ebar\cdot\uu = -\fstar(\uu)$ for all $\uu\in\Rn$. This is immediate if $\fstar(\uu)\in\{-\infty,+\infty\}$. Otherwise, if $\fstar(\uu)\in\R$, then $\uu\in\dom{\fstar}\subseteq\perpresf$, implying that $\ebar\cdot\uu=0$ since $\ebar\in\aresconef$.
Therefore, by Theorems~\ref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:b}) and~\ref{thm:dub-conj-new}, since $f$ is reduction-closed, for all $\xx\in\Rn$, and all $\ebar\in(\aresconef)\cap\corezn$, \begin{eqnarray*} \fext(\ebar\plusl\xx) = \fdub(\ebar\plusl\xx) &=& \sup_{\uu\in\Rn} \brackets{ -\fstar(\uu) \plusl \ebar\cdot\uu \plusl \xx\cdot\uu } \\ &=& \sup_{\uu\in\Rn} \brackets{ -\fstar(\uu) \plusl \xx\cdot\uu } \\ &=& \fdub(\xx) = \fext(\xx) = f(\xx) \end{eqnarray*} (using Proposition~\ref{pr:h:1}\ref{pr:h:1a}). Thus, by Proposition~\ref{pr:fullshad-equivs}, \[ f(\xx) = \min_{\ebar\in(\aresconef)\cap\corezn} \fext(\ebar\plusl\xx) = \fullshad{f}(\xx). \] \pfpart{(\ref{thm:f:4c}) $\Rightarrow$ (\ref{thm:f:4a}):} We prove this in the contrapositive. Suppose $\resc{f}\neq\conssp{f}$. Then there exists a strictly recessive direction $\vv\in(\resc{f})\setminus(\conssp{f})$. Since $\vv\not\in\conssp{f}$, there exists some $\xx\in\Rn$ for which $f(\vv+\xx) \neq f(\xx)$, which implies, since $\vv\in\resc{f}$, that $f(\vv+\xx) < f(\xx)$. By Proposition~\ref{pr:f:1}, $\vv\in\aresconef$, so \[ \fullshad{f}(\xx) \leq \fext(\vv+\xx) \leq f(\vv+\xx) < f(\xx) \] with the first two inequalities following from Proposition~\ref{pr:fullshad-equivs} and Proposition~\ref{pr:h:1}(\ref{pr:h:1a}). This proves the claim. \qedhere \end{proof-parts} \end{proof} Using the technique developed in Section~\ref{sec:exp-comp}, we immediately obtain a generalization of Theorem~\ref{thm:f:4} that only requires that $f$ is convex and lower semicontinuous. To obtain this generalization, the dual condition~(\ref{thm:f:4b}) is replaced (when $f\not\equiv+\infty$) by the condition that $\slopes{f}$ is entirely included in $\perpresf$. \begin{corollary} \label{cor:thm:f:4:1} \MarkMaybe Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Then the following are equivalent: \begin{letter-compact} \item \label{cor:thm:f:4:1a} $\resc{f}=\conssp{f}$. \item \label{cor:thm:f:4:1c} $\fullshad{f} = f$. 
\item \label{cor:thm:f:4:1b} Either $f\equiv+\infty$ or $\slopes{f}\subseteq\perpresf$. \end{letter-compact} \end{corollary} \begin{proof} If $f\equiv +\infty$, then the corollary holds since then $\resc{f}=\conssp{f}=\Rn$, and $\fullshad{f}=f$. We therefore assume henceforth that $f\not\equiv +\infty$. Let $f' = \expex\circ f$. Then $f'$ is convex, lower-bounded and lower semicontinuous by Proposition~\ref{pr:j:2}(\ref{pr:j:2a},\ref{pr:j:2lsc}), proper and closed (since $f'\geq 0$ and $f\not\equiv+\infty$), and also is reduction-closed (by \Cref{pr:j:1}\ref{pr:j:1c}). Therefore, the three conditions of \Cref{thm:f:4}, applied to $f'$, are equivalent to each other. We show that each of these conditions is individually equivalent to the corresponding condition of the corollary. First, $\resc{f'}=\resc{f}$, by Proposition~\ref{pr:j:2}(\ref{pr:j:2b}), so $\conssp{f'}=\conssp{f}$, using \Cref{pr:prelim:const-props}(\ref{pr:prelim:const-props:a}); thus, $\resc{f'}=\conssp{f'}$ if and only if $\resc{f}=\conssp{f}$. Next, $\aresconefp=\aresconef$, by Proposition~\ref{pr:j:2}(\ref{pr:j:2d}), and $\dom{\fpstar}=\slopes{f}$ by \Cref{thm:dom-fpstar} (since $f\not\equiv+\infty$); thus, $\dom{\fpstar}\subseteq\perpresfp$ if and only if $\slopes{f}\subseteq\perpresf$. Finally, we claim that $\fullshad{f'}=\expex\circ \fullshad{f}$; that is, for all $\xx\in\Rn$, $\fullshad{f'}(\xx)= \expex(\fullshad{f}(\xx))$. First, by Proposition~\ref{pr:fullshad-equivs}, there exists $\ebar'\in\corezn$ realizing the minimum defining $\fullshad{f'}(\xx)$. Therefore, \[ \fullshad{f'}(\xx) = \fpext(\ebar'\plusl\xx) = \expex(\fext(\ebar'\plusl\xx)) \geq \expex(\fullshad{f}(\xx)), \] where the second equality is by Proposition~\ref{pr:j:2}(\ref{pr:j:2c}), and the inequality is by $\fullshad{f}$'s definition (Definition~\ref{def:univ-reduction}), and because $\expex$ is strictly increasing.
Similarly, again using Proposition~\ref{pr:fullshad-equivs}, there exists $\ebar\in\corezn$ realizing the minimum defining $\fullshad{f}(\xx)$, so \[ \expex(\fullshad{f}(\xx)) = \expex(\fext(\ebar\plusl\xx)) = \fpext(\ebar\plusl\xx) \geq \fullshad{f'}(\xx). \] Thus, $\fullshad{f'}=f'$ if and only if $\fullshad{f}=f$ (since $\expex$ is strictly increasing). Combining now yields the corollary. \end{proof} We can now apply these results more directly to the procedure outlined above and in Figure~\ref{fig:min-proc}, thereby justifying our termination condition. In particular, once the termination criterion that $\resc{g_k}=\conssp{g_k}$ has been reached, the next corollary shows that $g_k=\fullshad{f}$. \begin{corollary} \label{cor:f:4} \MarkMaybe Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Let $g_0=f$, let $\vv_i\in\resc{g_{i-1}}$, and let $g_i = \gishadvi$ for $i=1,\ldots,k$. Then the following are equivalent: \begin{letter-compact} \item \label{cor:f:4a} $\resc{g_k}=\conssp{g_k}$. \item \label{cor:f:4c} $g_k=\fullshad{f}$. \item \label{cor:f:4b} Either $f\equiv+\infty$ or $\slopes{g_k}\subseteq\perpresf$. \end{letter-compact} \end{corollary} \begin{proof} Applied repeatedly, Theorems~\ref{thm:f:5} and~\ref{thm:f:6} show that $\perpresf=\perpresgsub{k}$ and $\fullshad{f}=\fullshadgk$, while Proposition~\ref{pr:d2}(\ref{pr:d2:c}) shows that $f\equiv+\infty$ if and only if $g_k\equiv+\infty$. Furthermore, $g_k$ is convex and lower semicontinuous, by Theorem~\ref{thm:a10-nunu}. With these facts, the corollary follows immediately from Corollary~\ref{cor:thm:f:4:1}. \end{proof} As discussed earlier, the universal reduction $\fullshad{f}$ must realize its minimum at some point in a bounded region of $\Rn$. More specifically, we now show that $\fullshad{f}$ is constant in all directions orthogonal to the linear subspace $\perpresf$, which means that, effectively, we can restrict attention only to $\perpresf$.
Furthermore, within $\perpresf$, all of the sublevel sets of $\fullshad{f}$ are bounded and consequently compact, which means that all minimizers must also be in a compact region of $\Rn$ (indeed, in any nonempty sublevel set). In Lemmas~\ref{lem:thm:f:4x:1} and~\ref{lem:thm:f:4x:2}, we prove these properties first under the restrictive assumption that $\resc{f}=\conssp{f}$, that is, the termination criterion used in Figure~\ref{fig:min-proc}. These lemmas are stated in terms of $f$, but also are implicitly about $\fullshad{f}$ since $f=\fullshad{f}$ when $\resc{f}=\conssp{f}$, by Corollary~\refequiv{cor:thm:f:4:1}{cor:thm:f:4:1a}{cor:thm:f:4:1c}. We then prove, in Theorem~\ref{thm:f:4x}, that the same properties therefore hold in general. \begin{lemma} \label{lem:thm:f:4x:1} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous, and assume $\resc{f}=\conssp{f}$. Then $f$ is constant exactly in those directions that are orthogonal to $\perpresf$. That is, $\conssp{f}=\perperpresf$, and thus, $\perpresf = \consspperp{f} = \rescperp{f}$. \end{lemma} \begin{proof} If $f\equiv+\infty$, then the result is immediate since $\resc{f}=\conssp{f}=\perperpresf=\Rn$ in this case. So we assume henceforth that $f\not\equiv+\infty$. By \Cref{pr:prelim:const-props}(\ref{pr:prelim:const-props:a}) and Proposition~\ref{pr:f:1}, $\conssp{f}\subseteq\resc{f}\subseteq\aresconef$. Applying Proposition~\ref{pr:perp-props-new}(\ref{pr:perp-props-new:b}) then implies $\perpresf\subseteq\consspperp{f}$, and so \begin{equation} \label{eq:j:5} \conssp{f}=\consspperperp{f}\subseteq\perperpresf \end{equation} by Propositions~\ref{pr:std-perp-props}(\ref{pr:std-perp-props:c}) and~\ref{pr:perp-props-new}(\ref{pr:perp-props-new:b}). To prove the reverse inclusion, suppose first that $f$ is closed, proper, and reduction-closed. Let $\dd\in\perperpresf$. By Theorem~\ref{thm:f:4}, for $\uu\in\Rn$, if $\fstar(\uu)\in\R$ then $\uu\in\perpresf$, which implies that $\dd\cdot\uu=0$. 
Thus, for all $\uu\in\Rn$, $-\fstar(\uu)\plusl \dd\cdot\uu = -\fstar(\uu)$. Therefore, by Theorems~\ref{thm:fext-dub-sum}(\ref{thm:fext-dub-sum:b}) and~\ref{thm:dub-conj-new}, and also Proposition~\ref{pr:h:1}(\ref{pr:h:1a}), \begin{eqnarray*} f(\dd+\xx) &=& \sup_{\uu\in\Rn} \brackets{ -\fstar(\uu) \plusl \dd\cdot\uu \plusl \xx\cdot\uu } \\ &=& \sup_{\uu\in\Rn} \brackets{ -\fstar(\uu) \plusl \xx\cdot\uu } \\ &=& f(\xx) \end{eqnarray*} for all $\xx\in\Rn$. In other words, $\dd\in\conssp{f}$, proving $\perperpresf\subseteq\conssp{f}$ when $f$ is reduction-closed. More generally, given $f$ that does not necessarily satisfy these additional conditions, we can use our usual trick of defining $f'=\expex\circ f$, similar to the proof of Corollary~\ref{cor:thm:f:4:1}. Then $f'$ is convex, lower-bounded, and lower semicontinuous by Proposition~\ref{pr:j:2}(\ref{pr:j:2a},\ref{pr:j:2lsc}), and so also is proper, closed, and reduction-closed (by \Cref{pr:j:1}\ref{pr:j:1c}). Thus, \[ \perperpresf = \perperpresfp \subseteq \conssp{f'} = \conssp{f}. \] The inclusion is by the above argument applied to $f'$. The equalities are by Proposition~\ref{pr:j:2}(\ref{pr:j:2b},\ref{pr:j:2d}) combined with \Cref{pr:prelim:const-props}(\ref{pr:prelim:const-props:a}). Thus, $\conssp{f}=\perperpresf$. In turn, this implies \[ \consspperp{f} = \perperperpresf = \perpresf \] using Proposition~\ref{pr:std-perp-props}(\ref{pr:std-perp-props:c}) (since $\perpresf$ is a linear subspace by Proposition~\ref{pr:perp-props-new}\ref{pr:perp-props-new:a}). \end{proof} \begin{lemma} \label{lem:thm:f:4x:2} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous, and assume $\resc{f}=\conssp{f}$. If restricted to $\perpresf$, all of $f$'s sublevel sets are compact; that is, for all $\beta\in\R$, the set \[ \{ \xx \in \perpresf : f(\xx) \leq \beta \} \] is a compact subset of $\Rn$. \end{lemma} To prove this lemma, we will first prove the more general result given in Theorem~\ref{thm:lem:thm:f:4x:2}. 
Note that this theorem only concerns concepts from standard convex analysis, and could be proved using standard techniques (indeed, closely related results are given, for instance, in \citealp[Section~8]{ROC}). Here we instead give a more direct proof as an illustration of astral techniques applied to standard convex analysis. \begin{theorem} \label{thm:lem:thm:f:4x:2} \MarkMaybe Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. If restricted to $\rescperp{f}$, all of $f$'s sublevel sets are compact; that is, for all $\beta\in\R$, the set \[ \{ \xx \in \rescperp{f} : f(\xx) \leq \beta \} \] is a compact subset of $\Rn$. \end{theorem} \proof Let $\beta\in\R$, and let \[ L = \{ \xx \in \rescperp{f} : f(\xx) \leq \beta \}. \] We claim first that $L$ is bounded. Suppose not. Then there exists a sequence $\seq{\xx_t}$ in $\rescperp{f}$ with $f(\xx_t)\leq \beta$ for all $t$, and such that $\norm{\xx_t}\rightarrow+\infty$. By sequential compactness of $\extspace$, there exists a subsequence of the $\xx_t$'s that converges to some point $\xbar\in\extspace$; by discarding all other elements, we can assume the entire sequence converges to $\xbar$. We can write $\xbar=\ebar\plusl\qq$ for some $\ebar\in\corezn$ and $\qq\in\Rn$ (by Theorem~\ref{thm:icon-fin-decomp}). Further, let $\vv$ be the dominant direction of $\ebar$ so that $\ebar=\limray{\vv}\plusl\ebarperp$ (where $\ebarperp$ is $\ebar$'s projection orthogonal to $\vv$, by Proposition~\ref{pr:h:6}). Then \[ \fext(\ebar\plusl\qq) = \fext(\xbar) \leq \liminf f(\xx_t) \leq \beta. \] Therefore, $\ebar\in\aresconef$, by Corollary~\ref{cor:a:4}, since $\ebar$ is an icon. This further implies that $\vv\in\resc{f}\subseteq\aresconef$, by Theorem~\ref{thm:f:3}(\ref{thm:f:3b}) and Proposition~\ref{pr:f:1}. Thus, for all $t$, $\xx_t\cdot\vv=0$ since $\xx_t\in\rescperp{f}$. On the other hand, $\xbar=\limray{\vv}\plusl\ebarperp\plusl\qq$, so $\vv$ is also $\xbar$'s dominant direction. 
Therefore, since $\xx_t\rightarrow\xbar$, \[ \xx_t\cdot\vv \rightarrow \xbar\cdot\vv = +\infty, \] by Theorem~\ref{thm:dom-dir}. But this is a contradiction, since $\xx_t\cdot\vv=0$ for all $t$. Therefore, $L$ is bounded. We claim next that $L$ is closed (where, throughout this discussion, ``closed'' means in $\Rn$, not $\extspace$). This is because $L$ is equal to the intersection of two closed sets, specifically, \[ L = \rescperp{f}\cap \{ \xx \in \Rn : f(\xx) \leq \beta \}. \] These sets are closed because $\rescperp{f}$ is a linear subspace (by Proposition~\ref{pr:perp-props-new}\ref{pr:perp-props-new:a}) and therefore closed, while the rightmost set is a sublevel set which is closed since $f$ is lower semicontinuous (\Cref{roc:thm7.1}). Thus, $L$ is compact, being closed (in $\Rn$) and bounded. \qed \proof[Proof of Lemma~\ref{lem:thm:f:4x:2}] The lemma is immediate from Theorem~\ref{thm:lem:thm:f:4x:2} after noting, by Lemma~\ref{lem:thm:f:4x:1}, that $\perpresf = \rescperp{f}$ since $\resc{f}=\conssp{f}$. \qed We now can prove in full generality the properties discussed above for $\fullshad{f}$'s sublevel sets and constancy space. \begin{theorem} \label{thm:f:4x} \MarkMaybe Let $f:\Rn\rightarrow\Rext$ be convex. Then the following hold: \begin{letter-compact} \item \label{thm:f:4xaa} $\fullshad{f}$ is convex and lower semicontinuous. \item \label{thm:f:4xa} $\fullshad{f}$ is constant exactly in those directions that are orthogonal to $\perpresf$. That is, $\conssp{\fullshad{f}}=\perperpresf$, and also, $\perpresf = \consspperp{\fullshad{f}} = \rescperp{\fullshad{f}}$. \item \label{thm:f:4xb} If restricted to $\perpresf$, all of $\fullshad{f}$'s sublevel sets are compact; that is, for all $\beta\in\R$, the set \begin{equation} \label{eqn:thm:f:4x:1} \{ \xx \in \perpresf : \fullshad{f}(\xx) \leq \beta \} \end{equation} is a compact subset of $\Rn$. \item \label{thm:f:4xd} $\fullshad{f}$ attains its minimum at some point in $\perpresf$. 
\end{letter-compact} \end{theorem} \begin{proof} The theorem concerns only $\fext$ and $\fullshad{f}$, and furthermore, the latter is defined entirely in terms of $\fext$. Since $\fext=\lscfext$ (by \Cref{pr:h:1}\ref{pr:h:1aa}), it therefore suffices to prove the theorem under the assumption that $f$ is lower semicontinuous, replacing it with $\lsc f$ if it is not. \begin{proof-parts} \pfpart{Parts~(\ref{thm:f:4xaa}),~(\ref{thm:f:4xa}) and~(\ref{thm:f:4xb}):} It suffices to prove the following statement by \emph{backwards} induction on $\ell=0,1,\dotsc,n+1$: For all convex and lower semicontinuous functions $f:\Rn\rightarrow\Rext$, if $\dim(\conssp{f})\geq\ell$ then parts~(\ref{thm:f:4xaa}),~(\ref{thm:f:4xa}) and~(\ref{thm:f:4xb}) of the theorem hold. In the base case that $\ell=n+1$, the claimed statement is vacuously true since $\dim(\conssp{f})\leq n$ always. For the inductive step, assume $\ell\leq n$, and that the claim holds for $\ell+1$. Suppose $f$ is convex and lower semicontinuous, and that $\dim(\conssp{f})\geq\ell$. If $\resc{f}=\conssp{f}$, then parts~(\ref{thm:f:4xa}) and~(\ref{thm:f:4xb}) of the theorem follow immediately from Lemmas~\ref{lem:thm:f:4x:1} and~\ref{lem:thm:f:4x:2}, after noting that $f=\fullshad{f}$ by Corollary~\refequiv{cor:thm:f:4:1}{cor:thm:f:4:1a}{cor:thm:f:4:1c}. This last fact also implies part~(\ref{thm:f:4xaa}) since $f$ is convex and lower semicontinuous. Otherwise, $\resc{f}\neq\conssp{f}$, so there must exist $\vv\in(\resc{f})\setminus(\conssp{f})$. Let $g=\fshadv$ be the reduction at $\limray{\vv}$. We claim $\dim(\conssp{g}) > \dim(\conssp{f})$. To see this, by Theorem~\ref{thm:f:3}(\ref{thm:f:3a}), $\resc{f} \subseteq \resc{g}$, implying, by \Cref{pr:prelim:const-props}(\ref{pr:prelim:const-props:a}), that $ \conssp{f} \subseteq \conssp{g}$. Also, by Proposition~\ref{pr:d2}(\ref{pr:d2:a}), $g(\xx+\lambda\vv)=g(\xx)$ for all $\xx\in\Rn$ and all $\lambda\in\R$. 
Therefore, $\vv$ is in $\conssp{g}$, the constancy space of $g$, but by assumption, $\vv\not\in\conssp{f}$. Combining, these show that $\conssp{f}$ is a proper subset of $\conssp{g}$. Since they are both linear subspaces, this implies $\dim(\conssp{g}) > \dim(\conssp{f})$, as claimed. Thus, $\dim(\conssp{g})\geq \dim(\conssp{f}) + 1 \geq \ell+1$, and so our inductive hypothesis holds for $g$ (using Theorem~\ref{thm:a10-nunu}). Also, $\fullshad{g}=\fullshad{f}$ and $\perpresg=\perpresf$ by Theorems~\ref{thm:f:5} and~\ref{thm:f:6}, respectively. Therefore, \[ \conssp{\fullshad{f}} = \conssp{\fullshad{g}} = \perperpresg = \perperpresf \] where the middle equality is by inductive hypothesis. The argument that $\perpresf = \consspperp{\fullshad{f}} = \rescperp{\fullshad{f}}$ is similar, proving part~(\ref{thm:f:4xa}). Likewise, \[ \{ \xx \in \perpresf : \fullshad{f}(\xx) \leq \beta \} = \{ \xx \in \perpresg : \fullshad{g}(\xx) \leq \beta \} \] and so is compact, by inductive hypothesis, proving part~(\ref{thm:f:4xb}). And part~(\ref{thm:f:4xaa}) follows from $\fullshad{f}=\fullshad{g}$, by inductive hypothesis. \pfpart{Part~(\ref{thm:f:4xd}):} Let $\xx\in\Rn$ be a minimizer of $\fullshad{f}$, which must exist by Proposition~\ref{pr:min-fullshad-is-finite-min}. Let $\yy$ be the projection of $\xx$ onto the linear subspace $\perpresf$. Then $\xx-\yy$ must be orthogonal to that space, that is, in $\perperpresf=\conssp{\fullshad{f}}$, implying $\fullshad{f}(\yy)=\fullshad{f}(\xx)$. Therefore, $\yy\in\perpresf$ also minimizes $\fullshad{f}$. \qedhere \end{proof-parts} \end{proof} Thus, if $\xbar=\ebar\plusl\qq$ minimizes $\fext$, where $\ebar\in\corezn$ and $\qq\in\Rn$, then $\ebar$ must be in $\aresconef$, by Theorem~\ref{thm:arescone-fshadd-min}, and $\qq$ must minimize $\fullshad{f}$, by Proposition~\ref{pr:min-fullshad-is-finite-min}. Furthermore, as a consequence of Theorem~\ref{thm:f:4x}, $\qq$ can effectively be restricted to a compact subset of $\perpresf$. 
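To make this last point concrete, consider again Example~\ref{ex:simple-eg-exp-exp-sq-2}: for any $\beta\geq 0$, the sublevel set in \eqref{eqn:thm:f:4x:1} is \[ \Braces{ \trans{[0,\lambda,-\lambda]} : (2+2\lambda)^2 \leq \beta } = \Braces{ \trans{[0,\lambda,-\lambda]} : \abs{1+\lambda} \leq \tfrac{\sqrt{\beta}}{2} }, \] a closed, bounded segment of the line $\perpresf$ in \eqref{eq:simple-eg-exp-exp-sq:perpresf}, and hence compact, as asserted by Theorem~\ref{thm:f:4x}(\ref{thm:f:4xb}).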
Regarding sequences, Theorem~\ref{thm:f:4x} implies that if a convex function $f$ is minimized by some sequence, then that sequence must also minimize $\fullshad{f}$, as does the projection of that sequence onto the linear subspace $\perpresf$. Further, that projected sequence cannot be unbounded. \begin{proposition} \label{pr:proj-mins-fullshad} Let $f:\Rn\rightarrow\Rext$ be convex. Let $\seq{\xx_t}$ be a sequence in $\Rn$, and for each $t$, let $\qq_t$ be the projection of $\xx_t$ onto the linear subspace $\perpresf$. Assume $f(\xx_t)\rightarrow\inf f$. Then $\fullshad{f}(\xx_t)=\fullshad{f}(\qq_t)\rightarrow \min\fullshad{f}$. Furthermore, the entire sequence $\seq{\qq_t}$ is included in a compact subset of $\Rn$. \end{proposition} \proof If $f\equiv+\infty$, then $\fullshad{f}\equiv+\infty$ and $\perpresf=\{\zero\}$ so $\qq_t=\zero$ for all $t$, implying the claim. Therefore, we assume henceforth that $f\not\equiv+\infty$. For all $t$, \begin{equation} \label{eq:pr:proj-mins-fullshad:1} \inf f = \min \fext \leq \min \fullshad{f} \leq \fullshad{f}(\qq_t) = \fullshad{f}(\xx_t) \leq \fext(\xx_t) \leq f(\xx_t). \end{equation} The first equality is by Proposition~\ref{pr:fext-min-exists}. The first inequality follows from the definition of $\fullshad{f}$ (Definition~\ref{def:univ-reduction}). The second equality follows from Theorem~\ref{thm:f:4x}(\ref{thm:f:4xa}) (since $\xx_t-\qq_t$ is perpendicular to $\perpresf$). The third inequality follows also from the definition of $\fullshad{f}$, since $\zero$ is an icon. And the last inequality is from Proposition~\ref{pr:h:1}(\ref{pr:h:1a}). Since $f(\xx_t)\rightarrow\inf f$, \eqref{eq:pr:proj-mins-fullshad:1} implies $\fullshad{f}(\xx_t)=\fullshad{f}(\qq_t) \rightarrow \min \fullshad{f}$, as claimed. This also shows, for any $\beta>\min\fullshad{f}$, that all but finitely many of the $\qq_t$'s are in some sublevel set of $\fullshad{f}$, as in \eqref{eqn:thm:f:4x:1}.
By Theorem~\ref{thm:f:4x}(\ref{thm:f:4xb}), every such sublevel set is compact. Therefore, there exists a (possibly larger) compact subset of $\Rn$ that includes the entire sequence $\seq{\qq_t}$. \qed \subsection{Ensuring termination} \label{sec:ensure-termination} As discussed in Section~\ref{sec:find-all-min}, the processes in Figures~\ref{fig:all-min-proc} and~\ref{fig:min-proc} might never terminate since, for instance, we might arbitrarily choose $\vv_i=\zero$ on every iteration of the main loop. In general, if a vector $\vv_i$ is chosen that is already in the span of the preceding vectors $\vv_1,\ldots,\vv_{i-1}$, then no progress is made in the sense that the icon that is being constructed has not changed; that is, \[ \limrays{\vv_1,\ldots,\vv_i} = \limrays{\vv_1,\ldots,\vv_{i-1}} \] (by Proposition~\ref{pr:g:1}). Thus, to ensure progress, we might insist that $\vv_i$ be chosen to be not only in the recession cone of $g_{i-1}$ but also outside the span of $\vv_1,\ldots,\vv_{i-1}$ so that \begin{equation} \label{eq:sensible-vi-dfn} \vv_i \in (\resc{g_{i-1}}) \setminus \spnfin{\vv_1,\ldots,\vv_{i-1}}. \end{equation} We say such a choice of $\vv_i$ is \emph{sensible}. The next proposition shows that if no sensible choice is possible then the termination condition of Figure~\ref{fig:min-proc} must have already been reached: \begin{proposition} \label{pr:no-sense-then-done} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Let $g_0=f$, let $\vv_i\in\resc{g_{i-1}}$, and let $g_i = \gishadvi$, for $i=1,\ldots,k$. Suppose \[ \resc{g_k} \subseteq \spnfin{\vv_1,\ldots,\vv_k}. \] Then $\resc{g_k}=\conssp{g_k}$. \end{proposition} \proof We claim by induction on $i=0,\ldots,k$ that \[ \{\vv_1,\ldots,\vv_i\} \subseteq \conssp{g_i}. \] The base case that $i=0$ holds trivially. For the inductive step, suppose $i>0$. Then $\vv_1,\ldots,\vv_{i-1}\in\conssp{g_{i-1}}$ by inductive hypothesis.
Furthermore, $\resc{g_{i-1}}\subseteq\resc{g_i}$ by Theorem~\ref{thm:f:3}(\ref{thm:f:3a}), implying $\conssp{g_{i-1}}\subseteq\conssp{g_i}$ by \Cref{pr:prelim:const-props}(\ref{pr:prelim:const-props:a}). In addition, for all $\xx\in\Rn$ and all $\lambda\in\R$, $g_i(\xx+\lambda\vv_i)=g_i(\xx)$ by Proposition~\ref{pr:d2}(\ref{pr:d2:a}), implying $\vv_i\in\conssp{g_i}$. Combining completes the induction. Therefore, \[ \conssp{g_k} \subseteq \resc{g_k} \subseteq \spnfin{\vv_1,\ldots,\vv_k} \subseteq \conssp{g_k}. \] These inclusions follow respectively from \Cref{pr:prelim:const-props}(\ref{pr:prelim:const-props:a}), by assumption, and from the inductive claim proved above (since $\conssp{g_k}$ is a linear subspace). \qed If, in the process of Figure~\ref{fig:min-proc}, each $\vv_i$ is chosen sensibly, then the dimension of the space spanned by the $\vv_i$'s increases by one on each iteration of the main loop. Therefore, within $n$ iterations, no more sensible choices can be possible, so, by Proposition~\ref{pr:no-sense-then-done}, the termination condition must have been reached. This shows that the process can always be run in a way that guarantees termination within $n$ iterations. Furthermore, it shows that the process cannot ``get stuck'' in the sense that, no matter what preceding choices have been made by the process, the ensuing choices of $\vv_i$ can be made sensibly, again ensuring termination within $n$ additional iterations. The next corollary summarizes some of the main properties of the construction in Figure~\ref{fig:min-proc}. In particular, part~(\ref{cor:fig-cons-main-props:b}) shows that the termination condition of Figure~\ref{fig:min-proc} implies that of Figure~\ref{fig:all-min-proc}, as mentioned earlier. Therefore, the comments above regarding termination apply to that process as well. \begin{corollary} \label{cor:fig-cons-main-props} \MarkYes Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous.
There exist $\vv_1,\ldots,\vv_k\in\Rn$, for some $k\geq 0$, with $\vv_i\in\resc{g_{i-1}}$, for $i=1,\ldots,k$, and $\resc{g_k}=\conssp{g_k}$, where $g_0=f$, and $g_i = \gishadvi$. Whenever these hold, the following are also true: \begin{letter-compact} \item \label{cor:fig-cons-main-props:a} For all $\xx\in\Rn$, $\fullshad{f}(\xx)=g_k(\xx)=\fext(\ebar\plusl\xx)=\fshadd(\xx)$, where $\ebar=\limrays{\vv_1,\ldots,\vv_k}$. \item \label{cor:fig-cons-main-props:b} There exists some $\qq\in\Rn$ that minimizes $g_k=\fullshad{f}$. \item \label{cor:fig-cons-main-props:c} The point $\xbar=\ebar\plusl\qq$ minimizes $\fext$. \end{letter-compact} \end{corollary} \begin{proof} Suppose the process of Figure~\ref{fig:min-proc} is executed sensibly so that, on each iteration, $\vv_i$ is chosen to satisfy \eqref{eq:sensible-vi-dfn}. Then, as just discussed, within $n$ iterations the process must reach a point at which no such choice is possible, implying, by Proposition~\ref{pr:no-sense-then-done}, that the termination condition of the main loop has been reached. Upon termination, all of the claimed properties hold: \begin{proof-parts} \pfpart{Part~(\ref{cor:fig-cons-main-props:a}):} Corollary~\refequiv{cor:f:4}{cor:f:4a}{cor:f:4c} proves that $g_k=\fullshad{f}$, and \Cref{pr:icon-red-decomp-astron-red} shows that $g_k=\fshadd$. \pfpart{Part~(\ref{cor:fig-cons-main-props:b}):} For this proof only, for any point $\xx\in\Rn$, let $\xperp\in\perpresf$ denote $\xx$'s projection onto the linear subspace $\perpresf$. By the nature of projection, this implies $\xx=\xperp+\yy$ for some $\yy\in\Rn$ which is orthogonal to $\perpresf$, that is, some $\yy\in\perperpresf$. Since $\perperpresf = \conssp{\fullshad{f}}$ by Theorem~\ref{thm:f:4x}(\ref{thm:f:4xa}), this means $\fullshad{f}(\xx)=\fullshad{f}(\xperp)$ for all $\xx\in\Rn$. If $\fullshad{f}\equiv +\infty$, then any point in $\Rn$ is a minimizer.
Otherwise, for some $\beta\in\R$, the set \[ L = \{ \xx \in \perpresf : \fullshad{f}(\xx) \leq \beta \} \] is not empty (since if $\fullshad{f}(\xx)\leq \beta$ then $\fullshad{f}(\xperp)\leq \beta$ as well) and compact (by Theorem~\ref{thm:f:4x}\ref{thm:f:4xb}). Therefore, since $\fullshad{f}$ is lower semicontinuous (\Cref{thm:f:4x}\ref{thm:f:4xaa}), it attains its minimum over $L$ at some point $\qq\in\perpresf$ (\Cref{thm:weierstrass}). Furthermore, $\qq$ must actually minimize $\fullshad{f}$ over all of $\Rn$ since if $\xx\in\Rn$, then $\fullshad{f}(\xx)=\fullshad{f}(\xperp)\geq\fullshad{f}(\qq)$. \pfpart{Part~(\ref{cor:fig-cons-main-props:c}):} Having proved part~(\ref{cor:fig-cons-main-props:a}), this follows directly from Theorem~\ref{thm:all-min-proc-correct}, all of whose conditions are satisfied. \qedhere \end{proof-parts} \end{proof} \subsection{Canonical minimizers} \label{subsec:univ-min} As seen in Corollary~\ref{cor:fig-cons-main-props}, the construction in Figure~\ref{fig:min-proc} yields a minimizer of $\fext$ of the form $\ebar\plusl\qq$, where $\qq\in\Rn$ minimizes the universal reduction $\fullshad{f}$, and where $\ebar$ has the property that \begin{equation} \label{eqn:dbar-fullshad-min} \fext(\ebar\plusl\xx) = \fullshad{f}(\xx) = \min_{\ebar'\in\corezn} \fext(\ebar'\plusl\xx) \end{equation} for all $\xx\in\Rn$. That the minimum that appears here is realized was previously proved in Proposition~\ref{pr:fullshad-equivs}. In fact, \eqref{eqn:dbar-fullshad-min} is showing something much stronger, namely, that $\ebar$ realizes that minimum for \emph{all} $\xx$ \emph{simultaneously}, which is fairly remarkable. Furthermore, the construction in Figure~\ref{fig:min-proc} reveals that there is a whole set of points with this same property, since the construction and proof hold for a whole range of arbitrary choices, as previously discussed. Here, we study some of the properties of that set. 
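The "simultaneous for all $\xx$" property in \eqref{eqn:dbar-fullshad-min} can be checked numerically on a toy example. The sketch below is hypothetical and not from this text: for $f(x_1,x_2)=e^{-x_1}+(x_2-1)^2$, the single icon $\ebar=\limray{\ee_1}$ realizes the minimum over icons at every sampled $\xx$, with the astral point approximated by pushing a large finite distance along $\ee_1$.

```python
import math

# Hypothetical toy function (not from the text):
def f(x1, x2):
    return math.exp(-x1) + (x2 - 1.0) ** 2

T = 60.0  # large finite stand-in for the astral point omega*e1

def fshad_ebar(x1, x2):
    # Approximates fext(omega*e1 (+) x) by evaluating far along e1.
    return f(x1 + T, x2)

def fullshad(x1, x2):
    # The universal reduction of this toy f: constant in x1.
    return (x2 - 1.0) ** 2

# The same icon ebar works for every x tested, not just for one x:
for x in [(-3.0, 0.0), (0.0, 1.0), (2.0, 5.0), (10.0, -1.0)]:
    assert abs(fshad_ebar(*x) - fullshad(*x)) < 1e-9
```

No $\xx$-dependent choice of icon is needed here; this is exactly the universality that Definition~\ref{def:univ-reducer} below isolates.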
\begin{definition} \label{def:univ-reducer} Let $f:\Rn\rightarrow\Rext$ be convex. We say that an icon $\ebar\in\corezn$ is a \emph{universal reducer} for $f$ if $\fshadd=\fullshad{f}$, that is, if $ \fext(\ebar\plusl\xx)=\fullshad{f}(\xx) $ for all $\xx\in\Rn$. We write $\unimin{f}$ for the set of all such universal reducers: \[ \unimin{f} = \Braces{ \ebar\in\corezn : \fshadd=\fullshad{f} }. \] \end{definition} We call such icons ``universal'' because of their connection to universal reductions, and also because they universally attain the minimum in \eqref{eqn:dbar-fullshad-min} simultaneously for all $\xx\in\Rn$. The next proposition gives some simple properties of $\unimin{f}$. In particular, the fact that this set is never empty means that the universal reduction $\fullshad{f}$ of a convex function $f$ is itself an iconic reduction with $\fullshad{f}=\fshadd$ for any $\ebar\in\unimin{f}$. \begin{proposition} \label{pr:new:thm:f:8a} Let $f:\Rn\rightarrow\Rext$ be convex. Then: \begin{letter-compact} \item \label{pr:new:thm:f:8a:nonemp-closed} $\unimin{f}$ is nonempty and closed (in $\extspace$). \item \label{pr:new:thm:f:8a:a} $\unimin{f}\subseteq\conv(\unimin{f})\subseteq\aresconef$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:new:thm:f:8a:nonemp-closed}):} That $\unimin{f}$ is nonempty follows directly from Corollary~\ref{cor:fig-cons-main-props}(\ref{cor:fig-cons-main-props:a}). Let $\ebar\in\clbar{\unimin{f}}$, which we aim to show is in $\unimin{f}$. Then there exists a sequence $\seq{\ebar_t}$ in $\unimin{f}$ which converges to $\ebar$. This implies $\ebar$ is an icon since $\corezn$ is closed in $\extspace$ (\Cref{pr:i:8}\ref{pr:i:8e}). Also, for all $\xx\in\Rn$, \[ \fullshad{f}(\xx) \leq \fext(\ebar\plusl\xx) \leq \liminf \fext(\ebar_t\plusl\xx) = \fullshad{f}(\xx). \] The first inequality is by definition of $\fullshad{f}$ (Definition~\ref{def:univ-reduction}). 
The second inequality is by lower semicontinuity of $\fext$ (Proposition~\ref{prop:ext:F}), and because $\ebar_t\plusl\xx\rightarrow\ebar\plusl\xx$ (by Proposition~\ref{pr:i:7}\ref{pr:i:7g}). The equality is because each $\ebar_t\in\unimin{f}$. Therefore, $\ebar\in\unimin{f}$, so $\unimin{f}$ is closed. \pfpart{Part~(\ref{pr:new:thm:f:8a:a}):} We argue that $\unimin{f}\subseteq\aresconef$. Since $\aresconef$ is convex (by \Cref{cor:res-fbar-closed}), it will then follow that $\conv(\unimin{f})\subseteq\aresconef$ (by \Cref{pr:conhull-prop}\ref{pr:conhull-prop:aa}). Let $\ebar\in\unimin{f}$. For all $\xx\in\Rn$, \[ \fext(\ebar\plusl\xx) = \fullshad{f}(\xx) \leq \fext(\xx) \leq f(\xx). \] The equality is because $\ebar$ is a universal reducer. The first inequality is by definition of $\fullshad{f}$ (Definition~\ref{def:univ-reduction}), and since $\zero\in\corezn$. And the second is from Proposition~\ref{pr:h:1}(\ref{pr:h:1a}). Therefore, $\ebar\in\aresconef$ by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:c}. \qedhere \end{proof-parts} \end{proof} Suppose $\xbar=\ebar\plusl\qq$ where $\ebar\in\corezn$ and $\qq\in\Rn$. Previously, we saw that if $\xbar$ minimizes $\fext$ then $\ebar$ must be in $\aresconef$ (by Theorem~\ref{thm:arescone-fshadd-min}), and $\qq$ must be a minimizer of $\fullshad{f}$ (by Proposition~\ref{pr:min-fullshad-is-finite-min}). The converse is false. In other words, in general, it is not the case that $\xbar$ minimizes $\fext$ for every choice of $\ebar\in\aresconef$ and every $\qq$ that minimizes $\fullshad{f}$. For instance, for the function $f$ considered in Examples~\ref{ex:simple-eg-exp-exp-sq} and~\ref{ex:simple-eg-exp-exp-sq-2}, $\qq=\trans{[0,0,2]}$ minimizes $\fullshad{f}$, and $\ebar=\limray{\ee_1}\in\aresconef$, but $\xbar=\ebar\plusl\qq$ does not minimize $\fext$ since $\fext(\xbar)=1>0=\inf f$. 
Nevertheless, as discussed above, Corollary~\ref{cor:fig-cons-main-props} shows that the construction of Figure~\ref{fig:min-proc} yields a minimizer $\xbar=\ebar\plusl\qq$ of a particular form, namely, with $\qq$ a finite minimizer of $\fullshad{f}$, and icon $\ebar$ not only in $\aresconef$, but also a universal reducer, as shown in part~(\ref{cor:fig-cons-main-props:a}) of that corollary. We call such a point (where $\ebar\in\unimin{f}$ is a universal reducer, and $\qq$ minimizes the universal reduction $\fullshad{f}$) a \emph{canonical minimizer} of $\fext$. Every such point is indeed a minimizer of $\fext$, as follows from the next proposition. Later, we will see that the process of Figure~\ref{fig:min-proc} finds all of the canonical minimizers (and thereby all of the universal reducers as well). \begin{proposition} \label{pr:unimin-to-global-min} Let $f:\Rn\rightarrow\Rext$ be convex. Suppose $\ebar\in\unimin{f}$ and that $\qq\in\Rn$ minimizes $\fullshad{f}$. Then $\ebar\plusl\qq$ minimizes $\fext$. \end{proposition} \begin{proof} By Proposition~\ref{pr:min-fullshad-is-finite-min}, since $\qq$ minimizes $\fullshad{f}$, there exists an icon $\ebar'\in\corezn$ such that $\ebar'\plusl\qq$ minimizes $\fext$. Then \[ \fext(\ebar\plusl\qq) = \fullshad{f}(\qq) \leq \fext(\ebar'\plusl\qq) = \min \fext, \] where the first equality is because $\ebar\in\unimin{f}$, and the inequality is by definition of $\fullshad{f}$ (Definition~\ref{def:univ-reduction}). Therefore, $\ebar\plusl\qq$ also minimizes $\fext$. \end{proof} Not all minimizers of $\fext$ are canonical minimizers. For example, for the function $f$ in \eqref{eqn:recip-fcn-eg} of Example~\ref{ex:recip-fcn-eg}, $\fext$ is minimized by $\limray{\ee_1}\plusl\ee_2$, but $\limray{\ee_1}$ is not a universal reducer (since, for instance, $\fext(\limray{\ee_1}\plusl(-\ee_2))=+\infty$, but $\fullshad{f}\equiv 0$). 
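A numerical analogue of this phenomenon can be sketched with our own hypothetical function (not the one from Example~\ref{ex:recip-fcn-eg}): for $f(x_1,x_2)=e^{-x_1}+e^{-x_2}$, we have $\inf f=0$ and $\fullshad{f}\equiv 0$, yet the icon $\limray{\ee_1}$, although in the astral recession cone, is not a universal reducer.

```python
import math

# Hypothetical toy function (not from the text): inf f = 0, and the
# universal reduction fullshad f is identically 0.
def f(x1, x2):
    return math.exp(-x1) + math.exp(-x2)

T = 60.0  # large finite stand-in for "omega"

# e1 is a recession direction, so omega*e1 lies in the astral recession
# cone; but the reduction at omega*e1 is exp(-x2), not identically 0:
red_e1 = lambda x1, x2: f(x1 + T, x2)
assert abs(red_e1(0.0, 0.0) - 1.0) < 1e-9    # != fullshad(0,0) = 0

# The icon omega*e1 (+) omega*e2, by contrast, behaves universally:
red_e1e2 = lambda x1, x2: f(x1 + T, x2 + T)
for x in [(0.0, 0.0), (-5.0, 3.0), (2.0, -4.0)]:
    assert red_e1e2(*x) < 1e-9               # == fullshad(x) = 0
```

Membership in $\aresconef$ alone does not force the reduction to agree with $\fullshad{f}$; both escape directions must be used before the icon becomes universal.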
For a convex, lower semicontinuous function $f:\Rn\rightarrow\Rext$, universal reducers can be constructed as in Figure~\ref{fig:min-proc} in a recursive fashion, justified next, that is very similar to that seen for points in the astral recession cone in Theorem~\ref{thm:f:3}. Indeed, the recursive formulation is identical to what was seen in that theorem. The critical difference between the construction of points in $\unimin{f}$ and $\aresconef$ comes instead in where these sets intersect $\Rn$. As seen in Proposition~\ref{pr:f:1}, the astral recession cone, $\aresconef$, intersects $\Rn$ exactly at the standard recession cone, $\resc{f}$. On the other hand, since $\unimin{f}$ consists only of icons, it can only intersect $\Rn$ at the origin. Furthermore, as shown in the next proposition, $\zero$ is included in $\unimin{f}$ if and only if the termination criterion used in the construction in Figure~\ref{fig:min-proc} has been reached, or equivalently, if and only if $f=\fullshad{f}$. Thus, if this criterion does not hold, then $\unimin{f}$ includes no points in $\Rn$. \begin{proposition} \label{pr:zero-in-univf} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Then the following are equivalent: \begin{letter-compact} \item \label{pr:zero-in-univf:a} $\zero\in\unimin{f}$. \item \label{pr:zero-in-univf:b} $f=\fullshad{f}$. \item \label{pr:zero-in-univf:c} $\resc{f}=\conssp{f}$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{pr:zero-in-univf:a}) $\Leftrightarrow$ (\ref{pr:zero-in-univf:b}): } This is immediate from Definition~\ref{def:univ-reducer} since $\fshad{\zero}(\xx)=\fext(\xx)=f(\xx)$ for $\xx\in\Rn$, by Proposition~\ref{pr:h:1}(\ref{pr:h:1a}). \pfpart{(\ref{pr:zero-in-univf:b}) $\Leftrightarrow$ (\ref{pr:zero-in-univf:c}): } This is immediate from Corollary~\refequiv{cor:thm:f:4:1}{cor:thm:f:4:1a}{cor:thm:f:4:1c}. 
\qedhere \end{proof-parts} \end{proof} \begin{theorem} \label{thm:new:f:8.2} \MarkYes Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Let $\vv\in\Rn$ and let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. Suppose $\dbar=\limray{\vv}\plusl\ebar$ for some $\ebar\in\corezn$. Then $\dbar\in\unimin{f}$ if and only if $\vv\in\resc{f}$ and $\ebar\in\unimin{g}$. \end{theorem} \begin{proof} Suppose $\dbar\in\unimin{f}$. Then $\dbar\in\aresconef$ by Proposition~\ref{pr:new:thm:f:8a}(\ref{pr:new:thm:f:8a:a}), so $\vv\in\resc{f}$ by Theorem~\ref{thm:f:3}(\ref{thm:f:3b}). For all $\xx\in\Rn$, \[ \gext(\ebar\plusl\xx) = \fext(\limray{\vv}\plusl\ebar\plusl\xx) = \fext(\dbar\plusl\xx) = \fullshad{f}(\xx) = \fullshad{g}(\xx). \] The first equality is by Theorem~\ref{thm:d4}(\ref{thm:d4:c}). The third is because $\dbar\in\unimin{f}$. The last equality is by Theorem~\ref{thm:f:5}. Thus, $\ebar\in\unimin{g}$. The converse is similar. Suppose that $\vv\in\resc{f}$ and $\ebar\in\unimin{g}$. Then for all $\xx\in\Rn$, \[ \fext(\dbar\plusl\xx) = \fext(\limray{\vv}\plusl\ebar\plusl\xx) = \gext(\ebar\plusl\xx) = \fullshad{g}(\xx) = \fullshad{f}(\xx). \] The second equality is by Theorem~\ref{thm:d4}(\ref{thm:d4:c}). The third is because $\ebar\in\unimin{g}$. The last equality is by Theorem~\ref{thm:f:5}. Thus, $\dbar\in\unimin{f}$. \end{proof} We show next that the potential products of the process of Figure~\ref{fig:min-proc} are exactly the canonical minimizers of $\fext$. In this sense, this process finds all canonical minimizers, and so also all universal reducers. \begin{theorem} \label{thm:min-proc-all-can-min} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Let $\vv_1,\dotsc,\vv_k,\qq\in\Rn$, and let $\ebar = \limrays{\vv_1,\ldots,\vv_k}$. Let $g_0=f$ and $g_i = \gishadvi$ for $i=1,\ldots,k$.
Then the following are equivalent: \begin{letter-compact} \item \label{thm:min-proc-all-can-min:a} $\ebar\in\unimin{f}$ and $\qq$ minimizes $\fullshad{f}$; that is, $\ebar\plusl\qq$ is a canonical minimizer of $\fext$. \item \label{thm:min-proc-all-can-min:b} $\resc{g_k}=\conssp{g_k}$; $\qq$ minimizes $g_k$; and $\vv_i\in\resc{g_{i-1}}$ for $i=1,\ldots,k$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:min-proc-all-can-min:a}) $\Rightarrow$ (\ref{thm:min-proc-all-can-min:b}): } Suppose $\ebar\in\unimin{f}$ and $\qq$ minimizes $\fullshad{f}$. Then $\ebar\plusl\qq$ minimizes $\fext$, by Proposition~\ref{pr:unimin-to-global-min}. Therefore, by Theorem~\ref{thm:all-min-proc-correct}, $\qq$ minimizes $g_k$ and $\vv_i\in\resc{g_{i-1}}$ for $i=1,\ldots,k$. Also, $\fshadd=\fullshad{f}$ (since $\ebar\in\unimin{f}$), and $g_k=\fshadd$ by \Cref{pr:icon-red-decomp-astron-red}. Thus, $g_k=\fullshad{f}$, which, combined with the foregoing, implies $\resc{g_k}=\conssp{g_k}$ by Corollary~\refequiv{cor:f:4}{cor:f:4a}{cor:f:4c}. \pfpart{(\ref{thm:min-proc-all-can-min:b}) $\Rightarrow$ (\ref{thm:min-proc-all-can-min:a}): } Suppose $\resc{g_k}=\conssp{g_k}$, $\qq$ minimizes $g_k$, and $\vv_i\in\resc{g_{i-1}}$ for $i=1,\ldots,k$. These conditions imply $g_k=\fshadd=\fullshad{f}$, by Corollary~\ref{cor:fig-cons-main-props}(\ref{cor:fig-cons-main-props:a}). Therefore, $\ebar\in\unimin{f}$. This also shows $\qq$ minimizes $\fullshad{f}$ since it minimizes $g_k$. \qedhere \end{proof-parts} \end{proof} We saw in Proposition~\ref{pr:new:thm:f:8a}(\ref{pr:new:thm:f:8a:a}) that $\unimin{f}\subseteq\aresconef$. In fact, there is a much more precise relationship that exists between these two sets. Specifically, the astral recession cone, $\aresconef$, is exactly the astral conic hull of the set $\unimin{f}$ of universal reducers, or equivalently, the convex hull of $\unimin{f}$ adjoined with the origin, as stated in the next theorem. 
Thus, in this way, the astral recession cone is effectively being generated by the universal reducers, which are, in this sense, its most ``extreme'' points. \begin{theorem} \label{thm:res-convhull-unimin} Let $f:\Rn\rightarrow\Rext$ be convex. Then \begin{equation} \label{eq:thm:res-convhull-unimin:1} \aresconef = \acone(\unimin{f}) = \conv\bigParens{(\unimin{f}) \cup \{\zero\}}. \end{equation} \end{theorem} \begin{proof} Let $K=\acone(\unimin{f})$. We first argue that \begin{equation} \label{eqn:thm:res-convhull-unimin:a} (\aresconef)\cap\corezn \subseteq K. \end{equation} Let $\ebar\in(\aresconef)\cap\corezn$, which we aim to show is in $K$. Let $\dbar$ be any point in $\unimin{f}$ (which exists by \Cref{pr:new:thm:f:8a}\ref{pr:new:thm:f:8a:nonemp-closed}). Then for all $\xx\in\Rn$, \[ \fullshad{f}(\xx) \leq \fext(\ebar\plusl\dbar\plusl\xx) \leq \fext(\dbar\plusl\xx) = \fullshad{f}(\xx). \] The first inequality is by definition of $\fullshad{f}$ (Definition~\ref{def:univ-reduction}). The second is by Proposition~\ref{pr:arescone-def-ez-cons}(\ref{pr:arescone-def-ez-cons:b}) since $\ebar\in\aresconef$. The equality is because $\dbar\in\unimin{f}$. Thus, $\ebar\plusl\dbar$ (which is an icon by Proposition~\ref{pr:i:8}\ref{pr:i:8-leftsum}) is a universal reducer, so $\ebar\plusl\dbar\in\unimin{f}\subseteq K$. We then have that \[ \ebar \in \lb{\zero}{\ebar\plusl\dbar} = \aray{(\ebar\plusl\dbar)} \subseteq K. \] The first inclusion is by Corollary~\ref{cor:d-in-lb-0-dplusx}. The equality is by Theorem~\ref{thm:oconichull-equals-ocvxhull} (and since $\ebar\plusl\dbar$ is an icon). The last inclusion is because $K$ is an astral cone that includes $\ebar\plusl\dbar$. Having proved \eqref{eqn:thm:res-convhull-unimin:a}, we now have \[ \aresconef = \conv\bigParens{(\aresconef) \cap \corezn} \subseteq K = \acone(\unimin{f}) \subseteq \aresconef.
\] The first equality is by Theorem~\refequiv{thm:ast-cvx-cone-equiv}{thm:ast-cvx-cone-equiv:a}{thm:ast-cvx-cone-equiv:c} since $\aresconef$ is a convex astral cone (Corollary~\ref{cor:res-fbar-closed}). The first inclusion is by \eqref{eqn:thm:res-convhull-unimin:a}, and by Proposition~\ref{pr:conhull-prop}\ref{pr:conhull-prop:aa} since $K$ is convex. The last inclusion is because, by Proposition~\ref{pr:new:thm:f:8a}(\ref{pr:new:thm:f:8a:a}), $\unimin{f}$ is included in the convex astral cone $\aresconef$ (and using Proposition~\ref{pr:acone-hull-props}\ref{pr:acone-hull-props:a}). This proves the first equality of \eqref{eq:thm:res-convhull-unimin:1}. The second equality then follows from Theorem~\ref{thm:acone-char}(\ref{thm:acone-char:a}) since $\unimin{f}$ consists only of icons. \end{proof} Theorem~\ref{thm:res-convhull-unimin} shows that the convex hull of $\unimin{f}$, adjoined with the origin, yields the astral recession cone. We next characterize the convex hull of just $\unimin{f}$, without adjoining the origin. This set turns out to consist of all points $\zbar$ in $\extspace$, not just the icons, for which $\fext(\zbar\plusl\xx)=\fullshad{f}(\xx)$ for all $\xx\in\Rn$. Equivalently, it consists of all points whose iconic part is a universal reducer and whose finite part is in the constancy space of $\fullshad{f}$. \begin{theorem} \label{thm:conv-univ-equiv} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\zbar=\ebar\plusl\qq$ where $\ebar\in\corezn$ and $\qq\in\Rn$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:conv-univ-equiv:a} $\zbar\in\conv(\unimin{f})$. \item \label{thm:conv-univ-equiv:b} For all $\xx\in\Rn$, $\fext(\zbar\plusl\xx)\leq \fullshad{f}(\xx)$. \item \label{thm:conv-univ-equiv:c} For all $\xx\in\Rn$, $\fext(\zbar\plusl\xx)= \fullshad{f}(\xx)$. \item \label{thm:conv-univ-equiv:d} $\ebar\in\unimin{f}$ and $\qq\in\conssp{\fullshad{f}}$. 
\end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:conv-univ-equiv:a}) $\Rightarrow$ (\ref{thm:conv-univ-equiv:b}): } Let $\zbar\in\conv(\unimin{f})$, let $\xx\in\Rn$, and let $G:\extspace\rightarrow\Rext$ be defined by $G(\ybar)=\fext(\ybar\plusl\xx)$ for $\ybar\in\extspace$. Then $G$ is convex by \Cref{thm:cvx-compose-affine-cvx} since $\fext$ is convex (\Cref{thm:fext-convex}). Therefore, the sublevel set $S=\set{\ybar\in\extspace :\: G(\ybar)\leq \fullshad{f}(\xx)}$ is convex by \Cref{thm:f:9}. Further, $\unimin{f}\subseteq S$ (since $G(\ebar)=\fext(\ebar\plusl\xx)=\fullshad{f}(\xx)$ for $\ebar\in\unimin{f}$), implying $\conv(\unimin{f})\subseteq S$ (\Cref{pr:conhull-prop}\ref{pr:conhull-prop:aa}). In particular, $\zbar\in S$, so $\fext(\zbar\plusl\xx)=G(\zbar)\leq \fullshad{f}(\xx)$. \pfpart{(\ref{thm:conv-univ-equiv:b}) $\Rightarrow$ (\ref{thm:conv-univ-equiv:d}): } Suppose (\ref{thm:conv-univ-equiv:b}) holds. Let $h=\fshadd$, which is convex and lower semicontinuous (by Proposition~\ref{pr:i:9}\ref{pr:i:9a}). For all $\xx\in\Rn$, \begin{equation} \label{eqn:thm:new:thm:f:8a:1} h(\xx+\qq) = \fext(\ebar\plusl\qq\plusl\xx) = \fext(\zbar\plusl\xx) \leq \fullshad{f}(\xx) \leq \fext(\ebar\plusl\xx) = h(\xx). \end{equation} The first inequality is by assumption, and the second is by definition of $\fullshad{f}$ (Definition~\ref{def:univ-reduction}). Next, note that, since $\aresconef$ is an astral cone (Corollary~\ref{cor:res-fbar-closed}) that includes $\zbar$, it also must include $2\zbar=\ebar\plusl 2\qq$ (Proposition~\ref{pr:ast-cone-is-naive}). Thus, similar to \eqref{eqn:thm:new:thm:f:8a:1}, \begin{eqnarray} h(\xx) = \fext(\ebar\plusl \xx) &=& \fext(\ebar\plusl\qq\plusl (\xx - \qq)) \nonumber \\ &=& \fext(\zbar\plusl (\xx - \qq)) \nonumber \\ &\leq& \fullshad{f}(\xx-\qq) \nonumber \\ &\leq& \fext(\ebar\plusl 2\qq \plusl (\xx-\qq)) \nonumber \\ &=& \fext(\ebar\plusl (\xx+\qq)) = h(\xx+\qq). 
\label{eqn:thm:new:thm:f:8a:4} \end{eqnarray} The first inequality is by assumption, and the second is from Proposition~\ref{pr:fullshad-equivs} since $\ebar\plusl 2\qq \in \aresconef$. (We also have made liberal use of Proposition~\ref{pr:i:7}.) \eqref{eqn:thm:new:thm:f:8a:4} implies that \eqref{eqn:thm:new:thm:f:8a:1} must hold with equality, so that \[ h(\xx+\qq)=h(\xx)=\fext(\ebar\plusl\xx)=\fullshad{f}(\xx) \] for all $\xx\in\Rn$. This means that $\ebar\in\unimin{f}$, as claimed, and also that $h=\fullshad{f}$. Further, by \Cref{pr:cons-equiv}(\ref{pr:cons-equiv:a},\ref{pr:cons-equiv:b}), this shows that $\qq\in\conssp{h}=\conssp{\fullshad{f}}$. \pfpart{(\ref{thm:conv-univ-equiv:d}) $\Rightarrow$ (\ref{thm:conv-univ-equiv:a}): } Suppose $\ebar\in\unimin{f}$ and $\qq\in\conssp{\fullshad{f}}$. We claim first that $\ebar\plusl\limray{\qq}$, which is an icon, is also a universal reducer. To see this, let $\xx\in\Rn$. Then for all $t$, \begin{equation} \label{eq:thm:conv-univ-equiv:1} \fext(\ebar\plusl t\qq \plusl \xx) = \fullshad{f}(t\qq + \xx) = \fullshad{f}(\xx). \end{equation} The first equality is because $\ebar\in\unimin{f}$, and the second is because $\qq\in\conssp{\fullshad{f}}$. Thus, \begin{equation} \label{eq:thm:conv-univ-equiv:2} \fullshad{f}(\xx) \leq \fext(\ebar\plusl \limray{\qq} \plusl\xx) \leq \liminf \fext(\ebar\plusl t\qq \plusl \xx) = \fullshad{f}(\xx). \end{equation} The equality is by \eqref{eq:thm:conv-univ-equiv:1}. The first inequality is by definition of $\fullshad{f}$ (Definition~\ref{def:univ-reduction}). The second inequality is by lower semicontinuity of $\fext$ (\Cref{prop:ext:F}), since $ \ebar\plusl t\qq\plusl\xx = \ebar\plusl\xx\plusl t\qq \rightarrow \ebar\plusl\xx\plusl \limray{\qq} = \ebar\plusl\limray{\qq}\plusl\xx $, with convergence by \Cref{pr:i:7}(\ref{pr:i:7f}). \eqref{eq:thm:conv-univ-equiv:2} implies that $\ebar\plusl\limray{\qq}$ is also a universal reducer.
Hence, \[ \zbar = \ebar\plusl\qq \in \ebar\plusl \lb{\zero}{\limray{\qq}} = \lb{\ebar}{\ebar\plusl\limray{\qq}} \subseteq \conv(\unimin{f}). \] The first inclusion is by Theorem~\ref{thm:lb-with-zero}. The equality is by Theorem~\ref{thm:e:9}. The last inclusion is because $\ebar$ and $\ebar\plusl\limray{\qq}$ are both in $\unimin{f}$, and so also in its convex hull. \pfpart{(\ref{thm:conv-univ-equiv:c}) $\Rightarrow$ (\ref{thm:conv-univ-equiv:b}): } This is immediate. \pfpart{(\ref{thm:conv-univ-equiv:d}) $\Rightarrow$ (\ref{thm:conv-univ-equiv:c}): } Suppose $\ebar\in\unimin{f}$ and $\qq\in\conssp{\fullshad{f}}$. Then for all $\xx\in\Rn$, \[ \fext(\zbar\plusl\xx) = \fext(\ebar\plusl\qq\plusl\xx) = \fullshad{f}(\qq+\xx) = \fullshad{f}(\xx), \] where the second equality is because $\ebar\in\unimin{f}$ and the last is because $\qq\in\conssp{\fullshad{f}}$. \qedhere \end{proof-parts} \end{proof} As a corollary, the only icons in $\conv(\unimin{f})$ are the universal reducers: \begin{corollary} \label{cor:conv-univ-icons} Let $f:\Rn\rightarrow\Rext$ be convex. Then $\unimin{f}=\conv(\unimin{f})\cap\corezn$. \end{corollary} \begin{proof} That $\unimin{f}\subseteq\conv(\unimin{f})\cap\corezn$ is immediate. The reverse inclusion follows because if $\ebar\in\conv(\unimin{f})\cap\corezn$ then $\fshadd=\fullshad{f}$ by Theorem~\refequiv{thm:conv-univ-equiv}{thm:conv-univ-equiv:a}{thm:conv-univ-equiv:c}. \end{proof} \section{The structure of minimizers in some particular cases} \label{sec:minimizers-examples} We study next the nature of minimizers of an extension $\fext$ in some particular cases, focusing on the astral rank of minimizers, and also a natural class of minimization problems commonly encountered in statistics and machine learning.
\subsection{Minimizers can have maximum astral rank} \label{sec:max-rank-minimizers} When a convex function has no finite minimizer, but can only be minimized by an unbounded sequence of points, it seems natural to wonder if the function can nevertheless always be minimized by following along some straight and unwavering halfline to infinity, as seems so often to be the case. For instance, the function $f$ in \eqref{eqn:recip-fcn-eg} can be minimized along a halfline in the direction of $\vv=\trans{[1,1]}$ by the sequence $\seq{t\vv}$, implying $\fext$ is minimized by $\limray{\vv}$. Can every convex function be minimized in this way? Or, to minimize the function, might it be necessary to pursue a more convoluted route to infinity? In astral terms, we are asking if the extension $\fext$ of a convex function can always be minimized by a point in $\extspace$ whose astral rank is at most one. In this section, we show that this is not always possible. Quite on the contrary, we study an example of a convex function $f:\Rn\rightarrow\R$ whose extension $\fext$ can only be minimized by a point with astral rank $n$, the maximum possible. Thus, the function not only cannot be minimized by following a simple, one-dimensional halfline, but in fact, the only way to minimize the function is by pursuing a trajectory involving all $n$ dimensions. The same behavior was seen in Example~\ref{ex:two-speed-exp}, and indeed, the function presented below is essentially a generalization of that example to $\Rn$. We also use this example to illustrate some of the earlier concepts developed in the preceding sections. 
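Before the formal development, a quick numerical sketch in Python illustrates this behavior for the $n=3$ instance of the function defined next, $f(\xx)=e^{x_2^2-x_1}+e^{x_3^2-x_2}+e^{-x_3}$. The particular multi-speed trajectory $(t^7,t^3,t)$ used below is our own illustrative choice; any trajectory with $x_1$ outpacing $x_2^2$ and $x_2$ outpacing $x_3^2$ would do.

```python
import math

# The n = 3 instance of the function from the text:
def f(x1, x2, x3):
    return (math.exp(x2 ** 2 - x1)
            + math.exp(x3 ** 2 - x2)
            + math.exp(-x3))

# Along the ray t*(1,1,1) the first term blows up:
assert f(10.0, 10.0, 10.0) > 1e30          # exp(100 - 10) dominates

# Along the ray t*e1 the last two terms each stay at 1, so f -> 2 > 0:
assert abs(f(50.0, 0.0, 0.0) - 2.0) < 1e-9

# But the multi-speed trajectory (t^7, t^3, t), whose coordinates grow
# at very different rates, drives f to inf f = 0:
t = 6.0
assert f(t ** 7, t ** 3, t) < 1e-2
```

No single ray minimizes $f$; only a trajectory bending through all three dimensions does, which is the astral-rank-$n$ phenomenon made precise below.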
Here is the function, where, as usual, we write $x_i$ for the $i$-th component of a vector $\xx\in\Rn$: \begin{eqnarray} f(\xx) &=& \exp({x_2^2 - x_1}) + \exp({x_3^2 - x_2}) + \dotsb + \exp({x_n^2 - x_{n-1}}) + \exp({-x_n}) \nonumber \\ &=& \sum_{j=1}^n h_j(\xx) \label{eq:h:9} \end{eqnarray} where \[ h_j(\xx) = \left\{ \begin{array}{ll} \exp({x_{j+1}^2 - x_j}) & \mbox{if $j<n$} \\ \exp({- x_n}) & \mbox{if $j=n$.} \end{array} \right. \] Each function $h_j$ is convex (since $x_{j+1}^2 - x_j$ and $-x_n$ are both convex, implying $h_j$ is, by Proposition~\ref{pr:j:2}\ref{pr:j:2a}); therefore, $f$ is convex as well. Clearly, $f$ and the $h_j$'s are all also continuous, closed, proper, finite everywhere, and strictly positive everywhere. Intuitively, to minimize $h_j$, we need $x_j$ to be growing to $+\infty$ faster than $x_{j+1}^2$. Thus, to minimize $f$, we need every variable $x_j$ to tend to $+\infty$, but with $x_1$ growing much faster than $x_2$, which is growing much faster than $x_3$, and so on. To see how this intuition is captured by our formal results, we first analyze $\unimin{f}$, the set of universal reducers for this function. This set turns out to consist only of the single icon \[ \Iden \omm = \limrays{\ee_1,\ldots,\ee_n} = \limray{\ee_1}\plusl\dotsb\plusl\limray{\ee_n} \] where, as usual $\ee_i$ is the $i$-th standard basis vector in $\Rn$, and $\Iden$ is the $n\times n$ identity matrix. (We could also write this point simply as $\omm=\ommsub{n}$, as in Eq.~\ref{eq:i:7}, but instead write it as above to make its form more explicit.) Consistent with the intuition suggested above, this point also turns out to be the unique minimizer of $\fext$. \begin{proposition} \label{pr:hi-rank-f-eg} Let $f$ be as defined in \eqref{eq:h:9}. Then: \begin{letter-compact} \item \label{pr:hi-rank-f-eg:a} $f$'s only universal reducer is $\Iden\omm$; that is, $ \unimin{f} = \{ \Iden\omm \} $. \item \label{pr:hi-rank-f-eg:b} $\fext$ is uniquely minimized at $\Iden\omm$. 
\end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:hi-rank-f-eg:a}):} Let $\ebar$ be any icon in $\unimin{f}$. We will prove the proposition by showing this implies $\ebar=\Iden\omm$. That $\Iden\omm$ actually is in $\unimin{f}$ then follows from the fact that $\unimin{f}$ cannot be empty, by \Cref{pr:new:thm:f:8a}(\ref{pr:new:thm:f:8a:nonemp-closed}). Let $\ebar=\limrays{\vv_1,\ldots,\vv_k}$ be $\ebar$'s canonical representation (with finite part $\zero$, by \Cref{pr:icon-equiv}). We define functions $g_0=f$ and $g_i = \gishadvi$ for $i=1,\ldots,k$, and also \[ \ghat_i = \sum_{j=i+1}^n h_j \] for $i=0,\ldots,n$. Finally, let \[ \dbar_i = \limrays{\vv_{i+1},\ldots,\vv_k} \] for $i=0,\ldots,k$. Note that $\dbar_0=\ebar$ and $\ghat_0=f=g_0$. To show that $\ebar=\Iden\omm$, we prove the following by induction on $i=0,\ldots,n$: \begin{roman-compact} \item \label{pr:hi-rank-f-eg:induc:a} $k\geq i$. \item \label{pr:hi-rank-f-eg:induc:b} $g_i = \ghat_i$. \item \label{pr:hi-rank-f-eg:induc:c} $\vv_j=\ee_j$ for $j=1,\ldots, i$. \item \label{pr:hi-rank-f-eg:induc:d} $\dbar_i\in\unimin{g_i}$. \end{roman-compact} Specifically, when $i=n$, items~(\ref{pr:hi-rank-f-eg:induc:a}) and~(\ref{pr:hi-rank-f-eg:induc:c}) will imply that $\ebar=\Iden\omm$. The base case that $i=0$ follows immediately from definitions and assumptions. For the inductive step, let $i\geq 1$ and assume that the inductive hypothesis holds for $i-1$. Then $k\geq i-1$, $g_{i-1} = \ghat_{i-1}$, $\dbar_{i-1}\in\unimin{g_{i-1}}$, and $\vv_j=\ee_j$ for $j=1,\ldots,i-1$. Note first that, for $\lambda\in\R$, \[ g_{i-1}(\lambda\ee_i) = \ghat_{i-1}(\lambda\ee_i) = \me^{-\lambda} + (n-i), \] which is strictly decreasing as a function of $\lambda$, and so is bounded by $g_{i-1}(\zero)<+\infty$ for $\lambda\in\Rpos$. 
By \Cref{pr:stan-rec-equiv}(\ref{pr:stan-rec-equiv:a},\ref{pr:stan-rec-equiv:c}), this shows that $\ee_i\in\resc{g_{i-1}}$, and also shows that $-\ee_i\not\in\resc{g_{i-1}}$. Specifically, this implies that $\resc{g_{i-1}}\neq\conssp{g_{i-1}}$, by \Cref{pr:prelim:const-props}(\ref{pr:prelim:const-props:a}). Therefore, $\zero\not\in\unimin{g_{i-1}}$ by Proposition~\refequiv{pr:zero-in-univf}{pr:zero-in-univf:a}{pr:zero-in-univf:c}. Since $\dbar_{i-1}\in\unimin{g_{i-1}}$, this proves that $\dbar_{i-1}\neq\zero$, so $k\geq i$, proving item~(\ref{pr:hi-rank-f-eg:induc:a}). Thus, $\dbar_{i-1}=\limray{\vv_i}\plusl\dbar_i$, which further implies that $\vv_i\in\resc{g_{i-1}}$ and $\dbar_i\in\unimin{g_i}$ by Theorem~\ref{thm:new:f:8.2}, proving item~(\ref{pr:hi-rank-f-eg:induc:d}). Write the components of $\vv_i$ as $\vv_i=\trans{[v_{i1},\ldots,v_{in}]}$. For all $j<i$, $\vv_j=\ee_j$ is orthogonal to $\vv_i$, since $\ebar$ has been written in its canonical representation. Thus, $v_{ij}=0$ for $j<i$. We further claim that $v_{ij}=0$ for $j>i$ as well. Suppose to the contrary that $v_{ij}\neq 0$ for some $j>i$. Then for $\lambda\in\Rpos$, \[ g_{i-1}(\zero) \geq g_{i-1}(\lambda\vv_i) = \ghat_{i-1}(\lambda\vv_i) \geq h_{j-1}(\lambda\vv_i) = \exp(\lambda^2 v_{ij}^2 - \lambda v_{i,j-1}), \] where the first inequality is because $\vv_i$ is in $\resc{g_{i-1}}$ (which is a cone by Proposition~\ref{pr:resc-cone-basic-props}). Note that the expression on the right tends to $+\infty$ as $\lambda\rightarrow+\infty$ since $v_{ij}\neq 0$. But this is a contradiction since $g_{i-1}(\zero)<+\infty$. Thus, $\vv_i$, which has unit length, must be either $\ee_i$ or $-\ee_i$. Since, as argued above, $-\ee_i\not\in\resc{g_{i-1}}$, we conclude that $\vv_i=\ee_i$, proving item~(\ref{pr:hi-rank-f-eg:induc:c}). Finally, for all $\xx\in\Rn$, we claim \begin{equation} \label{eqn:pr:hi-rank-f-eg:1} g_i(\xx) = \gext_{i-1}(\limray{\ee_i}\plusl\xx) = \ghat_i(\xx).
\end{equation} The first equality is by $g_i$'s definition. To see the second equality, let $\zbar=\limray{\ee_i}\plusl\xx$. By Proposition~\ref{pr:d1}, there exists a sequence $\seq{\zz_t}$ in $\Rn$ that converges to $\zbar$ and such that $g_{i-1}(\zz_t)\rightarrow \gext_{i-1}(\zbar)$. Also, $z_{tj}=\zz_t\cdot\ee_j\rightarrow\zbar\cdot\ee_j$, for $j=1,\ldots,n$ (by Theorem~\ref{thm:i:1}\ref{thm:i:1c}), implying $z_{ti}\rightarrow+\infty$ and $z_{tj}\rightarrow x_j$ for $j\neq i$. Therefore, by continuity, $h_i(\zz_t)\rightarrow 0$ and $h_j(\zz_t)\rightarrow h_j(\xx)$ for $j>i$, so that $\ghat_{i-1}(\zz_t)\rightarrow \ghat_i(\xx)$. Since $g_{i-1}(\zz_t)=\ghat_{i-1}(\zz_t)$ for all $t$, the limits of these two sequences must be equal; thus, $\gext_{i-1}(\zbar)=\ghat_i(\xx)$, proving \eqref{eqn:pr:hi-rank-f-eg:1}. This completes the induction and the proof. \pfpart{Part~(\ref{pr:hi-rank-f-eg:b}):} Continuing the argument above, we have \[ \fext(\Iden\omm) = \fshadd(\zero) = g_n(\zero) = \ghat_n(\zero) = 0, \] where the first equality is because $\ebar=\Iden\omm$, the second is by \Cref{pr:icon-red-decomp-astron-red}, the third is by item~(\ref{pr:hi-rank-f-eg:induc:b}) above (with $i=n$), and the last is because, by definition, $\ghat_n\equiv 0$. Since $\inf \fext = \inf f \geq 0$ (by \Cref{pr:fext-min-exists}), this shows that $\Iden\omm$ minimizes $\fext$. To show this is the only minimizer, suppose $\xbar\in\extspace$ is some minimizer of $\fext$. We can write $\xbar=\dbar\plusl\qq$ where $\dbar\in\corezn$ and $\qq\in\Rn$. Then \[ \dbar \in \aresconef \subseteq \conv\bigParens{ (\unimin{f})\cup\{\zero\} } = \lb{\zero}{\Iden\omm}, \] where the first inclusion is by Theorem~\ref{thm:arescone-fshadd-min}, the second is by \Cref{thm:res-convhull-unimin}, and the equality is by part~(\ref{pr:hi-rank-f-eg:a}). 
From Theorem~\ref{thm:lb-with-zero}, it then follows that $\dbar=\limrays{\ee_1,\ldots,\ee_j}$ for some $j\in\{0,\ldots,n\}$, since points of this form are the only icons in the segment $\lb{\zero}{\Iden\omm}$. If $j<n$, then \[ \fext(\xbar) \geq \hext_n(\xbar) = \expex(-\xbar\cdot\ee_n) = \exp(-\qq\cdot\ee_n) > 0. \] The first inequality is by \Cref{pr:h:1}(\ref{pr:h:1:geq}) since $f\geq h_n$. The first equality is by \Cref{pr:j:2}(\ref{pr:j:2c}) and Example~\ref{ex:ext-affine}. The second equality is because $\xbar\cdot\ee_n=\dbar\cdot\ee_n\plusl\qq\cdot\ee_n$ and $\dbar\cdot\ee_n=0$. Thus, $\xbar$ cannot be a minimizer in this case. Otherwise, if $j=n$, then $\xbar=\Iden\omm\plusl\qq=\Iden\omm$ by the Projection Lemma (\Cref{lemma:proj}). Thus, this is the only minimizer of $\fext$. \qedhere \end{proof-parts} \end{proof} Thus, the function $f$ in \eqref{eq:h:9} cannot be minimized by a sequence following a straight line, nor even converging asymptotically to a line. On the contrary, the function can only be minimized by a sequence that expands unboundedly across all $n$ dimensions. For example, $f$ is minimized by the sequence \[ \xx_t = \sum_{i=1}^n t^{3^{n-i}} \ee_i, \] which converges to $\Iden\omm$. However, $f$ need not be minimized by \emph{every} sequence converging to this point (unless $n=1$); for instance, the sequence \[ \xx'_t = \sum_{i=1}^n t^{n-i+1} \ee_i \] converges to $\Iden\omm$, but $f(\xx'_t)\not\rightarrow 0$. In other words, $\fext$ is not continuous at $\Iden\omm$. Still, Proposition~\ref{pr:hi-rank-f-eg}(\ref{pr:hi-rank-f-eg:b}) does imply that convergence to $\Iden\omm$ is a {necessary} condition for a sequence to minimize $f$, meaning $f$ cannot be minimized by any sequence that does \emph{not} converge in $\extspace$ to $\Iden\omm$. 
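For instance, when $n=2$, so that $f(\xx)=\exp(x_2^2-x_1)+\exp(-x_2)$, these two sequences are $\xx_t=t^3\ee_1+t\ee_2$ and $\xx'_t=t^2\ee_1+t\ee_2$, and
\[
f(\xx_t) = \me^{t^2-t^3}+\me^{-t} \rightarrow 0,
\qquad
f(\xx'_t) = \me^{t^2-t^2}+\me^{-t} = 1+\me^{-t} \rightarrow 1.
\]
In the second sequence, $x_1$ grows as fast as $x_2^2$, but not strictly faster, so the first term of $f$ remains equal to $1$, even though both sequences converge to $\limray{\ee_1}\plusl\limray{\ee_2}=\Iden\omm$ in $\extspace$.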
As a corollary, we can use Proposition~\ref{pr:hi-rank-f-eg} to construct, for any astral point $\zbar$, a function whose extension is minimized only at $\zbar$: \begin{theorem} \label{thm:gen-uniq-min} Let $\zbar\in\extspace$. Then there exists a convex function $f:\Rn\rightarrow\R$ whose extension $\fext$ is uniquely minimized at $\zbar$. \end{theorem} \proof Let $\zbar=\VV\omm\plusl\qq$ be the canonical representation of $\zbar$ where $\VV\in\R^{n\times k}$ is column-orthogonal and $\qq\in\Rn$ with $\trans{\VV}\qq=\zero$. Let $\fomin:\Rk\rightarrow\R$ denote the function given in \eqref{eq:h:9} but with $n$ replaced by $k$ so that its extension, $\fominext$, is uniquely minimized at $\omm=\omm_k$, by Proposition~\ref{pr:hi-rank-f-eg}(\ref{pr:hi-rank-f-eg:b}). Let $\normfcn(\xx)=\norm{\xx}$ for $\xx\in\Rn$, and let $\PP=\Iden - \VV \trans{\VV}$ where $\Iden$ is the $n\times n$ identity matrix. To construct $f$, we define \[ f(\xx) = \fomin\paren{\trans{\VV} \xx} + \normfcn\paren{\PP \xx - \qq} \] for $\xx\in\Rn$. Both $\fomin$ and $\normfcn$ are convex, closed, proper, finite everywhere, and nonnegative everywhere, so $f$ is as well. By Proposition~\ref{pr:seq-to-inf-has-inf-len}, $\normfcn$'s extension is \[ \normfcnext(\xbar) = \left\{ \begin{array}{cl} \norm{\xbar} & \mbox{if $\xbar\in\Rn$}\\ +\infty & \mbox{otherwise} \end{array} \right. \] for $\xbar\in\extspace$, and moreover, that proposition shows that $\normfcn$ is extensibly continuous everywhere. Clearly, $\normfcnext$ is uniquely minimized by $\zero$. By Proposition~\ref{pr:ext-affine-comp}(\ref{pr:ext-affine-comp:c}), the extension of $\xx\mapsto\normfcn(\PP \xx - \qq)$ is $\xbar\mapsto\normfcnext(-\qq \plusl \PP\xbar)$, and by Theorem~\ref{thm:ext-linear-comp}, the extension of $\xx\mapsto \fomin\paren{\trans{\VV} \xx}$ is $\xbar\mapsto \fominext\paren{\trans{\VV} \xbar}$.
Combining then yields \begin{equation} \label{eqn:thm:gen-uniq-min:1} \fext(\xbar) = \fominext\paren{\trans{\VV} \xbar} + \normfcnext(-\qq \plusl \PP\xbar) \end{equation} by Theorem~\ref{thm:ext-sum-fcns-w-duality} (or Proposition~\ref{pr:ext-sum-fcns}\ref{pr:ext-sum-fcns:b}) since $\fomin$ and $\normfcn$ are both nonnegative and finite everywhere. We show next that $\fext$ is minimized by $\zbar$. First, \[ \trans{\VV}\zbar = \trans{\VV} \VV\omm \plusl \trans{\VV}\qq = \omm \] since $\VV$ is column-orthogonal and $\trans{\VV}\qq=\zero$. For these same reasons, \[ \PP\zbar = \PP \VV \omm \plusl \PP \qq = \qq \] since, by straightforward matrix algebra, $\PP\VV=\zeromat{n}{k}$ and $\PP\qq=\qq$. Since $\fominext$ is minimized by $\omm=\trans{\VV}\zbar$, and $\normfcnext$ is minimized by $\zero=-\qq\plusl\PP\zbar$, this shows that $\zbar$ minimizes $\fext$. Finally, we show that $\zbar$ is the only minimizer of $\fext$. Let $\xbar\in\extspace$ be any minimizer of $\fext$. For this to be the case, $\xbar$, like $\zbar$, must minimize both terms on the right-hand side of \eqref{eqn:thm:gen-uniq-min:1}. To minimize the second term, we must have $-\qq\plusl\PP\xbar=\zero$, the only minimizer of $\normfcnext$, implying $\PP\xbar=\qq$. Likewise, to minimize the first term, we must have $\trans{\VV}\xbar=\omm$, implying $\VV\trans{\VV}\xbar=\VV\omm$. Thus, \[ \zbar = \VV\omm \plusl \qq = \VV\trans{\VV}\xbar \plusl \PP\xbar = (\VV\trans{\VV} + \PP)\xbar = \Iden \xbar = \xbar. \] The third equality follows from \Cref{prop:commute:AB} since $\PP\xbar=\qq\in\Rn$, which therefore commutes with any astral point (by Proposition~\ref{pr:i:7}\ref{pr:i:7d}). The fourth equality is immediate from $\PP$'s definition. Thus, as claimed, $\zbar$ is $\fext$'s only minimizer. 
\qed \subsection{Sufficient conditions for rank one minimizers} \label{subsec:rank-one-minimizers} Proposition~\ref{pr:hi-rank-f-eg} shows that a convex function need not have a universal reducer (nor a minimizer) of astral rank at most one. For the function that was studied to prove this, we also noted that its extension was not continuous. In fact, there turns out to be a general connection between continuity and the existence of rank-one minimizers. Among its consequences, we will see that if $\fext$ is continuous everywhere then there must exist a universal reducer (and therefore also a minimizer) of astral rank at most one. Continuity will be studied more closely in Chapter~\ref{sec:continuity}. Here, we make the link to rank-one universal reducers. Let $f:\Rn\rightarrow\Rext$ be convex. As was mentioned earlier, we will see later how the relationship between $\represc{f}$ and $\aresconef$ is critical in characterizing exactly where $\fext$ is continuous. In particular, if $\fext$ is continuous everywhere, then these sets must be equal to each other, that is, $f$ must be recessive complete. Furthermore, if $f$ is finite everywhere, then recessive completeness is both necessary and sufficient for $\fext$ to be continuous everywhere. Details of this connection will be given in Chapter~\ref{sec:continuity}. For now, we focus just on the consequences of recessive completeness for our study of minimizers. In Theorem~\ref{thm:f:4x}, we saw how the domain of the universal reduction $\fullshad{f}$ is effectively limited to the linear subspace $\perpresf$. As a preliminary step, we show that if $f$ is recessive complete, then the set $\perpresf$ is simply equal to $\rescperp{f}$, the set of points orthogonal to all directions in the standard recession cone, $\resc{f}$: \begin{proposition} \label{pr:perpres-is-rescperp} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Then $\perpresf\subseteq\rescperp{f}$.
If, in addition, $f$ is recessive complete, then $\perpresf = \rescperp{f}$. \end{proposition} \begin{proof} That $\perpresf\subseteq\rescperp{f}$ follows immediately from Proposition~\ref{pr:perp-props-new}(\ref{pr:perp-props-new:b}) since $\resc{f} \subseteq \aresconef$ (by Proposition~\ref{pr:f:1}). Suppose now that $\aresconef=\represc{f}$. Let $\uu\in\rescperp{f}$, and let $\xbar\in\aresconef$. Since $\xbar$ is also in $\represc{f}$, we can write it in the form $\xbar=\limrays{\vv_1,\ldots,\vv_k}\plusl\qq$ where $\vv_1,\ldots,\vv_k,\qq\in \resc{f}$. Since $\uu\in\rescperp{f}$, it is orthogonal to all points in $\resc{f}$, so $\qq\cdot\uu=0$ and $\vv_i\cdot\uu=0$, for $i=1,\ldots,k$, implying $\xbar\cdot\uu=0$. Therefore, $\uu\in\perpresf$, being perpendicular to every point in $\aresconef$, so $\rescperp{f}\subseteq\perpresf$, completing the proof. \end{proof} When $f$ is recessive complete (and therefore, as remarked above, whenever $\fext$ is continuous), there must always exist a universal reducer of astral rank at most one. Specifically, as we show next, if $\vv$ is any point in the relative interior of $f$'s standard recession cone, $\resc{f}$, then the associated astron $\limray{\vv}$ must be a universal reducer. As a consequence, $\limray{\vv}\plusl\qq$ must be a canonical minimizer of $\fext$ for every $\qq\in\Rn$ that minimizes $\fullshad{f}$ (and thus, for all finite parts of all minimizers of $\fext$, by Proposition~\ref{pr:min-fullshad-is-finite-min}). This is in dramatic contrast to the example of \eqref{eq:h:9} which had no universal reducers and no minimizers of astral rank less than $n$. \begin{theorem}\label{thm:unimin-can-be-rankone} Let $f:\Rn\rightarrow\Rext$ be convex, lower semicontinuous, and recessive complete. Let $\vv\in\ric{f}$, the relative interior of $f$'s standard recession cone. Then $\limray{\vv}\in\unimin{f}$.
Consequently, if $\qq\in\Rn$ minimizes $\fullshad{f}$, then $\limray{\vv}\plusl\qq$ is a canonical minimizer of $\fext$. \end{theorem} \begin{proof} If $f\equiv+\infty$ then the claim holds trivially since then $\fullshad{f}\equiv+\infty$ implying $\unimin{f}=\extspace$. We therefore assume henceforth that $f\not\equiv+\infty$. Let $g=\fshadv$ be the reduction of $f$ at $\limray{\vv}$. We will prove the result using Corollary~\ref{cor:f:4} by first showing that $g$ satisfies condition~(\ref{cor:f:4b}) of that corollary. So suppose $\uu\in\slopes{g}$; we aim to show this implies that $\uu\in\perpresf$. Since $\uu\in\slopes{g}$ and $\vv\in\resc{f}$, \Cref{thm:e1}(\ref{thm:e1:d}) implies that $\vv\cdot\uu=0$ and $\uu\in\slopes{f}$. Let $\ww$ be any point in $\resc{f}$. This set is polar to $\slopes{f}$ by Corollary~\ref{cor:rescpol-is-slopes}; therefore, $\uu\cdot\ww\leq 0$. Also, because $\vv\in\ric{f}$, there must exist $\delta>0$ such that the point $\vv+\delta(\vv-\ww)$ is in $\resc{f}$ as well (by \Cref{roc:thm6.4}). Applying Corollary~\ref{cor:rescpol-is-slopes} again to this point, and since $\uu\cdot\vv=0$, yields \[ -\delta \uu\cdot\ww = \uu \cdot [\vv+\delta(\vv-\ww)] \leq 0. \] Since $\delta>0$, it follows that $\uu\cdot\ww=0$. Thus, because this holds for all $\ww\in\resc{f}$, we conclude that $\uu\in\rescperp{f}$. Since $\perpresf=\rescperp{f}$, by Proposition~\ref{pr:perpres-is-rescperp}, we have thus shown that $\slopes{g}\subseteq\perpresf$. In other words, we have shown that condition~(\ref{cor:f:4b}) of Corollary~\ref{cor:f:4} is satisfied with $k=1$ and $\vv_1=\vv$ (so that $g_0=f$ and $g_1=g$). Therefore, by condition~(\ref{cor:f:4c}) of that corollary, it now follows that \[ \fext(\limray{\vv}\plusl\xx) = g(\xx) = \fullshad{f}(\xx) \] for all $\xx\in\Rn$. In other words, $\limray{\vv}$ is a universal reducer for $f$, as claimed. 
(Therefore, if $\qq\in\Rn$ minimizes $\fullshad{f}$, then $\limray{\vv}\plusl\qq$ is a canonical minimizer, by definition.) \end{proof} \begin{example} In studying the example function $f$ of \eqref{eq:simple-eg-exp-exp-sq}, we considered in Examples~\ref{ex:simple-eg-exp-exp-sq} and~\ref{ex:simple-eg-exp-exp-sq-part2} runs of the processes of Figures~\ref{fig:all-min-proc} and~\ref{fig:min-proc}. These both resulted in finding a minimizer $\xbar$ of astral rank~2. However, because this function is recessive complete, by Theorem~\ref{thm:unimin-can-be-rankone}, $\fext$ must have a (canonical) minimizer of astral rank one. Indeed, this is the case. For instance, letting $\vv=\trans{[2,1,1]}$, which is in $\resc{f}$'s relative interior, it can be checked that $\limray{\vv}$ is a universal reducer. Therefore, combining with $\qq=\trans{[0,0,2]}$, which minimizes $\fullshad{f}$, yields the canonical minimizer $\limray{\vv}\plusl\qq$ of astral rank one. \end{example} \subsection{Empirical risk minimization} \label{sec:emp-loss-min} We next consider functions of the form \begin{equation} \label{eqn:loss-sum-form} f(\xx) = \sum_{i\in\indset} \ell_i(\xx\cdot\uu_i), \end{equation} for $\xx\in\Rn$, where $\indset$ is a finite index set, each $\uu_i\in\Rn$, and each function $\ell_i:\R\rightarrow\R$ is convex, lower-bounded, and nondecreasing. We focus on the minimizers of such functions, and especially how these relate to concepts developed earlier. Minimizing functions of the form given in \eqref{eqn:loss-sum-form} is a fundamental problem in machine learning and statistics. Very briefly, in a typical setting, a learning algorithm might be given random ``training examples'' $(\zz_i,y_i)$, for $i=1,\ldots,m$, where $\zz_i\in\Rn$ is an ``instance'' or ``pattern'' (such as an image or photograph, treated as a vector of pixel intensities in $\Rn$), and $y_i\in\{-1,+1\}$ is a ``label'' (that might indicate, for example, if the photograph is or is not of a person's face). 
The goal then is to find a rule for predicting if a new instance $\zz\in\Rn$ should be labeled $-1$ or $+1$. As an example, in logistic regression, the learner finds a vector $\ww\in\Rn$, based on the training examples, and then predicts that a new instance $\zz$ should be labeled according to the sign of $\ww\cdot\zz$. Specifically, $\ww$ is chosen to minimize the ``logistic loss'' on the training examples, that is, \begin{equation} \label{eqn:logistic-reg-obj} f(\ww) = \sum_{i=1}^m \ln\paren{1+\exp(-y_i \ww\cdot\zz_i)}. \end{equation} This kind of function, which is more generally called the \emph{empirical risk}, has the same form as in \eqref{eqn:loss-sum-form} (with $\xx=\ww$, $\uu_i = -y_i \zz_i$, and $\ell_i(z)=\ln(1+\me^z)$). Returning to the general case in \eqref{eqn:loss-sum-form}, for $i\in\indset$, we will assume, without loss of generality, that $\inf \ell_i = 0$, and that $\ell_i$ is not constant (in addition to the other assumptions mentioned above). Since $\ell_i$ is nondecreasing, these conditions imply that $\lim_{x\rightarrow -\infty} \ell_i(x) = 0$ and $\lim_{x\rightarrow +\infty} \ell_i(x) = +\infty$. Each $\ell_i$ is convex and finite everywhere, and therefore continuous everywhere (\Cref{pr:stand-cvx-cont}); the same is also true of $f$. So $\ell_i$'s extension is \begin{equation} \label{eqn:hard-core:1} \ellbar_i(\barx) = \begin{cases} 0 & \text{if $\barx=-\infty$,} \\ \ell_i(\barx) & \mbox{if $\barx\in\R$,} \\ +\infty & \mbox{if $\barx=+\infty$} \end{cases} \end{equation} for $\barx\in\Rext$ (by \Cref{pr:conv-inc:prop}\ref{pr:conv-inc:infsup}). The next proposition gives the form of $f$'s extension, $\fext$, as well as $f$'s astral and standard recession cones, and also shows $\fext$ is continuous everywhere, implying that $f$ is recessive complete. 
\begin{proposition} \label{pr:hard-core:1} Let $f:\Rn\rightarrow\R$ have the form given in \eqref{eqn:loss-sum-form}, where, for $i\in\indset$, $\uu_i\in\Rn$ and $\ell_i:\R\rightarrow\R$ is convex, nondecreasing, not constant, with $\inf \ell_i = 0$. Then the following hold: \begin{letter-compact} \item \label{pr:hard-core:1:a} The lower semicontinuous extension of $f$ is \[ \fext(\xbar) = \sum_{i\in\indset} \ellbar_i(\xbar\cdot\uu_i), \] for $\xbar\in\extspace$. This function is continuous everywhere. \item \label{pr:hard-core:1:b} The astral recession cone of $\fext$ and standard recession cone of $f$ are: \begin{align*} \aresconef &= \{ \ybar\in\extspace : \ybar\cdot\uu_i \leq 0 \mbox{ for } i\in\indset \}, \\ \resc{f} &= \{ \yy\in\Rn : \yy\cdot\uu_i \leq 0 \mbox{ for } i\in\indset \}. \end{align*} Furthermore, $f$ is recessive complete, implying $\rescperp{f}=\perpresf$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:hard-core:1:a}):} For $i\in\indset$, let $h_i(\xx)=\ell_i(\xx\cdot\uu_i)$ for $\xx\in\Rn$. As noted above, $\ell_i$ is continuous everywhere, implying $\ellbar_i$ is continuous everywhere by \Cref{pr:conv-inc:prop}(\ref{pr:conv-inc:infsup}). In turn, \Cref{pr:Gf-cont}(\ref{pr:Gf-cont:b}) (applied with $G$ and $f$, as they appear in that proposition, set to $\ellbar_i$ and $\xx\mapsto\xx\cdot\uu_i$) implies that $\hext_i(\xbar)=\ellbar_i(\xbar\cdot\uu_i)$ for $\xbar\in\extspace$, and that $\hext_i$ is continuous everywhere. The form and continuity of $\fext$ now follow from Proposition~\ref{pr:ext-sum-fcns}(\ref{pr:ext-sum-fcns:b},\ref{pr:ext-sum-fcns:c}) (with summability following from $\hext_i\geq 0$ since $h_i\geq 0$). \pfpart{Part~(\ref{pr:hard-core:1:b}):} Suppose $\ybar\cdot\uu_i\leq 0$ for $i\in\indset$. 
Then because $\ellbar_i$ is nondecreasing, for $\xx\in\Rn$, $\ellbar_i(\ybar\cdot\uu_i+\xx\cdot\uu_i) \leq \ellbar_i(\xx\cdot\uu_i)$ for all $i\in\indset$, implying $\fext(\ybar \plusl \xx) \leq \fext(\xx) = f(\xx)$. Therefore, $\ybar\in\aresconef$ by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:b}. For the converse, suppose for some $i\in\indset$ that $\ybar\cdot\uu_i > 0$. Then $\limray{\ybar}\cdot\uu_i=+\infty$, implying $\ellbar_i(\limray{\ybar}\cdot\uu_i)=+\infty$, so that $\fext(\limray{\ybar})=+\infty>f(\zero)$. Therefore, $\ybar\not\in\aresconef$ by Theorem~\refequiv{thm:rec-ext-equivs}{thm:rec-ext-equivs:a}{thm:rec-ext-equivs:c}. The expression for $\resc{f}$ now follows immediately from Proposition~\ref{pr:f:1}. That $f$ is recessive complete follows from the fact that $\fext$ is continuous everywhere, by direct application of Theorem~\ref{thm:cont-conds-finiteev} (to be proved later). This implies $\rescperp{f}=\perpresf$, by Proposition~\ref{pr:perpres-is-rescperp}. \qedhere \end{proof-parts} \end{proof} \citet{primal_dual_boosting} studied functions of the form we are considering, and showed that the points $\uu_i$ can be partitioned into a so-called \emph{easy set} and a \emph{hard core}. Roughly speaking, when $f$ is minimized over $\xx\in\Rn$, if $\uu_i$ is in the easy set, then the corresponding term $\ell_i(\xx\cdot\uu_i)$ is reduced to its minimum value, which is achieved by driving $\xx\cdot\uu_i$ to $-\infty$. This leaves the problem of minimizing the remaining terms in $f$, those involving the hard core. These terms cannot be driven to their minimum values; rather, for these, $\xx\cdot\uu_i$ must converge to some finite value. Here, we revisit these notions, re-casting them in terms of central concepts developed in this book.
In astral terms, $\uu_i$ is considered easy if there exists a point $\xbar\in\extspace$ with $\xbar\cdot\uu_i=-\infty$ (so that $\ellbar_i(\xbar\cdot\uu_i)$ equals $\ellbar_i$'s minimum value) and for which $\fext(\xbar)<+\infty$. This is equivalent to there existing a sequence $\seq{\xx_t}$ in $\Rn$ for which $\xx_t\cdot\uu_i\rightarrow-\infty$, without $f(\xx_t)$ becoming unboundedly large. Otherwise, if there exists no such $\xbar$, then $\uu_i$ is hard. We can write any $\xbar\in\extspace$ as $\xbar=\ebar\plusl\qq$ for some $\ebar\in\corezn$ and $\qq\in\Rn$. The condition that $\fext(\xbar)<+\infty$ implies that $\ebar\in\aresconef$, by Corollary~\ref{cor:a:4}. Further, if $\ebar\in\aresconef$ and if it happens that $\uu_i\in\perpresf$, then $\ebar\cdot\uu_i=0$, implying that $\xbar\cdot\uu_i=\qq\cdot\uu_i$, which is in $\R$. This shows that if $\uu_i\in\perpresf$ then it must be hard in the sense just described, since for all $\xbar\in\extspace$, either $\fext(\xbar)=+\infty$ or $\xbar\cdot\uu_i>-\infty$. This motivates our formal definitions: \begin{definition} \label{def:hard-core} Let $f:\Rn\rightarrow\R$ have the form given in \eqref{eqn:loss-sum-form}, where, for $i\in\indset$, $\uu_i\in\Rn$ and $\ell_i:\R\rightarrow\R$ is convex, nondecreasing, not constant, with $\inf \ell_i > -\infty$. Then the \emph{hard core} of $f$, denoted $\hardcore{f}$, is the set \begin{equation} \label{eq:hard-core:3} \hardcore{f} = \Braces{ i \in \indset : \uu_i\in\rescperp{f} }, \end{equation} (where we take the points $\uu_i$, for $i\in\indset$, to be fixed and given to simplify notation). The \emph{easy set} is the complement of the hard core, $\indset\setminus\hardcore{f}$. \end{definition} Note that in \eqref{eq:hard-core:3}, we use $\rescperp{f}$, but this is the same as $\perpresf$ (by Proposition~\ref{pr:hard-core:1}\ref{pr:hard-core:1:b}). 
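To illustrate the definition, consider the special case in which there exists some $\ww\in\Rn$ with $\ww\cdot\uu_i<0$ for all $i\in\indset$; in the logistic setting of \eqref{eqn:logistic-reg-obj}, this means that $\ww$ linearly separates the training examples, that is, $y_i\,\ww\cdot\zz_i>0$ for $i=1,\ldots,m$. Then $\ww\in\resc{f}$ by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}), and $\limray{\ww}\cdot\uu_i=-\infty$ for all $i\in\indset$, so by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:a}) and \eqref{eqn:hard-core:1},
\[
\fext(\limray{\ww}) = \sum_{i\in\indset} \ellbar_i(-\infty) = 0,
\]
which equals $\inf f$ since $\inf\fext=\inf f\geq 0$ (by \Cref{pr:fext-min-exists}). In this case, no $\uu_i$ is in $\rescperp{f}$ (since $\ww\cdot\uu_i\neq 0$), so the hard core of \eqref{eq:hard-core:3} is empty and every $\uu_i$ is easy.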
\begin{example} \label{ex:erm-running-eg1} The function $f$ of \eqref{eq:simple-eg-exp-exp-sq} from Example~\ref{ex:simple-eg-exp-exp-sq} can be put in the form of \eqref{eqn:loss-sum-form}. To see this, for $x\in\R$, let $\sqp(x)=\paren{\max\{0,x\}}^2$, which is convex, nondecreasing, not constant, with $\inf \sqp=0$. Note that $x^2=\sqp(x)+\sqp(-x)$ always. As a result, we can write $f$ as \begin{equation} \label{eq:simple-eg-exp-exp-sq:modform} f(\xx) = \me^{x_3-x_1} + \me^{-x_2} + \sqp(2+x_2-x_3) + \sqp(-2-x_2+x_3) \end{equation} which satisfies the conditions of Proposition~\ref{pr:hard-core:1} (with $\ell_1(z)=\ell_2(z)=\me^z$, $\ell_3(z)=\sqp(2+z)$, and $\ell_4(z)=\sqp(-2+z)$). Consequently, that proposition confirms various previously determined facts about $f$: by part~(\ref{pr:hard-core:1:a}), its extension $\fext$ is continuous everywhere, and is as given in \eqref{eq:simple-eg-exp-exp-sq:fext}; by part~(\ref{pr:hard-core:1:b}), $f$ is recessive complete with standard recession cone as in \eqref{eq:simple-eg-exp-exp-sq:rescone}. Thus, $\rescperp{f}=\perpresf$ is the line given in \eqref{eq:simple-eg-exp-exp-sq:perpresf}. This line includes $\uu_3=\trans{[0,1,-1]}$ and $\uu_4=\trans{[0,-1,1]}$, but not $\uu_1=\trans{[-1,0,1]}$ or $\uu_2=\trans{[0,-1,0]}$. Therefore, the hard core in this case is the set $\hardcore{f} = \{3,4\}$, indicating that the first two terms of \eqref{eq:simple-eg-exp-exp-sq:modform} are ``easy'' in the sense described above, and the last two are ``hard.'' \end{example} We have seen already that the set $\unimin{f}$ of universal reducers together with the universal reduction $\fullshad{f}$ are central elements in the general theory we have developed for minimizing convex functions. Both of these can be expressed precisely in terms of the hard core, as we show in the next theorem. For the rest of this section, for a set $J\subseteq\indset$, we define the notation \[ \uset{J} = \Braces{\uu_i : i\in J}.
\] Thus, $\uset{\hardcore{f}}$ is the set of all points $\uu_i$ in the hard core. \begin{theorem} \label{thm:hard-core:3} Let $f:\Rn\rightarrow\R$ have the form given in \eqref{eqn:loss-sum-form}, where, for $i\in\indset$, $\uu_i\in\Rn$ and $\ell_i:\R\rightarrow\R$ is convex, nondecreasing, not constant, with $\inf \ell_i = 0$. \begin{letter-compact} \item \label{thm:hard-core:3:a} If $\ybar\in\aresconef$ then $\ybar\cdot\uu_i=0$ for all $i\in\hardcore{f}$. \item \label{thm:hard-core:3:b:conv} Let $\ybar\in\extspace$. Then $\ybar\in\conv(\unimin{f})$ if and only if, for all $i\in\indset$, \begin{equation} \label{eq:hard-core:conv:2} \ybar\cdot\uu_i = \left\{ \begin{array}{cl} 0 & \mbox{if $i\in\hardcore{f}$} \\ -\infty & \mbox{otherwise.} \\ \end{array} \right. \end{equation} \item \label{thm:hard-core:3:b:univ} Let $\ebar\in\corezn$. Then $\ebar\in\unimin{f}$ if and only if, for all $i\in\indset$, \begin{equation*} \label{eq:hard-core:univ:2} \ebar\cdot\uu_i = \left\{ \begin{array}{cl} 0 & \mbox{if $i\in\hardcore{f}$} \\ -\infty & \mbox{otherwise.} \\ \end{array} \right. \end{equation*} \item \label{thm:hard-core:3:c} Let $\ybar\in\extspace$, and suppose, for some $i\in\hardcore{f}$, that $\ybar\cdot\uu_i<0$. Then there exists $j\in\hardcore{f}$ for which $\ybar\cdot\uu_j>0$. \item \label{thm:hard-core:3:d} For $\xx\in\Rn$, \[ \fullshad{f}(\xx) = \sum_{i\in\hardcore{f}} \ell_i(\xx\cdot\uu_i). \] \item \label{thm:hard-core:3:e} $\rescperp{f}=\spn\uset{\hardcore{f}}$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:hard-core:3:a}):} If $i\in\hardcore{f}$ then $\uu_i\in\rescperp{f}=\perpresf$, meaning exactly that $\ybar\cdot\uu_i=0$ if $\ybar\in\aresconef$. \pfpart{Part~(\ref{thm:hard-core:3:b:conv}):} Suppose $\ybar\in\conv(\unimin{f})$, implying $\ybar\in\aresconef$ by Proposition~\ref{pr:new:thm:f:8a}(\ref{pr:new:thm:f:8a:a}). 
Then $\ybar\cdot\uu_i\leq 0$ for $i\in\indset$, by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}). Specifically, $\ybar\cdot\uu_i=0$ for $i\in \hardcore{f}$, by part~(\ref{thm:hard-core:3:a}). It remains then only to show that if $i\not\in\hardcore{f}$ then $\ybar\cdot\uu_i=-\infty$, which would be implied by showing that $\ybar\cdot\uu_i\not\in\R$. Suppose then, by way of contradiction, that there exists $j\not\in\hardcore{f}$ with $\ybar\cdot\uu_j\in\R$. By definition, since $j\not\in\hardcore{f}$, there must exist $\vv\in\resc{f}$ with $\vv\cdot\uu_j\neq 0$, implying, by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}), that actually $\vv\cdot\uu_j< 0$. Let $\lambda\in\R$. To derive a contradiction, we compare function values at $\ybar\plusl\lambda\uu_j$ and $\limray{\vv}\plusl\ybar\plusl\lambda\uu_j$. For $i\in\indset$, by application of Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}), $\vv\cdot\uu_i\leq 0$ and so $\limray{\vv}\cdot\uu_i\leq 0$, implying $\limray{\vv}\in\aresconef$. Since $\ellbar_i$ is nondecreasing, this further shows that \begin{eqnarray} \ellbar_i((\limray{\vv}\plusl\ybar\plusl\lambda\uu_j)\cdot\uu_i) &=& \ellbar_i(\limray{\vv}\cdot\uu_i \plusl(\ybar\plusl\lambda\uu_j)\cdot\uu_i) \nonumber \\ &\leq& \ellbar_i((\ybar\plusl\lambda\uu_j)\cdot\uu_i). \label{eqn:thm:hard-core:3:2} \end{eqnarray} In particular, when $i=j$, \[ \ellbar_j((\limray{\vv}\plusl\ybar\plusl\lambda\uu_j)\cdot\uu_j) = \ellbar_j(\limray{\vv}\cdot\uu_j \plusl(\ybar\plusl\lambda\uu_j)\cdot\uu_j) = 0 \] since $\limray{\vv}\cdot\uu_j=-\infty$. On the other hand, \[ \ellbar_j((\ybar\plusl\lambda\uu_j)\cdot\uu_j) = \ellbar_j(\ybar\cdot\uu_j \plusl \lambda\uu_j\cdot\uu_j) \rightarrow +\infty \] as $\lambda\rightarrow+\infty$, since $\ybar\cdot\uu_j\in\R$ and $\uu_j\neq\zero$ (since $\vv\cdot\uu_j<0$), and since $\ellbar_j$ is nondecreasing and not constant. 
Thus, \eqref{eqn:thm:hard-core:3:2} holds for all $i\in\indset$, and furthermore the inequality is strict when $i=j$ and when $\lambda$ is sufficiently large. Therefore, for $\lambda$ sufficiently large, we have shown that \begin{align*} \fullshad{f}(\lambda\uu_j) \leq \fext(\limray{\vv}\plusl \ybar \plusl \lambda\uu_j) &= \sum_{i\in\indset} \ellbar_i\bigParens{(\limray{\vv}\plusl\ybar\plusl\lambda\uu_j)\cdot\uu_i} \nonumber \\ &< \sum_{i\in\indset} \ellbar_i\bigParens{(\ybar\plusl\lambda\uu_j)\cdot\uu_i} \nonumber \\ &= \fext(\ybar \plusl \lambda\uu_j) \leq \fullshad{f}(\lambda\uu_j). \end{align*} The equalities are by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:a}). The second inequality is by the argument above. The final inequality is by Theorem~\refequiv{thm:conv-univ-equiv}{thm:conv-univ-equiv:a}{thm:conv-univ-equiv:b} since $\ybar\in\conv(\unimin{f})$. For the first inequality, note that because $\limray{\vv}$ and $\ybar$ are both in $\aresconef$, their leftward sum $\limray{\vv}\plusl\ybar$ is as well; this is because $\aresconef$ is a convex astral cone (by \Cref{cor:res-fbar-closed}), which is therefore closed under sequential sum (by \Cref{thm:ast-cone-is-cvx-if-sum}), and so also under leftward addition. The first inequality therefore follows from Proposition~\ref{pr:fullshad-equivs}. Thus, having reached a contradiction, we conclude that if $\ybar\in\conv(\unimin{f})$, then \eqref{eq:hard-core:conv:2} is satisfied for $i\in\indset$. For the converse, suppose \eqref{eq:hard-core:conv:2} is satisfied for $i\in\indset$. Let $\ebar$ be any point in $\unimin{f}$ (which is nonempty by \Cref{pr:new:thm:f:8a}\ref{pr:new:thm:f:8a:nonemp-closed}). Then, as just argued, $\ebar$ satisfies \eqref{eq:hard-core:conv:2} as well, so $\ebar\cdot\uu_i=\ybar\cdot\uu_i$ for $i\in\indset$. 
So for all $\xx\in\Rn$, $\fext(\ybar\plusl\xx)=\fext(\ebar\plusl\xx)=\fullshad{f}(\xx)$ with the first equality from Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:a}), and the second because $\ebar\in\unimin{f}$. Therefore, $\ybar\in\conv(\unimin{f})$ by Theorem~\refequiv{thm:conv-univ-equiv}{thm:conv-univ-equiv:a}{thm:conv-univ-equiv:c}. \pfpart{Part~(\ref{thm:hard-core:3:b:univ}):} By Corollary~\ref{cor:conv-univ-icons}, the icon $\ebar$ is in $\unimin{f}$ if and only if it is in $\conv(\unimin{f})$. Combining with part~(\ref{thm:hard-core:3:b:conv}), this proves the claim. \pfpart{Part~(\ref{thm:hard-core:3:c}):} Let $\ebar$ be any point in $\unimin{f}$ (which exists by \Cref{pr:new:thm:f:8a}\ref{pr:new:thm:f:8a:nonemp-closed}), and let $\zbar=\ebar\plusl\ybar$. Then $\zbar\cdot\uu_i<0$ since $\ebar\cdot\uu_i=0$ by part~(\ref{thm:hard-core:3:b:univ}), so $\zbar\not\in\aresconef$, by part~(\ref{thm:hard-core:3:a}). Therefore, for some $j\in\indset$, $\zbar\cdot\uu_j>0$, by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}). Further, it must be that $j\in\hardcore{f}$ since otherwise part~(\ref{thm:hard-core:3:b:univ}) would imply $\ebar\cdot\uu_j=-\infty$, so also $\zbar\cdot\uu_j=-\infty$. Thus, $\ybar\cdot\uu_j>0$ since $\ebar\cdot\uu_j=0$ by part~(\ref{thm:hard-core:3:b:univ}). \pfpart{Part~(\ref{thm:hard-core:3:d}):} Let $\ebar$ be any point in $\unimin{f}$ (which exists by \Cref{pr:new:thm:f:8a}\ref{pr:new:thm:f:8a:nonemp-closed}). Then for $\xx\in\Rn$, \[ \fullshad{f}(\xx) = \fshadd(\xx) = \fext(\ebar\plusl\xx) = \sum_{i\in\indset} \ellbar_i(\ebar\cdot\uu_i\plusl\xx\cdot\uu_i) = \sum_{i\in\hardcore{f}} \ell_i(\xx\cdot\uu_i). \] The first equality is because $\ebar\in\unimin{f}$. The third is by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:a}). The fourth is by part~(\ref{thm:hard-core:3:b:univ}) and \eqref{eqn:hard-core:1}. \pfpart{Part~(\ref{thm:hard-core:3:e}):} Let $U=\uset{\hardcore{f}}$. We aim to show $\rescperp{f}=\spn U$. 
If $i\in\hardcore{f}$ then $\uu_i\in\rescperp{f}$ by definition, so $U\subseteq\rescperp{f}$, implying $\spn U\subseteq\rescperp{f}$ since $\rescperp{f}$ is a linear subspace (Proposition~\ref{pr:std-perp-props}\ref{pr:std-perp-props:a}). For the reverse inclusion, suppose $\yy\in \Uperp$, meaning $\yy\cdot\uu_i=0$ for all $i\in\hardcore{f}$. Applying Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}) to $\fullshad{f}$, whose form is given in part~(\ref{thm:hard-core:3:d}), shows that $\yy\in\resc{\fullshad{f}}$. Thus, $\Uperp\subseteq\resc{\fullshad{f}}$, so \[ \rescperp{f} = \perpresf = \rescperp{\fullshad{f}} \subseteq \Uperperp = \spn U. \] The first two equalities are by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}) and Theorem~\ref{thm:f:4x}(\ref{thm:f:4xa}); the inclusion and last equality are by Proposition~\ref{pr:std-perp-props}(\ref{pr:std-perp-props:b},\ref{pr:std-perp-props:c}). \qedhere \end{proof-parts} \end{proof} \begin{example} Continuing Example~\ref{ex:erm-running-eg1}, part~(\ref{thm:hard-core:3:d}) of Theorem~\ref{thm:hard-core:3} implies that $\fullshad{f}$, the universal reduction of $f$, is \[ \fullshad{f}(\xx) = \sum_{i\in\hardcore{f}} \ell_i(\xx\cdot\uu_i) = \sqp(2+x_2-x_3) + \sqp(-2-x_2+x_3) = (2+x_2-x_3)^2, \] as was previously noted in \eqref{eq:simple-eg-exp-exp-sq:fullshad}, while part~(\ref{thm:hard-core:3:e}) means $\rescperp{f}=\perpresf$ is the linear subspace (in this case a line) spanned by the hard-core points $\{\uu_3,\uu_4\}$, as in \eqref{eq:simple-eg-exp-exp-sq:perpresf}. \end{example} Theorem~\ref{thm:hard-core:3} provides a characterization of all canonical minimizers for the current setting in terms of the hard core, namely, all points $\xbar=\ebar\plusl\qq$ whose finite part $\qq\in\Rn$ minimizes $\fullshad{f}$ in part~(\ref{thm:hard-core:3:d}), and whose iconic part $\ebar\in\corezn$ is a universal reducer, a point satisfying the equalities in part~(\ref{thm:hard-core:3:b:univ}).
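The collapse of the two hard-core terms in the example above relies on the identity $\sqp(a)+\sqp(-a)=a^2$. The following is a minimal numerical sketch, not part of the formal development, and it assumes that $\sqp$ denotes the squared hinge, $\sqp(z)=(\max\{0,z\})^2$:

```python
# Numerical sanity check (illustration only) of the identity used in the
# example above, assuming sqp(z) = (max{0, z})^2, the squared hinge.
# For any real a, sqp(a) + sqp(-a) = a^2, which is why the two hard-core
# terms of the universal reduction collapse to (2 + x2 - x3)^2.

def sqp(z: float) -> float:
    """Squared hinge: (max{0, z})^2."""
    return max(0.0, z) ** 2

def fullshad(x2: float, x3: float) -> float:
    """Sum of the two hard-core terms in the running example."""
    a = 2.0 + x2 - x3
    return sqp(a) + sqp(-a)

# fullshad agrees with (2 + x2 - x3)^2 at a few sample points:
for x2, x3 in [(0.0, 0.0), (1.5, -2.0), (-3.0, 4.0)]:
    assert fullshad(x2, x3) == (2.0 + x2 - x3) ** 2
```

At any point, at most one of the two terms is active, which is how the piecewise-quadratic summands combine into a single quadratic.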
The theorem shows that all universal reducers $\ebar$ are identical in terms of the values of $\ebar\cdot\uu_i$, for $i\in\indset$, as determined by the hard core. Every universal reducer has the effect of causing those terms $i$ in \eqref{eqn:loss-sum-form} that are not in the hard core to vanish. The remaining terms, those that are in the hard core, constitute exactly the universal reduction $\fullshad{f}$. As is generally the case (Theorem~\ref{thm:f:4x}), all sublevel sets of this function are compact when restricted to $\perpresf=\rescperp{f}$, and the function is constant in all directions orthogonal to this subspace. We also remark that Theorem~\ref{thm:unimin-can-be-rankone}, combined with Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}), shows that $\fext$ can always be minimized along a ray, specifically, at any point $\xbar=\limray{\vv}\plusl\qq$ if $\vv\in\ric{f}$ and $\qq$ minimizes $\fullshad{f}$. We look next at a geometric characterization of the hard core, focusing on the faces of $S=\conv(\uset{\indset})$, the convex hull of all the points $\uu_i$. (See Section~\ref{sec:prelim:faces} for a brief introduction to the faces of a convex set.) The next theorems show that the location of the origin relative to $S$ exactly determines the hard core. In particular, if the origin is not included in $S$, then the hard core must be the empty set (implying that $\fext$ is minimized by a point $\xbar$ for which $\xbar\cdot\uu_i=-\infty$ for all $i\in\indset$). Otherwise, the origin must be in $\ri{C}$ for exactly one of the faces $C$, and the hard core is then precisely the set of indices of all points $\uu_i$ included in $C$. Alternatively, we can say that $\conv(\uset{\hardcore{f}})$ is a face of $S$, and is specifically the smallest face that includes the origin (meaning that it is included in all other faces that include the origin). 
For instance, in the example above, the convex hull of the points $\uu_1,\ldots,\uu_4$ is an (irregular) tetrahedron in $\R^3$. Its faces consist of the tetrahedron itself, its four triangular faces, six edges, four vertices, and the empty set. The origin is in the relative interior of the edge connecting $\uu_3$ and $\uu_4$ (since $\zero=\sfrac{1}{2}\uu_3+\sfrac{1}{2}\uu_4$), corresponding to the hard core being $\{3,4\}$ in this case. That edge is indeed the smallest face that includes the origin. \begin{theorem} \label{thm:erm-faces-hardcore} Let $f:\Rn\rightarrow\R$ have the form given in \eqref{eqn:loss-sum-form}, where, for $i\in\indset$, $\uu_i\in\Rn$ and $\ell_i:\R\rightarrow\R$ is convex, nondecreasing, not constant, with $\inf \ell_i = 0$. Let $S=\conv{\uset{I}}$. Then the following hold: \begin{letter-compact} \item \label{thm:erm-faces-hardcore:b} $\conv(\uset{\hardcore{f}})$ is a face of $S$. \item \label{thm:erm-faces-hardcore:a} Let $J\subseteq I$, and suppose $\zero\in\ri(\conv{\uset{J}})$. Then $J\subseteq\hardcore{f}$. \item \label{thm:erm-faces-hardcore:aa} Let $C$ be a face of $S$, and suppose $\zero\in C$. Then $\zero\in\conv(\uset{\hardcore{f}})\subseteq C$. \item \label{thm:erm-faces-hardcore:c} $\zero\in S$ if and only if $\hardcore{f}\neq\emptyset$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:erm-faces-hardcore:b}):} Let $U=\uset{\hardcore{f}}$, and let $C=S \cap (\spn U)$. We show first that $C$ is a face of $S$, and later show $C=\conv U$. Let $\xx,\zz\in S$ and $\lambda\in (0,1)$. Let $\ww=(1-\lambda)\xx+\lambda\zz$. Assume $\ww\in C$, which we aim to show implies that $\xx$ and $\zz$ are also in $C$. Let $\yy\in\resc{f}$, implying $\yy\cdot\uu_i\leq 0$ for $i\in\indset$, by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}). Since $\xx$ and $\zz$ are in $S$, the convex hull of the $\uu_i$'s, this implies $\yy\cdot\xx\leq 0$ and $\yy\cdot\zz\leq 0$. 
Also, for all $i\in\hardcore{f}$, $\uu_i\in\rescperp{f}$ so $\yy\cdot\uu_i=0$. Since $\ww\in\spn U$, this also means $\yy\cdot\ww=0$. Thus, \[ 0 = \yy\cdot\ww = (1-\lambda)(\yy\cdot\xx) + \lambda(\yy\cdot\zz). \] Since $\lambda\in (0,1)$ and the two terms on the right are nonpositive, we must have $\yy\cdot\xx=\yy\cdot\zz=0$. Therefore, $\xx,\zz\in\rescperp{f}$, since this holds for all $\yy\in\resc{f}$. Thus, $\xx,\zz\in C$ since $\rescperp{f}=\spn U$, by Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:e}). We have shown $C$ is a face of $S$. As such, $C$ is equal to the convex hull of the points in $\uset{\indset}$ that are included in $C$, by \Cref{roc:thm18.3}. Moreover, a point $\uu_i$, for $i\in\indset$, is included in $C$ if and only if it is in $\spn U=\rescperp{f}$, that is, if and only if $i\in\hardcore{f}$. We conclude that $C=\conv{\uset{\hardcore{f}}}$, completing the proof. \pfpart{Part~(\ref{thm:erm-faces-hardcore:a}):} Let $j\in J$, and let $C=\conv(\uset{J})$. Since $\zero\in\ri{C}$, and since $\uu_j\in C$, there exists $\delta>0$ for which the point $\ww=(1+\delta)\zero-\delta\uu_j=-\delta\uu_j$ is also in $C$ (\Cref{roc:thm6.4}). Let $\yy\in\resc{f}$, implying $\yy\cdot\uu_i\leq 0$ for all $i\in\indset$, by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}); in particular, $\yy\cdot\uu_j\leq 0$. Also, $\ww$ is in $C$ and therefore a convex combination of points in $\uset{J}$. Thus, $\yy\cdot(-\delta\uu_j)=\yy\cdot\ww\leq 0$ as well. Together, these imply $\yy\cdot\uu_j=0$ since $\delta>0$. Since this holds for all $\yy\in\resc{f}$, we have shown that $\uu_j\in\rescperp{f}$, that is, $j\in\hardcore{f}$. \pfpart{Part~(\ref{thm:erm-faces-hardcore:aa}):} Let $F=\conv(\uset{\hardcore{f}})$, which is a face of $S$ by part~(\ref{thm:erm-faces-hardcore:b}). We show first that $\zero\in F$. 
Since $\zero\in C\subseteq S$, and since the relative interiors of the faces of $S$ form a partition of $S$ (\Cref{roc:thm18.2}), there must exist a face $D$ of $S$ for which $\zero\in\ri{D}$. Let $J=\{i\in\indset : \uu_i\in D\}$. Then $D=\conv(\uset{J})$, by \Cref{roc:thm18.3}. From part~(\ref{thm:erm-faces-hardcore:a}), $J\subseteq\hardcore{f}$, so $\zero\in D\subseteq F$, as claimed. We next show $F\subseteq C$. Suppose not. Let $C'=F\cap C$, which is a face of $S$ since both $F$ and $C$ are faces. Also, $\zero\in C'$, but $F\not\subseteq C'$ since we have assumed $F\not\subseteq C$. Because $F$ and $C'$ are distinct faces of $S$, their relative interiors are disjoint so that $(\ri{F})\cap(\ri{C'})=\emptyset$ (by \Cref{pr:face-props}\ref{pr:face-props:cor18.1.2}). As a result, there exists a hyperplane properly separating $F$ and $C'$ (by \Cref{roc:thm11.3}). That is, there exist $\vv\in\Rn$ and $\beta\in\R$ for which $\vv\cdot\ww\leq \beta$ for all $\ww\in F$ and $\vv\cdot\ww\geq \beta$ for all $\ww\in C'$. Since $C'\subseteq F$, this actually implies $\vv\cdot\ww = \beta$ for all $\ww\in C'$. Moreover, because $\zero\in C'$, we must have $\beta=0$. Furthermore, this hyperplane \emph{properly} separates these sets, meaning there must exist a point in $F\cup C'$ not in the separating hyperplane itself. Since $C'$ is entirely included in the hyperplane, this implies there must be a point $\zz\in F$ for which $\vv\cdot\zz<0$. Since $\zz\in\conv{\uset{\hardcore{f}}}$, it must be a convex combination of points $\uu_i$, for $i\in\hardcore{f}$. Therefore, there must exist some $i\in\hardcore{f}$ with $\vv\cdot\uu_i<0$. By Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:c}), this implies there exists an index $j\in\hardcore{f}$ with $\vv\cdot\uu_j>0$, contradicting that $\vv\cdot\ww\leq 0$ for all $\ww\in F$. \pfpart{Part~(\ref{thm:erm-faces-hardcore:c}):} Suppose $\zero\in S$, and that, contrary to the claim, $\hardcore{f}=\emptyset$.
Let $\ebar\in\unimin{f}$ (which must exist by \Cref{pr:new:thm:f:8a}\ref{pr:new:thm:f:8a:nonemp-closed}). Then $\ebar\cdot\uu_i=-\infty$ for all $i\in\indset$, by Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:b:univ}). Since $\zero\in S$, $\zero$ is a convex combination of the $\uu_i$'s, implying $\ebar\cdot\zero=-\infty$ using Proposition~\ref{pr:i:1}, a contradiction. For the converse, suppose $\zero\not\in S$, and, contrary to the claim, that $\hardcore{f}\neq\emptyset$. Then because both $S$ and $\{\zero\}$ are convex, closed (in $\Rn$), and bounded, there exists a hyperplane strongly separating them by \Cref{roc:cor11.4.2}. That is, there exists $\vv\in\Rn$ for which \[ \sup_{\ww\in S} \vv\cdot\ww < \vv\cdot\zero = 0 \] by \Cref{roc:thm11.1}. In particular, this means $\vv\cdot\uu_i<0$ for all $i\in\indset$. Let $i\in\hardcore{f}$, which we have assumed is nonempty. Then by Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:c}), because $\vv\cdot\uu_i<0$, there also must exist $j\in\hardcore{f}$ with $\vv\cdot\uu_j>0$, a contradiction. \qedhere \end{proof-parts} \end{proof} For a convex set $S\subseteq\Rn$ and a set $A\subseteq S$, we say that a face $C$ of $S$ is the \emph{smallest face of $S$ that includes $A$} if $A\subseteq C$ and if, for all faces $C'$ of $S$, $A\subseteq C'$ implies $C\subseteq C'$. Equivalently, the smallest face of $S$ that includes $A$ is the intersection of all of the faces of $S$ that include $A$, which is itself a face (since the arbitrary intersection of faces is a face). \begin{theorem} \label{thm:erm-faces-hardcore2} Let $f:\Rn\rightarrow\R$ have the form given in \eqref{eqn:loss-sum-form}, where, for $i\in\indset$, $\uu_i\in\Rn$ and $\ell_i:\R\rightarrow\R$ is convex, nondecreasing, not constant, with $\inf \ell_i = 0$. Let $C$ be any nonempty face of $S=\conv(\uset{I})$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:erm-faces-hardcore2:a} $\zero\in\ri{C}$.
\item \label{thm:erm-faces-hardcore2:b} $C$ is the smallest face of $S$ that includes $\{\zero\}$. \item \label{thm:erm-faces-hardcore2:c} $\hardcore{f}=\{i\in\indset : \uu_i\in C\}$. \item \label{thm:erm-faces-hardcore2:d} $C = \conv(\uset{\hardcore{f}})$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:erm-faces-hardcore2:a}) $\Rightarrow$ (\ref{thm:erm-faces-hardcore2:b}) }: Suppose $\zero\in\ri{C}$. Then clearly $\zero\in C$. If $\zero$ is included in some face $C'$ of $S$, then $C'$ and $\ri{C}$ are not disjoint, implying $C\subseteq C'$ by \Cref{pr:face-props}(\ref{pr:face-props:thm18.1}). \pfpart{(\ref{thm:erm-faces-hardcore2:b}) $\Rightarrow$ (\ref{thm:erm-faces-hardcore2:c}) }: Suppose (\ref{thm:erm-faces-hardcore2:b}) holds. Let $J=\{i\in\indset : \uu_i\in C\}$. By Theorem~\ref{thm:erm-faces-hardcore}(\ref{thm:erm-faces-hardcore:aa}), $\zero\in\conv(\uset{\hardcore{f}})\subseteq C$, since $\zero\in C$. Therefore, $\hardcore{f}\subseteq J$. For the reverse inclusion, since $\zero\in S$, by \Cref{roc:thm18.2}, there exists a face $C'$ of $S$ with $\zero\in\ri{C'}$. By assumption, this implies $C\subseteq C'$, and so $J\subseteq J'$ where $J'=\{i\in\indset : \uu_i\in C'\}$. Furthermore, $J'\subseteq \hardcore{f}$ by Theorem~\ref{thm:erm-faces-hardcore}(\ref{thm:erm-faces-hardcore:a}). Combining yields $J=\hardcore{f}$ as claimed. \pfpart{(\ref{thm:erm-faces-hardcore2:c}) $\Rightarrow$ (\ref{thm:erm-faces-hardcore2:d}) }: This is immediate from \Cref{roc:thm18.3}. \pfpart{(\ref{thm:erm-faces-hardcore2:d}) $\Rightarrow$ (\ref{thm:erm-faces-hardcore2:a}) }: Suppose $C = \conv(\uset{\hardcore{f}})$. Since $C$ is not empty, $\hardcore{f}\neq\emptyset$, so $\zero\in S$ by Theorem~\ref{thm:erm-faces-hardcore}(\ref{thm:erm-faces-hardcore:c}). Therefore, by \Cref{roc:thm18.2}, there exists a face $C'$ of $S$ for which $\zero\in\ri{C'}$.
That is, $C'$ satisfies (\ref{thm:erm-faces-hardcore2:a}), and so also (\ref{thm:erm-faces-hardcore2:d}), by the implications proved above. Thus, $C'=\conv(\uset{\hardcore{f}})=C$, so $\zero\in\ri{C}$. \end{proof-parts} \end{proof} Notice that the sets we have been considering, namely, the standard and astral recession cones, the set of universal reducers, as well as the hard core, all depend exclusively on the $\uu_i$'s, and are entirely {independent} of the specific functions $\ell_i$. In other words, suppose we form a new function $f'$ as in \eqref{eqn:loss-sum-form} with the $\uu_i$'s unchanged, but with each $\ell_i$ replaced by some other function $\ell'_i$ (though still satisfying the same assumed properties). Then the sets listed above are unchanged. That is, $\resc{f}=\resc{f'}$ and $\aresconef=\aresconefp$ by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}); $\hardcore{f}=\hardcore{f'}$ by the definition in \eqref{eq:hard-core:3}; and $\unimin{f}=\unimin{f'}$ by Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:b:univ}). From Proposition~\ref{pr:unimin-to-global-min}, $\fext$ is minimized at a point $\xbar=\ebar\plusl\qq$ if $\ebar\in\unimin{f}$ and $\qq$ minimizes $\fullshad{f}$. If, in addition to our preceding assumptions, each $\ell_i$ is \emph{strictly} increasing, then these conditions are not only sufficient but also necessary for $\xbar$ to minimize $\fext$. Furthermore, if each function $\ell_i$ is \emph{strictly} convex, then $\fullshad{f}$ is uniquely minimized over the linear subspace $\perpresf = \rescperp{f}$, implying also that, for all $i\in\indset$, the value of $\xbar\cdot\uu_i$ will be the same for all of $\fext$'s minimizers $\xbar$. We show these in the next two theorems. \begin{theorem} \label{thm:waspr:hard-core:2} Let $f:\Rn\rightarrow\R$ have the form given in \eqref{eqn:loss-sum-form}, where, for $i\in\indset$, $\uu_i\in\Rn$ and $\ell_i:\R\rightarrow\R$ is convex, with $\inf \ell_i = 0$. 
Suppose further that each $\ell_i$ is strictly increasing (as will be the case if each $\ell_i$ is nondecreasing and strictly convex). Let $\xbar=\ebar\plusl\qq$ where $\ebar\in\corezn$ and $\qq\in\Rn$. Then $\xbar$ minimizes $\fext$ if and only if $\ebar\in\unimin{f}$ and $\qq$ minimizes $\fullshad{f}$. \end{theorem} \begin{proof} The ``if'' direction follows from Proposition~\ref{pr:unimin-to-global-min}. For the converse, suppose $\xbar$ minimizes $\fext$. Let $\ebar'$ be any point in $\unimin{f}$ (which exists by \Cref{pr:new:thm:f:8a}\ref{pr:new:thm:f:8a:nonemp-closed}). Then for all $\zz\in\Rn$, \[ \fullshad{f}(\qq) \leq \fext(\ebar\plusl\qq) \leq \fext(\ebar'\plusl\zz) = \fullshad{f}(\zz). \] The inequalities are, respectively, by $\fullshad{f}$'s definition (Definition~\ref{def:univ-reduction}), and since $\xbar$ minimizes $\fext$. The equality is because $\ebar'\in\unimin{f}$. Thus, $\qq$ minimizes $\fullshad{f}$. To show $\ebar\in\unimin{f}$, note that $\ebar\in\aresconef$ by Theorem~\ref{thm:arescone-fshadd-min} (since $\xbar$ minimizes $\fext$). Therefore, by Propositions~\refequiv{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:b}, and~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}), $\ebar\cdot\uu_i\in\{-\infty,0\}$ for $i\in\indset$ since $\ebar\in\corezn$. Let $J=\{ i \in\indset : \ebar\cdot\uu_i = 0\}$. We claim that $J=\hardcore{f}$. The inclusion $\hardcore{f}\subseteq J$ follows directly from Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:a}). For the reverse inclusion, suppose, by way of contradiction, that there exists an index $j$ in $J\setminus\hardcore{f}$. Then \begin{align*} \fext(\ebar\plusl\qq) = \sum_{i\in\indset} \ellbar_i(\ebar\cdot\uu_i\plusl\qq\cdot\uu_i) &= \sum_{i\in J} \ell_i(\qq\cdot\uu_i) \\ &> \sum_{i\in \hardcore{f}} \ell_i(\qq\cdot\uu_i) = \fullshad{f}(\qq) = \fext(\ebar'\plusl\qq). \end{align*} The first two equalities are from Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:a}) and \eqref{eqn:hard-core:1}. 
The first inequality is because $\ell_j(\qq\cdot\uu_j)>\ellbar_j(-\infty)=0$ since $\ell_j$ is strictly increasing (and since $\hardcore{f}\subseteq J$ and $j\in J\setminus\hardcore{f}$). The third equality is by Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:d}), and the last equality is because $\ebar'\in\unimin{f}$. This is a contradiction since $\ebar\plusl\qq$ minimizes $\fext$. Thus, $J=\hardcore{f}$ and therefore $\ebar\in\unimin{f}$ by Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:b:univ}). Finally, we note that if $\ell_i$ is nondecreasing and strictly convex, then it is also strictly increasing. Otherwise, there would exist real numbers $x<y$ for which $\ell_i(x)\geq\ell_i(y)$, and hence, since $\ell_i$ is nondecreasing, for which $\ell_i(x)=\ell_i(y)$. Letting $z=(x+y)/2$, this implies, by strict convexity, that $\ell_i(z)<(\ell_i(x)+\ell_i(y))/2=\ell_i(x)$. Thus, $x<z$, but $\ell_i(x)>\ell_i(z)$, contradicting that $\ell_i$ is nondecreasing. \end{proof} \begin{theorem} \label{thm:waspr:hard-core:4} Let $f:\Rn\rightarrow\R$ have the form given in \eqref{eqn:loss-sum-form}, where, for $i\in\indset$, $\uu_i\in\Rn$ and $\ell_i:\R\rightarrow\R$ is nondecreasing and strictly convex, with $\inf \ell_i = 0$. Then $\fullshad{f}$, if restricted to $\perpresf = \rescperp{f}$, has a unique minimizer $\qq$. Furthermore, the following are equivalent, for $\xbar\in\extspace$: \begin{letter-compact} \item \label{pr:hard-core:4:a} $\xbar$ minimizes $\fext$. \item \label{pr:hard-core:4:b} $\xbar=\zbar\plusl\qq$ for some $\zbar\in\conv(\unimin{f})$. \item \label{pr:hard-core:4:c} For $i\in\indset$, \[ \xbar\cdot\uu_i = \left\{ \begin{array}{cl} \qq\cdot\uu_i & \mbox{if $i\in\hardcore{f}$} \\ -\infty & \mbox{otherwise.} \\ \end{array} \right. \] \end{letter-compact} \end{theorem} \begin{proof} Let $\qq$ be a minimizer of $\fullshad{f}$ in $\rescperp{f}$, which must exist by Theorem~\ref{thm:f:4x}(\ref{thm:f:4xd}). Suppose, by way of contradiction, that some other point $\qq'\in\rescperp{f}$ also minimizes $\fullshad{f}$, with $\qq\neq\qq'$.
We claim first that $\qq\cdot\uu_i\neq\qq'\cdot\uu_i$ for some $i\in\hardcore{f}$. Suppose to the contrary that $\dd\cdot\uu_i=0$ for all $i\in\hardcore{f}$, where $\dd=\qq'-\qq\neq\zero$. Then because $\rescperp{f}$ is a linear subspace (Proposition~\ref{pr:std-perp-props}\ref{pr:std-perp-props:a}), for all $\lambda\in\R$, $\lambda\dd\in\rescperp{f}$. Furthermore, by Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:d}), $\fullshad{f}(\lambda\dd)=\fullshad{f}(\zero)<+\infty$. In other words, one of the sublevel sets of $\fullshad{f}$ in $\rescperp{f}=\perpresf$ includes the entire line $\{ \lambda\dd : \lambda\in\R\}$. This, however, is a contradiction since all such sublevel sets are bounded, by Theorem~\ref{thm:f:4x}(\ref{thm:f:4xb}). So let $i\in\hardcore{f}$ be such that $\qq\cdot\uu_i\neq\qq'\cdot\uu_i$. Let $\zz=(\qq+\qq')/2$. Since each $\ell_j$ is convex, $\ell_j(\zz\cdot\uu_j)\leq(\ell_j(\qq\cdot\uu_j)+\ell_j(\qq'\cdot\uu_j))/2$. Furthermore, when $j=i$, by strict convexity of $\ell_i$, this inequality is strict. Therefore, applying Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:d}) yields $\fullshad{f}(\zz) < (\fullshad{f}(\qq) + \fullshad{f}(\qq'))/2$, contradicting the assumption that $\qq$ and $\qq'$ both minimize $\fullshad{f}$. Thus, $\qq$ is the only minimizer of $\fullshad{f}$ in $\rescperp{f}$. We next prove the stated equivalences: \begin{proof-parts} \pfpart{(\ref{pr:hard-core:4:a}) $\Rightarrow$ (\ref{pr:hard-core:4:b}) }: Suppose $\xbar$ minimizes $\fext$. Then by Theorem~\ref{thm:waspr:hard-core:2}, $\xbar=\ebar\plusl\yy$ for some $\ebar\in\unimin{f}$ and some $\yy\in\Rn$ that minimizes $\fullshad{f}$. By linear algebra, we can further write $\yy$ as $\yy=\yy'+\yy''$ where $\yy'\in\rescperp{f}$ and $\yy''\in\rescperperp{f}$ are $\yy$'s projections onto these two orthogonal linear subspaces. 
Then $\yy''$ is in the constancy space of $\fullshad{f}$ since $\rescperperp{f}=\perperpresf=\conssp{\fullshad{f}}$, by Theorem~\ref{thm:f:4x}(\ref{thm:f:4xa}). Thus, $\fullshad{f}(\yy)=\fullshad{f}(\yy')$, so $\yy'$ also minimizes $\fullshad{f}$. Therefore, $\yy'=\qq$ since, as already shown, $\qq$ is the only minimizer of $\fullshad{f}$ in $\rescperp{f}$. Thus, $\xbar=\zbar\plusl\qq$ where $\zbar=\ebar\plusl\yy''$ is in $\conv(\unimin{f})$ by Theorem~\refequiv{thm:conv-univ-equiv}{thm:conv-univ-equiv:a}{thm:conv-univ-equiv:d}. \pfpart{(\ref{pr:hard-core:4:b}) $\Rightarrow$ (\ref{pr:hard-core:4:c}) }: Suppose $\xbar=\zbar\plusl\qq$ for some $\zbar\in\conv(\unimin{f})$. Then for each $i\in\indset$, $\xbar\cdot\uu_i=\zbar\cdot\uu_i\plusl\qq\cdot\uu_i$. That these values take the form given in~(\ref{pr:hard-core:4:c}) therefore follows directly from Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:b:conv}). \pfpart{(\ref{pr:hard-core:4:c}) $\Rightarrow$ (\ref{pr:hard-core:4:a}) }: Suppose $\xbar\in\extspace$ has the property stated in~(\ref{pr:hard-core:4:c}). Let $\xbar'\in\extspace$ be any minimizer of $\fext$ (which exists by Proposition~\ref{pr:fext-min-exists}). Since $\xbar'$ satisfies~(\ref{pr:hard-core:4:a}), by the foregoing implications, it also must satisfy (\ref{pr:hard-core:4:c}). Thus, $\xbar\cdot\uu_i=\xbar'\cdot\uu_i$ for all $i\in\indset$, implying $\fext(\xbar)=\fext(\xbar')$ by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:a}). Therefore, $\xbar$ also minimizes $\fext$. \qedhere \end{proof-parts} \end{proof} Finally, we mention some implications for minimizing sequences. Let $\seq{\xx_t}$ be a sequence in $\Rn$ that minimizes some function $f$ satisfying the assumptions of Theorem~\ref{thm:waspr:hard-core:2}. Then it can be argued using Theorem~\ref{thm:waspr:hard-core:2} that $\xx_t\cdot\uu_i\rightarrow-\infty$ for all $i\not\in\hardcore{f}$. Also, let $\qq_t$ be the projection of $\xx_t$ onto the linear subspace $\rescperp{f}=\perpresf$.
Then by Proposition~\ref{pr:proj-mins-fullshad}, $\fullshad{f}(\xx_t)=\fullshad{f}(\qq_t) \rightarrow \min \fullshad{f}$, and furthermore, the $\qq_t$'s are all in a compact region of $\rescperp{f}$. Thus, if $i\in\hardcore{f}$, then $\uu_i\in\rescperp{f}$, by $\hardcore{f}$'s definition, implying $\xx_t\cdot\uu_i=\qq_t\cdot\uu_i$ for all $t$ (since $\xx_t-\qq_t$ is orthogonal to $\rescperp{f}$). Therefore, $\xx_t\cdot\uu_i$ remains always in some bounded interval of $\R$. If, in addition, each $\ell_i$ is strictly convex, then $\fullshad{f}$ has a unique minimizer $\qq$ in $\rescperp{f}$, by Theorem~\ref{thm:waspr:hard-core:4}; in this case, it can be further argued that $\qq_t\rightarrow\qq$, and $\xx_t\cdot\uu_i\rightarrow\qq\cdot\uu_i$ for $i\in\hardcore{f}$. \section{Continuity} \label{sec:continuity} As seen in earlier examples, the extension $\fext$ of a convex function $f:\Rn\rightarrow\Rext$ may or may not be continuous at a particular point, even if the function $f$ is continuous everywhere and well-behaved in other ways. Nevertheless, in this section, we will characterize exactly the set of points where $\fext$ is continuous in terms of properties of the original function $f$. We will also give precise necessary and sufficient conditions for $\fext$ to be continuous everywhere. We begin with some examples: \begin{example} \label{ex:recip-fcn-eg:cont} Consider first the function in \eqref{eqn:recip-fcn-eg}. In Example~\ref{ex:recip-fcn-eg}, we argued that if $\xbar=\limray{\ee_1}\plusl\beta\ee_2$ and $\beta\neq 0$, then on any sequence converging to $\xbar$, $f$ has the same limit $\fext(\xbar)$; in other words, $\fext$ is continuous at $\xbar$. However, $\fext$ is not continuous at $\xbar=\limray{\ee_1}$: Although, as already seen, $\fext(\xbar)=0$, on some sequences converging to $\xbar$, $f$ does not converge to $0$. For instance, for any constant $\lambda>0$, let $\xx_t=\trans{[t/\lambda, 1/t]}$.
This sequence converges to $\xbar$, but $f(\xx_t)=\lambda$ for all $t$. Thus, on sequences converging to $\xbar$, $f$ can converge to any nonnegative value. \end{example} \begin{example} \label{ex:x1sq-over-x2:cont} In $\R^2$, let $f$ be as in \eqref{eqn:curve-discont-finiteev-eg} as seen in Example~\ref{ex:x1sq-over-x2}. As already mentioned, this function is convex, closed, proper, finite everywhere and continuous everywhere. Nevertheless, $f$'s extension $\fext$ is not continuous, for instance, at $\xbar=\limray{\ee_2}\plusl\limray{\ee_1}$. For example, if $\xx_t=\trans{[t,t^{3}]}$, then $\xx_t\rightarrow\xbar$ and $f(\xx_t)=1/t\rightarrow 0$. If instead $\xx_t=\trans{[t^2,t^3]}$, then $\xx_t\rightarrow\xbar$ but now $f(\xx_t)={t}\rightarrow +\infty$. Thus, $\fext$ is not continuous at $\xbar$. Indeed, if $\xx_t=\trans{[t,t^2/\lambda]}$, for any constant $\lambda>0$, then $\xx_t\rightarrow\xbar$ but $f(\xx_t)=\lambda$ for all sufficiently large $t$. Thus, on a sequence converging to $\xbar$, $f$ can converge to any number in $[0,+\infty]$. \end{example} At an intuitive level, these examples suggest two different ways in which $\fext$ can be discontinuous. In the first example, the discontinuity seemed to arise as a result of reaching the boundary between where the function $f$ is finite (namely, all points with $x_1>0$ and $x_2>0$), and where it is infinite, in other words, the boundary of $\dom{f}$. On the other hand, in the second example, the function $f$ is finite everywhere so there is no such boundary to its effective domain. Instead, the discontinuity seemed to arise as a result of the variety of ways in which we can follow a ``curved'' trajectory reaching the same astral point at infinity, but on which the function takes very different values. We will soon see that our characterization of continuity exactly captures these two different kinds of discontinuity. 
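The limits computed in Example~\ref{ex:x1sq-over-x2:cont} can be checked numerically. The following is an illustrative sketch, not part of the formal development; it assumes, as the values computed in that example indicate, that $f$ coincides with $x_1^2/x_2$ along the three sequences considered (for all sufficiently large $t$):

```python
# Illustrative sketch (not the paper's code): along different trajectories
# converging to the same astral point limray(e2) +l limray(e1), the values
# x1^2 / x2 tend to different limits (0, +infinity, and an arbitrary
# lambda > 0), so the extension fext cannot be continuous at that point.

def ratio(x1: float, x2: float) -> float:
    """The quotient x1^2 / x2, defined here for x2 > 0."""
    return x1 ** 2 / x2

t = 1.0e6    # a large value of the sequence index
lam = 7.0    # an arbitrary positive constant lambda

v_cubic = ratio(t, t ** 3)        # x_t = (t, t^3):       value 1/t, tends to 0
v_steep = ratio(t ** 2, t ** 3)   # x_t = (t^2, t^3):     value t, tends to +infinity
v_tuned = ratio(t, t ** 2 / lam)  # x_t = (t, t^2/lam):   value lam for every t
```

The three trajectories differ only in how the second coordinate grows relative to the first, which is exactly the "curved trajectory" phenomenon described above.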
\subsection{Characterizing exactly where \texorpdfstring{$\fext$}{the extension of f} is continuous} \label{subsec:charact:cont} We turn now to the general case of a convex function $f:\Rn\rightarrow\Rext$. Note first that if $\fext(\xbar)=+\infty$, then $\fext$ is necessarily continuous at $\xbar$ since $f$ must converge to $+\infty$ on every sequence converging to $\xbar$. Therefore, we focus on understanding continuity at points $\xbar$ where $\fext(\xbar)< +\infty$, that is, astral points in $\dom{\fext}$. Let $\contsetf$ denote the set of all points $\xbar\in\extspace$ at which $\fext$ is continuous and $\fext(\xbar)<+\infty$. In this subsection, we will characterize exactly those points where $\fext$ is continuous; in other words, we will determine the set $\contsetf$ exactly. We will do this in two different ways. On the one hand, we will see that $\contsetf$ is equal to the interior of the effective domain of $\fext$, that is, $\contsetf=\intdom{\fext}$. This means that $\fext$ is continuous everywhere except at points that are in $\dom{\fext}$ but not in its interior. This provides a close analog to the continuity properties of standard convex functions on $\Rn$. In addition, we further characterize $\contsetf$ in terms of the original function $f$ itself. In particular, we will see that this set consists exactly of all points $\xbar\in\extspace$ of a specific form $\xbar=\VV\omm\plusl\qq$ where $\qq\in\Rn$ is in the interior of the effective domain of $f$, and all of the columns of $\VV$ are in $\resc{f}$ so that $\VV\omm\in\represc{f}$. Thus, \[ \contsetf = \paren{\represc{f}\cap\corezn} \plusl \intdom{f}. \] Furthermore, we will see that the value of $\fext$ at any point $\xbar$ of this form is equal to $\fV(\qq)$ where $\fV:\Rn\rightarrow\Rext$ is defined by \begin{equation} \label{eqn:fV-defn} \fV(\xx) = \inf_{\yy\in\colspace\VV} f(\yy+\xx) = \inf_{\bb\in\Rk} f(\VV\bb+\xx) \end{equation} for $\xx\in\Rn$.
Since $\fext$ is continuous at $\xbar$, this means that for every sequence $\seq{\xx_t}$ in $\Rn$ with $\xx_t\rightarrow\xbar$, we must also have $f(\xx_t)\rightarrow\fV(\qq)$. We prove all this in a series of theorems. The first regards convergence of sequences of a particular form, and will imply as a corollary that $\fext$ must be continuous at every point of the form just described. In what follows, for vectors $\bb,\,\cc\in\Rk$, we write $\bb\geq\cc$ to mean $b_i\geq c_i$ for every component $i\in\{1,\ldots,k\}$. \begin{theorem} \label{thm:recf-seq-cont} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Let $\VV=[\vv_1,\ldots,\vv_k]$ where $\vv_1,\ldots,\vv_k\in\resc{f}$, $k\geq 0$, and let $\qq=\qhat+\VV\chat$ where $\qhat\in\intdom{f}$ and $\chat\in\R^k$. For each $t$, let $\bb_t\in\R^k$ and $\qq_t\in\Rn$, and let $\xx_t = \VV \bb_t + \qq_t$. Assume $b_{t,i}\rightarrow+\infty$ for $i=1,\ldots,k$, and that $\qq_t\rightarrow\qq$. Then $f(\xx_t)\rightarrow \fV(\qq)=\fV(\qhat)$. \end{theorem} \begin{proof} Since $\qhat\in\intdom{f}$, there exists an open set $U\subseteq\Rn$ that includes $\zero$ and such that $\qhat+U\subseteq\dom{f}$. Let $h=\fV$. As a function of $\rpair{\bb}{\xx}$, $f(\VV\bb+\xx)$ is convex by \Cref{roc:thm5.7:fA}, so $h$ is convex by \Cref{roc:thm5.7:Af}. Note that \begin{equation} \label{eqn:h-prop:1} h(\VV\bb+\xx)=h(\xx)\leq f(\xx) \end{equation} for all $\xx\in\Rn$ and $\bb\in\Rk$. In particular, \[h(\qq)=h(\VV\chat+\qhat)=h(\qhat)\leq f(\qhat)<+\infty,\] since $\qhat\in\dom{f}$. \begin{claimpx} \label{cl:a4} $h$ is continuous at $\qq$. \end{claimpx} \begin{proofx} For all $\sS\in U$, \[ h(\qq+\sS) =h(\VV\chat + \qhat+\sS) =h(\qhat +\sS) \leq f(\qhat+\sS)<+\infty. \] The second equality and first inequality are both from \eqref{eqn:h-prop:1}. The last inequality is because $\qhat+U\subseteq\dom{f}$. Thus, $\qq+U\subseteq\dom{h}$, so $\qq\in\intdom{h}$.
Since $h$ is convex, this implies that $h$ is continuous at $\qq$ (by \Cref{pr:stand-cvx-cont}). \end{proofx} \begin{claimpx} \label{cl:a5} $\liminf f(\xx_t) \geq h(\qq)$. \end{claimpx} \begin{proofx} We have that \begin{eqnarray*} \liminf f(\xx_t) &=& \liminf f(\VV \bb_t + \qq_t) \\ &\geq& \liminf h(\qq_t) \\ &=& h(\qq). \end{eqnarray*} The inequality is by \eqref{eqn:h-prop:1}, and the second equality is by Claim~\ref{cl:a4} and since $\qq_t\rightarrow\qq$. \end{proofx} To prove the theorem, it then remains only to show that $\limsup f(\xx_t)\leq h(\qq)$. Let $\beta\in\R$ be such that $\beta>h(\qq)=h(\qhat)$. Then by $\fV$'s definition (Eq.~\ref{eqn:fV-defn}), there exists $\dd\in\Rk$ for which $f(\VV\dd+\qhat) < \beta$. Without loss of generality, we can assume that $\dd\geq\zero$; otherwise, we can replace $\dd$ by $\dd'\in\Rk$ where $d'_i=\max\{0,d_i\}$ for $i=1,\ldots,k$, so that \[ f(\VV \dd' + \qhat) = f(\VV \dd + \qhat + \VV (\dd' - \dd)) \leq f(\VV \dd + \qhat) < \beta, \] where the first inequality is because $\dd' - \dd \geq \zero$, implying $\VV (\dd' - \dd) \in \resc{f}$. (This is because, in general, for $\bb\in\Rk$, if $\bb\geq\zero$, then $\VV\bb=\sum_{i=1}^k b_i \vv_i$ is in $\resc{f}$, since $\resc{f}$ is a convex cone by Proposition~\ref{pr:resc-cone-basic-props}.) \begin{claimpx} \label{cl:thm:recf-seq-cont:1} $f$ is continuous at $\VV \dd + \qhat$. \end{claimpx} \begin{proofx} For all $\sS\in U$, \[ f(\VV \dd + \qhat + \sS) \leq f(\qhat + \sS) < +\infty. \] The first inequality is because $\dd\geq\zero$, implying $\VV\dd\in\resc{f}$, and the second is because $\qhat+U\subseteq \dom{f}$. Thus, $\VV\dd+\qhat\in\intdom{f}$, proving the claim (by \Cref{pr:stand-cvx-cont}). \end{proofx} From Claim~\ref{cl:thm:recf-seq-cont:1}, it follows that there exists $\delta>0$ such that if $\norm{\yy}<\delta$ then $f(\VV\dd + \qhat + \yy) < \beta$.
Since $b_{t,i}\rightarrow+\infty$ and $\qq_t\rightarrow\qq$, we must have that for all $t$ sufficiently large, $b_{t,i}\geq d_i - \hatc_i$ for $i=1,\ldots,k$ (that is, $\bb_t \geq \dd - \chat$), and also $\norm{\qq_t - \qq}<\delta$. When these conditions hold, we have \begin{eqnarray*} f(\xx_t) &=& f(\VV \bb_t + \qq_t) \\ &=& f\Parens{\VV \dd + \qhat + (\qq_t-\qq) + \VV(\bb_t - \dd + \chat)} \\ &\leq& f\Parens{\VV \dd + \qhat + (\qq_t-\qq)} \\ &<& \beta. \end{eqnarray*} The second equality is by algebra. The first inequality is because $\bb_t\geq\dd-\chat$, implying $\VV(\bb_t - \dd + \chat)\in\resc{f}$. The last inequality is because $\norm{\qq_t-\qq}<\delta$. Thus, $\limsup f(\xx_t)\leq \beta$. Since this holds for all $\beta>h(\qq)$, it follows, with Claim~\ref{cl:a5}, that \[ h(\qq) \leq \liminf f(\xx_t) \leq \limsup f(\xx_t)\leq h(\qq), \] proving the theorem. \end{proof} \begin{theorem} \label{thm:g:2} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Let $\VV=[\vv_1,\ldots,\vv_k]$ where $\vv_1,\ldots,\vv_k\in\resc{f}$, $k\geq 0$, and let $\qq\in\intdom{f}$. Let $\xbar=\VV\omm\plusl\qq$. Then $\fext(\xbar)=\fV(\qq)<+\infty$ and $\fext$ is continuous at $\xbar$. Thus, \[ \paren{\represc{f}\cap\corezn} \plusl \intdom{f} \subseteq \contsetf. \] \end{theorem} \begin{proof} For the proof, we can assume without loss of generality that $\vv_1,\ldots,\vv_k$ are linearly independent. Otherwise, by Proposition~\ref{prop:V:indep}, we can repeatedly eliminate columns of $\VV$ until a linearly independent subset is obtained, yielding some matrix $\VV'$ with $\VV\omm=\VV'\omm$. Further, since every eliminated column is in the span of the remaining columns, $\colspace\VV=\colspace\VV'$, implying $\fV(\qq)=\fVp(\qq)$. Also, let $\qq'\in\Rn$ be the projection of $\qq$ onto $(\colspace\VV)^\perp$, so that $\qq'=\qq+\VV\cc$ for some $\cc\in\Rk$, and $\qq'\perp \VV$. Then $\xbar=\VV\omm\plusl\qq'$ by the Projection Lemma (Lemma~\ref{lemma:proj}).
Let $\seq{\xx_t}$ be any sequence in $\Rn$ that converges to $\xbar$. Then for each $t$, by linear algebra, we can write $\xx_t = \VV \bb_t + \qq_t$ for some (unique) $\bb_t\in\Rk$ and $\qq_t\in\Rn$ with $\qq_t \perp \VV$. By Theorem~\ref{thm:seq-rep}, $b_{t,i}\rightarrow+\infty$ for $i=1,\ldots,k$ and $\qq_t\rightarrow\qq'$. Therefore, by Theorem~\ref{thm:recf-seq-cont}, $f(\xx_t)\rightarrow\fV(\qq')=\fV(\qq)$. Since this holds for every such sequence, this implies that $\fext(\xbar)=\fV(\qq)$ by definition of $\fext$ (Eq.~\ref{eq:e:7}), and furthermore that $\fext$ is continuous at $\xbar$ by Theorem~\ref{thm:ext-cont-f}(\ref{thm:ext-cont-f:a}). In addition, $\fV(\qq)\leq f(\qq)<+\infty$ since $\qq\in\dom{f}$, so $\xbar\in\contsetf$. \end{proof} Next, we show that if $\xbar$ is in $\dom{\fext}$, and if $\fext$ is continuous at $\xbar$, then actually $\xbar$ must be in the interior of $\dom{\fext}$. \begin{theorem} Let $f:\Rn\rightarrow\Rext$ be convex. Suppose $\fext$ is continuous at some point $\xbar\in\extspace$, and that $\fext(\xbar)<+\infty$. Then $\xbar\in\intdom{\fext}$. In other words, \[ \contsetf \subseteq \intdom{\fext}. \] \end{theorem} \begin{proof} Suppose, by way of contradiction, that $\xbar\not\in\intdom{\fext}$. Let $\countset{B}$ be a nested countable neighborhood base for $\xbar$ (which exists by \Cref{thm:first:local}). Since we have assumed $\xbar\not\in\intdom{\fext}$, no neighborhood $B_t$ can be included in $\dom{\fext}$. Therefore, for each $t$, we can choose a point $\xbar_t\in B_t \setminus (\dom{\fext})$. Then $\xbar_t\rightarrow\xbar$, by \Cref{cor:first:local:conv}, so $\fext(\xbar_t)\rightarrow \fext(\xbar)$ by continuity of $\fext$ at $\xbar$. But this is a contradiction since $\fext(\xbar)<+\infty$, while $\fext(\xbar_t)=+\infty$ for all $t$. \end{proof} Finally, we show that every point in $\intdom{\fext}$ must have the form given in Theorem~\ref{thm:g:2}. 
\begin{theorem} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Then \[ \intdom{\fext} \subseteq \paren{\represc{f}\cap\corezn} \plusl \intdom{f}. \] That is, if $\xbar\in\intdom{\fext}$, then $\xbar=\ebar\plusl\qq$ for some $\ebar\in\represc{f}\cap\corezn$ and some $\qq\in\intdom{f}$. \end{theorem} \begin{proof} Let $\xbar\in\intdom{\fext}$. By Corollary~\ref{cor:h:1}, we can write $\xbar=\ebar\plusl\qq'$, where $\ebar=[\vv_1,\ldots,\vv_k]\omm$, for some $\vv_1,\ldots,\vv_k\in\Rn$, $k\geq 0$, and some $\qq'\in\Rn$. Without loss of generality (by \Cref{prop:V:indep}), we assume $\vv_1,\ldots,\vv_k$ are linearly independent. Since $\xbar\in\intdom{\fext}$, there exists a neighborhood $U$ of $\xbar$ that is included in $\dom{\fext}$. We prove the theorem in two parts: \setcounter{claimp}{0} \begin{claimpx} There exists some $\qq\in\intdom{f}$ for which $\xbar=\ebar\plusl\qq$. \end{claimpx} \begin{proofx} We construct a sequence converging to $\xbar$ by first letting \[ \dd_t = \sum_{i=1}^{k} t^{k+1-i} \vv_i, \] and then letting $\xx_t=\dd_t+\qq'$, for $t=1,2,\ldots$. Then $\dd_t\rightarrow\ebar$ and $\xx_t\rightarrow\xbar$ (by Theorem~\ref{thm:i:seq-rep}). We claim there must exist some $t_0\geq 1$ and some $\epsilon>0$ for which $\ball(\xx_{t_0},\epsilon)\subseteq \dom{f}$ (where $\ball(\cdot,\cdot)$ denotes an open ball in $\Rn$, as in Eq.~\ref{eqn:open-ball-defn}). Suppose this claim is false. Then for each $t\geq 1$, there must exist a point $\xx'_t$ that is in $\ball(\xx_t,1/t)$ but not $\dom{f}$. Further, $\xx'_t=\xx_t + (\xx'_t-\xx_t)$ must converge to $\xbar$ by \Cref{pr:i:7}(\ref{pr:i:7g}) (since $\xx_t\rightarrow\xbar$ and $\xx'_t-\xx_t\rightarrow\zero$). Since $U$ is a neighborhood of $\xbar$, it follows that, for $t$ sufficiently large, $\xx'_t\in U\subseteq\dom{\fext}$, implying $\fext(\xx'_t)=f(\xx'_t)<+\infty$ (by Proposition~\ref{pr:h:1}\ref{pr:h:1a}, since $f$ is lower semicontinuous). 
But this is a contradiction since $\xx'_t\not\in\dom{f}$ for all $t$. So let $t_0\geq 1$ and $\epsilon>0$ be such that $\ball(\xx_{t_0},\epsilon)\subseteq \dom{f}$, and let $\qq=\xx_{t_0} = \dd_{t_0}+\qq'$. Then $\qq\in\intdom{f}$. And $\xbar=\ebar\plusl\qq$, by Proposition~\ref{pr:g:1}, since $\qq-\qq'=\dd_{t_0}$ is in the span of $\vv_1,\ldots,\vv_k$. \end{proofx} \begin{claimpx} $\ebar\in\represc{f}$. \end{claimpx} \begin{proofx} Let $\sbar_j = \limrays{\vv_1,\ldots,\vv_j}$ for $j=0,1,\ldots,k$; in particular, $\ebar=\sbar_k$. We prove by induction on $j=0,1,\ldots,k$ that $\sbar_j\in\represc{f}$, that is, that $\sbar_j=\limrays{\ww_1,\ldots,\ww_j}$ for some vectors $\ww_1,\ldots,\ww_j\in\resc{f}$. When $j=k$, the claim is proved. The base case, when $j=0$, holds vacuously. For the inductive step, let $j\geq 1$, and assume $\sbar_{j-1}=\limrays{\ww_1,\ldots,\ww_{j-1}}$ for some $\ww_1,\ldots,\ww_{j-1}\in\resc{f}$. Let \[ \ybar = \sbar_j = \sbar_{j-1}\plusl \limray{\vv_j} = \limrays{\ww_1,\ldots,\ww_{j-1},\vv_j}, \] and let $\zbar=\limrays{\vv_{j+1},\ldots,\vv_k}\plusl\qq'$ so that $\xbar=\ybar\plusl\zbar$. We construct several sequences converging to these points: for each $t$, let \begin{align*} \yy_t &= \sum_{i=1}^{j-1} t^{k+1-i} \ww_i + t^{k+1-j} \vv_j, \\ \zz_t &= \sum_{i=j+1}^{k} t^{k+1-i} \vv_i + \qq', \\ \xx_t &= \yy_t + \zz_t. \end{align*} Also, let $\ybar_t=\limray{\yy_t}$, and $\xbar_t=\ybar_t\plusl\zz_t$. Clearly, $\xx_t\rightarrow\xbar$ and $\yy_t\rightarrow\ybar$ (by Theorem~\ref{thm:i:seq-rep}). We claim furthermore that $\xbar_t\rightarrow\xbar$. To see this, let $\uu$ be any point in $\Rn$; we aim to show that $\xbar_t\cdot\uu\rightarrow\xbar\cdot\uu$. First, suppose $\ybar\cdot\uu=+\infty$. Then $\xbar\cdot\uu=+\infty$. Since $\yy_t\cdot\uu\rightarrow\ybar\cdot\uu$, we must have $\yy_t\cdot\uu>0$ for $t$ sufficiently large, implying, for all such $t$, that $\ybar_t\cdot\uu=+\infty$ and so also $\xbar_t\cdot\uu=+\infty$. 
Thus, $\xbar_t\cdot\uu\rightarrow\xbar\cdot\uu$ in this case. The case $\ybar\cdot\uu=-\infty$ is handled similarly (or we can apply the above argument to $-\uu$). The only remaining case is that $\ybar\cdot\uu=0$ (by Proposition~\ref{pr:icon-equiv}\ref{pr:icon-equiv:a},\ref{pr:icon-equiv:b}, since $\ybar$ is an icon). This is only possible if $\vv_j\cdot\uu=0$ and $\ww_i\cdot\uu=0$ for $i=1,\ldots,j-1$; these imply, for all $t$, that $\yy_t\cdot\uu=0$ and so $\ybar_t\cdot\uu=0$. Thus, \[ \xbar_t\cdot\uu = \ybar_t\cdot\uu \plusl \zz_t\cdot\uu = \zz_t\cdot\uu = \yy_t\cdot\uu + \zz_t\cdot\uu = \xx_t\cdot\uu \rightarrow \xbar\cdot\uu. \] We conclude $\xbar_t\rightarrow\xbar$ (by Theorem~\ref{thm:i:1}\ref{thm:i:1c}). Therefore, for all sufficiently large $t$, $\xbar_t$ must be in the neighborhood $U\subseteq\dom{\fext}$. For the rest of the proof, let $t$ be any such index so that $\fext(\xbar_{t})<+\infty$. Then it must also be the case that $\yy_t\in\resc{f}$, since otherwise, we would have \[ \fext(\xbar_t) = \fext(\ybar_t\plusl\zz_t) = \fext(\limray{\yy_t}\plusl\zz_t) = +\infty, \] with the last equality from Theorem~\ref{thm:i:4}. Furthermore, by Proposition~\ref{pr:g:1}, \[ \sbar_j = \ybar = \limrays{\ww_1,\ldots,\ww_{j-1},\yy_t} \] since, by $\yy_t$'s definition, $\yy_t - t^{k+1-j}\vv_{j}$ is a linear combination of $\ww_1,\ldots,\ww_{j-1}$. Setting $\ww_j=\yy_t\in\resc{f}$ now completes the induction. \end{proofx} Combining the two claims proves the theorem. \end{proof} Together, the last three theorems fully characterize exactly the points where $\fext$ is continuous (and not $+\infty$): \begin{corollary} \label{cor:cont-gen-char} \MarkYes Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Then for all $\xbar\in\extspace$, the following are equivalent: \begin{itemize} \item $\fext(\xbar)<+\infty$ and $\fext$ is continuous at $\xbar$. \item $\xbar\in\intdom{\fext}$. 
\item $\xbar=\limrays{\vv_1,\ldots,\vv_k}\plusl\qq$ for some $\qq\in\intdom{f}$ and some $\vv_1,\ldots,\vv_k\in\resc{f}$, $k\geq 0$. \end{itemize} That is, \[ \contsetf = \intdom{\fext} = \paren{\represc{f}\cap\corezn} \plusl \intdom{f} = \represc{f} \plusl \intdom{f}. \] \end{corollary} \proof All parts of this corollary were proved in the preceding theorems, except for the final equality, for which it suffices to show \[ \represc{f} \plusl \intdom{f} \subseteq \paren{\represc{f}\cap\corezn} \plusl \intdom{f} \] (since the reverse inclusion is immediate). Suppose $\xbar=\ybar\plusl\zz$ where $\ybar\in\represc{f}$ and $\zz\in\intdom{f}$. Then $\ybar=\ebar\plusl\qq$ for some $\ebar\in\represc{f}\cap\corezn$ and $\qq\in\resc{f}$ (since $\ybar$ can be represented using only vectors in $\resc{f}$). Since $\zz\in\intdom{f}$, there exists an open set $U\subseteq\Rn$ including $\zero$ such that $\zz+U\subseteq\dom{f}$. Since $\qq\in\resc{f}$, for all $\sS\in U$, $f(\qq+\zz+\sS)\leq f(\zz+\sS)<+\infty$, so $\qq+\zz+U\subseteq\dom{f}$, implying $\qq+\zz\in\intdom{f}$. Thus, $\xbar=\ebar\plusl(\qq+\zz)\in \paren{\represc{f}\cap\corezn} \plusl \intdom{f}$, completing the proof. \qed Let $\xbar\in\extspace$, and suppose $\fext(\xbar)<+\infty$. We can write $\xbar=\ebar\plusl\qq$ for some $\ebar\in\corezn$ and $\qq\in\Rn$; furthermore, $\ebar\in\aresconef$ by Corollary~\ref{cor:a:4}. Corollary~\ref{cor:cont-gen-char} makes explicit the precise conditions under which $\fext$ is or is not continuous at $\xbar$: namely, $\fext$ is continuous at $\xbar$ if $\ebar\in\represc{f}$ and also $\qq$ can be chosen to be in $\intdom{f}$. Otherwise, if $\ebar\not\in\represc{f}$ or if there is no way of choosing $\qq$ so that $\xbar=\ebar\plusl\qq$ still holds and also $\qq\in\intdom{f}$, then $\fext$ is discontinuous at $\xbar$.
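As a simple illustration of this characterization (using a hypothetical function, rather than one of this section's running examples), consider $f(\xx)=\me^{-x_1}+x_2^2$ for $\xx\in\R^2$. It can be checked that $\resc{f}=\{ \beta \ee_1 : \beta \geq 0 \}$, so the only icons in $\represc{f}$ are $\zero$ and $\limray{\ee_1}$; also, $\intdom{f}=\R^2$ since $f$ is finite everywhere. Corollary~\ref{cor:cont-gen-char} then gives
\[
\contsetf = \{\zero,\limray{\ee_1}\} \plusl \R^2.
\]
In particular, $\fext$ is continuous at every point of the form $\limray{\ee_1}\plusl\qq$ with $\qq\in\R^2$, where, by Theorem~\ref{thm:g:2} (taking $\VV=[\ee_1]$), its value is $\fext(\limray{\ee_1}\plusl\qq)=\inf_{b\in\R} f(\qq+b\ee_1)=q_2^2$.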
These latter two conditions for there to be a discontinuity at a point $\xbar$ can be seen in the two examples given at the beginning of this section: \begin{example} We saw that the function in Example~\ref{ex:recip-fcn-eg:cont} is discontinuous at the point $\xbar=\limray{\ee_1}$. We mentioned earlier that $\resc{f}=\Rpos^2$ for this function, so $\limray{\ee_1}\in\represc{f}$, which means the first condition for continuity is satisfied. However, we can only write $\xbar=\limray{\ee_1}\plusl\qq$ if $\qq=\beta\ee_1$ for some $\beta\in\R$. Since no such point is in the effective domain of $f$, let alone its interior, a discontinuity results at $\xbar$. \end{example} \begin{example} For the function in Example~\ref{ex:x1sq-over-x2:cont}, we saw that $\fext$ is not continuous at the point $\xbar=\ebar\plusl\zero$ where $\ebar=\limray{\ee_2}\plusl\limray{\ee_1}$. In this case, the function $f$ is finite everywhere, so all points in $\R^2$, including the origin, are in the interior of $\dom{f}=\R^2$, thereby satisfying the second condition for continuity. However, for this function, it can be checked that the standard recession cone is equal to $\resc{f}=\{ \beta \ee_2 : \beta \geq 0 \}$, which implies that the only icons in $\represc{f}$ are $\zero$ and $\limray{\ee_2}$. In particular, this means $\ebar\not\in\represc{f}$, yielding the discontinuity at $\xbar$. \end{example} Theorem~\ref{thm:recf-seq-cont} implies broader conditions for continuity at a point than are fully exploited by Theorem~\ref{thm:g:2}. The next theorem captures more of that generality using sequential sums. Theorem~\ref{thm:g:2} then follows as a special case since, in the notation that follows, $\VV\omm\plusl\qq$ is always included in the sequential sum appearing in \eqref{eq:thm:seqsum-cont:1}. \begin{theorem} \label{thm:seqsum-cont} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Let $\vv_1,\ldots,\vv_k\in\resc{f}$, let $\VV=[\vv_1,\ldots,\vv_k]$, and let $\qq\in\intdom{f}$. 
Let \begin{equation} \label{eq:thm:seqsum-cont:1} \xbar \in \limray{\vv_1}\seqsum\dotsb\seqsum\limray{\vv_k}\seqsum\qq. \end{equation} Then $\fext(\xbar)=\fV(\qq)<+\infty$ and $\fext$ is continuous at $\xbar$. \end{theorem} \begin{proof} We first show there exist sequences with the following convergence properties: \begin{claimpx} \label{cl:thm:seqsum-cont:1} There exist sequences $\seq{\bb_t}$ in $\Rk$ and $\seq{\qq_t}$ in $\Rn$ such that $b_{t,i}\rightarrow+\infty$ for $i=1,\ldots,k$, $\qq_t\rightarrow\qq$, and $\VV \bb_t + \qq_t \rightarrow\xbar$. \end{claimpx} \begin{proofx} By \Cref{prop:seqsum-multi}(\ref{prop:seqsum-multi:equiv}), there exist a sequence $\seq{\rr_t}$ in $\Rn$ and sequences $\seq{\ww_{it}}$ in $\Rn$, for $i=1,\ldots,k$, such that: $\rr_t\rightarrow\qq$; $\ww_{it}\rightarrow\limray{\vv_i}$; and $\xx_t\rightarrow\xbar$, where $\xx_t = \sum_{i=1}^k \ww_{it} + \rr_t$. By linear algebra, we can write $\ww_{it}=b_{t,i} \vv_i + \sS_{it}$ for some $b_{t,i}\in\R$ and $\sS_{it}\in\Rn$ with $\vv_i\perp \sS_{it}$. Then Theorem~\ref{thm:seq-rep} (applied with $\VV$, $\qq$ and $\xx_t$, as they appear in that theorem, set to $[\vv_i]$, $\zero$ and $\ww_{it}$) yields that $b_{t,i}\rightarrow+\infty$ and $\sS_{it}\rightarrow \zero$. Let $\qq_t=\sum_{i=1}^k \sS_{it} + \rr_t$. Then the foregoing implies that $\qq_t\rightarrow\qq$, by continuity of addition. Further, by algebra, $ \xx_t = \VV \bb_t + \qq_t$ (where, as usual, $\bb_t=\trans{[b_{t,1},\ldots,b_{t,k}]}$). Since $\xx_t\rightarrow\xbar$, this completes the proof. \end{proofx} Let $\seq{\bb_t}$ and $\seq{\qq_t}$ be as in Claim~\ref{cl:thm:seqsum-cont:1}, and let $ \xx_t = \VV \bb_t + \qq_t$. Since $b_{t,i}\rightarrow+\infty$ for all $i$, we can discard all elements $t$ with $\bb_t\not\geq\zero$, of which there can be at most finitely many. We therefore assume henceforth that $\bb_t\geq\zero$ for all $t$. Let $\zbar=\xbar\plusl (-\qq)$.
Then $\VV\bb_t=\xx_t-\qq_t\rightarrow\zbar$ (by Proposition~\ref{pr:sum-seq-commuting-limits}). Let $K=\cone\{\vv_1,\ldots,\vv_k\}$, which is a finitely-generated convex cone in $\Rn$. Note that $\VV\bb_t\in K$, for all $t$. Therefore, $\zbar$ is in $\Kbar$, and so is in $\repcl{K}$ as well by Lemma~\ref{lem:conv-lmset-char:a} (or Theorem~\ref{thm:repcl-polyhedral-cone}). Since $\vv_1,\ldots,\vv_k$ are in the convex cone $\resc{f}$, we also have that $K\subseteq\resc{f}$, implying $\zbar\in\repcl{K}\subseteq\represc{f}$. Thus, $\xbar=\zbar\plusl\qq\in\represc{f}\plusl\intdom{f}$, so $\fext$ is continuous at $\xbar$ with $\fext(\xbar)<+\infty$ by Corollary~\ref{cor:cont-gen-char}. Furthermore, by Theorem~\ref{thm:recf-seq-cont}, $f(\xx_t)\rightarrow \fV(\qq)$. Since $\fext$ is continuous at $\xbar$ and since $\xx_t\rightarrow\xbar$, it follows that $\fext(\xbar)=\fV(\qq)$. \end{proof} \subsection{Conditions for continuity} \label{subsec:cond-for-cont} We next explore general conditions for continuity, especially for $\fext$ to be continuous everywhere, that is, at all points in $\extspace$. We begin with the more direct implications of the characterization given in Corollary~\ref{cor:cont-gen-char}. As noted above, if $\fext(\xbar)<+\infty$, where $\xbar=\ebar\plusl\qq$, $\ebar\in\corezn$ and $\qq\in\Rn$, then $\ebar\in\aresconef$ (by Corollary~\ref{cor:a:4}). And if $\ebar\not\in\represc{f}$, then $\fext$ cannot be continuous at $\xbar$ (by Corollary~\ref{cor:cont-gen-char}). Thus, for $\fext$ to be continuous everywhere, it is necessary that $(\aresconef)\cap\corezn\subseteq\represc{f}$. Actually, this latter condition is equivalent to $\aresconef$ being equal to $\represc{f}$, as we show in the next proposition. 
When this simpler condition holds, that $\aresconef=\represc{f}$, we say that $f$ is recessive complete, as was discussed previously in Section~\ref{sec:arescone} and also in Section~\ref{subsec:rank-one-minimizers} where the condition was shown to imply the existence of canonical minimizers with astral rank one. \begin{proposition} \label{pr:r-equals-c-equiv-forms} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Then the following are equivalent: \begin{letter-compact} \item \label{pr:r-equals-c-equiv-forms:a} $(\aresconef) \cap \corezn = \represc{f} \cap \corezn$. \item \label{pr:r-equals-c-equiv-forms:b} $(\aresconef) \cap \corezn \subseteq \represc{f}$. \item \label{pr:r-equals-c-equiv-forms:c} $\aresconef = \represc{f}$. (That is, $f$ is recessive complete.) \end{letter-compact} \end{proposition} \begin{proof} That (\ref{pr:r-equals-c-equiv-forms:a}) $\Rightarrow$ (\ref{pr:r-equals-c-equiv-forms:b}), and (\ref{pr:r-equals-c-equiv-forms:c}) $\Rightarrow$ (\ref{pr:r-equals-c-equiv-forms:a}) are both immediate. To see (\ref{pr:r-equals-c-equiv-forms:b}) $\Rightarrow$ (\ref{pr:r-equals-c-equiv-forms:c}), suppose $(\aresconef) \cap \corezn \subseteq \represc{f}$. Then \[ \aresconef = \conv\paren{(\aresconef) \cap \corezn} \subseteq \represc{f} \subseteq \aresconef. \] The equality follows from Theorem~\refequiv{thm:ast-cvx-cone-equiv}{thm:ast-cvx-cone-equiv:a}{thm:ast-cvx-cone-equiv:c} since $\aresconef$ is a convex astral cone (Corollary~\ref{cor:res-fbar-closed}). The first inclusion is by our assumption and since $\represc{f}$ is convex by Proposition~\ref{pr:repres-in-arescone} (and also Proposition~\ref{pr:conhull-prop}\ref{pr:conhull-prop:aa}). And the second inclusion is by Proposition~\ref{pr:repres-in-arescone}. \end{proof} Expanding on the discussion above, we prove several direct consequences of the characterization given in Corollary~\ref{cor:cont-gen-char}. 
First, if $\ebar$ is an icon in $(\aresconef)\cap\corezn$ but not in $\represc{f}$, and if $\xbar$ is any point in $\extspace$, then the point $\zbar=\ebar\plusl\xbar$ cannot be in $\contsetf$; in other words, it is not possible both that $\fext(\zbar)<+\infty$ and that $\fext$ is continuous at $\zbar$: \begin{theorem} \label{thm:d5} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Suppose $\ebar\in ((\aresconef)\cap\corezn) \setminus\represc{f}$. Then for all $\xbar\in\extspace$, either $\fext(\ebar\plusl\xbar)=\fext(\xbar)=+\infty$ or $\fext$ is not continuous at $\ebar\plusl\xbar$. \end{theorem} \proof Let $\xbar\in\extspace$. If $\fext(\ebar\plusl\xbar)=+\infty$ then $\fext(\xbar)=+\infty$ since $\ebar\in\aresconef$. So assume $\fext(\ebar\plusl\xbar)<+\infty$ and suppose, by way of contradiction, that $\fext$ is continuous at $\ebar\plusl\xbar$. Then by Corollary~\ref{cor:cont-gen-char}, there exists $\dbar\in\represc{f}\cap\corezn$ and $\qq\in\intdom{f}$ such that $\ebar\plusl\xbar=\dbar\plusl\qq$. This implies $\dbar=\ebar\plusl\xbar\plusl (-\qq)$, and so $\ebar\in\lb{\zero}{\dbar}$ by Corollary~\ref{cor:d-in-lb-0-dplusx}. Since both $\zero$ and $\dbar$ are in $\represc{f}$, and since $\represc{f}$ is convex (by Proposition~\ref{pr:repres-in-arescone}), it follows that $\ebar\in\represc{f}$, contradicting our initial assumption. \qed We previously remarked that if $\fext$ is continuous everywhere then $f$ is recessive complete. Actually, we can make a somewhat stronger statement, namely, that if $\fext$ is continuous \emph{at all of its minimizers}, then $f$ is recessive complete. Clearly, this implies the former assertion. \begin{theorem} \label{thm:cont-at-min-implies-ares} \MarkMaybe Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. If $\fext$ is continuous at all of its minimizers, then $f$ is recessive complete. \end{theorem} \begin{proof} We assume $f\not\equiv +\infty$ since otherwise $\aresconef=\represc{f}=\extspace$. 
We prove the theorem in the contrapositive. Suppose $f$ is not recessive complete. Then by Proposition~\ref{pr:r-equals-c-equiv-forms}, there exists a point $\ebar \in ((\aresconef) \cap \corezn) \setminus \represc{f}$. Let $\ybar\in\extspace$ be any point that minimizes $\fext$, implying $\fext(\ybar)<+\infty$. Furthermore, $\fext(\ebar\plusl\ybar)\leq\fext(\ybar)$ by Proposition~\ref{pr:arescone-def-ez-cons}(\ref{pr:arescone-def-ez-cons:b}) since $\ebar\in\aresconef$, so $\ebar\plusl\ybar$ must also minimize $\fext$. It now follows immediately from Theorem~\ref{thm:d5} that $\fext$ is not continuous at $\ebar\plusl\ybar$, which is one of its minimizers. \end{proof} So recessive completeness is a necessary condition for $\fext$ to be continuous everywhere, or even for it to be continuous at its minimizers. When $f$ is convex and finite everywhere, these conditions all turn out to be equivalent. In other words, in this case, $\fext$ is continuous everywhere if and only if $f$ is recessive complete. Furthermore, and quite remarkably, if $\fext$ is continuous at all its minimizers, then it must actually be continuous everywhere. Equivalently, if $\fext$ is discontinuous anywhere, then it must be discontinuous at one or more of its minimizers (as was the case for the function in Example~\ref{ex:x1sq-over-x2:cont}, which is finite everywhere). \begin{theorem} \label{thm:cont-conds-finiteev} \MarkYes Let $f:\Rn\rightarrow \R$ be convex. Then the following are equivalent: \begin{letter-compact} \item \label{thm:cont-conds-finiteev:a} $\fext$ is continuous everywhere. \item \label{thm:cont-conds-finiteev:b} $\fext$ is continuous at all its minimizers. \item \label{thm:cont-conds-finiteev:c} $f$ is recessive complete. \end{letter-compact} \end{theorem} \proof Since $f$ is convex and finite everywhere, it is also continuous everywhere (\Cref{pr:stand-cvx-cont}). That (\ref{thm:cont-conds-finiteev:a}) $\Rightarrow$ (\ref{thm:cont-conds-finiteev:b}) is immediate. 
That (\ref{thm:cont-conds-finiteev:b}) $\Rightarrow$ (\ref{thm:cont-conds-finiteev:c}) follows immediately from Theorem~\ref{thm:cont-at-min-implies-ares}. To see (\ref{thm:cont-conds-finiteev:c}) $\Rightarrow$ (\ref{thm:cont-conds-finiteev:a}), suppose $\aresconef=\represc{f}$. Let $\xbar\in\extspace$, which we can write $\xbar=\ebar\plusl\qq$ for some $\ebar\in\corezn$ and $\qq\in\Rn$. If $\fext(\xbar)=+\infty$, then $f$ converges to $+\infty$ on every sequence converging to $\xbar$, so $\fext$ is continuous at $\xbar$. Otherwise, $\fext(\xbar)<+\infty$ so $\ebar\in\aresconef=\represc{f}$ by Corollary~\ref{cor:a:4}. Since $f$ is finite everywhere, $\dom{f}=\Rn$ so $\qq\in\intdom{f}$. Therefore, $\fext$ is continuous at $\xbar$ by Corollary~\ref{cor:cont-gen-char}. \qed If $f$ is convex but not finite everywhere, then it is possible that $\fext$ is continuous at all its minimizers, but not continuous elsewhere. This is possible even if the function $f$ itself is continuous everywhere, as seen in the next example: \begin{example} Consider the following variation on the function given in Example~\ref{ex:recip-fcn-eg}: For $\xx\in\R^2$, \begin{equation} \label{eqn:recip-fcn-eg-mod} f(\xx) = f(x_1, x_2) = \begin{cases} \displaystyle{\frac{1}{x_1 x_2}} + \me^{- x_1} + \me^{- x_2} & \text{if $x_1>0$ and $x_2>0$,} \\[1em] +\infty & \text{otherwise.} \\ \end{cases} \end{equation} It can be checked that $f$ converges to zero, and is thereby minimized, on just those sequences $\seq{\xx_t}$ in $\R^2$ for which $\xx_t\cdot\ee_1\rightarrow+\infty$ and $\xx_t\cdot\ee_2\rightarrow+\infty$. Thus, $\fext$ is minimized, with $\fext(\xbar)=0$, exactly at those points $\xbar\in\extspac{2}$ for which $\xbar\cdot\ee_1=\xbar\cdot\ee_2=+\infty$. Moreover, $\fext$ is continuous at all such points. 
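For instance, along the diagonal sequence $\xx_t=(t,t)$, we have $f(\xx_t)=1/t^2+2\me^{-t}\rightarrow 0$. By contrast, for any fixed $c>0$, the sequence $\xx_t=(t,c/t)$ converges to $\limray{\ee_1}$, and
\[
f(\xx_t) = \frac{1}{c} + \me^{-t} + \me^{-c/t} \rightarrow 1+\frac{1}{c},
\]
a limit that is bounded away from zero and that depends on the particular sequence chosen.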
On the other hand, $\fext$ is not continuous at $\limray{\ee_1}$ by arguments similar to those given following \eqref{eqn:recip-fcn-eg}, but this point is not a minimizer since $\fext(\limray{\ee_1})=1$. \end{example} \subsection{Dual characterization of continuity} \label{subsec:charact:cont:duality} As we show next, the recessive completeness of a convex function $f:\Rn\rightarrow\Rext$, which is so central to the continuity of its extension $\fext$, can be further characterized in terms of very simple and general dual properties, specifically regarding its barrier cone, $\slopes{f}$ (introduced in Section~\ref{sec:slopes}). We will see that $f$ is recessive complete if and only if $\slopes{f}$ is polyhedral, that is, if and only if it is finitely generated. Thus, this geometric property of the barrier cone $\slopes{f}$ entirely determines recessive completeness, and thereby, at least in some cases (such as when $f$ is finite everywhere), entirely determines if its extension $\fext$ is continuous everywhere. Indeed, this characterization follows directly from what was proved more generally for convex astral cones in Chapter~\ref{sec:cones}, applied here to the standard and astral recession cones, which we have seen can be expressed, respectively, as the standard and astral polars of the barrier cone $\slopes{f}$. As a first such application, we can characterize exactly when $\represc{f}$ is closed and when $\rescbar{f}=\aresconef$: \begin{theorem} \label{thm:dual-char-specs} Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Then: \begin{letter-compact} \item \label{thm:dual-char-specs:a} $\represc{f}=\rescbar{f}$ if and only if $\resc{f}$ is polyhedral. \item \label{thm:dual-char-specs:b} $\rescbar{f}=\aresconef$ if and only if $\slopes{f}$ is closed in $\Rn$.
\end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:dual-char-specs:a}):} This is immediate from Theorem~\ref{thm:repcl-polyhedral-cone} applied to $\resc{f}$. \pfpart{Part~(\ref{thm:dual-char-specs:b}):} If $f\equiv+\infty$ then $\slopes{f}=\Rn$ which is closed in $\Rn$, and also $\rescbar{f}=\aresconef=\extspace$, proving the claim in this case. Otherwise, if $f\not\equiv+\infty$, then this part follows from Theorem~\ref{thm:apol-of-closure}(\ref{thm:apol-of-closure:b}) applied to $\slopes{f}$ (which is a convex cone, by \Cref{thm:slopes-equiv}), using $\aresconef=\apol{(\slopes{f})}$ and $\resc{f}=\polar{(\slopes{f})}$ by Corollaries~\ref{cor:ares-is-apolslopes} and~\ref{cor:rescpol-is-slopes}. \qedhere \end{proof-parts} \end{proof} Combining the two parts of this theorem, we can now prove our characterization of recessive completeness: \begin{theorem} \label{thm:dual-cond-char} \MarkYes Let $f:\Rn\rightarrow\Rext$ be convex and lower semicontinuous. Then the following are equivalent: \begin{letter-compact} \item \label{thm:dual-cond-char:a} $f$ is recessive complete. \item \label{thm:dual-cond-char:b} $\resc{f}$ is polyhedral and also $\slopes{f}$ is closed in $\Rn$. \item \label{thm:dual-cond-char:c} $\slopes{f}$ is polyhedral. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:dual-cond-char:a}) $\Leftrightarrow$ (\ref{thm:dual-cond-char:b}): } By Proposition~\ref{pr:repres-in-arescone}, $\aresconef=\represc{f}$ if and only if $\represc{f}=\rescbar{f}$ and $\rescbar{f}=\aresconef$. Therefore, Theorem~\ref{thm:dual-char-specs} immediately implies that $\aresconef=\represc{f}$ if and only if the conditions in (\ref{thm:dual-cond-char:b}) hold. If $f\equiv+\infty$ then $\resc{f}=\slopes{f}=\Rn$ which is closed (in $\Rn$) and polyhedral, proving the equivalence of~(\ref{thm:dual-cond-char:b}) and~(\ref{thm:dual-cond-char:c}) in this case. 
We therefore assume henceforth that $f\not\equiv+\infty$: \pfpart{(\ref{thm:dual-cond-char:b}) $\Rightarrow$ (\ref{thm:dual-cond-char:c}): } Suppose $\resc{f}$ is polyhedral and that $\slopes{f}$ is closed in $\Rn$. Then $\rescpol{f}$, the polar of $\resc{f}$, is also polyhedral (\Cref{roc:cor19.2.2}), and furthermore, $ \rescpol{f} = \cl(\slopes{f}) = \slopes{f} $ by Corollary~\ref{cor:rescpol-is-slopes}, and since $\slopes{f}$ is closed in $\Rn$. \pfpart{(\ref{thm:dual-cond-char:c}) $\Rightarrow$ (\ref{thm:dual-cond-char:b}): } Suppose $\slopes{f}$ is polyhedral. Then it is closed in $\Rn$ (\Cref{roc:thm19.1}), and its polar, $\polar{(\slopes{f})}=\resc{f}$, must also be polyhedral (again using Corollary~\ref{cor:rescpol-is-slopes} and \Cref{roc:cor19.2.2}). \end{proof-parts} \end{proof} The condition that $\resc{f}$ is polyhedral (which implies that its polar $\rescpol{f}=\cl(\slopes{f})$ must also be polyhedral) is not in itself sufficient for $f$ to be recessive complete, as shown by the next example: \begin{example} As we have already seen, the recession cone $\resc{f}$ of the function $f$ in Examples~\ref{ex:x1sq-over-x2} and~\ref{ex:x1sq-over-x2:cont} is exactly the cone generated by the singleton $\{\ee_2\}$, and thus is polyhedral. Nevertheless, we have seen that this function is not continuous everywhere, implying $f$ is not recessive complete in this case, by Theorem~\ref{thm:cont-conds-finiteev}. Indeed, in more detail, it can be calculated that the effective domain of $\fstar$ is \[ \dom{\fstar} = \Braces{\uu \in \R^2 : -1 \leq u_2 \leq -\frac{u_1^2}{4} } \] (and actually $\fstar$ is the indicator function for this set). 
It can then be further calculated that $\slopes{f}$ (which is the same as $\cone(\dom{\fstar})$ in this case, by \Cref{cor:ent-clos-is-slopes-cone} since this function is finite everywhere and therefore reduction-closed by \Cref{pr:j:1}\ref{pr:j:1d}) is equal to the origin adjoined to the open lower half-plane: \[ \slopes{f} = \cone(\dom{\fstar}) = \{\zero\} \cup \{ \uu\in\R^2 : u_2 < 0 \}. \] Consistent with Theorem~\ref{thm:dual-cond-char}, this set is not polyhedral or closed, although its closure is polyhedral. \end{example} Theorem~\ref{thm:dual-cond-char} is not true in general if we replace $\slopes{f}$ by $\cone(\dom{\fstar})$ in parts (\ref{thm:dual-cond-char:b}) and (\ref{thm:dual-cond-char:c}), as shown by the next example: \begin{example} For the function $f$ in \eqref{eqn:ex:1} in Example~\ref{ex:negx1-else-inf}, it can be checked that $\resc{f}=\Rpos^2$ and that $f$ is recessive complete. Nevertheless, as was seen earlier, $\cone(\dom{\fstar})$ for this function is the set given in \eqref{eqn:ex:1:cone-dom-fstar}, which is not closed in $\R^2$. (However, its closure is polyhedral, and furthermore, $\slopes{f}=-\Rpos^2$, which is polyhedral, consistent with Theorem~\ref{thm:dual-cond-char}.) So $\cone(\dom{\fstar})$ being polyhedral is not necessary for $f$ to be recessive complete. \end{example} On the other hand, if $f$ is convex, closed and proper, then $\cone(\dom{\fstar})$ being polyhedral is always sufficient for $f$ to be recessive complete. This is because if $\cone(\dom{\fstar})$ is polyhedral, then it is also closed, so \[ \rescpol{f} = \cl(\cone(\dom{\fstar})) = \cone(\dom{\fstar}) \subseteq \slopes{f} \subseteq \rescpol{f}, \] where the first equality is by \Cref{pr:rescpol-is-con-dom-fstar}, and the two inclusions are by \Cref{thm:slopes-equiv} and Corollary~\ref{cor:rescpol-is-slopes}, respectively. 
Thus, $\slopes{f}$, being equal to $\cone(\dom{\fstar})$, is also polyhedral, implying $f$ is recessive complete by Theorem~\ref{thm:dual-cond-char}. When $f$ is reduction-closed (including, for instance, when $\inf f > -\infty$), Corollary~\ref{cor:ent-clos-is-slopes-cone} implies that $\slopes{f}=\cone(\dom{\fstar})$, which means, in this case, we can immediately replace $\slopes{f}$ by $\cone(\dom{\fstar})$ in Theorem~\ref{thm:dual-cond-char} to obtain the following corollary: \begin{corollary} \label{cor:thm:dual-cond-char:ent-closed} \MarkYes Let $f:\Rn\rightarrow\Rext$ be convex, lower semicontinuous, and reduction-closed. Then the following are equivalent: \begin{letter-compact} \item $f$ is recessive complete. \item $\resc{f}$ is polyhedral and also $\cone(\dom{\fstar})$ is closed in $\Rn$. \item $\cone(\dom{\fstar})$ is polyhedral. \end{letter-compact} \end{corollary} If $f$ is finite everywhere then it is reduction-closed (\Cref{pr:j:1}\ref{pr:j:1d}). In this case, we can immediately combine Corollary~\ref{cor:thm:dual-cond-char:ent-closed} with Theorem~\ref{thm:cont-conds-finiteev} to obtain an expanded list of conditions that are necessary and sufficient for $\fext$ to be continuous everywhere: \begin{corollary} \label{cor:thm:cont-conds-finiteev} \MarkYes Let $f:\Rn\rightarrow \R$ be convex. Then the following are equivalent: \begin{letter-compact} \item \label{cor:thm:cont-conds-finiteev:a} $\fext$ is continuous everywhere. \item \label{cor:thm:cont-conds-finiteev:b} $\fext$ is continuous at all its minimizers. \item \label{cor:thm:cont-conds-finiteev:c} $f$ is recessive complete. \item \label{cor:thm:cont-conds-finiteev:d} $\resc{f}$ is polyhedral and also $\cone(\dom{\fstar})$ is closed in $\Rn$. \item \label{cor:thm:cont-conds-finiteev:e} $\cone(\dom{\fstar})$ is polyhedral. 
\end{letter-compact} \end{corollary} \begin{example} Suppose $f$ is a function of the form given in \eqref{eqn:loss-sum-form} (under the same assumptions as in Proposition~\ref{pr:hard-core:1} and throughout Section~\ref{sec:emp-loss-min}). Then Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}) shows that $\resc{f}$ is the polar of the cone generated by $\{\uu_1,\ldots,\uu_m\}$. Since $\fext$ is continuous everywhere (Proposition~\ref{pr:hard-core:1}\ref{pr:hard-core:1:a}), Corollary~\ref{cor:thm:cont-conds-finiteev} implies that $\cone(\dom{\fstar})$ is closed. Thus, \[ \slopes{f} = \cone(\dom{\fstar}) = \cl(\cone(\dom{\fstar})) = \rescpol{f} = \cone \{\uu_1,\ldots,\uu_m\}, \] which is finitely generated and therefore polyhedral. (The form of $\cone(\dom{\fstar})$ could also be derived more directly using standard tools for calculating conjugates.) \end{example} \part{Differential Theory and Optimality} \label{part:differential-theory} \section{Differential theory} \label{sec:gradients} Gradients and subgradients are centrally important to convex analysis because of the power they afford in tackling and understanding optimization problems. In this chapter, we develop a theory of subdifferentials for functions defined over astral space. We study two different generalizations of the standard subgradient. The first allows us to assign meaningful subgradients for points at infinity. The other allows us to assign subgradients for finite points where the standard subgradient would otherwise be undefined, for instance, due to a tangent slope that is infinite. In such cases, the subgradients we assign are themselves astral points, and thus potentially infinite. We will see how these two forms of astral subgradient are related to one another, and how they can be viewed as duals of one another and, to a degree, as inverses of one another via conjugacy.
The notions of subgradient that we are about to present differ from the ``horizon subgradients'' of \citet{rockafellar1985extensions} and \citet[Chapter 8]{rock_wets}. They also differ from those introduced by \citet[Chapter 10]{singer_book}, even as our development of conjugacy built on his abstract framework. \subsection{Astral subgradients in general} \label{sec:gradients-def} As briefly discussed in Section~\ref{sec:prelim:subgrads}, a vector $\uu\in\Rn$ is said to be a \emph{subgradient} of a function $f:\Rn\rightarrow\Rext$ at $\xx\in\Rn$ if \begin{equation} \label{eqn:standard-subgrad-ineq} f(\xx') \geq f(\xx) + (\xx'-\xx)\cdot\uu \end{equation} for all $\xx'\in\Rn$, so that the affine function (in $\xx'$) on the right-hand side of this inequality is supporting $f$ at $\xx$. The \emph{subdifferential} of $f$ at $\xx$, denoted $\partial f(\xx)$, is the set of all subgradients of $f$ at $\xx$. (Although these definitions are intended for convex functions, we also apply them in what follows to general functions $f$ that are not necessarily convex.) If $f$ is convex, then its gradient, $\nabla f(\xx)$, if it exists, is always also a subgradient. Subgradients are central to optimization since $\zero\in\partial f(\xx)$ if and only if $f$ is minimized at $\xx$. \begin{example} \label{ex:standard-subgrad} For instance, suppose, for $x\in\R$, that \begin{equation} \label{eq:ex:x:2x} f(x) = \left\{ \begin{array}{cl} \max\{x,2x\} & \mbox{if $x\leq 1$} \\ +\infty & \mbox{otherwise.} \\ \end{array} \right. \end{equation} Then the subdifferential of $f$ at $0$ is equal to the set $[1,2]$. This is because, for $u\in [1,2]$, $f(x)\geq x u$ for all $x\in\R$, with equality at $x=0$. Equivalently, in the plane $\R^2$, the epigraph of $f$ is entirely above the line $\{ \rpair{x}{y} : y= x u \}$, which includes $\rpair{0}{f(0)}=\rpair{0}{0}$.
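In more detail, the inequality $f(x)\geq xu$ for $u\in[1,2]$ can be spelled out by a short case check (a routine verification, included here for concreteness):
\[
f(x) \;\geq\; xu
\quad\text{since}\quad
\left\{
\begin{array}{ll}
xu \leq x = f(x) & \mbox{if $x\leq 0$ (using $u\geq 1$ and $x\leq 0$)} \\
xu \leq 2x = f(x) & \mbox{if $0\leq x\leq 1$ (using $u\leq 2$ and $x\geq 0$)} \\
xu < +\infty = f(x) & \mbox{if $x > 1$,}
\end{array}
\right.
\]
with equality at $x=0$ in the first two cases.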
By similar reasoning, \[ \partial f(x) = \left\{ \begin{array}{cl} \{1\} & \mbox{if $x < 0$} \\ {[1,2]} & \mbox{if $x = 0$} \\ \{2\} & \mbox{if $0 < x < 1$} \\ {[2,+\infty)} & \mbox{if $x =1$} \\ \emptyset & \mbox{if $x>1$.} \\ \end{array} \right. \] Note how the subgradients ``wrap around'' the graph of this function at $x=1$. The standard subdifferential is not defined at $-\infty$, but if it were, we might reasonably expect that $1$ should be included as a subgradient at this point since $f(x)\geq x$ for all $x\in\R$, with equality holding asymptotically in the limit as $x\rightarrow-\infty$. We might also expect $0$ to be a subgradient at $-\infty$ since $f$ is minimized at this point. We will see soon how both these intuitions are captured by the definition we propose below. \end{example} In extending subdifferentials to astral space, we will need two different definitions: the first maps astral points in $\extspace$ to subgradients which are real vectors in $\Rn$; the other, which is a kind of dual, maps in the reverse direction from $\Rn$ to subsets of $\extspace$. This asymmetry, previously encountered in the two forms of conjugate developed in Chapter~\ref{sec:conjugacy}, is a consequence of the asymmetry of the two spaces we are working over, with the key coupling operation $\xbar\cdot\uu$ being defined over $\xbar\in\extspace$ but $\uu\in\Rn$. We begin in this section by defining an astral subgradient on astral points in $\extspace$. When considering \eqref{eqn:standard-subgrad-ineq} with standard convex functions, only $f(\xx')$ and $f(\xx)$ can be infinite, so there is no possibility of adding $-\infty$ and $+\infty$ in this expression. However, when extending to astral space, other quantities, particularly those involving inner products, may become infinite. Furthermore, there is no operation for directly adding or subtracting astral points analogous to the difference of vectors, $\xx'-\xx$, that appears in \eqref{eqn:standard-subgrad-ineq}. 
As a result, it is not immediately clear how to generalize the definition given in \eqref{eqn:standard-subgrad-ineq} simply by replacing each variable and function by its astral counterpart. Rather, we take an approach that focuses on the function's epigraph. To simplify this discussion, let us suppose momentarily that $f(\xx)\in\R$. In this case, as was seen in the example above, the condition given in \eqref{eqn:standard-subgrad-ineq} means that, for some $\beta\in\R$, $f(\xx')\geq \xx'\cdot\uu - \beta$ for all $\xx'\in\Rn$, with equality at $\xx$ (so that $f(\xx) = \xx\cdot\uu - \beta$). Equivalently, for every point $\rpair{\xx'}{y'}$ in $f$'s epigraph, $y'\geq \xx'\cdot\uu - \beta$, so that $\xx'\cdot\uu - y'\leq \beta$. Furthermore, equality holds at the point $\rpair{\xx}{f(\xx)}$, so that $ \xx\cdot\uu - f(\xx) = \beta$. Thus, when $f(\xx)\in\R$, $\uu\in\partial f(\xx)$ if and only if there exists $\beta\in\R$ such that \begin{letter-compact} \item $\xx'\cdot\uu - y' \leq \beta$ for all $\rpair{\xx'}{y'}\in \epi f$; and \item $\xx\cdot\uu - f(\xx) = \beta$. \end{letter-compact} Restated in these terms, we can more readily extend subgradients to astral space. Let $F:\extspace\rightarrow\Rext$. We aim to define what it means for some vector $\uu\in\Rn$ to be a subgradient of $F$ at some point $\xbar\in\extspace$. We begin by requiring that there exist some $\beta\in\Rext$ such that $\xbar'\cdot\uu - y' \leq \beta$ for all $\rpair{\xbar'}{y'}$ in $F$'s epigraph. This is exactly the same as in the first condition above for standard subgradients, except that we now allow $\beta$ to be $\pm\infty$. To generalize the second condition, it would be tempting to simply require $\xbar\cdot\uu - F(\xbar) = \beta$. However, such an expression is problematic since it might result in the undefined sum of $-\infty$ and $+\infty$.
Instead, we require that the second condition above hold in the limit for some \emph{sequence} of pairs $\seq{\rpair{\xbar_t}{y_t}}$ in $\epi F$; thus, we require that there exist such a sequence which converges to the pair given by $\xbar$ and its function value, $F(\xbar)$ (so that $\xbar_t\rightarrow\xbar$ and $y_t\rightarrow F(\xbar)$), and for which $\xbar_t\cdot\uu - y_t \rightarrow \beta$. Thus, we define: \begin{definition} \label{def:ast-subgrad} Let $F:\extspace\rightarrow\Rext$, and let $\uu\in\Rn$ and $\xbar\in\extspace$. We say that $\uu$ is an \emph{astral subgradient} of $F$ at $\xbar$ if there exists $\beta\in\Rext$ such that: \begin{letter-compact} \item \label{en:ast-sub-defn-cond-1} $\xbar'\cdot\uu - y' \leq \beta$ for all $\rpair{\xbar'}{y'} \in \epi F$; and \item \label{en:ast-sub-defn-cond-2} there exists a sequence $\seq{\rpair{\xbar_t}{y_t}}$ in $\epi F$ such that $\xbar_t\rightarrow \xbar$, $y_t\rightarrow F(\xbar)$, and $\xbar_t\cdot\uu - y_t \rightarrow \beta$. \end{letter-compact} The \emph{astral subdifferential} of $F$ at $\xbar\in\extspace$, denoted $\asubdifF{\xbar}$, is the set of all such astral subgradients of $F$ at $\xbar$. \end{definition} When $\beta\in\R$, condition~(\ref{en:ast-sub-defn-cond-1}) means that $F(\xbar')\geq \xbar'\cdot\uu - \beta$ for all $\xbar'\in\extspace$, so that $F$ majorizes the given affine function. In particular, this implies $y_t\geq F(\xbar_t)\geq \xbar_t\cdot\uu - \beta$ for all $t$, while condition~(\ref{en:ast-sub-defn-cond-2}) requires that $y_t - (\xbar_t\cdot\uu - \beta)\rightarrow 0$ so that, in this sense, $F$ is asymptotically approaching this same affine function as $\xbar_t\rightarrow\xbar$ and $y_t\rightarrow F(\xbar)$. \begin{example} \label{ex:standard-subgrad-cont} Astral subgradients are meant to provide meaningful subgradients for points at infinity. 
For instance, let $f$ be the function in \eqref{eq:ex:x:2x} with extension $\fext$ (which is the same as $f$ with $\fext(-\infty)=-\infty$ and $\fext(+\infty)=+\infty$). At $\barx=-\infty$, we can see that $u=1$ is a subgradient of $F=\fext$ according to the definition above with $\beta=0$ and as witnessed by the sequence $\seq{\rpair{x_t}{y_t}}$ with $x_t=y_t=-t$. This same sequence also shows that each $u<1$ is a subgradient at $-\infty$, but now with $\beta=+\infty$. It can be checked that there are no other subgradients at $-\infty$. Thus, $\partial \fext(-\infty)=(-\infty,1]$, so that the astral subgradients are seen to ``wrap around'' at $-\infty$, similar to the behavior observed for standard subgradients at $x=0$ and $x=1$. Note in particular that $0$ is a subgradient at $-\infty$, consistent with $\fext$ attaining its minimum at this point. (It can also be checked that $\partial \fext(+\infty)=\emptyset$.) \end{example} \begin{example} \label{ex:subgrad-log1+ex} Suppose \begin{equation} \label{eq:ex:ln1plusexp} f(x) = \ln(1+e^x) \end{equation} for $x\in\R$, and let $\fext$ be the extension of $f$. The (standard) subgradients of this function at points $x\in\R$ are simply given by its derivative $f'$. Consistent with this derivative tending to $1$ as $x\rightarrow+\infty$, it can be checked that $u=1$ is a subgradient of $\fext$ at $\barx=+\infty$ (with $\beta=0$ and using the sequence $\seq{\rpair{x_t}{f(x_t)}}$ where $x_t=t$). Indeed, \begin{equation} \label{eq:ex:ln1plusexp-subgrad} \partial \fext(\barx) = \left\{ \begin{array}{cl} (-\infty,0] & \mbox{if $\barx = -\infty$} \\ \{f'(\barx)\} & \mbox{if $\barx\in\R$} \\ {[1,+\infty)} & \mbox{if $\barx = +\infty$.} \end{array} \right. \end{equation} \end{example} As we show next, in the definition of astral subgradient given above, we can always take $\beta$ to be $\Fstar(\uu)$, thereby making condition~(\ref{en:ast-sub-defn-cond-1}) entirely redundant. 
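To illustrate with Example~\ref{ex:subgrad-log1+ex}, the conjugate of the function $f$ in \eqref{eq:ex:ln1plusexp} can be computed in closed form by a standard calculation (for $u\in(0,1)$, setting the derivative of $xu - f(x)$ to zero gives $x=\ln(u/(1-u))$):
\[
\fstar(u) = \left\{ \begin{array}{cl}
u\ln u + (1-u)\ln(1-u) & \mbox{if $u\in[0,1]$} \\
+\infty & \mbox{otherwise,}
\end{array} \right.
\]
where $0\ln 0$ is interpreted as $0$. In particular, $\fstar(1)=0$, which is exactly the value $\beta=0$ witnessed in that example for the subgradient $u=1$ at $\barx=+\infty$.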
Further, the definition can be restated in terms of astral points in $\extspacnp$ rather than sequences. As in Section~\ref{sec:work-with-epis}, we continue to write $\rpair{\xbar}{y}$, for $\xbar\in\extspace$ and $y\in\R$, to denote either a pair in $\extspace\times\R$, or its homeomorphic image in $\extspacnp$. Likewise, depending on context, $\epi F$ can be regarded as a subset of $\extspacnp$, with $\clepi{F}$ denoting its closure in $\extspacnp$. \begin{proposition} \label{pr:equiv-ast-subdif-defn} Let $F:\extspace\rightarrow\Rext$, let $\xbar\in\extspace$ and $\uu\in\Rn$, and let $\homat=[\Idnn,\zerov{n}]$ (as in Eq.~\ref{eqn:homat-def}). Then the following are equivalent: \begin{letter-compact} \item \label{pr:equiv-ast-subdif-defn:a} $\uu\in\partial F(\xbar)$. \item \label{pr:equiv-ast-subdif-defn:b} There exists a sequence $\seq{\rpair{\xbar_t}{y_t}}$ in $\epi F$ such that $\xbar_t\rightarrow\xbar$, $y_t \rightarrow F(\xbar)$, and $\xbar_t\cdot\uu - y_t \rightarrow \Fstar(\uu)$. \item \label{pr:equiv-ast-subdif-defn:c} There exists $\zbar\in \clepi{F}$ such that $\homat \zbar = \xbar$, $\zbar\cdot \rpair{\zero}{1} = F(\xbar)$, and $\zbar \cdot \rpair{\uu}{-1} = \Fstar(\uu)$. \item \label{pr:equiv-ast-subdif-defn:d} $\zbar'\cdot \rpair{\uu}{-1}$, as a function of $\zbar'$, is maximized over $\clepi{F}$ by some $\zbar\in\clepi{F}$ with $\homat\zbar = \xbar$ and $\zbar\cdot\rpair{\zero}{1} = F(\xbar)$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{pr:equiv-ast-subdif-defn:b}) $\Rightarrow$ (\ref{pr:equiv-ast-subdif-defn:a}): } Suppose part~(\ref{pr:equiv-ast-subdif-defn:b}) holds. Let $\beta=\Fstar(\uu)$. Then $\xbar'\cdot\uu - y' \leq \Fstar(\uu)$ for all $\rpair{\xbar'}{y'}\in \epi F$ by \eqref{eq:Fstar-def}. Thus, on the given sequence, all conditions are satisfied for $\uu$ to be an astral subgradient at $\xbar$. 
\pfpart{(\ref{pr:equiv-ast-subdif-defn:c}) $\Rightarrow$ (\ref{pr:equiv-ast-subdif-defn:b}): } Let $\zbar$ be as specified in part~(\ref{pr:equiv-ast-subdif-defn:c}). Because $\zbar\in\clepi{F}$, there exists a sequence $\seq{\rpair{\xbar_t}{y_t}}$ in $\epi F$ that converges to $\zbar$. Then \[ \xbar_t = \homat \rpair{\xbar_t}{y_t} \rightarrow \homat \zbar = \xbar \] with convergence following from \Cref{thm:linear:cont}(\ref{thm:linear:cont:b}) (and the first equality from Theorem~\ref{thm:homf}\ref{thm:homf:aa}). Likewise, \[ y_t = \rpair{\xbar_t}{y_t} \cdot \rpair{\zero}{1} \rightarrow \zbar \cdot \rpair{\zero}{1} = F(\xbar), \] and \[ \xbar_t \cdot \uu - y_t = \rpair{\xbar_t}{y_t} \cdot \rpair{\uu}{-1} \rightarrow \zbar \cdot \rpair{\uu}{-1} = \Fstar(\uu) \] (using Theorem~\ref{thm:i:1}\ref{thm:i:1c} and Theorem~\ref{thm:homf}\ref{thm:homf:a}). Thus, the sequence $\seq{\rpair{\xbar_t}{y_t}}$ satisfies part~(\ref{pr:equiv-ast-subdif-defn:b}). \pfpart{(\ref{pr:equiv-ast-subdif-defn:d}) $\Rightarrow$ (\ref{pr:equiv-ast-subdif-defn:c}): } Let $\zbar$ be as specified in part~(\ref{pr:equiv-ast-subdif-defn:d}). Then \begin{align*} \zbar\cdot \rpair{\uu}{-1} &= \max_{\zbar'\in\clepi{F}} \zbar'\cdot \rpair{\uu}{-1} \\ &= \sup_{\rpair{\xbar'}{y'}\in\epi F} \rpair{\xbar'}{y'}\cdot \rpair{\uu}{-1} \\ &= \sup_{\rpair{\xbar'}{y'}\in\epi F} [\xbar'\cdot\uu - y'] = \Fstar(\uu). \end{align*} The four equalities hold respectively by assumption; by continuity of the function being maximized (Theorem~\ref{thm:i:1}\ref{thm:i:1c}); by \Cref{pr:xy-pairs-props}(\ref{pr:xy-pairs-props:b}); and by \eqref{eq:Fstar-def}. Thus, $\zbar$ satisfies part~(\ref{pr:equiv-ast-subdif-defn:c}). \pfpart{(\ref{pr:equiv-ast-subdif-defn:a}) $\Rightarrow$ (\ref{pr:equiv-ast-subdif-defn:d}): } Let $\beta\in\Rext$ and $\seq{\rpair{\xbar_t}{y_t}}$ in $\epi F$ be as in the definition of astral subgradient. 
Then this sequence must have a convergent subsequence; by discarding all other elements, we can assume the entire sequence converges to some point $\zbar$, which must be in $\clepi F$. Similar to the previous arguments, $\xbar_t = \homat \rpair{\xbar_t}{y_t} \rightarrow \homat \zbar$, so $\homat \zbar = \xbar$ since $\xbar_t\rightarrow\xbar$. Likewise, $y_t = \rpair{\xbar_t}{y_t} \cdot \rpair{\zero}{1} \rightarrow \zbar\cdot\rpair{\zero}{1}$, implying $ \zbar\cdot\rpair{\zero}{1} = F(\xbar) $, and $\xbar_t\cdot\uu - y_t = \rpair{\xbar_t}{y_t} \cdot \rpair{\uu}{-1} \rightarrow \zbar\cdot\rpair{\uu}{-1}$, implying $ \zbar\cdot\rpair{\uu}{-1} = \beta $. We have that \[ \rpair{\xbar'}{y'} \cdot \rpair{\uu}{-1} = \xbar'\cdot\uu - y' \leq \beta \] for $\rpair{\xbar'}{y'}\in\epi F$, so $ \zbar' \cdot \rpair{\uu}{-1} \leq \beta $ for $\zbar'\in\clepi{F}$. Therefore, $ \zbar' \cdot \rpair{\uu}{-1} $ is maximized when $\zbar'=\zbar$. Thus, all conditions of part~(\ref{pr:equiv-ast-subdif-defn:d}) are satisfied. \qedhere \end{proof-parts} \end{proof} The next proposition gives some simple but useful observations: \begin{proposition} \label{pr:subgrad-imp-in-cldom} Let $F:\extspace\rightarrow\Rext$, and let $\xbar\in\extspace$. Then the following hold: \begin{letter-compact} \item \label{pr:subgrad-imp-in-cldom:a} If $\asubdifF{\xbar}\neq \emptyset$ then $\xbar\in\cldom{F}$. \item \label{pr:subgrad-imp-in-cldom:b} If $F\equiv+\infty$ then $\asubdifF{\xbar}=\emptyset$. \end{letter-compact} \end{proposition} \proof ~ Part~(\ref{pr:subgrad-imp-in-cldom:a}): Suppose $\uu\in\asubdifF{\xbar}$. Then by its definition, there exists a sequence $\seq{\rpair{\xbar_t}{y_t}}$ in $\epi F$ with $\xbar_t\rightarrow\xbar$ (along with other properties). Since each $\xbar_t\in\dom F$, this proves the claim. Part~(\ref{pr:subgrad-imp-in-cldom:b}): If $F\equiv+\infty$ then $\dom{F}=\emptyset$, implying the claim by part~(\ref{pr:subgrad-imp-in-cldom:a}). 
\qed In standard convex analysis, the Fenchel-Young inequality states that, for any proper convex function $f$, \begin{equation} \label{eqn:fenchel-stand} f(\xx) + \fstar(\uu) \geq \xx\cdot\uu \end{equation} for all $\xx\in\Rn$ and all $\uu\in\Rn$. Furthermore, this holds with equality if and only if $\uu\in\partial f(\xx)$ (\Cref{pr:stan-subgrad-equiv-props}\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:b}), a very useful characterization of subgradients. A version of the Fenchel-Young inequality generalizes directly to functions $F:\extspace\rightarrow\Rext$ over astral space since \eqref{eq:Fstar-down-def}, which gives the form of the conjugate $\Fstar$, immediately implies that \begin{equation} \label{eqn:ast-fenchel} \Fstar(\uu) \geq - F(\xbar) \plusd \xbar\cdot\uu \end{equation} for all $\xbar\in\extspace$ and all $\uu\in\Rn$. The next theorem shows that if \eqref{eqn:ast-fenchel} holds with equality, then $\uu$ must be a subgradient of $F$ at $\xbar$ (provided $\xbar\in\cldom{F}$). Furthermore, if $-F(\xbar)$ and $\xbar\cdot\uu$ are summable (so that the sum $-F(\xbar)+\xbar\cdot\uu$ is defined), then the converse holds as well: \begin{theorem} \label{thm:fenchel-implies-subgrad} Let $F:\extspace\rightarrow\Rext$, and let $\xbar\in\extspace$ and $\uu\in\Rn$. Consider the following statements: \begin{letter-compact} \item \label{thm:fenchel-implies-subgrad:a} $\Fstar(\uu) = -F(\xbar) \plusd \xbar\cdot\uu$ and $\xbar\in\cldom{F}$. \item \label{thm:fenchel-implies-subgrad:b} $\uu\in\asubdifF{\xbar}$. \end{letter-compact} Then statement~(\ref{thm:fenchel-implies-subgrad:a}) implies statement~(\ref{thm:fenchel-implies-subgrad:b}). Furthermore, if $-F(\xbar)$ and $\xbar\cdot\uu$ are summable, then the two statements~(\ref{thm:fenchel-implies-subgrad:a}) and~(\ref{thm:fenchel-implies-subgrad:b}) are equivalent. 
\end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:fenchel-implies-subgrad:a}) $\Rightarrow$ (\ref{thm:fenchel-implies-subgrad:b}): } Suppose statement~(\ref{thm:fenchel-implies-subgrad:a}) holds. Consider first the case that $F(\xbar)=+\infty$, which implies, by our assumption, that $\Fstar(\uu)=-\infty$ so that $\xbar'\cdot\uu-y'=-\infty$ for all $\rpair{\xbar'}{y'}\in\epi F$ (by \eqref{eq:Fstar-def}). Since $\xbar\in\cldom{F}$, there exists a sequence $\seq{\xbar_t}$ in $\dom F$ that converges to $\xbar$. For each $t$, let $y_t = \max\{t, F(\xbar_t)\}$, so that $\rpair{\xbar_t}{y_t}\in\epi F$ (since $F(\xbar_t)<+\infty$), implying $\xbar_t\cdot\uu-y_t=-\infty$. Thus, $\uu\in\partial F(\xbar)$ by Proposition~\refequiv{pr:equiv-ast-subdif-defn}{pr:equiv-ast-subdif-defn:a}{pr:equiv-ast-subdif-defn:b} since $\xbar_t\rightarrow\xbar$, $y_t\rightarrow+\infty=F(\xbar)$, and $\xbar_t\cdot\uu-y_t\rightarrow-\infty=\Fstar(\uu)$. Consider next the alternate case that $F(\xbar)<+\infty$. For each $t$, let $\xbar_t=\xbar$ and $y_t=\max\{-t, F(\xbar)\}$ so that $\rpair{\xbar_t}{y_t}\in\epi F$ with $\xbar_t\rightarrow\xbar$ and $y_t\rightarrow F(\xbar)$. If $\xbar\cdot\uu=-\infty$ then $\Fstar(\uu)=-\infty$ by the theorem's assumption, so $\xbar_t\cdot\uu - y_t = -\infty \rightarrow \Fstar(\uu)$. Otherwise, \[ \xbar_t\cdot\uu - y_t \rightarrow -F(\xbar) + \xbar\cdot\uu = \Fstar(\uu) \] by continuity of addition (since $\xbar\cdot\uu>-\infty$ and $F(\xbar)<+\infty$), and by our assumption. In either case, Proposition~\refequiv{pr:equiv-ast-subdif-defn}{pr:equiv-ast-subdif-defn:a}{pr:equiv-ast-subdif-defn:b} then implies $\uu\in\partial F(\xbar)$. \pfpart{(\ref{thm:fenchel-implies-subgrad:b}) (with conditions) $\Rightarrow$ (\ref{thm:fenchel-implies-subgrad:a}): } Suppose $\uu\in\partial F(\xbar)$ and that $-F(\xbar)$ and $\xbar\cdot\uu$ are summable.
Then by Proposition~\refequiv{pr:equiv-ast-subdif-defn}{pr:equiv-ast-subdif-defn:a}{pr:equiv-ast-subdif-defn:b}, there exists a sequence $\seq{\rpair{\xbar_t}{y_t}}$ in $\epi F$ with $\xbar_t\rightarrow\xbar$, $y_t \rightarrow F(\xbar)$, and $\xbar_t\cdot\uu - y_t \rightarrow \Fstar(\uu)$. On the other hand, \[ \xbar_t\cdot\uu - y_t \rightarrow \xbar\cdot\uu - F(\xbar) \] by continuity of addition (\Cref{prop:lim:eR}\ref{i:lim:eR:sum}), since $-F(\xbar)$ and $\xbar\cdot\uu$ are summable (and since $\xbar_t\cdot\uu\rightarrow\xbar\cdot\uu$ by Theorem~\ref{thm:i:1}\ref{thm:i:1c}). Thus, $\Fstar(\uu) = \xbar\cdot\uu - F(\xbar)$ as claimed. Also, since each $\xbar_t\in\dom F$, their limit $\xbar$ must be in $\cldom{F}$. \qedhere \end{proof-parts} \end{proof} The (partial) converse proved in Theorem~\ref{thm:fenchel-implies-subgrad} does not hold in general without the additional assumption that $-F(\xbar)$ and $\xbar\cdot\uu$ are summable. In other words, if $-F(\xbar)$ and $\xbar\cdot\uu$ are not summable, then it is possible that $\uu$ is a subgradient of $F$ at $\xbar$ but that $\Fstar(\uu) \neq - F(\xbar) \plusd \xbar\cdot\uu$. For instance, consider the extension $\fext$ of the function $f$ given in \eqref{eq:ex:ln1plusexp}, and let $u=1$ and $\barx=+\infty$. Then $u\in\partial\fext(\barx)$, as previously discussed, while $\fext(\barx)=+\infty$ and $\fextstar(u)=\fstar(u)=0$, so that $\fstar(u) = 0 \neq -\infty = - \fext(\barx) \plusd \barx u$. The assumption that $-F(\xbar)$ and $\xbar\cdot\uu$ are summable always holds when $F(\xbar)\in\R$ or $\xbar\cdot\uu\in\R$, including when $\xbar=\xx\in\Rn$ or when $\uu=\zero$. Applied in this last case, Theorem~\ref{thm:fenchel-implies-subgrad} immediately yields that $\zero$ is a subgradient of $F$ at $\xbar$ if and only if $\xbar$ minimizes $F$ (unless $F\equiv+\infty$): \begin{proposition} \label{pr:asub-zero-is-min} Let $F:\extspace\rightarrow\Rext$, with $F\not\equiv+\infty$. Let $\xbar\in\extspace$.
Then $\zero\in\asubdifF{\xbar}$ if and only if $\xbar$ minimizes $F$. \end{proposition} \proof By \eqref{eq:Fstar-down-def}, \[ \Fstar(\zero) = \sup_{\xbar'\in\extspace} [-F(\xbar')] = -\inf_{\xbar'\in\extspace} F(\xbar'). \] Therefore, $F(\xbar)=-\Fstar(\zero)$ if and only if $\xbar$ minimizes $F$. So if $\zero\in\partial F(\xbar)$, then $F(\xbar)=-\Fstar(\zero)$ by Theorem~\ref{thm:fenchel-implies-subgrad}, implying that $\xbar$ minimizes $F$. Conversely, if $\xbar$ minimizes $F$, then $F(\xbar)=-\Fstar(\zero)$ and also $F(\xbar)<+\infty$ since $F\not\equiv+\infty$. Therefore, $\zero\in\partial F(\xbar)$ by Theorem~\ref{thm:fenchel-implies-subgrad}. \qed As a useful example, the next proposition characterizes the astral subgradients of the indicator function $\indaS$ for a set $S\subseteq\extspace$. The proposition is stated in terms of the astral support function, $\indaSstar$, as given in \eqref{eqn:astral-support-fcn-def}. \begin{proposition} \label{pr:subdif-ind-fcn} Let $S\subseteq\extspace$, $\xbar\in\extspace$ and $\uu\in\Rn$. Then $\uu\in\asubdifindaS{\xbar}$ if and only if all of the following hold: \begin{itemize} \item $\xbar\cdot\uu=\indfastar{S}(\uu)$; \item $\xbar\in\Sbar$; and \item either $\xbar\in S$ or $\indaSstar(\uu)\not\in\R$. \end{itemize} Consequently, if $S\subseteq\Rn$ then $\uu\in\asubdifindsext{\xbar}$ if and only if $\xbar\cdot\uu=\indstars(\uu)$ and $\xbar\in\Sbar$. \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Only if:} Suppose $\uu\in\asubdifindaS{\xbar}$. Then by Proposition~\ref{pr:equiv-ast-subdif-defn}(\ref{pr:equiv-ast-subdif-defn:a},\ref{pr:equiv-ast-subdif-defn:b}), there exists a sequence $\seq{\rpair{\xbar_t}{y_t}}$ in $\epi \indaS$ with $\xbar_t\rightarrow\xbar$, $y_t\rightarrow\indfa{S}(\xbar)$ and $\xbar_t\cdot\uu - y_t \rightarrow \indaSstar(\uu)$. This implies $\xbar_t\in \dom{\indaS} = S$ and $y_t\geq 0$ for all $t$.
Hence, $\xbar\in\Sbar$, and $ \xbar_t\cdot\uu\rightarrow\xbar\cdot\uu $ (by Theorem~\ref{thm:i:1}(\ref{thm:i:1c})). Thus, \[ \indaSstar(\uu) \geq \lim \xbar_t\cdot\uu \geq \lim [\xbar_t\cdot\uu - y_t] = \indaSstar(\uu). \] The first inequality is from \eqref{eqn:astral-support-fcn-def} since each $\xbar_t\in S$, and the second inequality is because $y_t\geq 0$. Therefore, $\xbar\cdot\uu=\lim \xbar_t\cdot\uu = \indaSstar(\uu)$. It remains to show that either $\xbar\in S$ or $\indaSstar(\uu)\not\in\R$, or equivalently, that $\indaSstar(\uu)\in\R$ implies $\xbar\in S$. So suppose $\indaSstar(\uu)\in\R$. We have already shown $\xbar\cdot\uu=\indaSstar(\uu)$, so $\xbar\cdot\uu$ is also in $\R$, implying $-\indaS(\xbar)$ and $\xbar\cdot\uu$ are summable. Therefore, \[ \xbar\cdot\uu = \indaSstar(\uu) = -\indaS(\xbar) + \xbar\cdot\uu \] by Theorem~\ref{thm:fenchel-implies-subgrad}, implying $\indaS(\xbar)=0$, so $\xbar\in S$. \pfpart{If:} For the converse, suppose $\xbar\cdot\uu=\indfastar{S}(\uu)$, $\xbar\in\Sbar$, and that either $\xbar\in S$ or $\indaSstar(\uu)\not\in\R$. If $\xbar\in S$ then $\indaS(\xbar)=0$ so \begin{equation} \label{eq:pr:subdif-ind-fcn:1} \indaSstar(\uu) = -\indaS(\xbar) \plusd \xbar\cdot\uu, \end{equation} implying $\uu\in\asubdifindaS{\xbar}$ by Theorem~\ref{thm:fenchel-implies-subgrad}. Likewise, if $\xbar\not\in S$ (so that $\indaS(\xbar)=+\infty$), and if $\indaSstar(\uu)=-\infty$, then \eqref{eq:pr:subdif-ind-fcn:1} also holds, again implying $\uu\in\asubdifindaS{\xbar}$ by Theorem~\ref{thm:fenchel-implies-subgrad}. In the only remaining case, $\xbar\not\in S$ and $\indaSstar(\uu)=+\infty$. Since $\xbar\in\Sbar$, there exists a sequence $\seq{\xbar_t}$ in $S$ that converges to $\xbar$. By Theorem~\ref{thm:i:1}(\ref{thm:i:1c}), \[ \xbar_t\cdot\uu \rightarrow \xbar\cdot\uu = \indaSstar(\uu) = +\infty.
\] We assume without loss of generality that $\xbar_t\cdot\uu>0$ for all $t$, since the other elements, of which there can be at most finitely many, can be discarded. For each $t$, let $\beta_t=\min\{t,\,\xbar_t\cdot\uu\}$. Then $\beta_t\in\R$ and $\beta_t\rightarrow+\infty$. Let $y_t=\sqrt{\beta_t}$. Then $y_t>0$ so $\rpair{\xbar_t}{y_t}\in\epi\indaS$, and $y_t\rightarrow+\infty=\indaS(\xbar)$. Finally, \[ \xbar_t\cdot\uu-y_t \geq \beta_t - \sqrt{\beta_t} \rightarrow +\infty=\indaSstar(\uu). \] Thus, $\uu\in\asubdifindaS{\xbar}$ by Proposition~\ref{pr:equiv-ast-subdif-defn}(\ref{pr:equiv-ast-subdif-defn:a},\ref{pr:equiv-ast-subdif-defn:b}). \pfpart{Subgradients of $\indsext$ for $S\subseteq\Rn$:} Suppose $S\subseteq\Rn$. Then $\indsext=\indfa{\Sbar}$ by Proposition~\ref{pr:inds-ext}, and $\indfastar{\Sbar}=\indsextstar=\indstars$ by Proposition~\ref{pr:fextstar-is-fstar}. Applying the foregoing with $S$ replaced by the closed set $\Sbar$ then yields the final claim. \end{proof-parts} \end{proof} \subsection{Astral subgradients of an extension} We next focus on properties of astral subgradients in the important case that $F$ is the extension $\fext$ of a function $f:\Rn\rightarrow\Rext$. As a start, we give a version of the equivalent formulations of astral subgradient given in Proposition~\ref{pr:equiv-ast-subdif-defn} specialized to when $F$ is an extension. In this case, we can replace $\clepi{F}$ with $\clepi{f}$ and $\Fstar(\uu)$ with $\fstar(\uu)$, and also can work directly with function values rather than points in the epigraph. Additional characterizations of the astral subgradients of an extension $\fext$ will be given shortly in Theorem~\ref{thm:fminus-subgrad-char}. \begin{proposition} \label{pr:equiv-ast-subdif-defn-fext} Let $f:\Rn\rightarrow\Rext$, let $\xbar\in\extspace$ and $\uu\in\Rn$, and let $\homat=[\Idnn,\zerov{n}]$ (as in Eq.~\ref{eqn:homat-def}). 
Then the following are equivalent: \begin{letter-compact} \item \label{pr:equiv-ast-subdif-defn-fext:a} $\uu\in\asubdiffext{\xbar}$. \item \label{pr:equiv-ast-subdif-defn-fext:b} There exists a sequence $\seq{\rpair{\xx_t}{y_t}}$ in $\epi f$ such that $\xx_t\rightarrow\xbar$, $y_t \rightarrow \fext(\xbar)$, and $\xx_t\cdot\uu - y_t \rightarrow \fstar(\uu)$. \item \label{pr:equiv-ast-subdif-defn-fext:c} There exists $\zbar\in \clepi{f}$ such that $\homat \zbar = \xbar$, $\zbar\cdot \rpair{\zero}{1} = \fext(\xbar)$, and $\zbar \cdot \rpair{\uu}{-1} = \fstar(\uu)$. \item \label{pr:equiv-ast-subdif-defn-fext:d} $\zbar'\cdot \rpair{\uu}{-1}$, as a function of $\zbar'$, is maximized over $\clepi{f}$ by some $\zbar\in\clepi{f}$ with $\homat\zbar = \xbar$ and $\zbar\cdot\rpair{\zero}{1} = \fext(\xbar)$. \item \label{pr:equiv-ast-subdif-defn-fext:b1} There exists a sequence $\seq{\xx_t}$ in $\dom f$ such that $\xx_t\rightarrow\xbar$, $f(\xx_t) \rightarrow \fext(\xbar)$, and $\xx_t\cdot\uu - f(\xx_t) \rightarrow \fstar(\uu)$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{pr:equiv-ast-subdif-defn-fext:a}) $\Leftrightarrow$ (\ref{pr:equiv-ast-subdif-defn-fext:c}) $\Leftrightarrow$ (\ref{pr:equiv-ast-subdif-defn-fext:d}): } By Proposition~\ref{pr:wasthm:e:3}(\ref{pr:wasthm:e:3c}), $\clepifext=\clepi{f}$, and by Proposition~\ref{pr:fextstar-is-fstar}, $\fextstar=\fstar$. Therefore, parts~(\ref{pr:equiv-ast-subdif-defn-fext:a}), (\ref{pr:equiv-ast-subdif-defn-fext:c}) and~(\ref{pr:equiv-ast-subdif-defn-fext:d}) are each equivalent to the corresponding part of Proposition~\ref{pr:equiv-ast-subdif-defn} (with $F=\fext$), and so are equivalent to each other as well. 
\pfpart{(\ref{pr:equiv-ast-subdif-defn-fext:b}) $\Rightarrow$ (\ref{pr:equiv-ast-subdif-defn-fext:a}): } Suppose part~(\ref{pr:equiv-ast-subdif-defn-fext:b}) holds for some sequence $\seq{\rpair{\xx_t}{y_t}}$ in $\epi f$, which is included in $\epi \fext$ by Proposition~\ref{pr:h:1}(\ref{pr:h:1a}). Then this same sequence also satisfies part~(\ref{pr:equiv-ast-subdif-defn:b}) of Proposition~\ref{pr:equiv-ast-subdif-defn} (with $F=\fext$), which thus implies part~(\ref{pr:equiv-ast-subdif-defn-fext:a}). \pfpart{(\ref{pr:equiv-ast-subdif-defn-fext:c}) $\Rightarrow$ (\ref{pr:equiv-ast-subdif-defn-fext:b}): } Let $\zbar$ be as in part~(\ref{pr:equiv-ast-subdif-defn-fext:c}). Since $\zbar\in\clepi{f}$, there exists a sequence $\seq{\rpair{\xx_t}{y_t}}$ in $\epi f$ that converges to $\zbar$. The rest of the proof is exactly as in the proof of Proposition~\ref{pr:equiv-ast-subdif-defn} for this same implication (with $F=\fext$ and $\xbar_t$ replaced by $\xx_t$). \pfpart{(\ref{pr:equiv-ast-subdif-defn-fext:b}) $\Rightarrow$ (\ref{pr:equiv-ast-subdif-defn-fext:b1}): } Suppose a sequence as in part~(\ref{pr:equiv-ast-subdif-defn-fext:b}) exists. Then \[ \fext(\xbar) \leq \liminf f(\xx_t) \leq \limsup f(\xx_t) \leq \limsup y_t = \fext(\xbar). \] The first inequality is because $\xx_t\rightarrow\xbar$. The third inequality is because $f(\xx_t)\leq y_t$ for all $t$. The equality is because $y_t\rightarrow\fext(\xbar)$. Thus, $f(\xx_t)\rightarrow\fext(\xbar)$. Similarly, \begin{eqnarray*} \fstar(\uu) &=& \liminf \paren{\xx_t\cdot\uu - y_t} \\ &\leq& \liminf \paren{\xx_t\cdot\uu - f(\xx_t)} \\ &\leq& \limsup\paren{\xx_t\cdot\uu - f(\xx_t)} \\ &\leq& \fstar(\uu). \end{eqnarray*} The equality is because $\xx_t\cdot\uu-y_t\rightarrow \fstar(\uu)$. The first inequality is because $f(\xx_t)\leq y_t$ for all $t$. The last inequality is by \eqref{eq:fstar-def}. Thus, $\xx_t\cdot\uu-f(\xx_t)\rightarrow\fstar(\uu)$.
\pfpart{(\ref{pr:equiv-ast-subdif-defn-fext:b1}) $\Rightarrow$ (\ref{pr:equiv-ast-subdif-defn-fext:b}): } Suppose a sequence exists as in part~(\ref{pr:equiv-ast-subdif-defn-fext:b1}). For each $t$, let \[ y_t = \max\Braces{ \, \min\{0, \xx_t\cdot\uu\} - t,\, f(\xx_t) \, }. \] Then $y_t\in\R$ (since $f(\xx_t)<+\infty$), and $y_t\geq f(\xx_t)$ so $\rpair{\xx_t}{y_t}\in\epi f$. We show first that $y_t\rightarrow\fext(\xbar)$. Suppose $\fext(\xbar)>-\infty$. Let $\beta\in\R$ be such that $\fext(\xbar)>\beta$. For all $t$ sufficiently large, we must have $t > -\beta$ and also $f(\xx_t)>\beta$ (since $f(\xx_t)\rightarrow\fext(\xbar)$). When both of these hold, $f(\xx_t)>\beta>-t$, implying $y_t=f(\xx_t)$. Since this holds for all $t$ sufficiently large, it follows that $y_t\rightarrow\fext(\xbar)$. Suppose now that $\fext(\xbar)=-\infty$. Let $\beta\in\R$. For all $t$ sufficiently large, we must have $f(\xx_t)<\beta$ and $t>-\beta$. When these both hold, $y_t\leq\max\{-t,f(\xx_t)\}<\beta$. Since this holds for all $\beta\in\R$, it follows that $y_t\rightarrow-\infty=\fext(\xbar)$. It remains to show that $\xx_t\cdot\uu - y_t\rightarrow\fstar(\uu)$. Let $\beta\in\R$ with $\beta < \fstar(\uu)$. (As before, $\fstar(\uu)>-\infty$ since $f\not\equiv+\infty$.) For all $t$ sufficiently large, we must have $t>\beta$ and also $\xx_t\cdot\uu - f(\xx_t) > \beta$ (since $\xx_t\cdot\uu - f(\xx_t) \rightarrow \fstar(\uu)$). Suppose both these conditions hold. We claim these imply $\xx_t\cdot\uu-y_t>\beta$. This is immediate if $y_t=f(\xx_t)$. Otherwise, if $y_t\neq f(\xx_t)$ then we must have $y_t=\min\{0,\xx_t\cdot\uu\} - t \leq \xx_t\cdot\uu - t$. Hence, $\xx_t\cdot\uu-y_t\geq t>\beta$. Thus, $\xx_t\cdot\uu-y_t>\beta$ for all $t$ sufficiently large. Since this holds for all $\beta<\fstar(\uu)$, it follows that \[ \fstar(\uu) \leq \liminf\paren{\xx_t\cdot\uu - y_t} \leq \limsup\paren{\xx_t\cdot\uu - y_t} \leq \fstar(\uu) \] with the last inequality from \eqref{eq:fstar-mod-def}. 
This proves $\xx_t\cdot\uu - y_t\rightarrow \fstar(\uu)$, completing the proof. \qedhere \end{proof-parts} \end{proof} For the extension of a function $f:\Rn\rightarrow\Rext$, we give next a different characterization of astral subgradients which, like the one in Theorem~\ref{thm:fenchel-implies-subgrad}, relates to the Fenchel-Young inequality. To do so, we start with a definition: \begin{definition} Let $f:\Rn\rightarrow\Rext$ and let $\uu\in\Rn$. The \emph{linear shift of $f$ by $\uu$} is the function $\fminusu:\Rn\rightarrow\Rext$ given, for $\xx\in\Rn$, by \begin{equation} \label{eqn:fminusu-defn} \fminusu(\xx) = f(\xx) - \xx\cdot\uu. \end{equation} \end{definition} The Fenchel-Young inequality (Eq.~\ref{eqn:fenchel-stand}) can then be rewritten as \[ -\fstar(\uu) \leq \fminusu(\xx), \] which holds for all $\xx$ (by Eq.~\ref{eq:fstar-def}). As a result, it also holds for $\fminusu$'s extension so that \[ -\fstar(\uu) \leq \fminusuext(\xbar) \] for all $\xbar\in\extspace$ (by \Cref{pr:fext-min-exists}). As we show in the next theorem, assuming $f\not\equiv+\infty$, this inequality holds with equality if and only if $\uu$ is an astral subgradient of $\fext$ at $\xbar$. The theorem further shows that, in the characterization of astral subgradients given in Proposition~\ref{pr:equiv-ast-subdif-defn-fext}(\ref{pr:equiv-ast-subdif-defn-fext:b1}), the condition $f(\xx_t)\rightarrow\fext(\xbar)$ turns out to be superfluous, as is the requirement that the points $\xx_t$ be in $\dom f$. We first give some simple facts about linear shifts: \begin{proposition} \label{pr:fminusu-props} Let $f:\Rn\rightarrow\Rext$, $\xbar\in\extspace$, $\uu\in\Rn$, and let $\fminusu$ be $f$'s linear shift by $\uu$. Then the following hold: \begin{letter-compact} \item \label{pr:fminusu-props:a} $\dom{\fminusu}=\dom{f}$. \item \label{pr:fminusu-props:b} $\fminusustar(\ww)=\fstar(\ww+\uu)$ for $\ww\in\Rn$. \item \label{pr:fminusu-props:d} $-\fstar(\uu)=\inf \fminusu \leq \fminusuext(\xbar)$. 
\item \label{pr:fminusu-props:e} If $-\fext(\xbar)$ and $\xbar\cdot\uu$ are summable then $\fminusuext(\xbar)=\fext(\xbar)-\xbar\cdot\uu$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:fminusu-props:a}):} For $\xx\in\Rn$, $\fminusu(\xx)=+\infty$ if and only if $f(\xx)=+\infty$, proving the claim. \pfpart{Part~(\ref{pr:fminusu-props:b}):} From \eqref{eq:fstar-def}, we can compute, for $\ww\in\Rn$, \begin{align} \notag \fminusustar(\ww) &= \sup_{\xx\in\Rn} \bigBracks{\xx\cdot\ww - \bigParens{f(\xx)-\xx\cdot\uu}} \\ \label{eq:thm:adif-fext-inverses:1} &= \sup_{\xx\in\Rn} \bigBracks{\xx\cdot(\ww+\uu) - f(\xx)} = \fstar(\ww+\uu). \end{align} \pfpart{Part~(\ref{pr:fminusu-props:d}):} From \eqref{eq:fstar-def}, $\fstar(\uu)=\sup (-\fminusu)=-\inf \fminusu$. By Proposition~\ref{pr:fext-min-exists}, $\inf\fminusu\leq \fminusuext(\xbar)$. \pfpart{Part~(\ref{pr:fminusu-props:e}):} This follows from Proposition~\ref{pr:ext-sum-fcns} applied to $f$ and $g(\xx)=-\xx\cdot\uu$, since $g$'s extension is $\gext(\xbar)=-\xbar\cdot\uu$ (Example~\ref{ex:ext-affine}), and since $g$ is extensibly continuous everywhere, by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}). \qedhere \end{proof-parts} \end{proof} \begin{theorem} \label{thm:fminus-subgrad-char} Let $f:\Rn\rightarrow\Rext$, and assume $f\not\equiv+\infty$. Let $\xbar\in\extspace$ and $\uu\in\Rn$, and let $\fminusu$ be the linear shift of $f$ by $\uu$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:fminus-subgrad-char:a} $\uu\in\asubdiffext{\xbar}$. \item \label{thm:fminus-subgrad-char:b} There exists a sequence $\seq{\xx_t}$ in $\Rn$ such that $\xx_t\rightarrow\xbar$ and $\xx_t\cdot\uu - f(\xx_t) \rightarrow \fstar(\uu)$. \item \label{thm:fminus-subgrad-char:c} There exists a sequence $\seq{\xx_t}$ in $\Rn$ such that $\xx_t\rightarrow\xbar$ and \begin{equation} \label{eqn:thm:fminus-subgrad-char:2} \limsup (\xx_t\cdot\uu - f(\xx_t)) \geq \fstar(\uu). 
\end{equation} \item \label{thm:fminus-subgrad-char:d} $\fminusuext(\xbar) = -\fstar(\uu)$. \item \label{thm:fminus-subgrad-char:e} $\xbar$ minimizes $\fminusuext$; that is, $\fminusuext(\xbar) = \inf \fminusu$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:fminus-subgrad-char:a}) $\Rightarrow$ (\ref{thm:fminus-subgrad-char:b}): } Suppose $\uu\in\asubdiffext{\xbar}$. Then there must exist a sequence as in Proposition~\ref{pr:equiv-ast-subdif-defn-fext}(\ref{pr:equiv-ast-subdif-defn-fext:b1}), which thus also satisfies the conditions of part~(\ref{thm:fminus-subgrad-char:b}). \pfpart{(\ref{thm:fminus-subgrad-char:b}) $\Rightarrow$ (\ref{thm:fminus-subgrad-char:c}): } This is immediate. \pfpart{(\ref{thm:fminus-subgrad-char:c}) $\Rightarrow$ (\ref{thm:fminus-subgrad-char:d}): } Suppose there exists a sequence as in part~(\ref{thm:fminus-subgrad-char:c}). Then \[ \fminusuext(\xbar) \leq \liminf \fminusu(\xx_t) \leq -\fstar(\uu) \leq \fminusuext(\xbar). \] The first inequality is because $\xx_t\rightarrow\xbar$. The second inequality is equivalent to the assumption in \eqref{eqn:thm:fminus-subgrad-char:2}. The last inequality is by Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:d}). This proves the claimed equality. \pfpart{(\ref{thm:fminus-subgrad-char:d}) $\Leftrightarrow$ (\ref{thm:fminus-subgrad-char:e}): } This is immediate from Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:e}) (and Proposition~\ref{pr:fext-min-exists}). \pfpart{(\ref{thm:fminus-subgrad-char:d}) $\Rightarrow$ (\ref{thm:fminus-subgrad-char:a}): } Suppose $-\fstar(\uu)=\fminusuext(\xbar)$. By Proposition~\ref{pr:d1}, there exists a sequence $\seq{\xx_t}$ converging to $\xbar$ and for which $\fminusu(\xx_t)\rightarrow\fminusuext(\xbar)=-\fstar(\uu)$. Since $f\not\equiv+\infty$, $\fstar(\uu)>-\infty$, so $\fminusu(\xx_t)$ can be equal to $+\infty$ for at most finitely many values of $t$. 
By discarding these, we can assume henceforth that for all $t$, $\fminusu(\xx_t)<+\infty$, implying $f(\xx_t)<+\infty$ as well. Since $\xx_t\rightarrow\xbar$, this shows that $\xbar\in\cldom{f}$. As a first case, suppose $-\fext(\xbar)$ and $\xbar\cdot\uu$ are summable. Then $ \fminusuext(\xbar)=\fext(\xbar)-\xbar\cdot\uu $, by Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:e}). Thus, $-\fstar(\uu)=\fext(\xbar)-\xbar\cdot\uu$. Since $\xbar\in\cldom{f}=\cldom{\fext}$ (Proposition~\ref{pr:h:1}\ref{pr:h:1c}), this implies $\uu\in\asubdiffext{\xbar}$ by Theorem~\ref{thm:fenchel-implies-subgrad} (with $F=\fext$). Alternatively, if $-\fext(\xbar)$ and $\xbar\cdot\uu$ are not summable, then we must have $\fext(\xbar)=\xbar\cdot\uu\in\{-\infty,+\infty\}$. For the sequence above, we have already established that $\xx_t\in\dom f$ for all $t$, that $\xx_t\rightarrow\xbar$, and that \begin{equation} \label{eqn:thm:fminus-subgrad-char:1} \xx_t\cdot\uu-f(\xx_t) = -\fminusu(\xx_t) \rightarrow \fstar(\uu). \end{equation} Therefore, to satisfy the conditions of Proposition~\ref{pr:equiv-ast-subdif-defn-fext}(\ref{pr:equiv-ast-subdif-defn-fext:b1}), it remains only to show that $f(\xx_t)\rightarrow\fext(\xbar)$. If $\fext(\xbar)=+\infty$, then $f(\xx_t)\rightarrow+\infty$ for every sequence converging to $\xbar$, by $\fext$'s definition. Otherwise, we must have $\fext(\xbar)=\xbar\cdot\uu=-\infty$. We claim $f(\xx_t)\rightarrow-\infty$. If not, then there exists $\beta\in\R$ such that $f(\xx_t)\geq\beta$ for infinitely many values of $t$. By discarding all others, we can assume $f(\xx_t)\geq\beta$ for all $t$, implying $\xx_t\cdot\uu - f(\xx_t)\rightarrow-\infty$ since $\xx_t\cdot\uu\rightarrow\xbar\cdot\uu=-\infty$. However, this contradicts \eqref{eqn:thm:fminus-subgrad-char:1} since $\fstar(\uu)>-\infty$. Thus, in both cases, $f(\xx_t)\rightarrow\fext(\xbar)$. 
By Proposition~\refequiv{pr:equiv-ast-subdif-defn-fext}{pr:equiv-ast-subdif-defn-fext:a}{pr:equiv-ast-subdif-defn-fext:b1}, it follows that $\uu\in\asubdiffext{\xbar}$. \qedhere \end{proof-parts} \end{proof} The astral subgradients of a linear shift are straightforward to compute, as we show next, and use shortly. \begin{proposition} \label{pr:fminusu-subgrad} Let $f:\Rn\rightarrow\Rext$, $\xbar\in\extspace$, $\uu,\ww\in\Rn$, and let $\fminusu$ be $f$'s linear shift by $\uu$. Then $\ww\in\asubdiffext{\xbar}$ if and only if $\ww-\uu \in \asubdiffminusuext{\xbar}$; that is, $\asubdiffminusuext{\xbar} = \asubdiffext{\xbar} - \uu$. \end{proposition} \proof If $f\equiv+\infty$, then $\asubdiffext{\xbar} = \asubdiffminusuext{\xbar} = \emptyset$ (by Proposition~\ref{pr:subgrad-imp-in-cldom}(\ref{pr:subgrad-imp-in-cldom:b})), so the proposition holds vacuously. We therefore assume henceforth that $f\not\equiv+\infty$. Suppose $\ww\in\asubdiffext{\xbar}$. Then by Theorem~\refequiv{thm:fminus-subgrad-char}{thm:fminus-subgrad-char:a}{thm:fminus-subgrad-char:b}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ for which \begin{equation} \label{eqn:pr:fminusu-subgrad:1} \xx_t\cdot\ww - f(\xx_t) \rightarrow \fstar(\ww). \end{equation} Re-writing the left-hand side with simple algebra, and applying Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:b}) to the right-hand side, this is equivalent to \[ \xx_t\cdot(\ww-\uu) - \fminusu(\xx_t) \rightarrow \fminusustar(\ww-\uu), \] implying $\ww-\uu \in \asubdiffminusuext{\xbar}$ by Theorem~\refequiv{thm:fminus-subgrad-char}{thm:fminus-subgrad-char:a}{thm:fminus-subgrad-char:b}. For the converse, suppose $\ww-\uu \in \asubdiffminusuext{\xbar}$. Letting $g=\fminusu$ and $\vv=\ww-\uu$, this is equivalent to $\vv\in\asubdifgext{\xbar}$. Let $\gminusminusu$ be the linear shift of $g$ by $-\uu$. By the first part of the proof, it then follows that $\ww=\vv+\uu\in\asubdifgminusminusuext{\xbar}$. 
This proves the claim since $\gminusminusu=f$; indeed, \[ \gminusminusu(\xx) = g(\xx) + \xx\cdot\uu = (f(\xx) - \xx\cdot\uu) + \xx\cdot\uu = f(\xx) \] for $\xx\in\Rn$. \qed Astral subgradients generalize standard subgradients in the sense that the standard subdifferential of a function $f:\Rn\rightarrow\Rext$ is the same as the astral subdifferential of an essentially equivalent astral function $F:\extspace\rightarrow\Rext$ that agrees with $f$ on $\Rn$ and is $+\infty$ everywhere else, so that the epigraphs of $f$ and $F$ are identical. In addition, the astral subdifferential of $f$'s extension, $\fext$, is equal to the standard subdifferential of $f$ at all points in $\Rn$ where $f$ is lower semicontinuous (the standard subdifferential being empty at all other points). These statements always hold unless $f\equiv+\infty$, in which case $\partial f(\xx)=\Rn$ but $\asubdiffext{\xx}=\asubdifF{\xx}=\emptyset$ for all $\xx\in\Rn$ (Proposition~\ref{pr:subgrad-imp-in-cldom}\ref{pr:subgrad-imp-in-cldom:b}). \begin{proposition} \label{pr:asubdiffext-at-x-in-rn} Let $f:\Rn\rightarrow\Rext$, with $f\not\equiv+\infty$, and let $\xx\in\Rn$. Then the following hold: \begin{letter-compact} \item \label{pr:asubdiffext-at-x-in-rn:a} Let $F:\extspace\rightarrow\Rext$ be defined by \[ F(\xbar) = \left\{ \begin{array}{cl} f(\xbar) & \mbox{if $\xbar\in\Rn$} \\ +\infty & \mbox{otherwise.} \end{array} \right. \] Then $\asubdifF{\xx} = \partial f(\xx)$. \item \label{pr:asubdiffext-at-x-in-rn:b} In addition, \[ \partial f(\xx) = \left\{ \begin{array}{cl} \asubdiffext{\xx} & \mbox{if $\fext(\xx) = f(\xx)$} \\ \emptyset & \mbox{otherwise.} \end{array} \right. \] Consequently, $\asubdiffext{\xx} = \partial (\lsc f)(\xx)$. \end{letter-compact} \end{proposition} \begin{proof} Let $\uu\in\Rn$. Note that $\fstar(\uu)>-\infty$ by our assumption that $f\not\equiv+\infty$. \begin{proof-parts} \pfpart{Part~(\ref{pr:asubdiffext-at-x-in-rn:a}):} We show that $\uu\in\partial f(\xx)$ if and only if $\uu\in\asubdifF{\xx}$.
By construction, $F(\xx)=f(\xx)$ and $\epi F = \epi f$, implying $\Fstar=\fstar$, by comparison of Eqs.~(\ref{eq:fstar-mod-def}) and~(\ref{eq:Fstar-def}). Therefore, $\xx\cdot\uu - F(\xx) = \Fstar(\uu)$ if and only if $\xx\cdot\uu - f(\xx) = \fstar(\uu)$. If $F(\xx)=f(\xx)=+\infty$, then $\xx\cdot\uu - f(\xx) \neq \fstar(\uu)$ since $\fstar(\uu)>-\infty$. This implies that $\uu\not\in\partial f(\xx)$ (by \Cref{pr:stan-subgrad-equiv-props}\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:b}), and also that $\xx\cdot\uu - F(\xx) \neq \Fstar(\uu)$. Therefore, $\uu\not\in\asubdifF{\xx}$ by Theorem~\ref{thm:fenchel-implies-subgrad}, proving the claim in this case. In the alternative case, $F(\xx)=f(\xx)<+\infty$. Since $\uu\in\partial f(\xx)$ if and only if $\xx\cdot\uu - f(\xx) = \fstar(\uu)$ (by \Cref{pr:stan-subgrad-equiv-props}\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:b}), and since $\uu\in\asubdifF{\xx}$ if and only if $\xx\cdot\uu - F(\xx) = \Fstar(\uu)$ by Theorem~\ref{thm:fenchel-implies-subgrad} (since $\xx\in\dom F$), it follows that $\uu\in\partial f(\xx)$ if and only if $\uu\in\asubdifF{\xx}$, proving the claim in this case as well. \pfpart{Part~(\ref{pr:asubdiffext-at-x-in-rn:b}):} To prove the result, we show that $\uu\in \partial f(\xx)$ if and only if $\fext(\xx)=f(\xx)$ and $\uu\in\asubdiffext{\xx}$. Suppose $\uu\in\partial f(\xx)$. Then \begin{equation} \label{eq:asubdiffext-at-x-in-rn} \xx\cdot\uu - \fext(\xx) \geq \xx\cdot\uu - f(\xx) = \fstar(\uu) = \fextstar(\uu) \geq \xx\cdot\uu - \fext(\xx). \end{equation} The first inequality is by Proposition~\ref{pr:h:1}(\ref{pr:h:1a}). The two equalities are by \Cref{pr:stan-subgrad-equiv-props}(\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:b}) and Proposition~\ref{pr:fextstar-is-fstar}, respectively. The last inequality is by \eqref{eqn:ast-fenchel} (applied to $F=\fext$). 
\eqref{eq:asubdiffext-at-x-in-rn} then implies that $\fext(\xx)=f(\xx)$ (using $\xx\cdot\uu\in\R$), and also that $\fext(\xx)<+\infty$ since $\fstar(\uu)>-\infty$. Applying Theorem~\ref{thm:fenchel-implies-subgrad}, it then follows from \eqref{eq:asubdiffext-at-x-in-rn} that $\uu\in\asubdiffext{\xx}$. Conversely, suppose now that $\fext(\xx)=f(\xx)$ and that $\uu\in\asubdiffext{\xx}$. Then \[ \xx\cdot\uu - f(\xx) = \xx\cdot\uu - \fext(\xx) = \fextstar(\uu) = \fstar(\uu) \] with the second equality following from Theorem~\ref{thm:fenchel-implies-subgrad} (since $\uu\in\asubdiffext{\xx}$ and $\xx\cdot\uu\in\R$), and the third from Proposition~\ref{pr:fextstar-is-fstar}. Thus, $\uu\in\partial f(\xx)$ by \Cref{pr:stan-subgrad-equiv-props}(\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:b}). Applied to $\lsc f$, which is lower semicontinuous everywhere, this shows that $\asubdiffext{\xx} = \partial \lscfext(\xx) = \partial(\lsc f)(\xx)$ (by \Cref{pr:h:1}\ref{pr:h:1aa}). \qedhere \end{proof-parts} \end{proof} The astral subdifferentials of an extension $\fext$ are always convex, as shown in the next theorem. \begin{theorem} \label{thm:asubdiff-is-convex} Let $f:\Rn\rightarrow\Rext$ and let $\xbar\in\extspace$. Then $\asubdiffext{\xbar}$ is convex. \end{theorem} \begin{proof} If $\asubdiffext{\xbar}=\emptyset$, then the claim holds vacuously. We therefore assume henceforth that $\asubdiffext{\xbar}\neq\emptyset$, and so also that $f\not\equiv+\infty$ (by Proposition~\ref{pr:subgrad-imp-in-cldom}\ref{pr:subgrad-imp-in-cldom:b}). Let $\uu,\vv\in\asubdiffext{\xbar}$, let $\lambda\in[0,1]$, and let $\ww=(1-\lambda)\uu + \lambda \vv$. To prove convexity, we aim to show $\ww\in\asubdiffext{\xbar}$. 
Note that \begin{equation} \label{eq:thm:asubdiff-is-convex:3} \ww - \uu = \lambda (\vv-\uu), \end{equation} and also, since $\asubdiffext{\xbar}\neq\emptyset$, that \begin{equation} \label{eq:thm:asubdiff-is-convex:4} \xbar\in\cldom{\fext}=\cldom{f}=\cldom{\fminusu}=\cldom{\fminusuext}, \end{equation} by Propositions~\ref{pr:subgrad-imp-in-cldom}(\ref{pr:subgrad-imp-in-cldom:a}), \ref{pr:h:1}(\ref{pr:h:1c}), and~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:a}). As a first case, suppose either $\fstar(\uu)\in\R$ or $\fstar(\vv)\in\R$. Indeed, by symmetry, we can assume without loss of generality that $\fstar(\uu)\in\R$, since if instead $\fstar(\vv)\in\R$, we can simply swap $\uu$ and $\vv$, and replace $\lambda$ with $1-\lambda$. Let $\fminusu$ be $f$'s linear shift by $\uu$. Then $\fminusuext(\xbar)=-\fstar(\uu)$, by Theorem~\refequiv{thm:fminus-subgrad-char}{thm:fminus-subgrad-char:a}{thm:fminus-subgrad-char:d}, since $\uu\in\asubdiffext{\xbar}$. This implies $\fminusuext(\xbar)\in\R$, and also that \begin{equation} \label{eq:thm:asubdiff-is-convex:1} -(1-\lambda) \fminusuext(\xbar) = (1-\lambda) \fstar(\uu) = (1-\lambda) \fminusustar(\zero) \end{equation} using Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:b}) in the second equality. Since $\vv\in\asubdiffext{\xbar}$, $\vv-\uu\in\asubdiffminusuext{\xbar}$, by Proposition~\ref{pr:fminusu-subgrad}. Theorem~\ref{thm:fenchel-implies-subgrad} then implies \[ \xbar\cdot (\vv-\uu) - \fminusuext(\xbar) = \fminusustar(\vv-\uu) \] since the terms on the left are summable (because $\fminusuext(\xbar)\in\R$). Consequently, \begin{equation} \label{eq:thm:asubdiff-is-convex:2} \lambda \xbar\cdot (\vv-\uu) - \lambda \fminusuext(\xbar) = \lambda \fminusustar(\vv-\uu).
\end{equation} Combining, we now have \begin{eqnarray*} \fminusustar(\ww - \uu) &\geq& \xbar\cdot(\ww - \uu) - \fminusuext(\xbar) \\ &=& \lambda \xbar\cdot(\vv - \uu) - \fminusuext(\xbar) \\ &=& (1-\lambda) \fminusustar(\zero) + \lambda \fminusustar(\vv - \uu) \\ &\geq& \fminusustar\Parens{\lambda(\vv-\uu)} \\ &=& \fminusustar(\ww-\uu). \end{eqnarray*} The first inequality is by \eqref{eqn:ast-fenchel} (applied to $F=\fminusuext$ and using Proposition~\ref{pr:fextstar-is-fstar}). The first and last equalities are by \eqref{eq:thm:asubdiff-is-convex:3}. The second equality is obtained by adding Eqs.~(\ref{eq:thm:asubdiff-is-convex:1}) and~(\ref{eq:thm:asubdiff-is-convex:2}), noting that $\fminusuext(\xbar)$ and $\fminusustar(\zero)=\fstar(\uu)$ are finite, ensuring summability. The second inequality follows from the convexity of the conjugate function $\fminusustar$ (\Cref{pr:conj-props}\ref{pr:conj-props:d}), using Proposition~\refequiv{pr:stand-cvx-fcn-char}{pr:stand-cvx-fcn-char:a}{pr:stand-cvx-fcn-char:c}. Thus, \[ \xbar\cdot(\ww - \uu) - \fminusuext(\xbar) = \fminusustar(\ww-\uu). \] Combined with \eqref{eq:thm:asubdiff-is-convex:4}, this implies by Theorem~\ref{thm:fenchel-implies-subgrad} (applied to $F=\fminusuext$) that $\ww-\uu\in\asubdiffminusuext{\xbar}$. Therefore, $\ww\in\asubdiffext{\xbar}$ by Proposition~\ref{pr:fminusu-subgrad}, completing the proof in this case. Since $f\not\equiv+\infty$, $\fstar>-\infty$. Thus, in the remaining case, $\fstar(\uu)=\fstar(\vv)=+\infty$. We can assume without loss of generality that $\xbar\cdot(\uu-\vv)<+\infty$, since otherwise, if $\xbar\cdot(\uu-\vv)=+\infty$, then we can, as above, swap $\uu$ and $\vv$, and replace $\lambda$ with $1-\lambda$. Since $\uu\in\asubdiffext{\xbar}$, by Theorem~\refequiv{thm:fminus-subgrad-char}{thm:fminus-subgrad-char:a}{thm:fminus-subgrad-char:d}, $\fminusuext(\xbar)=-\fstar(\uu)=-\infty$. 
Also, \[ \xbar\cdot(\ww-\uu) = \lambda\xbar\cdot(\vv-\uu)>-\infty \] by \eqref{eq:thm:asubdiff-is-convex:3}, and since $\lambda\geq 0$ and $\xbar\cdot(\uu-\vv)<+\infty$. Therefore, $-\fminusuext(\xbar)$ and $\xbar\cdot(\ww-\uu)$ are summable, and in particular, \[ \fminusustar(\ww-\uu) \geq \xbar\cdot(\ww-\uu) - \fminusuext(\xbar) = +\infty \] with the inequality following from \eqref{eqn:ast-fenchel}. Thus, \[ \fminusustar(\ww-\uu) = \xbar\cdot(\ww-\uu) - \fminusuext(\xbar). \] Combined with \eqref{eq:thm:asubdiff-is-convex:4}, it then follows, as above, from Theorem~\ref{thm:fenchel-implies-subgrad} that $\ww-\uu\in\asubdiffminusuext{\xbar}$, and so that $\ww\in\asubdiffext{\xbar}$ by Proposition~\ref{pr:fminusu-subgrad}, completing the proof. \end{proof} \subsection{Astral dual subgradients} \label{sec:astral-dual-subgrad} For a function $F:\extspace\rightarrow\Rext$ and a point $\xbar\in\extspace$, the astral subdifferential $\asubdifF{\xbar}$ is a set of vectors $\uu$ in $\Rn$. Its purpose, as we have discussed, is to assign meaningful subgradients to points at infinity. We turn next to a different, opposite kind of subdifferential, called the astral dual subdifferential, which, for a function defined on $\Rn$, assigns subgradients to finite points $\uu\in\Rn$ where the subgradients are themselves astral points $\xbar\in\extspace$. In this way, subgradients can be captured at points where such subgradients might not otherwise be defined. Here is an example: \begin{example} \label{ex:entropy-1d} Let $\psi:\R\rightarrow\Rext$ be defined, for $u\in\R$, by \[ \psi(u) = \begin{cases} u \ln u & \text{if $u\geq 0$,} \\ +\infty & \mbox{otherwise} \end{cases} \] (where, as usual, $0\ln 0 = 0$). For $u > 0$, the subgradients of this function are the same as its derivative, as will be the astral dual subgradients. But for $u\leq 0$, the function has no standard subgradients, even at $u=0$ which is in the function's effective domain. 
Nevertheless, at $u=0$, it will turn out that $-\infty$ is an astral dual subgradient, as is to be expected since the derivative of this function approaches $-\infty$ as $u$ approaches $0$ from the right. \end{example} To define astral dual subgradients, let $\psi:\Rn\rightarrow\Rext$ be any function, changing notation to emphasize the switch to dual space, and because later we will often take $\psi$ to be the conjugate of some function. Replacing $f$ by $\psi$ and swapping variable names in \eqref{eqn:standard-subgrad-ineq}, we can say that $\xx\in\Rn$ is a standard subgradient of $\psi$ at $\uu\in\Rn$ if \begin{equation} \label{eqn:psi-subgrad:1} \psi(\uu') \geq \psi(\uu) + \xx\cdot(\uu'-\uu) \end{equation} for all $\uu'\in\Rn$. To extend this notion to astral space, while avoiding the possibility of adding $-\infty$ and $+\infty$, we again focus on epigraphs, specifically, on $\epi \psi$. \eqref{eqn:psi-subgrad:1} is equivalent to \[ v' \geq \psi(\uu) + \xx\cdot(\uu'-\uu) \] for all $\rpair{\uu'}{v'}\in\epi \psi$, which in turn is equivalent to \begin{equation} \label{eqn:psi-subgrad:1a} -\psi(\uu) \geq \xx\cdot(\uu'-\uu) - v' \end{equation} by simple algebra. In this form, the definition of standard subgradient immediately and naturally generalizes to astral space since we can simply replace $\xx$ by an astral point $\xbar\in\extspace$ in \eqref{eqn:psi-subgrad:1a}: \begin{definition} \label{def:dual-subgrad} Let $\psi:\Rn\rightarrow\Rext$, and let $\xbar\in\extspace$ and $\uu\in\Rn$. We say that $\xbar$ is an \emph{astral dual subgradient} of $\psi$ at $\uu$ if \begin{equation} \label{eqn:psi-subgrad:3a} -\psi(\uu) \geq \xbar\cdot(\uu'-\uu) - v' \end{equation} for all $\rpair{\uu'}{v'}\in\epi \psi$. The \emph{astral dual subdifferential} of $\psi$ at $\uu$, denoted $\adsubdifpsi{\uu}$, is the set of all such astral dual subgradients of $\psi$ at $\uu$. 
\end{definition} Equivalently, $\xbar$ is an {astral dual subgradient} of $\psi$ at $\uu$ if and only if \begin{equation} \label{eqn:psi-subgrad:3} -\psi(\uu) \geq -\psi(\uu') \plusd \xbar\cdot(\uu'-\uu) \end{equation} for all $\uu'\in\Rn$, since, by Proposition~\ref{pr:plusd-props}(\ref{pr:plusd-props:d}), the right-hand side of \eqref{eqn:psi-subgrad:3} is equal to the supremum of the right-hand side of \eqref{eqn:psi-subgrad:3a} over all $v'\geq\psi(\uu')$. Furthermore, by Proposition~\ref{pr:plusd-props}(\ref{pr:plusd-props:e}), \eqref{eqn:psi-subgrad:3} is equivalent to \begin{equation} \label{eqn:psi-subgrad:3-alt} \psi(\uu') \geq \psi(\uu) \plusd \xbar\cdot(\uu'-\uu) \end{equation} for all $\uu'\in\Rn$, which even more closely resembles the standard definition given in \eqref{eqn:psi-subgrad:1}. When $\psi>-\infty$ and $\psi(\uu)\in\R$, this definition is the same as the ``extended subgradients'' given by \citet[Definition~3.1]{waggoner21}. We use the notation $\adsubdifplain{\psi}$ to distinguish the astral dual subdifferential from the standard subdifferential $\partial \psi$, since either operation can be applied to an ordinary function $\psi$ over $\Rn$. (In contrast, the astral subdifferential $\asubdifplain{F}$ is only applied to functions $F$ over $\extspace$.) As discussed above, the astral primal subdifferential $\asubdifF{\xbar}$ captures finite subgradients $\uu\in\Rn$ of an astral function $F:\extspace\rightarrow\Rext$ at astral (and therefore potentially infinite) points $\xbar\in\extspace$, while the astral dual subdifferential $\adsubdifpsi{\uu}$ captures astral (and so potentially infinite) subgradients $\xbar\in\extspace$ of a function $\psi:\Rn\rightarrow\Rext$ at finite points $\uu\in\Rn$. An example was given in Example~\ref{ex:entropy-1d}. 
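As a quick numerical illustration of Example~\ref{ex:entropy-1d} (a hypothetical Python sketch, not part of the formal development; the helper names are ours), one can check that the derivative $\ln u + 1$ of $u\ln u$ decreases without bound as $u\rightarrow 0^+$, and that no finite $x$ satisfies the standard subgradient inequality $\psi(u')\geq\psi(0)+x u'$ at $u=0$:

```python
import math

def psi(u):
    """Entropy-type function of Example ex:entropy-1d: u*ln(u) for u >= 0, +inf otherwise."""
    if u < 0:
        return math.inf
    return 0.0 if u == 0 else u * math.log(u)

# The derivative ln(u) + 1 decreases without bound as u -> 0+.
slopes = [math.log(u) + 1.0 for u in (1e-2, 1e-4, 1e-8)]
assert slopes[0] > slopes[1] > slopes[2]

def is_standard_subgrad_at_zero(x, trial_points):
    """Check the subgradient inequality psi(u') >= psi(0) + x*(u' - 0) on trial points."""
    return all(psi(up) >= psi(0.0) + x * up for up in trial_points)

# For any finite x the inequality fails at small enough u' > 0, since
# psi(u')/u' = ln(u') -> -infinity; so psi has no standard subgradient at 0.
trials = [math.exp(-k) for k in range(1, 182, 20)]
assert not any(is_standard_subgrad_at_zero(x, trials) for x in (-100.0, -10.0, 0.0, 5.0))
```

By contrast, taking $\xbar=-\infty$ in \eqref{eqn:psi-subgrad:3a} gives a right-hand side of $-\infty$ for every $\rpair{u'}{v'}\in\epi\psi$ with $u'>0$, and reduces to $0\geq -v'$ when $u'=0$; so the astral dual subgradient inequality does hold at $0$.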
The next example shows further how it is possible for an astral dual subgradient to encode an infinite slope (or rate of change) in one direction, while simultaneously encoding an ordinary finite slope in another direction: \begin{example} \label{ex:entropy-ast-dual-subgrad} Let \[ \Delta = \Braces{\uu\in\Rpos^2 :\: u_1+u_2\leq 1}, \] and let $\psi:\R^2\rightarrow\Rext$ be given, for $\uu\in\R^2$, by \begin{equation*} \psi(\uu) = \begin{cases} u_1 \ln u_1 + u_2 \ln u_2 + (1-u_1-u_2) \ln(1-u_1-u_2) & \text{if $\uu\in\Delta$,} \\ +\infty & \text{otherwise.} \end{cases} \end{equation*} This function is closed, proper, convex. It is also differentiable at all points in the relative interior of its domain, so its standard and astral dual subgradients are the same as its gradient at all such points. In particular, for $\uu\in\ri{\Delta}$, \begin{equation} \label{eq:ex:entropy-ast-dual-subgrad:3} \nabla\psi(\uu) = \Bracks{ \begin{array}{c} \ln\bigParens{u_1/(1-u_1-u_2)} \\[0.5em] \ln\bigParens{u_2/(1-u_1-u_2)} \end{array} }. \end{equation} Nonetheless, let us consider a point $\uu=\trans{[u,1-u]}$, for some $u\in(0,1)$, which is on the relative boundary of $\Delta=\dom{\psi}$. At such a point, $\psi$ has no standard subgradient, but it does have an astral dual subgradient, as we will now see. Let $\vv=\trans{[1,1]}$ and $\ww=\trans{[1,-1]} / \sqrt{2}$, so that $\norm{\ww}=1$ and $\vv\perp\ww$. Also, let $\rho:\R\rightarrow\Rext$ be the function given by $\rho(r)=\psi(\uu+r\ww)$, for $r\in\R$, and let $\rho'$ denote its first derivative. We claim that $\psi$'s only astral dual subgradient at $\uu$ is the point \begin{equation} \label{eq:ex:entropy-ast-dual-subgrad:1} \xbar = \limray{\vv}\plusl{\rho'(0)}\ww. \end{equation} Notice that $\xbar$ is capturing both $\psi$'s ``infinite slope'' in the direction of $\vv$, as well as its better-behaved, finite derivative information in the direction of $\ww$, that is, in the direction perpendicular to $\vv$. 
To see that $\xbar\in\adsubdifpsi{\uu}$, let $\uu'\in\R^2$; we aim to show that \eqref{eqn:psi-subgrad:3-alt} holds. This is immediate if $\uu'\not\in\Delta$. If $\uu'\in\Delta$ and $u'_1+u'_2<1$ then $\vv\cdot\uu'<1=\vv\cdot\uu$, implying $\vv\cdot(\uu'-\uu)<0$ and so that $\xbar\cdot(\uu'-\uu)=-\infty$; therefore, \eqref{eqn:psi-subgrad:3-alt} holds in this case as well. In the last case, $\uu'\in\Delta$ and $u'_1+u'_2=1$. Then $\vv\cdot(\uu'-\uu)=0$, that is, $\uu'-\uu$ is perpendicular to $\vv$, implying $\uu'-\uu=r\ww$ for some $r\in\R$. We then have \[ \psi(\uu') = \psi(\uu+r\ww) = \rho(r) \geq \rho(0) + \rho'(0) r = \psi(\uu) + \xbar\cdot(\uu'-\uu). \] The inequality is because $\rho'(0)$, being a gradient of $\rho$ at $0$, is also a subgradient (\Cref{roc:thm25.1}\ref{roc:thm25.1:a}). The last equality is by definition of $\xbar$ (and Lemma~\ref{lemma:case}). Thus, \eqref{eqn:psi-subgrad:3-alt} holds in all cases. To see that $\xbar$ is the only astral dual subgradient, suppose $\xbar'\in\extspac{2}$ is in $\adsubdifpsi{\uu}$. We claim first that $\xbar'\cdot\vv=+\infty$. Let $\uu'=\uu-\epsilon\vv$ for some $\epsilon>0$ with $\epsilon<\min\{u,1-u\}$, so that $\uu'\in\ri{\Delta}$. We then have \[ \psi(\uu) + \nabla\psi(\uu')\cdot(\uu'-\uu) \geq \psi(\uu') \geq \psi(\uu) + \xbar'\cdot(\uu'-\uu). \] The first inequality is because $\psi$ is differentiable at $\uu'$ so its gradient is also a subgradient (\Cref{roc:thm25.1}\ref{roc:thm25.1:a}). The second inequality is by \eqref{eqn:psi-subgrad:3-alt}. Thus, by algebra, $\xbar'\cdot\vv\geq\nabla\psi(\uu')\cdot\vv$. Since, by \eqref{eq:ex:entropy-ast-dual-subgrad:3}, $\nabla\psi(\uu')\cdot\vv\rightarrow+\infty$ as $\epsilon\rightarrow 0$, it follows that $\xbar'\cdot\vv=+\infty$. We next claim that $\xbar'\cdot\ww=\rho'(0)$. For $r\in\R$, \[ \rho(r) = \psi(\uu+r\ww) \geq \psi(\uu)+\xbar'\cdot(r\ww) = \rho(0)+r(\xbar'\cdot\ww), \] with the inequality from \eqref{eqn:psi-subgrad:3-alt}. 
Thus, $\xbar'\cdot\ww$ is a subgradient of $\rho$ at $0$, and so is equal to $\rho'(0)$ since $\rho$ is differentiable at $0$ (\Cref{roc:thm25.1}\ref{roc:thm25.1:a}). Because $\xbar'\cdot\vv=+\infty$ and $\xbar'\cdot\ww=\rho'(0)$, we must have $\xbar'=\xbar$ since $\xbar$ is the only point in $\extspac{2}$ for which these both hold. Thus, at a point $\uu=\trans{[u,1-u]}$ with $u\in(0,1)$, we have shown that $\psi$'s only astral dual subgradient is the point $\xbar$ given in \eqref{eq:ex:entropy-ast-dual-subgrad:1}. Evaluating $\rho'(0)$, we can say more explicitly that $\psi$'s astral dual subdifferential at $\uu$ is the singleton \[ \adsubdifpsi{\uu} = \Braces{ \limray{\vv} \plusl \Parens{\frac{1}{\sqrt{2}}\ln\Parens{\frac{u}{1-u}}}\ww }. \] When instead $u\in\{0,1\}$, similar reasoning can be used to determine the astral dual subdifferentials at $\ee_1=\trans{[1,0]}$ and $\ee_2=\trans{[0,1]}$, which turn out to be: \[ \adsubdifpsi{\ee_1} = \limray{\vv} \seqsum (-\limray{\ee_2}), \mbox{~~and~~} \adsubdifpsi{\ee_2} = \limray{\vv} \seqsum (-\limray{\ee_1}). \] \end{example} For any function $\psi:\Rn\rightarrow\Rext$, \eqref{eqn:psi-subgrad:3} immediately implies that $\uu\in\Rn$ minimizes $\psi$ if and only if $\zero$ is an astral dual subgradient of $\psi$ at $\uu$, that is, if and only if $\zero\in\adsubdifpsi{\uu}$. Also, as we show next, astral dual subgradients generalize standard subgradients in the sense that the standard subdifferential of $\psi$ at $\uu$ is exactly equal to the finite points included in $\psi$'s astral dual subdifferential at $\uu$: \begin{proposition} \label{pr:adsubdif-int-rn} Let $\psi:\Rn\rightarrow\Rext$, and let $\uu\in\Rn$. Then $\partial\psi(\uu) = \adsubdifpsi{\uu}\cap\Rn$. \end{proposition} \proof When $\xbar=\xx\in\Rn$, \eqref{eqn:psi-subgrad:1a} and \eqref{eqn:psi-subgrad:3a} are equivalent for all $\rpair{\uu'}{v'}\in\epi \psi$.
Since the former defines standard subgradients and the latter defines astral dual subgradients, this proves the claim. \qed Astral dual subdifferentials are always closed and convex: \begin{proposition} \label{pr:ast-dual-subdif-is-convex} Let $\psi:\Rn\rightarrow\Rext$, and let $\uu\in\Rn$. Then $\adsubdifpsi{\uu}$ is convex and closed (in $\extspace$). \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Convex:} Suppose $\xbar,\ybar\in\adsubdifpsi{\uu}$. Let $\zbar\in\lb{\xbar}{\ybar}$, which we aim to show is also in $\adsubdifpsi{\uu}$. Let $\rpair{\uu'}{v'}\in\epi \psi$. Then \begin{align*} -\psi(\uu) &\geq \max\Braces{\xbar\cdot(\uu'-\uu)-v',\; \ybar\cdot(\uu'-\uu)-v'} \\ &= \max\Braces{\xbar\cdot(\uu'-\uu),\; \ybar\cdot(\uu'-\uu)} - v' \\ &\geq \zbar\cdot(\uu'-\uu) - v'. \end{align*} The first inequality follows from the definition of astral dual subgradient given in \eqref{eqn:psi-subgrad:3a} since both $\xbar$ and $\ybar$ are in $\adsubdifpsi{\uu}$. The last inequality is by \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}) since $\zbar\in\lb{\xbar}{\ybar}$. Since this holds for all $\rpair{\uu'}{v'}\in\epi \psi$, it follows that $\zbar\in\adsubdifpsi{\uu}$ (again using Eq.~\ref{eqn:psi-subgrad:3a}). \pfpart{Closed:} Suppose $\seq{\xbar_t}$ is a sequence in $\adsubdifpsi{\uu}$ that converges to $\xbar\in\extspace$. Let $\rpair{\uu'}{v'}\in\epi \psi$. Then for all $t$, $-\psi(\uu)\geq \xbar_t\cdot(\uu'-\uu) - v'$, since $\xbar_t\in\adsubdifpsi{\uu}$, using \eqref{eqn:psi-subgrad:3a}. Also, $ \xbar_t\cdot(\uu'-\uu) - v' \rightarrow \xbar\cdot(\uu'-\uu) - v' $ by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}) and Proposition~\ref{prop:lim:eR}(\ref{i:lim:eR:sum}). Together, these imply $-\psi(\uu)\geq \xbar\cdot(\uu'-\uu) - v' $ for all $\rpair{\uu'}{v'}\in\epi \psi$, and thus that $\xbar\in\adsubdifpsi{\uu}$. 
\qedhere \end{proof-parts} \end{proof} Standard subdifferentials are said to be \emph{monotone} in the sense that, for a closed, proper, convex function $f:\Rn\rightarrow\Rext$, if $\uu\in\partial f(\xx)$ and $\vv\in\partial f(\yy)$, then $(\xx-\yy)\cdot(\uu-\vv)\geq 0$ \citep[Corollary~31.5.2]{ROC}. Astral dual subdifferentials have an analogous property, as we show now. Astral (primal) subdifferentials also have such a property, as will be seen shortly in Corollary~\ref{cor:ast-subgrad-monotone}. \begin{theorem} \label{thm:ast-dual-subgrad-monotone} Let $\psi:\Rn\rightarrow\Rext$, let $\uu,\vv\in\Rn$, and assume $\psi(\uu)$ and $-\psi(\vv)$ are summable. Let $\xbar\in\adsubdifpsi{\uu}$ and $\ybar\in\adsubdifpsi{\vv}$. Then \begin{equation} \label{eq:thm:ast-dual-subgrad-monotone:1} \xbar\cdot (\uu-\vv) \geq \ybar\cdot(\uu-\vv). \end{equation} \end{theorem} \begin{proof} We proceed in cases. Suppose first that at least one of $\psi(\uu)$ or $\psi(\vv)$ is finite. Then we can assume without loss of generality that $\psi(\uu)\in\R$ since, if instead $\psi(\vv)\in\R$, we can swap $\xbar$ and $\uu$ with $\ybar$ and $\vv$, which does not alter the theorem's symmetric claim (Eq.~\ref{eq:thm:ast-dual-subgrad-monotone:1}). So assume $\psi(\uu)\in\R$. Then, because $\xbar\in\adsubdifpsi{\uu}$, \[ \psi(\vv) \geq \psi(\uu) + \xbar\cdot(\vv-\uu) \] from \eqref{eqn:psi-subgrad:3-alt}, and since $\psi(\uu)\in\R$ so that downward addition can be replaced by ordinary addition. Similarly, since $\ybar\in\adsubdifpsi{\vv}$, \[ -\psi(\vv) \geq -\psi(\uu) + \ybar\cdot(\uu-\vv) \] from \eqref{eqn:psi-subgrad:3}. Combining yields \[ -\psi(\uu) - \xbar\cdot(\vv-\uu) \geq -\psi(\vv) \geq -\psi(\uu) + \ybar\cdot(\uu-\vv), \] implying \eqref{eq:thm:ast-dual-subgrad-monotone:1} (since $\psi(\uu)\in\R$). In the alternative case, both $\psi(\uu)$ and $\psi(\vv)$ are infinite, and, by our summability assumption, one is $+\infty$ and the other is $-\infty$. 
As in the previous case, we can assume without loss of generality that $\psi(\uu)=+\infty$ and $\psi(\vv)=-\infty$. Then \[ -\infty = \psi(\vv) \geq \psi(\uu) \plusd \xbar\cdot(\vv-\uu) \] by \eqref{eqn:psi-subgrad:3-alt} since $\xbar\in\adsubdifpsi{\uu}$. Since $\psi(\uu)=+\infty$, this is only possible if $\xbar\cdot(\vv-\uu) = -\infty$, implying $\xbar\cdot(\uu-\vv) = +\infty$ and so also \eqref{eq:thm:ast-dual-subgrad-monotone:1}. \end{proof} Theorem~\ref{thm:ast-dual-subgrad-monotone} is not true in general without the summability assumption, as the next example shows: \begin{example} Let $\psi:\R^2\rightarrow\Rext$ be the indicator function of the single point $-\ee_1$ so that $\psi=\indf{\{-\ee_1\}}$. Let $\xbar=\limray{\ee_1}$, $\uu=\ee_2$, $\ybar=\limray{\ee_1}\plusl\ee_2$, and $\vv=\zero$. Then $\psi(\uu)=\psi(\vv)=+\infty$. Further, it can be checked (for instance, using Eq.~\ref{eqn:psi-subgrad:3-alt}) that $\xbar\in\adsubdifpsi{\uu}$ and $\ybar\in\adsubdifpsi{\vv}$. However, $\xbar\cdot(\uu-\vv) = 0 < 1 = \ybar\cdot(\uu-\vv)$, so \eqref{eq:thm:ast-dual-subgrad-monotone:1} does not hold. \end{example} \subsection{Conditions and relations among subgradients} \label{sec:subgrad-inv} In standard convex analysis, it is known that $\partial f$ and $\partial \fstar$ act as inverses of one another in the sense that, for $\xx\in\Rn$ and $\uu\in\Rn$, if $f:\Rn\rightarrow\Rext$ is proper and convex, and if $(\cl f)(\xx) = \fdubs(\xx)=f(\xx)$, then $\uu\in\partial f(\xx)$ if and only if $\xx\in\partial \fstar(\uu)$ (\Cref{pr:stan-subgrad-equiv-props}\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:c}). We also discussed earlier that $\uu\in\partial f(\xx)$ if and only if the Fenchel-Young inequality (Eq.~\ref{eqn:fenchel-stand}) holds with equality. Thus, under the conditions above, the following are equivalent: \begin{letter-compact} \item $f(\xx)+\fstar(\uu) = \xx\cdot\uu$. \item $\uu\in\partial f(\xx)$. \item $\xx\in\partial \fstar(\uu)$.
\end{letter-compact} In this section, we explore in detail the analogous connections between the astral subdifferential of a function $F:\extspace\rightarrow\Rext$ and the astral dual subdifferential of its dual $\Fstar$. We also relate these to when the form of the Fenchel-Young inequality given in \eqref{eqn:ast-fenchel} holds with equality, a connection that was seen already in Theorem~\ref{thm:fenchel-implies-subgrad}. Thus, for a function $F:\extspace\rightarrow\Rext$ and points $\xbar\in\extspace$ and $\uu\in\Rn$, we study how the following three conditions relate to one another: \begin{letter-compact} \item $\Fstar(\uu) = -F(\xbar) \plusd \xbar\cdot\uu$. \item $\uu\in\asubdifF{\xbar}$. \item $\xbar\in\adsubdifFstar{\uu}$. \end{letter-compact} It was seen in Theorem~\ref{thm:fenchel-implies-subgrad} that the first condition generally implies the second (provided $\xbar\in\cldom{F}$), and we will see shortly in Theorem~\ref{thm:asubdif-implies-adsubdif} that the second always implies the third. Under appropriate summability conditions, as in Theorem~\ref{thm:fenchel-implies-subgrad}, we will also see that the three conditions are equivalent to one another. Furthermore, in the centrally important case that $F$ is the extension $\fext$ of a convex function $f:\Rn\rightarrow\Rext$, we prove below (Theorem~\ref{thm:adif-fext-inverses}) that $\uu\in\asubdiffext{\xbar}$ if and only if $\xbar\in\adsubdiffstar{\uu}$, for all $\uu\in\Rn$ and $\xbar\in\cldom{f}$. As a next step, we show that if $\uu\in\asubdifF{\xbar}$ then it always follows that $\xbar\in\adsubdifFstar{\uu}$ and also $\xbar\in\cldom{F}$. 
In addition, if $\xbar\cdot\uu\in\R$ then we can further infer that $F(\xbar)=\Fdub(\xbar)$, thereby generalizing a result from standard convex analysis that if $\partial f(\xx)$ is nonempty then $f(\xx)=\fdubs(\xx)$, for $f:\Rn\rightarrow\Rext$ proper and convex, and any $\xx\in\Rn$ (\Cref{pr:stan-subgrad-equiv-props}\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:c}). \begin{theorem} \label{thm:asubdif-implies-adsubdif} Let $F:\extspace\rightarrow\Rext$, and let $\xbar\in\extspace$ and $\uu\in\Rn$. Suppose $\uu\in\asubdifF{\xbar}$. Then all of the following hold: \begin{letter-compact} \item \label{thm:asubdif-implies-adsubdif:a} $\xbar\in\adsubdifFstar{\uu}$. \item \label{thm:asubdif-implies-adsubdif:c} Either $F(\xbar)=\Fdub(\xbar)$ or $\xbar\cdot\uu\not\in\R$. \end{letter-compact} \end{theorem} \begin{proof} Since $\uu\in\asubdifF{\xbar}$, Proposition~\ref{pr:equiv-ast-subdif-defn}(\ref{pr:equiv-ast-subdif-defn:a},\ref{pr:equiv-ast-subdif-defn:b}) implies that there exists a sequence $\seq{\rpair{\xbar_t}{y_t}}$ in $\epi F$ with $\xbar_t\rightarrow\xbar$, $y_t \rightarrow F(\xbar)$, and \begin{equation} \label{eq:thm:asubdif-implies-adsubdif:2} \xbar_t\cdot\uu - y_t \rightarrow \Fstar(\uu). \end{equation} We use this same sequence for the entire proof. \begin{proof-parts} \pfpart{Part~(\ref{thm:asubdif-implies-adsubdif:a}):} Using the equivalent form of the definition of astral dual subgradient given in \eqref{eqn:psi-subgrad:3-alt}, and setting $\ww=\uu'-\uu$, we can prove $\xbar\in\adsubdifFstar{\uu}$ by showing, for all $\ww\in\Rn$, that \begin{equation} \label{eq:thm:asubdif-implies-adsubdif:1} \Fstar(\uu+\ww) \geq \Fstar(\uu) \plusd \xbar\cdot\ww. \end{equation} Let $\ww\in\Rn$. If either $\Fstar(\uu)=-\infty$ or $\xbar\cdot\ww=-\infty$, then \eqref{eq:thm:asubdif-implies-adsubdif:1} holds trivially since the right-hand side is $-\infty$ in either case. 
Therefore, we assume henceforth that $\Fstar(\uu)>-\infty$ and $\xbar\cdot\ww>-\infty$. From \eqref{eq:thm:asubdif-implies-adsubdif:2}, and since \begin{equation} \label{eq:thm:asubdif-implies-adsubdif:3} \xbar_t\cdot\ww \rightarrow \xbar\cdot\ww \end{equation} (by Theorem~\ref{thm:i:1}(\ref{thm:i:1c})), it follows that $\xbar_t\cdot\uu>-\infty$ and $\xbar_t\cdot\ww>-\infty$ for $t$ sufficiently large; by discarding all other elements of the sequence, we assume this holds for all $t$. Thus, we have \begin{eqnarray*} \Fstar(\uu+\ww) &\geq& \xbar_t\cdot(\uu+\ww) - y_t \\ &=& \xbar_t\cdot\uu + \xbar_t\cdot\ww - y_t \\ &=& (\xbar_t\cdot\uu - y_t) + \xbar_t\cdot\ww. \end{eqnarray*} The inequality is by \eqref{eq:Fstar-def} (since $\rpair{\xbar_t}{y_t}\in\epi F$), and the first equality is by Proposition~\ref{pr:i:1} (since neither $\xbar_t\cdot\uu$ nor $\xbar_t\cdot\ww$ can be $-\infty$). Since this inequality holds for all $t$, it also must hold in the limit. Thus, \[ \Fstar(\uu+\ww) \geq \lim [(\xbar_t\cdot\uu - y_t) + \xbar_t\cdot\ww] = \Fstar(\uu) + \xbar\cdot\ww \] by \eqref{eq:thm:asubdif-implies-adsubdif:2} and \eqref{eq:thm:asubdif-implies-adsubdif:3}, and by continuity of addition (since neither $\Fstar(\uu)$ nor $\xbar\cdot\ww$ is $-\infty$). This proves \eqref{eq:thm:asubdif-implies-adsubdif:1}. \pfpart{Part~(\ref{thm:asubdif-implies-adsubdif:c}):} Suppose $\xbar\cdot\uu\in\R$. Then Theorem~\ref{thm:fenchel-implies-subgrad} implies $\Fstar(\uu) = \xbar\cdot\uu - F(\xbar)$, and therefore $F(\xbar) = \xbar\cdot\uu - \Fstar(\uu)$ since $\xbar\cdot\uu\in\R$. Thus, \[ F(\xbar) \geq \Fdub(\xbar) \geq \xbar\cdot\uu - \Fstar(\uu) = F(\xbar), \] with the inequalities following respectively from Theorem~\ref{thm:fdub-sup-afffcns} and \eqref{eq:psistar-def:2} (applied to $\psi=\Fstar$). \qedhere \end{proof-parts} \end{proof} As discussed earlier, in standard convex analysis, it is known that if $\partial f(\xx)$ is nonempty then $f(\xx)=\fdubs(\xx)$.
Theorem~\ref{thm:asubdif-implies-adsubdif}(\ref{thm:asubdif-implies-adsubdif:c}) shows that if there exists $\uu\in\asubdifF{\xbar}$ for which it also holds that $\xbar\cdot\uu\in\R$ then $F(\xbar)=\Fdub(\xbar)$. Without the additional condition that $\xbar\cdot\uu\in\R$, the theorem would be false, in general. In other words, it is possible that $\uu\in\asubdifF{\xbar}$ but $F(\xbar)\neq\Fdub(\xbar)$. \begin{example} For instance, suppose \[ F(\barx) = \left\{ \begin{array}{cl} 0 & \mbox{if $\barx\in\R$} \\ +\infty & \mbox{otherwise} \end{array} \right. \] for $\barx\in\Rext$, which is convex by Proposition~\ref{pr:1d-cvx}. Let $\barx=+\infty$ and $u=1$. Then it can be checked that $\Fstar(u)=+\infty$ and $\Fdub\equiv 0$. Let $x_t=t$ and $y_t=t/2$ for all $t$. Then $x_t\rightarrow\barx$, $y_t\rightarrow F(\barx)=+\infty$, and $x_t u - y_t \rightarrow \Fstar(u) = +\infty$. Thus, $u\in\asubdifF{\barx}$, but $F(\barx)=+\infty\neq 0 =\Fdub(\barx)$. \end{example} As a consequence of Theorem~\ref{thm:asubdif-implies-adsubdif}(\ref{thm:asubdif-implies-adsubdif:a}), we obtain the following direct corollary of Theorem~\ref{thm:ast-dual-subgrad-monotone} regarding monotonicity of astral subdifferentials: \begin{corollary} \label{cor:ast-subgrad-monotone} Let $F:\extspace\rightarrow\Rext$, let $\xbar,\ybar\in\extspace$, and assume $\Fstar(\uu)$ and $-\Fstar(\vv)$ are summable. Let $\uu\in\asubdifF{\xbar}$ and $\vv\in\asubdifF{\ybar}$. Then \[ \xbar\cdot (\uu-\vv) \geq \ybar\cdot(\uu-\vv). \] \end{corollary} \begin{proof} By Theorem~\ref{thm:asubdif-implies-adsubdif}(\ref{thm:asubdif-implies-adsubdif:a}), $\xbar\in\adsubdifFstar{\uu}$ and $\ybar\in\adsubdifFstar{\vv}$. Applying Theorem~\ref{thm:ast-dual-subgrad-monotone} (with $\psi=\Fstar$) now proves the claim. 
\end{proof} So far, we have taken as our starting point a function $F:\extspace\rightarrow\Rext$, and have considered the conditions discussed above in terms of its astral subdifferential $\asubdifplain{F}$, its conjugate $\Fstar$, and the astral dual subdifferential of that conjugate, $\adsubdifplain{\Fstar}$. We next take a somewhat different approach in which, beginning with a function $\psi:\Rn\rightarrow\Rext$, we focus on its astral dual subdifferential $\adsubdifplain{\psi}$, its astral dual conjugate $\psistarb$, and the astral (primal) subdifferential of that conjugate, $\asubdifplain{\psistarb}$. The next theorem shows how conditions analogous to the ones we have been considering regarding subgradients and the Fenchel-Young inequality can be related to one another in this alternative formulation: \begin{theorem} \label{thm:psi-subgrad-conds} Let $\psi:\Rn\rightarrow\Rext$, and let $\xbar\in\extspace$ and $\uu\in\Rn$. Consider the following statements: \begin{letter-compact} \item \label{thm:psi-subgrad-conds:a} $\psi(\uu) = -\psistarb(\xbar) \plusd \xbar\cdot\uu$ and $\xbar\in\cldom{\psistarb}$. \item \label{thm:psi-subgrad-conds:b} $\uu\in\asubdifpsistarb{\xbar}$ and $\psidub(\uu) = \psi(\uu)$. \item \label{thm:psi-subgrad-conds:c} $\xbar\in\adsubdifpsi{\uu}$ and $\xbar\in\cldom{\psistarb}$. \end{letter-compact} Then statement~(\ref{thm:psi-subgrad-conds:a}) implies statement~(\ref{thm:psi-subgrad-conds:b}), and statement~(\ref{thm:psi-subgrad-conds:b}) implies statement~(\ref{thm:psi-subgrad-conds:c}). \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:psi-subgrad-conds:a}) $\Rightarrow$ (\ref{thm:psi-subgrad-conds:b}): } Suppose statement~(\ref{thm:psi-subgrad-conds:a}) holds. Then \[ -\psistarb(\xbar) \plusd \xbar\cdot\uu = \psi(\uu) \geq \psidub(\uu) \geq -\psistarb(\xbar) \plusd \xbar\cdot\uu. \] The equality is by assumption. The two inequalities follow from Theorem~\ref{thm:psi-geq-psidub} and \eqref{eqn:ast-fenchel}. 
Thus, \[ \psi(\uu) = \psidub(\uu) = -\psistarb(\xbar) \plusd \xbar\cdot\uu, \] implying $\uu\in\asubdifpsistarb{\xbar}$ by Theorem~\ref{thm:fenchel-implies-subgrad} (applied to $F=\psistarb$). \pfpart{(\ref{thm:psi-subgrad-conds:b}) $\Rightarrow$ (\ref{thm:psi-subgrad-conds:c}): } Suppose statement~(\ref{thm:psi-subgrad-conds:b}) holds. Then Proposition~\ref{pr:subgrad-imp-in-cldom}(\ref{pr:subgrad-imp-in-cldom:a}) and Theorem~\ref{thm:asubdif-implies-adsubdif}(\ref{thm:asubdif-implies-adsubdif:a}), applied to $F=\psistarb$, imply that $\xbar\in\cldom{\psistarb}$ and that $\xbar\in\adsubdifpsidub{\uu}$. Therefore, using the form of the definition of astral dual subgradient given in \eqref{eqn:psi-subgrad:3}, we have that for all $\uu'\in\Rn$, \[ -\psi(\uu) = -\psidub(\uu) \geq -\psidub(\uu') \plusd \xbar\cdot(\uu'-\uu) \geq -\psi(\uu') \plusd \xbar\cdot(\uu'-\uu). \] The equality is by assumption, and the last inequality is by Theorem~\ref{thm:psi-geq-psidub}. Thus, $\xbar\in\adsubdifpsi{\uu}$, as claimed. \qedhere \end{proof-parts} \end{proof} When $-\psistarb(\xbar)$ and $\xbar\cdot\uu$ are summable, the next theorem shows that statement~(\ref{thm:psi-subgrad-conds:c}) in Theorem~\ref{thm:psi-subgrad-conds} implies statement~(\ref{thm:psi-subgrad-conds:a}), and therefore that all three statements appearing in that theorem are equivalent to one another. \begin{theorem} \label{thm:psi-subgrad-conds-part2} Let $\psi:\Rn\rightarrow\Rext$, and let $\xbar\in\extspace$ and $\uu\in\Rn$. Suppose $-\psistarb(\xbar)$ and $\xbar\cdot\uu$ are summable, and that $\xbar\in\adsubdifpsi{\uu}$. Then $\psi(\uu) = \xbar\cdot\uu - \psistarb(\xbar)$. 
\end{theorem} \begin{proof} It suffices to prove \begin{equation} \label{eq:thm:psi-subgrad-conds:1} \psi(\uu) \leq \xbar\cdot\uu - \psistarb(\xbar) \end{equation} since, once proved, this will imply \[ \psi(\uu) \geq \psidub(\uu) \geq \xbar\cdot\uu - \psistarb(\xbar) \geq \psi(\uu), \] with the first two inequalities following, as before, from Theorem~\ref{thm:psi-geq-psidub} and \eqref{eqn:ast-fenchel}, thereby proving the claim. We aim therefore to prove \eqref{eq:thm:psi-subgrad-conds:1}. This inequality holds trivially if $\psi(\uu)=-\infty$ or $\xbar\cdot\uu=+\infty$ or $\psistarb(\xbar)=-\infty$. Therefore, we assume henceforth that none of these conditions hold. Let $\lambda\in\R$ be such that $\lambda<\psistarb(\xbar)$. (Such $\lambda$ must exist since $\psistarb(\xbar)>-\infty$.) By the definition of dual conjugate given in \eqref{eq:psistar-def}, there must exist $\rpair{\uu'}{v'}\in\epi\psi$ with $\xbar\cdot\uu'-v'>\lambda$. In particular, this implies $\xbar\cdot\uu'>-\infty$. Also, $\xbar\cdot(-\uu)>-\infty$ since $\xbar\cdot\uu<+\infty$. Thus, \begin{eqnarray} \lambda - \xbar\cdot\uu &=& \lambda + \xbar\cdot(-\uu) \nonumber \\ &\leq& (\xbar\cdot\uu' - v') + \xbar\cdot(-\uu) \nonumber \\ &=& \xbar\cdot(\uu'-\uu) - v' \nonumber \\ &\leq& -\psi(\uu). \label{eq:thm:psi-subgrad-conds:2} \end{eqnarray} The second equality follows from Proposition~\ref{pr:i:1} since $\xbar\cdot\uu'>-\infty$ and $\xbar\cdot(-\uu)>-\infty$. The last inequality uses our assumption that $\xbar\in\adsubdifpsi{\uu}$, together with the definition of astral dual subgradient given in \eqref{eqn:psi-subgrad:3a}. Since $\psi(\uu)>-\infty$, \eqref{eq:thm:psi-subgrad-conds:2} implies that $\xbar\cdot\uu>-\infty$, and thus that $\xbar\cdot\uu\in\R$. Therefore, \eqref{eq:thm:psi-subgrad-conds:2} yields that $\lambda\leq\xbar\cdot\uu-\psi(\uu)$. 
Since this holds for all $\lambda<\psistarb(\xbar)$, it follows that $\psistarb(\xbar)\leq\xbar\cdot\uu-\psi(\uu)$, proving \eqref{eq:thm:psi-subgrad-conds:1} (since $\xbar\cdot\uu\in\R$), and completing the proof. \end{proof} As noted above, Theorems~\ref{thm:psi-subgrad-conds} and~\ref{thm:psi-subgrad-conds-part2} together imply the following equivalence: \begin{corollary} \label{cor:psi-subgrad-conds} Let $\psi:\Rn\rightarrow\Rext$, and let $\xbar\in\extspace$ and $\uu\in\Rn$. Assume the following hold: \begin{item-compact} \item $-\psistarb(\xbar)$ and $\xbar\cdot\uu$ are summable. \item Either $\psi(\uu)>-\infty$ or $\xbar\in\cldom{\psistarb}$. \end{item-compact} Then the following are equivalent: \begin{letter-compact} \item \label{cor:psi-subgrad-conds:a} $\psi(\uu) = \xbar\cdot\uu - \psistarb(\xbar)$. \item \label{cor:psi-subgrad-conds:b} $\uu\in\asubdifpsistarb{\xbar}$ and $\psidub(\uu) = \psi(\uu)$. \item \label{cor:psi-subgrad-conds:c} $\xbar\in\adsubdifpsi{\uu}$. \end{letter-compact} \end{corollary} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{cor:psi-subgrad-conds:a}) $\Rightarrow$ (\ref{cor:psi-subgrad-conds:b}):} Suppose $\psi(\uu) = \xbar\cdot\uu - \psistarb(\xbar)$. If $\psi(\uu)>-\infty$, then this implies $\psistarb(\xbar)<+\infty$. Therefore, our assumption that either $\xbar\in\cldom{\psistarb}$ or $\psi(\uu)>-\infty$ in fact always implies that $\xbar\in\cldom{\psistarb}$. The claim now follows immediately from Theorem~\ref{thm:psi-subgrad-conds}. \pfpart{(\ref{cor:psi-subgrad-conds:b}) $\Rightarrow$ (\ref{cor:psi-subgrad-conds:c}):} Immediate from Theorem~\ref{thm:psi-subgrad-conds}. \pfpart{(\ref{cor:psi-subgrad-conds:c}) $\Rightarrow$ (\ref{cor:psi-subgrad-conds:a}):} Immediate from Theorem~\ref{thm:psi-subgrad-conds-part2}. 
\qedhere \end{proof-parts} \end{proof} Returning to our earlier study of subgradients of a function $F:\extspace\rightarrow\Rext$, we can apply Theorem~\ref{thm:psi-subgrad-conds-part2} to $\psi=\Fstar$ to obtain an analogous result as a corollary: \begin{corollary} \label{cor:adsubdif-implies-fenchel} Let $F:\extspace\rightarrow\Rext$, and let $\xbar\in\extspace$ and $\uu\in\Rn$. Assume the following hold: \begin{item-compact} \item $\xbar\in\adsubdifFstar{\uu}$. \item $-\Fdub(\xbar)$ and $\xbar\cdot\uu$ are summable. \item $-F(\xbar) \plusd \xbar\cdot\uu = \xbar\cdot\uu - \Fdub(\xbar)$. \end{item-compact} Then $\Fstar(\uu) = -F(\xbar) \plusd \xbar\cdot\uu$. \end{corollary} \begin{proof} Our assumptions and Theorem~\ref{thm:psi-subgrad-conds-part2}, with $\psi=\Fstar$, yield \[ \Fstar(\uu) = \xbar\cdot\uu - \Fdub(\xbar) = -F(\xbar) \plusd \xbar\cdot\uu, \] as claimed. \end{proof} For the case that $-F(\xbar)$ and $\xbar\cdot\uu$ are summable and that also $-\Fdub(\xbar)$ and $\xbar\cdot\uu$ are summable, we summarize the various results in the following corollary, which proves the equivalence of the conditions discussed above. Note that these conditions are always satisfied when $\xbar\cdot\uu\in\R$, in which case the condition that $\xbar\cdot\uu-F(\xbar)=\xbar\cdot\uu-\Fdub(\xbar)$ (appearing below in part~(\ref{cor:asub-summable-summary:c})) is equivalent to the simpler closedness condition $F(\xbar)=\Fdub(\xbar)$. Also, the assumption that either $\Fstar(\uu)>-\infty$ or $\xbar\in\cldom{F}$ is fairly minimal for such an equivalence to hold (at least when $\xbar\cdot\uu - F(\xbar) = \xbar\cdot\uu - \Fdub(\xbar)$) since if $\Fstar(\uu)=-\infty$ and $\xbar\not\in\cldom{F}$ then $\uu\not\in\asubdifF{\xbar}$ (by Proposition~\ref{pr:subgrad-imp-in-cldom}\ref{pr:subgrad-imp-in-cldom:a}), but $\xbar\in\adsubdifFstar{\uu}$ (from its definition in Eq.~\ref{eqn:psi-subgrad:3a}).
Similar comments can be made about the analogous assumption appearing in Corollary~\ref{cor:psi-subgrad-conds}. \begin{corollary} \label{cor:asub-summable-summary} Let $F:\extspace\rightarrow\Rext$, and let $\xbar\in\extspace$ and $\uu\in\Rn$. Assume the following hold: \begin{item-compact} \item $-F(\xbar)$ and $\xbar\cdot\uu$ are summable. \item $-\Fdub(\xbar)$ and $\xbar\cdot\uu$ are summable. \item Either $\Fstar(\uu)>-\infty$ or $\xbar\in\cldom{F}$. \end{item-compact} Then the following are equivalent: \begin{letter-compact} \item \label{cor:asub-summable-summary:a} $\Fstar(\uu) = \xbar\cdot\uu - F(\xbar)$. \item \label{cor:asub-summable-summary:b} $\uu\in\asubdifF{\xbar}$. \item \label{cor:asub-summable-summary:c} $\xbar\in\adsubdifFstar{\uu}$ and $\xbar\cdot\uu - F(\xbar) = \xbar\cdot\uu - \Fdub(\xbar)$. \end{letter-compact} \end{corollary} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{cor:asub-summable-summary:a}) $\Rightarrow$ (\ref{cor:asub-summable-summary:b}):} Suppose $\Fstar(\uu) = \xbar\cdot\uu - F(\xbar)$. Similar to the proof of Corollary~\ref{cor:psi-subgrad-conds}, if $\Fstar(\uu)>-\infty$ then $F(\xbar)<+\infty$. Therefore, our assumption that either $\Fstar(\uu)>-\infty$ or $\xbar\in\cldom{F}$ actually implies that $\xbar\in\cldom{F}$. The claim then follows immediately from Theorem~\ref{thm:fenchel-implies-subgrad}. \pfpart{(\ref{cor:asub-summable-summary:b}) $\Rightarrow$ (\ref{cor:asub-summable-summary:c}):} This follows directly from Theorem~\ref{thm:asubdif-implies-adsubdif}(\ref{thm:asubdif-implies-adsubdif:c}) since if either $F(\xbar)=\Fdub(\xbar)$ or if $\xbar\cdot\uu\in\{-\infty,+\infty\}$ then we must have $\xbar\cdot\uu - F(\xbar) = \xbar\cdot\uu - \Fdub(\xbar)$ (using our summability assumptions). \pfpart{(\ref{cor:asub-summable-summary:c}) $\Rightarrow$ (\ref{cor:asub-summable-summary:a}):} Immediate from Corollary~\ref{cor:adsubdif-implies-fenchel}. 
\qedhere \end{proof-parts} \end{proof} We turn next to when $F$ is the extension $\fext$ of a convex function $f:\Rn\rightarrow\Rext$, a primary focus of this book. In this case, the astral subdifferential of $\fext$ and the astral dual subdifferential of $\fstar$ are inverses over $\xbar\in\cldom{f}$ and $\uu\in\Rn$ in the sense that $\uu\in\asubdiffext{\xbar}$ if and only if $\xbar\in\adsubdiffstar{\uu}$ for all such pairs. We prove this equivalence in the next theorem. First, we relate the astral dual subgradients of the conjugates of $f$ and its linear shift, $\fminusu$, which will be used in the proof. \begin{proposition} \label{pr:fminusu-ast-dual-subgrad} Let $f:\Rn\rightarrow\Rext$, let $\uu\in\Rn$, and let $\fminusu$ be $f$'s linear shift by $\uu$. Then $\adsubdiffstar{\uu} = \adsubdiffminusustar{\zero}$. \end{proposition} \begin{proof} Let $\xbar\in\extspace$. Then $\xbar\in\adsubdiffstar{\uu}$ if and only if the formulation of astral dual subgradient given in \eqref{eqn:psi-subgrad:3-alt} is satisfied so that \[ \fstar(\ww) \geq \fstar(\uu) \plusd \xbar\cdot(\ww-\uu) \] for $\ww\in\Rn$. Replacing $\ww$ with $\ww+\uu$, this is equivalent to \[ \fstar(\ww+\uu) \geq \fstar(\uu) \plusd \xbar\cdot\ww \] for $\ww\in\Rn$. By Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:b}), this is equivalent to \[ \fminusustar(\ww) \geq \fminusustar(\zero) \plusd \xbar\cdot\ww \] for $\ww\in\Rn$, which holds if and only if $\xbar\in\adsubdiffminusustar{\zero}$ (again by the formulation in Eq.~\ref{eqn:psi-subgrad:3-alt}). \end{proof} \begin{theorem} \label{thm:adif-fext-inverses} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\xbar\in\extspace$ and $\uu\in\Rn$. Let $\fminusu$ be $f$'s linear shift by $\uu$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:adif-fext-inverses:a} $\uu\in\asubdiffext{\xbar}$. \item \label{thm:adif-fext-inverses:c} $\xbar\in\adsubdiffstar{\uu}$ and $\xbar\in\cldom{f}$. 
\item \label{thm:adif-fext-inverses:b} $\fminusuext(\xbar)=-\fstar(\uu)$ and $f\not\equiv+\infty$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:adif-fext-inverses:b}) $\Rightarrow$ (\ref{thm:adif-fext-inverses:a}):} This is immediate from Theorem~\refequiv{thm:fminus-subgrad-char}{thm:fminus-subgrad-char:a}{thm:fminus-subgrad-char:d}. \pfpart{(\ref{thm:adif-fext-inverses:a}) $\Rightarrow$ (\ref{thm:adif-fext-inverses:c}):} This is immediate from Theorem~\ref{thm:asubdif-implies-adsubdif}(\ref{thm:asubdif-implies-adsubdif:a}) and Proposition~\ref{pr:subgrad-imp-in-cldom}(\ref{pr:subgrad-imp-in-cldom:a}) (with $F=\fext$), noting that $\cldomfext=\cldom{f}$ (by Proposition~\ref{pr:h:1}\ref{pr:h:1c}), and $\fextstar=\fstar$ (by Proposition~\ref{pr:fextstar-is-fstar}). \pfpart{(\ref{thm:adif-fext-inverses:c}) $\Rightarrow$ (\ref{thm:adif-fext-inverses:b}):} Suppose $\xbar\in\adsubdiffstar{\uu}$ and $\xbar\in\cldom{f}$. Since $\cldom{f}$ is not empty, $f\not\equiv+\infty$. Our assumptions imply that $\xbar\in\cldom{f}=\cldom{\fminusu}$ by Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:a}), and that $\xbar\in\adsubdiffstar{\uu}=\adsubdiffminusustar{\zero}$ by Proposition~\ref{pr:fminusu-ast-dual-subgrad}. Consequently, $\fminusuext(\xbar)=\fminusudub(\xbar)$ by Theorem~\ref{thm:fext-neq-fdub} (noting that, since $f$ is convex, $\fminusu$ is as well). Thus, the conditions of Corollary~\ref{cor:adsubdif-implies-fenchel} are satisfied with $F=\fminusuext$, and $\uu$, as it appears in that corollary, set to $\zero$ (and using $\fminusuextstar=\fminusustar$ by Proposition~\ref{pr:fextstar-is-fstar}). Therefore, \[ \fminusuext(\xbar) = -\fminusustar(\zero) = -\fstar(\uu) \] with the first equality from Corollary~\ref{cor:adsubdif-implies-fenchel} and the second from Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:b}). 
\qedhere \end{proof-parts} \end{proof} \begin{example}[Astral subgradients of an affine function] \label{ex:affine-subgrad} For instance, we can use Theorem~\ref{thm:adif-fext-inverses} to compute the astral subgradients of the affine function $f(\xx)=\xx\cdot\ww+b$ for $\xx\in\Rn$, where $\ww\in\Rn$ and $b\in\R$. Its conjugate is \[ \fstar(\uu) = \left\{ \begin{array}{cl} -b & \mbox{if $\uu=\ww$} \\ +\infty & \mbox{otherwise.} \end{array} \right. \] Using the formulation of astral dual subgradient given in \eqref{eqn:psi-subgrad:3-alt} (with $\psi=\fstar$), it can be seen that $\xbar\in\adsubdiffstar{\ww}$ for all $\xbar\in\extspace$. So, as expected, $\ww$ is an astral subgradient of $\fext$ at every point $\xbar$ by Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c} (noting that $\dom{f}=\Rn$ so $\cldom{f}=\extspace$). But there are other astral subgradients for points at the ``edges'' of $\Rn$. In particular, if $\uu\neq\ww$ then \eqref{eqn:psi-subgrad:3-alt} is satisfied at $\uu'=\ww$ if and only if $\xbar\cdot(\ww-\uu)=-\infty$. (For instance, $\xbar=\limray{(\uu-\ww)}$ is such a point.) Thus, by Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c}, $\uu\in\asubdiffext{\xbar}$ if and only if either $\uu=\ww$ or $\xbar\cdot(\ww-\uu)=-\infty$. Alternatively, all this can also be seen using part~(\ref{thm:adif-fext-inverses:b}) of Theorem~\ref{thm:adif-fext-inverses} since $\fminusuext(\xbar)=\xbar\cdot(\ww-\uu) - b$. The ``additional'' astral subgradients (that is, those other than $\ww$) capture the supporting halfspaces that ``bend around'' the graph of the function, behavior that has been seen before (even for standard subgradients as in Example~\ref{ex:standard-subgrad}).
For example, if $n=1$ and $f(x)=xw+b$ for $x\in\R$, where $w,b\in\R$, then \begin{equation} \label{eq:subgrad-affine-n=1} \asubdiffext{\barx} = \left\{ \begin{array}{cl} (-\infty,w] & \mbox{if $\barx=-\infty$} \\ \{w\} & \mbox{if $\barx\in\R$} \\ {[w,+\infty)} & \mbox{if $\barx=+\infty$.} \end{array} \right. \end{equation} \end{example} The equivalence given in Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c} is no longer true in general if the requirement that $\xbar\in\cldom{f}$ is removed. As the next example shows, it is possible that $\xbar\in\adsubdiffstar{\uu}$ but $\uu\not\in\asubdiffext{\xbar}$. \begin{example} \label{ex:need-xbar-in-cldom} In $\R^2$, let $f$ be given by \[ f(\xx) = f(x_1,x_2) = \left\{ \begin{array}{cl} 0 & \mbox{if $x_2\geq 0$} \\ +\infty & \mbox{otherwise.} \end{array} \right. \] This function is convex, closed, proper, and is nonnegative everywhere, and therefore reduction-closed. Its conjugate can be computed to be \[ \fstar(\uu) = \fstar(u_1,u_2) = \left\{ \begin{array}{cl} 0 & \mbox{if $u_1=0$ and $u_2\leq 0$} \\ +\infty & \mbox{otherwise.} \end{array} \right. \] Let $\xbar=\limray{\ee_1}\plusl(-\ee_2)$ and $\uu=\ee_1$. Then $\xbar\in\adsubdiffstar{\uu}$, as can be seen by checking \eqref{eqn:psi-subgrad:3}, noting that if $\fstar(\uu')<+\infty$ then $\uu'\cdot\ee_1=0<1$ so that $\xbar\cdot(\uu'-\uu)=-\infty$. On the other hand, $\xbar\not\in\cldom{f}$ (since the open set $\{\xbar'\in\extspac{2} : \xbar'\cdot\ee_2 < 0\}$ includes $\xbar$ but is disjoint from $\dom f$). Therefore, $\uu\not\in\asubdiffext{\xbar}$ by Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c}. \end{example} As an application of the foregoing, we can now straightforwardly compute the subdifferentials $\asubdiffext{\barx}$ for a convex function $f:\R\rightarrow\Rext$ (with $f\not\equiv+\infty$). 
Indeed, if $x\in\R$ then $\asubdiffext{x} = \partial (\lsc f)(x)$ by Proposition~\ref{pr:asubdiffext-at-x-in-rn}(\ref{pr:asubdiffext-at-x-in-rn:b}). The remaining subdifferentials at $\pm\infty$ are given by the next proposition. For instance, it can be checked that these are consistent with Example~\ref{ex:affine-subgrad} for affine functions on $\R$, whose astral subgradients are given in \eqref{eq:subgrad-affine-n=1}. \begin{proposition} \label{pr:subdif-in-1d} Let $f:\R\rightarrow\Rext$ be convex with $f\not\equiv+\infty$. Let $u\in\R$. Then the following hold: \begin{letter-compact} \item \label{pr:subdif-in-1d:a} $u\in\asubdiffext{-\infty}$ if and only if $u\leq \inf(\dom{\fstar})$ and $-\infty\in\cldom{f}$. \item \label{pr:subdif-in-1d:b} $u\in\asubdiffext{+\infty}$ if and only if $u\geq \sup(\dom{\fstar})$ and $+\infty\in\cldom{f}$. \end{letter-compact} \end{proposition} \begin{proof} We prove only part~(\ref{pr:subdif-in-1d:b}); the proof for part~(\ref{pr:subdif-in-1d:a}) is symmetric (or can be derived from part~(\ref{pr:subdif-in-1d:b}) applied to $x\mapsto f(-x)$). Suppose first that $u\geq \sup(\dom{\fstar})$ and $+\infty\in\cldom{f}$. We claim that $+\infty\in\adsubdiffstar{u}$, meaning, from the definition in \eqref{eqn:psi-subgrad:3-alt}, that \begin{equation} \label{eq:pr:subdif-in-1d:1} \fstar(u') \geq \fstar(u) \plusd (+\infty)\cdot(u'-u) \end{equation} for all $u'\in\R$. If either $\fstar(u')=+\infty$ or $u'=u$, then \eqref{eq:pr:subdif-in-1d:1} is immediate. Otherwise, $u'\in\dom{\fstar}$ and $u'\neq u$, implying $u'\leq\sup(\dom{\fstar})\leq u$, and so actually that $u'<u$. Therefore, $(+\infty)\cdot(u'-u)=-\infty$, implying \eqref{eq:pr:subdif-in-1d:1} in this case as well. Thus, $+\infty\in\adsubdiffstar{u}$. Since also $+\infty\in\cldom{f}$, this implies $u\in\asubdiffext{+\infty}$ by Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c}. 
For the converse, suppose now that $u\in\asubdiffext{+\infty}$. By Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c}, this implies that $+\infty\in\cldom{f}$ and that $+\infty\in\adsubdiffstar{u}$, meaning that \eqref{eq:pr:subdif-in-1d:1} holds for all $u'\in\R$. Suppose $u'\in\dom{\fstar}$. Then $\fstar(u')<+\infty$ and also $\fstar(u)>-\infty$ (since $f\not\equiv+\infty$). Combined with \eqref{eq:pr:subdif-in-1d:1}, these imply that $u'-u\leq 0$, since otherwise the right-hand side would be equal to $+\infty$. Thus, $u\geq u'$ for all $u'\in\dom{\fstar}$; therefore, $u\geq\sup(\dom{\fstar})$ as claimed. \end{proof} In general, if $f:\Rn\rightarrow\Rext$ is convex and proper, then \[ \ri(\dom{\fstar}) \subseteq \rangedif{f} \subseteq \dom{\fstar} \] where $\rangedif{f}=\bigcup_{\xx\in\Rn} \partial f(\xx)$, by \Cref{roc:thm23.4}. Consequently, assuming $f:\R\rightarrow\Rext$ is convex and proper, Proposition~\ref{pr:subdif-in-1d} can be re-stated with $\inf(\dom{\fstar})$ and $\sup(\dom{\fstar})$ replaced by $\inf(\rangedif{f})$ and $\sup(\rangedif{f})$, respectively. \subsection{Strict astral subgradients} In studying the astral subgradients of a function $F:\extspace\rightarrow\Rext$, we will see that it is sometimes more natural to only consider subgradients $\uu$ for which $\Fstar(\uu)\in\R$. As defined next, such astral subgradients are said to be strict: \begin{definition} Let $F:\extspace\rightarrow\Rext$, and let $\uu\in\Rn$ and $\xbar\in\extspace$. We say that $\uu$ is a \emph{strict astral subgradient} of $F$ at $\xbar$ if $\uu\in\asubdifF{\xbar}$ and $\Fstar(\uu)\in\R$. The \emph{strict astral subdifferential} of $F$ at $\xbar$, denoted $\basubdifF{\xbar}$, is the set of all such strict astral subgradients of $F$ at $\xbar$. \end{definition} Equivalently, as defined in Definition~\ref{def:ast-subgrad}, an astral subgradient is strict if and only if that definition holds with $\beta\in\R$. 
\begin{example}[Strict astral subgradients of an affine function] \label{ex:affine-ben-subgrad} For instance, let $f(\xx)=\xx\cdot\ww+b$ as in Example~\ref{ex:affine-subgrad}. As seen in that example, $\ww\in\asubdiffext{\xbar}$ at every point $\xbar\in\extspace$ and also $\dom{\fstar}=\{\ww\}$. Therefore, for all $\xbar\in\extspace$, $\ww$ is the only strict astral subgradient at $\xbar$; that is, $\basubdiffext{\xbar}=\{\ww\}$. \end{example} Thus, for an affine function $f$, as in this example, $\ww$ is the unique strict astral subgradient at every point. This is arguably more natural than what was seen for $\fext$'s (general) astral subgradients in Example~\ref{ex:affine-subgrad}, matching the behavior of $f$'s standard gradient, which is also $\ww$ at every point. More generally, we have seen that if $\uu\in\asubdiffext{\xbar}$ then $\xbar\in\adsubdiffstar{\uu}$ by Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c}. But in standard convex analysis, no proper convex function has subgradients at points outside its effective domain (\Cref{roc:thm23.4}\ref{roc:thm23.4:b}), suggesting again that the most natural astral dual subgradients of $\fstar$ (corresponding to astral subgradients of $\fext$) are those at points $\uu\in\dom\fstar$. Indeed, in Section~\ref{sec:dual-subdiff-not-empty}, we will discuss in more detail the nature of astral dual subgradients of a function at points outside its effective domain, and will see that they are of a somewhat different character. On the other hand, for some purposes, it makes more sense to consider astral subgradients that are not necessarily strict. For instance, we saw in Proposition~\ref{pr:asub-zero-is-min} that a function $F$ (other than $F\equiv+\infty$) is minimized at a point $\xbar$ if and only if $\zero$ is an astral subgradient of $F$ at $\xbar$. However, this fact is no longer true if we only consider strict astral subgradients.
(For instance, if $f(x)=x$, for $x\in\R$, then $\fext$ is minimized at $-\infty$; correspondingly, $0$ is an astral subgradient of $\fext$ at $-\infty$, but it is not a strict astral subgradient.) In Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c}, for a convex function $f:\Rn\rightarrow\Rext$, we saw that there is an inverse relationship between the astral subgradients of $\fext$ and the astral dual subgradients of $\fstar$ at pairs $\xbar\in\extspace$ and $\uu\in\Rn$, provided $\xbar\in\cldom{f}$. The next theorem shows that this same inverse relationship holds if instead $\fstar(\uu)\in\R$ (so that the considered astral subgradients are all strict). \begin{theorem} \label{thm:strict-adif-fext-inverses} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\xbar\in\extspace$ and $\uu\in\Rn$. Assume $\fstar(\uu)\in\R$. Then $\uu\in\asubdiffext{\xbar}$ (or equivalently, $\uu\in\basubdiffext{\xbar}$) if and only if $\xbar\in\adsubdiffstar{\uu}$. \end{theorem} \begin{proof} The ``only if'' direction follows immediately from Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c}. For the converse, suppose $\xbar\in\adsubdiffstar{\uu}$. Note that $f\not\equiv+\infty$ since otherwise we would have $\fstar(\uu)=-\infty$. Let $\fminusu$ be the linear shift of $f$ by $\uu$. Then $\fminusustar(\zero)=\fstar(\uu)\in\R$ and $\xbar\in\adsubdiffminusustar{\zero}$ by Propositions~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:b}) and~\ref{pr:fminusu-ast-dual-subgrad}. From Theorem~\ref{thm:psi-subgrad-conds-part2} (with $\psi=\fminusustar$, and with $\uu$, as it appears in that theorem, set to $\zero$), it follows that $\fminusustar(\zero)=-\fminusudub(\xbar)$. Thus, $\fminusudub(\xbar)\in\R$, implying $\fminusuext(\xbar)=\fminusudub(\xbar)$ by Theorem~\ref{thm:fext-neq-fdub}. 
Combining, we have argued \[ \fminusuext(\xbar) = \fminusudub(\xbar) = -\fminusustar(\zero) = -\fstar(\uu), \] proving $\uu\in\asubdiffext{\xbar}$ by Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:b}. \end{proof} As a corollary of Theorem~\ref{thm:asubdiff-is-convex}, the strict astral subdifferentials of an extension are always convex: \begin{corollary} \label{cor:ben-subdif-convex} Let $f:\Rn\rightarrow\Rext$ and let $\xbar\in\extspace$. Then $\basubdiffext{\xbar}$ is convex. \end{corollary} \begin{proof} If $f\equiv+\infty$ then $\fstar\equiv-\infty$ so $\basubdiffext{\xbar}=\emptyset$, which is vacuously convex. We therefore assume $f\not\equiv+\infty$ henceforth. This implies $\fstar>-\infty$, so that \[ \basubdiffext{\xbar} = \asubdiffext{\xbar} \cap (\dom{\fstar}). \] That $\asubdiffext{\xbar}$ is convex was proved in Theorem~\ref{thm:asubdiff-is-convex}. Also, $\fstar$ is convex, so its effective domain, $\dom{\fstar}$, is convex as well. Therefore, $\basubdiffext{\xbar}$ is convex. \end{proof} For the extension of a proper, convex function, all astral subgradients are strict at any finite point in $\Rn$: \begin{proposition} \label{pr:ben-subgrad-at-x-in-rn} Let $f:\Rn\rightarrow\Rext$ be proper and convex, and let $\xx\in\Rn$. Then \[ \basubdiffext{\xx} = \asubdiffext{\xx} = \partial (\lsc f)(\xx). \] \end{proposition} \begin{proof} Let $h = \lsc f$. We prove \begin{equation} \label{eq:pr:ben-subgrad-at-x-in-rn:1} \basubdifhext{\xx} = \asubdifhext{\xx} = \partial h(\xx), \end{equation} which implies the proposition since $\hext=\fext$ (by \Cref{pr:h:1}\ref{pr:h:1aa}). The second equality of \eqref{eq:pr:ben-subgrad-at-x-in-rn:1} was proved in Proposition~\ref{pr:asubdiffext-at-x-in-rn}. That $\basubdifhext{\xx} \subseteq \asubdifhext{\xx}$ is immediate. For the reverse inclusion, let $\uu\in \asubdifhext{\xx} = \partial h(\xx)$. Since $h$ is lower semi-continuous, convex and proper, it is also closed.
Therefore, $\xx\in\partial \hstar(\uu)$ (by \Cref{pr:stan-subgrad-equiv-props}\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:c}), implying $\uu\in\dom\hstar$ (by \Cref{roc:thm23.4}\ref{roc:thm23.4:b}). Thus, $\uu\in \basubdifhext{\xx}$. \end{proof} \subsection{Dual subdifferentials are never empty} \label{sec:dual-subdiff-not-empty} In standard convex analysis, a convex function has a nonempty subdifferential at every point in the relative interior of its effective domain (\Cref{roc:thm23.4}\ref{roc:thm23.4:a}). Nevertheless, it is possible for the function to have no subgradients at other points, as indeed will be the case at all points outside its effective domain, and possibly at some or all of its relative boundary points. In contrast, for any convex function $\psi:\Rn\rightarrow\Rext$, the astral dual subdifferential is nonempty at \emph{every} point, as shown in the next theorem. This same result is also proved by \citet[Proposition~3.2]{waggoner21} under the additional assumptions that $\psi>-\infty$ and $\uu\in\dom\psi$. \begin{theorem} \label{thm:adsubdiff-nonempty} Let $\psi:\Rn\rightarrow\Rext$ be convex, and let $\uu\in\Rn$. Then $\psi$ has an astral dual subgradient at $\uu$; that is, $\adsubdifpsi{\uu}\neq\emptyset$. \end{theorem} \begin{proof} If either $\psi\equiv+\infty$ or $\psi(\uu)=-\infty$, then actually $\adsubdifpsi{\uu}=\extspace$, as can be seen from \eqref{eqn:psi-subgrad:3-alt}, so we assume henceforth that $\psi\not\equiv+\infty$ and that $\psi(\uu)>-\infty$. The main idea of the proof is to apply the separation results from Section~\ref{sec:ast-def-sep-thms} to derive an astral dual subgradient. To this end, let $U=\epi{\psi}$ and \[ V = \Braces{ \rpair{\uu}{v} : v\in\R, v < \psi(\uu) } = \{\uu\}\times \bigParens{-\infty,\psi(\uu)}. \] Then $U$ and $V$ are nonempty, disjoint, convex subsets of $\Rnp$. 
(The set $U$ is convex because $\psi$ is convex, and $V$ is convex, being either a line or a halfline without its endpoint.) Therefore, by Theorem~\ref{thm:ast-def-sep-cvx-sets}, these sets are astral strongly separated so that, by Theorem~\ref{thm:ast-str-sep-equiv-no-trans}, there must exist a point $\zbar\in\extspacnp$ such that \begin{equation} \label{eq:thm:adsubdiff-nonempty:1} \sup_{\ww\in U-V} \zbar\cdot\ww = \sup_{\scriptontop{\rpair{\uu'}{v'}\in\epi{\psi},} {v\in (-\infty,\psi(\uu))} } \zbar\cdot\Parens{\rpair{\uu'-\uu}{v'-v}} < 0. \end{equation} \begin{proof-parts} \pfpart{Case $\psi(\uu)\in\R$:} In this case, setting $\uu'=\uu$, $v'=\psi(\uu)$, and $v=\psi(\uu) - 1$, \eqref{eq:thm:adsubdiff-nonempty:1} implies that $\zbar\cdot\rpair{\zero}{1} < 0$ (since $\rpair{\uu'}{v'}\in U$ and $\rpair{\uu}{v}\in V$). Since $\zbar\cdot\ww\leq 0$ for all $\ww\in U - V$, we can thus apply Lemma~\ref{lem:sep-one-pt-finite}, with $U$, $W$, and $\xbar$, as they appear in that lemma, set to $U-V$, $\{\rpair{\zero}{1}\}$, and $\zbar$, respectively. This yields that there exists $\zbar'\in\extspacnp$ such that $\zbar'\cdot\ww\leq 0$ for all $\ww\in U-V$, and $\zbar'\cdot\rpair{\zero}{1}=-y$ for some $y\in\Rstrictpos$. By Proposition~\ref{pr:xy-pairs-props}(\ref{pr:xy-pairs-props:d}), we therefore can write $\zbar'=\rpair{\xbar}{-y}$ for some $\xbar\in\extspace$. Consequently, for all $\rpair{\uu'}{v'}\in\epi{\psi}$ and for all $v\in (-\infty,\psi(\uu))$, we have that \[ \xbar\cdot(\uu'-\uu) - y (v'-v) = \zbar'\cdot\Parens{\rpair{\uu'}{v'} - \rpair{\uu}{v}} \leq 0, \] where the equality is by Proposition~\ref{pr:xy-pairs-props}(\ref{pr:xy-pairs-props:b}). By algebra, this implies that \[ -v \geq \frac{\xbar}{y} \cdot (\uu'-\uu) - v'. \] Since this holds for all $v<\psi(\uu)$, we then have that \[ -\psi(\uu) \geq \frac{\xbar}{y} \cdot (\uu'-\uu) - v' \] for all $\rpair{\uu'}{v'}\in\epi{\psi}$. 
Therefore, $(1/y) \xbar \in \adsubdifpsi{\uu}$ since it satisfies \eqref{eqn:psi-subgrad:3a}. \pfpart{Case $\psi(\uu)=+\infty$:} Let $\xbar=\homat\zbar$, where $\homat=[\Idnn,\zerov{n}]$ (as in Eq.~\ref{eqn:homat-def}). Let $\rpair{\uu'}{v'}\in\epi{\psi}$. Then \begin{equation} \label{eq:thm:adsubdiff-nonempty:2} \xbar\cdot(\uu'-\uu) = (\homat\zbar)\cdot(\uu'-\uu) = \zbar\cdot(\trans{\homat}(\uu'-\uu)) = \zbar\cdot\rpair{\uu'-\uu}{0} < 0, \end{equation} where the second equality is by Theorem~\ref{thm:mat-mult-def}, and the inequality is from \eqref{eq:thm:adsubdiff-nonempty:1}, applied with $v=v'$ (which is in $(-\infty,\psi(\uu))$). Therefore, \[ -\psi(\uu) \geq \limray{\xbar}\cdot(\uu'-\uu) - v' \] since, by \eqref{eq:thm:adsubdiff-nonempty:2}, $\limray{\xbar}\cdot(\uu'-\uu)=-\infty$. Since this holds for all $\rpair{\uu'}{v'}\in\epi{\psi}$, $\limray{\xbar}\in \adsubdifpsi{\uu}$ (again, since it satisfies Eq.~\ref{eqn:psi-subgrad:3a}). \qedhere \end{proof-parts} \end{proof} For any convex function $\psi:\Rn\rightarrow\Rext$, Theorem~\ref{thm:adsubdiff-nonempty} shows there exists an astral dual subgradient $\xbar\in\adsubdifpsi{\zero}$. Combined with Proposition~\ref{pr:asub-zero-is-min} and Corollary~\refequiv{cor:psi-subgrad-conds}{cor:psi-subgrad-conds:b}{cor:psi-subgrad-conds:c}, this implies that $\xbar$ minimizes $\psistarb$, assuming $\psi(\zero)>-\infty$. Thus, every function $\psistarb$ that is an astral dual conjugate of some convex function $\psi:\Rn\rightarrow\Rext$ with $\psi(\zero)>-\infty$ must have a minimizer in $\extspace$. As a special case, if $\psi=\fstar$ for any function $f:\Rn\rightarrow\Rext$ (with $f\not\equiv+\infty$), this implies $\fdub$ must have a minimizer. If $f$ is reduction-closed, then $\fext=\fdub$, so $\fext$ must have a minimizer as well; of course, this was already known (and with fewer assumptions) from Proposition~\ref{pr:fext-min-exists}. 
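For a concrete illustration of Theorem~\ref{thm:adsubdiff-nonempty} (a worked example, using nothing beyond the definition in \eqref{eqn:psi-subgrad:3}; it is not needed in what follows), let $\psi:\R\rightarrow\Rext$ be the indicator function of $\{0\}$, so that $\psi(0)=0$ and $\psi(u)=+\infty$ for $u\neq 0$. This function has no standard subgradient at $u=1$ since $1\not\in\dom{\psi}$. Nevertheless, by \eqref{eqn:psi-subgrad:3}, $\barx\in\adsubdifpsi{1}$ if and only if
\[
  -\infty = -\psi(1) \geq -\psi(w) \plusd \barx\cdot(w-1)
\]
for all $w\in\R$. This holds automatically whenever $\psi(w)=+\infty$, so only $w=0$ is relevant, for which the condition becomes $\barx\cdot(-1)=-\infty$, that is, $\barx=+\infty$. Thus, $\adsubdifpsi{1}=\{+\infty\}$, which is indeed nonempty, as guaranteed by the theorem.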
As noted earlier, if $\uu\not\in\dom\psi$, then $\psi$ has no standard subgradient at $\uu$, but nevertheless has an astral dual subgradient, by Theorem~\ref{thm:adsubdiff-nonempty}. The next proposition characterizes these dual subgradients: \begin{proposition} \label{pr:dual-subgrad-outside-dom} Let $\psi:\Rn\rightarrow\Rext$, let $\uu\in\Rn$, and suppose $\psi(\uu)=+\infty$. Let $\xbar\in\extspace$. Then $\xbar\in\adsubdifpsi{\uu}$ if and only if for all $\ww\in\dom\psi$, $\xbar\cdot (\ww-\uu)=-\infty$. \end{proposition} \begin{proof} By the definition of astral dual subgradient given in \eqref{eqn:psi-subgrad:3}, $\xbar\in\adsubdifpsi{\uu}$ if and only if for all $\ww\in\Rn$, \[ -\infty = -\psi(\uu) \geq -\psi(\ww)\plusd \xbar\cdot(\ww-\uu). \] This equation holds if and only if either of the terms on the right-hand side is equal to $-\infty$, that is, if and only if either $\psi(\ww)=+\infty$ or $\xbar\cdot(\ww-\uu)=-\infty$. This proves the proposition. \end{proof} In particular, Proposition~\ref{pr:dual-subgrad-outside-dom} means that if $\uu\not\in\dom{\psi}$ and $\xbar\in\adsubdifpsi{\uu}$ then \eqref{eq:thm:ast-str-sep-equiv-no-trans:3}, which defines a form of astral-defined separation, must also hold for the sets $U=\dom{\psi}$ and $V=\{\uu\}$. Thus, such astral dual subgradients are closely related to the astral strong separation of $\uu$ from $\dom{\psi}$. Theorem~\ref{thm:adsubdiff-nonempty} shows that if $\psi:\Rn\rightarrow\Rext$ is convex, then it has an astral dual subgradient at every point. The next theorem shows that the converse holds as well, that is, that this latter property implies convexity. The theorem also proves another equivalence, as we now explain: In studying conjugacy and biconjugates (Chapter~\ref{sec:conjugacy}), we considered when a function is equal to the pointwise supremum over all affine functions that it majorizes. 
In standard convex analysis, as has previously been discussed, for a function $\psi:\Rn\rightarrow\Rext$, the biconjugate $\psidubs$ is exactly this pointwise supremum over all majorized functions of the form $\uu\mapsto \xx\cdot\uu+\beta$, for $\uu\in\Rn$, and for some $\xx\in\Rn$ and $\beta\in\R$. Furthermore, it is known that $\psi=\psidubs$ if and only if $\psi$ is convex and closed. The next theorem considers expressing $\psi$ in a similar fashion using instead functions of the form $\uu\mapsto \xbar\cdot(\uu-\uu_0)+\beta$, for $\uu\in\Rn$, and for some $\xbar\in\extspace$, $\uu_0\in\Rn$, and $\beta\in\R$. These can be viewed informally as an ``affine'' form of the functions $\ph{\xbar}(\uu) = \xbar\cdot\uu$ studied in Section~\ref{subsec:astral-pts-as-fcns}. As we show next, Theorem~\ref{thm:adsubdiff-nonempty} implies that every convex function is equal to the pointwise supremum over all majorized functions of this form. Importantly, no other conditions are required beyond convexity. The equivalence given next was previously proved by \citet[Propositions~3.7 and~3.10]{waggoner21} under the additional assumption that $\psi>-\infty$, and requiring, in part~(\ref{thm:conv-equiv-dualsubgrad:b}), only that a subgradient exist at points in $\dom\psi$. Thus, the version given here is a slight generalization. \begin{theorem} \label{thm:conv-equiv-dualsubgrad} Let $\psi:\Rn\rightarrow\Rext$. For $\xbar\in\extspace$, $\uu_0\in\Rn$, and $\beta\in\R$, let \[\dlinxbuz(\uu)=\xbar\cdot(\uu-\uu_0)+\beta\] for $\uu\in\Rn$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:conv-equiv-dualsubgrad:a} $\psi$ is convex. \item \label{thm:conv-equiv-dualsubgrad:b} $\adsubdifpsi{\uu}\neq\emptyset$ for all $\uu\in\Rn$. \item \label{thm:conv-equiv-dualsubgrad:c} For all $\uu\in\Rn$, \begin{equation} \label{eqn:thm:conv-equiv-dualsubgrad:1} \psi(\uu) = \sup\Braces{ \dlinxbuz(\uu) : \xbar\in\extspace, \uu_0\in\Rn, \beta\in\R, \dlinxbuz \leq \psi }. 
\end{equation} \end{letter-compact} \end{theorem} \begin{proof} Throughout this proof, let $\sigma(\uu)$ denote the supremum appearing on the right-hand side of \eqref{eqn:thm:conv-equiv-dualsubgrad:1}. \begin{proof-parts} \pfpart{(\ref{thm:conv-equiv-dualsubgrad:a}) $\Rightarrow$ (\ref{thm:conv-equiv-dualsubgrad:b}):} This is exactly Theorem~\ref{thm:adsubdiff-nonempty}. \pfpart{(\ref{thm:conv-equiv-dualsubgrad:b}) $\Rightarrow$ (\ref{thm:conv-equiv-dualsubgrad:c}):} Assume $\adsubdifpsi{\uu}\neq\emptyset$ for all $\uu\in\Rn$. Since each $\dlinxbuz$ appearing in the supremum defining $\sigma$ is majorized by $\psi$, it follows immediately that $\sigma\leq\psi$. To show the reverse inequality, let $\uu\in\Rn$. We aim to show $\psi(\uu)\leq\sigma(\uu)$. This is immediate if $\psi(\uu)=-\infty$, so we assume henceforth that $\psi(\uu)>-\infty$. By assumption, $\psi$ has an astral dual subgradient $\xbar\in\extspace$ at $\uu$. Let $\beta\in\R$ with $\beta\leq\psi(\uu)$. Then for all $\uu'\in\Rn$, \[ \psi(\uu') \geq \psi(\uu) \plusd \xbar\cdot(\uu'-\uu) \geq \xbar\cdot(\uu'-\uu) + \beta = \dlinxbu(\uu'). \] The first inequality is \eqref{eqn:psi-subgrad:3-alt}, which holds since $\xbar\in\adsubdifpsi{\uu}$. The second inequality is by Proposition~\ref{pr:plusd-props}(\ref{pr:plusd-props:c},\ref{pr:plusd-props:f}). Thus, $\psi\geq\dlinxbu$ so $\dlinxbu$ is included in the supremum defining $\sigma$. Therefore, $\sigma(\uu)\geq \dlinxbu(\uu) = \beta$. Since this holds for all $\beta\leq \psi(\uu)$, it follows that $\sigma(\uu)\geq \psi(\uu)$, completing the proof. \pfpart{(\ref{thm:conv-equiv-dualsubgrad:c}) $\Rightarrow$ (\ref{thm:conv-equiv-dualsubgrad:a}):} Suppose $\psi=\sigma$. For all $\xbar\in\extspace$, the function $\ph{\xbar}$, as defined in \eqref{eq:ph-xbar-defn}, is convex, by Theorem~\refequiv{thm:h:5}{thm:h:5a0}{thm:h:5b}. 
Therefore, for all $\xbar\in\extspace$, $\uu_0\in\Rn$, and $\beta\in\R$, the function $\dlinxbuz(\uu)=\ph{\xbar}(\uu-\uu_0)+\beta$ is also convex. Consequently, $\psi$, being a pointwise supremum over such functions, is also convex (\Cref{roc:thm5.5}). \qedhere \end{proof-parts} \end{proof} \section{Calculus rules for astral subgradients} \label{sec:calc-subgrads} We next develop rules for computing astral subgradients analogous to the standard differential calculus and to rules for standard subgradients as reviewed in Section~\ref{sec:prelim:subgrads}. We focus particularly on the astral subdifferentials $\asubdiffext{\xbar}$ of the extension of a function $f:\Rn\rightarrow\Rext$, as well as its strict astral subdifferentials $\basubdiffext{\xbar}$, which we will see are often more regular. For instance, we derive rules for when $f$ is the sum of two other functions, or is the composition of a function with a linear map. Taken together, these rules result in a rich set of tools that can be used, for instance, to derive optimality conditions, as will be seen in Chapter~\ref{sec:opt-conds}. \subsection{Scalar multiple} Let $f:\Rn\rightarrow\Rext$, and suppose $h=\lambda f$ for some $\lambda>0$. Then the astral subgradients of $\hext$ can be calculated from the astral subgradients of $\fext$ simply by multiplying by $\lambda$, as in standard calculus: \begin{proposition} \label{pr:subgrad-scal-mult} Let $f:\Rn\rightarrow\Rext$, let $\xbar\in\extspace$, and let $\lambda>0$. Then \begin{equation} \label{eq:pr:subgrad-scal-mult:1} \asubdif{\lamfext}{\xbar} = \lambda \; \asubdiffext{\xbar} \end{equation} and \begin{equation} \label{eq:pr:subgrad-scal-mult:2} \basubdif{\lamfext}{\xbar} = \lambda \; \basubdiffext{\xbar}. \end{equation} \end{proposition} \begin{proof} Let $h=\lambda f$.
If $f\equiv+\infty$, then $h\equiv+\infty$, so the proposition holds vacuously since all of the mentioned astral subdifferentials are empty (Proposition~\ref{pr:subgrad-imp-in-cldom}\ref{pr:subgrad-imp-in-cldom:b}). We therefore assume $f\not\equiv+\infty$, implying $h\not\equiv+\infty$. Let $\uu\in\Rn$, let $\ww=\lambda\uu$, and let $\fminusu$ and $\hminusw$ be linear shifts of $f$ and $h$. Then for $\xx\in\Rn$, \[ \hminusw(\xx) = h(\xx) - \xx\cdot \ww = \lambda f(\xx) - \xx\cdot (\lambda \uu) = \lambda \fminusu(\xx). \] Consequently, \begin{equation} \label{eq:pr:subgrad-scal-mult:3} \inf \hminusw = \lambda \inf \fminusu, \end{equation} and \begin{equation} \label{eq:pr:subgrad-scal-mult:4} \hminuswext(\xbar)=\lambda \fminusuext(\xbar) \end{equation} by Proposition~\ref{pr:scal-mult-ext}. Thus, \begin{eqnarray*} \uu\in\asubdiffext{\xbar} &\Leftrightarrow& \fminusuext(\xbar)=\inf \fminusu \\ &\Leftrightarrow& \lambda \fminusuext(\xbar)=\lambda \inf \fminusu \\ &\Leftrightarrow& \hminuswext(\xbar)=\inf \hminusw \\ &\Leftrightarrow& \lambda\uu \in \asubdifhext{\xbar}. \end{eqnarray*} The first and fourth equivalences are both by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}). The second equivalence is immediate since $\lambda>0$. The third equivalence is by Eqs.~(\ref{eq:pr:subgrad-scal-mult:3}) and~(\ref{eq:pr:subgrad-scal-mult:4}). This proves \eqref{eq:pr:subgrad-scal-mult:1}. By Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:d}) and \eqref{eq:pr:subgrad-scal-mult:3}, \[ \hstar(\ww) = -\inf\hminusw = -\lambda \inf\fminusu = \lambda \fstar(\uu). \] Therefore, $\fstar(\uu)\in\R$ if and only if $\hstar(\lambda \uu)\in\R$, which, together with the foregoing, proves \eqref{eq:pr:subgrad-scal-mult:2}. \end{proof} \subsection{Sums of functions} We next consider the astral subgradients of a sum of functions.
For two convex functions $f:\Rn\rightarrow\Rext$ and $g:\Rn\rightarrow\Rext$, if $\uu\in\asubdiffext{\xbar}$ and $\vv\in\asubdifgext{\xbar}$, then we naturally expect $\uu+\vv\in\asubdiffplusgext{\xbar}$, as would hold for ordinary gradients in standard calculus, as well as for standard subgradients (\Cref{roc:thm23.8}). Indeed, we will see that this is the case, provided that the relative interiors of the functions' effective domains overlap. Moreover, this accounts for all of the strict astral subgradients of $\fplusgext$. \begin{theorem} \label{thm:subgrad-sum-fcns} Let $f_i:\Rn\rightarrow\Rext$ be convex, for $i=1,\ldots,m$. Assume $f_1,\ldots,f_m$ are summable, and that $\bigcap_{i=1}^m \ri(\dom f_i) \neq \emptyset$. Let $h=f_1+\dotsb+f_m$, and let $\xbar\in\extspace$. Then the following hold: \begin{letter} \item \label{thm:subgrad-sum-fcns:a} $\displaystyle \asubdiffextsub{1}{\xbar} + \dotsb + \asubdiffextsub{m}{\xbar} \subseteq \asubdifhext{\xbar} $. \item \label{thm:subgrad-sum-fcns:b} $\displaystyle \basubdiffextsub{1}{\xbar} + \dotsb + \basubdiffextsub{m}{\xbar} = \basubdifhext{\xbar} $. \end{letter} \end{theorem} \begin{proof} By assumption, none of the domains $\dom{f_i}$ can be empty; therefore, $f_i\not\equiv+\infty$ for all $i$. We begin with part~(\ref{thm:subgrad-sum-fcns:a}). Let $\uu_i\in \asubdiffextsub{i}{\xbar}$, for $i=1,\ldots,m$. Let $\uu=\sum_{i=1}^m \uu_i$, and let $\hminusu$ and $\fminususubd{i}$ be linear shifts of $h$ and $f_i$ (that is, $\fminususubd{i}=\fminusgen{(f_i)}{\uu_i}$). Then for all $\xx\in\Rn$, \begin{equation} \label{eqn:thm:subgrad-sum-fcns:4} \hminusu(\xx) = h(\xx) - \xx\cdot\uu = \sum_{i=1}^m (f_i(\xx) - \xx\cdot\uu_i) = \sum_{i=1}^m \fminususubd{i}(\xx), \end{equation} noting that, since $f_1(\xx),\ldots,f_m(\xx)$ are summable, $\fminususubd{1}(\xx),\ldots,\fminususubd{m}(\xx)$ are as well. 
Since the rightmost expression of this equation is at least $\sum_{i=1}^m \inf \fminususubd{i}$ for all $\xx\in\Rn$, this implies \begin{equation} \label{eqn:thm:subgrad-sum-fcns:6} \inf \hminusu \geq \sum_{i=1}^m \inf \fminususubd{i}. \end{equation} From Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:a}), we also have that \[ \bigcap_{i=1}^m \ri(\dom{\fminususubd{i}}) = \bigcap_{i=1}^m \ri(\dom{f_i}) \neq \emptyset. \] For each $i$, since $\uu_i\in \asubdiffextsub{i}{\xbar}$, we have from Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}) that \begin{equation} \label{eqn:thm:subgrad-sum-fcns:3} \fminususubdext{i}(\xbar) = \inf \fminususubd{i} < +\infty. \end{equation} Here, the inequality is because $f_i\not\equiv+\infty$, implying $\fminususubd{i}\not\equiv+\infty$. Therefore, the values $\fminususubdext{1}(\xbar),\dotsc,\fminususubdext{m}(\xbar)$ are summable. Thus, combining, we have that \begin{equation} \label{eqn:thm:subgrad-sum-fcns:5} \inf \hminusu \leq \hminusuext(\xbar) = \sum_{i=1}^m \fminususubdext{i}(\xbar) = \sum_{i=1}^m \inf \fminususubd{i} \leq \inf \hminusu. \end{equation} The first equality follows from Theorem~\ref{thm:ext-sum-fcns-w-duality} (with $h$ and $f_i$, as they appear in that theorem, set to $\hminusu$ and $\fminususubd{i}$), all of whose conditions are satisfied, as argued above. The first inequality, second equality, and last inequality are, respectively, from Proposition~\ref{pr:fext-min-exists}, \eqref{eqn:thm:subgrad-sum-fcns:3}, and \eqref{eqn:thm:subgrad-sum-fcns:6}. Hence, $\hminusuext(\xbar)= \inf \hminusu$, so $\uu\in\asubdifhext{\xbar}$ by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}). This proves part~(\ref{thm:subgrad-sum-fcns:a}). 
We also thus have shown that \[ \hstar(\uu) = -\inf \hminusu = - \sum_{i=1}^m \inf \fminususubd{i} = \sum_{i=1}^m f_i^*(\uu_i), \] where the first and last equalities are from Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:d}), and the second equality is from \eqref{eqn:thm:subgrad-sum-fcns:5}. Thus, if $f_i^*(\uu_i)\in\R$ for all $i$, then $\hstar(\uu)\in\R$. Combined with the argument above, this proves \[ \basubdiffextsub{1}{\xbar} + \dotsb + \basubdiffextsub{m}{\xbar} \subseteq \basubdifhext{\xbar}. \] To complete the proof of part~(\ref{thm:subgrad-sum-fcns:b}), we now prove the reverse inclusion. Suppose $\uu\in\basubdifhext{\xbar}$. Then $\hstar(\uu)\in\R$, implying $h$ is proper (since otherwise we would have either $\hstar\equiv+\infty$ or $\hstar\equiv-\infty$). This further implies that each $f_i$ is proper (since if $f_i(\xx)=-\infty$ for any $\xx\in\Rn$, then also $h(\xx)=-\infty$). We can therefore apply a standard result for the conjugate of a sum (\Cref{roc:thm16.4}), from which it follows that there exist $\uu_1,\ldots,\uu_m\in\Rn$ with $\uu=\sum_{i=1}^m \uu_i$ and \begin{equation} \label{eqn:thm:subgrad-sum-fcns:8} \hstar(\uu) = \sum_{i=1}^m f_i^*(\uu_i). \end{equation} Since $\hstar(\uu)\in\R$ and each $f_i$ is proper (so that $f_i^*>-\infty$), it follows that $f_i^*(\uu_i)\in\R$ for all $i$. As above, let $\hminusu$ and $\fminususubd{i}$ be linear shifts of $h$ and $f_i$, implying \eqref{eqn:thm:subgrad-sum-fcns:4}, as before. Then for each $i$, \begin{equation} \label{eqn:thm:subgrad-sum-fcns:9} \fminususubdext{i}(\xbar) \geq \inf \fminususubd{i} = -f_i^*(\uu_i) \in \R, \end{equation} where the inequality and equality are from Proposition~\ref{pr:fext-min-exists} and Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:d}). Thus, $\fminususubdext{i}(\xbar)>-\infty$ for all $i$, so $\fminususubdext{1}(\xbar),\ldots,\fminususubdext{m}(\xbar)$ are summable. Consequently, we have that \begin{equation} \label{eqn:thm:subgrad-sum-fcns:7} \sum_{i=1}^m \fminususubdext{i}(\xbar) = \hminusuext(\xbar) = \inf \hminusu = -\hstar(\uu) = -\sum_{i=1}^m f_i^*(\uu_i) = \sum_{i=1}^m \inf \fminususubd{i}.
\end{equation} The first equality is from Theorem~\ref{thm:ext-sum-fcns-w-duality}, applied as in the first part of this proof, noting again that all of that theorem's conditions are satisfied. The second equality is from Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}) since $\uu\in\asubdifhext{\xbar}$. The third and fifth equalities are both from Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:d}). And the fourth equality is from \eqref{eqn:thm:subgrad-sum-fcns:8}. Now \eqref{eqn:thm:subgrad-sum-fcns:7}, combined with \eqref{eqn:thm:subgrad-sum-fcns:9}, implies $\fminususubdext{i}(\xbar)=\inf \fminususubd{i}$ for all $i$, since otherwise, if $\fminususubdext{i}(\xbar)>\inf \fminususubd{i}$, then the far left-hand side of \eqref{eqn:thm:subgrad-sum-fcns:7} would be strictly greater than the far right-hand side. Therefore, $\uu_i\in \basubdiffextsub{i}{\xbar}$ by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}), since $\fstar(\uu_i)\in\R$. That is, \[ \uu = \uu_1+\dotsb+\uu_m \in \basubdiffextsub{1}{\xbar} + \dotsb + \basubdiffextsub{m}{\xbar}, \] completing the proof of part~(\ref{thm:subgrad-sum-fcns:b}). \end{proof} The inclusion given in Theorem~\ref{thm:subgrad-sum-fcns}(\ref{thm:subgrad-sum-fcns:a}) holds analogously for standard subgradients \emph{without} assuming the relative interiors of the effective domains of the functions have a point in common, as was seen in \Cref{roc:thm23.8}. For astral subgradients, however, the inclusion does not hold in general if this assumption is omitted, as shown in the next example: \begin{example} Let $f$, $g$ and $\xbar$ be as in Example~\ref{ex:KLsets:extsum-not-sum-exts}, and let $h=f+g$. Then, as shown in that example, $\fext(\xbar)=\gext(\xbar)=0$, but $\hext(\xbar)=+\infty$. Moreover, $\inf f = \inf g = \inf h = 0$.
Therefore, by Proposition~\ref{pr:asub-zero-is-min}, $\zero\in\asubdiffext{\xbar}$ and $\zero\in\asubdifgext{\xbar}$, but $\zero+\zero=\zero\not\in\asubdifhext{\xbar}$. Thus, $\asubdiffext{\xbar}+\asubdifgext{\xbar}\not\subseteq\asubdifhext{\xbar}$. \end{example} Also, the equality that holds for strict astral subgradients in part~(\ref{thm:subgrad-sum-fcns:b}) of Theorem~\ref{thm:subgrad-sum-fcns} need not hold in general for astral subgradients that are not necessarily strict. In other words, it is possible for the inclusion given in part~(\ref{thm:subgrad-sum-fcns:a}) of that theorem to be strict, as shown in the next example, even when the assumption regarding the relative interiors of the functions' effective domains holds: \begin{example} \label{ex:subgrad-sum-inc-can-be-strict} In $\R^2$, let \[ f(\xx) = f(x_1,x_2) = \left\{ \begin{array}{cl} -\ln x_1 & \mbox{if $x_1>0$} \\ +\infty & \mbox{otherwise} \\ \end{array} \right. \] and let \[ g(\xx) = g(x_1,x_2) = \left\{ \begin{array}{cl} -\ln x_2 & \mbox{if $x_2>0$} \\ +\infty & \mbox{otherwise.} \\ \end{array} \right. \] These are both closed, proper and convex. Furthermore, $\ri(\dom{f})\cap\ri(\dom{g})\neq\emptyset$. Let $h=f+g$, and let $\xbar=\limray{\ee_1}$. Then $\hext(\xbar)=-\infty$ (since, for instance, the sequence $\xx_t=\trans{[t^2,1/t]}$ converges to $\xbar$ while $h(\xx_t)=-\ln t\rightarrow-\infty$). Therefore, $\xbar$ minimizes $\hext$ so $\zero\in\asubdifhext{\xbar}$, by Proposition~\ref{pr:asub-zero-is-min}. We will show, nevertheless, that $\fext$ and $\gext$ do not have astral subgradients which sum to $\zero$. First, suppose $\uu\in\asubdiffext{\xbar}$. We claim this implies $u_1\geq 0$. Suppose, to the contrary, that $u_1<0$. Then by Proposition~\ref{pr:equiv-ast-subdif-defn-fext}(\ref{pr:equiv-ast-subdif-defn-fext:a},\ref{pr:equiv-ast-subdif-defn-fext:b1}), there exists a sequence $\seq{\xx_t}$ in $\dom f$ converging to $\xbar$ with $\xx_t\cdot\uu-f(\xx_t)\rightarrow\fstar(\uu)$. 
Expanding, \begin{equation} \label{eqn:ex:subgrad-sum-inc-can-be-strict:1} \xx_t\cdot\uu-f(\xx_t) = x_{t,1}u_1 + x_{t,2}u_2 + \ln x_{t,1}. \end{equation} Since $\xx_t\rightarrow\xbar=\limray{\ee_1}$, $x_{t,1}\rightarrow+\infty$ and $x_{t,2}\rightarrow 0$. Since $u_1<0$, this implies that the right-hand side of \eqref{eqn:ex:subgrad-sum-inc-can-be-strict:1} converges to $-\infty$, and therefore that $\fstar(\uu)=-\infty$, which is impossible since $f\not\equiv+\infty$. Likewise, suppose $\vv\in\asubdifgext{\xbar}$. In this case, we give a similar argument showing $v_1> 0$. Suppose $v_1\leq 0$. Then there exists a sequence $\seq{\xx_t}$ in $\dom g$ converging to $\xbar$ with $\xx_t\cdot\vv-g(\xx_t)\rightarrow\gstar(\vv)$. Expanding, \[ \xx_t\cdot\vv-g(\xx_t) = x_{t,1}v_1 + x_{t,2}v_2 + \ln x_{t,2}. \] Since $x_{t,1}\rightarrow+\infty$ and $x_{t,2}\rightarrow 0$, and since $v_1\leq 0$, the first term on the right is nonpositive for all $t$ sufficiently large, the second term converges to zero, and the last term converges to $-\infty$. Thus, the entire expression converges to $-\infty$, implying, as earlier, that $\gstar(\vv)=-\infty$, a contradiction. Suppose now that $\uu\in\asubdiffext{\xbar}$ and $\vv\in\asubdifgext{\xbar}$. Then, by the foregoing, $u_1\geq 0$ and $v_1>0$, implying $u_1+v_1>0$. Consequently, $\uu+\vv\neq\zero$, and therefore, $\asubdiffext{\xbar}+\asubdifgext{\xbar}\neq\asubdifhext{\xbar}$ since $\zero\in\asubdifhext{\xbar}$. \end{example} \subsection{Composition with an affine map} \label{sec:subgrad-comp-aff-map} We next study the astral subgradients of a convex function composed with a linear or affine function. Let $f:\Rm\rightarrow\Rext$, let $\A\in\Rmn$, and let $h=f\A$, that is, $h(\xx)=f(\A\xx)$ for $\xx\in\Rn$. By the chain rule from calculus, if $f$ is differentiable, then $\gradh(\xx)=\transA \gradf(\A\xx)$.
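This chain rule is easy to sanity-check numerically. The following sketch (a minimal illustration only; the smooth convex function $f$, the matrix, and the test point below are arbitrary choices, not taken from the text) compares $\transA \gradf(\A\xx)$ against a finite-difference gradient of $h=f\A$.

```python
import numpy as np

# Sketch: check the chain rule  grad h(x) = A^T grad f(A x)  for h = f∘A,
# using an illustrative smooth convex f(y) = log(sum_j exp(y_j)).

def f(y):
    return np.log(np.sum(np.exp(y)))

def grad_f(y):
    e = np.exp(y)
    return e / e.sum()          # softmax: the gradient of log-sum-exp

A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])  # A in R^{2x3}, so h : R^3 -> R
x = np.array([0.3, -0.7, 1.1])    # arbitrary test point

def h(x):
    return f(A @ x)

# chain rule: grad h(x) = A^T grad f(A x)
g_chain = A.T @ grad_f(A @ x)

# central finite differences, coordinate by coordinate
eps = 1e-6
g_fd = np.array([
    (h(x + eps * np.eye(3)[i]) - h(x - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)
])

assert np.allclose(g_chain, g_fd, atol=1e-6)
```

The same comparison works for any differentiable $f$ and any matrix of compatible shape; only the smoothness of $f$ matters for the finite-difference check.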
Analogously, in standard convex analysis, if $\uu$ is a subgradient of $f$ at $\A\xx$, then $\transA\uu$ is a subgradient of $h$ at $\xx$ (\Cref{roc:thm23.9}). The next theorem shows how this rule generalizes to astral subgradients so that if $\uu\in\asubdiffext{\A\xbar}$ then $\transA\uu\in\asubdifhext{\xbar}$. Moreover, the theorem shows that this rule captures all astral subgradients of $\hext$ that are in the column space of $\transA$, as well as all the strict astral subgradients of $\hext$. In other words, if $\ww\in\colspace \transA$ or if $\hstar(\ww)\in\R$, then $\ww\in\asubdifhext{\xbar}$ if and only if $\ww=\transA\uu$ for some $\uu\in\asubdiffext{\A\xbar}$. As in Theorem~\ref{thm:ext-linear-comp}, we assume the column space of $\A$ overlaps with $\ri(\dom{f})$. \begin{theorem} \label{thm:subgrad-fA} Let $f:\Rm\rightarrow\Rext$ be convex, and let $\A\in\Rmn$. Assume there exists $\xing\in\Rn$ such that $\A\xing\in\ri(\dom f)$. Let $\xbar\in\extspace$. Then the following hold: \begin{letter} \item \label{thm:subgrad-fA:a} $\displaystyle \transA \asubdiffext{\A \xbar} = \asubdiffAext{\xbar} \,\cap\, (\colspace \transA) $. \item \label{thm:subgrad-fA:b} $\displaystyle \transA \basubdiffext{\A \xbar} = \basubdiffAext{\xbar} $. \end{letter} \end{theorem} \proof Let $h=f\A$. We first prove that for both parts of the theorem, the left-hand side is included in the right-hand side. Suppose $\uu\in\asubdiffext{\A\xbar}$, and let $\ww=\transA \uu$. For part~(\ref{thm:subgrad-fA:a}), we aim to show $\ww\in\asubdifhext{\xbar}$. Let $\fminusu$ and $\hminusw$ be linear shifts of $f$ and $h$. Then for all $\xx\in\Rn$, \begin{equation} \label{eq:thm:subgrad-fA:1} \hminusw(\xx) = h(\xx) - \xx\cdot (\transA\uu) = f(\A \xx) - (\A\xx)\cdot\uu = \fminusu(\A \xx).
\end{equation} By Theorem~\ref{thm:ext-linear-comp}, it follows that \begin{equation} \label{eq:thm:subgrad-fA:3} \hminuswext(\xbar)=\fminusuext(\A\xbar) \end{equation} since $\A\xing\in\ri(\dom{f})=\ri(\dom{\fminusu})$ (by Proposition~\ref{pr:fminusu-props}\ref{pr:fminusu-props:a}). Thus, \[ \inf \hminusw \leq \hminuswext(\xbar) = \fminusuext(\A\xbar) = \inf \fminusu \leq \inf \hminusw. \] The first inequality is by Proposition~\ref{pr:fext-min-exists}. The second equality is by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}), since $\uu\in\asubdiffext{\A\xbar}$. And the last inequality follows from \eqref{eq:thm:subgrad-fA:1} (since the rightmost expression of that equation is at least $\inf\fminusu$ for all $\xx\in\Rn$). Therefore, $ \hminuswext(\xbar) = \inf \hminusw$. By Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}), this implies $\transA\uu=\ww\in\asubdifhext{\xbar}$. Since $\transA\uu$ is in the column space of $\transA$, this also proves $\transA \, \asubdiffext{\A \xbar} \subseteq \asubdiffAext{\xbar} \cap (\colspace \transA)$. Suppose next that $\uu\in\basubdiffext{\A\xbar}$, so that $\fstar(\uu)<+\infty$. Then $\transA\uu\in\asubdifhext{\xbar}$, as just argued. Furthermore, from \Cref{roc:thm16.3:fA}, which gives a rule for the conjugate of a function $f\A$ under the assumptions of this theorem, for all $\ww\in\Rn$, \begin{equation} \label{eq:thm:subgrad-fA:2} \hstar(\ww) = \inf\Braces{ \fstar(\uu') : \uu'\in\Rm, \transA \uu' = \ww }. \end{equation} Furthermore, the infimum is always realized, unless it is vacuous. In particular, this implies $\hstar(\transA\uu)\leq\fstar(\uu)<+\infty$. Thus, $\transA\uu\in\basubdifhext{\xbar}$, proving $\transA \, \basubdiffext{\A \xbar} \subseteq \basubdiffAext{\xbar}$. We next prove the reverse inclusions, beginning with part~(\ref{thm:subgrad-fA:a}).
Suppose $\ww\in \asubdifhext{\xbar} \cap (\colspace \transA)$. Since $\ww\in \colspace \transA$, the infimum in \eqref{eq:thm:subgrad-fA:2} is not vacuous and therefore is realized (\Cref{roc:thm16.3:fA}). That is, there exists $\uu\in\Rm$ such that $\transA \uu = \ww$ and $\hstar(\ww)=\fstar(\uu)$. In particular, this means Eqs.~(\ref{eq:thm:subgrad-fA:1}) and~(\ref{eq:thm:subgrad-fA:3}) continue to hold. Thus, \[ \fminusuext(\A\xbar) = \hminuswext(\xbar) = -\hstar(\ww) = -\fstar(\uu), \] with the second equality following from Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:d}) (since $\ww\in \asubdifhext{\xbar}$). By that same theorem, this implies $\uu\in\asubdiffext{\A\xbar}$, and thus that $\ww\in\transA \, \asubdiffext{\A\xbar}$. This completes the proof of part~(\ref{thm:subgrad-fA:a}). For part~(\ref{thm:subgrad-fA:b}), suppose $\ww\in \basubdifhext{\xbar}$, so that $\hstar(\ww)<+\infty$. This implies that the infimum in \eqref{eq:thm:subgrad-fA:2} cannot be vacuous (since otherwise it would be equal to $+\infty$). Thus, as before, there must exist $\uu\in\Rm$ with $\transA \uu = \ww$ and $\hstar(\ww)=\fstar(\uu)$. As just argued, this implies $\uu\in\asubdiffext{\A\xbar}$, and furthermore, that $\fstar(\uu)<+\infty$. Thus, $\uu\in\basubdiffext{\A\xbar}$, and $\ww\in\transA \, \basubdiffext{\A\xbar}$, completing the proof of part~(\ref{thm:subgrad-fA:b}). \qed In general, without the assumption that $\colspace \A$ overlaps $\ri(\dom{f})$, Theorem~\ref{thm:subgrad-fA} need not hold: \begin{example} Let $f$, $\A$ and $\xbar$ be as in Example~\ref{ex:fA-ext-countereg}, and let $h=f\A$. Then, as shown in that example, $\fext(\A\xbar)=0$ and $\hext(\xbar)=+\infty$. Since $\fstar(\zero)=\inf f=\inf h=0$, this implies that $\zero\in\basubdiffext{\A\xbar}\subseteq\asubdiffext{\A\xbar}$, but that $\zero=\transA\zero\not\in\asubdifhext{\xbar}$, both by Proposition~\ref{pr:asub-zero-is-min}.
Thus, both parts of Theorem~\ref{thm:subgrad-fA} are false for this example. \end{example} If $\colspace \transA=\Rn$, that is, if $\A$ has full column rank, then $\colspace \transA$, as it appears in Theorem~\ref{thm:subgrad-fA}(\ref{thm:subgrad-fA:a}), can of course be omitted so that, in such a case, all of the astral subgradients of $f\A$ at $\xbar$ must be of the form $\transA \uu$ for some $\uu\in \asubdiffext{\A \xbar} $. Otherwise, if $\A$ does not have full column rank, this might not be the case, as seen in the next example: \begin{example} Let $f:\Rm\rightarrow\Rext$ be the function $f\equiv 0$, and let $\A$ be any matrix in $\Rmn$ that does not have full column rank. Let $h=f\A$, implying $h\equiv 0$. Since $\A$ does not have full column rank, there exists a vector $\vv\in\Rn\setminus (\colspace \transA)$, implying $\vv\neq \zero$. Let $\xbar=\limray{\vv}$. Then $\vv\in\asubdifhext{\xbar}$ (as follows, for instance, as a special case of Example~\ref{ex:affine-subgrad}), but $\vv$ is not in $\colspace \transA$ and so also is not in $ \transA \, \asubdiffext{\A \xbar} $. \end{example} When a fixed vector $\bb\in\Rn$ is added to the argument of a function $f:\Rn\rightarrow\Rext$, yielding a function $\xx\mapsto f(\bb+\xx)$, the astral subgradients are unaffected, as expected. This is in accord with analogous results for standard gradients and subgradients from calculus (based on the chain rule) and standard convex analysis: \begin{proposition} \label{pr:subgrad-shift-arg} Let $f:\Rn\rightarrow\Rext$, let $\bb\in\Rn$, and let $h:\Rn\rightarrow\Rext$ be defined by $h(\xx)=f(\bb+\xx)$ for $\xx\in\Rn$. Let $\xbar\in\extspace$. Then the following hold: \begin{letter} \item \label{pr:subgrad-shift-arg:a} $\asubdifhext{\xbar} = \asubdiffext{\bb\plusl\xbar}$. \item \label{pr:subgrad-shift-arg:b} $\basubdifhext{\xbar} = \basubdiffext{\bb\plusl\xbar}$. 
\end{letter} \end{proposition} \proof For $\uu\in\Rn$, let $\fminusu$ and $\hminusu$ be linear shifts of $f$ and $h$. Then for $\xx\in\Rn$, \[ \hminusu(\xx) = h(\xx) - \xx\cdot\uu = f(\bb+\xx) - (\bb+\xx)\cdot\uu + \bb\cdot\uu = \fminusu(\bb+\xx) + \bb\cdot\uu. \] This implies \begin{equation} \label{eq:pr:subgrad-shift-arg:1} \inf \hminusu = \inf \fminusu + \bb\cdot\uu. \end{equation} Furthermore, \begin{equation} \label{eq:pr:subgrad-shift-arg:2} \hminusuext(\xbar) = \fminusuext(\bb\plusl\xbar) + \bb\cdot\uu, \end{equation} as follows from Proposition~\ref{pr:ext-affine-comp}(\ref{pr:ext-affine-comp:b}), together with Proposition~\ref{pr:ext-sum-fcns}(\ref{pr:ext-sum-fcns:b}) (applied with $f$, as it appears in that proposition, set to $\xx\mapsto \fminusu(\bb+\xx)$, and with $g\equiv\bb\cdot\uu$, which, being a constant function, is extensibly continuous with extension identically equal to the same constant). Combining, we then have \begin{eqnarray*} \uu\in\asubdifhext{\xbar} &\Leftrightarrow& \hminusuext(\xbar)=\inf\hminusu \\ &\Leftrightarrow& \fminusuext(\bb\plusl\xbar) + \bb\cdot\uu = \inf \fminusu + \bb\cdot\uu \\ &\Leftrightarrow& \fminusuext(\bb\plusl\xbar) = \inf \fminusu \\ &\Leftrightarrow& \uu\in\asubdiffext{\bb\plusl\xbar}. \end{eqnarray*} The first and last equivalences are by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}). The second equivalence is by Eqs.~(\ref{eq:pr:subgrad-shift-arg:1}) and~(\ref{eq:pr:subgrad-shift-arg:2}). This proves part~(\ref{pr:subgrad-shift-arg:a}). By Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:d}) combined with \eqref{eq:pr:subgrad-shift-arg:1}, \[ \hstar(\uu) = -\inf \hminusu = -\inf \fminusu - \bb\cdot\uu = \fstar(\uu) - \bb\cdot\uu. \] Thus, $\hstar(\uu)\in\R$ if and only if $\fstar(\uu)\in\R$. Together with part~(\ref{pr:subgrad-shift-arg:a}), this proves part~(\ref{pr:subgrad-shift-arg:b}).
\qed Combining Theorem~\ref{thm:subgrad-fA} and Proposition~\ref{pr:subgrad-shift-arg} yields a corollary for the astral subgradients of a function when composed with an affine function: \begin{corollary} \label{cor:subgrad-comp-w-affine} Let $f:\Rm\rightarrow\Rext$ be convex, let $\A\in\Rmn$, and let $\bb\in\Rm$. Assume there exists $\xing\in\Rn$ such that $\bb+\A\xing\in\ri(\dom f)$. Define $h:\Rn\rightarrow\Rext$ by $h(\xx)=f(\bb+\A\xx)$ for $\xx\in\Rn$. Let $\xbar\in\extspace$. Then the following hold: \begin{letter} \item \label{cor:subgrad-comp-w-affine:a} $\displaystyle \transA \asubdiffext{\bb \plusl \A \xbar} = \asubdifhext{\xbar} \,\cap\, (\colspace \transA) $. \item \label{cor:subgrad-comp-w-affine:b} $\displaystyle \transA \basubdiffext{\bb \plusl \A \xbar} = \basubdifhext{\xbar} $. \end{letter} \end{corollary} \proof Let $g(\yy)=f(\bb+\yy)$ for $\yy\in\Rm$. Then $h(\xx)=g(\A\xx)$ for $\xx\in\Rn$. That $\bb+\A\xing\in\ri(\dom f)$ implies $\A\xing\in\ri(\dom g)$. Thus, by Theorem~\ref{thm:subgrad-fA}(\ref{thm:subgrad-fA:a}), \[ \transA \asubdifgext{\A \xbar} = \asubdifhext{\xbar} \,\cap\, (\colspace \transA), \] and by Proposition~\ref{pr:subgrad-shift-arg}, $ \asubdifgext{\A \xbar} = \asubdiffext{\bb\plusl\A\xbar}$. Combining proves part~(\ref{cor:subgrad-comp-w-affine:a}). The proof of part~(\ref{cor:subgrad-comp-w-affine:b}) is similar. \qed \subsection{Reductions} For much of this book, beginning in Section~\ref{sec:ent-closed-fcn}, we have seen the important role played by reductions $\fshadd$, as defined in Definition~\ref{def:iconic-reduction}, where $f:\Rn\rightarrow\Rext$ is convex and $\ebar$ is an icon, usually assumed to be in the astral recession cone $\aresconef$ (since otherwise $\fshadd\equiv+\infty$ by Corollary~\ref{cor:a:4}). In this section, we study the astral subgradients of $\fshadd$'s extension, which was seen in Proposition~\ref{pr:i:9}(\ref{pr:i:9b}) to be $\fshadextd(\xbar) = \fext(\ebar\plusl\xbar)$ for $\xbar\in\extspace$. 
This is similar to the extension of $\xx\mapsto f(\bb+\xx)$, for some $\bb\in\Rn$, considered in Proposition~\ref{pr:subgrad-shift-arg}, whose extension is $\xbar\mapsto \fext(\bb\plusl\xbar)$ by Proposition~\ref{pr:ext-affine-comp}(\ref{pr:ext-affine-comp:b}). Thus, the results of this section can be seen as a generalization of Proposition~\ref{pr:subgrad-shift-arg}, as will be developed more fully below. That proposition shows that the astral subgradients of $f$ are unaffected when a vector $\bb$ is added to the argument, similar to analogous rules from calculus and standard convex analysis. We expect similar behavior for reductions. That is, supposing $g(\xx)=\fshadd(\xx)=\fext(\ebar\plusl\xx)$, we might expect $\asubdifgext{\xbar}=\asubdiffext{\ebar\plusl\xbar}$. This, however, does not hold in general, as shown in the next example. \begin{example} For $x\in\R$, let $f(x)=e^x$, let $\bare=-\infty$, and let $g=\fshad{\bare}$. Then $g\equiv 0$. Let $\barx=+\infty$ and $u=1$. It can be checked using Proposition~\ref{pr:subdif-in-1d} that $\asubdifgext{+\infty}=[0,+\infty)$, and that $\asubdiffext{\bare\plusl\barx}=\asubdiffext{-\infty}=(-\infty,0]$. Thus, $u\in\asubdifgext{\barx}$, but $u\not\in\asubdiffext{\bare\plusl\barx}$. \end{example} Thus, in general, it is not the case that for all $\uu\in\Rn$, $\uu\in\asubdifgext{\xbar}$ if and only if $\uu\in\asubdiffext{\ebar\plusl\xbar}$. Nevertheless, this equivalence does hold for all $\uu$ with $\ebar\cdot\uu=0$, as shown in the next theorem. Moreover, this accounts for all of the strict astral subgradients of $\gext$. \begin{theorem} \label{thm:subgrad-icon-reduc} Let $f:\Rn\rightarrow\Rext$ be convex, let $\ebar\in\corezn\cap(\aresconef)$, and let $g=\fshadd$. Let $\xbar\in\extspace$ and $\uu\in\Rn$. Then the following hold: \begin{letter} \item \label{thm:subgrad-icon-reduc:a} If $\ebar\cdot\uu = 0$ then $\uu\in\asubdifgext{\xbar}$ if and only if $\uu\in\asubdiffext{\ebar\plusl\xbar}$. 
\item \label{thm:subgrad-icon-reduc:b} $\uu\in\basubdifgext{\xbar}$ if and only if $\ebar\cdot\uu = 0$ and $\uu\in\basubdiffext{\ebar\plusl\xbar}$. That is, \[ \basubdifgext{\xbar} = \{ \ebar \}^\bot \cap\, \basubdiffext{\ebar\plusl\xbar}. \] \end{letter} \end{theorem} \begin{proof} ~ If $f\equiv+\infty$ then $g\equiv+\infty$ so that the theorem holds vacuously (since, in that case, all of the astral subdifferentials are empty by Proposition~\ref{pr:subgrad-imp-in-cldom}\ref{pr:subgrad-imp-in-cldom:b}). We therefore assume henceforth that $f\not\equiv+\infty$. Since $\ebar\in\aresconef$, this implies $g\not\equiv+\infty$ (since $g\leq f$). \begin{proof-parts} \pfpart{Part~(\ref{thm:subgrad-icon-reduc:a}):} Suppose $\ebar\cdot\uu=0$. Let $\fminusu$ and $\gminusu$ be linear shifts of $f$ and $g$ by $\uu$. Then for $\xx\in\Rn$, \[ \gminusu(\xx) = g(\xx) - \xx\cdot\uu = \fext(\ebar\plusl\xx) - (\ebar\plusl\xx)\cdot\uu = \fminusuext(\ebar\plusl\xx). \] The second equality holds because $\ebar\cdot\uu=0$ so that $(\ebar\plusl\xx)\cdot\uu=\xx\cdot\uu\in\R$, which also shows that the two terms in the middle expression are summable. The last equality then follows from Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:e}). Therefore, \begin{equation} \label{eq:thm:subgrad-icon-reduc:1} \gminusuext(\xbar) = \fminusuext(\ebar\plusl\xbar) \end{equation} by Proposition~\ref{pr:ext-affine-comp}(\ref{pr:ext-affine-comp:b}). Thus, \begin{eqnarray*} \uu\in\asubdifgext{\xbar} &\Leftrightarrow& \gminusuext(\xbar) = -\gstar(\uu) \\ &\Leftrightarrow& \fminusuext(\ebar\plusl\xbar) = -\fstar(\uu) \\ &\Leftrightarrow& \uu\in\asubdiffext{\ebar\plusl\xbar}. \end{eqnarray*} The first and third equivalences are by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:d}) (and since $f\not\equiv+\infty$ and $g\not\equiv+\infty$).
The second equivalence is by \eqref{eq:thm:subgrad-icon-reduc:1} and because $\gstar(\uu)=\fstar(\uu)$ by Theorem~\ref{thm:conj-of-iconic-reduc}. \pfpart{Part~(\ref{thm:subgrad-icon-reduc:b}):} Suppose first that $\ebar\cdot\uu = 0$ and $\uu\in\basubdiffext{\ebar\plusl\xbar}$. Then by part~(\ref{thm:subgrad-icon-reduc:a}), $\uu\in\asubdifgext{\xbar}$, and also, $\gstar(\uu)=\fstar(\uu)\in\R$ by Theorem~\ref{thm:conj-of-iconic-reduc}. Thus, $\uu\in\basubdifgext{\xbar}$. For the converse, suppose $\uu\in\basubdifgext{\xbar}$. Then $\gstar(\uu)\in\R$, implying, by Theorem~\ref{thm:conj-of-iconic-reduc}, that $\ebar\cdot\uu=0$ and so that $\uu\in\asubdiffext{\ebar\plusl\xbar}$ by part~(\ref{thm:subgrad-icon-reduc:a}). Since $\fstar(\uu)=\gstar(\uu)\in\R$, again by Theorem~\ref{thm:conj-of-iconic-reduc}, this completes the proof. \qedhere \end{proof-parts} \end{proof} If the condition $\ebar\cdot\uu=0$ is omitted, then Theorem~\ref{thm:subgrad-icon-reduc}(\ref{thm:subgrad-icon-reduc:b}) no longer holds in general, as seen in the next example: \begin{example} For $\xx\in\R^2$, let $f(\xx)=f(x_1,x_2)=x_2-x_1=\xx\cdot\uu$, where $\uu=\trans{[-1,1]}$. Let $\ebar=\limray{\ee_1}$, which is in $\aresconef$, and let $g=\fshadd$. Then $\uu\in\basubdiffext{\ebar\plusl\xbar}$, for all $\xbar\in\extspac{2}$, as seen in Example~\ref{ex:affine-subgrad} (and since $\fstar(\uu)=0$). However, $\ebar\cdot\uu\neq 0$, implying $\uu\not\in\basubdifgext{\xbar}$ by Theorem~\ref{thm:subgrad-icon-reduc}(\ref{thm:subgrad-icon-reduc:b}). Indeed, $g\equiv-\infty$, implying $\gstar\equiv+\infty$ so $\basubdifgext{\xbar}=\emptyset$. 
\end{example} Although the condition $\ebar\cdot\uu=0$ is necessary for Theorem~\ref{thm:subgrad-icon-reduc}(\ref{thm:subgrad-icon-reduc:b}) to hold in general, that condition can nevertheless be omitted if $g$ is proper, as we show next: \begin{theorem} \label{thm:subgrad-icon-reduc-proper} Let $f:\Rn\rightarrow\Rext$ be convex, let $\ebar\in\corezn$, let $g=\fshadd$, and assume $g$ is proper. Let $\xbar\in\extspace$. Then \[ \basubdifgext{\xbar} = \basubdiffext{\ebar\plusl\xbar}. \] \end{theorem} \begin{proof} Since $g$ is proper, $\ebar\in\aresconef$ by Corollary~\ref{cor:a:4}. That $ \basubdifgext{\xbar} \subseteq \basubdiffext{\ebar\plusl\xbar}$ is immediate from Theorem~\ref{thm:subgrad-icon-reduc}(\ref{thm:subgrad-icon-reduc:b}). To prove the reverse inclusion, suppose $\uu\in \basubdiffext{\ebar\plusl\xbar}$. Because $g$ is proper and convex (Proposition~\ref{pr:i:9}\ref{pr:i:9a}), its effective domain, $\dom{g}$, is convex and nonempty, and therefore its relative interior is also nonempty (\Cref{pr:ri-props}\ref{pr:ri-props:roc-thm6.2b}). Thus, there exists a point $\yy\in\ri(\dom{g})$. Every such point must have a standard subgradient $\ww\in\partial g(\yy)$ (\Cref{roc:thm23.4}\ref{roc:thm23.4:a}), implying $\ww\in\basubdifgext{\yy}$ by Proposition~\ref{pr:ben-subgrad-at-x-in-rn}. By Theorem~\ref{thm:subgrad-icon-reduc}(\ref{thm:subgrad-icon-reduc:b}), it then follows that $\ebar\cdot\ww=0$ and $\ww\in\basubdiffext{\ebar\plusl\yy}$. Suppose, contrary to what we are aiming to prove, that $\uu\not\in\basubdifgext{\xbar}$. Since $\uu\in \basubdiffext{\ebar\plusl\xbar}$, this implies, by Theorem~\ref{thm:subgrad-icon-reduc}(\ref{thm:subgrad-icon-reduc:b}), that $\ebar\cdot\uu\neq 0$, and thus that $\ebar\cdot\uu$ is infinite (Proposition~\ref{pr:icon-equiv}\ref{pr:icon-equiv:a},\ref{pr:icon-equiv:b}). Suppose, as a first case, that $\ebar\cdot\uu=+\infty$. 
Because $\ww\in\asubdiffext{\ebar\plusl\yy}$, it follows that $\ebar\plusl\yy\in\adsubdiffstar{\ww}$ by Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c}. Therefore, from the definition of astral dual subgradient given in \eqref{eqn:psi-subgrad:3-alt}, \begin{equation} \label{eq:thm:subgrad-icon-reduc-proper:1} \fstar(\uu) \geq \fstar(\ww) \plusd (\ebar\plusl\yy)\cdot (\uu - \ww). \end{equation} Since $\ebar\cdot\uu=+\infty$ and $\ebar\cdot\ww=0$, which are summable, \begin{equation*} \label{eq:thm:subgrad-icon-reduc-proper:3} \ebar\cdot(\uu-\ww) = \ebar\cdot\uu - \ebar\cdot\ww = +\infty \end{equation*} by Proposition~\ref{pr:i:1}. Therefore, \begin{equation*} \label{eq:thm:subgrad-icon-reduc-proper:4} (\ebar\plusl\yy)\cdot (\uu - \ww) = \ebar\cdot(\uu-\ww)\plusl\yy\cdot(\uu-\ww) = +\infty. \end{equation*} On the other hand, since $\uu$ and $\ww$ are strict astral subgradients, both $\fstar(\uu)$ and $\fstar(\ww)$ are finite. Therefore, \eqref{eq:thm:subgrad-icon-reduc-proper:1} yields a contradiction since the left-hand side is finite but the right-hand side is $+\infty$. In the alternative case, suppose that $\ebar\cdot\uu=-\infty$, which we handle similarly. Then $\ebar\plusl\xbar\in\adsubdiffstar{\uu}$ since $\uu\in\asubdiffext{\ebar\plusl\xbar}$, by Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c}, so \begin{equation} \label{eq:thm:subgrad-icon-reduc-proper:2} \fstar(\ww) \geq \fstar(\uu) \plusd (\ebar\plusl\xbar)\cdot (\ww - \uu). \end{equation} Also, $\ebar\cdot(\ww-\uu)=+\infty$ by Proposition~\ref{pr:i:1}, so \[ (\ebar\plusl\xbar)\cdot (\ww - \uu) = \ebar\cdot(\ww-\uu)\plusl\xbar\cdot(\ww-\uu) = +\infty. \] Therefore, since $\fstar(\uu)$ and $\fstar(\ww)$ are finite, \eqref{eq:thm:subgrad-icon-reduc-proper:2} yields a contradiction as before. Having reached a contradiction in both cases, we conclude that $\uu\in\basubdifgext{\xbar}$, completing the proof.
\end{proof} As a corollary, Theorems~\ref{thm:subgrad-icon-reduc} and~\ref{thm:subgrad-icon-reduc-proper} can be straightforwardly combined with Proposition~\ref{pr:subgrad-shift-arg}, yielding rules for the astral subgradients of the extension of a function of the form $\xx\mapsto\fext(\zbar\plusl\xx)$, for any $\zbar\in\extspace$: \begin{corollary} \label{cor:subgrad-arb-reduc} Let $f:\Rn\rightarrow\Rext$ be convex, let $\zbar\in\extspace$, and let $h:\Rn\rightarrow\Rext$ be defined by $h(\xx)=\fext(\zbar\plusl\xx)$ for $\xx\in\Rn$. Assume $h\not\equiv+\infty$. Let $\xbar\in\extspace$ and $\uu\in\Rn$. Then: \begin{letter} \item \label{cor:subgrad-arb-reduc:a} If $\zbar\cdot\uu\in\R$ then $\uu\in\asubdifhext{\xbar}$ if and only if $\uu\in\asubdiffext{\zbar\plusl\xbar}$. \item \label{cor:subgrad-arb-reduc:b} $\uu\in\basubdifhext{\xbar}$ if and only if $\zbar\cdot\uu\in\R$ and $\uu\in\basubdiffext{\zbar\plusl\xbar}$. \item \label{cor:subgrad-arb-reduc:c} If $h$ is proper then $\basubdifhext{\xbar} = \basubdiffext{\zbar\plusl\xbar}$. \end{letter} \end{corollary} \begin{proof} We can write $\zbar=\ebar\plusl\qq$ for some $\ebar\in\corezn$ and $\qq\in\Rn$ (Theorem~\ref{thm:icon-fin-decomp}). Since $h\not\equiv+\infty$, there exists $\yy\in\Rn$ for which $h(\yy)<+\infty$, that is, $\fext(\ebar\plusl\qq\plusl\yy)<+\infty$, implying $\ebar\in\aresconef$ by Corollary~\ref{cor:a:4}. Also, since $\ebar$ is an icon, $\zbar\cdot\uu=\ebar\cdot\uu+\qq\cdot\uu\in\R$ if and only if $\ebar\cdot\uu=0$ (by Proposition~\ref{pr:icon-equiv}\ref{pr:icon-equiv:a},\ref{pr:icon-equiv:b}). Let $g=\fshadd$. Then \[ h(\xx) = \fext(\ebar\plusl\qq\plusl\xx) = g(\qq+\xx) \] for $\xx\in\Rn$. \begin{proof-parts} \pfpart{Part~(\ref{cor:subgrad-arb-reduc:a}):} Suppose $\zbar\cdot\uu\in\R$. Then $\ebar\cdot\uu=0$, as noted above. Thus, by Proposition~\ref{pr:subgrad-shift-arg}(\ref{pr:subgrad-shift-arg:a}), $\uu\in\asubdifhext{\xbar}$ if and only if $\uu\in\asubdifgext{\qq\plusl\xbar}$.
In turn, by Theorem~\ref{thm:subgrad-icon-reduc}(\ref{thm:subgrad-icon-reduc:a}), this holds if and only if $\uu\in\asubdiffext{\ebar\plusl\qq\plusl\xbar}$, that is, if and only if $\uu\in\asubdiffext{\zbar\plusl\xbar}$, proving the claim. \pfpart{Part~(\ref{cor:subgrad-arb-reduc:b}):} Similarly, by Proposition~\ref{pr:subgrad-shift-arg}(\ref{pr:subgrad-shift-arg:b}), $\uu\in\basubdifhext{\xbar}$ if and only if $\uu\in\basubdifgext{\qq\plusl\xbar}$. By Theorem~\ref{thm:subgrad-icon-reduc}(\ref{thm:subgrad-icon-reduc:b}), this holds if and only if $\ebar\cdot\uu=0$ and $\uu\in\basubdiffext{\ebar\plusl\qq\plusl\xbar}$, that is, if and only if $\zbar\cdot\uu\in\R$ and $\uu\in\basubdiffext{\zbar\plusl\xbar}$. \pfpart{Part~(\ref{cor:subgrad-arb-reduc:c}):} Since $h$ is proper, $g$ is as well. Thus, \[ \basubdifhext{\xbar} = \basubdifgext{\qq\plusl\xbar} = \basubdiffext{\ebar\plusl\qq\plusl\xbar} = \basubdiffext{\zbar\plusl\xbar} \] by Proposition~\ref{pr:subgrad-shift-arg}(\ref{pr:subgrad-shift-arg:b}) and Theorem~\ref{thm:subgrad-icon-reduc-proper}. \qedhere \end{proof-parts} \end{proof} Ordinary gradients can be viewed as providing the instantaneous rate at which a function $f$ on $\Rn$ is increasing at a point $\xx\in\Rn$ along any vector $\vv\in\Rn$, that is, how quickly $f(\xx+\lambda\vv)$ is increasing as a function of $\lambda\in\R$, at $\lambda=0$. Indeed, this rate is simply $\gradf(\xx)\cdot\vv$, so the gradient $\gradf(\xx)$ provides complete information about all such rates along all possible vectors $\vv$. 
Similarly, in standard convex analysis, the \emph{one-sided directional derivative} of a function $f:\Rn\rightarrow\Rext$ at a point $\xx\in\Rn$ with respect to a vector $\vv\in\Rn$, denoted $\dderf{\xx}{\vv}$, captures similar directional rates at a point, and is defined to be \[ \dderf{\xx}{\vv} = \lim_{\rightlim{\lambda}{0}} \frac{f(\xx+\lambda\vv) - f(\xx)} {\lambda}, \] provided $f(\xx)\in\R$, where the limit is taken from the right, that is, as $\lambda>0$ approaches $0$. These directional derivatives are fully determined by the subdifferential at $\xx$; specifically, if $f$ is convex and proper, then \begin{equation} \label{eq:dir-der-from-subgrad} \dderf{\xx}{\vv} = \sup_{ \uu\in\partial f(\xx) } \vv\cdot\uu \end{equation} \citep[Theorem~23.4]{ROC}. We can generalize these notions to astral space to compute how quickly the extension $\fext$ of a convex function $f:\Rn\rightarrow\Rext$ is increasing at a point $\xbar\in\extspace$ along a vector $\vv\in\Rn$, and we can relate these notions to astral subgradients by way of Corollary~\ref{cor:subgrad-arb-reduc}. To do so, we first define $h:\Rn\rightarrow\Rext$ by $h(\yy)=\fext(\xbar\plusl\yy)$ for $\yy\in\Rn$. Then the one-sided directional derivative of $h$ at $\zero$ provides the instantaneous rate at which $\fext(\xbar\plusl\lambda\vv)$ is increasing as a function of $\lambda>0$, since \[ \dderh{\zero}{\vv} = \lim_{\rightlim{\lambda}{0}} \frac{h(\lambda\vv) - h(\zero)} {\lambda} = \lim_{\rightlim{\lambda}{0}} \frac{\fext(\xbar\plusl\lambda\vv) - \fext(\xbar)} {\lambda} \] (provided $\fext(\xbar)\in\R$). Furthermore, by \eqref{eq:dir-der-from-subgrad}, this directional derivative is determined by the subdifferential $\partial h(\zero)$. If $h$ is proper, the next theorem shows that this (standard) subdifferential is actually the same as the strict astral subdifferential $\basubdiffext{\xbar}$.
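The sup formula in \eqref{eq:dir-der-from-subgrad} can be checked on a small one-dimensional instance. In the sketch below (an illustrative numerical check only; the choice $f(x)=|x|$ at $x=0$, for which $\partial f(0)=[-1,1]$, and the grid over subgradients are assumptions of this sketch, not taken from the text), the one-sided difference quotient matches $\sup_{u\in[-1,1]} vu = |v|$.

```python
import numpy as np

# Sketch: verify f'(x; v) = sup_{u in ∂f(x)} v·u  for f(x) = |x| at x = 0,
# where the subdifferential is ∂f(0) = [-1, 1], so f'(0; v) = |v|.

def one_sided_dd(f, x, v, lam=1e-8):
    # right-sided difference quotient approximating f'(x; v)
    return (f(x + lam * v) - f(x)) / lam

us = np.linspace(-1.0, 1.0, 2001)      # grid over ∂f(0) = [-1, 1]
for v in [2.0, -3.0, 0.5]:
    sup_formula = np.max(v * us)       # sup over the grid, equal to |v| here
    assert abs(one_sided_dd(abs, 0.0, v) - sup_formula) < 1e-6
```

For $f(x)=|x|$ the difference quotient at $0$ is exact for every $\lambda>0$, so the agreement here is not merely approximate; for smooth points the same comparison holds with $\partial f(\xx)$ a singleton.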
The theorem also gives characterizations providing an astral analog of the standard definition of subgradient seen in \eqref{eqn:standard-subgrad-ineq}. \begin{theorem} \label{thm:subgrad-reduc-dir-der} Let $f:\Rn\rightarrow\Rext$ be convex, let $\xbar\in\extspace$, and let $h:\Rn\rightarrow\Rext$ be defined by $h(\yy)=\fext(\xbar\plusl\yy)$ for $\yy\in\Rn$. Assume $h$ is proper (or equivalently, that $\fext(\xbar\plusl\zz)\in\R$ for some $\zz\in\Rn$). Then $h$ is convex and lower semicontinuous, and \begin{equation} \label{eq:thm:subgrad-reduc-dir-der:2} \partial h(\zero) = \basubdifhext{\zero} = \basubdiffext{\xbar}. \end{equation} Furthermore, for $\uu\in\Rn$, the following are equivalent: \begin{letter-compact} \item \label{thm:subgrad-reduc-dir-der:c} $\uu\in \basubdiffext{\xbar}$. \item \label{thm:subgrad-reduc-dir-der:d} For all $\yy\in\Rn$, $\fext(\xbar\plusl\yy)\geq\fext(\xbar) + \yy\cdot\uu$. \item \label{thm:subgrad-reduc-dir-der:e} For all $\ybar\in\extspace$, $\fext(\xbar\plusl\ybar)\geq\fext(\xbar) \plusl \ybar\cdot\uu$. \end{letter-compact} \end{theorem} \begin{proof} Similar to the proof of Corollary~\ref{cor:subgrad-arb-reduc}, we write $\xbar=\ebar\plusl\qq$ for some $\ebar\in\corezn$ and $\qq\in\Rn$, and we define $g=\fshadd$, implying $h(\yy)=g(\qq+\yy)$ for $\yy\in\Rn$. Then $g$ is convex and lower semicontinuous, by Proposition~\ref{pr:i:9}(\ref{pr:i:9a}), so $h$ is as well, being the composition of the convex, lower semicontinuous function $g$ with the affine function $\yy\mapsto\qq+\yy$. As noted in the theorem's statement, $h$ is proper if and only if $h(\zz)=\fext(\xbar\plusl\zz)\in\R$ for some $\zz\in\Rn$. This is because, being convex and lower semicontinuous, by \Cref{pr:improper-vals}(\ref{pr:improper-vals:cor7.2.1}), $h$ is improper if and only if it has no finite values. The first equality of \eqref{eq:thm:subgrad-reduc-dir-der:2} is by Proposition~\ref{pr:ben-subgrad-at-x-in-rn}. 
The second equality is Corollary~\ref{cor:subgrad-arb-reduc}(\ref{cor:subgrad-arb-reduc:c}) (with $\zbar$ and $\xbar$, as they appear in that corollary, set to $\xbar$ and $\zero$). \begin{proof-parts} \pfpart{(\ref{thm:subgrad-reduc-dir-der:c}) $\Leftrightarrow$ (\ref{thm:subgrad-reduc-dir-der:d}): } By definition of standard subgradient (Eq.~\ref{eqn:standard-subgrad-ineq}), $\uu\in\basubdiffext{\xbar}=\partial h(\zero)$ if and only if $h(\yy)\geq h(\zero) + \yy\cdot\uu$ for all $\yy\in\Rn$. This is equivalent to condition~(\ref{thm:subgrad-reduc-dir-der:d}). \pfpart{(\ref{thm:subgrad-reduc-dir-der:e}) $\Rightarrow$ (\ref{thm:subgrad-reduc-dir-der:d}): } This follows immediately by setting $\ybar=\yy$. \pfpart{(\ref{thm:subgrad-reduc-dir-der:d}) $\Rightarrow$ (\ref{thm:subgrad-reduc-dir-der:e}): } Suppose condition~(\ref{thm:subgrad-reduc-dir-der:d}) holds, and let $\ybar\in\extspace$. Note that \begin{equation} \label{eq:thm:subgrad-reduc-dir-der:1} \hext(\ybar) = \gext(\qq\plusl\ybar) = \fext(\ebar\plusl\qq\plusl\ybar) = \fext(\xbar\plusl\ybar), \end{equation} where the first and second equalities are by Propositions~\ref{pr:ext-affine-comp}(\ref{pr:ext-affine-comp:b}) and~\ref{pr:i:9}(\ref{pr:i:9b}). Let $\seq{\yy_t}$ be a sequence in $\Rn$ converging to $\ybar$ with $h(\yy_t)\rightarrow\hext(\ybar)$ (which exists by Proposition~\ref{pr:d1}). Then \[ \fext(\xbar\plusl\ybar) = \hext(\ybar) = \lim h(\yy_t) = \lim \fext(\xbar\plusl\yy_t) \geq \lim \Bracks{ \fext(\xbar)\plusl\yy_t\cdot\uu } = \fext(\xbar) \plusl\ybar\cdot\uu. \] The first equality is by \eqref{eq:thm:subgrad-reduc-dir-der:1}. The inequality is by our assumption that condition~(\ref{thm:subgrad-reduc-dir-der:d}) holds. The last equality and the fact that the third limit exists are because $\yy_t\cdot\uu\rightarrow\ybar\cdot\uu$ (by Theorem~\ref{thm:i:1}\ref{thm:i:1c}), combined with Proposition~\ref{pr:i:5}(\ref{pr:i:5lim}). 
\qedhere \end{proof-parts} \end{proof} As additional consequences, the next two theorems give sufficient or necessary conditions for when astral subdifferentials of an extension must be nonempty. The first theorem shows that if $\xbar\in\extspace$ can be expressed with finite part in $\ri(\dom{f})$, then $\basubdiffext{\xbar}$ is nonempty if $\fext(\xbar)\in\R$, and $\asubdiffext{\xbar}$ is nonempty if $\fext(\xbar)<+\infty$. This implies that the same holds at all points where $\fext$ is continuous, and holds everywhere if $f$ is finite everywhere. \begin{theorem} \label{thm:subgrad-not-empty-fin-ri} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\xbar\in\extspace$. Assume at least one of the following holds: \begin{roman-compact} \item \label{thm:subgrad-not-empty-fin-ri:a} $\xbar\in\corezn\plusl\ri(\dom{f})$; that is, $\xbar=\ebar\plusl\qq$ for some $\ebar\in\corezn$ and $\qq\in\ri(\dom{f})$. \item \label{thm:subgrad-not-empty-fin-ri:b} $\dom{f}=\Rn$. \item \label{thm:subgrad-not-empty-fin-ri:c} $\fext$ is continuous at $\xbar$. \end{roman-compact} Then the following also hold: \begin{letter} \item \label{thm:subgrad-not-empty-fin-ri:ben} If $\fext(\xbar)\in\R$ then $\basubdiffext{\xbar}\neq\emptyset$. \item \label{thm:subgrad-not-empty-fin-ri:gen} If $\fext(\xbar)<+\infty$ then $\asubdiffext{\xbar}\neq\emptyset$. \end{letter} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:subgrad-not-empty-fin-ri:ben}):} Assume $\fext(\xbar)\in\R$. We prove $\basubdiffext{\xbar}\neq\emptyset$ under each of the three stated conditions. Suppose first that condition~(\ref{thm:subgrad-not-empty-fin-ri:a}) holds so that $\xbar=\ebar\plusl\qq$ where $\ebar\in\corezn$ and $\qq\in\ri(\dom{f})$. Let $g=\fshadd$, which is convex and lower semicontinuous (Proposition~\ref{pr:i:9}\ref{pr:i:9a}). Then $g(\qq)=\fext(\ebar\plusl\qq)=\fext(\xbar)\in\R$, so $g$ is proper (by \Cref{pr:improper-vals}\ref{pr:improper-vals:cor7.2.1}). 
Also, $\ri(\dom{f})\subseteq\ri(\dom{g})$ by Theorem~\ref{thm:ri-dom-icon-reduc}, so $\qq\in\ri(\dom{g})$. This implies that there exists a standard subgradient $\uu\in\partial g(\qq)$ (by \Cref{roc:thm23.4}\ref{roc:thm23.4:a}). Therefore, $\uu\in\basubdifgext{\qq}$ by Proposition~\ref{pr:ben-subgrad-at-x-in-rn}, so $\uu\in\basubdiffext{\ebar\plusl\qq}=\basubdiffext{\xbar}$ by Theorem~\ref{thm:subgrad-icon-reduc-proper}. Next, suppose condition~(\ref{thm:subgrad-not-empty-fin-ri:b}) holds so that $\dom{f}=\Rn$. In this case, $\xbar\in\extspace=\corezn\plusl\ri(\dom{f})$ by Theorem~\ref{thm:icon-fin-decomp}, so the claim follows from the preceding argument for condition~(\ref{thm:subgrad-not-empty-fin-ri:a}). Finally, suppose condition~(\ref{thm:subgrad-not-empty-fin-ri:c}) holds so that $\fext$ is continuous at $\xbar$. In this case, let $h=\lsc f$, implying $\hext=\fext$ by \Cref{pr:h:1}(\ref{pr:h:1aa}). Then by Corollary~\ref{cor:cont-gen-char}, $\xbar=\ebar\plusl\qq$ for some $\ebar\in\represc{h}\cap\corezn \subseteq \corezn$ and $\qq\in\intdom{h}\subseteq\ri(\dom{h})$. Therefore, condition~(\ref{thm:subgrad-not-empty-fin-ri:a}) holds for $h$, implying, by the preceding argument for that case, that $\basubdiffext{\xbar}=\basubdifhext{\xbar}\neq\emptyset$. \pfpart{Part~(\ref{thm:subgrad-not-empty-fin-ri:gen}):} Suppose $\fext(\xbar)<+\infty$. If $\fext(\xbar)\in\R$, then $\basubdiffext{\xbar}$ is not empty by part~(\ref{thm:subgrad-not-empty-fin-ri:ben}), so $\asubdiffext{\xbar}$ is as well. And if $\fext(\xbar)=-\infty$, then $\xbar$ minimizes $\fext$ so $\zero\in\asubdiffext{\xbar}$ by Proposition~\ref{pr:asub-zero-is-min}. \qedhere \end{proof-parts} \end{proof} As a partial converse, the next theorem shows that, unless $\fext(\xbar\plusl\zz)$ is infinite for all $\zz\in\Rn$ (in other words, across $\xbar$'s entire galaxy), if $\basubdiffext{\xbar}$ is nonempty then $\fext(\xbar)$ must be finite. 
\begin{theorem} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\xbar\in\extspace$. Assume there exists $\zz\in\Rn$ for which $\fext(\xbar\plusl\zz)\in\R$, and also that $\uu\in\basubdiffext{\xbar}$. Then $\fext(\xbar)\in\R$ and $\xbar\cdot\uu\in\R$. \end{theorem} \begin{proof} Let $h(\yy)=\fext(\xbar\plusl\yy)$ for $\yy\in\Rn$. Then Theorem~\ref{thm:subgrad-reduc-dir-der} shows that \[ \uu \in \basubdiffext{\xbar} = \basubdifhext{\zero} = \partial h(\zero), \] and also that $h$ is convex, lower semicontinuous and proper. Since $\uu\in\basubdifhext{\zero}$, Corollary~\ref{cor:subgrad-arb-reduc}(\ref{cor:subgrad-arb-reduc:b}) implies that $\xbar\cdot\uu\in\R$. And since $\uu\in \partial h(\zero)$, $\zero\in\dom{h}$ by \Cref{roc:thm23.4}(\ref{roc:thm23.4:b}). Thus, $\fext(\xbar)=h(\zero)\in\R$ (since $h$ is proper). \end{proof} \subsection{Linear image of a function} We next identify the astral subgradients of $\A f$, the image of a function $f:\Rn\rightarrow\Rext$ under $\A\in\Rmn$, as given in \eqref{eq:lin-image-fcn-defn}. Let $h=\A f$. As was seen in Proposition~\ref{pr:inf-lin-ext}, if a point $\xbar$ is in the astral column space of $\A$, then there exists a point $\zbar$ with $\A\zbar=\xbar$ and $\hext(\xbar)=\fext(\zbar)$, that is, $\fext(\zbar)=\hext(\A\zbar)$. Informally then, in light of Theorem~\ref{thm:subgrad-fA}, we might expect $\uu$ to be an astral subgradient of $\hext$ at $\xbar=\A\zbar$ when $\transA\uu$ is an astral subgradient of $\fext$ at $\zbar$. This heuristic reasoning about the astral subgradients of $\A f$ is made precise in the next theorem: \begin{theorem} \label{thm:subgrad-Af} Let $f:\Rn\rightarrow\Rext$ and $\A\in\Rmn$. Let $\xbar\in\extspac{m}$ and $\uu\in\Rm$. Then the following hold: \begin{letter} \item \label{thm:subgrad-Af:a} $\uu\in\asubdifAfext{\xbar}$ if and only if there exists $\zbar\in\extspace$ such that $\A\zbar=\xbar$ and $\transA \uu \in \asubdiffext{\zbar}$. 
\item \label{thm:subgrad-Af:b} $\uu\in\basubdifAfext{\xbar}$ if and only if there exists $\zbar\in\extspace$ such that $\A\zbar=\xbar$ and $\transA \uu \in \basubdiffext{\zbar}$. \end{letter} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:subgrad-Af:a}):} Let $h=\A f$, let $\uu\in\Rm$ and let $\ww=\transA \uu$. If $f\equiv+\infty$ then $h\equiv+\infty$ so the claim holds vacuously since all astral subdifferentials are empty in this case (Proposition~\ref{pr:subgrad-imp-in-cldom}\ref{pr:subgrad-imp-in-cldom:b}). Therefore, we assume henceforth that $f\not\equiv+\infty$, implying $h\not\equiv+\infty$ (since if $f(\zz)<+\infty$ then $h(\A\zz)\leq f(\zz)<+\infty$). We first make some preliminary observations. Let $\fminusw$ and $\hminusu$ be linear shifts of $f$ and $h$. Then for $\xx\in\Rm$, \begin{eqnarray} \hminusu(\xx) &=& \inf \Braces{ f(\zz) : \zz\in\Rn, \A\zz=\xx } - \xx\cdot\uu \nonumber \\ &=& \inf \Braces{ f(\zz) - \xx\cdot\uu : \zz\in\Rn, \A\zz=\xx } \nonumber \\ &=& \inf \Braces{ f(\zz) - \zz\cdot\ww : \zz\in\Rn, \A\zz=\xx } \nonumber \\ &=& \inf \Braces{ \fminusw(\zz) : \zz\in\Rn, \A\zz=\xx }. \label{eq:thm:subgrad-Af:3} \end{eqnarray} The third equality is because if $\A\zz=\xx$ then \[ \xx\cdot\uu = (\A\zz)\cdot\uu = \zz\cdot(\transA\uu) = \zz\cdot\ww \] by Theorem~\ref{thm:mat-mult-def} (or just ordinary linear algebra). Thus, $\hminusu = \A \fminusw$. From Proposition~\ref{pr:inf-lin-ext}, this implies \begin{equation} \label{eq:thm:subgrad-Af:1} \hminusuext(\xbar) = \inf\Braces{\fminuswext(\zbar) \,:\, \zbar\in\extspace,\, \A\zbar=\xbar}. \end{equation} Next, we claim that \begin{equation} \label{eq:thm:subgrad-Af:2} \inf \hminusu = \inf \fminusw. \end{equation} Since $\fminusw(\zz)\geq\inf\fminusw$ for all $\zz\in\Rn$, the last expression of \eqref{eq:thm:subgrad-Af:3} also is always at least $\inf\fminusw$; thus, $\inf\hminusu\geq\inf\fminusw$. 
On the other hand, for all $\zz\in\Rn$, \eqref{eq:thm:subgrad-Af:3} implies that $\inf\hminusu\leq \hminusu(\A\zz)\leq\fminusw(\zz)$, so $\inf\hminusu\leq\inf\fminusw$. We now prove the theorem. Suppose there exists $\zbar\in\extspace$ with $\A\zbar=\xbar$ and $\ww=\transA \uu \in \asubdiffext{\zbar}$. Then \[ \inf \hminusu \leq \hminusuext(\xbar) \leq \fminuswext(\zbar) = \inf \fminusw = \inf \hminusu. \] The first inequality is from Proposition~\ref{pr:fext-min-exists}. The second inequality is from \eqref{eq:thm:subgrad-Af:1}, since $\A\zbar=\xbar$. The first equality is from Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}), since $\ww\in\asubdiffext{\zbar}$. And the last equality is \eqref{eq:thm:subgrad-Af:2}. Thus, $ \hminusuext(\xbar) = \inf \hminusu$, so $\uu\in\asubdifhext{\xbar}$ by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}). For the converse, suppose now that $\uu\in\asubdifhext{\xbar}$. We claim that $\cldom{h}$ is included in $\acolspace \A$, the astral column space of $\A$. To see this, note first that if $\xx\in\dom{h}$ then we must have $\A\zz=\xx$ for some $\zz\in\Rn$ (since otherwise the infimum in \eqref{eq:lin-image-fcn-defn} would be vacuous). Thus, $\dom{h}\subseteq\acolspace \A$. Also, $\acolspace \A$ is the image of $\extspace$ under the map $\ybar\mapsto \A\ybar$ for $\ybar\in\extspace$, which is continuous (\Cref{thm:linear:cont}\ref{thm:linear:cont:b}). Since $\extspace$ is compact, its image $\acolspace \A$ is also compact, and therefore closed in $\extspace$ (\Cref{prop:compact}\ref{prop:compact:cont-compact},\ref{prop:compact:closed}). Therefore, $\cldom{h}\subseteq\acolspace \A$. In particular, since $\uu\in\asubdifhext{\xbar}$, $\xbar$ is in $\cldom{h}$ (by Propositions~\ref{pr:subgrad-imp-in-cldom}\ref{pr:subgrad-imp-in-cldom:a} and~\ref{pr:h:1}\ref{pr:h:1c}), and so also in $\acolspace \A$. 
Therefore, by Proposition~\ref{pr:inf-lin-ext}, there exists $\zbar$ attaining the infimum in \eqref{eq:thm:subgrad-Af:1} so that $\A\zbar=\xbar$ and $\hminusuext(\xbar)=\fminuswext(\zbar)$. Thus, \[ \fminuswext(\zbar) = \hminusuext(\xbar) = \inf\hminusu = \inf\fminusw. \] The second equality is from Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}) since $\uu\in\asubdifhext{\xbar}$, and the last equality is \eqref{eq:thm:subgrad-Af:2}. Therefore, $\transA\uu = \ww\in\asubdiffext{\zbar}$, again by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}), completing the proof. \pfpart{Part~(\ref{thm:subgrad-Af:b}):} Equation~\eqref{eq:thm:subgrad-Af:2}, combined with Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:d}), implies $\hstar(\uu)=\fstar(\ww)$, so $\hstar(\uu)\in\R$ if and only if $\fstar(\ww)\in\R$. Combined with part~(\ref{thm:subgrad-Af:a}), this proves the claim. \qedhere \end{proof-parts} \end{proof} An analog of Theorem~\ref{thm:subgrad-Af} for standard subgradients was given in \Cref{pr:stan-subgrad-lin-img}. However, that result requires the additional condition that the infimum defining $\A f$ in \eqref{eq:lin-image-fcn-defn} be attained. That condition is unnecessary in the astral case since the analogous infimum is always attained (unless vacuous), as was seen in Proposition~\ref{pr:inf-lin-ext}. Without this additional condition, the analog of Theorem~\ref{thm:subgrad-Af} for standard subgradients does not hold true in general. In other words, with $f$ and $\A$ as in the theorem, and for $\xx,\uu\in\Rm$, it is not the case that $\uu\in\partial \Af(\xx)$ if and only if there exists $\zz\in\Rn$ such that $\A\zz=\xx$ and $\transA \uu \in \partial f(\zz)$, as shown in the next example. \begin{example} In $\R^2$, let $f(\zz)=f(z_1,z_2)=e^{z_1}$, and let $\A = [0,1] = \trans{\ee_2}$ (so $m=1$ and $n=2$).
Then $\Af(x)=\inf\{e^{z_1} : \zz\in\R^2, z_2 = x\} = 0$ for all $x\in\R$, so $\A f\equiv 0$. Now let $x=u=0$. Then $\nabla\Af(0)=0$, so $u\in\partial \Af(x)$. On the other hand, for all $\zz\in\R^2$, $\gradf(\zz)=\trans{[e^{z_1},0]}\neq \zero$. Therefore, $\transA u = \zero \not\in\partial f(\zz)$ for any $\zz\in\R^2$. Although its analog for standard subgradients is false for this example, Theorem~\ref{thm:subgrad-Af} is of course true in the astral setting. In this case, $u\in\asubdifAfext{x}$, and, setting $\zbar=\limray{(-\ee_1)}$, we have $\A\zbar = 0 = x$ and $\transA u = \zero \in \asubdiffext{\zbar}$ (by Proposition~\ref{pr:asub-zero-is-min} since $\fext$ is minimized by $\zbar$). \end{example} \subsection{Maxima and suprema of functions} Next, we study the astral subgradients of a function $h$ that is the pointwise supremum of other functions, that is, \begin{equation} \label{eqn:point-sup-def} h(\xx) = \sup_{i\in\indset} f_i(\xx) \end{equation} for $\xx\in\Rn$, where $f_i:\Rn\rightarrow\Rext$ for $i$ in some index set $\indset$. In standard convex analysis and under suitable conditions, many or all of the subgradients of $h$ at a point $\xx\in\Rn$ can be determined from the subgradients of all those functions $f_i$ at which the supremum in \eqref{eqn:point-sup-def} is attained. More specifically, if $h(\xx)=f_i(\xx)$ then every subgradient of $f_i$ at $\xx$ is also a subgradient of $h$ at $\xx$ so that $\partial f_i(\xx) \subseteq \partial h(\xx)$. Furthermore, since $\partial h(\xx)$ is closed and convex, this implies \begin{equation*} \cl \conv\Parens{\bigcup_{\scriptontop{i\in I :}{h(\xx)=f_i(\xx)}} \partial f_i(\xx) } \subseteq \partial h(\xx), \end{equation*} as seen in \Cref{pr:stnd-sup-subgrad}. Under some conditions, this can account for all of $h$'s subgradients; for instance, this is the case if each $f_i$ is convex and finite everywhere, and if $\indset$ is finite, as was seen in \Cref{pr:std-max-fin-subgrad}. 
\begin{example} \label{ex:subgrad-max-linear} For instance, for $x\in\R$, suppose $h(x)=\max\{f(x),g(x)\}$ where $f(x)=x$ and $g(x)=2x$. Then $\partial f(x)=\{1\}$, $\partial g(x)=\{2\}$ and, for $x\in\R$, \[ \partial h(x) = \begin{cases} \{1\} & \mbox{if $x<0$,} \\ {[1,2]} & \mbox{if $x=0$,} \\ \{2\} & \mbox{if $x>0$.} \end{cases} \] Thus, for points $x<0$, $h(x)=f(x)$ and $h$'s slope (unique subgradient) is equal to $f$'s; similarly at points $x>0$. At $x=0$, $h(x)=f(x)=g(x)$, and $\partial h(0)$ is equal to the convex hull of $f$'s slope and $g$'s slope. \end{example} For astral subgradients, we expect a similar relationship between $\asubdifhext{\xbar}$ and $\asubdiffextsub{i}{\xbar}$ for those $i\in\indset$ with $\hext(\xbar)=\fextsub{i}(\xbar)$. Indeed, if $\hext(\xbar)=\fextsub{i}(\xbar)$ and if this value is finite, then $\asubdiffextsub{i}{\xbar} \subseteq \asubdifhext{\xbar}$, as expected. But if $\hext(\xbar)=\fextsub{i}(\xbar)\in\{-\infty,+\infty\}$ then this might not be true. For instance, for the functions in Example~\ref{ex:subgrad-max-linear}, $\hext(+\infty)=\fext(+\infty)=+\infty$ and $\asubdiffext{+\infty}=[1,+\infty)$ but $\asubdifhext{+\infty}=[2,+\infty)$, so $\asubdiffext{+\infty}\not\subseteq\asubdifhext{+\infty}$. Apparently, when their common value is infinite, requiring merely that $\hext(\xbar)=\fextsub{i}(\xbar)$ is too coarse to ensure that $\asubdiffextsub{i}{\xbar} \subseteq \asubdifhext{\xbar}$. Nonetheless, although $\hext(+\infty)=\fext(+\infty)=\gext(+\infty)=+\infty$ in this example, it is also true that $h(x)=g(x) > f(x)$ for all $x>0$. It therefore feels natural that the same should be true at $+\infty$, that, intuitively, the maximum defining $h$ (and so also $\hext$) is attained at $+\infty$ in an informal, asymptotic sense only by $\gext$, not $\fext$. This intuition can be made precise using linear shifts. Let $\hminusu$ and $\fminususub{i}$ be linear shifts of $h$ and $f_i$ at $\uu\in\Rn$, for $i\in\indset$. 
As just discussed, for $\asubdiffextsub{i}{\xbar} \subseteq \asubdifhext{\xbar}$, we will prove it is sufficient that $\hext(\xbar)=\fextsub{i}(\xbar)\in\R$. In fact, we will see that it is also sufficient if this holds for any linear shift, that is, if $\hminusuext(\xbar)=\fminususubext{i}(\xbar)\in\R$, for any $\uu\in\Rn$. For instance, for the functions from Example~\ref{ex:subgrad-max-linear}, with $u=2$, $\fminusU(x)=-x$, $\gminusU(x)=0$, and $\hminusU(x)=\max\{-x,0\}$ for $x\in\R$. So $\hminusUext(+\infty)=\gminusUext(+\infty)=0\in\R$, and indeed, $\asubdifgext{+\infty}\subseteq\asubdifhext{+\infty}$. On the other hand, we cannot have $\hminusUext(+\infty)=\fminusUext(+\infty)\in\R$ for any $u\in\R$. Together, these facts capture the intuition discussed above regarding asymptotic attainment of the maximum. The next theorem brings these ideas together. \begin{theorem} \label{thm:sup-subgrad-subset} Let $f_i:\Rn\rightarrow\Rext$ for all $i\in\indset$, and let \[ h(\xx) = \sup_{i\in\indset} f_i(\xx) \] for $\xx\in\Rn$. Let $\hminusu$ and $\fminususub{i}$ be linear shifts of $h$ and $f_i$ by $\uu$. Let $\xbar\in\extspace$, and let \[ J = \Braces{i \in \indset : \exists \uu\in\Rn,\, \hminusuext(\xbar)=\fminususubext{i}(\xbar)\in\R }. \] Then the following hold: \begin{letter} \item \label{thm:sup-subgrad-subset:a} $ \displaystyle \conv\Parens{ \bigcup_{i\in J} \asubdiffextsub{i}{\xbar} } \subseteq \asubdifhext{\xbar} $. \item \label{thm:sup-subgrad-subset:b} $ \displaystyle \conv\Parens{ \bigcup_{i\in J} \basubdiffextsub{i}{\xbar} } \subseteq \basubdifhext{\xbar} $. \end{letter} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:sup-subgrad-subset:a}):} Let $\ww\in\asubdiffextsub{i}{\xbar}$ for some $i\in J$. We will show $\ww\in\asubdifhext{\xbar}$. From $J$'s definition, there exists $\uu\in\Rn$ for which \begin{equation} \label{eq:thm:sup-subgrad-subset:1} \hminusuext(\xbar)=\fminususubext{i}(\xbar)\in\R. 
\end{equation} Note that $\ww-\uu\in\asubdiffminususubext{i}{\xbar}$ by Proposition~\ref{pr:fminusu-subgrad}. We have: \begin{align} \hstar(\ww) \leq \fistar(\ww) &= \fiminusustar(\ww-\uu) \nonumber \\ &= \xbar\cdot(\ww-\uu) - \fminususubext{i}(\xbar) \nonumber \\ &= \xbar\cdot(\ww-\uu) - \hminusuext(\xbar) \nonumber \\ &\leq \hminusustar(\ww-\uu) \nonumber \\ &= \hstar(\ww). \label{eq:thm:sup-subgrad-subset:2} \end{align} The first inequality is because, by $h$'s definition, $f_i\leq h$, implying $\hstar\leq\fistar$ from the definition of conjugate (Eq.~\ref{eq:fstar-def}). The first and last equalities are by Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:b}). The summability of the terms appearing at the third and fourth lines, as well as the equality of these expressions, follow from \eqref{eq:thm:sup-subgrad-subset:1}. The second equality follows from Theorem~\ref{thm:fenchel-implies-subgrad} (with $F=\fminususubext{i}$) since $\ww-\uu\in\asubdiffminususubext{i}{\xbar}$. The last inequality is by \eqref{eqn:ast-fenchel} (with $F=\hminusuext$). Thus, \[ \xbar\cdot(\ww-\uu) - \hminusuext(\xbar) = \hminusustar(\ww-\uu). \] Also, by \eqref{eq:thm:sup-subgrad-subset:1}, $\xbar\in\cldom{\hminusuext}$. Therefore, $\ww-\uu\in\asubdifhminusuext{\xbar}$, by Theorem~\ref{thm:fenchel-implies-subgrad}, implying $\ww\in\asubdifhext{\xbar}$ by Proposition~\ref{pr:fminusu-subgrad}. We conclude that $\asubdiffextsub{i}{\xbar}\subseteq\asubdifhext{\xbar}$ for all $i\in J$. Since $\asubdifhext{\xbar}$ is convex by Theorem~\ref{thm:asubdiff-is-convex}, we can apply Proposition~\ref{pr:conhull-prop}(\ref{pr:conhull-prop:aa}) to complete the proof of part~(\ref{thm:sup-subgrad-subset:a}). \pfpart{Part~(\ref{thm:sup-subgrad-subset:b}):} Let $\ww\in\basubdiffextsub{i}{\xbar}$ for some $i\in J$. Then $\fistar(\ww)\in\R$ and $\ww\in\asubdiffextsub{i}{\xbar}$, so the proof above shows that $\ww\in\asubdifhext{\xbar}$.
Furthermore, \eqref{eq:thm:sup-subgrad-subset:2} implies that $\hstar(\ww)=\fistar(\ww)$, and hence that $\hstar(\ww)\in\R$. Therefore, $\ww\in\basubdifhext{\xbar}$. Part~(\ref{thm:sup-subgrad-subset:b}) then follows after noting that $\basubdifhext{\xbar}$ is convex by Corollary~\ref{cor:ben-subdif-convex}. \qedhere \end{proof-parts} \end{proof} Analogous to \Cref{pr:std-max-fin-subgrad}, when $\indset$ is finite and each $f_i$ is convex and finite everywhere, the set given in Theorem~\ref{thm:sup-subgrad-subset}(\ref{thm:sup-subgrad-subset:b}) exactly captures all of $\hext$'s strict astral subgradients, as we show now: \begin{theorem} \label{thm:ben-subgrad-max} Let $f_i:\Rn\rightarrow\R$ be convex, for $i=1,\dotsc,m$, and let \[ h(\xx)=\max\Braces{f_1(\xx),\dotsc,f_m(\xx)} \] for $\xx\in\Rn$. Let $\xbar\in\extspace$, and let \[ J = \Braces{i \in \{1,\ldots,m\} : \exists \uu\in\Rn,\, \hminusuext(\xbar)=\fminususubext{i}(\xbar)\in\R }. \] Then \[ \basubdifhext{\xbar} = \conv\Parens{ \bigcup_{i\in J} \basubdiffextsub{i}{\xbar} }. \] \end{theorem} \begin{proof} It suffices to prove \begin{equation} \label{eq:thm:ben-subgrad-max:1} \basubdifhext{\xbar} \subseteq \conv\Parens{ \bigcup_{i\in J} \basubdiffextsub{i}{\xbar} } \end{equation} since the reverse inclusion follows from Theorem~\ref{thm:sup-subgrad-subset}(\ref{thm:sup-subgrad-subset:b}). Let $\uu\in\basubdifhext{\xbar}$, which we aim to show is in the set on the right-hand side of \eqref{eq:thm:ben-subgrad-max:1}. By a standard result regarding the conjugate of a maximum (\Cref{pr:prelim-conj-max}), there exist a subset $M\subseteq\{1,\ldots,m\}$, and numbers $\alpha_i\in[0,1]$ and vectors $\uu_i\in\Rn$, for $i\in M$, such that $ \sum_{i\in M} \alpha_i = 1 $, \begin{equation} \label{eq:thm:ben-subgrad-max:2} \uu = \sum_{i\in M} \alpha_i \uu_i, \end{equation} and \begin{equation} \label{eq:thm:ben-subgrad-max:8} \hstar(\uu) = \sum_{i\in M} \alpha_i \fistar(\uu_i).
\end{equation} Furthermore, without loss of generality, we can assume $\alpha_i>0$ for all $i\in M$ (since discarding elements with $\alpha_i=0$ does not affect these properties). Note also that \eqref{eq:thm:ben-subgrad-max:8} implies $\fistar(\uu_i)\in\R$ for $i\in M$, since $\hstar(\uu)\in\R$ (as $\uu\in\basubdifhext{\xbar}$) and since each $\fistar(\uu_i)>-\infty$ ($f_i$ being proper). The idea of the proof is to show that $\uu_i\in\basubdiffextsub{i}{\xbar}$ for $i\in M$, and also that $M\subseteq J$. Together, these will prove the theorem. To show all this, we define a function $g:\Rn\rightarrow\R$ whose extension will be computed in two different ways. Let \[ g(\xx) = \sum_{i\in M} \alpha_i \fminususubd{i}(\xx) \] for $\xx\in\Rn$. Note that, for $i\in M$, \begin{equation} \label{eq:thm:ben-subgrad-max:4} \fminususubdext{i}(\xbar) \geq \inf \fminususubd{i} = -\fistar(\uu_i) > -\infty, \end{equation} where the first inequality is by Proposition~\ref{pr:fext-min-exists}, the equality is by Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:d}), and the last inequality holds since $\fistar(\uu_i)\in\R$. Therefore, the values $\fminususubdext{i}(\xbar)$, for $i\in M$, are summable. By Theorem~\ref{thm:ext-sum-fcns-w-duality} and Proposition~\ref{pr:scal-mult-ext}, this implies \begin{equation} \label{eq:thm:ben-subgrad-max:5} \gext(\xbar) = \sum_{i\in M} \alpha_i \fminususubdext{i}(\xbar) \end{equation} (using $\dom{\fminususubd{i}}=\Rn$, for $i\in M$). We can re-express $g$ as \begin{eqnarray} g(\xx) &=& \sum_{i\in M} \alpha_i (f_i(\xx) - \xx\cdot\uu_i) \nonumber \\ &=& \sum_{i\in M} \alpha_i (f_i(\xx) - \xx\cdot\uu) \nonumber \\ &=& \sum_{i\in M} \alpha_i \fminususub{i}(\xx), \label{eq:thm:ben-subgrad-max:3} \end{eqnarray} for $\xx\in\Rn$, as follows from straightforward algebra using \eqref{eq:thm:ben-subgrad-max:2}. For $i\in M$, \begin{equation} \label{eq:thm:ben-subgrad-max:7} \fminususubext{i}(\xbar) \leq \hminusuext(\xbar) = -\hstar(\uu) < +\infty. \end{equation} The first inequality is because $f_i\leq h$, so $\fminususub{i}(\xx) = f_i(\xx) - \xx\cdot\uu \leq h(\xx) - \xx\cdot\uu = \hminusu(\xx)$ for $\xx\in\Rn$, and hence also $\fminususubext{i}\le \hminusuext$ by \Cref{pr:h:1}(\ref{pr:h:1:geq}).
The equality is by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:d}), since $\uu\in\asubdifhext{\xbar}$. The last inequality is because $\hstar(\uu)\in\R$. Thus, the values $\fminususubext{i}(\xbar)$, for $i\in M$, are summable, so we can again apply Theorem~\ref{thm:ext-sum-fcns-w-duality} and Proposition~\ref{pr:scal-mult-ext} to \eqref{eq:thm:ben-subgrad-max:3} yielding \begin{equation} \label{eq:thm:ben-subgrad-max:6} \gext(\xbar) = \sum_{i\in M} \alpha_i \fminususubext{i}(\xbar). \end{equation} Combining, we now have: \begin{eqnarray} -\hstar(\uu) &=& -\sum_{i\in M} \alpha_i \fistar(\uu_i) \nonumber \\ &\leq& \sum_{i\in M} \alpha_i \fminususubdext{i}(\xbar) \nonumber \\ &=& \gext(\xbar) \nonumber \\ &=& \sum_{i\in M} \alpha_i \fminususubext{i}(\xbar) \nonumber \\ &\leq& \hminusuext(\xbar) \nonumber \\ &=& -\hstar(\uu). \label{eq:thm:ben-subgrad-max:9} \end{eqnarray} The first equality is \eqref{eq:thm:ben-subgrad-max:8}. The first inequality is by \eqref{eq:thm:ben-subgrad-max:4}. The second and third equalities are by Eqs.~(\ref{eq:thm:ben-subgrad-max:5}) and~(\ref{eq:thm:ben-subgrad-max:6}). And the last inequality and last equality are both by \eqref{eq:thm:ben-subgrad-max:7}. Eqs.~(\ref{eq:thm:ben-subgrad-max:9}) and~(\ref{eq:thm:ben-subgrad-max:8}) imply that \[ \sum_{i\in M} \alpha_i \fminususubdext{i}(\xbar) = -\hstar(\uu) = -\sum_{i\in M} \alpha_i \fistar(\uu_i). \] In view of \eqref{eq:thm:ben-subgrad-max:4}, and since every $\alpha_i>0$, this implies that $\fminususubdext{i}(\xbar) = -\fistar(\uu_i)$, for $i\in M$. Therefore, $\uu_i\in\basubdiffextsub{i}{\xbar}$, by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:d}), since also $\fistar(\uu_i)\in\R$. Equation~\eqref{eq:thm:ben-subgrad-max:9} also implies that \[ \sum_{i\in M} \alpha_i \fminususubext{i}(\xbar) = \hminusuext(\xbar). \] In view of \eqref{eq:thm:ben-subgrad-max:7}, this implies $ \fminususubext{i}(\xbar) = \hminusuext(\xbar)$, for $i\in M$, and thus that $M\subseteq J$.
By the form of $\uu$ given in \eqref{eq:thm:ben-subgrad-max:2}, we can therefore conclude that \[ \uu \in \conv\Parens{ \bigcup_{i\in M} \basubdiffextsub{i}{\xbar} } \subseteq \conv\Parens{ \bigcup_{i\in J} \basubdiffextsub{i}{\xbar} }, \] completing the proof. \end{proof} \subsection{Composition with a nondecreasing function} \label{sec:subgrad-comp-inc-fcn} We next consider the astral subgradients of the extension of the composition of a convex, nondecreasing function $g:\R\rightarrow\Rext$ with another convex function $f:\Rn\rightarrow\Rext$. Since $f$ may take values of $\pm\infty$, for this to make sense, we compose $f$ with the extension $\eg$. Letting $h=\eg\circ f$, our aim then is to determine the astral subgradients of $\hext$. In standard calculus, assuming differentiability, the gradient of $h$ can be computed using the chain rule, $\gradh(\xx)=g'(f(\xx))\,\gradf(\xx)$, where $g'$ is the first derivative of~$g$. Analogous results hold for standard subgradients, as was seen in \Cref{pr:std-subgrad-comp-inc}. Correspondingly, for astral subgradients, if $v\in\asubdifgext{\fext(\xbar)}$ and $\uu\in\asubdiffext{\xbar}$, then we expect that the product $v \uu$ should be an astral subgradient of $\hext$ at $\xbar$. We will see that this is generally the case, as made precise and proved in the next theorem. There can, however, be other subgradients besides these, even when only considering standard subgradients at finite points, as seen in the next example. (This does not contradict \Cref{pr:std-subgrad-comp-inc} which assumes $f$ is finite everywhere.) \begin{example} \label{ex:uv-not-suf-for-subgrad-g-of-f} For $x\in\R$, let \[ f(x) = \begin{cases} x \ln x & \mbox{if $x\in [0,1]$,} \\ +\infty & \mbox{otherwise,} \end{cases} \] and let $g(x)=\max\{0,x\}$. Then $f$ and $g$ are closed, proper and convex, and $g$ is also nondecreasing. We have $\eg(-\infty)=0$, $\eg(x)=g(x)$ for $x\in\R$, and $\eg(+\infty)=+\infty$. 
Thus, $h=\eg\circ f$ is the indicator function $\indf{[0,1]}$. It can be checked that $\partial h(0) = (-\infty,0]$. We claim, however, that $\partial f(0) = \emptyset$. This is because, if $u\in\partial f(0)$, then we must have $\fstar(u) = f(0) + \fstar(u) = 0\cdot u = 0$ (\Cref{pr:stan-subgrad-equiv-props}\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:b}). However, this is impossible since $\fstar(u)=e^{u-1}>0$ for all $u\in\R$. Therefore, letting $x=0$ and $w=-1$, this shows that $w\in\partial h(x)$, and yet there does not exist $v\in\partial g(f(x))$ and $u\in\partial f(x)$ with $v u = w$. By \Cref{pr:ben-subgrad-at-x-in-rn}, the same statement holds for the corresponding (strict) astral subgradients of these functions' extensions; that is, $w\in\asubdifhext{x}=\basubdifhext{x}$, but there does not exist $v\in\asubdifgext{\fext(x)}$ and $u\in\asubdiffext{x}$ with $v u = w$. \end{example} Thus, in general, not all (astral or standard) subgradients of the composition $h$ can be computed using the most direct generalization of the chain rule. In this example, the exception occurred for a subgradient $w=-1$ at a point $x=0$ at the edge of $h$'s effective domain where, as seen previously, subgradients can ``wrap around'' in a way that is different from ordinary gradients (see Example~\ref{ex:standard-subgrad}). In terms of $f$ and $g$, the following facts seem plausibly relevant to such behavior at the edges: First, $g$ was minimized by $f(x)$, that is, $0\in\partial g(f(x))$. And second, $x=0$ was at the ``far end'' of $\dom{f}=[0,1]$, the effective domain of $f$, in the direction of $w$. In other words, \[ x w = \sup_{z\in\dom{f}} z w = \inddomfstar(w). \] For points in $\dom f$, this condition can be shown to be equivalent to $w\in\partial \inddomf(x)$. In fact, in astral space, the analogous forms of these conditions are generally sufficient for $\hext$ to have a particular astral subgradient. 
Specifically, if $\ww\in\asubdifinddomfext{\xbar}$ and if $\zero\in\asubdifgext{\fext(\xbar)}$, then $\ww\in\asubdifhext{\xbar}$. The next theorem proves the sufficiency of this condition as well as the more usual chain-rule condition discussed above. \begin{theorem} \label{thm:subgrad-comp-inc-fcn} Let $f:\Rn\rightarrow\Rext$ be convex, and let $g:\R\rightarrow\Rext$ be proper, convex, lower semicontinuous, nondecreasing, and not a constant. Assume there exists $\xing\in\Rn$ such that $f(\xing)\in\intdom{g}$. Let $h=\eg\circ f$, and let $\xbar\in\extspace$. \begin{letter-compact} \item \label{thm:subgrad-comp-inc-fcn:a1} If $\uu\in\asubdiffext{\xbar}$ and $v\in\asubdifgext{\fext(\xbar)}$ with $v>0$ then $v \uu \in\asubdifhext{\xbar}$. \item \label{thm:subgrad-comp-inc-fcn:a2} If $\ww\in\asubdifinddomfext{\xbar}$ and $0\in\asubdifgext{\fext(\xbar)}$ then $\ww \in\asubdifhext{\xbar}$. \end{letter-compact} The same holds if, in part~(\ref{thm:subgrad-comp-inc-fcn:a1}), the condition $v>0$ is replaced with $v\geq 0$. \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:subgrad-comp-inc-fcn:a1}):} Suppose $\uu\in\asubdiffext{\xbar}$ and $v\in\asubdifgext{\fext(\xbar)}$ with $v>0$. Let $\ww=v\uu$. Let $\homat=[\Idnn,\zerov{n}]$ (as in Eq.~\ref{eqn:homat-def}). By lower semicontinuity of $g$, we have $\eg(y)=g(y)$ for $y\in\R$, and by \Cref{pr:conv-inc:prop}(\ref{pr:conv-inc:infsup},\ref{pr:conv-inc:nonconst}), $\eg(-\infty)=\inf g$ and $\eg(+\infty)=+\infty$, so $\eg$ is nondecreasing. 
Therefore, similar to the proof of \Cref{thm:conj-compose-our-version}, $h$ can be written, for $\xx\in\Rn$, as \begin{align} \notag h(\xx) &= \inf\set{\eg(\ey) :\: \ey\in\eR, \ey\geq f(\xx) } \\ \notag &= \inf\set{ g(y) :\: y\in\R, y\geq f(\xx) } \\ \notag &= \inf\set{ g(y) + \indepif(\rpair{\xx}{y}) :\: y\in\R } \\ \notag &= \inf\set{ r(\zz) + \indepif(\zz) :\: \zz\in\R^{n+1}, \homat\zz=\xx } \\ \label{eq:thm:subgrad-comp-inc-fcn:1} &= \inf\Braces{ s(\zz) :\: \zz\in\R^{n+1}, \homat\zz=\xx }, \end{align} where $r:\R^{n+1}\rightarrow\Rext$ is the function \begin{equation} \label{eq:thm:subgrad-comp-inc-fcn:2} r(\zz) = g(\zz\cdot\rpair{\zero}{1}) = g(\EE\zz) \end{equation} for $\zz\in\R^{n+1}$, and where $\EE=\trans{\rpair{\zero}{1}}=\trans{\ee_{n+1}}$. Thus, for $\xx\in\Rn$ and $y\in\R$, $r(\rpair{\xx}{y})=g(y)$ and $\homat\rpair{\xx}{y}=\xx$. Also, $s=r+\indepif$. Once written in this form, $h$'s astral subgradients can largely be computed using rules developed in the preceding sections, as we show now. Since $\uu\in\asubdiffext{\xbar}$, by Proposition~\refequiv{pr:equiv-ast-subdif-defn-fext}{pr:equiv-ast-subdif-defn-fext:a}{pr:equiv-ast-subdif-defn-fext:c}, there exists $\zbar\in \clepi{f}$ such that $\homat \zbar = \xbar$, $\zbar\cdot \rpair{\zero}{1} = \fext(\xbar)$ (or equivalently, $\EE\zbar=\fext(\xbar)$, by \Cref{pr:trans-uu-xbar}), and $\zbar \cdot \rpair{\uu}{-1} = \fstar(\uu)$. Multiplying by $v$, this last fact implies \[ \zbar \cdot \rpair{v\uu}{-v} = v \fstar(\uu) = \indepifstar(\rpair{v\uu}{-v}) \] by Proposition~\ref{pr:support-epi-f-conjugate}. By Proposition~\ref{pr:subdif-ind-fcn} (and since $\zbar\in \clepi{f}$), this implies that \begin{equation} \label{eq:thm:subgrad-comp-inc-fcn:6} \rpair{\ww}{-v} = \rpair{v\uu}{-v} \in \asubdifindepifext{\zbar} \end{equation} (with $S$, $\xbar$ and $\uu$, as they appear in that proposition, set to $\epi{f}$, $\zbar$ and $\rpair{\ww}{-v}$). From \eqref{eq:thm:subgrad-comp-inc-fcn:2}, $r=g\EE$. 
Also, $(\colspace \EE) \cap \ri(\dom{g})\neq\emptyset$ since $\EE \rpair{\zero}{f(\xing)}=f(\xing)\in \ri(\dom{g})$. Therefore, we can apply Theorem~\ref{thm:subgrad-fA} (with $f$, as it appears in that theorem, set to $g$, and $\A=\EE$), yielding that \begin{equation} \label{eq:thm:subgrad-comp-inc-fcn:7} \rpair{\zero}{v}=\trans{\EE} v \in \asubdifrext{\zbar} \end{equation} since $v\in\asubdifgext{\fext(\xbar)}=\asubdifgext{\EE\zbar}$. Thus, \begin{equation} \label{eq:thm:subgrad-comp-inc-fcn:3} \trans{\homat} \ww = \rpair{\ww}{0} = \rpair{\zero}{v} + \rpair{\ww}{-v} \in \asubdifrext{\zbar} + \asubdifindepifext{\zbar} \subseteq \asubdifsext{\zbar}. \end{equation} The first inclusion is by Eqs.~(\ref{eq:thm:subgrad-comp-inc-fcn:6}) and~(\ref{eq:thm:subgrad-comp-inc-fcn:7}). The second inclusion follows from Theorem~\ref{thm:subgrad-sum-fcns} (with $f_1=r$ and $f_2=\indepif$), noting that $\ri(\dom{r})\cap\ri(\epi{f})\neq\emptyset$ by Lemma~\ref{lem:thm:conj-compose-our-version:1}. From \eqref{eq:thm:subgrad-comp-inc-fcn:1}, $h=\homat s$. Since $\homat\zbar=\xbar$ and by \eqref{eq:thm:subgrad-comp-inc-fcn:3}, Theorem~\ref{thm:subgrad-Af} (applied with $\A=\homat$ and $f=s$) now yields $\ww\in\asubdifhext{\xbar}$, as claimed. \pfpart{Part~(\ref{thm:subgrad-comp-inc-fcn:a2}):} Suppose $\ww\in\asubdifinddomfext{\xbar}$ and $0\in\asubdifgext{\fext(\xbar)}$. The existence of $\xing$ implies, by \Cref{thm:Gf-conv}, that \begin{equation} \label{eq:thm:subgrad-comp-inc-fcn:4} \hext(\xbar) = \gext(\fext(\xbar)) = -\gstar(0) \end{equation} where the second equality is from Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:e}) since $0\in\asubdifgext{\fext(\xbar)}$. In particular, this implies $\hext(\xbar)<+\infty$ since $g\not\equiv+\infty$ (since $f(\xing)\in\dom g$), so $\gstar>-\infty$. 
Also, by Proposition~\ref{pr:subdif-ind-fcn}, \begin{equation} \label{eq:thm:subgrad-comp-inc-fcn:5} \xbar\cdot\ww=\inddomfstar(\ww), \end{equation} since $\ww\in\asubdifinddomfext{\xbar}$. In particular, this implies $\xbar\cdot\ww>-\infty$ since $f(\xing)\in\dom{g}\subseteq\R$, so $\dom{f}\neq\emptyset$, and so $\inddomfstar>-\infty$. Thus, we have \[ -\hstar(\ww) \leq \hminuswext(\xbar) = \hext(\xbar) - \xbar\cdot\ww = -\gstar(0) - \inddomfstar(\ww) \leq -\hstar(\ww). \] The first inequality is by Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:d}). The first equality is by Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:e}) since the arguments above imply that $-\hext(\xbar)$ and $\xbar\cdot\ww$ are summable. The second equality is from Eqs.~(\ref{eq:thm:subgrad-comp-inc-fcn:4}) and~(\ref{eq:thm:subgrad-comp-inc-fcn:5}). And the last equality is from \Cref{thm:conj-compose-our-version} (with $\uu=\ww$ and $v=0$). Thus, $\hminuswext(\xbar) = -\hstar(\ww)$, so $\ww \in\asubdifhext{\xbar}$ by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:d}). \pfpart{Part~(\ref{thm:subgrad-comp-inc-fcn:a1}) for $v\geq 0$:} Suppose $\uu\in\asubdiffext{\xbar}$ and $0\in\asubdifgext{\fext(\xbar)}$. Then $\xbar\in\cldom{f}$ (by Propositions~\ref{pr:subgrad-imp-in-cldom}(\ref{pr:subgrad-imp-in-cldom:a}) and~\ref{pr:h:1}(\ref{pr:h:1c})). In particular, this implies $f\not\equiv+\infty$, so $\inddomfstar(\zero) = 0$ from \eqref{eq:e:2}. Thus, $\xbar\cdot\zero = \inddomfstar(\zero)$ and therefore $\zero\in\asubdifinddomfext{\xbar}$ by Proposition~\ref{pr:subdif-ind-fcn}. From part~(\ref{thm:subgrad-comp-inc-fcn:a2}), it now follows that $\zero\in \asubdifhext{\xbar}$, as claimed. 
\qedhere \end{proof-parts} \end{proof} Compared to the familiar chain rule of calculus, condition~(\ref{thm:subgrad-comp-inc-fcn:a2}) of Theorem~\ref{thm:subgrad-comp-inc-fcn} may seem odd, as is the apparent need for two seemingly unconnected conditions rather than a single unified rule. In fact, as we now explain, condition~(\ref{thm:subgrad-comp-inc-fcn:a2}) can be expressed in a form that better reveals its connection to the chain rule, while also unifying the two conditions into a single rule. As discussed earlier, the chain rule states that $\gradh(\xx)=g'(f(\xx))\,\gradf(\xx)$ (where $h$ is the composition of $g$ with $f$, and $g'$ is the first derivative of $g$). Letting $v=g'(f(\xx))$, we can re-state this as \[ \nabla h(\xx) = v \nabla f(\xx) = \nabla (v f)(\xx). \] In an analogous way, we can re-express the conditions of Theorem~\ref{thm:subgrad-comp-inc-fcn} using the notation $\sfprod{\lambda}{f}$ from Eqs.~(\ref{eq:sfprod-defn}) and~(\ref{eq:sfprod-identity}). In particular, if $v>0$ and $\ww=v \uu$ then $\uu\in\asubdiffext{\xbar}$ if and only if $\ww=v \uu \in \asubdif{\vfext}{\xbar}= \asubdifsfprodext{v}{f}{\xbar}$ (by Proposition~\ref{pr:subgrad-scal-mult}). Thus, condition~(\ref{thm:subgrad-comp-inc-fcn:a1}) of Theorem~\ref{thm:subgrad-comp-inc-fcn} says that if $v\in\asubdifgext{\fext(\xbar)}$ and $\ww\in\asubdifsfprodext{v}{f}{\xbar}$ then $\ww\in\asubdifhext{\xbar}$. In fact, condition~(\ref{thm:subgrad-comp-inc-fcn:a2}) can be put into the same form: Since $\sfprod{0}{f}=\inddomf$, this condition can be re-expressed as stating that if $0\in\asubdifgext{\fext(\xbar)}$ and $\ww\in\asubdifsfprodext{0}{f}{\xbar}$ then $\ww\in\asubdifhext{\xbar}$. In other words, the two conditions, when expressed in this way, have exactly the same form, one for when $v>0$ and the other for $v=0$.
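In the purely standard, finite-dimensional setting, the chain-rule form just discussed can be checked on a concrete instance. The following sketch (all helper names and the grids are ours, for illustration only) takes $f(x)=\abs{x}$ and $g(y)=\max\{0,y\}$, so that $h=g\circ f=\abs{x}$, and verifies on a grid that every product $vu$ with $v\in\partial g(f(0))=[0,1]$ and $u\in\partial f(0)=[-1,1]$ satisfies the subgradient inequality for $h$ at $0$:

```python
# Subgradient inequality at x = 0: h(z) >= h(0) + w*(z - 0), i.e. |z| >= w*z,
# checked over a finite grid of test points z.
def is_subgrad_h(w, grid):
    return all(abs(z) >= w * z for z in grid)

grid = [i / 100 for i in range(-300, 301)]

# Standard subdifferentials: partial f(0) = [-1, 1], partial g(f(0)) = [0, 1].
us = [i / 10 for i in range(-10, 11)]
vs = [i / 10 for i in range(0, 11)]

# Every product v*u is a subgradient of h at 0, as the chain-rule form predicts...
assert all(is_subgrad_h(v * u, grid) for v in vs for u in us)
# ...while a value outside partial h(0) = [-1, 1] fails the inequality.
assert not is_subgrad_h(1.5, grid)
```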
Summarizing, we can re-express Theorem~\ref{thm:subgrad-comp-inc-fcn} as stated in the next corollary: \begin{corollary} \label{cor:subgrad-comp-inc-fcn} Let $f:\Rn\rightarrow\Rext$ be convex, and let $g:\R\rightarrow\Rext$ be proper, convex, lower semicontinuous, nondecreasing, and not a constant. Assume there exists $\xing\in\Rn$ such that $f(\xing)\in\intdom{g}$. Let $h=\eg\circ f$, and $\xbar\in\extspace$. Then \begin{equation*} \asubdifhext{\xbar} \;\supseteq\; \bigcup\;\BigBraces{\asubdifsfprodext{v}{f}{\xbar} :\: v\in \asubdifgext{\fext(\xbar)},\, v\geq 0 }. \end{equation*} \end{corollary} \begin{proof} Suppose $v\in \asubdifgext{\fext(\xbar)}$ with $v\geq 0$, and that $\ww\in\asubdifsfprodext{v}{f}{\xbar}$. We aim to show $\ww\in\asubdifhext{\xbar}$. If $v=0$ then $\ww\in\asubdifhext{\xbar}$ by Theorem~\ref{thm:subgrad-comp-inc-fcn}(\ref{thm:subgrad-comp-inc-fcn:a2}) since $\sfprod{0}{f}=\inddomf$ (Eq.~\ref{eq:sfprod-identity}). Otherwise, suppose $v>0$, implying $\sfprod{v}{f} = v f$. Let $\uu=\ww/v$. Then $v \uu = \ww \in \asubdif{\vfext}{\xbar}$, implying $\uu\in\asubdiffext{\xbar}$ by Proposition~\ref{pr:subgrad-scal-mult}. Thus, $\ww\in\asubdifhext{\xbar}$ by Theorem~\ref{thm:subgrad-comp-inc-fcn}(\ref{thm:subgrad-comp-inc-fcn:a1}). \end{proof} Theorem~\ref{thm:subgrad-comp-inc-fcn}(\ref{thm:subgrad-comp-inc-fcn:a1}), as well as Corollary~\ref{cor:subgrad-comp-inc-fcn}, explicitly require $v\geq 0$. Since $g$ is nondecreasing, we expect its (astral) subgradients to be nonnegative, so this requirement may seem unnecessary. However, similar to what has been seen before, negative astral subgradients are possible at $-\infty$, which is at the edge of the function's effective domain. For such negative astral subgradients $v$, Theorem~\ref{thm:subgrad-comp-inc-fcn}(\ref{thm:subgrad-comp-inc-fcn:a1}) does not hold in general, as the next example shows. 
\begin{example} Let $f:\R\rightarrow\R$ be the function $f(x)=x$ for $x\in\R$, and let $g=f$, implying that also $h=f$ (where $h=\eg\circ f$). Let $\barx=-\infty$ so that $\fext(\barx)=-\infty$. Then it can be checked using Proposition~\ref{pr:subdif-in-1d} that \[ \asubdiffext{\barx} = \asubdifgext{\fext(\barx)} = \asubdifhext{\barx} =(-\infty,1]. \] Thus, $u=-2\in \asubdiffext{\barx}$ and $v=-1\in \asubdifgext{\fext(\barx)}$, but $uv=2\not\in\asubdifhext{\barx}$. \end{example} When working with strict astral subgradients, the explicit requirement on the sign of $v$ is not needed since the strict astral subgradients of $\gext$ can never be negative, even at $-\infty$. \begin{proposition} \label{pr:strict-subgrad-inc-not-neg} Let $g:\R\rightarrow\Rext$ be nondecreasing. Suppose $v\in\basubdifgext{\barx}$ for some $\barx\in\Rext$. Then $v\geq 0$. \end{proposition} \begin{proof} We have $\gstar(v)\in\R$ since $v\in\basubdifgext{\barx}$. This implies $g\not\equiv+\infty$ (since otherwise we would have $\gstar\equiv-\infty$). Therefore, there exists $z\in\dom{g}$. Suppose, by way of contradiction, that $v<0$. Then \[ \gstar(v) = \sup_{x\in\R} [ xv - g(x) ] \geq \sup_{\scriptontop{x\in\R:}{x\leq z}} [ xv - g(x) ] \geq \sup_{\scriptontop{x\in\R:}{x\leq z}} [ xv - g(z) ] = +\infty, \] where the second inequality is because $g$ is nondecreasing, and the last equality is because $v<0$ and $g(z)<+\infty$. This contradicts that $\gstar(v)\in\R$. \end{proof} As we show next, the two sufficient conditions given in Theorem~\ref{thm:subgrad-comp-inc-fcn} account for all strict astral subgradients of the composition $h$'s extension. Moreover, the explicit requirement that $v$ not be negative can now be eliminated, as just discussed. \begin{theorem} \label{thm:strict-subgrad-comp-inc-fcn} Let $f:\Rn\rightarrow\Rext$ be convex, and let $g:\R\rightarrow\Rext$ be proper, convex, lower semicontinuous, nondecreasing, and not a constant. 
Assume there exists $\xing\in\Rn$ such that $f(\xing)\in\intdom{g}$. Let $h=\eg\circ f$, and let $\xbar\in\extspace$ and $\ww\in\Rn$. Then $\ww\in\basubdifhext{\xbar}$ if and only if either of the following hold: \begin{letter-compact} \item \label{thm:strict-subgrad-comp-inc-fcn:a} there exists $\uu\in\basubdiffext{\xbar}$ and $v\in\basubdifgext{\fext(\xbar)}$ such that $\ww=v \uu$; or \item \label{thm:strict-subgrad-comp-inc-fcn:b} $\ww\in\basubdifinddomfext{\xbar}$ and $0\in\basubdifgext{\fext(\xbar)}$. \end{letter-compact} The same holds if, in condition~(\ref{thm:strict-subgrad-comp-inc-fcn:a}), we further require that $v>0$. \end{theorem} \begin{proof} Note first that since $f(\xing)\in \dom{g} \subseteq \R$, $f\not\equiv+\infty$, $g\not\equiv+\infty$, and $h\not\equiv+\infty$. \begin{proof-parts} \pfpart{``If'' ($\Leftarrow$):} Suppose $\uu\in\basubdiffext{\xbar}$ and $v\in\basubdifgext{\fext(\xbar)}$ with $\ww=v \uu$. Then $v\geq 0$ by Proposition~\ref{pr:strict-subgrad-inc-not-neg} so $\ww\in\asubdifhext{\xbar}$ by Theorem~\ref{thm:subgrad-comp-inc-fcn}(\ref{thm:subgrad-comp-inc-fcn:a1}), and $\hstar(\ww)>-\infty$ since $h\not\equiv+\infty$. If $v>0$ then $\indepifstar(\rpair{\ww}{-v})=v \fstar(\uu) <+\infty$, by Proposition~\ref{pr:support-epi-f-conjugate}. And if $v=0$ (implying $\ww=\zero$) then $\indepifstar(\rpair{\zero}{0})=0$ (from Eq.~\ref{eq:e:2} and since $f\not\equiv+\infty$). Thus, in either case, $\indepifstar(\rpair{\ww}{-v}) <+\infty$, implying $\hstar(\ww)\leq \gstar(v) + \indepifstar(\rpair{\ww}{-v})<+\infty$ by \Cref{thm:conj-compose-our-version}. Therefore, $\ww\in\basubdifhext{\xbar}$. Suppose next that $\ww\in\basubdifinddomfext{\xbar}$ and $0\in\basubdifgext{\fext(\xbar)}$. By Theorem~\ref{thm:subgrad-comp-inc-fcn}(\ref{thm:subgrad-comp-inc-fcn:a2}), $\ww\in\asubdifhext{\xbar}$. 
Since $h\not\equiv+\infty$ and by \Cref{thm:conj-compose-our-version} and Proposition~\ref{pr:support-epi-f-conjugate}, \[ -\infty < \hstar(\ww) \leq \gstar(0)+\indepifstar(\ww,0) = \gstar(0)+\inddomfstar(\ww) < +\infty. \] Thus, $\ww\in\basubdifhext{\xbar}$. \pfpart{``Only if'' ($\Rightarrow$):} For the converse, suppose $\ww\in\basubdifhext{\xbar}$. We follow the same definitions for $\homat$, $r$, $\EE$, and $s$ as given in the proof of Theorem~\ref{thm:subgrad-comp-inc-fcn}(\ref{thm:subgrad-comp-inc-fcn:a1}). We will first want to derive a point in $\extspacnp$ at which $\sext$ has $\rpair{\ww}{0}$ as an astral subgradient (Claim~\ref{cl:thm:subgrad-comp-inc-fcn:3}). As will be seen, this can then be used to derive astral subgradients with the properties stated in the theorem. As usual, we write $\hminusw$ and $\sminuswz$ for the linear shifts of $h$ by $\ww$, and of $s$ by $\rpair{\ww}{0}$. Note that if $\rpair{\xx}{y}\in\epi{f}$, then \begin{eqnarray} \sminuswz(\xx,y) &=& r(\xx,y) + \indepif(\xx,y) - \rpair{\xx}{y}\cdot\rpair{\ww}{0} \nonumber \\ &=& g(y) - \xx\cdot\ww. \label{eq:thm:subgrad-comp-inc-fcn:8} \end{eqnarray} For any point $\xx\in\dom{f}$, the next claim shows there must exist a point $\zbar\in\clepif$ with ``similar'' properties to $\xx$, in the stated senses: \begin{claimpx} \label{cl:thm:subgrad-comp-inc-fcn:1} Let $\xx\in\dom{f}$. Then there exists $\zbar\in\clepif$ such that $\homat\zbar=\xx$, $\zbar\cdot\rpair{\zero}{1}=f(\xx)$, and $\sminuswzext(\zbar)\leq\hminusw(\xx)$. \end{claimpx} \begin{proofx} If $f(\xx)\in\R$, then we can simply let $\zbar=\rpair{\xx}{f(\xx)}$, which is in $\epi f$, and satisfies $\homat\zbar=\xx$ and $\zbar\cdot\rpair{\zero}{1}=f(\xx)$. Furthermore, \[ \sminuswzext(\zbar) \leq \sminuswz(\zbar) = g(f(\xx)) - \xx\cdot\ww = \hminusw(\xx) \] with the inequality from Proposition~\ref{pr:h:1}(\ref{pr:h:1a}), and the first equality from \eqref{eq:thm:subgrad-comp-inc-fcn:8}. Otherwise, $f(\xx)=-\infty$. 
In this case, we let \[ \zbar = \lim \rpair{\xx}{-t} = \lim \Bracks{ t \rpair{\zero}{-1} + \rpair{\xx}{0} } = \limray{(\rpair{\zero}{-1})} \plusl \rpair{\xx}{0}, \] with the third equality following from Proposition~\ref{pr:i:7}(\ref{pr:i:7f}). Then $\zbar\in\clepif$ since $\rpair{\xx}{-t}\in\epi f$ for all $t$. Also, by \Cref{thm:linear:cont}(\ref{thm:linear:cont:b}), $\homat\zbar=\lim \homat \rpair{\xx}{-t}=\xx$, and by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}), \[ \zbar\cdot\rpair{\zero}{1} = \lim \Bracks{ \rpair{\xx}{-t}\cdot\rpair{\zero}{1} } = -\infty = f(\xx). \] Finally, \begin{align*} \sminuswzext(\zbar) &\leq \liminf \sminuswz(\xx,-t) \\ &= \liminf \Bracks{g(-t) - \xx\cdot\ww} \\ &= (\inf g) - \xx\cdot\ww \\ &= \eg(f(\xx)) - \xx\cdot\ww = \hminusw(\xx). \end{align*} The inequality is from the definition of an extension. The first equality is by \eqref{eq:thm:subgrad-comp-inc-fcn:8}; the second is because $g$ is nondecreasing; and the third is because $f(\xx)=-\infty$ and $\eg(-\infty)=\inf g$ by \Cref{pr:conv-inc:prop}(\ref{pr:conv-inc:infsup}). \end{proofx} \begin{claimpx} \label{cl:thm:subgrad-comp-inc-fcn:2} $\ri(\dom \hminusw)\cap\ri(\dom f)\neq\emptyset$. \end{claimpx} \begin{proofx} We will prove that \begin{equation} \label{eq:thm:subgrad-comp-inc-fcn:9} \ri(\dom h) \subseteq \ri(\dom f). \end{equation} This is sufficient to prove the claim, since, by Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:a}), $\dom h = \dom{\hminusw}$, which also must be convex and nonempty (since $h\not\equiv+\infty$), thus implying that $\emptyset\neq \ri(\dom \hminusw) \subseteq \ri(\dom f)$ (\Cref{pr:ri-props}\ref{pr:ri-props:roc-thm6.2b}). Let $\alpha=\sup(\dom g)$. Since $g$ is proper and nondecreasing, we must have $\alpha>-\infty$ and $\dom g$ is either $(-\infty,\alpha)$ or $(-\infty,\alpha]$. If $\alpha=+\infty$, then $g$ is finite everywhere. As a result, $\eg(\ey)=+\infty$ only when $\ey=+\infty$, and so $h(\xx)<+\infty$ if and only if $f(\xx)<+\infty$. 
Thus, $\dom h = \dom f$, implying \eqref{eq:thm:subgrad-comp-inc-fcn:9}. It remains to consider the case $\alpha\in\R$. Then $\dom g$ is either $(-\infty,\alpha)$ or $(-\infty,\alpha]$, and so $\dom h$ is either \begin{equation} \label{eq:thm:subgrad-comp-inc-fcn:10} \{\xx\in\Rn : f(\xx)<\alpha\} \textrm{~~or~~} \{\xx\in\Rn : f(\xx)\leq \alpha\}. \end{equation} Also, $f(\xing)\in\intdom{g}=(-\infty,\alpha)$, implying $\inf f \leq f(\xing) < \alpha$. It then follows, by \Cref{roc:thm7.6-mod}, that the sets in \eqref{eq:thm:subgrad-comp-inc-fcn:10} have the same relative interior, namely, \[ \ri(\dom h) = \Braces{\xx\in\ri(\dom{f}) : f(\xx) < \alpha}. \] This implies \eqref{eq:thm:subgrad-comp-inc-fcn:9}, completing the proof. \end{proofx} By Theorem~\ref{thm:seq-for-multi-ext}, and in light of Claim~\ref{cl:thm:subgrad-comp-inc-fcn:2}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$, with $f(\xx_t)\rightarrow\fext(\xbar)$ and $\hminusw(\xx_t)\rightarrow\hminuswext(\xbar)$. Also, since $h\not\equiv+\infty$ and since $\ww\in\asubdifhext{\xbar}$, $\hminuswext(\xbar) = -\hstar(\ww) < +\infty$ by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:d}). Therefore, $\hminusw(\xx_t)$ can be $+\infty$ for at most finitely many elements; by discarding these, we can assume henceforth that $\hminusw(\xx_t)<+\infty$ for all $t$, which implies $h(\xx_t)<+\infty$ and $f(\xx_t)<+\infty$ for all $t$ as well (since $\eg(+\infty)=+\infty$). For each $t$, let $\zbar_t\in\clepif$ be as in Claim~\ref{cl:thm:subgrad-comp-inc-fcn:1} applied to $\xx_t$ (so that $\homat\zbar_t=\xx_t$, $\zbar_t\cdot\rpair{\zero}{1}=f(\xx_t)$, and $\sminuswzext(\zbar_t)\leq\hminusw(\xx_t)$). By sequential compactness, the resulting sequence $\seq{\zbar_t}$ must have a convergent subsequence; by discarding all other elements, we can assume the entire sequence converges to some point $\zbar\in\extspacnp$.
Since each $\zbar_t$ is in the closed set $\clepif$, $\zbar$ is as well. Also, \begin{equation} \label{eq:thm:subgrad-comp-inc-fcn:14} \homat\zbar = \lim \homat\zbar_t = \lim \xx_t = \xbar \end{equation} (using \Cref{thm:linear:cont}\ref{thm:linear:cont:b}). Likewise, \begin{equation} \label{eq:thm:subgrad-comp-inc-fcn:13} \zbar\cdot\rpair{\zero}{1} = \lim \Bracks{ \zbar_t\cdot\rpair{\zero}{1} } = \lim f(\xx_t) = \fext(\xbar) \end{equation} (using Theorem~\ref{thm:i:1}\ref{thm:i:1c}). We next claim $\rpair{\ww}{0}$ is an astral subgradient of $\sext$ at $\zbar$: \begin{claimpx} \label{cl:thm:subgrad-comp-inc-fcn:3} $\rpair{\ww}{0} \in \basubdifsext{\zbar}$ \end{claimpx} \begin{proofx} We have \begin{align} -\sstar(\rpair{\ww}{0}) \leq \sminuswzext(\zbar) &\leq \liminf \sminuswzext(\zbar_t) \nonumber \\ &\leq \liminf \hminusw(\xx_t) = \hminuswext(\xbar) = -\hstar(\ww) = -\sstar(\rpair{\ww}{0}). \label{eq:thm:subgrad-comp-inc-fcn:11} \end{align} The first inequality is by Proposition~\ref{pr:fminusu-props}(\ref{pr:fminusu-props:d}). The second inequality is by lower semicontinuity of $\sminuswzext$, since $\zbar_t\rightarrow\zbar$. The third inequality is by one of the defining properties of $\zbar_t$. The second equality is by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:d}) since $\ww\in\asubdifhext{\xbar}$. For the final equality, we use the fact that $h=\homat s$, by \eqref{eq:thm:subgrad-comp-inc-fcn:1}, which implies \begin{equation} \label{eq:thm:subgrad-comp-inc-fcn:12} \hstar(\ww) = \sstar(\trans{\homat} \ww) = \sstar(\rpair{\ww}{0}) \end{equation} (by \Cref{roc:thm16.3:Af}). \eqref{eq:thm:subgrad-comp-inc-fcn:11} implies $\sminuswzext(\zbar) = -\sstar(\rpair{\ww}{0})$. 
Together with \eqref{eq:thm:subgrad-comp-inc-fcn:12} (and since $\hstar(\ww)\in\R$), this implies $\rpair{\ww}{0} \in \basubdifsext{\zbar}$ by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:d}). \end{proofx} \begin{claimpx} \label{cl:thm:subgrad-comp-inc-fcn:4} There exists $v\in\R$ such that $v\in\basubdifgext{\fext(\xbar)}$ and $\rpair{\ww}{-v}\in\basubdifindepifext{\zbar}$. \end{claimpx} \begin{proofx} In light of Claim~\ref{cl:thm:subgrad-comp-inc-fcn:3}, we can apply Theorem~\ref{thm:subgrad-sum-fcns}(\ref{thm:subgrad-sum-fcns:b}) (with $f_1=r$, $f_2=\indepif$, and $\xbar$, as it appears in that theorem, set to $\zbar$), which implies there exists $\rpair{\ww'}{v'}\in\basubdifrext{\zbar}$ such that $\rpair{\ww-\ww'}{-v'}\in\basubdifindepifext{\zbar}$ (so that the sum of these astral subgradients is $\rpair{\ww}{0}$, which is in $\basubdifsext{\zbar}$). Consequently, since $g=r\EE$, we can apply Theorem~\ref{thm:subgrad-fA}(\ref{thm:subgrad-fA:b}) (noting that $(\colspace \EE)\cap \ri(\dom{g})\neq\emptyset$ since $\EE \rpair{\zero}{f(\xing)}=f(\xing)\in \ri(\dom{g})$), yielding that there exists $v\in\basubdifgext{\EE\zbar}$ such that $\rpair{\ww'}{v'}=\trans{\EE}v = \rpair{\zero}{v}$. This implies $v'=v$ and $\ww'=\zero$. Also, $\EE\zbar=\fext(\xbar)$ by \eqref{eq:thm:subgrad-comp-inc-fcn:13}. Thus, $v\in\basubdifgext{\fext(\xbar)}$ and $\rpair{\ww}{-v}\in\basubdifindepifext{\zbar}$. \end{proofx} Let $v$ be as in Claim~\ref{cl:thm:subgrad-comp-inc-fcn:4}. Then $\rpair{\ww}{-v}\in\basubdifindepifext{\zbar}$, implying in particular that $\indepifstar(\rpair{\ww}{-v})\in\R$. By Proposition~\ref{pr:support-epi-f-conjugate}, it follows that $v\geq 0$. Also, by Proposition~\ref{pr:subdif-ind-fcn}, \begin{equation} \label{eq:thm:subgrad-comp-inc-fcn:15} \zbar\cdot\rpair{\ww}{-v} = \indepifstar(\rpair{\ww}{-v}). 
\end{equation} If $v=0$ then by Proposition~\ref{pr:support-epi-f-conjugate}, $\indepifstar(\rpair{\ww}{0})=\inddomfstar(\ww)$, which is in $\R$. Also, \[ \zbar\cdot\rpair{\ww}{0} = \zbar\cdot(\trans{\homat}\ww) = (\homat\zbar)\cdot\ww = \xbar\cdot\ww \] with the second equality from Theorem~\ref{thm:mat-mult-def}, and the last from \eqref{eq:thm:subgrad-comp-inc-fcn:14}. Thus, $\xbar\cdot\ww=\inddomfstar(\ww)$. Furthermore, $\xbar\in\cldom{f}$ since $\xx_t\in\dom f$ and $\xx_t\rightarrow\xbar$. Therefore, $\ww\in\basubdifinddomfext{\xbar}$ by Proposition~\ref{pr:subdif-ind-fcn}. Otherwise, $v>0$. In this case, we let $\uu=\ww/v$. Proposition~\ref{pr:support-epi-f-conjugate} then yields that $\indepifstar(\rpair{\ww}{-v}) = v \fstar(\uu)$, and so that $\fstar(\uu)\in\R$. Combining with \eqref{eq:thm:subgrad-comp-inc-fcn:15} then yields \[ \zbar\cdot\rpair{\ww}{-v} = v \fstar(\uu), \] and so, dividing both sides by $v$, \[ \zbar\cdot\rpair{\uu}{-1} = \fstar(\uu). \] Combined with Eqs.~(\ref{eq:thm:subgrad-comp-inc-fcn:14}) and~(\ref{eq:thm:subgrad-comp-inc-fcn:13}), it now follows by Proposition~\ref{pr:equiv-ast-subdif-defn}(\ref{pr:equiv-ast-subdif-defn:a},\ref{pr:equiv-ast-subdif-defn:c}) that $\uu\in\basubdiffext{\xbar}$. \qedhere \end{proof-parts} \end{proof} As in Corollary~\ref{cor:subgrad-comp-inc-fcn}, Theorem~\ref{thm:strict-subgrad-comp-inc-fcn} can be re-stated more succinctly as follows: \begin{corollary} \label{cor:strict-subgrad-comp-inc-fcn} Let $f:\Rn\rightarrow\Rext$ be convex, and let $g:\R\rightarrow\Rext$ be proper, convex, lower semicontinuous, nondecreasing, and not a constant. Assume there exists $\xing\in\Rn$ such that $f(\xing)\in\intdom{g}$. Let $h=\eg\circ f$, and let $\xbar\in\extspace$. Then \begin{equation*} \basubdifhext{\xbar} \;=\; \bigcup\;\BigBraces{\basubdifsfprodext{v}{f}{\xbar} :\: v\in \basubdifgext{\fext(\xbar)} }. 
\end{equation*} \end{corollary} \begin{proof} By Proposition~\ref{pr:strict-subgrad-inc-not-neg}, if $v\in\basubdifgext{\fext(\xbar)}$, then $v\geq 0$. Thus, the union that appears in the corollary is the same if it is further required that $v\geq 0$. Let $\ww\in\Rn$ and $v\geq 0$. If $v>0$ then $\sfprod{v}{f}=v f$ so $\ww\in\basubdifsfprodext{v}{f}{\xbar}$ if and only if $\ww/v \in \basubdiffext{\xbar}$ (by Proposition~\ref{pr:subgrad-scal-mult}). Thus, condition~(\ref{thm:strict-subgrad-comp-inc-fcn:a}) of Theorem~\ref{thm:strict-subgrad-comp-inc-fcn}, modified to also require $v>0$, holds if and only if $\ww\in\basubdifsfprodext{v}{f}{\xbar}$ and $v\in\basubdifgext{\fext(\xbar)}$ for some $v>0$. Also, since $\sfprod{0}{f}=\inddomf$, condition~(\ref{thm:strict-subgrad-comp-inc-fcn:b}) of Theorem~\ref{thm:strict-subgrad-comp-inc-fcn} holds if and only if $\ww\in\basubdifsfprodext{0}{f}{\xbar}$ and $0\in\basubdifgext{\fext(\xbar)}$. Taken together, for $v\geq 0$, this shows that $\ww\in\basubdifsfprodext{v}{f}{\xbar}$ and $v\in\basubdifgext{\fext(\xbar)}$ if and only if either of the conditions of Theorem~\ref{thm:strict-subgrad-comp-inc-fcn} hold. By that same theorem, either condition holds if and only if $\ww\in\basubdifhext{\xbar}$, proving the corollary. \end{proof} Theorem~\ref{thm:strict-subgrad-comp-inc-fcn} does not hold in general for astral subgradients that are not necessarily strict. Said differently, the conditions given in Theorem~\ref{thm:subgrad-comp-inc-fcn} do not account for all of $\hext$'s astral subgradients, as shown in the next example. \begin{example} \label{ex:subgrad-comp-inc-fcn-not-nec} For $\xx\in\R^2$, let $f(\xx)=f(x_1,x_2)=-x_2$, and let \[ g(x) = \left\{ \begin{array}{cl} - \ln(-x) & \mbox{if $x<0$} \\ +\infty & \mbox{otherwise} \end{array} \right. \] for $x\in\R$. Then $h=\eg\circ f$ is the function \[ h(\xx) = h(x_1,x_2) = \left\{ \begin{array}{cl} - \ln x_2 & \mbox{if $x_2>0$} \\ +\infty & \mbox{otherwise} \end{array} \right. 
\] for $\xx\in\R^2$. All these functions are closed, proper and convex, and $g$ is nondecreasing. Let $\xbar=\limray{\ee_1}$, and let $\ww=\ee_1$. Then $\fext(\xbar)=\xbar\cdot(-\ee_2)=0$. We claim that $\ww\in\asubdifhext{\xbar}$. To see this, let $\xx_t=\trans{[t,1/t]}$, which converges to $\xbar$, and furthermore, \begin{equation} \label{eq:ex:subgrad-comp-inc-fcn-not-nec:1} \xx_t\cdot\ww - h(\xx_t) = t - \ln t \rightarrow +\infty. \end{equation} The expression on the left is at most $\hstar(\ww)$ for all $t$ (by \eqref{eq:fstar-def}); therefore, $\hstar(\ww)=+\infty$. \eqref{eq:ex:subgrad-comp-inc-fcn-not-nec:1} then shows that $\ww\in\asubdifhext{\xbar}$ by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:b}). Thus, $\ww$ is an astral subgradient of $\hext$ at $\xbar$, but not a strict one. On the other hand, $0\not\in\dom g$ so $\partial g(0)=\emptyset$ by \Cref{roc:thm23.4}(\ref{roc:thm23.4:b}). Therefore, since $g$ is closed, $\asubdifgext{0}=\partial g(0)$ by Proposition~\ref{pr:asubdiffext-at-x-in-rn}(\ref{pr:asubdiffext-at-x-in-rn:b}). That is, $\asubdifgext{\fext(\xbar)}=\emptyset$. Thus, $\ww\in\asubdifhext{\xbar}$ but there nevertheless does not exist $\uu\in\asubdiffext{\xbar}$ and $v\in\asubdifgext{\fext(\xbar)}$ with $\ww=v \uu$, and it also is not the case that both $\ww\in\asubdifinddomfext{\xbar}$ and $0\in\asubdifgext{\fext(\xbar)}$. \end{example} As discussed earlier and seen specifically in Example~\ref{ex:uv-not-suf-for-subgrad-g-of-f}, Theorem~\ref{thm:strict-subgrad-comp-inc-fcn} does not hold in general if condition~(\ref{thm:strict-subgrad-comp-inc-fcn:b}) is omitted, leaving only the more immediate generalization of the chain rule given in condition~(\ref{thm:strict-subgrad-comp-inc-fcn:a}). Nevertheless, as a corollary of Theorem~\ref{thm:strict-subgrad-comp-inc-fcn}, we can identify various cases in which condition~(\ref{thm:strict-subgrad-comp-inc-fcn:b}) can be omitted. 
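The divergence at the heart of Example~\ref{ex:subgrad-comp-inc-fcn-not-nec} can be confirmed numerically. The following sketch (the helper names are ours) evaluates $\xx_t\cdot\ww - h(\xx_t) = t - \ln t$ along the sequence $\xx_t=\trans{[t,1/t]}$ and checks that it increases without bound, so that $\hstar(\ww)=+\infty$ and $\ww$ is not a strict astral subgradient:

```python
import math

# h(x1, x2) = -ln(x2) for x2 > 0, +infinity otherwise.
def h(x1, x2):
    return -math.log(x2) if x2 > 0 else math.inf

# x_t . w - h(x_t) with x_t = (t, 1/t) and w = e_1 = (1, 0);
# this simplifies to t - ln(t).
def gap(t):
    return t * 1.0 + (1.0 / t) * 0.0 - h(t, 1.0 / t)

vals = [gap(float(t)) for t in [1, 10, 100, 1000, 10**6]]
assert all(a < b for a, b in zip(vals, vals[1:]))  # strictly increasing
assert vals[-1] > 10**5                            # exceeds any fixed bound eventually
```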
\begin{corollary} \label{cor:suff-cond-subgrad-comp-chain-rule} Let $f:\Rn\rightarrow\Rext$ be convex, and let $g:\R\rightarrow\Rext$ be proper, convex, lower semicontinuous, nondecreasing, and not a constant. Assume there exists $\xing\in\Rn$ such that $f(\xing)\in\intdom{g}$. Let $h=\eg\circ f$, and let $\xbar\in\extspace$. Assume further that at least one of the following holds: \begin{roman-compact} \item \label{cor:suff-cond-subgrad-comp-chain-rule:b} $\inf g = -\infty$. \item \label{cor:suff-cond-subgrad-comp-chain-rule:c} $\fext(\xbar)$ does not minimize $\gext$; that is, $\gext(\fext(\xbar))>\inf g$. \item \label{cor:suff-cond-subgrad-comp-chain-rule:a} $\dom{f}=\Rn$ and $\fext(\xbar)\in\R$. \end{roman-compact} Then \begin{equation} \label{eq:cor:suff-cond-subgrad-comp-chain-rule:1} \basubdifhext{\xbar} = \Braces{ v \uu : \uu\in\basubdiffext{\xbar}, v\in\basubdifgext{\fext(\xbar)} }. \end{equation} \end{corollary} \begin{proof} Let $U$ denote the set on the right-hand side of \eqref{eq:cor:suff-cond-subgrad-comp-chain-rule:1}. If $\uu\in\basubdiffext{\xbar}$ and $v\in\basubdifgext{\fext(\xbar)}$, then $v\uu\in\basubdifhext{\xbar}$ by Theorem~\ref{thm:strict-subgrad-comp-inc-fcn}. Thus, $U\subseteq\basubdifhext{\xbar}$. For the reverse inclusion, let $\ww\in\basubdifhext{\xbar}$. By Theorem~\ref{thm:strict-subgrad-comp-inc-fcn}, one of the two conditions stated in that theorem must hold. If condition~(\ref{cor:suff-cond-subgrad-comp-chain-rule:b}) of the current corollary holds, then $\gstar(0)=-\inf g=+\infty$, so $0\not\in\basubdifgext{\fext(\xbar)}$. Therefore, condition~(\ref{thm:strict-subgrad-comp-inc-fcn:b}) of Theorem~\ref{thm:strict-subgrad-comp-inc-fcn} cannot hold, which means that that theorem's condition~(\ref{thm:strict-subgrad-comp-inc-fcn:a}) must hold instead, implying $\ww\in U$.
Likewise, if condition~(\ref{cor:suff-cond-subgrad-comp-chain-rule:c}) of the current corollary holds, then $0\not\in\asubdifgext{\fext(\xbar)}$ by Proposition~\ref{pr:asub-zero-is-min}, so again $0\not\in\basubdifgext{\fext(\xbar)}$, implying, by the same argument, that $\ww\in U$. In the remaining case, suppose condition~(\ref{cor:suff-cond-subgrad-comp-chain-rule:a}) of the current corollary holds. Suppose first that $\ww\neq\zero$. Then \[ \inddomfstar(\ww) = \sup_{\zz\in\dom{f}} \zz\cdot\ww = +\infty \] since $\dom{f}=\Rn$. Thus, $\ww\not\in\basubdifinddomfext{\xbar}$ so, as in the preceding cases, condition~(\ref{thm:strict-subgrad-comp-inc-fcn:b}) of Theorem~\ref{thm:strict-subgrad-comp-inc-fcn} cannot hold, implying instead that condition~(\ref{thm:strict-subgrad-comp-inc-fcn:a}) of that theorem holds so that $\ww\in U$. In the last case, $\ww=\zero$. Suppose condition~(\ref{thm:strict-subgrad-comp-inc-fcn:b}) of Theorem~\ref{thm:strict-subgrad-comp-inc-fcn} holds (since otherwise, if condition~(\ref{thm:strict-subgrad-comp-inc-fcn:a}) holds, then $\ww\in U$, as in the other cases). Then $0\in\basubdifgext{\fext(\xbar)}$. Also, because $\fext(\xbar)\in\R$, there exists a point $\uu\in\basubdiffext{\xbar}$ by Theorem~\ref{thm:subgrad-not-empty-fin-ri}(\ref{thm:subgrad-not-empty-fin-ri:ben}). Therefore, $\ww\in U$ since $\ww=\zero=0 \uu$. \end{proof} \section{Some optimality conditions} \label{sec:opt-conds} We consider next convex optimization problems of particular forms. For instance, the problem might be to minimize a convex function subject to linear or convex constraints. In such cases, there are classical conditions characterizing the solution, provided, of course, that that solution is finite. In this chapter, we study how these classical conditions can be naturally generalized to astral space, yielding conditions that apply even when no finite solution exists. 
These generalizations build heavily on the calculus developed in Chapter~\ref{sec:calc-subgrads}. \subsection{Fenchel duality} We begin with an astral generalization of Fenchel's duality theorem. In one version of this setting, the goal is to minimize the sum of two convex functions. Or, in slightly more generality, one of the functions may be replaced with a composition of a convex function with a linear map. Thus, the goal is to minimize $f+g\A$ (that is, $\xx\mapsto f(\xx)+g(\A\xx)$), where $f:\Rn\rightarrow\Rext$ and $g:\Rm\rightarrow\Rext$ are convex and proper, and $\A\in\Rmn$. For instance, $g$ might be an indicator function that encodes various constraints that the solution must satisfy so that the goal is to minimize $f$ subject to those constraints. A standard generalization of Fenchel's duality theorem shows that, under certain topological conditions, minimizing $f+g\A$ is equivalent, in a dual sense, to minimizing $\fstar (-\transA) + \gstar$, that is, $\uu\mapsto \fstar(-\transA \uu) + \gstar(\uu)$. The next theorem summarizes this standard result: \begin{theorem} \label{thm:std-fenchel-duality} Let $f:\Rn\rightarrow\Rext$ and $g:\Rm\rightarrow\Rext$ be closed, proper and convex, and let $\A\in\Rmn$. Assume there exists $\xing\in\ri(\dom f)$ such that $\A\xing\in\ri(\dom g)$. Then \begin{equation} \label{eq:thm:std-fenchel-duality:1} \inf_{\xx\in\Rn} [f(\xx) + g(\A\xx)] = - \min_{\uu\in\Rm} [\fstar(-\transA \uu) + \gstar(\uu)], \end{equation} meaning, in particular, that the minimum on the right is attained. \end{theorem} \begin{proof} This follows as a specialized form of \citet[Corollary~31.2.1]{ROC}. (In more detail, we have replaced the concave function $g$ in Rockafellar's version with $-g$, and have noted that $(-g)_*(\uu)=\gstar(-\uu)$ for $\uu\in\Rm$, where $f_*$ denotes the \emph{concave} conjugate of a function $f$, as used by Rockafellar.) 
\end{proof} Working in standard Euclidean space, Theorem~\ref{thm:std-fenchel-duality} provides only that the minimum on the right-hand side of \eqref{eq:thm:std-fenchel-duality:1} is attained, which need not be the case for the infimum on the left. Nevertheless, when extended in a natural way to astral space, this infimum is always attained, as shown next: \begin{theorem} \label{thm:ast-fenchel-duality} Let $f:\Rn\rightarrow\Rext$ and $g:\Rm\rightarrow\Rext$ be closed, proper and convex, and let $\A\in\Rmn$. Assume there exists $\xing\in\ri(\dom f)$ such that $\A\xing\in\ri(\dom g)$. Then for all $\xbar\in\extspace$ and $\uu\in\Rm$, the following are equivalent: \begin{letter} \item \label{thm:ast-fenchel-duality:a} $\xbar$ minimizes $\fgAext$, and $\uu$ minimizes $\fstar(-\transA)+\gstar$. \item \label{thm:ast-fenchel-duality:b} $\fgAext(\xbar) = -[\fstar(-\transA \uu) + \gstar(\uu)]$. \end{letter} Furthermore, there exists such a pair $\xbar,\uu$ satisfying both of these conditions. Thus, \[ \min_{\xbar\in\extspace} \fgAext(\xbar) = - \min_{\uu\in\Rm} [\fstar(-\transA \uu) + \gstar(\uu)], \] with both minima attained. \end{theorem} \begin{proof} Let $h=f+g\A$. \begin{proof-parts} \pfpart{(\ref{thm:ast-fenchel-duality:a}) $\Rightarrow$ (\ref{thm:ast-fenchel-duality:b}): } Suppose statement~(\ref{thm:ast-fenchel-duality:a}) holds for some $\xbar\in\extspace$ and $\uu\in\Rm$. Then \[ \hext(\xbar) = \inf h = -\inf\Bracks{ \fstar(-\transA) + \gstar } = -[\fstar(-\transA \uu) + \gstar(\uu)]. \] The first equality is by assumption and Proposition~\ref{pr:fext-min-exists}. The second equality is by Theorem~\ref{thm:std-fenchel-duality}. And the last equality is also by assumption. \pfpart{(\ref{thm:ast-fenchel-duality:b}) $\Rightarrow$ (\ref{thm:ast-fenchel-duality:a}): } Suppose now that statement~(\ref{thm:ast-fenchel-duality:b}) holds for some $\xbar\in\extspace$ and $\uu\in\Rm$.
Then \[ \inf h \leq \hext(\xbar) = - [\fstar(-\transA \uu) + \gstar(\uu)] \leq -\inf\Bracks{ \fstar(-\transA) + \gstar } = \inf h. \] The first inequality is by Proposition~\ref{pr:fext-min-exists}. The first equality is by assumption. And the last equality is by Theorem~\ref{thm:std-fenchel-duality}. Thus, $\hext(\xbar)=\inf h$ so $\xbar$ minimizes $\hext$, and similarly, $\uu$ minimizes $\fstar(-\transA) + \gstar$. \pfpart{Existence:} By Theorem~\ref{thm:std-fenchel-duality}, there exists $\uu\in\Rm$ minimizing $\fstar(-\transA) + \gstar$. And by Proposition~\ref{pr:fext-min-exists}, there exists $\xbar\in\extspace$ minimizing $\hext$. Together, these thus satisfy statement~(\ref{thm:ast-fenchel-duality:a}) (and so statement~(\ref{thm:ast-fenchel-duality:b}) as well). \qedhere \end{proof-parts} \end{proof} The next theorem gives sufficient optimality conditions based on astral subgradients for when a pair $\xbar,\uu$ is a solution pair in the sense of satisfying the conditions of Theorem~\ref{thm:ast-fenchel-duality}. Below, we show that these same subgradient conditions are sometimes necessary as well. \begin{theorem} \label{thm:subgrad-suff-fenchel} Let $f:\Rn\rightarrow\Rext$ and $g:\Rm\rightarrow\Rext$ be closed, proper and convex, and let $\A\in\Rmn$. Assume there exists $\xing\in\ri(\dom f)$ such that $\A\xing\in\ri(\dom g)$. Suppose, for some $\xbar\in\extspace$ and $\uu\in\Rm$, that $\uu\in\asubdifgext{\A\xbar}$ and that $-\transA\uu\in\asubdiffext{\xbar}$. Then $\xbar$ minimizes $\fgAext$, and $\uu$ minimizes $\fstar(-\transA)+\gstar$. \end{theorem} \begin{proof} Let $h=f+g\A$, which is convex and proper (since $h(\xing)<+\infty$). Let $\slinmapA:\Rn\rightarrow\Rm$ be the linear map associated with $\A$, that is, $\slinmapA(\xx)=\A\xx$ for $\xx\in\Rn$. As a preliminary step, we note that \begin{equation} \label{eq:thm:subgrad-suff-fenchel:1} \xing \in \slinmapAinv\Parens{\ri(\dom{g})} = \ri\Parens{\slinmapAinv(\dom{g})} = \ri\Parens{\dom{(g\A)}}.
\end{equation} The inclusion is because $\A\xing\in\ri(\dom{g})$. The first equality is by \Cref{roc:thm6.7}. The last equality is because $\dom{(g\A)}=\{\xx\in\Rn : \A\xx\in\dom{g}\} = \slinmapAinv(\dom{g})$. Since $\uu\in\asubdifgext{\A\xbar}$ and $\A\xing\in\ri(\dom g)$, by Theorem~\ref{thm:subgrad-fA}(\ref{thm:subgrad-fA:a}), $\transA \uu \in \asubdifgAext{\xbar}$. Since also $-\transA\uu\in\asubdiffext{\xbar}$, and since $\xing\in\ri(\dom{f})\cap\ri(\dom{(g\A)})$ (by \eqref{eq:thm:subgrad-suff-fenchel:1}), it follows by Theorem~\ref{thm:subgrad-sum-fcns}(\ref{thm:subgrad-sum-fcns:a}) that \[ \zero = -\transA \uu + \transA \uu \in \asubdiffext{\xbar} + \asubdifgAext{\xbar} \subseteq \asubdifhext{\xbar}. \] Thus, $\xbar$ minimizes $\hext$ by Proposition~\ref{pr:asub-zero-is-min}, proving the theorem's first main claim. For the second claim, let $\uu'\in\Rm$. We aim to prove \begin{equation} \label{eq:thm:subgrad-suff-fenchel:4} \fstar(-\transA \uu') + \gstar(\uu') \geq \fstar(-\transA \uu) + \gstar(\uu). \end{equation} Note that $\fstar$ and $\gstar$ are proper, since $f$ and $g$ are. Note also that \eqref{eq:thm:subgrad-suff-fenchel:4} holds trivially if the left-hand side is $+\infty$. Therefore, we assume henceforth that it is not $+\infty$, implying that $\fstar(-\transA \uu')$ and $\gstar(\uu')$ are both finite. Since $\uu\in\asubdifgext{\A\xbar}$, it follows from Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c} that $\A\xbar\in\adsubdifgstar{\uu}$, implying, by the definition of astral dual subgradient given in \eqref{eqn:psi-subgrad:3-alt}, that \begin{equation} \label{eq:thm:subgrad-suff-fenchel:2} \gstar(\uu')\geq\gstar(\uu) \plusd (\A\xbar)\cdot(\uu'-\uu). 
\end{equation} Likewise, since $-\transA\uu\in\asubdiffext{\xbar}$, Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c} implies $\xbar\in\adsubdiffstar{-\transA \uu}$, and so, using \eqref{eqn:psi-subgrad:3-alt}, that \[ \fstar(\ww)\geq\fstar(-\transA\uu) \plusd \xbar\cdot(\ww + \transA \uu) \] for all $\ww\in\Rn$. Setting $\ww = -\transA \uu'$, this implies \begin{eqnarray} \fstar(-\transA \uu') &\geq& \fstar(-\transA\uu) \plusd \xbar\cdot\Bracks{-\transA(\uu'-\uu)} \nonumber \\ &=& \fstar(-\transA\uu) \plusd (-\A\xbar)\cdot(\uu'-\uu), \label{eq:thm:subgrad-suff-fenchel:3} \end{eqnarray} using Theorem~\ref{thm:mat-mult-def}. We claim $(\A\xbar)\cdot(\uu'-\uu)\in\R$. Otherwise, if $(\A\xbar)\cdot(\uu'-\uu)=+\infty$, then the right-hand side of \eqref{eq:thm:subgrad-suff-fenchel:2} must also be $+\infty$ (since $\gstar>-\infty$), contradicting our assumption that $\gstar(\uu')\in\R$. By similar reasoning, if $(\A\xbar)\cdot(\uu'-\uu)=-\infty$, then the right-hand side of \eqref{eq:thm:subgrad-suff-fenchel:3} must be $+\infty$, contradicting that $\fstar(-\transA\uu')\in\R$. Thus, all of the terms appearing in Eqs.~(\ref{eq:thm:subgrad-suff-fenchel:2}) and~(\ref{eq:thm:subgrad-suff-fenchel:3}) must in fact be finite, implying that these two inequalities can be added (with the downward addition replaced by ordinary addition). The resulting inequality is exactly \eqref{eq:thm:subgrad-suff-fenchel:4}, completing the proof. \end{proof} When the objective function being minimized, $f+g\A$, is lower-bounded, the differential condition given in Theorem~\ref{thm:subgrad-suff-fenchel} becomes both necessary and sufficient for $\xbar,\uu$ to be a solution pair. Also, in this case, strict astral subgradients can always be used. \begin{theorem} \label{thm:subgrad-equiv-fenchel} Let $f:\Rn\rightarrow\Rext$ and $g:\Rm\rightarrow\Rext$ be closed, proper and convex, and let $\A\in\Rmn$. 
Assume there exists $\xing\in\ri(\dom f)$ such that $\A\xing\in\ri(\dom g)$, and assume also that $\inf (f+g\A) > -\infty$. Then for all $\xbar\in\extspace$ and $\uu\in\Rm$, the following are equivalent: \begin{letter} \item \label{thm:subgrad-equiv-fenchel:a} $\xbar$ minimizes $\fgAext$, and $\uu$ minimizes $\fstar(-\transA)+\gstar$. \item \label{thm:subgrad-equiv-fenchel:b} $\fgAext(\xbar) = -[\fstar(-\transA \uu) + \gstar(\uu)]$. \item \label{thm:subgrad-equiv-fenchel:c} $\uu\in\basubdifgext{\A\xbar}$ and $-\transA\uu\in\basubdiffext{\xbar}$. \item \label{thm:subgrad-equiv-fenchel:d} $\uu\in\asubdifgext{\A\xbar}$ and $-\transA\uu\in\asubdiffext{\xbar}$. \end{letter} Furthermore, there exists such a pair $\xbar,\uu$ satisfying all these conditions. \end{theorem} \begin{proof} That (\ref{thm:subgrad-equiv-fenchel:a}) implies (\ref{thm:subgrad-equiv-fenchel:b}) follows from Theorem~\ref{thm:ast-fenchel-duality}, as does the existence of a pair satisfying (\ref{thm:subgrad-equiv-fenchel:a}). That (\ref{thm:subgrad-equiv-fenchel:c}) implies (\ref{thm:subgrad-equiv-fenchel:d}) is immediate. That (\ref{thm:subgrad-equiv-fenchel:d}) implies (\ref{thm:subgrad-equiv-fenchel:a}) follows from Theorem~\ref{thm:subgrad-suff-fenchel}. Therefore, it only remains to prove that (\ref{thm:subgrad-equiv-fenchel:b}) implies (\ref{thm:subgrad-equiv-fenchel:c}). Suppose then that statement~(\ref{thm:subgrad-equiv-fenchel:b}) holds for some $\xbar\in\extspace$ and $\uu\in\Rm$. Let $h=f+g\A$. Since $\fstar$ and $\gstar$ are proper (since $f$ and $g$ are), statement~(\ref{thm:subgrad-equiv-fenchel:b}) implies that $\hext(\xbar)<+\infty$. Also, $\hext(\xbar)\geq \inf h>-\infty$ by assumption (and Proposition~\ref{pr:fext-min-exists}), so $\hext(\xbar)\in\R$. Consequently, $\fstar(-\transA\uu)$ and $\gstar(\uu)$ are both finite. Let $\alpha=\fstar(-\transA\uu)$ and $\beta=\gstar(\uu)$.
Next, let $\seq{\xx_t}$ be a sequence in $\Rn$ converging to $\xbar$ and with $h(\xx_t)\rightarrow\hext(\xbar)$ (which exists by Proposition~\ref{pr:d1}). For each $t$, let \begin{eqnarray*} \alpha_t &=& \xx_t\cdot(-\transA\uu) - f(\xx_t) = -(\A\xx_t)\cdot\uu - f(\xx_t) \\ \beta_t &=& (\A\xx_t)\cdot\uu - g(\A\xx_t). \end{eqnarray*} Note that $-h(\xx_t)=\alpha_t+\beta_t$, and that statement~(\ref{thm:subgrad-equiv-fenchel:b}) means that $-\hext(\xbar)=\alpha+\beta$. Thus, $\alpha_t+\beta_t\rightarrow\alpha+\beta$. Also, for all $t$, $\alpha_t\leq\alpha$ and $\beta_t\leq\beta$ by definition of conjugate (Eq.~\ref{eq:fstar-def}). We claim $\alpha_t\rightarrow\alpha$. This is because \[ \alpha \geq \limsup \alpha_t \geq \liminf \alpha_t \geq \liminf [(\alpha_t + \beta_t) - \beta] = (\alpha + \beta) - \beta = \alpha, \] where the first and third inequalities are because $\alpha_t\leq\alpha$ and $\beta_t\leq\beta$, and the first equality is because $\alpha_t+\beta_t\rightarrow\alpha+\beta$. Thus, $\alpha_t\rightarrow\alpha$, implying $-\transA\uu\in\asubdiffext{\xbar}$ by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:b}), and so that $-\transA\uu\in\basubdiffext{\xbar}$ since $\fstar(-\transA\uu)\in\R$. Similarly, $\beta_t\rightarrow\beta$, implying $\uu\in\asubdifgext{\A\xbar}$ (again by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:b})), and so that $\uu\in\basubdifgext{\A\xbar}$ since $\gstar(\uu)\in\R$. \end{proof} Without the assumption that $f+g\A$ is lower-bounded, Theorem~\ref{thm:subgrad-equiv-fenchel} is no longer true in general. In other words, if $\inf(f+g\A)=-\infty$, then it is possible that statements~(\ref{thm:subgrad-equiv-fenchel:a}) and~(\ref{thm:subgrad-equiv-fenchel:b}) of that theorem are true, but statements~(\ref{thm:subgrad-equiv-fenchel:c}) and~(\ref{thm:subgrad-equiv-fenchel:d}) are false, as seen in the next example.
\begin{example} Let $f(x)=-2x$ and $g(x)=x$ for $x\in\R$; both these functions are convex, proper and closed. Let $\A=[1]$ (the $1\times 1$ identity matrix), and let $\barx=+\infty$ and $u=0$. Let $h=f+g\A$, implying $h(x)=-x$ for $x\in\R$. Then $\hext(\barx)=-\infty$, and it can be checked that $\fstar(-\transA u) = \fstar(0) = +\infty$ and $\gstar(u)=\gstar(0)=+\infty$. Thus, statement~(\ref{thm:subgrad-equiv-fenchel:b}) of Theorem~\ref{thm:subgrad-equiv-fenchel} holds, and so statement~(\ref{thm:subgrad-equiv-fenchel:a}) does as well, by Theorem~\ref{thm:ast-fenchel-duality}. On the other hand, $\gext$ is not minimized by $+\infty$, so $u=0\not\in\asubdifgext{+\infty}=\asubdifgext{\A\barx}$ by Proposition~\ref{pr:asub-zero-is-min}. Thus, statements~(\ref{thm:subgrad-equiv-fenchel:c}) and~(\ref{thm:subgrad-equiv-fenchel:d}) of Theorem~\ref{thm:subgrad-equiv-fenchel} do not hold. \end{example} As an application, for the problem of minimizing a convex function subject to affine equality constraints, \Cref{thm:subgrad-equiv-fenchel} provides necessary and sufficient conditions for a point to be a solution, as we show next: \begin{corollary} \label{cor:fenchel-aff-constraints} Let $\A\in\Rmn$, $\bb\in\Rn$, and let $\psi:\Rm\rightarrow\Rext$ be closed, proper and convex. Assume that: \begin{roman-compact} \item \label{cor:fenchel-aff-constraints:asm:1} $(\colspace{\A})\cap\ri(\dom{\psistar})\neq\emptyset$; and \item \label{cor:fenchel-aff-constraints:asm:2} there exists $\uhat\in\Rm$ such that $\trans{\A}\uhat=\bb$ and $\psi(\uhat)<+\infty$. \end{roman-compact} Let $\uu\in\Rm$ be such that $\trans{\A}\uu=\bb$. Then the following are equivalent: \begin{letter} \item \label{cor:fenchel-aff-constraints:a} $\uu$ minimizes $\psi$ over all $\uu'\in\Rm$ with $\trans{\A}\uu'=\bb$. \item \label{cor:fenchel-aff-constraints:b} $\adsubdifpsi{\uu}\cap(\acolspace{\A})\neq\emptyset$ and $\psi(\uu)<+\infty$. 
\end{letter} \end{corollary} \begin{proof} Let $g=\psistar$ and let $f:\Rn\rightarrow\R$ be defined by $f(\xx)=-\xx\cdot\bb$ for $\xx\in\Rn$. Note that $f=\indfstar{-\bb}$ (by Eq.~\ref{eq:e:2}), where $\indf{-\bb}$ is the indicator function for the singleton $\{-\bb\}$. Further, $\fstar=\indf{-\bb}$ and $\gstar=\psi$ (both by \Cref{pr:conj-props}\ref{pr:conj-props:f}). Therefore, for $\ww\in\Rm$, \begin{equation} \label{eq:cor:fenchel-aff-constraints:1} \fstar(-\trans{\A} \ww) + \gstar(\ww) = \indf{-\bb}(-\trans{\A}\ww) + \psi(\ww) = \begin{cases} \psi(\ww) & \text{if $\trans{\A} \ww = \bb$,} \\ +\infty & \text{otherwise.} \end{cases} \end{equation} For these choices of $f$, $g$ and $\A$, we claim that the assumptions of \Cref{thm:subgrad-equiv-fenchel} hold. First, by assumption~(\ref{cor:fenchel-aff-constraints:asm:1}), there exists a point $\xhat\in\Rn$ such that $\A\xhat\in\ri(\dom{\psistar})=\ri(\dom{g})$. Furthermore, $\xhat\in\ri(\dom{f})$ since $\dom{f}=\Rn$. Next, for $\uhat$ as in assumption~(\ref{cor:fenchel-aff-constraints:asm:2}), we have that $\fstar(-\trans{\A} \uhat) + \gstar(\uhat) < +\infty$ by \eqref{eq:cor:fenchel-aff-constraints:1}. Combined with \Cref{thm:ast-fenchel-duality}, this implies that $\inf (f+g\A) > -\infty$. \begin{proof-parts} \pfpart{(\ref{cor:fenchel-aff-constraints:a}) $\Rightarrow$ (\ref{cor:fenchel-aff-constraints:b}): } Suppose statement~(\ref{cor:fenchel-aff-constraints:a}) holds. Then $\psi(\uu)\leq\psi(\uhat)<+\infty$, by assumption~(\ref{cor:fenchel-aff-constraints:asm:2}), so $\gstar(\uu)=\psi(\uu)\in\R$ since $\psi$ is proper. Further, there exists $\xbar\in\extspace$ that minimizes $\fgAext$ (\Cref{pr:fext-min-exists}). By \Cref{thm:subgrad-equiv-fenchel}(\ref{thm:subgrad-equiv-fenchel:a},\ref{thm:subgrad-equiv-fenchel:c}), it then follows that $\uu\in\basubdifgext{\A\xbar}$, which implies that $\A\xbar\in\adsubdifgstar{\uu}=\adsubdifpsi{\uu}$, by \Cref{thm:strict-adif-fext-inverses}. 
Thus, $\adsubdifpsi{\uu}\cap(\acolspace{\A})\neq\emptyset$, proving the claim. \pfpart{(\ref{cor:fenchel-aff-constraints:b}) $\Rightarrow$ (\ref{cor:fenchel-aff-constraints:a}): } Suppose statement~(\ref{cor:fenchel-aff-constraints:b}) holds. Then there exists $\xbar\in\extspace$ such that $\A\xbar\in\adsubdifpsi{\uu} = \adsubdifgstar{\uu}$. Further, $\gstar(\uu)=\psi(\uu)\in\R$ by assumption and since $\psi$ is proper. Therefore, $\uu\in\basubdifgext{\A\xbar}$ by \Cref{thm:strict-adif-fext-inverses}. In addition, $\trans{\A}\uu=\bb$, so $-\trans{\A}\uu\in\{-\bb\}=\basubdiffext{\xbar}$ by Example~\ref{ex:affine-ben-subgrad}. Together, by \Cref{thm:subgrad-equiv-fenchel}(\ref{thm:subgrad-equiv-fenchel:a},\ref{thm:subgrad-equiv-fenchel:c}), these imply that $\uu$ minimizes the function in \eqref{eq:cor:fenchel-aff-constraints:1}, which means statement~(\ref{cor:fenchel-aff-constraints:a}) holds. \qedhere \end{proof-parts} \end{proof} Here is an example application: \begin{example} Let $\psi:\R^2\rightarrow\Rext$ be the function defined in Example~\ref{ex:entropy-ast-dual-subgrad}, and consider the problem of minimizing $\psi(\uu)$ over $\uu\in\R^2$ subject to the constraint that $u_1+u_2=1$. Note that at such points, $\psi$ has no standard subgradients. To apply \Cref{cor:fenchel-aff-constraints}, let $\A=\vv$ and $\bb=[1]$, where $\vv=\trans{[1,1]}$. Further, the assumptions of that corollary are satisfied since, it can be shown, $\dom\psistar=\R^2$, and, for instance, $\psi(\ee_1)<+\infty$. All of the astral dual subgradients at points $\uu\in\dom\psi$ satisfying the constraint $\trans{\A}\uu=\bb$ were calculated in Example~\ref{ex:entropy-ast-dual-subgrad}. Of these, the only astral dual subgradient that is also in $\acolspace{\A}=\Rext\vv$ occurs when $\uu=\trans{[\frac{1}{2},\frac{1}{2}]}$ where $\adsubdifpsi{\uu}=\{\limray{\vv}\}$. Therefore, by Corollary~\ref{cor:fenchel-aff-constraints}, $\uu$ uniquely minimizes $\psi$ subject to this constraint. 
\end{example} \subsection{Application to astral matrix games} As another application, we can apply Fenchel duality to obtain an astral generalization of von Neumann's minmax theorem for two-person, zero-sum matrix games. In one version of this setting, there are two players, Minnie and Max, playing a game described by a matrix $\A\in\Rmn$. To play the game, Minnie chooses a probability vector $\xx\in\probsim$ (where $\probsim$ is as defined in Eq.~\ref{eqn:probsim-defn}), and simultaneously, Max chooses a probability vector $\uu\in\probsimm$. In this setting, such vectors are often called \emph{strategies}. The outcome of the game is then \[ \trans{\uu} \A \xx = (\A \xx)\cdot\uu = \xx\cdot(\transA\uu), \] which represents the ``loss'' to Minnie that she aims to minimize, and the ``gain'' to Max that he aims to maximize. Von Neumann's minmax theorem states that \[ \min_{\xx\in\probsim} \max_{\uu\in\probsimm} \Bracks{(\A\xx)\cdot\uu} = \max_{\uu\in\probsimm} \min_{\xx\in\probsim} \Bracks{(\A\xx)\cdot\uu}, \] with all minima and maxima attained. The common value $v$ is called the \emph{value} of the game. This theorem means that there exists a minmax strategy for Minnie such that, no matter how Max plays, her loss will never exceed $v$; similarly, there exists a maxmin strategy for Max that assures his gain can never be less than $v$, regardless of how Minnie plays. We can straightforwardly generalize this set-up by replacing $\probsim$, the set of strategies from which Minnie can choose, with an arbitrary closed, convex set $X\subseteq\Rn$, and similarly replacing $\probsimm$ with some closed, convex set $U\subseteq\Rm$. Now, however, if either set is unbounded, there might not exist a minmax or maxmin strategy, as the next example shows. \begin{example} \label{ex:game-no-minmax} Let $X=\{\xx\in\Rpos^2 : x_1 x_2 \geq 1\}$, and let $U$ be the interval $[-1,1]$ (so $n=2$ and $m=1$). Let $\A$ be the row vector $\trans{\ee_2}=[0, 1]$.
Thus, if Minnie plays $\xx\in X$ and Max plays $u\in U$ then the outcome will be $\A\xx\cdot u=u x_2$. To maximize this outcome, Max's best play will always be to choose $u=1$, regardless of how Minnie plays. On the other hand, to minimize the outcome, Minnie will want to choose $x_2$ as small as possible; however, there is no point in $X$ that attains the minimum value of $x_2$. \end{example} Thus, in this example, there exists a maxmin strategy for Max, but no minmax strategy for Minnie. It seems natural to extend such a case to astral space where Minnie would have a minmax strategy. To do so, we allow Minnie to choose her astral strategy $\xbar$ from $\Xbar$, the astral closure of $X$. The resulting outcome then becomes $(\A\xbar)\cdot\uu$. In this astral version of the problem, there must exist minmax and maxmin strategies for both sides under a topological condition, as shown in the next theorem. For instance, in the astral version of the game given in Example~\ref{ex:game-no-minmax}, $\xbar=\limray{\ee_1}$, which is in $\Xbar$, is minmax since $\A\xbar\cdot u=0$ for all $u\in U$. The barrier cone, $\barr U$, which appears in the next theorem's statement, was defined in \eqref{eqn:barrier-cone-defn}. \begin{theorem} \label{thm:ast-von-neumann} Let $X\subseteq\Rn$ and $U\subseteq\Rm$ be convex and nonempty, and assume also that $U$ is closed (in $\Rm$). Let $\A\in\Rmn$. Assume there exists $\xing\in\ri X$ such that $\A\xing\in\ri(\barr U)$. Then \begin{equation} \label{eqn:thm:ast-von-neumann:8} \min_{\xbar\in\Xbar} \sup_{\uu\in U} [(\A\xbar)\cdot\uu] = \max_{\uu\in U} \min_{\xbar\in\Xbar} [(\A\xbar)\cdot\uu] \end{equation} with the minima and the maximum attained. \end{theorem} \begin{proof} Without loss of generality, we can assume that $X$ is closed in $\Rn$; otherwise, we can replace it with $\cl X$ which does not change the theorem's conclusion since $\clbar{(\cl X)}=\Xbar$ (Proposition~\ref{pr:closed-set-facts}\ref{pr:closed-set-facts:aa}). 
The proof's key step is based on Fenchel duality (Theorem~\ref{thm:std-fenchel-duality}), which we apply as follows: Let $f=\indf{X}$, the indicator function for $X$, which is closed, proper and convex (since $X$ is closed, convex and nonempty). Let $g:\Rm\rightarrow\Rext$ be the support function for $U$; that is, for $\zz\in\Rm$, \begin{equation} \label{eqn:thm:ast-von-neumann:7} g(\zz) = \sup_{\uu\in U} \zz\cdot\uu. \end{equation} Note that $g=\indstar{U}$, the conjugate of the indicator function $\indf{U}$ (see Eq.~\ref{eq:e:2}); therefore, $g$ is closed and convex, and $g>-\infty$ since $U$ is nonempty. Moreover, by definition of barrier cone (see Eq.~\ref{eqn:barrier-cone-defn}), $\dom{g}=\barr U$. Thus, $\xing\in\ri(\dom{f})$ (since $\dom{f}=X$), and $\A\xing\in\ri(\dom{g})$. This further shows $g\not\equiv+\infty$ and therefore that $g$ is proper. Let $h=f+g\A$. Then the foregoing shows that $h\not\equiv+\infty$ since $h(\xing)<+\infty$. Since $g$ is proper, we can write $h(\xx)=\indf{X}(\xx)+(g\A)(\xx)=\indf{X}(\xx)\plusl(g\A)(\xx)$, for $\xx\in\Rn$, so \begin{equation} \label{eqn:thm:ast-von-neumann:4} \hext(\xbar)=\indfa{\Xbar}(\xbar)\plusu \gAext(\xbar) \end{equation} for $\xbar\in\extspace$, by Corollary~\ref{cor:ext-restricted-fcn}. We also note that \begin{equation} \label{eqn:thm:ast-von-neumann:5} \gext(\zbar) \geq \sup_{\uu\in U} \zbar\cdot\uu \end{equation} for $\zbar\in\extspac{m}$. This follows from Proposition~\ref{pr:ext-sup-of-fcns-bnd} since $g$, as defined in \eqref{eqn:thm:ast-von-neumann:7}, is the pointwise supremum of the family of functions $\zz\mapsto\zz\cdot\uu$, indexed by $\uu\in U$, with each having extension $\zbar\mapsto\zbar\cdot\uu$ (Example~\ref{ex:ext-affine}). For all $\ww\in\Rn$, we claim \begin{equation} \label{eqn:thm:ast-von-neumann:1} \inf_{\xx\in X} \xx\cdot\ww = \min_{\xbar\in\Xbar} \xbar\cdot\ww \end{equation} with the minimum attained. To see this, let $r(\xx)=\indf{X}(\xx) + \xx\cdot\ww$ for $\xx\in\Rn$.
We can then apply Corollary~\ref{cor:ext-restricted-fcn}, with $S=X$ and $f$, as it appears in the corollary, set to the function $\xx\mapsto\xx\cdot\ww$ whose effective domain is all of $\Rn$, implying $\ri(\dom{f})\cap (\ri S)=\ri S\neq\emptyset$. Together with Example~\ref{ex:ext-affine}, this yields that $r$'s extension is \[\rext(\xbar)=\indfa{\Xbar}(\xbar)\plusu \xbar\cdot\ww,\] for $\xbar\in\extspace$. It then follows by Proposition~\ref{pr:fext-min-exists} that \begin{equation} \label{eqn:thm:ast-von-neumann:2} \inf r = \min_{\xbar\in\extspace} \rext(\xbar), \end{equation} with the minimum attained, and, moreover, attained at a point $\xbar\in\Xbar$ since $X\neq\emptyset$, implying $\inf r<+\infty$. \eqref{eqn:thm:ast-von-neumann:1} now follows from \eqref{eqn:thm:ast-von-neumann:2}. The conjugates of $f$ and $g$ are $\fstar=\indstar{X}$, the support function of $X$ (Eq.~\ref{eq:e:2}), and $\gstar=\inddubstar{U}=\indf{U}$ (by \Cref{pr:conj-props}\ref{pr:conj-props:f} since $U$ is closed and convex). Thus, applying Fenchel duality, \begin{align} \inf h &= - \min_{\uu\in\Rm} [\fstar(-\transA \uu) + \gstar(\uu)] \nonumber \\ &= - \min_{\uu\in U} \fstar(-\transA \uu) \nonumber \\ &= - \min_{\uu\in U} \sup_{\xx\in X} \Bracks{\xx\cdot (-\transA\uu)} \nonumber \\ &= \max_{\uu\in U} \inf_{\xx\in X} \Bracks{ \xx\cdot(\transA\uu) } \nonumber \\ &= \max_{\uu\in U} \min_{\xbar\in \Xbar} \Bracks{ \xbar\cdot(\transA\uu) } \nonumber \\ &= \max_{\uu\in U} \min_{\xbar\in \Xbar} \Bracks{ (\A\xbar)\cdot\uu } \label{eqn:thm:ast-von-neumann:3} \end{align} with the minima and maxima attained. The first equality is by Theorem~\ref{thm:std-fenchel-duality}. The second and third equalities are from our expressions for $\gstar$ and $\fstar$. The fifth equality is by \eqref{eqn:thm:ast-von-neumann:1}. And the last equality is by Theorem~\ref{thm:mat-mult-def}. 
Let $\uu_0\in U$ attain the maximum in \eqref{eqn:thm:ast-von-neumann:3}, and let $\xbar_0\in\extspace$ minimize $\hext$ so that $\hext(\xbar_0)=\inf h$ (which exists by Proposition~\ref{pr:fext-min-exists}), implying $\xbar_0\in\Xbar$ by \eqref{eqn:thm:ast-von-neumann:4} since $h\not\equiv+\infty$. We then have: \begin{align} \hext(\xbar_0) = \inf h &= \min_{\xbar\in \Xbar} \Bracks{ (\A\xbar)\cdot\uu_0 } \nonumber \\ &\leq \inf_{\xbar\in \Xbar} \sup_{\uu\in U} \Bracks{ (\A\xbar)\cdot\uu } \nonumber \\ &\leq \sup_{\uu\in U} \Bracks{ (\A\xbar_0)\cdot\uu } \nonumber \\ &\leq \gext(\A\xbar_0) \leq \gAext(\xbar_0) = \hext(\xbar_0). \label{eqn:thm:ast-von-neumann:6} \end{align} The second equality is by \eqref{eqn:thm:ast-von-neumann:3} (and by definition of $\uu_0$). The third inequality is by \eqref{eqn:thm:ast-von-neumann:5}. The fourth inequality is by Proposition~\ref{pr:ext-affine-comp}(\ref{pr:ext-affine-comp:a}). And the last equality is by \eqref{eqn:thm:ast-von-neumann:4} since $\xbar_0\in\Xbar$. \eqref{eqn:thm:ast-von-neumann:6} shows that \[ \inf h = \inf_{\xbar\in \Xbar} \sup_{\uu\in U} \Bracks{ (\A\xbar)\cdot\uu } = \sup_{\uu\in U} \Bracks{ (\A\xbar_0)\cdot\uu }, \] and so, in particular, that this infimum is attained by $\xbar_0\in\Xbar$. Combined with \eqref{eqn:thm:ast-von-neumann:3}, this proves the theorem. \end{proof} When $U$ is also bounded (and therefore compact), a simplified form of Theorem~\ref{thm:ast-von-neumann} is obtained: \begin{corollary} \label{cor:ast-von-neumann-bounded-u} Let $X\subseteq\Rn$ and $U\subseteq\Rm$ be convex and nonempty, and assume also that $U$ is compact (that is, bounded and closed in $\Rm$). Let $\A\in\Rmn$. Then \[ \min_{\xbar\in\Xbar} \max_{\uu\in U} [(\A\xbar)\cdot\uu] = \max_{\uu\in U} \min_{\xbar\in\Xbar} [(\A\xbar)\cdot\uu] \] with all minima and maxima attained. 
\end{corollary} \begin{proof} For all $\zz\in\Rm$, we have \[ \sup_{\uu\in U} \zz\cdot\uu \leq \sup_{\uu\in U} \norm{\zz}\,\norm{\uu} = \norm{\zz} \sup_{\uu\in U} \norm{\uu} < +\infty \] by the Cauchy-Schwarz inequality, and since $U$ is bounded. Therefore, $\barr U = \Rm$. Because $X$ is convex and nonempty, there exists $\xing\in\ri X$, and necessarily, $\A\xing\in\Rm=\ri(\barr U)$. Thus, we can apply Theorem~\ref{thm:ast-von-neumann}, yielding \eqref{eqn:thm:ast-von-neumann:8}. It remains then only to show that the supremum appearing on the left-hand side of that equation is attained. To prove this, let $\zbar\in\extspac{m}$. We claim the supremum of $\zbar\cdot\uu$ over $\uu\in U$ must be attained. If $\zbar\cdot\uu=+\infty$ for some $\uu\in U$, then this $\uu$ must attain that supremum. Likewise, if $\zbar\cdot\uu=-\infty$ for all $\uu\in U$, then any $\uu\in U$ attains the supremum. Therefore, we assume henceforth that $\zbar\cdot\uu<+\infty$ for all $\uu\in U$, and that $\zbar\cdot\uu\in\R$ for some $\uu\in U$. We can write $\zbar=\VV\omm\plusl\qq$ for some $\VV\in\R^{m\times k}$ and some $\qq\in\Rm$. Let $L$ be the linear subspace orthogonal to the columns of $\VV$, that is, $L=\{\uu\in\Rm : \uu\perp\VV\}$. Let $U'=U\cap L$, which, by Proposition~\ref{pr:vtransu-zero}, is exactly the set of points $\uu\in U$ for which $\zbar\cdot\uu\in\R$. By the above assumptions, $U'$ is nonempty, and if $\uu\in U\setminus L$ then $\zbar\cdot\uu=-\infty$. Thus, \begin{equation} \label{eq:cor:ast-von-neumann-bounded-u:1} \sup_{\uu\in U} \zbar\cdot\uu = \sup_{\uu\in U'} \zbar\cdot\uu = \sup_{\uu\in U'} \qq\cdot\uu, \end{equation} with the last equality following from Proposition~\ref{pr:vtransu-zero}. Since $U$ and $L$ are closed (in $\Rm$), and since $U$ is bounded, $U'$ is compact. Since also the function $\uu\mapsto\qq\cdot\uu$ is continuous, the supremum in \eqref{eq:cor:ast-von-neumann-bounded-u:1} must be attained, proving the claim.
\end{proof} \subsection{KKT conditions} We next consider constrained optimization problems in which the goal is to minimize a convex function subject to explicitly given convex inequality constraints and affine equality constraints. In standard convex analysis, such optimization problems are often handled using the Karush-Kuhn-Tucker (KKT) conditions, which generalize the method of Lagrange multipliers. In this section, we will see how these notions extend to astral space, thereby allowing them to be used for such optimization problems even when no finite solution exists. More specifically, we study how to minimize some \emph{objective function} $f(\xx)$ subject to the constraints that $g_i(\xx)\leq 0$ for $i=1,\ldots,r$, and that $h_j(\xx) = 0$ for $j=1,\ldots,s$. We assume that these functions have the following properties: \begin{itemize} \item $f:\Rn\rightarrow\Rext$ is convex and proper. \item For $i=1,\ldots,r$, $g_i:\Rn\rightarrow\Rext$ is convex and proper with $\dom f\subseteq\dom g_i$ and $\ri(\dom f)\subseteq\ri(\dom g_i)$. \item For $j=1,\ldots,s$, $h_j:\Rn\rightarrow\R$ is affine. \end{itemize} Thus, all of these functions are finite on $f$'s entire effective domain. We refer to such a problem as an \emph{ordinary convex program} $P$, as specified by the functions above. We assume the notation just described throughout this section. We can combine the various components of such a program into a single convex function to be minimized. To do so, let $\negf:\R\rightarrow\Rext$ denote the indicator function $\negf=\indf{\Rneg}$, and let $\Negf=\negfext$ be its extension; thus, $\negf$ and $\Negf$ are both $0$ if their argument is not positive, and are $+\infty$ otherwise. As usual, on $\R$ and $\Rext$, respectively, we write $\indz$ and $\indaz$ as shorthand for the indicator functions $\indf{\{0\}}$ and $\indfa{\{0\}}$.
Then solving the ordinary convex program $P$ is equivalent to minimizing the convex function $p:\Rn\rightarrow\Rext$ given by \begin{equation} \label{eqn:kkt-p-defn} p(\xx) = f(\xx) + \sumitor (\Negf\circ g_i)(\xx) + \sumjtos (\indz\circ h_j)(\xx) \end{equation} since this function is $+\infty$ if any constraint of $P$ is violated, and otherwise is equal to $f$. A point $\xx\in\Rn$ is said to be \emph{(standard) feasible} if $\xx$ satisfies all of the constraints so that $g_i(\xx)\leq 0$ for $i=1,\ldots,r$ and $h_j(\xx)=0$ for $j=1,\ldots,s$. We let $C$ denote the set of all standard feasible points: \begin{equation} \label{eqn:feas-set-defn} C = \Braces{\xx\in\Rn :\: g_i(\xx)\leq 0 \mbox{ for $i=1,\ldots,r$};\; h_j(\xx) = 0 \mbox{ for $j=1,\ldots,s$} }. \end{equation} A feasible point $\xx\in C$ is a \emph{solution} of $P$ if $f$ is minimized by $\xx$ among all feasible points, or equivalently, if $\xx$ minimizes $p$. We call $\inf p$ the \emph{value} of program $P$; it is the value of $f(\xx)$ at any solution $\xx$, and more generally, is the infimal value of $f$ over feasible points $\xx\in C$. A standard approach to solving such a problem is to introduce new variables which are then used to form a simpler function which can then be optimized. The new variables or \emph{multipliers}, one for each constraint, are denoted $\lambda_i$ for $i=1,\ldots,r$ and $\mu_j$ for $j=1,\ldots,s$. We also write these in vector form as $\lamvec\in\R^r$ and $\muvec\in\R^s$. For such vectors $\lamvec,\muvec$, we define the function $\lagrangflm:\Rn\rightarrow\Rext$ by \begin{equation} \label{eqn:lagrang-defn} \lagranglm{\xx} = f(\xx) + \sumitor \lambda_i g_i(\xx) + \sumjtos \mu_j h_j(\xx) \end{equation} for $\xx\in\Rn$. We often view $\lagranglm{\xx}$ as a function of $\xx$ only, with $\lammupair$ fixed, but sometimes we regard it as a function of both $\xx$ and $\lammupair$, called the \emph{Lagrangian}. 
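To make these definitions concrete, the following Python sketch (illustrative only; the one-dimensional program at the end is a hypothetical instance chosen for this sketch, not drawn from the text) evaluates the composite objective $p$ of \eqref{eqn:kkt-p-defn} and the Lagrangian of \eqref{eqn:lagrang-defn} for a program specified by callables:

```python
import math

# Illustrative sketch (not part of the formal development): evaluate the
# composite objective p and the Lagrangian L for a program specified by
# callables f, g_list (inequality constraints), h_list (equality constraints).

def p(x, f, g_list, h_list):
    """p(x) = f(x) if all g_i(x) <= 0 and h_j(x) = 0; otherwise +infinity."""
    if any(g(x) > 0 for g in g_list) or any(h(x) != 0 for h in h_list):
        return math.inf
    return f(x)

def lagrangian(x, lam, mu, f, g_list, h_list):
    """L(x, lam, mu) = f(x) + sum_i lam_i * g_i(x) + sum_j mu_j * h_j(x)."""
    return (f(x)
            + sum(l * g(x) for l, g in zip(lam, g_list))
            + sum(m * h(x) for m, h in zip(mu, h_list)))

# Hypothetical one-dimensional program: minimize x^2 subject to 1 - x <= 0.
f = lambda x: x * x
g_list = [lambda x: 1.0 - x]
h_list = []

print(p(2.0, f, g_list, h_list))   # feasible, so p = f = 4.0
print(p(0.0, f, g_list, h_list))   # infeasible, so p = +inf
# At the solution x = 1 with multiplier lam_1 = 2, complementary
# slackness holds (lam_1 * g_1(1) = 0), so L(1, lam) = f(1) = 1.0.
print(lagrangian(1.0, [2.0], [], f, g_list, h_list))
```

Minimizing $p$ over $\xx$ is exactly solving the program, while the Lagrangian replaces the hard constraints by linear penalties weighted by the multipliers.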
Note importantly that the notations $\lagranglmext{\xbar}$ and $\lagranglmstar{\uu}$ denote the extension and conjugate of $\lagranglm{\xx}$ regarded as a function of $\xx$ only, with $\lammupair$ fixed. That is, if $h(\xx)=\lagranglm{\xx}$ for $\xx\in\Rn$, then $\lagranglmext{\xbar}=\hext(\xbar)$ and $\lagranglmstar{\uu}=\hstar(\uu)$; in particular, $\asubdiflagranglmext{\xbar}=\asubdifhext{\xbar}$. Generally, we require $\lambda_i\geq 0$ for $i=1,\ldots,r$ so that $\lammupair\in\Lampairs$ where $\Lampairs=\Rpos^r\times\R^s$. This condition is called \emph{dual feasibility}. In addition to the assumptions defining an ordinary convex program, we assume throughout this section that the program $P$ satisfies \emph{Slater's condition} requiring that there exists a point $\xing\in\ri(\dom{f})$ that is ``strictly feasible'' in the sense that $g_i(\xing)<0$ for $i=1,\ldots,r$, and $h_j(\xing)=0$ for $j=1,\ldots,s$. Under this condition, and also assuming the value $\inf p$ of the program $P$ is not $-\infty$, it is known that there must exist multipliers $\lammupair\in\Lampairs$ for which $p$ and the Lagrangian $\lagrangflm$ have the same infima: \begin{proposition} \label{roc:thm28.2} Let $P$ be an ordinary convex program in the notation above that satisfies Slater's condition, and let $p$ and $\lagrangfcn$ be as in Eqs.~(\ref{eqn:kkt-p-defn}) and (\ref{eqn:lagrang-defn}). Assume $\inf p>-\infty$. Then there exists $\lammupair\in\Lampairs$ such that $\inf \lagrangflm = \inf p$. \end{proposition} \begin{proof} See \citet[Theorem~28.2]{ROC}.
\end{proof} Furthermore, under these same conditions, it is known that $\xx$ is a solution of $P$ if and only if there exists $\lammupair\in\Lampairs$ such that the resulting pair $\xx,\lammupair$ is a \emph{saddle point} of the Lagrangian, meaning $\xx$ minimizes $\lagranglm{\cdot}$ and simultaneously $\lammupair$ maximizes $\lagrang{\xx}{\cdot}$; that is, \[ \lagrang{\xx}{\lamvec',\muvec'} \leq \lagranglm{\xx} \leq \lagranglm{\xx'} \] for all $\lammupairp\in\Lampairs$ and for all $\xx'\in\Rn$. This will also be the case if and only if the pair satisfies the \emph{KKT conditions} requiring that: \begin{itemize} \item $\xx$ is standard feasible; \item $\lambda_i g_i(\xx)=0$ for $i=1,\ldots,r$, a condition called \emph{complementary slackness}; \item $\xx$ minimizes the Lagrangian so that, in differential form, \begin{equation} \label{eqn:kkt-lagrang-opt} \zero \in \partial f(\xx) + \sumitor \lambda_i \partial g_i(\xx) + \sumjtos \mu_j \partial h_j(\xx). \end{equation} \end{itemize} Thus, the KKT conditions require standard feasibility, dual feasibility, complementary slackness, and first-order optimality of the Lagrangian as given in \eqref{eqn:kkt-lagrang-opt}. Summarizing: \begin{theorem} \label{roc:cor28.3.1} Let $P$ be an ordinary convex program in the notation above that satisfies Slater's condition, and let $p$ and $\lagrangfcn$ be as in Eqs.~(\ref{eqn:kkt-p-defn}) and (\ref{eqn:lagrang-defn}). Assume $\inf p>-\infty$. Then the following are equivalent: \begin{letter-compact} \item $\xx$ is a (standard) solution of $P$. \item There exists $\lammupair\in\Lampairs$ such that the pair $\xx,\lammupair$ is a {saddle point} of the Lagrangian $\lagrangfcn$. \item There exists $\lammupair\in\Lampairs$ such that the pair $\xx,\lammupair$ satisfies the KKT conditions. \end{letter-compact} \end{theorem} \begin{proof} See \citet[Corollary~28.3.1]{ROC}. \end{proof} In fact, \eqref{eqn:kkt-lagrang-opt} is not precisely correct as written.
The problem is that if $\lambda_i=0$ and also $\partial g_i(\xx)=\emptyset$, then the entire set sum appearing on the right-hand side of the equation will be empty, and consequently the inclusion will never hold. What is actually meant is that terms with $\lambda_i=0$ should simply be omitted from the first sum, even if the corresponding subdifferential $\partial g_i(\xx)$ is empty. We could revise the equation to reflect this by modifying the first sum to only be over those $i$ with $\lambda_i>0$. Alternatively, and a bit more cleanly, we can move the scalar $\lambda_i$ inside the subdifferential, replacing $\lambda_i \partial g_i(\xx)$ with $\partial (\lambda_i g_i)(\xx)$. If $\lambda_i>0$, then these two expressions are always equal. But if $\lambda_i=0$ then $\lambda_i \partial g_i(\xx)$ might be empty while $\partial (\lambda_i g_i)(\xx)$ is necessarily equal to $\{\zero\}$ since $0 g_i \equiv 0$. Likewise, we might choose to move $\mu_j$ inside its accompanying subdifferential, replacing $\mu_j \partial h_j(\xx)$ by $\partial (\mu_j h_j)(\xx)$, even though this change is not strictly necessary for correctness at this point. Thus, with both these changes, what is intended by \eqref{eqn:kkt-lagrang-opt} can be more properly written as \begin{equation} \label{eqn:kkt-lagrang-opt-mod} \zero \in \partial f(\xx) + \sumitor \partial (\lambda_i g_i)(\xx) + \sumjtos \partial (\mu_j h_j)(\xx). \end{equation} The KKT conditions characterize when a finite point $\xx\in\Rn$ is a solution of the convex program $P$. Nevertheless, even under the favorable conditions given above, it is possible that no finite solution exists. Here is an example, which we will later see handled using astral tools. \begin{example} \label{ex:kkt-running-ex-standard} Working in $\R^3$, consider minimizing $2x_3 - x_1 - 2x_2$ subject to the constraints that \[ e^{x_1} + e^{x_2-x_3} + e^{x_2+x_3} \leq 1, \] and that $x_1\geq x_2$.
In other words, we aim to minimize $f(\xx)$ subject to $g_i(\xx)\leq 0$ for $i=1,2$, where \begin{eqnarray} f(\xx) &=& 2x_3 - x_1 - 2x_2, \nonumber \\ g_1(\xx) &=& e^{x_1} + e^{x_2-x_3} + e^{x_2+x_3} - 1, \label{eq:ex:kkt-running-ex-standard:1} \\ g_2(\xx) &=& x_2-x_1 \nonumber \end{eqnarray} (with no equality constraints). This convex program has no finite solution. Informally, this is because, for any feasible point $\xx\in\R^3$, we can decrease $x_2$ and $x_3$ slightly while also slightly increasing $x_1$ to obtain another feasible point $\xx'\in\R^3$ that is strictly better than $\xx$ (meaning $f(\xx')<f(\xx)$). This can be argued more precisely using this program's Lagrangian, \begin{equation} \label{eq:ex:kkt-running-ex-standard:2} \lagrangl{\xx} = f(\xx) + \lambda_1 g_1(\xx) + \lambda_2 g_2(\xx), \end{equation} where $\lamvec\in\Rpos^2$. If $\lambda_1>0$ then for all $\xx\in\R^3$, decrementing $x_2$ and $x_3$ strictly reduces the Lagrangian; that is, \[ \lagrangl{x_1,x_2-1,x_3-1}<\lagrangl{x_1,x_2,x_3}. \] Therefore, $\xx$ cannot minimize $\lagrangl{\cdot}$. Likewise, if $\lambda_1=0$ then for all $\xx\in\R^3$, \[ \lagrangl{x_1+1,x_2,x_3}<\lagrangl{x_1,x_2,x_3}, \] so again $\xx$ cannot minimize $\lagrangl{\cdot}$. Thus, for all $\lamvec\in\Rposgen{2}$, no point $\xx\in\R^3$ minimizes $\lagrangl{\xx}$. Therefore, there cannot exist any pair $\xx,\lamvec$ that is a saddle point, implying that no finite point $\xx\in\R^3$ can be a solution. \end{example} The standard KKT theory only characterizes a convex program's finite solutions. However, as just seen, the program's solutions might only exist at infinity. We show now how that theory can be extended to astral space yielding an astral version of the KKT conditions that characterizes the solutions of the original convex program, even if those solutions are infinite. We first need to define what it might mean for an astral point $\xbar\in\extspace$ to solve the convex program $P$. 
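The informal argument in Example~\ref{ex:kkt-running-ex-standard} can also be checked numerically. The following Python sketch (illustrative only; the starting point and step sizes are our own choices, not from the text) starts at a feasible point and repeatedly applies the move described in the example, decreasing $x_2$ and $x_3$ by the same amount while slightly increasing $x_1$; each step preserves feasibility and strictly decreases the objective, consistent with the absence of a finite solution:

```python
import math

# Numerical sanity check (illustrative only) of the improvement move in
# the example: from a feasible point, decrease x2 and x3 by the same
# amount while slightly increasing x1; the new point is feasible and
# strictly better, so no finite point can be a solution.

def f(x1, x2, x3):
    return 2*x3 - x1 - 2*x2

def g1(x1, x2, x3):
    return math.exp(x1) + math.exp(x2 - x3) + math.exp(x2 + x3) - 1

def g2(x1, x2, x3):
    return x2 - x1

def feasible(x):
    return g1(*x) <= 0 and g2(*x) <= 0

x = (-2.0, -2.0, 0.0)               # a feasible starting point
assert feasible(x)

for _ in range(20):
    eps, delta = 1.0, 1e-3          # step sizes chosen for illustration
    x_new = (x[0] + delta, x[1] - eps, x[2] - eps)
    assert feasible(x_new)          # the move preserves feasibility
    assert f(*x_new) < f(*x)        # and strictly decreases the objective
    x = x_new

print(f(*x))                        # decreases by delta at every step
```

Here the equal decrements of $x_2$ and $x_3$ leave $2x_3-2x_2$ and $e^{x_2-x_3}$ unchanged while shrinking $e^{x_2+x_3}$, creating the slack needed to increase $x_1$ and strictly lower $f$.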
We saw that solving $P$ is equivalent to minimizing the function $p$ given in \eqref{eqn:kkt-p-defn}. Therefore, we say that $\xbar$ is an \emph{(astral) solution} of $P$ if it minimizes $p$'s extension, $\pext$, that is, if $\pext(\xbar)=\inf p$. This, however, is just one of a few reasonable definitions of what it might mean for an astral point to solve $P$. As one alternative, we might instead consider more directly the astral extensions of the objective function and individual constraints. Specifically, we define an astral point $\xbar\in\extspace$ to be \emph{(astral) feasible} if $\gext_i(\xbar)\leq 0$ for $i=1,\ldots,r$, and $\hext_j(\xbar)=0$ for $j=1,\ldots,s$. We then might define $\xbar$ to solve $P$ if it minimizes $\fext(\xbar)$ over all astral feasible points $\xbar$. In yet another alternative, based on sequences, we might say that $\xbar$ is a solution of $P$ if there exists a sequence $\seq{\xx_t}$ of standard feasible points (that is, in $C$) that converges to $\xbar$ and such that the objective function values $f(\xx_t)$ converge to the program's value, $\inf p$. In fact, all three of these alternative definitions are equivalent, as we will show shortly (Proposition~\ref{pr:equiv-astral-kkt}). We first make some preliminary observations that will be needed for that and other proofs. We start with some consequences of Slater's condition, and then give expressions for both the extension $\pext$ and the set of astral feasible points. \begin{proposition} \label{pr:slater-conseqs} Let $P$ be an ordinary convex program in the notation above, and let $p$ and $\lagrangfcn$ be as in Eqs.~(\ref{eqn:kkt-p-defn}) and (\ref{eqn:lagrang-defn}). Assume Slater's condition, that is, that there exists a point $\xing\in\ri(\dom{f})$ with $g_i(\xing)<0$ for $i=1,\ldots,r$, and $h_j(\xing)=0$ for $j=1,\ldots,s$. Then: \begin{letter} \item \label{pr:slater-conseqs:b} $\xing\in\ri(\dom(\Negf\circ g_i))$ for $i=1,\ldots,r$. 
\item \label{pr:slater-conseqs:c} $\xing\in\ri(\dom(\indz\circ h_j))$ for $j=1,\ldots,s$. \item \label{pr:slater-conseqs:e} For all $\lammupair\in\Lampairs$, $\dom{f} = \dom{\lagranglm{\cdot}}$; therefore, $\xing\in\ri(\dom{\lagranglm{\cdot}})$ and $\lagranglm{\cdot}\not\equiv+\infty$. \item \label{pr:slater-conseqs:d} $\inf p < +\infty$. \end{letter} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:slater-conseqs:b}):} Let $i\in\{1,\ldots,r\}$. By assumption, $\xing\in\ri(\dom{f})\subseteq\ri(\dom{g_i})$. Also, $g_i(\xing)<0$ so \begin{align*} \xing &\in \Braces{ \xx\in\ri(\dom{g_i}) : g_i(\xx) < 0 } \\ &= \ri\bigParens{\Braces{\xx\in\Rn : g_i(\xx) \leq 0} } \\ &= \ri\bigParens{\dom(\Negf\circ g_i)}, \end{align*} where the first equality is by \Cref{roc:thm7.6-mod}. \pfpart{Part~(\ref{pr:slater-conseqs:c}):} Let $j\in\{1,\ldots,s\}$, and let $M=\dom(\indz\circ h_j)=\{\xx\in\Rn : h_j(\xx)=0\}$. Since $h_j$ is affine, $M$ is an affine set, which is therefore relatively open. Since $h_j(\xing)=0$, it follows that $\xing\in M = \ri M$. \pfpart{Part~(\ref{pr:slater-conseqs:e}):} By assumption, $\dom{f}\subseteq\dom{g_i}$ for $i=1,\ldots,r$, and also $\dom{h_j}=\Rn$ for $j=1,\ldots,s$. Therefore, $\dom{\lagranglm{\cdot}}=\dom{f}$. \pfpart{Part~(\ref{pr:slater-conseqs:d}):} Since $\xing$ is feasible and in $\dom{f}$, $\inf p \leq p(\xing) = f(\xing) < +\infty$. \qedhere \end{proof-parts} \end{proof} \begin{proposition} \label{pr:kkt-p-C-exps} Let $P$ be an ordinary convex program in the notation above that satisfies Slater's condition. Let $p$ and $C$ be as in Eqs.~(\ref{eqn:kkt-p-defn}) and~(\ref{eqn:feas-set-defn}), and let $\xbar\in\extspace$. Then: \begin{letter-compact} \item \label{pr:kkt-p-C-exps:a} $\xbar$ is astral feasible if and only if $\xbar\in\Cbar$.
\item \label{pr:kkt-p-C-exps:b} $p$'s extension is \begin{eqnarray*} \pext(\xbar) &=& \Bracks{\sumitor \Negf(\gext_i(\xbar)) + \sumjtos \indaz(\hext_j(\xbar)) } \plusl \fext(\xbar) \\ &=& \indfa{\Cbar}(\xbar)\plusu \fext(\xbar) \\ &=& \left\{ \begin{array}{cl} \fext(\xbar) & \mbox{if $\xbar$ is astral feasible} \\ +\infty & \mbox{otherwise.} \end{array} \right. \end{eqnarray*} \end{letter-compact} \end{proposition} \begin{proof} The proofs of both parts are mainly a matter of computing various extensions using previously developed tools. Let $\xing$ be as in Proposition~\ref{pr:slater-conseqs}. \begin{proof-parts} \pfpart{Part~(\ref{pr:kkt-p-C-exps:a}):} Let $i\in\{1,\ldots,r\}$. Since $g_i(\xing)<0=\sup(\dom{\Negf})$, we can apply \Cref{thm:Gf-conv} yielding, for $\xbar\in\extspace$, \begin{equation} \label{eq:pr:kkt-p-C-exps:1} \Negogiext(\xbar) = \negfext(\gext_i(\xbar)) = \Negf(\gext_i(\xbar)). \end{equation} Next, we compute the extension of $\indz$ composed with any affine function: \begin{claimpx} \label{cl:pr:kkt-p-C-exps:1} Let $h:\Rn\rightarrow\R$ be affine, and let $\xbar\in\extspace$. Assume $h(\xing)=0$. Then $\indzohext(\xbar)=\indaz(\hext(\xbar))$. \end{claimpx} \begin{proofx} Since $h$ is affine, we can write it as $h(\xx)=\xx\cdot\ww-b$ for some $\ww\in\Rn$ and $b\in\R$. Then $\indz(h(\xx)) = \indb(\xx\cdot\ww) = \indb(\trans{\ww}\xx)$ for $\xx\in\Rn$. Therefore, \[ \indzohext(\xbar) = \indbext(\trans{\ww} \xbar) = \indab(\xbar\cdot\ww) = \indaz(\xbar\cdot\ww - b) = \indaz(\hext(\xbar)). \] The first equality is by Theorem~\ref{thm:ext-linear-comp} (with $\A=\trans{\ww}$ and $f=\indb$), which we can apply since $h(\xing)=0$, so $\trans{\ww}{\xing}=b\in\ri(\dom{\indb})$. The second equality is by Propositions~\ref{pr:trans-uu-xbar} and~\ref{pr:inds-ext}. The last equality uses Example~\ref{ex:ext-affine}. \end{proofx} For $\xx\in\Rn$, let \[ c(\xx) = \sumitor \Negf(g_i(\xx)) + \sumjtos \indz(h_j(\xx)).
\] This function is in fact the indicator function for the set $C$; that is, $c=\indC$. We can compute $c$'s extension: \begin{align} \cext(\xbar) &= \sumitor \Negogiext(\xbar) + \sumjtos \indzohjext(\xbar) \nonumber \\ &= \sumitor \Negf(\gext_i(\xbar)) + \sumjtos \indaz(\hext_j(\xbar)) \label{eq:pr:kkt-p-C-exps:3} \\ &= \begin{cases} 0 & \mbox{if $\xbar$ is astral feasible,} \\ +\infty & \mbox{otherwise.} \end{cases} \label{eq:pr:kkt-p-C-exps:2} \end{align} Here, the first equality is by Theorem~\ref{thm:ext-sum-fcns-w-duality}, which we can apply here because the extensions $\Negogiext(\xbar)$ and $\indzohjext(\xbar)$ are all nonnegative and therefore summable, and also because the relative interiors of the domains of the functions being added have a point in common (namely, $\xing$), by Proposition~\ref{pr:slater-conseqs}(\ref{pr:slater-conseqs:b},\ref{pr:slater-conseqs:c}). The second equality is by \eqref{eq:pr:kkt-p-C-exps:1} and Claim~\ref{cl:pr:kkt-p-C-exps:1}. The last equality is by definition of astral feasibility. On the other hand, $\cext(\xbar)=\indfa{\Cbar}(\xbar)$ by Proposition~\ref{pr:inds-ext}. Combined with \eqref{eq:pr:kkt-p-C-exps:2}, this proves the claim. \pfpart{Part~(\ref{pr:kkt-p-C-exps:b}):} By its definition, we can write $p(\xx)=\indC(\xx)\plusl f(\xx)$ for $\xx\in\Rn$. We can therefore compute its extension using Corollary~\ref{cor:ext-restricted-fcn}. To do so, we note that \begin{eqnarray*} \xing &\in& \bigcap_{i=1}^r \ri(\dom(\Negf\circ g_i)) \cap \bigcap_{j=1}^s \ri(\dom(\indz\circ h_j)) \\ &=& \ri\Parens{ \bigcap_{i=1}^r \dom(\Negf\circ g_i) \cap \bigcap_{j=1}^s \dom(\indz\circ h_j) } \\ &=& \ri C, \end{eqnarray*} where the inclusion is by Proposition~\ref{pr:slater-conseqs}(\ref{pr:slater-conseqs:b},~\ref{pr:slater-conseqs:c}), the first equality is by \Cref{roc:thm6.5}, and the last equality is by $C$'s definition. 
Since $\xing$ is also in $\ri(\dom{f})$, we can therefore apply Corollary~\ref{cor:ext-restricted-fcn} yielding \[ \pext(\xbar) = \indfa{\Cbar}(\xbar)\plusu \fext(\xbar) = \cext(\xbar) \plusu \fext(\xbar). \] Combined with Eqs.~(\ref{eq:pr:kkt-p-C-exps:3}) and~(\ref{eq:pr:kkt-p-C-exps:2}), this gives all three stated expressions for $\pext(\xbar)$. \qedhere \end{proof-parts} \end{proof} We can now prove the equivalence of plausible astral solution concepts as discussed above: \begin{proposition} \label{pr:equiv-astral-kkt} Let $P$ be an ordinary convex program in the notation above that satisfies Slater's condition. Let $p$ be as in \eqref{eqn:kkt-p-defn}, and let $\xbar\in\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{pr:equiv-astral-kkt:a} $\xbar$ is an astral solution of $P$; that is, $\xbar$ minimizes $\pext$. \item \label{pr:equiv-astral-kkt:b} $\xbar$ is astral feasible and minimizes $\fext$ over all astral feasible points. \item \label{pr:equiv-astral-kkt:c} There exists a sequence $\seq{\xx_t}$ of standard feasible points in $\Rn$ such that $\xx_t\rightarrow\xbar$ and $f(\xx_t)\rightarrow \inf p$. \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{pr:equiv-astral-kkt:a}) $\Leftrightarrow$ (\ref{pr:equiv-astral-kkt:b}): } This follows directly from Proposition~\ref{pr:kkt-p-C-exps}(\ref{pr:kkt-p-C-exps:b}) (and since $\inf p < +\infty$ by Proposition~\ref{pr:slater-conseqs}\ref{pr:slater-conseqs:d}). \pfpart{(\ref{pr:equiv-astral-kkt:a}) $\Rightarrow$ (\ref{pr:equiv-astral-kkt:c}): } Suppose $\pext(\xbar)=\inf p$. By Proposition~\ref{pr:d1}, there exists a sequence $\seq{\xx_t}$ in $\Rn$ that converges to $\xbar$ and with $p(\xx_t)\rightarrow\pext(\xbar)$. Since $\pext(\xbar)<+\infty$ (by Proposition~\ref{pr:slater-conseqs}\ref{pr:slater-conseqs:d}), we can have $p(\xx_t)=+\infty$ for at most finitely many $t$. 
By discarding these, we can assume for all $t$ that $p(\xx_t)<+\infty$, and therefore that $\xx_t$ is (standard) feasible and that $p(\xx_t)=f(\xx_t)$. Thus, $f(\xx_t)=p(\xx_t)\rightarrow\pext(\xbar)=\inf p$, as claimed. \pfpart{(\ref{pr:equiv-astral-kkt:c}) $\Rightarrow$ (\ref{pr:equiv-astral-kkt:a}): } Suppose $\seq{\xx_t}$ is a sequence as in part~(\ref{pr:equiv-astral-kkt:c}). For all $t$, because $\xx_t$ is standard feasible, $p(\xx_t)=f(\xx_t)$. Thus, $p(\xx_t)=f(\xx_t)\rightarrow \inf p$, and therefore, \[ \inf p = \lim p(\xx_t) \geq \pext(\xbar) \geq \inf p \] where the inequalities are from the definition of an extension and by \Cref{pr:fext-min-exists}. This proves the claim. \qedhere \end{proof-parts} \end{proof} The next theorems show that natural astral analogs of the standard KKT conditions are necessary and sufficient to ensure that a point $\xbar\in\extspace$ is an astral solution of the convex program $P$. These astral conditions are largely derived from the standard conditions simply by replacing standard functions by their astral extensions. Importantly, we have based the astral version of the differential form on the modified \eqref{eqn:kkt-lagrang-opt-mod} in which scalars have been moved inside subdifferentials, rather than on \eqref{eqn:kkt-lagrang-opt}, which, as discussed above, is slightly incorrect. Indeed, the modification leading to \eqref{eqn:kkt-lagrang-opt-mod} is even more critical in the astral setting where, for instance, multiplying an astral subdifferential by a negative scalar can be problematic, even for the subdifferential of an affine function. We also made one additional change, replacing $\lambda_i g_i$ with the closely-related function $\sfprod{\lambda_i}{g_i}$, as defined in \eqref{eq:sfprod-defn}. These two functions are identical on all of $g_i$'s effective domain, and hence on those of $f$ and $p$ as well.
Nevertheless, we will see that $\sfprod{\lambda_i}{g_i}$ turns out to be more convenient when working with astral subdifferentials. We first consider when the value of the program is finite. In this case, $\xbar\in\extspace$ is an astral solution of the program if and only if an astral version of the KKT conditions holds which only involves strict astral subgradients. \begin{theorem} \label{thm:kkt-necsuf-strict} Let $P$ be an ordinary convex program in the notation above that satisfies Slater's condition. Let $p$ and $\lagrangfcn$ be as in Eqs.~(\ref{eqn:kkt-p-defn}) and (\ref{eqn:lagrang-defn}), and let $\xbar\in\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:kkt-necsuf-strict:a} $\xbar$ is an astral solution of $P$ and $\inf p > -\infty$. \item \label{thm:kkt-necsuf-strict:b} $\xbar$ is astral feasible, and there exists $\lammupair\in\Lampairs$ with $\lambda_i \gext_i(\xbar)=0$ for $i=1,\ldots,r$ and with \begin{equation} \label{eq:thm:kkt-necsuf-strict:1} \zero \in \basubdiffext{\xbar} + \sumitor \basubdiflamgiext{\xbar} + \sumjtos \basubdifmuhjext{\xbar}. \end{equation} \item \label{thm:kkt-necsuf-strict:c} $\xbar$ is astral feasible, and there exists $\lammupair\in\Lampairs$ with $\lambda_i \gext_i(\xbar)=0$ for $i=1,\ldots,r$ and with $\zero\in\basubdiflagranglmext{\xbar}$. \end{letter-compact} \end{theorem} Thus, as with the standard KKT conditions, this theorem shows that astral versions of feasibility, dual feasibility, complementary slackness and first-order optimality based on the Lagrangian are necessary and sufficient for $\xbar$ to be an astral solution of $P$. Before proving this theorem, we give a proposition that will be useful for working with the Lagrangian's extension and its astral subdifferentials, proved by direct application of previously developed tools. \begin{proposition} \label{pr:lagrang-ext-subdif} Let $P$ be an ordinary convex program in the notation above that satisfies Slater's condition. 
Let $\lagrangfcn$ be as in \eqref{eqn:lagrang-defn}. Let $\xbar\in\extspace$ and $\lammupair\in\Lampairs$. Then the following hold: \begin{letter} \item \label{pr:lagrang-ext-subdif:b} If $\fext(\xbar), \lambda_1 \gext_1(\xbar),\ldots, \lambda_r \gext_r(\xbar), \mu_1 \hext_1(\xbar),\ldots, \mu_s \hext_s(\xbar) $ are summable, then \[ \lagranglmext{\xbar} = \fext(\xbar) + \sumitor \lambda_i \gext_i(\xbar) + \sumjtos \mu_j \hext_j(\xbar). \] \item \label{pr:lagrang-ext-subdif:a1} $\displaystyle \asubdiffext{\xbar} + \sumitor \asubdiflamgiext{\xbar} + \sumjtos \asubdifmuhjext{\xbar} \subseteq \asubdiflagranglmext{\xbar} $. \item \label{pr:lagrang-ext-subdif:a2} $\displaystyle \basubdiffext{\xbar} + \sumitor \basubdiflamgiext{\xbar} + \sumjtos \basubdifmuhjext{\xbar} = \basubdiflagranglmext{\xbar} $. \end{letter} \end{proposition} \begin{proof} Let $\xing$ be as in Proposition~\ref{pr:slater-conseqs}. \begin{proof-parts} \pfpart{Part~(\ref{pr:lagrang-ext-subdif:b}):} Suppose the values $\fext(\xbar), \lambda_1 \gext_1(\xbar),\dotsc, \lambda_r \gext_r(\xbar), \mu_1 \hext_1(\xbar),\dotsc, \mu_s \hext_s(\xbar) $ are summable. By Proposition~\ref{pr:scal-mult-ext}, \begin{equation} \label{eq:pr:lagrang-ext-subdif:1} \lambda_i \gext_i(\xbar) = \lamgiext(\xbar). \end{equation} Also, for $j=1,\ldots,s$, since $h_j$ is affine, we can write $h_j(\xx)=\xx\cdot\ww_j-b_j$ for some $\ww_j\in\Rn$ and $b_j\in\R$. Then \begin{equation} \label{eq:pr:lagrang-ext-subdif:2} \mu_j \hext_j(\xbar) = \mu_j (\xbar\cdot\ww_j - b_j) = \xbar\cdot (\mu_j \ww_j) - \mu_j b_j = \muhjext(\xbar), \end{equation} where the first and third equalities are by Example~\ref{ex:ext-affine}. Thus, $\fext(\xbar), \lamgextgen{1}(\xbar),\ldots, \lamgextgen{r}(\xbar), \muhextgen{1}(\xbar),\ldots, \muhextgen{s}(\xbar) $ are summable. 
We have \begin{equation} \label{eq:pr:lagrang-ext-subdif:3} \xing \in \ri(\dom{f}) \subseteq \ri(\dom{g_i}) = \ri(\dom{(\sfprod{\lambda_i}{g_i})}) \subseteq \ri(\dom{(\lambda_i g_i)}) \end{equation} for $i=1,\ldots,r$, where the last inclusion is because if $\lambda_i>0$ then $\sfprod{\lambda_i}{g_i}=\lambda_i g_i$, and if $\lambda_i=0$ then $\dom{(\lambda_i g_i)}=\Rn$. Also, $\dom{(\mu_j h_j)}=\Rn$ for $j=1,\ldots,s$. Therefore, we can apply Theorem~\ref{thm:ext-sum-fcns-w-duality} yielding \begin{eqnarray*} \lagranglmext{\xbar} &=& \fext(\xbar) + \sumitor \lamgiext(\xbar) + \sumjtos \muhjext(\xbar) \\ &=& \fext(\xbar) + \sumitor \lambda_i \gext_i(\xbar) + \sumjtos \mu_j \hext_j(\xbar), \end{eqnarray*} with the second equality following from Eqs.~(\ref{eq:pr:lagrang-ext-subdif:1}) and~(\ref{eq:pr:lagrang-ext-subdif:2}). \pfpart{Parts~(\ref{pr:lagrang-ext-subdif:a1}) and~(\ref{pr:lagrang-ext-subdif:a2}): } We can slightly rewrite the Lagrangian from \eqref{eqn:lagrang-defn} as \[ \lagranglm{\xx} = f(\xx) + \sumitor (\sfprod{\lambda_i}{g_i})(\xx) + \sumjtos (\mu_j h_j)(\xx) \] since $(\sfprod{\lambda_i}{g_i})(\xx)= \lambda_i g_i(\xx)$ for all $\xx\in\dom{g_i}$ and therefore for all $\xx\in\dom{f}$. In view of \eqref{eq:pr:lagrang-ext-subdif:3} and since, as already noted, $\dom{(\mu_j h_j)}=\Rn$ for $j=1,\ldots,s$, we can apply Theorem~\ref{thm:subgrad-sum-fcns} yielding the claimed expressions. \qedhere \end{proof-parts} \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:kkt-necsuf-strict}] The proof is based mainly on relating the subdifferentials of functions appearing in the Lagrangian to the subdifferentials of functions appearing in $p$. As such, we begin by proving two claims making this connection explicit. \begin{claimpx} \label{cl:subdif-neg-o-g} Let $g:\Rn\rightarrow\Rext$ be convex and proper, and let $\xbar\in\extspace$. Assume $\gext(\xbar)\leq 0$, and that there exists $\xing\in\Rn$ with $g(\xing)<0$.
Then \[ \basubdifNegogext{\xbar} \;=\; \bigcup\;\BigBraces{\basubdiflamgext{\xbar} :\: \lambda\in\Rpos,\, \lambda\gext(\xbar)=0 }. \] \end{claimpx} \begin{proofx} By a standard calculation combined with Propositions~\ref{pr:subdif-in-1d} and~\ref{pr:ben-subgrad-at-x-in-rn}, for $\bary\in\Rext$, \[ \basubdifnegext{\bary} = \left\{ \begin{array}{cl} \{0\} & \mbox{if $\bary < 0$} \\ \relax [0,+\infty) & \mbox{if $\bary = 0$} \\ \emptyset & \mbox{if $\bary > 0$.} \end{array} \right. \] Thus, since $\gext(\xbar)\leq 0$, it follows that $\lambda\in\basubdifnegext{\gext(\xbar)}$ if and only if $\lambda\geq 0$ and $\lambda\gext(\xbar)=0$. With this observation, the claim now follows directly from Corollary~\ref{cor:strict-subgrad-comp-inc-fcn} (applied with $f$ and $g$, as they appear in that corollary, set to $g$ and $\negf$, and noting that $g(\xing)\in\intdom{\negf}$). \end{proofx} \begin{claimpx} \label{cl:subdif-indz-o-h} Let $h:\Rn\rightarrow\R$ be affine, and let $\xbar\in\extspace$. Assume $\hext(\xbar) = 0$. Then \[ \basubdifindzohext{\xbar} \;=\; \bigcup\;\BigBraces{\basubdifmuhext{\xbar} :\: \mu\in\R }. \] \end{claimpx} \begin{proofx} Since $h$ is affine, it can be written in the form $h(\xx)=\xx\cdot\ww-b$ for $\xx\in\Rn$, for some $\ww\in\Rn$ and $b\in\R$. This implies that $\hext(\xbar)=\xbar\cdot\ww-b$ (Example~\ref{ex:ext-affine}), and so that $\xbar\cdot\ww=b$. Let $M=\{\xx\in\Rn : \xx\cdot\ww=b\}$. Then $\indz\circ h$ is exactly the indicator function for $M$; that is, $\indz\circ h=\indf{M}$. Also, $\xbar\in\Mbar$ by Corollary~\ref{cor:closure-affine-set} (applied with $\A=\trans{\ww}$ and $\bb=b$), implying $M\neq\emptyset$ and so that there exists $\xing\in M$. Proposition~\ref{pr:subdif-ind-fcn} (applied with $n=1$ and $S=\{0\}$) implies $\asubdifindzext{0}=\R$ since $\indzstar\equiv 0$. Also, $\xing\cdot\ww-b=0\in\ri(\dom \indz)$. 
Therefore, we can apply Corollary~\ref{cor:subgrad-comp-w-affine} (with $\A=\trans{\ww}$, $\bb=-b$ and $f=\indz$), yielding \begin{equation} \label{eq:cl:subdif-indz-o-h:1} \basubdifindzohext{\xbar} = \ww\, \basubdifindzext{\trans{\ww}\xbar-b} = \ww\, \basubdifindzext{\hext(\xbar)} = \ww\, \basubdifindzext{0} = \R\,\ww. \end{equation} On the other hand, by Example~\ref{ex:affine-ben-subgrad}, for $\mu\in\R$, $\basubdifmuhext{\xbar} = \{\mu \ww\}$ (since $(\mu h)(\xx)=\xx\cdot(\mu\ww) - \mu b$). Taking the union over $\mu\in\R$ then yields exactly $\basubdifindzohext{\xbar}$ by \eqref{eq:cl:subdif-indz-o-h:1}, completing the proof. \end{proofx} Let $\xing$ be as in Proposition~\ref{pr:slater-conseqs}. \begin{proof-parts} \pfpart{(\ref{thm:kkt-necsuf-strict:b}) $\Rightarrow$ (\ref{thm:kkt-necsuf-strict:a}): } Suppose statement~(\ref{thm:kkt-necsuf-strict:b}) holds for some $\lammupair\in\Lampairs$. Then: \begin{eqnarray} \zero &\in& \basubdiffext{\xbar} + \sumitor \basubdiflamgiext{\xbar} + \sumjtos \basubdifmuhjext{\xbar} \nonumber \\ &\subseteq& \basubdiffext{\xbar} + \sumitor \basubdifNegogiext{\xbar} + \sumjtos \basubdifindzohjext{\xbar} \nonumber \\ &=& \basubdifpext{\xbar}. \label{eq:thm:kkt-necsuf-strict:2} \end{eqnarray} The inclusion at the second line is because $\basubdiflamgiext{\xbar}\subseteq \basubdifNegogiext{\xbar}$ for $i=1,\ldots,r$ by Claim~\ref{cl:subdif-neg-o-g}, and because $\basubdifmuhjext{\xbar}\subseteq \basubdifindzohjext{\xbar}$ for $j=1,\ldots,s$ by Claim~\ref{cl:subdif-indz-o-h}. Note that the hypotheses of these claims are satisfied by the theorem's assumptions and the properties of $\xing$. The equality is by Theorem~\ref{thm:subgrad-sum-fcns}(\ref{thm:subgrad-sum-fcns:b}) which can be applied since $\xing\in\ri(\dom{f})$ and by Proposition~\ref{pr:slater-conseqs}(\ref{pr:slater-conseqs:b},\ref{pr:slater-conseqs:c}). Thus, $\zero\in\basubdifpext{\xbar}$. 
By Proposition~\ref{pr:asub-zero-is-min}, it follows that $\xbar$ minimizes $\pext$, and therefore that $\xbar$ is an astral solution of $P$. It also follows that $\inf p = -\pstar(\zero)\in\R$. \pfpart{(\ref{thm:kkt-necsuf-strict:a}) $\Rightarrow$ (\ref{thm:kkt-necsuf-strict:b}): } Suppose statement~(\ref{thm:kkt-necsuf-strict:a}) holds so that $\xbar$ minimizes $\pext$, implying $\xbar$ is astral feasible by Proposition~\ref{pr:equiv-astral-kkt}. By Proposition~\ref{pr:asub-zero-is-min}, $\zero\in\asubdifpext{\xbar}$. Also, $\pstar(\zero)=-\inf p\in\R$ since $\inf p>-\infty$ by assumption, and $\inf p < +\infty$ by Proposition~\ref{pr:slater-conseqs}(\ref{pr:slater-conseqs:d}). Thus, \[ \zero \in \basubdifpext{\xbar} = \basubdiffext{\xbar} + \sumitor \basubdifNegogiext{\xbar} + \sumjtos \basubdifindzohjext{\xbar} \] with the equality following, as in \eqref{eq:thm:kkt-necsuf-strict:2}, from Theorem~\ref{thm:subgrad-sum-fcns}(\ref{thm:subgrad-sum-fcns:b}). Consequently, there exists $\uu\in\basubdiffext{\xbar}$, $\vv_i\in\basubdifNegogiext{\xbar}$ for $i=1,\ldots,r$, and $\ww_j\in\basubdifindzohjext{\xbar}$ for $j=1,\ldots,s$, such that $\uu+\sumitor \vv_i + \sumjtos \ww_j = \zero$. For each $i$, since $\vv_i\in\basubdifNegogiext{\xbar}$, it follows from Claim~\ref{cl:subdif-neg-o-g} that there exists $\lambda_i\in\Rpos$ with $\lambda_i \gext_i(\xbar)=0$ such that $\vv_i\in\basubdiflamgiext{\xbar}$. Likewise, for each $j$, since $\ww_j\in\basubdifindzohjext{\xbar}$, from Claim~\ref{cl:subdif-indz-o-h}, there exists $\mu_j\in\R$ such that $\ww_j\in \basubdifmuhjext{\xbar}$. Thus, \begin{eqnarray*} \zero &=& \uu + \sumitor \vv_i + \sumjtos \ww_j \\ &\in& \basubdiffext{\xbar} + \sumitor \basubdiflamgiext{\xbar} + \sumjtos \basubdifmuhjext{\xbar} \end{eqnarray*} as claimed, with all of the stated conditions satisfied (and with the $\lambda_i$'s and $\mu_j$'s combined into vectors $\lammupair\in\Lampairs$). 
\pfpart{(\ref{thm:kkt-necsuf-strict:b}) $\Leftrightarrow$ (\ref{thm:kkt-necsuf-strict:c}): } Proposition~\ref{pr:lagrang-ext-subdif}(\ref{pr:lagrang-ext-subdif:a2}) immediately implies that $\zero\in\basubdiflagranglmext{\xbar}$ if and only if \eqref{eq:thm:kkt-necsuf-strict:1} holds. \qedhere \end{proof-parts} \end{proof} In the general case that the program's value is not necessarily finite, the astral KKT conditions are still necessary and sufficient for a point to be a solution, but now using general (that is, not necessarily strict) astral subgradients. \begin{theorem} \label{thm:kkt-necsuf-gen} Let $P$ be an ordinary convex program in the notation above that satisfies Slater's condition. Let $p$ and $\lagrangfcn$ be as in Eqs.~(\ref{eqn:kkt-p-defn}) and (\ref{eqn:lagrang-defn}), and let $\xbar\in\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:kkt-necsuf-gen:a} $\xbar$ is an astral solution of $P$. \item \label{thm:kkt-necsuf-gen:b} $\xbar$ is astral feasible, and there exists $\lammupair\in\Lampairs$ with $\lambda_i \gext_i(\xbar)=0$ for $i=1,\ldots,r$ and with \[ \zero \in \asubdiffext{\xbar} + \sumitor \asubdiflamgiext{\xbar} + \sumjtos \asubdifmuhjext{\xbar}. \] \item \label{thm:kkt-necsuf-gen:c} $\xbar$ is astral feasible, and there exists $\lammupair\in\Lampairs$ with $\lambda_i \gext_i(\xbar)=0$ for $i=1,\ldots,r$ and with $\zero\in\asubdiflagranglmext{\xbar}$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:kkt-necsuf-gen:a}) $\Rightarrow$ (\ref{thm:kkt-necsuf-gen:b}): } If $\inf p>-\infty$, then the claim follows directly from Theorem~\ref{thm:kkt-necsuf-strict}. Otherwise, suppose $\inf p = -\infty$ and that statement~(\ref{thm:kkt-necsuf-gen:a}) holds, implying $\pext(\xbar)=\inf p = -\infty$. Then $\xbar$ is astral feasible by Proposition~\ref{pr:equiv-astral-kkt}, so $\fext(\xbar)=\pext(\xbar)=-\infty$ by Proposition~\ref{pr:kkt-p-C-exps}(\ref{pr:kkt-p-C-exps:b}). 
Therefore, $\xbar$ minimizes $\fext$ so $\zero\in\asubdiffext{\xbar}$ by Proposition~\ref{pr:asub-zero-is-min}, implying in particular that $\xbar\in\cldom{f}$ by Proposition~\ref{pr:subgrad-imp-in-cldom}(\ref{pr:subgrad-imp-in-cldom:a}) (and Proposition~\ref{pr:h:1}\ref{pr:h:1c}). For each $i=1,\ldots,r$, let $\lambda_i=0$, implying $\lambda_i\gext_i(\xbar)=0$. Then $\sfprodext{\lambda_i}{g_i}(\xbar)=\inddomgiext(\xbar)=0$ by \eqref{eq:sfprod-identity} and Proposition~\ref{pr:inds-ext} since $\xbar\in\cldom{f}\subseteq\cldom{g_i}$. Thus, $\xbar$ minimizes $\sfprodext{\lambda_i}{g_i}$, so $\zero\in\asubdiflamgiext{\xbar}$ by Proposition~\ref{pr:asub-zero-is-min}. Similarly, for each $j=1,\ldots,s$, let $\mu_j=0$. Then $\mu_j h_j\equiv 0$ so $\zero\in\asubdifmuhjext{\xbar}$. Also, $\lammupair\in\Lampairs$. Combining yields \[ \zero = \zero + \sumitor \zero + \sumjtos \zero \in \asubdiffext{\xbar} + \sumitor \asubdiflamgiext{\xbar} + \sumjtos \asubdifmuhjext{\xbar}, \] completing the proof. \pfpart{(\ref{thm:kkt-necsuf-gen:b}) $\Rightarrow$ (\ref{thm:kkt-necsuf-gen:c}): } By Proposition~\ref{pr:lagrang-ext-subdif}(\ref{pr:lagrang-ext-subdif:a1}), if statement~(\ref{thm:kkt-necsuf-gen:b}) holds, then statement~(\ref{thm:kkt-necsuf-gen:c}) must as well. \pfpart{(\ref{thm:kkt-necsuf-gen:c}) $\Rightarrow$ (\ref{thm:kkt-necsuf-gen:a}): } Suppose statement~(\ref{thm:kkt-necsuf-gen:c}) holds for some $\lammupair\in\Lampairs$. Then $\xbar$ minimizes $\lagranglmext{\cdot}$ by Proposition~\ref{pr:asub-zero-is-min}, so in particular, $\lagranglmext{\xbar}<+\infty$ by Proposition~\ref{pr:slater-conseqs}(\ref{pr:slater-conseqs:e}). As a first case, suppose $\lagranglmext{\xbar}\in\R$. Then $-\lagranglmstar{\zero}=\lagranglmext{\xbar}\in\R$, by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:d}), so $\zero\in\basubdiflagranglmext{\xbar}$. Therefore, $\xbar$ is an astral solution of $P$ by Theorem~\ref{thm:kkt-necsuf-strict}. 
Otherwise, $\lagranglmext{\xbar}=-\infty$. From the stated assumptions, we can apply Proposition~\ref{pr:lagrang-ext-subdif}(\ref{pr:lagrang-ext-subdif:b}), yielding that $\lagranglmext{\xbar}=\fext(\xbar)$. Thus, $\fext(\xbar)=-\infty$. Since also $\xbar$ is astral feasible, this implies that $\xbar$ minimizes $\pext$ by Proposition~\ref{pr:kkt-p-C-exps}(\ref{pr:kkt-p-C-exps:b}), proving $\xbar$ is an astral solution of $P$. \qedhere \end{proof-parts} \end{proof} Throughout this section, we have chosen to work with affine equality constraints, $h_j(\xx)=0$, separately for each $j$. Nonetheless, when convenient, these can be combined into a single matrix constraint, both for standard and astral feasibility. More precisely, suppose, for $j=1,\ldots,s$, that $h_j(\xx)=\xx\cdot\aaa_j-b_j$ for $\xx\in\Rn$ where $\aaa_j\in\Rn$ and $b_j\in\R$. Then the equality constraints appearing in $P$ mean that $\xx\cdot\aaa_j=b_j$ for all $j$, or, in matrix form, that $\A\xx=\bb$ where $\A\in\R^{s\times n}$ is a matrix whose $j$-th row is equal to $\trans{\aaa}_j$ (and, as usual, $\bb=\trans{[b_1,\ldots,b_s]}$). For astral feasibility, these equality constraints require, for all $j$, that $\hext_j(\xbar)=0$, that is, that $\xbar\cdot\aaa_j=b_j$ (by Example~\ref{ex:ext-affine}). By Proposition~\ref{pr:mat-prod-is-row-prods-if-finite}, this condition is equivalent to the astral matrix constraint $\A\xbar=\bb$. For the special case that there are no inequality constraints but only affine equality constraints, the next corollary summarizes conditions for optimality as implied by Proposition~\ref{pr:equiv-astral-kkt}, and Theorems~\ref{thm:kkt-necsuf-strict} and~\ref{thm:kkt-necsuf-gen}. \begin{corollary} \label{cor:kkt-no-ineq-consts} Let $P$ be an ordinary convex program in the notation above with no inequality constraints ($r=0$), and with equality constraints given by affine functions $h_j(\xx)=\xx\cdot\aaa_j-b_j$ for $\xx\in\Rn$ where $\aaa_j\in\Rn$ and $b_j\in\R$. 
Let $\A\in\R^{s\times n}$ be the matrix whose $j$-th row is $\trans{\aaa}_j$ (so that $\transA=[\aaa_1,\ldots,\aaa_s]$), and let $\bb=\trans{[b_1,\ldots,b_s]}$. Let \begin{equation} \label{eq:cor:kkt-no-ineq-consts:1} p(\xx)=f(\xx) + \indZ(\A\xx-\bb) \end{equation} for $\xx\in\Rn$. Assume there exists $\xing\in\ri(\dom{f})$ such that $\A\xing=\bb$. Let $\xbar\in\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{cor:kkt-no-ineq-consts:a} $\xbar$ is an astral solution of $P$. \item \label{cor:kkt-no-ineq-consts:b} $\A\xbar=\bb$ and $\xbar$ minimizes $\fext$ over all $\xbar'\in\extspace$ with $\A\xbar'=\bb$. \item \label{cor:kkt-no-ineq-consts:c} There exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$ with $\A\xx_t=\bb$ for all $t$ and $f(\xx_t)\rightarrow \inf p$. \item \label{cor:kkt-no-ineq-consts:d} $\A\xbar=\bb$ and $\asubdiffext{\xbar} \cap (\colspace \transA)\neq\emptyset$. \end{letter-compact} Under the additional assumption that $\inf p>-\infty$, the same holds if, in part~(\ref{cor:kkt-no-ineq-consts:d}), $\asubdiffext{\xbar}$ is replaced by $\basubdiffext{\xbar}$. \end{corollary} \begin{proof} In the case considered in this corollary, the existence of $\xing$ is equivalent to Slater's condition, and $p$, as given in \eqref{eq:cor:kkt-no-ineq-consts:1}, is the same as in \eqref{eqn:kkt-p-defn}. The set $C$ of standard feasible points in \eqref{eqn:feas-set-defn} is in this case $\{\zz\in\Rn : \A\zz=\bb\}$, with closure $\Cbar=\{\zbar\in\extspace : \A\zbar=\bb\}$ by Corollary~\ref{cor:closure-affine-set}. Thus, by Proposition~\ref{pr:kkt-p-C-exps}(\ref{pr:kkt-p-C-exps:a}), a point $\zbar\in\extspace$ is astral feasible for $P$ if and only if $\A\zbar=\bb$. \begin{proof-parts} \pfpart{(\ref{cor:kkt-no-ineq-consts:a}) $\Leftrightarrow$ (\ref{cor:kkt-no-ineq-consts:b}) $\Leftrightarrow$ (\ref{cor:kkt-no-ineq-consts:c}): } In light of the comments above, this is immediate from Proposition~\ref{pr:equiv-astral-kkt}. 
\pfpart{(\ref{cor:kkt-no-ineq-consts:a}) $\Leftrightarrow$ (\ref{cor:kkt-no-ineq-consts:d}): } For all $\muvec\in\R^s$, \begin{equation} \label{eq:cor:kkt-no-ineq-consts:2} \sumjtos \basubdifmuhjext{\xbar} = \Braces{ \sumjtos \mu_j \aaa_j } \end{equation} since, by Example~\ref{ex:affine-ben-subgrad}, $\basubdifmuhjext{\xbar}=\{\mu_j \aaa_j\}$ for all $j$. Suppose for the moment that $\inf p > -\infty$. Then by Theorem~\refequiv{thm:kkt-necsuf-strict}{thm:kkt-necsuf-strict:a}{thm:kkt-necsuf-strict:b}, $\xbar$ is an astral solution of $P$ if and only if $\A\xbar=\bb$ (so that $\xbar$ is astral feasible) and there exists $\uu\in\basubdiffext{\xbar}$ such that $-\uu \in \sumjtos \basubdifmuhjext{\xbar}$ for some $\muvec\in\R^s$. By \eqref{eq:cor:kkt-no-ineq-consts:2}, this last condition holds if and only if $-\uu\in\colspace \transA$, and so also if and only if $\uu\in\colspace \transA$. Therefore, when $\inf p>-\infty$, $\xbar$ is an astral solution of $P$ if and only if $\A\xbar=\bb$ and $\basubdiffext{\xbar} \cap (\colspace \transA)\neq\emptyset$. Consider now the general case in which $\inf p$ might be $-\infty$. Suppose first that $\A\xbar=\bb$ and that there exists a point $\uu\in\asubdiffext{\xbar} \cap (\colspace \transA)$. Then, by the foregoing argument, $-\uu \in \sumjtos \basubdifmuhjext{\xbar}$ for some $\muvec\in\R^s$, so \[ \zero \in \asubdiffext{\xbar} + \sumjtos \basubdifmuhjext{\xbar} \subseteq \asubdiffext{\xbar} + \sumjtos \asubdifmuhjext{\xbar}. \] It then follows by Theorem~\refequiv{thm:kkt-necsuf-gen}{thm:kkt-necsuf-gen:a}{thm:kkt-necsuf-gen:b} that $\xbar$ is an astral solution of $P$. For the converse, suppose $\xbar$ is an astral solution of $P$, implying $\A\xbar=\bb$. If $\inf p>-\infty$, then $\basubdiffext{\xbar} \cap (\colspace \transA)\neq\emptyset$, as already shown. Otherwise, if $\inf p = -\infty$, then $\pext(\xbar)=-\infty$, implying $\fext(\xbar)=-\infty$ by Proposition~\ref{pr:kkt-p-C-exps}(\ref{pr:kkt-p-C-exps:b}). 
Therefore, $\zero\in\asubdiffext{\xbar} \cap (\colspace \transA)$ by Proposition~\ref{pr:asub-zero-is-min} (and since $\zero$ is included in every linear subspace), proving the claim in this case as well. \qedhere \end{proof-parts} \end{proof} Analogous to the standard setting, if a pair $\xbar,\lammupair$ satisfies the astral KKT conditions given in Theorem~\ref{thm:kkt-necsuf-gen}(\ref{thm:kkt-necsuf-gen:c}), then that pair must be a saddle point of the Lagrangian's extension, as we show next. Furthermore, when the value of the convex program $P$ is finite, this condition is both necessary and sufficient. \begin{theorem} \label{thm:kkt-equiv-saddle} Let $P$ be an ordinary convex program in the notation above that satisfies Slater's condition. Let $p$ and $\lagrangfcn$ be as in Eqs.~(\ref{eqn:kkt-p-defn}) and (\ref{eqn:lagrang-defn}). Let $\xbar\in\extspace$ and let $\lammupair\in\Lampairs$. Consider the following statements: \begin{roman-compact} \item \label{thm:kkt-equiv-saddle:i} $\xbar$ is astral feasible, $\lambda_i \gext_i(\xbar)=0$ for $i=1,\ldots,r$, and $\zero\in\asubdiflagranglmext{\xbar}$. \item \label{thm:kkt-equiv-saddle:ii} For all $\lammupairp\in\Lampairs$ and for all $\xbar'\in\extspace$, \begin{equation} \label{eq:thm:kkt-equiv-saddle:1} \lagrangext{\xbar}{\lamvec',\muvec'} \leq \lagranglmext{\xbar} \leq \lagranglmext{\xbar'}. \end{equation} \end{roman-compact} Then the following hold: \begin{letter-compact} \item \label{thm:kkt-equiv-saddle:a} Statement~(\ref{thm:kkt-equiv-saddle:i}) implies statement~(\ref{thm:kkt-equiv-saddle:ii}). \item \label{thm:kkt-equiv-saddle:b} There exists a pair $\xbar,\lammupair$ satisfying both statements~(\ref{thm:kkt-equiv-saddle:i}) and~(\ref{thm:kkt-equiv-saddle:ii}). For any such pair, $\xbar$ is an astral solution of $P$ and $\lagranglmext{\xbar}=\inf p$. \item \label{thm:kkt-equiv-saddle:c} In addition to the assumptions above, suppose that $\inf p > -\infty$. 
Then statements~(\ref{thm:kkt-equiv-saddle:i}) and~(\ref{thm:kkt-equiv-saddle:ii}) are equivalent. The same holds if, in statement~(\ref{thm:kkt-equiv-saddle:i}), $\asubdiflagranglmext{\xbar}$ is replaced with $\basubdiflagranglmext{\xbar}$. \end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:kkt-equiv-saddle:a}):} Suppose statement~(\ref{thm:kkt-equiv-saddle:i}) holds. Then $\zero\in\asubdiflagranglmext{\xbar}$, implying $\xbar$ minimizes $\lagranglmext{\cdot}$ by Proposition~\ref{pr:asub-zero-is-min}, proving the second inequality of \eqref{eq:thm:kkt-equiv-saddle:1}. This also implies $\lagranglmext{\xbar}=\inf \lagranglm{\cdot} < +\infty$, by Propositions~\ref{pr:fext-min-exists} and~\ref{pr:slater-conseqs}(\ref{pr:slater-conseqs:e}). By our assumptions, $\lambda_i \gext_i(\xbar)=0$ for $i=1,\ldots,r$, and $\hext_j(\xbar)=0$ for $j=1,\ldots,s$. Therefore, by Proposition~\ref{pr:lagrang-ext-subdif}(\ref{pr:lagrang-ext-subdif:b}), $ \lagranglmext{\xbar} = \fext(\xbar) $, implying $\fext(\xbar) < +\infty$. Let $\lammupairp\in\Lampairs$. Then $\lambda'_i \gext_i(\xbar) \leq 0$ for $i=1,\ldots,r$. Thus, since $\fext(\xbar)<+\infty$ and $\hext_j(\xbar)=0$ for all $j$, the summability conditions of Proposition~\ref{pr:lagrang-ext-subdif}(\ref{pr:lagrang-ext-subdif:b}) are satisfied for $\xbar,\lammupairp$, so \begin{eqnarray*} \lagrangext{\xbar}{\lamvec',\muvec'} &=& \fext(\xbar) + \sumitor \lambda'_i \gext_i(\xbar) + \sumjtos \mu'_j \hext_j(\xbar) \\ &\leq& \fext(\xbar) \\ &=& \lagranglmext{\xbar}, \end{eqnarray*} proving the first inequality of \eqref{eq:thm:kkt-equiv-saddle:1}. \pfpart{Part~(\ref{thm:kkt-equiv-saddle:b}):} By Proposition~\ref{pr:fext-min-exists}, there exists $\xbar\in\extspace$ minimizing $\pext$, that is, an astral solution of $P$. 
Therefore, by Theorem~\ref{thm:kkt-necsuf-gen}, there must exist $\lammupair\in\Lampairs$ with the properties stated in statement~(\ref{thm:kkt-equiv-saddle:i}), and so also statement~(\ref{thm:kkt-equiv-saddle:ii}), as just proved. Suppose the pair $\xbar,\lammupair$ satisfies statement~(\ref{thm:kkt-equiv-saddle:i}). Then $\xbar$ is an astral solution of $P$ by Theorem~\ref{thm:kkt-necsuf-gen}, so $\pext(\xbar)=\inf p$. Also, $\lagranglmext{\xbar}=\fext(\xbar)$ by Proposition~\ref{pr:lagrang-ext-subdif}(\ref{pr:lagrang-ext-subdif:b}), and $\fext(\xbar)=\pext(\xbar)$ by Proposition~\ref{pr:kkt-p-C-exps}(\ref{pr:kkt-p-C-exps:b}) since $\xbar$ is astral feasible. Thus, $\lagranglmext{\xbar}=\inf p$. \pfpart{Part~(\ref{thm:kkt-equiv-saddle:c}):} Suppose $\inf p > -\infty$ and also that statement~(\ref{thm:kkt-equiv-saddle:ii}) holds. We prove the slightly stronger form of statement~(\ref{thm:kkt-equiv-saddle:i}) in which $\asubdiflagranglmext{\xbar}$ has been replaced with $\basubdiflagranglmext{\xbar}$ (which immediately implies the stated version with $\asubdiflagranglmext{\xbar}$). We begin by showing that $\lagranglmext{\xbar}$ and $\fext(\xbar)$ are finite and equal. \begin{claimpx} \label{cl:thm:kkt-equiv-saddle:1} $\lagranglmext{\xbar}=\fext(\xbar)\in\R$. \end{claimpx} \begin{proofx} Let $\xing$ be as in Proposition~\ref{pr:slater-conseqs}. Observe first that \begin{equation} \label{eq:thm:kkt-equiv-saddle:3} \fext(\xbar) = \lagrangext{\xbar}{\zerov{r},\zerov{s}} \leq \lagranglmext{\xbar} \leq \lagranglm{\xing} < +\infty. \end{equation} (Here, as usual, $\zerov{r}$ and $\zerov{s}$ denote all-zero vectors in $\R^r$ and $\R^s$.) The equality is because $\lagrang{\xx}{\zerov{r},\zerov{s}}=f(\xx)$ for $\xx\in\Rn$. The first and second inequalities follow from \eqref{eq:thm:kkt-equiv-saddle:1} (and Proposition~\ref{pr:h:1}\ref{pr:h:1a}). The last inequality is by Proposition~\ref{pr:slater-conseqs}(\ref{pr:slater-conseqs:e}). 
Under the conditions of this theorem, by \Cref{roc:thm28.2}, there exists $\lammupairz\in\Lampairs$ for which $\inf \lagranglmz{\cdot} = \inf p$. Therefore, \[ -\infty < \inf p = \inf \lagranglmz{\cdot} \leq \lagranglmzext{\xbar} \leq \lagranglmext{\xbar}. \] The first inequality is by assumption; the second by Proposition~\ref{pr:fext-min-exists}; the third by \eqref{eq:thm:kkt-equiv-saddle:1}. Thus, $\lagranglmext{\xbar}\in\R$. For $\xx\in\Rn$, let $d(\xx)=2\lagranglm{\xx}$, implying, by algebra, that \[ d(\xx)=f(\xx)+\lagranglmd{\xx}. \] By Proposition~\ref{pr:slater-conseqs}(\ref{pr:slater-conseqs:e}), $\xing\in\ri(\dom{\lagranglmd{\cdot}})$. Also, \[ \lagranglmdext{\xbar}\leq\lagranglmext{\xbar}<+\infty \] (by \eqref{eq:thm:kkt-equiv-saddle:1}), so $\fext(\xbar)$ and $\lagranglmdext{\xbar}$ are summable (using \eqref{eq:thm:kkt-equiv-saddle:3}). Therefore, \[ 2 \lagranglmext{\xbar} = \dext(\xbar) = \fext(\xbar) + \lagranglmdext{\xbar} \leq \fext(\xbar) + \lagranglmext{\xbar}. \] The first equality is by Proposition~\ref{pr:scal-mult-ext}, and the second by Theorem~\ref{thm:ext-sum-fcns-w-duality}. It follows that $\lagranglmext{\xbar}\leq\fext(\xbar)$ (since $\lagranglmext{\xbar}\in\R$). Combined with \eqref{eq:thm:kkt-equiv-saddle:3}, this completes the proof. \end{proofx} Since $\xbar$ minimizes $\lagranglmext{\cdot}$, Proposition~\ref{pr:asub-zero-is-min} implies $\zero\in\asubdiflagranglmext{\xbar}$. It follows, by Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:d}), that $\lagranglmstar{\zero}=-\lagranglmext{\xbar}\in\R$, so $\zero\in\basubdiflagranglmext{\xbar}$. For each $i\in\{1,\ldots,r\}$, we show next that $\gext_i(\xbar)\leq 0$. Let $\lammupairp\in\Lampairs$ with $\lamvec'=\ee_i$ (where $\ee_i$ is a standard basis vector in $\R^r$) and $\muvec'=\zerov{s}$. 
Then \begin{equation} \label{eq:thm:kkt-equiv-saddle:4} \fext(\xbar) = \lagranglmext{\xbar} \geq \lagrangext{\xbar}{\lamvec',\muvec'} = \fext(\xbar) + \gext_i(\xbar). \end{equation} The first equality is by Claim~\ref{cl:thm:kkt-equiv-saddle:1}. The inequality is by \eqref{eq:thm:kkt-equiv-saddle:1}. The second equality is by Proposition~\ref{pr:lagrang-ext-subdif}(\ref{pr:lagrang-ext-subdif:b}) whose summability assumption is satisfied since $\fext(\xbar)\in\R$. As claimed, this implies that $\gext_i(\xbar)\leq 0$. Likewise, for each $j\in\{1,\ldots,s\}$, we argue that $\hext_j(\xbar)=0$. Suppose to the contrary that $\hext_j(\xbar)\neq 0$. Let $\sigma$ be the sign of $\hext_j(\xbar)$ (that is, $+1$ if $\hext_j(\xbar)>0$ and $-1$ if $\hext_j(\xbar)<0$). Let $\lammupairp\in\Lampairs$ with $\lamvec'=\zerov{r}$ and $\muvec'=\sigma \ee_j$ (where now $\ee_j$ is a standard basis vector in $\R^s$). Then by similar reasoning, \[ \fext(\xbar) = \lagranglmext{\xbar} \geq \lagrangext{\xbar}{\lamvec',\muvec'} = \fext(\xbar) + \sigma \hext_j(\xbar) = \fext(\xbar) + \abs{\hext_j(\xbar)} > \fext(\xbar), \] a contradiction. Therefore, $\hext_j(\xbar)=0$. We conclude that $\xbar$ is astral feasible. Consequently, \[ \fext(\xbar) = \lagranglmext{\xbar} = \fext(\xbar) + \sumitor \lambda_i \gext_i(\xbar) \] where the first equality is by Claim~\ref{cl:thm:kkt-equiv-saddle:1}, and the second by Proposition~\ref{pr:lagrang-ext-subdif}(\ref{pr:lagrang-ext-subdif:b}) using $\fext(\xbar)\in\R$ and that $\xbar$ is astral feasible (implying the needed summability assumption). Since $\fext(\xbar)\in\R$ and since none of the terms $\lambda_i \gext_i(\xbar)$ can be positive, this implies $\lambda_i \gext_i(\xbar) = 0$ for $i=1,\ldots,r$, completing the proof. 
\qedhere \end{proof-parts} \end{proof} Without the assumption that $\inf p > -\infty$, the equivalence between statements~(\ref{thm:kkt-equiv-saddle:i}) and~(\ref{thm:kkt-equiv-saddle:ii}) in Theorem~\ref{thm:kkt-equiv-saddle}(\ref{thm:kkt-equiv-saddle:c}) does not hold, in general. In other words, as shown in the next example, it is possible for a pair $\xbar,\lammupair$ to be a saddle point, as in statement~(\ref{thm:kkt-equiv-saddle:ii}), without the astral KKT conditions given in statement~(\ref{thm:kkt-equiv-saddle:i}) holding, and also without $\xbar$ being an astral solution of the program $P$. \begin{example} In $\R^2$, consider the problem of minimizing $x_1$ subject to the constraint that $x_2\leq 0$. In other words, we aim to minimize $f(\xx)$ subject to $g(\xx)\leq 0$ where $f(\xx)=\xx\cdot\ee_1$ and $g(\xx)=\xx\cdot\ee_2$. (There are no equality constraints, and we omit subscripts since there is only one constraint.) The Lagrangian is \[ \lagrang{\xx}{\lambda} = f(\xx)+\lambda g(\xx) = \xx\cdot(\ee_1 + \lambda\ee_2). \] Let $\xbar=\limray{(-\ee_1)}\plusl\ee_2$ and let $\lambda=1$. This pair is a saddle point because, for all $\lambda'\in\R$, $\lagrangext{\xbar}{\lambda'}=\xbar\cdot(\ee_1 + \lambda'\ee_2)=-\infty$, so $\xbar$ minimizes $\lagrangext{\cdot}{\lambda}$ and $\lambda$ maximizes $\lagrangext{\xbar}{\cdot}$, satisfying statement~(\ref{thm:kkt-equiv-saddle:ii}) of Theorem~\ref{thm:kkt-equiv-saddle}. On the other hand, $\gext(\xbar)=\xbar\cdot\ee_2=1$, so $\xbar$ is not astral feasible, and therefore does not satisfy the astral KKT conditions in statement~(\ref{thm:kkt-equiv-saddle:i}) of that theorem, nor is it an astral solution. \end{example} We earlier saw that the convex program in Example~\ref{ex:kkt-running-ex-standard} had no finite solution. We now return to that example to see in detail how the same program can be handled using the astral tools developed above.
\begin{example} \label{ex:kkt-running-ex-astral} We consider the convex program of Example~\ref{ex:kkt-running-ex-standard} with $f$, $g_1$ and $g_2$ defined as in \eqref{eq:ex:kkt-running-ex-standard:1}. We can rewrite these functions as $f(\xx)=\xx\cdot\uu$ where $\uu=\trans{[-1,-2,2]}$; $g_1(\xx) = \sum_{i=1}^3 \exp(\xx\cdot\vv_i)$ where $\vv_1=\trans{[1,0,0]}$, $\vv_2=\trans{[0,1,-1]}$, and $\vv_3=\trans{[0,1,1]}$; and $g_2(\xx)=\xx\cdot\ww$ where $\ww=\trans{[-1,1,0]}$. We aim to find a pair $\xbar,\lamvec$ satisfying the conditions in Theorem~\ref{thm:kkt-necsuf-strict}(\ref{thm:kkt-necsuf-strict:b}), and so also in Theorem~\ref{thm:kkt-equiv-saddle}. To do so, for $\lamvec\in\Rpos^2$, we attempt to find $\xbar\in\extspac{3}$ satisfying \eqref{eq:thm:kkt-necsuf-strict:1}, noting that $\sfprod{\lambda_i}{g_i}=\lambda_i g_i$ since $\dom{g_i}=\R^3$. From Example~\ref{ex:affine-ben-subgrad}, $\basubdiffext{\xbar}=\{\uu\}$ and $\basubdif{\lamgextgen{2}}{\xbar}=\{\lambda_2 \ww\}$. By Theorem~\ref{thm:subgrad-sum-fcns}(\ref{thm:subgrad-sum-fcns:b}) and Theorem~\ref{thm:subgrad-fA}(\ref{thm:subgrad-fA:b}), \[ \basubdif{\gext_1}{\xbar} = \Braces{ \sum_{i=1}^3 \vv_i\, \expex(\xbar\cdot\vv_i) }, \] assuming $\xbar\cdot\vv_i<+\infty$ for $i=1,2,3$. Here, we also used \[ \basubdif{\expex}{\bary} = \left\{ \begin{array}{cl} \{\expex(\bary)\} & \mbox{if $\bary<+\infty$} \\ \emptyset & \mbox{if $\bary=+\infty$} \end{array} \right. \] for $\bary\in\Rext$. In addition, $\basubdif{\lamgextgen{1}}{\xbar}=\lambda_1\basubdif{\gext_1}{\xbar}$ by Proposition~\ref{pr:subgrad-scal-mult} (which holds also if $\lambda_1=0$ unless $\basubdif{\gext_1}{\xbar}$ is empty). Combined, all this shows that $\basubdiflagrangext{\xbar}{\lamvec}=\{\zz\}$ where \begin{equation} \label{eq:thm:kkt-equiv-saddle:5} \zz = \uu + \lambda_1 \sum_{i=1}^3 \vv_i\, \expex(\xbar\cdot\vv_i) + \lambda_2 \ww.
\end{equation} Thus, in order for \eqref{eq:thm:kkt-necsuf-strict:1} to hold, or equivalently, for $\zero$ to be included in $\basubdiflagrangext{\xbar}{\lamvec}$, we must have $\zz=\zero$. Suppose indeed that $\zz=\zero$. Then taking inner product with each $\vv_i$ yields three equations, namely: \begin{align} 0 &= \zz\cdot\vv_1 = -1 + \lambda_1 \expex(\xbar\cdot\vv_1) - \lambda_2 \label{eq:ex:kkt-running-ex-astral:1} \\ 0 &= \zz\cdot\vv_2 = -4 + 2\lambda_1 \expex(\xbar\cdot\vv_2) + \lambda_2 \label{eq:ex:kkt-running-ex-astral:2} \\ 0 &= \zz\cdot\vv_3 = 0 + 2\lambda_1 \expex(\xbar\cdot\vv_3) + \lambda_2. \label{eq:ex:kkt-running-ex-astral:3} \end{align} \eqref{eq:ex:kkt-running-ex-astral:1} implies $\lambda_1>0$ (since $\lamvec\in\Rpos^2$ and $\expex\geq 0$). \eqref{eq:ex:kkt-running-ex-astral:3} then implies $\lambda_2=0$ and that $\expex(\xbar\cdot\vv_3)=0$, that is, that $\xbar\cdot\vv_3=-\infty$. Using algebra to solve Eqs.~(\ref{eq:ex:kkt-running-ex-astral:1}) and~(\ref{eq:ex:kkt-running-ex-astral:2}), this then implies $\xbar\cdot\vv_1=\ln(1/\lambda_1)$ and $\xbar\cdot\vv_2=\ln(2/\lambda_1)$. Supposing $\xbar$ satisfies the above, and taking into account that $\lambda_1>0$ and $\lambda_2=0$, we can then compute \begin{eqnarray} \lagranglext{\xbar} &=& \xbar\cdot\uu + \lambda_1 \Bracks{ \sum_{i=1}^3 \expex(\xbar\cdot\vv_i) - 1 } \nonumber \\ &=& \ln(\lambda_1) + 2 \ln(\lambda_1/2) + 3 - \lambda_1. \label{eq:ex:kkt-running-ex-astral:4} \end{eqnarray} The first equality uses Proposition~\ref{pr:lagrang-ext-subdif}(\ref{pr:lagrang-ext-subdif:b}), Example~\ref{ex:ext-affine}, and Proposition~\ref{pr:j:2}(\ref{pr:j:2c}). In the second, we have used the expressions derived above for $\xbar\cdot\vv_i$, for $i=1,2,3$. We also used $\uu=-\vv_1-2\vv_2$, implying $\xbar\cdot\uu=-\xbar\cdot\vv_1-2\xbar\cdot\vv_2$ (assuming summability). 
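These algebraic identities are easy to verify numerically. Here is a minimal sketch in Python (the helper `check` is hypothetical, not part of the text; it encodes $\lambda_2=0$ and $\expex(\xbar\cdot\vv_3)=0$, reflecting $\xbar\cdot\vv_3=-\infty$, as derived above):

```python
import math

def check(lam1):
    # Finite values forced by the derivation above (with lambda2 = 0):
    xv1 = math.log(1.0 / lam1)   # xbar . v1
    xv2 = math.log(2.0 / lam1)   # xbar . v2
    ev3 = 0.0                    # expex(xbar . v3), since xbar . v3 = -infinity
    # The three stationarity equations obtained from z . v_i = 0:
    e1 = -1.0 + lam1 * math.exp(xv1)
    e2 = -4.0 + 2.0 * lam1 * math.exp(xv2)
    e3 = 2.0 * lam1 * ev3
    # Lagrangian value, using xbar . u = -(xbar . v1) - 2 (xbar . v2):
    lag = (-xv1 - 2.0 * xv2) + lam1 * (math.exp(xv1) + math.exp(xv2) + ev3 - 1.0)
    closed_form = math.log(lam1) + 2.0 * math.log(lam1 / 2.0) + 3.0 - lam1
    return e1, e2, e3, lag, closed_form
```

For any $\lambda_1>0$, all three equations hold and the two expressions for the Lagrangian value agree, matching \eqref{eq:ex:kkt-running-ex-astral:4}.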
\eqref{eq:ex:kkt-running-ex-astral:4} represents the minimum value of $\lagranglext{\xbar}$ over $\xbar\in\extspac{3}$ in terms of $\lambda_1$ (and with $\lambda_2=0$). If $\xbar,\lamvec$ is a saddle point as in Theorem~\ref{thm:kkt-equiv-saddle}(\ref{thm:kkt-equiv-saddle:ii}), then $\lambda_1$ must maximize this expression. Using ordinary calculus, this implies $\lambda_1=3$ (the derivative of \eqref{eq:ex:kkt-running-ex-astral:4} with respect to $\lambda_1$ is $3/\lambda_1 - 1$, which vanishes only at $\lambda_1=3$). Combined with the above, we obtain that $\xbar$ must satisfy \begin{eqnarray*} \xbar\cdot\vv_1 &=& \ln(1/3) \\ \xbar\cdot\vv_2 &=& \ln(2/3) \\ \xbar\cdot\vv_3 &=& -\infty. \end{eqnarray*} For instance, these are satisfied if \[ \xbar = \limray{(-\vv_3)} \plusl \trans{\Bracks{ \ln\frac{1}{3},\, \frac{1}{2}\ln\frac{2}{3},\, -\frac{1}{2}\ln\frac{2}{3} }}. \] We also have $\lamvec=\trans{[3,0]}$. The derivation above was somewhat heuristic, but we can confirm that the answer we obtained is actually a solution by verifying it satisfies the astral KKT conditions in Theorem~\ref{thm:kkt-necsuf-strict}(\ref{thm:kkt-necsuf-strict:b}). It can be checked that \begin{align*} \fext(\xbar) &= \xbar\cdot\uu = \ln(27/4) \\ \gext_1(\xbar) &= \sum_{i=1}^3 \expex(\xbar\cdot\vv_i) - 1 = 0 \\ \gext_2(\xbar) &= \xbar\cdot\ww = -\infty, \end{align*} so $\xbar$ is astral feasible and, together with $\lamvec$, satisfies dual feasibility and complementary slackness. Also, the calculations above show that $\basubdiflagrangext{\xbar}{\lamvec}=\{\zz\}$ with $\zz$ as given in \eqref{eq:thm:kkt-equiv-saddle:5}; it can now be checked that $\zz=\zero$. \end{example} \section{Differential continuity and descent methods} Many methods for minimizing a convex function are iterative, meaning that successive approximations are computed which hopefully minimize the function in the limit. Often, such methods are related to subgradients, for instance, operating in a way to drive the subgradients at the computed iterates to zero, or by taking steps in the direction of a subgradient.
This section applies astral differential theory to the analysis of such methods. We first consider continuity properties of subdifferentials. These are then applied to derive general conditions for when a class of iterative methods based on subgradients will converge. This leads to a convergence analysis of many standard algorithms when applied to a range of convex optimization problems. \subsection{Subdifferentials and continuity} \label{sec:subdif:cont} A convex function $f:\Rn\rightarrow\Rext$ is minimized at a point $\xx\in\Rn$ if and only if $\zero$ is a subgradient of $f$ at $\xx$, and in particular, if the gradient $\gradf(\xx)$ exists and is equal to $\zero$. Therefore, to minimize $f$ numerically, it is natural to construct a sequence $\seq{\xx_t}$ in $\Rn$ whose gradients converge to $\zero$, that is, for which $\gradf(\xx_t)\rightarrow\zero$. If this is possible, and if the sequence converges to some point $\xx\in\Rn$, then indeed $\xx$ must minimize $f$ and $f(\xx_t)\rightarrow \inf f$, as follows from \Cref{roc:thm24.4}, assuming $f$ is closed and proper. Moreover, even if the sequence $\seq{\xx_t}$ does not converge but nevertheless remains within a bounded region of $\Rn$, an argument can again be made that $f(\xx_t)\rightarrow \inf f$. Since driving the gradient of a function to $\zero$ seems so closely connected to minimization, especially for convex functions, one might expect that it should also be effective as a means of minimizing the function for any sequence, not just bounded sequences. In other words, for a convex function $f:\Rn\rightarrow\Rext$ and sequence $\seq{\xx_t}$ in $\Rn$, we might expect that if $\gradf(\xx_t)\rightarrow\zero$ then $f(\xx_t)\rightarrow \inf f$.
However, this is false in general, even for a convex function with many favorable properties, as seen in the next example: \begin{example} \label{ex:x1sq-over-x2:grad} Let $f$ be the function in Example~\ref{ex:x1sq-over-x2:cont}, as defined in \eqref{eqn:curve-discont-finiteev-eg}. As already mentioned, this function is convex, closed, proper, continuous everywhere, finite everywhere, and nonnegative everywhere. It is also continuously differentiable everywhere except along the ray $\{\trans{[0,x_2]} : x_2 \in \Rneg \}$, a part of the space that is far from the sequences we will be considering. For each $t$, let $\xx_t=\trans{[t^2,\,t^3]}$. Then it can be checked that $f$'s gradient at each $\xx_t$ is \[ \gradf(\xx_t) = \trans{\Bracks{ \frac{2}{t},\, -\frac{1}{t^2}}}, \] so $\gradf(\xx_t)\rightarrow\zero$. Nevertheless, as seen earlier, $f(\xx_t)={t}\rightarrow +\infty$. Thus, the gradients are converging to $\zero$, but the function values are becoming infinite, as far away from $\inf f = 0$ as is possible. \end{example} It is no coincidence that this same function $f$ was used earlier in Example~\ref{ex:x1sq-over-x2:cont} as an example of a function whose extension $\fext$ is discontinuous. The same sequence was also used in that discussion where it was seen that $\xx_t\rightarrow \xbar=\limray{\ee_2}\plusl\limray{\ee_1}$, and that $\fext$ is discontinuous at $\xbar$. Indeed, there is a close connection between continuity in astral space and convergence of gradients: As will be seen below, in general, for a convex function $f:\Rn\rightarrow\Rext$, if $\seq{\xx_t}$ is a sequence in $\Rn$ converging to a point $\xbar\in\extspace$ where $\fext$ is continuous (and also with $\fext(\xbar)<+\infty$), and if $\gradf(\xx_t)\rightarrow\zero$ then $f(\xx_t)\rightarrow \inf f$. If $\fext$ is not continuous at $\xbar$, then this statement need not hold, as just seen in the preceding example. We focus particularly on convergence of subgradients. 
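The computations in Example~\ref{ex:x1sq-over-x2:grad} are easy to check numerically. A minimal sketch, assuming the closed form $f(\xx)=x_1^2/x_2$ on the region $x_2>0$ visited by the sequence (an assumption consistent with the values and gradients reported above; the function's full definition is in \eqref{eqn:curve-discont-finiteev-eg} and is not restated here):

```python
def f(x1, x2):
    # assumed closed form on the region x2 > 0 visited by the sequence
    return x1 * x1 / x2

def grad_f(x1, x2):
    # gradient of x1^2 / x2:  (2 x1 / x2,  -x1^2 / x2^2)
    return (2.0 * x1 / x2, -(x1 * x1) / (x2 * x2))

def x_t(t):
    # the sequence x_t = (t^2, t^3)
    return (t * t, t ** 3)
```

Along the sequence, the gradient is $(2/t,\,-1/t^2)\rightarrow\zero$ while $f(\xx_t)=t\rightarrow+\infty$; note also that $\xx_t\cdot\gradf(\xx_t)=t$.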
Let $f:\Rn\rightarrow\Rext$ be convex, and suppose $\seq{\xx_t}$ and $\seq{\uu_t}$ are sequences in $\Rn$ with each $\uu_t$ a subgradient of $f$ at $\xx_t$ so that $\uu_t\in\partial f(\xx_t)$. Suppose also that $\xx_t\rightarrow\xbar$ and $\uu_t\rightarrow\uu$ for some $\xbar\in\extspace$ and $\uu\in\Rn$. We seek conditions that guarantee $\uu\in\asubdiffext{\xbar}$ so that $\uu$, the limit of the subgradients $\uu_t$, will itself be a subgradient of $\fext$ at $\xbar$, the limit of the points $\xx_t$. Indeed, from standard convex analysis, this is known to be the case if $\xbar=\xx\in\Rn$, provided $f$ is closed and proper (via \Cref{roc:thm24.4} and Proposition~\ref{pr:asubdiffext-at-x-in-rn}). In general, the sequence of values $\xx_t\cdot\uu_t$ need \emph{not} converge to $\xbar\cdot\uu$. (For example, for the function $f$ and sequence $\seq{\xx_t}$ in Example~\ref{ex:x1sq-over-x2:grad} with $\uu_t=\gradf(\xx_t)$, it can be checked that $\xx_t\cdot\uu_t={t}\rightarrow+\infty$, but $\uu=\lim \uu_t=\zero$ so $\xbar\cdot\uu=0$.) The convergence properties of the sequence $\seq{\xx_t\cdot\uu_t}$ turn out to be closely connected to the continuity of $\fext$ at $\xbar$, as we show in the next few theorems. We show first that the closer $\xx_t\cdot\uu_t$ comes to $\xbar\cdot\uu$, the closer will the function values $f(\xx_t)$ get to $\fext(\xbar)$: \begin{theorem} \label{thm:limsup-beta-bnd} Let $f:\Rn\rightarrow\Rext$ be convex. Let $\seq{\xx_t}$ and $\seq{\uu_t}$ be sequences in $\Rn$ with each $\uu_t\in\partial f(\xx_t)$, and with $\xx_t\rightarrow\xbar$ and $\uu_t\rightarrow\uu$ for some $\xbar\in\extspace$ and $\uu\in\Rn$. Assume $\xbar\cdot\uu\in\R$ and suppose $\limsup \xx_t\cdot\uu_t \leq \beta$ for some $\beta\in\R$. Then \begin{eqnarray} \fext(\xbar) &\leq& \liminf f(\xx_t) \nonumber \\ &\leq& \limsup f(\xx_t) \nonumber \\ &\leq& -\fstar(\uu) + \beta \label{thm:limsup-beta-bnd:s3} \\ &\leq& \fext(\xbar) + \beta - \xbar\cdot\uu. 
\nonumber \end{eqnarray} \end{theorem} \begin{proof} The first inequality is immediate from \eqref{eq:e:7} since $\xx_t\rightarrow\xbar$. The last inequality follows from \eqref{eqn:ast-fenchel} (applied to $F=\fext$, and using Proposition~\ref{pr:fextstar-is-fstar}), and since $\xbar\cdot\uu\in\R$. It remains to prove \eqref{thm:limsup-beta-bnd:s3}. Let $\epsilon>0$. Then for all $t$ sufficiently large, $\xx_t\cdot\uu_t\leq\beta+\epsilon$. We thus have \begin{eqnarray*} \limsup f(\xx_t) &=& \limsup (-\fstar(\uu_t) + \xx_t\cdot\uu_t) \\ &\leq& \limsup (-\fstar(\uu_t) + \beta+\epsilon) \\ &\leq& -\fstar(\uu) + \beta+\epsilon. \end{eqnarray*} The equality holds because, by \Cref{pr:stan-subgrad-equiv-props}(\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:b}), $ \fstar(\uu_t)= \xx_t\cdot\uu_t - f(\xx_t) $ for all $t$, since $\uu_t\in\partial f(\xx_t)$. The last inequality is because $\fstar$ is closed and therefore lower semicontinuous (\Cref{pr:conj-props}\ref{pr:conj-props:d}), implying $\liminf \fstar(\uu_t)\geq \fstar(\uu)$ since $\uu_t\rightarrow\uu$. Since this holds for all $\epsilon>0$, this proves \eqref{thm:limsup-beta-bnd:s3}. \end{proof} Taking $\beta=\xbar\cdot\uu$, Theorem~\ref{thm:limsup-beta-bnd} immediately implies that, under the same assumptions, if $\limsup \xx_t\cdot\uu_t \leq \xbar\cdot\uu$ then $\uu$ must be an astral subgradient of $\fext$ at $\xbar$, and also the sequence of values $f(\xx_t)$ must converge to $\fext(\xbar)$: \begin{corollary} \label{cor:xuleqxbaru-then-subgrad} Let $f:\Rn\rightarrow\Rext$ be convex. Let $\seq{\xx_t}$ and $\seq{\uu_t}$ be sequences in $\Rn$ with each $\uu_t\in\partial f(\xx_t)$, and with $\xx_t\rightarrow\xbar$ and $\uu_t\rightarrow\uu$ for some $\xbar\in\extspace$ and $\uu\in\Rn$. Assume $\xbar\cdot\uu\in\R$ and that $\limsup \xx_t\cdot\uu_t \leq \xbar\cdot\uu$. Then $f(\xx_t)\rightarrow \fext(\xbar)$ and $\uu\in\asubdiffext{\xbar}$. 
\end{corollary} \proof Given the stated assumptions, we can immediately apply Theorem~\ref{thm:limsup-beta-bnd} with $\beta=\xbar\cdot\uu$, yielding \[ \fext(\xbar) = \lim f(\xx_t) = -\fstar(\uu) + \xbar\cdot\uu. \] That $\uu\in\asubdiffext{\xbar}$ now follows directly from Theorem~\ref{thm:fenchel-implies-subgrad}, using $\xbar\cdot\uu\in\R$. \qed On the other hand, Theorem~\ref{thm:limsup-beta-bnd} also provides asymptotic lower bounds on the sequence $\xx_t\cdot\uu_t$, showing that $\liminf \xx_t\cdot\uu_t\geq\xbar\cdot\uu$ if $\fext(\xbar)>-\infty$, and that $\xx_t\cdot\uu_t\rightarrow+\infty$ if $\fext(\xbar)=+\infty$: \begin{corollary} \label{cor:xugeqxbaru} Let $f:\Rn\rightarrow\Rext$ be convex and not identically $+\infty$. Let $\seq{\xx_t}$ and $\seq{\uu_t}$ be sequences in $\Rn$ with each $\uu_t\in\partial f(\xx_t)$, and with $\xx_t\rightarrow\xbar$ and $\uu_t\rightarrow\uu$ for some $\xbar\in\extspace$ and $\uu\in\Rn$. Assume $\xbar\cdot\uu\in\R$ and $\fext(\xbar)>-\infty$. Then $\liminf \xx_t\cdot\uu_t \geq \xbar\cdot\uu$. If, in addition, $\fext(\xbar)=+\infty$, then $\xx_t\cdot\uu_t\rightarrow+\infty$. \end{corollary} \proof We consider first the case that $\fext(\xbar)\in\R$. Suppose, by way of contradiction, that $\liminf \xx_t\cdot\uu_t < \xbar\cdot\uu$. Then there exists $\epsilon>0$ and infinitely many values of $t$ for which $\xx_t\cdot\uu_t \leq \xbar\cdot\uu - \epsilon$. By discarding all other sequence elements, we can assume that this holds for all values of $t$. We therefore can apply Theorem~\ref{thm:limsup-beta-bnd} with $\beta=\xbar\cdot\uu - \epsilon$. However, this yields $\fext(\xbar) \leq \fext(\xbar) - \epsilon$, an obvious contradiction. Therefore, $\liminf \xx_t\cdot\uu_t \geq \xbar\cdot\uu$ if $\fext(\xbar)\in\R$. Next, consider the case that $\fext(\xbar)=+\infty$. Suppose $\xx_t\cdot\uu_t \not\rightarrow +\infty$. Then there exists $\beta\in\R$ such that $\xx_t\cdot\uu_t\leq\beta$ for infinitely many values of $t$. 
As before, we can discard all other sequence elements, so that this holds for all values of $t$. We can again apply Theorem~\ref{thm:limsup-beta-bnd} with this choice of $\beta$, yielding $\fext(\xbar) \leq -\fstar(\uu) + \beta$. Since $\fext(\xbar)=+\infty$, this implies $\fstar(\uu)=-\infty$. But this is a contradiction since we assumed $f\not\equiv+\infty$, implying $\fstar>-\infty$. Therefore, $\lim \xx_t\cdot\uu_t = +\infty \geq \xbar\cdot\uu$ if $\fext(\xbar)=+\infty$. \qed If $\fext(\xbar)=-\infty$, Corollary~\ref{cor:xugeqxbaru} need not hold; in other words, it need not be the case that $\liminf \xx_t\cdot\uu_t \geq \xbar\cdot\uu$: \begin{example} Let $f:\R\rightarrow\Rext$ be the convex function \[ f(x) = \begin{cases} -\ln x & \mbox{if $x > 0$,} \\ +\infty & \mbox{otherwise,} \end{cases} \] for $x\in\R$. Let $x_t=t$, and $u_t=f'(x_t)=-1/t$, where $f'$ is the first derivative of $f$. Then $x_t \rightarrow +\infty$ and $u_t \rightarrow 0$. Also, $x_t u_t = -1$ for all $t$, so $x_t u_t \rightarrow -1 < 0 = (+\infty)\cdot 0$. \end{example} Corollary~\ref{cor:xuleqxbaru-then-subgrad} shows that, given our other assumptions, to prove $\uu$ is an astral subgradient of $\fext$ at $\xbar$, it suffices to show $\limsup \xx_t\cdot\uu_t \leq \xbar\cdot\uu$. Indeed, this will be the case if $\fext$ is continuous at $\xbar$ and if $\fext(\xbar)<+\infty$, as we show next, along with some other more general sufficient conditions. Thus, continuity in astral space provides a sufficient condition for a sequence of subgradients to converge to an astral subgradient. Our earlier counterexample (Example~\ref{ex:x1sq-over-x2:grad}) shows that this need not be true in general without continuity. \begin{theorem} \label{thm:cont-subgrad-converg} Let $f:\Rn\rightarrow\Rext$ be convex and proper. 
Let $\seq{\xx_t}$ and $\seq{\uu_t}$ be sequences in $\Rn$ with each $\uu_t\in\partial f(\xx_t)$, and with $\xx_t\rightarrow\xbar$ and $\uu_t\rightarrow\uu$ for some $\xbar\in\extspace$ and $\uu\in\Rn$. Assume $\xbar\cdot\uu\in\R$, and that at least one of the following conditions also holds: \begin{letter} \item \label{thm:cont-subgrad-converg:a} $\xbar=\ebar\plusl\qq$ for some $\ebar\in\corezn$ and $\qq\in\Rn$, and for each $t$, $\xx_t=\ww_t+\qq_t$ for some $\ww_t\in\resc{f}$ and $\qq_t\in\Rn$ with $\qq_t\rightarrow\qq$. \item \label{thm:cont-subgrad-converg:b} $\xbar\in\represc{f}\plusl\Rn$. \item \label{thm:cont-subgrad-converg:c} $\xbar\in\contsetf$; that is, $\fext(\xbar)<+\infty$ and $\fext$ is continuous at $\xbar$. \end{letter} Then $\limsup \xx_t\cdot\uu_t \leq \xbar\cdot\uu$. Consequently, $f(\xx_t)\rightarrow\fext(\xbar)$ and $\uu\in\asubdiffext{\xbar}$. \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Condition~(\ref{thm:cont-subgrad-converg:a}):} Suppose this condition holds. For each $t$, $f(\xx_t)>-\infty$ since $f$ is proper, and also $f(\xx_t)<+\infty$ by \Cref{roc:thm23.4}(\ref{roc:thm23.4:b}) since $\uu_t\in\partial f(\xx_t)$. Thus, $f(\xx_t)\in\R$. Note that $\xbar\cdot\uu=\ebar\cdot\uu\plusl\qq\cdot\uu=\qq\cdot\uu$ since if $\ebar\cdot\uu\neq 0$ then $\xbar\cdot\uu$ would be infinite (by Proposition~\refequivnp{pr:icon-equiv}{pr:icon-equiv:a}{pr:icon-equiv:b}), contradicting our assumption that it is finite. Next we have \[ f(\xx_t) \geq f(\xx_t+\ww_t) \geq f(\xx_t) + \ww_t\cdot\uu_t. \] The first inequality is because $\ww_t\in\resc{f}$, and the second is because $\uu_t\in\partial f(\xx_t)$. Since $f(\xx_t)\in\R$, this implies $\ww_t\cdot\uu_t\leq 0$. Thus, $ \xx_t\cdot\uu_t = \ww_t\cdot\uu_t + \qq_t\cdot\uu_t \leq \qq_t\cdot\uu_t $. Since $\qq_t\cdot\uu_t \rightarrow \qq\cdot\uu = \xbar\cdot\uu$, it follows that $\limsup \xx_t\cdot\uu_t \leq \xbar\cdot\uu$, as claimed. 
That $f(\xx_t)\rightarrow\fext(\xbar)$ and $\uu\in\asubdiffext{\xbar}$ then follow immediately from Corollary~\ref{cor:xuleqxbaru-then-subgrad}. \pfpart{Condition~(\ref{thm:cont-subgrad-converg:b}):} Under this condition, $\xbar=(\VV\omm\plusl\rr)\plusl\qq'$ for some $\rr\in\resc{f}$ and $\qq'\in\Rn$, and for some $\VV\in\Rnk$ with each column in $\resc{f}$. More simply, $\xbar=\VV\omm\plusl\qq$ where $\qq=\rr+\qq'$. Since $\xx_t\rightarrow\xbar$, by Theorem~\ref{thm:seq-rep-not-lin-ind}, there exist sequences $\seq{\bb_t}$ in $\Rk$ and $\seq{\qq_t}$ in $\Rn$ such that $\xx_t=\VV\bb_t+\qq_t$ for all $t$, and with $\qq_t\rightarrow\qq$, and $b_{t,i}\rightarrow+\infty$ for $i=1,\ldots,k$. This last condition implies $b_{t,i}$ can be negative for at most finitely many $t$; by discarding these, we can assume henceforth that $b_{t,i}\geq 0$ for all $t$ and for $i=1,\ldots,k$. Consequently, $\VV\bb_t\in\resc{f}$ since each column of $\VV$ is in $\resc{f}$ and since $\resc{f}$ is a convex cone (Proposition~\ref{pr:resc-cone-basic-props}). Thus, condition~(\ref{thm:cont-subgrad-converg:a}) is satisfied, implying the claim, as argued above. \pfpart{Condition~(\ref{thm:cont-subgrad-converg:c}):} Suppose this condition holds. Let $h=\lsc f$, which implies $h=\cl f$, since $f$ is convex and proper. Then for each $t$, because $\uu_t\in\partial f(\xx_t)$, it also holds that $h(\xx_t)=f(\xx_t)$ and that $\uu_t\in\partial h(\xx_t)$ (by \Cref{pr:stan-subgrad-equiv-props}\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:d}). Also, $\hext=\fext$ by \Cref{pr:h:1}(\ref{pr:h:1aa}), so in particular, $\contsetf=\contset{\hext}$. Thus, \[ \xbar \in \contsetf = \contset{\hext} = \represc{h}\plusl\intdom{h} \subseteq \represc{h}\plusl\Rn, \] with the second equality following from Corollary~\ref{cor:cont-gen-char} (applied to $h$). Therefore, condition~(\ref{thm:cont-subgrad-converg:b}) is satisfied (with $f$ replaced by $h$). 
As argued above, this implies $\limsup \xx_t\cdot\uu_t\leq\xbar\cdot\uu$, and consequently that $f(\xx_t)=h(\xx_t)\rightarrow\hext(\xbar)=\fext(\xbar)$, and $\uu\in\asubdifhext{\xbar}=\asubdiffext{\xbar}$. \qedhere \end{proof-parts} \end{proof} In particular, when $\uu=\zero$, Theorem~\ref{thm:cont-subgrad-converg} shows that for a convex, proper function $f:\Rn\rightarrow\Rext$, if $\xx_t\rightarrow\xbar$ and $\gradf(\xx_t)\rightarrow\zero$ (or more generally, $\uu_t\rightarrow\zero$ where $\uu_t\in\partial f(\xx_t)$) then $\xbar$ must minimize $\fext$, and also $f(\xx_t)\rightarrow\inf f$. Importantly, this assumes that one of the three conditions given in the theorem is satisfied, for instance, that $\fext$ is continuous at $\xbar$ and $\fext(\xbar)<+\infty$. As a corollary, we can prove that even if the sequence $\seq{\xx_t}$ does not converge, if the gradients or subgradients are converging to zero (so that $\gradf(\xx_t)\rightarrow\zero$ or $\uu_t\rightarrow\zero$), then the function values must be converging to the minimum of $f$, so that $f(\xx_t)\rightarrow\inf f$. For this, we need to make assumptions similar to those in Theorem~\ref{thm:cont-subgrad-converg} for any subsequential limit of the sequence (that is, for the limit of any convergent subsequence). As discussed earlier (Example~\ref{ex:x1sq-over-x2:grad}), such convergence to $f$'s minimum on a sequence whose gradients are converging to $\zero$ cannot in general be guaranteed without such assumptions. \begin{theorem} \label{thm:subdiff-min-cont} Let $f:\Rn\rightarrow\Rext$ be convex and proper. Let $\seq{\xx_t}$ and $\seq{\uu_t}$ be sequences in $\Rn$ with each $\uu_t\in\partial f(\xx_t)$. Assume $\uu_t\rightarrow\zero$, and also that every subsequential limit of $\seq{\xx_t}$ is either in $\represc{f}\plusl\Rn$ or $\contsetf$ (as will be the case, for instance, if $\limsup f(\xx_t)<+\infty$ and $\fext$ is continuous everywhere). Then $f(\xx_t)\rightarrow \inf f$. 
\end{theorem} \begin{proof} Suppose, contrary to the theorem's conclusion, that $f(\xx_t)\not\rightarrow\inf f$. Then there exists $\beta\in\R$ with $\beta>\inf f$ such that $f(\xx_t)\geq \beta$ for infinitely many values of $t$. By discarding all other sequence elements, we can assume that this holds for all $t$. By sequential compactness, the sequence $\seq{\xx_t}$ must have a subsequence converging to some point $\xbar\in\extspace$. By again discarding all elements not in this subsequence, we can assume $\xx_t\rightarrow\xbar$. By assumption, $\xbar$ is either in $\represc{f}\plusl\Rn$ or $\contsetf$. In either case, applying Theorem~\ref{thm:cont-subgrad-converg}(\ref{thm:cont-subgrad-converg:b},\ref{thm:cont-subgrad-converg:c}) with $\uu=\zero$ therefore yields that $f(\xx_t)\rightarrow\fext(\xbar)$ and that $\zero\in\asubdiffext{\xbar}$, implying $\fext(\xbar)=\inf f$ (by Proposition~\ref{pr:asub-zero-is-min}). Thus, $f(\xx_t)\rightarrow\inf f$, a contradiction since $f(\xx_t)\geq\beta>\inf f$ for all $t$. We conclude that $f(\xx_t)\rightarrow \inf f$. Finally, for an arbitrary sequence $\seq{\xx_t}$ in $\Rn$, suppose $\limsup f(\xx_t)<+\infty$ and that $\fext$ is continuous everywhere. We argue, under these assumptions, that every subsequential limit of $\seq{\xx_t}$ is in $\contsetf$. Since $\limsup f(\xx_t)<+\infty$, there exists $b\in\R$ such that $f(\xx_t)\leq b$ for all $t$ sufficiently large. Therefore, if $\xbar\in\extspace$ is the limit of some convergent subsequence, then we must have $\fext(\xbar)\leq b<+\infty$. Further, since $\fext$ is continuous everywhere, it must be continuous at $\xbar$, implying $\xbar\in\contsetf$. 
\end{proof} Stated slightly differently, Theorem~\ref{thm:cont-subgrad-converg} shows that, under suitable conditions, if $\seq{\rpair{\xx_t}{\uu_t}}$ is a sequence of subgradient pairs for a function $f:\Rn\rightarrow\Rext$ (meaning $\uu_t\in\partial f(\xx_t)$ for all $t$), and if that sequence converges to a pair $\rpair{\xbar}{\uu}$ in $\extspace\times\Rn$, then that pair is an astral subgradient pair for $\fext$ (meaning $\uu\in\asubdiffext{\xbar}$). In a moment, we will give in Corollary~\ref{cor:ast-subgrad-limit} simple conditions for the existence of such sequences. Specifically, assuming $f$ is closed, proper and convex, we will show that a point $\uu\in\Rn$ belongs to some astral subgradient pair $\rpair{\xbar}{\uu}$ for $\fext$ that is the limit of a sequence of subgradient pairs for $f$ if and only if $\uu\in\cl(\dom{\fstar})$. First, however, we prove an analogous theorem for astral dual subgradients. Theorem~\ref{thm:adsubdiff-nonempty} showed that if $\psi:\Rn\rightarrow\Rext$ is convex, then $\adsubdifpsi{\uu}$ is nonempty for all $\uu\in\Rn$. We next show that, under the conditions of the theorem, $\adsubdifpsi{\uu}$ must furthermore include an astral dual subgradient $\xbar\in\extspace$ such that the pair $\rpair{\uu}{\xbar}$ is the limit of a sequence of subgradient pairs of $\psi$. \begin{theorem} \label{thm:ast-dual-subgrad-limit} Let $\psi:\Rn\rightarrow\Rext$ be convex with $\psi\not\equiv+\infty$. Let $\uu\in\Rn$ and assume $\psi$ is lower semicontinuous at $\uu$. Then the following are equivalent: \begin{letter-compact} \item \label{thm:ast-dual-subgrad-limit:a} $\uu\in\cl(\dom\psi)$. \item \label{thm:ast-dual-subgrad-limit:b} There exist sequences $\seq{\xx_t}$ and $\seq{\uu_t}$ in $\Rn$ and $\xbar\in\extspace$ such that $\xx_t\rightarrow\xbar$, $\uu_t\rightarrow\uu$, $\xx_t\in\partial \psi(\uu_t)$ for all $t$, and $\xbar\in\adsubdifpsi{\uu}$. 
\end{letter-compact} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:ast-dual-subgrad-limit:b}) $\Rightarrow$ (\ref{thm:ast-dual-subgrad-limit:a}): } Suppose sequences as given in part~(\ref{thm:ast-dual-subgrad-limit:b}) exist. Since $\psi\not\equiv+\infty$, there exists a point $\uu'\in\dom\psi$. For each $t$, since $\xx_t\in\partial\psi(\uu_t)$, \[ +\infty > \psi(\uu') \geq \psi(\uu_t) + \xx_t\cdot(\uu'-\uu_t), \] implying $\uu_t\in\dom\psi$. Since $\uu_t\rightarrow\uu$, it follows that $\uu\in\cl(\dom\psi)$. \pfpart{(\ref{thm:ast-dual-subgrad-limit:a}) $\Rightarrow$ (\ref{thm:ast-dual-subgrad-limit:b}): } Suppose $\uu\in\cl(\dom\psi)$. We consider first the case that $\psi$ is improper. Since $\uu\in\cl(\dom\psi)=\cl(\ri(\dom\psi))$, there exists a sequence $\seq{\uu_t}$ in $\ri(\dom\psi)$ with $\uu_t\rightarrow\uu$. Since $\psi$ is improper, this implies $\psi(\uu_t)=-\infty$ for all $t$ (by \Cref{pr:improper-vals}\ref{pr:improper-vals:thm7.2}), and so that $\zero\in\partial\psi(\uu_t)$. Further, since $\psi$ is lower semicontinuous at $\uu$, $\psi(\uu)\leq\liminf\psi(\uu_t)=-\infty$, so $\psi(\uu)=-\infty$ and $\zero\in\adsubdifpsi{\uu}$. Thus, setting $\xbar=\zero$ and $\xx_t=\zero$ for all $t$, this proves the claim in this case. Henceforth, we assume that $\psi$ is proper. Let $\ww$ be any point in $\ri(\dom\psi)$ (which is nonempty since $\psi$ is convex and proper). Let $\seq{\lambda_t}$ be any decreasing sequence in $(0,1)$ converging to $0$ (so that $\lambda_{t+1}<\lambda_t$ for all $t$, and $\lambda_t\rightarrow 0$). For each $t$, let $\uu_t=(1-\lambda_t)\uu+\lambda_t\ww$, which converges to $\uu$. Since $\uu\in\cl(\dom\psi)$ and $\ww\in\ri(\dom\psi)$, each $\uu_t$ is in $\ri(\dom\psi)$ (\Cref{roc:thm6.1}). Therefore, by \Cref{roc:thm23.4}(\ref{roc:thm23.4:a}), $\psi$ has a subgradient $\xx_t$ at each point $\uu_t$ so that $\xx_t\in\partial\psi(\uu_t)$. 
By sequential compactness, the sequence $\seq{\xx_t}$ has a subsequence converging to some point $\xbar\in\extspace$. By discarding all other elements (and the corresponding elements of the $\seq{\lambda_t}$ and $\seq{\uu_t}$ sequences), we can assume henceforth that the entire sequence converges so that $\xx_t\rightarrow\xbar$. By monotonicity of subgradients, for all $t$, \[ 0 \leq (\xx_{t+1}-\xx_t)\cdot(\uu_{t+1}-\uu_t) = (\lambda_{t+1} - \lambda_t)(\xx_{t+1}-\xx_t)\cdot(\ww-\uu). \] The inequality follows from Theorem~\ref{thm:ast-dual-subgrad-monotone} (since $\xx_t\in\partial\psi(\uu_t)\subseteq\adsubdifpsi{\uu_t}$ and $\psi(\uu_t)\in\R$ for all $t$), and also using Proposition~\ref{pr:adsubdif-int-rn}. The equality is by algebra and definition of each $\uu_t$. Since $\lambda_{t+1}<\lambda_t$, it follows that $\xx_{t+1}\cdot(\ww-\uu)\leq\xx_t\cdot(\ww-\uu)$. Consequently, \begin{equation} \label{eq:thm:ast-dual-subgrad-limit:1} \xx_t\cdot(\ww-\uu)\leq\xx_1\cdot(\ww-\uu) \end{equation} for all $t$. Let $\uu'\in\Rn$. Then for all $t$, \begin{eqnarray} \psi(\uu') &\geq& \psi(\uu_t) + \xx_t \cdot (\uu'-\uu_t) \nonumber \\ &=& \psi(\uu_t) + \xx_t\cdot(\uu'-\uu) - \lambda_t \xx_t\cdot(\ww-\uu) \nonumber \\ &\geq& \psi(\uu_t) + \xx_t\cdot(\uu'-\uu) - \lambda_t \xx_1\cdot(\ww-\uu). \label{eq:thm:ast-dual-subgrad-limit:3} \end{eqnarray} The first inequality is because $\xx_t\in\partial\psi(\uu_t)$. The equality is by algebra and definition of $\uu_t$. The last inequality is from \eqref{eq:thm:ast-dual-subgrad-limit:1} and since $\lambda_t>0$. We claim that \begin{equation} \label{eq:thm:ast-dual-subgrad-limit:2} \psi(\uu') \geq \psi(\uu) \plusd \xbar\cdot(\uu'-\uu). \end{equation} This is immediate if $\xbar\cdot(\uu'-\uu)=-\infty$, so we assume henceforth that $\xbar\cdot(\uu'-\uu)>-\infty$. We also have $\psi(\uu)>-\infty$ since $\psi$ is proper. Thus, $\psi(\uu)$ and $\xbar\cdot(\uu'-\uu)$ are summable. 
Since \eqref{eq:thm:ast-dual-subgrad-limit:3} holds for all $t$, we can take limits yielding \begin{eqnarray*} \psi(\uu') &\geq& \liminf [\psi(\uu_t) + \xx_t\cdot(\uu'-\uu) - \lambda_t \xx_1\cdot(\ww-\uu)] \\ &\geq& \liminf \psi(\uu_t) + \liminf [\xx_t\cdot(\uu'-\uu) - \lambda_t \xx_1\cdot(\ww-\uu)] \\ &\geq& \psi(\uu) + \xbar\cdot(\uu'-\uu). \end{eqnarray*} The third inequality is because $\psi$ is assumed to be lower semicontinuous at $\uu$, and because $\xx_t\cdot(\uu'-\uu)\rightarrow\xbar\cdot(\uu'-\uu)$ (Theorem~\ref{thm:i:1}\ref{thm:i:1c}), and $\lambda_t\rightarrow 0$. The second inequality is by superadditivity of $\liminf$ and the summability just noted (\Cref{prop:lim:eR}\ref{i:liminf:eR:sum}). Since \eqref{eq:thm:ast-dual-subgrad-limit:2} holds for all $\uu'\in\Rn$, it follows that $\xbar\in\adsubdifpsi{\uu}$ (by Eq.~\ref{eqn:psi-subgrad:3-alt}). \qedhere \end{proof-parts} \end{proof} \begin{corollary} \label{cor:ast-subgrad-limit} Let $f:\Rn\rightarrow\Rext$ be convex, closed and proper. Let $\uu\in\Rn$. Then the following are equivalent: \begin{letter-compact} \item \label{cor:ast-subgrad-limit:a} $\uu\in\cl(\dom\fstar)$. \item \label{cor:ast-subgrad-limit:b} There exist sequences $\seq{\xx_t}$ and $\seq{\uu_t}$ in $\Rn$ and $\xbar\in\extspace$ such that $\xx_t\rightarrow\xbar$, $\uu_t\rightarrow\uu$, $\uu_t\in\partial f(\xx_t)$ for all $t$, and $\uu\in\asubdiffext{\xbar}$. \end{letter-compact} \end{corollary} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{cor:ast-subgrad-limit:b}) $\Rightarrow$ (\ref{cor:ast-subgrad-limit:a}): } Suppose sequences as in part~(\ref{cor:ast-subgrad-limit:b}) exist. Then for each $t$, $\xx_t\in\partial\fstar(\uu_t)$, by \Cref{pr:stan-subgrad-equiv-props}(\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:c}), implying $\uu_t\in\dom{\fstar}$ by \Cref{roc:thm23.4}(\ref{roc:thm23.4:b}). Therefore, $\uu\in\cl(\dom{\fstar})$. 
\pfpart{(\ref{cor:ast-subgrad-limit:a}) $\Rightarrow$ (\ref{cor:ast-subgrad-limit:b}): } Suppose $\uu\in\cl(\dom{\fstar})$. Then by Theorem~\ref{thm:ast-dual-subgrad-limit}, applied to $\psi=\fstar$ (which is convex, closed and proper since $f$ is), there must exist sequences $\seq{\xx_t}$ and $\seq{\uu_t}$ in $\Rn$ and $\xbar\in\extspace$ such that $\xx_t\rightarrow\xbar$, $\uu_t\rightarrow\uu$, $\xx_t\in\partial \fstar(\uu_t)$ for all $t$, and $\xbar\in\adsubdiffstar{\uu}$. This implies, for all $t$, that $\uu_t\in\partial f(\xx_t)$ by \Cref{pr:stan-subgrad-equiv-props}(\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:c}), and so that $\xx_t\in\dom{f}$ by \Cref{roc:thm23.4}(\ref{roc:thm23.4:b}). Thus, $\xbar\in\cldom{f}$. From Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c}, it then follows that $\uu\in\asubdiffext{\xbar}$. \qedhere \end{proof-parts} \end{proof} Taking $\uu=\zero$, Corollary~\ref{cor:ast-subgrad-limit} shows that, for a closed, proper, convex function $f:\Rn\rightarrow\Rext$, if $\zero\in\cl(\dom\fstar)$, then there must exist points $\xx_t$ with subgradients $\uu_t$ such that the sequence $\seq{\xx_t}$ converges to a minimizer of $\fext$ while simultaneously the sequence of subgradients $\seq{\uu_t}$ converges to $\zero$. In particular, this will be the case whenever $f$ is lower-bounded (so that $\fstar(\zero)=-\inf f<+\infty$). On the other hand, if $\zero\not\in\cl(\dom\fstar)$, then no such sequence can exist, which means $\fext$'s minimizers cannot be attained by attempting to drive $f$'s subgradients to zero. Here are examples: \begin{example} Suppose \[ f(x) = \begin{cases} -2\sqrt{x} & \mbox{if $x\geq 0$} \\ +\infty & \mbox{otherwise} \end{cases} \] for $x\in\R$. Then it can be calculated that \[ \fstar(u) = \begin{cases} -1/u & \mbox{if $u < 0$} \\ +\infty & \mbox{otherwise} \end{cases} \] for $u\in\R$. Thus, $0$ is in $\cl(\dom\fstar)$, though not in $\dom\fstar$. 
In this case, we can construct sequences as in Corollary~\ref{cor:ast-subgrad-limit} by letting $x_t=t$ and $u_t=f'(x_t)=-1/\sqrt{t}$ (where $f'$ is the first derivative of $f$). Then $u_t\rightarrow 0$ while $x_t\rightarrow+\infty$, which minimizes $\fext$.

Suppose now instead that $f(x)=-x$ for $x\in\R$. Then $\fstar=\indf{\{-1\}}$, the indicator function for the single point $-1$, so that $\dom\fstar=\{-1\}$. Thus, $0\not\in\cl(\dom\fstar)$. Indeed, for every sequence $\seq{x_t}$ in $\R$, we must have $u_t=f'(x_t)=-1$, so no such sequence of subgradients can converge to $0$.
\end{example}

\subsection{Convergence of iterative methods}
\label{sec:iterative}

The preceding results, especially Theorem~\ref{thm:subdiff-min-cont}, can be applied to prove the convergence of iterative methods for minimizing a function, as we now illustrate. Let $f:\Rn\rightarrow\Rext$ be convex. We consider methods that compute a sequence of iterates $\seq{\xx_t}$ in $\Rn$ with the purpose of asymptotically minimizing $f$. A classic example is \emph{gradient descent} in which $\xx_1\in\Rn$ is arbitrary, and each successive iterate is defined by
\begin{equation} \label{eqn:grad-desc-defn}
\xx_{t+1} = \xx_t - \eta_t \nabla f(\xx_t)
\end{equation}
for some step size $\eta_t > 0$. Although certainly an important example, our aim is to develop techniques that are broadly applicable well beyond gradient descent.

In analyzing the convergence of such iterative methods, it is very common to assume that $f$ has a finite minimizer in $\Rn$, and often also that we are effectively searching for a minimizer over only a compact subset of $\Rn$. (See, for example, \citealp[Chapters~9,~10,~11]{boyd_vandenberghe}.) Depending on the problem setting, such assumptions may or may not be reasonable. A primary purpose of the current work, of course, has been to develop a foundation that overcomes such difficulties and that can be applied without relying on such assumptions.
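To make the update rule in \eqref{eqn:grad-desc-defn} concrete, here is a minimal sketch run on the function $f(x)=-2\sqrt{x}$ from the example above. The constant step size is an illustrative choice only, not prescribed by the text: the iterates increase without bound, the derivatives $f'(x_t)=-1/\sqrt{x_t}$ shrink to $0$, and $f(\xx_t)$ decreases toward $\inf f=-\infty$.

```python
# Minimal sketch of gradient descent, x_{t+1} = x_t - eta_t * f'(x_t),
# on f(x) = -2*sqrt(x). The constant step size eta = 1 is an illustrative
# (hypothetical) choice. Since f'(x) = -1/sqrt(x) < 0, each step increases x,
# so the iterates diverge to +infinity while the gradients tend to 0.
import math

def f(x):
    return -2.0 * math.sqrt(x)

def fprime(x):
    return -1.0 / math.sqrt(x)

x, eta = 1.0, 1.0
for _ in range(1000):
    x = x - eta * fprime(x)   # = x + eta / sqrt(x), so the iterates increase

print(x, fprime(x), f(x))     # x large, f'(x) near 0, f(x) very negative
```

This mirrors the sequence constructed in the example: no finite minimizer exists, yet the subgradients converge to $0$ and the iterates converge (in astral space) to a minimizer of $\fext$.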
Indeed, as we have seen, astral space is itself compact, and the extension $\fext$ of any convex function $f:\Rn\rightarrow\Rext$ always has a minimizer that is attained at some astral point in $\extspace$. Before exploring how astral methods can be used to prove general convergence results, we first give examples in the next theorem of how standard, non-astral techniques can sometimes be applied to prove specialized convergence for particular algorithms, in this case, subgradient descent, a generalization of gradient descent in which $\xx_{t+1} = \xx_t - \eta_t \uu_t$ where $\eta_t>0$ and $\uu_t$ is any subgradient of $f$ at $\xx_t$. Part~(\ref{thm:grad-desc-converges:a}) of the theorem, which is taken from \citet[Lemma 2]{ziwei_poly_tail_losses}, proves convergence to the function's minimum assuming a particular lower bound on how much $f(\xx_t)$ decreases on each iteration, an assumption that will be discussed further below. Under a different condition, which does not require per-iteration progress, part~(\ref{thm:grad-desc-converges:b}) proves that a weighted average of the iterates must minimize $f$ via a similar proof. This second part contrasts with standard results in the literature, such as \citet{nesterov_book} and \citet{Zhang04solvinglarge}, which typically require that the iterates $\xx_t$ remain bounded. \begin{theorem} \label{thm:grad-desc-converges} Let $f:\Rn\rightarrow\Rext$ be convex and proper. Let $\seq{\xx_t}$ and $\seq{\uu_t}$ be sequences in $\Rn$, where, for all $t$, $f(\xx_t)\in\R$, $\uu_t\in\partial f(\xx_t)$, $\xx_{t+1} = \xx_t - \eta_t \uu_t$, and $\eta_t>0$. Assume $\sum_{t=1}^\infty \eta_t = +\infty$. \begin{letter} \item \label{thm:grad-desc-converges:a} Suppose \begin{equation} \label{eqn:thm:grad-desc-converges:1} f(\xx_{t+1}) \leq f(\xx_t) - \frac{\eta_t}{2} \norm{\uu_t}^2 \end{equation} for all $t$. Then $f(\xx_t) \rightarrow \inf f$. 
\item \label{thm:grad-desc-converges:b} Suppose instead that $\sum_{t=1}^\infty \eta_t^2 \|\uu_t\|^2 < +\infty$. For each $t$, let \[ \xhat_t = \frac{\sum_{s=1}^t \eta_s \xx_s}{\sum_{s=1}^t \eta_s}. \] Then $f(\xhat_t) \rightarrow \inf f$. \end{letter} \end{theorem} \begin{proof} Both parts of the theorem rely on the following observation. For any $\zz\in\Rn$, and for all $t$, \begin{eqnarray*} \|\xx_{t+1} - \zz\|^2 &=& \norm{(\xx_{t} - \zz) - \eta_t \uu_t}^2 \\ &=& \|\xx_{t} - \zz\|^2 - 2 \eta_t \uu_t\cdot (\xx_t - \zz) + \eta_t^2 \norm{\uu_t}^2 \\ &\leq& \|\xx_{t} - \zz\|^2 - 2\eta_t \parens{f(\xx_t) - f(\zz)} + \eta_t^2 \norm{\uu_t}^2, \end{eqnarray*} with the last line following from $\uu_t\in\partial f(\xx_t)$ (by definition of subgradient, Eq.~\ref{eqn:prelim-standard-subgrad-ineq}). Applying this inequality repeatedly and rearranging then yields \begin{eqnarray} 2\sum_{s=1}^{t} \eta_{s} ( f(\xx_{s}) - f(\zz) ) &\leq& \norm{\xx_{t+1} - \zz}^2 + 2\sum_{s=1}^{t} \eta_{s} ( f(\xx_{s}) - f(\zz) ) \nonumber \\ &\leq& \norm{\xx_1 - \zz}^2 + \sum_{s=1}^{t} \eta_{s}^2 \norm{\uu_{s}}^2. \label{eq:sgd:magic} \end{eqnarray} The proof now considers the two cases separately. \begin{proof-parts} \pfpart{Part~(\ref{thm:grad-desc-converges:a}):} Suppose, by way of contradiction, that $f(\xx_t)\not\rightarrow \inf f$. \eqref{eqn:thm:grad-desc-converges:1} implies that the sequence of values $f(\xx_t)$ is nonincreasing, which means that they must have a limit, which is also equal to their infimum. Let $\gamma=\lim f(\xx_t)=\inf f(\xx_t)$. By our assumption, $\gamma>\inf f$, which also implies $\gamma\in\R$. Thus, there exists a point $\zz\in\Rn$ with $\gamma > f(\zz) \geq \inf f$ (and with $f(\zz)\in\R$ since $f$ is proper). 
Thus, for all $t$, \begin{eqnarray*} 2\paren{\sum_{s=1}^t \eta_{s}} (\gamma - f(\zz)) &\leq& 2 \sum_{s=1}^t \eta_{s} (f(\xx_{s+1}) - f(\zz)) \\ &\leq& 2 \sum_{s=1}^t \eta_{s} (f(\xx_{s}) - f(\zz)) - \sum_{s=1}^t \eta_{s}^2 \norm{\uu_{s}}^2 \\ &\leq& \norm{\xx_1 - \zz}^2. \end{eqnarray*} The first inequality is because $\gamma=\inf f(\xx_t)$. The second and third inequalities are by \eqref{eqn:thm:grad-desc-converges:1} and \eqref{eq:sgd:magic}, respectively. The left-hand side of this inequality is converging to $+\infty$ as $t\rightarrow+\infty$, since $\gamma>f(\zz)$ and since $\sum_{t=1}^\infty \eta_t = +\infty$. But this is a contradiction since the right-hand side is constant and finite. \pfpart{Part~(\ref{thm:grad-desc-converges:b}):} Similar to the last argument, suppose again, by way of contradiction, that $f(\xhat_t)\not\rightarrow \inf f$. Then for some $\gamma>\inf f$, $\gamma\in\R$, and some infinite set of indices $S\subseteq\nats$, we must have $f(\xhat_t)\geq \gamma$ for all $t\in S$. Further, there exists $\zz\in\Rn$ such that $\gamma>f(\zz)\geq \inf f$, implying $f(\zz)\in\R$ (since $f$ is proper). Note that, because $f$ is convex, \begin{equation} \label{eqn:thm:grad-desc-converges:3} f(\xhat_t) \leq \frac{\sum_{s=1}^t \eta_{s} f(\xx_{s})}{\sum_{s=1}^t \eta_{s}}. \end{equation} Thus, for all $t\in S$, \begin{eqnarray*} 2\paren{\sum_{s=1}^t \eta_{s}} (\gamma - f(\zz)) &\leq& 2\paren{\sum_{s=1}^t \eta_{s}} (f(\xhat_t) - f(\zz)) \\ &\leq& 2 \sum_{s=1}^t \eta_{s} (f(\xx_{s}) - f(\zz)) \\ &\leq& \norm{\xx_1 - \zz}^2 + \sum_{s=1}^{\infty} \eta_{s}^2 \norm{\uu_{s}}^2. \end{eqnarray*} The first inequality is because $t\in S$. The second and third inequalities are from \eqref{eqn:thm:grad-desc-converges:3} and \eqref{eq:sgd:magic}, respectively. As before, our assumptions imply that the left-hand side can be made arbitrarily large, since $S$ is infinite. But this is a contradiction since the right-hand side is finite and constant. 
\qedhere \end{proof-parts} \end{proof} Theorem~\ref{thm:grad-desc-converges}(\ref{thm:grad-desc-converges:a}) proves convergence assuming a lower bound on how much $f(\xx_t)$ decreases on each iteration as a function of its gradient, an approach that will henceforth be our main focus. As an example of when this is possible, suppose $f$ is \emph{smooth}, meaning \begin{equation} \label{eqn:grad-bnd-smooth} f(\xx') \leq f(\xx) + \gradf(\xx)\cdot(\xx'-\xx) + \frac{\beta}{2} \norm{\xx'-\xx}^2 \end{equation} for all $\xx,\,\xx'\in\Rn$, for some constant $\beta>0$ (and assuming $f$ is differentiable). Then if $\xx_{t+1}$ is computed as in \eqref{eqn:grad-desc-defn} with $\eta_t=1/\beta$, then this smoothness condition implies \[ f(\xx_{t+1}) \leq f(\xx_t) - \frac{1}{2\beta} \norm{\gradf(\xx_t)}^2. \] Thus, this update is guaranteed to decrease the function values of the iterates (from $f(\xx_t)$ to $f(\xx_{t+1})$) by at least a constant times $\norm{\gradf(\xx_t)}^2$. Once established, such a guarantee of progress can sometimes be sufficient to ensure $f(\xx_t)\rightarrow \inf f$, as was just seen in Theorem~\ref{thm:grad-desc-converges}(\ref{thm:grad-desc-converges:a}). Intuitively, if $\gradf(\xx_t)$ is getting close to $\zero$, then we should be approaching $f$'s minimum; on the other hand, as long as $\norm{\gradf(\xx_t)}$ remains large, we are assured of significant progress (in reducing $f(\xx_t)$) on each iteration. As such, we might expect that a progress guarantee of this kind should suffice to ensure the convergence of a broad family of methods, not just (sub)gradient descent. Nevertheless, as will be seen shortly, although these intuitions seem superficially reasonable, it is not always the case that such a guarantee is sufficient to ensure convergence to the function's minimum. 
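The per-iteration progress guarantee just derived from smoothness can also be checked numerically. The sketch below uses a hypothetical quadratic (not from the text) whose smoothness constant $\beta$ is its largest curvature, runs the update \eqref{eqn:grad-desc-defn} with $\eta_t=1/\beta$, and asserts the bound $f(\xx_{t+1})\le f(\xx_t)-\frac{1}{2\beta}\norm{\gradf(\xx_t)}^2$ at every step:

```python
# Numeric sketch: for a smooth f, gradient descent with step 1/beta decreases
# f by at least ||grad f||^2 / (2*beta) per iteration. Here f is a hypothetical
# quadratic f(x) = 0.5*(a1*x1^2 + a2*x2^2), with smoothness constant
# beta = max(a1, a2).

a = (1.0, 4.0)
beta = max(a)

def f(x):
    return 0.5 * (a[0] * x[0] ** 2 + a[1] * x[1] ** 2)

def grad(x):
    return (a[0] * x[0], a[1] * x[1])

x = (3.0, -2.0)
eta = 1.0 / beta
for _ in range(20):
    g = grad(x)
    x_new = (x[0] - eta * g[0], x[1] - eta * g[1])
    # guaranteed progress: f(x_new) <= f(x) - ||g||^2 / (2*beta)
    assert f(x_new) <= f(x) - (g[0] ** 2 + g[1] ** 2) / (2 * beta) + 1e-12
    x = x_new
```

Of course, a quadratic has a finite minimizer; the point of the discussion above is precisely that such a progress bound may or may not suffice when no finite minimizer exists.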
For the remainder of this subsection, we apply astral methods to study in substantially greater generality when such convergence is assured for an arbitrary iterative method, given such a lower bound on per-iteration progress in terms of gradients or subgradients. In particular, we will see that continuity in astral space is sufficient to ensure such convergence. We will also see that such a result is not possible, in general, when astral continuity is not assumed. We thus focus on generalizing the approach taken in part~(\ref{thm:grad-desc-converges:a}) of Theorem~\ref{thm:grad-desc-converges} using astral methods, leaving the generalization of the approach in part~(\ref{thm:grad-desc-converges:b}) for future work. Let $\seq{\xx_t}$ be any sequence in $\Rn$ (not necessarily computed using gradient descent), and let $\seq{\uu_t}$ in $\Rn$ be a corresponding sequence of subgradients so that $\uu_t\in\partial f(\xx_t)$ for all $t$. Generalizing the kind of progress bounds considered above (such as \eqref{eqn:thm:grad-desc-converges:1}), we suppose that \begin{equation} \label{eqn:aux-fcn-prog-bnd} f(\xx_{t+1}) \leq f(\xx_t) - \alpha_t h(\uu_t) \end{equation} for some $\alpha_t\geq 0$, and some function $h:\Rn\rightarrow\Rpos$. We assume $h$ satisfies the properties that $h(\zero)=0$ and, for all $\epsilon>0$, \[ \inf \{ h(\uu) : \uu\in\Rn, \norm{\uu}\geq\epsilon \} > 0. \] We call such a function an \emph{adequate auxiliary function}. Intuitively, if $h(\uu)$ is small, these properties force $\uu$ to be close to $\zero$. For example, \eqref{eqn:grad-bnd-smooth} satisfies \eqref{eqn:aux-fcn-prog-bnd} with $\alpha_t=1/(2\beta)$ and $h(\uu)=\norm{\uu}^2$, which clearly is an adequate auxiliary function. 
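To make the definition concrete, the following sketch numerically contrasts $h(\uu)=\norm{\uu}^2$, which is adequate, with a hypothetical $h$ that vanishes as $\norm{\uu}\rightarrow+\infty$ and so fails the infimum condition (both functions are illustrative choices, not drawn from the text above):

```python
import numpy as np

def h_adequate(u):
    # h(u) = ||u||^2: h(0) = 0, and inf{h(u) : ||u|| >= eps} = eps^2 > 0.
    return u @ u

def h_inadequate(u):
    # Hypothetical h with h(0) = 0 and h > 0 away from zero, but which
    # vanishes as ||u|| grows, so inf{h(u) : ||u|| >= eps} = 0: small
    # values of h no longer force u to be close to zero.
    n2 = u @ u
    return n2 / (1.0 + n2 * n2)

eps = 0.5
# Probe vectors with norm >= eps, of unboundedly growing norm.
probes = [np.array([r, 0.0]) for r in (0.5, 1.0, 10.0, 100.0, 1000.0)]

inf_adequate = min(h_adequate(u) for u in probes)
inf_inadequate = min(h_inadequate(u) for u in probes)

assert inf_adequate == eps ** 2  # bounded away from zero
assert inf_inadequate < 1e-5     # can be made arbitrarily small
```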
In general, if $h$ is continuous, strictly positive except at $\zero$, and radially nondecreasing (meaning $h(\lambda\uu)\geq h(\uu)$ for all $\lambda\geq 1$ and all $\uu\in\Rn$), then it must be an adequate auxiliary function (by compactness of $\{ \uu\in\Rn : \norm{\uu}=\epsilon \}$ for $\epsilon>0$). Given a bound as in \eqref{eqn:aux-fcn-prog-bnd}, if $\fext$ is continuous (either everywhere, or just at the subsequential limits of the sequence of iterates), then the next theorem shows how we can use our previous results to prove convergence to $f$'s minimum, without requiring $f$ to have a finite minimizer, nor the sequence of iterates $\seq{\xx_t}$ to remain bounded. Unlike Theorem~\ref{thm:grad-desc-converges}, this theorem can be applied to any sequence $\seq{\xx_t}$, regardless of how it is computed or constructed (provided, of course, that it satisfies the stated conditions). Furthermore, the theorem relies on an assumed progress bound of a much weaker and more general form. \begin{theorem}\label{thm:fact:gd} Let $f:\Rn\rightarrow\Rext$ be convex and proper. Let $\seq{\xx_t}$ and $\seq{\uu_t}$ be sequences in $\Rn$ with each $\uu_t\in\partial f(\xx_t)$. Assume that $\fext$ is continuous at every subsequential limit of the sequence $\seq{\xx_t}$. Also assume that \[f(\xx_{t+1}) \leq f(\xx_t) - \alpha_t h(\uu_t)\] for each $t$, where $\alpha_t\geq 0$ and $\sum_{t=1}^\infty \alpha_t = +\infty$, and where $h:\Rn\rightarrow\Rpos$ is an adequate auxiliary function. Then $f(\xx_t)\rightarrow \inf f$. \end{theorem} \begin{proof} Note first that $f(\xx_1)<+\infty$ since $\uu_1\in\partial f(\xx_1)$ (by \Cref{roc:thm23.4}\ref{roc:thm23.4:b}). Suppose that $\liminf h(\uu_t) = 0$. Then in this case, there exists a subsequence $\seq{\xx_{s(t)}}$, with indices $s(1)<s(2)<\dotsb$, such that $h(\uu_{s(t)})\rightarrow 0$. We claim further that $\uu_{s(t)}\rightarrow\zero$. Suppose otherwise. 
Then for some $\epsilon>0$, $\norm{\uu_{s(t)}}\geq\epsilon$ for infinitely many values of $t$. Let $ \delta = \inf \{ h(\uu) : \uu\in\Rn, \norm{\uu}\geq\epsilon \} $. Then $\delta>0$, since $h$ is an adequate auxiliary function. Thus, $h(\uu_{s(t)})\geq\delta>0$ for infinitely many values of $t$. But this contradicts that $h(\uu_{s(t)})\rightarrow 0$. Since $\uu_{s(t)}\rightarrow\zero$, we can apply Theorem~\ref{thm:subdiff-min-cont} to the extracted subsequences $\seq{\xx_{s(t)}}$ and $\seq{\uu_{s(t)}}$, noting that, by assumption, $\fext$ is continuous at all the subsequential limits of $\seq{\xx_{s(t)}}$, and that $\sup f(\xx_t)<+\infty$ since $f(\xx_1)<+\infty$ and since the function values $f(\xx_t)$ are nonincreasing (from \eqref{eqn:aux-fcn-prog-bnd}, since $\alpha_t\geq 0$ and $h\geq 0$). Together, these imply that all the subsequential limits of $\seq{\xx_{s(t)}}$ are in $\contsetf$. Theorem~\ref{thm:subdiff-min-cont} thus yields $f(\xx_{s(t)})\rightarrow \inf f$. Furthermore, this shows the entire sequence, which is nonincreasing, converges to $f$'s minimum as well, so that $f(\xx_t)\rightarrow\inf f$. In the alternative case, suppose $\liminf h(\uu_t) > 0$. Then there exists a positive integer $t_0$ and $\epsilon>0$ such that $h(\uu_t)\geq\epsilon$ for all $t\geq t_0$. Summing \eqref{eqn:aux-fcn-prog-bnd} yields, for $t > t_0$, \[ f(\xx_t) \leq f(\xx_1) - \sum_{s=1}^{t-1} \alpha_{s} h(\uu_{s}) \leq f(\xx_1) - \epsilon \sum_{s=t_0}^{t-1} \alpha_{s}. \] As $t\rightarrow+\infty$, the sum on the right converges to $+\infty$ (by assumption, even disregarding finitely many terms), implying $f(\xx_t)\rightarrow -\infty$. Thus, $\inf f=-\infty$ and $f(\xx_t)\rightarrow\inf f$ in this case as well. 
\end{proof} If we drop the assumption regarding $\fext$'s continuity, then the convergence proved in Theorem~\ref{thm:fact:gd} can no longer be assured, in general, even given a progress bound like the one in \eqref{eqn:aux-fcn-prog-bnd} with the $\alpha_t$'s all equal to a positive constant, and even when $h(\uu)=\norm{\uu}^2$, the most standard case: \begin{example} \label{ex:x1sq-over-x2:progress} Consider again the function $f$ given in \eqref{eqn:curve-discont-finiteev-eg}, studied in Examples~\ref{ex:x1sq-over-x2:cont} and~\ref{ex:x1sq-over-x2:grad}. Let $\xx_t=\trans{[t+1,\, t(t+1)]}$. Then $f(\xx_t)=(t+1)/t$ and $\gradf(\xx_t)=\trans{[2/t,\, -1/t^2]}$. It can be checked that, for $t\geq 1$, \[ f(\xx_{t+1}) - f(\xx_t) = -\frac{1}{t(t+1)} \leq -\frac{1}{10} \cdot \frac{4t^2+1}{t^4} = -\frac{1}{10} \norm{\gradf(\xx_t)}^2. \] In other words, \eqref{eqn:aux-fcn-prog-bnd} is satisfied with $\alpha_t=1/10$, for all $t$, and $h(\uu)=\norm{\uu}^2$ (and with $\uu_t=\gradf(\xx_t)$). Thus, all of the conditions of Theorem~\ref{thm:fact:gd} are satisfied, except that $\fext$ is not continuous everywhere, nor at the limit of this particular sequence (namely, $\limray{\ee_2}\plusl\limray{\ee_1}$). And indeed, the theorem's conclusion is false in this case since $f(\xx_t)\rightarrow 1 > 0 = \inf f$. \end{example} This shows that Theorem~\ref{thm:fact:gd} is false, in general, if continuity is not assumed. Nevertheless, this does not rule out the possibility that particular algorithms might be effective at minimizing a convex function, even without a continuity assumption; indeed, this was shown to be true of subgradient descent in Theorem~\ref{thm:grad-desc-converges}(\ref{thm:grad-desc-converges:a}). Before providing concrete consequences of Theorem~\ref{thm:fact:gd}, we pause to discuss related literature. 
An intermediate between the guarantees of Theorems~\ref{thm:grad-desc-converges} and~\ref{thm:fact:gd} is the classical \emph{Zoutendijk condition} \citep[Eq.~3.14]{nocedal_wright}: for a broad family of descent methods whose steps merely have a positive inner product with the negative gradient direction, applied to functions which are smooth as in \Cref{eqn:grad-bnd-smooth}, the sequence of {gradients} must converge to zero, that is, $\nabla f(\xx_t) \to 0$. However, for the function $f$ in Example~\ref{ex:x1sq-over-x2:progress}, we saw that there exists a sequence $(\xx_t)$ satisfying these progress conditions, but also $\nabla f(\xx_t) \to 0$ and $f(\xx_t) \to 1 > 0 = \inf f$; thus, the classical Zoutendijk analysis is insufficient to recover the conclusion of \Cref{thm:fact:gd}. More recent work has established conditions under which gradient descent converges to minimizers under smoothness even without convexity \citep{jason_minimizers}; however, this work assumes the iterates $\xx_t$ are bounded and that a finite minimizer exists in $\Rn$, unlike \Cref{thm:fact:gd} which makes no such assumptions. A natural follow-up to \Cref{thm:fact:gd} is whether it is possible to establish convergence of the entire sequence of iterates $(\xx_t)$ in $\extspace$, not just subsequences. This question will not be settled here and is moreover left open by the existing literature, even in well-studied special cases. In particular, the promising literature on \emph{implicit regularization} of standard descent methods can show that if the function being minimized satisfies certain structural conditions, then {coordinate-descent} iterates (defined below) lie in a certain cone finer than the recession cone \citep{boosting_margin,zhang_yu_boosting,mjt_margins}, and furthermore that gradient-descent iterates have a convergent dominant direction \citep{nati_logistic,riskparam}. 
However, these guarantees and their associated proofs alone are insufficient to establish convergence over $\extspace$. We return now to consequences of Theorem~\ref{thm:fact:gd}, which we show can be applied to prove the effectiveness of a number of standard algorithms, even when no finite minimizer exists. \emph{Coordinate descent} is one such method in which, at each iteration, just one of the coordinates of $\xx_t$ is chosen and updated, not modifying any of the others; thus, $\xx_{t+1}=\xx_t + \eta_t \ee_{i_t}$ for some basis vector $\ee_{i_t}$ and some $\eta_t\in\R$. In a \emph{gradient-based} version of coordinate descent, $i_t$ is chosen to be the index of the coordinate of the gradient $\gradf(\xx_t)$ that is largest in magnitude. In a \emph{fully greedy} version, both $i_t$ and $\eta_t$ are chosen to effect the maximum possible decrease in the function value, that is, to minimize $f(\xx_t+\eta\ee_i)$ among all choices of $i\in\{1,\ldots,n\}$ and $\eta\in\R$. Many other variations are possible. \emph{Steepest descent} \citep[Section~9.4]{boyd_vandenberghe} is a technique generalizing both gradient descent and one form of coordinate descent. In this method, for some $p\geq 1$, on each iteration $t$, $\vv_t\in\Rn$ is chosen to maximize $\gradf(\xx_t)\cdot\vv_t$ subject to $\normp{\vv_t}\leq 1$, and then subtracted from $\xx_t$ after scaling by some step size $\eta_t>0$; thus, $\xx_{t+1} = \xx_t - \eta_t \vv_t$. (Here, $\normp{\xx}=\Parens{\sum_{i=1}^n |x_i|^p}^{1/p}$ is the $\ell_p$-norm of $\xx\in\Rn$.) When $p=2$, this is gradient descent; when $p=1$, it is (gradient-based) coordinate descent. If $f$ is smooth so that \eqref{eqn:grad-bnd-smooth} is satisfied, then for all $p\geq 1$, steepest descent satisfies \eqref{eqn:aux-fcn-prog-bnd} and thus Theorem~\ref{thm:fact:gd} can be applied to prove its convergence (given the other conditions of that theorem). To see this, recall that there exists a constant $C_p>0$ for which $\norms{\xx}\leq C_p \normp{\xx}$ for all $\xx\in\Rn$.
Let $\beta'=C_p^2 \beta$ and let $q\geq 1$ be such that $1/p + 1/q = 1$ (allowing either $p$ or $q$ to be $+\infty$). Then we have: \begin{eqnarray*} f(\xx_{t+1}) &\leq& f(\xx_t) + \gradf(\xx_t)\cdot\parens{\xx_{t+1} - \xx_t} + \frac{\beta}{2} \norms{\xx_{t+1} - \xx_{t}}^2 \\ &\leq& f(\xx_t) + \gradf(\xx_t)\cdot\parens{\xx_{t+1} - \xx_t} + \frac{\beta'}{2} \normp{\xx_{t+1} - \xx_{t}}^2 \\ &=& f(\xx_t) - \eta_t \gradf(\xx_t)\cdot\vv_t + \frac{\beta'\eta_t^2}{2} \\ &=& f(\xx_t) - \eta_t \normq{\gradf(\xx_t)} + \frac{\beta'\eta_t^2}{2} \\ &=& f(\xx_t) - \frac{1}{2\beta'} \normq{\gradf(\xx_t)}^2. \end{eqnarray*} The first equality is because $\xx_{t+1}-\xx_t=-\eta_t \vv_t$ and $\normp{\vv_t}=1$. The second equality is because, for all $\zz\in\Rn$, \[ \max_{\vv:\normp{\vv}\leq 1} \zz\cdot\vv = \normq{\zz}. \] And the last equality holds if we set $\eta_t = \normq{\gradf(\xx_t)} / \beta'$. Thus, \eqref{eqn:aux-fcn-prog-bnd} is satisfied with $\alpha_t = 1/(2\beta')$ and $h(\uu) = \normq{\uu}^2$, which is an adequate auxiliary function. In particular, for smooth convex functions $f$ as above, gradient-based coordinate descent effectively minimizes $f$, for appropriate step sizes. This also shows that the same holds for fully-greedy coordinate descent since this version makes at least as much progress on each iteration in decreasing the function values as the gradient-based version. As a consequence, any of the algorithms just discussed can be applied to a range of commonly-encountered convex optimization problems which, in general, might have no finite minimizers. For instance, logistic regression, as seen in \eqref{eqn:logistic-reg-obj}, is based on minimization of a function of the form \[ f(\xx) = \sum_{i=1}^m \ln\bigParens{1+\exp(\xx\cdot\uu_i)} \] for all $\xx\in\Rn$, and for some vectors $\uu_1,\ldots,\uu_m\in\Rn$. This function is convex and proper, but its minimizers might well be at infinity.
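The derivation above translates directly into code. The sketch below assumes a hypothetical quadratic objective for which the smoothness bound \eqref{eqn:grad-bnd-smooth} holds with $\beta=\lambda_{\max}(A)$, and uses $C_p=1$ for $p\in\{1,2\}$ (since $\norm{\xx}\leq\normp{\xx}$ in those cases, so $\beta'=\beta$); it implements the steepest-descent step with $\eta_t=\normq{\gradf(\xx_t)}/\beta'$ and asserts the progress bound at every iteration:

```python
import numpy as np

# Hypothetical smooth convex objective f(x) = 0.5 x^T A x; its
# smoothness constant (w.r.t. the 2-norm) is beta = lambda_max(A).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
beta = np.linalg.eigvalsh(A).max()

def f(x):
    return 0.5 * x @ A @ x

def grad_f(x):
    return A @ x

def steepest_descent_step(x, p, beta_prime):
    g = grad_f(x)
    if p == 2:
        v = g / np.linalg.norm(g)      # maximizes g.v over ||v||_2 <= 1
        gq = np.linalg.norm(g)         # dual norm: ||g||_2
    else:                              # p == 1: coordinate descent
        i = np.argmax(np.abs(g))       # coordinate largest in magnitude
        v = np.zeros_like(g)
        v[i] = np.sign(g[i])
        gq = np.abs(g[i])              # dual norm: ||g||_inf
    eta = gq / beta_prime              # step size from the derivation
    x_next = x - eta * v
    # Progress bound: f decreases by at least ||g||_q^2 / (2 beta').
    assert f(x_next) <= f(x) - gq ** 2 / (2.0 * beta_prime) + 1e-12
    return x_next

# For p in {1, 2}, ||x||_2 <= ||x||_p, so C_p = 1 and beta' = beta here.
finals = {}
for p in (1, 2):
    x = np.array([4.0, -1.0])
    for _ in range(500):
        if np.linalg.norm(grad_f(x)) < 1e-15:
            break
        x = steepest_descent_step(x, p, beta)
    finals[p] = f(x)
```

Both runs drive $f$ to its minimum, as Theorem~\ref{thm:fact:gd} predicts once the per-iteration progress bound is in place.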
Furthermore, its extension $\fext$ is continuous everywhere (by Proposition~\ref{pr:hard-core:1}\ref{pr:hard-core:1:a}). It can be shown that this function is smooth, for instance, by explicitly computing the Hessian $\nabla^2 f(\xx)$ for any $\xx\in\Rn$, each of whose components can be bounded by some constant that depends only on $\uu_1,\ldots,\uu_m$, which in turn implies \eqref{eqn:grad-bnd-smooth}, for an appropriate $\beta>0$, using Taylor's theorem. Thus, any of the methods discussed above can be applied to minimize this function. As another example, for $\xx\in\Rn$, let \[ f(\xx) = {\sum_{i=1}^m \exp(\xx\cdot\uu_i) } \] for some vectors $\uu_1,\ldots,\uu_m\in\Rn$, and let $g(\xx)=\ln f(\xx)$. Both functions $f$ and $g$ are convex and proper. The extension $\fext$ is continuous everywhere (by Proposition~\ref{pr:hard-core:1}\ref{pr:hard-core:1:a}), which also implies that $\gext$ is continuous everywhere. The function $g$ can be shown to be smooth, similar to the sketch above. Since $\log$ is strictly increasing, minimizing $g$ is equivalent to minimizing $f$. Thus, either $f$ or $g$ can be minimized by applying any of the methods above to $g$ (even if $\inf g = -\infty$). Moreover, whether applied to $f$ or $g$, fully-greedy coordinate descent is identical in its updates (as are the other methods, for appropriate choices of step size). In particular, the AdaBoost algorithm~\citep{schapire_freund_book_final} can be viewed as minimizing an \emph{exponential loss} function of exactly the same form as $f$ using fully-greedy coordinate descent; therefore, the arguments above prove that AdaBoost effectively minimizes exponential loss (as had previously been proved by \citealp{collins_schapire_singer_adaboost_bregman} using more specialized techniques based on Bregman distances). As a last example, let $\distset$ be a finite, nonempty set, and let $\featmap:\distset\rightarrow\Rn$. 
For any $\xx\in\Rn$, we can define a natural probability distribution $\qx$ over $\distset$ of the form \[ \qx(\distelt) = \frac{e^{\xx\cdot\featmap(\distelt)}} {\sum_{\distalt\in\distset} e^{\xx\cdot\featmap(\distalt)}}. \] Given samples $\distelt_1,\ldots,\distelt_m\in \distset$, we can then attempt to estimate the distribution generating these examples by minimizing the \emph{negative log likelihood} \[ f(\xx) = -\frac{1}{m} \sum_{j=1}^m \ln \qx(\distelt_j). \] Similar to the preceding examples, this function is convex, proper, smooth and can be shown to have an extension that is continuous everywhere. Therefore, once again, any of the methods above can be applied to minimize it, even if none of its minimizers are finite. In the next chapter, we study this family of distributions and estimation approach in detail. \section{Exponential-family distributions} \label{sec:exp-fam} We next study a broad and well-established family of probability distributions called the \emph{exponential family}. We will see how astral notions can be applied to handle the common situation in which the parameter values of interest are at infinity, as will be explained shortly. The development that we give here for the standard setting is quite well-studied. Many of the astral results presented below are generalizations or extensions of known results for the standard case. See, for instance, \citet[Chapter~3]{WainwrightJo08} for further background. \subsection{The standard setting} \label{sec:exp-fam:standard-setting} For simplicity, we focus on probability distributions defined over some nonempty, finite set $\distset$. In this chapter, a distribution $p$ is a function $p:\distset\rightarrow [0,1]$ with $\sum_{\distelt\in\distset} p(\distelt)=1$. We let $\Delta$ denote the set of all such distributions. We suppose we are given a \emph{feature function} $\featmap:\distset\rightarrow \Rn$, for some $n\geq 1$. 
For each $\distelt\in\distset$, $\featmap(\distelt)$ can be regarded as a kind of description of element $\distelt$, with each of the $n$ components $\featmapj$ providing one feature or descriptor. For example, if the elements $\distelt$ are people, then the features might provide each person's height, weight, age, and so on. We can use the feature function to construct probability distributions over $\distset$. An \emph{exponential-family distribution} is defined by parameters $\xx\in\Rn$, and denoted $\qx$, placing probability mass on $\distelt\in\distset$ proportional to $e^{\xx\cdot\featmap(\distelt)}$. Thus, \begin{equation} \label{eqn:exp-fam-defn} \qx(\distelt) = \frac{e^{\xx\cdot\featmap(\distelt)}} {\sumex(\xx)} = e^{\xx\cdot\featmap(\distelt) - \lpart(\xx)}, \end{equation} where $\sumex:\Rn\rightarrow \R$ provides normalization, and $\lpart:\Rn\rightarrow \R$ is its logarithm, called the \emph{log-partition function}. That is, for $\xx\in\Rn$, \begin{equation} \label{eqn:sumex-dfn} \sumex(\xx) = {\sum_{\distelt\in\distset} e^{\xx\cdot\featmap(\distelt)}}, \end{equation} and \[ \lpart(\xx) = \ln \sumex(\xx) = \ln\paren{\sum_{\distelt\in\distset} e^{\xx\cdot\featmap(\distelt)}}. \] Both of these functions are convex; they will play a central role in the development to follow. Exponential-family distributions are commonly used to estimate an unknown distribution from data. In such a setting, we suppose access to $\numsamp$ independent samples from some unknown distribution $\popdist$ over $\distset$. Let $\Nelt$ be the number of times that element $\distelt$ is observed. From such information, we aim to estimate $\popdist$. A standard approach is to posit that some exponential-family distribution $\qx$ is a reasonable approximation to $\popdist$, and to then select the parameters $\xx\in\Rn$ of that distribution that best fit the data. 
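The definition in \eqref{eqn:exp-fam-defn} can be made concrete in code. The sketch below (with a hypothetical four-element set $\distset$ and feature map $\featmap$ into $\R^2$) computes $\lpart$ via a numerically stable log-sum-exp and forms $\qx(\distelt)=e^{\xx\cdot\featmap(\distelt)-\lpart(\xx)}$:

```python
import numpy as np

# Hypothetical feature map phi over a four-element set Omega, with
# n = 2; row omega of this matrix is phi(omega).
phi = np.array([[-3.0, 4.0], [1.0, 0.0], [5.0, -4.0], [3.0, 4.0]])

def log_partition(x):
    # Lambda(x) = ln sum_omega exp(x . phi(omega)), via stable log-sum-exp.
    s = phi @ x
    m = s.max()
    return m + np.log(np.exp(s - m).sum())

def q(x):
    # q_x(omega) = exp(x . phi(omega) - Lambda(x)).
    return np.exp(phi @ x - log_partition(x))

x = np.array([0.3, -0.5])
p = q(x)
assert np.all(p > 0) and abs(p.sum() - 1.0) < 1e-12  # q_x is a distribution
```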
In particular, we can use the \emph{likelihood} of the data as a measure of fit, that is, the probability of observing the sequence of outcomes that actually were observed if we suppose that the unknown distribution $\popdist$ is in fact $\qx$. This likelihood can be computed to be \[ \prod_{\distelt\in\distset} [\qx(\distelt)]^{\Nelt}. \] According to the \emph{maximum-likelihood principle}, the parameters $\xx\in\Rn$ should be chosen to maximize this likelihood. Taking the negative logarithm and dividing by $\numsamp$, this is equivalent to choosing $\xx$ to minimize the negative log-likelihood, \begin{equation} \label{eqn:neg-log-like:1} \loglik(\xx) = -\sum_{\distelt\in\distset} \frac{\Nelt}{\numsamp} \ln \qx(\distelt). \end{equation} The negative log-likelihood function is convex in $\xx$; however, it might not have a finite minimizer. Here are some examples: \begin{example} Suppose $\distset=\{1,\ldots,n\}$, and that $\featmap(\distelt)=\ee_{\distelt}$, for $\distelt\in\distset$, where $\ee_1,\ldots,\ee_n$ are the standard basis vectors in $\Rn$. In this case, $\loglik(\xx)$ has exactly the form of the function given in \eqref{eqn:log-sum-exp-fcn} as part of Example~\ref{ex:log-sum-exp} (with $\lsec_i=\Nelt/\numsamp$). As was seen previously in that example, it may happen that there is no finite minimizer (for instance, if $n=3$, $\numsamp=3$, $\Nj{1}=0$, $\Nj{2}=1$, and $\Nj{3}=2$). \end{example} \begin{example} \label{ex:exp-fam-eg1} Suppose $\distset=\{1,2,3,4\}$, $n=2$, and $\featmap$ is defined as follows: \[ \begin{array}{rlcrl} \featmap(1) =& \left[\begin{array}{r} -3 \\ 4 \end{array}\right] & \;\;\;\; & \featmap(2) =& \left[\begin{array}{r} 1 \\ 0 \end{array}\right] \\[3ex] \featmap(3) =& \left[\begin{array}{r} 5 \\ -4 \end{array}\right] & \;\;\;\; & \featmap(4) =& \left[\begin{array}{r} 3 \\ 4 \end{array}\right]. \end{array} \] Suppose further that $\numsamp=4$, $\Nj{1}=3$, $\Nj{2}=1$, and $\Nj{3}=\Nj{4}=0$. 
Then the negative log-likelihood can be computed, for $\xx\in\R^2$, to be \begin{align} \loglik(\xx) &= \loglik(x_1,x_2) \nonumber \\ &= \ln\paren{ e^{-3x_1 + 4x_2} + e^{x_1} + e^{5x_1 - 4x_2} + e^{3x_1 + 4x_2} } + 2x_1 - 3x_2 \nonumber \\ &= \ln\paren{ e^{-x_1 + x_2} + e^{3x_1-3x_2} + e^{7x_1 - 7x_2} + e^{5x_1 + x_2} }. \label{eq:log-like-no-max-eg:2} \end{align} (For instance, this calculation can be done using Proposition~\ref{pr:exp-log-props}\ref{pr:exp-log-props:b}.) This function cannot have a finite minimizer since adding $\trans{[-1,-1]}$ to any point $\xx\in\R^2$ leaves the first three terms inside the log in the last line unchanged while strictly diminishing the last term; thus, $\loglik(x_1-1,x_2-1) < \loglik(x_1,x_2)$ for all $\xx\in\R^2$. \end{example} These examples show that even in simple cases, when working with an exponential family of distributions, there may be no finite setting of the parameters $\xx$ maximizing the likelihood of observed data. Nevertheless, in this section, we will see how the parameter space can be extended from $\Rn$ to astral space, $\extspace$, in a way that preserves and even enhances some of the key properties of exponential-family distributions, and in particular, assures that a maximum-likelihood setting of the parameters always exists. \subsection{Extending to astral space} In what follows, it will be helpful to consider simple translations of the feature function in which some fixed vector $\uu\in\Rn$ is subtracted from all values of the function. Thus, we define this modified feature function $\featmapu:\distset\rightarrow\Rn$ by \[ \featmapu(\distelt)=\featmap(\distelt)-\uu \] for $\distelt\in\distset$. Likewise, we define variants of $\lpart$ and $\sumex$, denoted $\lpartu$ and $\sumexu$, that are associated with $\featmapu$ rather than $\featmap$. 
That is, \[ \sumexu(\xx) = \sum_{\distelt\in\distset} e^{\xx\cdot\featmapu(\distelt)} = \sum_{\distelt\in\distset} e^{\xx\cdot(\featmap(\distelt)-\uu)}, \] and \begin{equation} \label{eqn:lpart-defn} \lpartu(\xx) = \ln \sumexu(\xx) = \lpart(\xx) - \xx\cdot\uu \end{equation} for $\xx\in\Rn$, where the last equality follows by simple algebra. Note that the exponential-family distributions associated with $\featmapu$ are the same as the original distributions $\qx$ associated with $\featmap$ since, for $\distelt\in\distset$, \[ \frac{e^{\xx\cdot\featmapu(\distelt)}} {\sumexu(\xx)} = \frac{e^{\xx\cdot\featmap(\distelt)}} {\sumex(\xx)} = \qx(\distelt). \] These functions $\sumexu$ and $\lpartu$ extend continuously to astral space, for any $\uu\in\Rn$, as we show next. In the proposition, $\logex:[0,+\infty]\rightarrow\Rext$ denotes an extended, continuous version of the logarithm function: \[ \logex(\barx) = \begin{cases} \ln \barx & \mbox{if $0\leq\barx<+\infty$,} \\ +\infty & \mbox{if $\barx=+\infty$,} \end{cases} \] for $\barx\in [0,+\infty]$. We also define $\ln 0 = -\infty$. \begin{proposition} \label{pr:sumex-lpart-cont} Let $\featmap:\distset\rightarrow\Rn$, where $\distset$ is finite and nonempty, and let $\featmapu$, $\sumex$, $\lpart$, $\sumexu$, $\lpartu$ be as defined above. For all $\uu\in\Rn$, the extensions $\sumexextu$ and $\lpartextu$ are continuous everywhere. In particular, for $\xbar\in\extspace$, \[ \sumexextu(\xbar) = \sum_{\distelt\in\distset} \expex(\xbar\cdot\featmapu(\distelt)) \] and \[ \lpartextu(\xbar) = \logex(\sumexextu(\xbar)). \] \end{proposition} \begin{proof} It suffices to prove the proposition for $\featmap$ (that is, when $\uu=\zero$), since the result for general $\uu$ then follows simply by replacing $\featmap$ with $\featmapu$. From \eqref{eqn:sumex-dfn}, we see that $\sumex$ has exactly the form of the functions considered in Section~\ref{sec:emp-loss-min}, specifically, \eqref{eqn:loss-sum-form}.
Therefore, the form and continuity of $\sumexext$ follow directly from Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:a}). The form and continuity of $\lpartext$ then follow from continuity of the extended logarithm function, $\logex$. \end{proof} We show next how the exponential-family distributions $\qx$ themselves, as a function of $\xx$, extend continuously to astral space. From Eqs.~(\ref{eqn:exp-fam-defn}) and~(\ref{eqn:lpart-defn}), for $\distelt\in\distset$ and $\xx\in\Rn$, \[ \qx(\distelt) = \exp\bigParens{\xx\cdot\featmap(\distelt) - \lpart(\xx)} = \exp\bigParens{-\lpartfi(\xx)}. \] In this form, it is straightforward to extend to astral space since all of the functions involved extend continuously. Thus, for $\xbar\in\extspace$, we define a distribution $\qxbar\in\Delta$ with astral parameters $\xbar$ as \begin{equation} \label{eqn:qxbar-defn} \qxbar(\distelt) = \expex\bigParens{-\lpartextfi(\xbar)}. \end{equation} Applying Proposition~\ref{pr:sumex-lpart-cont}, this can be expressed more explicitly as \begin{equation} \label{eqn:qxbar-expr} \qxbar(\distelt) = \invex\bigParens{\sumexextfi(\xbar)} = \invex\Parens{\sum_{\distalt\in\distset} \expex\bigParens{\xbar\cdot(\featmap(\distalt)-\featmap(\distelt))}}. \end{equation} Here, $\invex:[0,+\infty]\rightarrow\Rext$ extends the inverse function $x\mapsto 1/x$: \[ \invex(\barx) = \begin{cases} +\infty & \text{if $\barx=0$,} \\ {1}/{\barx} & \text{if $0<\barx<+\infty$,} \\ 0 & \text{if $\barx=+\infty$,} \end{cases} \] for $\barx\in [0,+\infty]$. \begin{proposition} \label{pr:qxbar-cont} Let $\qxbar$ be as defined above in terms of some function $\featmap:\distset\rightarrow\Rn$, where $\distset$ is finite and nonempty. Let $\distelt\in\distset$, and let $\xbar\in\extspace$. Then as a function of $\xbar$, $\qxbar(\distelt)$ is continuous everywhere. Consequently, $\qxbar$ is a probability distribution in $\Delta$.
\end{proposition} \begin{proof} Continuity of $\qxbar(\distelt)$, as defined in \eqref{eqn:qxbar-defn}, follows directly from Proposition~\ref{pr:sumex-lpart-cont} and from continuity of $\expex$. For any $\xbar\in\extspace$, there exists a sequence $\seq{\xx_t}$ in $\Rn$ converging to $\xbar$. For each $\distelt\in\distset$, $\qxbar(\distelt)\geq 0$ since $\qxt(\distelt)\rightarrow\qxbar(\distelt)$ and $\qxt(\distelt)\geq 0$. Likewise, $\sum_{\distelt\in\distset}\qxbar(\distelt)=1$ since \[ 1 = \sum_{\distelt\in\distset}\qxt(\distelt) \rightarrow \sum_{\distelt\in\distset}\qxbar(\distelt), \] by continuity of addition (Proposition~\ref{prop:lim:eR}\ref{i:lim:eR:sum}). Thus, $\qxbar\in\Delta$. \end{proof} For a distribution $p\in\Delta$, and any function $f$ on $\distset$, we write $\Exp{p}{f}$ for the expected value of $f$ with respect to $p$: \begin{equation} \label{eqn:expect-defn} \Exp{p}{f} = \Exp{\distelt\sim p}{f(\distelt)} = \sum_{\distelt\in\distset} p(\distelt) f(\distelt). \end{equation} We then define the \emph{mean map} $\meanmap:\extspace\rightarrow\Rn$ which maps the parameters $\xbar\in\extspace$ to the mean of the feature map $\featmap$ under the distribution $\qxbar$ defined by $\xbar$: \[ \meanmap(\xbar) = \Exp{\qxbar}{\featmap}. \] We will see that this map plays an important role in the development to follow, as it does generally in the study of standard exponential-family distributions. We also define $\meanmapu$ to be the same as $\meanmap$ with $\featmap$ replaced by $\featmapu$; thus, \begin{equation} \label{eqn:meanmapu-defn} \meanmapu(\xbar) = \Exp{\qxbar}{\featmapu} = \meanmap(\xbar) - \uu, \end{equation} since, as noted earlier, $\qxbar$ is unaffected when $\featmap$ is shifted by a fixed vector $\uu$. Straightforward calculus shows that, for $\xx\in\Rn$, $\meanmap(\xx)$ is in fact the gradient of the log-partition function $\lpart$ at $\xx$; that is, \begin{equation} \label{eqn:grad-is-meanmap} \nabla\lpart(\xx) = \meanmap(\xx). 
\end{equation} We will see below how this fact extends to astral parameters. We first show that the function $\meanmap$ is continuous and also preserves closures. Here and for the rest of this chapter, we refer to the foregoing definitions of $\featmapu$, $\qx$, $\qxbar$, $\sumex$, $\lpart$, $\sumexu$, $\lpartu$, $\meanmap$ and $\meanmapu$, all in terms of the function $\featmap:\distset\rightarrow\Rn$, where $\distset$ is finite and nonempty, as the \emph{general set-up of this chapter}. \begin{proposition} \label{pr:meanmap-cont} Assume the general set-up of this chapter. Then $\meanmap$ is continuous. Furthermore, for every set $S\subseteq\extspace$, $\meanmap(\Sbar)=\cl(\meanmap(S))$. \end{proposition} \begin{proof} From Proposition~\ref{pr:qxbar-cont}, $\qxbar(\distelt)$ is continuous as a function of $\xbar$, for each $\distelt\in\distset$. The continuity of $\meanmap$ then follows by continuity of ordinary vector operations. That $\meanmap(\Sbar)\subseteq\cl(\meanmap(S))$ follows from $\meanmap$ being continuous (\Cref{prop:cont}\ref{prop:cont:a},\ref{prop:cont:c}). For the reverse inclusion, note that $\Sbar$ is compact, being a closed subset of the compact space $\extspace$. Since $\meanmap$ is continuous, its image, $\meanmap(\Sbar)$, is also compact, and therefore closed in $\Rn$ (\Cref{prop:compact}\ref{prop:compact:cont-compact},\ref{prop:compact:closed-subset},\ref{prop:compact:closed}). Since $\meanmap(S)$ is included in the closed set $\meanmap(\Sbar)$, this implies that its closure, $\cl(\meanmap(S))$, is as well. \end{proof} \subsection{Conjugate and astral subgradients} In the development to follow, $\lpartstar$, the conjugate of $\lpart$, will play an important role. The next lemma shows that if $\meanmap(\xbar)=\uu$ then $\lpartstar(\uu)$ is equal to $-\entropy(\qxbar)$, where $\entropy(p)$ denotes the \emph{entropy} of any distribution $p\in\Delta$: \[ \entropy(p) = -\sum_{\distelt\in\distset} p(\distelt) \ln p(\distelt) = -\Exp{p}{\ln p}.
\] (In expressions like the one on the right, we use $\ln p$ as shorthand for the function $\distelt\mapsto \ln p(\distelt)$.) First, we state some simple facts that will be used here and elsewhere: \begin{proposition} \label{pr:exp-log-props} Let $p\in\Delta$. Then the following hold: \begin{letter-compact} \item \label{pr:exp-log-props:a} For all $q\in\Delta$, \[ \Exp{p}{\ln q} \leq \Exp{p}{\ln p} \] with equality if and only if $q=p$. In other words, $\Exp{p}{\ln q}$ is uniquely maximized over $q\in\Delta$ when $q=p$. \item \label{pr:exp-log-props:b} For all $\xx\in\Rn$, \[ \Exp{p}{\ln \qx} = \xx\cdot\Exp{p}{\featmap} - \lpart(\xx). \] \item \label{pr:exp-log-props:c} Let $\seq{\xx_t}$ be a sequence in $\Rn$ that converges to some point $\xbar\in\extspace$. Then \[ \Exp{p}{\ln \qxt} \rightarrow \Exp{p}{\ln \qxbar}. \] \end{letter-compact} \end{proposition} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{pr:exp-log-props:a}):} This is proved, for instance, by \citet[Theorem~2.6.3]{CoverTh91}. \pfpart{Part~(\ref{pr:exp-log-props:b}):} By \eqref{eqn:exp-fam-defn} and linearity of expectations, for all $\xx\in\Rn$, \[ \Exp{p}{\ln \qx} = \Exp{\distelt\sim p}{\xx\cdot\featmap(\distelt) - \lpart(\xx)} = \xx\cdot\Exp{p}{\featmap} - \lpart(\xx). \] \pfpart{Part~(\ref{pr:exp-log-props:c}):} By Proposition~\ref{pr:qxbar-cont}, $\qxt(\distelt)\rightarrow\qxbar(\distelt)$, for $\distelt\in\distset$. Therefore, $\Exp{p}{\ln \qxt}\rightarrow \Exp{p}{\ln \qxbar}$ by continuity of the arithmetic functions involved (Proposition~\ref{prop:lim:eR}), and since $\ln \qxbar(\distelt) \leq 0$ for $\distelt\in\distset$. \qedhere \end{proof-parts} \end{proof} \begin{lemma} \label{lem:lpart-conj} Assume the general set-up of this chapter. Let $\xbar\in\extspace$ and $\uu\in\Rn$, and suppose $\meanmap(\xbar)=\uu$. Then $\lpartstar(\uu)=-\entropy(\qxbar)$. 
\end{lemma} \begin{proof} By Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:b}), for $\xx\in\Rn$, $ \Exp{\qxbar}{\ln \qx} = \xx\cdot\uu - \lpart(\xx) $ since $\meanmap(\xbar)=\uu$. As a result, by definition of the conjugate, \begin{equation} \label{eqn:lem:lpart-conj:1} \lpartstar(\uu) = \sup_{\xx\in\Rn} [\xx\cdot\uu - \lpart(\xx)] = \sup_{\xx\in\Rn} \Exp{\qxbar}{\ln \qx} . \end{equation} By Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:a}), for all $\xx\in\Rn$, $\Exp{\qxbar}{\ln \qx} \leq \Exp{\qxbar}{\ln \qxbar} = -\entropy(\qxbar)$. Therefore, $\lpartstar(\uu)\leq -\entropy(\qxbar)$. For the reverse inequality, there must exist a sequence $\seq{\xx_t}$ in $\Rn$ that converges to $\xbar$ (by Theorem~\ref{thm:i:1}(\ref{thm:i:1d})). By Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:c}), $\Exp{\qxbar}{\ln \qxt}\rightarrow \Exp{\qxbar}{\ln \qxbar}$. Combined with \eqref{eqn:lem:lpart-conj:1}, this implies $\lpartstar(\uu)\geq \Exp{\qxbar}{\ln \qxbar}$, completing the proof. \end{proof} The convex hull of the set $\featmap(\distset)$ is called the \emph{marginal polytope}, consisting of all convex combinations of the points $\featmap(\distelt)$ for $\distelt\in\distset$ (\Cref{roc:thm2.3}). This is exactly the set of means $\Exp{p}{\featmap}$ that can be realized by any distribution $p\in\Delta$ (not necessarily in the exponential family). The next theorem shows that for every point $\uu\in\conv{\featmap(\distset)}$, which is to say every point for which there exists \emph{some} distribution $p\in\Delta$ with $\Exp{p}{\featmap}=\uu$, there must also exist an exponential-family distribution with parameters $\xbar\in\extspace$ realizing the same mean so that $\meanmap(\xbar)=\Exp{\qxbar}{\featmap}=\uu$. Thus, $\meanmap(\extspace)=\convfeat$, which, in light of Lemma~\ref{lem:lpart-conj}, is also equal to the effective domain of $\lpartstar$.
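Lemma~\ref{lem:lpart-conj} and the mapping-onto property just previewed can be checked numerically in a small case. The following Python sketch is our own illustration, not from the text: it uses a hypothetical one-dimensional feature map on three outcomes, computes the mean map $\meanmap(x)=\Exp{\qsx}{\featmapsc}$, approximates $\lpartstar(\uu)=\sup_x\,[x u-\lpart(x)]$ by a grid search, and compares it with $-\entropy(\qsx)$.

```python
import math

# Illustrative one-dimensional set-up (ours, not from the text):
# three outcomes with feature values phi = 0, 1, 3.
phi = [0.0, 1.0, 3.0]

def lpart(x):
    # log-partition Lambda(x) = ln sum_z exp(x * phi(z)), stabilized
    m = max(x * f for f in phi)
    return m + math.log(sum(math.exp(x * f - m) for f in phi))

def q(x):
    # exponential-family distribution q_x(z) = exp(x * phi(z) - Lambda(x))
    return [math.exp(x * f - lpart(x)) for f in phi]

def mean_map(x):
    # Mu(x) = E_{q_x}[phi], the gradient of the log-partition function
    return sum(p * f for p, f in zip(q(x), phi))

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

x = 0.7
u = mean_map(x)

# Lemma lem:lpart-conj: Lambda*(u) = -H(q_x) when u = Mu(x);
# approximate the defining supremum by a crude grid search.
grid = [-20.0 + 40.0 * i / 40000 for i in range(40001)]
conj = max(xp * u - lpart(xp) for xp in grid)
assert abs(conj - (-entropy(q(x)))) < 1e-3

# Theorem thm:meanmap-onto: finite parameters map into
# ri(conv phi(Omega)) = (0, 3), the endpoints arising only as x -> +/- infinity.
assert 0.0 < u < 3.0
```

For large $|x|$ the computed mean approaches the extreme points $0$ and $3$ of the marginal polytope, matching the claim that only astral parameters realize means on its relative boundary.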
The theorem further shows that the image of $\Rn$ under $\meanmap$ is equal to the relative interior of this same set. In the following proof and throughout this development, we also make use of the hard core $\hardcore{f}$, as introduced in Section~\ref{sec:emp-loss-min}, specifically, Definition~\ref{def:hard-core}. \begin{theorem} \label{thm:meanmap-onto} Assume the general set-up of this chapter. For all $\uu\in\Rn$, if $\uu\in\convfeat$, then there exists $\xbar\in\extspace$ for which $\meanmap(\xbar)=\uu$, implying that $\lpartstar(\uu)=-\entropy(\qxbar)$; otherwise, if $\uu\not\in\convfeat$, then $\lpartstar(\uu)=+\infty$. As such, \[ \meanmap(\extspace) = \convfeat = \dom \lpartstar, \] and \[ \meanmap(\Rn) = \ri\bigParens{\convfeat}. \] \end{theorem} \begin{proof} The proof consists mainly of a series of inclusions. \setcounter{claimp}{0} \begin{claimpx} \label{cl:thm:meanmap-onto:1} $\ri(\dom \lpartstar)\subseteq \meanmap(\Rn)$. \end{claimpx} \begin{proofx} Suppose $\uu\in\ri(\dom\lpartstar)$. Since $\lpartstar$ is convex and proper, it has a subgradient at every point in $\ri(\dom \lpartstar)$ (by \Cref{roc:thm23.4}\ref{roc:thm23.4:a}), implying there exists $\xx\in\subdiflpartstar(\uu)$. Since $\lpart$ is closed and proper, this further implies that $\uu\in\subdiflpart(\xx)$ by \Cref{pr:stan-subgrad-equiv-props}(\ref{pr:stan-subgrad-equiv-props:a},\ref{pr:stan-subgrad-equiv-props:c}). As noted earlier, $\lpart$ is differentiable and finite everywhere, so the only element of $\subdiflpart(\xx)$ is $\nabla\lpart(\xx)$ (by \Cref{roc:thm25.1}\ref{roc:thm25.1:a}). Therefore, $\uu=\nabla\lpart(\xx)=\meanmap(\xx)$ by \eqref{eqn:grad-is-meanmap}, so $\uu\in\meanmap(\Rn)$. \end{proofx} \begin{claimpx} \label{cl:thm:meanmap-onto:3} $\meanmap(\extspace)\subseteq \dom \lpartstar$. \end{claimpx} \begin{proofx} Suppose $\uu\in\meanmap(\extspace)$ so that $\meanmap(\xbar)=\uu$ for some $\xbar\in\extspace$. 
Then by Lemma~\ref{lem:lpart-conj}, $\lpartstar(\uu)=-\entropy(\qxbar)\leq 0$. Thus, $\uu\in \dom \lpartstar$. \end{proofx} Combining the claims now yields \[ \meanmap(\extspace) \subseteq \dom \lpartstar \subseteq \cl(\dom \lpartstar) \subseteq \cl(\meanmap(\Rn)) = \meanmap(\extspace), \] where the third inclusion follows from Claim~\ref{cl:thm:meanmap-onto:1} after taking the closure of both sides of the stated inclusion (since $\cl(\ri(\dom \lpartstar))=\cl(\dom \lpartstar)$ by \Cref{pr:ri-props}\ref{pr:ri-props:roc-thm6.3}), and the equality is from Proposition~\ref{pr:meanmap-cont}. Thus, $\meanmap(\extspace) = \dom \lpartstar$. Next, for $\xx\in\Rn$, note that $\qx$ is a distribution in $\Delta$ with $\qx>0$. Therefore, $\meanmap(\xx)$ is in $\ri(\convfeat)$ by Proposition~\ref{pr:ri-conv-finite}. Thus, $\meanmap(\Rn)\subseteq\ri(\convfeat)$. The next claim proves the reverse inclusion. \begin{claimpx} \label{cl:thm:meanmap-onto:4} $\ri(\convfeat)\subseteq\meanmap(\Rn)$. \end{claimpx} \begin{proofx} Let $\uu\in\ri(\convfeat)$. Then $\zero\in\ri(\convfeatu)$ since subtracting $\uu$ from $\featmap$ simply translates everything, including $\convfeat$, by $-\uu$. Since $\sumexu$ has the form of functions in Section~\ref{sec:emp-loss-min}, we can apply Theorem~\ref{thm:erm-faces-hardcore}(\ref{thm:erm-faces-hardcore:a}) yielding that $\distset\subseteq \hardcore{\sumexu}$, and so $\hardcore{\sumexu}=\distset$, where $\hardcore{\sumexu}$ is the hard core of $\sumexu$. This in turn implies, by Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:b:univ}), that $\zero\in\unimin{\sumexu}$. Therefore, $\sumexextu$ has a minimizer of the form $\zero\plusl\xx=\xx$ for some $\xx\in\Rn$, by Propositions~\ref{pr:min-fullshad-is-finite-min} and~\ref{pr:unimin-to-global-min}. Since $\xx$ minimizes $\sumexextu$, it also minimizes $\sumexu$, and so $\lpartu$, since the logarithm is strictly increasing.
Being a differentiable function, this implies $\nabla\lpartu(\xx)=\zero$, so $\meanmapu(\xx)=\zero$ and $\meanmap(\xx)=\uu$ (by Eqs.~(\ref{eqn:meanmapu-defn}) and~(\ref{eqn:grad-is-meanmap})). Therefore, $\uu\in\meanmap(\Rn)$, proving the claim. \end{proofx} Thus, $ \meanmap(\Rn) = \ri(\convfeat) $. Taking closures and combining with the above then yields $\meanmap(\extspace)=\convfeat=\dom\lpartstar$ by Proposition~\ref{pr:meanmap-cont} (and since the convex hull of finitely many points is closed). Combined with Lemma~\ref{lem:lpart-conj}, this further proves the stated properties of $\lpartstar$. \end{proof} As seen in \eqref{eqn:grad-is-meanmap}, for $\xx\in\Rn$, $\meanmap(\xx)$ is exactly the gradient of the log-partition function at $\xx$, which means it is the only (standard) subgradient of $\lpart$ at $\xx$ so that $\subdiflpart(\xx)=\{\meanmap(\xx)\}$. As we show next, this fact generalizes to astral parameters. In particular, for all $\xbar\in\extspace$, $\lpartext$ has one and only one strict astral subgradient at $\xbar$, namely, $\meanmap(\xbar)$. (There may however be other astral subgradients that are not strict.) \begin{theorem} \label{thm:subgrad-lpart} Assume the general set-up of this chapter. Let $\xbar\in\extspace$. Then $\meanmap(\xbar)$ is the only strict astral subgradient of $\lpartext$ at $\xbar$; that is, $ \basubdiflpart(\xbar) = \{\meanmap(\xbar)\} $. \end{theorem} \begin{proof} Let $\uu=\meanmap(\xbar)$, which is in $\convfeat=\dom\lpartstar$ by Theorem~\ref{thm:meanmap-onto}, and which we aim to show is in $\asubdiflpart(\xbar)$ as well. We use the formulation of astral subgradient given in Theorem~\ref{thm:fminus-subgrad-char}(\ref{thm:fminus-subgrad-char:a},\ref{thm:fminus-subgrad-char:b}). As such, let $\seq{\xx_t}$ be any sequence in $\Rn$ converging to $\xbar$. 
Then \begin{eqnarray*} \xx_t\cdot\uu - \lpart(\xx_t) &=& \Exp{\distelt\sim\qxbar}{\xx_t\cdot\featmap(\distelt) - \lpart(\xx_t)} \\ &=& \Exp{\qxbar}{\ln \qxt} \\ &\rightarrow& \Exp{\qxbar}{\ln \qxbar} \\ &=& \lpartstar(\uu). \end{eqnarray*} The first equality is by linearity of expectations, and since $\uu=\Exp{\qxbar}{\featmap}$. The second equality is by \eqref{eqn:exp-fam-defn}. The convergence is by Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:c}). The final equality is from Lemma~\ref{lem:lpart-conj}. Thus, $\uu$ is in $\asubdiflpart(\xbar)$ by Theorem~\refequiv{thm:fminus-subgrad-char}{thm:fminus-subgrad-char:a}{thm:fminus-subgrad-char:b}, and so also in $\basubdiflpart(\xbar)$. To show that this is the only strict astral subgradient, suppose by way of contradiction that $\uu'\in\basubdiflpart(\xbar)=(\convfeat)\cap\asubdiflpart(\xbar)$ for some $\uu'\neq\meanmap(\xbar)$. By Theorem~\ref{thm:meanmap-onto}, there exists $\xbar'\in\extspace$ for which $\meanmap(\xbar')=\uu'$. Since $\meanmap(\xbar)\neq\meanmap(\xbar')$, we also must have $\qxbar\neq\qxbarp$. Since $\uu'$ is an astral subgradient at $\xbar$, by Theorem~\refequiv{thm:fminus-subgrad-char}{thm:fminus-subgrad-char:a}{thm:fminus-subgrad-char:b}, there must exist a sequence $\seq{\xx_t}$ in $\Rn$ with $\xx_t\rightarrow\xbar$ and $\xx_t\cdot\uu'-\lpart(\xx_t)\rightarrow\lpartstar(\uu')$. For each $t$, similar to the derivation above, we have \[ \xx_t\cdot\uu' - \lpart(\xx_t) = \Exp{\distelt\sim\qxbarp}{\xx_t\cdot\featmap(\distelt) - \lpart(\xx_t)} = \Exp{\qxbarp}{\ln \qxt}. \] Thus, \begin{eqnarray*} \lpartstar(\uu') &=& \lim \paren{ \xx_t\cdot\uu' - \lpart(\xx_t) } \\ &=& \lim \paren{ \Exp{\qxbarp}{\ln \qxt} } \\ &=& \Exp{\qxbarp}{\ln \qxbar} \\ &<& \Exp{\qxbarp}{\ln \qxbarp} =\lpartstar(\uu'). \end{eqnarray*} As before, the third equality is by Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:c}). 
The strict inequality is by Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:a}). And the final equality is by Lemma~\ref{lem:lpart-conj}. This contradiction completes the proof. \end{proof} \begin{example} As perhaps the simplest possible example, suppose $n=1$, $\distset=\{0,1\}$ and $\featmapsc(\distelt)=\distelt$ for $\distelt\in\distset$. Then $\sumex(x)=1+e^x$ and $\lpart(x)=\ln(1+e^x)$, for $x\in\R$, so $\qsx(1)=e^x/(1+e^x)=1/(1+e^{-x})$, and $\qsx(0)=1-\qsx(1)$. Thus, this exponential family of distributions consists of all Bernoulli distributions with bias (probability of~$1$) in $(0,1)$. When extended to astral space, $\lpartext(\barx)=\logex(1+\expex(\barx))$, for $\barx\in\Rext$, and \[ \qbarx(1) = \invex\paren{1+\expex(-\barx)} = \begin{cases} 0 & \text{if $\barx=-\infty$,} \\ \frac{1}{1+e^{-\barx}} & \text{if $\barx\in\R$,} \\ 1 & \text{if $\barx=+\infty$,} \end{cases} \] and $\qbarx(0)=1-\qbarx(1)$. In this way, Bernoulli distributions with bias $0$ and $1$ are now also included in the extended exponential family. The map $\meanmap$ is simply $\meanmap(\barx)=\qbarx(1)\cdot 1 + \qbarx(0)\cdot 0 = \qbarx(1)$. Consistent with Theorem~\ref{thm:meanmap-onto}, $\meanmap(\Rext)= \conv \featmapsc(\distset)=[0,1]$, and $\meanmap(\R)= \ri(\conv \featmapsc(\distset))=(0,1)$. The astral subdifferentials of $\lpartext$, as previously seen in Example~\ref{ex:subgrad-log1+ex}, are \[ \partial \lpartext(\barx) = \begin{cases} (-\infty,0] & \mbox{if $\barx = -\infty$,} \\ \{\lpart'(\barx)\} & \mbox{if $\barx\in\R$,} \\ {[1,+\infty)} & \mbox{if $\barx = +\infty$,} \end{cases} \] where $\lpart'(x)=1/(1+e^{-x})$ is the first derivative of $\lpart$.
Thus, restricting to $\dom{\lpartstar}=\conv \featmapsc(\distset)=[0,1]$, we see that the only strict astral subgradient of $\lpartext$ at $\barx$ is $\meanmap(\barx)$, as is consistent with Theorem~\ref{thm:subgrad-lpart} (though other astral subgradients exist outside $[0,1]$ at $\barx=\pm\infty$). Let $u\in [0,1]$. Then $\meanmap(\barx)=u$ holds, by straightforward algebra, if and only if $\barx = \ln(u) - \ln(1-u)$, in which case, $\qbarx(1)=u$. Thus, by Theorem~\ref{thm:meanmap-onto}, \[ \lpartstar(u) = -\entropy(\qbarx) = u \ln(u) + (1-u)\ln(1-u). \] \end{example} \subsection{Maximum likelihood and maximum entropy} As discussed in Section~\ref{sec:exp-fam:standard-setting}, given random examples generated by some distribution $\popdist$, it is common to estimate $\popdist$ by finding the exponential-family distribution $\qx$, parameterized by $\xx\in\Rn$, with maximum likelihood, or equivalently, minimum negative log-likelihood as given in \eqref{eqn:neg-log-like:1}. Having extended the parameter space to all of astral space, we can now more generally seek parameters $\xbar\in\extspace$ for which the associated distribution $\qxbar$ has maximum likelihood. Note that the fractions $\Nelt/\numsamp$ appearing in \eqref{eqn:neg-log-like:1} themselves form a distribution $\phat\in\Delta$, called the \emph{empirical distribution}. Thus, in slightly more generic terms, given a distribution $\phat\in\Delta$ (which may or may not have this fractional form), we take the log-likelihood to be \begin{equation} \label{eqn:gen-log-like} \Exp{\phat}{\ln \qxbar}, \end{equation} which, in this approach, we aim to maximize over $\xbar\in\extspace$. As we show next, the negative of this log-likelihood is equal to the extended log-partition function, re-centered at $\Exp{\phat}{\featmap}$. As such, maximizing log-likelihood has several equivalent formulations, which we now summarize: \begin{proposition} \label{pr:ml-is-lpartu} Assume the general set-up of this chapter. 
Let $\uu\in\convfeat$, and let $\xbar\in\extspace$. Let $\phat\in\Delta$ be any distribution for which $\uu=\Exp{\phat}{\featmap}$ (which must exist). Then \begin{equation} \label{eqn:pr:ml-is-lpartu:1} -\Exp{\phat}{\ln \qxbar} = \lpartextu(\xbar). \end{equation} Consequently, the following are equivalent: \begin{letter-compact-prime} \item \label{pr:ml-is-lpartu:a} $\xbar$ maximizes $\Exp{\phat}{\ln \qxbar}$ (over $\xbar\in\extspace$). \medskip \item \label{pr:ml-is-lpartu:b} $\xbar$ minimizes $\lpartextu$. \itemprime \label{pr:ml-is-lpartu:bz} $\xbar$ minimizes $\sumexextu$. \medskip \item \label{pr:ml-is-lpartu:c} $\zero\in\asubdiflpartu(\xbar)$. \itemprime \label{pr:ml-is-lpartu:cp} $\uu\in\asubdiflpart(\xbar)$. \medskip \item \label{pr:ml-is-lpartu:d} $\xbar\in\adsubdiflpartustar(\zero)$. \itemprime \label{pr:ml-is-lpartu:dp} $\xbar\in\adsubdiflpartstar(\uu)$. \medskip \item \label{pr:ml-is-lpartu:e} $\meanmapu(\xbar)=\zero$. \itemprime \label{pr:ml-is-lpartu:ep} $\meanmap(\xbar)=\uu$. \end{letter-compact-prime} \end{proposition} \begin{proof} Let $\seq{\xx_t}$ be any sequence in $\Rn$ converging to $\xbar$. Then for all $t$, \[ -\Exp{\phat}{\ln \qxt} = \lpart(\xx_t) - \xx_t\cdot\uu = \lpartu(\xx_t) \] from Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:b}) and \eqref{eqn:lpart-defn}. By Proposition~\ref{pr:sumex-lpart-cont}, $\lpartu(\xx_t)\rightarrow\lpartextu(\xbar)$. On the other hand, $-\Exp{\phat}{\ln \qxt}\rightarrow-\Exp{\phat}{\ln \qxbar}$ by Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:c}). These two limits, which are both for the same sequence, must be equal, proving the claim. We next prove the stated equivalences. (\ref{pr:ml-is-lpartu:a}) $\Leftrightarrow$ (\ref{pr:ml-is-lpartu:b}) is immediate from \eqref{eqn:pr:ml-is-lpartu:1}, which we just proved. (\ref{pr:ml-is-lpartu:b}) $\Leftrightarrow$ (\ref{pr:ml-is-lpartu:bz}) is by Proposition~\ref{pr:sumex-lpart-cont} and since $\logex$ is a strictly increasing bijection. 
(\ref{pr:ml-is-lpartu:b}) $\Leftrightarrow$ (\ref{pr:ml-is-lpartu:c}) is by Proposition~\ref{pr:asub-zero-is-min}. (\ref{pr:ml-is-lpartu:c}) $\Leftrightarrow$ (\ref{pr:ml-is-lpartu:d}) and (\ref{pr:ml-is-lpartu:cp}) $\Leftrightarrow$ (\ref{pr:ml-is-lpartu:dp}) both follow from Theorem~\refequiv{thm:adif-fext-inverses}{thm:adif-fext-inverses:a}{thm:adif-fext-inverses:c} since both $\lpart$ and $\lpartu$ are finite everywhere, implying $\cldom{\lpart}=\cldom{\lpartu}=\extspace$. By assumption, $\uu$ is in $\convfeat$, implying $\zero\in\convfeatu$. Hence, (\ref{pr:ml-is-lpartu:cp}) $\Leftrightarrow$ (\ref{pr:ml-is-lpartu:ep}) follows from Theorem~\ref{thm:subgrad-lpart}, as does (\ref{pr:ml-is-lpartu:c}) $\Leftrightarrow$ (\ref{pr:ml-is-lpartu:e}) with $\featmap$ replaced by $\featmapu$. Finally, (\ref{pr:ml-is-lpartu:e}) $\Leftrightarrow$ (\ref{pr:ml-is-lpartu:ep}) is by \eqref{eqn:meanmapu-defn}. \end{proof} \begin{example} \label{ex:exp-fam-eg2} For instance, let us consider again Example~\ref{ex:exp-fam-eg1}. Let $\phat(\distelt)=\Nj{\distelt}/\numsamp$ for $\distelt\in\distset$. Then the negative log-likelihood for a distribution $\qx$ with parameters $\xx\in\R^2$ is $-\Exp{\phat}{\ln \qx}$, which is the same as what was earlier denoted $\loglik(\xx)$ in \eqref{eq:log-like-no-max-eg:2}. As argued earlier, this function has no finite minimizer in $\R^2$. Also, let \begin{equation} \label{eq:log-like-no-max-eg:3} \uu = \Exp{\phat}{\featmap} = \sfrac{3}{4} \featmap(1) + \sfrac{1}{4} \featmap(2) = \left[\begin{array}{r} -2 \\ 3 \end{array}\right]. \end{equation} Then it can be checked that the expressions given in \eqref{eq:log-like-no-max-eg:2} for $\loglik(\xx)$ are equal to $\lpart(\xx)-\xx\cdot\uu=\lpartu(\xx)$, consistent with Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:b}) (and Eq.~\ref{eqn:lpart-defn}). 
As seen in Proposition~\ref{pr:ml-is-lpartu}, maximizing the log-likelihood $\Exp{\phat}{\ln \qxbar}$ over astral parameters $\xbar\in\extspace$ is equivalent to minimizing $\lpartextu$ or $\sumexextu$, for which we have developed extensive tools. First, \[ \resc{\lpartu} = \resc{\sumexu} = \{ \lambda \vv : \lambda\in\Rpos \} \] where $\vv=\trans{[-1,-1]}$. This is because, as seen earlier, \eqref{eq:log-like-no-max-eg:2} is nonincreasing in the direction $\vv$ (and so also in direction $\lambda\vv$, for $\lambda\geq 0$), and it can be argued that this is not the case in any other direction. Using Proposition~\ref{pr:zero-in-univf} and Theorem~\ref{thm:new:f:8.2} (or Theorem~\ref{thm:hard-core:3}\ref{thm:hard-core:3:b:univ}), this can be shown to imply that $\limray{\vv}$ is $\sumexu$'s only universal reducer, that is, $\unimin{\sumexu}=\{\ebar\}$, where $\ebar=\limray{\vv}$. Further, from its definition in \eqref{eq:hard-core:3}, it can be seen that the hard core of $\sumexu$ is $\hardcore{\sumexu} = \{1,2,3\}$. Thus, by Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:d}), $\sumexu$'s universal reduction is \[ \fullshadsumexu(\xx) = \sumexextu(\ebar\plusl\xx) = e^{-x_1 + x_2} + e^{3x_1-3x_2} + e^{7x_1 - 7x_2}. \] Using calculus and algebra, this function can be shown analytically to be minimized at $\qq=\trans{[\alpha,-\alpha]}$ where \[ \alpha = \frac{1}{8} \ln\paren{\frac{\sqrt{37} - 3}{14}} \approx -0.18915 \] (as well as all points $\trans{[\lambda+\alpha,\lambda-\alpha]}$ for $\lambda\in\R$). Since $\ebar\in\unimin{\sumexu}$ and $\qq$ minimizes $\fullshadsumexu$, Theorem~\ref{thm:waspr:hard-core:2} implies that the resulting astral point $\xbar=\ebar\plusl\qq$ minimizes $\sumexextu$, and so also maximizes the log-likelihood $\Exp{\phat}{\ln \qxbar}$. 
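The value of $\alpha$ can be confirmed numerically. Substituting $\xx=\trans{[a,-a]}$ into the displayed universal reduction gives the univariate function $g(a)=e^{-2a}+e^{6a}+e^{14a}$; the following Python check (variable names ours) verifies that the stated closed form satisfies the first-order condition $g'(\alpha)=0$ and matches the quoted approximation.

```python
import math

def g(a):
    # restriction of the universal reduction to x = (a, -a)
    return math.exp(-2 * a) + math.exp(6 * a) + math.exp(14 * a)

# closed-form minimizer from the text: alpha = (1/8) ln((sqrt(37) - 3) / 14)
alpha = math.log((math.sqrt(37.0) - 3.0) / 14.0) / 8.0

# first-order condition: g'(alpha) = -2 e^{-2a} + 6 e^{6a} + 14 e^{14a} = 0,
# equivalently 7 t^2 + 3 t - 1 = 0 with t = e^{8 alpha}
gprime = (-2 * math.exp(-2 * alpha)
          + 6 * math.exp(6 * alpha)
          + 14 * math.exp(14 * alpha))
assert abs(gprime) < 1e-9
assert abs(alpha + 0.18915) < 1e-4  # matches the stated approximation
```

Since $g$ is a sum of exponentials of affine functions, it is strictly convex along this line, so the stationary point is its unique minimizer there.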
The resulting probability distribution $\qxbar$ is \begin{equation} \label{eq:log-like-no-max-eg:4} \begin{array}{ccl} \qxbar(1) &\approx& 0.78822 \\ \qxbar(2) &\approx& 0.17356 \\ \qxbar(3) &\approx& 0.03822 \\ \qxbar(4) &=& 0. \end{array} \end{equation} It can be checked that $\meanmap(\xbar)=\uu$, consistent with Proposition~\ref{pr:ml-is-lpartu}. \end{example} We next consider an alternative approach for estimating the unknown distribution $\popdist$. The idea, first, is to find a distribution $q\in\Delta$ (not necessarily in the exponential family) under which the expectation of the features matches what was observed on the data, that is, for which $\Exp{q}{\featmap}=\uu$, where $\uu=\Exp{\phat}{\featmap}$ (and where $\phat\in\Delta$ is given or observed, as above). Typically, there will be many distributions satisfying this property. Among these, we choose the one of \emph{maximum entropy}, that is, for which $\entropy(q)$ is largest. Such a distribution is closest, in a certain sense, to the uniform distribution, and moreover, it can be argued, presupposes the least information beyond the empirical expectations $\Exp{\phat}{\featmap}$ that were observed on the data. Thus, the maximum-entropy approach estimates $\popdist$ by that distribution $q\in\medists$ for which $\entropy(q)$ is maximized, where $\medists$ is the set of all distributions $q\in\Delta$ for which $\Exp{q}{\featmap} = \uu$. By comparison, the maximum-likelihood approach estimates $\popdist$ by that exponential-family distribution $q\in\mldists$ which maximizes the log-likelihood $\Exp{\phat}{\ln q}$, where $\mldists$ is the set of all (extended) exponential-family distributions $\qxbar$ for $\xbar\in\extspace$. Remarkably, these two approaches always yield the same distribution, as we show next. 
Moreover, that distribution is always the unique point at the intersection of the two sets $\mldists\cap\medists$, which is to say, that exponential-family distribution $\qxbar$ for which $\Exp{\qxbar}{\featmap} = \uu$. This extends a similar formulation for the standard setting given by \citet{DellaDeLa97} to the current astral setting. \begin{theorem} \label{thm:ml-equal-maxent} Assume the general set-up of this chapter. Let $\phat\in\Delta$, let $\uu=\Exp{\phat}{\featmap}$, and let \begin{align*} \mldists &= \Braces{ \qxbar : \xbar\in\extspace }, \\ \medists &= \Braces{ q\in\Delta : \Exp{q}{\featmap} = \uu }. \end{align*} Then the following are equivalent, for $q\in\Delta$: \begin{letter} \item \label{thm:ml-equal-maxent:a} $\displaystyle q = \arg \max_{q\in\mldists} \Exp{\phat}{\ln q}$. \item \label{thm:ml-equal-maxent:b} $q \in \mldists\cap\medists$. \item \label{thm:ml-equal-maxent:c} $\displaystyle q = \arg \max_{q\in\medists} \entropy(q)$. \end{letter} Furthermore, there exists a unique distribution $q$ satisfying all of these. \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{(\ref{thm:ml-equal-maxent:a}) $\Rightarrow$ (\ref{thm:ml-equal-maxent:b}): } Suppose $q\in\mldists$ satisfies~(\ref{thm:ml-equal-maxent:a}). Then $q=\qxbar$ for some $\xbar\in\extspace$, and $\xbar$ maximizes $\Exp{\phat}{\ln \qxbar}$. By Proposition~\ref{pr:ml-is-lpartu}, this implies that $\Exp{\qxbar}{\featmap}=\meanmap(\xbar)=\uu$, and therefore that $q=\qxbar$ satisfies~(\ref{thm:ml-equal-maxent:b}). \pfpart{(\ref{thm:ml-equal-maxent:b}) $\Rightarrow$ (\ref{thm:ml-equal-maxent:c}): } Suppose (\ref{thm:ml-equal-maxent:b}) holds so that $\qxbar\in\medists$, for some $\xbar\in\extspace$. Let $q$ be any distribution that is also in $\medists$, so that $\Exp{q}{\featmap}=\uu=\Exp{\qxbar}{\featmap}$. Let $\seq{\xx_t}$ in $\Rn$ be a sequence converging to $\xbar$. 
Then for each $t$, \begin{align*} \Exp{\qxbar}{\ln \qxt} &= \xx_t\cdot\Exp{\qxbar}{\featmap} - \lpart(\xx_t) \\ &= \xx_t\cdot\Exp{q}{\featmap} - \lpart(\xx_t) \\ &= \Exp{q}{\ln \qxt} \\ &\leq \Exp{q}{\ln q}. \end{align*} The first and third equalities are both by Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:b}), and the inequality is by Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:a}). In the limit, $\Exp{\qxbar}{\ln \qxt} \rightarrow \Exp{\qxbar}{\ln \qxbar}$, by Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:c}). Hence, $\Exp{\qxbar}{\ln \qxbar} \leq \Exp{q}{\ln q}$, so $\entropy(q)\leq\entropy(\qxbar)$. Since this holds for all $q\in\medists$, $\qxbar$ must have maximum entropy among all such distributions. \pfpart{Existence:} By Proposition~\ref{pr:fext-min-exists}, $\lpartextu$ must attain its minimum at some point $\xbar\in\extspace$. By Proposition~\ref{pr:ml-is-lpartu}, the resulting distribution $q=\qxbar$ must satisfy~(\ref{thm:ml-equal-maxent:a}), and so~(\ref{thm:ml-equal-maxent:b}) and~(\ref{thm:ml-equal-maxent:c}) as well, as just shown. \pfpart{Uniqueness:} We show that at most one distribution can satisfy (\ref{thm:ml-equal-maxent:c}), implying, by the foregoing, that only one distribution can satisfy~(\ref{thm:ml-equal-maxent:a}) or~(\ref{thm:ml-equal-maxent:b}) as well. Suppose, by way of contradiction, that two distributions $p$ and $p'$ in $\medists$ both have maximum entropy among all such distributions, and that $p\neq p'$. Let $q=(p+p')/2$, which is also in $\medists$ (using linearity of \eqref{eqn:expect-defn} in $p$), and which is also distinct from both $p$ and $p'$. Then \[ \Exp{p}{\ln p} + \Exp{p'}{\ln p'} > \Exp{p}{\ln q} + \Exp{p'}{\ln q} = 2 \Exp{q}{\ln q}, \] with the strict inequality following from Proposition~\ref{pr:exp-log-props}(\ref{pr:exp-log-props:a}), and the equality by algebra from \eqref{eqn:expect-defn}.
Thus, \[ \frac{\entropy(p)+\entropy(p')}{2} < \entropy(q), \] contradicting that $\entropy(p)=\entropy(p')$ is maximum among all distributions in $\medists$. \pfpart{(\ref{thm:ml-equal-maxent:c}) $\Rightarrow$ (\ref{thm:ml-equal-maxent:a}): } Suppose $q\in\medists$ satisfies~(\ref{thm:ml-equal-maxent:c}). As shown above, there exists a distribution $p$ satisfying (\ref{thm:ml-equal-maxent:a}), and so also satisfying (\ref{thm:ml-equal-maxent:c}), by the implications proved already. Having proved uniqueness, this implies $p=q$, so $q$ satisfies (\ref{thm:ml-equal-maxent:a}) as well. \qedhere \end{proof-parts} \end{proof} Regarding algorithms, we can apply the techniques developed in Section~\ref{sec:iterative} to iteratively and asymptotically find astral parameters $\xbar\in\extspace$ maximizing the likelihood in \eqref{eqn:gen-log-like}, or to solve the equivalent maximum-entropy problem as given in Theorem~\ref{thm:ml-equal-maxent}(\ref{thm:ml-equal-maxent:c}). \subsection{Galaxies and faces of the marginal polytope} Earlier, we studied the mean map $\meanmap$ and its relationship with the marginal polytope, $\convfeat$, showing in Theorem~\ref{thm:meanmap-onto} that $\meanmap$ maps $\extspace$ onto that polytope, and maps $\Rn$ onto its relative interior. The next theorem fills in more detail regarding this relationship. Previously, in Section~\ref{sec:core:zero}, we saw how astral space can be naturally partitioned into galaxies. In standard convex analysis, convex sets in $\Rn$, including polytopes, can be partitioned in a different way, into the relative interiors of their faces, as seen in \Cref{roc:thm18.2}. As we show next, $\meanmap$ directly links these two partitions by continuously mapping every astral galaxy $\galaxd$, for any icon $\ebar$, onto the relative interior of one face $C$ of the marginal polytope, and also mapping the closure of that galaxy, $\galcld$, onto the entire face $C$. 
Moreover, this relationship holds exactly when $\ebar$ is a universal reducer of the function $\lpartu$ for any (and every) point $\uu\in\ri{C}$. In what follows, the \emph{support} of a distribution $p\in\Delta$, denoted $\support(p)$, is the set of points assigned nonzero probability: \[ \support(p) = \{ \distelt\in\distset : p(\distelt)>0 \}. \] \begin{theorem} \label{thm:galaxy-mapto-face} Assume the general set-up of this chapter. Let $\ebar\in\corezn$, let $C$ be a nonempty face of $S=\convfeat$, and let \begin{align} J &= \{\distelt\in\distset :\: \featmap(\distelt)\in C\}, \label{eqn:thm:galaxy-mapto-face:J-def} \\ Z &= \Braces{ \distelt \in\distset :\: \forall \distalt\in\distset,\; \ebar\cdot (\featmap(\distalt) - \featmap(\distelt)) \leq 0 }. \label{eqn:thm:galaxy-mapto-face:Z-def} \end{align} Then the following hold: \begin{letter} \item \label{thm:galaxy-mapto-face:a} For all $\xbar\in\galaxd$, $Z = \support(\qxbar)$. That is, for $\distelt\in\distset$, $\qxbar(\distelt)>0$ if and only if $\distelt\in Z$. \item \label{thm:galaxy-mapto-face:b} For all $\uu\in\ri{C}$, $\hardcore{\sumexu} = J$. That is, for $\distelt\in\distset$, $\featmap(\distelt)\in C$ if and only if $\distelt\in \hardcore{\sumexu}$. \item \label{thm:galaxy-mapto-face:c} $\meanmap(\galaxd)$ is the relative interior of some nonempty face of $S$. \item \label{thm:galaxy-mapto-face:d} Let $\uu\in\ri{C}$. Then the following are equivalent: \begin{roman-compact} \item \label{thm:galaxy-mapto-face:d:0} $\meanmap(\galaxd) \cap (\ri{C}) \neq \emptyset$. \item \label{thm:galaxy-mapto-face:d:1} $\meanmap(\galaxd) = \ri{C}$. \item \label{thm:galaxy-mapto-face:d:2} $\meanmap(\galcld) = C$. \item \label{thm:galaxy-mapto-face:d:5} $C=\conv{\featmap(Z)}$. \item \label{thm:galaxy-mapto-face:d:4} $J=Z$. \item \label{thm:galaxy-mapto-face:d:3} $\ebar\in\unimin{\lpartu}\, (=\unimin{\sumexu})$.
\end{roman-compact} \end{letter} \end{theorem} \begin{proof} ~ \begin{proof-parts} \pfpart{Part~(\ref{thm:galaxy-mapto-face:a}):} The form of $\qxbar(\distelt)$ is given in \eqref{eqn:qxbar-expr}. As such, $\qxbar(\distelt)=0$ if and only if $\xbar\cdot(\featmap(\distalt)-\featmap(\distelt))=+\infty$ for some $\distalt\in\distset$, which holds if and only if $\ebar\cdot(\featmap(\distalt)-\featmap(\distelt))=+\infty$ for some $\distalt\in\distset$, since $\xbar=\ebar\plusl\qq$ for some $\qq\in\Rn$. Thus, as claimed, $\qxbar(\distelt)>0$ if and only if $\distelt\in Z$ (using Proposition~\ref{pr:icon-equiv}\ref{pr:icon-equiv:a},\ref{pr:icon-equiv:b}). \pfpart{Part~(\ref{thm:galaxy-mapto-face:b}):} Let $\uu\in\ri{C}$, and let $C'=C-\uu$ and $S'=S-\uu=\conv{\featmapu(\distset)}$ be translations of $C$ and $S$ by $\uu$. Then $C'$ is a face of $S'$, and $\zero\in \ri{C'}$. Thus, directly applying Theorem~\ref{thm:erm-faces-hardcore2} (to $\sumexu$) yields \[ \hardcore{\sumexu} = \Braces{\distelt\in\distset : \featmapu(\distelt)\in C'} = J \] since $\featmapu(\distelt)\in C'$ if and only if $\featmap(\distelt)\in C$. \pfpart{Part~(\ref{thm:galaxy-mapto-face:c}):} Let $D=\conv{\featmap(Z)}$. We will prove the claim by showing that $D$ is a face of $S$ and that $\meanmap(\galaxd)=\ri{D}$. Part~(\ref{thm:galaxy-mapto-face:a}) implies that $Z$ is nonempty since $\ebar\in\galaxd$, and $\qebar$ cannot have empty support (being a probability distribution, by Proposition~\ref{pr:qxbar-cont}). Thus, $D$ is nonempty. By $Z$'s definition, if $\distelt\in Z$, then $\ebar\cdot (\featmap(\distalt)-\featmap(\distelt))\leq 0$ for all $\distalt\in\distset$. The next claim makes this value more precise. \setcounter{claimp}{0} \begin{claimpx} \label{cl:thm:galaxy-mapto-face:1} Let $\distelt\in Z$ and $\distalt\in\distset$. 
Then \[ \ebar\cdot (\featmap(\distalt)-\featmap(\distelt)) = \begin{cases} 0 & \mbox{if $\distalt\in Z$,} \\ -\infty & \mbox{otherwise.} \end{cases} \] \end{claimpx} \begin{proofx} Since $\distelt\in Z$, $\ebar\cdot (\featmap(\distalt)-\featmap(\distelt))\leq 0$. In the case $\distalt\in Z$, we can apply $Z$'s definition to $\distalt$, yielding $\ebar\cdot (\featmap(\distelt)-\featmap(\distalt))\leq 0$ as well. Thus, $\ebar\cdot (\featmap(\distalt)-\featmap(\distelt)) = 0$ in this case. In the alternative case that $\distalt\not\in Z$, suppose contrary to the claim that $\ebar\cdot (\featmap(\distalt)-\featmap(\distelt)) \neq -\infty$. Since $\ebar$ is an icon, the only remaining possibility is that \begin{equation} \label{eqn:thm:galaxy-mapto-face:6} \ebar\cdot (\featmap(\distalt)-\featmap(\distelt)) = 0 \end{equation} (by Proposition~\ref{pr:icon-equiv}\ref{pr:icon-equiv:a},\ref{pr:icon-equiv:b}). Let $\distaltii\in\distset$. Then \begin{align*} \ebar\cdot(\featmap(\distaltii)-\featmap(\distalt)) &= \ebar\cdot\paren{(\featmap(\distaltii)-\featmap(\distelt)) +(\featmap(\distelt)-\featmap(\distalt))} \\ &= \ebar\cdot(\featmap(\distaltii)-\featmap(\distelt)) + \ebar\cdot(\featmap(\distelt)-\featmap(\distalt)) \\ &\leq 0. \end{align*} The second equality is by Proposition~\ref{pr:i:1} combined with \eqref{eqn:thm:galaxy-mapto-face:6}. Together with $\distelt$ being in $Z$, this then yields the inequality. Since this holds for all $\distaltii\in\distset$, this shows that $\distalt$ is in $Z$, a contradiction. \end{proofx} More generally, for $\uu\in D$ and $\ww\in S$, the value of $\ebar\cdot(\ww-\uu)$ is determined in a similar way by $\ww$'s membership in $D$: \begin{claimpx} \label{cl:thm:galaxy-mapto-face:2} Let $\uu\in D$ and $\ww\in S$.
Then \[ \ebar\cdot (\ww-\uu) = \begin{cases} 0 & \mbox{if $\ww\in D$,} \\ -\infty & \mbox{otherwise.} \end{cases} \] \end{claimpx} \begin{proofx} Since $\uu$ and $\ww$ are in $S$, there exist distributions $p$ and $q$ in $\Delta$ for which $\Exp{p}{\featmap}=\uu$ and $\Exp{q}{\featmap}=\ww$. Furthermore, since $\uu\in D$, we can assume without loss of generality that $\support(p)\subseteq Z$. We then have \begin{align} \ebar\cdot(\ww-\uu) &= \ebar\cdot\paren{ \sum_{\distelt\in Z} \sum_{\distalt\in\distset} p(\distelt) q(\distalt) (\featmap(\distalt)-\featmap(\distelt)) } \nonumber \\ &= \sum_{\distelt\in Z} \sum_{\distalt\in\distset} p(\distelt) q(\distalt)\; \ebar\cdot(\featmap(\distalt)-\featmap(\distelt)). \label{eqn:thm:galaxy-mapto-face:1} \end{align} The first equality is straightforward algebra. The second equality follows from $\ebar\cdot(\featmap(\distalt)-\featmap(\distelt))\leq 0$ for $\distelt\in Z$ and $\distalt\in\distset$, by $Z$'s definition, allowing us to apply Proposition~\ref{pr:i:1} (and~\ref{pr:i:2}). In the case that $\ww\in D$, we can choose $q$ with $\support(q)\subseteq Z$. Combined with Claim~\ref{cl:thm:galaxy-mapto-face:1}, it then follows that every term of \eqref{eqn:thm:galaxy-mapto-face:1} is equal to zero, so $\ebar\cdot(\ww-\uu)=0$, as claimed. In the remaining case that $\ww\not\in D$, we must have $q(\distalt)>0$ for some $\distalt\not\in Z$. Further, $p$ cannot have empty support, so $p(\distelt)>0$ for some $\distelt\in Z$. By Claim~\ref{cl:thm:galaxy-mapto-face:1}, $\ebar\cdot(\featmap(\distalt)-\featmap(\distelt))=-\infty$ for this pair so at least this one term of \eqref{eqn:thm:galaxy-mapto-face:1} is $-\infty$. Since every other term is nonpositive (by $Z$'s definition), we conclude $\ebar\cdot(\ww-\uu)=-\infty$. \end{proofx} We claim \begin{equation} \label{eqn:thm:galaxy-mapto-face:3} Z = \{\distalt\in\distset : \featmap(\distalt)\in D\}, \end{equation} meaning $\distalt\in Z$ if and only if $\featmap(\distalt)\in D$. 
If $\distalt\in Z$, then clearly $\featmap(\distalt)\in\conv{\featmap(Z)}=D$. Conversely, if $\featmap(\distalt)\in D$, then for any $\distelt\in Z$, $\ebar\cdot(\featmap(\distalt)-\featmap(\distelt))=0$ by Claim~\ref{cl:thm:galaxy-mapto-face:2} (with $\ww=\featmap(\distalt)\in D$ and $\uu=\featmap(\distelt)\in D$). Therefore, $\distalt\in Z$ by Claim~\ref{cl:thm:galaxy-mapto-face:1}. Thus, if $\uu\in D$ and $\distelt\in\distset$, then \begin{equation} \label{eqn:thm:galaxy-mapto-face:4} \ebar\cdot\featmapu(\distelt) = \ebar\cdot(\featmap(\distelt)-\uu) = \begin{cases} 0 & \mbox{if $\distelt\in Z$,} \\ -\infty & \mbox{otherwise,} \end{cases} \end{equation} as follows from \eqref{eqn:thm:galaxy-mapto-face:3} and Claim~\ref{cl:thm:galaxy-mapto-face:2} with $\ww=\featmap(\distelt)$. \begin{claimpx} \label{cl:thm:galaxy-mapto-face:3} $D$ is a face of $S$. \end{claimpx} \begin{proofx} Let $\ww_1,\ww_2\in S$, let $\lambda\in (0,1)$, and suppose that $\uu=\lambda\ww_1+(1-\lambda)\ww_2$ is in $D$. We aim to show that this implies that $\ww_1$ and $\ww_2$ are also in $D$. Claim~\ref{cl:thm:galaxy-mapto-face:2} implies $\ebar\cdot(\ww_b-\uu)\leq 0$, for $b=1,2$. Also, by algebra, $\zero=\lambda(\ww_1-\uu)+(1-\lambda)(\ww_2-\uu)$, so \[ 0 = \ebar\cdot\zero = \lambda\; \ebar\cdot(\ww_1-\uu) + (1-\lambda)\; \ebar\cdot(\ww_2-\uu), \] where the second equality uses Proposition~\ref{pr:i:1}. Since neither term on the right is positive and since $\lambda\in (0,1)$, this implies $\ebar\cdot(\ww_b-\uu)=0$, for $b=1,2$. Therefore, $\ww_b\in D$ by Claim~\ref{cl:thm:galaxy-mapto-face:2}, proving that $D$ is a face. \end{proofx} \begin{claimpx} \label{cl:thm:galaxy-mapto-face:4} Let $\uu\in S$, and suppose $\ebar\in\unimin{\sumexu}$. Then $\uu\in\meanmap(\galaxd)$. \end{claimpx} \begin{proofx} By Propositions~\ref{pr:min-fullshad-is-finite-min} and~\ref{pr:unimin-to-global-min}, $\lpartextu$ is minimized by some point $\xbar=\ebar\plusl\qq$, for some $\qq\in\Rn$.
Then $\meanmap(\xbar)=\uu$ by Proposition~\ref{pr:ml-is-lpartu}, proving the claim. \end{proofx} \begin{claimpx} \label{cl:thm:galaxy-mapto-face:5} $\meanmap(\galaxd)=\ri{D}$. \end{claimpx} \begin{proofx} Let $\xbar\in\galaxd$. Then $\qxbar(\distelt)>0$ if and only if $\distelt\in Z$, by part~(\ref{thm:galaxy-mapto-face:a}), implying $\meanmap(\xbar)\in\ri(\conv{\featmap(Z)})$ by Proposition~\ref{pr:ri-conv-finite}. Therefore, $\meanmap(\galaxd)\subseteq\ri{D}$. For the reverse inclusion, suppose $\uu\in\ri{D}$, implying $\zero\in\ri{D'}$ where $D'=D-\uu=\conv{\featmapu(Z)}$. Applying the results of Section~\ref{sec:emp-loss-min} to $\sumexu$, we then have that $Z=\hardcore{\sumexu}$ by Theorem~\ref{thm:erm-faces-hardcore2} and \eqref{eqn:thm:galaxy-mapto-face:3}. Furthermore, $\ebar\in\unimin{\sumexu}$ by \eqref{eqn:thm:galaxy-mapto-face:4} combined with Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:b:univ}). By Claim~\ref{cl:thm:galaxy-mapto-face:4}, this implies $\uu\in\meanmap(\galaxd)$. Thus, $\ri{D}\subseteq\meanmap(\galaxd)$. \end{proofx} Part~(\ref{thm:galaxy-mapto-face:c}) now follows from Claims~\ref{cl:thm:galaxy-mapto-face:3} and~\ref{cl:thm:galaxy-mapto-face:5}. \pfpart{Part~(\ref{thm:galaxy-mapto-face:d}):} Before proceeding, we note that $\unimin{\sumexu}=\unimin{\lpartu}$. To see this, let $\ebar\in\corezn$. Then $\ebar\in\unimin{\sumexu}$ if and only if $\sumexextu(\ebar\plusl\xx)\leq\sumexextu(\ebar'\plusl\xx)$ for all $\xx\in\Rn$ and all $\ebar'\in\corezn$, by definition of universal reducer and universal reduction (Definitions~\ref{def:univ-reduction} and~\ref{def:univ-reducer}). Since $\logex$ is a strictly increasing bijection, and by Proposition~\ref{pr:sumex-lpart-cont}, this in turn holds if and only if $\lpartextu(\ebar\plusl\xx)\leq\lpartextu(\ebar'\plusl\xx)$ for all $\xx\in\Rn$ and all $\ebar'\in\corezn$, that is, if and only if $\ebar\in\unimin{\lpartu}$.
\pfpart{(\ref{thm:galaxy-mapto-face:d:0}) $\Rightarrow$ (\ref{thm:galaxy-mapto-face:d:1}): } Both $\meanmap(\galaxd)$ and $\ri{C}$ are relative interiors of faces of $S$, by part~(\ref{thm:galaxy-mapto-face:c}) and by assumption. Therefore, if they are not disjoint, then the two faces must be the same (by \Cref{pr:face-props}\ref{pr:face-props:cor18.1.2}). \pfpart{(\ref{thm:galaxy-mapto-face:d:1}) $\Rightarrow$ (\ref{thm:galaxy-mapto-face:d:2}): } If (\ref{thm:galaxy-mapto-face:d:1}) holds, then taking closures and applying Proposition~\ref{pr:meanmap-cont} yields \[ \meanmap(\galcld) = \cl(\meanmap(\galaxd)) = \cl(\ri{C}) = \cl{C} = C. \] The last equality is because $S$ is a polytope and therefore closed (\Cref{roc:thm19.1}), implying that its face $C$ is also closed (\Cref{pr:face-props}\ref{pr:face-props:cor18.1.1}). \pfpart{(\ref{thm:galaxy-mapto-face:d:2}) $\Rightarrow$ (\ref{thm:galaxy-mapto-face:d:5}): } Applied to $D$, the last implication, combined with Claims~\ref{cl:thm:galaxy-mapto-face:3} and~\ref{cl:thm:galaxy-mapto-face:5}, shows that $\meanmap(\galcld) = D$. Therefore, if~(\ref{thm:galaxy-mapto-face:d:2}) holds, then $C=D$, which is the same as~(\ref{thm:galaxy-mapto-face:d:5}). \pfpart{(\ref{thm:galaxy-mapto-face:d:5}) $\Rightarrow$ (\ref{thm:galaxy-mapto-face:d:4}): } If~(\ref{thm:galaxy-mapto-face:d:5}) holds, then $C=D$, implying $J=Z$ by Eqs.~(\ref{eqn:thm:galaxy-mapto-face:J-def}) and~(\ref{eqn:thm:galaxy-mapto-face:3}). \pfpart{(\ref{thm:galaxy-mapto-face:d:4}) $\Rightarrow$ (\ref{thm:galaxy-mapto-face:d:3}): } Suppose $J=Z$. Then $Z=\hardcore{\sumexu}$ by part~(\ref{thm:galaxy-mapto-face:b}). Combined with \eqref{eqn:thm:galaxy-mapto-face:4}, Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:b:univ}) then implies that $\ebar\in\unimin{\sumexu}$. Since, as noted above, $\unimin{\sumexu}=\unimin{\lpartu}$, this proves~(\ref{thm:galaxy-mapto-face:d:3}). 
\pfpart{(\ref{thm:galaxy-mapto-face:d:3}) $\Rightarrow$ (\ref{thm:galaxy-mapto-face:d:0}): } Suppose $\ebar$ is in $\unimin{\lpartu}$, and so also in $\unimin{\sumexu}$. Then $\uu\in\meanmap(\galaxd)$, so $\meanmap(\galaxd)$ and $\ri{C}$ are not disjoint (since $\uu$ is in both). \qedhere \end{proof-parts} \end{proof} \begin{example} For instance, continuing Examples~\ref{ex:exp-fam-eg1} and~\ref{ex:exp-fam-eg2}, the marginal polytope, $\convfeat$, is a triangle in $\R^2$ whose corners are $\featmap(1),\featmap(3),\featmap(4)$. Its faces are the triangle itself, its three edges (or sides), three vertices (or corners), and the empty set. The point $\uu$ in \eqref{eq:log-like-no-max-eg:3} is in the relative interior of the edge $C$ forming the side of the triangle between $\featmap(1)$ and $\featmap(3)$. Earlier, we established that $\ebar\in\unimin{\sumexu}=\unimin{\lpartu}$, where $\ebar=\limray{\vv}$ and $\vv=\trans{[-1,-1]}$. As such, Theorem~\ref{thm:galaxy-mapto-face} implies that $\meanmap(\galaxd) = \ri{C}$ and $\meanmap(\galcld) = C$. We also earlier noted that $\hardcore{\sumexu}=\{1,2,3\}$, which is consistent (according to Theorem~\ref{thm:galaxy-mapto-face}\ref{thm:galaxy-mapto-face:b}) with the corresponding points $\featmap(1),\featmap(2),\featmap(3)$ being the only ones in $C$. It can be checked that $\hardcore{\sumexu}=Z$, where $Z$ is as defined in \eqref{eqn:thm:galaxy-mapto-face:Z-def}. \end{example} In this example, although $\phat(3)=\phat(4)=0$, the resulting maximum-likelihood distribution $\qxbar$ in \eqref{eq:log-like-no-max-eg:4} includes point~$3$ in its support, but not point~$4$. This shows that, in general, the support of a given distribution $\phat$ need not match the support of the maximum-likelihood distribution $\qxbar$. There is, nonetheless, a strong relationship between these. 
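The support phenomenon just described can be probed numerically. The sketch below uses hypothetical feature values (illustrative stand-ins; the concrete $\featmap(1),\ldots,\featmap(4)$ of the running example are not reproduced here): four points whose convex hull is a triangle, with $\featmap(2)$ on the edge between $\featmap(1)$ and $\featmap(3)$, and $\phat$ uniform on $\{1,2\}$, so that $\uu=\Exp{\phat}{\featmap}$ lies in the relative interior of that edge. Marching toward infinity along $\vv=(-1,-1)$ while bisecting on a finite shift so that the mean matches $\uu$, the resulting distribution puts positive mass on point~$3$ even though $\phat(3)=0$, and vanishing mass on point~$4$. All helper names (`q_x`, `mean_map`) are ours.

```python
import math

# Hypothetical feature values (illustrative stand-ins, not the running
# example's actual phi): a triangle with phi[2] on the edge between
# phi[1] and phi[3], and p_hat uniform on {1, 2}.
phi = {1: (0.0, 0.0), 2: (0.5, -0.5), 3: (1.0, -1.0), 4: (1.0, 1.0)}
u = (0.25, -0.25)          # u = E_{p_hat}[phi] for p_hat uniform on {1, 2}

def q_x(x):
    """Gibbs distribution q_x(i) proportional to exp(phi(i) . x)."""
    scores = {i: p[0] * x[0] + p[1] * x[1] for i, p in phi.items()}
    m = max(scores.values())
    w = {i: math.exp(s - m) for i, s in scores.items()}
    z = sum(w.values())
    return {i: wi / z for i, wi in w.items()}

def mean_map(x):
    q = q_x(x)
    return tuple(sum(q[i] * phi[i][k] for i in phi) for k in (0, 1))

# March toward infinity along v = (-1, -1); bisect on a finite shift s
# along the edge direction (1, -1) so that the mean matches u.
t, v, d = 40.0, (-1.0, -1.0), (1.0, -1.0)
point = lambda s: (t * v[0] + s * d[0], t * v[1] + s * d[1])
lo, hi = -5.0, 5.0
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if mean_map(point(mid))[0] > u[0] else (mid, hi)
x_star = point((lo + hi) / 2)
q = q_x(x_star)   # q[3] > 0 even though p_hat(3) = 0, while q[4] vanishes
```

With these stand-in values, `q` puts roughly a tenth of its mass on point~$3$ and mass on the order of $e^{-80}$ on point~$4$, matching the support behavior described in the text.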
The next theorem shows that the point $\uu=\Exp{\phat}{\featmap}$ must be in the relative interior of the smallest face $C$ of the marginal polytope that includes all points $\featmap(\distelt)$ for which $\phat(\distelt)>0$. The hard core $\hardcore{\sumexu}$ then consists of those $\distelt$ for which $\featmap(\distelt)\in C$. Further, the support of $\qxbar$ is exactly $\hardcore{\sumexu}$. For instance, in the example just discussed, the smallest face that includes $\featmap(1)$ and $\featmap(2)$ (corresponding to the points in the support of $\phat$) is the edge $C$ identified above, which also includes $\featmap(3)$, yielding the hard core $\{1,2,3\}$. The support of the resulting maximum-likelihood distribution $\qxbar$ is exactly the hard core, which thus includes point~$3$ (since it is in the smallest face $C$ containing $\support(\phat)$), but not point~$4$. \begin{theorem} \label{thm:comp-supports-phat-ml} Assume the general set-up of this chapter. Let $\phat\in\Delta$, let $\uu=\Exp{\phat}{\featmap}$, and let $C$ be the smallest face of $\convfeat$ that includes $\featmap(\support(\phat))$. Let $\xbar\in\extspace$ maximize $\Exp{\phat}{\ln \qxbar}$. Then $\uu\in\ri(C)$ and \begin{equation} \label{eqn:thm:comp-supports-phat-ml:1} \support(\phat) \subseteq \hardcore{\sumexu} = \{\distelt\in\distset : \featmap(\distelt)\in C\} = \support(\qxbar). \end{equation} \end{theorem} \begin{proof} We first argue that $C = \conv{\featmap(\hardcore{\sumexu})}$, or equivalently, after translating by $\uu$, that $C' = \conv{\featmapu(\hardcore{\sumexu})}$, where $C'=C-\uu$. Let $E=\support(\phat)$. Then $\uu\in\ri(\conv{\featmap(E)})$ by Proposition~\ref{pr:ri-conv-finite}, implying, after translation, that $\zero\in\ri(\conv{\featmapu(E)})$. Therefore, $E\subseteq\hardcore{\sumexu}$ by Theorem~\ref{thm:erm-faces-hardcore}(\ref{thm:erm-faces-hardcore:a}) (applied to $\sumexu$), proving the inclusion in \eqref{eqn:thm:comp-supports-phat-ml:1}. 
Consequently, $ \featmapu(E) \subseteq \featmapu(\hardcore{\sumexu}) \subseteq \conv{\featmapu(\hardcore{\sumexu})} $. Since $C$ is the smallest face that includes $\featmap(E)$, $C'$ is the smallest face of the translated polytope $\conv{\featmapu(\distset)}$ that includes $\featmapu(E)$. Since $\conv{\featmapu(\hardcore{\sumexu})}$ is a face of this latter polytope (Theorem~\ref{thm:erm-faces-hardcore}\ref{thm:erm-faces-hardcore:b}), it follows that $C'\subseteq \conv{\featmapu(\hardcore{\sumexu})}$. For the reverse inclusion, note that $\uu\in C$ since $\uu$ is a convex combination of points in $\featmap(E)\subseteq C$, and since $C$ is convex. Thus, $\zero\in C'$. Since $C'$ is a face of $\conv{\featmapu(\distset)}$, Theorem~\ref{thm:erm-faces-hardcore}(\ref{thm:erm-faces-hardcore:aa}) implies that $\conv{\featmapu(\hardcore{\sumexu})}\subseteq C'$. So $C'=\conv{\featmapu(\hardcore{\sumexu})}$ and $C=\conv{\featmap(\hardcore{\sumexu})}$. By Theorem~\ref{thm:erm-faces-hardcore2}, it follows that $\zero\in\ri(C')$ so $\uu\in\ri(C)$, and also that \[ \hardcore{\sumexu} = \{ \distelt\in\distset : \featmapu(\distelt)\in C' \} = \{ \distelt\in\distset : \featmap(\distelt)\in C \}, \] proving the first equality of \eqref{eqn:thm:comp-supports-phat-ml:1}. We can write $\xbar=\ebar\plusl\qq$ for some icon $\ebar\in\corezn$ and some $\qq\in\Rn$. By Proposition~\ref{pr:ml-is-lpartu}, $\xbar$ minimizes $\sumexextu$, so $\ebar\in\unimin{\sumexu}$ by Theorem~\ref{thm:waspr:hard-core:2}. We can then apply Theorem~\ref{thm:galaxy-mapto-face}, with $J$ and $Z$ as in Eqs.~(\ref{eqn:thm:galaxy-mapto-face:J-def}) and~(\ref{eqn:thm:galaxy-mapto-face:Z-def}), yielding $\support(\qxbar)=Z$ by part~(\ref{thm:galaxy-mapto-face:a}), $J=\hardcore{\sumexu}$ by part~(\ref{thm:galaxy-mapto-face:b}), and $Z=J$ by part~(\ref{thm:galaxy-mapto-face:d}). Thus, $\support(\qxbar)=\hardcore{\sumexu}$, proving the last equality of \eqref{eqn:thm:comp-supports-phat-ml:1}. 
\end{proof} Theorem~\ref{thm:galaxy-mapto-face} shows that for any icon $\ebar\in\corezn$, $\meanmap$ maps the galaxy $\galaxd$ surjectively onto $\ri{C}$, for some face $C$ of $\convfeat$. In general, this mapping is not injective. However, as we show next, if we further restrict the mapping to just part of the galaxy, then it becomes a bijection, and in fact, induces a homeomorphism. In more detail, suppose $\uu\in\ri{C}$. As seen in Proposition~\ref{pr:ml-is-lpartu}, $\meanmap(\xbar)=\uu$ if and only if $\xbar$ minimizes $\sumexextu$. From Theorem~\refequiv{thm:waspr:hard-core:4}{pr:hard-core:4:a}{pr:hard-core:4:b}, all such minimizers have the form $\zbar\plusl\qq$ where $\zbar\in\conv(\unimin{\sumexu})$ and $\qq\in\Rn$ is a uniquely determined point in $\rescperp{\sumexu}$. We will see next that the set $\rescperp{\sumexu}$ is the same linear subspace $L$ for all $\uu\in\ri{C}$. We then restrict $\meanmap$ to the part of the galaxy $\galaxd$ corresponding to $L$, resulting in the mapping $\qq\mapsto \meanmap(\ebar\plusl\qq)$, for $\qq\in L$. This mapping is a homeomorphism. To state the formal result, recall from Section~\ref{sec:prelim:affine-sets} that the affine hull of a set $S\subseteq\Rn$, denoted $\affh{S}$, is the smallest affine set (in $\Rn$) that includes $S$, and that for every nonempty affine set $A\subseteq\Rn$, there exists a unique linear subspace that is parallel to $A$ (\Cref{roc:thm1.2}). \begin{theorem} \label{thm:aff-homeo} Assume the general set-up of this chapter. Let $\ebar\in\corezn$, let $Z$ be as defined in \eqref{eqn:thm:galaxy-mapto-face:Z-def}, and let $C=\conv{\featmap(Z)}$ (so that $\meanmap(\galaxd)=\ri{C}$ by Theorem~\ref{thm:galaxy-mapto-face}). Also, let $L$ be the linear subspace of $\Rn$ that is parallel to $\affh{\featmap(Z)}$, the affine hull of $\featmap(Z)$. Then $\rescperp{\sumexu}=\rescperp{\lpartu}=L$ for all $\uu\in\ri{C}$. 
Further, let $\rho:L\rightarrow\ri{C}$ be defined by $\rho(\xx)=\meanmap(\ebar\plusl\xx)$, for $\xx\in L$. Then $\rho$ is a homeomorphism. \end{theorem} \begin{proof} Let $\uu\in\ri{C}$. That $\resc{\sumexu}=\resc{\lpartu}$ follows from the definition of standard recession cone (\eqref{eqn:resc-cone-def}) and $\ln$ being strictly increasing. From Theorem~\ref{thm:galaxy-mapto-face}(\ref{thm:galaxy-mapto-face:b},\ref{thm:galaxy-mapto-face:d}), $\hardcore{\sumexu}=Z$. Applying Theorem~\ref{thm:hard-core:3}(\ref{thm:hard-core:3:e}) then yields $\rescperp{\sumexu}=\spn\featmapu(Z)$. Further, $\uu$ is in $C=\conv{\featmap(Z)}$, and so also is in $\affh{\featmap(Z)}$. Therefore, by Proposition~\ref{pr:lin-aff-par}, $\spn\featmapu(Z)=\spn(\featmap(Z) - \uu)$ is the linear subspace $L$ parallel to $\affh{\featmap(Z)}$, as claimed. It remains to prove $\rho$ is a homeomorphism. The map $\zbar\mapsto\ebar\plusl\zbar$ is continuous by \Cref{cor:aff-cont}, and $\meanmap$ is continuous by Proposition~\ref{pr:meanmap-cont}. Therefore, $\rho$, which is constructed by composing these and then restricting to a subspace, is as well. To see $\rho$ is a bijection, let $\uu\in\ri{C}$. Then $\ebar\in\unimin{\lpartu}$, by Theorem~\ref{thm:galaxy-mapto-face}(\ref{thm:galaxy-mapto-face:d}). We apply results from Section~\ref{sec:emp-loss-min} to the function $\sumexu$, noting that $e^x$ is strictly increasing and strictly convex. By Theorem~\ref{thm:waspr:hard-core:2}, a point $\qq\in\Rn$ minimizes $\fullshadsumexu$ if and only if $\ebar\plusl\qq$ minimizes $\sumexextu$, which, by Proposition~\ref{pr:ml-is-lpartu}, holds if and only if $\meanmap(\ebar\plusl\qq)=\uu$. By Theorem~\ref{thm:waspr:hard-core:4}, there exists a unique point $\qq\in \rescperp{\sumexu}=L$ minimizing $\fullshadsumexu$, and therefore a unique point $\qq\in L$ with $\rho(\qq)=\meanmap(\ebar\plusl\qq)=\uu$, proving $\rho$ is a bijection. Finally, we argue that $\rhoinv$, the functional inverse of $\rho$, is continuous. 
Suppose $\seq{\uu_t}$ is a sequence in $\ri{C}$ converging to some point $\uu\in\ri{C}$. For each $t$, let $\qq_t=\rhoinv(\uu_t)$, and $\qq=\rhoinv(\uu)$. We aim to prove $\qq_t\rightarrow\qq$. Suppose not. Then there exists a neighborhood $U\subseteq\Rn$ of $\qq$ for which $\qq_t\not\in U$ for infinitely many values of $t$. By discarding all other sequence elements, we assume henceforth that $\qq_t\not\in U$ for all $t$. Let $\xbar_t=\ebar\plusl\qq_t$. By sequential compactness, the resulting sequence $\seq{\xbar_t}$ must have a convergent subsequence converging to some point $\xbar\in\extspace$. By again discarding all other sequence elements, we can assume the entire sequence converges so that $\xbar_t\rightarrow\xbar$. Since $\meanmap$ is continuous (Proposition~\ref{pr:meanmap-cont}), $\uu_t=\meanmap(\xbar_t)\rightarrow\meanmap(\xbar)$. Therefore, $\meanmap(\xbar)=\uu$ since $\uu_t\rightarrow\uu$, implying $\xbar$ minimizes $\sumexextu$ (by Proposition~\ref{pr:ml-is-lpartu}). By Theorem~\refequiv{thm:waspr:hard-core:4}{pr:hard-core:4:a}{pr:hard-core:4:b}, it follows that $\xbar=\zbar\plusl\qq'$ for some $\zbar\in\conv(\unimin{\sumexu})$, where $\qq'\in\rescperp{\sumexu}=L$ minimizes $\fullshadsumexu$. On the other hand, $\uu=\rho(\qq)=\meanmap(\ebar\plusl\qq)$, implying $\ebar\plusl\qq$ also minimizes $\sumexextu$ (by Proposition~\ref{pr:ml-is-lpartu}), and so, by Theorem~\ref{thm:waspr:hard-core:2}, that $\qq\in L$ minimizes $\fullshadsumexu$. But by Theorem~\ref{thm:waspr:hard-core:4}, $\fullshadsumexu$ has a unique minimizer in $L$. Therefore, $\qq'=\qq$. Let $\PP$ be the projection matrix onto the linear subspace $L$. We claim that if $\ybar\in\conv(\unimin{\sumexextu})$ then $\PP\ybar=\zero$. To see this, note that $\ybar\in\arescone{\sumexextu}$, by Proposition~\ref{pr:new:thm:f:8a}(\ref{pr:new:thm:f:8a:a}). Also, $L=\rescperp{\sumexu}=(\arescone{\sumexextu})^{\bot}$, by Proposition~\ref{pr:hard-core:1}(\ref{pr:hard-core:1:b}). 
As a result, for all $\uu\in\Rn$, $ (\PP\ybar)\cdot\uu = \ybar\cdot(\PP\uu) = 0 $ using Theorem~\ref{thm:mat-mult-def} and that $\PP$ is symmetric in the first equality, and that $\PP\uu\in L=(\arescone{\sumexextu})^{\bot}$ in the second equality (see \Cref{pr:proj-mat-props}\ref{pr:proj-mat-props:a},\ref{pr:proj-mat-props:c}). Therefore, $\PP\ybar=\zero$ (by Proposition~\ref{pr:i:4}). Thus, $\PP\xbar = \PP\zbar\plusl\PP\qq = \qq$ since $\zbar\in\conv(\unimin{\sumexu})$ and $\qq\in L$ (using \Cref{pr:proj-mat-props}\ref{pr:proj-mat-props:d}). Similarly, $\PP\xbar_t = \PP\ebar\plusl\PP\qq_t = \qq_t$ since $\ebar\in\unimin{\sumexu}$ and $\qq_t\in L$. Therefore, $\qq_t = \PP\xbar_t \rightarrow \PP\xbar = \qq$ by \Cref{thm:linear:cont}(\ref{thm:linear:cont:b}), implying, in particular, that $\qq_t$ is in $U$, being a neighborhood of $\qq$, for all sufficiently large $t$. However, this is a contradiction since $\qq_t\not\in U$ for all $t$. \end{proof} The feature map $\featmap$ is said to be \emph{minimal} if there does not exist $\ww\in\Rn$ for which $\featmap(\distelt)\cdot\ww$ is constant as a function of $\distelt\in\distset$. Taking $\ebar=\zero$, Theorem~\ref{thm:galaxy-mapto-face} implies $\meanmap(\Rn)=\ri(\convfeat)$ and $\meanmap(\extspace)=\convfeat$, as was previously seen in Theorem~\ref{thm:meanmap-onto} as well. When, in addition, $\featmap$ is minimal, the affine hull of $\featmap(\distset)$ is all of $\Rn$, implying, as shown in the following corollary, that $\meanmap$ induces a homeomorphism from $\Rn$ to the interior of the marginal polytope, $\convfeat$, which is the same as its relative interior in this case. For instance, the feature map $\featmap$ given in Example~\ref{ex:exp-fam-eg1} is minimal, and as such, $\meanmap$, in this case, maps $\R^2$ homeomorphically onto the interior of the marginal polytope, the triangle with corners $\featmap(1),\featmap(3),\featmap(4)$. 
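As a complement to the corollary that follows, the finite part of this homeomorphism can be sketched numerically. Since the gradient of the log-partition function is the mean map, gradient descent on the strictly convex function $\xx\mapsto\ln\bigl(\sum_{\distelt}e^{\featmap(\distelt)\cdot\xx}\bigr)-\uu\cdot\xx$ recovers the unique $\xx$ with $\meanmap(\xx)=\uu$ for $\uu$ interior to the marginal polytope. The feature values below are assumed, illustrative stand-ins for a minimal $\featmap$, and the helper names are ours.

```python
import math

# Assumed, illustrative feature values for a minimal feature map on a
# four-point sample space; the marginal polytope is the triangle with
# corners phi[0], phi[2], phi[3].
phi = [(0.0, 0.0), (0.5, -0.5), (1.0, -1.0), (1.0, 1.0)]

def mean_map(x):
    """mu(x) = sum_i q_x(i) phi(i) with q_x(i) proportional to exp(phi(i) . x)."""
    scores = [p[0] * x[0] + p[1] * x[1] for p in phi]
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    return (sum(wi / z * p[0] for wi, p in zip(w, phi)),
            sum(wi / z * p[1] for wi, p in zip(w, phi)))

def invert_mean_map(u, steps=20000, lr=0.5):
    """Gradient descent on x -> log-partition(x) - u . x, whose gradient
    is mean_map(x) - u; for u in the interior of the triangle this
    converges to the unique x with mean_map(x) = u."""
    x = [0.0, 0.0]
    for _ in range(steps):
        g = mean_map(x)
        x[0] -= lr * (g[0] - u[0])
        x[1] -= lr * (g[1] - u[1])
    return x

u = (0.6, -0.2)                     # a point interior to the triangle
mu = mean_map(invert_mean_map(u))   # recovers u to high accuracy
```

The uniqueness of the recovered $\xx$ reflects the injectivity of $\meanmap$ on $\Rn$ when $\featmap$ is minimal; for $\uu$ on the boundary, the iterates would instead diverge to infinity, which is the regime the astral points $\ebar\plusl\qq$ capture.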
\begin{corollary} Assume the general set-up of this chapter, and also that $\featmap$ is minimal. Let $S=\conv{\featmap(\distset)}$. Then $\meanmap(\Rn)=\intr(S)$. Moreover, the function $\rho:\Rn\rightarrow\intr(S)$ defined by $\rho(\xx)=\meanmap(\xx)$, for $\xx\in\Rn$, is a homeomorphism. \end{corollary} \begin{proof} Let $A=\affh{\featmap(\distset)}$. We claim first that $A=\Rn$. Suppose not. Then there exists a point $\zz\in\Rn\setminus A$. The set $A$ and the singleton set $\{\zz\}$ are disjoint and both are convex and closed (in $\Rn$); also, $\{\zz\}$ is bounded. Therefore, there exists a hyperplane that separates the two sets strongly by \Cref{roc:cor11.4.2}. That is, there exists $\ww\in\Rn$ for which \[ \sup_{\xx\in A} \xx\cdot\ww < \zz\cdot\ww \] by \Cref{roc:thm11.1}. Let $\uu$ be any point in $A$ (which is nonempty since $\distset$ is nonempty), and let $\distelt\in\distset$ and $\lambda\in\R$. Then the point $\lambda\featmap(\distelt)+(1-\lambda)\uu$, being an affine combination of $\featmap(\distelt)$ and $\uu$, is also in $A$. Therefore, \[ (\lambda\featmap(\distelt)+(1-\lambda)\uu)\cdot\ww < \zz\cdot\ww, \] implying \[ \lambda [(\featmap(\distelt) - \uu)\cdot\ww] < (\zz - \uu)\cdot\ww. \] Since this holds for all $\lambda\in\R$, and since the right-hand side is finite, we must have $(\featmap(\distelt) - \uu)\cdot\ww=0$, that is, $\featmap(\distelt)\cdot\ww = \uu\cdot\ww$. Thus, $\featmap(\distelt)\cdot\ww$ is equal to the constant $\uu\cdot\ww$ for all $\distelt\in\distset$, contradicting that $\featmap$ is minimal. So $\affh{\featmap(\distset)}=\Rn$. It follows that $\affh{S}=\Rn$ as well since $S\supseteq \featmap(\distset)$. Thus, $\ri{S}=\intr{S}$ (by definition of relative interior), implying $\meanmap(\Rn)=\intr{S}$ by Theorem~\ref{thm:meanmap-onto} (or Theorem~\ref{thm:aff-homeo}). Let $\ebar=\zero$. 
Then $Z$, as defined in \eqref{eqn:thm:galaxy-mapto-face:Z-def}, is equal to $\distset$, implying that $L$, the linear subspace of $\Rn$ parallel to $\affh{\featmap(Z)}$, is $\Rn$. Applying Theorem~\ref{thm:aff-homeo} then yields that $\rho$ is a homeomorphism, as claimed. \end{proof} \bibliographystyle{plainnat} \clearpage \phantomsection \bookmarksetup{startatroot} \addtocontents{toc}{\vspace{1.25\baselineskip}} \addcontentsline{toc}{section}{\refname} \bibliography{../ab} \clearpage \appendix \bookmarksetupnext{level=-1} \addtocontents{toc}{\vspace{1.25\baselineskip}} \addtocontents{toc}{\cftpagenumbersoff{section}} \addappheadtotoc \addtocontents{toc}{\cftpagenumberson{section}} \appendixpage \section{Monotone passages} \label{sec:mono-passages} The definition of convexity given in Section~\ref{sec:def-convexity} is in terms of the segment joining two points. In both standard and astral convex analysis, the segment joining two (distinct) finite points in $\Rn$ is an ordinary line segment, a one-dimensional set that is homeomorphic to the interval $[0,1]$, and whose elements can be linearly ordered by distance from one endpoint. Informally, such a segment provides a natural way of getting from one endpoint to the other along a path that is straight and continuous. On the other hand, in Section~\ref{sec:def-convexity}, we saw that the segment $\lb{-\Iden \omm}{\Iden \omm}$ is all of $\extspace$, and thus that the segment joining two infinite astral points can be of a very different nature. In this appendix, we give an alternative view of segments, and so also of what it means for a set to be convex. Analogous to the ordinary line segment joining two finite points, we will see how any two astral points can be connected by a set called a \emph{monotone passage set}, which, informally, provides a linearly ordered, continuous ``route'' or ``passage'' for getting from one endpoint to the other. 
Furthermore, such a route has a monotonicity property, described below, that can be roughly interpreted as an analog of what it means for a set in $\Rn$ to be ``straight'' or ``unbending.'' We will see that the segment joining two astral points $\xbar$ and $\ybar$ in $\extspace$ consists exactly of all the monotone passage sets from $\xbar$ to $\ybar$. As a result, a set $S\subseteq\extspace$ is convex if and only if it includes every monotone passage set from $\xbar$ to $\ybar$, for all $\xbar,\ybar\in S$. \subsection{Definition} In more detail, let $\xx$ and $\yy$ be distinct points in $\Rn$, and let $P=\lb{\xx}{\yy}$ be the line segment joining them. As seen in Proposition~\ref{pr:e1}(\ref{pr:e1:a}), this set consists of all points $\mphom(\lambda)$, for $\lambda\in [0,1]$, where \[ \mphom(\lambda) = (1-\lambda) \xx + \lambda \yy. \] This function $\mphom:[0,1]\rightarrow P$ is a homeomorphism and defines a path along $P$ from $\mphom(0)=\xx$ to $\mphom(1)=\yy$. Furthermore, it provides a natural linear ordering of $P$, since $[0,1]$ is linearly ordered. Finally, we can consider the projection of this path in any direction $\uu\in\Rn$, that is, \begin{equation} \label{eqn:rho-cdot-u-monotone} \mphom(\lambda) \cdot \uu = \xx\cdot\uu + \lambda (\yy\cdot\uu - \xx\cdot\uu), \end{equation} which we here regard as a function of $\lambda\in[0,1]$. This function is monotonic in the sense of being either nondecreasing (if $\xx\cdot\uu\leq\yy\cdot\uu$) or nonincreasing (if $\xx\cdot\uu\geq\yy\cdot\uu$). For a general set $P\subseteq\Rn$, it can be shown that there exists a homeomorphism $\mphom:[0,1]\rightarrow P$ that also satisfies this monotonicity property (of $\lambda\mapsto\mphom(\lambda)\cdot\uu$ being monotonic, for all $\uu\in\Rn$) if and only if $P$ is a line segment (with distinct endpoints). These notions generalize to astral space, as will be seen in detail in this section. To do so, we will need to allow for more general linearly ordered sets than $[0,1]$. 
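The monotonicity property singled out above is easy to illustrate concretely: every projection $\lambda\mapsto\mphom(\lambda)\cdot\uu$ of a straight segment is monotonic, whereas a curved path joining the same endpoints fails this for some direction $\uu$. A small numerical sketch (path names and helpers are illustrative):

```python
import math

# Two paths from x = (1, 0) to y = (0, 1): the straight segment, and a
# circular arc through the same endpoints.
def seg(l, x, y):
    return tuple((1 - l) * a + l * b for a, b in zip(x, y))

def projections(path, u, n=200):
    """Sample l -> path(l) . u at n+1 evenly spaced parameter values."""
    return [sum(p * c for p, c in zip(path(k / n), u)) for k in range(n + 1)]

def is_monotone(vals, tol=1e-12):
    inc = all(a <= b + tol for a, b in zip(vals, vals[1:]))
    dec = all(a >= b - tol for a, b in zip(vals, vals[1:]))
    return inc or dec

x, y = (1.0, 0.0), (0.0, 1.0)
line = lambda l: seg(l, x, y)
arc = lambda l: (math.cos(l * math.pi / 2), math.sin(l * math.pi / 2))

# Every projection of the segment is monotone (it is affine in l); the
# arc's projection in direction u = (1, 1) rises and then falls.
u = (1.0, 1.0)
```

Here the segment's projection in direction $(1,1)$ is constant, while the arc's equals $\sqrt{2}\,\sin(\lambda\pi/2+\pi/4)$, which increases and then decreases, so the arc admits no monotone parametrization in the sense above.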
We recall some standard notions regarding ordered sets. A nonempty set $L$ (with default order relation $\leq$) is a \emph{partial order} if it is reflexive ($\lambda\leq\lambda$), antisymmetric (if $\lambda\leq\mu$ and $\mu\leq\lambda$ then $\lambda=\mu$), and transitive (if $\lambda\leq\mu$ and $\mu\leq\nu$ then $\lambda\leq\nu$), for all $\lambda,\mu,\nu\in L$. The set is a \emph{linear order} if every pair of elements $\lambda,\mu\in L$ is \emph{comparable}, meaning either $\lambda\leq\mu$ or $\mu\leq\lambda$. A \emph{chain} is a linearly ordered subset of the partial order $L$. We use the symbols $<$, $\geq$, $>$ to have their usual meanings in terms of $\leq$. Let $L$ be a linear order. Then $L$ is \emph{complete} if every nonempty subset of $L$ that has an upper bound in $L$ also has a \emph{least} upper bound in $L$. A subset $M\subseteq L$ is \emph{dense in $L$} if for all $\lambda,\mu\in L$, if $\lambda<\mu$ then there exists $\nu\in M$ with $\lambda<\nu<\mu$. We say that $L$ is \emph{dense} if it is dense in itself. A linear order $L$ that is both dense and complete is called a \emph{linear continuum}. (Note, however, that some authors require that a set $L$ consist of at least two elements to be considered either dense or a linear continuum; for mathematical convenience in what follows, we here allow $L$ to be a singleton with regard to these definitions.) Linear continua generalize the order properties of the real line, and will take the place of $[0,1]$ in the definitions below. To generalize the monotonicity property discussed above for ordinary line segments, it will be helpful to introduce a particular partial ordering $\leqxy$ of points in $\extspace$ which, crucially, is defined relative to two endpoints $\xbar,\ybar\in\extspace$. To make these endpoints explicit, we often add the phrase ``relative to $\xbar,\ybar$,'' but sometimes omit this when clear from context. 
The ordering is defined as follows: for $\wbar,\zbar\in\extspace$, we write \[ \wbar \leqxy \zbar \mbox{\emph{~relative to~}} \xbar,\ybar \] if it is the case that for all $\uu\in\Rn$, if $\xbar\cdot\uu\leq\ybar\cdot\uu$ then $\wbar\cdot\uu\leq\zbar\cdot\uu$. Note that this condition, applied to $-\uu$, implies also that if $\xbar\cdot\uu\geq\ybar\cdot\uu$ then $\wbar\cdot\uu\geq\zbar\cdot\uu$. Thus, $\wbar \leqxy \zbar$ relative to $\xbar,\ybar$ if and only if the ordering of $\wbar\cdot\uu$ and $\zbar\cdot\uu$ (that is, the projections of $\wbar$ and $\zbar$ in direction $\uu$) is consistent with the ordering of $\xbar\cdot\uu$ and $\ybar\cdot\uu$ (the projections of $\xbar$ and $\ybar$ in that same direction $\uu$), for all $\uu\in\Rn$. We call this the \emph{directional order relative to $\xbar,\ybar$}. The directional-order relation is indeed a partial order: \begin{proposition} Let $\xbar,\ybar\in\extspace$. The directional order $\leqxy$ relative to $\xbar,\ybar$ is a partial order. \end{proposition} \proof Reflexivity and transitivity are both straightforward. To show antisymmetry, let $\wbar,\zbar\in\extspace$ and suppose $\wbar\leqxy\zbar$ and $\zbar\leqxy\wbar$. Let $\uu\in\Rn$. If $\xbar\cdot\uu\leq\ybar\cdot\uu$, then, by definition of directional ordering, $\wbar\cdot\uu\leq\zbar\cdot\uu$ and $\zbar\cdot\uu\leq\wbar\cdot\uu$, so $\wbar\cdot\uu=\zbar\cdot\uu$. Otherwise, if $\xbar\cdot\uu\geq\ybar\cdot\uu$, then the preceding argument, applied to $-\uu$, shows that $\wbar\cdot(-\uu)=\zbar\cdot(-\uu)$, and therefore $\wbar\cdot\uu=\zbar\cdot\uu$ in this case as well. Since this equality holds for all $\uu\in\Rn$, it follows that $\wbar=\zbar$ (by Proposition~\ref{pr:i:4}). 
\qed In terms of directional ordering (relative to $\xbar,\ybar$), \Cref{pr:seg-simplify} can be rewritten to say that $\lb{\xbar}{\ybar}$ is exactly the closed interval in this ordering consisting of all points between $\xbar$ and $\ybar$: \begin{proposition} \label{pr:lb-def-by-part-order} Let $\xbar,\ybar\in\extspace$. Then \[ \lb{\xbar}{\ybar} = \Braces{ \zbar\in\extspace : \xbar \leqxy \zbar \leqxy \ybar \mbox{~relative to~} \xbar,\ybar }. \] \end{proposition} \proof Throughout this proof, $\leqxy$ is relative to $\xbar,\ybar$. If $\zbar\in\lb{\xbar}{\ybar}$ and $\uu\in\Rn$ with $\xbar\cdot\uu\leq\ybar\cdot\uu$ then by \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:c}), $\xbar\cdot\uu\leq\zbar\cdot\uu\leq\ybar\cdot\uu$, so $\xbar \leqxy \zbar \leqxy \ybar$. Conversely, if $\xbar \leqxy \zbar \leqxy \ybar$ and $\uu\in\Rn$ then either $\xbar\cdot\uu\leq\zbar\cdot\uu\leq\ybar\cdot\uu$ or $\ybar\cdot\uu\leq\zbar\cdot\uu\leq\xbar\cdot\uu$. In either case, $\zbar\cdot\uu\leq\max\{\xbar\cdot\uu,\,\ybar\cdot\uu\}$, so $\zbar\in\lb{\xbar}{\ybar}$ by \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}). \qed We say that a function $\mphom:L\rightarrow M$, where $L$ and $M$ are partial orders, is \emph{order-preserving} if for all $\lambda,\mu\in L$, if $\lambda\leq\mu$ then $\mphom(\lambda)\leq\mphom(\mu)$. The function $\mphom$ is an \emph{order isomorphism} if $\mphom$ is a bijection and if it also holds that $\lambda\leq\mu$ if and only if $\mphom(\lambda)\leq\mphom(\mu)$, for all $\lambda,\mu\in L$. If $\mphom:L\rightarrow P$, where $L$ is a partial order and $P\subseteq\extspace$, we add the phrase \emph{with range relative to $\xbar,\ybar$} to specify that the range $P$ is taken to be ordered by directional order relative to $\xbar,\ybar$ (although often this will be understood from context). For instance, we will soon seek maps that are order-preserving with range relative to $\xbar,\ybar$. 
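For finite, distinct $\xx,\yy\in\Rn$ and finite $\ww,\zz$, the directional order admits a concrete description: requiring that $(\yy-\xx)\cdot\uu\geq 0$ imply $(\zz-\ww)\cdot\uu\geq 0$ for every $\uu\in\Rn$ forces $\zz-\ww$ to lie in the dual cone of the halfspace $\{\uu : (\yy-\xx)\cdot\uu\geq 0\}$, which is the ray through $\yy-\xx$; that is, $\ww\leqxy\zz$ exactly when $\zz-\ww$ is a nonnegative multiple of $\yy-\xx$. The following sketch searches randomly for a violating direction (the helper is illustrative):

```python
import random

def leq_violated(w, z, x, y, trials=20000, seed=0):
    """Randomly search for a direction u witnessing failure of
    w <=_{x,y} z, i.e., (y - x) . u >= 0 but (z - w) . u < 0."""
    rng = random.Random(seed)
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    d_xy = [b - a for a, b in zip(x, y)]
    d_wz = [b - a for a, b in zip(w, z)]
    for _ in range(trials):
        u = [rng.gauss(0, 1) for _ in x]
        if dot(d_xy, u) >= 0 and dot(d_wz, u) < -1e-9:
            return True
    return False

x, y = (0.0, 0.0), (2.0, 1.0)
w = (0.2, 0.1)
z_on_ray = (1.0, 0.5)    # z - w = 0.4 (y - x): w <=_{x,y} z holds
z_off_ray = (1.0, 0.9)   # z - w not a multiple of y - x: the order fails
```

For infinite astral points no such finite-dimensional reduction is available, which is why the directional order is genuinely only a partial order there.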
Likewise, if $\mphom:P\rightarrow L$, we use the phrase \emph{with domain relative to $\xbar,\ybar$} to specify that the domain $P$ is ordered by directional order relative to $\xbar,\ybar$. Note that these two phrases can be used together to specify ordering of both the domain and the range. In earlier discussing the segment joining points $\xx,\yy\in\Rn$, we noted that the function given in \eqref{eqn:rho-cdot-u-monotone} is monotonic. Indeed, that discussion showed more precisely that $\mphom$ is an order isomorphism between $[0,1]$ (in the usual order) and $P$ as ordered directionally relative to $\xx,\yy$, that is, $\mphom$ is an order isomorphism with range relative to $\xx,\yy$. We will use this formulation in generalizing to astral space. Unless indicated otherwise, we generally take subsets of $\extspace$, such as $P$, to be in the subspace topology, and we take linearly ordered sets, such as $L$, to be in the order topology, whose subbasis elements are of the form $\{ \lambda\in L : \lambda < \lambda_0 \}$ or $\{ \lambda\in L : \lambda > \lambda_0 \}$, for some $\lambda_0\in L$. With these preliminaries, for $\xbar,\ybar\in\extspace$, we can now define a {monotone passage from $\xbar$ to $\ybar$} to be a continuous and surjective map $\mphom:L\rightarrow P$, where $P\subseteq\extspace$ and where $L$ is some linear continuum whose minimum and maximum elements map respectively to $\xbar$ and $\ybar$, and whose ordering is preserved in the directional ordering of $P$ relative to $\xbar,\ybar$. 
More precisely, for a linear order $L$ and a subset $P\subseteq\extspace$, we say that the function $\mphom:L\rightarrow P$ is a \emph{monotone passage from $\xbar$ to $\ybar$} if all of the following hold: \begin{letter-compact} \item \label{property:mono-pas:a} $L$ is a linear continuum; \item \label{property:mono-pas:b} $\mphom$ is continuous and surjective; \item \label{property:mono-pas:c} $L$ has a minimum element $\lcmin$ and a maximum element $\lcmax$; furthermore, $\mphom(\lcmin)=\xbar$ and $\mphom(\lcmax)=\ybar$; \item \label{property:mono-pas:d} $\mphom$ is order-preserving with range relative to $\xbar,\ybar$. \end{letter-compact} We say that a set $P\subseteq\extspace$ is a \emph{monotone passage set from $\xbar$ to $\ybar$} if it is the range of some monotone passage $\mphom:L\rightarrow P$ from $\xbar$ to $\ybar$, for some ordered set $L$. For example, in $\R^2$ with $\zbar=\limray{\ee_2}\plusl\ee_1$, the segment $P=\lb{\zero}{\zbar}$ is given in \Cref{ex:seg-zero-oe2-plus-e1}. This segment is the range of the monotone passage $\mphom:L\rightarrow P$ where $L$ is the real interval $[0,2]$, and \begin{equation} \label{eqn:mono-path-eg} \mphom(\lambda) = \begin{cases} {\displaystyle \frac{\lambda}{1-\lambda}} \, \ee_2 & \mbox{if $\lambda\in [0,1)$,} \\[1em] \limray{\ee_2} \plusl (\lambda-1) \ee_1 & \mbox{if $\lambda\in [1,2]$.} \end{cases} \end{equation} In the definition of monotone passage, the requirement that $\mphom$ is order-preserving (property~\ref{property:mono-pas:d}) can be restated as follows. For a function $\mphom:L\rightarrow P$, where $L$ is a linear order and $P\subseteq\extspace$, and for each $\uu\in\Rn$, we define the function $\mphomu:L\rightarrow\Rext$ by \begin{equation} \label{eqn:mphomu-def} \mphomu(\lambda) = \mphom(\lambda)\cdot\uu \end{equation} for $\lambda\in L$. 
Then it follows directly from the definition of directional ordering that $\mphom$ is order-preserving with range relative to $\xbar,\ybar$ if and only if it is the case that for all $\uu\in\Rn$, if $\xbar\cdot\uu\leq\ybar\cdot\uu$ then $\mphomu(\lambda)$ is nondecreasing in $\lambda\in L$. We will often use this formulation in proving property~(\ref{property:mono-pas:d}). If $\mphom:L\rightarrow P$ is a monotone passage from $\xbar$ to $\ybar$, then the linear ordering of $L$ provides, in informal terms, a directed route or passage along the set $P$ for getting from $\xbar$ to $\ybar$. As shown in the next proposition, because $L$ is a linear continuum, this passage is unbroken in the sense that the set $P$ must be topologically connected, thus connecting $\xbar$ and $\ybar$ in $\extspace$. (Recall that a \emph{separation} of a topological space $X$ is a pair of disjoint, nonempty open sets $U$ and $V$ whose union is all of $X$. The space is \emph{connected} if there does not exist a separation of $X$.) Furthermore, for all $\uu\in\Rn$, the function $\mphomu$ (as in Eq.~\ref{eqn:mphomu-def}) is continuous and monotonic in $\lambda\in L$, and includes in its image set all values between $\xbar\cdot\uu$ and $\ybar\cdot\uu$. \begin{proposition} \label{pr:mono-pass-props} Let $\xbar,\ybar\in\extspace$, let $P\subseteq\extspace$, and let $\mphom:L\rightarrow P$ be a monotone passage from $\xbar$ to $\ybar$, for some ordered set $L$. Then the following hold: \begin{letter-compact} \item \label{pr:mono-pass-props:a} $P$ is connected. \item \label{pr:mono-pass-props:c} For all $\uu\in\Rn$, let $\mphomu:L\rightarrow\Rext$ be as defined in \eqref{eqn:mphomu-def}. Then $\mphomu$ is continuous. Furthermore, if $\xbar\cdot\uu\leq\ybar\cdot\uu$, then $\mphomu$ is nondecreasing, and $L$'s image under $\mphomu$ is the entire interval $\mphomu(L) = [\xbar\cdot\uu,\,\ybar\cdot\uu]$. 
(Correspondingly, if $\xbar\cdot\uu\geq\ybar\cdot\uu$, then $\mphomu$ is nonincreasing, and $\mphomu(L) = [\ybar\cdot\uu,\,\xbar\cdot\uu]$.) \end{letter-compact} \end{proposition} \proof ~ Part~(\ref{pr:mono-pass-props:a}): Since $L$ is a linear continuum, it is also connected, so $P$ is connected as well since it is the image of $L$ under the continuous map $\mphom$ \citep[Theorems~24.1 and~23.5]{munkres}. Part~(\ref{pr:mono-pass-props:c}): The function $\mphomu$ is continuous because it is the composition of $\mphom$, which is continuous, with $\zbar\mapsto\zbar\cdot\uu$, which is also continuous by Theorem~\ref{thm:i:1}(\ref{thm:i:1c}). For the rest of the proof, assume $\xbar\cdot\uu\leq\ybar\cdot\uu$. The arguments for the alternative case that $\xbar\cdot\uu\geq\ybar\cdot\uu$ are symmetric (or can be derived from the present case by replacing $\uu$ with $-\uu$). As discussed above, that $\mphomu$ is nondecreasing follows directly from $\mphom$ being order-preserving, and the definition of directional order. For the last claim, because $\mphomu$ is nondecreasing, its minimum and maximum values are $\mphomu(\lcmin)=\xbar\cdot\uu$ and $\mphomu(\lcmax)=\ybar\cdot\uu$, respectively. Thus, $\mphomu(L)\subseteq [\xbar\cdot\uu,\,\ybar\cdot\uu]$. Further, because $L$ is connected, the intermediate value theorem \citep[Theorem~24.3]{munkres} implies that for every $\alpha \in [\xbar\cdot\uu,\,\ybar\cdot\uu]$ (that is, between $\mphomu(\lcmin)$ and $\mphomu(\lcmax)$), there must exist $\lambda\in L$ with $\mphomu(\lambda)=\alpha$. Thus, $\mphomu(L) = [\xbar\cdot\uu,\,\ybar\cdot\uu]$, as claimed. 
\qed In our earlier discussion of line segments between points in $\Rn$, the function $\mphom$ given in \eqref{eqn:rho-cdot-u-monotone} defines a monotone passage from $\xx$ to $\yy$, but also has two additional special properties: first, the domain of $\mphom$ in this case is $[0,1]$, making it a topological path, a particularly natural special case; and second, $\mphom$ is not just continuous, surjective and order-preserving, but is in fact a bijection defining both a homeomorphism and an order isomorphism. We define terminology for both properties. In the first case, we say that $\mphom:L\rightarrow P$ is a \emph{monotone path from $\xbar$ to $\ybar$} if $\mphom$ is a monotone passage from $\xbar$ to $\ybar$, and also $L$ is a closed real interval (that is, a real interval $[a,b]$, for some $a,b\in\R$ with $a\leq b$). In the second case above, we say that a monotone passage $\mphom:L\rightarrow P$ from $\xbar$ to $\ybar$ is \emph{strict} if, in addition to the other required properties, $\mphom$ is both a homeomorphism (and therefore bijective) and an order isomorphism with range relative to $\xbar,\ybar$. These additional properties mean that $P$ is an exact copy of $L$ in terms of both its topology and ordering (relative to $\xbar,\ybar$). As with monotone passage sets, we similarly say that a set $P\subseteq\extspace$ is a (strict) monotone path/passage \emph{set} from $\xbar$ to $\ybar$ if it is the range of some (strict) monotone path/passage $\mphom:L\rightarrow P$ from $\xbar$ to $\ybar$, for some linearly ordered set $L$. For example, the monotone passage given in \eqref{eqn:mono-path-eg} is in fact a strict monotone path. In Section~\ref{sec:mono-paths}, we will explore monotone paths in greater detail, and will see that, as in this example, every monotone passage set between a finite point in $\Rn$ and an arbitrary astral point in $\extspace$ must actually be a monotone path set. 
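The order-preservation property~(\ref{property:mono-pas:d}) of the example in \eqref{eqn:mono-path-eg} can also be checked concretely. The following minimal numerical sketch (not part of the formal development; the function names are ours) evaluates the couplings $\mphom(\lambda)\cdot\uu$ for this map from $\zero$ to $\zbar=\limray{\ee_2}\plusl\ee_1$, encoding infinite values as floating-point infinities, and verifies that $\mphomu$ is nondecreasing over a grid of $\lambda\in[0,2]$ for sample directions $\uu$ with $\zero\cdot\uu\leq\zbar\cdot\uu$:

```python
import math

def phi_dot_u(lam, u):
    """Coupling phi(lam).u for the example monotone path in R^2 from
    the origin to (omega e2 leftward-plus e1); u is a pair (u1, u2)."""
    u1, u2 = u
    if 0 <= lam < 1:
        return (lam / (1 - lam)) * u2      # phi(lam) = (lam/(1-lam)) e2
    elif 1 <= lam <= 2:
        if u2 > 0:
            return math.inf                # the omega*e2 part dominates
        if u2 < 0:
            return -math.inf
        return (lam - 1) * u1              # finite part (lam-1) e1
    raise ValueError("lam must lie in [0, 2]")

def ybar_dot_u(u):
    """Coupling of the endpoint zbar = omega e2 leftward-plus e1 with u."""
    u1, u2 = u
    if u2 > 0:
        return math.inf
    if u2 < 0:
        return -math.inf
    return u1

# For every sampled direction u with 0 = xbar.u <= zbar.u, the map
# lam -> phi(lam).u should be nondecreasing.
for u in [(0.0, 1.0), (1.0, 0.0), (3.0, 2.0), (-1.0, 0.5), (2.0, 0.0)]:
    if 0.0 <= ybar_dot_u(u):
        vals = [phi_dot_u(k / 50, u) for k in range(101)]
        assert all(a <= b for a, b in zip(vals, vals[1:]))
```

Such a finite sample of directions is of course only a sanity check, not a proof; the formal verification is exactly the case analysis on $u_1,u_2$ implicit in the branches above.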
On the other hand, between infinite points, we will see that there can exist monotone passage sets that are not monotone path sets. Later, in Corollary~\ref{cor:max-chain-iff-mono-pass}, we will also see that every monotone passage set is in fact a strict monotone passage set, so these two notions are actually equivalent. \subsection{Maximal chains} We turn next to proving the most central fact about monotone passages, namely, that the segment joining any two astral points consists of all the monotone passage sets from one endpoint to the other. Thus, a point $\zbar$ is in the segment joining $\xbar$ and $\ybar$ if and only if it is along some monotone passage from $\xbar$ to $\ybar$. We prove this by relating monotone passages to maximal chains of elements in the directional order, thereby allowing us to make statements about the existence of monotone passages using Zorn's lemma. For $\xbar,\ybar\in\extspace$, we say that a set $P\subseteq\extspace$ is a \emph{directional chain from $\xbar$ to $\ybar$} (or simply a \emph{chain}, if clear from context) if $P\subseteq\lb{\xbar}{\ybar}$ and $P$ is a chain in the directional order relative to $\xbar,\ybar$. Such a chain $P$ is \emph{maximal} if there does not exist any other directional chain from $\xbar$ to $\ybar$ that properly includes $P$. We prove in this section that a set $P$ is a monotone passage set from $\xbar$ to $\ybar$ if and only if it is a maximal directional chain from $\xbar$ to $\ybar$. Along the way, we also prove that either of these are also equivalent to $P$ being a \emph{strict} monotone passage set from $\xbar$ to $\ybar$. As a first step, we show that every monotone passage set is a linear continuum in the directional order (and therefore also a chain): \begin{proposition} \label{pr:mono-pass-lin-cont} Let $\xbar,\ybar\in\extspace$, and let $P$ be a monotone passage set from $\xbar$ to $\ybar$. 
Then $P\subseteq\lb{\xbar}{\ybar}$, and $P$ is a linear continuum in the directional order relative to $\xbar,\ybar$. \end{proposition} \proof By assumption, there exists a monotone passage $\mphom:L\rightarrow P$ from $\xbar$ to $\ybar$, where $L$ is some linear continuum with minimum $\lcmin$ and maximum $\lcmax$. Throughout this proof, $\leqxy$ is relative to $\xbar,\ybar$. We show first that $P$ is a directional chain from $\xbar$ to $\ybar$. Let $\zbar\in P$, and let $\lambda\in\mphominv(\zbar)$ (which is nonempty since $\mphom$ is surjective). Then $\lcmin\leq\lambda\leq\lcmax$, implying $\xbar\leqxy\zbar\leqxy\ybar$ (since $\mphom$ is order-preserving), so $\zbar\in\lb{\xbar}{\ybar}$ (by Proposition~\ref{pr:lb-def-by-part-order}). Thus, $P\subseteq\lb{\xbar}{\ybar}$. Now let $\zbar'\in P$, and let $\lambda'\in\mphominv(\zbar')$. Then either $\lambda\leq\lambda'$ or $\lambda'\leq\lambda$, implying, by order preservation, that either $\zbar\leqxy\zbar'$ or $\zbar'\leqxy\zbar$. Therefore, all pairs in $P$ are comparable (relative to $\xbar,\ybar$), so $P$ is linearly ordered by $\leqxy$. We show next that this ordering of $P$ is dense. Suppose $\zbar,\zbar'\in P$, and that $\zbar\ltxy\zbar'$ (so that $\zbar\not\geqxy\zbar'$). Then by definition of directional ordering, there must exist $\uu\in\Rn$ with $\xbar\cdot\uu\leq\ybar\cdot\uu$ and $\zbar\cdot\uu<\zbar'\cdot\uu$, implying $\zbar\cdot\uu<\alpha<\zbar'\cdot\uu$ for some $\alpha\in\R$. Let $\mphomu:L\rightarrow \Rext$ be as in \eqref{eqn:mphomu-def}. By Proposition~\ref{pr:mono-pass-props}(\ref{pr:mono-pass-props:c}), there exists $\mu\in L$ with $\mphomu(\mu)=\alpha$. Let $\wbar=\mphom(\mu)$. Then $\zbar\ltxy\wbar$ (since otherwise, we would have $\zbar\cdot\uu\geq\wbar\cdot\uu=\alpha$, a contradiction). Likewise $\wbar\ltxy\zbar'$. Thus, $P$ is dense in the directional order $\leqxy$. Finally, we show that $P$ is complete in this order. 
Let $Q$ be a nonempty subset of $P$; we aim to show that $Q$ has a least upper bound. Let $M=\mphominv(Q)$, which is nonempty (since $\mphom$ is surjective) and upper bounded by $\lcmax$. Therefore, $M$ has a least upper bound $\mu$ in $L$, since $L$ is complete. We claim that $\mphom(\mu)$ is a least upper bound on $Q$ in $P$. Let $\zbar\in Q$, and let $\lambda\in\mphominv(\zbar)$. Then $\lambda\in M$, so $\lambda\leq\mu$, implying $\zbar=\mphom(\lambda)\leqxy\mphom(\mu)$ since $\mphom$ is order-preserving. Thus, $\mphom(\mu)$ is an upper bound on $Q$. To see that it is the least upper bound, suppose to the contrary that there exists an upper bound $\wbar\in P$ on $Q$ with $\wbar\ltxy\mphom(\mu)$. As shown above, $P$ is dense, so there also exists $\wbar'\in P$ with $\wbar\ltxy\wbar'\ltxy\mphom(\mu)$. Let $\nu'\in\mphominv(\wbar')$. Then for all $\lambda\in M$, $\mphom(\lambda)\in Q$ so \[\mphom(\lambda)\leqxy\wbar\ltxy\wbar'=\mphom(\nu')\ltxy\mphom(\mu),\] implying $\lambda<\nu'<\mu$ (using order preservation in the contrapositive). This shows that $\nu'$ is an upper bound on $M$ that is strictly less than $\mu$, a contradiction. \qed Using this proposition, we can now prove that every monotone passage set is a maximal chain. \begin{theorem} \label{thm:mon-pas-is-max-chain} Let $\xbar,\ybar\in\extspace$, and let $P\subseteq\extspace$ be a monotone passage set from $\xbar$ to $\ybar$. Then $P$ is a maximal directional chain from $\xbar$ to $\ybar$. \end{theorem} \proof Throughout this proof, we understand $P$ to be ordered in the directional order $\leqxy$ relative to $\xbar,\ybar$. In this order, $P$ is a linear continuum, by Proposition~\ref{pr:mono-pass-lin-cont}, and also is included in $\lb{\xbar}{\ybar}$. Thus, $P$ is a directional chain from $\xbar$ to $\ybar$. It remains then only to show that $P$, as a chain, is maximal. 
Suppose it is not, and therefore, that there exists a point $\zbar\in\lb{\xbar}{\ybar}\setminus P$ that is comparable to every point in $P$. Let \begin{align*} I &= \{ \zbar'\in P : \zbar' \ltxy \zbar \}, \\ J &= \{ \zbar'\in P : \zbar' \gtxy \zbar \}. \end{align*} Then $\xbar\in I$ (by Proposition~\ref{pr:lb-def-by-part-order}), so $I$ is not empty, and is upper-bounded by $\ybar$. Therefore, since $P$ is a linear continuum, $I$ has a least upper bound $\ybar'\in P$. By a similar argument, $J$ has a greatest lower bound $\xbar'\in P$. These must be different from $\zbar$, which is not in $P$. Also, $I\cup J=P$ since $P$ is a chain with every element comparable to $\zbar$. We consider a few cases, deriving a contradiction in each one. If $\ybar' \ltxy \zbar$, then $\ybar' \not\geqxy \zbar$ so there exists $\uu\in\Rn$ such that $\xbar\cdot\uu\leq\ybar\cdot\uu$ and $\ybar'\cdot\uu < \zbar\cdot\uu$, so $\ybar'\cdot\uu < \alpha < \zbar\cdot\uu$ for some $\alpha\in\R$. Let $I'=\{\zbar'\in P : \zbar'\cdot\uu < \alpha\}$ and $J'=\{\zbar'\in P : \zbar'\cdot\uu > \alpha\}$, which are disjoint and both open in $P$, being standard basis elements of $\extspace$ restricted to $P$. If $\zbar'\in I$ then $\zbar'\cdot\uu\leq\ybar'\cdot\uu<\alpha$, since $\ybar'$ is an upper bound on $I$; thus, $I\subseteq I'$. If $\zbar'\in J$ then $\zbar'\cdot\uu\geq\zbar\cdot\uu>\alpha$, by $J$'s definition; thus, $J\subseteq J'$. Thus, $I'$ and $J'$ are a separation of $P$, a contradiction since $P$ is connected (Proposition~\ref{pr:mono-pass-props}\ref{pr:mono-pass-props:a}). By a symmetric argument, a contradiction can be derived if $\xbar'\gtxy\zbar$. So we assume henceforth that $\xbar'\ltxy\zbar\ltxy\ybar'$. Because $P$ is a linear continuum, there exists an element $\zbar'\in P$ with $\xbar'\ltxy\zbar'\ltxy\ybar'$. If $\zbar'\ltxy\zbar$ then $\zbar'$ is a lower bound on $J$, contradicting that $\xbar'$ is the greatest lower bound on $J$. 
A symmetric contradiction is reached if $\zbar'\gtxy\zbar$. Having reached a contradiction in all cases, we conclude that $P$ is maximal, as claimed. \qed As preliminary steps in proving the converse, the next proposition establishes the continuity properties of an order-preserving bijection from a linearly ordered set to a subset of $\extspace$, followed by a proposition showing that every maximal chain (and thus, every monotone passage set) is closed in $\extspace$, and therefore compact. \begin{proposition} \label{pr:mphom-inv-is-cont} Let $\xbar,\ybar\in\extspace$, and let $\mphom:L\rightarrow P$ be a bijection that is order-preserving with range relative to $\xbar,\ybar$, for some linearly ordered set $L$ and some $P\subseteq\extspace$. Then $\mphom$ is an order isomorphism (with range relative to $\xbar,\ybar$), and $\mphominv$ is continuous. If, in addition, $P$ is compact, then $\mphom$ is a homeomorphism. \end{proposition} \proof The directional order $\leqxy$ is understood to be relative to $\xbar,\ybar$ throughout this proof. To show $\mphom$ is an order isomorphism, let $\lambda,\mu\in L$. If $\lambda\leq\mu$ then $\mphom(\lambda)\leqxy\mphom(\mu)$, since $\mphom$ is order-preserving. For the converse, suppose $\lambda\not\leq\mu$. Then $\lambda>\mu$, implying $\mphom(\lambda)\geqxy\mphom(\mu)$, since $\mphom$ is order-preserving, and that $\mphom(\lambda)\neq\mphom(\mu)$, since $\mphom$ is a bijection. Therefore, $\mphom(\lambda)\not\leqxy\mphom(\mu)$. To prove continuity, it suffices to show that the image of every subbasis element $V\subseteq L$ is open in $P$ \citep[Section~18]{munkres}. As such, let $V=\{ \lambda\in L : \lambda < \lambda_0 \}$ be a subbasis element, for some $\lambda_0\in L$. (The case that $V$ is defined by the reverse inequality, $\lambda > \lambda_0$, is entirely symmetric.) 
Let $\zbar_0=\mphom(\lambda_0)$, let $\zbar$ be any point in $\mphom(V)$, and let $\lambda=\mphominv(\zbar)$, implying $\lambda\in V$, that is, $\lambda<\lambda_0$. Since $\mphom$ was shown to be an order isomorphism, $\zbar\not\geqxy\zbar_0$. Therefore, there exists $\uu\in\Rn$ such that $\xbar\cdot\uu\leq\ybar\cdot\uu$ and $\zbar\cdot\uu<\zbar_0\cdot\uu$, so $\zbar\cdot\uu<\alpha<\zbar_0\cdot\uu$ for some $\alpha\in\R$. Let $B = \{ \wbar \in P : \wbar\cdot\uu < \alpha \}$, which is open (in $P$), being a standard basis element of $\extspace$ restricted to $P$; further, $B$ includes $\zbar$. We claim $B\subseteq \mphom(V)$. To see this, suppose $\wbar\not\in\mphom(V)$. Then $\mphominv(\wbar)\geq\lambda_0$, implying $\wbar\geqxy\zbar_0$, so $\wbar\cdot\uu\geq\zbar_0\cdot\uu>\alpha$. Therefore, $\wbar\not\in B$. Thus, every point $\zbar$ in $\mphom(V)$ has a neighborhood that is included in $\mphom(V)$, completing the proof that $\mphom(V)$ is open in $P$, and so that $\mphominv$ is continuous. Suppose now that $P$ is compact. The linearly ordered set $L$, in the order topology, is Hausdorff \citep[Theorem~17.11]{munkres}. Since, as just shown, $\mphominv$ is a continuous bijection, these facts together imply that $\mphominv:P\rightarrow L$ is a homeomorphism \citep[Theorem~26.6]{munkres}. Therefore, $\mphom$ is a homeomorphism as well. \qed \begin{proposition} \label{pr:sbar-all-comp} Let $\xbar,\ybar\in\extspace$, and let $P\subseteq\extspace$ be a directional chain from $\xbar$ to $\ybar$. Let $\zbar\in\extspace$. If $\zbar\in\Pbar$ then $\zbar$ is comparable (relative to $\xbar,\ybar$) to every point in $P$. Consequently, if $P\subseteq\extspace$ is a maximal directional chain from $\xbar$ to $\ybar$ (and therefore also if $P$ is a monotone passage set from $\xbar$ to $\ybar$) then $P$ is closed in $\extspace$. \end{proposition} \proof We prove the contrapositive. Suppose there exists a point $\zbar'\in P$ that is not comparable to $\zbar$. 
Then $\zbar\not\leqxy\zbar'$, so there exists $\uu\in\Rn$ such that $\xbar\cdot\uu\leq\ybar\cdot\uu$ and $\zbar\cdot\uu > \zbar'\cdot\uu$, implying $\zbar\cdot\uu > \alpha > \zbar'\cdot\uu$ for some $\alpha\in\R$. Likewise, $\zbar\not\geqxy\zbar'$, so there exists $\uu'\in\Rn$ and $\alpha'\in\R$ such that $\xbar\cdot\uu'\leq\ybar\cdot\uu'$ and $\zbar\cdot\uu' < \alpha' < \zbar'\cdot\uu'$. Let \[ V = \{\wbar\in\extspace : \wbar\cdot\uu > \alpha \mbox{~and~} \wbar\cdot\uu' < \alpha'\}. \] Then $V$ is open (being a standard basis element), and includes $\zbar$. We claim $V$ is disjoint from $P$, which will prove that $\zbar\not\in\Pbar$. Suppose $\wbar\in P$. Then either $\wbar\leqxy\zbar'$ or $\wbar\geqxy\zbar'$ (since $P$ is a chain). If $\wbar\leqxy\zbar'$ then $\wbar\cdot\uu\leq\zbar'\cdot\uu<\alpha$. And if $\wbar\geqxy\zbar'$ then $\wbar\cdot\uu'\geq\zbar'\cdot\uu'>\alpha'$. In either case, $\wbar\not\in V$, as claimed. For the last statement of the proposition, if $P$ is not closed in $\extspace$, then there exists a point $\zbar\in\Pbar\setminus P$ which is comparable to every point in $P$. Therefore, $P\cup\{\zbar\}$ is a directional chain that is a proper superset of $P$, so $P$ cannot be maximal. Thus, every maximal directional chain (and so also every monotone passage set, by Theorem~\ref{thm:mon-pas-is-max-chain}) is closed in $\extspace$. \qed We now prove that every maximal directional chain is a strict monotone passage set: \begin{theorem} \label{thm:max-chain-is-mon-pas} Let $\xbar,\ybar\in\extspace$, and let $P\subseteq\extspace$ be a maximal directional chain from $\xbar$ to $\ybar$. Then $P$ is a strict monotone passage set from $\xbar$ to $\ybar$. \end{theorem} \proof To show that $P$ is a strict monotone passage set, we need to construct a linear continuum $L$ and bijection $\mphom:L\rightarrow P$ satisfying all the required properties. 
To do so, we simply let $L$ be equal to $P$, with order defined to be the same as directional ordering $\leqxy$ of elements of $P$ (with all directional ordering in this proof relative to $\xbar,\ybar$). Then $L$ is linearly ordered since $P$ is a chain. Note importantly that $L$ and $P$ are identical as sets, but their topologies are defined differently: $P$ has the subspace topology inherited from $\extspace$, while $L$ is in the order topology associated with the order $\leqxy$. Since the elements of both $L$ and $P$ are astral points, we use the same notation for both, such as $\zbar$, rather than $\lambda$ as used previously. Likewise, we use $\leqxy$ to denote order in $L$, rather than $\leq$. We further define $\mphom:L\rightarrow P$ to simply be the identity map, meaning $\mphom(\zbar)=\zbar$ for all $\zbar\in L$. This function is clearly (and trivially) a bijection, and also an order isomorphism (with range relative to $\xbar,\ybar$). By Proposition~\ref{pr:sbar-all-comp}, $P$, being a maximal chain, is closed in $\extspace$, and so also compact. Therefore, $\mphom$ is a homeomorphism by Proposition~\ref{pr:mphom-inv-is-cont}. By Proposition~\ref{pr:lb-def-by-part-order}, and since $P\subseteq\lb{\xbar}{\ybar}$, $\xbar\leqxy\zbar\leqxy\ybar$, for all $\zbar\in P$. As a result, $\xbar$ must be in $P$, being comparable to all elements of $P$, since otherwise $P\cup\{\xbar\}$ would be a chain that properly includes $P$, contradicting that $P$ is maximal. Likewise, $\ybar\in P$. Since the ordering of $L$ is isomorphic to that of $P$, this further shows that $\xbar$ and $\ybar$ are minimum and maximum elements in $L$ with $\mphom(\xbar)=\xbar$ and $\mphom(\ybar)=\ybar$, proving part~(\ref{property:mono-pas:c}) in the definition of monotone passage. It remains only to prove that $L$ is a linear continuum, which we show in the next two claims: \setcounter{claimp}{0} \begin{claimpx} $L$ is complete. 
\end{claimpx} \proof Let $M$ be a nonempty subset of $L$ (which is upper bounded since $\ybar$ is a maximum element in $L$). We aim to show that $M$ has a least upper bound. Let $J$ denote the set of all points in $L$ that upper bound $M$; that is, \begin{align*} J &= \{ \zbar\in L : \forall \wbar\in M,\, \wbar\leqxy\zbar \}, \\ &= \bigcap_{\wbar\in M} \{ \zbar\in L : \wbar\leqxy\zbar \}. \end{align*} The sets appearing in the intersection in the last line are each closed in $L$, being complements of subbasis elements; therefore, $J$ is closed in $L$ since it is the intersection of such sets. Let $J'=\mphom(J)$. (Of course, $J$ and $J'$ are identical as sets, but they belong to differently defined topological spaces.) Then $J'$ is closed in $P$, since $\mphominv$ is continuous; therefore, $J'$ is also compact, since $P$ is compact. As a result, the continuous function $\mphominv$, over the compact subspace $J'$, must attain a minimum at some point $\ybar'\in J'$ \citep[Theorem~27.4]{munkres}. This means that $\ybar'=\mphominv(\ybar')\leqxy \zbar$ for all $\zbar\in J=\mphominv(J')$, and also that $\ybar'$ is itself in $J$ (so that it is itself an upper bound on $M$). Thus, $\ybar'$ is a least upper bound on $M$. \clqed \begin{claimpx} $L$ is dense. \end{claimpx} \proof Let $\xbar',\ybar'\in L$ with $\xbar' \ltxy \ybar'$. We aim to show there exists $\zbar\in L$ with $\xbar' \ltxy \zbar \ltxy \ybar'$. Suppose, by way of contradiction, that there does not exist any such point in $L$. Since $\xbar'\ltxy\ybar'$, there exists $\uu\in\Rn$ with $\xbar\cdot\uu\leq\ybar\cdot\uu$ and $\xbar'\cdot\uu < \ybar'\cdot\uu$. Let $\alpha\in\R$ be such that $\xbar'\cdot\uu < \alpha < \ybar'\cdot\uu$. We claim that there must exist a point $\zbar\in\lb{\xbar'}{\ybar'}$ with $\zbar\cdot\uu=\alpha$. By Theorem~\ref{thm:i:1}(\ref{thm:i:1d}), there exist sequences $\seq{\xx_t}$ and $\seq{\yy_t}$ in $\Rn$ such that $\xx_t\rightarrow\xbar'$ and $\yy_t\rightarrow\ybar'$. 
Since the open set $\{\wbar\in\extspace : \wbar\cdot\uu < \alpha\}$ includes $\xbar'$, it also must include all but finitely many of the points $\xx_t$, since they converge to $\xbar'$. By discarding all other sequence elements, we can assume that the entire sequence is in this open set so that $\xx_t\cdot\uu<\alpha$ for all $t$. By a similar argument, we can assume that $\yy_t\cdot\uu>\alpha$ for all $t$. Thus, $\xx_t\cdot\uu<\alpha<\yy_t\cdot\uu$. For each $t$, let $ \zz_t= (1-\lambda_t) \xx_t + \lambda_t \yy_t $ where \[ \lambda_t = \frac{\alpha - \xx_t\cdot\uu} {\yy_t\cdot\uu - \xx_t\cdot\uu}, \] which is in $[0,1]$. This choice ensures that $\zz_t\cdot\uu = \alpha$ for all $t$. By sequential compactness, the resulting sequence $\seq{\zz_t}$ must have a convergent subsequence. By discarding all other elements, we can assume the entire sequence converges to some point $\zbar\in\extspace$. This point must be in $\lb{\xbar'}{\ybar'}$ since all the conditions of Corollary~\ref{cor:e:1} have been satisfied. Furthermore, $\alpha=\zz_t\cdot\uu\rightarrow\zbar\cdot\uu$, by construction and Theorem~\ref{thm:i:1}(\ref{thm:i:1c}), so $\zbar\cdot\uu=\alpha$, proving the claim. For all $\uu'\in\Rn$, if $\xbar\cdot\uu'\leq\ybar\cdot\uu'$ then $\xbar'\cdot\uu'\leq\ybar'\cdot\uu'$ (since $\xbar'\ltxy\ybar'$) so $\xbar'\cdot\uu'\leq\zbar\cdot\uu'\leq\ybar'\cdot\uu'$ by Proposition~\ref{pr:lb-def-by-part-order} since $\zbar\in\lb{\xbar'}{\ybar'}$. This shows that $\xbar'\leqxy\zbar\leqxy\ybar'$, so actually $\xbar'\ltxy\zbar\ltxy\ybar'$ since $\xbar'\cdot\uu < \zbar\cdot\uu < \ybar'\cdot\uu$. It follows that $\zbar\not\in P$, since we assumed there is no point in $P$ that is strictly between $\xbar'$ and $\ybar'$. We claim every point in $P$ is comparable to $\zbar$. This is because if $\wbar\in P$, then, by our initial assumption, it cannot be that $\xbar'\ltxy\wbar\ltxy\ybar'$, so either $\wbar\leqxy\xbar'\ltxy\zbar$ or $\zbar\ltxy\ybar'\leqxy\wbar$. 
Thus, $P\cup\{\zbar\}$ is a chain that is a proper superset of $P$, contradicting that $P$ is a maximal chain. \clqed We conclude that $\mphom$ is a strict monotone passage, having proved all parts of its definition. \qed Combining yields the three-way equivalence of monotone passage sets, strict monotone passage sets, and maximal chains: \begin{corollary} \label{cor:max-chain-iff-mono-pass} Let $\xbar,\ybar\in\extspace$, and let $P\subseteq\extspace$. Then the following are equivalent: \begin{letter-compact} \item \label{cor:max-chain-iff-mono-pass:a} $P$ is a monotone passage set from $\xbar$ to $\ybar$. \item \label{cor:max-chain-iff-mono-pass:b} $P$ is a strict monotone passage set from $\xbar$ to $\ybar$. \item \label{cor:max-chain-iff-mono-pass:c} $P$ is a maximal directional chain from $\xbar$ to $\ybar$. \end{letter-compact} \end{corollary} \proof That (\ref{cor:max-chain-iff-mono-pass:a}) $\Rightarrow$ (\ref{cor:max-chain-iff-mono-pass:c}) and (\ref{cor:max-chain-iff-mono-pass:c}) $\Rightarrow$ (\ref{cor:max-chain-iff-mono-pass:b}) follow respectively from Theorems~\ref{thm:mon-pas-is-max-chain} and~\ref{thm:max-chain-is-mon-pas}. That (\ref{cor:max-chain-iff-mono-pass:b}) $\Rightarrow$ (\ref{cor:max-chain-iff-mono-pass:a}) is immediate. \qed As an immediate corollary, the subspace topology on a monotone passage set $P$ from $\xbar$ to $\ybar$ is identical to the order topology on that same set (under the directional order relative to $\xbar,\ybar$). \begin{corollary} \label{cor:subspace-equals-order-topo} Let $\xbar,\ybar\in\extspace$, and let $P$ be a monotone passage set from $\xbar$ to $\ybar$. Then the set $P$ in the subspace topology (as a subspace of $\extspace$) is homeomorphic to the same set $P$ in the order topology induced by directional order relative to $\xbar,\ybar$. \end{corollary} \proof By Corollary~\ref{cor:max-chain-iff-mono-pass}, there exists a strict monotone passage $\mphom:L\rightarrow P$ for some linearly ordered set $L$. 
Since $\mphom$ is an order isomorphism (with range relative to $\xbar,\ybar$), it is a homeomorphism between $L$ in its order topology and the set $P$ in the order topology induced by directional order relative to $\xbar,\ybar$ (an order isomorphism between linearly ordered sets is always a homeomorphism of their order topologies). Furthermore, since $\mphom$ is a homeomorphism, the set $L$ in the order topology is homeomorphic to $P$ in the subspace topology. Composing these homeomorphisms yields the corollary. \qed \subsection{Existence using Zorn's lemma} We can use the characterization of monotone passage sets as maximal chains to prove their existence by direct application of Zorn's lemma. We use a version of Zorn's lemma specialized to families of sets \citep[Section~10.2]{davey_priestley_02}. We say that a family of sets $\calC$ is an \emph{inclusion-chain} if for all $P,P'\in\calC$, either $P\subseteq P'$ or $P'\subseteq P$. Suppose that some family of sets $\calP$ has the property that if $\calC\subseteq\calP$ is any nonempty inclusion-chain, then \[ \bigcup_{P\in\calC} P\in \calP, \] that is, the union of all sets in the inclusion-chain $\calC$ is also one of the sets in $\calP$. According to Zorn's lemma (specialized to this setting), if $\calP$ has this property, then there exists a set $P\in\calP$ that is \emph{maximal}, meaning it is not a proper subset of any set in $\calP$. (Both Zorn's lemma and Tychonoff's theorem, which we previously invoked in Section~\ref{subsec:astral-pts-as-fcns}, require that we assume the axiom of choice.) Using Zorn's lemma, we prove that any directional chain $P_0$ from $\xbar$ to $\ybar$ can be enlarged into a maximal chain (and therefore a monotone passage set) while still including the original ``seed'' chain. For example, if we take $P_0=\{\zbar\}$, where $\zbar$ is any point in $\lb{\xbar}{\ybar}$, this will prove that there exists a monotone passage set from $\xbar$ to $\ybar$ that includes $\zbar$. 
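In a finite setting, this use of Zorn's lemma reduces to a greedy loop: repeatedly add any point comparable to every member of the current chain until none remains. The following sketch is our own finite analogue (all names hypothetical), with divisibility on $\{1,\ldots,12\}$ standing in for the directional order:

```python
def extend_to_maximal_chain(points, comparable, seed):
    """Grow `seed` to a maximal chain: keep adding points comparable
    to every current member until no such point remains."""
    chain = set(seed)
    changed = True
    while changed:
        changed = False
        for p in points:
            if p not in chain and all(comparable(p, q) for q in chain):
                chain.add(p)
                changed = True
    return chain

# Finite stand-in for the directional order: divisibility on {1,...,12}.
divides = lambda a, b: a % b == 0 or b % a == 0

chain = extend_to_maximal_chain(range(1, 13), divides, {2})
assert {2} <= chain                                        # contains the seed
assert all(divides(a, b) for a in chain for b in chain)    # pairwise comparable
# Maximality: every point outside is incomparable to some member.
assert all(any(not divides(p, q) for q in chain)
           for p in range(1, 13) if p not in chain)
```

In $\extspace$, of course, the family of chains is uncountable and this loop need not terminate, which is precisely why the argument in Theorem~\ref{thm:exists-mon-pass-with-chain} invokes Zorn's lemma rather than any explicit construction.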
\begin{theorem} \label{thm:exists-mon-pass-with-chain} Let $\xbar,\ybar\in\extspace$, and let $P_0\subseteq\lb{\xbar}{\ybar}$ be a directional chain from $\xbar$ to $\ybar$. Then there exists a monotone passage set from $\xbar$ to $\ybar$ that includes $P_0$. \end{theorem} \proof Let $\calP$ consist of all directional chains $P$ from $\xbar$ to $\ybar$ that include $P_0$. This family of sets is nonempty since, for instance, it includes $P_0$. We claim that this family has the property required by Zorn's lemma, as described above. To see this, let $\calC\subseteq\calP$ be a nonempty inclusion-chain in $\calP$, and let $U = \bigcup_{P\in\calC} P$, which we aim to show is in $\calP$. Since $\calC$ is nonempty, it includes some set $P$ which, being in $\calP$, must include $P_0$. Thus, $P_0\subseteq U$. Also, every set $P\in\calP$ is included in $\lb{\xbar}{\ybar}$, so $U\subseteq\lb{\xbar}{\ybar}$. To show that $U$ is a directional chain, let $\wbar,\zbar\in U$, which we aim to show are comparable (in the directional ordering relative to $\xbar,\ybar$). By $U$'s definition, there must exist sets $P$ and $P'$ in $\calC$ with $\wbar\in P$ and $\zbar\in P'$. Further, since $\calC$ is an inclusion-chain, one of these sets must be contained in the other. Without loss of generality, assume $P'\subseteq P$. Then $\wbar$ and $\zbar$ must both be in $P$, implying that they are comparable since $P$, being in $\calP$, is a directional chain. Thus, $U$ is a directional chain from $\xbar$ to $\ybar$ that includes $P_0$, and therefore, $U\in\calP$ as claimed. As a result, Zorn's lemma now implies that there exists a maximal set $P$ in $\calP$. By construction of $\calP$, this set includes $P_0$ and is a directional chain from $\xbar$ to $\ybar$ which must be maximal, since otherwise there would be a set $P'$ in $\calP$ that properly contains it. 
Since $P$ is a maximal directional chain, it is also a monotone passage set from $\xbar$ to $\ybar$, by Theorem~\ref{thm:max-chain-is-mon-pas}. \qed The next corollary summarizes some consequences of Corollary~\ref{cor:max-chain-iff-mono-pass} and Theorem~\ref{thm:exists-mon-pass-with-chain}, many of which have already been discussed: \begin{corollary} \label{cor:mon-pass-max-chain-conseq} Let $\xbar,\ybar\in\extspace$. \begin{letter-compact} \item \label{cor:mon-pass-max-chain-conseq:a} There exists a monotone passage from $\xbar$ to $\ybar$. \item \label{cor:mon-pass-max-chain-conseq:b} The segment $\lb{\xbar}{\ybar}$ joining $\xbar$ and $\ybar$ is exactly equal to the union of all monotone passage sets from $\xbar$ to $\ybar$. That is, a point $\zbar\in\extspace$ is in $\lb{\xbar}{\ybar}$ if and only if $\zbar$ is included in some monotone passage set from $\xbar$ to $\ybar$. \item \label{cor:mon-pass-max-chain-conseq:c} A set $S\subseteq\extspace$ is convex if and only if for all $\xbar',\ybar'\in S$, every monotone passage set from $\xbar'$ to $\ybar'$ is included in $S$. \item \label{cor:mon-pass-max-chain-conseq:d} Every nonempty convex set in $\extspace$ is connected. \item \label{cor:mon-pass-max-chain-conseq:e} There exists only a single monotone passage set from $\xbar$ to $\ybar$ if and only if the entire segment $\lb{\xbar}{\ybar}$ is itself a monotone passage set from $\xbar$ to $\ybar$. \end{letter-compact} \end{corollary} \proof ~ Part~(\ref{cor:mon-pass-max-chain-conseq:a}): This follows immediately from Theorem~\ref{thm:exists-mon-pass-with-chain} applied with $P_0=\emptyset$. Part~(\ref{cor:mon-pass-max-chain-conseq:b}): If $\zbar$ is included in a monotone passage set $P$ from $\xbar$ to $\ybar$, then $P$ is a (maximal) {directional chain from $\xbar$ to $\ybar$}, by Theorem~\ref{thm:mon-pas-is-max-chain}, so, by definition, $\zbar\in P\subseteq \lb{\xbar}{\ybar}$. 
Conversely, if $\zbar\in\lb{\xbar}{\ybar}$, then by Theorem~\ref{thm:exists-mon-pass-with-chain} applied to $P_0=\{\zbar\}$, there must exist a monotone passage set from $\xbar$ to $\ybar$ that includes $\zbar$. Part~(\ref{cor:mon-pass-max-chain-conseq:c}): This follows immediately from part~(\ref{cor:mon-pass-max-chain-conseq:b}), and because, by definition of convexity, $S$ is convex if and only if $\lb{\xbar'}{\ybar'}\subseteq S$ for all $\xbar',\ybar'\in S$. Part~(\ref{cor:mon-pass-max-chain-conseq:d}): Suppose to the contrary that some set $S\subseteq\extspace$ is nonempty, convex, but not connected. Then there exist open sets $U$ and $V$ in $\extspace$ such that $S\subseteq U\cup V$, and $U\cap S$ and $V\cap S$ are disjoint and nonempty. Let $\xbar'\in U\cap S$ and $\ybar'\in V\cap S$, and let $P$ be a monotone passage set from $\xbar'$ to $\ybar'$, which exists by part~(\ref{cor:mon-pass-max-chain-conseq:a}), and is included in $S$ by part~(\ref{cor:mon-pass-max-chain-conseq:c}). Then $P\subseteq S\subseteq U\cup V$, and $U\cap P$ and $V\cap P$ are disjoint and nonempty, and so are a separation of $P$, contradicting that $P$ is connected by Proposition~\ref{pr:mono-pass-props}(\ref{pr:mono-pass-props:a}). Part~(\ref{cor:mon-pass-max-chain-conseq:e}): Suppose $\lb{\xbar}{\ybar}$ is itself a monotone passage set from $\xbar$ to $\ybar$, and so is a {directional chain from $\xbar$ to $\ybar$}, by Theorem~\ref{thm:mon-pas-is-max-chain}. Then no proper subset of $\lb{\xbar}{\ybar}$ can be a maximal directional chain, implying there can be no other monotone passage set from $\xbar$ to $\ybar$ (again by Theorem~\ref{thm:mon-pas-is-max-chain}). Conversely, using part~(\ref{cor:mon-pass-max-chain-conseq:b}), if $\lb{\xbar}{\ybar}$ is not a monotone passage set from $\xbar$ to $\ybar$, there must nevertheless exist such a monotone passage set $P$, which must be a proper subset of $\lb{\xbar}{\ybar}$. 
Thus, there must exist a point $\zbar\in \lb{\xbar}{\ybar}\setminus P$, which must be included in some other monotone passage set from $\xbar$ to $\ybar$ that is different from $P$. Therefore, $P$ is not unique. \qed \subsection{Operating on monotone passages} Next, we provide some tools for working with monotone passages. To begin, we show that the image of a monotone passage set under a map that is continuous and appropriately order-preserving is also a monotone passage set; the same holds for monotone paths. \begin{proposition} \label{pr:mono-pass-map} Let $\xbar,\ybar\in\extspace$, and let $P$ be a monotone passage set from $\xbar$ to $\ybar$. Let $\rho:P\rightarrow\extspac{m}$ be continuous and order-preserving with domain relative to $\xbar,\ybar$, and range relative to $\rho(\xbar),\rho(\ybar)$. Then $\rho(P)$ is a monotone passage set from $\rho(\xbar)$ to $\rho(\ybar)$. If, in addition, $P$ is a monotone path set from $\xbar$ to $\ybar$, then $\rho(P)$ is a monotone path set from $\rho(\xbar)$ to $\rho(\ybar)$. \end{proposition} \proof By assumption, there exists a monotone passage $\mphom:L\rightarrow P$ from $\xbar$ to $\ybar$, for some linearly ordered set $L$ with minimum $\lcmin$ and maximum $\lcmax$. Let $P'=\rho(P)$, $\xbar'=\rho(\xbar)$, and $\ybar'=\rho(\ybar)$. Also, let $\mphom':L\rightarrow P'$ be defined by $\mphom'(\lambda)=\rho(\mphom(\lambda))$, for $\lambda\in L$. Then $\mphom'$ is continuous, since it is the composition of continuous functions (with restricted range, which does not affect continuity). It is surjective, since $\mphom$ is surjective. It is order-preserving with range relative to $\xbar',\ybar'$ since for $\lambda,\mu\in L$, if $\lambda\leq \mu$ then $\mphom(\lambda)\leqxy\mphom(\mu)$ relative to $\xbar,\ybar$, implying $\mphom'(\lambda)\leqxy\mphom'(\mu)$ relative to $\xbar',\ybar'$. Finally, $\mphom'(\lcmin)=\xbar'$ and $\mphom'(\lcmax)=\ybar'$. Thus, $\mphom'$ is a monotone passage from $\xbar'$ to $\ybar'$. 
If $P$ is a monotone path set, we can take $L$ to be a closed real interval, yielding that $\mphom'$ is a monotone path from $\xbar'$ to $\ybar'$. \qed Consequently, the image of a monotone passage set under an affine map is also a monotone passage set (likewise for monotone paths). \begin{theorem} \label{thm:mono-pass-affine-map} Let $\xbar,\ybar\in\extspace$, and let $P$ be a monotone passage set from $\xbar$ to $\ybar$. Let $\A\in\Rmn$ and $\bbar\in\extspac{m}$, and let $F:\extspace\rightarrow\extspac{m}$ be defined by $F(\zbar)=\bbar\plusl\A\zbar$. Then $F(P)$ is a monotone passage set from $F(\xbar)$ to $F(\ybar)$. If, in addition, $P$ is a monotone path set from $\xbar$ to $\ybar$, then $F(P)$ is a monotone path set from $F(\xbar)$ to $F(\ybar)$. \end{theorem} \proof Let $\rho:P\rightarrow\extspac{m}$ be the restriction of $F$ to domain $P$. By \Cref{cor:aff-cont}, $F$ is continuous, so $\rho$ is as well. We need only show that $\rho$ is order-preserving with domain relative to $\xbar,\ybar$ and range relative to $F(\xbar),F(\ybar)$. Once established, the theorem then follows by a direct application of Proposition~\ref{pr:mono-pass-map} to $\rho$. Suppose $\zbar,\zbar'\in P$ with $\zbar\leqxy\zbar'$ (relative to $\xbar,\ybar$). We aim to show $F(\zbar)\leqxy F(\zbar')$ (relative to $F(\xbar),F(\ybar)$). For this purpose, let $\uu\in\Rm$ be such that $F(\xbar)\cdot\uu\leq F(\ybar)\cdot\uu$. Our goal then is to show that $F(\zbar)\cdot\uu\leq F(\zbar')\cdot\uu$. For all $\wbar\in\extspace$, \begin{equation} \label{eqn:thm:mono-pass-affine-map:1} F(\wbar)\cdot\uu = \bbar\cdot\uu\plusl (\A\wbar)\cdot\uu = \bbar\cdot\uu\plusl \wbar\cdot(\transA\uu) \end{equation} by Theorem~\ref{thm:mat-mult-def}. In particular, if $\bbar\cdot\uu\in \{-\infty,+\infty\}$, this implies $F(\zbar)\cdot\uu=\bbar\cdot\uu=F(\zbar')\cdot\uu$. Otherwise, $\bbar\cdot\uu\in\R$. 
In this case, \eqref{eqn:thm:mono-pass-affine-map:1} implies, for all $\wbar,\wbar'\in\extspace$, that $F(\wbar)\cdot\uu \leq F(\wbar')\cdot\uu$ if and only if $\wbar\cdot(\transA\uu) \leq \wbar'\cdot(\transA\uu)$. In particular, since $F(\xbar)\cdot\uu\leq F(\ybar)\cdot\uu$, it now follows that $\xbar\cdot(\transA\uu) \leq \ybar\cdot(\transA\uu)$. Therefore, $\zbar\cdot(\transA\uu) \leq \zbar'\cdot(\transA\uu)$, since $\zbar\leqxy\zbar'$ (relative to $\xbar,\ybar$), so $F(\zbar)\cdot\uu \leq F(\zbar')\cdot\uu$, completing the proof that $\rho$ is order-preserving. \qed We next consider subpassages, which are subsections of a monotone passage set. Formally, let $P$ be a monotone passage set from $\xbar$ to $\ybar$, and let $\xbar',\ybar'\in P$ be such that $\xbar\leqxy\xbar'\leqxy\ybar'\leqxy\ybar$ relative to $\xbar,\ybar$. Then we define the \emph{subpassage of $P$ from $\xbar'$ to $\ybar'$} to be all points in $P$ between $\xbar'$ and $\ybar'$ in the directional order relative to $\xbar,\ybar$, that is, the interval \[ \Braces{ \zbar\in P : \xbar' \leqxy \zbar \leqxy \ybar' \mbox{~relative to~} \xbar,\ybar }. \] Note that the endpoints of the interval are $\xbar'$ and $\ybar'$, but the ordering is relative to $\xbar,\ybar$. As we show next, such a subpassage is itself a monotone passage set from $\xbar'$ to $\ybar'$, meaning a monotone passage set can be broken apart into ``smaller'' monotone passage sets. \begin{theorem} \label{thm:subpassages} Let $\xbar,\ybar\in\extspace$, and let $P\subseteq\extspace$ be a monotone passage set from $\xbar$ to $\ybar$. Let $\xbar',\ybar'\in P$ be such that $\xbar\leqxy\xbar'\leqxy\ybar'\leqxy\ybar$ relative to $\xbar,\ybar$. Let $P'$ be the subpassage of $P$ from $\xbar'$ to $\ybar'$. Then $P'$ is a monotone passage set from $\xbar'$ to $\ybar'$. If, in addition, $P$ is a monotone path set from $\xbar$ to $\ybar$, then $P'$ is a monotone path set from $\xbar'$ to $\ybar'$.
\end{theorem} \proof For clarity, throughout this proof, we write $\leqxy$ for directional order relative to $\xbar,\ybar$, and $\leqxyp$ for directional order relative to $\xbar',\ybar'$. We prove the theorem using Proposition~\ref{pr:mono-pass-map}. To do so, let $F:P\rightarrow \extspace$ be defined by \[ F(\zbar) = \begin{cases} \xbar' & \mbox{if $\zbar\leqxy\xbar'$,} \\ \zbar & \mbox{if $\xbar'\leqxy\zbar\leqxy\ybar'$,} \\ \ybar' & \mbox{if $\ybar'\leqxy\zbar$.} \end{cases} \] Thus, $F$ ``clamps'' $P$ between $\xbar'$ and $\ybar'$. Note that $F(P)=P'$, $F(\xbar)=\xbar'$ and $F(\ybar)=\ybar'$. This function has three pieces: Two of these pieces are constant-valued functions, which are therefore continuous. The third piece is the identity function over a subspace of $P$ (in the same topology for domain and range), and so is also continuous. Moreover, these pieces are defined over intervals of $P$ that are each closed by Corollary~\ref{cor:subspace-equals-order-topo}, being the complement of open sets in the order topology under $\leqxy$. Therefore, the piecewise function $F$ is continuous by a standard pasting lemma \citep[Theorem~18.3]{munkres}. We next show that $F$ is order-preserving with domain relative to $\xbar,\ybar$ and with range relative to $\xbar',\ybar'$. Let $\wbar,\zbar\in P$, and let $\wbar'=F(\wbar)$ and $\zbar'=F(\zbar)$. We suppose $\wbar\leqxy\zbar$, which we aim to show implies $\wbar'\leqxyp\zbar'$ (that is, in the directional order relative to $\xbar',\ybar'$). We note first that $\wbar'\leqxy\zbar'$ (where, importantly, the order here is relative to $\xbar,\ybar$). This is because if $\wbar\leqxy\xbar'$ then $\wbar'=\xbar'\leqxy\zbar'$; if $\ybar'\leqxy\zbar$ then $\wbar'\leqxy\ybar'=\zbar'$; otherwise, $\xbar'\leqxy\wbar\leqxy\zbar\leqxy\ybar'$ so $\wbar'=\wbar\leqxy\zbar=\zbar'$. Thus, $\xbar'\leqxy\wbar'\leqxy\zbar'\leqxy\ybar'$.
To show $\wbar'\leqxyp\zbar'$, by definition, we need to prove that for all $\uu\in\Rn$, if $\xbar'\cdot\uu\leq\ybar'\cdot\uu$ then $\wbar'\cdot\uu\leq\zbar'\cdot\uu$. We prove this in the contrapositive. Let $\uu\in\Rn$ and suppose $\wbar'\cdot\uu>\zbar'\cdot\uu$. Combined with $\wbar'\leqxy\zbar'$, this implies $\xbar\cdot\uu>\ybar\cdot\uu$. Therefore, \[ \xbar'\cdot\uu \geq \wbar'\cdot\uu > \zbar'\cdot\uu \geq \ybar'\cdot\uu, \] where the first and last inequalities are because $\xbar'\leqxy\wbar'$ and $\zbar'\leqxy\ybar'$. Thus, $\xbar'\cdot\uu>\ybar'\cdot\uu$, as needed, so $\wbar'\leqxyp\zbar'$ and $F$ is order-preserving. Applying Proposition~\ref{pr:mono-pass-map} now shows that $F(P)=P'$ is a monotone passage set (or monotone path set, if $P$ is a monotone path set) from $F(\xbar)=\xbar'$ to $F(\ybar)=\ybar'$. \qed Using Theorem~\ref{thm:subpassages}, we can show that the part of any monotone passage that passes through $\Rn$ must entirely lie along a line. Combined with Theorem~\ref{thm:mono-pass-affine-map}, this implies that the same holds for the image of any monotone passage set under any affine map; that is, the intersection of that image with $\Rn$ must also lie along a line. This is one of the ways in which monotone passage sets retain certain linear properties, even when infinite astral points are involved. \begin{proposition} \label{pr:mon-pass-collinear-in-rn} Let $\xbar,\ybar\in\extspace$, and let $P\subseteq\extspace$ be a monotone passage set from $\xbar$ to $\ybar$ (with $n\geq 1$). Then there exists a line in $\Rn$ that includes $P\cap\Rn$. \end{proposition} \proof Let $P'=P\cap\Rn$. As a first step, let $\xx,\yy,\zz$ be any three points in $P'$; we claim they must be collinear. Because these points are included in $P$, which is a monotone passage set, they must be comparable to one another in the directional order $\leqxy$ (which is relative to $\xbar,\ybar$ throughout this proof), by Proposition~\ref{pr:mono-pass-lin-cont}. 
With possible renaming of variables, we thus can assume $\xx\leqxy\zz\leqxy\yy$. By Theorem~\ref{thm:subpassages}, it follows that $\zz$ belongs to a monotone passage set from $\xx$ to $\yy$, which in turn is included in $\lb{\xx}{\yy}$ (by Proposition~\ref{pr:mono-pass-lin-cont}). Thus, $\zz$ is on the ordinary line segment joining $\xx$ and $\yy$ (Proposition~\ref{pr:e1}\ref{pr:e1:a}). Therefore, the three points are collinear. We now prove the proposition. If $P'$ is empty or is a singleton, then the proposition is trivially true. Otherwise, let $\xx$ and $\yy$ be any two distinct points in $P'$. Then every other point in $P'$ is collinear with $\xx$ and $\yy$, as just argued, and therefore is included in the line determined by the two points. \qed In the examples we have seen so far, there has existed just one monotone passage set from one point to another. Indeed, Theorem~\ref{thm:y-xbar-is-path} will later show that this is always the case when at least one of the points is finite. Nevertheless, using Proposition~\ref{pr:mon-pass-collinear-in-rn}, we now can see that there sometimes must exist multiple distinct monotone passage sets connecting two infinite points. Indeed, whenever the segment $\lb{\xbar}{\ybar}$ joining points $\xbar$ and $\ybar$ includes three points in $\Rn$ that are not collinear, Proposition~\ref{pr:mon-pass-collinear-in-rn} implies that those points cannot all belong to the same monotone passage set from $\xbar$ to $\ybar$, and therefore there must exist more than one (by Corollary~\ref{cor:mon-pass-max-chain-conseq}\ref{cor:mon-pass-max-chain-conseq:b}). For instance, we saw in Section~\ref{sec:def-convexity} that $\lb{-\Iden \omm}{\Iden \omm}=\extspace\supseteq\Rn$, and therefore, by this argument, there must exist more than one monotone passage set from $-\Iden \omm$ to $\Iden \omm$ (assuming $n\geq 2$). Theorem~\ref{thm:subpassages} showed how monotone passages can be broken apart. 
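For instance (a sketch, with $\qq\in\Rn\setminus\{\zero\}$ and parameters $0\leq\alpha\leq\beta\leq 1$ chosen for illustration): the segment $\lb{\zero}{\qq}$ is the ordinary line segment joining $\zero$ and $\qq$ (Proposition~\ref{pr:e1}\ref{pr:e1:a}), whose points are ordered, relative to $\zero,\qq$, by the parameter $\lambda$. Its subpassage from $\alpha\qq$ to $\beta\qq$ is then the set

```latex
% Subpassage of the segment joining 0 and q, clamped between the
% points alpha*q and beta*q, where 0 <= alpha <= beta <= 1:
\[
  \{\, \lambda \qq \;:\; \alpha \leq \lambda \leq \beta \,\},
\]
```

which, consistent with Theorem~\ref{thm:subpassages}, is itself a monotone path set from $\alpha\qq$ to $\beta\qq$; equivalently, it is the image of $\lb{\zero}{(\beta-\alpha)\qq}$ under the map $\zbar\mapsto\alpha\qq\plusl\zbar$, illustrating Theorem~\ref{thm:mono-pass-affine-map} as well.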
Next, we show how they can be naturally pieced together: \begin{theorem} \label{thm:combine-mono-pass} Let $\xbar,\ybar,\xbarj{0},\ldots,\xbarj{s}\in\extspace$ with \begin{equation} \label{eqn:thm:combine-mono-pass:1} \xbar=\xbarj{0}\leqxy\xbarj{1}\leqxy\dotsb\leqxy\xbarj{s}=\ybar \end{equation} relative to $\xbar,\ybar$. For $j=1,\ldots,s$, let $\Pj{j}\subseteq\extspace$ be a monotone passage set from $\xbarj{j-1}$ to $\xbarj{j}$. Let $P=\bigcup_{j=1}^s \Pj{j}$. Then $P$ is a monotone passage set from $\xbar$ to $\ybar$. If, in addition, each $\Pj{j}$ is a monotone path set from $\xbarj{j-1}$ to $\xbarj{j}$, then $P$ is also a monotone path set. \end{theorem} \proof For each $j=1,\ldots,s$, there exists a monotone passage $\mphomj{j}:\Lj{j}\rightarrow\Pj{j}$ from $\xbarj{j-1}$ to $\xbarj{j}$, for some linearly ordered set $\Lj{j}$ with order denoted $\leqj$, and with minimum $\lcminj{j}$ and maximum $\lcmaxj{j}$. Without loss of generality, we assume $\Lj{j}$ is not a singleton. (Otherwise, we could replace $\Lj{j}$ with $[0,1]$, say, and $\mphomj{j}$ with a constant function on $[0,1]$ that maps all points to the single point in $\Pj{j}$.) To construct a monotone passage for $P$, we will paste together the sets $\Lj{j}$ in a natural way. We use distinguished elements $\nuj{0},\ldots,\nuj{s}$ to denote the boundaries between where one of the linearly ordered sets $\Lj{j}$ ends and the next begins. More specifically, with possible renaming of the elements of the sets $\Lj{j}$, we assume henceforth that the following hold: First, $\lcminj{j}=\nuj{j-1}$ and $\lcmaxj{j}=\nuj{j}$ for $j=1,\ldots,s$. Thus, the set $\Lj{j}$ has minimum $\nuj{j-1}$ and maximum $\nuj{j}$, implying that the sets intersect at these points (so $\nuj{j-1}$ is also the maximum of $\Lj{j-1}$, and $\nuj{j}$ is also the minimum of $\Lj{j+1}$). Other than these points of intersection, these sets are entirely disjoint. 
Thus, for $1\leq j<k\leq s$, \[ \Lj{j}\cap\Lj{k} = \begin{cases} \{\nuj{j}\} & \mbox{if $k=j+1$,} \\ \emptyset & \mbox{otherwise.} \end{cases} \] (Concretely, this renaming can be achieved, for instance, by replacing the minimum and maximum in each $\Lj{j}$ with the ``new'' elements $\nuj{j-1}$ and $\nuj{j}$, as above, and replacing every other element $\lambda\in\Lj{j}$ with a pair $\rpair{j}{\lambda}$.) Let $L=\bigcup_{j=1}^s \Lj{j}$. We define a natural order on $L$. Let $\lambda,\mu\in L$, and suppose $j$ and $k$ are the least indices in $\{1,\ldots,s\}$ for which $\lambda\in\Lj{j}$ and $\mu\in\Lj{k}$. Then we define $\lambda\leq\mu$ if $j<k$ or if $j=k$ and $\lambda\leqj\mu$. It is straightforward to check that this is a linear order. Its minimum and maximum are $\lcmin=\nuj{0}$ and $\lcmax=\nuj{s}$, respectively. We claim $L$ is also a linear continuum. To see that it is dense, let $\lambda,\mu\in L$ with $\lambda<\mu$, and let $j$ and $k$ be as above. If $j=k$ then we must have $\lambda\ltj\mu$, so there exists $\eta\in\Lj{j}$ with $\lambda\ltj\eta\ltj\mu$ (since $\Lj{j}$ is dense), implying $\lambda<\eta<\mu$. Otherwise, if $j<k$ then $\lambda\leqj\lcmaxj{j}=\nuj{j}$, so $\lambda\leq\nuj{j}$, and $\nuj{j}<\mu$ (otherwise, we would have $k\leq j$). If $k=j+1$, then $\nuj{j},\mu\in\Lj{j+1}$, so there exists $\eta\in\Lj{j+1}$ with $\lambda\leq\nuj{j}<\eta<\mu$. And if $k>j+1$, then $\nuj{j+1}\leq\mu$ so we can choose $\eta\in\Lj{j+1}$ with $\lambda\leq\nuj{j}<\eta<\nuj{j+1}\leq\mu$. To see that $L$ is complete, let $D\subseteq L$ be nonempty. Let $k\in\{1,\ldots,s\}$ be the largest index for which $D'=D\cap \Lj{k}$ is nonempty. Since $\Lj{k}$ is complete, $D'$ has a least upper bound $\mu$ in $\Lj{k}$. Then $\mu$ is an upper bound on $D$ in the $\leq$ order since if $\lambda\in D'$, then $\lambda\leq\mu$, and if $\lambda\in D\setminus D'$ then $\lambda\in \Lj{j}$ for some $j<k$, again implying $\lambda\leq\mu$. 
Further, a smaller upper bound on $D$ in the order on $L$ would also be a smaller upper bound on $D'$, contradicting that $\mu$ is the least upper bound on $D'$ in the $\leqk$ order on $\Lj{k}$. Thus, $\mu$ is a least upper bound on $D$ (in the $\leq$ order). The functions $\mphomj{j}$ can now be pasted together straightforwardly into a piecewise, composite function $\mphom:L\rightarrow P$ by letting $\mphom(\lambda)=\mphomj{j}(\lambda)$ if $\lambda\in\Lj{j}$, for $j=1,\ldots,s$. Note that at the boundary points $\nuj{j}$, $\mphom$ has been ``defined twice'' since $\nuj{j}$ is both in $\Lj{j}$ and $\Lj{j+1}$. Nevertheless, the two definitions are consistent since in one definition, $\mphom(\nuj{j})=\mphomj{j}(\nuj{j})=\mphomj{j}(\lcmaxj{j})=\xbarj{j}$, and in the other, $\mphom(\nuj{j})=\mphomj{j+1}(\nuj{j})=\mphomj{j+1}(\lcminj{j+1})=\xbarj{j}$. This also shows that $\mphom(\lcmin)=\mphom(\nuj{0})=\xbarj{0}=\xbar$ and $\mphom(\lcmax)=\mphom(\nuj{s})=\xbarj{s}=\ybar$. The function $\mphom$ is surjective, since each $\mphomj{j}$ is surjective (and since $L$ and $P$ are the unions of their respective domains and ranges). Also, each separate piece defining $\mphom$ is continuous, since each $\mphomj{j}$ is continuous, and since, by construction, the order topology on $\Lj{j}$ is identical to the subspace topology it inherits as a subspace of $L$. Further, each piece is defined over a closed interval of $L$, namely, $\{ \lambda\in L : \nuj{j-1} \leq \lambda \leq \nuj{j} \}$, which therefore is closed in the order topology on $L$. Thus, the piecewise function $\mphom$ is continuous by application of a standard pasting lemma \citep[Theorem~18.3]{munkres}. Finally, we claim that $\mphom$ is order-preserving. To show this, let $\uu\in\Rn$ with $\xbar\cdot\uu\leq\ybar\cdot\uu$. Let $\mphomu$ be as in \eqref{eqn:mphomu-def}, and let $\mphomuj{j}$ be defined analogously in terms of $\mphomj{j}$. 
We aim to show $\mphomu(\lambda)$ is nondecreasing as a function of $\lambda\in L$. Because $\xbar\cdot\uu\leq\ybar\cdot\uu$, \eqref{eqn:thm:combine-mono-pass:1} implies $\xbarj{j-1}\cdot\uu\leq\xbarj{j}\cdot\uu$ for $j=1,\ldots,s$. Since $\mphomj{j}$ is a monotone passage from $\xbarj{j-1}$ to $\xbarj{j}$, this implies, by Proposition~\ref{pr:mono-pass-props}(\ref{pr:mono-pass-props:c}), that $\mphomuj{j}(\lambda)$ is nondecreasing as a function of $\lambda\in \Lj{j}$. Therefore, $\mphomu$ is nondecreasing on this same interval $\Lj{j}$, that is, for all $\lambda\in L$ with $\nuj{j-1}\leq\lambda\leq\nuj{j}$. Because these overlapping intervals cover all of $L$, it follows that $\mphomu$ is nondecreasing over its entire domain. We conclude that $\mphom$ is a monotone passage from $\xbar$ to $\ybar$, completing the first part of the theorem. If each $\Pj{j}$ is a monotone path set, then each $\Lj{j}$ can be chosen to be a closed real interval $[a_j,b_j]$ for some $a_j,b_j\in\R$ with $a_j\leq b_j$. Without loss of generality, we can choose $\Lj{j}=[j-1,j]$ (by, if necessary, replacing $\mphomj{j}$ with $\lambda\mapsto\mphomj{j}(a_j + (\lambda-j+1)(b_j-a_j))$). Then, the assumed requirements of the construction above are satisfied with $\nuj{j}=j$, for $j=0,\ldots,s$, and $L=[0,s]$. The order defined above on $L$ is then identical to the usual order on intervals of $\R$. Thus, the resulting function $\mphom:L\rightarrow P$ is in fact a monotone path from $\xbar$ to $\ybar$. \qed \subsection{Monotone paths} \label{sec:mono-paths} In this section, we focus specifically on monotone paths. We show first that the equivalence between monotone passage sets and strict monotone passage sets given in Corollary~\ref{cor:max-chain-iff-mono-pass} carries over more specifically to monotone path sets: \begin{theorem} Let $\xbar,\ybar\in\extspace$, and let $P\subseteq\extspace$.
Then $P$ is a monotone path set from $\xbar$ to $\ybar$ if and only if $P$ is a strict monotone path set from $\xbar$ to $\ybar$. \end{theorem} \proof The ``if'' part is immediate. For the converse, suppose $P$ is a monotone path set from $\xbar$ to $\ybar$. Then there exists a monotone path $\mphom:L\rightarrow P$ where $L=[a,b]$ for some $a,b\in\R$ with $a\leq b$. Without loss of generality, we assume $L=[0,1]$ since otherwise we can replace $\mphom$ with $\lambda\mapsto \mphom((1-\lambda)a + \lambda b)$, for $\lambda\in [0,1]$, which also is a monotone path. If $P$ is a single point, then that single point must be $\xbar=\ybar$, so $\mphom':\{0\}\rightarrow P$, with $\mphom'(0)=\xbar=\ybar$, is trivially a strict monotone path from $\xbar$ to $\ybar$. Therefore, we assume henceforth that $P$ is not a singleton. Throughout this proof, we understand $P$ to be ordered by directional order relative to $\xbar,\ybar$. In this order, by Proposition~\ref{pr:mono-pass-lin-cont}, $P$ is a linear continuum with minimum $\xbar$ and maximum $\ybar$. Since $P$ is dense with at least two elements, it is infinite in cardinality. We show next that $P$ has a countable dense subset, that is, a countable subset that is dense in $P$. This, together with $P$'s other properties, will allow us to use general results to show that $P$ is itself homeomorphic with $[0,1]$. Specifically, let \[ D = \{ \mphom(\lambda) : \lambda\in \rats\cap [0,1] \}, \] which is clearly countable, and which we argue now is dense in $P$. Suppose $\wbar,\zbar\in P$ with $\wbar\ltxy\zbar$. Then because $P$ is dense, there must exist $\wbar',\zbar'\in P$ with $\wbar\ltxy\wbar'\ltxy\zbar'\ltxy\zbar$ (since there must exist $\wbar'$ between $\wbar$ and $\zbar$, and then also $\zbar'$ between $\wbar'$ and $\zbar$). Let $\lambda\in\mphominv(\wbar')$ and $\mu\in\mphominv(\zbar')$ (which cannot be empty since $\mphom$ is surjective). Then $\lambda,\mu\in [0,1]$, and $\lambda<\mu$ since $\mphom$ is order preserving. 
Thus, there exists a rational number $\nu\in\rats$ with $\lambda<\nu<\mu$, implying $\mphom(\nu)\in D$, and also that \[ \wbar \ltxy \wbar' = \mphom(\lambda) \leqxy \mphom(\nu) \leqxy \mphom(\mu) = \zbar' \ltxy \zbar, \] since $\mphom$ is order preserving, proving the claim. Let $M=P\setminus \{\xbar,\ybar\}$, that is, $P$ with its minimum and maximum elements removed. Then $M$ is linearly ordered, since $P$ is. Moreover, $M\cap D$ is a countable dense subset in $M$ since if $\zbar,\zbar'\in M$ with $\zbar\ltxy\zbar'$ then there exists $\wbar\in D$ with $\xbar\ltxy\zbar\ltxy\wbar\ltxy\zbar'\ltxy\ybar$, so $\wbar\in D\cap M$. Also, $M$ is complete since if $Q\subseteq M$ is nonempty, so that it includes some element $\zbar\in M$, and upper bounded by some $\zbar'\in M$, then $Q$ must have a least upper bound $\wbar$ in $P$ with $\xbar\ltxy\zbar\leqxy\wbar\leqxy\zbar'\ltxy\ybar$ so that actually $\wbar\in M$. Finally, $M$ has no minimum. This is because, for all $\zbar\in M$, there must exist $\wbar\in P$ with $\xbar\ltxy\wbar\ltxy\zbar$ (since $P$ is dense); that is, $\wbar\in M$ and strictly less than $\zbar$. Likewise, $M$ has no maximum. Summarizing, $M$ is a complete linear order with a countable dense subset, and with no minimum or maximum. Together, these properties imply that $M$ is order-isomorphic to the real numbers. That is, there exists a (bijective) order isomorphism $\rho:\R\rightarrow M$ \citep[Theorem~4.5.7]{hrbacek_jech99}. 
To complete the construction, we scale $\rho$'s domain to $[0,1]$ while also adding back the minimum and maximum elements in a natural way, yielding the function $\mphom':[0,1]\rightarrow P$ given by \[ \mphom'(\lambda) = \begin{cases} \xbar & \mbox{if $\lambda=0$,} \\ \rho(\sigma(\lambda)) & \mbox{if $\lambda\in (0,1)$,} \\ \ybar & \mbox{if $\lambda=1$,} \end{cases} \] for $\lambda\in [0,1]$, where $\sigma$ is any continuous, strictly increasing function mapping $(0,1)$ bijectively to $\R$ (such as $\lambda\mapsto \ln(\lambda/(1-\lambda))$). Since $\rho$ is an order isomorphism, $\mphom'$ is as well. As a result, $\mphom'$ is a homeomorphism by Proposition~\ref{pr:mphom-inv-is-cont} (and since $P$ is closed in $\extspace$, by Proposition~\ref{pr:sbar-all-comp}, and therefore compact). Thus, $\mphom'$ is a strict monotone path from $\xbar$ to $\ybar$. \qed We previously discussed that the segment joining two finite points $\xx,\yy\in\Rn$ is a single monotone path set. In fact, this is true also for the segment $\lb{\yy}{\xbar}$ joining any finite point $\yy\in\Rn$ and any (possibly infinite) astral point $\xbar\in\extspace$; that segment consists of a single monotone path set. When $\yy=\zero$ and $\xbar=\limrays{\vv_1,\ldots,\vv_k}\plusl\qq$, for some $\vv_1,\ldots,\vv_k,\qq\in\Rn$, that monotone path can nearly be ``read off'' from the form of $\lb{\zero}{\xbar}$ given in Theorem~\ref{thm:lb-with-zero}. Informally, such a path begins at the origin, then passes along the ray $\{\lambda \vv_1 : \lambda\geq 0\}$ to the astron $\limray{\vv_1}$, then continues on to $\{\limray \vv_1 \plusl \lambda \vv_2 : \lambda\geq 0\}$, and so on, in this way passing sequentially along each ``piece'' of the segment, $\{ \limrays{\vv_1,\ldots,\vv_{j-1}}\plusl\lambda\vv_j : \lambda\geq 0 \}$, for $j=1,\ldots,k$, finally passing along the final piece, $\{\limrays{\vv_1,\ldots,\vv_k}\plusl \lambda\qq : \lambda\in [0,1]\}$.
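In the smallest nontrivial case, this informal path can be written out explicitly (a sketch, taking $n=2$, $k=1$, $\vv_1=\ee_1$, and $\qq=\ee_2$, so that $\xbar=\limray{\ee_1}\plusl\ee_2$, and using the reparametrization $\lambda\mapsto\lambda/(1-\lambda)$ to cover the ray):

```latex
% An explicit monotone path from the origin to the rank-one point
% \limray{\ee_1} \plusl \ee_2, pieced together from the ray along \ee_1
% and the final finite piece along \ee_2:
\[
  \mphom(\lambda) =
  \begin{cases}
    {\displaystyle \frac{\lambda}{1-\lambda}}\, \ee_1
      & \mbox{if $\lambda\in [0,1)$,} \\[1em]
    \limray{\ee_1} \plusl (\lambda - 1)\, \ee_2
      & \mbox{if $\lambda\in [1,2]$.}
  \end{cases}
\]
```

As $\lambda\rightarrow 1^-$, the first piece converges to $\limray{\ee_1}$, matching the second piece at $\lambda=1$ (since $\limray{\ee_1}\plusl\zero=\limray{\ee_1}$), so $\mphom:[0,2]\rightarrow\lb{\zero}{\xbar}$ traverses the two pieces of the segment in order.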
In the next theorem, we give a formal proof using the tools we have been developing. \begin{theorem} \label{thm:y-xbar-is-path} Let $\yy\in\Rn$ and $\xbar\in\extspace$. Then $\lb{\yy}{\xbar}$ is a monotone path set from $\yy$ to $\xbar$, and moreover is the only monotone passage set from $\yy$ to $\xbar$. \end{theorem} \proof For now, we assume $\yy=\zero$, returning later to the more general case. We prove that $\lb{\zero}{\xbar}$ is a monotone path set by induction on the astral rank of $\xbar$. Specifically, we prove by induction on $k=0,\ldots,n$ that for all $\xbar\in\extspace$, if $\xbar$ has astral rank $k$, then $\lb{\zero}{\xbar}$ is a monotone path set from $\zero$ to $\xbar$. In the base case that $k=0$, we have $\xbar=\qq\in\Rn$, and we can choose $\mphom:[0,1]\rightarrow \lb{\zero}{\qq}$ with $\mphom(\lambda)=\lambda \qq$ for $\lambda\in [0,1]$. This function is continuous and surjective (using Proposition~\ref{pr:e1}\ref{pr:e1:a}) with $\mphom(0)=\zero$ and $\mphom(1)=\qq$. It is also order-preserving with range relative to $\zero,\qq$, since if $\qq\cdot\uu\geq \zero\cdot\uu=0$ then $\mphomu(\lambda)=\mphom(\lambda)\cdot\uu=\lambda \qq\cdot\uu$ is nondecreasing in $\lambda$. Thus, $\mphom$ is a monotone path from $\zero$ to $\qq$. For the inductive step, suppose $\xbar\in\extspace$ has astral rank $k>0$, and that the claim holds for $k-1$. Let $\vv\in\Rn$ be $\xbar$'s dominant direction. By Proposition~\ref{pr:h:6}, $\xbar=\limray{\vv}\plusl\xbarperp$, where $\xbarperp$ is $\xbar$'s projection perpendicular to $\vv$, which is of astral rank $k-1$. To prove that $\lb{\zero}{\xbar}$ is a monotone path set, we will cobble together a few other monotone paths. First, let \[ \Pj{1} = \lb{\zero}{\limray{\vv}} = \{ \lambda \vv : \lambda \geq 0 \} \;\cup\; \{ \limray{\vv} \}, \] where the second equality follows from Theorem~\ref{thm:lb-with-zero}. We claim $\Pj{1}$ is a monotone path set from $\zero$ to $\limray{\vv}$. 
To show this, we define $\mphomj{1}:[0,1]\rightarrow \Pj{1}$ as \[ \mphomj{1}(\lambda) = \begin{cases} {\displaystyle \frac{\lambda}{1-\lambda}} \, \vv & \mbox{if $\lambda\in [0,1)$,} \\[1em] \limray{\vv} & \mbox{if $\lambda=1$,} \end{cases} \] which we argue is a monotone path. This function is clearly surjective, with $\mphomj{1}(0)=\zero$ and $\mphomj{1}(1)=\limray{\vv}$. It is also continuous since $\limray{\vv}=\lim_{\tau\rightarrow+\infty} \tau \vv$. To show it is order-preserving with range relative to $\zero,\limray{\vv}$, let $\uu\in\Rn$ with $0=\zero\cdot\uu\leq\limray{\vv}\cdot\uu$, which implies $\vv\cdot\uu\geq 0$. We then need to show the function $\mphomuj{1}(\lambda)=\mphomj{1}(\lambda)\cdot\uu$ is nondecreasing in $\lambda\in [0,1]$. If $\vv\cdot\uu=0$, then $\mphomuj{1}\equiv 0$, which is nondecreasing. Otherwise, if $\vv\cdot\uu>0$, then $\mphomuj{1}$ increases monotonically from $0$ to $+\infty$. Thus, $\mphomj{1}$ is a monotone path from $\zero$ to $\limray{\vv}$, completing the claim. Next, let \[ \Pj{2} = {\limray{\vv} \plusl \lb{\zero}{\xbarperp}}. \] By inductive hypothesis, $\lb{\zero}{\xbarperp}$ is a monotone path set from $\zero$ to $\xbarperp$. Therefore, by Theorem~\ref{thm:mono-pass-affine-map}, applied to the affine map $\zbar\mapsto\limray{\vv}\plusl\zbar$, $\Pj{2}$ is a monotone path set from $\limray{\vv}$ to $\limray{\vv}\plusl\xbarperp=\xbar$. Let $P = \Pj{1} \cup \Pj{2}$. Then \begin{equation} \label{eqn:thm:y-xbar-is-path:3} P = \; \{ \lambda \vv : \lambda \geq 0 \} \;\cup\; \brackets{\limray{\vv} \plusl \lb{\zero}{\xbarperp}} = \lb{\zero}{\xbar} \end{equation} where the first equality is from definitions (and since $\limray{\vv}\in \Pj{2}$), and the second equality is by \Cref{cor:seg:zero}. Equation~\eqref{eqn:thm:y-xbar-is-path:3} implies $\limray{\vv}\in\lb{\zero}{\xbar}$, so $\zero\leqxy\limray{\vv}\leqxy\xbar$ relative to $\zero,\xbar$, by Proposition~\ref{pr:lb-def-by-part-order}.
We have argued that $\Pj{1}$ is a monotone path set from $\zero$ to $\limray{\vv}$ and $\Pj{2}$ is a monotone path set from $\limray{\vv}$ to $\xbar$. Therefore, we can apply Theorem~\ref{thm:combine-mono-pass}, yielding that $P=\Pj{1}\cup\Pj{2}$ is a monotone path set from $\zero$ to $\xbar$. This completes the induction and the proof when $\yy=\zero$. For general $\yy$, not necessarily $\zero$, we can, as in the proof of Corollary~\ref{cor:lb-with-finite}, write $\lb{\yy}{\xbar}$ as in \eqref{eqn:cor:lb-with-finite:1}. By the foregoing, $\lb{\zero}{\xbar \plusl (-\yy)}$ is a monotone path set from $\zero$ to $\xbar \plusl (-\yy)$. Therefore, $\lb{\yy}{\xbar}$ is a monotone path set from $\yy$ to $\xbar$ by application of Theorem~\ref{thm:mono-pass-affine-map} (using the affine map $\zbar\mapsto\yy\plusl\zbar$). Finally, since the entire segment $\lb{\yy}{\xbar}$ is a monotone path set from $\yy$ to $\xbar$, there can be no other monotone passage set from $\yy$ to $\xbar$, by Corollary~\ref{cor:mon-pass-max-chain-conseq}(\ref{cor:mon-pass-max-chain-conseq:e}). \qed As a consequence, we can show that if a monotone passage set $P$ from $\xbar$ to $\ybar$ includes some finite point $\qq\in\Rn$, then actually $P$ must be a monotone path set. Furthermore, $P$ is uniquely determined, meaning there can exist no other monotone passage set from $\xbar$ to $\ybar$ that includes $\qq$. Thus, the only possible monotone passage sets that are not monotone path sets are those that are entirely disjoint from $\Rn$. \begin{corollary} \label{cor:mon-pass-thru-rn} Let $\xbar,\ybar\in\extspace$, and let $P\subseteq\extspace$ be a monotone passage set from $\xbar$ to $\ybar$. Suppose $\qq\in P\cap\Rn$. Then $P$ is a monotone path set from $\xbar$ to $\ybar$. Furthermore, \[ P = \lb{\xbar}{\qq} \cup \lb{\qq}{\ybar}.
\] \end{corollary} \proof Let $\Pj{1}$ be the subpassage of $P$ from $\xbar$ to $\qq$, and let $\Pj{2}$ be the subpassage of $P$ from $\qq$ to $\ybar$, implying $P=\Pj{1}\cup\Pj{2}$. These are both monotone passage sets, by Theorem~\ref{thm:subpassages}. Furthermore, because $\qq\in\Rn$, these monotone passage sets are actually monotone path sets with $\Pj{1} = \lb{\xbar}{\qq}$ and $\Pj{2} = \lb{\qq}{\ybar}$, by Theorem~\ref{thm:y-xbar-is-path}. Since $\qq\in P\subseteq\lb{\xbar}{\ybar}$ (by Proposition~\ref{pr:mono-pass-lin-cont}), we have $\xbar\leqxy\qq\leqxy\ybar$ relative to $\xbar,\ybar$, by Proposition~\ref{pr:lb-def-by-part-order}. Thus, by Theorem~\ref{thm:combine-mono-pass}, $P$ is a monotone path set from $\xbar$ to $\ybar$. \qed For example, let $\qq\in\Rn$, and let $P$ be any monotone passage set from $-\Iden\omm$ to $\Iden\omm$ that includes $\qq$ (which must exist since, as earlier argued, $\lb{-\Iden\omm}{\Iden\omm}=\extspace$). Then Corollary~\ref{cor:mon-pass-thru-rn}, combined with Corollary~\ref{cor:lb-with-finite}, implies that the intersection of $P$ with $\Rn$ is exactly a line through $\qq$ and parallel to $\ee_1$ (the first standard basis vector), that is, \[ P \cap \Rn = \{ \qq + \lambda \ee_1 : \lambda\in\R \}. \] We have seen several cases so far in which monotone paths exist, for instance, when one of the endpoints is in $\Rn$. However, we will see next that monotone paths do not necessarily exist between every pair of astral points. Indeed, the next theorem shows that no monotone path can exist between two infinite points $\xbar,\ybar\in\extspace$, except possibly if their dominant directions are either the same or opposites of one another: \begin{theorem} \label{thm:no-mono-path-diff-domdir} Let $n\geq 2$, and let $\xbar_1,\xbar_2\in\extspace\setminus\Rn$. For $j=1,2$, let $\ww_j\in\Rn$ be $\xbar_j$'s dominant direction so that $\xbar_j=\limray{\ww_j}\plusl\ybar_j$ for some $\ybar_j\in\extspace$. Assume $\ww_1\neq\ww_2$ and $\ww_1\neq -\ww_2$. Then no monotone path exists from $\xbar_1$ to $\xbar_2$.
\end{theorem} \proof Let us assume, for now, that $n=2$, $\ww_1=\ee_1$ and $\ww_2=\ee_2$ (where $\ee_1$ and $\ee_2$ are the standard basis vectors in $\R^2$). We prove the result first in this special case. We then return to the fully general case, which we will see can be reduced to the special case. We first show that all of the points in $\lb{\xbar_1}{\xbar_2}$ are infinite, and all have dominant directions in $\Rpos^2$. Since this claim will be re-used later, we state it as a lemma. \begin{lemma} \label{lem:thm:no-mono-path-diff-domdir:1} For $j=1,2$, let $\xbar_j=\limray{\ee_j}\plusl\ybar_j$ where $\ybar_j\in\extspac{2}$, and where $\ee_1,\ee_2$ are standard basis vectors in $\R^2$. Then \[ \lb{\xbar_1}{\xbar_2} \subseteq \bigcup_{\vv\in\Rpos^2} [\limray{\vv}\plusl\extspac{2}]. \] \end{lemma} \proof Let $S=\lb{\xbar_1}{\xbar_2}$. Suppose, contrary to the claim, that some finite point $\qq$ exists in $S\cap\R^2$. Let $\uu=-\ee_1-\ee_2$. Then $\xbar_j\cdot\uu=-\infty$, for $j=1,2$, but $\qq\cdot\uu\in\R$. Since $\qq\in \lb{\xbar_1}{\xbar_2}$, this contradicts \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}). Thus, every point in $S$ is infinite. Let $\zbar\in S$, and let $\vv\in\Rn$ be its dominant direction so that $\zbar=\limray{\vv}\plusl\ybar$, for some $\ybar\in\extspac{2}$. Suppose, again contrary to the claim, that $\vv\not\in\Rpos^2$ so that either $\vv\cdot\ee_1<0$ or $\vv\cdot\ee_2<0$. Without loss of generality, assume the former. Since $\norm{\vv}=1$, this implies $\vv\cdot\ee_2<1$. Letting $\uu=\vv-\ee_2$, it follows that $\ee_1\cdot\uu<0$, $\ee_2\cdot\uu<0$, and $\vv\cdot\uu>0$. Therefore, $\xbar_j\cdot\uu=-\infty$, for $j=1,2$, but $\zbar\cdot\uu=+\infty$. Since $\zbar\in\lb{\xbar_1}{\xbar_2}$, this again contradicts \Cref{pr:seg-simplify}(\ref{pr:seg-simplify:a},\ref{pr:seg-simplify:b}). 
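In more detail, the three inequalities for $\uu=\vv-\ee_2$ used above can be verified by direct computation:
\[
\ee_1\cdot\uu = \vv\cdot\ee_1 < 0,
\qquad
\ee_2\cdot\uu = \vv\cdot\ee_2 - 1 < 0,
\qquad
\vv\cdot\uu = \norm{\vv}^2 - \vv\cdot\ee_2 = 1 - \vv\cdot\ee_2 > 0,
\]
using that $\vv\cdot\ee_1<0$, $\vv\cdot\ee_2<1$, and $\norm{\vv}=1$.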
\clqed Let $C=\{ \vv\in\Rstrictpos^2 : \norm{\vv}=1 \}$ be the quarter unit circle in the plane from $\ee_1$ to $\ee_2$ (excluding the endpoints). We claim next that every astron associated with points in $C$ must be included in every monotone passage set from $\xbar_1$ to $\xbar_2$. \setcounter{claimp}{0} \begin{claimpx} \label{cl:thm:no-mono-path-diff-domdir:2} Let $\mphom:L\rightarrow P$ be a monotone passage from $\xbar_1$ to $\xbar_2$, and let $\vv\in C$. Then $\limray{\vv}\in P$. \end{claimpx} \proof We can write $\vv$ in terms of its components as $\vv=\trans{[v_1,v_2]}$, with $v_1,v_2>0$. Let $\uu=\trans{[-v_2, v_1]}$, which is perpendicular to $\vv$. Note that $\xbar_1\cdot\uu=\limray{\ee_1}\cdot\uu\plusl\ybar_1\cdot\uu=-\infty$, and similarly, $\xbar_2\cdot\uu=+\infty$. Therefore, by Proposition~\ref{pr:mono-pass-props}(\ref{pr:mono-pass-props:c}), there must be some point $\zbar\in P$ with $\zbar\cdot\uu=0$. We claim that this point $\zbar$ must actually be $\limray{\vv}$. Let $\zbar=\limrays{\vv_1,\ldots,\vv_k}\plusl\qq$ be $\zbar$'s canonical form. By Lemma~\ref{lem:thm:no-mono-path-diff-domdir:1}, $\zbar\not\in\Rn$ (since $P\subseteq \lb{\xbar_1}{\xbar_2}$, by Proposition~\ref{pr:mono-pass-lin-cont}); therefore, $k\geq 1$. Since $\zbar\cdot\uu=0$, by Proposition~\ref{pr:vtransu-zero}, we must have $\vv_i\cdot\uu=0$, for $i=1,\ldots,k$, and so also $\qq\cdot\uu=0$. In $\R^2$, by $\uu$'s definition, these imply $\qq$ and each $\vv_i$ are scalar multiples of $\vv$; thus, $\qq=\beta\vv$, for some $\beta\in\R$, and each $\vv_i$, being a unit vector, is either equal to $\vv$ or $-\vv$. Since the $\vv_i$'s are orthonormal, this implies $k=1$, and since $\vv_1\cdot\qq=0$, this further implies $\qq=\zero$. Thus, $\zbar$ is either equal to $\limray{\vv}$ or $\limray{(-\vv)}$. However, by Lemma~\ref{lem:thm:no-mono-path-diff-domdir:1}, $\vv_1\in\Rpos^2$, ruling out the latter possibility. Therefore, $\zbar=\limray{\vv}$, proving the claim. 
\clqed Suppose now, by way of contradiction, that there exists a monotone path $\mphom:L\rightarrow P$ from $\xbar_1$ to $\xbar_2$, where $L$ is a closed real interval that, without loss of generality, we take to be $[0,1]$. As shown in Theorem~\ref{thm:formerly-lem:h:1:new} (with $n=2$), for each $\vv\in\R^2$, there exists an open set $\Uv\subseteq\extspac{2}$ that includes the astron $\limray{\vv}$, and such that $\Uv\subseteq\R^2\cup [\limray{\vv}\plusl\R^2]$. For each $\vv$ in the quarter unit circle $C$, let $\Upv=\Uv\cap P$, which is open in the subspace topology of $P$, and includes $\limray{\vv}$ by Claim~\ref{cl:thm:no-mono-path-diff-domdir:2}. Because $P\cap\R^2=\emptyset$ (by Lemma~\ref{lem:thm:no-mono-path-diff-domdir:1}), every point in $\Upv$ is infinite with iconic part $\limray{\vv}$; therefore, the sets $\Upv$, for $\vv\in C$, are disjoint from one another. Thus, because $\mphom$ is continuous and surjective, the pre-images $\mphominv(\Upv)$ are open and nonempty subsets of $[0,1]$. Therefore, for each $\vv\in C$, we can select a rational number $r(\vv)$ in $\rats\cap\mphominv(\Upv)$. The resulting function $r:C\rightarrow\rats$ is injective since the sets $\mphominv(\Upv)$, like the sets $\Upv$, are disjoint from one another over $\vv\in C$. However, this is a contradiction since $\rats$ is countable, but $C$ is uncountable, so no such function can exist. Having proved the theorem when $n=2$, $\ww_1=\ee_1$ and $\ww_2=\ee_2$, we return now to the general case as in the theorem's statement. Let $\WW$ be the $n\times 2$ matrix $\WW=[\ww_1, \ww_2]$. Then $\trans{\WW}\WW$ is invertible since its determinant is $1-(\ww_1\cdot\ww_2)^2$, which is strictly positive because $\ww_1$ and $\ww_2$ are unit vectors with $\ww_1\neq\ww_2$ and $\ww_1\neq -\ww_2$. Let $\pseudinv{\WW}=(\trans{\WW}\WW)^{-1} \trans{\WW}$ be $\WW$'s \emph{pseudoinverse}.
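For concreteness, the invertibility of $\trans{\WW}\WW$ can be seen by direct computation: since $\ww_1$ and $\ww_2$ are unit vectors,
\[
\trans{\WW}\WW
=
\left[\begin{array}{cc}
\ww_1\cdot\ww_1 & \ww_1\cdot\ww_2 \\
\ww_2\cdot\ww_1 & \ww_2\cdot\ww_2
\end{array}\right]
=
\left[\begin{array}{cc}
1 & \ww_1\cdot\ww_2 \\
\ww_1\cdot\ww_2 & 1
\end{array}\right],
\]
whose determinant is $1-(\ww_1\cdot\ww_2)^2$. By the Cauchy--Schwarz inequality, $\abs{\ww_1\cdot\ww_2}\leq 1$, with equality if and only if $\ww_1=\pm\ww_2$, which is ruled out by assumption.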
Then $\pseudinv{\WW}\WW=\Iden$, so that $\pseudinv{\WW}\ww_j=\ee_j$, for $j=1,2$, where $\ee_1,\ee_2$, as before, are the standard basis vectors in $\R^2$, and $\Iden$ is the $2\times 2$ identity matrix. Let $F:\extspace\rightarrow\extspac{2}$ be the function $F(\zbar)=\pseudinv{\WW} \zbar$. Suppose a monotone path set $P$ exists from $\xbar_1$ to $\xbar_2$. Then, by Theorem~\ref{thm:mono-pass-affine-map}, $F(P)$ is a monotone path set from $F(\xbar_1)=\limray{\ee_1}\plusl\pseudinv{\WW}\ybar_1$ to $F(\xbar_2)=\limray{\ee_2}\plusl\pseudinv{\WW}\ybar_2$; that is, $F(P)$ would be a monotone path set of exactly the form considered in the special case above, which we showed cannot exist. Thus, in the general case as well, no monotone path from $\xbar_1$ to $\xbar_2$ can exist. \qed Thus, no monotone path can exist between points satisfying the conditions of Theorem~\ref{thm:no-mono-path-diff-domdir}. On the other hand, we know from Corollary~\ref{cor:mon-pass-max-chain-conseq}(\ref{cor:mon-pass-max-chain-conseq:a}) that there must nevertheless be a monotone passage connecting the two points. What then do such monotone passages look like? For example, in $\extspac{2}$, there must exist a monotone passage $\mphom:L\rightarrow P$ from $\limray{\ee_1}$ to $\limray{\ee_2}$. By Claim~\ref{cl:thm:no-mono-path-diff-domdir:2} in the proof of Theorem~\ref{thm:no-mono-path-diff-domdir}, $P$ must include the astron $\limray{\vv}$ for every $\vv\in\Rstrictpos^2$. As a first attempt at constructing such a monotone passage, we might therefore consider simply running through all these astrons. More exactly, let \begin{equation} \label{eqn:circf-def} \circfval = \left[\begin{array}{c} \cos(\cfarga \pi /2) \\ \sin(\cfarga \pi /2) \end{array} \right] \end{equation} for $\cfarga\in [0,1]$, which parameterizes all of the points on the quarter unit circle in $\R^2$ from $\ee_1$ to $\ee_2$ (including the endpoints).
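As a quick check on this parameterization: at the endpoints,
\[
\left[\begin{array}{c} \cos(0) \\ \sin(0) \end{array}\right] = \ee_1
\qquad \mbox{and} \qquad
\left[\begin{array}{c} \cos(\pi/2) \\ \sin(\pi/2) \end{array}\right] = \ee_2,
\]
and for every $\cfarga\in[0,1]$,
\[
\norm{\circfval}^2 = \cos^2(\cfarga \pi /2) + \sin^2(\cfarga \pi /2) = 1,
\]
so each $\circfval$ is indeed a unit vector on the arc from $\ee_1$ to $\ee_2$.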
We might then try to construct a monotone passage $\mphom(\cfarga)=\limray{\circfval}$, for $\cfarga\in [0,1]$, which traces through all of the associated astrons. Of course, this cannot actually be a monotone passage from $\limray{\ee_1}$ to $\limray{\ee_2}$ since we know from Theorem~\ref{thm:no-mono-path-diff-domdir} that no such monotone \emph{path} can exist. The particular problem with this function is that it is not continuous, and in fact, is discontinuous at \emph{every} point (since each point $\limray{\vv}$ in the range of $\mphom$ is in its own open set $\Uv$, as was used in the proof of Theorem~\ref{thm:no-mono-path-diff-domdir}, so that the pre-image of that open set is a singleton in $[0,1]$, which is not open). Thus, a monotone passage from $\limray{\ee_1}$ to $\limray{\ee_2}$ must include every astron, as above, but must also be continuous, somehow managing to move smoothly from one astron to the next. We show in the next proposition an example of how this can be done. To slightly simplify the presentation, we consider finding a monotone passage from $\xbar=\limray{\ee_1}\plusl\limray{(-\ee_2)}$ to $\ybar=\limray{\ee_2}\plusl\limray{(-\ee_1)}$. (If desired, a monotone passage from $\limray{\ee_1}$ to $\limray{\ee_2}$ can then be extracted as the subpassage from $\limray{\ee_1}$ to $\limray{\ee_2}$ using Theorem~\ref{thm:subpassages}.) For each $\cfarga\in [0,1]$, let $\circfval$ be as in \eqref{eqn:circf-def}, and let $\circfinvval$ be a corresponding orthogonal unit vector, namely, \begin{equation} \label{eqn:circfinv-def} \circfinvval = \left[\begin{array}{r} - \sin(\cfarga \pi /2) \\ \cos(\cfarga \pi /2) \end{array} \right].
\end{equation} The monotone passage that we construct includes not only each astron $\limray{\circfval}$ as in the (unsuccessful) attempt above, but also passes along a path through its entire galaxy (and its closure), from $\limray{\circfval}\plusl\limray{(-\circfinvval)}$ at one end to $\limray{\circfval}\plusl\limray{\circfinvval}$ at the other. Such a path is followed separately for every $\cfarga\in [0,1]$; in this sense, the passage is, informally, a ``path of paths.'' As such, we define it over a Cartesian product of closed real intervals, specifically, $L=[0,1]\times[0,1]$, with linear order over pairs $\rpair{\cfarga}{\cfargb}\in L$ defined \emph{lexicographically}, meaning, in this case, that $\rpair{\cfarga}{\cfargb}\leq\rpair{\cfarga'}{\cfargb'}$ if $\cfarga<\cfarga'$ or if $\cfarga=\cfarga'$ and $\cfargb\leq\cfargb'$. \begin{proposition} Let $\xbar=\limray{\ee_1}\plusl\limray{(-\ee_2)}$ and $\ybar=\limray{\ee_2}\plusl\limray{(-\ee_1)}$, where $\ee_1,\ee_2$ are the standard basis vectors in $\R^2$. Let \[ P = \bigcup_{\vv\in \Rpos^2} [\limray{\vv}\plusl \extspac{2}], \] and let $L=[0,1]\times[0,1]$ be linearly ordered lexicographically. Define $\mphom:L\rightarrow P$ by \[ \mphom(\cfarga,\cfargb) = \limray{\circfval} \plusl \begin{cases} \limray{(-\circfinvval)} & \mbox{if $\cfargb=0$,} \\ \sigma(\cfargb)\, \circfinvval & \mbox{if $\cfargb\in (0,1)$,} \\ \limray{\circfinvval} & \mbox{if $\cfargb=1$,} \end{cases} \] where $\circfval$ and $\circfinvval$ are as defined in \eqref{eqn:circf-def} and \eqref{eqn:circfinv-def}, and where $\sigma(\cfargb)=\ln(\cfargb/(1-\cfargb))$ for $\cfargb\in (0,1)$. Then $\mphom$ is a strict monotone passage from $\xbar$ to $\ybar$. Furthermore, $P=\lb{\xbar}{\ybar}$, implying $P$ is the only monotone passage set from $\xbar$ to $\ybar$. \end{proposition} \proof We argue first that $\mphom$ is bijective. It can be checked that the function $\sigma$ is strictly increasing and maps $(0,1)$ bijectively to $\R$.
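Concretely, these properties of $\sigma$ follow by direct computation: for $\cfargb\in(0,1)$,
\[
\sigma'(\cfargb) = \frac{1}{\cfargb} + \frac{1}{1-\cfargb} = \frac{1}{\cfargb(1-\cfargb)} > 0,
\]
so $\sigma$ is strictly increasing; moreover, $\sigma(\cfargb)\rightarrow-\infty$ as $\cfargb\rightarrow 0$ and $\sigma(\cfargb)\rightarrow+\infty$ as $\cfargb\rightarrow 1$, and $\sigma$ has the explicit inverse $\sigma^{-1}(t)=e^t/(1+e^t)$ for $t\in\R$ (the logistic function), so $\sigma$ maps $(0,1)$ bijectively onto $\R$.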
Note that $\cfarga\mapsto\limray{\circfval}$ maps $[0,1]$ bijectively to the set of all astrons $\limray{\vv}$ with $\vv\in\Rpos^2$. For each $\cfarga\in [0,1]$, $\cfargb\mapsto\mphom(\cfarga,\cfargb)$ maps bijectively from $[0,1]$ to $\galcl{\limray{\circfval}}=\limray{\circfval}\plusl\extspac{2}$. This can be seen by noting that every point $\mphom(\cfarga,\cfargb)$, as defined in the proposition's statement, is already in its unique canonical representation, which also makes it straightforward to check that the mapping is surjective onto all of $\galcl{\limray{\circfval}}$ (since only two orthogonal directions are possible in $\extspac{2}$). Thus, $\mphom$ is a bijection from $L$ to $P$. We next show that $\mphom$ is order-preserving with range relative to $\xbar,\ybar$. Let $\uu\in\R^2$ with $\xbar\cdot\uu\leq\ybar\cdot\uu$. We aim to show that the function $\mphomu(\cfarga,\cfargb)=\mphom(\cfarga,\cfargb)\cdot\uu$ is nondecreasing as a function of $\rpair{\cfarga}{\cfargb}\in L$ (ordered lexicographically). We show this in cases, based on the signs of the two components of $\uu=\trans{[u_1,u_2]}$. If $\uu=\zero$, then $\mphomu\equiv 0$. If $u_1>0$ and $u_2>0$, then $\circfval\cdot\uu>0$ for all $\cfarga\in [0,1]$, so $\mphomu\equiv +\infty$. Similarly, if $u_1<0$ and $u_2<0$, then $\mphomu\equiv -\infty$. If $u_1\geq 0$ and $u_2\leq 0$, but $\uu\neq\zero$, then $\xbar\cdot\uu=+\infty$ and $\ybar\cdot\uu=-\infty$, contradicting that $\xbar\cdot\uu\leq\ybar\cdot\uu$, so this case is impossible. We are left only with the case that $u_1\leq 0$ and $u_2\geq 0$, but $\uu\neq\zero$. By normalizing $\uu$ (which only changes $\mphomu$ by a positive scalar), we can assume $\norm{\uu}=1$. Therefore, for some $\cfarga_0\in[0,1]$, we must have $\circfinvvalz=\uu$. 
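To see why such an $\cfarga_0$ must exist: since $\norm{\uu}=1$ with $u_1\leq 0$ and $u_2\geq 0$, we can write
\[
\uu = \left[\begin{array}{r} -\sin\theta \\ \cos\theta \end{array}\right]
\]
for a unique $\theta\in[0,\pi/2]$; setting $\cfarga_0=2\theta/\pi$ then yields $\circfinvvalz=\uu$, by \eqref{eqn:circfinv-def}.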
It can be checked that, under our assumptions on $\uu$, \[ \cfarga \mapsto \circfval\cdot\uu = u_1 \cos(\cfarga \pi /2) + u_2 \sin(\cfarga \pi /2) \] is strictly increasing as a function of $\cfarga\in [0,1]$, and is equal to zero if and only if $\cfarga=\cfarga_0$ (so that $\circfvalz\cdot\uu=\circfvalz\cdot\circfinvvalz=0$). Therefore, we can compute $\mphomu$ explicitly as \[ \mphomu(\cfarga,\cfargb) = \begin{cases} -\infty & \mbox{if $\cfarga<\cfarga_0$,} \\ -\infty & \mbox{if $\cfarga=\cfarga_0$ and $\cfargb=0$,} \\ \sigma(\cfargb) & \mbox{if $\cfarga=\cfarga_0$ and $\cfargb\in (0,1)$,} \\ +\infty & \mbox{if $\cfarga=\cfarga_0$ and $\cfargb=1$,} \\ +\infty & \mbox{if $\cfarga>\cfarga_0$,} \\ \end{cases} \] which is evidently nondecreasing in the lexicographic ordering of $L$. Thus, $\mphom$ is bijective and order-preserving (with range relative to $\xbar,\ybar$). It is known that $L$ is a linear continuum \citep[Example~24.1]{munkres}. Its minimum and maximum elements, $\lcmin=\rpair{0}{0}$ and $\lcmax=\rpair{1}{1}$, map to $\mphom(\lcmin)=\xbar$ and $\mphom(\lcmax)=\ybar$. By Proposition~\ref{pr:lb-def-by-part-order}, this implies that $P=\mphom(L)$ is included in $\lb{\xbar}{\ybar}$. On the other hand, $\lb{\xbar}{\ybar}\subseteq P$ by Lemma~\ref{lem:thm:no-mono-path-diff-domdir:1}. Thus, $P=\lb{\xbar}{\ybar}$, which is closed in $\extspac{2}$ (\Cref{pr:ohull:hull}), and therefore compact. Together, these facts imply, by Proposition~\ref{pr:mphom-inv-is-cont}, that $\mphom$ is a homeomorphism and also is an order isomorphism with range relative to $\xbar,\ybar$. Thus, $\mphom$ is a strict monotone passage. That $P$ is the only monotone passage set from $\xbar$ to $\ybar$ then follows from Corollary~\ref{cor:mon-pass-max-chain-conseq}(\ref{cor:mon-pass-max-chain-conseq:e}). \qed \ifdraft \section{Open questions and future directions} \subsection{Matus's questions} \begin{enumerate} \item Fenchel duality theory, and applications to exponential families. 
For things like multinomials this seems okay, but what about crazy things like Gaussians, where the sample space is infinite? \item The population risk can have minimizers of rank larger than one, even though the empirical risk always has rank 1. Does this have any weird statistical consequences? E.g., it seems that SGD should converge to the higher rank solution whereas unregularized GD will go to the rank 1 solution? \item Does GD converge in astral space? Ziwei and I know cases where GD seems to converge to something of rank higher than one on empirical risk minimization problems. (Or does this have weird interplay with the SGD part of the previous problem? E.g., is GD still magically trying to go to that higher rank solution?) \item Let $f$ be any convex function and $g$ any shadow. Is $f-g$ also convex? This would simplify some constructions and is also interesting. \end{enumerate} \subsection{Rob's questions} \begin{enumerate} \item Suppose $S\subseteq\extspace$ is closed (in $\extspace$) and convex. Must $S$ be equal to its own outer hull? That is, must $S=\ohull S$? \item Does gradient descent (or other standard methods) provably converge to the unique minimizer of the maximum-rank example given in Section~\ref{sec:max-rank-minimizers}? \item Can we say more about continuity of our subgradients and subdifferentials? I am especially wondering if, for instance, Rockafellar Theorem~25.6 can be generalized to astral space. \item Let $f:\Rn\rightarrow\Rext$ be convex, maybe with other nice properties. Suppose $\seq{\xx_t}$ is a sequence in $\Rn$ with $\gradf(\xx_t)\rightarrow\zero$, and also $\xx_t\rightarrow\xbar$, for some $\xbar\in\extspace$. Must $\xbar$ be a minimizer of $\fext$? The counterexample in Section~\ref{sec:subdif:cont} shows that $f(\xx_t)$ need not converge to $\inf f$, but does not answer this question. \item Is there a relationship between monotone passages and geodesics or Riemannian geometry? (This was Matus's question.)
\item What is the relationship between our subgradient and Clarke subdifferentials? \item Can we apply the astral-space approach to say anything about the limits of graphs (in the limit as the number of vertices gets large)? Maybe instead of linear functions, we could consider counting functions on graphs, for instance, fraction of $k$-node subgraphs that are complete graphs. Is there a connection to the work of Chayes and Borgs? \item Can the conditions in Theorem~\ref{ap:thm:adsubdif-implies-asubdif} be greatly simplified? Can the condition $\clepi{F}=\ohull(\epi F)$ be dropped? \item How do minmax=maxmin type theorems generalize to astral space? Can we connect astral space to game theory? \item Is there an astral version of Kuhn-Tucker? \item In Theorem~\ref{thm:adif-fext-inverses}, can the condition ``$\xbar\in\cldom{f}$'' be dropped if $\fdub(\xbar)=\fext(\xbar)$, or if $f$ has all reductions closed? (This is Miro's question.) \item Can Theorem~\ref{thm:ben-subgrad-max} be made more general (for instance, so $I$ does not need to be finite)? See \cite[Theorem~VI.4.4.2]{HULL-big-v1}. \item For any $F:\extspace\rightarrow\Rext$, we know that $\aresconeF$ is an astral convex cone (Theorem~\ref{thm:arescone-is-ast-cvx-cone}). Analogous to \Cref{pr:resc-cone-basic-props}, it seems that if $F$ is lower semicontinuous, then $\aresconeF$ should also be closed, as in the standard case, without assuming convexity (as in \Cref{thm:rescone-closed}). Is it? \item Is convexity needed in \Cref{pr:f:1}? Definitely feels like we shouldn't need it. \end{enumerate} \section{Exercises} \subsection{Rob's questions} { \renewcommand{\theenumi}{\arabic{enumi}} \begin{enumerate} \item \label{xer:r:1} Proposition~\ref{pr:i:4} states that two points $\xbar$ and $\xbar'$ are equal if and only if $\xbar\cdot\uu=\xbar'\cdot\uu$ for all $\uu\in\Rn$. This question asks if the same holds if we only consider vectors $\uu$ with rational coordinates.
\begin{enumerate} \item \label{xer:r:1a} Prove or disprove the following: Let $\xx,\xx'\in\Rn$. Then $\xx=\xx'$ if and only if $\xx\cdot\uu=\xx'\cdot\uu$ for all $\uu\in\rats^n$. \item \label{xer:r:1b} Prove or disprove the following: Let $\xbar,\xbar'\in\extspace$. Then $\xbar=\xbar'$ if and only if $\xbar\cdot\uu=\xbar'\cdot\uu$ for all $\uu\in\rats^n$. \end{enumerate} Solution: Part~\ref{xer:r:1a} is true; in fact, it suffices to take $\uu$ ranging over the standard basis vectors. Part~\ref{xer:r:1b} is false. Example: Let $n=2$, and let $\vv=\trans{[1,\sqrt{2}]}$. Let $\qq=\trans{[\sqrt{2},-1]}$. Let $\xbar=\limray{\vv}\plusl\qq$ and $\xbar'=\limray{\vv}\plusl(-\qq)$. These points are not equal (since, for instance, $\xbar\cdot\qq\neq\xbar'\cdot\qq$). For nonzero $\uu\in\rats^2$, we cannot have $\vv\cdot\uu=0$, since $\sqrt{2}$ is irrational. Therefore, $\xbar\cdot\uu=\limray{\vv\cdot\uu}=\xbar'\cdot\uu$ for all nonzero $\uu\in\rats^2$ (and trivially also for $\uu=\zero$). \item \label{xer:r:2} [This problem is probably out of date now and too easy following the inclusion of Theorem~\ref{thm:icon-reduc-lsc-inf}.] Let $f:\Rn\rightarrow\Rext$ be convex. For $i=1,2$, let $\ebar_i=\VV_i\omm$ where $\VV_i\in\R^{n\times k_i}$. \begin{enumerate} \item \label{xer:r:2:a} Assume $\ebar_1,\ebar_2\in\aresconef$ and $\colspace \VV_1=\colspace \VV_2$. Prove that $\fshad{\ebar_1}=\fshad{\ebar_2}$. Solution: If $f$ is reduction closed, then we can apply Theorem~\ref{thm:conj-of-iconic-reduc} to show that $(\fshad{\ebar_1})^*=(\fshad{\ebar_2})^*$, and therefore that $\fshad{\ebar_1}=\fshad{\ebar_2}$. If $f$ is not reduction closed, we can use the exponential composition trick. [See Rob's notes 5/23/23.] \item \label{xer:r:2:b} Show that part~(\ref{xer:r:2:a}) is false if the assumption that $\ebar_1,\ebar_2\in\aresconef$ is dropped. In other words, suppose that $\colspace \VV_1=\colspace \VV_2$, but that $\ebar_1$ and $\ebar_2$ are not necessarily in $\aresconef$.
Give a counterexample showing that $\fshad{\ebar_1}$ need not be equal to $\fshad{\ebar_2}$. Solution: Let $f(x)=e^x$ for $x\in\R$, let $\ebar_1=+\infty=\limray{(+1)}$ and $\ebar_2=-\infty=\limray{(-1)}$. Then $\fshad{\ebar_1}\equiv+\infty$ but $\fshad{\ebar_2}\equiv 0$. \end{enumerate} \item \label{xer:r:3} Let $f$ be as in \eqref{eqn:loss-sum-form}, and let $\xbar\in\extspace$. [Could maybe skip part~(\ref{xer:r:3:a}).] \begin{letter-compact} \item \label{xer:r:3:a} Prove that \[ \asubdiffext{\xbar} \supseteq \sum_{i\in\indset} \uu_i \asubdifelliext{\xbar\cdot\uu_i} \] and that \[ \basubdiffext{\xbar} = \sum_{i\in\indset} \uu_i \basubdifelliext{\xbar\cdot\uu_i}. \] \item \label{xer:r:3:b} Prove that the following are equivalent: \begin{roman-compact} \item \label{xer:r:3:a:1} $\xbar$ minimizes $\fext$. \item \label{xer:r:3:a:2} \[ \zero \in \sum_{i\in\indset} \uu_i \asubdifelliext{\xbar\cdot\uu_i}. \] \item \label{xer:r:3:a:3} \[ \zero \in \sum_{i\in\indset} \uu_i \basubdifelliext{\xbar\cdot\uu_i}. \] \end{roman-compact} \end{letter-compact} Solution: Apply calculus rules from Chapter~\ref{sec:calc-subgrads}. (See Rob's notes 6/4/23.) \item Show that Theorem~\ref{thm:closure-of-sublev-sets} need not hold in general when $\beta=\inf f$. Solution: Let $f(x)=e^x$ and $\beta=\inf f = 0$. Then $L=\emptyset$ but $\{\barx : \fext(\barx)\leq\beta\}=\{-\infty\}$. \item Let $J,K$ be pointed, convex cones in $\extspace$. Show that $J\seqsum K=\conv(J\cup K)$. Solution: See Rob notes 8/2/23. [out of date --- using old definition of cone] \item Let $\A\in\Rmn$, and let $\trans{\aaa}_i$ denote the $i$-th row of $\A$. Let $\xbar\in\extspace$ and $\bb\in\Rm$. Show that $\A\xbar=\bb$ if and only if $\xbar\cdot\aaa_i=b_i$ for $i=1,\ldots,m$. Does this contradict Example~\ref{ex:mat-prod-not-just-row-prods}? Solution: See Rob notes 8/3/23. \item \label{xer:r:4} Let $F:\Rext\rightarrow\Rext$. 
Consider the following statements: \begin{roman-compact} \item \label{xer:r:4:s1} $F$ is convex. \item \label{xer:r:4:s2} For all $\barx,\bary\in\dom{F}$, and for all $\lambda\in[0,1]$, \[ F((1-\lambda)\barx\plusl \lambda\bary) \leq (1-\lambda) F(\barx) + \lambda F(\bary). \] \end{roman-compact} \begin{letter-compact} \item Prove or disprove that, in general, (\ref{xer:r:4:s1}) implies (\ref{xer:r:4:s2}). \item Prove or disprove that, in general, (\ref{xer:r:4:s2}) implies (\ref{xer:r:4:s1}). \end{letter-compact} Solution: (\ref{xer:r:4:s1}) $\Rightarrow$ (\ref{xer:r:4:s2}) by Theorem~\ref{thm:lammid-char-seqsum}(\ref{thm:lammid-char-seqsum:c}) and Theorem~\ref{thm:ast-F-char-fcn-vals}. (\ref{xer:r:4:s2}) $\not\Rightarrow$ (\ref{xer:r:4:s1}): Let \[ F(\barx) = \left\{ \begin{array}{cl} \barx^2 & \text{if $\barx\in\R$} \\ -\infty & \text{otherwise.} \end{array} \right. \] Then $F$ satisfies (\ref{xer:r:4:s2}), but is not convex. \item (astral infimal convolution) Let $F:\extspace\rightarrow\Rext$ and $G:\extspace\rightarrow\Rext$ be convex. Let \[ H(\zbar) = \inf\Braces{ F(\xbar) \plusu G(\ybar) : \zbar\in \xbar \seqsum \ybar }. \] Show that $H$ is convex. Solution: Let $S=\{ \mtuple{\xx,\yy,\zz} : \xx + \yy = \zz \}$. Can write \[ H(\zbar) = \inf\Braces{ F(\PP_1 \wbar) \plusu G(\PP_2 \wbar) \plusu \indfa{\Sbar}(\wbar) : \wbar\in\extspac{3n}, \PP_3\wbar = \zbar}. \] Then can argue convex using tools in Section~\ref{sec:op-ast-cvx-fcns}. ($\PP_i$ notation is as in proof of Theorem~\ref{thm:seqsum-convex}.) \item For a function $F:\extspace\rightarrow\Rext$, let $\nbasubdifF{\xbar}=\asubdifF{\xbar}\setminus \basubdifF{\xbar}$ denote the set of non-benign astral subgradients. Let $f:\Rn\rightarrow\Rext$ be convex and proper, and let $\xbar\in\extspace$. Prove that $\nbasubdiffext{\xbar}$ is convex. Solution: Assume $\xbar\in\cldom{f}$ (otherwise $\asubdiffext{\xbar}=\emptyset$).
Combining Theorem~\ref{thm:adif-fext-inverses} and Proposition~\ref{pr:dual-subgrad-outside-dom}, we have $\uu\in\nbasubdiffext{\xbar}$ if and only if for all $\ww\in\dom\fstar$, $\xbar\cdot(\ww-\uu)=-\infty$. Say $\uu,\uu'\in\nbasubdiffext{\xbar}$, let $\ww\in\dom\fstar$, and let $\lambda\in (0,1)$. Let $\vv=\lambda\uu+(1-\lambda)\uu'$. Then $\lambda\xbar\cdot(\ww-\uu)=-\infty$ and $(1-\lambda)\xbar\cdot(\ww-\uu')=-\infty$. Adding, and by summability, it follows that $\xbar\cdot(\ww-\vv)=-\infty$. [See Rob notes 4/2/23.] \item \label{xer:r:linear-extended-equiv} Let $\psi:\Rn\rightarrow\Rext$, and let $\phimgA$ be as in Section~\ref{subsec:astral-pts-as-fcns}. Show that $\psi\in\phimgA$ if and only if $\psi$ is a linear extended function. Solution: ``only if'' follows from Propositions~\ref{pr:i:1} and~\ref{pr:i:2} (as already stated in the text). For the ``if'' direction, suppose $\psi$ is linear extended. Then \[\psi(\zero)=\psi(0\cdot\zero)=0\cdot \psi(\zero) = 0.\] Let $\uu,\vv\in\Rn$ be such that $\psi(\uu)$ and $\psi(\vv)$ are summable. Let $\lambda\in [0,1]$. Then $\psi((1-\lambda)\uu)=(1-\lambda)\psi(\uu)$ and $\psi(\lambda\vv)=\lambda\psi(\vv)$ are also summable. Therefore, \begin{eqnarray*} \psi((1-\lambda)\uu+\lambda\vv) &=& \psi((1-\lambda)\uu)+\psi(\lambda\vv) \\ &=& (1-\lambda)\psi(\uu)+\lambda\psi(\vv). \end{eqnarray*} Therefore, $\psi$ is convex by Proposition~\refequiv{pr:stand-cvx-fcn-char}{pr:stand-cvx-fcn-char:a}{pr:stand-cvx-fcn-char:c}. By that same proposition, this also shows that $-\psi$ is convex, and therefore that $\psi$ is concave. Thus, $\psi\in\phimgA$ by Theorem~\refequiv{thm:h:5}{thm:h:5a0}{thm:h:5b}. \item Let $K\subseteq\extspace$ be an astral cone and let $\A\in\Rmn$. Show that $\A K$ is also an astral cone. Solution: see Rob notes 1/15/24. \item Let $K\subseteq\extspac{m}$ be an astral cone and let $\A\in\Rmn$. Show that $\{\xbar\in\extspace : \A \xbar \in K\}$ is also an astral cone. Solution: see Rob notes 1/15/24. 
\item Let $\uu\in\Rn$, and let $f(\xx)=\xx\cdot\uu$ for $\xx\in\Rn$. Find $\resc{f}$, $\represc{f}$, $\rescbar{f}$, and $\aresconef$. Is $f$ recessive complete? Solution: \[ \resc{f} = \{ \vv\in\Rn : \vv\cdot\uu \leq 0 \} \] and \[ \represc{f} = \rescbar{f} = \aresconef = \{ \vbar\in\extspace : \vbar\cdot\uu \leq 0 \}. \] \item Does Theorem~\ref{thm:decomp-acone} hold if $\acone{S}$ is replaced by $\conv{S}$? In other words, prove or disprove the following statement: For all $S_1,S_2\subseteq\extspace$, \[ (\conv{S_1}) \seqsum (\conv{S_2}) = \conv{(S_1\cup S_2)}. \] Solution: the statement is false (even for standard convex analysis). For instance, in $\R^2$, let $S_1=\{\zero,\ee_1\}$ and $S_2=\{\zero,\ee_2\}$. Then $(\conv{S_1}) \seqsum (\conv{S_2})$ is the square $[0,1]\times [0,1]$, but $\conv(S_1\cup S_2)$ is a triangle that does not include (for instance) $\trans{[1,1]}$. \item Suppose $M=\acolspace \A$ for some matrix $\A\in\R^{n\times k}$. Show that $\dim{M}$ is equal to the matrix rank of $\A$. \item Suppose $M=\lb{-\ebar}{\ebar}$ for some icon $\ebar\in\corezn$. Show that $\dim{M}$ is equal to the astral rank of $\ebar$. Solutions: See Rob notes 3/5/24. \item Let $f$ be the function on $\R^2$ given in \eqref{eqn:eg-diag-val}. Let $\vv=\trans{[1,1]}$ and $\ww=\trans{[1,-1]}$, and let $\xbar=\limray{\vv}\plusl(\lambda/2)\ww$, for some $\lambda\in\R$. Let $\uu=2\lambda\ww$. Show that $\uu$ is an astral subgradient of $\fext$ at $\xbar$. Solution: This is as expected, being the limit of $\nabla f(\xx_t)$ for any sequence $\seq{\xx_t}$ in $\Rn$ that converges to $\xbar$. This kind of argument can be used to formally prove that $\uu\in\asubdiffext{\xbar}$ by applying Theorem~\ref{thm:cont-subgrad-converg}. Alternatively, this fact can be proved more directly using the definition of astral subgradient: Specifically, it can be checked that $-\fext(\xbar)\plusd\xbar\cdot\uu=\lambda^2$. For any point $\zbar\in\extspac{2}$, let $\alpha=\zbar\cdot\ww$.
If $\alpha\in\{-\infty,+\infty\}$, then $\fext(\zbar)=+\infty$ so $-\fext(\zbar)\plusd\zbar\cdot\uu=-\infty\leq\lambda^2$. Otherwise, if $\alpha\in\R$, then $-\fext(\zbar)\plusd\zbar\cdot\uu\leq-\alpha^2+2\lambda\alpha\leq\lambda^2$. In either case, the condition in Theorem~\ref{thm:fenchel-implies-subgrad} is satisfied. \item (Inspired by \citet[Theorem~3.2]{lexicographic_separation}.) Let $K\subseteq\Rn$ be a convex cone, and let $U\subseteq\Rn$. Assume $U+K$ is convex and that $\zero\not\in U+K$. \begin{letter-compact} \item Show that there exists $\xbar\in\apol{K}$ such that $\xbar\cdot\ww<0$ for all $\ww\in U+K$. \item Suppose in addition that $K$ is polyhedral. Show then that there exists $\xbar\in\repcl{(\Kpol)}$ such that $\xbar\cdot\ww<0$ for all $\ww\in U+K$. \end{letter-compact} Solution: See Rob's notes 4/3/24. \item Let $\psi:\Rn\rightarrow\Rext$ be defined, for $\uu\in\Rn$, by \[ \psi(\uu) = \begin{cases} -\sqrt{1-\norm{\uu}^2} & \text{if $\norm{\uu}\leq 1$,} \\ +\infty & \text{otherwise,} \end{cases} \] whose graph in $\Rnp$ is the bottom half of a hypersphere. Let $\uu\in\Rn$ with $\norm{\uu}=1$. Show that $\partial\psi(\uu)=\emptyset$ and $\adsubdifpsi{\uu} = \limray{\uu} \plusl \extspace$. Solution: See Example~\ref{ex:hemisphere-ast-dual-subgrad} from the old version from 7/15/24 (or Rob's notes from 7/12/24). \end{enumerate} } \section{Matus stuff --- eventually remove this section} \subsection{Clarke differentials} \begin{enumerate} \item Clarke differentials are defined as support functions via directional derivatives, which we could use in both the primal and the dual. A first try is \begin{align*} (D_1 f)(\xbar;\ybar) &:= \limsup_{\substack{\xx\to\xbar\\\yy\to\ybar\\t\downarrow 0}} \frac {f (\xx + t \yy) - f(\xx)}{t}, \\ \partial_1 f(\xbar) &:= \left\{ \uu\in\Rn: \forall \ybar\in\extspace, \ybar\cdot \uu \leq (D_1 f)(\xbar;\ybar)\right\}
\end{align*} If we consider $g(x) = x$, we get \[ (D_1 g)(\barx;\bar y) = \limsup_{\substack{x\to\barx\\y\to\bar y\\t\downarrow 0}} \frac {x + t y - x}{t} = \limsup_{y\to\bar y} y = \bar y, \] and thus $\partial_1 g(-\infty) = \{1\} \not \supseteq \{0\}$. \item To get $0\in\partial(\barx\mapsto\barx)(-\infty)$, it seems there are two ideas: either allow fake directions which approach from ``beyond'' $-\infty$ (which is how we get $0$ at the boundaries of the indicator $I_{[0,1]}$), or simply disallow consideration of directions ``beyond'' $-\infty$, whereby the support function allows those differentials. I prefer the second approach. \end{enumerate} \subsection{References} \label{sec:matus_refs} \begin{enumerate} \item \textbf{(Other spaces with infinite directions.)} \begin{itemize} \item \textbf{(Cosmic space.)} \citet{rock_wets} define \emph{cosmic space}, where $\R^n$ is extended to include rays; in the terminology here, the set of astral points with astral rank at most one. Cosmic space is compact \citep[Theorem 3.2]{rock_wets}, meaning every sequence has a convergent subsequence; they also develop related closure concepts for sets and functions. This formalism is not enough to ensure that functions attain their minima; by contrast, this is ensured here in \Cref{pr:h:1}, and moreover there exist functions whose minimizers have arbitrarily high astral rank (see \Cref{pr:min-hi-rank-f-eg}), and thus are not elements of cosmic space. \item \textbf{($\cQ$-compactification.)} Our compactification is similar to one due to \citet{q_compactification}. \end{itemize} \item \textbf{(Abstract convexity.)} \begin{itemize} \item \textbf{(Abstract $c$-conjugate and $\matusplusd$.)} These were first introduced by \citet[Eq. (14.7)]{moreau__convexity}, but then heavily developed by \citet{singer_book}. \item \textbf{(Hemispaces.)} Terminology due to \citet{hemispaces}. Miro comment, April 12: \begin{quote} I found the attached paper by Martinez-Legaz and Singer 88.
It's the closest (so far) that I've found to our characterization of the extended linear functions. \end{quote} Miro comment, April 13: \begin{quote} The simplest way to state the connection is that hemispaces are sets that are simultaneously convex and concave (meaning their complements are convex), so there is a connection between hemispaces and our Theorem 12. Their Theorem 2.1 for example gives what looks analogous to our characterization (like in our Eq. 7 or our Theorem 12.d). Their characterization has two additional degrees of freedom: \begin{itemize} \item they need the ability to ``move the origin'' (whereas we fix it at 0) \item they need the ability to exclude the origin (whereas we always include 0) --- that's why they need to consider lexicographic variants of both less-than-or-equal as well as just less-than in their definitions \end{itemize} \end{quote} \item \textbf{(Further abstract convexities \citep{sierksma,kay_womble,vandeVel}.)} Miro comments \begin{quote} \begin{itemize} \item Sierksma84: How to go from ``convexity space'' of Singer (aka ``closure space'' by van de Vel) to ``aligned space'' of Singer (aka ``convexity space'' by van de Vel) \item KayWomble71: Here, please only look at Section 1 until the proof of Theorem 2. I basically wanted to highlight some issues that come up when defining the convexity via ``segments'' the way Rob is doing it (that's captured in Theorem 2). \end{itemize} \end{quote} and Miro also said: \begin{quote} (3) can we relate the rho-convexity to Rob's convexity, which Singer calls ``alignment'' (for example, there are some theoretical results that begin with an abstract convexity (like (W,phi)-convexity) and seek the minimal aligned space that refines them). \end{quote} Here are some old comments I made about \citep{sierksma,kay_womble,vandeVel,singer_book}: \begin{quote} These authors all consider convexity based purely on set systems (and no vector operations or set topology) as follows.
Given a base set $X$ (e.g., reasonable choices in the present work are $\R^n$ and $\extspace$), consider any family of subsets $M \subseteq \mathcal{P}(X)$, where $\mathcal{P}(X)$ denotes the power set, such that $\emptyset \in M$, and $X\in M$, and for any subfamily $M_0 \subseteq M$, we have $\bigcap_{S\in M_0} S \in M$. This is called a \emph{convexity system} by \citet{singer_book}, whereas the earlier work of \citet{kay_womble} calls this a \emph{convexity structure} but further requires $\{x\}\in M$ for every $x\in X$. This outer construction of convex sets is related to our construction, where we first define the \emph{segment} $\lb{\xbar}{\ybar}$ between two points as the intersection of all generalized halfspaces containing both points, and then use this primitive to build up general convex sets. These works also extensively discuss abstract convex hull operators, and their relationship to convexity structures \citep{sierksma,kay_womble,vandeVel,singer_book}. \end{quote} \end{itemize} \item \textbf{(Optimization.)} \begin{itemize} \item \textbf{(Classical optimization ideas (without convexity): the Zoutendijk condition \citep[eq. (3.14)]{nocedal_wright}.)} The Zoutendijk condition \citep[eq. (3.14)]{nocedal_wright} ensures that steepest descent methods, when used with line search on a smooth function, have asymptotically vanishing gradients, and thus approach stationary points. A related, more recent result is that gradient descent with reasonable step sizes, when applied to smooth functions with some regularity conditions, will escape saddle points and converge to minimizers \citep{jason_minimizers}. \item \textbf{(Classical optimization results with convexity.)} Classical convex optimization results explicitly refer to minimizers and distance to minimizers, and blow up if this distance is infinite, both in smooth and nonsmooth cases \citep[Theorem 2.1.14, Theorem 3.2.2]{nesterov_book}.
While the machine learning literature has a variety of results that are stated without boundedness (and bounded constraints) \citep{Zhang04solvinglarge,orabona_online}, these bounds still have the norm of some reference solution in their final statement, so an additional proof is needed to assert that the descent method minimizes the function, as is explicitly proved in \Cref{thm:fact:gd} and \Cref{thm:grad-desc-converges}. \item \textbf{(Implicit regularization of descent methods.)} Since the AdaBoost algorithm~\citep{freund_schapire_adaboost} is equivalently a coordinate descent method applied to the exponential loss, it is possible for all solutions to be at infinity. In the well-studied scenario of \emph{weak learnability}, the infimal error is 0, and minimizing paths need to follow the recession cone. It was shown that AdaBoost asymptotically stays interior to this recession cone \citep{boosting_margin}, and moreover that the path it takes maximizes a quantity known as the margin~\citep{zhang_yu_boosting}, indeed at a rate of $1/\sqrt{t}$ \citep{mjt_margins}; however, it is not known whether the method converges in astral space, even though this setting admits rank-1 minimizers (cf. \Cref{pr:hard-core:1,thm:unimin-can-be-rankone}). The preceding techniques also apply to gradient descent, and this is currently an active area due to potential consequences for deep learning and its generalization performance \citep{nati_logistic,riskparam}. In this literature, a version of the smooth part of \Cref{thm:grad-desc-converges} was shown before \citep{ziwei_poly_tail_losses}. Similarly to the case of coordinate descent, it is not known whether gradient descent converges in $\extspace$, though it is known that gradient descent follows a minimizing path whose finite part converges to a minimum over the \emph{hard core} (cf. \Cref{sec:emp-loss-min}) \citep{riskparam}.
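The claim that minimizing paths follow the recession cone can be illustrated numerically (a minimal sketch, not AdaBoost itself: the separable exponential-loss-style objective, starting point, and step size below are all assumptions made for illustration):

```python
import math

# f(x1, x2) = exp(-x1) + exp(-x2): an exponential-loss-style objective whose
# infimum 0 is approached only along the recession cone (the nonnegative
# orthant here), never attained at any finite point.
def f(x1, x2):
    return math.exp(-x1) + math.exp(-x2)

def grad(x1, x2):
    return -math.exp(-x1), -math.exp(-x2)

eta = 1.0
x1, x2 = 0.0, 2.0  # deliberately asymmetric start
for _ in range(5000):
    g1, g2 = grad(x1, x2)
    x1, x2 = x1 - eta * g1, x2 - eta * g2

# The objective is driven toward its infimum ...
assert f(x1, x2) < 1e-3
# ... while the normalized iterate approaches (1,1)/sqrt(2), staying interior
# to the recession cone, echoing the margin-maximization phenomenon.
n = math.hypot(x1, x2)
assert abs(x1 / n - x2 / n) < 0.01
```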
\end{itemize} \end{enumerate} \section{Tutorial: Miro's part---VERSION 7/11/2024} \subsection{Outline} \begin{itemize} \item In the first part of the tutorial, Rob showed you how to extend the Euclidean space with various points at infinity. [Picture reminding of the sequences of rank 1 and 2.] You also saw how the inner product can be extended to a coupling function between an astral point and a real point, by taking a limit. And you saw how standard convex functions can be extended to the entire astral space by taking limits. [Picture illustration. Perhaps of $e^x$ or possibly $\ln(1+e^x)$.] This way we end up with lower semi-continuous functions on astral space. These are very nice, because their minimum is always attained. \item In the second part of the tutorial, I will look into various conditions that characterize where the minimum is attained. These are called optimality conditions. Then I'll show you how we can use astral space to prove convergence of algorithms like gradient descent---even when the minimum is only attained at an infinite point. \end{itemize} \subsection{Standard optimality (unconstrained)} \begin{itemize} \item Let me start by reviewing \emph{standard} optimality conditions for unconstrained minimization. Say we are minimizing a convex function $f$. Then to check whether a finite point $\xx$ is a minimizer, we only need to check whether its gradient is zero. If $f$ is not necessarily differentiable, then we need to check a more general condition: we need to check that zero is a subgradient at $\xx$. Let me remind you what the subgradient is. The vector $\uu$ is a subgradient of $f$ at a point $\xx$, if the function $f$ is lower bounded by a tangent through the point $f(\xx)$ whose linear term grows as $\uu$. Said slightly differently, this tangent can be viewed as an affine lower bound that passes through $f(\xx)$ as this picture shows. The set of subgradients at $\xx$ is denoted as partial $f$ of $\xx$. 
When $f$ is differentiable, this set contains exactly one point, which is the gradient of $f$, but in general this set can have multiple points or be empty. \item If the subgradient set contains zero, it means that there exists a horizontal tangent passing through $f(\xx)$, which is only possible when $\xx$ is a minimizer of $f$. When a function is only minimized at infinity, this condition cannot be invoked. So, our first question is, can we generalize the optimality condition to infinite points? To do this we need to be able to define subgradients at infinite points. How can we do this? \end{itemize} \subsection{Subgradients at infinity} \begin{itemize} \item As a first attempt, we could just take the original definition of the subgradient and plug in the infinite points $\xbar$. But this does not work, because it is generally not possible to subtract astral points. \item But we can still be guided by the geometric intuition about the tangent to the function graph. Here I'm showing you the graph of the function $\log(1+e^x)$. Its extension is equal to plus infinity at the point $\xbar$ equal to plus infinity. The function itself asymptotes towards a linear function with slope one, so it seems reasonable to consider the linear function with slope one to be the tangent of $f$ at infinity. To formalize this intuition, we require two properties. The tangent should be below the graph of $\ef$, but it should meet the graph of $\ef$ at the point $\xbar$. Standard subgradients obviously satisfy this. As our second attempt, we could say that $\uu$ is a subgradient of $\ef$ at a point $\xbar$ if there exists an affine function with linear term $\uu$ which is a lower bound to $\ef$---that's the first equation---and which is also equal to $\ef$ at the point $\xbar$---that's the second equation. This is meant to exactly operationalize these two properties. Unfortunately, this does not always do the right thing.
\item Here, I am showing you the absolute value function and a tangent to this function at zero. This tangent has a positive slope, but the slope is less than one. The tangent clearly lower bounds the function and also equals positive infinity at positive infinity. So both of these conditions would be satisfied, but of course the tangent does not meet the graph of $\ef$ at $\xbar$. So, this formalization is not quite right. How do we fix this? \item The final and correct definition is a bit involved. Let me parse it for you. It still has two conditions. The first condition is unchanged---it states that the tangent needs to be a lower bound. The second condition states that the gap between the tangent and the function's graph must vanish in a neighborhood of $\xbar$. Using this definition, we can return to our two examples. In both cases, the only subgradient at plus infinity is 1, corresponding to a tangent with slope 1. As an aside, the subgradient definition that I have just introduced is referred to in our arXiv version as the \emph{strict subgradient} and denoted as $\check{\partial} f(\xbar)$. \item Using this definition, we can now state a more general optimality condition. It covers all minimizers including those at infinity. Specifically, it states that $\xbar$ minimizes $\ef$ if its subgradient set contains zero or if the function value at $\xbar$ is equal to negative infinity. The first case is depicted on the left. Here the function is minimized at negative infinity, where $\ef$ has a subgradient of zero. The second case is depicted on the right. The linear function $f(x)=x$ is also minimized at negative infinity, where it is equal to negative infinity, although the only subgradient there is equal to 1. \item This covers unconstrained optimality conditions. Next, let's move to constrained optimization problems, and again let's begin by reviewing the standard machinery.
\end{itemize} \subsection{Standard optimality (constrained)} \begin{itemize} \item Say we want to minimize a convex function subject to a linear constraint. As a running example, I will consider $g$ equal to the negative entropy of a probability distribution over three outcomes. This can be represented as a convex function in two dimensions: $u_1$ for the probability of the first outcome, $u_2$ for the probability of the second outcome, and $1-u_1-u_2$ for the probability of the third outcome. On the left you see the graph of the function, and on the right the contour plot of the graph. The function $g$ is equal to plus infinity outside of the triangle, which corresponds to all probability distributions over three outcomes. Let me rotate the graph of the function a bit so you get a sense of what it looks like. As you see, it is convex and becomes steeper as it approaches the boundary of the triangle. However, although it becomes steeper towards the boundary, its values within the triangle are always negative, or equal to zero in the corners. Now let me add an example of a linear constraint to this figure. Since we are in two dimensions, a single linear constraint corresponds to a line in 2D, and the goal is to find the minimum of the function along this line. Using the machinery of Lagrange multipliers and KKT conditions, we can get a sufficient condition for a point $\uu$ to be a solution to the constrained problem. It consists of two parts: \begin{itemize} \item Feasibility, which states that the linear constraint is satisfied: $\aaa\inprod\uu=b$. \item And optimality, which states that the gradient of $g$ is orthogonal to the feasible set. In our case, this means: $\nabla g(\uu)=\lambda\aaa$ for some $\lambda\in\R$.
\end{itemize} But this is only a sufficient condition, because convex functions don't need to have gradients on the relative boundary of their effective domain, but they can still have a solution there. For example, this is the entropy function in 1D: [Show entropy in 1D, or would it be better to redo this example in 2D?] It is differentiable on the open interval $(0,1)$, but it does not have a subgradient at 0 or 1, because the only tangents at those points are vertical---and standard subgradients can only express non-vertical tangents. So, the question is, can we extend the set of subgradients at a point $\uu$ to also represent these vertical tangents? \item To summarize, we have two challenges: how to define subgradients at infinite points, and how to enlarge the set of subgradients at finite points to include vertical tangents. \end{itemize} \subsection{Subgradients for vertical tangents} \begin{itemize} \item Now let's look at the second challenge: dealing with vertical tangents. Here we will take advantage of the dual linear functions. These are the functions $\phi_{\xbar}(\uu)=\xbar\inprod\uu$, defined on $\Rn$. In 1D, functions $\phi_{\xbar}$ look like this: \begin{itemize}[noitemsep] \item $\phi_{-\infty}$, vertical epigraph to the right \item $\phi_{x}$ when $x<0$, decreasing \item $\phi_{0}$, flat \item $\phi_{x}$ when $x>0$, increasing \item $\phi_{+\infty}$, vertical epigraph to the left \end{itemize} As you can see, $\phi_{\xbar}$ for infinite values of $\xbar$ are suggestive of vertical tangents. \item Let $g:\Rn\to\eR$. We say that $\xbar$ is a dual subgradient of $g$ at $\uu$, if $g(\uu)\in\R$ and \[ g(\vv)\ge\xbar\inprod(\vv-\uu)+g(\uu). \] We write $\epartial g(\uu)$ for the set of dual subgradients of $g$ at $\uu$. 
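As a numerical sketch of this definition (using the 1D binary negative entropy as an assumed concrete example): at an interior point the dual subgradient is the ordinary slope, but at the boundary point $u=0$ no finite slope gives a minorant, which is exactly why the infinite dual subgradient $\xbar=-\infty$ is needed:

```python
import math

def g(u):
    # binary negative entropy, with g(0) = g(1) = 0 by continuity
    if u in (0.0, 1.0):
        return 0.0
    return u * math.log(u) + (1 - u) * math.log(1 - u)

# Interior point: the dual subgradient is the ordinary slope
# g'(u) = log(u/(1-u)), and the affine minorant g(u0) + slope*(v - u0)
# holds on a grid of test points.
u0 = 0.25
slope = math.log(u0 / (1 - u0))
assert all(g(i / 100) >= g(u0) + slope * (i / 100 - u0) - 1e-12
           for i in range(101))

# Boundary point u = 0: no finite slope M yields a minorant, since points
# v = e^{M-1} near 0 violate g(v) >= M*(v - 0) + g(0).  The dual subgradient
# xbar = -infinity works, because the minorant is then -infinity for v > 0
# and equals g(0) at v = 0.
for M in (-10.0, -50.0, -100.0):
    v = math.exp(M - 1.0)
    assert g(v) < M * v + g(0.0)
```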
\item Example in 1D: [show 1D entropy and subgradients at 0 and 1] \item Now let's take a look at the 2D example [from earlier?]: [show 2D entropy] \item Using properties of standard convex sets and standard convex functions, this picture can actually be turned into a proof that convex functions on $\Rn$ must have dual subgradients at every point where they are finite. Theorem: Let $g:\Rn\to\eR$ be convex. Then $\epartial g(\uu)\ne\emptyset$ whenever $g(\uu)\in\R$. \item This means that the dual subgradients must also be defined on the relative boundary of the effective domain of $g$. Let's go back to the constrained optimization problem that we used to motivate this: \[ \min_{\uu} g(\uu)\quad\text{such that }\aaa\inprod\uu=b. \] Remember that the standard sufficient condition had two parts: \begin{itemize} \item Feasibility: $\aaa\inprod\uu=b$. \item Optimality: $\lambda\aaa\in\partial g(\uu)$ for some $\lambda\in\R$. \end{itemize} It turns out that the optimality condition can be expanded to include dual subgradients, and therefore to cover all points in the effective domain of $g$, including those on the relative boundary: \begin{itemize} \item Optimality: $\lambda\aaa\in\epartial g(\uu)$ for some $\lambda\in\eR$. \end{itemize} In addition to multiples $\lambda\aaa$ with real-valued $\lambda$, the condition also allows astrons $-\omega\aaa$ and $\omega\aaa$. Pictorially, these astrons correspond to dual subgradients like this: [show picture with half-planes equal to $\pm\infty$ and the dividing line equal to zero.] So if we are minimizing the negative entropy subject to a linear constraint along the boundary [point at the picture], then the condition will be satisfied by a dual subgradient of this form [show the corresponding dual subgradient]. This is exactly the tangent at the minimum of $g$ under the linear constraint.
In fact, it can be shown that these revised conditions are both necessary and sufficient when $g$ has a bounded effective domain (like the entropy function). So, this result fills the gap in the standard theory. Although I have only presented this analysis for a single linear constraint, it is a special case of a more general optimality result known as Fenchel duality, which generalizes from standard convex analysis to astral space. \end{itemize} \subsection{Conjugacy} \begin{itemize} \item Let's take a brief moment to review the two ways in which I have generalized subgradients: [Picture of gradient of $\ln(1+e^x)$ at infinity, next to picture of dual gradient of negative entropy at 1.] First, I introduced subgradients at infinite points, and then dual subgradients at finite points. Although I have introduced these two extensions separately, there is a tight connection between them---through the concept of conjugacy. In standard convex analysis, for any convex function $f:\Rn\to\eR$, we can define its conjugate \[ f^*(\uu)=\sup_{\xx\in\Rn} [\xx\inprod\uu-f(\xx)], \] which is also a convex function. If you have not seen this definition, there's no need to study it in detail. But let me point out that conjugate functions pop up all over convex analysis, and in particular whenever constrained optimization is involved. In fact, without telling you, I have been showing you a pair of conjugate functions [show $f(x)=\ln(1+e^x)$ and negative entropy $f^*$]. The key point, which is not at all obvious from the definition, is that conjugacy inverts the standard subgradient map. Specifically, if $f$ is a closed proper convex function then \[ \uu\in\partial f(\xx) \;\Leftrightarrow\; \xx\in\partial f^*(\uu). \] The two kinds of extensions generalize this relationship, so that \[ \uu\in\partial\ef(\xbar) \;\Leftrightarrow\; \xbar\in\epartial f^*(\uu).
\] This picture is precisely an illustration of this relationship: [Picture of gradient of $f(x)=\ln(1+e^x)$ at infinity, next to picture of dual gradient of negative entropy at 1.] So indeed the fact that the subgradient of $\ef$ at $+\infty$ is $1$ is equivalent to the fact that the dual subgradient of $f^*$ at $1$ is $+\infty$. And similarly, the fact that the subgradient of $\ef$ at $-\infty$ is $0$ is equivalent to the fact that the dual subgradient of $f^*$ at $0$ is $-\infty$. This ability to extend the set of points at which functions are differentiable allows us to extend, for example, the analysis of \begin{itemize} \item Exponential-family distributions, maximum likelihood, and maximum entropy; \item Belief elicitation using proper scoring rules or prediction markets. \end{itemize} \end{itemize} \subsection{Descent algorithms} \begin{itemize} \item Now, let me move to the final part of the tutorial. I will look into algorithmic implications of the theory of astral space. I'll show you how astral space can be used to easily prove convergence of algorithms. \item Let me begin by showing that extending standard theory is non-trivial. Again, let's first review the standard machinery. \item A differentiable convex function $f$ is minimized at $\xx\in\Rn$ if and only if $\nabla f(\xx)=\zero$. So one approach to minimizing $f$ is to construct a sequence $\xx_t\in\Rn$ such that $\nabla f(\xx_t)\to\zero$. This works as long as $\seq{\xx_t}$ converges to some point $\xx\in\Rn$. Does this also work more generally when $\seq{\xx_t}$ converges to some $\xbar\in\eRn$? \item The answer is no. See the example: [Show the example, do we want to show math?] \item However, the approach does work provided that $f$ can be continuously extended to $\ef$ and that the sequence $\seq{f(\xx_t)}$ is bounded above. [State the theorem.] This result is at the crux of our general results for convergence of descent algorithms. \item You might wonder how restrictive this continuity assumption is.
One important class of functions that have continuous extensions consists of those that can be written as \[ f(\xx)=\sum_{j=1}^m \ell_j(\xx\inprod\uu_j), \] which includes many objectives in supervised machine learning, including boosting and logistic regression. To show how the continuity of subgradients can be applied to algorithms, let me make things more concrete. \item Let's take a look at the subgradient descent algorithm and a standard convergence result for it. \emph{Subgradient descent:} \begin{itemize} \item $\xx_1=\text{an arbitrary initialization point}$ \item $\xx_{t+1}=\xx_t-\eta_t\uu_t\text{ for some }\uu_t\in\partial f(\xx_t)$ \end{itemize} \emph{Convergence result:} Assume the sequence $\seq{\eta_t}$ satisfies: \begin{itemize} \item $\sum_{t=1}^\infty \eta_t=+\infty$ \item $f(\xx_{t+1})\le f(\xx_t)-\frac12\eta_t\norm{\uu_t}^2$ \end{itemize} Then $f(\xx_t)\to\inf f$. \item To apply this result, we need to ensure that the progress condition is satisfied. For example, if $f$ is twice differentiable and its Hessian is bounded, e.g., $\norm{\nabla^2f(\xx)}\le\beta$, then we can choose $\eta=1/\beta$. \item This result is nice, but somewhat limited, because it is specialized to subgradient descent. Moreover, the proof technique is somewhat specialized to subgradient descent. We would like to generalize this type of result to a broader class of algorithms like greedy coordinate descent (pick the coordinate that decreases the objective the most). This includes AdaBoost. \item Our convergence analysis will not assume any specific update rule, and we will also use a more generic progress condition. \item The progress condition is stated using the so-called \emph{adequate auxiliary function}. This is any function $h(\uu)$ such that $h(\zero)=\zero$ and for every $\epsilon>0$, \[ \inf\set{h(\uu):\:\norm{\uu}\ge\epsilon}>0. \] [Is this terminology standard? If not, can we use something more concise (also in the book)? Can we just say ``auxiliary'' function?]
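A minimal numeric instance of the subgradient descent result above (the particular loss, step size, and horizon are assumptions chosen for illustration): gradient descent on $f(x)=\log(1+e^{-x})$, whose infimum $0$ is approached only as $x\to+\infty$; with the Hessian bound $\beta=1/4$ and step $\eta=1/\beta=4$, the progress condition holds, the iterates diverge, and yet $f(x_t)\to\inf f$:

```python
import math

def f(x):
    # logistic loss log(1 + e^{-x}); inf f = 0, approached only as x -> +infinity
    return math.log1p(math.exp(-x)) if x > 0 else -x + math.log1p(math.exp(x))

def grad(x):
    return -1.0 / (1.0 + math.exp(x))

# Hessian bound beta = 1/4, so the constant step eta = 1/beta = 4 makes the
# progress condition f(x_{t+1}) <= f(x_t) - (eta/2)*u_t^2 hold at every step.
eta, x = 4.0, 0.0
xs = [x]
for _ in range(200):
    x = x - eta * grad(x)
    xs.append(x)

# Iterates diverge (there is no finite minimizer) ...
assert all(b > a for a, b in zip(xs, xs[1:])) and xs[-1] > 5.0
# ... the progress condition holds along the way ...
assert all(f(b) <= f(a) - 0.5 * eta * grad(a) ** 2 + 1e-12
           for a, b in zip(xs, xs[1:]))
# ... and f(x_t) -> inf f = 0 even though (x_t) has no finite limit.
assert f(xs[-1]) < 0.01
```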
\item We are now ready to state the convergence result: Assume $f$ can be continuously extended to astral space. Let $\seq{\xx_t}$ be the sequence of iterates and $\uu_t\in\partial f(\xx_t)$. Let $\alpha_t\ge0$ be such that \begin{itemize} \item $\sum_{t=1}^\infty \alpha_t=+\infty$ \item $f(\xx_{t+1})\le f(\xx_t)-\alpha_t h(\uu_t)$ \end{itemize} Then $f(\xx_t)\to\inf f$. It turns out that greedy coordinate descent, like the one implemented by AdaBoost for the exponential loss, is a special case of this. \item The proof is not hard and builds on the subgradient continuity I mentioned earlier. It considers two cases: \begin{itemize} \item If $\uu_t\to\zero$ for some subsequence of iterates, then the result follows by the continuity of the subgradients. \item Otherwise, there exists $\epsilon>0$ such that $\norm{\uu_t}\ge\epsilon$ for all $t$, and so the objective must decrease by $\alpha_t c$ in each time step (for some $c>0$) and therefore $f(\xx_t)\to-\infty$. \end{itemize} \end{itemize} \subsection{Conclusion} In this tutorial, we tried to provide you with a taste of astral space. Rob showed you how to construct the astral space and how to extend functions to astral space. I then showed you how to extend subgradients, and then we looked into generalized optimality conditions and convergence of algorithms. Our goal in developing this theory was to expand the foundations of convex analysis by encompassing points at infinity. We hope that this gives rise to a more general and complete theory. As examples, you saw optimality conditions that handle minima at infinity and vertical subgradients, and more general and simpler proofs of convergence. However, there are many, many themes that I have not covered. You can take a look at those themes by checking out our manuscript on arXiv (which will eventually be published as a book). I'm going to leave you with a word cloud of the section titles from our book as a way to illustrate the breadth of topics.
Thank you for your attention; Rob and I will be happy to take any questions. \section{Tutorial outline (focusing on Miro's part)---VERSION 6/7/2024} \subsection{Outline} \begin{itemize} \item In the first part of the tutorial, Rob showed you how to extend the Euclidean space with various points at infinity. [Picture reminding of the sequences of rank 1 and 2?] You also saw how the inner product can be extended to a coupling function between an astral point and a real point. In this way, each infinite point gives rise to a dual linear function: [Picture reminding of such a function?] And you saw how standard convex functions can be extended to the entire astral space, by taking limits: [Picture illustration. Perhaps of $e^x$ or possibly of $\ln(1+e^x)$?] This way we end up with lower semi-continuous functions on astral space. These are very nice, because their minimum is always attained: [Show where the minimum is attained.] \item In the second part of the tutorial, I will look into various conditions that characterize where the minimum is attained. These are called optimality conditions. Then I'll show you how we can use astral space to prove convergence of algorithms like gradient descent---even when the minimum is attained at some infinite point. \end{itemize} \subsection{Standard optimality conditions and their shortcomings} \begin{itemize} \item Let me start with optimality conditions and let's review what I'd call the ``standard machinery.'' So what is the standard way to check that a convex function is minimized at a point $\xx\in\Rn$? This is done by checking that the gradient or subgradient is zero. \item More precisely, $\uu$ is a subgradient of $f$ at $\xx$ if \[ f(\yy)\ge(\yy-\xx)\inprod\uu + f(\xx) \text{ for all }\yy\in\Rn. \] Geometrically, it means that there is a tangent at the point $\xx$, with the linear term growing as $\uu$. So of course, if there is a horizontal tangent at the point $\xx$, then the function is minimized.
When a function is only minimized at infinity, this condition cannot be invoked. So, our first question is, can we generalize subgradients and optimality conditions to infinite points? I'll return to this question in a moment, but let me present another gap in standard theory, which comes up in the context of constrained optimization. \item Say we want to minimize a convex function subject to a linear constraint. \[ \min_{\uu} g(\uu)\quad\text{such that }\aaa\inprod\uu=b. \] Then a sufficient condition for $\uu$ to be a solution consists of two parts: \begin{itemize} \item Feasibility, which states that the linear constraint is satisfied: $\aaa\inprod\uu=b$. \item And optimality, which states that the gradient of $g$ is orthogonal to the feasible set. In our case, this means: $\nabla g(\uu)=\lambda\aaa$ for some $\lambda\in\R$. \end{itemize} But this is only a sufficient condition, because convex functions don't need to have gradients on the relative boundary of their effective domain, but they can still have a solution there. For example, this is the entropy function in 1D: [Show entropy in 1D, or would it be better to redo this example in 2D?] It is differentiable on the open interval $(0,1)$, but it does not have a subgradient at 0 or 1, because the only tangents at those points are vertical---and standard subgradients can only express non-vertical tangents. So, the question is, can we extend the set of subgradients at a point $\uu$ to also represent these vertical tangents? \item To summarize, we have two challenges: how to define subgradients at infinite points, and how to enlarge the set of subgradients at finite points to include vertical tangents. \end{itemize} \subsection{Subgradients at infinity} \begin{itemize} \item Let's start with the first challenge. How can we build on the concept of a tangent to define a subgradient of $\ef$ at an arbitrary astral point $\xbar$?
\item The standard definition [show the standard formula] does not extend right away, because it is generally not possible to subtract astral points. But we can be guided by the geometric intuition: [Picture of a tangent at infinity, say for $\ln(1+e^x)$]. Intuitively, it seems that the red line is a tangent of $f$ at infinity, because it is obtained by shifting the linear function from the bottom of the chart up until it hits the function graph, and the function graph is ``hit'' at the infinite point $\xbar$. Said differently, it is possible to shift the linear function such that it is a lower bound on $\ef$ and this lower bound is tight at $\xbar$: \begin{align*} & \ef(\ybar)\ge\ybar\inprod\uu + b \text{ for all }\ybar\in\eRn &&\text{[lower bound]} \\ & \ef(\xbar)=\xbar\inprod\uu + b &&\text{[tight at $\xbar$]} \end{align*} But that doesn't fully capture the intuitive meaning of a tangent when the values $\ef(\xbar)$ are infinite: [Picture of $\abs{x}$; tangent with the slope $0.5$, evaluated at $+\infty$]. \item We need to ensure that even if $\ef(\xbar)\in\set{\pm\infty}$, the gap between the tangent and the function's graph vanishes. This leads to our final definition of a subgradient at an infinite point $\xbar$: \begin{align*} & \ef(\ybar)\ge\ybar\inprod\uu + b \text{ for all }\ybar\in\eRn, \\ & \ef(\xbar_t)-(\xbar_t\inprod\uu + b)\to 0\text{ for some sequence }\xbar_t\to\xbar. \end{align*} The set of subgradients of $\ef$ at $\xbar$ is denoted $\partial\ef(\xbar)$. \item With the subgradients at infinite points defined, we can now state an extension of the optimality condition for unconstrained minimization. Theorem: Let $f:\Rn\to\eR$ be convex. Then $\ef$ is minimized at $\xbar$ if either $\ef(\xbar)=-\infty$ or $\zero\in\partial\ef(\xbar)$. [Show an example of $\ln(1+e^x)$ at $-\infty$.] [Another example: linear function $x\mapsto cx$, $c>0$, is minimized at $-\infty$.]
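As a quick numerical illustration of this theorem (a sketch of my own, not from the talk materials): for the softplus function $f(x)=\ln(1+e^x)$, the constant function $y\mapsto 0$ is a global lower bound, and the gap $f(x_t)-0$ vanishes along $x_t\to-\infty$, witnessing $0\in\partial\ef(\xbar)$ at the astral minimizer $\xbar=\lim(-t)$.

```python
import math

f = lambda x: math.log1p(math.exp(x))          # softplus; inf f = 0
fprime = lambda x: 1.0 / (1.0 + math.exp(-x))

# The linear function y -> 0*y + 0 is a global lower bound on f, and the gap
# f(x_t) - 0 vanishes along x_t -> -inf, so 0 is a subgradient of ef at
# xbar = lim(-t), where ef attains its minimum.
for x in (0.0, -5.0, -20.0, -50.0):
    print(x, f(x), fprime(x))   # both f(x) and f'(x) tend to 0
```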
\end{itemize} \subsection{Subgradients for vertical tangents} \begin{itemize} \item Now let's look at the second challenge: dealing with vertical tangents. Here we will take advantage of the dual linear functions. These are the functions $\phi_{\xbar}(\uu)=\xbar\inprod\uu$, defined on $\Rn$. In 1D, functions $\phi_{\xbar}$ look like this: \begin{itemize}[noitemsep] \item $\phi_{-\infty}$, vertical epigraph to the right \item $\phi_{x}$ when $x<0$, decreasing \item $\phi_{0}$, flat \item $\phi_{x}$ when $x>0$, increasing \item $\phi_{+\infty}$, vertical epigraph to the left \end{itemize} As you can see, $\phi_{\xbar}$ for infinite values of $\xbar$ are suggestive of vertical tangents. \item Let $g:\Rn\to\eR$. We say that $\xbar$ is a dual subgradient of $g$ at $\uu$ if $g(\uu)\in\R$ and \[ g(\vv)\ge\xbar\inprod(\vv-\uu)+g(\uu)\text{ for all }\vv\in\Rn. \] We write $\epartial g(\uu)$ for the set of dual subgradients of $g$ at $\uu$. \item Example in 1D: [show 1D entropy and subgradients at 0 and 1] \item Now let's take a look at the 2D example [from earlier?]: [show 2D entropy] \item Using properties of standard convex sets and standard convex functions, this picture can actually be turned into a proof that convex functions on $\Rn$ must have dual subgradients at every point where they are finite. Theorem: Let $g:\Rn\to\eR$ be convex. Then $\epartial g(\uu)\ne\emptyset$ whenever $g(\uu)\in\R$. \item This means that the dual subgradients must be defined also on the relative boundary of the effective domain of $g$. Let's go back to the constrained optimization problem that we used to motivate this: \[ \min_{\uu} g(\uu)\quad\text{such that }\aaa\inprod\uu=b. \] Remember that the standard sufficient condition had two parts: \begin{itemize} \item Feasibility: $\aaa\inprod\uu=b$. \item Optimality: $\lambda\aaa\in\partial g(\uu)$ for some $\lambda\in\R$.
\end{itemize} It turns out that the optimality condition can be expanded to include dual subgradients, and therefore cover all points in the effective domain of $g$, including those on the relative boundary: \begin{itemize} \item Optimality: $\lambda\aaa\in\epartial g(\uu)$ for some $\lambda\in\eR$. \end{itemize} In addition to multiples $\lambda\aaa$ with real-valued $\lambda$, the condition also allows the astrons $-\omega\aaa$ and $\omega\aaa$. Pictorially, these astrons correspond to dual subgradients like this: [show picture with half-planes equal to $\pm\infty$ and the dividing line equal to zero.] So if we are minimizing the negative entropy subject to a linear constraint along the boundary [point at the picture], then the condition will be satisfied by the dual subgradient of this form [show the corresponding dual subgradient]. This is exactly the tangent at the minimum of $g$ under the linear constraint. In fact, it can be shown that these revised conditions are both necessary and sufficient when $g$ has a bounded effective domain (like the entropy function). So, this result fills the gap in the standard theory. Although I have only presented this analysis for a single linear constraint, it is a special case of a more general optimality result known as Fenchel's duality, which generalizes from standard convex analysis to astral space. \end{itemize} \subsection{Conjugacy} \begin{itemize} \item Let's take a brief moment to review the two ways in which I have generalized subgradients: [Picture of gradient of $\ln(1+e^x)$ at infinity, next to picture of dual gradient of negative entropy at 1.] First, I introduced subgradients at infinite points, and then dual subgradients at finite points. Although I have introduced these two extensions separately, there is a tight connection between them---through the concept of conjugacy.
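Before formalizing that connection, here is a brute-force numerical check of my own (grid bounds are an arbitrary choice) that the pair we have been using really is conjugate under the standard definition $f^*(\uu)=\sup_{\xx}[\xx\inprod\uu-f(\xx)]$: the conjugate of $f(x)=\ln(1+e^x)$ is the negative entropy $u\ln u+(1-u)\ln(1-u)$ on $[0,1]$.

```python
import math

f = lambda x: math.log1p(math.exp(x))   # f(x) = ln(1 + e^x)

def conj_numeric(u):
    """Brute-force f*(u) = sup_x [x*u - f(x)] over a grid on [-30, 30]."""
    return max((i / 100.0) * u - f(i / 100.0) for i in range(-3000, 3001))

def neg_entropy(u):
    h = lambda p: 0.0 if p == 0.0 else p * math.log(p)
    return h(u) + h(1.0 - u)

for u in (0.1, 0.5, 0.9):
    print(u, conj_numeric(u), neg_entropy(u))   # the two values closely agree
```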
In standard convex analysis, for any convex function $f:\Rn\to\eR$, we can define its conjugate \[ f^*(\uu)=\sup_{\xx\in\Rn} [\xx\inprod\uu-f(\xx)], \] which is also a convex function. If you have not seen this definition, there's no need to study it in detail. But let me point out that conjugate functions pop up all over convex analysis, and in particular whenever constrained optimization is involved. In fact, without telling you, I have been showing you a pair of conjugate functions [show $f(x)=\ln(1+e^x)$ and negative entropy $f^*$]. The key point, which is not at all obvious from the definition, is that conjugacy inverts the standard subgradient map. Specifically, if $f$ is a closed proper convex function then \[ \uu\in\partial f(\xx) \;\Leftrightarrow\; \xx\in\partial f^*(\uu). \] The two kinds of extensions generalize this relationship, so that \[ \uu\in\partial\ef(\xbar) \;\Leftrightarrow\; \xbar\in\epartial f^*(\uu). \] This picture is precisely the illustration of this relationship: [Picture of gradient of $f(x)=\ln(1+e^x)$ at infinity, next to picture of dual gradient of negative entropy at 1.] So indeed the fact that the subgradient of $\ef$ at $+\infty$ is $1$ is equivalent to the fact that the dual subgradient of $f^*$ at $1$ is $+\infty$. And similarly, the fact that the subgradient of $\ef$ at $-\infty$ is $0$ is equivalent to the fact that the dual subgradient of $f^*$ at $0$ is $-\infty$. This ability to extend the set of points at which functions are subdifferentiable allows us to extend, for example, the analysis of \begin{itemize} \item Exponential-family distributions, maximum likelihood, and maximum entropy; \item Belief elicitation using proper scoring rules or prediction markets. \end{itemize} \end{itemize} \subsection{Descent algorithms} \begin{itemize} \item Now, let me move to the final part of the tutorial. I will look into algorithmic implications of the theory of astral space.
I'll show you how astral space can be used to easily prove convergence of algorithms. \item Let me begin by showing that extending the standard theory is non-trivial. Again, let's first review the standard machinery. \item A differentiable convex function $f$ is minimized at $\xx\in\Rn$ if and only if $\nabla f(\xx)=\zero$. So one approach to minimizing $f$ is to construct a sequence $\xx_t\in\Rn$ such that $\nabla f(\xx_t)\to\zero$. This works as long as $\seq{\xx_t}$ converges to some point $\xx\in\Rn$. Does this also work more generally when $\seq{\xx_t}$ converges to some $\xbar\in\eRn$? \item The answer is no. See the example: [Show the example, do we want to show math?] \item However, the approach does work provided that $f$ can be continuously extended to $\ef$ and that the sequence $\seq{f(\xx_t)}$ is bounded above. [State the theorem.] This result is at the crux of our general results for convergence of descent algorithms. \item You might wonder how restrictive this continuity assumption is. One important class of functions with continuous extensions consists of those that can be written as \[ f(\xx)=\sum_{j=1}^m \ell_j(\xx\inprod\uu_j), \] which includes many objectives in supervised machine learning, including boosting and logistic regression. To show how the continuity of subgradients can be applied to algorithms, let me make things more concrete. \item Let's take a look at the subgradient descent algorithm and a standard convergence result for it. \emph{Subgradient descent:} \begin{itemize} \item $\xx_1=\text{an arbitrary initialization point}$ \item $\xx_{t+1}=\xx_t-\eta_t\uu_t\text{ for some }\uu_t\in\partial f(\xx_t)$ \end{itemize} \emph{Convergence result:} Assume the sequence $\seq{\eta_t}$ satisfies: \begin{itemize} \item $\sum_{t=1}^\infty \eta_t=+\infty$ \item $f(\xx_{t+1})\le f(\xx_t)-\frac12\eta_t\norm{\uu_t}^2$ \end{itemize} Then $f(\xx_t)\to\inf f$. \item To apply this result, we need to ensure that the progress condition is satisfied.
For example, if $f$ is twice differentiable and its Hessian is bounded, say $\norm{\nabla^2f(\xx)}\le\beta$, then we can choose $\eta_t=1/\beta$. \item This result is nice, but somewhat limited, because it is specialized to subgradient descent, and so is its proof technique. We would like to generalize this type of result to a broader class of algorithms like greedy coordinate descent (pick the coordinate that decreases the objective the most). This includes AdaBoost. \item Our convergence analysis will not assume any specific update rule, and we will also use a more generic progress condition. \item The progress condition is stated using the so-called \emph{adequate auxiliary function}. This is any function $h(\uu)$ such that $h(\zero)=\zero$ and for every $\epsilon>0$, \[ \inf\set{h(\uu):\:\norm{\uu}\ge\epsilon}>0. \] [Is this terminology standard? If not, can we use something more concise (also in the book)? Can we just say ``auxiliary'' function?] \item We are now ready to state the convergence result: Assume $f$ can be continuously extended to astral space. Let $\seq{\xx_t}$ be the sequence of iterates and $\uu_t\in\partial f(\xx_t)$. Let $\alpha_t\ge0$ be such that \begin{itemize} \item $\sum_{t=1}^\infty \alpha_t=+\infty$ \item $f(\xx_{t+1})\le f(\xx_t)-\alpha_t h(\uu_t)$ \end{itemize} Then $f(\xx_t)\to\inf f$. It turns out that greedy coordinate descent like the one implemented by AdaBoost for exponential loss is a special case of this. \item The proof is not hard and builds on the subgradient continuity I mentioned earlier. It considers two cases: \begin{itemize} \item If $\uu_t\to\zero$ for some subsequence of iterates, then the result follows by the continuity of the subgradients. \item Otherwise, there exists $\epsilon>0$ such that $\norm{\uu_t}\ge\epsilon$ for all $t$, and so the objective must decrease by $\alpha_t c$ in each time step (for some $c>0$), and therefore $f(\xx_t)\to-\infty$.
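For intuition, here is a tiny sketch of my own (with arbitrary constants) of the phenomenon this result covers: plain gradient descent on the softplus function, whose infimum $0$ is attained only at the astral point $-\infty$. Softplus has Hessian bounded by $\beta=1/4$, so the constant step $\eta_t=1/\beta=4$ guarantees the progress condition with $h(\uu)=\frac12\norm{\uu}^2$ and $\alpha_t=\eta_t$.

```python
import math

f = lambda x: math.log1p(math.exp(x))        # softplus: inf f = 0, no finite minimizer
grad = lambda x: 1.0 / (1.0 + math.exp(-x))  # f'(x) = sigmoid(x)

# Softplus has Hessian bounded by beta = 1/4, so the constant step
# eta = 1/beta = 4 guarantees f(x_{t+1}) <= f(x_t) - (eta/2) * grad(x_t)**2,
# i.e., the generic progress condition with h(u) = u^2 / 2.
x, eta = 0.0, 4.0
for t in range(200):
    x -= eta * grad(x)
print(x, f(x))   # x drifts toward -infinity while f(x_t) -> inf f = 0
```

The iterates have no finite limit, yet the objective values converge to the infimum, exactly as the astral convergence theorem predicts.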
\end{itemize} \end{itemize} \subsection{Conclusion} In this tutorial, we tried to provide you with a taste of astral space. Rob showed you how to construct the astral space and how to extend functions to astral space. I then showed you how to extend subgradients, and then we looked into generalized optimality conditions and convergence of algorithms. Our goal in developing this theory was to expand the foundations of convex analysis by encompassing points at infinity. We hope that this gives rise to a more general and complete theory. As examples, you saw optimality conditions that handle minima at infinity and vertical subgradients, and more general and simpler proofs of convergence. However, there are many, many themes that I have not covered. You can take a look at those themes by checking out our manuscript on arXiv (which will eventually be published as a book). I'm going to leave you with the word cloud of the section titles from our book as a way to illustrate the breadth of topics. I thank you for your attention and Rob and I will be happy to take any questions. \section{Tutorial outline (focusing on Miro's part)---VERSION 5/30/2024} \subsection{[Initial material from Rob's talk]} \begin{itemize} \item {}[Motivate astral space; define it via sequences that converge in all directions etc.] \end{itemize} \subsection{Astral space} \begin{itemize} \item Astral points are obtained as limits of sequences $(\xx_t)$ in $\Rn$ that converge in all directions.
Any such sequence can be written as \[ \xx_t=b_{1,t}\vv_1+b_{2,t}\vv_2+\dotsb+b_{k,t}\vv_k+\qq_t, \] where \begin{itemize} \item $\vv_1,\dotsc,\vv_k$ are orthonormal \item $b_{j,t}\to\infty$ such that $b_{1,t}\gg b_{2,t}\gg\dotsb\gg b_{k,t}$ \item $\qq_t\perp \vv_1,\dotsc,\vv_k$ \item $\qq_t\to\qq$ in $\Rn$ \end{itemize} This can be viewed as an \emph{ordered} basis decomposition, beginning with the direction $\vv_1$ where $\xx_t$ goes to infinity fastest, followed by $\vv_2$ where it goes slower, and so on, until the remaining component $\qq_t$, which converges to some finite point $\qq\in\Rn$. We saw examples like this earlier. [Pictorially remind of those examples.] \item Let's refer to the limit point of such a specific sequence as $\xbar$. \item The defining condition of these sequences was that $\lim(\xx_t\inprod\uu)$ exists. We use this property to extend the inner product, so that \[ \xbar\inprod\uu=\lim(\xx_t\inprod\uu). \] The value of this operation can be finite or infinite, so it is not exactly an inner product. For example, $\xbar\inprod\vv_1=\lim(\xx_t\inprod\vv_1)=\lim b_{1,t}=+\infty$ (all components after the leading one are orthogonal to $\vv_1$, so they are zeroed out). \item We refer to the extended inner product as the \emph{coupling function}; it is a map $\eRn\times\Rn\to\eR$. The first argument can be any astral point, the second argument can only be a finite vector. \item The values of the coupling function can be read off from the decomposition. \begin{itemize} \item Let $\uu\in\Rn$ \item If $\uu$ is orthogonal to $\vv_1,\dotsc,\vv_{j-1}$, but not to $\vv_j$: \begin{itemize} \item Then $\xbar\inprod\uu\in\set{\pm\infty}$ according to the sign of $\vv_j\inprod\uu$. \end{itemize} \item If $\uu$ is orthogonal to all $\vv_1,\dotsc,\vv_k$: \begin{itemize} \item Then $\xbar\inprod\uu=\qq\inprod\uu$.
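These cases can be checked numerically along a concrete sequence (a small sketch of mine; the particular rates $t^2\gg t$ are an arbitrary choice):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
q  = np.array([0.0, 0.0, 3.0])

def x(t):
    """x_t = b_{1,t} v1 + b_{2,t} v2 + q with b_{1,t} = t^2 >> b_{2,t} = t."""
    return t**2 * v1 + t * v2 + q

for u in (v1, -v1 + 5.0 * v2, v2, np.array([0.0, 0.0, 1.0])):
    print(u, [x(t) @ u for t in (10.0, 100.0, 1000.0)])
# u = v1           -> +inf (sign of v1 @ u)
# u = -v1 + 5 v2   -> -inf: the leading direction v1 wins despite the v2 term
# u = v2           -> +inf (orthogonal to v1; sign of v2 @ u)
# u = e3           -> q @ u = 3, since u is orthogonal to v1 and v2
```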
\end{itemize} \end{itemize} \item Pictorially that looks like this \begin{itemize} \item {}[Figure of $\xbar\inprod\uu$ as a function of $\uu$ in 2D] \end{itemize} \item The values of the coupling function $\xbar\inprod\uu$ across all $\uu$ characterize $\xbar$. They also define convergence and topology on astral space: \[ \xbar_t\to\xbar\quad\text{if }\xbar_t\inprod\uu\to\xbar\inprod\uu\text{ for all }\uu\in\Rn. \] [Mention the (sub)base of the topology] Nice properties: $\eRn$ is compact and it is the astral closure of $\Rn$. So every sequence in $\eRn$ has a convergent subsequence, and any point in $\eRn$ is a limit of points in $\Rn$. \end{itemize} \subsection{Functions on astral space, subgradients} \begin{itemize} \item \textbf{Continuous extension.} If a function $f:\Rn\to\eR$ is continuous, it is sometimes (but not always!) possible to continuously extend it to $\eRn$. This means defining a function $\ef:\eRn\to\eR$ which agrees with $f$ on $\Rn$ and whose values at the new astral points are obtained as limits \[ \ef(\xbar) = \lim f(\xx_t) \] for any $\xx_t\to\xbar$. More generally, we consider lower semicontinuous extensions, where we just take the smallest possible limit. \item Let me show you some examples. \begin{itemize} \item Linear function $\ell_{\uu}(\xx)=\xx\inprod\uu$ can be extended to $\ellbar_{\uu}(\xbar)=\xbar\inprod\uu$. We just did this! \item To extend log-sum-exp $a(\xx)=\ln(e^{x_1}+e^{x_2}+\dotsb+e^{x_n})$, first write it as \[ a(\xx)= \ln(e^{\xx\inprod\ee_1}+e^{\xx\inprod\ee_2}+\dotsb+e^{\xx\inprod\ee_n}) \] where $\ee_1,\dotsc,\ee_n$ are the vectors of the Euclidean basis. The extension is now straightforward \[ \abar(\xbar)= \ln(e^{\xbar\inprod\ee_1}+e^{\xbar\inprod\ee_2}+\dotsb+e^{\xbar\inprod\ee_n}). \] \emph{Technical points:} $e^{-\infty}=0$, $e^{+\infty}=+\infty$, $\ln 0=-\infty$, $\ln+\infty=+\infty$, the sum inside the parentheses is always well defined.
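A small numerical sketch of mine of this extension: evaluating log-sum-exp along sequences converging to astral points, with the extended-arithmetic conventions implemented directly.

```python
import math

def logsumexp(x):
    """Extended log-sum-exp, following the conventions e^{-inf} = 0,
    e^{+inf} = +inf, ln 0 = -inf, ln(+inf) = +inf."""
    m = max(x)
    if m == math.inf:
        return math.inf
    if m == -math.inf:       # every coordinate couples to -inf: ln 0 = -inf
        return -math.inf
    return m + math.log(sum(math.exp(xi - m) for xi in x))

# along x_t = t*(1, -1, 0), the value grows without bound: abar(xbar) = +inf
print([logsumexp([t, -t, 0.0]) for t in (1.0, 10.0, 100.0)])

# along x_t = (-t, -t, 2), only the finite coordinate survives:
# abar(xbar) = ln(e^{-inf} + e^{-inf} + e^2) = 2
print([logsumexp([-t, -t, 2.0]) for t in (1.0, 10.0, 100.0)])
```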
\end{itemize} \item \textbf{Subgradients.} For a convex function $f:\Rn\to\eR$, a non-vertical tangent to its graph at any point defines a subgradient. \item More precisely, $\uu$ is a subgradient of $f$ at $\xx$ if \[ f(\yy)\ge(\yy-\xx)\inprod\uu + f(\xx) \text{ for all }\yy\in\Rn. \] Pictorially: [show figure]. \item Can we build on the concept of a tangent to define a subgradient of $\ef$ at an arbitrary astral point $\xbar$? \item Need to be careful, because it is generally not possible to subtract astral points. We would like to formalize the idea in this picture: [picture of a tangent at infinity, say for $\ln(1+e^x)$]. Perhaps: \begin{align*} & \ef(\ybar)\ge\ybar\inprod\uu + b \text{ for all }\ybar\in\eRn. \\ & \ef(\xbar)=\xbar\inprod\uu + b. \end{align*} But that doesn't fully capture the intuitive meaning of a tangent when values $\ef(\xbar)$ are infinite: [picture of $\abs{x}$; tangent with the slope $0.5$, evaluated at $+\infty$]. \item We need to ensure that even if $\ef(\xbar)\in\set{\pm\infty}$, the gap between the tangent and the function's graph vanishes: \begin{align*} & \ef(\ybar)\ge\ybar\inprod\uu + b, \\ & \ef(\xx_t)-(\xx_t\inprod\uu + b)\to 0\text{ for some sequence }\xx_t\to\xbar. \end{align*} This defines when $\uu$ is a subgradient at $\xbar$. The set of subgradients of $\ef$ at $\xbar$ is denoted $\partial\ef(\xbar)$. \item Theorem: Let $f:\Rn\to\eR$ be continuous, convex, $f\not\equiv+\infty$, and assume it can be continuously extended to $\ef$. Then $\ef$ is minimized at $\xbar$ if either $\ef(\xbar)=-\infty$ or $\zero\in\partial\ef(\xbar)$. [By continuity, any sequence $\xx_t\to\xbar$ is approaching $\inf f=\ef(\xbar)$.] [Show an example of $\ln(1+e^x)$ at $-\infty$.] \item Some examples: \begin{itemize} \item Linear function---it is its own tangent (and no other tangents possible), so $\partial\ellbar_{\uu}(\xbar)=\set{\uu}$. If $\uu=\zero$ then minimized at all $\xbar$. 
If $\uu\ne\zero$ then minimized whenever $\xbar\inprod\uu=-\infty$, for example when $\xbar=\lim (-t\uu)$ or $\lim (-t\uu+\qq)$ for any $\qq\in\Rn$. \item Log-sum-exp. The standard gradient of the log-sum-exp function will be denoted $\vmu(\xx)=\nabla a(\xx)\in\Rn$, with components \[ \mu_i(\xx)=\frac{e^{x_i}}{\sum_{j=1}^n e^{x_j}}=\frac{1}{\sum_{j=1}^n e^{x_j-x_i}}=\frac{1}{\sum_{j=1}^n e^{\xx\inprod(\ee_j-\ee_i)}}. \] The vector $\vmu(\xx)$ is in the simplex and its components are positive. In fact, any point in the relative interior of the simplex can be obtained as a gradient for some $\xx\in\Rn$. The map $\vmu:\Rn\to\Rn$ can be continuously extended to $\vmubar:\eRn\to\Rn$, with components \[ \mubar_i(\xbar)=\frac{1}{\sum_{j=1}^n e^{\xbar\inprod(\ee_j-\ee_i)}}. \] By continuity, $\vmubar(\xbar)$ is in the simplex, but its components can be equal to zero. Any point in the simplex can be obtained as a gradient for some $\xbar\in\eRn$. \emph{Technical points:} $1/{+\infty}=0$, the sum in the denominator is always $\ge 1$. \item Multinomial log loss. [We want to ensure that we can model and fit multinomial likelihood even if the solution is on the boundary. We could mention this earlier in the talk as part of the motivation.] \begin{align*} f(\xx)&= a(\xx)-\xx\inprod\hpp,\quad\text{[pull this equation out of the hat?]} \\ \nabla f(\xx)&=\nabla a(\xx)-\hpp=\vmu(\xx)-\hpp, \\ \nabla\ef(\xbar)&=\vmubar(\xbar)-\hpp. \end{align*} Since $\ef$ is bounded below, it is minimized at $\xbar$ if and only if $\nabla\ef(\xbar)=\zero$, i.e., at $\xbar$ with $\vmubar(\xbar)=\hpp$. \end{itemize} \end{itemize} \subsection{Conjugacy} \begin{itemize} \item The area above the graph of a function is called an epigraph. If the function $f:\Rn\to(-\infty,+\infty]$ is lower semicontinuous, convex, $f\not\equiv+\infty$ (i.e., $f$ is closed proper convex), then its epigraph is a non-empty closed convex set that does not include any vertical lines.
Such sets can be characterized by their non-vertical tangents, as intersections of the corresponding non-vertical half-spaces. In convex analysis, these non-vertical tangents are represented using conjugate functions. \item For any vector $\uu$, consider the tangent that is obtained by shifting the linear function $\ell_\uu$ until it hits the epigraph. We write \[ f^*(\uu)=\inf\{b:\:f(\xx)\ge\xx\inprod\uu-b\text{ for all }\xx\in\Rn\}, \] and this can be rearranged to obtain the definition \[ f^*(\uu)=\sup_{\xx\in\Rn} [-f(\xx)+\xx\inprod\uu]. \] The function $f^*$ is itself convex, and its subgradient mapping ``inverts'' the subgradient mapping of $f$, meaning that \[ \xx\in\partial f^*(\uu)\;\Leftrightarrow\;\uu\in\partial f(\xx). \] \item For example, the conjugate of the log-sum-exp function is the negative entropy: \[ a^*(\uu)=\begin{cases} \sum_{i=1}^n u_i\ln u_i &\text{if $\uu$ is a probability distribution} \\ +\infty &\text{otherwise.} \end{cases} \] \emph{Technical points:} $0\ln 0=0$. \item Recall that $\uu\in\partial a(\xx)$ if \[ u_i =\frac{e^{x_i}}{\sum_{j=1}^n e^{x_j}} \text{ for } i=1,\dotsc,n. \] And so also \[ \xx\in\partial a^*(\uu) \quad \text{if } u_i = \frac{e^{x_i}}{\sum_{j=1}^n e^{x_j}} \text{ for } i=1,\dotsc,n. \] How do these concepts translate to astral space? \item \textbf{Astral conjugate.} First, given a continuous extension $\ef:\eRn\to\eR$, we can still carry out the process of shifting a linear function $\ellbar_\uu$ until it hits the epigraph. We write \[ \fextstar(\uu)=\inf\{b:\:\ef(\xbar)\ge\xbar\inprod\uu-b\text{ for all }\xbar\in\eRn\}. \] This turns out to be equal to \[ \fextstar(\uu)=\sup_{\xbar\in\eRn} [-\ef(\xbar)\plusd\xbar\inprod\uu]. \] Here, we introduced the operation of \emph{downward addition}. It is like standard addition, except when adding positive and negative infinity the result is always negative infinity (the ambiguous case is resolved ``downward''). [Maybe don't introduce the operation, just describe the convention?]
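Downward addition is easy to model with floating-point infinities (a two-minute sketch of mine):

```python
import math

def plusd(a, b):
    """Downward addition on the extended reals: like +, but the ambiguous
    case inf + (-inf) is resolved downward, to -inf."""
    if (a == math.inf and b == -math.inf) or (a == -math.inf and b == math.inf):
        return -math.inf
    return a + b

print(plusd(math.inf, -math.inf))   # -inf (ambiguity resolved downward)
print(plusd(-math.inf, math.inf))   # -inf
print(plusd(math.inf, 5.0))         # inf  (unambiguous cases are ordinary +)
print(plusd(2.0, 3.0))              # 5.0
```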
Notice that although $\ef$ is a function defined on $\eRn$, its astral conjugate $\fextstar$ is only defined for $\uu\in\Rn$. This asymmetry is because the coupling function $\xbar\inprod\uu$ is defined on $\eRn\times\Rn$. We refer to $\xbar\in\eRn$ as the primal variable and $\uu\in\Rn$ as the dual variable. The extension $\ef$ is defined over the primal space, whereas its conjugate $\fextstar$ is defined over the dual space. For example, for the extended log-sum-exp function $\abar$, we have \[ \abarstar(\uu)=a^*(\uu)=\begin{cases} \sum_{i=1}^n u_i\ln u_i &\text{if $\uu$ is a probability distribution} \\ +\infty &\text{otherwise.} \end{cases} \] (This is not a coincidence; it is generally true that $\fextstar=f^*$ when $f$ is proper.) We know that the subgradient mapping $\partial a^*$ inverts the standard subgradient mapping $\partial a$. Does something similar happen when moving from $\abar$ to $\abarstar$? \end{itemize} \subsection{Dual subgradient} \begin{itemize} \item The answer is yes! We will define the \emph{dual subgradient} of $\fextstar$, which will be the inverse of $\partial\ef$. We will call the latter the \emph{primal subgradient}. \item The primal subgradient was defined by shifting linear functions $\ellbar_\uu(\xbar)=\xbar\inprod\uu$. These functions evaluate the coupling function in the primal variable (while fixing the dual variable). \item To define dual subgradients, we consider the ``dual linear'' functions, defined as $\phi_{\xbar}(\uu)=\xbar\inprod\uu$. They evaluate the coupling function in the dual variable (while fixing the primal variable). In 1D, functions $\phi_{\xbar}$ look like this: \begin{itemize}[noitemsep] \item $\phi_{-\infty}$, vertical epigraph to the right \item $\phi_{x}$ when $x<0$, decreasing \item $\phi_{0}$, flat \item $\phi_{x}$ when $x>0$, increasing \item $\phi_{+\infty}$, vertical epigraph to the left \end{itemize} As you can see, the infinite values of $\xbar$ are suggestive of vertical tangents.
\item Let $g:\Rn\to\eR$. We say that $\xbar$ is a dual subgradient of $g$ at $\uu$ if $g(\uu)\in\R$ and \[ g(\vv)\ge\xbar\inprod(\vv-\uu)+g(\uu)\text{ for all }\vv\in\Rn. \] We write $\epartial g(\uu)$ for the set of dual subgradients of $g$ at $\uu$. \item Example in 1D: [show 1D entropy and subgradients at 0 and 1] \item Theorem: Let $g:\Rn\to\eR$ be convex. Then $\epartial g(\uu)\ne\emptyset$ whenever $g(\uu)\in\R$. [Maybe: we could show the 2D variant of this for entropy and say that it is basically the proof (as an aside, mention that this is done using separation).] \item Theorem: Let $f:\Rn\to\eR$ be convex, proper and continuous, with a continuous extension $\ef:\eRn\to\eR$. Then \[ \xbar\in\epartial f^*(\uu)\;\Leftrightarrow\;\uu\in\partial\ef(\xbar). \] [Mention lower semicontinuous.] \item Some examples: \begin{itemize} \item In 1D: $\ef(\ex)=\ln(1+e^{\ex})$, $f^*(u)=u\ln u + (1-u)\ln(1-u)$ for $u\in[0,1]$ (and equals $+\infty$ otherwise). \item Log partition function and entropy. Entropy takes on finite values across the simplex, but it does not have finite subgradients on the boundary. Nevertheless, it has dual subgradients: \[ \xbar\in\epartial a^*(\uu) \quad \text{if } u_i = \frac{1}{\sum_{j=1}^n e^{\xbar\inprod(\ee_j-\ee_i)}} \text{ for } i=1,\dotsc,n. \] \end{itemize} \item As an application, consider the problem of maximizing the entropy of a distribution subject to linear constraints, that is, minimizing the negative entropy $a^*$: \[ \min_{\uu\in\Rn} a^*(\uu)\quad\text{subject to }\A\uu=\bb. \] Using Lagrange multipliers, a sufficient condition for $\uu$ to be a solution is that: \begin{itemize} \item $\A\uu=\bb$. \item $\transA\vlambda\in\partial a^*(\uu)$ for some $\vlambda\in\R^{m}$. \end{itemize} With the use of dual astral subgradients, we obtain a condition that is both sufficient and necessary: \begin{itemize} \item $\A\uu=\bb$. \item $\transA\vlambdabar\in\epartial a^*(\uu)$ for some $\vlambdabar\in\eRf{m}$. \end{itemize} [Mention that linear maps extend continuously to astral space.
And that the reason why we can derive these necessary and sufficient conditions is that the entropy is dual subdifferentiable over the simplex. Mention that in the book we cover a more general case known as Fenchel's duality.] \end{itemize} \subsection{Descent algorithms} \begin{itemize} \item A differentiable convex function $f$ is minimized at $\xx\in\Rn$ if and only if $\nabla f(\xx)=\zero$. One approach to minimizing $f$ is to construct a sequence $\xx_t\in\Rn$ such that $\nabla f(\xx_t)\to\zero$. This works as long as $\seq{\xx_t}$ converges to some point $\xx\in\Rn$. Does this also work more generally when $\seq{\xx_t}$ converges to some $\xbar\in\eRn$? \item The answer is no. See the example: [show example] \item However, the approach does work provided that $f$ can be continuously extended to $\ef$ and that the sequence $\seq{f(\xx_t)}$ is bounded above. [State the theorem.] \item This result is at the crux of our general results for convergence of descent algorithms. \item To motivate our algorithmic setup, we begin with a specific type of convergence result for subgradient descent when $f:\Rn\to\eR$ is convex and proper: \begin{itemize} \item $\xx_1=\text{an arbitrary initialization point}$ \item $\xx_{t+1}=\xx_t-\eta_t\uu_t\text{ for some }\uu_t\in\partial f(\xx_t)$ \end{itemize} Assume the sequence $\seq{\eta_t}$ satisfies: \begin{itemize} \item $\sum_{t=1}^\infty \eta_t=+\infty$ \item $f(\xx_{t+1})\le f(\xx_t)-\frac12\eta_t\norm{\uu_t}^2$ \end{itemize} Then $f(\xx_t)\to\inf f$. \item For example, if $f$ is twice differentiable and its Hessian is bounded, say $\norm{\nabla^2f(\xx)}\le\beta$, then we can choose $\eta_t=1/\beta$. \item We would like to be able to generalize this type of result to a broader class of algorithms like greedy coordinate descent (pick the coordinate that decreases the objective the most). This includes AdaBoost.
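To preview why this generalization is useful, here is a toy run of greedy coordinate descent (my own construction, with an arbitrary objective and step size) on a sum of exponentials whose infimum $0$ is approached only as the iterates diverge, much like AdaBoost's exponential loss on separable data:

```python
import math

# A toy "boosting-style" objective: a sum of exponentials whose infimum 0
# is approached only as the iterates diverge to infinity.
def f(x):
    x1, x2 = x
    return math.exp(-x1) + math.exp(-x2) + math.exp(x1 - x2)

def grad(x):
    x1, x2 = x
    return [-math.exp(-x1) + math.exp(x1 - x2),
            -math.exp(-x2) - math.exp(x1 - x2)]

x, eta = [0.0, 0.0], 0.3
for t in range(2000):
    g = grad(x)
    j = 0 if abs(g[0]) >= abs(g[1]) else 1   # greedy: pick the steepest coordinate
    x[j] -= eta * g[j]
print(x, f(x))   # the iterates grow without bound while f(x_t) -> inf f = 0
```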
\item $\norm{\cdot}$ in the descent condition will be replaced by an \emph{adequate auxiliary function}: $h(\uu)$ such that $h(\zero)=\zero$ and for every $\epsilon>0$, \[ \inf\set{h(\uu):\:\norm{\uu}\ge\epsilon}>0. \] [Is this terminology standard? If not, can we use something more concise (also in the book)? Can we just say ``auxiliary'' function?] \item Let $\seq{\xx_t}$ be the sequence of iterates and $\uu_t\in\partial f(\xx_t)$. Let $\alpha_t\ge0$ be such that \begin{itemize} \item $\sum_{t=1}^\infty \alpha_t=+\infty$ \item $f(\xx_{t+1})\le f(\xx_t)-\alpha_t h(\uu_t)$ \end{itemize} Then $f(\xx_t)\to\inf f$. \item {}[Proof based on case analysis + convergence of subgradients to zero.] \end{itemize} \section{Rob's notes} \subsection{What astral space is like} As previously discussed, every astral point $\xbar\in\extspace$ can be written in the form given in \eqref{eq:intro-sum-astrons}, so that $\xbar=\ebar\plusl\qq$, where $\qq\in\Rn$ and $\ebar=\limray{\vv_1}\plusl\cdots\plusl\limray{\vv_k}$. An astral point, such as $\ebar$, that is the leftward sum of (finitely many) astrons is called an \emph{icon}. Equivalently, $\ebar$ is an icon if and only if it is an \emph{idempotent} (with respect to leftward addition), meaning $\ebar\plusl\ebar=\ebar$. Thus, every astral point $\xbar$ can be decomposed into an \emph{iconic part} $\ebar$, and a \emph{finite part} $\qq$. Icons play a central role in our development. Every point's iconic part is uniquely determined. As a result, astral space can be partitioned into disjoint sets called \emph{galaxies}, with each one, written $\galax{\ebar}$, associated with an icon $\ebar$ and consisting exactly of those points with $\ebar$ as their iconic part; thus, \[ \galax{\ebar} = \braces{ \ebar \plusl \qq : \qq\in\Rn }. \] Every such galaxy is homeomorphic to $(n-k)$-dimensional Euclidean space, $\R^{n-k}$, where $k$ is the astral rank of the associated icon, $\ebar$.
In turn, that galaxy's closure is homeomorphic to $(n-k)$-dimensional \emph{astral} space, $\extspac{n-k}$. We can get a sense of what astral space is like by imagining a journey through the space. Suppose we begin the journey at the origin, $\zero$, and choose a direction $\vv_1\in\Rn$ (with $\norm{\vv_1}=1$). We thus start in our ``home'' galaxy, $\galax{\zero}=\Rn$, with $\zero$ at its center. After traveling any finite distance in the direction of $\vv_1$, we will of course still be in $\Rn$. But after traveling infinitely far in that same direction, we will finally arrive at the astron $\limray{\vv_1}$. Upon arrival, we will see that that astron is at the center of its own galaxy $\galax{\limray{\vv_1}}$, which is a topological copy of $\R^{n-1}$, and thus of dimension one less than the original space $\Rn$. The closure of this galaxy is homeomorphic to $\extspac{n-1}$, $(n-1)$-dimensional astral space. We therefore can continue the journey in the same way, choosing a second direction $\vv_2\in\Rn$ (with $\norm{\vv_2}=1$ and linearly independent of $\vv_1$), and traveling infinitely far from $\limray{\vv_1}$ in the direction of $\vv_2$. Finally, we will arrive at $\limray{\vv_1}\plusl\limray{\vv_2}$, which can be viewed as an astron of $\limray{\vv_1}$'s galaxy, or an icon of the original space. This icon is at the center of another galaxy, $\galax{\limray{\vv_1}\plusl\limray{\vv_2}}$, which is homeomorphic to $\R^{n-2}$. This process can be repeated, say, $k$ times, arriving at icon $\ebar=\limray{\vv_1}\plusl\cdots\plusl\limray{\vv_k}$. From $\ebar$, at the center of its galaxy $\galax{\ebar}$, we can travel directly to a finite point $\qq$ in that galaxy, which is, equivalently, the point $\xbar=\ebar\plusl\qq$ in the original space $\extspace$. 
In this way, every astral point can be reached along such an imagined voyage, from one galaxy to the next, each of dimension one less than the previous one, and finally traveling to a finite point in the last galaxy. Of course, the point $\xbar$ could also be reached by following an entirely different path. For instance, the sequence given in \eqref{eqn:xt-poly-seq} travels along a curve, never leaving $\Rn$, but ultimately converging to the same point. \subsection{old theorems about subgradients} \begin{theorem} \label{thm:adif-fext-inverses:long} Let $f:\Rn\rightarrow\Rext$ be convex, and let $\xbar\in\extspace$ and $\uu\in\Rn$. Then $\uu\in\asubdiffext{\xbar}$ if and only if $\xbar\in\adsubdiffstar{\uu}$ and $\xbar\in\cldom{f}$. \end{theorem} \proof Note first that $\cldomfext=\cldom{f}$ by Proposition~\ref{pr:h:1}(\ref{pr:h:1c}), and $\fextstar=\fstar$ by Proposition~\ref{pr:fextstar-is-fstar}. We use these identities throughout the proof. As a result, Theorem~\ref{thm:asubdif-implies-adsubdif} (with $F=\fext$) immediately implies the ``only if'' part of the theorem's statement. For the converse, suppose for the rest of the proof that $\xbar\in\adsubdiffstar{\uu}$ and $\xbar\in\cldom{f}$. We aim to prove $\uu\in\asubdiffext{\xbar}$. From $\xbar\in\cldom{f}$, it follows that $\fext(\xbar)=\fdub(\xbar)$ by Theorem~\ref{thm:fext-neq-fdub}, and also that $f\not\equiv+\infty$, which in turn implies $\fstar>-\infty$ (as can be seen from its definition, \eqref{eq:fstar-def}). Thus, if $-\fext(\xbar)$ and $\xbar\cdot\uu$ are summable, then $-\fdub(\xbar)$ and $\xbar\cdot\uu$ are as well, so that part~(\ref{cor:asub-summable-summary:c}) and all the requisite conditions of Corollary~\ref{cor:asub-summable-summary} are satisfied (with $F=\fext$), thereby implying $\uu\in\asubdiffext{\xbar}$ in this case. Therefore, we assume henceforth that $-\fext(\xbar)$ and $\xbar\cdot\uu$ are \emph{not} summable. 
Specifically, this means that \begin{equation} \label{eq:thm:adif-fext-inverses:long:1} \fext(\xbar) =\xbar\cdot\uu\in\{-\infty,+\infty\}. \end{equation} We prove that $\uu\in\asubdiffext{\xbar}$ by constructing a point $\zbar\in\extspacnp$ satisfying all the conditions of Proposition~\ref{pr:equiv-ast-subdif-defn}(\ref{pr:equiv-ast-subdif-defn:c}) (with $F=\fext$, and where, as previously discussed, $\homat$ is the matrix defined in \eqref{eqn:homat-def}). We begin by defining a matrix $\amatu\in\Rnpnp$ that will be used throughout the proof. This matrix is identical to the $(n+1)\times(n+1)$ identity matrix, except that the first $n$ entries of the bottom row are equal to $\trans{\uu}$. Thus, in block form, \[ \amatu = \left[ \begin{array}{ccc|c} & & & \\ ~ & \Iden & ~ & \zero \\ & & & \\ \hline \rule{0pt}{2.5ex} & \trans{\uu} & & 1 \end{array} \right] \] where $\Iden$ is the $n\times n$ identity matrix, and $\zero$, as usual, is the all-zeros vector in $\Rn$. For $t=1,2,\ldots$, let $y_t = \max\{-t, -\fstar(\uu)\}$. Then $y_t\in\R$ (since $\fstar>-\infty$), $y_t\geq -\fstar(\uu)$, and $y_t\rightarrow -\fstar(\uu)$. Next, let $\zbar_t=\amatu \rpair{\xbar}{y_t}$, forming a sequence in $\extspacnp$. By sequential compactness, this sequence must have a convergent subsequence; by discarding all other elements, we can assume that the entire sequence converges. Let $\zbar\in\extspacnp$ be its limit; that is, $\zbar_t\rightarrow\zbar$. We proceed to prove that $\zbar$ satisfies all the conditions of Proposition~\ref{pr:equiv-ast-subdif-defn}(\ref{pr:equiv-ast-subdif-defn:c}). First, \[ \homat \zbar_t = \homat \amatu \rpair{\xbar}{y_t} = \homat \rpair{\xbar}{y_t} = \xbar. \] The second equality is because $\homat \amatu = \homat$, by a straightforward matrix calculation (also using Proposition~\ref{pr:h:4}(\ref{pr:h:4d})). The last equality is by Theorem~\ref{thm:homf}(\ref{thm:homf:aa}).
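The identity $\homat \amatu = \homat$ behind the second equality can be written out in block form. (This is only a sketch, under the assumption, suggested by the action $\homat\rpair{\xbar}{y}=\xbar$, that $\homat$ multiplies as the $n\times(n+1)$ block matrix whose left block is $\Iden$ and whose last column is $\zero$.) Since the first $n$ rows of $\amatu$ agree with those of the identity matrix,
\[
\homat \amatu
=
\left[ \begin{array}{c|c} \Iden & \zero \end{array} \right]
\left[ \begin{array}{c|c} \Iden & \zero \\ \hline \trans{\uu} & 1 \end{array} \right]
=
\left[ \begin{array}{c|c} \Iden & \zero \end{array} \right]
= \homat.
\]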
Since $\homat\zbar_t\rightarrow\homat\zbar$ (\Cref{cor:aff-cont}), it follows that $\homat\zbar=\xbar$. For $\ww\in\Rn$ and $a\in\R$, we can compute $\zbar_t \cdot \rpair{\ww}{a}$, which will be used repeatedly in the remainder of the proof: \begin{eqnarray} \zbar_t \cdot \rpair{\ww}{a} &=& \paren{\amatu \rpair{\xbar}{y_t}} \cdot \rpair{\ww}{a} \nonumber \\ &=& \rpair{\xbar}{y_t} \cdot \paren{ \transamatu \rpair{\ww}{a} } \nonumber \\ &=& \rpair{\xbar}{y_t} \cdot \rpair{\ww + a \uu}{a} \nonumber \\ &=& \xbar\cdot(\ww + a \uu) + a y_t. \label{eq:thm:adif-fext-inverses:long:4} \end{eqnarray} These equalities follow respectively from: definition of $\zbar_t$; \Cref{thm:mat-mult-def}; straightforward matrix calculation; and Theorem~\ref{thm:homf}(\ref{thm:homf:a}). In particular, setting $\ww=\zero$ and $a=1$, \eqref{eq:thm:adif-fext-inverses:long:4} shows that \[ \zbar_t \cdot \rpair{\zero}{1} = \xbar\cdot\uu + y_t = \fext(\xbar) \] with the last equality following from \eqref{eq:thm:adif-fext-inverses:long:1}. Since $\zbar_t \cdot \rpair{\zero}{1} \rightarrow \zbar \cdot \rpair{\zero}{1}$ (by Theorem~\ref{thm:i:1}(\ref{thm:i:1c})), this implies that $\zbar \cdot \rpair{\zero}{1} = \fext(\xbar)$. Likewise, setting $\ww=\uu$ and $a=-1$, \eqref{eq:thm:adif-fext-inverses:long:4} yields \[ \zbar_t \cdot \rpair{\uu}{-1} = \xbar\cdot\zero - y_t = -y_t. \] Since $y_t\rightarrow -\fstar(\uu)$, and since $\zbar_t \cdot \rpair{\uu}{-1} \rightarrow \zbar \cdot \rpair{\uu}{-1}$, we conclude that $\zbar \cdot \rpair{\uu}{-1} = \fstar(\uu)$. Thus, $\zbar$ satisfies all of the conditions of Proposition~\ref{pr:equiv-ast-subdif-defn}(\ref{pr:equiv-ast-subdif-defn:c}), except that it still remains to show that $\zbar\in\clepifext$. In fact, $\clepifext$ is the same as $\clepi{f}$ by Proposition~\ref{pr:wasthm:e:3}(\ref{pr:wasthm:e:3c}) (keeping in mind that we are now identifying points and sets in $\extspace\times\R$ with their homeomorphic images in $\nclset\subseteq\extspacnp$).
Furthermore, since $\epi f$ is a convex subset of $\Rnp$ (since $f$ is convex), its closure in $\extspacnp$ is equal to its outer hull, $\ohull(\epi f)$, by Theorem~\ref{thm:e:6}. That is, \[ \clepifext=\clepi{f}=\ohull(\epi f). \] Therefore, to complete the proof, it suffices to show that $\zbar$ is in $\ohull(\epi f)$. To this end, we prove that every closed halfspace in $\extspacnp$ that includes $\epi f$ must also include $\zbar$. Let $H$ be the closed halfspace \[ H = \{ \zbar' \in \extspacnp : \zbar'\cdot\rpair{\ww}{a} \leq b \} \] for some $\ww\in\Rn$ and $a, b\in\R$, and assume $\epi f \subseteq H$. Under this assumption, we aim to prove $\zbar\in H$. We proceed in cases based on the value of $a$. Without loss of generality, we assume that $a\in\{-1,0,1\}$. (Otherwise, if $a\neq 0$, we can divide $\ww$, $a$ and $b$ by $|a|$, resulting in the same halfspace, but now with $a\in\{-1,1\}$.) Suppose first that $a=1$. Actually, this case is impossible under the assumption $\epi f\subseteq H$. To see this, let $\xx'\in\dom f$. Then for all $y'\in\R$ with $y'\geq f(\xx')$, we have $\rpair{\xx'}{y'}\in\epi f\subseteq H$, which means $ \xx'\cdot\ww + y' \leq b$. However, this is a contradiction since this inequality cannot hold for arbitrarily large values of $y'$ (with $\xx'\cdot\ww$ and $b$ in $\R$). Thus, $a$ can only be in $\{-1,0\}$. Suppose next that $a=0$. Since $\xbar\in\cldom{f}$, there exists a sequence $\seq{\rpair{\xx'_t}{y'_t}}$ in $\epi f$ with $\xx'_t\rightarrow\xbar$. Since $\rpair{\xx'_t}{y'_t}\in\epi f\subseteq H$, we have \begin{equation} \label{eq:thm:adif-fext-inverses:long:3} \xx'_t\cdot\ww = \rpair{\xx'_t}{y'_t}\cdot\rpair{\ww}{0} \leq b. \end{equation} Thus, \[ \zbar\cdot\rpair{\ww}{0} = \zbar\cdot(\trans{\homat} \ww) = (\homat \zbar) \cdot \ww = \xbar\cdot \ww \leq b. \] The equalities follow respectively from a simple matrix calculation; \Cref{thm:mat-mult-def}; and our argument above establishing that $\homat\zbar=\xbar$. 
The inequality follows from \eqref{eq:thm:adif-fext-inverses:long:3} since $\xx'_t\cdot\ww \rightarrow \xbar\cdot\ww$ (by Theorem~\ref{thm:i:1}(\ref{thm:i:1c})). This proves $\zbar\in H$ in this case. Finally, suppose $a=-1$. In this case, we have that for all $\rpair{\xx'}{y'}\in\epi f$, \[ \xx'\cdot\ww - y' = \rpair{\xx'}{y'} \cdot \rpair{\ww}{-1} \leq b. \] Since this holds for all $\rpair{\xx'}{y'}\in\epi f$, it follows from \eqref{eq:fstar-mod-def} that $\fstar(\ww)\leq b$; in other words, $\rpair{\ww}{b}\in\epi \fstar$. For each $t$, we now have that \begin{eqnarray*} \zbar_t \cdot \rpair{\ww}{-1} &=& \xbar\cdot(\ww-\uu) - y_t \\ &=& (\xbar\cdot(\ww-\uu) - b) - y_t + b \\ &\leq& -\fstar(\uu) - y_t + b \\ &\leq& b. \end{eqnarray*} The first equality is a direct application of \eqref{eq:thm:adif-fext-inverses:long:4} (with $a=-1$). The first inequality follows from the definition of astral dual subgradient (\eqref{eqn:psi-subgrad:3a}) since $\xbar\in\adsubdiffstar{\uu}$, and since, as just shown, $\rpair{\ww}{b}\in\epi \fstar$. The last inequality is because $y_t\geq -\fstar(\uu)$ by construction. Thus, each $\zbar_t\in H$, implying that their limit, $\zbar$, is also in $H$, since $H$ is closed. Therefore, $\zbar\in\ohull(\epi f)=\clepifext$, completing the proof. \qed Theorem~\ref{thm:asubdif-implies-adsubdif} gave some of the implications of $\uu$ being an astral subgradient of $F$ at $\xbar$. We next prove a partial converse, providing general conditions which, when taken together, are sufficient for $\uu$ to be in $\asubdifF{\xbar}$. First, we require that $\xbar$ is in $\adsubdifFstar{\uu}$ and also in $\cldom{F}$, both of which were seen to be necessary in Theorem~\ref{thm:asubdif-implies-adsubdif}. We then list a few properties, any one of which is sufficient. The additional assumptions listed in this theorem are a bit technical and involved. We do not know if simpler, fewer, or weaker assumptions would suffice.
Nevertheless, as will be seen shortly, these assumptions are always satisfied in the natural and centrally important case that $F$ is an extension $\fext$ of a function $f:\Rn\rightarrow\Rext$. In standard convex analysis, the closure of a convex set is always equal to its outer hull (the intersection of all halfspaces that include it). In Section~\ref{sec:closure-convex-set}, we saw that this is not necessarily the case for all astral convex sets in $\extspace$ (although it is always true for subsets of $\Rn$, as seen in Theorem~\ref{thm:e:6}). In condition~(\ref{ap:thm:adsubdif-implies-asubdif:c}) below, we explicitly assume that the epigraph of $F$ has this regularity property, namely, that its closure is equal to its outer hull. This always holds in the case just mentioned, in which $F$ is an extension $\fext$. \begin{theorem} \label{ap:thm:adsubdif-implies-asubdif} Let $F:\extspace\rightarrow\Rext$, and let $\xbar\in\extspace$ and $\uu\in\Rn$. Suppose $\xbar\in\adsubdifFstar{\uu}$ and $\xbar\in\cldom{F}$. Also suppose that at least one of the following holds: \begin{enumerate} \item \label{ap:thm:adsubdif-implies-asubdif:a} $\Fstar(\uu)=-\infty$. \item \label{ap:thm:adsubdif-implies-asubdif:b} $-\Fdub(\xbar)$ and $\xbar\cdot\uu$ are summable, and $-F(\xbar) \plusd \xbar\cdot\uu = \xbar\cdot\uu - \Fdub(\xbar)$. \item \label{ap:thm:adsubdif-implies-asubdif:c} All of the following hold: \begin{itemize} \item $\Fstar>-\infty$. \item $F(\xbar)=\Fdub(\xbar)$. \item $\clepi{F}=\ohull(\epi F)$. \end{itemize} \end{enumerate} Then $\uu\in\asubdifF{\xbar}$. \end{theorem} \proof ~ We consider each of the additional conditions separately. Condition~(\ref{ap:thm:adsubdif-implies-asubdif:a}): If $\Fstar(\uu)=-\infty$ then the Fenchel-Young inequality given in \eqref{eqn:ast-fenchel} must hold with equality, implying that $\uu\in\asubdifF{\xbar}$ by Theorem~\ref{thm:fenchel-implies-subgrad}. Condition~(\ref{ap:thm:adsubdif-implies-asubdif:b}): Assume this condition holds.
Then $\xbar \in \cldom{F} \subseteq \cldom{\Fdub}$ since $F\geq \Fdub$ (by Theorem~\ref{thm:fdub-sup-afffcns}), implying $\dom F \subseteq \dom \Fdub$. Therefore, statement~(\ref{thm:psi-subgrad-conds:c}) of Theorem~\ref{thm:psi-subgrad-conds} is satisfied with $\psi=\Fstar$, yielding, since $-\Fdub(\xbar)$ and $\xbar\cdot\uu$ are summable, that \[ \Fstar(\uu) = \xbar\cdot\uu - \Fdub(\xbar) = -F(\xbar) \plusd \xbar\cdot\uu. \] That $\uu\in\asubdifF{\xbar}$ now follows from Theorem~\ref{thm:fenchel-implies-subgrad}. Condition~(\ref{ap:thm:adsubdif-implies-asubdif:c}): Assume all the parts of this condition hold. If $-\Fdub(\xbar)$ and $\xbar\cdot\uu$ are summable, then condition~(\ref{ap:thm:adsubdif-implies-asubdif:b}) is satisfied (since $F(\xbar)=\Fdub(\xbar)$), implying $\uu\in\asubdifF{\xbar}$, as just proved. Therefore, we assume henceforth that $-\Fdub(\xbar)$ and $\xbar\cdot\uu$ are \emph{not} summable. Specifically, this means that \begin{equation} \label{eq:ap:thm:adsubdif-implies-asubdif:1} F(\xbar)=\Fdub(\xbar)=\xbar\cdot\uu\in\{-\infty,+\infty\}. \end{equation} We prove that $\uu\in\asubdifF{\xbar}$ by constructing a point $\zbar\in\extspacnp$ satisfying all the conditions of Proposition~\ref{pr:equiv-ast-subdif-defn}(\ref{pr:equiv-ast-subdif-defn:c}) (where, as previously discussed, $\homat$ is the matrix defined in \eqref{eqn:homat-def}). We begin by defining a matrix $\amatu\in\Rnpnp$ that will be used throughout the proof. This matrix is identical to the $(n+1)\times(n+1)$ identity matrix, except that the first $n$ entries of the bottom row are equal to $\trans{\uu}$. Thus, in block form, \[ \amatu = \left[ \begin{array}{ccc|c} & & & \\ ~ & \Iden & ~ & \zero \\ & & & \\ \hline \rule{0pt}{2.5ex} & \trans{\uu} & & 1 \end{array} \right] \] where $\Iden$ is the $n\times n$ identity matrix, and $\zero$, as usual, is the all-zeros vector in $\Rn$. For $t=1,2,\ldots$, let $y_t = \max\{-t, -\Fstar(\uu)\}$.
Then $y_t\in\R$ (since $\Fstar>-\infty$), $y_t\geq -\Fstar(\uu)$, and $y_t\rightarrow -\Fstar(\uu)$. Next, let $\zbar_t=\amatu \rpair{\xbar}{y_t}$, forming a sequence in $\extspacnp$. By sequential compactness, this sequence must have a convergent subsequence; by discarding all other elements, we can assume that the entire sequence converges. Let $\zbar\in\extspacnp$ be its limit; that is, $\zbar_t\rightarrow\zbar$. We proceed to prove that $\zbar$ satisfies all the conditions of Proposition~\ref{pr:equiv-ast-subdif-defn}(\ref{pr:equiv-ast-subdif-defn:c}). First, \[ \homat \zbar_t = \homat \amatu \rpair{\xbar}{y_t} = \homat \rpair{\xbar}{y_t} = \xbar. \] The second equality is because $\homat \amatu = \homat$, by a straightforward matrix calculation (also using Proposition~\ref{pr:h:4}(\ref{pr:h:4d})). The last equality is by Theorem~\ref{thm:homf}(\ref{thm:homf:aa}). Since $\homat\zbar_t\rightarrow\homat\zbar$ (\Cref{cor:aff-cont}), it follows that $\homat\zbar=\xbar$. For $\ww\in\Rn$ and $a\in\R$, we can compute $\zbar_t \cdot \rpair{\ww}{a}$, which will be used repeatedly in the remainder of the proof: \begin{eqnarray} \zbar_t \cdot \rpair{\ww}{a} &=& \paren{\amatu \rpair{\xbar}{y_t}} \cdot \rpair{\ww}{a} \nonumber \\ &=& \rpair{\xbar}{y_t} \cdot \paren{ \transamatu \rpair{\ww}{a} } \nonumber \\ &=& \rpair{\xbar}{y_t} \cdot \rpair{\ww + a \uu}{a} \nonumber \\ &=& \xbar\cdot(\ww + a \uu) + a y_t. \label{eq:ap:thm:adsubdif-implies-asubdif:4} \end{eqnarray} These equalities follow respectively from: definition of $\zbar_t$; \Cref{thm:mat-mult-def}; straightforward matrix calculation; and Theorem~\ref{thm:homf}(\ref{thm:homf:a}). In particular, setting $\ww=\zero$ and $a=1$, \eqref{eq:ap:thm:adsubdif-implies-asubdif:4} shows that \[ \zbar_t \cdot \rpair{\zero}{1} = \xbar\cdot\uu + y_t = F(\xbar) \] with the last equality following from \eqref{eq:ap:thm:adsubdif-implies-asubdif:1}.
Since $\zbar_t \cdot \rpair{\zero}{1} \rightarrow \zbar \cdot \rpair{\zero}{1}$ (by Theorem~\ref{thm:i:1}(\ref{thm:i:1c})), this implies that $\zbar \cdot \rpair{\zero}{1} = F(\xbar)$. Likewise, setting $\ww=\uu$ and $a=-1$, \eqref{eq:ap:thm:adsubdif-implies-asubdif:4} yields \[ \zbar_t \cdot \rpair{\uu}{-1} = \xbar\cdot\zero - y_t = -y_t. \] Since $y_t\rightarrow -\Fstar(\uu)$, and since $\zbar_t \cdot \rpair{\uu}{-1} \rightarrow \zbar \cdot \rpair{\uu}{-1}$, we conclude that $\zbar \cdot \rpair{\uu}{-1} = \Fstar(\uu)$. Thus, $\zbar$ satisfies all of the conditions of Proposition~\ref{pr:equiv-ast-subdif-defn}(\ref{pr:equiv-ast-subdif-defn:c}), except that it still remains to show that $\zbar\in\clepi{F}$. Since, by assumption, $\clepi{F}=\ohull(\epi F)$, we prove this by showing instead that $\zbar\in \ohull(\epi F)$. To this end, we prove that every closed halfspace in $\extspacnp$ that includes $\epi F$ must also include $\zbar$. Let $H$ be the closed halfspace \[ H = \{ \zbar' \in \extspacnp : \zbar'\cdot\rpair{\ww}{a} \leq b \} \] for some $\ww\in\Rn$ and $a, b\in\R$, and assume $\epi F \subseteq H$. Under this assumption, we aim to prove $\zbar\in H$. We proceed in cases based on the value of $a$. Without loss of generality, we assume that $a\in\{-1,0,1\}$. (Otherwise, if $a\neq 0$, we can divide $\ww$, $a$ and $b$ by $|a|$, resulting in the same halfspace, but now with $a\in\{-1,1\}$.) Suppose first that $a=1$. We prove that this case contradicts our assumption that $\Fstar>-\infty$, and therefore is impossible. Let $\xbar'\in\dom F$. Then for all $y'\in\R$ with $y'\geq F(\xbar')$, we have $\rpair{\xbar'}{y'}\in\epi F\subseteq H$, which means $ \xbar'\cdot\ww + y' \leq b$. Since this holds for arbitrarily large values of $y'$, and since $b\in\R$, this is only possible if $\xbar'\cdot\ww=-\infty$. Thus, for all $\xbar'\in\extspace$, either $F(\xbar')=+\infty$ or $\xbar'\cdot\ww=-\infty$. 
That is, \[ -F(\xbar') \plusd \xbar'\cdot\ww = -\infty \] for all $\xbar'\in\extspace$. It follows that $\Fstar(\ww)=-\infty$ (by \eqref{eq:Fstar-down-def}), contradicting that $\Fstar>-\infty$. Thus, $a$ can only be in $\{-1,0\}$. Suppose next that $a=0$. Since $\xbar\in\cldom{F}$, there exists a sequence $\seq{\rpair{\xbar'_t}{y'_t}}$ in $\epi F$ with $\xbar'_t\rightarrow\xbar$. Since $\rpair{\xbar'_t}{y'_t}\in\epi F\subseteq H$, we have \begin{equation} \label{eq:ap:thm:adsubdif-implies-asubdif:3} \xbar'_t\cdot\ww = \rpair{\xbar'_t}{y'_t}\cdot\rpair{\ww}{0} \leq b \end{equation} by Theorem~\ref{thm:homf}(\ref{thm:homf:a}). Thus, \[ \zbar\cdot\rpair{\ww}{0} = \zbar\cdot(\trans{\homat} \ww) = (\homat \zbar) \cdot \ww = \xbar\cdot \ww \leq b. \] The equalities follow respectively from a simple matrix calculation; \Cref{thm:mat-mult-def}; and our argument above establishing that $\homat\zbar=\xbar$. The inequality follows from \eqref{eq:ap:thm:adsubdif-implies-asubdif:3} since $\xbar'_t\cdot\ww \rightarrow \xbar\cdot\ww$ (by Theorem~\ref{thm:i:1}(\ref{thm:i:1c})). This proves $\zbar\in H$ in this case. Finally, suppose $a=-1$. In this case, we have that for all $\rpair{\xbar'}{y'}\in\epi F$, \[ \xbar'\cdot\ww - y' = \rpair{\xbar'}{y'} \cdot \rpair{\ww}{-1} \leq b \] (by Theorem~\ref{thm:homf}(\ref{thm:homf:a})). Since this holds for all $\rpair{\xbar'}{y'}\in\epi F$, it follows from \eqref{eq:Fstar-def} that $\Fstar(\ww)\leq b$; in other words, $\rpair{\ww}{b}\in\epi \Fstar$. For each $t$, we now have that \begin{eqnarray*} \zbar_t \cdot \rpair{\ww}{-1} &=& \xbar\cdot(\ww-\uu) - y_t \\ &=& (\xbar\cdot(\ww-\uu) - b) - y_t + b \\ &\leq& -\Fstar(\uu) - y_t + b \\ &\leq& b. \end{eqnarray*} The first equality is a direct application of \eqref{eq:ap:thm:adsubdif-implies-asubdif:4} (with $a=-1$). 
The first inequality follows from the definition of astral dual subgradient (\eqref{eqn:psi-subgrad:3a}) since $\xbar\in\adsubdifFstar{\uu}$, and since, as just shown, $\rpair{\ww}{b}\in\epi \Fstar$. The last inequality is because $y_t\geq -\Fstar(\uu)$ by construction. Thus, each $\zbar_t\in H$, implying their limit, $\zbar$, is also in $H$, being a closed halfspace. This completes the proof. \qed \subsection{notes on defining primal subdifferentials (really old)} A function $f:\Rn\rightarrow\Rext$ has subgradient $\uu$ at $\xx$ if $f$ is majorized by the affine function $\zz\mapsto\zz\cdot\uu-b$, for some $b\in\R$, with equality at $\xx$. This is equivalent to $\epi f$ being included in the halfspace \[ \{\rpair{\zz}{y}\in\Rn\times\R : \rpair{\zz}{y} \cdot \rpair{\uu}{-1} \leq b \} \] and with $\rpair{\xx}{f(\xx)}$ included in its boundary so that $\rpair{\xx}{f(\xx)} \cdot \rpair{\uu}{-1} = b$. (In what follows, I am being loose with notation and identifying $\rpair{\xbar}{y}$, a point in $\extspace\times\R$, with its homeomorphic image $\homf(\xbar,y)$ in $\extspac{n+1}$.) Let $F:\extspace\rightarrow\Rext$. We define $\uu\in\partial F(\xbar)$ if: \begin{itemize} \item $F$ is lower semicontinuous at $\xbar$; that is, $F(\xbar)=(\lsc F)(\xbar)$. \item There exists $\beta\in\Rext$ such that \[ \epi F \subseteq \{ \rpair{\xbar'}{y} \in \extspace\times\R : \rpair{\xbar'}{y} \cdot \rpair{\uu}{-1} \leq \beta \}. \] \item There exists a point $\zbar\in\clbar{\epi F}$ with $\PP\zbar=\xbar$ and $\zbar \cdot \rpair{\uu}{-1} = \beta$, so that $\zbar$ is on the ``boundary'' of the halfspace above (and where $\PP\in\R^{n\times(n+1)}$ is a projection matrix, $\PP\rpair{\xbar'}{y}=\xbar'$). \end{itemize} The second condition is equivalent to $F(\xbar')\geq -\beta \plusd \xbar'\cdot\uu$ for all $\xbar'$. 
The last condition is equivalent to there existing a sequence $\seq{\rpair{\xbar_t}{y_t}}$ in $\epi F$ with $\xbar_t\rightarrow\xbar$ and $\xbar_t\cdot\uu - y_t \rightarrow \beta$. If $\beta\in\R$, this last condition (combined with the second condition) is also equivalent to the following: for every neighborhood $U$ of $\xbar$ and for all $\epsilon>0$, there exists $\rpair{\xbar'}{y}\in\epi F$ with $\xbar'\in U$ and \[ \xbar'\cdot \uu - \beta \leq y \leq \xbar'\cdot \uu - \beta + \epsilon. \] So, as we get arbitrarily close to $\xbar$ (in terms of our topology), we can get arbitrarily close to the supporting affine function $\xbar'\mapsto\xbar'\cdot \uu - \beta$. If $\beta=+\infty$, that last condition is equivalent to the following: for every neighborhood $U$ of $\xbar$ and for all $\alpha\in\R$, there exists $\rpair{\xbar'}{y}\in\epi F$ with $\xbar'\in U$ and \[ y \leq \xbar'\cdot \uu - \alpha. \] And if $\beta=-\infty$, then $\xbar'\cdot\uu - y = -\infty$ for all $\rpair{\xbar'}{y}\in\epi F$, so that last condition means just that $\xbar\in\clbar{\dom F}$. (The case $\beta=-\infty$ is pretty degenerate. If $F=\fext$, where $f:\Rn\rightarrow\Rext$, this case can only occur if $f\equiv+\infty$.) \bigskip This definition of primal subgradient is the inverse of our dual subgradient: \begin{theorem}[needs more checking!] Assume $\clbar{\epi F} = \ohull(\epi F)$. Then $\uu\in\partial F(\xbar)$ if and only if: \begin{itemize} \item $F(\xbar)=(\lsc F)(\xbar)$. \item $\xbar\in \clbar{\dom F}$. \item $\xbar\in\adsubdifFstar{\uu}$. \end{itemize} \end{theorem} \bigskip Consider the following conditions: \begin{enumerate}[label=(\Alph*)] \item \label{condA} $\uu\in\partial F(\xbar)$. \item \label{condB} $F(\xbar)=-\Fstar(\uu) \plusd \xbar\cdot\uu$. \item \label{condC} $\Fstar(\uu)=-F(\xbar) \plusd \xbar\cdot\uu$. 
\end{enumerate} I believe the following are true: \begin{itemize} \item If $\Fstar(\uu)\in\R$ or if $\Fstar(\uu)=+\infty$ and $\xbar\cdot\uu<+\infty$ then \ref{condA}$\Rightarrow$\ref{condB}. \item If $\{-F(\xbar),\xbar\cdot\uu\}\neq\{-\infty,+\infty\}$ then \ref{condA}$\Rightarrow$\ref{condC}. \item If $\xbar\cdot\uu>-\infty$ then \ref{condC}$\Rightarrow$\ref{condA}. \item If $\xbar\cdot\uu\in\R$ then \ref{condB}$\Rightarrow$\ref{condC}$\Rightarrow$\ref{condA}. \item In general, \ref{condB} does not imply \ref{condA}, even if $\Fstar(\uu)\in\R$, as shown by Miro's examples (such as $\max\{x,2x\}$). \end{itemize} \subsection{alternate proof on separating an astral cone from a point} Here is an older proof on separating an astral cone from a point. I thought the techniques were interesting, but in the end, there was a way of avoiding this proof altogether. \begin{theorem} \label{thm:acone-sep-from-ast-pt} Let $K\subseteq\extspace$ be a closed astral convex cone, and let $\zbar\in\extspace\setminus K$. Then there exists $\uu\in\Rn$ such that $\zbar\cdot\uu>0$ and $\xbar\cdot\uu\leq 0$ for all $\xbar\in K$ (that is, $K\subseteq\chsuz$ but $\zbar\not\in\chsuz$). \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:acone-sep-from-ast-pt}] We can write the point $\limray{\zbar}$, which is an icon, in canonical form as $\limray{\zbar}=[\vv_1,\ldots,\vv_k]\omm$ where $\vv_1,\ldots,\vv_k\in\Rn$ are orthonormal. Note that $\limray{\zbar}\not\in K$ since $\zbar\not\in K$ (by the contrapositive of Proposition~\ref{pr:ast-cone-is-naive}). For $i=0,\ldots,k$, let \[ \ebar_i=[\vv_1,\ldots,\vv_i]\omm. \] Let $j\in\{1,\ldots,k\}$ be the smallest index for which $\ebar_j\not\in K$, which must exist since $\ebar_0=\zero\in K$ but $\ebar_k=\limray{\zbar}\not\in K$. Let $\ebar=\ebar_{j-1}$ and let $\vv=\vv_j$. Then, by $j$'s definition, $\ebar\in K$ but $\ebar\plusl\limray{\vv}=\ebar_j\not\in K$. Next, let \[ K' = \Braces{\xbar\in\extspace : \ebar\plusl\xbar\in K}. 
\] Equivalently, $K' = F^{-1}(K)$, where $F:\extspace\rightarrow\extspace$ is the affine map $F(\xbar)=\ebar\plusl\xbar$ for $\xbar\in\extspace$. Since $K$ is a closed astral convex cone, $K'$ is convex by Corollary~\ref{cor:inv-image-convex}, an astral cone by Theorem~\ref{thm:affine-inv-ast-cone}, and closed by \citet[Theorem~18.1]{munkres} since $F$ is continuous (\Cref{cor:aff-cont}). Further, by construction, $\limray{\vv}\not\in K'$, implying $\vv\not\in K'$ (by Proposition~\ref{pr:ast-cone-is-naive}). Therefore, by Lemma~\ref{lem:acone-sep-from-fin-pt} (applied to $K'$ and $\vv$), there exists $\uu'\in\Rn$ such that $\vv\cdot\uu'>0$ and $\xbar\cdot\uu'\leq 0$ for all $\xbar\in K'$. Let $\PP\in\R^{n\times n}$ be the projection matrix onto $L=\{\vv_1,\ldots,\vv_{j-1}\}^\bot$, the linear subspace orthogonal to $\vv_1,\ldots,\vv_{j-1}$ (or equivalently, to their span). Let $\uu=\PP\uu'$. We will show that $\uu$ defines the needed separation. First, for $i=1,\ldots,j-1$, $\vv_i\cdot\uu=0$ since $\uu\in L$ and so orthogonal to $\vv_i$. Also, $\vv=\vv_j\in L$ since $\vv_j$ is orthogonal to $\vv_1,\ldots,\vv_{j-1}$, implying $\PP\vv=\vv$, so \[ \vv_j\cdot\uu = \vv\cdot\uu = \vv\cdot(\PP \uu') = (\PP\vv)\cdot\uu' = \vv\cdot\uu' > 0. \] From these facts, it follows that $\limray{\zbar}\cdot\uu=+\infty>0$ (by Lemma~\ref{lemma:case}), and so that $\zbar\cdot\uu>0$ (by Proposition~\ref{pr:astrons-exist}). To complete the proof, let $\xbar\in K$, which we aim to show is in $\chsuz$. Then $\limray{\xbar}\in K$ (by Proposition~\ref{pr:ast-cone-is-naive}). Since $K$ is convex, we can therefore apply Corollary~\ref{cor:gen-one-in-set-implies-all} (with $S$, $\vbar$, $\ybar$, $\xbar$, as they appear in that corollary, set to $K$, $\xbar$, $\zero$, $\ebar$, respectively). This yields that $\ahfline{\ebar}{\xbar}=\ebar\seqsum\aray{\xbar}\subseteq K$. In particular, this means that $\ebar\plusl\xbar\in K$ (using Corollary~\ref{cor:seqsum-conseqs}\ref{cor:seqsum-conseqs:a}).
By the projection lemma (Lemma~\ref{lemma:proj}), this implies further that \begin{equation} \label{eq:thm:acone-sep-from-ast-pt:1} \ebar\plusl\PP\xbar=\ebar\plusl\xbar\in K. \end{equation} Consequently, \[ \xbar\cdot\uu = \xbar\cdot(\PP\uu') = (\PP\xbar)\cdot\uu' \leq 0, \] where the second equality is by \Cref{thm:mat-mult-def} (and since $\PP$ is symmetric), and the inequality is because $\PP\xbar\in K'$ (by Eq.~\ref{eq:thm:acone-sep-from-ast-pt:1} and $K'$'s definition). This completes the proof. \end{proof} \end{document}
2205.03200v1
http://arxiv.org/abs/2205.03200v1
K-color region select game
\documentclass[12pt,a4paper]{amsart} \usepackage{amssymb,amsthm} \usepackage{multirow} \usepackage{dsfont} \usepackage{graphicx} \usepackage{float} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{fact}[thm]{Fact} \newtheorem{lem}[thm]{Lemma} \newtheorem{conj}[thm]{Conjecture} \newtheorem{quest}[thm]{Question} \newtheorem{prob}[thm]{Problem} \newtheorem{rem}[thm]{Remark} \newtheorem{definition}{Definition} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{defns}[thm]{Definitions} \newtheorem{con}[thm]{Construction} \newtheorem{exmp}[thm]{Example} \newtheorem{exmps}[thm]{Examples} \newtheorem{notn}[thm]{Notation} \newtheorem{notns}[thm]{Notations} \newtheorem{addm}[thm]{Addendum} \newtheorem{exer}[thm]{Exercise} \begin{document} \title {$k$-Color Region Select Game} \author{Ahmet Batal, Neslihan G\"ug\"umc\"u} \address{Department of Mathematics\\ Izmir Institute of Technology\\ G\"ulbah\c ce Campus 35430 Izmir, TURKEY} \email{[email protected]} \email{[email protected]} \begin{abstract} The region select game, introduced by Ayaka Shimizu, Akio Kawauchi and Kengo Kishimoto, is a game that is played on knot diagrams whose crossings are endowed with two colors. The game is based on the region crossing change moves that induce an unknotting operation on knot diagrams. We generalize the region select game to be played on a knot diagram endowed with $k$-colors at its vertices for $2 \leq k \leq \infty$. \end{abstract} \subjclass[2020]{05C50, 05C57} \keywords{knot, link, region select game, unknotting} \maketitle \section*{Introduction} The \textit{region select game}, introduced in 2010 \cite{Shi2, Shi} and later released as a game app for Android \cite{And}, is a game played on knot diagrams.
The region select game begins with a knot diagram whose crossings are initially colored by either $0$ or $1$, and is played by selecting a number of regions of the diagram (a region is an area enclosed by the arcs of the diagram). Each selection of a region increases by $1$ modulo $2$ the color of every crossing that lies on the boundary of that region. The aim of the game is to turn the color of each crossing of the knot diagram to $0$ (or to $1$) by selecting a number of regions. Shimizu showed \cite{Shi} that the region select game is always solvable; that is, for any initial color configuration of the crossings, there exists a choice of regions that turns the color of each crossing to $0$. In \cite{Shi}, a \textit{region crossing change} move is defined to be a local transformation of the knot diagram that is applied on a region and changes the type of each crossing that lies on the boundary of the region. By encoding an over-crossing with $1$ and an under-crossing with $0$, any knot diagram corresponds to a knot diagram given with an initial color configuration at its crossings. The solvability of the region select game then follows from the result of Shimizu that any knot diagram can be turned into an unknot diagram by a sequence of region crossing change moves \cite{Shi}. In \cite{Che}, Cheng and Gao showed that the analogous result holds for two-component link diagrams if and only if their linking numbers are even. Soon after, in 2012, Ahara and Suzuki \cite{AhSu} extended the region select game to an integral setting by introducing the \textit{integral region choice problem}. In the integral region choice problem, one starts the game with a knot diagram whose crossings are endowed with colors labeled by integers. Then, an integer is assigned to a region of the knot diagram. The assigned integer changes the integer labels on the crossings that lie on the boundary of the region according to one of two counting rules.
In the first counting rule, named \textit{the single counting rule}, the integer label on each crossing of the boundary of the integer-labeled region is increased by $n$, where $n$ is the integer assigned to the region. In the second counting rule, named \textit{the double counting rule}, when an integer $n$ is assigned to a region, the integer labels on the crossings of the boundary that meet the region once are increased by $n$, and the integer labels on the crossings of the boundary that meet the region twice are increased by $2n$. In \cite{AhSu}, the authors showed that the integral region choice problem is always solvable with respect to both of these rules. In \cite{Kaw}, Kawamura gave a necessary and sufficient condition for the solvability of two-component link diagrams. In this paper, we introduce the $k$-color region select game, which is the modulo $k$ extension of Shimizu's region select game for an integer $k$ greater than $2$. In this game, the crossings of a knot diagram are initially colored by the integers $0,1,\ldots,k-1$. The game is played by pushing (selecting) a number of regions of the knot diagram. Each push of a region increases the color of the crossings at the boundary of the region by $1$ modulo $k$. The aim of the game is to make the color of every crossing $0$ by applying a push pattern to the regions. See Figure \ref{fig:example} for a knot diagram given with an initial coloring configuration. The integers on the regions of the knot diagram denote the number of pushes required on them to turn each vertex color to $0$ modulo $3$. Similar to the integral region choice problem of Ahara and Suzuki, we also define versions of the game for the cases $2 \leq k< \infty$ and $k=\infty$ with modified rules of counting.
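To make the game's linear-algebraic nature concrete, here is a small computational sketch (not part of the paper): it models a $k$-color instance as the linear system $Mx \equiv -c \pmod{k}$, where $M$ is a crossing--region incidence matrix, $c$ is the vector of initial colors, and $x$ is the push pattern. The matrix below is a made-up toy example rather than one derived from an actual knot diagram, the helper \texttt{solve\_mod\_p} is our own, and the elimination step as written is valid only for prime $k$ (general $k$ would need, e.g., the Smith normal form).

```python
# Illustrative sketch only: a k-color region select instance as a linear
# system over Z_k (k prime).  M is a hypothetical crossing-region
# incidence matrix, NOT derived from an actual knot diagram.
import numpy as np

def solve_mod_p(A, b, p):
    """Return one solution x of A x = b over Z_p (p prime), or None."""
    A = np.array(A, dtype=int) % p
    b = np.array(b, dtype=int) % p
    m, n = A.shape
    aug = np.hstack([A, b.reshape(-1, 1)])
    row, pivots = 0, []
    for col in range(n):
        piv = next((r for r in range(row, m) if aug[r, col] % p), None)
        if piv is None:
            continue
        aug[[row, piv]] = aug[[piv, row]]          # swap pivot row up
        inv = pow(int(aug[row, col]), p - 2, p)    # Fermat inverse (p prime)
        aug[row] = (aug[row] * inv) % p
        for r in range(m):                         # clear column above/below
            if r != row and aug[r, col] % p:
                aug[r] = (aug[r] - aug[r, col] * aug[row]) % p
        pivots.append(col)
        row += 1
    if any(aug[r, -1] % p for r in range(row, m)):
        return None                                # inconsistent system
    x = np.zeros(n, dtype=int)                     # free variables set to 0
    for i, col in enumerate(pivots):
        x[col] = aug[i, -1]
    return x

# entry (i, j) = 1 iff crossing i lies on the boundary of region j
M = [[1, 1, 0, 1, 0, 1],
     [1, 0, 1, 1, 1, 0],
     [0, 1, 1, 0, 1, 1],
     [1, 1, 1, 0, 0, 1]]
k = 3
colors = [2, 1, 0, 1]  # initial crossing colors

# pushing region j a total of x_j times must cancel the colors:
# M x = -colors (mod k)
push = solve_mod_p(M, [(-c) % k for c in colors], k)
print(push, (np.array(M) @ push + colors) % k)  # residual colors: all zero
```

In this toy formulation, always solvability of the game corresponds to the system being consistent for every right-hand side, i.e., to the incidence matrix having full row rank modulo $k$.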
\begin{figure}[H] \centering \includegraphics[scale=.2]{examplegame1.pdf} \caption{A solution of a 3-color region select game played on a diagram of the knot $6_2$ \cite{Rotable}} \label{fig:example} \end{figure} Let us now give an outline of our paper. In Section \ref{sec:prem}, we review fundamental notions from knot theory and graph theory that are required throughout the paper. In Section \ref{sec:game}, we introduce the $k$-color region select game both for an integer $k$ that is greater than or equal to $2$ and for $k=\infty$. In Section \ref{sec:solvable}, we prove that every version of the $k$-color region select game introduced in this paper is always solvable on knot diagrams. In Sections \ref{sec:reduced} and \ref{sec:assertions}, we examine the number of solving patterns for a given initial coloring configuration that are obtained without pushing certain regions of a knot diagram. We note here that the always solvability of the $k$-color region select game in the versions corresponding to the single and double counting rules can be deduced directly from the always solvability of the integral region choice problem. However, this does not make our proof redundant. In fact, the proofs of the always solvability of the integral region choice problem and the original ($2$-color) region select game are mostly knot theoretic, using Reidemeister moves and checkerboard shadings of knot diagrams. On the other hand, our proof utilizes mostly linear algebra and a few fundamental facts on regular curves (indeed, we essentially use only the fact that the underlying curve of a knot diagram is an orientable closed curve). This enables us to prove the always solvability of the other versions of the region select game introduced in this paper, which cannot be deduced directly from the arguments in \cite{AhSu}. 
In particular, with our proof method we also prove the always solvability of the integral region choice problem, not only for the single and double counting rules, but also for any arbitrary counting rule. With the arguments in our paper, the following questions are also answered. \begin{enumerate} \item How many solutions are there for a given initial color configuration? \item How many initial color configurations can we solve without pushing certain regions? \item Do there exist certain regions such that any initial color configuration can be solved without pushing them? \item Do the answers to the above questions depend on the value of $k$, the version of the game, and the type of the knot diagram? If so, how? \end{enumerate} \section{Preliminaries}\label{sec:prem} We shall begin by presenting basic definitions that we will be using throughout the paper. \begin{definition}\normalfont A \textit{link} with $n$ components is a smooth embedding of a disjoint union of $n$ unit circles $S^1$ into $\mathbb{R}^3$, where $n \geq 1$. In particular, a link with one component is called a \textit{knot}. \end{definition} \begin{definition}\normalfont A \textit{link diagram} (or a \textit{knot diagram}) $D$ is a regular projection of a link (or a knot) into the $2$-sphere $S^2$, with a finite number of transversal self-intersection points. Each self-intersection point of the projection curve is endowed either with over or under passage information to represent the weaving of the link in $\mathbb{R}^3$, and is called a \textit{crossing} of $D$. \end{definition} \begin{definition}\normalfont A crossing of a link diagram is called \textit{reducible} if there exists a circle in the plane of the diagram that meets the diagram transversely only at that crossing. A crossing is called \textit{irreducible} if it is not reducible. \end{definition} \begin{definition} \normalfont We call a component of a \textit{link diagram} without any crossing on it a \textit{loop}. 
\end{definition} It is clear that a loopless link diagram with $n$ crossings determines a unique underlying planar graph with $n$ four-valent vertices, obtained by ignoring the weaving information at the crossings. By abusing the terminology, we extend this association to any link diagram by considering each loop component as a graph with one edge and no vertices. We also call the underlying graph of a link or a knot diagram a \textit{link diagram} or a \textit{knot diagram}, respectively. By a simple application of the Jordan curve theorem and Euler's formula, one can see that any knot diagram with $n$ vertices divides $S^2$ into $n+2$ regions for $n \geq 0$. \begin{definition} \normalfont For a link diagram $D$ on $S^2$, the \textit{regions} of $D$ are defined as the connected components of $S^2 \backslash D$. A vertex $v$ (an edge $e$) is said to be \textit{incident} to a region $r$, and vice versa, if $v$ ($e$, respectively) is in the boundary of $r$. Two regions of $D$ are called \textit{adjacent} if they are incident to the same edge. Similarly, two edges of $D$ are called \textit{adjacent} if they are incident to the same vertex. \end{definition} \begin{definition}\normalfont Let $D$ be a link diagram. The \textit{dual graph} of $D$ is the graph obtained by adding a vertex to each region of $D$ and an edge between each pair of vertices that lie on adjacent regions. \end{definition} \begin{figure}[H] \centering \includegraphics[scale=.2]{dualgraph.pdf} \caption{The dual graph of a diagram of the knot $6_2$} \label{fig:dualgraph} \end{figure} \section{$k$-color region select game}\label{sec:game} In this section, we introduce the \textit{$k$-color region select game} as well as its modified versions, all played on a knot diagram $D$, both for the cases $2 \leq k < \infty$ and $k=\infty$.\\ \textit{The $k$-color region select game when $2 \leq k < \infty$} :\\ We select $k$ colors and label them as $color\,0,\, color\,1, ..., color\,k-1$. 
Then we take an initial color configuration of the vertices of $D$ by using these colors. The game is played by pushing regions of $D$. When a region is pushed, every vertex incident to the region changes its color by the following rule: the $color\,i$ changes to the $color\,i+1$ for $i\neq k-1$, and the $color\,k-1$ changes to the $color\,0$. The aim of the game is to reach the \textit{off color} configuration, in which every vertex is in the $color\,0$ state, by applying a push pattern on regions for a given initial color configuration. \\ \textit{The $k$-color region select game when $k=\infty$}:\\ In this game, we have infinitely many colors labeled as $...,color\,-2,\, color \,-1,\, color \,0,\, color \,1,\,color \,2,...$. An initial color configuration of the vertices of $D$ is obtained by a finite choice of these colors. Each push of a region is assigned either $1$ or $-1$, and is called a \textit{positive} or \textit{negative} push, respectively. When a positive (negative, respectively) push is applied to a region, the color label of every vertex incident to the region increases (decreases, respectively) by $1$. The aim of the game is the same as in the finite case: to reach the off color configuration by applying a signed push pattern for a given initial color configuration. \begin{definition}\normalfont Let $C$ denote an initial color configuration of a link diagram $D$. If there exists a push pattern $P$ of regions of $D$ which brings $C$ to the off color configuration, then $C$ is called \textit{solvable} and $P$ is a solving pattern for $C$. \end{definition} \begin{definition}\normalfont If every initial color configuration of vertices of $D$ is solvable, then $D$ is called \textit{always solvable} in the $k$-color region select game. \end{definition} Let $D$ have $n$ vertices and $m$ regions and let us enumerate the vertices and regions of $D$ as $\{v_1,...,v_n\}$ and $\{r_1,...,r_m\}$, respectively. 
It is easy to observe that the order of the pushes has no importance. Moreover, for $k<\infty$, pushing a region $k$ times is equivalent to not pushing it at all. For $k=\infty$, the net number of pushes, which is equal to the sum of the signs of the pushes made, is what matters. Precisely, the color labels of the vertices incident to a pushed region change by the net number of pushes of that region. Let $\mathbb{Z}_k$ denote the quotient ring ${\mathbb{Z}} /{ k \mathbb{Z}}$ when $k<\infty$, and let it denote $\mathbb{Z}$ when $k=\infty$. We identify a push pattern of regions with a column vector $\mathbf{p}=(p_1,..., p_m)^t \in \mathbb{Z}_k^m$ such that $\mathbf{p}(r_i):=p_i$ is the number of times the region $r_i$ is pushed modulo $k$ if $k<\infty$, and the net number of pushes of $r_i$ if $k=\infty$. Similarly, we identify a color configuration of vertices with a column vector $\mathbf{c}=(c_1,..., c_n)^t \in \mathbb{Z}_k^n$ such that $\mathbf{c}(v_i)=c_i$ is the label number of the color of the vertex $v_i$ in the configuration. The $n \times m$ \textit{vertex-region incidence matrix} $M_0=M_0(D)$ of $D$ is constructed as follows \cite{Che} \begin{align} (M_0)_{ij}= \left\{ \begin{array}{cc} 1 & \;\;\text{if}\;\;\;\; v_i\; \text{is incident to}\; r_j \\ 0 & \;\;\text{otherwise} \\ \end{array} \right\}. \end{align} Let $\mathbf{c}_{in}$ be an initial color configuration of vertices of $D$ and $\mathbf{c}_{fin}$ be the final state configuration obtained after applying a push pattern $\mathbf{p}$. One can observe that the following equation characterizes the relation among $\mathbf{c}_{in}$, $\mathbf{c}_{fin}$, and $\mathbf{p}$ over $\mathbb{Z}_k$ in a simple algebraic way: \begin{equation} \label{maineqn} \mathbf{c}_{in}+M_0(D)\mathbf{p}=\mathbf{c}_{fin}. \end{equation} We now introduce the modified versions of the game that are played with the rules explained below. \\ \emph{Modified rules of the game for $k<\infty$}: Take a link diagram $D$ and fix some $k<\infty$. 
Let $v$ be a vertex of $D$. \begin{enumerate} \item If $v$ is irreducible, choose a number $a\in \mathbb{Z}_k$ which is not a zero divisor. Then define the new rule for this vertex such that a push on a region incident to $v$ increases the color label of $v$ by $a$ modulo $k$. \item If $v$ is reducible, choose three numbers $a_0, a_1, a_2 \in \mathbb{Z}_k$ such that $a_1$ and $a_2$ are not zero divisors. Let $r_0$, $r_1$, and $r_2$ be the regions incident to $v$, where $r_0$ is the region which touches $v$ from two sides. Then define the rule for this vertex such that a push on the incident region $r_i$ increases the color label of $v$ by $a_i$ modulo $k$ for $i=0,1,2$. \end{enumerate} Let us call the number chosen for each incident vertex-region pair $v$-$r$ \emph{the increment number of} $v$ \emph{with respect to the region} $r$ or \emph{the increment number of the} $v$-$r$ \emph{pair}. Note that the increment number of $v$ is the same with respect to each incident region of $v$ if $v$ is irreducible, but it can be chosen differently for each incident region of $v$ if $v$ is reducible. \\ \emph{Modified rules of the game for $k=\infty$}: \begin{enumerate} \item The increment number of an incident vertex-region pair $v$-$r$ is taken to be $1$, as in the original game, if $v$ is irreducible, or if $v$ is reducible and $r$ is a region which touches $v$ from one side. \item If $v$ is a reducible vertex and $r$ is the region which touches $v$ from two sides, then the increment number of the $v$-$r$ pair is allowed to be any number. \end{enumerate} The rules mentioned above and every choice of increment numbers induce different versions of the $k$-color region select game for $2 \leq k\leq\infty$. The game where all increment numbers are taken as $1$ corresponds to the original game, hence the modified versions are generalizations of the original game for $2 \leq k\leq\infty$. 
Although the complexity of the game is increased by these modifications, it will turn out that the always solvability of the game is not affected, as we show in Section \ref{sec:solvable}. Therefore, in the sequel, we consider the modified versions of the game. Note also that in the case of $k=\infty$, we allow the increment number of a $v$-$r$ pair, where $v$ is a reducible vertex and $r$ is the region which touches $v$ from two sides, to be any number. When this number is taken as $1$ or $2$ for all reducible vertices, the version corresponds to the integral region choice problem with the single or double counting rule \cite{AhSu}, respectively. \begin{definition}\normalfont Let $D$ be a link diagram with vertices labeled as $\{v_1,...,v_n\}$ and regions labeled as $\{r_1,...,r_{m}\}$, and let $G$ be a version of the $k$-color region select game on $D$ induced by the choice of $k$ and the set of increment numbers. We define the \textit{game matrix} $M=M(D,G)$ \emph{over} $\mathbb{Z}_k$ \emph{corresponding to the diagram} $D$ and \emph{the game} $G$ such that $(M)_{ij}$ is equal to the increment number of the vertex $v_i$ with respect to the region $r_j$ if $v_i$ and $r_j$ are incident, and zero otherwise. \end{definition} Similar to the original game, in the game $G$, a final state color configuration $\mathbf{c}_{fin}$ is obtained after applying a push pattern ${\bf p}$ to an initial color configuration $\mathbf{c}_{in}$ if and only if \begin{equation} \label{maineqn2} \mathbf{c}_{in}+M(D,G)\mathbf{p}=\mathbf{c}_{fin} \;\;\text{over} \;\; \mathbb{Z}_k. \end{equation} Let us denote the kernel and column space of a matrix $A$ over the ring $\mathbb{Z}_k$ by $Ker_k(A)$ and $Col_k(A)$, respectively. Then, from the above algebraic formulation we immediately obtain the following facts. \begin{fact} An initial color configuration $\mathbf{c}$ of the vertices of $D$ is solvable in the game $G$ if and only if $\mathbf{c}\in Col_k(M)$. 
Indeed, $\mathbf{p}$ is a solving pattern for $\mathbf{c}$ if and only if \begin{equation} M\mathbf{p}=-\mathbf{c}. \end{equation} \end{fact} \begin{fact} \label{fact2} $D$ is always solvable in $G$ if and only if $Col_k(M)=\mathbb{Z}_k^n$. \end{fact} \begin{fact} \label{fact3} In the case $k<\infty$, for every solvable configuration $\mathbf{c}$, there exist exactly $s$ solving patterns, where $s= |Ker_k(M)|$. \end{fact} We also have the following proposition. \begin{prop} \label{propker} In the case $k<\infty$, $D$ is always solvable in $G$ if and only if $|Ker_k(M)|=k^{m-n}$. \end{prop} \begin{proof} Since matrix multiplication is a homomorphism of modules, by the fundamental theorem of homomorphisms we have $$|Col_k(M)||Ker_k(M)|=|\mathbb{Z}_k^m|=k^m.$$ Then the result follows by Fact \ref{fact2}. \end{proof} \begin{definition}\normalfont Let $A$ be a matrix over $\mathbb{Z}_k$, where $k\leq \infty$. A pattern is called a \emph{null pattern} of $A$ if it belongs to $Ker_k(A)$. \end{definition} We have the following proposition. \begin{prop} \label{propmn} Let $D$ be a link diagram with $n$ vertices and $m$ regions on which we play a version of the $k$-color region select game $G$, where $k< \infty $. Let $M$ be the corresponding game matrix. Fix $i \geq 0$ regions of $D$. Let $j$ be the number of null patterns of $M$ in which these regions are not pushed. Then, there are $k^{m-i}/ j$ initial color configurations that can be solved without pushing these regions. If there are $m-n$ regions such that the only null pattern of $M$ in which these regions are not pushed is the trivial pattern $\mathbf{0}$, then $D$ is always solvable in $G$. Moreover, any initial color configuration can then be solved uniquely without pushing these regions. \end{prop} \begin{proof} Take an enumeration of the regions of $D$ such that the regions we fix are $r_{m-i+1},..., r_m$. 
For a vector $\mathbf{p}=(p_1,...,p_{m-i})^t\in\mathbb{Z}^{m-i}_k$, define the zero extension vector $\mathbf{p_e}=(p_1,...,p_{m-i},0,...,0)^t\in\mathbb{Z}^m_k$. Let $\widetilde{M}$ be the $n\times (m-i)$ matrix obtained from $M$ by deleting the last $i$ columns. Then, $\widetilde{M}\mathbf{p}=M\mathbf{p_e}$. Therefore $\mathbf{p}\in Ker_k(\widetilde{M})$ if and only if $\mathbf{p_e}\in Ker_k(M)$. Hence, $j=|Ker_k(\widetilde{M})|$. Moreover, an initial color configuration can be solved without pushing the regions $r_{m-i+1},..., r_m$ if and only if it belongs to $Col_k(\widetilde{M})$. On the other hand, $|Col_k(\widetilde{M})|= k^{m-i} / |Ker_k(\widetilde{M})|$ by the fundamental theorem of homomorphisms. Hence, there are $k^{m-i}/ j$ initial color configurations that can be solved without pushing these regions. If there are $m-n$ regions such that the only null pattern in which these regions are not pushed is the trivial pattern $\mathbf{0}$, then $i=m-n$ and $j=1$. Hence, $Ker_k(\widetilde{M})=\{\mathbf{0}\}$, and $ |Col_k(\widetilde{M})|= k^n $. Since $k^n$ is the number of all possible initial color configurations, this implies that any initial color configuration can be solved uniquely without pushing these regions. In particular, $D$ is always solvable. \end{proof} \section{Knot Diagrams are always solvable}\label{sec:solvable} In this section, we show that knot diagrams are always solvable with respect to any version of the $k$-color region select game for any $k \leq \infty$. \begin{definition}\normalfont For a fixed $k\leq \infty$, a vertex $v$ is said to be \emph{balanced} with respect to a push pattern $\mathbf{p}$ if the sum of the pushes of the regions incident to $v$ is zero modulo $k$ in $\mathbf{p}$. \end{definition} \begin{lem} \label{lem:bal} Let $M$ be a game matrix of a link diagram $D$ over $\mathbb{Z}_k$, where $k\leq\infty$, and let $\boldsymbol{\ell}$ be a null pattern of $M$. 
Then, any irreducible vertex of $D$ is balanced with respect to $\boldsymbol{\ell}$. \end{lem} \begin{proof} Let $v$ be an irreducible vertex of $D$ and let $a$ be the increment number of $v$ with respect to all its incident regions in the version of the $k$-color region select game corresponding to $M$. Let $r_1,...,r_4$ be the regions incident to $v$. Then, $(M\boldsymbol{\ell})(v)= a(\boldsymbol{\ell}(r_1)+\boldsymbol{\ell}(r_2)+\boldsymbol{\ell}(r_3)+\boldsymbol{\ell}(r_4))$. On the other hand, since $\boldsymbol{\ell}$ is a null pattern of $M$, $M\boldsymbol{\ell}=0$. Hence $a(\boldsymbol{\ell}(r_1)+\boldsymbol{\ell}(r_2)+\boldsymbol{\ell}(r_3)+\boldsymbol{\ell}(r_4))=0$. By the rules of the game, $a=1$ if $k=\infty$, and $a$ is not a zero divisor of $\mathbb{Z}_k$ for $k<\infty$. Hence, $\boldsymbol{\ell}(r_1)+\boldsymbol{\ell}(r_2)+\boldsymbol{\ell}(r_3)+\boldsymbol{\ell}(r_4)=0$, which means $v$ is balanced. \end{proof} \begin{definition}\normalfont The \emph{push number} $\sigma_{\bf p}(e)$ \emph{of an edge} $e$ \emph{with respect to a push pattern} ${\bf p}$ is the sum of the pushes of the regions incident to $e$ in ${\bf p}$ modulo $k$. More precisely, if $e$ is incident to the regions $r_1$ and $r_2$, then $\sigma_{\bf p}(e)= {\bf p}(r_1)+ {\bf p}(r_2) \mod k$. \end{definition} We have the following lemma. \begin{lem} \label{lempush} Let $D$ be an oriented reduced knot diagram and let $\boldsymbol{\ell}$ be a null pattern of a game matrix $M$ of $D$ over $\mathbb{Z}_k$, where $k\leq \infty$. Then, there exists $s\in \mathbb{Z}_k$ such that $\sigma_{\boldsymbol{\ell}}(e)=s$ or $-s$ for every edge $e$ of $D$. Moreover, for any pair of adjacent edges $e_1$ and $e_2$ which are not incident to the same region, $\sigma_{\boldsymbol{\ell}}(e_1)=s$ if and only if $\sigma_{\boldsymbol{\ell}}(e_2)=-s$. \end{lem} \begin{proof} Let $e_1$ and $e_2$ be two adjacent edges that meet at a vertex $v$ and are not incident to the same region. 
Let $r_1,...,r_4$ be the regions incident to $v$ such that $r_1$ and $r_2$ are incident to $e_1$, and $r_3$ and $r_4$ are incident to $e_2$. Let $\sigma_{\boldsymbol{\ell}}(e_1)=s$ for some $s\in\mathbb{Z}_k $. This means $\boldsymbol{\ell}(r_1)+\boldsymbol{\ell}(r_2)=s$. On the other hand, since $D$ is a reduced knot diagram, $v$ is an irreducible vertex. Hence, by Lemma \ref{lem:bal}, it is balanced with respect to $\boldsymbol{\ell}$, i.e., $\boldsymbol{\ell}(r_1)+\boldsymbol{\ell}(r_2)+\boldsymbol{\ell}(r_3)+\boldsymbol{\ell}(r_4)=0$. This implies $\sigma_{\boldsymbol{\ell}}(e_2)=\boldsymbol{\ell}(r_3)+\boldsymbol{\ell}(r_4)=-s$. Let us now travel along $D$, starting from a point on $e_1$ and following the orientation of $D$. Using the above argument inductively, we see that the push number with respect to $\boldsymbol{\ell}$ of any edge on our path cannot assume any value other than $s$ or $-s$. Since $D$ is a closed curve, this means every edge of $D$ has a push number which is either $s$ or $-s$. \end{proof} \begin{lem} \label{mainlemma} Let $D$ be a knot diagram, $v$ be an irreducible vertex of $D$, and $\boldsymbol{\ell}$ be a null pattern of a game matrix $M$ of $D$ over $\mathbb{Z}_k$, where $k\leq \infty$. Then, two non-adjacent regions incident to $v$ are pushed the same number of times in $\boldsymbol{\ell}$. \end{lem} \begin{proof} First assume that $D$ is a reduced knot diagram. Let $e_1,...,e_4$ and $r_1,...,r_4$ be the edges and regions incident to $v$, respectively, oriented as in Figure \ref{fig:edges}. Without loss of generality, we can assume by Lemma \ref{lempush} that $\sigma_{\boldsymbol{\ell}}(e_1)=\sigma_{\boldsymbol{\ell}}(e_2)=s$ and $\sigma_{\boldsymbol{\ell}}(e_3)=\sigma_{\boldsymbol{\ell}}(e_4)=-s$ for some $s\in\mathbb{Z}_k$. Then, $\boldsymbol{\ell}(r_1)+\boldsymbol{\ell}(r_4)=\sigma_{\boldsymbol{\ell}}(e_1)=\sigma_{\boldsymbol{\ell}}(e_2)=\boldsymbol{\ell}(r_1)+\boldsymbol{\ell}(r_2)$. 
Hence, $\boldsymbol{\ell}(r_4)=\boldsymbol{\ell}(r_2)$. \begin{figure}[H] \centering \includegraphics[scale=.25]{Fig2.pdf} \caption{Edges and regions that are incident to a vertex} \label{fig:edges} \end{figure} Now let $D$ be a knot diagram which contains reducible crossings. We first endow it with an orientation and construct the link diagram $D'$ obtained from $D$ by applying the oriented smoothing operation simultaneously to every reducible vertex of $D$. We illustrate an example of this procedure in Figure \ref{fig:reducible}. Note that the oriented smoothing operation, when applied to a reducible vertex, preserves the vertex-region structure of the irreducible crossings of the diagram. This means that a game matrix $M'$ of $D'$ can be constructed from $M$ by deleting the rows corresponding to the reducible vertices. Therefore, the regions of $D$ and $D'$ can be identified, and any null pattern of $M$ is also a null pattern of $M'$; in particular, $\boldsymbol{\ell}$ is. Moreover, $D'$ is a disjoint union of components. Let $D''$ be the component of $D'$ which contains $v$. We can construct a game matrix $M''$ of $D''$ by deleting the columns of $M'$ corresponding to the regions whose boundary does not intersect $D''$. Then the restriction $\boldsymbol{\ell}_{res}$ of $\boldsymbol{\ell}$ to the regions of $D''$ is a null pattern of $M''$. Since $D''$ is a reduced knot diagram, by the first part of the proof, two non-adjacent regions incident to $v$ are pushed the same number of times in $\boldsymbol{\ell}_{res}$, hence in $\boldsymbol{\ell}$. \begin{figure}[H] \centering \includegraphics[scale=.25]{Fig1.pdf} \caption{A knot diagram containing reducible crossings} \label{fig:reducible} \end{figure} \end{proof} \begin{prop} \label{prop0} Let $D$ be a knot diagram and $M$ be a game matrix of $D$ over $\mathbb{Z}_k$, where $k\leq \infty$. Then, the only null pattern of $M$ in which two adjacent regions of $D$ are not pushed is the trivial pattern $\mathbf{0}$. 
\end{prop} \begin{proof} Let $\boldsymbol{\ell}$ be a null pattern in which two adjacent regions $r_1$ and $r_2$ are not pushed. Let $v$ be a vertex incident to both $r_1$ and $r_2$. First assume that $v$ is an irreducible vertex. Let $r_3$, $r_4$ be the other two regions incident to $v$. Since $r_1$ and $r_2$ are not pushed in $\boldsymbol{\ell}$, one of the regions $r_3$ or $r_4$ is not pushed either, by Lemma \ref{mainlemma}. Assume without loss of generality that $r_3$ is not pushed. On the other hand, $v$ must be balanced with respect to $\boldsymbol{\ell}$ by Lemma \ref{lem:bal}. Since $r_1,r_2,r_3$ are not pushed, this implies that $r_4$ is not pushed either. Now assume that $v$ is a reducible vertex. Then, there is only one more region, call it $r$, which is incident to $v$ other than $r_1$ and $r_2$. Note that the regions which touch $v$ from one side cannot be adjacent to each other, so either $r_1$ or $r_2$ is the region which touches $v$ from both sides. Hence, $r$ touches $v$ from one side. Therefore, the increment number $a$ of $v$ with respect to $r$ is not a zero divisor. Since $r_1$ and $r_2$ are not pushed, $(M\boldsymbol{\ell})(v)=a\boldsymbol{\ell}(r)$. On the other hand, since $\boldsymbol{\ell}$ is a null pattern, $M\boldsymbol{\ell}=0$. Hence $a\boldsymbol{\ell}(r)=0$. Since $a$ is not a zero divisor, we conclude that $\boldsymbol{\ell}(r)=0$, i.e., $r$ is not pushed. Applying this argument inductively while traveling along the underlying curve of $D$, starting from the edge incident to both $r_1$ and $r_2$, we see that we can never reach a pushed region. Since $D$ is a closed curve, this means that there is no pushed region in $D$, hence $\boldsymbol{\ell}$ is the trivial null pattern $\mathbf{0}$. \end{proof} Now we are ready to state our main result. \begin{thm} \label{propadj} Every knot diagram is always solvable in any version of the $k$-color region select game for all $k\leq\infty$. 
Moreover, any initial color configuration can be solved uniquely without pushing any two adjacent regions. \end{thm} \begin{proof} Since the difference between the number of regions and the number of vertices of a knot diagram is $2$, in the case $k<\infty$ the result follows by Proposition \ref{propmn} and Proposition \ref{prop0}. In the case $k=\infty$, let $D$ be a knot diagram with $n$ vertices and let $\{r_1,...,r_{n+2}\}$ be an enumeration of the regions of $D$ so that $r_{n+1}$ and $r_{n+2}$ are adjacent. Then take a game matrix $M$ of $D$ over $\mathbb{Z}$. Let $\widetilde{M}$ be the $n\times n$ matrix obtained from $M$ by deleting its last two columns. Then, Proposition \ref{prop0} implies that $ Ker_\infty(\widetilde{M})=\{\mathbf{0}\}$ (see the proof of Proposition \ref{propmn} for a more detailed explanation). This is equivalent to saying that the column vectors of $\widetilde{M}$ (equivalently, the first $n$ column vectors of $M$) are linearly independent in the $\mathbb{Z}$-module $\mathbb{Z}^n$. Let us denote these column vectors by $\mathbf{c}_1,...,\mathbf{c}_n$. Let $\mathbf{c}$ be an arbitrary vector in $\mathbb{Z}^n$ corresponding to an initial color configuration. It is an elementary fact that $\mathbb{Z}^n$ has rank $n$. Therefore, any set of vectors with more than $n$ elements is linearly dependent in $\mathbb{Z}^n$. Hence, there are integers $q_1,...,q_n$, and $q$, not all of which are zero, such that \begin{equation} \label{eqnlin} q_1\mathbf{c}_1+...+q_n\mathbf{c}_n + q \mathbf{c}=\mathbf{0}. \end{equation} Note that $q$ cannot be zero, since otherwise $\mathbf{c}_1,...,\mathbf{c}_n$ would be linearly dependent. Equation \eqref{eqnlin} is equivalent to the following matrix equation: \begin{equation} \label{eqnmat} M \begin{bmatrix} q_{1} \\ \vdots \\ q_{n}\\ 0\\ 0 \end{bmatrix}= - q \mathbf{c}. \end{equation} Multiplying \eqref{eqnlin} by $-1$ if necessary, we can assume that $q > 0$. Our aim is to show that $q_i$ is divisible by $q$ for $i=1,...,n$. 
Since this is trivially true if $q=1$, assume further that $q$ is greater than $1$. Then, we can consider the above equation modulo $q$ and obtain \begin{equation} \label{eqnmod} \overline{M} \begin{bmatrix} \overline{q_{1}} \\ \vdots \\ \overline{q_{n}}\\ 0\\ 0 \end{bmatrix}= \mathbf{0}, \end{equation} where $\overline{q_{i}}= q_i$ mod $q$ for $i=1,...,n$ and $\overline{M}$ is the matrix whose entries are given by $(\overline{M})_{ij}= (M)_{ij}$ mod $q$. It is easy to observe that $\overline{M}$ is a game matrix of $D$ over $\mathbb{Z}_q$. This observation, together with Proposition \ref{prop0}, immediately implies that $\overline{q_{i}}=0$ for $i=1,...,n$. So all the $q_i$'s are divisible by $q$. Then, there exist integers $p_1,..., p_n$ such that $q_i=q p_i$ for $i=1,...,n$, and by equation \eqref{eqnmat} we obtain \begin{equation} \label{eqnmat2} M \begin{bmatrix} p_{1} \\ \vdots \\ p_{n}\\ 0\\ 0 \end{bmatrix}= - \mathbf{c}. \end{equation} Since $M$ is an arbitrary game matrix over $\mathbb{Z}$ and $\mathbf{c}$ is an arbitrary initial color configuration, the above equation means that $D$ is always solvable in any version of the $\infty$-color region select game, and any initial color configuration can be solved without pushing any two adjacent regions. Uniqueness follows from the fact that $ Ker_\infty(\widetilde{M})=\{\mathbf{0}\}$. \end{proof} \begin{thm} \label{thmker} Let $D$ be a knot diagram and $M$ be a game matrix of $D$ over $\mathbb{Z}_k$, where $k< \infty$. Then, $|Ker_k(M)|=k^2$. \end{thm} \begin{proof} This follows from Proposition \ref{propker} and Theorem \ref{propadj}. \end{proof} \begin{prop} \label{propsol} For any knot diagram $D$, there are $k^2$ solving push patterns for each initial color configuration in any version of the $k$-color region select game for $k<\infty$. \end{prop} \begin{proof} This follows directly from Fact \ref{fact3} and Theorem \ref{propadj}. \end{proof} We also have the following proposition. 
\begin{prop} \label{propab} Let $D$ be a knot diagram on which we play a version of the $k$-color region select game, where $k\leq\infty$. Let $a, b \in \mathbb{Z}_k$. Fix two regions adjacent to each other. Then, for any initial color configuration, there is a unique solving pattern where one of the regions is pushed $a$ times and the other is pushed $b$ times. In particular, any null pattern of any game matrix of $D$ over $\mathbb{Z}_k$ is uniquely determined by its values on two adjacent regions. \end{prop} \begin{proof} Let $M$ be a game matrix of $D$ over $\mathbb{Z}_k$ and let $\mathbf{c}$ be an initial color configuration of the vertices of $D$. Assume that the adjacent regions we fix correspond to the last two columns of $M$. Then, consider the color configuration \begin{equation} \mathbf{\widetilde{c}}:= \mathbf{c}+ M\begin{bmatrix} 0 \\ \vdots \\ 0\\ a\\ b \end{bmatrix}. \end{equation} By Theorem \ref{propadj}, there is a unique solving push pattern $(p_1,...,p_n,0,0)^t$ for $\mathbf{\widetilde{c}}$, where $n$ is the number of vertices of $D$. Hence, \begin{equation} M\begin{bmatrix} p_1 \\ \vdots \\ p_n\\ 0\\ 0 \end{bmatrix}= -\mathbf{c}- M\begin{bmatrix} 0 \\ \vdots \\ 0\\ a\\ b \end{bmatrix}, \end{equation} which implies $M\mathbf{p}=-\mathbf{c}$, where $\mathbf{p}= (p_1,...,p_n,a,b)^t$. Hence $\mathbf{p}$ is a solving pattern for $\mathbf{c}$ of the desired form. For uniqueness, assume that there is another solving pattern $\mathbf{q}:=(q_1,...,q_n,a,b)^t$ for $\mathbf{c}$. Then, $\mathbf{p}-\mathbf{q}$ would be a null pattern of $M$ in which two adjacent regions are not pushed. By Proposition \ref{prop0}, $\mathbf{p}=\mathbf{q}$. \end{proof} \section{Game on reduced knot diagrams}\label{sec:reduced} In this section, we examine the $k$-color region select game further for reduced knot diagrams. 
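Before specializing to reduced diagrams, the counting statements above can be checked numerically on a small instance. The sketch below is illustrative only: it assumes the vertex-region incidence matrix of a standard $3$-crossing trefoil diagram (our own enumeration: columns are the outer region, the central region, and three lobes, with the outer region and the first lobe assumed adjacent) and all increment numbers equal to $1$, i.e.\ the original game. It confirms by brute force that the kernel of the game matrix has exactly $k^2$ null patterns, and that for $k=3$ every initial coloring admits exactly one solving pattern with prescribed pushes $a$ and $b$ on the two fixed adjacent regions.

```python
from itertools import product

# Vertex-region incidence matrix of a standard trefoil diagram
# (illustrative assumption): rows = 3 crossings, columns = 5 regions
# ordered as (outer, center, lobe1, lobe2, lobe3).
M0 = [
    [1, 1, 1, 0, 1],
    [1, 1, 1, 1, 0],
    [1, 1, 0, 1, 1],
]
OUTER, LOBE1 = 0, 2  # column indices of two (assumed) adjacent regions

def kernel_size(M, k):
    """Count null patterns p with M p = 0 over Z_k by enumeration."""
    n, m = len(M), len(M[0])
    return sum(
        all(sum(M[i][j] * p[j] for j in range(m)) % k == 0 for i in range(n))
        for p in product(range(k), repeat=m)
    )

def solving_patterns(c, k):
    """All push patterns p with M0 p = -c over Z_k, for a coloring c."""
    return [
        p for p in product(range(k), repeat=5)
        if all(sum(M0[i][j] * p[j] for j in range(5)) % k == (-c[i]) % k
               for i in range(3))
    ]

# The kernel has k^2 elements, matching the count stated above.
for k in (2, 3, 4, 5):
    assert kernel_size(M0, k) == k * k

# For k = 3: each coloring has exactly one solving pattern with
# prescribed pushes (a, b) on the fixed adjacent pair (outer, lobe1).
k = 3
for c in product(range(k), repeat=3):
    sols = solving_patterns(c, k)
    for a, b in product(range(k), repeat=2):
        assert sum(1 for p in sols if p[OUTER] == a and p[LOBE1] == b) == 1
print("checks passed")
```

The second loop also illustrates why the $k^2$ solving patterns are parametrized by the pushes on any two adjacent regions: restricting a null pattern to such a pair is injective, and the kernel has exactly $k^2$ elements.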
\begin{definition}\normalfont A shading of the regions of a link diagram $D$ is called a \textit{checkerboard shading} if for any pair of adjacent regions of $D$, one of the regions is shaded and the other one is unshaded. It is well known that every link diagram admits a checkerboard shading \cite{Ka}. \end{definition} \begin{thm} \label{thm2} Let $D$ be a reduced knot diagram with $n$ vertices on which we play the $2$-color region select game. Fix a checkerboard shading on $D$. Then, any initial color configuration can be solved uniquely without pushing one shaded and one unshaded region. In general, there are $2^{n+2-i}$ initial color configurations which can be solved without pushing $i$ given regions that include at least one shaded and one unshaded region. Moreover, there are $2^{n+1-i}$ initial color configurations which can be solved without pushing $i$ given shaded regions or $i$ given unshaded regions. \end{thm} \begin{proof} Take a checkerboard shading of $D$. Consider the following push patterns $\boldsymbol{\ell}_0$, $\boldsymbol{\ell}_1$, $\boldsymbol{\ell}_2$, and $\boldsymbol{\ell}_3$, where $\boldsymbol{\ell}_0$ is the zero pattern; $\boldsymbol{\ell}_1$ is the pattern where only the shaded regions are pushed; $\boldsymbol{\ell}_2$ is the pattern where only the unshaded regions are pushed; and $\boldsymbol{\ell}_3$ is the pattern where all regions are pushed. It is easy to see that all of these are null patterns of the incidence matrix $M_0(D)$, which is the game matrix of the $2$-color region select game on $D$. Moreover, $\boldsymbol{\ell}_1$, $\boldsymbol{\ell}_2$, and $\boldsymbol{\ell}_3$ form the set of all nonzero null patterns since $|Ker_2(M_0)|=4$ by Theorem \ref{thmker}. Note that the only null pattern in which at least one shaded and one unshaded region are not pushed is the zero pattern $\boldsymbol{\ell}_0$. The null patterns in which any number of unshaded regions are not pushed are $\boldsymbol{\ell}_0$ and $\boldsymbol{\ell}_1$. 
And lastly, the null patterns where any number of shaded regions are not pushed are $\boldsymbol{\ell}_0$ and $\boldsymbol{\ell}_2$. Hence, the result follows by Proposition \ref{propmn}. \end{proof} \begin{definition}\normalfont The \textit{distance} $d(r_1,r_2)$ between two regions $r_1$ and $r_2$ of a link diagram $D$ is defined to be the distance between the vertices corresponding to $r_1$ and $r_2$ in the dual graph of $D$. \end{definition} \begin{lem} \label{lemdis} Let $D$ be a reduced knot diagram and $\boldsymbol{\ell}$ be a null pattern of a game matrix $M$ of $D$ over $\mathbb{Z}_k$, where $k\leq \infty$. Let $s\in \mathbb{Z}_k$ be the push number of some edge $e$ of $D$ with respect to $\boldsymbol{\ell}$. Fix a checkerboard shading on $D$. Let $r_1$ and $r_2$ be two shaded or two unshaded regions. Then $\boldsymbol{\ell}(r_1)= \boldsymbol{\ell}(r_2) +2is$ mod $k$, where $i$ is an integer satisfying $|2i|\leq d(r_1,r_2)$. \end{lem} \begin{proof} First consider the case $d(r_1,r_2)=2$, so that there is a region, call it $r$, which is adjacent to both $r_1$ and $r_2$. Let $e_1$ and $e_2$ be the edges incident to $r_1$, $r$ and to $r_2$, $r$, respectively. Then, $\boldsymbol{\ell}(r_1)- \boldsymbol{\ell}(r_2)=\boldsymbol{\ell}(r_1)+\boldsymbol{\ell}(r)- \boldsymbol{\ell}(r)- \boldsymbol{\ell}(r_2)=\sigma_{\ell}(e_1)-\sigma_{\ell}(e_2)$. On the other hand, $\sigma_{\ell}(e_1)= s$ or $-s$, and similarly $\sigma_{\ell}(e_2)= s$ or $-s$, by Lemma \ref{lempush}. Considering every possible case, we obtain $\boldsymbol{\ell}(r_1)- \boldsymbol{\ell}(r_2)=0$, $-2s$, or $2s$. The general case follows by induction on the distance between $r_1$ and $r_2$. \end{proof} \begin{thm} \label{thmp} Let $D$ be a reduced knot diagram on which we play a version of the $k$-color region select game, where $k< \infty$. Fix a checkerboard shading on $D$.
Then, for $k=2^n$, $n\in \mathbb{N}$, any initial color configuration can be solved uniquely without pushing one shaded and one unshaded region. For other values of $k$, let $p$ be the smallest odd prime factor of $k$. Then, any initial color configuration can be solved uniquely without pushing one shaded and one unshaded region if the distance between the regions is less than $p$. \end{thm} \begin{proof} Let $r_1$ be a shaded and $r_2$ be an unshaded region. Let $M$ be the game matrix of $D$ over $\mathbb{Z}_k$ corresponding to the version of the game we play on $D$. Let $\boldsymbol{\ell}$ be a null pattern of $M$, on which $r_1$ and $r_2$ are not pushed. Let $r$ be a shaded region, adjacent to $r_2$, such that $d(r_1,r_2)=d(r_1,r)+1$. Note that, if $e$ is an edge between $r_2$ and $r$, then $\sigma_{\boldsymbol{\ell}}(e)=\boldsymbol{\ell}(r)$ since $\boldsymbol{\ell}(r_2)=0$. Hence, by Lemma \ref{lemdis}, we have \begin{equation} \label{eqn2i} 0=\boldsymbol{\ell}(r_1)= (2i+1)\boldsymbol{\ell}(r) \mod k, \end{equation} where $|2i|\leq d(r_1,r)$. If $k=2^n$ for some $n\in \mathbb{N}$, then $2i+1$ mod $k$ cannot be a zero divisor of $\mathbb{Z}_k$, hence (\ref{eqn2i}) implies $\boldsymbol{\ell}(r)=0$. For other values of $k$, assume further that $d(r_1,r_2) < p$. Note that $|2i+1|\leq |2i|+1 \leq d(r_1,r_2)< p $. Hence, $2i+1$ mod $k$ cannot be a zero divisor of $\mathbb{Z}_k$, and therefore $\boldsymbol{\ell}(r)=0$ for this case as well. Since $r_2$ and $r$ are adjacent, and $\boldsymbol{\ell}(r)=\boldsymbol{\ell}(r_2)=0$, we have $\boldsymbol{\ell}=\boldsymbol{0}$ by Proposition \ref{prop0}. Then the result follows by Proposition \ref{propmn}. \end{proof} \subsection{Game on reduced alternating sign diagrams}\label{sec:reducedalternating} Take a checkerboard shading of a link diagram $L$. 
Assume that one of the subsets of regions, the shaded or the unshaded ones, admits an alternating $``+, -"$ signing where every vertex is incident to two regions with opposite signs, as exemplified in Figure \ref{fig:alternating}. Then, the subset of regions which admits such a signing is called an \textit{alternating subset of regions}. \begin{definition}\normalfont A link diagram that has an alternating subset of its regions is called an \textit{alternating sign diagram}. \end{definition} We have the following proposition. \begin{figure}[H] \centering \includegraphics[scale=.25]{Fig3.pdf} \caption{An alternating sign diagram} \label{fig:alternating} \end{figure} \begin{prop} Take a checkerboard shading of a link diagram $L$. Then, the unshaded regions are alternating if and only if each connected component of the boundary of each shaded region, except the simple loop ones, has an even number of edges, and vice versa. \end{prop} \begin{proof} $(\Rightarrow)$ Let $\Gamma$ be a connected component of the boundary of a shaded region other than a loop. Take an alternating signing of the unshaded regions and sign each edge of $\Gamma$ by the sign of its incident unshaded region. Then the signs of successive edges must be different while we travel along $\Gamma$ in one direction. Otherwise, the vertex between two successive edges would be incident to two unshaded regions with the same sign, which contradicts the definition of an alternating signing. Hence, the signs of the edges alternate while we travel along $\Gamma$ in one direction. Since $\Gamma$ is connected, this is only possible if $\Gamma$ has an even number of edges. $(\Leftarrow)$ Note that the claim holds true for link diagrams with zero or one vertex. Suppose the claim holds true for all links with $n-1$ vertices. Now let $L$ be a link with $n$ vertices which satisfies the assumption of the claim. If $L$ does not have any irreducible vertex, then it has a vertex on a curl.
Removing this vertex with an oriented smoothing as in Figure \ref{fig:orientedsmooth} gives us a link $L'$ with $n-1$ vertices which also satisfies the assumption of the claim. By the induction hypothesis, the unshaded regions of $L'$ admit an alternating signing. Changing the sign of the region $r$, shown in Figure \ref{fig:orientedsmooth}, if necessary, we see that an alternating signing of the unshaded regions of $L'$ induces an alternating signing of the unshaded regions of $L$ by reversing the oriented smoothing operation while keeping the signs of the regions. If $L$ has an irreducible vertex $u$, apply a smoothing to $u$ so that the shaded regions incident to $u$ are connected, as shown in Figure \ref{fig:smoothing}. Then the resulting link $L''$ has $n-1$ vertices and it also satisfies the assumption of the claim. By the induction hypothesis, the unshaded regions of $L''$ admit an alternating signing. Note that the regions $r_1$ and $r_2$, shown in Figure \ref{fig:smoothing}, must have opposite signs. Therefore, by reversing the smoothing operation while keeping the signs of the unshaded regions of $L''$, we obtain an alternating signing of the unshaded regions of $L$. \end{proof} \begin{figure}[H] \centering \includegraphics[scale=.15]{Fig5.pdf} \caption{Oriented smoothing of a vertex on a curl} \label{fig:orientedsmooth} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=.3]{Fig4.pdf} \caption{A smoothing of an irreducible vertex} \label{fig:smoothing} \end{figure} \begin{cor} Take a checkerboard shading of a knot diagram $D$. Then, the unshaded regions are alternating if and only if all shaded regions are incident to an even number of edges, and vice versa. \end{cor} \begin{thm} \label{thmk} Let $D$ be a reduced knot diagram with $n$ vertices on which we play a version of the $k$-color region select game, where $k\leq \infty$. Assume that $D$ admits an alternating signing of its unshaded regions in a checkerboard shading of $D$. Then the following hold.
1) Any initial color configuration can be solved uniquely without pushing one shaded and one unshaded region. 2) If $k$ is an odd number, then any initial color configuration can be solved uniquely without pushing two unshaded regions with opposite signs. 3) In general, let $S$ be a set of $i$ regions, and let $q$ be the number of initial color configurations which can be solved without pushing the regions in $S$. Then, $q=k^{n+2-i}$ if $S$ contains one shaded and one unshaded region. $q=k^{n+1-i}$ if $S$ consists of shaded regions or of unshaded regions with the same sign. $q=k^{n+2-i}$ if $k$ is odd, and $q=k^{n+2-i}/2$ if $k$ is even, when $S$ consists of unshaded regions not all of which have the same sign. \end{thm} \begin{proof} We start by proving claim (3). Fix a version of the $k$-color region select game and let $M$ be the corresponding game matrix of $D$ over $\mathbb{Z}_k$. By Proposition \ref{propmn}, $q=k^{n+2-i}/j$ where $j$ is the number of null patterns of $M$ where the regions in $S$ are not pushed. Hence we just need to determine what $j$ is in each case. To do that, let us investigate the form of the null patterns. Let $a,b\in \mathbb{Z}_k$, and consider the push pattern $\boldsymbol{\ell}_{a,b}$ where $\boldsymbol{\ell}_{a,b}(r)=a$ if $r$ is a shaded region, $\boldsymbol{\ell}_{a,b}(r)=b$ if $r$ is an unshaded region with a plus sign, and $\boldsymbol{\ell}_{a,b}(r)=-b-2a$ (mod $k$) if $r$ is an unshaded region with a minus sign. Then $\boldsymbol{\ell}_{a,b}$ is a null pattern of $M$ for every choice of $a$ and $b$ because $D$ is a reduced knot diagram. Moreover, by Proposition \ref{propab}, all null patterns of $M$ must be of the form $\boldsymbol{\ell}_{a,b}$. If one shaded and one unshaded region are not pushed in a null pattern $\boldsymbol{\ell}_{a,b}$, then $a$ and $b$ must be zero, which corresponds to the zero null pattern. Hence $j=1$ and $q=k^{n+2-i}$.
If a set of shaded regions is not pushed in a null pattern $\boldsymbol{\ell}_{a,b}$, then $a=0$ and $b$ can take any value in $\mathbb{Z}_k$. So there are $k$ such null patterns. Hence, $j=k$ and $q=k^{n+1-i}$. If a set of unshaded regions with the same sign is not pushed in a null pattern $\boldsymbol{\ell}_{a,b}$, then either $b=0$ or $b+2a=0$. In both cases, the value of $a$ determines all possible null patterns. So there are $k$ such null patterns. Hence, $j=k$ and $q=k^{n+1-i}$. If a set of unshaded regions is not pushed in a null pattern $\boldsymbol{\ell}_{a,b}$ and at least two of the regions have opposite signs, then $b=0$ and $2a=0$. If $k$ is odd, then $2$ cannot be a zero divisor of $\mathbb{Z}_k$, hence $a=0$ as well. Then $\boldsymbol{\ell}_{a,b}$ corresponds to the zero null pattern, i.e., $j=1$. On the other hand, if $k$ is even, then $2a=0$ has two solutions, $a=0$ and $a=k/2$, which correspond to two null patterns, i.e., $j=2$. Hence, $q=k^{n+2-i}$ if $k$ is odd, and $q=k^{n+2-i}/2$ if $k$ is even. This completes the proof of claim (3). Claim (3) implies claim (2) and claim (1) in the case $k<\infty$. Hence it remains to prove claim (1) in the case $k=\infty$. Note that the patterns $\boldsymbol{\ell}_{a,b}$ for $a,b\in \mathbb{Z}$ form the set of all null patterns of any game matrix of $D$ over $\mathbb{Z}$, as well. Hence, for all $k\leq \infty$ the only null pattern of any game matrix of $D$ over $\mathbb{Z}_k$ where one shaded and one unshaded region are not pushed is the trivial pattern $\mathbf{0}$. In other words, Proposition \ref{prop0} still holds true when we replace two adjacent regions by one shaded and one unshaded region in the case where $D$ is an alternating sign diagram. Therefore, we can repeat the proof of Theorem \ref{propadj}, replacing two adjacent regions by one shaded and one unshaded region. Hence, the result follows.
\end{proof} \section{Game on knot diagrams}\label{sec:assertions} Recall that in the proof of Lemma \ref{mainlemma}, for a knot diagram $D$ we constructed the link diagram $D'$ by applying the oriented smoothing operation simultaneously to every reducible vertex of $D$. Let us call this operation the \emph{reducing operation} on a knot diagram. \begin{definition}\normalfont We call a sub-diagram $P$ of $D$ a \emph{connectedly reducible part} of $D$ if it satisfies the following two conditions: 1) It stays connected after applying the reducing operation to $D$. 2) No sub-diagram properly containing it stays connected after the reducing operation. \end{definition} Equivalently, $P$ is a connectedly reducible part of $D$ if and only if it corresponds to a reduced (disjoint) component of $D'$ after the reducing operation. See Figure \ref{fig:six} for an illustration. \begin{figure}[H] \centering \includegraphics[scale=.2]{Fig6.pdf} \caption{A connectedly reducible part of a knot diagram (in red)} \label{fig:six} \end{figure} Now let $P$ be a connectedly reducible part of $D$ and denote the reduced component of $D'$ corresponding to $P$ by $P'$. Then $P$ and $P'$ have the same irreducible vertex-region incidence structure, and we can identify their regions as we identify the regions of $D$ and $D'$. Let $M$ be a game matrix of $D$ over $\mathbb{Z}_k$, $k\leq \infty$. Delete all rows of $M$ corresponding to reducible vertices and all columns corresponding to the regions whose boundary does not intersect $P$. The resulting matrix, call it $M'$, is a game matrix of $P'$. Note that if $\boldsymbol{\ell}$ is a null pattern of $M$, then its restriction $\boldsymbol{\ell}_{res}$ to the regions of $P$ is a null pattern of $M'$. Moreover, two different null patterns $\boldsymbol{\ell}$ and $\boldsymbol{\kappa}$ of $M$ cannot have the same restriction.
Indeed, if $\boldsymbol{\ell}_{res}=\boldsymbol{\kappa}_{res}$, then $\boldsymbol{\ell}-\boldsymbol{\kappa}$ would be a null pattern of $M$ which is zero on the regions of $P'$, equivalently on the regions of $P$. But $P$ contains at least two adjacent regions of $D$, hence $\boldsymbol{\ell}=\boldsymbol{\kappa}$ by Proposition \ref{prop0}. So the restriction map from the null patterns of $M$ to the null patterns of $M'$ is injective. Moreover, it is also surjective. Indeed, let $\boldsymbol{\alpha}$ be a null pattern of $M'$, and let $r_1$ and $r_2$ be two adjacent regions of $P'$ (equivalently of $P$ and $D$). Then, by Proposition \ref{propab}, there is a null pattern $\boldsymbol{\beta}$ of $M$ which agrees with $\boldsymbol{\alpha}$ on $r_1$ and $r_2$. However, then both $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}_{res}$ are null patterns of $M'$ which agree on two adjacent regions. Again by Proposition \ref{propab}, $\boldsymbol{\alpha}=\boldsymbol{\beta}_{res}$. From this one-to-one correspondence we obtain the following observation. \emph{Observation}: Let $S$ be a subset of the regions of $P'$. Then, the number of null patterns of $M'$ where the regions of $S$ are not pushed is equal to the number of null patterns of $M$ where the regions of $S$ are not pushed. Theorems \ref{thm2}, \ref{thmp}, and \ref{thmk} determine how many initial color configurations can be solved without pushing certain regions in any version of the $k$-color region select game played on a reduced knot diagram, under some conditions on $k$ or on the diagram. Inspecting the proofs, we see that this number is determined by the number of null patterns of the corresponding game matrix where these regions are not pushed. Therefore, by the above observation, together with the fact that $P'$ is a reduced diagram whose regions can be identified with the regions of $P$, the proofs of Theorems \ref{thm2}, \ref{thmp} and \ref{thmk} imply the following result for general knot diagrams.
\begin{thm} Let $D$ be a knot diagram on which we play a version of the $k$-color region select game. Fix a checkerboard shading on $D$. Let $P$ be a connectedly reducible part of $D$. Then all the assertions of Theorem \ref{thm2} for $k=2$ and Theorem \ref{thmp} for $k<\infty$ hold true if the regions in the assertions belong to $P$. Assume further that $P$ admits an alternating signing of its unshaded regions. Then all the assertions of Theorem \ref{thmk} for $k\leq\infty$ hold true if the regions in the assertions belong to $P$. \end{thm} \bibliographystyle{plain} \begin{thebibliography}{99} \bibitem{AhSu} K. Ahara, M. Suzuki. An integral region choice problem on knot projection. Journal of Knot Theory and Its Ramifications Vol. 21, No. 11, 1250119 (2012). \bibitem{Che} Z. Cheng, H. Gao. On region crossing change and incidence matrix. Science China Mathematics 55(7) (2012), 1487-1495. \bibitem{Ka} L.H. Kauffman. Knots and Physics. Series on Knots and Everything: Volume 1, $3^{rd}$ Edition. DOI: https://doi.org/10.1142/4256 \bibitem{Kaw} T. Kawamura. Integral choice problem on link diagrams. \url{arXiv:2103.16167}. \bibitem{Ro} D. Rolfsen, Knots and Links, Publish or Perish, Inc., Berkeley, 1976. \bibitem{Shi2} A. Shimizu. A game based on knot theory. Asia Pac. Math. Newsl. 2 (4), 22-23. \bibitem{Shi} A. Shimizu. Region crossing change is an unknotting operation. J. Math. Soc. Japan 66(3): 693-708 (July, 2014). DOI: 10.2969/jmsj/06630693. \bibitem{Rotable} The knot atlas, \url{http://katlas.org/wiki//The Rolfsen Knot Table} \bibitem{And} APKPure APP. Region select V 1.1.2. Publish Date: 2018-05-22. \url{https://m.apkpure.com/region-select/jp.co.gecreative.regionselect} \end{thebibliography} \end{document}
2205.03191v7
http://arxiv.org/abs/2205.03191v7
Distribution of 3-regular and 5-regular partitions
\documentclass[12pt]{amsart} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage[letterpaper,top=2cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry} \usepackage{amsmath} \usepackage{graphicx} \usepackage[colorlinks=true, allcolors=blue]{hyperref} \usepackage{amssymb} \usepackage{comment} \usepackage{indentfirst} \setlength\parindent{2em} \usepackage{csquotes} \usepackage{theoremref} \usepackage{hyperref} \usepackage{geometry} \hypersetup{hidelinks, colorlinks=true, allcolors=cyan, pdfstartview=Fit, breaklinks=true} \usepackage{amsthm} \theoremstyle{plain} \newtheorem{theorem}{Theorem} \numberwithin{theorem}{section} \newtheorem{corollary}[theorem]{Corollary} \newtheorem*{corollary*}{Corollary} \newtheorem*{Example*}{Example} \newtheorem{algorithm}[theorem]{Algorithm} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{question}[theorem]{Question} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem*{def*}{Definition} \newtheorem*{theorem*}{Theorem} \newtheorem*{example}{Example} \newtheorem{Example}[theorem]{Example} \newtheorem{Problem}[theorem]{Problem} \newtheorem*{problem}{Problem} \newtheorem*{examples}{Examples} \newtheorem*{definition*}{Definition} \theoremstyle{remark} \newtheorem*{remark}{Remark} \newtheorem*{remarks}{Remarks} \newtheorem*{tworemarks}{Two remarks} \newtheorem*{threeremarks}{Three remarks} \newtheorem*{fourremarks}{Four remarks} \newtheorem*{notation}{Notation} \newcommand{\bracket}[1]{\left( #1 \right)} \newcommand{\modulo}[3]{#1\equiv#2\ \bracket{\mathrm{mod}\ #3}} \newcommand{\bthree}[1]{b_3\bracket{#1}} \newcommand{\bfive}[1]{b_5\bracket{#1}} \newcommand{\bbZ}[0]{\mathbb Z} \renewcommand{\baselinestretch}{1.2} \numberwithin{equation}{section} \title{\textbf{Distribution of 3-regular and 5-regular partitions}} \author{QI-YANG ZHENG} \date{} 
\address{Department of Mathematics, Sun Yat-sen University (Zhuhai Campus), Zhuhai} \email{[email protected]} \begin{document} \maketitle \begin{abstract} In this paper we study the functions $b_3(n)$ and $b_5(n)$, which denote the numbers of $3$-regular and $5$-regular partitions of $n$, respectively. Using the theory of modular forms, we prove several arithmetic properties of $b_3(n)$ and $b_5(n)$ modulo primes greater than $3$. \end{abstract} ~ \section{Introduction} The number of partitions of $n$ in which no part is a multiple of $k$, the $k$-regular partitions of $n$, is denoted by $b_k(n)$. $b_k(n)$ is also the number of partitions of $n$ into at most $k-1$ copies of each part. We agree that $b_3(0)=b_5(0)=1$ for convenience. Moreover, let $b_3(n)=b_5(n)=0$ if $n\not\in\bbZ_{\geq0}$. The $k$-regular partition function has the following generating function: \begin{equation} \notag \sum_{n=0}^\infty b_k(n)q^n=\prod_{n=1}^\infty\frac{1-q^{kn}}{1-q^n}. \end{equation} ~ In 1919, Ramanujan found three remarkable congruences of $p(n)$ as follows: \begin{equation} \notag \begin{aligned} p(5n+4)&\equiv0\ (\mathrm{mod}\ 5),\\ p(7n+5)&\equiv0\ (\mathrm{mod}\ 7),\\ p(11n+6)&\equiv0\ (\mathrm{mod}\ 11). \end{aligned} \end{equation} In 2000, Ono \cite{ono2000distribution} proved that for each prime number $m\geq5$, there exist infinitely many arithmetic progressions $An+B$ such that $$p(An+B)\equiv0\ (\mathrm{mod}\ m).$$ We call such congruences Ramanujan-type congruences. Subsequently, Lovejoy \cite{lovejoy2001divisibility} gave similar results for the function $Q(n)$, the number of partitions of $n$ into distinct parts. Following the strategies of Ono and Lovejoy, we prove the following theorem. ~ \begin{theorem} \label{infinitely many Ramanujan-type congruences} For each prime $m\geq5$, there are infinitely many Ramanujan-type congruences of $b_3(n)$ and $b_5(n)$ modulo $m$. \end{theorem} ~ Lovejoy and Penniston \cite{lovejoy20013} study the distribution of $b_3(n)$ modulo $3$.
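The coefficients $b_k(n)$ defined above can be computed directly from the generating product by the standard part-by-part recursion; a minimal Python sketch (the truncation bound $60$ is arbitrary):

```python
def regular_partition_counts(k, N):
    """Return [b_k(0), ..., b_k(N)]: the number of partitions of n into
    parts not divisible by k, computed from the generating product."""
    b = [0] * (N + 1)
    b[0] = 1
    for part in range(1, N + 1):
        if part % k == 0:
            continue  # parts divisible by k are excluded
        for n in range(part, N + 1):
            b[n] += b[n - part]
    return b

b3 = regular_partition_counts(3, 60)
b5 = regular_partition_counts(5, 60)
print(b3[:8])  # [1, 1, 2, 2, 4, 5, 7, 9]
print(b5[:8])  # [1, 1, 2, 3, 5, 6, 10, 13]
```

For instance, $b_5(4)=5$, $b_5(9)=25$, and $b_5(14)=100$, consistent with the congruence $b_5(5n+4)\equiv0\ (\mathrm{mod}\ 5)$ of Gordon and Ono recalled below.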
Recently, Keith and Zanello \cite{keith2022parity} study the parity of $b_3(n)$. As for the $5$-regular partitions, Calkin et al. \cite{calkin2008divisibility} and Hirschhorn and Sellers \cite{hirschhorn2010elementary} study the parity of $b_5(n)$. Gordon and Ono \cite{gordon1997divisibility} study the distribution of $b_5(n)$ modulo $5$. Moreover, they prove that \begin{equation} \label{b_5(5n+4)} b_5(5n+4)\equiv0\ (\mathrm{mod}\ 5). \end{equation} Up to now, we only know the distribution of $b_3(n)$ and $b_5(n)$ modulo the primes mentioned above. In this paper, we study the distribution of $b_3(n)$ and $b_5(n)$ modulo primes $m\geq5$. It is noteworthy that we still do not know anything about $b_5(n)$ modulo $3$. As a natural corollary of Theorem \ref{infinitely many Ramanujan-type congruences}, we have ~ \begin{corollary} If $m\geq5$ is a prime and $k=3,5$, then there are infinitely many positive integers $n$ for which $$b_k(n)\equiv0\ (\mathrm{mod}\ m).$$ \noindent More precisely, we have $$\#\{ 0\leq n\leq X\ :\ b_k(n)\equiv0\ (\mathrm{mod}\ m) \}\gg X.$$ \end{corollary} ~ For other residue classes $i\not\equiv0\ (\mathrm{mod}\ m)$, we provide a useful criterion to verify whether there are infinitely many $n$ such that $b_k(n)\equiv i\ (\mathrm{mod}\ m)$. ~ \begin{proposition} \label{other residue classes} If $m\geq5$ is a prime and there is some $k\in\mathbb{Z}$ such that $$b_3\left( mk+\frac{m^2-1}{12} \right)\equiv e\not\equiv0\ (\mathrm{mod}\ m),$$ \noindent then for each $i=1,2,\cdots,m-1$, we have $$\#\{ 0\leq n\leq X\ :\ b_3(n)\equiv i\ (\mathrm{mod}\ m) \}\gg \frac{X}{\log X}.$$ \noindent Moreover, if such $k$ exists, then $k<18(m-1)$. \end{proposition} We obtain similar results for $b_5(n)$. \begin{proposition} \label{other residue classes2} Let $m\geq5$ be a prime.
If there exists some $k\in\mathbb{Z}$ such that $$b_5\left( mk+\frac{m^2-1}{6} \right)\equiv e\not\equiv0\ (\mathrm{mod}\ m),$$ \noindent then for each $i=1,2,\cdots,m-1$, we have $$\#\{ 0\leq n\leq X\ |\ b_5(n)\equiv i\ (\mathrm{mod}\ m) \}\gg \frac{X}{\log X}.$$ \noindent Moreover, if such $k$ exists, then $k<10(m-1)$. \end{proposition} ~ \begin{remark} The congruence \eqref{b_5(5n+4)} shows that our criterion is inapplicable for the case $m=5$. However, the case $m=5$ is studied in \cite{gordon1997divisibility}. \end{remark} ~ \section{Preliminaries on modular forms} \noindent First we introduce the $U$ operator. If $j$ is a positive integer, then \begin{equation} \notag \left( \sum_{n=0}^\infty a(n)q^n \right)\ |\ U(j):=\sum_{n=0}^\infty a(jn)q^n. \end{equation} \noindent Recall that Dedekind's eta function is defined by \begin{equation} \notag \eta(z)=q^\frac{1}{24}\prod_{n=1}^\infty (1-q^n), \end{equation} \noindent where $q=e^{2\pi iz}$. ~ If $m$ is a prime, then let $M_{k}(\Gamma_0(N),\chi)_m$ (resp. $S_{k}(\Gamma_0(N),\chi)_m$) denote the $\mathbb{F}_m$-vector space of the reductions mod $m$ of the $q$-expansions of modular forms (resp. cusp forms) in $M_{k}(\Gamma_0(N),\chi)$ (resp. $S_{k}(\Gamma_0(N),\chi)$) with integer coefficients. Sometimes, we will use the notation $a\equiv_m b$ in place of $a\equiv b\ (\mathrm{mod}\ m)$ for convenience. We need the following theorem to construct modular forms \cite[Theorem 3]{gordon1993multiplicative}: \begin{theorem}[B. Gordon, K.
Hughes] \label{eta-quotient} Let $$f(z)=\prod_{\delta|N}\eta^{r_\delta}(\delta z)$$ \noindent be an $\eta$-quotient satisfying ~ \noindent $\mathrm{(\romannumeral1)}$ $$\sum_{\delta|N}\delta r_\delta\equiv0\ (\mathrm{mod}\ 24);$$ $\mathrm{(\romannumeral2)}$ $$\sum_{\delta|N}\frac{Nr_\delta}{\delta}\equiv0\ (\mathrm{mod}\ 24);$$ $\mathrm{(\romannumeral3)}$ $$k:=\frac{1}{2}\sum_{\delta|N}r_\delta\in\mathbb{Z}.$$ \noindent Then $$f\left(\frac{az+b}{cz+d}\right)=\chi(d)(cz+d)^kf(z)$$ \noindent for each $\begin{pmatrix} a & b\\ c & d \end{pmatrix}\in\Gamma_0(N)$, where $\chi$ is the Dirichlet character $(\mathrm{mod}\ N)$ defined by $$\chi(n):=\left( \frac{(-1)^k\prod_{\delta|N}\delta^{r_\delta}}{n} \right)\quad \text{if } n>0 \text{ and } (n,6)=1.$$ \end{theorem} ~ If $f(z)$ is holomorphic (resp. vanishes) at all cusps of $\Gamma_0(N)$, then $f(z)\in M_k(\Gamma_0(N),\chi)$ (resp. $S_k(\Gamma_0(N),\chi)$), since $\eta(z)$ never vanishes on $\mathcal{H}$. The following theorem (cf. \cite{martin1996multiplicative}) provides a useful criterion for computing the order of an $\eta$-quotient at the cusps of $\Gamma_0(N)$. \begin{theorem}[Y. Martin] \label{order of cusp} Let $c$, $d$ and $N$ be positive integers with $d\,|\,N$ and $(c,d)=1$. If $f(z)$ is an $\eta$-quotient satisfying the conditions of Theorem \ref{eta-quotient}, then the order of vanishing of $f(z)$ at the cusp $c/d$ is $$\frac{N}{24}\sum_{\delta|N}\frac{r_\delta(d^2,\delta^2)}{\delta(d^2,N)}.$$ \end{theorem} \section{Ramanujan-type congruences} \noindent In this section, we will prove Theorem \ref{infinitely many Ramanujan-type congruences} via the theory of modular forms. The generating function of the regular partition function is not itself a modular form. However, for primes $m\geq5$, it turns out that for a properly chosen function $h_m(n)$, \begin{equation} \notag \sum_{n=0}^\infty b_k(h_m(n))q^n \end{equation} \noindent is the Fourier expansion of a cusp form modulo $m$.
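Conditions (i)--(iii) of Theorem \ref{eta-quotient} can be checked numerically for the $\eta$-quotients $\eta^{am+1}(3z)\eta^{bm-1}(z)$ and $\eta^{am+1}(5z)\eta^{bm-1}(z)$ that arise in the proofs below; a short Python sketch (the prime bound $200$ is arbitrary):

```python
# Check conditions (i)-(iii) of the Gordon-Hughes theorem for the eta-quotients
# eta^{am+1}(3z) * eta^{bm-1}(z)   (N = 3, used for b_3) and
# eta^{am+1}(5z) * eta^{bm-1}(z)   (N = 5, used for b_5),
# for all primes 5 <= m < 200.

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def check(N, r, expected_weight):
    """r maps each divisor delta of N to the exponent r_delta."""
    assert sum(d * rd for d, rd in r.items()) % 24 == 0         # condition (i)
    assert sum((N // d) * rd for d, rd in r.items()) % 24 == 0  # condition (ii)
    total = sum(r.values())
    assert total % 2 == 0 and total // 2 == expected_weight     # condition (iii)

for m in (p for p in range(5, 200) if is_prime(p)):
    mp12 = m % 12                       # m' in the b_3 construction
    a, b = 9 - mp12, mp12 - 3
    check(3, {3: a * m + 1, 1: b * m - 1}, 3 * m)   # weight 3m, as claimed

    mp6 = m % 6                         # m' in the b_5 construction
    a, b = 5 - mp6, mp6 - 1
    check(5, {5: a * m + 1, 1: b * m - 1}, 2 * m)   # weight 2m, as claimed

print("conditions (i)-(iii) verified for all primes 5 <= m < 200")
```

The asserted weights $3m$ and $2m$ match the spaces $S_{3m}(\Gamma_0(3),\chi_3)$ and $S_{2m}(\Gamma_0(5),\chi_5)$ appearing in the proofs of the two theorems below.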
In fact, we have ~ \begin{theorem} \label{cusp form1} Let $m\geq5$ be a prime, then \begin{equation} \notag \sum_{n=0}^\infty b_3\left( \frac{mn-1}{12} \right)q^n\in S_{3m-3}(\Gamma_0(432),\chi_{12})_m, \end{equation} \noindent where $\chi_{12}(n)=\left( \frac{n}{3} \right)\left( \frac{-4}{n} \right)$. \end{theorem} ~ \begin{theorem} \label{cusp form2} Let $m\geq5$ be a prime, then \begin{equation} \notag \sum_{n=0}^\infty b_5\left( \frac{mn-1}{6} \right)q^n\in S_{2m-2}(\Gamma_0(180),\chi_5)_m, \end{equation} \noindent where $\chi_{5}(n)=\left( \frac{n}{5} \right)$. \end{theorem} ~ \begin{proof}[Proof of Theorem \ref{cusp form1}] We begin with an $\eta$-quotient \begin{equation} \notag f(m;z):=\frac{\eta(3z)}{\eta(z)}\eta^a(3mz)\eta^b(mz), \end{equation} \noindent where $m':=(m\ \mathrm{mod}\ 12)$, $a:=9-m'$ and $b:=m'-3$. It is easy to verify that $f(m;z)\equiv_m\eta^{am+1}(3z)\eta^{bm-1}(z)$ satisfies the conditions of Theorem \ref{eta-quotient}. Moreover, one can compute via Theorem \ref{order of cusp} that $\eta^{am+1}(3z)\eta^{bm-1}(z)$ has the minimal order of vanishing of $(m(3a+b)+2)/24$ at the cusp $\infty$ and $(m(a+3b)-2)/24$ at the cusp $0$. ~ Since $(m(3a+b)+2)/24=(m(12-m')+1)/12>0$ and $(m(a+3b)-2)/24=(mm'-1)/12>0$, $\eta^{am+1}(3z)\eta^{bm-1}(z)\in S_{3m}(\Gamma_0(3),\chi_3)$, where $\chi_3(n)=\left(\frac{n}{3}\right)$. On the other hand, \begin{equation} \notag f(m;z)=\sum_{n=0}^\infty b_3(n)q^{n+\frac{m(3a+b)+2}{24}}\cdot\prod_{n=1}^\infty(1-q^{3mn})^a(1-q^{mn})^b. \end{equation} \noindent Thus, \begin{equation} \label{after U(m)} \begin{aligned} &\ \ \ \ \eta^{am+1}(3z)\eta^{bm-1}(z)\ |\ U(m)\\ &\equiv_m\left(\sum_{n=0}^\infty b_3(n)q^{n+\frac{m(3a+b)+2}{24}}\ |\ U(m)\right)\cdot\prod_{n=1}^\infty(1-q^{3n})^a(1-q^{n})^b. 
\end{aligned} \end{equation} ~ \noindent As for the LHS of (\ref{after U(m)}), $$\sum_{n=0}^\infty b_3(n)q^{n+\frac{m(3a+b)+2}{24}}\ |\ U(m)={\sum_{n\geq0}}^* b_3(n)q^{\frac{24n+m(3a+b)+2}{24m}},$$ \noindent where ${\sum}^*$ means that the sum runs over the integral powers of $q$, i.e. $$24n+m(3a+b)+2\equiv0\ (\mathrm{mod}\ 24m).$$ \noindent It is easy to check that $24\,|\,24n+m(3a+b)+2$. Thus the condition becomes $m\,|\,12n+1$. As for the RHS of (\ref{after U(m)}), we have $$\eta^{am+1}(3z)\eta^{bm-1}(z)\ |\ U(m)\equiv_m\eta^{am+1}(3z)\eta^{bm-1}(z)\ |\ T(m),$$ \noindent where $T(m)$ denotes the usual Hecke operator acting on $S_{3m}(\Gamma_0(3),\chi_3)$. Now we analyze the $\eta$-product $\eta^6(z)\eta^6(3z)$. By Theorem \ref{eta-quotient} and Theorem \ref{order of cusp}, $\eta^6(z)\eta^6(3z)$ is a cusp form of weight $6$ and level $3$ and has the minimal order of vanishing of $1$ at the two cusps of $\Gamma_0(3)$. Since $\eta(z)$ never vanishes on $\mathcal{H}$, we can write $\eta^{am+1}(3z)\eta^{bm-1}(z)\ |\ T(m)=\eta^6(z)\eta^6(3z)g(m;z)$, where $g(m;z)\in M_{3m-6}(\Gamma_0(3),\chi_3)$. In summary, we have \begin{equation} \label{b_3(n)} \sum_{\genfrac{}{}{0pt}{}{n\geq0}{m|12n+1}} b_3(n)q^{\frac{24n+m(3a+b)+2}{24m}}\equiv_m\frac{\eta^6(z)\eta^6(3z)g(m;z)}{\prod_{n=1}^\infty(1-q^{3n})^a(1-q^{n})^b}. \end{equation} Replacing $q$ by $q^{12}$ and then multiplying both sides of (\ref{b_3(n)}) by $q^{-(3a+b)/2}$, we obtain $$\sum_{\genfrac{}{}{0pt}{}{n\geq0}{m|12n+1}} b_3(n)q^{\frac{12n+1}{m}}\equiv_m\eta^{6-a}(36z)\eta^{6-b}(12z)g(m;12z),$$ \noindent namely, $$\sum_{n=0}^\infty b_3\left(\frac{mn-1}{12}\right)q^n\equiv_m\eta^{6-a}(36z)\eta^{6-b}(12z)g(m;12z).$$ Using Theorem \ref{eta-quotient} and Theorem \ref{order of cusp} again, one can verify that $\eta^{6-a}(36z)\eta^{6-b}(12z)$ is a cusp form of weight $3$ and level $432$ and has the minimal order of vanishing of $m'$ at the cusps $c/d$ if $d=1,2,3,4,6,8,12,16,24,48$ and $12-m'$ if $d$ is any other divisor of $432$.
Therefore we obtain $$\eta^{6-a}(36z)\eta^{6-b}(12z)\in S_3\left(\Gamma_0(432),\chi_4\right),$$ \noindent where $\chi_4(n)=\left( \frac{-3}{n} \right)$. Together with $g(m;12z)\in M_{3m-6}(\Gamma_0(36),\chi_3)$, we have \begin{equation} \notag \sum_{n=0}^\infty b_3\left( \frac{mn-1}{12} \right)q^n\in S_{3m-3}(\Gamma_0(432),\chi_{12})_m. \end{equation} \end{proof} \begin{proof}[Proof of Theorem \ref{cusp form2}] For a fixed prime $m$, let \begin{equation} \notag f(m;z):=\frac{\eta(5z)}{\eta(z)}\eta^a(5mz)\eta^b(mz), \end{equation} \noindent where $m':=(m\ \mathrm{mod}\ 6)$ and $a:=5-m',\ b:=m'-1$. It is easy to show that $\modulo{f(m;z)}{\eta^{am+1}(5z)\eta^{bm-1}(z)}{m}$ and $$\eta^{am+1}(5z)\eta^{bm-1}(z)\in S_{2m}(\Gamma_0(5),\chi_5),$$ \noindent where $\chi_5(n)=\bracket{\frac{n}{5}}$. On the other hand, \begin{equation} \notag f(m;z)=\sum_{n=0}^\infty b_5(n)q^{\frac{24n+m(5a+b)+4}{24}}\cdot\prod_{n=1}^\infty(1-q^{5mn})^a(1-q^{mn})^b. \end{equation} \noindent Applying the $U(m)$ operator to $f(m;z)$, and using $\modulo{U(m)}{T(m)}{m}$, we obtain \begin{equation} \label{after U/T} \modulo{\sum_{n=0}^\infty b_5(n)q^{\frac{24n+m(5a+b)+4}{24}}\ |\ U(m)}{\frac{\eta^{am+1}(5z)\eta^{bm-1}(z)\ |\ T(m)}{\prod_{n=1}^\infty(1-q^{5n})^a(1-q^{n})^b}}{m}, \end{equation} \noindent where $T(m)$ denotes the usual Hecke operator acting on $S_{2m}(\Gamma_0(5),\chi_5)$. As for the LHS of (\ref{after U/T}), we have \begin{equation} \notag \sum_{n=0}^\infty b_5(n)q^{\frac{24n+m(5a+b)+4}{24}}\ |\ U(m)=\sum_{\genfrac{}{}{0pt}{}{n=0}{m|6n+1}}^\infty b_5(n)q^{\frac{24n+m(5a+b)+4}{24m}}. \end{equation} \noindent Using Theorems \ref{eta-quotient} and \ref{order of cusp}, one can verify that $\eta^4(5z)\eta^4(z)\in S_4(\Gamma_0(5))$ and that it has order $1$ at all cusps. Thus we can write $\eta^{am+1}(5z)\eta^{bm-1}(z)\ |\ T(m)=\eta^4(5z)\eta^4(z)g(m;z)$, where $g(m;z)\in M_{2m-4}(\Gamma_0(5),\chi_5)$.
Hence \begin{equation} \notag \modulo{\sum_{\genfrac{}{}{0pt}{}{n=0}{m|6n+1}}^\infty b_5(n)q^{\frac{6n+1}{6m}}}{\eta^{4-a}(5z)\eta^{4-b}(z)g(m;z)}{m}. \end{equation} \noindent Replacing $q$ by $q^6$ shows that \begin{equation} \notag \modulo{\sum_{\genfrac{}{}{0pt}{}{n=0}{m|6n+1}}^\infty b_5(n)q^{\frac{6n+1}{m}}}{\eta^{4-a}(30z)\eta^{4-b}(6z)g(m;6z)}{m}. \end{equation} \noindent Since $b_5(n)$ vanishes for non-integer $n$, we have \begin{equation} \notag \modulo{\sum_{n=0}^\infty b_5\bracket{\frac{mn-1}{6}}q^{n}}{\eta^{4-a}(30z)\eta^{4-b}(6z)g(m;6z)}{m}. \end{equation} \noindent Moreover, one can verify that $\eta^{4-a}(30z)\eta^{4-b}(6z)\in S_2(\Gamma_0(180))$. Together with $g(m;6z)\in M_{2m-4}(\Gamma_0(30),\chi_5)$, we have \begin{equation} \notag \sum_{n=0}^\infty b_5\left( \frac{mn-1}{6} \right)q^n\in S_{2m-2}(\Gamma_0(180),\chi_{5})_m. \end{equation} \end{proof} We need an important result due to Serre (cf. \cite[(6.4)]{serre1974divisibilite}), which is the key ingredient in establishing the existence of Ramanujan-type congruences. \begin{theorem}[J.-P. Serre] \label{Serre's theorem} The set of primes $l\equiv-1\ (\mathrm{mod}\ Nm)$ such that $$f\ |\ T(l)\equiv0\ (\mathrm{mod}\ m)$$ \noindent for each $f(z)\in S_k(\Gamma_0(N),\psi)_m$ has positive density, where $T(l)$ denotes the usual Hecke operator acting on $S_k(\Gamma_0(N),\psi)$. \end{theorem} Now Theorem \ref{infinitely many Ramanujan-type congruences} is an immediate corollary of the next two theorems. \begin{theorem} \label{main theorem} Let $m\geq5$ be a prime. A positive density of the primes $l$ have the property that \begin{equation} \notag b_3\left( \frac{mln-1}{12} \right)\equiv0\ (\mathrm{mod}\ m) \end{equation} \noindent for each nonnegative integer $n$ coprime to $l$. \end{theorem} \begin{theorem} \label{main theorem2} Let $m\geq5$ be a prime.
Then a positive density of primes $l$ have the property that \begin{equation} \notag b_5\left( \frac{mln-1}{6} \right)\equiv0\ (\mathrm{mod}\ m) \end{equation} \noindent holds for each integer $n$ coprime to $l$. \end{theorem} ~ \begin{proof}[Proof of Theorem \ref{main theorem}]Let \begin{equation} \notag F(m;z)=\sum_{n=0}^\infty b_3\left( \frac{mn-1}{12} \right)q^n, \end{equation} \noindent then $F(m;z)\in S_{3m-3}(\Gamma_0(432),\chi_{12})_m$. For a fixed prime $m\geq5$, let $S(m)$ denote the set of primes $l$ such that $$f\ |\ T(l)\equiv0\ (\mathrm{mod}\ m)$$ \noindent for each $f\in S_{3m-3}(\Gamma_0(432),\chi_{12})$. By Theorem \ref{Serre's theorem}, $S(m)$ contains a positive density of primes. So if $l\in S(m)$, we have $$F(m;z)\ |\ T(l)\equiv0\ (\mathrm{mod}\ m).$$ \noindent Then by the theory of Hecke operators we have \begin{equation} \notag F(m;z)\ |\ T(l)=\sum_{n=0}^\infty\left( b_3\left( \frac{mln-1}{12} \right)+\left(\frac{3}{l}\right)l^{3m-4}b_3\left( \frac{mn-l}{12l} \right) \right)q^n\equiv0\ (\mathrm{mod}\ m). \end{equation} Since $b_3(n)$ vanishes when $n$ is not an integer, $b_3\left((mn-l)/12l\right)=0$ for each $n$ coprime to $l$ and $l\neq m$. Thus $$b_3\left(\frac{mln-1}{12}\right)\equiv 0\ (\mathrm{mod}\ m)$$ \noindent holds for each integer $n$ coprime to $l$ with $l\neq m$. Moreover, the set of such primes $l$ has positive density. \end{proof} \begin{proof}[Proof of Theorem \ref{main theorem2}] Let \begin{equation} \notag F(m;z)=\sum_{n=0}^\infty b_5\left( \frac{mn-1}{6} \right)q^n\in S_{2m-2}(\Gamma_0(180),\chi_5)_m. \end{equation} \noindent By Theorem \ref{Serre's theorem}, the set of primes $l$ such that $$ F(m;z)\ |\ T(l)\equiv0\ (\mathrm{mod}\ m)$$ \noindent has positive density, where $T(l)$ denotes the Hecke operator acting on $S_{2m-2}(\Gamma_0(180),\chi_5)$.
Moreover, by the theory of Hecke operators, we have \begin{equation} \notag F(m;z)\ |\ T(l)=\sum_{n=0}^\infty \bracket{b_5\left( \frac{mln-1}{6} \right)+\bracket{\frac{l}{5}}l^{2m-3}b_5\left( \frac{mn-l}{6l} \right)}q^n. \end{equation} Since $b_5(n)$ vanishes for non-integer $n$, $b_5((mn-l)/6l)=0$ when $(n,l)=1$ and $l\neq m$. Thus we obtain \begin{equation} \notag \modulo{b_5\bracket{\frac{mln-1}{6}}}{0}{m} \end{equation} \noindent for each integer $n$ with $(n,l)=1$ and $l\neq m$. Moreover, the set of such primes $l$ has positive density. \end{proof} Since there are infinitely many choices of $l$, we may choose $l>3$. Replacing $n$ by $12nl+ml+12$, we see that $b_3(ml^2n+ml+(m^2l^2-1)/12)\equiv0\ (\mathrm{mod}\ m)$ holds for each nonnegative integer $n$. A similar argument applies to $b_5(n)$. Hence we obtain Theorem \ref{infinitely many Ramanujan-type congruences}. Moreover, since there are infinitely many choices of $l$, combining the Chinese Remainder Theorem with the previous results, we obtain ~ \begin{corollary} If $m$ is a squarefree integer, then there are infinitely many Ramanujan-type congruences of $b_3(n)$ modulo $m$; if $k$ is a squarefree integer coprime to $3$, then there are infinitely many Ramanujan-type congruences of $b_5(n)$ modulo $k$. \end{corollary} \section{Distribution on nonzero residues} Following Lovejoy \cite{lovejoy2001divisibility}, we need the following theorem due to Serre \cite{serre1974divisibilite}. \begin{theorem}[J.-P. Serre] \label{Serre's theorem v2} The set of primes $l\equiv1\ (\mathrm{mod}\ Nm)$ such that $$a(nl^r)\equiv(r+1)a(n)\ (\mathrm{mod}\ m)$$ \noindent for each $f(z)=\sum_{n=0}^\infty a(n)q^n\in S_k(\Gamma_0(N),\psi)_m$ has positive density, where $r$ is a positive integer and $n$ is coprime to $l$.
\end{theorem} ~ Here we introduce a theorem of Sturm (Theorem 1 of \cite{sturm1987congruence}), which provides a useful criterion for deciding via a finite computation when a modular form with integer coefficients is congruent to zero modulo a prime. \begin{theorem}[J. Sturm] \label{Sturm's theorem} Suppose $f(z)=\sum_{n=0}^\infty a(n)q^n\in M_k(\Gamma_0(N),\chi)_m$ such that $$a(n)\equiv0\ (\mathrm{mod}\ m)$$ \noindent for all $n\leq \frac{kN}{12}\prod_{p|N}\left( 1+\frac1p \right)$. Then $a(n)\equiv0\ (\mathrm{mod}\ m)$ for all $n\in\mathbb{Z}$. \end{theorem} ~ \begin{proof}[Proof of Theorem \ref{other residue classes}] If there is some $k\in\mathbb{Z}$ such that $$b_3\left( mk+\frac{m^2-1}{12} \right)\equiv e\not\equiv0\ (\mathrm{mod}\ m),$$ \noindent let $s=12k+m$. Since $b_3(n)$ vanishes for negative $n$, we have $mk+\frac{m^2-1}{12}\geq0$. Hence $s=12k+m>0$ and $$b_3\left(\frac{ms-1}{12}\right)=b_3\left( mk+\frac{m^2-1}{12} \right)\equiv e\ (\mathrm{mod}\ m).$$ \noindent For a fixed prime $m\geq5$, let $R(m)$ denote the set of primes $l$ such that $$a(nl^r)\equiv(r+1)a(n)\ (\mathrm{mod}\ m)$$ \noindent for each $f(z)=\sum_{n=0}^\infty a(n)q^n\in S_{3m-3}(\Gamma_0(432),\chi_{12})_m$, where $r$ is a positive integer and $n$ is coprime to $l$. By the proof of Theorem \ref{main theorem} we have $\sum_{n=0}^\infty b_3\left(\frac{mn-1}{12}\right)q^n\in S_{3m-3}(\Gamma_0(432),\chi_{12})_m$. Since $R(m)$ is infinite by Theorem \ref{Serre's theorem v2}, we may choose $l\in R(m)$ with $l>s$; then $$b_3\left(\frac{ml^rs-1}{12}\right)\equiv(r+1)b_3\left(\frac{ms-1}{12}\right)\equiv(r+1)e\ (\mathrm{mod}\ m).$$ \noindent Now fix $l$ and choose $\rho\in R(m)$ with $\rho>l$; then \begin{equation} \label{rho} b_3\left(\frac{m\rho n-1}{12}\right)\equiv2b_3\left(\frac{mn-1}{12}\right)\ (\mathrm{mod}\ m) \end{equation} \noindent holds for each $n$ coprime to $\rho$. For each $i=1,2,\cdots,m-1$, let $r_i\equiv i(2e)^{-1}-1\ (\mathrm{mod}\ m)$ and $r_i>0$.
Letting $n=l^{r_i}s$ in (\ref{rho}), we obtain $$b_3\left(\frac{m\rho l^{r_i}s-1}{12}\right)\equiv2b_3\left(\frac{ml^{r_i}s-1}{12}\right)\equiv2(r_i+1)e\equiv i\ (\mathrm{mod}\ m).$$ Since all the variables except $\rho$ are fixed, it suffices to show that the number of admissible choices of $\rho$ up to $X$ is $\gg X/\log X$, which follows easily from Theorem \ref{Serre's theorem v2} and the Prime Number Theorem. ~ Moreover, by Sturm's Theorem, if $b_3\left( \frac{mn-1}{12} \right)\equiv0\ (\mathrm{mod}\ m)$ for each $n\leq216(m-1)$, then $b_3\left( \frac{mn-1}{12} \right)\equiv0\ (\mathrm{mod}\ m)$ for all $n\in\mathbb{Z}$. Since $b_3(n)$ vanishes if $n$ is not an integer, it suffices to compute those $n$ of the form $12j+m$ for $12j+m\leq216(m-1)$. This implies $j<18(m-1)$. In addition, $$b_3\left( \frac{m(12j+m)-1}{12} \right)=b_3\left( mj+\frac{m^2-1}{12} \right).$$ \noindent Thus if such a $k$ exists, then $k<18(m-1)$. \end{proof} ~ \begin{proof}[Proof of Theorem \ref{other residue classes2}] The proof is similar to the one above, so we omit it. \end{proof} ~ \section{Examples of Ramanujan-type congruences} \label{examples} \noindent By Theorem \ref{Sturm's theorem} we find that $$\sum_{n=0}^\infty b_3\left(\frac{mn-1}{12}\right)q^n\ |\ T(l)\equiv0\ (\mathrm{mod}\ m)$$ \noindent for the pairs $(m,l)=(5,61)$, $(7,71)$, $(11,12553)$. An elementary computation yields that ~ \noindent \begin{proposition} \begin{equation} \notag b_3(18605n+127)\equiv0\ (\mathrm{mod}\ 5), \end{equation} \begin{equation} \notag b_3(35287n+207)\equiv0\ (\mathrm{mod}\ 7), \end{equation} \begin{equation} \notag b_3(1733355899n+126576)\equiv0\ (\mathrm{mod}\ 11). \end{equation} \end{proposition} ~ \noindent Our method does not apply to the case $m=3$, but one can prove that there are infinitely many Ramanujan-type congruences modulo $3$ via results of Lovejoy and Penniston \cite[Corollary 4]{lovejoy20013}.
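Two computations used in this section, the Sturm bound $\frac{kN}{12}\prod_{p|N}(1+\frac{1}{p})$ and the values $b_k(n)$, are easy to reproduce. The following sketch (the function names \texttt{sturm\_bound} and \texttt{regular\_partitions} are ours, not from the paper) checks that the bound for weight $3m-3$ and level $432$ is $216(m-1)$, and spot-checks the $n=0$ instances of the explicit congruences of this section, computing $b_k(n)$ directly as the number of partitions of $n$ with no part divisible by $k$.

```python
from fractions import Fraction

def sturm_bound(k, N):
    """Sturm bound (k*N/12) * prod_{p | N} (1 + 1/p) for M_k(Gamma_0(N))."""
    bound = Fraction(k * N, 12)
    n, p = N, 2
    while p * p <= n:
        if n % p == 0:
            bound *= Fraction(p + 1, p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:  # leftover prime factor
        bound *= Fraction(n + 1, n)
    return bound

def regular_partitions(k, N):
    """b_k(0), ..., b_k(N): partitions of n with no part divisible by k."""
    dp = [1] + [0] * N
    for part in range(1, N + 1):
        if part % k == 0:
            continue
        for n in range(part, N + 1):
            dp[n] += dp[n - part]
    return dp

# Weight 3m-3, level 432: the Sturm bound is 216(m-1), as used above.
for m in (5, 7, 11):
    assert sturm_bound(3 * m - 3, 432) == 216 * (m - 1)

# n = 0 instances of the explicit congruences of this section.
b3 = regular_partitions(3, 210)
assert b3[127] % 5 == 0 and b3[207] % 7 == 0 and b3[14] % 3 == 0
b5 = regular_partitions(5, 100)
assert b5[99] % 7 == 0 and b5[75] % 11 == 0
```

These are only consistency checks; verifying the full propositions requires the cusp form machinery above.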
~ \noindent \begin{proposition} If $m$ is a prime of the form $12k+1$, then $$b_3\left(m^3n+\frac{m^2-1}{12}\right)\equiv0\ (\mathrm{mod}\ 3).$$ \end{proposition} ~ \noindent For example, we obtain $$b_3\left(2197n+14\right)\equiv0\ (\mathrm{mod}\ 3).$$ ~ \noindent As for $b_5(n)$, we compute that $$\modulo{\sum_{n=0}^\infty b_5\bracket{\frac{mn-1}{6}}q^{n}\ |\ T(l)}{0}{m}$$ \noindent holds for $(m,l)=(7,17)$, $(11,41)$, $(13,16519)$. An elementary computation yields that ~ \begin{proposition} \begin{equation} \notag \modulo{b_5(2023n+99)}{0}{7}, \end{equation} \begin{equation} \notag \modulo{b_5(18491n+75)}{0}{11}, \end{equation} \begin{equation} \notag \modulo{b_5(3547405693n+35791)}{0}{13}. \end{equation} \end{proposition} ~ \noindent Moreover, the congruence $\modulo{b_5(5n+4)}{0}{5}$ implies that $$\modulo{\sum_{n=0}^\infty b_5\bracket{\frac{5n-1}{6}}q^{n}\ |\ T(l)}{0}{5}$$ \noindent holds for each prime $l$. ~ \section{More on \textit{k}-regular partitions} In this paper, we have proved for $b_k(n)\ (k=3,5)$ and each prime $m\geq5$ that there are infinitely many Ramanujan-type congruences modulo $m$. In fact, we conjecture that ~ \begin{conjecture} For $b_k(n)\ (k=3,5)$ and each positive integer $m$, there are infinitely many Ramanujan-type congruences modulo $m$. \end{conjecture} ~ We also have the following conjecture analogous to Newman's Conjecture. ~ \begin{conjecture} If $m$ is an integer and $k=3,5$, then for each residue class $r\ (\mathrm{mod}\ m)$ there are infinitely many integers $n$ for which $b_k(n)\equiv r\ (\mathrm{mod}\ m)$. \end{conjecture} Though Ramanujan-type congruences modulo primes $m\geq5$ exist, one may need enormous computation to find some. We encourage interested readers to find examples of congruences modulo other primes. One can modify our proof to get some partial results on $b_{11}(n)$, the number of $11$-regular partitions of $n$.
In fact, if $p$ is a prime for which $p>5$ and $\modulo{p}{5,7}{12}$, then $b_{11}(n)$ has infinitely many Ramanujan-type congruences modulo $p$. For example, we obtain $\modulo{b_{11}(43687n+230)}{0}{7}$. However, one can do better since from \cite{gordon1997divisibility} we have $\modulo{b_{11}(11n+6)}{0}{11}$. We intend to take these up in a future paper. ~ \section*{Acknowledgement} The ideas came to us after seeing the papers of Ono \cite{ono2000distribution} and Lovejoy \cite{lovejoy2001divisibility}. ~ \begin{thebibliography}{15} \bibitem{calkin2008divisibility} Calkin N, Drake N, James K, et al. Divisibility properties of the 5-regular and 13-regular partition functions[J]. Integers, 2008, 8(2): A60. \bibitem{gordon1993multiplicative} Gordon B, Hughes K. Multiplicative properties of eta-products II[J]. Contemporary Mathematics, 1993, 143: 415-415. \bibitem{gordon1997divisibility} Gordon B, Ono K. Divisibility of certain partition functions by powers of primes[J]. The Ramanujan Journal, 1997, 1(1): 25-34. \bibitem{hirschhorn2010elementary} Hirschhorn M D, Sellers J A. Elementary proofs of parity results for 5-regular partitions[J]. Bulletin of the Australian Mathematical Society, 2010, 81(1): 58-63. \bibitem{keith2022parity} Keith W J, Zanello F. Parity of the coefficients of certain eta-quotients[J]. Journal of Number Theory, 2022, 235: 275-304. \bibitem{lovejoy2001divisibility} Lovejoy J. Divisibility and distribution of partitions into distinct parts[J]. Advances in Mathematics, 2001, 158(2): 253-263. \bibitem{lovejoy20013} Lovejoy J, Penniston D. 3-regular partitions and a modular K3 surface[J]. Contemporary Mathematics, 2001, 291: 177-182. \bibitem{martin1996multiplicative} Martin Y. Multiplicative $\eta$-quotients[J]. Transactions of the American Mathematical Society, 1996, 348(12): 4825-4856. \bibitem{ono2000distribution} Ono K. Distribution of the partition function modulo m[J]. Annals of Mathematics, 2000: 293-307.
\bibitem{serre1974divisibilite} Serre J P. Divisibilité de certaines fonctions arithmétiques[J]. Séminaire Delange-Pisot-Poitou. Théorie des nombres, 1974, 16(1): 1-28. \bibitem{sturm1987congruence} Sturm J. On the congruence of modular forms[M]//Number theory. Springer, Berlin, Heidelberg, 1987: 275-280. \end{thebibliography} \end{document}
2205.03188v2
http://arxiv.org/abs/2205.03188v2
A Cyclic Analogue of Stanley's Shuffle Theorem
\documentclass{article} \usepackage[utf8]{inputenc} \usepackage{amsmath,dsfont} \allowdisplaybreaks[4] \usepackage[all]{xy} \usepackage{amssymb} \usepackage{amsthm} \usepackage{hyperref} \hypersetup{colorlinks=true,linkcolor=blue,citecolor=red} \usepackage{amsmath} \usepackage{amscd}\usepackage{verbatim} \usepackage{eurosym} \usepackage{float} \usepackage{color} \usepackage{array} \usepackage{dcolumn} \usepackage[mathscr]{eucal} \usepackage[all]{xy} \usepackage{hyperref} \usepackage{tikz} \usetikzlibrary{decorations.pathmorphing} \usetikzlibrary{decorations.pathreplacing} \usetikzlibrary{decorations.markings} \usetikzlibrary{decorations.footprints} \usetikzlibrary{decorations.shapes} \usetikzlibrary{decorations.fractals} \usetikzlibrary{arrows,automata,chains,shadows} \usepackage{mathrsfs} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsfonts,ifpdf} \usepackage{graphicx} \usepackage{times} \usepackage{float} \usepackage{epstopdf} \usepackage{cite} \usepackage{youngtab} \usepackage{ytableau} \ytableausetup{mathmode, boxsize=0.9em} \setlength{\evensidemargin}{0.3cm} \setlength{\oddsidemargin}{1.5cm} \parskip=10pt \frenchspacing \textwidth=15cm \textheight=23cm \parindent=16pt \oddsidemargin=0.5cm \evensidemargin=0.5cm \topmargin=-1.2cm \setlength{\baselineskip}{20pt} \renewcommand{\baselinestretch}{1.2} \newtheorem{thm}{Theorem}[section] \newtheorem{defi}[thm]{Definition} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{coro}[thm]{Corollary} \newtheorem{con}[thm]{Conjecture} \newtheorem{exci}{Exersise}[section] \newtheorem{pf}{proof}[section] \theoremstyle{remark} \newtheorem{exam}[thm]{Example} \def\theequation{\thesection.\arabic{equation}} \makeatletter \@addtoreset{equation}{section} \makeatother \makeindex \setcounter{tocdepth}{2} \def\qed{\hfill \rule{4pt}{7pt}} \def\pf{\vskip 0.2cm {\noindent \bf Proof.}\quad} \begin{document} \begin{center} {\Large \bf A Cyclic Analogue of Stanley's Shuffle Theorem} \end{center} 
\begin{center} {Kathy Q. Ji}$^{1}$ and {Dax T.X. Zhang}$^{2}$ \vskip 2mm $^{1,2}$ Center for Applied Mathematics, Tianjin University, Tianjin 300072, P.R. China\\[6pt] \vskip 2mm $^[email protected] and $^[email protected] \end{center} \vskip 6mm \noindent {\bf Abstract.} We introduce the cyclic major index of a cyclic permutation and give a bivariate analogue of the enumerative formula, due to Adin, Gessel, Reiner and Roichman, for the cyclic shuffles with a given cyclic descent number, which can be viewed as a cyclic analogue of Stanley's Shuffle Theorem. This answers a question of Adin, Gessel, Reiner and Roichman, which was posed again by Domagalski, Liang, Minnich, Sagan, Schmidt and Sietsema. \noindent {\bf Keywords:} descent, major index, permutation, shuffle, cyclic permutation, cyclic descent. \noindent {\bf AMS Classification:} 05A05, 05A19, 11P81 \vskip 6mm \section{Introduction} The main theme of this note is to establish a cyclic analogue of Stanley's Shuffle Theorem. Recall that Stanley's Shuffle Theorem establishes an explicit expression for the generating function of the number of shufflings of two disjoint permutations $\sigma$ and $\pi$ with a given descent number and a given major index. Here we adopt some common notation and terminology on permutations as used in \cite[Chapter 1]{Stanley-2012}. We say that $\pi=\pi_1\pi_2\cdots \pi_n$ is a permutation of length $n$ if it is a sequence of $n$ distinct letters (not necessarily from $1$ to $n$). For example, $\pi=9\,2\,8\,10\,12\,3\,7$ is a permutation of length $7$. Let $\mathcal{S}_n$ denote the set of all permutations of length $n$. Let $\pi \in \mathcal{S}_n$; we say that $1\leq i \leq n-1$ is a descent of $\pi$ if $\pi_i>\pi_{i+1}$.
The set of descents of $\pi$ is called the descent set of $\pi$, denoted ${\rm Des}(\pi)$, viz., \[{\rm Des}(\pi):=\{1\leq i \leq n-1:\pi_i>\pi_{i+1}\}.\] The number of its descents is called the descent number, denoted ${\rm des}(\pi)$, namely, \[{\rm des}(\pi):=\# {\rm Des}(\pi),\] where the hash symbol $\# \mathcal{S}$ stands for the cardinality of a set $\mathcal{S}$. The major index of $\pi$, denoted ${\rm maj}(\pi)$, is defined to be the sum of its descents. To wit, \[{\rm maj}(\pi):=\sum_{k \in {\rm Des}(\pi)}k.\] Let $\sigma \in \mathcal{S}_n$ and $\pi \in \mathcal{S}_m$ be disjoint permutations, that is, permutations with no letters in common. We say that $\alpha \in \mathcal{S}_{n+m}$ is a shuffle of $\sigma$ and $\pi$ if both $\sigma$ and $\pi$ are subsequences of $\alpha$. The set of shuffles of $\sigma$ and $\pi$ is denoted $\mathcal{S}(\sigma, \pi)$. For example, \[\mathcal{S}(6\,3,1\,4)=\{6\,3\,1\,4, 6\,1\,3\,4, 6\,1\,4\,3, 1\,4\,6\,3, 1\,6\,3\,4, 1\,6\,4\,3\}.\] Clearly, the number of permutations in $\mathcal{S}(\sigma, \pi) $ is ${m+n \choose n}$ for two disjoint permutations $\sigma \in \mathcal{S}_n$ and $\pi \in \mathcal{S}_m$. Stanley's Shuffle Theorem states that \begin{thm}\label{stanley} Let $\sigma \in \mathcal{S}_m$ and $\pi \in \mathcal{S}_n$ be disjoint permutations, where ${\rm des}(\sigma)=r$ and ${\rm des}(\pi)=s$. Then \begin{equation} \sum_{\alpha\in \mathcal{S}(\sigma,\pi) \atop {\rm des}(\alpha)=k}q^{{\rm maj}(\alpha)}= {m-r+s \brack k-r} {n-s+r \brack k-s} q^{{\rm maj}(\sigma)+{\rm maj}(\pi)+(k-s)(k-r)}. \end{equation} \end{thm} Here \[{n \brack m}=\frac{(1-q^n)(1-q^{n-1})\cdots (1-q^{n-m+1})}{(1-q^m)(1-q^{m-1})\cdots (1-q)}\] is the Gaussian polynomial (also called the $q$-binomial coefficient), see Andrews \cite[Chapter 1]{Andrews-1976}. Stanley \cite{Stanley-1972} obtained the above expression in light of the $q$-Pfaff-Saalsch\"utz identity in his setting of $P$-partitions.
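As an illustration, Theorem \ref{stanley} can be checked by brute force on the example $\mathcal{S}(6\,3,1\,4)$ above. The sketch below (helper names are ours) builds the Gaussian polynomial from the recurrence ${n\brack m}={n-1\brack m-1}+q^{m}{n-1\brack m}$ and compares both sides of the theorem coefficientwise.

```python
from itertools import combinations
from collections import Counter
from functools import lru_cache

def des_maj(w):
    """Descent number and major index of a linear permutation."""
    D = [i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1]]
    return len(D), sum(D)

@lru_cache(maxsize=None)
def qbin(n, m):
    """Gaussian polynomial [n choose m]_q as a tuple of coefficients."""
    if m < 0 or m > n:
        return (0,)
    if m == 0 or m == n:
        return (1,)
    a, b = qbin(n - 1, m - 1), qbin(n - 1, m)  # [n m] = [n-1 m-1] + q^m [n-1 m]
    out = [0] * max(len(a), m + len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[m + i] += c
    return tuple(out)

def shuffles(sigma, pi):
    """All interleavings of two disjoint sequences."""
    N = len(sigma) + len(pi)
    for pos in map(set, combinations(range(N), len(sigma))):
        it_s, it_p = iter(sigma), iter(pi)
        yield [next(it_s) if i in pos else next(it_p) for i in range(N)]

sigma, pi = (6, 3), (1, 4)
m, n = len(sigma), len(pi)
(r, maj_s), (s, maj_p) = des_maj(sigma), des_maj(pi)
lhs = {}
for w in shuffles(sigma, pi):
    k, mj = des_maj(w)
    lhs.setdefault(k, Counter())[mj] += 1
for k, counter in lhs.items():
    e = maj_s + maj_p + (k - s) * (k - r)
    rhs = Counter()
    for i, ca in enumerate(qbin(m - r + s, k - r)):
        for j, cb in enumerate(qbin(n - s + r, k - s)):
            if ca * cb:
                rhs[e + i + j] += ca * cb
    assert counter == rhs  # both sides of Theorem 1.1 agree
```

For this example the shuffles with one descent contribute $q+q^2+q^3$ and those with two descents contribute $q^3+q^4+q^5$, matching the right-hand side.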
Bijective proofs of Stanley's Shuffle Theorem have been given by Goulden \cite{Goulden-1985}, Stadler \cite{Stadler-1999}, and Ji and Zhang \cite{Ji-Zhang-2022}. Recently, Adin, Gessel, Reiner and Roichman \cite{Adin-Gessel-Reiner-Roichman-2021} introduced a cyclic version of quasisymmetric functions with a corresponding cyclic shuffle operation. A cyclic permutation $[\pi]$ of length $n$ can be viewed as an equivalence class of linear permutations $\pi=\pi_1\pi_2\cdots \pi_n$ of length $n$ under the cyclic equivalence relation $\pi_1\pi_2\cdots \pi_n \sim \pi_i\cdots \pi_n \pi_1\cdots \pi_{i-1}$ for all $2\leq i\leq n$. For example, \begin{equation}\label{examp-r} [4\,2\,3\,1]=\{4\,2\,3\,1,2\,3\,1\,4,3\,1\,4\,2,1\,4\,2\,3\} \end{equation} is a cyclic permutation of length $4$, where \[[4\,2\,3\,1]=[2\,3\,1\,4]=[3\,1\,4\,2]=[1\,4\,2\,3].\] Let $\pi_l$ be the largest element in $[\pi]$; the linear permutation $\hat{\pi}=\pi_l\pi_{l+1}\cdots \pi_n\pi_1\cdots \pi_{l-1}$ corresponding to the cyclic permutation $[\pi]$ is called the representative of the cyclic permutation $[\pi]$. For the example above, $4\,2\,3\,1$ is the representative of the cyclic permutation $[4\,2\,3\,1]$. Here and in the sequel, we use the representative to represent each cyclic permutation $[\pi]$. For example, we use $[4\,2\,3\,1]$ to represent the equivalence class in \eqref{examp-r}. In this way, all cyclic permutations of $\{1,2,3,4\}$ are listed as follows: \[[4\,1\,2\,3],[4\,3\,1\,2], [4\,1\,3\,2], [4\,2\,1\,3],[4\,2\,3\,1],[4\,3\,2\,1].\] Let $c\mathcal{S}_n$ denote the set of all cyclic permutations of length $n$ and let $[\sigma] \in c\mathcal{S}_n$ and $[\pi] \in c\mathcal{S}_m$ be disjoint cyclic permutations, that is, cyclic permutations with no letters in common. We say that $[\alpha] \in c\mathcal{S}_{n+m}$ is a cyclic shuffle of two cyclic permutations $[\sigma]$ and $[\pi]$ if both $[\sigma]$ and $[\pi]$ are circular subsequences of $[\alpha]$.
The set of cyclic shuffles of $[\sigma]$ and $[\pi]$ is denoted $c\mathcal{S}([\sigma],[\pi])$. For example, \begin{equation}\label{example-c} c\mathcal{S}([6\,3],[4\,1])=\{[6\,3\,{\bf 1}\,{\bf 4}],[6\,3\,{\bf 4}\,{\bf 1}],[6\,{\bf 1}\,{\bf 4}\,3],[6\,{\bf 4}\,{\bf 1}\,3],[6\,{\bf 1}\,3\,{\bf 4}],[6\,{\bf 4}\,3\,{\bf 1}]\}. \end{equation} The elements of $[\pi]$ in $[\alpha]$ are in boldface to distinguish them from the elements of $[\sigma]$. Figure \ref{fig} lays out the circular representations of cyclic shuffles of $[6\,3]$ and $[4\,1]$. \begin{figure}[h!] \begin{center} \begin{tikzpicture}[scale=0.5] \draw[line width=1pt] (0, 0) circle (2); \draw[line width=1pt] (0, 0) circle (1); \node at(90:2)[below=2pt]{6}; \node at(0:2)[left]{3}; \node at(-90:2)[above]{{\bf 1}}; \node at(180:2)[right]{{\bf 4}}; \node at(-90:2)[below=17pt]{[6\,3\,{\bf 1}\,{\bf 4}]}; \foreach \x in {90,0,-90,180} \draw [line width=1pt](\x:1.85)--(\x:2.15); \draw[-latex, line width=1pt] (-1,0) arc(-179: -180: 1); \end{tikzpicture} \quad \quad \quad \quad \quad \begin{tikzpicture}[scale=0.5] \draw[line width=1pt] (0, 0) circle (2); \draw[line width=1pt] (0, 0) circle (1); \node at(90:2)[below=2pt]{6}; \node at(0:2)[left]{3}; \node at(-90:2)[above]{{\bf 4}}; \node at(180:2)[right]{{\bf 1}}; \node at(-90:2)[below=17pt]{[6\,3\,{\bf 4}\,{\bf 1}]}; \foreach \x in {90,0,-90,180} \draw [line width=1pt](\x:1.85)--(\x:2.15); \draw[-latex, line width=1pt] (-1,0) arc(-179: -180: 1); \end{tikzpicture} \quad \quad \quad \quad \quad \begin{tikzpicture}[scale=0.5] \draw[line width=1pt] (0, 0) circle (2); \draw[line width=1pt] (0, 0) circle (1); \node at(90:2)[below=2pt]{6}; \node at(0:2)[left]{{\bf 1}}; \node at(-90:2)[above]{{\bf 4}}; \node at(180:2)[right]{3}; \node at(-90:2)[below=17pt]{[6\,{\bf 1}\,{\bf 4}\,3]}; \foreach \x in {90,0,-90,180} \draw [line width=1pt](\x:1.85)--(\x:2.15); \draw[-latex, line width=1pt] (-1,0) arc(-179: -180: 1); \end{tikzpicture} \\[15pt] \begin{tikzpicture}[scale=0.5] 
\draw[line width=1pt] (0, 0) circle (2); \draw[line width=1pt] (0, 0) circle (1); \node at(90:2)[below=2pt]{6}; \node at(0:2)[left]{{\bf 4}}; \node at(-90:2)[above]{{\bf 1}}; \node at(180:2)[right]{3}; \node at(-90:2)[below=17pt]{[6\,{\bf 4}\,{\bf 1}\,3]}; \foreach \x in {90,0,-90,180} \draw [line width=1pt](\x:1.85)--(\x:2.15); \draw[-latex, line width=1pt] (-1,0) arc(-179: -180: 1); \end{tikzpicture} \quad \quad \quad \quad \quad \begin{tikzpicture}[scale=0.5] \draw[line width=1pt] (0, 0) circle (2); \draw[line width=1pt] (0, 0) circle (1); \node at(90:2)[below=2pt]{6}; \node at(0:2)[left]{{\bf 1}}; \node at(-90:2)[above]{3}; \node at(180:2)[right]{{\bf 4}}; \node at(-90:2)[below=17pt]{[6\,{\bf 1}\,3\,{\bf 4}]}; \foreach \x in {90,0,-90,180} \draw [line width=1pt](\x:1.85)--(\x:2.15); \draw[-latex, line width=1pt] (-1,0) arc(-179: -180: 1); \end{tikzpicture} \quad \quad \quad \quad \quad \begin{tikzpicture}[scale=0.5] \draw[line width=1pt] (0, 0) circle (2); \draw[line width=1pt] (0, 0) circle (1); \node at(90:2)[below=2pt]{6}; \node at(0:2)[left]{{\bf 4}}; \node at(-90:2)[above]{3}; \node at(180:2)[right]{{\bf 1}}; \node at(-90:2)[below=17pt]{[6\,{\bf 4}\,3\,{\bf 1}]}; \foreach \x in {90,0,-90,180} \draw [line width=1pt](\x:1.85)--(\x:2.15); \draw[-latex, line width=1pt] (-1,0) arc(-179: -180: 1); \end{tikzpicture} \end{center} \caption{The circular representations of cyclic shuffles of $[6\,3]$ and $[4\,1]$.}\label{fig} \end{figure} Evidently, \begin{equation}\label{cycli-enum} \# c\mathcal{S}([\sigma],[\pi])=(m+n-1){m+n-2\choose m-1}, \end{equation} for two disjoint cyclic permutations $[\sigma] \in c\mathcal{S}_n$ and $[\pi] \in c\mathcal{S}_m$, see \cite[Eq.~(7)]{Domagalski-Liang-Minnich-Sagan-Schmidt-Sietsema-2021}. In order to study Solomon's descent algebra, Cellini \cite{Cellini-1995, Cellini-1998} introduced the cyclic descent set. 
Let $\pi=\pi_1\pi_2\ldots\pi_n$ be a linear permutation; the cyclic descent set of $\pi$ is defined to be \begin{equation*} {\rm cDes} (\pi)=\{1\leq i \leq n\colon \pi_i>\pi_{i+1}\} \end{equation*} with the convention $\pi_{n+1}=\pi_1$. The number of its cyclic descents is called the cyclic descent number, denoted $ {\rm cdes}(\pi)$, viz., \[{\rm cdes}(\pi):=\# {\rm cDes}(\pi).\] Let $[\pi]$ be a cyclic permutation of length $n$; note that all linear permutations corresponding to $[\pi]$ have the same number of cyclic descents, so we may define the cyclic descent number of $[\pi]$ as \begin{equation}\label{defi-cdesnum} {\rm cdes}\left([\pi]\right)={\rm cdes}\left(\pi\right), \end{equation} where $\pi$ is any linear permutation corresponding to $[\pi]$. Based on their setting of cyclic quasi-symmetric functions, Adin, Gessel, Reiner and Roichman \cite{Adin-Gessel-Reiner-Roichman-2021} established the following enumerative formula for the cyclic shuffles with a given cyclic descent number. \begin{thm}[Adin-Gessel-Reiner-Roichman]\label{AGRR} Let $[\sigma]\in c\mathcal{S}_m$ and $[\pi] \in c\mathcal{S}_n$ be disjoint cyclic permutations, where ${\rm cdes}([\sigma])=r$ and ${\rm cdes}([\pi])=s$. Let $c\mathcal{S}([\sigma],[\pi],k)$ denote the set of cyclic shuffles of $[\sigma]$ and $[\pi]$ with cyclic descent number $k$. Then \begin{align}\label{AGRR-e} \# c\mathcal{S}([\sigma],[\pi],k) = \frac{k(m-r)(n-s)+(m+n-k)rs}{(m-r+s)(n-s+r)}\dbinom{m-r+s}{k-r}\dbinom{n-s+r}{k-s}. \end{align} \end{thm} Summing \eqref{AGRR-e} over all $k$ gives \eqref{cycli-enum} upon using the Chu-Vandermonde identity \cite[Eq.~(3.3.10) with $q=1$]{Andrews-1976}. At the end of their paper, Adin, Gessel, Reiner and Roichman \cite{Adin-Gessel-Reiner-Roichman-2021} asked for a notion of cyclic major index which provides a bivariate analogue of Theorem \ref{AGRR}.
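Theorem \ref{AGRR}, together with \eqref{cycli-enum}, can be spot-checked by brute force on the running example $[\sigma]=[6\,3]$ and $[\pi]=[4\,1]$. In the sketch below (helper names are ours), a candidate $[\alpha]$ is a cyclic shuffle exactly when its restrictions to the letters of $[\sigma]$ and of $[\pi]$ agree with them as cyclic words.

```python
from itertools import permutations
from math import comb

def cdes(w):
    """Cyclic descent number of (a representative of) a cyclic permutation."""
    return sum(w[i] > w[(i + 1) % len(w)] for i in range(len(w)))

def cyc_eq(u, v):
    """Equality of cyclic words."""
    return len(u) == len(v) and any(u == v[i:] + v[:i] for i in range(len(v)))

def is_cyclic_shuffle(alpha, sigma, pi):
    return (cyc_eq(tuple(x for x in alpha if x in sigma), sigma)
            and cyc_eq(tuple(x for x in alpha if x in pi), pi))

sigma, pi = (6, 3), (4, 1)
m, n = len(sigma), len(pi)
r, s = cdes(sigma), cdes(pi)

# Enumerate cyclic classes via representatives starting with the largest letter.
letters = sigma + pi
rest = [x for x in letters if x != max(letters)]
counts = {}
for p in permutations(rest):
    alpha = (max(letters),) + p
    if is_cyclic_shuffle(alpha, sigma, pi):
        k = cdes(alpha)
        counts[k] = counts.get(k, 0) + 1

# Total count, Eq. (7) of Domagalski et al.
assert sum(counts.values()) == (m + n - 1) * comb(m + n - 2, m - 1)
# Refined count by cyclic descent number, Theorem (AGRR).
for k, c in counts.items():
    num = k * (m - r) * (n - s) + (m + n - k) * r * s
    den = (m - r + s) * (n - s + r)
    assert c * den == num * comb(m - r + s, k - r) * comb(n - s + r, k - s)
```

For this example the six cyclic shuffles split as one class each with cyclic descent number $1$ and $3$, and four classes with cyclic descent number $2$.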
This question was posed again by Domagalski, Liang, Minnich, Sagan, Schmidt and Sietsema in \cite[Question 4.1]{Domagalski-Liang-Minnich-Sagan-Schmidt-Sietsema-2021}. In this paper, we introduce the cyclic major index of a cyclic permutation $[\pi]$. Let $[\pi]$ be a cyclic permutation of length $n$ with representative $\hat{\pi}=\hat{\pi}_1\hat{\pi}_2\cdots\hat{\pi}_n$, where $\hat{\pi}_1$ is the largest element in $[\pi]$. The cyclic major index of the cyclic permutation $[\pi]$ is defined to be \begin{equation} {\rm maj}([\pi])={\rm maj}(\hat{\pi}). \end{equation} For example, the representative of the cyclic permutation $[4\,1\,3\,2]$ is $\hat{\pi}=4\,1\,3\,2$, and so its cyclic major index is defined to be the major index of $\hat{\pi}=4\,1\,3\,2$. This gives ${\rm maj}([4\,1\,3\,2])=1+3=4.$ In order to state the cyclic analogue of Stanley's Shuffle Theorem, we will need to introduce the cyclic descent-bottom set of a cyclic permutation and recall the splitting map $S_i$ defined by Domagalski, Liang, Minnich, Sagan, Schmidt and Sietsema in \cite{Domagalski-Liang-Minnich-Sagan-Schmidt-Sietsema-2021}, which maps a cyclic permutation to a linear permutation. Let $[\pi]$ be a cyclic permutation of length $n$; the cyclic descent-bottom set of $[\pi]$ is defined as \begin{equation}\label{defi-cbottoms} {\rm cB_d} ([\pi])=\{\pi_{i+1} \colon \pi_i>\pi_{i+1}, \ \text{for} \ 1\leq i\leq n\} \end{equation} with the convention $\pi_{n+1}=\pi_1$. It should be mentioned that the descent-bottom set of a linear permutation has been studied by Haglund and Visontai \cite{Haglund-Visontai-2012} and by Hall and Remmel \cite{Hall-Remmel-2008,Hall-Remmel-2009}. It is manifest from \eqref{defi-cdesnum} and \eqref{defi-cbottoms} that \[\# {\rm cB_d}([\pi])={\rm cdes}([\pi]).\] For example, \[ {\rm cB_d} ([6\,4\,1\,3])=\{1,4\}.\] Let $[\pi]$ be a cyclic permutation of length $n$.
For $i\in [\pi]$, Domagalski, Liang, Minnich, Sagan, Schmidt and Sietsema \cite{Domagalski-Liang-Minnich-Sagan-Schmidt-Sietsema-2021} defined the map $S_i([\pi])$ to be the unique permutation corresponding to $[\pi]$ which starts with $i$. For example, \[ S_5([5\,1\,3\,4])=5\,1\,3\,4, \, S_1([5\,1\,3\,4])=1\,3\,4\,5,\, S_3([5\,1\,3\,4])=3\,4\,5\,1,\] and \[ S_4([5\,1\,3\,4])=4\,5\,1\,3.\] We obtain the following generating function of the number of cyclic shufflings of two disjoint cyclic permutations with a given cyclic descent number and a given cyclic major index. \begin{thm}[Cyclic Stanley's Shuffle Theorem]\label{cStanley} Let $[\sigma]\in c\mathcal{S}_m$ and $[\pi] \in c\mathcal{S}_n$ be disjoint cyclic permutations, where ${\rm cdes}([\sigma])=r$ and ${\rm cdes}([\pi])=s$. Moreover, the largest element of $[\sigma]$ and $[\pi]$ is in $[\sigma]$. Then \begin{align}\label{cStanley-e} & \sum_{[\alpha]\in c\mathcal{S}([\sigma],[\pi]) \atop {\rm cdes}([\alpha])=k}q^{{\rm maj} ([\alpha])}\nonumber\\[5pt] &\quad ={m-r+s \brack k-r}{n-s+r-1 \brack k-s-1}q^{{\rm maj} ([\sigma]) +(k-s)(k-r)} \sum_{ i \not\in {\rm cB_d}([\pi])}q^{{\rm maj} (S_i([\pi]))}\nonumber\\[5pt] &\quad \quad +{m-r+s-1 \brack k-r}{n-s+r \brack k-s}q^{{\rm maj} ([\sigma]) +(k-s+1)(k-r)} \sum_{ i \in {\rm cB_d}([\pi])}q^{{\rm maj} (S_i([\pi]))}. \end{align} \end{thm} Setting $q\rightarrow 1$ in Theorem \ref{cStanley}, we obtain \eqref{AGRR-e}, that is, \begin{align*} & \# c\mathcal{S}([\sigma],[\pi],k) \\[5pt] &\quad =\sum_{i \not\in {\rm cB_d}[\pi]} \dbinom{m-r+s}{k-r}\dbinom{n-s+r-1}{n-k+r}+\sum_{i \in {\rm cB_d}[\pi]} \dbinom{m-r+s-1}{k-r}\dbinom{n-s+r}{n-k+r}\\[8pt] &\quad=(n-s)\dbinom{m-r+s}{k-r}\dbinom{n-s+r-1}{n-k+r}+s \dbinom{m-r+s-1}{k-r}\dbinom{n-s+r}{n-k+r}\\[10pt] &\quad=\frac{k(m-r)(n-s)+(m+n-k)rs}{(m-r+s)(n-s+r)}\dbinom{m-r+s}{k-r}\dbinom{n-s+r}{k-s}. 
\end{align*} \section{Proof of Theorem \ref{cStanley}} This section is devoted to the proof of Theorem \ref{cStanley} with the aid of Stanley's Shuffle Theorem \ref{stanley}. \noindent{\it Proof of Theorem \ref{cStanley}.} Assume that $[\sigma]\in c\mathcal{S}_m$ and $[\pi] \in c\mathcal{S}_n$ are two disjoint cyclic permutations, where ${\rm cdes}([\sigma])=r$ and ${\rm cdes}([\pi])=s$. Moreover, the largest element of $[\sigma]$ and $[\pi]$ is in $[\sigma]$. Let $\hat{\sigma}=\hat{\sigma}_1\hat{\sigma}_2\cdots \hat{\sigma}_{m}$ be the representative of the cyclic permutation $[\sigma]$, that is, $\hat{\sigma}_1$ is the largest element of $[\sigma]$. Under the assumptions of the theorem, we see that $\hat{\sigma}_1$ is greater than all elements in $[\pi]$. Define \[\hat{\sigma}'=\hat{\sigma}_2\cdots \hat{\sigma}_{m}.\] For $i\in [\pi]$, recall that $S_i([\pi])$ is the unique permutation corresponding to $[\pi]$ which starts with $i$. We claim that there is a bijection $\psi$ between the set $c\mathcal{S}([\sigma],[\pi])$ and the set $\bigcup_{i\in [\pi]} \mathcal{S}(\hat{\sigma}', S_i([\pi]))$, where $c\mathcal{S}([\sigma],[\pi])$ denotes the set of cyclic shuffles of $[\sigma]$ and $[\pi]$ and $\mathcal{S}(\hat{\sigma}', S_i([\pi]))$ denotes the set of linear shuffles of $\hat{\sigma}'$ and $S_i([\pi])$. Moreover, for $\alpha \in c\mathcal{S}([\sigma],[\pi])$, we have $\psi(\alpha)=\hat{\alpha}'$ such that \begin{equation}\label{eqn-s1} {\rm cdes}([\alpha])= {\rm des}(\hat{\alpha}')+1 \end{equation} and \begin{equation}\label{eqn-s2} {\rm maj}([\alpha])= {\rm maj}(\hat{\alpha}')+{\rm des}(\hat{\alpha}')+1. \end{equation} Let $\alpha \in c\mathcal{S}([\sigma],[\pi])$ and let $\hat{\alpha}=\hat{\alpha}_1\hat{\alpha}_2\cdots \hat{\alpha}_{n+m}$ be the representative of $[\alpha]$, which is a linear permutation corresponding to $[\alpha]$ such that $\hat{\alpha}_1$ is the largest element in $[\alpha]$.
Then $\hat{\alpha}_1=\hat{\sigma}_1$ and \begin{equation}\label{eqn-t1} {\rm cdes}([\alpha])={\rm des}(\hat{\alpha}). \end{equation} Define \[\hat{\alpha}'=\hat{\alpha}_2\hat{\alpha}_3\cdots \hat{\alpha}_{n+m},\] which clearly belongs to $\bigcup_{i\in [\pi]} \mathcal{S}(\hat{\sigma}', S_i([\pi]))$. From the construction of $\hat{\alpha}'$ and \eqref{eqn-t1}, we see that $[\alpha]$ and $\hat{\alpha}'$ satisfy \eqref{eqn-s1} and \eqref{eqn-s2}. Moreover, this process is reversible. This proves the claim. Hence it follows from \eqref{eqn-s1} and \eqref{eqn-s2} that \begin{align}\label{pf-thm-e3} & \sum_{[\alpha]\in c\mathcal{S}([\sigma],[\pi]) \atop {\rm cdes}([\alpha])=k}q^{{\rm maj} ([\alpha])}\nonumber \\[5pt] &\quad =\sum_{i\in [\pi]} \sum_{\hat{\alpha}' \in \mathcal{S}(\hat{\sigma}',S_i([\pi])) \atop {\rm des}(\hat{\alpha}')=k-1}q^{{\rm maj} (\hat{\alpha}')+k}\nonumber \\[5pt] &\quad =\sum_{ i \not\in {\rm cB_d}([\pi])} \sum_{\hat{\alpha}' \in \mathcal{S}(\hat{\sigma}',S_i([\pi])) \atop {\rm des}(\hat{\alpha}')=k-1}q^{{\rm maj} (\hat{\alpha}')+k} +\sum_{ i \in {\rm cB_d}([\pi])} \sum_{\hat{\alpha}'\in \mathcal{S}(\hat{\sigma}',S_i([\pi])) \atop {\rm des}(\hat{\alpha}')=k-1}q^{{\rm maj} (\hat{\alpha}')+k}. \end{align} Observe that ${\rm des}(\hat{\sigma}')={\rm cdes}([\sigma])-1=r-1$ and that ${\rm des}(S_i([\pi]))={\rm cdes}([\pi])-1=s-1$ if $i \in {\rm cB_d}([\pi])$, while ${\rm des}(S_i([\pi]))={\rm cdes}([\pi])=s$ otherwise. Moreover, \begin{equation}\label{eqn-s3} {\rm maj}(\hat{\sigma}')={\rm maj}([\sigma])-r.
\end{equation} Hence, by Theorem \ref{stanley}, we obtain \begin{align}\label{pf-thm-e4} &\sum_{ i \not\in {\rm cB_d}([\pi])} \sum_{\hat{\alpha}'\in \mathcal{S}(\hat{\sigma}',S_i([\pi])) \atop {\rm des}(\hat{\alpha}')=k-1}q^{{\rm maj} (\hat{\alpha}')+k} \nonumber \\[5pt] &\quad =\sum_{ i \not\in {\rm cB_d}([\pi])} {m-r+s \brack k-r}{n-s+r-1 \brack k-s-1}q^{{\rm maj} (\hat{\sigma}') +{\rm maj} (S_i([\pi]))+(k-s-1)(k-r)+k} \end{align} and \begin{align}\label{pf-thm-e5} &\sum_{ i \in {\rm cB_d}([\pi])} \sum_{\hat{\alpha}'\in \mathcal{S}(\hat{\sigma}',S_i([\pi])) \atop {\rm des}(\hat{\alpha}')=k-1}q^{{\rm maj} (\hat{\alpha}')+k} \nonumber\\[5pt] &\quad =\sum_{ i \in {\rm cB_d}([\pi])} {m-r+s-1 \brack k-r}{n-s+r \brack k-s}q^{{\rm maj} (\hat{\sigma}') +{\rm maj} (S_i([\pi]))+(k-s)(k-r)+k} . \end{align} Substituting \eqref{pf-thm-e4} and \eqref{pf-thm-e5} into \eqref{pf-thm-e3} and using \eqref{eqn-s3}, we obtain \eqref{cStanley-e}. This completes the proof. \qed \vskip 0.2cm \noindent{\bf Acknowledgment.} We are grateful to Bruce Sagan for bringing this question to our attention and for providing useful comments and suggestions. This work was supported by the National Science Foundation of China. \begin{thebibliography}{0} \setlength{\itemsep}{-.8mm} \addcontentsline{toc}{section}{} \bibitem{Adin-Gessel-Reiner-Roichman-2021} R.M. Adin, I.M. Gessel, V. Reiner and Y. Roichman, Cyclic quasi-symmetric functions, Israel J. Math. 243 (2021) 437--500. \bibitem{Andrews-1976} G.E. Andrews, The Theory of Partitions, Addison-Wesley Publishing Co., 1976. \bibitem{Cellini-1995} P. Cellini, A general commutative descent algebra, J. Algebra 175 (1995) 990--1014. \bibitem{Cellini-1998} P. Cellini, Cyclic Eulerian elements, European J. Combin. 19 (1998) 545--552. \bibitem{Domagalski-Liang-Minnich-Sagan-Schmidt-Sietsema-2021} R. Domagalski, J. Liang, Q. Minnich, B.E. Sagan, J. Schmidt and A. Sietsema, Cyclic shuffle compatibility, S\'em. Lothar. Combin. 85 (2020--2021), Art.
B85d, 11 pp. \bibitem{Goulden-1985} I.P. Goulden, A bijective proof of Stanley's shuffling theorem, Trans. Amer. Math. Soc. 288 (1985) 147--160. \bibitem{Haglund-Visontai-2012} J. Haglund and M. Visontai, Stable multivariate Eulerian polynomials and generalized Stirling permutations, European J. Combin. 33 (2012) 477--487. \bibitem{Hall-Remmel-2008} J.T. Hall and J.B. Remmel, Counting descent pairs with prescribed tops and bottoms, J. Combin. Theory Ser. A 115 (2008) 693--725. \bibitem{Hall-Remmel-2009} J.T. Hall and J.B. Remmel, $q$-counting descent pairs with prescribed tops and bottoms, Electron. J. Combin. 16 (2009) Research Paper 111, 25 pp. \bibitem{Ji-Zhang-2022} K.Q. Ji and D.T.X. Zhang, Stanley's Shuffle Theorem and Insertion Lemma, arXiv:2203.13543v1. \bibitem{Stadler-1999} J.D. Stadler, Stanley's shuffling theorem revisited, J. Combin. Theory Ser. A 88 (1999) 176--187. \bibitem{Stanley-1972} R.P. Stanley, Ordered structures and partitions, Memoirs of the American Mathematical Society, No. 119, American Mathematical Society, Providence, R.I., 1972, iii+104 pp. \bibitem{Stanley-2012} R.P. Stanley, Enumerative Combinatorics, Vol. I, 2nd ed., Cambridge University Press, Cambridge, 2012. \end{thebibliography} \end{document}
2205.03167v2
http://arxiv.org/abs/2205.03167v2
Absence of principal eigenvalues for higher rank locally symmetric spaces
\documentclass{amsart} \usepackage[hmargin=3.5cm,vmargin=3.5cm]{geometry} \usepackage[utf8]{inputenc} \usepackage{amsmath ,amssymb ,amsthm ,stmaryrd , mathtools, mathrsfs, relsize } \usepackage{graphicx,svg,xcolor} \usepackage{enumerate,blindtext} \usepackage[linktocpage=true, backref = page]{hyperref} \usepackage{todonotes, lipsum} \usepackage[all,cmtip]{xy} \usetikzlibrary{matrix} \makeatletter \g@addto@macro\bfseries{\boldmath} \makeatother \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{korollar}[theorem]{Corollary} \newtheorem{folgerung}[theorem]{Folgerung} \newtheorem{calc}[theorem]{Calculation} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem*{theorem*}{Theorem} \newtheorem*{notation}{Notation} \newcommand{\Pg}{\ensuremath{P_\gamma}} \newcommand{\G}{\ensuremath{\Gamma}} \newcommand{\g}{\ensuremath{\gamma}} \newcommand{\ds}{\displaystyle} \newcommand{\cU}{\ensuremath{\mathcal{U}}} \newcommand{\C}{\ensuremath{\mathbb{C}}} \renewcommand{\H}{\ensuremath{\mathbb{H}}} \newcommand{\bbS}{\ensuremath{\mathbb{S}}} \newcommand{\M}{\ensuremath{\mathbb{M}}} \newcommand{\N}{\ensuremath{\mathbb{N}}} \newcommand{\cO}{\ensuremath{\mathcal{O}}} \newcommand{\cP}{\ensuremath{\mathcal{P}}} \newcommand{\Q}{\ensuremath{\mathbb{Q}}} \newcommand{\ov}{\ensuremath{\overline}} \newcommand{\mf}{\ensuremath{\mathfrak}} \newcommand{\mc}{\ensuremath{\mathcal}} \newcommand{\Z}{\ensuremath{\mathbb{Z}}} \newcommand{\R}{\ensuremath{\mathbb{R}}} \newcommand{\inv}{\ensuremath{^{-1}}} \newcommand{\norm}[1]{\left\|#1\right\|} \newcommand{\Tor}{\ensuremath{\operatorname{Tor}}} \renewcommand{\d}[1][t]{\ensuremath{\left.\frac{d}{d#1}\right|_{#1=0}}} \renewcommand{\Re}{\ensuremath{\operatorname{Re}}} \renewcommand{\Im}{\ensuremath{\operatorname{Im}}} 
\newcommand{\del}[2][x]{\ensuremath{\frac{\partial^#2}{\partial #1^#2}}} \newcommand{\on}{\ensuremath{\operatorname}} \DeclareMathOperator{\Ad}{Ad} \DeclareMathOperator{\ad}{ad} \DeclareMathOperator{\cone}{cone} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\res}{Res} \DeclareMathOperator{\im}{im} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\w}{w} \DeclareMathOperator{\HC}{HC} \DeclareMathOperator{\Op}{Op} \DeclareMathOperator{\pr}{pr} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Ext}{Ext} \DeclareMathOperator{\WF}{WF} \DeclareMathOperator{\isom}{Isom} \DeclareMathOperator{\dom}{dom} \DeclareMathOperator{\conv}{conv} \DeclareMathOperator{\spa}{span} \DeclareMathOperator{\Diff}{Diff} \DeclareMathOperator{\RT}{RT} \makeatletter \newcommand*\bigcdot{\mathpalette\bigcdot@{.5}} \newcommand*\bigcdot@[2]{\mathbin{\vcenter{\hbox{\scalebox{#2}{$\m@th#1\bullet$}}}}} \makeatother \setlength{\parindent}{0em} \setlength{\parskip}{\medskipamount} \author{Tobias Weich} \thanks{T. Weich: Paderborn University, Warburger Straße 100, 33100 Paderborn, Germany, e-mail: \href{mailto:[email protected]}{[email protected]}} \author{Lasse L. Wolf} \thanks{L. L. Wolf (corresponding author): Paderborn University, Warburger Straße 100, 33100 Paderborn, Germany, e-mail: \href{mailto:[email protected]}{[email protected]}} \title{Absence of principal eigenvalues for higher rank locally symmetric spaces} \begin{document} \begin{abstract} Given a geometrically finite hyperbolic surface of infinite volume it is a classical result of Patterson that the positive Laplace-Beltrami operator has no $L^2$-eigenvalues $\geq 1/4$. In this article we prove a generalization of this result for the joint $L^2$-eigenvalues of the algebra of commuting differential operators on Riemannian locally symmetric spaces $\Gamma\backslash G/K$ of higher rank. 
We derive dynamical assumptions on the $\Gamma$-action on the geodesic and the Satake compactifications which imply the absence of the corresponding principal eigenvalues. A large class of examples fulfilling these assumptions are the non-compact quotients by Anosov subgroups. \end{abstract} \maketitle \section{Introduction} Let $\mathbb H =SL(2,\R)/SO(2)$ be the hyperbolic plane equipped with the Riemannian metric of constant negative curvature and $\Gamma\subset SL(2,\R)$ a discrete torsion-free subgroup. Then $\Gamma\backslash \mathbb H$ is a Riemannian surface of constant negative curvature and the relations between the geometry of $\Gamma\backslash \mathbb H$, the group theoretic properties of $\Gamma$, the dynamical properties of the $\Gamma$-action on $\mathbb H$ or its compactification, and the spectrum of the positive Laplace-Beltrami operator $\Delta$ have been intensively studied over several decades. Let us focus on the discrete $L^2$-spectrum of the Laplace-Beltrami operator, i.e. those $\mu\in \R$ such that $(\Delta-\mu)f=0$ for some $f\in L^2(\Gamma\backslash \mathbb H)$, $f\neq 0$. If $\Gamma\subset SL(2,\R)$ is cocompact, then $\mu_0=0$ is always an eigenvalue corresponding to the constant function and Weyl's law for the elliptic selfadjoint operator $\Delta$ implies that there is a discrete set of infinitely many eigenvalues $0=\mu_0<\mu_1\leq\ldots$ of finite multiplicity. From a representation theoretic perspective there is a clear distinction between $\mu_i\in \,]0,1/4[$ and $\mu_i\geq 1/4$. The former correspond to complementary series representations and the latter to principal series representations occurring in $L^2(\Gamma\backslash SL(2,\mathbb R))$. We call the eigenvalues accordingly principal eigenvalues (if $\mu_i\geq1/4$) and complementary or exceptional eigenvalues (if $\mu_i\in ]0,1/4[$). 
Merely by discreteness of the spectrum we know that there are at most finitely many complementary eigenvalues and infinitely many principal eigenvalues. If we pass to non-compact $\Gamma\backslash \mathbb H$, the situation becomes more intricate: For the modular surface $SL(2,\mathbb Z)\backslash \mathbb H$, which is non-compact but of finite volume, it is well known that there are no complementary eigenvalues but still infinitely many principal eigenvalues obeying a Weyl asymptotic. In general the question of existence of principal eigenvalues on finite volume hyperbolic surfaces is wide open. A long-standing conjecture by Phillips and Sarnak \cite{Sarnak85} states that for a generic lattice $\Gamma \subset SL(2,\mathbb R)$ there should be no principal eigenvalues. If we pass to hyperbolic surfaces of infinite volume the situation is much better understood. A classical theorem by Patterson \cite{Patterson} states that if $\mathrm{vol}(\Gamma\backslash \mathbb H) = \infty$ and $\Gamma\subset SL(2,\mathbb R)$ is geometrically finite, then there are no principal eigenvalues. The result was later generalized to real hyperbolic spaces of higher dimensions by Lax and Phillips \cite{laxphil}. Although we are not aware of a reference, it seems to be folklore that the statement holds for general rank one locally symmetric spaces. In this article we are interested in a generalization of Patterson's theorem to higher rank locally symmetric spaces. Let us briefly\footnote{A more detailed description of the setting will be provided in Section~\ref{sec:notation}.} introduce the setting: Let $X=G/K$ be a Riemannian symmetric space of non-compact type and $\Gamma\subset G$ a discrete torsion-free subgroup. We will be interested in the $L^2$-spectrum of the locally symmetric space $\Gamma\backslash X$. As for hyperbolic surfaces, the Laplace-Beltrami operator is a canonical geometric differential operator whose spectral theory can be studied.
If the symmetric space is of higher rank, however, there are further $G$-invariant differential operators on $X$ that descend to differential operators on $\Gamma\backslash X$. From many perspectives it is more desirable to study the spectral theory of the whole algebra of invariant differential operators $\mathbb D(G/K)$ instead of just the spectrum of the Laplacian. In order to introduce the definition of the joint spectrum of $\mathbb D(G/K)$ we recall that $\mathbb D(G/K)$ is a commutative algebra generated by $r\geq 1$ algebraically independent differential operators, where $r$ equals the rank of the symmetric space $X$. After a choice of generating differential operators, a joint eigenvalue of these commuting differential operators would be given by an element in $\C^r$. A more intrinsic way of defining the spectrum, which does not require choosing any generators, is provided by the Harish-Chandra isomorphism. This is an algebra isomorphism $\HC: \mathbb D(G/K) \to \text{Poly}(\mathfrak a^*)^W$ between the invariant differential operators and the complex-valued Weyl group invariant polynomials on the dual of $\mathfrak a =\mathrm{Lie}(A)$, where $A$ is the abelian subgroup of $G$ in the Iwasawa decomposition $G=KAN$. If we fix $\lambda \in \mathfrak a_\C^*$ and compose the Harish-Chandra isomorphism with the evaluation of the polynomial at $\lambda$, we obtain a character $\chi_\lambda:= \text{ev}_\lambda\circ \HC : \mathbb D(G/K)\to \mathbb C$. With this notation we call $\lambda\in\mathfrak a_\C^*$ a joint $L^2$-eigenvalue on $\Gamma\backslash X$ if there exists $f\in L^2(\Gamma \backslash X)$ such that for all $D\in \mathbb D(G/K)$: \[ Df=\chi_\lambda(D) f. \] As for the hyperbolic surfaces we can distinguish two kinds of $L^2$-eigenvalues: The purely imaginary joint eigenvalues $\lambda \in i\mathfrak a^*$ correspond to principal series representations and we call them \emph{principal joint $L^2$-eigenvalues}.
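For orientation, let us spell out the rank one case (a standard computation; we use the inner product on $\mathfrak a^*$ induced by the Killing form, extended bilinearly to $\mathfrak a_\C^*$): for $G=SL(2,\R)$ the algebra $\mathbb D(G/K)$ is generated by the Laplace-Beltrami operator $\Delta$, and
\[
\chi_\lambda(\Delta)=\langle\rho,\rho\rangle-\langle\lambda,\lambda\rangle.
\]
Identifying $\mathfrak a^*\cong\R$ so that $\rho=\tfrac12$ (the curvature $-1$ normalization), a principal spectral parameter $\lambda=it\in i\mathfrak a^*$ with $t\in\R$ yields $\chi_\lambda(\Delta)=\tfrac14+t^2\geq\tfrac14$, recovering the threshold $1/4$ that separates principal from complementary eigenvalues on hyperbolic surfaces.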
The remaining eigenvalues are called \emph{complementary} or \emph{exceptional} eigenvalues. These two kinds of eigenvalues are not only distinguished by representation theory, but they also behave differently from the point of view of spectral theory: In their seminal paper \cite{DKV}, Duistermaat, Kolk and Varadarajan consider the case of cocompact discrete subgroups $\Gamma\subset G$. They prove that there exist infinitely many principal joint eigenvalues and their asymptotic growth is precisely described by a Weyl law with a remainder term. They furthermore prove an upper bound on the number of complementary eigenvalues whose growth rate is strictly smaller than the Weyl asymptotic of the principal eigenvalues. There are thus far fewer complementary than principal eigenvalues. The most prominent non-compact higher rank locally symmetric space is without doubt $\G\backslash X =SL(n,\mathbb Z) \backslash SL(n,\mathbb R)/ SO(n)$. By \cite{Mue07} (and in a more general setting by \cite{LindenVenkatesh}) it is known that there are infinitely many joint $L^2$-eigenvalues. Assuming the generalized Ramanujan conjecture, which implies the absence of complementary eigenvalues (see e.g. \cite{Blomerbrumley}), we would get infinitely many principal joint $L^2$-eigenvalues. If one replaces the full modular group by a congruence subgroup $\G(n)$ of level $n\geq3$, the existence of infinitely many principal joint $L^2$-eigenvalues has been shown by Lapid and Müller \cite{lapidmueller09}. More precisely, there is a Weyl law for the principal joint eigenvalues and the number of complementary eigenvalues is shown to be bounded by a function of lower order growth. In the recent article \cite{edwardsoh22} Edwards and Oh give examples and conditions on the discrete subgroup $\G$ which imply that the complementary eigenvalues are not only fewer in number but indeed absent.
The main examples are self-joinings of convex-cocompact subgroups in $PSO(n,1)$, but they conjecture that this holds for every Anosov subgroup. In this article we are interested in conditions on the group $\Gamma$ which imply the absence of principal eigenvalues. In order to state our main theorem, recall the definition of a wandering point: If $\Gamma$ acts continuously on a topological space $T$, then a point $t\in T$ is called wandering if there exists a neighborhood $U\subset T$ of $t$ such that $\{\gamma \in \Gamma: \gamma U\cap U\neq \emptyset\}$ is finite. The collection of all wandering points is called the wandering set $\w(\Gamma,T)$. We can now state our main theorem. \begin{theorem}\label{thm:main_intro} Let $X=G/K$ be a Riemannian symmetric space of non-compact type and $\Gamma\subset G$ a discrete torsion-free subgroup. Let $\ov X$ be the geodesic or the maximal Satake compactification (see Sections~\ref{sec:geo} and \ref{sec:satake}) and let $\w(\G,\ov X)$ be the wandering set for the action of $\G$ on $\ov X$. If $\w(\G,\ov X)\cap\partial \ov X\neq \emptyset$, then there are no principal joint $L^2$-eigenvalues on $\Gamma\backslash X$. \end{theorem} Let us compare our theorem to the classical result of Patterson: First of all, for $\mathbb H$ the geodesic compactification and the Satake compactification coincide. Furthermore, if $\Gamma\subset SL(2,\R)$ is geometrically finite, then it is well known that the following are equivalent: \begin{enumerate} \item $\mathrm{vol}(\Gamma\backslash \mathbb H) = \infty$ \item the limit set of $\Gamma$ is not the whole boundary $\Lambda(\Gamma)\neq \partial \mathbb H$ \item there is a non-empty open set of discontinuity $\Omega(\Gamma)\subset \partial \mathbb H$ on which $\Gamma$ acts properly discontinuously. \end{enumerate} The last point immediately implies the existence of a wandering point of the $\Gamma$-action on $\overline{\mathbb H}$.
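The wandering condition is easy to visualize in the hyperbolic plane. As an illustration (our own toy computation, not taken from the paper), consider the cyclic group generated by $g=\mathrm{diag}(2,1/2)$, which acts on the boundary $\partial\mathbb H\cong\R\cup\{\infty\}$ by $x\mapsto 4x$: boundary points away from the two fixed points $0,\infty$ are wandering, while the fixed point $0$ is not.

```python
# Powers of g = diag(2, 1/2) act on the boundary R ∪ {∞} of H by x ↦ 4^n x.
def image(n, interval):
    a, b = interval
    return (4.0 ** n * a, 4.0 ** n * b)

def overlaps(i1, i2):
    # non-empty intersection of two open intervals
    return i1[0] < i2[1] and i2[0] < i1[1]

U = (0.5, 3.0)  # neighborhood of x = 1 on the boundary
hits = [n for n in range(-10, 11) if overlaps(image(n, U), U)]
print(hits)  # [-1, 0, 1] -- finitely many returns, so x = 1 is wandering

V = (-1.0, 1.0)  # neighborhood of the fixed point x = 0
hits0 = [n for n in range(-10, 11) if overlaps(image(n, V), V)]
print(len(hits0))  # 21 -- every power returns, so x = 0 is not wandering
```

Shrinking $V$ does not help, since $0$ is fixed by every power of $g$; this matches the fact that the fixed points lie in the limit set rather than in the wandering set.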
In this sense our theorem boils down to the classical result of Patterson. The higher-dimensional result of Lax and Phillips on $\mathbb H^n$ is also easily recovered from our main theorem: If $\Gamma\subset PSO(1,n)$ is geometrically finite and $\Gamma\backslash \mathbb H^n$ of infinite volume, then at least one non-compact end has to be a funnel or a cusp of non-maximal rank, and the existence of such a non-compact end directly implies the wandering condition of Theorem~\ref{thm:main_intro}. As discrete subgroups of higher rank semisimple Lie groups are known to be constrained by strong rigidity results, it is a valid question whether there are interesting examples in higher rank which fulfill the wandering condition of Theorem~\ref{thm:main_intro}. We address this question in Section~\ref{sec:examples} and we will see that all images of Anosov representations fulfill our condition. This is a consequence of recent results on compactifications of Anosov symmetric spaces \cite{KL18,GGKW15} that are modeled on the Satake compactification. A further natural question is whether, in the higher rank setting, one can also obtain the result from the assumption of infinite volume of the locally symmetric space instead of the dynamical assumption on the group action used in our theorem. We do not know a definitive answer. However, it should be noted that there is so far no good notion of a geometrically finite group $\Gamma$ in higher rank. Without the assumption of geometric finiteness, to the best of our knowledge it is unknown even for $SL(2,\mathbb R)$ whether infinite volume implies the absence of principal eigenvalues. \emph{Outline of the proof and the article}. Let $f\in C^\infty(X)$ be the $\G$-invariant lift of a joint eigenfunction for $\mathbb D(X)$ that is in $L^2(\G\backslash X)$. The proof of Theorem~\ref{thm:main_intro} relies on the analysis of the asymptotic behavior of $f$ towards the boundary of the compactification at infinity.
For the result on the geodesic compactification it suffices to study the asymptotics of $f$ into the regular directions. In order to obtain the result on the Satake compactification we are required to also analyze the behavior in singular directions along the different boundary strata of the Weyl chambers. In a first step we show that $f$ satisfies a certain growth condition called \emph{moderate growth}. This is done by elliptic regularity combined with coarse estimates on the injectivity radius (see Section~\ref{sec:modgrowth}). The knowledge of moderate growth then allows us (see Section~\ref{sec:absence}) to use asymptotic expansion results for $f$ by van den Ban-Schlichtkrull \cite{vdBanSchl87, vdBanSchl89LocalBD}. For the asymptotics into the regular directions, i.e. in the interior of the positive Weyl chamber $ \mf a^ +\subset \mf a$, it follows from \cite{vdBanSchl87} that the leading term for the expansion of $f(k\exp(tH)K)$ with $k\in K$ and $H\in \mf a^ +$ is \[ \sum_{w\in W} p_w(k)e^ {(w\lambda-\rho)(tH)} \quad \text{as} \quad t\to \infty, \] where $W$ is the Weyl group, $\rho$ the usual half sum of roots and $\lambda\in i\mathfrak a^*$ a regular spectral parameter (for singular spectral parameters the formula becomes slightly more complicated but is still tractable). The wandering condition of $\G$ acting on the geodesic compactification $X\cup X(\infty)$ yields a neighborhood $U$ in $X\cup X(\infty)$ of some point in $X(\infty)$ such that $f\in L^ 2(U\cap X)$. Combining this with the expansion and the description of such neighborhoods $U$ implies that all the boundary values $p_w$ vanish on an open subset of $K$. This implies, again by \cite{vdBanSchl87}, that $f=0$. 
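To indicate why square-integrability near a wandering boundary point is so restrictive, here is a back-of-the-envelope version of the argument for regular $\lambda\in i\mf a^*$: each exponential $e^{(w\lambda-\rho)(tH)}$ has modulus $e^{-t\rho(H)}$, while the volume density in the Cartan integration formula grows like $\prod_{\alpha\in\Sigma^+}\sinh(\alpha(tH))^{m_\alpha}\sim c\, e^{2t\rho(H)}$. Hence
\[
\Big|\sum_{w\in W} p_w(k)\,e^{(w\lambda-\rho)(tH)}\Big|^2\prod_{\alpha\in\Sigma^+}\sinh(\alpha(tH))^{m_\alpha}\;\sim\; c\,\Big|\sum_{w\in W} p_w(k)\,e^{(w\lambda)(tH)}\Big|^2,
\]
and the right-hand side is an almost periodic function of $t$ whose mean value is $c\sum_{w\in W}|p_w(k)|^2$ (the $w\lambda$ being pairwise distinct for regular $\lambda$). Its integral over $t\in(R,\infty)$ therefore diverges unless all $p_w(k)$ vanish, which is how $f\in L^2(U\cap X)$ forces the boundary values to vanish on an open set.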
The result for the Satake compactification follows the same strategy but involves more complicated expansions from \cite{vdBanSchl89LocalBD} that describe the asymptotic behavior into the singular directions along the different boundary strata of the Weyl chamber (Section~\ref{sec:satabsence}). Finally, in Section~\ref{sec:examples} we provide some examples of higher rank locally symmetric spaces that fulfill the wandering condition of Theorem~\ref{thm:main_intro}. In particular, we show that all quotients by Anosov subgroups fulfill the assumption. \emph{Acknowledgement.} We thank Valentin Blomer for his suggestion to study this question and for numerous stimulating discussions. We furthermore thank Samuel Edwards, Joachim Hilgert, Lizhen Ji, Fanny Kassel, Michael Magee, Werner Müller and Beatrice Pozzetti for discussions and advice on the literature. This work has received funding from the Deutsche Forschungsgemeinschaft (DFG) Grant No. WE 6173/1-1 (Emmy Noether group “Microlocal Methods for Hyperbolic Dynamics”) as well as SFB-TRR 358/1 2023 — 491392403 (CRC ``Integral Structures in Geometry and Representation Theory''). \section{Preliminaries} \subsection{Symmetric spaces}\label{sec:notation} In this section we fix the notation for the present article. Let $G$ be a real semisimple non-compact Lie group with finite center and with Iwasawa decomposition $G=KAN$. Furthermore, let $M\coloneqq Z_K(A)$ be the centralizer of $A$ in $K$. We denote by $\mf g, \mf a, \mf n, \mf k,\mf m$ the corresponding Lie algebras. We have a $K$-invariant inner product on $\mf g$ that is induced by the Killing form and the Cartan involution. We further have the orthogonal Bruhat decomposition $\mf g =\mf a \oplus \mf m \oplus \bigoplus _{\alpha\in\Sigma} \mf g_\alpha$ into root spaces $\mf g_\alpha$ with respect to the $\mf a$-action via the adjoint action $\ad$, i.e. $\mf g_\alpha = \{Y\in \mf g\mid [H,Y]= \alpha(H) Y \;\forall H\in \mf a\}$.
Here $\Sigma=\{\alpha\in \mf a^\ast\mid \mf g_\alpha\neq 0\}\subseteq \mf a^\ast$ is the set of restricted roots. Denote by $W$ the Weyl group of the root system of restricted roots. Let $n$ be the real rank of $G$ and $\Pi$ (resp. $\Sigma^+$) the simple (resp. positive) system in $\Sigma$ determined by the choice of the Iwasawa decomposition. Let $m_\alpha \coloneqq \dim_\R \mf g_\alpha$ and $\rho \coloneqq \frac 12 \sum_{\alpha\in \Sigma^+} m_\alpha \alpha$. Let $\mf a_+ \coloneqq \{H\in \mf a\mid \alpha(H)>0 \,\forall \alpha\in\Pi\}$ denote the positive Weyl chamber. If $\ov {A^+} \coloneqq \exp (\ov {\mf a_+})$, then we have the Cartan decomposition $G=K\ov {A ^+}K$. The main object of our study is the symmetric space $X=G/K$ of non-compact type. On $X$ with a natural $G$-invariant measure $dx$ we have the integral formula \begin{align}\label{eq:intKAK} \int_{X} f(x)dx=\int_K\int_{\mf a_+} f(k\exp(H)) \prod_{\alpha\in\Sigma^+} \sinh(\alpha(H))^{m_\alpha} dHdk. \end{align} (see \cite[Ch.~I Theorem~5.8]{gaga}). \begin{example} If $G=SL_n(\R)$, then we choose $K=SO(n)$, $A$ as the set of diagonal matrices of positive entries with determinant 1, and $N$ as the set of upper triangular matrices with 1's on the diagonal. $\mf a$ is the abelian Lie algebra of diagonal matrices and the set of restricted roots is $\Sigma= \{\varepsilon_i-\varepsilon_j\mid i\neq j\}$ where $\varepsilon_i(\lambda)$ is the $i$-th diagonal entry of $\lambda$. The positive system corresponding to the Iwasawa decomposition is $\Sigma^+=\{\varepsilon_i-\varepsilon_j\mid i<j\}$ with simple system $\Pi=\{\alpha_i = \varepsilon_i-\varepsilon_{i+1}\}$. The positive Weyl chamber is $\mf a_+=\{\operatorname{diag}(\lambda_1,\ldots,\lambda_n)\mid \lambda_1>\cdots>\lambda_n\}$ and the Weyl group is the symmetric group $S_n$ acting by permutation of the diagonal entries.
\begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth,trim = 5cm 15cm 9cm 10cm,clip]{A2_essential.pdf} \caption{The root system for the special case $G=SL_3(\R)$: There are three positive roots $\Sigma^+=\{\alpha_1,\alpha_2,\alpha_1+\alpha_2\}$. As all root spaces are one-dimensional the special element $\rho=\frac 12 \sum_{\alpha\in \Sigma^+} m_\alpha \alpha$ equals $\alpha_1+\alpha_2$.} \end{figure} \end{example} \subsection{Invariant differential operators}\label{sec:harishchandra} Let $\mathbb D(G/K)$ be the algebra of \emph{$G$-invariant differential operators} on $G/K$, i.e. differential operators commuting with the left translation by elements $g\in G$. Then we have an algebra isomorphism $\HC\colon \mathbb D(G/K)\to \text{Poly}(\mf a^\ast)^W$ from $\mathbb D(G/K)$ to the $W$-invariant complex polynomials on $\mf a^\ast$ which is called the \emph{Harish-Chandra homomorphism} (see \cite[Ch.~II~Theorem 5.18]{gaga}). For $\lambda\in \mf a^\ast_\C$ let $\chi_\lambda$ be the character of $\mathbb D(G/K)$ defined by $\chi_\lambda(D)\coloneqq \HC(D)(\lambda)$. Obviously, $\chi_\lambda= \chi_{w\lambda}$ for $w\in W$. Furthermore, the $\chi_\lambda$ exhaust all characters of $\mathbb D(G/K)$ (see \cite[Ch.~III Lemma~3.11]{gaga}). We define the space of joint eigenfunctions $$E_\lambda \coloneqq\{f\in C^\infty(G/K)\mid Df = \chi_\lambda (D) f \quad \forall D\in \mathbb D(G/K)\}.$$ Note that $E_\lambda$ is $G$-invariant. \begin{example} For $G=SL_n(\R)$ the algebra $\text{Poly}(\mf a_\C^\ast)^W$ is generated by $n-1$ elements $p_2,\ldots,p_n$. Let us identify $\mf a_\C$ and $\mf a_\C^\ast$ via $\lambda\leftrightarrow \operatorname{Tr}(\lambda\; \cdot)$. Then $p_i(\lambda)= \lambda_1^i+\cdots+ \lambda_n^i=\operatorname{Tr}(\lambda^i)$ where $\lambda=\operatorname{diag}(\lambda_1,\ldots,\lambda_n)\in \mf a_\C$.
Clearly, these polynomials are invariant under permutations of the diagonal entries and it can be shown that they are algebraically independent and generate $\text{Poly}(\mf a_\C^\ast)^W$ (see \cite{Hum92}). $\mathbb{D}(G/K)$ is then generated by the preimages of $p_i$ under $\HC$. Up to lower order terms the resulting invariant differential operators are given by the Maass-Selberg operators $\delta_i$ which are defined for $f\in C^\infty(G/K)=C^\infty(SL_n(\R)/SO(n))$ by \[ \delta_if(g K)=\operatorname{Tr}\left(\left(\frac{\partial}{\partial X}\right)^i\right) f\left(g \exp\left(X-\frac{1}{n}\operatorname{Tr}(X)I_n \right)K\right)\bigg|_{X=0}, \] where \[ X=\begin{pmatrix} x_{11}&\cdots&x_{1n}\\\vdots&\ddots&\vdots\\x_{1n}&\cdots&x_{nn} \end{pmatrix} \quad \text{and} \quad \frac{\partial}{\partial X}= \begin{pmatrix} \frac{\partial}{\partial x_{11}}&\cdots&\frac{\partial}{2\partial x_{1n}}\\\vdots& \ddots &\vdots \\ \frac{\partial}{2\partial x_{1n}} &\cdots &\frac{\partial}{\partial x_{nn}} \end{pmatrix} \] (see \cite{brennecken2020algebraically}). \end{example} Now, let $\G\leq G$ be a torsion-free discrete subgroup. Since $D\in \mathbb D(G/K)$ is $G$-invariant, it descends to a differential operator ${}_\G D$ on the locally symmetric space $\G\backslash G/K$. Therefore, the left $\G$-invariant functions of $E_\lambda$ (denoted by ${}^\G E_\lambda$) can be identified with joint eigenfunctions on $\G\backslash G/K$ for each ${}_\G D$: $${}^\G E_\lambda = \{f\in C^\infty(\G\backslash G /K)\mid {}_\G D f = \chi_\lambda (D) f \quad \forall \, D\in \mathbb D(G/K)\}.$$ The goal is to show that $L^2(\G\backslash G/K)\cap {}^\G E_\lambda =\{0\}$ for $\lambda\in i\mf a^\ast$ and certain discrete subgroups $\G$. Then $$\sigma(\G\backslash X)\coloneqq\{\lambda\in\mf a^\ast_\C\mid L^2(\G\backslash G/K)\cap {}^\G E_\lambda \neq\{0\}\}$$ has the property that the set of principal eigenvalues $\sigma(\G\backslash X)\cap i\mf a^\ast$ is empty.
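The $W$-invariance of the power-sum generators $p_i$ in the example above is easy to check numerically; a small sketch (helper names are ours):

```python
import itertools

# Power sums p_i(lam) = lam_1^i + ... + lam_n^i are invariant under the
# Weyl group W = S_n permuting the diagonal entries of lam.
def p(i, lam):
    return sum(x ** i for x in lam)

lam = (0.7, -1.2, 0.5)  # trace-free triple, a point of the SL(3) Cartan
values = {(round(p(2, s), 9), round(p(3, s), 9))
          for s in itertools.permutations(lam)}
print(len(values))  # 1: all 6 permutations give the same pair (p_2, p_3)
```

The analogous check for the elementary symmetric or Schur polynomial generators would work the same way; the point is only that a $W$-invariant polynomial takes one value per $W$-orbit in $\mf a_\C$.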
\subsection{Geodesic compactification}\label{sec:geo} In this section we recall the notion of the geodesic compactification of a simply connected and non-positively curved Riemannian manifold $X$. A classical reference for this topic is \cite{Ebe96}. In the sequel the Satake compactification will also be crucial; thus we provide detailed references to \cite{BJ}, which treats both types of compactifications. \begin{definition}[{\cite[Section I.2.2]{BJ}}] Two (unit speed) geodesics $\g_1,\g_2$ are equivalent if $\limsup_{t\to \infty} d(\g_1(t),\g_2(t))<\infty$. The space $X(\infty)$ is the factor space of all geodesics modulo this equivalence relation. The union $X\cup X(\infty)$ is called \emph{geodesic compactification}. The topology on $X\cup X(\infty)$ is given as follows: For $[\g]\in X(\infty)$ the intersection with $X$ of a fundamental system of neighborhoods is given by $C(\g, \varepsilon, R) = C(\g,\varepsilon)\smallsetminus B(R)$ where $$C(\g,\varepsilon)=\{x\in X\mid\text{the angle between $\g$ and the geodesic from $x_0$ to $x$ is less than $\varepsilon$}\}$$ and $B(R)$ is the ball of radius $R$ centered at some base point $x_0\in X$. This topology is Hausdorff and compact. \end{definition} The space $X(\infty)$ can be canonically identified with the unit sphere in the tangent space at the base point $x_0 \in X$. If $\exp \colon T_{x_0} X\to X$ is the (Riemannian) exponential map at $x_0$, then a representative of the equivalence class of geodesics corresponding to a unit vector $Y\in T_{x_0} X$ is given by the geodesic $t\mapsto \exp(tY)$. This identification yields the neighborhoods $C(Y_0,\varepsilon,R) = \{\exp(tY)\mid t>R,\|Y\|=1, \cos\inv(\langle Y,Y_0\rangle) < \varepsilon\}$ where $Y_0\in T_{x_0} X$ is a unit vector. More precisely, if $\gamma$ is the geodesic $t\mapsto \exp(tY_0)$ then $C(\gamma,\varepsilon,R)=C(Y_0,\varepsilon,R)$.
Let us return to the setting where $X=G/K$ is a symmetric space of non-compact type; then $X$ is simply connected and non-positively curved. Hence, the geodesic compactification of $X$ is defined and we have the following proposition. \begin{proposition}[{\cite[Proposition I.2.5]{BJ}}] The action of $G$ on $X$ extends to a continuous action on $X\cup X(\infty)$. \end{proposition} \subsection{Maximal Satake compactification}\label{sec:satake} In this section we introduce a different compactification for a Riemannian symmetric space $X=G/K$, the so-called maximal Satake compactification. Before entering the technicalities let us give some heuristics: Recall that the Cartan decomposition allows us to write $G=K\overline{\exp \mf a_+}K$ and since $K$ is compact the ``way'' in which a point in $G/K$ tends to infinity can be described in $\mathfrak a_+$. Recall that the particular simplicity of a rank one locally symmetric space stems from the fact that $\mathfrak a_+$ is just a half line (geometrically it corresponds to the distance from the origin of the symmetric space) and there is only one ``way'' to tend towards infinity. In the higher rank case $\mf a_+$ is a higher dimensional simplicial cone bounded by the hyperplanes $\ker \alpha\subset \mf a$ for $\alpha \in \Pi$, and the Satake compactifications will ``detect'' if a sequence tends to infinity inside the cone while staying at bounded distance from the chamber walls $\ker \alpha$, $\alpha\in I$, for some subset $I\subsetneq \Pi$. In order to describe the precise structure of the Satake compactification we need to introduce the following notion of standard parabolic subgroups: For $I\subsetneq \Pi$ let $\mf a_I\coloneqq\bigcap_{\alpha\in I} \ker \alpha$, $\mf a^I\coloneqq \mf a_I^\perp$, $\mf n_I\coloneqq\bigoplus_{\alpha\in \Sigma^+\smallsetminus\langle I\rangle} \mf g_\alpha$ and $\mf m_I \coloneqq \mf m\oplus \mf a^I \oplus\bigoplus _{\alpha\in\langle I\rangle} \mf g_\alpha$.
Define the subgroups $A_I\coloneqq\exp \mf a_I$, $N_I\coloneqq\exp \mf n_I$ and $M_I\coloneqq M\langle\exp\mf m_I\rangle$. Then $P_I\coloneqq M_IA_IN_I$ is the standard parabolic subgroup for the subset $I$. We furthermore introduce the notation $\mf a^I_{+} \coloneqq \{H\in \mf a^I\mid \alpha (H)>0\;\forall \alpha\in I\}$ and $\mf a_{I,+} \coloneqq \{H\in \mf a_I\mid \alpha(H)>0\;\forall \alpha \in \Pi\smallsetminus I\}$. \begin{example} For $G=SL_n(\R)$ the set of simple roots is $\Pi=\{\alpha_i=\varepsilon_i-\varepsilon_{i+1} \mid 1\leq i\leq n-1\}$. Let $I=\{\alpha_{i_1},\ldots,\alpha_{i_k}\}$ be a proper subset of $\Pi$. Then $\mf a_I= \{\on{diag}(\lambda_1,\ldots,\lambda_n)\mid \lambda_{i_j}=\lambda_{i_j+1}\}$ and $\mf a^I= \bigoplus_j \{\on{diag}(0,\ldots,\lambda_{i_j},-\lambda_{i_j+1},\ldots,0)\}$. Note that $\mf a^I=\on{span}\alpha_{i_j}$ if one identifies $\mf a$ and $\mf a^\ast$ (see Figure~\ref{fig:subspacesofa} for an illustration). Hence, $\mf a^I$ consists of blocks, where a single block is a copy of the $\mf a$-part of $SL_m(\R)$ for some $m$. Each block corresponds to a root in $\Pi\smallsetminus I$. More precisely, if $\alpha_i\in \Pi\smallsetminus I$ then a block ends in row $i$ (and the last block ends in row $n$). Note that the block sizes $m_i$ may well be equal to $1$; in this case there is simply a zero at this point on the diagonal. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth, trim=5cm 6cm 4cm 4cm, clip]{A2_parabolics.png} \caption{The various cones and subspaces in $\mf a$ corresponding to subsets of $\Pi$ for $G=SL_n(\R)$. $\mf a_\emptyset$ is all of $\mf a$ and $\mf a^\emptyset$ is the origin.} \label{fig:subspacesofa} \end{figure} $\mf m_I$ adds the corresponding root spaces, so $\mf m_I$ is isomorphic to a direct sum of Lie algebras $\mf{sl}_m(\R)$: \[ \mf m_I= \begin{pmatrix} \mf {sl}_{m_1}(\R) &&\\ & \ddots &\\ && \mf {sl}_{m_{n-k}}(\R) \end{pmatrix} \] where the bottom rows of the blocks correspond to the indices of the roots in $\Pi\smallsetminus I$.
$\mf n_I$ is the Lie algebra consisting of the strictly upper-triangular matrices with non-zero entries only in the positions that are not in the blocks of $\mf m_I$. On the group level $A_I= \{\on{diag}(\lambda_1,\ldots,\lambda_n)\in A\mid \lambda_{i_j}=\lambda_{i_j +1}\}$ and $N_I$ consists of the same matrices as $\mf n_I$ but with $1$'s on the diagonal. For $M_I$ one has to multiply by $M=\{\on{diag}(\pm 1,\ldots,\pm 1)\}$ so that $M_I$ consists of block diagonal matrices where each block has determinant $\pm 1$, under the condition that the whole matrix has determinant $1$. It follows that the standard parabolic subgroups $P_I$ are the sets of block upper-triangular matrices: \[ \begin{tikzpicture} \matrix (m1) [matrix of nodes, left delimiter={[}, right delimiter={]}, row sep=-0.5pt,column sep=-0.5pt, every node/.style={inner sep=5pt} ] { |[draw]| $ \mbox{\Huge $\ast$}$ & && $\mbox{\Huge $\ast$ }$ \\ & |[draw]| $\ast$ && \\ && $\ddots$ & \\ \Huge 0 && & |[draw]| $\mbox{\Large $\ast$}$ \\ }; \end{tikzpicture} \] \end{example} The maximal Satake compactification $\ov X^{\max}$ is the $G$-compactification of $X$ (i.e. a compact Hausdorff space containing $X$ as an open dense subset such that the $G$-action extends continuously from $X$ to the compactification) with the orbit structure $\ov X^{\max} = X \cup \bigcup_{I\subsetneq \Pi} \mc O_I$. For the orbit $\mc O_I$ we can choose a base point $x_I\in \mc O_I$ with $\operatorname{Stab}(x_I)=N_IA_I(M_I\cap K)$. The topology can be described as follows: Since $G = K\ov {A^+} K$ and $K$ is compact, it suffices to consider sequences $\exp H_n$, $H_n\in \ov {\mf a_+}$. Such a sequence by definition converges if and only if $\alpha(H_n)$ converges in $\R\cup\{\infty\}$ for all $\alpha\in \Pi$. If this is the case, to determine the limit, let $I=\{\alpha\in \Pi \mid \lim\alpha(H_n)<\infty \}$ and $H_\infty\in \ov{\mf a^I_+}$ such that $\alpha(H_\infty)=\lim\alpha(H_n)$ for $\alpha\in I$. Then $\exp H_n\to \exp (H_\infty)x_I$.
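\begin{example} To illustrate the convergence criterion (the specific matrices are chosen only for illustration), let $G=SL_3(\R)$ and write $\alpha_i(H)=h_i-h_{i+1}$ for $H=\on{diag}(h_1,h_2,h_3)\in\mf a$. Fix $c>0$ and consider $H_n=\on{diag}(n+c,\,n,\,-2n-c)\in \ov{\mf a_+}$. Then $\alpha_1(H_n)=c$ converges while $\alpha_2(H_n)=3n+c\to\infty$, so $I=\{\alpha_1\}$ and $H_\infty=\on{diag}(\nicefrac c2,-\nicefrac c2,0)\in \ov{\mf a^I_+}$ satisfies $\alpha_1(H_\infty)=c$. Hence $\exp H_n\to \exp(H_\infty)x_{\{\alpha_1\}}$ in $\ov X^{\max}$: the sequence stays at bounded distance to the wall $\ker\alpha_1$ and accordingly converges into the orbit $\mc O_{\{\alpha_1\}}$. \end{example}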
The intersection with $X$ of a fundamental system of neighborhoods of $k\exp(H_\infty)x_I$ with $k\in K, H_\infty\in {\mf a^I_+}$ is given by $$V\exp\{H\in \ov{\mf a_+}\mid |\alpha(H)-\alpha(H_\infty)|< \varepsilon,\alpha\in I, \alpha(H)>R,\alpha\not\in I\}x_0,$$ where $V$ is a fundamental system of neighborhoods of $k$ in $K$, $\varepsilon\searrow 0$, $R\nearrow \infty$. If $H_\infty\in \ov{\mf a^I_+}$, let $J=\{\alpha\in I\mid \alpha(H_\infty)=0\}$. Then the intersection with $X$ of a fundamental system of neighborhoods of $k\exp(H_\infty)x_I$ with $k\in K$ is given by $$V(K\cap M_J) \exp\{H\in \ov{\mf a_+}\mid |\alpha(H)-\alpha(H_\infty)|< \varepsilon, \alpha\in I, \alpha(H)>R,\alpha\not \in I\}x_0,$$ where $V,\varepsilon,R$ are as above. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth, trim=1cm 1cm 11cm 0cm, clip]{A2_Satake_Umg.png} \caption{The compactification of $\mf a_+$ for $G=SL_3(\R)$ is obtained by gluing $\ov{\mf a_+^{\{\alpha_1\}}}, \ov{\mf a_+^{\{\alpha_2\}}}$ and $\ov{\mf a_+^\emptyset}$ to the boundary of $\ov{\mf a_+}$. The sets $U_I$ for $I=\{\alpha_1\},\{\alpha_2\},\emptyset$ are the intersection of $\mf a_+$ with a fundamental neighborhood of $\exp(H_\infty)x_I$.} \end{figure} Note that usually one defines the Satake compactification in a different way (see e.g. \cite[Ch.~I.4]{BJ}). Namely, let $\tau\colon G\to PSL(n,\C)$ be an irreducible faithful projective representation such that $\tau(K)\subseteq PSU(n)$. The closure in the projective space of Hermitian matrices of the image of the embedding of $X$ given by $gK\mapsto\R (\tau(g)\tau(g)^\ast)$ is then called the Satake compactification. It only depends on the highest weight $\chi_\tau$ of $\tau$. If $\chi_\tau$ is contained in the interior of the Weyl chamber, then this compactification is isomorphic to the maximal Satake compactification defined above. It is maximal in the sense that it dominates every other Satake compactification $\ov X^S$ (i.e.
there is a continuous $G$-equivariant map $\ov X^{\max}\to \ov X^S$). Since we only need the description of neighborhoods and the orbit structure we chose to introduce $\ov X^{\max}$ this way. \section{Moderate growth}\label{sec:modgrowth} In this section we show that on a locally symmetric space every $L^2$ joint eigenfunction satisfies a growth condition in the following sense. \begin{definition} \begin{enumerate}[(i)] \item A function $f\colon X\to\C$ is called a \emph{function of moderate growth} if there exist $r\in \R, C>0$ such that $$|f(x)|\leq C e^{rd(x,x_0)}$$ for all $x\in X$. \item For $\lambda\in \mf a_\C^\ast$ the space $E_\lambda^\ast$ is the space of joint eigenfunctions of moderate growth, i.e. $$E_\lambda^\ast = \{f\in E_\lambda\mid f \text{ has moderate growth}\}.$$ \end{enumerate} \end{definition} Let $\G\leq G$ be a torsion-free discrete subgroup. \begin{theorem}\label{thm:modgrowth} Let $f\in {}^\G E_\lambda\cap L^2(\G\backslash G/K)$. Then $f$ (considered as a $\Gamma$-invariant function on $X$) has moderate growth. \end{theorem} The proof uses Sobolev embedding and the following estimate on the injectivity radius. \begin{proposition}[{see \cite[Thm. 4.3]{CGT}}] Let $(M,g)$ be a complete Riemannian manifold such that the sectional curvature $K_M$ satisfies $K_M\leq K$ for a constant $K\in\R$. Let $0< r <\pi/(4\sqrt K)$ if $K >0$ and $r\in (0,\infty)$ if $K\leq 0$. Then the injectivity radius $\mathrm{inj}(p)$ at $p$ satisfies $$\mathrm{inj}(p)\geq \frac{r\mathrm{Vol} (B_M(p,r))}{\mathrm{Vol} (B_M(p,r)) + \mathrm{Vol}_{T_pM}(B_{T_pM}(0,2r))},$$ where $\mathrm{Vol}_{T_pM}(B_{T_pM}(0,2r))$ denotes the volume of the ball of radius $2r$ in $T_pM$, where both the volume and the distance function are defined using the metric $g^\ast\coloneqq \exp^\ast_p g$, i.e. the pull-back of the metric $g$ to $T_pM$ via the exponential map. \end{proposition} For $M=\G\backslash G/K$ we obtain that the injectivity radius decreases at most exponentially.
\begin{proposition}\label{prop:injectivity} There are constants $C,s>0$ such that $$\mathrm{inj}_{\G\backslash G /K}(\G x)\geq C\inv e^{-sd(x,eK)}$$ for every $x\in G/K$. \end{proposition} \begin{proof} Since $\G\backslash G/K$ is of non-positive curvature we can apply the above proposition for every $r> 0$. Note that $\exp\colon T_pM\to M$ is the universal cover of $M$ and therefore $\mathrm{Vol}_{T_{\G x}M}(B_{T_{\G x}M}(0,2r)) = \mathrm{Vol}_{G/K}(B_{G/K}(x,2r)) = \mathrm{Vol}_{G/K}(B_{G/K}(x_0,2r)) \leq C e^{sr}$ for some constants $C,s$ independent of $x$, where $x_0$ is the base point $eK$ of $G/K$. Hence, $$\mathrm{inj}(\G x)\geq r (1+\mathrm{Vol}_{T_{\G x}M}(B_{T_{\G x}M}(0,2r))/\mathrm{Vol} (B_M(\G x,r)))\inv\geq r(1+ Ce^{sr}/\mathrm{Vol} (B_M(\G x,r)))\inv.$$ For $r=1+d(x,x_0)$ we have $B_M(\G x,r)\supseteq B_M(\G x_0,1)$ and therefore $$\mathrm{inj}(\G x)\geq (1+d(x,x_0)) (1+ Ce^{s(1+d(x,x_0))}/\mathrm{Vol} (B_M(\G x_0,1)))\inv \geq (1+C'e^{sd(x,x_0)})\inv.$$ This finishes the proof. \end{proof} Note that this estimate is not sharp. Indeed, the growth rate $s$ that we obtain in the proof is independent of $\Gamma$ and only depends on the volume growth in $G/K$. Let $m=\dim X$. We need the following well-known lemma on geodesic balls in $G/K$. \begin{lemma}\label{la:boxcounting} Fix $r>0$. There is a constant $C$ such that for every $x\in G/K$ and $\varepsilon > 0$ there is a finite set $A\subseteq B(x,r)$ such that $\bigcup_{a\in A} B(a,\varepsilon) \supseteq B(x,r)$ and $\#A\leq C \varepsilon^{-m}$. \end{lemma} \begin{proof} Let $a_1=x$ and choose inductively $a_{i+1} \in B(x,r)\smallsetminus \bigcup_{j=1}^i B(a_j,\varepsilon)$ if the latter is non-empty. This yields a finite set $A=\{a_1,\ldots,a_N\}$ (since $\overline {B(x,r)}$ is compact) such that $B(x,r+\varepsilon)\supseteq \bigcup_{j=1}^N B(a_j,\varepsilon)\supseteq B(x,r)$ and the balls $B(a_j,\varepsilon/2)$ are pairwise disjoint.
It follows that $\mathrm{Vol}(B(x,r+\varepsilon))\geq \sum_i \mathrm{Vol}(B(a_i,\varepsilon/2)) = \# A \cdot \mathrm{Vol}(B(x,\varepsilon/2))$ and therefore $\# A\leq \frac C {\mathrm{Vol}(B(x,\varepsilon/2))}$. The lemma follows from the fact that the volume is independent of the center and decreases like $\varepsilon^m$ as $\varepsilon \to 0$. \end{proof} We can now combine Proposition~\ref{prop:injectivity} with Sobolev embedding to prove Theorem~\ref{thm:modgrowth}. \begin{proof}[Proof of Theorem~\ref{thm:modgrowth}] Since $B(x_0,1)$ is relatively compact, there exists a constant $C$ such that $$\sup_{x\in B(x_0,1)} |f(x)| \leq C \|(\Delta +1)^{m/4+\varepsilon}f\|_{L^2(B(x_0,1))} = C(\chi_\lambda(\Delta)+1)^{m/4+\varepsilon} \|f\|_{L^2(B(x_0,1))}$$ by ellipticity of the Laplace operator $\Delta$ on $G/K$ and the Sobolev embedding $H^{m/2+\varepsilon}(B(x_0,1))\hookrightarrow C(B(x_0,1))$. By $G$-invariance of $\Delta$ and $d$ the same holds true with $x_0$ replaced by an arbitrary point $x\in X$. In particular, $$|f(x)|\leq C(\lambda) \|f\|_{L^2(B(x,1))}.$$ By Proposition~\ref{prop:injectivity} there are constants $C,s>0$ independent of $x$ such that $\mathrm{inj}_{\G\backslash G/K}(\G y)\geq C\inv e^{-sd(x,eK)}$ for every $y\in B(x,1)$. Let $\varepsilon(x) \coloneqq \frac 1 C e^{-sd(x,eK)}$. Then there is a finite set $A(x)\subseteq B(x,1)$ such that $\bigcup_{a\in A(x)} B(a,\varepsilon(x))$ covers $ B(x,1)$ and $\#A(x)\leq C' \varepsilon(x)^{-m}$ by Lemma~\ref{la:boxcounting}. Hence, $$\|f\|_{L^2(B(x,1))}^2\leq \sum_{a\in A(x)} \|f \|_{L^2(B(a,\varepsilon(x)))}^2.$$ Since $\mathrm{inj}_{\G\backslash G/K}(\G a)\geq \varepsilon(x)$ we have $\|f \|_{L^2(B(a,\varepsilon(x)))}\leq \|f\|_{L^2(\G\backslash G/K)}$ for $a\in A(x)$.
Therefore, \begin{align*} |f(x)|&\leq C(\lambda) \|f\|_{L^2(\G\backslash G/K)} \sqrt{\#A(x)}\\ &\leq C(\lambda) \|f\|_{L^2(\G\backslash G/K)} C'^{1/2} \varepsilon(x)^{-m/2}\\ &= C(\lambda) \|f\|_{L^2(\G\backslash G/K)} C'^{1/2}C^{m/2} e^{msd(x,x_0)/2}. \qedhere \end{align*} \end{proof} \begin{remark} In the case of locally symmetric spaces of finite volume there is a different argument showing Theorem~\ref{thm:modgrowth}: If we lift $f$ to a function on $G$ which we also call $f$, then there is a smooth compactly supported function $\alpha$ on $G$ such that $f=f\ast \alpha$ (see~\cite[Theorem~1]{HC66}). Then one easily shows that $|f(\G x)|\leq C\|f\|_{L^1(\G\backslash G/K)} e^{sd(x,x_0)}$ using simple estimates for lattice point counting. Since $L^2\subseteq L^1$ for spaces of finite volume, we can deduce moderate growth for $f$. Unfortunately, this argument does not work for infinite volume locally symmetric spaces since a pointwise bound involving the $L^2$-norm of $f$ would need much better counting estimates. \end{remark} \section{Absence of imaginary values in the $L^2$-spectrum}\label{sec:absence} We introduce the space of smooth vectors in $E_\lambda^\ast$. It is precisely the space of joint eigenfunctions with smooth boundary values (see \cite{vdBanSchl87}). \begin{definition} $$E_\lambda^\infty = \{f\in E_\lambda\mid \exists\, r \;\forall u\in\mc U(\mf g)\;\exists\, C_u>0\colon |(uf)(x)|\leq C_u e^{r d(x,x_0)}\}$$ \end{definition} \subsection{Geodesic compactification}\label{sec:geoabsence} In this section we prove the following theorem. \begin{theorem}\label{thm:smoothsquarevanish} Let $f\in E_\lambda^\ast$, $\lambda\in i\mf a^\ast$, such that $f$ is square-integrable on $C(Y_0, \varepsilon,R)$ for some $\varepsilon, R, Y_0$ (see Section~\ref{sec:geo}). Then $f=0$. \end{theorem} Let $X(\lambda)\coloneqq \{w\lambda-\rho-\mu\mid w\in W, \mu \in \N_0\Pi\}$ (see Figure~\ref{fig:X_lambda} for a visualization in the example of $SL(3,\R)$).
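\begin{example} For orientation we record the rank one case, which is only a reformulation of the definition: For $G=SL(2,\R)$ identify $\mf a^\ast\cong\R$ such that $\alpha=1$, so $\rho=\nicefrac 12$ and $W=\{\pm 1\}$. For $\lambda=s-\nicefrac 12$ one obtains $$X(\lambda)=\{s-1-n\mid n\in\N_0\}\cup\{-s-n\mid n\in\N_0\},$$ i.e.\ exactly the exponents $-(1-s)-n$ and $-s-n$ appearing in the expansion on the upper half plane below. \end{example}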
We will use the following asymptotic expansion for functions in $E_\lambda^\infty$. \begin{theorem}[{\cite[Thm~3.5]{vdBanSchl87}}]\label{thm:expansion} For each $f\in E_\lambda^\infty$, $g\in G$, and $\xi \in X(\lambda)$ there is a unique polynomial $p_{\lambda,\xi}(f,g)$ on $\mf a$ which is smooth in $g$ such that $$f(g\exp(tH))\sim\sum_{\xi\in X(\lambda)} p_{\lambda,\xi}(f,g,tH) e^{t\xi(H)},\quad t\to \infty,$$ at every $H_0\in \mf a_+$, i.e. for every $N$ there exist a neighborhood $U$ of $H_0$ in $\mf a_+$, a neighborhood $V$ of $g$ in $G$, $\varepsilon>0$, $C>0$ such that $$\left|f(y\exp(tH)) - \sum_{\Re \xi(H_0)\geq -N} p_{\lambda,\xi}(f,y,tH) e^{t\xi(H)}\right| \leq C e^{(-N-\varepsilon)t}$$ for all $y\in V, H\in U$, $t\geq 0$. \end{theorem} \begin{remark} The uniformity in $g$ is not stated in \cite{vdBanSchl87} but it follows from (6.18) therein. \end{remark} \begin{example} In the case where $G/K$ is the upper half plane $\mathbb{H}$ a simplified version of this theorem can be stated as follows. Suppose $f\in E_{s-1/2}^\infty$, i.e. $f\in C^\infty(\mathbb{H})$ with $\Delta f=s(1-s)f$ and the derivatives of $f$ satisfy some uniform pointwise exponential bounds. We lift $f$ to a function (also called $f$) on the sphere bundle $S\mathbb{H}$ which is constant on the fibers. Denote by $\phi_t$ the geodesic flow. Then if $s\notin \frac 12 \mathbb{Z}$ \[ (\phi_t)_\ast f(x)\sim e^{-ts} \left( \sum_{n=0}^\infty p^+_n(x) e^{-nt} \right) + e^{-t(1-s)} \left(\sum_{n=0}^\infty p_n^- (x) e^{-nt}\right) \] with $p_n^\pm$ smooth. If $s\in \frac 12 \mathbb{Z}$ the functions $p_n^\pm$ can be polynomials of degree one in $t$.
\end{example} \begin{figure} \centering \begin{minipage}{.5\textwidth} \centering \frame{\includegraphics[width=.8\linewidth, trim=11.5cm 1cm 10cm 6cm,clip]{A2_exponents.pdf}}\end{minipage}\begin{minipage}{.5\textwidth} \centering \frame{ \includegraphics[width=.8\linewidth,trim=11.5cm 1cm 10cm 6cm,clip]{A2_finiteexpansion.pdf}} \end{minipage} \caption{\label{fig:X_lambda} Real part of the exponents of the asymptotic expansion in Theorem~\ref{thm:expansion} for $G=SL(3,\R)$} \end{figure} \begin{proof}[Proof of Theorem~\ref{thm:smoothsquarevanish}] First, we consider the case $f\in E_\lambda^\infty$. By continuity there is a unit vector $H_0 \in \mf a_+$, a neighborhood $U$ of $H_0$ in the unit sphere of $\mf a$, and an open set $V$ in $K$ such that $$\Omega=\left\{k\exp(H)\colon k\in V, \frac{H}{\|H\|}\in U,\|H\|>R\right \}\subseteq C(Y_0,\varepsilon, R).$$ We apply Theorem~\ref{thm:expansion} with $N=\rho(H_0)$ so that, without loss of generality, \begin{align}\label{eq:approximation} |f(k\exp(H)) - \sum_{w\in W} p_{\lambda,w\lambda-\rho}(f,k,H) e^{(w\lambda-\rho)(H)}| \leq C e^{(-\rho(H_0)-\varepsilon)\|H\|} \end{align} for all $k\in V, \frac H{\|H\|}\in U$. We use the integral formula \eqref{eq:intKAK} and observe that \begin{align*} \int_{(R,\infty)U} &e^{-2(\rho(H_0)+\varepsilon)\|H\|}\prod_{\alpha\in\Sigma^+} \sinh(\alpha(H))^{m_\alpha} dH\\ \leq& \int_{(R,\infty)U} e^{-2(\rho(H_0)+\varepsilon)\|H\|}e^{2\rho(H)}dH\leq \int_{(R,\infty)U} e^{2(\rho(\frac H{\|H\|}-H_0)-\varepsilon)\|H\|}dH \end{align*} which is finite after shrinking $U$ such that $\rho(\frac H{\|H\|}-H_0)<\varepsilon$ for $H\in U$. Consequently, the right hand side of \eqref{eq:approximation}, and therefore also the left hand side, is square integrable on $\Omega$.
Since $f$ is $L^2$ and the approximation \eqref{eq:approximation} holds, $$\left|\sum_{w\in W} p_{\lambda,w\lambda-\rho}(f,k,H) e^{(w\lambda-\rho)(H)}\right|^2 \prod_{\alpha\in\Sigma^+} \sinh(\alpha(H))^{m_\alpha}$$ is integrable on $V\times (R,\infty)U$. Hence, $\left|\sum_{w\in W} p_{\lambda,w\lambda-\rho}(f,k,H) e^{(w\lambda-\rho)(H)}\right|^2 \prod_{\alpha\in\Sigma^+} \sinh(\alpha(H))^{m_\alpha}$ is integrable on $(R,\infty)U$ for almost every $k\in V$. Since $\sinh(x)\geq e^x/4$ for $x\geq \frac 12 \log 2$, $$\left|\sum_{w\in W} p_{\lambda,w\lambda-\rho}(f,k,H) e^{(w\lambda-\rho)(H)}\right|^2 e^{2\rho(H)}=\left|\sum_{w\in W} p_{\lambda,w\lambda-\rho}(f,k,H) e^{w\lambda(H)}\right|^2$$ is integrable on $(R,\infty)U$ for $R$ large enough. This is only possible if $p_{\lambda,w\lambda-\rho}(f,k)$ vanishes on $\mf a$ for every $w\in W$ by \cite[Lemma~8.50]{Kna86}. Since $p_{\lambda,w\lambda-\rho}(f,\bigcdot)$ is smooth it vanishes identically on $V$. We now show that it also vanishes on $VAN$. For $n\in N$ \cite[Lemma~8.7]{vdBanSchl87} states for $f\in E_\lambda^\infty$ \[ p_{\lambda,\xi}(f,n)=\sum_{\mu\in \N_0\Pi,\xi+\mu\in X(\lambda)}p_{\lambda,\xi+\mu}(f_\mu,e),\quad \xi\in X(\lambda), \] where $f_\mu\in L(\mc U(\mf g))f$ (where $L$ is the left regular representation) are specific joint eigenfunctions obtained by the Taylor expansion of $f$ in the direction of $n$ and $f_0=f$. For $\xi=w\lambda-\rho$ the only summand comes from $\mu=0$ since $\lambda\in i\mf a^\ast$ and $X(\lambda)=\{w\lambda-\rho-\mu \mid w\in W, \mu\in \N_0\Pi\}$. In particular, $p_{\lambda,w\lambda-\rho}(f,n)=p_{\lambda,w\lambda-\rho}(f,e)$. To deal with $a\in A$ we use \cite[Lemma~8.5]{vdBanSchl87}: \[ p_{\lambda,\xi}(f,a,H)=a^\xi p_{\lambda,\xi}(f,e,H+\log a),\quad f\in E_\lambda^\infty, \xi\in X(\lambda), H\in \mf a, \] where as usual $a^\xi=e^{\xi(\log a)}$. 
Let us return to the situation that we achieved earlier, where $p_{\lambda,w\lambda-\rho}(f,k,H)=0$ for every $k\in V$ and $H\in \mf a$. But then \begin{align*} p_{\lambda,w\lambda-\rho}(f,kan,H)&= p_{\lambda,w\lambda-\rho}(L_{(ka)\inv}f,n,H)= p_{\lambda,w\lambda-\rho}(L_{(ka)\inv}f,e,H)\\ &= p_{\lambda,w\lambda-\rho}(L_{k\inv}f,a,H)= a^{w\lambda-\rho}p_{\lambda,w\lambda-\rho}(L_{k\inv}f,e,H)\\ &= a^{w\lambda-\rho}p_{\lambda,w\lambda-\rho}(f,k,H)=0 \end{align*} for every $k\in K, a\in A,n\in N$ and $w\in W$. Hence, $p_{\lambda,w\lambda-\rho}(f,x)=0$ if $x$ is contained in the open set $VAN$. This is exactly the assumption of \cite[Theorem~4.1]{vdBanSchl89LocalBD} in the case $I=I_\lambda$, i.e. $f$ is an eigenfunction for the whole algebra $\mathbb{D}(G/K)$ and is not only annihilated by an ideal of finite codimension. Note that in this case $X(I)=X(\lambda)$. We infer $f=0$. It remains to show that the statement also holds for $f\in E_\lambda^\ast$. Since $C(Y_0,\varepsilon,R)$ is a fundamental system of neighborhoods of $Y_0$ in the geodesic compactification and $G$ acts continuously on $X\cup X(\infty)$, there is a neighborhood $ V$ of $e$ in $G$ and $\varepsilon ', R'$ such that $V\inv C(Y_0,\varepsilon',R')\subseteq C(Y_0,\varepsilon,R)$. Let $\varphi_n$ be an approximate identity on $G$ with $\operatorname{supp} \varphi_n\subseteq V$, i.e. $\varphi_n \in C_c^\infty(G)$ is non-negative with $\int_G\varphi_n(g) dg =1$ and $\operatorname{supp}(\varphi_n)$ shrinks to $\{e\}$. We consider $(\varphi_n\ast f)(x)=\int_G \varphi_n(g)f(g\inv x)dg$. Obviously, $\varphi_n\ast f\in E_\lambda^\infty$ since $L_x R_y(\varphi_n\ast f)=(L_x\varphi_n)\ast (R_yf), x,y\in G$. Combining the already established case $f\in E_\lambda^\infty$ with Lemma~\ref{la:conv_square_int} below we infer that $\varphi_n\ast f=0$ for all $n$ and therefore $f=0$. This completes the proof. 
\end{proof} \begin{lemma}\label{la:conv_square_int} $\varphi_n\ast f$ is square-integrable on $C(Y_0, \varepsilon',R')$. \end{lemma} \begin{proof}[Proof of {Lemma~\ref{la:conv_square_int}}] Abbreviate $C'=C(Y_0, \varepsilon',R')$ and $C=C(Y_0, \varepsilon,R)$. It suffices to show that \begin{align*} \left|\int_{C'} h(x)(\varphi_n\ast f)(x)dx\right|\leq B\|h\|_{L^2(C')} \end{align*} for $h\in C_c(C')$ with a constant $B$ independent of $h$. Let us write $|h(x)\varphi_n(g)f(g\inv x)| =(|h|^2(x)\varphi_n(g))^{1/2} (|f|^2(g\inv x)\varphi_n(g))^{1/2}$ and use the Cauchy-Schwarz inequality in $L^2(V\times C')$ to obtain \begin{align*} \left|\int_{C'} h(x)(\varphi_n\ast f)(x)dx\right|&\leq \int _V \int_{C'}|h(x)\varphi_n(g)f(g\inv x)|dx dg \\ &\leq \left(\int_{V}\int_{C'} |h|^2(x) \varphi_n(g) dx dg \int_V \int_{C'} |f|^2(g\inv x) \varphi_n(g) dx dg\right)^{1/2}\\ &\leq \|h\|_{L^2(C')} \left(\int_V \int_{C} |f|^2(x) \varphi_n(g) dx dg\right)^{1/2}\\ &= \|h\|_{L^2(C')} \|f\|_{L^2(C)} \end{align*} where we used $V\inv C'\subseteq C$ in the last inequality. This finishes the proof. \end{proof} \subsection{Maximal Satake compactification}\label{sec:satabsence} In this section we prove a statement analogous to Theorem~\ref{thm:smoothsquarevanish} for the maximal Satake compactification. First of all we remark that each neighborhood of an element in the orbit $Gx_\emptyset\subseteq \ov X^ {\max}$ contains a neighborhood $C(Y_0,\varepsilon, R)$. Hence, we have the following proposition. \begin{proposition} Let $f\in E_\lambda^\ast$, $\lambda\in i\mf a^\ast$, such that $f$ is square-integrable in some neighborhood of an element in $Gx_\emptyset\subseteq \ov X^ {\max}$. Then $f=0$. \end{proposition} The goal is to prove this statement for general neighborhoods in $\ov X^{\max}$. \begin{theorem}\label{thm:satakesquarevanish} Let $f\in E_\lambda^\ast$, $\lambda\in i\mf a^\ast$, such that $f$ is square-integrable in some neighborhood of an element $x_\infty \in \partial \ov X^ {\max}$.
Then $f=0$. \end{theorem} \begin{proof} By the same reasoning as in the proof of Theorem~\ref{thm:smoothsquarevanish} we can assume $f\in E_\lambda^\infty$. Moreover, we can assume that $x_\infty=k \exp(H_\infty)x_I$ with $k\in K$ and $H_\infty\in {\mf a^I_+}$ (instead of $H_\infty\in \ov{\mf a^I_+}$) since every neighborhood of $k\exp(H_\infty)x_I$ contains an element $k'\exp(H_\infty')x_I$ with $H_\infty'\in \mf a^I_+$. Let $\Omega=V\exp (U)x_0\subseteq X$ with $U\coloneqq \{H\in \ov{\mf a_+}\mid |\alpha(H)-\alpha(H_\infty)|< \varepsilon,\alpha\in I, \alpha(H)>R,\alpha\not\in I\}$, so that $\Omega$ is the intersection of a neighborhood of $x_\infty$ with the interior of $\ov X^ {\max}$. Define $U^I\coloneqq \{H^I\in\mf a^I\mid |\alpha(H^I)-\alpha(H_\infty)|<\varepsilon, \alpha\in I\}$ which is a bounded open set in $\mf a^I$ since the set of linear forms $I$ restricted to $\mf a^I$ is linearly independent. Without loss of generality $U^I\subseteq \mf a^I_+$ has positive distance to the boundary. Let $U_I\coloneqq \{H_I\in \mf a_I\mid \alpha(H_I)>C, \alpha\in \Pi\smallsetminus I\}\subseteq \mf a_{I,+}$ so that $U_I+U^I\subseteq U$ for $C$ large enough. \begin{figure} \includegraphics[width=.5\textwidth,trim=5cm 3cm 13cm 3cm, clip]{A2_Satake.pdf} \caption{Decomposition of $U$ for $G=SL(3,\R)$ and $I=\{\alpha_1\}$.} \end{figure} As in Theorem~\ref{thm:smoothsquarevanish} we use the integral formula \eqref{eq:intKAK} to obtain $$\int_{U\subseteq \mf a_+} |f|^2(k\exp(H))\prod \sinh(\alpha(H))^{m_\alpha}dH<\infty$$ for almost every $k\in V$. Therefore, $$\int_{U_I\subseteq \mf a_{I,+}} |f|^2(k\exp(H^I)\exp(H_I))\prod \sinh(\alpha(H_I+H^I))^{m_\alpha}dH_I<\infty$$ for almost every $k\in V$ and $H^I\in U^I\subseteq \mf a^I$ (with suitable Lebesgue measures on $\mf a_I$ and $\mf a^I$).
The property that $U^I\subseteq \mf a^I_+$ has positive distance to the boundary implies that $\alpha(H_I+H^I)>\varepsilon$ and hence $$\prod_{\alpha\in \Sigma^+} \sinh(\alpha(H_I+H^I))^{m_\alpha}\geq C e ^ {2\rho(H_I)},\quad H_I\in U_I,H^I\in U^I.$$ Therefore, $|f|^2(k\exp(H^I)\exp(H_I))e^{2\rho(H_I)}$ is integrable on $U_I$. Similarly to the proof of Theorem~\ref{thm:smoothsquarevanish} we use an asymptotic expansion for $f$, but this time we have to consider asymptotics along the boundary of the positive Weyl chamber instead of regular directions. \begin{theorem}[{\cite[Thm~1.5]{vdBanSchl89LocalBD}}] There exists a finite set $S(\lambda,I)\subseteq \mf a_I^\ast$ such that for each $f\in E_\lambda^\infty$, $g\in G$, and $\xi \in X(\lambda,I)= S(\lambda,I) - \N_0\Pi|_{\mf a_I}$ there is a unique polynomial $p_{I,\xi}(f,g)$ on $\mf a_I$ which is smooth in $g$ such that $$f(g\exp(tH_0))\sim\sum_{\xi\in X(\lambda,I)} p_{I,\xi}(f,g,tH_0) e^{t\xi(H_0)},\quad t\to\infty,$$ at every $H_0\in \mf a_{I,+}$, i.e. for every $N$ there exist a neighborhood $U$ of $H_0$ in $\mf a_{I,+}$, a neighborhood $V$ of $g$ in $G$, $\varepsilon>0$, $C>0$ such that $$\left |f(y\exp(tH)) - \sum_{\Re \xi(H_0)\geq -N} p_{I,\xi}(f,y,tH) e^{t\xi(H)}\right| \leq C e^{(-N-\varepsilon)t}$$ for all $y\in V, H\in U$, $t\geq 0$. \end{theorem} \begin{remark} The uniformity in $g$ is not stated in \cite{vdBanSchl89LocalBD} but it follows from Proposition~1.3 therein. \end{remark} Let $H_0\in \mf a_{I,+}$, $\|H_0\|=1$. After shrinking we can assume that $$\left|f(k\exp(H^I)\exp(H_I)) - \sum_{\Re \xi(H_0)\geq -\rho(H_0)} p_{I,\xi}(f,k\exp(H^I),H_I) e^{\xi(H_I)}\right| \leq C e^{(-\rho(H_0)-\varepsilon)\|H_I\|}$$ for all $k\in V, H^I\in U^I$, and $\frac{H_I}{\|H_I\|}$ in some neighborhood $\tilde U_I$ of $H_0$ in $\mf a_{I,+}$ such that $(R',\infty)\tilde U_I\subseteq U_I$.
The error term $e^{(-\rho(H_0)-\varepsilon)\|H_I\|}$ satisfies $$e^{2(-\rho(H_0)-\varepsilon)\|H_I\|}e^{2\rho(H_I)}=e^{2(\rho(-H_0+\frac{H_I}{\|H_I\|}) -\varepsilon)\|H_I\|}\leq e^{-\varepsilon\|H_I\|}$$ if $\tilde U_I$ is sufficiently small. Since $e^{-\varepsilon\|H_I\|}$ is integrable on $(R',\infty)\tilde U_I$ the same is true for $$\left |\sum_{\Re \xi(H_0)\geq -\rho(H_0)} p_{I,\xi}(f,k\exp(H^I),H_I) e^{(\xi+\rho)(H_I)}\right |^2 .$$ Using \cite[Lemma~8.50]{Kna86} we obtain that $p_{I,\xi}(f,k\exp(H^I),H_I) = 0$ if $\Re (\xi+\rho)(H_I) \geq 0$ for almost every $k\in V$ and $H^I\in U^I$. Since $p_{I,\xi}(f,\bigcdot,H_I)$ is smooth, this holds for every $k\in V$ and $H^I\in U^I$. By \cite[Corollary~2.5]{vdBanSchl89LocalBD} the mapping $M_I\ni m\mapsto p_{I,\xi}(f,xm,H_I)$, $x\in G$, is real analytic. Therefore, $\mf a^I\ni H^I\mapsto p_{I,\xi}(f,k\exp(H^I), H_I)$ is real analytic as well and vanishes on the open set $U^I$ for $k\in V$, $\Re (\xi+\rho)(H_I)\geq 0$. Hence it vanishes identically on $\mf a^I$. In a last step of the proof we show that the vanishing of $p_{I,\xi}(f,k)$ for $\Re(\xi+\rho)(H_I)\geq 0$ implies that the expansion coefficients $p_{\lambda,\eta}(f,k)$ from Theorem~\ref{thm:expansion} vanish for all $\eta\in W\lambda-\rho$ and $k\in V$. For this purpose we use the following expansion for the polynomial $p_{I,\xi}$. \begin{proposition}[{\cite[Theorem~3.1]{vdBanSchl89LocalBD}}] Let $f\in E_\lambda^\infty$, $g\in G$, and $\xi\in X(\lambda,I)$. \begin{enumerate} \item For every $H_I\in \mf a_{I,+}$ and $H^I\in \mf a^I_+$ the following asymptotic expansion holds: $$p_{I,\xi}(f,g\exp(tH^I),H_I)\sim \sum _{\eta \in w\lambda -\rho - \N_0\Pi, \eta|_{\mf a_I}=\xi} p_{\lambda,\eta}(f,g,H_I+tH^I) e^{t\eta(H^I)}.$$ \item For all $\eta \in w\lambda -\rho - \N_0\Pi$, $w\in W$, with $\eta|_{\mf a_I}\not\in X(\lambda,I)$ we have $p_{\lambda,\eta}(f,x)=0$. \end{enumerate} \end{proposition} Let $\eta=w\lambda-\rho$, $w\in W$, and $k\in V, H_I\in U_I$.
If $\eta|_{\mf a_I}\not \in X(\lambda,I)$, then $p_{\lambda,\eta}(f,k)=0$. If $\eta|_{\mf a_I} = \xi \in X(\lambda,I)$, then $\Re(\xi+\rho)(H_I) = \Re w\lambda(H_I)=0\geq 0$. Therefore, $p_{I,\xi}(f,k\exp(H^I), H_I)= 0$ for all $H^I\in\mf a^I$ by the previous paragraph. It follows that every coefficient of the asymptotic expansion vanishes (see \cite[Lemma~3.2]{vdBanSchl87}), in particular $ p_{\lambda,\eta}(f,k,H_I+tH^I)=0$, $H_I\in U_I$, $H^I\in \mf a^I$. Since $ p_{\lambda,\eta}(f,k)$ is a polynomial, this implies $p_{\lambda,\eta}(f,k)=0$. Hence in both cases $p_{\lambda,w\lambda-\rho}(f,k)=0$ for $k\in V$. The remainder of the proof proceeds in the same way as the proof of Theorem~\ref{thm:smoothsquarevanish}. \end{proof} \subsection{Proof of Theorem~\ref{thm:main_intro}}\label{sec:conclusion} Let $\ov X$ be one of the compactifications $X\cup X(\infty)$ or $\ov X^ {\max}$. Recall that the \emph{wandering set} $\w(\G,\ov X)$ is defined to be the set of points $x\in \ov X$ for which there is a neighborhood $U$ of $x$ such that $\g U\cap U\neq \emptyset$ for at most finitely many $\g\in \G$. Clearly, $\w(\G,\ov X)$ is open, $\G$-invariant and contains $X$. Theorem~\ref{thm:main_intro} is a simple consequence of Theorem~\ref{thm:modgrowth} combined with Theorem~\ref{thm:smoothsquarevanish}, respectively Theorem~\ref{thm:satakesquarevanish}. \begin{proof}[Proof of Theorem~\ref{thm:main_intro}] Let $x\in \w(\G,\ov X) \cap \partial \ov X$. Hence, there is an open subset $U$ of $\ov X$ containing $x$ such that $\{\g\mid \g U\cap U\neq \emptyset\}$ contains only finitely many, say $N\in \N$, elements. Let $\lambda\in i\mf a^\ast$ and $f\in L^2(\G\backslash X)$ a joint eigenfunction of $\mathbb D(X)$ for the character $\chi_\lambda$. Let $\ov f\in E_\lambda$ be the $\G$-invariant lift of $f$ to $X$.
Then \begin{align*} \|\ov f\|^2_{L^2(U)} &= \int _U |\ov f|^2 = \int_{\G\backslash X} \sum_{\g\in \G} 1_U(\g y) |\ov f|^2(\g y) d (\G y) = \int_{\G\backslash X} \# \{\g\mid \g y\in U\} | f|^2(\G y) d (\G y)\\ &\leq N \|f\|^2_{L^2(\G\backslash X)}<\infty. \end{align*} Hence, $\ov f$ is square-integrable on $U\cap X$ and of moderate growth by Theorem~\ref{thm:modgrowth}. Applying Theorem~\ref{thm:smoothsquarevanish} or \ref{thm:satakesquarevanish} to $\ov f$ finishes the proof. \end{proof} \section{Examples}\label{sec:examples} In this section we discuss three classes of examples that satisfy the wandering condition of Theorem~\ref{thm:main_intro}. As mentioned in the introduction the condition is satisfied for geometrically finite discrete subgroups of $PSO(n,1)$ of infinite covolume. \subsection*{Products} The most basic example is the case of products. Let $X=X_1\times X_2$ be the product of two symmetric spaces of non-compact type where $X_i = G_i/K_i$. Let $\Gamma\leq G_1\times G_2$ be a discrete torsion-free subgroup that is the product of two discrete torsion-free subgroups $\G_i\leq G_i$. Then it is clear that the spectral theory of $\G\backslash X$ is completely determined by the spectral theory of the two factors. In particular, since the algebra $\mathbb D(G/K)$ is generated by $\mathbb D(G_i/K_i)$, $i=1,2$, there are no principal joint eigenvalues if the same holds for one of the factors. The same statement can be obtained by Theorem~\ref{thm:main_intro} using the maximal Satake compactification. Indeed, by \cite[Prop. I.4.35]{BJ} the maximal Satake compactification of $X$ is the product of the maximal Satake compactifications of the $X_i$, i.e. $\ov X ^{\max}= \ov {X_1} ^{\max} \times \ov {X_2} ^{\max}$. Then it is clear from the definition of the wandering set that $\w(\G,\ov X^{\max})=\w(\G_1,\ov {X_1} ^{\max})\times \w(\G_2,\ov {X_2} ^{\max})$.
Hence, the wandering condition $\w(\G,\ov X ^{\max} )\cap \partial \ov X^ {\max} \neq \emptyset$ is fulfilled if and only if it is fulfilled for one of the actions $\G_i\curvearrowright \ov {X_i}^{\max}$. \subsection*{Selfjoinings} A more interesting class of examples is given by selfjoinings of locally symmetric spaces. These are given as follows. As above let $X=X_1\times X_2$ be the product of two symmetric spaces of non-compact type where $X_i = G_i/K_i$. Now, let $\Upsilon$ be a discrete group and let $\rho_i\colon \Upsilon\to G_i$, $i=1,2$, be two representations into real semisimple non-compact Lie groups with finite center. We assume that $\rho_1$ has finite kernel and discrete image. We consider the subgroup $\G$ of $G_1\times G_2$ given by $\G=\{(\rho_1(\sigma),\rho_2(\sigma))\colon \sigma\in \Upsilon\}$, which is discrete. We assume that $ \G$ is torsion-free (e.g. if $\Upsilon$ is torsion-free). In contrast to the previous example, the locally symmetric space $\G\backslash X$ is no longer a product of two locally symmetric spaces, so the spectral theory cannot be reduced to lower rank factors. However, we can exploit the fact that the global symmetric space is still a product and consider the maximal Satake compactification, which is given by $\ov X ^{\max}= \ov {X_1} ^{\max} \times \ov {X_2} ^{\max}$. Since $\rho_1(\Upsilon)$ is discrete, it acts properly discontinuously on $X_1$. Hence every point of $X_1$ is wandering for the action of $\rho_1(\Upsilon)$. It follows easily that $X_1\times \ov {X_2} ^{\max}$ is contained in the wandering set $\w(\G,\ov X^{\max})$ of the action $\G\curvearrowright \ov X^ {\max}$. Therefore, the wandering condition is fulfilled. Indeed, $\w(\G,\ov X^{\max}) \cap \partial\ov X^{\max} \supseteq X_1\times \partial \ov {X_2} ^{\max} \neq \emptyset$. \subsection*{Anosov subgroups} The result of Lax and Phillips \cite{laxphil} is in particular true if we consider a (non-cocompact) convex-cocompact subgroup of $PSO(n,1)$.
Anosov subgroups as introduced by Labourie \cite{Lab06} for surface groups and generalized to arbitrary word hyperbolic groups by Guichard and Wienhard \cite{GW12} generalize convex-cocompact subgroups to higher rank symmetric spaces. For such $\G$ we have the following proposition. \begin{proposition} Let $\G$ be a torsion-free Anosov subgroup that is not a cocompact lattice in a rank one Lie group. Then the wandering condition $\w(\G, \ov X^ {\max})\cap \partial \ov X^ {\max} \neq \emptyset$ is fulfilled. \end{proposition} \begin{proof} By \cite{KL18} (and \cite{GGKW15} for a specific maximal parabolic subgroup) every locally symmetric space arising from an Anosov subgroup admits a compactification modeled on the maximal Satake compactification $\ov X^{\max}$, i.e. there is $X\subseteq \Omega \subseteq \ov X^ {\max}$ open such that $\G$ acts properly discontinuously and cocompactly on $\Omega$. Since $\Gamma$ does not act cocompactly on $X$, we have $ \Omega \cap \partial \ov X^ {\max} \neq \emptyset$. As every point in a region of discontinuity is wandering by definition we have $\Omega \subseteq \w(\G,\ov X^ {\max})$, and in particular the wandering condition is fulfilled. \end{proof} Combining the above proposition with Theorem~\ref{thm:main_intro} we obtain the following corollary. \begin{korollar} Let $\G$ be a torsion-free Anosov subgroup that is not a cocompact lattice in a rank one Lie group. Then there are no principal joint $L^2$-eigenvalues on $\G\backslash X$. \end{korollar} It is worth mentioning that selfjoinings of two representations into $PSO(n,1)$ yield Anosov subgroups if and only if one of the images of the representations is convex-cocompact. One can thus easily construct non-trivial examples which are not Anosov subgroups but fulfill the wandering condition of Theorem~\ref{thm:main_intro}. This is again parallel to Patterson's result that holds beyond the convex-cocompact case for hyperbolic surfaces admitting cusps and at least one funnel. 
\section*{Declaration} This work has received funding from the Deutsche Forschungsgemeinschaft (DFG) via Grant No. WE 6173/1-1 (Emmy Noether group “Microlocal Methods for Hyperbolic Dynamics”) as well as SFB-TRR 358/1 2023 — 491392403 (CRC ``Integral Structures in Geometry and Representation Theory''). The authors have no relevant financial or non-financial interests to disclose. The authors have no competing interests to declare that are relevant to the content of this article. All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript. The authors have no financial or proprietary interests in any material discussed in this article. \bibliographystyle{amsalpha} \bibliography{literatur} \bigskip \bigskip \end{document}
\documentclass{amsart} \usepackage[russian]{babel} \numberwithin{equation}{section} \textwidth 135mm \textheight 220mm \oddsidemargin 10mm \evensidemargin 10mm \baselineskip+6pt \def\a{\alpha} \def\b{\beta} \def\g{\gamma} \def\G{\Gamma} \def\d{\delta} \def\D{\Delta} \def\ve{\varepsilon} \def\e{\epsilon} \def\l{\lambda} \def\LL{\Lambda} \def\k{\kappa} \def\m{\mu} \def\p{\psi} \def\n{\nu} \def\r{\rho} \def\s{\sigma} \def\S{\Sigma} \def\t{\tau} \def\f{\varphi} \def\F{\Phi} \def\v{\phi} \def\th{\theta} \def\Th{\Theta} \def\w{\omega} \def\Om{\Omega} \def\z{\zeta} \pagestyle{myheadings} \thispagestyle{empty} \markboth{\small{Ravshan Ashurov}}{\small{Time-dependent source identification problem for a fractional Schr\"odinger equation with the Riemann-Liouville derivative}} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{defin}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{prob}[thm]{Problem} \newcommand{\ds}{\displaystyle} \def\di{\mathop{\rm d}} \def\sp{\mathop{\rm sp}} \def\spec{\mathop{\rm spec}} \def\id{\mathop{\rm id}} \def\tr{\mathop{\rm Tr}} \newcommand{\ty}[1]{\mathop{\rm {#1}}} \newcommand{\nn}{\nonumber} \def\id{{\bf 1}\!\!{\rm I}} \begin{document} \title{Time-dependent source identification problem for a fractional Schr\"odinger equation with the Riemann-Liouville derivative} \author{Ravshan Ashurov} \author{Marjona Shakarova} \address{Institute of Mathematics, Uzbekistan Academy of Science, Tashkent, Student Town str. 100174} \email{[email protected]} \curraddr{National University of Uzbekistan named after Mirzo Ulugbek, Tashkent, Student Town str. 100174} \email{[email protected]} \small \title[Time-dependent source identification problem ...] 
{ Time-dependent source identification problem for a fractional Schr\"odinger equation with the Riemann-Liouville derivative } \begin{abstract} The Schr\"odinger equation $i \partial_t^\rho u(x,t)-u_{xx}(x,t) = p(t)q(x) + f(x,t)$ ( $0<t\leq T, \, 0<\rho<1$), with the Riemann-Liouville derivative is considered. An inverse problem is investigated in which, along with $u(x,t)$, the time-dependent factor $p(t)$ of the source function is also unknown. To solve this inverse problem, we take the additional condition $ B [u (\cdot,t)] = \psi (t) $ with an arbitrary bounded linear functional $ B $. An existence and uniqueness theorem for the solution of the problem under consideration is proved. Stability inequalities are obtained. The method applied allows us to study a similar problem with $d^2/dx^2$ replaced by an arbitrary elliptic differential operator $A(x, D)$ having a compact inverse. \vskip 0.3cm \noindent {\it AMS 2000 Mathematics Subject Classifications} : Primary 35R11; Secondary 34A12.\\ {\it Key words}: Schr\"odinger type equations, the Riemann-Liouville derivatives, time-dependent source identification problem. \end{abstract} \maketitle \section{Introduction} The fractional integral of order $ \sigma <0 $ of a function $ h (t) $ defined on $ [0, \infty) $ has the form (see e.g. \cite{Pskhu}, p.14, \cite{SU}, Chapter 3) \[ J_t^\sigma h(t)=\frac{1}{\Gamma (-\sigma)}\int\limits_0^t\frac{h(\xi)}{(t-\xi)^{\sigma+1}} d\xi, \quad t>0, \] provided the right-hand side exists. Here $\Gamma(\sigma)$ is Euler's gamma function. Using this definition one can define the Riemann-Liouville fractional derivative of order $\rho$: $$ \partial_t^\rho h(t)= \frac{d}{dt}J_t^{\rho-1} h(t). $$ Note that if $\rho=1$, then the fractional derivative coincides with the ordinary classical derivative of the first order: $\partial_t h(t)= (d/dt) h(t)$. Let $\rho\in(0,1) $ be a fixed number and $\Omega =(0,\pi) \times (0, T]$.
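As a quick numerical sanity check of these definitions (an illustration, not part of the original text), one can discretize $J_t^\sigma$ with a product rule that integrates the singular kernel $(t-\xi)^{-\sigma-1}$ exactly on each cell; for $h(\xi)=\xi$ the Beta integral gives the closed form $J_t^\sigma h(t)=t^{1-\sigma}/\Gamma(2-\sigma)$. The helper name `rl_integral` below is hypothetical.

```python
import math

def rl_integral(h, t, sigma, n=4000):
    """Approximate the Riemann-Liouville integral J_t^sigma h(t), sigma < 0.

    Product rule: the kernel (t - xi)^(-sigma - 1) is integrated exactly on
    each cell, while h is sampled at the cell midpoint (hypothetical helper).
    """
    total = 0.0
    for j in range(n):
        a, b = j * t / n, (j + 1) * t / n
        # exact integral of (t - xi)^(-sigma - 1) over [a, b]
        w = ((t - b) ** (-sigma) - (t - a) ** (-sigma)) / sigma
        total += h((a + b) / 2) * w
    return total / math.gamma(-sigma)

sigma, t = -0.4, 1.0                 # sigma = rho - 1 for rho = 0.6
approx = rl_integral(lambda x: x, t, sigma)
exact = t ** (1 - sigma) / math.gamma(2 - sigma)   # J_t^sigma of h(xi) = xi
print(approx, exact)
```

The product rule handles the weak endpoint singularity of the kernel without any special quadrature, which is why the midpoint sampling of the smooth factor suffices here.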
Consider the following initial-boundary value problem for the Schr\"odinger equation \begin{equation}\label{prob1} \left\{ \begin{aligned} &i\partial_t^\rho u(x,t) - u_{xx}(x,t) =p(t)q(x)+ f(x,t),\quad (x,t)\in \Omega;\\ &u(0,t)=u(\pi, t)=0, \quad 0\leq t\leq T;\\ &\lim\limits_{t\rightarrow 0}J_t^{\rho-1}u(x,t)=\varphi(x), \quad 0\leq x\leq \pi, \end{aligned} \right. \end{equation} where $t^{1-\rho}p(t)$, $t^{1-\rho}f(x,t)$ and $\varphi(x), q(x)$ are continuous functions in the closed domain $\overline{\Omega}$. This problem is also called the \textit{forward problem}. If $p(t)$ is a known function, then under certain conditions on the given functions a solution to problem (\ref{prob1}) exists and is unique (see, e.g., \cite{AOLob}). We note the following property of the Riemann-Liouville integrals, which simplifies the verification of the initial condition in problem (\ref{prob1}) (see e.g. \cite{Pskhu}, p.104): \begin{equation} \label{property} \lim\limits_{t\rightarrow +0}J_t^{\alpha-1} h(t) = \Gamma (\alpha)\lim\limits_{t\rightarrow +0} t^{1-\alpha} h(t). \end{equation} From here, in particular, it follows that the solution of the forward problem can have a singularity of order $t^{\rho-1}$ at the point $t = 0$. Let $C[0, l]$ be the set of continuous functions defined on $[0,l]$ with the standard max-norm $||\cdot||_{C[0,l]}$. The purpose of this paper is not only to find a solution $u(x,t)$, but also to determine the time-dependent part $p(t)$ of the source function. To solve this time-dependent source identification problem one needs an extra condition. Following the papers of A. Ashyralyev et al. \cite{Ashyr1}-\cite{Ashyr3} we consider the additional condition in a rather general form: \begin{equation}\label{ad} B[u(\cdot,t)]=\psi(t), \quad 0\leq t \leq T, \end{equation} where $B: C[0, \pi]\rightarrow R$ is a given bounded linear functional: $||B[h(\cdot, t)]||_{C[0,T]}\leq b||h(x,t)||_{C(\overline{\Omega})}$, and $\psi(t)$ is a given continuous function.
For example, as the functional $B$ one can take $B[u(\cdot,t)]=u(x_0, t)$, $x_0\in [0, \pi]$, or $B[u(\cdot,t)]=\int_{0}^{\pi} u(x,t) dx$, or a linear combination of these two functionals. We call the initial-boundary value problem (\ref{prob1}) together with additional condition (\ref{ad}) \emph{the inverse problem}. When solving the inverse problem, we will investigate the Cauchy and initial-boundary value problems for various differential equations. In this case, by the solution of the problem we mean the classical solution, i.e. we will assume that all derivatives and functions involved in the equation are continuous with respect to the variables $x$ and $t$ in an open set. As an example, let us give the definition of the solution to the inverse problem. \begin{defin}\label{def} A pair of functions $\{u(x,t), p(t)\}$ with the properties \begin{enumerate} \item $\partial_t^\rho u(x,t), u_{xx}(x,t)\in C(\Omega)$, \item$t^{1-\rho}u(x,t)\in C(\overline{\Omega})$, \item $t^{1-\rho}p(t)\in C[0,T]$, \end{enumerate} and satisfying conditions (\ref{prob1}), (\ref{ad}) is called \textbf{the solution} of the inverse problem. \end{defin} Note that condition 3 in this definition is taken in order to cover a wider class of admissible functions $p(t)$. In this regard, it should be noted that, to the best of our knowledge, the time-dependent source identification problem for equations with the Riemann-Liouville derivative is being studied here for the first time. Taking into account the boundary conditions in problem (\ref{prob1}), it is convenient for us to introduce the H\"older classes as follows. Let $\omega_g(\delta)$ be the modulus of continuity of a function $g(x)\in C[0, \pi]$, i.e. \[ \omega_g(\delta)=\sup\limits_{|x_1-x_2|\leq\delta} |g(x_1)-g(x_2)|, \quad x_1, x_2\in [0,\pi]. \] If $\omega_g(\delta)\leq C \delta^a$ is true for some $a>0$, where $C$ does not depend on $\delta$ and $g(0)=g(\pi)=0$, then $g(x)$ is said to belong to the H\"older class $C^a[0,\pi]$.
Let us denote the smallest of all such constants $C$ by $||g||_{C^a[0,\pi]}$. Similarly, if the continuous function $h(x, t)$ is defined on $[0,\pi]\times[0, T]$, then the value \[ \omega_h(\delta ;t)=\sup\limits_{|x_1-x_2|\leq\delta} |h(x_1, t)-h(x_2, t)|, \quad x_1, x_2\in [0,\pi] \] is the modulus of continuity of the function $h(x, t)$ with respect to the variable $x$. In the case when $\omega_h(\delta; t)\leq C \delta^a$, where $C$ does not depend on $t$ and $\delta$ and $h(0,t)=h(\pi,t)=0,\,\, t\in [0,T]$, we say that $h(x, t)$ belongs to the H\"older class $C_x^a(\overline{\Omega})$. Similarly, we denote the smallest such constant $C$ by $||h||_{C_x^a(\overline{\Omega})}$. Let $C_{2,x}^a(\overline{\Omega})$ denote the class of functions $h(x,t)$ such that $h_{xx}(x,t)\in C_x^a(\overline{\Omega})$ and $h(0,t)=h(\pi,t)=0,\, t\in [0,T]$. Note that condition $h_{xx}(x,t)\in C_x^a(\overline{\Omega})$ implies that $h_{xx}(0,t)=h_{xx}(\pi,t)=0,\,\, t\in [0,T]$. For a function of one variable $g(x)$, we introduce the classes $C_{2}^a[0, \pi]$ in a similar way. \begin{thm}\label{main} Let $a>\frac{1}{2}$ and let the following conditions be satisfied \begin{enumerate} \item $t^{1-\rho}f(x,t)\in C^a_x(\overline{\Omega})$, \item$\varphi \in C^a[0, \pi]$, \item $t^{1-\rho}\psi(t),\,\, t^{1-\rho}\partial_t^\rho\psi(t) \in C[0,T]$, \item$q \in C_2^a[0, \pi],\,\, B[q(x)]\neq 0$. \end{enumerate} Then the inverse problem has a unique solution $\{u(x,t), p(t)\}$. \end{thm} Everywhere below we denote by $a$ an arbitrary fixed number greater than $1/2$: $a>1/2$. If we additionally require that the initial function $\varphi\in C_2^a[0, \pi]$, then we can establish the following result on the stability of the solution of the inverse problem. \begin{thm}\label{estimate} Let the assumptions of Theorem \ref{main} be satisfied and let $\varphi\in C_2^a[0, \pi]$.
Then the solution to the inverse problem obeys the stability estimate \[ ||t^{1-\rho}\partial_t^\rho u||_{C(\overline{\Omega})} + ||t^{1-\rho} u_{xx}||_{C(\overline{\Omega})} + ||t^{1-\rho}p||_{C[0,T]} \] \begin{equation}\label{se} \leq C_{\rho, q, B} \bigg[ ||\varphi_{xx}||_{C^a[0, \pi]} + ||t^{1-\rho}\psi||_{C[0,T]}+ ||t^{1-\rho}\partial_t^\rho\psi||_{C[0,T]} + ||t^{1-\rho}f(x,t)||_{C_x^a(\overline{\Omega})}\bigg], \end{equation} where $C_{\rho, q, B}$ is a constant depending only on $\rho, q$ and $B$. \end{thm} It should be noted that the method proposed here, based on the Fourier method, is applicable to the equation in (\ref{prob1}) with an arbitrary elliptic differential operator $A(x, D)$ instead of $d^2/dx^2$, provided that the corresponding spectral problem has a complete system of orthonormal eigenfunctions in $ L_2(G), \, G\subset R^N$. The interest in the study of source (right-hand side of the equation $F(x,t)$) identification inverse problems is motivated primarily by practical requirements in various branches of mechanics, seismology, medical tomography, and geophysics (see e.g. the survey paper Y. Liu et al. \cite{Hand1}). The identification of $F(x,t)=h(t)$ is appropriate, for example, in cases of accidents at nuclear power plants, when it can be assumed that the location of the source is known, but the decay of the radiation power over time is unknown and it is important to estimate it. On the other hand, one example of the identification of $F(x,t)=g(x)$ can be the detection of illegal wastewater discharges, which is a serious problem in some countries. The inverse problem of determining the source function $F$ from the final time observation has been well studied and many theoretical results have been published for classical partial differential equations (see, e.g. the monographs of Kabanikhin \cite{Kab1} and Prilepko, Orlovsky, and Vasin \cite{prilepko}).
As for fractional differential equations, it is possible to construct theories parallel to \cite{Kab1}, \cite{prilepko}, and the work is now ongoing. Let us mention only some of these works (a detailed review can be found in \cite{Hand1}). It should be noted right away that for the abstract case of the source function $F(x, t)$ there is currently no general closed theory. Known results deal with a separated source term $F(x, t) = h(t)g(x)$. The appropriate choice of the overdetermination depends on whether the unknown is $h(t)$ or $g(x)$. Relatively few works are devoted to the case when the unknown is the function $h(t)$ (see the survey work \cite{Hand1} and \cite{Yama11} for the case of subdiffusion equations, and, for example, \cite{Ashyr1}-\cite{Ashyr3} for the classical heat equation). Uniqueness questions in the inverse problem of finding a function $g(x)$ in fractional diffusion equations with the source function $g(x)h(t)$ have been studied in, e.g. \cite{Niu}, \cite{MS}, \cite{MS1}. In many papers, authors have considered an equation in which $ h (t) \equiv 1 $ and $ g (x) $ is unknown (see, e.g. \cite{12} - \cite{AF1}). The case of subdiffusion equations whose elliptic part is an ordinary differential expression is considered in \cite{12} - \cite{Tor}. The authors of the articles \cite{20} - \cite{24} studied subdiffusion equations in which the elliptic part is either a Laplace operator or a second-order selfadjoint operator. The paper \cite{25} studied the inverse problem for the abstract subdiffusion equation. In this article \cite{25} and most other articles, including \cite{20} - \cite{23}, the Caputo derivative is used as the fractional derivative. The subdiffusion equation considered in the recent papers \cite{AOLob}, \cite{AODif} contains the fractional Riemann-Liouville derivative, and the elliptic part is an arbitrary elliptic expression of order $m$.
In \cite{13} and \cite{24}, the fractional derivative in the subdiffusion equation is a two-parameter generalized Hilfer fractional derivative. Note also that the papers \cite{13}, \cite{20}, \cite{23} contain a survey of papers dealing with inverse problems of determining the right-hand side of the subdiffusion equation. In \cite{24}, \cite{KirSal}, \cite{Ali}, non-self-adjoint differential operators (with nonlocal boundary conditions) were taken as the elliptic part of the equation, and solutions to the inverse problem were found in the form of biorthogonal series. In \cite{AF1}, the authors considered an inverse problem for simultaneously determining the order of the Riemann-Liouville fractional derivative and the source function in the subdiffusion equations. Using the classical Fourier method, the authors proved the uniqueness and existence of a solution to this inverse problem. It should be noted that in all of the listed works, the Cauchy conditions in time are considered (an exception is the work \cite{Saima}, where an integral condition is set with respect to the variable $t$). In the recent paper \cite{AFFF}, to the best of our knowledge, an inverse problem for the subdiffusion equation with a nonlocal condition in time is considered for the first time. The papers \cite{AF2} - \cite{AF3} deal with the inverse problem of determining the order of the fractional derivative in the subdiffusion equation and in the wave equation, respectively. The time-dependent source identification problem (\ref{prob1}) for classical Schr\"odinger type equations (i.e. $\rho=1$) with additional condition (\ref{ad}) was first investigated in the papers of A. Ashyralyev et al. \cite{Ashyr1}-\cite{Ashyr3}. To investigate the inverse problem (\ref{prob1}), (\ref{ad}) we borrow some original ideas from these papers.
\section{Preliminaries} In this section, we recall some information about Mittag-Leffler functions, differential and integral equations, which we will use in the following sections. For $0 < \rho < 1$ and an arbitrary complex number $\mu$, by $ E_{\rho, \mu}(z)$ we denote the Mittag-Leffler function of complex argument $z$ with two parameters: \begin{equation}\label{ml} E_{\rho, \mu}(z)= \sum\limits_{k=0}^\infty \frac{z^k}{\Gamma(\rho k+\mu)}. \end{equation} If the parameter $\mu =1$, then we have the classical Mittag-Leffler function: $ E_{\rho}(z)= E_{\rho, 1}(z)$. Since $E_{\rho, \mu}(z)$ is an analytic function of $z$, it is bounded for $|z|\leq 1$. On the other hand the well-known asymptotic estimate of the Mittag-Leffler function has the form (see, e.g., \cite{Dzh66}, p. 133): \begin{lem}\label{ml} Let $\mu$ be an arbitrary complex number. Further let $\alpha$ be a fixed number, such that $\frac{\pi}{2}\rho<\alpha<\pi \rho$, and $\alpha \leq |\arg z|\leq \pi$. Then the following asymptotic estimate holds \[ E_{\rho, \mu}(z)= - \sum\limits_{k=1}^2\frac{z^{-k}}{\Gamma(\mu-\rho k)} + O(|z|^{-3}), \,\, |z|>1. \] \end{lem} We can choose the parameter $\alpha$ so that the following estimate is valid: \begin{cor}\label{ml1} For any $t\geq 0$ one has \[ |E_{\rho, \mu}(it)|\leq \frac{C}{1+t}, \] where the constant $C$ does not depend on $t$ and $\mu$. \end{cor} We will also use a coarser estimate with a positive number $\lambda$ and $0<\varepsilon<1$: \begin{equation}\label{ml2} |t^{\rho-1} E_{\rho,\rho}(-i\lambda t^\rho)|\leq \frac{Ct^{\rho-1}}{1+\lambda t^\rho}\leq C \lambda^{\varepsilon-1} t^{\varepsilon\rho-1}, \quad t>0, \end{equation} which is easy to verify. Indeed, let $t^\rho\lambda<1$, then $t< \lambda^{-1/\rho}$ and $$ t^{\rho -1} = t^{\rho-\varepsilon\rho} t^{\varepsilon\rho-1} < \lambda^{\varepsilon-1}t^{\varepsilon\rho-1}.
$$ If $t^\rho\lambda\geq 1$, then $\lambda^{-1}\leq t^\rho$ and $$ \lambda^{-1} t^{-1}=\lambda^{-1+\varepsilon} \lambda^{-\varepsilon} t^{-1}\leq \lambda^{\varepsilon-1}t^{\varepsilon\rho-1}. $$ \begin{lem}\label{du} Let $t^{1-\rho}g(t)\in C[0, T]$. Then the unique solution of the Cauchy problem \begin{equation}\label{prob.y} \left\{ \begin{aligned} &i\partial_t^\rho y(t) + \lambda y(t) =g(t),\quad 0<t\leq T;\\ &\lim\limits_{t\rightarrow 0}J_t^{\rho-1}y(t)=y_0 \end{aligned} \right. \end{equation} has the form \[ y(t)=t^{\rho-1} E_{\rho, \rho}(i\lambda t^\rho) y_0 - i \int\limits_0^t (t-s)^{\rho-1} E_{\rho,\rho} (i\lambda (t-s)^\rho) g(s) ds. \] \end{lem} \begin{proof} Multiply equation (\ref{prob.y}) by $(-i)$ and then apply formula (7.2.16) of \cite{Gor}, p. 174 (see also \cite{AshCab}, \cite{AshCab2}). \end{proof} Let us denote by $A$ the operator $-d^2/dx^2$ with the domain $D(A) = \{v(x)\in W_2^2(0, \pi): v(0)=v(\pi)=0\}$, where $W_2^2(0, \pi)$ is the standard Sobolev space. The operator $A$ is selfadjoint in $L_2(0, \pi)$ and has a complete set of eigenfunctions $\{v_k(x) = \sin kx\}$ in $L_2(0, \pi)$ with eigenvalues $\lambda_k=k^2$, $k=1,2,...$. Consider the operator $E_{\rho, \mu} (it A)$, defined by the spectral theorem of J. von Neumann: \[ E_{\rho, \mu} (it A)h(x,t) = \sum\limits_{k=1}^\infty E_{\rho,\mu} (it \lambda_k) h_k(t) v_k(x), \] here and everywhere below, by $h_k(t)$ we denote the Fourier coefficients of a function $h(x,t)$: $h_k(t)=(h(x,t),v_k)$, where $(\cdot, \cdot)$ stands for the scalar product in $L_2(0, \pi)$. This series converges in the $L_2(0, \pi)$ norm. But we need to investigate the uniform convergence of this series in $\Omega$. To do this, we recall the following statement. \begin{lem}\label{Zyg}Let $g\in C^a[0, \pi]$. Then for any $\sigma\in [0, a-1/2)$ one has \[ \sum\limits_{k=1}^\infty k^\sigma|g_k|<\infty. \] \end{lem} For $\sigma=0$ this assertion coincides with the well-known theorem of S. N.
Bernshtein on the absolute convergence of trigonometric series and is proved in exactly the same way as this theorem. For the convenience of readers, we recall the main points of the proof (see, e.g. \cite{Zyg}, p. 384). \begin{proof} In Theorem (3.1) of A. Zygmund \cite{Zyg}, p. 384, it is proved that for an arbitrary function $g(x)\in C[0,\pi]$, with the properties $g(0)=g(\pi)=0$, one has the estimate \[ \sum\limits_{k=2^{n-1}+1}^{2^n} |g_k|^2 \leq \omega_g^2\bigg(\frac{1}{2^{n+1}}\bigg). \] Therefore, if $\sigma\geq 0$, then by the Cauchy-Bunyakovsky inequality \[ \sum\limits_{k=2^{n-1}+1}^{2^n} k^\sigma |g_k| \leq \bigg(\sum\limits_{k=2^{n-1}+1}^{2^n} |g_k|^2\bigg)^{\frac{1}{2}}\bigg(\sum\limits_{k=2^{n-1}+1}^{2^n} k^{2\sigma}\bigg)^{\frac{1}{2}}\leq C 2^{n(\frac{1}{2}+\sigma)}\omega_g\bigg(\frac{1}{2^{n+1}}\bigg), \] and finally \[ \sum\limits_{k=2}^{\infty} k^\sigma|g_k|= \sum\limits_{n=1}^\infty\sum\limits_{k=2^{n-1}+1}^{2^n} k^\sigma|g_k| \leq C \sum\limits_{n=1}^\infty 2^{n(\frac{1}{2}+\sigma)}\omega_g\bigg(\frac{1}{2^{n+1}}\bigg). \] Obviously, if $\omega_g(\delta)\leq C \delta^a$, $a>1/2$ and $0<\sigma<a - 1/2$, then the last series converges: \[ \sum\limits_{k=2}^{\infty} k^\sigma|g_k|\leq C ||g||_{C^a[0,\pi]}. \] \end{proof} \begin{lem}\label{EA}Let $h(x,t)\in C_x^a(\overline{\Omega})$. Then $E_{\rho, \mu} (it A)h(x,t)\in C(\overline{\Omega})$ and $\frac{\partial^2}{\partial x^2} E_{\rho, \mu} (it A)h(x,t)\in C([0, \pi]\times(0,T])$. Moreover, the following estimates hold \begin{equation}\label{E} ||E_{\rho, \mu} (it A)h(x,t)||_{C(\overline{\Omega})} \leq C ||h||_{C_x^a(\overline{\Omega})}; \end{equation} \begin{equation}\label{AE} \bigg |\bigg|\frac{\partial^2}{\partial x^2} E_{\rho, \mu} (it A)h(x,t)\bigg|\bigg|_{C[0, \pi]} \leq C t^{-1} ||h||_{C_x^a(\overline{\Omega})}, \,\, t>0.
\end{equation} If $h(x,t)\in C_{2,x}^a(\overline{\Omega})$, then \begin{equation}\label{AEA} \bigg |\bigg|\frac{\partial^2}{\partial x^2} E_{\rho, \mu} (it A)h(x,t)\bigg|\bigg|_{C(\overline{\Omega})} \leq C ||h_{xx}||_{C_x^a(\overline{\Omega})}. \end{equation} \end{lem} \begin{proof}By definition one has \[ |E_{\rho, \mu} (it A)h(x,t)|=\bigg|\sum\limits_{k=1}^\infty E_{\rho, \mu} (it \lambda_k) h_k(t) v_k(x)\bigg|\leq\sum\limits_{k=1}^\infty |E_{\rho, \mu} (it \lambda_k) h_k(t)|. \] Corollary \ref{ml1} and Lemma \ref{Zyg} imply \[ |E_{\rho, \mu} (it A)h(x,t)|\leq C \sum\limits_{k=1}^\infty \bigg|\frac{ h_k(t)}{1+t \lambda_k}\bigg|\leq C ||h||_{C_x^a(\overline{\Omega})}. \] On the other hand, \[ \bigg |\frac{\partial^2}{\partial x^2} E_{\rho, \mu} (it A)h(x,t)\bigg|\leq C \sum\limits_{k=1}^\infty \bigg|\frac{\lambda_k h_k(t)}{1+t \lambda_k}\bigg|\leq C t^{-1} ||h||_{C_x^a(\overline{\Omega})}, \,\, t>0. \] If $h(x,t)\in C_{2,x}^a(\overline{\Omega})$, then $h_k(t)=-\lambda_k^{-1} (h_{xx})_k(t)$. Therefore, \[ \bigg|\frac{\partial^2}{\partial x^2} E_{\rho, \mu} (it A)h(x,t)\bigg|\leq C ||h_{xx}||_{C_x^a(\overline{\Omega})}, \,\, 0\leq t\leq T. \] \end{proof} \begin{lem}\label{EAintL}Let $t^{1-\rho}g(x,t)\in C_x^a(\overline{\Omega})$. Then there is a positive constant $c_1$ such that \begin{equation}\label{EAint} \bigg|t^{1-\rho}\int\limits_0^t (t-s)^{\rho-1} E_{\rho, \rho} (i(t-s)^\rho A)g(x,s)ds \bigg| \leq c_1 \frac{t^\rho}{\rho}||t^{1-\rho}g||_{C_x^a(\overline{\Omega})}. \end{equation} \end{lem} \begin{proof} Apply estimate (\ref{E}) to get \[ \bigg|t^{1-\rho}\int\limits_0^t (t-s)^{\rho-1} E_{\rho, \rho} (i(t-s)^\rho A)g(x,s)ds \bigg| \leq C t^{1-\rho}\int_{0}^{t}(t-s)^{\rho-1}s^{\rho-1} ds\cdot ||t^{1-\rho}g||_{C_x^a(\overline{\Omega})}. \] For the integral one has \begin{equation}\label{int} \int_{0}^{t}(t-s)^{\rho-1}s^{\rho-1} ds= \int\limits_0^{\frac{t}{2}} \cdot + \int\limits_{\frac{t}{2}}^{t} \cdot\leq \frac{2^{2(1-\rho)}}{\rho} t^{2\rho-1}. 
\end{equation} Denoting $c_1=4C$, we obtain the assertion of the lemma. \end{proof} \begin{cor}\label{g1g2} If the function $g(x,t)$ can be represented in the form $g_1(x) g_2(t)$, then the right-hand side of estimate (\ref{EAint}) has the form: \[ c_1 \frac{t^\rho}{\rho}||g_1||_{C^a[0,\pi]}||t^{1-\rho}g_2||_{C[0,T ]}. \] \end{cor} \begin{lem}\label{ep} Let $t^{1-\rho}g(x,t)\in C_x^a(\overline{\Omega})$. Then \[ \bigg|\bigg|\int\limits_0^t(t-s)^{\rho-1} \frac{\partial^2}{\partial x^2} E_{\rho, \rho}(i(t-s)^\rho A)g(x,s) ds \bigg|\bigg|_{C(\overline{\Omega})}\leq C ||t^{1-\rho}g||_{C_x^a(\overline{\Omega})}. \] \end{lem} \begin{proof}Let \[ S_j(x,t)= \sum\limits_{k=1}^j \left[\int\limits_{0}^t(t-s)^{\rho-1} E_{\rho, \rho}(i\lambda_k(t-s)^\rho )g_k(s) ds\right] \lambda_k v_k(x). \] Choose $\varepsilon$ so that $0<\varepsilon<a-1/2$ and apply the inequality (\ref{ml2}) to get \[ |S_j(x,t)|\leq C\sum\limits_{k=1}^j \int\limits_0^t (t-s)^{\varepsilon\rho-1}s^{\rho-1}\lambda_k^{\varepsilon}|s^{1-\rho}g_k(s)| ds. \] By Lemma \ref{Zyg} we have \[ |S_j(x,t)|\leq C ||t^{1-\rho}g||_{C_x^a(\overline{\Omega})} \] and since \[ \int\limits_0^t(t-s)^{\rho-1} \frac{\partial^2}{\partial x^2} E_{\rho, \rho}(i(t-s)^\rho A)g(x,s) ds =\lim\limits_{j\rightarrow\infty} S_j(x,t), \] the last inequality implies the assertion of the lemma. \end{proof} \begin{lem}\label{prob.w} Let $t^{1-\rho}G(x,t)\in C_x^a(\overline{\Omega})$ and $\varphi\in C^a[0,\pi]$. Then the unique solution of the following initial-boundary value problem \begin{equation}\label{prob2} \left\{ \begin{aligned} &i \partial_t^\rho w(x,t) - w_{xx}(x,t) =G(x,t),\,\, 0 < t \leq T;\\ &w(0,t)=w(\pi,t)=0, \,\, 0 < t \leq T;\\ &\lim\limits_{t\rightarrow 0}J_t^{\rho-1} w(x,t) =\varphi(x),\,\, 0\leq x\leq \pi, \end{aligned} \right. \end{equation} has the form \[ w(x,t)=t^{\rho-1} E_\rho(i t^\rho A) \varphi(x) - i \int\limits_0^t (t-s)^{\rho-1} E_{\rho, \rho} (i (t-s)^\rho A) G(x,s) ds.
\] \end{lem} \begin{proof} According to the Fourier method, we will seek the solution to this problem in the form \[ w(x,t) =\sum\limits_{k=1}^\infty T_k(t) v_k(x), \] where $T_k(t)$ are the unique solutions of the problems \begin{equation}\label{prob.T} \left\{ \begin{aligned} &i \partial_t^\rho T_k + \lambda_k T_k(t) =G_k(t),\quad 0 < t \leq T;\\ &\lim\limits_{t\rightarrow 0}J_t^{\rho-1} T_k(t) =\varphi_k. \end{aligned} \right. \end{equation} Lemma \ref{du} implies \[ T_k(t)=t^{\rho-1} E_\rho(i\lambda_k t^\rho) \varphi_k - i \int\limits_0^t (t-s)^{\rho-1} E_{\rho, \rho} (i\lambda_k (t-s)^\rho) G_k(s) ds. \] Hence the solution to problem (\ref{prob2}) has the form \[ w(x,t)=t^{\rho-1} E_\rho(i t^\rho A) \varphi(x) - i \int\limits_0^t (t-s)^{\rho-1} E_{\rho, \rho} (i (t-s)^\rho A) G(x,s) ds. \] Note that the existence of the first term follows from estimate (\ref{E}), and the existence of the second term follows from Lemma \ref{EAintL}. By Lemma \ref{ep} and estimate (\ref{AE}), we obtain: $w_{xx}(x,t)\in C(\Omega)$. Since $i\partial_t^\rho w(x,t)= -w_{xx}(x,t) + G(x,t)$, it follows that $\partial_t^\rho w(x,t)\in C(\Omega)$. The uniqueness of the solution can be proved by the standard technique based on the completeness of the set of eigenfunctions $\{v_k(x)\}$ in $L_2(0,\pi)$ (see, e.g., \cite{AOLob}). \end{proof} Let $t^{1-\rho}F(x,t)\in C(\overline{\Omega})$ and $g(x)\in C^a[0,\pi]$. Consider the Volterra integral equation \begin{equation}\label{VE} w(x,t)=F(x,t)+\int\limits_0^t (t-s)^{\rho-1} E_{\rho, \rho} (i (t-s)^\rho A) g(x) B[w(\cdot,s)]ds. \end{equation} \begin{lem}\label{VElem} There is a unique solution $t^{1-\rho}w\in C(\overline{\Omega})$ to the integral equation (\ref{VE}). \end{lem} \begin{proof} Equation (\ref{VE}) is similar to the equations considered in the book \cite{Kil}, p. 199, Eq. (3.5.4) and it is solved in essentially the same way. Let us recall the main points. Equation (\ref{VE}) makes sense in any interval $[0, t_1]\subset [0,T]$, $(0<t_1<T)$.
Choose $t_1$ such that \begin{equation}\label{t1} c_1 b ||g||_{C^a[0,\pi]} \frac{t_1^\rho}{\rho}<1 \end{equation} and prove the existence of a unique solution $t^{1-\rho}w(x, t)\in C([0,\pi]\times[0, t_1])$ to the equation (\ref{VE}) on the interval $[0, t_1]$ (here the constant $c_1$ is taken from estimate (\ref{EAint}), see Corollary \ref{g1g2}). For this we use the Banach fixed point theorem for the space $C([0,\pi]\times[0, t_1])$ with the weight function $t^{1-\rho}$ (see, e.g., \cite{Kil}, Theorem 1.9, p. 68), where the distance is given by \[ d(w_1, w_2) = ||t^{1-\rho}[w_1(x,t)- w_2(x,t)]||_{C([0,\pi]\times [0, t_1])}. \] Let us denote the right-hand side of equation (\ref{VE}) by $Pw(x,t)$, where $P$ is the corresponding linear operator. To apply the Banach fixed point theorem we have to prove the following: (a) if $t^{1-\rho}w(x,t)\in C([0,\pi]\times[0, t_1])$, then $t^{1-\rho}Pw(x,t)\in C([0,\pi]\times[0, t_1])$; (b) for any $t^{1-\rho}w_1, t^{1-\rho}w_2 \in C([0,\pi]\times[0, t_1])$ one has \[ d(Pw_1,Pw_2)\leq \delta\cdot d(w_1,w_2), \,\, \delta<1. \] Lemmas \ref{EA} and \ref{EAintL} imply condition (a). On the other hand, thanks to (\ref{EAint}) (see Corollary \ref{g1g2}) we arrive at \[ \bigg|\bigg|t^{1-\rho}\int\limits_0^t (t-s)^{\rho-1} E_{\rho, \rho} (i (t-s)^\rho A) g(x) B[w_1(\cdot,s)-w_2(\cdot,s)]ds\bigg|\bigg|_{C([0,\pi]\times [0, t_1])}\leq\delta\cdot d(w_1,w_2), \] where $\delta =c_1 b ||g||_{C^a[0,\pi]} \frac{t_1^\rho}{\rho}<1$ by condition (\ref{t1}). Hence by the Banach fixed point theorem, there exists a unique solution $t^{1-\rho} w^\star (x,t)\in C([0,\pi]\times [0, t_1])$ to equation (\ref{VE}) on the interval $[0, t_1]$ and this solution is the limit of the convergent sequence $w_n(x,t)=P^n F(x,t)= P P^{n-1} F(x,t)$: \[ \lim\limits_{n\rightarrow \infty} d(w_n(x,t),w^\star(x,t))=0. \] Next we consider the interval $[t_1, t_2]$, where $t_2=t_1+l_1<T$, and $l_1>0$.
Rewrite equation (\ref{VE}) in the form \[ w(x,t)=F_1(x,t)+\int\limits_{t_1}^t (t-s)^{\rho-1} E_{\rho, \rho} (i (t-s)^\rho A) g(x) B[w(\cdot,s)]ds, \] where \[ F_1(x,t)=F(x,t)+ \int\limits_{0}^{t_1} (t-s)^{\rho-1} E_{\rho, \rho} (i (t-s)^\rho A) g(x) B[w(\cdot,s)]ds \] is a known function, since the function $w(x,t)$ is uniquely defined on the interval $[0, t_1]$. Using the same arguments as above, we derive that there exists a unique solution $t^{1-\rho}w^\star(x,t)\in C([0,\pi]\times[t_1, t_2])$ to equation (\ref{VE}) on the interval $[t_1, t_2]$. Taking the next interval $[t_2, t_3]$, where $t_3=t_2+l_2<T$ and $l_2>0$, and repeating this process (the lengths $l_n$ can be chosen bounded from below, $l_n\geq l_0>0$, so that $[0,T]$ is covered after finitely many steps), we conclude that there exists a unique solution $t^{1-\rho}w^\star(x,t)\in C([0,\pi]\times[0, T])$ to equation (\ref{VE}) on the interval $[0, T]$, and this solution is the limit of the convergent sequence $t^{1-\rho}w_n(x,t)\in C([0,\pi]\times[0, T])$: \begin{equation}\label{wn} \lim\limits_{n\rightarrow \infty} ||t^{1-\rho} [w_n(x,t)-w^\star(x,t)]||_{C(\overline{\Omega})}=0, \end{equation} where $w_n$ is defined piecewise on the intervals $[0, t_1], \dots, [t_{L-1}, T]$. \end{proof} We need the following variant of Gronwall's inequality: \begin{lem} Let $0<\rho<1$. Assume that the non-negative function $h(t)\in C[0,T]$ and the positive constants $K_0$ and $K_1$ satisfy \[ h(t)\leq K_0+ K_1\int\limits_0^t(t- s)^{\rho-1}s^{\rho-1} h(s) ds \] for all $t\in [0, T]$. Then there exists a positive constant $C_{\rho, T}$, depending only on $\rho$, $K_1$ and $T$, such that \begin{equation}\label{Gron} h(t)\leq K_0 \cdot C_{\rho, T} . \end{equation} \end{lem} Usually Gronwall's inequality is formulated with a continuous function $k(s)$ instead of the kernel $K_1(t-s)^{\rho-1}s^{\rho-1}$. However, estimate (\ref{Gron}) is proved in a similar way to the well-known Gronwall inequality. For the convenience of the reader, we present a proof of estimate (\ref{Gron}).
\begin{proof} Iterating the hypothesis of Gronwall's inequality gives \[ h(t)\leq K_0+K_0 K_1 \int\limits_0^t(t- s)^{\rho-1}s^{\rho-1} ds + K_1^2\int\limits_0^t(t- s)^{\rho-1}s^{\rho-1}\int\limits_0^s(s-\xi)^{\rho-1} \xi^{\rho-1}h(\xi)d\xi ds \] \[ \leq K_{\rho, T}+K_1^2 \int\limits_0^t h(\xi) \xi^{\rho-1} \int\limits_\xi^t (t-s)^{\rho-1}(s-\xi)^{\rho-1} s^{\rho-1} ds d\xi, \] where \[ K_{\rho, T}=K_0+ K_0 K_1 \int\limits_0^T(T- s)^{\rho-1}s^{\rho-1} ds. \] For the inner integral we have (see (\ref{int})) \[ \int\limits_\xi^t (s-\xi)^{\rho-1}(t-s)^{\rho-1}ds = \int\limits_0^{t-\xi} y^{\rho-1}(t-\xi-y)^{\rho-1}dy\leq \frac{2^{2(1-\rho)}}{\rho} (t-\xi)^{2\rho-1}. \] Thus the hypothesis takes the form \[ h(t)\leq K_{\rho, T}+ K_1^2 \frac{2^{2(1-\rho)}}{\rho}\int\limits_0^t(t- s)^{2\rho-1} h(s) ds. \] Repeating this process $k$ times, with $k$ chosen so that $k\rho>1$, we conclude that there are positive constants $C_0$ and $C_\rho$, depending only on $\rho$, $K_1$ and $T$, such that \[ h(t)\leq K_0 C_0+ C_\rho\int\limits_0^t h(s) ds, \] or \[ \frac{h(\xi)}{K_0 C_0+C_\rho\int\limits_0^\xi h(s) ds}\leq 1. \] Multiply this by $C_\rho$ to get \[ \frac{d}{d\xi}\ln \bigg(K_0 C_0+C_\rho\int\limits_0^\xi h(s) ds\bigg)\leq C_\rho. \] Integrate from $\xi=0$ to $\xi=t$ and exponentiate to obtain \[ K_0 C_0+C_\rho\int\limits_0^t h(s) ds\leq K_0 C_0 e^{C_\rho t}. \] Finally, note that the left-hand side is $\geq h(t)$, which gives (\ref{Gron}) with $C_{\rho, T}=C_0 e^{C_\rho T}$. \end{proof} \section{Auxiliary problem and proof of Theorem \ref{main}} Let us consider the following auxiliary initial-boundary value problem \begin{equation}\label{prob2} \left\{ \begin{aligned} &i \partial_t^\rho \omega(x,t) - \omega_{xx}(x,t) = - i \mu(t) q''(x) + f(x,t),\quad (x,t)\in \Omega;\\ &\omega(0, t)=\omega(\pi,t)=0, \quad 0\leq t\leq T;\\ &\lim\limits_{t\rightarrow 0}J_t^{\rho-1}\omega(x,t) =\varphi(x), \quad 0\leq x\leq \pi, \end{aligned} \right.
\end{equation} where the function $\mu(t)$ is the unique solution of the Cauchy problem: \begin{equation}\label{prob2a} \left\{ \begin{aligned} &\partial_t^\rho \mu(t) = p(t),\quad 0 < t \leq T;\\ &\lim\limits_{t\rightarrow 0}J_t^{\rho-1}\mu(t) =0. \end{aligned} \right. \end{equation} Note that the solution to the Cauchy problem (\ref{prob2a}) has the form (see, e.g., \cite{AshCab}): \[ \mu(t)=\frac{1}{\Gamma(\rho)}\int\limits_{0}^t (t-s)^{\rho-1} p(s)ds. \] \begin{defin}\label{def} A function $\omega(x,t)$ with the properties \begin{enumerate} \item $\partial_t^\rho \omega(x,t), \omega_{xx}(x,t)\in C(\Omega)$, \item$t^{1-\rho}\omega_{xx}(x,t)\in C((0, \pi) \times [0,T])$, \item $t^{1-\rho}\omega(x,t)\in C(\overline{\Omega})$, \end{enumerate} and satisfying conditions (\ref{prob2}) is called \textbf{the solution} of problem (\ref{prob2}). \end{defin} \begin{lem}\label{auxiliary} Let $\omega(x,t)$ be a solution of problem (\ref{prob2}). Then the unique solution $\{u(x,t), p(t)\}$ to the inverse problem (\ref{prob1}), (\ref{ad}) has the form \begin{equation}\label{u} u(x,t)=\omega(x,t)-i \mu(t) q(x), \end{equation} \begin{equation}\label{p} p(t)=\frac{i}{B[ q(x)]}\{\partial^\rho_t\psi(t)-B[\partial^\rho_t \omega(\cdot,t)]\}, \end{equation} where \begin{equation}\label{mu} \mu(t)=\frac{i}{B[ q(x)]}[\psi(t)- B[\omega(\cdot,t)]]. \end{equation} \end{lem} \begin{proof} Substitute the function $u(x,t)$, defined by equality (\ref{u}), into the equation in (\ref{prob1}). Then \[ i \partial_t^\rho \omega(x,t) + \partial_t^\rho \mu(t) q(x) - \omega_{xx}(x,t) + i \mu(t) q''(x)= p(t) q(x)+f(x,t). \] Since $\partial_t^\rho \mu(t) = p(t)$ (see (\ref{prob2a})), we obtain equation (\ref{prob2}), i.e., the function $u(x,t)$ defined by (\ref{u}) is a solution of the equation in (\ref{prob1}).
As for the initial condition, again by virtue of (\ref{prob2a}) we get \[ \lim\limits_{t\rightarrow 0}J_t^{\rho-1} u(x,t)=\lim\limits_{t\rightarrow 0}J_t^{\rho-1}\omega(x,t)-i \lim\limits_{t\rightarrow 0}J_t^{\rho-1}\mu(t) q(x) =\lim\limits_{t\rightarrow 0}J_t^{\rho-1}\omega(x,t)=\varphi(x). \] On the other hand, the conditions $q(0)=q(\pi)=0$ imply $u(0, t)=u(\pi, t)=0$, $0\leq t\leq T$. From Definition \ref{def} of the solution $\omega(x,t)$ and the properties of the functions $\mu(t)$ and $q(x)$ it immediately follows that the function $u(x,t)$ satisfies the requirements: $\partial_t^\rho u(x,t), u_{xx}(x,t)\in C(\Omega)$, $t^{1-\rho}u(x,t)\in C(\overline{\Omega})$. Thus the function $u(x,t)$ defined by (\ref{u}) is a solution of the initial-boundary value problem (\ref{prob1}). Let us prove equation (\ref{p}). Rewrite (\ref{u}) as \[ iq(x) \mu(t)=\omega(x,t)- u(x,t). \] Apply (\ref{ad}) to obtain \[ i \mu(t) B[ q(x)] =B[\omega(\cdot,t)] - \psi(t), \] or, since $B[ q(x)]\neq 0$, we get (\ref{mu}). Finally, using the equality $\partial_t^\rho \mu(t)= p(t)$, we have \[ p(t)=\frac{i}{B[ q(x)]}[\partial_t^\rho \psi(t)- B[\partial_t^\rho \omega(\cdot,t)]], \] which coincides with (\ref{p}). Moreover, from the definition of the solution $\omega(x,t)$ of problem (\ref{prob2}) and the properties of the function $\psi(t)$, one has $t^{1-\rho} p(t)\in C[0,T]$. \end{proof} Thus, to solve the inverse problem (\ref{prob1}), (\ref{ad}), it is sufficient to solve the initial-boundary value problem (\ref{prob2}). \begin{thm}\label{AP} Under the assumptions of Theorem \ref{main}, problem (\ref{prob2}) has a unique solution. \end{thm} \begin{proof} Let \begin{equation}\label{Gsep} G(x,s)=\frac{i}{B[ q(x)]} (B[\omega(\cdot,s)]-\psi(s))q''(x) +f(x,s) \end{equation} and suppose that $s^{1-\rho} G(x,s)\in C_x^a(\overline{\Omega})$.
Then by Lemma \ref{prob.w} problem (\ref{prob2}) is equivalent to the integral equation \[ \omega(x,t)=t^{\rho-1} E_\rho (i t^\rho A)\varphi(x)- i \int\limits_0^t (t-s)^{\rho-1} E_{\rho,\rho} (i(t-s)^\rho A) G(x,s) ds. \] Rewrite this equation as \begin{equation}\label{VE2} \omega(x,t)=F(x,t)+\int\limits_0^t (t-s)^{\rho-1} E_{\rho, \rho} (i (t-s)^\rho A) \frac{ q''(x)}{B[ q(x)]} B[\omega(\cdot,s)]ds, \end{equation} where \[ F(x,t)=t^{\rho-1} E_\rho (i t^\rho A)\varphi(x)- i \int\limits_0^t (t-s)^{\rho-1} E_{\rho,\rho} (i(t-s)^\rho A) \bigg[-\frac{i q''(x)}{B[ q(x)]} \psi(s)+f(x,s)\bigg] ds. \] In order to apply Lemma \ref{VElem} to equation (\ref{VE2}), we show that $t^{1-\rho}F(x,t)\in C(\overline{\Omega})$. Indeed, by estimate (\ref{E}) one has $ E_\rho (i t^\rho A)\varphi(x) \in C(\overline{\Omega}) $. According to the conditions of Theorem \ref{main}, $h(x,s)=s^{1-\rho}[-\frac{i q''(x)}{B[ q(x)]} \psi(s)+f(x,s)]\in C_x^a(\overline{\Omega})$. Therefore, by virtue of estimate (\ref{EAint}), the second term of the function $t^{1-\rho}F(x,t)$ also belongs to the class $C(\overline{\Omega})$. Hence, by virtue of Lemma \ref{VElem}, the Volterra equation (\ref{VE2}) has a unique solution $t^{1-\rho}\omega(x,t)\in C(\overline{\Omega})$. Let us show that $\partial_t^\rho \omega(x,t),\, \omega_{xx}(x,t)\in C(\Omega)$. First we consider $F_{xx}(x,t)$ and note that, by estimate (\ref{AE}), we have $\frac{\partial^2}{\partial x^2} E_\rho (i t^\rho A)\varphi (x)\in C([0, \pi]\times(0, T])$.
Since the function $h$ defined above belongs to the class $C_x^a(\overline{\Omega})$, by Lemma \ref{ep} the second term of the function $F_{xx}(x,t)$ belongs to $C(\Omega)$ and satisfies the estimate: \[ \bigg|\bigg|t^{1-\rho}\int\limits_0^t (t-s)^{\rho-1} \frac{\partial^2}{\partial x^2} E_{\rho,\rho} (i(t-s)^\rho A) \bigg[-\frac{i q''(x)}{B[ q(x)]} \psi(s)+f(x,s)\bigg] ds \bigg|\bigg|_{C(\overline{\Omega})} \] \begin{equation}\label{st1} \leq C \bigg[\bigg|\bigg|t^{1-\rho} \frac{q''(x)}{B[ q(x)]}\psi(t)\bigg|\bigg|_{C^a_x(\overline{\Omega})}+ ||t^{1-\rho}f(x,t)||_{C^a_x(\overline{\Omega})} \bigg]\leq C_{a, q, B} \big[||t^{1-\rho}\psi||_{C[0, T]}+||t^{1-\rho}f(x,t)||_{C^a_x(\overline{\Omega})}\big]. \end{equation} We pass to the second term on the right-hand side of equality (\ref{VE2}). Since $t^{1-\rho}\omega(x,t)\in C(\overline{\Omega})$, the conditions of Theorem \ref{main} imply that $s^{1-\rho}\frac{q''(x)}{B[q(x)]}B[\omega(\cdot,s)]\in C_x^a(\overline{\Omega})$. Then again by Lemma \ref{ep}, this term belongs to $C(\overline{\Omega})$ and satisfies the estimate: \[ \bigg|\bigg|t^{1-\rho}\int\limits_0^t (t-s)^{\rho-1} \frac{\partial^2}{\partial x^2} E_{\rho,\rho} (i(t-s)^\rho A) \frac{ q''(x)}{B[ q(x)]} B[\omega(\cdot,s)] ds\bigg|\bigg|_{C(\overline{\Omega})} \] \begin{equation}\label{st2} \leq C\bigg|\bigg| \frac{q''(x)}{B[ q(x)]}B[t^{1-\rho}\omega(\cdot,t)]\bigg|\bigg|_{C(\overline{\Omega})}\leq C_{a, q, B} ||t^{1-\rho}\omega(x,t)||_{C(\overline{\Omega})}. \end{equation} Thus, $\omega_{xx}(x,t)\in C(\Omega)$. On the other hand, by virtue of equation (\ref{prob2}) and the conditions of Theorem \ref{main}, we will have \[ i\partial_t^\rho \omega(x,t) = \omega_{xx}(x,t) - i \mu(t) q''(x) + f(x,t)\in C(\Omega), \] and hence $\partial_t^\rho \omega(x,t)\in C(\Omega)$. The fact that here $t^{1-\rho}\mu(t)\in C[0,T]$ follows again from the conditions of Theorem \ref{main} and equality (\ref{mu}). It remains to show that $t^{1-\rho} G(x,t)\in C_x^a(\overline{\Omega})$.
But this fact follows from the conditions of Theorem \ref{main} and the already proven assertion: $t^{1-\rho}\omega(x,t)\in C(\overline{\Omega})$. \end{proof} As noted above, Theorem \ref{main} is an immediate consequence of Lemma \ref{auxiliary} and Theorem \ref{AP}. \section{Proof of Theorem \ref{estimate}} First we prove the following statement on the stability of the solution to problem (\ref{prob2}), (\ref{prob2a}). \begin{thm}\label{estimate2} Let the assumptions of Theorem \ref{estimate} be satisfied. Then the solution to problem (\ref{prob2}), (\ref{prob2a}) obeys the stability estimate \begin{equation}\label{se} ||t^{1-\rho}\partial_t^\rho \omega||_{C(\overline{\Omega})}\leq C_{\rho, q, B} \big[ ||\varphi_{xx}||_{C^a[0,\pi]} + ||t^{1-\rho}\psi||_{C[0,T]} + ||t^{1-\rho}f(x,t)||_{C^a_x(\overline{\Omega})}\big], \end{equation} where $C_{\rho, q, B}$ is a constant depending only on $\rho$, $q$ and $B$. \end{thm} \begin{proof} Let us begin the proof of inequality (\ref{se}) by establishing an estimate for $\omega_{xx}(x,t)$, which we then use together with equation (\ref{prob2}). To this end, we have from (\ref{AEA}) \[ \bigg|\bigg|\frac{\partial^2}{\partial x^2} E_\rho (i t^\rho A)\varphi\bigg|\bigg|_{C(\overline{\Omega})}\leq C ||\varphi_{xx}||_{C^a[0,\pi]}. \] This estimate together with (\ref{st1}) implies \[ ||t^{1-\rho}F_{xx}(x,t)||_{C(\overline{\Omega})}\leq C ||\varphi_{xx}||_{C^a[0,\pi]}+ C_{a, q, B}\big[ ||t^{1-\rho}\psi||_{C[0, T]}+||t^{1-\rho}f(x,t)||_{C^a_x(\overline{\Omega})}\big]. \] Then, using equality (\ref{VE2}) and inequality (\ref{st2}), we get \begin{equation}\label{Aomega} ||t^{1-\rho}\omega_{xx}(x,t)||_{C(\overline{\Omega})}\leq C ||\varphi_{xx}||_{C^a[0,\pi]}+ C_{a, q, B}\big[ ||t^{1-\rho}\psi||_{C[0, T]}+||t^{1-\rho}f(x,t)||_{C^a_x(\overline{\Omega})} + ||t^{1-\rho}\omega(x,t)||_{C(\overline{\Omega})}\big]. \end{equation} As a result, we have obtained an estimate for $\omega_{xx}(x,t)$ in terms of $\omega(x,t)$.
To estimate $||t^{1-\rho}\omega(x,t)||_{C(\overline{\Omega})}$, we will proceed as follows. Apply estimates (\ref{E}) and (\ref{EAint}) to get \[ ||t^{1-\rho} F(x,t)||_{C(\overline{\Omega})}\leq ||\varphi||_{C^a[0,\pi]}+\frac{T^\rho}{\rho}\big[ C_{q, B}||q''||_{C^a[0,\pi]}||t^{1-\rho} \psi||_{C[ 0,T]} +||t^{1-\rho} f||_{C^a_x(\overline{\Omega})}\big]. \] Again by estimate (\ref{E}) we have \[ \bigg|\bigg|t^{1-\rho}\int\limits_0^t (t-s)^{\rho-1} E_{\rho,\rho} (i(t-s)^\rho A) \frac{ q''(x)}{B[ q(x)]} B[\omega(\cdot,s)] ds\bigg|\bigg|_{C[0,\pi]}\leq \] \[ C_{q, B} ||q''||_{C^a[0,\pi]}\int\limits_0^t (t-s)^{\rho-1}||\omega(x,s)||_{C[0,\pi]}ds. \] Therefore, from equation (\ref{VE2}) we obtain the estimate \[ ||t^{1-\rho}\omega(x,t)||_{C[0,\pi]}\leq ||\varphi||_{C^a[0,\pi]}+C_{q,\rho, B} [||t^{1-\rho}\psi||_{C[0,T]}+||t^{1-\rho}f||_{C^a_x(\overline{\Omega})}]+ \] \[ C_{q,B}\int\limits_0^t (t-s)^{\rho-1}s^{\rho-1}||s^{1-\rho}\omega(x,s)||_{C[0,\pi]}ds \] for all $t\in [0, T]$. Finally, the Gronwall inequality (\ref{Gron}) implies \[ ||t^{1-\rho}\omega(x,t)||_{C(\overline{\Omega})}\leq C_{q, \rho, B}\big[||\varphi||_{C^a[0,\pi]}+||t^{1-\rho}\psi||_{C[0,T]}+||t^{1-\rho}f||_{C^a_x(\overline{\Omega})}\big]. \] We substitute this estimate into (\ref{Aomega}) and apply $||\varphi||_{C^a[0,\pi]}\leq C ||\varphi_{xx}||_{C^a[0,\pi]}$ to get \[ ||t^{1-\rho} \omega_{xx}||_{C(\overline{\Omega})}\leq C_{\rho, q, B} \big[ ||\varphi_{xx}||_{C^a[0,\pi]} + ||t^{1-\rho}\psi||_{C[0,T]} + ||t^{1-\rho}f||_{C^a_x(\overline{\Omega})}\big]. \] To obtain estimate (\ref{se}), it remains to note that \[ i\partial_t^\rho \omega(x,t) = \omega_{xx}(x,t) - i \mu(t) q''(x) + f(x,t) \] and use the estimate \[ ||t^{1-\rho}\mu||_{C[0, T]}\leq C_{q,B}\big[ ||t^{1-\rho}\psi||_{C[0, T]} + ||t^{1-\rho} \omega||_{C(\overline{\Omega})}\big], \] which follows from equality (\ref{mu}) and the conditions of Theorem \ref{main}.
\end{proof} \textbf{Proof of Theorem \ref{estimate}.} Apply (\ref{p}) to get \[ ||t^{1-\rho}p(t)||_{C[0, T]}\leq C_{q, B}\big[ ||t^{1-\rho}\partial_t^\rho \omega||_{C(\overline{\Omega})}+||t^{1-\rho}\partial_t^\rho \psi||_{C[0, T]}\big]. \] Equations (\ref{u}) and (\ref{prob2a}) imply \[ \partial^\rho_t u(x,t)=\partial^\rho_t \omega(x,t)-i p(t) q(x). \] Hence, from the estimates of $\partial^\rho_t \omega(x,t)$ and $p(t)$, we obtain an estimate for $\partial^\rho_t u(x,t)$. On the other hand, by virtue of equation (\ref{prob1}), we will have \[ -u_{xx}(x,t)=-i \partial_t^\rho u(x,t) + p(t) q(x) +f(x,t). \] Now, to establish the estimate of Theorem \ref{estimate}, it suffices to use the statement of Theorem \ref{estimate2}. \section{Acknowledgement} The authors are grateful to A. O. Ashyralyev for posing the problem and they convey thanks to Sh. A. Alimov for discussions of these results. The authors acknowledge financial support from the Ministry of Innovative Development of the Republic of Uzbekistan, Grant No F-FA-2021-424. \begin{thebibliography}{99} \bibitem{Pskhu} A.~V.~Pskhu, \emph{Uravneniya v chastnykh proizvodnykh drobnogo poryadka (Fractional Partial Differential Equations)}, Moscow: Nauka, 2005. \bibitem{SU} S.~Umarov, \emph{Introduction to Fractional and Pseudo-Differential Equations with Singular Symbols,} Springer, 2015. \bibitem{AOLob} R.~Ashurov, O.~Muhiddinova, \textquotedblleft Initial-boundary value problem for a time-fractional subdiffusion equation with an arbitrary elliptic differential operator,\textquotedblright Lobachevskii Journal of Mathematics {42}(3), 517--525 (2021). \bibitem{Ashyr1} A.~Ashyralyev, M.~Urun, \textquotedblleft Time-dependent source identification problem for the Schr\"odinger equation with nonlocal boundary conditions,\textquotedblright In: AIP Conf. Proc. 2183, Art. 070016 (2019).
\bibitem{Ashyr2} A.~Ashyralyev, M.~Urun, \textquotedblleft On the Crank-Nicolson difference scheme for the time-dependent source identification problem,\textquotedblright Bulletin of the Karaganda University. Mathematics {102}(2), 35--44 (2021). \bibitem{Ashyr3} A.~Ashyralyev, M.~Urun, \textquotedblleft Time-dependent source identification Schr\"odinger type problem,\textquotedblright International Journal of Applied Mathematics {34}(2), 297--310 (2021). \bibitem{Hand1} Y.~Liu, Z.~Li, M.~Yamamoto, \textquotedblleft Inverse problems of determining sources of the fractional partial differential equations,\textquotedblright Handbook of Fractional Calculus with Applications, V.~2, J.~A.~T.~Machado, Ed., De Gruyter, {2019}; 411--430. \bibitem{Kab1} S.~I.~Kabanikhin, \emph{Inverse and Ill-Posed Problems. Theory and Applications,} (De Gruyter, 2011). \bibitem{prilepko} A.~I.~Prilepko, D.~G.~Orlovsky, I.~A.~Vasin, \emph{Methods for Solving Inverse Problems in Mathematical Physics,} Marcel Dekker, New York, {2000}. \bibitem{Yama11} K.~Sakamoto and M.~Yamamoto, \textquotedblleft Initial value boundary value problems for fractional diffusion-wave equations and applications to some inverse problems,\textquotedblright J. Math. Anal. Appl. {382}, 426--447 (2011). \bibitem{Niu} P.~Niu, T.~Helin, Z.~Zhang, \textquotedblleft An inverse random source problem in a stochastic fractional diffusion equation,\textquotedblright Inverse Problems, {2020}, V.~36, No 4, Art. 045002. \bibitem{MS} M.~Slodicka, \textquotedblleft Uniqueness for an inverse source problem of determining a space-dependent source in a non-autonomous time-fractional diffusion equation,\textquotedblright Fract. Calculus and Appl. Anal. {2020}, V.~23, N 6, P.~1703--1711, DOI: 10.1515/fca-2020-0084. \bibitem{MS1} M.~Slodicka, K.~Siskova, K.~Van~Bockstal, \textquotedblleft Uniqueness for an inverse source problem of determining a space dependent source in a time-fractional diffusion equation,\textquotedblright Appl. Math.
Letters, {2019}, 91, 15--21. \bibitem{12} Y.~Zhang, X.~Xu, Inverse source problem for a fractional differential equation, Inverse Prob. {2011}, V.~27, N~3, P.~31--42. \bibitem{14} M.~Ismailov, I.~M.~Cicek, \textquotedblleft Inverse source problem for a time-fractional diffusion equation with nonlocal boundary conditions,\textquotedblright Applied Mathematical Modelling, {2016}, V.~40, P.~4891--4899. \bibitem{15} M.~Kirane, A.~S.~Malik, \textquotedblleft Determination of an unknown source term and the temperature distribution for the linear heat equation involving fractional derivative in time,\textquotedblright Applied Mathematics and Computation, {2011}, V.~218, P.~163--170. \bibitem{16} M.~Kirane, B.~Samet, B.~T.~Torebek, \textquotedblleft Determination of an unknown source term and the temperature distribution for the subdiffusion equation at the initial and final data,\textquotedblright Electronic Journal of Differential Equations, {2017}, V.~217, P.~1--13. \bibitem{17} H.~T.~Nguyen, D.~L.~Le, V.~T.~Nguyen, \textquotedblleft Regularized solution of an inverse source problem for a time fractional diffusion equation,\textquotedblright Applied Mathematical Modelling, {2016}, V.~40, P.~8244--8264. \bibitem{Tor} B.~T.~Torebek, R.~Tapdigoglu, \textquotedblleft Some inverse problems for the nonlocal heat equation with Caputo fractional derivative,\textquotedblright Mathematical Methods in Applied Sciences, {40}, 6468--6479 (2017). \bibitem{AF1} R.~Ashurov, Yu.~Fayziev, \textquotedblleft Determination of fractional order and source term in a fractional subdiffusion equation,\textquotedblright https://www.researchgate.net/publication/354997348 \bibitem{20} Z.~Li, Y.~Liu, M.~Yamamoto, \textquotedblleft Initial-boundary value problem for multi-term time-fractional diffusion equation with positive constant coefficients,\textquotedblright Applied Mathematics and Computation, {2015}, V.~257, P.~381--397.
\bibitem{21} W.~Rundell, Z.~Zhang, \textquotedblleft Recovering an unknown source in a fractional diffusion problem,\textquotedblright Journal of Computational Physics, {2018}, V.~368, P.~299--314. \bibitem{22} N.~A.~Asl, D.~Rostamy, \textquotedblleft Identifying an unknown time-dependent boundary source in time-fractional diffusion equation with a non-local boundary condition,\textquotedblright Journal of Computational and Applied Mathematics, {2019}, V.~335, P.~36--50. \bibitem{23} L.~Sun, Y.~Zhang, T.~Wei, \textquotedblleft Recovering the time-dependent potential function in a multi-term time-fractional diffusion equation,\textquotedblright Applied Numerical Mathematics, {2019}, V.~135, P.~228--245. \bibitem{24} S.~A.~Malik, S.~Aziz, \textquotedblleft An inverse source problem for a two parameter anomalous diffusion equation with nonlocal boundary conditions,\textquotedblright Computers and Mathematics with Applications, {2017}, V.~3, P.~7--19. \bibitem{25} M.~Ruzhansky, N.~Tokmagambetov, B.~T.~Torebek, \textquotedblleft Inverse source problems for positive operators. I: Hypoelliptic diffusion and subdiffusion equations,\textquotedblright J. Inverse Ill-Posed Probl., {2019}, V.~27, P.~891--911. \bibitem{AODif} R.~Ashurov, O.~Muhiddinova, \textquotedblleft Inverse problem of determining the heat source density for the subdiffusion equation,\textquotedblright Differential Equations, {2020}, Vol. 56, No. 12, pp. 1550--1563. \bibitem{13} K.~M.~Furati, O.~S.~Iyiola, M.~Kirane, \textquotedblleft An inverse problem for a generalized fractional diffusion,\textquotedblright Applied Mathematics and Computation, {2014}, V.~249, P.~24--31. \bibitem{KirSal} M.~Kirane, A.~M.~Salman, A.~Mohammed Al-Gwaiz, \textquotedblleft An inverse source problem for a two dimensional time fractional diffusion equation with nonlocal boundary conditions,\textquotedblright Math. Meth. Appl.
Sci., {2012}, DOI: 10.1002/mma.2661. \bibitem{Ali} A.~Muhammad, A.~M.~Salman, \textquotedblleft An inverse problem for a family of time fractional diffusion equations,\textquotedblright Inverse Problems in Science and Engineering, {2016}, 25:9, P.~1299--1322, DOI: 10.1080/17415977.2016.1255738. \bibitem{Saima} Zh.~Shuang, R.~Saima, R.~Asia, K.~Khadija, M.~A.~Abdullah, \textquotedblleft Initial boundary value problems for a multi-term time fractional diffusion equation with generalized fractional derivatives in time,\textquotedblright AIMS Mathematics, {2021}, V.~6, N 11, 12114--12132. doi: 10.3934/math.2021703. \bibitem{AFFF} R.~Ashurov, Y.~Fayziev, \textquotedblleft On the Nonlocal Problems in Time for Time-Fractional Subdiffusion Equations,\textquotedblright Fractal Fract. 2022, 6, 41. https://doi.org/10.3390/fractalfract6010041. \bibitem{AF2} R.~Ashurov, Yu.~Fayziev, \textquotedblleft Uniqueness and existence for inverse problem of determining an order of time-fractional derivative of subdiffusion equation,\textquotedblright Lobachevskii Journal of Mathematics, {2021}, V.~42, N 3, 508--516. \bibitem{AF3} R.~Ashurov, Yu.~Fayziev, \textquotedblleft Inverse problem for determining the order of the fractional derivative in the wave equation,\textquotedblright Mathematical Notes, {2021}, 110:6, 842--852. \bibitem{Dzh66} M.~M.~Dzherbashian [=Djrbashian], \emph{Integral Transforms and Representation of Functions in the Complex Domain}, (Moscow: Nauka, in Russian, 1966). \bibitem{Gor} R.~Gorenflo, A.~A.~Kilbas, F.~Mainardi, S.~V.~Rogozin, \emph{Mittag-Leffler functions, related topics and applications}, (Springer, 2014). \bibitem{AshCab} R.~Ashurov, A.~Cabada and B.~Turmetov, \textquotedblleft Operator method for construction of solutions of linear fractional differential equations with constant coefficients,\textquotedblright Fractional Calculus Appl. Anal., {1}, 229--252, 2016.
\bibitem{AshCab2} R.~R.~Ashurov, Yu.~E.~Fayziev, \textquotedblleft On construction of solutions of linear fractional differential equations with constant coefficients and the fractional derivatives,\textquotedblright Uzb. Math. Journ., 2017, 3, P.~3--21 [in Russian]. \bibitem{Zyg} A.~Zygmund, \emph{Trigonometric series, V. 1}, (Cambridge, 1959). \bibitem{Kil} A.~A.~Kilbas, H.~M.~Srivastava, J.~J.~Trujillo, \emph{Theory and applications of fractional differential equations}, (Elsevier, North-Holland, Mathematics studies, 2006). \end{thebibliography} \end{document}
2205.03405v1
http://arxiv.org/abs/2205.03405v1
On the uniqueness of solutions of two inverse problems for the subdiffusion equation
\documentclass{amsart} \numberwithin{equation}{section} \textwidth 135mm \textheight 220mm \oddsidemargin 10mm \evensidemargin 10mm \baselineskip+6pt \def\a{\alpha} \def\b{\beta} \def\g{\gamma} \def\G{\Gamma} \def\d{\delta} \def\D{\Delta} \def\ve{\varepsilon} \def\e{\epsilon} \def\l{\lambda} \def\LL{\Lambda} \def\k{\kappa} \def\m{\mu} \def\p{\psi} \def\n{\nu} \def\r{\rho} \def\s{\sigma} \def\S{\Sigma} \def\t{\tau} \def\f{\varphi} \def\F{\Phi} \def\v{\phi} \def\th{\theta} \def\Th{\Theta} \def\w{\omega} \def\Om{\Omega} \def\z{\zeta} \pagestyle{myheadings} \thispagestyle{empty} \markboth{\small{Ravshan Ashurov and Yusuf Fayziev}}{\small{On the uniqueness of solutions of two inverse problems for the subdiffusion equation}} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{defin}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{prob}[thm]{Problem} \newcommand{\ds}{\displaystyle} \def\di{\mathop{\rm d}} \def\sp{\mathop{\rm sp}} \def\spec{\mathop{\rm spec}} \def\id{\mathop{\rm id}} \def\tr{\mathop{\rm Tr}} \newcommand{\ty}[1]{\mathop{\rm {#1}}} \newcommand{\nn}{\nonumber} \def\id{{\bf 1}\!\!{\rm I}} \begin{document} \begin{center} \textbf{{\large {\ ON THE UNIQUENESS OF SOLUTIONS OF TWO INVERSE PROBLEMS FOR THE SUBDIFFUSION EQUATION}}}\\[0pt] \medskip \textbf{Ravshan Ashurov$^{1}$ and Yusuf Fayziev$^{2}$}\\[0pt] \medskip \textit{\ $^{1}$ Institute of Mathematics, Academy of Science of Uzbekistan} \textit{[email protected]\\[0pt] $^{2}$ National University of Uzbekistan} \textit{[email protected]\\[0pt] } \end{center} \textbf{Abstract}: Let $A$ be an arbitrary positive selfadjoint operator, defined in a separable Hilbert space $H$. 
The inverse problems of determining the right-hand side of the equation and the function $\varphi$ in the non-local boundary value problem $D_t^\rho u(t) + Au(t) = f(t)$ ($0<\rho<1$, $0<t\leq T$), $u(\xi)=\alpha u(0)+\varphi$ ($\alpha$ is a constant and $0<\xi\leq T$), are considered. The operator $D_t^\rho$ on the left-hand side of the equation denotes the Caputo derivative. For both inverse problems, a condition of the form $u(\xi_1)=V$ is taken as the over-determination condition. Existence and uniqueness theorems for solutions of the problems under consideration are proved. The influence of the constant $\alpha$ on the existence and uniqueness of a solution to these problems is investigated. An interesting effect is observed: for certain values of $\alpha$ the uniqueness of the solution $u(t)$ of the forward problem fails, while for the same values of $\alpha$ the solution of the inverse problem is unique. \textbf{Keywords}: Non-local problems, the Caputo derivatives, subdiffusion equation, inverse problems. \section{Introduction} Let $A: H\rightarrow H$ be an arbitrary unbounded positive selfadjoint operator in a separable Hilbert space $H$ with the scalar product $(\cdot, \cdot)$ and the norm $||\cdot||$. Let $A$ have a complete in $H$ system of orthonormal eigenfunctions $\{v_k\}$ and a countable set of positive eigenvalues $\lambda_k:$ $0<\lambda_1\leq\lambda_2\leq\cdots\rightarrow +\infty$. We will also assume that the sequence $\{\lambda_k\}$ has no finite limit points. For a vector-valued function (or simply a function) $h: \mathbb{R}_+\rightarrow H$, we define the Caputo fractional derivative of order $0<\rho< 1$ as (see, e.g. \cite{Liz}) \[ D_t^\rho h(t)=\frac{1}{\Gamma (1-\rho)}\int\limits_0^t\frac{h'(\xi)}{(t-\xi)^{\rho}} d\xi, \quad t>0, \] provided the right-hand side exists. Here $\Gamma(\sigma)$ is Euler's gamma function. Finally, let $C((a,b); H)$ stand for the set of continuous functions $u(t)$ of $t\in (a,b)$ with values in $H$.
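In particular, for a power function $h(t)=t^\beta$, $\beta>0$, this definition gives $D_t^\rho t^\beta=\frac{\Gamma(\beta+1)}{\Gamma(\beta+1-\rho)}\,t^{\beta-\rho}$. As a sanity check of the definition, the weakly singular integral can be discretized by the standard L1 (piecewise-linear) scheme; the Python sketch below is purely illustrative (the grid size and the test functions are our assumptions, not taken from the text):

```python
import math

def caputo_l1(h, t, rho, n=2000):
    # L1 scheme for the Caputo derivative of order 0 < rho < 1:
    #   D_t^rho h(t) = (1/Gamma(1-rho)) * int_0^t h'(xi) (t-xi)^(-rho) dxi.
    # h is replaced by its piecewise-linear interpolant on a uniform grid,
    # and the kernel (t-xi)^(-rho) is integrated exactly on each subinterval.
    acc = 0.0
    for j in range(n):
        x0 = t * j / n
        x1 = t * (j + 1) / n
        slope = (h(x1) - h(x0)) / (x1 - x0)
        # exact value of int_{x0}^{x1} (t - xi)^(-rho) dxi, up to the
        # factor 1/(1-rho) absorbed into Gamma(2-rho) below
        acc += slope * ((t - x0) ** (1 - rho) - (t - x1) ** (1 - rho))
    return acc / math.gamma(2 - rho)
```

The scheme is exact for linear $h$: for $h(t)=t$ it returns $t^{1-\rho}/\Gamma(2-\rho)$ up to rounding, in agreement with the power-function formula above; for smoother $h$ it converges as the grid is refined.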
The main object studied in this work is the following non-local boundary value problem: \begin{equation}\label{prob1} \left\{ \begin{aligned} &D_t^\rho u(t) + Au(t) = f(t),\quad 0< t \leq T;\\ &u(\xi_0) = \alpha u(0)+ \varphi, \quad 0 < \xi_0 \leq T, \end{aligned} \right. \end{equation} where $f(t) \in C((0,T]; H)$, $\varphi \in H$, $\alpha$ is a constant, and $\xi_0$ is a fixed point. This problem is also called \emph{the forward problem}. In the case when $\xi_0=T$ and $\alpha=0$, this problem is called \emph{the backward problem}; it is well studied in the works \cite{Yama10} - \cite{Florida} and \cite{AA1}. If $\alpha=0$ and $\rho = 1$, then we get the classical inverse heat conduction problem with inverse time (\emph{retrospective inverse problem}), which has been studied in detail by various specialists (see, e.g. Chapter 8.2 of \cite{Kab1} and the literature therein). It is well known that in most models described by differential (and pseudodifferential, see e.g., \cite{Umar}) equations, an initial condition is used to select a single solution. However, there are also processes where we have to use non-local conditions, for example, the integral over time intervals (see, e.g. \cite{Pao1} for reaction diffusion equations or \cite{Tuan} for fractional equations), or a connection between the values of the solution at different times, for example, at the initial time and at the final time (see, e.g. \cite{AshSob} - \cite{AshSob1}). It should be noted that non-local conditions model some details of natural phenomena more accurately, since they take into account additional information in the initial conditions. The non-local boundary value problem (\ref{prob1}) for the classical diffusion equation, namely the following problem \begin{equation}\label{probA} \left\{ \begin{aligned} &u'(t) + Au(t) = f(t),\quad 0< t \leq T;\\ &u(\xi_0) =u(0)+\varphi, \quad 0< \xi_0 \leq T, \end{aligned} \right.
\end{equation} has been extensively studied by many researchers (see, e.g. A. O. Ashyralyev et al. \cite{AshSob} - \cite{AshSob1}). As shown in these papers, in contrast to the retrospective inverse problem, problem (\ref{probA}) is coercively solvable in some spaces of differentiable functions. Let us return to the non-local problem (\ref{prob1}). The authors of this paper in their previous work \cite{AF2022} studied in detail the influence of the parameter $\alpha\neq 0$ on the well-posedness of problem (\ref{prob1}). It turned out that the critical values of the parameter $\alpha$ lie in the interval $(0,1)$. In order to formulate the main result of work \cite{AF2022}, we recall the definition of the Mittag-Leffler function $E_{\rho, \mu}(z)$ with two parameters (see, e.g. \cite{PSK}, Chapter 1): \[ E_{\rho, \mu}(z)= \sum\limits_{n=0}^\infty \frac{z^n}{\Gamma(\rho n+\mu)}, \] where $\mu$ is an arbitrary complex number. If the parameter $\mu =1$, then we have the classical Mittag-Leffler function: $ E_{\rho}(z)= E_{\rho, 1}(z)$. Recall (see, e.g. \cite{AF2022}) that $E_\rho(-t)$ is strictly monotonically decreasing for $t>0$ and, moreover, satisfies the estimate \begin{equation}\label{ML} 0< E_\rho(-t) <1, \, t>0. \end{equation} In work \cite{AF2022} it is proved that if $\alpha \in (0,1)$ and $E_\rho(-\lambda_k \xi_0^\rho)\neq \alpha$ for all $k$, then the solution of problem (\ref{prob1}) exists and is unique. But it may turn out that for some eigenvalue $\lambda_{k_0}$ of the operator $A$, with multiplicity $p_0$ (obviously, $p_0$ is a finite number), the equality \begin{equation}\label{critical} E_\rho(-\lambda_{k_0} \xi_0^\rho) = \alpha \end{equation} holds. Then, as proved in \cite{AF2022}, in order for a solution to exist, it is necessary to require the following orthogonality conditions \begin{equation}\label{Or1} (\varphi,v_k)=0, \,\, (f(t), v_k)=0,\,\, \text{for all}\,\, t> 0, \,\, k\in K_0;\,\, K_0=\{k_0, k_0+1,\dots, k_0+p_0-1\}.
\end{equation} It should be noted that in this case the solution is not unique \cite{AF2022}. The paper \cite{AF2022} also studies two inverse problems: that of determining the function $\varphi$ in the non-local condition (\ref{prob1}) and that of determining the source function $f$, i.e. the right-hand side of the equation in (\ref{prob1}) (in the latter case, it is assumed that $f$ does not depend on $t$). It is proved that if $\alpha \notin (0,1)$, then the solutions of both inverse problems exist and are unique. The main goal of this paper is to study these inverse problems for the critical values of the parameter $\alpha \in (0,1)$. \begin{prob}\label{P1} Let $\alpha \in (0,1)$. Find a pair $\{u(t), f \}$ of a function $u(t)\in C([0,T]; H)$ and an element $f \in H$ with the properties $D_t^\rho u(t), Au(t)\in C((0,T]; H)$, satisfying the non-local problem (\ref{prob1}) (note, $f$ does not depend on $t$) and the over-determination condition \begin{equation}\label{ODcon1} u(\xi_1)=V, \quad 0 < \xi_1 <\xi_0, \end{equation} where $V$ is a given element of $H$. \end{prob} Note that if $\xi_1=\xi_0$, then the non-local condition in (\ref{prob1}) coincides with the Cauchy condition $u(0)=\varphi_1$ (note $\alpha \neq 0$). In this case, this inverse problem was studied in \cite{RuzTor}. It will be shown that if the reverse inequality $\xi_1 > \xi_0$ holds, then the solution may not be unique. \begin{prob}\label{P2} Let $\alpha \in (0,1)$. Find a pair $\{u(t), \varphi \}$ of a function $u(t)\in C([0,T]; H)$ and an element $\varphi \in H$ with the properties $D_t^\rho u(t), Au(t)\in C((0,T]; H)$, satisfying the non-local problem (\ref{prob1}) and the over-determination condition \begin{equation}\label{ODcon2} u(\xi_2)=W, \quad 0 < \xi_2 \leq T, \,\, \xi_2 \neq \xi_0, \end{equation} where $W$ is a given element of $H$.
\end{prob} If $\xi_2 = \xi_0$, then the non-local condition $u(\xi_0) = \alpha u(0) + \varphi$ coincides with the Cauchy condition $u(0)=\varphi_1$ (note $\alpha \neq 0$) and we have the backward problem, considered in \cite{Yama10} - \cite{Florida}. Everywhere below, for a vector-valued function $h(t)\in H$ (which may or may not depend on $t$), the symbol $h_k(t)$ denotes its Fourier coefficients with respect to the system of eigenfunctions $\{v_k\}$: $h_k(t)=(h(t), v_k)$. \begin{thm}\label{thP1} Let $\varphi, V \in D(A)$ and let the orthogonality conditions (\ref{Or1}) be satisfied. Then the inverse Problem \ref{P1} has a unique solution $\{u(t), f \}$, and this solution has the following form \begin{equation}\label{fP1} f =\sum\limits_{k\notin K_0} \bigg[ \frac{\alpha-E_{\rho}(-\lambda_k \xi_0^{\rho})}{ E_{\rho}(-\lambda_k \xi_1^{\rho})\xi_0^\rho E_{\rho,\rho+1}(-\lambda_k\xi_0^{\rho})+\xi_1^\rho E_{\rho,\rho+1}(-\lambda_k \xi_1^{\rho})[\alpha-E_{\rho}(-\lambda_k \xi_0^{\rho})]}\, V_k+ \end{equation} $$+\frac{E_{\rho}(-\lambda_k \xi_1^{\rho})}{ E_{\rho}(-\lambda_k \xi_1^{\rho})\xi_0^\rho E_{\rho,\rho+1} (-\lambda_k\xi_0^{\rho})+\xi_1^\rho E_{\rho,\rho+1}(-\lambda_k \xi_1^{\rho})[\alpha-E_{\rho}(-\lambda_k \xi_0^{\rho})]}\,\varphi_k \bigg] v_k, $$ \begin{equation}\label{uP1} u(t)= \sum\limits_{k\notin K_0} \left[\frac{E_{\rho}(-\lambda_k t^{\rho})}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha} \, [{\varphi_k-f_k \xi_0^\rho E_{\rho,\rho+1}(-\lambda_k \xi_0^{\rho}) }] +f_k t^\rho E_{\rho,\rho+1}(-\lambda_k t^{\rho})\right] v_k+ \end{equation} \[ +\sum\limits_{k\in K_0} \frac{E_\rho(-\lambda_k t^\rho)\,V_k}{E_\rho(-\lambda_k \xi_1^\rho)}\, v_k. \] \end{thm} Note that, due to the orthogonality conditions (\ref{Or1}), all Fourier coefficients $f_k$ vanish for $k\in K_0$. Obviously, $K_0$ can also be an empty set; in this case the sum $\sum_{k\notin K_0}$ is the same as $\sum_{k=1}^\infty$. Let $\tau$ be an arbitrary real number.
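As a numerical sanity check of the formulas in Theorem \ref{thP1} (a sketch, not part of the proof), one can truncate the Mittag-Leffler series and, for a single non-critical mode $k\notin K_0$, compute $f_k$ from (\ref{fP1}) and $u_k(t)$ from (\ref{uP1}) and verify the non-local and over-determination conditions. All numerical values below are illustrative choices, not data from the paper; the 80-term truncation is adequate for the moderate arguments used here.

```python
from math import gamma

def ml(z, rho, mu=1.0, n_terms=80):
    """Truncated series for the two-parameter Mittag-Leffler function E_{rho,mu}(z)."""
    return sum(z**n / gamma(rho * n + mu) for n in range(n_terms))

# illustrative parameters for one mode k (not taken from the paper)
rho, lam, alpha = 0.5, 2.0, 0.3
xi0, xi1, phi, V = 1.0, 0.4, 0.7, 0.25

a = lambda t: t**rho * ml(-lam * t**rho, rho, rho + 1.0)  # t^rho E_{rho,rho+1}(-lam t^rho)
b = lambda t: ml(-lam * t**rho, rho)                      # E_rho(-lam t^rho)

# Fourier coefficient f_k from (fP1)
denom = b(xi1) * a(xi0) + a(xi1) * (alpha - b(xi0))
f_k = ((alpha - b(xi0)) * V + b(xi1) * phi) / denom

# u_k(t) from (uP1)
def u(t):
    return b(t) / (b(xi0) - alpha) * (phi - f_k * a(xi0)) + f_k * a(t)

assert abs(u(xi1) - V) < 1e-9                        # over-determination condition
assert abs(u(xi0) - (alpha * u(0.0) + phi)) < 1e-9   # non-local condition
```

Both conditions hold to near machine precision, consistent with the algebra carried out in Section 2.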
In order to formulate a result on Problem \ref{P2} we introduce the power of the operator $A$, acting in $H$ as $$ A^\tau h= \sum\limits_{k=1}^\infty \lambda_k^\tau h_k v_k, $$ where $h_k$ are the Fourier coefficients of $h\in H$. Obviously, the domain of this operator has the form $$ D(A^\tau)=\{h\in H: \sum\limits_{k=1}^\infty \lambda_k^{2\tau} |h_k|^2 < \infty\}. $$ For elements of $D(A^\tau)$ we introduce the norm \[ ||h||^2_\tau=\sum\limits_{k=1}^\infty \lambda_k^{2\tau} |h_k|^2 = ||A^\tau h||^2, \] and, equipped with this norm, $D(A^\tau)$ becomes a Hilbert space. \begin{thm}\label{thP2} Let $W \in D(A)$, $f \in C([0,T];D(A^\varepsilon))$ for some $\varepsilon\in (0,1)$ and let the orthogonality conditions (\ref{Or1}) be satisfied. Then the inverse Problem \ref{P2} has a unique solution $\{u(t), \varphi\}$, and this solution has the form \begin{equation}\label{fiP2} \varphi= \sum\limits_{k\notin K_0} \left[\frac{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha}{E_{\rho}(-\lambda_k \xi_2^{\rho})} [W_k -\omega_k(\xi_2)]+\omega_k(\xi_0)\right] v_k, \end{equation} \begin{equation}\label{uP2} u(t)= \sum\limits_{k\notin K_0} \left[\frac{\varphi_k-\omega_k (\xi_0)}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha} \, E_{\rho}(-\lambda_k t^{\rho})+\omega_k(t)\right] v_k+ \end{equation} \[ +\sum\limits_{k\in K_0}\frac{E_\rho(-\lambda_k t^\rho)\,W_k}{E_\rho(-\lambda_k \xi_2^\rho)}\, v_k, \] where $$ \omega_k(t)=\int\limits_{0}^t\eta^{\rho-1}E_{\rho,\rho}(-\lambda_k \eta^{\rho})f_k(t-\eta)d\eta. $$ \end{thm} Note that for $k\in K_0$ all Fourier coefficients $\varphi_k$ are equal to zero by virtue of the orthogonality conditions (\ref{Or1}). If $K_0$ is empty, the sum $\sum_{k\notin K_0}$ coincides with $\sum_{k=1}^\infty$. \begin{rem}\label{unique} It should be specially noted that, as was proved in \cite{AF2022} and noted above, when equality (\ref{critical}) holds, the solution to the forward problem is not unique.
But it turns out that both inverse problems have a unique solution even under condition (\ref{critical}). \end{rem} To the best of our knowledge, the inverse problem of determining the function $\varphi$ in the non-local condition was previously discussed only in the paper \cite{YulKad}. The authors considered this problem for the subdiffusion equation with the Caputo fractional derivative, the elliptic part of which is a two-variable differential expression with constant coefficients. On the other hand, it is not difficult to model a real process in which exactly such an inverse problem arises. For example, in a temperature distribution process, the initial and final temperatures may not be specified and need not be found; instead, information about the difference between the initial and final temperatures is sought. As for inverse problems of determining the source function $f$ from a final-time observation, they are well studied, both for classical partial differential equations and for equations of fractional order. Many theoretical studies have been published. Kabanikhin \cite{Kab1} and Prilepko, Orlovsky and Vasin \cite{prilepko} should be mentioned as classical monographs for integer-order equations. As for fractional differential equations, it is possible to construct theories parallel to those of \cite{Kab1}, \cite{prilepko}, and work in this direction is ongoing. In this note, we pay attention to only some of them, referring interested readers to the review paper \cite{Hand1}. We also note the works \cite{AF2022}, \cite{FurKir, LiYam, Sun}, which contain reviews of recent work in this direction. We note at once that no method has yet been proposed for recovering a right-hand side given in the general form $f(x,t)$. Known results deal with a separated source term $f(x, t) = q(t)p(x)$. The appropriate choice of the over-determination condition depends on whether the unknown is $q(t)$ or $p(x)$.
Quite a lot of papers are devoted to the case considered in this article, namely $q(t)\equiv 1$ with unknown $p(x)$. Subdiffusion equations whose elliptic part $A$ is an ordinary differential expression are considered, for example, in \cite{FurKir,KirMa, KirTor,TorTap}. The authors of \cite{LiYam, Sun} studied the inverse problem for multi-term subdiffusion equations in which the elliptic part is either the Laplace operator or a second-order operator. Article \cite{RuzTor} studied the inverse problem for the subdiffusion equation (\ref{prob1}) with the Cauchy condition. The recent articles \cite{AODif} - \cite{AOLob} are devoted to the inverse problem for the subdiffusion equation with Riemann-Liouville derivatives. In \cite{KirSal} non-self-adjoint differential operators (with non-local boundary conditions) were taken as $A$, and the solutions of the inverse problem were found in the form of a biorthogonal series. In their previous work \cite{AF1}, the authors of this article considered an inverse problem of simultaneously determining the order of the Riemann-Liouville fractional derivative and the source function in subdiffusion equations. Using the classical Fourier method, the authors proved the uniqueness and existence of a solution to this inverse problem. It should be noted that in all of the listed works, Cauchy conditions in time are considered (an exception is the work \cite{Saima}, where an integral condition is set with respect to the variable $t$). In the paper \cite{AF2022}, to the best of our knowledge, an inverse problem for the subdiffusion equation with a non-local condition in time was considered for the first time. The most difficult case to study is the one where the function $q(t)$ is unknown (see the survey paper \cite{Hand1} and \cite{Yama11}). In inverse problems of this type, the condition $u(x_0,t)=u_0(t)$ is taken as an additional condition. The authors studied mainly the uniqueness of the solution of the inverse problem.
In this regard, we note the recent papers \cite{ASh}, \cite{ASh2}, where the inverse problem of determining a right-hand side of the form $q(t)$ was studied for the Schrodinger equation. Taking over-determination conditions of a rather general form $Bu(\cdot, t)$, where $B: H\rightarrow \mathbb{R}$ is a linear bounded functional, the authors proved both the existence and uniqueness of a solution to the inverse problem. The papers \cite{AF2} - \cite{AF3} deal with the inverse problem of determining the order of the fractional derivative in the subdiffusion equation and in the wave equation, respectively. \section{Inverse Problem \ref{P1}} \textbf{2.1. Existence.} Assume that all the conditions of Theorem \ref{thP1} are satisfied, i.e. $\varphi, V \in D(A)$ and the orthogonality conditions (\ref{Or1}) hold. Let us first prove the existence of a solution and that the solution has the form (\ref{fP1}) and (\ref{uP1}). The fact that these series converge in the norm of $H$ and that in (\ref{uP1}) the operators $D^\rho_t$ and $A$ can be applied to the series term by term was proved in the authors' work \cite{AF2022}. Therefore, it suffices to show that the series (\ref{fP1}) and (\ref{uP1}) formally satisfy the equation and the non-local condition in (\ref{prob1}), and the over-determination condition (\ref{ODcon1}). In order to do this, we rewrite the series (\ref{fP1}) and (\ref{uP1}) in the form $f=\sum f_k v_k$ and $u(t)=\sum u_k(t) v_k$. Now, according to the Fourier method, it suffices to show that the unknown coefficients $f_k$ and $u_k(t)$ satisfy the equation \begin{equation}\label{Eq1} D_t^\rho u_k(t) + \lambda_k u_k(t) = f_k, \end{equation} the non-local condition \begin{equation}\label{Nc1} u_k(\xi_0) = \alpha u_k(0) + \varphi_k, \end{equation} and finally the over-determination condition \begin{equation}\label{Ovc1} u_k(\xi_1) = V_k, \end{equation} for all $k\geq 1$. Let us show that $u_k(t)$ and $f_k$ satisfy equation (\ref{Eq1}). Let $k\notin K_0$.
We have $u_k(t)=u_k^1(t)+ u_k^2(t)$, where \[ u_k^1(t)=\frac{E_{\rho}(-\lambda_k t^{\rho})}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha} \, [{\varphi_k-f_k \xi_0^\rho E_{\rho,\rho+1}(-\lambda_k \xi_0^{\rho}) }] \] and \[ u_k^2(t)=f_k t^\rho E_{\rho,\rho+1}(-\lambda_k t^{\rho}). \] It is known (see, e.g. \cite{Gor}, p. 174) that $u_k^1(t)$ is a solution to the homogeneous equation (\ref{Eq1}) with the initial condition $$ u_k^1(0)=\frac{\varphi_k-f_k \xi_0^\rho E_{\rho,\rho+1}(-\lambda_k \xi_0^{\rho}) }{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha}. $$ It is also known (see ibid.) that the function $$ \omega_k(t)=\int\limits_{0}^t\eta^{\rho-1}E_{\rho,\rho}(-\lambda_k \eta^{\rho})f_k(t-\eta)d\eta $$ from Theorem \ref{thP2} is a solution to equation (\ref{Eq1}) with the right-hand side $f_k(t)$ and with the initial condition $\omega_k(0)=0$. If in this formula $f_k(t)$ does not depend on $t$, then the integral can be rewritten in the form (see e.g. \cite{Gor}, formula (4.4.4)) \[ f_k\int\limits_{0}^t\eta^{\rho-1}E_{\rho,\rho}(-\lambda_k \eta^{\rho})d\eta=f_k\,t^\rho E_{\rho,\rho+1}(-\lambda_k t^{\rho}). \] Therefore, the function $u_k^2(t)$ is a solution to the inhomogeneous equation (\ref{Eq1}) with the initial condition $ u_k^2(0)=0$. Now suppose that $k\in K_0$. Then the function \[ u_k(t)=\frac{E_\rho(-\lambda_k t^\rho)\,V_k}{E_\rho(-\lambda_k \xi_1^\rho)} \] is a solution of the homogeneous equation (\ref{Eq1}) with the initial condition \[ u_k(0)=\frac{V_k}{E_\rho(-\lambda_k \xi_1^\rho)}. \] Thus it is proved that the functions (\ref{fP1}) and (\ref{uP1}) indeed satisfy equation (\ref{Eq1}). It remains to verify the fulfillment of the non-local condition (\ref{Nc1}) and the over-determination condition (\ref{Ovc1}). Let $k\notin K_0$. Since we have already calculated $u_k(0)=u^1_k(0)+u^2_k(0)$, we can write \[ \alpha u_k(0) +\varphi_k=\frac{\varphi_k E_\rho (-\lambda_k \xi_0^\rho)-\alpha f_k \xi_0^\rho E_{\rho, \rho+1} (-\lambda_k \xi_0^\rho)}{E_\rho (-\lambda_k \xi_0^\rho)-\alpha}.
\] On the other hand, according to (\ref{uP1}), $u_k(\xi_0)$ has exactly the same value: \[ u_k(\xi_0) =\frac{\varphi_k E_\rho (-\lambda_k \xi_0^\rho)-\alpha f_k \xi_0^\rho E_{\rho, \rho+1} (-\lambda_k \xi_0^\rho)}{E_\rho (-\lambda_k \xi_0^\rho)-\alpha}. \] Let now $k\in K_0$. Then $\varphi_k=0$ (see (\ref{Or1})) and $E_\rho(-\lambda_k\xi_0^\rho)=\alpha$. Therefore \[ \alpha u_k(0) +\varphi_k=\frac{\alpha V_k}{E_\rho(-\lambda_k \xi_1^\rho)}, \] and \[ u_k(\xi_0)=\frac{E_\rho(-\lambda_k \xi_0^\rho)\,V_k}{E_\rho(-\lambda_k \xi_1^\rho)}=\frac{\alpha V_k}{E_\rho(-\lambda_k \xi_1^\rho)}. \] Thus, the Fourier coefficients of the function $u(t)$, defined by formula (\ref{uP1}), satisfy the non-local condition (\ref{Nc1}) for all $k\geq 1$. Let us check the fulfillment of the over-determination condition (\ref{Ovc1}). Consider again the case $k\notin K_0$. By virtue of condition (\ref{Ovc1}) we obtain: \[ \frac{E_{\rho}(-\lambda_k \xi_1^{\rho})}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha} \, [{\varphi_k-f_k \xi_0^\rho E_{\rho,\rho+1}(-\lambda_k \xi_0^{\rho}) }] +f_k \xi_1^\rho E_{\rho,\rho+1}(-\lambda_k \xi_1^{\rho})=V_k. \] After simple calculations, we get \[ f_k = \frac{\alpha-E_{\rho}(-\lambda_k \xi_0^{\rho})}{ E_{\rho}(-\lambda_k \xi_1^{\rho})\xi_0^\rho E_{\rho,\rho+1}(-\lambda_k\xi_0^{\rho})+\xi_1^\rho E_{\rho,\rho+1}(-\lambda_k \xi_1^{\rho})[\alpha-E_{\rho}(-\lambda_k \xi_0^{\rho})]} V_k + \] $$ +\frac{E_{\rho}(-\lambda_k \xi_1^{\rho})}{ E_{\rho}(-\lambda_k \xi_1^{\rho})\xi_0^\rho E_{\rho,\rho+1}(-\lambda_k\xi_0^{\rho})+\xi_1^\rho E_{\rho,\rho+1}(-\lambda_k \xi_1^{\rho})[\alpha-E_{\rho}(-\lambda_k \xi_0^{\rho})]}\varphi_k, $$ and this coincides with the Fourier coefficients of the function (\ref{fP1}). If $k\in K_0$, then \[ u_k(\xi_1)=\frac{E_\rho(-\lambda_k \xi_1^\rho)\,V_k}{E_\rho(-\lambda_k \xi_1^\rho)} = V_k. \] This completes the proof of the existence of a solution to Problem \ref{P1}. \textbf{2.2.
Uniqueness.} Let us proceed to the proof of the uniqueness of the solution of Problem \ref{P1}. We proceed in the standard way: assuming the existence of two solutions, we show that their difference is zero. Let $\{u_1(t), f_1\}$ and $\{u_2(t), f_2\}$ be two solutions. It is required to prove that $u(t)\equiv u_1(t)-u_2(t)\equiv 0$ and $f\equiv f_1-f_2 = 0$. To determine $u(t)$ and $f$ we have the problem: \begin{equation}\label{eq1} D_t^\rho u(t) + Au(t) = f, \quad t>0; \end{equation} \begin{equation}\label{nl} u(\xi_0) =\alpha u(0), \quad 0 < \xi_0 \leq T, \end{equation} \begin{equation}\label{od} u(\xi_1) =0, \quad 0<\xi_1< \xi_0, \end{equation} where $\xi_0$ and $\xi_1$ are fixed points. Let $u(t)$ be a solution to this problem and $u_k(t)=(u(t), v_k)$. Then, by virtue of equation (\ref{eq1}) and the self-adjointness of the operator $A$, problem (\ref{eq1})-(\ref{od}) becomes the following non-local problem with respect to $u_k(t)$: \begin{equation}\label{prob2} D_t^\rho u_k(t) +\lambda_k u_k(t)=f_k,\quad t>0; \quad u_k(\xi_0) =\alpha u_k(0), \quad u_k(\xi_1)=0. \end{equation} Note that if $k\in K_0$ then $f_k=0$. Let first $k\notin K_0$. Suppose that $f_k$ is known and use the non-local condition to get (see, e.g. \cite{Gor}, p.174) $$ u_k(t)=\frac{f_k \xi_0^{\rho}E_{\rho,\rho+1}(-\lambda_k \xi_0^{\rho})}{\alpha-E_{\rho}(-\lambda_k \xi_0^{\rho})} \, E_{\rho}(-\lambda_k t^{\rho})+f_k t^{\rho}E_{\rho,\rho+1}(-\lambda_k t^{\rho}). $$ Now apply $u_k(\xi_1)=0$ to obtain \begin{equation}\label{xitau} f_k[ \xi_0^{\rho}E_{\rho,\rho+1}(-\lambda_k \xi_0^{\rho}) E_{\rho}(-\lambda_k \xi_1^{\rho})+ \xi_1^{\rho}E_{\rho,\rho+1}(-\lambda_k \xi_1^{\rho})(\alpha-E_{\rho}(-\lambda_k \xi_0^{\rho}))]=0. \end{equation} Let us show that for $\xi_1<\xi_0$ the square bracket is not equal to zero. To do this, we introduce the notation $a(t)=t^{\rho}E_{\rho,\rho+1}(-\lambda_k t^{\rho})>0$ and $b(t)=E_{\rho}(-\lambda_k t^{\rho})>0$. It is known (see, e.g.
\cite{AF2022}) that the function $a(t)$ is increasing and the function $b(t)$ is decreasing. Now let us rewrite the square bracket as \[ c(\xi_0, \xi_1)=a(\xi_0) b(\xi_1) - a(\xi_1) b(\xi_0) + \alpha b(\xi_0). \] Since $a(t)$ is increasing and $b(t)$ is decreasing, for $\xi_1<\xi_0$ we have $a(\xi_0) b(\xi_1) > a(\xi_1) b(\xi_0)$, and hence this expression is strictly positive. Therefore for all $k\notin K_0$ one has $f_k=0$ (see (\ref{xitau})). It should be noted that if the reverse inequality $\xi_1 > \xi_0$ is satisfied, then the first term in the expression for $c(\xi_0, \xi_1)$ becomes less than the second one and, as a result, there exists $\alpha\in (0,1)$ that turns $c(\xi_0, \xi_1)$ into zero. Therefore, in this case $f_k$ may not vanish, i.e., the uniqueness of $f_k$ for these $\alpha$ and $k$ is violated. Let us now consider the case $k\in K_0$. Denote $u_k(0)= b_k$. Then the unique solution to the differential equation in (\ref{prob2}) with this initial condition has the form $u_k(t)=b_k E_\rho (-\lambda_k t^\rho)$ (see, e.g. \cite{Gor}, p.174). Since $E_\rho (-\lambda_k \xi_0^\rho)=\alpha$ in the case under consideration, the non-local condition is satisfied for an arbitrary $b_k$. But the over-determination condition $u_k(\xi_1)=0$ implies $b_k=0$ for $k\in K_0$. Therefore, from the completeness of the system of eigenfunctions $\{v_k\}$, we finally obtain $f=0$ and $u(t) \equiv 0$, as required. The uniqueness and hence Theorem \ref{thP1} is completely proved. \section{Inverse Problem \ref{P2}} \textbf{3.1. Existence.} Suppose that $W \in D(A)$ and $f \in C([0,T];D(A^\varepsilon))$ for some $\varepsilon\in (0,1)$, and let the orthogonality conditions (\ref{Or1}) be satisfied. Let us first show that the series (\ref{fiP2}) and (\ref{uP2}) are indeed a solution to Problem \ref{P2}. The fact that $u(t)\in C([0,T]; H)$ and $\varphi \in H$, and that $D_t^\rho u(t), Au(t)\in C ((0,T]; H)$, was proved in our previous paper \cite{AF2022}. Therefore, it suffices to prove that (\ref{fiP2}) and (\ref{uP2}) together are a formal solution to Problem \ref{P2}.
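The closed-form expression for $\omega_k$ with a constant right-hand side rests on the identity $\int_0^t \eta^{\rho-1}E_{\rho,\rho}(-\lambda_k \eta^{\rho})\,d\eta = t^{\rho}E_{\rho,\rho+1}(-\lambda_k t^{\rho})$ (\cite{Gor}, formula (4.4.4)), which will be used again below. A quick numerical check of it (a sketch with illustrative values of $\rho$, $\lambda_k$ and $t$, not taken from the paper; the substitution $s=\eta^{\rho}$ removes the integrable singularity at $\eta=0$):

```python
from math import gamma

def ml(z, rho, mu=1.0, n_terms=80):
    """Truncated series for the two-parameter Mittag-Leffler function E_{rho,mu}(z)."""
    return sum(z**n / gamma(rho * n + mu) for n in range(n_terms))

rho, lam, t = 0.5, 2.0, 1.0  # illustrative values

# substitution s = eta^rho turns the integral into (1/rho) * int_0^{t^rho} E_{rho,rho}(-lam*s) ds,
# which is smooth and can be handled by the midpoint rule
n = 4000
ds = t**rho / n
lhs = sum(ml(-lam * (i + 0.5) * ds, rho, rho) for i in range(n)) * ds / rho

rhs = t**rho * ml(-lam * t**rho, rho, rho + 1.0)
assert abs(lhs - rhs) < 1e-6
```

The agreement to quadrature accuracy illustrates, for these parameter values, the term-by-term integration of the Mittag-Leffler series behind formula (4.4.4).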
In turn, for this it suffices to show that the Fourier coefficients $\varphi_k$ and $u_k(t)$ of the functions (\ref{fiP2}) and (\ref{uP2}), respectively, satisfy equation (\ref{Eq1}), the non-local condition (\ref{Nc1}) and the over-determination condition \begin{equation}\label{Ovc2} u_k(\xi_2)=W_k. \end{equation} It is not hard to verify that $u_k(t)$ is a solution of equation (\ref{Eq1}). Indeed, let first $k\notin K_0$. We introduce the notation \[ u_k^1(t)= \frac{\varphi_k-\omega_k (\xi_0)}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha} \, E_{\rho}(-\lambda_k t^{\rho}). \] Then $u_k(t)=u_k^1(t)+ \omega_k(t)$. Here $u_k^1(t)$ is the solution of the homogeneous equation (\ref{Eq1}) with the initial condition \[ u_k^1(0)= \frac{\varphi_k-\omega_k (\xi_0)}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha}, \] and $\omega_k(t)$ is the solution of equation (\ref{Eq1}) with zero initial condition (see, e.g. \cite{Gor}, p. 174). If $k\in K_0$, then according to the orthogonality conditions $f_k=0$ and the function \[ u_k(t)= \frac{E_\rho(-\lambda_k t^\rho)\,W_k}{E_\rho(-\lambda_k \xi_2^\rho)} \] is a solution of the homogeneous equation (\ref{Eq1}) with the initial condition \[ u_k(0)= \frac{W_k}{E_\rho(-\lambda_k \xi_2^\rho)}. \] Thus we have shown that $u_k(t)$ is a solution of equation (\ref{Eq1}). Let us check the non-local condition (\ref{Nc1}). Consider first the case $k\notin K_0$. We have \[ \alpha u_k(0) +\varphi_k=\alpha\frac{\varphi_k-\omega_k (\xi_0)}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha}+\varphi_k.
\] On the other hand, \[ u_k(\xi_0)=\frac{\varphi_k-\omega_k (\xi_0)}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha} \, E_{\rho}(-\lambda_k \xi_0^{\rho})+\omega_k(\xi_0)= \] \[ =\frac{\varphi_k E_{\rho}(-\lambda_k \xi_0^{\rho}) -\alpha \, \omega_k (\xi_0)}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha}=\frac{\varphi_k \big(E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha\big) +\varphi_k \alpha -\alpha \omega_k (\xi_0)}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha}= \] \[ =\alpha\frac{\varphi_k-\omega_k (\xi_0)}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha}+\varphi_k. \] Now consider the case $k\in K_0$. Note that in this case $E_{\rho}(-\lambda_k \xi_0^{\rho})=\alpha$ and all Fourier coefficients $\varphi_k$ are equal to zero by virtue of the orthogonality conditions (\ref{Or1}). Therefore, \[ \alpha u_k(0) +\varphi_k=\alpha\frac{W_k}{E_{\rho}(-\lambda_k \xi_2^{\rho})}=E_{\rho}(-\lambda_k \xi_0^{\rho})\frac{W_k}{E_{\rho}(-\lambda_k \xi_2^{\rho})}=u_k(\xi_0). \] Let us move on to checking the over-determination condition (\ref{Ovc2}). Let $k\notin K_0$. Then $$ \frac{\varphi_k-\omega_k (\xi_0)}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha} \, E_{\rho}(-\lambda_k \xi_2^{\rho})+\omega_k(\xi_2) = W_k, $$ or $$ \varphi_k= \frac{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha}{E_{\rho}(-\lambda_k \xi_2^{\rho})} \,[W_k -\omega_k(\xi_2)]+\omega_k(\xi_0), $$ and this coincides with the Fourier coefficients of the function (\ref{fiP2}). If $k\in K_0$, then \[ u_k(\xi_2)=\frac{E_\rho(-\lambda_k \xi_2^\rho)\,W_k}{E_\rho(-\lambda_k \xi_2^\rho)} = W_k. \] This completes the proof of the existence of a solution to Problem \ref{P2}. \textbf{3.2.
Uniqueness.} Obviously, to prove the uniqueness of the solution to Problem \ref{P2}, it suffices to show that the solution $\{u(t), \varphi\}$ of the following inverse problem: \[ D_t^\rho u(t) + Au(t) = 0, \quad \quad t>0; \] \[ u(\xi_0) =\alpha u(0)+ \varphi, \quad 0 < \xi_0 \leq T, \] \[ u(\xi_2) =0, \quad 0 < \xi_2 \leq T, \,\, \xi_2 \neq \xi_0, \] is identically zero: $u(t)\equiv 0$ and $\varphi=0$. Let $u(t)$ be a solution to this problem and let $u_k(t)=(u(t), v_k)$. Then \begin{equation}\label{Iprob2} D_t^\rho u_k(t) +\lambda_k u_k(t)=0,\quad t>0; \quad u_k(\xi_0) =\alpha u_k(0)+\varphi_k, \,\, u_k(\xi_2) =0. \end{equation} Let $k\notin K_0$. Then it is not hard to verify that the function $$ u_k(t)=\frac{E_{\rho}(-\lambda_k t^{\rho})}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha}\,\, \varphi_k $$ is the only solution to the equation and the non-local condition in (\ref{Iprob2}). The over-determination condition in (\ref{Iprob2}) implies $$ u_k(\xi_2)=\frac{E_{\rho}(-\lambda_k \xi_2^{\rho})}{E_{\rho}(-\lambda_k \xi_0^{\rho})-\alpha}\,\, \varphi_k=0. $$ Since $E_{\rho}(-\lambda_k \xi_0^{\rho})\neq\alpha$ and $E_{\rho}(-\lambda_k \xi_2^{\rho})\neq 0$, we have $\varphi_k=0$ and therefore $u_k(t)\equiv 0$ for all $k\notin K_0$. Now consider the case $k\in K_0$. Denote $u_k(0)= b_k$. Then the unique solution to the differential equation in (\ref{Iprob2}) with this initial condition has the form $u_k(t)=b_k E_\rho (-\lambda_k t^\rho)$ (see, e.g. \cite{Gor}, p.174). Since $E_\rho (-\lambda_k \xi_0^\rho)=\alpha$ and $\varphi_k=0$ in the case under consideration, the non-local condition is satisfied for an arbitrary $b_k$. But the over-determination condition $u_k(\xi_2)=0$ implies $b_k=0$ and therefore $u_k(t)\equiv 0$ for $k\in K_0$. Thus, from the completeness of the system of eigenfunctions $\{v_k\}$, we finally obtain $\varphi=0$ and $u(t) \equiv 0$, as required. The uniqueness and hence Theorem \ref{thP2} is completely proved.
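As with Problem \ref{P1}, the formulas of Theorem \ref{thP2} admit a numerical sanity check for a single non-critical mode (a sketch with illustrative parameter values, not data from the paper). Taking $f_k$ constant, so that $\omega_k(t)=f_k\,t^{\rho}E_{\rho,\rho+1}(-\lambda_k t^{\rho})$, one computes $\varphi_k$ from (\ref{fiP2}) and $u_k(t)$ from (\ref{uP2}) and verifies the non-local and over-determination conditions:

```python
from math import gamma

def ml(z, rho, mu=1.0, n_terms=80):
    """Truncated series for the two-parameter Mittag-Leffler function E_{rho,mu}(z)."""
    return sum(z**n / gamma(rho * n + mu) for n in range(n_terms))

# illustrative parameters for one mode k (not taken from the paper)
rho, lam, alpha = 0.5, 2.0, 0.4
xi0, xi2, f_k, W = 0.8, 1.0, 0.5, 0.2

b = lambda t: ml(-lam * t**rho, rho)                                # E_rho(-lam t^rho)
omega = lambda t: f_k * t**rho * ml(-lam * t**rho, rho, rho + 1.0)  # omega_k for constant f_k

# Fourier coefficient phi_k from (fiP2)
phi_k = (b(xi0) - alpha) / b(xi2) * (W - omega(xi2)) + omega(xi0)

# u_k(t) from (uP2)
def u(t):
    return (phi_k - omega(xi0)) / (b(xi0) - alpha) * b(t) + omega(t)

assert abs(u(xi2) - W) < 1e-9                          # over-determination condition
assert abs(u(xi0) - (alpha * u(0.0) + phi_k)) < 1e-9   # non-local condition
```

Both conditions hold to near machine precision, matching the computation in Section 3.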
\section{Conclusion} In the authors' previous paper \cite{AF2022} it is proved that for $\alpha \notin (0,1)$ the solutions of the forward problem and of the two inverse problems of determining $f$ and $\varphi$ exist and are unique. If $\alpha \in (0,1)$ and equality (\ref{critical}) holds for some $k\in K_0$, then to ensure the existence of a solution to the forward problem, it is necessary to require the orthogonality conditions (\ref{Or1}). However, in this case the solution is not unique and is determined up to the term \[ \sum\limits_{k\in K_0} b_k E_\rho(-\lambda_k t^\rho)\, v_k, \] where $b_k$ are arbitrary numbers. In this paper, we consider the above two inverse problems for the critical values of the parameter $\alpha \in (0,1)$. An interesting effect arises here: the solution $u(t)$ of the forward problem is not unique, while for the same values of $\alpha$ the solution $u(t)$ of each inverse problem is unique. What is the reason for this? As follows from the main results of this paper, the over-determination condition \[ u(\tau)=V \] can be rewritten in the form of two groups of conditions with respect to the Fourier coefficients \[ u_k(\tau)=V_k, \,\,\, k\notin K_0, \] and \[ u_k(\tau)=V_k, \,\,\, k\in K_0. \] With the help of the first group, the unique solutions of the inverse problems are singled out, and since the coefficients $f_k$ and $\varphi_k$ are equal to zero for $k\in K_0$, the conditions from the second group are not used for this purpose. The conditions from the second group ensure the uniqueness of the solution $u(t)$: namely, they uniquely determine the arbitrary coefficients $b_k$ mentioned above. \section{Acknowledgement} The authors are grateful to Sh. A. Alimov for discussions of these results. The authors acknowledge financial support from the Ministry of Innovative Development of the Republic of Uzbekistan, Grant No F-FA-2021-424. \begin{thebibliography}{99} \normalsize \bibitem{Liz} {\sc C.
Lizama}, {\it Abstract linear fractional evolution equations}, Handbook of Fractional Calculus with Applications, J.A.T. Machado Ed., DeGruyter, V. {\bf 2}, (2019), 465--497. \bibitem {Yama10} {\sc J. Liu and M. Yamamoto}, {\it A backward problem for the time-fractional diffusion equation}, Appl. Anal. {\bf 89}, (2010), 1769--1788. \bibitem{Yama11} {\sc K. Sakamoto and M. Yamamoto}, {\it Initial value/boundary value problems for fractional diffusion-wave equations and applications to some inverse problems}, J. Math. Anal. Appl. {\bf 382}, 1 (2011), 426--447. \bibitem {Florida} {\sc G. Floridia, Z. Li and M. Yamamoto}, {\it Well-posedness for the backward problems in time for general time-fractional diffusion equation}, Rend. Lincei Mat. Appl. {\bf 31}, (2020), 593--610. \bibitem{AA1} {\sc Sh.A. Alimov and R.R. Ashurov}, {\it On the backward problems in time for time-fractional subdiffusion equations}, Fractional Differential Calculus, {\bf 11}, (2022) 2, 203--217, doi:10.7153/fdc-2021-11-14. \bibitem{Kab1} {\sc S.I. Kabanikhin}, {\it Inverse and Ill-Posed Problems}, Theory and Applications, De Gruyter, 2011. \bibitem{Umar} {\sc S.R. Umarov}, {\it Introduction to Fractional and Pseudo-Differential Equations with Singular Symbols}, Springer, 2015. \bibitem{Pao1} {\sc C.V. Pao}, {\it Reaction diffusion equations with non-local boundary and non-local initial conditions}, J. Math. Anal. Appl. (1995), V. {\bf 195}, 702--718. \bibitem{Tuan} {\sc N.H. Tuan, N.A. Triet, N.H. Luc and N.D. Phuong}, {\it On a time fractional diffusion with non-local in time conditions}, Advances in Difference Equations, (2021) {\bf 204}, https://doi.org/10.1186/s13662-021-03365-1 \bibitem{AshSob} {\sc A.O. Ashyralyev and P.E. Sobolevskii}, {\it Coercive stability of a multidimensional difference elliptic equation of 2m-th order with variable coefficients}, Investigations in the Theory of Differential Equations, (Russian), Minvuz Turkmen. SSR, Ashkhabad, (1987), 31--43.
\bibitem{AshSob1} {\sc A.O. Ashyralyev, A. Hanalyev and P.E. Sobolevskii}, {\it Coercive solvability of non-local boundary value problem for parabolic equations}, Abstract and Applied Analysis, (2001), {\bf 6}, 1, 53--61. \bibitem{AF2022} {\sc R. Ashurov and Yu. Fayziev}, {\it On the non-local problems in time for time-fractional subdiffusion equations}, Fractal and Fractional, (2022), V. {\bf 6}, 41. \bibitem{PSK} {\sc A.V. Pskhu}, {\it Fractional partial differential equations}, (in Russian), M. NAUKA, 2005. \bibitem{RuzTor} {\sc M. Ruzhansky, N. Tokmagambetov and B.T. Torebek}, {\it Inverse source problems for positive operators}. I: Hypoelliptic diffusion and subdiffusion equations, J. Inverse Ill-Posed Probl., V. {\bf 27}, (2019) 891--911. \bibitem{YulKad} {\sc T.K. Yuldashev and B.J. Kadirkulov}, {\it Inverse problem for a partial differential equation with Gerasimov-Caputo-type operator and degeneration}, Fractal and Fractional, (2021), {\bf 5}, 58. https://doi.org/10.3390/fractalfract5020058 \bibitem{prilepko} {\sc A.I. Prilepko, D.G. Orlovsky and I.A. Vasin}, {\it Methods for solving inverse problems in mathematical physics}, Marcel Dekker, New York, 2000. \bibitem{Hand1} {\sc Y. Liu, Z. Li and M. Yamamoto}, {\it Inverse problems of determining sources of the fractional partial differential equations}, Handbook of Fractional Calculus with Applications, V. {\bf 2}, J.A.T. Machado Ed., DeGruyter, (2019); 411--430. \bibitem{FurKir} {\sc K.M. Furati, O.S. Iyiola and M. Kirane}, {\it An inverse problem for a generalized fractional diffusion}, Applied Mathematics and Computation, (2014), V. {\bf 249}, 24--31. \bibitem{LiYam} {\sc Z. Li, Y. Liu and M. Yamamoto}, {\it Initial-boundary value problem for multi-term time-fractional diffusion equation with positive constant coefficients}, Applied Mathematics and Computation, (2015), V. {\bf 257}, 381--397. \bibitem{Sun} {\sc L. Sun, Y. Zhang and T.
Wei}, {\it Recovering the time-dependent potential function in a multi-term time-fractional diffusion equation}, Applied Numerical Mathematics, (2019), V. {\bf 135}, 228--245. \bibitem{KirMa} {\sc M. Kirane and A.S. Malik}, {\it Determination of an unknown source term and the temperature distribution for the linear heat equation involving fractional derivative in time}, Applied Mathematics and Computation, (2011), V. {\bf 218}, 163--170. \bibitem{KirTor} {\sc M. Kirane, B. Samet and B.T. Torebek}, {\it Determination of an unknown source term and the temperature distribution for the subdiffusion equation at the initial and final data}, Electronic Journal of Differential Equations, (2017), V. {\bf 217}, 1--13. \bibitem{TorTap} {\sc B.T. Torebek and R. Tapdigoglu}, {\it Some inverse problems for the non-local heat equation with Caputo fractional derivative}, Mathematical Methods in Applied Sciences, (2017), V. {\bf 40}, 6468--6479. \bibitem{AODif} {\sc R. Ashurov and O. Muhiddinova}, {\it Inverse problem of determining the heat source density for the sub\-diffusion equation}, Differential Equations, (2020), V. {\bf 56}, 12, 1550--1563. \bibitem{AOLob} {\sc R. Ashurov and O. Muhiddinova}, {\it Initial-boundary value problem for a time-fractional subdiffusion equation with an arbitrary elliptic differential operator}, Lobachevskii Journal of Mathematics, (2021), V. {\bf 42}, 3, 517--525. \bibitem{KirSal} {\sc M. Kirane, A.M. Salman and A. Mohammed Al-Gwaiz}, {\it An inverse source problem for a two-dimensional time fractional diffusion equation with non-local boundary conditions}, Math. Meth. Appl. Sci. (2012), DOI: 10.1002/mma.2661 \bibitem{AF1} {\sc R. Ashurov and Yu. Fayziev}, {\it Determination of fractional order and source term in a fractional subdiffusion equation}, Eurasian Mathematical Journal, (2022), V. {\bf 13}, 1, 19--31. \bibitem{Saima} {\sc Zh. Shuang, R. Saima, R. Asia, K. Khadija and M.A.
Abdullah}, {\it Initial boundary value problems for a multi-term time fractional diffusion equation with generalized fractional derivatives in time}. AIMS Mathematics,(2021), V. {\bf 6}, 11, 12114--12132. doi: 10.3934/math.2021703 \bibitem{ASh} {\sc R. Ashurov and M. Shakarova}, {\it Time-Dependent source identification problem for fractional Schrodinger type equations}. Lobachevskii Journal of Mathematics, (2022), V. {\bf 43}, 5, 1053--1064. \bibitem{ASh2} {\sc R. Ashurov and M. Shakarova}, {\it Time-dependent source identification problem for a fractional schrodinger equation with the Riemann-Liouville derivative}, http://arxiv.org/submit/4165518/pdf \bibitem{AF2} {\sc R. Ashurov and Yu. Fayziev}, {\it Uniqueness and existence for inverse problem of determining an order of time-fractional derivative of subdiffusion equation}. Lobachevskii journal of mathematics, (202), V {\bf 42}, 3, 508--516. \bibitem{AF3} {\sc R. Ashurov and Yu. Fayziev}, {\it Inverse problem for determining the order of the fractional derivative in the wave equation}. Mathematical Notes, (2021), {\bf 110}:6, 842--852 \bibitem{Gor} {\sc R. Gorenflo, A.A. Kilbas, F. Mainardi and S.V. Rogozin}, {\it Mittag-Leffler functions, related topics and applications}, Springer 2014. \end{thebibliography} \end{document}
2205.03016v1
http://arxiv.org/abs/2205.03016v1
On exponential Yang-Mills fields and $p$-Yang-Mills fields
\documentclass[reqno]{amsproc} \usepackage{graphicx} \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=blue, anchorcolor=blue, citecolor=blue } \usepackage{amssymb} \usepackage{amsfonts} \usepackage{amsmath} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{claim}[theorem]{Claim} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{remark}[theorem]{Remark} \newtheorem*{theoremA}{Theorem A} \newtheorem*{thm30}{Theorem 6.1} \numberwithin{equation}{section} \newcommand{\abs}[1]{\lvert#1\rvert} \begin{document} \title{On exponential Yang-Mills fields and $p$-Yang-Mills fields} \author{Shihshu Walter Wei} \address{Department of Mathematics, University of Oklahoma, Norman, OK 73072} \email{[email protected]} \subjclass[2010]{Primary 58E20, 53C21, 81T13; Secondary 26D15, 53C20} \keywords{Normalized exponential Yang-Mills-energy functional, stress-energy tensor, $e$-conservation law, exponential Yang-Mills connection, monotonicity formula, vanishing theorem, exponential Yang-Mills field, $p$-Yang-Mills field} \begin{abstract} We introduce the \emph{normalized exponential Yang-Mills energy functional} $\mathcal{YM}_e^0$, the stress-energy tensor $S_{e,\mathcal{YM}^0 }$ associated with the normalized \emph{exponential Yang-Mills energy functional} $\mathcal{YM}_e ^0 $, and an $e$-conservation law. We also introduce the notion of the {\it $e$-degree} $d_e$, which connects two separate parts in the associated normalized exponential stress-energy tensor $S_{e,\mathcal{YM}^0 }$ (cf. \eqref{3.10} and \eqref{4.15}), derive a monotonicity formula for exponential Yang-Mills fields, and prove a vanishing theorem for exponential Yang-Mills fields.
This monotonicity formula and vanishing theorem for exponential Yang-Mills fields augment and extend the monotonicity formulae and vanishing theorems for $F$-Yang-Mills fields in \cite {DW} and \cite [9.2] {W11}. We also discuss an average principle (cf. Proposition \ref{P: 8.1}), isoperimetric and Sobolev inequalities, convexity and Jensen's inequality, $p$-Yang-Mills fields, an extrinsic average variational method in the calculus of variations, and $\Phi_{(3)}$-harmonic maps, from varied, coupled, generalized viewpoints and perspectives (cf. Theorems \ref{T: 6.1}, \ref{T: 7.1}, \ref{T: 9.1}, \ref{T: 9.2}, \ref{T: 10.1}, \ref{T: 10.2}, \ref{T: 11.13}, \ref{T: 11.14}, \ref{T: 11.15}). \end{abstract} \maketitle \section*{Contents} 1. Introduction 2. Fundamentals in vector bundles and principal $G$-bundle 3. Normalized exponential Yang-Mills functionals and $e$-conservation laws 4. Comparison theorems in Riemannian geometry 5. Monotonicity formulae 6. Vanishing theorems for exponential Yang-Mills fields 7. Vanishing theorems from exponential Yang-Mills fields to $F$-Yang-Mills fields 8. An average principle, isoperimetric and Sobolev inequalities 9. Convexity and Jensen's inequalities 10. $p$-Yang-Mills fields 11. An extrinsic average variational method and $\Phi_{(3)}$-harmonic maps \section{Introduction} The \emph{Yang--Mills functional}, brought to mathematics by physics, is broadly analogous to functionals such as the \emph{length functional} in \emph{geodesic theory}, the \emph{area functional} in \emph{minimal surface, or minimal submanifold theory}, the \emph{energy $($resp. $p$-energy$)$ functional} \emph{in harmonic $($resp. $p$-harmonic$)$ map theory}, or the \emph{mass functional} in \emph{stationary or minimal current, geometric measure theory} (cf., e.g., \cite {FF, L, HoW}).
A critical point of the Yang-Mills functional with respect to any compactly supported variation in the space of smooth connections $\nabla$ on the adjoint bundle is called a \emph{Yang-Mills connection}. Its associated curvature field $R^\nabla$ is known as a \emph{Yang-Mills field} and is ``harmonic", i.e., a \emph{harmonic $2$-form with values in the vector bundle}. The Euler-Lagrange equation for the Yang-Mills functional is the \emph{Yang-Mills equation}. Whereas \emph{Hodge theory of harmonic forms} is motivated in part by \emph{Maxwell's equations} unifying magnetism with electricity in physics, and harmonic forms are privileged representatives in a \emph{de Rham cohomology class} picked out by the Hodge Laplacian, a harmonic map can be viewed as a nonlinear generalization of a harmonic $1$-form and a Yang-Mills field can be viewed as a nonlinear generalization of a harmonic $2$-form. On the other hand, the Yang-Mills equation, which can be viewed as a non-abelian generalization of Maxwell's equations, has had wide-ranging consequences and has influenced developments in other fields such as low-dimensional topology, particularly the topology of smooth 4-manifolds. For example, M. Freedman and R. Kirby first observed the startling fact that \emph{there exists an exotic $\mathbb R^4$}, i.e., a manifold homeomorphic to, but not diffeomorphic to, $\mathbb R^4$ (cf. \cite [p. 95] {K}, \cite {D, FK, Go}). This is in stunning contrast to a phenomenal theorem of J. Milnor in compact high-dimensional topology which shows that \emph{there exist exotic seven-spheres $S^7$}, i.e., manifolds that are homeomorphic to, but not diffeomorphic to, the standard Euclidean $S^7$ (cf. \cite {M}).
In \cite{DW}, we unify the concepts of minimal hypersurfaces in Euclidean space $\mathbb R^{n+1}$, maximal spacelike hypersurfaces in Minkowski space $\mathbb R^{n,1}$, harmonic maps, $p$-harmonic maps, $F$-harmonic maps, and Yang-Mills fields, and introduce $F$-Yang-Mills fields, the $F$-degree, and generalized Yang-Mills-Born-Infeld fields (with the plus sign or with the minus sign) on manifolds, where \begin{equation} F: [0, \infty) \to [0, \infty)\ \text{is a strictly increasing}\ C^2\ \text{function with}\ F(0)=0. \label{1.1} \end{equation} When $$F(t)=t\, , p^{-1}(2t)^{\frac p2}\, , \sqrt{1+2t} -1\, , \text{and}\, 1-\sqrt{1-2t}, $$ the $F$-Yang-Mills field becomes an ordinary Yang-Mills field, a $p$-Yang-Mills field, a generalized Yang-Mills-Born-Infeld field with the plus sign, and a generalized Yang-Mills-Born-Infeld field with the minus sign on a manifold respectively (cf. \cite {BI,BL,BLS,CCW,D,La,LWW,LY, SiSiYa, W12,Ya}). When $$F(t)=t\, , e^t, p^{-1}(2t)^{\frac p2}\, , \sqrt{1+2t} -1\, , \text{and}\ 1-\sqrt{1-2t}\, , $$ the $F$-harmonic map or the graph of the $F$-harmonic map becomes an ordinary harmonic map, an exponentially harmonic map, a $p$-harmonic map, a minimal hypersurface in Euclidean space $\mathbb R^{n+1}$, and a maximal spacelike hypersurface in Minkowski space $\mathbb R^{n,1}$ respectively (cf. \cite {ES,WY,EL,Ar,WWZ}). We use ideas from physics, \index{stress-energy tensors}{stress-energy tensors} and \index{conservation laws}{conservation laws}, to simplify and unify various properties of $F$-Yang-Mills fields, $F$-harmonic maps, and more generally differential $k$-forms, $k \ge 0$, with values in vector bundles. In this paper, we introduce the \emph{normalized exponential Yang-Mills energy functional} $\mathcal{YM}_e^0$, $\big ($resp.
\emph{exponential Yang-Mills energy functional} $\mathcal{YM}_e\big )$, \emph{stress-energy tensor $S_{e,\mathcal{YM}^0 }$ associated with the normalized exponential Yang-Mills energy functional} $\mathcal{YM}_e ^0 $, $\big ($resp. \emph{stress-energy tensor $S_{e,\mathcal{YM} }$ associated with the exponential Yang-Mills energy functional} $\mathcal{YM}_e \big )$. (A critical point of $\mathcal{YM}_e^0$, i.e. a \emph{normalized exponential Yang-Mills connection}, and its associated \emph{normalized exponential Yang-Mills field}, are just the same as the \emph{exponential Yang-Mills connection} and its associated \emph{exponential Yang-Mills field}.) We also introduce the notion of the {\it $e$-degree $d_e$}, which connects two separate parts in the associated normalized exponential stress-energy tensor $S_{e,\mathcal{YM}^0 }$ $($cf. \eqref{4.15}$)$. These stress-energy tensors arise from calculating the rate of change of various functionals when the metric of the domain or base manifold is changed, and are naturally linked to various conservation laws. For example, we prove that every normalized exponential Yang-Mills field or every exponential Yang-Mills field $R ^\nabla $ satisfies an $e$-conservation law (cf. Theorem \ref{T: 3.11}). Every normalized exponential Yang-Mills connection or exponential Yang-Mills connection satisfies the exponential Yang-Mills equation (cf. Corollary \ref{C: 3.7}). We then prove monotonicity formulae, via the coarea formula and comparison theorems in Riemannian geometry (cf. \cite {GW, DW, HLRW, W11}). Whereas a ``microscopic" approach to some of these monotonicity formulae leads to celebrated blow-up techniques due to E. De Giorgi (\cite {Gi}) and W.L. Fleming (\cite {Fl}), and to regularity theory in geometric measure theory (cf. \cite {A,Al,FF,HL,Lu,PS,SU}; for example, the regularity results of Allard (\cite {A}) depend on the monotonicity formulae for varifolds.
Monotonicity properties are also dealt with by Price and Simon (\cite {PS}) and Price (\cite {P}) for Yang-Mills fields, and by Hardt-Lin (\cite {HL}) and Luckhaus (\cite {Lu}) for $p$-harmonic maps), a ``macroscopic" version of these monotonicity formulae enables us to derive some vanishing theorems under suitable growth conditions on Cartan-Hadamard manifolds or manifolds which possess a pole with appropriate curvature assumptions. In particular, we have Theorem \ref{T: 5.1}, {\it the monotonicity formula for exponential Yang-Mills fields}, and Theorem \ref{T: 6.1}, {\it the vanishing theorem for exponential Yang-Mills fields.} This monotonicity formula and vanishing theorem for exponential Yang-Mills fields augment and extend the vanishing theorems for $F$-Yang-Mills fields in \cite {DW} and \cite {W11}. We note that even when $$F(t)=e^t\quad \text{or}\quad F(t) = e^t -1\quad \text{for}\ t= \frac{||R^{\nabla}||^2}2, $$ so that the $F$-Yang-Mills field becomes an exponential Yang-Mills field, the following vanishing theorem for $F$-Yang-Mills fields is not applicable to exponential Yang-Mills fields. This is due to the fact that for $F(t) = e^t\, ,$ the degree of $F$ is $$d_F := \sup_{t\geq 0}\frac{tF^{\prime }(t)}{F(t)} = \infty,$$ and the $F$-Yang-Mills energy growth condition \eqref{1.3} is not satisfied for $\lambda = -\infty$ in \eqref{1.4}. To overcome this difficulty in getting estimates, we introduce the notion of the $e$-degree $d_e$ for a given curvature tensor $R^\nabla \big ($cf. \eqref{4.15}$\big )$.
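To illustrate the role of the degree $d_F$, a direct computation for the model choices of $F$ in \eqref{1.1} (a routine check, included here for the reader's convenience) gives
\begin{equation*}
\frac{tF^{\prime }(t)}{F(t)} =
\begin{cases}
1 &\text{for } F(t)=t,\\
\dfrac{t\, (2t)^{\frac p2 -1}}{p^{-1}(2t)^{\frac p2}} = \dfrac p2 &\text{for } F(t)=p^{-1}(2t)^{\frac p2},\\
t &\text{for } F(t)=e^t,
\end{cases}
\end{equation*}
so that $d_F = 1\, , \frac p2\, ,$ and $\infty$ respectively. Thus $d_F$ is finite for the Yang-Mills and $p$-Yang-Mills functionals but infinite in the exponential case, so the hypotheses of Theorem A below cannot hold for $F(t)=e^t$.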
\begin{theoremA}[Vanishing theorem for $F$-Yang-Mills fields (\cite {DW, W11})] Suppose that the radial curvature $K(r)$ of $M$ satisfies one of the seven conditions \begin{equation} \aligned {\rm(i)}&\quad -\alpha ^2\leq K(r)\leq -\beta ^2\, \text{with}\, \alpha >0, \beta >0\, \text{and}\, (n-1)\beta - 4\alpha d_F \geq 0;\\ {\rm(ii)}&\quad K(r) = 0\, \text{with}\, n-4 d_F>0;\\ {\rm(iii)}&\quad -\frac A{(1+r^2)^{1+\epsilon}}\leq K(r)\leq \frac B{(1+r^2)^{1+\epsilon}} \text{with}\, \epsilon > 0\, , A \ge 0\, , 0 < B < 2\epsilon\, , \text{and}\\ &\qquad n - (n-1)\frac B{2\epsilon} -4e^{\frac {A}{2\epsilon}} d_F > 0;\\ {\rm(iv)}&\quad-\frac {A}{r^2}\leq K(r)\leq -\frac {A_1}{r^2}\, \text{with}\quad 0 \le A_1 \le A\, ,\text{and}\\ &\qquad\ 1 + (n-1)\frac{1 + \sqrt {1+4A_1}}{2} - 2(1 + \sqrt {1+4A}) d_F > 0;\\ {\rm(v)}&\quad- \frac {A(A-1)}{r^2}\le K(r) \le - \frac {A_1(A_1-1)}{r^2}\, \text{and}\, A \ge A_1 \ge 1\, , \text{and}\\ &\qquad 1+(n-1)A_1-4A d_F > 0;\\ {\rm(vi)}&\quad \frac {B_1(1-B_1)}{r^2}\leq K(r)\le \frac {B(1-B)}{r^2}\, ,\text{with}\quad 0 \le B, \, B_1 \le 1\, , \text{and} \\ &\qquad 1 + (n-1)(|B-\frac {1}{2}|+ \frac {1}{2}) -2\big (1 + \sqrt {1+4B_1(1-B_1)}\big ) d_F > 0;\\ {\rm(vii)}&\quad \frac {B_1}{r^2} \le K(r) \le \frac {B}{r^2} \text{with}\quad 0 \le B_1 \le B \le \frac 14\, , \text{and}\\ &\qquad 1+ (n-1)\frac{1 + \sqrt {1-4B}}{2} -(1 + \sqrt {1+4B_1} )\|R^\nabla \|^2 _\infty > 0. 
\endaligned\label{1.2} \end{equation} If $R^\nabla \in A^2 \big (Ad(P)\big )$ is an $F$-Yang-Mills field and satisfies \begin{equation} \int_{B_\rho(x_0)}F (\frac{||R^\nabla||^2}2)\, dv = o(\rho^\lambda )\quad \text{as } \rho\rightarrow \infty,\label{1.3} \end{equation} where $\lambda $ is given by \begin{equation} \lambda \le \begin{cases} n-4\frac \alpha \beta d_F &\text {if } K(r)\ \text{obeys $($i$)$}\\ n-4d_F &\text {if } K(r)\ \text{obeys $($ii$)$}\\ n - (n-1)\frac B{2\epsilon} -4e^{\frac {A}{2\epsilon}}d_F &\text{if } K(r)\ \text{obeys $($iii$)$}\\ 1 + (n-1)\frac{1 + \sqrt {1+4A_1}}{2} - 2 (1 + \sqrt {1+4A}) d_F &\text{if } K(r)\, \text{ obeys $($iv$)$}\\ 1+(n-1)A_1-4A d_F &\text{if } K(r)\, \text{ obeys $($v$)$}\\ 1 + (n-1)\big (|B-\frac {1}{2}|+ \frac {1}{2}\big ) - 2 \big (1 + \sqrt {1+4B_1(1-B_1)}\big ) d_F &\text{if } K(r)\ \text {obeys $($vi$)$}\\ 1+ (n-1)\frac{1 + \sqrt {1-4B}}{2} - 2 (1 + \sqrt {1+4B_1} ) d_F &\text{if } K(r)\ \text{obeys $($vii$)$}.\end{cases} \label{1.4} \end{equation} Then $R^{\nabla} \equiv 0$ on $M\, .$ In particular, every $F$-Yang-Mills field $R^{\nabla}$ with finite $F$-Yang-Mills energy vanishes on $M$. \end{theoremA} We also discuss an average principle (cf. Proposition \ref{P: 8.1}) and Jensen's inequality from varied, generalized viewpoints and perspectives of exponential Yang-Mills fields, $p$-Yang-Mills fields, and Yang-Mills fields (cf. Theorems \ref{T: 7.1}, \ref {T: 9.1}, \ref {T: 9.2}, \ref {T: 10.1}, \ref {T: 10.2}). In the context of harmonic maps, the stress-energy tensor was introduced and studied in detail by Baird and Eells (\cite {BE}). Following Baird-Eells (\cite {BE}), Sealey [Se2] introduced the stress-energy tensor for vector bundle valued $p$-forms and established some vanishing theorems for $L^2$ harmonic $p$-forms (cf. \cite {DLW, Se1, Xi1}).
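For orientation, Theorem A may be specialized to the ordinary Yang-Mills case $F(t)=t$, where $d_F=1$ (a routine instance, recorded here for the reader's convenience): condition $($ii$)$ reads $n-4>0$, and \eqref{1.3}--\eqref{1.4} require
\begin{equation*}
\int_{B_\rho(x_0)} \frac{||R^\nabla||^2}2\, dv = o(\rho^{\lambda })\quad \text{as } \rho\rightarrow \infty, \qquad \lambda \le n-4,
\end{equation*}
so on a manifold with a pole and $K(r)=0$ of dimension $n \ge 5$, every Yang-Mills field with finite Yang-Mills energy vanishes.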
In a more general framework, Dong and Wei use a unified method to study stress-energy tensors and derive monotonicity inequalities and vanishing theorems for vector bundle valued $p$-forms (\cite {DW}). These ideas and methods can be extended and unified in the $\sigma_2$-version of harmonic maps, $\Phi$-harmonic maps (cf. \cite {HW}). These involve the second elementary symmetric function of a pull-back tensor, whereas harmonic maps involve the first elementary symmetric function of a pull-back tensor. More recently, Feng-Han-Li-Wei use stress-energy tensors to unify properties in $\Phi_S$-harmonic maps (cf. \cite {FHLW}), Feng-Han-Wei extend and unify results in $\Phi_{S,p}$-harmonic maps (cf. \cite {FHW}), and Feng-Han-Jiang-Wei further extend and unify results in $\Phi_{(3)}$-harmonic maps (cf. \cite{FHJW}). Whereas we can view harmonic maps as $\Phi_{(1)}$-harmonic maps $($involving $\sigma_1$$)$ and $\Phi$-harmonic maps as $\Phi_{(2)}$-harmonic maps $($involving $\sigma_2$$)$, $\Phi_{(3)}$-harmonic maps involve $\sigma_3$, the third elementary symmetric function of the pullback tensor. In fact, an extrinsic average variational method in the calculus of variations can be carried over to more general settings, by which we introduce a notion of $\Phi_{(3)}$-harmonic map, find a large class of manifolds, $\Phi_{(3)}$-superstrongly unstable $(\Phi_{(3)}$-$\text{SSU})$ manifolds, and introduce notions of a stable $\Phi_{(3)}$-harmonic map and $\Phi_{(3)}$-strongly unstable $(\Phi_{(3)}$-$\text{SU})$ manifolds (cf. Theorems \ref{T: 11.8}, \ref{T: 11.9}, \ref{T: 11.10}, and \ref{T: 11.11}). \noindent By an extrinsic average variational method in the calculus of variations proposed in \cite {W3}, we find multiple large classes of manifolds with geometric and topological properties in the setting of varied, coupled, generalized types of harmonic maps, and summarize some of the results in Table 1.
For some details, related ideas, techniques, we refer to \cite {CW3}, \cite {W1}-\cite {W12}, \cite {WLW}. {\fontsize{0.01}{8}\selectfont \begin{table}[ht] \caption{An Extrinsic Average Variational Method}\label{eqtable} \renewcommand\arraystretch{1.2} \noindent\[ \begin{array}{|c|c|c|c|c|} \hline \operatorname{Mappings}&\operatorname{Functionals}&\operatorname{New}\, \operatorname{manifolds}\, \operatorname{found}&\operatorname{Geometry}&\operatorname{Topology}\\ \hline \operatorname{harmonic}\,\operatorname{map}\, \operatorname{or}&\operatorname{energy}\, \operatorname{functional}\,E\, \operatorname{or}&\operatorname{SSU}\, \operatorname{manifolds}\, \operatorname{or}&\operatorname{SU}\, \operatorname{or}& \pi_1=\pi_2=0\\ \Phi_{(1)}-\operatorname{harmonic}\,\operatorname{map} & E_{\Phi_{(1)}}&\Phi_{(1)}-\operatorname{SSU}\, \operatorname{manifolds}&\Phi_{(1)}-\operatorname{SU}& \pi_1=\pi_2=0\\ \hline p-\operatorname{harmonic}\,\operatorname{map}&E_p&p-\operatorname{SSU}\, \operatorname{manifolds}&p-\operatorname{SU}& \pi_1=\cdots=\pi_{[p]}=0\\ \hline \Phi-\operatorname{harmonic}\,\operatorname{map}\, \operatorname{or}&\Phi-\operatorname{energy}\, \operatorname{functional}\,E_{\Phi}\, \operatorname{or}&\Phi-\operatorname{SSU}\, \operatorname{manifolds}\, \operatorname{or}&\Phi-\operatorname{SU}\, \operatorname{or}& \pi_1=\cdots=\pi_4=0\\ \Phi_{(2)}-\operatorname{harmonic}\,\operatorname{map}&E_{\Phi_{(2)}}&\Phi_{(2)}-\operatorname{SSU}\, \operatorname{manifolds}&\Phi_{(2)}-\operatorname{SU}& \pi_1=\cdots=\pi_4=0\\ \hline \Phi_S-\operatorname{harmonic}\,\operatorname{map}&E_{\Phi_S}&\Phi_S-\operatorname{SSU}\, \operatorname{manifolds}&\Phi_S-\operatorname{SU}& \pi_1=\cdots=\pi_4=0\\ \hline \Phi_{S,p}-\operatorname{harmonic}\,\operatorname{map}&E_{\Phi_{S,p}}&\Phi_{S,p}-\operatorname{SSU}\, \operatorname{manifolds}&\Phi_{S,p}-\operatorname{SU}& \pi_1=\cdots=\pi_{[2p]}=0\\ \hline 
\Phi_{(3)}-\operatorname{harmonic}\,\operatorname{map}&\Phi_{(3)}-\operatorname{energy}\, \operatorname{functional}\,E_{\Phi_{(3)}}&\Phi_{(3)}-\operatorname{SSU}\, \operatorname{manifolds}&\Phi_{(3)}-\operatorname{SU}& \pi_1=\cdots=\pi_{6}=0\\ \hline \end{array} \] \end{table} } \section{Fundamentals in vector bundles and principal $G$-bundle} This section is devoted to a brief discussion of the fundamental notions in vector bundles and principal $G$-bundle. \begin{definition} A $($differentiable$)$ {\it vector bundle} of rank $n$ consists of a total space $E$, a base $M$, and a projection $\pi : E \to M\, ,$ where $E$ and $M$ are differentiable manifolds, $\pi$ is differentiable, each fiber $E_x := \pi^{-1} (x)$ for $x \in M$, carries the structure of an $n$-dimensional (real) vector space, with the following local triviality: For each $x\in M$, there exist a neighborhood $U$ and a diffeomorphism $$\varphi: \pi^{-1} (U) \to U \times \mathbb R^n$$ such that for every $y\in U$ $$\varphi _y := \varphi_{|E_y} : E_y \to \{y\} \times \mathbb R^n$$ is a vector space isomorphism. Such a pair $(\varphi, U)$ is called a {\it bundle chart}. 
\end{definition} Note that local trivializations $\varphi_ {\alpha}, \varphi_ {\beta}$ with $U_ {\alpha} \cap U_{\beta} \ne \emptyset$ determine {\it transition maps} \[\varphi_ {\beta \alpha } : U_ {\alpha} \cap U_{\beta} \to \text {Gl}(n, \mathbb R) \] by \begin{equation*}\varphi_ {\beta} \circ \varphi_ {\alpha}^{-1} (x, v) =(x, \varphi_ {\beta \alpha} (x) v)\quad \text{for}\quad x\in U_ {\alpha} \cap U_{\beta}, v \in \mathbb R^n\, , \end{equation*} where $\text {Gl}(n, \mathbb R)$ is the general linear group of bijective linear self maps of $\mathbb R^n\, .$ As direct consequences, the transition maps satisfy: \begin{equation*} \begin{aligned} \varphi_ {\alpha\alpha} (x) &= \text{id}_{\mathbb R^n}\quad \text{for}\quad x\in U_{\alpha};\\ \varphi_ {\alpha \beta} (x)\varphi_ {\beta \alpha}(x) &= \text{id}_{\mathbb R^n}\quad \text{for}\quad x\in U_ {\alpha} \cap U_{\beta};\\ \varphi_ {\alpha \gamma}(x)\varphi_ {\gamma \beta} (x)\varphi_ {\beta \alpha} (x) &= \text{id}_{\mathbb R^n}\quad \text{for}\quad x\in U_ {\alpha} \cap U_{\beta}\cap U_{\gamma}. \end{aligned} \end{equation*} (cf. \cite {J}). A vector bundle can be reconstructed from its transition maps \begin{equation*} E = \coprod _{\alpha} \quad U_{\alpha} \times \mathbb R^n\, / \, \sim\, , \end{equation*} where $\coprod$ denotes disjoint union, and the equivalence relation $\sim$ is defined by \begin{equation} (x, v) \sim (y, w) \ : \Longleftrightarrow\ x= y \ \text{and} \ w = \varphi _{\beta\alpha} (x) v\ (x \in U_{\alpha}, y \in U_{\beta}, v, w \in \mathbb R^n)\, .\label{2.1} \end{equation} \begin{definition} Let $G$ be a subgroup of $\text {Gl}(n, \mathbb R)$, for example the orthogonal group $O(n)$ or the special orthogonal group $SO(n)\, .$ We say a vector bundle has {\it structure group $G$} if there exists an atlas of bundle charts for which all transition maps have their values in $G\, .$ \end{definition} \begin{definition} Let $G$ be a Lie group.
A {\it principal $G$-bundle} consists of a base $M$, the total space $P$ of the bundle, and a differentiable projection $\pi : P \to M\, ,$ where $P$ and $M$ are differentiable manifolds, with an action of $G$ on $P$ satisfying \smallskip \item(i) $G$ acts freely on $P$ from the right: $(q,p)\in P \times G$ is mapped to $qp\in P\, ,$ and $qp \ne q$ for $p \ne e\, .$\smallskip \noindent The $G$ action then defines an equivalence relation on $P: p \sim q : \Longleftrightarrow \exists g\in G$ such that $p=qg\, .$\smallskip \item(ii) $M$ is the quotient of $P$ by this equivalence relation, and $\pi : P \to M$ maps $q\in P$ to its equivalence class. By $(\operatorname{i})$, each fiber $\pi^{-1} (x)$ can then be identified with $G$. \smallskip \item(iii) $P$ is locally trivial in the following sense: \noindent For each $x\in M$, there exists a neighborhood $U$ of $x$ and a diffeomorphism \[\varphi: \pi^{-1} (U) \to U \times G \] of the form $\varphi(p)=(\pi(p), \psi(p))$ which is $G$-equivariant, i.e. $\varphi (pg) = (\pi(p), \psi(p)g)$ for all $g \in G.$ \end{definition} \begin{example} We have the following results. \begin{enumerate} \item[\rm(i)] The projection $S^n \to P^n(\mathbb R)$ of the $n$-sphere to the real projective space is a principal bundle with group $G = O(1) = \mathbb Z_2$. \smallskip \item[\rm(ii)] The Hopf map $S^{2n+1} \to P^n(\mathbb C)$ of the $(2n+1)$-sphere to the complex projective space is a principal bundle with group $G = U(1) = S^1$. \smallskip \item[\rm(iii)] The Hopf map $S^{4n+1} \to P^n(\mathbb Q)$ of the $(4n+1)$-sphere to the quaternionic projective space is a principal bundle with group $G = Sp(1) = S^3$. \smallskip \item[\rm(iv)] Hopf fibrations: $ S^1 \to S^1, S^3 \to S^2, S^7 \to S^4,$ and $ S^{15} \to S^8\, .$ \end{enumerate} For $k=1, 2, 4, 8\, ,$ the Hopf construction is defined by $$(z,w)\mapsto u(z, w) = (|z|^2 - |w|^2, 2 z \cdot \overline w): \ \ \mathbb{R}^k \times \mathbb{R}^k \to \mathbb{R}^{k+1}.
$$ In fact, Hopf fibrations are $p$-harmonic maps and $p$-harmonic morphisms for every $p > 1$ (cf., e.g., \cite {W8, CW2}). \noindent We recall that a $C^2$ map $u: M \to N$ is said to be a \emph{$p$-harmonic morphism} if for any $p$-harmonic function $f$ defined on an open set $V$ of $N$, the composition $f \circ u$ is $p$-harmonic on $u^{-1}(V)$. \label{E: 2.4} \end{example} \begin{example} If $E \to M$ is a vector bundle with fiber $V$, the bundle of bases of $E$, $B(E) \to M$, is a principal bundle with group $\operatorname{Gl}(V)\, .$ \end{example} \subsection{Reversibility of principal and vector bundles} \medskip \noindent $(\Longrightarrow)$ Given a principal $G$-bundle $P \to M$ and a vector space $V$ on which $G$ acts from the left, we construct the associated vector bundle $E \to M$ with fiber $V$ as follows: \noindent We have a free action of $G$ on $P \times V$ from the right: \[ \begin{aligned} P \times V \times G &\to P \times V\\ (p,v) \cdot g & = (p \cdot g, g^{-1} v)\, . \end{aligned} \] If we divide out this $G$-action, i.e.
identify $(p,v)$ and $(p,v) \cdot g $, the fibers of $(P \times V) / G \to P/G$ become vector spaces isomorphic to $V$, and \begin{equation*} E := P \times _{G} \, V\, := (P \times V)_{/G} \to M \end{equation*} is a vector bundle with fiber $G \times _G \, V\, := (G \times V)_{/G} = V$ and structure group $G\, .$ The transition functions for $P$ also give transition functions for $E$ via the left action of $G$ on $V.$ \noindent $(\Longleftarrow)$ Conversely, given a vector bundle $E$ with structure group $G$, we construct a principal $G$-bundle as \begin{equation*}\coprod _{\alpha} \quad U_{\alpha} \times G\, / \, \sim \end{equation*} with \[ (x_\alpha, g_\alpha) \sim (x_\beta, g_\beta) \quad : \Longleftrightarrow\quad x_\alpha = x_\beta \in U_{\alpha} \cap U_{\beta}\quad \text{and} \quad g_\beta = \varphi _{\beta\alpha} (x_\alpha) g_\alpha\] where $\{U_{\alpha}\}$ is a local trivialization of $E$ with transition functions $\varphi_{\beta \alpha}$ as in \eqref{2.1}. \smallskip \begin{example} We have the following assertions. \begin{itemize} \item[\rm(i)] The canonical line bundles (real, complex and quaternionic) over the projective spaces $P^n(\mathbb R)\, ,P^n(\mathbb C)\, $ and $P^n(\mathbb Q)$ are the {\it associated bundles} of the principal bundles in Example \ref{E: 2.4} $(\operatorname{i})-(\operatorname{iii})$ via the canonical actions of $O(1), U(1)$ and $Sp(1)$ on $\mathbb R, \mathbb C$ and $\mathbb Q$ respectively. \item[\rm(ii)] Let $E \to M$ be a bundle with fiber $F$ and structure group $G$, and let $f: N \to M$ be a map between manifolds $N$ and $M$. Then the pull-back of $E \to M$ is a bundle $f^{-1} E \to N$ with fiber $F$, structure group $G$, and bundle charts $(\varphi \circ f, f^{-1}(U))$, where $(\varphi, U)$ are bundle charts of $E\, .$ The pull-back $f^{-1} E \to N$ is called the {\it pull-back bundle}.
\end{itemize} \end{example} \section{ Normalized exponential Yang-Mills functionals and $e$-conservation laws} Our basic set-up is the following: we consider a Riemannian manifold $M$, and a principal bundle $P$ with compact structure Lie group $G$ over $M$. Let $Ad(P)$ be the adjoint bundle \begin{equation} Ad(P)=P\times _{Ad}\mathcal{G}\, , \label{3.1} \end{equation} where $\mathcal{G}$ is the Lie algebra of $G$. Every connection $\rho$ on $P$ induces a connection $\nabla$ on $Ad(P)$. A connection $\nabla$ on the vector bundle $Ad(P)$ is a rule that enables us to take derivatives of smooth sections of $Ad(P)$. We also have the Riemannian connection $\nabla ^M$ on the tangent bundle $TM$, and the induced connection on the tensor product $\Lambda ^2T^{*}M\otimes Ad(P)$, where $\Lambda ^2T^{*}M$ is the second exterior power of the cotangent bundle $T^{*}M$. An $Ad_G$-invariant inner product on $\mathcal{G}$ induces a fiber metric on $Ad(P)$ and makes $Ad(P)$ and $\Lambda ^2T^{*}M\otimes Ad(P)$ into Riemannian vector bundles. Denote by $\Gamma \big (\Lambda ^2T^{*}M\otimes Ad(P)\big )$ the (infinite-dimensional) vector space of smooth sections of $\Lambda ^2T^{*}M\otimes Ad(P)\, .$ For $k\ge 0$, let $$A^k\big (Ad(P)\big)=\Gamma (\Lambda ^kT^{*}M\otimes Ad(P))$$ be the space of smooth $k$-forms on $M$ with values in the vector bundle $Ad(P)$. Although $\rho$ is not a section of $A^1\big (Ad(P)\big)$, via its induced connection $\nabla $, the associated curvature tensor $ R^\nabla$, given by $$R^{\nabla}_{X,Y} = [\nabla_X, \nabla_Y]- \nabla_{[X,Y]}\, ,$$ is in $A^2(Ad(P))$.
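In the simplest, abelian case the curvature formula above can be made completely explicit; the following computation is a standard illustration, not needed in the sequel. If $G = U(1)$, so that $\mathcal{G} \cong \mathbb R$ and $Ad(P)$ is a trivial line bundle, a connection may be written locally as $\nabla = d + A$ for a $1$-form $A$, and for vector fields $X, Y$,
\begin{equation*}
R^{\nabla}_{X,Y} = [\nabla_X, \nabla_Y]- \nabla_{[X,Y]} = X(A(Y)) - Y(A(X)) - A([X,Y]) = (dA)(X,Y)\, ,
\end{equation*}
so the curvature is simply the exterior derivative of the connection $1$-form. In this abelian setting the Yang-Mills equation reduces to Maxwell's equations, as noted in the Introduction.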
Let $\mathcal{C}$ be the space of smooth connections $\nabla \, $ on $Ad(P)\, ,$ and $dv$ be the volume element of $M\, .$ Recall that the \emph{Yang-Mills functional} is the mapping $\mathcal{YM} : \mathcal{C}\to \mathbb{R}^+\, $ given by \begin{equation} \mathcal{YM}(\nabla )=\int_M \frac 12||R ^\nabla ||^2\, dv\, ,\label{3.2} \end{equation} and the \emph{$p$-Yang-Mills functional}, for $p \ge 2\, $ (resp. the \emph{$F$-Yang-Mills functional}) is the mapping $\mathcal {YM}_p : \mathcal{C}\to \mathbb{R}^+\, $ given by \begin{equation}\aligned \mathcal{YM}_p(\nabla )& =\int_M \frac 1p||R ^\nabla ||^p\, dv\, \\ \big (\text{resp.}\quad \mathcal{YM}_F(\nabla )& =\int_M F(\frac 12||R ^\nabla ||^2)\, dv\, \big ), \endaligned \label{3.3} \end{equation} where the norm is defined in terms of the Riemannian metric on $M$ and a fixed $Ad_G$-invariant inner product on the Lie algebra $\mathcal{G}$ of $G\, .$ That is, at each point $x\in M\, ,$ its norm is \begin{equation}||R^{\nabla}||^2_x = \sum_{i<j}||R^{\nabla}_{e_i,e_j}||^2_x\, ,\label{3.4} \end{equation} where $\{e_1, \cdots, e_n\}$ is an orthonormal basis of $T_x(M)$ and the norm of $R^{\nabla}_{e_i,e_j}$ is the standard one on Hom$(Ad(P), Ad(P))$, namely, $$\langle S, U\rangle \equiv \, \text {trace}\, (S^T \circ U)\, .$$ A connection $\nabla$ on the adjoint bundle $Ad(P)$ is said to be a \emph{Yang-Mills connection} (resp. \emph{$p$-Yang-Mills connection}, $p \ge 2$, \emph{$F$-Yang-Mills connection}) and its associated curvature tensor $R^{\nabla}$ is said to be a \emph{Yang-Mills field} (resp. \emph{$p$-Yang-Mills field}, $p \ge 2$, \emph{$F$-Yang-Mills field}), if $\nabla$ is a critical point of $\mathcal{YM}$ (resp. $\mathcal{YM}_p\, $, ${\mathcal{YM}}_F$) with respect to any compactly supported variation in the space of smooth connections on $Ad(P)$.
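In particular, substituting the model choices of $F$ from \eqref{1.1} into \eqref{3.3} recovers the earlier functionals; for instance, with $F(t)=p^{-1}(2t)^{\frac p2}$,
\begin{equation*}
\mathcal{YM}_F(\nabla )=\int_M \frac 1p \big (2\cdot \frac 12||R ^\nabla ||^2\big )^{\frac p2}\, dv = \int_M \frac 1p||R ^\nabla ||^p\, dv = \mathcal{YM}_p(\nabla )\, ,
\end{equation*}
while $F(t)=t$ gives $\mathcal{YM}_F = \mathcal{YM}$ and, when $p=2$, $\mathcal{YM}_p = \mathcal{YM}$.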
We now introduce \begin{definition} The \emph{normalized exponential Yang-Mills energy functional} is the mapping $\mathcal{YM}_e ^0 : \mathcal{C}\to \mathbb{R}^+\, $ given by \begin{equation} \mathcal{YM}_e ^0(\nabla )=\int_M \big (\exp (\frac 12||R ^\nabla ||^2) - 1\big )\, dv\, , \label{3.5} \end{equation} and the \emph{exponential Yang-Mills energy functional} is the mapping $\mathcal{YM}_e : \mathcal{C}\to \mathbb{R}^+\, $ given by \begin{equation} \mathcal{YM}_e(\nabla )=\int_M \exp (\frac 12||R ^\nabla ||^2)\, dv\, . \label{3.6} \end{equation} \end{definition} \noindent For $R^{\nabla}$ on $M\, ,$ the uniform norm $||R^{\nabla}||_\infty$ is given by \begin{equation}||R^{\nabla}||^2_\infty = \sup_{x \in M}||R^{\nabla}||^2_x \, .\label{3.7} \end{equation} The normalized exponential Yang-Mills energy functional $\mathcal{YM}_e ^0$ has the following simple and useful advantage. \begin{proposition} \begin{equation}\mathcal{YM}_e ^0(\nabla ) \ge 0\quad \text{and}\quad \mathcal{YM}_e ^0(\nabla )=0\quad \Longleftrightarrow\quad R ^\nabla \equiv 0\, .\label{3.8}\end{equation} This is an analog of the corresponding property of the $p$-Yang-Mills functional, for $p \ge 2\, ,$ \begin{equation}\mathcal{YM}_p(\nabla ) \ge 0\quad \text{and}\quad \mathcal{YM}_p(\nabla )=0\quad \Longleftrightarrow\quad R ^\nabla \equiv 0\, .\label{3.9}\end{equation} \end{proposition} \begin{definition} The {\it stress-energy tensor} $S_{e,\mathcal{YM}^0 }$ associated with the normalized {\it exponential Yang-Mills energy functional} $\mathcal{YM}_e ^0 $ and the {\it stress-energy tensor} $S_{e,\mathcal{YM} }$ associated with the {\it exponential Yang-Mills energy functional} $\mathcal{YM}_e $ are defined respectively as follows: \begin{equation}\begin{aligned} S_{e,\mathcal{YM}^0 }(X,Y)&=\big (\exp (\frac{||R^{\nabla}||^2}2) -1 \big ) g(X,Y)-\exp (\frac{||R^{\nabla}||^2}2)\langle i_XR^{\nabla} ,i_YR^{\nabla} \rangle \, , \\ \end{aligned}\label{3.10} \end{equation} \begin{equation}\begin{aligned} S_{e,\mathcal{YM} }(X,Y)&=\exp
(\frac{||R^{\nabla}||^2}2)\big (g(X,Y)- \langle i_XR^{\nabla} ,i_YR^{\nabla} \rangle \big ) \end{aligned}\label{3.11} \end{equation} where $\langle\quad , \quad \rangle$ is the induced inner product on $A^{1}\big (Ad(P)\big )\, ,$ and $i_XR^{\nabla}$ is the interior multiplication by the vector field $X$ given by \begin{equation} (i_XR^{\nabla})(Y_1)=R^{\nabla}(X,Y_1)\, ,\label{3.12} \end{equation} for any vector fields $Y_{1}$ on $M$. \end{definition} \medskip {We calculate the rate of change of the \emph{normalized exponential Yang-Mills energy functional} $\mathcal{YM}^0_{e,g} $ and \emph{exponential Yang-Mills energy functional} $\mathcal{YM}_{e,g} $ when the metric $g$ on the domain or base manifold is changed. To this end, we consider a compactly supported smooth one-parameter variation of the metric $g\, ,$ i.e. a smooth family of metrics $g_s$ such that $g_0=g\, .$ Set $\delta g =\frac {\partial g_s}{ \partial s}_{\big {|}_{s =0}}\, .$ Then $\delta g$ is a smooth $2$-covariant symmetric tensor field on $M$ with compact support. These give birth to their associated stress-energy tensors.} \begin{lemma} With the same notations as above, we have \begin{equation}\begin{aligned} \frac{d}{ds} \mathcal{YM}_{e,g_s }^0({\nabla} )_{\big {|}_{s =0}} & = \frac 12\int_M\langle S_{e,\mathcal{YM}^0 },\delta g \rangle dv_g\\ \end{aligned}\label{3.13}\end{equation} \begin{equation}\begin{aligned} \frac{d}{ds} \mathcal{YM}_{e,g_s }({\nabla} )_{\big {|}_{s =0}} & = \frac 12\int_M\langle S_{e,\mathcal{YM} },\delta g \rangle dv_g \end{aligned}\label{3.14}\end{equation} where $S_{e,\mathcal{YM}^0 }$ and $S_{e,\mathcal{YM}}$ are as in \eqref{3.8} and \eqref{3.9} respectively. 
\end{lemma} \begin{proof} From \cite {Ba}, we obtain \begin{equation}\frac{d||R^{\nabla} ||_{g_s }^2}{ds }_{\big {|}_{s =0}}=-\sum _{i,j} \langle i_{e_i}R^{\nabla} , i_{e_j}R^{\nabla}\rangle \delta g(e_i,e_j)\label{3.15} \end{equation} and \begin{equation}\frac d{ds } \, dv_{g_s}\, _{\big {|}_{s =0}}=\frac 12\langle g,\delta g \rangle dv_g\, .\label{3.16} \end{equation} Then by the chain rule, \eqref{3.15}, \eqref{3.16}, and \eqref{3.10}, we have \begin{equation}\aligned \frac{d}{ds} \mathcal{YM}_{e,g_s }^0(\nabla) _{\big {|}_{s =0}} & = \int_M \frac {d}{ds } \bigg( \big (\exp (\frac{||R^{\nabla} ||_{g_s }^2}{2}) - 1 \big )\, dv_{g_s}\bigg ) _{\big {|}_{s =0}}\\ & = \int_M\exp (\frac{||R^{\nabla} ||^2}2)\frac d{ds } \big(\frac{||R^{\nabla} ||_{g_s }^2}2\big) {\big |}_{s =0}\, dv_g\\ & \qquad +\int_M \big (\exp (\frac 12||R ^\nabla ||^2) - 1\big )\frac d{ds } \, dv_{g_s}\, _{\big {|}_{s =0}} \\ &=\frac 12\int_M \bigg (\big (\exp (\frac 12||R ^\nabla ||^2) - 1\big )\langle g,\delta g \rangle \\ & \qquad -\exp (\frac{||R^{\nabla} ||^2}2) \sum _{i,j} \langle i_{e_i}R^{\nabla} , i_{e_j}R^{\nabla}\rangle \delta g(e_i,e_j)\bigg )\, dv_g \\ &=\frac 12\int_M\langle S_{e,\mathcal{YM}^0 },\delta g \rangle dv_g\, . \endaligned\label{3.17}\end{equation} Similarly, we can calculate $\frac{d}{ds} \mathcal{YM}_{e,g_s }(\nabla)_{\big {|}_{s =0}}$ and obtain the desired \eqref{3.14}. \end{proof} The {\it exterior differential operator} $d^\nabla :A^1\big (Ad(P)\big )\rightarrow A^2\big (Ad(P)\big )$ relative to the connection $\nabla$ is given by \begin{equation} (d^\nabla \sigma )(X_1, X_2)=(\nabla _{X_1}\sigma )(X_2) - (\nabla _{X_2}\sigma )(X_1) \, .
\label{3.18} \end{equation} Relative to the Riemannian structures of $Ad(P)$ and $TM$, the {\it codifferential operator} $\delta ^\nabla : A^2\big (Ad(P)\big )\rightarrow A^1\big (Ad(P)\big )$ is characterized as the adjoint of $d^\nabla$ via the formula \begin{equation} \int_M\langle d^\nabla \sigma ,\rho \rangle dv_g=\int_M\langle \sigma ,\delta ^\nabla \rho\rangle dv_g\, , \label{3.19} \end{equation} where $\sigma \in A^1\big (Ad(P)\big )$ and $\rho \in A^2\big (Ad(P)\big )\, ,$ at least one of which has compact support, and $dv_g$ is the volume element associated with the metric $g$ on $TM$. Then \begin{equation} (\delta ^\nabla \rho )(X_1)=-\sum_i(\nabla _{e_i}\rho )(e_i,X_1)\, .\label{3.20} \end{equation} \begin{definition} A connection $\nabla$ on the adjoint bundle $Ad(P)$ is said to be an {\it exponential Yang-Mills connection} and its associated curvature tensor $R^{\nabla}$ is said to be an \emph{exponential Yang-Mills field}, if $\nabla$ is a critical point of $\mathcal{YM}_e$ with respect to any compactly supported variation in the space of connections on $Ad(P)$ . \end{definition} \medskip \begin{lemma}[The first variation formula for $\mathcal{YM}_e ^0$ and $\mathcal{YM}_e$] Let $A \in A^1\big (Ad(P)\big )$ and $\nabla ^t =\nabla +t A $ be a family of connections on $Ad(P)$. Then \begin{equation}\aligned \frac d{dt}\mathcal{YM}_e ^0(\nabla ^t)_{\big {|}_{t =0}} = \frac d{dt}\mathcal{YM}_e (\nabla ^t)_{\big {|}_{t =0}} =\int_M\langle\delta ^\nabla \big(\exp (\frac 12||R ^\nabla ||^2)R ^\nabla \big), A \rangle \, dv\, .
\endaligned\label{3.21}\end{equation} Furthermore, the Euler-Lagrange equation for $\mathcal{YM}_e ^0$ or $\mathcal{YM}_e$ is \begin{equation} \exp (\frac 12||R ^\nabla ||^2)\delta ^\nabla R ^\nabla -i_{\text{grad} \big( \exp (\frac 12||R^\nabla ||^2)\big)}R ^\nabla =0\, , \label{3.22} \end{equation} or \begin{equation} \delta ^\nabla \big(\exp (\frac 12||R ^\nabla ||^2)R^\nabla \big) = 0\, .\label{3.23} \end{equation}\label{L: 3.6} \end{lemma} \begin{proof} By assumption, the curvature of $\nabla^t$ is given by \begin{equation} R ^{\nabla^t} = R ^\nabla + t (d^\nabla A) + t^2 [A, A]\, , \label{3.24} \end{equation} where $[A, A]\in A^2(Ad(P))$ is given by $[A, A]_{X,Y}=[A_X, A_Y]\, .$ Indeed, for any local vector fields $X,Y$ on $M$ with $[X,Y]=0\, ,$ we have via \eqref{3.18} \begin{equation} \begin{aligned} R ^{\nabla^t}_{X,Y} &=(\nabla _X +t A_X)(\nabla_Y + t A_Y)-(\nabla _Y + t A_Y)(\nabla _X + t A_X)\\ &= R ^\nabla_{X,Y} + t [\nabla _X , A_Y ] - t [\nabla _Y , A_X ] + t^2 [A_X, A_Y]\\ &= R ^\nabla_{X,Y} + t \nabla _X (A_Y) - t \nabla _Y (A_X) + t^2 [A, A]_{X,Y}\\ &= R ^\nabla_{X,Y} + t (d^\nabla A)_{X,Y} + t^2 [A, A]_{X,Y}\, .\\ \end{aligned} \label{3.25} \end{equation} Thus, \begin{equation} \aligned \exp\, (\frac 12|| R^{\nabla^t} ||^2)=\exp\, (\frac 12||R ^\nabla ||^2 +t \langle R ^\nabla ,d^\nabla A \rangle + \varepsilon(t^2))\, , \endaligned \label{3.26} \end{equation} where $\varepsilon (t^2) = O (t^2)\quad \text{as}\quad t \to 0\, .
$ Therefore, \begin{equation} \mathcal{YM}_e(\nabla ^t)=\int _M \exp\, (\frac 12||R ^\nabla ||^2+t\langle R ^\nabla ,d^\nabla A \rangle+\varepsilon(t^2) )\, dv \label{3.27} \end{equation} and via \eqref{3.19}, we have \begin{equation} \aligned \frac d{dt}\mathcal{YM}_e ^0(\nabla ^t)_{\big {|}_{t =0}} & = \frac d{dt}\mathcal{YM}_e (\nabla ^t)_{\big {|}_{t =0}}\\ & =\int_M \exp\, (\frac 12||R ^\nabla ||^2)\langle R^\nabla ,d^\nabla A \rangle \, dv\\ &=\int_M\langle\delta ^\nabla \big(\exp\, (\frac 12||R ^\nabla ||^2)R ^\nabla \big), A \rangle \, dv\, . \endaligned \label{3.28} \end{equation} This yields the Euler-Lagrange equation for $\mathcal{YM}_e ^0$ or $\mathcal{YM}_e$ by \eqref{3.20} as follows: \begin{equation} \aligned 0 &=\delta ^\nabla \big (\exp\, (\frac 12||R ^\nabla ||^2)R^\nabla \big) \\ &= - \sum_{i=1}^n \big (\nabla _{e_i}\big (\exp\, (\frac 12||R ^\nabla ||^2)R ^\nabla\big )\big )(e_i,\cdot ) \\ &=\exp\, (\frac 12||R ^\nabla ||^2)\delta ^\nabla R ^\nabla -i_{\text{grad}\big (\exp\, (\frac 12||R^\nabla ||^2)\big )}R ^\nabla\, . \endaligned \label{3.29} \end{equation} \end{proof} \begin{corollary} Every normalized exponential Yang-Mills connection or every exponential Yang-Mills connection $\nabla $ satisfies \eqref{3.29}.
\label{C: 3.7} \end{corollary} Dong and Wei derive \smallskip \noindent {\bf Theorem B (\cite {DW})} {\it ${\rm(i)}$ The Euler-Lagrange equation for the $F$-Yang-Mills functional $\mathcal{YM}_F$ is \begin{equation} F^{\prime }(\frac 12||R ^\nabla ||^2)\delta ^\nabla R ^\nabla -i_{\text{grad} \big(F^{\prime }(\frac 12||R^\nabla ||^2)\big)}R ^\nabla =0 \label{3.30} \end{equation} or \[ \delta ^\nabla \big(F^{\prime }(\frac 12||R ^\nabla ||^2)R^\nabla \big) = 0\, .\] ${\rm(ii)}$ The Euler-Lagrange equation for the $p$-Yang-Mills functional $\mathcal{YM}_p\, , p \ge 2\, ,$ is \begin{equation} \delta ^\nabla ( ||R ^\nabla ||^{p-2}R ^\nabla ) =0\label{3.31} \end{equation} \[\qquad \text{or}\qquad ||R ^\nabla ||^{p-2}\delta ^\nabla R ^\nabla - i_ {\text{grad} (||R ^\nabla ||^{p-2})}R ^\nabla =0\, . \]} \eqref{3.30} is also due to C. Gherghe (\cite {G}). \smallskip \begin{corollary} Let $||R ^\nabla ||$ be constant. Then the following are equivalent: \begin{equation} \aligned {\rm(i)} \quad& R ^\nabla \ \text{is a normalized exponential Yang-Mills field}\, ;\\ {\rm(ii)} \quad & R ^\nabla \ \text{is a Yang-Mills field}\, ;\\ {\rm(iii)} \quad & R ^\nabla \ \text{is a}\ p\text{-Yang-Mills field},\ p \ge 2\, ;\\ {\rm(iv)} \quad & R ^\nabla \ \text{is an exponential Yang-Mills field}\, ;\\ {\rm(v)} \quad & R ^\nabla \ \text{is an}\ F\text{-Yang-Mills field}\, . \endaligned\label{3.32} \end{equation} \label{C: 3.8} \end{corollary} \begin{proof} This follows at once from \eqref{3.29}-\eqref{3.31}.
\end{proof} \begin{lemma} Let $S_{e,\mathcal{YM}^0 }$ and $S_{e,\mathcal{YM} }$ be the stress-energy tensors defined by \eqref{3.10} and \eqref{3.11} respectively. Then for any vector field $X$ on $M$, we have \begin{equation} \begin{aligned} (\operatorname{div} S_{e,\mathcal{YM}^0 })(X)&=(\operatorname{div} S_{e,\mathcal{YM} })(X)\\ & = \exp (\frac{||R^\nabla||^2}2)\langle\delta ^\nabla R^\nabla ,i_X R^\nabla \rangle+\exp (\frac{||R^\nabla||^2}2)\langle i_Xd^\nabla R^\nabla ,R^{\nabla} \rangle \\ &\qquad - \langle i_{\text{grad}(\exp(\frac{||R^\nabla ||^2}2))}R^\nabla ,i_X R^\nabla \rangle\, , \end{aligned} \label{3.33} \end{equation} where $\text{grad} \, (\, \bullet\, )$ is the gradient vector field of $\, \bullet\, .$ \label{L: 3.9} \end{lemma} \smallskip \begin{definition} A curvature tensor $R^{\nabla} \in A^2 \big (Ad(P) \big )$ is said to satisfy an \emph {$e$-conservation law} if $S_{e,\mathcal{YM}^0}$ $\big($equivalently, by Lemma \ref{L: 3.9}, $S_{e,\mathcal{YM}}\big)$ is divergence free, i.e., \begin{equation} \text{div} S_{e,\mathcal{YM}^0 } = \text{div} S_{e,\mathcal{YM} } = 0\, . \label{3.34} \end{equation} \end{definition} \smallskip \begin{theorem} Every normalized exponential Yang-Mills field or every exponential Yang-Mills field $R ^\nabla $ satisfies an $e$-conservation law.\label{T: 3.11} \end{theorem} \begin{proof} It is known that $R ^\nabla $ satisfies the Bianchi identity \begin{equation} d^\nabla R ^\nabla =0\, . \label{3.35} \end{equation} Therefore, by Corollary \ref{C: 3.7}, Lemma \ref{L: 3.9} and \eqref{3.35}, we immediately derive the desired \eqref{3.34}. \end{proof} \section{Comparison theorems in Riemannian geometry} In this section, we will discuss comparison theorems with applications on Cartan-Hadamard manifolds or, more generally, on complete manifolds with a pole. Recall that a Cartan-Hadamard manifold is a complete simply-connected Riemannian manifold of nonpositive sectional curvature.
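The following standard examples, recorded here for concreteness, illustrate these notions.

\begin{example} Euclidean space $\mathbb{R}^n$ and the hyperbolic space $\mathbb{H}^n(-\alpha ^2)$ of constant curvature $-\alpha ^2$ are Cartan-Hadamard manifolds, and by the Cartan-Hadamard theorem every point of a Cartan-Hadamard manifold is a pole. For $\mathbb{H}^n(-\alpha ^2)\, ,$ the radial curvature is identically $-\alpha ^2\, ,$ and a direct computation gives $Hess(r)=\alpha \coth (\alpha r)\big (g-dr\otimes dr\big )\, ,$ so both bounds in case $({\rm i})$ of Theorem \ref{T: 4.1} below, with $\beta =\alpha\, ,$ are attained. \end{example}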
A {\it pole} is a point $x_0\in M$ such that the exponential map from the tangent space to $M$ at $x_0$ into $M$ is a diffeomorphism. By the {\it radial curvature} $K $ of a manifold with a pole, we mean the restriction of the sectional curvature function to all the planes which contain the unit vector $\partial (x)$ in $T_{x}M$ tangent to the unique geodesic joining $x_{0}$ to $x$ and pointing away from $x_{0}\, .$ Note that the tensor $g - dr \otimes dr$ vanishes in the radial direction $\partial $ and coincides with the metric tensor $g$ on the orthogonal complement $\partial ^{\bot}\, .$ \begin{theorem}(Hessian comparison theorem \cite{GW, DW, HLRW, W11}) Let $(M,g)$ be a complete Riemannian manifold with a pole $x_0$. Denote by $K(r)$ the radial curvature of $M$. Then \begin{equation} -\alpha ^2\leq K(r)\leq -\beta ^2\quad \text{with}\quad \alpha >0, \, \beta >0\tag{i \cite {GW}} \end{equation} \begin{equation} \Rightarrow\quad\beta \coth (\beta r)\big (g-dr\otimes dr\big )\leq Hess(r)\leq \alpha \coth (\alpha r)\big (g-dr\otimes dr\big );\label{4.1} \end{equation} \begin{equation*} K(r) = 0 \tag{ii \cite {GW}} \end{equation*} \begin{equation} \Rightarrow\quad\frac 1r\big (g-dr\otimes dr\big ) = Hess(r);\label{4.2} \end{equation} \begin{equation*} -\frac A{(1+r^2)^{1+\epsilon}}\leq K(r)\leq \frac B{(1+r^2)^{1+\epsilon}}\quad \text{with}\quad \epsilon > 0, \, A \ge 0, \quad \text{and}\quad 0 \le B < 2\epsilon\, \tag{iii \cite {GW}, \cite [Lemma 4.1.(iii)]{DW}} \end{equation*} \begin{equation} \Rightarrow\quad \frac{1-\frac B{2\epsilon}}{r}\big (g-dr\otimes dr\big )\leq Hess(r)\leq \frac{e^{\frac {A}{2\epsilon}}}{r}\big (g-dr\otimes dr\big ); \label{4.3} \end{equation} \begin{equation*}-\frac {A}{r^2}\leq K(r)\leq -\frac {A_1}{r^2}\quad \text{with}\quad 0 \le A_1 \le A\, \tag{iv \cite {HLRW}, \cite [Theorem A] {W11} } \end{equation*} \begin{equation} \Rightarrow\quad\frac{1+\sqrt{1+4A_1}}{2r}\bigg(g-dr\otimes dr\bigg) \le \text{Hess} (r) \le \frac{1+\sqrt{1+4A}}{2r}\bigg(
g-dr\otimes dr\bigg);\label{4.4} \end{equation} \begin{equation*} - \frac {A(A-1)}{r^2}\le K(r) \le - \frac {A_1(A_1-1)}{r^2}\quad \text{with}\quad A \ge A_1 \ge 1\, \tag{v \cite [Corollary 3.1] {W11}} \end{equation*} \begin{equation} \Rightarrow\quad \frac{A_1}{r}\bigg( g-dr\otimes dr\bigg) \le \text{Hess} r \leq \frac{A}{r}\bigg( g-dr\otimes dr\bigg); \label{4.5} \end{equation} \begin{equation*} \frac {B_1(1-B_1)}{r^2}\leq K(r)\le \frac {B(1-B)}{r^2}\, ,\text{with}\quad 0 \le B, \, B_1 \le 1\, \tag{vi \cite [Corollary 3.5] {W11}} \end{equation*} \begin{equation} \Rightarrow\quad\frac {|B - \frac 12| + \frac 12}{r} \bigg(g-dr\otimes dr\bigg) \le \text{Hess} r \le \frac{1+\sqrt{1+4B_1(1-B_1)}}{2r}\bigg( g-dr\otimes dr\bigg);\label{4.6} \end{equation} \begin{equation*} \frac {B_1}{r^2} \le K(r) \le \frac {B}{r^2}\quad \text{with}\quad 0 \le B_1 \le B \le \frac 14\, \tag{vii \cite [Theorem 3.5]{W11}} \end{equation*} \begin{equation} \Rightarrow\quad \frac{1+\sqrt{1-4B}}{2r} \bigg( g-dr\otimes dr\bigg) \le \text{Hess} r \le \frac{1+\sqrt{1+4B_1}}{2r}\bigg(g-dr\otimes dr\bigg);\label{4.7} \end{equation} \begin{equation*} -Ar^{2q}\leq K(r)\leq -Br^{2q}\quad \text{with}\quad A\geq B>0\, , q>0\, \tag{viii \cite {GW}}\end{equation*} \begin{equation} \begin{aligned} \Rightarrow\quad & B_0r^q\big (g-dr\otimes dr\big )\leq Hess(r)\leq (\sqrt{A}\coth \sqrt{A} )r^q\big (g-dr\otimes dr\big )\, , \operatorname{for}\, r\geq 1\, , \end{aligned} \label{4.8} \end{equation} where \begin{equation} B_0=\min \{1,-\frac{q+1}2+\big (B+(\frac{q+1}2)^2\big )^{\frac{1}{2}}\}\, . \label{4.9} \end{equation} \label{T: 4.1} \end{theorem} \begin{proof}$(\operatorname{i})$, $(\operatorname{ii})$ and $(\operatorname{viii})$ are treated in section 2 of \cite {GW}, $(\operatorname{iii})$ is proved in \cite {DW}, $(\operatorname{iv})$ is derived in \cite{HLRW, W11}, $(\operatorname{v})$ - $(\operatorname{vii})$ are proved in \cite {W11}. 
\end{proof} We note that there are many applications of this theorem (cf., e.g., \cite {WW}); $(\operatorname{iv})$ extends the asymptotic comparison theorem in (\cite {GW}, \cite {PRS}, p.39), and $(\operatorname{vii})$ generalizes (\cite {EF}, Lemma 1.2 (b)). \smallskip Let $\flat $ denote the bundle isomorphism that identifies the vector field $X$ with the differential one-form $X^{\flat}$, and let $\nabla $ be the Riemannian connection of $M$. Then the covariant derivative $\nabla X^{\flat}$ of $X^{\flat}$ is a $(0,2)$-type tensor, given by \begin{equation} \nabla X^{\flat} (Y,Z) = (\nabla _Y X^{\flat})( Z) = \langle \nabla _Y X, Z\rangle\, , \quad \forall\, Y,Z \in \Gamma (M)\, . \label{4.10} \end{equation} If $X$ is conservative, then \begin{equation} X = \nabla f,\quad X^{\flat} = df\quad \text{and}\quad \nabla X^{\flat} = \text{Hess} (f)\label{4.11}\end{equation} for some scalar potential $f$ $($cf. \cite {CW}, p. 1527$)$. A direct computation yields $($cf., e.g., \cite {DW}$)$\begin{equation} \text{div} (i_X S_{e,\mathcal{YM}^0 }) = \langle S_{e,\mathcal{YM}^0 },\nabla X^{\flat}\rangle + (\text{div} S_{e,\mathcal{YM}^0 }) (X)\, ,\quad \forall\, X\in \Gamma (M)\, . \label{4.12} \end{equation} By Theorem \ref{T: 3.11}, every normalized exponential Yang-Mills field $R ^\nabla $ satisfies an $e$-conservation law. It follows from the divergence theorem that for every bounded domain $D$ in $M\, $ with $C^1$ boundary $\partial D\, ,$ \begin{equation} \int_{\partial D}S_{e,\mathcal{YM}^0 }(X,\nu ) ds_g = \int_D\langle S_{e,\mathcal{YM}^0 },\nabla X^{\flat}\rangle dv_g\, , \label{4.13} \end{equation} where $\nu $ is the unit outward normal vector field along $\partial D$ with $(n-1)$-dimensional volume element $ds_g$.
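To illustrate \eqref{4.12} and \eqref{4.13}, we record the Euclidean model case, which is a special instance of the estimate \eqref{4.20} below.

\begin{example} Let $M=\mathbb{R}^n$ and $X=r\nabla r\, .$ Then $\nabla X^{\flat} = \text{Hess} (\frac 12 r^2)=g\, ,$ and \eqref{3.10} together with the trace identity $\sum_{i=1}^{n}\langle i_{e_i}R^{\nabla} ,i_{e_i}R^{\nabla} \rangle = 2||R^\nabla||^2$ $\big($cf. \eqref{4.25} below$\big )$ yields \begin{equation*} \langle S_{e,\mathcal{YM}^0 }, \nabla X^{\flat}\rangle = n \big (\exp (\frac{||R^\nabla||^2}2) - 1\big ) - 2||R^\nabla||^2 \exp (\frac{||R^\nabla||^2}2)\, , \end{equation*} which, by the definition \eqref{4.15} of the $e$-degree below, is bounded from below by $\big (n - 2d_e||R^\nabla||^2_\infty\big )\big (\exp (\frac{||R^\nabla||^2}2) - 1\big )\, ,$ in agreement with \eqref{4.20} for $h_1(r)=h_2(r)=\frac 1r\, .$ \end{example}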
When we choose the scalar potential $f (x) = \frac 12 r^2 (x)$, \eqref{4.11} becomes \begin{equation} X = r\nabla r,\quad X^{\flat} = r dr\quad \text{and}\quad \nabla X^{\flat} = \text{Hess} (\frac 12 r^2) = dr\otimes dr + r\text{Hess} (r)\, .\label{4.14} \end{equation} The conservative vector field $X$ and the $e$-conservation law will show how the curvature of the base manifold $M$, via the Hessian comparison Theorem \ref{T: 4.1}, influences the behavior of the stress-energy tensor $S_{e,\mathcal{YM}^0 }$ and of the underlying curvature field $R ^\nabla \in A^2(Ad(P))\, ,$ with the help of the concept \eqref{4.15} and the estimate \eqref{4.20} below. Analogous to the $F$-degree, we introduce \begin{definition} For a given curvature field $R^\nabla$, the \emph{$e$-degree} $d_e$ is the quantity given by \begin{equation} d_e = \sup_{x \in M} \frac {\exp\big (\frac{||R^\nabla||^2}2 (x)\big )}{\exp\big (\frac{||R^\nabla||^2}2(x)\big ) - 1}\, .\label{4.15} \end{equation} \label{D: 4.2} \end{definition} The $e$-degree $d_e$ will play a role in connecting the two separate parts of the normalized stress-energy tensor $S_{e,\mathcal{YM}^0 }$. Since $\frac {e^t}{e^t - 1} $ is a decreasing function of $t > 0$, with $1 < \frac {e^t}{e^t - 1} < \infty$, we have \begin{proposition} Suppose \begin{equation} \frac{||R^\nabla||^2}2(x) \le c\ \ \forall\ \ x \in M\, ,\label{4.16} \end{equation} where $c > 0$ is a constant. Then \begin{equation} d_e \ge \frac {e^c}{e^c - 1}\, . \label{4.17} \end{equation}\label{P: 4.3} \end{proposition} \begin{lemma} Let $M$ be a complete $n$-manifold with a pole $x_0$. Assume that there exist two positive functions $h_1(r)$ and $h_2(r)$ such that \begin{equation} h_1(r)(g-dr\otimes dr)\leq Hess(r)\leq h_2(r)(g-dr\otimes dr) \label{4.18} \end{equation} on $M\backslash \{x_0\}$.
If $h_2(r)$ satisfies \begin{equation} rh_2(r)\geq 1\, , \label{4.19} \end{equation} and $||R^\nabla|| > 0$ on $M\, ,$ then \begin{equation} \langle S_{e,\mathcal{YM}^0 },\nabla X^\flat \rangle\,\geq \,\big(1+(n-1)rh_1(r)-2 d_e ||R^\nabla||_{\infty}^2rh_2(r)\big)\big (\exp (\frac{||R^\nabla||^2}2) - 1\big )\, , \label{4.20} \end{equation} where $X=r \nabla r$.\label{L: 4.4} \end{lemma} \begin{proof}Choose an orthonormal frame $\{e_i,\frac \partial {\partial r}\}_{i=1,...,n-1}$ around $x\in M\backslash \{x_0\}$. Take $ X=r\nabla r$. Then \begin{equation} \nabla _{\frac \partial {\partial r}}X=\frac \partial {\partial r}\, , \label{4.21} \end{equation} \begin{equation} \nabla _{e_i}X=r\nabla _{e_i}\frac \partial {\partial r}=\sum_{j=1}^{n-1}rHess(r)(e_i,e_j)e_j\, . \label{4.22} \end{equation} Using \eqref{3.10}, \eqref{4.14}, $\big ($or \eqref{4.21}, \eqref{4.22}$\big )$, we have \begin{equation} \begin{aligned} \langle S_{e,\mathcal{YM}^0 },\nabla X^\flat \rangle&=\big (\exp (\frac{||R^\nabla||^2}2) - 1\big )(1+\sum_{i=1}^{n-1}rHess(r)(e_i,e_i)) \\ &\qquad -\sum_{i,j=1}^{n-1}\exp (\frac{||R^\nabla||^2}2)\langle i_{e_i} R^{\nabla} ,i_{e_j} R^{\nabla} \rangle r Hess(r)(e_i,e_j) \\ &\qquad -\exp (\frac{||R^\nabla||^2}2) \langle i_{\frac \partial {\partial r}} R^{\nabla} ,i_{\frac \partial {\partial r}} R^{\nabla} \rangle\, .
\end{aligned} \label{4.23} \end{equation} By \eqref{4.18} and \eqref{4.15}, \eqref{4.23} implies that \begin{equation} \begin{aligned} & \langle S_{e,\mathcal{YM}^0 },\nabla X^\flat \rangle\\ &\geq \big (\exp (\frac{||R^\nabla||^2}2) - 1\big )\big(1+(n-1)rh_1(r)\big) \\ &\qquad -\big (\exp (\frac{||R^\nabla||^2}2) - 1\big )\sum_{i=1}^{n-1}\langle i_{e_i} R^{\nabla} ,i_{e_i} R^{\nabla} \rangle r h_2(r) \frac{\exp (\frac{||R^\nabla||^2}2) }{\big (\exp (\frac{||R^\nabla||^2}2) - 1\big )}\\ &\qquad -\big (\exp (\frac{||R^\nabla||^2}2) - 1\big )\langle i_{\frac \partial {\partial r}} R^{\nabla} ,i_{\frac \partial {\partial r}} R^{\nabla} \rangle \frac{\exp (\frac{||R^\nabla||^2}2) }{\big (\exp (\frac{||R^\nabla||^2}2) - 1\big )}\\ &\geq \big (\exp (\frac{||R^\nabla||^2}2) - 1\big )\big(1+(n-1)rh_1(r)-2||R^\nabla||^2rh_2(r) \frac{\exp (\frac{||R^\nabla||^2}2) }{\big (\exp (\frac{||R^\nabla||^2}2) - 1\big )}\big) \\ &\qquad +\big (\exp (\frac{||R^\nabla||^2}2) - 1\big )(rh_2(r)-1)\langle i_{\frac\partial{\partial r}} R^{\nabla}, i_{\frac{\partial}{\partial r}}R^{\nabla} \rangle\frac{\exp (\frac{||R^\nabla||^2}2) }{\big (\exp (\frac{||R^\nabla||^2}2) - 1\big )}\\ &\geq \big(1+(n-1)rh_1(r)-2||R^\nabla||^2rh_2(r) d_e\big) \big (\exp (\frac{||R^\nabla||^2}2) - 1\big )\, .\end{aligned} \label{4.24} \end{equation} The last two steps follow from \eqref{4.19} and the fact that \begin{equation} \aligned & \sum_{i=1}^{n-1}\langle i_{e_i} R^{\nabla} ,i_{e_i} R^{\nabla} \rangle+\langle i_{\frac \partial {\partial r}} R^{\nabla} ,i_{\frac \partial {\partial r}} R^{\nabla} \rangle \\ &\quad = \sum_{1\le j_1\le n}\sum_{i=1}^{n}\langle R^\nabla (e_i,e_{j_1}), R^\nabla (e_i,e_{j_1})\rangle \\ &\quad = 2 ||R^\nabla||^2\, , \endaligned\label{4.25} \end{equation} where $e_n = \frac \partial{\partial r}\, .$ Now the Lemma follows immediately from \eqref{4.24} and \eqref{3.7}. 
\end{proof} \section{Monotonicity formulae} In this section, we will establish monotonicity formulae on complete manifolds with a pole.\smallskip \begin{theorem}[Monotonicity formulae]\label{t5.1} Let $(M,g)$ be an $n-$dimensional complete Riemannian manifold with a pole $ x_0$, $Ad(P)$ be the adjoint bundle and the curvature tensor $ R^\nabla \in A^2\big (Ad(P)\big )$ be an exponential Yang-Mills field. Assume that the radial curvature $K(r)$ of $M$ and the curvature tensor $ R^\nabla$ satisfy one of the following seven conditions:\smallskip \begin{equation} \aligned {\rm(i)}&\quad -\alpha ^2\leq K(r)\leq -\beta ^2\, \text{with}\, \alpha >0, \beta >0\, \text{and}\, (n-1)\beta - 2d_e\alpha \|R^\nabla \|^2 _\infty \geq 0;\\ {\rm(ii)}&\quad K(r) = 0\, \text{with}\, n-2d_e\|R^\nabla \|^2 _\infty>0;\\ {\rm(iii)}&\quad -\frac A{(1+r^2)^{1+\epsilon}}\leq K(r)\leq \frac B{(1+r^2)^{1+\epsilon}} \text{with}\, \epsilon > 0\, , A \ge 0\, , 0 < B < 2\epsilon\, , \text{and}\\ &\qquad n - (n-1)\frac B{2\epsilon} -2d_e e^{\frac {A}{2\epsilon}}\|R^\nabla \|^2 _\infty > 0;\\ {\rm(iv)}&\quad-\frac {A}{r^2}\leq K(r)\leq -\frac {A_1}{r^2}\, \text{with}\quad 0 \le A_1 \le A\, ,\text{and}\\ &\qquad 1 + (n-1)\frac{1 + \sqrt {1+4A_1}}{2} - d_e (1 + \sqrt {1+4A})\|R^\nabla \|^2 _\infty > 0;\\ {\rm(v)}&\quad- \frac {A(A-1)}{r^2}\le K(r) \le - \frac {A_1(A_1-1)}{r^2}\, \text{and}\, A \ge A_1 \ge 1\, , \text{and}\\ &\qquad 1+(n-1)A_1-2d_eA\|R^\nabla \|^2 _\infty > 0;\\ {\rm(vi)}&\quad \frac {B_1(1-B_1)}{r^2}\leq K(r)\le \frac {B(1-B)}{r^2}\, ,\text{with}\quad 0 \le B, \, B_1 \le 1\, , \text{and} \\ &\qquad 1 + (n-1)(|B-\frac {1}{2}|+ \frac {1}{2}) -d_e\big (1 + \sqrt {1+4B_1(1-B_1)}\big )\|R^\nabla \|^2 _\infty > 0;\\ {\rm(vii)}&\quad \frac {B_1}{r^2} \le K(r) \le \frac {B}{r^2} \text{with}\quad 0 \le B_1 \le B \le \frac 14\, , \text{and}\\ &\qquad 1+ (n-1)\frac{1 + \sqrt {1-4B}}{2} -d_e(1 + \sqrt {1+4B_1} )\|R^\nabla \|^2 _\infty > 0. 
\endaligned \label{5.1} \end{equation} Then \begin{equation} \frac 1{\rho _1^\lambda }\int_{B_{\rho _1}(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big)\, dv \leq \frac 1{\rho _2^\lambda }\int_{B_{\rho _2}(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big)\, dv\, ,\label{5.2} \end{equation} for any $0<\rho _1\leq \rho _2$, where \begin{equation} \lambda \le \begin{cases} n-2d_e\frac \alpha \beta \|R^\nabla \|^2 _\infty\ &\text {if } K(r)\ \text{obeys $($i$)$}\, ,\\ n-2d_e\|R^\nabla \|^2 _\infty\ &\text {if } K(r)\ \text{obeys $($ii$)$}\, ,\\ n - (n-1)\frac B{2\epsilon} -2d_ee^{\frac {A}{2\epsilon}}\|R^\nabla \|^2 _\infty\ &\text{if } K(r) \text{ obeys $($iii$)$}\, ,\\ 1 + (n-1)\frac{1 + \sqrt {1+4A_1}}{2} - d_e(1 + \sqrt {1+4A})\|R^\nabla \|^2 _\infty\ &\text{if } K(r) \text{ obeys $($iv$)$}\, ,\\ 1+(n-1)A_1-2d_eA\|R^\nabla \|^2 _\infty\ &\text{if } K(r) \text{ obeys $($v$)$}\, ,\\ 1 + (n-1)\big (|B-\frac {1}{2}|+ \frac {1}{2}\big ) -d_e\big (1 + \sqrt {1+4B_1(1-B_1)}\big )\|R^\nabla \|^2 _\infty\ &\text{if } K(r)\ \text {obeys $($vi$)$}\, ,\\ 1+ (n-1)\frac{1 + \sqrt {1-4B}}{2} -d_e(1 + \sqrt {1+4B_1} )\|R^\nabla \|^2 _\infty\ &\text{if } K(r)\ \text{obeys $($vii$)$}\, .\end{cases} \label{5.3} \end{equation}\label{T: 5.1} \end{theorem} \begin{proof} Take the smooth vector field $X=r\nabla r$ on $M\, .$ If $K(r)$ satisfies $($i$)$, then $h_2(r)=\alpha \coth (\alpha r)$ by Theorem \ref{T: 4.1}; since $\alpha r \coth (\alpha r)$ is increasing in $r$ and tends to $1$ as $r \to 0\, ,$ we have $rh_2(r)\ge 1\, ,$ so \eqref{4.19} holds.
Now Lemma \ref{L: 4.4} is applicable and by \eqref{4.20}, we have on $B_\rho(x_0)\backslash\{x_0\}\, ,$ for every $\rho >0,$ \begin{equation} \begin{aligned} &\langle S_{e,\mathcal{YM}^0 },\nabla X^\flat \rangle\\ &\geq \,\big(1+(n-1)\beta r \coth (\beta r)-2 d_e \alpha r \coth (\alpha r)\|R^\nabla \|^2 _\infty\big)\big (\exp (\frac{||R^\nabla||^2}2)-1\big)\\ &=\,\big(1+ \beta r \coth (\beta r)(n-1-2 \cdot d_e \cdot \frac{\alpha r \coth (\alpha r)}{\beta r \coth (\beta r)}\|R^\nabla \|^2 _\infty) \big)\big (\exp (\frac{||R^\nabla||^2}2)-1\big)\\ & > \,\big(1+ 1 \cdot (n-1-2\cdot d_e\cdot \frac{\alpha }{\beta } \cdot 1\|R^\nabla \|^2 _\infty) \big)\big (\exp (\frac{||R^\nabla||^2}2)-1\big)\\ &\ge \lambda \big (\exp (\frac{||R^\nabla||^2}2)-1\big)\, , \end{aligned}\label{5.4} \end{equation} provided that $$n-1-2\cdot d_e \cdot \frac{\alpha }{\beta }\|R^\nabla \|^2 _\infty \ge 0\, ,$$ since $$\beta r \coth (\beta r) > 1\ \text{for}\ r > 0\ , \text{and}\ \frac{\coth (\alpha r)}{\coth (\beta r)} < 1\, ,\text{for}\ 0 < \beta < \alpha\, , $$ and $\coth$ is a decreasing function. Similarly, from Theorem \ref{T: 4.1} and Lemma \ref{L: 4.4}, the above inequality holds for the cases (ii) - (vii) on $B_\rho(x_0)\backslash\{x_0\}\, .$ Thus, by the continuity of $\langle S_{e,\mathcal{YM}^0 },\nabla X^\flat \rangle$ and $ \exp (\frac{||R^\nabla||^2}2)\, ,$ and \eqref{3.10}, we have for every $\rho >0,$ \begin{equation} \aligned &\langle S_{e,\mathcal{YM}^0 },\nabla X^\flat \rangle\geq \lambda \big (\exp (\frac{||R^\nabla||^2}2)-1\big) \qquad \text{in} \quad B_\rho(x_0) \\ & \rho\, \big (\exp (\frac{||R^\nabla||^2}2)-1\big)\geq S_{e,\mathcal{YM}^0 }(X,\frac \partial {\partial r}) \qquad \text{on} \quad \partial B_\rho(x_0)\, .
\endaligned\label{5.5} \end{equation} It follows from \eqref{4.13} and \eqref{5.5} that \begin{equation} \rho\int_{\partial B_\rho(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big)\, ds \geq \lambda \int_{B_\rho(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big)\, dv\, . \label{5.6} \end{equation} Hence, we get from \eqref{5.6} the following \begin{equation} \frac{\int_{\partial B_\rho(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big )\, ds}{\int_{B_\rho(x_0)} \big (\exp (\frac{||R^\nabla||^2}2)-1\big )\, dv} \geq \frac \lambda \rho\, . \label{5.7} \end{equation} The coarea formula implies that \begin{equation} \frac {d}{d\rho}\int_{B_\rho(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big )\, dv =\int_{\partial B_\rho(x_0)} \big (\exp (\frac{||R^\nabla||^2}2)-1\big)\, ds\, .\label{5.8} \end{equation} Thus we have \begin{equation} \frac{\frac {d}{d\rho}\int_{B_\rho(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big )\, dv}{\int_{B_\rho(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big )\, dv} \geq \frac \lambda \rho \label{5.9} \end{equation} for a.e. $\rho >0\, .$ Integrating \eqref{5.9} over $[\rho _1,\rho _2]$, we have \begin{equation} \ln \int_{B_{\rho _2}(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big )\, dv -\ln \int_{B_{\rho _1}(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big )\, dv \geq \ln \rho _2^\lambda -\ln \rho _1^\lambda\, . \label{5.10} \end{equation} This proves \eqref{5.2}. \end{proof} \begin{corollary} Suppose that $M$ has constant sectional curvature $-\alpha ^2\le 0$ and $$ \begin{cases} n-1-2d_e \|R^\nabla \|^2 _\infty \ge 0 \ & \text{if} \ \ \alpha \ne 0;\\ n - 2d_e\|R^\nabla \|^2 _\infty > 0\ &\text{if}\ \ \alpha=0. \end{cases} $$ Let $R^\nabla \in A^2\big (Ad(P)\big )$ be an exponential Yang-Mills field.
Then \begin{equation} \begin{aligned} &\frac 1{\rho _1^{n-2d_e\|R^\nabla \|^2 _\infty}}\int_{B_{\rho _1}(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big ) \, dv \\ &\leq \frac 1{\rho _2^{n-2d_e\|R^\nabla \|^2 _\infty}} \int_{B_{\rho _2}(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big )\, dv\, , \end{aligned}\label{5.11} \end{equation} for any $x_0\in M$ and $0<\rho _1\leq \rho _2$.\label{C: 5.2} \end{corollary} \begin{proof} In Theorem \ref{T: 5.1}, if we take $\alpha =\beta \ne 0$ for the case (i) or $\alpha=0$ for the case (ii), this corollary follows immediately. \end{proof} \begin{proposition} Let $(M,g)$ be an $n-$dimensional complete Riemannian manifold whose radial curvature satisfies \begin{equation} {\rm(viii)} -Ar^{2q}\leq K(r)\leq -Br^{2q}\quad \text{with}\quad A\geq B>0\quad \text{and}\quad q>0.\label{5.12} \end{equation} Let $R^\nabla$ be an exponential Yang-Mills field, and \begin{equation} \delta :=(n-1)B_0-2d_e\|R^\nabla \|^2 _\infty\sqrt{A}\coth \sqrt{A} \geq 0\, ,\label{5.13} \end{equation} where $B_0$ is as in \eqref{4.9}. Suppose that \eqref{5.18} holds. Then \begin{equation} \begin{aligned} &\frac 1{\rho _1^{1 + \delta}}\int_{B_{\rho _1}(x_0)-B_1(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big ) \, dv \\ &\leq \frac 1{\rho _2^{1 + \delta}}\int_{B_{\rho _2}(x_0)-B_1(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big ) \, dv\, , \end{aligned}\label{5.14} \end{equation} for any $1\leq \rho _1\leq \rho _2$.\label{P: 5.3} \end{proposition} \begin{proof} Take $X=r\nabla r$. 
Applying Theorem \ref{T: 4.1}, \eqref{4.19}, and \eqref{4.20}, we have \begin{equation} \aligned \langle S_{e,\mathcal{YM}^0 },\nabla X^\flat \rangle &\geq \big (\exp (\frac{||R^\nabla||^2}2)-1\big)\big (1+\delta r^{q+1}\big) \endaligned \label{5.15} \end{equation} and \begin{equation} \aligned & S_{e,\mathcal{YM}^0 }(X,\frac \partial {\partial r}) = \exp (\frac{||R^\nabla||^2}2) \big ( 1 - \langle i_{\frac\partial{\partial r}}R^\nabla, i_{\frac\partial{\partial r}}R^\nabla \rangle \big ) - 1\qquad \text{on} \quad \partial B_1(x_0) \\ & S_{e,\mathcal{YM}^0 }(X,\frac \partial {\partial r}) = \rho \exp (\frac{||R^\nabla||^2}2) \big (1 - \langle i_{\frac\partial{\partial r}}R^\nabla, i_{\frac\partial{\partial r}}R^\nabla \rangle \big ) - \rho\qquad \text{on} \quad \partial B_\rho(x_0)\, . \endaligned \label{5.16} \end{equation} It follows from \eqref{4.13} that \begin{equation} \aligned & \rho\int_{\partial B_\rho(x_0)} \Big (\exp (\frac{||R^\nabla||^2}2) \big (1 - \langle i_{\frac\partial{\partial r}}R^\nabla, i_{\frac\partial{\partial r}}R^\nabla \rangle \big ) - 1\Big )\, ds \\ & \qquad - \int_{\partial B_1(x_0)} \Big (\exp (\frac{||R^\nabla||^2}2) \big (1 - \langle i_{\frac\partial{\partial r}}R^\nabla, i_{\frac\partial{\partial r}}R^\nabla \rangle \big ) - 1\Big )\, ds\\ & \ge \int_{B_\rho(x_0)-B_1(x_0)} (1+\delta r^{q+1}) \big (\exp (\frac{||R^\nabla||^2}2)-1\big)\, dv\, .
\endaligned\label{5.17} \end{equation} Hence, if \begin{equation} \int_{\partial B_1(x_0)} \Big (\exp (\frac{||R^\nabla||^2}2) \big (1 - \langle i_{\frac\partial{\partial r}}R^\nabla, i_{\frac\partial{\partial r}}R^\nabla \rangle \big ) - 1\Big )\, ds \ge 0\, ,\label{5.18}\end{equation} then, since $\langle i_{\frac\partial{\partial r}}R^\nabla, i_{\frac\partial{\partial r}}R^\nabla \rangle \ge 0$ and $r \ge 1$ on $B_\rho(x_0)-B_1(x_0)\, ,$ \begin{equation} \rho\int_{\partial B_\rho(x_0)} \big (\exp (\frac{||R^\nabla||^2}2)-1\big ) \, ds \geq (1+\delta )\int_{B_\rho(x_0)-B_1(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big) \, dv\, , \label{5.19}\end{equation} for any $\rho > 1\, .$ The coarea formula then implies \begin{equation} \frac{d\big (\int_{B_\rho(x_0)-B_1(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big ) \, dv\big )}{ \int_{B_\rho(x_0)-B_1(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big ) \, dv}\geq \frac{1+\delta} \rho\, d\rho \label{5.20} \end{equation} for a.e. $\rho\geq 1$. Integrating \eqref{5.20} over $[\rho _1,\rho _2]$, we get \begin{equation} \begin{aligned} &\ln \big(\int_{B_{\rho _2}(x_0)-B_1(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big )\, dv \big)\\ &\ \ -\ln \big( \int_{B_{\rho _1}(x_0)-B_1(x_0)}\big (\exp (\frac{||R^\nabla||^2}2)-1\big )\, dv \big) \\ &\geq (1+\delta) \ln \rho _2- (1+\delta) \ln \rho _1\, . \end{aligned} \label{5.21}\end{equation} This proves the proposition. \end{proof} \begin{corollary} Let $K(r)\, $ and $\delta$ be as in Proposition \ref{P: 5.3}, satisfying \eqref{5.12} and \eqref{5.13} respectively, and let $\, R^\nabla $ be an exponential Yang-Mills field. Suppose \begin{equation} \exp (\frac{||R^\nabla||^2}2) \big (1 - \langle i_{\frac\partial{\partial r}}R^\nabla, i_{\frac\partial{\partial r}}R^\nabla \rangle \big ) \ge 1\label{5.22} \end{equation} on $\partial B_1\, .$ Then \eqref{5.14} holds. \label{C: 5.4} \end{corollary} \begin{proof} The assumption \eqref{5.22} implies that \eqref{5.18} holds, and the assertion follows from Proposition \ref{P: 5.3}.
\end{proof} \section{Vanishing theorems for exponential Yang-Mills fields} \begin{theorem}[Vanishing Theorem]\label{T: 6.1} Suppose that the radial curvature $K(r)$ of $M$ satisfies one of the seven growth conditions in \eqref{5.1} $(\rm{i})$-$(\rm{vii}),$ Theorem \ref{T: 5.1}. Let $R^\nabla$ be an exponential Yang-Mills field satisfying the $\mathcal {YM}_e^0$-energy functional growth condition \begin{equation} \int_{B_\rho(x_0)} \big (\exp(\frac{||R^\nabla ||^2}2) - 1\big )\, dv = o(\rho^\lambda )\quad \text{as } \rho\rightarrow \infty\, ,\label{6.1} \end{equation} where $\lambda $ is given by \eqref{5.3}. Then $\exp(\frac{||R^\nabla ||^2}2)\equiv 1\, ,$ and hence $R^\nabla \equiv 0$. In particular, every exponential Yang-Mills field $R^{\nabla}$ with finite normalized exponential Yang-Mills $\mathcal {YM}_e^0$-energy vanishes on $M$. \end{theorem} \begin{proof} This follows at once from Theorem \ref{T: 5.1}. \end{proof} \begin{proposition} Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold whose radial curvature satisfies \eqref{5.12} $(\rm{viii})\, ,$ Proposition \ref{P: 5.3}. Let $\delta$ be as in \eqref{5.13}, in which $B_0$ is as in \eqref{4.9}. Suppose \eqref{5.18} holds. Then every exponential Yang-Mills field $R^\nabla$ with the growth condition \begin{equation} \int_{B_\rho(x_0)-B_1(x_0)}\big (\exp(\frac{||R^\nabla ||^2}2) - 1\big )\, dv = o(\rho^{1 + \delta })\text{\quad as }\rho\rightarrow \infty\label{6.2}\end{equation} vanishes on $M-B_1(x_0)\, .$ In particular, if $R^\nabla $ has finite normalized exponential Yang-Mills energy on $M-B_1(x_0)$, then $R^\nabla \equiv 0$ on $M-B_1(x_0)$.\label{P: 6.2} \end{proposition} \begin{proof} This follows at once from Proposition \ref{P: 5.3}.
\end{proof} \section{Vanishing theorems from exponential Yang-Mills fields to $F$-Yang-Mills fields} \begin{theorem} Suppose that the radial curvature $K(r)$ of $M$ satisfies one of the seven growth conditions in \eqref{1.1} $(\rm{i})$-$(\rm{vii})$, Theorem A, in which $d_F=1$. Let $R^\nabla$ be an exponential Yang-Mills field with $||R^\nabla|| = $ constant and \begin{equation} \operatorname{Volume}\big (B_{\rho} (x_0)\big ) = o({\rho}^{\lambda} )\quad \text{as } \rho\rightarrow \infty\, ,\label{7.1} \end{equation} where $\lambda $ is given by \eqref{1.4}, in which $d_F=1$. Then $R^\nabla \equiv 0$. In particular, every exponential Yang-Mills field $R^{\nabla}$ with constant $||R^\nabla||$ over a manifold of finite volume, $\operatorname{Volume}(M) < \infty\, ,$ vanishes.\label{T: 7.1} \end{theorem} \begin{proof} By Corollary \ref{C: 3.8}, this exponential Yang-Mills field $R^\nabla$ is a Yang-Mills field, which is a special case of an $F$-Yang-Mills field, where $F$ is the identity map; thus the $F$-degree of the identity map is $d_F = 1\, .$ We now apply the $F$-Yang-Mills Vanishing Theorem A, in which $F(t) =t$ and $d_F=1$: the $F$-Yang-Mills functional $\mathcal {YM}_F$ growth condition \eqref{1.3} is transformed into the volume growth condition \eqref{7.1} on the base manifold, and the conclusion $R^\nabla \equiv 0$ follows. \end{proof} \section{An average principle, isoperimetric and Sobolev inequalities} In this section, we state, interpret, and apply an average principle in a simple discrete version, then extend it to a dual (or continuous) version: \smallskip \begin{proposition} [{An average principle of concavity $($resp. convexity, linearity$)$}] \noindent Let $f$ be a concave function $($resp. convex function, linear function$)$.
Then \begin{equation} \begin{aligned} f(\text{average}) &\ge \text{average}\, (f)\, ,\\ \big (\operatorname{resp.}\quad f(\text{average}) &\le \text{average}\, (f)\, ,\\ f(\text{average}) & = \text{average}\, (f)\, \quad \big ). \end{aligned} \label{8.1} \end{equation}\label{P: 8.1} \end{proposition} Applying \eqref{8.1}, where the convex function $f = \exp\, $ and ``average" is taken over two positive numbers with respect to the sum, yields one of the simplest inequalities with far-reaching impact: \begin{equation} \sqrt {a \cdot b} = \overset {``f(average)" \swarrow} {\exp (\frac {A+B}{2})} \overset {(\text{Average}\, \text{Principle})} \le \overset {``average(f)"\swarrow} {\frac {\exp A + \exp B}{2}} = \frac {a + b}{2}\, . \label{8.2} \end{equation} That is, \begin{example}[G.M. $\le$ A.M.] The Geometric Mean is no greater than the Arithmetic Mean: \begin{equation} \begin{aligned} \sqrt {a \cdot b} & \le \frac {a + b}{2}\, , \quad \text{for}\quad a, b > 0\, ,\\ \text{with}\ ``&="\ \text{holding}\ \text{if}\ \text{and}\ \text{only}\ \text{if}\ a=b\, .\end{aligned}\label{8.3} \end{equation} \end{example} \noindent Indeed, let $a = \exp(A)$ and $b=\exp(B)\, .$ Then applying the Average Principle of Convexity \eqref{8.1} with $f = \exp\, $ yields \eqref{8.2}, and hence \eqref{8.3}. \smallskip \medskip \noindent {\bf A geometric interpretation of this inequality}: Among all rectangles on the Euclidean plane with a given perimeter $\mathcal L$, the square has the largest area $\mathcal A$.\smallskip Dually, this is equivalent to the following: \smallskip Among all rectangles on the Euclidean plane with a given area $\mathcal A$, the square has the least perimeter $\mathcal L$.
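As a quick numerical sanity check of \eqref{8.3} (purely illustrative and not part of the argument; the helper name \texttt{am\_gm\_gap} is ours):

```python
import math
import random

def am_gm_gap(a, b):
    """Arithmetic mean minus geometric mean of two positive numbers.

    By the G.M. <= A.M. inequality (8.3) the gap is nonnegative,
    and it vanishes exactly when a == b.
    """
    return (a + b) / 2.0 - math.sqrt(a * b)

random.seed(0)
# the gap is nonnegative (up to rounding) on random positive pairs ...
assert all(am_gm_gap(random.uniform(1e-3, 1e3),
                     random.uniform(1e-3, 1e3)) >= -1e-9
           for _ in range(10_000))
# ... and zero (up to rounding) in the equality case a == b
assert abs(am_gm_gap(4.2, 4.2)) < 1e-12
```

The gap equals $(\sqrt a - \sqrt b)^2/2$, which makes both the sign and the equality case transparent.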
Indeed, \begin{equation} 16\, \mathcal A = 16\, a \cdot b \le (2a +2b)^2\ = {\mathcal L}^2\, . \label{8.4}\end{equation} Equality holds if and only if the rectangle is a square, i.e., $a=b$.\smallskip A dual approach from discreteness to continuity yields\smallskip \noindent{\bf A sharp isoperimetric inequality for plane curves:} Among all simple closed smooth curves on the Euclidean plane with a given length $L$, the circle encloses the largest area $A\, :$ \begin{equation} 4\pi A \le L^2\, .\label{8.5}\end{equation} Equality holds if and only if the curve is a circle. This is equivalent to \smallskip \noindent{\bf The Sobolev inequality on $\mathbb {R}^2$ with optimal constant}: If $u \in W^{1,1}(\mathbb {R}^2)$, then \begin{equation} 4 \pi \int _ {\mathbb {R}^2} |u|^{2}\, dx \le \bigg (\int _ {\mathbb {R}^2} |\nabla u|\, dx\bigg )^{2} .\label{8.6}\end{equation}\bigskip Similarly, applying \eqref{8.1}, where $f = \exp\, $ and ``average" is averaging the sum of $n$ positive numbers, $n \ge 2$, yields \begin{equation} \root n\of{a_1 \cdots a_n} = \overset {``f(average)" \swarrow} {\exp (n^{-1}\sum_{j=1}^nA_j)} \overset {(\text{Average}\, \text{Principle})} \le \overset {``average(f)"\swarrow} {n^{-1}\sum_{j=1}^n\exp A_j} =n^{-1}\sum_{j=1}^na_j\, . \label{8.7} \end{equation} That is, \begin{example}[The geometric mean of $n$ positive numbers is no greater than their arithmetic mean] \begin{equation} \begin{aligned} \root n \of{a_1 \cdots a_n} &\le \frac {a_1 + \cdots + a_n}{n}\, , \quad \text{for}\quad a_1, \cdots, a_n > 0\, , \\ \text{with}\ ``&="\ \text{holding}\ \text{if}\ \text{and}\ \text{only}\ \text{if}\ a_1= \cdots = a_n\, .\end{aligned} \label{8.8} \end{equation} \end{example} For a dual version, let the concave function $f=\log\, ;$ the Average Principle, Proposition \ref{P: 8.1}, yields \begin{example} Let $g$ be a nonnegative measurable function on $[0,1]$.
Then \begin{equation} \log \int _0^1 g(t)\, dt \ge \int _0^1 \log \big ( g(t) \big )\, dt \label{8.9} \end{equation} whenever the right side is defined. \end{example} Isoperimetric and Sobolev inequalities can be generalized to higher dimensional Euclidean spaces. As in dimension two, the $n$-dimensional sharp isoperimetric inequality is equivalent (for sufficiently smooth domains) to:\smallskip \noindent {\bf The Sobolev inequality on $\mathbb {R}^n$ with optimal constant} \noindent If $u \in W^{1,1}(\mathbb {R}^n)$ and $\omega_n$ is the volume of the unit ball in $\mathbb {R}^n\, $, then \begin{equation} \bigg (\int _ {\mathbb {R}^n} |u|^{\frac {n}{n-1}}\, dx\bigg )^{\frac {n-1}{n}}\le \frac {1}{n} \frac {1}{\sqrt[n]{\omega_n}}\int _ {\mathbb {R}^n} |\nabla u|\, dx.\label{8.10}\end{equation} Isoperimetric and Sobolev inequalities extend to Riemannian manifolds $M$ with sharp constants and applications to optimal sphere theorems (cf., e.g., Wei-Zhu \cite {WZ}). \begin{theorem}[A sharp isoperimetric inequality \cite {Du, WZ}] There exists a constant $C(M)$ depending on $M$ such that for {\bf every} domain $\Omega$ $($in $M)$, \begin{equation} P^n \ge n^n \omega _n V^{n-1}(1 - C(M) V^{\frac 2n}), \label{8.11} \end{equation} where $P=\operatorname{vol}(\partial \Omega),\ V=\operatorname{vol}(\Omega)\, ,$ and $\omega _n$ is the volume of the unit ball in $\mathbb{R}^n$. \noindent Furthermore, on simply connected Riemannian manifolds of dimension $n$ with Ricci curvature bounded from below by $n - 1$, the best $C(M)$ one can take in the above inequality \eqref{8.11} is greater than or equal to \begin{equation}C_0 = \frac {n(n-1)}{2(n+2) \omega_n^{\frac 2n}}\, .
\label{8.12} \end{equation} \label{T: 8.4}\end{theorem} It is then by a standard technique, via the coarea formula and Cavalieri's principle, that \eqref{8.11} is equivalent to the following: \begin{theorem}[A sharp Sobolev inequality \cite{WZ}] There exists a constant $A=A(M)$ such that $\forall \varphi \in W^{1,1}(M),$ \begin{equation} (\int _M|\varphi|^{\frac{n}{n-1}} dv)^{\frac {n-1}{n}} \le K(n,1) (\int _M|\nabla \varphi| dv) +A(M)(\int _M|\varphi|^{\frac n{n+1}} dv)^{\frac {n+1}{n}},\ \ \ \ \ \ \ \label{8.13}\end{equation} where $$K(n,1)=\lim_{p\to 1^{+}}K(n,p)=\frac 1{n \omega_n^{\frac 1n}}\, .$$ \end{theorem} This isoperimetric inequality \eqref{8.11} certainly has its roots in global analysis and partial differential equations (see, e.g., \cite{AuL}). Furthermore, the optimal constants in \eqref{8.11} have geometric and even topological applications. An immediate example is that a sharp estimate on $C(M)$ recaptures \begin{theorem}[Bernstein isoperimetric inequality \cite{Ber}] On the $2$-sphere $S^2\, ,$ \begin{equation} \begin{aligned} L^2 & \ge 4 \pi A (1- \frac 1{4 \pi} A)\, ,\\ & \text{with}\ ``="\ \text{holding}\ \text{if}\ \text{and}\ \text{only}\ \text{if}\ \text{the}\ \text{domain}\ \text{in}\ \text{question}\ \text{is}\ \text{a}\ \text{disk}. \end{aligned}\label{8.14} \end{equation} \end{theorem} \medskip \begin{remark} For a generalization of the isoperimetric inequality to $n$-dimensional integer multiplicity rectifiable currents in $\mathbb R^{n+k}$, which follows from the deformation theorem in geometric measure theory, we refer to Federer and Fleming (\cite {FF}). \end{remark} \section{Convexity and Jensen's inequalities} We note that, by Proposition \ref{P: 8.1}, every convex function $f$ enjoys an Average Principle of Convexity, that is, Jensen's inequality in an average sense. From the duality between discreteness and continuity, we consider Jensen's inequality involving the normalized exponential Yang-Mills energy functional $\mathcal {YM}_e ^0$.
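The discrete analogue of this Jensen inequality is easy to test numerically. The following sketch is illustrative only (sample values stand in for the pointwise quantity $\frac{||R^\nabla||^2}{2}$, and the helper name \texttt{jensen\_gap} is ours):

```python
import math

def jensen_gap(samples):
    """mean(exp(t) - 1) - (exp(mean(t)) - 1) over a finite sample.

    Convexity of t -> exp(t) - 1 makes this nonnegative, with equality
    exactly when all samples coincide -- the discrete analogue of (9.2).
    """
    n = len(samples)
    mean_t = sum(samples) / n
    mean_f = sum(math.exp(t) - 1.0 for t in samples) / n
    return mean_f - (math.exp(mean_t) - 1.0)

# non-constant sample: strict inequality
assert jensen_gap([0.0, 1.0, 2.0]) > 0.0
# constant sample: equality (up to rounding)
assert abs(jensen_gap([1.5] * 8)) < 1e-12
```

The equality case for constant data mirrors the equality condition "$||R^\nabla||$ constant almost everywhere" in the continuous statement.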
\smallskip Let $M$ be a compact manifold and $E$ be a vector bundle over $M\, .$ Denote by $\mathcal L_1^p(E)$ the Sobolev space of connections on $E$ that are $p$-integrable together with their first derivatives. Set \begin{equation}\mathcal W (E) = \bigcap _{p \ge 1} \mathcal L_1 ^p (E) \cap \{\nabla: \mathcal {YM}_e ^0 (\nabla) < \infty\}\, . \label{9.1}\end{equation} \begin{theorem} (Jensen's inequality involving the normalized exponential Yang-Mills energy functional $\mathcal {YM}_e ^0$) Let $\nabla$ be a connection in $\mathcal{W} (E)$. Then $\big ($applying \eqref{8.1} yields$\big )$ \begin{equation} \exp \bigg ( \frac {1}{\text{Volume}(M)} \int _M \frac {||R^\nabla||^2}{2}\, dv \bigg) - 1 \le \frac {1}{\text{Volume}(M)} \int _M \big (\exp (\frac {||R^\nabla||^2}{2})- 1\big )\, dv\, .\label{9.2}\end{equation} \begin{equation} \quad \text{That}\quad \text{is},\quad \exp \bigg ( \frac {1}{\text{Volume}(M)} \mathcal {YM}(\nabla)\bigg) - 1 \le \frac {1}{\text{Volume}(M)} \mathcal {YM}_e ^0(\nabla)\, .\label{9.3}\end{equation} Equality is valid if and only if $||R^\nabla||$ is constant almost everywhere.\label{T: 9.1} \end{theorem} \begin{proof} This is a form of Jensen's inequality for the convex function $e^t - 1$ (cf. \cite [p.21] {Mo}). \end{proof} \begin{theorem} Let $\nabla$ be a minimizer in $\mathcal{W} (E)$ of the Yang-Mills functional $\mathcal {YM}$, and let the norm $||R^\nabla||$ be constant almost everywhere. Then the same connection $\nabla$ is a minimizer of the normalized exponential Yang-Mills functional $\mathcal {YM}_e ^0\, ,$ and for any minimizer $\tilde {\nabla}$ of the normalized exponential Yang-Mills functional $\mathcal {YM}_e ^0\, $ in $\mathcal{W} (E)$, the norm $||R^{\tilde {\nabla}}||$ is almost everywhere constant.
\label{T: 9.2} \end{theorem} \begin{proof} By the definition of the minimizer $\nabla$, the monotonicity of $t \mapsto e^t -1\, ,$ and Jensen's inequality \eqref{9.3}, we have for each $\tilde {\nabla}$ in $\mathcal{W} (E)$, \begin{equation} \aligned \exp \bigg ( \frac {1}{\text{Volume}(M)} \mathcal {YM}(\nabla)\bigg) - 1 & \le \exp \bigg ( \frac {1}{\text{Volume}(M)} \mathcal {YM}(\tilde {\nabla})\bigg) - 1\\ & \le \frac {1}{\text{Volume}(M)} \mathcal {YM}_e ^0(\tilde {\nabla})\, , \endaligned \label{9.4} \end{equation} so that \begin{equation} \exp \bigg ( \frac {1}{\text{Volume}(M)} \mathcal {YM}(\nabla)\bigg) - 1 \le \inf _{\tilde {\nabla} \in \mathcal{W} (E)}\frac {1}{\text{Volume}(M)} {\mathcal {YM}}_e ^0(\tilde {\nabla})\, . \label{9.5} \end{equation} On the other hand, since $||R^{\nabla}|| =$ constant a.e., \begin{equation} \frac {1}{\text{Volume}(M)} \mathcal {YM}_e ^0({\nabla}) = \exp (\frac {||R^\nabla||^2}{2})- 1 = \exp \bigg ( \frac {1}{\text{Volume}(M)} \mathcal {YM}(\nabla)\bigg) - 1\, ,\label{9.6} \end{equation} so that $\nabla$ is also a minimizer of the normalized exponential Yang-Mills functional $\mathcal {YM}_e ^0\, .$ Now we assume that $\tilde {\nabla}$ is any minimizer of the normalized exponential Yang-Mills functional $\mathcal {YM}_e ^0\, $ in $\mathcal{W} (E)$. Then \begin{equation} \frac {1}{\text{Volume}(M)} {\mathcal {YM}}_e ^0(\tilde {\nabla}) \le \frac {1}{\text{Volume}(M)} \mathcal {YM}_e ^0({\nabla})\, ,\label{9.7} \end{equation} and combining \eqref{9.7}, \eqref{9.6} and \eqref{9.4} allows us to improve all inequalities in \eqref{9.4} to equalities, so that we are ready to apply Theorem \ref{T: 9.1} and conclude that $||R^{\tilde {\nabla}}||$ is constant almost everywhere.
\end{proof} \section{$p$-Yang-Mills fields} Similarly, we set $$ \mathcal W ^p (E) = \mathcal L_1 ^p (E) \cap \mathcal L_1 ^2 (E)\, ,\quad p \ge 2\, ,$$ and obtain via \eqref{8.1} \smallskip \begin{theorem}[Jensen's inequality involving the $p$-Yang-Mills energy functional $\mathcal {YM}_p,\ p \ge 2$] Let $\nabla$ be a connection in $ \mathcal W ^p (E)$. Then \begin{equation} \frac 1p \bigg ( \frac {2}{\text{Volume}(M)} \int _M \frac {||R^\nabla||^2}{2}\, dv \bigg)^{\frac p2} \le \frac {1}{\text{Volume}(M)} \int _M (\frac {||R^\nabla||^p}{p})\, dv\, .\label{10.1}\end{equation} \begin{equation} \quad \text{That}\quad \text{is},\quad \frac 1p \bigg ( \frac {2}{\text{Volume}(M)} \mathcal {YM}(\nabla) \bigg )^{\frac p2}\le \frac {1}{\text{Volume}(M)} \mathcal {YM}_p(\nabla)\, .\label{10.2}\end{equation} Equality is valid if and only if $||R^\nabla||$ is constant almost everywhere.\label{T: 10.1} \end{theorem} \begin{proof} This is a form of Jensen's inequality for the convex function $t \mapsto \frac 1p (2t)^{\frac p2}\, , p \ge 2$ (cf. \cite [p.21] {Mo}). \end{proof} \begin{theorem} Let $\nabla$ be a minimizer in $\mathcal W ^p (E)$ of the Yang-Mills functional $\mathcal {YM}$, and let the norm $||R^\nabla||$ be constant almost everywhere. Then the same connection $\nabla$ is a minimizer of the $p$-Yang-Mills functional $\mathcal {YM}_p\, ,$ and for any minimizer $\tilde {\nabla}$ of the $p$-Yang-Mills functional $\mathcal {YM}_p\, $ in $\mathcal{W}^p (E)$, the norm $||R^{\tilde {\nabla}}||$ is almost everywhere constant. \label{T: 10.2} \end{theorem} \begin{proof} By the definition of the minimizer $\nabla$ and Jensen's inequality \eqref{10.2}, we have for each $\tilde {\nabla}$ in $\mathcal{W}^p (E)$, \begin{equation} \aligned \frac 1p \bigg ( \frac {2}{\text{Volume}(M)} \mathcal {YM}(\nabla) \bigg )^{\frac p2} & \le \frac 1p \bigg ( \frac {2}{\text{Volume}(M)} \mathcal {YM}(\tilde {\nabla}) \bigg )^{\frac p2} \\ & \le \frac {1}{\text{Volume}(M)} \mathcal {YM}_p(\tilde {\nabla})\, ,
\endaligned \label{10.3} \end{equation} so that \begin{equation} \frac 1p \bigg ( \frac {2}{\text{Volume}(M)} \mathcal {YM}(\nabla) \bigg )^{\frac p2} \le \inf _{\tilde {\nabla} \in \mathcal{W}^p (E)}\frac {1}{\text{Volume}(M)} \mathcal {YM}_p(\tilde {\nabla}). \label{10.4} \end{equation} On the other hand, since $||R^{\nabla}|| =$ constant a.e., \begin{equation} \frac {1}{\text{Volume}(M)} \mathcal {YM}_p({\nabla}) = \frac {||R^\nabla||^p}{p} = \frac 1p \bigg ( \frac {2}{\text{Volume}(M)} \mathcal {YM}(\nabla) \bigg )^{\frac p2}\, , \label{10.5} \end{equation} so that $\nabla$ is also a minimizer of the $p$-Yang-Mills functional $\mathcal {YM}_p\, .$ Now we assume $\tilde {\nabla}$ is any minimizer of the $p$-Yang-Mills functional $\mathcal {YM}_p\, $ in $\mathcal{W} ^p (E)$. Then \begin{equation} \frac {1}{\text{Volume}(M)} {\mathcal {YM}}_p(\tilde {\nabla}) \le \frac {1}{\text{Volume}(M)} \mathcal {YM}_p({\nabla})\, ,\label{10.6} \end{equation} and combining \eqref{10.6}, \eqref{10.5} and \eqref{10.3} allows us to improve all inequalities in \eqref{10.3} to equalities, so that we are ready to apply Theorem \ref{T: 9.1} and conclude that $||R^{\tilde {\nabla}}||$ is constant almost everywhere. \end{proof} \begin{remark}J. Eells and L. Lemaire first derive Jensen's inequality and establish its optimality in the setting of exponentially harmonic maps (\cite {EL}). F. Matsuura and H. Urakawa show $$\exp \bigg ( \frac {\mathcal {YM}(\nabla)}{\text{Volume}(M)}\bigg) \le \frac {\mathcal {YM}_e(\nabla)}{\text{Volume}(M)}\ \ \text{for any}\ \ \nabla \in \mathcal{W} (E)\, ,$$ and determine when equality holds (\cite {MU}). \end{remark} \section{An extrinsic average variation method and $\Phi_{(3)}$-harmonic maps} We propose an extrinsic average variational method as an approach to confront and resolve problems in global, nonlinear analysis and geometry $($cf. \cite {W1, W3}$)$.
In contrast to an average method in PDE that we applied in \cite {CW3} to obtain sharp growth estimates for warping functions in multiply warped product manifolds, we employ \emph {an extrinsic average variational method} in the calculus of variations $($\cite {W3}$)$, find a large class of manifolds of positive Ricci curvature that enjoy rich properties, and introduce the notions of \emph {superstrongly unstable $(\operatorname{SSU})$ manifolds} and \emph {$p$-superstrongly unstable $(p$-$\operatorname{SSU})$ manifolds} $($\cite {W5,W2,W4,WY}$)$. \begin{definition}\label{D: 11.1} A Riemannian manifold $M$ with its Riemannian metric $\langle \, , \, \rangle _M$ is said to be {\bf superstrongly unstable (SSU)} if there exists an isometric immersion of $M$ in $(\mathbb R^q, \langle \, \cdot\, \rangle _{\mathbb R^q})$ with its second fundamental form $B$, such that for every {\sl unit} tangent vector $v$ to $M$ at every point $x\in M$, the following symmetric linear operator $Q^M_x$ is {\sl negative definite}: \begin{equation}\langle Q^M_x(v),v\rangle_M=\sum^m_{i=1} \bigg (2 \langle B(v,e_i), B(v, e_i)\rangle _{\mathbb R^q} - \langle B(v,v), B(e_i, e_i)\rangle _{\mathbb R^q} \bigg )\label{11.1}\end{equation} and $M$ is said to be $p${\bf -superstrongly unstable ($p$-SSU)} for $p\geq 2$ if the following functional is {\sl negative valued}: \begin{equation}\label{11.2} F_{p,x}(v)=(p-2)\langle B(v,v), B(v,v)\rangle _{\mathbb R^q} + \langle Q^M_x(v),v\rangle _M, \end{equation} where $\lbrace e_1, \ldots, e_m\rbrace$ is a local orthonormal frame on $M$.
\end{definition} We prove, in particular, that every compact $\operatorname{SSU}$ manifold must be strongly unstable $(\operatorname{SU})$, i.e., $(\rm a)$ a compact $\operatorname{SSU}$ manifold cannot be the target of any nonconstant stable harmonic map from any manifold, $(\rm b)$ the homotopic class of any map from any manifold into a compact $\operatorname{SSU}$ manifold contains elements of arbitrarily small energy $E$, $(\rm c)$ a compact $\operatorname{SSU}$ manifold cannot be the domain of any nonconstant stable harmonic map into any manifold, and $(\rm d)$ the homotopic class of any map from a compact $\operatorname{SSU}$ manifold into any manifold contains elements of arbitrarily small energy $E$ $($cf. \cite [Theorem 2.2, p.321] {HoW2}$)$. \smallskip \subsection{Harmonic maps and $p$-harmonic maps, from a viewpoint of the first elementary symmetric function $\sigma _1$}\label{S: 11.1}$\qquad$ \smallskip \noindent We recall that at any fixed point $x_0 \in M\, ,$ a symmetric $2$-covariant tensor field $\alpha$ on $(M,g)$ in general, or the pullback metric $u^{\ast}h$ in particular, has eigenvalues $\lambda$ relative to the metric $g$ of $M$; i.e., the $m$ real roots of the equation $$\det (g_{ij} \lambda - \alpha_{ij}) = 0\, ,\ \text{where}\ g_{ij} = g(e_i,e_j),\ \alpha_{ij} = \alpha(e_i,e_j)\, ,$$ and $\{e_1, \cdots, e_m\}$ is a basis for $T_{x_0}(M)\, $ $($cf., e.g., \cite {HW}$)$. \noindent A harmonic map $u: (M,g) \to (N,h)$ can be viewed as a critical point of the energy functional, given by the integral of one half of the first elementary symmetric function $\sigma _1$ of the eigenvalues, relative to the metric $g$, of the pullback metric tensor $u^{\ast} h$, i.e., of its trace with respect to $g$, where $\{e_1, \cdots , e_m\}$ is a local orthonormal frame field on $M$. That is, \begin{equation} E(u)=\int_ M \frac 12 \sum_{i=1}^m h\big (du (e_i), du(e_i)\big ) \, dv = \int _M {\frac 12} {\big (\sigma _1(u^{\ast})\big )}\, dv.
\label{11.3} \end{equation} \smallskip \noindent A $p$-harmonic map can be viewed as a critical point of the $p$-energy functional $E_p(u)$, given by the integral of $\frac 1p$ times $\sigma _1$, or the trace of the pullback metric tensor, to the power ${\frac p2}$, i.e., \begin{equation} E_p(u) = \int_ M \frac 1p \bigg (\sum_{i=1}^m h\big (du (e_i), du(e_i)\big ) \bigg )^{\frac p2} \, dv= \int _M\, \frac {1}{p} \big ({\sigma _1} (u^{\ast})\big )^{\frac p2}\, dv. \label{11.4} \end{equation} \smallskip \noindent For the study of the stability of harmonic maps $($resp. $p$-harmonic maps$)$, Howard and Wei $($\cite {HoW2}$)$ $\big ($resp. Wei and Yau $($\cite {WY}$)$$\big )$ introduce the following notions: \begin{definition} A Riemannian manifold $M$ is said to be {\it strongly unstable} $(\operatorname{SU})$ $\big ($resp. {\it $p$-strongly unstable} $(p$-$\operatorname{SU})\big )$ if $M$ is neither the domain nor the target of any nonconstant smooth stable harmonic map (resp. stable $p$-harmonic map), and the homotopic class of maps from or into $M$ contains a map of arbitrarily small energy $E$ (resp. $p$-energy $E_p$). \end{definition} This definition leads to \begin{theorem} Every compact superstrongly unstable $(\operatorname{SSU})$ manifold $\big ($resp. $p$-superstrongly unstable $(p$-$\operatorname{SSU})$ manifold$\big )$ is strongly unstable $(\operatorname{SU})$ $\big ($resp. $p$-strongly unstable $(p$-$\operatorname{SU})\big )\, .$\label{T: 11.3} \end{theorem} Moreover, we have the following classification. \begin{theorem}[\cite {O, HoW}]\label{T: 11.4} Let $M$ be a compact irreducible symmetric space. The following statements are equivalent: \begin{enumerate} \item $M$ is $\operatorname{SSU}$. \item $M$ is $\operatorname{SU}$. \item $M$ is $\operatorname{U}$; i.e., $\operatorname{Id}_M$ is an unstable harmonic map.
\item $M$ is one of the following: \end{enumerate} \begin{equation} \begin{aligned} & {\rm(i)}\ \text{the simply connected simple Lie groups}\quad (A_l)_{l\geq 1},\quad B_2=C_2\quad \operatorname {and}\quad (C_l)_{l\geq 3};\\ & {\rm(ii)}\ SU(2n)/Sp(n),\quad n\geq 3;\\ & {\rm(iii)}\ \text{Spheres}\quad S^k,\quad k>2;\\ & {\rm(iv)}\ \text{Quaternionic Grassmannians}\quad Sp(m+n)/Sp(m)\times Sp(n), m\geq n\geq 1;\\ & {\rm(v)}\ E_6/F_4;\\ & {\rm(vi)}\ \text{Cayley Plane}\quad F_4/Spin(9)\,.\end{aligned}\label{11.5} \end{equation} \end{theorem} \begin{theorem}[Topological Vanishing Theorem]\label{T: 11.5} Suppose that $M$ is a compact $\operatorname{SSU}$ $(\operatorname{resp.}\, p$-$\operatorname{SSU}\, )$ manifold. Then $M$ is $\operatorname{SU}$ and \begin{equation} \begin{aligned} \pi_1 (M) & = \pi_2 (M) = 0\\ \big (\operatorname{resp.}\, \pi_1(M) & = \cdots = \pi _{[p]}(M) = 0\, \big ). \end{aligned} \label{11.6} \end{equation} Furthermore, the following three statements are equivalent: \begin{equation} \begin{aligned} & {\rm(a)}\ \pi_1 (M) = \pi_2 (M) = 0\, .\\ & {\rm(b)}\ \text{the infimum of the energy $E$ is $0$ among maps homotopic to the identity on}\, M\, . \\ & {\rm(c)}\ \text{the infimum of the energy $E$ is $0$ among maps homotopic to a map from}\, M\, . \end{aligned} \label{11.7} \end{equation} That is, \begin{equation} \begin{aligned} \pi_1 (M) = \pi_2 (M) = 0 & \overset {[\operatorname{Wh}]} \Longleftrightarrow \inf \{ E(u^{\prime}) : u^{\prime}\ \text{is homotopic to}\ \operatorname{Id}\ \text{on}\ M\} = 0\\ & \overset {[\operatorname {EL2}]} \Longleftrightarrow \inf \{ E(u^{\prime}) : u^{\prime}\ \text{is homotopic to}\ u : M \to \bullet\} = 0\, . \end{aligned} \label{11.8} \end{equation} \end{theorem} $($Cf.
\cite [the diagram on p.58]{W3}.$)$ \subsection{$\Phi$-harmonic maps, from a viewpoint of the second elementary symmetric function $\sigma _2$\, (\cite {HW})}$\qquad$ \smallskip \noindent We introduce the notion of a $\Phi$-{\it harmonic map}, whose energy density is the second elementary symmetric function $\sigma_2$ of the pullback metric tensor $u^{\ast} h$, an analogue of $\sigma_1$ in subsection \ref{S: 11.1} above. In \cite {HW}, Han and Wei show that the extrinsic average variational method in the calculus of variations employed in the study of harmonic maps, $p$-harmonic maps, $F$-harmonic maps and Yang-Mills fields can be extended to the study of $\Phi$-harmonic maps. In fact, we find a large class of manifolds with rich properties, \emph {$\Phi$-superstrongly unstable $(\Phi$-$\operatorname{SSU})$ manifolds}, establish their links to $p$-$\operatorname{SSU}$ manifolds and topology, and apply the theory of $p$-harmonic maps, minimal varieties and Yang-Mills fields to study such manifolds. With the same notations as above, we introduce the following notions: \begin{definition} A Riemannian manifold $(M^m,g)$ with a Riemannian metric $g$ is said to be $\Phi$-superstrongly unstable $(\Phi$-$\operatorname{SSU})$ if there exists an isometric immersion of $M^m$ in $\mathbb R^q$ such that, for all unit tangent vectors $v$ to $M^m$ at every point $x\in M^m$, the following functional is always negative: \begin{equation} F_{{\Phi}_x}(v)=\sum_{i=1}^m \big (4\langle B(v,e_i),B(v,e_i)\rangle-\langle B(v,v),B(e_i,e_i)\rangle\big ), \label{11.9} \end{equation} where $B$ is the second fundamental form of $M^m$ in $\mathbb R^q$, and $\{e_1,\cdots,e_m\}$ is a local orthonormal frame on $M$ near $x$. \end{definition} \begin{definition} A Riemannian manifold $M$ is $\Phi$-strongly unstable $(\Phi$-$\operatorname{SU})$ if it is neither the domain nor the target of any nonconstant smooth $\Phi$-stable stationary map, and the homotopic class of maps from or into $M$ contains a map of arbitrarily small $\Phi$-energy.
\end{definition} \begin{theorem} Every compact $\Phi$-superstrongly unstable $(\Phi$-$\operatorname{SSU})$ manifold is $\Phi$-strongly unstable $(\Phi$-$\operatorname{SU})\, .$\label{T: 11.8} \end{theorem} \subsection{$\Phi_{S}$-harmonic maps, from a viewpoint of an extended second symmetric function $\sigma _2$\, (\cite {FHLW})}$\qquad$ \smallskip \noindent We introduce the notion of $\Phi_{S}$-harmonic maps, a $\sigma _2$ version of the stress-energy tensor $S$.$\qquad$ \smallskip \noindent In \cite {FHLW}, Feng, Han, Li, and Wei show that the extrinsic average variational method in the calculus of variations employed in the study of the $\sigma_ 1$ and $\sigma _2$ versions of the pullback metric $u^{\ast} h$ on $M$ can be extended to the study of a $\sigma _2$ version of the stress-energy tensor $S$. In fact, we find a large class of manifolds, {\it $\Phi_S$-superstrongly unstable $(\Phi_S$-$\operatorname{SSU})$ manifolds}, introduce the notions of a stable $\Phi_S$-harmonic map and of $\Phi_S$-strongly unstable $($$\Phi_S$-$\operatorname{SU}$$)$ manifolds, and prove \begin{theorem} Every compact $\Phi_S$-superstrongly unstable $(\Phi_S$-$\operatorname{SSU})$ manifold is $\Phi_S$-strongly unstable $(\Phi_S$-$\operatorname{SU})\, .$\label{T: 11.9} \end{theorem} \subsection{$\Phi_{S,p}$-harmonic maps, from a viewpoint of a combined extended second symmetric function $\sigma _2$\, (\cite {FHW})}$\qquad$ \smallskip \noindent We introduce the notion of $\Phi_{S,p}$-harmonic maps, a combined generalized $\sigma _2$ version of the stress-energy tensor $S$ and a $\sigma _1$ version of the pullback $u^{\ast}$.\smallskip In \cite {FHW}, Feng, Han, and Wei show that the extrinsic average variational method in the calculus of variations employed in the study of the $\sigma_ 1$ and $\sigma _2$ versions of the pullback metric $u^{\ast} h$ on $M$ and the stress-energy tensor can be extended to the study of a combined extended second symmetric function $\sigma _2$
version. In fact, we find a large class of manifolds, $\Phi_{S,p}$-superstrongly unstable $(\Phi_{S,p}$-$\operatorname{SSU})$ manifolds, introduce the notions of a stable $\Phi_{S,p}$-harmonic map and of $\Phi_{S,p}$-strongly unstable $($$\Phi_{S,p}$-$\operatorname{SU}$$)$ manifolds, and prove \begin{theorem} Every compact $\Phi_{S,p}$-superstrongly unstable $(\Phi_{S,p}$-$\operatorname{SSU})$ manifold is $\Phi_{S,p}$-strongly unstable $(\Phi_{S,p}$-$\operatorname{SU})\, .$\label{T: 11.10} \end{theorem} \subsection{$\Phi_{(3)}$-harmonic maps, from a viewpoint of the third elementary symmetric function $\sigma _3$ (\cite {FHJW})}$\qquad$ \smallskip \noindent We introduce the notion of $\Phi_{(3)}$-harmonic maps, a $\sigma _3$ version of the pullback $u^{\ast}$. In fact, Feng, Han, Jiang, and Wei show that the extrinsic average variational method in the calculus of variations employed in the study of the $\sigma_ 1$ and $\sigma _2$ versions of the pullback metric $u^{\ast} h$ on $M$ can be extended to the study of the third symmetric function $\sigma _3$ version.
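For the unit sphere $S^m \subset \mathbb{R}^{m+1}$, whose second fundamental form at $x$ is $B(X,Y) = -\langle X,Y\rangle\, x$, the functional of Definition \ref{D: 11.11} evaluates to $6-m$, so $S^m$ satisfies the $\Phi_{(3)}$-$\operatorname{SSU}$ condition precisely when $m \ge 7$. The following script (purely illustrative and not part of the paper; all function names are ours) verifies this numerically with a random orthonormal tangent frame:

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(u):
    n = math.sqrt(dot(u, u))
    return [a / n for a in u]

def tangent_frame(p):
    """Orthonormal basis of T_p S^m via Gram-Schmidt on the standard basis."""
    dim = len(p)  # ambient dimension m + 1
    frame = []
    for k in range(dim):
        e = [0.0] * dim
        e[k] = 1.0
        # project out the unit normal p and the previously accepted vectors
        for w in [p] + frame:
            c = dot(e, w)
            e = [a - c * b for a, b in zip(e, w)]
        if dot(e, e) > 1e-8:
            frame.append(normalize(e))
    return frame  # m vectors

def phi3_functional(p, v, frame):
    """F_{Phi_(3)}(v) for the unit sphere, where B(X, Y) = -<X, Y> p."""
    def B(x, y):
        return [-dot(x, y) * pi for pi in p]
    return sum(6.0 * dot(B(v, e), B(v, e)) - dot(B(v, v), B(e, e))
               for e in frame)

random.seed(0)
for m in (6, 7, 13):
    p = normalize([random.gauss(0, 1) for _ in range(m + 1)])
    frame = tangent_frame(p)
    assert len(frame) == m
    v = frame[0]  # a unit tangent vector
    # closed-form value 6 - m: negative exactly when m >= 7
    assert abs(phi3_functional(p, v, frame) - (6 - m)) < 1e-9
```

The value $6-m$ is frame-independent, matching the dimension bound $m \ge 7$ used in the Sphere Theorem below.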
Whereas we can {view harmonic maps as $\Phi_{(1)}$-harmonic maps} (involving $\sigma_1$) and $\Phi$-harmonic maps as $\Phi_{(2)}$-harmonic maps (involving $\sigma_2$) , we introduce the notion of a $\Phi_{(3)}$-harmonic map and find a large class of manifolds, $\Phi_{(3)}$-superstrongly unstable ($\Phi_{(3)}$-$\operatorname{SSU}$) manifolds, introduce the notions of a stable $\Phi_{(3)}$-harmonic map, $\Phi_{(3)}$-strongly unstable ($\Phi_{(3)}$-$\operatorname{SU}$) manifolds, and prove \begin{theorem}[\cite {FHJW}] Every compact $\Phi_{(3)}$-superstrongly unstable $(\Phi_{(3)}$-$\operatorname{SSU})$ manifold is $\Phi_{(3)}$-strongly unstable $(\Phi_{(3)}$-$\operatorname{SU})\, .$\label{T: 11.11} \end{theorem} \begin{definition}[\cite {FHJW}]\label{D: 11.11} A Riemannian manifold $M^{m}$ is said to be $\Phi_{(3)}$-superstrongly unstable $($$\Phi_{(3)}$-$\operatorname{SSU}$$)$ if there exists an isometric immersion of $M^{m}$ in $\mathbb{R}^{q}$ with its second fundamental form $B$ such that for all unit tangent vectors $v$ to $M^{m}$ at every point $x\in M^{m},$ the following functional is negative valued. \begin{equation} {F_{\Phi_{(3)}}}_x (v)=\sum_{i=1}^{m}\big (6\langle B(v,e_{i}),B(v,e_{i})\rangle _{\mathbb R^q}-\langle B(v,v),B(e_{i},e_{i})\rangle_{\mathbb R^q}\big ),\label{11.8} \end{equation} where $\{e_{1},\cdots,e_{m}\}$ is a local orthonormal frame field on $M^{m}$ near $x$. \end{definition} \begin{theorem}\label{T: 11.13} Every $\Phi_{(3)}$-$\operatorname{SSU}$ manifold $M$ is $p$-$\operatorname{SSU}$ for any $2 \le p \le 6$. \end{theorem} \begin{proof} By Definition \ref{D: 11.11}, $\Phi_{(3)}$-$\operatorname{SSU}$ manifold enjoys \begin{equation} \begin{aligned} {F_{\Phi_{(3)}}}_x (v)=\sum_{i=1}^{m}\big (6\langle B(v,e_{i}),B(v,e_{i})\rangle _{\mathbb R^q}-\langle B(v,v),B(e_{i},e_{i})\rangle_{\mathbb R^q}\big )<0 \end{aligned}\label{11.7} \end{equation} for all unit tanget vector $v \in T_x(M)$. 
It follows from \eqref{11.2} and \eqref{11.8} that \begin{equation} \begin{aligned} F_{p,x}(v)&=(p-2)\langle B(v,v), B(v,v)\rangle _{\mathbb R^q} + \langle Q^M_x(v),v\rangle _M\\ & \le (p-2)\sum^m_{i=1} \langle B(v,e_i), B(v, e_i)\rangle _{\mathbb R^q} \\ & \qquad + \sum^m_{i=1} \big (2 \langle B(v,e_i), B(v, e_i)\rangle _{\mathbb R^q} - \langle B(v,v), B(e_i, e_i)\rangle _{\mathbb R^q} \big )\\ &\leq\sum_{i=1}^m\big (p\langle B(v, e_i), B(v, e_i)\rangle-\langle B(v,v), B( e_i, e_i)\rangle\big ) \\ &\leq\sum_{i=1}^m\big (6\langle B(v,e_i), B(v, e_i)\rangle-\langle B(v,v),B( e_i, e_i)\rangle\big )<0, \end{aligned} \end{equation} for $2\leq p\leq 6$. So by Definition \ref{D: 11.1}, $M$ is $p$-SSU for any $2\leq p\leq 6$. \end{proof} \begin{theorem}\label{T: 11.14} Every compact $\Phi_{(3)}$-$\operatorname{SSU}$ manifold $M$ is $6$-connected, i.e., \begin{equation} \pi_1(M) = \cdots = \pi_{6}(M) = 0. \end{equation} \end{theorem} \begin{proof} Since every compact $p$-SSU manifold is $[p]$-connected (cf. \cite [Theorem 3.10, p. 645] {W5}) and $M$ is $6$-SSU by Theorem \ref{T: 11.13}, the result follows. \end{proof} \begin{theorem}[Sphere Theorem]\label{T: 11.15} Every compact $\Phi_{(3)}$-$\operatorname{SSU}$ manifold $M$ of dimension $m \le 13$ is homeomorphic to an $m$-sphere. \end{theorem} \begin{proof} In view of Theorem \ref{T: 11.14}, $M$ is $6$-connected. By the Hurewicz isomorphism theorem, the $6$-connectedness of $M$ implies that the homology groups satisfy $H_1(M)=\cdots=H_6(M)=0$. It follows from the Poincar\'e duality theorem and the Hurewicz isomorphism theorem (\cite{SP}) that $H_{m-6}(M)=\cdots=H_{m-1}(M)=0$, $H_{m}(M)\neq 0$, and $M$ is $(m-1)$-connected. Hence $M$ is a homotopy $m$-sphere. Since $H_1(M)=\cdots=H_6(M)=0$ while $H_m(M)\neq 0$, we have $m\geq 7$. Consequently, the homotopy $m$-sphere $M$, being of dimension $m \ge 5$, is homeomorphic to an $m$-sphere by a theorem of Smale (\cite{Sm}).
\end{proof} We summarize some of the new manifolds found and the results obtained by the extrinsic average variational method in Table 1 in Section 1. \bibliographystyle{amsalpha} \begin{thebibliography}{SiSiYa} \bibitem[A]{A} W.K. Allard, {\em On the first variation of a varifold}, Ann. Math. (2){\bf 95}(1972), 417-491. \bibitem[Al]{Al} F.J. Almgren, {\it Some interior regularity theorems for minimal surfaces and extension of Bernstein's theorem}. Ann. Math. (2) {\bf 84} (1966), 277-292. \bibitem[Ar]{Ar} M. Ara, {\it Geometry of $F$-harmonic maps}. Kodai Math. J. {\bf 22}(1999), 243-263. \bibitem[AuL]{AuL} T. Aubin and Y.Y. Li, {\it On the best Sobolev inequality}. J. Math. Pures Appl. (9) {\bf 78}(1999), 353-387. \bibitem[Ba]{Ba} P. Baird, {\it Stress-energy tensors and the Lichnerowicz Laplacian}. J. Geom. Phys. {\bf 58}(2008), 1329-1342. \bibitem[BE]{BE} P. Baird and J. Eells, {\it A conservation law for harmonic maps}, in: Geometry Symposium, Utrecht 1980. Lecture Notes in Math. {\bf 894}, Springer (1982), 1-25. \bibitem[Ber]{Ber} F. Bernstein, {\it \"Uber die isoperimetrische Eigenschaft des Kreises auf der Kugeloberfl\"ache und in der Ebene}. Math. Ann. {\bf 60}(1905), 117-136. \bibitem[BI]{BI} M. Born and L. Infeld, {\it Foundation of a new field theory}. Proc. R. Soc. London Ser. A. {\bf 144}(1934), 425-451. \bibitem[BL]{BL} J.-P. Bourguignon and H.B. Lawson, {\it Stability and isolation phenomena for Yang-Mills fields}. Comm. Math. Phys. {\bf 79} (1981), 189-230. \bibitem[BLS]{BLS} J.-P. Bourguignon, H.B. Lawson, and J. Simons, {\it Stability and gap phenomena for Yang-Mills fields.} Proc. Nat. Acad. Sci. U.S.A. {\bf 76}(1979), 1550-1553. \bibitem[CCW]{CCW} S.-C. Chang, J.-T. Chen, and S.W. Wei, {\it Liouville properties for $p$-harmonic maps with finite $q$-energy.} Trans. Amer. Math. Soc. {\bf 368}(2016), 787-825. \bibitem[CW]{CW} B.-Y. Chen and S.W. Wei, {\em Riemannian submanifolds with concircular canonical field}. Bull. Korean Math. Soc. {\bf 56}(2019), 1525-1537.
\bibitem[CW2]{CW2} B.-Y. Chen and S.W. Wei, {\it $p$-harmonic morphisms, cohomology classes and submersions}. Tamkang J. Math. {\bf 40} (2009), 377–382. \bibitem[CW3]{CW3} B.-Y. Chen and S.W. Wei, {\it Sharp growth estimates for warping functions in multiply warped product manifolds}. J. Geom. Symmetry Phys. {\bf 52} (2019), 27-46. \bibitem[D]{D} S.K. Donaldson, {\em Mathematical uses of gauge theory}, in: The Encyclopedia of Mathematical Physics, Eds. J.-P. Francoise, G. Naber, and T.S. Tsun, Elsevier, 2006. \bibitem[Gi]{Gi} E. de Giorgi, {\it Una estensione del teorema di Bernstein}, Ann. Scuola Norm. Sup. Pisa (3) {\bf 19}(1965), 79-85. \bibitem[Du]{Du} O. Druet, {\it Isoperimetric inequalities on compact manifolds}. Geometriae Dedicata {\bf 90}(2002), 217-236. \bibitem[DW]{DW} Y. X. Dong and S.W. Wei, {\it On vanishing theorems for vector bundle valued $p$-forms and their applications}. Comm. Math. Phys. {\bf 304}(2011), 329-368. \bibitem[DLW]{DLW} Y. X. Dong, H. Z. Lin and S. W. Wei, {\it $L^2$ curvature pinching theorems and vanishing theorems on complete Riemannian manifolds}. Tohoku Math. J. (2){\bf 71} (2019), 581-607. \bibitem[EF]{EF} J. F. Escobar and A. Freire, {\em The spectrum of the Laplacian of manifolds of positive curvature}. Duke Math. J. {\bf 65} (1992), no. 1, 1–21. \bibitem[EL2]{EL2} J. Eells and L. Lemaire, {\em Selected topics in harmonic maps}. CBMS Regional Conference Series in Mathematics, 50. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1983. v+85 pp. \bibitem[EL]{EL} J. Eells and L. Lemaire, {\em Some properties of exponentially harmonic maps}. Partial differential equations, Part 1, 2 (Warsaw, 1990), 129-136, Banach Center Publ. 27, Part 1, 2, Polish Acad. Sci. Inst. Math., Warsaw, 1992. \bibitem[ES]{ES} J. Eells and J. H. Sampson, {\em Harmonic mappings of Riemannian manifolds}. Amer. J. Math. {\bf 86} (1964), 109–160. \bibitem[FF]{FF} H. Federer and W. H.
Fleming, {\it Normal and integral currents}, Ann. Math. {\bf 72} (1960), 458-520. \bibitem[FHLW]{FHLW} S. Feng, Y. Han, X. Li, and S.W. Wei, {\it The geometry of $\Phi_S$-harmonic maps}. J. Geom. Anal. {\bf 31}(2021), no. 10, 9469-9508. \bibitem[FHW]{FHW} S. Feng, Y. Han, and S.W. Wei, {\it Liouville type theorems and stability of $\Phi _{S,p}$-harmonic maps.} Nonlinear Anal. {\bf 212} (2021), Paper No. 112468, 38 pp. \bibitem[FHJW]{FHJW} S. Feng, Y. Han, K. Jiang, and S.W. Wei, {\it The geometry of $\Phi _{(3)}$-harmonic maps.} {Preprint}. \bibitem[F1]{Fl} W.H. Fleming, {\it On the oriented Plateau problem}. Rend. Circ. Mat. Palermo (2){\bf 11}(1962), 69-90. \bibitem[FK]{FK} M. Freedman and R. Kirby, {\it A geometric proof of Rochlin's theorem}. Algebraic and geometric topology (Proc. Sympos. Pure Math., Stanford Univ., Stanford, Calif., 1976), Part 2, pp. 85–97, Proc. Sympos. Pure Math., XXXII, Amer. Math. Soc., Providence, R.I., 1978. \bibitem[G]{G} C. Gherghe, {\em On a gauge-invariant functional}, Proc. Edinb. Math. Soc. (2){\bf 53}(2010), 143-151. \bibitem[Go]{Go} R.E. Gompf, {\it Three exotic $\mathbb{R}^4$'s and other anomalies}. J. Differ. Geom. {\bf 18}(1983), 317-328. \bibitem[GW]{GW} R.E. Greene and H. Wu, {\it Function theory on manifolds which possess a pole}. Lecture Notes in Math. {\bf 699}, Springer-Verlag, 1979. \bibitem[HLRW]{HLRW} Y. Han, Y. Li, Y. Ren, and S.W. Wei, {\it New comparison theorems in Riemannian geometry}. Bull. Inst. Math. Acad. Sin. (N.S.) {\bf 9}(2014), 163-186. \bibitem[HL]{HL} R. Hardt and F.H. Lin, {\it Mappings minimizing the $L^p$ norm of the gradient}. Comm. Pure Appl. Math. {\bf XL}(1987), 555-588. \bibitem[HW]{HW} Y. Han and S.W. Wei, {\it $\Phi$-harmonic maps and $\Phi$-superstrongly unstable manifolds}. J. Geom. Anal. {\bf 32} (2022), no. 1, Paper No. 3. \bibitem[HoW2]{HoW2} R. Howard and S.W. Wei, {\it Nonexistence of stable harmonic maps to and from certain homogeneous spaces and submanifolds of Euclidean space.} Trans. Amer. Math. Soc.
{\bf 294}(1986), 319–331. \bibitem[HoW]{HoW} R. Howard and S.W. Wei, {\it On the existence and nonexistence of stable submanifolds and currents in positively curved manifolds and the topology of submanifolds in Euclidean spaces}. Geometry and Topology of Submanifolds and Currents. 127–167, Contemp. Math., {\bf 646} Amer. Math. Soc., Providence, RI, 2015. \bibitem[J]{J} J. Jost, {\it Riemannian Geometry and Geometric Analysis.} 7th edition. Universitext. Springer, Cham, 2017. xiv+697. \bibitem[K]{K} R.C. Kirby, {\it The Topology of $4$-Manifolds}. Lecture Notes in Math. 1374. Springer-Verlag, Berlin, (1989). vi+108 pp. \bibitem[L]{L} H.B. Lawson, {\it Minimal Varieties in Real and Complex Geometry}. S\'eminaire de Math\'ematiques Sup\'erieures, No. {\bf 57} (\'Et\'e 1973). Les Presses de l'Universit\'e de Montr\'eal, Montreal, Que., 1974. 100 pp. \bibitem[La]{La} H. B. Lawson, {\it The Theory of Gauge Fields in Four Dimensions}. CBMS Regional Conference Series in Mathematics, {\bf 58} (1985). \bibitem[LWW]{LWW} Y.I. Lee, A.N. Wang, and S.W. Wei, {\em On a generalized 1-harmonic equation and the inverse mean curvature flow.} J. Geom. Phys. {\bf 61}(2011), 453–461. \bibitem[LY]{LY} F. H. Lin and Y.S. Yang, {\it Gauged harmonic maps, Born-Infeld electromagnetism, and Magnetic Vortices}. Comm. Pure Appl. Math. {\bf LVI} (2003), 1631-1665. \bibitem[Lu]{Lu} S. Luckhaus {\em Partial H\"older continuity for minima of certain energies among maps into a Riemannian manifold}. Indiana Univ. Math. J. {\bf 37}(1988), 349-367. \bibitem[MU]{MU} F. Matsuura and H. Urakawa {\em On exponential Yang-Mills connections}. J. Geom. Phys. {\bf 17}(1995), 73-89. \bibitem[M]{M} J. Milnor, {\em On manifolds homeomorphic to the $7$-sphere.} Ann. Math. (2) {\bf 64} (1956), 399-405. \bibitem[Mo]{Mo} C.B. Morrey, {\em Multiple integrals in the calculus of variations}. Die Grundlehren der mathematischen Wissenschaften, Band 130 Springer-Verlag New York, Inc., New York 1966 ix+506 pp. 
\bibitem[O]{O} Y. Ohnita, {\it Stability of harmonic maps and standard minimal immersion}. Tohoku Math. J. {\bf 38}(1986), 259-267. \bibitem[PRS]{PRS} S. Pigola, M. Rigoli, and A.G. Setti, {\em Vanishing and finiteness results in geometric analysis. A generalization of the Bochner technique}. Progress in Mathematics, {\bf 266.} Birkhäuser Verlag, Basel, 2008. xiv+282 pp. \bibitem[P]{P} P. Price, {\it A monotonicity formula for Yang-Mills fields}. Manuscripta Math. {\bf 43} (1983), 131-166. \bibitem[PS]{PS} P. Price and L. Simon, {\it Monotonicity formulae for Harmonic maps and Yang-Mills fields}, Preprint, Canberra 1982. Final version by P. Price, {\it A monotonicity formula for Yang-Mills fields}, Manus. Math. {\bf 43}(1983), 131-166. \bibitem[SU]{SU} R. Schoen, K. Uhlenbeck, {\it A regularity theory for harmonic maps}. J. Diff. Geom. {\bf 17}(1982), 307-335. \bibitem[Se1]{Se1} H.C.J. Sealey, {\it Some conditions ensuring the vanishing of harmonic differential forms with applications to harmonic maps and Yang-Mills theory}. Math. Soc. Camb. Phil. Soc. {\bf 91}(1982), 441-452. \bibitem[Se2]{Se2} H.C.J. Sealey, {\it The stress energy tensor and vanishing of $L^2$ harmonic forms}. Preprint. \bibitem[SiSiYa]{SiSiYa} L. Sibner, R. Sibner, and Y.S. Yang, {\it Generalized Bernstein property and gravitational strings in Born-Infeld theory}. Nonlinearity \textbf{20}(2007), 1193-1213. \bibitem[Sm]{Sm} S. Smale, {\it Generalized Poincar\'{e} conjecture in dimension greater than four}, Ann. of Math. 74(1961) 391-406. \bibitem[SP]{SP} E. Spanier, {\em Algebraic Topology}, McGraw-Hill Book Co., New York-Toronto, Ont.-London 1966 xiv+528 pp. \bibitem[W1]{W1} S.W. Wei, {\it An average process in the calculus of variations and the stability of harmonic maps}. Bull. Inst. Math. Acad. Sinica {\bf 11}(1983), 469-474. \bibitem[W2]{W2} S.W. Wei, {\it On topological vanishing theorems and the stability of Yang-Mills fields}. Indiana Univ. Math. J. {\bf 33} (1984), no. 4, 511-529. 
\bibitem[W3]{W3} S.W. Wei, {\it An extrinsic average variational method}. Recent developments in geometry (Los Angeles, CA, 1987), 55–78, Contemp. Math. {\bf 101} Amer. Math. Soc. Providence, RI, 1989. \bibitem[W4]{W4} S.W. Wei, {\it Liouville theorems and regularity of minimizing harmonic maps into super-strongly unstable manifolds.} Geometry and nonlinear partial differential equations (Fayetteville, AR, 1990), 131-154, Contemp. Math. {\bf 127}, Amer. Math. Soc. Providence, RI, 1992. \bibitem[W5]{W5} S.W. Wei, {\it The minima of the p-energy functional}. Elliptic and parabolic methods in geometry (Minneapolis, MN, 1994), 171-203, A K Peters, Wellesley, MA, 1996. \bibitem[W6]{W6} S. W. Wei, {\it Representing homotopy groups and spaces of maps by $p$-harmonic maps}. Indiana Univ. Math. J. {\bf 47}(1998), 625-670. \bibitem[W7]{W7} S.W. Wei, {\it On 1-harmonic functions}, SIGMA Symmetry Integrability Geom. Methods Appl. {\bf 3} (2007), Paper 127, 10 pp. \bibitem[W8]{W8} S.W. Wei, {\it $p$-Harmonic geometry and related topics}. Bull. Transilv. Univ. Brasov Ser. III 1 ({\bf 50})(2008), 415-453. \bibitem[W9]{W9} S.W. Wei, {\it The unity of $p$-harmonic geometry}. Recent developments in geometry and analysis, 439-483, Adv. Lect. Math. (ALM) {\bf 23}, Int. Press, Somerville, MA, 2012. \bibitem[W10]{W10} S.W. Wei, {\em Growth estimates for generalized harmonic forms on noncompact manifolds with geometric applications}. Geometry of Submanifolds, 247-269, Contemp. Math. {\bf 756} Amer. Math. Soc. Providence, RI, (2020). \bibitem[W11]{W11} S.W. Wei, {\it Dualities in comparison theorems and bundle-valued generalized harmonic forms on noncompact manifolds}. Sci. China Math. {\bf 64}(2021), 1649-1702. \bibitem[W12]{W12} S.W. Wei, {\it On exponential Yang-Mills fields}. Proceedings RIGA 2021, 235–258, Ed. Univ. Bucureşti, Bucharest, 2021. \bibitem[WLW]{WLW} S.W. Wei, J. F. Li, and L. 
Wu, {\em Generalizations of the Uniformization Theorem and Bochner's Method in $p$-Harmonic Geometry}. Proceedings of the 2006 Midwest Geometry Conference, Commun. Math. Anal. (2008), Conference 1, 46–68. \bibitem[WW]{WW} S.W. Wei and B.Y. Wu, Generalized Hardy type and Caffarelli-Kohn-Nirenberg type inequalities on Finsler manifolds. Internat. J. Math. {\bf 31} (2020), no. 13, 2050109, 27 pp. \bibitem[WWZ]{WWZ} S.W. Wei, L. Wu and Y.S. Zhang, {\em Remarks on stable minimal hypersurfaces in Riemannian manifolds and generalized Bernstein problems}. Geometry and topology of submanifolds and currents. Contemp. Math. {\bf 646} Amer. Math. Soc., Providence, RI, (2015), 169-186. \bibitem[WY]{WY} S.W. Wei and C.-M. Yau, {\it Regularity of $p$-energy minimizing maps and $p$-superstrongly unstable indices.} J. Geom. Anal. {\bf 4}(1994), 247-272. \bibitem[WZ]{WZ} S.W. Wei and M. Zhu, Sharp isoperimetric inequalities and sphere theorems. Pacific J. Math. {\bf 220}(2005), 183–195. \bibitem[Wh]{Wh} B. White {\it Infima of energy functionals in homotopy classes of mappings}. J. Differential Geom. 23 (1986), no. 2, 127–142. \bibitem[Xi1]{Xi1} Y.L. Xin, {\it Differential forms, conservation law and monotonicity formula}. Scientia Sinica (Ser A) {\bf XXIX}(1986), 40-50. \bibitem[Ya]{Ya} Y.S. Yang, {\it Classical solutions in the Born-Infeld theory}, Proc. R. Soc. Lond. A. \textbf{456}(2000), 615-640. \end{thebibliography} \end{document}
2205.03013v2
http://arxiv.org/abs/2205.03013v2
On mean-field control problems for backward doubly stochastic systems
\documentclass[11pt,reqno]{amsart} \usepackage{amsmath,amsfonts,amsthm,color,amssymb, mathrsfs} \usepackage{xcolor} \usepackage[T1]{fontenc} \usepackage{wasysym} \usepackage{ulem} \usepackage{stmaryrd} \usepackage[left=1in, right=1in, top=1.1in,bottom=1.1in]{geometry} \usepackage{enumitem} \usepackage{hyperref} \hypersetup{ colorlinks = true, citecolor = blue, linkcolor=blue } \setlength{\parskip}{4pt} \usepackage{cancel} \usepackage{comment} \allowdisplaybreaks \usepackage{csquotes} \usepackage{graphicx} \usepackage{pstricks} \usepackage{lmodern} \newtheorem{lemma}{Lemma} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}{Corollary} \newtheorem{definition}{Definition} \newtheorem{example}{Example} \newtheorem{proposition}{Proposition} \newtheorem{condition}{Condition} \newtheorem{assumption}{Assumption} \newtheorem{conjecture}{Conjecture} \newtheorem{problem}{Problem} \newtheorem{remark}{Remark} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{defi}[thm]{Definition} \newtheorem{cond}[thm]{\textsc{Condition}} \newtheorem{ex}[thm]{Example} \newtheorem{rem}[thm]{Remark} \def\theequation{\arabic{section}.\arabic{equation}} \def\thetheorem{\arabic{section}.\arabic{theorem}} \def\thelemma{\arabic{section}.\arabic{lemma}} \def\theass{\arabic{section}.\arabic{ass}} \def\thecond{\arabic{section}.\arabic{cond}} \def\thedefinition{\arabic{section}.\arabic{definition}} \def\theremark{\arabic{section}.\arabic{remark}} \def\theproposition{\arabic{section}.\arabic{proposition}} \def\tr{\text{tr}} \def\blue{\color{blue}} \def\red{\color{red}} \def\gray{\color{gray}} \def\la{\left\langle} \def\ra{\right\rangle} \def\e{\varepsilon} \def\R{\mathbb R} \def\E{\mathbb E} \def\T{\text{T}} \def\todo{\blue To continue from here.} \def\cL{\mathcal L} \def\thecorollary{\arabic{section}.\arabic{corollary}} \newcommand{\hypothesis}{Hypothesis} \newcounter{bean} 
\newcommand{\benuma}{\setlength{\labelwidth}{.25in} \begin{list} {(\alph{bean})}{\usecounter{bean}}} \newcommand{\eenuma}{\end{list}} \newcommand{\beginsec}{{\setcounter{equation}{0}}{\setcounter{theorem}{0}} {\setcounter{lemma}{0}}{\setcounter{definition}{0}}{\setcounter{remark}{0}} {\setcounter{proposition}{0}} {\setcounter{corollary}{0}} } \begin{document} \title[Mean-field control problems for BDSDEs]{On mean-field control problems for backward doubly stochastic systems} \author[J. Song]{Jian Song} \address{Research Center for Mathematics and Interdisciplinary Sciences, Shandong University, Qingdao, Shandong, 266237, China; and School of Mathematics, Shandong University, Jinan, Shandong, 250100, China} \email{[email protected]} \author[M. Wang]{Meng Wang} \address{School of Mathematics, Shandong University, Jinan, Shandong, 250100, China} \email{[email protected]} \date{\today} \maketitle \begin{abstract} This article is concerned with stochastic control problems for backward doubly stochastic differential equations of mean-field type, where the coefficient functions depend on the joint distribution of the state process and the control process. We obtain the stochastic maximum principle, which serves as a necessary condition for an optimal control, and we also prove its sufficiency under suitable conditions. As a byproduct, we prove the well-posedness of a class of mean-field fully coupled forward-backward doubly stochastic differential equations arising naturally from the control problem, which is of interest in its own right. Some examples are provided to illustrate applications of our results to control problems with scalar interaction and with first order interaction.
\end{abstract} \tableofcontents \section{Introduction} In this paper, we are concerned with a control problem in which the state process $\{(y_t, z_t), t\in[0,T]\}$ is governed by the following equation \begin{equation}\label{state} \left\{ \begin{aligned} -dy_t=&f(t,y_t,z_t,u_t,\mathcal L(y_t,z_t,u_t))dt+ g(t,y_t,z_t,u_t,\mathcal L(y_t,z_t,u_t)) d\overleftarrow B_t\\&-z_t dW_t,\,\,\,\,t\in[0,T],\\ y_T=&\xi. \end{aligned} \right. \end{equation} In the above equation, the control process $\{u_t, t\in[0,T]\}$ is a given stochastic process; $\mathcal L(y_t, z_t, u_t)$ stands for the law of the random vector $(y_t, z_t, u_t)$; $B$ and $W$ are two mutually independent Brownian motions; the stochastic integral with respect to $B$ is a backward It\^o integral while the one with respect to $W$ is forward. This equation is called a mean-field backward doubly stochastic differential equation (MF-BDSDE) due to its dependence on two Brownian motions as well as on the joint law of the state and control processes. The cost functional of the control problem is given by \begin{equation}\label{cost} J(u)=\E\left[ \int^T_0 h(t,y_t,z_t,u_t,\mathcal L(y_t,z_t,u_t))dt+\Phi(y_0,\mathcal L(y_0))\right]. \end{equation} The goal of this paper is to obtain the stochastic maximum principle (SMP), a necessary condition for an optimal control, i.e., a control minimizing $J(u)$. Below we briefly recall some related results; this account is by no means complete. Stochastic control problems have gained particular interest due to their broad applications in economics, finance, engineering, etc. The earliest works can be traced back to Kushner \cite{72k} and Bismut \cite{73b}. Among others, the theory of general backward stochastic differential equations (BSDEs) introduced in \cite{90pp} plays an important role in the study of stochastic control problems.
As an extension of BSDEs, backward doubly stochastic differential equations (BDSDEs) were introduced by Pardoux and Peng in \cite{94pp}. We refer to Yong and Zhou \cite{99yz} and Zhang \cite{zhang17} for more details on stochastic control, BSDEs, and other related topics. Mean-field models are useful for characterizing the asymptotic behavior of a system as its size becomes very large. Mean-field stochastic differential equations (MF-SDEs), also known as equations of McKean-Vlasov type, were first introduced by Kac \cite{56k} when investigating physical systems with a large number of interacting particles. The approach to studying large particle systems pioneered by Kac is now called propagation of chaos in the literature, and we refer to Sznitman \cite{s91} for further reading. In recent years, mean-field theories for BSDEs and BDSDEs were investigated by Buckdahn et al. \cite{09bdlp} and Li and Xing \cite{lx22}, respectively. As is well known in the game theory literature, it is in general hard to construct a Nash equilibrium explicitly if the number of players is large. The pioneering work of Lasry and Lions \cite{07ll} proposed a framework for approximating Nash equilibria in stochastic games with a large number of players. Huang et al. \cite{06hmc} dealt with large games via a similar approach. Later on, Carmona and Delarue~\cite{13cd} provided a probabilistic analysis of the large games formulated by Lasry and Lions, in which they resolved the limiting optimal control problem by studying a mean-field forward-backward stochastic differential equation (MF-FBSDE). We refer to \cite{15cd,18cd} and the references therein for more details about mean-field games and related topics. Mean-field control problems have also attracted considerable attention alongside the development of mean-field game theory. Initially, the investigation focused on control problems involving expected values; for instance, Buckdahn et al.
\cite{11bdl} obtained the global maximum principle for mean-field SDEs (see also \cite{11ad}). After Lions introduced the notion of derivatives with respect to probability measures in his seminal work \cite{lions07} (see also \cite{c12, 18cd}), a more general form of mean-field interaction where the law of the solution process is involved has been studied; see e.g. \cite{19cd,16blm}. We also refer to \cite{20hm,14ll,22ny} and the references therein for more development on mean-field control problems. Motivated by the existing works, in this paper we investigate the mean-field control problem (\ref{state})-(\ref{cost}) for MF-BDSDEs and aim to obtain the SMP. We remark that Han et al. \cite{10hpw} obtained the SMP for control problems involving such BDSDEs without mean-field terms (see also \cite{11sz,20zzy}). In our control problem \eqref{state}-\eqref{cost}, the state process and the cost functional both depend on the joint distribution of the state process and the control process. Note that in our setting, the dependence on the joint distribution is rather general; in particular, it includes the cases of $\varphi(t,X_t,\E[X_t],u_t)$ and $\widetilde \E\big[\varphi(t,X_t,\widetilde X_t,u_t)\big]$, which are known as the \textit{scalar interaction} and \textit{first order interaction} of mean-field type, respectively. These two cases will be treated in Section \ref{example} as examples of applying our main result. Let us finally summarize some difficulties and innovations of this work below. (i) From a modeling perspective, a BDSDE is a generalization of a BSDE and hence can describe more real-world phenomena. It is worth mentioning that this generalization is not trivial; for instance, the classical It\^o formula cannot be directly applied due to the appearance of the backward It\^o integral. We refer to \cite{94pp} for more details. (ii) The dependence of the coefficient functions on probability measures leads to a failure of the classical calculus.
We will employ the concept of L-derivative for functions of probability measures initiated by P. L. Lions \cite{lions07} (see also \cite{c12,18cd}). (iii) We prove the well-posedness of the fully coupled mean-field forward-backward doubly stochastic differential equations (FBDSDEs) \eqref{mf-bdsde}, which naturally arise when investigating the control problem. This type of equation was first introduced by Peng and Shi \cite{03ps} and was later further investigated, for instance, in \cite{10hpw}. This article is organized as follows. In Section \ref{2}, some preliminaries on the L-derivative of functions of probability measures are recalled. In Section \ref{3}, we prove our main result, the stochastic maximum principle, as well as a verification theorem. Section \ref{4} is devoted to the investigation of a type of fully coupled mean-field FBDSDE, which is of interest in its own right. Finally, we provide some examples in Section \ref{example}. To conclude this section, we introduce some notations that will be used throughout the article. For two vectors $u, v\in \R^n$, denote by $\la u,v\ra$ the scalar product of $u$ and $v$, and by $\left\vert v \right\vert=\sqrt{\la v, v\ra}$ the Euclidean norm of $v$. For $A,B\in\R^{n\times d }$, we denote the scalar product of $A$ and $B$ by $\la A,B\ra=\tr\{AB^{\text{T}}\}$ and the norm of the matrix $A$ by $\left\Vert A\right\Vert=\sqrt{\tr\{AA^{\text{T}}\}}$, where the superscript $\text{T}$ stands for the transpose of vectors or matrices. We also use the notation $\partial_x=\left(\frac{\partial}{\partial x_1},\cdots,\frac{\partial}{\partial x_n}\right)^\text{T}$ for $x\in \R^n$. Then for $\Psi:\R^n\rightarrow \R$, $\partial_x \Psi=\left( \frac\partial{\partial {x_i}} \Psi\right)_{n\times 1}$ is a column vector, and for $\Psi:\R^n\rightarrow \R^d$, $\partial_x \Psi=\left( \frac\partial{\partial {x_i}} \Psi_j\right)_{n\times d}$ is an $n\times d$ matrix.
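With this convention, $\partial_x\Psi$ is the transpose of the usual $d\times n$ Jacobian. As a quick numerical illustration (this sketch is ours and not part of the paper; the matrix $A$ and all variable names are illustrative), a finite-difference check in Python for the linear map $\Psi(x)=Ax$, for which $\partial_x\Psi=A^{\text{T}}$:

```python
import numpy as np

# Illustrative sketch (ours): with the convention above, for Psi: R^n -> R^d,
# (partial_x Psi)_{ij} = d(Psi_j)/d(x_i), an n x d matrix.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])        # Psi(x) = A x, with d = 2, n = 3
Psi = lambda x: A @ x

n, d = A.shape[1], A.shape[0]
x0 = np.array([0.1, -0.2, 0.3])
eps = 1e-7

partial_x_Psi = np.zeros((n, d))
for i in range(n):
    e = np.zeros(n); e[i] = eps
    partial_x_Psi[i, :] = (Psi(x0 + e) - Psi(x0)) / eps   # row i = d(Psi)/d(x_i)

# For a linear map this recovers A^T exactly, up to floating-point error.
assert np.allclose(partial_x_Psi, A.T)
```

The check confirms that row $i$ of $\partial_x\Psi$ collects the derivatives of all components of $\Psi$ with respect to $x_i$.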
Henceforth, we denote by $C$ a generic constant which can be different in different lines. \setcounter{equation}{0} \section{Preliminaries on L-derivative}\label{2} In this section, we collect some preliminaries on L-differentiability for functions of probability laws, which was initiated by P. L. Lions \cite{lions07}. We refer to \cite{c12} and \cite{18cd} for more details. For $m\in \mathbb N$, let $\mathcal P_2(\R^m)$ be the set of probability measures on $\R^m$ with finite second moment. Denote by $W_2(\cdot,\cdot)$ the 2-Wasserstein distance on $\mathcal{P}_2(\mathbb{R}^m)$, i.e., $$W_2(\mu_1,\mu_2)=\inf\left\{ \left(\int_{\R^{2m}}|x-y|^2\rho(dx,dy)\right)^{\frac{1}{2}}\right\},$$ where the infimum is taken over all $\rho\in \mathcal{P}_2(\mathbb{R}^{2m})$ with $\rho(dx,\R^m)=\mu_1(dx)$ and $\rho(\R^m,dy)=\mu_2(dy)$. Then, $(\mathcal P_2(\R^m),W_2)$ is a Polish space. It is clear from the definition that \[W_2(\mu_1,\mu_2)\leq \left(\E\Big[\big|X-Y\big|^2\Big]\right)^{\frac 12},\] where $X$ and $Y$ are $\R^m$-valued random variables with the distributions $\mu_1$ and $\mu_2$, respectively. For a function $H:\R^q\times \mathcal P_2(\R^m)\to \R$, we call $\widetilde H: \R^q\times L^2(\Omega;\R^m)\to \R$ a lifting of $H$ if $\widetilde H(x, Y) = H(x, \cL(Y))$, where $\cL(Y)$ denotes the probability law of $Y$. \begin{definition} A function $H: \R^q \times \mathcal P_2(\R^m)\to \R$ is said to be L-differentiable at $(x_0,\mu_0)\in \R^q\times \mathcal P_2(\R^m)$ if there exists a random variable $Y_0 \in L^2(\Omega;\R^m)$ with $\cL(Y_0)=\mu_0$ such that the lifted function $\widetilde H$ is Fr\'echet differentiable at $(x_0, Y_0)$, i.e., there exists a linear continuous mapping \[[D\widetilde{H}](x_0,Y_0):\R^q\times L^2(\Omega ;\mathbb{R}^m)\rightarrow \mathbb{R}\] such that \begin{equation}\label{def:F-D} \widetilde{H}(x_0+\Delta x, Y_0+\Delta Y)-\widetilde{H}(x_0,Y_0)=[D\widetilde{H}](x_0,Y_0)(\Delta x,\Delta Y)+o(|\Delta x|+ \|\Delta Y\|_{L^2}).
\end{equation} \end{definition} Note that $\mathcal H: = \R^q\times L^2(\Omega;\mathbb{R}^m)$ is a Hilbert space with the inner product \[\big< (x_1, Y_1),(x_2,Y_2)\big>_{\mathcal H}=\la x_1, x_2 \ra+\E[\la Y_1,Y_2\ra].\] By the Riesz representation theorem, the Fr\'echet derivative $[D\widetilde{H}](x_0, Y_0)$ can be viewed as an element $D\widetilde H(x_0, Y_0)$ of $\mathcal H$ in the sense that for all $(x, Y)\in \mathcal H$, \begin{equation}\label{e:F-D} [D\widetilde{H}](x_0,Y_0)(x, Y)=\big< D\widetilde{H}(x_0,Y_0), (x, Y)\big>_{\mathcal H}\,. \end{equation} Indeed, there exists a measurable function $g: \R^m\to \R^m$ depending only on $\mu_0$ such that $D\widetilde{H}(x_0, Y)=g(Y)$ a.s. for all $Y$ with $\cL(Y)=\mu_0$. Then, we define the {\it L-derivative of $H$ at $(x_0, \mu_0)$ along the random variable $Y$} by $g(Y)$, which is denoted by $\partial_\mu H(x_0,\mu_0)(Y)$. Thus, we have a.s. \[\partial_\mu H(x_0, \cL(Y)) (Y) = g(Y)= D\widetilde{H}(x_0, Y). \] \begin{example}\label{example-h} Consider the following function $H$ on $\mathcal P_2(\R^m)$, \begin{align*} H(\mu)=\int_{\R^m}h(y)\mu(dy), \end{align*} where $h:\R^m\to \R$ is twice differentiable with bounded second derivatives. Clearly, the lifted function is $\widetilde H(Y)=\E\big[h(Y)\big]$ with $\cL(Y)=\mu$, and $\partial_\mu H(\cL(Y))(Y)=D\widetilde H(Y)=\partial_y h(Y)$ by \eqref{def:F-D} and \eqref{e:F-D}. \end{example} Similarly, for a function $H:\R^q\times \mathcal P_2(\R^n\times \R^{n\times l}\times \R^k)\to \R$ depending on a vector $x\in \R^q$ and a joint probability law $\mu=(\mu_y, \mu_z, \mu_u)\in \mathcal P_2(\R^n\times \R^{n\times l}\times \R^k)$, we can define partial L-differentiability. We say that $H$ is jointly L-differentiable at $(x,\mu)$ if there exists a triple of random variables $(Y,Z,U)\in L^2(\Omega;\R^n\times \R^{n\times l}\times \R^k)$ with $\mathcal L(Y,Z,U)=\mu$ such that the lifted function $\widetilde H(x,Y,Z,U)=H(x,\mu)$ is Fr\'echet differentiable at $(x,Y,Z,U)$.
Observing $\R^q\times L^2(\Omega;\R^n\times \R^{n\times l}\times \R^k)\cong \R^q\times L^2(\Omega;\R^n)\times L^2(\Omega; \R^{n\times l})\times L^2(\Omega;\R^k)$, the partial L-derivatives $\partial_{\mu_y} H,\partial_{\mu_z} H$ and $\partial_{\mu_u} H$ at $(x, \mu)$ along $(Y, Z, U)$ can be defined via the following identity \begin{align*} D\widetilde H(x,Y,Z,U) =\Big(\partial_xH(x,\mu),\partial_{\mu_y}H(x,\mu),\partial_{\mu_z}H(x,\mu),\partial_{\mu_u}H(x,\mu)\Big)(Y,Z,U). \end{align*} We remark that $\partial_x H(x,\mu) (Y, Z, U)$ actually does not depend on $(Y, Z, U)$. A standard result says that joint continuous differentiability in the two arguments is equivalent to partial differentiability in each of the two arguments and joint continuity of the partial derivatives. Hence, the joint continuity of $\partial_x H(x,\mu)$ means the joint continuity with respect to the Euclidean distance on $\R^q$ and the 2-Wasserstein distance on $\mathcal P_2(\R^n\times \R^{n\times l}\times \R^k)$; the joint continuity of $\partial_{\mu_y}H(x,\mu)$ is understood as the joint continuity of the mapping $(x,Y,Z,U)\mapsto \partial_{\mu_y}H(x,\cL(Y, Z, U))(Y,Z,U)$ from $\R^q\times L^2(\Omega;\R^n\times \R^{n\times l}\times \R^k)$ to $L^2(\Omega;\R^n)$.\\ \setcounter{equation}{0} \section{Stochastic maximum principle}\label{3} In this section, we aim to derive our main result of the stochastic maximum principle. First, we fix some mathematical notations, formulate our control problem, and recall It\^o's formula for stochastic processes involving backward It\^o's integral. Then we present the assumptions which will be used throughout the paper. The maximum principle will be obtained via the classical variational method. Assuming proper convexity conditions on the Hamiltonian, we prove a verification theorem, i.e., showing that the stochastic maximum principle is also a sufficient condition for an optimal control. 
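To make the preliminaries of Section \ref{2} concrete, the following Python sketch (ours; the paper contains no code, and the choice $h(y)=y^2$, the Gaussian samples, and all variable names are illustrative assumptions) numerically checks two facts: the L-derivative of $H(\mu)=\int h\,d\mu$ along $Y$ is $\partial_y h(Y)$, as in Example \ref{example-h}, and $W_2(\mu_1,\mu_2)\le(\E[|X-Y|^2])^{1/2}$, using the fact that for one-dimensional empirical measures the optimal coupling pairs sorted samples:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Check: for H(mu) = E[h(Y)] with h(y) = y^2, the L-derivative along Y is
# h'(Y) = 2Y, i.e. the Frechet derivative of the lifting acts as E[2Y * dY].
Y  = rng.normal(1.0, 2.0, N)        # samples representing mu = L(Y)
dY = rng.normal(0.0, 1.0, N)        # a perturbation direction in L^2
h  = lambda y: y ** 2
H_lift = lambda Y: np.mean(h(Y))    # lifted function on L^2(Omega)

eps = 1e-5
directional = (H_lift(Y + eps * dY) - H_lift(Y)) / eps   # [D H~](Y)(dY)
riesz = np.mean(2 * Y * dY)                              # <2Y, dY>_{L^2}
assert abs(directional - riesz) < 1e-3                   # agree up to O(eps)

# Check: W_2(mu1, mu2) <= E[|X - Y|^2]^{1/2} for empirical measures.
# In one dimension the optimal coupling pairs sorted samples.
X  = rng.normal(0.0, 1.0, N)
Y2 = rng.normal(0.5, 1.5, N)
w2 = np.sqrt(np.mean((np.sort(X) - np.sort(Y2)) ** 2))   # empirical W_2
l2 = np.sqrt(np.mean((X - Y2) ** 2))                     # one particular coupling
assert w2 <= l2
```

The first assertion succeeds because the difference quotient equals $\E[2Y\,\Delta Y]+\varepsilon\,\E[(\Delta Y)^2]$ exactly; the second holds because the sorted pairing is the optimal coupling of the two empirical measures, while the given pairing is just one admissible coupling.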
\subsection{Some preliminaries for the control problem} On a complete probability space $(\Omega,{\mathcal{F}},P)$, let $\{B_t\}_{t\geq 0}$ and $\{W_t\}_{t\geq 0}$ be two mutually independent Brownian motions, taking values in $\R^d$ and $\R^l$ respectively. Denote by $\mathcal N$ the collection of $P$-null sets of $\mathcal F$. For each $t\in[0,T]$, denote \[\mathcal F_t=\mathcal F_t^W\vee\mathcal F_{t,T}^B\, ,\] where $\mathcal F_t^W=\sigma\big\{W_r,0\leq r\leq t\big\} \vee \mathcal N$ is the augmented $\sigma$-field generated by $W$ and similarly $\mathcal F_{t,T}^B=\sigma\big\{B_r-B_t,t\leq r\leq T\big\} \vee \mathcal N$. We stress that $\mathcal F_t$ is neither increasing nor decreasing in $t$ and hence the family $(\mathcal F_t)_{t\in[0,T]}$ does not constitute a filtration. Now let us introduce the following spaces: \begin{align*} &L_{\mathcal{G}}^2(\mathbb{R}^n)=\Big\{\xi:\Omega\rightarrow\mathbb{R}^n;\ \xi \text{ is } \mathcal{G}\text{-measurable and }\E\left[|\xi|^2\right]<+\infty\Big\} \text{ for any $\sigma$-field}\ \mathcal G\subset \mathcal F;\\ &L^2_{\mathcal{F}}([s, r];\mathbb{R}^n)=\Big\{\phi:[s,r]\times\Omega\rightarrow \mathbb{R}^n;\ \phi_t \text{ is } \mathcal F_t\text{-measurable for } t\in[s,r] \text{ and } \E\left[\int^r_s |\phi_t|^2dt\right]<+\infty\Big\};\\ &S^2_{\mathcal{F}}([s, r];\mathbb{R}^n)=\Big\{\phi:[s,r]\times\Omega\rightarrow \mathbb{R}^n;\ \phi \text{ is continuous a.s., } \phi_t \text{ is } \mathcal F_t\text{-measurable for } t\in[s,r], \\ & \hspace{9cm}\text{ and } \E\Big[\sup_{s\leq t\leq r} |\phi_t|^2\Big]<+\infty\Big\}. \end{align*} The state process $(y_t,z_t)_{0\le t\le T}$ is governed by the following BDSDE \begin{align}\label{state-e} \left\{ \begin{aligned} -dy_t=&f(t,y_t,z_t,u_t,\mathcal L(y_t,z_t,u_t))dt+ g(t,y_t,z_t,u_t,\mathcal L(y_t,z_t,u_t)) d\overleftarrow B_t\\&-z_t dW_t,\, t\in[0,T],\\ y_T=&\xi, \end{aligned} \right. \end{align} with $\xi$ a given $\mathcal F_T$-measurable random variable.
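To fix ideas, we mention a minimal illustrative special case (the coefficients below are chosen purely for exposition and play no role later): take $n=l=d=1$, $g\equiv 0$ and $f(t,y,z,u,\mu)=\alpha y+\beta\int_{\R}y'\,\mu_y(dy')+u$ for constants $\alpha,\beta\in\R$. Then \eqref{state-e} reduces to the linear mean-field BSDE \[-dy_t=\big(\alpha y_t+\beta\,\E[y_t]+u_t\big)dt-z_t\,dW_t,\qquad y_T=\xi,\] in which the law of the state enters only through the mean $\E[y_t]$; the general equation \eqref{state-e} adds the backward stochastic integral against $\overleftarrow B$ and the dependence on the full joint law $\mathcal L(y_t,z_t,u_t)$.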
We aim to minimize the cost functional given by \begin{equation}\label{cost-e} \begin{aligned} J(u)=\E\left[ \int^T_0 h(t,y_t,z_t,u_t,\mathcal L(y_t,z_t,u_t))dt+\Phi(y_0,\mathcal L(y_0))\right], \end{aligned} \end{equation} over the set $\mathcal U:=L^2_{\mathcal F}([0,T];U)$ of admissible controls, where $U$ is a closed convex subset of $\R^k$. The functions $f$, $g$ and $h$ are measurable mappings from $[0,T]\times \R^n\times \R^{n\times l}\times \R^k \times \mathcal P_2(\R^n\times \R^{n\times l}\times \R^k )$ to $\R^n$, $\R^{n\times d}$ and $\R$, respectively. We stress that the state process $(y, z)$ and the cost functional $J(u)$ depend on the joint distribution $\cL(y_t, z_t, u_t)$ of the state and the control processes. To end this subsection, we recall It\^o's formula obtained in \cite[Lemma 1.3]{94pp}, which is a key ingredient in our analysis. \begin{lemma}\label{lem:ito} Let $\alpha\in S_{\mathcal{F}}^2([0,T]; \R^n),\beta\in L_{\mathcal{F}}^2([0,T]; \R^n), \gamma \in L_{\mathcal{F}}^2([0,T]; \R^{n\times d}), \theta\in L_{\mathcal{F}}^2([0,T]; \R^{n\times l})$ be such that \[\alpha_t =\alpha_0 +\int_0^t \beta_s ds+\int_0^t \gamma_s d\overleftarrow{B}_s +\int_0^t \theta_s dW_s, \quad 0\le t \le T. \] Then for $\phi\in C^2(\R^n)$, we have \begin{align} \phi(\alpha_t)=&\phi(\alpha_0)+\int_0^t \big< \partial_x\phi(\alpha_s), \beta_s \big> ds+ \int_0^t \big< \partial_x\phi(\alpha_s), \gamma_s d\overleftarrow{B}_s\big> + \int_0^t \big< \partial_x\phi(\alpha_s), \theta_sdW_s \big>\notag \\ &\qquad\quad -\frac12\int_0^t \text{tr}\Big[\partial^2_{xx}\phi(\alpha_s)\gamma_s \gamma_s^{\text{T}} \Big]ds + \frac12\int_0^t \text{tr}\Big[\partial^2_{xx}\phi(\alpha_s)\theta_s \theta_s^{\text{T}} \Big]ds. \label{e:ito} \end{align} \end{lemma} The following product rule is a direct corollary of Lemma \ref{lem:ito}.
\begin{lemma}\label{lem:prod-rule} Consider the processes $y$ and $p$ given by \begin{equation*} \begin{cases} dy_t= f_t dt + g_t d\overleftarrow{B}_t +z_t dW_t,\vspace{0.2cm}\\ dp_t=F_tdt+G_tdW_t+q_td\overleftarrow{B}_t, \end{cases} \end{equation*} where $f, g, z, F, G, q$ all belong to spaces of the form $L^2_{\mathcal F}([0,T]; \R^m)$ with the appropriate dimensions $m$. We have \begin{equation}\label{e:prod-rule} d\la p_t, y_t\ra= \la dp_t , y_t \ra + \la p_t, dy_t\ra +\big( \la G_t, z_t\ra - \la g_t, q_t\ra\big)dt. \end{equation} \end{lemma} \subsection{Main assumptions and the variational equation} We assume the following conditions for our control problem \eqref{state-e}-\eqref{cost-e}. \begin{enumerate} \item [(H1)] The functions $f(t,0,0,0,\delta_0)$ and $g(t,0,0,0,\delta_0)$ are uniformly bounded, where $\delta_0$ is the Dirac measure at $0$. The functions $f$, $g$ and $h$ are differentiable with respect to $(y,z,u)\in \R^n\times \R^{n\times l}\times \R^k$ for each $t\in[0,T]$ and $\mu\in\mathcal P_2(\R^{n}\times\R^{n\times l}\times \R^{k})$. Moreover, for $\rho=y,z,u$, the partial derivative $\partial_\rho \varphi$ is continuous and uniformly bounded in $(t,y, z, u, \mu)$ for $\varphi=f,g,h$. In particular, we require that $\|\partial_z g(t, y,z,u,\mu)\|<\alpha_1$ for some constant $\alpha_1\in(0,1)$. \medskip \item[(H2)]The functions $f$, $g$ and $h$ are L-differentiable with respect to $\mu$. Moreover, for ${\nu}=\mu_y,\mu_z,\mu_u$, the L-derivative $\partial_\nu \varphi$ is continuous with $L^2$-norm being uniformly bounded in $(t, y, z, u, \mu)$ for $\varphi=f, g, h$. In particular, we require that \[\int_{\R^n\times \R^{n\times l}\times \R^k}\big\|\partial_{\mu_z}g(t, y,z,u, \mu)(y',z',u')\big\|^2d\mu(y',z',u')<\alpha_2\] for some constant $\alpha_2\in (0, 1-\alpha_1)$. \medskip \item[(H3)]The function $\Phi$ is differentiable with respect to $y$ and L-differentiable with respect to $\mu_y$, and moreover $\partial_y \Phi(y,\mu)$ and $\partial_{\mu_y} \Phi(y,\mathcal L(Y))(Y)$ are jointly continuous.
\end{enumerate} \begin{remark}\label{lip} Note that if $f$ and $g$ are continuously differentiable with uniformly bounded partial derivatives, as assumed in (H1) and (H2), then $f$ and $g$ are Lipschitz in $(y,z,u)$ and $\mu$. Precisely, there exist a constant $C$ and constants $0<\alpha_1,\alpha_2<1$ with $\alpha_1+\alpha_2<1$ such that for all square-integrable random variables $y,y'$ valued in $\R^n$, $z,z'$ valued in $\R^{n\times l}$, $u,u'$ valued in $\R^k$, and $\mu=\mathcal L(y,z,u),\mu'=\mathcal L(y',z',u')$, \begin{align*} &\big|f(t,y,z,u,\mu)-f(t,y',z',u',\mu')\big|^2\\&\leq C\Big(|y-y'|^2+\|z-z'\|^2+|u-u'|^2+\E\big[|y-y'|^2+\|z-z'\|^2+|u-u'|^2\big]\Big), \end{align*} and \begin{align*} &\big\|g(t,y,z,u,\mu)-g(t,y',z',u',\mu')\big\|^2\\&\leq C\Big(|y-y'|^2+|u-u'|^2+\E\big[|y-y'|^2+|u-u'|^2\big]\Big)+\Big(\alpha_1\|z-z'\|^2+\alpha_2\E\big[\|z-z'\|^2\big]\Big). \end{align*} \end{remark} The following result, borrowed from \cite{lx22}, provides the existence and uniqueness of the solution of \eqref{state-e}. \begin{theorem}\label{exist-unique-state} Under Assumptions (H1) and (H2), for any fixed $u=(u_t)_{0\leq t\leq T}\in \mathcal U$, there exists a unique solution $(y^u,z^u)\in S^2_\mathcal F([0,T];\R^n)\times L^2_\mathcal F([0,T];\R^{n\times l})$ to \eqref{state-e}. \end{theorem} Let $u\in \mathcal U$ be an optimal control, i.e., $J(u)=\inf\limits_{v\in\mathcal U}J(v)$, and let $(y,z)$ be the corresponding state process. We shall introduce some notation that will be used in the sequel. Recalling that $\mathcal U=L^2_{\mathcal F}([0,T]; U)$ with $U$ a closed convex subset of $\R^k$, we have $u^\e:=u+\varepsilon v\in \mathcal U$ for $0\le\e\le 1$ and all $v=\bar u-u$ with $\bar u\in \mathcal U$. Let $(y^\varepsilon, z^\varepsilon)$ denote the solution of \eqref{state-e} with $u=u^\varepsilon$.
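Note that admissibility of the perturbed control is an immediate consequence of the convexity of $U$: for each $t$ and $\e\in[0,1]$, \[u^\e_t=u_t+\e(\bar u_t-u_t)=(1-\e)u_t+\e\bar u_t\in U,\] as a convex combination of the two $U$-valued random variables $u_t$ and $\bar u_t$.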
We shall use the following abbreviated notation \begin{equation}\label{e:notations} \begin{aligned} &\theta_t=(y_t,z_t,u_t,\mathcal L(y_t,z_t,u_t)),\,\,\theta_t^\varepsilon=(y_t^\varepsilon, z_t^\varepsilon,u_t^\varepsilon,\mathcal L(y^\varepsilon_t,z^\varepsilon_t,u^\varepsilon_t)),\\ &y^\lambda_t=y_t+\lambda(y_t^\varepsilon-y_t),\,\,z^\lambda_t=z_t+\lambda(z_t^\varepsilon-z_t),\,\,u^\lambda_t=u_t+\lambda\varepsilon v_t,\\ &\theta^\lambda_t=(y^\lambda_t,z^\lambda_t,u^\lambda_t,\mathcal L(y^\lambda_t,z^\lambda_t,u^\lambda_t)). \end{aligned} \end{equation} Let $(\widetilde{\Omega},\widetilde{\mathcal F},\widetilde P)$ be a copy of $(\Omega,\mathcal F,P)$. For a random variable $X$ defined on $(\Omega,\mathcal F,P)$, we denote by $\widetilde{X}$ its copy on $\widetilde{\Omega}$. For any integrable random variable $\xi$ on the product probability space $(\Omega\times \widetilde{\Omega},\mathcal F\otimes \widetilde{\mathcal F}, P\otimes \widetilde P)$, we denote \begin{equation}\label{e:notations1} \E\big[\xi(\omega,\widetilde{\omega})\big]=\int_{\Omega}\xi(\omega,\widetilde{\omega})P(d\omega) \text{ and } \widetilde{\E}\big[\xi(\omega,\widetilde{\omega})\big]=\int_{\widetilde{\Omega}}\xi(\omega,\widetilde{\omega})\widetilde P(d\widetilde{\omega}).
\end{equation} With the above notations in mind, we introduce the following linear backward doubly stochastic differential equation \begin{equation}\label{variational-e} \left\{ \begin{aligned} -dK_t=&\Big\{\partial_yf(t,\theta_t)K_t+\partial_zf(t,\theta_t)L_t+\partial_uf(t,\theta_t)v_t+\widetilde \E\big[\partial_{\mu_y}f(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\widetilde K_t\big]\\&\hspace{1em}+\widetilde \E\big[\partial_{\mu_z}f(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\widetilde L_t\big]+\widetilde \E\big[\partial_{\mu_u}f(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\widetilde v_t\big]\Big\}dt\\ &+\Big\{\partial_yg(t,\theta_t)K_t+\partial_zg(t,\theta_t)L_t+\partial_ug(t,\theta_t)v_t+\widetilde \E\big[\partial_{\mu_y}g(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\widetilde K_t\big]\\&\quad +\widetilde \E\big[\partial_{\mu_z}g(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\widetilde L_t\big]+\widetilde \E\big[\partial_{\mu_u}g(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\widetilde v_t\big]\Big\}d\overleftarrow B_t\\&-L_tdW_t,\,\,\,t\in[0,T],\\ K_T=&0, \end{aligned} \right. \end{equation} to which there exists a unique solution $(K,L)\in S^2_\mathcal F([0,T];\R^n)\times L^2_\mathcal F([0,T];\R^{n\times l})$ by Theorem~\ref{exist-unique-state}. \begin{proposition}\label{estimate} Let assumptions (H1)-(H3) hold. Then, we have \begin{align*} \lim\limits_{\varepsilon\to 0}\E\left[\sup\limits_{0\leq t\leq T}\Big|\frac{y^\varepsilon_t-y_t}{\varepsilon}-K_t\Big|^2\right]=0 ~\text{ and }~ \lim\limits_{\varepsilon\to 0}\E\left[\int_0^T\Big\|\frac{z^\varepsilon_t-z_t}{\varepsilon}-L_t\Big\|^2dt\right]=0. \end{align*} \end{proposition} \begin{proof} Denote \begin{align}\label{e:hat-y-e} \hat y_t^\varepsilon=\frac{y^\varepsilon_t-y_t}{\varepsilon}-K_t ~ \text{ and } ~\hat z_t^\varepsilon=\frac{z^\varepsilon_t-z_t}{\varepsilon}-L_t. 
\end{align} Then, by \eqref{state-e} and \eqref{variational-e}, $(\hat y^\varepsilon_t,\hat z^\varepsilon_t)_{0\leq t\leq T}$ solves the following equation, \begin{equation}\label{e:y-hat} \left\{ \begin{aligned} -d\widehat y^\varepsilon_t=&\Big\{\tfrac{1}{\varepsilon}\big[f(t,\theta^\varepsilon_t)-f(t,\theta_t)\big]-\partial_yf(t,\theta_t)K_t-\partial_zf(t,\theta_t)L_t-\partial_uf(t,\theta_t)v_t\\& \quad -\widetilde \E\big[\partial_{\mu_y}f(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\widetilde K_t\big]-\widetilde \E\big[\partial_{\mu_z}f(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\widetilde L_t\big]\\ &\quad -\widetilde \E\big[\partial_{\mu_u}f(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\widetilde v_t\big]\Big\}dt\\ &+\Big\{\tfrac{1}{\varepsilon}\big[g(t,\theta^\varepsilon_t)-g(t,\theta_t)\big]-\partial_yg(t,\theta_t)K_t-\partial_zg(t,\theta_t)L_t-\partial_ug(t,\theta_t)v_t\\ &\qquad -\widetilde \E\big[\partial_{\mu_y}g(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t){\widetilde K}_t\big]-\widetilde \E\big[\partial_{\mu_z}g(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\widetilde L_t\big]\\ & \qquad -\widetilde \E\big[\partial_{\mu_u}g(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\widetilde v_t\big]\Big\}d\overleftarrow B_t-\widehat z^\varepsilon_tdW_t,\,\,\,t\in[0,T],\\ \hat y^\varepsilon_T=&0. \end{aligned} \right. 
\end{equation} Using notations \eqref{e:notations} and \eqref{e:notations1}, some algebraic work shows that \eqref{e:y-hat} can be written as \begin{equation} \left\{ \begin{aligned} -d\hat y^\varepsilon_t=&\bigg[\int_0^1\Big\{\partial_yf(t,\theta_t^\lambda) {\widehat y_t^\varepsilon}+\widetilde \E\big[\partial_{\mu_y}f(t,\theta_t^\lambda)(\widetilde{ y^\lambda_t},\widetilde{ z^\lambda_t},\widetilde {u^\lambda_t})\widetilde {\widehat y^\varepsilon_t}\big]\\&\hspace{2em}+\partial_zf(t,\theta_t^\lambda) {\widehat z^\varepsilon_t}+\widetilde \E\big[\partial_{\mu_z}f(t,\theta^\lambda_t)(\widetilde{ y^\lambda_t},\widetilde {z^\lambda_t},\widetilde {u^\lambda_t})\widetilde {\widehat z^\varepsilon_t}\big]+F^{\lambda,\varepsilon}_t\Big\}d\lambda \bigg]dt\\ &+\bigg[\int_0^1\Big\{\partial_yg(t,\theta_t^\lambda){\widehat y_t^\varepsilon}+\widetilde \E\big[\partial_{\mu_y}g(t,\theta_t^\lambda)(\widetilde{ y^\lambda_t},\widetilde {z^\lambda_t},\widetilde {u^\lambda_t})\widetilde {\widehat y^\varepsilon_t}\big]\\&\hspace{3em}+\partial_zg(t,\theta_t^\lambda) {\widehat z^\varepsilon_t}+\widetilde \E\big[\partial_{\mu_z}g(t,\theta^\lambda_t)(\widetilde{ y^\lambda_t},\widetilde {z^\lambda_t},\widetilde {u^\lambda_t})\widetilde {\widehat z^\varepsilon_t}\big]+G^{\lambda,\varepsilon}_t\Big\}d\lambda \bigg]d\overleftarrow B_t\\ &-\widehat z^\varepsilon_tdW_t,\,\,\,t\in[0,T],\\ \hat y^\varepsilon_T=&0. \end{aligned} \right. 
\end{equation} where \begin{align*} F^{\lambda,\varepsilon}_t:=&\big(\partial_yf(t,\theta_t^\lambda)-\partial_yf(t,\theta_t)\big)K_t+\widetilde\E\Big[\big(\partial_{\mu_y}f(t,\theta_t^\lambda)(\widetilde{ y^\lambda_t},\widetilde {z^\lambda_t},\widetilde {u^\lambda_t})-\partial_{\mu_y}f(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\big)\widetilde K_t\Big]\\ +&\big(\partial_zf(t,\theta_t^\lambda)-\partial_zf(t,\theta_t)\big)L_t+\widetilde\E\Big[\big(\partial_{\mu_z}f(t,\theta_t^\lambda)(\widetilde{ y^\lambda_t},\widetilde {z^\lambda_t},\widetilde {u^\lambda_t})-\partial_{\mu_z}f(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\big)\widetilde L_t\Big]\\ +&\big(\partial_uf(t,\theta_t^\lambda)-\partial_uf(t,\theta_t)\big)v_t+\widetilde \E\Big[\big(\partial_{\mu_u}f(t,\theta^\lambda_t)(\widetilde{ y^\lambda_t},\widetilde {z^\lambda_t},\widetilde {u^\lambda_t})-\partial_{\mu_u}f(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\big) \widetilde v_t \Big], \end{align*} and $G^{\lambda, \varepsilon}_t$ is of the same form as $F^{\lambda,\e}_t$ with $f$ replaced by $g$. 
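For the reader's convenience, we indicate where this representation comes from (a sketch for the drift $f$; the coefficient of $d\overleftarrow B_t$ is treated identically). Since $\frac{d}{d\lambda}y^\lambda_t=y^\varepsilon_t-y_t=\varepsilon(\widehat y^\varepsilon_t+K_t)$ and similarly for $z^\lambda_t$ and $u^\lambda_t$, the fundamental theorem of calculus and the chain rule for L-derivatives give \begin{align*} \tfrac{1}{\varepsilon}\big[f(t,\theta^\varepsilon_t)-f(t,\theta_t)\big] =&\int_0^1\Big\{\partial_yf(t,\theta^\lambda_t)\big(\widehat y^\varepsilon_t+K_t\big)+\partial_zf(t,\theta^\lambda_t)\big(\widehat z^\varepsilon_t+L_t\big)+\partial_uf(t,\theta^\lambda_t)v_t\\ &\quad+\widetilde\E\big[\partial_{\mu_y}f(t,\theta^\lambda_t)(\widetilde{y^\lambda_t},\widetilde{z^\lambda_t},\widetilde{u^\lambda_t})\big(\widetilde{\widehat y^\varepsilon_t}+\widetilde K_t\big)\big]+\cdots\Big\}d\lambda, \end{align*} where the omitted terms are the analogous $\partial_{\mu_z}$ and $\partial_{\mu_u}$ terms; subtracting the corresponding terms of the variational equation \eqref{variational-e}, which are evaluated at $\theta_t$, leaves precisely the remainder $F^{\lambda,\varepsilon}_t$.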
Applying It\^o's formula \eqref{e:ito} to $\left|\widehat y_t^\varepsilon\right|^2$, we have \begin{align*} &\E\big[|\widehat y_t^\varepsilon|^2\big]+\E\Big[\int_t^T\|\widehat z_s^\varepsilon\|^2ds \Big]\\ =&2\E\int_t^T\Big<\int_0^1\big\{\partial_yf(s,\theta_s^\lambda) {\widehat y_s^\varepsilon}+\widetilde \E\big[\partial_{\mu_y}f(s,\theta_s^\lambda)(\widetilde{ y^\lambda_s},\widetilde {z^\lambda_s},\widetilde {u^\lambda_s})\widetilde {\widehat y^\varepsilon_s}\big]\\&\hspace{5em}+\partial_zf(s,\theta_s^\lambda) {\widehat z^\varepsilon_s}+\widetilde \E\big[\partial_{\mu_z}f(s,\theta^\lambda_s)(\widetilde{ y^\lambda_s},\widetilde {z^\lambda_s},\widetilde {u^\lambda_s})\widetilde {\widehat z^\varepsilon_s}\big]+F^{\lambda, \varepsilon}_s\big\}d\lambda,\widehat y^\varepsilon_s\Big>ds\\&\hspace{0.5em}+\E\int_t^T\Big\|\int_0^1\big\{\partial_yg(s,\theta_s^\lambda){\widehat y_s^\varepsilon}+\widetilde \E\big[\partial_{\mu_y}g(s,\theta_s^\lambda)(\widetilde{ y^\lambda_s},\widetilde {z^\lambda_s},\widetilde {u^\lambda_s})\widetilde {\widehat y^\varepsilon_s}\big]\\&\hspace{6.2em}+\partial_zg(s,\theta_s^\lambda) {\widehat z^\varepsilon_s}+\widetilde \E\big[\partial_{\mu_z}g(s,\theta^\lambda_s)(\widetilde{ y^\lambda_s},\widetilde {z^\lambda_s},\widetilde {u^\lambda_s})\widetilde {\widehat z^\varepsilon_s}\big]+G^{\lambda, \varepsilon}_s\big\}d\lambda\Big\|^2ds. \end{align*} The uniform boundedness of the partial derivatives of $f$ and $g$ assumed in (H1) and (H2), combined with Young's inequality and the smallness conditions on $\partial_z g$ and $\partial_{\mu_z}g$ (which allow the $\widehat z^\varepsilon$-terms produced by the quadratic $g$-term to be absorbed into the left-hand side, since $\alpha_1+\alpha_2<1$), yields \begin{align*} \E\big[|\widehat y_t^\varepsilon|^2\big]+\E\Big[\int_t^T\|\widehat z_s^\varepsilon\|^2ds \Big]\leq C\E\Big[\int_t^T|\widehat y^\varepsilon_s|^2ds\Big]+C\E\int_t^T\int_0^1\big\{\big| F^{\lambda,\varepsilon}_s\big|^2+\big\| G^{\lambda, \varepsilon}_s\big\|^2\big\}d\lambda ds.
\end{align*} To get the desired result, in view of Gronwall's lemma, it suffices to show \begin{equation}\label{estimate-F} \lim\limits_{\varepsilon\to0}\E\Big[\int_0^T\int_0^1\big|F_t^{\lambda,\varepsilon}\big|^2d\lambda dt\Big]=0, \end{equation} and \begin{equation}\label{estimate-G} \lim\limits_{\varepsilon\to 0}\E\Big[\int_0^T\int_0^1\big\|G_t^{\lambda, \varepsilon}\big\|^2d\lambda dt\Big]=0. \end{equation} We shall prove \eqref{estimate-F} below; \eqref{estimate-G} can be proved in the same way and is thus omitted. By H\"older's inequality, we have \begin{align*} &\E\Big[\int_0^T\int_0^1\big|F_t^{\lambda,\varepsilon}\big|^2d\lambda dt\Big]\\&\leq C\E\int_0^T\Big\{\int_0^1\big\{|\partial_yf(t,\theta_t^\lambda)-\partial_yf(t,\theta_t)|^2|K_t|^2+\|\partial_zf(t,\theta_t^\lambda)-\partial_zf(t,\theta_t)\|^2\|L_t\|^2\big\}d\lambda\\&\hspace{2.5em}\qquad+\int_0^1\widetilde\E\big[\big|\partial_{\mu_y}f(t,\theta_t^\lambda)(\widetilde{ y^\lambda_t},\widetilde {z^\lambda_t},\widetilde {u^\lambda_t})-\partial_{\mu_y}f(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\big|^2\big]\widetilde\E\big[|\widetilde K_t|^2\big]d\lambda\\&\hspace{2.5em}\qquad+\int_0^1\widetilde\E\big[\big\|\partial_{\mu_z}f(t,\theta_t^\lambda)(\widetilde{ y^\lambda_t},\widetilde {z^\lambda_t},\widetilde {u^\lambda_t})-\partial_{\mu_z}f(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\big\|^2\big]\widetilde\E\big[\|\widetilde L_t\|^2\big]d\lambda\\&\hspace{2.5em}\qquad +\int_0^1\widetilde\E\big[\big|\partial_{{\mu_u}}f(t,\theta_t^\lambda)(\widetilde{ y^\lambda_t},\widetilde {z^\lambda_t},\widetilde {u^\lambda_t})-\partial_{\mu_u}f(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t)\big|^2\big]\widetilde\E\big[|\widetilde v_t|^2\big]d\lambda\\&\hspace{2.5em}\qquad+\int_0^1\big|\partial_uf(t,\theta_t^\lambda)-\partial_uf(t,\theta_t)\big|^2|v_t|^2d\lambda \Big\}dt.
\end{align*} Due to the continuity and uniform boundedness assumed in (H1) and (H2) for the partial derivatives, we can apply the dominated convergence theorem to prove \eqref{estimate-F}. The proof is concluded. \end{proof} The differentiability of the cost functional $J(\cdot)$ proved in the following result will be used in the derivation of the variational inequality in Section \ref{sec:adjoint-eq}. \begin{proposition}\label{expan-J} Under conditions (H1)-(H3), the cost functional $J(\cdot)$ defined by \eqref{cost-e} is G\^ateaux differentiable, and the derivative at $u$ in the direction $v$ is given by \begin{align}\label{e:J-derivative} \frac{d}{d\varepsilon}J(u+\varepsilon v)\Big|_{\varepsilon=0}=&\E\int_0^T\Big\{\big<\partial_yh(t,\theta_t),K_t\big>+\big<\partial_zh(t,\theta_t),L_t\big>+\big<\partial_uh(t,\theta_t),v_t\big>\notag\\&\hspace{2.5em}+\widetilde \E\big[\big<\partial_{\mu_y}h(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t),\widetilde K_t\big>\big]+\widetilde \E\big[\big<\partial_{\mu_z}h(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t),\widetilde L_t\big>\big]\notag\\&\hspace{2.5em}+\widetilde \E\big[\big<\partial_{{\mu_u}}h(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t),\widetilde v_t\big>\big]\Big\}dt\notag\\& +\E\Big[\big<\partial_y\Phi(y_0,\mathcal L(y_0)),K_0\big>+\widetilde \E\big[\big<\partial_{\mu_y}\Phi(y_0,\mathcal L(y_0))(\widetilde y_0),\widetilde K_0\big>\big]\Big].
\end{align} \end{proposition} \begin{proof} By the definition \eqref{cost-e} of $J$ and the notations \eqref{e:notations}, we have \begin{align} \frac{J(u^\varepsilon)-J(u)}{\varepsilon}&=\frac{1}{\varepsilon} \Big[\E\int_0^T\big\{h(t,\theta^\varepsilon_t)-h(t,\theta_t)\big\}dt\Big]+\frac{1}{\varepsilon}\E\Big[\Phi(y_0^\varepsilon,\mathcal L(y_0^\varepsilon))-\Phi(y_0,\mathcal L(y_0))\Big]\notag\\&=:I_1+I_2.\label{e:J-J} \end{align} For the term $I_1$, Taylor's first-order expansion yields \begin{align*} I_1=&\E\int_0^T\Big\{\big<\partial_yh(t,\theta_t),K_t\big>+\big<\partial_zh(t,\theta_t),L_t\big>+\big<\partial_uh(t,\theta_t),v_t\big>\\&\hspace{3em}+\widetilde \E\big[\big<\partial_{\mu_y}h(t,\theta_t)(\widetilde{y_t},\widetilde{z_t},\widetilde{u_t}),\widetilde K_t\big>\big]+\widetilde \E\big[\big<\partial_{\mu_z}h(t,\theta_t)(\widetilde{y_t},\widetilde{z_t},\widetilde{u_t}),\widetilde L_t\big>\big]\\&\hspace{3em}+\widetilde \E\big[\big<\partial_{{\mu_u}}h(t,\theta_t)(\widetilde{y_t},\widetilde{z_t},\widetilde{u_t}),\widetilde v_t\big>\big]\Big\}dt+\E\int_0^T\rho_t^\varepsilon dt, \end{align*} with \begin{align*} \rho^\varepsilon_t:=&\int_0^1\big\{\big<\partial_yh(t,\theta^\lambda_t),\widehat y_t^\varepsilon\big>+\big<\partial_yh(t,\theta^\lambda_t)-\partial_y h(t,\theta_t),K_t\big>\big\}d\lambda\\&+\int_0^1\big\{\big<\partial_zh(t,\theta^\lambda_t),\widehat z_t^\varepsilon\big>+\big<\partial_zh(t,\theta^\lambda_t)-\partial_z h(t,\theta_t),L_t\big>\big\}d\lambda\\&+\int_0^1\big\{\widetilde \E\big[\big<\partial_{\mu_y}h(t,\theta^\lambda_t)(\widetilde{y^\lambda_t},\widetilde{z^\lambda_t},\widetilde{u^\lambda_t}),\widetilde{\widehat y_t^\varepsilon}\big>\big]+\widetilde \E\big[\big<\partial_{\mu_z}h(t,\theta^\lambda_t)(\widetilde{y^\lambda_t},\widetilde{z^\lambda_t},\widetilde{u^\lambda_t}),\widetilde{\widehat z_t^\varepsilon}\big>\big]\big\}d\lambda \\ 
&+\int_0^1\widetilde\E\big[\big<\partial_{\mu_y}h(t,\theta^\lambda_t)(\widetilde{y^\lambda_t},\widetilde{z^\lambda_t},\widetilde{u^\lambda_t})-\partial_{\mu_y}h(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t),\widetilde K_t\big>\big]d\lambda\\ &+\int_0^1\widetilde\E\big[\big<\partial_{\mu_z}h(t,\theta^\lambda_t)(\widetilde{y^\lambda_t},\widetilde{z^\lambda_t},\widetilde{u^\lambda_t})-\partial_{\mu_z}h(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t),\widetilde L_t\big>\big]d\lambda\\ &+\int_0^1\widetilde\E\big[\big<\partial_{{\mu_u}}h(t,\theta^\lambda_t)(\widetilde{y^\lambda_t},\widetilde{z^\lambda_t},\widetilde{u^\lambda_t})-\partial_{{\mu_u}}h(t,\theta_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t),\widetilde v_t\big>\big]d\lambda\\&+\int_0^1\big<\partial_uh(t,\theta_t^\lambda)-\partial_uh(t,\theta_t),v_t\big>d\lambda, \end{align*} where we recall that $\hat y_t^\e, \hat z_t^\e$ are given in \eqref{e:hat-y-e}. Note that by (H1) and (H2), the partial derivatives of $h$ are jointly continuous and uniformly bounded. Combining this fact with Proposition \ref{estimate}, we can apply the dominated convergence theorem to get \[\lim\limits_{\varepsilon\to 0}\E\int_0^T\rho_t^\varepsilon dt=0.\] The term $I_2$ in \eqref{e:J-J} can be analyzed in a similar way. The proof is concluded. \end{proof} \subsection{On necessity of the condition} \label{sec:adjoint-eq} In this subsection, we present our main result, the stochastic maximum principle, which gives a necessary condition for an optimal control. Let $H: [0,T]\times \R^n\times \R^{n\times l} \times\R^k\times \mathcal P_2(\R^n\times \R^{n\times l}\times \R^k)\times\R^n\times\R^{n\times d} \to \R$ denote the Hamiltonian given by \begin{align}\label{Hamiltonian} H(t,y,z,u,\mu,p,q):=\big<f(t,y,z,u,\mu),p\big>+\big<g(t,y,z,u,\mu),q\big>+h(t,y,z,u,\mu).
\end{align} Consider the following adjoint equation \begin{equation}\label{e:pq} \left\{ \begin{aligned} dp_t=&\Big[\partial_yH(t,\theta_t,p_t,q_t)+\widetilde \E\big[\partial_{\mu_y}H(t,\widetilde\theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big]\Big]dt\\&+\Big[\partial_zH(t,\theta_t,p_t,q_t)+\widetilde \E\big[\partial_{\mu_z}H(t,\widetilde\theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big]\Big]dW_t\\&-q_td\overleftarrow B_t, \,\,\,t\in[0,T],\\ p_0=&\partial_y\Phi(y_0,\mathcal L(y_0))+\widetilde\E\big[\partial_{\mu_y}\Phi(\widetilde y_0,\mathcal L(y_0))(y_0)\big], \end{aligned} \right. \end{equation} where we have used notations in \eqref{e:notations}. Recalling the equation \eqref{variational-e} of $(K,L)$, applying It\^o's formula to $\big<p_t,K_t\big>$ from $0$ to $T$ and taking expectation, we can get \begin{align*} \E\big[\big<p_0,K_0\big>\big]=&\E\int_0^T\Big\{\big<\partial_u^\text{T}f(t,\theta_t)p_t+\widetilde\E\big[\partial^\text{T}_{\mu_u} f(t,\widetilde \theta_t)(y_t,z_t,u_t)\widetilde p_t\big]\\&\hspace{3.5em}+\partial_u^\text{T}g(t,\theta_t)q_t+\widetilde\E\big[\partial^\text{T}_{\mu_u} g(t,\widetilde \theta_t)(y_t,z_t,u_t)\widetilde q_t\big],v_t\big>\\&\hspace{3em}-\big<\partial_yh(t,\theta_t)+\widetilde\E\big[\partial_{\mu_y}h(t,\widetilde \theta_t)(y_t,z_t,u_t)\big],K_t\big>\\&\hspace{3em}-\big<\partial_zh(t,\theta_t)+\widetilde\E\big[\partial_{\mu_z}h(t,\widetilde \theta_t)(y_t,z_t,u_t)\big],L_t\big>\Big\}dt. \end{align*} Note that $\E\big[\big<p_0,K_0\big>\big]$ is the sum of the last two terms on the right-hand side of \eqref{e:J-derivative}. 
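Let us briefly sketch this computation. By the product rule \eqref{e:prod-rule} and $K_T=0$, \[\E\big[\la p_0,K_0\ra\big]=-\E\int_0^Td\la p_t,K_t\ra,\] where the expectations of the stochastic integrals with respect to $dW_t$ and $d\overleftarrow B_t$ vanish. The mean-field terms are then rearranged by Fubini's theorem: since $(\widetilde\theta_t,\widetilde p_t,\widetilde q_t,\widetilde K_t)$ is an independent copy of $(\theta_t,p_t,q_t,K_t)$, we have, for instance, \[\E\Big[\big<\widetilde\E\big[\partial_{\mu_y}H(t,\widetilde\theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big],K_t\big>\Big]=\E\Big[\widetilde\E\big[\big<\partial_{\mu_y}H(t,\theta_t,p_t,q_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t),\widetilde K_t\big>\big]\Big],\] which converts the derivatives evaluated along independent copies into the form appearing in \eqref{e:J-derivative}.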
Plugging this expression into \eqref{e:J-derivative}, we get \begin{align*} &\frac{d}{d\varepsilon}J(u+\varepsilon v)\Big|_{\varepsilon=0}\\=&\E\int_0^T\Big<\partial_u^\text{T}f(t,\theta_t)p_t+\widetilde\E\big[\partial^\text{T}_{\mu_u} f(t,\widetilde \theta_t)(y_t,z_t,u_t)\widetilde p_t\big]+\partial_u^\text{T}g(t,\theta_t)q_t\\ &\qquad\qquad+\widetilde\E\big[\partial^\text{T}_{\mu_u} g(t,\widetilde \theta_t)(y_t,z_t,u_t)\widetilde q_t\big]+\partial_uh(t,\theta_t)+\widetilde \E\big[\partial_{\mu_u}h(t,\widetilde \theta_t)(y_t,z_t,u_t)\big],v_t\Big>dt. \end{align*} Using the Hamiltonian $H$ given by \eqref{Hamiltonian}, we can write \begin{align}\label{e:J-d} \frac{d}{d\varepsilon}J(u+\varepsilon v)\Big|_{\varepsilon=0}=\E\int_0^T\Big<\partial_uH(t,\theta_t,p_t,q_t)+\widetilde\E\big[\partial_{\mu_u} H(t,\widetilde \theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big],v_t\Big>dt. \end{align} Now we are ready to derive our main result, the stochastic maximum principle. \begin{theorem}\label{nmp} We assume conditions (H1)-(H3) for the control problem \eqref{state-e}-\eqref{cost-e}. Suppose that $u=(u_t)_{0\leq t\leq T}\in\mathcal U$ is an optimal control, $(y_t,z_t)_{0\leq t\leq T}$ is the associated state process, and $(p_t,q_t)_{0\leq t\leq T}$ is the adjoint process satisfying \eqref{e:pq}. Then, we have, for all $a\in U$, \begin{align}\label{variational inequality} \Big<\partial_uH(t,\theta_t,p_t,q_t)+\widetilde\E\big[\partial_{\mu_u} H(t,\widetilde \theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big],a-u_t\Big>\geq 0, \,\,\,dt\otimes d\mathbb P \text{ a.s.}, \end{align} where $H$ is the Hamiltonian defined by \eqref{Hamiltonian}. \end{theorem} \begin{proof} Given any admissible control $(\bar u_t)_{0\leq t\leq T}\in\mathcal U$, we denote $v_t=\bar u_t-u_t$. We use the perturbation $u^\varepsilon_t=u_t+\varepsilon v_t$. 
Since $u$ is optimal, i.e., $J(u)$ achieves the minimum, we have \begin{align*} \frac{d}{d\varepsilon}J(u+\varepsilon v)\Big|_{\varepsilon=0}\geq 0. \end{align*} This together with \eqref{e:J-d} implies \begin{align}\label{eq3.8} \E\int_0^T\Big<\partial_uH(t,\theta_t,p_t,q_t)+\widetilde\E\big[\partial_{\mu_u} H(t,\widetilde \theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big],\bar u_t-u_t\Big>dt\geq 0. \end{align} We set an admissible control $(\bar u_t)_{0\leq t\leq T}$ as follows \begin{align*} \bar u_s=\left\{ \begin{array}{l} \alpha_s, \ \ \ s\in[t,t+\varepsilon],\\ u_s, \ \ \ \text{otherwise}, \end{array} \right. \end{align*} where $(\alpha_t)_{0\leq t\leq T}\in \mathcal{U}$. From \eqref{eq3.8}, we have \begin{align} &\frac{1}{\varepsilon}\E\left[\int^{t+\varepsilon}_t\big<\partial_uH(s,\theta_s,p_s,q_s)+\widetilde\E\big[\partial_{\mu_u} H(s,\widetilde \theta_s,\widetilde p_s,\widetilde q_s)(y_s,z_s,u_s)\big],\alpha_s-u_s\big>ds\right]\geq0. \end{align} Letting $\varepsilon\rightarrow 0^+$, by the Lebesgue differentiation theorem, we have for almost all $t$, \begin{align*} &\E\left[\big<\partial_uH(t,\theta_t,p_t,q_t)+\widetilde\E\big[\partial_{\mu_u} H(t,\widetilde \theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big],\alpha_t-u_t\big>\right]\geq0. \end{align*} For $A\in \mathcal F_t$, we set $\alpha_t=a\textbf{1}_A+u_t\textbf{1}_{A^c}$ with $a\in U$. Thus, we have for almost all $t$, \begin{align*} \E\left[\big<\partial_uH(t,\theta_t,p_t,q_t)+\widetilde\E\big[\partial_{\mu_u} H(t,\widetilde \theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big],a-u_t\big>\textbf{1}_A\right]\ge0.
\end{align*} As $A\in \mathcal F_t$ is chosen arbitrarily and the integrand is $\mathcal F_t$-measurable, we obtain for almost all $t$, \begin{align*} &\E\left[\big<\partial_uH(t,\theta_t,p_t,q_t)+\widetilde\E\big[\partial_{\mu_u} H(t,\widetilde \theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big],a-u_t\big>\big|\mathcal F_t\right]\\&=\big<\partial_uH(t,\theta_t,p_t,q_t)+\widetilde\E\big[\partial_{\mu_u} H(t,\widetilde \theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big],a-u_t\big>\geq 0 \quad\text{a.s.} \end{align*} This proves the desired result. \end{proof} \subsection{On sufficiency of the condition}\label{sufficiency} In this subsection, we prove a verification theorem, which states that under suitable convexity conditions, the maximum principle \eqref{variational inequality} obtained in Theorem \ref{nmp} does yield an optimal control. \begin{theorem}\label{smp} Assume (H1)-(H3). We further assume that the Hamiltonian $H$ given in \eqref{Hamiltonian} and the function $\Phi$ are convex in the sense that \begin{align*} &H(t,y',z',u',\mu',p,q)-H(t,y,z,u,\mu,p,q)\\&\geq \big<\partial_yH(t,y,z,u,\mu,p,q),y'-y\big>+\widetilde\E\big[\big<\partial_{\mu_y}H(t,y,z,u,\mu,p,q)(\widetilde Y,\widetilde Z,\widetilde U),\widetilde Y'-\widetilde Y\big>\big]\\&+\big<\partial_zH(t,y,z,u,\mu,p,q),z'-z\big>+\widetilde\E\big[\big<\partial_{\mu_z}H(t,y,z,u,\mu,p,q)(\widetilde Y,\widetilde Z,\widetilde U),\widetilde Z'-\widetilde Z\big>\big]\\&+\big<\partial_u H(t,y,z,u,\mu,p,q),u'-u\big>+\widetilde\E\big[\big<\partial_{{\mu_u}}H(t,y,z,u,\mu,p,q)(\widetilde Y,\widetilde Z,\widetilde U),\widetilde U'-\widetilde U\big>\big], \end{align*} and \begin{align*} \Phi(y',\mu_y')-\Phi(y,\mu_y)\geq \big<\partial_y\Phi(y,\mu_y),y'-y\big>+\widetilde\E\big[\big<\partial_{\mu_y}\Phi(y,\mu_y)(\widetilde Y),\widetilde Y'-\widetilde Y\big>\big], \end{align*} for all $y,y'\in\R^n$, $z,z'\in\R^{n\times l}$, $u,u'\in\R^k$, $\mu,\mu'\in \mathcal P_2(\R^n\times\R^{n\times l}\times \R^k)$ with $\mu=(\mu_y,\mu_z, \mu_u)=\mathcal
L(\widetilde Y,\widetilde Z,\widetilde U)$, $\mu'=(\mu_y', \mu_z', \mu_u')=\mathcal L(\widetilde Y',\widetilde Z',\widetilde U')$, $p\in\R^n$ and $q\in \R^{n\times d}$. Let $u=(u_t)_{0\leq t\leq T}\in \mathcal U$ be an admissible control, $(y_t,z_t)_{0\leq t\leq T}$ the state process and $(p_t,q_t)_{0\leq t\leq T}$ the adjoint process. Then, if \eqref{variational inequality} holds, $u$ is an optimal control. \end{theorem} \begin{proof} Recalling the definition \eqref{cost-e} of $J$ and the notations \eqref{e:notations}, we have \begin{align*} J(v)-J(u)=\E\int_0^T\big\{h(t,\theta^v_t)-h(t,\theta_t)\big\}dt+\E\big[\Phi(y_0^v,\mathcal L(y_0^v))-\Phi(y_0,\mathcal L(y_0))\big], \end{align*} where we use the superscript $v$ to denote the processes associated to the control process $(v_t)_{0\leq t\leq T}\in\mathcal U$. It follows directly from the convexity of $H$ and $\Phi$ that \begin{align}\label{eq3.11} &h(t,\theta^v_t)-h(t,\theta_t)\notag\\ &=H(t,\theta^v_t,p_t,q_t)-H(t,\theta_t,p_t,q_t)-\big<f(t,\theta_t^v)-f(t,\theta_t),p_t\big>-\big<g(t,\theta_t^v)-g(t,\theta_t),q_t\big>\notag\\ &\geq \big<\partial_yH(t,\theta_t,p_t,q_t),y^v_t-y_t\big>+\widetilde\E\big[\big<\partial_{\mu_y}H(t,\theta_t,p_t,q_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t),\widetilde y^v_t-\widetilde y_t\big>\big]\notag\\&\hspace{0.3em}+\big<\partial_zH(t,\theta_t,p_t,q_t),z^v_t-z_t\big>+\widetilde\E\big[\big<\partial_{\mu_z}H(t,\theta_t,p_t,q_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t),\widetilde z^v_t-\widetilde z_t\big>\big]\\&\hspace{0.3em}+\big<\partial_uH(t,\theta_t,p_t,q_t),v_t-u_t\big>+\widetilde\E\big[\big<\partial_{{\mu_u}}H(t,\theta_t,p_t,q_t)(\widetilde y_t,\widetilde z_t,\widetilde u_t),\widetilde v_t-\widetilde u_t\big>\big]\notag\\&\hspace{0.3em}-\big<f(t,\theta_t^v)-f(t,\theta_t),p_t\big>-\big<g(t,\theta_t^v)-g(t,\theta_t),q_t\big>\notag, \end{align} and \begin{align}\label{eq3.12} &\E\big[\Phi(y_0^v,\mathcal L(y_0^v))-\Phi(y_0,\mathcal L(y_0))\big]\notag\\ &\geq
\E\Big[\big<\partial_y\Phi(y_0,\mathcal L(y_0)),y^v_0-y_0\big>+\widetilde \E\big[\big<\partial_{\mu_y}\Phi(y_0,\mathcal L(y_0))(\widetilde y_0),\widetilde y^v_0-\widetilde y_0\big>\big]\Big]\\ &=\E\Big[\big<\partial_y\Phi(y_0,\mathcal L(y_0))+\widetilde\E\big[\partial_{\mu_y}\Phi(\widetilde y_0,\mathcal L(y_0))(y_0)\big],y_0^v-y_0\big>\Big]\notag. \end{align} Applying It\^o's formula to $\big<p_t,y^v_t-y_t\big>$ yields that \begin{align}\label{eq3.13} &\E\Big[\big<\partial_y\Phi(y_0,\mathcal L(y_0))+\widetilde\E\big[\partial_{\mu_y}\Phi(\widetilde y_0,\mathcal L(y_0))(y_0)\big],y_0^v-y_0\big>\Big]\notag\\&=\E\int_0^T\Big\{\big<f(t,\theta^v_t)-f(t,\theta_t),p_t\big>+\big<g(t,\theta^v_t)-g(t,\theta_t),q_t\big>\notag\\&\hspace{3em}-\big<\partial_yH(t,\theta_t,p_t,q_t)+\widetilde \E\big[\partial_{\mu_y}H(t,\widetilde\theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big],y^v_t-y_t\big>\\&\hspace{3em}-\big<\partial_zH(t,\theta_t,p_t,q_t)+\widetilde \E\big[\partial_{\mu_z}H(t,\widetilde\theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big],z^v_t-z_t\big>\Big\}dt.\notag \end{align} Combining \eqref{eq3.11}-\eqref{eq3.13}, and using Fubini's theorem, we have \begin{align*} J(v)-J(u)\ge \E\int_0^T\Big<\partial_uH(t,\theta_t,p_t,q_t)+\widetilde\E\big[\partial_{{\mu_u}}H(t,\widetilde \theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big],v_t-u_t\Big>dt. \end{align*} Thus, if we assume \eqref{variational inequality}, we get \begin{align*} J(v)-J(u)\geq 0. \end{align*} Since $v\in \mathcal U$ is arbitrary, this implies that $u$ is an optimal control. The proof is concluded.
\end{proof} \setcounter{equation}{0} \section{Well-posedness of mean-field forward-backward doubly stochastic differential equations}\label{4} Using the Hamiltonian $H$ given in \eqref{Hamiltonian}, the state equation \eqref{state-e} and the adjoint equation \eqref{e:pq} can be combined into the following system \begin{equation}\label{Hamiltonian-system} \left\{ \begin{aligned} -dy_t=& \partial_p H(t, \theta_t, p_t, q_t) dt+\partial_q H(t, \theta_t, p_t, q_t)d\overleftarrow B_t-z_tdW_t,\\ dp_t=&\Big[\partial_yH(t,\theta_t,p_t,q_t)+\widetilde \E\big[\partial_{\mu_y}H(t,\widetilde\theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big]\Big]dt\\&+\Big[\partial_zH(t,\theta_t,p_t,q_t)+\widetilde \E\big[\partial_{\mu_z}H(t,\widetilde\theta_t,\widetilde p_t,\widetilde q_t)(y_t,z_t,u_t)\big]\Big]dW_t\\&-q_td\overleftarrow B_t, \,\,\,t\in[0,T],\\ y_T=&\xi,~ p_0=\partial_y\Phi(y_0,\mathcal L(y_0))+\widetilde\E\big[\partial_{\mu_y}\Phi(\widetilde y_0,\mathcal L(y_0))(y_0)\big], \end{aligned} \right. \end{equation} where $\theta_t$ is given in \eqref{e:notations}. If $u_t$ is a function of $y_t,z_t,p_t,q_t$ and their joint distribution (see, e.g., the LQ case in Section \ref{lq}), the above system \eqref{Hamiltonian-system} can be written as a mean-field version of the time-symmetric FBDSDE introduced in Peng and Shi \cite{03ps}, \begin{equation}\label{mf-bdsde} \left\{ \begin{aligned} -dy_t=&f(t,y_t,p_t,z_t,q_t,\mathcal L(y_t,p_t,z_t,q_t))dt+g(t,y_t,p_t,z_t,q_t,\mathcal L(y_t,p_t,z_t,q_t))d\overleftarrow B_t-z_tdW_t,\\ dp_t=&F(t,y_t,p_t,z_t,q_t,\mathcal L(y_t,p_t,z_t,q_t))dt+G(t,y_t,p_t,z_t,q_t,\mathcal L(y_t,p_t,z_t,q_t))dW_t-q_td\overleftarrow B_t,\\ y_T=&\xi,\,\, p_0=\Psi(y_0,\mathcal L(y_0)), \end{aligned} \right.
\end{equation} where $\xi$ is an $\mathcal F_T$-measurable random variable, $\Psi:\Omega\times \R^n\times\mathcal P_2(\R^n)\to \R^n$, and $f, g, F, G$ are functions from $\Omega\times [0,T]\times \R^n\times \R^n\times \R^{n\times l}\times \R^{n\times d}\times \mathcal P_2( \R^n\times \R^n\times \R^{n\times l}\times \R^{n\times d})$ to $\R^n, \R^{n\times d}, \R^n , \R^{n\times l}$, respectively. \begin{definition}\label{def:fbdsde} A quadruple of processes $(y,p,z,q) $ is called a solution of \eqref{mf-bdsde} if $(y,p,z,q)\in L^2_\mathcal F([0,T];\R^n\times\R^n\times \R^{n\times l}\times \R^{n\times d})$ and satisfies \eqref{mf-bdsde}. \end{definition} Let $\mathcal A(t,\zeta,\mu)=(-F,f,-G,g)(t,\zeta, \mu)$ where $\zeta=(y,p,z,q)$ and $\mu$ stands for a generic element in $\mathcal P_2( \R^n\times \R^n\times \R^{n\times l}\times \R^{n\times d})$. Assume that for each $(\zeta,\mu)\in \R^n\times \R^n\times \R^{n\times l}\times \R^{n\times d}\times \mathcal P_2( \R^n\times \R^n\times \R^{n\times l}\times \R^{n\times d})$, $\mathcal A(\cdot,\zeta,\mu)\in L^2_\mathcal F([0,T];\R^n\times \R^n\times\R^{n\times l}\times \R^{n\times d}) $ and that for each $(y,\mu_y)\in \R^n\times \mathcal P_2(\R^n)$, $\Psi(y,\mu_y)\in L^2_{\mathcal F_0}(\R^n)$. For almost all $(t,\omega)\in [0,T]\times \Omega$, $ \zeta,\zeta'\in \R^n\times \R^n\times \R^{n\times l}\times \R^{n\times d}$, $\mu,\mu'\in\mathcal P_2( \R^n\times \R^n\times \R^{n\times l}\times \R^{n\times d})$, $ y,y'\in\R^n$, and $\mu_y,\mu_y'\in\mathcal P_2(\R^n)$, we assume the following conditions. \begin{enumerate} \item [(A1)] There exists $k_1>0$ such that \begin{align*} |\mathcal A(t,\zeta,\mu)-\mathcal A(t,\zeta',\mu')|&\leq k_1\big(|\zeta-\zeta'|+W_2(\mu,\mu')\big),\\ |\Psi(y,\mu_y)-\Psi(y',\mu'_y)|&\leq k_1\big(|y-y'|+W_2(\mu_y,\mu'_y)\big). 
\end{align*} \item [(A2)] There exist constants $k_2,k_3, k_4\ge0$ with $k_2+k_3>0$ and $k_3+k_4>0$ such that \begin{align*} &\E\big[\big<\mathcal A(t,\zeta,\mu)-\mathcal A(t,\zeta',\mu'),\zeta-\zeta'\big>\big]\\&\leq-k_2 \big(\E[|y-y'|^2+\|z-z'\|^2]\big) -k_3\big(\E[|p-p'|^2+\|q-q'\|^2]\big), \end{align*} and $$\E\big[\big<\Psi(y,\mu_y)-\Psi(y',\mu'_y),y-y'\big>\big]\geq k_4\E[|y-y'|^2].$$ Moreover, we make some further assumptions if $k_2$ or $k_3$ is zero: we assume \begin{align*} \|g(t,\zeta,\mu)-g(t,\zeta',\mu')\|&\leq k_1(|y-y'|+|p-p'|+\|q-q'\|)+\lambda_1 \|z-z'\|\\&+k_1(\E[|y-y'|+|p-p'|+\|q-q'\|])+\lambda_2 \E[\|z-z'\|], \end{align*} if $k_2=0$, and \begin{align*} \|G(t,\zeta,\mu)-G(t,\zeta',\mu')\|&\leq k_1(|y-y'|+\|z-z'\|+|p-p'|)+\lambda_1 \|q-q'\|\\&+k_1(\E[|y-y'|+\|z-z'\|+|p-p'|])+\lambda_2 \E[\|q-q'\|], \end{align*} if $k_3=0$, where $\lambda_1,\lambda_2$ are nonnegative constants satisfying $\lambda_1+\lambda_2<1$. \end{enumerate} We shall employ the method of continuation introduced in \cite{99pw} (see also \cite{15byz}) to establish the existence of a solution to \eqref{mf-bdsde}. Consider a family of mean-field FBDSDEs parameterized by $\alpha\in[0,1]$, \begin{equation}\label{parameter-e} \left\{ \begin{aligned} -dy_t=&\big[f^\alpha(t,\zeta_t,\mu_t)+f_0(t)\big]dt+\big[g^\alpha(t,\zeta_t,\mu_t)+g_0(t)\big]d\overleftarrow B_t-z_tdW_t,\\ dp_t=&\big[F^\alpha(t,\zeta_t,\mu_t)+F_0(t)\big]dt+\big[G^\alpha(t,\zeta_t,\mu_t)+G_0(t)\big]dW_t-q_td\overleftarrow B_t,\\ y_T=&\xi,\,\,p_0=\Psi^\alpha(y_0,\mathcal L(y_0))+\Psi_0, \end{aligned} \right.
\end{equation} where $\zeta_t=(y_t,p_t,z_t,q_t)$, $\mu_t=\mathcal L(y_t,p_t,z_t,q_t)$, $(F_0,f_0,G_0,g_0)\in L^2_{\mathcal F}([0,T];\R^{n}\times \R^n\times \R^{n\times l}\times \R^{n\times d})$, $\Psi_0\in L^2_{\mathcal F_0}(\R^n)$ and for any given $\alpha\in [0,1]$, \begin{align*} &f^\alpha(t,\zeta_t,\mu_t)=\alpha f(t,\zeta_t,\mu_t)-(1-\alpha)k_3p_t,\,\,\, g^\alpha(t,\zeta_t,\mu_t)=\alpha g(t,\zeta_t,\mu_t)-(1-\alpha)k_3q_t,\\ &F^\alpha(t,\zeta_t,\mu_t)=\alpha F(t,\zeta_t,\mu_t)+(1-\alpha)k_2y_t,\,\,\, G^\alpha(t,\zeta_t,\mu_t)=\alpha G(t,\zeta_t,\mu_t)+(1-\alpha)k_2z_t,\\ &\Psi^\alpha(y_0,\mathcal L(y_0))=\alpha\Psi(y_0,\mathcal L(y_0))+(1-\alpha)y_0. \end{align*} When $\alpha=0$, equation \eqref{parameter-e} is reduced to \begin{equation}\label{alpha=0} \left\{ \begin{aligned} -dy_t=&[-k_3p_t+f_0(t)]dt+[-k_3q_t+g_0(t)]d\overleftarrow B_t-z_tdW_t,\\ dp_t=&[k_2y_t+F_0(t)]dt+[k_2z_t+G_0(t)]dW_t-q_td\overleftarrow B_t,\\ y_T=&\xi,\,\,p_0=y_0+\Psi_0. \end{aligned} \right. \end{equation} The existence and uniqueness of the solution of equation \eqref{alpha=0} have been obtained in \cite[Proposition 3.6]{10hpw}. The following lemma is the key ingredient of the continuation method, which says that, if \eqref{parameter-e} has a solution for some $\alpha_0\in [0,1)$, it also has a solution for $\alpha\in [\alpha_0,\alpha_0+\delta_0]$, where $\delta_0$ is a constant independent of $\alpha_0$. \begin{lemma}\label{pertubation} Under (A1)-(A2), we assume that there exists a constant $ \alpha_0\in[0,1)$ such that given any $(F_0,f_0,G_0,g_0)\in L^2_{\mathcal F}([0,T];\R^{n}\times \R^n\times \R^{n\times l}\times \R^{n\times d})$ and $\Psi_0\in L^2_{\mathcal F_0}(\R^n)$, $\xi\in L^2_{\mathcal F_T}(\R^n)$, equation \eqref{parameter-e} with $\alpha=\alpha_0$ has a unique solution. 
Then, there exists a constant $\delta_0\in (0,1)$ which only depends on $k_1,k_2,k_3,k_4,\lambda_1,\lambda_2$ and $T$, such that for any $\alpha\in [\alpha_0,\alpha_0+\delta_0]$, equation \eqref{parameter-e} has a unique solution. \end{lemma} \begin{proof} By the assumption, for each $\overline \zeta=(\overline y,\overline p,\overline z,\overline q)\in L^2_\mathcal F([0,T];\R^n\times \R^n\times\R^{n\times l}\times \R^{n\times d})$ with law $\overline\mu\in \mathcal P_2(\R^n\times \R^n\times\R^{n\times l}\times \R^{n\times d})$, there exists a unique quadruple $\zeta=(y,p,z,q)\in L^2_\mathcal F([0,T];\R^n\times \R^n\times\R^{n\times l}\times \R^{n\times d})$ satisfying, for any $\delta>0$, \begin{equation}\label{e:fbdsde} \left\{ \begin{aligned} -dy_t=&\big[f^{\alpha_0}(t,\zeta_t,\mu_t)+\delta(f(t,\overline \zeta_t,\overline \mu_t)+k_3\overline p_t)+f_0(t)\big]dt\\&+\big[g^{\alpha_0}(t,\zeta_t,\mu_t)+\delta(g(t,\overline \zeta_t,\overline \mu_t)+k_3\overline q_t)+g_0(t)\big]d\overleftarrow B_t-z_tdW_t,\\ dp_t=&\big[F^{\alpha_0}(t,\zeta_t,\mu_t)+\delta(F(t,\overline \zeta_t,\overline \mu_t)-k_2\overline y_t)+F_0(t)\big]dt\\&+\big[G^{\alpha_0}(t,\zeta_t,\mu_t)+\delta(G(t,\overline \zeta_t,\overline \mu_t)-k_2\overline z_t)+G_0(t)\big]dW_t-q_td\overleftarrow B_t,\\ y_T=&\xi,\,\,p_0=\Psi^{\alpha_0}(y_0,\mathcal L(y_0))+\delta(\Psi(\overline y_0,\mathcal L(\overline y_0))-\overline y_0)+\Psi_0. \end{aligned} \right. \end{equation} In order to prove that \eqref{parameter-e} with $\alpha=\alpha_0+\delta$ has a solution (for sufficiently small $\delta$), it suffices to show that the mapping $I_{\alpha_0, \delta}(\overline\zeta)=\zeta$ defined via \eqref{e:fbdsde} is a contraction mapping on $L^2_\mathcal F([0,T];\R^n\times \R^n\times\R^{n\times l}\times\R^{n\times d})$. For this purpose, we first establish some estimates.
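We remark that, once the contraction property of $I_{\alpha_0,\delta}$ is established, the conclusion follows by the standard Banach fixed-point argument: the Picard iterates
\begin{align*}
\overline\zeta^{(0)}=0,\qquad \overline\zeta^{(m+1)}=I_{\alpha_0,\delta}\big(\overline\zeta^{(m)}\big),\quad m\geq 0,
\end{align*}
converge in $L^2_\mathcal F([0,T];\R^n\times \R^n\times\R^{n\times l}\times\R^{n\times d})$ to the unique fixed point $\zeta=I_{\alpha_0,\delta}(\zeta)$, which solves \eqref{parameter-e} with $\alpha=\alpha_0+\delta$ since, for instance, $f^{\alpha_0}(t,\zeta_t,\mu_t)+\delta\big(f(t,\zeta_t,\mu_t)+k_3 p_t\big)=f^{\alpha_0+\delta}(t,\zeta_t,\mu_t)$, and similarly for the other coefficients.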
Let $\overline \zeta'=(\overline y',\overline p',\overline z',\overline q')$ be another process in $L^2_\mathcal F([0,T];\R^n\times \R^n\times\R^{n\times l}\times \R^{n\times d})$, and set $\zeta=I_{\alpha_0,\delta}(\overline\zeta)$ and $\zeta'=I_{\alpha_0,\delta}(\overline\zeta')$. Denote \begin{align*} &\widehat \zeta=(\widehat y, \widehat p,\widehat z, \widehat q)=(y- y',p-p',z-z',q-q'),\\&\widehat{\overline \zeta}=(\widehat{\overline y}, \widehat{\overline p},\widehat{\overline z}, \widehat{\overline q})=(\overline y-\overline y',\overline p-\overline p',\overline z-\overline z',\overline q-\overline q'). \end{align*} Applying the product rule \eqref{e:prod-rule} to $\big<\widehat y_t,\widehat p_t\big>$ yields \begin{align*} &\alpha_0\E\big[\big<\widehat y_0,\Psi(y_0,\mathcal L(y_0))-\Psi(y'_0,\mathcal L(y'_0))\big>\big]+(1-\alpha_0)\E[|\widehat y_0|^2]\\&\hspace{2em}+\delta\E\big[\big<\widehat y_0,-\widehat{\overline y}_0+ \Psi(\overline y_0,\mathcal L(\overline y_0))-\Psi(\overline y'_0,\mathcal L(\overline y'_0))\big>\big]\\&= \E\int_0^T\big\{\alpha_0\big<\mathcal A(t,\zeta_t,\mu_t)-\mathcal A(t,\zeta'_t,\mu'_t),\widehat \zeta_t\big>+\delta\big<\mathcal A(t,\overline \zeta_t,\overline \mu_t)-\mathcal A(t,\overline \zeta'_t,\overline \mu'_t),{\widehat \zeta}_t\big>\big\}dt\\&\hspace{2em}-(1-\alpha_0)\E\int_0^T\big\{ k_3|\widehat p_t|^2+ k_3\|\widehat q_t\|^2+k_2|\widehat y_t|^2+k_2 \|\widehat z_t\|^2 \big\}dt\\&\hspace{2em}+\delta\E\int_0^T\big\{k_3\big<\widehat {\overline p}_t, \widehat p_t\big>+k_3\big<\widehat {\overline q}_t, \widehat q_t\big>+k_2\big<\widehat {\overline y}_t, \widehat y_t\big>+k_2\big<\widehat{\overline z}_t, \widehat z_t\big>\big\}dt.
\end{align*} By (A1)-(A2), we have \begin{align*} &(\alpha_0k_4+1-\alpha_0)\E\big[|\widehat y_0|^2\big]+\E\int_0^T\big\{k_3\big(|\widehat p_t|^2+ \|\widehat q_t\|^2\big)+k_2\big(|\widehat y_t|^2+\|\widehat z_t\|^2\big)\big\}dt \\&\leq \delta\Bigg\{\E\int_0^T\big\{k_3\big(\tfrac12|\widehat {\overline p}_t|^2+\tfrac12|\widehat p_t|^2\big)+k_3\big(\tfrac12\|\widehat {\overline q}_t\|^2+\tfrac12\|\widehat q_t\|^2\big)+k_2\big(\tfrac12|\widehat {\overline y}_t|^2+\tfrac12|\widehat y_t|^2\big)\\&\hspace{4em}+k_2\big(\tfrac12\|\widehat {\overline z}_t\|^2+\tfrac12\|\widehat z_t\|^2\big)+|\widehat \zeta_t||\mathcal A(t,\overline \zeta_t,\overline \mu_t)-\mathcal A(t,\overline \zeta'_t,\overline \mu'_t)|\big\}dt \\&\hspace{2em}+\E\big[\big(\tfrac{1}{2}|\widehat y_0|^2+\tfrac{1}{2}|\widehat{\overline y}_0|^2\big)+|\Psi(\overline y_0,\mathcal L(\overline y_0))-\Psi(\overline y'_0,\mathcal L(\overline y'_0))||\widehat y_0|\big]\Bigg\}\\&\leq\delta\Bigg\{\E\int_0^T\big\{k_3\big(\tfrac12|\widehat {\overline p}_t|^2+\tfrac12|\widehat p_t|^2\big)+k_3\big(\tfrac12\|\widehat {\overline q}_t\|^2+\tfrac12\|\widehat q_t\|^2\big)+k_2\big(\tfrac12|\widehat {\overline y}_t|^2+\tfrac12|\widehat y_t|^2\big)\\&\hspace{4em}+k_2\big(\tfrac12\|\widehat {\overline z}_t\|^2+\tfrac12\|\widehat z_t\|^2\big)+k_1\big(|\widehat\zeta_t|^2+\tfrac 12|\widehat {\overline \zeta}_t|^2+\tfrac12 W_2^2(\overline\mu_t,\overline \mu'_t)\big)\big\}dt\\&\hspace{2em }+\E\big[\big(\tfrac{1}{2}|\widehat y_0|^2+\tfrac{1}{2}|\widehat{\overline y}_0|^2\big)+k_1\big(|\widehat y_0|^2+\tfrac {1}{2}|\widehat{\overline y}_0|^2+\tfrac{1}{2}W_2^2(\mathcal L(\overline y_0),\mathcal L(\overline y'_0))\big) \big]\Bigg\}. 
\end{align*} Noting \begin{align*} W_2^2(\overline\mu_t,\overline \mu'_t)\leq \E\big[|\widehat {\overline \zeta}_t|^2\big],\hspace{1em}\text{and}\hspace{1em} W_2^2(\mathcal L(\overline y_0),\mathcal L(\overline y'_0))\leq \E\big[|\widehat {\overline y}_0|^2\big], \end{align*} we can find a constant $K_1>0$ depending only on $k_1, k_2, k_3$ such that \begin{align*} &(\alpha_0k_4+1-\alpha_0)\E\big[|\widehat y_0|^2\big]+\E\int_0^T\big\{k_2\big(|\widehat y_t|^2+\|\widehat z_t\|^2\big)+k_3\big(|\widehat p_t|^2+\|\widehat q_t\|^2\big)\big\}dt\notag\\&\leq \delta K_1\left\{\E\int_0^T \big(|\widehat \zeta_t|^2+|\widehat{\overline \zeta}_t|^2\big)dt+\E\big[|\widehat y_0|^2+|\widehat {\overline y}_0|^2\big]\right\}. \end{align*} Noting $(\alpha_0k_4+1-\alpha_0)\geq \min\{1,k_4\}$, we have \begin{align}\label{e:est-1} &\min\{1, k_4\}\E\big[|\widehat y_0|^2\big]+\E\int_0^T\big\{k_2\big(|\widehat y_t|^2+\|\widehat z_t\|^2\big)+k_3\big(|\widehat p_t|^2+\|\widehat q_t\|^2\big)\big\}dt\notag\\&\leq \delta K_1\left\{\E\int_0^T \big(|\widehat \zeta_t|^2+|\widehat{\overline \zeta}_t|^2\big)dt+\E\big[|\widehat y_0|^2+|\widehat {\overline y}_0|^2\big]\right\}. \end{align} Applying It\^o's formula \eqref{e:ito} to $|\widehat y_t|^2$, we have \begin{align*} &\E\big[|\widehat y_t|^2\big]+\E\int_t^T\|\widehat z_s\|^2ds\\&=2\E\int_t^T\big\{\big<\widehat y_s,\alpha_0(f(s,\zeta_s,\mu_s)-f(s,\zeta'_s,\mu'_s))-(1-\alpha_0)k_3\widehat p_s\big>\\&\hspace{4em}+\big<\widehat y_s,\delta (f(s,\overline\zeta_s,\overline\mu_s)-f(s,\overline\zeta'_s,\overline\mu'_s))+\delta k_3\widehat{\overline p}_s\big>\big\}ds\\&\quad +\E\int_t^T\big\|\alpha_0(g(s,\zeta_s,\mu_s)-g(s,\zeta'_s,\mu'_s))-(1-\alpha_0)k_3\widehat q_s\\&\hspace{4em}+\delta (g(s,\overline\zeta_s,\overline\mu_s)-g(s,\overline\zeta'_s,\overline\mu'_s))+\delta k_3\widehat{\overline q}_s\big\|^2ds. 
\end{align*} By the Lipschitz condition (A1) and Gronwall's inequality, we can find a constant $K_2$ depending only on $k_1, k_2, k_3, \lambda_1, \lambda_2$ such that \begin{equation}\label{e:est-2} \begin{aligned} &\sup_{t\in[0,T]} \E\big[|\widehat y_t|^2\big] \leq K_2\left\{\delta \E\int_0^T|\widehat {\overline\zeta}_t|^2dt+\E\int_0^T\big\{|\widehat p_t|^2+\|\widehat q_t\|^2\big\}dt\right\},\\ & \E\int_0^T\big\{|\widehat y_t|^2+\|\widehat z_t\|^2\big\}dt \leq K_2 (T\vee 1)\left\{\delta \E\int_0^T|\widehat {\overline\zeta}_t|^2dt+ \E\int_0^T\big\{|\widehat p_t|^2+\|\widehat q_t\|^2\big\}dt\right\}. \end{aligned} \end{equation} Similarly, the application of It\^o's formula \eqref{e:ito} to $|\widehat p_t|^2$ yields \begin{align*} &\E\big[|\widehat p_t|^2\big]+\E\int_0^t\|\widehat q_s\|^2ds\\&=\E\big[\big|\alpha_0\big\{\Psi(y_0, \cL(y_0))-\Psi(y_0', \cL(y_0'))\big\}+(1-\alpha_0)\widehat y_0\\&\hspace{3em}+\delta\big\{ \Psi(\overline y_0, \cL(\overline y_0))-\Psi(\overline{y}'_0, \cL(\overline{y}'_0 ))-\widehat{\overline y}_0\big\}\big|^2\big]\\ &\hspace{1em} +2\E\int_0^t\big\{\big<\widehat p_s,\alpha_0(F(s,\zeta_s,\mu_s)-F(s,\zeta'_s,\mu'_s))+(1-\alpha_0)k_2\widehat y_s\big>\\&\hspace{4.5em}+\big<\widehat p_s,\delta (F(s,\overline\zeta_s,\overline\mu_s)-F(s,\overline\zeta'_s,\overline\mu'_s))-\delta k_2\widehat{\overline y}_s\big>\big\}ds\\&\hspace{1em} +\E\int_0^t\big\|\alpha_0(G(s,\zeta_s,\mu_s)-G(s,\zeta'_s,\mu'_s))+(1-\alpha_0)k_2\widehat z_s\\&\hspace{4.5em}+\delta (G(s,\overline\zeta_s,\overline\mu_s)-G(s,\overline\zeta'_s,\overline\mu'_s))-\delta k_2\widehat{\overline z}_s\big\|^2ds. \end{align*} By the Lipschitz condition (A1) and Gronwall's inequality, we can deduce that there exists $K_3$ depending only on $k_1, k_2, k_3,\lambda_1, \lambda_2$ such that \begin{equation}\label{e:est-3} \begin{aligned} & \E\int_0^T\big\{|\widehat p_t|^2+\|\widehat q_t\|^2\big\}dt \leq K_3\left\{\delta \E\left(\int_0^T|\widehat {\overline\zeta}_t|^2dt +|\widehat {\overline y}_0|^2 \right)+ \E\left(\int_0^T\big\{|\widehat y_t|^2+\|\widehat z_t\|^2\big\}dt + |\widehat y_0|^2\right)\right\}. \end{aligned} \end{equation} Now, in order to obtain the contraction of the mapping $I_{\alpha_0,\delta}$ for small $\delta$, it suffices to derive the following estimate from \eqref{e:est-1}-\eqref{e:est-3}, \begin{align}\label{e:contraction} \E\big[|\widehat y_0|^2\big]+\E\Big[\int_0^T\big|\widehat \zeta_t\big|^2dt\Big]\leq \delta K \left(\E\big[|\widehat{\overline y}_0|^2\big]+\E\Big[\int_0^T\big|\widehat {\overline\zeta}_t\big|^2dt\Big]\right), \end{align} for some positive constant $K$ depending on $k_1, k_2, k_3, k_4, \lambda_1, \lambda_2$ and $T$. The proof is split into three cases according to the positivity of $k_2, k_3, k_4$. When $k_2, k_3, k_4>0$, \eqref{e:contraction} is a direct consequence of \eqref{e:est-1}; when $k_2>0, k_3=0, k_4>0$, \eqref{e:contraction} follows from \eqref{e:est-1} and \eqref{e:est-3}; and when $k_2= 0, k_3>0, k_4\ge 0$, it follows from \eqref{e:est-1} and \eqref{e:est-2}. The proof is concluded. \end{proof} We are ready to present our main result in this section. \begin{theorem}\label{ex-uni-fbdsde} Under the assumptions (A1)-(A2), there exists a unique solution $(y,p,z,q)\in L^2_\mathcal F([0,T];\R^n\times\R^n\times \R^{n\times l}\times \R^{n\times d})$ to equation \eqref{mf-bdsde}. \end{theorem} \begin{proof} The existence and uniqueness of the solution follow from the well-posedness of equation \eqref{alpha=0} by \cite[Proposition 3.6]{10hpw} and Lemma \ref{pertubation}. We also provide a direct proof for the uniqueness as follows. Let $\zeta^1=(y^1,p^1,z^1,q^1)$ and $\zeta^2=(y^2,p^2,z^2,q^2)$ be two solutions of \eqref{mf-bdsde}. Denote $\widehat \zeta=(\widehat y,\widehat p,\widehat z,\widehat q)=(y^1-y^2,p^1-p^2,z^1-z^2,q^1-q^2)$.
Applying It\^o's formula to $\big<\widehat y_t,\widehat p_t\big>$ yields that \begin{align*} &\E\left[\big<\Psi(y_0^1,\mathcal L(y_0^1))-\Psi(y_0^2,\mathcal L(y_0^2)), \widehat y_0\big>\right]\\&=\E\int_0^T\big<\mathcal A(t,\zeta^1_t,\mu_t^1)-\mathcal A(t,\zeta_t^2,\mu_t^2),\widehat \zeta_t\big> dt, \end{align*} where we recall that $\mathcal A=(-F, f, -G, g)$. By the monotonicity condition (A2), we have that \begin{align}\label{e:unique} k_4\E\big[|\widehat y_0|^2\big]\leq-k_2\E\int_0^T\big\{|\widehat y_t|^2+\|\widehat z_t\|^2\big\}dt-k_3\E\int_0^T\big\{|\widehat p_t|^2+\|\widehat q_t\|^2\big\}dt\leq 0. \end{align} If $k_2, k_3>0$, this directly yields $\zeta^1=\zeta^2$. If $k_2$ or $k_3$ is 0, say, $k_2=0$ and $k_3>0$, we have $p^1=p^2$ and $q^1=q^2$ by \eqref{e:unique}, and the uniqueness of $(y, z)$ follows from the classical result of BDSDEs (see \cite{94pp}). The proof is completed. \end{proof} \begin{remark} When the mean-field FBDSDE \eqref{mf-bdsde} is reduced to a classical FBDSDE (without mean field), Theorem \ref{ex-uni-fbdsde} recovers the existence and uniqueness result obtained in \cite[Theorem 2.2]{03ps}. Our result is also compatible with \cite[Theorem 2.6]{99pw} when \eqref{mf-bdsde} degenerates to a classical FBSDE. \end{remark} \section{Examples}\label{example} In this section, we apply the results obtained in the preceding sections to some special cases. For simplicity, we assume $n=l=d=k=1$ throughout this section unless otherwise specified. \subsection{Scalar interaction} In this subsection, we consider the scalar interaction type control problem, in which the dependence upon the probability measure is through its moments.
More precisely, we assume that the coefficients in the state equation \eqref{state-e} take the following form, \begin{align*} &f(t,y,z,u,\mu)=\hat f\big(t,y,z,u,\int\varphi d\mu\big),\,\,\,\,g(t,y,z,u,\mu)=\hat g\big(t,y,z,u,\int \phi d\mu\big),\\&h(t,y,z,u,\mu)=\hat h\big(t,y,z,u,\int\psi d\mu \big),\,\,\,\,\Phi(y,\mu_y)=\hat\Phi\big(y,\int \gamma d\mu_y\big), \end{align*} for functions $\varphi,\phi,\psi:\R^3\to \R$ and $\gamma:\R\to \R$ with at most quadratic growth, and functions $\hat f$, $\hat g$, $\hat h: [0,T]\times \R^3\times \R\to \R$, $\hat \Phi: \R\times \R \to \R$ satisfying proper regularity conditions. Here $\int \varphi d\mu:=\int_{\R^3}\varphi(y,z,u)d\mu(y,z,u)= \E[\varphi(Y,Z, U)]$ where $(Y,Z,U)$ is a random vector with $\cL(Y,Z,U)=\mu$. Similar to Example \ref{example-h} in Section \ref{2}, the L-derivatives of $f, g, h$ and $\Phi$ can be calculated via $\hat f, \hat g, \hat h$ and $\hat \Phi$ respectively. For instance, \begin{align*} \partial_{\mu_y} f\big(t, y,z, u, \cL(Y, Z, U)\big)(Y,Z,U)=\partial_{r}\hat f \big(t,y,z, u, \E[\varphi(Y,Z,U)]\big) \partial_y \varphi (Y, Z,U), \end{align*} where $\partial_r \hat f$ denotes the partial derivative with respect to the term $\E[\varphi(Y,Z,U)]$. Set $\Theta_t=(y_t,z_t,u_t)$. 
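For instance, with the purely illustrative choice $\varphi(y,z,u)=y^2$ (so that $\int \varphi d\mu=\E[Y^2]$), the formula above reduces to
\begin{align*}
\partial_{\mu_y} f\big(t, y,z, u, \cL(Y, Z, U)\big)(Y,Z,U)=2Y\,\partial_{r}\hat f \big(t,y,z, u, \E[Y^2]\big),
\end{align*}
so the mean-field contribution is simply the sensitivity $\partial_r\hat f$ weighted by $2Y$.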
Then, the adjoint equation \eqref{e:pq} can be written as \begin{equation} \left\{ \begin{aligned} dp_t=&\bigg\{\partial_y\hat f\big(t,\Theta_t,\E\big[\varphi(\Theta_t)\big]\big)p_t+\widetilde\E\big[\widetilde p_t\partial_r\hat f\big(t,\widetilde \Theta_t,\E\big[\varphi(\Theta_t)\big]\big)\partial_y \varphi(\Theta_t)\big]\\&\quad+\partial_y\hat g\big(t,\Theta_t,\E\big[\phi(\Theta_t)\big]\big)q_t+\widetilde\E\big[\widetilde q_t\partial_r\hat g\big(t,\widetilde \Theta_t,\E\big[\phi(\Theta_t)\big]\big)\partial_y \phi(\Theta_t)\big]\\&\quad+\partial_y\hat h\big(t,\Theta_t,\E\big[\psi(\Theta_t)\big]\big)+\widetilde\E\big[\partial_{r}\hat h\big(t,\widetilde\Theta_t ,\E\big[\psi(\Theta_t)\big]\big)\partial_y \psi(\Theta_t)\big]\bigg\}dt\\ &+\bigg\{\partial_z\hat f\big(t,\Theta_t,\E\big[\varphi(\Theta_t)\big]\big)p_t+\widetilde\E\big[\widetilde p_t\partial_r\hat f\big(t,\widetilde \Theta_t,\E\big[\varphi(\Theta_t)\big]\big)\partial_z \varphi(\Theta_t)\big]\\&\quad+\partial_z\hat g\big(t,\Theta_t,\E\big[\phi(\Theta_t)\big]\big)q_t+\widetilde\E\big[\widetilde q_t\partial_r\hat g\big(t,\widetilde \Theta_t,\E\big[\phi(\Theta_t)\big]\big)\partial_z \phi(\Theta_t)\big]\\&\quad+\partial_z\hat h\big(t,\Theta_t,\E\big[\psi(\Theta_t)\big]\big)+\widetilde\E\big[\partial_{r}\hat h\big(t,\widetilde\Theta_t ,\E\big[\psi(\Theta_t)\big]\big)\partial_z \psi(\Theta_t)\big]\bigg\}dW_t\\&-q_td\overleftarrow B_t,\,\,\,t\in[0,T],\\ p_0=&\partial_y\hat\Phi\big(y_0,\E[\gamma(y_0)]\big)+\widetilde\E\big[\partial_{r}\hat\Phi\big(\widetilde y_0,\E[\gamma(y_0)]\big)\gamma'(y_0)\big]. \end{aligned} \right.
\end{equation} The stochastic maximum principle \eqref{variational inequality} obtained in Theorem \ref{nmp} becomes \begin{align*} &\bigg\{\partial_u\hat f\big(t,\Theta_t,\E\big[\varphi(\Theta_t)\big]\big)p_t+\partial_u\hat g\big(t,\Theta_t,\E\big[\phi(\Theta_t)\big]\big)q_t+\partial_u\hat h\big(t,\Theta_t,\E\big[\psi(\Theta_t)\big]\big)\\&\quad +\widetilde\E\big[\widetilde p_t\partial_r\hat f\big(t,\widetilde \Theta_t,\E\big[\varphi(\Theta_t)\big]\big)\partial_u\varphi(\Theta_t)\big]+\widetilde\E\big[\widetilde q_t\partial_r\hat g\big(t,\widetilde \Theta_t,\E\big[\phi(\Theta_t)\big]\big)\partial_u\phi(\Theta_t)\big]\\&\quad +\widetilde\E\big[\partial_r\hat h\big(t,\widetilde \Theta_t,\E\big[\psi(\Theta_t)\big]\big)\partial_u\psi(\Theta_t)\big]\bigg\}\big(a-u_t\big)\geq 0,\,\,\, \text{ for all } a\in U. \end{align*} \subsection{First order interaction} In this example, we consider the case of first order interaction, where the dependence of the coefficients on the probability measure is linear in the following sense: \begin{align*} &f(t,y,z,u,\mu)=\int_{\R^3}\hat f(t,y,z,u,y',z',u')d\mu(y',z',u')=\widetilde\E[\hat f(t, y, z, u, \widetilde Y, \widetilde Z,\widetilde U)],\\ &g(t,y,z,u,\mu)=\int_{\R^3}\hat g(t,y,z,u,y',z',u')d\mu(y',z',u')=\widetilde\E[\hat g(t, y, z, u, \widetilde Y, \widetilde Z,\widetilde U)],\\ &h(t,y,z,u,\mu)=\int_{\R^3}\hat h(t,y,z,u,y',z',u')d\mu(y',z',u')=\widetilde\E[\hat h(t, y, z, u,\widetilde Y,\widetilde Z,\widetilde U)],\\ &\Phi(y,\mu_y)=\int_{\R}\hat\Phi(y,y')d\mu_y(y')=\widetilde\E[\hat \Phi(y, \widetilde Y)], \end{align*} for some functions $\hat f$, $\hat g$, $\hat h$ defined on $\R^3\times \R^3$ and $\hat \Phi$ defined on $\R\times \R$, all with values in $\R$, where $(\widetilde Y,\widetilde Z,\widetilde U)$ is a random vector with the law $\mu$. The state equation \eqref{state-e} with first order interaction corresponds to a type of mean-field BDSDE which arises naturally in economics, finance, game theory, etc.
We refer to \cite{09bdlp} for a study of mean-field BSDEs with first order interaction via a limit approach. Indeed, consider the $N$-player game in which the state of the $i$-th player is governed by \begin{align*} dX_t^i=\frac{1}{N}\sum\limits_{j=1}^N \hat b(t,X_t^i,X_t^j,u_t^i)dt+\sigma dW_t,\,\,i=1,\cdots, N, \end{align*} where $u_t^i$ denotes the strategy of the $i$-th player. The equation can be rewritten as \begin{align*} dX_t^i&= b(t,X_t^i,\overline \mu_t^N,u_t^i)dt+\sigma dW_t, \end{align*} where $\overline\mu_t^N=\frac{1}{N}\sum\limits_{j=1}^N\delta_{X_t^j}$ and $ b(t,x,\mu,u)=\int_{\R^n}\hat b(t,x,x',u)d\mu(x'). $ Interaction given by functions of this form is called first order, or linear. Now, we come back to our control problem in the case of first order interaction. From Example \ref{example-h}, $f$, $g$, $h$ are linear with respect to $\mu$, and $\Phi$ is linear in $\mu_y$. For $\Phi$, $\partial_{\mu_y}\Phi(y,\mu_y)(y')=\partial_{y'}\hat \Phi(y,y')$ and, similarly, $\partial_{\mu_y}f(t,y,z,u,\mu)(y',z',u')=\partial_{y'}\hat f(t,y,z,u,y',z',u')$.
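These identities can be verified directly through the lift to $L^2$: for a square-integrable perturbation $\widetilde \eta$ of $\widetilde Y$,
\begin{align*}
\frac{d}{d\varepsilon}\Big|_{\varepsilon=0}\widetilde\E\big[\hat \Phi(y,\widetilde Y+\varepsilon\widetilde \eta)\big]=\widetilde\E\big[\partial_{y'}\hat \Phi(y,\widetilde Y)\,\widetilde \eta\big],
\end{align*}
which identifies $\partial_{\mu_y}\Phi(y,\mu_y)(y')=\partial_{y'}\hat \Phi(y,y')$; the computation for $f$, $g$ and $h$ is identical in the variables $(y',z',u')$.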
The adjoint equation is \begin{equation} \left\{ \begin{aligned} dp_t=&\Big\{\widetilde\E\big[\partial_y\hat f(t,\Theta_t,\widetilde y_t,\widetilde z_t,\widetilde u_t)p_t+\partial_y\hat g(t,\Theta_t,\widetilde y_t,\widetilde z_t,\widetilde u_t)q_t+\partial_y\hat h(t,\Theta_t,\widetilde y_t,\widetilde z_t,\widetilde u_t)\big]\\&\hspace{0.5em}+\widetilde\E\big[\partial_{y'}\hat f(t,\widetilde\Theta_t, y_t, z_t, u_t)\widetilde p_t+\partial_{y'}\hat g(t,\widetilde\Theta_t, y_t, z_t, u_t)\widetilde q_t+\partial_{y'}\hat h(t,\widetilde\Theta_t, y_t, z_t, u_t)\big]\Big\}dt\\&+\Big\{\widetilde \E\big[\partial_z\hat f(t,\Theta_t,\widetilde y_t,\widetilde z_t,\widetilde u_t)p_t+\partial_z\hat g(t,\Theta_t,\widetilde y_t,\widetilde z_t,\widetilde u_t)q_t+\partial_z\hat h(t,\Theta_t,\widetilde y_t,\widetilde z_t,\widetilde u_t)\big]\\&\hspace{1em}+\widetilde \E\big[\partial_{z'}\hat f(t,\widetilde\Theta_t, y_t, z_t, u_t)\widetilde p_t+\partial_{z'}\hat g(t,\widetilde\Theta_t, y_t, z_t, u_t)\widetilde q_t+\partial_{z'}\hat h(t,\widetilde\Theta_t, y_t, z_t, u_t)\big]\Big\}dW_t\\&-q_td\overleftarrow B_t,\,\,\,t\in[0,T],\\ p_0=&\widetilde \E\big[\partial_y\hat \Phi(y_0,\widetilde y_0)+\partial_{y'}\hat \Phi(\widetilde y_0, y_0)\big]. \end{aligned} \right. \end{equation} Similarly, applying the stochastic maximum principle in Theorem \ref{nmp} yields that, for all $a\in U$, \begin{align*} &\bigg\{\widetilde\E\Big[\partial_u\hat f(t,\Theta_t,\widetilde y_t,\widetilde z_t,\widetilde u_t)p_t+\partial_u\hat g(t,\Theta_t,\widetilde y_t,\widetilde z_t,\widetilde u_t)q_t+\partial_u\hat h(t,\Theta_t,\widetilde y_t,\widetilde z_t,\widetilde u_t)\\&\hspace{1em}+\partial_{u'}\hat f(t,\widetilde\Theta_t, y_t, z_t, u_t)\widetilde p_t+\partial_{u'}\hat g(t,\widetilde \Theta_t, y_t, z_t, u_t)\widetilde q_t+\partial_{u'}\hat h(t,\widetilde\Theta_t, y_t, z_t, u_t)\Big]\bigg\}\big(a-u_t\big)\geq 0.
\end{align*} \subsection{LQ problem}\label{lq} In this subsection, we will apply the stochastic maximum principle derived in Section \ref{3} to a kind of mean-field stochastic linear quadratic control problem with scalar interaction. In such an LQ model, the drift and the volatility in \eqref{state-e} are of the form \begin{align*} &f(t,y_t,z_t,u_t,\mathcal L(y_t,z_t,u_t))=f_1y_t+f_2z_t+f_3u_t+\overline f_1\E[y_t]+\overline f_2\E[z_t]+\overline f_3\E[u_t],\\ &g(t,y_t,z_t,u_t,\mathcal L(y_t,z_t,u_t))=g_1y_t+g_2z_t+g_3u_t+\overline g_1\E[y_t]+\overline g_2\E[z_t]+\overline g_3\E[u_t], \end{align*} and the cost functional is assumed to be \begin{align*} J(u)=&\frac 12\E\bigg[\int_0^T\big\{h_1y_t^2+h_2z^2_t+h_3u^2_t+\overline h_1(\E[y_t])^2+\overline h_2(\E[z_t])^2+\overline h_3(\E[u_t])^2\big\}dt\\&\hspace{1em}+\Phi y_0^2+\overline \Phi (\E[y_0])^2\bigg], \end{align*} where $f_i, \overline f_i$, $g_i, \overline g_i$, $h_i, \overline h_i$ for $i=1,2,3$ and $\Phi, \overline \Phi$ are given constants satisfying $h_1, h_2,\overline h_1, \overline h_2,\Phi, \overline \Phi\geq 0$ and $h_3,\overline h_3>0$, $|g_2|+|\overline g_2|<1$. In this setting, the Hamiltonian $H$ given by \eqref{Hamiltonian} is \begin{align}\label{e:H-LQ} H(t,y,z,u,\mu,p,q)=&\big\{f_1y+f_2z+f_3u+\overline f_1\E[y]+\overline f_2\E[z]+\overline f_3\E[u]\big\}p\notag\\+&\big\{g_1y+g_2z+g_3u+\overline g_1\E[y]+\overline g_2\E[z]+\overline g_3\E[u]\big\}q\notag\\+&\tfrac 12\big\{h_1y^2+h_2z^2+h_3u^2+\overline h_1(\E[y])^2+\overline h_2(\E[z])^2+\overline h_3(\E[u])^2\big\}, \end{align} and the adjoint equation \eqref{e:pq} is \begin{equation} \left\{ \begin{aligned} dp_t=&\big\{f_1p_t+\overline f_1\E[p_t]+g_1q_t+\overline g_1\E[q_t]+h_1y_t+\overline h_1\E[y_t]\big\}dt\\+&\big\{f_2p_t+\overline f_2\E[p_t]+g_2q_t+\overline g_2\E[q_t]+h_2z_t+\overline h_2\E[z_t]\big\}dW_t\\-&q_td\overleftarrow B_t,\,\,\,t\in[0,T],\\ p_0=&\Phi y_0+\overline \Phi\E[y_0]. \end{aligned} \right. 
\end{equation} If we further assume that the control domain $U$ is the whole space $\R$, the stochastic maximum principle \eqref{variational inequality} yields \begin{align}\label{lq-mp} f_3p_t+\overline f_3\E[p_t] +g_3q_t+\overline g_3\E[q_t]+h_3u_t+\overline h_3\E[u_t]=0. \end{align} Taking expectations, we have \begin{align}\label{Eu} \E[u_t]=-\frac{1}{h_3+\overline h_3}\Big\{(f_3+\overline f_3)\E[p_t]+(g_3+\overline g_3)\E[q_t]\Big\}. \end{align} Plugging this into \eqref{lq-mp}, we obtain \begin{align}\label{u} u_t=&-\frac{1}{h_3}\Big\{f_3p_t+\tfrac{1}{h_3+\overline h_3}\big(h_3\overline f_3-\overline h_3f_3\big)\E[p_t] +g_3q_t+\tfrac{1}{h_3+\overline h_3}\big(h_3\overline g_3-\overline h_3g_3\big)\E[q_t]\Big\}. \end{align} If the following stochastic Hamiltonian system \begin{equation}\label{fbdsde} \left\{ \begin{aligned} -dy_t=&\Big\{f_1y_t+f_2z_t+f_3u_t +\overline f_1\E[y_t]+\overline f_2\E[z_t]+\overline f_3\E[u_t]\Big\}dt\\ &+\Big\{g_1y_t+g_2z_t+g_3u_t +\overline g_1\E[y_t]+\overline g_2\E[z_t]+\overline g_3\E[u_t]\Big\}d\overleftarrow B_t-z_tdW_t,\\ dp_t=&\Big\{f_1p_t+\overline f_1\E[p_t]+g_1q_t+\overline g_1\E[q_t]+h_1y_t+\overline h_1\E[y_t]\Big\}dt\\&+\Big\{f_2p_t+\overline f_2\E[p_t]+g_2q_t+\overline g_2\E[q_t]+h_2z_t+\overline h_2\E[z_t]\Big\}dW_t-q_td\overleftarrow B_t,\\ y_T=&\xi, ~ p_0=\Phi y_0+\overline \Phi\E[y_0]. \end{aligned} \right. \end{equation} with $u_t$ given by \eqref{u} admits a solution, then, by the verification theorem in Section \ref{sufficiency}, the control process \eqref{u} is indeed the unique optimal control. Substituting \eqref{Eu} and \eqref{u} for $\E[u]$ and $u$ respectively leads to a strong coupling between the forward and backward equations in \eqref{fbdsde}, and we cannot apply Theorem \ref{ex-uni-fbdsde} due to the lack of monotonicity assumed in condition (A2).
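As a consistency check for \eqref{u}, substitute \eqref{Eu} into \eqref{lq-mp} and solve for $u_t$: the coefficient of $\E[p_t]$ inside the braces becomes
\begin{align*}
\overline f_3-\frac{\overline h_3(f_3+\overline f_3)}{h_3+\overline h_3}=\frac{h_3\overline f_3-\overline h_3 f_3}{h_3+\overline h_3},
\end{align*}
and the coefficient of $\E[q_t]$ is computed in the same way, which recovers exactly the expression in \eqref{u}.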
In the rest of this subsection, we shall prove the existence and uniqueness of the solution under weaker conditions, which are satisfied by \eqref{fbdsde} when the terms involving $\E[u_t]$ are absent. Consider the following mean-field FBDSDE \begin{equation}\label{mf-fbdsde2} \left\{ \begin{aligned} -dy_t=&f(t,y_t,Cp_t,z_t,Dq_t,\mathcal L(y_t,Cp_t,z_t,Dq_t))dt\\&+g(t,y_t,Cp_t,z_t,Dq_t,\mathcal L(y_t,Cp_t,z_t,Dq_t))d\overleftarrow B_t-z_tdW_t,\\ dp_t=&F(t,y_t,p_t,z_t,q_t,\mathcal L(y_t,p_t,z_t,q_t))dt\\&+G(t,y_t,p_t,z_t,q_t,\mathcal L(y_t,p_t,z_t,q_t))dW_t-q_td\overleftarrow B_t,\\ y_T=&\xi, ~p_0=\Psi(y_0,\mathcal L(y_0)), \end{aligned} \right. \end{equation} where $C$ and $D$ are matrices of dimension $n\times n$. For simplicity, here we set $l=d=1$. We introduce below conditions (B1)-(B2), which are parallel to but weaker than (A1)-(A2) imposed in Section \ref{4}. As in Section~\ref{4}, we use the notations $\zeta=(y,p,z,q)$ and $\mathcal A(t,\zeta,\mu)=(-F,\breve f,-G,\breve g)(t,\zeta,\mu)$, where $\breve f(t,\zeta, \mu)=f(t,y, Cp, z, Dq, \mathcal L(y, Cp, z, Dq))$ and similarly for $\breve g$. \begin{enumerate} \item [(B1)]There exist constants $c_1\geq 0, c_2>0$ such that \begin{align*} \E\big[\big<\mathcal A(t,\zeta,\mu)-\mathcal A(t,\zeta',\mu'),\widehat\zeta\big>\big]&\leq -c_1\E\big[|\widehat y|^2+|\widehat z|^2\big]-c_2\E\big[|C\widehat p+D\widehat q|^2\big],\\ \E\big[\big<\Psi(y,\mu_y)-\Psi(y',\mu_y'), \widehat y\big>\big]&\geq 0, \end{align*} for $ \zeta,\zeta'\in \R^{n}\times\R^{n}\times\R^{n}\times\R^{n}, \mu,\mu'\in \mathcal P_2(\R^{n}\times\R^{n}\times\R^{n}\times\R^{n}), \mu_y,\mu_y'\in \mathcal P_2(\R^n)$, $\widehat\zeta=(\widehat y,\widehat p,\widehat z,\widehat q)=(y-y',p-p',z-z',q-q')$. \medskip \item [(B2)]There exists a constant $c_3>0$ such that \begin{align*} |\mathcal A(t,\zeta,\mu)-\mathcal A(t,\zeta',\mu')|&\leq c_3\big(|\zeta-\zeta'|+W_2(\mu,\mu')\big),\\ |\Psi(y,\mu_y)-\Psi(y',\mu'_y)|&\leq c_3\big(|y-y'|+W_2(\mu_y,\mu'_y)\big).
\end{align*} Moreover, we assume that there exist $\lambda_1,\lambda_2>0$ with $\lambda_1+\lambda_2<1$ such that for all $t\in[0,T]$, \begin{align*} &\big|f(t,y,Cp,z,Dq,\mathcal L(y,Cp,z,Dq))-f(t,y',Cp',z',Dq',\mathcal L(y',Cp',z',Dq'))\big|^2\\&\leq c_3\big(|\widehat y|^2+|\widehat z|^2+|C\widehat p+D\widehat q|^2+\E\big[|\widehat y|^2+|\widehat z|^2+|C\widehat p+D\widehat q|^2\big]\big),\\ &\big|g(t,y,Cp,z,Dq,\mathcal L(y,Cp,z,Dq))-g(t,y',Cp',z',Dq',\mathcal L(y',Cp',z',Dq'))\big|^2\\&\leq c_3\big(|\widehat y|^2+|C\widehat p+D\widehat q|^2+\E\big[|\widehat y|^2+|C\widehat p+D\widehat q|^2\big]\big)+\lambda_1|\widehat z|^2+\lambda_2\E\big[|\widehat z|^2\big],\\ &\big|F(t,y,p,z,q,\mu)-F(t,y',p',z',q',\mu')\big|^2\leq c_3\big(|\widehat\zeta|^2+W_2^2(\mu,\mu')\big),\\ &\big|G(t,y,p,z,q,\mu)-G(t,y',p',z',q',\mu')\big|^2\\&\leq c_3\big(|\widehat y|^2+|\widehat z|^2+|\widehat p|^2+\E\big[|\widehat y|^2+|\widehat z|^2+|\widehat p|^2\big]\big)+\lambda_1|\widehat q|^2+\lambda_2\E\big[|\widehat q|^2\big]. \end{align*} \end{enumerate} \begin{theorem}\label{lq-eu} Under conditions (B1)-(B2), equation \eqref{mf-fbdsde2} admits a unique solution. \end{theorem} \begin{proof} First we prove the uniqueness. Let $\zeta=(y,p,z,q)$ and $\zeta'=(y',p',z',q')$ be two solutions of \eqref{mf-fbdsde2}. Applying It\^o's formula to $\big<\widehat y_t,\widehat p_t\big>$ and taking expectation yields \begin{align*} \E\big[\big<\widehat y_0,\Psi(y_0,\mathcal L(y_0))-\Psi(y'_0,\mathcal L(y'_0))\big>\big]=\E\int_0^T\big<\mathcal A(t,\zeta_t,\mu_t)-\mathcal A(t,\zeta_t',\mu_t'),\widehat \zeta_t\big>dt. \end{align*} This together with condition (B1) implies \begin{align*} c_2\E\int_0^T\big|C\widehat p_t+D\widehat q_t\big|^2dt\leq 0, \end{align*} and recalling that $c_2>0$, we have \begin{equation}\label{e:pq0} |C\widehat p_t+D\widehat q_t|^2=0, \text{ for almost all } t\in[0,T]. \end{equation} Now we deal with $|\widehat y_t|^2$ in a similar way.
Using the Lipschitz conditions on $f$ and $g$ in (B2) and taking \eqref{e:pq0} into account, we obtain \[\E[|\widehat y_t|^2] +\frac12 \E \int_t^T |\widehat z_s|^2 ds \le c_0 \E\int_t^T |\widehat y_s|^2 ds,\] for some positive constant $c_0$. This implies $\widehat y\equiv 0$ by Gronwall's inequality and hence $\widehat z\equiv 0$. The uniqueness of $(p,q)$ then follows directly from the classical results for BDSDEs. To obtain the existence of the solution, we consider the following equation: \begin{equation}\label{mffbdsdep} \left\{ \begin{aligned} -dy_t=&\Big\{\alpha f(t,y_t,Cp_t,z_t,Dq_t,\mathcal L(y_t,Cp_t,z_t,Dq_t))-(1-\alpha)(C^\text{T}Cp_t+C^\text{T}Dq_t)+f_0(t)\Big\}dt\\&+\Big\{\alpha g(t,y_t,Cp_t,z_t,Dq_t,\mathcal L(y_t,Cp_t,z_t,Dq_t))-(1-\alpha)(D^\text{T}Cp_t+D^\text{T}Dq_t)+g_0(t)\Big\}d\overleftarrow B_t\\&-z_tdW_t,\\ dp_t=&\Big\{\alpha F(t,y_t,p_t,z_t,q_t,\mathcal L(y_t,p_t,z_t,q_t))+F_0(t)\Big\}dt\\&+\Big\{\alpha G(t,y_t,p_t,z_t,q_t,\mathcal L(y_t,p_t,z_t,q_t))+G_0(t)\Big\}dW_t-q_td\overleftarrow B_t,\\ y_T=&\xi, ~ p_0=\alpha\Psi(y_0,\mathcal L(y_0))+\Psi_0. \end{aligned} \right. \end{equation} Clearly, \eqref{mffbdsdep} with $\alpha=1$ coincides with \eqref{mf-fbdsde2}, and when $\alpha=0$, the existence and uniqueness follow directly from \cite{94pp}. As in Section \ref{4}, we shall use the method of continuation and prove the result of Lemma \ref{pertubation} under conditions (B1)-(B2).
More precisely, given $\alpha_0\in[0,1)$ and $\overline\zeta=(\overline y,\overline p,\overline z,\overline q)\in L^2_\mathcal F([0,T];\R^n\times \R^n\times\R^{n}\times \R^{n})$, we consider \begin{equation}\label{e:linear-fbdsde} \left\{ \begin{aligned} -dy_t=&\Big\{\alpha_0 f(t,y_t,Cp_t,z_t,Dq_t,\mathcal L(y_t,Cp_t,z_t,Dq_t))-(1-\alpha_0)(C^\text{T}Cp_t+C^\text{T}Dq_t)\\&\hspace{0.5em}+\delta f(t,\overline y_t,C\overline p_t,\overline z_t,D\overline q_t,\mathcal L(\overline y_t,C\overline p_t,\overline z_t,D\overline q_t))+\delta(C^\text{T}C\overline p_t+C^\text{T}D\overline q_t)+f_0(t)\Big\}dt\\&+\Big\{\alpha_0 g(t,y_t,Cp_t,z_t,Dq_t,\mathcal L(y_t,Cp_t,z_t,Dq_t))-(1-\alpha_0)(D^\text{T}Cp_t+D^\text{T}Dq_t)\\&\hspace{1em}+\delta g(t,\overline y_t,C\overline p_t,\overline z_t,D\overline q_t,\mathcal L(\overline y_t,C\overline p_t,\overline z_t,D\overline q_t))+\delta(D^\text{T}C\overline p_t+D^\text{T}D\overline q_t)+g_0(t)\Big\}d\overleftarrow B_t\\&-z_tdW_t,\\ dp_t=&\Big\{\alpha_0 F(t,y_t,p_t,z_t,q_t,\mathcal L(y_t,p_t,z_t,q_t))+\delta F(t,\overline y_t,\overline p_t,\overline z_t,\overline q_t,\mathcal L(\overline y_t,\overline p_t,\overline z_t,\overline q_t))+F_0(t)\Big\}dt\\&+\Big\{\alpha_0 G(t,y_t,p_t,z_t,q_t,\mathcal L(y_t,p_t,z_t,q_t))+\delta G(t,\overline y_t,\overline p_t,\overline z_t,\overline q_t,\mathcal L(\overline y_t,\overline p_t,\overline z_t,\overline q_t))+G_0(t)\Big\}dW_t\\&-q_td\overleftarrow B_t,\\ y_T=&\xi, ~ p_0=\alpha_0\Psi(y_0,\mathcal L(y_0))+\delta\Psi(\overline y_0,\mathcal L(\overline y_0))+\Psi_0, \end{aligned} \right. \end{equation} and shall prove that the mapping $\mathbb I_{\alpha_0,\delta}(\overline \zeta)=\zeta$ defined by \eqref{e:linear-fbdsde} is a contraction for all sufficiently small $\delta$, where the smallness threshold does not depend on $\alpha_0$.
Applying the product rule \eqref{e:prod-rule} to $\big<\widehat y_t, \widehat p_t\big>$, taking expectation and using (B1)-(B2), we obtain the following estimate, which is parallel to \eqref{e:est-1}: there exists a constant $C_1$ depending only on $c_1,c_2,c_3$ such that \begin{align*} &\E\int_0^T|C\widehat p_t+D\widehat q_t|^2dt\leq \delta C_1\Bigg\{ \E\int_0^T\big\{|\widehat {\overline \zeta}_t|^2+|\widehat \zeta_t|^2\big\}dt+\E\big[|\widehat {\overline y}_0|^2+|\widehat y_0|^2\big]\Bigg\}. \end{align*} Applying It\^o's formula to $|\widehat y_t|^2$ and taking expectation, we obtain the estimates \begin{align*} &\E\big[|\widehat y_t|^2\big]\leq C_2\E\int_0^T|C\widehat p_t+D\widehat q_t|^2dt+\delta C_2\E\int_0^T|\widehat {\overline \zeta}_t|^2dt,\\ &\E\int_0^T\{ |\widehat y_t|^2+|\widehat z_t|^2\}dt\leq C_3\E\int_0^T|C\widehat p_t+D\widehat q_t|^2dt+\delta C_3\E\int_0^T|\widehat {\overline \zeta}_t|^2dt. \end{align*} Similarly, we obtain \begin{align*} &\E\int_0^T\{|\widehat p_t|^2+|\widehat q_t|^2\}dt\leq C_4\E\int_0^T\{|\widehat y_t|^2+|\widehat z_t|^2\}dt+\delta C_4\E\int_0^T|\widehat {\overline \zeta}_t|^2dt+C_4\E\big[|\widehat y_0|^2\big]+\delta C_4\E\big[|\widehat{\overline y}_0|^2\big]. \end{align*} Combining the above estimates, we can find a constant $L$ depending only on $c_1,c_2,c_3,\lambda_1,\lambda_2$ and $T$, such that \begin{align*} \E\int_0^T|\widehat \zeta_t|^2dt+\E\big[|\widehat y_0|^2\big]\leq \delta L\left(\E\int_0^T|\widehat {\overline \zeta}_t|^2dt+\E\big[|\widehat {\overline y}_0|^2\big]\right). \end{align*} Hence, if we choose $\delta=\tfrac{1}{2L}$, $\mathbb I_{\alpha_0, \delta}$ is a contraction mapping and thus equation \eqref{mffbdsdep} admits a solution for $\alpha=\alpha_0+\delta$. Noting that the choice of $\delta$ is independent of $\alpha_0$, one can repeat this procedure and show that \eqref{mffbdsdep} has a solution for all $\alpha\in[0,1]$.
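The structure of this continuation scheme, extending solvability from $\alpha=0$ to $\alpha=1$ in steps of a fixed size $\delta$ with each step solved by a contraction iteration warm-started at the previous stage, can be illustrated on a toy scalar fixed-point equation. The map $\phi$, the step size, and the tolerance below are illustrative choices of ours; they have nothing to do with the FBDSDE itself.

```python
import math

def solve_by_continuation(delta=0.25, tol=1e-12):
    """Toy analogue of the method of continuation: extend solvability of
    x = alpha * phi(x) + x0 from alpha = 0 up to alpha = 1 in steps of
    size delta, solving at each stage by Banach fixed-point iteration,
    warm-started at the solution of the previous stage."""
    phi = lambda x: 0.5 * math.sin(x)   # Lipschitz constant 1/2 < 1: a contraction
    x = x0 = 0.3                        # at alpha = 0 the solution is x0 itself
    alpha = 0.0
    while alpha < 1.0:
        alpha = min(alpha + delta, 1.0)
        for _ in range(200):            # fixed-point iteration at this alpha
            x_new = alpha * phi(x) + x0
            if abs(x_new - x) < tol:
                break
            x = x_new
    return x

x_star = solve_by_continuation()
# x_star solves the alpha = 1 equation x = 0.5*sin(x) + 0.3 up to the tolerance
assert abs(x_star - (0.5 * math.sin(x_star) + 0.3)) < 1e-9
```

As in the proof, the point is that the admissible step size is uniform in the starting parameter, so finitely many steps reach $\alpha=1$.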
In particular, this implies the existence of a solution to \eqref{mf-fbdsde2}. The proof is concluded. \end{proof} Now, we reconsider the stochastic linear quadratic problem that does not depend on the distribution of the control process, i.e., $\overline f_3=\overline g_3=\overline h_3=0$. In this situation, the optimal control $u$ given by \eqref{u} becomes \begin{align}\label{u2} u_t=-\tfrac{1}{h_3}(f_3p_t+g_3q_t), \end{align} and the Hamiltonian system \eqref{fbdsde} becomes \begin{equation}\label{fbdsde-22} \left\{ \begin{aligned} -dy_t=&\Big\{f_1y_t+f_2z_t-\tfrac{f_3}{h_3}(f_3p_t+g_3q_t) +\overline f_1\E[y_t]+\overline f_2\E[z_t]\Big\}dt\\ &+\Big\{g_1y_t+g_2z_t-\tfrac{g_3}{h_3}(f_3p_t+g_3q_t) +\overline g_1\E[y_t]+\overline g_2\E[z_t]\Big\}d\overleftarrow B_t-z_tdW_t,\\ dp_t=&\Big\{f_1p_t+\overline f_1\E[p_t]+g_1q_t+\overline g_1\E[q_t]+h_1y_t+\overline h_1\E[y_t]\Big\}dt\\&+\Big\{f_2p_t+\overline f_2\E[p_t]+g_2q_t+\overline g_2\E[q_t]+h_2z_t+\overline h_2\E[z_t]\Big\}dW_t-q_td\overleftarrow B_t,\\ y_T=&\xi, ~ p_0=\Phi y_0+\overline \Phi\E[y_0]. \end{aligned} \right. \end{equation} It can be easily checked that the coefficients in \eqref{fbdsde-22} satisfy (B1)-(B2) (we remark that the monotonicity condition in (A2) is not satisfied, though). By Theorem \ref{lq-eu}, there exists a unique solution to \eqref{fbdsde-22}. Thus, equation \eqref{u2} together with \eqref{fbdsde-22} provides a unique optimal control for the mean-field backward doubly stochastic LQ problem without involving the distribution of control. \begin{remark} When the mean-field FBDSDE \eqref{mf-fbdsde2} is reduced to a classical FBDSDE (without mean field), Theorem \ref{lq-eu} recovers the existence and uniqueness result obtained in \cite[Theorem 3.8]{10hpw}. \end{remark} \section*{Acknowledgements} The authors would like to thank Tianyang Nie for his helpful discussions. J. Song is partially supported by Shandong University (Grant No.
11140089963041) and the National Natural Science Foundation of China (Grant No. 12071256). \bibliographystyle{plain} \bibliography{Reference-mfb} \end{document}
\documentclass[a4paper,11pt]{article} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage[T1]{fontenc} \usepackage{mathpazo} \usepackage{graphicx} \usepackage{comment} \usepackage{amsthm,amsmath,amsfonts,stmaryrd,mathrsfs,amssymb} \usepackage{mathtools} \usepackage{enumerate} \usepackage{csquotes} \usepackage{chemarrow} \usepackage{tikz} \usepackage[top=2.5cm, bottom=2.5cm, left=2cm, right=2cm]{geometry} \usepackage{subfig} \usepackage{multirow} \usepackage{array} \usepackage{bigints} \usepackage{etoolbox} \usepackage{mdframed} \usepackage{color} \usepackage{tabularx} \usepackage[unicode,colorlinks=true,linkcolor=blue,citecolor=red,pdfencoding=auto,psdextra]{hyperref} \linespread{1.1} \newcommand\Pitem{ \addtocounter{enumi}{-1} \renewcommand\theenumi{\roman{enumi}'} \item \renewcommand\theenumi{\roman{enumi}} } \makeatletter \newcommand{\mylabel}[2]{#2\def\@currentlabel{#2}\label{#1}} \makeatother \newcommand\marginal[1]{\marginpar{\raggedright\parindent=0pt\tiny #1}} \DeclareMathOperator{\haut}{ht} \DeclareMathOperator{\diam}{diam} \DeclareMathOperator{\Law}{Law} \DeclareMathOperator{\Leb}{Leb} \DeclareMathOperator{\dist}{d} \DeclareMathOperator{\Vol}{Vol} \DeclareMathOperator{\Ball}{B} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\rrt}{RRT} \DeclareMathOperator{\wrt}{WRT} \DeclareMathOperator{\pa}{PA} \DeclareMathOperator{\PD}{PD} \DeclareMathOperator{\GEM}{GEM} \DeclareMathOperator{\ML}{ML} \DeclareMathOperator{\MLMC}{MLMC} \DeclareMathOperator{\GW}{GW} \DeclareMathOperator{\BGW}{BGW} \newcommand{\ensemblenombre}[1]{\mathbb{#1}} \newcommand{\N}{\ensemblenombre{N}} \newcommand{\Z}{\ensemblenombre{Z}} \newcommand{\Q}{\ensemblenombre{Q}} \newcommand{\R}{\ensemblenombre{R}} \newcommand{\intervalle}[4]{\mathopen{#1}#2 \mathclose{}\mathpunct{},#3 \mathclose{#4}} \newcommand{\intervalleff}[2]{\intervalle{[}{#1}{#2}{]}} \newcommand{\intervalleof}[2]{\intervalle{(}{#1}{#2}{]}} \newcommand{\intervallefo}[2]{\intervalle{[}{#1}{#2}{)}} 
\newcommand{\intervalleoo}[2]{\intervalle{(}{#1}{#2}{)}} \newcommand{\intervalleentier}[2]{\intervalle\llbracket{#1}{#2} \rrbracket} \newcommand{\enstq}[2]{\left\lbrace#1\mathrel{}\middle|\mathrel{}#2\right\rbrace} \newcommand{\abs}[1]{\left\lvert#1\right\rvert} \newcommand{\Gam}[1]{\Gamma \left(#1\right)} \newcommand{\Cut}[2]{\mathrm{Cut}\left(#1,#2\right)} \newcommand{\partieentiere}[1]{\lfloor #1 \rfloor} \newcommand{\petito}[1]{o\mathopen{}\left(#1\right)} \newcommand{\petiton}{o_n\mathopen{}\left(1\right)} \newcommand{\petitoe}{o_\epsilon\mathopen{}\left(1\right)} \newcommand{\grandO}[1]{O\mathopen{}\left(#1\right)} \newcommand{\tend}[2]{\underset{#1}{\overset{#2}{\longrightarrow}}} \newcommand{\bb}{\mathbf{b}} \newcommand{\brho}{\boldsymbol\rho} \newcommand{\bd}{\mathbf{d}} \newcommand{\bnu}{\boldsymbol\nu} \newcommand{\bB}{\mathsf{B}} \newcommand{\bRho}{\rho} \newcommand{\bD}{\mathsf{D}} \newcommand{\bNu}{\nu} \newcommand{\bH}{\mathsf{H}} \newcommand{\bP}{\mathsf{P}} \newcommand{\bM}{\mathsf{M}} \newcommand{\bl}{\mathbf{l}} \newcommand{\bi}{\mathbf{i}} \newcommand{\bm}{\mathbf{m}} \renewcommand{\P}{\mathbb{P}} \newcommand{\E}{\mathbb{E}} \newcommand{\D}{\mathbb{D}} \newcommand{\M}{\mathbb{M}} \newcommand{\K}{\mathbb{K}} \newcommand{\bU}{\mathbb{U}} \newcommand{\bT}{\mathbb{T}} \newcommand{\bA}{\mathbb{A}} \newcommand{\bt}{\mathbf{t}} \newcommand{\sH}{\mathscr{H}} \newcommand{\sL}{\mathscr{L}} \newcommand{\sK}{\mathscr{K}} \newcommand{\sG}{\mathscr{G}} \newcommand{\Ec}[1]{\mathbb{E} \left[#1\right]} \newcommand{\Ep}[1]{\mathbb{E} \left(#1\right)} \newcommand{\Pc}[1]{\mathbb{P} \left[#1\right]} \newcommand{\Pp}[1]{\mathbb{P} \left(#1\right)} \newcommand{\Ppp}[2]{\mathbb{P}_{#1} \left(#2\right)} \newcommand{\Pppsq}[3]{\mathbb{P}_{#1} \left(#2\mathrel{}\middle|\mathrel{}#3\right)} \newcommand{\Ecsq}[2]{\mathbb{E} \left[#1\mathrel{}\middle|\mathrel{}#2\right]} \newcommand{\Epsq}[2]{\mathbb{E} \left(#1\mathrel{}\middle|\mathrel{}#2\right)} 
\newcommand{\Pcsq}[2]{\mathbb{P} \left[#1\mathrel{}\middle|\mathrel{}#2\right]} \newcommand{\Ppsq}[2]{\mathbb{P} \left(#1\mathrel{}\middle|\mathrel{}#2\right)} \newcommand{\Ecp}[2]{\mathbb{E}_{#1} \left[#2\right]} \newcommand{\Var}[1]{\mathbb{V}\left(#1\right)} \newcommand{\ndN}{\mathbb{N}} \newcommand{\ndZ}{\mathbb{Z}} \newcommand{\ndQ}{\mathbb{Q}} \newcommand{\ndR}{\mathbb{R}} \newcommand{\ndC}{\mathbb{C}} \newcommand{\ndE}{\mathbb{E}} \newcommand{\ndK}{\mathbb{K}} \newcommand{\ndA}{\mathbb{A}} \renewcommand{\Pr}[1]{\mathbb{P}(#1)} \newcommand{\Prb}[1]{\mathbb{P}\left( #1 \right)} \newcommand{\Ex}[1]{\mathbb{E}[#1]} \newcommand{\Exb}[1]{\mathbb{E}\left[ #1 \right]} \newcommand{\Va}[1]{\mathbb{V}[#1]} \newcommand{\one}{\mathbbm{1}} \newcommand{\w}{w} \newcommand{\sca}{\kappa} \newcommand{\om}{\mathbf{w}} \newcommand{\rd}{\mathsf{rd}} \newcommand{\cO}{\mathcal{O}} \newcommand{\cC}{\mathcal{C}} \newcommand{\cL}{\mathcal{L}} \newcommand{\cM}{\mathcal{M}} \newcommand{\cF}{\mathcal{F}} \newcommand{\cB}{\mathcal{B}} \newcommand{\cJ}{\mathcal{J}} \newcommand{\cN}{\mathcal{N}} \newcommand{\cK}{\mathcal{K}} \newcommand{\cS}{\mathcal{S}} \newcommand{\cG}{\mathcal{G}} \newcommand{\cT}{\mathcal{T}} \newcommand{\cR}{\mathcal{R}} \newcommand{\cA}{\mathcal{A}} \newcommand{\cH}{\mathcal{H}} \newcommand{\cD}{\mathcal{D}} \newcommand{\cX}{\mathcal{X}} \newcommand{\cP}{\mathcal{P}} \newcommand{\cY}{\mathcal{Y}} \newcommand{\cU}{\mathcal{U}} \newcommand{\cQ}{\mathcal{Q}} \newcommand{\cW}{\mathcal{W}} \newcommand{\fmT}{\mathfrak{T}} \newcommand{\fmS}{\mathfrak{S}} \newcommand{\fmA}{\mathfrak{A}} \newcommand{\mA}{\mathsf{A}} \newcommand{\mC}{\mathsf{C}} \newcommand{\mD}{\mathsf{D}} \newcommand{\mB}{\mathsf{B}} \newcommand{\mF}{\mathsf{F}} \newcommand{\mM}{\mathsf{M}} \newcommand{\mO}{\mathsf{O}} \newcommand{\mK}{\mathsf{K}} \newcommand{\mG}{\mathsf{G}} \newcommand{\mH}{\mathsf{H}} \newcommand{\mT}{\mathsf{T}} \newcommand{\mS}{\mathsf{S}} \newcommand{\mR}{\mathsf{R}} 
\newcommand{\mP}{\mathsf{P}} \newcommand{\mQ}{\mathsf{Q}} \newcommand{\mV}{\mathsf{V}} \newcommand{\mU}{\mathsf{U}} \newcommand{\mX}{\mathsf{X}} \newcommand{\mY}{\mathsf{Y}} \newcommand{\mZ}{\mathsf{Z}} \newcommand{\mb}{\mathsf{b}} \newcommand{\me}{\mathsf{e}} \newcommand{\CRT}{\mathcal{T}_\me} \newcommand{\tF}{\tilde{\cF}} \newcommand{\tG}{\tilde{\cG}} \newcommand{\Aut}{\text{Aut}} \newcommand{\Sym}{\text{Sym}} \newcommand{\mGWT}{\hat{\mathcal{T}}} \newcommand{\cE}{\mathcal{E}} \newcommand{\cBa}{\overrightarrow{\mathcal{B}}'} \newcommand{\cBra}{\overrightarrow{\mathcal{B}}'^\bullet} \newcommand{\plh}{ {\ooalign{$\phantom{0}$\cr\hidewidth$\scriptstyle\times$\cr}}} \newcommand{\PLH}{{\mkern-2mu\times\mkern-2mu}} \newcommand{\hei}{\text{h}} \newcommand{\He}{\textnormal{H}} \newcommand{\Di}{\textnormal{D}} \newcommand{\emb}{\textnormal{emb}} \newcommand{\he}{\text{h}} \newcommand{\shp}{\mathsf{d}} \newcommand{\rhl}{\mathsf{rl}} \newcommand{\lhb}{\mathsf{lb}} \newcommand{\Forb}{\mathsf{Forb}} \newcommand{\spa}{\mathsf{span}} \newcommand{\KT}{\mathbf{T}} \newcommand{\UHT}{\mathbb{U}} \newcommand{\VHT}{\mathcal{V}_{\infty}} \newcommand{\cCb}{\mathcal{C}^\bullet} \newcommand{\cTb}{\mathcal{T}^\bullet} \newcommand{\cTbb}{\mathcal{T}^{\bullet \bullet}} \newcommand{\cTbbl}{\mathcal{T}^{\bullet \bullet (\ell)}} \newcommand{\eqdist}{\,{\buildrel d \over =}\,} \newcommand{\hxi}{\hat{\xi}} \newcommand{\convdis}{\,{\buildrel d \over \longrightarrow}\,} \newcommand{\convp}{\,{\buildrel p \over \longrightarrow}\,} \newcommand{\disto}{\mathsf{dis}} \newcommand{\Cyc}{\textsc{CYC}} \newcommand{\Set}{\textsc{SET}} \newcommand{\Seq}{\textsc{SEQ}} \newcommand{\cZ}{\mathcal{Z}} \newcommand{\GTB}{\Gamma T^\bullet} \newcommand{\GTBB}{\Gamma T^{\bullet \bullet}} \newcommand{\GTBBL}{\Gamma T^{\bullet \bullet (\ell)}} \newtheorem{firsttheorem}{Proposition} \newtheorem{theorem}{Theorem}[section] \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} 
\newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{question}[theorem]{Question} \numberwithin{equation}{section} \newcommand{\keyword}[1]{\textbf{#1}~} \newcommand{\IF}{\keyword{if}} \newcommand{\ELSEIF}{\keyword{ELSEIF} } \newcommand{\THEN}{\keyword{then} } \newcommand{\ELSE}{\keyword{else}} \newcommand{\ENDIF}{\keyword{endif} } \newcommand{\RETURN}{\keyword{return}} \newcommand{\WHILE}{\keyword{while} } \newcommand{\ENDWHILE}{\keyword{end while} } \newcommand{\FOR}{\keyword{for}} \newcommand{\FOREACH}{\keyword{for each}} \newcommand{\EACH}{\keyword{each}} \newcommand{\ENDFOR}{\keyword{endfor} } \newcommand{\SPENDFOR}{\textbf{end for}~ } \newcommand{\ENDFORALL}{\keyword{ENDFORALL} } \newcommand{\DOWNTO}{\keyword{DOWNTO} } \newcommand{\DO}{\keyword{DO} } \newcommand{\AND}{\keyword{and}} \newcommand{\OR}{\keyword{OR} } \newcommand{\true}{\keyword{true}} \newcommand{\false}{\keyword{false} } \newcommand{\OPT}{\rm{OPT}} \newcommand{\REPEAT}{\keyword{repeat}} \newcommand{\UNTIL}{\keyword{until}} \newcommand{\TO}{\keyword{to}} \newcommand{\Po}[1]{{{\mathsf{Po}}\left(#1\right)}} \newcommand{\Pois}[1]{{{\mathsf{Pois}}\left(#1\right)}} \newcommand{\Bern}[1]{{{\mathsf{Bern}}\left(#1\right)}} \newcommand{\Geom}[1]{{{\mathsf{Geom}}\left(#1\right)}} \newcommand{\cnu}{\mathsf{0}} \newcommand{\con}{\mathsf{1}} \newcommand{\dres}{\mathbf{d}} \newcommand{\degp}{\mathrm{d}^+} \newcommand{\dis}{\operatorname{dis}} \newcommand{\notedelphin}[1]{{\textcolor{orange}{#1}}} \newcommand{\toremove}[1]{{\textcolor{gray}{#1}}} \newcommand{\ben}[1]{{\textcolor{blue}{B: #1}}} \newcommand{\notesiggi}[1]{{\textcolor{red}{Siggi: #1}}} \newcommand{\Bb}{\mathbb{B}} \newcommand{\ind}[1]{\mathbf{1}_{\left\lbrace #1 \right\rbrace}} \newcommand{\out}[1]{d_{#1}^+} \newcommand{\dec}{\text{dec}} 
\title{\textbf{Decorated stable trees}} \date{} \author{Delphin S\'{e}nizergues\thanks{University of British Columbia, E-mail: [email protected]} \and Sigurdur \"Orn Stef\'ansson\thanks{University of Iceland, E-mail: [email protected]} \and Benedikt Stufler\thanks{Vienna University of Technology, E-mail: [email protected]}} \begin{document} \maketitle \let\thefootnote\relax\footnotetext{ \\\emph{MSC2010 subject classifications}. 60F17, 60C05 \\ \emph{Keywords: decorated trees, looptrees, stable trees, invariance principle, self-similarity.} } \vspace {-0.5cm} \begin{abstract} We define \emph{decorated $\alpha$-stable trees} which are informally obtained from an $\alpha$-stable tree by blowing up its branchpoints into random metric spaces. This generalizes the $\alpha$-stable looptrees of Curien and Kortchemski, where those metric spaces are just deterministic circles. We provide different constructions for these objects, which allows us to understand some of their geometric properties, including compactness, Hausdorff dimension and self-similarity in distribution. We prove an invariance principle which states that under some conditions, analogous discrete objects, random decorated discrete trees, converge in the scaling limit to decorated $\alpha$-stable trees. We mention a few examples where those objects appear in the context of random trees and planar maps, and we expect them to naturally arise in many more cases. \end{abstract} \begin{figure}[h] \centering \begin{minipage}{\textwidth} \centering \includegraphics[width=1.0\linewidth]{1dot25_poisson_500k_joined.jpg} \caption{ The right side illustrates a decorated $\alpha$-stable tree for $\alpha=5/4$. The decorations are Brownian trees (2-stable trees) rescaled according to the ``width'' of branchpoints. The left side shows the corresponding $\alpha$-stable tree before blowing up its vertices. 
Colours mark vertices with high width and their corresponding decorations.} \label{fi:exiterated} \end{minipage} \end{figure} \section{Introduction} In this paper we develop a general method for constructing random metric spaces by gluing together metric spaces referred to as \emph{decorations} along a tree structure which is referred to as \emph{the underlying tree}. The resulting object, which we define informally below and properly in Section~\ref{s:disc_construction} and Section~\ref{s:cts_construction}, will be called a \emph{decorated tree}. We will consider two frameworks, which we refer to as the \emph{discrete case} and the \emph{continuous case}. In the discrete case the underlying tree is a finite, rooted plane tree $T$. To each vertex $v$ in $T$ with outdegree $d_T^+(v)$ there is associated a decoration $B(v)$, which is a metric space with one distinguished point called the inner root and $d_T^+(v)$ distinguished points called the outer roots. In applications, the decorations $B(v)$ may e.g.~be finite graphs, and they will depend only on the outdegree of $v$. We will thus sometimes write $B(v) = B_{d^+_T(v)}$. The decorated tree is constructed by identifying, for each vertex $v$, each of the outer roots of $B(v)$ with the inner root of $B(vi)$, $i=1,\ldots,d^+_T(v)$, where $vi$ are the children of $v$. This identification will either be done at random or according to some prescribed labelling. Our aim is to describe the asymptotic behaviour of \emph{random} spaces that are constructed in this way. To this end, let $(T_n)_{n \ge 1}$ be a sequence of random rooted plane trees. The sequence is chosen in such a way that the exploration process of the random trees, properly rescaled, converges in distribution towards an excursion of an $\alpha$-stable Lévy process with $\alpha \in (1,2)$. This particular choice is motivated by applications, as will be explained below.
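To make the discrete gluing concrete, here is a schematic Python sketch; the combinatorial encoding is our illustrative choice, not the paper's formal definition. A decoration is given as a tuple of vertices, edges, an inner root, and a list of outer roots, and `cycle_decoration` realizes a looptree-style special case where $B_k$ is a cycle (the convention of a cycle through $k+1$ vertices is chosen purely for illustration):

```python
# Schematic sketch: a decoration is (vertices, edges, inner_root, outer_roots);
# gluing identifies the i-th outer root of B(v) with the inner root of B(vi).

def cycle_decoration(k):
    """Looptree-style decoration: a cycle through k+1 vertices (a single
    point if k = 0), with inner root 0 and outer roots 1, ..., k."""
    if k == 0:
        return [0], [], 0, []
    verts = list(range(k + 1))
    edges = [(i, (i + 1) % (k + 1)) for i in range(k + 1)]
    return verts, edges, 0, list(range(1, k + 1))

def decorate(tree, decoration):
    """tree: dict mapping each vertex to its list of children (plane order).
    Returns vertex and edge sets of the glued graph; vertices are pairs
    (tree_vertex, local_vertex), with glued pairs resolved to one representative."""
    blocks, glue = {}, {}
    for v, children in tree.items():
        verts, edges, inner, outer = decoration(len(children))
        blocks[v] = (verts, edges, inner)
        for slot, child in zip(outer, children):
            glue[(v, slot)] = child           # outer root -> inner root of child
    def resolve(v, x):
        while (v, x) in glue:                 # follow identifications downwards
            v = glue[(v, x)]
            x = blocks[v][2]
        return (v, x)
    V = {resolve(v, x) for v, (verts, _, _) in blocks.items() for x in verts}
    E = {frozenset((resolve(v, a), resolve(v, b)))
         for v, (_, edges, _) in blocks.items() for a, b in edges}
    return V, E

# A root with two leaf children, cycle decorations: the glued graph is a triangle.
V, E = decorate({0: [1, 2], 1: [], 2: []}, cycle_decoration)
```

Here the gluing follows the prescribed plane labelling; as noted above, the identification may instead be made at random.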
We will further assume that there is a parameter $0 < \gamma \leq 1$, which we call the \emph{distance exponent}, such that the (possibly random) decorations $B_k$, with the metric rescaled by $k^{-\gamma}$, converge in distribution towards a compact metric space $\mathcal{B}$ as $k\to \infty$, in the Gromov--Hausdorff sense. In the continuous case, we aim to construct a decorated tree which is the scaling limit of the sequence of discrete decorated trees described above. The underlying tree is chosen as the $\alpha$-stable tree $\mathcal{T}_\alpha$, with $\alpha \in (1,2)$, which is a random compact real tree. One important feature of the $\alpha$-stable tree is that it has countably many vertices of infinite degree. The size of such a vertex is measured by a number called its \emph{width}, which plays the same role as the degree in the discrete setting. The continuous decorated tree is constructed in a way similar to the discrete case, where now each vertex $v$ of infinite degree in $\mathcal{T}_\alpha$ is independently replaced by a decoration $\mathcal{B}_v$ distributed as $\mathcal{B}$, in which distances are rescaled by the width of $v$ raised to the power $\gamma$. The gluing points are specified by singling out an inner root and sampling outer roots according to a measure $\mu$, and the decorations are then glued together along the tree structure. See Figure~\ref{fi:exiterated} for an example of a decorated tree. The precise construction is somewhat technical and is described in more detail in Section~\ref{ss:realtrees}. In order for the construction to yield a compact metric space we require some control on the size of the decorations $\mathcal{B}_v$. More precisely, we will assume that $\gamma > \alpha-1$ and that there exists $p>\frac{\alpha}{\gamma}$ such that $\Ec{\diam(\mathcal{B})^p}<\infty$.
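As a quick numerical illustration of the role of the condition $\gamma > \alpha - 1$, the function below (ours, not from the paper) compares the heuristic polynomial orders, with slowly varying factors ignored: the height of $T_n$ is of order $n^{(\alpha-1)/\alpha}$, while the diameter of the largest decorations is of order $n^{\gamma/\alpha}$.

```python
# Heuristic polynomial orders (slowly varying factors ignored): the height of
# T_n grows like n^((alpha-1)/alpha), while the largest decoration has diameter
# of order n^(gamma/alpha).  The condition gamma > alpha - 1 is exactly the
# condition that the decorations dominate the geometry in the limit.
def dominant_scale(alpha, gamma):
    assert 1 < alpha < 2 and 0 < gamma <= 1
    height_exp = (alpha - 1) / alpha      # exponent of the tree height
    decoration_exp = gamma / alpha        # exponent of the decoration diameter
    return 'decorations' if decoration_exp > height_exp else 'tree'

# alpha = 5/4, gamma = 1 (the regime of Figure 1): decorations dominate.
assert dominant_scale(alpha=1.25, gamma=1.0) == 'decorations'
# gamma < alpha - 1: the decorations are of smaller order than the tree height.
assert dominant_scale(alpha=1.9, gamma=0.5) == 'tree'
```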
Under some further suitable conditions on the decorations and the underlying tree, which are stated in Section~\ref{ss:conditions}, we arrive at the following main results: \begin{itemize} \item \emph{Construction of the decorated $\alpha$-stable tree:} The decorated $\alpha$-stable tree defined above is a well-defined random compact metric space, see Section~\ref{s:cts_construction} and Theorem~\ref{thm:compact_hausdorff}. \item \emph{Invariance principle:} The sequence of discrete random decorated trees, with a properly rescaled metric, converges in distribution towards a decorated $\alpha$-stable tree in the Gromov--Hausdorff--Prokhorov sense. See Theorem~\ref{th:invariance} for the precise statement. \item \emph{Hausdorff dimension:} Apart from degenerate cases, the Hausdorff dimension of \emph{the leaves} of the decorated $\alpha$-stable tree equals $\alpha/\gamma$, see Theorem~\ref{thm:compact_hausdorff}. It follows that if the decorations a.s.~have Hausdorff dimension $d_\mathcal{B}$ then the Hausdorff dimension of the decorated $\alpha$-stable tree equals $\max\{d_\mathcal{B},\alpha/\gamma\}$. \end{itemize} \begin{remark} A heuristic argument for the condition $\alpha -1 < \gamma$ is the following. If $(T_n)$ is in the domain of attraction of $\cT_\alpha$ then the maximum degree of a vertex in $T_n$ is of the order $n^{1/\alpha}$ and the height of $T_n$ is of the order $n^{\frac{\alpha-1}{\alpha}}$ (up to slowly varying functions). A decoration on a large vertex will then have a diameter of the order $n^{\gamma/\alpha}$. If $\alpha-1 < \gamma$, the diameter of the decorations will thus dominate the height of $T_n$ and the decorated tree will have a height of the order $n^{\gamma/\alpha}$, coming only from the decorations. Furthermore, the decorations will ``survive'' in the limit since the distances of the decorated tree will be rescaled according to the inverse of its height, which is of the order of the diameter of the decorations.
In the case $\alpha -1 > \gamma$, the decorations will have a diameter which is of smaller order than the height of $T_n$. This means that the proper rescaling of the distances in the decorated tree should be the inverse of the height of $T_n$, i.e.~$n^{-\frac{\alpha-1}{\alpha}}$, which would contract the decorations to points. We therefore expect that in this case, the scaling limit of the decorated tree is equal to a constant multiple of $\cT_\alpha$, which has Hausdorff dimension $\alpha/(\alpha-1)$. We do not go through the details in the current work. We do not know what happens exactly at the value $\alpha-1 = \gamma$, at which there is a phase transition in the geometry and the Hausdorff dimension of the scaling limit. We leave this as an interesting open problem. \end{remark} \subsection{Relations to other work} The only currently known explicit examples of decorated $\alpha$-stable trees are the $\alpha$-stable looptrees, which were introduced by Curien and Kortchemski \cite{MR3286462}. The looptrees are constructed by decorating the $\alpha$-stable trees by deterministic circles. The authors further showed, in a companion paper \cite{MR3405619}, that the looptrees appear as boundaries of percolation clusters on random planar maps, and there are indications that they play a more general role in statistical mechanical models on random planar maps \cite{Richier:2017}. Looptrees have also been shown to arise as limits of various models of discrete decorated trees such as discrete looptrees, dissections of polygons \cite{MR3286462} and outerplanar maps \cite{zbMATH07138334}. In these cases the discrete decorations $B_k$ are graph cycles, chord-restricted dissections of a polygon and dissections of a polygon, respectively.
The idea of decorating $\alpha$-stable trees by more general decorations was originally proposed by Kortchemski and Marzouk in \cite{kortchemski2016}: \begin{displayquote} ``Note that if one starts with a stable tree and explodes each branchpoint by simply gluing inside a “loop”, one gets the so-called stable looptrees .... More generally, one can imagine exploding branchpoints in stable trees and glue inside any compact metric space equipped with a homeomorphism with [0, 1].'' \end{displayquote} They explicitly mention an example where an $\alpha_1$-stable tree would be decorated by an $\alpha_2$-stable tree and so on, and they ask what the Hausdorff dimension of the resulting object would be. We give a partial answer to their question in Section~\ref{s:markedtrees}. See Figure \ref{fi:exiterated} for a simulation where $\alpha_1 = 5/4$ and $\alpha_2 = 2$. Archer studied a simple random walk on decorated Bienaymé--Galton--Watson trees in the domain of attraction of the $\alpha$-stable tree \cite{arXiv:2011.07266}. Although her results do not explicitly involve constructing a Gromov--Hausdorff limit of the decorated trees, they do require an understanding of their global metric properties. We further refer to her work for examples of decorated trees, some of which we do not discuss in the current work. S\'{e}nizergues has developed a general method for gluing together sequences of random metric spaces \cite{senizergues_gluing_2019} and introduced a framework where metric spaces are glued together along a tree structure \cite{senizergues_growing_2022}. Section~\ref{sec:self-similarity} relies heavily on the results of the latter. Let us also mention that Stufler has studied local limits and Gromov--Hausdorff limits of decorated trees which are in the domain of attraction of the Brownian continuum random tree (equivalently the 2-stable tree) \cite{StEJC2018, zbMATH07235577}.
The Brownian tree has no vertices of infinite degree and thus the rescaled decorations contract to points in the limit which only changes the metric of the decorated tree by a multiplicative factor. Its Gromov--Hausdorff limit then generically turns out to be a multiple of the Brownian tree itself (see \cite[Theorem~6.60]{zbMATH07235577}). \subsection{Outline} The paper is organized as follows. In Section~\ref{s:preliminaries} we recall some important definitions. In Section~\ref{s:disc_construction} the construction of discrete decorated trees is provided and the required conditions on the underlying tree and decorations are stated. In Section~\ref{s:cts_construction} we give the definition of the continuous decorated trees and in Section~\ref{s:invariance} we state and prove the invariance principle, namely that the properly rescaled discrete decorated trees converge towards the continuous decorated trees in the Gromov--Hausdorff--Prokhorov sense. An alternative construction of the continuous decorated tree is given in Section~\ref{sec:self-similarity} which allows us to prove that it is a compact metric space and gives a way to calculate its Hausdorff dimension. Section~\ref{s:applications} provides several applications of our results. Finally, the Appendix is devoted to proving that the conditions on the discrete trees stated in Section~\ref{s:disc_construction} are satisfied for natural models of random trees, in particular Bienaymé--Galton--Watson trees which are in the domain of attraction of $\alpha$-stable trees, with $\alpha \in (1,2)$. \subsection*{Notation} We let $\mathbb{N} = \{1, 2, \ldots\}$ denote the set of positive integers. We usually assume that all considered random variables are defined on a common probability space whose measure we denote by $\mathbb{P}$. All unspecified limits are taken as $n$ becomes large, possibly taking only values in a subset of the natural numbers. 
We write $\convdis$ and $\convp$ for convergence in distribution and probability, and $\eqdist$ for equality in distribution. An event holds with high probability if its probability tends to $1$ as $n \to \infty$. We let $O_p(1)$ denote an unspecified random variable $X_n$ from a stochastically bounded sequence $(X_n)_n$, and write $o_p(1)$ for a random variable $X_n$ with $X_n \convp 0$. The following list summarizes frequently used terminology. \begin{center} \begin{tabularx}{\linewidth}{lll} \qquad \qquad &$\out{T}(v)$ & outdegree of vertex $v$ in a tree $T$.\\&$|T|$ & the number of vertices of a tree $T$.\\ &$B$ & decoration in the discrete setting. \\ &$B_k$ & decoration of size $k$ in the discrete setting. \\ &$d_k$ & metric on $B_k$. \\ &$\dres_k = k^{-\gamma} d_k$ & rescaled metric on $B_k$. \\ &$\cB$ & decoration in the continuous case.\\ &$\cB_t$ & decoration in the continuous case indexed by $t\in[0,1]$.\\ &$\diam(\cB)$ & the diameter of the metric space $\cB$.\\ &$T$ & discrete tree.\\ &$T_n$ & discrete tree of size $n$.\\ &$\mathcal{T}$ & real tree.\\ &$\mathcal{T}_\alpha$ & the $\alpha$-stable tree.\\ &$\Bb$ & set of branchpoints of $\mathcal{T}_\alpha$. \\ &$X^{(\alpha)}$ & a spectrally positive, $\alpha$-stable Lévy process.\\ &$X^{\text{exc},(\alpha)}$ & an excursion of a spectrally positive, $\alpha$-stable Lévy process. \end{tabularx} \end{center} \paragraph*{Acknowledgements.} The authors are grateful to Nicolas Curien for suggesting this collaboration, and to Eleanor Archer for discussing an earlier version of her project \cite{arXiv:2011.07266}. The second author acknowledges support from the Icelandic Research Fund, Grant Number: 185233-051, and is grateful for the hospitality at Université Paris-Sud Orsay. \section{Preliminaries} \label{s:preliminaries} In this section we collect some definitions and refer to the key background results required in the current work.
We start by introducing plane trees, then real trees and the $\alpha$-stable tree, and finally we recall the definition of the Gromov--Hausdorff--Prokhorov topology, which is the topology we use to describe the convergence of compact measured metric spaces. \subsection{Plane trees} \label{subsec:plane trees} A rooted plane tree is a subgraph of the infinite Ulam--Harris tree, which is defined as follows. Its vertex set, $\UHT$, is the set of all finite sequences \begin{align*} \UHT = \bigcup _{n\geq 0} \mathbb{N}^n \end{align*} where $\mathbb{N}= \{1, 2,\ldots\}$ and $\mathbb{N}^0 = \{\varnothing\}$ contains the empty sequence denoted by $\varnothing$. If $v$ and $w$ belong to $\UHT$, their concatenation is denoted by $vw$. The edge set consists of all pairs $(v,vi)$, $v\in\UHT$, $i\in\mathbb{N}$. The vertex $vi$ is said to be the $i$-th child of $v$ and $v$ is called its parent. In general, a vertex $vw$ is said to be a descendant of $v$ if $w \neq \varnothing$, and in that case $v$ is called its ancestor. This ancestral relation is denoted by $v \prec w$. We write $v \preceq w$ if either $v=w$ or $v \prec w$. We denote the most recent common ancestor of $v$ and $w$ by $v\wedge w$, with the convention that $v\wedge w = v$ if $v \preceq w$ (and $v \wedge w = w$ if $w \preceq v$). A rooted plane tree $T$ is a finite subtree of the Ulam--Harris tree such that \begin{enumerate} \item $\varnothing$ is in $T$, and is called its root. \item If $v$ is in $T$ then all its ancestors are in $T$. \item If $v$ is in $T$ there is a number $\out{T}(v)\geq 0$, called the outdegree of $v$, such that $vi$ is in $T$ if and only if $1 \leq i \leq \out{T}(v)$. \end{enumerate} In the following we will only consider rooted plane trees. For simplicity, we will refer to them as trees. The number of vertices in a tree $T$ will be denoted by $|T|$ and we denote them, listed in lexicographical order, by $v_1(T), v_2(T),\ldots, v_{|T|}(T)$.
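The following small example, included for concreteness, illustrates the notation just introduced.

```latex
\begin{example}
Consider the tree $T = \{\varnothing, 1, 2, 11, 12\}$, so that $\out{T}(\varnothing) = 2$,
$\out{T}(1) = 2$ and $\out{T}(2) = \out{T}(11) = \out{T}(12) = 0$. Listed in lexicographical
order, its vertices are
\begin{align*}
	v_1 = \varnothing, \quad v_2 = 1, \quad v_3 = 11, \quad v_4 = 12, \quad v_5 = 2.
\end{align*}
The ancestors of $11$ are $\varnothing$ and $1$, so $\varnothing \prec 1 \prec 11$, and the most
recent common ancestors are, for instance, $11 \wedge 12 = 1$ and $11 \wedge 2 = \varnothing$.
\end{example}
```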
When it is clear from the context, we may leave out the argument $T$ for notational ease. To the tree $T$ we associate its \L{}ukasiewicz path $(W_{k}(T))_{0\leq k \leq |T|}$ by $W_0(T) = 0$ and \begin{align} W_k(T) = \sum_{j=1}^{k} (\out{T}(v_j)-1) \end{align} for $1 \leq k \leq |T|$. Throughout the paper, we will consider sequences of \emph{random trees} $(T_n)_{n\geq 1}$. The index $n$ will usually refer to the size of the tree in some sense, e.g.~the number of vertices or the number of leaves. In Section~\ref{ss:conditions} we state the general conditions on these random trees. However, the prototypical example to keep in mind is when $T_n$ is a Bienaymé--Galton--Watson tree, conditioned to have $n$ vertices. In this case the \L{}ukasiewicz path $(W_{k}(T_n))_{0\leq k \leq n}$ is a random walk conditioned on staying non-negative until step $n$, when it hits $-1$. \subsection{Real trees and the $\alpha$-stable trees} \label{ss:realtrees} Real trees may be viewed as continuous counterparts to discrete trees. Informally, they are compact metric spaces obtained by gluing together line segments without creating loops. A formal definition may e.g.~be found in \cite{MR2147221}, but for our purposes we give an equivalent definition in terms of excursions on $[0,1]$, which are continuous functions $g:[0,1]\to \mathbb{R}$ with the properties that $g\geq 0$ and $g(0) = g(1) = 0$. For $s,t\in[0,1]$ define $\displaystyle m_g(s,t) = \min_{s\wedge t \leq r \leq s \vee t} g(r)$ and define the pseudo-distance \begin{align*} d_g(s,t) = g(s)+g(t) - 2 m_g(s,t). \end{align*} The real tree $\mathcal{T}^g$ encoded by the excursion $g$ is defined as the quotient \begin{align*} \cT^g = [0,1]/\{d_g=0\} \end{align*} endowed with the metric inherited from $d_g$, which will still be denoted $d_g$ by abuse of notation.
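Before proceeding, it may be instructive to work out a simple deterministic instance of this construction.

```latex
\begin{example}
Let $g(t) = \min(t, 1-t)$ for $t \in [0,1]$. For $s \leq t \leq \nicefrac{1}{2}$ the function
$g$ is increasing on $[s,t]$, so $m_g(s,t) = g(s)$ and hence
\begin{align*}
	d_g(s,t) = g(s) + g(t) - 2 g(s) = t - s,
\end{align*}
and symmetrically on $[\nicefrac{1}{2},1]$. Moreover $m_g(s, 1-s) = g(s)$, so $d_g(s,1-s) = 0$
and each point $s \leq \nicefrac{1}{2}$ is identified with $1-s$. The real tree $\cT^g$ is
therefore isometric to the line segment $[0,\nicefrac{1}{2}]$, with no branching; branch points
of a real tree $\cT^g$ arise from excursions $g$ which attain a local minimum value on several
disjoint intervals.
\end{example}
```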
We will consider a family of random real trees $\cT_\alpha$, $\alpha \in (1,2)$, called $\alpha$-stable trees, which serve as the underlying trees in the continuous random decorated trees. A brief definition of $\cT_\alpha$ is given below, but we refer to \cite{MR1964956} for definitions and more details. Let $(X^{(\alpha)}_t; t\in[0,1])$ be a spectrally positive $\alpha$-stable Lévy process, with $\alpha \in (1,2)$, normalized such that \begin{align} \label{eq:levymeasure} \mathbb{E}(\exp(-\lambda X^{(\alpha)}_t)) = \exp(t\lambda^\alpha) \end{align} for every $\lambda > 0$. Let $(X^{\text{exc},(\alpha)}_t; t\in[0,1])$ be an excursion of $X_t^{(\alpha)}$, i.e.~the process conditioned to stay positive for $t \in (0,1)$ and to be 0 for $t\in\{0,1\}$. This involves conditioning on an event of probability zero, which may be done via approximations, see e.g.~\cite[Chapter~VIII.3]{MR1406564}. Define the associated \emph{continuous height process} \begin{align*} H_t^{\text{exc},(\alpha)} = \lim_{\epsilon\to 0} \frac{1}{\epsilon}\int_{0}^t \mathbf{1}_{\{X_s^{\text{exc}}<I_s^t + \epsilon\}}ds \end{align*} where \begin{align*} I_s^t = \inf_{[s,t]} X^{\text{exc}}. \end{align*} Continuity of $H_t^{\text{exc},(\alpha)}$ is not guaranteed by the above definition, but there exists a continuous modification, which we consider from now on \cite{10.1214/aop/1022855417}. The $\alpha$-stable tree is the random real tree $\cT_\alpha := \cT^{H^{\text{exc},(\alpha)}}$ encoded by $H^{\text{exc},(\alpha)}$. We refer to \cite{MR1964956,MR1954248,MR2147221} for more details on the construction and properties of $\cT_\alpha$. Let $T^{\BGW}_n$ be a random BGW tree with a critical offspring distribution in the domain of attraction of an $\alpha$-stable law with $\alpha\in (1,2)$. Recall that this example is the prototype for the underlying random trees we consider in the current work.
Since the \L{}ukasiewicz path of $T^{\BGW}_n$ is in this case a conditioned random walk, the properly rescaled process $(W_{\lfloor n t \rfloor}(T^{\BGW}_n); t\in[0,1])$ converges in distribution towards the excursion $(X^{\text{exc},(\alpha)}_t; t\in[0,1])$ in the Skorohod space $D([0,1],\mathbb{R})$, see \cite[Chapter~VIII]{MR1406564}. This observation is the main idea in proving that $T_n^{\BGW}$, with a properly rescaled graph metric, converges towards $\cT_\alpha$ in the Gromov--Hausdorff sense \cite{MR1964956,MR2147221}. The pointed GHP-distance introduced below may, and will, be generalized in a straightforward manner by adding more points and more measures. \subsection{The Gromov--Hausdorff--Prokhorov topology} Let $\mathbb{M}$ be the set of all compact measured metric spaces modulo isometries. An element $(X,d_X,\nu_X) \in \mathbb{M}$ consists of the space $X$, a metric $d_X$ and a Borel measure $\nu_X$. In particular, $\mathbb{M}$ contains finite graphs (e.g.~with a metric which is a multiple of the graph metric) and real trees. The Gromov--Hausdorff--Prokhorov (GHP) distance between two elements $(X,d_X,\nu_X),(Y,d_Y,\nu_Y) \in \mathbb{M}$ is defined by \begin{align} \label{eq:dghp} d_{GHP}(X,Y) = \inf\left\{d_H^Z(\phi_1(X),\phi_2(Y)) + d_P^Z(\phi_1^\ast \nu_X,\phi_2^\ast \nu_Y)\right\} \end{align} where the infimum is taken over all isometric embeddings $\phi_1: X \to Z$ and $\phi_2: Y \to Z$ into a common metric space $Z$, and over all such spaces $Z$. Here $d_H^Z$ is the Hausdorff distance between compact subsets of $Z$, $d_P^Z$ is the Prokhorov distance between measures on $Z$ and $\phi^\ast \nu$ denotes the pushforward of the measure $\nu$ by $\phi$. We refer to \cite[{Chapter~27}]{MR2459454} for more details. When we are only interested in the convergence of the metric spaces without the measures, we leave out the term in \eqref{eq:dghp} with the Prokhorov metric and refer to the metric simply as the Gromov--Hausdorff (GH) metric, denoted $d_{GH}$.
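A standard example, stated here under the convention that graphs are endowed with their graph metric, may help fix ideas.

```latex
\begin{example}
Let $P_n$ be the path graph with vertex set $\{0, 1, \ldots, n\}$ and let $\nu_n$ be the uniform
probability measure on its vertices. Then
\begin{align*}
	\left(P_n,\, n^{-1} d_{P_n},\, \nu_n\right) \longrightarrow \left([0,1],\, |\cdot|,\, \mathrm{Leb}\right)
\end{align*}
in the GHP topology: the map $\phi_n(i) = i/n$ preserves distances exactly, its image is a
$\nicefrac{1}{n}$-net of $[0,1]$, and the pushforward of $\nu_n$ converges weakly to the
Lebesgue measure, so $d_{GHP}$ between the two spaces tends to $0$.
\end{example}
```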
The following formulation of GHP-convergence may be useful in calculations and we will use it repeatedly in the following sections. If $(X,d_X)$ and $(Y,d_Y)$ are metric spaces and $\epsilon>0$, a function $\phi_\epsilon: X \to Y$ is called an $\epsilon$-isometry if the following two conditions hold: \begin{enumerate} \item (Almost isometry) $$\sup_{x_1,x_2\in X}|d_Y(\phi_\epsilon(x_1),\phi_\epsilon(x_2)) - d_X(x_1,x_2)| < \epsilon.$$ \item (Almost surjective) For every $y \in Y$ there exists $x \in X$ such that $$d_Y(\phi_\epsilon(x),y) < \epsilon.$$ \end{enumerate} A sequence $(X_n,d_n,\nu_n)$ of compact measured metric spaces converges towards $(X,d,\nu)$ in the GHP topology if and only if there exist $\epsilon_n$-isometries $\phi_{\epsilon_n}: X_n \to X$, with $\epsilon_n \to 0$, such that \begin{align} \label{eq:epsiloniso} d_H^X(\phi_{\epsilon_n}(X_n),X) \to 0 \quad \text{and} \quad d_P^X(\phi_{\epsilon_n}^\ast \nu_n, \nu) \to 0 \end{align} as $n\to \infty$. When the GH topology is considered, only the first of these two convergences is assumed. In most of the applications we will consider \emph{pointed} metric spaces $(X,d_X,\rho_X)$ where $\rho_X$ is a distinguished element called \emph{a point} or \emph{a root}. The GHP-distance is adapted to this by requiring that the isometries in \eqref{eq:dghp} and the $\epsilon_n$-isometries in \eqref{eq:epsiloniso} send root to root. In this case we speak of the \emph{pointed} GHP-distance. In the same vein, we will sometimes deal with metric spaces that carry some extra finite measure. In this case we just add a term in \eqref{eq:dghp} that corresponds to the Prokhorov distance computed between the extra measures and modify \eqref{eq:epsiloniso} accordingly. \section{Construction in the discrete} \label{s:disc_construction} In this section we define the discrete decorated trees and state the sufficient conditions on the decorations and the underlying trees for our main results to hold.
\emph{A decoration} of size $k$ is a compact metric space $B$ equipped with a metric $d$. We distinguish a vertex $\rho$ and call it the \emph{internal root} of $B$ and furthermore, we label $k$ vertices (not necessarily different) and call them the external roots of $B$. This labeling is described by a function $\ell: \{1,\ldots,k\} \to B$. Note that we explicitly allow external roots to coincide with the internal root. The space $B$ is equipped with a finite Borel measure $\nu$. We will use the notation $(B,d,\rho,\ell,\nu)$ for a decoration. In practice, $B$ will often be a finite graph, $d$ the graph metric, and $\nu$ the counting measure on the vertex set. The labeled vertices will often be the boundary of $B$ in some sense, e.g.~the leaves of a tree or the vertices on the boundary of a planar map. \begin{figure}[t] \centering \begin{minipage}{\textwidth} \centering \includegraphics[width=0.55\linewidth]{patch} \caption{Definition of the metric $d_n^{\dec}$.} \label{f:metric} \end{minipage} \end{figure} Let $(T_n)_{n\geq 1}$ be a sequence of random trees and $(\tilde B_k,\tilde \rho_k,\tilde d_k,\tilde \ell_k,\tilde \nu_k)_{k\geq 0}$ a sequence of random decorations indexed by their size. Conditionally on $T_n$, for each $v \in T_n$, sample independently a decoration $(B_{v},\rho_{v},d_{v},\ell_{v},\nu_{v})$ distributed as $(\tilde B_{\out{}(v)},\tilde \rho_{\out{}(v)},\tilde d_{\out{}(v)},\tilde \ell_{\out{}(v)},\tilde \nu_{\out{}(v)})$. We aim to glue the decorations to each other along the structure of $T_n$ which is informally explained as follows. Consider the disjoint union \begin{align*} T_n^{\ast,\dec} = \bigsqcup_{v\in T_n} B_v \end{align*} of decorations on which is defined an equivalence relation $\sim$, such that for each $v$ in $T_n$, and each $1\leq i \leq \out{}(v)$ we have $\ell_v(i) \sim \rho_{vi}$. In other words, the $i$-th external root of $B_v$ is glued to the internal root of $B_{vi}$. The precise construction is given below. 
We first define a pseudo-metric $d_n^{\ast,\dec}$ on $T_n^{\ast,\dec}$. For each $x \in T_n^{\ast,\dec}$ we let $v(x)$ denote the unique vertex in $T_n$ such that $x\in B_{v(x)}$. If $w$ is a vertex in $T_n$ such that $w \preceq v(x)$, let $Z_w^x \in B_w$ be defined by \begin{align*} Z_w^x = \begin{cases} \ell_w(i) & \text{if $wi \preceq v(x)$,}\\ x & \text{if $w = v(x)$}. \end{cases} \end{align*} In order to clarify the first case on the right hand side, note that if $w \prec v(x)$ then there is a unique child $wi$, $1 \le i \le \out{T_n}(w)$, of $w$ in $T_n$ such that $wi \preceq v(x)$, and we set $Z_w^x = \ell_w(i)$. Now, let $x,y \in T_n^{\ast,\dec}$ and first assume that $v(x) \preceq v(y)$, in which case we set \begin{align*} d_n^{\ast,\dec}(x,y) &= d_n^{\ast,\dec}(y,x) = d_{v(x)}\left(x,Z_{v(x)}^{y}\right)+\sum_{v(x) \prec w \preceq v(y)} d_w\left(\rho_w,Z_w^{y}\right). \end{align*} Otherwise, if $v(x)$ and $v(y)$ are not in an ancestral relation, set \begin{align}\label{eq:definition distance dec n} d_n^{\ast,\dec}(x,y) = d_{v(x)\wedge v(y)}\left(Z_{v(x)\wedge v(y)}^{x},Z_{v(x)\wedge v(y)}^{y}\right) + d_n^{\ast,\dec}\left(Z_{v(x)\wedge v(y)}^{x},x\right) + d_n^{\ast,\dec}\left(Z_{v(x)\wedge v(y)}^{y},y\right). \end{align} These definitions are clarified in Fig.~\ref{f:metric}. Finally, the decorated tree is the quotient \begin{align} T_n^{\dec} = T_n^{\ast,\dec}\Big/\sim \end{align} where $x\sim y$ if and only if $d_n^{\ast,\dec}(x,y) = 0$, and the metric induced by $d_n^{\ast,\dec}$ on the quotient is denoted by $d_n^{\dec}$. Define a measure $\nu^\dec_n$ on $T_n^\dec$ by \begin{align*} \nu_n^\dec = \sum_{v\in T_n} \nu_v. \end{align*} Here the mass of a gluing point is the sum of the masses of all vertices of its equivalence class. Finally, root $T_n^\dec$ at $\rho_\varnothing$. We view the decorated tree as a random, compact, rooted and measured metric space \begin{align*} (T_n^\dec,d_n^\dec,\rho_\varnothing, \nu_n^\dec).
\end{align*} \subsection{Conditions on the trees and decorations} \label{ss:conditions} Assume in this subsection that $\alpha \in (1,2)$ is fixed and that $(T_n)_n$ is a sequence of random trees with decorations $(B_v,d_v,\rho_v,\ell_v,\nu_v)_{v\in T_n}$ sampled independently according to $(\tilde B_k,\tilde \rho_{k},\tilde d_{k},\tilde \ell_{k},\tilde \nu_{k})_{k\geq 0}$. Let $\mu_k$ be a measure on $\tilde B_k$ defined by \begin{align*} \mu_k = \frac{1}{k}\sum_{j=1}^{k}\delta_{\tilde\ell_k(j)}. \end{align*} We let $v_1,\dots, v_{|T_n|}$ be the vertices of $T_n$ listed in depth-first-search order and for all $1\leq k \leq |T_n|$, we let \begin{align*} M_k=\sum_{i=1}^{k}\nu_{v_i}(B_{v_i}) \end{align*} be the sum of the total masses of the first $k$ decorations in depth-first order. We impose the following conditions on the decorations (D), the trees (T) and on both trees and decorations (B). \subsubsection*{Conditions on the decorations} \label{sss:decorations} \begin{enumerate}[\quad D1)] \item \label{c:GHPlimit} (\emph{Pointed GHP limit}). There are $\beta,\gamma > 0$, a compact metric space $(\mathcal{B},d)$ with a point $\rho$, a Borel probability measure $\mu$ and a finite Borel measure $\nu$ such that \begin{align*} \left(\tilde B_k,\tilde\rho_k,k^{-\gamma} \tilde d_k, \mu_k,k^{-\beta}\tilde \nu_k\right) \to (\mathcal{B},\rho,d,\mu,\nu) \end{align*} as $k\to \infty$, weakly in the pointed Gromov--Hausdorff--Prokhorov sense. \item \label{c:moment_diam_limit} (\emph{Moments of the diameter of the limit}) The limiting block $(\mathcal{B},\rho,d,\mu,\nu)$ satisfies $\Ec{\diam(\mathcal{B})^p}<\infty$, for some $p>\frac{\alpha}{\gamma}$.
\item \label{c:mass_limit} (\emph{Limit of the expected mass}) It holds that $$k^{-\beta}\cdot \Ec{\tilde \nu_k(\tilde B_k)} \underset{k\rightarrow \infty }{\rightarrow} \Ec{\nu(\mathcal{B})} <\infty.$$ \end{enumerate} \subsubsection*{Conditions on the trees}\label{sss:tree} \begin{enumerate}[\quad T1)] \item \label{c:permute} (\emph{Symmetries of subtrees}) The distribution of $T_n$ is invariant under the permutation of any set of siblings. \item \label{c:invariance} (\emph{Invariance principle for the \L ukasiewicz path}) There is a sequence $b_n \ge 0$ such that \[ (b_n^{-1} W_{\lfloor|T_n|t\rfloor}(T_n))_{0 \le t \le 1} \convdis X^{\text{exc}, (\alpha)} \] where $(W_k(T_n))_{0\leq k \leq |T_n|}$ is the \L ukasiewicz path of $T_n$ and $X^{\text{exc},(\alpha)}$ is an excursion of the spectrally positive $\alpha$-stable Lévy process defined by \eqref{eq:levymeasure}. \end{enumerate} \subsubsection*{Conditions on both}\label{sss:both} Let $\gamma$ and $b_n$ be as in conditions D and T in the previous sections. \begin{enumerate}[\quad B1)] \item \label{cond:small decorations dont contribute to distances} (\emph{Small decorations do not contribute}). For any $\epsilon>0$, \begin{align*} \Pp{\sup_{v\in T_n} \left\lbrace b_n^{-\gamma}\sum_{w\preceq v} \diam(B_w) \ind{\out{T_n}(w)\leq \delta b_n}\right\rbrace>\epsilon } \underset{\delta\rightarrow 0}{\rightarrow}0, \end{align*} uniformly in $n\geq 1$ for which $T_n$ is well-defined. \item \label{cond:measure is spread out} (\emph{Measure is spread out, case $\beta\leq \alpha$}). We assume that we have the following uniform convergence in probability, for some sequence $(a_n)$, \begin{align}\label{eq:convergence depth-first mass} \left(\frac{M_{\lfloor |T_n|t\rfloor}}{a_n}\right)_{t \in \intervalleff{0}{1}} \underset{n\rightarrow\infty}{\longrightarrow} (t)_{0 \leq t \leq 1}. \end{align} \item \label{cond:small decorations dont contribute to mass} (\emph{Measure is on the blocks, case $\beta>\alpha$}).
For any $\epsilon>0$ we have \begin{align} \limsup_{n\rightarrow \infty} \Pp{b_n^{-\beta}\sum_{i=1}^{|T_n|}\nu_{v_i}(B_{v_i}) \ind{\out{T_n}(v_i)\leq \delta b_n} > \epsilon } \underset{\delta\rightarrow 0}{\longrightarrow} 0. \end{align} In this case we write $a_n=b_n^\beta$. \end{enumerate} \subsection{Reduction to the case of shuffled external roots} \label{ss:shuffle} Given our Hypothesis T\ref{c:permute} on the tree $T_n$, we can always shuffle the external roots of the blocks that we use without changing the law of the object $T_n^\dec$ that we construct. More precisely, given the law of $(\tilde B_k,\tilde d_k,\tilde\rho_k,\tilde\ell_k,\tilde \nu_k)$ for any $k\geq 0$ we can define the block with shuffled external roots $(\hat B_k,\hat d_k,\hat\rho_k,\hat\ell_k,\hat\nu_k)$ as \begin{align*} (\hat B_k,\hat d_k,\hat\rho_k,\hat\nu_k)=(\tilde B_k,\tilde d_k,\tilde\rho_k,\tilde \nu_k) \quad \text{and} \quad \hat\ell_k(i)= \tilde\ell_k(\sigma(i)), \end{align*} where $\sigma$ is a uniform random permutation of $\{1,\ldots,k\}$, independent of everything else. Then, assuming that Hypothesis~T\ref{c:permute} holds, the law of the object $\hat T_n^\dec$ constructed with underlying tree $T_n$ and decorations $(\hat B_k,\hat \rho_k,\hat d_k,\hat \ell_k,\hat \nu_k)_{k\geq 1}$ is the same as that of the object $T_n^\dec$ constructed with underlying tree $T_n$ and decorations $(\tilde B_k,\tilde d_k,\tilde\rho_k,\tilde\ell_k,\tilde \nu_k)_{k\geq 0}$. We will require the following lemma, which informally states that the sequence of shuffled external roots in a decoration $B_k$ converges weakly to an i.i.d.\ sequence of external roots in the GHP-limit $\cB$ of $B_k$. \begin{lemma} \label{l:shuffle} Let $(\tilde B_k,\tilde d_k,\tilde\rho_k,\tilde\ell_k,\tilde \nu_k)_{k\geq 1}$ be a (deterministic) sequence of decorations which converges to a decoration $(\cB,\rho,d,\mu,\nu)$ in the GHP-topology. Let $Y_1,Y_2,\ldots$ be a sequence of independent points in $\cB$ sampled according to $\mu$.
Then there exists a sequence $\phi_k: \tilde B_k \to \cB$ of $\epsilon_k$-isometries, with $\epsilon_k \to 0$, such that \begin{align*} (\phi_k(\hat\ell_k(1)),\phi_k(\hat\ell_k(2)),\ldots,\phi_k(\hat\ell_k(k)),\rho,\rho,\ldots) \convdis (Y_1,Y_2,\ldots) \end{align*} as $k\to \infty$. \end{lemma} \begin{proof} First note that, for every fixed $m\geq 1$, \begin{align*} \left\lVert \mu_k^{\otimes m} - \mathcal{L}\big(\hat{\ell}_k(1),\ldots,\hat{\ell}_k(m)\big)\right\rVert_{TV} \to 0, \end{align*} as $k\to \infty$, where $\mathcal{L}\big(\hat{\ell}_k(1),\ldots,\hat{\ell}_k(m)\big)$ denotes the joint law of the first $m$ uniformly shuffled labels. Since the convergence in the statement is with respect to the product topology, we may thus asymptotically work with the product measure $\mu_k^{\otimes m}$ instead of the shuffled labels. The GHP-convergence guarantees the existence of $\epsilon_k$-isometries $\phi_k:\tilde B_k \to \cB$ with $\epsilon_k\to 0$, such that the pushforward $\phi_k^\ast \mu_k$ converges weakly to $\mu$. It follows that $(\phi_k^\ast \mu_k)^{\otimes \mathbb{N}} \to \mu^{\otimes \mathbb{N}}$ weakly, which concludes the proof. \end{proof} In the rest of the paper, we are always going to assume that the external roots of the decorations that we use have been shuffled as above. Note that there can be situations where there is a natural ordering of the external roots, e.g.~when the decorations are oriented circles, maps with an oriented boundary, trees encoded by contour functions and more. \subsection{The case of Bienaymé--Galton--Watson trees} \label{ss:BGW} One important case of application is when the tree $T_n$ is taken to be a Bienaymé--Galton--Watson (BGW) tree in the domain of attraction of an $\alpha$-stable tree, conditioned on having $n$ vertices. If we denote by $\mu$ a reproduction law on $\mathbb Z_+$, this corresponds to having $\sum_{i=0}^{\infty}i \mu(i)= 1$ and the function \begin{align*} x\mapsto\frac{1}{\mu([x,\infty))} \end{align*} being $\alpha$-regularly varying; see the Appendix for a discussion of regularly varying functions.
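A prototypical example of such a reproduction law, stated with unspecified normalizing constants, is the following.

```latex
\begin{example}
Fix $\alpha \in (1,2)$ and let $\mu(k) = c\, k^{-1-\alpha}$ for $k \geq 1$, where $c > 0$ and
$\mu(0)$ are chosen so that $\mu$ is a probability measure with mean $1$. Then
$\mu([x,\infty)) \sim \frac{c}{\alpha}\, x^{-\alpha}$ as $x \to \infty$, so that
$x \mapsto \nicefrac{1}{\mu([x,\infty))}$ is $\alpha$-regularly varying, and the corresponding
conditioned $\BGW$ trees lie in the domain of attraction of the $\alpha$-stable tree, with
$b_n$ of order $n^{1/\alpha}$ up to a slowly varying factor.
\end{example}
```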
Note that then $T_n$ is possibly not defined for all $n\geq 1$; in that case we only consider values of $n$ for which it is. Condition T1 is satisfied for any BGW tree and T2 holds, thanks to results of Duquesne and Le Gall \cite{MR1964956,MR2147221}, for a sequence $(b_n)_{n\geq 1}$ that depends on the tail behaviour of $\mu([n,\infty))$. In the rest of this section, we assume that $\alpha\in(1,2)$ and $\gamma> \alpha -1$ are fixed and that we are working with BGW trees with reproduction distribution $\mu$ as above and corresponding sequence $(b_n)_{n\geq 1}$. The results below concern properties of the diameter and the measure on the decorations and they guarantee that the remaining conditions in Section~\ref{ss:conditions} are satisfied. Their proofs are given in the Appendix. We first state a result that ensures that small decorations do not contribute to the macroscopic distances. \begin{proposition}\label{prop:small blobs don't contribute} Under the following moment condition on the diameter of the decorations \begin{align*} \sup_{k\geq 0} \Ec{\left(\frac{\diam(\tilde B_k)}{k^\gamma}\right)^m}<\infty, \end{align*} for some $m>\frac{4\alpha}{2\gamma +1 -\alpha}$, condition B1 is satisfied. \end{proposition} \begin{remark} In \cite[Remark 1.5]{arXiv:2011.07266}, it is stated that in this context $b_n^{-\gamma}\cdot \diam(T_n^\dec)$ is tight under an assumption that corresponds to having $m>\frac{\alpha}{\alpha -1}$ in the above statement. Note that neither of those assumptions is optimal, since $\frac{\alpha}{\alpha -1}$ and $\frac{4\alpha}{2\gamma -(\alpha-1)}$ are not always ordered in the same way for different values of $\alpha$ and $\gamma$. In particular, our bound does not blow up for fixed $\gamma$ as $\alpha\rightarrow 1$. \end{remark} Now we state another result that ensures that either condition B\ref{cond:measure is spread out} or B\ref{cond:small decorations dont contribute to mass} is satisfied in this setting.
We let $D$ denote a random variable that has law given by the reproduction distribution $\mu$ of the $\BGW$ tree. \begin{lemma}\label{lem:moment measure discrete block implies B2 or B3} Assume that D1 holds for some value $\beta>0$ and that \begin{align*} \sup_{k\geq 0} \Ec{\left(\frac{\tilde\nu_{k}(\tilde B_k)}{k^\beta}\right)}<\infty. \end{align*} Then condition B\ref{cond:measure is spread out} or B\ref{cond:small decorations dont contribute to mass} is satisfied, depending on the value of $\beta$: \begin{itemize} \item If $\beta < \alpha$ then B\ref{cond:measure is spread out} holds with $a_n:=n \cdot \Ec{\nu_D(B_D)}$. \item If $\beta=\alpha$ then B\ref{cond:measure is spread out} holds with \begin{equation} a_n:=\left\{ \begin{aligned} &n \cdot \Ec{\nu_D(B_D)} \quad &\text{if}\quad \Ec{D^\alpha} <\infty ,\\ &n \cdot \Ec{\nu(\cB)}\cdot \Ec{D^\alpha \ind{D\leq b_n}} \quad &\text{if} \quad \Ec{D^\alpha} =\infty, \end{aligned} \right. \end{equation} where in the second case, we further assume that $\sup_{k\geq 0} \Ec{\left(\frac{\tilde\nu_{k}(\tilde B_k)}{k^\beta}\right)^{1+\eta}}<\infty$ for some $\eta>0$. \item If $\beta > \alpha$ then B\ref{cond:small decorations dont contribute to mass} holds with $a_n = b_n^\beta$. \end{itemize} \end{lemma} \begin{remark} \label{re:remarkleaves} Both of these results are stated for $\BGW$ trees conditioned on having exactly $n$ vertices. In fact, it is possible to use arguments from \cite[Section~4, Section~5]{MR2946438} to deduce that the same results hold for $\BGW$ trees conditioned on having $\lfloor \mu(\{0\}) n\rfloor$ leaves. This is actually the setting that we need for most of our applications. \end{remark} \section{Construction in the continuous} \label{s:cts_construction} Let $(X^{\text{exc},(\alpha)}_t; t\in[0,1])$ be an excursion of an $\alpha$-stable spectrally positive L\'{e}vy process, with $\alpha\in(1,2)$, and denote the corresponding $\alpha$-stable tree by $\cT_\alpha$.
Denote the projection from $\intervalleff01$ to $\cT_\alpha$ by $\mathbf{p}$. Recall the definition \begin{align*} I_s^t = \inf_{[s,t]} X^{\text{exc},(\alpha)}. \end{align*} We define a partial order $\preceq$ on $[0,1]$ as follows: for every $s,t\in[0,1]$, \begin{equation}\label{eq:def genealogical order continuous tree} s\preceq t \quad \text{if} \quad s\leq t \quad \text{and} \quad X^{\text{exc},(\alpha)}_{s^{-}} \leq I_s^t. \end{equation} For $s,t\in[0,1]$ with $s\preceq t$, define \begin{equation}\label{eq:definition xts} x_s^t = I_s^t - X_{s^-}^{\text{exc}} \end{equation} and let \begin{equation}\label{eq:definition uts} u_s^t := \frac{x_s^t}{\Delta_s} \end{equation} where $\Delta_s = X^{\text{exc},(\alpha)}_s-X^{\text{exc},(\alpha)}_{s^-}$ is the jump of $X_s^{\text{exc},(\alpha)}$ at $s$. If $s$ is not a jump time of the process, we simply let $u_s^t:=0$. Let us denote $\Bb=\enstq{v\in \intervalleff{0}{1}}{\Delta_v>0}$, which is in one-to-one correspondence with the set of branch points of $\cT_\alpha$. Almost surely, these points in the tree all have infinite degree, meaning that for any $v\in \Bb$, the space $\cT_\alpha\setminus\{\mathbf{p}(v)\}$ has a countably infinite number of connected components. For any $v\in\Bb$, we let \begin{align*} \cA_v:= \enstq{u_v^t}{t\in\intervalleff{0}{1} \quad \text{and} \quad \exists s\in \intervalleff{0}{1}, v\prec s \prec t }. \end{align*} This set $\cA_v$ is in one-to-one correspondence with the connected components of $\cT_\alpha\setminus\{\mathbf{p}(v)\}$ above $\mathbf{p}(v)$, that is, excluding the component containing the root. Conditionally on this, let $\left((\mathcal{B}_v,\rho_v,d_v,\mu_v,\nu_v)\right)_{v\in \Bb}$ be random measured metric spaces, sampled independently with the distribution of $(\mathcal{B},\rho,d,\mu,\nu)$ and for every $v\in \Bb$, let $(Y_{v,a})_{a\in\cA_v}$ be i.i.d.\ with law $\mu_v$. For $s \in [0,1]\setminus \Bb$, let $\mathcal{B}_s = \{s\}$.
Define for every $v\in\Bb$ the rescaled distance \begin{equation*} \delta_v=\Delta_v^\gamma \cdot d_v \end{equation*} and for any $t\succeq v$ let \begin{align}\label{eq:definition Zvt} Z_v^t = \begin{cases} Y_{v,u_v^t} & \text{if} \quad u^t_v\in\cA_v,\\ \rho_v \qquad & \text{otherwise.} \end{cases} \end{align} Now we are ready to define a pseudo-distance $d$ on the set $\cT^{\ast,\dec}_\alpha=\bigsqcup_{s \in [0,1]}\mathcal{B}_s$. Before doing that, let us define, for any $v,w\in\Bb$ such that $v\preceq w$, and any $x\in \mathcal{B}_w$, \begin{align*} Z_v^x = \begin{cases} Z^w_v & \text{if } v\prec w,\\ x \qquad & \text{if } v=w. \end{cases} \end{align*} We also extend the genealogical order to the whole space $\cT^{\ast,\dec}_\alpha$ by declaring that for any $v\in \Bb$ and $x\in B_v$, we have $v\preceq x \preceq v$ (thus making it a preorder instead of an order relation). We also extend the definition of $\wedge$ by requiring that $a\wedge b$ is always chosen in $\intervalleff{0}{1}$. Then we define $d_0(a,b)$ for $a\preceq b$: \begin{align}\label{eq:def dist 0} d_0(a,b)=d_0(b,a) = \underset{v\in\Bb}{\sum_{a\prec v\preceq b}}\delta_v(\rho_v,Z_v^b), \end{align} and for any $a,b\in\cT^{\ast,\dec}_\alpha$, \begin{align*} d(a,b)=d_0(a\wedge b,a)+d_0(a\wedge b, b) + \delta_{a\wedge b }(Z_{a\wedge b}^a,Z_{a\wedge b}^b). \end{align*} We say that two elements $a$ and $b$ are equivalent, $a\sim b$, if $d(a,b)=0$. We quotient the space by the equivalence relation, $\cT^\dec_\alpha=\cT^{\ast,\dec}_\alpha/\sim$, and denote by $\mathbf{p}^\dec$ the corresponding quotient map. We can endow $\cT_\alpha^\dec$ with two types of measures: \begin{itemize} \item If $\beta \leq \alpha$, then we define the measure $\upsilon_\beta^\dec$ on $\cT_\alpha^\dec$ as the push-forward of the Lebesgue measure on $\intervalleff{0}{1}\setminus \Bb$ through the projection map $\mathbf{p}^\dec:\cT^{\ast,\dec}_\alpha \rightarrow \cT_\alpha^\dec$ onto the quotient.
\item If $\beta > \alpha$, we define the measure $\upsilon_\beta^\dec$ as (the push-forward through $\mathbf{p}^\dec$ of) \begin{align*} \sum_{t \in \Bb } \Delta_t^\beta \nu_t \end{align*} which is almost surely a finite measure, provided for instance that $\nu(\cB)$ has finite expectation. \end{itemize} Note that so far, it is not clear that the construction above yields a compact metric space (or even a metric space), as the metric $d$ could in principle take arbitrarily large or even infinite values. This is handled by the work of Section~\ref{sec:self-similarity}. A result that is important in our approach is the following, which ensures that, under a moment assumption on the diameter of the decorations, the distances in the whole object are dominated by the contributions of the decorations corresponding to large jumps. This will in particular be useful in the next section. \begin{lemma} \label{l:smalldec} Assume that $\gamma > \alpha-1$ and that condition D\ref{c:moment_diam_limit} is satisfied. Then, \begin{align} \sup_{s\in \intervalleff{0}{1}} \left\lbrace\sum_{t\prec s} \Delta_t^\gamma \diam(\mathcal{B}_t) \ind{\Delta_t\leq \delta}\right\rbrace \rightarrow 0 \end{align} as $\delta\rightarrow 0$ in probability. \end{lemma} \section{Invariance principle} \label{s:invariance} In this section we state and prove one of the main results of the paper, namely that under the conditions stated in Section~\ref{ss:conditions}, the discrete decorated trees, with a properly rescaled graph distance, converge towards $\alpha$-stable decorated trees. \begin{theorem} \label{th:invariance} Let $(T_n)_{n\geq 1}$ be a sequence of random trees and $(B_v,\rho_{v},d_{v},\ell_{v},\nu_{v})_{v\in T_n}$ be random decorations on $T_n$ sampled independently according to $(\tilde B_k,\tilde \rho_{k},\tilde d_{k},\tilde \ell_{k},\tilde \nu_{k})_{k\geq 0}$.
Assume that Conditions D, T and B in Section~\ref{ss:conditions} are satisfied for some exponents $\alpha, \beta,\gamma$ and denote the weak limit of $(\tilde B_k,\tilde \rho_{k},\tilde d_{k},\tilde \ell_{k},\tilde \nu_{k})_{k\geq 0}$ in the sense of D\ref{c:GHPlimit} by $(\mathcal{B},\rho,d,\mu,\nu)$. Suppose that $\alpha-1<\gamma$ and let $\cT_\alpha^\dec$ be the $\alpha$-stable tree $\cT_\alpha$, decorated according to $(\mathcal{B},\rho,d,\mu,\nu)$ with distance exponent $\gamma$. Then \begin{align*} (T_n^\dec,b_n^{-\gamma}d_n^\dec, a_n^{-1}\cdot \nu_n^\dec) \to (\mathcal{T}_\alpha^\dec, d, \upsilon_\beta^\dec) \end{align*} in distribution in the Gromov-Hausdorff-Prokhorov topology. \end{theorem} \begin{proof} We first describe in detail the GH-convergence of the sequence of decorated trees, leaving out the measures $\nu_n^\dec$, and then give a less detailed outline of the GHP-convergence. We start by constructing, in several steps, a coupling that allows us to construct an $\epsilon$-isometry between $(T_n^\dec,b_n^{-\gamma} d_n^\dec)$ and $(\cT_\alpha^\dec,d)$ for arbitrarily small $\epsilon$. \emph{Step 1, coupling of trees:} By Condition T\ref{c:invariance}, we may use Skorokhod's representation theorem, and construct on the same probability space $X^{\text{exc},(\alpha)}$ and a sequence of random trees $T_n$ such that \[ (b_n^{-1} W_{\lfloor|T_n|t\rfloor}(T_n))_{0 \le t \le 1} {\,{\longrightarrow}\,} X^{\text{exc},(\alpha)} \] almost surely. We will keep the same notation for all random elements on this new space. Order the vertices $(v_{n,1},v_{n,2},\ldots)$ of $T_n$ in non-increasing order of their degree (in case of ties use some deterministic rule) and denote their positions in the lexicographical order on the vertices of $T_n$ by $(t_{n,1},t_{n,2},\ldots)$, in such a way that for any $k\geq 1$ we have $v_{t_{n,k}}=v_{n,k}$.
Similarly, order the set $\Bb$ of jump times of $X^{\text{exc},(\alpha)}$ in decreasing order $(t_1,t_2,\ldots)$ of the sizes of the jumps (almost surely, there are no ties). Then, by properties of the Skorokhod topology, it holds that for any $k\geq 1$, \begin{align}\label{eq:convergence size and position of jumps} \frac{t_{n,k}}{|T_n|} \to t_k \qquad \text{and} \qquad \frac{\out{}(v_{n,k})}{b_n} \rightarrow \Delta_{t_k} \end{align} almost surely, as $n\to \infty$. Because of the way we can retrieve the genealogical order from the coding function, the genealogical order on $(v_{n,1},v_{n,2},\dots, v_{n,N})$ for any fixed $N$ stabilizes, for $n$ large enough, to that of $(t_{1},t_{2},\dots, t_{N})$ for the order $\preceq$ defined in \eqref{eq:def genealogical order continuous tree}. \emph{Step 2, coupling of decorations:} Let us shuffle the external roots of the decorations $(\tilde B_k,\tilde \rho_{k},\tilde d_{k},\tilde \ell_{k},\tilde \nu_{k})_{k\geq 0}$ with independent uniform permutations as described in Section~\ref{ss:shuffle} and denote the shuffled decorations by $(\hat B_k,\hat\rho_{k},\hat d_{k},\hat \ell_{k},\hat \nu_{k})_{k\geq 0}$. From now on, sample $(B_v,\rho_{v},d_{v},\ell_{v},\nu_{v})_{v\in T_n}$ from the latter which, as explained before, does not affect the distribution of the decorated tree. Since for any $k\geq 1$, $\out{T_n}(v_{n,k}) \to \infty$ as $n\to \infty$, it holds, by Condition D\ref{c:GHPlimit}, that \begin{align*} (B_{v_{n,k}},\rho_{v_{n,k}},(\out{}(v_{n,k}))^{-\gamma}d_{v_{n,k}},\mu_{v_{n,k}},(\out{}(v_{n,k}))^{-\beta}\nu_{v_{n,k}}) \to (\mathcal{B},\rho,d,\mu,\nu) \end{align*} weakly as $n\to\infty$. We again use Skorokhod's theorem and modify the probability space such that for each $k\geq 1$, this convergence holds almost surely and the a.s.~limit will be denoted by $(\mathcal{B}_{t_k},\rho_{t_k},d_{t_k},\mu_{t_k}, \nu_{t_k})$. We still keep the same notation for random elements.
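The ordering of jump times by decreasing jump size used in Step 1 has an elementary discrete analogue: extract the positive increments of a path and sort their indices by size. A small sketch with our own helper name, not part of the proof:

```python
import numpy as np

def jumps_by_size(path):
    """Return the indices (times) of the positive increments of a discrete path,
    sorted by decreasing increment size; ties are resolved by position,
    mimicking the deterministic tie-breaking rule of Step 1."""
    increments = np.diff(path)
    idx = np.flatnonzero(increments > 0)
    order = np.argsort(-increments[idx], kind="stable")
    return idx[order] + 1  # +1: report the time at which the jump occurs
```

For the toy path $(0,3,2,2.5,1)$, the jumps of sizes $3$ and $0.5$ occur at times $1$ and $3$, in that order.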
For the rest of the proof, we will write $\mathbf d_v$ in place of $b_n^{-\gamma}\cdot d_{v}$ to lighten notation. From the convergence \eqref{eq:convergence size and position of jumps}, and the definition of $\delta_v=\Delta_v^\gamma \cdot d_v$, we can re-express the above convergence as the a.s.\ convergence \begin{align}\label{eq:convergence block unk to block vk} (B_{v_{n,k}},\rho_{v_{n,k}}, \mathbf d_{v_{n,k}},\mu_{v_{n,k}},(b_n)^{-\beta}\nu_{v_{n,k}}) \to (\mathcal{B}_{t_k},\rho_{t_k},\delta_{t_k},\mu_{t_k},\Delta_{t_k}^\beta\nu_{t_k}). \end{align} \emph{Step 3, coupling of gluing points:} Fix a $\delta > 0$ and let $N(\delta)$ be the (finite) number of vertices $v \in \Bb$ such that $\Delta_v > \delta$. Thanks to Step 1, we can almost surely consider $n$ large enough so that the genealogical order on $(v_{n,1},v_{n,2},\dots, v_{n,N(\delta)})$ induced by $T_n$ corresponds to that of $(t_{1},t_{2},\dots, t_{N(\delta)})$. Note that $X^{\text{exc},(\alpha)}$ almost surely does not have a jump of size exactly $\delta$, and so for $n$ large enough the vertices $(v_{n,1},v_{n,2},\dots, v_{n,N(\delta)})$ are exactly the vertices with degree larger than $\delta b_n$. Now recall the notation \eqref{eq:definition uts} and for any $k\in \{1,2,\dots N(\delta)\}$, consider the finite set \begin{align*} \enstq{u_{t_k}^{t_i}}{i \in \{1,2,\dots N(\delta)\},\ t_i\succ t_k} \subset \mathcal A_{t_k} \end{align*} and enumerate its elements as $m_k(1), m_k(2), \dots , m_k(a_k)$ in increasing order. Similarly consider the set \begin{align*} \enstq{\ell \in \mathbb N}{v_{n,k}\ell \preceq v_{n,i} \text{ for some } i \in \{1,2,\dots N(\delta)\}}\subset \{1,\dots, \out{}(v_{n,k})\} \end{align*} and denote its elements by $j_1,j_2,\dots, j_{a_k}$. Note that these two sets have the same cardinality $a_k$ when $n$ is large enough because of our assumptions.
For each $k\geq 1$, given $(B_{t_k},\rho_{t_k},d_{t_k},\mu_{t_k}, \nu_{t_k})$ and $X^{\text{exc},(\alpha)}$, the points $(Y_{t_k,a})_{a\in \mathcal{A}_{t_k}}$ are sampled independently according to $\mu_{t_k}$. By Lemma~\ref{l:shuffle} and \eqref{eq:convergence block unk to block vk} we may find a sequence $\phi_{v_{n,k}}: B_{v_{n,k}} \to \mathcal{B}_{t_k}$ of $\epsilon_{v_{n,k}}$-isometries, with $\epsilon_{v_{n,k}}\to 0$ as $n\to \infty$, such that \begin{align} \label{eq:coupling_glue} (\phi_{v_{n,k}}(\ell_{v_{n,k}}(j_1)),\phi_{v_{n,k}}(\ell_{v_{n,k}}(j_2)),\ldots,\phi_{v_{n,k}}(\ell_{v_{n,k}}(j_{a_k}))) {\,{\buildrel w \over \longrightarrow}\,} (Y_{t_k,m_k(1)},Y_{t_k,m_k(2)},\ldots,Y_{t_k,m_k(a_k)} ). \end{align} We modify the probability space once again, using Skorokhod's representation theorem, so that this convergence holds almost surely. When $n$ is large enough, for $k,k'$ such that $v_{n,k} \prec v_{n,k'}$ we know that $v_{n,k} j_p \preceq v_{n,k'}$ and $u_{t_k}^{t_{k'}}=m_k(p)$ for some $p\in \{1,2,\dots, a_k\}$, so we have $Z_{v_{n,k}}^{v_{n,k'}}= \ell_{v_{n,k}}(j_p)$ as well as $Z_{t_k}^{t_{k'}}=Y_{t_k,m_k(p)}$. Plugging this back into \eqref{eq:coupling_glue}, we obtain the convergence \begin{align*} \phi_{v_{n,k}}(Z_{v_{n,k}}^{v_{n,k'}}) \rightarrow Z_{t_k}^{t_{k'}}. \end{align*} \emph{Step 4, convergence of the decorated trees:} Now, construct the sequence of decorated trees $T_n^\dec$ from $T_n$ and the decorations \\ $(B_{v_{n,k}},\rho_{v_{n,k}},d_{v_{n,k}},\ell_{v_{n,k}},\nu_{v_{n,k}})_{k\geq 1}$. Similarly, construct the tree $\mathcal{T}_\alpha^\dec$ from $\mathcal{T}_\alpha$ and the family of decorations $(\mathcal{B}_{t_k},\rho_{t_k},d_{t_k},\mu_{t_k},\nu_{t_k})_{k\geq 1}$. Denote the projections to the quotients by $\mathbf{p}_n^\dec$ and $\mathbf{p}^\dec$ respectively.
We claim that this coupling of trees, decorations and gluing points guarantees that the convergence \begin{align*} (T_n^\dec,b_n^{-\gamma}d_n^\dec) \to (\mathcal{T}_\alpha^\dec, d) \end{align*} holds in probability for the Gromov--Hausdorff topology. Introduce \begin{align} r_n^\delta := b_n^{-\gamma} \cdot\sup_{v\in T_n} \left\lbrace\sum_{w\preceq v} \diam(B_w) \ind{\out{T_n}(w)\leq \delta b_n}\right\rbrace, \end{align} and also \begin{align}\label{def:r delta} r^\delta := \sup_{s\in \intervalleff{0}{1}} \left\lbrace\sum_{t\prec s} \Delta_t^\gamma \diam(B_t) \ind{\Delta_t\leq \delta}\right\rbrace. \end{align} It holds, by Condition B1 and Lemma~\ref{l:smalldec}, that \begin{align} \lim_{\delta \to 0} \limsup_{n\rightarrow\infty} r_n^\delta = 0 \quad \text{and} \quad \lim_{\delta\rightarrow 0} r^\delta =0 \end{align} in probability. Let us consider the subset $\cT_\alpha^{\dec,\delta}\subset \cT_\alpha^\dec$ which consists only of the blocks corresponding to jumps larger than $\delta$, i.e. \begin{align} \cT_\alpha^{\dec,\delta} = \bigsqcup_{1\leq k \leq N(\delta)}\mathbf{p}^\dec \left(\mathcal{B}_{t_k}\right), \end{align} endowed with the induced metric $d$. From the construction of $d$ it is clear that we can bound the Hausdorff distance \begin{align}\label{eq:T alpha well approximated by large jumps} \mathrm{d_H}(\cT_\alpha^{\dec,\delta},\cT_\alpha^\dec) \leq 2 r^\delta. \end{align} We proceed similarly with the discrete decorated trees. We introduce $T_n^{\dec,\delta} \subset T_n^\dec$ endowed with the induced metric from $b_n^{-\gamma} \cdot d_n^\dec$, where \begin{align} T_n^{\dec,\delta}=\bigsqcup_{1\leq k \leq N(\delta)} \mathbf{p}_n^\dec\left(B_{v_{n,k}}\right). \end{align} We have, for the distance $b_n^{-\gamma} \cdot d_n^\dec$, \begin{align} \mathrm{d_H}(T_n^{\dec,\delta},T_n^\dec) \leq 2 r_n^\delta.
\end{align} We introduce the following auxiliary object \begin{align*} \widehat{T}_n^{\dec,\delta}=\bigsqcup_{1\leq k \leq N(\delta)} B_{v_{n,k}}, \end{align*} which we endow with the distance $\hat{d}$ defined as \begin{align*} \hat{d}(x,y)= b_n^{-\gamma}\cdot d_n^\dec(\mathbf{p}^\dec_n(x),\mathbf{p}^\dec_n(y))+\delta\cdot \ind{x,y \text{ do not belong to the same block}}. \end{align*} It is easy to check that $\hat{d}$ is a distance on $\widehat{T}_n^{\dec,\delta}$ and that the projection $\mathbf{p}^\dec_n$ is a $\delta$-isometry, so that \begin{align*} \mathrm{d_{GH}}\left((\widehat{T}_n^{\dec,\delta}, \hat{d}),(T_n^{\dec,\delta},b_n^{-\gamma} \cdot d_n^\dec)\right) \leq \delta. \end{align*} The role of this auxiliary object is purely technical: it allows us to work with a metric space very close to $T_n^\dec$ in which no two vertices of two different blocks are identified together. This allows us to define a function from $\widehat{T}_n^{\dec,\delta}$ to $\cT_\alpha^{\dec,\delta}$ by simply patching together the almost isometries that we have on the blocks. \emph{Step 5, constructing an almost isometry:} Now, we construct a function from $\widehat{T}_n^{\dec,\delta}$ that we will show is an $\epsilon$-isometry for some small $\epsilon$ when $n$ is large enough. Recall the $\epsilon_{v_{n,k}}$-isometries $\phi_{v_{n,k}}$ from \eqref{eq:coupling_glue}. We define the function $\phi$ from $\widehat{T}_n^{\dec,\delta}$ to $\cT_\alpha^{\dec}$ as the function such that for any $1\leq k \leq N(\delta)$ and any $x\in B_{v_{n,k}}$ we have \begin{align*} \phi(x):= \mathbf{p}^\dec(\phi_{v_{n,k}} (x)). \end{align*} Now let us show that for $n$ large enough, $\phi$ is an $\epsilon$-isometry for \begin{align*} \epsilon=3r^\delta+3 r^{\delta}_{n}+ 3\delta. \end{align*} First, from \eqref{eq:T alpha well approximated by large jumps} and the definition of $\phi$ from the $\phi_{v_{n,k}}$, it is clear that $\phi$ satisfies the almost surjectivity condition for that value of $\epsilon$.
Then we have to check the almost isometry condition. For that, let us take $n$ large enough so that, on the realization under consideration, the genealogical order on $(v_{n,1},v_{n,2},\dots, v_{n,N(\delta)})$ in $T_n$ coincides with that on $(t_{1},t_{2},\dots, t_{N(\delta)})$ defined by \eqref{eq:def genealogical order continuous tree}. Now for $k,k'\leq N(\delta)$ for which $t_k \prec t_{k'}$ (and equivalently $v_{n,k} \prec v_{n,k'}$) we have \begin{align*} \underset{1\leq i\leq N(\delta)}{\sum_{v_{n,k}\prec v_{n,i}\prec v_{n,k'}}} \mathbf d_{v_{n,i}}\left(\rho_{v_{n,i}},Z_{v_{n,i}}^{v_{n,k'}}\right) &\leq \sum_{w\in T_n: v_{n,k} \prec w \prec v_{n,k'}} \mathbf d_w\left(\rho_w,Z_w^{v_{n,k'}}\right) \leq \underset{1\leq i\leq N(\delta)}{\sum_{v_{n,k}\prec v_{n,i}\prec v_{n,k'}}} \mathbf d_{v_{n,i}}\left(\rho_{v_{n,i}},Z_{v_{n,i}}^{v_{n,k'}}\right) + r_n^\delta \end{align*} and similarly \begin{align*} \underset{1\leq i\leq N(\delta)}{\sum_{t_{k}\prec t_{i}\prec t_{k'}}} \delta_{t_{i}}\left(\rho_{t_{i}},Z_{t_{i}}^{t_{k'}}\right) &\leq \sum_{w\in \mathbb{B}: t_k \prec w \prec t_{k'}} \delta_w\left(\rho_w,Z_w^{t_{k'}}\right) \leq \underset{1\leq i\leq N(\delta)}{\sum_{t_{k}\prec t_{i}\prec t_{k'}}} \delta_{t_{i}}\left(\rho_{t_{i}},Z_{t_{i}}^{t_{k'}}\right) + r^\delta \end{align*} and so we have \begin{align*} &|\sum_{w\in T_n: v_{n,k} \prec w \prec v_{n,k'}} \mathbf d_w\left(\rho_w,Z_w^{v_{n,k'}}\right)- \sum_{w\in \mathbb{B}: t_k \prec w \prec t_{k'}} \delta_w\left(\rho_w,Z_w^{t_{k'}}\right) | \\ &\leq \sum_{i: v_{n,k} \prec v_{n,i} \prec v_{n,k'}} |\mathbf d_{v_{n,i}}\left(\rho_{v_{n,i}},Z_{v_{n,i}}^{v_{n,k'}}\right)- \delta_{t_i}\left(\rho_{t_i},Z_{t_i}^{t_{k'}}\right) | + r^\delta + r_n^\delta\\ &\leq \underset{\epsilon_n}{\underbrace{\sum_{k,k',i=1}^{N(\delta)} |\mathbf d_{v_{n,i}}\left(\rho_{v_{n,i}},Z_{v_{n,i}}^{v_{n,k'}}\right)- \delta_{t_i}\left(\rho_{t_i},Z_{t_i}^{t_{k'}}\right) |}} + r^\delta + r_n^\delta \end{align*} and by \eqref{eq:convergence block unk
to block vk}, the term $\epsilon_n$ in the above display tends to $0$ for a fixed $\delta$ as $n$ tends to infinity. In particular, it is eventually smaller than $\delta$. Now let us take $x,x'\in \widehat{T}_n^{\dec,\delta}$, where $x\in B_{v_{n,k}}$ and $x'\in B_{v_{n,k'}}$, and study the different cases. $\bullet$ Case 1: assume that $k=k'$. Then \begin{align} d(\phi(x),\phi(x')) = \delta_{t_k}(\phi_{v_{n,k}}(x),\phi_{v_{n,k}}(x')) \quad \text{and} \quad \hat d(x,x') =\mathbf d_{v_{n,k}}(x,x'), \end{align} so that $|d(\phi(x),\phi(x')) - \hat d(x,x')|\leq \epsilon_{v_{n,k}}$, since $\phi_{v_{n,k}}$ is an $\epsilon_{v_{n,k}}$-isometry. For a fixed $\delta$, this quantity becomes smaller than $\delta$ for $n$ large enough. Since $k\leq N(\delta) < \infty$ a.s., this holds uniformly in $k$. $\bullet$ Case 2: assume that $t_k\prec t_{k'}$. Then \begin{align*} d(\phi(x),\phi(x')) &= \delta_{t_{k}}(\phi(x),Z_{t_k}^{t_{k'}})+ \sum_{w\in \mathbb{B}: t_k \prec w \prec t_{k'}}\delta_w\left(\rho_w,Z_w^{t_{k'}}\right) + \delta_{t_{k'}}(\rho_{t_{k'}},\phi(x'))\\ &= \delta_{t_{k}}(\phi_{v_{n,k}}(x),Z_{t_k}^{t_{k'}})+ \sum_{w\in \mathbb{B}: t_k \prec w \prec t_{k'}}\delta_w\left(\rho_w,Z_w^{t_{k'}}\right) + \delta_{t_{k'}}(\rho_{t_{k'}},\phi_{v_{n,k'}}(x')) \end{align*} and \begin{align*} \hat d(x,x') = \mathbf d_{v_{n,k}}(x,Z_{v_{n,k}}^{v_{n,k'}})+ \sum_{w\in T_n: v_{n,k} \prec w \prec v_{n,k'}}\mathbf d_w\left(\rho_w,Z_w^{v_{n,k'}}\right) + \mathbf d_{v_{n,k'}}(\rho_{v_{n,k'}},x') + \delta.
\end{align*} Hence we have \begin{align*} |d(\phi(x),\phi(x')) - \hat d(x,x')| \leq |\delta_{t_{k}}(\phi(x),Z_{t_k}^{t_{k'}}) - \mathbf d_{v_{n,k}}(x,Z_{v_{n,k}}^{v_{n,k'}}) |\\ + |\sum_{w\in \mathbb{B}: t_k \prec w \prec t_{k'}}\delta_w\left(\rho_w,Z_w^{t_{k'}}\right) - \sum_{w\in T_n: v_{n,k} \prec w \prec v_{n,k'}}\mathbf d_w\left(\rho_w,Z_w^{v_{n,k'}}\right)|\\ + |\delta_{t_{k'}}(\rho_{t_{k'}},\phi_{v_{n,k'}}(x'))-\mathbf d_{v_{n,k'}}(\rho_{v_{n,k'}},x')| + \delta \end{align*} and so, upper-bounding each term, the last display is eventually smaller than $\epsilon$ as $n\rightarrow \infty$. \\ $\bullet$ Case 3: assume that $t_k\npreceq t_{k'}$ and $ t_{k'}\npreceq t_k$. Then, we can write \begin{align} d(\phi(x),\phi(x'))= \delta_{t_k \wedge t_{k'}} (Z_{t_k \wedge t_{k'}}^{t_k},Z_{t_k \wedge t_{k'}}^{t_{k'}}) + \sum_{t_k \wedge t_{k'} \prec w \prec t_{k'}}\delta_{w}(\rho_{w},Z_w^{t_{k'}})+\delta_{t_{k'}}(\rho_{t_{k'}},\phi(x'))\\ +\sum_{t_k \wedge t_{k'} \prec w\prec t_{k}}\delta_{w}(\rho_{w},Z_w^{t_{k}})+\delta_{t_k}(\rho_{t_{k}}, \phi(x)), \end{align} in the same fashion as before, and a similar expression can be written down for the discrete object. Now, we can consider two cases: either $t_k \wedge t_{k'}=t_j$ for some $j\in \{1,\dots,N(\delta)\}$, in which case we can write \begin{align*} |\delta_{t_k \wedge t_{k'}} (Z_{t_k \wedge t_{k'}}^{t_k},Z_{t_k \wedge t_{k'}}^{t_{k'}}) - \mathbf d_{v_{n,k} \wedge v_{n,k'}} (Z_{v_{n,k} \wedge v_{n,k'}}^{v_{n,k}},Z_{v_{n,k} \wedge v_{n,k'}}^{v_{n,k'}})|\leq 2\epsilon_n, \end{align*} which tends to $0$ as $n\rightarrow \infty$. Otherwise, it means that $\Delta_{t_k \wedge t_{k'}}\leq \delta$, and so we can upper-bound the term in the last display by $r^\delta + r_n^\delta$.
For the other terms, we can reason as in the previous case and get \begin{align} |\hat d(x,x') - d(\phi(x),\phi(x'))|\leq 3r^\delta+4\epsilon_n+3r^{\delta}_{n}+2\delta+ \epsilon_{v_{n,k}} + \epsilon_{v_{n,k'}}, \end{align} which is smaller than $\epsilon$ for $n$ large enough. \paragraph{Adding the measures.} We sketch here a modification of the proof above to upgrade the convergence from the Gromov--Hausdorff to the Gromov--Hausdorff--Prokhorov topology. \medskip \emph{Step 1, the total mass converges:} First, remark that under the assumptions of the theorem with $\beta\leq \alpha$, the total mass $\nu_n^\dec(T_n^\dec)$ of the measure carried on $T_n^\dec$ satisfies \begin{align*} a_n^{-1} \cdot \nu_n^\dec(T_n^\dec) \underset{n\rightarrow \infty}{\rightarrow} 1. \end{align*} On the other hand, if $\beta>\alpha$, then using the coupling defined in the proof above, we claim that we have \begin{align}\label{eq:convergence total mass nu n} b_n^{-\beta}\cdot \nu_n^\dec(T_n^\dec)=b_n^{-\beta}\cdot\sum_{i=1}^{|T_n|} \nu_{v_i}(B_{v_i}) \underset{n\rightarrow \infty}{\rightarrow} \sum_{0\leq s\leq 1}\Delta_s^\beta \cdot \nu_s(\mathcal{B}_s). \end{align} Let us prove the above display. First, using \eqref{eq:convergence size and position of jumps} and \eqref{eq:convergence block unk to block vk}, we can write for all $k\geq 1$, \begin{align*} b_n^{-\beta}\cdot \nu_{v_{n,k}}(B_{v_{n,k}}) \underset{n\rightarrow \infty}{\rightarrow} \Delta_{t_k}^\beta\cdot \nu_{t_k}(\cB_{t_k}). \end{align*} This ensures that for any given $\delta>0$ we have \begin{align*} b_n^{-\beta}\cdot \sum_{k=0}^\infty \nu_{v_{n,k}}(B_{v_{n,k}}) \ind{\out{}(v_{n,k})> \delta b_n}\rightarrow\sum_{0\leq s\leq 1}\Delta_s^\beta \cdot \nu_s(\mathcal{B}_s)\ind{\Delta_s > \delta}, \end{align*} and the quantity on the right-hand side converges to $\sum_{0\leq s\leq 1}\Delta_s^\beta \cdot \nu_s(\mathcal{B}_s)$ as $\delta\rightarrow 0$.
From there, we can construct a sequence $\delta_n\rightarrow 0$ such that the following convergence holds on an event of probability $1$: \begin{align*} b_n^{-\beta}\cdot \sum_{k=0}^\infty \nu_{v_{n,k}}(B_{v_{n,k}}) \ind{\out{}(v_{n,k})\geq \delta_n b_n}\rightarrow\sum_{0\leq s\leq 1}\Delta_s^\beta \cdot \nu_s(\mathcal{B}_s). \end{align*} Now, using Condition~B\ref{cond:small decorations dont contribute to mass} we get that \begin{align*} R(n,\delta)= b_n^{-\beta}\cdot \sum_{i=1}^{|T_n|}\nu_{v_i}(B_{v_i}) \ind{\out{}(v_i)\leq \delta b_n} \end{align*} tends to $0$ in probability as $\delta \rightarrow 0$, uniformly in $n$ large enough. This ensures that for our sequence $\delta_n$ tending to $0$ we have the convergence $R(n,\delta_n)\underset{n \rightarrow \infty}{\rightarrow} 0$ in probability. Putting things together, we get that \eqref{eq:convergence total mass nu n} holds in probability. \medskip \emph{Step 2, convergence of the sampled points:} Now that we know that the total mass of $\nu_n^\dec$, appropriately normalized, converges to that of $\upsilon_\beta^\dec$, we consider a point $Y_n$ sampled under a normalized version of $\nu_n^\dec$ and $\Upsilon_\beta$ sampled under a normalized version of $\upsilon_\beta^\dec$. In order to prove that $(T_n^\dec,d,\nu_n^\dec)$ converges to $(\cT_\alpha^\dec,d,\upsilon_\beta^\dec)$ in distribution, we will prove that the pointed spaces $(T_n^\dec,d,Y_n)$ converge to $(\cT_\alpha^\dec,d,\Upsilon_\beta)$ in distribution for the pointed Gromov--Hausdorff topology. \medskip \emph{Step 3, sampling a uniform point:} To construct $Y_n$ on $T_n^\dec$, we first sample a point $X_n$ on $\intervalleff{0}{1}$. Then we introduce \begin{align} K_n:=\inf \enstq{k\geq 1}{M_k\geq X_n \cdot M_{|T_n|}} \end{align} and then sample a point on $B_{v_{K_n}}$ using a normalized version of the measure $\nu_{v_{K_n}}$.
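The definition of $K_n$ above is a standard inverse-CDF step: given the cumulative masses $M_k$ and a uniform variable $X_n$, pick the first index whose cumulative mass exceeds $X_n\cdot M_{|T_n|}$. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def pick_block(block_masses, x):
    """Inverse-CDF selection: with M_k the cumulative sums of block_masses,
    return the first index K such that M_K >= x * M_total, for x in [0, 1]."""
    cum = np.cumsum(block_masses)
    return int(np.searchsorted(cum, x * cum[-1]))
```

The same routine covers the continuous case $\beta>\alpha$ below, with weights $\Delta_t^\beta\,\nu_t(\mathcal{B}_t)$ truncated to finitely many blocks.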
For $\beta\leq \alpha$, in the continuous setting, we construct a random point $\Upsilon_\beta$ on $\cT_\alpha^\dec$ distributed according to $\upsilon_\beta^\dec$ by defining it as $\mathbf{p}^\dec(X)$ where $X\sim \mathrm{Unif}\intervalleff{0}{1}$. For $\beta>\alpha$ we can define the following random point $\Upsilon_\beta$ on $\cT_\alpha^\dec$ in three steps: sample $X\sim \mathrm{Unif}\intervalleff{0}{1}$, then set \begin{align*} Z:=\inf \enstq{t\in \intervalleff{0}{1}}{\sum_{0\leq s\leq t}\Delta_s^\beta \nu_s(\mathcal{B}_s)\geq X\cdot \sum_{0\leq s\leq 1}\Delta_s^\beta\nu_s(\mathcal{B}_s)} \end{align*} and then sample a point on $\mathcal{B}_{Z}$ using a normalized version of the measure $\nu_{Z}$. (Note that in this case, $Z$ is a jump time of $X^{\mathrm{exc},(\alpha)}$ almost surely.) This construction ensures that conditionally on $X^{\mathrm{exc},(\alpha)}$ and $(\mathcal{B}_s)_{s\in \Bb}$, for all $t\in\intervalleff{0}{1}$ we have \begin{align*} \Pp{Z=t}=\frac{\Delta_t^\beta \cdot \nu_t(\mathcal{B}_t)}{\sum_{0\leq s\leq 1}\Delta_s^\beta \cdot \nu_s(\mathcal{B}_s)}. \end{align*} \emph{Step 4, coupling $Y_n$ and $\Upsilon_\beta$:} In both cases $\beta \leq \alpha$ and $\beta>\alpha$, we couple the constructions of $Y_n$ and $\Upsilon_\beta$ by using the same uniform random variable $X_n=X$. Now let us consider the two cases separately. \emph{Case $\beta \leq \alpha$.} Consider the convergence \eqref{eq:convergence depth-first mass}. Since this convergence is in probability towards a constant, by Slutsky's lemma it holds jointly with the other convergences used above. Hence, using Skorokhod's representation theorem again, we can suppose that this convergence is almost sure, jointly with the other ones. Then, we can essentially go through the same proof as above and consider an extra decoration corresponding to $B_{v_{K_n}}$, which contains the random point $Y_n$.
\emph{Case $\beta > \alpha$.} In that case, thanks to the reasoning of Step 1, we have the following convergence for the Skorokhod topology \begin{align*} b_n^{-\beta}\cdot\left(\sum_{i=1}^{\lfloor t |T_n|\rfloor} \nu_{v_i}(B_{v_i}) \right)_{0\leq t \leq 1} \underset{n\rightarrow \infty}{\rightarrow} \left(\sum_{0\leq s\leq t}\Delta_s^\beta \cdot \nu_s(\mathcal{B}_s)\right)_{0\leq t \leq 1}. \end{align*} With the definitions of $Z$ and $K_n$ and the coupling $X_n=X$, the above convergence ensures that for $n$ large enough we have $v_{K_n}=v_{n,k}$ and $Z=t_k$ for some (random) $k\geq 1$. Then again, we can essentially go through the same proof as above and consider an extra random point $Y_n$ sampled on $B_{v_{n,k}}$ and a point $\Upsilon_\beta$ sampled on $\cB_{t_{k}}$, in such a way that those points are close together. \end{proof} \section{Self-similarity properties and alternative constructions}\label{sec:self-similarity} The goal of this section is to decompose the decorated stable tree along a discrete tree, using a framework similar to that developed in Section~2.1, with the difference that the gluing will occur along the whole Ulam tree instead of a finite tree. In particular this decomposition, which follows directly from a similar description of the $\alpha$-stable tree itself, will allow us to describe the decorated stable tree as obtained through a sort of line-breaking construction, and also to identify its law as the fixed point of a certain transformation. We rely on a similar construction that holds jointly for the $\alpha$-stable tree and its associated looptree, described in \cite{senizergues_growing_2022}.
\textbf{In this section, we will always assume that $\beta>\alpha$ and that $\Ec{\nu(\cB)}$ and $\Ec{\diam(\cB)}$ are finite.} \subsection{Introducing the spine and decorated spine distributions} \begin{figure}\label{fig:spine and decorated spine} \centering \begin{tabular}{ccc} \includegraphics[height=6cm,page=1]{spineanddecoratedspine} & & \includegraphics[height=6cm,page=2]{spineanddecoratedspine} \end{tabular} \caption{On the left, the space $\cS^{\dec}$. On the right, the corresponding space $\cS$, which is just a segment with atoms along it.} \end{figure} We construct two related random metric spaces, $\cS$ and $\cS^{\dec}$, which will be the building blocks used to construct $\cT_\alpha$ and $\cT^{\dec}_\alpha$ respectively. This construction will depend on the parameters $\alpha\in(1,2)$, $\gamma\in(\alpha-1,1]$ and $\beta>\alpha$. First, let \begin{itemize} \item $(Y_k)_{k\geq 1}$ be a sequence of i.i.d. uniform random variables in $\intervalleff{0}{1}$, \item $(P_k)_{k\geq 1}\sim \GEM(\alpha-1,\alpha-1)$, and $L$ its $(\alpha-1)$-diversity, \item $(\cB_k,d_k,\rho_k,\mu_k,\nu_k)$ be i.i.d. random metric spaces with the same law as $(\cB,d,\rho,\mu,\nu)$, \item for every $k\geq 1$, $Z_k$ be a random point of $\cB_k$ sampled under $\mu_k$. \end{itemize} We refer for example to \cite[Appendix A.2]{senizergues_growing_2022} for the definitions of the distributions in the second bullet point. We first define the random spine \[\cS=(S,d_S,\rho_S,\mu_S)\] as the segment $S=\intervalleff{0}{L}$, rooted at $\rho_S=0$, endowed with the probability measure $\mu_S=\sum_{k\geq1}P_k \delta_{L\cdot Y_k}$. In order to construct $\cS^{\dec}$ we are going to informally replace every atom of $\mu_S$ with a metric space scaled by an amount that corresponds to the mass of that atom.
To do so, we consider the disjoint union \begin{equation} \bigsqcup_{i=1}^\infty \cB_i, \end{equation} which we endow with the distance $d_{S^{\dec}}$ defined as \begin{align*} d_{S^{\dec}}(x,y) &= \begin{cases} P_i^\gamma d_i(x,y) & \text{if} \quad x,y\in \cB_i,\\ P_i^\gamma d_i(x,Z_i) + \sum_{k: Y_i<Y_k<Y_j}P_k^\gamma d_k(\rho_k,Z_k)+P_j^\gamma d_j(\rho_j,y) & \text{if} \quad x\in \cB_i,\ y\in \cB_j,\ Y_i<Y_j. \end{cases} \end{align*} Then $\cS^{\dec}$ is defined as the completion of $\bigsqcup_{i=1}^\infty \cB_i$ equipped with this distance. Its root $\rho$ can be obtained as a limit $\rho=\lim_{i\rightarrow\infty}\rho_{\sigma_i}$ for any sequence $(\sigma_i)_{i\geq 1}$ for which $Y_{\sigma_i}\rightarrow 0$. Essentially, $\cS^{\dec}$ looks like a skewer of decorations that are arranged along a line, in uniform random order. We can furthermore endow $\cS^{\dec}$ with a probability measure $\mu_{S^{\dec}}=\sum_{k=1}^{\infty}P_k \mu_k$. If $\beta>\alpha$ and $\Ec{\nu(\cB)}$ is finite, we also define the measure $\nu_{S^{\dec}}=\sum_{k=1}^{\infty}P_k^\beta \nu_k$. In the end, we have defined \begin{align*} \cS^{\dec}=(S^{\dec},d_{S^{\dec}},\rho_{S^{\dec}},\mu_{S^{\dec}},\nu_{S^{\dec}}). \end{align*} It is important to note that, conditionally on $\cS$, constructing $\cS^{\dec}$ consists in replacing every atom of $\mu_S$ by an independent copy of a random metric space, appropriately rescaled. This exactly corresponds to what we are trying to do with the entire tree $\cT_\alpha$. So far, however, it is not clear that the last display is well defined, since the function $d_{S^{\dec}}$ could a priori take infinite values. The next lemma ensures that this is not the case as long as $\diam(\cB)$ has a finite first moment. Introduce the quantity $R=\sum_{i\geq 1} P_i^\gamma \diam(\cB_i)$.
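To get a feeling for $R=\sum_{i\geq 1}P_i^\gamma\diam(\cB_i)$, one can simulate a truncation of the $\GEM(\xi,\theta)$ sequence via the standard Pitman--Yor stick-breaking representation, $P_i=W_i\prod_{j<i}(1-W_j)$ with $W_i\sim \mathrm{Beta}(1-\xi,\theta+i\xi)$ independent. The sketch below (function names ours) is a numerical illustration only, not part of the construction:

```python
import numpy as np

def gem_sticks(xi, theta, n, rng):
    """First n weights of a GEM(xi, theta) sequence by stick breaking:
    W_i ~ Beta(1 - xi, theta + i*xi), P_i = W_i * prod_{j<i} (1 - W_j)."""
    w = rng.beta(1 - xi, theta + xi * np.arange(1, n + 1))
    remaining = np.concatenate(([1.0], np.cumprod(1 - w)[:-1]))
    return w * remaining

def truncated_R(xi, theta, gamma, diams, rng):
    """Truncated version of R = sum_i P_i^gamma * diam(B_i)."""
    p = gem_sticks(xi, theta, len(diams), rng)
    return float(np.sum(p ** gamma * np.asarray(diams)))
```

For the spine, $\xi=\theta=\alpha-1$ and $\gamma>\alpha-1$, which is exactly the regime covered by the moment lemma that follows.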
\begin{lemma}\label{lem:moment diam block implis moment diam spine} If $\Ec{\diam(\cB)^p}<\infty$ for some $p\geq 1$, then $\Ec{R^p}$ is finite, so that $\cS^{\dec}$ is almost surely a compact metric space with $\Ec{\diam(\cS^{\dec})^p}<\infty$. \end{lemma} In order to get Lemma~\ref{lem:moment diam block implis moment diam spine} we will apply the following lemma, with $\xi=\theta=\alpha-1$ and $Z_i=\diam(\cB_i)$. \begin{lemma} Let $(Z_i)_{i\geq 1}$ be i.i.d. positive random variables admitting a moment of order $p\geq 1$, let $(P_i)_{i\geq 1}$ be an independent sequence with $\GEM(\xi,\theta)$ distribution, and let $\gamma>\xi$. Then the random variable \begin{align*} \sum_{i=1}^{\infty}P_i^\gamma Z_i \end{align*} admits a moment of order $p$. \end{lemma} Note that this lemma also guarantees that the measure $\nu_{S^{\dec}}=\sum_{k=1}^{\infty}P_k^\beta \nu_k$ is almost surely a finite measure, as soon as $\Ec{\nu(\cB)}$ is finite. \begin{proof} Suppose first that $\xi<\gamma\leq 1$ and write \begin{align*} \left(\sum_{i=1}^{\infty}P_i^\gamma Z_i\right)^p &=\left(\sum_{i=1}^{\infty}P_i^\gamma\right)^p \left(\sum_{i=1}^{\infty}\left(\frac{P_i^\gamma}{\sum_{i=1}^{\infty}P_i^\gamma}\right) Z_i\right)^p\\ &\underset{\text{convexity}}{\leq} \left(\sum_{i=1}^{\infty}P_i^\gamma\right)^p \left(\sum_{i=1}^{\infty}\left(\frac{P_i^\gamma}{\sum_{i=1}^{\infty}P_i^\gamma}\right) Z_i^p\right)\\ &\leq \left(\sum_{i=1}^{\infty}P_i^\gamma\right)^{p-1}\left(\sum_{i=1}^{\infty}P_i^\gamma Z_i^p\right). \end{align*} In the case $\gamma>1$, since all the $P_i$ are smaller than $1$, we can write \begin{align*} \left(\sum_{i=1}^{\infty}P_i^\gamma Z_i\right)^{p}&\leq \left(\sum_{i=1}^{\infty}P_i Z_i\right)^{p}\leq \left(\sum_{i=1}^{\infty}P_i Z_i^{p}\right), \end{align*} where the second inequality comes from convexity.
Now, for any $\xi <\gamma\leq 1$, setting $n=\lceil p-1 \rceil$, we have \begin{align*} \Ec{\left(\sum_{i=1}^{\infty}P_i^\gamma\right)^{p-1}\left(\sum_{i=1}^{\infty}P_i^\gamma Z_i^p\right)} &\leq \Ec{\left(\sum_{i=1}^{\infty}P_i^\gamma Z_i^p\right) \cdot \left(\sum_{i=1}^{\infty}P_i^\gamma\right)^{n}}\\ &=\Ec{\sum_{(i_0,i_1,i_2,\dots,i_n)\in(\mathbb N^*)^{n+1}}P_{i_0}^\gamma\cdot Z_{i_0}^p \cdot P_{i_1}^\gamma\dots P_{i_n}^\gamma}\\ &\leq \sum_{(i_0,i_1,i_2,\dots,i_n)\in(\mathbb N^*)^{n+1}}\Ec{P_{i_1}^\gamma\dots P_{i_n}^\gamma} \cdot \Ec{Z_{i_0}^p}, \end{align*} and the last sum is finite provided that $\gamma>\xi$, by Lemma~5.4 in \cite{goldschmidt_stable_2018}. \end{proof} \subsection{Description of an object as a decorated Ulam tree}\label{subsec:description of the objet as a gluing of spines} Now that we have defined the random object $\cS^\dec$, we define decorations of the Ulam tree that are constructed in such a way that, up to scaling factors, all the decorations are i.i.d.\@ with the same law as $\cS^\dec$. Gluing all those decorations along the structure of the tree, as defined in the introduction, will give us a random object $\tilde{\cT}^\dec_\alpha$ that has the same law as $\cT^\dec_\alpha$ (the proof of that fact will come later in the section). In fact, in what follows, we provide two equivalent descriptions of the same object $\tilde{\cT}^\dec_\alpha$: \begin{itemize} \item One of them is a description as a random self-similar metric space: this construction will ensure that the object that we construct is compact under the weakest possible moment assumption on the diameter of the decoration and also give us the Hausdorff dimension of the object. \item The other one is through an iterative gluing construction. It is the latter that we use to identify the law of $\tilde{\cT}^\dec_\alpha$ with that of $\cT^\dec_\alpha$.
\end{itemize} The fact that the two constructions yield the same object is a result from \cite{senizergues_growing_2022}. \subsubsection{Decorations on the Ulam tree} We extend the framework introduced in the first section for finite trees to the entire Ulam tree, following \cite{senizergues_growing_2022}. We work with families of decorations on the Ulam tree $\mathcal D=(\mathcal{D}(v), v\in \mathbb U)$ of measured metric spaces indexed by the vertices of $\mathbb U$. Each of the decorations $\mathcal{D}(v)$ indexed by some vertex $v\in \mathbb U$ is rooted at some $\rho_v$ and carries a sequence of gluing points $(\ell_v(i))_{i\geq 1}$. The way to construct the associated decorated Ulam tree is to consider as before \begin{align*} \bigsqcup_{v\in\mathbb U}\mathcal{D}(v) \end{align*} and consider the metric gluing of those blocks obtained by the relations $\ell_v(i)\sim \rho_{vi}$ for all $v\in\mathbb U$ and $i\ge 1$. For topological considerations, we actually consider the metric completion of the obtained object. The final result is denoted $\sG(\cD)$. \paragraph{The completed Ulam tree.} Recall the definition of the Ulam tree $\bU=\bigcup_{n\geq 0} \mathbb N^n$ with $\mathbb N=\{1,2,\dots\}$. Introduce the set $\partial\bU=\mathbb{N}^\mathbb{N}$, to which we refer as the \emph{leaves} of the Ulam tree, which we see as the infinite rays joining the root to infinity and set $\overline{\bU}:=\bU\cup \partial\bU$. On this set, we have a natural genealogical order $\preceq$ defined in such a way that $v\preceq w$ if and only if $v$ is a prefix of $w$. From this order relation we can define for any $v\in\bU$ the \emph{subtree descending from $v$} as the set $T(v):=\enstq{w\in \overline{\bU}}{v\preceq w}$. The collection of sets $\{T(v),\ v\in\bU\} \sqcup \{\{v\},\ v\in\bU\}$ generates a topology over $\overline{\bU}$, which can also be generated using an appropriate ultrametric distance. 
Endowed with this distance, the set $\overline{\bU}$ is then a separable and complete metric space. \paragraph{Identification of the leaves.} Suppose that $\cD$ is such that $\sG(\cD)$ is compact. We can define a map \begin{equation}\label{growing:eq:def de iota} \iota_\cD:\partial \bU\rightarrow \sG(\cD), \end{equation} that maps every leaf of the Ulam--Harris tree to a point of $\sG(\cD)$. For any $\mathbf{i}=i_1i_2\dots \in\partial \bU$, the map is defined as \begin{align*} \iota_{\cD}(\mathbf{i}):= \lim_{n\rightarrow\infty} \rho_{i_1i_2\dots i_n} \in \sG(\cD). \end{align*} The fact that this map is well-defined and continuous is proved in \cite{senizergues_growing_2022}. \paragraph{Scaling of decorations.} In the rest of this section, for $\mathcal{X}=(X,d,\rho,\mu,\nu)$ a pointed metric space endowed with finite measures (as well as some extra structure, like a sequence of points for example) we will use the notation \begin{align*} \mathrm{Scale}(a,b,c;\mathcal{X}) = (X,a\cdot d,\rho,b \cdot \mu, c\cdot \nu), \end{align*} where the resulting object still carries any extra structure that $\mathcal{X}$ originally carried. \subsubsection{A self-similar decoration} We introduce the following random variables: \begin{itemize} \item For every $v\in \mathbb{U}$, sample independently $(Q_{vi})_{i\geq 1}\sim \GEM(\frac{1}{\alpha},1-\frac{1}{\alpha})$ and denote by $D_v$ the $\frac{1}{\alpha}$-diversity of $(Q_{vi})_{i\geq 1}$. \item This defines, for every $v\in \mathbb{U}$, \begin{align*} \overline{Q_v}=\prod_{w\preceq v}Q_w \qquad \text{and} \qquad w_v=(\overline{Q_v})^\frac{1}{\alpha}\cdot D_v. \end{align*} This in fact also defines a probability measure $\eta$ on the frontier $\partial \mathbb U$ of the tree, which is characterized by $\eta(T(v))=\overline{Q_v}$, for all $v\in \mathbb U$.
\item For every $v\in \mathbb{U}$ we sample, independently of the rest, $\cS^{\dec}_v$ with the same law as $\cS^{\dec}=(S^{\dec},d_{S^{\dec}},\rho_{S^{\dec}},\mu_{S^{\dec}},\nu_{S^{\dec}})$ and sample a sequence of points from its measure $\mu_{S^{\dec}}$. We call those points $(X_{vi})_{i\geq 1}$ and define $\ell_v:i\mapsto X_{vi}\in\cS^{\dec}_v$. We then define the decorations on the Ulam tree by setting, for $v\in\mathbb{U}$, \begin{align}\label{eq:definition Tdec from GEM} \mathcal{D}(v):= \mathrm{Scale}\left(w_v^\gamma,w_v,w_v^\beta; \cS^{\dec}_v\right). \end{align} \end{itemize} From these decorations on the Ulam tree, we can define the corresponding decorated tree (and consider its metric completion) that we write as \begin{align*} \tilde{\cT}^\dec_\alpha:= \sG(\cD). \end{align*} Note that the object defined above is not necessarily compact. Whenever the underlying block (and hence also the decorated spine, thanks to Lemma~\ref{lem:moment diam block implis moment diam spine}) has a moment of order $p>\frac{\alpha}{\gamma}$, a result of \cite[Section~4.2]{senizergues_growing_2022} ensures that the obtained metric space $\tilde{\cT}^\dec_\alpha$ is almost surely compact. \paragraph{The uniform measure.} Assume that the underlying block has a moment of order $p>\frac{\alpha}{\gamma}$, so that $\tilde{\cT}^\dec_\alpha$ is almost surely compact. Then the map $\iota_\cD:\partial \bU \rightarrow \sG(\cD)$ is almost surely well-defined and continuous, so we can consider the measure $(\iota_\cD)_*\eta$ on $\sG(\cD)$, the push-forward of the measure $\eta$. \paragraph{The block measure.} If $\beta>\alpha$ and $\Ec{\nu(\cB)}<\infty$ then we can check that the total mass of the $\nu$ measures has finite expectation, so it is almost surely finite. Hence we can endow $\tilde{\cT}^\dec_\alpha$ with the measure $\sum_{u}\nu_u$.
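The random weights entering the construction are easy to simulate. The following Python sketch (purely illustrative; the function names and the truncation parameters are ours) samples the first entries of a $\GEM(\eta,\theta)$ sequence through the standard stick-breaking representation $P_i=B_i\prod_{j<i}(1-B_j)$ with independent $B_i\sim \mathrm{Beta}(1-\eta,\theta+i\eta)$, and from it the cumulative masses $\overline{Q_v}$ on a truncated Ulam tree.

```python
import random

def sample_gem(eta, theta, n_sticks):
    # Stick-breaking: P_i = B_i * prod_{j<i} (1 - B_j) with independent
    # B_i ~ Beta(1 - eta, theta + i*eta) gives the first n_sticks entries
    # of a GEM(eta, theta) sequence (the remaining mass is dropped).
    probs, remaining = [], 1.0
    for i in range(1, n_sticks + 1):
        b = random.betavariate(1.0 - eta, theta + i * eta)
        probs.append(remaining * b)
        remaining *= 1.0 - b
    return probs

def cumulative_weights(alpha, depth, arity):
    # Qbar_v = prod_{w <= v} Q_w on the Ulam tree, truncated to the first
    # `arity` children of every vertex and to generations <= depth.
    eta, theta = 1.0 / alpha, 1.0 - 1.0 / alpha
    qbar = {(): 1.0}
    frontier = [()]
    for _ in range(depth):
        next_frontier = []
        for v in frontier:
            for i, qi in enumerate(sample_gem(eta, theta, arity), start=1):
                qbar[v + (i,)] = qbar[v] * qi
                next_frontier.append(v + (i,))
        frontier = next_frontier
    return qbar
```

Since the entries of a GEM sequence sum to one, the truncated masses of a fixed generation sum to slightly below one, the deficit corresponding to the discarded children and sticks.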
\subsubsection{The iterative gluing construction for $\tilde{\cT}^\dec_\alpha$.} Before diving into the construction of $\tilde{\cT}^\dec_\alpha$, we define a family of time-inhomogeneous Markov chains called Mittag-Leffler Markov chains, first introduced by Goldschmidt and Haas \cite{goldschmidt_line_2015}, see also \cite{senizergues_growing_2022}. \paragraph{Mittag-Leffler Markov chains.} Let $0<\eta<1$ and $\theta>-\eta$. The generalized Mittag-Leffler $\mathrm{ML}(\eta, \theta)$ distribution has $p$th moment \begin{align}\label{growing:eq:moments mittag-leffler} \frac{\Gamma(\theta) \Gamma(\theta/\eta + p)}{\Gamma(\theta/\eta) \Gamma(\theta + p \eta)}=\frac{\Gamma(\theta+1) \Gamma(\theta/\eta + p+1)}{\Gamma(\theta/\eta+1) \Gamma(\theta + p \eta+1)} \end{align} and the collection of $p$-th moments for $p \in \mathbb N$ uniquely characterizes this distribution. A Markov chain $(\mathsf M_n)_{n\geq 1}$ has the distribution $\MLMC(\eta,\theta)$ if for all $n\geq 1$, \begin{equation*} \mathsf M_n\sim \mathrm{ML}\left(\eta,\theta+n-1\right), \end{equation*} and its transition probabilities are characterized by the following equality in law \begin{equation*} \left(\mathsf M_n,\mathsf M_{n+1}\right)=\left(\beta_n\cdot \mathsf M_{n+1},\mathsf M_{n+1}\right), \end{equation*} where $\beta_n\sim \mathrm{Beta}\left(\frac{\theta+n-1}{\eta}+1,\frac{1}{\eta}-1\right)$ is independent of $\mathsf M_{n+1}$. \paragraph{Construction of $\tilde{\cT}^\dec_\alpha$.} We can now present our second construction of $\tilde{\cT}^\dec_\alpha$. We start with a sequence $(\mathsf{M}_k)_{k\geq 1}\sim \mathrm{MLMC}\left(\frac{1}{\alpha},1-\frac{1}{\alpha}\right)$. We then define the sequence $(\mathsf{m}_k)_{k\geq 1}=(\mathsf{M}_k-\mathsf{M}_{k-1})_{k\geq 1}$ of increments of that chain, with the convention $\mathsf{M}_0=0$.
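As a sanity check, the two expressions for the $p$th moment in \eqref{growing:eq:moments mittag-leffler} can be compared numerically. The short Python snippet below (ours, for illustration only) evaluates both sides with the standard library's Gamma function.

```python
from math import gamma

def ml_moment(eta, theta, p):
    # First expression for the p-th moment of ML(eta, theta); requires theta > 0.
    return (gamma(theta) * gamma(theta / eta + p)
            / (gamma(theta / eta) * gamma(theta + p * eta)))

def ml_moment_shifted(eta, theta, p):
    # Second expression; equal to the first by repeated use of
    # Gamma(x + 1) = x * Gamma(x), and also meaningful at theta = 0.
    return (gamma(theta + 1) * gamma(theta / eta + p + 1)
            / (gamma(theta / eta + 1) * gamma(theta + p * eta + 1)))
```

The equality of the two expressions is exactly the functional equation $\Gamma(x+1)=x\Gamma(x)$ applied to each of the four Gamma factors.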
From there, conditionally on $(\mathsf{M}_k)_{k\geq 1}$ we define an independent sequence $(\mathcal{Y}_k)_{k\geq 1}$ of metric spaces such that \begin{align*} \mathcal{Y}_k \overset{(d)}{=} \mathrm{Scale}\left(\mathsf{m}_k^\gamma, \mathsf{m}_k,\mathsf{m}_k^\beta; \cS^\dec\right). \end{align*} Then, we define a sequence of increasing subtrees of the Ulam tree as follows: we let $\mathtt{T}_1$ be the tree containing only one vertex $v_1=\emptyset$. Then, if $\mathtt{T}_n$ is constructed, we sample a random number $K_{n+1}$ in $\{1,\dots,n\}$ such that, conditionally on all the rest, $\Pp{K_{n+1}=k}\propto\mathsf{m}_k$. Then, we add the next vertex $v_{n+1}$ to the tree as the right-most child of $v_{K_{n+1}}$. The sequence $(\mathtt{T}_n)_{n\geq 1}$ is said to have the distribution of a weighted recursive tree with sequence of weights $(\mathsf{m}_k)_{k\geq 1}$. A property of this sequence of trees is that $\enstq{v_k}{k\geq1}=\mathbb U$. Hence for any $v\in \mathbb U$ we denote by $k_v$ the unique $k$ such that $v_k=v$. Then we consider the decorations on the Ulam tree defined by \begin{align}\label{eq:definition Tdec from WRT} \widehat{\mathcal{D}}(v)=\mathcal{Y}_{k_v}. \end{align} In this setting, we can again define a measure $\widetilde{\eta}$ on the leaves $\partial\mathbb U$ of the Ulam tree by taking the weak limit of the uniform measure on $\{v_1,v_2,\dots, v_n\}$ as $n\rightarrow\infty$, the almost sure existence of the limit being guaranteed by \cite[Theorem~1.7, Proposition~2.4]{senizergues_geometry_2021}. Then, from \cite[Proposition~3.2]{senizergues_growing_2022} we have \begin{align*} \left(\left(\mathsf{m}_{k_v}\right)_{v\in \mathbb U}, \widetilde{\eta}\right) \overset{\mathrm{(d)}}{=} \left(\left(w_{v}\right)_{v\in \mathbb U}, \eta\right).
\end{align*} From that equality in distribution and the respective definitions \eqref{eq:definition Tdec from GEM} of $\mathcal{D}$ and \eqref{eq:definition Tdec from WRT} of $\widehat{\mathcal{D}}$, it is clear that those two families of decorations have the same distribution. \subsubsection{Strategy} The rest of this section is devoted to proving that the random decorated tree $\tilde{\cT}^\dec_\alpha$ that we constructed above has the same distribution as $\cT^\dec_\alpha$. For that we are going to characterize their ``finite-dimensional marginals'' and show that they are the same for the two processes. Using the second description of $\tilde{\cT}^\dec_\alpha$, we can consider for any $k\geq 1$ the subset $\tilde{\cT}^\dec_k$ that corresponds to keeping only the blocks corresponding to $v_1,\dots, v_k$. We compare this to $\cT^\dec_k$, which is the subset of $\cT^\dec_\alpha$ spanned by $k$ uniform leaves. \subsection{Finite-dimensional marginals of $\cT_\alpha$ and $\cT^\dec_\alpha$} \subsubsection{Approximating the decorated tree by finite dimensional marginals} For any tuple of points $x_1,x_2,\dots, x_k$ in $\intervalleff{0}{1}$, we can consider \begin{align*} \mathrm{Span}(X^{\mathrm{exc},(\alpha)};x_1,x_2,\dots,x_k)=\enstq{x\in \intervalleff{0}{1}}{x\preceq x_i, \text{ for some } i\in\{1,2,\dots k\}}, \end{align*} using the definition of $\preceq$ derived from $X^{\mathrm{exc},(\alpha)}$. We further define \begin{align*} \mathrm{Span}(X^{\mathrm{exc},(\alpha)},(\cB_t)_{t\in \intervalleff{0}{1}};x_1,x_2,\dots,x_k)=\bigsqcup_{t\in \mathrm{Span}(X^{\mathrm{exc},(\alpha)};x_1,x_2,\dots,x_k)} \cB_t.
\end{align*} Using independent uniform random points $(U_i)_{i\geq 1}$ on $[0,1]$, we define \begin{align*} \cT_k=\mathbf{p} \left(\mathrm{Span}(X^{\mathrm{exc},(\alpha)};U_1,U_2,\dots,U_k)\right) \end{align*} and \begin{align*} \cT^\dec_k=\mathbf{p}^\dec\left( \mathrm{Span}(X^{\mathrm{exc},(\alpha)},(\cB_t)_{t\in \intervalleff{0}{1}};U_1,U_2,\dots,U_k) \right), \end{align*} where $\mathbf{p}:\intervalleff{0}{1} \rightarrow \cT_\alpha$ and $\mathbf{p}^\dec:\sqcup_{t\in \intervalleff{0}{1}} \cB_t \rightarrow \cT_\alpha^\dec$ are the respective quotient maps. We say that $\cT_k$ (resp. $\cT^\dec_k$) is the random finite-dimensional marginal of $\cT_\alpha$ (resp. $\cT^\dec_\alpha$), following the standard definition for trees introduced by Aldous. \begin{remark} Note that for $\cT^\dec_k$ to be almost surely well-defined, we do not need the whole decorated tree $\cT^\dec_\alpha$ to be well-defined as a compact measured metric space. In fact, it is easy to check that if the tail of $\diam(\cB)$ is such that $\Pp{\diam(\cB)\geq x}\sim x^{-p}$ with $1 < p <\frac{\alpha }{\gamma}$, then $\sup_{v \in \mathbb B} \Delta_v^\gamma \cdot \diam(\cB_v)=\infty$, even though the distance of a random point to the root in $\cT^\dec_\alpha$ is almost surely finite. \end{remark} \begin{lemma} Identifying each $\cB_v$, for $v\in \mathbb B$, with a subset of $\cT^\dec_\alpha$, we almost surely have \begin{align*} \bigcup_{v\in \mathbb B}\cB_v \subset \overline{\bigcup_{n\geq 0}\cT^\dec_n}. \end{align*} \end{lemma} \begin{proof} By properties of the stable excursion, for any $t\in \mathbb B$ the set $\enstq{s\succeq t}{s\in \intervalleff{0}{1}}$ has positive Lebesgue measure. Hence almost surely there is some $k$ such that $t\preceq U_k$. \end{proof} \begin{corollary}\label{cor:Tdec is compact iff union Tndec is} The space $\overline{\bigcup_{n\geq 0}\cT^\dec_n}$ is compact if and only if $\cT^\dec_\alpha$ is well-defined as a compact metric space.
\end{corollary} \subsubsection{Description of $\cT_k$ and $\cT_k^\dec$} First, for any $k\geq 1$, we let $L_k$ be the distance $\mathrm{d}(U_k,\cT_{k-1})$ computed in the tree $\cT_\alpha$. Then we consider the set of branch-points $\mathbb B \cap (\cT_k\setminus\cT_{k-1})$ and define the sequence $(N_k(\ell))_{\ell \geq 1}$ as the decreasing reordering of the sequence $\left(\Delta_t\right)_{t\in \mathbb B \cap (\cT_k\setminus\cT_{k-1})}$. Denote $N_k= \sum_{\ell \geq 1} N_k(\ell)$. Note that the jumps of the excursion process are all distinct almost surely. We denote by $(t_k(\ell))_{\ell\ge 1}$ the corresponding sequence of jump times. We also consider the sequence $(L_k(\ell))_{\ell\ge 1}$ defined as $\mathrm{d}(t_k(\ell), \cT_{k-1})$. Let us also write $N_k^{r}(\ell)$ for the quantity $x_{t_k(\ell)}^{U_k}$, defined in \eqref{eq:definition xts}. Let us check that these random variables are enough to reconstruct $\cT_k^\dec\setminus \cT_{k-1}^\dec$. We have \begin{align*} \overline{\cT_k^\dec\setminus \cT_{k-1}^\dec}= \overline{\bigcup_{t\in \cT_k\setminus \cT_{k-1}} \cB_t} \end{align*} seen as a subset of $\cT^\dec_\alpha$. Now, from the form of the distance on $\cT^\dec_\alpha$, the induced distance between two points in $\cT_k^\dec\setminus \cT_{k-1}^\dec$ only depends on: \begin{itemize} \item the ordering of the jump times $(t_k(\ell))_{\ell \geq 1}$ by the relation $\preceq$ (they are all comparable because by definition we have $t_k(\ell)\preceq U_k$ for all $\ell \geq 1$), \item the sizes of the jumps $(\Delta_{t_k(\ell)})$ and the corresponding blocks $\cB_{t_k(\ell)}$, \item the position of the gluing points $Z_{t_k(\ell)}^{U_k}$ on the corresponding block $\cB_{t_k(\ell)}$. \end{itemize} We can also check that the topological boundary $\partial(\cT_k^\dec\setminus \cT_{k-1}^\dec)$ contains a single point, which we call $Z_k$.
When considering the compact object $\overline{\cT_k^\dec\setminus \cT_{k-1}^\dec}$, we see it as rooted at $Z_k$. Now, if we want to reconstruct the entire $\cT_k^\dec$ from $(\cT_k^\dec\setminus \cT_{k-1}^\dec)$ and $\cT_{k-1}^\dec$ we also need some additional information. For that we consider the finite sequence $U_1 \wedge U_k, U_2\wedge U_k, \dots , U_{k-1}\wedge U_k$, whose elements are all $\preceq U_k$ by definition. Because they are all comparable, we can consider the maximal element $V_k$ of this sequence. We let $I_k$ be the unique $i\leq k-1$ such that $V_k\in (\cT_i^\dec\setminus \cT_{i-1}^\dec)$. Additionally, we let $W_k=u^{U_k}_{V_k}$. We can check that the corresponding point $Z_k=Y_{V_k,W_k}$ is, conditionally on $\cT_{k-1}^\dec$ and $V_k$, distributed as a random point under the measure $\nu_{V_k}$ carried by the block $\cB_{V_k}$, by definition. Now, we use the results of \cite[Proposition~2.2]{goldschmidt_stable_2018} that identify the joint law of these quantities as the scaling limit of analogous quantities defined in a discrete setting on trees constructed using the so-called Marchal algorithm. Those quantities have been studied with a slightly different approach in \cite{senizergues_growing_2022}, which provides an explicit description of those random variables, jointly in $k$ and $\ell$. The following can be read from \cite[Lemma~5.5, Proposition~5.7]{senizergues_growing_2022}: \begin{itemize} \item the sequence $(N_k)_{k\geq 1}$ has the distribution of the sequence $(\mathsf{m}_k)_{k\geq 1}=(\mathsf{M}_k-\mathsf{M}_{k-1})_{k\geq 1}$ where $(\mathsf{M}_k)_{k\geq 1}\sim \mathrm{MLMC}\left(\frac{1}{\alpha},1-\frac{1}{\alpha}\right)$. \item the sequences $\left(\frac{N_k(\ell)}{N_k}\right)_{\ell \geq 1}$ are i.i.d.\@ with law $\mathrm{PD}(\alpha-1,\alpha -1)$ and $L_k= \alpha \cdot N_k^{\alpha -1} \cdot S_k$ where $S_k$ is the $(\alpha-1)$-diversity of $\left(\frac{N_k(\ell)}{N_k}\right)_{\ell \geq 1}$.
\item the random variables $\frac{L_k(\ell)}{L_k}$ are i.i.d. uniform on $\intervalleff{0}{1}$, \item the random variables $\frac{N_k^{r}(\ell)}{N_k(\ell)}$ are i.i.d. uniform on $\intervalleff{0}{1}$. \end{itemize} In particular, from the above description, we can check that conditionally on the sequence $(N_k)_{k\geq 1}$, the random variables $(\overline{\cT_k^\dec\setminus \cT_{k-1}^\dec })_{k\geq 1}$ are independent with distribution given by \begin{align*} \overline{\cT_k^\dec\setminus \cT_{k-1}^\dec} \overset{(d)}{=} \mathrm{Scale}(N_k^\gamma, N_k, N_k^\beta; \cS^\dec). \end{align*} We have identified the laws of the two sequences $(\cT_k^\dec\setminus \cT_{k-1}^\dec)_{k\geq 1}$ and $(\tilde{\cT}_k^\dec\setminus \tilde{\cT}_{k-1}^\dec)_{k\geq 1}$. In order to identify the law of the sequences $(\cT_k^\dec)_{k\geq 1}$ and $(\tilde{\cT}_k^\dec)_{k\geq 1}$, we just have to check that the attachment procedure is the same. Recall the definition of the random variables $(I_k)_{k\geq 1}$, $(V_k)_{k\geq 1}$ and $(W_k)_{k\geq 1}$. Still from \cite[Lemma~5.5, Proposition~5.7]{senizergues_growing_2022}, conditionally on all the quantities whose distributions were identified above, the $I_k$'s are independent and $\Pp{I_k=i}=\frac{N_i}{N_1+\dots + N_{k-1}}$ for all $i\leq k-1$. From those random variables we can construct an increasing sequence of trees $(\mathtt{S}_n)_{n\geq 1}$ in such a way that the parent of the vertex with label $k$ is the vertex with label $I_k$. From the observation above, the law of $(\mathtt{S}_n)_{n\geq 1}$ conditionally on everything else mentioned before only depends on $(N_k)_{k\geq 1}$. This law is the same as that of $(\mathtt{T}_n)_{n\geq 1}$ conditionally on $(\mathsf m_k)_{k\geq 1}$ (and everything else). Now we just need to check the law of the gluing points: recall that the point $Z_k$ is given as $Z_k=Z_{V_k}^{U_k}=Y_{V_k,W_k}$.
We just need to argue that, conditionally on $I_k=i$, the point $Z_k$ is sampled under a normalized version of the measure $\sum_{\ell=1}^{\infty} \Delta_{t_i(\ell)} \mu_{t_i(\ell)}$. In fact, still from \cite{senizergues_growing_2022}, conditionally on $I_k=i$ and everything else mentioned before, we have $\Pp{V_k=t_i(\ell)}=\frac{N_i(\ell)}{N_i}=\frac{ \Delta_{t_i(\ell)}}{\sum_{\ell=1}^{\infty} \Delta_{t_i(\ell)}}$ for $\ell\geq 1$, and $W_k$ is an independent uniform random variable on $\intervalleff{0}{1}$. Since by definition $W_k\in \cA_{V_k}$, the point $Y_{V_k,W_k}$ is distributed according to $\mu_{V_k}$ by construction. Now, since the uniform distribution has no atom, it is almost surely the case that $W_k \notin \enstq{u_{V_k}^{U_r}}{r\leq k-1, V_k\preceq U_r}$, so that the point $Y_{V_k,W_k}$ has not been used in the construction up to time $k-1$, and the sampling of $Z_k$ is indeed independent of the rest. \subsection{Properties of the construction} We introduce the set of leaves $\cL= \cT^\dec_\alpha\setminus \cup_{n\geq 1}\cT^\dec_n$. Then, still from \cite{senizergues_growing_2022}, we have the following: \begin{theorem} \label{thm:compact_hausdorff} If $\Ec{\diam(\cB)^p}<\infty$ for some $p>\frac{\alpha}{\gamma}$, then $\cT^\dec_\alpha$ is almost surely a compact metric space and $\Ec{\diam(\cT^\dec_\alpha)^p}<\infty$. Under the assumption that the measure on $\cB$ is almost surely not concentrated on its root, the Hausdorff dimension of the set of leaves $\mathcal L$ is given by \begin{align*} \dim_H(\cL)=\frac{\alpha}{\gamma} \end{align*} almost surely. \end{theorem} We can now provide a proof of Lemma~\ref{l:smalldec}, which can be seen as a corollary of the above theorem. \begin{proof}[Proof of Lemma~\ref{l:smalldec}] Introduce the random block $\widehat{\cB}$, defined on the same probability space as $\cB$ as the interval $\intervalleff{0}{\diam \cB}$ rooted at $0$ and whose sampling measure is a Dirac point mass at $\diam \cB$.
We also consider the corresponding object $\widehat{\cT}^\dec$. Because $\widehat{\cB}$ also satisfies the assumptions of the theorem above, $\widehat{\cT}^\dec$ is almost surely compact and by Corollary~\ref{cor:Tdec is compact iff union Tndec is} we have $\mathrm{d}_H(\widehat{\cT}^\dec, \widehat{\cT}^\dec_n)\rightarrow 0$ almost surely as $n\rightarrow\infty$. Then we can check that, for the uniform random variables $U_1,U_2,\dots,U_n$ that were used to define $\widehat{\cT}^\dec_n$, we have \begin{align*} \sup_{s\in \intervalleff{0}{1}} \sum_{t\prec s} \diam(\cB_t)\ind{\Delta_t<\delta} \leq\sup_{s\in \{U_1, U_2,\dots, U_n\}} \sum_{t\prec s} \diam(\cB_t)\ind{\Delta_t<\delta} + \mathrm{d}_H(\widehat{\cT}^\dec, \widehat{\cT}^\dec_n). \end{align*} Thanks to the above theorem, $\widehat{\cT}^\dec$ is compact and the second term tends to $0$ as $n\rightarrow \infty$. Then, for a fixed large $n$, the first term tends to $0$ as $\delta\rightarrow 0$. \end{proof} \section{Applications} \label{s:applications} We present several applications of the invariance principle in Theorem~\ref{th:invariance} to block-weighted models of random discrete structures. The limits of these objects are stable trees decorated by other stable trees, stable looptrees, or even Brownian discs. \subsection{Marked trees and iterated stable trees} \label{s:markedtrees} Define a class of combinatorial objects $\cM$ consisting of all marked rooted plane trees in which the root and every leaf receive a mark and internal vertices may or may not receive a mark. The size of an object $M$ in $\cM$ is its number of leaves and is denoted by $|M|$. For a given $n$ there are infinitely many trees of size $n$, so for simplicity we assume that there are no vertices of out-degree 1, rendering the number of trees of size $n$ finite.
\begin{figure} [!h] \centerline{\scalebox{0.7}{\includegraphics{treeblock.pdf}}} \caption{On the left is a tree from $\cM$ and on the right it is shown how it is decomposed into its tree blocks. The root is at the bottom and marked vertices are denoted by black and white circles.} \label{f:treeblocks} \end{figure} A subclass of $\cM$, which we denote by $\cL$, consists of trees where no internal vertex, except the root, receives a mark, and so an $n$-sized object from $\cL$ is a rooted plane tree with $n$ leaves. The ordinary generating series of the classes $\cM$ and $\cL$ satisfy the equation \begin {align}\label{eq:iso1} \cM(z) = z + \cL(\cM(z))-\cM(z). \end {align} The interpretation is that an object from $\cM$ is either a single vertex (the root of the tree) or a tree in $\cL$, different from a single root, in which each marked vertex is identified with the root of an object from $\cM$. We will call a subtree of an element $M$ from $\cM$ a \emph{tree block} if it has more than one vertex, all of its leaves are marked, its root is marked and no other vertices are marked. We further require that it is a maximal subtree with this property. The tree blocks may also be understood as the subtrees obtained by cutting the marked tree at each marked internal vertex, see Fig.~\ref{f:treeblocks}. \begin {remark} \label{re:iteriter} One may introduce marks with $k$ different colors and define ``color blocks'' and assign weights to them. This model is a candidate for a discrete version of further iterated trees $\mathcal{T}_{\alpha_1,\alpha_2,\ldots,\alpha_{k+1}}$. Denote the set of such structures with marks of $k$ colors by $\cM_k$, with $\cL = \cM_0$ and $\cM = \cM_1$. One then has an equation of generating series \begin {equation}\label{eq:iso2} \cM_k(z) = z + \cM_{k-1}(\cM_k(z))-\cM_k(z).
\end {equation} \end {remark} \begin {remark} It is a standard result (and easy to check) that \begin {equation*} \cL(z) = \frac{1+z-\sqrt{z^2-6z+1}}{4} = z + z^2 + 3z^3 + 11 z^4 + 45 z^5 + 197 z^6 + \ldots \end {equation*} and from this and Equation~\eqref{eq:iso1} one may deduce that \begin {equation*} \cM(z) = \frac{1+7z-\sqrt{z^2-10z+1}}{12} = z + z^2 + 5z^3+31z^4 + 215z^5 +1597 z^6 + \ldots \end {equation*} This sequence of coefficients is not in the OEIS. Going further one finds that \begin {equation*} \cM_2(z) = \frac{1+17z-\sqrt{z^2-14z+1}}{24} = z + z^2 + 7z^3+61z^4 + 595z^5 +6217 z^6 + \ldots \end {equation*} In general, by induction, \begin {align*} \cM_k(z) &= \frac{1+(2k^2+4k+1)z-\sqrt{z^2-(4k+6)z+1}}{2 (k+1)(k+2)} \\ &= z + z^2 + (2(k+1)+1)z^3 + (5(k+1)^2+5(k+1)+1)z^4 \\ & + (14(k+1)^3+21(k+1)^2+9(k+1)+1)z^5 + \ldots \end {align*} Note that $[z^n (k+1)^m]\cM_k(z)$ is the number of rooted plane trees with no vertex of out-degree 1 which have $n$ leaves and $m$ internal vertices (not counting the root). In particular it holds that \begin {equation*} \sum_{m=0}^{n-2} [z^n (k+1)^m]\cM_k(z) = [z^n] \cL(z). \end {equation*} \end {remark} To each element $L$ of $\cL$ we assign a weight $\gamma(L)$ and denote the class of such weighted structures by $\cL^\gamma$. We assume the weight of the marked tree consisting of a single vertex is equal to $1$. Then, we assign a weighting $\omega$ to elements $M$ from $\cM$ by \begin {equation*} \omega(M) = \prod_{L} \gamma(L) \end {equation*} where the index $L$ ranges over the tree blocks in $M$. Denote the corresponding class of weighted structures by $\cM^\omega$. The weighted ordinary generating series satisfy an equation of the same form as before \begin {align*} \cM^\omega(z) = z + \cL^\gamma(\cM^\omega(z)) - \cM^\omega(z). \end {align*} Define a random element $\mM_n^\omega$ from the set of $n$-sized elements of $\cM^\omega$ which is selected with a probability proportional to its weight.
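The functional equation \eqref{eq:iso2} determines the coefficients of $\cM_k$ recursively: rewriting it as $\cM_k = z+\sum_{j\geq 2}[z^j]\cM_{k-1}\cdot \cM_k^j$, the coefficient of $z^n$ on the right-hand side only involves coefficients of $\cM_k$ of order below $n$. The following Python sketch (illustrative; all names are ours) solves the truncated series this way, starting from $\cL$, which solves the same equation with $\cM_{-1}$ replaced by $z/(1-z)$, and reproduces the coefficients listed above.

```python
N = 7  # track coefficients of z^0, ..., z^6

def mul(a, b):
    # Product of two truncated power series given as length-N lists.
    c = [0] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

def solve(prev):
    # Coefficients of the series M solving M = z + prev(M) - M, i.e.
    # M = z + sum_{j >= 2} prev[j] * M^j.  Since M has valuation 1, the
    # coefficient of z^n on the right only uses m[1..n-1], so the
    # coefficients can be computed one at a time.
    m = [0] * N
    m[1] = 1
    for n in range(2, N):
        power = mul(m, m)  # M^2, correct at order n using m[1..n-1]
        coeff = 0
        for j in range(2, n + 1):
            coeff += prev[j] * power[n]
            power = mul(power, m)
        m[n] = coeff
    return m

geom = [0, 1] + [1] * (N - 2)  # z/(1-z): every out-degree >= 1 allowed
L = solve(geom)   # plane trees with n leaves and no out-degree 1
M1 = solve(L)     # the class M = M_1
M2 = solve(M1)    # the class M_2
```

One recovers $1,1,3,11,45,197$ for $\cL$, then $1,1,5,31,215,1597$ for $\cM$ and $1,1,7,61,595,6217$ for $\cM_2$, in agreement with the closed forms above.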
Let $(\iota_n)_{n\geq 0}$ and $(\zeta_n)_{n\geq 0}$ be sequences of non-negative weights, with $\iota_0=\zeta_0=1$ and $\iota_1 = \zeta_1 = 0$. We will be interested in weights $\gamma$ of the form \begin {equation*} \gamma(L) = \zeta_{|L|} W(L) \end {equation*} with \begin{figure} [b!] \centerline{\scalebox{0.45}{\includegraphics{iso.pdf}}} \caption{The coupling of $\mM_n^\omega$ with simply generated trees with leaves as atoms.} \label{f:iso} \end{figure} \begin {equation*} W(L) = \prod_{v \text{ internal vertex in } L} \iota_{d^+(v)}. \end {equation*} Let $(w_n)_{n\geq 0}$ be a sequence of non-negative numbers such that $w_0=1$ and $w_1=0$. For $n\geq 2$ we will write the weights $\zeta_n$ as follows \begin {align*} \zeta_n = w_n Z_n^{-1} \end {align*} where \begin {equation*} Z_n = \sum_{\substack{L\in\cL, |L| = n}} W(L) \end {equation*} is the \emph{partition function} of elements from $\cL^\gamma$ of size $n$. In particular, when we choose $\zeta_n = 1$ (i.e.~$w_n = Z_n$) for all $n\geq 2$, we say that $\mM_n^\omega$ is an $n$-leaf \emph{simply generated marked tree} with branching weights $(\iota_k)_k$. \begin {proposition} \label{pro:samplingmnw} The random element $\mM_n^\omega$ may be sampled as follows: \begin {enumerate} \item Sample an $n$-leaf simply generated tree $\tau_n = \tau_n^\omega$ with branching weights $[z^{k}]\cL^{\gamma}(z) = w_k$ assigned to each internal vertex of out-degree $k>1$ and branching weight $w_0=1$ assigned to each leaf. \item For each vertex $v$ of $\tau_n$ sample independently a $d^+(v)$-leaf simply generated tree $\delta(v)$ with branching weights $\iota_k$ assigned to each vertex of out-degree $k\geq 0$. Glue these together according to the tree structure of $\tau_n$ (see Fig.~\ref{f:iso}). \end {enumerate} \end {proposition} We refer to the surveys~\cite{MR2908619,zbMATH07235577} for details on simply generated models of trees with fixed numbers of vertices or leaves.
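When the weight sequences are probabilities, the two-step procedure of Proposition~\ref{pro:samplingmnw} can be mimicked by naive rejection sampling. The Python sketch below is purely illustrative (the helper names, the size cap and the toy offspring law used in the test are ours, and no attempt is made at efficiency): it samples a Bienaymé--Galton--Watson tree conditioned on its number of leaves, then performs the decoration of step 2, with leaves receiving a trivial one-vertex block.

```python
import random

def bgw_tree(offspring, max_size=500):
    # Sample an unconditioned BGW tree as nested lists; `offspring()`
    # returns a random number of children.  Trees exceeding max_size
    # are discarded (returned as None).
    size = 0
    def node():
        nonlocal size
        size += 1
        if size > max_size:
            raise OverflowError
        return [node() for _ in range(offspring())]
    try:
        return node()
    except OverflowError:
        return None

def leaves(t):
    # Number of leaves of a nested-list tree; a childless vertex counts as 1.
    return 1 if not t else sum(leaves(c) for c in t)

def bgw_with_n_leaves(offspring, n):
    # Naive rejection sampler for a BGW tree conditioned on n leaves.
    while True:
        t = bgw_tree(offspring)
        if t is not None and leaves(t) == n:
            return t

def decorate(tau, inner_offspring):
    # Step 2 of the sampling procedure: attach to every internal vertex of
    # tau an independent inner tree conditioned to have as many leaves as
    # the vertex has children; leaves of tau get a one-vertex block.
    d = len(tau)
    block = bgw_with_n_leaves(inner_offspring, d) if d > 0 else []
    return (block, [decorate(c, inner_offspring) for c in tau])
```

For instance, with the critical offspring law $\Pr{0}=\Pr{2}=\tfrac12$ (which has no out-degree 1), conditioning on $n$ leaves succeeds after a modest number of rejections for small $n$.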
The proof of Proposition~\ref{pro:samplingmnw} is by a straightforward calculation, or alternatively by applying a general result on sampling trees with decorations~\cite[Lem. 6.7]{zbMATH07235577}. We will only be interested in the case where $\cM^\omega(z)$ has positive radius of convergence. In this case one may rescale the weights $w_n$ and $\iota_n$ such that they are probabilities without affecting the distribution of $\mM_n^\omega$, and we will assume in the following that this has been done. We denote by $\xi$ a random variable with distribution $(w_n)_{n\geq 0}$ and by $\chi$ a random variable with distribution $(\iota_n)_{n\geq 0}$. The tree $\tau_n$ may then be viewed as a Bienaymé--Galton--Watson tree with offspring distribution $\xi$ conditioned on having $n$ leaves, and $\delta(v)$ may be viewed as a Bienaymé--Galton--Watson tree with offspring distribution $\chi$ conditioned on having $d^+(v)$ leaves. We let $d_{\mM_n^\omega}$ denote the graph-distance on $\mM_n^\omega$ and $\nu_{\mM_n}$ the counting measure on its set of non-root vertices. Proposition~\ref{pro:samplingmnw} ensures that $(\mM_n^\omega, d_{\mM_n^\omega}, \nu_{\mM_n})$ falls into the framework of discrete decorated trees described in Section~\ref{s:disc_construction}, with the random tree given by $\tau_n$ and decorations according to a sequence $(\tilde B_k,\tilde \rho_k,\tilde d_k,\tilde \ell_k,\tilde \nu_k)_{k\geq 0}$ given as follows. The space $\tilde{B}_k$ is a $\chi$-BGW-tree conditioned on having $k$ leaves, $\tilde{\rho}_k$ is its root vertex, and $\tilde{d}_k$ is the graph distance on that space. The labeling function $\tilde{\ell}_k$ is chosen to be some bijection between $\{1, \ldots, k\}$ and the leaves of $\tilde{B}_k$. Thus $\mu_k$ is the uniform measure on the $k$ leaves of $\tilde{B}_k$. The measure $\tilde{\nu}_k$ is the counting measure on the non-root vertices of $\tilde{B}_k$.
Choose $(w_n)_{n\geq 0}$ and $(\iota_n)_{n\geq 0}$ such that $\Ex{\xi}= \Ex{\chi} = 1$, and such that $\xi, \chi$ follow asymptotic power laws so that $\xi$ is in the domain of attraction of a stable law with index $\alpha_2 \in ]1,2[$ and $\chi$ lies in the domain of attraction of a stable law with index $\alpha_1 \in ]1,2[$. By~\cite[Thm. 6.1, Rem. 5.4]{MR2946438} there is a sequence \begin{align*} b_n = \left( \frac{|\Gamma(1-\alpha_2)|}{\Pr{\xi=0}} \right)^{1/\alpha_2}\inf \{x \ge 0 \mid \Pr{\xi > x} \le 1/n \} \sim c_1 n^{1/\alpha_2} \end{align*} for some constant $c_1>0$ such that \begin{align*} (b_n^{-1} W_{\lfloor|\tau_n|t\rfloor}(\tau_n))_{0 \le t \le 1} \convdis X^{\text{exc}, (\alpha_2)}. \end{align*} The number of vertices of $\tilde{B}_k$ concentrates around $k / \Pr{\chi=0}$, and by~\cite[Thm. 5.8]{MR2946438} it follows that for some constant $c_2>0$ \begin{align*} \left(\tilde B_k,\tilde\rho_k, c_2 k^{-(1-1/\alpha_1)} \tilde d_k, \mu_k,(k/\Pr{\chi=0})^{-1}\tilde \nu_k\right) \to (\cT_{\alpha_1},\rho_{\cT_{\alpha_1}}, d_{\cT_{\alpha_1}},\nu_{\cT_{\alpha_1}},\nu_{\cT_{\alpha_1}}) \end{align*} with $(\cT_{\alpha_1}, \rho_{\cT_{\alpha_1}}, d_{\cT_{\alpha_1}}, \nu_{\cT_{\alpha_1}})$ denoting the $\alpha_1$-stable tree with root vertex $\rho_{\cT_{\alpha_1}}$. For $\alpha_1 > \frac{1}{2 - \alpha_2}$ we may use the construction from Section~\ref{s:cts_construction} to form a decorated stable tree $(\cT_{\alpha_1,\alpha_2}, d_{\cT_{\alpha_1,\alpha_2}}, \nu_{\cT_{\alpha_1,\alpha_2}})$ with distance exponent $\gamma_1 := 1 - 1 / \alpha_1$, obtained by blowing up the branchpoints of the $\alpha_2$-stable tree with rescaled independent copies of the $\alpha_1$-stable tree. The measure $\nu_{\cT_{\alpha_1,\alpha_2}}$ is taken to be the push-forward of the Lebesgue measure, corresponding to the case $\beta=1$.
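As a heuristic consistency check on the normalization appearing in the next theorem: a block of $\tau_n$ corresponding to a jump of order $b_n$ is replaced by a decoration of diameter of order $c_2^{-1} b_n^{\gamma_1}$, so that distances in $\mM_n^\omega$ are of order \begin{align*} b_n^{\gamma_1} \sim \left(c_1 n^{1/\alpha_2}\right)^{1-1/\alpha_1} = c_1^{1-1/\alpha_1}\, n^{\frac{\alpha_1-1}{\alpha_2\alpha_1}}, \end{align*} which is the inverse of the factor $c_1^{-1+1/\alpha_1} c_2\, n^{-\frac{\alpha_1-1}{\alpha_2 \alpha_1}}$ multiplying $d_{\mM_n^\omega}$ below.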
Theorem~\ref{th:invariance} yields the following scaling limit: \begin {theorem}\label{thm:mainconv} Choose $(w_n)_{n\geq 0}$ and $(\iota_n)_{n\geq 0}$ such that $\Ex{\xi}= \Ex{\chi} = 1$, and such that $\xi, \chi$ follow asymptotic power laws so that $\xi$ is in the domain of attraction of a stable law with index $\alpha_2 \in ]1,2[$ and $\chi$ lies in the domain of attraction of a stable law with index $\alpha_1 \in ]1,2[$. Suppose that $\alpha_1 > \frac{1}{2 - \alpha_2}$. Then \begin {equation*} \left(\mM_n^\omega, c_1^{-1+1/\alpha_1} c_2 n^{-\frac{\alpha_1-1}{\alpha_2 \alpha_1}} d_{\mM_n^\omega}, \frac{1}{|\mM_n^\omega|}\nu_{\mM_n^\omega}\right) \convdis (\cT_{\alpha_1,\alpha_2}, d_{\cT_{\alpha_1,\alpha_2}}, \nu_{\cT_{\alpha_1,\alpha_2}}) \end {equation*} in the Gromov--Hausdorff--Prokhorov topology. \end {theorem} By Theorem~\ref{thm:compact_hausdorff} and standard arguments we also obtain the Hausdorff dimension of the iterated stable tree: \begin {theorem} For every $\alpha_2 \in (1,2)$ and $\alpha_1 \in (\frac{1}{2-\alpha_2},2)$, almost surely the random tree $\mathcal{T}_{\alpha_1,\alpha_2}$ has Hausdorff dimension $\displaystyle\frac{\alpha_2 \alpha_1}{\alpha_1-1}$. \end {theorem} The construction may be iterated: with $\gamma_2 := \gamma_1 / \alpha_2$ we may choose any $\alpha_3 \in ]1, 1+\gamma_2[$ and build the iterated stable tree $\cT_{\alpha_1, \alpha_2, \alpha_3}$ as in Section~\ref{s:cts_construction} with distance exponent~$\gamma_2$, by blowing up the branchpoints of an $\alpha_3$-stable tree by rescaled independent copies of the iterated stable tree $\cT_{\alpha_1, \alpha_2}$. By Remark~\ref{re:iteriter} and arguments analogous to those for Theorem~\ref{thm:mainconv}, the tree $\cT_{\alpha_1, \alpha_2, \alpha_3}$ arises as the scaling limit of random finite marked trees with diameter of order $n^{\gamma_3}$ for $\gamma_3 := \gamma_2/ \alpha_3$.
This construction may be iterated indefinitely, by choosing $\alpha_4 \in ]1, 1 + \gamma_3[$ and setting $\gamma_4 = \gamma_3 / \alpha_4$, and so on. This yields a sequence $(\alpha_i)_{i \ge 1}$ so that $\cT_{\alpha_1, \ldots, \alpha_k}$ is well-defined for all $k \ge 2$. We pose the following question: \begin{question} Is there a non-trivial scaling limit for $\cT_{\alpha_1, \ldots, \alpha_k}$ as $k \to \infty$? \end{question} Note that the associated sequence $(\gamma_i)_{i \ge 1}$ is strictly decreasing and satisfies $\gamma_{i+1} < \gamma_i/ (\gamma_i +1)$, which yields $\lim_{i \to \infty} \gamma_i= 0$ and hence $\lim_{i \to \infty} \alpha_i= 1$. The intuition for the question is that if $\alpha$ is close to $1$, then the vertex with maximal width in $\cT_\alpha$ should dominate and hence blowing up all branchpoints by some decoration should yield something close to a single rescaled version of that decoration. Hence $\cT_{\alpha_1, \ldots, \alpha_{k+1}}$ \emph{should} be close to a constant multiple of $\cT_{\alpha_1, \ldots, \alpha_{k}}$ for $k$ large enough. Since $\cT_{\alpha_1, \ldots, \alpha_{k}}$ has Hausdorff dimension $\alpha_1 \cdots \alpha_k/(\alpha_1-1)$, we expect that $(\alpha_i)_{i \ge 1}$ needs to be chosen so that $\prod_{i=1}^\infty \alpha_i$ converges in order to get a scaling limit. \subsection{Weighted outerplanar maps} \label{sec:outer} Planar maps may roughly be described as drawings of connected graphs on the $2$-sphere, such that edges are represented by arcs that may only intersect at their endpoints. The connected components that are created when removing the map from the sphere are called the faces of the map. The number of edges on the boundary of a face is its degree. In order to avoid symmetries, usually a root edge is distinguished and oriented. The origin of the root edge is called the root vertex. The face to the right of the root edge is called the outer face, and the face to the left the root face.
An outerplanar map is a planar map where all vertices lie on the boundary of the outer face. The geometric shape of outerplanar maps has received some attention in the literature~\cite{zbMATH06673644,zbMATH06729837,zbMATH07138334,zbMATH07235577}. Throughout this section we fix two sequences $\iota = (\iota_k)_{k \ge 3}$ and $\kappa = (\kappa_k)_{k \ge 2}$ of non-negative weights. We are interested in random outerplanar maps that are generated according to $\kappa$-weights on their blocks and $\iota$-weights on their faces. Our goal in this section is to describe phases in which we obtain decorated stable trees as scaling limits. Specifically, we will obtain stable trees decorated with looptrees and Brownian trees. This is motivated by the aforementioned work~\cite{zbMATH07138334}, which described a phase transition of random face-weighted outerplanar maps from a deterministic circle to the Brownian tree via looptrees. By utilizing the second weight sequence $\kappa$ we obtain a completely different phase diagram. We will only consider outerplanar maps without multi-edges or loop edges. Recall that a block of a graph is a connected subgraph $D$ that is maximal with the property that removing any of its vertices does not disconnect $D$. Blocks of outerplanar maps are precisely dissections of polygons. We define the weight of a dissection $D$ by \begin{align} \gamma(D) = \kappa_{|D|} \prod_F \iota_{\mathrm{deg}(F)}, \end{align} with $|D|$ denoting the number of vertices of $D$, the index $F$ ranging over the faces of $D$, and $\mathrm{deg}(F)$ denoting the degree of the face $F$. This includes the case where $D$ consists of two vertices joined by a single edge. The weight of an outerplanar map $O$ is then defined by \begin{align} \label{eq:omegao} \omega(O) = \prod_{D} \gamma(D), \end{align} with the index $D$ ranging over the blocks of $O$.
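To make the weight bookkeeping in~\eqref{eq:omegao} concrete, the following Python sketch computes $\gamma(D)$ and $\omega(O)$, assuming (purely for illustration) that each block is encoded by its number of vertices together with the list of the degrees of its faces.

```python
def dissection_weight(num_vertices, face_degrees, kappa, iota):
    """gamma(D) = kappa_{|D|} * prod_F iota_{deg(F)}, the product ranging
    over the faces of the dissection (empty for a single edge)."""
    weight = kappa(num_vertices)
    for deg in face_degrees:
        weight *= iota(deg)
    return weight

def map_weight(blocks, kappa, iota):
    """omega(O) = prod_D gamma(D), the product ranging over the blocks of O."""
    weight = 1.0
    for num_vertices, face_degrees in blocks:
        weight *= dissection_weight(num_vertices, face_degrees, kappa, iota)
    return weight

# Illustrative weight sequences: kappa_k = 2^(-k) and iota_k = 1.
kappa = lambda k: 2.0 ** (-k)
iota = lambda k: 1.0
```

For instance, a block consisting of a single edge then receives weight $\kappa_2 = 1/4$, and the weight of a map is the product of its block weights.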
Given $n \ge 1$ such that at least one outerplanar map with $n$ vertices has positive $\omega$-weight, we may define a random outerplanar map $O_n$ that is selected with probability proportional to its $\omega$-weight among the finitely many outerplanar maps with $n$ vertices. We will only consider the case where infinitely many integers $n$ with this property exist. Likewise, we define the random dissection $D_n$ that is sampled with probability proportional to its weight among all dissections with $n$ vertices. The random outerplanar map $O_n$ fits into the framework considered in the present work. We will make this formal. For each $k \ge 2$ let us set \begin{align} \label{eq:defd} d_k = \sum_{D : |D| =k} \gamma(D), \end{align} with the sum index $D$ ranging over all dissections of $k$-gons. We will only consider the case where the power series $D(z) := \sum_{k \ge 2} d_k z^k$ has positive radius of convergence $\rho_D > 0$. For any $t>0$ with $D(t) < t$ we define the probability weight sequence \begin{align} (p_i(t))_{i \ge 0} = (1 - D(t)/t, 0, d_2 t, d_3 t^2, d_4 t^3, \ldots). \end{align} Its first moment is given by $\sum_{k \ge 2} k d_k t^{k-1}$, and we set \begin{align} m_O = \lim_{t \nearrow \rho_D} \sum_{k \ge 2} k d_k t^{k-1} \in ]0, \infty]. \end{align} If $m_O \ge 1$, then there is a unique $0 < t_O \le \rho_D$ for which $ \sum_{k \ge 2} k d_k t_O^{k-1} = 1$. If $m_O < 1$ we set $t_O = \rho_D$. Furthermore, we let $\xi$ denote a random non-negative integer with probabilities \begin{align} \label{eq:defxi} \Pr{\xi= i} = p_i(t_O), \qquad i \ge 0. \end{align} Let $T_n$ denote a $\xi$-BGW tree conditioned on having $n$ leaves. Note that $T_n$ has no vertex with outdegree $1$. Let $g_n>0$ be a positive real number.
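The parameter $t_O$ solving $\sum_{k \ge 2} k d_k t_O^{k-1} = 1$ may be located by bisection, since the first moment is increasing in $t$. The following Python sketch assumes, for illustration only, finitely many non-zero coefficients $d_k$:

```python
def first_moment(d, t):
    """sum_{k>=2} k * d_k * t^(k-1), for d a dict mapping k to d_k."""
    return sum(k * dk * t ** (k - 1) for k, dk in d.items())

def critical_tilt(d, rho, tol=1e-12):
    """Return t_O: the solution of first_moment(d, t) = 1 in (0, rho]
    when the limit m_O at rho is >= 1, and rho otherwise (m_O < 1)."""
    if first_moment(d, rho) < 1.0:
        return rho
    lo, hi = 0.0, rho
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if first_moment(d, mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For instance, if the only block with positive weight is the single edge ($d_2 = 1$), then $2 t_O = 1$ gives $t_O = 1/2$ and $p_0(t_O) = 1 - D(t_O)/t_O = 1/2$.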
For each $k \ge 2$ let $(\tilde B_k,\tilde d_k,\tilde\rho_k,\tilde\ell_k,\tilde \nu_k)$ denote the decoration with $\tilde{B}_k = D_k$, $\tilde d_k $ the graph distance on $D_k$ multiplied by $g_n$, $\tilde{\rho}_k$ the root vertex of $D_k$, $\tilde{\ell}_k: \{1, \ldots, k\} \to D_k$ any fixed enumeration of the $k$ vertices of $D_k$, and $\tilde{\nu}_k$ the counting measure on the non-root vertices (that is, all vertices except for the origin of the root edge) of $D_k$. We let $\tilde{B}_0$ denote a trivial one-point space with no mass. \begin{figure}[t] \centering \begin{minipage}{\textwidth} \centering \includegraphics[width=0.7\linewidth]{outerdec} \caption[] {Correspondence of an outerplanar map (top left corner) to a tree decorated by dissections of polygons (bottom half).\footnote{Source of image: \cite{zbMATH07138334}}} \label{f:outerdec} \end{minipage} \end{figure} \begin{lemma}[{\cite[Lem. 6.7, Sec. 6.1.4]{zbMATH07235577}}] \label{le:outerdeco} The decorated tree $(T_n^\dec,d_n^\dec,\nu_n^\dec)$ is distributed like the random weighted outerplanar map $O_n$ equipped with the graph distance multiplied by $g_n$ and the counting measure on the non-root vertices. \end{lemma} \noindent See Figure~\ref{f:outerdec} for an illustration. We refer to the cited sources for detailed justifications. Similarly to the parameter $m_O$, we let $\rho_\iota$ denote the radius of convergence of the series $\sum_{k \ge 2} \iota_{k+1} z^{k}$ and set \begin{align*} m_D := \lim_{t \nearrow \rho_\iota} \sum_{k \ge 2} k \iota_{k+1} t^{k-1} \in ]0, \infty]. \end{align*} If $m_D\ge 1$, there is a unique constant $0 < t_D \le \rho_\iota$ such that $\sum_{k \ge 2} k \iota_{k+1} t_D^{k-1} = 1$. For $m_D < 1$ we set $t_D = \rho_\iota$. Furthermore, we let $\zeta$ denote a random non-negative integer with distribution given by the probability weight sequence \begin{align*} (q_i)_{i \ge 0} := (1 - \sum_{k \ge 2} \iota_{k+1} t_D^{k-1}, 0, \iota_{3} t_D, \iota_4 t_D^2, \ldots).
\end{align*} We list three known non-trivial scaling limits for the random face-weighted dissection~$D_n$. If $m_D>1$, then by~\cite{MR3382675} $D_n$ lies in the universality class of the Brownian tree $\mathcal{T}_2$, with a diameter of order $\sqrt{n}$. (That is, $\mathcal{T}_2$ is given by $\sqrt{2}$ times the random real tree obtained by identifying points of a Brownian excursion of duration $1$.) Specifically, \begin{align} \label{eq:d1} \frac{\sqrt{2}}{\sqrt{ \Va{\zeta} q_0}} \frac{1}{4} \left( \Va{\zeta} + \frac{q_0\Pr{\zeta \in 2 \ndN_0}}{2\Pr{\zeta \in 2 \ndN_0} - q_0 } \right) \frac{1}{\sqrt{n}} D_n \convdis \mathcal{T}_2. \end{align} If $m_D = 1$ and $\Pr{\zeta \ge n} \sim c_D n^{-\eta}$ for $1 < \eta < 2$ and $c_D>0$, then by~\cite{MR3286462} $D_n$ lies in the universality class of the $\eta$-stable looptree $\mathcal{L}_\eta$: \begin{align} \label{eq:d2} (c_D q_0 |\eta(1- \eta)|)^{1/\eta} \frac{1}{n^{1/\eta}} D_n \convdis \mathcal{L}_\eta \end{align} in the Gromov--Hausdorff--Prokhorov sense. Here and in the following we use a shortened notation, where the product of a real number and a random finite graph refers to the vertex set space with the corresponding rescaled graph metric and the uniform measure on the set of vertices. If $0<m_D < 1$ and $\Pr{\zeta=n} = f(n) n^{-\theta}$ for some constant $\theta > 2$ and a slowly varying function $f$, then $D_n$ lies in the universality class of a deterministic loop. That is, \begin{align} \label{eq:d3} \frac{q_0}{n(1 - \Ex{\zeta})} D_n \convdis S^1 \end{align} with $S^1$ denoting the $1$-sphere. See for example~\cite{zbMATH07138334} for a detailed justification. Knowing the asymptotic behaviour of $D_n$ allows us to describe the asymptotic shape of the random weighted outerplanar map $O_n$: \begin{itemize} \item If $m_O>1$, then by~\cite[Thm. 
6.60]{zbMATH07235577} there is a constant $c_\omega > 0$ (defined in a complicated manner using expected distances in bi-pointed Boltzmann dissections) such that \begin{align} c_\omega \frac{1}{\sqrt{n}} O_n \convdis \mathcal{T}_2. \end{align} In this case the asymptotics of weighted dissections only influence the constant $c_\omega$, but not the universality class. \item If $m_O=1$ and $\Pr{\xi \ge n} \sim c_O n^{-\alpha}$ for $1<\alpha<2$, then $T_n$ lies in the universality class of an $\alpha$-stable tree. That is, by~\cite[Thm. 6.1, Rem. 5.4]{MR2946438} there is a sequence \begin{align*} b_n = \left( \frac{|\Gamma(1-\alpha)|}{\Pr{\xi=0}} \right)^{1/\alpha}\inf \{x \ge 0 \mid \Pr{\xi > x} \le 1/n \} \end{align*} of order $n^{1/\alpha}$ for which \begin{align*} (b_n^{-1} W_{\lfloor|T_n|t\rfloor}(T_n))_{0 \le t \le 1} \convdis X^{\text{exc}, (\alpha)}. \end{align*} Now, the outcome of applying Theorem~\ref{th:invariance} depends on the decoration. Suppose that $\gamma > \alpha-1$. Then in each of the three discussed cases~\eqref{eq:d1},~\eqref{eq:d2}, and~\eqref{eq:d3} we obtain a scaling limit of the form \begin{align} \frac{1}{b_n^\gamma} O_n \convdis \mathcal{T}_\alpha^\dec. \end{align} Specifically: \subitem a) If $m_D>1$ then $\gamma = 1/2$ and $\mathcal{T}_\alpha^\dec$ is the $\alpha$-stable tree decorated according to a constant multiple of the Brownian tree $\mathcal{T}_2$ (with the constant given by the inverse of the scaling factor in~\eqref{eq:d1}). \subitem b) If $m_D = 1$ and $\Pr{\zeta \ge n} \sim c_D n^{-\eta}$ for $1 < \eta < 2$ and $c_D>0$, then $\gamma = 1/\eta$ and $\mathcal{T}_\alpha^\dec$ is the $\alpha$-stable tree decorated according to the stretched $\eta$-stable looptree $(c_D q_0 |\eta(1- \eta)|)^{-1/\eta} \mathcal{L}_\eta$. 
\subitem c) If $0<m_D<1$ and $\Pr{\zeta=n} = f(n) n^{-\theta}$ for some constant $\theta > 2$ and a slowly varying function $f$, then $\gamma = 1$ and $\mathcal{T}_\alpha^\dec$ is the $\alpha$-stable tree decorated according to the stretched circle $\frac{1 - \Ex{\zeta}}{q_0} S^1$. In other words, $\mathcal{T}_\alpha^\dec$ is distributed like the stretched $\alpha$-stable looptree $\frac{1 - \Ex{\zeta}}{q_0} \mathcal{L}_\alpha$. Here condition B1 (which is necessary for applying Theorem~\ref{th:invariance}) may be verified using Proposition~\ref{prop:small blobs don't contribute} and Remark~\ref{re:remarkleaves}. The required uniform integrability (of sufficiently high order) of the diameter of the decorations is trivial in case c), follows from~\cite[Lem. 6.61, Sec. 6.1.3]{zbMATH07235577} in case a), and follows analogously from tail-bounds for conditioned BGW trees~\cite{kortchemski_sub_2017} in case b). \item Finally, if $0<m_O < 1$ and $\Pr{\xi=n} = f_O(n) n^{-\theta_O}$ for some constant $\theta_O > 2$ and a slowly varying function $f_O$, then $O_n$ has a giant $2$-connected component of size about $n (1- \Ex{\xi}) / \Pr{\xi=0}$. Hence, if $D_n$ admits a scaling limit as in the three discussed cases, then the scaling limit for $O_n$ is, up to a constant multiplicative factor, the same as for $D_n$. See for example~\cite{zbMATH07138334} for details on such approximations. \end{itemize} \subsection{Weighted planar maps with a boundary} A planar map with a boundary refers to a planar map where the outer face plays a special role, and the perimeter (the number of half-edges on the boundary of the outer face) serves as a size parameter. The reason for counting half-edges instead of edges is that both sides of an edge may be adjacent to the outer face, in which case the edge is counted twice. We say that the boundary of a planar map is simple if it is a cycle. Here we explicitly allow the case of degenerate $2$-cycles and $1$-cycles.
That is, a map consisting of two vertices joined by a single edge has a simple boundary. A map consisting of a loop with additional structures on the inside has a simple boundary, and so has a map consisting of two vertices joined by two edges and additional structures on the inside. Planar maps with a boundary fit into the framework of decorated trees by arguments identical to those for outerplanar maps. That is, we may decompose a planar map into its components with a simple boundary in the same way as an outerplanar map may be decomposed into dissections of polygons. They may be bijectively encoded as trees decorated by maps with a simple boundary in the same way as illustrated in Figure~\ref{f:outerdec}. Since we allow multi-edges and loops, the leaves of the encoding tree canonically and bijectively correspond to the half-edges on the boundary of the map. There are infinitely many planar maps with a fixed positive perimeter, hence it makes sense to assign summable weights. Say, for each planar map $D$ with a simple boundary we are given a weight $\gamma(D) \ge 0$. Like in Equation~\eqref{eq:omegao}, we then define the weight $\omega(M)$ of a planar map $M$ by \begin{align} \label{eq:omegam} \omega(M) = \prod_{D} \gamma(D), \end{align} with the index $D$ ranging over the components of $M$ with a simple boundary. Thus, for any positive integer $n$ for which the sum of all $\omega$-weights of $n$-perimeter maps is finite and non-zero we may define the random $n$-perimeter map $M_n$ that is drawn with probability proportional to its weight. Note that this formally encompasses the random outerplanar map $O_n$, for which $\gamma(D) = 0$ whenever $D$ is not a dissection of a polygon of perimeter at least $3$. Using the same definitions~\eqref{eq:defd}--\eqref{eq:defxi} for $m_O$ and $\xi$, it follows that the tree $T_n$ corresponding to the random map $M_n$ is distributed like a $\xi$-BGW tree conditioned on having $n$ leaves.
Furthermore, employing analogous definitions for the decorations $(\tilde B_k,\tilde d_k,\tilde\rho_k,\tilde\ell_k,\tilde \nu_k)_{k \ge 0}$, it follows as in Lemma~\ref{le:outerdeco} that $M_n$ with distances rescaled by some scaling sequence $a_n>0$ is a discrete decorated tree: \begin{corollary} The decorated tree $(T_n^\dec,d_n^\dec,\nu_n^\dec)$ is distributed like the random weighted map $M_n$ with perimeter $n$, equipped with the graph distance multiplied by $a_n$ and the counting measure on the non-root vertices. \end{corollary} The boundary of $M_n$ also fits into this framework, but its scaling limits have already been determined in pioneering works~\cite{Richier:2017,zbMATH07253709}, with the parameter $m_O$ and the tail of $\xi$ ultimately determining whether it is a deterministic loop, a random looptree, or the Brownian tree. In order to apply Theorem~\ref{th:invariance} to $M_n$ (as opposed to its boundary) we need to be able to look inside the components, that is, we need a description of the asymptotic geometric shape of Boltzmann planar maps with a large simple boundary. However, no such results appear to be known. What is known by~\cite{Richier:2017} is that so-called non-generic critical face weight sequences with parameter $\alpha' \in ]1, 3/2[$ lie in a stable regime with $m_O = 1$ and $T_n$ in the universality class of an $\alpha$-stable tree for $\alpha = 1 / (\alpha' - 1/2)$. The boundary of $M_n$ lies in the universality class of an $\alpha$-stable looptree by~\cite[Thm. 1.1]{Richier:2017}. It is natural to wonder whether additional knowledge of Boltzmann planar maps with a simple boundary in this regime enables the application of Theorem~\ref{th:invariance}. This motivates the following question: \begin{question} Do the decorated stable trees constructed in the present work arise as scaling limits of face-weighted Boltzmann planar maps with a boundary?
\end{question} We note that there is a connection to the topic of stable planar maps arising as scaling limits (along subsequences) of Boltzmann planar maps without a boundary~\cite{zbMATH06469338,zbMATH06932734}. Roughly speaking, Boltzmann planar maps with a large boundary are thought to describe the asymptotic geometric behaviour of macroscopic faces of Boltzmann planar maps without a boundary. There are results for related models of triangulations and quadrangulations with a simple boundary with an additional weight on the vertices~\cite{zbMATH07039779, zbMATH07343343}. Scaling limits for models with a non-simple boundary have been determined for the special case of uniform quadrangulations with a boundary~\cite{MR3335010,zbMATH07212165} that are conditioned on both the number of vertices and the boundary length. For Boltzmann triangulations with the vertex weights as in~\cite{zbMATH07343343} we would expect to be in the regime $m_O<1$ where the shape of the map is dominated by a unique macroscopic component with a simple boundary. However, we could introduce block-weights as before in order to force the model into a stable regime. At least in principle, Theorem~\ref{th:invariance} then yields convergence towards a decorated stable tree obtained by blowing up the branchpoints of a stable tree by rescaled Brownian discs. Checking the requirements of Theorem~\ref{th:invariance} (such as verifying convergence of the moments of the rescaled diameter of Boltzmann triangulations with a simple boundary) does not appear to involve any major obstacles, but we did not go through the details. To do so, we would need to recall extensive combinatorial background, and this appears to be beyond the scope of the present work.
\appendix \section{Appendix} This appendix contains the proofs of two technical statements that are used in the paper, Proposition~\ref{prop:small blobs don't contribute} and Lemma~\ref{lem:moment measure discrete block implies B2 or B3}, which ensure that the main assumptions under which the rest of the paper is stated are satisfied for reasonable models of decorated BGW trees. Section~\ref{subsec:regularly varying functions and doa} recalls some general results about regularly varying functions and domains of attraction of stable random variables. Section~\ref{subsec:estimates for BGW} presents some estimates, which are useful later on, for critical BGW trees whose reproduction law is in the domain of attraction of a stable law. In Section~\ref{subsec:spine decomposition with marks}, we introduce the notion of trees with marks on the vertices and prove a spine decomposition result for those marked trees. Finally, in Section~\ref{subsec:small blobs do not contribute} we prove Proposition~\ref{prop:small blobs don't contribute}, which is the main technical result of this Appendix, using all the results and estimates derived before. At the end, in Section~\ref{subsec:proof of lemma about measure}, which is independent of what comes before, we prove Lemma~\ref{lem:moment measure discrete block implies B2 or B3}. \subsection{Regularly varying functions and domains of attraction}\label{subsec:regularly varying functions and doa} \subsubsection{Compositional inverse of regularly varying functions} Following standard terminology, we say that a function $f$ defined on a neighbourhood of infinity is regularly varying (at infinity) with exponent $\alpha\in \R$ if for every $\lambda>0$ we have \begin{align*} \frac{f(\lambda x)}{f(x)}\underset{x\rightarrow\infty}{\rightarrow} \lambda^\alpha. \end{align*} We consider those functions up to the equivalence relation of having a ratio tending to $1$ at infinity.
When the index of regularity is positive, $\alpha >0$, a regularly varying function $f$ with exponent $\alpha$ tends to infinity and we can define (at least in a neighbourhood of infinity): \begin{align*} f^{[-1]}(x):=\inf\enstq{y\in\R_+}{f(y)\geq x}, \end{align*} and the equivalence class of $f^{[-1]}$ only depends on the equivalence class of $f$. Then $f^{[-1]}$ is a regularly varying function with index $\alpha^{-1}$ and satisfies: \begin{align*} f\circ f^{[-1]}(x)\underset{x \rightarrow\infty}{\sim} f^{[-1]}\circ f (x)\underset{x \rightarrow\infty}{\sim} x. \end{align*} \subsubsection{Asymmetric stable random variable} For $\alpha\in\intervalleoo{0}{1}\cup \intervalleoo{1}{2}$, we let $Y_\alpha$ be a random variable with so-called asymmetric stable law of index $\alpha$, with distribution characterized by its Laplace transform, for all $\lambda>0$, \begin{align*} \Ec{\exp\left(-\lambda Y_\alpha \right)}&= \exp\left(-\lambda ^\alpha \right) \quad \text{if}\quad 0<\alpha <1,\\ &=\exp\left(\lambda ^\alpha \right) \quad \text{if}\quad 1<\alpha <2. \end{align*} It is ensured by \cite[Eq.(I20)]{zolotarev_one_1986} that those distributions have a density $d_\alpha$ and that this density is continuous and bounded. \paragraph{Domain of attraction.} Let $X$ be a random variable such that $\Pp{\abs{X}>x}$ is regularly varying with index $-\alpha$, centred if $\alpha\in\intervalleoo{1}{2}$. Consider a sequence $X_1, X_2,\dots$ of i.i.d.\ random variables with the law of $X$. Suppose that \begin{align*} \frac{\Pp{X>x}}{\Pp{\abs{X}>x}}\underset{x\rightarrow\infty}{\rightarrow} 1. \end{align*} Then, let \begin{align}\label{eq:regularly varying function associated to X} B_X(x):=\abs{\frac{1-\alpha}{\Gam{2-\alpha}}}^{-\frac{1}{\alpha}} \left(\frac{1}{\Pp{\abs{X}\geq x}}\right)^{[-1]}. \end{align} We also denote this function by $B_\nu$ if $\nu$ is the law of $X$. Since $x\mapsto \frac{1}{\Pp{\abs{X}\geq x}} $ is $\alpha$-regularly varying, $B_X$ is $\alpha^{-1}$-regularly varying.
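As an illustration of the definition~\eqref{eq:regularly varying function associated to X}, the following Python sketch evaluates $B_X$ for a hypothetical pure Pareto tail, computing the generalized inverse $f^{[-1]}$ by bisection; it plays no role in the arguments.

```python
import math

def generalized_inverse(f, x, hi=1.0, tol=1e-9):
    """f^{[-1]}(x) = inf{y >= 0 : f(y) >= x} for a nondecreasing function f."""
    while f(hi) < x:                       # find an upper bracket
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol * max(hi, 1.0):    # bisect down to the infimum
        mid = (lo + hi) / 2.0
        if f(mid) < x:
            lo = mid
        else:
            hi = mid
    return hi

def B_X(n, alpha, tail):
    """B_X(n) = |(1-alpha)/Gamma(2-alpha)|^(-1/alpha) * (1/P(|X| >= .))^{[-1]}(n)."""
    const = abs((1.0 - alpha) / math.gamma(2.0 - alpha)) ** (-1.0 / alpha)
    return const * generalized_inverse(lambda y: 1.0 / tail(y), float(n))

# Hypothetical Pareto tail P(|X| >= y) = min(1, y^(-3/2)); then the generalized
# inverse equals n^(2/3), so B_X is (1/alpha)-regularly varying, as stated above.
tail = lambda y: 1.0 if y <= 1.0 else y ** (-1.5)
```

With this tail, doubling $n$ six times multiplies $B_X(n)$ by $64^{2/3} = 16$, in line with the regular variation of index $\alpha^{-1} = 2/3$.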
Then, by \cite[Theorem~4.5.1]{whitt_stochastic_2002}, we have: \begin{align*} \frac{1}{B_X(n)}\cdot \sum_{i=1}^{n}X_i \tend{n\rightarrow\infty}{(d)} Y_\alpha, \end{align*} where $Y_\alpha$ has the asymmetric $\alpha$-stable law, which we recall has a density $d_\alpha$. In fact, if the random variable $X$ is integer-valued (and not supported on a non-trivial arithmetic progression), a more precise version of the above convergence, known as a \emph{local limit theorem}, holds; see \cite[Theorem~4.2.1]{MR0322926}. Let $S_n=\sum_{i=1}^n X_i$, then we have \begin{align}\label{eq:stable local limit theorem} \sup_{k\in \Z} \Big| B_X(n)\Pp{S_n=k} - d_\alpha\left(\frac{k}{B_X(n)}\right) \Big| \underset{n\rightarrow\infty}{\rightarrow} 0. \end{align} Using the fact that the asymmetric $\alpha$-stable density $d_\alpha$ is bounded, note that the above convergence ensures in particular that there exists a constant $C$ so that for all $n\geq 1$ and all $k\in \Z$ we have \begin{align}\label{eq:uniform bound for the probability Sn=k} \Pp{S_n=k} \leq \frac{C}{B_X(n)}. \end{align} \subsubsection{The Potter bounds.} From \cite[Theorem~1.5.6(iii)]{bingham_regular_1989}: if $f$ is a regularly varying function of index $\rho$, then for every $A>1$ and $\epsilon>0$, there exists $B$ such that for all $x,y\geq B$ we have \begin{equation} \frac{f(y)}{f(x)}\leq A \cdot \max \left\lbrace\left(\frac{y}{x}\right)^{\rho-\epsilon},\left(\frac{y}{x}\right)^{\rho+\epsilon}\right\rbrace. \end{equation} When $f$ is defined on the whole interval $\intervalleoo{0}{\infty}$ and bounded below by a positive constant, we can increase the constant $A$ so that the last display holds for any $x,y\geq 1$. Let us apply this to $B_X$, the regularly varying function associated with some positive random variable $X$ in the domain of attraction of a $\theta$-stable law.
For all $\epsilon>0$ there exists a constant $C$ such that for all $n\geq 1$ and all $\frac{1}{B_X(n)}\leq \lambda \leq 1$, \begin{align}\label{eq:potter bounds applied to B-1(lambda B_n)} C^{-1}\cdot n\cdot \lambda^{\theta+\epsilon}\leq B_X^{[-1]}(\lambda B_X(n)) \leq C\cdot n\cdot \lambda^{\theta-\epsilon}, \end{align} and so that in particular, possibly for another constant $C$, \begin{align} C^{-1}\cdot \frac{1}{n}\cdot \lambda^{-\theta+\epsilon}\leq \Pp{X\geq \lambda B_X(n)} \leq C\cdot \frac{1}{n}\cdot \lambda^{-\theta-\epsilon}. \end{align} \subsection{Estimates for Bienaymé--Galton--Watson trees with $\alpha$-stable tails}\label{subsec:estimates for BGW}\label{subsec:the case of BGW} Let $\mu$ be a critical reproduction law in the domain of attraction of an $\alpha$-stable distribution, with $\alpha\in \intervalleoo{1}{2}$. Recall that $d_\alpha$ is the density function of the random variable $Y_\alpha$. The following lemma contains all the results that we need in the subsequent sections of the appendix. \begin{lemma}\label{lem:application of the potter bounds} Let $D$ be a random variable with distribution $\mu^*$, the size-biased version of $\mu$, and $(\tau_i)_{i\geq 1}$ be independent BGW trees with reproduction law $\mu$.
Then, for any $\eta>0$, there exists a constant $C$ such that for all $n\geq 1$ and all $\lambda\in \intervalleff{\frac{1}{B_n}}{1}$, we have \begin{enumerate}[(i)] \item \label{it:probability of degree being large} $\displaystyle \Pp{D\geq \lambda B_n}\leq C \cdot \lambda^{1-\alpha-\eta} \cdot \frac{B_n}{n},$ \item \label{it:probability total size of Bn GW is k} For all $k\in \Z$,\ \[\displaystyle\Pp{\sum_{i=1}^{\lambda B_n}\abs{\tau_i}=k} \leq C\cdot \frac{ \lambda^{-\alpha-\eta}}{n}.\] \item \label{it:expectation inverse total size of GW} $\displaystyle\Ec{\frac{1}{\sum_{i=1}^{\lambda B_n}\abs{\tau_i}}} \leq C\cdot \frac{ \lambda^{-\alpha-\eta}}{n}.$ \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:application of the potter bounds}] We first prove (\ref{it:probability of degree being large}). Using \cite[Eq.~(45)]{kortchemski_sub_2017}, we have \begin{align*} \Pp{D\geq k} = \mu^*(\intervallefo{k}{\infty}) \underset{k\rightarrow\infty}{\sim} \frac{\alpha}{\alpha-1}\cdot k \cdot \mu(\intervallefo{k}{\infty}), \end{align*} so for $C$ a constant chosen large enough we get that for all $k\geq 1$, \begin{align*} \Pp{D\geq k} \leq C\cdot k \cdot \mu(\intervallefo{k}{\infty}). \end{align*} Applying this to $k=\lceil\lambda B_n\rceil$ and using the Potter bounds yields, for a value of $C$ that may change from line to line, \begin{align*} \Pp{D\geq \lambda B_n} \leq C\cdot \lceil\lambda B_n\rceil\cdot \mu(\intervallefo{\lambda B_n}{\infty}) &\leq C\cdot \lceil\lambda B_n\rceil\cdot \frac{1}{n}\cdot \lambda^{-\alpha-\eta}\\ &\leq C\cdot\frac{B_n}{n}\cdot \lambda^{1-\alpha-\eta}. \end{align*} So (\ref{it:probability of degree being large}) is proved. Now let us turn to (\ref{it:probability total size of Bn GW is k}).
Using, for example, \cite[Eq.~(26)]{kortchemski_sub_2017}, we have \begin{align}\label{eq:probability that a BGW has n vertices} \Pp{\abs{\tau}=n}\underset{n\rightarrow\infty}{\sim} \frac{d_\alpha(0)}{n B_\mu(n)} \quad \text{ and so } \quad \Pp{\abs{\tau}\geq n}\underset{n\rightarrow\infty}{\sim} \frac{\alpha d_\alpha(0)}{B_\mu(n)}. \end{align} This ensures that the random variable $\abs{\tau}$ is in the domain of attraction of an asymmetric $1/\alpha$-stable law. We can also check from the last display and the definition \eqref{eq:regularly varying function associated to X} that \begin{align}\label{eq:relation Babstau to Bmu} B_{\abs{\tau}}(n)\sim C\cdot B_\mu^{[-1]}(n), \end{align} for some constant $C$. Using \eqref{eq:uniform bound for the probability Sn=k} we then get \begin{align*} \Pp{\sum_{i=1}^{\lambda B_n}\abs{\tau_i}=k} \leq \frac{C}{B_\mu^{[-1]}(\lambda B_n)} \leq C\cdot \frac{ \lambda^{-\alpha-\eta}}{n}, \end{align*} where the last inequality follows from \eqref{eq:potter bounds applied to B-1(lambda B_n)}. This finishes the proof of (\ref{it:probability total size of Bn GW is k}). The last point (\ref{it:expectation inverse total size of GW}) follows from an application of Lemma~\ref{lem:expectation of inverse of sum of asymmetric heavy tailed rv}, stated below, to the distribution of $\abs{\tau}$, together with \eqref{eq:relation Babstau to Bmu} and an application of the Potter bounds to conclude. \end{proof} \begin{lemma}\label{lem:expectation of inverse of sum of asymmetric heavy tailed rv} Let $X$ be a random variable taking values in $\intervallefo{1}{\infty}$, whose law is in the domain of attraction of an asymmetric $\theta$-stable law, with $\theta\in\intervalleoo{0}{1}$, and $X_1, X_2,\dots$ i.i.d.\ random variables with the law of $X$. Then, there exists a constant $C$ such that for all $N\geq 1$, \begin{align*} \Ec{\frac{1}{\sum_{i=1}^{N}X_i}} \leq \frac{C}{B_X(N)}.
\end{align*} \end{lemma} \begin{proof} Write \begin{align*} \Ec{\frac{1}{\sum_{i=1}^{N}X_i}} = \frac{1}{B_X(N)} \cdot \Ec{\frac{B_X(N)}{\sum_{i=1}^{N}X_i}}\leq \frac{1}{B_X(N)} \cdot \Ec{\frac{B_X(N)}{\max_{1\leq i \leq N}X_i}}. \end{align*} Then \begin{align*} \Ec{\frac{B_X(N)}{\max_{1\leq i \leq N}X_i}} &= \int_{0}^{\infty}\Pp{\frac{B_X(N)}{\max_{1\leq i \leq N}X_i}>x} \mathrm{d} x=\int_{0}^{\infty}\Pp{\max_{1\leq i \leq N}X_i<x^{-1}\cdot B_X(N)} \mathrm{d} x. \end{align*} Since the values of $X_i$ are at least $1$, the integrand of the last display becomes $0$ when $x>B_X(N)$. But when $x\leq B_X(N)$ we have \begin{align*} \Pp{\max_{1\leq i \leq N}X_i<x^{-1}\cdot B_X(N)} &\leq \Pp{X<x^{-1}\cdot B_X(N)}^N\\ &\leq \left(1 - \Pp{X\geq x^{-1}B_X(N)}\right)^N\\ &\leq \exp\left(-N\cdot\Pp{X\geq x^{-1}B_X(N)}\right)\\ &\leq \exp\left(-N \cdot C\cdot \frac{x^{\theta/2}}{N}\right)\leq \exp\left(-C\cdot x^{\theta/2}\right), \end{align*} which is integrable over $\R_+$. In the last line, we use that \begin{equation*} \Pp{X\geq x^{-1}B_X(N)}\geq C \cdot \frac{1}{N}\cdot (x^{-1})^{-\frac{\theta}{2}} \end{equation*} for some constant $C$ that does not depend on $N$, thanks to the Potter bounds with $\epsilon=\theta/2$, which apply here because $x^{-1}\geq \frac{1}{B_X(N)}.$ \end{proof} \subsection{Spine decomposition for marked Bienaymé--Galton--Watson trees}\label{subsec:spine decomposition with marks} \paragraph{Definitions.} We first recall a few definitions from Section~\ref{subsec:plane trees}. We have $\UHT=\bigcup_{n\geq0} \N ^n$ the Ulam tree, and we denote by $\bT$ the set of plane trees. For any $\bt\in\bT$, and $v\in \bt $, we define $\theta_v(\bt)=\enstq{u\in\bU}{vu\in \bt}$. For $w\in \bU$, we let $w\bt = \enstq{wv}{v\in\bt}$. If $\bt\in\bT$ and $u\in \bt$, then we define $\out{\bt}(u)$ to be the number of children of $u$ in $\bt$. We drop the index if it is clear from the context.
If $v=v_1\dots v_n$ we say that $\abs{v}=n$ is the height of $v$, and for any $k\leq n$, we set $\left[v\right]_k=v_1\dots v_k$. If $v\neq\emptyset$ we also let $\hat{v}=\left[v\right]_{n-1}$ be the father of $v$. We define $\bT^*=\enstq{(\bt,v)}{\bt\in\bT, \ v\in \bt}$, the set of plane trees with a marked vertex. Let $\mu$ be a probability measure on the non-negative integers, such that $\sum_{k\geq 0}k\mu(k)=1$. The associated probability measure $\BGW_\mu$ on plane trees is the measure such that, for $T$ a random tree with this distribution and every finite $\bt\in\bT$: \begin{equation} \Pp{T=\bt}=\BGW_\mu(\{\bt\})=\prod_{u\in \bt}\mu(\out\bt(u)). \end{equation} \paragraph{Marks on a tree.} Let $E$ be any measurable space. A \emph{finite tree with marks} is an ordered pair $\tilde{\bt}=(\bt,(x_u)_{u\in \bt})$ where $\bt\in\bT$ and $(x_u)_{u\in\bt}$ are elements of $E$ (corresponding to marks of the vertices). We denote by $\widetilde{\bT}$ the space of marked trees, and similarly, we let $\widetilde{\bT}^*$ be the set of marked trees with a distinguished vertex. Let $(\pi_k)_{k\geq 0}$ be a sequence of probability measures on $E$. A natural way to put random marks on a tree $\bt$ is to mark it with a family $(X_u)_{u\in\bt}$ of independent random variables with respective distributions $X_u\sim \pi_{\out{\bt}(u)}$. \paragraph{Cutting a tree at a vertex.} If $\bt\in\bT$, and $v\in \bt $, we define: \begin{align}\label{eq:definition Cut} \Cut{\bt}{v}&:=\bt\setminus\enstq{vu}{u\in\bU,\ u\neq\emptyset} =\bt\setminus (v\theta_v \bt) \cup \{v\}. \end{align} \paragraph{The Kesten tree.} We let \begin{itemize} \item $\left(D_i\right)_{i\geq 1}$ be a sequence of i.i.d. random variables with distribution $\mu^*$, where $\mu^*(k)=k\mu(k)$, for any $k\geq 0$, \item $(K_i)_{i\geq 1}$, which, conditionally on the sequence $\left(D_i\right)$, are independent and uniform on $\intervalleentier{1}{D_i}$, \item $\mathbf{U}_n^*=K_1K_2\dots K_n$.
\item $\left(\tau_u\right)_{u\in\bU}$ be independent $\BGW$ trees with reproduction law $\mu$, independent of the previous variables, \item $\cH=\enstq{\mathbf{U}_n^* i}{n\geq 0,\ i\in \intervalleentier{1}{D_{n+1}}\setminus \{K_{n+1}\}}$. \end{itemize} Then the Kesten tree is defined as \begin{align*} \KT^*=\enstq{\mathbf{U}_n^*}{n\geq 0}\cup \bigcup_{u\in\cH}u\tau_u. \end{align*} \paragraph{Cut Kesten tree.} Let $\tau$ be a BGW tree with offspring distribution $\mu$, independent of everything else. We let: \begin{align*} \KT^*_k:=\Cut{\KT^*}{\mathbf{U}_k^*}\cup (\mathbf{U}_k^*\tau), \end{align*} which is almost surely a finite tree. As before, we endow the finite random tree $\KT^*_k$ with some marks $(X_u)_{u\in \KT^*_k}$ such that conditionally on $\KT^*_k$, the marks are independent with respective conditional distribution $X_u\sim \pi_{\out{}(u)}$. We denote by $\widetilde{\KT}^*_k$ the resulting marked tree. \paragraph{The ancestral line of a vertex.} We let \begin{align*} \bA:=\bigcup_{n\geq1}(\N_0\times E)^n. \end{align*} If $\tilde{\bt}=(\bt,(x_u)_{u\in\bt})\in\widetilde{\bT}$, and $v=v_1v_2\dots v_k\in \bt$ we define: \begin{align*} A(\tilde{\bt},v):=\left((\out{\bt}(\left[v\right]_0),x_{\left[v\right]_0}),(\out{\bt}(\left[v\right]_1),x_{\left[v\right]_1}),\dots,(\out{\bt}(\left[v\right]_{k}),x_{\left[v\right]_{k}})\right)\in \bA, \end{align*} the marked ancestral line of vertex $v$ in the marked tree $\tilde{\bt}$. \paragraph{BGW tree with a uniform vertex satisfying some property.} Let $\mathscr{P}\subset\bA$ be some property. Consider $\widetilde{T}=(T,(X_u)_{u\in T})$ a marked BGW tree with reproduction distribution $\mu$ and mark distributions $(\pi_k)_{k\geq 0}$. On the event $\enstq{\exists v \in T}{A(\widetilde{T},v)\in \mathscr{P}}$ that at least one vertex of $\widetilde{T}$ has the property $\mathscr{P}$ (which depends only on its ancestral line and the marks along it), we let $U$ be a vertex chosen uniformly at random among those for which this is the case.
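To make the construction above concrete, the cut tree $\KT^*_k$ can be sampled directly from its definition. The sketch below is ours, not part of the paper: all helper names are illustrative choices, and the critical binary law $\mu(0)=\mu(2)=\tfrac12$, for which $\mu^*$ is the Dirac mass at $2$ since $\mu^*(2)=2\mu(2)=1$, is used only as a convenient test case. A vertex of the Ulam tree is encoded as a tuple of positive integers and a tree as a prefix-closed set of such tuples.

```python
import random

def sample_bgw(rng, offspring):
    """A Bienaymé-Galton-Watson tree, encoded as a prefix-closed set of
    Ulam-tree vertices (tuples of positive integers; the root is ())."""
    tree, stack = {()}, [()]
    while stack:
        u = stack.pop()
        for i in range(1, offspring(rng) + 1):
            tree.add(u + (i,))
            stack.append(u + (i,))
    return tree

def sample_cut_kesten(rng, k, offspring, offspring_biased):
    """KT*_k: a spine (), U*_1, ..., U*_k whose first k vertices receive
    size-biased offspring numbers D_i; independent BGW trees are grafted
    on the off-spine children, and one BGW tree tau is grafted above U*_k."""
    tree, u, spine = {()}, (), [()]
    for _ in range(k):
        d = offspring_biased(rng)        # D_i, distributed as mu*
        kk = rng.randint(1, d)           # K_i, uniform on {1, ..., D_i}
        for i in range(1, d + 1):
            tree.add(u + (i,))
            if i != kk:                  # off-spine child: graft a BGW tree
                tree |= {u + (i,) + v for v in sample_bgw(rng, offspring)}
        u += (kk,)
        spine.append(u)
    tree |= {u + v for v in sample_bgw(rng, offspring)}  # the tree tau above U*_k
    return tree, spine

# Critical binary test case: mu(0) = mu(2) = 1/2, so mu*(2) = 2 mu(2) = 1.
rng = random.Random(2024)
tree, spine = sample_cut_kesten(rng, 5, lambda r: r.choice([0, 2]), lambda r: 2)
```

By construction the sample is almost surely a finite plane tree containing the length-$k$ spine, whose non-tip spine vertices all have the size-biased out-degree; for a general critical $\mu$ one would sample $\mu^*$ from the explicit weights $k\mu(k)$.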
\begin{proposition}\label{prop:spine decomposition with marks} For any non-negative function $F:\widetilde{\bT}^*\rightarrow\R$, we have \begin{align}\label{eq:spine decomposition with marks} \Ec{F(\widetilde{T},U)\ind{\exists v \in T,\ A(\widetilde{T},v)\in \mathscr{P}}}=\sum_{k\geq 0}\Ec{\frac{F(\widetilde{\KT}^*_k,\mathbf{U}_k^*)\ind{A(\widetilde{\KT}^*_k,\mathbf{U}_k^*)\in \mathscr{P}}}{\#\enstq{v\in\KT^*_k}{A(\widetilde{\KT}^*_k,v)\in \mathscr{P}}}}. \end{align} \end{proposition} \begin{proof} For any $\bt$ and $u\in \bt$ of height $k$, we have \begin{align}\label{eq:equality proba GW - biaised GW with vertex} &\Pp{(\KT^*_k,\mathbf{U}_k^*)=(\bt,u)}\notag\\ &=\prod_{1\leq i \leq k}\Pp{D_i=\out{\bt}(\left[u\right]_{i-1})}\cdot \Ppsq{K_i=u_i}{D_i=\out{\bt}(\left[u\right]_{i-1})} \cdot \prod_{v\in \bt, v\nprec u}\mu(\out{\bt}(v))\notag\\ &=\left(\prod_{1\leq i \leq k}\out{\bt}(\left[u\right]_{i-1})\mu(\out{\bt}(\left[u\right]_{i-1})) \frac{1}{\out{\bt}(\left[u\right]_{i-1})}\right) \cdot \prod_{v\in \bt, v\nprec u}\mu(\out{\bt}(v))\notag\\ &=\prod_{v\in \bt}\mu(\out{\bt}(v))\notag\\ &=\Pp{T=\bt}. \end{align} We now expand the expectation as a sum \begin{align*} \Ec{F(\widetilde{T},U)\ind{\exists v \in T,\ A(\widetilde{T},v)\in \mathscr{P}}}&=\sum_{\bt\in \bT, u\in \bt}\Ec{F(\widetilde{T},U)\ind{\exists v \in T,\ A(\widetilde{T},v)\in \mathscr{P}} \cdot \ind{T=\bt, U=u}}.
\end{align*} For every $(\bt,u)$ with $\abs{u}=k$, we have \begin{align*} &\Ec{F(\widetilde{T},U)\ind{\exists v \in T,\ A(\widetilde{T},v)\in \mathscr{P}} \cdot \ind{T=\bt, U=u}}\\ &= \Pp{T=\bt} \cdot \Ecsq{F(\widetilde{T},u) \cdot \ind{A(\widetilde{T},u)\in \mathscr{P}} \cdot \Ppsq{U=u}{\widetilde{T}}}{T=\bt}\\ &= \Pp{T=\bt} \cdot \Ecsq{\frac{F(\widetilde{T},u) \cdot \ind{A(\widetilde{T},u)\in \mathscr{P}}}{\#\enstq{v\in \bt}{A(\widetilde{T},v)\in \mathscr{P}}}}{T=\bt}\\ &=\Pp{(\KT^*_k,\mathbf{U}_k^*)=(\bt,u)}\cdot \Ecsq{\frac{F(\widetilde{\KT}^*_k,u) \cdot \ind{A(\widetilde{\KT}^*_k,u)\in \mathscr{P}}}{\#\enstq{v\in \bt}{A(\widetilde{\KT}^*_k,v)\in \mathscr{P}}}}{\KT^*_k=\bt}\\ &=\Pp{(\KT^*_k,\mathbf{U}_k^*)=(\bt,u)}\cdot \Ecsq{\frac{F(\widetilde{\KT}^*_k,u) \cdot \ind{A(\widetilde{\KT}^*_k,u)\in \mathscr{P}}}{\#\enstq{v\in \bt}{A(\widetilde{\KT}^*_k,v)\in \mathscr{P}}}}{\KT^*_k=\bt,\mathbf{U}_k^*=u}\\ &=\Ec{\frac{F(\widetilde{\KT}^*_k,\mathbf{U}_k^*) \cdot \ind{A(\widetilde{\KT}^*_k,\mathbf{U}_k^*)\in \mathscr{P}} \cdot \ind{(\KT^*_k,\mathbf{U}_k^*)=(\bt,u)}}{\#\enstq{v\in \bt}{A(\widetilde{\KT}^*_k,v)\in \mathscr{P}}}}. \end{align*} The third equality uses \eqref{eq:equality proba GW - biaised GW with vertex} and the fact that, conditionally on $\{T=\bt\}$ or on $\{\KT^*_k=\bt\}$, the marks on the tree have the same distribution. The fourth equality uses the fact that the quantity in the conditional expectation does not depend on $\mathbf{U}_k^*$. The result is then obtained by summing over all heights $k\geq 0$ and all finite trees $\bt$ with a distinguished vertex $u$ at height $k$. \end{proof} \begin{remark} For our purposes we will only use $E=\R_+$, but let us emphasize that this result still holds for other types of marks: the key point in the proof is that, conditionally on the tree, the law of the mark of a vertex only depends on its degree.
\end{remark} \subsection{The contribution of vertices of small degree is small}\label{subsec:small blobs do not contribute} The goal of this section is to prove Proposition~\ref{prop:small blobs don't contribute}. For that, we prove Proposition~\ref{prop:small marks don't contribute}, which directly implies the former. Let $\alpha\in\intervalleoo{1}{2}$ and $\gamma>\alpha-1$. Let $\mu$ be a critical reproduction distribution in the domain of attraction of an $\alpha$-stable law. We let $b_n=B_\mu(n)$ be the regularly varying function of index $\frac{1}{\alpha}$ associated with $\mu$ by \eqref{eq:regularly varying function associated to X}. For all $n\geq 1$ for which the conditioning is non-degenerate, we let $T_n$ be a BGW tree with reproduction law $\mu$, conditioned to have exactly $n$ vertices. This random tree is endowed with marks $(X_u)_{u\in T_n}$ such that conditionally on $T_n$, the marks are independent with distributions that only depend on the degree of the corresponding vertex, namely $X_u\sim \pi_{\out{T_n}(u)}$, where $(\pi_k)_{k\geq 0}$ is a sequence of probability measures on $\R_+$. We further require that for some real number $m> \frac{4\alpha}{ ( 2\gamma + 1 - \alpha)}$, we have \begin{align*} \sup_{k\geq 0} \Ec{\left(\frac{X^{(k)}}{k^\gamma}\right)^m}<\infty, \end{align*} where for every $k\geq 0$, the random variable $X^{(k)}$ has distribution $\pi_k$. We prove the following. \begin{proposition}\label{prop:small marks don't contribute} In the setting described above, for any $\epsilon>0$ we have \begin{align*} \Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\out{T_n}(u)\leq \delta b_n}\right\rbrace>\epsilon b_n^\gamma} \underset{\delta\rightarrow 0}{\rightarrow}0, \end{align*} uniformly in $n\geq 1$ such that $T_n$ is well-defined.
\end{proposition} Note that this proposition directly implies Proposition~\ref{prop:small blobs don't contribute} when applied to the mark distributions $(\pi_k)_{k\geq 0}$ that are respectively the laws of $(\diam(B_k))_{k\geq 0}$. In order to prove this result, we first prove an intermediate lemma. \begin{lemma}\label{lem:cutting slices} For any small enough $\eta>0$, $\epsilon>0$ and all $\delta\in \intervalleoo{0}{\epsilon^{\frac{1}{\eta}}}$, we have simultaneously for all $n\geq 1$, \begin{align*} \Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T_n}(u)\leq \delta b_n}\right\rbrace>\epsilon b_n^\gamma} \leq \delta^{\beta}, \end{align*} where $\beta=(\gamma +\frac{1-\alpha}{2}-5\eta)m-2\alpha-4\eta$, which is positive if $\eta$ is chosen small enough. \end{lemma} Let us show how the result we want follows from this lemma. \begin{proof}[Proof of Proposition~\ref{prop:small marks don't contribute}] Let us take $\eta>0$ small enough such that Lemma~\ref{lem:cutting slices} holds. For any $\epsilon>0$ small enough, and $0<\delta<\epsilon^{\frac{1}{\eta}}$ we have \begin{align*} \sup_{v\in T_n} \left(\sum_{u\preceq v} X_u \ind{\out{T_n}(u)\leq \delta b_n}\right) &\leq \sum_{i=0}^{\infty} \sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\delta 2^{-i-1}b_n<\out{T_n}(u)\leq \delta 2^{-i} b_n}\right\rbrace. \end{align*} Write $\frac{\epsilon}{1-2^{-\eta}}=\sum_{i=0}^{\infty} \epsilon \cdot 2^{-i\eta}$ and use a union bound and the result of Lemma~\ref{lem:cutting slices} for all pairs $(\epsilon',\delta')\in \{(2^{-i\eta}\epsilon,2^{-i}\delta),\ i\geq 0\}$, which, thanks to our assumption, still satisfy $\delta'<(\epsilon')^{\frac{1}{\eta}}$.
This yields \begin{align*} &\Pp{\sup_{v\in T_n} \left(\sum_{u\preceq v} X_u \ind{\out{T_n}(u)\leq \delta b_n}\right)>\frac{\epsilon}{1-2^{-\eta}}b_n^\gamma} \\ &\leq \sum_{i=0}^{\infty} \Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\delta 2^{-i-1}b_n<\out{T_n}(u)\leq \delta 2^{-i} b_n}\right\rbrace>2^{-i\eta}\cdot \epsilon \cdot b_n^\gamma}\\ &\leq \sum_{i=0}^{\infty} (2^{-i}\delta)^{\beta} \underset{\delta\rightarrow 0}{\longrightarrow} 0 \end{align*} which is what we wanted to prove. \end{proof} The rest of the section is then devoted to the proof of Lemma~\ref{lem:cutting slices}. \subsubsection{Proof of Lemma~\ref{lem:cutting slices}} We are going to use Proposition~\ref{prop:spine decomposition with marks} with a certain property $\tilde{P}(\epsilon,\delta,n)$, defined as the subset of $\bA$ such that, for any $\tilde\bt=(\bt,(x_u)_{u\in\bt})$ and $v\in\bt$, \begin{align*} A(\tilde\bt,v)\in \tilde P(\epsilon,\delta,n) \quad \iff \quad \sum_{u\preceq v} x_u \ind{\frac{\delta}{2}b_n<\out{\bt}(u)\leq \delta b_n}>\epsilon b_n^\gamma. \end{align*} Let $T_n\sim \BGW_\mu^n$. For a small $\eta>0$, we can write, using a union bound, \begin{align}\label{eq:exponential decay of height of conditioned BGW} &\Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T_n}(u)\leq \delta b_n}\right\rbrace>\epsilon b_n^\gamma}\notag \\ &\leq \Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T_n}(u)\leq \delta b_n}\right\rbrace>\epsilon b_n^\gamma \quad \text{and} \quad \haut(T_n)\leq \delta^{-\eta}\frac{n}{b_n}}+\Pp{\haut(T_n)>\delta^{-\eta}\frac{n}{b_n}}. \end{align} Thanks to \cite[Theorem~2]{kortchemski_sub_2017}, the second term is already smaller than some $C_1\exp(-C_2\delta^{-\eta})$, uniformly in $n$.
We then just have to take care of the first term, for which we write \begin{align}\label{eq:conditioned BGW to unconditioned} &\Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T_n}(u)\leq \delta b_n}\right\rbrace>\epsilon b_n^\gamma \quad \text{and} \quad \haut(T_n)\leq \delta^{-\eta}\frac{n}{b_n}}\notag\\ &=\frac{1}{\Pp{\abs{T}=n}}\cdot \Pp{\abs{T}=n \quad \text{and} \quad \exists v\in T, \quad \sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T}(u)\leq \delta b_n}>\epsilon b_n^\gamma \quad \text{and} \quad \haut(T)\leq \delta^{-\eta}\frac{n}{b_n}}, \end{align} where $T$ is an unconditioned BGW tree. We already know from \eqref{eq:probability that a BGW has n vertices} that \begin{align*} \Pp{\abs{T}=n}\underset{n \rightarrow\infty}{\sim} \frac{d_\alpha(0)}{nb_n}. \end{align*} Using the spine decomposition of Proposition~\ref{prop:spine decomposition with marks}, and since $\haut(v)\leq\haut(T)$ for every $v\in T$, we can upper-bound the second factor as follows \begin{align}\label{eq:spine decomposition applied to delta slice} &\Pp{\abs{T}=n \text{ and } \exists v\in T,\ \sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T}(u)\leq \delta b_n}> \epsilon b_n^\gamma \text{ and }\haut(T)\leq \delta^{-\eta}\frac{n}{b_n} } \notag \\ &\leq\Ec{\sum_{k=0}^{\delta^{-\eta}\frac{n}{b_n}} \frac{\ind{\abs{\KT_k^*}=n}\cdot \ind{A(\widetilde\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{A(\widetilde\KT_k^*,v)\in \tilde{P}(\epsilon,\delta,n)}}}. \end{align} Then we are going to upper-bound the last display by using the fact that, from the definition of $\tilde{P}(\epsilon,\delta,n)$, for any $v\in \KT_k^*$ such that $A(\widetilde\KT_k^*,v)\in \tilde{P}(\epsilon,\delta,n)$, any descendant $u$ of $v$ satisfies $A(\widetilde\KT_k^*,u)\in \tilde{P}(\epsilon,\delta,n)$.
We first introduce \begin{align*} N(\epsilon,\delta,n):=\inf \enstq{i\geq 0}{\sum_{u\preceq \mathbf{U}_i^*} X_u \ind{\frac{\delta}{2}b_n<\out{\KT_k^*}(u)\leq \delta b_n}> \epsilon b_n^\gamma}, \end{align*} which is the height of the first vertex on the spine whose marked ancestral line satisfies $\tilde{P}(\epsilon,\delta,n)$. This, in particular, entails that $\frac{\delta}{2}b_n < \out{\KT_k^*}(\mathbf{U}_{N(\epsilon,\delta,n)}^*) \leq \delta b_n$. For technical reasons that will be explained later, let us split the offspring of $u=\mathbf{U}_{N(\epsilon,\delta,n)}^*$ into two subsets whose sizes are of the same order: those written as $ui$ for $i\leq \frac{\delta}{4} b_n$ and those written as $ui$ for $i> \frac{\delta}{4} b_n$. We only consider the progeny of the former for now and use it to lower-bound the total number $\#\enstq{v\in \KT_k^*}{A(\widetilde\KT_k^*,v)\in \tilde{P}(\epsilon,\delta,n)}$. We then get \begin{multline}\label{eq:upper bound using only a quarter of the progeny of the tip} \Ec{\sum_{k=0}^{\delta^{-\eta}\frac{n}{b_n}} \frac{\ind{\abs{\KT_k^*}=n}\cdot \ind{A(\widetilde\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{A(\widetilde\KT_k^*,v)\in \tilde{P}(\epsilon,\delta,n)}}}\\ \leq \sum_{k=0}^{\delta^{-\eta}\frac{n}{b_n}} \Ec{\frac{\ind{\abs{\KT_k^*}=n}\cdot \ind{A(\widetilde\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}}. \end{multline} Now let us state a lemma and conclude from there. We prove the lemma at the end of the section.
\begin{lemma}\label{lem:bounding every term in the sum of delta slices} There exists a constant $C$ such that for all $n\geq 1$, all $k\leq \delta^{-\eta}\frac{n}{b_n}$ and all small enough $\delta<\epsilon^{\frac{1}{\eta}}$, the following holds: \begin{enumerate}[(i)] \item\label{it:proba that size is n conditionally on rest of tree} \begin{align*} \Ppsq{\abs{\KT_k^*}=n}{\frac{\ind{A(\widetilde\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}} \leq \frac{C\cdot \delta^{-\alpha-\eta}}{n}, \end{align*} \item\label{it:proba that Uk satisfies property} \begin{align*} \Pp{A(\widetilde{\KT}_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)} \leq \delta^{(\gamma +\frac{1-\alpha}{2}-5\eta)m}, \end{align*} \item \label{it:expectation of the inverse of number of vertices above UN} \begin{align*} \Ecsq{\frac{1}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}}{A(\widetilde\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta ,n)}\leq\frac{C \delta^{-\alpha-\eta}}{n}. \end{align*} \end{enumerate} \end{lemma} Using the above lemma to take care of every term in the sum appearing on the right-hand side of \eqref{eq:upper bound using only a quarter of the progeny of the tip}, we can sum over $k$ and get the following \begin{align*} \sum_{k=0}^{\delta^{-\eta}\frac{n}{b_n}}\Ec{ \frac{\ind{\abs{\KT_k^*}=n}\cdot \ind{A(\widetilde\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta ,n)}}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}}&\leq \delta^{-\eta}\frac{n}{b_n} \cdot C \frac{\delta^{-\alpha-\eta}}{n} \cdot C \frac{\delta^{-\alpha-\eta}}{n}\cdot \delta^{(\gamma +\frac{1-\alpha}{2}-5\eta)m}\\ &\leq C \cdot \frac{\delta^{(\gamma +\frac{1-\alpha}{2}-5\eta)m-2\alpha-3\eta}}{n b_n}.
\end{align*} Plugging the last display into \eqref{eq:conditioned BGW to unconditioned} via \eqref{eq:spine decomposition applied to delta slice}, and using \eqref{eq:exponential decay of height of conditioned BGW} and \eqref{eq:probability that a BGW has n vertices}, we then get that for small enough $\delta$ with $\delta<\epsilon^{\frac{1}{\eta}}$ we have \begin{align*} \Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T_n}(u)\leq \delta b_n}\right\rbrace>\epsilon b_n^\gamma} \leq \delta^{(\gamma +\frac{1-\alpha}{2}-5\eta)m-2\alpha-4\eta}, \end{align*} which is what we wanted to prove. \subsubsection{Proof of Lemma~\ref{lem:bounding every term in the sum of delta slices}} Let us successively prove the three points of the lemma. First, let us remark that if $\delta\leq \frac{8}{b_n}$, then (\ref{it:proba that size is n conditionally on rest of tree}) and (\ref{it:expectation of the inverse of number of vertices above UN}) are trivial. Indeed, in that case the quantities on the left-hand side are smaller than $1$. On the other hand, one can prove using the Potter bounds that, if we choose the constant large enough, the right-hand side of each inequality is always greater than $1$ for $\delta$ in that range. Hence when proving (\ref{it:proba that size is n conditionally on rest of tree}) and (\ref{it:expectation of the inverse of number of vertices above UN}), we can always assume that $\delta\geq \frac{8}{b_n}$ so that $\frac{\delta}{4}b_n\geq 2$. \subparagraph{Proof of (\ref{it:proba that size is n conditionally on rest of tree}).} Let us work on the event $\left\lbrace A(\widetilde\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta ,n) \right \rbrace$. We denote by $v_1,\dots, v_J$ the vertices of the form $\mathbf{U}_{N(\epsilon,\delta,n)}^*i$ for $i> \frac{\delta}{4}b_n$ that are not $\mathbf{U}_{N(\epsilon,\delta,n)+1}^*$. There is some number $J$ of them, with $1\leq \frac{\delta}{4} b_n-1 \leq J \leq \delta b_n$.
For this proof, we define $\Cut{\KT_k^*}{v_1,v_2,\dots v_J}$ similarly as in \eqref{eq:definition Cut}, except that this time we remove all the vertices that lie strictly above any of the vertices $v_1,v_2,\dots, v_J$. Now, note that the knowledge of the tree $\Cut{\KT_k^*}{v_1,v_2,\dots v_J}$ is enough to compute the quantity $\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i \text{ for some } i \leq \frac{\delta}{4}b_n}$, and that conditionally on $\Cut{\KT_k^*}{v_1,v_2,\dots v_J}$, the subtrees $\tau_1,\dots ,\tau_J$ respectively above $v_1,\dots, v_J$ are i.i.d.\ $\BGW_\mu$-distributed random trees. Then the total size of the whole tree $\KT_k^*$ is exactly $n$ if and only if the total volume of those trees $\tau_1,\dots ,\tau_J$ is exactly $n$ minus the number of vertices in the rest of $\KT_k^*$. This (conditional) probability is bounded above as follows \begin{align*} &\Ppsq{\abs{\KT_k^*}=n}{\frac{\ind{A(\widetilde\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}}\\ &= \Ecsq{\Ppsq{\abs{\KT^*_k}=n}{\Cut{\KT_k^*}{v_1,v_2,\dots v_J}}}{\frac{\ind{A(\widetilde\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}}\\ &= \Ecsq{\Ppsq{\sum_{j=1}^J \abs{\tau_j}=n+J-\abs{\Cut{\KT_k^*}{v_1,v_2,\dots v_J}}}{\Cut{\KT_k^*}{v_1,v_2,\dots v_J}}}{\frac{\ind{A(\widetilde\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}}\\ &\leq \frac{C\cdot \delta^{-\alpha-\eta}}{n}, \end{align*} using Lemma~\ref{lem:application of the potter bounds}(\ref{it:probability total size of Bn GW is k}) and the fact that by our assumptions we have $J\geq \frac{\delta}{4}b_n-1\geq 1$.
\subparagraph{Proof of (\ref{it:proba that Uk satisfies property}).} For any $k\geq 1$, let us write $(X_i)_{0\leq i\leq k}=(X_{\mathbf{U}_i^*})_{0\leq i\leq k}$ to simplify the notation. We can write, using that $\epsilon\geq \delta^{\eta}$, \begin{align}\label{eq:proba that Uk has the property is split in two term} &\Pp{A(\widetilde{\KT}_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}\notag\\ &\leq \Pp{\sum_{i=1}^{k}X_i \ind{\frac{\delta}{2}b_n< D_i\leq \delta b_n}>\epsilon b_n^\gamma}\notag\\ &\leq \Ppsq{\sum_{i=1}^{k}\frac{X_i}{(\delta b_n)^\gamma}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}>\delta^{-\gamma+\eta}}{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\leq \delta^{1-\alpha -3\eta}}+\Pp{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}> \delta^{1-\alpha -3\eta}}. \end{align} The second term of the last display is always smaller than what we would get if we take the maximum value $k=\delta^{-\eta}\frac{n}{b_n}$. Using Lemma~\ref{lem:application of the potter bounds}(\ref{it:probability of degree being large}) we have for all $n\geq 1$, \[\Pp{\frac{\delta}{2}b_n< D_i \leq \delta b_n}\leq C\cdot\delta ^{1-\alpha-\eta}\cdot \frac{b_n}{n}.\] Hence \begin{align*} \Pp{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}> \delta^{1-\alpha -3\eta}} &\leq \Pp{\mathrm{Bin}\left(\delta^{-\eta} \frac{n}{b_n},1\wedge (C\cdot \delta^{1-\alpha-\eta}\cdot \frac{b_n}{n}) \right)>\delta^{1-\alpha -3\eta}}, \end{align*} which decays exponentially in a negative power of $\delta$.
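This last decay can be made quantitative with a standard Chernoff (relative-entropy) bound for the binomial: the mean of the binomial above is of order $\delta^{1-\alpha-2\eta}$, which is much smaller than the threshold $\delta^{1-\alpha-3\eta}$. The following numerical sketch is ours, not the paper's; all parameter values in it ($\alpha=\tfrac32$, $\eta=0.1$, $n=10^6$, and constant $C=1$ in the degree-tail estimate) are purely illustrative choices.

```python
from math import exp, log

def binomial_tail_bound(N, p, a):
    """Chernoff bound P(Bin(N, p) >= a) <= exp(-N * KL(a/N || p)),
    valid when the threshold a exceeds the mean N * p."""
    q = a / N
    assert 0 < p < q < 1
    kl = q * log(q / p) + (1 - q) * log((1 - q) / (1 - p))
    return exp(-N * kl)

# Illustrative parameters (our own choices, not from the paper):
alpha, eta, n = 1.5, 0.1, 10**6
b_n = n ** (1 / alpha)

def slice_count_bound(delta):
    """Bound on the probability that more than delta**(1 - alpha - 3*eta)
    spine degrees fall in (delta/2 b_n, delta b_n], for the maximal spine
    length k = delta**(-eta) * n / b_n and C = 1 in the degree-tail bound."""
    N = delta ** (-eta) * n / b_n                         # number of spine variables
    p = min(0.999, delta ** (1 - alpha - eta) * b_n / n)  # per-index probability
    a = delta ** (1 - alpha - 3 * eta)                    # threshold on the count
    return binomial_tail_bound(N, p, a)
```

Evaluating `slice_count_bound` along a decreasing sequence of $\delta$ exhibits the claimed decay to $0$ as $\delta\to 0$.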
For the first term of \eqref{eq:proba that Uk has the property is split in two term}, we use the fact that conditionally on the event $\{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\leq \delta^{1-\alpha -3\eta}\}$, all the random variables $\frac{X_i}{(\delta b_n)^\gamma}$ appearing in the sum are independent with $m$-th moment bounded above uniformly by the same constant $C$, \begin{align*} \sup_{0\leq i\leq k:\ D_i\leq \delta b_n} \Ec{\left(\frac{X_i}{(\delta b_n)^\gamma}\right)^m}<C. \end{align*} Note that we have (for another constant $C$) \begin{align}\label{eq:upper-bound sum expectation Xi} \sum_{i=1}^{k}\frac{\Ec{X_i}}{(\delta b_n)^\gamma}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\leq C \cdot \sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}. \end{align} We use Markov's inequality and a result of Petrov \cite[Chapter~III. Result 5.16]{MR0388499}, which applies since $m\geq 2$. In the third line of the display below, we use that $\gamma>\alpha-1$, so that for $\delta$ small enough we have $\delta^{-\gamma+\eta}-C\cdot \delta^{1-\alpha -3\eta}\geq \delta^{-\gamma+2\eta}$.
\begin{align*} &\Ppsq{\sum_{i=1}^{k}\frac{X_i}{(\delta b_n)^\gamma}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}>\delta^{-\gamma+\eta}}{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\leq \delta^{1-\alpha -3\eta}}\\ &\underset{\eqref{eq:upper-bound sum expectation Xi}}{\leq} \Ppsq{\sum_{i=1}^{k}\frac{X_i-\Ec{X_i}}{(\delta b_n)^\gamma}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}>\delta^{-\gamma+\eta}-C\delta^{1-\alpha -3\eta}}{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\leq \delta^{1-\alpha -3\eta}}\\ &\underset{\text{Markov}}{\leq} \frac{\Ecsq{\left(\sum_{i=1}^{k}\frac{X_i-\Ec{X_i}}{(\delta b_n)^\gamma}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\right)^m}{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\leq \delta^{1-\alpha -3\eta}}}{\left(\delta^{-\gamma+2\eta}\right)^m}\\ &\underset{\text{Petrov}}{\leq} C\cdot \frac{(\delta^{1-\alpha -3\eta})^\frac{m}{2} }{\left(\delta^{-\gamma+2\eta}\right)^m}\\ &\leq \delta^{(\gamma +\frac{1-\alpha}{2}-4\eta)m}. \end{align*} Taking the sum of the two terms in \eqref{eq:proba that Uk has the property is split in two term} ensures that for $\epsilon$ small enough and $\delta<\epsilon^{1/\eta}$, \begin{align*} \Pp{A(\widetilde{\KT}_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)} \leq \delta^{(\gamma +\frac{1-\alpha}{2}-5\eta)m}. \end{align*} \subparagraph{Proof of (\ref{it:expectation of the inverse of number of vertices above UN}).} Now let us reason conditionally on the event $\{A(\widetilde{\KT}_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)\}$. On that event the quantity $\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i \text{ for } i\leq \frac{\delta}{4}b_n}$ is at least the sum of the total sizes of $\frac{\delta}{4}b_n-1$ independent $\BGW$ trees branching off the spine.
Hence \begin{align*} &\Ecsq{\frac{1}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i \text{ for } i\leq \frac{\delta}{4}b_n}}}{A(\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta ,n)}\\ &\leq \Ec{\frac{1}{\sum_{i=1}^{\frac{\delta}{4}b_n-1}\abs{\tau_i}}} \leq\frac{C \delta^{-\alpha-\eta}}{n}, \end{align*} where the $\tau_i$'s are i.i.d.\ with distribution $\BGW_\mu$. The last inequality is obtained using Lemma~\ref{lem:application of the potter bounds}(\ref{it:expectation inverse total size of GW}). \subsection{Proof of Lemma~\ref{lem:moment measure discrete block implies B2 or B3}} \label{subsec:proof of lemma about measure} Before going into the proof of Lemma~\ref{lem:moment measure discrete block implies B2 or B3}, let us recall some general arguments concerning the \L ukasiewicz path of BGW trees conditioned on their total size. For $T_n$ a BGW tree conditioned on having total size $n$, with vertices $v_1,\dots ,v_n$ listed in lexicographical (depth-first) order, the law of the vector $(\out{T_n}(v_1)-1,\dots,\out{T_n}(v_n)-1; \nu_{v_1}(B_{v_1}), \dots ,\nu_{v_n}(B_{v_n}))$ can be described as \begin{align*} \mathrm{Law}\left((X_1,\dots, X_n; Z_1, \dots ,Z_n) \ | \ \sum_{i=1}^nX_i=-1, \ \forall k\leq n-1, \sum_{i=1}^k X_i\geq 0\right), \end{align*} where the $(X_i,Z_i)_{1\leq i \leq n}$ are i.i.d. with the distribution of $(D-1, \nu_D(B_D))$ where $D$ follows the reproduction distribution. Now using the so-called Vervaat transform, the above law can also be expressed as \begin{align*} \mathrm{Law}\left((X_{U+1},\dots, X_{U+n}; Z_{U+1}, \dots ,Z_{U+n}) \ \Big| \ \sum_{i=1}^nX_i=-1\right), \end{align*} where $U$ is defined as $U:=\min\enstq{1\leq k \leq n}{\sum_{i=1}^k X_i = \min_{1\leq k \leq n} \sum_{i=1}^k X_i}$, and the indices in the last display are interpreted modulo $n$. In what follows, we will also use an argument of absolute continuity.
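The Vervaat transform just recalled admits a one-line implementation: rotate the increment sequence at the first global minimum of its partial sums. A hedged sketch (the function name is ours; the marks $Z_i$ would simply be rotated by the same index $U$):

```python
from itertools import accumulate

def vervaat(xs):
    """Vervaat transform: rotate the increments xs (which must sum to -1) at
    the first time U at which their partial sums attain the overall minimum.
    The rotated sequence is the increment sequence of a Lukasiewicz path: its
    partial sums stay nonnegative before the final step and end at -1."""
    assert sum(xs) == -1
    sums = list(accumulate(xs))
    u = sums.index(min(sums)) + 1   # the time U (first argmin, 1-based)
    return xs[u:] + xs[:u]          # indices read modulo n, as in the text

xs = [1, -1, -1, 2, -1, -1]   # partial sums 1, 0, -1, 1, 0, -1, so U = 3
ys = vervaat(xs)              # [2, -1, -1, 1, -1, -1]
```

On this example the rotated partial sums are $2,1,0,1,0,-1$: nonnegative until the last step, as required of a Łukasiewicz path.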
For any bounded function $F$ we can write \begin{align*} &\Ecsq{F((X_1,\dots, X_{\lfloor \frac{3n}{4}\rfloor }; Z_1, \dots ,Z_{\lfloor \frac{3n}{4}\rfloor }))}{\sum_{i=1}^n X_i=-1}\\ &= \Ec{F((X_1,\dots, X_{\lfloor \frac{3n}{4}\rfloor }; Z_1, \dots ,Z_{\lfloor \frac{3n}{4}\rfloor })) \frac{\Ppsq{\sum_{i=\lfloor \frac{3n}{4}\rfloor+1}^n X_i=-1 -\sum_{i=1}^{\lfloor \frac{3n}{4}\rfloor} X_i }{\sum_{i=1}^{\lfloor \frac{3n}{4}\rfloor} X_i}}{\Pp{\sum_{i=1}^n X_i=-1}}} \end{align*} Using the local limit theorem \cite[Theorem~4.2.1]{MR0322926} and the fact that the $\alpha$-stable density is bounded \cite[Eq.(I20)]{zolotarev_one_1986}, the term \begin{align*} \frac{\Ppsq{\sum_{i=\lfloor \frac{3n}{4}\rfloor+1}^n X_i=-1 -\sum_{i=1}^{\lfloor \frac{3n}{4}\rfloor} X_i }{\sum_{i=1}^{\lfloor \frac{3n}{4}\rfloor} X_i}}{\Pp{\sum_{i=1}^n X_i=-1}} \end{align*} appearing in the expectation is bounded uniformly in $n\geq 4$. Taking indicator functions for $F$, this ensures that any sequence of events whose probabilities tend to $0$ (resp.\ $1$) as $n\rightarrow \infty$ under the unconditioned model also has probabilities tending to $0$ (resp.\ $1$) under the conditioned model. Note that we can apply the same argument for the vector $(X_{\lfloor \frac{n}{4}\rfloor },\dots, X_{n}; Z_{\lfloor \frac{n}{4}\rfloor}, \dots ,Z_{n})$. \begin{proof}[Proof of Lemma~\ref{lem:moment measure discrete block implies B2 or B3}] We consider the case $\beta \leq \alpha$ and the case $\beta > \alpha$ in turn. $\bullet$ Case $\beta \leq \alpha$. In this case, we want to show that there exists a sequence $(a_n)_{n\geq 1}$ so that $\frac{1}{a_n}\cdot (M_{\lfloor nt \rfloor})_{0\leq t\leq 1} \underset{n \rightarrow \infty}{\rightarrow} (t)_{0\leq t \leq 1}$ in probability for the uniform topology. First, let us show that it is enough to check this if we replace $(M_k)_{1\leq k \leq n}$ with $(S_k)_{1\leq k \leq n}$, where $S_k:=\sum_{i=1}^kZ_i$.
Indeed, if \begin{align}\label{eq:Sk converges in probability} \frac{1}{a_n}\cdot (S_{\lfloor nt \rfloor})_{0\leq t\leq 1} \underset{n \rightarrow \infty}{\rightarrow} (t)_{0\leq t \leq 1} \end{align} in probability, then we have \begin{align*} \frac{1}{a_n}\cdot (S_{\lfloor nt \rfloor})_{0\leq t\leq \frac{3}{4}} \underset{n \rightarrow \infty}{\rightarrow} (t)_{0\leq t \leq \frac{3}{4}} \quad \text{and} \quad \frac{1}{a_n}\cdot (S_{\lfloor nt \rfloor} - S_{\lfloor \frac{n}{4} \rfloor})_{\frac14\leq t\leq 1} \underset{n \rightarrow \infty}{\rightarrow} (t-\frac14)_{\frac14\leq t \leq 1} \end{align*} also in probability. By the absolute continuity argument, both those convergences also hold conditionally on the event $\{\sum_{i=1}^nX_i=-1\}$, which ensures that $\frac{1}{a_n}\cdot (S_{\lfloor nt \rfloor})_{0\leq t\leq 1} \underset{n \rightarrow \infty}{\rightarrow} (t)_{0\leq t \leq 1}$ under $\Ppsq{\cdot }{\sum_{i=1}^{n}X_i=-1}$. This is then enough to get that $\frac{1}{a_n}\cdot (S_{U+\lfloor nt \rfloor})_{0\leq t\leq 1} \underset{n \rightarrow \infty}{\rightarrow} (t)_{0\leq t \leq 1}$ as well. Since $(M_{\lfloor nt \rfloor})_{0\leq t\leq 1}$ is distributed as $(S_{U+\lfloor nt \rfloor})_{0\leq t\leq 1}$ under $\Ppsq{\cdot }{\sum_{i=1}^{n}X_i=-1}$, we get $\frac{1}{a_n}\cdot (M_{\lfloor nt \rfloor})_{0\leq t\leq 1} \underset{n \rightarrow \infty}{\rightarrow} (t)_{0\leq t \leq 1}$ in probability as required. In the end, we just have to find a sequence $a_n$ such that \eqref{eq:Sk converges in probability} holds. In fact, by monotonicity, we just need to check that the convergence holds for $t=1$. If $\Ec{Z}=\Ec{\nu_D(B_D)}$ is finite then it is easy to check that we can take $a_n:= \Ec{\nu_D(B_D)} \cdot n$.
This is always the case if $\beta <\alpha$ since, using the assumption of the lemma \begin{align*} \Ec{\nu_D(B_D)}&= \sum_{k=0}^{\infty}\Pp{D=k}\Ec{\nu_k(B_k)}\\ &\underset{\text{assum.}}{\leq} C \cdot \sum_{k=0}^{\infty}\Pp{D=k} k^\beta \leq C\cdot \Ec{D^\beta} < \infty, \end{align*} so we can apply the law of large numbers. If $\Ec{Z}$ is infinite, which in our case can only happen if $\beta=\alpha$, we need to understand the tail behaviour of $Z$. We will show in that case that \begin{align}\label{eq:tail Z from tail D} \P(Z\geq x) \sim \Pp{D \geq x^{\frac{1}{\alpha}}} \cdot \Ec{\nu(\cB)} \qquad \text{as}~x\to \infty, \end{align} which in particular entails that the tail of $Z$ is regularly varying with index $-1$. Then general results for sums of i.i.d.\ random variables with regularly varying tails (e.g. \cite[Theorem~3.7.2]{MR2722836}) yield the result. First, for a given integer $N$ we introduce $Y_N^+$ and $Y_N^-$, whose distributions are defined in such a way that for all $x\geq 0$, \begin{align*} \Pp{Y_N^+\geq x}= \sup_{k\geq N} \Pp{\frac{\nu_k(B_k)}{k^\alpha} \geq x} \quad \text{ and } \quad \Pp{Y_N^-\geq x}= \inf_{k\geq N} \Pp{\frac{\nu_k(B_k)}{k^\alpha} \geq x}. \end{align*} Note that for both of them, we have the following bound thanks to our assumption and Markov's inequality: \begin{align*} \Pp{Y_N^-\geq x}\leq \Pp{Y_N^+\geq x} \leq \sup_{k\geq 1}\Pp{\frac{\nu_k(B_k)}{k^\alpha} \geq x} \leq \frac{\sup_{k\geq 1} \Ec{\left(\frac{\nu_k(B_k)}{k^\alpha}\right)^{1+\eta}}}{x^{1+\eta}}. \end{align*} Since the quantity in the last display is integrable in $x$, and since $\Pp{Y_N^\pm\geq x}\rightarrow \Pp{\nu(\cB)\geq x}$ as $N\rightarrow\infty$ thanks to assumption D\ref{c:GHPlimit}, we have $\Ec{Y_N^\pm}\rightarrow \Ec{\nu(\cB)}$ by dominated convergence. Also note that we have $\Ec{\left(Y_N^\pm\right)^{1+\eta/2}}<\infty$.
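The upcoming comparison of $\Pp{Y_N^+ D^\alpha\geq x}$ with $\Pp{D\geq x^{1/\alpha}}\cdot\Ec{Y_N^+}$ is a Breiman-type tail estimate. As a sanity check it can be tested numerically; in the sketch below, which is ours and purely illustrative (none of its distributional choices come from the paper), $D$ is exactly Pareto, so the asymptotic equivalence even holds with an exact constant for $x\geq 1$:

```python
import random

def tail_ratio(alpha, x, n_samples, seed=0):
    """Monte Carlo estimate of P(Y * D**alpha >= x) divided by the predicted
    tail P(D >= x**(1/alpha)) * E[Y], for D Pareto with P(D >= y) = y**(-alpha)
    (y >= 1) and Y uniform on [0, 1], independent of D."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        d = (1.0 - rng.random()) ** (-1.0 / alpha)  # Pareto(alpha) sample
        y = rng.random()                            # E[Y] = 1/2
        hits += y * d ** alpha >= x
    predicted = x ** (-1.0) * 0.5   # P(D >= x**(1/alpha)) * E[Y] = 1 / (2x)
    return (hits / n_samples) / predicted
```

For moderate $x$ and enough samples the ratio is close to $1$, matching the heuristic that the product $Y D^\alpha$ inherits the regularly varying tail of $D^\alpha$ up to the factor $\Ec{Y}$.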
In order to prove \eqref{eq:tail Z from tail D}, we then just need to show that for every $N$, we have \begin{align*} \P(Z\geq x) \geq \Pp{D \geq x^{\frac{1}{\alpha}}} \cdot \Ec{Y_N^-} (1+o(1)) \quad \text{ and } \quad \P(Z\geq x) \leq \Pp{D \geq x^{\frac{1}{\alpha}}} \cdot \Ec{Y_N^+} (1+o(1)). \end{align*} Let us fix $N$ and write \begin{align*} \Pp{Z\geq x}= \Pp{\nu_D(B_D) \geq x} &= \Pp{\nu_D(B_D)\geq x, \ D\geq N} + \Pp{\nu_D(B_D) \geq x, \ D<N}\\ &= \Pp{\nu_D(B_D)\geq x, \ D\geq N} + O(x^{-1-\eta}). \end{align*} Now with a random variable $Y_N^+$ that is independent of $D$, we can write \begin{align*} \Pp{\nu_D(B_D)\geq x, \ D\geq N}&= \Ec{\Ppsq{ \frac{\nu_D(B_D)}{D^\alpha}\geq \frac{x}{D^\alpha}}{D} \ind{D\geq N}}\\ &\leq \Ec{\Pp{Y_N^+\geq \frac{x}{D^\alpha}} \ind{D\geq N}}\\ &= \Pp{Y_N^+D^\alpha \geq x, \ D\geq N}\\ &=\Pp{Y_N^+ D^\alpha \geq x} - \Pp{Y_N^+ D^\alpha \geq x, \ D< N}\\ &=\Pp{Y_N^+ D^\alpha \geq x} - O(x^{-1-\eta}). \end{align*} Now, using \cite[Proposition~1.1]{kasahara_note_2018}, this last display is equivalent to $\Pp{D\geq x^{\frac{1}{\alpha}}} \cdot \Ec{Y_N^+}$ as $x\to\infty$. We also get the other inequality $\Pp{\nu_D(B_D) \geq x} \geq \Pp{D\geq x^{\frac{1}{\alpha}}} \cdot \Ec{Y_N^-}\cdot (1+o(1))$, which finishes the proof of \eqref{eq:tail Z from tail D}. $\bullet$ Case $\beta > \alpha$. Since the arguments used here are quite similar to what we did above, let us just sketch the proof. Let $\epsilon >0$ and consider \begin{align*} R(n,\delta):= b_n^{-\beta} \cdot \sum_{i=1}^{n} \nu_{v_i}(B_{v_i})\ind{\out{}(v_i)\leq \delta b_n}. \end{align*} Using the same type of arguments as above, we can first show that \begin{align*} b_n^{-\beta} \cdot \sum_{i=1}^{\lfloor\frac{3n}{4}\rfloor} Z_i \ind{X_i+1\leq \delta b_n} \qquad \text{ and } \qquad b_n^{-\beta} \cdot \sum_{i=\lfloor\frac{n}{4}\rfloor}^{n} Z_i \ind{X_i+1\leq \delta b_n} \end{align*} tend to $0$ in probability as $\delta\rightarrow 0$, under the unconditioned measure, uniformly in $n\geq 4$.
Then using the absolute continuity argument together with the equality in distribution we get that \begin{align} \limsup_{n\rightarrow \infty}\Pp{R(n,\delta)\geq \epsilon} \underset{\delta \rightarrow 0}{\rightarrow} 0, \end{align} This ensures that condition B\ref{cond:small decorations dont contribute to mass} is satisfied. \end{proof} \bibliographystyle{siam} \bibliography{dtree} \end{document} \documentclass[a4paper,11pt]{article} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage[T1]{fontenc} \usepackage{mathpazo} \usepackage{graphicx} \usepackage{comment} \usepackage{amsthm,amsmath,amsfonts,stmaryrd,mathrsfs,amssymb} \usepackage{mathtools} \usepackage{enumerate} \usepackage{csquotes} \usepackage{chemarrow} \usepackage{tikz} \usepackage[top=2.5cm, bottom=2.5cm, left=2cm, right=2cm]{geometry} \usepackage{subfig} \usepackage{multirow} \usepackage{array} \usepackage{bigints} \usepackage{etoolbox} \usepackage{mdframed} \usepackage{color} \usepackage{tabularx} \usepackage[unicode,colorlinks=true,linkcolor=blue,citecolor=red,pdfencoding=auto,psdextra]{hyperref} \linespread{1.1} \newcommand\Pitem{ \addtocounter{enumi}{-1} \renewcommand\theenumi{\roman{enumi}'} \item \renewcommand\theenumi{\roman{enumi}} } \makeatletter \newcommand{\mylabel}[2]{#2\def\@currentlabel{#2}\label{#1}} \makeatother \newcommand\marginal[1]{\marginpar{\raggedright\parindent=0pt\tiny #1}} \DeclareMathOperator{\haut}{ht} \DeclareMathOperator{\diam}{diam} \DeclareMathOperator{\Law}{Law} \DeclareMathOperator{\Leb}{Leb} \DeclareMathOperator{\dist}{d} \DeclareMathOperator{\Vol}{Vol} \DeclareMathOperator{\Ball}{B} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\rrt}{RRT} \DeclareMathOperator{\wrt}{WRT} \DeclareMathOperator{\pa}{PA} \DeclareMathOperator{\PD}{PD} \DeclareMathOperator{\GEM}{GEM} \DeclareMathOperator{\ML}{ML} \DeclareMathOperator{\MLMC}{MLMC} \DeclareMathOperator{\GW}{GW} \DeclareMathOperator{\BGW}{BGW} \newcommand{\ensemblenombre}[1]{\mathbb{#1}} 
\newcommand{\N}{\ensemblenombre{N}} \newcommand{\Z}{\ensemblenombre{Z}} \newcommand{\Q}{\ensemblenombre{Q}} \newcommand{\R}{\ensemblenombre{R}} \newcommand{\intervalle}[4]{\mathopen{#1}#2 \mathclose{}\mathpunct{},#3 \mathclose{#4}} \newcommand{\intervalleff}[2]{\intervalle{[}{#1}{#2}{]}} \newcommand{\intervalleof}[2]{\intervalle{(}{#1}{#2}{]}} \newcommand{\intervallefo}[2]{\intervalle{[}{#1}{#2}{)}} \newcommand{\intervalleoo}[2]{\intervalle{(}{#1}{#2}{)}} \newcommand{\intervalleentier}[2]{\intervalle\llbracket{#1}{#2} \rrbracket} \newcommand{\enstq}[2]{\left\lbrace#1\mathrel{}\middle|\mathrel{}#2\right\rbrace} \newcommand{\abs}[1]{\left\lvert#1\right\rvert} \newcommand{\Gam}[1]{\Gamma \left(#1\right)} \newcommand{\Cut}[2]{\mathrm{Cut}\left(#1,#2\right)} \newcommand{\partieentiere}[1]{\lfloor #1 \rfloor} \newcommand{\petito}[1]{o\mathopen{}\left(#1\right)} \newcommand{\petiton}{o_n\mathopen{}\left(1\right)} \newcommand{\petitoe}{o_\epsilon\mathopen{}\left(1\right)} \newcommand{\grandO}[1]{O\mathopen{}\left(#1\right)} \newcommand{\tend}[2]{\underset{#1}{\overset{#2}{\longrightarrow}}} \newcommand{\bb}{\mathbf{b}} \newcommand{\brho}{\boldsymbol\rho} \newcommand{\bd}{\mathbf{d}} \newcommand{\bnu}{\boldsymbol\nu} \newcommand{\bB}{\mathsf{B}} \newcommand{\bRho}{\rho} \newcommand{\bD}{\mathsf{D}} \newcommand{\bNu}{\nu} \newcommand{\bH}{\mathsf{H}} \newcommand{\bP}{\mathsf{P}} \newcommand{\bM}{\mathsf{M}} \newcommand{\bl}{\mathbf{l}} \newcommand{\bi}{\mathbf{i}} \newcommand{\bm}{\mathbf{m}} \renewcommand{\P}{\mathbb{P}} \newcommand{\E}{\mathbb{E}} \newcommand{\D}{\mathbb{D}} \newcommand{\M}{\mathbb{M}} \newcommand{\K}{\mathbb{K}} \newcommand{\bU}{\mathbb{U}} \newcommand{\bT}{\mathbb{T}} \newcommand{\bA}{\mathbb{A}} \newcommand{\bt}{\mathbf{t}} \newcommand{\sH}{\mathscr{H}} \newcommand{\sL}{\mathscr{L}} \newcommand{\sK}{\mathscr{K}} \newcommand{\sG}{\mathscr{G}} \newcommand{\Ec}[1]{\mathbb{E} \left[#1\right]} \newcommand{\Ep}[1]{\mathbb{E} \left(#1\right)} 
\newcommand{\Pc}[1]{\mathbb{P} \left[#1\right]} \newcommand{\Pp}[1]{\mathbb{P} \left(#1\right)} \newcommand{\Ppp}[2]{\mathbb{P}_{#1} \left(#2\right)} \newcommand{\Pppsq}[3]{\mathbb{P}_{#1} \left(#2\mathrel{}\middle|\mathrel{}#3\right)} \newcommand{\Ecsq}[2]{\mathbb{E} \left[#1\mathrel{}\middle|\mathrel{}#2\right]} \newcommand{\Epsq}[2]{\mathbb{E} \left(#1\mathrel{}\middle|\mathrel{}#2\right)} \newcommand{\Pcsq}[2]{\mathbb{P} \left[#1\mathrel{}\middle|\mathrel{}#2\right]} \newcommand{\Ppsq}[2]{\mathbb{P} \left(#1\mathrel{}\middle|\mathrel{}#2\right)} \newcommand{\Ecp}[2]{\mathbb{E}_{#1} \left[#2\right]} \newcommand{\Var}[1]{\mathbb{V}\left(#1\right)} \newcommand{\ndN}{\mathbb{N}} \newcommand{\ndZ}{\mathbb{Z}} \newcommand{\ndQ}{\mathbb{Q}} \newcommand{\ndR}{\mathbb{R}} \newcommand{\ndC}{\mathbb{C}} \newcommand{\ndE}{\mathbb{E}} \newcommand{\ndK}{\mathbb{K}} \newcommand{\ndA}{\mathbb{A}} \renewcommand{\Pr}[1]{\mathbb{P}(#1)} \newcommand{\Prb}[1]{\mathbb{P}\left( #1 \right)} \newcommand{\Ex}[1]{\mathbb{E}[#1]} \newcommand{\Exb}[1]{\mathbb{E}\left[ #1 \right]} \newcommand{\Va}[1]{\mathbb{V}[#1]} \newcommand{\one}{\mathbbm{1}} \newcommand{\w}{w} \newcommand{\sca}{\kappa} \newcommand{\om}{\mathbf{w}} \newcommand{\rd}{\mathsf{rd}} \newcommand{\cO}{\mathcal{O}} \newcommand{\cC}{\mathcal{C}} \newcommand{\cL}{\mathcal{L}} \newcommand{\cM}{\mathcal{M}} \newcommand{\cF}{\mathcal{F}} \newcommand{\cB}{\mathcal{B}} \newcommand{\cJ}{\mathcal{J}} \newcommand{\cN}{\mathcal{N}} \newcommand{\cK}{\mathcal{K}} \newcommand{\cS}{\mathcal{S}} \newcommand{\cG}{\mathcal{G}} \newcommand{\cT}{\mathcal{T}} \newcommand{\cR}{\mathcal{R}} \newcommand{\cA}{\mathcal{A}} \newcommand{\cH}{\mathcal{H}} \newcommand{\cD}{\mathcal{D}} \newcommand{\cX}{\mathcal{X}} \newcommand{\cP}{\mathcal{P}} \newcommand{\cY}{\mathcal{Y}} \newcommand{\cU}{\mathcal{U}} \newcommand{\cQ}{\mathcal{Q}} \newcommand{\cW}{\mathcal{W}} \newcommand{\fmT}{\mathfrak{T}} \newcommand{\fmS}{\mathfrak{S}} \newcommand{\fmA}{\mathfrak{A}} 
\newcommand{\mA}{\mathsf{A}} \newcommand{\mC}{\mathsf{C}} \newcommand{\mD}{\mathsf{D}} \newcommand{\mB}{\mathsf{B}} \newcommand{\mF}{\mathsf{F}} \newcommand{\mM}{\mathsf{M}} \newcommand{\mO}{\mathsf{O}} \newcommand{\mK}{\mathsf{K}} \newcommand{\mG}{\mathsf{G}} \newcommand{\mH}{\mathsf{H}} \newcommand{\mT}{\mathsf{T}} \newcommand{\mS}{\mathsf{S}} \newcommand{\mR}{\mathsf{R}} \newcommand{\mP}{\mathsf{P}} \newcommand{\mQ}{\mathsf{Q}} \newcommand{\mV}{\mathsf{V}} \newcommand{\mU}{\mathsf{U}} \newcommand{\mX}{\mathsf{X}} \newcommand{\mY}{\mathsf{Y}} \newcommand{\mZ}{\mathsf{Z}} \newcommand{\mb}{\mathsf{b}} \newcommand{\me}{\mathsf{e}} \newcommand{\CRT}{\mathcal{T}_\me} \newcommand{\tF}{\tilde{\cF}} \newcommand{\tG}{\tilde{\cG}} \newcommand{\Aut}{\text{Aut}} \newcommand{\Sym}{\text{Sym}} \newcommand{\mGWT}{\hat{\mathcal{T}}} \newcommand{\cE}{\mathcal{E}} \newcommand{\cBa}{\overrightarrow{\mathcal{B}}'} \newcommand{\cBra}{\overrightarrow{\mathcal{B}}'^\bullet} \newcommand{\plh}{ {\ooalign{$\phantom{0}$\cr\hidewidth$\scriptstyle\times$\cr}}} \newcommand{\PLH}{{\mkern-2mu\times\mkern-2mu}} \newcommand{\hei}{\text{h}} \newcommand{\He}{\textnormal{H}} \newcommand{\Di}{\textnormal{D}} \newcommand{\emb}{\textnormal{emb}} \newcommand{\he}{\text{h}} \newcommand{\shp}{\mathsf{d}} \newcommand{\rhl}{\mathsf{rl}} \newcommand{\lhb}{\mathsf{lb}} \newcommand{\Forb}{\mathsf{Forb}} \newcommand{\spa}{\mathsf{span}} \newcommand{\KT}{\mathbf{T}} \newcommand{\UHT}{\mathbb{U}} \newcommand{\VHT}{\mathcal{V}_{\infty}} \newcommand{\cCb}{\mathcal{C}^\bullet} \newcommand{\cTb}{\mathcal{T}^\bullet} \newcommand{\cTbb}{\mathcal{T}^{\bullet \bullet}} \newcommand{\cTbbl}{\mathcal{T}^{\bullet \bullet (\ell)}} \newcommand{\eqdist}{\,{\buildrel d \over =}\,} \newcommand{\hxi}{\hat{\xi}} \newcommand{\convdis}{\,{\buildrel d \over \longrightarrow}\,} \newcommand{\convp}{\,{\buildrel p \over \longrightarrow}\,} \newcommand{\disto}{\mathsf{dis}} \newcommand{\Cyc}{\textsc{CYC}} \newcommand{\Set}{\textsc{SET}} 
\newcommand{\Seq}{\textsc{SEQ}} \newcommand{\cZ}{\mathcal{Z}} \newcommand{\GTB}{\Gamma T^\bullet} \newcommand{\GTBB}{\Gamma T^{\bullet \bullet}} \newcommand{\GTBBL}{\Gamma T^{\bullet \bullet (\ell)}} \newtheorem{firsttheorem}{Proposition} \newtheorem{theorem}{Theorem}[section] \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{question}[theorem]{Question} \numberwithin{equation}{section} \newcommand{\keyword}[1]{\textbf{#1}~} \newcommand{\IF}{\keyword{if}} \newcommand{\ELSEIF}{\keyword{ELSEIF} } \newcommand{\THEN}{\keyword{then} } \newcommand{\ELSE}{\keyword{else}} \newcommand{\ENDIF}{\keyword{endif} } \newcommand{\RETURN}{\keyword{return}} \newcommand{\WHILE}{\keyword{while} } \newcommand{\ENDWHILE}{\keyword{end while} } \newcommand{\FOR}{\keyword{for}} \newcommand{\FOREACH}{\keyword{for each}} \newcommand{\EACH}{\keyword{each}} \newcommand{\ENDFOR}{\keyword{endfor} } \newcommand{\SPENDFOR}{\textbf{end for}~ } \newcommand{\ENDFORALL}{\keyword{ENDFORALL} } \newcommand{\DOWNTO}{\keyword{DOWNTO} } \newcommand{\DO}{\keyword{DO} } \newcommand{\AND}{\keyword{and}} \newcommand{\OR}{\keyword{OR} } \newcommand{\true}{\keyword{true}} \newcommand{\false}{\keyword{false} } \newcommand{\OPT}{\rm{OPT}} \newcommand{\REPEAT}{\keyword{repeat}} \newcommand{\UNTIL}{\keyword{until}} \newcommand{\TO}{\keyword{to}} \newcommand{\Po}[1]{{{\mathsf{Po}}\left(#1\right)}} \newcommand{\Pois}[1]{{{\mathsf{Pois}}\left(#1\right)}} \newcommand{\Bern}[1]{{{\mathsf{Bern}}\left(#1\right)}} \newcommand{\Geom}[1]{{{\mathsf{Geom}}\left(#1\right)}} \newcommand{\cnu}{\mathsf{0}} \newcommand{\con}{\mathsf{1}} \newcommand{\dres}{\mathbf{d}} \newcommand{\degp}{\mathrm{d}^+} \newcommand{\dis}{\operatorname{dis}} 
\newcommand{\notedelphin}[1]{{\textcolor{orange}{#1}}} \newcommand{\toremove}[1]{{\textcolor{gray}{#1}}} \newcommand{\ben}[1]{{\textcolor{blue}{B: #1}}} \newcommand{\notesiggi}[1]{{\textcolor{red}{Siggi: #1}}} \newcommand{\Bb}{\mathbb{B}} \newcommand{\ind}[1]{\mathbf{1}_{\left\lbrace #1 \right\rbrace}} \newcommand{\out}[1]{d_{#1}^+} \newcommand{\dec}{\text{dec}} \title{\textbf{Decorated stable trees}} \date{} \author{Delphin S\'{e}nizergues\thanks{University of British Columbia, E-mail: [email protected]} \and Sigurdur \"Orn Stef\'ansson\thanks{University of Iceland, E-mail: [email protected]} \and Benedikt Stufler\thanks{Vienna University of Technology, E-mail: [email protected]}} \begin{document} \maketitle \let\thefootnote\relax\footnotetext{ \\\emph{MSC2010 subject classifications}. 60F17, 60C05 \\ \emph{Keywords: decorated trees, looptrees, stable trees, invariance principle, self-similarity.} } \vspace {-0.5cm} \begin{abstract} We define \emph{decorated $\alpha$-stable trees} which are informally obtained from an $\alpha$-stable tree by blowing up its branchpoints into random metric spaces. This generalizes the $\alpha$-stable looptrees of Curien and Kortchemski, where those metric spaces are just deterministic circles. We provide different constructions for these objects, which allows us to understand some of their geometric properties, including compactness, Hausdorff dimension and self-similarity in distribution. We prove an invariance principle which states that under some conditions, analogous discrete objects, random decorated discrete trees, converge in the scaling limit to decorated $\alpha$-stable trees. We mention a few examples where those objects appear in the context of random trees and planar maps, and we expect them to naturally arise in many more cases. 
\end{abstract} \begin{figure}[h] \centering \begin{minipage}{\textwidth} \centering \includegraphics[width=1.0\linewidth]{1dot25_poisson_500k_joined.jpg} \caption{ The right side illustrates a decorated $\alpha$-stable tree for $\alpha=5/4$. The decorations are Brownian trees (2-stable trees) rescaled according to the ``width'' of branchpoints. The left side shows the corresponding $\alpha$-stable tree before blowing up its vertices. Colours mark vertices with high width and their corresponding decorations.} \label{fi:exiterated} \end{minipage} \end{figure} \section{Introduction} In this paper we develop a general method for constructing random metric spaces by gluing together metric spaces referred to as \emph{decorations} along a tree structure which is referred to as \emph{the underlying tree}. The resulting object, which we define informally below and properly in Section~\ref{s:disc_construction} and Section~\ref{s:cts_construction}, will be called a \emph{decorated tree}. We will consider two frameworks, which we refer to as the \emph{discrete case} and the \emph{continuous case}. In the discrete case the underlying tree is a finite, rooted plane tree $T$. To each vertex $v$ in $T$ with outdegree $d_T^+(v)$ there is associated a decoration $B(v)$ which is a metric space with one distinguished point called the inner root and $d_T^+(v)$ distinguished points called the outer roots. In applications, the decorations $B(v)$ may e.g.~be finite graphs and they will only depend on the outdegree of $v$. We will thus sometimes write $B(v) = B_{d^+_T(v)}$. The decorated tree is constructed by identifying, for each vertex $v$, the $i$-th outer root of $B(v)$ with the inner root of $B(vi)$, $i=1,\ldots,d^+_T(v)$, where $vi$ are the children of $v$. This identification will either be done at random or according to some prescribed labelling. Our aim is to describe the asymptotic behaviour of \emph{random} spaces that are constructed in this way.
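The identification step just described can be made concrete in code. The following is an illustrative sketch of ours, not an implementation from the paper: decorations are toy finite graphs given as local edge lists with an inner root and plane-ordered outer roots, and a union--find structure performs the identifications. All names (`DSU`, `glue`) and the example shapes are our own choices.

```python
# Hypothetical sketch of the gluing along a plane tree: each vertex v carries
# a decoration B(v) with an inner root and out-degree-many outer roots, and
# the i-th outer root of B(v) is identified with the inner root of B(vi).

class DSU:
    """Union-find over (tree-vertex, local-node) pairs."""
    def __init__(self):
        self.p = {}
    def find(self, x):
        self.p.setdefault(x, x)
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]   # path halving
            x = self.p[x]
        return x
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def glue(tree, decorations):
    """tree: dict vertex -> list of children (in plane order).
    decorations: dict vertex -> (edges, inner_root, outer_roots), with
    decoration nodes named locally.  Returns the edge set of the glued space."""
    dsu = DSU()
    for v, children in tree.items():
        _, _, outer = decorations[v]
        for i, child in enumerate(children):
            # identify i-th outer root of B(v) with inner root of B(child)
            dsu.union((v, outer[i]), (child, decorations[child][1]))
    glued = set()
    for v, (edges, _, _) in decorations.items():
        for a, b in edges:
            x, y = dsu.find((v, a)), dsu.find((v, b))
            glued.add((min(x, y), max(x, y)))
    return glued

# Toy example: a root with two children, decorated by a 3-cycle whose inner
# root is node 0 and whose outer roots are nodes 1 and 2; the two leaves are
# decorated by a single point (out-degree 0, so no outer roots).
tree = {"r": ["a", "b"], "a": [], "b": []}
dec = {"r": ([(0, 1), (1, 2), (2, 0)], 0, [1, 2]),
       "a": ([], 0, []), "b": ([], 0, [])}
print(len(glue(tree, dec)))   # -> 3: the cycle's edges survive identification
```

In this toy case the gluing only relabels the cycle's endpoints, mirroring how a looptree is obtained when every decoration is a cycle.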
To this end, let $(T_n)_{n \ge 1}$ be a sequence of random rooted plane trees. The sequence is chosen in such a way that the exploration process of the random trees, properly rescaled, converges in distribution towards an excursion of an $\alpha$-stable Lévy process with $\alpha \in (1,2)$. This particular choice is motivated by applications as will be explained below. We will further assume that there is a parameter $0 < \gamma \leq 1$, which we call the \emph{distance exponent}, such that the (possibly random) decorations $B_k$, with the metric rescaled by $k^{-\gamma}$, converge in distribution towards a compact metric space $\mathcal{B}$ as $k\to \infty$, in the Gromov--Hausdorff sense. In the continuous case, we aim to construct a decorated tree which is the scaling limit of the sequence of discrete decorated trees described above. The underlying tree is chosen as the $\alpha$-stable tree $\mathcal{T}_\alpha$, with $\alpha \in (1,2)$, which is a random compact real tree. One important feature of the $\alpha$-stable tree is that it has countably many vertices of infinite degree. The size of such a vertex is measured by a number called its \emph{width}, which plays the same role as the degree in the discrete setting. The continuous decorated tree is constructed in a way similar to the discrete case, where now each vertex $v$ of infinite degree in $\mathcal{T}_\alpha$ is independently replaced by a decoration $\mathcal{B}_v$ distributed as $\mathcal{B}$, in which distances are rescaled by the width of $v$ raised to the distance exponent $\gamma$. The gluing points are specified by singling out an inner root and sampling outer roots according to a measure $\mu$, and the decorations are then glued together along the tree structure. See Figure~\ref{fi:exiterated} for an example of a decorated tree. The precise construction is somewhat technical and is described in more detail in Section~\ref{ss:realtrees}.
In order for the construction to yield a compact metric space we require some control on the size of the decorations $\mathcal{B}_v$. More precisely, we will assume that $\gamma > \alpha-1$ and that there exists $p>\frac{\alpha}{\gamma}$ such that $\Ec{\diam(\mathcal{B})^p}<\infty$. Under some further suitable conditions on the decorations and the underlying tree which are stated in Section~\ref{ss:conditions} we arrive at the following main results: \begin {itemize} \item \emph{Construction of the decorated $\alpha$-stable tree:} The decorated $\alpha$-stable tree defined above is a well defined random compact metric space, see Section~\ref{s:cts_construction} and Theorem~\ref{thm:compact_hausdorff}. \item \emph{Invariance principle:} The sequence of discrete random decorated trees, with a properly rescaled metric, converges in distribution towards a decorated $\alpha$-stable tree in the Gromov--Hausdorff--Prokhorov sense. See Theorem~\ref{th:invariance} for the precise statement. \item \emph{Hausdorff dimension:} Apart from degenerate cases, the Hausdorff dimension of \emph {the leaves} of the decorated $\alpha$-stable tree equals $\alpha/\gamma$, see Theorem~\ref{thm:compact_hausdorff}. It follows that if the decorations a.s.~have a Hausdorff dimension $d_\mathcal{B}$ then the Hausdorff dimension of the decorated $\alpha$-stable tree equals $\max\{d_\mathcal{B},\alpha/\gamma\}$. \end{itemize} \begin{remark} A heuristic argument for the condition $\alpha -1 < \gamma$ is the following. If $(T_n)$ is in the domain of attraction of $\cT_\alpha$ then the maximum degree of a vertex in $T_n$ is of the order $n^{1/\alpha}$ and the height of $T_n$ is of the order $n^{\frac{\alpha-1}{\alpha}}$ (up to slowly varying functions). A decoration on a large vertex will then have a diameter of the order $n^{\gamma/\alpha}$. 
If $\alpha-1 < \gamma$, the diameter of the decorations will thus dominate the height of $T_n$ and the decorated tree will have height of the order $n^{\gamma/\alpha}$, coming only from the decorations. Furthermore, the decorations will ``survive'' in the limit since the distances of the decorated tree will be rescaled according to the inverse of its height, which is of the order of the diameter of the decorations. In the case $\alpha -1 > \gamma$, the decorations will have a diameter which is of smaller order than the height of $T_n$. This means that the proper rescaling of the distances in the decorated tree should be the inverse of the height of $T_n$, i.e.~$n^{-\frac{\alpha-1}{\alpha}}$, which would contract the decorations to points. We therefore expect that in this case, the scaling limit of the decorated tree is equal to a constant multiple of $\cT_\alpha$, which has Hausdorff dimension $\alpha/(\alpha-1)$. We do not go through the details in the current work. We do not know what happens exactly at the value $\alpha-1 = \gamma$ at which there is a phase transition in the geometry and the Hausdorff dimension of the scaling limit. We leave this as an interesting open case. \end{remark} \subsection{Relations to other work} The only existing explicit example of decorated $\alpha$-stable trees is the family of $\alpha$-stable looptrees, which were introduced by Curien and Kortchemski \cite{MR3286462}. The looptrees are constructed by decorating the $\alpha$-stable trees by deterministic circles. The authors further showed, in a companion paper \cite{MR3405619}, that the looptrees appear as boundaries of percolation clusters on random planar maps and there are indications that they play a more general role in statistical mechanical models on random planar maps \cite{Richier:2017}.
Looptrees have also been shown to arise as limits of various models of discrete decorated trees such as discrete looptrees, dissections of polygons \cite{MR3286462} and outerplanar maps \cite{zbMATH07138334}. In these cases the discrete decorations $B_k$ are graph cycles, chord restricted dissections of a polygon and dissections of a polygon, respectively. The idea of decorating $\alpha$-stable trees by more general decorations was originally proposed by Kortchemski and Marzouk in \cite{kortchemski2016}: \begin{displayquote} ``Note that if one starts with a stable tree and explodes each branchpoint by simply gluing inside a “loop”, one gets the so-called stable looptrees .... More generally, one can imagine exploding branchpoints in stable trees and glue inside any compact metric space equipped with a homeomorphism with [0, 1].'' \end{displayquote} They explicitly mention an example where an $\alpha_1$-stable tree would be decorated by an $\alpha_2$-stable tree and so on, and ask what the Hausdorff dimension of that object would be. We give a partial answer to their question in Section~\ref{s:markedtrees}. See Figure \ref{fi:exiterated} for a simulation where $\alpha_1 = 5/4$ and $\alpha_2 = 2$. Archer studied a simple random walk on decorated Bienaymé--Galton--Watson trees in the domain of attraction of the $\alpha$-stable tree \cite{arXiv:2011.07266}. Although her results do not explicitly involve constructing a Gromov--Hausdorff limit of the decorated trees, they do require an understanding of their global metric properties. We refer further to her work for examples of decorated trees, some of which we do not discuss in the current work. S\'{e}nizergues has developed a general method for gluing together sequences of random metric spaces \cite{senizergues_gluing_2019} and introduced a framework where metric spaces are glued together along a tree structure \cite{senizergues_growing_2022}.
Section~\ref{sec:self-similarity} relies heavily on the results of the latter. Let us also mention that Stufler has studied local limits and Gromov--Hausdorff limits of decorated trees which are in the domain of attraction of the Brownian continuum random tree (equivalently the 2-stable tree) \cite{StEJC2018, zbMATH07235577}. The Brownian tree has no vertices of infinite degree and thus the rescaled decorations contract to points in the limit which only changes the metric of the decorated tree by a multiplicative factor. Its Gromov--Hausdorff limit then generically turns out to be a multiple of the Brownian tree itself (see \cite[Theorem~6.60]{zbMATH07235577}). \subsection{Outline} The paper is organized as follows. In Section~\ref{s:preliminaries} we recall some important definitions. In Section~\ref{s:disc_construction} the construction of discrete decorated trees is provided and the required conditions on the underlying tree and decorations are stated. In Section~\ref{s:cts_construction} we give the definition of the continuous decorated trees and in Section~\ref{s:invariance} we state and prove the invariance principle, namely that the properly rescaled discrete decorated trees converge towards the continuous decorated trees in the Gromov--Hausdorff--Prokhorov sense. An alternative construction of the continuous decorated tree is given in Section~\ref{sec:self-similarity} which allows us to prove that it is a compact metric space and gives a way to calculate its Hausdorff dimension. Section~\ref{s:applications} provides several applications of our results. Finally, the Appendix is devoted to proving that the conditions on the discrete trees stated in Section~\ref{s:disc_construction} are satisfied for natural models of random trees, in particular Bienaymé--Galton--Watson trees which are in the domain of attraction of $\alpha$-stable trees, with $\alpha \in (1,2)$. \subsection*{Notation} We let $\mathbb{N} = \{1, 2, \ldots\}$ denote the set of positive integers. 
We usually assume that all considered random variables are defined on a common probability space whose measure we denote by $\mathbb{P}$. All unspecified limits are taken as $n$ becomes large, possibly taking only values in a subset of the natural numbers. We write $\convdis$ and $\convp$ for convergence in distribution and probability, and $\eqdist$ for equality in distribution. An event holds with high probability, if its probability tends to $1$ as $n \to \infty$. We let $O_p(1)$ denote an unspecified random variable $X_n$ of a stochastically bounded sequence $(X_n)_n$, and write $o_p(1)$ for a random variable $X_n$ with $X_n \convp 0$. The following list summarizes frequently used terminology. \begin{center} \begin{tabularx}{\linewidth}{lll} \qquad \qquad &$\out{T}(v)$ & outdegree of vertex $v$ in a tree $T$.\\&$|T|$ & the number of vertices of a tree $T$.\\ &$B$ & decoration in the discrete setting. \\ &$B_k$ & decoration of size $k$ in the discrete setting. \\ &$d_k$ & metric on $B_k$. \\ &$\dres_k = k^{-\gamma} d_k$ & rescaled metric on $B_k$. \\ &$\cB$ & decoration in the continuous case.\\ &$\cB_t$ & decoration in the continuous case indexed by $t\in[0,1]$.\\ &$\diam(\cB)$ & the diameter of the metric space $\cB$.\\ &$T$ & discrete tree.\\ &$T_n$ & discrete tree of size $n$.\\ &$\mathcal{T}$ & real tree.\\ &$\mathcal{T}_\alpha$ & the $\alpha$-stable tree.\\ &$\Bb$ & set of branchpoints of $\mathcal{T}_\alpha$. \\ &$X^{(\alpha)}$ & a spectrally positive, $\alpha$-stable Lévy process.\\ &$X^{\text{exc},(\alpha)}$ & an excursion of a spectrally positive, $\alpha$-stable Lévy process. \end{tabularx} \end{center} \paragraph*{Acknowledgements.} The authors are grateful to Nicolas Curien for suggesting this collaboration, and to Eleanor Archer for discussing some earlier version of her project \cite{arXiv:2011.07266}. 
The second author acknowledges support from the Icelandic Research Fund, Grant Number: 185233-051, and is grateful for the hospitality at Université Paris-Sud Orsay. \section{Preliminaries} \label{s:preliminaries} In this section we collect some definitions and refer to the key background results required in the current work. We start by introducing plane trees, then real trees and the $\alpha$-stable tree, and finally we recall the definition of the Gromov--Hausdorff--Prokhorov topology, which is the topology we use to describe the convergence of compact measured metric spaces. \subsection{Plane trees} \label{subsec:plane trees} A rooted plane tree is a subgraph of the infinite Ulam--Harris tree, which is defined as follows. Its vertex set, $\UHT$, is the set of all finite sequences \begin{align*} \UHT = \bigcup _{n\geq 0} \mathbb{N}^n \end{align*} where $\mathbb{N}= \{1, 2,\ldots\}$ and $\mathbb{N}^0 = \{\varnothing\}$ contains the empty sequence denoted by $\varnothing$. If $v$ and $w$ belong to $\UHT$, their concatenation is denoted by $vw$. The edge set consists of all pairs $(v,vi)$, $v\in\UHT$, $i\in\mathbb{N}$. The vertex $vi$ is said to be the $i$-th child of $v$ and $v$ is called its parent. In general, a vertex $vw$ is said to be a descendant of $v$ if $w \neq \varnothing$, and in that case $v$ is called its ancestor. This ancestral relation is denoted by $v \prec w$. We write $v \preceq w$ if either $v=w$ or $v \prec w$. We denote the most recent common ancestor of $v$ and $w$ by $v\wedge w$, with the convention that $v\wedge w = v$ if $v \preceq w$ (and $v \wedge w = w$ if $w \preceq v$). A rooted plane tree $T$ is a finite subtree of the Ulam--Harris tree such that \begin{enumerate} \item $\varnothing$ is in $T$, and is called its root. \item If $v$ is in $T$ then all its ancestors are in $T$. \item If $v$ is in $T$ there is a number $\out{T}(v)\geq 0$, called the outdegree of $v$, such that $vi$ is in $T$ if and only if $1 \leq i \leq \out{T}(v)$.
\end{enumerate} In the following we will only consider rooted plane trees. For simplicity, we will refer to them as trees. The number of vertices in a tree $T$ will be denoted by $|T|$ and we denote them, listed in lexicographical order, by $v_1(T), v_2(T),\ldots, v_{|T|}(T)$. When it is clear from the context, we may leave out the argument $T$ for notational ease. To the tree $T$ we associate its \L{}ukasiewicz path $(W_{k}(T))_{0\leq k \leq |T|}$ by $W_0(T) = 0$ and \begin{align} W_k(T) = \sum_{j=1}^{k} (\out{T}(v_j)-1) \end{align} for $1 \leq k \leq |T|$. Throughout the paper, we will consider sequences of \emph{random trees} $(T_n)_{n\geq 1}$. The index $n$ will usually refer to the size of the tree in some sense, e.g.~the number of vertices or the number of leaves. In Section~\ref{ss:conditions} we state the general conditions on these random trees. However, the prototypical example to keep in mind is when $T_n$ is a Bienaymé--Galton--Watson tree, conditioned to have $n$ vertices. In this case the \L{}ukasiewicz path $(W_{k}(T_n))_{0\leq k \leq n}$ is a random walk conditioned on staying non-negative until step $n$ when it hits $-1$. \subsection{Real trees and the $\alpha$-stable trees} \label{ss:realtrees} Real trees may be viewed as continuous counterparts to discrete trees. Informally, they are compact metric spaces obtained by gluing together line segments without creating loops. A formal definition may e.g.~be found in \cite{MR2147221} but for our purposes we give an equivalent definition in terms of excursions on $[0,1]$, which are continuous functions $g:[0,1]\to \mathbb{R}$ with the properties that $g\geq 0$ and $g(0) = g(1) = 0$. For $s,t\in[0,1]$ define $\displaystyle m_g(s,t) = \min_{s\wedge t \leq r \leq s \vee t} g(r)$ and define the pseudo-distance \begin{align*} d_g(s,t) = g(s)+g(t) - 2 m_g(s,t).
\end{align*} The real tree $\mathcal{T}^g$ encoded by the excursion $g$ is defined as the quotient \begin{align*} \cT^g = [0,1]/\{d_g=0\} \end{align*} endowed with the metric inherited from $d_g$ which will still be denoted $d_g$ by abuse of notation. We will consider a family of random real trees $\cT_\alpha$, $\alpha \in (1,2)$ called $\alpha$-stable trees, which serve as the underlying trees in the continuous random decorated trees. A brief definition of $\cT_\alpha$ is given below, but we refer to definitions and more details in \cite{MR1964956}. Let $(X^{(\alpha)}_t; t\in[0,1])$ be a spectrally positive $\alpha$-stable Lévy process, with $\alpha \in (1,2)$, normalized such that \begin{align} \label{eq:levymeasure} \mathbb{E}(\exp(-\lambda X_t)) = \exp(t\lambda^\alpha) \end{align} for every $\lambda > 0$. Let $(X^{\text{exc},(\alpha)}_t; t\in[0,1])$ be an excursion of $X_t^{(\alpha)}$, i.e.~the process conditioned to stay positive for $t \in (0,1)$ and to be 0 for $t\in\{0,1\}$. This involves conditioning on an event of probability zero which may be done via approximations, see e.g.~\cite[Chapter~VIII.3]{MR1406564}. Define the associated \emph{continuous height process} \begin{align*} H_t^{\text{exc},(\alpha)} = \lim_{\epsilon\to 0} \frac{1}{\epsilon}\int_{0}^t \mathbf{1}_{\{X_s^{\text{exc}}<I_s^t + \epsilon\}}ds \end{align*} where \begin{align*} I_s^t = \inf_{[s,t]} X^{\text{exc}}. \end{align*} Continuity of $H_t^{\text{exc},(\alpha)}$ is not guaranteed by the above definition but there exists a continuous modification which we consider from now on \cite{10.1214/aop/1022855417}. The $\alpha$-stable tree is the random real tree $\cT_\alpha := \cT^{H^{\text{exc},(\alpha)}}$ encoded by $H^{\text{exc},(\alpha)}$. We refer to \cite{MR1964956,MR1954248,MR2147221} for more details on the construction and properties of $\cT_\alpha$. 
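As a concrete illustration (ours, not from the paper), the pseudo-distance $d_g$ can be evaluated for an excursion sampled on a finite grid; grid points at pseudo-distance $0$ are exactly those identified in the quotient $\cT^g$. The toy excursion below is our own choice.

```python
# Sketch: the pseudo-distance d_g(s,t) = g(s) + g(t) - 2*m_g(s,t) induced by
# an excursion g, with g given as samples on an even grid of [0,1] and the
# arguments s, t given as grid indices.

def d_g(g, s, t):
    lo, hi = min(s, t), max(s, t)
    m = min(g[lo:hi + 1])          # m_g(s,t): the minimum of g between s and t
    return g[s] + g[t] - 2 * m

# A toy "excursion" on a grid (g >= 0 and g(0) = g(1) = 0):
g = [0, 1, 2, 1, 2, 3, 1, 0]
print(d_g(g, 1, 3))   # -> 0: same height, min 1 in between; identified in T^g
print(d_g(g, 2, 5))   # -> 3
```

The first pair realizes the identification defining the quotient: the two grid points sit at the same height with no lower value between them.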
Let $T^{\BGW}_n$ be a random BGW tree with a critical offspring distribution in the domain of attraction of an $\alpha$-stable law with $\alpha\in (1,2)$. Recall that this example is the prototype for the underlying random trees we consider in the current work. Since the \L ukasiewicz path of $T^{\BGW}_n$ is in this case a conditioned random walk, the rescaled process $(W_{\lfloor n t \rfloor}(T^{\BGW}_n); t\in[0,1])$ converges in distribution towards the excursion $(X^{\text{exc},(\alpha)}_t; t\in[0,1])$ in the Skorohod space $D([0,1],\mathbb{R})$ \cite[Chapter~VIII]{MR1406564}. This observation is the main idea in proving that $T_n^{\BGW}$, with a properly rescaled graph metric, converges towards $\cT_\alpha$ in the Gromov--Hausdorff sense \cite{MR1964956,MR2147221}. \subsection{The Gromov--Hausdorff--Prokhorov topology} Let $\mathbb{M}$ be the set of all compact measured metric spaces modulo isometries. An element $(X,d_X,\nu_X) \in \mathbb{M}$ consists of the space $X$, a metric $d_X$ and a Borel measure $\nu_X$. In particular, $\mathbb{M}$ contains finite graphs (e.g.~with a metric which is a multiple of the graph metric) and real trees. The Gromov--Hausdorff--Prokhorov (GHP) distance between two elements $(X,d_X,\nu_X),(Y,d_Y,\nu_Y) \in \mathbb{M}$ is defined by \begin{align} \label{eq:dghp} d_{GHP}(X,Y) = \inf\left\{d_H^Z(\phi_1(X),\phi_2(Y)) + d_P^Z(\phi_1^\ast \nu_X,\phi_2^\ast \nu_Y)\right\} \end{align} where the infimum is taken over all isometric embeddings $\phi_1: X \to Z$ and $\phi_2: Y \to Z$ into a common metric space $Z$. Here $d_H^Z$ is the Hausdorff distance between compact subsets of $Z$, $d_P^Z$ is the Prokhorov distance between measures on $Z$ and $\phi^\ast \nu$ denotes the pushforward of the measure $\nu$ by $\phi$. We refer to \cite[{Chapter~27}]{MR2459454} for more details. The pointed GHP-distance may and will be generalized in a straightforward manner by adding more points and more measures.
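For intuition, the Hausdorff term $d_H^Z$ appearing in \eqref{eq:dghp} is straightforward to compute once both spaces are embedded in a common metric space. The following sketch (our illustration, not from the paper) does so for finite subsets of $Z=\mathbb{R}$ with the usual metric.

```python
# Illustration of the Hausdorff ingredient d_H^Z of the GHP distance: for
# finite subsets A, B of a common metric space (Z, d),
#   d_H(A, B) = max( sup_{a in A} d(a, B), sup_{b in B} d(b, A) ).

def hausdorff(A, B, d):
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

d = lambda x, y: abs(x - y)             # Z = R with the usual metric
print(hausdorff([0, 1, 2], [0, 2], d))  # -> 1: the point 1 is far from {0, 2}
```

The full GHP distance additionally infimizes over all embeddings $\phi_1,\phi_2$, which is what makes it a distance between abstract (isometry classes of) spaces rather than between subsets of a fixed $Z$.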
When we are only interested in the convergence of the metric spaces without the measures, we leave out the term in \eqref{eq:dghp} with the Prokhorov metric and refer to the resulting metric simply as the Gromov--Hausdorff (GH) metric, denoted $d_{GH}$. The following formulation of GHP-convergence may be useful in calculations and we will use it repeatedly in the following sections. If $(X,d_X)$ and $(Y,d_Y)$ are metric spaces and $\epsilon>0$, a function $\phi_\epsilon: X \to Y$ is called an $\epsilon$-isometry if the following two conditions hold: \begin{enumerate} \item (Almost isometry) $$\sup_{x_1,x_2\in X}|d_Y(\phi_\epsilon(x_1),\phi_\epsilon(x_2)) - d_X(x_1,x_2)| < \epsilon.$$ \item (Almost surjectivity) For every $y \in Y$ there exists $x \in X$ such that $$d_Y(\phi_\epsilon(x),y) < \epsilon.$$ \end{enumerate} A sequence $(X_n,d_n,\nu_n)$ of compact measured metric spaces converges towards $(X,d,\nu)$ in the GHP topology if and only if there exist $\epsilon_n$-isometries $\phi_{\epsilon_n}: X_n \to X$, with $\epsilon_n \to 0$, such that \begin{align} \label{eq:epsiloniso} d_H^X(\phi_{\epsilon_n}(X_n),X) \to 0 \quad \text{and} \quad d_P^X(\phi_{\epsilon_n}^\ast \nu_n, \nu) \to 0 \end{align} as $n\to \infty$. When the GH topology is considered, only the first of these two convergences is required. In most of the applications we will consider \emph{pointed} metric spaces $(X,d_X,\rho_X)$ where $\rho_X$ is a distinguished element called \emph{a point} or \emph{a root}. The GHP-distance is adapted to this by requiring that the isometric embeddings in \eqref{eq:dghp} and the $\epsilon_n$-isometries in \eqref{eq:epsiloniso} send the root to the root. In this case we speak of the \emph{pointed} GHP-distance. In the same vein, we will sometimes deal with metric spaces that carry some extra finite measure. In this case we just add a term in \eqref{eq:dghp} that corresponds to the Prokhorov distance computed between the extra measures and modify \eqref{eq:epsiloniso} accordingly.
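On finite metric spaces, both conditions above can be checked by brute force; the following sketch (a toy of ours, with hypothetical three-point spaces) computes the smallest $\epsilon$ for which a given map satisfies them.

```python
# Toy check (ours): the smallest eps such that phi: X -> Y is an
# eps-isometry, for finite metric spaces given by distance functions.
def isometry_defect(X, Y, dX, dY, phi):
    # (1) almost isometry: sup over pairs of |d_Y(phi(a), phi(b)) - d_X(a, b)|
    distortion = max(abs(dY(phi(a), phi(b)) - dX(a, b)) for a in X for b in X)
    # (2) almost surjectivity: sup over y in Y of inf over x of d_Y(phi(x), y)
    defect = max(min(dY(phi(a), y) for a in X) for y in Y)
    return max(distortion, defect)

# X = {0, 1, 2} and Y = {0, 0.9, 2.1}, both with the metric |a - b|,
# and phi mapping each point of X to the nearest point of Y.
X, Y = [0, 1, 2], [0.0, 0.9, 2.1]
dist = lambda a, b: abs(a - b)
phi = {0: 0.0, 1: 0.9, 2: 2.1}
eps = isometry_defect(X, Y, dist, dist, lambda a: phi[a])
# phi is onto, so the surjectivity defect is 0; the worst distortion, 0.2,
# comes from the pair (1, 2), so phi is an eps-isometry for every eps > 0.2.
print(round(eps, 10))  # -> 0.2
```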
\section{Construction in the discrete} \label{s:disc_construction} In this section we define the discrete decorated trees and state the sufficient conditions on the decorations and the underlying trees for our main results to hold. \emph{A decoration} of size $k$ is a compact metric space $(B,d)$. We distinguish a point $\rho \in B$ and call it the \emph{internal root} of $B$; furthermore, we label $k$ points (not necessarily distinct) and call them the external roots of $B$. This labeling is described by a function $\ell: \{1,\ldots,k\} \to B$. Note that we explicitly allow external roots to coincide with the internal root. The space $B$ is also equipped with a finite Borel measure $\nu$. We will use the notation $(B,d,\rho,\ell,\nu)$ for a decoration. In practice, $B$ will often be a finite graph, $d$ the graph metric, and $\nu$ the counting measure on the vertex set. The labeled vertices will often be the boundary of $B$ in some sense, e.g.~the leaves of a tree or the vertices on the boundary of a planar map. \begin{figure}[t] \centering \begin{minipage}{\textwidth} \centering \includegraphics[width=0.55\linewidth]{patch} \caption{Definition of the metric $d_n^{\dec}$.} \label{f:metric} \end{minipage} \end{figure} Let $(T_n)_{n\geq 1}$ be a sequence of random trees and $(\tilde B_k,\tilde \rho_k,\tilde d_k,\tilde \ell_k,\tilde \nu_k)_{k\geq 0}$ a sequence of random decorations indexed by their size. Conditionally on $T_n$, for each $v \in T_n$, sample independently a decoration $(B_{v},\rho_{v},d_{v},\ell_{v},\nu_{v})$ distributed as $(\tilde B_{\out{}(v)},\tilde \rho_{\out{}(v)},\tilde d_{\out{}(v)},\tilde \ell_{\out{}(v)},\tilde \nu_{\out{}(v)})$. We aim to glue the decorations to each other along the structure of $T_n$; this is explained informally as follows.
Consider the disjoint union \begin{align*} T_n^{\ast,\dec} = \bigsqcup_{v\in T_n} B_v \end{align*} of the decorations, on which we define an equivalence relation $\sim$ such that for each $v$ in $T_n$, and each $1\leq i \leq \out{}(v)$, we have $\ell_v(i) \sim \rho_{vi}$. In other words, the $i$-th external root of $B_v$ is glued to the internal root of $B_{vi}$. The precise construction is given below. We first define a pseudo-metric $d_n^{\ast,\dec}$ on $T_n^{\ast,\dec}$. For each $x \in T_n^{\ast,\dec}$ we let $v(x)$ denote the unique vertex in $T_n$ such that $x\in B_{v(x)}$. If $w$ is a vertex in $T_n$ such that $w \preceq v(x)$, let $Z_w^x \in B_w$ be defined by \begin{align*} Z_w^x = \begin{cases} \ell_w(i) & \text{if $wi \preceq v(x)$,}\\ x & \text{if $w = v(x)$}. \end{cases} \end{align*} In order to clarify the first case on the right hand side, note that if $w \prec v(x)$ then there is a unique child $wi$, $1 \le i \le \out{T_n}(w)$, of $w$ in $T_n$ such that $wi \preceq v(x)$, and we set $Z_w^x = \ell_w(i)$. Now, let $x,y \in T_n^{\ast,\dec}$ and first assume that $v(x) \preceq v(y)$, in which case we set \begin{align*} d_n^{\ast,\dec}(x,y) &= d_n^{\ast,\dec}(y,x) = d_{v(x)}\left(x,Z_{v(x)}^{y}\right)+\sum_{v(x) \prec w \preceq v(y)} d_w\left(\rho_w,Z_w^{y}\right). \end{align*} Otherwise, if $v(x)$ and $v(y)$ are not in an ancestral relation, set \begin{align}\label{eq:definition distance dec n} d_n^{\ast,\dec}(x,y) = d_{v(x)\wedge v(y)}\left(Z_{v(x)\wedge v(y)}^{x},Z_{v(x)\wedge v(y)}^{y}\right) + d_n^{\ast,\dec}\left(Z_{v(x)\wedge v(y)}^{x},x\right) + d_n^{\ast,\dec}\left(Z_{v(x)\wedge v(y)}^{y},y\right). \end{align} These definitions are clarified in Fig.~\ref{f:metric}. Finally, the decorated tree is the quotient \begin{align} T_n^{\dec} = T_n^{\ast,\dec}\Big/\sim \end{align} where $x\sim y$ if and only if $d_n^{\ast,\dec}(x,y) = 0$, and the metric induced by $d_n^{\ast,\dec}$ on the quotient is denoted by $d_n^{\dec}$.
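To see the ancestral-case formula in action, here is a toy computation of ours (with hypothetical decorations: each $B_v$ is a path $\{0,\dots,m\}$ with the graph metric, internal root $0$ and external roots $\ell_v(i)=i$; tree vertices are Ulam--Harris tuples).

```python
# Toy instance (ours, not from the text): decorations are paths {0,...,m}
# with metric |a - b|, internal root rho_v = 0 and external roots l_v(i) = i.
# Tree vertices are Ulam-Harris tuples: () is the root, (2, 1) is the first
# child of the root's second child, and so on.
def Z(w, vy, y):
    """The point Z_w^y in B_w: y itself if w = v(y); otherwise the external
    root l_w(i) for the unique child wi on the ancestral path towards v(y)."""
    return y if w == vy else vy[len(w)]

def d_dec(vx, x, vy, y):
    """d_n^{*,dec}(x, y) when v(x) is an ancestor of (or equal to) v(y)."""
    assert vy[:len(vx)] == vx, "v(x) must be an ancestor of v(y)"
    total = abs(x - Z(vx, vy, y))           # distance inside the block of x
    for depth in range(len(vx) + 1, len(vy) + 1):
        w = vy[:depth]                      # next vertex towards v(y)
        total += abs(0 - Z(w, vy, y))       # d_w(rho_w, Z_w^y)
    return total

# x = 0 in the root block, y = 3 in the block of vertex (2, 1): the path
# crosses |0 - 2| in the root block, |0 - 1| in B_(2,) and |0 - 3| in B_(2,1).
print(d_dec((), 0, (2, 1), 3))  # -> 6
```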
Define a measure $\nu^\dec_n$ on $T_n^\dec$ by \begin{align*} \nu_n^\dec = \sum_{v\in T_n} \nu_v. \end{align*} Here the mass of a gluing point is the sum of the masses of all vertices of its equivalence class. Finally, root $T_n^\dec$ at $\rho_\varnothing$. We view the decorated tree as a random, compact, rooted and measured metric space \begin{align*} (T_n^\dec,d_n^\dec,\rho_\varnothing, \nu_n^\dec). \end{align*} \subsection{Conditions on the trees and decorations} \label{ss:conditions} Assume in this subsection that $\alpha \in (1,2)$ is fixed and that $(T_n)_n$ is a sequence of random trees with decorations $(B_v,d_v,\rho_v,\ell_v,\nu_v)_{v\in T_n}$ sampled independently according to $(\tilde B_k,\tilde \rho_{k},\tilde d_{k},\tilde \ell_{k},\tilde \nu_{k})_{k\geq 0}$. Let $\mu_k$ be a measure on $\tilde B_k$ defined by \begin{align*} \mu_k = \frac{1}{k}\sum_{j=1}^{k}\delta_{\tilde\ell_k(j)}. \end{align*} We let $v_1,\dots, v_{|T_n|}$ be the vertices of $T_n$ listed in depth-first-search order and for all $1\leq k \leq |T_n|$, we let \begin{align*} M_k=\sum_{i=1}^{k}\nu_{v_i}(B_{v_i}) \end{align*} be the sum of the total masses of the first $k$ decorations in depth-first order. We impose the following conditions on the decorations (D), the trees (T) and on both trees and decorations (B). \subsubsection*{Conditions on the decorations} \label{sss:decorations} \begin{enumerate}[\quad D1)] \item \label{c:GHPlimit} (\emph{Pointed GHP limit}). There are $\beta,\gamma > 0$, a compact metric space $(\mathcal{B},d)$ with a point $\rho$, a Borel probability measure $\mu$ and a finite Borel measure $\nu$ such that \begin{align*} \left(\tilde B_k,\tilde\rho_k,k^{-\gamma} \tilde d_k, \mu_k,k^{-\beta}\tilde \nu_k\right) \to (\mathcal{B},\rho,d,\mu,\nu) \end{align*} as $k\to \infty$, weakly in the pointed Gromov--Hausdorff--Prokhorov sense.
\item \label{c:moment_diam_limit} (\emph{Moments of the diameter of the limit}) The limiting block $(\mathcal{B},\rho,d,\mu,\nu)$ satisfies $\Ec{\diam(\mathcal{B})^p}<\infty$, for some $p>\frac{\alpha}{\gamma}$. \item \label{c:mass_limit} (\emph{Limit of the expected mass}) It holds that $$k^{-\beta}\cdot \Ec{\tilde \nu_k(\tilde B_k)} \underset{k\rightarrow \infty }{\rightarrow} \Ec{\nu(\mathcal{B})} <\infty.$$ \end{enumerate} \subsubsection*{Conditions on the trees}\label{sss:tree} \begin{enumerate}[\quad T1)] \item \label{c:permute} (\emph{Symmetries of subtrees}) The distribution of $T_n$ is invariant under the permutation of any set of siblings. \item \label{c:invariance} (\emph{Invariance principle for the \L ukasiewicz path}) There is a sequence $b_n \ge 0$ such that \[ (b_n^{-1} W_{\lfloor|T_n|t\rfloor}(T_n))_{0 \le t \le 1} \convdis X^{\text{exc}, (\alpha)} \] where $(W_k(T_n))_{0\leq k \leq |T_n|}$ is the \L ukasiewicz path of $T_n$ and $X^{\text{exc},(\alpha)}$ is an excursion of the spectrally positive $\alpha$-stable Lévy process defined by \eqref{eq:levymeasure}. \end{enumerate} \subsubsection*{Conditions on both}\label{sss:both} Let $\gamma$ and $b_n$ be as in conditions D and T in the previous subsections. \begin{enumerate}[\quad B1)] \item \label{cond:small decorations dont contribute to distances} (\emph{Small decorations do not contribute}). For any $\epsilon>0$, \begin{align*} \Pp{\sup_{v\in T_n} \left\lbrace b_n^{-\gamma}\sum_{w\preceq v} \diam(B_w) \ind{\out{T_n}(w)\leq \delta b_n}\right\rbrace>\epsilon } \underset{\delta\rightarrow 0}{\rightarrow}0, \end{align*} uniformly in $n\geq 1$ for which $T_n$ is well-defined. \item \label{cond:measure is spread out} (\emph{Measure is spread out, case $\beta\leq \alpha$}).
We assume that we have the following uniform convergence in probability, for some sequence $(a_n)$, \begin{align}\label{eq:convergence depth-first mass} \left(\frac{M_{\lfloor |T_n|t\rfloor}}{a_n}\right)_{t \in \intervalleff{0}{1}} \underset{n\rightarrow\infty}{\longrightarrow} (t)_{0 \leq t \leq 1}. \end{align} \item \label{cond:small decorations dont contribute to mass} (\emph{Measure is on the blocks, case $\beta>\alpha$}). For any $\epsilon>0$ we have \begin{align} \limsup_{n\rightarrow \infty} \Pp{b_n^{-\beta}\sum_{i=1}^{|T_n|}\nu_{v_i}(B_{v_i}) \ind{\out{T_n}(v_i)\leq \delta b_n} > \epsilon } \underset{\delta\rightarrow 0}{\longrightarrow} 0. \end{align} In this case we write $a_n=b_n^\beta$. \end{enumerate} \subsection{Reduction to the case of shuffled external roots} \label{ss:shuffle} Given Condition T\ref{c:permute} on the tree $T_n$, we can always shuffle the external roots of the blocks that we use without changing the law of the object $T_n^\dec$ that we construct. More precisely, given the law of $(\tilde B_k,\tilde d_k,\tilde\rho_k,\tilde\ell_k,\tilde \nu_k)$ for any $k\geq 0$, we can define the block with shuffled external roots $(\hat B_k,\hat d_k,\hat\rho_k,\hat\ell_k,\hat\nu_k)$ as \begin{align*} (\hat B_k,\hat d_k,\hat\rho_k,\hat\nu_k)=(\tilde B_k,\tilde d_k,\tilde\rho_k,\tilde \nu_k) \quad \text{and} \quad \hat\ell_k(i)= \tilde\ell_k(\sigma(i)), \end{align*} where $\sigma$ is a uniform random permutation of $\{1,\ldots,k\}$, independent of everything else. Then, assuming that Condition~T\ref{c:permute} holds, the law of the object $\hat T_n^\dec$ constructed with underlying tree $T_n$ and decorations $(\hat B_k,\hat d_k,\hat\rho_k,\hat\ell_k,\hat\nu_k)_{k\geq 0}$ is the same as that of the object $T_n^\dec$ constructed with underlying tree $T_n$ and decorations $(\tilde B_k,\tilde d_k,\tilde\rho_k,\tilde\ell_k,\tilde \nu_k)_{k\geq 0}$.
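The shuffling itself is elementary; for concreteness, a minimal sketch of ours (with placeholder labels):

```python
import random

# Sketch (ours): shuffle the external roots of a size-k decoration by an
# independent uniform permutation; the space, metric, internal root and
# measure are left untouched, only the labeling function changes.
def shuffle_external_roots(ell, rng):
    k = len(ell)
    sigma = list(range(k))
    rng.shuffle(sigma)            # uniform random permutation of {0,...,k-1}
    return [ell[sigma[i]] for i in range(k)]

ell = ["a", "b", "c", "d"]        # placeholder external roots l(1),...,l(4)
hat_ell = shuffle_external_roots(ell, random.Random(0))
# hat_ell(i) = ell(sigma(i)): the same multiset of roots in a random order.
assert sorted(hat_ell) == sorted(ell)
```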
We will require the following lemma, which informally states that the sequence of shuffled external roots in a decoration $\tilde B_k$ converges weakly to an i.i.d.\ sequence of external roots in the GHP-limit $\cB$ of $\tilde B_k$. \begin{lemma} \label{l:shuffle} Let $(\tilde B_k,\tilde d_k,\tilde\rho_k,\tilde\ell_k,\tilde \nu_k)_{k\geq 1}$ be a (deterministic) sequence of decorations which converges to a decoration $(\cB,\rho,d,\mu,\nu)$ in the GHP-topology. Let $Y_1,Y_2,\ldots$ be a sequence of independent points in $\cB$ sampled according to $\mu$. Then there exists a sequence $\phi_k: \tilde B_k \to \cB$ of $\epsilon_k$-isometries, with $\epsilon_k \to 0$, such that \begin{align*} (\phi_k(\hat\ell_k(1)),\phi_k(\hat\ell_k(2)),\ldots,\phi_k(\hat\ell_k(k)),\rho,\rho,\ldots) \convdis (Y_1,Y_2,\ldots) \end{align*} as $k\to \infty$. \end{lemma} \begin{proof} First note that \begin{align*} \left\lVert \mu_k^{\otimes k}-\mathcal{L}\big(\hat{\ell}_k\big)\right\rVert_{TV} \to 0, \end{align*} as $k\to \infty$, where $\mathcal{L}(\hat{\ell}_k)$ denotes the law of the uniformly shuffled labels $\hat{\ell}_k$. Thus, asymptotically, we may work with the measure $\mu_k^{\otimes k}$ instead of the law of the shuffled labels. The GHP-convergence guarantees the existence of $\epsilon_k$-isometries $\phi_k:\tilde B_k \to \cB$ with $\epsilon_k\to 0$, such that the pushforward $\phi_k^\ast \mu_k$ converges weakly to $\mu$. It follows that $(\phi_k^\ast \mu_k)^{\otimes \mathbb{N}} \to \mu^{\otimes \mathbb{N}}$ weakly, which concludes the proof. \end{proof} In the rest of the paper, we are always going to assume that the external roots of the decorations that we use have been shuffled as above. Note that there can be situations where there is a natural ordering of the external roots, e.g.~when the decorations are oriented circles, maps with an oriented boundary, trees encoded by contour functions and more.
\subsection{The case of Bienaymé--Galton--Watson trees} \label{ss:BGW} One important case of application is when the tree $T_n$ is taken to be a Bienaymé--Galton--Watson (BGW) tree in the domain of attraction of an $\alpha$-stable tree, conditioned on having $n$ vertices. If we denote by $\mu$ a reproduction law on $\mathbb Z_+$, this corresponds to requiring that $\sum_{i=0}^{\infty}i \mu(i)= 1$ and that the function \begin{align*} x\mapsto\frac{1}{\mu([x,\infty))} \end{align*} is $\alpha$-regularly varying, see the Appendix for a discussion of regularly varying functions. Note that $T_n$ is then possibly not defined for all $n\geq 1$; in that case we only consider values of $n$ for which it is. Condition T1 is satisfied for any BGW tree, and T2 holds, thanks to results of Duquesne and Le Gall \cite{MR1964956,MR2147221}, for a sequence $(b_n)_{n\geq 1}$ that depends on the tail behaviour of $\mu([n,\infty))$. In the rest of this section, we assume that $\alpha\in(1,2)$ and $\gamma> \alpha -1$ are fixed and that we are working with BGW trees with reproduction distribution $\mu$ as above and corresponding sequence $(b_n)_{n\geq 1}$. The results below concern properties of the diameter and the measure on the decorations and they guarantee that the remaining conditions in Section~\ref{ss:conditions} are satisfied. Their proofs are given in the Appendix. We first state a result that ensures that small decorations do not contribute to the macroscopic distances. \begin{proposition}\label{prop:small blobs don't contribute} Under the following moment condition on the diameter of the decorations \begin{align*} \sup_{k\geq 0} \Ec{\left(\frac{\diam(\tilde B_k)}{k^\gamma}\right)^m}<\infty, \end{align*} for some $m>\frac{4\alpha}{2\gamma +1 -\alpha}$, condition B1 is satisfied.
\end{proposition} \begin{remark} In \cite[Remark 1.5]{arXiv:2011.07266}, it is stated that in this context $b_n^{-\gamma}\cdot \diam(T_n^\dec)$ is tight under an assumption that corresponds to having $m>\frac{\alpha}{\alpha -1}$ in the above statement. Note that neither of those assumptions is optimal, since $\frac{\alpha}{\alpha -1}$ and $\frac{4\alpha}{2\gamma -(\alpha-1)}$ are not always ordered in the same way for different values of $\alpha$ and $\gamma$. In particular, our bound does not blow up for a fixed $\gamma$ and $\alpha\rightarrow 1$. \end{remark} Now we state another result that ensures that either condition B\ref{cond:measure is spread out} or B\ref{cond:small decorations dont contribute to mass} is satisfied in this setting. We let $D$ denote a random variable whose law is the reproduction distribution $\mu$ of the $\BGW$ tree. \begin{lemma}\label{lem:moment measure discrete block implies B2 or B3} Assume that D1 holds for some value $\beta>0$ and that \begin{align*} \sup_{k\geq 0} \Ec{\frac{\tilde\nu_{k}(\tilde B_k)}{k^\beta}}<\infty. \end{align*} Then condition B\ref{cond:measure is spread out} or B\ref{cond:small decorations dont contribute to mass} is satisfied, depending on the value of $\beta$: \begin{itemize} \item If $\beta < \alpha$ then B\ref{cond:measure is spread out} holds with $a_n:=n \cdot \Ec{\tilde\nu_D(\tilde B_D)}$. \item If $\beta=\alpha$ then B\ref{cond:measure is spread out} holds with \begin{equation} a_n:=\left\{ \begin{aligned} &n \cdot \Ec{\tilde\nu_D(\tilde B_D)} \quad &\text{if}\quad \Ec{D^\alpha} <\infty ,\\ &n \cdot \Ec{\nu(\cB)}\cdot \Ec{D^\alpha \ind{D\leq b_n}} \quad &\text{if} \quad \Ec{D^\alpha} =\infty, \end{aligned} \right. \end{equation} where in the second case, we further assume that $\sup_{k\geq 0} \Ec{\left(\frac{\tilde\nu_{k}(\tilde B_k)}{k^\beta}\right)^{1+\eta}}<\infty$ for some $\eta>0$. \item If $\beta > \alpha$ then B\ref{cond:small decorations dont contribute to mass} holds with $a_n = b_n^\beta$.
\end{itemize} \end{lemma} \begin{remark} \label{re:remarkleaves} Both of these results are stated for $\BGW$ trees conditioned on having exactly $n$ vertices. In fact, it is possible to use arguments from \cite[Section~4, Section~5]{MR2946438} to deduce that the same results hold for $\BGW$ trees conditioned on having $\lfloor \mu(\{0\}) n\rfloor$ leaves. This is actually the setting that we need for most of our applications. \end{remark} \section{Construction in the continuous} \label{s:cts_construction} Let $(X^{\text{exc},(\alpha)}_t; t\in[0,1])$ be an excursion of an $\alpha$-stable spectrally positive L\'{e}vy process, with $\alpha\in(1,2)$, and denote the corresponding $\alpha$-stable tree by $\cT_\alpha$. Denote the projection from $\intervalleff01$ to $\cT_\alpha$ by $\mathbf{p}$. Recall the definition \begin{align*} I_s^t = \inf_{[s,t]} X^{\text{exc},(\alpha)}. \end{align*} We define a partial order $\preceq$ on $[0,1]$ as follows: for every $s,t\in[0,1]$ \begin{equation}\label{eq:def genealogical order continuous tree} s\preceq t \quad \text{if} \quad s\leq t \quad \text{and} \quad X^{\text{exc},(\alpha)}_{s^{-}} \leq I_s^t. \end{equation} For $s,t\in[0,1]$ with $s\preceq t$, define \begin{equation}\label{eq:definition xts} x_s^t = I_s^t - X_{s^-}^{\text{exc}} \end{equation} and let \begin{equation}\label{eq:definition uts} u_s^t := \frac{x_s^t}{\Delta_s} \end{equation} where $\Delta_s = X^{\text{exc},(\alpha)}_s-X^{\text{exc},(\alpha)}_{s^-}$ is the jump of $X_s^{\text{exc},(\alpha)}$ at $s$. If $s$ is not a jump time of the process, we simply let $u_s^t:=0$. Let us denote $\Bb=\enstq{v\in \intervalleff{0}{1}}{\Delta_v>0}$, which is in one-to-one correspondence with the set of branch points of $\cT_\alpha$. Almost surely, these points of the tree all have infinite degree, meaning that for any $v\in \Bb$, the space $\cT_\alpha\setminus\{\mathbf{p}(v)\}$ has a countably infinite number of connected components.
For any $v\in\Bb$, we let \begin{align*} \cA_v:= \enstq{u_v^t}{t\in\intervalleff{0}{1} \quad \text{and} \quad \exists s\in \intervalleff{0}{1}, v\prec s \prec t }. \end{align*} This set $\cA_v$ is in one-to-one correspondence with the connected components of $\cT_\alpha\setminus\{\mathbf{p}(v)\}$ above $\mathbf{p}(v)$, i.e.~those components that do not contain the root. Conditionally on $X^{\text{exc},(\alpha)}$, let $\left((\mathcal{B}_v,\rho_v,d_v,\mu_v,\nu_v)\right)_{v\in \Bb}$ be random measured metric spaces, sampled independently with the distribution of $(\mathcal{B},\rho,d,\mu,\nu)$ and for every $v\in \Bb$, let $(Y_{v,a})_{a\in\cA_v}$ be i.i.d. with law $\mu_v$. For $s \in [0,1]\setminus \Bb$, let $\mathcal{B}_s = \{s\}$. Define for every $v\in\Bb$ the rescaled distance \begin{equation*} \delta_v=\Delta_v^\gamma \cdot d_v \end{equation*} and for any $t\succeq v$: \begin{align}\label{eq:definition Zvt} Z_v^t = \begin{cases} Y_{v,u_v^t} & \text{if} \quad u^t_v\in\cA_v,\\ \rho_v \qquad & \text{otherwise.} \end{cases} \end{align} Now we are ready to define a pseudo-distance $d$ on the set $\cT^{\ast,\dec}_\alpha=\bigsqcup_{s \in [0,1]}\mathcal{B}_s$. Before doing that, let us define, for any $v,w\in\Bb$ such that $v\preceq w$, and any $x\in \mathcal{B}_w,$ \begin{align*} Z_v^x = \begin{cases} Z^w_v & \text{if } v\prec w\\ x \qquad & \text{if } v=w. \end{cases} \end{align*} We also extend the genealogical order to the whole space $\cT^{\ast,\dec}_\alpha$ by declaring that for any $v\in \Bb$ and $x\in \mathcal{B}_v$, we have $v\preceq x \preceq v$ (thus making it a preorder, instead of an order relation). We also extend the definition of $\wedge$ by saying that $a\wedge b$ is always chosen in $\intervalleff01$.
Then we define $d_0(a,b)$ for $a\preceq b$: \begin{align}\label{eq:def dist 0} d_0(a,b)=d_0(b,a) = \underset{v\in\Bb}{\sum_{a\prec v\preceq b}}\delta_v(\rho_v,Z_v^b), \end{align} and for any $a,b\in\cT^{\ast,\dec}_\alpha$, \begin{align*} d(a,b)=d_0(a\wedge b,a)+d_0(a\wedge b, b) + \delta_{a\wedge b }(Z_{a\wedge b}^a,Z_{a\wedge b}^b). \end{align*} We say that two elements $a$ and $b$ are equivalent, written $a\sim b$, if $d(a,b)=0$. We quotient the space by this equivalence relation, $\cT^\dec_\alpha=\cT^{\ast,\dec}_\alpha/\sim$, and denote by $\mathbf{p}^\dec$ the corresponding quotient map. We can endow $\cT_\alpha^\dec$ with two types of measures: \begin{itemize} \item If $\beta \leq \alpha$, then we define the measure $\upsilon_\beta^\dec$ on $\cT_\alpha^\dec$ as the push-forward of the Lebesgue measure on $\intervalleff{0}{1}\setminus \Bb$ through the projection map $\mathbf{p}^\dec:\cT^{\ast,\dec}_\alpha \rightarrow \cT_\alpha^\dec$ onto the quotient. \item If $\beta > \alpha$, we define the measure $\upsilon_\beta^\dec$ as (the push-forward through $\mathbf{p}^\dec$ of) \begin{align*} \sum_{t \in \Bb } \Delta_t^\beta \nu_t \end{align*} which is almost surely a finite measure, provided, say, that $\nu(\cB)$ has finite expectation. \end{itemize} Note that so far, it is not clear that the construction above yields a compact metric space (or even a metric space), as the metric $d$ could in principle take arbitrarily large or even infinite values. This is handled by the work of Section~\ref{sec:self-similarity}. In particular, a result that is important in our approach is the following, which ensures that, under some moment assumption on the diameter of the decorations, the distances in the whole object are dominated by the contributions of the decorations corresponding to large jumps. This will in particular be useful in the next section. \begin{lemma} \label{l:smalldec} Assume that $\gamma > \alpha-1$ and that condition D\ref{c:moment_diam_limit} is satisfied.
Then, \begin{align} \sup_{s\in \intervalleff{0}{1}} \left\lbrace\sum_{t\prec s} \Delta_t^\gamma \diam(\mathcal{B}_t) \ind{\Delta_t\leq \delta}\right\rbrace \rightarrow 0 \end{align} as $\delta\rightarrow 0$ in probability. \end{lemma} \section{Invariance principle} \label{s:invariance} In this section we state and prove one of the main results of the paper, namely that under the conditions stated in Section~\ref{ss:conditions}, the discrete decorated trees, with a properly rescaled graph distance, converge towards $\alpha$-stable decorated trees. \begin{theorem} \label{th:invariance} Let $(T_n)_{n\geq 1}$ be a sequence of random trees and $(B_v,\rho_{v},d_{v},\ell_{v},\nu_{v})_{v\in T_n}$ be random decorations on $T_n$ sampled independently according to $(\tilde B_k,\tilde \rho_{k},\tilde d_{k},\tilde \ell_{k},\tilde \nu_{k})_{k\geq 0}$. Assume that conditions D, T and B in Section~\ref{ss:conditions} are satisfied for some exponents $\alpha, \beta,\gamma$ and denote the weak limit of $(\tilde B_k,\tilde \rho_{k},\tilde d_{k},\tilde \ell_{k},\tilde \nu_{k})_{k\geq 0}$ in the sense of D\ref{c:GHPlimit} by $(\mathcal{B},\rho,d,\mu,\nu)$. Suppose that $\alpha-1<\gamma$ and let $\cT_\alpha^\dec$ be the $\alpha$-stable tree $\cT_\alpha$, decorated according to $(\mathcal{B},\rho,d,\mu,\nu)$ with distance exponent $\gamma$. Then \begin{align*} (T_n^\dec,b_n^{-\gamma}d_n^\dec, a_n^{-1}\cdot \nu_n^\dec) \to (\mathcal{T}_\alpha^\dec, d, \upsilon_\beta^\dec) \end{align*} in distribution in the Gromov--Hausdorff--Prokhorov topology. \end{theorem} \begin{proof} We first describe in detail the GH-convergence of the sequence of decorated trees, leaving out the measures $\nu_n^{\text{dec}}$, and then give a less detailed outline of the GHP-convergence. We start by constructing, in several steps, a coupling that allows us to build an $\epsilon$-isometry between $(T_n^\dec,b_n^{-\gamma} d_n^\dec)$ and $(\cT_\alpha^\dec,d)$ for arbitrarily small $\epsilon$.
\emph{Step 1, coupling of trees:} By Condition T\ref{c:invariance}, we may use Skorokhod's representation theorem, and construct on the same probability space $X^{\text{exc},(\alpha)}$ and a sequence of random trees $T_n$ such that \[ (b_n^{-1} W_{\lfloor|T_n|t\rfloor}(T_n))_{0 \le t \le 1} {\,{\longrightarrow}\,} X^{\text{exc},(\alpha)} \] almost surely. We will keep the same notation for all random elements on this new space. Order the vertices $(v_{n,1},v_{n,2},\ldots)$ of $T_n$ in non-increasing order of their degree (in case of ties use some deterministic rule) and denote their positions in the lexicographical order on the vertices of $T_n$ by $(t_{n,1},t_{n,2},\ldots)$, in such a way that for any $k\geq 1$ we have $v_{t_{n,k}}=v_{n,k}$. Similarly, order the set $\Bb$ of jump times of $X^{\text{exc},(\alpha)}$ in decreasing order $(t_1,t_2,\ldots)$ of the sizes of the jumps (almost surely there are no ties). Then, by properties of the Skorokhod topology, it holds that for any $k\geq 1$, \begin{align}\label{eq:convergence size and position of jumps} \frac{t_{n,k}}{|T_n|} \to t_k \qquad \text{and} \qquad \frac{\out{}(v_{n,k})}{b_n} \rightarrow \Delta_{t_k} \end{align} almost surely, as $n\to \infty$. Because of the way we can retrieve the genealogical order from the coding function, the genealogical order on $(v_{n,1},v_{n,2},\dots, v_{n,N})$ for any fixed $N$ stabilizes, for $n$ large enough, to that of $(t_{1},t_{2},\dots, t_{N})$ for the order $\preceq$ defined in \eqref{eq:def genealogical order continuous tree}. \emph{Step 2, coupling of decorations:} Let us shuffle the external roots of the decorations $(\tilde B_k,\tilde \rho_{k},\tilde d_{k},\tilde \ell_{k},\tilde \nu_{k})_{k\geq 0}$ with independent uniform permutations as described in Section~\ref{ss:shuffle} and denote the shuffled decorations by $(\hat B_k,\hat\rho_{k},\hat d_{k},\hat \ell_{k},\hat \nu_{k})_{k\geq 0}$.
From now on, sample $(B_v,\rho_{v},d_{v},\ell_{v},\nu_{v})_{v\in T_n}$ from the latter which, as explained before, does not affect the distribution of the decorated tree. Since for any $k\geq 1$, $\out{T_n}(v_{n,k}) \to \infty$ as $n\to \infty$, it holds, by Condition D\ref{c:GHPlimit}, that \begin{align*} (B_{v_{n,k}},\rho_{v_{n,k}},(\out{}(v_{n,k}))^{-\gamma}d_{v_{n,k}},\mu_{v_{n,k}},(\out{}(v_{n,k}))^{-\beta}\nu_{v_{n,k}}) \to (\mathcal{B},\rho,d,\mu,\nu) \end{align*} weakly as $n\to\infty$. We again use Skorokhod's theorem and modify the probability space such that for each $k\geq 1$, this convergence holds almost surely and the a.s.~limit will be denoted by $(\mathcal{B}_{t_k},\rho_{t_k},d_{t_k},\mu_{t_k}, \nu_{t_k})$. We still keep the same notation for random elements. For the rest of the proof, we will write $\mathbf d_v$ in place of $b_n^{-\gamma}\cdot d_{v}$ to lighten notation. From the convergence \eqref{eq:convergence size and position of jumps}, and the definition of $\delta_v=\Delta_v^\gamma \cdot d_v$, we can re-express the above convergence as the a.s.\ convergence \begin{align}\label{eq:convergence block unk to block vk} (B_{v_{n,k}},\rho_{v_{n,k}}, \mathbf d_{v_{n,k}},\mu_{v_{n,k}},(b_n)^{-\beta}\nu_{v_{n,k}}) \to (\mathcal{B}_{t_k},\rho_{t_k},\delta_{t_k},\mu_{t_k},\Delta_{t_k}^\beta\nu_{t_k}). \end{align} \emph{Step 3, coupling of gluing points:} Fix a $\delta > 0$ and let $N(\delta)$ be the (finite) number of elements $v \in \Bb$ such that $\Delta_v > \delta$. Thanks to Step 1, we can almost surely consider $n$ large enough so that the genealogical order on $(v_{n,1},v_{n,2},\dots, v_{n,N(\delta)})$ induced by $T_n$ corresponds to that of $(t_{1},t_{2},\dots, t_{N(\delta)})$. Note that $X^{\text{exc},(\alpha)}$ almost surely does not have a jump of size exactly $\delta$, and so for $n$ large enough the vertices $(v_{n,1},v_{n,2},\dots, v_{n,N(\delta)})$ are exactly the vertices with degree larger than $\delta b_n$.
Now recall the notation \eqref{eq:definition uts}, and for any $k\in \{1,2,\dots N(\delta)\}$, consider the finite set \begin{align*} \enstq{u_{t_k}^{t_i}}{i \in \{1,2,\dots N(\delta)\},\ t_i\succ t_k} \subset \mathcal A_{t_k} \end{align*} and enumerate its elements as $m_k(1), m_k(2), \dots , m_k(a_k)$ in increasing order. Similarly consider the set \begin{align*} \enstq{\ell \in \mathbb N}{v_{n,k}\ell \preceq v_{n,i} \text{ for some } i \in \{1,2,\dots N(\delta)\}}\subset \{1,\dots, \out{}(v_{n,k})\} \end{align*} and denote its elements by $j_1,j_2,\dots, j_{a_k}$. Note that these two sets have the same cardinality $a_k$ when $n$ is large enough because of our assumptions. For each $k\geq 1$, given $(\mathcal{B}_{t_k},\rho_{t_k},d_{t_k},\mu_{t_k}, \nu_{t_k})$ and $X^{\text{exc},(\alpha)}$, the points $(Y_{t_k,a})_{a\in \mathcal{A}_{t_k}}$ are sampled independently according to $\mu_{t_k}$. By Lemma~\ref{l:shuffle} and \eqref{eq:convergence block unk to block vk} we may find a sequence $\phi_{v_{n,k}}: B_{v_{n,k}} \to \mathcal{B}_{t_k}$ of $\epsilon_{v_{n,k}}$-isometries, with $\epsilon_{v_{n,k}}\to 0$ as $n\to \infty$, such that \begin{align} \label{eq:coupling_glue} (\phi_{v_{n,k}}(\ell_{v_{n,k}}(j_1)),\phi_{v_{n,k}}(\ell_{v_{n,k}}(j_2)),\ldots,\phi_{v_{n,k}}(\ell_{v_{n,k}}(j_{a_k}))) {\,{\buildrel w \over \longrightarrow}\,} (Y_{t_k,m_k(1)},Y_{t_k,m_k(2)},\ldots,Y_{t_k,m_k(a_k)} ). \end{align} We modify the probability space once again, using Skorokhod's representation theorem, so that this convergence holds almost surely. When $n$ is large enough, for $k,k'$ such that $v_{n,k} \prec v_{n,k'}$ we know that $v_{n,k} j_p \preceq v_{n,k'}$ and $u_{t_k}^{t_{k'}}=m_k(p)$ for some $p\in \{1,2,\dots, a_k\}$, so we have $Z_{v_{n,k}}^{v_{n,k'}}= \ell_{v_{n,k}}(j_p)$ as well as $Z_{t_k}^{t_{k'}}=Y_{t_k,m_k(p)}$. Plugging this back into \eqref{eq:coupling_glue}, we obtain the convergence \begin{align*} \phi_{v_{n,k}}(Z_{v_{n,k}}^{v_{n,k'}}) \rightarrow Z_{t_k}^{t_{k'}}.
\end{align*} \emph{Step 4, convergence of the decorated trees:} Now, construct the sequence of decorated trees $T_n^\dec$ from $T_n$ and the decorations \\ $(B_{v_{n,k}},\rho_{v_{n,k}},d_{v_{n,k}},\ell_{v_{n,k}},\nu_{v_{n,k}})_{k\geq 1}$. Similarly, construct the tree $\mathcal{T}_\alpha^\dec$ from $\mathcal{T}_\alpha$ and the family of decorations $(\mathcal{B}_{t_k},\rho_{t_k},d_{t_k},\mu_{t_k},\nu_{t_k})_{k\geq 1}$. Denote the projections to the quotients by $\mathbf{p}_n^\dec$ and $\mathbf{p}^\dec$ respectively. We claim that this coupling of trees, decorations and gluing points guarantees that the convergence \begin{align*} (T_n^\dec,b_n^{-\gamma}d_n^\dec) \to (\mathcal{T}_\alpha^\dec, d) \end{align*} holds in probability for the Gromov--Hausdorff topology. Introduce \begin{align} r_n^\delta := b_n^{-\gamma} \cdot\sup_{v\in T_n} \left\lbrace\sum_{w\preceq v} \diam(B_w) \ind{\out{T_n}(w)\leq \delta b_n}\right\rbrace, \end{align} and also \begin{align}\label{def:r delta} r^\delta := \sup_{s\in \intervalleff{0}{1}} \left\lbrace\sum_{t\prec s} \Delta_t^\gamma \diam(\mathcal{B}_t) \ind{\Delta_t\leq \delta}\right\rbrace. \end{align} It holds, by condition B1 and Lemma~\ref{l:smalldec}, that \begin{align} \lim_{\delta \to 0} \limsup_{n\rightarrow\infty} r_n^\delta = 0 \quad \text{and} \quad \lim_{\delta\rightarrow 0} r^\delta =0 \end{align} in probability. Let us consider the subset $\cT_\alpha^{\dec,\delta}\subset \cT_\alpha^\dec$, which consists only of the blocks corresponding to jumps larger than $\delta$, i.e. \begin{align} \cT_\alpha^{\dec,\delta} = \bigsqcup_{1\leq k \leq N(\delta)}\mathbf{p}^\dec \left(\mathcal{B}_{t_k}\right), \end{align} endowed with the induced metric $d$. From the construction of $d$ it is clear that we can bound their Hausdorff distance \begin{align}\label{eq:T alpha well approximated by large jumps} \mathrm{d_H}(\cT_\alpha^{\dec,\delta},\cT_\alpha^\dec) \leq 2 r^\delta. \end{align} We proceed similarly with the discrete decorated trees.
We introduce $T_n^{\dec,\delta} \subset T_n^\dec$ endowed with the induced metric from $b_n^{-\gamma} \cdot d_n^\dec$, where \begin{align} T_n^{\dec,\delta}=\bigsqcup_{1\leq k \leq N(\delta)} \mathbf{p}_n^\dec\left(B_{v_{n,k}}\right). \end{align} We have, for the distance $b_n^{-\gamma} \cdot d_n^\dec$, \begin{align} \mathrm{d_H}(T_n^{\dec,\delta},T_n^\dec) \leq 2 r_n^\delta . \end{align} We introduce the following auxiliary object \begin{align*} \widehat{T}_n^{\dec,\delta}=\bigsqcup_{1\leq k \leq N(\delta)} B_{v_{n,k}}, \end{align*} which we endow with the distance $\hat{d}$ defined as \begin{align*} \hat{d}(x,y)= b_n^{-\gamma}\cdot d_n^\dec(\mathbf{p}^\dec_n(x),\mathbf{p}^\dec_n(y))+\delta\cdot \ind{x,y \text{ don't belong to the same block}}. \end{align*} It is easy to check that $\hat{d}$ is a distance on $\widehat{T}_n^{\dec,\delta}$ and that the projection $\mathbf{p}^\dec_n$ is a $\delta$-isometry, so that \begin{align*} \mathrm{d_{GH}}\left((\widehat{T}_n^{\dec,\delta}, \hat{d}),(T_n^{\dec,\delta},b_n^{-\gamma} \cdot d_n^\dec)\right) \leq \delta. \end{align*} The role of this auxiliary object is purely technical: it allows us to work with a metric space that is very close to $T_n^\dec$ but in which no two vertices of different blocks are identified. This allows us to define a function from $\widehat{T}_n^{\dec,\delta}$ to $\cT_\alpha^{\dec,\delta}$ by simply patching together the almost isometries that we have on the blocks. \emph{Step 5, constructing an almost isometry:} Now, we construct a function from $\widehat{T}_n^{\dec,\delta}$ that we will show is an $\epsilon$-isometry for some small $\epsilon$ when $n$ is large enough. Recall the $\epsilon_{v_{n,k}}$-isometries $\phi_{v_{n,k}}$ from \eqref{eq:coupling_glue}.
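The $\delta$-penalty in the definition of $\hat d$ is what prevents gluing points of different blocks from being identified. A minimal sketch, in illustrative Python, with a toy base distance standing in for $b_n^{-\gamma}d_n^\dec$ composed with the projection; all names are hypothetical:

```python
# Hypothetical sketch of the delta-penalised distance d-hat: points are
# (block_id, label) pairs, base_d plays the role of the quotient distance,
# and delta > 0 keeps points of distinct blocks separated.

def d_hat(x, y, base_d, delta):
    """Quotient distance plus a delta penalty across different blocks."""
    return base_d(x, y) + (delta if x[0] != y[0] else 0.0)

# Toy base distance: two half-lines glued at their origins, so the quotient
# distance between the two gluing points (label 0 in each block) is 0.
def base_d(x, y):
    if x == y:
        return 0.0
    return abs(x[1] - y[1]) if x[0] == y[0] else abs(x[1]) + abs(y[1])

# The identified gluing points are now at distance exactly delta, so no two
# points of different blocks sit at distance 0 in d-hat.
assert d_hat(("B1", 0.0), ("B2", 0.0), base_d, 0.1) == 0.1
assert d_hat(("B1", 0.0), ("B1", 0.0), base_d, 0.1) == 0.0
```

This is why $\mathbf{p}^\dec_n$ is a $\delta$-isometry rather than an isometry: distances are distorted by at most the added penalty $\delta$.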
We define the function $\phi$ from $\widehat{T}_n^{\dec,\delta}$ to $\cT_\alpha^{\dec}$ as the function such that for any $1\leq k \leq N(\delta)$ and any $x\in B_{v_{n,k}}$ we have \begin{align*} \phi(x):= \mathbf{p}^\dec(\phi_{v_{n,k}} (x)). \end{align*} Now let us show that for $n$ large enough $\phi$ is an $\epsilon$-isometry for \begin{align*} \epsilon=3r^\delta+3 r^{\delta}_{n}+ 3\delta. \end{align*} First, from \eqref{eq:T alpha well approximated by large jumps} and the definition of $\phi$ from the $\phi_{v_{n,k}}$, it is clear that $\phi$ satisfies the almost surjectivity condition for that value of $\epsilon$. Then we have to check the almost isometry condition. For that, let us take $n$ large enough so that on the realization that we consider, the genealogical order on $(v_{n,1},v_{n,2},\dots, v_{n,N(\delta)})$ in $T_n$ coincides with that on $(t_{1},t_{2},\dots, t_{N(\delta)})$ defined by \eqref{eq:def genealogical order continuous tree}. Now for $k,k'\leq N(\delta)$ for which $t_k \prec t_{k'}$ (and equivalently $v_{n,k} \prec v_{n,k'}$) we have \begin{align*} \underset{1\leq i\leq N(\delta)}{\sum_{v_{n,k}\prec v_{n,i}\prec v_{n,k'}}} \mathbf d_{v_{n,i}}\left(\rho_{v_{n,i}},Z_{v_{n,i}}^{v_{n,k'}}\right) &\leq \sum_{w\in T_n: v_{n,k} \prec w \prec v_{n,k'}} \mathbf d_w\left(\rho_w,Z_w^{v_{n,k'}}\right) \leq \underset{1\leq i\leq N(\delta)}{\sum_{v_{n,k}\prec v_{n,i}\prec v_{n,k'}}} \mathbf d_{v_{n,i}}\left(\rho_{v_{n,i}},Z_{v_{n,i}}^{v_{n,k'}}\right) + r_n^\delta \end{align*} and similarly \begin{align*} \underset{1\leq i\leq N(\delta)}{\sum_{t_{k}\prec t_{i}\prec t_{k'}}} \delta_{t_{i}}\left(\rho_{t_{i}},Z_{t_{i}}^{t_{k'}}\right) &\leq \sum_{w\in \mathbb{B}: t_k \prec w \prec t_{k'}} \delta_w\left(\rho_w,Z_w^{t_{k'}}\right) \leq \underset{1\leq i\leq N(\delta)}{\sum_{t_{k}\prec t_{i}\prec t_{k'}}} \delta_{t_{i}}\left(\rho_{t_{i}},Z_{t_{i}}^{t_{k'}}\right) + r^\delta \end{align*} and so we have \begin{align*} &|\sum_{w\in T_n: v_{n,k} \prec w \prec
v_{n,k'}} \mathbf d_w\left(\rho_w,Z_w^{v_{n,k'}}\right)- \sum_{w\in \mathbb{B}: t_k \prec w \prec t_{k'}} \delta_w\left(\rho_w,Z_w^{t_{k'}}\right) | \\ &\leq \sum_{i: v_{n,k} \prec v_{n,i} \prec v_{n,k'}} |\mathbf d_{v_{n,i}}\left(\rho_{v_{n,i}},Z_{v_{n,i}}^{v_{n,k'}}\right)- \delta_{t_i}\left(\rho_{t_i},Z_{t_i}^{t_{k'}}\right) | + r^\delta + r_n^\delta\\ &\leq \underset{\epsilon_n}{\underbrace{\sum_{k,k',i=1}^{N(\delta)} |\mathbf d_{v_{n,i}}\left(\rho_{v_{n,i}},Z_{v_{n,i}}^{v_{n,k'}}\right)- \delta_{t_i}\left(\rho_{t_i},Z_{t_i}^{t_{k'}}\right) |}} + r^\delta + r_n^\delta \end{align*} and by \eqref{eq:convergence block unk to block vk}, the term $\epsilon_n$ in the above display tends to $0$ for a fixed $\delta$ as $n$ tends to infinity. In particular, it is eventually smaller than $\delta$. Now let us take two points $x,x'\in \widehat{T}_n^{\dec,\delta}$, where $x\in B_{v_{n,k}}$ and $x'\in B_{v_{n,k'}}$, and let us study the different cases. \\ $\bullet$ Case 1: assume that $k=k'$; then \begin{align} d(\phi(x),\phi(x')) = \delta_{t_k}(\phi_{v_{n,k}}(x),\phi_{v_{n,k}}(x')) \quad \text{and} \quad \hat d(x,x') =\mathbf d_{v_{n,k}}(x,x'). \end{align} We then have $|d(\phi(x),\phi(x')) - \hat d(x,x')|\leq \epsilon_{v_{n,k}}$, since $\phi_{v_{n,k}}$ is an $\epsilon_{v_{n,k}}$-isometry. For a fixed $\delta$, this quantity becomes smaller than $\delta$ for $n$ large enough. Since $k\leq N(\delta) < \infty$ a.s., this holds uniformly in $k$.
$\bullet$ Case 2: assume that $t_k\prec t_{k'}$, then \begin{align*} d(\phi(x),\phi(x')) &= \delta_{t_{k}}(\phi(x),Z_{t_k}^{t_{k'}})+ \sum_{w\in \mathbb{B}: t_k \prec w \prec t_{k'}}\delta_w\left(\rho_w,Z_w^{t_{k'}}\right) + \delta_{t_{k'}}(\rho_{t_{k'}},\phi(x'))\\ &= \delta_{t_{k}}(\phi_{v_{n,k}}(x),Z_{t_k}^{t_{k'}})+ \sum_{w\in \mathbb{B}: t_k \prec w \prec t_{k'}}\delta_w\left(\rho_w,Z_w^{t_{k'}}\right) + \delta_{t_{k'}}(\rho_{t_{k'}},\phi_{v_{n,k'}}(x')) \end{align*} and \begin{align*} \hat d(x,x') = \mathbf d_{v_{n,k}}(x,Z_{v_{n,k}}^{v_{n,k'}})+ \sum_{w\in T_n: v_{n,k} \prec w \prec v_{n,k'}}\mathbf d_w\left(\rho_w,Z_w^{v_{n,k'}}\right) + \mathbf d_{v_{n,k'}}(\rho_{v_{n,k'}},x') + \delta. \end{align*} Hence we have \begin{align*} |d(\phi(x),\phi(x')) - \hat d(x,x')| \leq |\delta_{t_{k}}(\phi(x),Z_{t_k}^{t_{k'}}) - \mathbf d_{v_{n,k}}(x,Z_{v_{n,k}}^{v_{n,k'}}) |\\ + |\sum_{w\in \mathbb{B}: t_k \prec w \prec t_{k'}}\delta_w\left(\rho_w,Z_w^{t_{k'}}\right) - \sum_{w\in T_n: v_{n,k} \prec w \prec v_{n,k'}}\mathbf d_w\left(\rho_w,Z_w^{v_{n,k'}}\right)|\\ + |\delta_{t_{k'}}(\rho_{t_{k'}},\phi_{v_{n,k'}}(x'))-\mathbf d_{v_{n,k'}}(\rho_{v_{n,k'}},x')| + \delta \end{align*} and so, upper-bounding each term, the last display is eventually smaller than $\epsilon$ as $n\rightarrow \infty$. \\ $\bullet$ Case 3: assume that $t_k\npreceq t_{k'}$ and $ t_{k'}\npreceq t_k$. Then, we can write \begin{align} d(\phi(x),\phi(x'))= \delta_{t_k \wedge t_{k'}} (Z_{t_k \wedge t_{k'}}^{t_k},Z_{t_k \wedge t_{k'}}^{t_{k'}}) + \sum_{t_k \wedge t_{k'} \prec w \prec t_{k'}}\delta_{w}(\rho_{w},Z_w^{t_{k'}})+\delta_{t_{k'}}(\rho_{t_{k'}},\phi(x'))\\ +\sum_{t_k \wedge t_{k'} \prec w\prec t_{k}}\delta_{w}(\rho_{w},Z_w^{t_{k}})+\delta_{t_k}(\rho_{t_{k}}, \phi(x)), \end{align} in the same fashion as before, and a similar expression can be written down for the discrete object.
Now, we distinguish two sub-cases: either $t_k \wedge t_{k'}=t_j$ for some $j\in \{1,\dots,N(\delta)\}$, in which case we can write \begin{align*} |\delta_{t_k \wedge t_{k'}} (Z_{t_k \wedge t_{k'}}^{t_k},Z_{t_k \wedge t_{k'}}^{t_{k'}}) - \mathbf d_{v_{n,k} \wedge v_{n,k'}} (Z_{v_{n,k} \wedge v_{n,k'}}^{v_{n,k}},Z_{v_{n,k} \wedge v_{n,k'}}^{v_{n,k'}})|\leq 2\epsilon_n, \end{align*} which tends to $0$ as $n\rightarrow \infty$. Otherwise, it means that $\Delta_{t_k \wedge t_{k'}}\leq \delta$, and so we can upper-bound the term in the last display by $r^\delta + r_n^\delta$. For the other terms, we can reason as in the previous case and get \begin{align} |\hat d(x,x') - d(\phi(x),\phi(x'))|\leq 3r^\delta+4\epsilon_n+3r^{\delta}_{n}+2\delta+ \epsilon_{v_{n,k}} + \epsilon_{v_{n,k'}}, \end{align} which is smaller than $\epsilon$ for $n$ large enough. \paragraph{Adding the measures.} We sketch here a modification of the proof above to improve the convergence from the Gromov--Hausdorff to the Gromov--Hausdorff--Prokhorov topology. \medskip \emph{Step 1, the total mass converges:} First, remark that under the assumptions of the theorem with $\beta\leq \alpha$, the total mass $\nu_n^\dec(T_n^\dec)$ of the measure carried on $T_n^\dec$ satisfies \begin{align*} a_n^{-1} \cdot \nu_n^\dec(T_n^\dec) \underset{n\rightarrow \infty}{\rightarrow} 1. \end{align*} On the other hand, if $\beta>\alpha$, then using the coupling defined in the proof above, we claim that we have \begin{align}\label{eq:convergence total mass nu n} b_n^{-\beta}\cdot \nu_n^\dec(T_n^\dec)=b_n^{-\beta}\cdot\sum_{i=1}^{|T_n|} \nu_{v_i}(B_{v_i}) \underset{n\rightarrow \infty}{\rightarrow} \sum_{0\leq s\leq 1}\Delta_s^\beta \cdot \nu_s(\mathcal{B}_s). \end{align} Let us prove the above display.
First, using \eqref{eq:convergence size and position of jumps} and \eqref{eq:convergence block unk to block vk}, we can write for all $k\geq 1$, \begin{align*} b_n^{-\beta}\cdot \nu_{v_{n,k}}(B_{v_{n,k}}) \underset{n\rightarrow \infty}{\rightarrow} \Delta_{t_k}^\beta\cdot \nu_{t_k}(\cB_{t_k}). \end{align*} This ensures that for any given $\delta>0$ we have \begin{align*} b_n^{-\beta}\cdot \sum_{k=1}^\infty \nu_{v_{n,k}}(B_{v_{n,k}}) \ind{\out{}(v_{n,k})> \delta b_n}\rightarrow\sum_{0\leq s\leq 1}\Delta_s^\beta \cdot \nu_s(\mathcal{B}_s)\ind{\Delta_s > \delta}, \end{align*} and the quantity on the right-hand side converges to $\sum_{0\leq s\leq 1}\Delta_s^\beta \cdot \nu_s(\mathcal{B}_s)$ as $\delta\rightarrow 0$. From there, we can construct a sequence $\delta_n\rightarrow 0$ such that the following convergence holds on an event of probability $1$: \begin{align*} b_n^{-\beta}\cdot \sum_{k=1}^\infty \nu_{v_{n,k}}(B_{v_{n,k}}) \ind{\out{}(v_{n,k})\geq \delta_n b_n}\rightarrow\sum_{0\leq s\leq 1}\Delta_s^\beta \cdot \nu_s(\mathcal{B}_s). \end{align*} Now, using Condition~B\ref{cond:small decorations dont contribute to mass} we get that \begin{align*} R(n,\delta)= b_n^{-\beta}\cdot \sum_{i=1}^{|T_n|}\nu_{v_i}(B_{v_i}) \ind{\out{}(v_i)\leq \delta b_n} \end{align*} tends to $0$ in probability as $\delta \rightarrow 0$, uniformly for $n$ large enough. This ensures that for our sequence $\delta_n$ tending to $0$ we have the convergence $R(n,\delta_n)\underset{n \rightarrow \infty}{\rightarrow} 0$ in probability. Putting things together, we get that \eqref{eq:convergence total mass nu n} holds in probability. \medskip \emph{Step 2, Convergence of the sampled points:} Now that we know that the total mass of $\nu_n^\dec$, appropriately normalized, converges to that of $\upsilon_\beta^\dec$, we consider a point $Y_n$ sampled under a normalized version of $\nu_n^\dec$ and $\Upsilon_\beta$ sampled under a normalized version of $\upsilon_\beta^\dec$.
In order to prove that $(T_n^\dec,d,\nu_n^\dec)$ converges to $(\cT_\alpha^\dec,d,\upsilon_\beta^\dec)$ in distribution, we will prove that the pointed spaces $(T_n^\dec,d,Y_n)$ converge to $(\cT_\alpha^\dec,d,\Upsilon_\beta)$ in distribution for the pointed Gromov--Hausdorff topology. \medskip \emph{Step 3, Sampling a uniform point:} To construct $Y_n$ on $T_n^\dec$, we first sample a point $X_n$ on $\intervalleff{0}{1}$. Then we introduce \begin{align} K_n:=\inf \enstq{k\geq 1}{M_k\geq X_n \cdot M_{|T_n|}} \end{align} and then sample a point on $B_{v_{K_n}}$ using a normalized version of the measure $\nu_{v_{K_n}}$. For $\beta\leq \alpha$, in the continuous setting, we construct a random point $\Upsilon_\beta$ on $\cT_\alpha^\dec$ distributed as $\upsilon_\beta^\dec$ by defining it as $\mathbf{p}^\dec(X)$ where $X\sim \mathrm{Unif}\intervalleff{0}{1}$. For $\beta>\alpha$ we can define the following random point $\Upsilon_\beta$ on $\cT_\alpha^\dec$ in three steps: sample $X\sim \mathrm{Unif}\intervalleff{0}{1}$ and then set \begin{align*} Z:=\inf \enstq{t\in \intervalleff{0}{1}}{\sum_{0\leq s\leq t}\Delta_s^\beta \nu_s(\mathcal{B}_s)\geq X\cdot \sum_{0\leq s\leq 1}\Delta_s^\beta\nu_s(\mathcal{B}_s)} \end{align*} and then sample a point on $\mathcal{B}_{Z}$ using a normalized version of the measure $\nu_{Z}$. (Note that in this case, $Z$ is a jump time of $X^{\mathrm{exc},(\alpha)}$ almost surely.) This construction ensures that conditionally on $X^{\mathrm{exc},(\alpha)}$ and $(\mathcal{B}_s)_{s\in \Bb}$, for all $t\in\intervalleff{0}{1}$ we have \begin{align*} \Pp{Z=t}=\frac{\Delta_t^\beta \cdot \nu_t(\mathcal{B}_t)}{\sum_{0\leq s\leq 1}\Delta_s^\beta \cdot \nu_s(\mathcal{B}_s)}. \end{align*} \emph{Step 4, Coupling $Y_n$ and $\Upsilon_\beta$:} In the two cases $\beta \leq \alpha$ and $\beta >\alpha$ respectively, we couple the constructions of $Y_n$ and $\Upsilon_\beta$ by using the same uniform random variable $X_n=X$. Now let us consider the two cases separately.
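The inverse-CDF rule used in Step 3 to pick the block index ($K_n$ in the discrete setting, the jump time $Z$ in the continuum one) can be sketched as follows; the function name and the toy mass sequence are illustrative, not part of the construction:

```python
from itertools import accumulate

def sample_block_index(masses, x):
    """Return the smallest k with M_k >= x * M_total, where M_k is the
    cumulative mass of the first k blocks (inverse-CDF sampling)."""
    cum = list(accumulate(masses))
    total = cum[-1]
    for k, m in enumerate(cum, start=1):
        if m >= x * total:
            return k

# Coupling: the SAME uniform x is fed to the discrete and continuum
# versions of this rule, which is what makes K_n and Z agree eventually.
masses = [0.5, 0.2, 0.2, 0.1]   # stand-ins for the block masses nu_{v_k}(B_{v_k})
assert sample_block_index(masses, 0.65) == 2   # cumulative masses: 0.5, 0.7, 0.9, 1.0
```

A point is then drawn inside the selected block under its normalized measure, exactly as in the two constructions above.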
\emph{Case $\beta \leq \alpha$.} Consider the convergence \eqref{eq:convergence depth-first mass}. Since this convergence is in probability towards a constant, by Slutsky's lemma it holds jointly with the other convergences used above. Hence, using Skorokhod's representation theorem again, we can suppose that this convergence is almost sure, jointly with the other ones. Then, we can essentially go through the same proof as above and consider an extra decoration corresponding to $B_{v_{K_n}}$, which contains the random point $Y_n$. \emph{Case $\beta > \alpha$.} In that case, thanks to the reasoning of Step 1, we have the following convergence for the Skorokhod topology \begin{align*} b_n^{-\beta}\cdot\left(\sum_{i=1}^{\lfloor t |T_n|\rfloor} \nu_{v_i}(B_{v_i}) \right)_{0\leq t \leq 1} \underset{n\rightarrow \infty}{\rightarrow} \left(\sum_{0\leq s\leq t}\Delta_s^\beta \cdot \nu_s(\mathcal{B}_s)\right)_{0\leq t \leq 1}. \end{align*} With the definitions of $Z$ and $K_n$ respectively, using the coupling $X_n=X$, the above convergence ensures that for $n$ large enough we have $v_{K_n}=v_{n,k}$ and $Z=t_k$ for some (random) $k\geq 1$. Then again, we can essentially go through the same proof as above and consider an extra random point $Y_n$ sampled on $B_{v_{n,k}}$ and a point $\Upsilon_\beta$ sampled on $\cB_{t_{k}}$, in such a way that those points are close together. \end{proof} \section{Self-similarity properties and alternative constructions}\label{sec:self-similarity} The goal of this section is to decompose the decorated stable tree along a discrete tree, using a framework similar to that developed in Section~2.1, with the difference that the gluing will occur along the whole Ulam tree instead of a finite tree.
In particular this decomposition, which follows directly from a similar description of the $\alpha$-stable tree itself, will allow us to describe the decorated stable tree as obtained by a sort of line-breaking construction, and also to identify its law as the fixed point of some transformation. We rely on a similar construction that holds jointly for the $\alpha$-stable tree and its associated looptree, described in \cite{senizergues_growing_2022}. \textbf{In this section, we will always assume that $\beta>\alpha$ and that $\Ec{\nu(\cB)}$ and $\Ec{\diam(\cB)}$ are finite.} \subsection{Introducing the spine and decorated spine distributions} \begin{figure}\label{fig:spine and decorated spine} \centering \begin{tabular}{ccc} \includegraphics[height=6cm,page=1]{spineanddecoratedspine} & & \includegraphics[height=6cm,page=2]{spineanddecoratedspine} \end{tabular} \caption{On the left, the space $\cS^{\dec}$. On the right, the corresponding space $\cS$, which is just a segment with atoms along it.} \end{figure} We construct two related random metric spaces, $\cS$ and $\cS^{\dec}$, which will be the building blocks used to construct $\cT_\alpha$ and $\cT^{\dec}_\alpha$ respectively. This construction depends on the parameters $\alpha\in(1,2)$, $\gamma\in(\alpha-1,1]$ and $\beta>\alpha$. First, let \begin{itemize} \item $(Y_k)_{k\geq 1}$ be a sequence of i.i.d. uniform random variables in $\intervalleff{0}{1}$, \item $(P_k)_{k\geq 1}\sim \GEM(\alpha-1,\alpha-1)$, and $L$ its $(\alpha-1)$-diversity, \item $(\cB_k,d_k,\rho_k,\mu_k,\nu_k)$ be i.i.d. random metric spaces with the same law as $(\cB,d,\rho,\mu,\nu)$, \item for every $k\geq 1$, $Z_k$ be a random point taken on $\cB_k$ under $\mu_k$. \end{itemize} We refer for example to \cite[Appendix A.2]{senizergues_growing_2022} for the definitions of the distributions in the second bullet point.
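For concreteness, a $\GEM(\xi,\theta)$ sequence such as $(P_k)_{k\geq 1}$ above can be simulated by stick breaking. The sketch below assumes the standard two-parameter convention $W_k\sim\mathrm{Beta}(1-\xi,\,\theta+k\xi)$ and is only illustrative:

```python
import random

def sample_gem(xi, theta, n, rng=random):
    """First n weights of a GEM(xi, theta) sequence via stick breaking,
    assuming the usual convention W_k ~ Beta(1 - xi, theta + k*xi)."""
    weights, stick = [], 1.0
    for k in range(1, n + 1):
        w = rng.betavariate(1.0 - xi, theta + k * xi)
        weights.append(stick * w)   # P_k = W_k * prod_{j<k} (1 - W_j)
        stick *= 1.0 - w            # mass remaining on the stick
    return weights

# For the spine construction: xi = theta = alpha - 1 with alpha in (1, 2).
alpha = 1.5
P = sample_gem(alpha - 1, alpha - 1, 1000)
assert all(p > 0 for p in P) and sum(P) < 1.0   # some stick mass always remains
```

The $(\alpha-1)$-diversity $L$ is a further almost-sure limit functional of the sequence and is not simulated here.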
We first define the random spine \[\cS=(S,d_S,\rho_S,\mu_S)\] as the segment $S=\intervalleff{0}{L}$, rooted at $\rho_S=0$ and endowed with the probability measure $\mu_S=\sum_{k\geq1}P_k \delta_{L\cdot Y_k}$. In order to construct $\cS^{\dec}$ we are going to, informally, replace every atom of $\mu_S$ with a metric space scaled by an amount that corresponds to the mass of that atom. To do so, we consider the disjoint union \begin{equation} \bigsqcup_{i=1}^\infty \cB_i, \end{equation} which we endow with the distance $d_{S^{\dec}}$ defined as \begin{align*} d_{S^{\dec}}(x,y) &= \begin{cases} P_i^\gamma d_i(x,y) & \text{if} \quad x,y\in \cB_i,\\ P_i^\gamma d_i(x,Z_i) + \sum_{k: Y_i<Y_k<Y_j}P_k^\gamma d_k(\rho_k,Z_k)+P_j^\gamma d_j(\rho_j,y) & \text{if} \quad x\in \cB_i,\ y\in \cB_j,\ Y_i<Y_j. \end{cases} \end{align*} Then $\cS^{\dec}$ is defined as the completion of $\bigsqcup_{i=1}^\infty \cB_i$ equipped with this distance. Its root $\rho$ can be obtained as the limit $\rho=\lim_{i\rightarrow\infty}\rho_{\sigma_i}$ for any sequence $(\sigma_i)_{i\geq 1}$ for which $Y_{\sigma_i}\rightarrow 0$. Essentially, $\cS^{\dec}$ looks like a skewer of decorations that are arranged along a line, in uniform random order. We can furthermore endow $\cS^{\dec}$ with a probability measure $\mu_{S^{\dec}}=\sum_{k=1}^{\infty}P_k \mu_k$. If $\beta>\alpha$ and $\Ec{\nu(\cB)}$ is finite, we also define the measure $\nu_{S^{\dec}}=\sum_{k=1}^{\infty}P_k^\beta \nu_k$. In the end, we have defined \begin{align*} \cS^{\dec}=(S^{\dec},d_{S^{\dec}},\rho_{S^{\dec}},\mu_{S^{\dec}},\nu_{S^{\dec}}). \end{align*} It is important to note that, conditionally on $\cS$, constructing $\cS^{\dec}$ consists in replacing every atom of $\mu_S$ by an independent copy of a random metric space, appropriately rescaled. This exactly corresponds to what we are trying to do with the entire tree $\cT_\alpha$.
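The formula for $d_{S^{\dec}}$ can be sketched in code: to go from $x\in\cB_i$ to $y\in\cB_j$, one exits $\cB_i$ through its gluing point $Z_i$, crosses every intermediate block from its root $\rho_k$ to its gluing point $Z_k$ in the order of the labels $Y$, and enters $\cB_j$ at its root. The following toy Python fragment, with made-up block data, is only meant to illustrate the formula:

```python
# Illustrative sketch of the spine distance d_{S^dec}. Each block carries
# (P, Y, d, rho, Z); all the concrete values below are made up.

def spine_distance(i, x, j, y, blocks, gamma):
    """d_{S^dec}(x, y) for x in block i and y in block j, assuming Y_i <= Y_j.
    blocks[k] is a dict with keys P, Y, d (a metric), rho (root), Z (gluing point)."""
    bi, bj = blocks[i], blocks[j]
    if i == j:
        return bi["P"] ** gamma * bi["d"](x, y)
    crossing = sum(
        b["P"] ** gamma * b["d"](b["rho"], b["Z"])
        for b in blocks.values()
        if bi["Y"] < b["Y"] < bj["Y"]        # intermediate blocks only
    )
    return (bi["P"] ** gamma * bi["d"](x, bi["Z"])
            + crossing
            + bj["P"] ** gamma * bj["d"](bj["rho"], y))

# Toy example: three unit-segment blocks with the absolute-value distance.
seg = lambda a, b: abs(a - b)
blocks = {
    1: dict(P=0.5, Y=0.2, d=seg, rho=0.0, Z=1.0),
    2: dict(P=0.3, Y=0.5, d=seg, rho=0.0, Z=1.0),
    3: dict(P=0.2, Y=0.8, d=seg, rho=0.0, Z=1.0),
}
# Root of block 1 to the point 0.5 of block 3, with gamma = 1:
# 0.5*1 (exit block 1) + 0.3*1 (cross block 2) + 0.2*0.5 (enter block 3).
assert spine_distance(1, 0.0, 3, 0.5, blocks, 1.0) == 0.5 + 0.3 + 0.1
```

In the actual construction there are infinitely many blocks; the crossing sum is then an infinite series, whose finiteness is the object of Lemma~\ref{lem:moment diam block implis moment diam spine}.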
Up to now, it is not clear that the last display is well defined, because it is not clear whether the function $d_{S^{\dec}}$ only takes finite values. The next lemma ensures that this is the case as long as $\diam(\cB)$ has a finite first moment. Introduce the quantity $R=\sum_{i\geq 1} P_i^\gamma \diam(\cB_i)$. \begin{lemma}\label{lem:moment diam block implis moment diam spine} If $\Ec{\diam(\cB)^p}<\infty$ for some $p\geq 1$, then $\Ec{R^p}$ is finite, so $\cS^{\dec}$ is almost surely a compact metric space with $\Ec{\diam(\cS^{\dec})^p}<\infty$. \end{lemma} In order to get Lemma~\ref{lem:moment diam block implis moment diam spine} we will apply the following lemma, with $\xi=\theta=\alpha-1$ and $Z_i=\diam(\cB_i)$. \begin{lemma} Let $(Z_i)_{i\geq 1}$ be i.i.d. positive random variables with a moment of order $p\geq 1$, let $(P_i)_{i\geq 1}$ be an independent random sequence with $\GEM(\xi,\theta)$ distribution, and let $\gamma>\xi$. Then the random variable \begin{align*} \sum_{i=1}^{\infty}P_i^\gamma Z_i \end{align*} admits moments of order $p$. \end{lemma} Note that this lemma also guarantees that the measure $\nu_{S^{\dec}}=\sum_{k=1}^{\infty}P_k^\beta \nu_k$ is almost surely a finite measure, as soon as $\Ec{\nu(\cB)}$ is finite.
\begin{proof} Suppose first that $\xi<\gamma\leq 1$ and write \begin{align*} \left(\sum_{i=1}^{\infty}P_i^\gamma Z_i\right)^p &=\left(\sum_{i=1}^{\infty}P_i^\gamma\right)^p \left(\sum_{i=1}^{\infty}\left(\frac{P_i^\gamma}{\sum_{i=1}^{\infty}P_i^\gamma}\right) Z_i\right)^p\\ &\underset{\text{convexity}}{\leq} \left(\sum_{i=1}^{\infty}P_i^\gamma\right)^p \left(\sum_{i=1}^{\infty}\left(\frac{P_i^\gamma}{\sum_{i=1}^{\infty}P_i^\gamma}\right) Z_i^p\right)\\ &\leq \left(\sum_{i=1}^{\infty}P_i^\gamma\right)^{p-1}\left(\sum_{i=1}^{\infty}P_i^\gamma Z_i^p\right). \end{align*} In the case $\gamma>1$, since all the $P_i$ are smaller than $1$, we can write \begin{align*} \left(\sum_{i=1}^{\infty}P_i^\gamma Z_i\right)^{p}&\leq \left(\sum_{i=1}^{\infty}P_i Z_i\right)^{p}\leq \left(\sum_{i=1}^{\infty}P_i Z_i^{p}\right), \end{align*} where the second inequality comes from convexity. Now, for any $\xi <\gamma\leq 1$, setting $n=\lceil p-1 \rceil$ we have \begin{align*} \Ec{\left(\sum_{i=1}^{\infty}P_i^\gamma\right)^{p-1}\left(\sum_{i=1}^{\infty}P_i^\gamma Z_i^p\right)} &\leq \Ec{\left(\sum_{i=1}^{\infty}P_i^\gamma Z_i^p\right) \cdot \left(\sum_{i=1}^{\infty}P_i^\gamma\right)^{n}}\\ &=\Ec{\sum_{(i_0,i_1,i_2,\dots,i_n)\in(\mathbb N^*)^{n+1}}P_{i_0}^\gamma\cdot Z_{i_0}^p \cdot P_{i_1}^\gamma\dots P_{i_n}^\gamma}\\ &\leq \sum_{(i_0,i_1,i_2,\dots,i_n)\in(\mathbb N^*)^{n+1}}\Ec{P_{i_1}^\gamma\dots P_{i_n}^\gamma} \cdot \Ec{Z_{i_0}^p}\\ &<\infty, \end{align*} where the finiteness of the last sum, for $\gamma>\xi$, follows from Lemma~5.4 in \cite{goldschmidt_stable_2018}. \end{proof} \subsection{Description of an object as a decorated Ulam tree}\label{subsec:description of the objet as a gluing of spines} Now that we have defined the random object $\cS^\dec$, we define some decorations of the Ulam tree that are constructed in such a way that, up to some scaling factor, all the decorations are i.i.d.\@ with the same law as $\cS^\dec$.
Gluing all those decorations along the structure of the tree, as defined in the introduction, will give us a random object $\tilde{\cT}^\dec_\alpha$ that has the same law as $\cT^\dec_\alpha$ (the proof of that fact will come later in the section). In fact, in what follows, we provide two equivalent descriptions of the same object $\tilde{\cT}^\dec_\alpha$: \begin{itemize} \item One of them is a description as a random self-similar metric space: this construction will ensure that the object that we construct is compact under the weakest possible moment assumption on the diameter of the decoration, and will also give us the Hausdorff dimension of the object. \item The other one is through an iterative gluing construction. It is the latter that we use to identify the law of $\tilde{\cT}^\dec_\alpha$ with that of $\cT^\dec_\alpha$. \end{itemize} The fact that the two constructions yield the same object is a result from \cite{senizergues_growing_2022}. \subsubsection{Decorations on the Ulam tree} We extend the framework introduced in the first section for finite trees to the entire Ulam tree, following \cite{senizergues_growing_2022}. We work with families of decorations on the Ulam tree $\mathcal D=(\mathcal{D}(v), v\in \mathbb U)$ of measured metric spaces indexed by the vertices of $\mathbb U$. Each of the decorations $\mathcal{D}(v)$ indexed by some vertex $v\in \mathbb U$ is rooted at some $\rho_v$ and carries a sequence of gluing points $(\ell_v(i))_{i\geq 1}$. To construct the associated decorated Ulam tree, we consider as before \begin{align*} \bigsqcup_{v\in\mathbb U}\mathcal{D}(v) \end{align*} and take the metric gluing of those blocks obtained from the relations $\ell_v(i)\sim \rho_{vi}$ for all $v\in\mathbb U$ and $i\ge 1$. For topological considerations, we actually consider the metric completion of the obtained object. The final result is denoted $\sG(\cD)$.
\paragraph{The completed Ulam tree.} Recall the definition of the Ulam tree $\bU=\bigcup_{n\geq 0} \mathbb N^n$ with $\mathbb N=\{1,2,\dots\}$. Introduce the set $\partial\bU=\mathbb{N}^\mathbb{N}$, whose elements we call the \emph{leaves} of the Ulam tree and view as infinite rays joining the root to infinity, and set $\overline{\bU}:=\bU\cup \partial\bU$. On this set, we have a natural genealogical order $\preceq$ defined in such a way that $v\preceq w$ if and only if $v$ is a prefix of $w$. From this order relation we can define for any $v\in\bU$ the \emph{subtree descending from $v$} as the set $T(v):=\enstq{w\in \overline{\bU}}{v\preceq w}$. The collection of sets $\{T(v),\ v\in\bU\} \sqcup \{\{v\},\ v\in\bU\}$ generates a topology over $\overline{\bU}$, which can also be generated using an appropriate ultrametric distance. Endowed with this distance, the set $\overline{\bU}$ is then a separable and complete metric space. \paragraph{Identification of the leaves.} Suppose that $\cD$ is such that $\sG(\cD)$ is compact. We can define a map \begin{equation}\label{growing:eq:def de iota} \iota_\cD:\partial \bU\rightarrow \sG(\cD), \end{equation} that maps every leaf of the Ulam--Harris tree to a point of $\sG(\cD)$. For any $\mathbf{i}=i_1i_2\dots \in\partial \bU$, the map is defined as \begin{align*} \iota_{\cD}(\mathbf{i}):= \lim_{n\rightarrow\infty} \rho_{i_1i_2\dots i_n} \in \sG(\cD). \end{align*} The fact that this map is well-defined and continuous is proved in \cite{senizergues_growing_2022}. \paragraph{Scaling of decorations.} In the rest of this section, for $\mathcal{X}=(X,d,\rho,\mu,\nu)$ a pointed metric space endowed with finite measures (as well as possibly some extra structure, such as a sequence of points) we will use the notation \begin{align*} \mathrm{Scale}(a,b,c;\mathcal{X}) = (X,a\cdot d,\rho,b \cdot \mu, c\cdot \nu), \end{align*} where the resulting object still carries any extra structure that $\mathcal{X}$ originally carried.
\subsubsection{A self-similar decoration} We introduce the following random variables: \begin{itemize} \item For every $v\in \mathbb{U}$, sample independently $(Q_{vi})_{i\geq 1}\sim \GEM(\frac{1}{\alpha},1-\frac{1}{\alpha})$ and denote by $D_v$ the $\frac{1}{\alpha}$-diversity of $(Q_{vi})_{i\geq 1}$. \item This defines, for every $v\in \mathbb{U}$, \begin{align*} \overline{Q_v}=\prod_{w\preceq v}Q_w \qquad \text{and} \qquad w_v=(\overline{Q_v})^\frac{1}{\alpha}\cdot D_v. \end{align*} This in fact also defines a probability measure $\eta$ on $\partial \mathbb U$, the frontier of the tree, which is characterized by $\eta(T(v))=\overline{Q_v}$, for all $v\in \mathbb U$. \item For every $v\in \mathbb{U}$ we sample, independently of the rest, a space $\cS^{\dec}_v$ that has the same law as $\cS^{\dec}=(S^{\dec},d_{S^{\dec}},\rho_{S^{\dec}},\mu_{S^{\dec}},\nu_{S^{\dec}})$ and sample a sequence of points from its measure $\mu_{S^{\dec}}$. We call those points $(X_{vi})_{i\geq 1}$ and define $\ell_v:i\mapsto X_{vi}\in\cS^{\dec}_v $. We can then consider the following decorations on the Ulam tree: for $v\in\mathbb{U}$, \begin{align}\label{eq:definition Tdec from GEM} \mathcal{D}(v):= \mathrm{Scale}\left(w_v^\gamma,w_v,w_v^\beta; \cS^{\dec}_v\right). \end{align} \end{itemize} From these decorations on the Ulam tree, we can define the corresponding decorated tree (and consider its metric completion), which we write as \begin{align*} \tilde{\cT}^\dec_\alpha:= \sG(\cD). \end{align*} Note that the object defined above is not necessarily compact. Whenever the underlying block (and hence also the decorated spine, thanks to Lemma~\ref{lem:moment diam block implis moment diam spine}) has a moment of order $p>\frac{\alpha}{\gamma}$, a result of \cite[Section~4.2]{senizergues_growing_2022} ensures that the obtained metric space $\tilde{\cT}^\dec_\alpha$ is almost surely compact.
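The multiplicative cascade $\overline{Q_v}=\prod_{w\preceq v}Q_w$, which gives the masses $\eta(T(v))$, can be sketched on a finite truncation of the Ulam tree. The weights below are toy values; in the construction above, each sibling family $(Q_{vi})_{i\geq 1}$ would be an independent $\GEM(\frac{1}{\alpha},1-\frac{1}{\alpha})$ sequence:

```python
def cascade_weights(Q):
    """Given per-vertex relative weights Q[v], with vertices encoded as
    tuples of positive integers and the root as (), return the cascade
    Qbar[v] = product of Q along the ancestral line of v."""
    Qbar = {}
    for v in sorted(Q, key=len):                 # parents before children
        Qbar[v] = Q[v] * Qbar[v[:-1]] if v else Q[v]
    return Qbar

# Toy two-level truncation of the Ulam tree, with Q[()] = 1 by convention.
Q = {(): 1.0, (1,): 0.6, (2,): 0.3, (1, 1): 0.5, (1, 2): 0.25}
Qbar = cascade_weights(Q)
assert Qbar[(1, 1)] == 0.6 * 0.5
assert Qbar[(1, 2)] == 0.6 * 0.25
```

In the full construction, $w_v$ is then obtained by combining $\overline{Q_v}$ with the diversity $D_v$ of the sibling weights at $v$.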
\paragraph{The uniform measure.} Assume that the underlying block has a moment of order $p>\frac{\alpha}{\gamma}$ so that $\tilde{\cT}^\dec_\alpha$ is almost surely compact. Then the map $\iota_\cD:\partial \bU \rightarrow \sG(\cD)$ is almost surely well-defined and continuous, so we can consider the measure $(\iota_\cD)_*\eta$ on $\sG(\cD)$, the push-forward of the measure $\eta$. \paragraph{The block measure.} If $\beta>\alpha$ and $\Ec{\nu(\cB)}<\infty$, then we can check that the total mass of the $\nu$ measures has finite expectation, so it is almost surely finite. Hence we can endow $\tilde{\cT}^\dec_\alpha$ with the measure $\sum_{u}\nu_u$. \subsubsection{The iterative gluing construction for $\tilde{\cT}^\dec_\alpha$.} Before diving into the construction of $\tilde{\cT}^\dec_\alpha$, we define a family of time-inhomogeneous Markov chains called Mittag-Leffler Markov chains, first introduced by Goldschmidt and Haas \cite{goldschmidt_line_2015}, see also \cite{senizergues_growing_2022}. \paragraph{Mittag-Leffler Markov chains.} Let $0<\eta<1$ and $\theta>-\eta$. The generalized Mittag-Leffler $\mathrm{ML}(\eta, \theta)$ distribution has $p$-th moment \begin{align}\label{growing:eq:moments mittag-leffler} \frac{\Gamma(\theta) \Gamma(\theta/\eta + p)}{\Gamma(\theta/\eta) \Gamma(\theta + p \eta)}=\frac{\Gamma(\theta+1) \Gamma(\theta/\eta + p+1)}{\Gamma(\theta/\eta+1) \Gamma(\theta + p \eta+1)}, \end{align} and the collection of $p$-th moments for $p \in \N$ uniquely characterizes this distribution.
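The equality of the two Gamma-ratio expressions in \eqref{growing:eq:moments mittag-leffler} follows from $\Gamma(z+1)=z\,\Gamma(z)$; it can also be checked numerically, as in the quick sketch below (parameter values are arbitrary):

```python
from math import gamma, isclose

def ml_moment(eta, theta, p):
    """p-th moment of the generalized Mittag-Leffler ML(eta, theta) law."""
    return (gamma(theta) * gamma(theta / eta + p)
            / (gamma(theta / eta) * gamma(theta + p * eta)))

def ml_moment_shifted(eta, theta, p):
    """Same moment via the shifted Gamma ratios, which avoids evaluating
    Gamma near nonpositive arguments when theta is small."""
    return (gamma(theta + 1) * gamma(theta / eta + p + 1)
            / (gamma(theta / eta + 1) * gamma(theta + p * eta + 1)))

# The two expressions agree wherever both are defined.
for p in range(1, 6):
    assert isclose(ml_moment(0.4, 0.7, p), ml_moment_shifted(0.4, 0.7, p))
```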
A Markov chain $(\mathsf M_n)_{n\geq 1}$ has the distribution $\MLMC(\eta,\theta)$ if for all $n\geq 1$, \begin{equation*} \mathsf M_n\sim \mathrm{ML}\left(\eta,\theta+n-1\right), \end{equation*} and its transition probabilities are characterized by the following equality in law \begin{equation*} \left(\mathsf M_n,\mathsf M_{n+1}\right)=\left(\beta_n\cdot \mathsf M_{n+1},\mathsf M_{n+1}\right), \end{equation*} where $\beta_n\sim \mathrm{Beta}\left(\frac{\theta+n-1}{\eta}+1,\frac{1}{\eta}-1\right)$ is independent of $\mathsf M_{n+1}$. \paragraph{Construction of $\tilde{\cT}^\dec_\alpha$.} We can now express our second construction of $\tilde{\cT}^\dec_\alpha$. We start with a sequence $(\mathsf{M}_k)_{k\geq 1}\sim \mathrm{MLMC}\left(\frac{1}{\alpha},1-\frac{1}{\alpha}\right)$. Then we define the sequence $(\mathsf{m}_k)_{k\geq 1}=(\mathsf{M}_k-\mathsf{M}_{k-1})_{k\geq 1}$ of increments of that chain, with the convention $\mathsf{M}_0=0$. From there, conditionally on $(\mathsf{M}_k)_{k\geq 1}$, we define an independent sequence $(\mathcal{Y}_k)_{k\geq 1}$ of metric spaces such that \begin{align*} \mathcal{Y}_k \overset{(d)}{=} \mathrm{Scale}\left(\mathsf{m}_k^\gamma, \mathsf{m}_k,\mathsf{m}_k^\beta; \cS^\dec\right). \end{align*} Then, we define a sequence of increasing subtrees of the Ulam tree as follows: we let $\mathtt{T}_1$ be the tree containing only one vertex $v_1=\emptyset$. Then, if $\mathtt{T}_n$ is constructed, we sample a random number $K_{n+1}$ in $\{1,\dots,n\}$ such that, conditionally on all the rest, $\Pp{K_{n+1}=k}\propto\mathsf{m}_k$. Then, we add the next vertex $v_{n+1}$ to the tree as the right-most child of $v_{K_{n+1}}$. The sequence $(\mathtt{T}_n)_{n\geq 1}$ is said to have the distribution of a weighted recursive tree with sequence of weights $(\mathsf{m}_k)_{k\geq 1}$. A property of this sequence of trees is that $\enstq{v_k}{k\geq1}=\mathbb U$ almost surely. Hence for any $v\in \mathbb U$ we denote by $k_v$ the unique $k$ such that $v_k=v$.
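The growth rule of the weighted recursive tree (attach $v_{n+1}$ as the right-most child of a vertex chosen with probability proportional to its weight) can be sketched as follows, with a simple stand-in weight sequence in place of the MLMC increments $(\mathsf m_k)_{k\geq 1}$:

```python
import random

def grow_wrt(weights, n, rng=random):
    """Grow the first n vertices of a weighted recursive tree: vertex k+1
    attaches to one of the previous vertices, chosen with probability
    proportional to its weight, as its right-most child (Ulam labelling)."""
    parent = {1: None}
    children = {1: 0}       # number of children attached so far
    label = {1: ()}         # Ulam-tree address of each vertex, root = ()
    for new in range(2, n + 1):
        k = rng.choices(range(1, new), weights=weights[: new - 1])[0]
        children[k] += 1
        children[new] = 0
        parent[new] = k
        label[new] = label[k] + (children[k],)   # right-most child of v_k
    return parent, label

rng = random.Random(0)
m = [1.0 / k for k in range(1, 100)]   # illustrative stand-in for (m_k)
parent, label = grow_wrt(m, 50, rng)
assert parent[2] == 1 and label[2] == (1,)   # v_2 is always a child of the root
assert all(label[parent[v]] == label[v][:-1] for v in range(2, 51))
```

With the actual MLMC increments as weights, this is the sampling mechanism behind the decorations $\widehat{\mathcal{D}}$ of the next display.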
Then we consider the decorations on the Ulam tree defined by \begin{align}\label{eq:definition Tdec from WRT} \widehat{\mathcal{D}}(v)=\mathcal{Y}_{k_v}. \end{align} In this setting, we can again define a measure $\widetilde{\eta}$ on the leaves $\partial\mathbb U$ of the Ulam tree by taking the weak limit of the uniform measure on $\{v_1,v_2,\dots v_n\}$ as $n\rightarrow\infty$, the almost sure existence of the limit being guaranteed by \cite[Theorem~1.7, Proposition~2.4]{senizergues_geometry_2021}. Then, from \cite[Proposition~3.2]{senizergues_growing_2022} we have \begin{align*} \left(\left(\mathsf{m}_{k_v}\right)_{v\in \mathbb U}, \widetilde{\eta}\right) \overset{\mathrm{(d)}}{=} \left(\left(w_{v}\right)_{v\in \mathbb U}, \eta\right). \end{align*} From that equality in distribution and the respective definitions \eqref{eq:definition Tdec from GEM} of $\mathcal{D}$ and \eqref{eq:definition Tdec from WRT} of $\widehat{\mathcal{D}}$, it is clear that those two families of decorations have the same distribution. \subsubsection{Strategy} The rest of this section is devoted to proving that the random decorated tree $\tilde{\cT}^\dec_\alpha$ that we constructed above has the same distribution as $\cT^\dec_\alpha$. For that, we are going to characterize their ``finite-dimensional marginals'' and show that they are the same for the two processes. Using the second description of $\tilde{\cT}^\dec_\alpha$, we can consider for any $k\geq 1$ the subset $\tilde{\cT}^\dec_k$ that corresponds to keeping only the blocks corresponding to $v_1,\dots, v_k$. We compare this to $\cT^\dec_k$, which is the subset of $\cT^\dec_\alpha$ spanned by $k$ uniform leaves.
\subsection{Finite-dimensional marginals of $\cT_\alpha$ and $\cT^\dec_\alpha$} \subsubsection{Approximating the decorated tree by finite dimensional marginals} For any tuple of points $x_1,x_2,\dots, x_k$ in $\intervalleff{0}{1}$, we can consider \begin{align*} \mathrm{Span}(X^{\mathrm{exc},(\alpha)};x_1,x_2,\dots,x_k)=\enstq{x\in \intervalleff{0}{1}}{x\preceq x_i, \text{ for some } i\in\{1,2,\dots, k\}}, \end{align*} using the definition of $\preceq$ derived from $X^{\mathrm{exc},(\alpha)}$. We further define \begin{align*} \mathrm{Span}(X^{\mathrm{exc},(\alpha)},(\cB_t)_{t\in \intervalleff{0}{1}};x_1,x_2,\dots,x_k)=\bigsqcup_{t\in \mathrm{Span}(X^{\mathrm{exc},(\alpha)};x_1,x_2,\dots,x_k)} \cB_t. \end{align*} Using independent uniform random points $(U_i)_{i\geq 1}$ on $[0,1]$, we define \begin{align*} \cT_k=\mathbf{p} \left(\mathrm{Span}(X^{\mathrm{exc},(\alpha)};U_1,U_2,\dots,U_k)\right) \end{align*} and \begin{align*} \cT^\dec_k=\mathbf{p}^\dec\left( \mathrm{Span}(X^{\mathrm{exc},(\alpha)},(\cB_t)_{t\in \intervalleff{0}{1}};U_1,U_2,\dots,U_k) \right), \end{align*} where $\mathbf{p}:\intervalleff{0}{1} \rightarrow \cT_\alpha$ and $\mathbf{p}^\dec:\sqcup_{t\in \intervalleff{0}{1}} \cB_t \rightarrow \cT_\alpha^\dec$ are the respective quotient maps. We say that $\cT_k$ (resp. $\cT^\dec_k$) is the discrete random finite-dimensional marginal of $\cT_\alpha$ (resp. $\cT^\dec_\alpha$), following the standard definition for trees introduced by Aldous. \begin{remark} Note that for $\cT^\dec_k$ to be almost surely well-defined, we do not need the whole decorated tree $\cT^\dec_\alpha$ to be well-defined as a compact measured metric space. In fact, it is easy to check that if the tail of $\diam(\cB)$ is such that $\Pp{\diam(\cB)\geq x}\sim x^{-p}$ with $1 < p <\frac{\alpha }{\gamma}$, then $\sup_{v \in \mathbb B} \Delta_v^\gamma \cdot \diam(\cB_v)=\infty$ almost surely, even though the distance of a random point to the root in $\cT^\dec_\alpha$ is almost surely finite.
\end{remark} \begin{lemma} Identifying each $\cB_v$, for $v\in \mathbb B$, with a subset of $\cT^\dec_\alpha$, we almost surely have \begin{align*} \bigcup_{v\in \mathbb B}\cB_v \subset \overline{\bigcup_{n\geq 0}\cT^\dec_n}. \end{align*} \end{lemma} \begin{proof} By properties of the stable excursion, for any $t\in \mathbb B$ the set $\enstq{s\succeq t}{s\in \intervalleff{0}{1}}$ has positive Lebesgue measure. Hence almost surely there is some $k$ such that $t\preceq U_k$. \end{proof} \begin{corollary}\label{cor:Tdec is compact iff union Tndec is} The space $\overline{\bigcup_{n\geq 0}\cT^\dec_n}$ is compact if and only if $\cT^\dec_\alpha$ is well-defined as a compact metric space. \end{corollary} \subsubsection{Description of $\cT_k$ and $\cT_k^\dec$} First, for any $k\geq 1$, we let $L_k$ be the distance $\mathrm{d}(U_k,\cT_{k-1})$ computed in the tree $\cT_\alpha$. Then we consider the set of branch-points $\mathbb B \cap (\cT_k\setminus\cT_{k-1})$ and define $(N_k(\ell))_{\ell \geq 1}$ as the decreasing reordering of the sequence $\left(\Delta_t\right)_{t\in \mathbb B \cap (\cT_k\setminus\cT_{k-1})}$; this reordering is well-defined because the jumps of the excursion process are almost surely all distinct. Denote $N_k= \sum_{\ell \geq 1} N_k(\ell)$. We denote by $(t_k(\ell))_{\ell\ge 1}$ the corresponding sequence of jump times. We also consider the sequence $(L_k(\ell))_{\ell\ge 1}$ defined by $L_k(\ell)=\mathrm{d}(t_k(\ell), \cT_{k-1})$. Let us also write $N_k^{r}(\ell)$ for the quantity $x_{t_k(\ell)}^{U_k}$, defined in \eqref{eq:definition xts}. Let us check that these random variables are enough to reconstruct $\cT_k^\dec\setminus \cT_{k-1}^\dec$. We have \begin{align*} \overline{\cT_k^\dec\setminus \cT_{k-1}^\dec}= \overline{\bigcup_{t\in \cT_k\setminus \cT_{k-1}} \cB_t}, \end{align*} seen as a subset of $\cT^\dec_\alpha$.
Now, from the form of the distance on $\cT^\dec_\alpha$, the induced distance between two points in $\cT_k^\dec\setminus \cT_{k-1}^\dec$ only depends on: \begin{itemize} \item the ordering of the jump times $(t_k(\ell))_{\ell \geq 1}$ by the relation $\preceq$ (they are all comparable because by definition we have $t_k(\ell)\preceq U_k$ for all $\ell \geq 1$), \item the sizes of the jumps $(\Delta_{t_k(\ell)})$ and the corresponding blocks $\cB_{t_k(\ell)}$, \item the position of the gluing points $Z_{t_k(\ell)}^{U_k}$ on the corresponding block $\cB_{t_k(\ell)}$. \end{itemize} We can also check that the topological boundary $\partial(\cT_k^\dec\setminus \cT_{k-1}^\dec)$ contains a single point, which we call $Z_k$. When considering the compact object $\overline{\cT_k^\dec\setminus \cT_{k-1}^\dec}$, we see it as rooted at $Z_k$. Now, if we want to reconstruct the entire $\cT_k^\dec$ from $(\cT_k^\dec\setminus \cT_{k-1}^\dec)$ and $\cT_{k-1}^\dec$, we also need some additional information. For that we consider the finite sequence $U_1 \wedge U_k, U_2\wedge U_k, \dots , U_{k-1}\wedge U_k$, whose elements are all $\preceq U_k$ by definition. Because they are all comparable, we can consider $V_k$ the maximal element of this sequence. We let $I_k$ be the unique $i\leq k-1$ such that $V_k\in (\cT_i^\dec\setminus \cT_{i-1}^\dec)$. Additionally, we let $W_k=u^{U_k}_{V_k}$. We can check that the corresponding point $Z_k=Y_{V_k,W_k}$ is, conditionally on $\cT_{k-1}^\dec$ and $V_k$, distributed as a random point sampled under the measure $\nu_{V_k}$ carried by the block $\cB_{V_k}$, by definition. Now, we use the results of \cite[Proposition~2.2]{goldschmidt_stable_2018}, which identify the joint law of these quantities as scaling limits of analogous quantities defined in a discrete setting, on trees constructed using the so-called Marchal algorithm.
Those quantities have been studied with a slightly different approach in \cite{senizergues_growing_2022}, which provides an explicit description of those random variables, jointly in $k$ and $\ell$. The following can be read from \cite[Lemma~5.5, Proposition~5.7]{senizergues_growing_2022}: \begin{itemize} \item the sequence $(N_k)_{k\geq 1}$ has the distribution of the sequence $(\mathsf{m}_k)_{k\geq 1}=(\mathsf{M}_k-\mathsf{M}_{k-1})_{k\geq 1}$ where $(\mathsf{M}_k)_{k\geq 1}\sim \mathrm{MLMC}\left(\frac{1}{\alpha},1-\frac{1}{\alpha}\right)$, \item the sequences $\left(\frac{N_k(\ell)}{N_k}\right)_{\ell \geq 1}$ are i.i.d. with law $\mathrm{PD}(\alpha-1,\alpha -1)$ and $L_k= \alpha \cdot N_k^{\alpha -1} \cdot S_k$ where $S_k$ is the $(\alpha-1)$-diversity of $\left(\frac{N_k(\ell)}{N_k}\right)_{\ell \geq 1}$, \item the random variables $\frac{L_k(\ell)}{L_k}$ are i.i.d. uniform on $\intervalleff{0}{1}$, \item the random variables $\frac{N_k^{r}(\ell)}{N_k(\ell)}$ are i.i.d. uniform on $\intervalleff{0}{1}$. \end{itemize} In particular, from the above description, we can check that conditionally on the sequence $(N_k)_{k\geq 1}$, the random variables $(\overline{\cT_k^\dec\setminus \cT_{k-1}^\dec })_{k\geq 1}$ are independent with distribution given by \begin{align*} \overline{\cT_k^\dec\setminus \cT_{k-1}^\dec} \overset{(d)}{=} \mathrm{Scale}(N_k^\gamma, N_k, N_k^\beta; \cS^\dec). \end{align*} We have identified the laws of the two sequences $(\cT_k^\dec\setminus \cT_{k-1}^\dec)_{k\geq 1}$ and $(\tilde{\cT}_k^\dec\setminus \tilde{\cT}_{k-1}^\dec)_{k\geq 1}$. In order to identify the laws of the sequences $(\cT_k^\dec)_{k\geq 1}$ and $(\tilde{\cT}_k^\dec)_{k\geq 1}$, we just have to check that the attachment procedure is the same. Recall the definition of the random variables $(I_k)_{k\geq 1}$, $(V_k)_{k\geq 1}$ and $(W_k)_{k\geq 1}$.
Still from \cite[Lemma~5.5, Proposition~5.7]{senizergues_growing_2022}, conditionally on all the quantities whose distributions were identified above, the $I_k$'s are independent and $\Pp{I_k=i}=\frac{N_i}{N_1+\dots + N_{k-1}}$ for all $i\leq k-1$. From those random variables we can construct an increasing sequence of trees $(\mathtt{S}_n)_{n\geq 1}$ in such a way that the parent of the vertex with label $k$ is the vertex with label $I_k$. From the observation above, the law of $(\mathtt{S}_n)_{n\geq 1}$ conditionally on everything else mentioned before only depends on $(N_k)_{k\geq 1}$. This law is the same as that of $(\mathtt{T}_n)_{n\geq 1}$ conditionally on $(\mathsf m_k)_{k\geq 1}$ (and everything else). Now we just need to check the law of the gluing points: recall that the point $Z_k$ is given as $Z_k=Z_{V_k}^{U_k}=Y_{V_k,W_k}$. We need to argue that, conditionally on $I_k=i$, the point $Z_k$ is sampled under a normalized version of the measure $\sum_{\ell=1}^{\infty} \Delta_{t_i(\ell)} \mu_{t_i(\ell)}$. In fact, still from \cite{senizergues_growing_2022}, conditionally on $I_k=i$ and everything else mentioned before, we have $\Pp{V_k=t_i(\ell)}=\frac{N_i(\ell)}{N_i}=\frac{ \Delta_{t_i(\ell)}}{\sum_{\ell'=1}^{\infty} \Delta_{t_i(\ell')}}$ for $\ell\geq 1$, and $W_k$ is independent uniform on $\intervalleff{0}{1}$. Since by definition $W_k\in \cA_{V_k}$, the point $Y_{V_k,W_k}$ is distributed according to $\mu_{V_k}$ by construction. Now, since the uniform distribution has no atom, it is almost surely the case that $W_k \notin \enstq{u_{V_k}^{U_r}}{r\leq k-1, V_k\preceq U_r}$, so that the point $Y_{V_k,W_k}$ has not been used in the construction up to time $k-1$; the sampling of $Z_k$ is indeed independent of the rest. \subsection{Properties of the construction} We introduce the set of leaves $\cL= \cT^\dec_\alpha\setminus \cup_{n\geq 1}\cT^\dec_n$.
Then still from \cite{senizergues_growing_2022} we have the following. \begin{theorem} \label{thm:compact_hausdorff} If $\Ec{\diam(\cB)^p}<\infty$ for some $p>\frac{\alpha}{\gamma}$, then $\cT^\dec_\alpha$ is almost surely a compact metric space and $\Ec{\diam(\cT^\dec_\alpha)^p}<\infty$. Under the assumption that the measure on $\cB$ is almost surely not concentrated on its root, the Hausdorff dimension of the set of leaves $\mathcal L$ is given by \begin{align*} \dim_H(\cL)=\frac{\alpha}{\gamma} \end{align*} almost surely. \end{theorem} We can now provide a proof of Lemma~\ref{l:smalldec}, which can be seen as a corollary of the above theorem. \begin{proof}[Proof of Lemma~\ref{l:smalldec}] Introduce the random block $\widehat{\cB}$, defined on the same probability space as $\cB$, as the interval $\intervalleff{0}{\diam \cB}$ rooted at $0$ and whose sampling measure is a Dirac point mass at $\diam \cB$. We also consider the corresponding object $\widehat{\cT}^\dec$. Because $\widehat{\cB}$ also satisfies the assumptions of the theorem above, $\widehat{\cT}^\dec$ is almost surely compact and by Corollary~\ref{cor:Tdec is compact iff union Tndec is} we have $\mathrm{d}_H(\widehat{\cT}^\dec, \widehat{\cT}^\dec_n)\rightarrow 0$ almost surely as $n\rightarrow\infty$. Then we can check that, for the uniform random variables $U_1,U_2,\dots,U_n$ that were used to define $\widehat{\cT}^\dec_n$, we have \begin{align*} \sup_{s\in \intervalleff{0}{1}} \sum_{t\prec s} \diam(\cB_t)\ind{\Delta_t<\delta} \leq\sup_{s\in \{U_1, U_2,\dots, U_n\}} \sum_{t\prec s} \diam(\cB_t)\ind{\Delta_t<\delta} + \mathrm{d}_H(\widehat{\cT}^\dec, \widehat{\cT}^\dec_n). \end{align*} Thanks to the above theorem, $\widehat{\cT}^\dec$ is compact and the second term tends to $0$ as $n\rightarrow \infty$. Then, for any fixed large $n$, the first term tends to $0$ as $\delta\rightarrow0$, which concludes the proof.
\end{proof} \section{Applications} \label{s:applications} We present several applications of the invariance principle in Theorem~\ref{th:invariance} to block-weighted models of random discrete structures. The limits of these objects are stable trees decorated by other stable trees, stable looptrees, or even Brownian discs. \subsection{Marked trees and iterated stable trees} \label{s:markedtrees} Define a class of combinatorial objects $\cM$ consisting of all marked rooted plane trees where the root and each leaf receive a mark, and internal vertices may or may not receive a mark. The size of an object $M$ in $\cM$ is its number of leaves and is denoted by $|M|$. For a given $n$ there are infinitely many trees of size $n$, so for simplicity we assume that there are no vertices of out-degree 1, rendering the number of trees of size $n$ finite. \begin{figure} [!h] \centerline{\scalebox{0.7}{\includegraphics{treeblock.pdf}}} \caption{On the left is a tree from $\cM$ and on the right it is shown how it is decomposed into its tree blocks. The root is at the bottom and marked vertices are denoted by black and white circles.} \label{f:treeblocks} \end{figure} A subclass of $\cM$, which we denote by $\cL$, consists of trees where no internal vertex, except the root, receives a mark; an $n$-sized object from $\cL$ is thus a rooted plane tree with $n$ leaves. The ordinary generating series of the classes $\cM$ and $\cL$ satisfy the equation \begin {align}\label{eq:iso1} \cM(z) = z + \cL(\cM(z))-\cM(z). \end {align} The interpretation is that an object from $\cM$ is either a single vertex (the root of the tree) or a tree in $\cL$, different from a single root, in which each marked vertex is identified with the root of an object from $\cM$. We will call a subtree of an element $M$ from $\cM$ a \emph{tree block} if it has more than one vertex, all of its leaves are marked, its root is marked and no other vertices are marked.
We further require that it is a maximal subtree with this property. The tree blocks may also be understood as the subtrees obtained by cutting the marked tree at each marked internal vertex, see Fig.~\ref{f:treeblocks}. \begin {remark} \label{re:iteriter} One may introduce marks with $k$ different colors, define ``color blocks'', and assign weights to them. This model is a candidate for a discrete version of further iterated trees $\mathcal{T}_{\alpha_1,\alpha_2,\ldots,\alpha_{k+1}}$. Denote the set of such structures with marks of $k$ colors by $\cM_k$, with $\cL = \cM_0$ and $\cM = \cM_1$. One then has an equation of generating series \begin {equation}\label{eq:iso2} \cM_k(z) = z + \cM_{k-1}(\cM_k(z))-\cM_k(z). \end {equation} \end {remark} \begin {remark} It is a standard result (and easy to check) that \begin {equation*} \cL(z) = \frac{1+z-\sqrt{z^2-6z+1}}{4} = z + z^2 + 3z^3 + 11 z^4 + 45 z^5 + 197 z^6 + \ldots \end {equation*} and from this and Equation~\eqref{eq:iso1} one may deduce that \begin {equation*} \cM(z) = \frac{1+7z-\sqrt{z^2-10z+1}}{12} = z + z^2 + 5z^3+31z^4 + 215z^5 +1597 z^6 + \ldots \end {equation*} This sequence of coefficients is not in the OEIS. Going further one finds that \begin {equation*} \cM_2(z) = \frac{1+17z-\sqrt{z^2-14z+1}}{24} = z + z^2 + 7z^3+61z^4 + 595z^5 +6217 z^6 + \ldots \end {equation*} In general, by induction, \begin {align*} \cM_k(z) &= \frac{1+(2k^2+4k+1)z-\sqrt{z^2-(4k+6)z+1}}{2 (k+1)(k+2)} \\ &= z + z^2 + (2(k+1)+1)z^3 + (5(k+1)^2+5(k+1)+1)z^4 \\ & + (14(k+1)^3+21(k+1)^2+9(k+1)+1)z^5 + \ldots \end {align*} Note that $[z^n (k+1)^m]\cM_k(z)$ is the number of rooted plane trees with no vertex of outdegree 1 which have $n$ leaves and $m$ internal vertices (not counting the root). In particular it holds that \begin {equation*} \sum_{m=0}^{n-2} [z^n (k+1)^m]\cM_k(z) = [z^n] \cL(z).
\end {equation*} \end {remark} To each element $L$ of $\cL$ we assign a weight $\gamma(L)$ and denote the class of such weighted structures by $\cL^\gamma$. We assume the weight of the marked tree consisting of a single vertex is equal to $1$. Then, we assign a weighting $\omega$ to elements $M$ from $\cM$ by \begin {equation*} \omega(M) = \prod_{L} \gamma(L) \end {equation*} where the index $L$ ranges over the tree blocks in $M$. Denote the corresponding class of weighted structures by $\cM^\omega$. The weighted ordinary generating series satisfy an equation similar to the one before, \begin {align*} \cM^\omega(z) = z + \cL^\gamma(\cM^\omega(z)) - \cM^\omega(z). \end {align*} Define a random element $\mM_n^\omega$ from the set of $n$-sized elements of $\cM^\omega$ which is selected with probability proportional to its weight. Let $(\iota_n)_{n\geq 0}$ and $(\zeta_n)_{n\geq 0}$ be sequences of non-negative weights, with $\iota_0=\zeta_0=1$ and $\iota_1 = \zeta_1 = 0$. We will be interested in weights $\gamma$ of the form \begin {equation*} \gamma(L) = \zeta_{|L|} W(L) \end {equation*} with \begin{figure} [b!] \centerline{\scalebox{0.45}{\includegraphics{iso.pdf}}} \caption{The coupling of $\mM_n^\omega$ with simply generated trees with leaves as atoms.} \label{f:iso} \end{figure} \begin {equation*} W(L) = \prod_{v \text{ internal vertex in } L} \iota_{d^+(v)}. \end {equation*} Let $(w_n)_{n\geq 0}$ be a sequence of non-negative numbers such that $w_0=1$ and $w_1=0$. For $n\geq 2$ we will write the weights $\zeta_n$ as follows \begin {align*} \zeta_n = w_n Z_n^{-1} \end {align*} where \begin {equation*} Z_n = \sum_{\substack{L\in\cL, |L| = n}} W(L) \end {equation*} is the \emph{partition function} of elements from $\cL^\gamma$ of size $n$. In particular, when we choose $\zeta_n = 1$ (i.e.~$w_n = Z_n$) for all $n\geq 2$, we say that $\mM_n^\omega$ is an $n$-leaf \emph{simply generated marked tree} with branching weights $(\iota_k)_k$.
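The closed forms for $\cM_k(z)$ given in the remark above can be double-checked by extracting coefficients with exact rational power-series arithmetic. A small sketch (the function names are ours, for illustration only):

```python
from fractions import Fraction

def series_sqrt(p, n):
    """Coefficients of sqrt(p) up to degree n, for a power series with p[0] == 1."""
    s = [Fraction(1)] + [Fraction(0)] * n
    for k in range(1, n + 1):
        conv = sum(s[i] * s[k - i] for i in range(1, k))
        s[k] = (p[k] - conv) / 2  # from [z^k] (s*s) = 2*s[k] + conv
    return s

def marked_tree_coeffs(k, n):
    """Coefficients up to degree n of
    M_k(z) = (1 + (2k^2+4k+1) z - sqrt(z^2 - (4k+6) z + 1)) / (2(k+1)(k+2))."""
    p = [Fraction(1), Fraction(-(4 * k + 6)), Fraction(1)] + [Fraction(0)] * (n - 2)
    s = series_sqrt(p, n)
    coeffs = [-c for c in s]
    coeffs[0] += 1
    coeffs[1] += 2 * k * k + 4 * k + 1
    denom = Fraction(2 * (k + 1) * (k + 2))
    return [c / denom for c in coeffs]

# k = 0 recovers L(z), k = 1 recovers M(z), k = 2 recovers M_2(z):
print(marked_tree_coeffs(0, 6))  # [0, 1, 1, 3, 11, 45, 197]
print(marked_tree_coeffs(1, 6))  # [0, 1, 1, 5, 31, 215, 1597]
```

Exact `Fraction` arithmetic avoids any floating-point doubt when comparing against the integer coefficients listed in the remark.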
\begin {proposition} \label{pro:samplingmnw} The random element $\mM_n^\omega$ may be sampled as follows: \begin {enumerate} \item Sample an $n$-leaf simply generated tree $\tau_n = \tau_n^\omega$ with branching weights $[z^{k}]\cL^{\gamma}(z) = w_k$ assigned to each internal vertex of out-degree $k>1$ and branching weight $w_0=1$ assigned to each leaf. \item For each vertex $v$ of $\tau_n$ sample independently a $d^+(v)$-leaf simply generated tree $\delta(v)$ with branching weights $\iota_k$ assigned to each vertex of outdegree $k\geq 0$. Glue these together according to the tree structure of $\tau_n$ (see Fig.~\ref{f:iso}). \end {enumerate} \end {proposition} We refer to the surveys~\cite{MR2908619,zbMATH07235577} for details on simply generated models of trees with fixed numbers of vertices or leaves. The proof of Proposition~\ref{pro:samplingmnw} is by a straightforward calculation, or alternatively by applying a general result on sampling trees with decorations~\cite[Lem. 6.7]{zbMATH07235577}. We will only be interested in the case where $\cM^\omega(z)$ has positive radius of convergence. In this case one may rescale the weights $w_n$ and $\iota_n$ such that they are probabilities without affecting the distribution of $\mM_n^\omega$, and we will assume in the following that this has been done. We denote by $\xi$ a random variable with distribution $(w_n)_{n\geq 0}$ and by $\chi$ a random variable with distribution $(\iota_n)_{n\geq 0}$. The tree $\tau_n$ may then be viewed as a Bienaymé--Galton--Watson tree with offspring distribution $\xi$ conditioned on having $n$ leaves and $\delta(v)$ may be viewed as a Bienaymé--Galton--Watson tree with offspring distribution $\chi$ conditioned on having $d^+(v)$ leaves. We let $d_{\mM_n^\omega}$ denote the graph-distance on $\mM_n^\omega$ and $\nu_{\mM_n}$ the counting measure on its set of non-root vertices.
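Both steps of Proposition~\ref{pro:samplingmnw} reduce, after normalization, to sampling BGW trees conditioned on their number of leaves. A naive rejection sampler illustrating this (the critical offspring distribution used in the test below is a hypothetical example, not tied to any particular weight sequence):

```python
import random

def bgw_outdegrees(offspring, rng, cap=100_000):
    """Sample an unconditioned BGW tree, returned as its list of out-degrees
    in exploration order; return None if the tree exceeds `cap` vertices."""
    degrees, pending = [], 1
    while pending > 0:
        if len(degrees) >= cap:
            return None
        d = rng.choices(range(len(offspring)), weights=offspring)[0]
        degrees.append(d)
        pending += d - 1  # each explored vertex is replaced by its d children
    return degrees

def bgw_with_n_leaves(offspring, n, seed=0):
    """Rejection-sample a BGW tree conditioned on having exactly n leaves."""
    rng = random.Random(seed)
    while True:
        t = bgw_outdegrees(offspring, rng)
        if t is not None and sum(1 for d in t if d == 0) == n:
            return t
```

Rejection is only practical for small $n$; it is meant to make the conditioning in the proposition concrete, not to be an efficient sampler.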
Proposition~\ref{pro:samplingmnw} ensures that $(\mM_n^\omega, d_{\mM_n^\omega}, \nu_{\mM_n})$ falls into the framework of discrete decorated trees described in Section~\ref{s:disc_construction}, with the random tree given by $\tau_n$ and decorations according to a sequence $(\tilde B_k,\tilde \rho_k,\tilde d_k,\tilde \ell_k,\tilde \nu_k)_{k\geq 0}$ given as follows. The space $\tilde{B}_k$ is a $\chi$-BGW-tree conditioned on having $k$ leaves, $\tilde{\rho}_k$ is its root vertex, and $\tilde{d}_k$ is the graph distance on that space. The labeling function $\tilde{\ell}_k$ is chosen to be some bijection between $\{1, \ldots, k\}$ and the leaves of $\tilde{B}_k$. Thus $\mu_k$ is the uniform measure on the $k$ leaves of $\tilde{B}_k$. The measure $\tilde{\nu}_k$ is the counting measure on the non-root vertices of $\tilde{B}_k$. Choose $(w_n)_{n\geq 0}$ and $(\iota_n)_{n\geq 0}$ such that $\Ex{\xi}= \Ex{\chi} = 1$, and such that $\xi, \chi$ follow asymptotic power laws so that $\xi$ is in the domain of attraction of a stable law with index $\alpha_2 \in ]1,2[$ and $\chi$ lies in the domain of attraction of a stable law with index $\alpha_1 \in ]1,2[$. By~\cite[Thm. 6.1, Rem. 5.4]{MR2946438} there is a sequence \begin{align*} b_n = \left( \frac{|\Gamma(1-\alpha_2)|}{\Pr{\xi=0}} \right)^{1/\alpha_2}\inf \{x \ge 0 \mid \Pr{\xi > x} \le 1/n \} \sim c_1 n^{1/\alpha_2} \end{align*} for some constant $c_1>0$ such that \begin{align*} (b_n^{-1} W_{\lfloor|\tau_n|t\rfloor}(\tau_n))_{0 \le t \le 1} \convdis X^{\text{exc}, (\alpha_2)}. \end{align*} The number of vertices of $\tilde{B}_k$ concentrates around $k / \Pr{\chi=0}$, and by~\cite[Thm.
5.8]{MR2946438} it follows that for some constant $c_2>0$ \begin{align*} \left(\tilde B_k,\tilde\rho_k, c_2 k^{-(1-1/\alpha_1)} \tilde d_k, \mu_k,(k/\Pr{\chi=0})^{-1}\tilde \nu_k\right) \to (\cT_{\alpha_1},\rho_{\cT_{\alpha_1}}, d_{\cT_{\alpha_1}},\nu_{\cT_{\alpha_1}},\nu_{\cT_{\alpha_1}}) \end{align*} with $(\cT_{\alpha_1}, \rho_{\cT_{\alpha_1}}, d_{\cT_{\alpha_1}}, \nu_{\cT_{\alpha_1}})$ denoting the $\alpha_1$-stable tree with root vertex $\rho_{\cT_{\alpha_1}}$. For $\alpha_1 > \frac{1}{2 - \alpha_2}$ we may use the construction from Section~\ref{s:cts_construction} to form a decorated stable tree $(\cT_{\alpha_1,\alpha_2}, d_{\cT_{\alpha_1,\alpha_2}}, \nu_{\cT_{\alpha_1,\alpha_2}})$ with distance exponent $\gamma_1 := 1 - 1 / \alpha_1$, obtained by blowing up the branchpoints of the $\alpha_2$-stable tree with rescaled independent copies of the $\alpha_1$-stable tree. The measure $\nu_{\cT_{\alpha_1,\alpha_2}}$ is taken to be the push-forward of the Lebesgue measure, corresponding to the case $\beta=1$. Theorem~\ref{th:invariance} yields the following scaling limit: \begin {theorem}\label{thm:mainconv} Choose $(w_n)_{n\geq 0}$ and $(\iota_n)_{n\geq 0}$ such that $\Ex{\xi}= \Ex{\chi} = 1$, and such that $\xi, \chi$ follow asymptotic power laws so that $\xi$ is in the domain of attraction of a stable law with index $\alpha_2 \in ]1,2[$ and $\chi$ lies in the domain of attraction of a stable law with index $\alpha_1 \in ]1,2[$. Suppose that $\alpha_1 > \frac{1}{2 - \alpha_2}$. Then \begin {equation*} \left(\mM_n^\omega, c_1^{-1+1/\alpha_1} c_2 n^{-\frac{\alpha_1-1}{\alpha_2 \alpha_1}} d_{\mM_n^\omega}, \frac{1}{|\mM_n^\omega|}\nu_{\mM_n^\omega}\right) \convdis (\cT_{\alpha_1,\alpha_2}, d_{\cT_{\alpha_1,\alpha_2}}, \nu_{\cT_{\alpha_1,\alpha_2}}) \end {equation*} in the Gromov--Hausdorff--Prokhorov topology.
\end {theorem} By Theorem~\ref{thm:compact_hausdorff} and standard arguments we also obtain the Hausdorff dimension of the iterated stable tree: \begin {theorem} For every $\alpha_2 \in (1,2)$ and $\alpha_1 \in (\frac{1}{2-\alpha_2},2)$, almost surely the random tree $\mathcal{T}_{\alpha_1,\alpha_2}$ has Hausdorff dimension $\displaystyle\frac{\alpha_2 \alpha_1}{\alpha_1-1}$. \end {theorem} The construction may be iterated: With $\gamma_2 := \gamma_1 / \alpha_2$ we may choose any $\alpha_3 \in ]1, 1+\gamma_2[$ and build the iterated stable tree $\cT_{\alpha_1, \alpha_2, \alpha_3}$ as in Section~\ref{s:cts_construction} with distance exponent~$\gamma_2$, by blowing up the branchpoints of an $\alpha_3$-stable tree by rescaled independent copies of the iterated stable tree $\cT_{\alpha_1, \alpha_2}$. By Remark~\ref{re:iteriter} and arguments analogous to those for Theorem~\ref{thm:mainconv}, the tree $\cT_{\alpha_1, \alpha_2, \alpha_3}$ arises as the scaling limit of random finite marked trees with diameter of order $n^{\gamma_3}$ for $\gamma_3 := \gamma_2/ \alpha_3$. This construction may be iterated indefinitely, by choosing $\alpha_4 \in ]1, 1 + \gamma_3[$ and setting $\gamma_4 = \gamma_3 / \alpha_4$, and so on. This yields a sequence $(\alpha_i)_{i \ge 1}$ so that $\cT_{\alpha_1, \ldots, \alpha_k}$ is well-defined for all $k \ge 2$. We pose the following question: \begin{question} Is there a non-trivial scaling limit for $\cT_{\alpha_1, \ldots, \alpha_k}$ as $k \to \infty$? \end{question} Note that the associated sequence $(\gamma_i)_{i \ge 1}$ is strictly decreasing and satisfies $\gamma_{i+1} > \gamma_i/ (\gamma_i +1)$. Since $\gamma_k = \gamma_1/\prod_{i=2}^{k} \alpha_i$, we have $\lim_{i \to \infty} \gamma_i= 0$ precisely when the product $\prod_{i \geq 1} \alpha_i$ diverges, and in either case $\lim_{i \to \infty} \alpha_i= 1$.
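The recursion $\gamma_{i+1} = \gamma_i/\alpha_{i+1}$ with $\alpha_{i+1} \in \left]1, 1+\gamma_i\right[$ is easy to iterate numerically. A sketch using the (arbitrary, purely illustrative) choice $\alpha_{i+1} = 1 + \gamma_i/2$, which always lies in the admissible interval:

```python
def iterate_exponents(gamma1, steps):
    """Iterate gamma_{i+1} = gamma_i / alpha_{i+1}, choosing
    alpha_{i+1} = 1 + gamma_i / 2, which lies in ]1, 1 + gamma_i[."""
    gammas, alphas = [gamma1], []
    for _ in range(steps):
        g = gammas[-1]
        a = 1 + g / 2
        alphas.append(a)
        gammas.append(g / a)
    return gammas, alphas

gammas, alphas = iterate_exponents(0.5, 200)
```

For this particular choice one gets $1/\gamma_{i+1} = 1/\gamma_i + 1/2$, so $\gamma_i$ decays harmonically and $\alpha_i \to 1$; other admissible choices of $(\alpha_i)$ can shrink the $\gamma_i$ much more slowly.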
The intuition for the question is that if $\alpha$ is close to $1$, then the vertex with maximal width in $\cT_\alpha$ should dominate and hence blowing up all branchpoints by some decoration should yield something close to a single rescaled version of that decoration. Hence $\cT_{\alpha_1, \ldots, \alpha_{k+1}}$ \emph{should} be close to a constant multiple of $\cT_{\alpha_1, \ldots, \alpha_{k}}$ for $k$ large enough. Since $\cT_{\alpha_1, \ldots, \alpha_{k}}$ has Hausdorff dimension $\alpha_1 \cdots \alpha_k/(\alpha_1-1)$, we expect that $(\alpha_i)_{i \ge 1}$ needs to be chosen so that $\prod_{i=1}^\infty \alpha_i$ converges in order to get a scaling limit. \subsection{Weighted outerplanar maps} \label{sec:outer} Planar maps may roughly be described as drawings of connected graphs on the $2$-sphere, such that edges are represented by arcs that may only intersect at their endpoints. The connected components that are created when removing the map from the plane are called the faces of the map. The number of edges on the boundary of a face is its degree. In order to avoid symmetries, usually a root edge is distinguished and oriented. The origin of the root-edge is called the root vertex. The face to the right of the root edge is called the outer face, and the face to the left the root face. An outerplanar map is a planar map where all vertices lie on the boundary of the outer face. The geometric shape of outerplanar maps has received some attention in the literature~\cite{zbMATH06673644,zbMATH06729837,zbMATH07138334,zbMATH07235577}. Throughout this section we fix two sequences $\iota = (\iota_k)_{k \ge 3}$ and $\kappa = (\kappa_k)_{k \ge 2}$ of non-negative weights. We are interested in random outerplanar maps that are generated according to $\kappa$-weights on their blocks and $\iota$-weights on their faces. Our goal in this section is to describe phases in which we obtain decorated stable trees as scaling limits.
Specifically, we will obtain stable trees decorated with looptrees and Brownian trees. This is motivated by the mentioned work~\cite{zbMATH07138334}, which described a phase transition of random face-weighted outerplanar maps from a deterministic circle to the Brownian tree via looptrees. By utilizing the second weight sequence $\kappa$, we obtain a completely different phase diagram. We will only consider outerplanar maps without multi-edges or loop edges. Recall that a block of a graph is a connected subgraph $D$ that is maximal with the property that removing any of its vertices does not disconnect $D$. Blocks of outerplanar maps are precisely dissections of polygons. We define the weight of a dissection $D$ by \begin{align} \gamma(D) = \kappa_{|D|} \prod_F \iota_{\mathrm{deg}(F)}, \end{align} with $|D|$ denoting the number of vertices of $D$, the index $F$ ranging over the faces of $D$, and $\mathrm{deg}(F)$ denoting the degree of the face $F$. This includes the case where $D$ consists of two vertices joined by a single edge. The weight of an outerplanar map $O$ is then defined by \begin{align} \label{eq:omegao} \omega(O) = \prod_{D} \gamma(D), \end{align} with the index $D$ ranging over the blocks of $O$. Given $n \ge 1$ such that at least one $n$-sized outerplanar map has positive $\omega$-weight, this allows us to define a random outerplanar map $O_n$ that is selected with probability proportional to its $\omega$-weight among the finitely many outerplanar maps with $n$ vertices. We will only consider the case where infinitely many integers $n$ with this property exist. Likewise, we define the random dissection $D_n$ that is sampled with probability proportional to its weight among all dissections with $n$ vertices. The random outerplanar map $O_n$ fits into the framework considered in the present work. We will make this formal.
For each $k \ge 2$ let us set \begin{align} \label{eq:defd} d_k = \sum_{D : |D| =k} \gamma(D), \end{align} with the sum index $D$ ranging over all dissections of $k$-gons. We will only consider the case where the power series $D(z) := \sum_{k \ge 2} d_k z^k$ has positive radius of convergence $\rho_D > 0$. For any $t>0$ with $D(t) < t$ we define the probability weight sequence \begin{align} (p_i(t))_{i \ge 0} = (1 - D(t)/t, 0, d_2t , d_3t^2, d_4 t^3, \ldots). \end{align} Its first moment is given by $\sum_{k \ge 2} k d_k t^{k-1}$, and we set \begin{align} m_O = \lim_{t \nearrow \rho_D} \sum_{k \ge 2} k d_k t^{k-1} \in ]0, \infty]. \end{align} If $m_O \ge 1$, then there is a unique $0 < t_O \le \rho_D$ for which $ \sum_{k \ge 2} k d_k t_O^{k-1} = 1$. If $m_O < 1$ we set $t_O = \rho_D$. Furthermore, we let $\xi$ denote a random non-negative integer with probabilities \begin{align} \label{eq:defxi} \Pr{\xi= i} = p_i(t_O), \qquad i \ge 0. \end{align} Let $T_n$ denote a $\xi$-BGW tree conditioned on having $n$ leaves. Note that $T_n$ has no vertex with outdegree $1$. Let $g_n>0$ be a positive real number. For each $k \ge 2$ let $(\tilde B_k,\tilde\rho_k,\tilde d_k,\tilde\ell_k,\tilde \nu_{k})$ denote the decoration with $\tilde{B}_k = D_k$, $\tilde d_k $ the graph distance on $D_k$ multiplied by $g_n$, $\tilde{\rho}_k$ the root vertex of $D_k$, $\tilde{\ell}_k: \{1, \ldots, k\} \to D_k$ any fixed enumeration of the $k$ vertices of $D_k$, and $\tilde{\nu}_k$ the counting measure on the non-root vertices (that is, all vertices except for the origin of the root edge) of $D_k$. We let $\tilde{B}_0$ denote a trivial one-point space with no mass.
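The weight sequence $(p_i(t))_{i\ge 0}$ sums to one by construction, since $\sum_{i \ge 2} d_i t^{i-1} = D(t)/t$. A quick numerical sanity check, with a hypothetical finitely supported sequence $(d_k)$ (the numbers below are made up for illustration, not computed from any actual dissection weights):

```python
def boltzmann_weights(d, t):
    """(p_i(t))_{i>=0} = (1 - D(t)/t, 0, d_2 t, d_3 t^2, ...), where
    D(z) = sum_{k>=2} d_k z^k and d = {k: d_k} with keys k >= 2."""
    D_t = sum(dk * t**k for k, dk in d.items())
    assert D_t < t, "need D(t) < t for p_0 to be non-negative"
    p = {0: 1 - D_t / t, 1: 0.0}
    for k, dk in d.items():
        p[k] = dk * t ** (k - 1)
    return p

# hypothetical finitely supported weights d_2, d_3, d_4
d = {2: 0.3, 3: 0.1, 4: 0.05}
p = boltzmann_weights(d, 0.9)
```

The first moment of the resulting offspring law equals $\sum_{k\ge 2} k d_k t^{k-1}$, matching the expression used to define $m_O$ and $t_O$.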
\begin{figure}[t] \centering \begin{minipage}{\textwidth} \centering \includegraphics[width=0.7\linewidth]{outerdec} \caption[] {Correspondence of an outerplanar map (top left corner) to a tree decorated by dissections of polygons (bottom half).\footnote{Source of image: \cite{zbMATH07138334}}} \label{f:outerdec} \end{minipage} \end{figure} \begin{lemma}[{\cite[Lem. 6.7, Sec. 6.1.4]{zbMATH07235577}}] \label{le:outerdeco} The decorated tree $(T_n^\dec,d_n^\dec,\nu_n^\dec)$ is distributed like the random weighted outerplanar map $O_n$ equipped with the graph distance multiplied by $g_n$ and the counting measure on the non-root vertices. \end{lemma} \noindent See Figure~\ref{f:outerdec} for an illustration. We refer to the cited sources for detailed justifications. Similarly to the parameter $m_O$, we let $\rho_\iota$ denote the radius of convergence of the series $\sum_{k \ge 2} \iota_{k+1} z^{k}$ and set \begin{align*} m_D := \lim_{t \nearrow \rho_\iota} \sum_{k \ge 2} k \iota_{k+1} t^{k-1} \in ]0, \infty]. \end{align*} If $m_D\ge 1$, there is a unique constant $0 < t_D \le \rho_\iota$ such that $\sum_{k \ge 2} k \iota_{k+1} t_D^{k-1} = 1$. For $m_D < 1$ we set $t_D = \rho_\iota$. Furthermore, we let $\zeta$ denote a random non-negative integer with distribution given by the probability weight sequence \begin{align*} (q_i)_{i \ge 0} := (1 - \sum_{k \ge 2} \iota_{k+1} t_D^{k-1}, 0, \iota_{3} t_D, \iota_4 t_D^2, \ldots). \end{align*} We list three known non-trivial scaling limits for the random face-weighted dissection~$D_n$. If $m_D>1$, then by~\cite{MR3382675} $D_n$ lies in the universality class of the Brownian tree $\mathcal{T}_2$, with a diameter of order $\sqrt{n}$. (That is, $\mathcal{T}_2$ is given by $\sqrt{2}$ times the random real tree obtained by identifying points of a Brownian excursion of duration $1$.)
Specifically, \begin{align} \label{eq:d1} \frac{\sqrt{2}}{\sqrt{ \Va{\zeta} q_0}} \frac{1}{4} \left( \Va{\zeta} + \frac{q_0\Pr{\zeta \in 2 \ndN_0}}{2\Pr{\zeta \in 2 \ndN_0} - q_0 } \right) \frac{1}{\sqrt{n}} D_n \convdis \mathcal{T}_2. \end{align} If $m_D = 1$ and $\Pr{\zeta \ge n} \sim c_D n^{-\eta}$ for $1 < \eta < 2$ and $c_D>0$, then by~\cite{MR3286462} $D_n$ lies in the universality class of the $\eta$-stable looptree $\mathcal{L}_\eta$: \begin{align} \label{eq:d2} (c_D q_0 |\eta(1- \eta)|)^{1/\eta} \frac{1}{n^{1/\eta}} D_n \convdis \mathcal{L}_\eta \end{align} in the Gromov--Hausdorff--Prokhorov sense. Here and in the following we use a shortened notation, where the product of a real number and a random finite graph refers to the vertex set space with the corresponding rescaled graph metric and the uniform measure on the set of vertices. If $0<m_D < 1$ and $\Pr{\zeta=n} = f(n) n^{-\theta}$ for some constant $\theta > 2$ and a slowly varying function $f$, then $D_n$ lies in the universality class of a deterministic loop. That is, \begin{align} \label{eq:d3} \frac{q_0}{n(1 - \Ex{\zeta})} D_n \convdis S^1 \end{align} with $S^1$ denoting the $1$-sphere. See for example~\cite{zbMATH07138334} for a detailed justification. Knowing the asymptotic behaviour of $D_n$ allows us to describe the asymptotic shape of the random weighted outerplanar map $O_n$: \begin{itemize} \item If $m_O>1$, then by~\cite[Thm. 6.60]{zbMATH07235577} there is a constant $c_\omega > 0$ (defined in a complicated manner using expected distances in bi-pointed Boltzmann dissections) such that \begin{align} c_\omega \frac{1}{\sqrt{n}} O_n \convdis \mathcal{T}_2. \end{align} In this case the asymptotics of weighted dissections only influence the constant $c_\omega$, but not the universality class. \item If $m_O=1$ and $\Pr{\xi \ge n} \sim c_O n^{-\alpha}$ for $1<\alpha<2$, then $T_n$ lies in the universality class of an $\alpha$-stable tree. That is, by~\cite[Thm. 6.1, Rem. 
5.4]{MR2946438} there is a sequence \begin{align*} b_n = \left( \frac{|\Gamma(1-\alpha)|}{\Pr{\xi=0}} \right)^{1/\alpha}\inf \{x \ge 0 \mid \Pr{\xi > x} \le 1/n \} \end{align*} of order $n^{1/\alpha}$ for which \begin{align*} (b_n^{-1} W_{\lfloor|T_n|t\rfloor}(T_n))_{0 \le t \le 1} \convdis X^{\text{exc}, (\alpha)}. \end{align*} Now, the outcome of applying Theorem~\ref{th:invariance} depends on the decoration. Suppose that $\gamma > \alpha-1$. Then in each of the three discussed cases~\eqref{eq:d1},~\eqref{eq:d2}, and~\eqref{eq:d3} we obtain a scaling limit of the form \begin{align} \frac{1}{b_n^\gamma} O_n \convdis \mathcal{T}_\alpha^\dec. \end{align} Specifically: \subitem a) If $m_D>1$ then $\gamma = 1/2$ and $\mathcal{T}_\alpha^\dec$ is the $\alpha$-stable tree decorated according to a constant multiple of the Brownian tree $\mathcal{T}_2$ (with the constant given by the inverse of the scaling factor in~\eqref{eq:d1}). \subitem b) If $m_D = 1$ and $\Pr{\zeta \ge n} \sim c_D n^{-\eta}$ for $1 < \eta < 2$ and $c_D>0$, then $\gamma = 1/\eta$ and $\mathcal{T}_\alpha^\dec$ is the $\alpha$-stable tree decorated according to the stretched $\eta$-stable looptree $(c_D q_0 |\eta(1- \eta)|)^{-1/\eta} \mathcal{L}_\eta$. \subitem c) If $0<m_D<1$ and $\Pr{\zeta=n} = f(n) n^{-\theta}$ for some constant $\theta > 2$ and a slowly varying function $f$, then $\gamma = 1$ and $\mathcal{T}_\alpha^\dec$ is the $\alpha$-stable tree decorated according to the stretched circles $\frac{n(1 - \Ex{\zeta})}{q_0} S^1$. In other words, $\mathcal{T}_\alpha^\dec$ is distributed like the stretched $\alpha$-stable loop tree $\frac{n(1 - \Ex{\zeta})}{q_0} \mathcal{L}_\alpha$. Here condition B1 (which is necessary for applying Theorem~\ref{th:invariance}) may be verified using Proposition~\ref{prop:small blobs don't contribute} and Remark~\ref{re:remarkleaves}. 
The required uniform integrability of sufficiently high order of the diameter of the decorations is trivial in case c), follows from~\cite[Lem. 6.61, Sec. 6.1.3]{zbMATH07235577} in case a), and follows analogously from tail-bounds of conditioned BGW trees~\cite{kortchemski_sub_2017} in case b). \item Finally, if $0<m_O < 1$ and $\Pr{\xi=n} = f_O(n) n^{-\theta_O}$ for some constant $\theta_O > 2$ and a slowly varying function $f_O$, then $O_n$ has a giant $2$-connected component of size about $n (1- \Ex{\xi}) / \Pr{\xi=0}$. Hence, if $D_n$ admits a scaling limit like in the three discussed cases, then the scaling limit for $O_n$ is, up to a constant multiplicative factor, the same as for $D_n$. See for example~\cite{zbMATH07138334} for details on such approximations. \end{itemize} \subsection{Weighted planar maps with a boundary} A planar map with a boundary refers to a planar map where the outer face typically plays a special role, and the perimeter (number of half-edges on the boundary of the outer face) serves as a size parameter. The reason for counting half-edges instead of edges is that both sides of an edge may be adjacent to the outer face, in which case such an edge is counted twice. We say the boundary of a planar map is simple if it is a cycle. Here we explicitly allow the case of degenerate $2$-cycles and $1$-cycles. That is, a map consisting of two vertices joined by a single edge has a simple boundary. A map consisting of a loop with additional structures on the inside has a simple boundary, and so has a map consisting of two vertices joined by two edges and additional structures on the inside. Planar maps with a boundary fit into the framework of decorated trees by arguments identical to those for outerplanar maps. That is, we may decompose a planar map into its components with a simple boundary in the same way as an outerplanar map may be decomposed into dissections of polygons.
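As an aside, the constant $t_D$ and the weight sequence $(q_i)_{i \ge 0}$ of $\zeta$ defined earlier in this section are easy to evaluate numerically. A minimal sketch, assuming the illustrative face-weight sequence $\iota_{k+1} = 1$ for all $k \ge 2$ (uniform dissections; this choice and all numerical details are assumptions of the sketch, not taken from the text):

```python
import math

# Illustrative face-weight sequence: iota_{k+1} = 1 for every k >= 2
# (uniform dissections; an assumption made for this sketch only).
K_MAX = 4000  # series truncation; the terms decay geometrically for t < 1

def phi(t):
    # phi(t) = sum_{k >= 2} k * iota_{k+1} * t^(k-1)
    return sum(k * t ** (k - 1) for k in range(2, K_MAX))

# Here m_D = lim_{t -> 1} phi(t) = infinity > 1, so t_D in (0, 1) is the
# unique solution of phi(t_D) = 1; phi is increasing, so bisection works.
lo, hi = 0.0, 0.999
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if phi(mid) < 1 else (lo, mid)
t_D = (lo + hi) / 2

# Weight sequence of zeta: q_0 = 1 - sum_{k>=2} t_D^(k-1), q_1 = 0,
# and q_k = iota_{k+1} * t_D^(k-1) for k >= 2.
tail = [t_D ** (k - 1) for k in range(2, K_MAX)]
q = [1.0 - sum(tail), 0.0] + tail

print(t_D)                                    # here equals 1 - sqrt(2)/2
print(sum(q))                                 # sums to 1
print(sum(i * qi for i, qi in enumerate(q)))  # mean of zeta equals 1
```

Since $t_D$ is defined by $\sum_{k \ge 2} k \iota_{k+1} t_D^{k-1} = 1$, the resulting $\zeta$ automatically has mean one (criticality), which the last line checks; for this particular weight sequence $t_D$ solves $2t^2-4t+1=0$.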
They may be bijectively encoded as trees decorated by maps with a simple boundary in the same way as illustrated in Figure~\ref{f:outerdec}. Since we allow multi-edges and loops, the leaves of the encoding tree canonically and bijectively correspond to the half-edges on the boundary of the map. There are infinitely many planar maps with a fixed positive perimeter, hence it makes sense to assign summable weights. Suppose that for each planar map $D$ with a simple boundary we are given a weight $\gamma(D) \ge 0$. As in Equation~\eqref{eq:omegao}, we then define the weight $\omega(M)$ of a planar map $M$ by \begin{align} \label{eq:omegam} \omega(M) = \prod_{D} \gamma(D), \end{align} with the index $D$ ranging over the components of $M$ with a simple boundary. Thus, for any positive integer $n$ for which the sum of all $\omega$-weights of $n$-perimeter maps is finite and non-zero we may define the random $n$-perimeter map $M_n$ that is drawn with probability proportional to its weight. Note that this formally encompasses the random outerplanar map $O_n$, for which $\gamma(D) = 0$ whenever $D$ is not a dissection of a polygon of perimeter at least $3$. Using the same definitions~\eqref{eq:defd}--\eqref{eq:defxi} for $m_O$ and $\xi$, it follows that the tree $T_n$ corresponding to the random map $M_n$ is distributed like the result of conditioning a $\xi$-BGW tree on having $n$ leaves. Furthermore, employing analogous definitions for the decorations $(\tilde B_k,\tilde d_k,\tilde\rho_k,\tilde\ell_k,\tilde \nu_k)_{k \ge 0}$, it follows as in Lemma~\ref{le:outerdeco} that $M_n$ with distances rescaled by some scaling sequence $a_n>0$ is a discrete decorated tree: \begin{corollary} The decorated tree $(T_n^\dec,d_n^\dec,\nu_n^\dec)$ is distributed like the random weighted map $M_n$ with perimeter $n$, equipped with the graph distance multiplied by $a_n$ and the counting measure on the non-root vertices.
\end{corollary} The boundary of $M_n$ also fits into this framework, but its scaling limits have already been determined in pioneering works~\cite{Richier:2017,zbMATH07253709}, with the parameter $m_O$ and the tail of $\xi$ ultimately determining whether it is a deterministic loop, a random loop tree, or the Brownian tree. In order to apply Theorem~\ref{th:invariance} to $M_n$ (as opposed to its boundary) we need to be able to look inside the components, that is, we need a description of the asymptotic geometric shape of Boltzmann planar maps with a large simple boundary. However, no such results appear to be known. What is known by~\cite{Richier:2017} is that so-called non-generic critical face weight sequences with parameter $\alpha' \in ]1, 3/2[$ lie in a stable regime with $m_O = 1$ and $T_n^\dec$ in the universality class of an $\alpha$-stable tree for $\alpha = 1 / (\alpha' - 1/2)$. The boundary of $M_n$ lies in the universality class of an $\alpha$-stable looptree by~\cite[Thm. 1.1]{Richier:2017}. It is natural to wonder whether additional knowledge of Boltzmann planar maps with a simple boundary in this regime enables the application of Theorem~\ref{th:invariance}. This motivates the following question: \begin{question} Do the decorated stable trees constructed in the present work arise as scaling limits of face-weighted Boltzmann planar maps with a boundary? \end{question} We note that there is a connection to the topic of stable planar maps arising as scaling limits (along subsequences) of Boltzmann planar maps without a boundary~\cite{zbMATH06469338,zbMATH06932734}. Roughly speaking, Boltzmann planar maps with a large boundary are thought to describe the asymptotic geometric behaviour of macroscopic faces of Boltzmann planar maps without a boundary. There are results for related models of triangulations and quadrangulations with a simple boundary with an additional weight on the vertices~\cite{zbMATH07039779, zbMATH07343343}.
Scaling limits for models with a non-simple boundary have been determined for the special case of uniform quadrangulations with a boundary~\cite{MR3335010,zbMATH07212165} that are conditioned on both the number of vertices and the boundary length. For Boltzmann triangulations with the vertex weights as in~\cite{zbMATH07343343} we would expect to be in the regime $m_O<1$, where the shape of the map is dominated by a unique macroscopic component with a simple boundary. However, we could introduce block-weights as before in order to force the model into a stable regime. At least in principle, Theorem~\ref{th:invariance} then yields convergence towards a decorated stable tree obtained by blowing up the branchpoints of a stable tree by rescaled Brownian discs. Checking the requirements of Theorem~\ref{th:invariance} (such as verifying convergence of the moments of the rescaled diameter of Boltzmann triangulations with a simple boundary) does not appear to involve any major obstacles, but we did not go through the details. To do so, we would need to recall extensive combinatorial background, and this appears to be beyond the scope of the present work. \appendix \section{Appendix} This appendix contains the proofs of two technical statements that are used in the paper, Proposition~\ref{prop:small blobs don't contribute} and Lemma~\ref{lem:moment measure discrete block implies B2 or B3}, which ensure that the main assumptions under which the rest of the paper is stated are satisfied for reasonable models of decorated BGW trees. Section~\ref{subsec:regularly varying functions and doa} recalls some general results about regularly varying functions and domains of attraction of stable random variables. Section~\ref{subsec:estimates for BGW} presents some estimates, useful later on, for critical BGW trees whose reproduction law is in the domain of attraction of a stable law.
In Section~\ref{subsec:spine decomposition with marks}, we introduce the notion of trees with marks on the vertices and prove a spine decomposition result for those marked trees. Finally, in Section~\ref{subsec:small blobs do not contribute} we prove Proposition~\ref{prop:small blobs don't contribute}, which is the main technical result of this Appendix, using all the results and estimates derived before. At the end, in Section~\ref{subsec:proof of lemma about measure}, which is independent of what comes before, we prove Lemma~\ref{lem:moment measure discrete block implies B2 or B3}. \subsection{Regularly varying functions and domains of attraction}\label{subsec:regularly varying functions and doa} \subsubsection{Compositional inverse of regularly varying functions} Following standard terminology, we say that a function $f$ defined on a neighbourhood of infinity is regularly varying (at infinity) with exponent $\alpha\in \R$ if for every $\lambda>0$ we have \begin{align*} \frac{f(\lambda x)}{f(x)}\underset{x\rightarrow\infty}{\longrightarrow} \lambda^\alpha. \end{align*} We consider those functions up to the equivalence relation of having a ratio tending to $1$ at infinity. When the index of regularity $\alpha>0$ is positive, a regularly varying function $f$ with exponent $\alpha$ tends to infinity and we can define (at least on a neighbourhood of infinity): \begin{align*} f^{[-1]}(x):=\inf\enstq{y\in\R_+}{f(y)\geq x}, \end{align*} and the equivalence class of $f^{[-1]}$ only depends on the equivalence class of $f$. Then $f^{[-1]}$ is a regularly varying function with index $\alpha^{-1}$ and satisfies: \begin{align*} f\circ f^{[-1]}(x)\underset{x \rightarrow\infty}{\sim} f^{[-1]}\circ f (x)\underset{x \rightarrow\infty}{\sim} x.
\end{align*} \subsubsection{Asymmetric stable random variable} For $\alpha\in\intervalleoo{0}{1}\cup \intervalleoo{1}{2}$, we let $Y_\alpha$ be a random variable with the so-called asymmetric stable law of index $\alpha$, with distribution characterized by its Laplace transform, for all $\lambda>0$, \begin{align*} \Ec{\exp\left(-\lambda Y_\alpha \right)}&= \exp\left(-\lambda ^\alpha \right) \quad \text{if}\quad 0<\alpha <1,\\ &=\exp\left(\lambda ^\alpha \right) \quad \text{if}\quad 1<\alpha <2. \end{align*} It is ensured by \cite[Eq.(I20)]{zolotarev_one_1986} that those distributions have a density $d_\alpha$ and that this density is continuous and bounded. \paragraph{Domain of attraction.} Let $X$ be a random variable such that $\Pp{\abs{X}>x}$ is regularly varying with index $-\alpha$, centred if $\alpha\in\intervalleoo{1}{2}$. Consider a sequence $X_1, X_2,\dots$ of i.i.d.\ random variables with the law of $X$. Suppose that \begin{align*} \frac{\Pp{X>x}}{\Pp{\abs{X}>x}}\rightarrow 1. \end{align*} Then, let \begin{align}\label{eq:regularly varying function associated to X} B_X(x):=\abs{\frac{1-\alpha}{\Gam{2-\alpha}}}^{-\frac{1}{\alpha}} \left(\frac{1}{\Pp{\abs{X}\geq x}}\right)^{[-1]}. \end{align} We also denote this function by $B_\nu$ if $\nu$ is the law of $X$. Since $x\mapsto \frac{1}{\Pp{\abs{X}\geq x}} $ is $\alpha$-regularly varying, $B_X$ is $\alpha^{-1}$-regularly varying. Then, by \cite[Theorem~4.5.1]{whitt_stochastic_2002}, we have: \begin{align*} \frac{1}{B_X(n)}\cdot \sum_{i=1}^{n}X_i \tend{n\rightarrow\infty}{(d)} Y_\alpha, \end{align*} where $Y_\alpha$ has the asymmetric $\alpha$-stable law, which we recall has a density $d_\alpha$. In fact, if the random variable $X$ is integer-valued (and not supported on a non-trivial arithmetic progression), a more precise version of the above convergence, known as a \emph{local limit theorem}, holds; see \cite[Theorem~4.2.1]{MR0322926}.
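For a law with an exact Pareto tail, the normalization in \eqref{eq:regularly varying function associated to X} has a closed form, which makes the convergence above easy to sanity-check by simulation. A minimal sketch (the law $\Pp{X \ge x} = x^{-\theta}$ on $[1,\infty)$ and all numerical choices are illustrative assumptions, not taken from the text):

```python
import math
import random

# Hypothetical law with exact Pareto tail P(X >= x) = x^(-theta) on [1, oo);
# theta and all numerical choices below are illustrative assumptions.
theta = 0.5

def B(n):
    # B_X as defined above: |(1-theta)/Gamma(2-theta)|^(-1/theta) times the
    # compositional inverse of x -> 1/P(X >= x) = x^theta, i.e. y -> y^(1/theta).
    c = abs((1 - theta) / math.gamma(2 - theta)) ** (-1 / theta)
    return c * n ** (1 / theta)

# For theta = 1/2 the constant collapses to pi, so B(n) = pi * n^2.
print(abs(B(10) - math.pi * 100) < 1e-9)  # True

# S_n / B(n) is of order 1: it converges in law to Y_theta.
rng = random.Random(0)

def normalised_sum(n):
    # Inverse-transform sampling: U^(-1/theta) has the Pareto tail above.
    return sum(rng.random() ** (-1 / theta) for _ in range(n)) / B(n)

ratios = sorted(normalised_sum(20_000) for _ in range(25))
print(ratios[12])  # median of S_n / B(n) over 25 runs, of order 1
```

Note that no renormalization other than division by $B_X(n)$ is needed here: for $\theta \in (0,1)$ the summands have infinite mean, so no centring is required.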
Let $S_n=\sum_{i=1}^n X_i$; then we have \begin{align}\label{eq:stable local limit theorem} \sup_{k\in \Z} \Big| B_X(n)\Pp{S_n=k} - d_\alpha\left(\frac{k}{B_X(n)}\right) \Big| \underset{n\rightarrow\infty}{\rightarrow} 0. \end{align} Using the fact that the asymmetric $\alpha$-stable density $d_\alpha$ is bounded, note that the above convergence ensures in particular that there exists a constant $C$ so that for all $n\geq 1$ and all $k\in \Z$ we have \begin{align}\label{eq:uniform bound for the probability Sn=k} \Pp{S_n=k} \leq \frac{C}{B_X(n)}. \end{align} \subsubsection{The Potter bounds.} From \cite[Theorem~1.5.6(iii)]{bingham_regular_1989}: for $f$ a regularly varying function of index $\rho$ and for every $A>1$ and $\epsilon>0$, there exists $B$ such that for all $x,y\geq B$ we have \begin{equation} \frac{f(y)}{f(x)}\leq A \cdot \max \left\lbrace\left(\frac{y}{x}\right)^{\rho-\epsilon},\left(\frac{y}{x}\right)^{\rho+\epsilon}\right\rbrace. \end{equation} When $f$ is defined on the whole interval $\intervalleoo{0}{\infty}$ and bounded below by a positive constant, we can increase the constant $A$ so that the last display holds for any $x,y\geq 1$. Let us apply this to $B_X$, the regularly varying function associated to some positive random variable $X$ in the domain of attraction of a $\theta$-stable law.
For all $\epsilon>0$ there exists a constant $C$ such that for all $n\geq 1$ and all $\frac{1}{B_X(n)}\leq \lambda \leq 1$, \begin{align}\label{eq:potter bounds applied to B-1(lambda B_n)} C^{-1}\cdot n\cdot \lambda^{\theta+\epsilon}\leq B_X^{[-1]}(\lambda B_X(n)) \leq C\cdot n\cdot \lambda^{\theta-\epsilon}, \end{align} and so that in particular, possibly for another constant $C$, \begin{align} C^{-1}\cdot \frac{1}{n}\cdot \lambda^{-\theta+\epsilon}\leq \Pp{X\geq \lambda B_X(n)} \leq C\cdot \frac{1}{n}\cdot \lambda^{-\theta-\epsilon}. \end{align} \subsection{Estimates for Bienaymé--Galton--Watson trees with $\alpha$-stable tails}\label{subsec:estimates for BGW}\label{subsec:the case of BGW} Let $\mu$ be a critical reproduction law in the domain of attraction of an $\alpha$-stable distribution, with $\alpha\in \intervalleoo{1}{2}$. Recall that $d_\alpha$ is the density function of the random variable $Y_\alpha$. The following lemma contains all the results that we need in the subsequent sections of the appendix. \begin{lemma}\label{lem:application of the potter bounds} Let $D$ be a random variable with distribution $\mu^*$, the size-biased version of $\mu$, and let $(\tau_i)_{i\geq 1}$ be independent BGW trees with reproduction law $\mu$.
Then, for any $\eta>0$, there exists a constant $C$ such that for all $n\geq 1$ and for $\lambda\in \intervalleff{\frac{1}{B_n}}{1}$, where $B_n:=B_\mu(n)$, we have \begin{enumerate}[(i)] \item \label{it:probability of degree being large} $\displaystyle \Pp{D\geq \lambda B_n}\leq C \cdot \lambda^{1-\alpha-\eta} \cdot \frac{B_n}{n},$ \item \label{it:probability total size of Bn GW is k} For all $k\in \mathbb Z$,\ \[\displaystyle\Pp{\sum_{i=1}^{\lambda B_n}\abs{\tau_i}=k} \leq C\cdot \frac{ \lambda^{-\alpha-\eta}}{n}.\] \item \label{it:expectation inverse total size of GW} $\displaystyle\Ec{\frac{1}{\sum_{i=1}^{\lambda B_n}\abs{\tau_i}}} \leq C\cdot \frac{ \lambda^{-\alpha-\eta}}{n}.$ \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:application of the potter bounds}] We first prove (\ref{it:probability of degree being large}). Using \cite[Eq.(45)]{kortchemski_sub_2017}, we have \begin{align*} \Pp{D\geq k} = \mu^*(\intervallefo{k}{\infty}) \underset{k\rightarrow\infty}{\sim} \frac{\alpha}{\alpha-1 }\cdot k \cdot \mu(\intervallefo{k}{\infty}), \end{align*} so for $C$ a constant chosen large enough we get that for all $k\geq 1$, \begin{align*} \Pp{D\geq k} \leq C\cdot k \cdot \mu(\intervallefo{k}{\infty}). \end{align*} Applying this to $k=\lceil\lambda B_n\rceil$ and using the Potter bounds yields, for a value of $C$ that may change from line to line, \begin{align*} \Pp{D\geq \lambda B_n} \leq C\cdot \lceil\lambda B_n\rceil\cdot \mu(\intervallefo{\lambda B_n}{\infty}) &\leq C\cdot \lceil\lambda B_n\rceil\cdot \frac{1}{n}\cdot \lambda^{-\alpha-\eta}\\ &\leq C\cdot\frac{B_n}{n}\cdot \lambda^{1-\alpha-\eta}. \end{align*} So (\ref{it:probability of degree being large}) is proved. Now let us turn to (\ref{it:probability total size of Bn GW is k}).
Using, for example, \cite[Eq.(26)]{kortchemski_sub_2017}, we have \begin{align}\label{eq:probability that a BGW has n vertices} \Pp{\abs{\tau}=n}\underset{n\rightarrow\infty}{\sim} \frac{d_\alpha(0)}{n B_\mu(n)} \quad \text{ and so } \quad \Pp{\abs{\tau}\geq n}\underset{n\rightarrow\infty}{\sim} \frac{\alpha d_\alpha(0)}{B_\mu(n)}. \end{align} This ensures that the random variable $\abs{\tau}$ is in the domain of attraction of the asymmetric $1/\alpha$-stable law. We can also check from the last display and the definition \eqref{eq:regularly varying function associated to X} that \begin{align}\label{eq:relation Babstau to Bmu} B_{\abs{\tau}}(n)\sim C\cdot B_\mu^{[-1]}(n), \end{align} for some constant $C$. Using \eqref{eq:uniform bound for the probability Sn=k} we then get \begin{align*} \Pp{\sum_{i=1}^{\lambda B_n}\abs{\tau_i}=k} \leq \frac{C}{B_\mu^{[-1]}(\lambda B_n)} \leq C\cdot \frac{ \lambda^{-\alpha-\eta}}{n}, \end{align*} where the last inequality follows from \eqref{eq:potter bounds applied to B-1(lambda B_n)}. This finishes the proof of (\ref{it:probability total size of Bn GW is k}). The last point (\ref{it:expectation inverse total size of GW}) follows from an application of Lemma~\ref{lem:expectation of inverse of sum of asymmetric heavy tailed rv}, stated below, to the distribution of $\abs{\tau}$, using \eqref{eq:relation Babstau to Bmu} and an application of the Potter bounds to conclude. \end{proof} \begin{lemma}\label{lem:expectation of inverse of sum of asymmetric heavy tailed rv} Let $X$ be a random variable with distribution on $\intervallefo{1}{\infty}$, in the domain of attraction of an asymmetric $\theta$-stable law, with $\theta\in\intervalleoo{0}{1}$, and let $X_1, X_2,\dots$ be i.i.d.\ random variables with the law of $X$. Then, there exists a constant $C$ such that for all $N\geq 1$, \begin{align*} \Ec{\frac{1}{\sum_{i=1}^{N}X_i}} \leq \frac{C}{B_X(N)}.
\end{align*} \end{lemma} \begin{proof} Write \begin{align*} \Ec{\frac{1}{\sum_{i=1}^{N}X_i}} = \frac{1}{B_X(N)} \cdot \Ec{\frac{B_X(N)}{\sum_{i=1}^{N}X_i}}\leq \frac{1}{B_X(N)} \cdot \Ec{\frac{B_X(N)}{\max_{1\leq i \leq N}X_i}}. \end{align*} Then \begin{align*} \Ec{\frac{B_X(N)}{\max_{1\leq i \leq N}X_i}} &= \int_{0}^{\infty}\Pp{\frac{B_X(N)}{\max_{1\leq i \leq N}X_i}>x} \mathrm{d} x=\int_{0}^{\infty}\Pp{\max_{1\leq i \leq N}X_i<x^{-1}\cdot B_X(N)} \mathrm{d} x. \end{align*} Since the values of $X_i$ are at least $1$, the integrand of the last display becomes $0$ when $x>B_X(N)$. But when $x\leq B_X(N)$ we have \begin{align*} \Pp{\max_{1\leq i \leq N}X_i<x^{-1}\cdot B_X(N)} &\leq \Pp{X<x^{-1}\cdot B_X(N)}^N\\ &\leq \left(1 - \Pp{X\geq x^{-1}B_X(N)}\right)^N\\ &\leq \exp\left(-N\cdot\Pp{X\geq x^{-1}B_X(N)}\right)\\ &\leq \exp\left(-N \cdot C\cdot \frac{x^{\theta/2}}{N}\right)\leq \exp\left(-C\cdot x^{\theta/2}\right), \end{align*} which is integrable over $\R_+$. In the last line, we use that \begin{equation*} \Pp{X\geq x^{-1}B_X(N)}\geq C \cdot \frac{1}{N}\cdot (x^{-1})^{-\frac{\theta}{2}} \end{equation*} for some constant $C$ that does not depend on $N$, thanks to the Potter bounds with $\epsilon=\theta/2$, which apply here because $x^{-1}\geq \frac{1}{B_X(N)}.$ \end{proof} \subsection{Spine decomposition for marked Bienaymé--Galton--Watson trees}\label{subsec:spine decomposition with marks} \paragraph{Definitions.} We first recall a few definitions from Section~\ref{subsec:plane trees}. We write $\UHT=\bigcup_{n\geq0} \N ^n$ for the Ulam tree, and we denote by $\bT$ the set of plane trees. For any $\bt\in\bT$, and $v\in \bt $, we define $\theta_v(\bt)=\enstq{u\in\bU}{vu\in \bt}$. For $w\in \bU$, we let $w\bt = \enstq{wv}{v\in\bt}$. If $\bt\in\bT$ and $u\in \bt$, then we define $\out{\bt}(u)$ to be the number of children of $u$ in $\bt$. We drop the index if it is clear from the context.
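The formalism just recalled can be made concrete in a few lines; the encoding below (plane trees as prefix-closed sets of tuples of positive integers) is an illustrative sketch, not the paper's code:

```python
# A concrete encoding of the formalism above (illustrative, not the paper's
# code): a finite plane tree is a prefix-closed set of tuples of positive
# integers (Ulam-tree labels), the root being the empty tuple ().
def k_out(t, u):
    """Out-degree of u in t: the number of children u1, u2, ... present in t."""
    k = 0
    while u + (k + 1,) in t:
        k += 1
    return k

def theta(t, v):
    """The subtree above v, re-rooted: theta_v(t) = {u : vu in t}."""
    return {u[len(v):] for u in t if u[: len(v)] == v}

# Root with two children, the first child having one child of its own:
t = {(), (1,), (2,), (1, 1)}
print(k_out(t, ()))    # 2
print(k_out(t, (1,)))  # 1
print(theta(t, (1,)))  # {(), (1,)}
```

With this encoding, the $\BGW_\mu$ probability of a finite tree is simply the product of $\mu(k_{\mathrm{out}}(u))$ over its vertices.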
If $v=v_1\dots v_n$ we say that $\abs{v}=n$ is the height of $v$, and for any $k\leq n$, we set $\left[v\right]_k=v_1\dots v_k$. If $v\neq\emptyset$ we also let $\hat{v}=\left[v\right]_{n-1}$ be the father of $v$. We define $\bT^*=\enstq{(\bt,v)}{\bt\in\bT, \ v\in \bt}$ to be the set of plane trees with a marked vertex. Let $\mu$ be a probability measure on the integers, such that $\sum_{k\geq 0}k\mu(k)=1$. The associated probability measure $\BGW_\mu$ on plane trees is the measure such that for $T$ a random tree taken under this measure, we have, for every finite $\bt\in\bT$: \begin{equation} \Pp{T=\bt}=\BGW_\mu(\{\bt\})=\prod_{u\in \bt}\mu(\out\bt(u)). \end{equation} \paragraph{Marks on a tree.} Let $E$ be any measurable space. A \emph{finite tree with marks} is an ordered pair $\tilde{\bt}=(\bt,(x_u)_{u\in \bt})$ where $\bt\in\bT$ and $(x_u)_{u\in\bt}$ are elements of $E$ (corresponding to marks of the vertices). We denote by $\widetilde{\bT}$ the space of marked trees, and similarly, we let $\widetilde{\bT}^*$ be the set of marked trees with a distinguished vertex. Let $(\pi_k)_{k\geq 0}$ be a sequence of probability measures on $E$. A natural way to put random marks on a tree $\bt$ is to mark it with a family $(X_u)_{u\in\bt}$ of independent random variables with respective distributions $X_u\sim \pi_{\out{\bt}(u)}$. \paragraph{Cutting a tree at a vertex.} If $\bt\in\bT$, and $v\in \bt $, we define: \begin{align}\label{eq:definition Cut} \Cut{\bt}{v}&:=\bt\setminus\enstq{vu}{u\in\bU,\ u\neq\emptyset} =\bt\setminus (v\theta_v \bt) \cup \{v\}. \end{align} \paragraph{The Kesten tree.} We let \begin{itemize} \item $\left(D_i\right)_{i\geq 1}$ be a sequence of i.i.d. random variables with distribution $\mu^*$, where $\mu^*(k)=k\mu(k)$, for any $k\geq 0$, \item $(K_i)_{i\geq 1}$, which, conditionally on the sequence $\left(D_i\right)$, are independent and respectively uniform on $\intervalleentier{1}{D_i}$, \item $\mathbf{U}_n^*=K_1K_2\dots K_n$.
\item $\left(\tau_u\right)_{u\in\bU}$ are independent $\BGW$ trees with reproduction law $\mu$. \item $\cH=\enstq{\mathbf{U}_n^* i}{n\geq 0,\ i\in \intervalleentier{1}{D_{n+1}}\setminus \{K_{n+1}\}}$. \end{itemize} Then the Kesten tree is defined as \begin{align*} \KT^*=\enstq{\mathbf{U}_n^*}{n\geq 0}\cup \bigcup_{u\in\cH}u\tau_u. \end{align*} \paragraph{Cut Kesten tree.} Let $\tau$ be a BGW tree with offspring distribution $\mu$, independent of everything else. We let: \begin{align*} \KT^*_k:=\Cut{\KT^*}{\mathbf{U}_k^*}\cup (\mathbf{U}_k^*\tau), \end{align*} which is almost surely a finite tree. As before, we endow the finite random tree $\KT^*_k$ with some marks $(X_u)_{u\in \KT^*_k}$ such that conditionally on $\KT^*_k$, the marks are independent with respective conditional distribution $X_u\sim \pi_{\out{}(u)}$. We denote by $\widetilde{\KT}^*_k$ the resulting marked tree. \paragraph{The ancestral line of a vertex.} We let \begin{align*} \bA:=\bigcup_{n\geq1}(\N_0\times E)^n. \end{align*} If $\tilde{\bt}=(\bt,(x_u)_{u\in\bt})\in\widetilde{\bT}$, and $v=v_1v_2\dots v_k\in \bt$ we define: \begin{align*} A(\tilde{\bt},v):=\left((\out{\bt}(\left[v\right]_0),x_{\left[v\right]_0}),(\out{\bt}(\left[v\right]_1),x_{\left[v\right]_1}),\dots,(\out{\bt}(\left[v\right]_{k}),x_{\left[v\right]_{k}})\right)\in \bA, \end{align*} the marked ancestral line of vertex $v$ in the marked tree $\tilde{\bt}$. \paragraph{BGW tree with a uniform vertex satisfying some property.} Let $\mathscr{P}\subset\bA$ be some property. Consider $\widetilde{T}=(T,(X_u)_{u\in T})$ a marked BGW tree with reproduction distribution $\mu$ and mark distributions $(\pi_k)_{k\geq 0}$, and on the event $\enstq{\exists v \in T}{A(\widetilde{T},v)\in \mathscr{P}}$ where at least one vertex of $\widetilde{T}$ has the property $\mathscr{P}$ (which depends only on its ancestral line and the marks along it), we let $U$ be a vertex chosen uniformly at random among those for which this is the case.
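Before stating the spine decomposition, here is a toy exact check of the identity $\Pp{(\KT^*_k,\mathbf{U}_k^*)=(\bt,u)}=\Pp{T=\bt}$ that opens its proof, for the illustrative critical offspring law $\mu(0)=\mu(2)=\tfrac{1}{2}$ (all choices below are assumptions of this sketch, not from the text):

```python
from fractions import Fraction

# Toy exact check: P((K_k^*, U_k^*) = (t, u)) = P(T = t), for the illustrative
# critical offspring law mu(0) = mu(2) = 1/2, with k = 1, t the tree consisting
# of a root and two leaves, and u the first child of the root.
mu = {0: Fraction(1, 2), 2: Fraction(1, 2)}
mu_star = {j: j * p for j, p in mu.items()}  # size-biased law mu*(j) = j mu(j)

out = {(): 2, (1,): 0, (2,): 0}              # out-degrees of the vertices of t
u = (1,)

# Left-hand side, factor by factor along the spine:
lhs = Fraction(1)
for i in range(1, len(u) + 1):
    prefix = u[: i - 1]
    lhs *= mu_star[out[prefix]]               # P(D_i = k_out([u]_{i-1}))
    lhs *= Fraction(1, out[prefix])           # P(K_i = u_i | D_i)
for v, deg in out.items():
    if v != u[: len(v)] or len(v) >= len(u):  # v not a strict ancestor of u
        lhs *= mu[deg]                        # independent BGW factors

# Right-hand side: P(T = t) is the product of mu(k_out(v)) over all v in t.
rhs = Fraction(1)
for deg in out.values():
    rhs *= mu[deg]

print(lhs, rhs)  # both equal 1/8
assert lhs == rhs
```

The size-biasing by the spine degrees exactly cancels against the uniform choice of the spine child, which is the mechanism behind the general identity.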
\begin{proposition}\label{prop:spine decomposition with marks} For any non-negative function $F:\widetilde{\bT}^*\rightarrow\R$, we have \begin{align}\label{eq:spine decomposition with marks} \Ec{F(\widetilde{T},U)\ind{\exists v \in T,\ A(\widetilde{T},v)\in \mathscr{P}}}=\sum_{k\geq 0}\Ec{\frac{F(\widetilde{\KT}^*_k,\mathbf{U}_k^*)\ind{A(\widetilde{\KT}^*_k,\mathbf{U}_k^*)\in \mathscr{P}}}{\#\enstq{v\in\KT^*_k}{A(\widetilde{\KT}^*_k,v)\in \mathscr{P}}}}. \end{align} \end{proposition} \begin{proof} For any $\bt$ and $u\in \bt$ of height $k$, we have \begin{align}\label{eq:equality proba GW - biaised GW with vertex} &\Pp{(\KT^*_k,\mathbf{U}_k^*)=(\bt,u)}\notag\\ &=\prod_{1\leq i \leq k}\Pp{D_i=\out{\bt}(\left[u\right]_{i-1})}\cdot \Ppsq{K_i=u_i}{D_i=\out{\bt}(\left[u\right]_{i-1})} \cdot \prod_{v\in \bt, v\nprec u}\mu(\out{\bt}(v))\notag\\ &=\left(\prod_{1\leq i \leq k}\out{\bt}(\left[u\right]_{i-1})\mu(\out{\bt}(\left[u\right]_{i-1})) \frac{1}{\out{\bt}(\left[u\right]_{i-1})}\right) \cdot \prod_{v\in \bt, v\nprec u}\mu(\out{\bt}(v))\notag\\ &=\prod_{v\in \bt}\mu(\out{\bt}(v))\notag\\ &=\Pp{T=\bt}. \end{align} We now expand the expectation as a sum: \begin{align*} \Ec{F(\widetilde{T},U)\ind{\exists v \in T,\ A(\widetilde{T},v)\in \mathscr{P}}}&=\sum_{\bt\in \bT, u\in \bt}\Ec{F(\widetilde{T},U)\ind{\exists v \in T,\ A(\widetilde{T},v)\in \mathscr{P}} \cdot \ind{T=\bt, U=u}}.
\end{align*} For every $(\bt,u)$ with $\abs{u}=k$, we have \begin{align*} &\Ec{F(\widetilde{T},U)\ind{\exists v \in T,\ A(\widetilde{T},v)\in \mathscr{P}} \cdot \ind{T=\bt, U=u}}\\ &= \Pp{T=\bt} \cdot \Ecsq{F(\widetilde{T},u) \cdot \ind{A(\widetilde{T},u)\in \mathscr{P}} \cdot \Ppsq{U=u}{\widetilde{T}}}{T=\bt}\\ &= \Pp{T=\bt} \cdot \Ecsq{\frac{F(\widetilde{T},u) \cdot \ind{A(\widetilde{T},u)\in \mathscr{P}}}{\#\enstq{v\in \bt}{A(\widetilde{T},v)\in \mathscr{P}}}}{T=\bt}\\ &=\Pp{(\KT^*_k,\mathbf{U}_k^*)=(\bt,u)}\cdot \Ecsq{\frac{F(\widetilde{\KT}^*_k,u) \cdot \ind{A(\widetilde{\KT}^*_k,u)\in \mathscr{P}}}{\#\enstq{v\in \bt}{A(\widetilde{\KT}^*_k,v)\in \mathscr{P}}}}{\KT^*_k=\bt}\\ &=\Pp{(\KT^*_k,\mathbf{U}_k^*)=(\bt,u)}\cdot \Ecsq{\frac{F(\widetilde{\KT}^*_k,u) \cdot \ind{A(\widetilde{\KT}^*_k,u)\in \mathscr{P}}}{\#\enstq{v\in \bt}{A(\widetilde{\KT}^*_k,v)\in \mathscr{P}}}}{\KT^*_k=\bt,\mathbf{U}_k^*=u}\\ &=\Ec{\frac{F(\widetilde{\KT}^*_k,\mathbf{U}_k^*) \cdot \ind{A(\widetilde{\KT}^*_k,\mathbf{U}_k^*)\in \mathscr{P}} \cdot \ind{(\KT^*_k,\mathbf{U}_k^*)=(\bt,u)}}{\#\enstq{v\in \bt}{A(\widetilde{\KT}^*_k,v)\in \mathscr{P}}}}.\\ \end{align*} The third equality uses \eqref{eq:equality proba GW - biaised GW with vertex} and the fact that conditionally on the tree $T=\bt$ or $\KT^*_k=\bt$, the distribution of the marks on the tree is the same. The fourth equality uses the fact that the quantity in the conditional expectation does not depend on $\mathbf{U}_k^*$. The result is then obtained by summing over all heights $k\geq 0$ and all finite trees $\bt$ with a distinguished vertex $u$ at height $k$. \end{proof} \begin{remark} For our purposes, we will just use $E=\R_+$ but we want to emphasize that this result still holds true for other types of marks: the key point for the proof is that the law of the marks conditionally on the tree only depends on the degree of the corresponding vertex. 
\end{remark} \subsection{The contribution of vertices of small degree is small}\label{subsec:small blobs do not contribute} The goal of this section is to prove Proposition~\ref{prop:small blobs don't contribute}. For that, we prove Proposition~\ref{prop:small marks don't contribute}, which implies the former directly. Let $\alpha\in\intervalleoo{1}{2}$ and $\gamma>\alpha-1$. Let $\mu$ be a critical reproduction distribution in the domain of attraction of an $\alpha$-stable law. We let $b_n=B_\mu(n)$ be the $\frac{1}{\alpha}$-regularly varying function associated to $\mu$ by \eqref{eq:regularly varying function associated to X}. For all $n\geq 1$ for which the conditioning is non-degenerate, we let $T_n$ be a BGW tree with reproduction law $\mu$, conditioned to have exactly $n$ vertices. This random tree is endowed with marks $(X_u)_{u\in T_n}$ such that conditionally on $T_n$, the marks are independent with distributions that only depend on the degree of the corresponding vertex: $X_u\sim \pi_{\out{T_n}(u)}$, where $(\pi_k)_{k\geq 0}$ is a sequence of probability measures on $\R_+$. We further require that for some real number $m> \frac{4\alpha}{ ( 2\gamma + 1 - \alpha)}$, we have \begin{align*} \sup_{k\geq 0} \Ec{\left(\frac{X^{(k)}}{k^\gamma}\right)^m}<\infty, \end{align*} where for every $k\geq 0$, the random variable $X^{(k)}$ has distribution $\pi_k$. We prove the following. \begin{proposition}\label{prop:small marks don't contribute} In the setting described above, for any $\epsilon>0$ we have \begin{align*} \Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\out{T_n}(u)\leq \delta b_n}\right\rbrace>\epsilon b_n^\gamma} \underset{\delta\rightarrow 0}{\rightarrow}0, \end{align*} uniformly in $n\geq 1$ such that $T_n$ is well-defined.
\end{proposition} Remark that this proposition directly implies Proposition~\ref{prop:small blobs don't contribute} when applied to the mark distributions $(\pi_k)_{k\geq 0}$ given respectively by the laws of $(\diam(B_k))_{k\geq 0}$. In order to prove this result, we first prove an intermediate lemma. \begin{lemma}\label{lem:cutting slices} For any small enough $\eta>0$, $\epsilon>0$ and all $\delta\in \intervalleoo{0}{\epsilon^{\frac{1}{\eta}}}$, we have simultaneously for all $n\geq 1$, \begin{align*} \Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T_n}(u)\leq \delta b_n}\right\rbrace>\epsilon b_n^\gamma} \leq \delta^{\beta}, \end{align*} where $\beta=(\gamma +\frac{1-\alpha}{2}-5\eta)m-2\alpha-4\eta$, which is positive if $\eta$ is chosen small enough. \end{lemma} Let us show how the result we want follows from this lemma. \begin{proof}[Proof of Proposition~\ref{prop:small marks don't contribute}] Let us take $\eta>0$ small enough such that Lemma~\ref{lem:cutting slices} holds. For any $\epsilon>0$ small enough, and $0<\delta<\epsilon^{\frac{1}{\eta}}$ we have \begin{align*} \sup_{v\in T_n} \left(\sum_{u\preceq v} X_u \ind{\out{T_n}(u)\leq \delta b_n}\right) &\leq \sum_{i=0}^{\infty} \sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\delta 2^{-i-1}b_n<\out{T_n}(u)\leq \delta 2^{-i} b_n}\right\rbrace. \end{align*} Write $\frac{\epsilon}{1-2^{-\eta}}=\sum_{i=0}^{\infty} \epsilon \cdot 2^{-i\eta}$ and use a union bound and the result of Lemma~\ref{lem:cutting slices} for all pairs $(\epsilon',\delta')\in \{(2^{-i\eta}\epsilon,2^{-i}\delta),\ i\geq 0\}$, which, thanks to our assumption, still satisfy $\delta'<(\epsilon')^{\frac{1}{\eta}}$.
This yields \begin{align*} &\Pp{\sup_{v\in T_n} \left(\sum_{u\preceq v} X_u \ind{\out{T_n}(u)\leq \delta b_n}\right)>\frac{\epsilon}{1-2^{-\eta}}b_n^\gamma} \\ &\leq \sum_{i=0}^{\infty} \Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\delta 2^{-i-1}b_n<\out{T_n}(u)\leq \delta 2^{-i} b_n}\right\rbrace>2^{-i\eta}\cdot \epsilon \cdot b_n^\gamma}\\ &\leq \sum_{i=0}^{\infty} (2^{-i}\delta)^{\beta} \underset{\delta\rightarrow 0}{\longrightarrow} 0, \end{align*} which is what we wanted to prove. \end{proof} The rest of the section is devoted to the proof of Lemma~\ref{lem:cutting slices}. \subsubsection{Proof of Lemma~\ref{lem:cutting slices}} For this proof, we use Proposition~\ref{prop:spine decomposition with marks} with a certain property $\tilde{P}(\epsilon,\delta,n)$, defined as the subset of $\bA$ such that for any $\tilde\bt=(\bt,(x_u)_{u\in\bt})$ and $v\in\bt$, \begin{align*} A(\tilde\bt,v)\in \tilde P(\epsilon,\delta,n) \quad \iff \quad \sum_{u\preceq v} x_u \ind{\frac{\delta}{2}b_n<\out{\bt}(u)\leq \delta b_n}>\epsilon b_n^\gamma. \end{align*} Let $T_n\sim \BGW_\mu^n$. For a small $\eta>0$, a union bound gives \begin{align}\label{eq:exponential decay of height of conditioned BGW} &\Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T_n}(u)\leq \delta b_n}\right\rbrace>\epsilon b_n^\gamma}\notag \\ &\leq \Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T_n}(u)\leq \delta b_n}\right\rbrace>\epsilon b_n^\gamma \quad \text{and} \quad \haut(T_n)\leq \delta^{-\eta}\frac{n}{b_n}}+\Pp{\haut(T_n)>\delta^{-\eta}\frac{n}{b_n}}. \end{align} Thanks to \cite[Theorem~2]{kortchemski_sub_2017}, the second term is already smaller than some $C_1\exp(-C_2\delta^{-\eta})$, uniformly in $n$.
We then just have to take care of the first term, for which we write \begin{align}\label{eq:conditioned BGW to unconditioned} &\Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T_n}(u)\leq \delta b_n}\right\rbrace>\epsilon b_n^\gamma \quad \text{and} \quad \haut(T_n)\leq \delta^{-\eta}\frac{n}{b_n}}\notag\\ &=\frac{1}{\Pp{\abs{T}=n}}\cdot \Pp{\abs{T}=n \quad \text{and} \quad \exists v\in T, \quad \sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T_n}(u)\leq \delta b_n}>\epsilon b_n^\gamma \quad \text{and} \quad \haut(T)\leq \delta^{-\eta}\frac{n}{b_n}}, \end{align} where here $T$ is an unconditioned BGW tree. We already know from \eqref{eq:probability that a BGW has n vertices} that \begin{align*} \Pp{\abs{T}=n}\underset{n \rightarrow\infty}{\sim} \frac{d_\alpha(0)}{nb_n}. \end{align*} Using the spine decomposition Proposition~\ref{prop:spine decomposition with marks}, we can write the second factor as \begin{align}\label{eq:spine decomposition applied to delta slice} &\Pp{\abs{T}=n, \text{ and } \exists v\in T,\ \sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T_n}(u)\leq \delta b_n}> \epsilon b_n^\gamma \text{ and }\haut(v)\leq \delta^{-\eta}\frac{n}{b_n} } \notag \\ &=\Ec{\sum_{k=0}^{\delta^{-\eta}\frac{n}{b_n}} \frac{\ind{\abs{\KT_k^*}=n}\cdot \ind{A(\widetilde\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{A(\widetilde\KT_k^*,v)\in \tilde{P}(\epsilon,\delta,n)}}}. \end{align} Then we are going to upper-bound the last display by using the fact that from the definition of $\tilde{P}(\epsilon,\delta,n)$, for any $v\in \KT_k^*$ such that $A(\widetilde\KT_k^*,v)\in \tilde{P}(\epsilon,\delta,n)$, any descendant $u$ of $v$ satisfies $A(\widetilde\KT_k^*,u)\in \tilde{P}(\epsilon,\delta,n)$. 
We first introduce \begin{align*} N(\epsilon,\delta,n):=\inf \enstq{i\geq 0}{\sum_{u\preceq \mathbf{U}_i^*} X_u \ind{\frac{\delta}{2}b_n<\out{\KT_k^*}(u)\leq \delta b_n}> \epsilon b_n^\gamma}, \end{align*} which is the height of the first vertex on the spine that satisfies $\tilde{P}(\epsilon,\delta,n)$. This, in particular, entails that $\frac{\delta}{2}b_n \leq \out{\KT_k^*}(\mathbf{U}_{N(\epsilon,\delta,n)}^*) \leq \delta b_n$. For technical reasons that will be explained later, let us split the offspring of $u=\mathbf{U}_{N(\epsilon,\delta,n)}^*$ into two subsets whose sizes are of the same order: those written as $ui$ for $i\leq \frac{\delta}{4} b_n$ and those written as $ui$ for $i> \frac{\delta}{4} b_n$. We only consider the progeny of the former for now and use that to lower-bound the total number $\#\enstq{v\in \KT_k^*}{A(\widetilde\KT_k^*,v)\in \tilde{P}(\epsilon,\delta,n)}$. We then get \begin{multline}\label{eq:upper bound using only a quarter of the progeny of the tip} \Ec{\sum_{k=0}^{\delta^{-\eta}\frac{n}{b_n}} \frac{\ind{\abs{\KT_k^*}=n}\cdot \ind{A(\widetilde\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{A(\widetilde\KT_k^*,v)\in \tilde{P}(\epsilon,\delta,n)}}}\\ \leq \sum_{k=0}^{\delta^{-\eta}\frac{n}{b_n}} \Ec{\frac{\ind{\abs{\KT_k^*}=n}\cdot \ind{A(\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}}. \end{multline} Now let us state a lemma and conclude from there. We prove the lemma at the end of the section. 
\begin{lemma}\label{lem:bounding every term in the sum of delta slices} For all $n$ and $k\leq \delta^{-\eta}\frac{n}{b_n}$, and $\delta<\epsilon^\frac{1}{\eta}$ small enough, there exists a constant $C$ such that \begin{enumerate}[(i)] \item\label{it:proba that size is n conditionally on rest of tree} \begin{align*} \Ppsq{\abs{\KT_k^*}=n}{\frac{\ind{A(\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}} \leq \frac{C\cdot \delta^{-\alpha-\eta}}{n}, \end{align*} \item\label{it:proba that Uk satisfies property} \begin{align*} \Pp{A(\widetilde{\KT}_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)} \leq \delta^{(\gamma +\frac{1-\alpha}{2}-5\eta)m}, \end{align*} \item \label{it:expectation of the inverse of number of vertices above UN} \begin{align*} \Ecsq{\frac{1}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for } i\leq \frac{\delta}{4}b_n}}}{A(\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta ,n)}\leq\frac{C \delta^{-\alpha-\eta}}{n}. \end{align*} \end{enumerate} \end{lemma} We use the above lemma to take care of every term in the sum appearing on the right-hand side of \eqref{eq:upper bound using only a quarter of the progeny of the tip} and sum over $k$ to get the following \begin{align*} \sum_{k=0}^{\delta^{-\eta}\frac{n}{b_n}}\Ec{ \frac{\ind{\abs{\KT_k^*}=n}\cdot \ind{A(\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta ,n)}}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}}&\leq \delta^{-\eta}\frac{n}{b_n} \cdot C \frac{\delta^{-\alpha-\eta}}{n} \cdot C \frac{\delta^{-\alpha-\eta}}{n}\cdot \delta^{(\gamma +\frac{1-\alpha}{2}-5\eta)m}\\ &\leq C \cdot \frac{\delta^{(\gamma +\frac{1-\alpha}{2}-5\eta)m-2\alpha-3\eta}}{n b_n}.
\end{align*} Plugging the last display into \eqref{eq:conditioned BGW to unconditioned} using the equality \eqref{eq:spine decomposition applied to delta slice}, and using \eqref{eq:exponential decay of height of conditioned BGW} and \eqref{eq:probability that a BGW has n vertices}, we then get that for $\delta$ small enough with $\delta<\epsilon^{\frac{1}{\eta}}$, we have \begin{align*} \Pp{\sup_{v\in T_n} \left\lbrace\sum_{u\preceq v} X_u \ind{\frac{\delta}{2}b_n<\out{T_n}(u)\leq \delta b_n}\right\rbrace>\epsilon b_n^\gamma} \leq \delta^{(\gamma +\frac{1-\alpha}{2}-5\eta)m-2\alpha-4\eta}, \end{align*} which is what we wanted to prove. \subsubsection{Proof of Lemma~\ref{lem:bounding every term in the sum of delta slices}} Let us successively prove the three points of the lemma. First, let us remark that if $\delta\leq \frac{8}{b_n}$, then (\ref{it:proba that size is n conditionally on rest of tree}) and (\ref{it:expectation of the inverse of number of vertices above UN}) are trivial. Indeed, in that case the quantities on the left-hand side are smaller than $1$. On the other hand, we can prove, using the Potter bounds, that if we choose the constant large enough, the right-hand side of that inequality is always greater than $1$ for $\delta$ in that range. Hence when proving (\ref{it:proba that size is n conditionally on rest of tree}) and (\ref{it:expectation of the inverse of number of vertices above UN}), we can always assume that $\delta\geq \frac{8}{b_n}$, so that $\frac{\delta}{4}b_n\geq 2$. \subparagraph{Proof of (\ref{it:proba that size is n conditionally on rest of tree}).} Let us work on the event $\left\lbrace A(\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta ,n) \right \rbrace$. We denote by $v_1,\dots v_J$ the vertices of the form $\mathbf{U}_{N(\epsilon,\delta,n)}^*i$ for $i> \frac{\delta}{4}b_n$ that are not $\mathbf{U}_{N(\epsilon,\delta,n)+1}$. There is some number $J$ of them, where $1\leq \frac{\delta}{4} b_n-1 \leq J \leq \delta b_n$.
For this proof, we define $\Cut{\KT_k^*}{v_1,v_2,\dots v_J}$ similarly as in \eqref{eq:definition Cut}, except that this time we remove all the vertices that are strictly above any one of the vertices $v_1,v_2,\dots v_J$. Now, note that the knowledge of the tree $\Cut{\KT_k^*}{v_1,v_2,\dots v_J}$ is enough to compute the quantity $\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i \text{ for } i \leq \frac{\delta}{4}b_n}$, and that conditionally on $\Cut{\KT_k^*}{v_1,v_2,\dots v_J}$, the subtrees $\tau_1,\dots ,\tau_J$ respectively above $v_1,\dots v_J$ are i.i.d.\ $\BGW_\mu$-distributed random trees. Then the total size of the whole tree $\KT_k^*$ is exactly $n$ if and only if the total volume of those trees $\tau_1,\dots ,\tau_J$ is exactly $n$ minus the number of vertices in the rest of $\KT_k^*$. This (conditional) probability is bounded above as follows: \begin{align*} &\Ppsq{\abs{\KT_k^*}=n}{\frac{\ind{A(\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}}\\ &= \Ecsq{\Ppsq{\abs{\KT^*_k}=n}{\Cut{\KT_k^*}{v_1,v_2,\dots v_J}}}{\frac{\ind{A(\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}}\\ &= \Ecsq{\Ppsq{\sum_{j=1}^J \abs{\tau_j}=n+J-\abs{\KT^*_k}}{\Cut{\KT_k^*}{v_1,v_2,\dots v_J}}}{\frac{\ind{A(\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i\text{ for some } i\leq \frac{\delta}{4}b_n}}}\\ &\leq \frac{C\cdot \delta^{-\alpha-\eta}}{n}, \end{align*} using Lemma~\ref{lem:application of the potter bounds}(\ref{it:probability total size of Bn GW is k}) and the fact that by our assumptions we have $J\geq \frac{\delta}{4}b_n-1\geq 1$.
\subparagraph{Proof of (\ref{it:proba that Uk satisfies property}).} For any $k\geq 1$, let us write $(X_i)_{0\leq i\leq k}=(X_{\mathbf{U}_i^*})$ to simplify the notation. We can write, using that $\epsilon\geq \delta^{\eta}$, \begin{align}\label{eq:proba that Uk has the property is split in two term} &\Pp{A(\widetilde{\KT}_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)}\notag\\ &\leq \Pp{\sum_{i=1}^{k}X_i \ind{\frac{\delta}{2}b_n< D_i\leq \delta b_n}>\epsilon b_n^\gamma}\notag\\ &\leq \Ppsq{\sum_{i=1}^{k}\frac{X_i}{(\delta b_n)^\gamma}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}>\delta^{-\gamma+\eta}}{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\leq \delta^{1-\alpha -3\eta}}+\Pp{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}> \delta^{1-\alpha -3\eta}}. \end{align} The second term of the last display is always smaller than what we would get by taking the maximum value for $k$, i.e.\ $k=\delta^{-\eta}\frac{n}{b_n}$. Using Lemma~\ref{lem:application of the potter bounds}(\ref{it:probability of degree being large}) we have for all $n\geq 1$, \[\Pp{\frac{\delta}{2}b_n< D_i \leq \delta b_n}\leq C\cdot\delta ^{1-\alpha-\eta}\cdot \frac{b_n}{n}.\] Hence \begin{align*} \Pp{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}> \delta^{1-\alpha -3\eta}} &\leq \Pp{\mathrm{Bin}\left(\delta^{-\eta} \frac{n}{b_n},1\wedge (C\cdot \delta^{1-\alpha-\eta}\cdot \frac{b_n}{n}) \right)>\delta^{1-\alpha -3\eta}}, \end{align*} which decays exponentially in a negative power of $\delta$.
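The exponential decay of this binomial tail is an instance of the standard Chernoff bound $\Pp{\mathrm{Bin}(n,p)\geq k}\leq \exp(-n\,\mathrm{KL}(k/n\,\|\,p))$ for $k/n>p$. As a hedged, self-contained numerical sketch of that mechanism (with toy parameters of our own choosing, not the $\delta$-dependent ones above):

```python
from math import comb, exp, log

def binom_tail(n, p, k):
    # exact P(Bin(n, p) >= k) by direct summation
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def chernoff_bound(n, p, k):
    # relative-entropy form of the Chernoff bound, valid for k/n > p:
    # P(Bin(n, p) >= k) <= exp(-n * KL(k/n || p))
    a = k / n
    kl = a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))
    return exp(-n * kl)

# a threshold far above the mean np = 2: the bound is valid and already small
n, p, k = 20, 0.1, 8
assert binom_tail(n, p, k) <= chernoff_bound(n, p, k) < 0.01
```

In the proof above, the threshold exceeds the mean of the binomial by a factor $\delta^{-\eta}$, which is what drives the decay as $\delta\rightarrow 0$.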
For the first term of \eqref{eq:proba that Uk has the property is split in two term}, we use the fact that conditionally on the event $\{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\leq \delta^{1-\alpha -3\eta}\}$, all the random variables $\frac{X_i}{(\delta b_n)^\gamma}$ are independent with $m$-th moment bounded above uniformly by the same constant $C$, \begin{align*} \sup_{0\leq i\leq k} \Ec{\left(\frac{X_i}{(\delta b_n)^\gamma}\right)^m}<C. \end{align*} Note that we have (for another constant $C$) \begin{align}\label{eq:upper-bound sum expectation Xi} \sum_{i=1}^{k}\frac{\Ec{X_i}}{(\delta b_n)^\gamma}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\leq C \cdot \sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}. \end{align} We use Markov's inequality and a result of Petrov \cite[Chapter~III. Result 5.16]{MR0388499}, which applies thanks to the fact that $m\geq 2$. In the third line, we will use that $\gamma>\alpha-1$ so that for $\delta$ small enough we have $\delta^{-\gamma+\eta}-C\cdot \delta^{1-\alpha -3\eta}\geq \delta^{-\gamma+2\eta}$. 
\begin{align*} &\Ppsq{\sum_{i=1}^{k}\frac{X_i}{(\delta b_n)^\gamma}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}>\delta^{-\gamma+\eta}}{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\leq \delta^{1-\alpha -3\eta}}\\ &\underset{\eqref{eq:upper-bound sum expectation Xi}}{\leq} \Ppsq{\sum_{i=1}^{k}\frac{X_i-\Ec{X_i}}{(\delta b_n)^\gamma}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}>\delta^{-\gamma+\eta}-C\delta^{1-\alpha -3\eta}}{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\leq \delta^{1-\alpha -3\eta}}\\ &\underset{\text{Markov}}{\leq} \frac{\Ecsq{\left(\sum_{i=1}^{k}\frac{X_i-\Ec{X_i}}{(\delta b_n)^\gamma}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\right)^m}{\sum_{i\leq k}\ind{\frac{\delta }{2}b_n< D_i\leq \delta b_n}\leq \delta^{1-\alpha -3\eta}}}{\left(\delta^{-\gamma+2\eta}\right)^m}\\ &\underset{\text{Petrov }}{\leq} C\cdot \frac{(\delta^{1-\alpha -3\eta})^\frac{m}{2} }{\left(\delta^{-\gamma+2\eta}\right)^m}\\ &\leq \delta^{(\gamma +\frac{1-\alpha}{2}-4\eta)m}. \end{align*} Taking the sum of the two terms in \eqref{eq:proba that Uk has the property is split in two term} ensures that for $\epsilon$ small enough and $\delta<\epsilon^{1/\eta}$, \begin{align*} \Pp{A(\widetilde{\KT}_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)} \leq \delta^{(\gamma +\frac{1-\alpha}{2}-5\eta)m}. \end{align*} \subparagraph{Proof of (\ref{it:expectation of the inverse of number of vertices above UN}).} Now let us reason conditionally on the event $\{A(\widetilde{\KT}_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta,n)\}$. On that event, the quantity $\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i \text{ for } i\leq \frac{\delta}{4}b_n}$ is at least the sum of the total sizes of $\frac{\delta}{4}b_n-1$ independent $\BGW$ trees branching off the spine.
Hence \begin{align*} &\Ecsq{\frac{1}{\#\enstq{v\in \KT_k^*}{v\succeq \mathbf{U}_{N(\epsilon,\delta,n)}^*i \text{ for } i\leq \frac{\delta}{4}b_n}}}{A(\KT_k^*,\mathbf{U}_k^*)\in \tilde{P}(\epsilon,\delta ,n)}\\ &\leq \Ec{\frac{1}{\sum_{i=1}^{\frac{\delta}{4}b_n-1}\abs{\tau_i}}} \leq\frac{C \delta^{-\alpha-\eta}}{n}, \end{align*} where the $\tau_i$'s are i.i.d.\ with distribution $\BGW_\mu$. The last inequality is obtained using Lemma~\ref{lem:application of the potter bounds}(\ref{it:expectation inverse total size of GW}). \subsection{Proof of Lemma~\ref{lem:moment measure discrete block implies B2 or B3}} \label{subsec:proof of lemma about measure} Before going into the proof of Lemma~\ref{lem:moment measure discrete block implies B2 or B3}, let us recall some general arguments concerning the \L ukasiewicz path of BGW trees conditioned on their total size. For $T_n$ a BGW tree conditioned on having total size $n$, the law of the vector $(\out{T_n}(v_1)-1,\dots,\out{T_n}(v_n)-1; \nu_{v_1}(B_{v_1}), \dots ,\nu_{v_n}(B_{v_n}))$ can be described as \begin{align*} \mathrm{Law}\left((X_1,\dots, X_n; Z_1, \dots ,Z_n) \ | \ \sum_{i=1}^nX_i=-1, \ \forall k\leq n-1, \sum_{i=1}^k X_i\geq 0\right), \end{align*} where the $(X_i,Z_i)_{1\leq i \leq n}$ are i.i.d.\ with the distribution of $(D-1, \nu_D(B_D))$, where $D$ follows the reproduction distribution. Now using the so-called Vervaat transform, the above law can also be expressed as \begin{align*} \mathrm{Law}\left((X_{U+1},\dots, X_{U+n}; Z_{U+1}, \dots ,Z_{U+n}) \ \Big| \ \sum_{i=1}^nX_i=-1\right), \end{align*} where $U$ is defined as $U:=\min\enstq{1\leq k \leq n}{\sum_{i=1}^k X_i = \min_{1\leq j \leq n} \sum_{i=1}^j X_i}$, and the indices in the last display are interpreted modulo $n$. In what follows, we will also use an argument of absolute continuity.
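The Vervaat transform just described admits a very short implementation; the following is a minimal sketch of our own (the function name and the toy walks are illustrative, not from the paper):

```python
def vervaat(steps):
    """Cyclically shift a walk with total sum -1 at its first minimum.

    Returns the rotated step sequence, whose partial sums stay >= 0
    until the final step (a Lukasiewicz path of a plane tree).
    """
    n = len(steps)
    partial, s = [], 0
    for x in steps:
        s += x
        partial.append(s)
    # U = first (1-based) index at which the minimum of the partial sums is attained
    u = partial.index(min(partial)) + 1
    return steps[u % n:] + steps[:u % n]

# Example: steps X_i = (out-degree of v_i) - 1, total sum -1
walk = [-1, 2, -1, -1]
rotated = vervaat(walk)
sums = [sum(rotated[:k]) for k in range(1, len(rotated) + 1)]
assert sums[-1] == -1 and all(s >= 0 for s in sums[:-1])
```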
For any bounded function $F$ we can write \begin{align*} &\Ecsq{F((X_1,\dots, X_{\lfloor \frac{3n}{4}\rfloor }; Z_1, \dots ,Z_{\lfloor \frac{3n}{4}\rfloor }))}{\sum_{i=1}^n X_i=-1}\\ &= \Ec{F((X_1,\dots, X_{\lfloor \frac{3n}{4}\rfloor }; Z_1, \dots ,Z_{\lfloor \frac{3n}{4}\rfloor })) \frac{\Ppsq{\sum_{i=\lfloor \frac{3n}{4}\rfloor+1}^n X_i=-1 -\sum_{i=1}^{\lfloor \frac{3n}{4}\rfloor} X_i }{\sum_{i=1}^{\lfloor \frac{3n}{4}\rfloor} X_i}}{\Pp{\sum_{i=1}^n X_i=-1}}}. \end{align*} Using the local limit theorem \cite[Theorem~4.2.1]{MR0322926} and the fact that the $\alpha$-stable density is bounded \cite[Eq.(I20)]{zolotarev_one_1986}, the term \begin{align*} \frac{\Ppsq{\sum_{i=\lfloor \frac{3n}{4}\rfloor+1}^n X_i=-1 -\sum_{i=1}^{\lfloor \frac{3n}{4}\rfloor} X_i }{\sum_{i=1}^{\lfloor \frac{3n}{4}\rfloor} X_i}}{\Pp{\sum_{i=1}^n X_i=-1}} \end{align*} appearing in the expectation is bounded uniformly in $n\geq 4$. Using indicator functions for $F$, this ensures that any event whose probability tends to $0$ or $1$ as $n\rightarrow \infty$ under the unconditioned model also has probability tending to $0$ or $1$ under the conditioned model. Note that we can apply the same argument to the vector $(X_{\lfloor \frac{n}{4}\rfloor },\dots, X_{n}; Z_{\lfloor \frac{n}{4}\rfloor}, \dots ,Z_{n})$. \begin{proof}[Proof of Lemma~\ref{lem:moment measure discrete block implies B2 or B3}] We consider the case $\beta \leq \alpha$ and the case $\beta > \alpha$ in turn. $\bullet$ Case $\beta \leq \alpha$. In this case, we want to show that there exists a sequence $(a_n)_{n\geq 1}$ so that $\frac{1}{a_n}\cdot (M_{\lfloor nt \rfloor})_{0\leq t\leq 1} \underset{n \rightarrow \infty}{\rightarrow} (t)_{0\leq t \leq 1}$ in probability for the uniform topology. First, let us show that it is enough to check this if we replace $(M_k)_{1\leq k \leq n}$ with $(S_k)_{1\leq k \leq n}$, where $S_k:=\sum_{i=1}^kZ_i$.
Indeed, if \begin{align}\label{eq:Sk converges in probability} \frac{1}{a_n}\cdot (S_{\lfloor nt \rfloor})_{0\leq t\leq 1} \underset{n \rightarrow \infty}{\rightarrow} (t)_{0\leq t \leq 1} \end{align} in probability, then we have \begin{align*} \frac{1}{a_n}\cdot (S_{\lfloor nt \rfloor})_{0\leq t\leq \frac{3}{4}} \underset{n \rightarrow \infty}{\rightarrow} (t)_{0\leq t \leq \frac{3}{4}} \quad \text{and} \quad \frac{1}{a_n}\cdot (S_{\lfloor nt \rfloor} - S_{\lfloor \frac{n}{4} \rfloor})_{\frac14\leq t\leq 1} \underset{n \rightarrow \infty}{\rightarrow} (t-\frac14)_{\frac14\leq t \leq 1} \end{align*} also in probability. By the absolute continuity argument, both those convergences also hold conditionally on the event $\{\sum_{i=1}^nX_i=-1\}$, which ensures that $\frac{1}{a_n}\cdot (S_{\lfloor nt \rfloor})_{0\leq t\leq 1} \underset{n \rightarrow \infty}{\rightarrow} (t)_{0\leq t \leq 1}$ under $\Ppsq{\cdot }{\sum_{i=1}^{n}X_i=-1}$. This is then enough to get that $\frac{1}{a_n}\cdot (S_{U+\lfloor nt \rfloor})_{0\leq t\leq 1} \underset{n \rightarrow \infty}{\rightarrow} (t)_{0\leq t \leq 1}$ as well. Since $(M_{\lfloor nt \rfloor})_{0\leq t\leq 1}$ is distributed as $(S_{U+\lfloor nt \rfloor})_{0\leq t\leq 1}$ under $\Ppsq{\cdot }{\sum_{i=1}^{n}X_i=-1}$, we get $\frac{1}{a_n}\cdot (M_{\lfloor nt \rfloor})_{0\leq t\leq 1} \underset{n \rightarrow \infty}{\rightarrow} (t)_{0\leq t \leq 1}$ in probability, as required. In the end, we just have to find a sequence $a_n$ such that \eqref{eq:Sk converges in probability} holds. In fact, by monotonicity, we just need to check that the convergence holds for $t=1$. If $\Ec{Z}=\Ec{\nu_D(B_D)}$ is finite, then it is easy to check that we can take $a_n:= \Ec{\nu_D(B_D)} \cdot n$.
This is always the case if $\beta <\alpha$ since, using the assumption of the lemma, \begin{align*} \Ec{\nu_D(B_D)}&= \sum_{k=0}^{\infty}\Pp{D=k}\Ec{\nu_k(B_k)}\\ &\underset{\text{assum.}}{\leq} C \cdot \sum_{k=0}^{\infty}\Pp{D=k} k^\beta \leq C\cdot \Ec{D^\beta} < \infty, \end{align*} so we can apply the law of large numbers. If $\Ec{Z}$ is infinite, which in our case can only happen if $\beta=\alpha$, we need to understand the tail behaviour of $Z$. We will show in that case that \begin{align}\label{eq:tail Z from tail D} \P(Z\geq x) \sim \Pp{D \geq x^{\frac{1}{\alpha}}} \cdot \Ec{\nu(\cB)} \qquad \text{as}~x\to \infty, \end{align} which in particular entails that $Z$ has a regularly varying tail of index $-1$. General results for sums of random variables with regularly varying tails (e.g.\ \cite[Theorem~3.7.2]{MR2722836}) then yield the result. First, for a given integer $N$, we introduce $Y_N^+$ and $Y_N^-$, whose distributions are defined in such a way that for all $x\geq 0$, \begin{align*} \Pp{Y_N^+\geq x}= \sup_{k\geq N} \Pp{\frac{\nu_k(B_k)}{k^\alpha} \geq x} \quad \text{ and } \quad \Pp{Y_N^-\geq x}= \inf_{k\geq N} \Pp{\frac{\nu_k(B_k)}{k^\alpha} \geq x}. \end{align*} Note that for both of them, we have the following bound, thanks to our assumption and Markov's inequality: \begin{align*} \Pp{Y_N^-\geq x}\leq \Pp{Y_N^+\geq x} \leq \sup_{k\geq 1}\Pp{\frac{\nu_k(B_k)}{k^\alpha} \geq x} \leq \frac{\sup_{k\geq 1} \Ec{\left(\frac{\nu_k(B_k)}{k^\alpha}\right)^{1+\eta}}}{x^{1+\eta}}. \end{align*} Since the quantity in the last display is integrable in $x$, and since $\Pp{Y_N^\pm\geq x}\rightarrow \Pp{\nu(\cB)\geq x}$ as $N\rightarrow\infty$ thanks to assumption D\ref{c:GHPlimit}, we have $\Ec{Y_N^\pm}\rightarrow \Ec{\nu(\cB)}$ by dominated convergence. Also note that we have $\Ec{\left(Y_N^\pm\right)^{1+\eta/2}}<\infty$.
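Before the proof, a sanity check on the product-tail mechanism behind \eqref{eq:tail Z from tail D}: when the heavy factor is exactly Pareto, the asymptotic relation $\Pp{Y D^\alpha\geq x}\sim\Ec{Y}\,\Pp{D^\alpha\geq x}$ becomes an identity. The following is a hedged numeric sketch with toy distributions of our own choosing, not from the paper:

```python
# D Pareto(alpha) on [1, inf): P(D >= y) = y**(-alpha), so D**alpha is
# Pareto(1): P(D**alpha >= t) = 1/t for t >= 1.  For an independent Y with
# finitely many values y_j and weights p_j, and x large enough that
# x / y_j >= 1 for all j:
#   P(Y * D**alpha >= x) = sum_j p_j * y_j / x = E[Y] * P(D**alpha >= x).
values, probs = [0.5, 1.0, 3.0], [0.2, 0.5, 0.3]
x = 100.0

lhs = sum(p * (y / x) for y, p in zip(values, probs))   # P(Y D^alpha >= x)
ey = sum(p * y for y, p in zip(values, probs))          # E[Y]
rhs = ey * (1.0 / x)                                    # E[Y] P(D^alpha >= x)
assert abs(lhs - rhs) < 1e-15
```

In the lemma, $Y$ is only asymptotically controlled through $Y_N^\pm$ and $D$ is only regularly varying, which is why the argument below sandwiches the tail between the two $(1+o(1))$ bounds instead of using an exact identity.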
In order to prove \eqref{eq:tail Z from tail D}, we then just need to show that for every $N$, we have \begin{align*} \P(Z\geq x) \geq \Pp{D \geq x^{\frac{1}{\alpha}}} \cdot \Ec{Y_N^-} (1+o(1)) \quad \text{ and } \quad \P(Z\geq x) \leq \Pp{D \geq x^{\frac{1}{\alpha}}} \cdot \Ec{Y_N^+} (1+o(1)). \end{align*} Let us fix $N$ and write \begin{align*} \Pp{Z\geq x}= \Pp{\nu_D(B_D) \geq x} &= \Pp{\nu_D(B_D)\geq x, \ D\geq N} + \Pp{\nu_D(B_D) \geq x, \ D<N}\\ &= \Pp{\nu_D(B_D)\geq x, \ D\geq N} + O(x^{-1-\eta}). \end{align*} Now, with a random variable $Y_N^+$ that is independent of $D$, we can write \begin{align*} \Pp{\nu_D(B_D)\geq x, \ D\geq N}&= \Ec{\Ppsq{ \frac{\nu_D(B_D)}{D^\alpha}\geq \frac{x}{D^\alpha}}{D} \ind{D\geq N}}\\ &\leq \Ec{\Pp{Y_N^+\geq \frac{x}{D^\alpha}} \ind{D\geq N}}\\ &= \Pp{Y_N^+D^\alpha \geq x, \ D\geq N}\\ &=\Pp{Y_N^+ D^\alpha \geq x} - \Pp{Y_N^+ D^\alpha \geq x, \ D< N}\\ &=\Pp{Y_N^+ D^\alpha \geq x} - O(x^{-1-\eta}). \end{align*} Now, using \cite[Proposition~1.1]{kasahara_note_2018}, the last display is equivalent to $\Pp{D\geq x^{\frac{1}{\alpha}}} \cdot \Ec{Y_N^+}$ as $x\to\infty$. We also get the other inequality $\Pp{\nu_D(B_D) \geq x} \geq \Pp{D\geq x^{\frac{1}{\alpha}}} \cdot \Ec{Y_N^-}\cdot (1+o(1))$, which finishes the proof. $\bullet$ Case $\beta > \alpha$. Since the arguments used here are quite similar to what we did above, let us just sketch the proof. Let $\epsilon >0$ and consider \begin{align*} R(n,\delta):= b_n^{-\beta} \cdot \sum_{i=1}^{n} \nu_{v_i}(B_{v_i})\ind{\out{}(v_i)\leq \delta b_n}. \end{align*} Using the same type of arguments as above, we can first show that \begin{align*} b_n^{-\beta} \cdot \sum_{i=1}^{\lfloor\frac{3n}{4}\rfloor} Z_i \ind{X_i+1\leq \delta b_n} \qquad \text{ and } \qquad b_n^{-\beta} \cdot \sum_{i=\lfloor\frac{n}{4}\rfloor}^{n} Z_i \ind{X_i+1\leq \delta b_n} \end{align*} tend to $0$ in probability as $\delta\rightarrow 0$ under the unconditioned measure, uniformly in $n\geq 4$.
Then, using the absolute continuity argument together with the equality in distribution, we get that \begin{align} \limsup_{n\rightarrow \infty}\Pp{R(n,\delta)\geq \epsilon} \underset{\delta \rightarrow 0}{\rightarrow} 0. \end{align} This ensures that condition B\ref{cond:small decorations dont contribute to mass} is satisfied. \end{proof} \bibliographystyle{siam} \bibliography{dtree} \end{document}
\documentclass[oneside]{amsart} \usepackage{geometry} \geometry{letterpaper} \usepackage{graphicx} \usepackage{amssymb} \usepackage{comment} \usepackage[mathscr]{euscript} \usepackage{tikz} \usetikzlibrary{decorations.pathmorphing, patterns,shapes,arrows,automata,positioning} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{mathtools} \usepackage{amsthm} \usepackage{hyperref} \usepackage{enumerate} \usepackage{indentfirst} \usepackage{tikz} \usetikzlibrary{arrows,automata,positioning} \usepackage{color} \newcommand{\red}{\color{red}} \newcommand{\black}{\color{black}} \newcommand{\blue}{\color{blue}} \DeclarePairedDelimiter{\ceil}{\lceil}{\rceil} \newtheorem{thm}{Theorem} \newtheorem{prop}[thm]{Proposition} \newtheorem{lem}[thm]{Lemma} \newtheorem{fact}[thm]{Fact} \newtheorem{cor}[thm]{Corollary} \newtheorem{ques}[thm]{Question} \newtheorem*{thm*}{Theorem} \newtheorem*{thmA}{Theorem A} \newtheorem*{thmB}{Theorem B} \newtheorem*{thmC}{Theorem C} \newtheorem*{thmD}{Theorem D} \numberwithin{thm}{section} \newtheorem*{facts}{Facts} \newtheorem*{claim1}{Claim 1} \newtheorem*{claim2}{Claim 2} \newtheorem*{claim}{Claim} \newtheorem*{conjecture}{Conjecture} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem*{defn*}{Definition} \newtheorem*{ack}{Acknowledgements} \newtheorem{ex}[thm]{Example} \newtheorem{rmk}[thm]{Remark} \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\calS}{\mathcal{S}} \newcommand{\calM}{\mathcal{M}} \newcommand{\calR}{\mathcal{R}} \newcommand{\calQ}{\mathcal{Q}} \newcommand{\calA}{\mathcal{A}} \newcommand{\calB}{\mathcal{B}} \newcommand{\calC}{\mathcal{C}} \newcommand{\calD}{\mathcal{D}} \newcommand{\calL}{\mathcal{L}} \newcommand{\calP}{\mathcal{P}} \newcommand{\calF}{\mathcal{F}} \newcommand{\calN}{\mathcal{N}} \newcommand{\calG}{\mathcal{G}} \newcommand{\calH}{\mathcal{H}} \newcommand{\Diam}{\operatorname{Diam}} 
\newcommand{\sprad}{\operatorname{sprad}} \begin{document} \author{Alexi Block Gorman} \address {The Fields Institute \\222 College Street\\Toronto, ON M5T 3J1\\Canada} \email{[email protected]} \author{Christian Schulz} \address {Department of Mathematics\\University of Illinois at Urbana-Champaign\\1409 West Green Street\\Urbana, IL 61801} \email{[email protected]} \subjclass[2020]{Primary 03D05 Secondary 28A80} \keywords{B\"uchi automata, Finite automata, Fractal geometry, Hausdorff dimension, Hausdorff measure, Box-counting dimension, Entropy, Model theory, Tame geometry} \title{Fractal dimensions of $k$-automatic sets} \begin{abstract} This paper seeks to build on the extensive connections that have arisen between automata theory, combinatorics on words, fractal geometry, and model theory. Results in this paper establish a characterization for the behavior of the fractal geometry of ``$k$-automatic'' sets, subsets of $[0,1]^d$ that are recognized by B\"uchi automata. The primary tools for building this characterization include the entropy of a regular language and the digraph structure of an automaton. Via an analysis of the strongly connected components of such a structure, we give an algorithmic description of the box-counting dimension, Hausdorff dimension, and Hausdorff measure of the corresponding subset of the unit box. Applications to definability in model-theoretic expansions of the real additive group are laid out as well. \end{abstract} \maketitle \section{Introduction} \subsection{Main Results} In this paper, we consider the $k$-automatic subsets of $\R$, and analyze both the $k$-representations of such sets and the B\"uchi automata that recognize their base-$k$ representations. The methods used in this paper integrate multiple perspectives previously taken regarding B\"uchi automata and $k$-automatic subsets of finite-dimensional Euclidean spaces. 
These include the perspective given by viewing automata as directed graphs, as well as characterizations of $k$-regular $\omega$-languages coming from combinatorics on words. Our primary result describes how to obtain the Hausdorff and box-counting dimensions of $k$-automatic subsets of $[0,1]^d \subseteq \R^d$ (with $d \in \N$) not quite in terms of some of the induced subautomata, but by considering slight variants thereof. Further results include a similar mechanism for computing Hausdorff measure (in the appropriate dimension) in terms of the same variant of an induced subautomaton, as well as a characterization of which expansions of the first-order structure $(\R,<,+)$ by $k$-automatic subsets of $[0,1]^d$ have a definable unary set whose Hausdorff dimension differs from its box-counting dimension. Recall that an automaton is ``trim'' if each state is accessible from some start state and is also co-accessible, meaning that some accept state can be reached from it. Below, we use $d_H$ to denote Hausdorff dimension, we use $d_B$ to denote box-counting dimension, and we use $h(X)$ to denote the entropy of $X$; for formal definitions of each, see section \ref{prelim}. The following theorem is in fact a corollary in section \ref{closed} that emerges from unpacking results of Mauldin and Williams in \cite{MW88}, in conjunction with some elementary results in section \ref{entropy}. \begin{thmA} If $X$ is a closed $k$-automatic subset of $[0,1]^n$ recognized by a closed automaton $\calA$, then $d_H(X) = d_B (X)= \frac{1}{\log (k)} h(L(\calA))$, where $L(\calA)$ is the set of strings $\calA$ recognizes. \end{thmA} To state our main theorem, we will briefly describe the ``cycle language'' associated to a state $q$ in an automaton $\calA$; the definition is stated in more detail in section \ref{dimensions}. Suppose that $\calA$ is an automaton (finite or B\"uchi) with $Q$ as its set of states.
For state $q \in Q$, the cycle language $C_q(\calA)$ is the language consisting of all $w \in \Sigma^*$ such that there is a run of $\calA$ from state $q$ to itself via $w$. \begin{thmB} Let $\mathcal{A}$ be a trim B\"uchi automaton with set of states $Q$, and let $X$ be the set of elements in $[0,1]^d \subseteq \R^d$ that have a base-$k$ representation that $\calA$ accepts. Let $F$ be the set of accept states of $\mathcal{A}$. Then: \begin{enumerate}[(i)] \item $d_H(X) = \frac{1}{\log k} \max_{q \in F} h(C_q(\mathcal{A}))$; \item $d_B(X) = \frac{1}{\log k} \max_{q \in Q} h(C_q(\mathcal{A}))$. \end{enumerate} \end{thmB} From this theorem, and using the crucial notion of ``unambiguous'' B\"uchi automata, we establish a similar result that describes the Hausdorff measure of a $k$-automatic set in terms of the structure of the automaton that recognizes it. For a definition of unambiguous automata and details about the partition $\{M_q:q\in Q'\}$ of the language $L$ in the theorem below, see Section \ref{measure}. \begin{thmC} Let $\mathcal{A}$ be an unambiguous B\"uchi automaton with set of states $Q$ and recognizing an $\omega$-language $L$. Let $Q' \subseteq Q$ be the set of states whose strongly connected component contains an accept state. For each $q \in Q'$, let $\mathcal{A}_q$ be the automaton created by moving the start state of $\mathcal{A}$ to $q$ and removing all transitions out of its strongly connected component, and let $L_q$ be the $\omega$-language it accepts. Then we can effectively partition $L$ into sublanguages $\{M_q:q\in Q'\}$ such that: \begin{enumerate}[(i)] \item $d_H(\nu_k(L)) = \max_{q \in Q'} d_H(\nu_k(L_q))$, \item with $\alpha = d_H(\nu_k(L))$, $\mu_H^{\alpha}(L) = \sum_{q \in Q'} \mu_H^{\alpha}(M_q)$. \end{enumerate} \end{thmC} Finally, we give a dividing line for the fractal dimensions of definable sets in certain first-order structures related to B\"uchi automata.
This dividing line has implications for the model-theoretic tameness of structures of the form $(\R,<,+,X)$ where $X \subseteq [0,1]^n$ is $k$-automatic, since Hieronymi and Walsberg have shown in \cite{HW19} that if $\calC$ is a Cantor set (compact, with empty interior and no isolated points) then $(\R,<,+,\calC)$ is not tame with respect to any notion coming from Shelah-style generalizations of stability, including NIP and NTP$_2$. \begin{thmD} Suppose $X \subseteq [0,1]^n$ is $k$-automatic. There exists a set $A \subseteq [0,1]$ definable in $(\R, <,+, X)$ such that $d_B(A) \neq d_H(A)$ if and only if either a unary Cantor set is definable in $(\R,<,+,X)$ or a set that is both dense and codense on an interval is definable in $(\R,<,+,X)$. \end{thmD} \subsection{Background} In his seminal work \cite{B62}, J. R. B\"{u}chi introduced the notion of what we now call a B\"{u}chi automaton, and he identified a connection between these automata and the monadic second-order theory of the natural numbers with the successor function. Notably, B\"{u}chi automata take countably infinite-length inputs, unlike standard automata (which we will also call ``finite automata''), which only accept or reject finite-length input strings. Following B\"{u}chi's extension of automatic sets to infinite words, McNaughton \cite{M66} broadened the ways in which a finite automaton can generate infinite sequences, and many more notions of automatic or regular sets of infinite words arose. There is a natural topological structure on the space of infinite words over a finite alphabet; hence the topological features of those subsets of such a space that are recognized by a B\"{u}chi automaton have been investigated since the 1980s. Languages recognized by B\"{u}chi automata are commonly called regular $\omega$-languages.
One topological property that was first introduced in the context of information theory by Claude Shannon in \cite{S48} is that of entropy, also called ``topological entropy'' in some settings. In \cite{S85}, L. Staiger established that extending the definition of entropy to $\omega$-languages yields compelling topological characterizations of closed regular $\omega$-languages. For example, he shows in \cite{S85} that a closed regular $\omega$-language is countable if and only if its entropy is $0$, and that the entropy of regular $\omega$-languages is countably additive. From another perspective, it was shown in \cite{CLR15} that there is a close connection between regular $\omega$-languages and Graph Directed Iterated Function Systems, or GDIFSs for brevity. Thanks to the work in \cite{MW88}, means of computing geometric properties such as Hausdorff measure and Hausdorff dimension for GDIFSs have long existed. In light of the connection between B\"{u}chi automata and GDIFSs, the connection between fractal dimensions and entropy for automatic sets of real numbers can now be characterized completely, as this paper shows. \begin{ack} Many thanks to Philipp Hieronymi for the interesting ideas, questions, and generous guidance provided for this paper. Thanks to Elliot Kaplan, Jason Bell, and Rahim Moosa for their helpful questions and comments regarding the contents of this paper. We gratefully acknowledge that this research was supported by the Fields Institute for Research in Mathematical Sciences. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the Institute. The first author was partially supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1746047. The second author was partially supported by National Science Foundation Grant No. DMS-1654725.
\end{ack} \section{Preliminaries}\label{prelim} \subsection{Definition of B\"uchi automata} Below, for a set $X$ we use $X^*$ to denote the Kleene star of $X$, i.e. $X^*=\{x_1x_2\ldots x_n: n\in \N, x_1,\ldots ,x_n\in X\}$, and we similarly use $X^{\omega}$ to denote the set $\{x_1x_2\ldots: x_1, x_2, \ldots \in X\}$. For a language $L$ of finite strings, we will use $\vec{L}$ to denote the limit language of $L$, i.e. the set of infinite strings with infinitely many prefixes in $L$. \begin{defn} A \textit{finite automaton} is a $5$-tuple $\mathcal{A} = (Q, \Sigma, \delta, S, F)$ where: \begin{itemize} \item $Q$, the set of \textit{states,} is a finite set; \item $\Sigma$, the \textit{alphabet,} is a finite set; \item $\delta$, the \textit{transition function,} is a function $Q \times \Sigma \to \mathcal{P}(Q)$; \item $S$, the set of \textit{start states} or \textit{initial states,} is a nonempty subset of $Q$; \item $F$, the set of \textit{accept states} or \textit{final states,} is a subset of $Q$. \end{itemize} A finite automaton is said to \textit{run from $q_0$ to $q_n$ on} a string $w = w_1 \dots w_n \in \Sigma^n$, for $q_0, q_n \in Q$, if there exist states $q_1, \dots, q_{n-1}$ such that for $i = 1, \dots, n$ we have $q_i \in \delta(q_{i-1}, w_i)$. If $q_0 \in S$, such a sequence of states may be called a \textit{run} of $w$ in $\mathcal{A}$, which is \textit{accepting} if $q_n \in F$. The automaton \textit{accepts} $w$ if there is an accepting run of $w$. The language \textit{recognized} (or accepted) by $\mathcal{A}$ is the set of all strings in $\Sigma^*$ it accepts. Two finite automata are \textit{equivalent} if they recognize the same language. 
\end{defn} \begin{defn} A \textit{B\"uchi automaton} is a $5$-tuple $\mathcal{A} = (Q, \Sigma, \delta, S, F)$ where: \begin{itemize} \item $Q$, the set of \textit{states,} is a finite set; \item $\Sigma$, the \textit{alphabet,} is a finite set; \item $\delta$, the \textit{transition function,} is a function $Q \times \Sigma \to \mathcal{P}(Q)$; \item $S$, the set of \textit{start states} or \textit{initial states,} is a nonempty subset of $Q$; \item $F$, the set of \textit{accept states} or \textit{final states,} is a subset of $Q$. \end{itemize} For an infinite string $w = w_1 w_2 \dots \in \Sigma^\omega$, a \textit{run} of $w$ in $\mathcal{A}$ is a sequence of states $q_0, q_1, \dots \in Q^\omega$ such that $q_0 \in S$ and for $i \in \mathbb{Z}^+$ we have $q_i \in \delta(q_{i-1}, w_i)$. A run is \textit{accepting} if $q_i \in F$ for infinitely many $i$. The automaton \textit{accepts} $w$ if there is an accepting run of $w$. The $\omega$-language \textit{recognized} (or accepted) by $\mathcal{A}$ is the set of all strings in $\Sigma^\omega$ it accepts. Two B\"uchi automata are \textit{equivalent} if they recognize the same language. \end{defn} Note that the only difference between these definitions is in the accept condition; thus, the same tuple $(Q, \Sigma, \delta, S, F)$ may be alternately treated as either a finite or B\"uchi automaton, which will be useful several times in this paper. A finite or B\"uchi automaton also has a canonical digraph structure whose vertex set is $Q$ and whose edge set contains precisely those $(q, q') \in Q^2$ for which there exists $\sigma \in \Sigma$ such that $q' \in \delta(q, \sigma)$. We will often implicitly refer to this digraph structure, speaking of such concepts as paths between states and strongly connected components containing states. If we refer to the graphical structure on an automaton as simply a graph, we implicitly mean the structure of the automaton as a directed graph. 
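To make the shared $5$-tuple concrete, the finite-automaton acceptance condition can be sketched in a few lines of Python (a minimal illustration; the helper names and the example automaton are ours, not from the paper):

```python
# Nondeterministic acceptance for a finite automaton (Q, Sigma, delta, S, F):
# track the set of states reachable from S along w, and accept iff it meets F.
# The example automaton below is illustrative only.

def accepts(delta, S, F, w):
    current = set(S)
    for symbol in w:
        # follow every possible transition out of every current state
        current = {q2 for q in current for q2 in delta.get((q, symbol), set())}
        if not current:  # every run has died
            return False
    return bool(current & F)

# Automaton over {0, 1} accepting exactly the strings that end in 1.
delta = {("a", "0"): {"a"}, ("a", "1"): {"a", "b"}}
S, F = {"a"}, {"b"}

print(accepts(delta, S, F, "0101"))  # True
print(accepts(delta, S, F, "0110"))  # False
```

Treating the same tuple as a B\"uchi automaton changes only the acceptance condition: a run on an infinite word is accepting when it visits $F$ infinitely often, which is a property of the whole run rather than of a final state, so it cannot be decided by this finite loop.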
We will also use several properties that such an automaton may have: \begin{defn} \label{autoproperties} Let $\mathcal{A} = (Q, \Sigma, \delta, S, F)$ be a finite or B\"{u}chi automaton. \begin{enumerate}[(i)] \item We say $\calA$ is \textit{deterministic} if $|S| = 1$ and $|\delta(q, c)| \leq 1$ for all $q \in Q, c \in \Sigma$. (Note that this definition guarantees that there is at most one run of a given $w$ in $\mathcal{A}$.) Every finite automaton has an equivalent deterministic automaton; this is not true in general for B\"uchi automata. \item We say $\calA$ is \textit{finite-trim} if for every $q \in Q$, there is a path from a start state to $q$ (possibly of zero length) and a path from $q$ to an accept state (also possibly of zero length). If we additionally require that the path from $q$ to an accept state have nonzero length, we say that $\calA$ is \textit{trim}. Every B\"uchi automaton has an equivalent trim automaton; every finite automaton has an equivalent finite-trim automaton. In fact, given an automaton $\calA$, we may always produce a finite-trim automaton that is equivalent to $\calA$ as both a finite and B\"uchi automaton. \item We say $\calA$ is \textit{closed} if it is trim and every state is an accept state (i.e., $Q=F$). \item Given a trim automaton $\calA=(Q,\Sigma,\delta,S,F)$, call $\overline{\calA}=(Q,\Sigma,\delta,S,Q)$ (the automaton obtained from $\calA$ by making every state an accept state) the \textit{closure} of $\calA$. Note that if an automaton $\calB=(Q',\Sigma, \delta',S',F')$ is equivalent to $\calA$ but not trim, then $(Q',\Sigma,\delta',S',Q')$ need not recognize the same language as $\overline{\calA}$. \item We say an automaton $\calA=(Q,\Sigma,\delta,S,F)$ is \textit{weak} if for every $q,q' \in Q$ such that $q$ and $q'$ are in the same strongly connected component of $\calA$ (as a digraph), either $q$ and $q'$ are both accept states, or neither is.
\end{enumerate} \end{defn} \subsection{Regularity and $k$-representations} Let $k \in \N_{>1}$, and set $[k]=\{0,1, \ldots ,k-1\}$ for the remainder of this paper. We will use the terms ``base-$k$ representation'' and ``$k$-representation'' interchangeably to mean the expression of an element $x \in \R$ as a countable sum of integer powers of $k$, each multiplied by a coefficient in $[k]$. Note that we will sometimes conflate elements of $[0,1]$ and their $k$-representations, and we may occasionally say that an automaton $\calA$ accepts \emph{the} $k$-representation of $x \in [0,1]$. For an element $x$ in the countable subset of $[0,1]^d$ whose elements have multiple (in particular, at most $2^d$) $k$-representations, we mean that $\calA$ accepts \emph{at least one} of the $k$-representations of $x$. For ease of switching between $x$ and its $k$-representation, we will define a valuation for elements of $[k]^{\omega}$. \begin{defn} Define $\nu_k:[k]^{\omega} \to [0,1]$ by: $$\nu_k(w) = \sum_{i=0}^{\infty} \frac{w_i}{k^{i+1}}$$ where $w=w_0w_1w_2\ldots$ with $w_i \in [k]$ for each $i \in \N$. \end{defn} Note that the equivalence relation $v \equiv w \iff \nu_k(v) = \nu_k(w)$ not only has finite equivalence classes, but moreover each equivalence class has size at most two. As noted above, only countably many elements in $[k]^{\omega}$ are not the unique element of their $\nu_k$-equivalence class. For $L \subseteq ([k]^d)^{\omega}$, set $$\nu_k(L)=\{ (\nu_k(w_1), \ldots, \nu_k(w_d)): w_1, \ldots,w_d \in [k]^{\omega}, ((w_{1,i},\ldots ,w_{d,i}))_{i<\omega} \in L\}.$$ We can now formally define what it means for a subset of $[0,1]\subseteq \R$ to be $k$-automatic. Let $k \in \N$ be greater than one, and let $d \in \N$ be greater than zero. \begin{defn} \label{r-reg} Say that $L\subseteq ([k]^d)^{\omega}$ is \emph{$k$-regular} if there is some B\"uchi automaton $\calA$ with alphabet $[k]^d$ that recognizes $L$.
Say that $A \subseteq [0,1]^d$ is \emph{$k$-automatic} if there is a B\"uchi automaton $\calA$ with alphabet $[k]^d$ that recognizes the maximal language $L \subseteq ([k]^d)^{\omega}$ such that $A=\nu_k(L)$. Moreover, if this holds, say that $\calA$ \emph{recognizes} $A$. \end{defn} For a B\"uchi automaton $\calA$ with alphabet $[k]$, we will use $V_k(\calA)$ to denote the set of elements $x\in [0,1]$ for which some $k$-representation of $x$ is accepted by $\calA$. The fact below follows immediately from the existence of a B\"uchi automaton with alphabet $\Sigma^2$ that accepts a pair of elements $x,y \in \Sigma^{\omega}$ precisely if both are $k$-representations of the same element of $[0,1]$. \begin{fact} For $A \subseteq [0,1]^d$, if there is some $k$-regular language $L \subseteq ([k]^d)^{\omega}$ such that $A=\nu_k(L)$, then the set of all $k$-representations of elements of $A$ is $k$-regular as well. \end{fact} Call an element $x \in [0,1]^d$ a \emph{$k$-rational} if there exists $w \in ([k]^d)^*$ such that $x=\nu_k(w\vec{0}^{\omega})$, where $\vec{0}$ is the $d$-tuple $(0, \dots, 0)$. Clearly, these are the elements of $[0,1]^d$ whose coordinates can all be written as fractions with powers of $k$ in the denominators. Throughout this paper $d$ denotes the (finite, but arbitrary) dimension of the Euclidean space we are working in. We use $\calA$ to denote both finite automata and B\"{u}chi automata, and we use $L$ to denote the subset of $([k]^d)^*$ that $\calA$ recognizes if it is a finite automaton, or the subset of $([k]^d)^{\omega}$ that $\calA$ recognizes if it is a B\"{u}chi automaton. If $\calA$ is a B\"{u}chi automaton, we will often use $A$ to denote $\nu_k(L)$, unless specified otherwise. We will say that a B\"{u}chi automaton $\calA$ accepts $x \in [0,1]^d$ if $\calA$ accepts some $w \in ([k]^d)^{\omega}$ such that $\nu_k(w) = x$.
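As a quick illustration of the valuation $\nu_k$ and of the (at most two) $k$-representations of a $k$-rational (a sketch in Python with our own helper names, not notation from the paper):

```python
from fractions import Fraction

def nu_k_prefix(digits, k):
    """Exact value of nu_k applied to the infinite word digits 0^omega."""
    return sum(Fraction(d, k ** (i + 1)) for i, d in enumerate(digits))

# In base 3 the 3-rational 1/3 has two representations: 1 0^omega and 0 2^omega.
x = nu_k_prefix([1], 3)                                   # value of 0.1000...
y = sum(Fraction(2, 3 ** (i + 1)) for i in range(1, 60))  # 0.0222... truncated
print(x)       # 1/3
print(x - y)   # 1/3^60: the truncated tail differs from 1/3 by one last digit
```

Exact rational arithmetic makes the two-representation phenomenon visible without floating-point noise; the truncated word $0\,2^{59}$ falls short of $\nicefrac{1}{3}$ by exactly $3^{-60}$.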
Given a trim B\"uchi automaton $\calA$, we let $\overline{A}$ denote the image under $\nu_k$ of the language that $\overline{\calA}$, the closure of $\calA$, recognizes. In \cite{CLR15}, the authors show that every closed trim B\"uchi automaton recognizes a (topologically) closed set; hence the notation $\overline{A}$, suggesting the topological closure of $A$, for the set recognized by $\overline{\calA}$. This conflation will be further justified in Section \ref{closed}. In addition, we define closed $k$-regular $\omega$-languages and the closure of a $k$-regular $\omega$-language analogously. If $L$ is a language (either a subset of $\Sigma^*$ or a subset of $\Sigma^{\omega}$) let $L^{pre} \subseteq \Sigma^*$ denote the set of all finite prefixes of elements of $L$. Similarly, let $L_n^{pre}$ denote the set of all length-$n$ prefixes of elements of $L$, and let $L_{<n}^{pre}$ denote the set of all prefixes of elements of $L$ with length less than $n$. \subsection{Definition of entropy} A key concept that turns out to be very helpful in the study of dimension of $k$-automatic sets is the notion of \textit{entropy.} The entropy of a formal language was perhaps first studied for regular languages by Chomsky and Miller in \cite{CM58}, and was named entropy, as an analogue of topological entropy, by Hansen, Perrin, and Simon in \cite{HPS92}. In \cite{AB11}, the authors note a seeming correspondence between the Hausdorff dimension of a $k$-automatic fractal and the entropy of the language of substrings of the base-$k$ expansions of its points. Proving this correspondence is one of the main results of this paper. In order to do so, we find it most convenient to extend the definition of entropy to sequences of real numbers as follows: \begin{defn} Let $(a_n)_{n\in\N}$ be a sequence of nonnegative real numbers with the property that infinitely many terms are nonzero.
The \textit{entropy} of $(a_n)_n$ is defined as the limit superior: $$h((a_n)_n) = \limsup_{n \to \infty} \frac{\log a_n}{n}.$$ The entropy $h(L)$ of an infinite language $L$ is the entropy of $(|L|_n)_n$, where $|L|_n$ denotes the number of strings in $L$ of length $n$. \end{defn} We choose to leave the entropy undefined for sequences that are eventually zero, as this way the entropy is always a real number (and is nonnegative if $(a_n)_n$ is an integer sequence), which simplifies some results regarding entropy. \subsection{Definition of box-counting dimension} There are two different notions of dimension that will play large roles in this paper. The first is the concept of \textit{box-counting dimension,} also known as \textit{Minkowski dimension.} Intuitively, this is defined by quantifying how the number of boxes required to cover a given set increases as the size of the boxes decreases. This matches our intuition regarding ``nice'' sets that have a well-defined length, area, etc. For instance, to cover a polygonal region of $\R^2$ with boxes, halving the size of the boxes requires four times as many of them; the box-counting dimension of such a polygon is $\frac{\log 4}{\log 2} = 2$. In order to fully formalize this notion, many decisions must be made about the details. Is the ``size'' of a box its diameter or its side length? Must we use boxes, or could we use another shape, like a closed ball, instead? What if we allow the covering sets to be \textit{any} set of a given diameter? Should we place restrictions on the positioning of each box, such as requiring them to come from a grid? It turns out that most of these decisions have no effect on the resulting notion of dimension, i.e. the resulting definitions are equivalent. Therefore, we use one of the several versions of the definition given in \cite{F03}: \begin{defn}[\cite{F03}] Let $X \subseteq \R^d$ be nonempty and bounded.
\begin{enumerate}[(i)] \item We define $N(X, \epsilon)$ to be the number of sets of the form $I_{\vec{z}} = [z_1 \epsilon, (z_1 + 1) \epsilon] \times \dots \times [z_d \epsilon, (z_d + 1) \epsilon]$, where $\vec{z} = (z_1, \dots, z_d)$ is a vector of integers, required to cover $X$. More generally, for a grid offset $\vec{h} \in [0,1)^d$ we write $N(X, \epsilon, \vec{h})$ for the analogous count using the translated boxes $[(z_1 - h_1) \epsilon, (z_1 - h_1 + 1) \epsilon] \times \dots \times [(z_d - h_d) \epsilon, (z_d - h_d + 1) \epsilon]$; in particular, $N(X, \epsilon) = N(X, \epsilon, \vec{0})$. \item The \textit{upper box-counting dimension} of $X$ is: $$d_{\overline{B}}(X) = \limsup_{\epsilon \to 0} \frac{\log N(X, \epsilon)}{\log \frac{1}{\epsilon}}.$$ \item The \textit{lower box-counting dimension} of $X$ is: $$d_{\underline{B}}(X) = \liminf_{\epsilon \to 0} \frac{\log N(X, \epsilon)}{\log \frac{1}{\epsilon}}.$$ \item If the upper and lower box-counting dimensions of $X$ are equal, we refer to their value as simply \textit{the box-counting dimension} $d_B(X)$. \end{enumerate} \end{defn} Box-counting dimension has several properties that justify its being called a dimension: \begin{fact}[\cite{F03}] \; \begin{enumerate}[(i)] \item If $X$ is a smooth $n$-manifold (embedded in $\R^d$), then $d_B(X) = n$. \item If $X \subseteq Y$, then $d_{\overline{B}}(X) \leq d_{\overline{B}}(Y)$ and $d_{\underline{B}}(X) \leq d_{\underline{B}}(Y)$. \item If $X = Y_1 \cup Y_2$, then $d_{\overline{B}}(X) = \max(d_{\overline{B}}(Y_1), d_{\overline{B}}(Y_2))$. \item Invertible affine transformations of $\R^d$ preserve $d_{\overline{B}}$ and $d_{\underline{B}}$. \end{enumerate} \end{fact} In addition to these, box-counting dimension has one more property that turns out to be quite useful (and that other notions of dimension do not possess): \begin{fact}[\cite{F03}] Let $X \subseteq \R^d$ be nonempty and bounded. Then $d_{\overline{B}}(X) = d_{\overline{B}}(\overline{X})$, and $d_{\underline{B}}(X) = d_{\underline{B}}(\overline{X})$. \end{fact} \subsection{Definition of Hausdorff dimension}\label{Hdim} Hausdorff dimension is the other notion of dimension that we will use in this paper. It is considerably more popular within fractal geometry, probably due to its compatibility with measure-theoretic notions.
To define Hausdorff dimension, we must first define the notion of Hausdorff \textit{measure,} a family of outer measures on subsets of $\R^d$: \begin{defn}[\cite{F03}] Let $X$ be a nonempty Borel subset of $\R^d$. For $s \geq 0, \epsilon > 0$ we define: $$\mu_H^s(X, \epsilon) = \inf \left\{\sum_{i=1}^\infty (\Diam U_i)^s : \{U_i\}_i \text{ is a collection of sets of diameter at most $\epsilon$ covering $X$}\right\}$$ The \textit{$s$-dimensional Hausdorff measure} of $X$, $\mu_H^s(X)$, is the limit of $\mu_H^s(X, \epsilon)$ as $\epsilon \to 0$. \end{defn} One precaution: recall that when we defined box-counting dimension above, we mentioned that it does not matter if the covering sets are boxes, balls, or any set with a given diameter. This is not the case with Hausdorff measure. Although the Hausdorff \textit{dimension} of $X$ would ultimately be the same if we changed these details in the above definition, the measure itself could be different. Note that for subsets of $\mathbb{R}$, the $1$-dimensional Hausdorff measure agrees with the Lebesgue measure. A given set $X$ will only have a ``meaningful'' (i.e. nonzero and finite) Hausdorff measure for at most one value of $s$. Consider once more the example of a polygon in $\R^2$. The $2$-dimensional Hausdorff measure of a polygon is, up to a constant factor of $\frac{\pi}{4}$, its area. But the $s$-dimensional Hausdorff measure will be infinite for any $s < 2$ and zero for any $s > 2$. This suggests the following definition of Hausdorff dimension: \begin{defn}[\cite{F03}] For $X \subseteq \R^d$ nonempty, the \textit{Hausdorff dimension} $d_H(X)$ is the unique real number $s$ such that $\mu_H^{s'}(X) = \infty$ for $s' < s$ and $\mu_H^{s'}(X) = 0$ for $s' > s$. \end{defn} Note that when $s' = s$, the Hausdorff measure may or may not be finite and may or may not be zero. What matters for determining dimension is the limiting behavior on either side of the critical value.
Hausdorff dimension has the properties we expect any notion of dimension to have: \begin{fact}[\cite{F03}] \; \begin{enumerate}[(i)] \item If $X$ is a smooth $n$-manifold (embedded in $\R^d$), then $d_H(X) = n$. \item If $X \subseteq Y$, then $d_H(X) \leq d_H(Y)$. \item If $X = Y_1 \cup Y_2$, then $d_H(X) = \max(d_H(Y_1), d_H(Y_2))$. \item Invertible affine transformations of $\R^d$ preserve $d_H$. \end{enumerate} \end{fact} In fact, Hausdorff dimension satisfies a stronger, countable version of (iii) above: \begin{fact}[\cite{F03}] If $X = \bigcup_{i \in \N} Y_i$, then $d_H(X) = \sup_{i \in \N} d_H(Y_i)$. \end{fact} As a corollary, the Hausdorff dimension of any countable set is zero (as that of a point is zero), so Hausdorff dimension is invariant under the addition or removal of a countable set of points. This is very unlike box-counting dimension: recall that box-counting dimension is preserved under closures. Because $\R^d$ is separable, stability under closures and stability under the addition of countably many points are properties directly at odds with each other. In particular, consider the set $X = \Q \cap [0, 1]$. The closure of $X$ is the interval $[0, 1]$, which has box-counting dimension $1$; hence $X$ has box-counting dimension $1$. Yet $X$ is countable and thus has Hausdorff dimension $0$. This gives an explicit example of when Hausdorff and box-counting dimension may differ. Note, however, that when they do differ, it is always the box-counting dimension that is higher: \begin{fact}[\cite{F03}] Let $X$ be a nonempty and bounded subset of $\R^d$. Then $d_H(X) \leq d_{\underline{B}}(X) \leq d_{\overline{B}}(X)$. \end{fact} \section{Entropy and its relationship to dimension}\label{entropy} \subsection{Properties of entropy} It will be helpful to establish several properties of entropy before connecting it to fractal dimension.
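Before establishing those properties, a toy computation may help fix intuition for the definition (the example language below is our own illustration, not one from the paper): the prefix-closed regular language of binary strings avoiding the factor $11$ has a Fibonacci counting sequence, so its entropy is $\log\frac{1+\sqrt{5}}{2} \approx 0.4812$.

```python
from itertools import product
from math import log

def count_no_11(n):
    """|L|_n for L = binary strings with no two consecutive 1s (brute force)."""
    return sum(1 for w in product("01", repeat=n) if "11" not in "".join(w))

# |L|_n is the Fibonacci number F_{n+2}, so log|L|_n / n converges to
# log((1 + sqrt(5)) / 2), the logarithm of the golden ratio.
for n in (5, 10, 18):
    print(n, log(count_no_11(n)) / n)
```

The brute-force count is only feasible for small $n$; the point is that the ratios $\frac{\log |L|_n}{n}$ visibly approach the predicted entropy.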
First, we need a couple of results from \cite{S85} concerning the behavior of entropy under the subset relation, the union operation, and $\omega$-language prefixes. \begin{fact}[\cite{S85}, Proposition 1] \label{language_entropy_union} Let $L_1$ and $L_2$ be infinite languages. \begin{enumerate}[(i)] \item If $L_1 \subseteq L_2$, then $h(L_1) \leq h(L_2)$. \item $h(L_1 \cup L_2) = \max (h(L_1), h(L_2))$. \end{enumerate} \end{fact} \begin{fact}[\cite{S85}]\label{lang_entropy_union} \label{language_entropy_summation} Let $L$ be an infinite language. Then: $$h(L^{pre}) = \limsup_{n \to \infty} \frac{\log |L|_{\leq n}}{n} = h(L).$$ \end{fact} In the case where $L$ is closed under prefixes, we can define its entropy to be a limit, rather than a limit superior. To do this, we need a lemma concerning periodic functions: \begin{lem} \label{periodic_thing} Let $n > 1$, $P_1, \dots, P_n > 0$, and $\epsilon > 0$. Let $S$ be the set of positive integers $z$ such that $\frac{z}{P_i} - \lfloor \frac{z}{P_i} \rfloor < \epsilon$ for $i = 1, \dots, n$. Then there exists $N$ such that no $N$ consecutive positive integers lie outside $S$. \end{lem} \begin{proof} Note that with $c$ a positive integer, if $\frac{z}{cP} - \lfloor \frac{z}{cP} \rfloor < \frac{\epsilon}{c}$, then $\frac{z}{P} - \lfloor \frac{z}{P} \rfloor < \epsilon$. By replacing the $P_i$ with suitable common multiples and shrinking $\epsilon$, we can assume that the $P_i$ are linearly independent over $\mathbb{Q}$. Then at most one of them can be rational, say $P_1 = \frac{a}{b}$. We give $[0, 1)^{n-1}$ the toroidal metric where we identify opposite sides, i.e. the inherited metric from its canonical bijection with $\mathbb{R}^{n-1}/\mathbb{Z}^{n-1}$.
The sequence $((\frac{ak}{P_2} - \lfloor \frac{ak}{P_2} \rfloor, \frac{ak}{P_3} - \lfloor \frac{ak}{P_3} \rfloor, \dots, \frac{ak}{P_n} - \lfloor \frac{ak}{P_n} \rfloor))_{k \in \mathbb{Z}^+}$ is dense in the unit box under the Euclidean metric; as the toroidal metric gives rise to a strictly coarser topology, the sequence is also dense in the toroidal metric. By density and the compactness of the torus, there is an $m$ such that every box of toroidal side length $\frac{\epsilon}{3}$ contains a term of this sequence with index at most $m$; fix such an $m$. Now let $z$ be any positive integer, and let $Z$ be the smallest multiple of $a$ not less than $z$. Let $B$ be the box in $[0, 1)^{n-1}$ containing the fractional parts of the elements of $[\frac{\epsilon}{3} - \frac{Z}{P_2}, \frac{2\epsilon}{3} - \frac{Z}{P_2}] \times \dots \times [\frac{\epsilon}{3} - \frac{Z}{P_n}, \frac{2\epsilon}{3} - \frac{Z}{P_n}]$. Then there is some $k \leq m$ such that $(\frac{ak}{P_2} - \lfloor \frac{ak}{P_2} \rfloor, \frac{ak}{P_3} - \lfloor \frac{ak}{P_3} \rfloor, \dots, \frac{ak}{P_n} - \lfloor \frac{ak}{P_n} \rfloor) \in B$. Therefore, $(\frac{Z+ak}{P_2} - \lfloor \frac{Z+ak}{P_2} \rfloor, \frac{Z+ak}{P_3} - \lfloor \frac{Z+ak}{P_3} \rfloor, \dots, \frac{Z+ak}{P_n} - \lfloor \frac{Z+ak}{P_n} \rfloor) \in [\frac{\epsilon}{3}, \frac{2\epsilon}{3}]^{n-1}$. We have thus found a positive integer $Z+ak$ less than $z + a + am$ such that $\frac{Z+ak}{P_i} - \lfloor \frac{Z+ak}{P_i} \rfloor < \epsilon$ for $2 \leq i \leq n$; and $\frac{Z+ak}{P_1} - \lfloor \frac{Z+ak}{P_1} \rfloor = 0$ because $Z+ak$ is a multiple of $a$. Setting $N = a + am + 1$ thus proves the lemma. \end{proof} The following lemma is used in Corollary 9 of \cite{S85}, but not proven explicitly there; we include a proof for completeness. \begin{lem} \label{language_entropy_prefix_limit} Let $L$ be a regular language closed under prefixes. Then $\lim_{n \to \infty} \frac{\log |L|_n}{n}$ exists. \end{lem} \begin{proof} Let the deterministic finite automaton $\mathcal{A} = (Q, \Sigma, \delta, S, F)$ recognize $L$.
Let $\{q_1, \dots, q_m\} = Q$, and assume without loss of generality that $S = \{q_1\}$. Let $N_{n,i}$ be the number of strings of length $n$ on which there is a run in $\mathcal{A}$ from $q_1$ to $q_i$, let $t_{i,j} = |\{\sigma \in \Sigma : q_j \in \delta(q_i, \sigma)\}|$, and let $A = \{i : q_i \in F\}$ index the accept states. Then: $$N_{n+1,j} = \sum_{i=1}^m t_{i,j} N_{n,i}$$ $$|L|_{n+1} = \sum_{j \in A} N_{n+1,j} = \sum_{j \in A} \sum_{i=1}^m t_{i,j} N_{n,i}$$ We thus have a system of recurrences that can be put in matrix form: \begin{align} \begin{bmatrix} N_{n+1, 1} \\ N_{n+1, 2} \\ \vdots \\ N_{n+1, m} \\ |L|_{n+1} \end{bmatrix} = \begin{bmatrix} t_{1,1} & t_{2,1} & \dots & t_{m,1} & 0 \\ t_{1,2} & t_{2,2} & \dots & t_{m,2} & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ t_{1,m} & t_{2,m} & \dots & t_{m,m} & 0 \\ \sum_{j \in A} t_{1,j} & \sum_{j \in A} t_{2,j} & \dots & \sum_{j \in A} t_{m,j} & 0 \end{bmatrix} \begin{bmatrix} N_{n, 1} \\ N_{n, 2} \\ \vdots \\ N_{n, m} \\ |L|_n \end{bmatrix} \end{align} Write the above as $\vec{v}_{n+1} = \mathbf{A} \vec{v}_n$. We have $\vec{v}_0 = [1, 0, \dots, 0, 1]^T$. This then allows us to write $\vec{v}_{n} = \mathbf{A}^n \vec{v}_0$ and $|L|_n = [0, 0, \dots, 0, 1] \mathbf{A}^n \vec{v}_0$. Using standard linear algebra techniques, we may then derive an expression for $|L|_n$: $$|L|_n = p_1(n) \lambda_1^n + \dots + p_r(n) \lambda_r^n$$ where the $p_i(n)$ are polynomials (with complex coefficients) and the $\lambda_i$ are the eigenvalues of $\mathbf{A}$. Write $\lambda_i$ in polar form as $\rho_i e^{\mathbf{i} \theta_i}$. Since $|L|_n$ is real, taking the real part of the right-hand side gives: $$|L|_n = \sum_{j=1}^{r} \rho_j^n \left( \mathrm{Re}(p_j(n)) \cos(n \theta_j) - \mathrm{Im}(p_j(n)) \sin(n \theta_j) \right)$$ Let $n_1, n_2, \dots$ be an increasing sequence such that $\cos(n_i \theta_j) > 0.9$ and $|\sin(n_i \theta_j)| < 0.1$ for all $i, j$. Let $n_{i+1} - n_i$ be bounded above by $N$; because each summand is periodic in $n$, we can find such a sequence and such an $N$ by Lemma \ref{periodic_thing}, applied with periods $\frac{2\pi}{|\theta_j|}$ (for those $j$ with $\theta_j \neq 0$) and a sufficiently small $\epsilon$.
Define: $$A_i = \mathrm{Re}(p_1(n_i)) \rho_1^{n_i} + \dots + \mathrm{Re}(p_r(n_i)) \rho_r^{n_i}$$ Assume without loss of generality that the $\rho_j$ above are in nonincreasing order, and let $k$ be such that $\rho_1 = \dots = \rho_k \neq \rho_{k+1}$. The polynomial $\mathrm{Re}(p_1) + \dots + \mathrm{Re}(p_k)$ is eventually monotone and unbounded; if it is eventually decreasing, then $A_i \to -\infty$, which cannot be, as $|L|_{n_i}$ is nonnegative. Let $B_i = \mathrm{Re}(p_1(n_i)) \rho_1^{n_i} + \dots + \mathrm{Re}(p_k(n_i)) \rho_k^{n_i}$. From the above, we know $B_i$ is eventually increasing to $+\infty$. Moreover, the ratio $\frac{A_i - B_i}{B_i}$ tends to $0$ as $i \to \infty$, since the omitted terms involve the strictly smaller moduli $\rho_{k+1}, \dots, \rho_r$. So eventually, $0.8 B_i < |L|_{n_i} < 1.1 B_i$. Write $B_i = P(n_i) \rho_1^{n_i}$, with $P$ a polynomial. Then: \begin{align*} \lim_{i \to \infty} \frac{\log B_i}{n_i} &= \lim_{i \to \infty} \frac{\log (P(n_i) \rho_1^{n_i})}{n_i} = \lim_{i \to \infty} \frac{\log P(n_i) + \log \rho_1^{n_i}}{n_i} \\ &= \lim_{i \to \infty} \left( \frac{\log P(n_i)}{n_i} + \frac{n_i \log \rho_1}{n_i} \right) = \log \rho_1. \end{align*} Therefore, because $0.8 B_i < |L|_{n_i} < 1.1 B_i$, we conclude that $\lim_{i \to \infty} \frac{\log |L|_{n_i}}{n_i} = \log \rho_1$ as well. Now, let $n_i \leq n'_i \leq n_{i+1}$. Because $n_{i+1} \leq n'_i + N$ and $L$ is closed under prefixes, we have $|L|_{n_{i+1}} \leq |L|_{n'_i} |\Sigma|^N$. By the same logic, $|L|_{n'_i} \leq |L|_{n_i} |\Sigma|^N$. Letting $M = |\Sigma|^N$ then gives us $\frac{1}{M} |L|_{n_{i+1}} \leq |L|_{n'_i} \leq M |L|_{n_i}$.
So: $$\log \rho_1 = \lim_{i \to \infty} \frac{\log |L|_{n_{i+1}}}{n_{i+1}}$$ Adding a bounded quantity to the numerator or denominator of this fraction does not affect the limit: $$= \lim_{i \to \infty} \frac{\log \frac{1}{M} + \log |L|_{n_{i+1}}}{n_{i+1} + (n'_i - n_{i+1})}$$ $$= \lim_{i \to \infty} \frac{\log \left( \frac{1}{M} |L|_{n_{i+1}} \right)}{n'_i}$$ $$\leq \lim_{i \to \infty} \frac{\log |L|_{n'_i}}{n'_i}$$ $$\leq \lim_{i \to \infty} \frac{\log \left( M |L|_{n_i} \right)}{n'_i}$$ $$= \lim_{i \to \infty} \frac{\log M + \log |L|_{n_i}}{n_i + (n'_i - n_i)}$$ $$= \lim_{i \to \infty} \frac{\log |L|_{n_i}}{n_i} = \log \rho_1.$$ Therefore, $\lim_{i \to \infty} \frac{\log |L|_{n'_i}}{n'_i} = \log \rho_1$. This is independent of the choice of each $n'_i$ subject to the constraints, so the sequence $\frac{\log |L|_{n}}{n}$ can be covered by subsequences converging to $\log \rho_1$. Therefore, the entire sequence must also converge to this limit. \end{proof} \begin{cor} \label{language_entropy_prefix_limit_summation} Let $L$ be a regular language closed under prefixes. Then $\lim_{n \to \infty} \frac{\log |L|_{\leq n}}{n}$ exists. \end{cor} \begin{proof} With $\Sigma$ the alphabet of $L$, assume without loss of generality that $\$ \notin \Sigma$. Let $\Sigma' = \Sigma \cup \{\$\}$ and $L' = L\$^*$. Then $L'$ is prefix-closed, and there is a one-to-one correspondence between strings of length at most $n$ in $L$ and strings of length exactly $n$ in $L'$ (realized by adding the appropriate number of $\$$). The result then follows immediately from Lemma \ref{language_entropy_prefix_limit}. \end{proof} \subsection{Entropy and box-counting dimension} Note that box-counting dimension appears to be quite hard to compute directly from the definition we have given, because one must control all sufficiently small $\epsilon$ and, in principle, all placements of the covering grid. But with a bit of work, the process of establishing the box-counting dimension of a set can be simplified.
In particular, we have the following result showing that we do not need to check many values of $\epsilon$ and $\vec{h}$: \begin{lem} \label{box_counting_simpler_limit} Let $X \subseteq [0, 1]^d$. Let $r > 1$, and assume that the limit: $$L = \lim_{n \to \infty} \frac{\log N(X, r^{-n}, \vec{0})}{\log r^n}$$ exists. Then $d_B(X) = L$. \end{lem} \begin{proof} Let $(\epsilon_i)_i$ be any sequence of positive values converging to zero, and let $(\vec{h}_i)_i$ be any sequence of values in $[0, 1)^d$. We will show that: $$\lim_{i \to \infty} \frac{\log N(X, \epsilon_i, \vec{h}_i)}{\log \frac{1}{\epsilon_i}} = L.$$ For each $i$, choose $n_i$ such that $r^{-n_i} \geq \epsilon_i > r^{-n_i - 1}$. Note first that $n_i \to \infty$ as $i \to \infty$; otherwise, there would exist $n^*$ such that infinitely many $n_i < n^*$, and hence infinitely many $\epsilon_i > r^{-n^* - 1}$, which cannot be. Consider the boxes of the form $I_{\vec{z}} = [z_1 r^{-n_i}, (z_1 + 1) r^{-n_i}] \times \dots \times [z_d r^{-n_i}, (z_d + 1) r^{-n_i}]$ where $\vec{z} = (z_1, \dots, z_d)$ is a vector of integers. Similarly, choose an arbitrary vector of integers $\vec{z}' = (z'_1, \dots, z'_d)$ and let $I' = [(z'_1 - h_{i,1}) \epsilon_i, (z'_1 - h_{i,1} + 1) \epsilon_i] \times \dots \times [(z'_d - h_{i,d}) \epsilon_i, (z'_d - h_{i,d} + 1) \epsilon_i]$. Because the $I_{\vec{z}}$ tile $\mathbb{R}^d$ (overlapping only along their boundaries) and have side length at least that of $I'$, at most $2^d$ of the $I_{\vec{z}}$ suffice to cover $I'$. We chose $I'$ arbitrarily, so $N(X, r^{-n_i}, \vec{0}) \leq 2^d N(X, \epsilon_i, \vec{h}_i)$.
Similarly, choose an arbitrary $\vec{z}$ and let $I = [z_1 r^{-n_i - 1}, (z_1 + 1) r^{-n_i - 1}] \times \dots \times [z_d r^{-n_i - 1}, (z_d + 1) r^{-n_i - 1}]$, and consider the boxes of the form $I_{\vec{z}'} = [(z'_1 - h_{i,1}) \epsilon_i, (z'_1 - h_{i,1} + 1) \epsilon_i] \times \dots \times [(z'_d - h_{i,d}) \epsilon_i, (z'_d - h_{i,d} + 1) \epsilon_i]$ for integer vectors $\vec{z}'$. Note that because the $I_{\vec{z}'}$ are adjacent (i.e. they partition $\mathbb{R}^d$) and have longer sides than $I$, there exist at most $2^d$ such $I_{\vec{z}'}$ that cover $I$. Since $I$ was chosen arbitrarily, $N(X, \epsilon_i, \vec{h}_i) \leq 2^d N(X, r^{-n_i-1}, \vec{0})$. These inequalities give us: $$\frac{\log \frac{1}{2^d} N(X, r^{-n_i}, \vec{0})}{\log r^{n_i + 1}} \leq \frac{\log N(X, \epsilon_i, \vec{h}_i)}{\log \frac{1}{\epsilon_i}} \leq \frac{\log 2^d N(X, r^{-n_i-1}, \vec{0})}{\log r^{n_i}}.$$ Apply laws of logarithms: $$\frac{- \log 2^d + \log N(X, r^{-n_i}, \vec{0})}{\log r + \log r^{n_i}} \leq \frac{\log N(X, \epsilon_i, \vec{h}_i)}{\log \frac{1}{\epsilon_i}} \leq \frac{\log 2^d + \log N(X, r^{-n_i-1}, \vec{0})}{-\log r + \log r^{n_i + 1}}.$$ Now note that as long as $a_i, b_i \to \infty$ with $C_1, C_2$ constant, we have $\lim_{i \to \infty} \frac{C_1 + a_i}{C_2 + b_i} = \lim_{i \to \infty} \frac{a_i}{b_i}$. So: $$\lim_{i \to \infty} \frac{- \log 2^d + \log N(X, r^{-n_i}, \vec{0})}{\log r + \log r^{n_i}} = \lim_{i \to \infty} \frac{\log N(X, r^{-n_i}, \vec{0})}{\log r^{n_i}} = L.$$ Similarly: $$\lim_{i \to \infty} \frac{\log 2^d + \log N(X, r^{-n_i-1}, \vec{0})}{-\log r + \log r^{n_i + 1}} = \lim_{i \to \infty} \frac{\log N(X, r^{-n_i-1}, \vec{0})}{\log r^{n_i + 1}} = L.$$ Since $\frac{\log N(X, \epsilon_i, \vec{h}_i)}{\log \frac{1}{\epsilon_i}}$ is bounded between two sequences converging to $L$, we conclude that its limit is $L$ as well. \end{proof} \begin{lem} \label{box_counting_entropy} Let $L$ be an $\omega$-language of base-$k$ representations of points in $[0, 1]^d$. Let $X = \nu_k(L)$.
Then $d_B(X) = \frac{1}{\log k} h(L^{pre})$. \end{lem} \begin{proof} Lemma \ref{language_entropy_prefix_limit} tells us that the entropy of $L^{pre}$ is defined as a limit and not just as a limit superior. By Lemma \ref{box_counting_simpler_limit}, it then suffices to show: $$\lim_{n \to \infty} \frac{\log N(X, k^{-n}, \vec{0})}{\log k^n} = \frac{1}{\log k} \lim_{n \to \infty} \frac{\log |L^{pre}|_n}{n}.$$ Now consider each string $w$ in $L^{pre}$ of length $n$. The infinite strings with this prefix represent precisely the numbers in $I_{\vec{z}} = [z_1 k^{-n}, (z_1 + 1) k^{-n}] \times \dots \times [z_d k^{-n}, (z_d + 1) k^{-n}]$ where $\vec{z} = (z_1, \dots, z_d)$ is the integer vector with base-$k$ representation given by $w$. So all of these boxes for all such strings $w$ will cover $X$; thus, $N(X, k^{-n}, \vec{0}) \leq |L^{pre}|_n$. This covering may not be optimal, because the boxes $I_{\vec{z}}$ are not disjoint; they overlap at points with at least one coordinate that is $k$-rational. So a single $I_{\vec{z}}$ may contain points that the above covering method would place in any adjacent box. However, this is the only overlap, so a box in the optimal covering corresponds to at most $3^d$ boxes in the above covering. Hence $N(X, k^{-n}, \vec{0}) \geq \frac{1}{3^d} |L^{pre}|_n$. Therefore: $$\frac{1}{\log k} \lim_{n \to \infty} \frac{\log |L^{pre}|_n}{n}$$ $$= \frac{1}{\log k} \lim_{n \to \infty} \frac{\log \frac{1}{3^d} + \log |L^{pre}|_n}{n}$$ $$= \frac{1}{\log k} \lim_{n \to \infty} \frac{\log \left(\frac{1}{3^d} |L^{pre}|_n\right)}{n}$$ $$\leq \frac{1}{\log k} \lim_{n \to \infty} \frac{\log N(X, k^{-n}, \vec{0})}{n}$$ $$\leq \frac{1}{\log k} \lim_{n \to \infty} \frac{\log |L^{pre}|_n}{n}.$$ It follows that the two limits are equal, as required. \end{proof} We conclude this section by noting that the choice of regular language to associate with a $k$-automatic set is not always obvious. 
In particular, in their conjecture regarding the connection between entropy and dimension, Adamczewski and Bell \cite{AB11} use the regular language of \textit{substrings} of the base-$k$ expansions, not the language of prefixes as we are using here. We will show that this does not make a difference. This result follows from \cite{S85}, in which the author shows that the entropy of a $k$-regular $\omega$-language $L$ is equal to that of the (normal) regular language $L^{pre}$. Theorem 14 of \cite{S85} further shows that if $L$ is an $\omega$-language then there is a strongly-connected $\omega$-language $L'$ and a word $w \in \Sigma^*$ such that $wL' \subseteq L$ and $h(L)=h(L')$. It follows that the entropy of $L$ is equal to the maximum over the entropies of the subsets corresponding to the strongly connected components of any B\"uchi automaton $\calA$ recognizing $L$; together, these facts imply that the entropy of $L$ is the same as that of the set of substrings of $L$. \begin{fact}[\cite{S85}] \label{entropy_pre_sub} Let $L$ be a regular language closed under prefixes, and let $L^{sub}$ be the regular language of substrings of strings in $L$. Then $h(L) = h(L^{sub})$. \end{fact} \section{The closed case}\label{closed} \subsection{Spectral radius and box-counting dimension} The goal of this subsection will be to produce a method for computing the box-counting dimension of a closed $k$-automatic set based on its B\"uchi automaton. We will do this by first computing the entropy of the corresponding language of prefixes and then applying Lemma \ref{box_counting_entropy}. \begin{defn}\label{adjmat} Let $\mathcal{A}$ be a B\"uchi automaton. Suppose that $Q$, the set of states of $\calA$, has size $n$ and let $\iota:\{1,\ldots,n\} \to Q$ be a fixed bijection.
Then we associate to $\calA$ a weighted adjacency matrix $\mathbf{M}(\calA,s)=(m_{i,j})$ given by: \[ m_{i,j} = \frac{|\{\sigma \in \Sigma: (\iota(i),\iota(j),\sigma) \in \Delta\}|}{k^{s}}. \] Recall that $\Sigma$ is the alphabet and $\Delta$ is the transition relation of $\calA$. \end{defn} We will need the following fact, which appears in \cite{S85}, regarding sequences defined by repeated matrix multiplication: \begin{fact} \label{language_radius_entropy} Let $L$ be a regular language closed under prefixes. Let $\mathcal{A}$ be a finite automaton recognizing $L$, and assume that $\mathcal{A}$ is deterministic, trim, and has every state accepting. Label the states of $\mathcal{A}$ by numbers $1$ through $m$. Then the entropy of $L$ is equal to the logarithm of the spectral radius $\sprad{k\mathbf{M}(\calA,1)}$ of the weighted adjacency matrix $\mathbf{M}(\calA,1)$ scaled by the constant $k$. \end{fact} We know a lot about the dimension of $V_k(\mathcal{A})$ when it is a closed set, and we know that if $\mathcal{A}$ is deterministic (and trim), then we can take the closure of $V_k(\mathcal{A})$ by making all states accepting. It is not immediately obvious that this will hold when $\mathcal{A}$ is nondeterministic, but in the following lemma we show that this is the case. The following result is essentially a corollary of Lemma 58 in \cite{CLR15}: \begin{lem} \label{automaton_closure} Let $\mathcal{A}$ be a trim B\"uchi automaton. Then $V_k(\overline{\calA}) = \overline{V_k(\mathcal{A})}$. \end{lem} \begin{proof} First, we prove that $V_k(\overline{\calA})$ is closed. Let $x \in \overline{V_k(\overline{\calA})}$. Then there exists a sequence $(x_m)_{m \in \N}$ in $V_k(\overline{\calA})$ with $x_m \to x$. Without loss of generality assume that either $x_m < x$ for all $m$, or else $x_m > x$ for all $m$. We will assume the latter; the proof in the former case is analogous.
Let $w$ be the infinite base-$k$ representation of $x$ such that if $x$ is $k$-rational, then $w$ ends in $0^\omega$; this uniquely identifies $w$. (We choose this representation because we assumed $x_m > x$.) Then for all $n$ there exists $m_n$ such that the first $n$ characters of some base-$k$ representation of $x_{m_n}$ are the first $n$ characters of $w$. Let $w_n$ be this base-$k$ representation for a given $n$. Now, we will define a graph $G$ inductively. The vertex set will be a subset of $\Sigma^* \times Q^*$. Our initial condition is that $G$ contains the vertices $(\epsilon, q)$, where $q$ ranges over $S$ (the set of start states of $\overline{\calA}$). Then for every vertex $(u, v)$, we require that $G$ contain the vertex $(uc, vq)$ where $uc$ is a prefix of $w$ and $\overline{\calA}$ transitions from the last state in $v$ to state $q$ on $c$; and we require that $G$ contain the edge from $(u, v)$ to $(uc, vq)$. Observe that: \begin{itemize} \item $G$ has finitely many connected components: as we have defined $G$, every non-initial vertex is connected to a vertex whose string is shorter. So if the length of the string for a vertex is $n$, that vertex has a path of length $n$ to an initial vertex, and there are only finitely many initial vertices. \item $G$ is locally finite: every vertex has at most $k|Q|$ incident edges. \item $G$ is infinite: if $w_n$ is accepted, there must be an accepting path for the first $n$ characters of $w_n$, which are also the first $n$ characters of $w$. This gives a vertex for every natural number $n$. \end{itemize} Therefore, we can apply K\H{o}nig's lemma to one of the connected components of $G$ and conclude that $G$ has an infinite path. Moreover, note that because strings have finite length, infinitely many of the edges in the path must lead to a longer string; and because there is only one edge from any vertex to a shorter string, it in fact must be the case that eventually the path only contains edges to longer strings.
Choose a vertex sufficiently far into the path that this is the case; there is a path from an initial vertex to this vertex and then infinitely farther through longer and longer strings; and this path must correspond to an accepting path for $w$ (recall that every state of $\overline{\calA}$ is accepting). So $\overline{\calA}$ accepts $w$, and thus, $V_k(\overline{\calA})$ is closed. Now it remains to show that $V_k(\overline{\calA})$ is the closure of $V_k(\mathcal{A})$. Note that any accepting run in $\mathcal{A}$ is also accepting in $\overline{\calA}$, so $V_k(\mathcal{A}) \subseteq V_k(\overline{\calA})$. Moreover, let $w$ be accepted by $\overline{\calA}$. Then for any $n$, the first $n$ characters of $w$ have a run to some state in $\overline{\calA}$. As the only difference between $\mathcal{A}$ and $\overline{\calA}$ is the set of accept states, there is also a run for these characters in $\mathcal{A}$, and because $\mathcal{A}$ is trim, this run can be extended to an accepting run. So every infinite string accepted by $\overline{\calA}$ has, for all $n$, its first $n$ characters in common with some infinite string accepted by $\mathcal{A}$. It follows that $V_k(\overline{\calA}) \subseteq \overline{V_k(\mathcal{A})}$. But the only closed subset of $\overline{V_k(\mathcal{A})}$ that is a superset of $V_k(\mathcal{A})$ is $\overline{V_k(\mathcal{A})}$ itself. \end{proof} \subsection{Spectral radius and Hausdorff dimension} In \cite{MW88}, the objects whose Hausdorff dimensions are described are called ``geometric graph directed constructions.'' These constructions would be described in the current terminology of metric geometry as GDIFSs that satisfy the open set condition and have compact attractors. Recall the open set condition for GDIFSs as defined in \cite{E08}.
\begin{defn} If $\mathcal{G} = (V,E,s,t,X,S)$ is a GDIFS, then it satisfies the \emph{open set condition} precisely if for all $v \in V$ there are nonempty open sets $U_v \subseteq \mathbb{R}^d$ such that $$U_u \supseteq f_e[U_v]$$ for all $u,v \in V$ and all $e \in E_{u,v}$, and additionally, $$f_e[U_v] \cap f_{e'}[U_{v'}] = \emptyset$$ for all $u, v, v' \in V$ and all distinct edges $e \in E_{u,v}$ and $e'\in E_{u,v'}$. \end{defn} \begin{thm} Suppose that $X \subseteq [0,1]^d$ is a $k$-automatic set recognized by a deterministic B\"uchi automaton $\mathcal{A}$ with corresponding GDIFS $\mathcal{G}_{\mathcal{A}} = (V,E,s,t,X,S)$. Then $\mathcal{G}_{\mathcal{A}}$ satisfies the open set condition as defined in \cite{E08}. \end{thm} \begin{proof} Let $X$, $\mathcal{A}$ and $\mathcal{G}_{\mathcal{A}}$ be as in the hypotheses. To see that the open set condition holds, for each $v \in V$ let $U_v = (0,1)^d$. Observe that for any $e \in E$, it is the case that $f_e[U_v] = \{\frac{x+\sigma_e}{r}: x\in U_v\}$, where $\sigma_e \in \Sigma$ is the label of the transition arrow in $\mathcal{A}$ that corresponds to the edge $e$. Hence it is clear that $U_u \supseteq f_e[U_v]$ for any vertices $u,v \in V$ such that $e$ is an edge from $u$ to $v$, since $$(0,1)^d \supseteq \prod_{i=1}^d\left(\frac{\sigma_{e,i}}{r},\frac{ \sigma_{e,i}+1}{r}\right).$$ Suppose now that $u,v,v' \in V$ and that $e \in E_{u,v}$ and $e' \in E_{u,v'}$ are distinct. Because $\mathcal{A}$ is deterministic, the transition arrows corresponding to $e$ and $e'$ respectively in $\mathcal{A}$ cannot have the same label. Thus we know $\sigma_e \neq \sigma_{e'}$, and as a consequence if an element $\vec{a}$ is in $\{\frac{x+\sigma_e}{r}: x\in (0,1)^d\}$, then it cannot be the case that $\vec{a} \in \{\frac{x+\sigma_{e'}}{r}: x\in U_{v'}\}$, since such sets carve out the interiors of disjoint subcubes of $(0,1)^d$ of side length $\frac{1}{r}$. Hence the open set condition is satisfied.
\end{proof} The following corollary is immediate from the above theorem, together with the fact that every closed B\"uchi automaton is weak. Recall that weak automata (which include all closed automata) have an equivalent deterministic B\"uchi automaton \cite{PP04}. \begin{cor}\label{open} All closed B\"uchi automata satisfy the open set condition. \end{cor} In order to apply Theorem 5 from the paper \cite{MW88} of Mauldin and Williams, we need the following lemma concerning the equality of the Hausdorff dimensions for two strongly connected automata which differ only in which states are initial. \begin{lem}\label{strongly_connected_start_state} Let $\mathcal{A}$ and $\mathcal{A}'$ be two strongly connected B\"uchi automata with the same set of states, set of accept states, and transition relation. Then $d_H(V_k(\calA)) = d_H(V_k(\calA'))$. \end{lem} \begin{proof} Note that $V_k(\calA) = \bigcup_{q_i \in S} V_k(\calA_i)$, where $S$ is the set of start states of $\calA$ and $\calA_i$ is a modification of $\calA$ in which $q_i$ is the only start state. Therefore $d_H(V_k(\calA)) = \max_{q_i \in S} d_H(V_k(\calA_i))$. Let $r$ be the value of $i$ giving maximal dimension, i.e. $d_H(V_k(\calA)) = d_H(V_k(\calA_r))$. Define $\calA'_s$ and choose $s$ analogously, such that $d_H(V_k(\calA')) = d_H(V_k(\calA'_s))$ and such that $q_s$ is the only start state in $\calA'_s$. Because $\calA_r$ is strongly connected, it must contain a path $P$ from $q_s$ to $q_r$; let $v \in ([k]^d)^*$ be a string witnessing path $P$. Let $w \in ([k]^d)^\omega$ be accepted by $\calA_r$. Then $P$ concatenated with an accepting run of $w$ in $\calA_r$ forms an accepting run of $vw$ in $\calA'_s$.
Moreover, note that: $$\nu_k(vw) = \sum_{i=0}^\infty \frac{(vw)_i}{k^{i+1}}$$ $$= \sum_{i=0}^{|v|-1} \frac{(vw)_i}{k^{i+1}} + \sum_{i=|v|}^\infty \frac{(vw)_i}{k^{i+1}}$$ $$= \sum_{i=0}^{|v|-1} \frac{v_i}{k^{i+1}} + \sum_{i=0}^\infty \frac{w_i}{k^{|v|+i+1}}$$ $$= \nu_k(v\vec{0}^\omega) + k^{-|v|} \nu_k(w).$$ Let $f : [0, 1]^d \to [0, 1]^d$ map $x$ to $\nu_k(v\vec{0}^\omega) + k^{-|v|} x$; then the above chain of equalities gives us that $f(V_k(\calA_r)) \subseteq V_k(\calA'_s)$. Because $f$ is an invertible affine transformation, it follows that $d_H(V_k(\calA_r)) \leq d_H(V_k(\calA'_s))$. The analogous argument in the opposite direction gives us $d_H(V_k(\calA'_s)) \leq d_H(V_k(\calA_r))$. Thus: $$d_H(V_k(\calA)) = d_H(V_k(\calA_r)) = d_H(V_k(\calA'_s)) = d_H(V_k(\calA')).$$ \end{proof} In \cite{MW88} Mauldin and Williams work with what they call a ``geometric graph directed construction'' on $\R^m$. We observe that the GDIFS associated to a closed B\"uchi automaton is nearly an instance of such a ``geometric graph directed construction.'' In \cite{MW88}, however, the authors consider only iterated function systems directed by a graph, whereas the GDIFSs associated to closed automata are in general multigraphs. To account for this, though, we observe that we can ``translate'' between multigraph representations of B\"uchi automata and automaton diagrams which are true digraphs. \begin{defn}\label{dg} For a deterministic B\"uchi automaton $\calA=(Q,\Sigma,\delta,S,F)$, consider the automaton $(Q \times \Sigma, \Sigma, \delta', S', F \times \Sigma)$, with $\delta'$ and $S'$ defined as follows: \begin{itemize} \item For all $\sigma, \tau \in \Sigma$ and $q, s \in Q$, if $\delta(q, \sigma) = \varnothing$, then $\delta'((q, \tau), \sigma) = \varnothing$. If $\delta(q, \sigma) = \{s\}$, then $\delta'((q, \tau), \sigma) = \{(s, \sigma)\}$. (These are the only possibilities, because $\calA$ is deterministic.) 
\item Fix some $\sigma_0 \in \Sigma$ arbitrarily; then with $S = \{q\}$, we have $S' = \{(q, \sigma_0)\}$. \end{itemize} Since this automaton is not necessarily trim, let $\calA_{dg}$ denote the automaton that results from removing any states that are not accessible from the start state or not co-accessible from some accept state. \end{defn} \begin{lem}\label{multigraph_to_digraph} If the automaton $\calA$ is a closed B\"uchi automaton, then $\calA_{dg}$ is an equivalent B\"uchi automaton that is also closed. \begin{proof} To see that $\calA$ and $\calA_{dg}$ are equivalent as automata, observe that if $(q_i)_{i\in \N}$ is an accepting run of $(\tau_i)_{i\in \N}$ in $\calA$, then, setting $\tau_{-1}=\sigma_0$, the run $(q_i, \tau_{i-1})_{i \in \N}$ of $(\tau_i)_{i\in \N}$ in $\calA_{dg}$ is an accepting run. Similarly, if $(q_i, \sigma_{i})_{i \in \N}$ is an accepting run of $(\tau_i)_{i\in \N}$ in $\calA_{dg}$, then $(q_i)_{i\in \N}$ is an accepting run in $\calA$ for the same string. Since $\calA_{dg}$ is trim by definition, and since $\calA$ is closed precisely when $Q=F$, in which case the state set $Q\times \Sigma$ of $\calA_{dg}$ coincides with its accept set $F\times \Sigma$, we conclude that $\calA_{dg}$ is closed precisely when $\calA$ is. \end{proof} \end{lem} The next corollary follows from the remarks made in the proof of Lemma \ref{multigraph_to_digraph} and essentially says that strongly connected components are preserved by the construction in Definition \ref{dg}. \begin{cor}\label{scdg} If $C$ is a strongly connected component of $\calA$, then there is a corresponding strongly connected component of $\calA_{dg}$, call it $C_{dg}$, such that the induced sub-automata generated by $C$ and $C_{dg}$ (respectively) recognize the same strings up to prefixes. In other words, let $L$ be recognized by the induced sub-automaton generated by $C$, and likewise for $L_{dg}$ and $C_{dg}$; then there exist words $u, v$ such that $L = uL_{dg}$ and $L_{dg} = vL$.
\end{cor} \begin{proof} First, we note that the choice of start state in each induced sub-automaton does not matter. In a strongly connected automaton, if the start state is moved, the resulting language and original language are equal up to prefixes (as defined in the statement of the corollary). So we will allow the start states to be chosen freely. Let $\calA_C$ denote the sub-automaton of $\calA$ consisting of the states and transitions contained within $C$, with arbitrarily chosen $q \in C$ as its only start state. Then by strong connectedness, there exists a word $w$ and a corresponding run $(q_i)_{0 \leq i \leq n}$, where $n = |w|$, such that $q_0 = q_n = q$ and such that, for each transition within $C$, there exists an $i$ witnessing the transition, i.e. if there is an arrow from state $r$ to state $s$ on the character $\sigma$, then there exists $i$ such that $q_{i-1} = r, q_i = s, w_i = \sigma$. Let $C_{dg}$ be the strongly connected component of $(q, w_n)$ in $\calA_{dg}$, and let $\calA_{C_{dg}}$ be its induced sub-automaton with $(q, w_n)$ as the start state. Then there is a corresponding path $((q_i, w_i))_{0 \leq i \leq n}$ (letting $w_0 = w_n$) in $\calA_{C_{dg}}$; this path is a cycle, so each state $(q_i, w_i)$ is in $C_{dg}$. Let $u$ be any infinite word accepted by $\calA_C$. Following the proof of the previous lemma, each transition in the accepting run for $u$ also appears somewhere in the run for $w$, and therefore has a corresponding transition in $\calA_{C_{dg}}$. So $u$ is accepted by $\calA_{C_{dg}}$. The reverse inclusion follows immediately from the same argument as in Lemma \ref{multigraph_to_digraph}. \end{proof} The following is implied by the results in \cite{S89}, in which the author shows that for closed languages the Hausdorff dimension and entropy agree, in conjunction with the countable additivity of entropy, as shown in \cite{S85}. \begin{lem}\label{hdim} Let $\calA$ be a closed B\"uchi automaton.
Let $SC(\calA)$ denote the set of strongly connected components of $\calA$. For each strongly connected component $C$ of $\calA$, let $\calA_{C}$ denote the sub-automaton of $\calA$ consisting of the states and transitions contained within $C$, with arbitrarily chosen $q \in C$ as its only start state. Then \[ d_H(V_k(\calA)) = \max_{C \in SC(\calA)}\left( d_H( V_{k}(\calA_{C})) \right). \] \begin{proof} By Lemma \ref{strongly_connected_start_state}, the Hausdorff dimension of the automaton $\calA_C$ does not depend on the choice of $q$, its start state. Let $K$ be the attractor of $G_{\calA}$, i.e. the set that $\calA$ recognizes. We recall the statement of Theorem 5 of \cite{MW88}. It states that if $K= \bigcup_{v \in V} K_v$ is the attractor of GDIFS $G$, then $d_H(K_v) = \alpha_v = \max \{ \alpha_H \mid H \in SC_v(G)\}$. Here, $SC_v(G)$ denotes the set of strongly connected components of $G$ that are accessible from $v$. By Corollary \ref{open} we know that $\calA_{dg}$ satisfies the definition in \cite{MW88} of a geometric graph directed construction. By Lemma \ref{multigraph_to_digraph} we know that the automaton $\calA_{dg}$ recognizes the same set as $\calA$, hence $d_H(V_k(\calA))=d_H(V_k(\calA_{dg}))$. By Corollary \ref{scdg}, for each strongly connected component $C$ of $\calA$ there is a corresponding strongly connected component $C_{dg}$ of $\calA_{dg}$ such that $d_H(V_k(\calA_C))=d_H(V_k(\calA_{C,dg}))$, where $\calA_{C,dg}$ is an automaton whose automaton diagram is given by taking $C_{dg}$ and assigning an arbitrary start state from those in the vertex set of $C_{dg}$.
Since $\calA_{dg}$ has a true digraph structure (rather than that of a multigraph), we can apply Theorem 3 of \cite{MW88} to obtain that the Hausdorff dimension of $V_k(\calA_{C,dg})$ is the unique $\alpha$ such that $\sprad{\mathbf{M}(\calA_{C,dg}, \alpha)} =1$. Note that the value of $\max_{C \in SC(\calA)} d_H( V_{k}(\calA_{C}))$ is well-defined by Lemma \ref{strongly_connected_start_state}. By Theorem 4 of \cite{MW88}, we conclude that the dimension of $V_k(\calA_{dg})$ is the maximum of all such $\alpha$ over the strongly connected components $C_{dg}$ of $\calA_{dg}$. \end{proof} \end{lem} \begin{thm}\label{closed_equality} If $X$ is a closed $k$-automatic set in $\R$ recognized by closed automaton $\calA$, then $d_H(X) = d_B (X)= \frac{1}{\log (k)} h(L(\calA))$, where $L(\calA)$ is the set of strings $\calA$ recognizes. \begin{proof} By Lemma \ref{hdim}, it suffices to show that $d_H(X) = d_B (X)= \frac{1}{\log (k)} h(L(\calA_{dg}))$, since $L(\calA) = L(\calA_{dg})$ by Lemma \ref{multigraph_to_digraph}. Suppose that $\calB$ is the induced sub-automaton given by taking a strongly connected component of $\calA_{dg}$ and selecting a start state arbitrarily. Let $L$ be the language that $\calB$ recognizes as a subset of $[k]^{\omega}$. By Theorem 3 of \cite{MW88} the Hausdorff dimension of $V_k(\calB)$ is the unique $\alpha$ such that $\sprad{\mathbf{M}(\calB, \alpha)} =1$. By computations from \cite{MW88}, we know that if $\vec{u} = (1, \ldots ,1) \in \R^m$, where $\calB$ has $m$ states, then $$\sprad{\mathbf{M}(\calB, \alpha)}= \lim_{n \to \infty} \Vert \mathbf{M}(\calB, \alpha)^n\vec{u} \Vert^{1/n}_1.$$ Moreover, the authors show by induction in \cite{MW88} that the following holds: $$\Vert \mathbf{M}(\calB, \alpha)^n\vec{u} \Vert_1 = \sum_{w\in L^{pre}_n} \left( \frac{1}{k^n}\right)^{\alpha}$$ where $L^{pre}_n$ is the set of length-$n$ prefixes of $L$.
Hence we get the following: $$\sprad{\mathbf{M}(\calB, \alpha)}= \lim_{n \to \infty}\left( \sum_{w\in L^{pre}_n} \left( \frac{1}{k^n}\right)^{\alpha} \right)^{1/n}$$ and taking the logarithm base $k$ of the equation $\sprad{\mathbf{M}(\calB, \alpha)} =1$, we obtain this equality: $$\lim_{n \to \infty}\frac{1}{n}\log_k\left( \frac{1}{k^{n\alpha}} \vert L^{pre} \vert_n \right) =0.$$ Consequently, we obtain the following: $$\lim_{n \to \infty}\frac{1}{n}\log_k (k^{-n\alpha}) + \lim_{n \to \infty}\frac{1}{n}\log_k(\vert L^{pre} \vert_n ) =0$$ which yields the final equality $\alpha = \lim_{n \to \infty}\frac{\log_k(\vert L^{pre} \vert_n)}{n} =\frac{1}{ \log(k)}h(L^{pre})$, as desired. Finally, using Lemma \ref{hdim} we know that $d_H(X)$ is the maximum of the Hausdorff dimensions of the strongly connected components of $\calA$. We note that by Fact \ref{language_radius_entropy}, the same holds for entropy: if we order the states of $\calA$ according to its strongly connected components, the incidence matrix for $\calA$ becomes block upper triangular, with diagonal blocks corresponding to the strongly connected components, and the spectral radius of a block triangular matrix is the maximum of the spectral radii of its diagonal blocks. We can therefore conclude that $d_H(X) = \frac{1}{\log (k)}h(L(\calA))= d_B(X)$ by Lemma \ref{box_counting_entropy}. \end{proof} \end{thm} \section{Cycle languages and when dimensions disagree}\label{dimensions} We would like to know, in the general case, when the Hausdorff dimension of a $k$-automatic set is not equal to its box-counting dimension. A fundamental example of this is the set of dyadic rationals in $[0, 1]$: $\{\frac{a}{2^n} : a, n \in \mathbb{N}\} \cap [0, 1]$. This set is $2$-automatic because it can be equivalently phrased as the set of numbers in $[0, 1]$ whose binary expansions have only finitely many occurrences of one of the two bits ($0$ or $1$). It has Hausdorff dimension $0$, as it is countable, and box-counting dimension $1$, as it is dense in the interval.
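The box-counting claim in this example is easy to check numerically. The following Python sketch is illustrative only (the truncation depth and the helper name are our own choices): it counts the grid boxes at scale $2^{-n}$ occupied by a finite-stage approximation of the dyadic rationals. Every box is occupied, consistent with box-counting dimension $1$, even though the full set is countable and so has Hausdorff dimension $0$.

```python
from math import log2

# Count the grid boxes [z 2^-n, (z+1) 2^-n) hit by the dyadic rationals
# a / 2^depth in [0, 1]. (Illustrative helper: the dyadic rationals are
# dense, so once depth >= n every one of the 2^n boxes is occupied.)
def occupied_boxes(n, depth):
    points = {a / 2 ** depth for a in range(2 ** depth + 1)}
    boxes = {min(int(x * 2 ** n), 2 ** n - 1) for x in points}
    return len(boxes)

for n in (2, 4, 6):
    N = occupied_boxes(n, depth=10)
    print(n, N, log2(N) / n)  # the ratio log N / log 2^n is exactly 1 here
```

The ratio $\frac{\log N}{\log 2^n}$ equals $1$ at every scale shown, matching the box-counting dimension of the interval, while countability alone forces the Hausdorff dimension to $0$.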
Examining a B\"uchi automaton for this set, we note that the reason for this disparity in dimension appears to be that there are many ways to get from the start state to an accept state but few ways (one way, in fact) to loop from an accept state back to itself. We can in fact formalize this notion and obtain a necessary and sufficient condition for the equivalence of the two notions of dimension by defining the notion of a \textit{cycle language.} \begin{defn}\label{cycle} In a finite or B\"uchi automaton $\mathcal{A}$ with $q$ as one of its states, the \textit{cycle language} $C_q(\mathcal{A}) \subseteq \Sigma^*$ contains all strings $w$ for which there is a run of $\mathcal{A}$ from state $q$ to itself on the string $w$. Let $\calC_q(\calA)$ denote the automaton constructed by taking $\mathcal{A}$, making state $q$ the only start and accept state, and trimming the resulting automaton. Call this the \emph{cycle automaton}. \end{defn} Note that cycle languages are regular, and this is witnessed by the cycle automaton. Note also that because looping to an accept state multiple times (or zero times) is still a loop, $C_q(\mathcal{A})^* = C_q(\mathcal{A})$. \begin{lem} \label{hausdorff_dimension_cycle_languages} Let $\mathcal{A} = (Q, \Sigma, \delta, S, F)$ be a trim B\"uchi automaton, and let $X = V_k(\mathcal{A})$. Let $X_q = V_k(\calC_q(\mathcal{A}))$ for each $q \in Q$. Then: \begin{enumerate}[(i)] \item $d_H(X) = \max_{q \in F} d_H(X_q)$; \item $d_B(X) = \max_{q \in Q} d_H(X_q)$. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate}[(i)] \item Let $L_q$ be the language of words on which there is a path from a start state of $\mathcal{A}$ to state $q$. An infinite string is accepted by $\mathcal{A}$ precisely when it has a path that runs from a start state to an accept state and then cycles back to that accept state infinitely often.
Thus, $$L(\mathcal{A}) = \bigcup_{q \in F} \bigcup_{w \in L_q} w C_q(\mathcal{A})^\omega.$$ Let $T_w$ be the (affine) transformation on a set corresponding to prefixing the string $w$. Applying $d_H \circ \nu_k$ to both sides, we get: $$d_H(X) = d_H\left(\bigcup_{q \in F} \bigcup_{w \in L_q} T_w(X_q) \right)$$ Applying the formula for Hausdorff dimension of a countable union: $$d_H(X) = \sup_{q \in F} \sup_{w \in L_q} d_H(T_w(X_q))$$ Because invertible affine transformations do not affect dimension: $$d_H(X) = \sup_{q \in F} \sup_{w \in L_q} d_H(X_q)$$ Eliminating the now-useless quantification over $w$: $$d_H(X) = \sup_{q \in F} d_H(X_q)$$ Because $F$ is finite: $$d_H(X) = \max_{q \in F} d_H(X_q).$$ \item Because $\mathcal{A}$ is trim, we have $\overline{X} = V_k(\overline{\mathcal{A}})$ according to Lemma \ref{automaton_closure}. So applying (i), we get that $d_H(\overline{X}) = \max_{q \in Q} d_H(X_q)$ (note that the cycle language $C_q(\mathcal{A})$ does not depend on which states are accepting). Yet we know from Theorem \ref{closed_equality} that $d_H(\overline{X}) = d_B(\overline{X}) = d_B(X)$. \end{enumerate} \end{proof} We do immediately get as a corollary that $d_H(X) < d_B(X)$ when $d_H(X_q)$ is larger for some $q \notin F$ than for all $q \in F$. However, this is not a very useful version of the characterization as it stands, because the Hausdorff dimension of $\nu_k(C_q(\mathcal{A})^\omega)$ is not an easy value to compute \textit{a priori.} The rest of this section will be focused on reducing the above to an easier problem. The main step in the process of simplifying the result of Lemma \ref{hausdorff_dimension_cycle_languages} is the following result: \begin{lem} \label{hausdorff_omega_limit} Let $\mathcal{A}$ be a finite automaton, not necessarily deterministic. Let $L'$ be the $\omega$-language recognized by $\mathcal{A}$ as a B\"uchi automaton, and let $L$ be the language recognized by $\mathcal{A}$ as a finite automaton.
Let $X = \nu_k(L')$ and $Y = \nu_k(L^\omega)$. Then $d_H(\overline{X}) \leq d_H(Y)$. \end{lem} \begin{proof} Without loss of generality, assume $\mathcal{A}$ is finite-trim. Let $\mathcal{B}$ be the induced sub-automaton containing states in $\mathcal{A}$ from which there are arbitrarily long paths to accept states. We note that $\mathcal{B}$ is trim and is equivalent as a B\"uchi automaton to $\mathcal{A}$. Assume that the desired lemma holds for $\mathcal{B}$; let $L_\mathcal{B}$ be the language recognized by $\mathcal{B}$ as a finite automaton, and let $Z = \nu_k(L_\mathcal{B}^\omega)$. Then $L_\mathcal{B} \subseteq L$, hence $d_H(Z) \leq d_H(Y)$. So $d_H(\overline{X}) \leq d_H(Y)$ as well; because $\mathcal{A}$ and $\mathcal{B}$ are equivalent as B\"uchi automata, the lemma then holds for $\mathcal{A}$. Thus we have reduced the lemma to the case where the automaton in question is trim. Starting over, assume $\mathcal{A}$ is trim. Then $\overline{\mathcal{A}}$, as a finite automaton, recognizes the language $M$ of prefixes of $L$. Let $M'$ be the language $\overline{\mathcal{A}}$ recognizes as a B\"uchi automaton. By the proof of Lemma \ref{automaton_closure}, the language $M'$ is closed, in the sense that if infinitely many of an infinite string $w$'s prefixes are prefixes of strings in $M'$, then $w \in M'$. Therefore $M' = \vec{M}$, as $M$ is the language of prefixes of strings in $M'$. Note furthermore that $L' \subseteq M'$. Moreover, any prefix of a string in $M'$ is a string in $M$ and thus a prefix of a string in $L$, and thus, from trimness, a prefix of a string in $L'$; so $\nu_k(\vec{M}) = \overline{X}$. Let $Q$ be the state set of $\mathcal{A}$ (and $\overline{\mathcal{A}}$). Choose for each $q \in Q$ a string $u_q$ on which $\mathcal{A}$ runs from a start state to state $q$, and choose a string $v_q$ on which $\mathcal{A}$ runs from state $q$ to an accept state. Note that $u_q v_q \in L$. Let $\ell$ be the maximum length of $v_q u_q$ for $q \in Q$. 
Observe that because $\overline{X} = \nu_k(\vec{M})$ is closed, and $M$ is closed under prefixes, we have that $d_H(\overline{X}) = \frac{1}{\log k} h(M)$ by Theorem \ref{closed_equality}. Define the language $M_n$ for each positive integer $n$ as follows: for each string in $M$, and for every accepting run of this string in $\mathcal{A}$, after every $n$ characters, we record the state $q$ that the run is in and insert $v_q u_q$; the resulting strings are the elements of $M_n$. We note that $M_n$ and $X_n = \nu_k(\vec{M}_n)$ have the following properties: \begin{itemize} \item $\vec{M}_n$ is a subset of $L^\omega$. A string in $\vec{M}_n$ must have the form $w_1 v_{q_1} u_{q_1} w_2 v_{q_2} u_{q_2} w_3 \dots$ with each $w_j$ a string of length $n$. Note that $w_1 v_{q_1} \in L$, and $u_{q_j} w_{j+1} v_{q_{j+1}}$ is in $L$ as well. So this string is in $L^\omega$, and hence $X_n \subseteq Y$. \item The set $X_n$ is still closed; observe that by considering every possible path through $\mathcal{A}$ of length $n$ and connecting them with strings of states recognizing $v_q u_q$, it is possible to create a closed B\"uchi automaton for $\vec{M}_n$ and apply Lemma \ref{automaton_closure} again. Therefore, $d_H(X_n) = \frac{1}{\log k} h(M_n^{pre})$ by Theorem \ref{closed_equality}. \item Let $r$ be a positive integer; every string of length at most $rn$ in $M$ has a corresponding string of length at most $r(n+\ell)$ in $M_n^{pre}$. Thus $|M_n^{pre}|_{\leq r(n+\ell)} \geq |M|_{\leq rn}$, and so: $$h(M_n^{pre}) = \limsup_{r \to \infty} \frac{\log |M_n^{pre}|_{\leq r(n+\ell)}}{r(n+\ell)}$$ $$\geq \limsup_{r \to \infty} \frac{\log |M|_{\leq rn}}{r(n+\ell)}$$ $$= \frac{n}{n+\ell} \limsup_{r \to \infty} \frac{\log |M|_{\leq rn}}{rn}.$$ Now, without loss of generality, assume our alphabet does not contain the character $\$$, and let $N = M\$^*$. Observe that $N$ is still prefix-closed and that $|M|_{\leq n} = |N|_n$ for all $n$.
Then the above expression is equal to $\frac{n}{n+\ell} h(N)$, because from Lemma \ref{language_entropy_prefix_limit_summation}, we know that the limit superior defining $h(N)$ is a limit (thus it does not matter if we take a subsequence of the indices). We would then like to replace $N$ with $M$ in this equation, hence we show $h(N) = h(M)$. Note that because $\overline{\mathcal{A}}$ is trim, every string in $M$ can be extended to a longer string in $M$; and because $M$ is prefix-closed, said longer string can have any length. Therefore $|M|_n$ is monotone in $n$, and so $|N|_n = |M|_{\leq n} \leq (n+1) |M|_n$. Conversely we trivially have $|M|_n \leq |N|_n$. Since multiplication by a linear factor does not change the entropy, we have $h(N) = h(M)$ as required. \item Let $r$ be a positive integer. Each string in $M$ corresponds to at most $|Q|(\ell+1)$ strings in $M_n^{pre}$ because, at worst, the length of the original string is a multiple of $n$, and thus $M_n^{pre}$ has $|Q|(\ell+1)$ corresponding strings depending on which state the string ends in (which could, due to nondeterminism, be any of them) and how much of the last $v_q u_q$ is added. These corresponding strings are never shorter than the original string; therefore, $|M_n^{pre}|_{\leq r} \leq |Q|(\ell + 1) |M|_{\leq r}$. 
Thus: $$h(M_n^{pre}) = \limsup_{r \to \infty} \frac{\log |M_n^{pre}|_{\leq r}}{r}$$ $$\leq \limsup_{r \to \infty} \frac{\log (|Q|(\ell + 1) |M|_{\leq r})}{r}$$ $$= \limsup_{r \to \infty} \frac{\log (|Q|(\ell + 1)) + \log |M|_{\leq r}}{r}$$ $$= \limsup_{r \to \infty} \frac{\log (|Q|(\ell + 1))}{r} + \limsup_{r \to \infty} \frac{\log |M|_{\leq r}}{r}$$ $$= \limsup_{r \to \infty} \frac{\log |M|_{\leq r}}{r} = h(M).$$ \end{itemize} We thus have a collection of subsets $X_n$ of $Y$ such that: $$\frac{n}{n+\ell} d_H(\overline{X}) = \frac{1}{\log k} \frac{n}{n+\ell} h(M) \leq \frac{1}{\log k} h(M_n^{pre}) = d_H(X_n) \leq \frac{1}{\log k} h(M) = d_H(\overline{X}).$$ Because $\frac{n}{n + \ell} \to 1$ as $n \to \infty$, we conclude that $d_H(X_n) \to d_H(\overline{X})$. Therefore, $d_H(Y) \geq \sup d_H(X_n) \geq d_H(\overline{X})$, as required. \end{proof} \begin{cor} \label{dimension_equiv_cycle} Let $\mathcal{A}$ be a B\"uchi automaton recognizing an $\omega$-language $L'$. Assume that $\mathcal{A}$ is trim and has one accept state and one start state, which are the same state. Let $L$ be the language recognized by $\mathcal{A}$ as a finite automaton. Then $L' = L^\omega$, and $d_H(\nu_k(L')) = d_B(\nu_k(L'))$. \end{cor} \begin{proof} Let $X = \nu_k(L')$. From Theorem \ref{closed_equality}, we know $d_H(\overline{X}) = d_B(\overline{X}) = d_B(X)$, so it suffices to show $d_H(X) = d_H(\overline{X})$; in fact, we need only show $d_H(\overline{X}) \leq d_H(X)$. Note that if an infinite string $w$ belongs to $L'$, it must have an accepting run, which starts at the single start/accept state and revisits it infinitely many times; hence, $w$ can be broken into infinitely many substrings, each of which takes the run from this state back to itself. So $L' = L^\omega$. The second result then follows from Lemma \ref{hausdorff_omega_limit}. \end{proof} \begin{lem} \label{dimension_entropy_cycle} Let $L = C_q(\mathcal{A})$ be a cycle language for a B\"uchi automaton.
Then $d_H(\nu_k(L^\omega)) = \frac{1}{\log k} h(L)$. \end{lem} \begin{proof} Let $L'$ be the language $\calC_q(\mathcal{A})$ accepts as a B\"uchi automaton. Then by Corollary \ref{dimension_equiv_cycle}, it suffices to show $d_B(\nu_k(L')) = \frac{1}{\log k} h(L)$. Moreover, by Lemma \ref{box_counting_entropy}, it suffices to show $h((L')^{pre}) = h(L)$, where $(L')^{pre}$ is the language of prefixes of strings in $L'$. We claim that $(L')^{pre} = L^{pre}$. We have $L^{pre} \subseteq (L')^{pre}$ because $\calC_q(\mathcal{A})$ is trim, so any string in $L^{pre}$ can be extended to a string in $L$, and then further extended to an infinite string in $L'$. We have $(L')^{pre} \subseteq L^{pre}$ because any string in $L'$ must have infinitely many prefixes in $L$; so any string in $(L')^{pre}$ extends to an infinite string in $L'$, which we may cut off at a sufficiently late occurrence of the accept state to obtain a string in $L$ having the original string as a prefix. So we have reduced the lemma to demonstrating that $h(L) = h(L^{pre})$, which is Fact \ref{language_entropy_summation}. \end{proof} Combining the above with Lemma \ref{hausdorff_dimension_cycle_languages} gives us the theorem: \begin{thm} \label{hausdorff_dimension_cycle_language_entropy} Let $\mathcal{A} = (Q, \Sigma, \delta, S, F)$ be a trim B\"uchi automaton with set of accept states $F$, and let $X = V_k(\mathcal{A})$. Then: \begin{enumerate}[(i)] \item $d_H(X) = \frac{1}{\log k} \max_{q \in F} h(C_q(\mathcal{A}))$; \item $d_B(X) = \frac{1}{\log k} \max_{q \in Q} h(C_q(\mathcal{A}))$. \end{enumerate} \end{thm} \begin{cor} Let $\calA$ be a trim B\"uchi automaton such that $V_k(\calA) = X$.
Then $d_H(X)<d_B(X)$ if and only if there exists a non-final state $q \in Q \setminus F$ such that for cofinitely many $m \in \N$ there exists $n_m \in \N$ such that for each $f \in F$, the ratio of $|C_f(\calA)|_m$, the number of paths from $f$ to itself of length $m$, to $k^m$ is strictly less than the ratio of $|C_q(\calA)|_{n_m}$ to $k^{n_m}$. \begin{proof} For the forward implication, we prove the contrapositive; suppose that for every non-final state $q \in Q \setminus F$ there is a final state $f \in F$ such that $\limsup_{n \to \infty} \frac{|C_q(\calA)|_n}{k^n} \leq \limsup_{n \to \infty} \frac{|C_f(\calA)|_n}{k^n}$, where $C_q(\calA)$ is the cycle language of $q$ as defined in \ref{cycle}. If this is the case, we conclude that for all $q \in Q$ we have $h(C_q(\calA))\leq h(C_f(\calA))$ for some $f \in F$; this follows because taking logarithms commutes with $\limsup$. By Theorem \ref{hausdorff_dimension_cycle_language_entropy}, we conclude that $d_B(X)=d_H(X)$, proving the contrapositive. For the backwards implication, we suppose that there does exist some $q \in Q \setminus F$ such that for cofinitely many $m \in \N$ there exists $n_m \in \N$ such that $\frac{|C_f(\calA)|_m}{k^m}$ is strictly less than $\frac{|C_q(\calA)|_{n_m}}{k^{n_m}}$ for each $f \in F$. Then the sequence $(n_m)_{m \in \N}$ witnesses that for all $f \in F$ we have $\limsup_{n \to \infty}\frac{|C_f(\calA)|_n}{k^n} < \limsup_{n \to \infty}\frac{|C_q(\calA)|_n}{k^n}$. Taking the logarithm of each side, we conclude $h(C_f(\calA))<h(C_q(\calA))$ for all $f \in F$. Hence by Theorem \ref{hausdorff_dimension_cycle_language_entropy}, we get that $d_H(X) < d_B(X)$.
\end{proof} \end{cor} \section{Hausdorff measure}\label{measure} As mentioned in Section \ref{Hdim}, the Hausdorff dimension of a fractal $X$ is defined by considering the $s$-dimensional Hausdorff \textit{measure} $\mu_H^s(X)$ for different values of $s$, and in particular, $\mu_H^s(X)$ is $\infty$ for $s < d_H(X)$ and $0$ for $s > d_H(X)$. In this section, we will give methods for determining the $d_H(X)$-dimensional Hausdorff measure of various types of $k$-automatic fractal $X$ and make some observations that result from these methods. Note that this measure may be zero, a positive real number, or infinite. We begin by leveraging former work of Merzenich and Staiger. The following comprises Lemma 15 and Procedure 1 of \cite{MS94}: \begin{fact} \label{hausdorff_measure_sc_closed} Let $\mathcal{A}$ be a strongly connected deterministic B\"uchi automaton such that $X = V_k(\mathcal{A})$ is closed. Then there is exactly one vector $\vec{u}$ such that $\vec{u}$ is an eigenvector corresponding to an eigenvalue of maximum magnitude of $\mathbf{M}(\mathcal{A}, 0)$ and such that $\vec{u}$ contains nonnegative entries, the maximum of which is $1$. The Hausdorff measure $\mu_H^{d_H(X)}(X)$ is the entry in $\vec{u}$ corresponding to the start state of $\mathcal{A}$. \end{fact} Note that in particular, this implies $0 \leq \mu_H^{d_H(X)}(X) \leq 1$ in this case. The next step in this analysis is to extend this work to the case of a general strongly connected automaton. In particular, we might rightfully suspect that the Hausdorff dimension and measure of a set recognized by a strongly connected B\"uchi automaton are the same as those of the set's closure. It is most convenient to show this first for the dimensions and then for the measures. \begin{lem} \label{strongly_connected_hausdorff_closure} If $\calA$ is a strongly connected B\"uchi automaton, then $d_H(V_k(\calA)) = d_H(\overline{V_k(\calA)})$. 
\end{lem} \begin{proof} Note that removing accept states from $\calA$ cannot increase the resulting Hausdorff dimension, and $V_k(\calA)$ is the finite union, over the accept states, of the sets recognized when only that one accept state is kept; since the Hausdorff dimension of a finite union is the maximum of the dimensions, it suffices to consider the case where $\calA$ has only one accept state. Similarly, by Lemma \ref{strongly_connected_start_state}, it suffices to assume $\calA$ has only one start state, which is also its accept state. In this case, $\calA$ satisfies the conditions of Corollary \ref{dimension_equiv_cycle}; so $d_H(V_k(\calA)) = d_B(V_k(\calA))$. This is then equal to $d_B(\overline{V_k(\calA)})$, which is in turn equal to $d_H(\overline{V_k(\calA)})$ by Theorem \ref{closed_equality}, because $\overline{V_k(\calA)}$ is a closed $k$-automatic set. \end{proof} We show the analogous result for Hausdorff measures by first noting that when such an $X$ is embedded in its closure, the set-difference has lower dimension and hence is null in the higher-dimensional measure. \begin{lem} \label{hausdorff_dimension_closure_minus_sc} Let $\mathcal{A}$ be a strongly connected B\"uchi automaton, and let $X = V_k(\mathcal{A})$. Then $d_H(\overline{X} \setminus X) < d_H(X)$ assuming the latter is positive. \end{lem} \begin{proof} First, we let $\overline{L}$ be the $\omega$-language accepted by $\overline{\calA}$. Note that $\nu_k(\overline{L}) = \overline{X}$, so an element of $\overline{X} \setminus X$ must correspond to an infinite string on which $\mathcal{A}$ passes through finitely many accept states and thereafter passes exclusively through non-accepting states. Let $L'$ be the language of strings for which every infinite run is of this form (and for which there is at least one such infinite run). This correspondence is not exact; it is possible that if an element of $\overline{X} \setminus X$ is $k$-rational, one of its representations is in $L'$ while the other is not. This will not affect an analysis of Hausdorff dimension, because (nonzero) Hausdorff dimension does not change based on the membership of a countable set.
Our goal is then to show that $d_H(\nu_k(L')) < d_H(X)$. Let $F^c = Q \setminus F$ be the set of non-accepting states of $\mathcal{A}$, and let $L_q$ for $q \in F^c$ be the set of infinite strings for which every infinite run starting at state $q$ only passes through non-accepting states in $\mathcal{A}$ (and for which there is at least one such infinite run). Since every string in $L'$ has a tail in $L_q$ for some $q$, we know $\nu_k(L')$ is a countable union of scaled copies of $\nu_k(L_q)$. So it suffices to show that $d_H(\nu_k(L_q)) < d_H(\overline{X})$ for all $q \in F^c$; recall that $d_H(X) = d_H(\overline{X})$ by Lemma \ref{strongly_connected_hausdorff_closure}. We will show a stronger statement, that $d_H(\overline{\nu_k(L_q)}) < d_H(\overline{X})$. By Theorem \ref{closed_equality}, it suffices to show the corresponding entropy statement, that $h(L_q^{pre}) < h(\overline{L}^{pre})$. Let $\log \alpha = h(L_q^{pre})$. By Lemma \ref{language_entropy_prefix_limit}, $\log \alpha = \lim_{n \to \infty} \frac{\log |L_q^{pre}|_n}{n}$. For every non-accepting state $q'$ of $\mathcal{A}$, there exists a string on which $\mathcal{A}$ cycles from state $q'$ to itself while passing through an accept state. Choose such a cycle for each $q' \in F^c$, and let $m$ be the least common multiple of their lengths; by repeating the cycles if necessary, we may assume they all have the same length $m$. Let $M$ be the $\omega$-language defined as follows: every string in $M$ is made up of blocks $m$ characters long, with each block a ``normal block'' or a ``cycle block.'' The normal blocks, taken together (that is, with the cycle blocks removed), must form a string in $L_q$. Each cycle block is one of the cycles of length $m$ chosen above, namely the one corresponding to the state $\mathcal{A}$ is in at that point of a run witnessing that the normal blocks form a string in $L_q$.
Note that when every string in $M$ is prefixed by a constant string that corresponds to a path from the start state of $\mathcal{A}$ to state $q$ (which does not affect entropy), the result is a subset of $\overline{L}$. So $h(M^{pre}) \leq h(\overline{L}^{pre})$, and it suffices to show $h(L_q^{pre}) < h(M^{pre})$. Now for every string in $L_q^{pre}$, choose a run that witnesses this. Let $t$ be a positive integer, and let $0 \leq r \leq t$. For any string in $L_q^{pre}$ of length $rm$, we may apply the chosen run and insert $(t-r)$ cycle blocks at any choice of block boundaries in order to produce a string in $M^{pre}$ of length $tm$. Therefore: $$|M^{pre}|_{tm} \geq \sum_{r=0}^t {t \choose r} |L_q^{pre}|_{rm}.$$ Let $1 < \beta < \alpha$. One can check that $|L_q^{pre}|_n > \beta^n$ for all sufficiently large $n$. So for sufficiently large $t$: $$|M^{pre}|_{tm} > \sum_{r=0}^t {t \choose r} \beta^{rm} = (\beta^m + 1)^t.$$ So: $$h(M^{pre}) = \limsup_{n \to \infty} \frac{\log |M^{pre}|_n}{n}$$ $$\geq \limsup_{t \to \infty} \frac{\log |M^{pre}|_{tm}}{tm}$$ $$\geq \limsup_{t \to \infty} \frac{\log \left((\beta^m + 1)^t\right)}{tm}$$ $$= \limsup_{t \to \infty} \frac{t \log \left(\beta^m + 1\right)}{tm}$$ $$= \frac{\log \left(\beta^m + 1\right)}{m}.$$ So for all $1 < \beta < \alpha$, we have $h(M^{pre}) \geq \frac{\log \left(\beta^m + 1\right)}{m}$. Thus: $$h(M^{pre}) \geq \lim_{\beta \nearrow \alpha} \frac{\log \left(\beta^m + 1\right)}{m} = \frac{\log \left(\alpha^m + 1\right)}{m} > \frac{\log \left(\alpha^m\right)}{m} = \log \alpha = h(L_q^{pre}).$$ This concludes the proof. \end{proof} \begin{cor} \label{hausdorff_measure_closure} Let $\mathcal{A}$ be a strongly connected B\"uchi automaton, and let $X = V_k(\mathcal{A})$ have Hausdorff dimension $d_H(X) = \alpha$. Then $\mu_H^{\alpha}(X) = \mu_H^{\alpha}(\overline{X})$. \end{cor} \begin{proof} We examine two cases.
If $\alpha$ is positive, then the previous lemma gives us $d_H(\overline{X} \setminus X) < \alpha$, hence $\mu_H^\alpha(\overline{X} \setminus X) = 0$. The result then follows from additivity of Hausdorff measure. Assume on the other hand that $\alpha = 0$; then $\mu_H^\alpha$ is the counting measure. If $X$ is infinite, then so is $\overline{X}$, hence the counting measure agrees for both. If $X$ is finite, then $\overline{X} = X$, as any finite set in a metric space is closed. So in either case $\mu_H^0(X) = \mu_H^0(\overline{X})$. \end{proof} Our goal is now to extend this result and compute the Hausdorff measure of any $k$-automatic fractal. We will do this by making use of a class of B\"uchi automata called \textit{unambiguous} B\"uchi automata: \begin{defn} \label{unambiguous_buchi_automata} Let $\mathcal{A}$ be a B\"uchi automaton. We say that $\mathcal{A}$ is \textit{unambiguous} if for every infinite string $w$ accepted by $\mathcal{A}$, there is exactly one accepting run of $\mathcal{A}$ on $w$. \end{defn} Note that any deterministic B\"uchi automaton is unambiguous. But in the case of B\"uchi automata, we know that nondeterminism is sometimes necessary in order to achieve full computational capability. Fortunately, a result of \cite{CM03} tells us that this capability can still be fully realized in the unambiguous case: \begin{fact} \label{unambiguous_equivalence} Let $L$ be a regular $\omega$-language. Then there is an unambiguous B\"uchi automaton $\mathcal{A}$ with $L(\mathcal{A}) = L$. Moreover, given any B\"uchi automaton for $L$, we can effectively convert it to an unambiguous automaton. \end{fact} We will now extend Corollary \ref{hausdorff_measure_closure}, giving us a method to compute the Hausdorff measure of any $k$-automatic fractal. Using the previous fact, we may assume that the B\"uchi automaton defining the fractal is unambiguous. 
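The dimension and measure computations above are effective, and it may help to see them carried out numerically. The following sketch is illustrative only: the two automata used (a base-2 automaton forbidding consecutive 1s, and the base-3 middle-thirds Cantor automaton) are standard examples, not taken from this paper, and for the measure step we assume, consistent with the Perron--Frobenius framework behind Fact \ref{hausdorff_measure_sc_closed}, that one rescales the transition-count matrix by $k^{-d_H(X)}$ so that its top eigenvalue is $1$ and then normalizes the Perron eigenvector to have maximum entry $1$.

```python
# Numerical sketch of d_H(X) = h / log k for a strongly connected automaton,
# where the entropy h of the cycle language is the log of the spectral radius
# of the transition-count matrix (counts[p][q] = number of digits moving the
# automaton from state p to state q).
import numpy as np

def dimension(counts: np.ndarray, k: int) -> float:
    """log(spectral radius of the transition-count matrix) / log k."""
    rho = max(abs(np.linalg.eigvals(counts)))
    return float(np.log(rho) / np.log(k))

# Base k = 2, expansions avoiding "11": states s0, s1; from s0 both digits are
# allowed, from s1 only digit 0.  Spectral radius is the golden ratio.
golden = dimension(np.array([[1, 1], [1, 0]]), k=2)
print(golden)  # log(phi)/log(2), approximately 0.6942

# Base k = 3, middle-thirds Cantor set: one state, self-loops on digits 0 and 2.
cantor_counts = np.array([[2]])
d = dimension(cantor_counts, k=3)
print(d)  # log(2)/log(3), approximately 0.6309

# Measure step (assumed normalization, see the lead-in): rescale the counts by
# k**(-d) so the top eigenvalue becomes 1, then read off the Perron eigenvector
# normalized to maximum entry 1; its start-state entry is the measure.
M = cantor_counts * 3.0 ** (-d)
w, v = np.linalg.eig(M)
u = np.real(v[:, np.argmax(np.real(w))])
u = u / u.max()
print(u[0])  # 1.0: the Cantor set has unit Hausdorff measure in its dimension
```

The value $1$ recovered in the last step agrees with the classical fact that the middle-thirds Cantor set has Hausdorff measure exactly $1$ in its own dimension $\log 2/\log 3$.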
Next, it is useful for us to define functions associating each infinite string in an $\omega$-language $L$ with its \textit{key state} and \textit{key prefix}: \begin{defn} Let $\mathcal{A}$ be an unambiguous B\"uchi automaton with set of states $Q$ and recognizing an $\omega$-language $L$. The function $k_\mathcal{A} : L \to Q$ maps a string $w$ to the unique \textit{key state} $q$ such that when $\mathcal{A}$ performs its accepting run on $w$: \begin{itemize} \item the run passes through state $q$; \item after first passing through state $q$, the run never leaves its strongly connected component; \item before first passing through state $q$, the run never enters its strongly connected component. \end{itemize} The function $p_\mathcal{A} : L \to \Sigma^*$ maps $w$ to its \textit{key prefix}: the prefix of $w$ on which the accepting run moves from the start state of $\mathcal{A}$ to the first occurrence of $k_\mathcal{A}(w)$. \end{defn} Note that when a run leaves a strongly connected component, it can never return to it, or else the states in between would also be a part of the same component, a contradiction. So $k_\mathcal{A}$ and $p_\mathcal{A}$ are well-defined functions. \begin{lem} \label{hausdorff_measure_one_key_state} Let $\mathcal{A}$ be an unambiguous B\"uchi automaton with set of states $Q$ and recognizing an $\omega$-language $L$. Let $Q' \subseteq Q$ be the set of states whose strongly connected component contains an accept state. For each $q \in Q'$, let $\mathcal{A}_q$ be the automaton created by moving the start state of $\mathcal{A}$ to $q$ and removing all transitions out of its strongly connected component, and let $L_q$ be the $\omega$-language it accepts. Let $M_q$ be the $\omega$-language containing all strings $w \in L$ with $k_\mathcal{A}(w) = q$. Then $d_H(\nu_k(M_q)) = d_H(\nu_k(L_q))$, assuming $M_q \neq \varnothing$.
Moreover, let $P_q = \{p_\mathcal{A}(w) : w \in M_q\}$; then for any $\alpha$: $$\mu_H^{\alpha}(\nu_k(M_q)) = \sum_{u \in P_q} \frac{\mu_H^{\alpha}(\nu_k(L_q))}{k^{\alpha|u|}}$$ (here $|u|$ is the length of $u$). \end{lem} \begin{proof} Let $M_{q,u}$ be the set of $w \in L$ with $k_\mathcal{A}(w) = q$ and $p_\mathcal{A}(w) = u$. Such strings decompose as $w = uv$, where $v$ is a string that, starting from state $q$, passes through accept states in its strongly connected component infinitely often without leaving the component. In other words, $M_{q,u} = u L_q$. It follows that $\nu_k(M_{q,u})$ is an affine copy of $\nu_k(L_q)$, scaled down by a factor of $k^{-|u|}$. By properties of Hausdorff measure, $\mu_H^{\alpha}(\nu_k(M_{q,u})) = \frac{\mu_H^{\alpha}(\nu_k(L_q))}{k^{\alpha|u|}}$. Note then that $M_q = \bigcup_{u \in P_q} M_{q,u}$, because $M_{q,u}$ is empty for $u \notin P_q$. Since $P_q \subseteq \Sigma^*$ is a countable set and the sets $M_{q,u}$ are pairwise disjoint (the key prefix of a string is unique), we may apply countable additivity to get the desired formula. \end{proof} This process may then be applied to every key state to produce the following result: \begin{thm} \label{hausdorff_measure_all_key_states} Let $\mathcal{A}$, $Q$, $Q'$, $L$, $L_q$, $M_q$ be as in the previous lemma. Then: \begin{enumerate}[(i)] \item $d_H(\nu_k(L)) = \max_{q \in Q'} d_H(\nu_k(L_q))$, \item with $\alpha = d_H(\nu_k(L))$, $\mu_H^{\alpha}(\nu_k(L)) = \sum_{q \in Q'} \mu_H^{\alpha}(\nu_k(M_q))$. \end{enumerate} \end{thm} \begin{proof} For (i), we need only note that from the previous lemma $d_H(\nu_k(L_q)) = d_H(\nu_k(M_q))$ and that $L = \bigcup_{q \in Q'} M_q$. Then (ii) also follows from $L = \bigcup_{q \in Q'} M_q$ by countable additivity, since the sets $M_q$ are pairwise disjoint (the key state of a string is unique). \end{proof} \section{Model-theoretic consequences}\label{models} For the basics of first-order logic and model theory, see \cite{M02}, from which we also take our notation and conventions.
We recall the following correspondence between subsets of $\R$ recognized by B\"uchi automata and definable subsets of the first-order structure $\calR_k=(\R, <, +, \Z, X_k(x,u,d))$ where the ternary predicate $X_k(x,u,d)$ holds precisely if $u$ is an integer power of $k$, and in the $k$-ary expansion of $x$ the digit specified by $u$ is $d$, i.e. $d$ is the coefficient of $u$ in the expansion. In particular, note that $u \in k^{\Z}$ and $d \in \{0,\ldots ,k-1\}$ in any tuple $(x,u,d)$ such that $X_k(x,u,d)$ holds. \begin{thm}[\cite{BRW98}] For each $n \in \N$, any $A \subseteq \R^n$ is $k$-automatic if and only if $A$ is definable in $\calR_k$. \end{thm} In conjunction with results from \cite{BG21}, we obtain the following consequences for definable subsets of the structure $\calR_k$. Below, by ``Cantor set'' we mean a nonempty compact set that has no isolated points and has no interior. \begin{lem}\label{loglem} Suppose that $\calA$ is a trim B\"uchi automaton with $m$ states on alphabet $[k]$ that has one accept state, which is also the start state. Let $L$ be the language it accepts as a B\"uchi automaton. Suppose that $v \in [k]^n$ is not the prefix of any word in the language $L$. Then every interval of the form $(\nu_k(w0^{\omega}),\nu_k(w(k-1)^{\omega})) \subseteq [0,1]$, with $w \in [k]^{\ell}$, $\ell \in \N$, has a subinterval of size at least $k^{-(\ell+m+n)}$ disjoint from $\nu_k(L)$. Moreover, the box-counting dimension of $\nu_k(L)$ is at most $\log_{k^{n+m}}(k^{n+m}-1)<1$. \begin{proof} For each $q \in Q$ we let $s_q$ be a minimal path from $q$ back to the start state; a minimal path visits each state at most once, so $s_q$ has length at most $m$. For $w \in [k]^{\ell}$ such that some run of $\calA$ on $w$ stops at state $q$, the interval $\tilde{I} =(\nu_k(ws_qv0^{\omega}),\nu_k(ws_qv(k-1)^{\omega}))$ is disjoint from $\nu_k(L)$, since $v$ is not a prefix of any word in $L$. Observe that the length of $\tilde{I}$ is at least $k^{-(\ell+m+n)}$.
For the ``moreover,'' observe that by Lemma \ref{box_counting_entropy} and Lemma \ref{automaton_closure}, we have the following: $$d_B(\nu_k(L)) = d_B(\overline{\nu_k(L)}) = \frac{1}{\log(k)} h(L^{pre}) = \lim_{d \to \infty} \frac{\log(|L^{pre}|_{d})}{d \log(k)} = \lim_{d \to \infty} \frac{\log(|L^{pre}|_{d(n+m)})}{d(n+m) \log(k)}. $$ We show by induction on $d$ that $|L^{pre}|_{d(n+m)} \leq (k^{n+m}-1)^d$. For the base case, note $|L^{pre}|_{n+m} \leq k^m(k^n-1)<k^{n+m}-1$. For $d>1$, we know $|L^{pre}|_{d(n+m)} \leq |L^{pre}|_{(d-1)(n+m)}(k^{n+m}-1)$ because for each $w \in L^{pre}$ of length $(d-1)(n+m)$ there exists $q' \in Q$ such that $ws_{q'}v \notin L^{pre}$, so at least one of the $k^{n+m}$ possible extensions of $w$ of length $d(n+m)$ fails to lie in $L^{pre}$. By induction, we get the following: $$ \lim_{d \to \infty} \frac{\log(|L^{pre}|_{d(n+m)})}{d(n+m)\log(k)} \leq \lim_{d \to \infty} \frac{\log((k^{n+m}-1)^d)}{d(n+m)\log(k)},$$ and hence $d_B(\nu_k(L)) \leq \log_{k^{n+m}}(k^{n+m}-1)$. \end{proof} \end{lem} Recall that $\mu_H^1$ is the Hausdorff 1-measure, which equals Lebesgue measure on subsets of $\R$. \begin{lem}\label{measuremin} Suppose that $X\subseteq [0,1]$ is a $k$-automatic set with Hausdorff dimension 1, and $X$ is recognized by B\"uchi automaton $\calA$. If $\mu_H^1(X \cap I)>0$ for every interval $I \subseteq [0,1]$ with $k$-rational endpoints, then there is $\epsilon>0$ such that for each such $I$ we have $\mu_H^1(X \cap I) \geq \epsilon \cdot \operatorname{diam}(I)$. \begin{proof} Let $m$ be the number of states in $\calA$. By hypothesis the Hausdorff dimension of $X \cap I$ is 1 for every interval $I \subseteq [0,1]$ with $k$-rational endpoints, and $\mu_H^1(X \cap I)$ is positive. This means that for each prefix $w$ of $L(\calA)$, there exists a strongly connected component $\calC_q(\calA)$ that contains an accept state $q$ and for which the image under $V_k$ of the set of words with prefix $w$ that are accepted via a run contained in $\calC_q(\calA)$ has positive Hausdorff 1-measure.
In particular, there is a word $v$ with length at most $m+|w|$ such that $v$ extends $w$ and $\mu_H^1(\nu_k(vC_q(\calA)^{\omega}))>0$, where $C_q(\calA)$ is the cycle language for $\calC_q(\calA)$, assuming without loss of generality that a run of $\calA$ on $v$ terminates at state $q$. We set $a$ to be the minimum of $\mu_H^1(\nu_k(C_q(\calA)^{\omega}))$ where $q$ ranges over all states lying in a strongly connected component of $\calA$ that contains an accept state and for which this measure is positive. We let $\epsilon = a \cdot k^{-m}$, and conclude that for each such interval $I$, we have $\mu_H^1(X \cap I) \geq \epsilon \cdot \operatorname{diam}(I)$, as desired. \end{proof} \end{lem} \begin{thm} Suppose $X \subseteq [0,1]^n$ is $k$-automatic. There exists a set $A \subseteq [0,1]$ definable in $(\R, <,+, X)$ such that $d_B(A) \neq d_H(A)$ if and only if either a unary Cantor set is definable in $(\R,<,+,X)$ or a set that is both dense and codense on an interval is definable in $(\R,<,+,X)$. \begin{proof} $(\impliedby)$ Suppose that $C$, a unary Cantor set, is definable in the structure $(\R,<,+,X)$. Let $A$ be the set of endpoints of the complementary intervals (e.g. the endpoints of the middle thirds that are removed in constructing the ternary Cantor set), which form a countable set whose closure is the entire Cantor set. This follows since each element of a Cantor set can be approximated by endpoints of the complement within $[0,1]$. We conclude that $d_H(A) = 0$ and $d_B(A) = d_B(\overline{A}) \neq 0$ by Lemma 6.7 in \cite{BG21}. Now suppose that $D$ is a set that is dense and codense in an interval $J \subseteq [0,1]$, and is definable in $(\R,<,+,X)$, and hence in $\calR_k$. Note that if there exists any interval $I \subseteq J$ such that $d_H(D \cap I) <1$, then because $D$ is dense in $J$ and hence also in $I$, we conclude that $d_B(D\cap I) = d_B(\overline{D \cap I})=1 > d_H(D\cap I)$, and we are done.
We are similarly done if $d_H((\overline{D} \setminus D) \cap I)<1$ for some interval $I \subseteq J$, since $D$ is codense in $J$ and hence $\overline{D} \setminus D$ is dense in $J$. The only case remaining is the one in which for every interval $I \subseteq J$, we know $d_H(D\cap I)=1$ and $d_H((\overline{D} \setminus D)\cap I)=1$. If this is the case, then $\calA$, the automaton that recognizes $D$, and $\calA_c$, the automaton that recognizes $\overline{D}\setminus D$, each have a strongly connected component containing an accept state whose cycle language has the same Hausdorff dimension as the whole automaton, namely Hausdorff dimension 1. Suppose that $S_q(\calA)$ is the strongly connected automaton that recognizes the cycle language of state $q$ in $\calA$ that has the same Hausdorff dimension as $D$. Similarly suppose that $S_{q'}(\calA_c)$ is the strongly connected automaton that recognizes the cycle language of state $q'$ that has the same Hausdorff dimension as $\overline{D} \setminus D$. If the image of $S_q(\calA)$ under $V_k$ is nowhere dense in $[0,1]$, then $S_q(\calA)$ satisfies the hypotheses of Lemma \ref{loglem}, since being nowhere dense necessitates that at least one finite word is not a prefix of any word in the language. Similar logic holds for $S_{q'}(\calA_c)$. So by Lemma \ref{loglem}, they would have box-counting dimension less than one, and hence their closures would as well. Since Hausdorff dimension is bounded above by box-counting dimension, this would contradict the assumption that $S_q(\calA)$ and $S_{q'}(\calA_c)$ witness Hausdorff dimension one for $\calA$ and $\calA_c$, respectively. Therefore $S_q(\calA)$ and $S_{q'}(\calA_c)$ must both recognize somewhere dense sets. Hence the closure of each must contain an interval and thus has positive Lebesgue measure. By Corollary \ref{hausdorff_measure_closure} we know that both $S_q(\calA)$ and $S_{q'}(\calA_c)$ have the same Hausdorff 1-measure as their closures (as B\"uchi automata).
So by Lemma \ref{hausdorff_measure_one_key_state}, both $D$ and $\overline{D}\setminus D$ have positive Lebesgue measure, which equals Hausdorff 1-measure, on every subinterval $I \subseteq J$. In particular, both $D \cap I$ and $(\overline{D}\setminus D) \cap I$ have positive Lebesgue measure for every $I \subseteq J$. Fixing one such $I$, we may assume without loss of generality that $\overline{D}$ is an interval. Applying Lemma \ref{measuremin}, we obtain fixed $\epsilon, \epsilon' > 0$ such that $\mu_H^1(D\cap I) \geq \epsilon \cdot \operatorname{diam}(I)$ and $\mu_H^1((\overline{D}\setminus D) \cap I) \geq \epsilon' \cdot \operatorname{diam}(I)$ for every subinterval $I$ with $k$-rational endpoints. Since $(\overline{D}\setminus D) \cap I$ and $D\cap I$ are disjoint subsets of $I$, we conclude $\epsilon' \leq 1-\epsilon$, and also get the following: $$\epsilon \cdot \operatorname{diam}(I) \leq \mu_H^1(D \cap I) \leq (1 - \epsilon') \cdot \operatorname{diam}(I).$$ By the Lebesgue density theorem, almost every point of $D$ is a density point of $D$, so there are intervals $I$ with $k$-rational endpoints of arbitrarily small diameter for which $$\frac{\mu_H^1(D \cap I)}{\operatorname{diam}(I)}$$ is arbitrarily close to 1. Yet we observed that this ratio is bounded above by $1 - \epsilon'$, so we must have $\epsilon' = 0$. This contradicts the fact that $\epsilon' > 0$, and we are done. $(\implies)$ Suppose that there is some set $A \subseteq [0,1]$ definable in $(\R,<,+,X)$ such that $d_B(A) \neq d_H(A)$. Then it must be the case that $A \neq \overline{A}$. Let $\calA$ be a trim B\"uchi automaton recognizing $A$. Note that $A$ has no interior: otherwise $d_H(A) = d_B(A) = 1$. Suppose first that $A$ is nowhere dense.
Then, we may pass from $A$ to $\overline{A}$, and since $A$ is nowhere dense we know that $\overline{A}$ is nowhere dense as well. We know that $d_B(\overline{A})=d_B(A)>0$: otherwise $d_B(A)=0$, and since $0 \leq d_H(A) \leq d_B(A)$, the two dimensions would have to agree on $A$. If $d_B(\overline{A}) = 1$, then $d_H(\overline{A})=1$ by Theorem \ref{closed_equality}. This implies that there is a state $q$ in $\overline{\calA}$ such that $d_H(V_k(S_q(\overline{\calA})))=1$ by Lemma \ref{strongly_connected_hausdorff_closure}. In order for $\overline{A}$ to be nowhere dense, it must be the case that each strongly connected component of $\overline{\calA}$ omits at least one string of finite length as a prefix. Otherwise, every finite-length string is the prefix of some string in $C_q(\overline{\calA})$. Since there exists a string $w \in [k]^*$ such that $\nu_k(wC_q(\overline{\calA})^{\omega}) \subseteq \overline{A}$, this would imply that $\overline{A}$ is dense in (and in fact, since $\overline{A}$ is closed, contains) the interval $\nu_k(w[k]^{\omega})$, contradicting that $\overline{A}$ is nowhere dense. So for each state $q$ in $\overline{\calA}$, let $v_q \in [k]^*$ be a word omitted from the prefixes of $C_q(\overline{\calA})$ whose length is minimal among such finite words. By Lemma \ref{loglem}, we conclude that $V_k(S_q(\overline{\calA}))$ has box-counting, and hence Hausdorff, dimension less than $1$ for all states $q$ in $\overline{\calA}$. So by Lemma \ref{hdim}, we know that $d_H(\overline{A})$ is the maximum of $d_H(V_k(S_q(\overline{\calA})))$ for all states $q$ in $\overline{\calA}$, and we conclude that $1>d_H(\overline{A})>0$. By \cite{BG21}, there is a definable unary Cantor set in $(\R,<,+,X)$. In the case that $A$ is somewhere dense, say on interval $I \subseteq [0,1]$, suppose it is not also codense in $I$. Then some subinterval of $I$ is contained entirely in $A$, so $d_H(A) =1=d_B(A)$, a contradiction. Hence $A$ is both dense and codense in $I$, and $A$ itself is the required definable set. \end{proof} \end{thm} \begin{thebibliography}{10} \bibitem{AB11} B. Adamczewski and J.
Bell, \newblock An analogue of Cobham's theorem for fractals. \newblock {\em Trans. Amer. Math. Soc.}, 363(8):4421--4442, 2011. \bibitem{BG21} A.~Block Gorman, \newblock Pairs and predicates in expansions of o-minimal structures. \newblock Ph.D. thesis, University of Illinois at Urbana-Champaign, 2021. \newblock \url{http://hdl.handle.net/2142/112983}. \bibitem{BRW98} B.~Boigelot, S.~Rassart, and P.~Wolper, \newblock On the expressiveness of real and integer arithmetic automata (extended abstract). \newblock {\em Proceedings of the 25th International Colloquium on Automata, Languages and Programming} (London, UK), ICALP '98, Springer-Verlag, pp.~152--163, 1998. \bibitem{B62} J.~R. B\"{u}chi, \newblock On a decision method in restricted second order arithmetic. \newblock {\em Proc. International Congress on Logic, Method, and Philosophy of Science}, Stanford: Stanford University Press, pp.~1--12, 1962. \bibitem{CM03} O.~Carton and M.~Michel, \newblock Unambiguous B\"uchi automata. \newblock {\em Theoretical Computer Science}, 297:37--81, 2003. \bibitem{CLR15} {\'E}.~Charlier, J.~Leroy, and M.~Rigo, \newblock An analogue of {C}obham's theorem for graph directed iterated function systems. \newblock {\em Adv. Math.}, 280:86--120, 2015. \bibitem{CM58} N.~Chomsky and G.~Miller, \newblock Finite state languages. \newblock {\em Information and Control}, 1(2):91--112, 1958. \bibitem{E08} G.~Edgar, \newblock Measure, Topology, and Fractal Geometry. \newblock Springer, New York, 2008. \bibitem{HPS92} G.~Hansel, D.~Perrin, and I.~Simon, \newblock Compression and entropy. \newblock {\em STACS}, 1992. \bibitem{HW19} P.~Hieronymi and E.~Walsberg, \newblock Fractals and the monadic second order theory of one successor. \newblock {\em arXiv Mathematics e-prints}, 2019. \newblock \url{https://arxiv.org/pdf/1901.03273.pdf}. \bibitem{F03} K.~Falconer, \newblock Fractal Geometry: Mathematical Foundations and Applications. \newblock John Wiley \& Sons, 2003.
\bibitem{M02} D.~Marker, \newblock \emph{Model Theory: An Introduction}, \newblock Graduate Texts in Mathematics, Springer, New York, 2002. \bibitem{M66} R.~McNaughton, \newblock Testing and generating infinite sequences by a finite automaton. \newblock {\em Information and Control}, 9:521--530, 1966. \bibitem{MS94} W.~Merzenich and L.~Staiger, \newblock Fractals, dimension, and formal languages. \newblock {\em Informatique th\'eorique et applications}, 28(3--4):361--386, 1994. \bibitem{S89} L.~Staiger, \newblock Combinatorial properties of the Hausdorff dimension. \newblock {\em Journal of Statistical Planning and Inference}, 23:95--100, 1989. \bibitem{S85} L.~Staiger, \newblock The entropy of finite $\omega$-languages. \newblock {\em Problems of Control and Information Theory}, 14(5):383--392, 1985. \bibitem{MW88} R.~D. Mauldin and S.~C. Williams, \newblock Hausdorff dimension in graph directed constructions. \newblock {\em Trans. Amer. Math. Soc.}, 309(2):811--829, 1988. \bibitem{PP04} D.~Perrin and J.-E. Pin, \newblock Infinite Words: Automata, Semigroups, Logic and Games. \newblock Elsevier, 2004. \bibitem{S48} C.~E. Shannon, \newblock A mathematical theory of communication. \newblock {\em Bell System Technical Journal}, 27(3):379--423, 1948. \end{thebibliography} \end{document}
2205.02634v1
http://arxiv.org/abs/2205.02634v1
Some results on the super domination number of a graph II
\documentclass[11pt]{article}\UseRawInputEncoding \usepackage{amssymb,amsfonts,amsmath,latexsym,epsf,tikz,url} \usepackage[usenames,dvipsnames]{pstricks} \usepackage{pstricks-add} \usepackage{epsfig} \usepackage{pst-grad} \usepackage{pst-plot} \usepackage[space]{grffile} \usepackage{etoolbox} \makeatletter \patchcmd\Gread@eps{\@inputcheck#1 }{\@inputcheck"#1"\relax}{}{} \makeatother \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{definition}[theorem]{Definition} \newtheorem{question}[theorem]{Question} \newtheorem{problem}[theorem]{Problem} \newcommand{\proof}{\noindent{\bf Proof.\ }} \newcommand{\qed}{\hfill $\square$\medskip} \textwidth 14.5cm \textheight 21.0cm \oddsidemargin 0.4cm \evensidemargin 0.4cm \voffset -1cm \begin{document} \title{Some results on the super domination number of a graph II} \author{ Nima Ghanbari } \date{\today} \maketitle \begin{center} Department of Informatics, University of Bergen, P.O. Box 7803, 5020 Bergen, Norway\\ {\tt [email protected]} \end{center} \begin{abstract} Let $G=(V,E)$ be a simple graph. A dominating set of $G$ is a subset $S\subseteq V$ such that every vertex not in $S$ is adjacent to at least one vertex in $S$. The cardinality of a smallest dominating set of $G$, denoted by $\gamma(G)$, is the domination number of $G$. A dominating set $S$ is called a super dominating set of $G$, if for every vertex $u\in \overline{S}=V-S$, there exists $v\in S$ such that $N(v)\cap \overline{S}=\{u\}$. The cardinality of a smallest super dominating set of $G$, denoted by $\gamma_{sp}(G)$, is the super domination number of $G$. In this paper, we obtain more results on the super domination number of graphs modified by an operation on vertices.
Also, we present some sharp bounds for the super domination number of chains and bouquets of pairwise disjoint connected graphs. \end{abstract} \noindent{\bf Keywords:} domination number, super dominating set, chain, bouquet \medskip \noindent{\bf AMS Subj.\ Class.:} 05C69, 05C76 \section{Introduction} Let $G = (V,E)$ be a graph with vertex set $V$ and edge set $E$. Throughout this paper, we consider graphs without loops or directed edges. For each vertex $v\in V$, the set $N(v)=\{u\in V | uv \in E\}$ refers to the open neighbourhood of $v$ and the set $N[v]=N(v)\cup \{v\}$ refers to the closed neighbourhood of $v$ in $G$. The degree of $v$, denoted by $\deg(v)$, is the cardinality of $N(v)$. A set $S\subseteq V$ is a dominating set if every vertex in $\overline{S}= V- S$ is adjacent to at least one vertex in $S$. The domination number $\gamma(G)$ is the minimum cardinality of a dominating set in $G$. There are various domination numbers in the literature. For a detailed treatment of domination theory, the reader is referred to \cite{domination}. \medskip The concept of super domination number was introduced by Lema\'nska et al. in 2015 \cite{Lemans}. A dominating set $S$ is called a super dominating set of $G$, if for every vertex $u\in \overline{S}$, there exists $v\in S$ such that $N(v)\cap \overline{S}=\{u\}$. The cardinality of a smallest super dominating set of $G$, denoted by $\gamma_{sp}(G)$, is the super domination number of $G$. We refer the reader to \cite{Alf,Dett,Nima,Kri,Kle,Zhu} for more details on super dominating sets of a graph. \medskip Let $G$ be a connected graph constructed from pairwise disjoint connected graphs $G_1,\ldots ,G_n$ as follows. Select a vertex of $G_1$, a vertex of $G_2$, and identify these two vertices. Then continue in this manner inductively. Note that the graph $G$ constructed in this way has a tree-like structure, the $G_i$'s being its building stones (see Figure \ref{Figure1}).
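The definitions above are easy to check exhaustively on small graphs. The following minimal brute-force sketch (our own illustration, not from the paper; all function names are ours) computes $\gamma_{sp}(G)$ directly from the definition of a super dominating set:

```python
from itertools import combinations

def neighbourhoods(vertices, edges):
    # open neighbourhoods N(v) of an undirected graph
    N = {v: set() for v in vertices}
    for a, b in edges:
        N[a].add(b)
        N[b].add(a)
    return N

def is_super_dominating(S, vertices, N):
    # S is super dominating iff every u outside S has a vertex v in S
    # whose neighbourhood meets the complement exactly in {u}
    S = set(S)
    comp = set(vertices) - S
    return all(any(N[v] & comp == {u} for v in S) for u in comp)

def super_domination_number(vertices, edges):
    N = neighbourhoods(vertices, edges)
    for k in range(1, len(vertices) + 1):
        if any(is_super_dominating(S, vertices, N)
               for S in combinations(vertices, k)):
            return k

# gamma_sp(P_4) = 2, matching gamma_sp(P_n) = ceil(n/2)
P4 = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])
print(super_domination_number(*P4))  # 2
```

The search is exponential and only meant as a sanity check on the small examples appearing in this paper.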
\begin{figure}[!h] \begin{center} \psscalebox{0.6 0.6} { \begin{pspicture}(0,-4.819607)(13.664668,2.90118) \pscircle[linecolor=black, linewidth=0.04, dimen=outer](5.0985146,1.0603933){1.6} \pscustom[linecolor=black, linewidth=0.04] { \newpath \moveto(11.898515,0.66039336) } \pscustom[linecolor=black, linewidth=0.04] { \newpath \moveto(11.898515,0.26039338) } \pscustom[linecolor=black, linewidth=0.04] { \newpath \moveto(12.698514,0.66039336) } \pscustom[linecolor=black, linewidth=0.04] { \newpath \moveto(10.298514,1.0603933) } \pscustom[linecolor=black, linewidth=0.04] { \newpath \moveto(11.098515,-0.9396066) } \pscustom[linecolor=black, linewidth=0.04] { \newpath \moveto(11.098515,-0.9396066) } \pscustom[linecolor=black, linewidth=0.04] { \newpath \moveto(11.898515,0.66039336) } \pscustom[linecolor=black, linewidth=0.04] { \newpath \moveto(11.898515,-0.9396066) } \pscustom[linecolor=black, linewidth=0.04] { \newpath \moveto(11.898515,-0.9396066) } \pscustom[linecolor=black, linewidth=0.04] { \newpath \moveto(12.698514,-0.9396066) } \pscustom[linecolor=black, linewidth=0.04] { \newpath \moveto(12.698514,0.26039338) } \pscustom[linecolor=black, linewidth=0.04] { \newpath \moveto(14.298514,0.66039336) \closepath} \psbezier[linecolor=black, linewidth=0.04](11.598515,1.0203934)(12.220886,1.467607)(12.593457,1.262929)(13.268515,1.0203933715820312)(13.943572,0.7778577)(12.308265,0.90039337)(12.224765,0.10039337)(12.141264,-0.69960666)(10.976142,0.5731798)(11.598515,1.0203934) \psbezier[linecolor=black, linewidth=0.04](4.8362556,-3.2521083)(4.063277,-2.2959895)(4.6714916,-1.9655427)(4.891483,-0.99004078729821)(5.111474,-0.014538889)(5.3979383,-0.84551746)(5.373531,-1.8452196)(5.349124,-2.8449216)(5.6092343,-4.208227)(4.8362556,-3.2521083) \psbezier[linecolor=black, 
linewidth=0.04](8.198514,-2.0396066)(6.8114076,-1.3924998)(6.844908,-0.93520766)(5.8785143,-1.6996066284179687)(4.9121203,-2.4640057)(5.6385145,-3.4996066)(6.3385143,-2.8396065)(7.0385146,-2.1796067)(9.585621,-2.6867135)(8.198514,-2.0396066) \pscircle[linecolor=black, linewidth=0.04, dimen=outer](7.5785146,-3.6396067){1.18} \psdots[linecolor=black, dotsize=0.2](11.418514,0.7403934) \psdots[linecolor=black, dotsize=0.2](9.618514,1.5003934) \psdots[linecolor=black, dotsize=0.2](6.6585145,0.7403934) \psdots[linecolor=black, dotsize=0.2](3.5185144,0.96039337) \psdots[linecolor=black, dotsize=0.2](5.1185145,-0.51960665) \psdots[linecolor=black, dotsize=0.2](5.3985143,-2.5796065) \psdots[linecolor=black, dotsize=0.2](7.458514,-2.4596066) \rput[bl](8.878514,0.42039338){$G_i$} \rput[bl](7.478514,-4.1196065){$G_j$} \psbezier[linecolor=black, linewidth=0.04](0.1985144,0.22039337)(0.93261385,0.89943534)(2.1385605,0.6900083)(3.0785143,0.9403933715820313)(4.0184684,1.1907784)(3.248657,0.442929)(2.2785144,0.20039338)(1.3083719,-0.042142253)(-0.53558505,-0.45864862)(0.1985144,0.22039337) \psbezier[linecolor=black, linewidth=0.04](2.885918,1.4892112)(1.7389486,2.4304078)(-0.48852357,3.5744174)(0.5524718,2.1502930326916756)(1.5934672,0.7261687)(1.5427756,1.2830372)(2.5062277,1.2429687)(3.46968,1.2029002)(4.0328875,0.5480146)(2.885918,1.4892112) \psellipse[linecolor=black, linewidth=0.04, dimen=outer](9.038514,0.7403934)(2.4,0.8) \psbezier[linecolor=black, linewidth=0.04](9.399693,1.883719)(9.770389,2.812473)(12.016343,2.7533927)(13.011008,2.856550531577144)(14.005673,2.9597082)(13.727474,2.4925284)(12.761896,2.2324166)(11.796317,1.9723049)(9.028996,0.9549648)(9.399693,1.883719) \pscircle[linecolor=black, linewidth=0.04, dimen=outer](9.898515,-3.3396065){1.2} \psellipse[linecolor=black, linewidth=0.04, dimen=outer](2.2985144,-1.1396066)(0.4,1.4) \psellipse[linecolor=black, linewidth=0.04, dimen=outer](2.4985144,-3.3396065)(1.8,0.8) \psdots[linecolor=black, 
dotsize=0.2](2.2985144,0.26039338) \psdots[linecolor=black, dotsize=0.2](2.2985144,-2.5396066) \psdots[linecolor=black, dotsize=0.2](8.698514,-3.3396065) \psdots[linecolor=black, dotsize=0.2](9.898515,-2.1396067) \psellipse[linecolor=black, linewidth=0.04, dimen=outer](10.298514,-1.5396066)(2.0,0.6) \end{pspicture} } \end{center} \caption{\label{Figure1} A graph with subgraph units $G_1,\ldots , G_n$.} \end{figure} We usually say that $G$ is obtained by point-attaching from $G_1,\ldots , G_n$ and that the $G_i$'s are the primary subgraphs of $G$. A particular case of this construction is the decomposition of a connected graph into blocks (see \cite{Deutsch}). We refer the reader to \cite{Alikhani1,Nima0,Moster} for more details and results on graphs obtained from primary subgraphs. \medskip In this paper, we continue the study of the super domination number of a graph. In Section 2, we mention some previous results, recall the definition of $G\odot v$, and find a sharp upper bound for its super domination number. In Section 3, we obtain some results on chains of graphs, a special case of graphs obtained by point-attaching from primary subgraphs. Finally, in Section 4, we find some sharp bounds on the super domination number of bouquets of graphs, another special case of graphs obtained by point-attaching. \section{Super domination number of $G\odot v$} $G\odot v$ is the graph obtained from $G$ by the removal of all edges between any pair of neighbours of $v$ \cite{Alikhani}. Some results on this operation can be found in \cite{Nima1}. In this section, we study the super domination number of $G\odot v$. First, we state some known results. \begin{theorem}\cite{Lemans}\label{thm-1} Let $G$ be a graph of order $n$ which is not an empty graph.
Then, $$1\leq \gamma(G) \leq \frac{n}{2} \leq \gamma_{sp}(G) \leq n-1.$$ \end{theorem} \begin{theorem}\cite{Lemans}\label{thm-2} \begin{itemize} \item[(i)] For a path graph $P_n$ with $n\geq 3$, $\gamma_{sp}(P_n)=\lceil \frac{n}{2} \rceil$. \item[(ii)] For a cycle graph $C_n$, \begin{displaymath} \gamma_{sp}(C_n)= \left\{ \begin{array}{ll} \lceil\frac{n}{2}\rceil & \textrm{if $n \equiv 0, 3 \pmod 4$, }\\ \\ \lceil\frac{n+1}{2}\rceil & \textrm{otherwise.} \end{array} \right. \end{displaymath} \item[(iii)] For the complete graph $K_n$, $\gamma_{sp}(K_n)=n-1$. \item[(iv)] For the complete bipartite graph $K_{n,m}$, $\gamma_{sp}(K_{n,m})=n+m-2$, where $\min\{n,m\}\geq 2$. \item[(v)] For the star graph $K_{1,n}$, $\gamma_{sp}(K_{1,n})=n$. \end{itemize} \end{theorem} \begin{theorem}\cite{Nima}\label{Firend-thm} For the friendship graph $F_n$, $\gamma_{sp}(F_n)=n+1$. \end{theorem} \begin{theorem}\cite{Nima}\label{G/v} Let $G=(V,E)$ be a graph and let $v\in V$ not be a pendant vertex. Then, $$ \gamma_{sp}(G/v)\leq \gamma_{sp} (G)+\lfloor \frac{\deg (v)}{2} \rfloor -1,$$ where $G/v$ is the graph obtained by deleting $v$ and putting a clique on the open neighbourhood of $v$. \end{theorem} Here we consider $G\odot v$. First suppose that $v$ is a pendant vertex. Then by the definition of $G\odot v$, we have $G\odot v=G$. So we have the following easy result: \begin{proposition}\label{Godotvpendant} Let $G=(V,E)$ be a graph and let $v\in V$ be a pendant vertex. Then, $$ \gamma_{sp}(G\odot v)= \gamma_{sp} (G).$$ \end{proposition} Hence, there is no reason to compute $\gamma_{sp}(G\odot v)$ when $v$ is a pendant vertex. Now we find a sharp upper bound for the super domination number of $G\odot v$ when $v$ is not a pendant vertex. \begin{theorem}\label{Godotv} Let $G=(V,E)$ be a graph and let $v\in V$ not be a pendant vertex.
Then, $$\gamma_{sp}(G\odot v)\leq \gamma_{sp} (G)+\lfloor \frac{\deg (v)}{2} \rfloor -1.$$ \end{theorem} \begin{proof} Suppose that $v\in V$ with $\deg (v)=n\geq2$ and $N(v)=\{v_1,v_2,\ldots,v_n\}$, and let $D$ be a super dominating set for $G$. We have the following cases: \begin{itemize} \item[(i)] $v\notin D$. So, there exists $v_r\in N(v)\cap D$ such that $N(v_r)\cap \overline{D} = \{v\}$, which means that all other neighbours of $v_r$ are in $D$ too. There is no vertex $v_p\in N(v)$ that dominates some $v_q\in N(v)\cap \overline{D}$ and satisfies the condition of a super dominating set, because in that case we would have $\{v_q,v\}\subseteq N(v_p)\cap \overline{D}$, which is a contradiction. So all vertices in $N(v)\cap \overline{D}$ are dominated by vertices which are not in $N(v)$. Now by removing all edges between any pair of neighbours of $v$, $D$ is a super dominating set for $G\odot v$ too. So, $\gamma_{sp}(G\odot v)\leq \gamma_{sp} (G)$. \item[(ii)] $v\in D$ and for some $1 \leq i \leq n$, there exists $v_i\in N(v)$ such that $N(v)\cap \overline{D} = \{v_i\}$. So, all other neighbours of $v$ should be in $D$. Now by removing all edges between any pair of neighbours of $v$, $D$ is a super dominating set for $G\odot v$ too. So, $\gamma_{sp}(G\odot v)\leq \gamma_{sp} (G)$. \item[(iii)] $v\in D$ and for every $1 \leq i \leq n$, there does not exist $v_i\in N(v)$ such that $N(v)\cap \overline{D} = \{v_i\}$. If $v_i\in N(v)$ is dominated by $v_i'$ such that $v_i'\notin N(v)$, then after removing all edges between any pair of neighbours of $v$, $v_i$ can still be dominated by $v_i'$ and $N(v_i')\cap \overline{D} = \{v_i\}$. So we keep all vertices in $D-N(v)$ in our dominating set. If $v_i\in N(v)$ is dominated by $v_j$ such that $v_j\in N(v)$ and $N(v_j)\cap \overline{D} = \{v_i\}$, then we simply add $v_i$ to our dominating set after removing all edges between any pair of neighbours of $v$.
There are at most $\lfloor \frac{n}{2} \rfloor$ vertices with this condition. Without loss of generality, suppose that $v_1$ dominates $v_2$, $v_3$ dominates $v_4$, $v_5$ dominates $v_6$ and so on. Since all vertices in $N(v)-\{v_2\}$ are in $D\cup\{v_4,v_6,\ldots\}$, the vertex $v_2$ is now dominated by $v$, and by our argument, $D\cup\{v_4,v_6,\ldots\}$ is a super dominating set for $G\odot v$. Hence, $\gamma_{sp}(G\odot v)\leq \gamma_{sp} (G)+\lfloor \frac{n}{2} \rfloor -1$. \end{itemize} Therefore we have the result. \qed \end{proof} \begin{remark} The upper bound in Theorem \ref{Godotv} is sharp. It suffices to consider the friendship graph $G=F_n$ and $v$ the vertex with $\deg(v)=2n$. By Theorem \ref{Firend-thm}, $\gamma_{sp} (G)=n+1$. One can easily check that $G\odot v=K_{1,2n}$, and then by Theorem \ref{thm-2}, $\gamma_{sp}(G\odot v)=2n$. Therefore $\gamma_{sp}(G\odot v)= \gamma_{sp} (G)+\lfloor \frac{\deg (v)}{2} \rfloor -1$. \end{remark} We end this section with an immediate consequence of Theorems \ref{G/v} and \ref{Godotv}. \begin{corollary} Let $G=(V,E)$ be a graph and let $v\in V$ not be a pendant vertex. Then, $$ \gamma _{sp}(G) \geq \frac{\gamma _{sp}(G\odot v)+\gamma _{sp}(G/v)}{2}-\lfloor \frac{\deg (v)}{2} \rfloor +1.$$ \end{corollary} \section{Super domination number of chain of graphs} In this section, we consider a special case of graphs obtained by point-attaching from primary subgraphs, called the chain of the graphs $G_1,\ldots , G_n$. Suppose that $x_i,y_i \in V(G_i)$. Let $C(G_1,...,G_n)$ be the chain of graphs $\{G_i\}_{i=1}^n$ with respect to the vertices $\{x_i, y_i\}_{i=1}^n$ which is obtained by identifying the vertex $y_i$ with the vertex $x_{i+1}$ for $i=1,2,\ldots,n-1$ (see Figure \ref{chain-n}).
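The chain construction just described can be phrased concretely as a relabelling of vertices. A minimal sketch (helper names and the graph encoding are ours; we assume $x_i \neq y_i$ inside each $G_i$ and pairwise disjoint labels), which also confirms that chaining two copies of $P_3$ at their middle vertices yields the star $K_{1,4}$:

```python
def chain(graphs, xs, ys):
    """C(G_1,...,G_n): identify y_i of G_i with x_{i+1} of G_{i+1}."""
    V, E = set(graphs[0][0]), {frozenset(e) for e in graphs[0][1]}
    for i in range(1, len(graphs)):
        # the attachment vertex of graph i takes the label of y of graph i-1
        ren = lambda v, i=i: ys[i - 1] if v == xs[i] else v
        V |= {ren(v) for v in graphs[i][0]}
        E |= {frozenset((ren(a), ren(b))) for a, b in graphs[i][1]}
    return sorted(V), sorted(tuple(sorted(e)) for e in E)

# two paths P_3, glued at their middle vertices: the result is K_{1,4}
P3a = (["a1", "a2", "a3"], [("a1", "a2"), ("a2", "a3")])
P3b = (["b1", "b2", "b3"], [("b1", "b2"), ("b2", "b3")])
V, E = chain([P3a, P3b], xs=[None, "b2"], ys=["a2", None])
degs = sorted(sum(v in e for e in E) for v in V)
print(degs)  # [1, 1, 1, 1, 4]
```

Here `xs[0]` and `ys[-1]` are unused placeholders, since the first graph has no predecessor and the last has no successor.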
\begin{figure}[!h] \begin{center} \psscalebox{0.75 0.75} { \begin{pspicture}(0,-3.9483333)(12.236668,-2.8316667) \psellipse[linecolor=black, linewidth=0.04, dimen=outer](1.2533334,-3.4416668)(1.0,0.4) \psellipse[linecolor=black, linewidth=0.04, dimen=outer](3.2533333,-3.4416668)(1.0,0.4) \psellipse[linecolor=black, linewidth=0.04, dimen=outer](5.2533336,-3.4416668)(1.0,0.4) \psellipse[linecolor=black, linewidth=0.04, dimen=outer](8.853333,-3.4416668)(1.0,0.4) \psellipse[linecolor=black, linewidth=0.04, dimen=outer](10.853333,-3.4416668)(1.0,0.4) \psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](2.2533333,-3.4416666) \psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](0.25333345,-3.4416666) \psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](2.2533333,-3.4416666) \psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](4.2533336,-3.4416666) \psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](4.2533336,-3.4416666) \psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](9.853333,-3.4416666) \psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](9.853333,-3.4416666) \psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](11.853333,-3.4416666) \rput[bl](0.0,-3.135){$x_1$} \rput[bl](2.0400002,-3.2016668){$x_2$} \rput[bl](3.9866667,-3.1216667){$x_3$} \rput[bl](2.1733334,-3.9483335){$y_1$} \rput[bl](4.12,-3.9483335){$y_2$} \rput[bl](6.1733336,-3.8816667){$y_3$} \rput[bl](0.9600001,-3.6283333){$G_1$} \rput[bl](3.0,-3.5883334){$G_2$} \rput[bl](5.04,-3.5616667){$G_3$} \psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](6.2533336,-3.4416666) \psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.3, fillcolor=white](7.8533335,-3.4416666) \psdots[linecolor=black, dotsize=0.1](6.6533337,-3.4416666) 
\psdots[linecolor=black, dotsize=0.1](7.0533333,-3.4416666) \psdots[linecolor=black, dotsize=0.1](7.4533334,-3.4416666) \rput[bl](9.6,-3.0816667){$x_n$} \rput[bl](11.826667,-3.8683333){$y_n$} \rput[bl](9.586667,-3.9483335){$y_{n-1}$} \rput[bl](8.533334,-3.6016667){$G_{n-1}$} \rput[bl](7.4,-3.1616666){$x_{n-1}$} \rput[bl](10.613334,-3.575){$G_n$} \end{pspicture} } \end{center} \caption{Chain of $n$ graphs $G_1,G_2, \ldots , G_n$.} \label{chain-n} \end{figure} Before we begin the study of the super domination number of chains of graphs, we mention the following easy result, which follows directly from the definitions of super dominating set and super domination number: \begin{proposition}\label{pro-disconnect} Let $G$ be a disconnected graph with components $G_1$ and $G_2$. Then $$\gamma _{sp}(G)=\gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$ \end{proposition} Now we consider the chain of two graphs and find sharp upper and lower bounds for its super domination number. \begin{theorem}\label{chain2-thm} Let $G_1$ and $G_2$ be two disjoint connected graphs and let $x_i,y_i \in V(G_i)$ for $i\in \{1,2\}$. Let $C(G_1,G_2)$ be the chain of graphs $\{G_i\}_{i=1}^2$ with respect to the vertices $\{x_i, y_i\}_{i=1}^2$ which is obtained by identifying the vertex $y_1$ with the vertex $x_{2}$. Let this vertex in $V(C(G_1,G_2))$ be $z$ (see Figure \ref{chain-2}).
Then, $$\gamma _{sp}(G_1)+\gamma _{sp}(G_2) -1\leq \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$ \end{theorem} \begin{figure} \begin{center} \psscalebox{0.6 0.6} { \begin{pspicture}(0,-4.8)(20.99,-0.8) \pscircle[linecolor=black, linewidth=0.08, dimen=outer](2.26,-2.4){1.6} \psellipse[linecolor=black, linewidth=0.08, dimen=outer](7.66,-2.4)(1.8,0.8) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.66,-2.4) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.86,-2.4) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.86,-2.4) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.46,-2.4) \rput[bl](0.0,-2.56){$x_1$} \rput[bl](5.22,-2.5){$x_2$} \rput[bl](4.16,-2.56){$y_1$} \rput[bl](9.82,-2.5){$y_2$} \rput[bl](2.06,-4.8){$G_1$} \rput[bl](7.5,-4.76){$G_2$} \pscircle[linecolor=black, linewidth=0.08, dimen=outer](15.06,-2.4){1.6} \psellipse[linecolor=black, linewidth=0.08, dimen=outer](18.46,-2.4)(1.8,0.8) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.46,-2.4) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](16.66,-2.4) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](16.66,-2.4) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](20.26,-2.4) \rput[bl](12.8,-2.56){$x_1$} \rput[bl](20.62,-2.5){$y_2$} \rput[bl](15.86,-4.8){$C(G_1,G_2)$} \rput[bl](16.66,-1.96){$z$} \end{pspicture} } \end{center} \caption{Graphs $G_1,G_2$ and $C(G_1,G_2)$ with respect to the vertices $y_1$ and $x_2$, respectively.} \label{chain-2} \end{figure} \begin{proof} First, we find an upper bound for $\gamma _{sp}(C(G_1,G_2))$. Let $S_1$ be a super dominating set of $G_1$ with $\gamma _{sp}(G_1)=|S_1|$, and let $S_2$ be a super dominating set of $G_2$ with $\gamma _{sp}(G_2)=|S_2|$. We have the following cases: \begin{itemize} \item[(i)] $y_1 \in S_1$ and $x_2 \in S_2$.
In this case, $y_1$ and $x_2$ may or may not have influence on the vertices in $\overline{S_1}$ and $\overline{S_2}$, respectively. So we consider the following cases: \begin{itemize} \item[(i.1)] There exists $g_1\in N(y_1)$ such that $N(y_1)\cap \overline{S_1} = \{g_1\}$ and $g_2\in N(x_2)$ such that $N(x_2)\cap \overline{S_2} = \{g_2\}$. So $N(y_1)-\{g_1\}\subseteq S_1$ and $N(x_2)-\{g_2\}\subseteq S_2$. Let $$S=\left( S_1 \cup S_2 \cup \{z,g_1\} \right)-\{y_1,x_2\}. $$ $S$ is a super dominating set for $C(G_1,G_2)$, because $g_2$ is dominated by $z$ and, since all neighbours of $y_1$ are in $S$ now, $N(z)\cap \overline{S} = \{g_2\}$. The rest of the vertices in $\overline{S}$ are dominated by the same vertex as before and the definition of super dominating set holds. So in this case, $$ \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$ \item[(i.2)] There exists $g_1\in N(y_1)$ such that $N(y_1)\cap \overline{S_1} = \{g_1\}$ but there does not exist $g_2\in N(x_2)$ such that $N(x_2)\cap \overline{S_2} = \{g_2\}$. We know that $N(y_1)-\{g_1\}\subseteq S_1$, but we may have more than one vertex in $N(x_2)\cap \overline{S_2}$ or may have all vertices in $N(x_2)$ as a subset of $S_2$. Since we have no knowledge about $N(x_2)\cap \overline{S_2}$, let $$S=\left( S_1 \cup S_2 \cup \{z,g_1\} \right)-\{y_1,x_2\}. $$ Clearly, $S$ is a super dominating set for $C(G_1,G_2)$ since all vertices in $\overline{S}$ are dominated by the same vertex as before. Hence $$ \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$ \item[(i.3)] There does not exist $g_1\in N(y_1)$ such that $N(y_1)\cap \overline{S_1} = \{g_1\}$ but there exists $g_2\in N(x_2)$ such that $N(x_2)\cap \overline{S_2} = \{g_2\}$. It is similar to part (i.2). \item[(i.4)] There does not exist $g_1\in N(y_1)$ such that $N(y_1)\cap \overline{S_1} = \{g_1\}$ and there does not exist $g_2\in N(x_2)$ such that $N(x_2)\cap \overline{S_2} = \{g_2\}$.
We may have more than one vertex in $N(y_1)\cap \overline{S_1}$ or may have all vertices in $N(y_1)$ as a subset of $S_1$, and the same holds for $x_2$. Let $$S=\left( S_1 \cup S_2 \cup \{z\} \right)-\{y_1,x_2\}. $$ Then all vertices in $\overline{S}$ are dominated by the same vertex as before and the definition of the super dominating set holds. So we have $$ \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2)-1.$$ \end{itemize} \item[(ii)] $y_1 \in S_1$ and $x_2 \notin S_2$. In this case, we only pay attention to $y_1$. So we consider the following cases: \begin{itemize} \item[(ii.1)] There exists $g_1\in N(y_1)$ such that $N(y_1)\cap \overline{S_1} = \{g_1\}$. So $N(y_1)-\{g_1\}\subseteq S_1$. Let $$S=\left( S_1 \cup S_2 \cup \{g_1\} \right)-\{y_1\}. $$ We show that $S$ is a super dominating set for $C(G_1,G_2)$. By the definition of $S$ we have $g_1\in S$, so we do not need to consider it in the definition of super dominating set. Since $x_2 \notin S_2$, there exists $h\in S_2$ such that $N(h)\cap \overline{S_2} = \{x_2\}$. Now we consider $z$, and clearly we have $N(h)\cap \overline{S} = \{z\}$. The rest of the vertices in $\overline{S}$ are dominated by the same vertex as before and the definition of the super dominating set holds. So $$ \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$ \item[(ii.2)] There does not exist $g_1\in N(y_1)$ such that $N(y_1)\cap \overline{S_1} = \{g_1\}$. So simply let $$S=\left( S_1 \cup S_2 \cup \{z\} \right)-\{y_1\}. $$ By the same easy argument as before, we conclude that $S$ is a super dominating set for $C(G_1,G_2)$ and therefore $$ \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$ \end{itemize} \item[(iii)] $y_1 \notin S_1$ and $x_2 \in S_2$. It is similar to part (ii). \item[(iv)] $y_1 \notin S_1$ and $x_2 \notin S_2$. Let $S=S_1 \cup S_2$.
Then by a similar argument as in part (ii.1), $S$ is a super dominating set for $C(G_1,G_2)$ and hence $$ \gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$ \end{itemize} Therefore in all cases we have $\gamma _{sp}(C(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2)$. Now we find a lower bound for $\gamma _{sp}(C(G_1,G_2))$. First we take a super dominating set $D$ for $C(G_1,G_2)$ with $\gamma _{sp}(C(G_1,G_2))=|D|$. Now by using this set, we find super dominating sets for $G_1$ and $G_2$. Consider the following cases: \begin{itemize} \item[(i)] $z\in D$. In this case, $z$ may or may not have influence on the vertices in $\overline{D}$. So we consider the following cases: \begin{itemize} \item[(i.1)] There exists $u\in N(z)$ such that $N(z)\cap \overline{D} = \{u\}$. So $N(z)-\{u\}\subseteq D$ and therefore all other neighbours of $z$ are in $D$. Without loss of generality, suppose that $u\in V(G_1)$. Now we separate components $G_1$ and $G_2$ from $C(G_1,G_2)$ and form a disconnected graph with components $G_1$ and $G_2$, replacing the vertex $z$ with $y_1$ in $G_1$ and with $x_2$ in $G_2$ (see Figure \ref{chain-2}). Let $$D_1=\left( D \cup \{y_1\}\right)-\left( V(G_2) \cup \{z\} \right).$$ We show that $D_1$ is a super dominating set for $G_1$. The vertex $u$ is now dominated by $y_1 \in D_1$, and since $N(z)-\{u\}\subseteq D$, we have $N(y_1)-\{u\}\subseteq D_1$. Hence $N(y_1)\cap \overline{D_1} = \{u\}$. The rest of the vertices in $\overline{D_1}$ are dominated by the same vertex as before and the definition of the super dominating set holds. So $\gamma _{sp}(G_1)\leq|D_1|$. Now we consider $G_2$. Let $$D_2=\left( D \cup \{x_2\}\right)-\left( V(G_1) \cup \{z\} \right).$$ Since $x_2 \in D_2$, clearly all vertices in $\overline{D_2}$ are dominated by the same vertex as before. So the definition of the super dominating set holds and $\gamma _{sp}(G_2)\leq|D_2|$.
By Proposition \ref{pro-disconnect}, the super domination number of a disconnected graph with components $G_1$ and $G_2$ is the sum of the super domination numbers of the components. Since $$D_1\cup D_2=\left( D \cup \{y_1,x_2\}\right) - \{z\},$$ and $D_1\cap D_2= \emptyset$, then $$ \gamma _{sp}(G_1)+\gamma _{sp}(G_2)\leq|D_1|+|D_2|=|D_1\cup D_2|=\gamma _{sp}(C(G_1,G_2))+1.$$ \item[(i.2)] There does not exist $u\in N(z)$ such that $N(z)\cap \overline{D} = \{u\}$. As in the previous case, we form $G_1$ and $G_2$. Let $$D_1=\left( D \cup \{y_1\}\right)-\left( V(G_2) \cup \{z\} \right),$$ and $$D_2=\left( D \cup \{x_2\}\right)-\left( V(G_1) \cup \{z\} \right).$$ All vertices in $\overline{D_1}$ and $\overline{D_2}$ are dominated by the same vertex as before. So by a similar argument as in the previous case, we have $$ \gamma _{sp}(G_1)+\gamma _{sp}(G_2)\leq \gamma _{sp}(C(G_1,G_2))+1.$$ \end{itemize} \item[(ii)] $z\notin D$. So there exists $v\in D$ such that $N(v)\cap \overline{D} = \{z\}$. We form $G_1$ and $G_2$ as in part (i.1). Without loss of generality, suppose that $v\in V(G_1)$. Let $$D_1= D - V(G_2) ,$$ and $$D_2=\left( D \cup \{x_2\}\right)- V(G_1).$$ $D_1$ is a super dominating set for $G_1$ because $y_1$ is dominated by $v$ and $N(v)\cap \overline{D_1} = \{y_1\}$, and the rest of the vertices in $\overline{D_1}$ are dominated by the same vertex as before. So $\gamma _{sp}(G_1)\leq|D_1|$. Since $x_2 \in D_2$, all vertices in $\overline{D_2}$ are dominated by the same vertex as before and the definition of super dominating set holds. So $D_2$ is a super dominating set for $G_2$. Hence $\gamma _{sp}(G_2)\leq|D_2|$. Since $D_1\cup D_2= D \cup \{x_2\}$, and $D_1\cap D_2= \emptyset$, then $$ \gamma _{sp}(G_1)+\gamma _{sp}(G_2)\leq\gamma _{sp}(C(G_1,G_2))+1.$$ \end{itemize} Hence in all cases, $ \gamma _{sp}(G_1)+\gamma _{sp}(G_2)\leq\gamma _{sp}(C(G_1,G_2))+1$, and therefore we have the result. \qed \end{proof} \begin{remark} The bounds in Theorem \ref{chain2-thm} are sharp.
For the upper bound, it suffices to consider $G_1=G_2=P_3$. Then by Theorem \ref{thm-2}, $ \gamma _{sp}(G_1)=\gamma _{sp}(G_2)=2$. Now let $y_1$ and $x_2$ be the vertices of degree $2$ in $G_1$ and $G_2$, respectively. One can easily check that $C(G_1,G_2)=K_{1,4}$, and by Theorem \ref{thm-2}, $\gamma _{sp}(C(G_1,G_2))=4=\gamma _{sp}(G_1)+\gamma _{sp}(G_2)$. For the lower bound, it suffices to consider $H_1=F_4$ and $H_2=F_5$, where $F_n$ is the friendship graph with $n$ triangles. Then by Theorem \ref{Firend-thm}, $ \gamma _{sp}(H_1)=5$ and $\gamma _{sp}(H_2)=6$. Now let $y_1$ be the vertex of degree $8$ in $H_1$ and $x_2$ the vertex of degree $10$ in $H_2$, respectively. One can easily check that $C(H_1,H_2)=F_9$, and by Theorem \ref{Firend-thm}, $\gamma _{sp}(C(H_1,H_2))=10=\gamma _{sp}(H_1)+\gamma _{sp}(H_2)-1$. \end{remark} We end this section with an immediate consequence of Theorem \ref{chain2-thm}. \begin{corollary} Let $G_1,G_2, \ldots , G_n$ be a finite sequence of pairwise disjoint connected graphs and let $x_i,y_i \in V(G_i)$. Let $C(G_1,...,G_n)$ be the chain of graphs $\{G_i\}_{i=1}^n$ with respect to the vertices $\{x_i, y_i\}_{i=1}^n$ which is obtained by identifying the vertex $y_i$ with the vertex $x_{i+1}$ for $i=1,2,\ldots,n-1$ (Figure \ref{chain-n}). Then, $$ \left( \sum_{i=1}^{n} \gamma _{sp}(G_i) \right) - n \leq \gamma _{sp}(C(G_1,...,G_n)) \leq \sum_{i=1}^{n} \gamma _{sp}(G_i).$$ \end{corollary} \section{Super domination number of bouquet of graphs} In this section, we consider another special case of graphs obtained by point-attaching from primary subgraphs. Let $G_1,G_2, \ldots , G_n$ be a finite sequence of pairwise disjoint connected graphs and let $x_i \in V(G_i)$. Let $B(G_1,...,G_n)$ be the bouquet of graphs $\{G_i\}_{i=1}^n$ with respect to the vertices $\{x_i\}_{i=1}^n$, obtained by identifying the vertex $x_i$ of the graph $G_i$ with a single vertex $x$ (see Figure \ref{bouquet-n}).
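The bouquet construction identifies all the attachment vertices at once; a short sketch in the same spirit as before (all names are ours, assuming pairwise disjoint vertex labels). Friendship graphs glue at their centres: identifying the centre of $F_2$ with the centre of $F_3$ yields $F_5$, whose degree sequence the code below recovers.

```python
def bouquet(graphs, xs, x="x"):
    """B(G_1,...,G_n): identify the vertices x_1,...,x_n into one vertex x."""
    V, E = set(), set()
    for (Vi, Ei), xi in zip(graphs, xs):
        ren = lambda v, xi=xi: x if v == xi else v
        V |= {ren(v) for v in Vi}
        E |= {frozenset((ren(a), ren(b))) for a, b in Ei}
    return sorted(V), sorted(tuple(sorted(e)) for e in E)

def friendship(n, tag):
    # F_n: n triangles sharing the common centre vertex "c<tag>"
    c = "c" + tag
    leaves = [f"{tag}v{i}" for i in range(2 * n)]
    E = [(c, v) for v in leaves]
    E += [(leaves[2 * i], leaves[2 * i + 1]) for i in range(n)]
    return [c] + leaves, E

# identifying the centres of F_2 and F_3 yields F_5:
# one vertex of degree 10 and ten vertices of degree 2
V, E = bouquet([friendship(2, "a"), friendship(3, "b")], xs=["ca", "cb"])
degs = sorted(sum(v in e for e in E) for v in V)
print(degs)  # [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 10]
```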
\begin{figure}[!h] \begin{center} \psscalebox{0.75 0.75} { \begin{pspicture}(0,-6.76)(5.6,-1.16) \rput[bl](2.6133332,-3.64){$x_1$} \rput[bl](3.0533333,-4.0933332){$x_2$} \rput[bl](2.6533334,-4.5466666){$x_3$} \rput[bl](2.5866666,-1.8133334){$G_1$} \rput[bl](4.72,-4.1066666){$G_2$} \rput[bl](2.56,-6.2){$G_3$} \rput[bl](2.1333334,-4.04){$x_n$} \rput[bl](0.21333334,-4.0933332){$G_n$} \psellipse[linecolor=black, linewidth=0.04, dimen=outer](1.4,-3.96)(1.4,0.4) \psellipse[linecolor=black, linewidth=0.04, dimen=outer](2.8,-2.56)(0.4,1.4) \psellipse[linecolor=black, linewidth=0.04, dimen=outer](4.2,-3.96)(1.4,0.4) \psellipse[linecolor=black, linewidth=0.04, dimen=outer](2.8,-5.36)(0.4,1.4) \psdots[linecolor=black, dotsize=0.1](0.8,-4.76) \psdots[linecolor=black, dotsize=0.1](1.2,-5.16) \psdots[linecolor=black, dotsize=0.1](1.6,-5.56) \psdots[linecolor=black, dotstyle=o, dotsize=0.5, fillcolor=white](2.8,-3.96) \rput[bl](2.6533334,-4.04){$x$} \end{pspicture} } \end{center} \caption{Bouquet of $n$ graphs $G_1,G_2, \ldots , G_n$ and $x_1=x_2=\ldots=x_n=x$.} \label{bouquet-n} \end{figure} Clearly, the bouquet of two graphs $G_1$ and $G_2$ with respect to vertices $x_1\in V(G_1)$ and $x_2\in V(G_2)$ is the same as the chain of these two graphs. So by Theorem \ref{chain2-thm}, we have: \begin{proposition}\label{bouquet2-prop} Let $G_1$ and $G_2$ be two disjoint connected graphs and let $x_i \in V(G_i)$ for $i\in \{1,2\}$. Let $B(G_1,G_2)$ be the bouquet of graphs $\{G_i\}_{i=1}^2$ with respect to the vertices $\{x_i\}_{i=1}^2$, which is obtained by identifying the vertex $x_1$ with the vertex $x_{2}$. Then, $$\gamma _{sp}(G_1)+\gamma _{sp}(G_2) -1\leq \gamma _{sp}(B(G_1,G_2)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2).$$ \end{proposition} We now consider the bouquet of three graphs and find upper and lower bounds for its super domination number.
\begin{theorem}\label{bouquet3-thm} Let $G_1$, $G_2$ and $G_3$ be three pairwise disjoint connected graphs and let $x_i \in V(G_i)$ for $i\in \{1,2,3\}$. Let $B(G_1,G_2,G_3)$ be the bouquet of graphs $\{G_i\}_{i=1}^3$ with respect to the vertices $\{x_i\}_{i=1}^3$, which is obtained by identifying these vertices. Then, $$\gamma _{sp}(G_1)+\gamma _{sp}(G_2)+\gamma _{sp}(G_3) -2\leq \gamma _{sp}(B(G_1,G_2,G_3)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2)+\gamma _{sp}(G_3).$$ \end{theorem} \begin{proof} First we consider $G_1$ and $G_2$. Suppose that $H=B(G_1,G_2)$ is the bouquet with respect to the vertices $\{x_i\}_{i=1}^2$, obtained by identifying the vertex $x_1$ with the vertex $x_{2}$; let this identified vertex be $y$. Now consider the graphs $H$ and $G_3$, and let $B(H,G_3)$ be the bouquet of these graphs with respect to the vertices $y$ and $x_3$. Clearly, we have $B(G_1,G_2,G_3)=B(H,G_3)$. First we establish the lower bound. By Proposition \ref{bouquet2-prop}, we have: \begin{align*} \gamma _{sp}(B(G_1,G_2,G_3)) &= \gamma _{sp}(B(H,G_3)) \\ &\geq \gamma _{sp}(H)+\gamma _{sp}(G_3) -1 \\ &= \gamma _{sp}(B(G_1,G_2))+\gamma _{sp}(G_3) -1 \\ &\geq \gamma _{sp}(G_1)+\gamma _{sp}(G_2)+\gamma _{sp}(G_3) -2. \end{align*} By the same argument, we have $\gamma _{sp}(B(G_1,G_2,G_3)) \leq \gamma _{sp}(G_1)+\gamma _{sp}(G_2)+\gamma _{sp}(G_3)$, and therefore we have the result. \qed \end{proof} As an immediate result of Proposition \ref{bouquet2-prop} and Theorem \ref{bouquet3-thm}, by induction we have: \begin{corollary}\label{Cor-bouq-n} Let $G_1,G_2, \ldots , G_n$ be a finite sequence of pairwise disjoint connected graphs and let $x_i \in V(G_i)$. Let $B(G_1,...,G_n)$ be the bouquet of graphs $\{G_i\}_{i=1}^n$ with respect to the vertices $\{x_i\}_{i=1}^n$, which is obtained by identifying these vertices.
Then, $$ \left( \sum_{i=1}^{n} \gamma _{sp}(G_i) \right) - n +1 \leq \gamma _{sp}(B(G_1,...,G_n)) \leq \sum_{i=1}^{n} \gamma _{sp}(G_i).$$ \end{corollary} We end this section by showing that the bounds for $\gamma _{sp}(B(G_1,...,G_n))$ are sharp. \begin{remark} The bounds in Corollary \ref{Cor-bouq-n} are sharp. For the lower bound, it suffices to consider $G_1=G_2= \ldots = G_n=F_2$, where $F_n$ is the friendship graph of order $n$, and let $x_i$ for $i=1,2,\ldots,n$ be the vertex of degree 4 in $F_2$. One can easily check that $B(G_1,...,G_n)=F_{2n}$. By Theorem \ref{Firend-thm}, we have $\gamma _{sp}(B(G_1,...,G_n))=2n+1$. Also $\gamma _{sp}(F_2)=3$. So $\gamma _{sp}(B(G_1,...,G_n))=\left( \sum_{i=1}^{n} \gamma _{sp}(G_i) \right) - n +1$. For the upper bound, it suffices to consider $H_1=H_2= \ldots = H_n=P_2$, where $P_n$ is the path graph of order $n$. Clearly, we have $\gamma _{sp}(P_2)=1$ and $B(H_1,...,H_n)=K_{1,n}$. By Theorem \ref{thm-2}, $\gamma _{sp}(B(H_1,...,H_n))=n$. Hence, $\gamma _{sp}(B(H_1,...,H_n))= \sum_{i=1}^{n} \gamma _{sp}(H_i) $. \end{remark} \section{Conclusions} In this paper, we obtained a sharp upper bound for the super domination number of graphs modified by the operation $\odot$ on vertices. We also presented results on the super domination number of chains and bouquets of finite sequences of pairwise disjoint connected graphs. Topics of interest for future research include the following: \begin{itemize} \item[(i)] Finding a sharp lower bound for the super domination number of $G\odot v$. \item[(ii)] Finding the super domination number of the link and circuit of graphs. \item[(iii)] Finding the super domination number of the subdivision and powers of a graph. \item[(iv)] Counting the number of super dominating sets of a graph $G$ with size $k\geq \gamma _{sp}(G)$. \end{itemize} \section{Acknowledgements} The author would like to thank the Research Council of Norway and the Department of Informatics, University of Bergen, for their support.
\begin{thebibliography}{99} \bibitem{Alikhani} S. Alikhani, E. Deutsch, More on domination polynomial and domination root, {\it Ars Combin.\/}, 134 (2017) 215--232. \bibitem{Alikhani1} S. Alikhani, N. Ghanbari, Sombor index of polymers, {\it MATCH Commun. Math. Comput. Chem.\/}, \textbf{86} (2021) 715--728. \bibitem{Alf} R. Alfarisi, Dafik, R. Adawiyah, R. M. Prihandini, E. R. Albirri, I. H. Agustin, Super domination number of unicyclic graphs, {\it IOP Conf. Series: Earth and Environmental Science\/}, 243 (2019) 012074. DOI:10.1088/1755-1315/243/1/012074. \bibitem{Dett} M. Dettlaff, M. Lema\'nska, J.A. Rodr\'iguez-Vel\'azquez, R. Zuazua, On the super domination number of lexicographic product graphs, {\it Discrete Applied Mathematics\/}, 263 (2019) 118--129. DOI:10.1016/j.dam.2018.03.082. \bibitem{Deutsch} E. Deutsch, S. Klav\v{z}ar, Computing Hosoya polynomials of graphs from primary subgraphs, {\it MATCH Commun. Math. Comput. Chem.\/}, \textbf{70} (2013) 627--644. \bibitem{Nima0} N. Ghanbari, On the Graovac-Ghorbani and atom-bond connectivity indices of graphs from primary subgraphs, {\it Iranian J. Math. Chem.\/}, to appear. Available at https://arxiv.org/abs/2109.13564. \bibitem{Nima} N. Ghanbari, Some results on the super domination number of a graph, submitted, arXiv:2204.10666 (2022). \bibitem{Moster} N. Ghanbari, S. Alikhani, Mostar index and edge Mostar index of polymers, {\it Comp. Appl. Math.\/}, \textbf{40}, 260 (2021). DOI:10.1007/s40314-021-01652-x. \bibitem{Nima1} N. Ghanbari, S. Alikhani, Total dominator chromatic number of some operations on a graph, {\it Bull. Comput. Appl. Math.\/}, 6(2) (2018) 9--20. \bibitem{domination} T.W. Haynes, S.T. Hedetniemi, P.J. Slater, Fundamentals of domination in graphs, {\it Marcel Dekker\/}, New York, (1998). \bibitem{Kri} B. Krishnakumari, Y. B. Venkatakrishnan, Double domination and super domination in trees, {\it Discrete Mathematics, Algorithms and Applications\/}, 08(04) (2016) 1650067.
DOI:10.1142/S1793830916500671. \bibitem{Kle} D. J. Klein, J.A. Rodr\'iguez-Vel\'azquez, E. Yi, On the super domination number of graphs, {\it Communications in Combinatorics and Optimization\/}, 5(2) (2020) 83--96. DOI:10.22049/CCO.2019.26587.1122. \bibitem{Lemans} M. Lema\'nska, V. Swaminathan, Y.B. Venkatakrishnan, and R. Zuazua, Super dominating sets in graphs, {\it Proc. Nat. Acad. Sci. India Sect.\/}, A 85(3) (2015) 353--357. DOI:10.1007/s40010-015-0208-2. \bibitem{Zhu} W. Zhuang, Super domination in trees, {\it Graphs and Combinatorics\/}, 38 (2022) 21. DOI:10.1007/s00373-021-02409-3. \end{thebibliography} \end{document}
\documentclass[conference,10pt]{IEEEtran} \addtolength{\topmargin}{9mm} \usepackage[T1]{fontenc} \usepackage{url} \usepackage{cite,xcolor} \usepackage[cmex10]{amsmath} \interdisplaylinepenalty=2500 \usepackage[cmintegrals]{newtxmath} \usepackage{bm} \usepackage{graphicx} \usepackage{enumitem} \usepackage[normalem]{ulem} \usepackage{microtype} \graphicspath{{.}{../pictures/}{pictures/}{../../pictures/}} \definecolor{awesome}{rgb}{1.0, 0.13, 0.32} \newcommand{\BV}[1]{\color{awesome} #1 \color{black}} \newcommand{\equalbydef}{:=} \newcommand{\LO}[1]{\color{blue} #1 \color{black}} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{example}{Example} \newtheorem{remark}{Remark} \newtheorem{corollary}{Corollary} \newtheorem{proposition}{Proposition} \newtheorem{observation}{Observation} \usepackage{etoolbox} \makeatletter \patchcmd{\@begintheorem}{\textit}{\textbf}{}{} \patchcmd{\@begintheorem}{\itshape}{\bfseries}{}{} \makeatother \bibliographystyle{IEEEtran} \hyphenation{op-tical net-works semi-conduc-tor} \begin{document} \title{Very Pliable Index Coding} \author{ \IEEEauthorblockN{Lawrence Ong} \IEEEauthorblockA{The University of Newcastle\\ Email: [email protected]} \and \IEEEauthorblockN{Badri N.\ Vellambi} \IEEEauthorblockA{University of Cincinnati\\ Email: [email protected]} } \maketitle \begin{abstract} In the pliable variant of index coding, receivers are allowed to decode any new message not known a priori. Optimal code design for this variant involves identifying each receiver's choice of a new message that minimises the overall transmission rate. This paper proposes a formulation that further relaxes the decoding requirements of pliable index coding by allowing receivers to decode different new messages depending on message realisations. Such relaxation is shown to offer no rate benefit when linear codes are used, but can achieve strictly better rates in general. 
Scenarios are demonstrated for which the transmission rates are better when the message size is finite than when it is asymptotically large. This is in stark contrast to traditional communication setups. \end{abstract} \section{Introduction} Traditional communication systems involve sending specific messages to specific receivers. Such requirements have been relaxed in modern applications like recommender systems, sensor networks, and dataset transmission for distributed learning. In these applications, the receiver's requirement is to obtain any \emph{new} message that it does not already have; there is no constraint on which new message is to be decoded. Yet, codes designed for traditional communications are still being used for such applications. This paper explores the fundamental limits of communications with decoding pliability under the framework of index coding. Index coding~\cite{baryossefbirk11,arbabjolfaeikim18trends,ong2017} consists of one transmitter sending specific messages to multiple receivers through a common broadcast, where each of the receivers already has some subset of messages. Although the setup of index coding seems overly simplified, it has been shown to be equivalent (in terms of transmission rates and code design) to network coding, which is a general multi-source, multi-sink, multi-link network problem~\cite{effrosrouayheblangberg15,ongvellambiklieweryeoh21}. Decoding pliability under the framework of index coding has been coined \emph{pliable index coding}~\cite{brahmafragouli15}. It has been shown that a code designed for pliable index coding achieves significant transmission rate improvement compared to its non-pliable counterpart (index coding). For a problem with $n$ receivers each having a randomly selected subset of messages, such relaxation has been shown to decrease the required size of the broadcast codeword from $\sqrt{n}$ to $\log n$~\cite{brahmafragouli15}. 
Pliable index coding shares many similarities with index coding, especially in code design. Index coding is equivalent to pliable index coding with a fixed decoding choice for each receiver. Thus, evaluating index coding with all possible decoding choices yields an optimal solution for pliable index coding, including the optimal code. While it is NP-hard to determine the best decoding choices, they have been found for a few special cases~\cite{liutuninetti20,ongvellambikliewer2019,OngConf3}. Lower bounds on the rates are derived using decoding-chain arguments, and achievability is shown using MDS codes or uncoded transmissions. Although pliable index coding allows each receiver to decode any new message, the decoding choice must be fixed in the code design for all message realisations. This fixed-choice constraint is often unnecessary in the true spirit of decoding pliability. In this paper, we study a relaxed version of pliable communications where the receivers are not tied to a decoding choice, but are free to decode different new messages upon receiving different transmissions. We call such a setup \emph{very-pliable} (VP) index coding. Such a deviation from traditional decoding models complicates analyses, as many entropy-based information-theoretic tools no longer apply. For example, decoding rules of the form $H(X_i|Y,X_\mathcal{S}) = 0$ do not hold; here $i$ is the choice of the index of the message to be decoded, $Y$ is the received transmission, and $X_\mathcal{S}$ are the messages that the receiver has. Also, due to the absence of a fixed decoding choice, the concatenation of two (or more) very-pliable (VP) index codes does not yield a longer VP code. \subsection{Contributions} This paper establishes the following results: \begin{enumerate} \item The broadcast rate for VP index coding (or VP rate, in short) is at least one. The broadcast rate is the main performance metric, given by the transmitted codeword length normalised to the message length.
\item When restricted to linear encoders, the optimal VP rate is the same as the optimal pliable index coding rate. \item There exist scenarios with finite message alphabets where \begin{enumerate} \item The VP rate is strictly lower (better) than that for the corresponding pliable setting. \item The VP rate is strictly lower than the asymptotic VP rate (as the alphabet size grows unbounded). \end{enumerate} \item A procedure to construct VP codes for a larger alphabet size from VP codes for smaller alphabets. \end{enumerate} Results~1 and 3 are obtained by bounding the number of message realisations that can be encoded to any one codeword, as dictated by the decoding requirements, and then using a counting argument to lower bound the total number of codewords. Achievability is obtained by constructing a hypergraph representation of the problem and evaluating the covering number of the hypergraph. Result~2 is obtained by using the solvability of a single variable in a system of linear equations. Result~4 is obtained by concatenating a VP code with an MDS code. Although it has been shown that linear codes can be strictly sub-optimal for some index-coding instances, examples of such instances and codes have not been reported. In this paper, we present very-pliable index-coding settings and their rate-optimal non-linear VP codes, which strictly outperform linear codes. \section{Problem Formulation}\label{sec:formulation} We use the following notation: $\mathbb{Z}^+$ denotes the set of natural numbers, $[a:b] := \{a, a+1, \dotsc, b\}$ for integers $a \leq b$, and $X_S = (X_i: i \in S)$ for some ordered set $S$. Consider a sender with $m \in \mathbb{Z}^+$ messages denoted by $X_{[1 : m]} = (X_1, \dots, X_m)$. Each message $X_i \in [0:k-1]$, where $k \geq 2$ denotes the message alphabet size. There are $n$ receivers having distinct subsets of messages, which we refer to as side information.
Each receiver is labelled by its side information, i.e., the receiver that has messages $X_{H}$, for some $H \subsetneq [1 : m]$, will be referred to as receiver $H$. A problem instance is characterised by $(m,\mathbb U)$, where $m$ is the number of messages, and the set $\mathbb{U} \subseteq 2^{[1:m]} \setminus \{[1:m]\}$ represents the receivers in the instance. Given a problem instance $(m,\mathbb{U})$, a very-pliable index code (or VP code in short) of size $t \in \mathbb{Z}^+$ for message size $k\in\mathbb{Z}^+$ consists of \begin{itemize} \item sender's encoding function $\mathsf{E}: [0:k-1]^m \rightarrow [0: t-1]$; \item for each receiver $H\in\mathbb{U}$, a pair of decoding functions \begin{align*} &\mathsf{I}_H: [0:t-1] \times [0:k-1]^{|H|} \,\,\,\rightarrow [1:m]\setminus H\\ &\mathsf{G}_H: [0:t-1] \times [0:k-1]^{|H|} \rightarrow [0:k-1] \end{align*} such that for any realisation $x_{[1:m]}\in [0:k-1]^m$, receiver~$H$ decodes a new message with index $\mathsf{I}_H(\mathsf{E}(x_{[1:m]}),x_H)$ without any error. By new, we mean that for any $H\in\mathbb{U}$ and $x_{[1:m]}$, $\mathsf{I}_H(\mathsf{E}(x_{[1:m]}),x_H)\notin H$. The value of the message decoded by receiver~$H$ is given by $\mathsf{G}_H(\mathsf{E}(x_{[1:m]}),x_H)$. \end{itemize} In pliable index coding, the index of the decoded message depends only on the receiver, and not on the message realisation; in VP index coding, the message index \emph{can} vary with the message realisation. Hence the need for an additional decoding function $\mathsf{I}_H$ that specifies which message is decoded. The following example illustrates this key idea of \emph{very pliability}. \begin{example}\label{eg:0} Consider the problem instance $m=3$, $\mathbb{U} = \big\{ \{1\}, \{2\}, \{3\} \big\}$. For $k=3$, a VP code for this instance is given in Figure~\ref{fig:eg:0}, where each box indicates all message realisations mapped to a particular codeword. Suppose that the codeword given by the top left box is conveyed by the encoder.
Then, the receiver with side information $X_2$ decodes $X_1=0$ when $X_2=0$, and decodes $X_3=2$ when $X_2=1$. Thus, in VP coding, a receiver can decode different messages for different received-codeword and side-information realisations. \end{example} \begin{figure}[t] \centering \includegraphics[width=8cm]{pictures/Alpha3} \caption{A rate-optimal VP code for $m=3$, $\mathbb{U} = \big\{ \{1\}, \{2\}, \{3\} \big\}$, and $k=3$.} \label{fig:eg:0} \end{figure} The aim is to find the optimal rate for a particular message size $k$, denoted by \begin{equation} \alpha_k := \min_{\mathsf{E}, \{\mathsf{I}_H,\mathsf{G}_H\}} \frac{\log t}{\log k}, \end{equation} the optimal rate over all $k$, denoted by \begin{equation} \alpha_* := \inf_k \alpha_k, \end{equation} and the asymptotic rate, denoted by \begin{equation} \alpha_\infty := \liminf_{k \rightarrow \infty} \alpha_k. \end{equation} For any problem instance $(m,\mathbb{U})$, a pliable index code differs from a VP code only in $\mathsf{I}_H$, which is a constant function. So, a pliable index code is also a VP code, and \begin{align} \alpha_k &\leq \beta_k, \quad \forall k \geq 2,\label{eq:rate-1-compare}\\ \alpha_* &\leq \beta_{*},\\ \alpha_\infty &\leq \beta_{\infty},\label{eq:rate-3-compare} \end{align} where $\beta$ denotes the corresponding rates for pliable index codes for the same problem instance. For index codes and pliable index codes, we always have $\beta_{*} = \beta_{\infty}$~\cite{ong2017}. However, for some problem instances, we will show that not only can $\alpha_*$ and $\alpha_\infty$ be distinct, but \begin{equation} \alpha_* < \alpha_\infty. \end{equation} Without loss of generality, the side-information sets $H$ of the receivers are distinct; all receivers having the same side information can be satisfied if and only if any one of them can be satisfied. Also, no receiver has side information~$H = [1:m]$ because this receiver cannot be satisfied.
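For small instances, the existence of valid decoders $(\mathsf{I}_H,\mathsf{G}_H)$ can be checked mechanically: an encoder admits them exactly when, among the realisations that share a codeword and agree with a receiver's side information, some message outside $H$ takes a single value. The following sketch is illustrative (the function and the toy linear encoder are ours, not from the paper):

```python
from itertools import product

def vp_decodable(enc, m, k, receivers):
    """Check whether valid decoders (I_H, G_H) exist for encoder `enc`:
    within each group of realisations sharing a codeword and receiver H's
    side information, some message outside H must be constant."""
    for H in receivers:
        groups = {}
        for x in product(range(k), repeat=m):
            key = (enc(x), tuple(x[i - 1] for i in sorted(H)))
            groups.setdefault(key, []).append(x)
        for group in groups.values():
            if not any(len({x[i - 1] for x in group}) == 1
                       for i in range(1, m + 1) if i not in H):
                return False
    return True

# A linear (hence also pliable) encoder for m = 3, U = {{1},{2},{3}}, k = 2:
# c = (x1 + x2, x2 + x3) mod 2, i.e. t = 4 codewords and rate log 4 / log 2 = 2.
enc = lambda x: ((x[0] + x[1]) % 2, (x[1] + x[2]) % 2)
print(vp_decodable(enc, m=3, k=2, receivers=[{1}, {2}, {3}]))  # True

# A constant encoder clearly cannot satisfy any receiver.
print(vp_decodable(lambda x: 0, m=3, k=2, receivers=[{1}, {2}, {3}]))  # False
```

The check enumerates all $k^m$ realisations, so it is only practical for small $m$ and $k$, matching the scale of the examples in this paper.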
\section{Results} \begin{theorem}\label{thm:alpha-grater-than-1} For any problem instance and for any $k$, $\alpha_k \geq 1$. \end{theorem} This can be proven easily (using entropy) for index codes and pliable index codes, but not so for VP codes. \begin{IEEEproof}[Proof of Theorem~\ref{thm:alpha-grater-than-1}] Our proof bounds the number of message realisations that can be encoded to a codeword. Consider the encoding function $\mathsf{E}$ of any VP code. Denote by $\mathsf{E}^{-1}(c) \subseteq [0:k-1]^m$ the set of message realisations that are encoded to a codeword $c$. Consider an arbitrary receiver~$H$. Let the subset of $\mathsf{E}^{-1}(c)$ with a specific $x'_H$ as the receiver's side information be \begin{equation} \mathcal{S}_H(c,x'_H) := \{ x_{[1:m]} \in \mathsf{E}^{-1}(c) : x_H = x'_H\}. \label{eq:si-partition} \end{equation} The decoding requirement of receiver~$H$ dictates that for any $x'_H$ and $c$, there is an index $i \in [1:m] \setminus H$ and value $v\in[0:k-1]$ such that for any $x_{[1:m]}\in \mathcal{S}_H(c,x'_H)$, $x_i=v$, i.e., the component of the $i^{\textrm{th}}$ message in every realisation in $\mathcal{S}_H(c,x'_H)$ is the same. Note that $i$ and $v$ are allowed to vary with $c$ and $x'_H$. This means there can be at most $k^{m-|H|-1}$ distinct message realisations in $\mathcal{S}_H(c,x'_H)$. Then, since $\mathsf{E}^{-1}(c)$ is the disjoint union of the sets $\mathcal{S}_H(c,x'_H)$ over the various $x'_H$, we see that \begin{align} \label{eq:codeword-size} \big|\mathsf{E}^{-1}(c)\big| = \sum_{x'_H} \big|\mathcal{S}_H(c,x'_H)\big| \leq k^{|H|}\, k^{m-|H|-1} = k^{m-1}. \end{align} Thus, at most $k^{m-1}$ message realisations can be encoded to each codeword. Since each message realisation $x_{[1:m]} \in [0:k-1]^m$ must be encoded to a codeword, the total number of codewords required is $t \geq k^m/k^{m-1} = k$. Hence, $\alpha_k \geq 1$.
\end{IEEEproof} \begin{theorem}\label{thm:linear} Consider any linear VP index code, that is, \begin{equation} \mathsf{E}(x_{[1:m]}) = \boldsymbol{E} \boldsymbol{x}, \end{equation} where $\boldsymbol{E}$ is a $T \times m$ encoding matrix over $\mathbb F_q$, the finite field of size $q$, and $\boldsymbol{x}$ is an $m\times 1$ vector over $\mathbb{F}_q$ denoting the message realisation. Then \begin{align} \alpha_q^\text{linear} &= \beta_q^\text{linear}, \quad \text{for any $q$},\label{eq:thm2-1}\\ \alpha_*^\text{linear} &= \beta_*^\text{linear}, \label{eq:thm2-2}\\ \alpha_\infty^\text{linear} &= \beta_\infty^\text{linear}.\label{eq:thm2-3} \end{align} The superscript {\upshape linear} denotes optimal rates over linear codes. \end{theorem} \begin{IEEEproof}[Proof of Theorem~\ref{thm:linear}] Let $\boldsymbol{E}$ be the encoding matrix of a linear VP index code. Let $H$ be a receiver in the problem. The encoding operation can be written as \begin{equation} \boldsymbol{c}= \boldsymbol{E}\boldsymbol{x} = \boldsymbol{E}_{H^c} \boldsymbol{x}_{H^c} + \boldsymbol{E}_H \boldsymbol{x}_{H}, \end{equation} where $\boldsymbol{E}_{H^c}$ is the $T \times |H^c|$ submatrix corresponding to the columns indexed by $H^c$, and $\boldsymbol{x}_{H^c}$ is the $|H^c|\times 1$ vector of messages whose indices lie in $H^c$; similarly, $\boldsymbol{E}_{H}$ is the $T \times |H|$ submatrix corresponding to the columns indexed by $H$, and $\boldsymbol{x}_{H}$ is the $|H|\times 1$ vector of messages whose indices lie in $H$. Suppose that receiver $H$ receives the codeword $\boldsymbol{c}$ from the encoder and has side information $\boldsymbol{x}_H$.
As before, let us define \begin{equation} \mathcal{S}_H(\boldsymbol{c}, \boldsymbol{x}_H) = \left\{ (\tilde{\boldsymbol{x}}_{H^c},\tilde{\boldsymbol{x}}_{H}): \begin{array}{c} \boldsymbol{E}_{H^c} \tilde{\boldsymbol{x}}_{H^c} + \boldsymbol{E}_H \tilde{\boldsymbol{x}}_{H}= \boldsymbol{c} \\ \tilde{\boldsymbol{x}}_{H}=\boldsymbol{x}_{H} \end{array}\right\}. \end{equation} Note that a message realisation is in $\mathcal{S}_H(\boldsymbol{c}, \boldsymbol{x}_H)$ if and only if it is a solution to \begin{equation} \left[\begin{array}{c|c} \boldsymbol{E}_{H^c} & \boldsymbol{E}_H \\ \hline \mathbf{0}_{|H|\times |H^c|} & \mathbf{I}_{|H|\times |H|} \end{array} \right]\begin{bmatrix} \boldsymbol{X}_{H^c} \\ \boldsymbol{X}_{H} \end{bmatrix} = \begin{bmatrix} \boldsymbol{c} \\ \boldsymbol{x}_{H} \end{bmatrix}. \end{equation} Therefore, it follows that a message realisation is in $\mathcal{S}_H(\boldsymbol{c}, \boldsymbol{x}_H)$ if and only if it is a solution to \begin{equation} \left[\begin{array}{c|c} \boldsymbol{E}_{H^c} & \mathbf{0}_{T\times |H|} \\ \hline \mathbf{0}_{|H|\times |H^c|} & \mathbf{I}_{|H|\times |H|} \end{array} \right]\begin{bmatrix} \boldsymbol{X}_{H^c} \\ \boldsymbol{X}_{H} \end{bmatrix} = \begin{bmatrix} \boldsymbol{c}-\boldsymbol{E}_H\boldsymbol{x}_{H} \\ \boldsymbol{x}_{H} \end{bmatrix}. \label{eqn-affinesoln} \end{equation} From elementary matrix analysis we know the following: \textit{In a consistent system of equations $\boldsymbol{Ax}=\boldsymbol{b}$, a variable $x_j$ has a unique solution if and only if a linear combination of the rows of $\boldsymbol{A}$ yields $\boldsymbol{e}_j$, the binary vector with a single one in the $j$-th component.} Therefore, we conclude from \eqref{eqn-affinesoln} that the receiver $H$ will decode a message successfully after receiving codeword $\boldsymbol{c}$ from the encoder, and using side information $\boldsymbol{x}_H$ if and only if it can decode the same message always, i.e., for every $x_{[1:m]}$. 
Therefore, there exists a message index $j\in H^c$ that depends only on $\boldsymbol{E}$ whose (message) value will always (i.e., for any $x_{[1:m]}$) be identified correctly by receiver $H$. Since the problem and the receiver $H$ are arbitrary, it follows that every linear VP code has an equivalent pliable index code with the same encoder, which then naturally yields the claim of the theorem. \end{IEEEproof} \begin{theorem}\label{thm:3} There exist problem instances where \begin{equation} \alpha_k < \beta_k, \end{equation} for some $k\in\mathbb Z^+$. \end{theorem} \begin{IEEEproof}[Proof of Theorem~\ref{thm:3}] Consider the problem instance in Example~\ref{eg:0} and the corresponding VP code presented in Figure~\ref{fig:eg:0}. The rate of this VP code is $\log 7/\log 3 \approx 1.7712$, whereas for pliable codes for this instance, $\beta_k = 2$ for all $k$~\cite{liutuninetti20}. \end{IEEEproof} The code illustrated in Figure~\ref{fig:eg:0} was, in fact, shown to be optimal by exhaustive search on a coding hypergraph created as follows: \begin{itemize} \item The vertex set $\mathcal{V} = [0:k-1]^m$ consists of all $k^m$ message realisations. \item The hypergraph contains a hyperedge $\mathsf{e}\subseteq \mathcal{V}$ if and only if $\mathsf{e}$ is a maximal subset of message realisations (i.e., vertices of the hypergraph) that can be mapped to a codeword by the encoder while ensuring successful recovery of a new message by each receiver. For the problem instance in Example~\ref{eg:0}, $\{(0,0,0), (0,0,1), (1,1,2), (2,1,2)\}$ is a hyperedge, as adding any realisation to this set will violate the decoding requirement of one or more of the receivers. \end{itemize} Any hyperedge in this coding hypergraph can be used to construct a codeword for a VP code. Therefore, the problem of designing a VP code for a problem reduces to identifying a collection of hyperedges in the hypergraph covering all vertices (i.e., message realisations).
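Whether a given set of realisations can share one codeword (i.e., is contained in some hyperedge) can be tested directly. A small illustrative sketch (the function name is ours) checks the example hyperedge above and shows that enlarging it violates a receiver's requirement:

```python
def is_valid_hyperedge(S, m, receivers):
    """Realisations in S may share one codeword iff, for every receiver H and
    every side-information value occurring in S, some message outside H is
    constant on the matching realisations."""
    for H in receivers:
        groups = {}
        for x in S:
            groups.setdefault(tuple(x[i - 1] for i in sorted(H)), []).append(x)
        for group in groups.values():
            if not any(len({x[i - 1] for x in group}) == 1
                       for i in range(1, m + 1) if i not in H):
                return False
    return True

U = [{1}, {2}, {3}]
e = {(0, 0, 0), (0, 0, 1), (1, 1, 2), (2, 1, 2)}
print(is_valid_hyperedge(e, 3, U))                # True
# Adding (0,0,2) breaks receiver 3: with x3 = 2, neither X1 nor X2 is constant.
print(is_valid_hyperedge(e | {(0, 0, 2)}, 3, U))  # False
```

Enumerating all maximal such sets yields the hyperedges of the coding hypergraph described above.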
It then follows that the design of a rate-optimal VP code reduces to the identification of a minimum cover of the vertices by hyperedges of the underlying coding hypergraph. The following technicality arises in formulating the rate-optimal VP code from a minimum cover: \textit{multiple hyperedges may cover a vertex}; however, a realisation can be assigned to only one codeword. To address this, we simply enumerate the hyperedges in any order, and assign each message realisation to the first hyperedge it appears in. From the above description, it is clear that the minimum codebook size $t$ equals the \textit{covering number}~\cite[p.~1]{ScheinermanUllman} of the coding hypergraph. Through an implementation of the above hypergraph approach, we identified optimal rates for the problem instance $m=3, \mathbb{U} = \big\{ \{1\}, \{2\}, \{3\} \big\}$ for the following message alphabet sizes: \begin{itemize} \item $k=2$: $\alpha_2 = 2 = \beta_2$ \item $k=3$: $t=7$, and hence $\alpha_3 = 1.7712 < \beta_3$ \item $k=4$: $t=11$, and hence $\alpha_4 = 1.7297 < \beta_4$ \end{itemize} An optimal codebook for the last setup is shown in Figure~\ref{fig:minimal-covering}. \begin{figure}[t] \centering \includegraphics[width=8cm]{pictures/Alpha4}\\ \caption{A rate-optimal VP code for $m=3$, $\mathbb{U} = \big\{ \{1\}, \{2\}, \{3\} \big\}$, and $k=4$.} \label{fig:minimal-covering} \end{figure} \begin{theorem}\label{thm:infty-equal} Consider the problem instance with $m \geq 3$ messages and $\mathbb{U} = \big\{\{1\},\{2\},\ldots,\{m\}\big\}$. For this instance, $\alpha_\infty=\beta_\infty=2$. \end{theorem} \begin{IEEEproof}[Proof of Theorem~\ref{thm:infty-equal}] This side-information setting belongs to the class of consecutive complete-$S$ problems, and for this setting, $\beta_\infty=2$~\cite{liutuninetti20}. We only need to focus on $\alpha_\infty$. We will establish this result by bounding from above the maximum number of message realisations that can be mapped to any one codeword.
To begin with, assume $k$ is the size of each message and let $\mathsf{E}: [0:k-1]^m\rightarrow [0:t-1]$ be a VP encoder for this problem. Let $c\in[0:t-1]$ and let us investigate the realisations in the pre-image $\mathsf{E}^{-1}(c)$. To assist our analysis, we classify the realisations in $\mathsf{E}^{-1}(c)$ as follows (see Figure~\ref{fig:structclass}): \begin{itemize} \item Partition $\mathsf{E}^{-1}(c)$ into $m-1$ parts. Part $i\in[2:m]$ consists of all realisations in $\mathsf{E}^{-1}(c)$ for which receiver~1\footnote{Recall that receiver~$H$ has $X_H$ as side information. When $H$ is a singleton, we call receiver~$\{i\}$ receiver~$i$.} decodes message $X_i$. Note that there is no Part 1. \item View the realisations in Part $i>1$ of the partition as a table with each row corresponding to a realisation and the columns ordered so that $X_1$ is first, $X_i$ is second, and the rest follow in increasing order of message index starting from $X_{\min([2:m]\setminus \{i\})}$. For example, in Part 2 the columns correspond to realisations of $X_1,X_2,X_3,\ldots, X_m$ in that order, whereas in Part 3 the columns correspond to realisations of $X_1,X_3,X_2,X_4,\ldots, X_m$ in that order. By construction, no realisation appears in more than one part. \item In Part $i>1$, we classify the rows (equivalently, realisations) into three cases as follows: \begin{itemize} \item Case A: a row/realisation is of case A if for this realisation, receiver~1 decodes message $X_i$, and receiver~$i$ decodes message $X_\ell$ for some $\ell\notin\{1,i\}$. \item Case B: a row/realisation is of case B if for this realisation, receiver~1 decodes $X_i$, receiver~$i$ decodes $X_1$, and receiver~$\min([2:m]\setminus \{i\})$ (i.e., the receiver that has the message whose index corresponds to the third column) decodes a message other than $X_1$ or $X_i$.
\item Case C: a row/realisation is of case C if for this realisation, receiver~1 decodes message $X_i$, receiver~$i$ decodes $X_1$, and receiver~$\min([2:m]\setminus \{i\})$ decodes either $X_1$ or $X_i$. \end{itemize} \end{itemize} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{pictures/Thm4Fig}\\ \caption{Details of the partition of realisations mapped to a codeword $c$.} \label{fig:structclass} \end{figure} With this classification, we now bound the number of case A, B and C realisations in each pre-image $\mathsf{E}^{-1}(c)$. Let us fix $a\in[0:k-1]$ and count the number of case-A realisations with $X_1=a$ in $\mathsf{E}^{-1}(c)$. Receiver~1 must decode some message $X_{j_a}$ whose value is, say, $x_{j_a}$. By definition, receiver~$j_a$ must decode a message $X_{i_a}$ with index $i_a\notin\{1,j_a\}$. Hence, the fact that a realisation is of case A with $X_1=a$ uniquely determines two other message indices and their values. Hence, the number of realisations of case A with $X_1=a$ is at most $k^{m-3}$ (that is, at most $k$ values for each of the remaining $m-3$ messages). Since $a\in[0:k-1]$, it follows that there can be at most $k\cdot k^{m-3} = k^{m-2}$ case-A realisations in $\mathsf{E}^{-1}(c)$. Let us now fix $a,b\in[0:k-1]$ and count the number of case-B realisations in $\mathsf{E}^{-1}(c)$. Receiver~1 must decode some message $X_{j_a}$ whose value is, say, $x_{j_a}$. By definition, receiver~$j_a$ must decode $X_1$, and receiver~$\min([2:m]\setminus \{j_a\})$ must decode a message that is neither $X_1$ nor $X_{j_a}$. Hence, the number of case-B realisations in $\mathsf{E}^{-1}(c)$ with $X_1=a$ and $X_{\min([2:m] \setminus \{j_a\})}=b$ is at most $k^{m-4}$ (i.e., at most $k$ values for each of the remaining $m-4$ messages). Since $a,b\in[0:k-1]$, it follows that there can be at most $k^2 \cdot k^{m-4}=k^{m-2}$ case-B realisations in $\mathsf{E}^{-1}(c)$.
So far, we have upper bounded the total number of realisations of Cases~A and B in $\mathsf{E}^{-1}(c)$. For Case~C, we focus on each part of the partition shown in Figure~\ref{fig:structclass} individually. Let us fix $i\in[2:m]$ and $a\in[0:k-1]$, and count the number of Case-C realisations in Part $i$. By the definition of Part $i$, receiver~1 decodes message $X_{i}$. By definition, receiver~$i$ must decode $X_1$, and receiver~$\min([2:m] \setminus \{i\})$ must decode a message that is either $X_1$ or $X_{i}$. So the value of message~$X_{\min([2:m]\setminus \{i\})}$, together with $c$, determines the values of both $X_1$ and $X_i$. Hence, the values of $X_1$ and $X_i$ are uniquely determined in any case-C realisation in Part $i$ when $X_{\min([2:m] \setminus \{i\})}=a$. This is only true within Part~$i$, unlike the counting arguments used for Cases~A and B, which apply to all of $\mathsf{E}^{-1}(c)$. So, there are at most $k^{m-3}$ Case-C realisations in Part $i$ with $X_{\min([2:m] \setminus \{i\})}=a$. Since $a \in [0:k-1]$ and $i \in [2:m]$, it follows that there are at most $k(m-1)\cdot k^{m-3} = (m-1)k^{m-2}$ realisations of Case~C in $\mathsf{E}^{-1}(c)$. Adding up the three cases, we see that there can be no more than $(m+1)k^{m-2}$ realisations in $\mathsf{E}^{-1}(c)$. Therefore, it follows that for any VP code over a message alphabet of size $k$, \begin{align*} t \geq \frac{k^{m}}{\max_{c} |\mathsf{E}^{-1}(c)| }\geq \frac{k^{m}}{(m+1)k^{m-2}} = \frac{k^2}{m+1}. \end{align*} Since the above holds for any VP code and message alphabet size $k$, by letting $k\rightarrow \infty$, we get the required result. \end{IEEEproof} Combining Theorems~\ref{thm:3} and \ref{thm:infty-equal} for the problem instance $m=3$, $\mathbb{U} = \big\{ \{1\}, \{2\}, \{3\} \big\}$, we get the following: \begin{corollary} There exist problem instances where \begin{align} \alpha_* &< \alpha_\infty,\\ \alpha_* &< \beta_*.
\end{align} \end{corollary} Note that unlike index codes or pliable index codes, VP codes cannot be concatenated. This is because although VP codes allow a decoder to decode any message---which can be different for each message realisation---if a message contains multiple parts, all decoded parts must be from the same message. Concatenating two VP codes may violate this decoding requirement, as parts from different messages could be decoded. However, the following is possible: \begin{theorem}\label{thm:concatenation} Consider a problem instance where $|H| \geq 1$ for all $H \in \mathbb{U}$. Then, for any $k$, \begin{equation} \alpha_{2k}\leq \alpha_k\frac{\log k}{\log 2k} + (m-1) \frac{\log 2}{\log 2k}. \end{equation} \end{theorem} \begin{IEEEproof}[Proof of Theorem~\ref{thm:concatenation}] Consider any VP code of rate $\alpha_k$ that encodes messages of size $k$. By definition, such a code exists. For this code, it follows that $E(x)\in [0:k^{\alpha_k}-1]$ for $x \in [0:k-1]^m$. Consider also a binary MDS code $E':\{0,1\}^m\rightarrow \{0,1\}^{m-1}$ defined by $E'(\boldsymbol{y}) = (y_1 \oplus y_2, y_2 \oplus y_3, \dots, y_{m-1} \oplus y_m)$, where $\boldsymbol{y} \in \{0,1\}^m$, and $\oplus$ denotes modulo-2 addition. Note that for this code, a receiver having any one message can decode all other messages. To devise a VP code for message size $2k$, we proceed as follows. Instead of viewing the message alphabet as $[0:2k-1]$, let us view the message alphabet as $[0:k-1]\times\{0,1\}$. The encoder for the alphabet of size $2k$ will process messages over $[0:k-1]^m$ using $E$ and additionally process messages over $\{0,1\}^m$ using $E'$. This concatenated code has a total of $k^{\alpha_k} \cdot 2^{m-1}$ codewords. Each receiver for the concatenated code will use the decoder for $E$ to recover a message of size $k$, and use the decoder for $E'$ to recover the additional bit corresponding to the same message index decoded using the decoder for $E$.
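As a sanity check on the parity code $E'$ used in the construction, the following sketch (illustrative only, shown for $m=4$) verifies that a receiver holding any single bit $y_i$ can recover the entire vector $\boldsymbol{y}$ from the $m-1$ consecutive parities:

```python
from itertools import product

def encode(y):
    # E'(y) = (y_1 xor y_2, y_2 xor y_3, ..., y_{m-1} xor y_m)
    return tuple(y[j] ^ y[j + 1] for j in range(len(y) - 1))

def decode_all(c, i, yi):
    # Recover the full vector y from the codeword c and the single bit y_i:
    # y_{j+1} = y_j xor c_j going forward, y_{j-1} = y_j xor c_{j-1} going back.
    m = len(c) + 1
    y = [0] * m
    y[i] = yi
    for j in range(i, m - 1):
        y[j + 1] = y[j] ^ c[j]
    for j in range(i, 0, -1):
        y[j - 1] = y[j] ^ c[j - 1]
    return tuple(y)

m = 4
for y in product((0, 1), repeat=m):
    c = encode(y)
    for i in range(m):
        assert decode_all(c, i, y[i]) == y  # any one bit suffices
```

The same exhaustive check passes for any $m$; this is exactly the property the concatenated decoder relies on to recover the additional bit of the message index already decoded via $E$.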
Note that the rate of this concatenated code is \begin{equation} \frac{\log\big( k^{\alpha_k} 2^{m-1} \big)}{\log (2k)}. \end{equation} Since the concatenated code is a VP code that encodes messages with an alphabet size of $2k$, the claim follows. \end{IEEEproof} \begin{remark} The concatenation technique used in the proof of Theorem~\ref{thm:concatenation} can be extended to the case where all receivers know $p \in [0:m-1]$ messages as side information. In that case, for any $k$, \begin{equation} \alpha_{kf(m,p)}\leq \frac{\alpha_k\log k+(m-p)\log f(m,p)}{\log k+ \log f(m,p)} , \end{equation} where $f(m,p)$ is the minimum size of the finite field required to construct an $(m,m-p)$-MDS code. \end{remark} In the following, we establish problem instances where $\alpha_k=\beta_k$. \begin{lemma}\label{cor:alpha-equals-beta} For any problem instance where a rate-1 pliable index code exists for message alphabet size $k$, \begin{equation} \alpha_{k^\ell} = \beta_{k^\ell} = 1,\label{eq:all-1} \end{equation} for all $\ell \in \mathbb{Z}^+$. \end{lemma} \begin{IEEEproof}[Proof of Lemma~\ref{cor:alpha-equals-beta}] Concatenate\footnote{Unlike VP codes, index codes and pliable index codes can be concatenated.} $\ell$ copies of the pliable index code to obtain $\beta_{k^\ell} \leq 1$ for any $\ell$. The result then follows from \eqref{eq:rate-1-compare} and Theorem~\ref{thm:alpha-grater-than-1}. \end{IEEEproof} We now show that equality between $\alpha_k$ and $\beta_k$ can also hold for non-trivial cases where $\alpha_k > 1$. \begin{lemma}\label{thm:alpha-equals-beta} There exist problem instances where $\alpha_k = \beta_k > 1$ for all $k$. \end{lemma} \begin{IEEEproof}[Proof of Lemma~\ref{thm:alpha-equals-beta}] Consider the problem instance with $m=3$ and $\mathbb{U} = \big\{ \emptyset, \{1\}, \{1,2\}, \{1,3\} \big\}$. Consider any codeword $c$ of a VP code of size $t\in\mathbb{N}$ for this instance, and fix $a \in [0:k-1]$. Consider the event that $X_1=a$.
Receiver~$1$ (i.e., the receiver that has $X_1$) will use the codeword and the fact that $X_1=a$ to uniquely identify a message, say $X_{i_a}=x_{i_a}$, where $i_a\in\{2,3\}$. Now, since receiver~$\{1,i_a\}$ must also decode a message, it follows that receiver~1 can use the decoded message $X_{i_a}=x_{i_a}$ to decode the value of the remaining message $X_{\{1,2,3\}\setminus\{1,i_a\}}$. Hence, for any $c$ and $a\in[0:k-1]$, the pre-image $\mathsf{E}^{-1}(c)$ can have at most one realisation with $X_1=a$. As $a \in [0:k-1]$, $|\mathsf{E}^{-1}(c)|\leq k$. Since this is true for any VP code, we must have $t \geq k^3 / k = k^2$. From \eqref{eq:rate-1-compare}, we note that $\beta_k\geq \alpha_k \geq 2$. However, the pliable index code $\mathsf{E}(x_{[1:3]}) = (x_2,x_3)$ has a rate of $2$. Hence, $\alpha_k=\beta_k = 2$. \end{IEEEproof} \clearpage \bibliography{otherpublications,mypublications} \end{document}
2205.02578v2
http://arxiv.org/abs/2205.02578v2
Groups with small multiplicities of fields of values of irreducible characters
\documentclass[12pt]{amsart} \usepackage{amsmath,amsthm,amsfonts,amssymb,latexsym,enumerate,xcolor} \usepackage{showlabels} \usepackage[pagebackref]{hyperref} \headheight=7pt \textheight=574pt \textwidth=432pt \topmargin=14pt \oddsidemargin=18pt \evensidemargin=18pt \newcommand{\CC}{{\mathbb{C}}} \newcommand{\FF}{{\mathbb{F}}} \newcommand{\OC}{{\mathcal{O}}} \newcommand{\OB}{{\mathbf{O}}} \newcommand{\Char}{{\mathsf{char}}} \newcommand{\ZZ}{{\mathbb{Z}}} \newcommand{\CB}{\mathbf{C}} \newcommand{\bC}{{\mathbf C}} \newcommand{\GC} {\mathcal{G}} \newcommand{\GCD}{\mathcal{G}^*} \newcommand{\bV} {\mathbf V} \newcommand{\bI} {\mathbf I} \newcommand{\GCF}{{\mathcal G}^F} \newcommand{\TC}{\mathcal{T}} \newcommand{\bZ}{{\mathbf Z}} \newcommand{\bO}{{\mathbf O}} \newcommand{\bF}{{\mathbf F}} \newcommand{\GCDF}{{\mathcal{G}^{*F^*}}} \newcommand{\PP} {\mathcal P} \newcommand{\LL} {\mathcal L} \newcommand{\cU} {\mathcal U} \newcommand{\cV} {\mathcal V} \newcommand{\cW} {\mathcal W} \newcommand{\fS} {\mathfrak S} \newcommand{\FD} {F^*} \newcommand{\ssS}{{\sf S}} \newcommand{\SSS}{\mathsf{S}} \newcommand{\AAA}{\mathsf{A}} \newcommand{\fP}{\mathfrak{P}} \newcommand{\fA}{\mathfrak{A}} \newcommand{\fI}{\mathfrak{I}} \newcommand{\F}{\mathbb{F}} \newcommand{\N}{\mathbb{N}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\C}{\mathbb{C}} \newcommand{\Maxn}{\operatorname{Max_{\textbf{N}}}} \newcommand{\Syl}{\operatorname{Syl}} \newcommand{\dl}{\operatorname{dl}} \newcommand{\cd}{\operatorname{cd}} \newcommand{\cdB}{\operatorname{cdB}} \newcommand{\cs}{\operatorname{cs}} \newcommand{\tr}{\operatorname{tr}} \newcommand{\core}{\operatorname{core}} \newcommand{\Con}{\operatorname{Con}} \newcommand{\Cl}{\operatorname{Cl}} \newcommand{\Max}{\operatorname{Max}} \newcommand{\Aut}{\operatorname{Aut}} \newcommand{\Ker}{\operatorname{Ker}} \newcommand{\Imm}{\operatorname{Im}} \newcommand{\car}{\operatorname{car}} \newcommand{\Irr}{\operatorname{Irr}} 
\newcommand{\IBr}{\operatorname{IBr}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\Rad}{\operatorname{Rad}} \newcommand{\Soc}{\operatorname{Soc}} \newcommand{\Hall}{\operatorname{Hall}} \newcommand{\Sym}{\operatorname{Sym}} \newcommand{\SL}{\operatorname{SL}} \newcommand{\PSL}{\operatorname{PSL}} \newcommand{\Sz}{\operatorname{Sz}} \newcommand{\Gal}{\operatorname{Gal}} \newcommand{\diag}{{{\operatorname{diag}}}} \newcommand{\St}{{{\operatorname{St}}}} \renewcommand{\exp}{{{\operatorname{exp}}}} \newcommand{\al}{\alpha} \newcommand{\gam}{\gamma} \newcommand{\lam}{\lambda} \newcommand{\Id}{{{\operatorname{Id}}}} \newcommand{\ppd}{\textsf{ppd}~} \newcommand{\juancomment}{\textcolor{purple}} \newcommand{\alexcomment}{\textcolor{blue}} \newcommand{\Out}{{{\operatorname{Out}}}} \newcommand{\GL}{\operatorname{GL}} \newcommand{\Sol}{\operatorname{Sol}} \newcommand{\trdeg}{\operatorname{trdeg}} \newcommand{\av}{\operatorname{av}} \newcommand{\tw}[1]{{}^{#1}\!} \renewcommand{\sp}[1]{{<\!#1\!>}} \let\eps=\epsilon \let\la=\lambda \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{con}[thm]{Conjecture} \newtheorem{pro}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{que}[thm]{Question} \newtheorem*{thmA}{Theorem A} \newtheorem*{conA'}{Conjecture A'} \newtheorem*{thmB}{Theorem B} \newtheorem*{thmC}{Theorem C} \newtheorem*{thmD}{Theorem D} \newtheorem*{thmE}{Theorem E} \newtheorem*{thmF}{Theorem F} \theoremstyle{definition} \newtheorem{rem}[thm]{Remark} \newtheorem{defn}[thm]{Definition} \newtheorem{exmp}[thm]{Example} \numberwithin{equation}{section} \renewcommand{\labelenumi}{\upshape (\roman{enumi})} \def\irrp#1{{\rm Irr}_{p'}(#1)} \def\irr#1{{\rm Irr}(#1)} \def\aut#1{{\rm Aut}(#1)} \def\cent#1#2{{\bf C}_{#1}(#2)} \def\syl#1#2{{\rm Syl}_{#1}(#2)} \def\norm#1#2{{\bf N}_{#1}(#2)} \def\oh#1#2{{\bf O}_{#1}(#2)} \def\nor{\triangleleft\,} \def\zent#1{{\bf Z}(#1)} \def\iitem#1{\goodbreak\par\noindent{\bf #1}} \def \mod#1{\, {\rm mod} \, 
#1 \, } \def\sbs{\subseteq} \begin{document} \title[Groups with small multiplicities of fields of values]{Groups with small multiplicities of fields of values of irreducible characters} \author{Juan Mart\'inez} \address{Departament de Matem\`atiques, Universitat de Val\`encia, 46100 Burjassot, Val\`encia, Spain} \email{[email protected]} \thanks{Research supported by Generalitat Valenciana CIAICO/2021/163 and CIACIF/2021/228.} \keywords{Irreducible character, Field of values, Galois extension} \subjclass[2020]{Primary 20C15} \date{\today} \begin{abstract} In this work, we classify all finite groups such that for every field extension $F$ of $\Q$, $F$ is the field of values of at most $3$ irreducible characters. \end{abstract} \maketitle \section{Introduction}\label{Section1} Let $G$ be a finite group, and let $\chi$ be a character of $G$. We define the field of values of $\chi$ as \[\Q(\chi)=\Q(\chi(g)|g \in G).\] We also define \[f(G)=\max_{F/\mathbb{Q}}|\{\chi \in \Irr(G)|\mathbb{Q}(\chi)=F\}|.\] A.~Moretó \cite{Alex} proved that the order of a group is bounded in terms of $f(G)$. That is, there exists $b : \N \rightarrow \N$ such that $|G|\leq b(f(G))$ for every finite group $G$. In that work, it was observed that $f(G)=1$ if and only if $G=1$. The referee of \cite{Alex} asked for the classification of finite groups $G$ with $f(G)=2$ or $3$. Our goal in this paper is to obtain this classification. \begin{thmA} Let $G$ be a finite group. Then \begin{itemize} \item[(i)] If $f(G)=2$, then $G \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4},\mathsf{D}_{10},\mathsf{A}_{4},\mathsf{F}_{21}\}$. \item[(ii)] If $f(G)=3$, then $G \in \{\mathsf{S}_{3},\mathsf{D}_{14},\mathsf{D}_{18},\mathsf{F}_{20},\mathsf{F}_{52}, \mathsf{A}_{5},\PSL(2,8),\Sz(8)\}$. \end{itemize} where $\mathsf{F}_{n}$ and $\mathsf{D}_{n}$ are the Frobenius group and the dihedral group of order $n$, respectively.
As a consequence, the best possible values for $b(2)$ and $b(3)$ are $21$ and $29120$, respectively. \end{thmA} We will study the solvable case and the non-solvable case separately. In the non-solvable case, using a theorem of Navarro and Tiep \cite{Navarro-Tiep}, we will prove that the condition $f(G)\leq 3$ implies that $G$ possesses exactly $3$ rational characters. Then, we will use the main results of \cite{Rossi} to restrict the structure of non-solvable groups with $f(G)\leq 3$. We will divide the solvable case into two steps. In the first step, we classify all metabelian groups with $f(G)\leq 3$. To do this we will use the condition $f(G)\leq 3$ to give an upper bound on the number of irreducible characters or, equivalently, on the number of conjugacy classes. Once we have bounded the number of conjugacy classes, we will use the classification given in \cite{VeraLopez} to finish our classification. In the second step, we prove that if $G$ is a solvable group with $f(G)\leq 3$, then $G$ is metabelian. Our work shows that, as expected, the bounds that are attainable from \cite{Alex} are far from best possible. Following the proof in \cite{Alex}, we can see that if $f(G)=2$ and $G$ is solvable, then $G$ has at most $256$ conjugacy classes. It follows from Brauer's \cite{Brauer} bound for the order of a group in terms of its number of conjugacy classes that $|G|\leq 2^{2^{256}}$. We remark that, even though there are more recent, asymptotically better bounds, they depend on non-explicit constants and it is not clear whether they are better for groups with at most $256$ conjugacy classes. \section{Preliminaries}\label{Section2} In this section we present the basic results that will be used in this work, sometimes without citing them explicitly. \begin{lem} Let $G$ be a finite group. If $N$ is a normal subgroup of $G$, then $f(G/N)\leq f(G)$. \end{lem} \begin{lem}[Lemma 3.1 of \cite{Alex}]\label{cf} Let $G$ be a finite group and $\chi \in \Irr(G)$.
Then $|\mathbb{Q}(\chi):\mathbb{Q}|\leq f(G)$. \end{lem} As a consequence of this result, if $f(G)\leq 3$, then $|\mathbb{Q}(\chi):\mathbb{Q}|\leq 3$. Therefore, $\Q(\chi)$ will be $\Q$, a quadratic extension of $\Q$, or a cubic extension of $\Q$. We can also deduce that if $f(G)\leq 3$ and $\chi \in \Irr(G)$, then there exists $g \in G$ such that $\Q(\chi)=\Q(\chi(g))$. \begin{lem} Let $G$ be a group with $f(G)\leq 3$ and $\chi \in \Irr(G)$ such that $|\Q(\chi):\Q|=2$. Then $\{\psi \in \Irr(G)|\Q(\psi)=\Q(\chi)\}=\{\chi,\chi^{\sigma}\}$, where $\Gal(\Q(\chi)/\Q)=\{1,\sigma\}$. \begin{proof} Clearly $\{\chi,\chi^{\sigma}\} \subseteq \{\psi \in \Irr(G)|\Q(\psi)=\Q(\chi)\}$. Suppose that there exists $\psi \in \Irr(G)\setminus \{\chi,\chi^{\sigma}\}$ with $\Q(\psi)=\Q(\chi)$. Then $\chi,\chi^{\sigma},\psi,\psi^{\sigma}$ are four irreducible characters with the same field of values, which contradicts that $f(G)\leq 3$. \end{proof} \end{lem} As a consequence, if $f(G)\leq 3$, we deduce that for each quadratic extension $F$ of $\Q$, there exist at most two irreducible characters of $G$ whose field of values is $F$. Let $n$ be a positive integer. We define the cyclotomic extension of order $n$ as $\Q_{n}=\Q(e^{\frac{2i\pi }{n}})$. We recall that for every $\chi \in \Irr(G)$ and for every $g\in G$, $\Q(\chi(g))\subseteq \Q_{o(g)}$. The following two lemmas will be useful to deal with $\Q_{o(g)}$, where $g \in G$. \begin{lem}\label{order} Assume that $G/G''=\mathsf{F}_{rq}$, where $q$ is a prime and $G/G'\cong \mathsf{C}_{r}$ is the Frobenius complement of $\mathsf{F}_{rq}$, and that $G''$ is a $p$-elementary abelian group. Then $o(g)$ divides $rp$, for every $g \in G\setminus G'$. \end{lem} \begin{lem} Let $n$ be a positive integer. Then the following hold. \begin{itemize} \item[(i)] If $n=p$, where $p$ is an odd prime, then $\Q_{n}$ contains only one quadratic extension.
\item[(ii)] If $n=p$, where $p$ is an odd prime, then $\Q_{n}$ contains only one cubic extension if $n\equiv 1 \pmod 3$ and contains no cubic extension if $n\not \equiv 1 \pmod 3$. \item[(iii)] If $n=p^{k}$, where $p$ is an odd prime and $k\geq 2$, then $\Q_{n}$ contains only one quadratic extension. \item[(iv)] If $n=p^{k}$, where $p$ is an odd prime and $k\geq 2$, then $\Q_{n}$ contains one cubic extension if $p\equiv 1 \pmod 3$ or $p=3$ and contains no cubic extension if $p\equiv -1 \pmod 3$. \item[(v)] If $n=p^{k}q^{t}$, where $p$ and $q$ are odd primes and $k,t \geq 1$, then $\Q_{n}$ contains $3$ quadratic extensions. \item[(vi)] If $n=p^{k}q^{t}$, where $p$ and $q$ are odd primes and $k,t \geq 1$, then $\Q_{n}$ contains $4$ cubic extensions if both $\Q_{p^k}$ and $\Q_{q^t}$ contain cubic extensions, contains one cubic extension if only one of $\Q_{p^k}$ or $\Q_{q^t}$ contains a cubic extension, and contains no cubic extension if neither $\Q_{p^k}$ nor $\Q_{q^t}$ contains one. \item[(vii)] If $n$ is odd, then $\Q_{n}=\Q_{2n}$. \end{itemize} \begin{proof} This result follows from elementary Galois theory. As an example, we prove (iii) and (iv). We know that $\Gal(\Q_{p^k}/\Q)\cong \mathsf{C}_{p^{k-1}(p-1)}$. Since $\Q_{p^k}$ has as many quadratic extensions as the number of subgroups of index $2$ in $\Gal(\Q_{p^k}/\Q)$, we deduce that $\Q_{p^k}$ has only one quadratic extension. Now, we observe that $\Q_{p^k}$ has cubic extensions if and only if $3$ divides $p^{k-1}(p-1)$. This occurs if and only if $p=3$ or if $3$ divides $p-1$. If $\Q_{p^k}$ has cubic extensions, we can argue as in the quadratic case to prove that it has only one cubic extension. Thus, (iv) follows. \end{proof} \end{lem} The following is well known. \begin{lem}\label{exten} Let $N$ be a normal subgroup of $G$ and let $\theta \in \Irr(N)$ be invariant in $G$.
If $(|G:N|,o(\theta)\theta(1))=1$, then there exists a unique $\chi \in \Irr(G)$ such that $\chi_{N}=\theta$, $o(\chi)=o(\theta)$ and $\Q(\chi)=\Q(\theta)$. In particular, if $(|G:N|,|N|)=1$, then every invariant character of $N$ has a unique extension to $G$ with the same order and the same field of values. \begin{proof} By Theorem 6.28 of \cite{Isaacscar}, there exists a unique extension $\chi$ of $\theta$ such that $o(\chi)=o(\theta)$. Clearly, $\Q(\theta) \subseteq \Q(\chi)$. Assume that $\Q(\theta) \not=\Q(\chi)$. Then there exists $\sigma \in \Gal(\Q(\chi)/\Q(\theta))\setminus\{1\}$. Then $\chi^{\sigma}$ extends $\theta$ and $o(\chi^{\sigma})=o(\theta)=o(\chi)$, which is impossible by the uniqueness of $\chi$. Thus, $\Q(\theta) =\Q(\chi)$, as claimed. \end{proof} \end{lem} We need to introduce some notation in order to state the results deduced from \cite{VeraLopez}. If $G$ is a finite group, then we write $k(G)$ to denote the number of conjugacy classes of $G$ and $\alpha(G)$ to denote the number of $G$-conjugacy classes contained in $G\setminus S(G)$, where $S(G)$ is the socle of $G$. \begin{thm}\label{Vera-Lopez} Let $G$ be a group such that $k(G)\leq 11$. If $f(G)\leq 3$, then $G \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4},\mathsf{D}_{10},\mathsf{A}_{4},\mathsf{F}_{21},\mathsf{S}_{3},\mathsf{D}_{14},\mathsf{D}_{18},\mathsf{F}_{20},\mathsf{F}_{52},\mathsf{A}_{5}, \PSL(2,8),\Sz(8)\}$. \begin{proof} Using the classification of \cite{VeraLopez} of groups with $k(G)\leq 11$, we can see that these are the only groups with $f(G)\leq 3$ and $k(G)\leq 11$. \end{proof} \end{thm} \begin{thm}\label{Vera-Lopez3} Let $G$ be a solvable group with $\alpha(G)\leq 3$. Then either $G=\mathsf{S}_4$ or $G$ is metabelian. \begin{proof} If $G$ is a group with $\alpha(G) \leq 3$, then $G$ must be one of the examples listed in Lemmas 2.18, 2.19 and 2.20 of \cite{VeraLopez}. We see that, except for $\mathsf{S}_4$, all solvable groups in those lemmas are metabelian.
\end{proof} \end{thm} \begin{thm}\label{Vera-Lopez2} Let $G$ be a group such that $S(G)$ is abelian, $k(G)\geq 12$, $4 \leq \alpha(G) \leq 9$ and $k(G/S(G))\leq 10$. Then $f(G)>3$. \begin{proof} If $G$ is a group such that $4 \leq \alpha(G) \leq 10$ and $k(G/S(G))\leq 10$, then $G$ must be one of the examples listed in Lemmas 4.2, 4.5, 4.8, 4.11, 4.14 of \cite{VeraLopez}. We see that $f(G)>3$ for all groups in those lemmas with $k(G)>11$. \end{proof} \end{thm} Now, we classify all nilpotent groups with $f(G)\leq 3$. \begin{thm}\label{nilpotent} If $G$ is a nilpotent group with $f(G)\leq 3$, then $G \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4}\}$. \begin{proof} Let $p$ be a prime dividing $|G|$. Then there exists $K\trianglelefteq G$ such that $G/K=\mathsf{C}_{p}$. Therefore, $f(\mathsf{C}_{p})= f(G/K)\leq f(G)\leq3$, and hence $p \in \{2,3\}$. Thus, the set of prime divisors of $|G|$ is contained in $\{2,3\}$. If $6$ divides $|G|$, then there exists $N$, a normal subgroup of $G$, such that $G/N=\mathsf{C}_{6}$. However, $f(\mathsf{C}_{6})=4> 3$ and we deduce that $G$ must be a $p$-group. It follows that $G/\Phi(G)$ is an elementary abelian $2$-group or an elementary abelian $3$-group with $f(G/\Phi(G)) \leq 3$. Since $f(\mathsf{C}_{2}\times \mathsf{C}_{2})=4$ and $f(\mathsf{C}_{3}\times \mathsf{C}_{3})=8$, we have that $G/\Phi(G) \in \{\mathsf{C}_{2},\mathsf{C}_{3}\}$. Thus, $G$ is a cyclic $2$-group or a cyclic $3$-group. Since $f(\mathsf{C}_{8})>3$ and $f(\mathsf{C}_{9})>3$, it follows that $G\in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4}\}$. \end{proof} \end{thm} In what follows, we will assume that $G$ is not nilpotent. From this result, we can also deduce the following. \begin{cor}\label{der} If $G$ is a group with $f(G)\leq3$, then either $G=G'$ or $G/G' \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4}\}$. \begin{proof} Suppose that $G'<G$. Then $G/G'$ is an abelian group with $f(G/G')\leq 3$.
Thus, by Theorem \ref{nilpotent}, $G/G' \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4}\}$. \end{proof} \end{cor} In the proof of the solvable case of Theorem A, we need to see that there are no groups $G$ with $f(G)\leq 3$ of certain orders. We collect them in the next result. \begin{lem}\label{casos} There exists no group $G$ with $f(G)\leq 3$ and $|G| \in \{30,42, 48,50,54,\\70,84,98,100,126,147,156,234,260,342,558,666,676,774,882,903,954,1098,1206,\\1314,1404,2756,4108,6812,8164\}$. \begin{proof} We observe that all numbers in the above list are smaller than 2000, except $\{2756,4108,6812,8164\}$. However, the numbers $\{2756,4108,6812,8164\}$ are cube-free. Thus, we can use GAP \cite{gap} to check the result. \end{proof} \end{lem} \section{Non-solvable case}\label{Section3} In this section we classify the non-solvable groups with $f(G)\leq 3$. \begin{thm}\label{nonsolvable} Let $G$ be a non-solvable group with $f(G)\leq 3$. Then $f(G)=3$ and $G \in \{\mathsf{A}_{5}, \PSL(2,8), \Sz(8)\}$. \end{thm} If $G$ is a group with $f(G)\leq 3$, it follows trivially that $G$ possesses at most $3$ irreducible rational characters. We will use the following results from \cite{Navarro-Tiep} and \cite{Rossi}, which classify the non-solvable groups with two or three rational characters, respectively. \begin{thm}[Theorems B and C of \cite{Navarro-Tiep}]\label{Navarro-Tiep} Let $G$ be a non-solvable group. Then $G$ has at least 2 irreducible rational characters. If, moreover, $G$ has exactly two irreducible rational characters, then $M/N \cong \PSL(2,3^{2a+1})$, where $M=O^{2'}(G)$, $N=O_{2'}(M)$ and $a \geq 1$. \end{thm} \begin{thm}[Theorem B of \cite{Rossi}]\label{simplePrev2} Let $G$ be a non-solvable group with exactly three rational characters. If $M:=O^{2'}(G)$, then there exists $N\triangleleft G$ solvable and contained in $M$ such that $M/N$ is one of the following groups: \begin{itemize} \item[(i)] $\PSL(2,2^{n})$, where $n\geq2$.
\item[(ii)] $\PSL(2,q)$, where $q\equiv 5 \pmod{24}$ or $q\equiv-5 \pmod{24}$. \item[(iii)] $\Sz(2^{2t+1})$, where $t \geq 1$. \item[(iv)] $\PSL(2,3^{2a+1})$, where $a \geq 1$. \end{itemize} If, moreover, $M/N$ has the form (i), (ii) or (iii), then $N=O_{2'}(M)$. \end{thm} From Theorems \ref{Navarro-Tiep} and \ref{simplePrev2}, we deduce that if $S$ is a simple group with at most three rational characters, then $S$ is one of the groups listed above. That will allow us to determine the simple groups with $f(G)\leq 3$. Looking at the character tables of the groups $\PSL(2,q)$ (see \cite{Dornhoff}, chapter 38) and $\Sz(q)$ (see \cite{Geck}), we see that there is always an entry of the form $e^{\frac{2\pi i}{q-1}}+e^{\frac{-2\pi i}{q-1}}$. For this reason, we study whether $e^{\frac{2\pi i}{r}}+e^{\frac{-2\pi i}{r}}$ is rational, quadratic or cubic. Let $r$ be a positive integer. We will write $\varphi(r)$ to denote the Euler function of $r$, that is, $\varphi(r)=|\{k\in \{1,\ldots,r-1\}| (k,r)=1\}|$. \begin{lem}\label{omega} Let $r$ be a positive integer, let $\nu=e^{\frac{2\pi i}{r}}$ and let $\omega=\nu+\nu^{-1}$. Then the following hold. \begin{itemize} \item[(i)] $\omega$ is rational if and only if $r\in \{3,4,6\}$. \item[(ii)] $\omega$ is quadratic if and only if $r\in \{5,8,10\}$. \item[(iii)] $\omega$ is cubic if and only if $r\in \{7,9,14,18\}$. \end{itemize} \begin{proof} Let $k\in \{1,\ldots,r-1\}$ be such that $(r,k)=1$. Then there exists $\sigma_{k} \in \Gal(\Q(\nu)/\Q)$ such that $\sigma_{k}(\nu)=\nu^{k}$ and hence $\sigma_{k}(\omega)=\nu^{k}+\nu^{-k}$. Suppose that $\omega\in \Q$. Let $k\in \{2,\ldots,r-1\}$ be such that $(r,k)=1$. Since $\sigma_{k}(\omega)=\omega$, we deduce that $k=r-1$. Thus, we deduce that $\varphi(r)=2$ and hence $r\in \{3,4,6\}$. Suppose now that $\omega$ is quadratic. Then there exists $\sigma \in \Gal(\Q(\nu)/\Q)$ such that $\sigma(\omega)\not=\omega$.
We deduce that $\sigma(\nu)=\nu^{k_{0}}$, where $k_{0} \in \{2,\ldots,r-2\}$ and $(r,k_{0})=1$. Since $\omega$ is quadratic, it follows that $\sigma(\omega)$ is the only Galois conjugate of $\omega$ and hence $\{k \leq r|(r,k)=1\}=\{1,k_{0},r-k_{0},r-1\}$. Thus, $\varphi(r)=4$ and (ii) follows. Reasoning as in the previous case, we can deduce that $\omega$ is cubic if and only if $\varphi(r)= 6$ and hence (iii) follows. \end{proof} \end{lem} \begin{thm}\label{simple} Let $S$ be a non-abelian simple group with $f(S)\leq 3$. Then $S \in \{\mathsf{A}_{5},\PSL(2,8),\Sz(8)\}$. \begin{proof} Since $f(S)\leq 3$, $S$ has at most three rational characters. Thus, $S$ has the form described in Theorem \ref{simplePrev2}. We claim that the only groups in those families with $f(S)\leq3$ are $\mathsf{A}_{5}(=\PSL(2,4))$, $\PSL(2,8)$ and $\Sz(8)$. Let $S=\PSL(2,q)$ where $q$ is a prime power or let $S=\Sz(q)$ where $q=2^{2t+1}$ and $t\geq 1$. We know that there exist $\chi \in \Irr(S)$ and $a \in S$ such that $\chi(a)=e^{\frac{2\pi i}{q-1}}+e^{\frac{-2\pi i}{q-1}}$. The condition $f(S)\leq 3$ implies that $|\Q(\chi(a)):\Q|\leq 3$. By Lemma \ref{omega}, we deduce that $q-1 \in \{3,4,5,6,7,8,9,10,14,18\}$. If $S=\PSL(2,q)$, we have that $q=2^n$, $q=3^{2m+1}$ or $q\equiv \pm 5 \pmod{24}$. Thus, since $\PSL(2,4)\cong \PSL(2,5)$, we only have to consider the cases $q \in \{5,8,19\}$. Finally, we have that $3=f(\PSL(2,5))=f(\PSL(2,8))$ and $f(\PSL(2,19))=4$. If $S=\Sz(q)$, we have that $q=2^{2t+1}$ and hence we only have to consider the case $q=8$. Finally, we have that $f(\Sz(8))=3$. Thus, the only simple groups with $f(S)=3$ are $\mathsf{A}_{5}$, $\PSL(2,8)$ and $\Sz(8)$. \end{proof} \end{thm} Using Theorem \ref{Navarro-Tiep} we prove that a non-solvable group with $f(G)\leq 3$ has exactly three rational characters. \begin{thm}\label{2racional} Let $G$ be a non-solvable group with $f(G)\leq 3$. Then $G$ has exactly three rational irreducible characters. In particular, $f(G)=3$.
\begin{proof} By Theorem \ref{Navarro-Tiep}, $G$ has at least two rational irreducible characters. Suppose that $G$ has exactly two rational irreducible characters. Applying Theorem \ref{Navarro-Tiep} again, if $M=O^{2'}(G)$ and $N=O_{2'}(M)$, then $M/N \cong \PSL(2,3^{2a+1})$. Taking the quotient by $N$, we may assume that $N=1$. By Theorem \ref{simple}, $f(M)=f(\PSL(2,3^{2a+1}))>3$ and hence we deduce that $M<G$. Now, we claim that there exists a rational character of $M$ that can be extended to a rational character of $G$. By Lemma 4.1 of \cite{Auto}, there exists $\psi \in \Irr(M)$, which is rational and is extendible to a rational character $\varphi \in \Irr(\Aut(M))$. If $H=G/\mathsf{C}_{G}(M)$, then we can identify $H$ with a subgroup of $\Aut(M)$ which contains $M$. Therefore, $\varphi_{H}\in \Irr(H)\subseteq \Irr(G)$ and it is rational, as we wanted. Let $\chi \in \Irr(G/M)\setminus\{1\}$. Since $|G/M|$ is odd, $\chi$ cannot be rational. Thus, there exists $\rho\not =\chi$, a Galois conjugate of $\chi$. Then $\Q(\chi)=\Q(\rho)$. Since $\psi$ is extendible to the rational character $\varphi \in \Irr(G)$, applying Gallagher's Theorem (see Corollary 6.17 of \cite{Isaacscar}), we have that $\chi \varphi\not=\rho \varphi$ are two irreducible characters of $G$ and $\Q(\chi)=\Q(\rho)=\Q(\varphi\chi)=\Q(\varphi\rho)$. Therefore, we have $4$ irreducible characters with the same field of values, which is impossible. \end{proof} \end{thm} Now, we use Theorem \ref{simplePrev2} to determine $G/O_{2'}(G)$. \begin{thm}\label{reduction} Let $G$ be a finite non-solvable group with $f(G)=3$. Then $G/O_{2'}(G) \in \{\mathsf{A}_{5},\PSL(2,8),\Sz(8)\}$. \begin{proof} Let $M$ and $N$ be as in Theorem \ref{simplePrev2}. We assume for the moment that $N=1$. Suppose first that $M<G$. Reasoning as in Theorem \ref{2racional}, we can prove that there exists $\psi \in \Irr(M)$ that is extendible to a rational character $\varphi \in \Irr(G)$.
As in Theorem \ref{2racional}, if we take $\chi \in \Irr(G/M)\setminus\{1\}$ and $\rho$ a Galois conjugate of $\chi$, then $\Q(\chi)=\Q(\rho)=\Q(\varphi\chi)=\Q(\varphi\rho)$, where all of these characters are different, which is a contradiction. Thus, $M=G$ and hence $G$ is a simple group with $f(G)=3$. By Theorem \ref{simple}, $G\in \{\mathsf{A}_{5},\PSL(2,8),\Sz(8)\}$. If we apply the previous reasoning to $G/N$, then we have that $G/N$ is one of the desired groups. In either case, $G/N$ has the form (i), (ii) or (iii) of Theorem \ref{simplePrev2} and hence $N=O_{2'}(G)$. \end{proof} \end{thm} To complete our proof it only remains to prove that $O_{2'}(G)=1$. However, we first need to study two special cases. First, we study the case when $O_{2'}(G)=Z(G)$. \begin{thm}\label{quasisimple} There is no quasisimple group $G$ such that $O_{2'}(G)=Z(G)$, $O_{2'}(G)>1$ and $G/Z(G) \in \{\mathsf{A}_{5},\PSL(2,8),\Sz(8)\}$. \begin{proof} Suppose that such a group exists. Then we have that $|Z(G)|$ divides $|M(S)|$, where $S=G/Z(G)$. Since the Schur multiplier of $\mathsf{A}_{5}$, $\Sz(8)$ and $\PSL(2,8)$ is $\mathsf{C}_{2}$, $\mathsf{C}_{2}\times \mathsf{C}_{2}$ and the trivial group, respectively, we have that $Z(G)$ is a $2$-group. However, $Z(G)=O_{2'}(G)$ and hence $Z(G)$ has odd order. Thus, $Z(G)=1$ and the result follows. \end{proof} \end{thm} We need to introduce more notation to deal with the remaining case. For any group $G$, we define $o(G)=\{o(g)|g \in G \setminus \{1\}\}$. Suppose that $f(G)\leq 3$ and let $\chi \in \Irr(G)$ be a non-rational character. Then $\Q(\chi)=\Q(\chi(g))$ for some $g \in G \setminus \{1\}$. Thus, $\Q(\chi)$ is a quadratic or a cubic extension of $\Q$ contained in $\Q_{n}$, where $n = o(g)$. If $N$ is a normal subgroup of $G$, then we write $\Irr(G|N)$ to denote the set of $\chi \in \Irr(G)$ such that $N \not \leq \ker(\chi)$.
Finally, if $N$ is a normal subgroup of $G$ and $\theta \in \Irr(N)$, then we write $I_{G}(\theta)=\{g \in G|\theta^{g}=\theta\}$ to denote the inertia subgroup of $\theta$ in $G$. \begin{thm}\label{other} There is no group $G$ with $f(G)\leq 3$ such that $G/O_{2'}(G) \in \{\mathsf{A}_{5},\\ \PSL(2,8), \Sz(8)\}$, $O_{2'}(G)$ is elementary abelian and a simple $G/O_{2'}(G)$-module. \begin{proof} Write $V=O_{2'}(G)$ and let $|V|=p^d$ with $p>2$. Thus, if $\F_{p}$ is the field of $p$ elements, then $V$ can be viewed as an irreducible $\F_{p}[G/V]$-module of dimension $d$. We can extend the associated representation to a representation of $G/V$ over an algebraically closed field of characteristic $p$. Thus, the representation given by $V$ can be expressed as a sum of irreducible representations of $G/V$ over an algebraically closed field of characteristic $p$. Let $m(S)$ be the smallest degree of a non-linear $p$-Brauer character of a group $S$. We have that $d \geq m(G/V)$. We have to distinguish two different cases: $p$ divides $|G/V|$ and $p$ does not divide $|G/V|$. \underline{Case $p$ does not divide $|G/V|$:} In this case the Brauer characters are the ordinary characters. Thus, $|V|=p^{d}$ where $d$ is at least the smallest degree of an irreducible non-trivial character of $G/V$. Now, let $\lambda \in \Irr(V)\setminus \{1\}$. Then $\Q(\lambda)\subseteq \Q_{p}$. Since $(|G/V|,|V|)=1$, we have that $(|I_{G}(\lambda)/V|,|V|)=1$. Thus, by Lemma \ref{exten}, we have that $\lambda$ has an extension $\psi \in \Irr(I_{G}(\lambda))$ with $\Q(\psi)=\Q(\lambda)\subseteq \Q_{p}$. By Clifford's correspondence (see Theorem 6.11 of \cite{Isaacscar}), $\psi^{G} \in \Irr(G)$ and $\Q(\psi^{G})\subseteq \Q(\psi) \subseteq \Q_{p}$. Thus, given $\zeta$, an orbit of $G/V$ on $\Irr(V)\setminus \{1_{V}\}$, there exists $\chi_{\zeta} \in \Irr(G|V)$ such that $\Q(\chi_{\zeta})\subseteq \Q_{p}$.
Let $F$ be the unique quadratic extension of $\Q$ contained in $\Q_{p}$ and let $T$ be the unique cubic extension of $\Q$ contained in $\Q_{p}$ (if such an extension exists). Since $\Irr(G/V)$ contains three rational characters, we deduce that $\Q(\chi_{\zeta})\in \{T,F\}$. Since $F$ is quadratic, there are at most $2$ characters whose field of values is $F$. Thus, the action of $G/V$ on $\Irr(V)\setminus \{1_{V}\}$ has at most $5$ orbits. Therefore, $|V|=|\Irr(V)|\leq 5|G/V|+1$. \begin{itemize} \item[(i)] Case $G/V=\mathsf{A}_{5}$: In this case $|V|\geq 7^3=343$ (because $7$ is the smallest prime not dividing $|G/V|$ and $3$ is the smallest degree of a non-linear character of $\mathsf{A}_{5}$). On the other hand, we have that $|V|\leq 5|G/V|+1\leq 5\cdot 60+1=301<343$, which is a contradiction. \item[(ii)] Case $G/V=\PSL(2,8)$: In this case $|V|\geq 5^{7}=78125$ and $|V|\leq 5\cdot504+1=2521$, which is a contradiction. \item[(iii)] Case $G/V=\Sz(8)$: In this case $|V|\geq 3^{14}=4782969$ and $|V|\leq 5\cdot 29120+1=145601$, which is a contradiction. \end{itemize} \underline{Case $p$ divides $|G/V|$:} From the Brauer character tables of $\{\mathsf{A}_{5},\PSL(2,8),\\ \Sz(8)\}$, we deduce that $m(\mathsf{A}_{5})=3$ for $p \in \{3,5\}$, $m(\PSL(2,8))=7$ for $p \in \{3,7\}$ and $m(\Sz(8))=14$ for $p \in \{3,7,13\}$. \begin{itemize} \item [(i)] Case $G/V=\PSL(2,8)$: \begin{itemize} \item [a)] $p=7$: In this case $|V|=7^{d}$ with $d\geq 7$ and $o(G)=\{2,3,7,9,2\cdot 7, 3\cdot 7, 7 \cdot 7, 9 \cdot 7\}$. On the one hand, the number of non-trivial $G$-conjugacy classes contained in $V$ is at least $\frac{|V|}{|G/V|}\geq \frac{7^{7}}{504}\geq 1634$. Therefore, we deduce that $|\Irr(G)|\geq 1634$. On the other hand, we have that there are at most $3$ quadratic extensions and at most $4$ cubic extensions contained in $\Q_{n}$, where $n \in o(G)$. Applying again that $f(G)\leq 3$, we have that the number of non-rational characters in $G$ is at most $2\cdot3+3\cdot 4=18$.
Counting the rational characters, we have that $|\Irr(G)|\leq 21<1634$, which is a contradiction. \item [b)] $p=3$: In this case $|V|=3^{d}$ with $d\geq 7$ and by calculation $k(G)=|\Irr(G)|\leq 3+2\cdot 3+3\cdot 2=15$. We know that $V=S(G)$, and hence if $4\leq \alpha(G)\leq 9$, then $f(G)>3$ by Theorem \ref{Vera-Lopez2} (clearly $\alpha(G)\geq 4$ because $k(G/S(G))=9$). Thus, $\alpha(G)\geq 10$. Since $V=S(G)$ and $k(G)\leq 15$, we deduce that $V$ contains at most $4$ non-trivial $G$-conjugacy classes. Thus, $|V|\leq 504\cdot 4+1=2017<3^{7}$ and hence we have a contradiction. \end{itemize} \item [(ii)] Case $G/V=\Sz(8)$: In this case $|V|\geq 5^{14}$ and as before $|\Irr(G)|\geq 209598$. \begin{itemize} \item [a)] $p=5$: By calculation, $|\Irr(G)|\leq 3 +2 \cdot 7+3\cdot 2=23<209598$, which is a contradiction. \item [b)] $p\in \{7,13\}$: By calculation, $|\Irr(G)|\leq 3+2\cdot 7+3\cdot 4 =29<209598$, which is a contradiction. \end{itemize} \item [(iii)] Case $G/V=\mathsf{A}_{5}$: \begin{itemize} \item [a)] $p=3$: In this case $|V|=3^d$, where $d\geq 3$, and by calculation we have that $|\Irr(G)|\leq 3+ 2\cdot 3+3 \cdot 1 =12$. As before, applying Theorem \ref{Vera-Lopez2}, we can deduce that $V$ contains at most one non-trivial $G$-conjugacy class. Thus, $|V|\leq 61$ and since $V$ is a $3$-group we deduce that $|V|= 3^3$. We also deduce that $26$ is the size of a $G$-conjugacy class. That is impossible since $26$ does not divide $|G/V|=60$. \item [b)] $p=5$: In this case $k(G)\leq 9$ and by Theorem \ref{Vera-Lopez} there is no group with the required properties. \end{itemize} \end{itemize} We conclude that there is no group with the desired form and hence $V=1$. \end{proof} \end{thm} Now, we are prepared to prove Theorem \ref{nonsolvable}. \begin{proof}[Proof of Theorem \ref{nonsolvable}] By Theorem \ref{reduction}, we know that $G/O_{2'}(G) \in \{\mathsf{A}_{5},\PSL(2,8), \\\Sz(8)\}$. We want to prove that $O_{2'}(G)=1$. Suppose that $O_{2'}(G)>1$.
Taking an appropriate quotient, we may assume that $O_{2'}(G)$ is a minimal normal subgroup of $G$. Since $O_{2'}(G)$ is solvable, we have that $O_{2'}(G)$ is a $p$-elementary abelian subgroup for some odd prime $p$. There are two possibilities for $O_{2'}(G)$. The first one is that $O_{2'}(G)=Z(G)$, and the second one is that $O_{2'}(G)$ is a simple $G/O_{2'}(G)$-module. The first one is impossible by Theorem \ref{quasisimple} and the second one is impossible by Theorem \ref{other}. Thus, $O_{2'}(G)=1$ and the result follows. \end{proof} Therefore, the only non-solvable groups with $f(G)\leq 3$ are $\mathsf{A}_{5},\PSL(2,8)$ and $\Sz(8)$. In what follows we will assume that $G$ is solvable. \section{Metabelian case}\label{Section5} Let $G$ be a finite metabelian group with $f(G)\leq 3$. By Corollary \ref{der}, we have that $G/G' \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4}\}$ and hence we can divide this case into different subcases. We begin by studying the case when $G'$ is $p$-elementary abelian. \begin{lem}\label{casopelem} Let $G$ be a finite group such that $f(G)\leq 3$ and $G'\not=1$ is $p$-elementary abelian. Then $G \in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{A}_{4},\mathsf{D}_{14},\mathsf{F}_{21},\mathsf{F}_{20},\mathsf{F}_{52}\}$. \begin{proof} First, we observe that $(|G:G'|,p)=1$. Otherwise, $G$ would be a $p$-group and hence a nilpotent group with $f(G)\leq 3$. Thus, by Theorem \ref{nilpotent}, we would have that $G'=1$, which is impossible. Let $\psi \in \Irr(G')\setminus \{1_{G'}\}$ and let $I_{G}(\psi)$ be the inertia group of $\psi$ in $G$. Since $G/G'$ is cyclic, applying Theorem 11.22 of \cite{Isaacscar}, we have that $\psi$ can be extended to an irreducible character of $I_{G}(\psi)$. Since any extension of $\psi$ to $G$ would be a linear character containing $G'$ in its kernel, $\psi$ cannot be extended to $G$; hence $\psi$ cannot be invariant and $I_{G}(\psi)<G$. Now, we will study separately the case $G/G' \in \{\mathsf{C}_{2},\mathsf{C}_{3}\}$ and the case $G/G'=\mathsf{C}_{4}$.
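Before starting the case analysis, let us record the standard Galois-theoretic fact behind the subfield counts used below (stated here for an odd prime $p$; for $p=2$ we have $\Q_{p}=\Q$): since
\[
\mathrm{Gal}(\Q_{p}/\Q)\cong (\mathbb{Z}/p\mathbb{Z})^{\times}\cong \mathsf{C}_{p-1}
\]
is cyclic of even order, $\Q_{p}$ contains exactly one quadratic extension of $\Q$, and it contains a cubic extension of $\Q$ if and only if $3$ divides $p-1$.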
Assume first that $G/G' \in \{\mathsf{C}_{2},\mathsf{C}_{3}\}$. Since $I_{G}(\psi)<G$, we deduce that $I_{G}(\psi)=G'$ for every $\psi \in \Irr(G')\setminus \{1_{G'}\}$. Thus, by the Clifford correspondence, $\psi^G\in \Irr(G)$. Therefore, if $\chi \in \Irr(G|G')$, then $\chi$ has the form $\chi=\psi^{G}$, where $\psi \in \Irr(G')\setminus \{1_{G'}\}$. Since $\mathbb{Q}(\psi)\subseteq \mathbb{Q}_{p}$, we have that $\mathbb{Q}(\psi^{G})\subseteq \mathbb{Q}_{p}$. We know that there exists at most one quadratic extension and at most one cubic extension of $\mathbb{Q}$ contained in $\mathbb{Q}_{p}$. Since $\Irr(G/G')$ contains at least one rational character and $f(G)\leq 3$, we have that $|\Irr(G|G')|\leq 2+1\cdot 2+ 1\cdot 3=7$. Since $|\Irr(G/G')|\leq 3$, we have that $k(G)=|\Irr(G)| = |\Irr(G|G')|+|\Irr(G/G')|\leq 7+3=10$. By Theorem \ref{Vera-Lopez}, we deduce that the only groups such that $|G:G'|\in \{2,3\}$, $G'$ is elementary abelian, $f(G)\leq 3$ and $k(G)\leq 10$ are $\{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{A}_{4},\mathsf{D}_{14},\mathsf{F}_{21}\}$. Assume now that $G/G'=\mathsf{C}_{4}$. If $\psi \in \Irr(G')\setminus \{1_{G'}\}$, we have that $I_{G}(\psi)<G$ and hence we have two possible options. The first one is that $I_{G}(\psi)=G'$. In this case, applying the Clifford correspondence, we have that $\psi^{G}\in \Irr(G)$ and hence $\mathbb{Q}(\psi^{G})\subseteq \Q(\psi)\subseteq \mathbb{Q}_{p}$. The other one is that $|G:I_{G}(\psi)|=2$. In this case, applying Lemma \ref{exten}, we have that $\psi$ is extendible to $\varphi \in \Irr(I_{G}(\psi))$ and $\Q(\varphi)=\Q(\psi)\subseteq \Q_{p}$. Let $\Irr(I_{G}(\psi)/G')=\{1,\rho\}$. By Gallagher's Theorem, $\varphi$ and $\varphi\rho$ are all the extensions of $\psi$ to $I_{G}(\psi)$. Since $\Q(\rho)=\Q$, we have that $\Q(\varphi\rho)=\Q(\varphi)\subseteq \Q_{p}$. Let $\tau \in \{\varphi,\varphi\rho\}$. We have that $\tau^{G} \in \Irr(G)$, and hence $\Q(\tau^{G})\subseteq \Q(\tau)\subseteq \Q_{p}$.
Therefore, $\Q(\chi)\subseteq \Q_{p}$ for every $\chi \in \Irr(G|G')$. As before, we can deduce that $\Irr(G|G')$ contains at most $5$ non-rational characters. On the other hand, $\Irr(G/G')$ contains two rational characters and hence $\Irr(G|G')$ contains at most one rational character. Therefore, $|\Irr(G|G')|\leq 6$ and hence $k(G)=|\Irr(G/G')|+|\Irr(G|G')|\leq 4+6=10$. By Theorem \ref{Vera-Lopez}, our only possible options are $\{\mathsf{F}_{20},\mathsf{F}_{52}\}$. \end{proof} \end{lem} \begin{thm}\label{caso2ab} Let $G$ be a metabelian group with $f(G)\leq 3$ such that $|G:G'|=2$. Then $G \in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{D}_{14},\mathsf{D}_{18}\}$. \begin{proof} Assume for the moment that $G'$ is a $p$-group. We note that $F(G)=G'$. Therefore, $G'/\Phi(G)=F(G)/\Phi(G)$ is $p$-elementary abelian. Thus, by Lemma \ref{casopelem}, we have that $G/\Phi(G) \in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{D}_{14}\}$ and hence $G'/\Phi(G)$ is cyclic. Therefore, $G'$ is a cyclic $p$-group and we have only three possibilities for $p$. We analyse the cases $p=3$, $p=5$ and $p=7$ separately. If $p=3$, then $G'$ is a cyclic group of order $3^{l}$. If $l \geq 3$, then there exists $K$ characteristic in $G'$ of order $3^{l-3}$. Thus, $|G/K|=2\cdot3^{3}=54$ and $f(G/K)\leq 3$. However, by Lemma \ref{casos}, there is no group of order $54$ with $f(G)\leq 3$. Thus, $l\in\{1,2\}$. If $l=1$, then $G=\mathsf{S}_{3}$ and if $l=2$, then $G=\mathsf{D}_{18}$. If $p \in \{5,7\}$, then $G'$ is a cyclic group of order $p^{l}$. If $l \geq 2$, then there exists $K$ characteristic in $G'$ of order $p^{l-2}$. Thus, $|G/K|=2\cdot p^{2}$ and $f(G/K)\leq 3$. For $p=5$, we have that $|G/K|=2\cdot 5^{2}=50$ and for $p=7$, we have that $|G/K|=2\cdot 7^{2}=98$. However, by Lemma \ref{casos}, there is no group of order $50$ or $98$ with $f(G)\leq3$. Therefore, if $G'$ is a $p$-group, then $G \in \{\mathsf{S}_{3},\mathsf{D}_{18},\mathsf{D}_{10},\mathsf{D}_{14}\}$.
From here, we also deduce that the prime divisors of $|G'|$ are contained in $\{3,5,7\}$. To complete the classification it only remains to prove that $|G'|$ cannot be divisible by two different primes. Suppose that both $3$ and $5$ divide $|G'|$. Taking a quotient by a Sylow $7$-subgroup of $G'$, we may assume that the only prime divisors of $|G'|$ are $3$ and $5$. By the case when $G'$ is a $p$-group, we deduce that the Sylow $3$-subgroups and Sylow $5$-subgroups of $G'$ are both cyclic. Thus, $f(G/\Phi(G))\leq 3$ and $G'/\Phi(G)=\mathsf{C}_{3}\times \mathsf{C}_{5}$. Therefore, $G/\Phi(G)$ is a group of order $30$ with $f(G/\Phi(G))\leq 3$, which is impossible by Lemma \ref{casos}. Analogously, we can prove that if both primes of one of the pairs $\{3,7\}$ or $\{5,7\}$ divide $|G'|$, then there exists a group $H$ with $f(H)\leq 3$ of order $42$ or $70$, respectively. Applying again Lemma \ref{casos}, we have a contradiction. Thus, $G'$ is a $p$-group and the result follows. \end{proof} \end{thm} \begin{thm}\label{caso3ab} Let $G$ be a metabelian group with $f(G)\leq 3$ such that $|G:G'|=3$. Then $G \in \{\mathsf{A}_{4},\mathsf{F}_{21}\}$. \begin{proof} As in Theorem \ref{caso2ab}, we assume first that $G'$ is a $p$-group. By Lemma \ref{casopelem}, we have that $G/\Phi(G) \in \{\mathsf{A}_{4},\mathsf{F}_{21}\}$. Therefore, we have that $p\in \{2,7\}$. We analyse each case separately. If $p=7$, then $G'/\Phi(G)=\mathsf{C}_{7}$. Thus, $G'$ is a cyclic group of order $7^{l}$. If $l \geq 2$, then there exists $K$ characteristic in $G'$ of order $7^{l-2}$. Thus, $|G/K|=3\cdot7^{2}=147$ and $f(G/K)\leq 3$. However, by Lemma \ref{casos}, there is no group of order $147$ with $f(G)\leq 3$. Thus, $l=1$ and hence $G= \mathsf{F}_{21}$. If $p=2$, then $G'/\Phi(G)=\mathsf{C}_{2}\times \mathsf{C}_{2}$. Thus, $G'=U\times V$, where $U$ is cyclic of order $2^n$, $V$ is cyclic of order $2^m$ and $n\geq m$. Then we can take $H$, the unique subgroup of $U$ of order $2^{m}$.
Thus, $K=H\times V$ is normal in $G$ and $(G/K)'$ is a cyclic $2$-group. Thus, $f(G/K)\leq 3$ and $|G/K:(G/K)'|=3$, which is not possible by Lemma \ref{casopelem}. It follows that $n=m$ and hence $G'$ is a direct product of two cyclic groups of order $2^{n}$. If $n \geq 2$, then there exists $T$ characteristic in $G'$ such that $G'/T=\mathsf{C}_{4}\times \mathsf{C}_{4}$. Thus, $f(G/T)\leq 3$ and $|G/T|=48$, which contradicts Lemma \ref{casos}. It follows that $n=1$ and hence $G=\mathsf{A}_{4}$. Therefore, we have that the prime divisors of $|G'|$ are contained in $\{2,7\}$ and if $G'$ is a $p$-group, then $G \in \{\mathsf{A}_{4},\mathsf{F}_{21}\}$. Assume now that both $2$ and $7$ divide $|G'|$. Then $G'/\Phi(G)=\mathsf{C}_{2}\times \mathsf{C}_{2}\times \mathsf{C}_{7}$. Thus, $|G/\Phi(G)|=84$ and $f(G/\Phi(G))\leq 3$, which is impossible by Lemma \ref{casos}. Then $G'$ must be a $p$-group and the result follows. \end{proof} \end{thm} \begin{thm}\label{caso4ab} Let $G$ be a metabelian group with $f(G)\leq 3$ such that $|G:G'|=4$. Then $G \in \{\mathsf{F}_{20},\mathsf{F}_{52}\}$. \begin{proof} As in Theorem \ref{caso2ab}, we assume first that $G'$ is a $p$-group. By Lemma \ref{casopelem}, we have that $G/\Phi(G) \in \{\mathsf{F}_{20},\mathsf{F}_{52}\}$ and hence $G'$ is a cyclic $p$-group, where $p \in \{5,13\}$. In both cases $G'$ is a cyclic group of order $p^{l}$. If $l \geq 2$, then there exists $K$ characteristic in $G'$ of order $p^{l-2}$. Thus, $|G/K|=4\cdot p^{2}$ and $f(G/K)\leq 3$. For $p=5$, we have that $|G/K|=4\cdot 5^{2}=100$ and for $p=13$, we have that $|G/K|=4\cdot 13^{2}=676$. However, by Lemma \ref{casos} there is no group of order $100$ or $676$ with $f(G)\leq3$. Therefore, we have that the prime divisors of $|G'|$ are contained in $\{5,13\}$ and if $G'$ is a $p$-group then $G \in \{\mathsf{F}_{20},\mathsf{F}_{52}\}$. Assume now that both $5$ and $13$ divide $|G'|$. Then $G'/\Phi(G)= \mathsf{C}_{5}\times \mathsf{C}_{13}$.
Thus, $f(G/\Phi(G))\leq 3$ and $|G/\Phi(G)|=4\cdot 5 \cdot 13=260$, which contradicts Lemma \ref{casos}. Therefore, $G'$ must be a $p$-group and the result follows. \end{proof} \end{thm} \section{Solvable case} In this section we classify all solvable groups with $f(G)\leq 3$. By the results of the previous section, we have that $G/G'' \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4},\mathsf{S}_{3}, \mathsf{D}_{10},\mathsf{A}_{4},\mathsf{D}_{14},\mathsf{D}_{18},\mathsf{F}_{20},\mathsf{F}_{21},\mathsf{F}_{52}\}$. Therefore, the result will be completed once we prove that $G''=1$. We will begin by determining all possible $\Q(\chi)$ for $\chi \in \Irr(G|G'')$ and then we will use this to bound $k(G)$. Finally, the result will follow from Theorems \ref{Vera-Lopez} and \ref{Vera-Lopez2} and some calculations. \begin{lem}\label{restocasos} Let $G$ be a group such that $G''\not=1$, $G''$ is $p$-elementary abelian, $G/G''\in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{D}_{14},\mathsf{F}_{21},\mathsf{F}_{20},\mathsf{F}_{52}\}$ and $p$ does not divide $|G'/G''|$. If $r=|G:G'|$, then $\Q(\chi)\subseteq \Q_{rp}$ for every $\chi \in \Irr(G|G'')$. \begin{proof} By Lemma \ref{order}, we know that for every $g \in G \setminus G'$ and for every $\chi \in \Irr(G)$, $\chi(g) \in \Q_{rp}$. Therefore, we only have to prove that $\Q(\chi_{G'})\subseteq \Q_{rp}$ for every $\chi \in \Irr(G|G'')$. It suffices to prove that $\Q(\psi)\subseteq \Q_{rp}$ for every $\psi \in \Irr(G'|G'')$. Let $\lambda \in \Irr(G'')\setminus \{1_{G''}\}$. We know that $\Q(\lambda)\subseteq \Q_{p}$ and $\lambda$ cannot be extended to an irreducible character of $G'$. Since $|G':G''|$ is prime, we deduce that $\lambda^{G'}\in \Irr(G')$. Now, we have that $\Q(\lambda^{G'})\subseteq \Q(\lambda)\subseteq \Q_{p}\subseteq \Q_{rp}$ and hence the result follows. \end{proof} \end{lem} \begin{lem}\label{casoD18} Let $G$ be a group such that $G''\not=1$, $G''$ is $p$-elementary abelian, $G/G''=\mathsf{D}_{18}$ and $p\not=3$.
If $f(G)\leq 3$, then $k(G)\leq 15$. Moreover, if $p=2$, then $k(G)\leq 10$ and if $p$ is an odd prime with $p\equiv -1 \pmod 3$, then $k(G)\leq 12$. \begin{proof} We claim that $\Q(\chi_{G'})\subseteq \Q_{3p}$ for every $\chi \in \Irr(G|G'')$. Let $\lambda \in \Irr(G'')\setminus \{1_{G''}\}$ and let $T=I_{G'}(\lambda)$. We know that $\Q(\lambda)\subseteq \Q_{p}$ and $\lambda$ cannot be extended to an irreducible character of $G'$. Since $(|G''|,|G':G''|)=1$, applying Lemma \ref{exten}, we deduce that $\lambda$ extends to $\mu \in \Irr(T)$ with $\Q(\mu)=\Q(\lambda)\subseteq \Q_{p}$. It follows that $T<G'$ and hence we have two different possibilities. The first one is that $T=G''$. In this case, $\lambda^{G'}\in \Irr(G')$ and hence $\Q(\lambda^{G'})\subseteq \Q(\lambda)\subseteq \Q_{p}\subseteq \Q_{3p}$. The second one is that $|T:G''|=3$. In this case, $\Irr(T/G'')=\{1,\rho, \rho^2\}$. By Gallagher's Theorem, we have that $\Irr(T|\lambda)=\{\mu, \rho\mu, \rho^2\mu\}$ and since $\Q(\rho)=\Q_{3}$, we deduce that $\Q(\psi)\subseteq \Q_{3p}$ for every $\psi \in \Irr(T|\lambda)$. Now, let $\psi \in \Irr(T|\lambda)$. Thus, by the Clifford correspondence, $\psi^{G'}\in \Irr(G')$ and hence $\Q(\psi^{G'})\subseteq \Q(\psi)\subseteq \Q_{3p}$. Thus, $\Q(\chi_{G'})\subseteq \Q_{3p}$ for every $\chi \in \Irr(G|G'')$. Assume that $f(G) \leq 3$. Since $\Irr(G/G'')$ contains $3$ rational characters, we deduce that $\Irr(G|G'')$ does not contain rational characters. Assume first that $p$ is odd. By Lemma \ref{order}, we know that for every $g \in G \setminus G'$ and for every $\chi \in \Irr(G)$, $\chi(g) \in \Q_{2p}=\Q_{p}\subseteq \Q_{3p}$. Thus, by the previous claim, if $\chi \in \Irr(G|G'')$, then $\Q(\chi)\subseteq \Q_{3p}$ and hence it is either a quadratic or a cubic extension of $\Q$ contained in $\Q_{3p}$. We know that $\Q_{3p}$ possesses three quadratic extensions and at most one cubic extension.
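This count of subfields of $\Q_{3p}$ can be justified via the Galois correspondence: for an odd prime $p\not=3$,
\[
\mathrm{Gal}(\Q_{3p}/\Q)\cong (\mathbb{Z}/3p\mathbb{Z})^{\times}\cong \mathsf{C}_{2}\times \mathsf{C}_{p-1},
\]
which has exactly three subgroups of index $2$, so $\Q_{3p}$ contains exactly three quadratic extensions of $\Q$; moreover, this group has a quotient isomorphic to $\mathsf{C}_{3}$ (equivalently, $\Q_{3p}$ contains a cubic extension of $\Q$) if and only if $3$ divides $p-1$.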
Thus, $|\Irr(G|G'')|\leq 3\cdot 2+1\cdot 3=9$ and hence $k(G)=|\Irr(G)|=|\Irr(G/G'')|+|\Irr(G|G'')|\leq 6+9=15$. We also observe that $\Q_{3p}$ possesses a cubic extension if and only if $p\equiv 1 \pmod 3$. Thus, if $p\equiv -1 \pmod 3$, then $k(G)\leq 12$. Assume now that $p=2$. In this case, $\Q_{3p}=\Q_3$. By Lemma \ref{order}, we know that for every $g \in G \setminus G'$ and for every $\chi \in \Irr(G)$, $\chi(g) \in \Q_{2p}=\Q(i)$. Thus, if $\chi \in \Irr(G|G'')$, then either $\Q(\chi)=\Q_{3}$ or $\Q(\chi)=\Q(i)$. Since $\Q(i)$ and $\Q_{3}$ are both quadratic, we have that $|\Irr(G|G'')|\leq 2\cdot 2$ and hence $k(G)\leq 6+4=10$. \end{proof} \end{lem} \begin{lem}\label{casoA4} Let $G$ be a group such that $G''\not=1$, $G''$ is $p$-elementary abelian and $G/G''=\mathsf{A}_{4}$. If $f(G)\leq 3$, then $k(G)\leq12$. If moreover $p\not\equiv 1 \pmod 3$, then $k(G)\leq 9$. \begin{proof} First, we study the orders of the elements of $G$. If $g \in G''$, then $o(g)$ divides $p$. If $g \in G'\setminus G''$, then $o(g)$ divides $2p$. Finally, if $g \in G \setminus G'$, then $o(g)$ divides $3p$. Let $\chi\in \Irr(G)$. Then, $\Q(\chi_{G''})\subseteq \Q_{p}$. If $g \in G \setminus G'$, then $\chi(g) \in \Q_{3p}$. Finally, if $g \in G'\setminus G''$, then $\chi(g)\in \Q_{2p}$. Thus, $\Q(\chi)$ is contained in $\Q_{2p}$ or in $\Q_{3p}$. If $p=2$, then $\Q_{2p}=\Q(i)$ and $\Q_{3p}=\Q_{3}$. Therefore, we have that $k(G)=|\Irr(G)|\leq 2\cdot 2+3=7<9$. Assume now that $p\not=2$. Then $\Q_{2p}=\Q_{p}$ and it follows that $\Q(\chi) \subseteq \Q_{3p}$ for every $\chi \in \Irr(G)$. Assume first that $p=3$; then $\Q_{3p}=\Q_{9}$, which possesses only one quadratic extension and one cubic extension. Therefore, $k(G)=|\Irr(G)|\leq 2\cdot 1+3\cdot 1+3=8<9$. Finally, assume that $p\not=3$ is an odd prime. Then $\Q_{3p}$ has three quadratic extensions and at most one cubic extension. It follows that $k(G)\leq 2\cdot 3+3\cdot 1+3=12$.
We also have that if $p\equiv -1 \pmod 3$, then $\Q_{3p}$ has no cubic extension and hence $k(G)\leq 9$. \end{proof} \end{lem} The next result completes the proof of the solvable case of Theorem A. \begin{thm}\label{solvable} Let $G$ be a solvable group with $f(G)\leq 3$. Then $G \in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4},\mathsf{S}_{3},\\ \mathsf{D}_{10},\mathsf{A}_{4},\mathsf{D}_{14},\mathsf{D}_{18},\mathsf{F}_{20},\mathsf{F}_{21},\mathsf{F}_{52}\}$. \begin{proof} If $G$ is metabelian, by Theorems \ref{caso2ab}, \ref{caso3ab} and \ref{caso4ab}, $G\in \{\mathsf{C}_{2},\mathsf{C}_{3},\mathsf{C}_{4},\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{A}_{4},\\ \mathsf{D}_{14}, \mathsf{D}_{18},\mathsf{F}_{20},\mathsf{F}_{21},\mathsf{F}_{52}\}$. Therefore, we only have to prove that $G''=1$. Assume that $G''>1$. Taking an appropriate quotient, we may assume that $G''$ is a minimal normal subgroup of $G$. Since $G$ is solvable, we have that $G''$ is $p$-elementary abelian for some prime $p$. We also have that $G/G''$ is a metabelian group with $f(G/G'')\leq 3$. Thus, $G/G'' \in \{\mathsf{S}_{3}, \mathsf{D}_{10},\mathsf{A}_{4},\mathsf{D}_{14},\mathsf{D}_{18},\mathsf{F}_{20},\mathsf{F}_{21},\mathsf{F}_{52}\}$. We claim that we can assume that $G''$ is the unique minimal normal subgroup of $G$. Suppose that there exists $M$, a minimal normal subgroup of $G$ different from $G''$. Then $MG''/G''$ is a minimal normal subgroup of $G/G''$. On the one hand, if $G/G''\not=\mathsf{D}_{18}$, then the only minimal normal subgroup of $G/G''$ is $G'/G''$. Thus, $G'=M\times G''$ and hence $G'$ is abelian, which is a contradiction. On the other hand, if $G/G''=\mathsf{D}_{18}$, then the only possibility is that $|M|=3$. Let $\overline{G}=G/M$ and let $\overline{\cdot}$ denote the image in $G/M$. We have that $f(\overline{G})\leq 3$, $\overline{G}''=\overline{G''}=MG''/M\cong G''/(M\cap G'')=G''$ and $\overline{G}/\overline{G}'' \cong G/MG''\cong \mathsf{S}_{3}$.
Therefore, $\overline{G}$ is one of the cases already studied. So, in any case, we may assume that $G''$ is the only minimal normal subgroup of $G$, that is, $G''=S(G)$. In particular, $k(G/S(G))=k(G/G'')\leq 7\leq 10$ and hence the corresponding hypothesis of Theorem \ref{Vera-Lopez2} is satisfied. Since we are assuming that $G$ is not metabelian and $f(\mathsf{S}_4)=5>3$, we may apply Theorem \ref{Vera-Lopez3} to deduce that $\alpha(G)\geq 4$. In addition, if $k(G)\leq 11$, applying Theorem \ref{Vera-Lopez}, we have that the only possibility is that $G''=1$, which is a contradiction. Thus, we will assume that $k(G)\geq 12$. As a consequence, if $4 \leq\alpha(G)\leq 9$, then applying Theorem \ref{Vera-Lopez2} we have that $f(G)>3$, which is impossible. Therefore, in what follows, we will assume that $k(G)\geq 12$ and $\alpha(G)\geq 10$. Now, we proceed case by case, studying the case $G/G''=\mathsf{A}_{4}$ and the case $G/G''\not=\mathsf{A}_{4}$ separately. \underline{Case $G/G''=\mathsf{A}_{4}$:} By Lemma \ref{casoA4}, if $p\not\equiv 1 \pmod 3$, then $k(G)\leq 9<12$, which is impossible. Thus, we may assume that $p\equiv 1 \pmod 3$ and $k(G)=12$. Since $\alpha(G)\geq10$, we have that $G''$ contains a unique $G$-conjugacy class of non-trivial elements. As a consequence, $|G''|\leq 12+1=13$. We also have that $|G''|$ is a power of a prime $p$ such that $p\equiv 1 \pmod 3$. Thus, the only possibilities are $|G''|\in \{7,13\}$ and hence $|G|\in \{84,156\}$. By Lemma \ref{casos}, there is no group of order $84$ or $156$ with $f(G)\leq 3$ and hence we have a contradiction. \underline{Case $G/G''\not=\mathsf{A}_{4}$:} In this case $G'/G''$ is a cyclic group. We claim that $(|G':G''|,p)=1$. Assume that $p$ divides $|G':G''|$. Then $G'$ is a $p$-group and hence $G''\subseteq \Phi(G')$. Therefore, $G'$ is cyclic and hence it is abelian, which is a contradiction. Thus, the claim follows.
Now, we study separately the case $G/G''=\mathsf{D}_{18}$ and the case $G/G''\in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{D}_{14},\mathsf{F}_{21},\mathsf{F}_{20},\mathsf{F}_{52}\}$. \begin{itemize} \item \underline{Case $G/G''=\mathsf{D}_{18}$:} Since $p\not=3$, we may apply Lemma \ref{casoD18}. If $p=2$, then $k(G)\leq 10<12$ and hence we have a contradiction. Thus, we may assume that $p$ is odd. Assume now that $p$ is an odd prime such that $p\not\equiv 1 \pmod 3$. In this case $k(G)\leq 12$. Thus, $k(G)=12$ and, reasoning as in the case $G/G''=\mathsf{A}_{4}$, we can deduce that $G''$ contains a unique $G$-conjugacy class of non-trivial elements. It follows that $|G''|\leq 18+1=19$, $|G''|$ must be a power of a prime $p$ with $p\not\equiv 1 \pmod 3$ and $|G''|=\frac{18}{|H|}+1$, where $H \leq \mathsf{D}_{18}$. Since there is no integer with the required properties, we have a contradiction. Assume finally that $p\equiv 1 \pmod 3$. In this case $k(G)\leq 15$. As before, we can deduce that $G''$ contains at most $4$ non-trivial conjugacy classes and hence $|G''|\leq 4 \cdot 18+1=73$. Therefore, $|G''|\in \{7, 13, 19, 31, 37, 43, 49, 53, 61, 67, 73\}$ and hence $|G| \in \{126, 234, 342, 558, 666, 774, 882, 954, 1098, 1206, 1314\}$. Applying again Lemma \ref{casos}, we have a contradiction. \item \underline{Case $G/G''\in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{D}_{14},\mathsf{F}_{21},\mathsf{F}_{20},\mathsf{F}_{52}\}$:} Since $(|G':G''|,p)=1$, we may apply Lemma \ref{restocasos}. Thus, if $r=|G:G'|$ and $\chi \in \Irr(G|G'')$, we have that $\Q(\chi)\subseteq \Q_{rp}$. We study the cases $r=2,3,4$ separately. \begin{itemize} \item [(i)] Case $G/G''\in \{\mathsf{S}_{3},\mathsf{D}_{10},\mathsf{D}_{14}\}$: In these cases $|G:G'|=2$ and hence for all $\chi \in \Irr(G|G'')$ we have that $\Q(\chi)\subseteq \Q_{2p}=\Q_{p}$. Thus, $\Irr(G|G'')$ contains at most $5$ non-rational characters. We also observe that $\Irr(G/G'')$ possesses at most $3$ non-rational characters.
Counting the rational characters, we have that $k(G)\leq 3+3+5=11<12$, which is a contradiction. \item [(ii)] Case $G/G''=\mathsf{F}_{21}$: If $\chi \in \Irr(G|G'')$, then $\Q(\chi)\subseteq\Q_{3p}$. Assume first that $p\not\in\{2,3\}$. Then $\Q_{3p}$ contains three quadratic extensions and at most one cubic extension, and one of these quadratic extensions is $\Q_{3}$. Since we have two characters in $\Irr(G/G'')$ whose field of values is $\Q_{3}$, there is no character in $\Irr(G|G'')$ whose field of values is $\Q_{3}$. Thus, $\Irr(G|G'')$ contains at most $2\cdot 2+3\cdot 1=7$ non-rational characters. Hence, $k(G)\leq 7+4+3=14$. Since $\Q_{3p}$ contains a cubic extension if and only if $p\equiv 1 \pmod 3$, we deduce that if $p\equiv -1 \pmod 3$, then $k(G)\leq 11<12$. Therefore, we deduce that $p\equiv 1 \pmod 3$. Now, reasoning as in the case $G/G''=\mathsf{D}_{18}$, we may assume that $G''$ contains at most $3$ non-trivial $G$-conjugacy classes. Therefore, $|G''|$ is a power of a prime $p$ such that $p\equiv 1 \pmod 3$ and $|G''|-1$ must be the sum of at most three divisors of $|G/G''|=21$. It follows that $|G''|\in \{7,43\}$. Applying that $(|G':G''|,p)=1$, we have that $|G''|=43$ and hence $|G|=21\cdot 43=903$. However, by Lemma \ref{casos}, there is no group of order $903$ with $f(G)\leq 3$. Reasoning similarly, we can deduce that if $p=2$, then $k(G)\leq 7<12$ and hence we have a contradiction. Finally, assume that $p=3$. In this case $\Q_{3p}=\Q_{9}$ contains only one quadratic extension and one cubic extension. Since the unique quadratic extension of $\Q_9$ is $\Q_3$, we deduce that $\Irr(G|G'')$ contains at most $3$ non-rational characters. Thus, $k(G)\leq 3+4+3=10<12$ and hence we have a contradiction. \item [(iii)] Case $G/G''\in \{\mathsf{F}_{20},\mathsf{F}_{52}\}$: Then $G/G''=\mathsf{F}_{4q}$ for $q \in \{5,13\}$. Thus, applying Lemma \ref{restocasos}, we have that $\Q(\chi)\subseteq \Q_{4p}$ for every $\chi \in \Irr(G|G'')$.
Reasoning as in the case $G/G''=\mathsf{F}_{21}$, we have that if $p\not=2$, then $\Irr(G|G'')$ contains at most $7$ non-rational characters and if $p=2$, then $\Irr(G|G'')$ cannot contain non-rational characters. Therefore, if $p=2$, then $k(G)\leq 8<12$, which is a contradiction. Thus, we may assume that $p$ is an odd prime. Before studying the remaining cases, we claim that $|G''|\equiv 1 \pmod q$. Since $(|G:G''|,p)=1$, applying the Schur-Zassenhaus Theorem, we have that $G''$ is complemented in $G$ by $U\ltimes V$, where $U$ is cyclic of order $4$ and $V$ is cyclic of order $q$. We claim that $V$ cannot fix any non-trivial element of $G''$. We have that the action of $V$ on $G''$ is coprime. Thus, by Theorem 4.34 of \cite{Isaacs}, $G''=[G'',V]\times C_{G''}(V)$. Since $C_{G''}(V)\leq G''$ is normal in $G$ and $G''$ is minimal normal, we have that either $C_{G''}(V)=1$ or $C_{G''}(V)=G''$. If $C_{G''}(V)=G''$, then $G'$ is abelian, which is a contradiction. Thus, $C_{G''}(V)=1$ and hence $V$ does not fix any non-trivial element in $G''$. Therefore, $|G''|\equiv 1 \pmod q$ as we claimed. \begin{itemize} \item [a)] Case $G/G''=\mathsf{F}_{20}$: It is easy to see that $k(G)\leq 12$. If, moreover, $p\not\equiv 1 \pmod 3$, then $k(G)\leq 9$, which is impossible. Thus, as in the case $G/G''=\mathsf{A}_{4}$, we may assume that $p\equiv 1 \pmod 3$ and that $G''$ possesses a unique non-trivial $G$-conjugacy class. Therefore, $|G''|\leq 20+1=21$, $|G''|\equiv 1 \pmod 5$ and it is a power of a prime $p$ with $p\equiv 1 \pmod 3$. We see that there is no integer with the required properties, and hence we have a contradiction. \item [b)] Case $G/G''=\mathsf{F}_{52}$: It is easy to see that $k(G)\leq 15$. As in the case $G/G''=\mathsf{D}_{18}$, we may assume that $G''$ contains at most $4$ non-trivial $G$-conjugacy classes. Therefore, $|G''|\leq 4\cdot 52+1=209$. It follows that $|G''|\equiv 1 \pmod {13}$, $|G''|\leq 209$ and it is a power of a prime.
Thus, $|G''|\in \{27,53,79,131,157\}$ and hence $|G|\in \{1404,2756,4108,6812,8164\}$, which contradicts Lemma \ref{casos}. \end{itemize} \end{itemize} \end{itemize} We conclude that $G''=1$ and the result follows. \end{proof} \end{thm} Now, Theorem A follows from Theorems \ref{nonsolvable} and \ref{solvable}. \section{Further questions} We close this paper with several possible lines of future research that, motivated by this work, have been suggested by A. Moret\'o. Theorem A shows that, as one could expect, $f(G)$ is usually much smaller than $k(G)$. It is perhaps surprising, therefore, that there could exist bounds for $f(G)$ that are asymptotically of almost the same order of magnitude as known bounds for $k(G)$. By Brauer's bound \cite{Brauer}, we know that the number of conjugacy classes of a finite group $G$ is $k(G)\geq\log_2\log_2|G|$. Theorem A shows that this bound does not hold if we replace $k(G)$ by $f(G)$ when $f(G)=2$ or $3$, but barely: for instance, $f(\Sz(8))=3$, while $\log_2\log_2|\Sz(8)|=\log_2\log_2 29120\approx 3.89$. It could be true that $f(G)$ is at least the integer part of $\log_2\log_2|G|$. Brauer's Problem 3 \cite{Brauer} asks for substantially better bounds for $k(G)$ in terms of $|G|$. Such a bound was obtained by Pyber \cite{Pyber}, whose bound was later improved in \cite{Keller} and \cite{BMT}. Currently the best known bound is the one given in \cite{BMT}, which asserts that for every $\epsilon>0$, there exists $\delta>0$ such that $k(G)>\frac{\delta \log_2|G|}{(\log_2\log_2|G|)^{3+\epsilon}}$ for all finite groups $G$. Nowadays, the main open problem in this field is whether there is a logarithmic bound. More precisely, Bertram \cite{Bertram} asked whether $k(G)>\log_3|G|$. Theorem A shows that this bound does not hold if we replace $k(G)$ by $f(G)$, but it is not clear whether a logarithmic lower bound for $f(G)$ in terms of $|G|$ could exist. Another interesting problem on the number of conjugacy classes of a finite group was proposed by Bertram in \cite{Bertram}.
He asked whether $k(G)\geq\omega(|G|)$, where, if $n=p_1^{a_1}\dots p_t^{a_t}$ is the decomposition of the positive integer $n$ as a product of powers of pairwise different primes, $\omega(n)=a_1+\cdots+ a_t$. Theorem A also shows that this definitely does not hold if we replace $k(G)$ by $f(G)$. However, it could be true that $f(G)$ is at least the chief length of $G$ (i.e., the number of chief factors in a chief series). \renewcommand{\abstractname}{Acknowledgements} \begin{abstract} This work will be part of the author's PhD thesis, written under the supervision of Alexander Moret\'o, whom the author would like to thank. \end{abstract} \begin{thebibliography}{99} \bibitem{BMT} B. Baumeister, A. Maróti, H. Tong-Viet, \rm{Finite groups have more conjugacy classes}. \textit{Forum Math.} \textbf{29} (2017), 259--275. \bibitem{Bertram} E. A. Bertram, \rm{Lower bounds for the number of conjugacy classes in finite groups}. Ischia Group Theory 2004, \textit{Contemp. Math.} \textbf{402} (2006), 95--117. \bibitem{Brauer} R. Brauer, \textit{Representations of Finite Groups, volume I of Lectures on Modern Mathematics}. Wiley, New York, 1963. \bibitem{Dornhoff} L. Dornhoff, \textit{Group Representation Theory. Part A: Ordinary representation theory}. Marcel Dekker, Inc., New York, 1971. Pure and Applied Mathematics, 7. \bibitem{gap} The GAP Group, \rm{GAP -- Groups, Algorithms, and Programming, version 4.8.6, 2016}, \textbf{http://www.gap-system.org}. \bibitem{Geck} M. Geck, \textit{An Introduction to Algebraic Geometry and Algebraic Groups, volume 10 of Oxford Graduate Texts in Mathematics}. Oxford University Press, Oxford, 2003. \bibitem{Keller} T. M. Keller, \rm{Finite groups have even more conjugacy classes}. \textit{Israel J. Math.} \textbf{181} (2011), 433--444. \bibitem{Isaacscar} I. M. Isaacs, \textit{Character Theory of Finite Groups}. Dover Publications, New York, 1976. \bibitem{Isaacs} I. M. Isaacs, \textit{Finite Group Theory}. Amer. Math. Soc. Providence, Rhode Island, 2008.
\bibitem{Alex} A. Moretó, \rm{Multiplicities of fields of values of irreducible characters of finite groups}. \textit{Proc. Amer. Math. Soc.} \textbf{149} (2021), 4109--4116. \bibitem{Navarro-Tiep} G. Navarro, P. H. Tiep, \rm{Rational irreducible characters and rational conjugacy classes in finite groups}. \textit{Trans. Amer. Math. Soc.} \textbf{360} (2008), 2443--2465. \bibitem{Auto} H. Nguyen, A. Schaeffer Fry, H. Tong-Viet, C. Vinroot, \rm{On the number of irreducible real-valued characters of a finite group}. \textit{J. Algebra} \textbf{555} (2020), 275--288. \bibitem{Pyber} L. Pyber, \rm{Finite groups have many conjugacy classes}. \textit{J. Lond. Math. Soc. (2)} \textbf{46} (1992), 239--249. \bibitem{Rossi} D. Rossi, \textit{Fields of Values in Finite Groups: Characters and Conjugacy Classes}. PhD Thesis, University of Arizona, Tucson, 2018. \bibitem{VeraLopez} A. Vera López, J. Vera López, \rm{Classification of finite groups according to the number of conjugacy classes II}. \textit{Israel J. Math.} \textbf{56} (1986), 188--221. \end{thebibliography} \end{document}
2205.02467v1
http://arxiv.org/abs/2205.02467v1
A quantitative variational analysis of the staircasing phenomenon for a second order regularization of the Perona-Malik functional
\documentclass[titlepage,12pt]{article} \usepackage{hyperref} \usepackage[usenames,dvipsnames]{pstricks} \usepackage{pst-plot} \usepackage{amssymb,amsthm,amsmath} \usepackage[a4paper]{geometry} \usepackage{datetime2} \usepackage[utf8]{inputenc} \usepackage[italian,english]{babel} \usepackage{mathrsfs} \usepackage{bbm} \selectlanguage{english} \geometry{text={15.7 cm, 23 cm},centering,includefoot} \date{} \renewcommand{\theequation}{\thesection.\arabic{equation}} \newcommand{\R}{\mathbb{R}} \newcommand{\re}{\mathbb{R}} \newcommand{\z}{\mathbb{Z}} \newcommand{\n}{\mathbb{N}} \newcommand{\ep}{\varepsilon} \newcommand{\AM}{\operatorname{AM}} \newcommand{\sPM}{\operatorname{PM}} \newcommand{\sPMF}{\operatorname{PMF}} \newcommand{\sRPMF}{\operatorname{RPMF}} \newcommand{\sRPM}{\operatorname{RPM}} \newcommand{\PMF}{\operatorname{\mathbb{PMF}}} \newcommand{\RPM}{\operatorname{\mathbb{RPM}}} \newcommand{\RPMF}{\operatorname{\mathbb{RPMF}}} \newcommand{\JF}{\operatorname{\mathbb{JF}}} \newcommand{\J}{\operatorname{\mathbb{J}}} \newcommand{\PJ}{P\!J} \newcommand{\argmin}{\operatorname{argmin}} \newcommand{\sign}{\operatorname{sign}} \newcommand{\uep}{u_{\ep}} \newcommand{\glim}{\Gamma\mbox{--}\lim} \newcommand{\auto}{\mathrel{\substack{\vspace{-0.1ex}\\\displaystyle\leadsto\\[-0.9em] \displaystyle\leadsto}}} \newcommand{\loc}{_{\mathrm{loc}}} \newcommand{\Rloc}{_{\mathrm{R-loc}}} \newcommand{\Lloc}{_{\text{\textrm{L-loc}}}} \newcommand{\obl}{\mathrm{Obl}} \newcommand{\vrt}{\mathrm{Vert}} \newcommand{\orz}{\mathrm{Hor}} \newcommand{\omep}{\omega(\ep)} \newcommand{\omepn}{\omega(\ep_{n})} \newcommand{\X}{\mathbb{X}} \newtheorem{thm}{Theorem}[section] \newtheorem{thmbibl}{Theorem} \newtheorem{propbibl}[thmbibl]{Proposition} \newtheorem{rmkbibl}[thmbibl]{Remark} \newtheorem{rmk}[thm]{Remark} \newtheorem{prop}[thm]{Proposition} \newtheorem{defn}[thm]{Definition} \newtheorem{cor}[thm]{Corollary} \newtheorem{ex}[thm]{Example} \newtheorem{lemma}[thm]{Lemma} \newtheorem{open}{Open 
problem} \newtheorem{question}[thmbibl]{Question} \renewcommand{\thethmbibl}{\Alph{thmbibl}} \title{A quantitative variational analysis of the staircasing phenomenon for a second order regularization of the Perona-Malik functional} \author{ Massimo Gobbino\vspace{1ex}\\ {\normalsize Università degli Studi di Pisa} \\ {\normalsize Dipartimento di Matematica}\\ {\normalsize PISA (Italy)}\\ {\normalsize e-mail: \texttt{[email protected]}} \and Nicola Picenni\vspace{1ex}\\ {\normalsize Scuola Normale Superiore} \\ {\normalsize PISA (Italy)}\\ {\normalsize e-mail: \texttt{[email protected]}} } \begin{document} \maketitle \begin{abstract} We consider the Perona-Malik functional in dimension one, namely an integral functional whose Lagrangian is convex-concave with respect to the derivative, with a convexification that is identically zero. We approximate and regularize the functional by adding a term that depends on second order derivatives multiplied by a small coefficient. We investigate the asymptotic behavior of minima and minimizers as this small parameter vanishes. In particular, we show that minimizers exhibit the so-called staircasing phenomenon, namely they develop a sort of microstructure that looks like a piecewise constant function at a suitable scale. Our analysis relies on Gamma-convergence results for a rescaled functional, blow-up techniques, and a characterization of local minimizers for the limit problem. This approach can be extended to more general models. \vspace{6ex} \noindent{\bf Mathematics Subject Classification 2020 (MSC2020):} 49J45, 35B25, 49Q20. \vspace{6ex} \noindent{\bf Key words:} Perona-Malik functional, singular perturbation, higher order regularization, Gamma-convergence, blow-up, piecewise constant functions, local minimizers, Young measures, varifolds. 
\end{abstract} \section{Introduction} Let us consider the minimum problem for the one-dimensional functional \begin{equation} \sPMF(u):=\int_{0}^{1}\log\left(1+u'(x)^{2}\right)\,dx+ \beta\int_{0}^{1}(u(x)-f(x))^{2}\,dx, \label{defn:PMF} \end{equation} where $\beta>0$ is a real number, and $f\in L^{2}((0,1))$ is a given function that we call \emph{forcing term}. The second integral is a sort of \emph{fidelity term}, tuned by the parameter $\beta$, that penalizes the distance between $u$ and the forcing term $f$. The principal part of (\ref{defn:PMF}) is the functional \begin{equation} \sPM(u):=\int_{0}^{1}\log\left(1+u'(x)^{2}\right)\,dx, \label{defn:PM} \end{equation} whose Lagrangian $\phi(p):=\log(1+p^{2})$ is not convex. To make matters worse, the convex envelope of $\phi(p)$ is identically~0, and this implies that the relaxation of (\ref{defn:PM}) is identically~0 in every reasonable functional space. As a consequence, it is well known that \begin{equation} \inf\left\{\sPMF(u):u\in C^{1}([0,1])\right\}=0 \qquad \forall f\in L^{2}((0,1)). \nonumber \end{equation} We refer to (\ref{defn:PM}) as the \emph{Perona-Malik functional}, because its formal gradient-flow is (up to a factor~2) the celebrated forward-backward parabolic equation \begin{equation} u_{t}=\left(\frac{u_{x}}{1+u_{x}^{2}}\right)_{x}=\frac{1-u_{x}^{2}}{(1+u_{x}^{2})^{2}}\,u_{xx}, \label{defn:PM-eqn} \end{equation} introduced by P.~Perona and J.~Malik~\cite{PeronaMalik}. Numerical experiments seem to suggest that this diffusion process has good stability properties, but at present there is no rigorous theory that explains why a model that is ill-posed from the analytical point of view exhibits this unexpected stability, known in the literature as the \emph{Perona-Malik paradox}~\cite{Kichenassami}.
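For the reader's convenience, we recall the elementary computation behind the vanishing of the convex envelope. On the one hand, the convex envelope $\phi^{**}(p)$ is nonnegative, because the null function is a convex minorant of $\phi(p)$. On the other hand, writing $p=(1-\lambda)\cdot 0+\lambda\cdot(p/\lambda)$ for every $\lambda\in(0,1)$, we obtain that
\begin{equation}
\phi^{**}(p)\leq
(1-\lambda)\,\phi(0)+\lambda\,\phi\left(\frac{p}{\lambda}\right)=
\lambda\log\left(1+\frac{p^{2}}{\lambda^{2}}\right)\to 0
\qquad\text{as }\lambda\to 0^{+},
\nonumber
\end{equation}
because of the sublinear growth of the logarithm, and hence $\phi^{**}(p)=0$ for every $p\in\re$.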
The same qualitative analysis applies when the principal part is of the form \begin{equation} \sPM(\phi,u):=\int_{0}^{1}\phi(u'(x))\,dx, \nonumber \end{equation} provided that $\phi(p)$ is convex in a neighborhood of the origin, concave when $|p|$ is large, and with convex envelope identically equal to~0. Some notable examples are \begin{equation} \phi(p)=\arctan(p^{2}) \qquad\mbox{or}\qquad \phi(p)=(1+p^{2})^{\alpha} \quad\text{with }\alpha\in(0,1/2), \label{defn:phi-1} \end{equation} or even more generally \begin{equation} \phi(p)=(1+|p|^{\gamma})^{\alpha} \quad\text{with }\gamma>1\text{ and }\alpha\in(0,1/\gamma). \label{defn:phi-2} \end{equation} \paragraph{\textmd{\textit{Singular perturbation of the Perona-Malik functional}}} Several approximating models have been introduced in order to mitigate the ill-posed nature of (\ref{defn:PM-eqn}). These approximating models are obtained via convolution~\cite{1992-SIAM-Lions}, space discretization~\cite{2008-JDE-BNPT,2001-CPAM-Esedoglu,GG:grad-est}, time delay~\cite{2007-Amann}, fractional derivatives~\cite{2009-JDE-Guidotti}, fourth order regularization~\cite{1996-Duke-DeGiorgi,2008-TAMS-BF,2019-SIAM-BerGiaTes}, addition of a dissipative term (see~\cite{2020-JDE-BerSmaTes} and the references quoted therein). For a more complete list of references on the evolution problem (\ref{defn:PM-eqn}) we refer to the recent papers~\cite{2018-Poincare-KimYan,2019-SIAM-BerGiaTes,2020-JDE-BerSmaTes} and to the references quoted therein. 
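All these variants share the sublinear growth at infinity of the logarithm. For instance, in the case of (\ref{defn:phi-2}) the condition $\alpha\in(0,1/\gamma)$ guarantees that
\begin{equation}
\frac{\phi(p)}{|p|}=\frac{(1+|p|^{\gamma})^{\alpha}}{|p|}\sim|p|^{\alpha\gamma-1}\to 0
\qquad\text{as }|p|\to+\infty,
\nonumber
\end{equation}
and this sublinearity is exactly what prevents any convex minorant of $\phi(p)$ from having a nonzero slope.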
In this paper we limit ourselves to the variational background, and we consider the functional whose formal gradient flow is the fourth order regularization of (\ref{defn:PM-eqn}), namely the functional (see~\cite{1996-Duke-DeGiorgi,ABG,2006-DCDS-BelFusGug,2008-TAMS-BF,2014-M3AS-BelChaGol}) \begin{equation} \sPMF_{\ep}(u):= \int_{0}^{1}\left\{\ep^{10}|\log\ep|^{2}u''(x)^{2}+ \log\left(1+u'(x)^{2}\right)+ \beta(u(x)-f(x))^{2}\right\}dx, \label{defn:SPM-intro} \end{equation} where the bizarre form of the $\ep$-dependent coefficient is just aimed at preventing the appearance of decay rates defined in an implicit way in the sequel of the paper. For every choice of $\ep\in(0,1)$ and $\beta>0$ the model is well-posed, in the sense that the minimum problem for (\ref{defn:SPM-intro}) admits at least one minimizer of class $C^{2}$ for every choice of the forcing term $f\in L^{2}((0,1))$. Here we investigate the asymptotic behavior of minima and minimizers as $\ep\to 0^{+}$. Before describing our results, it is useful to open a parenthesis on a related problem that has already been studied in the literature. \paragraph{\textmd{\textit{The Alberti-M\"uller model}}} Let us consider the functional \begin{equation} \AM_{\ep}(u):=\int_{0}^{1}\left\{\ep^{2}u''(x)^{2}+(u'(x)^{2}-1)^{2}+\beta(x) u(x)^{2}\right\}dx, \label{defn:AM} \end{equation} where $\beta\in L^{\infty}((0,1))$ is positive for almost every $x\in(0,1)$. The minimizers of (\ref{defn:AM}) with periodic boundary conditions were studied by G.~Alberti and S.~M\"uller in~\cite{2001-CPAM-AlbertiMuller} (see also~\cite{1993-CalcVar-Muller}). In this model the forcing term $f(x)$ is identically~0, and the dependence on first order derivatives is described by the double-well potential $\phi(p):=(p^{2}-1)^{2}$. 
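Anticipating the discussion below, minimizers of (\ref{defn:AM}) develop a sawtooth microstructure with slopes $\pm 1$, and a heuristic (back-of-the-envelope, and by no means rigorous) scaling argument already suggests the length scale of this microstructure. A sawtooth with slopes $\pm 1$ and period of order $\ell$ has amplitude of order $\ell$, so the fidelity term contributes a quantity of order $\ell^{2}$, while every corner, in which $u'$ makes a transition between $-1$ and $+1$, pays a price of order $\ep$ in the first two terms of (\ref{defn:AM}), and there are roughly $1/\ell$ corners per unit length. Optimizing
\begin{equation}
\min_{\ell>0}\left(\frac{\ep}{\ell}+\ell^{2}\right)\sim\ep^{2/3},
\qquad\text{attained for }\ell\sim\ep^{1/3},
\nonumber
\end{equation}
we find exactly the scale $\omep=\ep^{1/3}$ that appears in the rigorous analysis recalled below.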
As in (\ref{defn:SPM-intro}) the function $\phi(p)$ is non-convex, but in this case its convex envelope vanishes just for $|p|\leq 1$, while it coincides with $\phi(p)$ elsewhere, and in particular it is coercive at infinity. From the heuristic point of view, minimizers to (\ref{defn:AM}) would like to be identically~0, but with constant derivative equal to $\pm 1$. Of course this is not possible if we think of $u(x)$ and $u'(x)$ as functions, but it becomes possible if we consider $u(x)$ as a function whose ``derivative'' $u'(x)$ is a Young measure. More formally, given a family $\{u_{\ep}(x)\}$ of minimizers to (\ref{defn:AM}), one can show that $u_{\ep}(x)\to 0$ uniformly, $u_{\ep}'(x)\rightharpoonup 0$ weakly in $L^{4}((0,1))$, and more precisely $u_{\ep}'(x)$ converges to the Young measure that in every point $x\in(0,1)$ assumes the two values $\pm 1$ with probability $1/2$. The next step consists in analyzing the asymptotic profile of minimizers. The intuitive idea is that minimizers develop a \emph{microstructure} at some scale $\omep$, and this microstructure resembles a triangular wave (sawtooth function). In other words, one expects minimizers to be of the form \begin{equation} u_{\ep}(x)\sim\omep\varphi\left(\frac{x}{\omep}+b(\ep)\right), \label{th:AM-expansion} \end{equation} where \begin{itemize} \item the function $\varphi$ that describes the asymptotic profile of minimizers is a triangular wave with slopes $\pm 1$, for example the function defined by $\varphi(x):=|x|-1$ for every $x\in[-2,2]$, and then extended by periodicity to the whole real line, \item $\omep$ is a suitable scaling factor that vanishes as $\ep\to 0^{+}$ and is proportional to the asymptotic ``period'' of minimizers (which however are not necessarily periodic functions ``in large''), \item $b(\ep)$ is a sort of phase parameter, that can be assumed to be less than the period of $\varphi$. 
\end{itemize} We point out that the limit of $u_{\ep}'(x)$ as a Young measure carries no information concerning the asymptotic behavior of $\omep$, and actually it does not even imply the existence of any form of asymptotic period or asymptotic profile. The first big issue is giving a rigorous meaning to an asymptotic expansion of the form (\ref{th:AM-expansion}). In~\cite{2001-CPAM-AlbertiMuller} the formalization relies on the notion of Young measure with values in compact metric spaces. In a nutshell, starting from every minimizer $u_{\ep}(x)$, the authors consider the function that associates to every $x\in(0,1)$ the rescaled function \begin{equation} y\mapsto\frac{u_{\ep}(x+\omep y)}{\omep}, \nonumber \end{equation} where $\omep=\ep^{1/3}$. This new function is interpreted as a Young measure on the interval $(0,1)$ with values in $L^{\infty}(\re)$, which is a \emph{compact} metric space with respect to the distance according to which $g_{n}$ converges to $g_{\infty}$ if and only if $\arctan(g_{n})$ converges to $\arctan(g_{\infty})$ with respect to the weak* convergence in $L^{\infty}(\re)$. The result is that this family of Young measures converges (in the sense of Young measures with values in a compact metric space) to a limit Young measure that at almost every point is concentrated on the translations of the triangular wave. This statement is a rigorous, although rather abstract and technical, formulation of expansion (\ref{th:AM-expansion}). \paragraph{\textmd{\textit{From Young measures to varifolds}}} There are some notable differences between our model and (\ref{defn:AM}). The first one is that in our case the trivial forcing term $f(x)\equiv 0$ would lead to the trivial solution $u_{\ep}(x)\equiv 0$ for every $\ep\in(0,1)$. Therefore, here a nontrivial forcing term is required if we want nontrivial solutions. The second difference lies in the growth of the convex envelope of $\phi(p)$.
In the case of (\ref{defn:AM}) the convex envelope grows at infinity as $p^{4}$, and this guarantees a uniform bound in $L^{4}((0,1))$ for the derivatives of all sequences with bounded energy. In our case the convex envelope vanishes identically, and therefore there is no hope of obtaining bounds on derivatives in terms of bounds on the energies. The third, and more relevant, difference lies in the construction of the convex envelope. In the case of (\ref{defn:AM}) the convex envelope of $\phi$ vanishes in the interval $[-1,1]$ because every $p$ in this interval can be written as a convex combination of $\pm 1$, and $\phi(1)=\phi(-1)=0$. This is the ultimate reason why the derivatives of minimizers tend to stay close to the two values $\pm 1$ when $\ep$ is small enough. In our case the convex envelope of $\phi$ vanishes identically on the whole real line, but no real number $p$ can be written as the convex combination of two distinct points where $\phi$ vanishes. Roughly speaking, the vanishing of the convex envelope is achieved only in the limit, in some sense by writing every real number $p$ as a convex combination of 0 and $\pm\infty$, depending on the sign of $p$. This implies that minimizers $\uep(x)$ tend to assume a staircase-like shape, with regions where they are ``almost horizontal'' and regions where they are ``almost vertical'' (as described in the left and central panels of Figure~\ref{figure:multi-scale}). From the technical point of view, this means that there is no hope that the family $\{u_{\ep}'(x)\}$ admits a limit in the sense of Young measures. This is the point where varifolds come into play, because varifolds allow ``functions'' whose graph has at every point a mix of horizontal and vertical ``tangent'' lines.
\begin{itemize} \item The first result (Theorem~\ref{thm:asympt-min}) concerns the asymptotic behavior of minima. We prove that the minimum $m_{\ep}$ of (\ref{defn:SPM-intro}) over $H^{2}((0,1))$ satisfies $m_{\ep}\sim c_{0}\ep^{2}|\log\ep|$, where $c_{0}$ is proportional to the integral of $|f'(x)|^{4/5}$. \item The second result (Theorem~\ref{thm:BU}) concerns the asymptotic behavior of minimizers $u_{\ep}(x)$. To this end, for every family $x_{\ep}\to x_{0}\in(0,1)$ we consider the families of functions \begin{equation} y\mapsto\frac{u_{\ep}(x_{\ep}+\omep y)-f(x_{\ep})}{\omep} \qquad\text{and}\qquad y\mapsto\frac{u_{\ep}(x_{\ep}+\omep y)-u_{\ep}(x_{\ep})}{\omep}, \label{defn:BU-uep} \end{equation} which correspond to the intuitive idea of zooming the graph of a minimizer $u_{\ep}(x)$ in a neighborhood of $(x_{\ep},f(x_{\ep}))$ and $(x_{\ep},u_{\ep}(x_{\ep}))$ at scale $\omep$. We show that, when $\omep=\ep|\log\ep|^{1/2}$, these functions converge (up to subsequences) in a rather strong sense (strict convergence of bounded variation functions, see Definition~\ref{defn:BV-sc}) to a piecewise constant function, a sort of staircase with steps whose height and length depend on $f'(x_{0})$. This result provides a quantitative description of the staircase-like microstructure of minimizers, with a notion of convergence that is much stronger than weak* convergence in $L^{\infty}(\re)$, and without the technical machinery of Young measures with values in metric spaces (see Remark~\ref{rmk:AM}). \item The third result (Theorem~\ref{thm:varifold}) shows that $u_{\ep}(x)\to f(x)$ first in the sense of uniform convergence, then in the sense of strict convergence of bounded variation functions, and finally in the sense of varifolds, provided that we consider the graph of $f(x)$ as a varifold with a suitable density and a suitable combination of horizontal and vertical tangent lines in every point. 
\end{itemize} The three results described above are only the first order analysis of what is actually a \emph{multi-scale problem}. In a companion paper (in preparation) we plan to investigate higher-resolution zooms of minimizers (from the center to the right of Figure~\ref{figure:multi-scale}), in order to reveal the exact structure of the horizontal and vertical parts of each step of the staircase. \begin{figure}[ht] \begin{center} \hfill \psset{unit=8.5ex} \pspicture(0,0)(4,3) \psplot[linecolor=orange,linewidth=1.5\pslinewidth]{0}{4}{x 1.5 mul x 3 exp 14 div sub} \multido{\n=0.05+0.10}{40}{ \psline(!\n \space 0.05 sub \n \space 1.5 mul \n \space 3 exp 14 div sub) (!\n \space 0.05 add \n \space 1.5 mul \n \space 3 exp 14 div sub) } \multido{\n=0.05+0.10}{39}{ \psline (!\n \space 0.05 add \n \space 1.5 mul \n \space 3 exp 14 div sub) (!\n \space 0.05 add \n \space 0.1 add 1.5 mul \n \space 0.1 add 3 exp 14 div sub) } \psellipse[linecolor=cyan](1,1.35)(0.4,0.6) \endpspicture \hfill \psset{unit=12ex} \pspicture(-0.2,-0.3)(1.3,1.7) \psplot[linecolor=orange,linewidth=1.7\pslinewidth]{-0.1}{1.1}{x 1.7 mul x 2 exp 3 div sub} \multido{\n=-0.1+0.2}{6}{\psbezier (!\n \space \n \space 1.7 mul \n \space 2 exp 3 div sub) (!\n \space 0.19 add \n \space 1.7 mul \n \space 2 exp 3 div sub) (!\n \space 0.01 add \n \space 0.2 add 1.7 mul \n \space 0.2 add 2 exp 3 div sub) (!\n \space 0.2 add \n \space 0.2 add 1.7 mul \n \space 0.2 add 2 exp 3 div sub) } \psellipse[linecolor=cyan](0.41,0.65)(0.16,0.24) \endpspicture \hfill \psset{unit=3ex} \pspicture(-3,-0.5)(6,6) \psline(-3,0)(0,0) \psbezier(0,0)(2.5,0)(0.5,6)(3,6) \psline(3,6)(6,6) \endpspicture \hfill \mbox{} \caption{description of the multi-scale problem at three levels of resolution. Left: staircasing effect in large around the forcing term. Center: zoom of the staircase in a region. 
Right: cubic transition between two consecutive steps.} \label{figure:multi-scale} \end{center} \end{figure} \paragraph{\textmd{\textit{Overview of the technique}}} Our analysis relies on Gamma-convergence techniques. The easy remark is that minimum values of (\ref{defn:SPM-intro}) tend to~0, and minimizers tend to the forcing term in $L^{2}((0,1))$. This is because the unstable character of (\ref{defn:PM}) comes back again when $\ep\to 0^{+}$, and forces the Gamma-limit of the family of functionals (\ref{defn:SPM-intro}) to be identically~0. More delicate is finding the vanishing order of minimum values, and the fine structure of minimizers as $\ep\to 0^{+}$. The starting observation is that, if $v_{\ep}(y)$ denotes the blow-up defined in (\ref{defn:BU-uep}) on the left, with $\omep=\ep|\log\ep|^{1/2}$, then $v_{\ep}(y)$ minimizes a rescaled version of (\ref{defn:SPM-intro}), namely the functional \begin{equation} \sRPMF_{\ep}(v):= \int_{I_{\ep}}\left\{\ep^{6}(v'')^{2}+\frac{1}{\ep^{2}|\log\ep|}\log\left(1+(v')^{2}\right) +\beta\left(v-g_{\ep}\right)^{2}\right\}\,dy, \label{defn:RPMF-intro} \end{equation} where the new forcing term $g_{\ep}(y)$ is a suitable blow-up of $f(x)$, and the new integration interval $I_{\ep}$ depends on the blow-up center $x_{\ep}$, but in any case its length is equal to $\omep^{-1}$, and therefore it diverges. If $f(x)$ is of class $C^{1}$, then $g_{\ep}(y)\to f'(x_{0})y$ when $x_{\ep}\to x_{0}$. 
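The identity behind this rescaling is elementary, and we sketch it here under the natural choice $g_{\ep}(y):=\omep^{-1}\left(f(x_{\ep}+\omep y)-f(x_{\ep})\right)$, which is consistent with the convergence $g_{\ep}(y)\to f'(x_{0})y$. If $u(x)=f(x_{\ep})+\omep\, v\left((x-x_{\ep})/\omep\right)$, then $u'(x)=v'(y)$ and $u''(x)=\omep^{-1}v''(y)$ with $y:=(x-x_{\ep})/\omep$, and hence the change of variable $x=x_{\ep}+\omep y$ yields
\begin{equation}
\sPMF_{\ep}(u)=
\omep^{3}\int_{I_{\ep}}\left\{\ep^{6}(v'')^{2}+\frac{1}{\ep^{2}|\log\ep|}\log\left(1+(v')^{2}\right)
+\beta\left(v-g_{\ep}\right)^{2}\right\}\,dy=
\omep^{3}\,\sRPMF_{\ep}(v),
\nonumber
\end{equation}
because $\ep^{10}|\log\ep|^{2}\cdot\omep^{-2}\cdot\omep=\ep^{6}\omep^{3}$, $\omep=\omep^{3}/(\ep^{2}|\log\ep|)$, and the fidelity term rescales as $\omep^{2}\cdot\omep=\omep^{3}$. In particular, an asymptotic behavior of the form $m(\ep,\beta,f)\sim c_{0}\omep^{2}$ corresponds to $\sRPMF_{\ep}(v_{\ep})\sim c_{0}/\omep$.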
Moreover, the results of~\cite{ABG,2008-TAMS-BF} suggest that, if we consider the functional (\ref{defn:RPMF-intro}) restricted to a \emph{finite fixed interval} $(a,b)$, its Gamma-limit has the form \begin{equation} \alpha_{0}J_{1/2}(v)+\beta\int_{a}^{b}\left(v(y)-f'(x_{0})y\right)^{2}\,dy, \label{defn:J-intro} \end{equation} where $\alpha_{0}$ is a suitable positive constant, and the functional $J_{1/2}(v)$ is finite only if $v$ is a ``pure jump function'' (see Definition~\ref{defn:PJF}), and in this class it coincides with the sum of the square roots of the jump heights of $v$. At the end of the day, this means that the minimum problem for (\ref{defn:SPM-intro}) can be approximated, at a suitable small scale, by a family of minimum problems for functionals such as (\ref{defn:J-intro}), and these minimum problems, due to the simpler form and to the linear forcing term, can be solved almost explicitly. However, things are not so simple. A first issue is that the integration intervals $I_{\ep}$ in (\ref{defn:RPMF-intro}) invade the whole real line. This forces us to work with local minimizers (namely minimizers up to perturbations with compact support) instead of global minimizers. So we have to adapt the classical Gamma-convergence results in order to deal with local minimizers, and we also need to classify all local minimizers of (\ref{defn:J-intro}). These local minimizers are characterized in Proposition~\ref{prop:loc-min-class}, and they turn out to be staircases whose steps have length and height that depend on $f'(x_{0})$. The second issue is compactness. We observed before that a bound on $\sPMF_{\ep}(\uep)$ does not provide compactness of the family $\{\uep\}$ in any reasonable space.
After rescaling and introducing (\ref{defn:RPMF-intro}), on the one hand the good news is that a classical coerciveness result implies that a uniform bound on $\sRPMF_{\ep}(v_{\ep})$ is enough to deduce that the family $\{v_{\ep}\}$ is relatively compact, for example in $L^{2}$. On the other hand, the bad news is that an asymptotic estimate of the form $\sPMF_{\ep}(\uep)\sim c_{0}\omep^{2}$ yields only a uniform bound on $\omep\sRPMF_{\ep}(v_{\ep})$, which does not exclude that $\sRPMF_{\ep}(v_{\ep})$ might diverge as $\ep\to 0^{+}$. We overcome this difficulty by showing that a bound of this type in some interval yields a true uniform bound for $\sRPMF_{\ep}(v_{\ep})$ in a \emph{smaller interval}, and this is enough to guarantee the compactness of local minimizers. This improvement of the bound (see Proposition~\ref{prop:iteration}) requires a delicate iteration argument in a sequence of nested intervals, which probably represents the technical core of this paper. \paragraph{\textmd{\textit{Possible extensions}}} In order to keep this paper within a reasonable length, we decided to focus our presentation only on the singular perturbation (\ref{defn:SPM-intro}) of the original functional with the logarithm. Nevertheless, many parts of the theory can be extended to more general models. We discuss some possible generalizations in section~\ref{sec:extension}. \paragraph{\textmd{\textit{Dynamic consequences}}} We hope that our variational analysis could be useful in the investigation of solutions to the evolution equation (\ref{defn:PM-eqn}). Numerical experiments with different approximating schemes seem to suggest that solutions develop \emph{instantaneously} a staircase-like pattern consistent with the results of this paper. Due to its instantaneous character, this phase of the dynamics is usually referred to as ``fast time'' (see~\cite{2006-DCDS-BelFusGug}).
The connection between the dynamical and the variational behavior is hardly surprising if we think of gradient-flows as limits of discrete-time evolutions, as in De Giorgi's theory of minimizing movements. In this context the minimum problem for (\ref{defn:PMF}) with forcing term $f(x)$ equal to the initial datum $u_{0}(x)$ is just the first step in the construction of the minimizing movement. Transforming this intuition into a rigorous statement concerning the fast-time behavior of solutions to (\ref{defn:PM-eqn}) is a challenging problem. Another issue is that the staircasing effect actually seems to appear only in the so-called supercritical regions of $u_{0}(x)$, namely where $u_{0}'(x)$ falls in the concavity region of $\phi(p)$ (see the simulations in~\cite{2006-SIAM-Esedoglu,2006-DCDS-BelFusGug,2009-JMImVis-GuiLam,2009-JDE-Guidotti,2012-JDE-Guidotti}). The variational analysis cannot produce this effect, in some sense because the convexification involves a ``global procedure'', and therefore it is very likely that an explanation should rely also on dynamical arguments. \paragraph{\textmd{\textit{Structure of the paper}}} This paper is organized as follows. In section~\ref{sec:statements} we introduce the notation and we state our main results concerning the asymptotic behavior of minima and minimizers for (\ref{defn:SPM-intro}). In section~\ref{sec:gconv} we state the results that we need concerning the rescaled functionals (\ref{defn:RPMF-intro}) and their Gamma-limit. In section~\ref{sec:loc-min} we recall the notion of local minimizers, both for (\ref{defn:SPM-intro}) and for the Gamma-limit, and we state their main properties. In section~\ref{sec:strategy} we show that our main results follow from the properties of local minimizers, which we prove later in section~\ref{sec:loc-min-proof}.
Finally, in section~\ref{sec:extension} we mention some different models to which our theory can be extended, and in section~\ref{sec:open} we present some open problems. We also add an appendix with a proof of the results stated in section~\ref{sec:gconv}, some of which are apparently missing, or present with flawed proofs, in the literature. \setcounter{equation}{0} \section{Statements}\label{sec:statements} For every $\ep\in(0,1)$ let us set \begin{equation} \omep:=\ep|\log\ep|^{1/2}. \label{defn:omep} \end{equation} Let $\beta>0$ be a real number, let $\Omega\subseteq\re$ be an open set, and let $f\in L^{2}(\Omega)$ be a function that we call forcing term. In order to emphasize the dependence on all the parameters, we write (\ref{defn:SPM-intro}) in the form \begin{equation} \PMF_{\ep}(\beta,f,\Omega,u):= \int_{\Omega}\left\{\ep^{6}\omep^{4}u''(x)^{2}+ \log\left(1+u'(x)^{2}\right)+ \beta(u(x)-f(x))^{2}\right\}dx. \label{defn:SPM} \end{equation} The first result that we state concerns existence and regularity of minimizers, and their convergence to the fidelity term in $L^{2}((0,1))$. We omit the proof because it is a standard application of the direct method in the calculus of variations, and of the fact that the convex envelope of the function $p\mapsto\log(1+p^{2})$ is identically~0. \begin{prop}[Existence and regularity of minimizers]\label{prop:basic} Let $\omep$ be defined by (\ref{defn:omep}), and let $\PMF_{\ep}(\beta,f,(0,1),u)$ be defined by (\ref{defn:SPM}), where $\ep\in(0,1)$ and $\beta>0$ are two real numbers, and $f\in L^{2}((0,1))$ is a given function. Then the following facts hold true. \begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \item \emph{(Existence)} There exists \begin{equation} m(\ep,\beta,f):=\min\left\{\PMF_{\ep}(\beta,f,(0,1),u):u\in H^{2}((0,1))\strut\right\}. \label{defn:DMnf} \end{equation} \item \emph{(Regularity)} Every minimizer belongs to $H^{4}((0,1))$, and in particular to $C^{2}([0,1])$. 
\item \emph{(Minimum values vanish in the limit)} It turns out that $m(\ep,\beta,f)\to 0$ as $\ep\to 0^{+}$. \item \emph{(Convergence of minimizers to the fidelity term)} If $\{u_{\ep}\}$ is any family of minimizers for (\ref{defn:DMnf}), then $u_{\ep}(x)\to f(x)$ in $L^{2}((0,1))$ as $\ep\to 0^{+}$. \end{enumerate} \end{prop} In the sequel we assume that the forcing term $f(x)$ belongs to $C^{1}([0,1])$. Under this regularity assumption, our first result concerns the asymptotic behavior of minima. \begin{thm}[Asymptotic behavior of minima]\label{thm:asympt-min} Let $\omep$ be defined by (\ref{defn:omep}), and let $\PMF_{\ep}(\beta,f,(0,1),u)$ be defined by (\ref{defn:SPM}), where $\ep\in(0,1)$ and $\beta>0$ are two real numbers, and $f\in C^{1}([0,1])$ is a given function. Then the minimum value $m(\ep,\beta,f)$ defined in (\ref{defn:DMnf}) satisfies \begin{equation} \lim_{\ep\to 0^{+}}\frac{m(\ep,\beta,f)}{\omep^{2}}= 10\left(\frac{2\beta}{27}\right)^{1/5}\int_{0}^{1}|f'(x)|^{4/5}\,dx. \label{th:asympt-min} \end{equation} \end{thm} The asymptotic behavior of $m(\ep,\beta,f)$ under weaker regularity assumptions on $f(x)$ is a largely open problem. We refer to section~\ref{sec:open} for further details. Now we investigate the asymptotic behavior of minimizers. The intuitive idea is that they tend to develop a staircase structure. In order to formalize this idea, we need several definitions. To begin with, we define some classes of ``staircase-like functions''. \begin{defn}[Canonical staircases]\label{defn:staircase} \begin{em} Let $S:\re\to\re$ be the function defined by \begin{equation*} S(x):=2\left\lfloor\frac{x+1}{2}\right\rfloor \qquad \forall x\in\re, \end{equation*} where, for every real number $\alpha$, the symbol $\lfloor\alpha\rfloor$ denotes the greatest integer less than or equal to $\alpha$. 
For every pair $(H,V)$ of real numbers, with $H>0$, we call \emph{canonical $(H,V)$-staircase} the function $S_{H,V}:\re\to\re$ defined by \begin{equation} S_{H,V}(x):=V\cdot S(x/H) \qquad \forall x\in\re. \label{defn:SC0} \end{equation} \end{em} \end{defn} Roughly speaking, the graph of $S_{H,V}(x)$ is a staircase with steps of horizontal length $2H$ and vertical height $2V$. The origin is the midpoint of the horizontal part of one of the steps. The staircase degenerates to the null function when $V=0$, independently of the value of~$H$. \begin{defn}[Translations of the canonical staircase]\label{defn:translations} \begin{em} Let $(H,V)$ be a pair of real numbers, with $H>0$, and let $S_{H,V}(x)$ be the function defined in (\ref{defn:SC0}). Let $v:\re\to\re$ be a function. \begin{itemize} \item We say that $v$ is an \emph{oblique translation} of $S_{H,V}(x)$, and we write $v\in\obl(H,V)$, if there exists a real number $\tau_{0}\in[-1,1]$ such that \begin{equation} v(x)=S_{H,V}(x-H\tau_{0})+V\tau_{0} \qquad \forall x\in\re. \nonumber \end{equation} \item We say that $v$ is a \emph{graph translation of horizontal type} of $S_{H,V}(x)$, and we write $v\in\orz(H,V)$, if there exists a real number $\tau_{0}\in[-1,1]$ such that \begin{equation} v(x)=S_{H,V}(x-H\tau_{0}) \qquad \forall x\in\re. \nonumber \end{equation} \item We say that $v$ is a \emph{graph translation of vertical type} of $S_{H,V}(x)$, and we write $v\in\vrt(H,V)$, if there exists a real number $\tau_{0}\in[-1,1]$ such that \begin{equation} v(x)=S_{H,V}(x-H)+V(1-\tau_{0}) \qquad \forall x\in\re. \nonumber \end{equation} \end{itemize} \end{em} \end{defn} \begin{rmk} \begin{em} Let us interpret translations of the canonical staircase in terms of their graphs (see Figure~\ref{figure:staircase}). \begin{itemize} \item Oblique translations correspond to taking the graph of the canonical staircase $S_{H,V}(x)$ and moving the origin along the line $Hy=Vx$, namely the line that connects the midpoints of the steps.
\item Graph translations of horizontal type correspond to moving the origin to some point in the horizontal part of some step. \item Graph translations of vertical type correspond to moving the origin to some point in the vertical part of some step. \end{itemize} We observe that graph translations of horizontal type with $\tau_{0}=\pm 1$ coincide with graph translations of vertical type with the same value of $\tau_{0}$. In those cases the origin is moved to the ``corners'' of the graph. \end{em} \end{rmk} \begin{figure}[h] \hfill \psset{unit=2ex} \pspicture(-4,-6.5)(6,8.5) \psline[linewidth=0.7\pslinewidth]{->}(-3.5,0)(5.5,0) \psline[linewidth=0.7\pslinewidth]{->}(0,-5)(0,8) \multiput(0,-3)(0,3){4}{\psline[linewidth=0.5\pslinewidth](-0.3,0)(0.3,0)} \multiput(-3,0)(1,0){9}{\psline[linewidth=0.5\pslinewidth](0,-0.3)(0,0.3)} \psclip{\psframe[linestyle=none](-3.5,-5)(5.5,8)} \psline[linecolor=cyan](-4,-6)(6,9) \multiput(-3,-3)(2,3){4}{\psline[linewidth=2\pslinewidth](0,0)(2,0)} \multiput(-3,-3)(2,3){5}{\psline[linewidth=2\pslinewidth,linestyle=dashed](0,0)(0,-3)} \endpsclip \rput[r](-0.5,3){$2V$} \rput[t](1,-0.5){$H$} \rput[t](1,-5.5){(a)} \endpspicture \hfill \pspicture(-4,-6.5)(6,8.5) \psline[linewidth=0.7\pslinewidth]{->}(-3.5,0)(5.5,0) \psline[linewidth=0.7\pslinewidth]{->}(0,-5)(0,8) \multiput(0,-3)(0,3){4}{\psline[linewidth=0.5\pslinewidth](-0.3,0)(0.3,0)} \multiput(-3,0)(1,0){9}{\psline[linewidth=0.5\pslinewidth](0,-0.3)(0,0.3)} \psline[linecolor=magenta]{<->}(-1,2)(2,6.5) \psclip{\psframe[linestyle=none](-3.55,-5)(5.5,8)} \psline[linecolor=cyan](-4,-6)(6,9) \multiput(-2.5,-2.25)(2,3){4}{\psline[linewidth=2\pslinewidth](0,0)(2,0)} \multiput(-2.5,-2.25)(2,3){5}{\psline[linewidth=2\pslinewidth,linestyle=dashed](0,0)(0,-3)} \endpsclip \rput[t](1,-5.5){(b)} \endpspicture \hfill \pspicture(-4,-6.5)(6,8.5) \psline[linewidth=0.7\pslinewidth]{->}(-3.5,0)(5.5,0) \psline[linewidth=0.7\pslinewidth]{->}(0,-5)(0,8) 
\multiput(0,-3)(0,3){4}{\psline[linewidth=0.5\pslinewidth](-0.3,0)(0.3,0)} \multiput(-3,0)(1,0){9}{\psline[linewidth=0.5\pslinewidth](0,-0.3)(0,0.3)} \psline[linecolor=magenta]{<->}(1,-1)(4.5,-1) \psclip{\psframe[linestyle=none](-3.5,-5)(5.5,8)} \psline[linecolor=cyan](-4,-6)(6,9) \multiput(-2.5,-3)(2,3){4}{\psline[linewidth=2\pslinewidth](0,0)(2,0)} \multiput(-2.5,-3)(2,3){5}{\psline[linewidth=2\pslinewidth,linestyle=dashed](0,0)(0,-3)} \endpsclip \rput[t](1,-5.5){(c)} \endpspicture \hfill \pspicture(-4,-6.5)(6,8.5) \psline[linewidth=0.7\pslinewidth]{->}(-3.5,0)(5.5,0) \psline[linewidth=0.7\pslinewidth]{->}(0,-5)(0,8) \multiput(0,-3)(0,3){4}{\psline[linewidth=0.5\pslinewidth](-0.3,0)(0.3,0)} \multiput(-3,0)(1,0){9}{\psline[linewidth=0.5\pslinewidth](0,-0.3)(0,0.3)} \psline[linecolor=magenta]{<->}(-1,1)(-1,5) \psclip{\psframe[linestyle=none](-3.5,-5)(5.5,8)} \psline[linecolor=cyan](-4,-6)(6,9) \multiput(-2,-2.25)(2,3){4}{\psline[linewidth=2\pslinewidth](0,0)(2,0)} \multiput(-2,-2.25)(2,3){6}{\psline[linewidth=2\pslinewidth,linestyle=dashed](0,0)(0,-3)} \endpsclip \rput[t](1,-5.5){(d)} \endpspicture \hfill\mbox{} \caption{(a)~Canonical staircase. (b)~Oblique translation. (c)~Graph translation of horizontal type. (d)~Graph translation of vertical type. In all translations the parameter is $\tau_{0}=1/2$.} \label{figure:staircase} \end{figure} In the sequel $BV((a,b))$ denotes the space of functions with bounded variation in the interval $(a,b)\subseteq\re$. For every function $u$ in this space, $Du$ denotes its distributional derivative, which is a signed measure, and $|Du|((a,b))$ denotes the total variation of $u$ in $(a,b)$. We call jump points of $u$ the points $x\in(a,b)$ where $u$ is not continuous. As usual, $BV\loc(\re)$ denotes the set of all functions $u:\re\to\re$ whose restriction to every interval $(a,b)$ belongs to $BV((a,b))$. The staircase-like functions we have introduced above are typical examples of elements of the space $BV\loc(\re)$. 
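As a concrete illustration of the objects just introduced (an aside of ours, not part of the paper's argument; all function names below are our own), the following Python sketch evaluates the canonical $(H,V)$-staircase of Definition~\ref{defn:staircase} and approximates its total variation on an interval. With $H=V=1$ the jumps sit at the odd integers, each of height~$2$, so the total variation on $(-4,4)$ equals~$8$.

```python
import math

def S(x):
    """Canonical unit staircase S(x) = 2*floor((x+1)/2)."""
    return 2 * math.floor((x + 1) / 2)

def S_HV(x, H, V):
    """Canonical (H,V)-staircase: steps of horizontal length 2H, height 2V."""
    return V * S(x / H)

def total_variation(f, a, b, n=100000):
    """Approximate |Df|((a,b)) for a monotone piecewise-constant f by sampling."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(n))

# Four jumps of height 2 inside (-4, 4): total variation 8.
tv = total_variation(lambda x: S_HV(x, 1.0, 1.0), -4.0, 4.0)
```

Since the staircase is nondecreasing for $V>0$, the sampled sum telescopes exactly to $S_{H,V}(b)-S_{H,V}(a)$, which is why a crude sampling suffices here.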
Our result for the asymptotic behavior of minimizers involves smooth functions converging to staircases. The strongest sense in which this convergence is possible is the so-called strict convergence. We recall here the definitions (see~\cite[Definition~3.14]{AFP}). \begin{defn}[Strict convergence in an interval]\label{defn:BV-sc} \begin{em} Let $(a,b)\subseteq\re$ be an interval. A sequence of functions $\{u_{n}\}\subseteq BV((a,b))$ converges \emph{strictly} to some $u_{\infty}\in BV((a,b))$, and we write \begin{equation} u_{n}\auto u_{\infty} \quad\text{in }BV((a,b)), \nonumber \end{equation} if \begin{equation} u_{n}\to u_{\infty} \text{ in } L^{1}((a,b)) \qquad\text{and}\qquad |Du_{n}|((a,b))\to|Du_{\infty}|((a,b)). \nonumber \end{equation} \end{em} \end{defn} \begin{defn}[Locally strict convergence on the whole real line]\label{defn:BV-lsc} \begin{em} A sequence of functions $\{u_{n}\}\subseteq BV\loc(\re)$ converges \emph{locally strictly} to some $u_{\infty}\in BV\loc(\re)$, and we write \begin{equation} u_{n}\auto u_{\infty} \quad\text{in }BV\loc(\re), \nonumber \end{equation} if $u_{n}\auto u_{\infty}$ in $BV((a,b))$ for every interval $(a,b)\subseteq\re$ whose endpoints are not jump points of the limit $u_{\infty}$. \end{em} \end{defn} Both definitions can be extended in the usual way to families depending on real parameters. For example, $u_{\ep}\auto u_{0}$ in $BV((a,b))$ as $\ep\to 0^{+}$ if and only if $u_{\ep_{n}}\auto u_{0}$ in $BV((a,b))$ for every sequence $\ep_{n}\to 0^{+}$. In the following remark we recall some consequences of strict convergence. \begin{rmk}[Consequences of strict convergence]\label{rmk:strict} \begin{em} Let us assume that $u_{n}\auto u_{\infty}$ in $BV((a,b))$. Then the following facts hold true. \begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \item It turns out that $\{u_{n}\}$ is bounded in $L^{\infty}((a,b))$, and $u_{n}\to u_{\infty}$ in $L^{p}((a,b))$ for every $p\geq 1$ (but not necessarily for $p=+\infty$).
\item \label{strict:cont} For every $x\in(a,b)$, and every sequence $x_{n}\to x$, it turns out that \begin{equation} \liminf_{y\to x}u_{\infty}(y)\leq\liminf_{n\to +\infty}u_{n}(x_{n})\leq \limsup_{n\to +\infty}u_{n}(x_{n})\leq\limsup_{y\to x}u_{\infty}(y), \nonumber \end{equation} and in particular $u_{n}(x_{n})\to u_{\infty}(x)$ whenever $u_{\infty}$ is continuous in $x$, and the convergence is uniform in $(a,b)$ if the limit $u_{\infty}$ is continuous in $(a,b)$. \item \label{strict:sub-int} It turns out that $u_{n}\auto u_{\infty}$ in $BV((c,d))$ for every interval $(c,d)\subseteq(a,b)$ whose endpoints are not jump points of the limit $u_{\infty}.$ \item\label{th:conv*+-} The positive and negative parts of the distributional derivatives converge separately in the \emph{closed} interval (see~\cite[Proposition~3.15]{AFP}). More precisely, if $D^{+}u_{n}$ and $D^{-}u_{n}$ denote, respectively, the positive and negative parts of the signed measure $Du_{n}$, and similarly for $u_{\infty}$, then for every continuous test function $\phi:[a,b]\to\re$ it turns out that \begin{equation*} \lim_{n\to +\infty}\int_{[a,b]}\phi(x)\,dD^{+}u_{n}(x)= \int_{[a,b]}\phi(x)\,dD^{+}u_{\infty}(x), \end{equation*} and similarly with $D^{-}u_{n}$ and $D^{-}u_{\infty}$.
\end{enumerate} \end{em} \end{rmk} In our second main result we consider any family $\{\uep(x)\}$ of minimizers to (\ref{defn:SPM}) and any family of points $x_{\ep}\to x_{0}\in(0,1)$, and we investigate the asymptotic behavior of the family of fake blow-ups (we call them ``fake'' because in the numerator we subtract $f(x_{\ep})$ instead of $\uep(x_{\ep})$) defined by \begin{equation} w_{\ep}(y):=\frac{\uep(x_{\ep}+\omep y)-f(x_{\ep})}{\omep} \qquad \forall y\in\left(-\frac{x_{\ep}}{\omep},\frac{1-x_{\ep}}{\omep}\right), \label{defn:wep} \end{equation} and the asymptotic behavior of the family of true blow-ups defined by \begin{equation} v_{\ep}(y):=\frac{\uep(x_{\ep}+\omep y)-\uep(x_{\ep})}{\omep}\qquad \forall y\in\left(-\frac{x_{\ep}}{\omep},\frac{1-x_{\ep}}{\omep}\right). \label{defn:vep} \end{equation} We prove that both families are relatively compact in the sense of locally strict convergence, and all their limit points are suitable staircases. \begin{thm}[Blow-up of minimizers at standard resolution]\label{thm:BU} Let $\omep$ be defined by (\ref{defn:omep}), and let $\PMF_{\ep}(\beta,f,(0,1),u)$ be defined by (\ref{defn:SPM}), where $\ep\in(0,1)$ and $\beta>0$ are two real numbers, and $f\in C^{1}([0,1])$ is a given function. Let $\{u_{\ep}\}\subseteq H^{2}((0,1))$ be a family of functions with \begin{equation} u_{\ep}\in\operatorname{argmin}\left\{\PMF_{\ep}(\beta,f,(0,1),u):u\in H^{2}((0,1))\strut\right\} \qquad \forall\ep\in(0,1), \nonumber \end{equation} and let $x_{\ep}\to x_{0}\in(0,1)$ be a family of points. Let us consider the canonical $(H,V)$-staircase with parameters \begin{equation} H:=\left(\frac{24}{\beta^{2}|f'(x_{0})|^{3}}\right)^{1/5}, \qquad\qquad V:=f'(x_{0})H, \label{defn:HV} \end{equation} with the agreement that this staircase is identically equal to~0 when $f'(x_{0})=0$. Then the following statements hold true. 
\begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \item \emph{(Compactness of fake blow-ups).} The family $\{w_{\ep}(y)\}$ defined by (\ref{defn:wep}) is relatively compact with respect to locally strict convergence, and every limit point is an oblique translation of the canonical $(H,V)$-staircase. More precisely, for every sequence $\{\ep_{n}\}\subseteq(0,1)$ with $\ep_{n}\to 0^{+}$ there exist an increasing sequence $\{n_{k}\}$ of positive integers and a function $w_{\infty}\in\obl(H,V)$ such that \begin{equation} w_{\ep_{n_{k}}}(y)\auto w_{\infty}(y) \quad\text{in }BV\loc(\re). \nonumber \end{equation} \item \emph{(Compactness of true blow-ups).} The family $\{v_{\ep}(y)\}$ defined by (\ref{defn:vep}) is relatively compact with respect to locally strict convergence, and every limit point is a graph translation of the canonical $(H,V)$-staircase. More precisely, for every sequence $\{\ep_{n}\}\subseteq(0,1)$ with $\ep_{n}\to 0^{+}$ there exist an increasing sequence $\{n_{k}\}$ of positive integers and a function $v_{\infty}\in\orz(H,V)\cup\vrt(H,V)$ such that \begin{equation} v_{\ep_{n_{k}}}(y)\auto v_{\infty}(y) \quad\text{in }BV\loc(\re). \nonumber \end{equation} \item \emph{(Realization of all possible oblique translations)}. Let $w_{0}\in\obl(H,V)$ be any oblique translation of the canonical $(H,V)$-staircase. Then there exists a family $\{x'_{\ep}\}\subseteq(0,1)$ such that \begin{equation} \limsup_{\ep\to 0^{+}}\frac{|x'_{\ep}-x_{\ep}|}{\omep}\leq H, \label{th:liminf-limsup} \end{equation} and \begin{equation} \frac{u_{\ep}(x'_{\ep}+\omep y)-f(x'_{\ep})}{\omep}\auto w_{0}(y) \quad\text{in }BV\loc(\re). \label{th:BU-conv-fake} \end{equation} \item \emph{(Realization of all possible graph translations)}. Let $v_{0}\in\orz(H,V)\cup\vrt(H,V)$ be any graph translation of the canonical $(H,V)$-staircase. 
Then there exists a family $\{x'_{\ep}\}\subseteq(0,1)$ satisfying (\ref{th:liminf-limsup}) and \begin{equation} \frac{u_{\ep}(x'_{\ep}+\omep y)-u_{\ep}(x'_{\ep})}{\omep}\auto v_{0}(y) \quad\text{in }BV\loc(\re). \nonumber \end{equation} \end{enumerate} \end{thm} Let us make some comments about Theorem~\ref{thm:BU} above. To begin with, we consider the special case of stationary points, and the special case of blow-ups at boundary points. \begin{rmk}[Stationary points of the forcing term] \begin{em} In the special case where $f'(x_{0})=0$, the canonical $(H,V)$-staircase is identically equal to~0, and it coincides with all its oblique or graph translations. In this case the whole family of fake blow-ups and the whole family of true blow-ups converge to~0, without any need of subsequences. \end{em} \end{rmk} \begin{rmk}[Internal vs boundary blow-ups]\label{rmk:BU-semi-int} \begin{em} For the sake of brevity, we stated the result in the case where $x_{0}\in(0,1)$. The very same conclusions hold true, with exactly the same proof, even if $x_{0}\in\{0,1\}$, provided that \begin{equation} \lim_{\ep\to 0^{+}}\frac{\min\{x_{\ep},1-x_{\ep}\}}{\omep}=+\infty. \label{hp:BU-internal} \end{equation} When $x_{0}\in\{0,1\}$ and (\ref{hp:BU-internal}) fails, we can again characterize the limits of fake and true blow-ups, with essentially the same techniques. This requires a one-sided variant of the canonical staircases that we discuss later in section~\ref{sec:loc-min}. We refer to Remark~\ref{rmk:boundary-proof} for further details. \end{em} \end{rmk} In the following remark we present the result from two different points of view. \begin{rmk}[Further interpretations of Theorem~\ref{thm:BU}]\label{rmk:AM} \begin{em} Let us consider any distance in the space $\X:=BV\loc(\re)$ that induces the locally strict convergence.
Given any minimizer $\uep$ to (\ref{defn:SPM}), we extend it to a continuous function $\widehat{u}_{\ep}:\re\to\re$ by setting \begin{equation} \widehat{u}_{\ep}(x):= \begin{cases} \uep(0) & \text{if }x\leq 0, \\ \uep(x) & \text{if }x\in[0,1], \\ \uep(1) & \text{if }x\geq 1. \end{cases} \nonumber \end{equation} Then we consider the function $U_{\ep}:(0,1)\to \X$ defined by \begin{equation} [U_{\ep}(x)](y):=\frac{\widehat{u}_{\ep}(x+\omep y)-f(x)}{\omep} \qquad \forall y\in\re, \nonumber \end{equation} namely the function that associates to every $x\in(0,1)$ the fake blow-up of $\widehat{u}_{\ep}$ with center in $x$ at scale $\omep$. Finally, for every $x\in(0,1)$ we consider the set $T(x)\subseteq \X$ consisting of all oblique translations of the canonical $(H,V)$-staircase with parameters given by (\ref{defn:HV}). We observe that $T(x)$ is homeomorphic to the circle $S^{1}$ if $f'(x)\neq 0$, and $T(x)$ is a singleton if $f'(x)=0$. Then ``$U_{\ep}(x)$ converges to $T(x)$'' in the following senses. \begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \item (Hausdorff convergence). For every interval $[a,b]\subseteq(0,1)$ we consider the graph of $U_{\ep}$ over $[a,b]$, namely \begin{equation} G_{\ep}(a,b):=\{(x,w):x\in[a,b],\ w=U_{\ep}(x)\}\subseteq [a,b]\times \X, \nonumber \end{equation} and the graph of the multi-function $T(x)$, namely the set \begin{equation} G_{0}(a,b):=\{(x,w):x\in[a,b],\ w\in T(x)\}\subseteq [a,b]\times \X. \nonumber \end{equation} Then it turns out that $G_{\ep}\to G_{0}$ as $\ep\to 0^{+}$ with respect to the Hausdorff distance between compact subsets of $(0,1)\times \X$. This convergence result is a direct consequence of statements~(1) and~(3) of Theorem~\ref{thm:BU}. It can also be extended to true blow-ups, just by defining $T(x)$ as the set of graph translations instead of oblique translations. \item (Young measure convergence). Let us consider $U_{\ep}$ as a Young measure $\nu_{\ep}$ in $(0,1)$ with values in $\X$. 
Let $\nu_{0}$ denote the Young measure that associates to every $x\in(0,1)$ the probability measure in $T(x)$ that is invariant by oblique translations. Then it turns out that \begin{equation} \nu_{\ep}\rightharpoonup\nu_{0} \qquad\text{as }\ep\to 0^{+}, \nonumber \end{equation} where the convergence is in the sense of $\X$-valued Young measures in $(0,1)$. We point out that the strict convergence induced by the distance in our space $\X$ is much stronger than the convergence in~\cite{2001-CPAM-AlbertiMuller}, where the distance just induces the weak* topology in a ball of $L^{\infty}$. For this reason, our space $\X$ is not compact, but we could easily recover the compactness by restricting ourselves to the subset consisting of all blow-ups of all minimizers for $\ep$ in some interval $(0,\ep_{0}]\subseteq(0,1)$, together with all their possible limits as $\ep\to 0^{+}$. This convergence in the sense of Young measures follows from the Hausdorff convergence and the invariance of $\nu_{0}$ by oblique translations, which in turn follows from an adaptation of~\cite[Proposition~3.1 and Lemma~2.7]{2001-CPAM-AlbertiMuller}. The argument is, however, analogous to the proof of statement~(3) of Theorem~\ref{thm:BU}. The idea is that any translation of the blow-up point of order $\omep$ produces a proportional oblique translation of the limit. In the case of true blow-ups we expect the limit Young measure $\nu_{0}$ to be uniformly concentrated only on graph translations of horizontal type, while graph translations of vertical type should have zero measure because they correspond to a very special choice of the blow-up points. \end{enumerate} \end{em} \end{rmk} Theorem~\ref{thm:BU} shows that minimizers develop a microstructure at scale~$\omep$. As a consequence, this microstructure does not appear if we consider blow-ups at a coarser scale, as in the following statement.
\begin{cor}[Low-resolution blow-ups of minimizers]\label{thm:spm-bu-low} Let $\ep$, $\omep$, $\beta$, $f$, $u_{\ep}$, $x_{0}$ be as in Theorem~\ref{thm:BU}. Let $\{x_{\ep}\}\subseteq(0,1)$ be a family of real numbers such that $x_{\ep}\to x_{0}$, and let $\{\alpha_{\ep}\}$ be a family of positive real numbers such that $\alpha_{\ep}\to 0$ but $\omep/\alpha_{\ep}\to 0$. Then it turns out that \begin{equation} \frac{u_{\ep}(x_{\ep}+\alpha_{\ep}y)-u_{\ep}(x_{\ep})}{\alpha_{\ep}}\auto f'(x_{0})y \qquad \text{in }BV\loc(\re), \nonumber \end{equation} and therefore also uniformly on bounded subsets of $\re$. \end{cor} The second consequence of Theorem~\ref{thm:BU} is an improvement of statement~(4) in Proposition~\ref{prop:basic}, at least in the case where the forcing term $f(x)$ is of class $C^{1}$. Indeed, in this case we obtain that minimizers converge to $f$ also in the sense of strict convergence. Moreover, as $u_{\ep}(x)$ converges to $f(x)$, its derivative $u_{\ep}'(x)$ converges to a mix of $0$ and $\pm\infty$, and this mix is ``on average'' equal to $f'(x)$. We state the result in elementary language, and then we interpret it in the formalism of varifolds. \begin{thm}[Convergence of minimizers to the forcing term]\label{thm:varifold} Let $\ep$, $\beta$, $f$, $u_{\ep}$ be as in Theorem~\ref{thm:BU}. Then the family $\{u_{\ep}(x)\}$ of minimizers converges to $f(x)$ in the following senses. \begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \item \emph{(Strict convergence).} It turns out that $u_{\ep}(x)\auto f(x)$ in $BV((0,1))$, and therefore also uniformly in $[0,1]$. \item \emph{(Convergence as varifolds).} Let us set \begin{equation} V_{0}^{+}:=\left\{x\in[0,1]:f'(x)>0\right\}, \qquad V_{0}^{-}:=\left\{x\in[0,1]:f'(x)<0\right\}.
\label{defn:V0+-} \end{equation} Then for every continuous test function \begin{equation} \phi:[0,1]\times\re\times\left[-\frac{\pi}{2},\frac{\pi}{2}\right]\to\re \nonumber \end{equation} it turns out that \begin{multline} \lim_{\ep\to 0^{+}}\int_{0}^{1} \phi\left(x,u_{\ep}(x),\arctan(u_{\ep}'(x))\strut\right)\sqrt{1+u_{\ep}'(x)^{2}}\,dx= \int_{0}^{1}\phi(x,f(x),0)\,dx \\[1ex] +\int_{V_{0}^{-}}\phi\left(x,f(x),-\frac{\pi}{2}\strut\right)|f'(x)|\,dx +\int_{V_{0}^{+}}\phi\left(x,f(x),\frac{\pi}{2}\strut\right)|f'(x)|\,dx. \label{th:varifold} \end{multline} \end{enumerate} \end{thm} The conclusion of Theorem~\ref{thm:varifold} is weaker than that of Theorem~\ref{thm:BU}, because it carries less information about the asymptotic profile of minimizers. Just for comparison, the counterpart of this result in the Alberti-M\"uller model is the convergence of $u_{\ep}'(x)$ to a Young measure that at every point assumes the two values $\pm 1$ with equal probability. Therefore, we suspect that the same conclusion might be true under weaker assumptions on the forcing term $f(x)$, and we refer to section~\ref{sec:open} for further discussion. \begin{rmk}[Varifold interpretation] \begin{em} Let us limit ourselves for a while to test functions such that $\phi(x,s,\pi/2)=\phi(x,s,-\pi/2)$ for all admissible values of $x$ and $s$. Let us observe that the function $p\mapsto\arctan(p)$ is a homeomorphism between the projective line and the interval $[-\pi/2,\pi/2]$ with the endpoints identified. Under these assumptions we can interpret the two sides of (\ref{th:varifold}) as the action of two suitable varifolds on the test function $\phi$. In the left-hand side we have the varifold associated to the graph of $u_{\ep}(x)$ in the canonical way, namely with ``weight'' (projection into $\re^{2}$) equal to the restriction of the one-dimensional Hausdorff measure to the graph of $\uep(x)$, and ``tangent component'' in the direction of the derivative $\uep'(x)$.
In the right-hand side we have a varifold such that \begin{itemize} \item the ``weight'' is the one-dimensional Hausdorff measure restricted to the graph of $f(x)$, multiplied by the density \begin{equation} \frac{1+|f'(x)|}{\sqrt{1+f'(x)^{2}}}, \nonumber \end{equation} which in turn coincides with the push-forward of the Lebesgue measure through the map $x\mapsto(x,f(x))$ multiplied by $1+|f'(x)|$, \item the ``tangent component'' at the point $(x,f(x))$ is equal to \begin{equation} \frac{1}{1+|f'(x)|}\,\delta_{(1,0)}+\frac{|f'(x)|}{1+|f'(x)|}\,\delta_{(0,1)}, \nonumber \end{equation} where $\delta_{(1,0)}$ and $\delta_{(0,1)}$ are the Dirac measures concentrated in the horizontal direction $(1,0)$ and in the vertical direction $(0,1)$, respectively. \end{itemize} It follows that statement~(2) of Theorem~\ref{thm:varifold} above is a reinforced version of varifold convergence. The reinforcement consists in considering the vertical tangent line in the direction $(0,1)$ as different from the vertical tangent line in the direction $(0,-1)$. \end{em} \end{rmk} \begin{rmk}[Minimality is essential] \begin{em} In Theorem~\ref{thm:BU} and Theorem~\ref{thm:varifold} we cannot replace the requirement that $\{\uep\}$ is a family of minimizers by weaker ``almost minimality'' conditions such as \begin{equation} \lim_{\ep\to 0^{+}}\frac{\PMF_{\ep}(\beta,f,(0,1),\uep)}{m(\ep,\beta,f)}=1. \label{hp:almost-min} \end{equation} Indeed, one can check that the cost of adding an isolated bump that simulates two opposite jumps in a neighborhood of some point is proportional to $\omep^{5/2}$ (see also (\ref{th:lim-PJ}) below). Since the denominator in (\ref{hp:almost-min}) is proportional to $\omep^{2}$, this condition does not even imply a uniform bound on the total variation of $\uep$.
\end{em} \end{rmk} \setcounter{equation}{0} \section{Functional setting and Gamma-convergence}\label{sec:gconv} This section deals with the rescaled version of the Perona-Malik functional (\ref{defn:SPM}) and its Gamma-limit. The results are somewhat classical, and rather close to similar results in the literature. On the other hand, in some cases they are not stated there in the form we need, and in other cases the proofs that we found do not work. Therefore, for the convenience of the reader we decided to include at least a sketch of the proofs in an appendix at the end of the paper. \paragraph{\textmd{\textit{Functional setting}}} Let us consider the functional \begin{equation} \RPM_{\ep}(\Omega,u):= \int_{\Omega}\left\{\ep^{6}u''(x)^{2}+\frac{1}{\omep^{2}}\log\left(1+u'(x)^{2}\right)\right\}\,dx \label{defn:ABG} \end{equation} defined for every real number $\ep\in(0,1)$, every open set $\Omega\subseteq\re$, and every function $u\in H^{2}(\Omega)$. This functional is a rescaled version of the principal part of (\ref{defn:SPM}). When we add the usual ``fidelity term'', depending on a real parameter $\beta>0$ and on a forcing term $f\in L^{2}(\Omega)$, we obtain the rescaled Perona-Malik functional with fidelity term \begin{equation} \RPMF_{\ep}(\beta,f,\Omega,u):= \RPM_{\ep}(\Omega,u)+\beta\int_{\Omega}\left(u(x)-f(x)\right)^{2}\,dx. \label{defn:ABGF} \end{equation} The Gamma-limit of (\ref{defn:ABG}) as $\ep\to 0^{+}$ turns out to be finite only in the space of ``pure jump functions'', defined as finite or countable linear combinations of Heaviside functions. More formally, the notion is the following. \begin{defn}[Pure jump functions]\label{defn:PJF} \begin{em} Let $(a,b)\subseteq\re$ be an interval.
A function $u:(a,b)\to\re$ is called a \emph{pure jump function}, and we write $u\in\PJ((a,b))$, if there exist a real number $c$, a finite or countable set $S_{u}\subseteq(a,b)$, and a function $J:S_{u}\to\re\setminus\{0\}$ such that \begin{equation} \sum_{s\in S_{u}}|J(s)|<+\infty \label{defn:sum-conv} \end{equation} and \begin{equation} u(x)=c+\sum_{s\in S_{u}}J(s)\mathbbm{1}_{(s,b)}(x) \qquad \forall x\in (a,b), \label{defn:PJ} \end{equation} where $\mathbbm{1}_{(s,b)}:\re\to\{0,1\}$ is the indicator function of the interval $(s,b)$, defined as \begin{equation} \mathbbm{1}_{(s,b)}(x):=\begin{cases} 1 & \text{if }x\in(s,b), \\ 0 & \text{otherwise}. \end{cases} \nonumber \end{equation} The set $S_{u}$ is called the \emph{jump set} of $u$, every element $s\in S_{u}$ is called a \emph{jump point} of $u$, and $|J(s)|$ is called the \emph{height of the jump} of $u$ in $s$. We call \emph{boundary values} of $u$ the numbers \begin{equation} u(a):=\lim_{x\to a^{+}}u(x)=c \qquad\quad\text{and}\qquad\quad u(b):=\lim_{x\to b^{-}}u(x)=c+\sum_{s\in S_{u}}J(s). \label{defn:PJ-BC} \end{equation} \end{em} \end{defn} Pure jump functions can be defined in an alternative way as those functions in $BV((a,b))$ whose distributional derivative is a finite or countable linear combination of atomic measures. In particular, it can be verified that the representation (\ref{defn:PJ}) is unique, and defines a function $u\in BV((a,b))$ whose total variation is the sum of the series in (\ref{defn:sum-conv}), and whose distributional derivative $Du$ is the sum of Dirac measures concentrated in the points of the set $S_{u}$ with weight $J(s)$. Moreover, $S_{u}$ coincides with the set of discontinuity points of $u$, and \begin{equation} J(s)=\lim_{x\to s^{+}}u(x)-\lim_{x\to s^{-}}u(x) \qquad \forall s\in S_{u}. 
\nonumber \end{equation} We can now introduce the functional \begin{equation} \operatorname{\mathbb{J}}_{1/2}(\Omega,u):=\sum_{s\in S_{u}\cap\Omega}|J(s)|^{1/2}, \label{defn:J} \end{equation} defined for every $u\in\PJ((a,b))$ and every open subset $\Omega\subseteq(a,b)$. Of course the convergence of the series in (\ref{defn:sum-conv}) does not imply the convergence of the series in (\ref{defn:J}), and therefore at this level of generality it may happen that $\operatorname{\mathbb{J}}_{1/2}(\Omega,u)=+\infty$ for some choices of $u$ and $\Omega$. \paragraph{\textmd{\textit{Gamma-convergence}}} The following result concerns the convergence of the family $\RPM_{\ep}$ to a multiple of $\J_{1/2}$. The compactness statement is similar to~\cite[Theorem~4.1]{2008-TAMS-BF}, while the Gamma-convergence statement coincides with \cite[Theorem~4.4]{2008-TAMS-BF} in the special case $\phi(p)=\log(1+p^{2})$. Unfortunately, the proof in~\cite{2008-TAMS-BF} relies on~\cite[Lemma~3.1]{2008-TAMS-BF}, which is clearly false for this choice of $\phi(p)$. In the appendix at the end of the paper we present a specific proof for this case. \begin{thm}[Gamma-convergence, compactness, properties of recovery sequences]\label{thm:ABG} Let $(a,b)\subseteq\re$ be an interval, let us consider the functionals defined in (\ref{defn:ABG}) and (\ref{defn:J}), and let us set \begin{equation} \alpha_{0}:= \frac{16}{\sqrt{3}}. \label{defn:alpha-0} \end{equation} Then the following statements hold true. \begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \item \label{stat:ABG-gconv} \emph{(Gamma convergence)} Let us extend the functionals (\ref{defn:ABG}) and (\ref{defn:J}) to the space $L^{2}((a,b))$ by setting them equal to $+\infty$ outside their original domains. Then with respect to the metric of $L^{2}((a,b))$ it turns out that \begin{equation} \glim_{\ep\to 0^{+}}\RPM_{\ep}((a,b),u)=\alpha_{0}\J_{1/2}((a,b),u) \qquad \forall u\in L^{2}((a,b)).
\nonumber \end{equation} \item \label{stat:ABG-cpt} \emph{(Compactness)} Let $\{\ep_{n}\}\subseteq(0,1)$ be any sequence such that $\ep_{n}\to 0^{+}$, and let $\{u_{n}\}\subseteq H^{2}((a,b))$ be any sequence such that \begin{equation} \sup_{n\in\n}\left\{\RPM_{\ep_{n}}((a,b),u_{n})+\int_{a}^{b}u_{n}(x)^{2}\,dx\right\}<+\infty. \label{hp:Gconv-coercive} \end{equation} Then there exist an increasing sequence $\{n_{k}\}$ of positive integers, and a function $u_{\infty}\in \PJ((a,b))$ such that $u_{n_{k}}\to u_{\infty}$ in $L^{2}((a,b))$ as $k\to +\infty$. \item \label{stat:ABG-uc} \emph{(Strict convergence of recovery sequences)} Let $u\in\PJ((a,b))$ be a pure jump function with $\J_{1/2}((a,b),u)<+\infty$. Let $\{\ep_{n}\}\subseteq(0,1)$ be any sequence such that $\ep_{n}\to 0^{+}$, and let $\{u_{n}\}\subseteq H^{2}((a,b))$ be any sequence such that $u_{n}\to u$ in $L^{2}((a,b))$, and \begin{equation} \lim_{n\to +\infty}\RPM_{\ep_{n}}((a,b),u_{n})=\alpha_{0}\J_{1/2}((a,b),u). \label{hp:recovery} \end{equation} Then actually $u_{n}\auto u$ in $BV((a,b))$, according to Definition~\ref{defn:BV-sc}. \item \label{stat:ABG-recovery} \emph{(Recovery sequences with given boundary data)} Let $\{\ep_{n}\}$ and $u$ be as in the previous statement, and let $\{A_{0,n}\}$, $\{A_{1,n}\}$, $\{B_{0,n}\}$, $\{B_{1,n}\}$ be four sequences of real numbers such that \begin{equation} \lim_{n\to +\infty}\left(A_{0,n},A_{1,n},B_{0,n},B_{1,n}\right)=(u(a),0,u(b),0), \nonumber \end{equation} where the boundary values of $u$ are intended as usual in the sense of (\ref{defn:PJ-BC}). Then there exists a sequence $\{u_{n}\}\subseteq H^{2}((a,b))$ with boundary data \begin{equation} \left(u_{n}(a),u_{n}'(a),u_{n}(b),u_{n}'(b)\strut\right)= (A_{0,n}, A_{1,n}, B_{0,n}, B_{1,n}) \qquad \forall n\in\n \label{hp:recovery-BC} \end{equation} such that $u_{n}\to u$ in $L^{2}((a,b))$ and (\ref{hp:recovery}) holds true. 
\end{enumerate} \end{thm} \begin{rmk} \begin{em} The choice of the ambient space $L^{2}((a,b))$ is not essential in Theorem~\ref{thm:ABG}, and actually it can be replaced with $L^{p}((a,b))$ for any real exponent $p\geq 1$ (but not for $p=+\infty$, at least in statements~(2) and~(4)). \end{em} \end{rmk} \paragraph{\textmd{\textit{Convergence of minima and minimizers}}} Since the fidelity term in (\ref{defn:ABGF}) is continuous with respect to the metric of $L^{2}(\Omega)$, and Gamma-convergence is stable with respect to continuous perturbations, we deduce that the Gamma-limit of (\ref{defn:ABGF}) is the functional \begin{equation} \JF_{1/2}(\alpha,\beta,f,\Omega,u):= \alpha\J_{1/2}(\Omega,u)+\beta\int_{\Omega}(u(x)-f(x))^{2}\,dx, \label{defn:JF} \end{equation} with $\alpha$ equal to the constant $\alpha_{0}$ defined in (\ref{defn:alpha-0}). Now we concentrate on the special case where $\Omega=(0,L)$ and the forcing term is the linear function $f(x)=Mx$, for suitable real numbers $L>0$ and $M$, and we consider the following minimum values without boundary conditions \begin{gather} \mu_{\ep}(\beta,L,M):=\min_{u\in H^{2}((0,L))} \RPMF_{\ep}(\beta,Mx,(0,L),u), \label{defn:muep} \\[1ex] \mu_{0}(\alpha,\beta,L,M):= \min_{u\in\PJ((0,L))}\JF_{1/2}(\alpha,\beta,Mx,(0,L),u). \label{defn:mu0} \end{gather} Then we introduce boundary conditions. In the case of (\ref{defn:ABGF}) we call $H^{2}((0,L),M)$ the set of all functions $v\in H^{2}((0,L))$ such that $v(0)=0$, $v(L)=ML$, and $v'(0)=v'(L)=0$. In the case of (\ref{defn:JF}) we call $\PJ((0,L),M)$ the set of all functions $v\in\PJ((0,L))$ such that $v(0)=0$ and $v(L)=ML$, where these boundary values are intended in the sense of (\ref{defn:PJ-BC}). 
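As a concrete illustration (ours; the jump data below are made up), the following Python sketch builds a pure jump function in the representation (\ref{defn:PJ}), and evaluates its boundary values in the sense of (\ref{defn:PJ-BC}), its total variation, and the functional $\J_{1/2}$ from (\ref{defn:J}). The data are chosen so that $u(0)=0$ and $u(1)=M\cdot 1$ with $M=1$, i.e.\ $u\in\PJ((0,1),1)$.

```python
# A pure jump function on (0, L) in the representation
#     u(x) = c + sum_s J(s) * 1_{(s,L)}(x),
# with jump set S_u = {0.25, 0.5, 0.75} and heights J(s) (illustrative data).
c, L = 0.0, 1.0
jumps = {0.25: 0.4, 0.5: -0.1, 0.75: 0.7}

def u(x):
    # Indicator 1_{(s,L)}(x) is 1 exactly when s < x < L.
    return c + sum(J for s, J in jumps.items() if s < x < L)

u_at_0 = c                                  # boundary value u(0) = c
u_at_L = c + sum(jumps.values())            # boundary value u(L) = c + sum of jumps
total_var = sum(abs(J) for J in jumps.values())          # |Du|((0,L))
J_half = sum(abs(J) ** 0.5 for J in jumps.values())      # J_{1/2}((0,L), u)
```

Here $u(L)=0.4-0.1+0.7=1$, so this function matches the boundary conditions defining $\PJ((0,L),M)$ with $M=1$, while its total variation $1.2$ exceeds $|u(L)-u(0)|=1$ because one jump is negative.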
At this point we consider the following minimum values with boundary conditions \begin{gather} \mu_{\ep}^{*}(\beta,L,M):=\min_{u\in H^{2}((0,L),M)} \RPMF_{\ep}(\beta,Mx,(0,L),u), \label{defn:muep*} \\[1ex] \mu_{0}^{*}(\alpha,\beta,L,M):= \min_{u\in\PJ((0,L),M)}\JF_{1/2}(\alpha,\beta,Mx,(0,L),u). \label{defn:mu0*} \end{gather} The following result contains the properties of these minimum values that we exploit in the sequel (a sketch of the proof is in the appendix). \begin{prop}[Asymptotic analysis of minima with linear forcing term]\label{prop:mu} The minimum values defined in (\ref{defn:muep}) through (\ref{defn:mu0*}) have the following properties. \begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \item \label{prop:existence} \emph{(Existence).} The minimum problems (\ref{defn:muep}) through (\ref{defn:mu0*}) admit a solution for every $(\ep,\alpha,\beta,L,M)\in(0,1)\times(0,+\infty)^{3}\times\re$. \item \label{prop:M} \emph{(Symmetry, continuity and monotonicity with respect to $M$).} For every admissible value of $\ep$, $\alpha$, $\beta$, $L$ the four functions \begin{gather*} M\mapsto\mu_{\ep}(\beta,L,M), \qquad\qquad M\mapsto\mu_{\ep}^{*}(\beta,L,M), \\[0.5ex] M\mapsto\mu_{0}(\alpha,\beta,L,M), \qquad\qquad M\mapsto\mu_{0}^{*}(\alpha,\beta,L,M), \end{gather*} are even, continuous in $\re$, and nondecreasing in $[0,+\infty)$. \item \label{prop:L} \emph{(Monotonicity with respect to $L$).} For every admissible value of $\ep$, $\alpha$, $\beta$, $M$, the three functions \begin{equation} L\mapsto\mu_{\ep}(\beta,L,M), \qquad\quad L\mapsto\mu_{0}(\alpha,\beta,L,M), \qquad\quad L\mapsto\mu_{0}^{*}(\alpha,\beta,L,M) \nonumber \end{equation} are nondecreasing with respect to $L$ in $(0,+\infty)$. As for $\mu_{\ep}^{*}$, it turns out that \begin{equation} \mu_{\ep}^{*}(\beta,L_{2},M)\leq\left(\frac{L_{2}}{L_{1}}\right)^{3}\mu_{\ep}^{*}(\beta,L_{1},M) \label{th:monot-muep*} \end{equation} for every $0<L_{1}\leq L_{2}$. 
\item \label{prop:pointwise} \emph{(Pointwise convergence).} For every admissible value of $\beta$, $M$ and $L$ it turns out that \begin{equation} \lim_{\ep\to 0^{+}}\mu_{\ep}(\beta,L,M)= \mu_{0}(\alpha_{0},\beta,L,M), \label{th:lim-mu} \end{equation} and \begin{equation} \lim_{\ep\to 0^{+}}\mu_{\ep}^{*}(\beta,L,M)= \mu_{0}^{*}(\alpha_{0},\beta,L,M), \label{th:lim-mu*} \end{equation} where $\alpha_{0}$ is the constant defined in (\ref{defn:alpha-0}). \item \label{prop:uniform} \emph{(Uniform convergence).} The limits (\ref{th:lim-mu}) and (\ref{th:lim-mu*}) are uniform for bounded values of $M$, in the sense that for every positive value of $\beta$ and $L$ it turns out that \begin{equation} \lim_{\ep\to 0^{+}}\sup_{|M|\leq M_{0}} \left|\mu_{\ep}(\beta,L,M)-\mu_{0}(\alpha_{0},\beta,L,M)\right|=0 \qquad \forall M_{0}>0, \label{th:lim-mu-unif} \end{equation} and \begin{equation} \lim_{\ep\to 0^{+}}\sup_{|M|\leq M_{0}} \left|\mu^{*}_{\ep}(\beta,L,M)-\mu_{0}^{*}(\alpha_{0},\beta,L,M)\right|=0 \qquad \forall M_{0}>0, \label{th:lim-mu*-unif} \end{equation} \end{enumerate} \end{prop} \setcounter{equation}{0} \section{Local minimizers}\label{sec:loc-min} In this section we state the key tools for the proof of our main results. The key idea is that local minimizers for functionals of the form (\ref{defn:ABGF}) also converge to local minimizers for functionals of the form (\ref{defn:JF}). This extends the Gamma-convergence results of the previous section. The notion of local minimizers can be introduced in a very general framework by requiring minimality with respect to compactly supported perturbations. In many concrete examples this is equivalent to saying that a given function is a minimizer with respect to its own boundary conditions. Of course the number and the form of these boundary conditions depend on the nature of the functional, as we explain below.
\begin{defn}[Local minimizers in intervals]\label{defn:loc-min} \begin{em} Let $(a,b)\subseteq\re$ be an interval, and let $\mathcal{F}(u)$ be a functional defined in some functional space $\mathcal{S}((a,b))$. \begin{itemize} \item Let us assume that $\mathcal{S}((a,b))=H^{2}((a,b))$. A \emph{local minimizer} is any function $u\in H^{2}((a,b))$ such that $\mathcal{F}(u)\leq\mathcal{F}(v)$ for every function $v\in H^{2}((a,b))$ such that \begin{equation} \left(v(a),v'(a),v(b),v'(b)\strut\right)=\left(u(a),u'(a),u(b),u'(b)\strut\right). \nonumber \end{equation} \item Let us assume that $\mathcal{S}((a,b))=\PJ((a,b))$. A \emph{local minimizer} is any function $u\in\PJ((a,b))$ such that $\mathcal{F}(u)\leq\mathcal{F}(v)$ for every function $v\in\PJ((a,b))$ such that $(v(a),v(b))=(u(a),u(b))$, where boundary values of pure jump functions are intended in the sense of (\ref{defn:PJ-BC}). \end{itemize} In both cases we write \begin{equation} u\in\argmin\loc\left\{\mathcal{F}(u):u\in\mathcal{S}((a,b))\right\}. \nonumber \end{equation} \end{em} \end{defn} We observe that in Definition~\ref{defn:loc-min} the two endpoints of the interval play the same role. In the sequel we also need the following notion of one-sided local minimizer, where we focus on just one of the endpoints. \begin{defn}[One-sided local minimizers in an interval]\label{defn:R-loc-min} \begin{em} Let $(a,b)\subseteq\re$ be an interval, and let $\mathcal{F}(u)$ be a functional defined in some functional space $\mathcal{S}((a,b))$. \begin{itemize} \item Let us assume that $\mathcal{S}((a,b))=H^{2}((a,b))$. A \emph{right-hand local minimizer} is any function $u\in H^{2}((a,b))$ such that $\mathcal{F}(u)\leq\mathcal{F}(v)$ for every function $v\in H^{2}((a,b))$ such that $(v(b),v'(b))=(u(b),u'(b))$. \item Let us assume that $\mathcal{S}((a,b))=\PJ((a,b))$.
A \emph{right-hand local minimizer} is any function $u\in\PJ((a,b))$ such that $\mathcal{F}(u)\leq\mathcal{F}(v)$ for every function $v\in\PJ((a,b))$ such that $v(b)=u(b)$. \end{itemize} In both cases we write \begin{equation} u\in\argmin\Rloc\left\{\mathcal{F}(u):u\in\mathcal{S}((a,b))\right\}. \nonumber \end{equation} Left-hand local minimizers are defined in a symmetric way, just focusing on the endpoint~$a$. \end{em} \end{defn} \begin{defn}[Entire and semi-entire local minimizers] \begin{em} Let us consider functionals $\mathcal{F}(I,u)$ defined for every interval $I$ and every $u$ in some function space $\mathcal{S}(I)$. \begin{itemize} \item An \emph{entire local minimizer} is a function $u:\re\to\re$ such that, for every interval $(a,b)\subseteq\re$, the restriction of $u$ to $(a,b)$ is a local minimizer in $(a,b)$. \item A \emph{right-hand semi-entire local minimizer} is a function $u:(0,+\infty)\to\re$ such that, for every real number $L>0$, the restriction of $u$ to $(0,L)$ is a right-hand local minimizer in $(0,L)$. \item A \emph{left-hand semi-entire local minimizer} is a function $u:(-\infty,0)\to\re$ such that, for every real number $L>0$, the restriction of $u$ to $(-L,0)$ is a left-hand local minimizer in $(-L,0)$. \end{itemize} \end{em} \end{defn} The following result is crucial both in the proof of Theorem~\ref{thm:asympt-min} and as a preliminary step toward the characterization of entire and semi-entire local minimizers of the limit functional (\ref{defn:JF}).
\begin{prop}[Estimates for minima of the limit problem]\label{prop:loc-min-discr} For every $(\alpha,\beta,L,M)\in(0,+\infty)^{3}\times\re$ the minimum values defined in (\ref{defn:mu0}) and (\ref{defn:mu0*}) satisfy \begin{align} \hspace{3em} c_{1}|M|^{4/5}L-c_{2}|M|^{1/5}&\leq \mu_{0}(\alpha,\beta,L,M) \label{th:est-mu-abLM} \\[0.5ex] &\leq \mu_{0}^{*}(\alpha,\beta,L,M)\leq c_{1}|M|^{4/5}L+c_{3}|M|^{1/5}, \hspace{3em} \label{th:est-mu*-abLM} \end{align} where \begin{equation} c_{1}:=\frac{5}{4}\left(\frac{\alpha^{4}\beta}{3}\right)^{1/5}, \qquad c_{2}:=20\left(\frac{2\alpha^{6}}{3\beta}\right)^{1/5}, \qquad c_{3}:=\frac{5}{4}\left(\frac{3\alpha^{6}}{\beta}\right)^{1/5}. \label{defn:c123} \end{equation} \end{prop} We are now ready to state the first main result of this section, namely the characterization of all entire and semi-entire local minimizers for the functional (\ref{defn:JF}). \begin{prop}[Classification of entire and semi-entire local minimizers]\label{prop:loc-min-class} For every choice of the real numbers $(\alpha,\beta,M)\in(0,+\infty)^{2}\times\re$, let us consider the functional $\JF_{1/2}(\alpha,\beta,Mx,\re,v)$ defined in (\ref{defn:JF}). Let us consider the canonical $(H,V)$-staircase $S_{H,V}(x)$ with parameters \begin{equation} H:=\frac{1}{2}\left(\frac{9\alpha^{2}}{\beta^{2}|M|^{3}}\right)^{1/5}, \qquad\qquad V:=MH, \label{defn:lambda-0} \end{equation} and the usual convention that $S_{H,V}(x)\equiv 0$ when $M=0$. Then the following statements hold true. \begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \item \emph{(Entire local minimizers).} The set of entire local minimizers coincides with the set of the oblique translations of the canonical $(H,V)$-staircase $S_{H,V}(x)$, as introduced in Definition~\ref{defn:staircase} and Definition~\ref{defn:translations}.
\item \emph{(Semi-entire local minimizers).} The unique right-hand semi-entire local minimizer is the function $w:(0,+\infty)\to\re$ defined by \begin{equation} w(x):=\begin{cases} Mz_{0} & \text{if }x\in(0,z_{0}), \\ S_{H,V}(x-z_{0})+Mz_{0}\quad & \text{if }x\geq z_{0}, \end{cases} \label{defn:semi-entire} \end{equation} where $z_{0}:=(5/3)^{1/2}H$ (if $M=0$ the value of $z_{0}$ is not relevant). The unique left-hand semi-entire local minimizer is the function $w(-x)$. \end{enumerate} \end{prop} In words, the right-hand semi-entire local minimizer is an oblique translation of the canonical $(H,V)$-staircase, but with a first step that is longer. Intuitively, this is due to the fact that the ``jump at the origin'' has no cost in terms of energy. The second main result of this section is the convergence of local minimizers for (\ref{defn:ABGF}) to local minimizers for (\ref{defn:JF}). Let us start with the symmetric case. \begin{prop}[Convergence to entire local minimizers]\label{prop:bu2minloc} Let $M$ and $\beta$ be real numbers, with $\beta>0$. For every positive integer $n$, let $\ep_{n}\in(0,1)$ and $A_{n}<B_{n}$ be real numbers, let $g_{n}:(A_{n},B_{n})\to\re$ be a continuous function, and let $w_{n}\in H^{2}((A_{n},B_{n}))$. Let us assume that \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item as $n\to +\infty$ it turns out that $\ep_{n}\to 0^{+}$, $A_{n}\to -\infty$, and $B_{n}\to +\infty$, \item $g_{n}(x)\to Mx$ uniformly on bounded subsets of $\re$, \item for every positive integer $n$ it turns out that \begin{equation} w_{n}\in\argmin\loc\left\{\RPMF_{\ep_{n}}(\beta,g_{n},(A_{n},B_{n}),w):w\in H^{2}((A_{n},B_{n}))\right\}, \nonumber \end{equation} \item there exists a positive real number $C_{0}$ such that \begin{equation} \RPMF_{\ep_{n}}(\beta,g_{n},(A_{n},B_{n}),w_{n})\leq\frac{C_{0}}{\ep_{n}} \qquad \forall n\geq 1. 
\label{hp:bound-G} \end{equation} \end{enumerate} Then there exists an increasing sequence $\{n_{k}\}$ of positive integers such that \begin{equation} w_{n_{k}}(x)\auto w_{\infty}(x) \quad\text{in }BV\loc(\re), \nonumber \end{equation} where $w_{\infty}$ is an entire local minimizer for the functional (\ref{defn:JF}) with $\alpha$ given by~(\ref{defn:alpha-0}). \end{prop} The result for one-sided local minimizers is analogous. We state it in the case of right-hand local minimizers. \begin{prop}[Convergence to semi-entire local minimizers]\label{prop:bu2Rminloc} Let $M$ and $\beta$ be real numbers, with $\beta>0$. For every positive integer $n$, let $\ep_{n}\in(0,1)$ and $L_{n}>0$ be real numbers, let $g_{n}:(0,L_{n})\to\re$ be a continuous function, and let $w_{n}\in H^{2}((0,L_{n}))$. Let us assume that \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item as $n\to +\infty$ it turns out that $\ep_{n}\to 0^{+}$ and $L_{n}\to +\infty$, \item $g_{n}(x)\to Mx$ uniformly on bounded subsets of $(0,+\infty)$, \item for every positive integer $n$ it turns out that \begin{equation} w_{n}\in\argmin\Rloc\left\{\RPMF_{\ep_{n}}(\beta,g_{n},(0,L_{n}),w):w\in H^{2}((0,L_{n}))\right\}, \nonumber \end{equation} \item there exists a positive real number $C_{0}$ such that \begin{equation} \RPMF_{\ep_{n}}(\beta,g_{n},(0,L_{n}),w_{n})\leq\frac{C_{0}}{\ep_{n}} \qquad \forall n\geq 1. \nonumber \end{equation} \end{enumerate} Let $w_{\infty}$ denote the unique right-hand semi-entire local minimizer for the functional (\ref{defn:JF}) with $\alpha$ given by~(\ref{defn:alpha-0}), namely the function defined by (\ref{defn:semi-entire}). Then for every $L>0$ that is not a jump point of $w_{\infty}$ it turns out that \begin{equation} w_{n}(x)\auto w_{\infty}(x) \quad\text{in }BV((0,L)). 
\nonumber \end{equation} \end{prop} \setcounter{equation}{0} \section{Proofs of main results}\label{sec:strategy} In this section we assume that the results stated in section~\ref{sec:gconv} and section~\ref{sec:loc-min} are valid, and using them we prove all the main results of section~\ref{sec:statements} concerning the behavior of minima and minimizers. We hope that this presentation allows us to highlight the main ideas without focusing on the technical details that will be presented in the next section. \subsection{Asymptotic behavior of minima (Theorem~\ref{thm:asympt-min})} The proof of Theorem~\ref{thm:asympt-min} consists of two main parts. In the first part (estimate from below) we consider any family $\{u_{\ep}\}\subseteq H^{2}((0,1))$ and we show that \begin{equation} \liminf_{\ep\to 0^{+}}\frac{\PMF_{\ep}(\beta,f,(0,1),u_{\ep})}{\omep^{2}}\geq 10\left(\frac{2\beta}{27}\right)^{1/5}\int_{0}^{1}|f'(x)|^{4/5}\,dx. \label{est:lim-SPM-below} \end{equation} In the second part (estimate from above) we construct a family $\{u_{\ep}\}\subseteq H^{2}((0,1))$ such that \begin{equation} \limsup_{\ep\to 0^{+}}\frac{\PMF_{\ep}(\beta,f,(0,1),u_{\ep})}{\omep^{2}}\leq 10\left(\frac{2\beta}{27}\right)^{1/5}\int_{0}^{1}|f'(x)|^{4/5}\,dx. \label{est:lim-SPM-above} \end{equation} \subsubsection{Estimate from below} \paragraph{\textmd{\textit{Interval subdivision and approximation of the forcing term}}} Let us fix two real numbers $L>0$ and $\eta\in(0,1)$. For every $\ep\in(0,1)$ we set \begin{equation} N_{\ep,L}:=\left\lfloor\frac{1}{L\omep}\right\rfloor \qquad\quad\text{and}\quad\qquad L_{\ep}:=\frac{1}{N_{\ep,L}\omep}. \label{defn:NepAep} \end{equation} We observe that $N_{\ep,L}$ is an integer, and that $L_{\ep}\to L$ when $\ep\to 0^{+}$.
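As a concrete check of (\ref{defn:NepAep}), the following Python sketch (with $\omep$ treated as an abstract small parameter) verifies that the $N_{\ep,L}$ intervals of length $L_{\ep}\omep$ tile $[0,1]$ exactly, and that $L_{\ep}\geq L$ with $L_{\ep}\to L$ as the parameter tends to zero:

```python
import math

def subdivision(L, omega):
    """N_{eps,L} and L_eps from (defn:NepAep); omega plays the role of omega_eps."""
    N = math.floor(1.0 / (L * omega))
    L_eps = 1.0 / (N * omega)
    return N, L_eps

L = 3.0
for omega in (1e-2, 1e-3, 1e-4):
    N, L_eps = subdivision(L, omega)
    assert abs(N * L_eps * omega - 1.0) < 1e-12          # N intervals of length L_eps*omega tile [0,1]
    assert L_eps >= L                                    # the floor rounds N down, so L_eps >= L
    assert L_eps - L <= L * L * omega / (1 - L * omega)  # hence L_eps -> L as omega -> 0
```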
We observe also that $[0,1]$ is (up to a finite number of points) the disjoint union of the $N_{\ep,L}$ intervals of length $L_{\ep}\omep$ defined by \begin{equation} I_{\ep,k}:=((k-1)L_{\ep}\omep,kL_{\ep}\omep) \qquad \forall k\in\{1,\ldots,N_{\ep,L}\}, \label{defn:Iepk} \end{equation} and we consider the piecewise affine function $f_{\ep,L}:[0,1]\to\re$ that interpolates the values of $f$ at the endpoints of these intervals, namely the function defined by \begin{equation} f_{\ep,L}(x):=M_{\ep,L,k}(x-(k-1)L_{\ep}\omep)+f((k-1)L_{\ep}\omep) \qquad \forall x\in I_{\ep,k}, \label{defn:fepL} \end{equation} where \begin{equation} M_{\ep,L,k}:=\frac{f(kL_{\ep}\omep)-f((k-1)L_{\ep}\omep)}{L_{\ep}\omep}. \nonumber \end{equation} From the $C^{1}$ regularity of $f$ we deduce that \begin{equation} |M_{\ep,L,k}|\leq\max\{|f'(x)|:x\in[0,1]\}=:M_{\infty} \label{est:mepL-bound} \end{equation} for every admissible value of $\ep$, $L$ and $k$, and that the family $\{f_{\ep,L}\}$ converges to $f$ in the sense that \begin{equation} \lim_{\ep\to 0^{+}}\frac{1}{\omep^{2}}\int_{0}^{1}(f(x)-f_{\ep,L}(x))^{2}\,dx=0. \label{lim:fepL2f} \end{equation} Moreover, we deduce also that $f_{\ep,L}'(x)\to f'(x)$ uniformly in $[0,1]$, and in particular \begin{equation} \lim_{\ep\to 0^{+}}L_{\ep}\omep\sum_{k=1}^{N_{\ep,L}}|M_{\ep,L,k}|^{4/5}= \lim_{\ep\to 0^{+}}\int_{0}^{1}|f_{\ep,L}'(x)|^{4/5}\,dx= \int_{0}^{1}|f'(x)|^{4/5}\,dx. 
\label{lim:fepL'2f'} \end{equation} Finally, from the inequality \begin{equation} (a+b)^{2}\geq(1-\eta)a^{2}+\left(1-\frac{1}{\eta}\right)b^{2} \qquad \forall\eta\in(0,1), \quad \forall (a,b)\in\re^{2}, \nonumber \end{equation} we obtain the estimate \begin{equation} \int_{0}^{1}(u_{\ep}-f)^{2}\,dx\geq (1-\eta)\int_{0}^{1}(u_{\ep}-f_{\ep,L})^{2}\,dx +\left(1-\frac{1}{\eta}\right)\int_{0}^{1}(f-f_{\ep,L})^{2}\,dx, \nonumber \end{equation} from which we conclude that \begin{eqnarray} \PMF_{\ep}(\beta,f,(0,1),u_{\ep})& \geq & (1-\eta)\PMF_{\ep}(\beta,f_{\ep,L},(0,1),u_{\ep}) \nonumber \\[0.5ex] & & \mbox{}+\left(1-\frac{1}{\eta}\right)\beta\int_{0}^{1}(f(x)-f_{\ep,L}(x))^{2}\,dx. \label{est:SPM-ABGF} \end{eqnarray} \paragraph{\textmd{\textit{Reduction to a common interval}}} We prove that \begin{equation} \PMF_{\ep}(\beta,f_{\ep,L},(0,1),u_{\ep})\geq \omep^{3}\sum_{k=1}^{N_{\ep,L}}\mu_{\ep}(\beta,L,M_{\ep,L,k}), \label{ineq:ABGF-mLM} \end{equation} where $\mu_{\ep}(\beta,L,M_{\ep,L,k})$ is defined by (\ref{defn:muep}). To this end, we begin by observing that \begin{equation} \PMF_{\ep}(\beta,f_{\ep,L},(0,1),u_{\ep})= \sum_{k=1}^{N_{\ep,L}}\PMF_{\ep}(\beta,f_{\ep,L},I_{\ep,k},u_{\ep}). \label{ineq:ABGF-sum} \end{equation} Each of the terms of the sum can be reduced to the common interval $(0,L_{\ep})$ by introducing the function $v_{\ep,L,k}:(0,L_{\ep})\to\re$ defined by \begin{equation} v_{\ep,L,k}(y):=\frac{\uep((k-1)L_{\ep}\omep+\omep y)-f((k-1)L_{\ep}\omep)}{\omep} \qquad \forall y\in(0,L_{\ep}). 
\label{defn:veLK} \end{equation} Indeed, with the change of variable $x=(k-1)L_{\ep}\omep+\omep y$, we obtain that \begin{equation} \int_{I_{\ep,k}}(\uep(x)-f_{\ep,L}(x))^{2}\,dx= \omep^{3}\int_{0}^{L_{\ep}}(v_{\ep,L,k}(y)-M_{\ep,L,k}\,y)^{2}\,dy \nonumber \end{equation} and \begin{equation} \int_{I_{\ep,k}} \left\{\ep^{6}\omep^{4}\uep''(x)^{2}+\log\left(1+\uep'(x)^{2}\right)\right\}dx =\omep^{3}\RPM_{\ep}((0,L_{\ep}),v_{\ep,L,k}), \nonumber \end{equation} and therefore \begin{eqnarray*} \PMF_{\ep}(\beta,f_{\ep,L},I_{\ep,k},u_{\ep}) & = & \omep^{3}\RPMF_{\ep}(\beta,M_{\ep,L,k}\,x,(0,L_{\ep}),v_{\ep,L,k}) \\[1ex] & \geq & \omep^{3}\mu_{\ep}(\beta,L_{\ep},M_{\ep,L,k}) \\[1ex] & \geq & \omep^{3}\mu_{\ep}(\beta,L,M_{\ep,L,k}), \label{eqn:1000-620} \end{eqnarray*} where in the last inequality we exploited that $L_{\ep}\geq L$, and $\mu_{\ep}$ is monotone with respect to the length of the interval. Plugging this inequality into (\ref{ineq:ABGF-sum}) we obtain (\ref{ineq:ABGF-mLM}). \paragraph{\textmd{\textit{Convergence to minima of the limit problem}}} There exists $\ep_{0}\in(0,1)$ such that \begin{equation} \mu_{\ep}(\beta,L,M_{\ep,L,k})\geq\mu_{0}(\alpha_{0},\beta,L,M_{\ep,L,k})-\eta \label{th:meLM-mu0} \end{equation} for every $\ep\in(0,\ep_{0})$ and every $k\in\{1,\ldots,N_{\ep,L}\}$, where the function $\mu_{0}$ is defined according to (\ref{defn:mu0}), and $\alpha_{0}$ is defined by (\ref{defn:alpha-0}). Indeed, this estimate follows from Proposition~\ref{prop:mu}, and in particular from the uniform convergence (\ref{th:lim-mu-unif}), after observing that the values of $M_{\ep,L,k}$ are uniformly bounded because of (\ref{est:mepL-bound}). 
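The Riemann-sum limit (\ref{lim:fepL'2f'}) can be tested numerically on a concrete forcing term; here is a minimal Python sketch with the sample choice $f(x)=\sin x$ (sample function and tolerances are ours):

```python
import math

f, fp = math.sin, math.cos   # sample C^1 forcing term f(x) = sin x and its derivative

def riemann_sum(N):
    """h * sum_k |M_k|^{4/5}, M_k = slope of the affine interpolant of f on N equal intervals."""
    h = 1.0 / N
    return h * sum(abs((f(k * h) - f((k - 1) * h)) / h) ** 0.8 for k in range(1, N + 1))

# reference value of int_0^1 |f'(x)|^{4/5} dx via a fine midpoint rule
FINE = 200000
ref = sum(abs(fp((j + 0.5) / FINE)) ** 0.8 for j in range(FINE)) / FINE

assert abs(riemann_sum(10) - ref) < 2e-2     # already close on a coarse subdivision
assert abs(riemann_sum(1000) - ref) < 1e-5   # and converging as the subdivision refines
```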
\paragraph{\textmd{\textit{Conclusion}}} From the estimate from below in (\ref{th:est-mu-abLM}) we know that \begin{equation*} \mu_{0}(\alpha_{0},\beta,L,M_{\ep,L,k})\geq c_{1}|M_{\ep,L,k}|^{4/5}L-c_{2}|M_{\ep,L,k}|^{1/5}, \end{equation*} where $c_{1}$ and $c_{2}$ are given by (\ref{defn:c123}), and therefore in particular \begin{equation} c_{1}:=\frac{5}{4}\left(\frac{\alpha_{0}^{4}\beta}{3}\right)^{1/5}= 10\left(\frac{2\beta}{27}\right)^{1/5}. \label{defn:c1} \end{equation} Summing over $k$, from (\ref{ineq:ABGF-mLM}) and (\ref{th:meLM-mu0}) we obtain that \begin{eqnarray*} \lefteqn{\hspace{-5em} \frac{\PMF_{\ep}(\beta,f_{\ep,L},(0,1),u_{\ep})}{\omep^{2}} \;\geq\; \omep\sum_{k=1}^{N_{\ep,L}}\mu_{\ep}(\beta,L,M_{\ep,L,k})} \\ \qquad & \geq & \omep\sum_{k=1}^{N_{\ep,L}}\mu_{0}(\alpha_{0},\beta,L,M_{\ep,L,k})-\eta\omep N_{\ep,L} \\ & \geq & c_{1}L\omep\sum_{k=1}^{N_{\ep,L}}|M_{\ep,L,k}|^{4/5} -c_{2}\omep N_{\ep,L}M_{\infty}^{1/5}-\eta\omep N_{\ep,L}. \end{eqnarray*} Finally, plugging this estimate into (\ref{est:SPM-ABGF}) we deduce that \begin{eqnarray*} \frac{\PMF_{\ep}(\beta,f,(0,1),u_{\ep})}{\omep^{2}} & \geq & (1-\eta)c_{1}\frac{L}{L_{\ep}}\cdot L_{\ep}\omep\sum_{k=1}^{N_{\ep,L}}|M_{\ep,L,k}|^{4/5} \\[1ex] & & \mbox{}-\omep N_{\ep,L}\cdot(1-\eta)\left(c_{2}M_{\infty}^{1/5}+\eta\right) \\[1ex] & & \mbox{}+\left(1-\frac{1}{\eta}\right) \frac{\beta}{\omep^{2}}\int_{0}^{1}(f(x)-f_{\ep,L}(x))^{2}\,dx. \end{eqnarray*} Now we let $\ep\to 0^{+}$, and we exploit (\ref{lim:fepL'2f'}) in the first line, the fact that $\omep N_{\ep,L}\to 1/L$ in the second line, and (\ref{lim:fepL2f}) in the third line. We conclude that \begin{equation*} \liminf_{\ep\to 0^{+}}\frac{\PMF_{\ep}(\beta,f,(0,1),u_{\ep})}{\omep^{2}} \geq (1-\eta)\left\{c_{1}\int_{0}^{1}|f'(x)|^{4/5}\,dx -\frac{c_{2}M_{\infty}^{1/5}+\eta}{L}\right\}. 
\end{equation*} Finally, letting $\eta\to 0^{+}$ and $L\to +\infty$, and recalling that $c_{1}$ is given by (\ref{defn:c1}), we obtain exactly (\ref{est:lim-SPM-below}). \subsubsection{Estimate from above} We show the existence of a family $\{u_{\ep}\}\subseteq H^{2}((0,1))$ for which (\ref{est:lim-SPM-above}) holds true. This amounts to proving the asymptotic optimality of all the steps in the proof of the estimate from below. \paragraph{\textmd{\textit{Interval subdivision and approximation of the forcing term}}} Let us fix again two real numbers $L>0$ and $\eta\in(0,1)$, and for every $\ep\in(0,1)$ let us define $N_{\ep,L}$ and $L_{\ep}$ as in (\ref{defn:NepAep}), the intervals $I_{\ep,k}$ as in (\ref{defn:Iepk}), and the piecewise affine function $f_{\ep,L}:(0,1)\to\re$ as in (\ref{defn:fepL}). Then we exploit the inequality \begin{equation*} (a+b)^{2}\leq(1+\eta)a^{2}+\left(1+\frac{1}{\eta}\right)b^{2} \qquad \forall\eta\in(0,1), \quad \forall (a,b)\in\re^{2}, \end{equation*} and for every $u\in H^{2}((0,1))$ we obtain the estimate \begin{eqnarray*} \PMF_{\ep}(\beta,f,(0,1),u) & \leq & (1+\eta)\PMF_{\ep}(\beta,f_{\ep,L},(0,1),u) \\[1ex] & & \mbox{}+\left(1+\frac{1}{\eta}\right)\beta\int_{0}^{1}(f(x)-f_{\ep,L}(x))^{2}\,dx. \label{est:SPM-ABGF-above} \end{eqnarray*} \paragraph{\textmd{\textit{Reduction to a common interval}}} We claim that there exists $u_{\ep}\in H^{2}((0,1))$ such that \begin{eqnarray*} \PMF_{\ep}(\beta,f_{\ep,L},(0,1),u_{\ep}) & = & \omep^{3}\sum_{k=1}^{N_{\ep,L}}\mu_{\ep}^{*}(\beta,L_{\ep},M_{\ep,L,k}) \\ & \leq & \left(\frac{L_{\ep}}{L}\right)^{3}\omep^{3}\sum_{k=1}^{N_{\ep,L}}\mu_{\ep}^{*}(\beta,L,M_{\ep,L,k}), \label{ineq:ABGF-mLM*} \end{eqnarray*} where $\mu_{\ep}^{*}$ is defined by (\ref{defn:muep*}), and the inequality follows from (\ref{th:monot-muep*}). 
To this end, in analogy with the previous case we observe that the equalities \begin{eqnarray*} \PMF_{\ep}(\beta,f_{\ep,L},(0,1),u_{\ep}) & = & \sum_{k=1}^{N_{\ep,L}}\PMF_{\ep}(\beta,f_{\ep,L},I_{\ep,k},u_{\ep}) \\[1ex] & = & \omep^{3}\sum_{k=1}^{N_{\ep,L}}\RPMF_{\ep}(\beta,M_{\ep,L,k}\,x,(0,L_{\ep}),v_{\ep,L,k}) \end{eqnarray*} hold true for every $u_{\ep}\in H^{2}((0,1))$, provided that $u_{\ep}(x)$ and $v_{\ep,L,k}(x)$ are related by (\ref{defn:veLK}). At this point it is enough to choose $u_{\ep}$ in such a way that $v_{\ep,L,k}(x)$ coincides with a minimizer in the definition of $\mu_{\ep}^{*}(\beta,L_{\ep},M_{\ep,L,k})$ for every admissible choice of $k$. Due to the boundary conditions in (\ref{defn:muep*}), the resulting function $\uep(x)$ coincides with the forcing term $f(x)$ at the nodes of the form $x=kL_{\ep}\omep$, its derivative vanishes at the same points, and the profile in each subinterval is (up to homotheties and translations) a minimizer of (\ref{defn:muep*}). As a consequence, the different pieces glue together in a $C^{1}$ way, and thus the resulting function belongs to $H^{2}((0,1))$. \paragraph{\textmd{\textit{Convergence to minima of the limit problem}}} As in the case of the estimate from below we rely on Proposition~\ref{prop:mu}, and in particular on the uniform convergence (\ref{th:lim-mu*-unif}), in order to deduce that there exists $\ep_{0}\in(0,1)$ such that \begin{equation*} \left(\frac{L_{\ep}}{L}\right)^{3}\mu_{\ep}^{*}(\beta,L,M_{\ep,L,k})\leq \mu_{0}^{*}(\alpha_{0},\beta,L,M_{\ep,L,k})+\eta \end{equation*} for every $\ep\in(0,\ep_{0})$ and every $k\in\{1,\ldots,N_{\ep,L}\}$. We can absorb the cubic factor into $\eta$ because $L_{\ep}\to L$, and $\mu_{\ep}^{*}(\beta,L,M_{\ep,L,k})$ is uniformly bounded for $\ep$ small because of the uniform bound on the slopes $M_{\ep,L,k}$ and the continuity of the limit $\mu_{0}^{*}$ with respect to~$M$.
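As a sanity check on the constants, observe that the identity (\ref{defn:c1}) determines $\alpha_{0}$: solving $\frac{5}{4}(\alpha_{0}^{4}\beta/3)^{1/5}=10(2\beta/27)^{1/5}$ gives $\alpha_{0}=16/\sqrt{3}$. A quick numerical verification in Python (the value of $\alpha_{0}$ is inferred here from the identity itself, not restated from its definition):

```python
alpha0 = 16 / 3 ** 0.5   # value of alpha_0 forced by the identity (defn:c1)
for beta in (0.5, 1.0, 7.3):
    lhs = 1.25 * (alpha0 ** 4 * beta / 3) ** 0.2   # (5/4) (alpha_0^4 beta / 3)^{1/5}
    rhs = 10 * (2 * beta / 27) ** 0.2              # 10 (2 beta / 27)^{1/5}
    assert abs(lhs - rhs) < 1e-12 * rhs
```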
\paragraph{\textmd{\textit{Conclusion}}} Now we exploit the estimate from above in (\ref{th:est-mu*-abLM}), and we find that \begin{equation*} \mu_{0}^{*}(\alpha_{0},\beta,L,M_{\ep,L,k})\leq c_{1}|M_{\ep,L,k}|^{4/5}L+c_{3}|M_{\ep,L,k}|^{1/5}, \end{equation*} where again $c_{1}$ is given by (\ref{defn:c1}), and as in the previous case we conclude that \begin{eqnarray*} \frac{\PMF_{\ep}(\beta,f,(0,1),u_{\ep})}{\omep^{2}} & \leq & (1+\eta)c_{1}L\omep\sum_{k=1}^{N_{\ep,L}}|M_{\ep,L,k}|^{4/5} \\[1ex] & & \mbox{}+\omep N_{\ep,L}\cdot(1+\eta)\left(c_{3}M_{\infty}^{1/5}+\eta\right) \\[1ex] & & \mbox{}+\left(1+\frac{1}{\eta}\right) \frac{\beta}{\omep^{2}}\int_{0}^{1}(f(x)-f_{\ep,L}(x))^{2}\,dx. \end{eqnarray*} Letting $\ep\to 0^{+}$, we obtain that this family $\{u_{\ep}\}$ satisfies \begin{equation*} \limsup_{\ep\to 0^{+}}\frac{\PMF_{\ep}(\beta,f,(0,1),u_{\ep})}{\omep^{2}} \leq (1+\eta)\left\{c_{1}\int_{0}^{1}|f'(x)|^{4/5}\,dx +\frac{c_{3}M_{\infty}^{1/5}+\eta}{L}\right\}. \end{equation*} Now we observe that the right-hand side tends to the right-hand side of (\ref{est:lim-SPM-above}) when $\eta\to 0^{+}$ and $L\to +\infty$. Therefore, with a standard diagonal procedure we can find a family $\{u_{\ep}\}\subseteq H^{2}((0,1))$ for which exactly (\ref{est:lim-SPM-above}) holds true. \qed \subsection{Blow-ups at standard resolution (Theorem~\ref{thm:BU})} The proof of Theorem~\ref{thm:BU} consists of three main parts. In the first two parts we address the compactness of fake and true blow-ups. In the final part we show how to achieve all possible translations of the canonical staircase. \subsubsection{Compactness of fake blow-ups and oblique translations}\label{sec:fake-bu} Let us set for simplicity $x_{n}:=x_{\ep_{n}}$, and let $w_{n}(y):=w_{\ep_{n}}(y)$ denote the corresponding fake blow-ups, defined in the interval $(A_{n},B_{n})$ with \begin{equation} A_{n}:=-\frac{x_{n}}{\omepn}, \qquad\qquad B_{n}:=\frac{1-x_{n}}{\omepn}. 
\label{defn:An-Bn} \end{equation} We need to show that the sequence $\{w_{n}(y)\}$ has a subsequence that converges locally strictly in $BV\loc(\re)$ to some oblique translation of the canonical $(H,V)$-staircase. To this end, we introduce the function $g_{n}:(A_{n},B_{n})\to\re$ defined by \begin{equation} g_{n}(y):=\frac{f(x_{n}+\omepn y)-f(x_{n})}{\omepn} \qquad \forall y\in(A_{n},B_{n}). \label{defn:gn} \end{equation} We are now in a position to apply Proposition~\ref{prop:bu2minloc}. Let us check the assumptions. \begin{itemize} \item Since $x_{n}\to x_{0}\in(0,1)$, passing to the limit in (\ref{defn:An-Bn}) we see that $A_{n}\to -\infty$ and $B_{n}\to +\infty$. \item Since the forcing term $f(x)$ is of class $C^{1}$, passing to the limit in (\ref{defn:gn}) we see that $g_{n}(y)\to f'(x_{0})\cdot y$ uniformly on bounded subsets of $\re$. \item With the change of variable $x=x_{n}+\omepn y$ we obtain that \begin{equation} \PMF_{\ep_{n}}(\beta,f,(0,1),u_{\ep_{n}})= \omepn^{3}\cdot\RPMF_{\ep_{n}}(\beta,g_{n},(A_{n},B_{n}),w_{n}). \label{eqn:PMF-vs-RPMF} \end{equation} Since $u_{\ep_{n}}(x)$ is a minimizer of the original functional $u\mapsto\PMF_{\ep_{n}}(\beta,f,(0,1),u)$, it follows that $w_{n}(y)$ is a minimizer of $w\mapsto\RPMF_{\ep_{n}}(\beta,g_{n},(A_{n},B_{n}),w)$. \item Due to (\ref{eqn:PMF-vs-RPMF}), estimate (\ref{hp:bound-G}) follows from Theorem~\ref{thm:asympt-min} as soon as $|\log\ep_{n}|\geq 1$. \end{itemize} At this point, from Proposition~\ref{prop:bu2minloc} we deduce that the sequence $\{w_{n}(y)\}$ converges locally strictly in $BV\loc(\re)$, at least up to subsequences, to an entire local minimizer of the limit functional (\ref{defn:JF}), with $\alpha$ given by (\ref{defn:alpha-0}). Finally, from Proposition~\ref{prop:loc-min-class} we know that all these entire local minimizers are oblique translations of the canonical $(H,V)$-staircase, with parameters given by (\ref{defn:HV}). 
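The uniform convergence $g_{n}(y)\to f'(x_{0})\,y$ exploited above can be illustrated numerically: by Taylor expansion, the error in (\ref{defn:gn}) is at most $\sup|f''|\cdot\omep\,y^{2}/2$ on bounded sets. A minimal Python sketch with a sample $C^{1}$ forcing term (our choice):

```python
import math

f, fp = math.sin, math.cos   # sample C^1 forcing term and its derivative
x0 = 0.4

def g(omega, y):
    """Rescaled increment (defn:gn), with x_n frozen at x0 for simplicity."""
    return (f(x0 + omega * y) - f(x0)) / omega

for omega in (1e-2, 1e-3, 1e-4):
    grid = [k / 10 - 5 for k in range(101)]   # y in [-5, 5]
    err = max(abs(g(omega, y) - fp(x0) * y) for y in grid)
    # Taylor bound: |g(omega, y) - f'(x0) y| <= sup|f''| * omega * y^2 / 2
    assert err <= 0.5 * omega * 5 ** 2 / 2 + 1e-12
```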
\begin{rmk}[Back to Remark~\ref{rmk:BU-semi-int}]\label{rmk:boundary-proof} \begin{em} Let us consider the case where $x_{\ep}\to x_{0}\in\{0,1\}$. If (\ref{hp:BU-internal}) holds true, then again $A_{n}\to -\infty$ and $B_{n}\to +\infty$ for every sequence $\ep_{n}\to 0^{+}$, and hence the previous proof still works. If (\ref{hp:BU-internal}) fails, then when $x_{0}=0$ it may happen that $A_{n}\to A_{\infty}\in(-\infty,0]$ and $B_{n}\to +\infty$ for some sequence $\ep_{n}\to 0^{+}$. In this case it is convenient to introduce the translated functions \begin{equation*} \widehat{w}_{n}(y):=w_{n}(y+A_{n})-f'(0)A_{n} \qquad\text{and}\qquad \widehat{g}_{n}(y):=g_{n}(y+A_{n})-f'(0)A_{n}. \end{equation*} We observe that these functions are defined in the interval $(0,L_{n})$ with $L_{n}:=B_{n}-A_{n}$, so that $L_{n}\to +\infty$. We observe also that \begin{equation*} \widehat{w}_{n}\in\argmin\Rloc\left\{\RPMF_{\ep_{n}}(\beta,\widehat{g}_{n},(0,L_{n}),v):v\in H^{2}((0,L_{n}))\right\}, \end{equation*} and that $\widehat{g}_{n}(y)\to f'(0)\cdot y$ uniformly on bounded subsets of $(0,+\infty)$. This means that we are in the framework of Proposition~\ref{prop:bu2Rminloc}, from which we deduce that the whole sequence $\{\widehat{w}_{n}(y)\}$ converges to the unique semi-entire local minimizer in $(0,+\infty)$ of the limit functional (\ref{defn:JF}), with $\alpha$ given by (\ref{defn:alpha-0}). This semi-entire local minimizer is given by (\ref{defn:semi-entire}), and the convergence is strict in $BV((0,L))$ for every $L>0$ that is not a jump point of the limit. This is a rigorous way of saying that $w_{n}(y)$ converges to $w(y-A_{\infty})+f'(0)A_{\infty}$, and the latter is the oblique translation of the unique semi-entire local minimizer that ``starts at $y=A_{\infty}$''. The case where $x_{0}=1$, and for some sequence $\ep_{n}\to 0^{+}$ it happens that $A_{n}\to -\infty$ and $B_{n}\to B_{\infty}\in[0,+\infty)$, is symmetric.
\end{em} \end{rmk} \subsubsection{Compactness of true blow-ups and graph translations}\label{sec:true-bu} Let us define $x_{n}$ and $w_{n}(y)$ as before, and let $v_{n}(y):=v_{\ep_{n}}(y)$ denote the corresponding true blow-ups. We observe that true blow-ups are related to the fake blow-ups by the equality \begin{equation} v_{n}(y)=w_{n}(y)-w_{n}(0) \qquad \forall y\in(A_{n},B_{n}), \label{fake-vs-true-bu} \end{equation} and therefore the asymptotic behavior of the sequence $\{v_{n}(y)\}$ can be deduced from the asymptotic behavior of the sequence $\{w_{n}(y)\}$. More precisely, let us assume that \begin{equation*} w_{n_{k}}(y)\auto S_{H,V}(y-H\tau_{0})+V\tau_{0} \quad\mbox{in }BV\loc(\re) \end{equation*} for some sequence $n_{k}\to+\infty$ and some $\tau_{0}\in[-1,1]$. Then we distinguish two cases. \begin{itemize} \item Let us assume that $|\tau_{0}|<1$. In this case $y=0$ is not a discontinuity point of the limit of fake blow-ups, and hence the strict convergence implies pointwise convergence (see statement~(\ref{strict:cont}) in Remark~\ref{rmk:strict}), so that \begin{equation*} \lim_{k\to +\infty}w_{n_{k}}(0)= S_{H,V}(-H\tau_{0})+V\tau_{0}= V\tau_{0}. \end{equation*} Therefore, from (\ref{fake-vs-true-bu}) we deduce that $v_{n_{k}}(y)\auto S_{H,V}(y-H\tau_{0})$ in $BV\loc(\re)$, and we conclude by observing that the limit is a graph translation of horizontal type of the canonical $(H,V)$-staircase, as required. \item Let us assume that $\tau_{0}=\pm 1$, and hence $\tau_{0}=1$ without loss of generality (because oblique translations corresponding to $\tau_{0}=1$ and $\tau_{0}=-1$ coincide). In this case $y=0$ is a discontinuity point of the limit of fake blow-ups, and hence strict convergence (see statement~(\ref{strict:cont}) in Remark~\ref{rmk:strict}) implies only that \begin{equation*} -V\leq\liminf_{k\to +\infty}w_{n_{k}}(0)\leq\limsup_{k\to +\infty}w_{n_{k}}(0)\leq V. 
\end{equation*} As a consequence, up to a further subsequence (not relabeled), $w_{n_{k}}(0)$ tends to some value in $[-V,V]$ that we can always write in the form $V\tau_{1}$ for some real number $\tau_{1}\in[-1,1]$. Therefore, from (\ref{fake-vs-true-bu}) we deduce that, along this further subsequence, \begin{equation*} v_{n_{k}}(y)\auto S_{H,V}(y-H)+V-V\tau_{1} \quad\mbox{in }BV\loc(\re), \end{equation*} and we conclude by observing that the limit is a graph translation of vertical type of the canonical $(H,V)$-staircase, as required. \end{itemize} \subsubsection{Realization of all possible oblique/horizontal/vertical translations} In the constructions we can assume, without loss of generality, that $f'(x_{0})\neq 0$, because otherwise all families of fake or true blow-ups converge to the trivial staircase that is identically~0, in which case there is nothing to prove. \paragraph{\textmd{\textit{Canonical staircase}}} We show that there exists a family $x_{\ep}'\to x_{0}$ satisfying (\ref{th:liminf-limsup}), and (\ref{th:BU-conv-fake}) with $w_{0}(y):=S_{H,V}(y)$. The natural idea is to look for the fake blow-ups that minimize some distance from the desired limit. To this end, for every $\ep\in(0,1)$ small enough we consider the function \begin{equation*} \psi_{\ep}(x):= \int_{-H}^{H}\left|\frac{u_{\ep}(x+\omep y)-f(x)}{\omep}\right|\,dy. \end{equation*} It is a continuous function of $x$, and therefore it admits at least one minimum point $x_{\ep}'$ in the interval $[x_{\ep}-H\omep,x_{\ep}+H\omep]$. We claim that $\{x_{\ep}'\}$ is the required family. To begin with, we observe that (\ref{th:liminf-limsup}) is automatic from the definition, and we call \begin{equation} w_{\ep}(y):=\frac{u_{\ep}(x_{\ep}'+\omep y)-f(x_{\ep}')}{\omep} \label{defn:wep'} \end{equation} the corresponding fake blow-ups. 
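Before checking the claim, we record a property of the canonical staircase that we exploit several times in the computations below (for instance when showing that translations by integer multiples of $2H$ leave the limit unchanged), namely the quasi-periodicity identity
\begin{equation*}
S_{H,V}(y+2kH)=S_{H,V}(y)+2kV
\qquad
\forall y\in\re,
\end{equation*}
valid for every integer $k$, which encodes the fact that the steps of $S_{H,V}(y)$ have horizontal length $2H$ and vertical height $2V$.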
If we assume by contradiction that $\{w_{\ep}(y)\}$ does not converge to $S_{H,V}(y)$, then from the compactness result we know that there exists a sequence $\ep_{n}\to 0^{+}$ such that $w_{\ep_{n}}(y)$ converges locally strictly in $BV\loc(\re)$ to some oblique translation $z_{0}(y)$ of $S_{H,V}(y)$, different from $S_{H,V}(y)$ itself, and in particular \begin{equation*} \lim_{n\to +\infty}\psi_{\ep_{n}}(x_{\ep_{n}}')= \lim_{n\to+\infty}\int_{-H}^{H}|w_{\ep_{n}}(y)|\,dy= \int_{-H}^{H}|z_{0}(y)|\,dy>0. \end{equation*} On the other hand, since $z_{0}$ is an oblique translation, it can be written in the form \begin{equation*} z_{0}(y)=S_{H,V}(y-H\tau_{1})+V\tau_{1} \end{equation*} for a suitable $\tau_{1}\in[-1,1]$, with $\tau_{1}\neq 0$. Now for every positive integer $n$ we set \begin{equation*} x_{\ep_{n}}'':=x_{\ep_{n}}'+(2k_{n}+\tau_{1})H\omepn, \end{equation*} where $k_{n}\in\{-1,0,1\}$ is chosen in such a way that \begin{equation*} x_{\ep_{n}}-H\omepn\leq x_{\ep_{n}}''< x_{\ep_{n}}+H\omepn \end{equation*} (we point out that there is always exactly one possible choice of $k_{n}$). We claim that \begin{equation} \frac{u_{\ep_{n}}(x_{\ep_{n}}''+\omepn y)-f(x_{\ep_{n}}'')}{\omepn} \auto S_{H,V}(y) \qquad \text{in }BV\loc(\re), \label{claim:BU2v0} \end{equation} and in particular the convergence is also in $L^{1}((-H,H))$. This implies that $\psi_{\ep_{n}}(x_{\ep_{n}}'')\to 0$, and hence $\psi_{\ep_{n}}(x_{\ep_{n}}'')<\psi_{\ep_{n}}(x_{\ep_{n}}')$ when $n$ is large enough, thus contradicting the minimality of $x_{\ep_{n}}'$. In order to prove (\ref{claim:BU2v0}), up to subsequences (not relabeled) we can always assume that $k_{n}$ is actually a constant $k_{\infty}$. 
Now we observe that \begin{equation*} \frac{u_{\ep_{n}}(x_{\ep_{n}}''+\omepn y)-f(x_{\ep_{n}}'')}{\omepn}= w_{\ep_{n}}(y+(2k_{\infty}+\tau_{1})H)- \frac{f(x_{\ep_{n}}'')-f(x_{\ep_{n}}')}{\omepn}, \end{equation*} so that in particular \begin{equation*} w_{\ep_{n}}(y+(2k_{\infty}+\tau_{1})H)\auto z_{0}(y+(2k_{\infty}+\tau_{1})H)= S_{H,V}(y+2k_{\infty}H)+V\tau_{1}, \end{equation*} and \begin{equation*} \lim_{n\to +\infty}\frac{f(x_{\ep_{n}}'')-f(x_{\ep_{n}}')}{\omepn}= (2k_{\infty}+\tau_{1})H\cdot f'(x_{0})= (2k_{\infty}+\tau_{1})V. \end{equation*} It follows that \begin{equation*} \frac{u_{\ep_{n}}(x_{\ep_{n}}''+\omepn y)-f(x_{\ep_{n}}'')}{\omepn}\auto S_{H,V}(y+2k_{\infty}H)-2k_{\infty}V, \end{equation*} and we conclude by observing that the latter coincides with $S_{H,V}(y)$. This completes the proof of (\ref{claim:BU2v0}). \paragraph{\textmd{\textit{All oblique translations}}} Let $x_{\ep}'\to x_{0}$ be the family that we found in the previous paragraph, namely a family satisfying (\ref{th:liminf-limsup}), and (\ref{th:BU-conv-fake}) with $w_{0}(y):=S_{H,V}(y)$. If we need to obtain a different oblique translation of the form $w_{0}(y)=S_{H,V}(y-H\tau_{0})+V\tau_{0}$ for some $\tau_{0}\in(-1,1]$, then it is enough to consider the family \begin{equation*} x_{\ep}'':=x_{\ep}'-H\tau_{0}\omep+2k_{\ep}H\omep, \end{equation*} where $k_{\ep}\in\{-1,0,1\}$ is chosen in such a way that $x_{\ep}''\in[x_{\ep}-H\omep,x_{\ep}+H\omep]$. Indeed, it is enough to observe that \begin{equation} \frac{u_{\ep}(x_{\ep}''+\omep y)-f(x_{\ep}'')}{\omep}= w_{\ep}(y+(2k_{\ep}-\tau_{0})H)- \frac{f(x_{\ep}'')-f(x_{\ep}')}{\omep}, \label{eqn:fbu-''-'} \end{equation} where $w_{\ep}$ is the family of fake blow-ups with centers in $x_{\ep}'$. At this point, if needed we split the family into three subfamilies according to the value of $k_{\ep}$. 
In the subfamily where $k_{\ep}$ is equal to some constant $k_{0}$ we obtain that \begin{equation*} w_{\ep}(y+(2k_{\ep}-\tau_{0})H)\auto S_{H,V}(y+(2k_{0}-\tau_{0})H), \end{equation*} and \begin{equation*} \frac{f(x_{\ep}'')-f(x_{\ep}')}{\omep}\to (2k_{0}-\tau_{0})H\cdot f'(x_{0})=(2k_{0}-\tau_{0})V. \end{equation*} This implies that the left-hand side of (\ref{eqn:fbu-''-'}) converges locally strictly to \begin{equation*} S_{H,V}(y+(2k_{0}-\tau_{0})H)-(2k_{0}-\tau_{0})V, \end{equation*} which is equal to $S_{H,V}(y-H\tau_{0})+V\tau_{0}$, independently of $k_{0}$, as required. \paragraph{\textmd{\textit{Graph translations of horizontal type}}} In this paragraph we show that any graph translation of the form $S_{H,V}(y-H\tau_{0})$, with $\tau_{0}\in[-1,1]$, can be obtained as the limit of a suitable family of true blow-ups whose centers satisfy (\ref{th:liminf-limsup}). To begin with, we observe that the set of possible limits is closed with respect to the locally strict convergence in $BV\loc(\re)$, and therefore it is enough to obtain all limits with $\tau_{0}$ in the open interval $(-1,1)$. In this case, we claim that we can take the same family $x_{\ep}'\to x_{0}$ whose fake blow-ups converge to $S_{H,V}(y-H\tau_{0})+V\tau_{0}$, with the same value of $\tau_{0}$. Indeed, we observe again that \begin{equation} \frac{\uep(x_{\ep}'+\omep y)-\uep(x_{\ep}')}{\omep}= w_{\ep}(y)-w_{\ep}(0), \label{eqn:fake-true} \end{equation} where $w_{\ep}(y)$ denotes the corresponding family of fake blow-ups, defined as in (\ref{defn:wep'}). Now we know that $w_{\ep}(y)\auto S_{H,V}(y-H\tau_{0})+V\tau_{0}$. Moreover, if $\tau_{0}\in(-1,1)$ the limit function is continuous in $y=0$, and therefore the strict convergence implies also that $w_{\ep}(0)\to S_{H,V}(-H\tau_{0})+V\tau_{0}=V\tau_{0}$. Plugging these two results into (\ref{eqn:fake-true}) we obtain that the left-hand side converges to $S_{H,V}(y-H\tau_{0})$, as required. 
\paragraph{\textmd{\textit{Graph translations of vertical type}}} In this final paragraph we show that any graph translation of the form $S_{H,V}(y-H)+(1-\tau_{0})V$, with $\tau_{0}\in[-1,1]$, can be obtained as the limit of a suitable family of true blow-ups whose centers satisfy (\ref{th:liminf-limsup}). To begin with, as in the case of graph translations of horizontal type we reduce ourselves to the case where $\tau_{0}\in(-1,1)$. In this case we consider the family $x_{\ep}'\to x_{0}$ whose fake blow-ups $w_{\ep}(y)$ defined by (\ref{defn:wep'}) converge to $S_{H,V}(y)$. Since $S_{H,V}(y)$ is continuous in $y=-2H$ and $y=2H$, the strict convergence implies in particular that \begin{equation*} \lim_{\ep\to 0^{+}}w_{\ep}(-2H)=-2V \qquad\quad\text{and}\quad\qquad \lim_{\ep\to 0^{+}}w_{\ep}(2H)=2V. \end{equation*} Recalling that $w_{\ep}(y)$ is continuous in $y$ and vanishes for $y=0$, this means that when $\ep\in(0,1)$ is small enough there exist $a_{\ep}\in(-2H,0)$ and $b_{\ep}\in(0,2H)$ such that \begin{equation} w_{\ep}(a_{\ep})=(-1+\tau_{0})V \qquad\quad\text{and}\quad\qquad w_{\ep}(b_{\ep})=(1+\tau_{0})V. \label{defn:aep-bep} \end{equation} These two conditions imply in particular that \begin{equation} \lim_{\ep\to 0^{+}}a_{\ep}=-H \qquad\quad\text{and}\quad\qquad \lim_{\ep\to 0^{+}}b_{\ep}=H, \label{lim:aep-bep} \end{equation} because a limit of blow-ups can attain a value different from an integer multiple of $2V$ only at the jump points of the limit function $S_{H,V}(y)$. Now we set \begin{equation*} x_{\ep}'':=\begin{cases} x_{\ep}'+\omep a_{\ep}\quad & \text{if }x_{\ep}'\geq x_{\ep}, \\ x_{\ep}'+\omep b_{\ep} & \text{if }x_{\ep}'< x_{\ep}, \end{cases} \end{equation*} and we claim that this is the required family. 
Indeed, from the definition it follows that \begin{equation*} \frac{|x_{\ep}''-x_{\ep}|}{\omep}\leq \max\left\{\frac{|x_{\ep}'-x_{\ep}|}{\omep},-a_{\ep},b_{\ep}\right\}, \end{equation*} and therefore (\ref{th:liminf-limsup}) for $x_{\ep}''$ follows from (\ref{th:liminf-limsup}) for $x_{\ep}'$ and (\ref{lim:aep-bep}). In order to compute the limit of the true blow-ups with center in $x_{\ep}''$, we consider the two subfamilies where $x_{\ep}''$ is defined using $a_{\ep}$ or $b_{\ep}$. In the first case from (\ref{defn:aep-bep}) and (\ref{lim:aep-bep}) we deduce that \begin{eqnarray*} \frac{\uep(x_{\ep}''+\omep y)-\uep(x_{\ep}'')}{\omep} & = & w_{\ep}(y+a_{\ep})-w_{\ep}(a_{\ep}), \\ & \auto & S_{H,V}(y-H)-(-1+\tau_{0})V, \end{eqnarray*} as required. Analogously, in the second case we obtain that \begin{equation*} \frac{\uep(x_{\ep}''+\omep y)-\uep(x_{\ep}'')}{\omep}\auto S_{H,V}(y+H)-(1+\tau_{0})V, \end{equation*} which again coincides with $S_{H,V}(y-H)+(1-\tau_{0})V$, as required. \qed \subsection{Convergence of minimizers to the forcing term} \subsubsection{Strict convergence (statement~(1) of Theorem~\ref{thm:varifold})} Since the limit $f(x)$ is continuous, we know that uniform convergence in $[0,1]$ follows from strict convergence (see statement~(\ref{strict:cont}) in Remark~\ref{rmk:strict}). As for strict convergence, we already know from Proposition~\ref{prop:basic} that $u_{\ep}\to f$ in $L^{2}((0,1))$. Therefore, it remains to show that (the opposite inequality is trivial) \begin{equation} \limsup_{\ep\to 0^{+}}\int_{0}^{1}|u_{\ep}'(x)|\,dx\leq \int_{0}^{1}|f'(x)|\,dx. \label{th:limsup-strict} \end{equation} Let us assume by contradiction that this is not the case, and hence there exist a positive real number $\eta_{0}$ and a sequence $\{\ep_{n}\}\subseteq(0,1)$ such that $\ep_{n}\to 0^{+}$ and \begin{equation} \int_{0}^{1}|u_{\ep_{n}}'(x)|\,dx\geq \int_{0}^{1}|f'(x)|\,dx+\eta_{0} \qquad \forall n\in\n. 
\label{defn:eta-0} \end{equation} For every fixed positive real number $L$, in analogy with (\ref{defn:NepAep}) we set \begin{equation*} N_{n}:=\left\lfloor\frac{1}{L\omepn}\right\rfloor \qquad\quad\text{and}\quad\qquad L_{n}:=\frac{1}{N_{n}\omepn}, \end{equation*} and we consider the intervals $I_{n,k}:=((k-1)L_{n}\omepn,kL_{n}\omepn)$ with $k\in\{1,\ldots,N_{n}\}$. Since we can rewrite (\ref{defn:eta-0}) in the form \begin{equation*} \sum_{k=1}^{N_{n}}\int_{I_{n,k}}\left(|u_{\ep_{n}}'(x)|-|f'(x)|\right)\,dx\geq\eta_{0}, \end{equation*} we deduce that for every $n\in\n$ there exists an integer $k_{n}$ such that \begin{equation} \int_{I_{n,k_{n}}}\left(|u_{\ep_{n}}'(x)|-|f'(x)|\right)\,dx\geq\frac{\eta_{0}}{N_{n}}. \label{defn:kn} \end{equation} Now we set $x_{n}:=(k_{n}-1)\omepn L_{n}$, and we consider the corresponding fake blow-ups \begin{equation} w_{n}(y):=\frac{u_{\ep_{n}}(x_{n}+\omepn y)-f(x_{n})}{\omepn}. \label{defn:FBU-strict} \end{equation} With the change of variable $x=x_{n}+\omepn y$, we can rewrite (\ref{defn:kn}) in the form \begin{equation} \int_{0}^{L_{n}}\left(|w_{n}'(y)|-|f'(x_{n}+\omepn y)|\strut\right)\,dy\geq \frac{\eta_{0}}{N_{n}\omepn}\geq \eta_{0}L. \label{defn:kn-bis} \end{equation} Up to subsequences (not relabeled) we can always assume that $x_{n}$ converges to some $x_{\infty}\in[0,1]$. Let us assume for the moment that $x_{\infty}\in(0,1)$. From the continuity of $f'(x)$ we deduce that $f'(x_{n}+\omepn y)\to f'(x_{\infty})$ uniformly on bounded subsets of $\re$ and in particular, since $L_{n}\to L$, we obtain that \begin{equation} \lim_{n\to+\infty}\int_{0}^{L_{n}}|f'(x_{n}+\omepn y)|\,dy=|f'(x_{\infty})|L. \label{est:lim-f'} \end{equation} Moreover, from statement~(1) of Theorem~\ref{thm:BU} we deduce that, up to a further subsequence (not relabeled), $w_{n}\auto w_{\infty}$ in $BV\loc(\re)$, where $w_{\infty}$ is an oblique translation of a canonical staircase with parameters depending on $\beta$ and $f'(x_{\infty})$. 
As a consequence, from statement~(\ref{strict:sub-int}) of Remark~\ref{rmk:strict} we obtain that \begin{equation*} \limsup_{n\to +\infty}\int_{0}^{L_{n}}|w_{n}'(y)|\,dy\leq \lim_{n\to +\infty}\int_{a}^{b}|w_{n}'(y)|\,dy= |Dw_{\infty}|((a,b)) \end{equation*} for every interval $(a,b)\supseteq[0,L]$ whose endpoints $a$ and $b$ are not jump points of $w_{\infty}$. If we consider any sequence of such intervals whose intersection is $[0,L]$, we deduce that \begin{equation} \limsup_{n\to +\infty}\int_{0}^{L_{n}}|w_{n}'(y)|\,dy\leq |Dw_{\infty}|([0,L]). \label{est:lim-wn'} \end{equation} From (\ref{defn:kn-bis}), (\ref{est:lim-f'}) and (\ref{est:lim-wn'}) we conclude that \begin{equation} |Dw_{\infty}|([0,L])-|f'(x_{\infty})|L\geq\eta_{0}L. \label{est:L-V} \end{equation} Now we observe that the left-hand side is the difference between the total variation of $w_{\infty}$ in $[0,L]$ and the total variation of the line $y\mapsto |f'(x_{\infty})|y$ in the same interval. Since $w_{\infty}(y)$ is a staircase with the property that the midpoints of the vertical parts of the steps lie on the same line, the left-hand side of (\ref{est:L-V}) is bounded from above by the height of each step of the staircase. Now both $w_{\infty}$ and $x_{\infty}$ might depend on $L$, but in any case the height of the steps depends only on $\beta$ and $|f'(x_{\infty})|$, and the latter is bounded independently of $L$ because $f$ is of class $C^{1}$. In conclusion, the left-hand side of (\ref{est:L-V}) is bounded from above independently of $L$, and this contradicts (\ref{est:L-V}) when $L$ is large enough. Let us consider now the case where $x_{\infty}=0$ (the case $x_{\infty}=1$ is symmetric). In this case we consider the sequence $\{k_{n}\}$. If it is unbounded, then up to subsequences we can assume that it diverges to $+\infty$. 
In this case the intervals where the functions $w_{n}(y)$ of (\ref{defn:FBU-strict}) are defined eventually invade the whole real line, and therefore the previous argument works without any change (see also Remark~\ref{rmk:BU-semi-int}). If the sequence $\{k_{n}\}$ is bounded, then up to subsequences we can assume that it is equal to some fixed positive integer $k_{\infty}$. In this case the functions $w_{n}(y)$ are all defined in the same half-line $y>y_{\infty}$ with $y_{\infty}:=-(k_{\infty}-1)L$, and in this half-line they converge to a limit staircase $w_{\infty}(y)$ (see Remark~\ref{rmk:boundary-proof}). The convergence is strict in every interval of the form $(y_{\infty},b)$, where $b$ is not a jump point of $w_{\infty}(y)$, and of course also in all intervals of the form $(a,b)$ where $a$ and $b$ are not jump points of $w_{\infty}(y)$. Moreover, the function $w_{\infty}$ is the unique semi-entire right-hand local minimizer of $\JF_{1/2}$ with the appropriate parameters in this half-line, namely the suitable oblique translation of the function defined in (\ref{defn:semi-entire}), which is again a staircase with the property that the midpoints of the vertical parts of the steps lie on the line $f'(x_{\infty})y$. In conclusion, we obtain that (\ref{est:L-V}) holds true also in this case, and as before the left-hand side depends only on the difference between the ``values'' of $w_{\infty}(y)$ and of the line $f'(x_{\infty})y$ at the two endpoints. This difference is bounded from above by the height of the steps of $w_{\infty}$. These steps could be either the ``ordinary steps'' or the ``initial step'', which is higher, but in any case their height is independent of $L$. 
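We observe that, at least in the interior case, the bound on the left-hand side of (\ref{est:L-V}) can be made quantitative. Indeed, if $2H$ and $2V$ denote the length and the height of the steps of $w_{\infty}$, then the condition that the midpoints of the vertical parts lie on a line of slope $\pm|f'(x_{\infty})|$ forces $V=H|f'(x_{\infty})|$, and the interval $[0,L]$ contains at most $L/(2H)+1$ jump points of $w_{\infty}$, so that
\begin{equation*}
|Dw_{\infty}|([0,L])\leq
2V\left(\frac{L}{2H}+1\right)=|f'(x_{\infty})|L+2V.
\end{equation*}
Hence the left-hand side of (\ref{est:L-V}) is at most $2V$, and this contradicts (\ref{est:L-V}) as soon as $L>2V/\eta_{0}$.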
\subsubsection{Varifold convergence (statement~(2) of Theorem~\ref{thm:varifold})} \paragraph{\textmd{\textit{Notations and splitting of the graph}}} In analogy with (\ref{defn:V0+-}), for every $\ep\in(0,1)$ we set \begin{equation*} V_{\ep}^{+}:=\{x\in[0,1]:u_{\ep}'(x)>0\} \qquad\text{and}\qquad V_{\ep}^{-}:=\{x\in[0,1]:u_{\ep}'(x)<0\}. \end{equation*} From statement~(\ref{th:conv*+-}) of Remark~\ref{rmk:strict} we know that the strict convergence of $u_{\ep}(x)$ to $f(x)$ implies in particular that \begin{equation} \lim_{\ep\to 0^{+}}\int_{V_{\ep}^{+}}g(x)u_{\ep}'(x)\,dx= \int_{V_{0}^{+}}g(x)f'(x)\,dx \label{th:conv-measure} \end{equation} for every continuous function $g:[0,1]\to\re$, and similarly with $V_{\ep}^{-}$ and $V_{0}^{-}$. We observe also that the strict convergence $u_{\ep}\auto f$ in $BV((0,1))$ implies that the family $\{u_{\ep}(x)\}$ is bounded in $L^{\infty}((0,1))$, and therefore there exist real numbers $\ep_{0}\in(0,1)$ and $M_{0}\geq 0$ such that \begin{equation} |\phi(x,u_{\ep}(x),\arctan p)|\leq M_{0} \qquad \forall(x,p)\in[0,1]\times\re \quad \forall\ep\in(0,\ep_{0}). \label{est:bound-phi} \end{equation} Now for every $a\in(0,1)$ we define the three sets \begin{gather*} I_{a}:=\left\{(x,s,p)\in[0,1]\times\re\times\re:|s-f(x)|\leq a,\ |p|\leq a\right\}, \\[1ex] I_{a}^{+}:=\left\{(x,s,p)\in[0,1]\times\re\times\re:|s-f(x)|\leq a,\ p\geq 1/a\right\}, \\[1ex] I_{a}^{-}:=\left\{(x,s,p)\in[0,1]\times\re\times\re:|s-f(x)|\leq a,\ p\leq-1/a\right\}, \end{gather*} and the corresponding three constants \begin{gather*} \Gamma_{a}:=\max\left\{|\phi(x,s,\arctan p)-\phi(x,f(x),0)|:(x,s,p)\in I_{a}\right\}, \\[1ex] \Gamma_{a}^{+}:=\max\left\{|\phi(x,s,\arctan p)-\phi(x,f(x),\pi/2)|:(x,s,p)\in I_{a}^{+}\right\}, \\[1ex] \Gamma_{a}^{-}:=\max\left\{|\phi(x,s,\arctan p)-\phi(x,f(x),-\pi/2)|:(x,s,p)\in I_{a}^{-}\right\}. 
\end{gather*} We observe that, due to the boundedness of $f(x)$ and the uniform continuity of $\phi$ in bounded sets, these constants satisfy \begin{equation} \lim_{a\to 0^{+}}\Gamma_{a}=\lim_{a\to 0^{+}}\Gamma_{a}^{+}=\lim_{a\to 0^{+}}\Gamma_{a}^{-}=0. \label{lim-gamma-vari} \end{equation} Finally, for every $\ep\in(0,1)$ and every $a\in(0,1)$, we write the interval $[0,1]$ as the disjoint union of the four sets \begin{gather} H_{a,\ep}:=\left\{x\in[0,1]:|u_{\ep}'(x)|\leq a\right\}, \label{defn:H-a-ep} \\[1ex] V_{a,\ep}^{+}:=\left\{x\in[0,1]:u_{\ep}'(x)\geq 1/a\right\}, \qquad V_{a,\ep}^{-}:=\left\{x\in[0,1]:u_{\ep}'(x)\leq -1/a\right\}, \label{defn:V-a-ep} \\[1ex] M_{a,\ep}:=\left\{x\in[0,1]:a<|u_{\ep}'(x)|<1/a\right\}, \label{defn:M-a-ep} \end{gather} and accordingly we write \begin{equation*} \int_{0}^{1} \phi\left(x,u_{\ep}(x),\arctan(u_{\ep}'(x))\strut\right)\sqrt{1+u_{\ep}'(x)^{2}}\,dx= I_{a,\ep}^{H}+I_{a,\ep}^{+}+I_{a,\ep}^{-}+I_{a,\ep}^{M}, \end{equation*} where the four terms in the right-hand side are the integrals over the four sets defined above. We observe that \begin{eqnarray*} \PMF_{\ep}(\beta,f,(0,1),u_{\ep}) & \geq & \int_{0}^{1}\log\left(1+u_{\ep}'(x)^{2}\right)\,dx \\[0.5ex] & \geq & \log\left(1+a^{2}\right)\left(|V_{a,\ep}^{+}|+|V_{a,\ep}^{-}|+|M_{a,\ep}|\right), \end{eqnarray*} and, since the left-hand side tends to~0, we deduce that \begin{equation*} \lim_{\ep\to 0^{+}}|V_{a,\ep}^{+}|= \lim_{\ep\to 0^{+}}|V_{a,\ep}^{-}| =\lim_{\ep\to 0^{+}}|M_{a,\ep}|=0 \qquad \forall a\in(0,1), \end{equation*} and as a consequence \begin{equation*} \lim_{\ep\to 0^{+}}|H_{a,\ep}|=1 \qquad \forall a\in(0,1). 
\end{equation*} We claim that for every fixed $a\in(0,1)$ it turns out that \begin{gather} \limsup_{\ep\to 0^{+}}\left| I_{a,\ep}^{H}-\int_{0}^{1}\phi(x,f(x),0)\,dx \right|\leq M_{0}\left(\sqrt{1+a^{2}}-1\right)+\Gamma_{a}, \label{th:limsup-H} \\[1ex] \lim_{\ep\to 0^{+}}I_{a,\ep}^{M}=0, \label{th:limsup-M} \\[1ex] \limsup_{\ep\to 0^{+}}\left| I_{a,\ep}^{+}-\int_{V_{0}^{+}}\phi(x,f(x),\pi/2)\cdot f'(x)\,dx \right|\leq\Gamma_{a}^{+}\int_{0}^{1}|f'(x)|\,dx+M_{0}a, \label{th:limsup-V+} \\[1ex] \limsup_{\ep\to 0^{+}}\left| I_{a,\ep}^{-}-\int_{V_{0}^{-}}\phi(x,f(x),-\pi/2)\cdot |f'(x)|\,dx \right|\leq\Gamma_{a}^{-}\int_{0}^{1}|f'(x)|\,dx+M_{0}a. \label{th:limsup-V-} \end{gather} If we prove these claims, then we let $a\to 0^{+}$ and from (\ref{lim-gamma-vari}) we obtain exactly (\ref{th:varifold}). In words, this means that the integral in the left-hand side of (\ref{th:varifold}) splits into the four integrals over the regions (\ref{defn:H-a-ep}), (\ref{defn:V-a-ep}), (\ref{defn:M-a-ep}), which behave as follows. \begin{itemize} \item The integral over the ``intermediate'' region $M_{a,\ep}$ disappears in the limit. \item The integral over the ``horizontal'' region $H_{a,\ep}$ tends to the first integral in the right-hand side of (\ref{th:varifold}), in which the ``tangent component'' is horizontal. \item The integrals over the two ``vertical'' regions $V^{+}_{a,\ep}$ and $V^{-}_{a,\ep}$ tend to the two integrals over $V_{0}^{+}$ and $V_{0}^{-}$ in the right-hand side of (\ref{th:varifold}). In these two integrals the ``tangent component'' is vertical. \end{itemize} \paragraph{\textmd{\textit{Estimate in the intermediate regime}}} From (\ref{est:bound-phi}) we know that \begin{equation*} \left|\phi(x,u_{\ep}(x),\arctan(u_{\ep}'(x)))\right|\sqrt{1+u_{\ep}'(x)^{2}}\leq M_{0}\sqrt{1+\frac{1}{a^{2}}} \qquad \forall x\in M_{a,\ep}, \end{equation*} and therefore \begin{equation*} |I_{a,\ep}^{M}|\leq M_{0}\sqrt{1+\frac{1}{a^{2}}}\cdot|M_{a,\ep}|. 
\end{equation*} Since $|M_{a,\ep}|\to 0$ as $\ep\to 0^{+}$, this proves (\ref{th:limsup-M}). \paragraph{\textmd{\textit{Estimate in the horizontal regime}}} In order to prove (\ref{th:limsup-H}), we observe that \begin{eqnarray*} I_{a,\ep}^{H}-\int_{0}^{1}\phi(x,f(x),0)\,dx & = & \int_{H_{a,\ep}}\phi\left(x,u_{\ep}(x),\arctan(u_{\ep}'(x))\strut\right)\left(\sqrt{1+u_{\ep}'(x)^{2}}-1\right)\,dx \\ & & +\int_{H_{a,\ep}}\left\{\phi\left(x,u_{\ep}(x),\arctan(u_{\ep}'(x))\strut\right)-\phi(x,f(x),0)\right\}\,dx \\ & & +\int_{H_{a,\ep}}\phi(x,f(x),0)\,dx- \int_{0}^{1}\phi(x,f(x),0)\,dx. \end{eqnarray*} The absolute value of the first line in the right-hand side is less than or equal to $M_{0}\left(\sqrt{1+a^{2}}-1\right)$. The absolute value of the second line is less than or equal to $\Gamma_{a}$ provided that \begin{equation} |u_{\ep}(x)-f(x)|\leq a \qquad \forall x\in[0,1], \label{est:uep-f} \end{equation} and this happens whenever $\ep$ is small enough. The third line tends to~0 because $|H_{a,\ep}|\to 1$ as $\ep\to 0^{+}$. This is enough to establish (\ref{th:limsup-H}). \paragraph{\textmd{\textit{Estimate in the vertical regime}}} In order to prove (\ref{th:limsup-V+}), we observe that \begin{eqnarray*} \lefteqn{\hspace{-3em} I_{a,\ep}^{+}-\int_{V_{0}^{+}}\phi(x,f(x),\pi/2)\cdot f'(x)\,dx } \\ \qquad\qquad & = & \int_{V_{a,\ep}^{+}}\phi\left(x,u_{\ep}(x),\arctan(u_{\ep}'(x))\strut\right)\left(\sqrt{1+u_{\ep}'(x)^{2}}-u_{\ep}'(x)\right)\,dx \\[0.5ex] & & +\int_{V_{a,\ep}^{+}}\left\{\phi\left(x,u_{\ep}(x),\arctan(u_{\ep}'(x))\strut\right)-\phi(x,f(x),\pi/2)\right\}u_{\ep}'(x)\,dx \\[0.5ex] & & +\int_{V_{a,\ep}^{+}}\phi(x,f(x),\pi/2)u_{\ep}'(x)\,dx -\int_{V_{\ep}^{+}}\phi(x,f(x),\pi/2)u_{\ep}'(x)\,dx \\[0.5ex] & & +\int_{V_{\ep}^{+}}\phi(x,f(x),\pi/2)u_{\ep}'(x)\,dx -\int_{V_{0}^{+}}\phi(x,f(x),\pi/2)f'(x)\,dx \\[0.5ex] & =: & L_{1}+L_{2}+L_{3}+L_{4}. \end{eqnarray*} Let us consider the four lines separately. 
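In the estimate of the first line we exploit the elementary inequality
\begin{equation*}
\sqrt{1+p^{2}}-p=\frac{1}{\sqrt{1+p^{2}}+p}\leq\frac{1}{2p}
\qquad
\forall p>0,
\end{equation*}
which for $p\geq 1/a$ yields $\sqrt{1+p^{2}}-p\leq a/2$.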
The first line can be estimated as \begin{equation*} |L_{1}|\leq M_{0}\max\left\{\sqrt{1+p^{2}}-p:p\geq 1/a\right\}|V_{a,\ep}^{+}|\leq M_{0}\cdot\frac{a}{2}\cdot|V_{a,\ep}^{+}|, \end{equation*} and this tends to~0 when $\ep\to 0^{+}$. The second line can be estimated as \begin{equation*} |L_{2}|\leq\Gamma_{a}^{+}\cdot\int_{0}^{1}|u_{\ep}'(x)|\,dx \end{equation*} whenever (\ref{est:uep-f}) holds true, namely when $\ep$ is small enough. For the third line we observe that $V_{\ep}^{+}\setminus V_{a,\ep}^{+}\subseteq H_{a,\ep}\cup M_{a,\ep}$, and therefore \begin{eqnarray*} |L_{3}| & \leq & \int_{H_{a,\ep}}|\phi(x,f(x),0)|\cdot|u_{\ep}'(x)|\,dx+ \int_{M_{a,\ep}}|\phi(x,f(x),0)|\cdot|u_{\ep}'(x)|\,dx \\ & \leq & M_{0}a+M_{0}\cdot\frac{1}{a}\cdot|M_{a,\ep}|. \end{eqnarray*} Finally, we observe that $L_{4}\to 0$ as $\ep\to 0^{+}$ because of (\ref{th:conv-measure}). Recalling (\ref{th:limsup-strict}) and the fact that $|M_{a,\ep}|\to 0$ as $\ep\to 0^{+}$, from the previous estimates we conclude that \begin{eqnarray*} \limsup_{\ep\to 0^{+}}|L_{1}+L_{2}+L_{3}+L_{4}| & \leq & \limsup_{\ep\to 0^{+}}\Gamma_{a}^{+}\cdot\int_{0}^{1}|u_{\ep}'(x)|\,dx+ M_{0}a \\ & = & \Gamma_{a}^{+}\int_{0}^{1}|f'(x)|\,dx+M_{0}a, \end{eqnarray*} which proves (\ref{th:limsup-V+}). The proof of (\ref{th:limsup-V-}) is analogous. \qed \subsection{Low resolution blow-ups (Corollary~\ref{thm:spm-bu-low})} The pointwise convergence for $y=0$ is trivial, and therefore it is enough to check the convergence of total variations, which in turn reduces to \begin{equation} \limsup_{\ep\to 0^{+}}\int_{a}^{b}|u_{\ep}'(x_{\ep}+\alpha_{\ep}y)|\,dy\leq |f'(x_{0})|(b-a) \label{th:limsup-TV-alpha} \end{equation} for every interval $(a,b)\subseteq\re$. 
If we assume by contradiction that (\ref{th:limsup-TV-alpha}) fails, then the same argument we exploited in the proof of (\ref{th:limsup-strict}) shows that there exist a positive real number $\eta_{0}$, a sequence $\{\ep_{n}\}\subseteq(0,1)$ such that $\ep_{n}\to 0^{+}$, and a sequence of positive integers $k_{n}$ such that \begin{equation} \int_{0}^{L_{n}}|u_{\ep_{n}}'(\widehat{x}_{n}+\omepn y)|\,dy- |f'(x_{0})|L_{n}\geq \eta_{0}\cdot\frac{\alpha_{\ep_{n}}}{\omepn N_{n}}, \label{est:TV-alpha} \end{equation} where now \begin{equation*} N_{n}:=\left\lfloor\frac{(b-a)\alpha_{\ep_{n}}}{L\omepn}\right\rfloor, \qquad L_{n}:=\frac{(b-a)\alpha_{\ep_{n}}}{N_{n}\omepn}, \qquad \widehat{x}_{n}:=x_{\ep_{n}}+a\alpha_{\ep_{n}}+L_{n}\omepn(k_{n}-1), \end{equation*} and $k_{n}\in\{1,\ldots,N_{n}\}$. The integral in the left-hand side of (\ref{est:TV-alpha}) coincides with the total variation in the interval $(0,L_{n})$ of the fake blow-up of $u_{\ep_{n}}(x)$, at the standard scale $\omepn$, with center in $\widehat{x}_{n}$. Since $\widehat{x}_{n}\to x_{0}$ (here we exploit again that $k_{n}\leq N_{n}$ and $\omep/\alpha_{\ep}\to 0$), we know that these fake blow-ups converge strictly (up to subsequences) to some staircase $w_{\infty}$. Therefore, passing to the limit in (\ref{est:TV-alpha}) we deduce that \begin{equation*} |Dw_{\infty}|([0,L])-|f'(x_{0})|L\geq\frac{\eta_{0}L}{b-a}, \end{equation*} and we conclude exactly as in the proof of (\ref{th:limsup-strict}). \qed \setcounter{equation}{0} \section{Asymptotic analysis of local minimizers}\label{sec:loc-min-proof} This section is the technical core of the paper. Here we prove all the results that we stated in section~\ref{sec:loc-min}. \subsection{Preliminary lemmata} \begin{lemma}\label{lemma:equipartition} Let $C_{0}$ and $C_{1}$ be two positive real numbers. 
Let us consider the function $\varphi:(0,1)\to\re$ defined by \begin{equation*} \varphi(t):=C_{0}\left(\sqrt{t}+\sqrt{1-t}\right)+ C_{1}\left(t^{3}+(1-t)^{3}\right), \end{equation*} and let us assume that there exists $t_{0}\in(0,1)$ such that $\varphi(t)\geq\varphi(t_{0})$ for every $t\in(0,1)$. Then it turns out that $t_{0}=1/2$. \end{lemma} \begin{proof} With the change of variable $t=\sin^{2}\theta$, we can restate the claim as follows. Let us consider the function $g:(0,\pi/2)\to\re$ defined by \begin{equation*} g(\theta):=C_{0}\left(\cos\theta+\sin\theta\right)+ C_{1}\left(\cos^{6}\theta+\sin^{6}\theta\right); \end{equation*} if there exists $\theta_{0}\in(0,\pi/2)$ such that \begin{equation} g(\theta)\geq g(\theta_{0}) \qquad \forall\theta\in(0,\pi/2), \label{hp:g-min-loc} \end{equation} then necessarily $\theta_{0}=\pi/4$. In order to prove this claim, we exploit the identity $\cos^{6}\theta+\sin^{6}\theta=1-3\cos^{2}\theta\sin^{2}\theta$, from which we obtain that the derivative of $g(\theta)$ is \begin{equation} g'(\theta)=(\cos\theta-\sin\theta) \left(C_{0}-6C_{1}\cos\theta\sin\theta(\cos\theta+\sin\theta)\right). \label{eqn:g'} \end{equation} Let us consider the function $\psi(\theta):=\cos\theta\sin\theta(\cos\theta+\sin\theta)$, whose derivative is \begin{equation*} \psi'(\theta)=(\cos\theta-\sin\theta)(1+3\cos\theta\sin\theta). \end{equation*} It follows that $\psi(\theta)$ is increasing in $[0,\pi/4]$ and decreasing in $[\pi/4,\pi/2]$, and its maximum value is $\psi(\pi/4)=1/\sqrt{2}$. Now we distinguish two cases. \begin{itemize} \item If $C_{0}\sqrt{2}\geq 6C_{1}$, then the sign of $g'(\theta)$ coincides with the sign of $\cos\theta-\sin\theta$. It follows that $\pi/4$ is the unique stationary point of $g(\theta)$ in $(0,\pi/2)$, but it is a maximum point, and therefore in this case there is no $\theta_{0}\in(0,\pi/2)$ for which (\ref{hp:g-min-loc}) holds true. 
\item If $C_{0}\sqrt{2}<6C_{1}$, then also the second factor in the right-hand side of (\ref{eqn:g'}) changes its sign, at two points of the form $\pi/4\pm\theta_{1}$ for some $\theta_{1}\in(0,\pi/4)$. In this case it turns out that $g(\theta)$ has three stationary points in $(0,\pi/2)$, namely $\pi/4\pm\theta_{1}$ (which are maximum points) and $\pi/4$, which is a minimum point (local or global depending on $C_{0}$ and $C_{1}$). \end{itemize} In any case, if $g(\theta)$ has a minimum point in $(0,\pi/2)$, this is necessarily $\pi/4$. \end{proof} \begin{lemma}\label{lemma:ABG} Let $(a,b)\subseteq\re$ be an interval, and let $A_{0}$, $A_{1}$, $B_{0}$, $B_{1}$ be four real numbers. Let us consider the minimum problem \begin{equation*} \min\left\{\int_{a}^{b}w''(y)^{2}\,dy: w\in H^{2}((a,b)),\ \left(w(a),w'(a),w(b),w'(b)\strut\right)=(A_{0},A_{1},B_{0},B_{1})\right\}. \end{equation*} Then the unique minimum point is the function \begin{equation*} w_{0}(y)=P\left(y-\frac{a+b}{2}\right), \end{equation*} where $P(x)=c_{0}+c_{1}x+c_{2}x^{2}+c_{3}x^{3}$ is the polynomial of degree three with coefficients \begin{equation*} \left.\begin{array}{c@{\qquad}c} \displaystyle c_{0}:=\frac{A_{0}+B_{0}}{2}-\frac{B_{1}-A_{1}}{8}(b-a), & \displaystyle c_{1}:=\frac{3(B_{0}-A_{0})}{2(b-a)}-\frac{A_{1}+B_{1}}{4}, \\[3ex] \displaystyle c_{2}:=\frac{B_{1}-A_{1}}{2(b-a)}, & \displaystyle c_{3}:=-\frac{2(B_{0}-A_{0})}{(b-a)^{3}}+\frac{A_{1}+B_{1}}{(b-a)^{2}}. \end{array}\right. \end{equation*} As a consequence, the minimum value is \begin{equation*} \frac{(B_{1}-A_{1})^{2}}{b-a}+ \frac{12}{(b-a)^{3}}\left[(B_{0}-A_{0})-\frac{A_{1}+B_{1}}{2}(b-a)\right]^{2}, \end{equation*} and the minimum point satisfies the pointwise estimates \begin{equation*} |w_{0}(y)|\leq\frac{3(|A_{0}|+|B_{0}|)}{2}+\frac{|A_{1}|+|B_{1}|}{2}(b-a) \qquad \forall y\in[a,b], \end{equation*} and \begin{equation*} |w_{0}'(y)|\leq\frac{3|B_{0}-A_{0}|}{b-a}+\frac{3(|A_{1}|+|B_{1}|)}{2} \qquad \forall y\in[a,b]. 
\end{equation*} \end{lemma} \begin{proof} From the Euler-Lagrange equation $w''''(y)=0$ we know that minimizers are polynomials of degree three, and $w_{0}$ is the unique such polynomial that satisfies the boundary conditions. \end{proof} \begin{lemma}\label{lemma:bound-D-H} Let $(a,b)\subseteq\re$ be an interval, and let $D$ and $H$ be positive real numbers. Let $\ep\in(0,1)$ be a real number such that \begin{equation} 2\ep^{2}\left(\sqrt{H}+\ep^{2}D\right)<(b-a) \label{hp:bound-D-H-1} \end{equation} and \begin{equation} \frac{2}{|\log\ep|}\log\left(1+\frac{45}{2\ep^{4}}\left(\sqrt{H}+\ep^{2}D\right)^{2}\right)\leq 18. \label{hp:bound-D-H-2} \end{equation} Then for every $(A_{0},B_{0})\in[-H,H]^{2}$ and every $(A_{1},B_{1})\in[-D,D]^{2}$ there exists a function $w\in H^{2}((a,b))$ satisfying the boundary conditions \begin{equation} \left(w(a),w'(a),w(b),w'(b)\strut\right)=(A_{0},A_{1},B_{0},B_{1}), \label{hp:BC} \end{equation} and the estimates \begin{gather} \RPM_{\ep}((a,b),w)\leq 80\left(\sqrt{H}+\ep^{2}D\right), \label{th:bound-ABG} \\ \int_{a}^{b}w(x)^{2}\,dx\leq 10\ep^{2}\left(\sqrt{H}+\ep^{2}D\right)^{5}. \label{th:bound-int} \end{gather} \end{lemma} \begin{proof} For every real number $\eta\in(0,(b-a)/2)$, let us consider the function \begin{equation*} w(x):=\begin{cases} \varphi_{1}(x)\quad & \text{if } x\in[a,a+\eta], \\ 0 & \text{if } x\in[a+\eta,b-\eta], \\ \varphi_{2}(x) & \text{if } x\in[b-\eta,b], \end{cases} \end{equation*} where $\varphi_{1}(x)$ is the unique polynomial of degree three such that \begin{equation*} \varphi_{1}(a)=A_{0}, \qquad \varphi_{1}'(a)=A_{1}, \qquad \varphi_{1}(a+\eta)=\varphi_{1}'(a+\eta)=0, \end{equation*} and $\varphi_{2}(x)$ is the unique polynomial of degree three such that \begin{equation*} \varphi_{2}(b)=B_{0}, \qquad \varphi_{2}'(b)=B_{1}, \qquad \varphi_{2}(b-\eta)=\varphi_{2}'(b-\eta)=0. \end{equation*} We observe that $w$ belongs to $H^{2}((a,b))$, because the three pieces glue in a $C^{1}$ way at $a+\eta$ and $b-\eta$, and fulfills the boundary conditions (\ref{hp:BC}). 
From Lemma~\ref{lemma:ABG} we deduce that $w(x)$ satisfies the integral estimate \begin{equation*} \int_{a}^{a+\eta}w''(x)^{2}\,dx\leq \frac{D^{2}}{\eta}+\frac{12}{\eta^{3}}\left(H+\frac{D}{2}\eta\right)^{2}\leq \frac{7D^{2}}{\eta}+\frac{24H^{2}}{\eta^{3}}, \end{equation*} and the pointwise estimates \begin{equation*} |w(x)|\leq\frac{3H}{2}+\frac{D\eta}{2} \qquad\quad\text{and}\quad\qquad |w'(x)|\leq\frac{3H}{\eta}+\frac{3D}{2} \end{equation*} for every $x\in[a,a+\eta]$, from which we deduce that \begin{equation*} \int_{a}^{a+\eta}w(x)^{2}\,dx\leq \frac{9H^{2}\eta}{2}+\frac{D^{2}\eta^{3}}{2}, \end{equation*} and \begin{equation*} \int_{a}^{a+\eta}\log\left(1+w'(x)^{2}\right)\,dx\leq \eta\log\left(1+\frac{18H^{2}}{\eta^{2}}+\frac{9D^{2}}{2}\right). \end{equation*} Analogous estimates hold true in the interval $[b-\eta,b]$, while of course there is no contribution from the central interval $[a+\eta,b-\eta]$. It follows that \begin{equation} \RPM_{\ep}((a,b),w) \leq \frac{\eta}{\ep^{2}}\left\{\left(\frac{14D^{2}}{\eta^{2}}+\frac{48H^{2}}{\eta^{4}}\right)\ep^{8}+ \frac{2}{|\log\ep|}\log\left(1+\frac{18H^{2}}{\eta^{2}}+\frac{9D^{2}}{2}\right)\right\}, \label{est:bound-ABG} \end{equation} and \begin{equation} \int_{a}^{b}w(x)^{2}\,dx\leq 9 H^{2}\eta+D^{2}\eta^{3}. \label{est:bound-int} \end{equation} Now we set $\eta:=\ep^{2}\left(\sqrt{H}+\ep^{2}D\right)$. This choice is admissible because $\eta<(b-a)/2$ due to (\ref{hp:bound-D-H-1}). We observe also that $\eta^{4}\geq\ep^{8}H^{2}$ and $\eta^{2}\geq\ep^{8}D^{2}$. As a consequence, from (\ref{est:bound-int}) we conclude that \begin{equation*} \int_{a}^{b}w(x)^{2}\,dx\leq 9 H^{2}\eta+D^{2}\eta^{3}\leq \frac{10\eta^{5}}{\ep^{8}}, \end{equation*} which proves (\ref{th:bound-int}). 
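The two elementary reductions behind this conclusion can be checked numerically; a minimal sketch, where the values of $\ep$, $D$, $H$ are arbitrary samples compatible with the hypotheses:

```python
import math

eps, D, H = 0.05, 1.0, 2.0                   # arbitrary sample values
eta = eps**2 * (math.sqrt(H) + eps**2 * D)   # the choice of eta made above
# eta^4 >= eps^8 H^2 and eta^2 >= eps^8 D^2
assert eta**4 >= eps**8 * H**2 and eta**2 >= eps**8 * D**2
# hence 9 H^2 eta + D^2 eta^3 <= (9 + 1) eta^5 / eps^8
assert 9*H**2*eta + D**2*eta**3 <= 10*eta**5 / eps**8
```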
Similarly, we obtain that \begin{equation*} \left(\frac{14D^{2}}{\eta^{2}}+\frac{48H^{2}}{\eta^{4}}\right)\ep^{8}\leq 62, \end{equation*} and \begin{equation*} \frac{2}{|\log\ep|}\log\left(1+\frac{18H^{2}}{\eta^{2}}+\frac{9D^{2}}{2}\right)\leq \frac{2}{|\log\ep|}\log\left(1+\frac{45}{2}\frac{\eta^{2}}{\ep^{8}}\right)\leq 18, \end{equation*} where in the last inequality we exploited (\ref{hp:bound-D-H-2}). Plugging these estimates into (\ref{est:bound-ABG}) we obtain (\ref{th:bound-ABG}). \end{proof} \subsection{Proof of Proposition~\ref{prop:loc-min-discr} and Proposition~\ref{prop:loc-min-class}}\label{subsec:loc-min-proof} In this subsection we prove the two propositions at the same time. The common idea is that every local minimizer of the functional (\ref{defn:JF}) is a staircase whose steps all have the same length and the same height, and this staircase intersects the graph of the forcing term $Mx$ at the midpoint of every horizontal step. This structure applies to entire local minimizers, but also to minimizers of (\ref{defn:mu0}), with the possible exception that the lengths of the two steps at the boundary might be different. Once this structure has been established, in both cases we only need to optimize with respect to the length of the steps. The proof of the structure result is rather lengthy, because we need first to show that the jump set is discrete, then that the steps are symmetric with respect to the forcing term, and finally that all the steps have the same length. Since the parameters $\alpha$, $\beta$ and $M$ are fixed once and for all, for the sake of brevity in the sequel the functional (\ref{defn:JF}) is denoted simply by $\JF(\Omega,w)$. When needed, we also assume that $M>0$ (the case $M<0$ is symmetric, and the easier case $M=0$ is treated in the last paragraph of the proof).
\paragraph{\textmd{\textit{The jump set of local minimizers is discrete}}} Let us assume that $w_{0}(x)$ is a local minimizer for the functional $\JF((a,b),w)$ in some interval $(a,b)\subseteq\re$. We prove that the set of jump points of $w_{0}$ in $(a,b)$ is finite. To this end, let us assume by contradiction that this is not the case. Due to the structure of the elements of the space $\PJ((a,b))$, we know that there exist a sequence $\{s_{k}\}\subseteq(a,b)$ of distinct real numbers, a real number $c_{0}$, and a sequence $\{J_{k}\}$ of real numbers different from zero such that \begin{equation*} \sum_{k=1}^{\infty}|J_{k}|<+\infty \end{equation*} and \begin{equation} w_{0}(x)=c_{0}+\sum_{k=1}^{\infty}J_{k}\mathbbm{1}_{(s_{k},b)}(x) \qquad \forall x\in(a,b). \label{defn:v0-jump} \end{equation} For every integer $n\geq 2$ we consider the real number \begin{equation*} R_{n}:=\sum_{k=n+1}^{\infty}|J_{k}|, \end{equation*} and the function $w_{n}:(a,b)\to\re$ defined by \begin{equation} w_{n}(x):=c_{0}+\left(J_{1}+\sum_{k=n+1}^{\infty}J_{k}\right)\mathbbm{1}_{(s_{1},b)}(x) +\sum_{k=2}^{n}J_{k}\mathbbm{1}_{(s_{k},b)}(x) \qquad \forall x\in(a,b). \label{defn:vn-jump} \end{equation} We observe that $R_{n}\to 0$, and that the function $w_{n}$ has a finite number of jumps, located at $s_{1},\ldots,s_{n}$, where the jump at $s_{1}$ has ``absorbed'' all the heights of the jumps at $s_{i}$ with $i\geq n+1$ (the jump height at $s_{1}$ might also vanish). In this way it turns out that \begin{equation*} \lim_{x\to a^{+}}w_{n}(x)=\lim_{x\to a^{+}}w_{0}(x)=c_{0} \qquad\text{and}\qquad \lim_{x\to b^{-}}w_{n}(x)=\lim_{x\to b^{-}}w_{0}(x)=c_{0}+\sum_{k=1}^{\infty}J_{k}, \end{equation*} and therefore $w_{0}$ and $w_{n}$ have the same ``boundary data''. As a consequence, due to the minimality of $w_{0}(x)$ this implies that \begin{equation} \JF((a,b),w_{n})\geq\JF((a,b),w_{0}) \qquad \forall n\geq 2.
\label{ineq:vn-v0} \end{equation} On the other hand, from (\ref{defn:v0-jump}) and (\ref{defn:vn-jump}) we obtain that \begin{equation*} \J_{1/2}((a,b),w_{0})-\J_{1/2}((a,b),w_{n})= \sum_{k=n+1}^{\infty}|J_{k}|^{1/2}+|J_{1}|^{1/2}- \left|J_{1}+\sum_{k=n+1}^{\infty}J_{k}\right|^{1/2}. \end{equation*} Due to the subadditivity of the square root, the first term can be estimated as \begin{equation*} \sum_{k=n+1}^{\infty}|J_{k}|^{1/2}\geq \left(\sum_{k=n+1}^{\infty}|J_{k}|\right)^{1/2}=(R_{n})^{1/2}, \end{equation*} while for the second and third terms it turns out that \begin{equation*} |J_{1}|^{1/2}-\left|J_{1}+\sum_{k=n+1}^{\infty}J_{k}\right|^{1/2}\geq |J_{1}|^{1/2}-\left(|J_{1}|+R_{n}\right)^{1/2}\geq -\frac{R_{n}}{2|J_{1}|^{1/2}}. \end{equation*} From these two inequalities it follows that \begin{equation} \J_{1/2}((a,b),w_{0})-\J_{1/2}((a,b),w_{n})\geq (R_{n})^{1/2}-\frac{R_{n}}{2|J_{1}|^{1/2}}. \label{ineq:J1/2-v0-vn} \end{equation} Moreover, from (\ref{defn:vn-jump}) we obtain also that \begin{equation*} |w_{0}(x)-w_{n}(x)|\leq R_{n} \qquad \forall x\in(a,b) \end{equation*} and \begin{equation*} |w_{n}(x)-Mx|\leq |c_{0}|+\sum_{k=1}^{\infty}|J_{k}|+M\max\{|a|,|b|\}=:V_{\infty} \qquad \forall x\in(a,b), \end{equation*} and therefore \begin{eqnarray} \int_{a}^{b}(w_{0}(x)-Mx)^{2}\,dx & \geq & \int_{a}^{b}\left[(w_{n}(x)-Mx)^{2}+2(w_{n}(x)-Mx)(w_{0}(x)-w_{n}(x))\right]\,dx \nonumber \\ & \geq & \int_{a}^{b}(w_{n}(x)-Mx)^{2}\,dx-2(b-a)V_{\infty}R_{n} \label{ineq:int-v0-vn} \end{eqnarray} for every $n\geq 2$. From (\ref{ineq:J1/2-v0-vn}) and (\ref{ineq:int-v0-vn}) we conclude that \begin{equation*} \JF((a,b),w_{0})-\JF((a,b),w_{n})\geq \alpha(R_{n})^{1/2}-\frac{\alpha R_{n}}{2|J_{1}|^{1/2}}-2\beta(b-a)V_{\infty}R_{n}. \end{equation*} Since $R_{n}\to 0^{+}$, the right-hand side is positive when $n$ is large enough, and this contradicts (\ref{ineq:vn-v0}).
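The two square-root inequalities used in this step are elementary (subadditivity and concavity of the square root); as a quick numerical test on random data, not part of the proof:

```python
import math, random

def sqrt_estimates_hold(J1, tail):
    # the two elementary inequalities used to bound J_{1/2}(w_0) - J_{1/2}(w_n)
    Rn = sum(abs(j) for j in tail)
    subadditive = sum(math.sqrt(abs(j)) for j in tail) >= math.sqrt(Rn) - 1e-12
    concave = math.sqrt(J1) - math.sqrt(J1 + Rn) >= -Rn/(2*math.sqrt(J1)) - 1e-12
    return subadditive and concave

random.seed(0)
assert all(sqrt_estimates_hold(random.uniform(0.1, 5.0),
                               [random.uniform(-1.0, 1.0) for _ in range(20)])
           for _ in range(1000))
```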
\paragraph{\textmd{\textit{Existence of jump points and intersections}}} Let us assume that $M>0$, and let us set \begin{equation} L_{0}:=\left(\frac{64\alpha^{2}}{\beta^{2}M^{3}}\right)^{1/5}. \label{defn:L0} \end{equation} We claim that, if $w_{0}(x)$ is a local minimizer in some interval $(a,b)\subseteq\re$ with length $b-a>L_{0}$, then $w_{0}(x)$ has either at least one jump point in $(a,b)$ or at least one intersection with the line $Mx$, namely there exists $z_{0}\in(a,b)$ such that $w_{0}(z_{0})=Mz_{0}$. Indeed, let us assume by contradiction that this is not the case. Then in $(a,b)$ the function $w_{0}(x)$ is a constant of the form $Ma-c$ or $Mb+c$ for some real number $c\geq 0$. In both cases it turns out that \begin{equation} \JF((a,b),w_{0})=\left(\frac{M^{2}}{3}(b-a)^{3}+M(b-a)^{2}c+(b-a)c^{2}\right)\beta. \label{eqn:F-ab-v0} \end{equation} For every real number $\tau$ with $0<2\tau<b-a$, let us consider the function $w_{\tau}:(a,b)\to\re$ defined by \begin{equation} w_{\tau}(x):= \begin{cases} \dfrac{M(a+b)}{2}\quad & \text{if }a+\tau<x<b-\tau, \\[1ex] w_{0}(x) & \text{if }x\in(a,b)\setminus(a+\tau,b-\tau). \end{cases} \label{defn:v-tau-1} \end{equation} Since $w_{\tau}$ coincides with $w_{0}$ in a neighborhood of the boundary, from the minimality of $w_{0}$ we deduce that $\JF((a,b),w_{\tau})\geq\JF((a,b),w_{0})$ for every admissible value of $\tau$, and in particular \begin{equation} \lim_{\tau\to 0^{+}}\JF((a,b),w_{\tau})\geq\JF((a,b),w_{0}). \label{ineq:F-vtau-v0} \end{equation} The right-hand side is given by (\ref{eqn:F-ab-v0}). As for the left-hand side, we observe that $w_{\tau}$ has two equal jumps of height $c+M(b-a)/2$, while the integral term can be computed starting from the explicit expression (\ref{defn:v-tau-1}). We obtain that \begin{equation} \lim_{\tau\to 0^{+}}\JF((a,b),w_{\tau})= 2\alpha\left(c+\frac{M(b-a)}{2}\right)^{1/2}+ \frac{\beta M^{2}}{12}(b-a)^{3}. 
\label{eqn:F-ab-vtau} \end{equation} Plugging (\ref{eqn:F-ab-vtau}) and (\ref{eqn:F-ab-v0}) into (\ref{ineq:F-vtau-v0}) we conclude that \begin{equation} 2\alpha\left(c+\frac{M(b-a)}{2}\right)^{1/2}\geq \frac{\beta M^{2}}{4}(b-a)^{3}+\beta M(b-a)^{2}c+\beta(b-a)c^{2}. \label{ineq:absurd} \end{equation} We claim that this is impossible if $c\geq 0$ and $b-a>L_{0}$. To this end, we write (\ref{defn:L0}) in the equivalent form $\beta^{2}M^{3}L_{0}^{5}=64\alpha^{2}$, from which we deduce that \begin{equation} \beta^{2}M^{3}(b-a)^{5}>64\alpha^{2} \label{ineq:L0} \end{equation} because $b-a>L_{0}$. Now we distinguish two cases. \begin{itemize} \item Let us assume that $c\leq M(b-a)/2$. Multiplying (\ref{ineq:L0}) by $M(b-a)$, and taking the square root, we obtain that \begin{equation*} \beta M^{2}(b-a)^{3}>8\alpha[M(b-a)]^{1/2}, \end{equation*} and therefore \begin{equation*} 2\alpha\left(c+\frac{M(b-a)}{2}\right)^{1/2}\leq 2\alpha[M(b-a)]^{1/2} <\frac{\beta M^{2}}{4}(b-a)^{3}. \end{equation*} Since the latter is less than or equal to the right-hand side of (\ref{ineq:absurd}), we have reached a contradiction in this case. \item Let us assume that $c\geq M(b-a)/2$, and in particular that $c$ is positive. We observe that this condition can be rewritten as \begin{equation*} 2\sqrt{2}\,c^{3/2}\geq M^{3/2}(b-a)^{3/2}, \end{equation*} while (\ref{ineq:L0}) can be rewritten in the form \begin{equation*} \beta(b-a)>\frac{8\alpha}{M^{3/2}(b-a)^{3/2}}. \end{equation*} Since $c>0$, from these inequalities it follows that \begin{equation*} \beta(b-a)c^{2}> \frac{8\alpha}{M^{3/2}(b-a)^{3/2}}\cdot c^{2} \geq 2\alpha\sqrt{2c}\geq 2\alpha\left(c+\frac{M(b-a)}{2}\right)^{1/2}. \end{equation*} Since the first term is less than or equal to the right-hand side of (\ref{ineq:absurd}), we have reached a contradiction also in this case.
\end{itemize} \paragraph{\textmd{\textit{Symmetry of jumps}}} Let $w_{0}(x)$ be a local minimizer in some interval $(a,b)\subseteq\re$, and let $s\in(a,b)$ be a jump point of $w_{0}$. From the first step we already know that $s$ is isolated and therefore, up to restricting to a smaller interval, we can assume that $w_{0}(x)$ is equal to some constant $A$ in $(a,s)$, and to some constant $B\neq A$ in $(s,b)$. We claim that \begin{equation} Ms-A=B-Ms \label{th:jump-symm} \end{equation} and that, if $M\neq 0$, the two terms have the same sign as $M$. To this end, for every $\tau\in(a,b)$ we consider the function $w_{\tau}:(a,b)\to\re$ that is equal to $A$ in $(a,\tau)$, and equal to $B$ in $(\tau,b)$, and we set \begin{equation*} \varphi(\tau):=\JF((a,b),w_{\tau})=\alpha\sqrt{|B-A|}+\beta\int_{a}^{\tau}(A-Mx)^{2}\,dx +\beta\int_{\tau}^{b}(B-Mx)^{2}\,dx. \end{equation*} Since $w_{\tau}$ coincides with $w_{0}$ in a neighborhood of the boundary of the interval, from the minimality of $w_{0}$ we deduce that $\varphi(\tau)$ attains its minimum in $(a,b)$ when $\tau=s$. This implies in particular that \begin{equation} 0=\varphi'(s)=\beta\left[(Ms-A)^{2}-(B-Ms)^{2}\right] \label{th:symm-eq} \end{equation} and \begin{equation} 0\leq\varphi''(s)=2\beta M[(Ms-A)+(B-Ms)]. \label{th:symm-ineq} \end{equation} Since $\beta>0$ and $B\neq A$, equality (\ref{th:symm-eq}) implies (\ref{th:jump-symm}). If in addition $M\neq 0$, then (\ref{th:symm-ineq}) implies that the two terms in (\ref{th:jump-symm}) have the same sign as~$M$. \paragraph{\textmd{\textit{Equipartition of intersections}}} Let us assume that $M>0$, let $w_{0}(x)$ be a local minimizer in some interval $(a,b)\subseteq\re$, and let \begin{equation*} a<z_{1}<z_{2}<\ldots<z_{n}<b \end{equation*} denote the intersections in $(a,b)$ of $w_{0}(x)$ with the line $Mx$, namely the solutions to the equation $w_{0}(x)=Mx$.
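The stationarity conditions of the symmetry step can also be explored numerically: for sample data (all values below are arbitrary), the jump location minimizing $\varphi$ is the symmetric point $s$ with $Ms-A=B-Ms$. A sketch:

```python
beta, M, A, B = 1.0, 1.0, 0.5, 1.5     # arbitrary sample values
a, b = 0.0, 2.0
s = (A + B)/(2*M)                      # symmetric point: M*s - A = B - M*s

def fidelity(tau, n=400):
    # beta*int_a^tau (A - M x)^2 dx + beta*int_tau^b (B - M x)^2 dx by
    # midpoint quadrature (the jump term alpha*sqrt(|B-A|) is constant in tau)
    s1 = sum((A - M*(a + (tau - a)*(i + 0.5)/n))**2 for i in range(n))*(tau - a)/n
    s2 = sum((B - M*(tau + (b - tau)*(i + 0.5)/n))**2 for i in range(n))*(b - tau)/n
    return beta*(s1 + s2)

taus = [a + (b - a)*i/200.0 for i in range(1, 200)]
tau_min = min(taus, key=fidelity)
assert abs(tau_min - s) < 1e-9   # the grid minimum is the symmetric point
```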
We observe that between any two intersections there is necessarily at least one jump point, and therefore from the previous steps we know that their number is finite. We claim that \begin{equation*} z_{2}-z_{1}=z_{3}-z_{2}=\ldots=z_{n}-z_{n-1}. \end{equation*} In order to show the claim it is enough to show that, if $z_{1}<z_{2}<z_{3}$ are three consecutive intersections, then $z_{2}-z_{1}=z_{3}-z_{2}$. To this end, we restrict to the interval $(z_{1},z_{3})$ and we observe that, due to the previous steps, it turns out that \begin{equation*} w_{0}(x)=\begin{cases} Mz_{1}\quad & \text{if }x\in(z_{1},(z_{1}+z_{2})/2), \\ Mz_{2} & \text{if }x\in((z_{1}+z_{2})/2,(z_{2}+z_{3})/2), \\ Mz_{3} & \text{if }x\in((z_{2}+z_{3})/2,z_{3}). \end{cases} \end{equation*} For every $\tau\in(z_{1},z_{3})$ we consider the function $w_{\tau}:(z_{1},z_{3})\to\re$ defined by \begin{equation*} w_{\tau}(x)=\begin{cases} Mz_{1}\quad & \text{if }x\in(z_{1},(z_{1}+\tau)/2), \\ M\tau & \text{if }x\in((z_{1}+\tau)/2,(\tau+z_{3})/2), \\ Mz_{3} & \text{if }x\in((\tau+z_{3})/2,z_{3}). \end{cases} \end{equation*} From this explicit expression it follows that \begin{equation*} \JF((z_{1},z_{3}),w_{\tau})= \alpha\sqrt{M}\left(\sqrt{\tau-z_{1}}+\sqrt{z_{3}-\tau}\,\strut\right)+ \frac{\beta M^{2}}{12}\left\{(\tau-z_{1})^{3}+(z_{3}-\tau)^{3}\right\}. \end{equation*} Since $w_{\tau}$ coincides with $w_{0}$ in a neighborhood of the boundary of the interval, from the minimality of $w_{0}$ we deduce that this function of $\tau$ attains its minimum in $(z_{1},z_{3})$ when $\tau=z_{2}$. 
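As a numerical illustration of this last minimality property (a sketch; the values of $\alpha$, $\beta$, $M$, $z_{1}$, $z_{3}$ are arbitrary samples), the explicit cost of $w_{\tau}$ is indeed minimized at the midpoint:

```python
import math

alpha, beta, M = 1.0, 1.0, 1.0          # arbitrary sample values
z1, z3 = 0.0, 4.0

def cost(tau):
    # JF((z1,z3), w_tau) as computed above: two jumps plus the fidelity term
    jump = alpha*math.sqrt(M)*(math.sqrt(tau - z1) + math.sqrt(z3 - tau))
    fid  = beta*M**2/12.0*((tau - z1)**3 + (z3 - tau)**3)
    return jump + fid

taus = [z1 + (z3 - z1)*i/1000.0 for i in range(1, 1000)]
tau_min = min(taus, key=cost)
assert abs(tau_min - (z1 + z3)/2.0) < 1e-9   # minimum at the midpoint z2
```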
With the variable change $\tau=z_{1}+t(z_{3}-z_{1})$ this is equivalent to saying that the function \begin{equation*} \varphi(t):=C_{0}\left(\sqrt{t}+\sqrt{1-t}\right)+ C_{1}\left(t^{3}+(1-t)^{3}\right), \end{equation*} where \begin{equation*} C_{0}:=\alpha\sqrt{M}\sqrt{z_{3}-z_{1}} \qquad\quad\text{and}\quad\qquad C_{1}:=\frac{\beta M^{2}}{12}(z_{3}-z_{1})^{3}, \end{equation*} attains its minimum in $(0,1)$ when $t=(z_{2}-z_{1})/(z_{3}-z_{1})$. On the other hand, from Lemma~\ref{lemma:equipartition} we know that the only possible minimum point is $t=1/2$, and this implies that $z_{2}$ is the midpoint of $(z_{1},z_{3})$. \paragraph{\textmd{\textit{Estimate from below for the minimum}}} We are now ready to prove the estimate from below in (\ref{th:est-mu-abLM}). Again we consider the case where $M>0$. To begin with, we observe that this estimate is trivial when $L\leq 8L_{0}$, because in this case the left-hand side is nonpositive. If $L>8L_{0}$, then from the previous steps we know that any minimizer $w_{0}\in\PJ((0,L))$ intersects the line $Mx$ at some point $a_{0}\in(0,4L_{0})$, and at some point $b_{0}\in(L-4L_{0},L)$. Indeed, we know that in $(0,2L_{0})$ there exists at least one intersection or jump (because the length of the interval is greater than $L_{0}$), and likewise in $(2L_{0},4L_{0})$, and in any case between any two jumps there exists at least one intersection because of the symmetry of jumps. Now we know that the interval $(a_{0},b_{0})$ is divided into $n\geq 1$ intervals of equal length whose endpoints are intersections. Moreover, $w_{0}(x)$ has exactly one jump point, located at the midpoint between any two consecutive intersections.
As a consequence, the shape of $w_{0}(x)$ in $(a_{0},b_{0})$ depends only on $n$, and with an elementary computation we find that \begin{eqnarray*} \JF((0,L),w_{0}) & \geq & \JF((a_{0},b_{0}),w_{0}) \\[1ex] & = & n\left\{\alpha\sqrt{\frac{M(b_{0}-a_{0})}{n}}+ \frac{\beta M^{2}}{12}\left(\frac{b_{0}-a_{0}}{n}\right)^{3}\right\}. \end{eqnarray*} Therefore, from the inequality \begin{equation*} A+B\geq 5\left(\frac{A^{4}B}{4^{4}}\right)^{1/5} \qquad \forall(A,B)\in[0,+\infty)^{2} \end{equation*} we conclude that \begin{equation*} \JF((0,L),w_{0})\geq\frac{5}{4}\left(\frac{\alpha^{4}\beta M^{4}}{3}\right)^{1/5}(b_{0}-a_{0}) \geq\frac{5}{4}\left(\frac{\alpha^{4}\beta M^{4}}{3}\right)^{1/5}(L-8L_{0}). \end{equation*} Plugging (\ref{defn:L0}) into this inequality we obtain the estimate from below in (\ref{th:est-mu-abLM}). \paragraph{\textmd{\textit{Estimate from above for the minimum}}} Let us prove the estimate from above in (\ref{th:est-mu-abLM}). Let $n:=\lceil L/(2H)\rceil$ denote the smallest integer greater than or equal to $L/(2H)$, where $H$ is defined by (\ref{defn:lambda-0}), and let us consider the function $w_{0}\in\PJ((0,2nH))$ that has intersections with the line $Mx$ in $0$, $2H$, $4H$, \ldots, $2nH$, and jumps in the midpoints of the intervals between consecutive intersections. 
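The elementary inequality invoked in the estimate from below is the arithmetic--geometric mean inequality applied to the five numbers $A/4$, $A/4$, $A/4$, $A/4$, $B$; a quick numerical check on random data (a sketch, not part of the proof):

```python
import random

def amgm5_holds(A, B):
    # A + B = 4*(A/4) + B >= 5*((A/4)^4 * B)^(1/5)   (AM-GM on five terms)
    return A + B >= 5.0*((A/4.0)**4 * B)**0.2 - 1e-9

random.seed(1)
assert all(amgm5_holds(random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
           for _ in range(10000))
assert amgm5_holds(4.0, 1.0)   # equality case: A/4 = B
```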
Since $w_{0}$ is a competitor for the minimum problem (\ref{defn:mu0*}) in the interval $(0,2nH)$, from the monotonicity of $\mu_{0}^{*}$ with respect to $L$ we deduce that \begin{eqnarray} \mu_{0}^{*}(\alpha,\beta,L,M) & \leq & \mu_{0}^{*}(\alpha,\beta,2nH,M) \nonumber \\[0.5ex] & \leq & \JF((0,2nH),w_{0}) \nonumber \\[0.5ex] & = & n\left(\alpha\sqrt{2MH}+\frac{2\beta M^{2}}{3}H^{3}\right) \nonumber \\[0.5ex] & \leq & \left(\frac{L}{2H}+1\right) \left(\alpha\sqrt{2MH}+\frac{2\beta M^{2}}{3}H^{3}\right), \label{est:mu0-H0} \end{eqnarray} and we conclude by remarking that the last term coincides with the right-hand side of (\ref{th:est-mu*-abLM}) when $H$ is given by (\ref{defn:lambda-0}). \paragraph{\textmd{\textit{Structure of entire local minimizers}}} Let $w_{0}(x)$ be an entire local minimizer. From the previous steps applied in every interval of the form $(-L,L)$, with $L\to +\infty$, we know that the set of intersection points of $w_{0}(x)$ with $Mx$ is discrete and divides the line into segments of the same length $2h>0$, whose midpoints are the unique jump points of $w_{0}$. This is enough to conclude that $w_{0}(x)$ is an oblique translation of some staircase with steps of horizontal length $2h$ and vertical height $2Mh$. It remains only to show that $h=H$, where $H$ is given by (\ref{defn:lambda-0}). Up to an oblique translation, we can always assume that the intersections are the points of the form $2zh$ with $z\in\z$. Let us consider the interval $(0,2nh)$, where $n$ is a positive integer. Applying (\ref{est:mu0-H0}) with $L=2nh$ we deduce that \begin{eqnarray*} n\left(\alpha\sqrt{2Mh}+\frac{2\beta M^{2}}{3}h^{3}\right) & = & \JF((0,2nh),w_{0}) \\ & = & \mu_{0}^{*}(\alpha,\beta,2nh,M) \\[1ex] & \leq & \left(\frac{2nh}{2H}+1\right) \left(\alpha\sqrt{2MH}+\frac{2\beta M^{2}}{3}H^{3}\right). 
\end{eqnarray*} Dividing by $nh$, and letting $n\to +\infty$, we conclude that \begin{equation*} \alpha\frac{\sqrt{2M}}{\sqrt{h}}+\frac{2\beta M^{2}}{3}h^{2}\leq \alpha\frac{\sqrt{2M}}{\sqrt{H}}+\frac{2\beta M^{2}}{3}H^{2}, \end{equation*} and this inequality is possible if and only if $h=H$, because $H$ is the unique minimum point of the left-hand side as a function of $h>0$. \paragraph{\textmd{\textit{Structure of semi-entire local minimizers}}} Let $w_{0}:(0,+\infty)\to\re$ be a right-hand semi-entire local minimizer. Let $z_{0}<z_{1}<z_{2}<\ldots$ denote the intersection points of $w_{0}$. Arguing as in the case of entire local minimizers we can show that $z_{k+1}-z_{k}=2H$ for every $k\geq 0$. It remains to find the value of $z_{0}$. To this end, for every real number $\tau$ we consider the function \begin{equation*} w_{\tau}(x):=w_{0}(x)+M\tau\mathbbm{1}_{(0,z_{0}+H)}(x) \qquad \forall x>0. \end{equation*} If we restrict to the interval $(0,z_{1})=(0,z_{0}+2H)$, then $w_{\tau}$ and $w_{0}$ have the same boundary value in $z_{1}$, and therefore by the minimality of $w_{0}$ we know that the function $\varphi(\tau):=\JF_{1/2}((0,z_{1}),w_{\tau})$ has a minimum point in $\tau=0$. On the other hand an easy computation reveals that \begin{equation*} \varphi(\tau)= \alpha\sqrt{M(2H-\tau)}+\beta\int_{0}^{z_{0}+H}M^{2}(z_{0}+\tau-x)^{2}\,dx+ \beta\int_{z_{0}+H}^{z_{1}}M^{2}(z_{1}-x)^{2}\,dx, \end{equation*} and therefore \begin{equation} 0=\varphi'(0)= -\frac{\alpha\sqrt{M}}{2\sqrt{2H}}+\beta M^{2}\left(z_{0}^{2}-H^{2}\right). \label{eqn:deriv-phi} \end{equation} Finally, we observe that the definition of $H$ in (\ref{defn:lambda-0}) implies that \begin{equation*} \frac{\alpha\sqrt{M}}{2\sqrt{2H}}=\frac{2\beta M^{2}}{3}H^{2}. \end{equation*} Plugging this identity into (\ref{eqn:deriv-phi}) we obtain that $z_{0}=(5/3)^{1/2}H$, as required. 
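The identities used in this step can be verified numerically; a minimal sketch (with arbitrary sample values $\alpha=\beta=M=1$), where $H$ is computed from the identity $\alpha\sqrt{M}/(2\sqrt{2H})=(2\beta M^{2}/3)H^{2}$ recalled above:

```python
import math

alpha, beta, M = 1.0, 1.0, 1.0   # arbitrary sample values

# H solving  alpha*sqrt(M)/(2*sqrt(2H)) = (2/3)*beta*M^2*H^2,
# i.e.  H = (3*alpha*sqrt(2M)/(8*beta*M^2))^(2/5)
H = (3*alpha*math.sqrt(2*M)/(8*beta*M**2))**0.4
lhs = alpha*math.sqrt(M)/(2*math.sqrt(2*H))
rhs = (2*beta*M**2/3)*H**2
assert abs(lhs - rhs) < 1e-12

# z0 = sqrt(5/3)*H annihilates phi'(0) = -lhs + beta*M^2*(z0^2 - H^2)
z0 = math.sqrt(5.0/3.0)*H
assert abs(-lhs + beta*M**2*(z0**2 - H**2)) < 1e-12

# H minimizes h -> alpha*sqrt(2M)/sqrt(h) + (2/3)*beta*M^2*h^2 over h > 0
g = lambda h: alpha*math.sqrt(2*M)/math.sqrt(h) + (2*beta*M**2/3)*h**2
h_min = min((0.01*i for i in range(1, 500)), key=g)
assert abs(h_min - H) < 0.01
```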
\paragraph{\textmd{\textit{Existence of entire and semi-entire local minimizers}}} Up to this point we have just shown that, if entire or semi-entire local minimizers exist, then they have the prescribed form. It remains to show that all oblique translations of the canonical $(H,V)$-staircase are actually entire local minimizers, and that the function $w(x)$ defined by (\ref{defn:semi-entire}) is actually a right-hand semi-entire local minimizer. The argument is rather standard, and therefore we limit ourselves to sketching the main steps in the case of the canonical $(H,V)$-staircase $S_{H,V}(x)$ (the case of its oblique translations and of semi-entire minimizers is analogous). It is enough to show that, for every positive integer $n$, the function $S_{H,V}(x)$ minimizes $\JF_{1/2}((-2nH,2nH),u)$ among all functions $u\in\PJ((-2nH,2nH))$ that coincide with $S_{H,V}$ at the endpoints. To begin with, we show that the minimum exists. This follows from a standard application of the direct method in the calculus of variations, as in the proof of statement~(\ref{prop:existence}) of Proposition~\ref{prop:mu}. Once we know that the minimum exists, we go back through all the previous steps in order to show that the minimum has only a finite number of equi-spaced intersection points, and that their number is the one we expect. \paragraph{\textmd{\textit{The case $M=0$}}} When the forcing term vanishes, the estimates of Proposition~\ref{prop:loc-min-discr} are actually trivial. As for Proposition~\ref{prop:loc-min-class}, in this case we have to show that the function $w_{0}(x)\equiv 0$ is the unique entire or semi-entire local minimizer. This can be proved in the following way. Let $w_{0}(x)$ be any entire or semi-entire local minimizer. \begin{itemize} \item We show that the jump set of $w_{0}(x)$ is discrete. This can be done as in the general case, since in that paragraph we never used that $M\neq 0$.
\item We show the symmetry of jumps (\ref{th:jump-symm}) as in the general case, since that equality was proved without using that $M\neq 0$. As a consequence, we deduce that $|w_{0}(x)|$ is constant. \item We show that $w_{0}(x)$ vanishes identically. Indeed, when we consider a long enough interval, any function $w_{0}(x)$ with $|w_{0}(x)|$ constant and different from~0 is worse (due to the overwhelming cost of the fidelity term) than a function with the same boundary values that has two jump points close to the boundary and vanishes elsewhere. \end{itemize} This completes the proof also in this special case. \qed \begin{rmk} \begin{em} The existence of entire and semi-entire local minimizers follows also as a corollary of Proposition~\ref{prop:bu2minloc} and Proposition~\ref{prop:bu2Rminloc}. \end{em} \end{rmk} \subsection{Compactness and convergence of local minimizers} In this subsection we prove Proposition~\ref{prop:bu2minloc} and Proposition~\ref{prop:bu2Rminloc}. The key point in the argument is the following result, where we show that an estimate of order $\ep^{-1}$ for the energy $\RPMF_{\ep}$ in some interval implies an $\ep$-independent estimate for the same energy in a smaller interval. \begin{prop}[Boundedness of the energy in a smaller interval]\label{prop:iteration} Let $L$, $\Gamma_{0}$, $\beta$ be positive real numbers. Then there exist two real numbers $\ep_{0}\in(0,1)$ and $\Gamma_{1}>0$ for which the following statement holds true. Let $f:[-(L+1),L+1]\to\re$ be a continuous function such that \begin{equation} |f(x)|\leq \Gamma_{0} \qquad \forall x\in[-(L+1),L+1], \label{hp:f-infty} \end{equation} let $\ep\in(0,\ep_{0})$, and let \begin{equation} w\in\argmin\loc\left\{\RPMF_{\ep}(\beta,f,(-(L+1),L+1),w):w\in H^{2}((-(L+1),L+1))\right\} \label{hp:min-loc} \end{equation} be a local minimizer such that \begin{equation} \RPMF_{\ep}(\beta,f,(-(L+1),L+1),w)\leq\frac{\Gamma_{0}}{\ep}.
\label{hp:univ-bound} \end{equation} Then in the smaller interval $(-L,L)$ the local minimizer $w$ satisfies \begin{equation} \RPMF_{\ep}(\beta,f,(-L,L),w)\leq 4\Gamma_{1}. \label{th:univ-bound} \end{equation} \end{prop} \begin{proof} Let us consider the expression \begin{equation*} \Gamma_{2}:=\frac{\Gamma_{1}^{1/4}}{\beta^{1/4}}+\Gamma_{0}^{1/2}+1. \end{equation*} We observe that it is possible to choose a real number $\Gamma_{1}\geq\Gamma_{0}$ in such a way that \begin{equation} (80+20\beta)\Gamma_{2}+4\beta(L+1)\Gamma_{0}^{2}\leq \Gamma_{1}, \label{defn:M1} \end{equation} and it is possible to choose a real number $\ep_{0}\in(0,1/4)$ such that the inequalities \begin{gather} \Gamma_{1}\ep^{1/2}|\log\ep|\leq\log 2, \qquad\qquad \ep^{3/2}\,\Gamma_{2}\leq L, \label{hp:ub-ep-1} \\[1ex] \frac{2}{|\log\ep|}\log\left(1+\frac{45}{2\ep^{5}}\cdot\Gamma_{2}^{2}\right)\leq 18, \qquad\qquad \ep^{5/8}\,\Gamma_{2}^{4}\leq 1 \label{hp:ub-ep-2} \end{gather} hold true for every $\ep\in(0,\ep_{0})$. In the sequel we show that the statement holds true with these values of $\ep_{0}$ and $\Gamma_{1}$. Since $\ep\in(0,\ep_{0})$ and $\ep_{0}<1/4$, there exists a unique positive integer $n$ such that \begin{equation} \frac{1}{4^{2^{n}}}=\frac{1}{2^{2^{n+1}}}\leq\ep<\frac{1}{2^{2^{n}}}. \label{defn:ep-n} \end{equation} For every $k\in\{0,1,\ldots,n\}$ we set \begin{equation*} L_{n,k}:=L+1-\frac{1}{2^{n-k}}, \end{equation*} and we observe that for every $k\in\{0,1,\ldots,n-1\}$ it turns out that \begin{gather} L\leq L_{n,k+1}<L_{n,k}<L+1, \nonumber \\ L_{n,k}-L_{n,k+1}=\frac{1}{2^{n-k}}, \label{eqn:diff-Lnk} \end{gather} and \begin{equation} 2^{n-k}\leq 2^{2^{n-k-1}}= \left\{\left(2^{2^{n}}\right)^{1/2}\right\}^{2^{-k}}< \left(\frac{1}{\ep^{1/2}}\right)^{2^{-k}}. \label{est:diff-Lnk} \end{equation} We claim that \begin{equation} \RPMF_{\ep}(\beta,f,(-L_{n,k},L_{n,k}),w)\leq \frac{\Gamma_{1}}{\ep^{2^{-k}}} \qquad \forall k\in\{0,1,\ldots,n\}. 
\label{th:univ-bound-k} \end{equation} The case $k=0$ follows from assumption (\ref{hp:univ-bound}) because the interval $(-L_{n,0},L_{n,0})$ is contained in the interval $(-(L+1),L+1)$ and $\Gamma_{1}\geq\Gamma_{0}$. Since $L_{n,n}=L$, the case $k=n$ implies (\ref{th:univ-bound}) because of the estimate from below in (\ref{defn:ep-n}). Now we prove (\ref{th:univ-bound-k}) by finite induction on $k$. Let us assume that (\ref{th:univ-bound-k}) holds true for some $k\in\{0,1,\ldots,n-1\}$, and let us prove that it holds true also for $k+1$. To begin with, we focus on the interval $(L_{n,k+1},L_{n,k})$, and we observe that \begin{eqnarray*} \frac{\Gamma_{1}}{\ep^{2^{-k}}} & \geq & \RPMF_{\ep}(\beta,f,(-L_{n,k},L_{n,k}),w) \\[1ex] & \geq & \RPMF_{\ep}(\beta,f,(L_{n,k+1},L_{n,k}),w) \\[1ex] & \geq & \int_{L_{n,k+1}}^{L_{n,k}}\left\{\frac{1}{\omep^{2}}\log\left(1+w'(y)^{2}\right)+ \beta(w(y)-f(y))^{2}\right\}dy \\[1ex] & \geq & (L_{n,k}-L_{n,k+1})\cdot \\[0.5ex] & & \mbox{}\cdot \min\left\{\frac{1}{\omep^{2}}\log\left(1+w'(y)^{2}\right)+ \beta(w(y)-f(y))^{2}:y\in[L_{n,k+1},L_{n,k}]\right\}. \end{eqnarray*} If $b_{k,\ep}\in[L_{n,k+1},L_{n,k}]$ is any minimum point, recalling (\ref{eqn:diff-Lnk}), (\ref{est:diff-Lnk}), and the first inequality in (\ref{hp:ub-ep-1}), this proves that \begin{equation*} \log\left(1+w'(b_{k,\ep})^{2}\right)\leq \frac{\Gamma_{1}}{\ep^{2^{-k}}}\cdot\omep^{2}\cdot 2^{n-k}\leq \frac{\Gamma_{1}}{\left(\ep^{3/2}\right)^{2^{-k}}}\cdot\omep^{2}\leq \Gamma_{1}\ep^{1/2}|\log\ep|\leq \log 2, \end{equation*} and \begin{equation*} \beta(w(b_{k,\ep})-f(b_{k,\ep}))^{2}\leq \frac{\Gamma_{1}}{\ep^{2^{-k}}}\cdot 2^{n-k}\leq \frac{\Gamma_{1}}{\left(\ep^{3/2}\right)^{2^{-k}}}. 
\end{equation*} From these two inequalities and (\ref{hp:f-infty}) we deduce that $|w'(b_{k,\ep})|\leq 1$ and \begin{equation*} |w(b_{k,\ep})|\leq \frac{\Gamma_{1}^{1/2}}{\beta^{1/2}\left(\ep^{3/4}\right)^{2^{-k}}}+\Gamma_{0}\leq \frac{(\Gamma_{1}/\beta)^{1/2}+\Gamma_{0}}{\left(\ep^{3/4}\right)^{2^{-k}}}. \end{equation*} With an analogous argument, we can show that there exists $a_{k,\ep}\in[-L_{n,k},-L_{n,k+1}]$ such that \begin{equation*} |w'(a_{k,\ep})|\leq 1 \qquad\quad\mbox{and}\quad\qquad |w(a_{k,\ep})|\leq\frac{(\Gamma_{1}/\beta)^{1/2}+\Gamma_{0}}{\left(\ep^{3/4}\right)^{2^{-k}}}. \end{equation*} Now we exploit that $w$ minimizes $\RPMF_{\ep}$ in the interval $(a_{k,\ep},b_{k,\ep})$ with respect to its boundary conditions, and we estimate the minimum value by applying Lemma~\ref{lemma:bound-D-H} with \begin{equation*} (a,b)=(a_{k,\ep},b_{k,\ep}), \qquad\quad D:=1, \qquad\quad H:=\frac{(\Gamma_{1}/\beta)^{1/2}+\Gamma_{0}}{\left(\ep^{3/4}\right)^{2^{-k}}}. \end{equation*} We observe that \begin{equation*} \sqrt{H}+\ep^{2}D\leq \frac{\left[(\Gamma_{1}/\beta)^{1/2}+\Gamma_{0}\right]^{1/2}}{\left(\ep^{3/8}\right)^{2^{-k}}}+1\leq \frac{\Gamma_{2}}{\left(\ep^{3/8}\right)^{2^{-k}}}, \end{equation*} and in particular from the second inequality in (\ref{hp:ub-ep-1}) we obtain that \begin{equation*} \ep^{2}\left(\sqrt{H}+\ep^{2}D\right)\leq \ep^{3/2}\Gamma_{2}\leq L< \frac{b_{k,\ep}-a_{k,\ep}}{2}, \end{equation*} which shows that assumption (\ref{hp:bound-D-H-1}) is satisfied, while from the first inequality in (\ref{hp:ub-ep-2}) we obtain that \begin{equation*} \frac{2}{|\log\ep|}\log\left(1+\frac{45}{2\ep^{4}}\left(\sqrt{H}+\ep^{2}D\right)^{2}\right)\leq \frac{2}{|\log\ep|}\log\left(1+\frac{45}{2\ep^{4}}\cdot\frac{\Gamma_{2}^{2}}{\ep}\right)\leq 18, \end{equation*} which shows that assumption (\ref{hp:bound-D-H-2}) is satisfied.
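Stepping back, the dyadic bookkeeping of (\ref{defn:ep-n})--(\ref{est:diff-Lnk}) that drives this induction can also be checked numerically; a minimal sketch:

```python
import math

def dyadic_bounds_hold(n):
    eps = 0.9 * 2.0**(-2**n)            # a sample eps in [4^(-2^n), 2^(-2^n))
    if not (4.0**(-2**n) <= eps < 2.0**(-2**n)):
        return False
    for k in range(n):
        # 2^(n-k) <= 2^(2^(n-k-1)) < eps^(-2^(-k)/2), compared through log2
        if not (n - k <= 2**(n - k - 1) < -math.log2(eps) * 2.0**(-k) / 2.0):
            return False
    return True

assert all(dyadic_bounds_hold(n) for n in range(1, 8))
```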
Therefore, from Lemma~\ref{lemma:bound-D-H} we deduce the existence of $w_{k,\ep}\in H^{2}((a_{k,\ep},b_{k,\ep}))$, with the same boundary values (function and derivative) as $w$, satisfying \begin{equation*} \RPM_{\ep}((a_{k,\ep},b_{k,\ep}),w_{k,\ep})\leq 80\frac{\Gamma_{2}}{\left(\ep^{3/8}\right)^{2^{-k}}}\leq 80\frac{\Gamma_{2}}{\ep^{2^{-k-1}}} \end{equation*} and \begin{equation*} \int_{a_{k,\ep}}^{b_{k,\ep}}w_{k,\ep}(x)^{2}\,dx\leq 10\ep^{2}\frac{\Gamma_{2}^{5}}{\left(\ep^{15/8}\right)^{2^{-k}}}= 10\frac{\ep^{11/8}}{\left(\ep^{15/8}\right)^{2^{-k}}}\cdot\left(\ep^{5/8}\Gamma_{2}^{4}\right)\cdot\Gamma_{2}\leq 10\frac{\Gamma_{2}}{\ep^{2^{-k-1}}}, \end{equation*} where the last inequality follows from the second relation in (\ref{hp:ub-ep-2}). From the last two estimates and the minimality of $w$ we conclude that \begin{eqnarray*} \lefteqn{\hspace{-2em} \RPMF_{\ep}(\beta,f,(-L_{n,k+1},L_{n,k+1}),w) } \\[1ex] & \leq & \RPMF_{\ep}(\beta,f,(a_{k,\ep},b_{k,\ep}),w) \\[1ex] & \leq & \RPMF_{\ep}(\beta,f,(a_{k,\ep},b_{k,\ep}),w_{k,\ep}) \\[1ex] & \leq & \RPM_{\ep}((a_{k,\ep},b_{k,\ep}),w_{k,\ep})+ 2\beta\int_{a_{k,\ep}}^{b_{k,\ep}}w_{k,\ep}(x)^{2}\,dx+ 2\beta\int_{a_{k,\ep}}^{b_{k,\ep}}f(x)^{2}\,dx \\[0.5ex] & \leq & (80+20\beta)\frac{\Gamma_{2}}{\ep^{2^{-k-1}}}+ 2\beta(2L+2)\Gamma_{0}^{2} \\[0.5ex] & \leq & \frac{\Gamma_{1}}{\ep^{2^{-k-1}}}, \end{eqnarray*} where in the last two inequalities we exploited (\ref{hp:f-infty}) and (\ref{defn:M1}), respectively. This completes the inductive step, and hence also the proof. \end{proof} \begin{rmk}\label{rmk:iteration} \begin{em} Proposition~\ref{prop:iteration} can be extended in a straightforward way to one-sided local minimizers. To this end, it is enough to replace in the statement the interval $(-L,L)$ with $(0,L)$, the interval $(-(L+1),L+1)$ with $(0,L+1)$, and ``loc'' with ``R-loc''. The proof is analogous and somewhat simpler, because we just need to work on one side of the interval.
\end{em} \end{rmk} \subsubsection{Proof of Proposition~\ref{prop:bu2minloc}} \paragraph{\textmd{\textit{Existence of a limit}}} We prove that there exist a function $w_{\infty}:\re\to\re$ and an increasing sequence $\{n_{k}\}$ of positive integers such that, for every $L>0$, the restriction of $w_{\infty}$ to the interval $(-L,L)$ belongs to $\PJ((-L,L))$ and $w_{n_{k}}(x)\to w_{\infty}(x)$ in $L^{2}((-L,L))$. To this end, it is enough to prove that, for every fixed real number $L>0$, it turns out that \begin{equation} \sup\left\{\RPM_{\ep_{n}}((-L,L),w_{n})+\int_{-L}^{L}w_{n}(x)^{2}\,dx:n\in\n\right\}<+\infty. \label{th:F<=CL} \end{equation} Indeed, once this uniform bound has been established (the supremum might depend on $L$, of course), the compactness result of statement~(\ref{stat:ABG-cpt}) of Theorem~\ref{thm:ABG} implies that the sequence $\{w_{n}\}$ is relatively compact in $L^{2}((-L,L))$ for this fixed value of $L$, and any limit function lies in $\PJ((-L,L))$. At this point we apply the result for a sequence of intervals $(-L_{k},L_{k})$ with $L_{k}\to +\infty$, and with a classical diagonal procedure we obtain a subsequence that converges in all bounded intervals. In order to prove (\ref{th:F<=CL}), we begin by observing that, due to assumption~(ii), there exists a constant $M_{L}$ such that \begin{equation*} |g_{n}(x)|\leq M_{L} \qquad \forall x\in[-(L+1),L+1],\quad\forall n\in\n. \end{equation*} At this point we apply Proposition~\ref{prop:iteration} with \begin{equation*} w(x):=w_{n}(x), \qquad\quad f(x):=g_{n}(x), \qquad\quad \Gamma_{0}:=\max\{M_{L},C_{0}\}. \end{equation*} This is possible because assumptions (\ref{hp:min-loc}) and (\ref{hp:univ-bound}) are satisfied for trivial reasons as soon as $[-(L+1),L+1]\subseteq(A_{n},B_{n})$ and $|\log\ep_{n}|\geq 1$.
From Proposition~\ref{prop:iteration} we obtain that there exists a constant $\Gamma_{1}$ such that \begin{equation} \RPMF_{\ep_{n}}(\beta,g_{n},(-L,L),w_{n})\leq 4\Gamma_{1} \label{est:ABGF-4Gamma} \end{equation} when $n$ is large enough. This implies (\ref{th:F<=CL}) because the left-hand side of (\ref{est:ABGF-4Gamma}) controls the first term in the left-hand side of (\ref{th:F<=CL}), while the integral can be estimated as \begin{equation*} \int_{-L}^{L}w_{n}(x)^{2}\,dx\leq 2\int_{-L}^{L}(w_{n}(x)-g_{n}(x))^{2}\,dx+2\int_{-L}^{L}g_{n}(x)^{2}\,dx, \end{equation*} where the first integral is controlled again by the left-hand side of (\ref{est:ABGF-4Gamma}), and the second integral is controlled because of the uniform bound on $g_{n}$. \paragraph{\textmd{\textit{Characterization of the limit}}} Let $w_{\infty}$ be any limit function identified in the first paragraph of the proof. We claim that $w_{\infty}$ is an entire local minimizer for the functional (\ref{defn:JF}) with $\alpha$ defined by (\ref{defn:alpha-0}). The function $w_{\infty}(x)$ is by definition the limit in $L^{2}\loc(\re)$ of some sequence $w_{n_{k}}(x)$, and from the uniform bounds (\ref{est:ABGF-4Gamma}) we deduce also that $\log(1+w_{n_{k}}'(x)^{2})\to 0$ in $L^{1}\loc(\re)$. Up to further subsequences (not relabeled) we can assume that in both cases the convergence is also pointwise for almost every $x\in\re$. Now let us consider any interval $(a,b)\subseteq\re$ whose endpoints are not jump points of $w_{\infty}$, and such that $w_{n_{k}}(x)\to w_{\infty}(x)$ and $w_{n_{k}}'(x)\to 0$ for $x\in\{a,b\}$. Let $v\in\PJ((a,b))$ be any function with the same boundary conditions as $w_{\infty}(x)$ in the usual sense.
From statement~(\ref{stat:ABG-recovery}) of Theorem~\ref{thm:ABG} applied with the quadruple of boundary data \begin{equation*} (w_{n_{k}}(a),w_{n_{k}}'(a),w_{n_{k}}(b),w_{n_{k}}'(b))\to (w_{\infty}(a),0,w_{\infty}(b),0)= (v(a),0,v(b),0) \end{equation*} we obtain a recovery sequence $\{v_{k}\}\subseteq H^{2}((a,b))$ for $v$ that has the same boundary conditions as $w_{n_{k}}$ at $a$ and $b$ (both on the function and on the derivative). From the minimality of $w_{n_{k}}$ we deduce that \begin{equation*} \RPMF_{\ep_{n_{k}}}(\beta,g_{n_{k}},(a,b),w_{n_{k}})\leq \RPMF_{\ep_{n_{k}}}(\beta,g_{n_{k}},(a,b),v_{k}) \end{equation*} for every positive integer $k$. Letting $k\to +\infty$, and recalling statement~(\ref{stat:ABG-gconv}) of Theorem~\ref{thm:ABG}, we conclude that \begin{eqnarray*} \JF_{1/2}(\alpha_{0},\beta,M,(a,b),w_{\infty}) & \leq & \liminf_{k\to +\infty}\RPMF_{\ep_{n_{k}}}(\beta,g_{n_{k}},(a,b),w_{n_{k}}) \\[1ex] & \leq & \lim_{k\to +\infty}\RPMF_{\ep_{n_{k}}}(\beta,g_{n_{k}},(a,b),v_{k}) \\[1ex] & = & \JF_{1/2}(\alpha_{0},\beta,M,(a,b),v). \end{eqnarray*} Since $v$ is arbitrary, this proves that $w_{\infty}$ is a local minimizer of the limit functional in the interval $(a,b)$. Since intervals of this type invade the real line, this proves that $w_{\infty}$ is an entire local minimizer for the limit functional, as required. \paragraph{\textmd{\textit{Strict convergence}}} In the special case where $v(x)\equiv w_{\infty}(x)$ in $(a,b)$, the argument of the previous paragraph gives that \begin{equation*} \lim_{k\to +\infty}\RPM_{\ep_{n_{k}}}((a,b),w_{n_{k}})= \alpha_{0}\J_{1/2}((a,b),w_{\infty}), \end{equation*} namely $\{w_{n_{k}}\}$ is a recovery sequence for $w_{\infty}$ in the interval $(a,b)$. At this point, from statement~(\ref{stat:ABG-uc}) of Theorem~\ref{thm:ABG}, we conclude that $w_{n_{k}}\auto w_{\infty}$ in $BV((a,b))$. Since intervals of this type invade the real line, this completes the proof.
\qed \subsubsection{Proof of Proposition~\ref{prop:bu2Rminloc}} The proof is analogous to the proof of Proposition~\ref{prop:bu2minloc}, and hence we limit ourselves to sketching the argument. In the first step we show that there exist a function $w_{\infty}:(0,+\infty)\to\re$ and an increasing sequence $\{n_{k}\}$ of positive integers such that \begin{itemize} \item the restriction of $w_{\infty}$ to the interval $(0,L)$ belongs to $\PJ((0,L))$ for every $L>0$, \item $w_{n_{k}}(x)\to w_{\infty}(x)$ in $L^{2}((0,L))$ and $\log(1+w_{n_{k}}'(x)^{2})\to 0$ in $L^{1}((0,L))$ for every $L>0$, \item $w_{n_{k}}(x)\to w_{\infty}(x)$ and $w_{n_{k}}'(x)\to 0$ for almost every $x>0$. \end{itemize} The argument relies on the one-sided version of Proposition~\ref{prop:iteration} (see Remark~\ref{rmk:iteration}), and on the compactness result of statement~(\ref{stat:ABG-cpt}) of Theorem~\ref{thm:ABG}. In the second step we consider intervals of the form $(0,L)$, where $L$ is any positive real number at which we have the pointwise convergence $w_{n_{k}}(L)\to w_{\infty}(L)$ and $w_{n_{k}}'(L)\to 0$, and such that $L$ is not a jump point of $w_{\infty}$ (both conditions hold true for almost every point in the half-line). Then we consider any function $v\in\PJ((0,L))$ such that $v(L)=w_{\infty}(L)$, where boundary values are intended in the usual sense. From statement~(\ref{stat:ABG-recovery}) of Theorem~\ref{thm:ABG}, applied with the quadruple of boundary data \begin{equation*} (v(0),0,w_{n_{k}}(L),w_{n_{k}}'(L))\to(v(0),0,w_{\infty}(L),0)=(v(0),0,v(L),0), \end{equation*} we obtain a recovery sequence $\{v_{k}\}\subseteq H^{2}((0,L))$ for $v$ that has the same boundary conditions as $w_{n_{k}}$ at $x=L$. Thus from the minimality of $w_{n_{k}}$ in $(0,L)$ we deduce as in the previous case that \begin{equation*} \JF_{1/2}(\alpha_{0},\beta,M,(0,L),w_{\infty})\leq\JF_{1/2}(\alpha_{0},\beta,M,(0,L),v).
\end{equation*} Since $L$ can be chosen to be arbitrarily large, this is enough to conclude that $w_{\infty}$ is a right-hand minimizer in $(0,+\infty)$. Finally, in the third step we conclude as before that the convergence is strict in every interval $(0,L)$ such that $L$ is not a jump point of $w_{\infty}$. \qed \setcounter{equation}{0} \section{Possible extensions}\label{sec:extension} Our proof of Theorem~\ref{thm:asympt-min} relies just on the Gamma-convergence results for the rescaled functionals (\ref{defn:ABG}), and on the estimates of Proposition~\ref{prop:loc-min-discr} for the minima of the limit functional with linear forcing term. Our proofs of Theorems~\ref{thm:BU} and~\ref{thm:varifold} rely on the characterization of local minima for the limit functional, and on the compactness result that follows from Proposition~\ref{prop:iteration}. For these reasons, we expect that these results can be extended to more general models by just extending to these models the tools that we exploited here. For example, it is possible to consider more general fidelity terms of the form \begin{equation*} \int_{0}^{1}\beta(x)|u(x)-f(x)|^{p}\,dx, \end{equation*} for suitable choices of the exponent $p\geq 1$ (but also every $p>0$ should be fine) and of the coefficient $\beta(x)$, provided that the latter is continuous and strictly positive. In the sequel we focus on less trivial generalizations that involve the principal part, and we discuss three possibilities. \paragraph{\textmd{\textit{Different convex-concave Lagrangians}}} We can replace the function $\phi(p):=\log(1+p^{2})$ with different functions, for example those presented in (\ref{defn:phi-1}) or (\ref{defn:phi-2}). This leads to functionals with principal part of the form \begin{equation*} \sPM_{\ep}(u):=\int_{0}^{1}\left\{\ep^{6}\omep^{4}u''(x)^{2}+\phi(u'(x))\right\}\,dx, \end{equation*} where now $\omep:=\ep\phi(1/\ep^{2})^{1/2}$.
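For example, in the model case $\phi(p):=\log(1+p^{2})$ this definition gives \begin{equation*} \omep=\ep\log\left(1+\frac{1}{\ep^{4}}\right)^{1/2}= \ep\left(4|\log\ep|+\log(1+\ep^{4})\right)^{1/2}\sim 2\ep|\log\ep|^{1/2} \qquad\mbox{as }\ep\to 0^{+}, \end{equation*} which is of the same order as (\ref{defn:omep}).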
Under rather general assumptions on $\phi$, the blow-ups of minimizers at scale $\omep$ are local minimizers for the rescaled functionals \begin{equation*} \sRPM_{\ep}(\Omega,v):=\int_{\Omega}\left\{\ep^{6}v''(x)^{2}+\frac{1}{\omep^{2}}\phi(v'(x))\right\}\,dx, \end{equation*} and this family Gamma-converges to a suitable multiple of the functional $\J_{\sigma}(\Omega,v)$, which is the natural generalization of (\ref{defn:J}) obtained by replacing 1/2 with a different exponent $\sigma\in(0,1)$ that depends on the growth at infinity of $\phi(p)$ (actually in this case we obtain only exponents in $[1/2,1)$). All the results of this paper can be easily extended, more or less with the same techniques. \paragraph{\textmd{\textit{Higher order singular perturbation}}} We can replace second order derivatives with derivatives of higher order. This leads to functionals with principal part of the form \begin{equation*} \sPM_{\ep}(u):= \int_{0}^{1}\left\{\ep^{4k-2}\omep^{2k}u^{(k)}(x)^{2}+\log\left(1+u'(x)^{2}\right)\right\}\,dx, \end{equation*} where $u^{(k)}(x)$ denotes the derivative of $u(x)$ of order $k\geq 2$, and $\omep$ is defined as in (\ref{defn:omep}). Also in this case the rescaled functionals \begin{equation*} \sRPM_{\ep}(\Omega,v):= \int_{\Omega}\left\{\ep^{4k-2}v^{(k)}(x)^{2}+\frac{1}{\omep^{2}}\log\left(1+v'(x)^{2}\right)\right\}\,dx \end{equation*} Gamma-converge to a suitable multiple of $\J_{\sigma}(\Omega,v)$, now with $\sigma=1/k$. Therefore, it seems reasonable that the results of this paper can be extended, even if some steps (for example the iteration argument in the compactness result) might require some extra work. Of course, one can also combine a higher order singular perturbation with a different choice of $\phi(p)$, and/or choose a different exponent for the higher order derivative.
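A quick heuristic indicates where the exponent $\sigma=1/k$ comes from. For a transition of height $J$ across an interval of length $\ell$, the two terms of the rescaled functional behave like $A\sim\ep^{4k-2}J^{2}\ell^{1-2k}$ and $B\sim\ell\cdot\omep^{-2}\cdot(\mbox{logarithmic factor})$, and the inequality between arithmetic and geometric mean gives \begin{equation*} A+B\geq \frac{2k}{(2k-1)^{(2k-1)/(2k)}}\left(AB^{2k-1}\right)^{1/(2k)}, \end{equation*} where now the powers of $\ell$ cancel, because $\ell^{1-2k}\cdot\ell^{2k-1}=1$, while the height $J$ survives with exponent $2\cdot\frac{1}{2k}=\frac{1}{k}$. For $k=2$ this is exactly the inequality exploited in the proof of Lemma~\ref{lemma:basic-below}.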
\paragraph{\textmd{\textit{Space discretization}}} In a different direction, it is possible to consider a space discretization of the problem where derivatives are replaced by finite differences. This leads to functionals of the form \begin{equation*} \sPM_{\ep}(u):=\int_{0}^{1-\ep^{2}\omep}\log\left(1+\left(\frac{u(x+\ep^{2}\omep)-u(x)}{\ep^{2}\omep}\right)^{2}\right)\,dx, \end{equation*} possibly defined in the space of functions that are piecewise constant with steps of length $\ep^{2}\omep$, where now $\omep$ is defined as in (\ref{defn:omep}). This is equivalent to considering the original functional (\ref{defn:PM}), depending on true derivatives, but restricted to the space of functions that are piecewise affine, again with respect to some grid with size $\ep^{2}\omep$. The natural rescaling corresponding to blow-ups at scale $\omep$ leads to the family of functionals \begin{equation*} \sRPM_{\ep}(v):=\frac{1}{\omep^{2}}\int_{\Omega}\log\left(1+\left(\frac{v(x+\ep^{2})-v(x)}{\ep^{2}}\right)^{2}\right)\,dx. \end{equation*} This scaling of the Perona-Malik functional, although elementary, seems never to have appeared in the previous literature on the discrete model (see~\cite{2001-CPAM-Esedoglu,2003-M3AS-MorNeg,2011-SIAM-SlowSD,2018-ActApplMath-BraVal}). The Gamma-limit turns out to be a multiple of the functional $\J_{0}(\Omega,v)$, namely the functional that simply counts the number of jumps of $v$ in $\Omega$, regardless of jump heights. Again it is possible to extend the results of this paper, and some steps are even easier, for example the iterative argument in Proposition~\ref{prop:iteration} and the classification of local minimizers for the limit functional. \setcounter{equation}{0} \section{Future perspectives and open problems}\label{sec:open} In this final section we present some questions that remain open, and that could deserve further investigation.
The first one concerns uniqueness of minimizers, which is always a challenging question when the Lagrangian is non-convex. We recall that for the model (\ref{defn:AM}) uniqueness is known in some cases (see~\cite[Theorem~1.1 and subsequent Remark~4]{1993-CalcVar-Muller}), but in that case the forcing term is rather special and there are periodic boundary conditions. \begin{open}[Uniqueness of minimizers] Let us consider the minimum problem (\ref{defn:DMnf}), under the same assumptions as in Proposition~\ref{prop:basic}. Determine whether the minimizer is unique, at least when $\ep$ is small enough and/or the forcing term $f(x)$ is smooth enough. \end{open} Concerning Theorem~\ref{thm:asympt-min}, it could be interesting to investigate the asymptotic behavior of minima under weaker regularity assumptions on the forcing term $f(x)$. \begin{open}[Existence of the limit of rescaled minimum values] Characterize all functions $f\in L^{2}((0,1))$ such that the limit in (\ref{th:asympt-min}) exists, or exists and is a real number, or exists and coincides with the right-hand side, up to defining $f'(x)$ in a suitable way. \end{open} The question is largely open. It is also conceivable that the vanishing order of $m(\ep,\beta,f)$ as $\ep\to 0^{+}$ depends on the regularity of $f(x)$ in terms of H\"older continuity, Sobolev exponents or even fractional Sobolev spaces, which motivates the following question. \begin{open}[Vanishing order of minima vs regularity of the forcing term] Find any connection between the vanishing order of $m(\ep,\beta,f)$ as $\ep\to 0^{+}$ and the regularity of the forcing term $f(x)$. \end{open} Here we present the results that we know for the time being. \begin{itemize} \item For every $f\in\PJ((0,1))$ with a finite number of jumps it turns out that \begin{equation} \lim_{\ep\to 0^{+}}\frac{m(\ep,\beta,f)}{\omep^{5/2}}= 4\left(\frac{2}{3}\right)^{1/2}5^{3/4}\cdot\J_{1/2}((0,1),f).
\label{th:lim-PJ} \end{equation} The same should be true when $\J_{1/2}((0,1),f)<+\infty$. \item It should not be difficult to extend (\ref{th:asympt-min}) to every $f\in H^{1}((0,1))$, and probably also to forcing terms that are the sum of a function $f_{1}\in H^{1}((0,1))$ and a function $f_{2}\in\PJ((0,1))$ with $\J_{1/2}((0,1),f_{2})<+\infty$. This extension should require only some technical adjustments in our proof, because the key point (\ref{lim:fepL2f}) remains true. \item Heuristically, when minimizing (\ref{defn:ABGF}) we can replace the rescaled Perona-Malik functional (\ref{defn:ABG}) by its Gamma-limit (\ref{defn:J}). This leads to a minimization problem in the class of pure jump functions, which we can further simplify by restricting to competitors whose jump points are equally spaced at some fixed distance $\delta$, to be optimized with respect to $\ep$. By formalizing this idea we obtain the following two estimates from above. \begin{itemize} \item If $f(x)$ is $a$-H\"older continuous with some exponent $a\in(0,1]$ and some constant $H$, then it turns out that \begin{equation*} \limsup_{\ep\to 0^{+}}\frac{m(\ep,\beta,f)}{\omep^{10a/(3a+2)}}\leq c_{a}H^{4/(3a+2)}. \end{equation*} \item If $f\in W^{1,p}((0,1))$ for some exponent $p\in[1,2]$, then it turns out that \begin{equation*} \limsup_{\ep\to 0^{+}}\frac{m(\ep,\beta,f)}{\omep^{(15p-10)/(7p-4)}}\leq c_{p}\|f'\|_{L^{p}((0,1))}^{(5p-2)/(7p-4)}. \end{equation*} \end{itemize} \item The set of forcing terms $f\in L^{2}((0,1))$ for which the limit in (\ref{th:asympt-min}) exists has empty interior, even if we allow the limit to be~$+\infty$, and even if we restrict ourselves to a sequence $\ep_{n}\to 0^{+}$. Indeed, for every fixed $\ep_{n}\in(0,1)$, the function $f\mapsto m(\ep_{n},\beta,f)$ is continuous in $L^{2}((0,1))$, and therefore also the function \begin{equation*} \Psi_{n}(f):=\arctan\left(\frac{m(\ep_{n},\beta,f)}{\omega(\ep_{n})^{2}}\right) \end{equation*} is continuous in the same space.
Let us assume by contradiction that $\Psi_{n}(f)$ converges to some $\Psi_{\infty}(f)$ for every $f$ in some open set $\mathcal{U}\subseteq L^{2}((0,1))$. Since $\mathcal{U}$ is a Baire space, and $\Psi_{\infty}$ is the pointwise limit of continuous functions, necessarily $\Psi_{\infty}$ is continuous in some $G_{\delta}$ subset $\mathcal{V}\subseteq\mathcal{U}$. Now on the one hand we know from (\ref{th:lim-PJ}) that $\Psi_{\infty}(f)=0$ for every piecewise constant function with a finite number of jumps, and this class is dense in $L^{2}((0,1))$, and therefore $\Psi_{\infty}(f)=0$ for every $f\in\mathcal{V}$. On the other hand, also functions of class $C^{1}$ with right-hand side of (\ref{th:asympt-min}) greater than~1 are dense in $L^{2}((0,1))$, which implies that $\Psi_{\infty}(f)\geq 1$ for every $f\in\mathcal{V}$. \end{itemize} As for the convergence of minimizers, on the one hand we expect that the $C^{1}$ regularity of $f(x)$ is required in order to characterize the blow-ups of minimizers with $\ep$-dependent centers as we did in Theorem~\ref{thm:BU}. On the other hand, the statement of Theorem~\ref{thm:varifold} seems to require less regularity on $f(x)$, in contrast with our proof, which relies heavily on Theorem~\ref{thm:BU}. \begin{open}[Strict and varifold convergence of minimizers] Extend the results of Theorem~\ref{thm:varifold} to less regular forcing terms, and in particular determine whether the results hold true for every $f\in BV((0,1))$ (up to a suitable extension of identity~(\ref{th:varifold}) to bounded variation functions). \end{open} Finally, since this paper deals with the Perona-Malik functional in dimension one, we conclude with the following natural and challenging question. \begin{open}[Any space dimension] Extend the results of this paper to higher dimensions. \end{open} \setcounter{equation}{0} \appendix \section{Appendix} In this final appendix we prove the results stated in section~\ref{sec:gconv}.
To this end, we need three preliminary technical lemmata. The first one is the classical estimate from below for the rescaled Perona-Malik functional in an interval where $|u'(x)|$ is ``large'' (the argument is analogous to a step in the proof of~\cite[Proposition~3.3]{ABG}). \begin{lemma}[Basic estimate from below]\label{lemma:basic-below} Let $(\alpha,\beta)\subseteq\re$ be an interval, and let $u\in H^{2}((\alpha,\beta))$. Let us assume that there exists a real number $D>0$ such that $|u'(\alpha)|=|u'(\beta)|=D$, and $|u'(x)|\geq D$ for every $x\in(\alpha,\beta)$. Then for every $\ep\in(0,1)$ it turns out that \begin{equation} \RPM_{\ep}((\alpha,\beta),u)\geq 4\left(\frac{2}{3}\right)^{1/2}\left(\frac{\log(1+D^{2})}{|\log\ep|}\right)^{3/4} \left(|u(\beta)-u(\alpha)|-D(\beta-\alpha)\strut\right)^{1/2}. \label{th:usual-below} \end{equation} \end{lemma} \begin{proof} Let us observe that our assumptions imply that either $u'(x)\geq D$ for every $x\in(\alpha,\beta)$, or $u'(x)\leq -D$ for every $x\in(\alpha,\beta)$. In both cases it turns out that $u'(x)$ has the same value at the two endpoints of the interval. Moreover $u(\beta)-u(\alpha)$ has the same sign as $u'(\alpha)$ and $u'(\beta)$, and \begin{equation*} |u(\beta)-u(\alpha)|-D(\beta-\alpha)= \left|(u(\beta)-u(\alpha))-\frac{u'(\alpha)+u'(\beta)}{2}(\beta-\alpha)\strut\right|\geq 0. \end{equation*} As a consequence, from Lemma~\ref{lemma:ABG} we obtain that \begin{equation*} \int_{\alpha}^{\beta}u''(x)^{2}\,dx\geq \frac{12}{(\beta-\alpha)^{3}}\left(|u(\beta)-u(\alpha)|-D(\beta-\alpha)\strut\right)^{2}, \end{equation*} and therefore \begin{eqnarray*} \RPM_{\ep}((\alpha,\beta),u) & = & \int_{\alpha}^{\beta}\left(\ep^{6}u''(x)^{2}+ \frac{1}{\omep^{2}}\log\left(1+u'(x)^{2}\right)\right)\,dx \\[1ex] & \geq & \frac{12\ep^{6}}{(\beta-\alpha)^{3}}\left(|u(\beta)-u(\alpha)|-D(\beta-\alpha)\strut\right)^{2}+ \frac{\beta-\alpha}{\omep^{2}}\log\left(1+D^{2}\right).
\end{eqnarray*} Applying the classical inequality \begin{equation*} A+B\geq\frac{4}{3^{3/4}}\left(AB^{3}\right)^{1/4} \qquad \forall (A,B)\in[0,+\infty)^{2}, \end{equation*} which follows from the inequality between arithmetic and geometric mean applied to the four numbers $A$, $B/3$, $B/3$, $B/3$, we obtain exactly (\ref{th:usual-below}). \end{proof} The second lemma shows that for every function $u\in H^{2}((a,b))$ one can find a function $z\in\PJ((a,b))$ that is close to $u$ in terms of $L^{p}$ norm and total variation, and such that the $\RPM_{\ep}$ energy of $u$ is controlled from below by the $\J_{1/2}$ energy of $z$. An analogous result is proved in~\cite[Proposition~3.3]{ABG}. \begin{lemma}[Substitution lemma]\label{lemma:split} Let $(a,b)\subseteq\re$ be an interval, let $\{\ep_{n}\}\subseteq(0,1)$ be a sequence such that $\ep_{n}\to 0^{+}$, and let $\{u_{n}\}\subseteq H^{2}((a,b))$ be a sequence of functions such that \begin{equation} \sup\left\{\RPM_{\ep_{n}}((a,b),u_{n}):n\geq 1\right\}<+\infty. \label{hp:RPM-bounded} \end{equation} Then there exists a sequence $\{z_{n}\}\subseteq\PJ((a,b))$ with the following properties. \begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \item It turns out that \begin{equation} \RPM_{\ep_{n}}((a,b),u_{n})\geq M_{n}\cdot\J_{1/2}((a,b),z_{n}) \qquad \forall n\geq 1, \label{th:uz-energy} \end{equation} where \begin{equation*} M_{n}:=4\left(\frac{2}{3}\right)^{1/2} \left\{\frac{1}{|\log\ep_{n}|}\log\left(1+\frac{1}{\ep_{n}^{4}|\log\ep_{n}|^{8}}\right)\right\}^{3/4} \qquad \forall n\geq 1. \end{equation*} \item The function $z_{n}$ is asymptotically close to $u_{n}$ in the sense that \begin{equation} \lim_{n\to +\infty}\|u_{n}-z_{n}\|_{L^{p}((a,b))}= 0 \qquad \forall p\in[1,+\infty). \label{th:uz-Lp} \end{equation} \item The total variation of $z_{n}$ is asymptotically close to the total variation of $u_{n}$ in the sense that \begin{equation} \lim_{n\to +\infty}\int_{a}^{b}|u_{n}'(x)|\,dx-|Dz_{n}|((a,b))=0.
\label{th:uz-TV} \end{equation} \end{enumerate} \end{lemma} \begin{proof} Let us consider the set \begin{equation*} A_{n}:=\left\{x\in(a,b):|u_{n}'(x)|>\frac{1}{\ep_{n}^{2}|\log\ep_{n}|^{4}}\right\}. \end{equation*} Since $A_{n}$ is an open set, we can write it as a finite or countable union of open disjoint intervals (its connected components), namely in the form \begin{equation*} A_{n}=\bigcup_{i\in I_{n}}(\alpha_{n,i},\beta_{n,i}), \end{equation*} where $I_{n}$ is a suitable index set. Let $w_{n}:[a,b]\to\re$ be the function of class $C^{1}$ such that $w_{n}(a)=u_{n}(a)$, and \begin{equation*} w_{n}'(x):=\begin{cases} 0 & \text{if }x\in(a,b)\setminus A_{n}, \\ u_{n}'(x)-\dfrac{\sign(u_{n}'(x))}{\ep_{n}^{2}|\log\ep_{n}|^{4}}\quad & \text{if }x\in A_{n}. \end{cases} \end{equation*} We observe that $w_{n}'(x)$ is the difference between $u_{n}'(x)$ and the truncation of $u_{n}'(x)$ between the two values $\pm\ep_{n}^{-2}|\log\ep_{n}|^{-4}$. We deduce that in each of the intervals $ (\alpha_{n,i},\beta_{n,i})$ the sign of $w_{n}'(x)$ is constant and coincides with the sign of $u_{n}'(x)$ in the same interval, and in any case it turns out that \begin{equation} |w_{n}(\beta_{n,i})-w_{n}(\alpha_{n,i})|= |u_{n}(\beta_{n,i})-u_{n}(\alpha_{n,i})|- \frac{\beta_{n,i}-\alpha_{n,i}}{\ep_{n}^{2}|\log\ep_{n}|^{4}}. \label{eqn:TV-wn-un} \end{equation} Finally, for every $i\in I_{n}$ we consider the midpoint $\gamma_{n,i}:=(\alpha_{n,i}+\beta_{n,i})/2$ of the interval $(\alpha_{n,i},\beta_{n,i})$, and we introduce the function $z_{n}\in\PJ((a,b))$ whose jump points are located in those midpoints, with height equal to the variation of $w_{n}$ in the corresponding intervals, and translated vertically so that $z_{n}(a)=w_{n}(a)=u_{n}(a)$. This function is given by \begin{equation*} z_{n}(x):=u_{n}(a)+ \sum_{i\in I_{n}}(w_{n}(\beta_{n,i})-w_{n}(\alpha_{n,i}))\mathbbm{1}_{(\gamma_{n,i},b)}(x) \qquad \forall x\in(a,b). 
\end{equation*} With these definitions, we are now ready to prove the required estimates. \paragraph{\textmd{\textit{Statement~(1)}}} Let us apply Lemma~\ref{lemma:basic-below} to the function $u_{n}$ in the interval $(\alpha_{n,i},\beta_{n,i})$ with $D:=\ep_{n}^{-2}|\log\ep_{n}|^{-4}$. Recalling (\ref{eqn:TV-wn-un}), we obtain that \begin{eqnarray*} \RPM_{\ep_{n}}((a,b),u_{n}) & \geq & \RPM_{\ep_{n}}(A_{n},u_{n}) \\[1ex] & = & \sum_{i\in I_{n}}\RPM_{\ep_{n}}((\alpha_{n,i},\beta_{n,i}),u_{n}) \\ & \geq & M_{n}\sum_{i\in I_{n}}\left(|u_{n}(\beta_{n,i})-u_{n}(\alpha_{n,i})|- \frac{\beta_{n,i}-\alpha_{n,i}}{\ep_{n}^{2}|\log\ep_{n}|^{4}}\right)^{1/2} \\[0.5ex] & = & M_{n}\sum_{i\in I_{n}}\left|w_{n}(\beta_{n,i})-w_{n}(\alpha_{n,i})\right|^{1/2} \\[0.5ex] & = & M_{n}\sum_{i\in I_{n}}\left|J_{z_{n}}(\gamma_{n,i})\right|^{1/2} \\[0.5ex] & = & M_{n}\J_{1/2}((a,b),z_{n}), \end{eqnarray*} which proves (\ref{th:uz-energy}). \paragraph{\textmd{\textit{Statement~(2)}}} In order to prove (\ref{th:uz-Lp}), we show that \begin{equation} u_{n}(x)-w_{n}(x)\to 0 \qquad \text{uniformly in }[a,b] \label{th:un-wn} \end{equation} and that for every $p\in[1,+\infty)$ it turns out that \begin{equation} w_{n}-z_{n}\to 0 \qquad \text{in }L^{p}((a,b)). \label{th:wn-zn} \end{equation} In order to prove (\ref{th:un-wn}) we introduce the sets \begin{equation*} B_{n}:=\left\{x\in(a,b):\frac{1}{|\log\ep_{n}|}\leq|u_{n}'(x)|\leq\frac{1}{\ep_{n}^{2}|\log\ep_{n}|^{4}}\right\} \end{equation*} and \begin{equation*} C_{n}:=\left\{x\in(a,b):|u_{n}'(x)|<\frac{1}{|\log\ep_{n}|}\right\}. \end{equation*} Let us estimate the measure of $A_{n}$, $B_{n}$, $C_{n}$. In the case of $C_{n}$ we consider the trivial estimate $|C_{n}|\leq b-a$. 
In the case of $A_{n}$ and $B_{n}$ we consider the term with the logarithm in (\ref{defn:ABG}), and we obtain that \begin{equation*} |A_{n}|\leq \frac{\ep_{n}^{2}|\log\ep_{n}|}{\log(1+\ep_{n}^{-4}|\log\ep_{n}|^{-8})}\RPM_{\ep_{n}}((a,b),u_{n}) \end{equation*} and \begin{equation*} |B_{n}|\leq\frac{\ep_{n}^{2}|\log\ep_{n}|}{\log(1+|\log\ep_{n}|^{-2})}\RPM_{\ep_{n}}((a,b),u_{n}). \end{equation*} Recalling (\ref{hp:RPM-bounded}), these estimates imply that \begin{equation*} \lim_{n\to +\infty}\frac{|A_{n}|}{\ep_{n}^{2}|\log\ep_{n}|^{4}}= \lim_{n\to +\infty}\frac{|B_{n}|}{\ep_{n}^{2}|\log\ep_{n}|^{4}}= 0. \end{equation*} Now let us consider the function $u_{n}'(x)-w_{n}'(x)$. In $A_{n}$ it turns out that \begin{equation*} \int_{A_{n}}|u_{n}'(x)-w_{n}'(x)|\,dx= \int_{A_{n}}\frac{1}{\ep_{n}^{2}|\log\ep_{n}|^{4}}\,dx= \frac{|A_{n}|}{\ep_{n}^{2}|\log\ep_{n}|^{4}}. \end{equation*} In $B_{n}$ and $C_{n}$ it turns out that $w_{n}'(x)=0$, and hence \begin{equation*} \int_{B_{n}}|u_{n}'(x)-w_{n}'(x)|\,dx= \int_{B_{n}}|u_{n}'(x)|\,dx\leq \frac{|B_{n}|}{\ep_{n}^{2}|\log\ep_{n}|^{4}} \end{equation*} and \begin{equation*} \int_{C_{n}}|u_{n}'(x)-w_{n}'(x)|\,dx= \int_{C_{n}}|u_{n}'(x)|\,dx\leq \frac{|C_{n}|}{|\log\ep_{n}|}\leq \frac{b-a}{|\log\ep_{n}|}. \end{equation*} From all these estimates we conclude that \begin{equation} \lim_{n\to +\infty}\int_{a}^{b}|u_{n}'(x)-w_{n}'(x)|\,dx=0, \label{est:un'-wn'} \end{equation} which implies (\ref{th:un-wn}) because $u_{n}(a)=w_{n}(a)$ for every $n\geq 1$. In order to prove (\ref{th:wn-zn}), we begin by observing that for every $x\in(a,b)\setminus A_{n}$ it turns out that \begin{eqnarray*} w_{n}(x) & = & w_{n}(a)+\int_{a}^{x}w_{n}'(t)\,dt \\ & = & u_{n}(a)+\sum_{\{i\in I_{n}:\beta_{n,i}\leq x\}} \int_{\alpha_{n,i}}^{\beta_{n,i}}w_{n}'(t)\,dt \\ & = & u_{n}(a)+\sum_{\{i\in I_{n}:\beta_{n,i}\leq x\}}(w_{n}(\beta_{n,i})-w_{n}(\alpha_{n,i})) \\ & = & z_{n}(x), \end{eqnarray*} which implies that $w_{n}(x)-z_{n}(x)=0$ when $x\not\in A_{n}$.
On the other hand, when $x\in A_{n}$ it turns out that \begin{equation*} |w_{n}(x)-z_{n}(x)|\leq |w_{n}(\beta_{n,i})-w_{n}(\alpha_{n,i})|= |J_{z_{n}}(\gamma_{n,i})|\leq \left(\J_{1/2}((a,b),z_{n})\right)^{2}, \end{equation*} and therefore from (\ref{th:uz-energy}) we conclude that \begin{eqnarray*} \int_{a}^{b}|w_{n}(x)-z_{n}(x)|^{p}\,dx & = & \sum_{i\in I_{n}}\int_{\alpha_{n,i}}^{\beta_{n,i}}|w_{n}(x)-z_{n}(x)|^{p}\,dx \\[0.5ex] & \leq & \sum_{i\in I_{n}}(\beta_{n,i}-\alpha_{n,i})\cdot\left(\J_{1/2}((a,b),z_{n})\right)^{2p} \\[0.5ex] & = & |A_{n}|\cdot\left(\J_{1/2}((a,b),z_{n})\right)^{2p} \\[0.5ex] & \leq & |A_{n}|\cdot\left(\frac{1}{M_{n}}\RPM_{\ep_{n}}((a,b),u_{n})\right)^{2p}, \end{eqnarray*} which implies (\ref{th:wn-zn}) because $\RPM_{\ep_{n}}((a,b),u_{n})$ is bounded from above, $M_{n}$ is bounded from below, and $|A_{n}|\to 0$. \paragraph{\textmd{\textit{Statement~(3)}}} It remains to prove (\ref{th:uz-TV}). To this end, we just observe that \begin{equation*} |Dz_{n}|((a,b))= \sum_{i\in I_{n}}|w_{n}(\beta_{n,i})-w_{n}(\alpha_{n,i})|= \int_{A_{n}}|w_{n}'(x)|\,dx= \int_{a}^{b}|w_{n}'(x)|\,dx, \end{equation*} and \begin{equation*} \left| \int_{a}^{b}|u_{n}'(x)|\,dx- \int_{a}^{b}|w_{n}'(x)|\,dx \right|\leq \int_{a}^{b}|u_{n}'(x)-w_{n}'(x)|\,dx, \end{equation*} and we conclude thanks to (\ref{est:un'-wn'}). \end{proof} The last preliminary result is the classical lower semicontinuity of $\J_{1/2}$ (see for example~\cite[Theorems~4.7 and~4.8]{AFP}). Here we include an elementary proof in the one dimensional case, different from the original proof in~\cite{1989-BUMI-Ambrosio}, because we need to deduce the reinforced statement (true in dimension one) where the convergence of the energies implies the \emph{strict} convergence of the arguments.
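Before the proof, we observe with an elementary example that the inequality in the lower semicontinuity statement can be strict, so that the additional assumption of energy convergence in the second statement cannot be dropped: in $(a,b)=(0,1)$ the functions \begin{equation*} z_{n}:=\mathbbm{1}_{(1/2,1/2+1/n)} \end{equation*} converge to $z_{\infty}\equiv 0$ in $L^{p}((0,1))$ for every $p\geq 1$, while \begin{equation*} \J_{1/2}((0,1),z_{n})=2>0=\J_{1/2}((0,1),z_{\infty}) \qquad\mbox{and}\qquad |Dz_{n}|((0,1))=2\not\to 0=|Dz_{\infty}|((0,1)), \end{equation*} and hence in particular $z_{n}$ does not converge to $z_{\infty}$ strictly in $BV((0,1))$.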
\begin{lemma}[Lower semicontinuity of $\J_{1/2}$]\label{lemma:J-strict} Let $(a,b)\subseteq\re$ be an interval, and let $\{z_{n}\}\subseteq\PJ((a,b))$ be a sequence with the following properties: \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item there exists a constant $M$ such that \begin{equation} \J_{1/2}((a,b),z_{n})\leq M \qquad \forall n\geq 1, \label{hp:bound-J1/2} \end{equation} \item there exist $p\geq 1$ and $z_{\infty}\in L^{p}((a,b))$ such that $z_{n}\to z_{\infty}$ in $L^{p}((a,b))$. \end{enumerate} Then the following two statements hold true. \begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \item \emph{(Lower semicontinuity).} It turns out that $z_{\infty}\in\PJ((a,b))$ and \begin{equation} \liminf_{n\to +\infty}\J_{1/2}((a,b),z_{n})\geq \J_{1/2}((a,b),z_{\infty}). \label{th:J-LSC} \end{equation} \item \emph{(Strict convergence).} If in addition we assume that \begin{equation} \lim_{n\to +\infty}\J_{1/2}((a,b),z_{n})=\J_{1/2}((a,b),z_{\infty}), \label{hp:zn-strict} \end{equation} then actually $z_{n}\auto z_{\infty}$ in $BV((a,b))$. \end{enumerate} \end{lemma} \begin{proof} For every $n\geq 1$, let us write $z_{n}(x)$ in the form \begin{equation} z_{n}(x)=c_{n}+\sum_{i=1}^{\infty}J_{n}(i)\mathbbm{1}_{(s_{n}(i),+\infty)}(x) \qquad \forall x\in(a,b), \label{eqn:zn} \end{equation} where $c_{n}:=z_{n}(a)$ (the boundary value is intended in the usual sense), $\{s_{n}(i)\}_{i\geq 1}\subseteq(a,b)$ is a sequence of distinct points, and $\{J_{n}(i)\}_{i\geq 1}$ is a sequence of real numbers such that $|J_{n}(i+1)|\leq|J_{n}(i)|$ for every $i\geq 1$. We observe that, even when the function $z_{n}$ has only a finite number of jump points, we can always write it in the form (\ref{eqn:zn}) by introducing infinitely many ``fake jumps'' with ``jump height'' equal to~0. From assumptions (i) and (ii) we derive two types of estimates. \begin{itemize} \item (Uniform bounds).
From assumption (i) and the subadditivity of the square root we deduce that \begin{equation} \sum_{i=1}^{\infty}|J_{n}(i)|\leq \left(\sum_{i=1}^{\infty}|J_{n}(i)|^{1/2}\right)^{2}\leq M^{2} \qquad \forall n\geq 1. \label{est:TV} \end{equation} Combined with assumption (ii), this implies that there exists a constant $M_{1}$ such that \begin{equation} |c_{n}|\leq M_{1} \qquad \forall n\geq 1. \label{est:cn} \end{equation} Finally, from (\ref{est:TV}) and (\ref{est:cn}) we deduce that there exists a constant $M_{2}$ such that \begin{equation} \|z_{n}\|_{L^{\infty}((a,b))}\leq M_{2} \qquad \forall n\geq 1. \label{est:zn-Linfty} \end{equation} \item (Uniform smallness of the tails). We claim that for every $\ep>0$ there exists a positive integer $i_{\ep}$ such that \begin{equation} \sum_{i=i_{\ep}}^{\infty}|J_{n}(i)|\leq M\sqrt{\ep} \qquad \forall n\geq 1. \label{est:tails} \end{equation} Indeed, if we define $i_{\ep}$ as the smallest integer greater than $M/\sqrt{\ep}$, then from (\ref{hp:bound-J1/2}) it turns out that $|J_{n}(i)|\leq\ep$ for at least one index $i\leq i_{\ep}$. At this point, from the monotonicity of $|J_{n}(i)|$ we conclude that \begin{equation*} |J_{n}(i)|\leq\ep \qquad \forall n\geq 1, \quad \forall i\geq i_{\ep}, \end{equation*} and therefore \begin{equation*} \sum_{i=i_{\ep}}^{\infty}|J_{n}(i)|= \sum_{i=i_{\ep}}^{\infty}|J_{n}(i)|^{1/2}\cdot|J_{n}(i)|^{1/2}\leq \sqrt{\ep}\cdot\sum_{i=i_{\ep}}^{\infty}|J_{n}(i)|^{1/2}\leq \sqrt{\ep}\cdot M, \end{equation*} which proves (\ref{est:tails}). \end{itemize} From the uniform bounds we obtain that, up to subsequences (not relabeled), the following limits exist as $n\to +\infty$: \begin{equation*} c_{n}\to c_{\infty}, \qquad\quad J_{n}(i)\to J_{\infty}(i), \qquad\quad s_{n}(i)\to s_{\infty}(i)\in[a,b].
\end{equation*} From these limits we deduce that \begin{equation} \liminf_{n\to +\infty}\J_{1/2}((a,b),z_{n})= \liminf_{n\to +\infty}\sum_{i=1}^{\infty}|J_{n}(i)|^{1/2}\geq \sum_{i=1}^{\infty}|J_{\infty}(i)|^{1/2}, \label{ineq:J-zinfty} \end{equation} and, due to the uniform smallness of the tails, \begin{equation} \lim_{n\to +\infty}|Dz_{n}|((a,b))= \lim_{n\to +\infty}\sum_{i=1}^{\infty}|J_{n}(i)|= \sum_{i=1}^{\infty}|J_{\infty}(i)|. \label{ineq:TV-zinfty} \end{equation} At this point we can introduce the function \begin{equation} \widehat{z}_{\infty}(x):=c_{\infty}+ \sum_{i=1}^{\infty}J_{\infty}(i)\mathbbm{1}_{(s_{\infty}(i),+\infty)}(x) \qquad \forall x\in(a,b). \label{defn:z-infty} \end{equation} Exploiting again (\ref{est:tails}) we can show that $z_{n}(x)\to \widehat{z}_{\infty}(x)$ for every $x\in(a,b)$ that does not appear in the sequence $\{s_{\infty}(i)\}$. This almost everywhere pointwise convergence, together with the uniform bound (\ref{est:zn-Linfty}), implies that $z_{n}\to\widehat{z}_{\infty}$ in $L^{p}((a,b))$ for every $p\in[1,+\infty)$, and hence in particular that $z_{\infty}(x)=\widehat{z}_{\infty}(x)$. Now we exploit the representation (\ref{defn:z-infty}) in order to compute the total variation of $z_{\infty}$ and $\J_{1/2}((a,b),z_{\infty})$. This is not immediate, because in the representation (\ref{defn:z-infty}) the points $s_{\infty}(i)$ are not necessarily distinct, and some of them might even coincide with the endpoints of the interval $(a,b)$, in which case they do not contribute to the total variation or to $\J_{1/2}$. In any case, the function defined by (\ref{defn:z-infty}) belongs to $\PJ((a,b))$, and its jump set is contained in the image of the sequence $\{s_{\infty}(i)\}$ intersected with the open interval $(a,b)$. 
Moreover, for every $s$ in this set, the jump height of $z_{\infty}$ in $s$ is given by \begin{equation*} J_{z_{\infty}}(s)=\sum_{\{i\geq 1:s_{\infty}(i)=s\}}J_{\infty}(i), \end{equation*} where of course the sum (or series) might also vanish. In particular, for every jump point $s$ of $z_{\infty}$ we obtain that \begin{equation} |J_{z_{\infty}}(s)|\leq\sum_{\{i\geq 1:s_{\infty}(i)=s\}}|J_{\infty}(i)|, \label{est:J} \end{equation} with equality if and only if all terms in the sum have the same sign. Analogously, we obtain that \begin{equation*} |J_{z_{\infty}}(s)|^{1/2}\leq\sum_{\{i\geq 1:s_{\infty}(i)=s\}}|J_{\infty}(i)|^{1/2}, \end{equation*} with equality if and only if at most one term in the sum is different from~0 (here we exploit that the square root is strictly subadditive). From (\ref{est:J}) it follows that \begin{equation} |Dz_{\infty}|((a,b))= \sum_{s\in S_{z_{\infty}}}|J_{z_{\infty}}(s)|\leq \sum_{i=1}^{\infty}|J_{\infty}(i)|, \label{eqn:TV-zinfty} \end{equation} with equality if and only if $s_{\infty}(i)\in(a,b)$ for every $i\geq 1$ such that $J_{\infty}(i)\neq 0$, and $J_{\infty}(i)\cdot J_{\infty}(j)\geq 0$ for every pair $(i,j)$ of distinct positive integers such that $s_{\infty}(i)=s_{\infty}(j)\in(a,b)$. Analogously, it turns out that \begin{equation} \J_{1/2}((a,b),z_{\infty})= \sum_{s\in S_{z_{\infty}}}|J_{z_{\infty}}(s)|^{1/2}\leq \sum_{i=1}^{\infty}|J_{\infty}(i)|^{1/2}, \label{eqn:J-zinfty} \end{equation} with equality if and only if $s_{\infty}(i)\in(a,b)$ for every $i\geq 1$ such that $J_{\infty}(i)\neq 0$, and $J_{\infty}(i)\cdot J_{\infty}(j)= 0$ for every pair $(i,j)$ of distinct positive integers such that $s_{\infty}(i)=s_{\infty}(j)\in(a,b)$. In particular, in all cases where equality occurs in (\ref{eqn:J-zinfty}), then equality occurs also in (\ref{eqn:TV-zinfty}). At this point we are ready to complete the proof. 
Indeed, (\ref{th:J-LSC}) follows from (\ref{ineq:J-zinfty}) and (\ref{eqn:J-zinfty}), provided that we start with the subsequence of $\{z_{n}\}$ that realizes the liminf in (\ref{th:J-LSC}). As for the strict convergence, under assumption (\ref{hp:zn-strict}) we have necessarily equality both in (\ref{ineq:J-zinfty}) and in (\ref{eqn:J-zinfty}), and hence we have equality also in (\ref{eqn:TV-zinfty}). At this point, from (\ref{ineq:TV-zinfty}) and (\ref{eqn:TV-zinfty}) we conclude that $|Dz_{n}|((a,b))\to|Dz_{\infty}|((a,b))$ (to be over-pedantic, what we actually proved is that every subsequence of $\{z_{n}\}$ has a further subsequence with this property), which is what we need in order to conclude that the convergence is strict. \end{proof} \begin{remark} \begin{em} The only properties of the square root that are relevant for Lemma~\ref{lemma:J-strict} are that it is a nonnegative function that is strictly subadditive and satisfies $\sqrt{\sigma}/\sigma\to +\infty$ as $\sigma\to 0^{+}$. \end{em} \end{remark} \subsection{Proof of Theorem~\ref{thm:ABG}} \paragraph{\textmd{\textit{Statement~(1)}}} Let us start with the liminf inequality. We need to prove that \begin{equation} \liminf_{n\to +\infty}\RPM_{\ep_{n}}((a,b),u_{n})\geq \alpha_{0}\J_{1/2}((a,b),u) \label{Gconv:liminf} \end{equation} for every sequence $\{u_{n}\}\subseteq H^{2}((a,b))$ such that $u_{n}\to u$ in $L^{2}((a,b))$, and every sequence $\{\ep_{n}\}\subseteq(0,1)$ such that $\ep_{n}\to 0^{+}$. Up to subsequences (not relabeled), we can assume that the left-hand side of (\ref{Gconv:liminf}) is finite and that the liminf is actually a limit; in particular, the sequence $\{\RPM_{\ep_{n}}((a,b),u_{n})\}$ is bounded. When this is the case, from Lemma~\ref{lemma:split} we obtain a sequence $\{z_{n}\}\subseteq\PJ((a,b))$ such that $z_{n}\to u$ in $L^{2}((a,b))$ and \begin{equation} \RPM_{\ep_{n}}((a,b),u_{n})\geq M_{n}\cdot\J_{1/2}((a,b),z_{n}) \qquad \forall n\geq 1.
\label{th:Jzn} \end{equation} Now we observe that $M_{n}\to\alpha_{0}$ as $n\to +\infty$, and from the lower semicontinuity of $\J_{1/2}$ (with respect to any $L^{p}$ convergence) we conclude that \begin{equation*} \liminf_{n\to +\infty}\RPM_{\ep_{n}}((a,b),u_{n})\geq \liminf_{n\to +\infty}M_{n}\cdot\J_{1/2}((a,b),z_{n})\geq \alpha_{0}\J_{1/2}((a,b),u), \end{equation*} which proves (\ref{Gconv:liminf}). For the limsup inequality, we refer to the proof of~\cite[Theorem~4.4]{2008-TAMS-BF}. The idea is rather classical. First of all, we reduce ourselves to the case where $u$ has only a finite number of jump points, because this class is dense in $L^{2}((a,b))$ with respect to the energy $\J_{1/2}$. Given any function $u\in\PJ((a,b))$ with a finite number of jumps, we consider the function $u_{\ep}(x)$ that coincides with $u(x)$ outside some small intervals that contain a single jump point, and in each of these small intervals coincides with the cubic polynomial that interpolates the values at the boundary of the interval. From Lemma~\ref{lemma:ABG} we obtain the exact value of the integral of $u_{\ep}''(x)^{2}$, and an estimate from above for the integral of $\log(1+u_{\ep}'(x)^{2})$. If we optimize the length of each small interval in terms of $\ep$ and of the jump height, the resulting family is the required recovery family. We stress that, in the case where $u$ has a finite number of jumps, there exists a recovery family that coincides with $u$ in a fixed neighborhood of the boundary points $x=a$ and $x=b$. \paragraph{\textmd{\textit{Statement~(2)}}} Let us apply again Lemma~\ref{lemma:split}. We obtain a sequence $\{z_{n}\}\subseteq\PJ((a,b))$ satisfying (\ref{th:uz-Lp}) and (\ref{th:Jzn}). In particular, since $M_{n}$ is bounded from below by a positive constant, from (\ref{hp:Gconv-coercive}) we deduce that this sequence satisfies \begin{equation*} \sup_{n\in\n}\left\{\J_{1/2}((a,b),z_{n})+\int_{a}^{b}z_{n}(x)^{2}\,dx\right\}<+\infty. 
\end{equation*} From the classical compactness result for the functional $\J_{1/2}$ (whose proof in dimension one is more or less contained in the proof of Lemma~\ref{lemma:J-strict} above), it follows that $\{z_{n}\}$ is relatively compact in $L^{p}((a,b))$ for every $p\in[1,+\infty)$. Due to (\ref{th:uz-Lp}), the same is true for $\{u_{n}\}$. \paragraph{\textmd{\textit{Statement~(3)}}} Let us apply again Lemma~\ref{lemma:split}. The resulting sequence $\{z_{n}\}$ converges to $u$. On the other hand, from the lower semicontinuity of $\J_{1/2}$, estimate (\ref{th:uz-energy}), and assumption (\ref{hp:recovery}), we deduce that \begin{eqnarray*} \J_{1/2}((a,b),u) & \leq & \liminf_{n\to +\infty}\J_{1/2}((a,b),z_{n}) \\[0.5ex] & \leq & \limsup_{n\to +\infty}\J_{1/2}((a,b),z_{n}) \\ & \leq & \limsup_{n\to +\infty}\frac{1}{M_{n}}\cdot\RPM_{\ep_{n}}((a,b),u_{n}) \\ & = & \frac{1}{\alpha_{0}}\cdot\alpha_{0}\J_{1/2}((a,b),u). \end{eqnarray*} This implies that $\J_{1/2}((a,b),z_{n})\to\J_{1/2}((a,b),u)$, and therefore from Lemma~\ref{lemma:J-strict} we conclude that $z_{n}\auto u$ in $BV((a,b))$. In turn, this implies that also $u_{n}\auto u$ in $BV((a,b))$ because of (\ref{th:uz-TV}). \paragraph{\textmd{\textit{Statement~(4)}}} As in the proof of the limsup inequality for the Gamma-convergence result, we can assume that $u$ is a pure jump function with a finite number of jump points. When this is the case, we already know that there exists a recovery sequence $\widehat{u}_{n}\to u$ that coincides with $u$ in a neighborhood of the boundary, namely there exists $\eta>0$ such that for every $n\geq 1$ it turns out that $\widehat{u}_{n}(x)=u(x)=u(a)$ for every $x\in(a,a+\eta)$, and similarly $\widehat{u}_{n}(x)=u(x)=u(b)$ for every $x\in(b-\eta,b)$. Now the idea is to modify $\widehat{u}_{n}$ in the two lateral intervals $(a,a+\eta)$ and $(b-\eta,b)$ in order to fulfill the given boundary conditions (\ref{hp:recovery-BC}). 
To this end, we set \begin{equation*} u_{n}(x):=\begin{cases} u(a)+w_{1,n}(x)\quad & \text{if }x\in(a,a+\eta], \\ \widehat{u}_{n}(x) & \text{if }x\in[a+\eta,b-\eta], \\ u(b)+w_{2,n}(x) & \text{if }x\in[b-\eta,b), \end{cases} \end{equation*} where $w_{1,n}$ is the function given by Lemma~\ref{lemma:bound-D-H} applied in the interval $(a,a+\eta)$ with boundary data \begin{equation*} \left(w_{1,n}(a),w_{1,n}'(a),w_{1,n}(a+\eta),w_{1,n}'(a+\eta)\right)= (A_{0,n}-u(a),A_{1,n},0,0), \end{equation*} and $w_{2,n}$ is the function given by Lemma~\ref{lemma:bound-D-H} applied in the interval $(b-\eta,b)$ with boundary data \begin{equation*} \left(w_{2,n}(b-\eta),w_{2,n}'(b-\eta),w_{2,n}(b),w_{2,n}'(b)\right)= (0,0,B_{0,n}-u(b),B_{1,n}). \end{equation*} We observe that $u_{n}\in H^{2}((a,b))$ and \begin{eqnarray*} \RPM_{\ep_{n}}((a,b),u_{n}) & = & \RPM_{\ep_{n}}((a,a+\eta),w_{1,n}) \\ & & +\RPM_{\ep_{n}}((a+\eta,b-\eta),\widehat{u}_{n}) \\ & & +\RPM_{\ep_{n}}((b-\eta,b),w_{2,n}). \end{eqnarray*} The second term coincides with $\RPM_{\ep_{n}}((a,b),\widehat{u}_{n})$, and therefore it converges to $\alpha_{0}\J_{1/2}((a,b),u)$ when $n\to +\infty$. Therefore, it is enough to show that the other two terms vanish in the limit. To this end, we observe that in the interval $(a,a+\eta)$ the assumptions of Lemma~\ref{lemma:bound-D-H} are satisfied with \begin{equation*} H=H_{n}:=|A_{0,n}-u(a)| \qquad\text{and}\qquad D=D_{n}:=|A_{1,n}|. \end{equation*} Since $H_{n}$ and $D_{n}$ tend to~0, we conclude that \begin{equation*} \lim_{n\to+\infty}\RPM_{\ep_{n}}((a,a+\eta),w_{1,n})\leq \lim_{n\to+\infty}80\left(\sqrt{H_{n}}+\ep_{n}^{2}D_{n}\right)= 0. \end{equation*} In the same way we obtain that \begin{equation*} \lim_{n\to+\infty}\RPM_{\ep_{n}}((b-\eta,b),w_{2,n})= 0, \end{equation*} which completes the proof.
\qed \subsection{Proof of Proposition~\ref{prop:mu}} \paragraph{\textmd{\textit{Statement~(\ref{prop:existence})}}} In the case of $\mu_{\ep}$, $\mu_{\ep}^{*}$ and $\mu_{0}$, existence is a standard application of the direct method in the calculus of variations. The case of $\mu_{0}^{*}$ is less trivial because boundary conditions in $\PJ((0,L))$ do not pass to the limit, for example, with respect to $L^{2}$ convergence. This issue, however, can be fixed in a rather standard way. To this end, we relax the boundary conditions by allowing ``jumps at the boundary'', namely we minimize \begin{equation*} \alpha\J_{1/2}((0,L),v)+\beta\int_{0}^{L}(v(x)-Mx)^{2}\,dx+ \alpha\left(|v(0)|^{1/2}+|v(L)-ML|^{1/2}\right) \end{equation*} over $\PJ((0,L))$, without boundary conditions. In this case the direct method works, and we claim that any minimizer $v(x)$ satisfies $v(0)=0$ and $v(L)=ML$. Indeed, let $v(x)$ be any minimizer, and let us consider the value at $x=0$ (the argument at $x=L$ is symmetric). Let us assume that $M>0$ (the case $M=0$ is trivial, and the case $M<0$ is symmetric). Arguing as at the beginning of section~\ref{subsec:loc-min-proof} we can show that the set of jump points of $v$ is finite, and comparing with a competitor $v_{\tau}(x)$ which is equal to~0 in $(0,\tau)$, and equal to $v(x)$ elsewhere, we can conclude that $v(0)=0$. \paragraph{\textmd{\textit{Statement~(\ref{prop:M})}}} We prove the result in the case of $\mu_{\ep}$, but the argument is analogous in the other three cases. The symmetry follows from the simple remark that, if $v(x)$ is a minimizer for some $M$, then $-v(x)$ is a minimizer for $-M$. As for the continuity, it follows from the fact that, if $M_{n}\to M_{\infty}$, then the fidelity term in $\RPMF_{\ep}(\beta,M_{n}x,(0,L),v)$ converges to the fidelity term in $\RPMF_{\ep}(\beta,M_{\infty}x,(0,L),v)$ uniformly on bounded subsets of $L^{2}((0,L))$. As for monotonicity, let us consider any pair $0\leq M_{1}< M_{2}$.
Let us choose any minimizer $v_{2}\in H^{2}((0,L))$ in the definition of $\mu_{\ep}(\beta,L,M_{2})$, and let us consider the function $v_{1}(x):=(M_{1}/M_{2})v_{2}(x)$. Elementary computations show that \begin{equation*} \RPM_{\ep}((0,L),v_{1})\leq\RPM_{\ep}((0,L),v_{2}), \end{equation*} and \begin{equation*} \int_{0}^{L}(v_{1}(x)-M_{1}\,x)^{2}\,dx= \frac{M_{1}^{2}}{M_{2}^{2}}\int_{0}^{L}(v_{2}(x)-M_{2}\,x)^{2}\,dx\leq \int_{0}^{L}(v_{2}(x)-M_{2}\,x)^{2}\,dx, \end{equation*} and therefore \begin{eqnarray*} \mu_{\ep}(\beta,L,M_{1}) & \leq & \RPMF_{\ep}(\beta,M_{1}x,(0,L),v_{1}) \\ & \leq & \RPMF_{\ep}(\beta,M_{2}x,(0,L),v_{2}) \\ & = & \mu_{\ep}(\beta,L,M_{2}). \end{eqnarray*} \paragraph{\textmd{\textit{Statement~(\ref{prop:L})}}} Let us consider any pair $0<L_{1}<L_{2}$, and let us examine separately the behavior of the four functions. In the case of $\mu_{\ep}$, let $v_{2}(x)$ be any minimizer for $\mu_{\ep}(\beta,L_{2},M)$. Then the restriction of $v_{2}(x)$ to $(0,L_{1})$, which we call $v_{1}(x)$, is a competitor in the definition of $\mu_{\ep}(\beta,L_{1},M)$, and therefore as before we conclude that \begin{eqnarray*} \mu_{\ep}(\beta,L_{1},M) & \leq & \RPMF_{\ep}(\beta,Mx,(0,L_{1}),v_{1}) \\ & \leq & \RPMF_{\ep}(\beta,Mx,(0,L_{2}),v_{2}) \\ & = & \mu_{\ep}(\beta,L_{2},M). \end{eqnarray*} The same argument works in the case of $\mu_{0}$. In the case of $\mu_{0}^{*}$ we have to take the boundary conditions into account, and therefore we define \begin{equation} v_{1}(x)=\frac{L_{1}}{L_{2}}v_{2}\left(\frac{L_{2}x}{L_{1}}\right) \qquad \forall x\in(0,L_{1}), \label{defn:v2->v1} \end{equation} and we observe that $\J_{1/2}((0,L_{1}),v_{1})\leq\J_{1/2}((0,L_{2}),v_{2})$ and \begin{equation*} \int_{0}^{L_{1}}(v_{1}(x)-Mx)^{2}\,dx= \left(\frac{L_{1}}{L_{2}}\right)^{3}\int_{0}^{L_{2}}(v_{2}(x)-Mx)^{2}\,dx, \end{equation*} which again implies the conclusion.
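The inequality $\J_{1/2}((0,L_{1}),v_{1})\leq\J_{1/2}((0,L_{2}),v_{2})$ claimed above can be checked directly from the scaling (\ref{defn:v2->v1}): the function $v_{1}$ jumps exactly at the points $(L_{1}/L_{2})s$, where $s$ ranges over the jump points of $v_{2}$, and the corresponding jump heights are multiplied by $L_{1}/L_{2}$, so that
\begin{equation*}
\J_{1/2}((0,L_{1}),v_{1})=
\sum_{s\in S_{v_{2}}}\left|\frac{L_{1}}{L_{2}}\,J_{v_{2}}(s)\right|^{1/2}=
\left(\frac{L_{1}}{L_{2}}\right)^{1/2}\J_{1/2}((0,L_{2}),v_{2})\leq
\J_{1/2}((0,L_{2}),v_{2}),
\end{equation*}
where the last inequality follows from $L_{1}<L_{2}$.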
Finally, the monotonicity of $\mu_{\ep}^{*}$ with respect to $L$ is in general false (the minimum diverges when $L\to 0^{+}$ due to the term with second order derivatives). In this case the natural definition (\ref{defn:v2->v1}), which preserves the boundary conditions (both on the function and on the derivative), reduces the fidelity term and the term with the logarithm, but increases the term with second order derivatives. What we do in this case is the opposite. We consider a minimizer $v_{1}(x)$ in the definition of $\mu_{\ep}^{*}(\beta,L_{1},M)$, and we define a function $v_{2}(x)$ in $(0,L_{2})$ in such a way that (\ref{defn:v2->v1}) holds true. With a simple change of variables we see that \begin{equation*} \int_{0}^{L_{2}}v_{2}''(x)^{2}\,dx= \frac{L_{1}}{L_{2}}\int_{0}^{L_{1}}v_{1}''(x)^{2}\,dx, \end{equation*} \begin{equation*} \int_{0}^{L_{2}}\log\left(1+v_{2}'(x)^{2}\right)\,dx= \frac{L_{2}}{L_{1}}\int_{0}^{L_{1}}\log\left(1+v_{1}'(x)^{2}\right)\,dx, \end{equation*} and \begin{equation*} \int_{0}^{L_{2}}(v_{2}(x)-Mx)^{2}\,dx= \left(\frac{L_{2}}{L_{1}}\right)^{3}\int_{0}^{L_{1}}(v_{1}(x)-Mx)^{2}\,dx, \end{equation*} so that in particular \begin{equation*} \RPMF_{\ep}(\beta,Mx,(0,L_{2}),v_{2})\leq \left(\frac{L_{2}}{L_{1}}\right)^{3}\RPMF_{\ep}(\beta,Mx,(0,L_{1}),v_{1}). \end{equation*} Since $v_{2}(x)$ is a competitor in the definition of $\mu_{\ep}^{*}(\beta,L_{2},M)$, this is enough to establish (\ref{th:monot-muep*}). \paragraph{\textmd{\textit{Statement~(\ref{prop:pointwise})}}} Pointwise convergence, namely convergence of minima, is a rather standard consequence of Gamma-convergence and equi-coerciveness.
We point out that in the case of (\ref{defn:muep*}) and (\ref{defn:mu0*}) the functionals have to take the boundary conditions into account (the usual way is to set the functionals equal to $+\infty$ when the argument does not satisfy the boundary conditions), and in this case the limsup inequality in the Gamma-convergence result is slightly more delicate because it requires the control of boundary conditions for recovery sequences. \paragraph{\textmd{\textit{Statement~(\ref{prop:uniform})}}} The pointwise convergence (\ref{th:lim-mu}) is actually uniform with respect to $M$ (on bounded sets) because of the continuity and monotonicity with respect to $M$ of both $\mu_{\ep}$ and $\mu_{0}$. An analogous argument applies in the case of (\ref{th:lim-mu*}). \qed \subsubsection*{\centering Acknowledgments} We would like to thank Iacopo Ripoli for working on a preliminary version of this project in his master thesis~\cite{ripoli:tesi}. The first author is a member of the \selectlanguage{italian} ``Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni'' (GNAMPA) of the ``Istituto Nazionale di Alta Matematica'' (INdAM). \selectlanguage{english} \begin{thebibliography}{10} \providecommand{\url}[1]{\texttt{#1}} \providecommand{\urlprefix}{URL } \providecommand{\selectlanguage}[1]{\relax} \providecommand{\eprint}[2][]{\url{#2}} \bibitem{2001-CPAM-AlbertiMuller} \textsc{G.~Alberti}, \textsc{S.~M\"{u}ller}. \newblock A new approach to variational problems with multiple scales. \newblock \emph{Comm. Pure Appl. Math.} \textbf{54} (2001), no.~7, 761--825. \bibitem{ABG} \textsc{R.~Alicandro}, \textsc{A.~Braides}, \textsc{M.~S. Gelli}. \newblock Free-discontinuity problems generated by singular perturbation. \newblock \emph{Proc. Roy. Soc. Edinburgh Sect. A} \textbf{128} (1998), no.~6, 1115--1129. \bibitem{2007-Amann} \textsc{H.~Amann}. \newblock Time-delayed {P}erona-{M}alik type problems. \newblock \emph{Acta Math. Univ. Comenian. 
(N.S.)} \textbf{76} (2007), no.~1, 15--38. \bibitem{1989-BUMI-Ambrosio} \textsc{L.~Ambrosio}. \newblock A compactness theorem for a new class of functions of bounded variation. \newblock \emph{Boll. Un. Mat. Ital. B (7)} \textbf{3} (1989), no.~4, 857--881. \bibitem{AFP} \textsc{L.~Ambrosio}, \textsc{N.~Fusco}, \textsc{D.~Pallara}. \newblock \emph{{Functions of bounded variation and free discontinuity problems}}. \newblock Oxford Mathematical Monographs, 2000. \bibitem{2014-M3AS-BelChaGol} \textsc{G.~Bellettini}, \textsc{A.~Chambolle}, \textsc{M.~Goldman}. \newblock The {$\Gamma$}-limit for singularly perturbed functionals of {P}erona-{M}alik type in arbitrary dimension. \newblock \emph{Math. Models Methods Appl. Sci.} \textbf{24} (2014), no.~6, 1091--1113. \bibitem{2008-TAMS-BF} \textsc{G.~Bellettini}, \textsc{G.~Fusco}. \newblock The {$\Gamma$}-limit and the related gradient flow for singular perturbation functionals of {P}erona-{M}alik type. \newblock \emph{Trans. Amer. Math. Soc.} \textbf{360} (2008), no.~9, 4929--4987. \bibitem{2006-DCDS-BelFusGug} \textsc{G.~Bellettini}, \textsc{G.~Fusco}, \textsc{N.~Guglielmi}. \newblock A concept of solution and numerical experiments for forward-backward diffusion equations. \newblock \emph{Discrete Contin. Dyn. Syst.} \textbf{16} (2006), no.~4, 783--842. \bibitem{2008-JDE-BNPT} \textsc{G.~Bellettini}, \textsc{M.~Novaga}, \textsc{M.~Paolini}, \textsc{C.~Tornese}. \newblock Convergence of discrete schemes for the {P}erona-{M}alik equation. \newblock \emph{J. Differential Equations} \textbf{245} (2008), no.~4, 892--924. \bibitem{2019-SIAM-BerGiaTes} \textsc{M.~Bertsch}, \textsc{L.~Giacomelli}, \textsc{A.~Tesei}. \newblock Measure-valued solutions to a nonlinear fourth-order regularization of forward-backward parabolic equations. \newblock \emph{SIAM J. Math. Anal.} \textbf{51} (2019), no.~1, 374--402. \bibitem{2020-JDE-BerSmaTes} \textsc{M.~Bertsch}, \textsc{F.~Smarrazzo}, \textsc{A.~Tesei}. 
\newblock On a class of forward-backward parabolic equations: formation of singularities. \newblock \emph{J. Differential Equations} \textbf{269} (2020), no.~9, 6656--6698. \bibitem{2018-ActApplMath-BraVal} \textsc{A.~Braides}, \textsc{V.~Vallocchia}. \newblock Static, quasistatic and dynamic analysis for scaled {P}erona-{M}alik functionals. \newblock \emph{Acta Appl. Math.} \textbf{156} (2018), 79--107. \bibitem{1992-SIAM-Lions} \textsc{F.~Catt\'{e}}, \textsc{P.-L. Lions}, \textsc{J.-M. Morel}, \textsc{T.~Coll}. \newblock Image selective smoothing and edge detection by nonlinear diffusion. \newblock \emph{SIAM J. Numer. Anal.} \textbf{29} (1992), no.~1, 182--193. \bibitem{2011-SIAM-SlowSD} \textsc{M.~Colombo}, \textsc{M.~Gobbino}. \newblock Slow time behavior of the semidiscrete {P}erona-{M}alik scheme in one dimension. \newblock \emph{SIAM J. Math. Anal.} \textbf{43} (2011), no.~6, 2564--2600. \bibitem{1996-Duke-DeGiorgi} \textsc{E.~De~Giorgi}. \newblock Conjectures concerning some evolution problems. \newblock volume~81, pages 255--268. 1996. \newblock \urlprefix\url{https://doi.org/10.1215/S0012-7094-96-08114-4}. \newblock A celebration of John F. Nash, Jr. \bibitem{2001-CPAM-Esedoglu} \textsc{S.~Esedo\={g}lu}. \newblock An analysis of the {P}erona-{M}alik scheme. \newblock \emph{Comm. Pure Appl. Math.} \textbf{54} (2001), no.~12, 1442--1487. \bibitem{2006-SIAM-Esedoglu} \textsc{S.~Esedo\={g}lu}. \newblock Stability properties of the {P}erona-{M}alik scheme. \newblock \emph{SIAM J. Numer. Anal.} \textbf{44} (2006), no.~3, 1297--1313. \bibitem{GG:grad-est} \textsc{M.~Ghisi}, \textsc{M.~Gobbino}. \newblock Gradient estimates for the {P}erona-{M}alik equation. \newblock \emph{Math. Ann.} \textbf{337} (2007), no.~3, 557--590. \bibitem{2009-JDE-Guidotti} \textsc{P.~Guidotti}. \newblock A new nonlocal nonlinear diffusion of image processing. \newblock \emph{J. Differential Equations} \textbf{246} (2009), no.~12, 4731--4742. 
\bibitem{2012-JDE-Guidotti} \textsc{P.~Guidotti}. \newblock A backward-forward regularization of the {P}erona-{M}alik equation. \newblock \emph{J. Differential Equations} \textbf{252} (2012), no.~4, 3226--3244. \bibitem{2009-JMImVis-GuiLam} \textsc{P.~Guidotti}, \textsc{J.~V. Lambers}. \newblock Two new nonlinear nonlocal diffusions for noise reduction. \newblock \emph{J. Math. Imaging Vision} \textbf{33} (2009), no.~1, 25--37. \bibitem{Kichenassami} \textsc{S.~Kichenassamy}. \newblock The {P}erona-{M}alik paradox. \newblock \emph{SIAM J. Appl. Math.} \textbf{57} (1997), no.~5, 1328--1342. \bibitem{2018-Poincare-KimYan} \textsc{S.~Kim}, \textsc{B.~Yan}. \newblock On {L}ipschitz solutions for some forward-backward parabolic equations. \newblock \emph{Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire} \textbf{35} (2018), no.~1, 65--100. \bibitem{2003-M3AS-MorNeg} \textsc{M.~Morini}, \textsc{M.~Negri}. \newblock Mumford-{S}hah functional as {$\Gamma$}-limit of discrete {P}erona-{M}alik energies. \newblock \emph{Math. Models Methods Appl. Sci.} \textbf{13} (2003), no.~6, 785--805. \bibitem{1993-CalcVar-Muller} \textsc{S.~M\"{u}ller}. \newblock Singular perturbations as a selection criterion for periodic minimizing sequences. \newblock \emph{Calc. Var. Partial Differential Equations} \textbf{1} (1993), no.~2, 169--204. \bibitem{PeronaMalik} \textsc{P.~Perona}, \textsc{J.~Malik}. \newblock Scale-space and edge detection using anisotropic diffusion. \newblock Technical report, EECS Department, University of California, Berkeley, Dec 1988. \bibitem{ripoli:tesi} \textsc{I.~Ripoli}. \newblock \emph{Investigation on a Perona-Malik problem}. \newblock Master thesis, University of Pisa, 2020. \end{thebibliography} \label{NumeroPagine} \end{document}
2205.02391v1
http://arxiv.org/abs/2205.02391v1
Orbital integrals and normalizations of measures
\documentclass{amsart} \usepackage{bm} \usepackage{graphicx} \usepackage{enumerate} \usepackage{latexsym} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{hyperref} \usepackage{color} \usepackage[cmtip,all,matrix,arrow,tips,curve]{xy} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \theoremstyle{remark} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newcommand{\GL}{\operatorname{GL}} \newcommand{\SL}{\operatorname{SL}} \newcommand{\Sp}{\operatorname{Sp}} \newcommand{\GSp}{\operatorname{GSp}} \newcommand{\Sl}{\operatorname{SL}} \newcommand{\G}{\mathbb{G}} \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\F}{\mathbb{F}} \newcommand{\ri}{\mathcal O} \newcommand{\oi}{\mathcal O} \newcommand{\Z}{\mathbb Z} \newcommand{\A}{\mathbb A} \newcommand{\C}{\mathbb C} \newcommand{\bT}{\mathbf T} \newcommand{\bV}{\mathbf V} \newcommand{\cT}{\mathcal T} \newcommand{\bG}{\mathbf G} \newcommand{\bZ}{\mathbf Z} \newcommand{\inv}{\operatorname{inv}} \newcommand{\vol}{\operatorname{vol}} \newcommand{\ord}{\operatorname{ord}} \newcommand{\tr}{\operatorname{Tr}} \newcommand{\Tr}{\operatorname{Tr}} \newcommand{\Ad}{\operatorname{Ad}} \newcommand{\Lie}{\operatorname{Lie}} \newcommand{\ad}{\operatorname{ad}} \newcommand{\fg}{\mathfrak {g}} \newcommand{\fz}{\mathfrak {z}} \newcommand{\ft}{\mathfrak {t}} \newcommand{\fc}{\mathfrak {c}} \newcommand{\fsl}{\mathfrak {sl}} \newcommand{\fgl}{\mathfrak {gl}} \newcommand{\fsp}{\mathfrak {sp}} \newcommand{\fso}{\mathfrak {so}} \newcommand{\stab}{\operatorname{Stab}} \newcommand{\spec}{\operatorname{Spec}} \newcommand{\rank}{\operatorname{Rank}} \newcommand{\diag}{\operatorname{diag}} \newcommand{\geom}{{\operatorname{geom}}} \newcommand{\charp}{{\operatorname{char.p.}}} 
\newcommand{\res}{\operatorname{Res}} \newcommand{\rd}{\operatorname{Red}} \newcommand{\rk}{\operatorname{rank}} \newcommand{\der}{{\operatorname{der}}} \newcommand{\Jac}{{\operatorname{Jac}}} \newcommand{\Gal}{\operatorname{Gal}} \newcommand{\fr}{\operatorname{Fr}} \newcommand{\can}{{\operatorname{can}}} \newcommand{\fin}{{\operatorname{fin}}} \newcommand{\spl}{{\operatorname{spl}}} \newcommand{\rss}{{\operatorname{rss}}} \newcommand{\ch}{{\operatorname{char}}} \newcommand{\sep}{{\operatorname{sep}}} \newcommand{\SO}{\mathrm {SO}} \newcommand{\be}{\mathbf {e}} \newcommand{\bff}{\mathbf {f}} \newcommand{\bh}{\mathbf {h}} \title{Orbital integrals and normalizations of measures} \author{Julia Gordon} \address{Department of Mathematics \\ University of British Columbia\\ 121-1984 Mathematics Rd., Vancouver BC V6T 1Z2, Canada\\ [email protected]} \begin{document} \maketitle \begin{abstract} This note provides an informal introduction, with examples, to some technical aspects of the re-normalization of measures on orbital integrals used in the work of Langlands, Frenkel-Langlands-Ng\^o, and Altug on Beyond Endoscopy. In particular, we survey different relevant measures on algebraic tori and explain the connection with the Tamagawa numbers. We work out the example of $\GL_2$ in complete detail. The Appendix by Matthew Koster illustrates, for the Lie algebras $\fsl_2$ and $\fso_3$, the relation between the so-called geometric measure on the orbits and Kirillov's measure on co-adjoint orbits in the linear dual of the Lie algebra. \end{abstract} \section{Introduction} Haar measure on a locally compact topological group is unique up to a constant. In many situations the normalization matters (as we shall see below). In particular, it seems that some measures are more convenient than others in the approach to Beyond Endoscopy by Frenkel-Langlands-Ng\^o, Arthur, and Altug.
The main goal of this note is to track some of the normalizations of the measures that arise in the literature on orbital integrals on reductive groups over non-Archimedean local fields, and to provide an introduction to this aspect of Altug's lectures. In particular, our first goal is to give an exposition of the formula (3.31) in \cite{langlands-frenkel-ngo}, which is the first technical step towards Poisson summation. The second goal is to summarize the relationship between measures on $p$-adic manifolds, point-counts over the residue field, and local $L$-functions. These relations are scattered throughout the literature, and the aim here is to collect the references in one place, and provide some examples. Fundamentally, there are two approaches to choosing a normalization of a Haar measure on the set of $F$-points of an algebraic group over a local field $F$: one can consider a measure associated with a specified differential form; or one can choose a specific compact subgroup and prescribe the volume of that subgroup. As we shall see, both approaches have certain advantages, and converting between these two kinds of normalizations can be surprisingly tricky. All the objects we consider here will be affine algebraic varieties, and we will only consider algebraic differential forms. To avoid confusion, we will try to consistently denote varieties by bold letters, while various sets of rational points will be denoted by letters in the usual font. In this context, we start, in \S \ref{sec:vf}, with a quick survey of A.~Weil's definition of the measure on the set $\bV(F)$ of $F$-points of an affine variety $\bV$ associated with an algebraic volume form on $\bV$. We discuss the relation between this measure and counting rational points of $\bV$ over the residue field, and the results on the various measures on $\bT(F)$ for an algebraic torus $\bT$ that follow.
Next, in \S \ref{sec:oi}, we compare two natural measures on the orbits of the adjoint action of the group of $F$-points $\bG(F)$ on itself. This comparison is the main reason for writing this note. More specifically, we introduce the Steinberg map and derive the relationship between two measures on the orbits: the so-called \emph{geometric} measure, obtained by considering the stable orbits as fibres of the Steinberg map, and the measure obtained as a quotient of two natural measures coming from volume forms. In \S \ref{sec:canon}, we combine the outcome with the results of \S\ref{sec:vf} to obtain the relationship between the geometric measure and the so-called \emph{canonical} measure (which is the one most frequently used to define orbital integrals). We do the $\GL_2$ example in detail. Finally, in \S \ref{sec:global}, we assemble the local results into a global calculation, first, in the context of the analytic class number formula, and then in the case of the Eichler-Selberg Trace Formula. \section*{Acknowledgment and disclaimer.} These notes would not have been possible without many conversations with W. Casselman over the years; in particular, among many other insights, I thank him for pointing out the key point of \S\ref{subsub:general}. I learned most of the material presented in these notes while working on a seemingly unrelated project with Jeff Achter, S. Ali Altug, and Luis Garcia. I am very grateful to these people. One of the things that surprised me during that work was the difficulty of doing calculations with Haar measure and tracking its normalizations in the literature. The goal of these notes is to illustrate practical ways of doing such calculations, and provide references (and emphasize the normalizations of the measures in these sources) to the best of my ability at the moment. I am not aiming at presenting general (or rigorous) proofs here.
My sincere gratitude also goes to the organizers of the Program ``On the Langlands Program: Endoscopy and Beyond'' at NUS for inviting me to give these lectures and for their patience and encouragement during the preparation of these notes; and to the referee for many helpful suggestions. \section{Volume forms and point-counting}\label{sec:vf} Everywhere in this note, $F$ stands for a non-Archimedean local field (of characteristic zero or positive characteristic), with the ring of integers $\ri_F$ and residue field $k_F$ of cardinality $q$. We denote its \emph{uniformizing element} (a \emph{uniformizer} for short) by $\varpi$; by definition, $\varpi$ is a generator of the maximal ideal of $\ri_F$. We denote the normalized valuation on $F$ by $\ord$; thus, $\ord(\varpi)=1$. \subsection{The measure on the affine line} We start with choosing, once and for all, an additive Haar measure on the affine line. For a non-Archimedean local field $F$, we normalize the additive Haar measure on $F$ so that $\vol(\ri_F)=1$. For an affine line over $F$, given a choice of the coordinate $x$, there is an invariant differential form $dx$; we declare that the associated measure $|dx|$ on $\A^1(F)$ also gives volume $1$ to the ring of integers (this choice is analogous to setting up the `unit interval' on the $x$-axis over the reals and declaring that the interval $[0,1]$ has `volume' (\emph{i.e.}, length) $1$ with respect to the measure $|dx|$). Now that this choice is made, any non-vanishing top degree differential form $\omega$ (defined over $F$ or any finite extension of $F$) on a $d$-dimensional $F$-variety $\bV$ determines a measure $|\omega|$ on the set $\bV(F)$ of its $F$-points, where $|\cdot|$ stands for the absolute value on $F$ (respectively, its unique extension to the field of definition of $\omega$). 
Thus, our definition of the measure associated with a volume form is such that for the additive group $\G_a$ both approaches to normalizing the measure give the same natural measure: the measure that gives $\G_a(\ri_F)$ volume $1$. We shall see that such a measure is closely related to counting points over the residue field. \begin{remark} Note that our choice of the normalization of the measure on the affine line differs from that of \cite{langlands-frenkel-ngo}: if the local field under consideration arises as a completion of a global field at a finite place, we normalize the Haar measure on it so that the ring of integers has volume $1$, whereas in \cite{langlands-frenkel-ngo}, the authors fix a choice of a character of the global field and normalize the measure on every completion so that it is self-dual with respect to that character. This choice is important for the Poisson summation formula; however, as the authors point out, this makes all the measures locally non-canonical. Since our exposition is local, we chose to omit this complication. However, this means that given any variety $\bV$ defined over a global field $K$, our calculations of measures on $\bV(K_v)$ at every place $v$ differ from those of \cite{langlands-frenkel-ngo} by $N{\mathfrak d}_v^{\dim(\bV)/2} =|\Delta_{K/\Q}|_v^{\dim(\bV)/2}$, where $\mathfrak d$ is the different, $N:K\to\Q$ is the norm map, and $\Delta_{K/\Q}$ is the discriminant of $K$.\footnote{more precisely, we quote (approximately) from \cite{langlands-frenkel-ngo}: `because of this there are no canonical local calculations. The ideal ${\mathfrak d}_v$ is however equal to $\ri_v$ almost everywhere. So there are canonical local formulas almost everywhere.'} If $K=\Q$, this issue disappears. \end{remark} There is a natural notion of integration on the set of $p$-adic points of a variety with respect to a volume form, see \cite{weil:adeles}. 
A key feature of this theory is that if ${\mathbf X}$ is a \emph{smooth} scheme over $\ri_F$, and $\omega$ is a top degree non-vanishing differential form on ${\mathbf X}$, defined over $\ri_F$, then the volume of ${\mathbf X}(\ri_F)$ with respect to the measure $|\omega|$ is given by the number of points on the closed fibre of ${\mathbf X}$: \begin{equation}\label{eq:weil} \vol_{|\omega|}({\mathbf X}(\ri_F))=\frac{\#{\mathbf X}(k_F)}{q^{\dim({\mathbf X})}}. \end{equation} The relationship between volumes and point-counts for more general sets (e.g. not requiring smoothness) was further explored by Serre \cite[Chapter III]{serre:chebotarev}, Oesterl\'e \cite{Oesterle}, and in the greatest generality,\footnote{a far-reaching generalization of these ideas is the theory of motivic integration started by Batyrev \cite{batyrev:calabi-yau}, Kontsevich, Denef-Loeser \cite{DL.arithm}, and Cluckers-Loeser \cite{cluckers-loeser}.} W. Veys \cite{Veys:measure}. Here we will only need to consider the case of reductive algebraic groups. We start with algebraic tori, where the volumes already carry interesting arithmetic information. \subsection{Tori}\label{sub:tori} Let $\bT$ be an algebraic torus defined over $F$. Let $F^\sep$ be the separable closure of $F$. As discussed above, to define a measure on $T:=\bT(F)$ one can start with a differential form or with a compact subgroup. To define either, we first need a choice of coordinates on $\bT$. One natural choice, albeit not defined over $F$ unless $\bT$ is $F$-split, comes from any basis of the character lattice of $\bT$. Let $\chi_1,\dots ,\chi_r$, where $r$ is the rank of $\bT$, be any set of generators of $X^\ast(\bT)$ over $\Z$ (the characters a priori are defined over $F^\sep$). We note that this choice is equivalent to a choice of an isomorphism $\bT\simeq \G_m^r$ over $F^\sep$. 
Then we can define a volume form (defined over $F^\sep$ but \emph{not over $F$}, unless $\bT$ is split over $F$): \begin{equation}\label{eq:omegaT} \omega_T=\frac{d\chi_1}{\chi_1} \wedge\ldots \wedge \frac{d{\chi_r}}{\chi_r}. \end{equation} The group of $F$-points $\bT(F)$ has a unique maximal compact subgroup in the $p$-adic topology; we denote it by $T^c$, following the notation of \cite{shyr77}. Ono gave the description of this subgroup in terms of characters: $$T^c =\{t\in \bT(F): |\chi(t)|=1 \text{ for } \chi\in X^\ast(\bT)_F\},$$ where $X^\ast(\bT)_F$ is the sublattice of $X^\ast(\bT)$ consisting of the characters defined over $F$. \noindent {\bf Question 1:}\label{q1} What is the volume of $T^c$ with respect to the measure $|\omega_T|$? The question is well-defined because any other $\Z$-basis of $X^\ast(\bT)$ would differ from $\{\chi_i\}$ by a $\Z$-matrix of determinant $\pm 1$, and hence the resulting volume form would give rise to the same measure. The complete answer to this question is quite involved and requires machinery beyond the scope of this note. Here we show some basic examples illustrating the easy part and the difficulty, and provide further references in \S\ref{subsub:general}. \begin{example}\label{ex:split} $\bT$ is $F$-split. For $\G_m$, we have: the invariant form as above is $dx/x$, where $x=\chi_1:\G_m\to \G_m$ is the identity character and the natural coordinate, and $\G_m^c =\ri_F^\times$. The volume calculation gives: $$\begin{aligned} &\vol_{\left|\frac{dx}x\right|}(\ri_F^\times)= \int_{\ri_F^\times }\frac{1}{|x|} |dx| = \int_{\ri_F^\times} |dx| = \vol_{|dx|}(\ri_F) - \vol_{|dx|}(\varpi \ri_F)\\ &= 1-\frac1q =\frac{\#k_F^\times}{q}, \end{aligned}$$ as predicted by (\ref{eq:weil}). Then for an $F$-split torus of rank $r$, $$\vol_{|\omega_T|}(T^c)=\left(1-\frac1q\right)^r.$$ \end{example} The next easiest case is Weil restriction of scalars. 
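First, though, the split case of Example \ref{ex:split} can be checked numerically. The following sketch is our illustration, not part of the original computation: it models $\ri_F = \Z_p$ with $q = p = 5$, and the function name \texttt{vol\_units} is ours. It approximates $\vol(\ri_F^\times)$ by counting unit residues modulo $\varpi^N$, each residue class having volume $q^{-N}$:

```python
from fractions import Fraction

def vol_units(q, N):
    """Volume of O_F^x, computed by counting residues mod varpi^N that are
    units (i.e. nonzero mod varpi); each residue class has volume q^(-N)."""
    count = sum(1 for a in range(q ** N) if a % q != 0)
    return Fraction(count, q ** N)

# For G_m the answer is exact already at any level N: 1 - 1/q.
q = 5
for N in (1, 2, 3):
    assert vol_units(q, N) == 1 - Fraction(1, q)

# For an F-split torus of rank r, the volume of T^c is (1 - 1/q)^r.
r = 3
print((1 - Fraction(1, q)) ** r)  # 64/125
```

The count stabilizes at level $N=1$ precisely because the group scheme $\G_m$ is smooth over $\ri_F$, in accordance with (\ref{eq:weil}).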
\begin{example}\label{ex:quadr} Let $E/F$ be a quadratic extension, and $\bT:=\res_{E/F}\G_m$. Assume that the residue characteristic $p\neq 2$. We write $E=F(\sqrt{\epsilon})$, where $\epsilon$ is any non-square in $F$, and we can choose it to be in $\ri_F$ without loss of generality. By definition, $\bT$ has two characters defined over $E$, call them $z_1$ and $z_2$ and think of them as $E$-coordinates on $\bT(E)$. Then the volume form $\omega_T$ is simply $\omega_T=\frac{dz_1}{z_1}\wedge\frac{dz_2}{z_2}$. Note that it is defined over $E$ but not over $F$. We can try to rewrite it in $F$-coordinates: we write $z_1= x+\sqrt{\epsilon} y, z_2= x-\sqrt{\epsilon} y$, and get: \begin{equation}\label{eq:pullback_res} \begin{aligned} &\omega_T=\frac{d(x+\sqrt{\epsilon}y)}{x+\sqrt{\epsilon}y}\wedge\frac{d(x-\sqrt{\epsilon}y)}{x-\sqrt{\epsilon}y}\\ &=\frac{(dx+\sqrt{\epsilon}dy)\wedge(dx -\sqrt{\epsilon}dy)}{x^2-\epsilon y^2} = \frac{-2 \sqrt{\epsilon}}{N_{E/F}(x+ \sqrt{\epsilon} y)} dx\wedge dy, \end{aligned} \end{equation} where $N_{E/F}$ is the norm map. We note that $\bT(F)= E^\times$ as a set, and the norm map is the generator of the group of $F$-characters of $\bT$. Thus the subgroup $T^c$ of $\bT(F)$ is $$T^c=\{x+\sqrt{\epsilon} y \in E^\times : |x^2-{\epsilon}y^2|_F=1 \}.$$ Its volume with respect to the measure $|\omega_T|$ is: \begin{equation}\label{eq:ex1} \vol_{|\omega_T|} (T^c) = |2\sqrt{\epsilon}| \int_{\{(x,y)\in F^2: |x^2-\epsilon y^2|=1\}} dx dy. \end{equation} Thus we have reduced the computation of the volume of $T^c$ with respect to the volume form $\omega_T$ to the computation of the volume of the subset of $\A^2(F)$, which we denote by $C$, defined by $$C:=\{(x,y)\in F^2: |x^2-\epsilon y^2|=1\}$$ with respect to the usual measure on the plane $|dx\wedge dy|$ (note that $C$ is open in $\A^2(F)$ in the $p$-adic topology). 
The computation of this volume illustrates the way to use point-counting over the residue field, and for this reason we do it in detail. There are two cases: $\epsilon$ is a unit (i.e., $E/F$ is unramified), and $\epsilon$ is not a unit. First, consider the unramified case. Note that $\ord(x^2-\epsilon y^2)=\min(\ord(x^2), \ord(y^2))$ since $\epsilon$ is a non-square unit. Then our set can be decomposed as: $$C=\{(x, y): x\in \ri_F^\times, y\in \ri_F \}\sqcup \{(x,y): x\in \ri_F\setminus \ri_F^\times, y\in \ri_F^\times\},$$ and its volume with respect to the affine plane measure $|dx\wedge dy|$ is, therefore: $$\vol_{|dx\wedge dy|}(C)= \frac{(q-1)q}{q^2}+ \frac{1}q\frac{q-1}q = \frac{q-1}q\left(1+\frac 1q\right).$$ There is an alternative (and more insightful) way to do this calculation: first, as above, observe that necessarily the set $C$ is contained in $\ri_F^2$. Then consider the reduction mod $\varpi$ map $(x, y)\mapsto (\bar x, \bar y)$ from $\ri_F^2$ to $k_F^2$, where $\varpi$ is the uniformizer of the valuation of $F$. Each fibre of this map is a translate of the set $(\varpi)\times(\varpi)\subset \ri_F^2$, thus each fibre has volume $q^{-2}$ with respect to the measure that we have denoted by $|dx\wedge dy|$. Therefore we just need to compute the number of these fibres to complete the calculation. There are two ways to do it: one is to proceed by hand, which in this case is easy enough. Another is to appeal to a generalization of Hensel's Lemma: the affine $\ri_F$-scheme defined by $x^2-\epsilon y^2\neq 0$ is smooth; this implies that the reduction map from the set of its $\ri_F$-points is surjective onto the set of $k_F$-points of its special fibre (see, e.g., \cite[\S 3]{serre:chebotarev} and \cite[III.4.5, Corollaire 3, p.271]{Bourbaki:commalg}). 
The set $C$ can be written as: $$C=\{(x,y)\in \ri_F^2: \overline{(x^2-{\epsilon} y^2)}\neq 0\}.$$ Since the reduction map is surjective in this case, we just need to find the number of points $(\bar x, \bar y)\in k_F^2$ such that $\bar x^2 -\bar \epsilon \bar y^2 \neq 0$. The set of points satisfying this condition is in bijection with $\F_{q^2}^\times$, where $\F_{q^2}$ is the quadratic extension of our residue field $k=\F_q$, and we get the same result as above for the volume of $C$. If the extension is ramified, the calculation changes. In this case $\ord(\epsilon)=1$, hence for $x^2-\epsilon y^2$ to be a unit, $x$ has to be a unit and there is no condition on $y$ other than it has to be an integer. Again consider the reduction modulo the uniformizer map. Its image in this case is $\F_q^\times \times \F_q$ (again, this can be checked by hand in this specific case), and thus the volume of the set $C$ is $\frac{(q-1)q}{q^2}$. We summarize (cf. \cite{Langlands:2013aa}): \begin{equation}\label{eq:res_quadr} \vol_{|\omega_T|}(T^c)=\begin{cases} \left(\frac{q-1}{q}\right)^2 & \bT\text{ split,}\\ \frac{q-1}{q}\,\frac{q+1}{q} & \bT\text{ non-split unramified,}\\ \frac{1}{\sqrt{q}}\,\frac{q-1}{q} & \bT\text{ non-split ramified.} \end{cases} \end{equation} Note that the factor $|2|$ from (\ref{eq:ex1}) disappears (i.e., $|2|_v = 1$) since we are assuming that $p\neq 2$. The factor $\frac1{\sqrt{q}}$ in the ramified case comes from the factor $|\sqrt{\epsilon}|$ in (\ref{eq:ex1}) (in this case, $\epsilon=\varpi$ up to a unit since $p\neq 2$). For $p=2$, everything in the calculation is slightly different (and a lot longer) but the answers are similar. Not to interrupt the flow of the exposition, we postpone the discussion of $p=2$ till \S\ref{sub:two} below. \subsubsection{Norm-1 torus of a quadratic extension}\label{susbsub:norm1} Finally, to illustrate general difficulties of this volume computation, we consider the example of the norm-1 torus of a quadratic extension.
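Before doing so, we record a quick numerical sanity check of the residue-field point-counts behind (\ref{eq:res_quadr}). The sketch below is our illustration (the function name \texttt{count\_nonvanishing\_norm} is ours, and the reduction $\epsilon \equiv 0$ models the ramified case), assuming Python:

```python
def count_nonvanishing_norm(q, eps):
    """Count pairs (x, y) in F_q^2 with x^2 - eps*y^2 nonzero mod q."""
    return sum(1 for x in range(q) for y in range(q)
               if (x * x - eps * y * y) % q != 0)

q = 7
# Unramified case: eps a non-square unit mod q.  The count is in bijection
# with F_{q^2}^x, hence equals q^2 - 1, giving volume (q-1)(q+1)/q^2.
squares = {(x * x) % q for x in range(1, q)}
eps = next(e for e in range(2, q) if e not in squares)
assert count_nonvanishing_norm(q, eps) == q * q - 1

# Ramified case: eps reduces to 0 mod q, so the condition becomes x^2 != 0;
# the image of the reduction map is F_q^x times F_q, giving (q-1)*q points
# and volume (q-1)/q.
assert count_nonvanishing_norm(q, 0) == (q - 1) * q
```

Both counts match the volumes computed above, before the extra factor $|\sqrt{\epsilon}|$ is taken into account in the ramified case.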
There is an exact sequence of algebraic $F$-tori: \begin{equation}\label{eq:exact_seq_n1} 1\to \res_{E/F}^{(1)}\G_m \to \res_{E/F} \G_m \to \G_m \to 1, \end{equation} where the last map is the norm map; its kernel is an algebraic torus $\bT_1:=\res_{E/F}^{(1)}\G_m $ over $F$, called the \emph{norm-1} torus. It is tempting to try to use this exact sequence to compute the volume of $T_1^c$ with respect to $\omega_{T_1}$, but that is not the right way to proceed. The standard way to do this calculation is to consider an isogeny between $\bT_1\times \G_m$ and $\bT$ and use the results of Ono on the behaviour of various invariants attached to tori under isogenies; see also \cite{shyr77}. However, this would take us too far afield; instead we proceed with an elementary calculation. Before we do this calculation for a $p$-adic field, consider for a moment the situation when $F=\R$ and $E=\C$, in order to get some geometric intuition. \begin{example}\label{ex:heuristic} Let $\bT := \res_{\C/\R}\G_m$, so that $\bT(\R)=\C^\times$; then the norm-1 torus is the unit circle $S^1$. The same calculation as in Example \ref{ex:quadr} shows that the volume form $\omega_T$ gives the measure $|\omega_T|=|\frac{2i dx\wedge dy}{x^2+y^2}| =\frac{2}{x^2+y^2}|dx\wedge dy|$ on $\C^\times$. Similarly, if we go by the definition of the volume form $\omega_{S^1}$ on $S^1$, we obtain the following. The generator of the character group $X^\ast(S^1)$ (over $\C$) is simply the identity character $z\mapsto z=x+iy$. We get that $\omega_{S^1}=\frac{dz}{z}$ by definition, but intuitively it is not yet clear what measure this form defines. We write $dz=dx+idy$, and note that $\frac1z =\bar z=x-iy$ when $z\in S^1$. We obtain: \begin{equation}\label{eq:circle} \omega_{S^1}=(x-iy)(dx+idy)= (x-iy)dx+(y+ix)dy. \end{equation} It is still not obvious what measure this form gives; it would be convenient to rewrite it using the local coordinate of some chart on the circle.
Here we can use the fact that we are working over $\R$ and take $\theta$ to be the arc length; then $x=\cos \theta$, $y=\sin \theta$, $0\le \theta <2\pi$ is the familiar (transcendental) parametrization. We get: \begin{equation}\label{eq:S1} \begin{aligned} &\omega_{S^1} =(x-iy)dx+(y+ix)dy\\ &=(\cos \theta -i\sin \theta) d(\cos \theta)+(\sin \theta +i\cos\theta) d(\sin \theta) \\ &= -\cos \theta \sin \theta d\theta + \sin \theta \cos \theta d\theta +i(\cos^2 \theta d\theta +\sin^2 \theta d\theta) =id\theta. \end{aligned} \end{equation} Since $|i|=1$, we see that the measure $|\omega_{S^1}|$ coincides with the arc length. The exact sequence (\ref{eq:exact_seq_n1}) gives a relation between the measure on $\C^\times$ and the measures on $S^1$ and $\G_m$, which is the same as rewriting the measure $dx\wedge dy$ in polar coordinates. Indeed, we have (from calculus) $dx\wedge dy = r dr \wedge d\theta$, so $\frac{dx\wedge dy}{x^2+y^2} =\frac{dr}r \wedge d\theta$. We obtain the relation between $\omega_{S^1}$ and the form $\omega_T$ on $\bT=\res_{\C/\R}\G_m$: \begin{equation} \omega_T= 2 \omega_{S^1}\wedge \omega_{\G_m}. \end{equation} \end{example} The appearance of the factor $2$ in this relation, combined with the fact that the norm map to $\G_m$ is not surjective on $\R$-points and is $2:1$, illustrates that the relation between the measures on $\bT$ and $\bT_1$ is not straightforward (if one cares about a power of $2$). Armed with this caution, we move on to the $p$-adic fields. \begin{example}\label{ex:norm1} Let $\bT_1 := \res_{E/F}^{(1)}\G_m$ be the norm-1 torus of a quadratic extension as above, but with $E=F(\sqrt{\epsilon})$ an extension of non-Archimedean local fields as in Example \ref{ex:quadr}. As before, we assume $p\neq 2$ (the case $p=2$ is treated below in \S\ref{sub:two}). As above, we would like to understand the form $\omega_{\bT_1}$. We observe that the relation (\ref{eq:circle}) can be easily adapted to this case (essentially, replacing $i$ with $\sqrt{\epsilon}$).
What we need is an algebraic parametrization of the conic $x^2-\epsilon y^2=1$. Such a parametrization is given in projective coordinates $(x:y:z)$ by: \begin{equation}\label{eq:param_conic} x=\epsilon t^2 + 1 ,\quad y=2 t, \quad z=1-\epsilon t^2, \quad t\in F. \end{equation} Then a calculation similar to (\ref{eq:S1}) shows that in the affine chart $z\neq 0$, $$ \omega_{\bT_1}=-\frac{2\sqrt{\epsilon}}{1-\epsilon t^2}\, dt. $$ We note here that we could have used the same rational parametrization for the unit circle in the example above; then at this point we would have obtained the same answer: if we plug in $\epsilon =-1$, we get that the ``volume'' of the circle with respect to $|\omega_{S^1}|$ is $2\int_{\R}\frac1{t^2+1}\, dt = 2\pi$, as expected. Continuing with the $p$-adic calculation, we can discard $|2|$ since $p\neq 2$. Next, we observe that $T_1^c=\bT_1(F)$ (our torus is \emph{anisotropic}; it has no non-trivial $F$-characters, and hence the condition defining $T_1^c$ is vacuous). Therefore, the volume of $T_1^c$ with respect to $\omega_{\bT_1}$ is \begin{equation}\label{eq:n1t} \vol_{|\omega_{\bT_1}|}(T_1^c) = \int_{F}\left|\frac{\sqrt{\epsilon}}{1-\epsilon t^2}\right|\, |dt|. \end{equation} Now we need to consider two cases. \\ {\bf Case 1.} The extension is unramified, i.e., $\epsilon$ is a non-square unit. Then (\ref{eq:n1t}) becomes (using the fact that the volume of the $p$-adic annulus $\{t: \ord(t)=n\}$ with respect to the measure $|dt|$ equals $q^{-n}-q^{-(n+1)}$): \begin{equation}\label{eq:vol_unram} \begin{aligned} \vol_{|\omega_{\bT_1}|}(T_1^c) &= \int_{F}\frac1{|1-\epsilon t^2|}\, |dt| = \int_{\ri_F} |dt| + \sum_{n=1}^\infty \frac1{q^{2n}}(q^{n}-q^{n-1}) \\ &=1+\left(1-\frac1q\right)\sum_{n=1}^{\infty}\frac1{q^n}= 1+\frac 1q. \end{aligned} \end{equation} {\bf Case 2.} The extension is ramified, i.e., $\ord(\epsilon)=1$. Then $|1-\epsilon t^2|= 1$ if $t\in \ri_F$ and $|1-\epsilon t^2|= q^{2n-1}$ if $\ord(t)=-n<0$.
Thus, the integral computing the volume of $T_1^c$ again breaks down as a sum: \begin{equation}\label{eq:one-more-two} \begin{aligned} \vol_{|\omega_{\bT_1}|}(T_1^c) &= \frac1{\sqrt{q}}\int_{F}\frac1{|1-\epsilon t^2|}\, |dt| = \frac1{\sqrt{q}} \left(\int_{\ri_F} |dt| + \sum_{n=1}^\infty \frac1{q^{2n-1}}(q^{n}-q^{n-1})\right) \\ &=\frac1{\sqrt{q}}\left(1+q\left(1-\frac1q\right)\sum_{n=1}^{\infty}\frac1{q^n}\right)= \frac2{\sqrt{q}}. \end{aligned} \end{equation} Let us compare the results of this calculation with the approach to volumes via point-counting. If we take the equation $x^2-\epsilon y^2=1$ and reduce it modulo the uniformizer, we get an equation of a conic over $\F_q$. In the unramified case, this conic is in bijection with ${\mathbb P}^1(\F_q)$ via (\ref{eq:param_conic}); thus we expect the volume to equal $\frac{q+1}q$, which agrees with (\ref{eq:vol_unram}). In the ramified case, when $\epsilon$ is not a unit, the reduction of the same equation modulo the uniformizer gives a disjoint union of two lines over the finite field: it is the subvariety of the affine plane defined by $x^2=1$. The point-count over $\F_q$ gives us $2q$, thus the volume we obtain is $\frac{2q}{q}=2$, which agrees with (\ref{eq:one-more-two}) once we make the correction for the fact that our volume form had a factor of $\sqrt{\epsilon}$ and thus was not defined over $F$ (this again illustrates why in \cite{weil:adeles} the discriminant factor appears in the definition of the volume form). We note that when $p\neq 2$, the affine scheme $\spec \ri_F[x,y]/(x^2-\epsilon y^2-1)$ is smooth over $\ri_F$ (in both the ramified and unramified cases, which can be checked by the Jacobian criterion, \cite[\S 2.2]{bosch-lutkebohmert-raynaud:NeronModels}), and this justifies the fact that the point-count on the reduction $\mod \varpi$ does give us the correct answer.
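The point-counts just described can be verified by brute force; the following sketch is our illustration (the function name \texttt{count\_conic} is ours, and $\epsilon \equiv 0$ again models the ramified case), assuming Python:

```python
from fractions import Fraction

def count_conic(q, eps):
    """Count points (x, y) in F_q^2 on the conic x^2 - eps*y^2 = 1."""
    return sum(1 for x in range(q) for y in range(q)
               if (x * x - eps * y * y - 1) % q == 0)

q = 11
squares = {(x * x) % q for x in range(1, q)}
eps = next(e for e in range(2, q) if e not in squares)  # a non-residue

# Unramified case: the conic is in bijection with P^1(F_q), so q+1 points,
# and the expected volume is (q+1)/q = 1 + 1/q.
assert count_conic(q, eps) == q + 1

# Ramified case: eps reduces to 0; the special fibre x^2 = 1 is a union of
# two affine lines with 2q points, so the point-count gives 2q/q = 2.
assert count_conic(q, 0) == 2 * q

# The geometric series in the volume computation: using
# sum_{n>=1} q^{-n} = 1/(q-1), we get 1 + (1 - 1/q)/(q-1) = 1 + 1/q.
assert 1 + (1 - Fraction(1, q)) * Fraction(1, q - 1) == 1 + Fraction(1, q)
```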
\end{example} \subsubsection{The N\'eron model}\label{subsub:Neron} How do the above calculations generalize to an arbitrary algebraic torus? The issue is that for a torus that is not $F$-split, it is not a priori obvious how to choose `coordinates' defined over $\ri_F$; more precisely, one first needs to define an \emph{integral model} for $\bT$, i.e., a scheme over $\ri_F$ such that its generic fibre is $\bT$. In order to use the formula (\ref{eq:weil}), this model would also need to be a \emph{smooth} scheme over $\ri_F$. There is a canonical way to define such a smooth integral model for $\bT$, namely, the \emph{weak N\'eron model}, \cite[Chapter 4]{bosch-lutkebohmert-raynaud:NeronModels}, which we shall denote by $\mathcal T$. In general it is a scheme not of finite type (it can have infinitely many connected components). The $\ri_F$-points of its identity component $T^0:={\mathcal T}^0(\ri_F) $ provide another canonical compact subgroup of $\bT(F)$. This subgroup is traditionally used in the literature to normalize the Haar measures on tori (and plays a role in normalization of measures on general reductive groups, as we shall see below), but it plays no explicit role in this note, hence we do not discuss any details of its definition. Moreover, once we have the $\ri_F$-model for $\bT$, we can use the local coordinates associated with this model to define a volume form on $\bT(F)$. Unlike the volume form defined above by using the characters, this form actually has coefficients in $F$; it is called the \emph{canonical} volume form in the literature, following the article \cite{gross:motive}; we call it $\omega^{\can}$, but we shall not use any explicit information about it in this note. In general, $T^0$ is a subgroup of finite index in $T^c$ (this index is an interesting arithmetic invariant, see \cite{bitan} for a detailed study), and the relationship between the form $\omega_T$ defined above and $\omega^{\can}$ is discussed in \cite{gan-gross:haar}. 
The subgroup $T^c$ is the set of $\ri_F$-points of the so-called \emph{standard} integral model of $\bT$, which is not smooth in general; roughly speaking, the coordinates on the standard model come from the characters $\chi_i$ as above in the examples (see \cite[\S 1.1]{bitan}). If $\bT$ splits over an unramified extension of $F$, the situation is simple (see \cite{bitan} and references therein for details): \begin{theorem} Suppose that $\bT$ splits over an unramified extension of $F$. Then \begin{enumerate} \item $T^0= T^c$, and the special fibre ${\mathcal T}^0_{\kappa}$ of ${\mathcal T}^0$ is an algebraic torus over $\F_q$. \item $\vol_{|\omega_T|}(T^c)= \vol_{|\omega^{\can}|}(T^c)= \frac{\# {\mathcal T}^0_{\kappa}(\F_q)}{q^{\dim(\bT)}}$. \end{enumerate} \end{theorem} We give one illustrative example without any details, and summarize the known general results below in \S \ref{subsub:general}. \begin{example} Consider again Example \ref{ex:norm1}, where $\bT$ is the norm-1 torus of a quadratic extension. It is anisotropic over $F$, and consequently its N\'eron model is a scheme of finite type over $\ri_F$. If $E/F$ is unramified, the `standard model' (see \cite{bitan}) coincides with the N\'eron model, and is simply defined by the equation $x^2-\epsilon y^2=1$ over $\ri_F$. It is connected and its special fibre is an irreducible conic over $\F_q$, which has $q+1$ rational points over $\F_q$, as discussed above. If $E/F$ is ramified, e.g. $E=F[\sqrt{\varpi}]$, then $|T^c/{T}^0|=2$ (the special fibre of $\mathcal T$ has two connected components, each isomorphic to an affine line as we saw above -- note that it is not an algebraic torus!). In this case we have $$\vol_{|\omega^{\can}|}({T}^0)= \frac{\# {\mathcal T}^0_{\kappa}(\F_q)}{q^{\dim(\bT)}}=\frac{q}q=1,$$ and also $\omega_T=|\sqrt{\varpi}|_E \omega^{\can} = \frac1{\sqrt{q}}\omega^{\can}$.
\end{example} \end{example} \begin{example}\label{ex:restr} For an arbitrary finite extension $E/F$ and $\bT=\res_{E/F}\G_m$, we have $\bT(F) = E^\times$, and $T^c =\ri_E^\times$. If $\alpha_1, \dots, \alpha_r$ are elements of $\ri_E$ that form a basis for $\ri_E$ over $\ri_F$, then $\chi_i(x):=\tr_{E/F}(\alpha_i x)$ form a basis of $X^\ast(\bT)$ over $\Z$. Then by definition of the discriminant (as the norm of the different ${\mathfrak d}_{E/F}$), the measure $|\omega_T|=|\wedge_i\, d\chi_i|$ equals $|\det(\tr_{E/F}(\alpha_i\alpha_j))|_F^{1/2}\, |\omega^{\can}|$, i.e. the conversion factor is the square root of the $F$-absolute value of the discriminant of $E$: $$\omega_T=\sqrt{|\Delta_{E/F}|}\omega^\can.$$ This calculation is generalized to an arbitrary reductive group (not just an arbitrary torus) in \cite{gan-gross:haar}. \end{example} \subsubsection{References to the general results: a non-self-contained answer to Question 1} \label{subsub:general} \begin{enumerate} \item In general, the index $[T^c:T^0]=|(X_\ast)_I^{tor}|$ can be computed by looking at the inertia co-invariants on the co-character lattice of $\bT$, see \cite[(3.1)]{bitan} which follows \cite[\S 7]{kottwitz:isocrystals-2}. \item The relation $ \vol_{|\omega^{\can}|}(T^0) = \frac{\# {\mathcal T}^0_{\kappa}(\F_q)}{q^{\dim(\bT)}}$ holds for \emph{any} algebraic torus, \cite[Proposition 2.14]{bitan}. \item As noted above, the form $\omega_T$ is not generally defined over $F$. It turns out (see \cite{gan-gross:haar}) that it only needs to be corrected by a factor that is a square root of an element of $F$ to get to a form defined over $F$. We saw this already in the case when $\bT$ is of the form $\res_{E/F}\G_m$; the general case follows from this calculation and a theorem of Ono that relates an arbitrary torus with a torus of the form $\res_{E/F}\G_m$, see proof of Corollary 7.3 in \cite{gan-gross:haar}.
Specifically, Corollary 7.3 in \cite{gan-gross:haar} states (using our notation) that if $F$ has characteristic zero or if $\bT$ splits over a Galois extension of $F$ of degree relatively prime to the characteristic of $F$, then: \begin{equation} \left |\frac{\omega_T}{\sqrt{D_M}}\right| = | \omega^\can|, \end{equation} where $D_M$ is a \emph{refined Artin conductor} of the motive $M$ associated with $\bT$, defined in \cite[(4.5)]{gan-gross:haar}. We discuss this motive briefly in the next subsection, but do not discuss the definition of the Artin conductor. \item Combining these results gives an answer to Question 1 above. \end{enumerate} \subsubsection{The local $L$-functions} There is yet another way to express the number of $\F_q$-points of ${\mathcal T}^0_\kappa$, and hence the volume of $T^0$, entirely in terms of the representation of the Galois group on the character lattice, using the Artin $L$-factor. Indeed, an algebraic torus $\bT$ over $F$ is uniquely determined by the action of the Galois group of its splitting field $E$ on $X^\ast(\bT)$. Let $\Gamma=\Gal(E/F)$, let $I$ be the inertia subgroup, and let $\fr$ be the Frobenius automorphism of $k_E$ over $k_F$. Recall that we have the exact sequence of groups $$1\to I \to \Gamma \to \langle \fr \rangle \to 1,$$ and the cyclic group $\langle \fr \rangle$ is isomorphic to the Galois group of the residue field $\Gal(k_E/k_F)$. Thus we get a natural action of $\Gal(k_E/k_F)$ on the set of inertia invariants $X^\ast(\bT)^I$. Let us denote this integral representation by $$\sigma_T: \Gal(k_E/k_F) \to {\operatorname{Aut}}_\Z(X^\ast(\bT)^I)\simeq \GL_{d_I}(\Z),$$ where $d_I=\rk(X^\ast(\bT)^I)$. The Artin $L$-factor associated with this representation is, by definition, \begin{equation}\label{eq:L} L(s, \sigma_T)=\det\left(I_{d_I}- \frac{\sigma_T(\fr)}{q^s}\right)^{-1} \quad \text { for } s\in \C, \end{equation} where $I_{d_I}$ is the identity matrix of size $d_I$.
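To make (\ref{eq:L}) concrete: for the split torus $\G_m$ the Frobenius acts trivially on $X^\ast \simeq \Z$, giving $L(1,\sigma_T)^{-1}=1-\frac1q$ (Example \ref{ex:split}); for the unramified norm-1 torus it acts by $-1$, giving $1+\frac1q$, matching (\ref{eq:vol_unram}); and for unramified $\res_{E/F}\G_m$ it swaps the two characters, giving $1-\frac1{q^2}$, matching the non-split unramified case of (\ref{eq:res_quadr}). A small sketch (our illustration, assuming Python; the function name is ours) evaluates these determinants exactly:

```python
from fractions import Fraction

def L_inverse_at_1(M, q):
    """Evaluate L(1, sigma)^{-1} = det(I - M/q) for a 1x1 or 2x2 integer
    matrix M recording the Frobenius action on X^*(T)^I."""
    if len(M) == 1:
        return 1 - Fraction(M[0][0], q)
    a, b = M[0]
    c, d = M[1]
    return (1 - Fraction(a, q)) * (1 - Fraction(d, q)) - Fraction(b * c, q * q)

q = 7
# Split G_m: Frobenius acts trivially.
assert L_inverse_at_1([[1]], q) == 1 - Fraction(1, q)
# Unramified norm-1 torus: Frobenius acts by -1.
assert L_inverse_at_1([[-1]], q) == 1 + Fraction(1, q)
# Unramified Res_{E/F} G_m: Frobenius swaps the two characters.
assert L_inverse_at_1([[0, 1], [1, 0]], q) == 1 - Fraction(1, q * q)
```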
Then the following relation holds (we are quoting it from \cite[Proposition 2.14]{bitan}): \begin{theorem}\label{thm:torus} $\vol_{|\omega^{\can}|}(T^0)=\frac{\# {\mathcal T}^0_{\kappa}(\F_q)}{q^{\dim(\bT)}} = L(1, \sigma_T)^{-1}$. \end{theorem} Note that if $E/F$ is unramified, the inertia is trivial, so $d_I=\rk(\bT)$. Thus, to summarize, we have defined a natural invariant form $\omega_T$ on $\bT$ and described the maximal compact subgroup $T^c$ of $\bT(F)$ in terms of the characters of $\bT$. If $\bT$ splits over an unramified extension, the volume of $T^c$ with respect to this differential form equals \begin{equation}\label{eq:unram} \vol_{|\omega_T|}(T^c)= \frac{\#{\mathcal T}^0_\kappa (\F_q)}{q^{\dim(\bT)}} = L(1, \sigma_T)^{-1}, \end{equation} where the second equality holds for any $\bT$, not necessarily unramified. In general, the volume of $T^c$ with respect to this differential form involves two more factors -- the index of $T^0$ in $T^c$ and the ratio between the differential forms $\omega_T$ and $\omega^{\can}$. \subsection{Reductive groups} Similarly to the case of tori discussed above, for a general reductive group $\bG$ over a local field $F$, the choice of a normalization of Haar measure is linked with a choice of a `canonical' differential form or a `canonical' compact subgroup of $\bG(F)$ (unlike an algebraic torus, the set of $F$-points of a general reductive group can have more than one conjugacy class of maximal compact subgroups, and this choice matters for the normalization of measure). Luckily, for the questions studied in \cite{langlands-frenkel-ngo} the choice of the normalization of measure on $\bG(F)$ does not matter -- it only contributes some global constant. However, for completeness, we record that a `canonical' choice of a compact subgroup $G^0$ and an associated volume form $\omega_G$ is described by B. Gross in \cite{gross:motive}, using Bruhat-Tits theory.
The group $G^0$ is the set of $\ri_F$-points of a smooth scheme $\underline{\bG}$ over $\ri_F$ whose generic fibre is $\bG$. Hence, by Weil's general argument (since $\underline{\bG}$ is smooth over $\ri_F$), the volume of $G^0$ is obtained by counting points on the special fibre of $\underline \bG$ (see \cite[Proposition 4.7]{gross:motive}): \begin{equation} \vol_{|\omega_G|}(G^0)=\frac{\#\underline{\bG}_\kappa(\F_q)}{q^{\dim\bG}}. \end{equation} Moreover, Gross defines an Artin-Tate motive $M$ associated with $\bG$ such that the volume of the canonical compact subgroup with respect to the canonical form is given by the value of the Artin $L$-function associated with this motive at $1$. (If $\bG$ is an algebraic torus, the associated motive is precisely the representation of the Galois group on its character lattice as above, and Gross' result amounts precisely to the statement of Theorem \ref{thm:torus} above). \section{Orbital integrals: the geometric measure}\label{sec:oi} \subsection{The two normalizations}\label{subseq:the_problem} Let $\bG$ be a connected reductive algebraic group over $F$. We denote the set of regular semisimple elements in $G:=\bG(F)$ by $G^\rss$ (respectively, the set of regular semisimple elements in the Lie algebra $\fg$ of $\bG$ by $\fg^\rss$).\footnote{The simplest way to characterize the set of regular semisimple elements is to use any faithful representation of $G$ to think of its elements as matrices; then an element $\gamma\in \bG(F)$ (respectively, $X\in \fg$) is regular and semisimple if and only if its eigenvalues (in an algebraic closure of $F$) are distinct; we will also give a precise definition below in \S \ref{sub:weyl_discr}.} Let $\gamma\in G^\rss$ (in this note we are only interested in this setting).
The adjoint orbit (or simply, `orbit', or sometimes, `rational orbit' when we want to emphasize that it is the group of $F$-points of $\bG$ that is acting on it) of $\gamma$ is the set $$\ri(\gamma) := \{g\gamma g^{-1} \mid g\in \bG(F)\}.$$ The centralizer of $\gamma$ is by definition the group $C_{G}(\gamma) = \{ g\in \bG(F)\mid g\gamma g^{-1} =\gamma \}$. We will also briefly refer to the notion of a \emph{stable} orbit of $\gamma$. It is a finite union of rational orbits; as a first approximation, it can be thought of as the set $$\ri(\gamma)^\mathrm{stable} := \{g\gamma g^{-1} \mid g\in \bG(F^\sep)\}\cap \bG(F).$$ However, this is not the correct definition in general; see \cite{kottwitz82}. We will not need a precise definition in this note. If $\bG=\GL_n$, then for $\gamma\in G^\rss$, the stable orbit and rational orbit are the same. When $\gamma \in G^\rss$, the identity component of the centralizer of $\gamma$ is a maximal torus $T\subset G$; it can be thought of as a set of $F$-points $T=\bT(F)$ of an algebraic torus $\bT$ defined over $F$. This leads to two natural approaches to normalizing the measure on the orbit of $\gamma$: \begin{enumerate} \item Normalize the measures on $G$ and $T$ according to one of the methods discussed above and consider the quotient measure. \item Describe the space of all (stable) orbits, and derive a measure on each orbit as a quotient measure with respect to the measure on the space of orbits. \footnote{In fact, there is a third natural approach if we are working with the orbital integrals on the Lie algebra rather than the group: namely, to identify $\fg$ with $\fg^\ast$ and consider the differential form on the orbit itself which comes from Kirillov's symplectic form on co-adjoint orbits, see \cite{kottwitz:clay}. We will not discuss this approach here as it is not related to the main subject of the note. 
However, the example of $\fsl_2$ where one can clearly see the relation of this measure to what one would expect from calculus is provided in the Appendix by Matthew Koster.} \end{enumerate} Since the $G$-invariant measure on each orbit is unique up to a constant multiple, the two orbital integrals defined with respect to these measures will of course differ by a constant; however, this constant can, and does, depend on the orbit. The goal of this section is to give a detailed explanation for the formula that relates the two orbital integrals; this is equation (3.31) in \cite{langlands-frenkel-ngo}. More specifically, we start with a review of the construction of Steinberg map $\fc: \bG \to \A_G$, in \S \ref{sub:steinberg} below. The set $\A_G(F)$ has an open dense subset whose points parametrize the stable orbits of regular semisimple elements in $G$ -- each fibre of the map $\fc$ over a point of this subset is such a stable orbit. The relation (3.31) in \cite{langlands-frenkel-ngo} we aim to explain is: \begin{equation}\label{eq:FLN} \int_{\fc^{-1}(a)} f(g) d|\omega_a| = |\Delta(t)| L(1, \sigma_{T\backslash G}) O^{\mathrm{stable}}(t, f), \end{equation} where $a=\fc(t)$. We start by defining all the ingredients of this formula (as we shall see, this formula is not really about orbital integrals; it is simply a statement about the relationship between two invariant measures on an orbit). We also simultaneously treat the orbital integrals on Lie algebras. \subsection{The space $\A_G$; Chevalley and Steinberg maps}\label{sub:steinberg} We start with the Lie algebra, where the situation is simpler. We recall that $\bG$ acts on $\fg$ via adjoint action, denoted by $\Ad:\bG \to \GL(\fg)$ (for the classical groups and their Lie algebras, $\Ad(g)$ is simply matrix conjugation by an element $g\in \bG(F)$). When we talk about orbits in $\fg$, it is the orbits under the adjoint action. 
For $X\in \fg$, its \emph{centralizer} $C_G(X)$ is, by definition, its stabilizer (in $\bG(F)$) under the adjoint action. If $X\in \fg^\rss$, then $C_G(X)$ is a maximal torus in $\bG(F)$. \subsubsection{Reductive Lie algebra; algebraically closed field}\label{subsub:Liealg}\footnote{This section is entirely based on \cite[\S 14]{kottwitz:clay}.} Let $\fg$ be the Lie algebra of $\bG$. For the moment let us work over an algebraically closed field $k$ of characteristic $0$ (in fact, sufficiently large characteristic suffices here, but we will not pursue this direction). Let $\ft=\mathrm{Lie}(\bT)$ be a Cartan subalgebra, the Lie algebra of a maximal torus $\bT$. Then the ring of polynomial functions on $\ft$ is the symmetric algebra $S=S(\ft^\ast)$. The Weyl group acts on $S$, and the ring of invariants $S^W$ is the ring of regular functions on the quotient $\ft/W$. In other words, $\ft/W$ is a variety over $k$, isomorphic to $\spec S^W$. In fact, $S^W$ is itself a polynomial ring, and so $\ft/W$ is isomorphic to the affine space $\A^r$, where $r=\rank(\bT)$. \begin{example} Let $\bG=\GL_n$, and let $\ft\subset \fg=\mathfrak{gl}_n$ be the Cartan subalgebra consisting of diagonal matrices. Then $W=S_n$, $S=k[x_1, \dots, x_n]$, and $S^W$ is the algebra of symmetric polynomials. As we know, it is generated by the elementary symmetric polynomials. Thus, the map $\ft\to\A^r$ is given by: $t=\diag(t_1, \dots, t_r)\mapsto (\sigma_1(\bar t), \dots, \sigma_r(\bar t))$, where $\bar t =(t_1, \dots, t_r)$ and $\sigma_k(\bar t)=\sum_{\{i_1, \dots, i_k\}\subset \{1, \dots, r\}} t_{i_1}\dots t_{i_k}$ is the $k$-th elementary symmetric polynomial. Note that in particular, for $n=2$, we get the map $t\mapsto (\tr(t), \det(t))$. \end{example} Let $k[\fg]$ be the $k$-algebra of polynomial functions on $\fg$, and let $k[\fg]^G$ be the subalgebra of the polynomials invariant under the adjoint action of $G$.
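The $\GL_n$ example above can be checked numerically: the coefficients of the characteristic polynomial of a diagonal matrix are, up to alternating signs, the elementary symmetric polynomials of its entries. A small sketch (our illustration only, using NumPy to extract characteristic polynomial coefficients):

```python
import itertools
import numpy as np

def elem_sym(vals, k):
    # k-th elementary symmetric polynomial sigma_k of the entries
    return sum(np.prod(c) for c in itertools.combinations(vals, k))

t = [2.0, -1.0, 3.0]            # a regular element of the diagonal Cartan in gl_3
coeffs = np.poly(np.diag(t))    # char. polynomial of diag(t), leading coefficient first
# det(x*I - X) = x^3 - sigma_1 x^2 + sigma_2 x - sigma_3
for k in range(1, 4):
    assert abs(coeffs[k] - (-1) ** k * elem_sym(t, k)) < 1e-9
```

This is the same identification that reappears below for $\fsl_n$, where the $\sigma_k$ arise as traces of exterior powers of the standard representation.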
We quote from \cite[\S 14.2]{kottwitz:clay}: Chevalley's restriction theorem can be stated as: $$k[\fg]^G\cong S^W,$$ where the isomorphism is given by restricting the polynomial functions from $\fg$ to $\ft$. Dually to the inclusion $k[\fg]^G\hookrightarrow k[\fg]$, we get the surjection (which we will refer to as \emph{Chevalley map}) $$\fc_{\fg}: \fg \to \A_G=\ft/W,$$ which maps $X\in \fg$ to the unique $W$-orbit in $\ft$ consisting of elements conjugate to the semisimple part of $X$. An important observation (which is not used in these notes but is very relevant for the subject) is that the nilpotent cone in $\fg$ is $\fc_{\fg}^{-1}(0)$. In general, the role of `elementary symmetric polynomials' is played by the traces of irreducible representations of $\fg$ determined by the \emph{fundamental weights}. Namely, let $\{\mu_i\}_{i=1}^r$ be the fundamental weights determined by a choice of simple roots for $\fg$ (i.e., the weights of $\fg$ defined by $\langle \mu_i, \alpha_j^\vee\rangle=\delta_{ij}$, where $\Delta=\{\alpha_j\}_{j=1}^r$ is a base of the root system of $\fg$). Let $\rho_i$ be the representation of $\fg$ of highest weight $\mu_i$. Then $S^W$, as an algebra, is generated by $\Tr(\rho_i)$ (see, e.g., \cite[23.1]{humphreys}). For type $A$, one recovers the elementary symmetric polynomials from this construction. Namely, for $\fsl_n$, it happens that the exterior powers of the standard representation are irreducible, and they give all the fundamental representations: $\rho_i=\wedge^i \rho_1$, for $i=1, \dots, {n-1}$, where $\rho_1$ is the standard representation, which has highest weight $\mu_1$. Consequently, since the coefficients of the characteristic polynomial of a matrix are (up to sign) the traces of its exterior powers, we obtain: \begin{example} For $\fsl_n$, Chevalley map can be realized explicitly as $X \mapsto (a_i)$, where $a_i$ are the coefficients of the characteristic polynomial of $X$.
\end{example} \subsubsection{Reductive Lie algebra, non-algebraically closed field} When the field $F$ is not algebraically closed, the space $\A_G$ can be defined as $\spec F[\fg]^G$, avoiding the need to choose a maximal torus; it turns out that the morphism $\fc$ is still defined over $F$ (see \cite[\S 14.3]{kottwitz:clay}). However, in this note we are only considering the case of $\bG$ split over $F$, and it is convenient for us to continue using an explicit definition of the map. Namely, if $\bG$ is split over $F$, we can choose the split maximal torus $T^\spl$ in $G$, and define $\A_G=\ft^\spl/W$ exactly as above. The definition of the map $\fc_{\fg}$ stays the same. Consider explicitly what happens in the $\bG=\GL_2$ example. \begin{example}\label{ex:gl2} As above, Chevalley map is the map $\fc_{\mathfrak{gl}_2}: \mathfrak{gl}_2 \to \A^2$, $X\mapsto (\tr(X), \det(X))$. All the split Cartan subalgebras are conjugate in $\fg$. The image under Chevalley map of any split Cartan subalgebra $\ft$ of $\fg$ is the set $$\{(a_1, a_2)\in \A^2(F): a_1^2-4a_2 \text{ is a square in } F\}.$$ We observe that the $F$-conjugacy classes of Cartan subalgebras in ${\mathfrak{gl}_2}$ are in bijection with quadratic extensions of $F$: as discussed above in Example \ref{ex:restr}, for each quadratic extension $E$ of $F$ we get the torus $R_{E/F}\G_m$ in $\GL_2$. Its Lie algebra maps under Chevalley map onto the set $$\{(a_1, a_2)\in \A^2(F): {a_1^2-4a_2} \text{ is a square in } E\}.$$ We note that the image of the set of semisimple elements of $\fg$ is the complement of the origin in $\A^2(F)$, and the image of the set of regular semisimple elements is the complement of the locus $a_1^2-4a_2=0$.
\end{example} This situation is general: all Cartan subalgebras become conjugate to $\ft$ over the algebraic closure of $F$; Chevalley map is defined over $F$, and on $F$-points, the images of $(\Lie S)(F)$ under Chevalley map cover a Zariski open subset of $\A_G(F)$ as $S$ runs over a set of representatives of the $F$-conjugacy classes of tori. Now we return to the group itself; here the situation gets more complicated because of the central isogenies. \subsubsection{Semi-simple simply connected split group} Assume that $\bG$ is split over $F$, and let $\bT$ be an $F$-split maximal torus of $\bG$. We shall see that $\bT/W\simeq \ft/W = \A_G$ in this case. To do this, we construct a basis for the coordinate ring of $\bT/W$ (see \cite[\S 3.3]{langlands-frenkel-ngo}). Let $\alpha_1, \dots, \alpha_r$ be a set of simple roots for $\bG$ relative to $\bT$ (since $\bG$ is assumed to be semi-simple, the root lattice spans the same vector space as the character lattice $X^\ast(\bT)$, so there are $r$ simple roots). Let $\mu_i$ be the fundamental weights, as above, defined by $\mu_i(\alpha_j^\vee)=\delta_{ij}$ for $1\le i,j\le r$. We recall that for a semi-simple algebraic group, \emph{simply connected} means that the character lattice $X^\ast(\bT)$ coincides with the weight lattice, i.e. $\mu_i$ with $i=1, \dots, r$ constitute a $\Z$-basis of $X^\ast(\bT)$. Let $\rho_i$ be the algebraic representation of $G$ of the highest weight $\mu_i$ for $i=1, \dots, r$, and let $a_i(t)=\tr \rho_i(t)$. These functions are algebraically independent over $F$ and $$\bT/W\simeq \spec F[a_1, \dots, a_r].$$ As above, we get the map $\fc:\bG\to \A_G$, defined by $g\mapsto (\tr\rho_i(g))$. This map for the group is called Steinberg map. \begin{example} As a baby example, take $\bG=\Sl_2$, with $\bT$ the torus of diagonal matrices, and let $\rho_1$ be its standard representation on $F^2$.
For $x\in F^\times$, let $t(x)=\diag(x, x^{-1})\in \bT(F)$; the map $x\mapsto t(x)$ is the one-parameter subgroup of diagonal matrices. The weights of $\rho_1$ are $\mu_1:=\left(\diag(x, x^{-1})\mapsto x\right)$ and $-\mu_1 = \left(\diag(x, x^{-1})\mapsto x^{-1}\right) $ (which form a single Weyl orbit). We have $a:=\tr(\rho_1)(t(x)) =x+x^{-1}$, and this is the coordinate on the affine line $\A^1=\A_{\Sl_2}$. More generally, for $\bG=\Sl_n$, we have $r=n-1$, and with the standard choice of simple roots $\alpha_i(\diag(x_1, \dots x_n))=x_i x_{i+1}^{-1}$, the above construction yields $\rho_1$, the standard representation of $\Sl_n(F)$ on $F^n$, and $\rho_i=\wedge^i \rho_1$ (see \cite[\S 15.2]{fulton-harris:RepresentationTheory} for a detailed treatment over $\C$, which in fact works for algebraic representations over $F$). We recover the same `characteristic polynomial' map: the trace of the $i$-th alternating power of the standard representation applied to a diagonal matrix is precisely the $i$-th coefficient of its characteristic polynomial (which is, up to sign, the degree $i$ elementary symmetric polynomial of the eigenvalues). (Note, however, that this is a coincidence that holds just for groups of type $A_n$: the isomorphism $\rho_i\simeq \wedge^i \rho_1$ does not hold for other types; we discuss this issue below in \S \ref{sub:naive_meas}). \end{example} {\bf Caution:} Note that unlike the typical situation when one has an algebraic homomorphism of Lie algebras which then is `integrated' to obtain a homomorphism of simply connected Lie groups, Chevalley map on $\fg$ is \emph{not} the differential of Steinberg map (e.g. for $\Sl_2$, the map on $\fsl_2$ is $X\mapsto \det(X)$, while on $\Sl_2$ the map is $g\mapsto \tr(g)$). \subsubsection{Split reductive group with simply connected derived subgroup} Let $\bG$ be a split, reductive group of rank $r$, with simply connected derived group $\bG^\der$ (whose Lie algebra we will denote by $\fg^\der$).
Let $\bZ$ be the connected component of the centre of $\bG$. By our assumption that $\bG$ is split, $\bZ$ is a split torus. Let $T\supset Z$ be a split maximal torus in $G$, $T^\der = T \cap G^\der$ (note that $T^\der$ is \emph{not} the derived group of $T$), and let $W$ be the Weyl group of $G$ relative to $T$. Let $\A_{G^\der}=T^\der/W$ be the Steinberg quotient for the semisimple group $\bG^\der$. Let us denote $\rank(\bZ)$ by $r_Z$. (Naturally, the most common situation is $r_Z=1$.) We have $\A_{G^\der}\simeq \A^{r-r_Z}$. We have the exact sequence of algebraic groups [(3.1) in \cite{langlands-frenkel-ngo}]: \begin{equation}\label{eq:exseq} \xymatrix{ 1 \ar[r] & {\mathbf A} \ar[r] & \bZ\times \bG^\der \ar[r]& \bG \ar[r] & 1, \quad {\mathbf A}=\bZ\cap \bG^\der. } \end{equation} For example, for $\bG=\GL_2$, the group ${\mathbf A}$ is the algebraic group $\mu_2$ of square roots of $1$; it is defined by the equation $x^2=1$.\footnote{It is important to think of ${\mathbf A}$ as a group scheme. As the authors point out, this group scheme presents an `annoying difficulty' in characteristic $2$ (by not being \'etale).} Na\"ively, then, one would like to define the Steinberg-Hitchin base as $\A_{G^\der} \times \bZ$, and establish a correspondence between the stable conjugacy classes in $G$ and the points of the base, as it was done for semi-simple simply connected groups. The obstacle is that we cannot really define a good map from $\bG$ to $\A_{G^\der} \times \bZ$ over $F$ by means of the exact sequence (\ref{eq:exseq}): first, the decomposition $g=g'z$ with $g'\in G^\der$ and $z\in Z$ is defined only up to replacing $g'$ and $z$ with $ag'$, $a^{-1}z$ ($a\in {\mathbf A}(F)$), and second, the map $(g', z)\mapsto g'z$ is in general not surjective on $F$-points: for example for $\GL_2$, its image only consists of elements whose determinant is a square in $F$.
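The failure of surjectivity on $F$-points in the last example can be seen concretely over a finite field (a toy model, purely our illustration; the field $F$ in the text is local): the image of $Z(\F_p)\times \Sl_2(\F_p)$ in $\GL_2(\F_p)$ is exactly the set of matrices whose determinant is a nonzero square.

```python
import itertools

p = 5
mats = list(itertools.product(range(p), repeat=4))   # 2x2 matrices (a,b,c,d) over F_p
det = lambda m: (m[0] * m[3] - m[1] * m[2]) % p
squares = {x * x % p for x in range(1, p)}           # nonzero squares mod 5: {1, 4}

sl2 = [m for m in mats if det(m) == 1]               # SL_2(F_p)
image = {tuple(z * x % p for x in g) for z in range(1, p) for g in sl2}  # z*g', z scalar
square_det = {m for m in mats if det(m) in squares}
assert image == square_det   # image of Z x SL_2 -> GL_2 = {det is a nonzero square}
```

Indeed $\det(z g')=z^2$, and conversely $g$ with $\det(g)=s^2$ factors as $(sI)\cdot(s^{-1}g)$; the map is $2:1$ onto its image, with fibres given by ${\mathbf A}(\F_p)=\{\pm I\}$.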
The way to deal with this issue is described in \cite[(3.15)]{langlands-frenkel-ngo}: the set of $F$-points of the Steinberg-Hitchin base $\mathfrak A_G$ is defined as the union over cocycles $\eta\in H^1(F,A)$ of the spaces $(\mathfrak B_\eta(F)\times \bZ_\eta(F))/{\mathbf A}(F)$, where $\mathfrak B_\eta$ and $\bZ_\eta$ are torsors of, respectively, $\A_{G^\der}$ and $\bZ$, defined by the cocycle $\eta$. Finally, note that for the Lie algebra there is no issue because the Lie algebra actually splits as a direct sum $\fg=\fg^\der\oplus \mathfrak z$, and this is why we could treat all \emph{reductive} Lie algebras above on equal footing. The situation is more complicated if $\bG^\der$ is not simply connected, since the Steinberg quotient in this case will no longer be an affine space. We will not address this case (nor the non-split case) in this note. \subsection{Weyl discriminant} We recall the definition and the basic properties of the Weyl discriminant (for the Lie algebra, the main source is \cite[\S\S 7, 14]{kottwitz:clay}). \subsubsection{Weyl discriminant on the Lie algebra}\label{sub:weyl_discr} \begin{definition} Let $\fg$ be a reductive Lie algebra, let $X\in \fg$ be a regular semisimple element, and let $T=C_G(X)$ be its centralizer with the Lie algebra $\ft=\Lie (T)$. Then $$ D(X)= \det(\ad(X)\vert_{\fg/\ft})$$ is called the Weyl discriminant of $X$. \end{definition} The discriminant is, in fact, a polynomial function on $\fg$ (and thus extends to all of $\fg$ from the dense subset of regular semisimple elements): $D(X)$ is the lowest non-vanishing coefficient of the characteristic polynomial of $\ad(X)$ (see \cite[7.5]{kottwitz:clay}). This interpretation allows us to give an intrinsic characterization of the set of regular semisimple elements: in fact, $X\in \fg$ is regular semisimple if and only if $D(X)\neq 0$; thus it can be taken as a definition of \emph{regular semisimple}.
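For $\fgl_2$ this characterization is easy to test numerically. The sketch below (our illustration, using NumPy) builds the matrix of $\ad(X)$ via the identity $\mathrm{vec}(XY-YX)=(I\otimes X - X^{t}\otimes I)\,\mathrm{vec}(Y)$ for the column-stacking $\mathrm{vec}$, reads off the coefficient of $x^{\rk\fg}=x^2$ in its characteristic polynomial, and checks that it equals $D(X)=-(\lambda_1-\lambda_2)^2$, vanishing precisely off the regular semisimple locus:

```python
import numpy as np

def weyl_disc_gl2(X):
    # matrix of ad(X): vec(XY - YX) = (I kron X - X^T kron I) vec(Y)
    adX = np.kron(np.eye(2), X) - np.kron(X.T, np.eye(2))
    coeffs = np.poly(adX)   # char. polynomial of ad(X), leading coefficient first
    return coeffs[2]        # coefficient of x^2 = x^{rk gl_2}: equals D(X) on the rss locus

l1, l2 = 3.0, -2.0
assert abs(weyl_disc_gl2(np.diag([l1, l2])) + (l1 - l2) ** 2) < 1e-6  # D(X) = -(l1-l2)^2
assert abs(weyl_disc_gl2(np.diag([1.0, 1.0]))) < 1e-6                 # not rss: D = 0
assert abs(weyl_disc_gl2(np.array([[0.0, 1.0], [0.0, 0.0]]))) < 1e-6  # nilpotent: D = 0
```

Here $\ad(X)$ on $\fgl_2$ has eigenvalues $0,0,\pm(\lambda_1-\lambda_2)$, so its characteristic polynomial is $x^4-(\lambda_1-\lambda_2)^2x^2$, whose lowest non-vanishing coefficient is indeed $-(\lambda_1-\lambda_2)^2$.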
We also recall the expression for $D(X)$ in terms of roots: \begin{equation}\label{eq:Droots} D(X)=\prod_{\alpha\in \Phi} \alpha(X)= (-1)^{\frac{\dim\fg-\rk\fg}{2}}\left(\prod_{\alpha\in \Phi^+} \alpha(X)\right)^2, \end{equation} where $\Phi$ is the set of all roots and $\Phi^+ $ is any set of positive roots. \begin{example}\label{ex:Weyl_disc} We compute the explicit expressions for the Weyl discriminant in terms of the eigenvalues of $X$, in the cases $\fg=\fsl_n$ and $\fg=\fsp_{2n}$, for use in future examples. Let $X\in \fg$ have eigenvalues $\lambda_i\in \bar F$. For $\fg=\fsl_n$, the roots are $\alpha_{ij}(X)= \lambda_i-\lambda_j$, $1\le i, j \le n \text{ and } i\neq j$. Then the Weyl discriminant of $X$ coincides, up to sign, with the discriminant of the characteristic polynomial of $X$: $$D(X)=\prod_{{1\le i, j \le n} \atop{ i\neq j}}(\lambda_i-\lambda_j).$$ (We observe that the eigenvalues satisfy the relation $\sum_{i=1}^n\lambda_i=\tr(X)=0$). For $\fg=\fsp_{2n}$, the explicit expression for the roots depends on the choice of the coordinates for the standard representation (though of course the answer does not). We define $\Sp_{2n}$ and $\fsp_{2n}$ explicitly as: $$ \Sp_{2n}(F)=\{g\in \GL_{2n} (F): g^{t} J g =J\}, \quad \fsp_{2n}(F)=\{X\in \fgl_{2n} (F): X^{t} J + JX =0\},$$ where $J=\left[\begin{smallmatrix}0 & I_n \\ -I_n & 0 \end{smallmatrix}\right]$ and $I_n$ stands for the $n\times n$-identity matrix. Then the eigenvalues of any element $X\in \fsp_{2n}$ satisfy $\lambda_{n+i}=-\lambda_i$, $1\le i \le n$, and the set of values of the roots at $X$ is (cf. \cite[\S 12.1]{humphreys}): $\{\pm (\lambda_i\pm \lambda_j), 1\le i, j \le n, i\neq j\}\cup \{\pm 2\lambda_i, 1\le i\le n \}$.
Then we get: $$D(X)= (-1)^{\frac{n(n+1)}2}4^n\prod_{{1\le i, j \le n} \atop{ i\neq j}}(\lambda_i^2-\lambda_j^2)\prod_{i=1}^n \lambda_i^2.$$ \end{example} We also observe that in a reductive Lie algebra, the Weyl discriminant of any element is computed entirely via the derived subalgebra $\fg^\der$, by definition (since $\fg/\ft=\fg^\der/\ft^\der$). \subsubsection{Weyl discriminant on the group} On the group, the definition is obtained essentially by reducing to the Lie algebra: \begin{definition} Let $\gamma \in \bG(F)$ be a regular semisimple element, and let $T=C_G(\gamma)$ be its centralizer with the Lie algebra $\ft=\Lie (T)$. Then the Weyl discriminant of $\gamma$ is $$ D(\gamma)= \det(1-\Ad(\gamma)\vert_{\fg/\ft}).$$ \end{definition} Similarly to the Lie algebra case, the Weyl discriminant has an expression in terms of the (multiplicative) roots: \begin{equation}\label{eq:disc_group} D(\gamma)=\prod_{\alpha\in \Phi} (1-\alpha(\gamma)) = (-1)^{\frac{\dim\fg-\rk\fg}{2}}\rho^{-2}(\gamma)\left(\prod_{\alpha\in \Phi^+}(1-\alpha(\gamma))\right)^2, \end{equation} where $\rho$ is half the sum of positive roots, so $2\rho$ is the sum of positive roots (in the above formula, $\rho^{-2}(\gamma)$ is the inverse of the value of the character $2\rho$ at $\gamma$; the identity follows by pairing each positive root $\alpha$ with $-\alpha$ and using $1-\alpha^{-1}(\gamma)=-\alpha^{-1}(\gamma)(1-\alpha(\gamma))$). Note that the second part of the formula expressing the Weyl discriminant as a product over \emph{positive} roots now has an extra factor that did not arise in the Lie algebra case (the examples below illustrate this). We again show the calculation for the general linear and symplectic groups. Note that the final expressions are a lot simpler when restricted to $\bG^\der$. \begin{example}\label{ex: disc_group} In all examples, we give an explicit expression for the Weyl discriminant of a regular semisimple element $\gamma\in G(F)$ with eigenvalues $\{\lambda_i\}\subset \bar F$. We observe that these expressions do not depend on the field (so one could even consider $F=\C$).
\begin{enumerate} \item $G=\GL_2$: $D(\gamma)=(1-\frac{\lambda_1}{\lambda_2})(1-\frac{\lambda_2}{\lambda_1}) = -\frac{(\lambda_1-\lambda_2)^2}{\det(\gamma)}$. \item $G=\GL_n$: $D(\gamma)= \prod_{{1\le i, j \le n} \atop{ i\neq j}}\left(1-\frac{\lambda_i}{\lambda_j}\right)$. \item $G=\Sp_{2n}$: $D(\gamma)= \prod_{1 \le i < j \le n} d_{ij} \cdot \prod_{1 \le i \le n} d_{i}$, {where} $$\begin{aligned} d_{ij} &= \left(1-\frac{\lambda_i}{\lambda_j}\right)\left(1-\frac{\lambda_j}{\lambda_i}\right)\left(1-\lambda_i\lambda_j\right)\left(1-\frac1{\lambda_i\lambda_j}\right)\\ d_i &= \left(1-\lambda_i^2\right)\left(1-1/\lambda_i^2\right). \end{aligned} $$ \item $G={\mathrm {GSp}}_{2n}$. By definition, ${\mathrm {GSp}}_{2n}(F)$ is the algebraic group whose functor of points is defined as, for any $F$-algebra $R$, $${\mathrm {GSp}}_{2n}(R)= \{g\in \GL_{2n}(R): \exists \nu(g)\in R^\times, g^tJg =\nu(g) J\},$$ where $J$ is the same matrix as the one used to define $\Sp_{2n}$. It fits into the exact sequence of algebraic groups $$1\to \Sp_{2n}\to {\mathrm {GSp}}_{2n} \to \G_m \to 1,$$ where the map to $\G_m$ is the map $g\mapsto \nu(g)$, called the \emph{multiplier}. We have ${\mathrm {GSp}}_{2n}^\der =\Sp_{2n}$, so ${\mathrm {GSp}}_{2n}$ is a good example (other than $\GL_n$) of a reductive but not semi-simple algebraic group whose derived subgroup is simply connected. If the element $\gamma$ has multiplier $\nu$, then as above for $G=\Sp_{2n}$, $D(\gamma)= \prod_{1 \le i < j \le n} d_{ij} \cdot \prod_{1 \le i \le n} d_{i}$, {but now we have } $$\begin{aligned} d_{ij} &= \left(1-\frac{\lambda_i}{\lambda_j}\right)\left(1-\frac{\lambda_j}{\lambda_i}\right)\left(1-\frac{\lambda_i\lambda_j}{\nu}\right)\left(1-\frac{\nu}{\lambda_i\lambda_j}\right)\\ d_i &= \left(1-\frac{\lambda_i^2}{\nu}\right)\left(1-\frac{\nu}{\lambda_i^2}\right). 
\end{aligned} $$ \end{enumerate} \end{example} \subsection{Orbital integrals: the Lie algebra case}\label{sub:lie} We start with the prototype case of the Lie algebra. \subsubsection{Definitions: Lie algebra}\label{sub:def_lie} Let $\bG$ be a connected reductive group defined over a local field $F$, as above. The orbital integrals (for regular semisimple elements) on the Lie algebra are distributions on the space $C_c^\infty(\fg)$ of locally constant compactly supported functions on $\fg$, defined as follows. Let $X\in \fg$ be a regular semisimple element, and let $f\in C_c^\infty(\fg)$. Since $X$ is regular semisimple, its centralizer is a torus $T=C_G(X)=\bT(F)$, as discussed above, and thus the adjoint orbit of $X$ can be identified with the quotient $\bT(F)\backslash \bG(F)$. Both $\bT(F)$ and $\bG(F)$ can be endowed with any of the natural measures discussed above in \S\ref{subseq:the_problem}. Once the measures on $G$ and $T$ are fixed, there is a unique quotient measure on $T\backslash G$, which we will denote by $\mu_{T\backslash G}$ (see e.g., \cite[\S 2.4]{kottwitz:clay} for the definition of the quotient measure in this context). The orbital integral with respect to this measure is \begin{equation}\label{eq:oi} O_X(f) :=\int_{T\backslash G} f(\Ad(g^{-1})X) d\mu_{T\backslash G}. \end{equation} We observe that there are finitely many $F$-conjugacy classes of tori in $G$; thus there are finitely many choices of measures that we need to make on the representatives of these conjugacy classes, and these choices endow the orbit of every regular semisimple element with a measure. If the canonical measures (in the sense of \cite{gross:motive}, discussed above in \S \ref{subsub:Neron}) are chosen on the tori, the resulting orbital integrals are called \emph{canonical}. This approach to the normalization of measures on the orbits is the one typically used in the literature. On the other hand, one can use Chevalley map defined above to normalize the measures on orbits.
For a general reductive group $\bG$ and $X\in \fg$ regular semisimple, the fibre $\fc^{-1}(\fc(X))$ over the point $a:=\fc(X)\in \A_G(F)$ is the stable orbit of $X$, which is a finite union of $F$-rational orbits. Thus, every $F$-rational orbit is an open subset of $\fc^{-1}(a)$ for some $a\in \A_G(F)$, and if we define a measure on this fibre, we get a measure on every $F$-rational orbit contained in it by restriction. In the Introduction, we have fixed measures on affine spaces \emph{with a choice of a basis}. The Lie algebra $\fg$ is an affine space; it does not come with a canonical choice of a basis, and this choice would not matter much in the discussion below; we can choose an arbitrary $F$-basis $\{e_i\}_{i=1}^n$ of $\fg$ for our purposes. This basis then gives rise to a differential form $\omega_{\fg}=\wedge_{i=1}^n dx_i$ on $\fg$, which gives a measure $|\omega_\fg|$ as in the Introduction. The space $\A_G$ is also an affine space under our assumptions (since at the moment we are working with the Lie algebra); and in our construction it comes with a choice of basis $\{\rho_i\}_{i=1}^r$ as in \S \ref{subsub:Liealg}. We let $\omega_{\A_G}$ be the differential form associated with this basis. Thus we get the quotient measure on each fibre $\fc^{-1}(\fc(X))$: it is the measure associated with the differential form $\omega_{\fc(X)}^\geom$ such that \begin{equation}\label{eq:geom_lie1} \omega_\fg=\omega^{\geom}_{\fc(X)}\wedge \omega_{\A_G}. \end{equation} That is, by definition of $\omega_{\fc(X)}^{\geom}$, for any $f\in C_c^\infty(\fg)$, \begin{equation}\label{eq:geom_lie2} \int_{\fg} f(X)\, d|\omega_{\fg}|=\int_{\A_G}\int_{\fc^{-1}(\fc(X))}f(X)\,d |\omega^{\geom}_{\fc(X)}| \,d |\omega_{\A_G}|. \end{equation} Our immediate goal is to derive the relationship between these two measures on the orbit: $\mu_{T\backslash G}$ and $|\omega_a^{\geom}|$, where $a=\fc(X)$.
First we observe that since both measures are quotient measures of a chosen Haar measure on $G$, their ratio does not depend on the choice of the measure on $G$, as long as it is compatible with the choice of the measure on the Lie algebra; thus at this point, the choice of the measure on $G$ is determined by our choice of the form $\omega_{\fg}$. (Conversely, one often chooses a measure on $G$ first, and this determines $\omega_\fg$.) At the same time, the choice of the measures on the representatives of conjugacy classes of tori affects the measure $\mu_{T\backslash G}$ but not the measure $|\omega_a^{\geom}|$. Here we address two natural choices of such measures: \begin{enumerate}[(i)] \item Let $\omega_G$ be a volume form on $G$, and on each algebraic torus $T$, define the form $\omega_T$ using the characters of the torus as in (\ref{eq:omegaT}). Then we get the quotient measure $\omega_{T\backslash G}$ on each orbit. This is the measure discussed in \cite{langlands-frenkel-ngo}. We discuss this measure in this section. \item Use the measure denoted above by $|\omega^\can|$, associated with the N\'eron model, on each torus. This measure on the orbits is discussed in the next section. \end{enumerate} Thus our first goal is to determine the ratio of $\omega_{T\backslash G}$ to $\omega_a^\geom$, for each $X\in \fg^\rss$ (which determines $T$ and $a$). It turns out that the conversion between these two measures is based on exactly the same calculation as the Weyl integration formula, which we now review. \subsubsection{Weyl integration formula, revisited} We follow \cite[{\S 7, \S14.1.1}]{kottwitz:clay}, and use the same notation (except we continue to use boldface letters to denote varieties). For a torus $T\subset G$, let $W_T={\mathbf N_G(T)}(F)/\bT(F)$ be the relative Weyl group of $T$ (cf. \cite[{\S 7.1}]{kottwitz:clay}). 
Weyl integration formula (which we quote in this form from \cite[\S 7.7]{kottwitz:clay}), for an arbitrary Schwartz-Bruhat function $f\in C_c^\infty(\fg)$, asserts: \begin{equation}\label{eq:Weyl} \int_\fg f(Y) d|\omega_\fg(Y)| =\sum_T \frac 1{|W_T|}\int_{\ft}|D(X)| \int_{T\backslash G} f(\Ad(g^{-1}) X ) d|\omega_{T\backslash G}| \, d|\omega_\ft(X)|, \end{equation} where the sum on the right-hand side is over the representatives of the conjugacy classes of tori in $G$. The proof of this formula relies on a computation of the Jacobian of the map \begin{equation}\label{eq:themap} \begin{aligned}(T\backslash G)\times \ft &\to \fg \\ (g, X) &\mapsto \Ad(g^{-1})X. \end{aligned} \end{equation} This map is $|W_T|:1$, and its Jacobian at $X$ is precisely $|D(X)|$ (see \cite[\S 7.2]{kottwitz:clay} for a beautiful exposition). \subsubsection{The relation between geometric and canonical orbital integrals for the Lie algebra} Let us just na\"ively compare the right-hand side of the Weyl integration formula with the right-hand side of (\ref{eq:geom_lie2}) above (since the left-hand sides are the same). First, note (as already discussed above) that our space $\A_G$ is, up to a set of measure zero, a disjoint union of images of the representatives of the conjugacy classes of tori, and for each torus, Chevalley map $\fc: \ft\to \A_G$ is $|W_T|:1$. Thus, the right-hand side of (\ref{eq:geom_lie2}) would look exactly like the right-hand side of the Weyl integration formula (\ref{eq:Weyl}) if we could replace integration over $\A_G$ with the sum of integrals over the representatives of the conjugacy classes of Cartan subalgebras $\ft=\Lie(T)$ (as $T$ ranges over the conjugacy classes of maximal tori).
The situation is summarized by the commutative diagram: \begin{equation}\label{eq:diagram} \xymatrix{ &(T\backslash G)\times \ft \ar[d] \ar[r] & \fg \ar[d] \\ &\ft \ar[r] & \A_G=\ft^\spl/W } \end{equation} Here the horizontal map on the top is the map (\ref{eq:themap}); this map is $|W_T|:1$ and its Jacobian at $(g, X)$ is $|D(X)|$ (see \cite[{\S 7.2}]{kottwitz:clay}). The vertical arrow on the left is projection onto $\ft$; the vertical arrow on the right is Chevalley map $\fc$; and the horizontal arrow at the bottom is $\fc \vert_{\ft}$, which is also $|W_T|:1$. We have the forms $\omega_\fg$ on $\fg$ and $\omega_{\A_G}$ on $\A_G$; let us choose the invariant differential form $\omega_T$ on $T$ defined by the characters of $T$ as in (\ref{eq:omegaT}); we also need an invariant top degree form $\omega_G$ on $G$, which is required to be compatible with $\omega_\fg$ under the exponential map; this requirement determines it uniquely. As discussed above, given $\omega_G$ and $\omega_T$, we get the quotient measure $|\omega_{T\backslash G}|$ that corresponds to a differential form $\omega_{T\backslash G}$ satisfying $\omega_T\wedge \omega_{T\backslash G} =\omega_G$, and a differential form $\omega^{\geom}_{\fc^{-1}(a)}$ on each fibre of the map $\fc:\fg \to \A_G$. Both $\omega_{T\backslash G}$ and $\omega_{\fc^{-1}(a)}^{\geom}$ are generators of the top exterior power of the cotangent bundle of $\bT\backslash \bG$, hence they differ by a constant (which can depend on $a$).
Looking at the top, right and bottom maps in the diagram (\ref{eq:diagram}), respectively, we see that these differential forms are related as follows (the first and third lines follow from the Jacobian formula and the fact that the horizontal maps are $|W_T|:1$; the second line is the definition of $\omega^{\geom}_{\fc^{-1}(a)}$ with $a=\fc(X)$): \begin{equation}\label{eq:measures} \begin{aligned} & |W_T|^{-1} |D(X)| \omega_{T\backslash G}\wedge \omega_{\ft} = \omega_{\fg}\\ & \omega_{\fg}(X)=\omega^{\geom}_{\fc^{-1}(\fc(X))}\wedge \omega_{\A_{G}}(\fc(X)) \\ & |W_T|^{-1} |\Jac(\fc\vert_{\ft})| \omega_{\ft}=\omega_{\A_G}. \end{aligned} \end{equation} We conclude that \begin{equation}\label{eq:jac_chev} |D(X)|\omega_{T\backslash G}(X)= |\Jac(\fc\vert_{\ft}) (X)| \omega^{\geom}_{\fc^{-1}(\fc(X))}, \quad X\in \ft. \end{equation} The Jacobian of the restriction of Chevalley map $\fc \vert_{\ft}$ at $X$ is $\prod_{\alpha\in \Phi^+}\alpha(X)$, \emph{up to a constant in $F^\times$} (see \cite[\S 14.1]{kottwitz:clay}). This constant depends on the choice of coordinates on $\ft$. We use the basis of the character lattice $\{\chi_i\}_{i=1}^r$, as in (\ref{eq:omegaT}), to define the coordinates on $\ft$. With this choice of coordinates, the constant turns out to be $\pm 1$; the sign depends on the ordering of the characters $\chi_i$ and does not affect the resulting measure. The reason for this is that the constant is $1$ for the split torus (this is not trivial; it follows from the argument in \cite[ch.5,\S5]{Bourbaki:Lie}, and the group version of this statement is also proved in \cite{langlands-frenkel-ngo} over $\C$ (see proof of Proposition 3.29, especially (3.33) and (3.34)); the argument holds for any split torus). If $T$ is not split, we can work over an extension $E$ where $T$ splits, and since our coordinate system is precisely the one used for the split torus over $E$, the equality continues to hold. 
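This claim can be tested directly in a rank two example. For $\fg=\fsp_4$ with $X=\diag(x_1,x_2,-x_1,-x_2)$, we may use the coefficients of the characteristic polynomial as coordinates on the base (by the discussion in \S\ref{sub:naive_meas} below, for the symplectic Lie algebra these differ from the fundamental-character coordinates by a change of variables with Jacobian $1$). The following sketch (assuming Python with sympy; an illustration only) confirms that the Jacobian is the product of the positive root values up to sign.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# A regular element of the split Cartan of sp_4: X = diag(x1, x2, -x1, -x2).
# Its characteristic polynomial is s^4 + a1*s^2 + a2, with
a1 = -(x1**2 + x2**2)
a2 = x1**2 * x2**2

# Jacobian of the map (x1, x2) -> (a1, a2):
J = sp.Matrix([[sp.diff(a1, x1), sp.diff(a1, x2)],
               [sp.diff(a2, x1), sp.diff(a2, x2)]]).det()

# Product of the positive root values of sp_4 at X
# (the positive roots take the values x1 - x2, x1 + x2, 2*x1, 2*x2):
pos_roots = (x1 - x2) * (x1 + x2) * (2*x1) * (2*x2)

# The Jacobian equals the product of the positive roots up to the sign -1:
assert sp.simplify(J + pos_roots) == 0
```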
We observe that on the Lie algebra, we have \begin{equation}\label{eq:root_discr} |\prod_{\alpha\in \Phi^+}\alpha(X)| = |D(X)|^{1/2}. \end{equation} Putting the relations (\ref{eq:measures}), (\ref{eq:jac_chev}), and (\ref{eq:root_discr}) together, we obtain the following Proposition. \begin{proposition}\label{prop:cT}(cf. \cite[Proposition 3.29]{langlands-frenkel-ngo}.) Let $\fc:\fg\to \A_G$ be Chevalley map as above; let $X\in \fg$ be a regular semisimple element, let the algebraic torus $T$ be its centralizer, with the Lie algebra $\ft$. Then with the measures defined as above, we have: \begin{equation*}\label{eq: final_liealg} |\omega^{\geom}_{\fc^{-1}(\fc(X))}|=|D(X)|^{1/2} |\omega_{T\backslash G}|. \end{equation*} \end{proposition} We conclude this section with an example illustrating the proposition. \begin{example} Let $\fg=\fsl_2$ and let $\ft=\ft^{\spl}$ be the subalgebra of diagonal matrices. Then we have $\fc:X(t):=\left[\begin{smallmatrix} t & 0\\ 0& -t \end{smallmatrix}\right] \mapsto -t^2$; here the Jacobian is just the derivative (since we are dealing with a function of one variable), so $\Jac(\fc\vert_{\ft})=-2t = -\alpha(X(t))$. Now consider $\ft_E$ -- a non-split Cartan subalgebra corresponding to a quadratic extension $E=F[\sqrt{\epsilon}]$: $\ft_E= \left\{X(t):=\left[\begin{smallmatrix} 0& t\\ \epsilon t & 0 \end{smallmatrix}\right], \ t\in F \right\}$. We have $\fc\vert_{\ft_E} = -\epsilon t^2$, and its Jacobian is $-2\epsilon t =-\sqrt{\epsilon} \alpha(X(t))$ (note that the eigenvalues of our element are $\pm\sqrt{\epsilon} t$). At the same time, on $\ft_E$, the measure $\omega_{T}$ is $\sqrt{\epsilon}dt$ (recall that $\omega_T$ is defined by means of characters of $T$ over the algebraic closure). 
Hence, with this choice of the differential form, we obtain, again, with $a=\fc(X(t))$: $$da = -\sqrt{\epsilon} \alpha(X(t)) dt = -\alpha(X(t)) \omega_T(X(t)).$$ \end{example} \subsection{The simplest group case}\label{sub:group} Let us assume that $\bG$ is semi-simple, split, and simply connected. We are now almost ready to explain the relation (3.31) of \cite{langlands-frenkel-ngo} (see equation (\ref{eq:FLN}) above). The definitions are essentially the same as in the Lie algebra case: \begin{itemize} \item An orbit of a regular semisimple element $\gamma\in G:=\bG(F)$, as a manifold over $F$, can be identified with $T\backslash G$, where $T$ is the centralizer of $\gamma$. As above, if $\omega_G$ is a volume form on $G$ and $\omega_T$ is a volume form on $T$, we get the measure $|\omega_{T\backslash G}|$ on the orbit of $\gamma$. \item The regular fibres of the map $\fc: G\to \A_G$ are stable orbits; each stable orbit of a regular semisimple element is a finite disjoint union of $F$-rational orbits, and thus we get the geometric measure $|\omega_a^{\geom}|$ on each such orbit, by considering the quotient of the measures on $G$ and $\A_G$. \end{itemize} For $\gamma\in G$, let $$\Delta(\gamma):= \rho^{-1}(\gamma)\prod_{\alpha>0}(1-\alpha(\gamma)); \quad \text{thus } |\Delta(\gamma)|^2=|D(\gamma)|.$$ \begin{theorem}(\cite[Relation (3.31)]{langlands-frenkel-ngo}.) Let $\bG$ be a connected semi-simple simply connected group over a local field $F$, and let $\gamma\in G$ be a regular semisimple element.
Then for any $f\in C_c^\infty(G)$, the orbital integrals with respect to the geometric measure on the orbit of $\gamma$, and the measure $\omega_{T\backslash G}$ (which, by definition, is the quotient of the measures $|\omega_G|$ on $G$ and $|\omega_T|$ on $T$, with $\omega_T$ defined by (\ref{eq:omegaT})) are related via: $$ \int_{\ri(\gamma)}f(g)d|\omega_{\fc(\gamma)}^{\geom}(g)|= |\Delta(\gamma)|\int_{T\backslash G}f(\Ad(g^{-1})\gamma)d|\omega_{T\backslash G}(g)|,$$ where on the left, the orbit $\ri(\gamma)$ is thought of as an open subset of the stable orbit $\fc^{-1}(\fc(\gamma))$ and endowed with the geometric measure $|\omega_{\fc(\gamma)}^{\geom}(g)|$ as above. \end{theorem} We first explain two differences from the statement in \cite{langlands-frenkel-ngo}. \begin{remark} Our expression does not (yet) include the factor $L(1, \sigma_{T\backslash G})$ that appears in (3.31) of \cite{langlands-frenkel-ngo}. This factor appears simply because of their definition of the measure $d\bar{g}_v:=\frac{L(1, \sigma_G)}{L(1, \sigma_T)}\omega_{T\backslash G}=L(1, \sigma_{T\backslash G})^{-1}\omega_{T\backslash G}$ which appears on the right-hand side of (3.31). As we shall see in the next section, using the measure $d\bar{g}_v$ ensures that the local orbital integral on the right-hand side is $1$ for almost all places of a given number field (which is desirable for defining the orbital integral globally), and for almost all places this coincides with the orbital integral with respect to the canonical measure. \end{remark} \begin{remark} Note that we stated the theorem as a relation between orbital integrals, whereas in \cite{langlands-frenkel-ngo} it is stated as a relation between \emph{stable} orbital integrals.
Since the measure is a local notion, this is an equivalent statement: in fact, the assertion of the theorem is just that the two measures on the stable orbit (and hence, by restriction, on every rational orbit) are related via $$|\omega_{\fc(\gamma)}^{\geom}(g)| = |\Delta(\gamma)| |\omega_{T\backslash G}(g)|.$$ \subsubsection{Sketch of the proof.} As the measures are defined by differential forms, the calculation is carried out in the exterior power of the cotangent space, and hence it is essentially the same calculation as for the Lie algebra above. The only ingredients that need to be treated slightly differently are the discriminant and the Jacobian of the map from $T$ to $T/W$. Indeed, for $\gamma\in G^\rss(F^\sep)$, we still have the exact sequence of tangent spaces (see \cite[Lemma 26]{langlands-frenkel-ngo}) $$0\to Tan_\gamma(\fc^{-1}(a)) \to Tan_\gamma \bG \to Tan_a(\A_G) \to 0,$$ and by definition, $\omega_{\fc(\gamma)}^\geom\wedge \omega_{\A_G} =\omega_G$; $\omega_G=\omega_T\wedge \omega_{T\backslash G}$. The proof proceeds exactly as for Lie algebras, except the map (\ref{eq:themap}) needs to be replaced with the map \begin{equation}\label{eq:themapgroup} \begin{aligned}(T\backslash G)\times G &\to G \\ (g, \gamma) &\mapsto g^{-1}\gamma g, \end{aligned} \end{equation} and the map $\ft\to \ft^{\spl}/W$ is replaced with the map $T\to \A_G=T^{\spl}/W$. The Jacobian of the first map is the group version of the Weyl discriminant (and fits into the group version of Weyl integration formula in exactly the same way as it did for the Lie algebra): $$|W_T|^{-1} |D(\gamma)| \omega_{T\backslash G}\wedge \omega_{T} = \omega_{G}.$$ Next, we need to relate $\omega_T$ with $\omega_{\A_G}$. \begin{lemma}\label{lem:JacT} (\cite[Proposition 3.29]{langlands-frenkel-ngo}.) Let $\bG$ be a split, semi-simple, simply connected group, and let $T\subset G$ be a maximal torus. Let $\omega_T$ be defined by (\ref{eq:omegaT}).
Then $$|W_T|^{-1}\,|\Delta(\gamma)|\,\omega_T(\gamma) = \omega_{\A_G}(\fc(\gamma)).$$ \end{lemma} \begin{proof} For $T= T^\spl$, this is proved in \cite{langlands-frenkel-ngo}, as well as in \cite{Bourbaki:Lie} (where the field is assumed algebraically closed, but the proof works verbatim for the split torus). Now it remains to consider the restriction of $\fc$ to an arbitrary (not necessarily split) maximal torus. The map $\fc$ on $T$ can be defined as a composition $$\bT\to \bT^\spl\to \A_G=\bT^\spl/W,$$ where the first map is an isomorphism over the algebraic closure of $F$. The pullback of the form $\omega_{T^\spl}$ on $T^\spl$ is precisely the form $\omega_T$ on $T$, and thus the equality remains true. \end{proof} The theorem follows, precisely as in the Lie algebra case. To conclude this section, we compute some examples illustrating the above Lemma (which show that it is substantially non-trivial even for the split torus). \subsubsection{Examples of Jacobians and discriminants on the group} \begin{example} We again start with $\bG=\Sl_2$. Let $\gamma_t= \left[\begin{smallmatrix} t & 0\\ 0& t^{-1} \end{smallmatrix}\right]$. The map $\fc$ on the diagonal torus is given by: $\fc:\left[\begin{smallmatrix} t & 0\\ 0& t^{-1} \end{smallmatrix}\right] \mapsto t+t^{-1}$. Its Jacobian (i.e., the derivative) is $1-t^{-2}$; so we get: $$ \Jac(\fc\vert_{T^{\spl}})(\gamma_t)= 1-t^{-2}=(1-(-\alpha)(\gamma_t))=-t^{-2}(1-\alpha(\gamma_t)). $$ We observe that for $\SL_2$, the half-sum of positive roots is $\rho=\frac12\alpha$, so $\rho(\gamma_t)= t$. However, this is not yet the whole story. We are interested in the ratio between the measure $\omega_T=\frac{dt}t$ on $T$ and the measure $da$ on $\A^1\simeq T/W$, and our map, as above, is given by $a=t+t^{-1}$. We just computed: $da=(1-t^{-2})dt$.
Then we have: $$\frac{dt}{t}=\frac{(1-t^{-2})^{-1}}{t} da = \pm\rho(\gamma_t) \prod_{\alpha>0} (1-\alpha(\gamma_t))^{-1} da = \pm\Delta(\gamma_t)^{-1}\, da.$$ \end{example} It is instructive to do one more, higher rank, example. \begin{example} Let $\bG=\Sp_4$ (defined explicitly as in Example \ref{ex:Weyl_disc} above), and consider the split torus $T=\{\diag(t_1, t_2, t_1^{-1}, t_2^{-1})\mid t_i\in F^\times \}$. Let $\gamma_{t_1, t_2}= \diag(t_1, t_2, t_1^{-1}, t_2^{-1})$. In these coordinates, Steinberg map is given explicitly by the elementary symmetric polynomials: $$ \begin{aligned} &\fc: \gamma_{t_1, t_2} \mapsto (a,b), \\ & a=t_1+t_2+t_1^{-1}+t_2^{-1}, \quad b=t_1t_2+t_2t_1^{-1}+t_1t_2^{-1}+t_1^{-1}t_2^{-1}+2. \end{aligned} $$ \end{example} The Jacobian of this map is (we are skipping the details of a painful calculation) \begin{equation*} \left|\begin{smallmatrix} \frac{\partial a}{\partial t_1} & \frac{\partial b}{\partial t_1}\\ \frac{\partial a}{\partial t_2} & \frac{\partial b}{\partial t_2} \end{smallmatrix} \right| = (1-t_1^{-2})(1-t_2^{-2})(1-t_1^{-1}t_2^{-1})(t_1-t_2), \end{equation*} which we recognize as: $$\Jac (\fc\vert_T)(\gamma_{t_1, t_2})= t_1\prod_{\alpha<0}(1-\alpha(\gamma_{t_1, t_2})).$$ Note the factor of $t_1$ in front (which is not a root value). Thus we obtain: $$\begin{aligned} &dt_1\wedge dt_2 = \pm \prod_{\alpha<0} (1-\alpha(\gamma_{t_1,t_2}))^{-1} t_1^{-1} da\wedge db \\ &= \pm\left( \prod_{\alpha>0} (1-\alpha(\gamma_{t_1, t_2}))^{-1} \right)\rho^2(\gamma_{t_1, t_2}) t_1^{-1} da\wedge db. \end{aligned}$$ The plus-minus sign in front is not important and depends on the ordering of the coordinates. Now, we are interested in the ratio between the invariant measure $\omega_T= \frac{dt_1\wedge dt_2}{t_1 t_2}$ on $T$ and the measure $\omega_{\A_G}=da\wedge db$ on $T/W$.
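The `painful calculation' above is easily delegated to a computer algebra system. The following sketch (assuming Python with sympy; an illustration only) confirms both the closed form of the determinant and its expression through the negative roots.

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')

# Steinberg map for Sp_4 on the split torus diag(t1, t2, 1/t1, 1/t2),
# in elementary-symmetric-polynomial coordinates:
a = t1 + t2 + 1/t1 + 1/t2
b = t1*t2 + t2/t1 + t1/t2 + 1/(t1*t2) + 2

J = sp.Matrix([[sp.diff(a, t1), sp.diff(b, t1)],
               [sp.diff(a, t2), sp.diff(b, t2)]]).det()

# Claimed closed form of the Jacobian:
claimed = (1 - t1**-2) * (1 - t2**-2) * (1 - 1/(t1*t2)) * (t1 - t2)
assert sp.simplify(J - claimed) == 0

# Recognition via roots: the negative roots of Sp_4 take the values
# t2/t1, 1/(t1*t2), 1/t1**2, 1/t2**2 at gamma_{t1,t2}, and
# Jac = t1 * prod_{alpha<0} (1 - alpha(gamma)):
neg = (1 - t2/t1) * (1 - 1/(t1*t2)) * (1 - t1**-2) * (1 - t2**-2)
assert sp.simplify(J - t1*neg) == 0
```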
We note that in this case $\rho$, the half-sum of positive roots, evaluated at $\gamma_{t_1, t_2}$ is $\rho(\gamma_{t_1, t_2})= \left(t_1^2 t_2^2 \frac{t_1}{t_2} (t_1 t_2)\right)^{1/2} = t_1^2 t_2$, and we compute further (here we write $\gamma:=\gamma_{t_1, t_2}$ to avoid notational clutter): \begin{equation*} \begin{aligned} \omega_T & = \frac{dt_1\wedge dt_2}{t_1 t_2} = \frac{\pm\left( \prod_{\alpha>0} (1-\alpha(\gamma))^{-1} \right)\rho^2(\gamma) t_1^{-1}}{t_1 t_2} da\wedge db \\ & = \pm\left( \prod_{\alpha>0} (1-\alpha(\gamma))^{-1} \right)\rho(\gamma) da\wedge db =\pm\Delta(\gamma)^{-1} da\wedge db. \end{aligned} \end{equation*} \subsection{The general case} First, suppose that $\bG$ is a connected split reductive group over $F$, with $\bG^\der$ simply connected. Then if one uses the correct general notion of Steinberg-Hitchin base as defined in \cite{langlands-frenkel-ngo}, all measures are invariant under the action of the centre, and hence relation (3.31) of \cite{langlands-frenkel-ngo} holds in this case as well, with no further proof needed. If $\bG^\der$ is not simply connected, the space we denoted by $\A_{G^\der}$ is no longer an affine space, and one needs to use $z$-extensions. If $\bG$ is not split, we need to consider Galois action on Steinberg-Hitchin base. Both topics are discussed in \cite{langlands-frenkel-ngo} but are beyond the scope of these notes. \subsection{Aside: na\"ive approach for classical groups -- what works and what doesn't}\label{sub:naive_meas} Suppose for a moment that $G\hookrightarrow \GL_n(F)$ is a split classical reductive group. It is tempting (and often done in number theory\footnote{For example, \cite{gekeler03}, \cite{achterwilliams15}, \cite{davidetal15}, \cite{achter-altug-garcia-gordon}, etc. A reader not interested in this type of a calculation can safely skip this section.}) to still try to use the coefficients of the characteristic polynomial to define the maps from $\fg$ and $G$ to $\A_G$.
This works (with further caution discussed below) for the groups of type $A_n$, $C_n$ and $B_n$, but does not quite work for type $D_n$. First, consider Chevalley map on the Lie algebra. If $\fg =\mathfrak{sp}_{2n}$, then the characteristic polynomial of any element $X\in \fg$ has the form: $f_X(t)=t^{2n}+a_1 t^{2n-2}+\dots +a_{n}$. We can define $\fc'_{\fsp}(X)$ to be the tuple of coefficients $(a_1, \dots, a_{n})\in \A^{n}$. The relationship between this map and Chevalley map is determined by the relation between the fundamental representation $\rho_i$ and the $i$-th exterior power of the standard representation $\wedge^i \rho_1$, for $i=1,\dots, n$. For the symplectic Lie algebra, it turns out that $\wedge^i \rho_1$ is a direct sum of $\rho_i$ and some representations of lower highest weights (see e.g. \cite[Theorem 17.5]{fulton-harris:RepresentationTheory}). Hence, the transition matrix between the characters of $\rho_i$ and the characters of $\wedge^i \rho_1$ (i.e., the coefficients of the characteristic polynomial up to sign) is upper-triangular with $1$s on the diagonal. Therefore, we get a measure-preserving isomorphism between the affine space $\A_G$ with coordinates $\tr(\rho_i)$ and the affine space $\A_G'$ with the coordinates given by the coefficients of the characteristic polynomial. This implies that the map $\fc'_{\fsp}$ could be used instead of $\fc_{\fsp}$ in all the calculations, without affecting the results. For the odd orthogonal Lie algebras $\fso_{2n+1}$, the exterior powers $\wedge^i\rho_1$ are irreducible for $i=1, \dots, n$, and for $i=1, \dots, n-1$, coincide with the first $n-1$ fundamental representations; however, the last fundamental representation, the \emph{spin representation}, is not obtained this way (see \cite[Theorem 19.14]{fulton-harris:RepresentationTheory}).
For the even orthogonal Lie algebra $\fso_{2n}$, the representations $\wedge^i\rho_1$ are irreducible for $i=1, \dots, n-1$, and for $i=1, \dots, n-2$, coincide with the first $n-2$ fundamental representations, and $\wedge^n\rho_1$ decomposes as a direct sum of two irreducible representations whose weights are \emph{double} the fundamental weights (see \cite[Theorem 19.2]{fulton-harris:RepresentationTheory}). Nevertheless, for the odd orthogonal Lie algebras, the coefficients of the characteristic polynomial still distinguish the stable conjugacy classes of regular semisimple elements; for the even orthogonal Lie algebras, one needs to add the \emph{Pfaffian}. Passing to Steinberg map and algebraic groups: for the symplectic group, the coefficients of the characteristic polynomial can still be used without affecting any of the measure conversions, since this group is simply connected, and an argument similar to the above argument on the Lie algebra applies. For special orthogonal groups, $\A_G$ is not an affine space since $\bG$ is not simply connected; the coefficients of the characteristic polynomial give a map to an affine space. It seems to be a worthwhile exercise to work out the relationship between these two spaces and their measures; I have not done this calculation. Finally, we discuss the cases $\bG=\GL_n$ and $\bG=\GSp_{2n}$ in some more detail since the latter calculation is needed in \cite{achter-altug-garcia-gordon}. For $\GL_n$, we just map $g$ to the coefficients of its characteristic polynomial. For $\GSp_{2n}$, we can define $\fc^\charp(g)=(a_1, \dots, a_n, \nu(g))$, where $a_1, \dots, a_n$ are the first $n$ non-trivial coefficients of the characteristic polynomial, and $\nu(g)$ is the multiplier (this is ad hoc; one could have used the determinant instead to be consistent with $\GL_n$); the superscript `$\charp$' is to remind us that we are using the characteristic polynomial and distinguish this map from the standard map $\fc$.
The codomain of the map $\fc^\charp$ is the space we call $\A_{G}^\charp$, which is $\A^{n-1}\times\G_m$ if $\bG=\GL_n$, and $\A^n\times \G_m$ if $\bG=\GSp_{2n}$. The restriction of $\fc^\charp$ to $\bG^\der$ (if we forget the $\G_m$-component) coincides with the map $\fc'$ constructed above for $\bG^\der$ (which coincides with $\fc$ if $\bG^\der =\SL_n$). The measure on the base $\A_G^\charp$ in this case should be defined as the product of the measures associated with the form $da$ on the affine space, and $ds/s$ on $\G_m$, where we denote the coordinates on $\A^{k}\times \G_m$ by $(a,s)$ (with $k=n-1$ or $k=n$). With this definition, the resulting measure is, essentially, independent of the specific map used for the last coordinate (for example, in the case of $\GSp_{2n}$, if the determinant instead of the multiplier were mapped to $s$, the measure would just change by the factor $|n|_F$, which is $1$ unless the residue characteristic of $F$ divides $n$; but this caveat is the reason we prefer to work with the multiplier). Let $\omega_a^\charp$ be the form on the fibre $(\fc^\charp)^{-1}(a)$ defined the same way as the form $\omega_a^{\geom}$ in (\ref{eq:geom_lie1}) and \S\ref{sub:group}, but using the map $\fc^\charp$ instead: \begin{equation}\label{eq:charp} \omega_a^\charp \wedge (da\wedge \frac{ds}s) = \omega_G. \end{equation} As before, the forms $\omega_G$, $\omega_T$ and $\omega_{G/T}$ are, of course, invariant under the action of the centre of $G$, but the centre does not even act on $\A_G^\charp$ as a group. Nevertheless, multiplication by scalars still makes sense on this space. To find the relation between the differential forms $\omega_a^\charp$ and $\omega_{T\backslash G}$ on a given orbit, let us work over the algebraic closure of $F$ for a moment.
Over $\bar F$, every element $\gamma\in \bG(\bar F)$ can be written as $\eta_z\gamma'$ with $\eta_z\in \bZ(\bar F)$ and $\gamma'\in \bG^\der(\bar F)$ (defined up to an element of ${\mathbf A}(\bar F)$; we just pick one such pair). For $\eta_z:=z{Id}\in Z$ with $z\in \bar F$, and $g\in \bG^\der(\bar F)$, each coefficient $a_i(\eta_z g)$ of the characteristic polynomial of $\eta_zg$ differs from $a_i(g)$ by a power of $z$. Then the form $\omega_a^\charp$ would have to scale by the power of $z$ as well, to preserve (\ref{eq:charp}). We denote by $O_\gamma^\charp$ the orbital integral of $\gamma$ with respect to the form $\omega_{\fc^\charp(\gamma)}^\charp$, as a distribution on $C_c^\infty(G)$. We compute explicitly the relation between this integral and the integral with respect to $\omega_{T\backslash G}$ for $\GSp_{2n}$. \begin{example} $\bG=\GSp_{2n}$. In this case the scaling factor is $|z|^{S}$, where $S$ is the sum of the degrees (as homogeneous polynomials in the roots) of the first $n$ coefficients of the characteristic polynomial, i.e., $1+2+\dots + n =\frac{n(n+1)}2$. If $\gamma=\eta_z\gamma'$ with $\det(\gamma')=1$ (and $\eta_z\in \bZ(\bar F)$), then $|z|=|\det(\gamma)|^{1/2n}$. We obtain, for $\gamma\in \GSp_{2n}(F)^\rss$: $$ O^\charp_{\gamma}(f)= |D(\gamma)|^{1/2} |\det(\gamma)|^{-\frac{(n+1)}{4}}\int_{G/T}f(g^{-1}\gamma g) \omega_{G/T}.$$ \end{example} \subsection{Summary} To summarize, so far the following choices have been made (we use the same notation as in \cite{langlands-frenkel-ngo} whenever possible): \begin{itemize} \item The measure $|dx|$ on $F$, such that the volume of $\ri_F$ is $1$. If we are working over a global field $K$, and $F=K_v$ is its completion at a finite place $v$, this measure differs from \cite{langlands-frenkel-ngo} for a finite number of places $v$. For orbital integrals, this discrepancy gives rise to the factor $|\Delta_{K/\Q}|_v^{\dim(G)-\rank(G)}$ (independent of the element) at each place $v$. 
\item An invariant differential form $\omega_G$ on $\bG$; it appears on both sides and does not affect the ratio between measures. \item For an algebraic torus $\bT$, a choice of a $\Z$-basis $\{\chi_1, \dots, \chi_r\}$ of $X^\ast(\bT)$. This choice does not affect anything. \end{itemize} Given a regular semisimple element $\gamma\in G$, with $T=C_G(\gamma)$, the following differential forms and measures have been constructed from these choices: \begin{itemize} \item $\omega_T=d\chi_1\wedge\dots \wedge d\chi_r$; \item $\omega_{T\backslash G}$ (which should be thought of as a measure on the orbit of $\gamma$, with $T=C_G(\gamma)$) satisfying $\omega_G=\omega_T\wedge \omega_{T\backslash G}$. (Note that the centralizers of stably conjugate elements are isomorphic as algebraic tori over $F$, so one can also think of $\omega_{T\backslash G}$ as a form on the \emph{stable} orbit of $\gamma$.) \item $\omega_a$, also on the stable orbit of $\gamma$, with $a=\fc(\gamma)$, satisfying $\omega_{\A_G}\wedge \omega_a=\omega_G$. \item In \cite{langlands-frenkel-ngo}, there is a renormalized measure $d\bar g_v:= \frac{L(1, \sigma_G)}{L(1, \sigma_T)} \omega_{T\backslash G}$. \end{itemize} Recall the notation $D(\gamma)$ for the Weyl discriminant of $\gamma\in G^\rss$, $|\Delta(\gamma)|=|D(\gamma)|^{1/2}$, as well as the definition of Artin $L$-factor, (\ref{eq:L}). The following relations between these measures have been established: \begin{itemize} \item For $\gamma\in G^\rss$, and $a=\fc(\gamma)$, $\omega_a=|\Delta(\gamma)|\omega_{T\backslash G}$.
\item Consequently, for the measure $d\bar g_v$ defined in \cite[\S3.4 below (3.17)]{langlands-frenkel-ngo} we have: $\omega_a = |\Delta(\gamma)| {L(1, \sigma_{T\backslash G})}d\bar g_v$, where the representation $\sigma_{T\backslash G}$ of the Galois group is, by definition, the quotient of the representation $\sigma_T$ on $X^\ast(\bT)$ by the subrepresentation $\sigma_G$ on the characters of $G$\footnote{The rank of this subrepresentation is the same as the rank of the centre of $G$; if $G$ is a semisimple group, we have $\sigma_{T\backslash G} =\sigma_T$.}, and hence $\left(\frac{L(1, \sigma_G)}{L(1, \sigma_T)}\right)^{-1}$ is precisely $L(1, \sigma_{T\backslash G})$. \end{itemize} This establishes the identity (3.31) in \cite{langlands-frenkel-ngo} (we note that $\mathrm{Orb}(f)$ is defined in \emph{loc.cit.} as the integral with respect to the measure $d\bar g_v$ on the orbit). Now we move on to the discussion of the factor $L(1, \sigma_{T\backslash G})$ and the relationship with the \emph{canonical measures} in the sense of Gross. \section{Orbital integrals: from differential forms to `canonical measures'}\label{sec:canon} So far, we have worked with measures coming from differential forms, as summarized above. However, in many parts of the literature the so-called \emph{canonical} measures are used. They are defined by singling out a \emph{canonical} subgroup, and then normalizing the measure so that the volume of this subgroup is $1$. This introduces the following factors: \begin{itemize} \item By definition of the canonical measure, for a torus $\bT$, $$\mu_T^{\can}=\frac1{\vol_{\omega^{\can}}(\cT^0)}|\omega^{\can}|, $$ where $\omega^{\can}$ is the so-called \emph{canonical} invariant volume form (discussed briefly in \S\ref{subsub:Neron} above; the details of the definition are not important here). By Theorem \ref{thm:torus} above, $$\vol_{\omega^{\can}}(\cT^0)= L(1, \sigma_T)^{-1}.
$$ Hence, on $\bT$, we have \begin{equation}\label{eq:can} \omega^{\can}=\frac{\vol_{\omega^{\can}}(\cT^0)}{\vol_{\omega_T}(\cT^0)}\omega_T. \end{equation} \item Recall that $\omega_G=\omega_G^{\can}$ since we are assuming $\bG$ is split; this is also true more generally for $\bG$ unramified (and in any case, the choice of the form on $\bG$ matters much less than the choice of the form on $\bT$, as discussed above). Therefore, on the orbit of $\gamma$, we have: $$ \omega^{\can}_{T\backslash G} = \frac{\vol_{\omega_T}(\cT^0)}{\vol_{\omega^{\can}}(\cT^0)}\omega_{T\backslash G},$$ \text{ and } \begin{equation}\label{eq:quot} \begin{aligned} &\mu^{\can}_{T\backslash G} = \frac{\mu_G^\can}{\mu_T^\can} = \frac{{\frac1{\vol_{\omega_G}(G^0)}}{|\omega_G|}}{{\frac1{\vol_{\omega^\can}(\cT^0)}}{|\omega^\can|}} =\frac{\vol_{\omega^\can}(\cT^0)}{\vol_{\omega_G}(G^0)} \frac{|\omega_G|}{|\omega^\can|}\\ &=\frac{\vol_{\omega^\can}(\cT^0)}{\vol_{\omega_G}(G^0)} \frac{|\omega_G|}{\frac{\vol_{\omega^\can}(\cT^0)}{\vol_{\omega_T}(\cT^0)}|\omega_T|} =\frac{\vol_{\omega_T}(\cT^0)}{\vol_{\omega_G}(G^0)}\frac{|\omega_G|}{|\omega_T|} = \frac{\vol_{\omega_T}(\cT^0)}{\vol_{\omega_G}(G^0)}|\omega_{T\backslash G}|. \end{aligned} \end{equation} When $\bT$ splits over an unramified extension, by Theorem \ref{thm:torus} above, $\vol_{\omega_T}(\cT^0)=L(1, \sigma_T)^{-1}$. Thus at almost all places $v$, the measure $d\bar g_v$ is related to the canonical measure on the orbit by: \begin{equation}\label{eq:dgbar} d\bar g_v= L(1, \sigma_G)\vol_{\omega_G}(G^0)\mu^{\can}_{T\backslash G}.
\end{equation} Combining (\ref{eq:quot}) with the relation $\omega_a=\pm \Delta(\gamma) \omega_{T\backslash G}$, we also obtain the relation between the geometric measure and canonical measure (for all places): \begin{equation}\label{eq:canon_to_geom} |\omega_a| = |\Delta(\gamma)| \frac{1}{\vol_{\omega_T}(\cT^0)} \mu^{\can}_{T\backslash G}, \end{equation} where the factor $\vol_{\omega_G}(G^0)$ disappears since the same measure on $\bG$ needs to be used on both sides when defining $\omega_a$. We recall that the factor ${\vol_{\omega_T}(\cT^0)}$ is discussed above in \S \ref{subsub:general}. \end{itemize} \subsection{Example: $\GL_2$}\label{sub:GL2} For $\bG=\GL_2$, we can make everything completely explicit. The orbital integrals of spherical functions with respect to \emph{canonical} measure are computed, for example, in \cite[\S 5]{kottwitz:clay}. We combine this computation with our conversion factors to obtain the integrals with respect to the geometric measure. We observe that the result agrees with the formula (3.6) of \cite{Langlands:2013aa}. In \cite[\S 5]{kottwitz:clay} the orbital integrals are computed using the reduced building (i.e. the tree) for $\GL_2$, and expressed in terms of the integer $d_\gamma$ (for $\gamma\in G(F)^\rss$). The number $d_\gamma$ is defined in terms of the valuations of the eigenvalues of $\gamma$, see the top of p.415 for the split case, p.417 for the unramified case, and (5.9.9) for the ramified case. In fact, we have $$ |D(\gamma)|=\begin{cases}q^{-2d_\gamma}, \quad \gamma \text{ split or unramified},\\ q^{-2d_\gamma-1}, \quad \gamma \text{ ramified}. \end{cases} $$ (This is the definition in the split and unramified cases and an easy exercise in the ramified case.) Here we only look at the simplest orbital integral of $f_0$, the characteristic function of the maximal compact subgroup $G_0=\GL_2(\ri_F)$. 
\begin{itemize} \item If $\gamma$ is split over $F$, then, from formula (5.8.4) \emph{loc.cit.}: \begin{equation}\label{eq:split} O_{\gamma}(f_0)= q^{d_\gamma} = |D(\gamma)|^{-1/2} \end{equation} \item If $\gamma$ is elliptic (which, in $\GL_2^\rss$, is the same as not split), then $O_\gamma(f_0)=|V^\gamma|$, the cardinality of the set of fixed points of the action of $\gamma$ on the building; see formula (5.9.3). Note that here the right-hand side does not depend on the choice of the measures on $G$ and on the centralizer of $\gamma$ (which we denote by $T=G_\gamma=C_G(\gamma)$ to consolidate notation with this part of \cite{kottwitz:clay}). Thus, there is a unique choice of measures for which this equality is true. This equality is explained in \S 3.4 of \emph{loc.cit.}; see also the explanation just above (5.9.1). In fact, for elliptic $\gamma$, one has $$\vol(Z\backslash G_\gamma)O_\gamma(f_0)=|V^\gamma|,$$ where on the left the volume and the orbital integral are taken with respect to the same choice of the measure on $G_\gamma$, and the measure on $G$ that gives $G_0$ volume $1$ (note that in this formula both sides are independent of the choice of the measure on $G_\gamma$). Thus the measure on $G_\gamma$ that makes (5.9.3) work is precisely the measure such that $\vol(Z\backslash G_\gamma)=1$. Plugging in the calculations of $|V^\gamma|$ from \emph{loc.cit.}, in the two remaining cases we obtain: \item If $\gamma$ is unramified (5.9.7): \begin{equation}\label{eq:unram} O_{\gamma}(f_0)= 1+(q+1)\frac{q^{d_\gamma}-1}{q-1}. \end{equation} \item If $\gamma$ is ramified (5.9.10): \begin{equation}\label{eq:ram} O_{\gamma}(f_0)= 2\frac{q^{d_\gamma+1}-1}{q-1}. \end{equation} \end{itemize} Assume as usual that $p\neq 2$. Suppose we started with the measure on $G_\gamma$ that gave volume $1$ to its maximal compact subgroup, and the measure on $Z$ such that the volume of $Z\cap G_0$ is $1$.
In the unramified case, the map from $T^c$ to $Z\backslash T$ is surjective, and this choice of measures gives the quotient $Z\backslash T$ volume $1$. In the ramified case, the image of $T^c$ in $Z\backslash T$ has index $2$, and thus the volume of $Z\backslash T$ we get from this natural measure on $T$ is not $1$ but $2$. The upshot is that in the ramified case, the measure giving the volume $1$ to $Z\backslash G_\gamma$ does \emph{not} come from a natural measure on $G_\gamma$. Thus, combining the relation (\ref{eq:canon_to_geom}) with (\ref{eq:split}), (\ref{eq:unram}), and (\ref{eq:ram}), respectively, and using (\ref{eq:res_quadr}), which applies since for all tori in $\GL_2$, $T^c=\cT^0$, we obtain the integrals with respect to the geometric measure: \begin{equation}\label{eq:oigl2} \begin{aligned} & O_\gamma^\geom(f_0) = \frac{|D(\gamma)|^{1/2}}{\vol_{\omega_T}(T^c)} O_{\gamma}(f_0)\\ & =\begin{cases} &(1-\frac{1}{q})^{-2} \quad \gamma \text{ is hyperbolic} \\ &\frac{q^2}{(q^2-1)}\left(1+(q+1)\frac{q^{d_\gamma}-1}{q-1}\right) q^{-d_\gamma} \quad \gamma \text{ is unramified elliptic}\\ &\frac{q\sqrt{q}}{q-1} \left(\frac{q^{d_\gamma+1}-1}{q-1}\right) q^{-d_\gamma-1/2} \quad \gamma \text{ is ramified elliptic} \end{cases}\\ & = \left(1-\frac1q\right)^{-2} \begin{cases} & 1 \quad \gamma \text{ is hyperbolic} \\ & 1 - \frac{2}{q+1}q^{-d_\gamma} \quad \gamma \text{ is unramified elliptic}\\ &1 - q^{-(d_\gamma+1)} \quad \gamma \text{ is ramified elliptic} \end{cases} \end{aligned} \end{equation} These formulas agree with \cite[(2.2.10)]{Langlands:2013aa}, if one multiplies our results by $\vol_{\omega_G}(G_0)= (1-\frac1q)^2(1+\frac 1q)$, as required by the relation (\ref{eq:dgbar}). \subsection{The next step} In \cite{Langlands:2013aa}, Langlands works out Poisson summation on the geometric side of the stable Trace Formula for $\SL_2$. 
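Incidentally, the algebraic simplifications performed in the last step of (\ref{eq:oigl2}) above are elementary but easy to get wrong, and can be verified symbolically. In the following sketch (assuming Python with sympy; an illustration only) the formal variable $Q$ stands for $q^{d_\gamma}$.

```python
import sympy as sp

q, Q = sp.symbols('q Q', positive=True)  # Q plays the role of q**(d_gamma)

# Unramified elliptic case: middle vs final expression in eq. (oigl2)
middle_unram = q**2/(q**2 - 1) * (1 + (q + 1)*(Q - 1)/(q - 1)) / Q
final_unram = (1 - 1/q)**(-2) * (1 - 2/((q + 1)*Q))
assert sp.simplify(middle_unram - final_unram) == 0

# Ramified elliptic case: note q**(-d_gamma - 1/2) = 1/(Q*sqrt(q))
middle_ram = (q*sp.sqrt(q)/(q - 1)) * ((q*Q - 1)/(q - 1)) / (Q*sp.sqrt(q))
final_ram = (1 - 1/q)**(-2) * (1 - 1/(q*Q))
assert sp.simplify(middle_ram - final_ram) == 0
```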
Roughly speaking, the Poisson summation formula relates the sum of the values of a smooth function over a lattice in a vector space to the sum of the values of its Fourier transform over the dual lattice. Here the space is the set of ad\`elic points of the Steinberg-Hitchin base for $\SL_2$, which is just the affine line. The lattice in it is the image of the diagonal embedding of the base field (we can take $\Q$ for simplicity). The geometric side of the Trace Formula contains a sum over the conjugacy classes of elliptic elements $\gamma\in \SL_2(\Q)$, which corresponds to a sum over $\Q$ in the Steinberg-Hitchin base. Thus, at least for the elliptic part, it is tempting to take the function to be a stable orbital integral (i.e., the integral of some fixed test function over a fibre $\fc^{-1}(a)$ of the Steinberg map, as a function of $a$), and apply Poisson summation. However, for that the function needs to satisfy some smoothness assumptions. We can at least make some preliminary remarks about how far our function is from being smooth at every finite place. If we take $\gamma\in \SL_2(\Q_p)$, the $\GL_2(\Q_p)$-orbital integral computed above is the \emph{stable} orbital integral of $\gamma$. All along we have been assuming that $\gamma$ is a regular semisimple element. It is well-known that the singularities of orbital integrals occur only at non-regular elements (and we will see this explicitly in a moment, in this example). More precisely, it is a result of Harish-Chandra that for a given test function $f$, when a measure of the form $|\omega_{G/T}|$ is used on each regular semisimple orbit, the orbital integral $\gamma\mapsto O_\gamma(f)$ is a smooth (i.e., locally constant) function on the open set $G^\rss$ of regular semisimple elements. This function is not bounded as $\gamma$ approaches a non-regular element; however, its growth is controlled by $|D(\gamma)|^{-1/2}$.
Specifically, Harish-Chandra proved that (still with $f$ fixed), the so-called \emph{normalized orbital integral}, namely, the function $ \gamma\mapsto |D(\gamma)|^{1/2}O_\gamma(f)$ is bounded on compact subsets of $G$, and locally integrable on $G$. We note that since $D(\gamma)$ vanishes at non-regular elements, this normalized orbital integral is also zero off the regular set. Thus, the normalized orbital integral, as a function of $\gamma$ (for a given test function $f$), is locally constant on $G^\rss$, zero on non-regular semisimple elements, and locally bounded on $G$. However, this does not imply that it is continuous on $G$. Indeed, while it is locally constant on the set of regular semisimple elements, as $\gamma$ approaches a non-regular element, the neighbourhoods of constancy get smaller; at a non-regular element $\gamma_0$ itself this function is zero since $D(\gamma_0)=0$; by Harish-Chandra's theorem this function is bounded on any compact neighbourhood of $\gamma_0$; however, nothing says that it is continuous at $\gamma_0$: without a careful choice of measures, it will have ``jumps''. As we shall see in our $\SL_2$-example, the extension of the normalized orbital integral by zero to non-regular elements does not actually give a continuous function on $G$; however, when the geometric measure is chosen, one gets a function on the Steinberg-Hitchin base with just a removable discontinuity. In $\SL_2$, we have just two non-regular semisimple elements, namely, $\pm {\mathrm{Id}}$. Their images under the Steinberg map $\fc_{SL_2}:\SL_2 \to \A^1$ are $\pm 2$. Fix $p$ (for now, $p\neq 2$) and consider, for example, a neighbourhood of the point $2\in \A_{\SL_2}(\Q_p)=\A^1(\Q_p)$. It consists of the images of split, ramified, and unramified elements with sufficiently large $d_\gamma$ (the split/ramified/unramified type is determined by the discriminant of the characteristic polynomial of a given element, as discussed above in Example \ref{ex:gl2}).
The formula (\ref{eq:oigl2}) shows that as $d_\gamma\to \infty$ (i.e., as $\gamma$ approaches $\pm \mathrm{Id}$), the stable orbital integral of $f_0$ on the orbit of $\gamma$ with respect to the geometric measure gives a \emph{continuous} function on $\A^1$, with value $(1-q^{-1})^{-2}$ at $a=2\in \A_{\SL_2}(\Q_p)$. (This, of course, cannot be said about the orbital integrals with respect to the canonical measure, as they get large, of the size $q^{d_\gamma}$; as $|D(\gamma)|^{1/2} \asymp q^{-d_\gamma}$, we see the confirmation of Harish-Chandra's boundedness result; but still the function $\gamma\mapsto |D(\gamma)|^{1/2}O^\can(\gamma)$ has ``jumps'' at $\pm \mathrm{Id}$; once we make the adjustments by the volumes of the maximal compact subgroups of the corresponding tori, it becomes continuous). This continuity result is one of the insights of \cite{Langlands:2013aa}. However, as we see explicitly from (\ref{eq:oigl2}), this function is continuous but not smooth (i.e., not constant on any neighbourhood of $a=\pm 2$); and so far this is just the description of the situation at a single place, whereas ultimately we will need a global Poisson summation formula. This causes some of the technical difficulties discussed in Altug's lectures. \section{Global volumes}\label{sec:global} \subsection{The analytic class number formula for an imaginary quadratic field}\label{sub:an_cnf} Here we recast the analytic class number formula for an imaginary quadratic field $K$ as a volume computation, using the above ideas. It was observed by Ono, \cite{ono_tamagawa_tori} (see also \cite{shyr77}), that the analytic class number formula in this case amounts to the fact that the Tamagawa number $\tau(\bT)$ of the torus $\bT=\res_{K/\Q}\G_m$ equals $1$. We will assume that $\tau(\bT)=1$ and derive the analytic class number formula for $K$ from this fact.
This also serves as preparation for \S\ref{sub:orb_glob} where the same volume term combines with an orbital integral for an interesting result. The analytic class number formula for a general number field is the relation \begin{equation}\label{eq:cnf} \lim_{s\to 1^+}(s-1)\zeta_K(s) = \frac{2^{r+t}\pi^t R_K h_K}{w_K |\Delta_K|^{1/2}}, \end{equation} where: $\zeta_K$ is the Dedekind zeta-function of $K$, $R_K$ is the regulator (we will not need it in this note so we skip the definition), $\Delta_K$ is the discriminant of $K$, $h_K$ is the class number, $w_K$ is the number of roots of $1$ in $K$, $r$ is the number of real embeddings, and $2t$ is the number of complex embeddings of $K$. Let us consider an imaginary quadratic field $K=\Q(\sqrt{-D})$ (with $D>0$); denote its ring of integers by $\ri$. In this case, we have $r=0$, $t=1$, the regulator $R_K$ is automatically equal to $1$, and the left-hand side equals the value at $s=1$ of $\frac{\zeta_K(s)}{\zeta(s)}=L(1, \chi_K)$ -- the value (in the sense of a conditionally convergent product) of the Dirichlet $L$-function $L(1, \chi_K)$ \footnote{This equality is the simplest case of the correspondence between Artin and Hecke $L$-functions.}. Here $\zeta(s)$ is the Riemann zeta-function, and $\chi_K$ is the Dirichlet character associated with $K$: \begin{equation}\label{eq:dirichlet_char} \chi_K(p)=\begin{cases} 1 &\quad p \text{ splits in } K\\ -1 &\quad p \text{ is inert in } K\\ 0 &\quad p \text{ ramifies in } K \end{cases}. \end{equation} Thus, for an imaginary quadratic field $K$ the analytic class number formula reduces to: \begin{equation}\label{eq:quadr_class} L(1, \chi_K)=\frac{2\pi h_K}{w_K\sqrt{|\Delta_K|}} . \end{equation} Our goal is to prove this relation by using only the known facts about algebraic groups and the measure conversions discussed above. The algebraic group in question here is just the torus $\bT=\res_{K/\Q}\G_m$. Let $\A_{\mathrm{fin}}$ be the ring of finite ad\`eles of $\Q$, and let $\A_{K,\mathrm{fin}}=\A_{\mathrm{fin}}\otimes_\Q K$ denote the finite ad\`eles of $K$.
In general $K^\times$ embeds (diagonally) in $\A_K^\times$ with discrete image; for $K$ imaginary quadratic, the image of the embedding $K^\times\hookrightarrow \left(\A_{K,\mathrm{fin}}\right)^\times$ is still discrete (in fact, this is true only when $K=\Q$ or is imaginary quadratic, see e.g.,\cite{milne:ClassFieldTheory}). We know (weak approximation, see e.g., \cite{platonov-rapinchuk:AlgGroupsAndNT}) that for $\Q$, $$\Q^\times \big\backslash \left(\A_{\mathrm{fin}}\right)^\times \big/ \prod_p \Z_p^\times =\{1\}.$$ Consider the analogous double quotient for $K$: $K^\times \big\backslash \left(\A_{K,\mathrm{fin}}\right)^\times \big/ \prod_{v\nmid\infty} (\ri_v)^\times$, which, roughly speaking, should measure the size of the class group of $K$. The reason this is not exactly the class group is the intersection of the image of $K^\times$ with the compact subgroup $\prod_{v\nmid\infty} (\ri_v)^\times$. More precisely, we have the exact sequence: \begin{equation}\label{eq:exseq_ideles} \xymatrix{ 1 \ar[r] & \left(K^\times\cap \prod_{v\nmid\infty} (\ri_v)^\times\right)\big\backslash \prod_{v\nmid\infty} (\ri_v)^\times \ar[r]& K^\times \big\backslash \left(\A_{K,\mathrm{fin}}\right)^\times \ar[r]& \mathrm{Cl}(K) \ar[r] & 1, } \end{equation} where $\mathrm{Cl}(K)$ is the class group of $K$. The group $K^\times \cap \prod_{v\nmid\infty} (\ri_v)^\times$ is precisely the group $\mu_K$ of roots of $1$ in $K$ (the elements of $K^\times$ that are units at every finite place). The key point is that \emph{if we normalize the volume of the group of units $\ri_v^\times$ to be $1$ at every place, and call this measure $\nu_T$} \footnote{An important coincidence that happens for our torus $T$, because it is obtained from $\G_m$ by restriction of scalars, is that the measure $\nu_T$ coincides with the canonical measure $\mu_T^\can$ at every finite place, as discussed above in the point (1) of \S \ref{subsub:general}. See \cite{shyr77} for the general situation.}, then we get from the above exact sequence: \begin{equation}\label{eq:Tama1} \vol\left(K^\times \big\backslash \left(\A_{K,\mathrm{fin}}\right)^\times \big/ \prod_{v\nmid\infty} (\ri_v)^\times\right) = \frac{h_K}{w_K}. \end{equation} We will assume as fact that the Tamagawa number of $\bT$ is $1$ (this is so because $\bT$ is obtained from $\G_m$ by Weil restriction of scalars, as briefly discussed below).
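Before carrying out the proof, the target relation (\ref{eq:quadr_class}) can be illustrated numerically. The sketch below (an addition, not part of the original text; the standard facts $h_K=1$, $w_K=4$, $|\Delta_K|=4$ for $K=\Q(i)$, and $h_K=1$, $w_K=6$, $|\Delta_K|=3$ for $K=\Q(\sqrt{-3})$ are used) approximates $L(1,\chi_K)$ by a partial sum of its conditionally convergent Dirichlet series:

```python
import math

def L1(chi, N=10 ** 6):
    # partial sum of the conditionally convergent series sum_{n>=1} chi(n)/n
    return sum(chi(n) / n for n in range(1, N + 1))

def chi_gauss(n):
    # Dirichlet character of K = Q(i) (conductor 4)
    return 0 if n % 2 == 0 else (1 if n % 4 == 1 else -1)

def chi_eisenstein(n):
    # Dirichlet character of K = Q(sqrt(-3)) (conductor 3)
    return 0 if n % 3 == 0 else (1 if n % 3 == 1 else -1)

# analytic class number formula: L(1, chi_K) = 2*pi*h_K / (w_K * sqrt(|Delta_K|))
assert abs(L1(chi_gauss) - 2 * math.pi / (4 * math.sqrt(4))) < 1e-4        # h=1, w=4, |Delta|=4
assert abs(L1(chi_eisenstein) - 2 * math.pi / (6 * math.sqrt(3))) < 1e-4   # h=1, w=6, |Delta|=3
```

For $K=\Q(i)$ this is the familiar Leibniz value $L(1,\chi_K)=\pi/4$.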
The analytic class number formula will follow as soon as we relate the volume on the left-hand side of (\ref{eq:Tama1}) to the Tamagawa number $\tau(\bT)$. \subsubsection{Tamagawa measure}\label{subsub:Tama} We briefly recall the definition of the Tamagawa measure, just for the special case of our torus $\bT$. We follow the definition of Ono, \cite{ono:boulder}, \cite{ono:arithmetic_tori}, which has become standard. \footnote{We note that superficially, it differs from the definition that A. Weil uses in \cite{weil:adeles}, in the sense that Ono uses a specific set of convergence factors, and incorporates a global factor that makes his definition independent of the choice of the convergence factors at finitely many places. The resulting global measure, of course, is the same in both sources.} {\bf 1.} Let $(\A_\Q^\times)^1$ be the set of norm-1 ad\`eles (also referred to as \emph{special ideles}): $$(\A_\Q^\times)^1:=\{(x_v)\in \A_\Q^\times: \prod_{v} |x_v|_v =1\}, $$ where the product is over all places of $\Q$. We have the exact sequence \begin{equation}\label{eq:infty1} 1\to (\A_\Q^\times)^1 \to \A_\Q^\times \to \R_{>0}^\times \to 1, \end{equation} where the first map is the inclusion and the second map is the product of absolute values over all places, $x=(x_v)\mapsto \prod_v |x_v|_v$. Moreover, the exact sequence splits and we have a canonical decomposition \begin{equation}\label{eq:adeles_product} \A_\Q^\times \simeq (\A_\Q^\times)^1\times (\R^\times)^0, \end{equation} as a direct product of topological groups, where $(\R^\times)^0$ stands for the connected component $\R_{>0}^\times$ (in the sense of the metric topology) of the group $\R^\times$. We note that the image of the diagonal embedding of $\Q^\times$ into $\A_\Q^\times$ is contained in $(\A_\Q^\times)^1$, and it follows from (\ref{eq:adeles_product}) that the quotient $\Q^\times \backslash (\A_\Q^\times)^1$ is compact.
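The containment of the diagonal image of $\Q^\times$ in $(\A_\Q^\times)^1$ is exactly the product formula $\prod_v |x|_v=1$. Here is a minimal computational illustration (an addition; all names are ad hoc):

```python
from fractions import Fraction

def prime_factors(n):
    # set of prime divisors of a positive integer, by trial division
    out, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

def abs_p(r, p):
    # p-adic absolute value of a nonzero rational number r
    num, den, v = r.numerator, r.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(p) ** -v

def product_over_all_places(r):
    # |r|_infty * prod_p |r|_p; only primes dividing numerator or denominator matter
    result = abs(r)
    for p in prime_factors(abs(r.numerator)) | prime_factors(r.denominator):
        result *= abs_p(r, p)
    return result

for r in (Fraction(-360, 7), Fraction(2024, 2025), Fraction(1, 97)):
    assert product_over_all_places(r) == 1
```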
{\bf 2.} To define the Tamagawa measure on $\bT(\Q)\backslash\bT(\A)$, one needs to start with a volume form $\omega$ on $\bT$ defined over $\Q$. We note that even writing down such a form concretely is not trivial: the natural form $\omega_T$ defined in \S \ref{sub:tori} is not defined over $\Q$. Fortunately, in our special case, the differential form $\omega:=\frac{1}{\sqrt{\Delta_K}}\omega_T$ is defined over $\Q$, see Example \ref{ex:restr} (this easy case can also be verified directly by a calculation similar to that of Example \ref{ex:quadr}). \footnote{See \cite{gan-gross:haar}, Corollary 3.7, for a way to define such a form in general. In our special case of the quadratic field, it is an easy case of the discriminant-conductor formula that the Artin conductor of the motive constructed in \cite{gan-gross:haar} coincides with the discriminant $\Delta_K$, so our definition is a special case of the construction in \cite{gan-gross:haar}.} Recall the local Artin $L$-factors attached to the representation $\sigma_T$ of $\Gal(K/\Q)$ on $X^\ast(\bT)$, see (\ref{eq:L}), and let $L(s, \sigma_T):=\prod_p L_p(s, \sigma_T)$. Let $r_T$ be the multiplicity of the trivial representation as a sub-representation of $\sigma_T$. In our case, $\sigma_T$ is $2$-dimensional, and $r_T=1$; a copy of the trivial representation in $\sigma_T$ is generated by the norm character, which is stable under the action of the Galois group $\Z/2\Z$. Let $$\rho_T:= \lim_{s\to 1+} (s-1)^{r_T}L(s, \sigma_T).$$ We see that in our case, $\rho_T$ coincides with the left-hand-side of (\ref{eq:cnf}). The measure $\mu^{Tama}$ on $\bT(\A)$ is defined as: \begin{equation}\label{eq:def_tama} \mu^{Tama}:=\rho_T^{-1}|\omega_\infty|\prod_p L_p(1, \sigma_T) |\omega|_p, \end{equation} where $\omega_\infty$ is the form induced by $\omega$ on $\bT(\R)$.
We make a note of some subtle features of this definition: \begin{enumerate} \item The definition does not depend on the choice of a volume form (as long as $\omega$ is defined over $\Q$), since any two choices differ by a constant in $\Q$, which does not matter globally thanks to the product formula. \item The convergence factors $L_p(1, \sigma_T)$ are necessary: the product of the measures $|\omega|_p$ by itself does not converge, and $\bT(\A_{\mathrm{fin}})$ would have infinite volume with respect to such a `measure', since by (\ref{eq:res_quadr}), it contains the Euler product for the Riemann zeta function at $1$. There is some choice involved in the definition of the convergence factors (for example, in Weil's definition in \cite{weil:adeles} the convergence factors in this case would be simply $(1-1/p)$, which would be sufficient to achieve convergence of the product measure). As Ono explains in \S 3.5 of \cite{ono:arithmetic_tori}, if one modifies the individual convergence factors by any multipliers whose product converges, it does not affect the final result thanks to the global factor $\rho_T^{-1}$. \end{enumerate} For future use, we denote by $\mu_p$ the measure $L_p(1, \sigma_T) |\omega|_p$ on $\bT(\Q_p)$. {\bf 3.} The Tamagawa number of $\bT$ is, by definition, the volume (with respect to the Tamagawa measure on the quotient, discussed below), of $\bT(\Q)\backslash \bT(\A_\Q)^1$, where $$\bT(\A_\Q)^1=\{(x_v)\in \bT(\A_\Q): \prod_v|\chi(x_v)|_v = 1 \text{ for all }\chi\in X^\ast(\bT) \text{ that are defined over }\Q \}.$$ The group of $\Q$-characters of $\bT$ has rank 1, and is generated by the norm map. Thus, $$\bT(\A_\Q)^1=\{(x_v)\in \bT(\A_\Q): \prod_{v}|N_{K_v/\Q_v}(x_v)|_v = 1\},$$ where the product is over the places of $\Q$. We note that $\bT(\A_\Q)=\A_K^\times$, and we have the exact sequence \begin{equation}\label{eq:T1} 1\to \bT(\A_\Q)^1 \to \A_K^\times \to \R_{>0}^\times \to 1, \end{equation} where the map to $\R_{>0}^\times$ is the product of the local norm maps.
Let $dm$ (using the notation and terminology of \cite{shyr77}) be the measure on $\bT(\A_\Q)^1$ that `matches topologically' in this exact sequence with the measure $\mu^{Tama}$ on $\bT(\A_\Q)=\A_K^\times$ defined above, and the measure $\frac{dt}{t}$ on $\R_{>0}^\times$. That is, $dm$ is the measure on $\bT(\A_\Q)^1$ such that $$\mu^{Tama}= dm\wedge \frac{dt}{t}.$$ Since $\bT(\Q)=K^\times$ is a discrete subgroup of $\bT(\A_\Q)^1$, the measure $dm$ descends to the quotient by this subgroup, and the volume of $\bT(\Q)\backslash \bT(\A_\Q)^1$ with respect to this measure is, by definition, the Tamagawa number $\tau(\bT)$. We note that the exact sequence (\ref{eq:T1}) splits, and we have a group isomorphism $$ K^\times \backslash \A_K^\times \simeq \bT(\Q)\backslash \bT(\A_\Q)^1 \times \R_{>0}^\times.$$ \subsubsection{The proof of the analytic class number formula} Now we are ready to go from (\ref{eq:Tama1}) to the analytic class number formula for $K$; we do it in two steps. {\bf Step 1. The finite places.} Rewriting the relation (\ref{eq:res_quadr}) of Example \ref{ex:quadr} using the notation of this section (and noting that for $p\neq 2$, $|\Delta_K|_p=p^{-1}$ if $p$ ramifies in $K$ and $|\Delta_K|_p=1$ otherwise), we obtain, for $v\vert p$: \begin{equation}\label{eq:classnum1} \vol_{|\omega_T|}(\ri_v^\times)=\left(1-\frac1p\right)L_v(1, \chi_K)^{-1}|\Delta_K|_p^{1/2}. \end{equation} We will see below in \S\ref{sub:two} that this relation holds at $p=2$ as well. Thus, $\vol_{\mu_p}(\ri_v^\times) =1$, and we see explicitly in this example, that our measure $\mu_p$ coincides with the $p$-component of the measure $\nu_T$ used in (\ref{eq:Tama1}) (this is a very special case of \cite{gan-gross:haar}, Corollary 7.3). {\bf Step 2.
The infinite places and putting it together.} We also have the exact sequence (where $S^1\subset \C^\times$ is the unit circle) \begin{equation}\label{eq:infty} 1\to S^1 \to \bT(\A_\Q)^1 \to \left(\A_{K,\mathrm{fin}}\right)^\times \to 1, \end{equation} where the first map is the inclusion into $\A_K$ that maps $z\in S^1$ to the ad\`ele $(1,z)$ trivial at all the finite places, and the second map is the projection onto the finite ad\`eles. Since the image of the diagonal embedding of $K^\times$ intersects the image of $S^1$ in $\bT(\A_\Q)^1$ trivially, (\ref{eq:infty}) yields the exact sequence for the quotients by $K^\times$: \begin{equation}\label{eq:infty2} 1\to S^1 \to \bT(\Q)\backslash\bT(\A_\Q)^1 \to K^\times\big\backslash\left(\A_{K,\mathrm{fin}}\right)^\times \to 1. \end{equation} Now we need to carefully find the component at infinity $dm_\infty$ (which is a measure on $S^1$) of the measure $dm$ defined via the exact sequence (\ref{eq:T1}). First, we choose a convenient basis for the character lattice of $\bT$, in view of this exact sequence: we use the characters $\chi_1(z, \bar z)=z $ and $\eta(z):=z\bar z =|z|^2$. These two characters, which can be thought of as the vectors $(1,0)$ and $(1,1)$ with respect to the `standard' basis of $X^\ast(\bT)$ in Example \ref{ex:quadr}, still form a $\Z$-basis of $X^\ast(\bT)$, and hence we can write $\omega_T=\frac{dz}{z}\wedge \frac{d\eta}{\eta}$. We will also use the coordinates $(z, \eta)$ on $\bT$ for the rest of this calculation. We write every element of $\bT(\A)$ as $a=a_f a_\infty$, where $a_f$ has the infinity component $1$ and $a_\infty=(1,(z, \eta))$ has all the components at the finite places equal to $1$. In this notation, $\bT(\A)^1$ is defined by the condition $|z|^2=|\eta| = \|a_f\|^{-1}$. We write $$\mu^{Tama}=\mu_{\mathrm{fin}}^{Tama}\mu_{\infty}^{Tama},$$ where $\mu_{\mathrm{fin}}^{Tama} = \rho_T^{-1}\prod_p \mu_p$, and $\mu_\infty^{Tama}=|\omega_\infty|$. We recall that by definition, $\omega=\frac{1}{\sqrt{|\Delta_K|}}\omega_T$.
Then by the definition of $dm$, we have $$ dm_\infty \wedge \frac{d\eta}{\eta} = \mu^{Tama}_\infty = \frac{1}{\sqrt{|\Delta_K|}}\frac{da_\infty}{a_\infty}\wedge \frac{d{\eta}}{\eta}, $$ and thus $$dm_\infty = \frac{1}{\sqrt{|\Delta_K|}}\frac{da_{\infty}}{a_{\infty}} = \frac{1}{\sqrt{|\Delta_K|}}\frac{dz}{z}.$$ We have computed above in (\ref{eq:S1}) that the form $dz/z$ gives precisely the arc length $d\theta$ on the unit circle. Thus, the volume of $S^1$ with respect to $dm_\infty$ is $\frac{2\pi}{\sqrt{|\Delta_K|}}$. Finally, we get from (\ref{eq:classnum1}) and (\ref{eq:Tama1}): \begin{equation}\label{eq:cnf_proof} \begin{aligned} 1=\tau(\bT) &= \vol_{dm}\left(\bT(\Q)\backslash \bT(\A_\Q)^1\right) \\ &= \vol_{dm_\infty}(S^1)\, \vol_{\rho_T^{-1}\prod_p\mu_p}\left(K^\times\big\backslash\left(\A_{K,\mathrm{fin}}\right)^\times\right) = \frac{2\pi}{\sqrt{|\Delta_K|}} L(1, \chi_K)^{-1} \frac{h_K}{w_K}, \end{aligned} \end{equation} recovering the analytic class number formula for $K$. We note that the product defining $L(1, \chi_K)$ converges only conditionally. \subsection{What happens at $p=2$}\label{sub:two} We carry out the detailed (elementary but tedious) analysis of the changes one needs to make at $p=2$ in the above calculations, as they apply to an imaginary quadratic extension $K$ of $\Q$, in order to completely justify the global computation, and also point out interesting geometric differences relevant for the norm-one torus of a quadratic extension\footnote{This section can be skipped if the reader is willing to believe that (\ref{eq:classnum1}) holds at $p=2$ as well, and is not interested in the norm-1 torus (which is not used in the sequel).}. We write $K=\Q(\sqrt{d})$, with $d$ a square-free (and in our case, negative) integer $d=-D$. Our main reference for the number theory information is e.g., \cite[\S VI.3]{frohlich-taylor}. \subsubsection{Quadratic extension at $p=2$}\label{subsub:2start} First of all, recall that the ring of integers of $K$ is $$\ri_K=\begin{cases} \Z\left[\frac{\sqrt{d}+1}2 \right], &\quad d\equiv 1 \mod 4,\\ \Z[\sqrt{d}], &\quad d\equiv 2, 3 \mod 4.
\end{cases} $$ We will need the fact that the ring $\ri_K$ is generated over $\Z$ by a root of the monic polynomial $X^2-X+\frac{1-d}4$ in the first case, and of $X^2-d$ in the second case. Therefore the behaviour of the prime $2$ in $K$ depends on the residue of $d$ modulo $8$. Indeed, if $d\equiv 1 \mod 4$, then we need to look at the reduction of the polynomial $X^2-X+\frac{1-d}4$ modulo $2$; we see that it is irreducible over $\F_2$ if $\frac{1-d}4$ is odd, and it factors as $X(X-1)$ if $\frac{1-d}4$ is even. Thus if $d\equiv 5\mod 8$, $K_v$ (where $v\vert 2$) is the unramified quadratic extension of $\Q_2$, while if $d\equiv 1 \mod 8$, the prime $2$ splits in $K$. In the remaining cases, $2$ ramifies: if $d\equiv 3, 7 \mod 8$, the relevant polynomial is $X^2-d$, and its reduction $\mod 2$ factors as $(X-1)^2$; if $d\equiv 2, 6 \mod 8$, then the reduction of our polynomial $\mod 2$ is just $X^2$. \subsubsection{From Dedekind zeta-factor to Dirichlet $L$-factor} While the local $L$-factor at $p=2$ itself looks a bit different from the other primes, the argument relating the local factor of $L(1, \chi)$ to the local factor of the Dedekind zeta-function is the same for all primes (including $2$), when the Dirichlet character associated with $K$ is defined by (\ref{eq:dirichlet_char}). We observe that above, we have just computed the Dirichlet character of $K$ at $2$: \begin{equation}\label{eq:char2} \chi_K(2)=\begin{cases} 1 &\quad d\equiv 1 \mod 8\\ -1 &\quad d\equiv 5 \mod 8 \\ 0 &\quad \text{ otherwise } \end{cases}. \end{equation} By definition, the local factor at $p=2$ (as at any other prime) of the Dedekind zeta-function is $$\zeta_2(s)=\prod_{\mathfrak p\supset (2)}\frac1{1-{\mathrm N}_{K/\Q}(\mathfrak p)^{-s}},$$ where the product is over the prime ideals of $\ri_K$ lying over $2$. It remains to recall that in all cases, for $\mathfrak{p}$ lying over $(p)$, $N_{K/\Q}(\mathfrak p)=p^f$, where $f$ is the residue degree, see e.g.
\cite[II.4]{frohlich-taylor}. \subsubsection{Quadratic extensions and norm-$1$ tori} If $F$ is a local field of residue characteristic $2$, then $|F^\times/(F^\times)^2|=2^{[F:\Q_2]+2}$; in particular, $\Q_2$ has one unramified and $6$ ramified quadratic extensions. (Indeed, we recall that $F^\times \simeq \Z\times \ri_F^\times$, and $|\Z_2^\times/(\Z_2^\times)^2|=4$). In the case $F=\Q_2$ everything can again be computed in an elementary way, and this is what we do in this section. We can list the extensions explicitly using the discussion from \S\ref{subsub:2start}: $\Q_2(\sqrt{5})$ is the unramified extension; $\Q_2(\sqrt{3})$ and $\Q_2(\sqrt{7})$ are the ramified extensions coming from the non-square units, and $\Q_2(\sqrt{2})$, $\Q_2(\sqrt{10})$, $\Q_2(\sqrt{6})$, and $\Q_2(\sqrt{14})$ are the ramified extensions corresponding to the elements $\varpi\epsilon$, where $\varpi=2$ and $\epsilon$ runs over representatives of the square classes of units. We compute the volumes of $\bT(\Q_2)^c$ with respect to the form $\omega_T$ as in \S\ref{sub:tori}, for the tori $\bT=\res_{K/\Q}\G_m$ as well as for the norm-1 tori of the corresponding extensions, for $K= \Q(\sqrt{d})$, with $d\equiv 5,3,2 \mod 8$, to compare the calculation with Examples \ref{ex:quadr} and \ref{ex:norm1}. We recall that the identification $E^\times = (\res_{E/F}\G_m)(F)$ is completely general; and for a quadratic extension $E$, when we think of $(\res_{E/F}\G_m)(F)$ as a torus in $\GL_2(F)$, the determinant in $\GL_2$ corresponds to the norm map $N_{E/F}$ on $E^\times$ under this identification. Till the end of the section, we keep the notation $\bT=\res_{E/\Q_2}\G_m$ (we are now looking locally at $p=2$, so to match the notation of \S\ref{sub:tori}, we have $F=\Q_2$ and $E$ is the completion of $K$ at $p=2$). {\bf 1.
The unramified case, $E=\Q_2(\sqrt{5})$.} We write the elements of $E$ as $x+\varphi y$ where $\varphi=\frac{1+\sqrt{5}}2\in E$ is a root of $X^2-X-1$ and $x, y \in \Q_2$; then $\ri_E=\{x+\varphi y\mid x,y \in \Z_2\}$ (of course, for elements of $E$, the first representation is equivalent to simply writing $x'+y'\sqrt{5}$, but for the ring of integers this would not give the whole ring). The norm map in these coordinates is $$N_{E/\Q_2}(x+\varphi y)=\left(x+\frac{y}2+\frac{\sqrt{5}}2y\right)\left(x+\frac{y}2-\frac{\sqrt{5}}2y\right)= x^2+xy-y^2.$$ This causes small changes to our na\"ive calculations of Example \ref{ex:quadr}. In particular, the pullback to $\bT$ of the differential form $\frac{dx}x$ on $\G_m$ is now (exactly as in (\ref{eq:pullback_res}), with $\varphi'=\frac{1-\sqrt{5}}2$ denoting the Galois conjugate of $\varphi$) \begin{equation}\label{eq:pullback2unram} \begin{aligned} & \omega_T=\frac{d(x+\varphi y)}{x+\varphi y}\wedge\frac{d(x+\varphi' y)}{x + \varphi' y} =\frac{(dx+\varphi dy)\wedge(dx +\varphi' dy)}{x^2+xy- y^2}\\ & = \frac{\varphi'-\varphi}{N_{E/F}(x+ \varphi y)} dx\wedge dy =\frac{-\sqrt{5}}{N_{E/F}(x+ \varphi y)} dx\wedge dy. \end{aligned} \end{equation} Note the absence of the factor $2$, compared with (\ref{eq:pullback_res}), which would have caused trouble here. From the volume to the point-count: since the extension is unramified, this part does not change. In fancy terms, we can say that the so-called standard model over $\Z_2$ for $\bT$, defined by the coordinates $x$ and $y$ that we chose, is smooth. The set of $\F_2$-points of its special fibre (in simple terms, the reduction mod $2$, which is the uniformizer of our unramified extension) is still $(\F_4)^\times$. Thus, the volume formula (\ref{eq:res_quadr}) still holds in this case. 
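The point-count behind the last statement can be checked by brute force. The following sketch (an addition, not in the original) counts the pairs $(x,y)$ modulo $2^k$ on which the norm form $x^2+xy-y^2$ is a unit, and matches the count against the $3=\#(\F_4)^\times$ points of the special fibre:

```python
def unit_norm_count(k):
    # pairs (x, y) mod 2^k whose norm x^2 + x*y - y^2 is a 2-adic unit (i.e. odd)
    m = 2 ** k
    return sum(1 for x in range(m) for y in range(m)
               if (x * x + x * y - y * y) % 2 == 1)

# x^2 + x*y - y^2 is odd exactly when (x, y) is not (0, 0) mod 2, so the count
# is #(F_4)^x * 4^(k-1) = 3 * 4^(k-1), reflecting the smoothness of the model
for k in range(1, 5):
    assert unit_norm_count(k) == 3 * 4 ** (k - 1)
```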
If this extension were obtained as the completion at $2$ of a quadratic extension $K=\Q(\sqrt{d})$ of $\Q$, then $d\equiv 5 \mod 8$, and therefore in this case the discriminant of $K$ is just $d$. Hence, the relation (\ref{eq:classnum1}) holds without any modification. The norm-1 torus of this extension is treated very similarly to the unramified case with $p\neq 2$; and the final answer for its volume is given by the same formula as in the case $p\neq 2$. {\bf 2. Ramified case 1, $E=\Q_2(\sqrt{2})$.} We have $\ri_E=\{x+y\sqrt{2}\mid x,y\in \Z_2\}$, and the norm map is $N_{E/\Q_2}(x+\sqrt{2}y)= x^2-2y^2$ similarly to the $p\neq 2$ case. In this case the calculation of the form $\omega_T$ applies verbatim, so we get the extra factor $\frac12$ in (\ref{eq:ex1}). Specifically, (\ref{eq:ex1}) now becomes: \begin{equation*} \vol_{|\omega_T|}(T^c)=\frac1{2\sqrt{2}}\int_{\{(x, y) \in \Z_2^2:|x^2-2y^2|=1\}} dx dy. \end{equation*} The condition $x^2-2y^2\in \Z_2^\times$ is still equivalent to $x \in \Z_2^\times$, and we obtain that $\vol_{|\omega_T|}(T^c)=\frac{1}{2\sqrt{q}}\frac{q-1}q$; here $q=2$, we are just writing it this way for the ease of comparison with (\ref{eq:res_quadr}). However, the relation (\ref{eq:classnum1}) again holds without modification, since if the completion of $K=\Q(\sqrt d)$ at $2$ is ramified, then $\Delta_K=4d$, and $|\Delta_K|_2^{1/2}$ acquires the extra factor $\frac12$ as well. \begin{example}\label{ex:q22} \emph{The norm-1 torus of $E=\Q_2(\sqrt{2})$:} Unlike the full torus obtained by the restriction of scalars, for the norm-1 subtorus the reduction of the volume computation to counting residue-field points looks very different from Example \ref{ex:norm1}. We include this point-count exercise without a full discussion of its implications for the computation of the volume with respect to $\omega_T$ or $\omega^\can$, to illustrate the difficulties that arise when the reduction $\mod \varpi$ is not smooth.
We write the $2$-adic expansions $x=x_0+2x_1+4x_2+ \ldots,$ $y=y_0+2y_1+4y_2+ \ldots,$ with $x_i, y_i \in \{0,1\}$. Then \begin{equation}\label{eq:2squaring} \begin{aligned} x^2&= (x_0+2x_1+4x_2+ 8x_3+ 16x_4 + \ldots)^2 \\ {}&= x_0^2+4(x_0x_1+x_1^2)+8(x_0x_2)+16(x_2^2+x_1x_2+x_0x_3)\\ {}&+2^5(x_0x_4+x_1x_3)+2^6(x_3^2+x_0x_5+x_1x_4+x_2x_3)+ \ldots. \end{aligned} \end{equation} Thus the condition $x^2-2y^2=1$ becomes a sequence of congruences modulo the successive powers of $2$ (each congruence below is a congruence $\mod 2$). One has to be careful with the carries: since the digits are $0$ or $1$, we have $x_i^2=x_i$, so, for instance, once $x_0=1$, the term $4(x_0x_1+x_1^2)=8x_1$ contributes to the \emph{next} congruence. Keeping track of the carries, we obtain: \begin{equation*} \begin{aligned} &\mod 2: &\quad x_0=1;\\ &\mod 2^2: &\quad y_0=0; \\ &\mod 2^3: &\quad x_0x_1+x_1^2 \equiv 0 \text{ (automatic, with carry } x_1); \\ &\mod 2^4: &\quad x_1 + x_0x_2 -(y_0y_1 +y_1^2) \equiv 0; \\ &\mod 2^5: &\quad x_1(1-y_1)+ x_2^2+x_1x_2+x_0x_3 - y_0y_2 \equiv 0; \\ &\ldots & \quad \ldots, \end{aligned} \end{equation*} where the terms $x_1$ and $x_1(1-y_1)$ are the carries from the preceding congruences. These equations yield: \begin{equation}\label{eq:2digits} \begin{aligned} &x_0=1, &\quad y_0=0,\\ &x_1 \text{ is arbitrary}, &\quad y_1 \text{ is arbitrary},\\ &x_2 = x_1+y_1 \mod 2, &\quad y_2 \text{ is arbitrary},\\ &x_3 = x_2, &\quad y_3 \text{ is arbitrary},\\ &\ldots \end{aligned} \end{equation} (One can check these relations against the exact solutions $(3,2)$, $(17,12)$ and $(99, 70)$ of $x^2-2y^2=1$.) This illustrates that Hensel's Lemma, as expected, starts working once we have a solution $\mod 8$, but \emph{not for solutions $\mod 2$}. In fancier terms, we have the reduction $\mod 8$ map defined on the set of $(\Z_2\times \Z_2)$-solutions of the norm equation $x^2-2y^2=1$. The image of this map is the set $\{(x_0, y_0, x_1, y_1, x_2, y_2)\}$ defined by the first three lines of (\ref{eq:2digits}) inside $\A^6(\F_2)$; we see that it is a $3$-dimensional affine subspace of $\A^6(\F_2)$. The fibre of the reduction $\mod 8$ map over each point in its image is a translate of $(2^3)\subset \Z_2$, so the volume of each fibre is $\frac{1}{2^3}$.
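The digit analysis can be confirmed by brute force (an added check, not in the original). Since Hensel's Lemma applies from level $2^3$ on, the reductions $\mod 8$ of genuine $2$-adic solutions are exactly the reductions of solutions modulo a high power of $2$, while the raw solutions of the congruence $\mod 8$ are strictly more numerous:

```python
def sols(k):
    # solutions of x^2 - 2*y^2 = 1 modulo 2^k
    m = 2 ** k
    return {(x, y) for x in range(m) for y in range(m)
            if (x * x - 2 * y * y - 1) % m == 0}

raw8 = sols(3)                                    # all solutions of the congruence mod 8
image8 = {(x % 8, y % 8) for (x, y) in sols(10)}  # reductions mod 8 of liftable solutions

assert len(raw8) == 16   # every (x odd, y even) solves the congruence mod 8
assert len(image8) == 8  # but only 2^3 of them lift: a 3-dimensional subspace of A^6(F_2)
assert image8 < raw8     # a proper subset: Hensel's Lemma fails below level 2^3
```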
We obtain that the volume of our set of solutions is $\frac{\#\A^3(\F_2)}{2^3}$. Note that this is geometrically quite different from the answer in the ramified case in Example \ref{ex:norm1}, where the image of the reduction $\mod p$ map was two copies of an affine line, and therefore the image of the reduction $\mod p^2$ map then was \emph{two} copies of an affine plane over $\F_p$; and the image of reduction $\mod p^3$ map, inside $\A^6(\F_p)$, was \emph{two} copies of $\A^3(\F_p)$. \end{example} {\bf 3. Ramified case 2, $E=\Q_2(\sqrt{3})$.} Exactly as in the case $E=\Q_2(\sqrt{2})$ considered above, for $\bT=\res_{E/\Q_2}\G_m$, the volume $\vol_{|\omega_T|}(T^c)$ has the extra factor of $\frac12$ compared with the $p\neq 2$ case, and so does the discriminant (basically, the calculation of the volume of $T^c$ is not sensitive to \emph{which} ramified extension of $\Q_2$ we are considering). \begin{example} \emph{The norm-1 torus of $E=\Q_2(\sqrt{3})$.} The calculation is also similar to Example \ref{ex:q22} above: we use (\ref{eq:2squaring}) to count solutions of the equation $x^2-3y^2=1$, but the geometry looks slightly different. As above, from (\ref{eq:2squaring}) we get: \begin{equation*} \begin{aligned} &\mod 2: &\quad x_0^2+y_0^2=1;\\ &\mod 2^2: &\quad x_0^2-3y_0^2\equiv 1 \mod 4; \\ &\mod 2^3: &\quad x_0x_1+x_1^2 - 3(y_0y_1+y_1^2)\equiv 0 \mod 2; \\ \end{aligned} \end{equation*} From the first two congruences, there are two possibilities: $(x_0, y_0)=(1,0)$ and $(x_0, y_0)=(0,1)$ (both actually occur: for instance, $(x,y)=(1,0)$ and $(x,y)=(2,1)$ are exact solutions of $x^2-3y^2=1$); so, as in the ramified case with $p\neq 2$, the reduction has two components. We carry out the computation for the branch $x_0=1$, $y_0=0$; the other branch is treated similarly. Then the third equation becomes $x_1+x_1^2-3y_1^2\equiv 0 \mod 2$, which allows for arbitrary $x_1$ and makes $y_1=0$. Next, from the truncation $\mod 2^4$, we get (plugging all this information in): $$ \quad 4(x_1+x_1^2) +8 (x_2-3\cdot 0\cdot y_2) \equiv 0 \mod 16.
$$ If $x_1=0$, we get $x_2=0$; if $x_1=1$, then $x_2=1$. Summing it up, so far we have: \begin{equation}\label{eq:truncation_q2rt3} \begin{aligned} &x_0=1, &\quad y_0=0,\\ &x_1 \text{ is arbitrary}, &\quad y_1 =0,\\ &x_2 = x_1, &\quad y_2 \text{ is arbitrary}.\\ \end{aligned} \end{equation} Note that the pattern so far has been something we have not seen before: the congruences modulo an odd power of $2$ might have some `carry-over' to the next power; then the congruence modulo the next even power forces the truncated expression to become literally zero with no carry-over. Continuing with one more step: \begin{equation*} \begin{aligned} &\mod 2^5: & {}\\ &{} \quad x_2^2+x_1x_2+x_0x_3 - 3(y_2^2+y_1y_2+y_0y_3) \equiv 0 \mod 2; \\ &\mod 2^6: &{}\\ &{}\quad (x_2^2+x_1x_2+x_0x_3 - 3(y_2^2+y_1y_2+y_0y_3)) + (x_0x_4+x_1x_3 - 3(y_0y_4+y_1y_3)) \equiv 0, \end{aligned} \end{equation*} where the last congruence is $\mod 4$. As we plug in what we already know about the first terms, the first equation becomes $x_3+y_2^2\equiv 0 \mod 2$, so $x_3=y_2$. Plugging this into the second equation (and ignoring the squares), we get $(2x_2-2x_3)+(x_4+x_3)\equiv 0 \mod 4$, which determines $x_4$ uniquely. Again we notice that by now Hensel's Lemma has started working as expected: at every step we get one linear relation in two unknown parameters, so the fibre over each truncated solution is an affine line over $\F_2$. To summarize, (\ref{eq:truncation_q2rt3}) says that the image of the reduction $\mod 8$ map is a plane in $\A^6(\F_2)$, and thus for the volume of $T^c$, we get $\frac{\#\A^2(\F_2)}{2^3}$. \end{example} Now we are ready to return to the global calculations. \subsection{Global orbital integrals in $\GL_2$}\label{sub:orb_glob} In this subsection we will use the volumes of the sets $\left(\mathcal{O}_K\otimes \Z_p\right)^\times$ that we just obtained. Let $\gamma\in \GL_2(\Q)$ be a regular semisimple element, such that its centralizer $T$ is non-split over $\R$. Let $K$ be the quadratic extension of $\Q$ generated by the eigenvalues of $\gamma$.
Then $T=\bT(\Q)$ with $\bT=\res_{K/\Q} \G_m$, as in \S\ref{sub:an_cnf}. Let $f=\otimes_p f_p$, with $f_p$ equal to ${\bf 1}_{\GL_2(\Z_p)}$, the characteristic function of $\GL_2(\Z_p)$, for almost all $p$, and let $O^{\mathrm{can}}_{\gamma}(f)=\prod_p O^{\mathrm{can}}_{\gamma}(f_p)$ denote the global orbital integral, taken with respect to the canonical measure at every place. Taking the product of the relations (\ref{eq:canon_to_geom}) at every prime $p$, and applying the product formula to the absolute values $\prod_p|D(\gamma)|_p$ and $\prod_p|\Delta_K|_p$, we obtain: \begin{equation}\label{eq:ofin} O^{\mathrm{can}}_{\gamma}(f) = \frac{|D(\gamma)|^{1/2}}{|\Delta_K|^{1/2}\,L(1,\chi_K)}\, O^{\geom}_{\gamma}(f). \end{equation} We observe that the canonical measure on the centralizer of $\gamma$ (or any measure coinciding with it at almost all places) is convenient for defining a global orbital integral because with such a measure, all but finitely many factors $O_{\gamma_p}(f_p)$ are equal to $1$, and thus the global orbital integral is a finite product, namely, the product over the primes that divide $D(\gamma)$ and the primes where $f_p\neq {\bf1}_{G(\Z_p)}$, and there is no question of convergence. In the Trace Formula, the orbital integrals are weighted by volumes; and the volume has to be taken with respect to the same measure on the centralizer that was used to define the orbital integral. Comparing (\ref{eq:ofin}) with (\ref{eq:cnf}), we see explicitly that for $\GL_2$, the volume term contains some of the factors that also appear when we pass from the canonical measure to the geometric measure on the orbit. Now we are ready to explicate an observation (that is implicit in the work of Langlands) that switching to the geometric measure makes the volume term disappear in the case $\bG=\GL_2$, in addition to making the orbital integrals at finite places better-behaved. This comes at the cost of now having the orbital integral expressed as only a \emph{conditionally convergent} infinite product. This theorem can be thought of as the main point of this note. \begin{theorem}\label{thm:main} Let $\gamma$ be a regular elliptic element of $\GL_2$ as above, that splits over a quadratic extension $K$. Let $\bT=\res_{K/\Q}\G_m$.
Then $$\vol\left(\bT(\Q)\backslash \bT(\A_f)\right)\, O^{\mathrm{can}}_{\gamma}(f) =\frac{|D(\gamma)|^{1/2}}{2\pi}\, O^{\geom}_{\gamma}(f).$$ \end{theorem} \begin{proof} Combining the relation (\ref{eq:ofin}) with the calculation in (\ref{eq:cnf_proof}), we see that the factors $L(1,\chi_K)$ and $|\Delta_K|^{1/2}$ cancel out, and we obtain: $$\begin{aligned} \vol\left(\bT(\Q)\backslash \bT(\A_f)\right)\, O^{\mathrm{can}}_{\gamma}(f) &= \frac{|\Delta_K|^{1/2}L(1,\chi_K)}{2\pi}\,\tau(\bT)\cdot \frac{|D(\gamma)|^{1/2}}{|\Delta_K|^{1/2}L(1,\chi_K)}\, O^{\geom}_{\gamma}(f)\\ & = \frac{|D(\gamma)|^{1/2}}{2\pi}\, \tau(\bT)\, O^{\geom}_{\gamma}(f). \end{aligned}$$ Since $\bT$ is obtained from $\G_m$ by restriction of scalars, we have $\tau(\bT)=1$, as discussed above in \S\ref{subsub:Tama}. \end{proof} We make a few remarks: \begin{remark} {\bf 1.} In our statement, the left-hand side of the equation is actually independent of the choice of the measure on $\bT(\Q_p)$ at every finite place $p$, as long as the volume and the orbital integral are taken with respect to the same measure; this is consistent with the right-hand side, which does not involve any measure on $\bT$ at all. {\bf 2.} The volume that appears in the Trace Formula is $\vol(\bT(\Q)\backslash \bT(\A)^1)$; when passing from the volume appearing on the left-hand side of Theorem \ref{thm:main} to this volume, the ratio between them will depend on the precise choice of the normalization of the component of the measure at infinity. Calculations of this sort (for a general reductive group, with various specific choices of the measure at infinity) appear in \cite{gan-gross:haar} and \cite{gross:motive}. {\bf 3.} The right-hand side of the relation in Theorem \ref{thm:main} might be preferable in two ways: there is no complicated volume term, and the orbital integral has local components that are continuous as functions on the Steinberg-Hitchin base (however, now the orbital integral on the right is an infinite product that converges conditionally). {\bf 4.} The proof of the theorem does not use the analytic class number formula. Moreover, the proof is general, except for three pieces: \begin{enumerate} \item The specific knowledge that at every finite place $v$, $\vol_{\omega_T}(\cT^0)$ is $|\Delta_K|_v^{1/2}L_v(1, \chi_T)^{-1}$.
See \cite{gan-gross:haar} for a generalization of such a relation. \item For general tori, Tamagawa numbers can be difficult to compute explicitly (see \cite{rud:tori} for some partial results), but for the maximal tori in $\GL_n$ they are $1$. \item The factor $2\pi$ in the denominator is specific to $\GL_2$; in general, it needs to be replaced with the factor determined by the component of the chosen measure at infinity. \end{enumerate} \end{remark} We conclude with a brief discussion of the Eichler-Selberg Trace Formula, since it is the starting point for Altug's lectures. This discussion is entirely based on \cite{knightly-li}. \subsection{Eichler-Selberg Trace Formula for $\GL_2$} The Eichler-Selberg Trace Formula expresses the trace of a Hecke operator $T_n$ on the space $S_k(N)$ of cusp forms of weight $k$ and level $N$. For simplicity of exposition, we set $N=1$ in this note; in this case the central character is also trivial. In this setting, the Eichler-Selberg Trace Formula states: \begin{equation}\label{EichlerTF} \begin{aligned} n^{1-k/2} \tr T_n ={}& \frac{k-1}{12}\begin{cases} 1, & n \text{ is a perfect square,} \\ 0, & \text{otherwise,} \end{cases} \\ &-\frac12\sum_{t^2<4n} \frac{\rho^{k-1}-\bar\rho^{k-1}}{\rho-\bar\rho} \sum_m h_w\left(\frac{t^2-4n}{m^2}\right) \\ &-\frac12\sum_{d\mid n} \min\left(d, \frac{n}{d}\right)^{k-1}, \end{aligned} \end{equation} where in the middle line $\rho$ and $\bar{\rho}$ are the roots of the polynomial $X^2-tX+n$, and $h_w\left(\frac{t^2-4n}{m^2}\right)$ is the weighted class number of the order in $\Q[\rho]$ that has discriminant $\frac{t^2-4n}{m^2}$. The sum over $m$ runs over the integers $m\ge 1$ such that $m^2$ divides $t^2-4n$, and $\frac{t^2-4n}{m^2}$ is $0$ or $1$ $\mod 4$. The goal of this section is to sketch, without any detail, a connection between this formula and the geometric side of the Arthur-Selberg Trace Formula.
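Since $\dim S_{12}=1$, the trace of $T_n$ on $S_{12}$ is the Ramanujan coefficient $\tau(n)$, which makes the formula easy to test numerically. The following Python sketch (ours, not part of the original text) checks the identity for $n\le 3$ and $k=12$, with the few required weighted class numbers $h_w(D)$ tabulated by hand, and with both sides multiplied by $n^{k/2-1}$ so that all arithmetic stays in exact rationals:

```python
import math
from fractions import Fraction

# Hand-tabulated weighted class numbers h_w(D) for the discriminants that
# occur when n <= 3; h_w(-3) = 1/3 and h_w(-4) = 1/2 carry the extra units.
H_W = {-3: Fraction(1, 3), -4: Fraction(1, 2), -7: Fraction(1),
       -8: Fraction(1), -11: Fraction(1), -12: Fraction(1)}

def char_poly_ratio(t, n, k):
    """(rho^(k-1) - rhobar^(k-1))/(rho - rhobar) for roots of X^2 - tX + n,
    via the recurrence p_j = t*p_(j-1) - n*p_(j-2), p_0 = 1, p_1 = t."""
    p, q = 1, t
    for _ in range(k - 3):
        p, q = q, t * q - n * p
    return q  # this is p_(k-2)

def hw_sum(t, n):
    """Sum of h_w((t^2-4n)/m^2) over m >= 1 with m^2 | t^2-4n and
    (t^2-4n)/m^2 congruent to 0 or 1 mod 4."""
    D = t * t - 4 * n
    total = Fraction(0)
    for m in range(1, math.isqrt(-D) + 1):
        if D % (m * m) == 0 and (D // (m * m)) % 4 in (0, 1):
            total += H_W[D // (m * m)]
    return total

def trace_Tn(n, k):
    """Eichler-Selberg trace of T_n on S_k(SL_2(Z)), with both sides of the
    formula rescaled by n^(k/2-1)."""
    r = math.isqrt(n)
    identity = Fraction(k - 1, 12) * n ** (k // 2 - 1) if r * r == n else Fraction(0)
    elliptic = -Fraction(1, 2) * sum(char_poly_ratio(t, n, k) * hw_sum(t, n)
                                     for t in range(-2 * n, 2 * n + 1) if t * t < 4 * n)
    hyperbolic = -Fraction(1, 2) * sum(min(d, n // d) ** (k - 1)
                                       for d in range(1, n + 1) if n % d == 0)
    return identity + elliptic + hyperbolic

# Compare with tau(n), read off from Delta = q prod_{m>=1} (1 - q^m)^24:
N = 3
coeffs = [0] * (N + 1)
coeffs[0] = 1
for m in range(1, N + 1):
    for _ in range(24):
        for i in range(N, m - 1, -1):
            coeffs[i] -= coeffs[i - m]
tau = [0] + coeffs[:N]  # tau[n] is the n-th q-coefficient of Delta

for n in (1, 2, 3):
    assert trace_Tn(n, 12) == tau[n]
```

For instance, for $n=2$ the elliptic term contributes $-23$ and the hyperbolic term $-1$, recovering $\tau(2)=-24$.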
\subsubsection{The test function} We start with a very brief recall of the connection between modular forms and automorphic forms on $\GL_2$. We refer to, e.g., Knightly and Li \cite{knightly-li} for all the details. Let $\bG=\GL_2$ for the rest of this section. A cusp form of weight $k$ generates (as a representation of $\bG(\A_\Q)$ under the action by right translations) a closed subspace $(\pi, V)$ of $L^2_0(\bG(\Q)\backslash \bG(\A))$; for a level $1$ cusp form (which is our assumption here, so in particular we should assume $k>2$), the central character of $\pi$ is trivial. By Flath's theorem, the representation $\pi$ factors as a restricted tensor product $$\pi=\pi_{\mathrm{fin}}\otimes \pi_{\infty} = \otimes'_{p}\, \pi_p\otimes \pi_{\infty}.$$ Since $(\pi, V)$ came from a cusp form of weight $k$, we have $\pi_{\infty}=\pi_k$ -- the discrete series representation of highest weight $k$. Every function $f\in C_c(\bG(\A))$ gives rise to a linear operator $R(f)$ on $L^2(\bZ(\A)\backslash \bG(\A))$ defined by $$(R(f)\phi)(g):= \int_{(\bZ\backslash \bG)(\A)}f(x)\phi(gx) dx.$$ In this language, the Hecke operator $T_n$ on $S_k$ is precisely $n^{k/2-1}R(f_{n,k})$ for a specific test function $f_{n,k}$ on $\bG(\A)$. We quote the definition of this test function from \cite{knightly-li}. \begin{itemize} \item Let $f_\infty$ be a matrix coefficient of the representation $\pi_k$. By orthogonality of matrix coefficients, this ensures that the image of $R(f)$ is contained in $(\pi, V)$. (Since $(\pi, V)$ is irreducible, this means $R(f)$ projects onto $V$.) \item For $p\nmid n$, let $f_p$ be the characteristic function of $\bZ(\Q_p) \bG(\Z_p)$. \item For $p\mid n$, let $f_p$ be the characteristic function of $\bZ(\Q_p) M_{n,p}$, where $M_{n,p}$ is the set of matrices of determinant $n$ in $M_2(\Q_p)$ (we prefer to think of it as the characteristic function of the union of the double cosets of the Cartan decomposition for $\GL_2(\Q_p)$ of determinant $n$).
\end{itemize} \subsubsection{The transition to the Arthur Trace Formula} We plug $f:=f_{n,k}$ into Arthur's Trace Formula for $\GL_2$, and examine the geometric side. Since for our test function $f$, the continuous and residual parts of the spectral side vanish, the geometric side in fact equals $\tr R(f)$ (see \cite[\S 22]{knightly-li}). Finally, Knightly and Li show that: \begin{itemize} \item The first line of (\ref{EichlerTF}) matches the contribution of the trivial conjugacy class; \item The last line matches the contribution of the unipotent and hyperbolic conjugacy classes; \item The middle line matches the contribution of the elliptic conjugacy classes. \end{itemize} We discuss why the last claim is plausible. By definition of the test function $f$, its orbital integrals vanish on all elements $\gamma$ such that $\det(\gamma)\neq n$; hence, in the geometric side of Arthur's Trace Formula, we are left with the sum over $\gamma\in \bG(\Q)$ satisfying $\det(\gamma)=n$. The conjugacy classes in $\GL_2$ are parametrized by characteristic polynomials, and the elliptic ones correspond to the polynomials with negative discriminants, so at least superficially, we recognize the sum over the integers $t$ such that $t^2<4n$ as a sum over the rational elliptic conjugacy classes. Next, note that the expression $\frac{\rho^{k-1}-\bar\rho^{k-1}}{\rho-\bar\rho}$ is the value of the character of $\pi_k$ on the corresponding conjugacy class; thus, we recognize it as the orbital integral of $f_\infty$ (see e.g. \cite[\S 1.11]{kottwitz:clay} for the discussion of characters as orbital integrals of matrix coefficients; see also \cite[Ch.I, \S5.2]{gelfand-graev}). Knightly and Li show that, in our notation, for an elliptic $\gamma$ with characteristic polynomial $X^2-tX+n$ and centralizer $\bT$, \begin{equation} \vol\left(\bT(\Q)\backslash \bT(\A_f)\right)\, O_{\gamma}\left(\otimes_p f_p\right) = \sum_m h_w\left(\frac{t^2-4n}{m^2}\right), \end{equation} where the sum over $m$ is as in (\ref{EichlerTF}).
While the appearance of class numbers in our earlier calculations is suggestive, and proves this relation in the trivial case when $t^2-4n$ is square-free, it appears that our arguments are insufficient for getting a simpler proof of this claim in general (other than by essentially direct computation of both sides, or matching the computation of the right-hand side in \cite{knightly-li} with the calculation on the building in \cite{kottwitz:clay}, the results of which we already quoted above). A similar statement for $\GL_n$, relating orbital integrals to sums of class numbers of orders, is proved by Zhiwei Yun \cite{yun:dedekind}. \section{Appendix A. Kirillov's form on co-adjoint orbits: two examples} \begin{center} {by Matthew Koster} \end{center} In this appendix we illustrate Kirillov's construction of a volume form on co-adjoint orbits in a Lie algebra. Here we work over $\R$ in order to be able to use the intuition from calculus. In these examples, we also relate this form to the geometric measure discussed in the article.\footnote{This work was part of an NSERC summer USRA project in the summer of 2019; we acknowledge the support of NSERC.} \subsection{The coadjoint orbits}\label{sub:coorb} A large part of this section is quoted from \cite[\S 17.3]{kottwitz:clay} for the reader's convenience and to set up notation. Let $G$ denote a semisimple Lie group, $\mathfrak{g}$ its Lie algebra, and $\mathfrak{g}^*$ the linear dual space of $\mathfrak{g}$. We denote elements of $\fg$ by capital letters, e.g. $X$, and use stars to denote elements of $\fg^\ast$; unless explicitly stated otherwise, there is no a priori relationship between $X$ and $X^*$. $G$ acts on $\mathfrak{g}$ by $\Ad$ and acts on $\mathfrak{g}^*$ by $\Ad^*$, where $$\langle \Ad^*(g)(X^*), X \rangle = \langle X^*, \Ad(g^{-1})(X) \rangle.$$ Let $\ri(X^*) \subset \mathfrak{g}^* $ denote the orbit of $X^*$ under this action (called a co-adjoint orbit).
\\ We recall that the differential of the adjoint action $\Ad$ of $G$ is the action of $\mathfrak{g}$ on itself by $\ad$, where $\ad_X(Z) = [X,Z]$. The co-adjoint action of $\fg$ on $\mathfrak{g}^*$ is the differential of $\Ad^*$; we denote it by $\ad^*$; explicitly, this action is defined by $\langle \ad_X^*(Y^\ast), Z \rangle = \langle Y^\ast, [Z,X] \rangle$. A choice of an element $X^\ast\in \fg^\ast$ defines a map $\varphi_{X^\ast}:G \to \mathfrak{g}^*$ by $\varphi_{X^\ast}(g) = \Ad^*_g(X^*)$. The differential of this map at the identity $e \in G$ gives an identification of $\mathfrak{g}/\mathfrak{c}(X^*)$ with the tangent space $T_{X^*}\ri (X^*)$ at $X^\ast$, defined by $X \mapsto \ad^*_X(X^*)$. Here $\fc(X^\ast)$ is the stabilizer of $X^\ast$ under $\ad^\ast$, and we are viewing $T_{X^*} \ri(X^*)$ as a subspace of $T_{X^*} \mathfrak{g}^* \cong \mathfrak{g}^*$. We denote this identification by $\Phi_{X^\ast}:\mathfrak{g}/\fc(X^\ast) \to T_{X^\ast}\ri(X^\ast)\hookrightarrow \mathfrak{g}^*$. The element $X^\ast$ gives an alternating form $\omega'_{X^\ast}$ on $\fg$, defined by \begin{equation} \omega'_{X^\ast}(X, Y):= \langle X^\ast, [X, Y] \rangle = -\langle \ad^\ast(X) X^\ast, Y \rangle. \end{equation} This form clearly vanishes on $\fc(X^\ast)$, and gives a non-degenerate bilinear form on $\fg/\fc(X^\ast)$, which we have just identified with $T_{X^\ast}\ri(X^\ast)$. Thus, given a co-adjoint orbit $\mathcal O$, we get a symplectic $2$-form $\omega'$ on it by letting the value of $\omega'$ at $X^\ast\in \mathcal O$ equal $\omega'_{X^\ast}$. In particular, as a manifold, $\mathcal O$ has to have even dimension; if its dimension is $2k$, then the $k$-fold wedge product of the form $\omega'$ gives a volume form on $\mathcal O$. Over a field of characteristic zero, we can identify a semisimple Lie algebra with its dual; we will use the Killing form for this.
Then the adjoint orbits in $\fg$ get identified with the co-adjoint orbits in $\fg^\ast$, and thus we get a very natural algebraic volume form on each adjoint orbit in $\fg$. Here our goal is to compute this form explicitly in two examples: the regular nilpotent orbit in $\mathfrak{sl}_2(\R)$ and a semisimple $\SO_3(\R)$-orbit in $\mathfrak{so}_3(\R)$ (we do not use the accidental isomorphism in this calculation). In both cases the orbit will be two-dimensional, so we are just computing the form denoted by $\omega'$ above. \subsection{Rewriting the form as a form on an orbit in $\fg$}\label{sub:form} Given $X_0^\ast\in \fg^\ast$, we compute the form $\omega$ on the orbit of an element $X_0\in \fg$ that corresponds to $X_0^\ast\in \fg^\ast$ under the isomorphism defined by the Killing form, in three steps: \begin{enumerate} \item Compute the map $\Phi_{X_0^\ast}$. \item For $\widetilde{v_1}, \widetilde{v_2} \in T_{X_0^*}\ri(X_0^*)$ find $v_1, v_2 \in \mathfrak{g}$ with $\Phi_{X_0^\ast}(v_i) = \widetilde{v_i}$ for $i=1,2$, and then evaluate $\omega_{X_0^*}(\widetilde{v_1}, \widetilde{v_2} ) = \langle X_0^*, [v_1,v_2]\rangle$. \item Using the Killing form, identify $\fg$ with $\fg^\ast$, which identifies the co-adjoint orbit of $X_0^\ast$ in $\fg^\ast$ with the adjoint orbit of an element $X_0\in \fg$. Then use the adjoint action to explicitly define the volume form $\omega_X$ on $T_X \ri$ at a point $X$ in this orbit by pulling back the form $\omega_{X_0}$. \end{enumerate} \subsection{Example I: a regular nilpotent orbit in $\fsl_2(\R)$}\label{sub:sl2R} Let $G= \SL(2; \mathbb{R})$, $\mathfrak{g} = \mathfrak{sl}(2; \mathbb{R})$, and consider the standard basis $\{\be, \bff, \bh\}$ for $\mathfrak{g}$ given by: \begin{align*} {\bf e} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \ \ \ \ {\bf f} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \ \ \ \ {\bf h} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.
\end{align*} Let $\{\be^*, \bff^*, \bh^*\}$ be the basis for $\mathfrak{g}^*$ dual to $\{\be, \bff, \bh\}$ under the Killing form. Explicitly this means that $\be^*(\bff)=4$, $\bff^*(\be)=4$, and $\bh^*(\bh)=8$. We will compute the Kirillov form on the co-adjoint orbit $\ri_{\bff^\ast}$ of $\bff^*$, which we identify with the adjoint orbit of $\be$ in $\fg$. When we refer to coordinates $x,y,z$ on $\fg$, it is with respect to our chosen basis $\{\be, \bff, \bh\}$. Given this choice of coordinates, we have the basis of the space of $1$-forms on $\fg$ given by $dx$, $dy$, $dz$. Under the isomorphism $\fg^\ast \simeq \fg$ defined by the Killing form, a point $(x,y,z)\in \fg^\ast$ is mapped to $\left[\begin{smallmatrix} z/2 & x \\ y & -z/2\end{smallmatrix}\right]\in \fg$. We can describe the orbit of $\be$ very explicitly in these coordinates. \subsubsection{The nilpotent cone} If we are working over $\R$, then the set of nilpotent elements in $\fg$ forms a cone: indeed, for a nilpotent matrix we have $\det\left[\begin{smallmatrix} z/2 & x \\ y & -z/2\end{smallmatrix}\right]=0$, i.e., $z^2+4xy=0$. (One can easily see that it is, indeed, a cone by the change of coordinates $u=x+y$, $v=x-y$: in these coordinates, the equation of the orbit becomes $z^2+u^2=v^2$.) See e.g. \cite[\S 2.3]{debacker:clay} for more detail of this picture. The nilpotent cone consists of 3 orbits of $\SL_2(\R)$: $\{0\}$, the half-cone with $v>0$ (which is the orbit of $\be$), and the half-cone with $v<0$ (the orbit of $\bff$).
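The pairing values above and the description of the cone can be checked symbolically. The following sympy sketch (ours, not part of the original text) computes the Killing form of $\mathfrak{sl}_2$ as $K(A,B)=4\operatorname{tr}(AB)$ and verifies the cone equation and the change of coordinates:

```python
import sympy as sp

e = sp.Matrix([[0, 1], [0, 0]])
f = sp.Matrix([[0, 0], [1, 0]])
h = sp.Matrix([[1, 0], [0, -1]])

K = lambda A, B: 4 * (A * B).trace()  # Killing form of sl_2

# The pairings behind e*(f) = 4, f*(e) = 4, h*(h) = 8:
assert K(e, f) == 4 and K(h, h) == 8 and K(e, e) == 0 and K(e, h) == 0

# A point (x, y, z) of g* corresponds to [[z/2, x], [y, -z/2]]; it is
# nilpotent exactly when its determinant vanishes, i.e. z^2 + 4xy = 0:
x, y, z = sp.symbols('x y z')
X = sp.Matrix([[z / 2, x], [y, -z / 2]])
assert sp.expand(-4 * X.det()) == z**2 + 4 * x * y

# In the coordinates u = x + y, v = x - y the cone becomes z^2 + u^2 = v^2:
u, v = sp.symbols('u v')
lhs = (z**2 + 4 * x * y).subs({x: (u + v) / 2, y: (u - v) / 2})
assert sp.expand(lhs) == sp.expand(z**2 + u**2 - v**2)
```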
One can explicitly compute that given a matrix $X_0:=\left[\begin{smallmatrix} z_0/2 & x_0 \\ y_0 & -z_0/2\end{smallmatrix}\right]\in \fg$ satisfying $z_0^2=-4x_0y_0$ (which forces $x_0y_0<0$ if we are working over $\R$), the element $g_0$ below provides the conjugation so that $X_0 = \Ad_{g_0} (\be)$ (it is convenient for us to write $g_0$ as a product of a diagonal and a unipotent matrix with a view toward further calculations): \begin{equation}\label{eq:g} g_0 = \begin{bmatrix} \sqrt{x_0} & 0 \\ \sqrt{-y_0} & \frac{1}{\sqrt{x_0}} \end{bmatrix} = \begin{bmatrix} \sqrt{x_0} & 0 \\ 0 & \frac{1}{\sqrt{x_0}} \\ \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 \\ \sqrt{-{x_0}{y_0}} & 1 \\ \end{bmatrix}. \end{equation} Note that since $v=x_0-y_0>0$, and $x_0y_0<0$, we have $x_0>0, y_0<0$, which explains our choice of signs inside the square roots. \subsubsection{A measure from calculus} Given that our orbit is an open half-cone, we can write down a natural measure on it as a parametrized surface. In fact, as we think of a parametrization for this cone, we can be guided by the fact that we are looking for an $\SL_2(\R)$-invariant measure. We recall the Iwasawa decomposition $\SL_2(\R)=UAK$, where $U$ is the group of lower-triangular unipotent matrices, $A$ is the group of diagonal matrices with positive entries, and $K=\SO_2(\R)\simeq S^1$. The adjoint action of $\SO_2(\R)$ is given by a fairly complicated formula (see \cite[\S 2.3]{debacker:clay}), but at the same time one has the obvious action of $S^1$ on the cone by rotations; thus it is reasonable to make a rotation-invariant measure on our cone. Therefore, we use cylindrical coordinates to parametrize the cone $z^2+u^2=v^2$ and arrive at $z=t\cos(\theta)$, $u=t\sin(\theta)$, $v=t$, which translates to the parametrization $\rho:(0, \infty) \times [0, 2\pi) \to \mathfrak{g}$ given by: \begin{equation} \rho(t, \theta) = (t (\cos \theta +1), t (\cos \theta -1), t \sin \theta).
\end{equation} The natural volume form on the cone is then $dt\wedge d\theta$; below we see how it compares to Kirillov's volume form. Note: now that we have made this guess at a form, we could just express the actions of $U$ and $K=\SO_2(\R)$ on the cone in the $(t, \theta)$ coordinates, and check if this form is invariant. However, we prefer to compute Kirillov's form directly and derive the comparison this way. \subsubsection{Computing Kirillov's form} \emph{Step 1. The calculation at $\bff^\ast$.} We compute the map $\Phi_{\bff^\ast}$ defined in \S \ref{sub:coorb}. It is a map from $\fg$ to $\fg^\ast$, so given $(x,y,z)=x\be+y\bff+z\bh \in \fg$, its image under $\Phi_{\bff^\ast}$ is a linear functional on $\fg$. Thus it makes sense to write $\langle \Phi_{\bff^\ast}(x,y,z), (x',y',z')\rangle$, where $(x',y',z')\in \fg$. We evaluate: \begin{align*} \langle \Phi_{\bff^\ast}(x\be+y\bff+z\bh),(x', y',z') \rangle &= \langle \bff^*, [x'\be+y'\bff+z'\bh, x\be+y\bff+z\bh] \rangle \\ &= \langle \bff^*, 2(z'x-x'z)\be + 2(y'z-z'y)\bff+(x'y-y'x)\bh \rangle \\ & = 8(z'x-x'z) \\ &= \langle -(2z\bff^*-x\bh^*), (x', y',z')\rangle. \end{align*} Thus \begin{equation}\label{eq:Phi-e} \Phi_{\bff^\ast}(x\be + y\bff + z\bh ) = - (2z \bff^* - x\bh^*). \end{equation} \emph{Step 2.} We need to find a preimage under $\Phi_{\bff^\ast}$ of a vector $\tilde v \in T_{\bff^\ast} \ri_{\bff^\ast}$. We recall that this tangent space is identified with a subspace of $\fg^\ast$, which we later plan to identify with a subspace of $\fg$. Because of this latter anticipated identification, we write $\tilde v = x \bff^\ast + y \be^\ast + z \bh^\ast$. We see directly from (\ref{eq:Phi-e}) that for $\tilde v$ to be in the image of $\Phi_{\bff^\ast}$ it has to satisfy $y =0$, and then $v = z \be - \frac{x}2 \bh$ satisfies $\Phi_{\bff^\ast}(v)=\tilde v$. We are now ready to compute the form $\omega_{\bff^\ast}$.
Let $\widetilde{v_i} = x_i\bff^*+ y_i\be^* + z_i\bh^*$ for $i=1,2$; then as discussed above, we can take $v_i= z_i \be - \frac{x_i}2 \bh$. Then $[v_1, v_2] = (z_1x_2-x_1z_2)\be$, and finally we have that: \begin{equation}\label{eq:omega-e} \omega_{\bff^*} ( \tilde v_1, \tilde v_2 ) =\langle \bff^*, [v_1,v_2]\rangle = 4(z_1x_2-x_1z_2). \end{equation} Under the identification of the differential $2$-forms on a vector space with alternating $2$-tensors, we recognize this form as $-4 dx^*\wedge dz^*$, which we identify with the form $\omega_{\be}:= -4dx\wedge dz$ on the adjoint orbit of $\be\in \fg$. \emph{Step 3. Pullback of $\omega_{\be}$ under the adjoint action.} We compute the operator $\Ad_{g_0}$ for the element $g_0$ from (\ref{eq:g}) in our coordinates, in order to use it to pull back the form $\omega_{\be}$. By the right-hand side of (\ref{eq:g}), the matrix of $\Ad_{g_0}$ in the basis $\{\be, \bff, \bh\}$ is \begin{equation} \Ad_{g_0}= \begin{bmatrix} x_0 & 0 & 0\\ 0 & \frac1{x_0} & 0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 & 0\\ {x_0}{y_0} & 1 & 2 \sqrt{-{x_0}{y_0}} \\ -\sqrt{-{x_0}{y_0}} & 0 & 1 \end{bmatrix} = \begin{bmatrix} x_0 & 0 & 0 \\ {y_0} & \frac{1}{x_0} & 2 \sqrt{-\frac{y_0}{x_0}} \\ -\sqrt{-{x_0}{y_0}} & 0 & 1 \end{bmatrix}. \end{equation} Thus, \begin{equation} \begin{aligned} &(\Ad_{g_0})^\ast (dx\wedge dz)= \left| \begin{matrix} x_0 & 0 \\ -\sqrt{-x_0y_0} & 0 \end{matrix} \right| dx\wedge dy + \left| \begin{matrix} x_0 & 0 \\ -\sqrt{-x_0y_0} & 1 \end{matrix} \right| dx\wedge dz +\left| \begin{matrix} 0 & 0 \\ 0 & 1 \end{matrix} \right| dy\wedge dz\\ &= x_0 dx\wedge dz.
\end{aligned} \end{equation} By definition, the volume form at $X_0\in \fg$ is $(\Ad_{g_0^{-1}})^\ast(\omega_{\be})$, and thus we get $$\omega_{X_0}=-\frac{4}{x_0}dx\wedge dz.$$ Converting to $(t, \theta)$-coordinates, we get: \begin{align*} \rho^*(dx \wedge dz) &= ( (\cos \theta+1)dt - t \sin \theta \ d \theta) \wedge (\sin \theta \ dt + t \cos \theta \ d \theta) \\ &= t \cos \theta (\cos \theta+1 ) dt \wedge d \theta - t \sin^2 \theta \ d \theta \wedge dt \\ &=(t \cos^2 \theta + t \cos \theta + t \sin^2 \theta ) dt \wedge d \theta \\ &= t (1+ \cos \theta) dt \wedge d \theta \end{align*} and therefore \begin{align*} \rho^* \omega &= \rho^* \left(-\frac{4\, dx \wedge dz }{x }\right) = -\frac{ 4t (1+ \cos \theta) }{t (1+ \cos \theta)} dt \wedge d \theta\\ &= -4 \ dt \wedge d \theta; \end{align*} that is, up to orientation, Kirillov's form on the half-cone is $4$ times the naive volume form $dt\wedge d\theta$. \subsubsection{Semisimple orbits in $\fsl_2(\R)$} The orbits of split semisimple elements in $\fsl_2(\R)$ are hyperboloids of one sheet asymptotically approaching the nilpotent cone on the outside; the orbits of elliptic elements are the individual sheets of hyperboloids of two sheets that lie inside the same asymptotic cone (see e.g., \cite[\S 2.3.3]{debacker:clay} for detail). Measures on them can be computed in a similar way (we return to this calculation below). For now we compute another example, a semisimple orbit in $\mathfrak{so}_3(\R)$. \subsection{Another example: a semisimple (elliptic) element in $\mathfrak{so}_3(\R)$}\label{sub:so3R} Let $G=\SO(3)$, $\mathfrak{g}=\mathfrak{so}(3)$, and let $\{X, H, Y \}$ be the basis for $\mathfrak{g}$ given by: $$X = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} \ \ H = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} \ \ Y = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 &0 & 0 \end{pmatrix} $$ Let $\{X^\ast, H^\ast, Y^\ast \}$ be the dual basis for $\mathfrak{g}^*$ under the Killing form. Explicitly this means $X^*(X)=-2$, $Y^*(Y)=-2$, and $H^\ast(H)=-2$.
Denote by $\ri_{H^\ast}$ the co-adjoint orbit of $H^\ast$. Then a brief calculation shows that $\ri_{H^\ast}$ is the unit sphere: $$ \ri_{H^\ast} = \{ (x,y,z) \in \mathfrak{g}^* \ \vert \ x^2+y^2+z^2=1 \}. $$ We compute Kirillov's form on this sphere, using a slightly different method from the above (to illustrate various approaches to such computations). Namely, rather than computing the form at one fixed base point on the orbit and then using the group action to compute it at all points, we do the computation directly for each point $X_0$ of our orbit $\ri_{H^\ast}$. As above, given $X_0\in \mathcal{O}_{H^\ast}$, we have $\varphi_{X_0}:G \to \mathfrak{g}^* $ given by $\varphi_{X_0}(g) = \Ad^*_{g}(X_0)$. We write $X_0=(x_0,y_0,z_0)$ in $\{X^\ast, H^\ast, Y^\ast\}$-coordinates. As above, the differential of $\varphi_{X_0}$ at $e \in G$ gives an identification $\Phi_{X_0}:\mathfrak{g} / \mathfrak{c}(X_0) \to T_{X_0}\ri_{H^\ast} \hookrightarrow{} T_{X_0}\mathfrak{g}^* \cong \mathfrak{g}^* $. This can be computed either by recalling that $\langle \Phi_{X_0}(X), Y\rangle = \langle X_0, [Y,X] \rangle$ or via the exponential map: \begin{align*} \Phi_{X_0}(X) = \frac{d}{dt} \big\vert_{t=0} (\varphi_{X_0} \circ \exp)(tX). \end{align*} The result of this computation is that with respect to our coordinates, the matrix representation for $\Phi_{X_0}$ is given by: $$\begin{pmatrix}0 & z_0 & -y_0 \\ -z_0 & 0 & x_0 \\ y_0 & -x_0 & 0 \end{pmatrix}.$$ Write $\omega=f_1 dX^* \wedge dH^* + f_2 dX^* \wedge dY^* + f_3 dH^* \wedge dY^*$.
We have: \begin{align*} &f_1(X_0) = \omega_{X_0}(\frac{\partial}{\partial X^*}, \frac{\partial}{\partial H^*}) = \langle X_0, [ (0,z_0,-y_0), (-z_0,0,x_0) ] \rangle = -2(x_0^2z_0 + y_0^2z_0 + z_0^3) = -2z_0\\ &f_2(X_0) = \omega_{X_0}(\frac{\partial}{\partial X^*}, \frac{\partial}{\partial Y^*}) = \langle X_0, [ (0,z_0,-y_0), (y_0,-x_0,0) ] \rangle = 2(x_0^2y_0 + y_0^3 + y_0z_0^2) = 2y_0\\ &f_3(X_0) = \omega_{X_0}(\frac{\partial}{\partial H^*}, \frac{\partial}{\partial Y^*}) = \langle X_0, [ (-z_0,0,x_0), (y_0,-x_0,0) ] \rangle = -2(x_0^3 +x_0 y_0^2 + x_0z_0^2) = -2x_0. \end{align*} Therefore, \begin{equation}\label{eq:omega1} \begin{aligned} \omega(x,y,z) &= -2 x \ dH^* \wedge dY^* +2y \ dX^*\wedge dY^* - 2z \ dX^* \wedge dH^* \\ &= -2 ( x \ dH^* \wedge dY^* -y \ dX^* \wedge dY^* + z\ dX^* \wedge dH^* ). \end{aligned} \end{equation} It is easy to check that if we parametrize the sphere using the spherical coordinates, this form is rewritten, up to orientation, as twice the usual surface area element: $\omega=2\sin\varphi\, d\varphi\wedge d\theta$. We leave this check as an exercise. \subsubsection{General semi-simple orbits in ${\mathfrak{so}}_3(\R)$ and $\fsl_2(\R)$} Since the group $\SO_3(\R)$ is compact, all its maximal tori are conjugate; consequently, every semisimple element in ${\mathfrak {so}}_3(\R)$ is conjugate to $rH$ for some $r\in \R$ (and all semisimple orbits are spheres). It is clear that if we replace $H^\ast$ with $rH^\ast$, the form in (\ref{eq:omega1}) gets scaled by $r$: $\omega_{rH^\ast} =2r dx\wedge dz$, so it is again the natural area element on a sphere of radius $r$. We also note that all our calculations for these algebraic volume forms are valid over any field of characteristic different from $2$ (the only reason we were working over the reals is the nice geometric picture and the intuitive parametric equations for the orbits as surfaces; note that despite our use of these transcendental parametrizations, in the end all the differential forms are algebraic).
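The exercise above (checking that (\ref{eq:omega1}) is twice the spherical area element) can be carried out symbolically; the following sympy sketch (ours, not part of the original text) pulls the form back along the standard spherical parametrization, with $(x,y,z)$ the coordinates with respect to $(X^\ast, H^\ast, Y^\ast)$:

```python
import sympy as sp

phi, th = sp.symbols('varphi theta')
# Spherical parametrization of the orbit (the unit sphere):
x = sp.sin(phi) * sp.cos(th)
y = sp.sin(phi) * sp.sin(th)
z = sp.cos(phi)

def wedge(f, g):
    """Coefficient of dphi ^ dtheta in the pullback of df ^ dg."""
    return sp.diff(f, phi) * sp.diff(g, th) - sp.diff(f, th) * sp.diff(g, phi)

# omega = -2 ( x dH* ^ dY* - y dX* ^ dY* + z dX* ^ dH* ), i.e. with
# dx = dX*, dy = dH*, dz = dY*:
omega = -2 * (x * wedge(y, z) - y * wedge(x, z) + z * wedge(x, y))

# Up to orientation this is twice the area element sin(phi) dphi ^ dtheta
# (the overall sign reflects the orientation chosen by the parametrization):
assert sp.simplify(omega + 2 * sp.sin(phi)) == 0
```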
Returning to semi-simple orbits in $\fsl_2(\R)$, we can use the accidental isomorphism: over $\C$, $\fsl_2$ and $\mathfrak{so}_3$ are isomorphic. Thus, the same calculation as above shows also that the value at the element $t\bh$ of the Kirillov form on the orbit of $t{\bh}\in \fsl_2(\R)$ is $2t dx\wedge dy$ (note that the $y$- and $z$-coordinates are swapped in \S \ref{sub:sl2R} and \S \ref {sub:so3R}); this uniquely determines the invariant form on the orbit. How does this form relate to the volume form $\omega_c^\geom$ defined by (\ref{eq:geom_lie1}) in \S \ref{sub:lie}? Using the coordinates of \S\ref{sub:sl2R}, the Chevalley map is given by $$\left[\begin{smallmatrix} z/2 & x \\ y & -z/2\end{smallmatrix}\right] \mapsto -\frac{z^2}4-xy. $$ The geometric measure is defined as a quotient: $dx\wedge dy \wedge {\frac12 dz} = \omega_c^\geom \wedge dc$, where $c= -\frac{z^2}4-xy$. Evaluating all the forms at the point $t\bh$ (which corresponds to $x=y=0$, $z=2t$, and thus $c=-t^2$), we see that $\omega_c^\geom$ must satisfy $$(\omega_c^\geom)_{t\bh}\wedge (-2t dt) = dx\wedge dy\wedge dt,$$ and therefore, $(\omega_c^\geom)_{t\bh}=-\frac1{2t}dx\wedge dy$. We obtain the conversion coefficient between Kirillov's form and the geometric form: on the orbit of a split semi-simple element $t\bh$ it is $-\frac1{4t^2}=-D(t\bh)^{-1}$. It would be interesting to find this coefficient for a general reductive Lie algebra. \bibliographystyle{amsalpha} \bibliography{latest_biblio} \end{document} \item One starts with the differential form $\omega_{\G_m}=\frac{dx}{x}$ on $\G_m$. It is necessary to modify the measure locally by the convergence factors $\lambda_p:=(1-\frac{1}p)$ at each rational prime $p$. The Tamagawa measure on $\G_m$ (as an algebraic group over $\Q$) is, by definition, $\prod_p \lambda_p^{-1}|\omega_{\G_m, p}|_p$.
(It immediately follows from the definition of the measure that the volume $\tau(\G_m)$ of $\Q^\times\backslash (\A_{\Q}^\times)^1$ with respect to this measure is $1$.) Moreover, the Tamagawa number is preserved under restriction of scalars (see \cite{weil:adeles}), when the pullback of measures and the convergence factors are defined as follows. \item By \cite[Theorem 2.3.2]{weil:adeles}, the set $\mu_p:=\lambda_p$ is also a set of convergence factors for the measure obtained by pulling back the form $\omega_{\G_m}$ to a torus $\bT$ obtained by restriction of scalars. Weil defines the form he denotes by $p^\ast(\omega)$, the pullback of $\omega$ to the variety obtained by restriction of scalars (\cite[p.22]{weil:adeles}). In our example, using Weil's notation $p^*(\omega)$ on the left and our notation from Example \ref{ex:quadr} on the right, his definition gives: $p^*(\omega)=(\sqrt{\Delta_{K/\Q}})^{\dim(T)}\frac{dz_1}{z_1}\wedge\frac{dz_2}{z_2} = \sqrt{\Delta_{K/\Q}}\omega_T$. We note that the presence of the discriminant factor makes this volume form defined over $\Q$ (as we saw in Example \ref{ex:quadr}); however, it does not affect the product measure we are about to define, thanks to the product formula.
2205.02382v2
http://arxiv.org/abs/2205.02382v2
Ranks of $RO(G)$-graded stable homotopy groups of spheres for finite groups $G$
\documentclass[11pt]{amsart} \setlength{\oddsidemargin}{.25in} \setlength{\evensidemargin}{.25in} \setlength{\topmargin}{.25in} \setlength{\headsep}{.25in} \setlength{\headheight}{0in} \setlength{\textheight}{8.5in} \setlength{\textwidth}{6in} \usepackage{setspace} \usepackage{amsmath, amsfonts, amsthm, amstext, xspace} \usepackage{amssymb,latexsym} \usepackage{comment} \usepackage{enumitem} \usepackage{xcolor} \usepackage{hyperref} \usepackage[capitalise]{cleveref} \definecolor{seagreen}{RGB}{46,139,87} \definecolor{maroon}{RGB}{128,0,0} \definecolor{darkviolet}{RGB}{148,0,211} \definecolor{twelve}{RGB}{100,100,170} \definecolor{thirteen}{RGB}{100,150,50} \definecolor{fourteen}{RGB}{200,0,0} \definecolor{fifteen}{RGB}{0,200,0} \definecolor{sixteen}{RGB}{0,0,200} \definecolor{seventeen}{RGB}{200,0,200} \definecolor{eighteen}{RGB}{0,200,200} \usepackage[all]{xy} \newtheorem{thm}{Theorem}[section] \newtheorem*{theorem*}{Theorem} \newtheorem*{conjecture*}{Conjecture} \newtheorem*{corollary*}{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{state}[thm]{Statement} \newtheorem{prop}[thm]{Proposition} \newtheorem{conj}[thm]{Conjecture} \newtheorem{hypo}[thm]{Hypothesis} \newtheorem{fig}[thm]{Figure} \newtheorem{conv}[thm]{Convention} \newtheorem{assump}[thm]{Assumption} \theoremstyle{definition} \newtheorem*{exm*}{Example} \newtheorem{defin}[thm]{Definition} \newtheorem{exm}[thm]{Example} \newtheorem{proc}[thm]{Procedure} \newtheorem{ques}[thm]{Question} \newtheorem{restr}[thm]{Restriction} \newtheorem{rem2}[thm]{Remark} \newcommand{\ra}{\rightarrow} \newcommand{\la}{\leftarrow} \newcommand{\lra}{\longrightarrow} \newcommand{\lla }{\longrightarrow} \newtheorem{thmx}{Theorem} \renewcommand{\thethmx}{\Alph{thmx}} \def\a{\mathbb{A}} \def\b{\mathbb{B}} \def\c{\mathbb{C}} \def\e{\mathbb{E}} \def\f{\mathbb{F}} \def\g{\mathbb{G}} \def\h{\mathbb{H}} \def\k{\mathbb{K}} \def\l{\mathbb{L}} \def\m{\mathbb{M}} \def\n{\mathbb{N}} \def\p{\mathbb{P}} 
\def\q{\mathbb{Q}} \def\r{\mathbb{R}} \def\s{\mathbb{S}} \def\t{\mathbb{T}} \def\z{\mathbb{Z}} \def\ca{\mathcal{A}} \def\cb{\mathcal{B}} \def\cc{\mathcal{C}} \def\cd{\mathcal{D}} \def\cf{\mathcal{F}} \def\cg{\mathcal{G}} \def\cl{\mathcal{L}} \def\cm{\mathcal{M}} \def\cn{\mathcal{N}} \def\co{\mathcal{O}} \def\cp{\mathcal{P}} \def\cq{\mathcal{Q}} \def\cs{\mathcal{S}} \def\ct{\mathcal{T}} \def\cu{\mathcal{U}} \def\cv{\mathcal{V}} \def\fa{\mathfrak{a}} \def\fb{\mathfrak{b}} \def\fg{\mathfrak{g}} \def\fh{\mathfrak{h}} \def\fm{\mathfrak{m}} \def\fp{\mathfrak{p}} \def\fq{\mathfrak{q}} \def\fu{\mathfrak{u}} \def\zmodp{\mathbb{Z}/p} \def\zlocp{\mathbb{Z}_{(p)}} \def\ctp{\widehat{\otimes}} \def\Moda{\mathcal{M}od_A} \def\Spt{\mathcal{S}pt} \def\Motc{\mathcal{M}ot_\c} \def\Motr{\mathcal{M}ot_\r} \def\Sptc{\mathcal{S}pt^{C_2}} \def\Ab{\mathcal{A}b} \def\mft{\underline{\mathbb{F}_2}} \def\holim{\operatorname{lim}} \def\Gal{\operatorname{Gal}} \def\hocolim{\operatorname{colim}} \def\THH{\operatorname{THH}} \def\TC{\operatorname{TC}} \def\HH{\operatorname{HH}} \def\HC{\operatorname{HC}} \def\Ho{\operatorname{Ho}} \def\Spec{\operatorname{Spec}} \def\Spf{\operatorname{Spf}} \def\Tor{\operatorname{Tor}} \def\Ext{\operatorname{Ext}} \def\colim{\operatorname{colim}} \def\ker{\operatorname{ker}} \def\coker{\operatorname{coker}} \def\id{\operatorname{id}} \def\bo{\mathit{bo}} \def\tmf{\mathit{tmf}} \def\rk{\operatorname{rk}} \def\Aut{\operatorname{Aut}} \newcommand{\limt}[1]{\underset{#1}{\lim \,}} \def\vbar{\overline{v}} \usepackage{amssymb} \usepackage{tikz-cd} \newcommand{\upi}{\underline{\pi}} \newcommand{\piGa}{\pi^G_{\alpha}(S^0)} \newcommand{\piG}{\pi^G} \newcommand{\ROG}{RO(G)} \newcommand{\ROPG}{RO^+(G)} \newcommand{\da}{d_{\alpha}} \newcommand{\sm}{\wedge} \newcommand{\Hom}{\mathrm{Hom}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\bbA}{\mathbb{A}} \newcommand{\st}{\; |\;} \newcommand{\tensor}{\otimes} \setcounter{tocdepth}{1} \author{J.P.C. 
Greenlees}\address{Mathematics Institute, Zeeman Building, Coventry CV4 7AL, UK}\email{[email protected]} \author{J.D. Quigley}\address{Department of Mathematics, Cornell University, Ithaca, NY, USA}\email{[email protected]} \title{Ranks of $RO(G)$-graded stable homotopy groups of spheres for finite groups $G$} \begin{document} \maketitle \input{xypic} \begin{abstract} We describe the distribution of infinite groups within the $RO(G)$-graded stable homotopy groups of spheres for a finite group $G$. \end{abstract} \tableofcontents \section{Introduction} \subsection{Overview} In ordinary stable homotopy theory, one of the most basic theorems is Serre's Finiteness Theorem \cite{Ser53} stating that the $n$-th stable homotopy group of the sphere, $\pi_n(S^0)$, is finite for $n>0$. Since we understand that $\pi_0(S^0)\cong \Z$, this means that rationally the structure of stable homotopy is very simple, and attention is quickly focused on torsion. Equivariantly, it is still true that rationalisation is a massive simplification, but the residual structure in the rationalisation is worth some attention. Let $G$ be a finite group and consider the $RO(G)$-graded stable homotopy groups of the sphere \cite[Ch. IX]{Alaska}. The purpose of this note is to identify the crudest feature of these groups: their ranks as abelian groups. This is a straightforward deduction from well-known results, but some interesting features emerge by giving a systematic account. \begin{exm*} Let $G = C_2$ be the cyclic group of order two. Then $$RO(C_2) \cong \z\{1, \sigma\},$$ where $1$ is the one-dimensional trivial representation and $\sigma$ is the one-dimensional sign representation. Computations of Araki--Iriye \cite{AI82} show that $\pi_\alpha^{C_2}(S^0)$ is infinite if $$\alpha \in \z \{ 2(1 - \sigma) \} \cup \z \{\sigma\}.$$ Our results recover this observation, and show that these are the \emph{only} degrees for which $\pi_\alpha^{C_2}(S^0)$ is infinite.
\end{exm*} Using rational equivariant stable homotopy theory, we prove the following: \begin{thmx}[{\cref{cor:piafinite}}]\label{Thmx:A} Let $G$ be a finite group and $\alpha \in RO(G)$. Then $$\piGa \otimes \q =[S^{\alpha},S^0]^G \otimes \q =\prod_{(H)}\Hom_{W_G(H)}(\pi_0(S^{\alpha^H}), \Q),$$ where the product is taken over conjugacy classes of subgroups $H \leq G$. Thus $\piGa \otimes \q$ is a rational vector space of dimension $r_{\alpha}$, where $$r_{\alpha}=|\{ (H)\st \alpha^H=0 \mbox{ and } W_G(H) \mbox{ acts trivially on } \pi_0(S^{\alpha^H}) \}|.$$ \end{thmx} We lay the groundwork for applying this theorem in \cref{Sec:Orientation} and \cref{Sec:Geometry}. We then compute the ranks of the $RO(G)$-graded stable homotopy groups of spheres for various $G$ in \cref{Sec:Examples}. In \cref{Sec:Variations}, we discuss two natural variations where the same techniques give information. Since the sphere is rationally an Eilenberg--MacLane spectrum for the Burnside Mackey functor, $S^0\simeq_{\Q} H\bbA$, we may view our methods as a calculation of the rationalization of $H^{\star}_G(S^0; \bbA)$ (where $\star$ denotes $RO(G)$-grading). The same methods apply to give a calculation of the rationalization of $H^{\star}_G(S^0; M)$ for any Mackey functor $M$. For the second variation, we may consider the Picard-graded stable homotopy groups of spheres: invertible objects are again characterised in terms of orientations and dimension functions (see \cite{FLM}). Finally, we note that our results provide a basis for understanding other large-scale phenomena in the $RO(G)$-graded stable homotopy groups of spheres. For example, Iriye \cite{Iri82} showed that Nishida's nilpotence theorem \cite{Nis73} holds equivariantly: an element of $\pi_\star^G(S^0)$ is torsion if and only if it is nilpotent. \cref{Thmx:A} therefore explicitly describes the regions of $\pi_\star^G(S^0)$ in which elements can be nilpotent and non-nilpotent.
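To make the recipe of \cref{Thmx:A} concrete, the count $r_\alpha$ can be carried out mechanically once one records, for each conjugacy class $(H)$, the dimension vector of $H$-fixed points and a test for triviality of the $W_G(H)$-action on $\pi_0(S^{\alpha^H})$. The following sketch is purely illustrative (the tuple encoding of $\alpha$ and the data supplied for $G=C_2$ are our own):

```python
# Illustrative sketch of the rank count r_alpha of Theorem A.
# A virtual representation alpha is a tuple of multiplicities of the
# simple real representations; each conjugacy class (H) is recorded as
# (fixed-point dimension vector d_H, orientation test on alpha).

def rank(alpha, classes):
    """Number of classes (H) with alpha^H = 0 and trivial W_G(H)-action."""
    return sum(
        1
        for d_H, orientable in classes
        if sum(a * d for a, d in zip(alpha, d_H)) == 0 and orientable(alpha)
    )

# Data for G = C_2 with RO(C_2) = Z{1, sigma}, alpha = (a, b):
#  - H = C_2: d = (1, 0) since sigma^{C_2} = 0, and the Weyl group is trivial;
#  - H = e:   d = (1, 1), and W = C_2 acts on pi_0(S^alpha) by (-1)^b,
#             so the action is trivial iff b is even.
C2_classes = [
    ((1, 0), lambda alpha: True),               # (H) = (C_2)
    ((1, 1), lambda alpha: alpha[1] % 2 == 0),  # (H) = (e)
]
```

For $G=C_2$ this count reproduces the Araki--Iriye degrees $\z\{2(1-\sigma)\}\cup\z\{\sigma\}$ from the example above.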
\subsection{Finite generation} For most of the paper we will work rationally, but we would like to draw conclusions about the integral situation. For completeness we include the proofs of the basic finiteness statements that permit this deduction. \begin{lem}\label{lem:finite} For any $\alpha \in RO(G)$, the sphere $S^{\alpha}$ is a finite $G$-cell spectrum. \end{lem} \begin{proof} For an actual representation $V$, the sphere $S^V$ is a smooth compact manifold and hence admits the structure of a finite $G$-CW-complex. By exactness of Spanier--Whitehead duality, $DS^V\simeq S^{-V}$ is also a finite $G$-CW spectrum (since by the Wirthm\"uller isomorphism $DG/H_+\simeq G/H_+$). Now if $\alpha =V-W$, $S^{\alpha}\simeq S^V\sm S^{-W}$, so the result follows. \end{proof} The following consequence fails for infinite compact Lie groups. \begin{lem} For any $\alpha \in \ROG$ the abelian group $\piGa$ is finitely generated. Consequently $\piGa$ is finite if and only if $\piGa\tensor \Q=0$. \end{lem} \begin{proof} From the Segal--tom Dieck splitting theorem \cite{tD75}, we see that $\piG_n(S^0)$ is a finitely generated abelian group. By \cref{lem:finite}, it follows that $\piGa$ is finitely generated. \end{proof} \cref{Thmx:A} describes the $RO(G)$-graded rational homotopy groups of the sphere. By the previous lemma, this determines precisely those degrees $\alpha \in RO(G) $ for which $\piGa$ is finite. \subsection{Conventions} Henceforth everything is rational. We write $G$ for a finite group, $H$ for a subgroup of $G$, and $W_G(H) = N_G(H)/H$ for the Weyl group of $H$. We use $*$ to denote $\z$-graded groups and $\star$ to denote $RO(G)$-graded groups. If $V \in RO(G)$, then $|V|$ denotes its (virtual) dimension. \subsection{Acknowledgements} The second author thanks Eva Belmont, Bert Guillou, Dan Isaksen, and Mingcong Zeng for helpful discussions related to this work. 
The authors also thank William Balderrama for discussions concerning nilpotence, and especially for pointing out \cite{Iri82}, which answered a question in a previous version. \section{Rational stable homotopy}\label{Sec:Rational} For finite groups, it is easy to give a complete model of rational $G$-spectra \cite[App. A]{GMTate}. We do not need the full strength of this description, so we describe what we want in a convenient form. First, note that for any $X$ and $Y$, passage to geometric fixed points gives a map $$\Phi^H: [X,Y]^G\lra [\Phi^HX, \Phi^HY]. $$ The codomain admits an action of the Weyl group $W_G(H)$ by conjugation, and $\Phi^H$ takes values in the $W_G(H)$-equivariant maps. \begin{thm}\label{Thm:RationalMaps} If $Y$ is rational, the maps $\Phi^H$ give an isomorphism $$[X,Y]^G_*=\bigoplus_{(H)}H^0(W_G(H); [\Phi^HX, \Phi^HY]_*), $$ where the sum is taken over conjugacy classes of subgroups $H \leq G$. Furthermore, passage to homotopy groups gives isomorphisms $$H^0(W_G(H); [\Phi^HX, \Phi^HY]_*)=\Hom_{W_G(H)}(\pi_*(\Phi^HX), \pi_*(\Phi^HY)).$$ \end{thm} \begin{proof} Filtering $EG_+$ by skeleta gives a spectral sequence $$H^*(G; [X,Y]_*)\Rightarrow [EG_+\sm X, Y]^G_* $$ for (integral) stable maps. When $Y$ is rational, this collapses to an isomorphism $$H^0(G; [X,Y]_*)=[EG_+\sm X, Y]^G_*$$ Combining this with the splitting $S^0\simeq \bigvee_{(H)} e_HS^0 $ we obtain the first stated isomorphism. The second comes from the classical version of Serre's Theorem \cite{Ser53}. \end{proof} \begin{rem2} There is an alternative approach, via \cite{GMTate}. First, we observe that $X\simeq \prod_n\Sigma^n H\upi^G_n(X)$, and then use the fact that all rational Mackey functors are projective and injective to deduce $$[X,Y]^G\stackrel{\cong}\lra \prod_n\Hom( \upi^G_n(X), \upi^G_n(Y)). $$ Now we use the structure of Mackey functors to deduce $$\Hom( \upi^G_n(X), \upi^G_n(Y)) \cong \prod_{(H)}\Hom_{W_G(H)}(\pi_n(\Phi^HX), \pi_n(\Phi^HY)),$$ as claimed. 
\end{rem2} Since $G$ acts trivially on $S^0$, $W_G(H)$ acts trivially on $\pi_0(S^0)=\Q$. We then have the following consequence of \cref{Thm:RationalMaps}: \begin{thm} \label{cor:piafinite} Let $G$ be a finite group and $\alpha \in RO(G)$. Then $$\piGa=[S^{\alpha},S^0]^G=\prod_{(H)}\Hom_{W_G(H)}(\pi_0(S^{\alpha^H}), \Q),$$ where the product is taken over conjugacy classes of subgroups $H \leq G$. Thus $\piGa$ is a rational vector space of dimension $r_{\alpha}$, where $$r_{\alpha}=|\{ (H)\st \alpha^H=0 \mbox{ and } W_G(H) \mbox{ acts trivially on } \pi_0(S^{\alpha^H}) \}|.$$ \end{thm} \section{The orientation character}\label{Sec:Orientation} \newcommand{\aut}{\mathrm{Aut}} \newcommand{\mutwo}{\mu_2} For any real representation $V$ the group $G$ acts on $H_{|V|}(S^V)$, giving a homomorphism $$o_V: G\lra \aut(\Z)=\mutwo.$$ We view this as an element $o_V\in H^1(G;\mutwo)$. In view of the K\"unneth isomorphism $$H_{|V|}(S^V)\tensor H_{|W|}(S^W) \stackrel{\cong}\lra H_{|V+W|}(S^{V\oplus W}),$$ this gives a homomorphism $$o: \ROG \lra H^1(G; \mutwo). $$ Elements of the kernel $\ROPG$ of $o$ are {\em orientable} virtual representations. \begin{exm} Clearly $2\alpha$ is orientable for any $\alpha$. More generally, the image of any complex representation is orientable, as is any element in the image of $RSO(G)\lra RO(G)$. \end{exm} \begin{rem2} It is clear that an orientable representation $\rho: G\lra O(n)$ is one that takes values in $SO(n)$ (matrices of determinant $1$), so that it comes from $RSO(G)$. However, this is not true of virtual representations. For example, if $G=\Sigma_3$, then $V-\sigma - 1$ is orientable (where $V$ is the reduced regular representation and $\sigma$ is the sign representation). However, only even multiples of $\sigma$ or $V$ come from $RSO(\Sigma_3)$.
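The orientability of $V - \sigma - 1$ can be checked directly from determinant characters (an illustrative Python computation; the model of $\Sigma_3$ as permutations of $\{0,1,2\}$ and the reduction of determinants to permutation parities are our own bookkeeping, not taken from the text):

```python
# Check that V - sigma - 1 is orientable for G = Sigma_3, where V is the
# reduced regular representation: the orientation character is a product of
# determinant characters, det(regular rep of g) is the parity of the
# permutation "left multiplication by g" of the group elements, and
# det(V) = det(regular) since V = regular - trivial.
from itertools import permutations

def compose(g, h):  # (g h)(x) = g(h(x)) on {0, 1, 2}
    return tuple(g[h[x]] for x in range(3))

def parity(perm):   # sign of a permutation given as a tuple of images
    inv = sum(
        perm[i] > perm[j]
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
    )
    return -1 if inv % 2 else 1

G = sorted(permutations(range(3)))  # the six elements of Sigma_3
for g in G:
    sign_g = parity(g)              # value of the sign character sigma
    # left multiplication by g permutes the list G; take its parity
    left = tuple(G.index(compose(g, h)) for h in G)
    det_V = parity(left)            # det of V = regular - trivial
    # o(V - sigma - 1)(g) = det_V(g) * sigma(g) (signs are self-inverse)
    assert det_V * sign_g == 1      # the orientation character is trivial
```

The same loop shows $o(V)=\sigma$ is nontrivial, so neither $V$ nor $\sigma$ is orientable on its own.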
\end{rem2} \section{Geometry of the ranks of the $RO(G)$-graded stable stems}\label{Sec:Geometry} To make the answer in \cref{cor:piafinite} explicit there are now two ingredients: (a) the dimension of the fixed points and (b) the orientations. \subsection{Virtual representations of fixed point dimension zero} If we list the simple real representations $S_1, S_2, \ldots, S_r$ of $G$, we may identify $\ROG=\Z^r$. Now, for each subgroup $H \leq G$ we have a dimension vector $$d_H=(\dim (S_1^H), \ldots , \dim(S_r^H)), $$ and the space of virtual representations $\alpha$ with $\alpha^H=0$ is $$N_H=\{ x\st x\cdot d_H=0\}, $$ which is isomorphic to $\Z^{r-1}$ as an abelian group. The only $\alpha$ for which $\piGa$ can be infinite are those lying in some $N_H$, and the maximum rank of $\piGa$ is the number of conjugacy classes of $H$ with $\alpha \in N_H$. When $H=G$, the Weyl group $W_G(H)$ is trivial, and we immediately draw a useful conclusion. \begin{cor} If $V$ is a virtual representation with $V^G=0$ then $$\rk \pi_V^G(S^0) \geq 1.$$ \end{cor} \begin{rem2} One special case is when $V$ is a multiple of the reduced regular representation $\bar{\rho}$. This was observed to the second author by Bert Guillou, who noted that it follows from the fact that $\Phi^G(S^0) \simeq S^0$ and that geometric fixed points are given by inverting the Euler class of the reduced regular representation. On this same theme, if $V$ is a representation with $V^G=0$, the inclusion of the origin gives a map $a_V: S^0\lra S^V$ whose $G$-fixed point map generates $\pi_0(S^0)$. The element $a_V$ is thus of infinite order in $\pi^G_{-V}(S^0)$. The $G$-component of the map $a_V$ will not usually be invertible integrally. However, by \cref{Thm:RationalMaps}, there is a rational map $a_V'\in \pi_V^G(S^0)$ whose $G$-component is the inverse of that of $a_V$.
The problem of finding the smallest positive multiple of $a'_V$ that is integral is of considerable interest; the case of the group of order 2 was studied classically by Landweber \cite{Lan69}, but is now best treated using motivic homotopy theory \cite{BGI21, GI20}. For the group of odd prime order $p$, it was studied by Iriye \cite{Iri89}. \end{rem2} \subsection{Orientability} If $W_G(H)$ is of odd order, then all the gradings in $N_H$ give infinite groups. In general, on each such null space $N_H$ we have an orientation $$o_H: N_H\lra H^1(W_G(H); \mutwo). $$ As noted above, the kernel $N_H^+$ contains all even vectors of $N_H$ and the image of all complex representations. The set of $\alpha$ for which $\piGa$ is infinite is $\bigcup_H N_H^+$. The rank $r_{\alpha}$ of $\piGa$ is the number of conjugacy classes $H$ with $\alpha \in N_H^+$. \subsection{Bases} If we choose a subgroup $H$ giving an associated fixed point vector $d_H$, we note that the component of the trivial representation $S_1$ is always 1, so that $N_H$ has basis $S_2-d_H(2), S_3-d_H(3), \ldots, S_r-d_H(r)$. The orientation $o_H$ is thus described by the homomorphisms $$o_H(2), o_H(3), \ldots, o_H(r): W_G(H)\lra \mu_2$$ where $o_H(i)=o_H(S_i-d_H(i))$. Since $W_G(H)$ always acts trivially on the trivial representation, the orientation $o_H(S_i-d_H(i))=o_H(S_i)$, and $o_H(i)$ is the determinant of $S_i^H$. Since $o_H$ is a homomorphism, this determines its values throughout. All the homomorphisms factor through the largest elementary abelian 2-quotient $E_2(H)$ of $W_G(H)$ (i.e., we factor out commutators and squares). \section{The two variations}\label{Sec:Variations} In effect, our calculation in \cref{cor:piafinite} was of $$\piGa \tensor \Q =[S^\alpha, S^0]^G\tensor \Q=[S^\alpha, H\bbA]^G\tensor \Q=H^0_G(S^\alpha; \bbA)\tensor \Q. 
$$ We point out that the same methods allow us to calculate $$[S^\alpha, HM]^G\tensor \Q=H^0_G(S^\alpha; M)\tensor \Q $$ for any invertible spectrum $S^\alpha$ and rational $G$-Mackey functor $M$. Indeed we still have $$[S^\alpha, HM]^G\tensor \Q=\prod_{(H)}\Hom_{W_G(H)}(H_0(S^{\alpha^H}), M^{e H}), $$ where $M$ corresponds to $\{ M^{e H}\}_H$ under the equivalence $$\mbox{$G$-MackeyFunctors$/\Q$}\simeq \prod_{(H)}\mbox{$\Q W_G(H)$-modules}.$$ More explicitly, $M^{e H}=M(G/H)/(\mathrm{proper \ transfers})$. In other words, $$\rk [S^\alpha, HM]^G \otimes \q = \sum_{(H)} z_H \cdot m(\alpha , H),$$ where $z_H=1$ if $\alpha^H=0$ and $z_H=0$ otherwise, and where $m(\alpha ,H)$ is the multiplicity of the simple $\Q W_G(H)$-representation $H_0(S^{\alpha^H})$ in $M^{e H}$. The only $M^{eH}$ which can possibly give infinite groups are those with summands coming from a homomorphism $W_G(H)\lra \mu_2$. Since the sphere corresponds to the Burnside Mackey functor $\bbA$ with $\bbA^{eH}=\Q$ (with trivial action), it has almost as many $RO(G)$-gradings which are infinite as is possible. \section{Examples}\label{Sec:Examples} We conclude by explicitly calculating the ranks of the $RO(G)$-graded stable homotopy groups of spheres for groups $G$ with small subgroup lattices. \subsection{Cyclic group of order two} We have $$RO(C_2) \cong \z\{1, \sigma\}$$ where $1$ is the ($1$-dimensional) trivial representation and $\sigma$ is the sign representation. Then $$N_e \cong \z \{ 1 - \sigma\}, \quad N_{C_2} \cong \z\{\sigma\}.$$ Since $W_{C_2}(C_2) = e$, we have $$N_{C_2}^+ = N_{C_2} \cong \z\{\sigma\}.$$ On the other hand, $W_{C_2}(e) \cong C_2/e \cong C_2$ acts by $(-1)$ on $1-\sigma$, so $$N_e^+ \cong \z\{2(1-\sigma)\}.$$ Each representation $V \in N_{C_2}^+ \cup N_e^+$ satisfies $\rk \pi_V^{C_2}(S^0) \geq 1$. Since $N_{C_2}^+ \cap N_e^+ = \{0\}$, we also have $\rk \pi_0^{C_2}(S^0) = 2$. 
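These lattice computations can be cross-checked by brute force over a window of $RO(C_2)$ (an illustrative Python sketch; writing $\alpha = a\cdot 1 + b\sigma$ as the pair $(a,b)$ is our own encoding):

```python
# Brute-force cross-check of the C_2 answer over a window of
# RO(C_2) = Z{1, sigma}; alpha = a*1 + b*sigma is encoded as (a, b).

def rank_C2(a, b):
    # (H) = (C_2): alpha^{C_2} = a, and the Weyl group is trivial.
    # (H) = (e):   alpha^{e} = a + b, and W = C_2 acts on
    #              pi_0(S^alpha) by (-1)^b.
    return (a == 0) + (a + b == 0 and b % 2 == 0)

def rank_from_lattices(a, b):
    in_Z_sigma = (a == 0)                     # Z{sigma} = N_{C_2}^+
    in_Z_diag = (a % 2 == 0 and a + b == 0)   # Z{2(1 - sigma)} = N_e^+
    if (a, b) == (0, 0):
        return 2
    return 1 if (in_Z_sigma or in_Z_diag) else 0

# the two descriptions agree on the whole window
assert all(
    rank_C2(a, b) == rank_from_lattices(a, b)
    for a in range(-10, 11)
    for b in range(-10, 11)
)
```

The two counts agree, in line with the case analysis recorded next.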
Altogether, we find: \begin{prop} We have \[ \rk \pi_V^{C_2}(S^0) = \begin{cases} 2 \quad & \text{ if } V = 0, \\ 1 \quad & \text{ if } V \in (\z\{\sigma\} \cup \z\{2(1-\sigma)\}) \setminus \{0\}, \\ 0 \quad & \text{ otherwise.} \end{cases} \] \end{prop} \begin{figure} \includegraphics[trim=0 40 0 0, clip]{C2.pdf} \caption{Degrees in $RO(C_2)$ where $\pi_V^{C_2}(S^0)$ has infinite rank. A black $\bullet$ indicates a copy of $\z$ arising from $N_{C_2}^+$ and a blue $\bullet$ indicates a copy of $\z$ arising from $N_e^+$.} \end{figure} \begin{rem2} The fact that $\pi_V^{C_2}(S^0)$ is infinite for $V \in \z\{\sigma\} \cup \z\{2(1-\sigma)\}$ appears in \cite[Thm. 7.6]{AI82}. A proof that these are the only degrees for which $\pi_V^{C_2}(S^0)$ is infinite using the $C_2$-equivariant Adams spectral sequence was communicated to the second author by Bert Guillou and Dan Isaksen. \end{rem2} \subsection{Cyclic group of odd prime order} Let $q = \frac{p-1}{2}$. We have $$RO(C_p) \cong \z\{ 1, \phi_1,\ldots,\phi_{q}\},$$ where $1$ is the $1$-dimensional trivial representation and $\phi_t: C_p \to \Aut(\r^2) \cong \Aut(\c)$ sends the generator of $C_p$ to $\cdot e^{2\pi i t/p}$. 
Then $$N_e \cong \z \{ 2 - \phi_1, \ldots, 2 - \phi_q\}, \quad N_{C_p} \cong \z \{\phi_1,\ldots,\phi_q\}.$$ Since $W_{C_p}(e) \cong C_p$ and $W_{C_p}(C_p) = e$ necessarily act trivially on $\z$, we have $$N_e^+ = N_e, \quad N_{C_p}^+ \cong N_{C_p}.$$ Finally, we have $$N_e^+ \cap N_{C_p}^+ \cong \z\{\phi_1 - \phi_2, \ldots, \phi_1 - \phi_q\}.$$ \begin{prop} We have \[ \rk \pi_V^{C_p}(S^0) = \begin{cases} 2 \quad & \text{ if } V \in \z \{ \phi_1 - \phi_2, \ldots, \phi_1 - \phi_q\}, \\ 1 \quad & \text{ if } V \in (\z\{2 - \phi_1, \ldots, 2 - \phi_q\} \cup \z \{\phi_1,\ldots,\phi_q\}) \setminus \z\{\phi_1-\phi_2,\ldots,\phi_1-\phi_q\}, \\ 0 \quad & \text{ otherwise.} \end{cases} \] \end{prop} \begin{figure} \includegraphics[trim=0 40 0 0, clip]{C3.pdf} \caption{Degrees in $RO(C_3)$ where $\pi_V^{C_3}(S^0)$ has infinite rank. A black $\bullet$ indicates a copy of $\z$ arising from $N_{C_3}^+$ and a blue $\bullet$ indicates a copy of $\z$ arising from $N_e^+$.} \end{figure} \begin{rem2} We note that $\phi_1, \ldots, \phi_q$ have similar behaviour. Thus we are considering $\Z \oplus N_G$ and the same picture as for $C_2$, but now the vertical line represents $N_G$ and the antidiagonal represents another subspace isomorphic to $N_G$. The origin now consists of $\{\Sigma_i a_i \phi_i \st \Sigma_i a_i=0\}$, perhaps to be thought of as the kernel of $N_G\lra \Z$, and isomorphic to $\Z^{q-1}$. \end{rem2} \subsection{Cyclic groups of odd prime power order} Let $q = \frac{p^n-1}{2}$. Then $$RO(C_{p^n}) \cong \z \{ 1, \phi_1, \ldots, \phi_q\} \cong \z^{q+1}.$$ For all $0 \leq m \leq n$, we have $$N_{C_{p^m}}^+ = N_{C_{p^m}} \cong \z\{ 2 - \phi_i : p^m \mid i\} \oplus \z\{\phi_j : p^m \nmid j\}.$$ Indeed, let $\gamma$ denote a generator of $C_{p^n}$, so $\gamma^{p^{n-m}}$ is a generator for $C_{p^m}$. 
Since $\phi_i: \gamma \mapsto \cdot e^{\frac{2\pi i}{p^n}}$, $$\phi_i: \gamma^{p^{n-m}} \mapsto \cdot e^{\frac{2\pi i p^{n-m}}{p^n}} = \cdot e^{\frac{2 \pi i}{p^m}}.$$ Therefore $\phi_i$ pulls back to a trivial $C_{p^m}$-representation if and only if $p^m \mid i$. Describing the intersections of these subspaces gets complicated quickly. For example, if $0 \leq k < m \leq n$, then $$N_{C_{p^k}}^+ \cap N_{C_{p^m}}^+ \cong \z\{2 - \phi_i : p^m \mid i\} \oplus \z\{\phi_j : p^k \nmid j\} \oplus \z\{\phi_{p^k}-\phi_\ell : \ell > p^k, \ p^k \mid \ell, \ p^m \nmid \ell\}.$$ Here, we use that $p^m \mid i$ implies $p^k \mid i$, and similarly, $p^k \nmid j$ implies $p^m \nmid j$. \subsection{Klein four group} Let $K = C_2 \times C_2 = \{e, i, j, k\}$. We have $$RO(K) \cong \z \{ 1, \sigma_i, \sigma_j, \sigma_k\},$$ where $1$ is the $1$-dimensional trivial representation, $\sigma_i$ is the $1$-dimensional representation on which $e$ and $i$ act trivially and $j$ and $k$ act by $(-1)$, and similarly for $\sigma_j$ and $\sigma_k$. Then \begin{align*} N_e &\cong \z \{ 1 - \sigma_i, 1-\sigma_j, 1-\sigma_k\}, \\ N_{\langle i \rangle} &\cong \z \{ 1 - \sigma_i, \sigma_j, \sigma_k\}, \\ N_{\langle j \rangle} &\cong \z\{ \sigma_i, 1 - \sigma_j, \sigma_k\}, \\ N_{\langle k \rangle} &\cong \z\{ \sigma_i, \sigma_j, 1 - \sigma_k\}, \\ N_{K} &\cong \z\{\sigma_i, \sigma_j, \sigma_k\}.
\end{align*} The Weyl group $W_K(e) \cong K$ acts nontrivially on $\sigma_i$, $\sigma_j$, and $\sigma_k$, so we have $$N_e^+ \cong \z\{2(1-\sigma_i), \sigma_i - \sigma_j, \sigma_i-\sigma_k\}.$$ The Weyl group $W_K(\langle i \rangle) \cong K / \langle i \rangle \cong \langle j \rangle \cong \langle k \rangle$ acts nontrivially on $\sigma_i$ but trivially on $\sigma_j$ and $\sigma_k$, so we have $$N_{\langle i \rangle}^+ \cong \z \{ 2(1 - \sigma_i), \sigma_j, \sigma_k\},$$ and similarly, $$N_{\langle j \rangle}^+ \cong \z\{ \sigma_i, 2(1-\sigma_j), \sigma_k\},$$ $$N_{\langle k \rangle}^+ \cong \z\{ \sigma_i, \sigma_j, 2(1-\sigma_k)\}.$$ Finally, since $W_K(K) \cong e$ must act trivially on $\z$, we have $$N_K^+ \cong N_K \cong \z \{ \sigma_i, \sigma_j, \sigma_k\}.$$ To determine the ranks of $\piGa$, we now compute intersections. In the following, we let $a \in \{i, j, k\}$, $a' \in \{i,j,k\} \setminus \{a\}$, and $a'' \in \{i,j,k\} \setminus \{a, a'\}.$ Then we have \begin{align*} N_e^+ \cap N_{\langle a \rangle}^+ &\cong \z\{2(1-\sigma_a), \sigma_a - \sigma_{a'} \}, \\ N_e^+ \cap N_K^+ &\cong \z\{\sigma_i - \sigma_j, \sigma_i - \sigma_k\}, \\ N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ &\cong \z \{ 2(1 - \sigma_a - \sigma_{a'}), \sigma_{a''} \}, \\ N_{\langle a \rangle}^+ \cap N_K^+ &\cong \z \{ \sigma_{a'}, \sigma_{a''} \},\\ N_e^+ \cap N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ &\cong \{0\},\\ N_{\langle a \rangle}^+ \cap N_{\langle a'' \rangle}^+ \cap N_K^+ &\cong \z\{\sigma_{a'}\},\\ N_e^+ \cap N_{\langle a \rangle}^+ \cap N_K^+ &\cong \z \{\sigma_a - \sigma_{a'} \},\\ N_{\langle a \rangle} \cap N_{\langle a' \rangle} \cap N_{\langle a'' \rangle} &\cong \z \{ 2(1 - \sigma_a - \sigma_{a'} - \sigma_{a''}) \}, \end{align*} and all $4$- and $5$-fold intersections are $\{0\}$. 
\begin{prop} With $a, a', a''$ as above, we have \[ \rk \pi_V^K(S^0) = \begin{cases} 5 \quad & \text{ if } V = 0, \\ 3 \quad & \text{ if } V \in (\z\{\sigma_a - \sigma_{a'}\} \cup \z\{\sigma_{a'}\} \cup \z\{2(1 - \sigma_a - \sigma_{a'} - \sigma_{a''})\}) \setminus \{0\}, \\ 2 \quad & \text{ if } V \in (\bigcup_{H \neq H'} N_H^+ \cap N_{H'}^+ ) \setminus (\bigcup_{H \neq H' \neq H'' \neq H} N_H^+ \cap N_{H'}^+ \cap N_{H''}^+), \\ 1 \quad & \text{ if } V \in (\bigcup_H N_H^+) \setminus (\bigcup_{H \neq H'} N_H^+ \cap N_{H'}^+), \\ 0 \quad & \text{ otherwise.} \end{cases} \] \end{prop} \subsection{Dihedral groups of order $2p$, $p$ odd} We have $$RO(D_{2p}) \cong \z \{ 1, \sigma, \phi_1, \ldots, \phi_q\},$$ where $1$ is the trivial representation, $\sigma$ is the sign representation, and $\phi_t : D_{2p} \to \Aut(\r^2) \cong \Aut(\c)$ sends the generator of $C_p \subseteq D_{2p}$ to $\cdot e^{2\pi i t / p}$ and the generator of $C_2 \subseteq D_{2p}$ to reflection across the real axis. Then \begin{align*} N_e &\cong \z\{1 - \sigma, 2 - \phi_1, \ldots, 2 - \phi_q\}, \\ N_{C_2} &\cong \z \{ 1 - \sigma, 1 - \phi_1, \ldots, 1 - \phi_q\}, \\ N_{C_p} &\cong \z \{ 1 - \sigma, \phi_1, \ldots, \phi_q\}, \\ N_{D_{2p}} &\cong \z \{ \sigma, \phi_1, \ldots, \phi_q\}. \end{align*} Since $W_{D_{2p}}(C_2) \cong e \cong W_{D_{2p}}(D_{2p})$, we have $N_{C_2}^+ = N_{C_2}$ and $N_{D_{2p}}^+ = N_{D_{2p}}$. On the other hand, $W_{D_{2p}}(e) \cong D_{2p}$ and $W_{D_{2p}}(C_p) \cong C_2$, so \begin{align*} N_e^+ &\cong \z \{ 2(1 - \sigma), 2(2 - \phi_1), \phi_1-\phi_2, \ldots, \phi_1 - \phi_q\}, \\ N_{C_p}^+ &\cong \z \{2(1 - \sigma), 2\phi_1, \phi_1-\phi_2,\ldots,\phi_1-\phi_q\}. 
\end{align*} We now compute intersections: \begin{align*} N_e^+ \cap N_{C_2}^+ &\cong \z \{2(1-\sigma), \phi_1-\phi_2, \ldots, \phi_1 - \phi_q\}, \\ N_e^+ \cap N_{C_p}^+ &\cong \z \{2(1-\sigma), \phi_1-\phi_2, \ldots, \phi_1 - \phi_q\}, \\ N_e^+ \cap N_{D_{2p}}^+ &\cong \z \{4\sigma-2\phi_1, \phi_1-\phi_2, \ldots, \phi_1 - \phi_q\}, \\ N_{C_2}^+ \cap N_{C_p}^+ &\cong \z \{ 2(1 - \sigma), \phi_1-\phi_2, \ldots, \phi_1 - \phi_q\}, \\ N_{C_2}^+ \cap N_{D_{2p}}^+ &\cong \z \{ \sigma - \phi_1, \phi_1-\phi_2, \ldots, \phi_1 - \phi_q\}, \\ N_{C_p}^+ \cap N_{D_{2p}}^+ &\cong \z \{ \phi_1, \dots, \phi_q\}, \\ N_e^+ \cap N_{C_2}^+ \cap N_{C_p}^+ &\cong \z \{ 2(1 - \sigma), \phi_1-\phi_2, \ldots, \phi_1 - \phi_q\}, \\ N_e^+ \cap N_{C_2}^+ \cap N_{D_{2p}}^+ &\cong \z\{ \phi_1-\phi_2, \ldots, \phi_1 - \phi_q \}, \\ N_e^+ \cap N_{C_p}^+ \cap N_{D_{2p}}^+ &\cong \z\{ \phi_1-\phi_2, \ldots, \phi_1 - \phi_q\}, \\ N_{C_2}^+ \cap N_{C_p}^+ \cap N_{D_{2p}}^+ &\cong \z \{ \phi_1-\phi_2, \ldots, \phi_1 - \phi_q\}, \\ N_e^+ \cap N_{C_2}^+ \cap N_{C_p}^+ \cap N_{D_{2p}}^+ &\cong \z \{ \phi_1-\phi_2, \ldots, \phi_1 - \phi_q\}. \end{align*} \begin{prop} We have \[ \rk \pi_V^{D_{2p}}(S^0) = \begin{cases} 4 \quad & \text{ if } V \in \z \{ \phi_1-\phi_2, \ldots, \phi_1 - \phi_q\}, \\ 2 \quad & \text{ if } V \in (\bigcup_{H \neq H'} N_H^+ \cap N_{H'}^+ ) \setminus (\bigcup_{H \neq H' \neq H'' \neq H} N_H^+ \cap N_{H'}^+ \cap N_{H''}^+), \\ 1 \quad & \text{ if } V \in (\bigcup_H N_H^+) \setminus (\bigcup_{H \neq H'} N_H^+ \cap N_{H'}^+), \\ 0 \quad & \text{ otherwise.} \end{cases} \] \end{prop} \subsection{Quaternion group} Let $Q = Q_8$ denote the quaternion group of order $8$. We have $$RO(Q) = \z \{ 1, \sigma_i, \sigma_j, \sigma_k, h\},$$ where $1, \sigma_i, \sigma_j, \sigma_k$ are the pullbacks of the $K$-representations of the same name along the quotient map $Q \to Q/C_2 \cong K$, and $h$ is the unique irreducible $4$-dimensional representation of $Q$. 
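The fixed-point dimensions that cut out the subspaces $N_H$ for $Q$ can be tabulated in the same illustrative way (the tuple encoding of $V = a\cdot 1 + b\sigma_i + c\sigma_j + d\sigma_k + mh$ and the subgroup labels are our own). Here $\langle a\rangle$ denotes the cyclic subgroup of order $4$ generated by the quaternion $a$, and the central $C_2$ acts by $-1$ on $h$, so $h^H = 0$ for every nontrivial $H$:

```python
# Fixed-point dimensions for the quaternion group Q = Q_8.
# V = a*1 + b*sigma_i + c*sigma_j + d*sigma_k + m*h is encoded as
# (a, b, c, d, m); h is 4-dimensional with the centre acting by -1.

def fixed_dims_Q(a, b, c, d, m):
    return {
        "e": a + b + c + d + 4 * m,
        "C2": a + b + c + d,  # centre acts trivially on the sigma's
        "<i>": a + b,         # <i> has image <i> in Q/C_2 = K
        "<j>": a + c,
        "<k>": a + d,
        "Q": a,
    }
```

For instance, $4\cdot 1 - h$ has zero fixed points only for $H=e$, while $h$ itself has zero fixed points for every nontrivial subgroup, matching the bases of $N_e$ and $N_{C_2}$ below.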
Then with $a$, $a'$, and $a''$ as in our analysis of $K$, \begin{align*} N_e &\cong \z\{1 - \sigma_i, 1-\sigma_j, 1-\sigma_k, 4 \cdot 1 - h\}, \\ N_{C_2} &\cong \z\{ 1 - \sigma_i, 1 - \sigma_j, 1-\sigma_k, h\}, \\ N_{\langle a \rangle} &\cong \z \{ 1 - \sigma_a, \sigma_{a'}, \sigma_{a''}, h\}, \\ N_Q &\cong \z\{ \sigma_i, \sigma_j, \sigma_k, h\}, \end{align*} and \begin{align*} N_e^+ &\cong \z\{2(1 -\sigma_i), \sigma_i-\sigma_j, \sigma_i-\sigma_k, 4 \cdot 1-h\}, \\ N_{C_2}^+ &\cong \z\{2(1-\sigma_i), \sigma_i - \sigma_j, \sigma_i-\sigma_k, h\}, \\ N_{\langle a \rangle}^+ &\cong \z \{2(1-\sigma_a), \sigma_{a'}, \sigma_{a''}, h\}, \\ N_Q^+ &\cong \z\{\sigma_i, \sigma_j, \sigma_k, h\}. \end{align*} The $2$-fold intersections are as follows: \begin{align*} N_e^+ \cap N_{C_2}^+ &\cong \z \{2(1-\sigma_i), \sigma_i - \sigma_j, \sigma_i - \sigma_k\}, \\ N_e^+ \cap N_{\langle a \rangle}^+ &\cong \z \{ 2(1 - \sigma_a), \sigma_{a'} - \sigma_{a''}, h - 4 \sigma_{a'}\}, \\ N_e^+ \cap N_Q^+ &\cong \z \{ \sigma_i - \sigma_j, \sigma_i - \sigma_k, 4\sigma_i - h \}, \\ N_{C_2}^+ \cap N_{\langle a \rangle}^+ &\cong \z \{ 2(1 - \sigma_a), \sigma_{a'} - \sigma_{a''}, h\}, \\ N_{C_2}^+ \cap N_Q^+ &\cong \z\{ \sigma_i - \sigma_j, \sigma_i - \sigma_k, h\}, \\ N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ &\cong \z \{ 2(1 - \sigma_a - \sigma_{a'}), \sigma_{a''}, h\}, \\ N_{\langle a \rangle}^+ \cap N_Q^+ &\cong \z \{ \sigma_{a'}, \sigma_{a''}, h\}.
\end{align*} The $3$-fold intersections are as follows: \begin{align*} N_e^+ \cap N_{C_2}^+ \cap N_{\langle a \rangle}^+ &\cong \z \{ 2(1 - \sigma_a), \sigma_{a'} - \sigma_{a''}\}, \\ N_e^+ \cap N_{C_2}^+ \cap N_Q^+ &\cong \z \{ \sigma_i - \sigma_j, \sigma_i - \sigma_k\}, \\ N_e^+ \cap N_{\langle a \rangle}^+ \cap N_Q^+ &\cong \z \{\sigma_{a'} - \sigma_{a''}, h - 4 \sigma_{a'} \}, \\ N_e^+ \cap N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ &\cong \z \{2(1 - \sigma_a - \sigma_{a'} - \sigma_{a''}), h-4\sigma_{a''}\}, \\ N_{C_2}^+ \cap N_{\langle a \rangle}^+ \cap N_Q^+ &\cong \z\{\sigma_{a'}-\sigma_{a''},h\}, \\ N_{C_2}^+ \cap N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ &\cong \z \{ 2(1-\sigma_a - \sigma_{a'} - \sigma_{a''}), h\}, \\ N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ \cap N_Q^+ &\cong \z \{\sigma_{a''}, h\}, \\ N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ \cap N_{\langle a'' \rangle}^+ &\cong \z\{2(1-\sigma_a-\sigma_{a'}-\sigma_{a''}),h\}. \end{align*} The $4$-fold intersections are as follows: \begin{align*} N_e^+ \cap N_{C_2}^+ \cap N_{\langle a \rangle}^+ \cap N_Q^+ &\cong \z \{ \sigma_{a'} - \sigma_{a''} \}, \\ N_e^+ \cap N_{C_2}^+ \cap N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ &\cong \z \{ 2(1 - \sigma_a - \sigma_{a'} - \sigma_{a''})\}, \\ N_e^+ \cap N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ \cap N_{\langle a'' \rangle}^+ &\cong \z \{ 2(1 - \sigma_a - \sigma_{a'} - \sigma_{a''})\}, \\ N_{C_2}^+ \cap N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ \cap N_Q^+ &\cong \z\{ h\}, \\ N_{C_2}^+ \cap N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ \cap N_{\langle a'' \rangle}^+ &\cong \z \{ 2(1-\sigma_a-\sigma_{a'} - \sigma_{a''}), h\}, \\ N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ \cap N_{\langle a'' \rangle}^+ \cap N_Q^+ &\cong \z \{h\}.
\end{align*} The $5$-fold intersections are as follows: \begin{align*} N_e^+ \cap N_{C_2}^+ \cap N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ \cap N_Q^+ &\cong \{0\}, \\ N_e^+ \cap N_{C_2}^+ \cap N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ \cap N_{\langle a'' \rangle}^+ &\cong \z\{2(1 - \sigma_a - \sigma_{a'} - \sigma_{a''})\}, \\ N_e^+ \cap N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ \cap N_{\langle a'' \rangle}^+ \cap N_Q^+ &\cong \{0\}, \\ N_{C_2}^+ \cap N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ \cap N_{\langle a'' \rangle}^+ \cap N_Q^+ &\cong \z\{h\}. \end{align*} For completeness, the unique $6$-fold intersection is $$N_e^+ \cap N_{C_2}^+ \cap N_{\langle a \rangle}^+ \cap N_{\langle a' \rangle}^+ \cap N_{\langle a'' \rangle}^+ \cap N_Q^+ \cong \{0\}.$$ \begin{prop} With $a$, $a'$, $a''$ as above, we have \[ \rk \pi_V^K(S^0) = \begin{cases} 6 \quad & \text{ if } V=0, \\ 5 \quad & \text{ if } V \in ( \z\{h\} \cup \z\{2(1-\sigma_a-\sigma_{a'}-\sigma_{a''})\}) \setminus \{0\}, \\ 4 \quad & \text{ if } V \in (\z\{\sigma_{a'}-\sigma_{a''}\} \cup \z\{2(1-\sigma_a-\sigma_{a'}-\sigma_{a''}),h\}) \\ & \quad \quad \quad \setminus (\z\{h\} \cup \z\{2(1-\sigma_a-\sigma_{a'}-\sigma_{a''})\}), \\ 3 \quad & \text{ if } V \in (\bigcup_{H, H', H'' } N_H^+ \cap N_{H'}^+ \cap N_{H''}^+) \setminus (\bigcup_{H, H', H'', H'''} \bigcap_{L \in \{H, H', H'', H'''\}} N_L^+), \\ 2 \quad & \text{ if } V \in (\bigcup_{H \neq H'} N_H^+ \cap N_{H'}^+ ) \setminus (\bigcup_{H,H',H''} N_H^+ \cap N_{H'}^+ \cap N_{H''}^+), \\ 1 \quad & \text{ if } V \in (\bigcup_H N_H^+) \setminus (\bigcup_{H \neq H'} N_H^+ \cap N_{H'}^+), \\ 0 \quad & \text{ otherwise.} \end{cases} \] \end{prop} \bibliographystyle{alpha} \bibliography{GSerreFiniteBibliography} \end{document}
2205.02381v1
http://arxiv.org/abs/2205.02381v1
Topological Langmuir-cyclotron wave
\documentclass[english,pra,preprint,amsmath,amssymb,aps,longbibliography,showkeys,titlepage]{revtex4-2} \usepackage{lmodern} \usepackage[T1]{fontenc} \usepackage[latin9]{inputenc} \setcounter{secnumdepth}{3} \usepackage{amsmath} \usepackage{amsthm} \usepackage{amssymb} \usepackage{graphicx} \makeatletter \theoremstyle{plain} \newtheorem{thm}{\protect\theoremname} \theoremstyle{definition} \newtheorem{defn}[thm]{\protect\definitionname} \theoremstyle{plain} \newtheorem{lem}[thm]{\protect\lemmaname} \newtheorem{theorem}{} \usepackage{float} \usepackage{xcolor} \usepackage[hypertexnames=false]{hyperref} \hypersetup{ breaklinks = true, colorlinks = true, citecolor = {blue}, urlcolor = {blue}, linkcolor = {blue} } \makeatother \usepackage{babel} \providecommand{\definitionname}{Definition} \providecommand{\lemmaname}{Lemma} \providecommand{\theoremname}{Theorem} \begin{document} \title{Topological Langmuir-cyclotron wave} \author{Hong Qin} \email{[email protected]} \author{Yichen Fu} \email{[email protected]} \affiliation{Princeton Plasma Physics Laboratory and Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08540} \begin{abstract} A theoretical framework is developed to describe the Topological Langmuir-Cyclotron Wave (TLCW), a recently identified topological surface excitation in magnetized plasmas. As a topological wave, the TLCW propagates unidirectionally without scattering along complex boundaries. The TLCW is studied theoretically as a spectral flow of the Hamiltonian Pseudo-Differential-Operator (PDO) $\hat{H}$ for waves in an inhomogeneous plasma. The semi-classical parameter of the Weyl quantization for plasma waves is identified to be the ratio between the electron gyro-radius and the inhomogeneity scale length of the system.
Hermitian eigenmode bundles of the bulk Hamiltonian symbol $H$ for plasma waves are formally defined. Because momentum space in classical continuous media is in general contractible, the topology of the eigenmode bundles over momentum space is trivial. This is in stark contrast to condensed matter. Nontrivial topology of the eigenmode bundles in classical continuous media exists only over phase space. A boundary isomorphism theorem is established to facilitate the calculation of Chern numbers of eigenmode bundles over non-contractible manifolds in phase space. It also defines a topological charge of an isolated Weyl point in phase space without adopting any connection. Using these algebraic topological techniques and an index theorem formulated by Faure, it is rigorously proven that the nontrivial topology at the Weyl point of the Langmuir wave-cyclotron wave resonance generates the TLCW as a spectral flow. It is shown that the TLCW can be faithfully modeled by a tilted Dirac cone in phase space. An analytical solution of the entire spectrum of the PDO of a generic tilted phase space Dirac cone, including its spectral flow, is given. The spectral flow index of a tilted Dirac cone is one, and its mode structure is a shifted Gaussian function. \end{abstract} \maketitle \section{Introduction \label{sec:Introduction}} Topological waves in classical continuous media are an active research topic of practical importance. For example, it was discovered \citep{delplace2017topological,Faure2019} that the well-known equatorial Kelvin wave, which can trigger an El Ni\~no episode \citep{Roundy2007}, is a topological wave. Topological waves in cold magnetized plasmas have been recently studied \citep{parker2020topological,fu2021topological}.
Through comprehensive numerical simulations and a heuristic application of the principle of bulk-edge correspondence, a topological surface excitation called the Topological Langmuir-Cyclotron Wave (TLCW) was identified \citep{fu2021topological,Fu2022a}. As a topological wave, the TLCW has topological robustness, i.e., it is unidirectional and free of scattering and reflection. Thus, the TLCW excitation is expected to be experimentally observable. In the present study, we develop a theoretical framework to describe the TLCW and rigorously prove that it is produced by the nontrivial topology at the Weyl point due to the Langmuir wave-cyclotron wave resonance, using an index theorem for spectral flows formulated by Faure \citep{Faure2019} and tools of algebraic topology. Most of the techniques developed are applicable to general topological waves in classical continuous media as well. The key developments of the present study are summarized as follows. \begin{enumerate} \item The TLCW is theoretically described as a spectral flow of the global Hamiltonian Pseudo-Differential-Operator (PDO) $\hat{H}$ for waves in an inhomogeneous magnetized plasma. For this problem, the semi-classical parameter of the Weyl quantization operator, which maps the bulk Hamiltonian symbol $H$ to $\hat{H}$, is identified as the ratio between the electron gyro-radius and the scale length of the inhomogeneity. We emphasize the important role of the semi-classical parameters and the necessity to identify them for topological waves in classical continuous media according to the nature of the physics under investigation. \item We formally construct the Hermitian eigenmode bundles of the bulk Hamiltonian symbol $H$, and show that it is the topology of the eigenmode bundles over non-contractible manifolds in phase space that determines the properties of spectral flows for classical continuous media.
We show that the topology of eigenmode bundles over momentum (wavenumber) space is trivial in classical continuous media, unlike in condensed matter. Without modification, the Atiyah-Patodi-Singer (APS) index theorem \citep{Atiyah1976} proved for spectral flows over $S^{1}$ is only applicable to condensed matter, and Faure's index theorem \citep{Faure2019} for spectral flows over $\mathbb{R}$-valued wavenumbers should be adopted for classical continuous media. \item A boundary isomorphism theorem (Theorem \ref{thm:BoundaryIso}) is proved to facilitate the calculation of Chern numbers of eigenmode bundles over a 2D sphere in phase space. The theorem also defines a topological charge of an isolated Weyl point in phase space using a topological method, i.e., without using any connection. \item An analytical solution of the global Hamiltonian PDO of a generic tilted Dirac cone in phase space is found, which generalizes the previous result for a straight Dirac cone \citep{Faure2019}. The spectral flow index of a tilted phase space Dirac cone is calculated to be one, and the mode structure of the spectral flow is found to be a shifted Gaussian function. \item These tools are applied to prove the existence of the TLCW in magnetized plasmas with the spectral flow index being one. The Chern theorem (Theorem \ref{thm:Chern}), instead of the Berry connection or any other connection, is used to calculate the Chern numbers, and it is shown that the TLCW can be faithfully described by a tilted Dirac cone in phase space. \end{enumerate} The paper is organized as follows. In Sec.\,\ref{sec:Problem-statement}, we pose the problem to be studied and describe the general properties of the TLCW identified by numerical simulations. Section \ref{sec:Numerial} presents additional numerical evidence and simulation results of the TLCW.
In Sec.\,\ref{sec:Topology}, we define the Hermitian eigenmode bundles of waves in classical continuous media and develop algebraic topological tools to study the nontrivial topology of eigenmode bundles over phase space. The existence of the TLCW as a spectral flow is proven in Sec.\,\ref{sec:TLCWPrediction}. We construct a tilted Dirac cone model for the TLCW in Sec.\,\ref{sec:AnalyticalTDC}, and the entire spectrum of the PDO of a generic tilted Dirac cone, including its spectral flow, is solved analytically. \section{Problem statement and general properties of TLCW \label{sec:Problem-statement}} We first pose the problem to be addressed in the present study, introduce the governing equations, set up the class of equilibrium plasmas that might admit the TLCW, and describe its general properties. Consider a cold magnetized plasma with fixed ions. The equilibrium magnetic field $\boldsymbol{B}_{0}=B_{0}\boldsymbol{e}_{z}$ is assumed to be constant. Because the plasma is cold, any density profile $n(\boldsymbol{r})$ is an admissible equilibrium. Denote by $L\sim\left|n/\nabla n\right|$ the characteristic scale length of $n$. There is no equilibrium electric field or electron flow velocity, i.e., $\boldsymbol{v}_{0}=0$ and $\boldsymbol{E}_{0}=0.$ The linear dynamics of the system is described by the following equations for the perturbed electromagnetic field $\boldsymbol{E}$ and $\boldsymbol{B}$, and the perturbed electron flow $\boldsymbol{v}$. \begin{align} & \partial_{t}\boldsymbol{v}=-e\boldsymbol{E}/m_{e}-\Omega\boldsymbol{v}\times\boldsymbol{e}_{z},\label{eq:basic1}\\ & \partial_{t}\boldsymbol{E}=c\nabla\times\boldsymbol{B}+4\pi en\boldsymbol{v},\label{eq:basic2}\\ & \partial_{t}\boldsymbol{B}=-c\nabla\times\boldsymbol{E},\label{eq:basic3} \end{align} where $\Omega=eB_{0}/m_{e}c$ is the cyclotron frequency, $m_{\mathrm{e}}$ is the electron mass, and $e>0$ is the elementary charge.
We normalize $\boldsymbol{v}$ by $1/\sqrt{4\pi n(\boldsymbol{r})m_{\mathrm{e}}}$, $t$ by $1/\Omega$, $\boldsymbol{r}$ by $L$, and $\nabla$ by $1/L$. In the normalized variables, Eqs.\,(\ref{eq:basic1})-(\ref{eq:basic3}) can be written as \begin{align} \mathrm{i}\partial_{t}\psi & =\hat{H}\psi,\\ \psi & =\left(\begin{array}{c} \boldsymbol{v}\\ \boldsymbol{E}\\ \boldsymbol{B} \end{array}\right),\\ \hat{H} & (\boldsymbol{r},-\mathrm{i}\eta\nabla)=\begin{pmatrix}\mathrm{i}\boldsymbol{e}_{z}\times & -\mathrm{i}\omega_{\text{p}} & 0\\ \mathrm{i}\omega_{\text{p}} & 0 & \mathrm{i}\eta\nabla\times\\ 0 & -\mathrm{i}\eta\nabla\times & 0 \end{pmatrix},\label{eq:Hhat} \end{align} where $\mathrm{i}\boldsymbol{e}_{z}\times$ and $\mathrm{i}\eta\nabla\times$ denote the $3\times3$ anti-symmetric matrices corresponding to $\boldsymbol{e}_{z}$ and $\nabla$, respectively. For a generic vector $\boldsymbol{u}=(u_{x},u_{y},u_{z})$ in $\mathbb{R}^{3},$ the corresponding $3\times3$ anti-symmetric matrix is defined to be \begin{equation} \boldsymbol{u}\times\equiv\begin{pmatrix}0 & -u_{z} & u_{y}\\ u_{z} & 0 & -u_{x}\\ -u_{y} & u_{x} & 0 \end{pmatrix}. \end{equation} In Eq.\,(\ref{eq:Hhat}), $\omega_{\mathrm{p}}(\boldsymbol{r})=\sqrt{4\pi n(\boldsymbol{r})e^{2}/m_{\mathrm{e}}}/\Omega$ is the local plasma frequency normalized by $\Omega,$ and $\eta\equiv c/(L\Omega)\sim\rho_{e}/L$ is a dimensionless parameter proportional to the ratio between the electron gyro-radius and the scale length of $n$. Here, $\eta$ is assumed to be small, i.e., $\eta\ll1$, and it plays the role of the semi-classical parameter for the Weyl quantization of this problem. The Weyl quantization operator \begin{equation} \mathrm{Op}_{\eta}:f\rightarrow\hat{f}=\mathrm{Op}_{\eta}(f) \end{equation} maps a function $f(\boldsymbol{r},\boldsymbol{k})$ in phase space, called a symbol, to a Pseudo-Differential-Operator (PDO) $\hat{f}$ acting on functions $\psi(\boldsymbol{r})$ on the $n$-dimensional configuration space.
The operator $\hat{f}=\mathrm{Op}_{\eta}(f)$ is defined by \begin{equation} \hat{f}\psi(\boldsymbol{r})=\frac{1}{(2\pi\eta)^{n}}\int f\left(\frac{\boldsymbol{r}+\boldsymbol{s}}{2},\boldsymbol{k}\right)\exp\left(\frac{\mathrm{i}\boldsymbol{k}\cdot\left(\boldsymbol{r}-\boldsymbol{s}\right)}{\eta}\right)\psi(\boldsymbol{s})\mathrm{d}\boldsymbol{s}\mathrm{d}\boldsymbol{k}\thinspace. \end{equation} In particular, we have $\hat{\boldsymbol{k}}=-\mathrm{i}\eta\nabla$. For the $\hat{H}$ given by Eq.\,(\ref{eq:Hhat}), its pre-image $H$, i.e., the symbol $H$ satisfying $\hat{H}=\mathrm{Op}_{\eta}(H)$, is \begin{equation} H(\boldsymbol{r},\boldsymbol{k})=\begin{pmatrix}\mathrm{i}\boldsymbol{e}_{z}\times & -\mathrm{i}\omega_{\text{p}} & 0\\ \mathrm{i}\omega_{\text{p}} & 0 & -\boldsymbol{k}\times\\ 0 & \boldsymbol{k}\times & 0 \end{pmatrix}.\label{eq:H} \end{equation} In quantum theory, the semi-classical parameter is typically the Planck constant $\hbar$, and it is a crucial parameter in the index theorems for spectral flows \citep{Atiyah1976,Faure2019} of PDOs. For the plasma waves in the present study, the semi-classical parameter is identified to be $\eta\equiv c/(L\Omega)$, which is the ratio between the electron gyro-radius and the scale length of the equilibrium plasma. Notice that in the PDO $\hat{H}(\boldsymbol{r},-\mathrm{i}\eta\nabla)$ the differential operator $\nabla$ is normalized by $1/L$, but in the symbol $H(\boldsymbol{r},\boldsymbol{k})$ the wavenumber $\boldsymbol{k}$ is normalized by $\Omega/c$, thanks to the small semi-classical parameter $\eta$ strategically placed in the Weyl quantization operator $\mathrm{Op}_{\eta}$. This structure between the PDO and the symbol is required for the application of the index theorem of spectral flow \citep{Atiyah1976,Faure2019}.
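The symbol of Eq.\,(\ref{eq:H}) is straightforward to assemble and sanity-check numerically. The following is a minimal sketch, not taken from the paper (the function names and the sample values of $\omega_{\text{p}}$ and $\boldsymbol{k}$ are our illustrative assumptions); it confirms that $H(\boldsymbol{r},\boldsymbol{k})$ is a $9\times9$ Hermitian matrix with real eigenfrequencies:

```python
import numpy as np

def cross(u):
    """3x3 antisymmetric matrix of u, so that cross(u) @ w equals u x w."""
    ux, uy, uz = u
    return np.array([[0.0, -uz, uy],
                     [uz, 0.0, -ux],
                     [-uy, ux, 0.0]])

def H_symbol(wp, k):
    """9x9 bulk Hamiltonian symbol H(r, k) acting on (v, E, B).

    wp is the normalized local plasma frequency omega_p(r)."""
    Z, I = np.zeros((3, 3)), np.eye(3)
    return np.block([
        [1j * cross([0.0, 0.0, 1.0]), -1j * wp * I, Z],
        [1j * wp * I,                 Z,            -cross(k)],
        [Z,                           cross(k),     Z],
    ])

# sample point in phase space (arbitrary test values)
H = H_symbol(0.7, np.array([0.3, -0.4, 1.2]))
```

Because $H$ is Hermitian, `np.linalg.eigvalsh` returns nine real eigenfrequencies, consistent with the nine eigenmodes discussed below.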
In the study of other topological properties of classical media, such as electromagnetic materials \citep{silveirinha2015chern,silveirinha2016bulk,gangaraj2017berry,marciani2020chiral}, fluid systems \citep{delplace2017topological,Faure2019,perrot2019topological,tauber2019bulk,venaille2021wave,zhu2021topology,souslov2019topological,qin2019kelvin,fu2020physics,David2022}, and magnetized plasmas \citep{gao2016photonic,yang2016one,parker2020nontrivial,parker2020topological,parker2021topological,fu2021topological,Fu2022a,Rajawat2022,qin2021spontaneous}, we believe that appropriate semi-classical parameters should also be carefully determined first based on the specific nature of the problems under investigation. In plasma physics, the symbol $H(\boldsymbol{r},\boldsymbol{k})$ is called the local Hamiltonian of the system, but it is known as the bulk Hamiltonian in condensed matter physics. Thus, in the present context, the phrases ``bulk modes'' and ``local modes'' have the same meaning, referring to the spectrum determined by $H(\boldsymbol{r},\boldsymbol{k})$ locally at each $\boldsymbol{r}$ and each $\boldsymbol{k}$ separately. The spectrum of the PDO $\hat{H}(\boldsymbol{r},-\mathrm{i}\eta\nabla)$ will be called global modes. The edge modes, including topological edge modes, refer to the global modes of $\hat{H}(\boldsymbol{r},-\mathrm{i}\eta\nabla)$ whose mode structures are non-vanishing only in some narrow interface regions. It is unfortunate that the phrases ``local modes'' and ``edge modes'', defined in different branches of physics, have very different meanings. For a fixed $\boldsymbol{r}$ and a fixed $\boldsymbol{k}$, $H(\boldsymbol{r},\boldsymbol{k})$ is a $9\times9$ Hermitian matrix. Denote its 9 eigenmodes by \[ (\omega_{n},\psi_{n}),\thinspace\thinspace n=-4,-3,\cdots,3,4\thinspace, \] which are ordered by the value of the eigenfrequencies, i.e., $\omega_{i}\leq\omega_{j}$ for $i<j$. 
Under this index convention, it can be verified that $\omega_{-n}=-\omega_{n}$ and $\omega_{0}=0$, i.e., the spectrum is symmetric about $\omega=0$. Plotted in Fig.\,\ref{fig:dispersion_relation} are the dispersion relations of $\omega_{n}$ $(n=1,2,3,4)$ for an over-dense and an under-dense plasma, respectively. The eigenfrequencies are plotted as functions of $k_{z}$ and $k_{y}$ only since the spectrum is invariant when $\boldsymbol{k}$ rotates in the $x$-$y$ plane. \begin{figure}[ht] \centering \includegraphics[width=11cm]{Fig_1} \caption{The dispersion relation $\omega_{n}(k_{z},k_{y})$ $(n=1,2,3,4)$ for (a) an over-dense plasma and (b) an under-dense plasma. Different values of $k_{y}$ are indicated by the color map.} \label{fig:dispersion_relation} \end{figure} Straightforward analysis shows that for a given $\omega_{\text{p}},$ the spectrum has two possible resonances, a.k.a. Weyl points, when $\boldsymbol{k}_{\perp}=0$ and $k_{z}=k^{\pm}$, where \begin{equation} k^{\pm}\equiv\dfrac{\omega_{\text{p}}}{\sqrt{1\pm\omega_{\text{p}}}} \end{equation} are two critical wavenumbers for the given $\omega_{\text{p}}.$ We are interested in the resonance at $\boldsymbol{k}_{\perp}=0$ and $k_{z}=k^{-}$, which is between the Langmuir wave and the cyclotron wave (the R-wave near the cyclotron frequency). This Langmuir-Cyclotron (LC) resonance or Weyl point exists if and only if the plasma is under-dense, i.e., $\omega_{\text{p}}<1$. For a given $k_{z}$, the LC resonance occurs when $\omega_{\text{p}}=\omega_{\text{pc}}$, where \begin{equation} \omega_{\text{pc}}\equiv\dfrac{\sqrt{k_{z}^{4}+4k_{z}^{2}}-k_{z}^{2}}{2} \end{equation} is the critical plasma frequency for the given $k_{z}$.
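These spectral properties can be verified directly from the symbol of Eq.\,(\ref{eq:H}). The sketch below is ours (the sample values of $\omega_{\text{p}}$ and $k_{z}$ are illustrative assumptions); it checks that $\omega_{-n}=-\omega_{n}$, that $\omega_{0}=0$, that $\omega_{1}=\omega_{2}=\omega_{\text{p}}$ at $\boldsymbol{k}_{\perp}=0$ and $k_{z}=k^{-}$, and that $\omega_{\text{pc}}$ inverts the relation $k^{-}(\omega_{\text{p}})=k_{z}$:

```python
import numpy as np

def cross(u):
    ux, uy, uz = u
    return np.array([[0.0, -uz, uy], [uz, 0.0, -ux], [-uy, ux, 0.0]])

def H_symbol(wp, k):
    # 9x9 bulk Hamiltonian symbol acting on (v, E, B)
    Z, I = np.zeros((3, 3)), np.eye(3)
    return np.block([[1j*cross([0.0, 0.0, 1.0]), -1j*wp*I, Z],
                     [1j*wp*I, Z, -cross(k)],
                     [Z, cross(k), Z]])

wp = 0.6                        # an under-dense plasma, omega_p < 1 (test value)
km = wp/np.sqrt(1.0 - wp)       # k^-, the LC critical wavenumber
# eigenfrequencies at k_perp = 0, k_z = k^-, sorted so that w[n + 4] = omega_n
w = np.linalg.eigvalsh(H_symbol(wp, np.array([0.0, 0.0, km])))

# critical plasma frequency omega_pc for a sample k_z (test value)
kz0 = 1.7
wpc = (np.sqrt(kz0**4 + 4*kz0**2) - kz0**2)/2.0
```

At the LC resonance the longitudinal (Langmuir) and transverse (R-wave) subsystems decouple exactly at $\boldsymbol{k}_{\perp}=0$, so the degeneracy $\omega_{1}=\omega_{2}$ is exact rather than avoided.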
In the parameter space of $(\omega_{\text{p}},k_{x},k_{y})$ for a fixed $k_{z}$, when moving away from the LC Weyl point $(\omega_{\text{p}},k_{x},k_{y})=(\omega_{\text{pc}},0,0)$, the distance between $\omega_{1}$ and $\omega_{2}$ increases, i.e., \begin{equation} \omega_{2}-\omega_{1}>0,\,\,\text{when }(\omega_{\text{p}},k_{x},k_{y})\ne(\omega_{\text{pc}},0,0). \end{equation} Interesting topological physics happens in the neighborhood of the LC Weyl point $(\omega_{\text{p}},k_{x},k_{y})=(\omega_{\text{pc}},0,0)$. Figure \ref{fig:DiracCone} shows the surfaces of $\omega_{1}$ and $\omega_{2}$ as functions of $\omega_{\text{p}}$ and $k_{x}$ near the LC Weyl point. The structure is known as a Dirac cone. One important feature of the Dirac cone at the LC Weyl point is that it is tilted. Also, note that this tilted Dirac cone is in phase space since $\omega_{\text{p}}$ is a function of $x.$ This is different from condensed matter physics, where the Dirac cone is mostly in momentum space. \begin{figure}[ht] \includegraphics[width=8cm]{Fig_2} \caption{Tilted phase space Dirac cone in the neighborhood of the LC Weyl point $(\omega_{\text{p}},k_{x},k_{y})=(\omega_{\text{pc}},0,0)$. } \label{fig:DiracCone} \end{figure} Previous numerical studies and qualitative considerations \citep{fu2021topological,Fu2022a} indicated that for a given $k_{z},$ a simple 1D equilibrium that is inhomogeneous in the $x$-direction will admit the TLCW if the range of $\omega_{\text{p}}(x)$ includes $\omega_{\text{pc}}.$ In particular, we will consider the equilibrium profile displayed in Fig.\,\ref{fig:1DGeometry}. The profile is homogeneous in Region I $(x\le-1)$ and Region II $(x\ge1)$, and $\omega_{\text{p}}(x)$ decreases monotonically in the transition region $(-1\le x\le1)$.
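The statement that the gap $\omega_{2}-\omega_{1}$ closes only at the LC Weyl point can be spot-checked numerically. The sketch below is ours (the fixed $k_{z}$ and the perturbation size are illustrative assumptions); it samples a few points around the Weyl point and confirms that the gap vanishes exactly at $(\omega_{\text{pc}},0,0)$ and opens in every direction away from it:

```python
import numpy as np

def cross(u):
    ux, uy, uz = u
    return np.array([[0.0, -uz, uy], [uz, 0.0, -ux], [-uy, ux, 0.0]])

def H_symbol(wp, k):
    # 9x9 bulk Hamiltonian symbol acting on (v, E, B)
    Z, I = np.zeros((3, 3)), np.eye(3)
    return np.block([[1j*cross([0.0, 0.0, 1.0]), -1j*wp*I, Z],
                     [1j*wp*I, Z, -cross(k)],
                     [Z, cross(k), Z]])

kz = 1.0                                         # fixed parameter (test value)
wpc = (np.sqrt(kz**4 + 4*kz**2) - kz**2)/2.0     # LC Weyl point at (wpc, 0, 0)

def gap(wp, kx, ky):
    """omega_2 - omega_1 at the point (wp, kx, ky) of parameter space."""
    w = np.linalg.eigvalsh(H_symbol(wp, np.array([kx, ky, kz])))
    return w[6] - w[5]
```

On a denser grid the same function traces out the tilted Dirac cone of Fig.\,\ref{fig:DiracCone}.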
The profile of $\omega_{\text{p}}(x)$ satisfies the condition \begin{gather} \omega_{\text{p}1}>\omega_{\text{p}}(0)=\omega_{\text{pc}}>\omega_{\text{p}2}\thinspace,\label{eq: condition1}\\ \omega_{\text{p}1}\equiv\omega_{\text{p}}(x\le-1)\,,\\ \omega_{\text{p}2}\equiv\omega_{\text{p}}(x\ge1). \end{gather} The LC Weyl point is located at $x=0,$ and $\omega_{\text{p}1}$ is the plasma frequency of Region I and $\omega_{\text{p}2}$ that of Region II. Note that here $x$ is the dimensionless length normalized by $L$, the scale length of the equilibrium profile $\omega_{\text{p}}(x)$. \begin{figure}[ht] \centering \includegraphics[width=8cm]{Fig_3} \caption{One-dimensional equilibrium with one transition region. The $x$ coordinate has been normalized by $L,$ the scale length of $n(x)$.} \label{fig:1DGeometry} \end{figure} In the following analysis, we will assume $k_{z}$ is a fixed parameter unless explicitly stated otherwise. Through the variation of $\omega_{\text{p}}(x)$, the spectrum $\omega_{n}(x,k_{x},k_{y})$ of the bulk Hamiltonian symbol $H(\boldsymbol{r},\boldsymbol{k})$ becomes a function of $x$. We define the common gap condition for the spectra $\omega_{1}(x,k_{x},k_{y})$ and $\omega_{2}(x,k_{x},k_{y})$ as follows. \begin{defn} \label{def:CommGap} The spectra $\omega_{1}(x,k_{x},k_{y})$ and $\omega_{2}(x,k_{x},k_{y})$ are said to satisfy the common gap condition for parameters exterior to the ball $B_{r}^{3}\equiv\left\{ (x,k_{x},k_{y})\mid x^{2}+k_{x}^{2}+k_{y}^{2}\le r^{2}\right\} $ in the phase space of $(x,k_{x},k_{y})$, if there exists an interval $[g_{1}(r),g_{2}(r)]$ such that $\omega_{1}(x,k_{x},k_{y})<g_{1}(r)$ and $\omega_{2}(x,k_{x},k_{y})>g_{2}(r)$ for all $(x,k_{x},k_{y})\notin B_{r}^{3}.$ We call $[g_{1}(r),g_{2}(r)]$ the common gap of $\omega_{1}(x,k_{x},k_{y})$ and $\omega_{2}(x,k_{x},k_{y})$ for parameters exterior to the ball $B_{r}^{3}$.
For all the parameter space that we have explored, condition (\ref{eq: condition1}) implies the common gap condition of $\omega_{1}(x,k_{x},k_{y})$ and $\omega_{2}(x,k_{x},k_{y})$ for $(x,k_{x},k_{y})$ exterior to the ball $B_{1}^{3}$. Due to the algebraic complexity of $H(\boldsymbol{r},\boldsymbol{k})$, this fact cannot be proved through a simple procedure, even though no counterexample was found numerically. In Sec.\,\ref{sec:AnalyticalTDC}, we will give a proof of this fact for a reduced Hamiltonian corresponding to a tilted Dirac cone in the neighborhood of the LC Weyl point. In the analysis before Sec.\,\ref{sec:AnalyticalTDC}, we will take the common gap condition as an assumption. For the 1D equilibrium with inhomogeneity in the $x$-direction, $k_{y}$ and $k_{z}$ are good quantum numbers and can be treated as system parameters. The PDO $\hat{H}(\boldsymbol{r},-\mathrm{i}\eta\nabla)$ defined in Eq.\,(\ref{eq:Hhat}) reduces to \begin{equation} \hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})=\begin{pmatrix}\mathrm{i}\boldsymbol{e}_{z}\times & -\mathrm{i}\omega_{\text{p}}(x) & 0\\ \mathrm{i}\omega_{\text{p}}(x) & 0 & (\mathrm{i}\eta\partial_{x},-k_{y},-k_{z})\times\\ 0 & (-\mathrm{i}\eta\partial_{x},k_{y},k_{z})\times & 0 \end{pmatrix},\label{eq:Hhatx} \end{equation} and the corresponding bulk Hamiltonian symbol is \begin{equation} H(x,k_{x},k_{y},k_{z})=\begin{pmatrix}\mathrm{i}\boldsymbol{e}_{z}\times & -\mathrm{i}\omega_{\text{p}}(x) & 0\\ \mathrm{i}\omega_{\text{p}}(x) & 0 & (-k_{x},-k_{y},-k_{z})\times\\ 0 & (k_{x},k_{y},k_{z})\times & 0 \end{pmatrix}.\label{eq:Hx} \end{equation} In Region I or II, the system is homogeneous, and in each region separately it is valid to speak of the homogeneous eigenmodes of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$, which are identical to the bulk modes of $H(x,k_{x},k_{y},k_{z})$ in that region.
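One way to study the reduced PDO of Eq.\,(\ref{eq:Hhatx}) numerically is to discretize $-\mathrm{i}\eta\partial_{x}$ on a grid. The sketch below is ours, not the eigenmode solver used in the paper: the $\tanh$ profile and all parameter values are illustrative assumptions, and the periodic truncation of the profile introduces an artificial second, sharp interface at the domain boundary. It only verifies that the discretized operator is Hermitian with a spectrum symmetric about zero:

```python
import numpy as np

# Sketch parameters (ours); an odd N avoids the unpaired Nyquist mode.
N, Lbox, eta = 61, 8.0, 0.1
ky, kz = 0.0, 1.0                  # good quantum numbers, treated as parameters
x = -Lbox/2 + (Lbox/N)*np.arange(N)

wpc = (np.sqrt(kz**4 + 4*kz**2) - kz**2)/2.0
wp = wpc - 0.2*np.tanh(2*x)        # profile with wp1 > wp(0) = wpc > wp2

# Spectral derivative matrix d/dx on the periodic grid
ks = 2*np.pi*np.fft.fftfreq(N, d=Lbox/N)
F = np.fft.fft(np.eye(N), axis=0)
D = np.fft.ifft((1j*ks)[:, None]*F, axis=0)
K = -1j*eta*D                      # \hat{k}_x = -i eta d/dx, Hermitian

I, Z = np.eye(N), np.zeros((N, N))
Z3 = np.zeros((3*N, 3*N))
W = np.diag(wp)                    # multiplication by omega_p(x)

def cross_op(ux, uy, uz):
    """Cross-product matrix whose entries are commuting N x N operators."""
    return np.block([[Z, -uz, uy], [uz, Z, -ux], [-uy, ux, Z]])

Kx = cross_op(K, ky*I, kz*I)       # quantization of (k_x, k_y, k_z) x
Hhat = np.block([
    [1j*cross_op(Z, Z, I),     np.kron(np.eye(3), -1j*W), Z3],
    [np.kron(np.eye(3), 1j*W), Z3,                        -Kx],
    [Z3,                       Kx,                        Z3],
])
w = np.linalg.eigvalsh(Hhat)       # global spectrum at this (ky, kz)
```

Scanning such a spectrum over $k_{y}$ is what produces plots like Fig.\,\ref{fig:1Dspectrum}; resolving the edge mode itself requires finer grids and careful treatment of the boundary.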
The TLCW is a global eigenmode of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ localized in the transition region of $-1<x<1$; hence the name ``edge mode''. In Fig.\,\ref{fig:1Dspectrum}, the numerically calculated spectrum of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ is plotted as a function of $k_{y}$. The spectrum consists of three parts. The upper and lower parts consist of the eigenvalues of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ that fall in the bulk bands of $H(x,k_{x},k_{y},k_{z})$ in Regions I and II. The spectrum in the middle is a single line crossing the common band gap shared by Regions I and II. It is the TLCW. Its frequency increases monotonically with $k_{y}$, passing through $\omega_{\text{pc}}$. Such a curve of the dispersion relation for the edge mode as a function of $k_{y}$ is known as a spectral flow because it transports one eigenmode of $H(x,k_{x},k_{y},k_{z})$ from the lower band to the upper band across the band gap (see Fig.\,\ref{fig:1Dspectrum}a). If there were two or more edge modes in the gap, as in the case of oceanic equatorial waves \citep{delplace2017topological,Faure2019}, there would be two or more spectral flows. In Sec.\,\ref{sec:TLCWPrediction}, we will formally define spectral flow and show that the number of spectral flows reflects the topology of the plasma waves and is determined by a topological index known as the Chern number of a properly chosen manifold in the parameter space. This is why they are called topological edge modes. For the TLCW, we will show that its Chern number is one. \end{defn} \begin{figure}[ht] \centering \includegraphics[height=8cm]{Fig_4a} \hspace{1cm} \includegraphics[height=8cm]{Fig_4b} \caption{(a) Spectrum of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ as a function of $k_{y}$.
(b) The mode structure of the TLCW.} \label{fig:1Dspectrum} \end{figure} Condition (\ref{eq: condition1}) was identified as that for the existence of the TLCW \citep{fu2021topological,Fu2022a} by heuristically applying the bulk-edge correspondence using the numerically integrated values of the Berry curvature over the $k_{x}$-$k_{y}$ plane for the bulk modes of $H(x,k_{x},k_{y},k_{z})$ in Regions I and II. However, such an integral should not be used as a topological index for topological waves in classical continuous media, because the topology of vector bundles over a contractible base manifold is trivial, and the $k_{x}$-$k_{y}$ plane is contractible. A detailed discussion about the trivial topology over the momentum space for classical continuous media can be found in Sec.\,\ref{subsec:TrivialTop}. In Secs.\,\ref{sec:Topology}-\ref{sec:AnalyticalTDC}, we show how to formulate the bulk-edge correspondence for this problem in classical continuous media, using an index theorem of spectral flow over wavenumbers taking values in $\mathbb{R}$ established by Faure \citep{Faure2019} and techniques of algebraic topology. We rigorously prove that there exists one TLCW when condition (\ref{eq: condition1}) and the common band gap condition are satisfied. After presenting additional numerical evidence of the TLCW in the next section, we will start our analytical study in Sec.\,\ref{sec:Topology} by defining the Hermitian eigenmode bundle of plasma waves, with which the index theorem is concerned. \section{Additional numerical evidence of TLCW \label{sec:Numerial}} In this section, we display several more examples of the TLCW calculated numerically by a 1D eigenmode solver of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ \citep{fu2021topological} as well as by 3D time-dependent simulations \citep{Fu2022a}. The first example is the TLCW in a 1D equilibrium with two LC Weyl points, as illustrated in Fig.\,\ref{fig:1DGeometry2Edge}.
The high-density region is in the middle, and the low-density regions are on the two sides. When condition (\ref{eq: condition1}) is satisfied, we expect to observe two TLCWs, one at the right LC Weyl point and one at the left. The numerically solved spectrum of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ is shown in Fig.\,\ref{fig:1Dspectrum2Edege}, which meets this expectation. \begin{figure}[ht] \centering \includegraphics[width=9cm]{Fig_5} \caption{One-dimensional equilibrium with two Weyl points, located at the two regions where the density changes.} \label{fig:1DGeometry2Edge} \end{figure} \begin{figure}[ht] \centering \includegraphics[height=8cm]{Fig_6a} \hspace{0.5cm} \includegraphics[height=8cm]{Fig_6b} \caption{(a) Spectrum of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ as a function of $k_{y}$. (b) The mode structure of the right TLCW. (c) The mode structure of the left TLCW.} \label{fig:1Dspectrum2Edege} \end{figure} Shown in Figs.\,\ref{fig:zigzag} and \ref{fig:oval} are 3D simulations of the TLCW, where the boundaries between the two regions are nontrivial curves in a 2D plane. In the simulations, an electromagnetic source is placed on the boundary, marked by the yellow star. For the simulation in Fig.\,\ref{fig:zigzag}, the boundary is an irregular zigzag line. As anticipated, the TLCW propagates along the irregular boundary unidirectionally and without any scattering or reflection at the sharp turns. In Fig.\,\ref{fig:oval}, the boundary is a closed oval, and the TLCW stays on the oval boundary as expected. The propagation is again unidirectional and without any scattering into other modes. Because $\omega_{\text{p}1}>\omega_{\text{p}2}$, the TLCW propagates counterclockwise and carries a non-zero (kinetic) angular momentum \citep{Fu2022a}. Even though the source does not carry any angular momentum, an angular-momentum-carrying surface wave is generated by the mechanism of the TLCW.
\begin{figure}[ht] \centering \includegraphics[width=7cm]{Fig_7a}\includegraphics[width=7cm]{Fig_7b} \caption{(a) 2D and (b) 3D simulations of the TLCW excited on a zig-zag boundary.} \label{fig:zigzag} \end{figure} \begin{figure}[ht] \centering \includegraphics[height=8cm]{Fig_8a} \hspace{1cm} \includegraphics[height=8cm]{Fig_8b} \caption{(a) 2D and (b) 3D simulations of the TLCW excited on an oval boundary.} \label{fig:oval} \end{figure} \section{Topology of Hermitian eigenmode bundles of plasma waves and waves in classical continuous media\label{sec:Topology}} The index theorem \citep{Atiyah1976,Faure2019} establishes the bulk-edge correspondence linking the nontrivial topology of the bulk modes of the symbol $H$ and the spectral flow of the PDO $\hat{H}$. The topology here refers to that of the Hermitian bundles of eigenmodes over appropriate regions of the parameter spaces, which we now define. Denote the parameter space by $M$. In the present context, $M$ is the space of all possible 4-tuples $(x,k_{x},k_{y},k_{z})$. For a given $m=(x,k_{x},k_{y},k_{z})\in M$, the bulk Hamiltonian symbol $H(m)$ supports a finite number of eigenmodes. For the plasma wave operator defined by Eq.\,(\ref{eq:Hx}), there are 9 eigenmodes as explained in Sec.\,\ref{sec:Problem-statement}. But most of the discussion and results in this section are not specific to the plasma waves and remain valid for a general bulk Hamiltonian symbol $H(m)$ in continuous media with 1D inhomogeneity. When there is no degeneracy for a given eigenfrequency, all eigenvectors corresponding to the eigenfrequency form a 1D complex vector space. \begin{defn} Let $Q\subset M$ be a subset of the parameter space that forms a manifold with or without boundary. If the $j$-th eigenmode is not degenerate over $Q$, then the disjoint union of the eigenvector spaces spanned by $\psi_{j}(q)$ at all $q\in Q$ forms a 1D complex line bundle \begin{equation} \pi_{j}:E_{j}\rightarrow Q \end{equation} over $Q$.
With the standard Hermitian form \begin{equation} \left\langle u,w\right\rangle \equiv u\cdot\bar{w}, \end{equation} for all $u,w\in\pi_{j}^{-1}(q)$ and $q\in Q,$ $E_{j}\rightarrow Q$ is also a Hermitian bundle. It will be called the Hermitian line bundle of the $j$-th eigenmode of the bulk Hamiltonian symbol $H(m)$ over $Q$. \end{defn} If both the $l$-th eigenmode and the $j$-th eigenmode are non-degenerate over $Q,$ the Whitney sum of $E_{l}\rightarrow Q$ and $E_{j}\rightarrow Q$ defines the Hermitian bundle of the $l$-th and the $j$-th eigenmodes, \begin{equation} E_{\{l,j\}}\equiv E_{l}\oplus E_{j} \end{equation} with the Hermitian form defined as \begin{equation} \left\langle \boldsymbol{u},\boldsymbol{w}\right\rangle \equiv u_{l}\cdot\bar{w}_{l}+u_{j}\cdot\bar{w}_{j}, \end{equation} for all $\boldsymbol{u}=(u_{l},u_{j}),\boldsymbol{w}=(w_{l},w_{j})$, where $u_{l},w_{l}\in\pi_{l}^{-1}(q)$ and $u_{j},w_{j}\in\pi_{j}^{-1}(q)$. Similarly, the Hermitian bundle of a set of eigenmodes indexed by a set $J$ is defined as \begin{equation} E_{J}\equiv\oplus_{j\in J}E_{j}\thinspace, \end{equation} if for each $j\in J,$ the $j$-th eigenmode is not degenerate over $Q$. In general, $E_{J}$ can be defined when degeneracy exists only between indices in $J$, but we will not use this structure in the present study. The current study is concerned with the topology of the Hermitian line bundles $E_{j}\rightarrow Q$. In particular, we would like to know when the bundle is trivial, i.e., a global product bundle over $Q$, and when it is not. If nontrivial, it is desirable to calculate the Chern classes of the bundle to measure how twisted it is.
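The first Chern number of an eigenmode line bundle over a small sphere in the $(\omega_{\text{p}},k_{x},k_{y})$ phase space can also be evaluated with the gauge-invariant plaquette method of Fukui, Hatsugai, and Suzuki, which requires no explicit connection. This is not the method used in the text, but it provides an independent numerical check. The sketch below is ours (the fixed $k_{z}$, the sphere radii and centers, and the grid sizes are illustrative assumptions); a sphere enclosing the LC Weyl point should yield $|C_{1}|=1$ for the $\omega_{1}$ bundle, while a sphere enclosing no degeneracy should yield $0$:

```python
import numpy as np

def cross(u):
    ux, uy, uz = u
    return np.array([[0.0, -uz, uy], [uz, 0.0, -ux], [-uy, ux, 0.0]])

def H_symbol(wp, k):
    # 9x9 bulk Hamiltonian symbol acting on (v, E, B)
    Z, I = np.zeros((3, 3)), np.eye(3)
    return np.block([[1j*cross([0.0, 0.0, 1.0]), -1j*wp*I, Z],
                     [1j*wp*I, Z, -cross(k)],
                     [Z, cross(k), Z]])

kz = 1.0
wpc = (np.sqrt(kz**4 + 4*kz**2) - kz**2)/2.0
r, band = 0.2, 5                   # sphere radius; band index 5 = omega_1 (sorted)

def eigvec(theta, phi, center):
    """Normalized omega_1 eigenvector on the sphere around `center` in (wp, kx, ky)."""
    wp = center[0] + r*np.cos(theta)
    kx = center[1] + r*np.sin(theta)*np.cos(phi)
    ky = center[2] + r*np.sin(theta)*np.sin(phi)
    _, V = np.linalg.eigh(H_symbol(wp, np.array([kx, ky, kz])))
    return V[:, band]

def chern(center, Nt=24, Np=24):
    """Plaquette (Fukui-Hatsugai-Suzuki) estimate of C_1 over the sphere."""
    th = np.linspace(0.0, np.pi, Nt + 1)
    ph = np.linspace(0.0, 2*np.pi, Np, endpoint=False)
    U = [[eigvec(t, p, center) for p in ph] for t in th]
    C = 0.0
    for i in range(Nt):
        for j in range(Np):
            jp = (j + 1) % Np
            a, b, c, d = U[i][j], U[i+1][j], U[i+1][jp], U[i][jp]
            C += np.angle((a.conj() @ b)*(b.conj() @ c)*(c.conj() @ d)*(d.conj() @ a))
    return C/(2*np.pi)

C_weyl = chern((wpc, 0.0, 0.0))          # sphere enclosing the LC Weyl point
C_trivial = chern((wpc + 0.5, 0.0, 0.0)) # sphere enclosing no degeneracy
```

Because the link phases cancel pairwise, the sum of plaquette fluxes is quantized to an integer once the grid is fine enough, regardless of the phase (gauge) choices made by the eigensolver at each point.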
For the Hermitian line bundles of eigenmodes of plasma waves, we will show in Secs.\,\ref{sec:TLCWPrediction} and \ref{sec:AnalyticalTDC} that the topological index of the $E_{1}$ bundle over a properly chosen non-contractible, compact manifold in phase space $Q$, calculated from its first Chern class $C_{1}(E_{1})$, determines the number of TLCWs at the transition region. For Hermitian bundles, the associated principal bundles are $U(n)$ bundles and each Chern class $C_{j}$ is a de Rham cohomology class of the base manifold constructed from a curvature 2-form of the bundles. According to the Chern-Weil theorem, different connections for the bundles yield the same de Rham cohomology classes on the base manifold. In the present study, it is only necessary to calculate the first Chern class, and the following result is useful, \begin{equation} C_{1}(E_{J})=\sum_{j\in J}C_{1}(E_{j})\,.\label{eq:C1} \end{equation} The right-hand side of Eq.\,(\ref{eq:C1}) is relatively easy to calculate because each $E_{j}$ is a Hermitian line bundle, whose first Chern class is given by \begin{align} C_{1} & =\dfrac{\mathrm{i}}{2\pi}\theta,\\ \theta & =\mathrm{d}\chi, \end{align} where $\theta$ is a curvature 2-form and $\chi$ is a connection. As mentioned above, different connections will generate the same $C_{1}$ class. Nevertheless, the Hermitian line bundle is endowed with a natural connection \begin{equation} \chi=\left\langle w,\mathrm{d}w\right\rangle , \end{equation} which is a $u(1)$-valued local 1-form in each trivialization patch. This natural connection for the Hermitian line bundle is known as the Berry connection in condensed matter physics or the Simon connection \footnote{It was Barry Simon \citep{Simon1983,Castelvecchi2020} who first pointed out that Michael Berry's phase is an anholonomy of the natural connection on a Hermitian line bundle. For this reason, Frankel \citep{Frankel2011} noted the temptation to call it the Berry-Barry connection.
However, as Barry Simon pointed out, it had been known to geometers such as Bott and Chern \citep{Bott1965}. Given the current culture of inclusion, it is probably more appropriate to call it the Bott-Chern-Berry-Simon connection.}. \subsection{Trivial topology of plasma waves in momentum space \label{subsec:TrivialTop}} In terms of topological properties, there is a major difference between condensed matter and classical continuous media such as plasmas and fluids. The momentum space, or wavenumber space, of typical condensed matter systems is the Brillouin zone, which is non-contractible due to the periodicity of the lattices. In contrast, the wavenumber space in plasmas and fluids is contractible, and it is a well-known fact that vector bundles over a contractible manifold are trivial. Here, a topological manifold $M$ is called contractible if it is of the same homotopy type as a point, i.e., there exist a point $x_{0}$ and continuous maps $f:M\rightarrow\{x_{0}\}$ and $g:\{x_{0}\}\rightarrow M$ such that $f\circ g$ is homotopic to the identity on $\{x_{0}\}$ and $g\circ f$ is homotopic to the identity on $M.$ Because of its importance to continuous media in classical physics, we formalize this result as a theorem. \begin{thm} \label{thm:contractible}Let $Q$ be a subset of the parameter space $M$ for a bulk Hamiltonian symbol $H$. If the $j$-th eigenmode is non-degenerate on $Q$, and $Q$ is a contractible manifold, then the Hermitian line bundle $E_{j}\rightarrow Q$ is trivial. In particular, the $n$-th Chern class $C_{n}(E_{j}\rightarrow Q)=0$ for $n\ge1.$ \end{thm} Note that Theorem \ref{thm:contractible} holds for any bulk Hamiltonian symbol. For the $H(\boldsymbol{r},\boldsymbol{k})$ defined in Eq.\,(\ref{eq:H}) and the $H(x,k_{x},k_{y},k_{z})$ defined in Eq.\,(\ref{eq:Hx}) for plasma waves, a more specific result is available as a direct corollary of Theorem \ref{thm:contractible}.
\begin{thm} \label{thm:ContractiblePlasma}For the bulk Hamiltonian symbol $H(\boldsymbol{r},\boldsymbol{k})$ defined in Eq.\,(\ref{eq:H}) for plasma waves, when $k_{z}\neq k^{\pm},$ the Hermitian line bundles of all eigenmodes over the perpendicular wavenumber plane $Q_{k_{\perp}}=\{(k_{x},k_{y})\mid k_{x}\in\mathbb{R},\thinspace k_{y}\in\mathbb{R}\}=\mathbb{R}^{2}$ are trivial. In particular, $C_{n}(E_{j}\rightarrow Q_{k_{\perp}})=0$ for $n\ge1$ and $-4\le j\le4.$ \end{thm} The fact that $C_{n}(E_{j}\rightarrow Q)$ vanishes when $Q$ is contractible is expected after all because $C_{n}$ is the de Rham cohomology class of the base manifold. But Theorem \ref{thm:ContractiblePlasma} is important. It tells us that the plasma wave topology over the $k_{x}$-$k_{y}$ plane is trivial if $k_{z}\neq k^{\pm}.$ Nontrivial topology of plasma wave bundles occurs only over non-contractible parameter manifolds, for example, over an $S^{2}$ surface in the phase space of $(x,k_{x},k_{y})$, as we will show in Sec.\,\ref{subsec:NonTrivialTop}. Before leaving this subsection, we would like to point out that in recent studies of wave topology in classical continuous media, much effort has been made to calculate ``topological indices'' or ``Chern numbers'' of the eigenmode bundles over the contractible $k_{x}$-$k_{y}$ plane. This type of effort is characterized by the attempt to evaluate the integration of the Berry curvature or various modified versions thereof over the $k_{x}$-$k_{y}$ plane. The difficulties involved were often attributed to the fact that the $k_{x}$-$k_{y}$ plane is not compact. As we see from Theorems \ref{thm:contractible} and \ref{thm:ContractiblePlasma}, when the wave bundle is well-defined over the entire $k_{x}$-$k_{y}$ plane, its topology is trivial. In these cases, the non-compactness of the $k_{x}$-$k_{y}$ plane is irrelevant, as is the question of whether an integer or non-integer index can be designed.
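The pitfall just described can be illustrated numerically with a toy two-band symbol (an illustrative stand-in, not the plasma symbol of Eq.\,(\ref{eq:Hx})): for the massive Dirac model $H(k_{x},k_{y})=k_{x}\sigma_{x}+k_{y}\sigma_{y}+M\sigma_{z}$, which is smooth and non-degenerate over the entire contractible $k_{x}$-$k_{y}$ plane, the integrated Berry flux of the lower band approaches $\pm\pi$, i.e., a half-integer ``Chern number,'' as the integration region grows. A minimal Python sketch, with all parameter values chosen purely for illustration:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], complex)
SY = np.array([[0, -1j], [1j, 0]], complex)
SZ = np.array([[1, 0], [0, -1]], complex)

def lower_band(kx, ky, M=1.0):
    """Normalized lower eigenvector of the toy massive Dirac symbol
    H(k) = kx*SX + ky*SY + M*SZ (not the plasma wave symbol)."""
    _, v = np.linalg.eigh(kx * SX + ky * SY + M * SZ)
    return v[:, 0]

def berry_flux_over_plane(K=20.0, n=200):
    """Plaquette (link-variable) Berry flux of the lower band summed over
    the square [-K, K]^2; the region is open, so no quantization occurs."""
    ks = np.linspace(-K, K, n + 1)
    psi = [[lower_band(kx, ky) for ky in ks] for kx in ks]
    flux = 0.0
    for i in range(n):
        for j in range(n):
            u1 = np.vdot(psi[i][j], psi[i + 1][j])
            u2 = np.vdot(psi[i + 1][j], psi[i + 1][j + 1])
            u3 = np.vdot(psi[i + 1][j + 1], psi[i][j + 1])
            u4 = np.vdot(psi[i][j + 1], psi[i][j])
            flux += np.angle(u1 * u2 * u3 * u4)
    return flux

# the "Chern integral" over the plane tends to a half integer, not an integer
print(abs(berry_flux_over_plane()) / (2 * np.pi))  # close to 1/2 for large K
```

The half-integer limit makes concrete why no meaningful integer index can be extracted from the contractible plane alone.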
\subsection{Nontrivial plasma wave topology in phase space \label{subsec:NonTrivialTop}} In the present context, the ultimate utility of the topological property of the eigenmode bundles of the bulk Hamiltonian symbol $H$ is to predict the existence of the topological edge modes of the global Hamiltonian PDO $\hat{H}.$ For this purpose, the proper plasma wave eigenmode bundles are over the 2D sphere in the parameter space of $(x,k_{x},k_{y})$ for the $H(x,k_{x},k_{y},k_{z})$ defined in Eq.\,(\ref{eq:Hx}), \begin{equation} S_{1}^{2}=\left\{ (x,k_{x},k_{y})\mid x^{2}+k_{x}^{2}+k_{y}^{2}=1\right\} =\partial B_{1}^{3}, \end{equation} where $B_{1}^{3}$ is the 3D ball with radius $r=1$. One indicator of nontrivial topology, or twist, of a wave eigenmode bundle over $S_{1}^{2}$ is the number of zeros a nontrivial section must have, akin to the situation of the hairy ball theorem for the tangent bundle of $S^{2}$. To include the possibilities of repeated zeros, we follow Frankel \citep{Frankel2011} to define the index of an isolated zero point $z$ of a section $u$ of a Hermitian line bundle as follows. \begin{defn} \label{def:ZeroIndex}Let $z$ be an isolated zero of a section $u$ that has a finite number of zeros. Select a normalized local frame $e$ for the Hermitian line bundle in the neighborhood of $z$. The normalized section near $z$, but not at $z,$ can be expressed as $u/\left|u\right|=e\exp(\mathrm{i}\alpha).$ The index at $z$ is defined to be \begin{equation} j_{u}(z)\equiv\frac{1}{2\pi}\int_{\partial D}\mathrm{d}\alpha, \end{equation} where $D$ is a small disk containing $z$ on $S_{1}^{2}$ with orientation pointing away from $S_{1}^{2},$ and the orientation of $\partial D$ is induced from that of $D$. The index of the section $u$ is the sum of indices at all zeros $z_{l}$ of $u,$ \begin{equation} \text{Ind}(u)\equiv\sum_{l}j_{u}(z_{l}).
\end{equation} \end{defn} Note that for each isolated zero $z$, the local frame $e$ selected in the neighborhood of $z$ is nonvanishing at $z$. The index $j_{u}(z)$ intuitively measures how many turns the phase of $u$ advances relative to $e$ over one turn on $\partial D$. In general, $e$ is only a local frame instead of a global frame, otherwise the bundle would be trivial. The following theorem of Chern relates the index of a nontrivial section to the first Chern class over $S^{2}$ \citep{Frankel2011}. \begin{thm} \label{thm:Chern} {[}Chern{]} Let $E$ be a Hermitian line bundle over a closed orientable 2D surface $S^{2}$. Let $u:S^{2}\rightarrow E$ be a section of $E$ with a finite number of zeros. Then the integral of the first Chern class $C_{1}$ over $S^{2}$ is an integer that is equal to the index of the section $u,$ i.e., \[ n_{c}\equiv\int_{S^{2}}C_{1}(E\rightarrow S^{2})=\mathrm{Ind}(u). \] \end{thm} Here, $n_{c}$ is known as the first Chern number. Since the present study only involves the first Chern number, it is denoted by $n_{c}$ instead of $n_{c_{1}}.$ The first Chern number of the $j$-th eigenmode bundle over $S_{1}^{2}$ is denoted by $n_{cj}$. In Sec.\,\ref{sec:TLCWPrediction}, the first Chern number $n_{c1}$ of the plasma wave eigenmode bundle $E_{1}$ over $S_{1}^{2}$ will be linked to the spectral flow index of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ using Faure's index theorem \citep{Faure2019}. To facilitate the calculation of $n_{cj}$ over $S_{1}^{2}$, we establish the following general facts about eigenmode bundles in continuous media, including plasma wave eigenmode bundles. \begin{defn} \label{def:Iso2} Let $\pi_{1}:E_{1}\rightarrow P$ and $\pi_{2}:E_{2}\rightarrow Q$ be two vector bundles.
A diffeomorphism $\phi:E_{1}\rightarrow E_{2}$ is called an isomorphism if for every $p\in P$, $\exists$ $q\in Q$ such that $\phi\circ\pi_{1}^{-1}(p)\subset\pi_{2}^{-1}(q)$ and $\phi:\pi_{1}^{-1}(p)\rightarrow\pi_{2}^{-1}(q)$ is a vector space isomorphism. If an isomorphism exists, $E_{1}$ and $E_{2}$ are isomorphic, denoted as $E_{1}\simeq E_{2}.$ \end{defn} Note that the base manifolds $P$ and $Q$ in the above definition can be identical or different. \begin{defn} Let $f:P\rightarrow Q$ be a smooth map between differential manifolds $P$ and $Q,$ and $\pi_{2}:E\rightarrow Q$ a vector bundle over $Q$. The pullback bundle $\pi_{1}:f^{*}E\rightarrow P$ is defined to be \begin{equation} f^{*}E=\left\{ (p,e)\in P\times E\mid p\in P,\,e\in E,\,f(p)=\pi_{2}(e)\right\} . \end{equation} \end{defn} The standard manifold and vector bundle structure of $f^{*}E$ can be formally established. For example, see Ref. \citep{Tu2017-177}. Note that the pullback bundle is defined as a pullback set and this mechanism does not define a map from $E$ to $f^{*}E$ because $f$ is not in general invertible. On the other hand, when $f$ is invertible, a pullback map from $E$ to $f^{*}E$ can be defined by a similar mechanism as follows. \begin{defn} Let $f:P\rightarrow Q$ be a diffeomorphism between differential manifolds $P$ and $Q,$ and $\pi_{2}:E\rightarrow Q$ a vector bundle over $Q$. The pullback map $f^{\dagger}$ is defined to be \begin{align} f^{\dagger}: & E\rightarrow f^{*}E,\\ & e\mapsto(f^{-1}\circ\pi_{2}(e),e).\nonumber \end{align} When $f$ is a diffeomorphism, $f^{\dagger}(E)=f^{*}E.$ \end{defn} We will use the following theorem known as homotopy induced isomorphism \citep{Bott1982}. \begin{thm} \label{thm:homotopyIso}{[}Homotopy induced isomorphism{]} Let $f_{0}$ and $f_{1}$ be two homotopic maps between manifolds $P$ and $Q$. For a vector bundle $E\rightarrow Q$, the pullback bundles $f_{0}^{*}E$ and $f_{1}^{*}E$ over $P$ are isomorphic.
\end{thm} Theorem \ref{thm:contractible} is a direct corollary of Theorem \ref{thm:homotopyIso}. \begin{thm} \label{thm:pullbackIso} Let $f:P\rightarrow Q$ be a diffeomorphism between differential manifolds $P$ and $Q$, and $\pi_{2}:E\rightarrow Q$ a vector bundle over $Q$. The pullback map $f^{\dagger}$ is an isomorphism between $E$ and $f^{*}E$. \end{thm} \begin{proof} According to Definition \ref{def:Iso2}, it suffices to prove that (i) $f^{\dagger}:E\rightarrow f^{*}E$ is a diffeomorphism and (ii) $\forall q\in Q,$ $\exists p\in P$ such that $f^{\dagger}\circ\pi_{2}^{-1}(q)\subset\pi_{1}^{-1}(p)$ and $f^{\dagger}:\pi_{2}^{-1}(q)\mapsto\pi_{1}^{-1}(p)$ is an isomorphism of vector spaces. Since $f^{\dagger}(e)=(f^{-1}\circ\pi_{2}(e),e)$ and both $f^{-1}$ and $\pi_{2}$ are smooth, $f^{\dagger}$ is smooth. By construction, $f^{\dagger}$ is smoothly invertible. Thus, $f^{\dagger}$ is a diffeomorphism. To prove (ii), we utilize a local trivialization. For any $q\in Q$, let $U$ be an open set containing $q$ in the open cover of $Q$ for the local trivialization of $E\rightarrow Q$. Locally, $E$ is a product $U\times V.$ In particular, $\pi_{2}^{-1}(q)=\{q\}\times V$ and \begin{equation} f^{\dagger}:(q,v)\mapsto\left(p=f^{-1}\circ\pi_{2}(q,v)=f^{-1}(q),(q,v)\right). \end{equation} For this fixed $q,$ \begin{align} f^{\dagger}\circ\pi_{2}^{-1}(q) & =f^{\dagger}\left(\left\{ (q,v)\mid v\in V\right\} \right)=\left\{ \left(p,(q,v)\right)\mid p=f^{-1}(q),v\in V\right\} \nonumber \\ & =\left\{ \left(p,(q,v)\right)\mid f(p)=\pi_{2}(q,v),v\in V\right\} =\pi_{1}^{-1}(p). \end{align} Also, $f^{\dagger}:(q,v)\mapsto\left(p,(q,v)\right)$ for the fixed $q$ and $p=f^{-1}(q)$ is an isomorphism. Thus, $f^{\dagger}$ is an isomorphism and $E\simeq f^{*}E$. \end{proof} The following is the main theorem of this paper, which will enable us to analytically calculate the topological index for the TLCW.
In a parameter space that is $\mathbb{R}^{3}$, denote by $S_{r}^{2}=\left\{ q=(q_{1},q_{2},q_{3})\mid q_{1}^{2}+q_{2}^{2}+q_{3}^{2}=r^{2}\right\} $ the sphere of radius $r$, and by $Sh_{(a,b)}\equiv\left\{ q=(q_{1},q_{2},q_{3})\mid a^{2}\le q_{1}^{2}+q_{2}^{2}+q_{3}^{2}\le b^{2}\right\} $ the 3D shell with inner radius $a$ and outer radius $b$. \begin{thm} \label{thm:BoundaryIso}{[}Boundary isomorphism{]} Let $E\rightarrow Sh_{(a,b)}$ be a Hermitian line bundle defined over a shell $Sh_{(a,b)}$ in a parameter space that is $\mathbb{R}^{3}$. (i) The bundles obtained by restricting $E$ over $S_{a}^{2}$ and $S_{b}^{2}$ are isomorphic, i.e., $E\rightarrow S_{a}^{2}\simeq E\rightarrow S_{b}^{2}$. (ii) Bundles $E\rightarrow S_{a}^{2}$ and $E\rightarrow S_{b}^{2}$ have the same first Chern number, i.e., $n_{c}\left(E\rightarrow S_{a}^{2}\right)=n_{c}\left(E\rightarrow S_{b}^{2}\right)$. \end{thm} \begin{proof} To prove (i), construct the following continuous class of compressing maps on $Sh_{(a,b)}$, \begin{align} f_{\epsilon}: & Sh_{(a,b)}\rightarrow Sh_{(a,b)}\thinspace,\\ & p\mapsto\epsilon p+(1-\epsilon)\frac{ap}{\left|p\right|},\thinspace0\le\epsilon\le1.\nonumber \end{align} As $\epsilon$ decreases from $1$ to $0$ continuously, $f_{\epsilon}$ compresses the shell towards the inner sphere $S_{a}^{2}$. $f_{1}$ is the identity map, $f_{0}$ collapses the shell onto $S_{a}^{2},$ and $f_{1}$ and $f_{0}$ are homotopic. According to Theorem \ref{thm:homotopyIso}, \begin{equation} f_{1}^{*}E\simeq f_{0}^{*}E\:.\label{eq:f1E} \end{equation} Restricting both sides of Eq.\,(\ref{eq:f1E}) to $S_{b}^{2}$ leads to \begin{equation} E\mid_{S_{b}^{2}}=\left(f_{1}^{*}E\right)\mid_{S_{b}^{2}}\simeq\left(f_{0}^{*}E\right)\mid_{S_{b}^{2}}=f_{0}^{*}\left(E\mid_{S_{a}^{2}}\right)\mid_{S_{b}^{2}}. \end{equation} Denote by $f_{0r}$ the restriction of $f_{0}$ on $S_{b}^{2},$ i.e., \begin{align} f_{0r}: & S_{b}^{2}\rightarrow S_{a}^{2}\thinspace,\nonumber \\ & p\mapsto f_{0}(p).
\end{align} Obviously, $f_{0r}$ is a diffeomorphism, and according to Theorem \ref{thm:pullbackIso}, \begin{equation} f_{0}^{*}\left(E\mid_{S_{a}^{2}}\right)\mid_{S_{b}^{2}}=f_{0r}^{*}\left(E\mid_{S_{a}^{2}}\right)\simeq E\mid_{S_{a}^{2}}. \end{equation} Therefore, \begin{equation} E\rightarrow S_{b}^{2}=E\mid_{S_{b}^{2}}\simeq E\mid_{S_{a}^{2}}=E\rightarrow S_{a}^{2}\,. \end{equation} For (ii), we have \[ f_{0r}^{*}\left(C_{1}\left(E\rightarrow S_{a}^{2}\right)\right)=C_{1}\left(f_{0r}^{*}\left(E\rightarrow S_{a}^{2}\right)\right)=C_{1}\left(E\rightarrow S_{b}^{2}\right), \] where the first equal sign follows from the naturality property of characteristic classes. The second equal sign is due to the fact that $f_{0r}^{*}\left(E\rightarrow S_{a}^{2}\right)$ and $E\rightarrow S_{b}^{2}$ are two isomorphic bundles on $S_{b}^{2}$, and thus have the same Chern classes. The first Chern number of $E\rightarrow S_{b}^{2}$ is \begin{align} n_{c}\left(E\rightarrow S_{b}^{2}\right) & =\int_{S_{b}^{2}}C_{1}\left(E\rightarrow S_{b}^{2}\right)\\ & =\int_{S_{b}^{2}}f_{0r}^{*}\left(C_{1}\left(E\rightarrow S_{a}^{2}\right)\right)\nonumber \\ & =\int_{S_{a}^{2}}C_{1}\left(E\rightarrow S_{a}^{2}\right)=n_{c}\left(E\rightarrow S_{a}^{2}\right),\nonumber \end{align} where the third equal sign evaluates the integral on $S_{a}^{2}$ via the pullback mechanism. \end{proof} Theorem \ref{thm:BoundaryIso} says that when the Hermitian line bundle $E$ is defined on $Sh_{(a,b)},$ $E\rightarrow S_{a}^{2}$ and $E\rightarrow S_{b}^{2}$ are isomorphic bundles and have the same first Chern number. Around an isolated Weyl point, the Hermitian line bundle is well defined except at the Weyl point itself, and Theorem \ref{thm:BoundaryIso} states that all closed surfaces surrounding the Weyl point have the same first Chern number, which can be viewed as the topological charge associated with this isolated Weyl point in phase space (see Fig.\,\ref{fig:TopologicalCharge}).
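Statement (ii) of Theorem \ref{thm:BoundaryIso} can be checked numerically. The following Python sketch (a minimal illustration, using the standard two-band Weyl symbol $H(\boldsymbol{q})=\boldsymbol{q}\cdot\boldsymbol{\sigma}$ as a stand-in for a symbol with an isolated Weyl point at the origin, not the plasma symbol itself) computes the first Chern number of the lower-band line bundle over spheres of two different radii with the link-variable (Fukui-Hatsugai) method; the two results agree, as the theorem requires.

```python
import numpy as np

SIGMA = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def lower_band(q):
    """Normalized lower eigenvector of the toy Weyl symbol H(q) = q . sigma."""
    H = sum(qi * s for qi, s in zip(q, SIGMA))
    _, v = np.linalg.eigh(H)
    return v[:, 0]

def chern_number(radius, n_theta=60, n_phi=60):
    """First Chern number of the lower band over the sphere of the given
    radius, via the gauge-invariant link-variable (Fukui-Hatsugai) method
    on a (theta, phi) grid covering the closed surface."""
    th = np.linspace(0.0, np.pi, n_theta + 1)
    ph = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    psi = [[lower_band(radius * np.array([np.sin(t) * np.cos(p),
                                          np.sin(t) * np.sin(p),
                                          np.cos(t)]))
            for p in ph] for t in th]
    flux = 0.0
    for i in range(n_theta):
        for j in range(n_phi):
            jp = (j + 1) % n_phi
            u1 = np.vdot(psi[i][j], psi[i][jp])
            u2 = np.vdot(psi[i][jp], psi[i + 1][jp])
            u3 = np.vdot(psi[i + 1][jp], psi[i + 1][j])
            u4 = np.vdot(psi[i + 1][j], psi[i][j])
            flux += np.angle(u1 * u2 * u3 * u4)
    return int(round(flux / (2.0 * np.pi)))

# same topological charge on a small and a large sphere around the Weyl point;
# the sign depends on the orientation convention of the (theta, phi) grid
assert chern_number(0.1) == chern_number(1.0)
assert abs(chern_number(1.0)) == 1
```

The common value is the topological charge of the Weyl point, illustrating the radius-independence asserted by the theorem.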
\begin{figure}[ht] \includegraphics[width=6cm]{Fig_9} \caption{Topological charge of waves in classical continuous media. Around an isolated Weyl point, all closed surfaces surrounding the Weyl point have the same first Chern number, which can be viewed as the topological charge associated with this isolated Weyl point in phase space. } \label{fig:TopologicalCharge} \end{figure} We also need the following results for the first Chern class for the plasma wave eigenmode bundles. They can be established straightforwardly. \begin{lem} \label{lem:C1Property}For the 9 bulk eigenmode bundles of the plasma waves specified by Hamiltonian symbol $H(\boldsymbol{r},\boldsymbol{k})$ defined in Eq.\,(\ref{eq:H}), the following identities for the first Chern class hold over a general base manifold for the bundles: \begin{flalign} C_{1}\left(\oplus_{j=1}^{4}E_{j}\right) & =C_{1}\left(\oplus_{j=-4}^{-1}E_{j}\right),\\ C_{1}\left(\oplus_{j=-4}^{4}E_{j}\right) & =0,\\ C_{1}\left(E_{0}\right) & =0,\\ C_{1}\left(\oplus_{j=-4}^{1}E_{j}\right) & =C_{1}\left(E_{1}\right). \end{flalign} \end{lem} \section{TLCW predicted by Faure's index theorem and algebraic topological analysis\label{sec:TLCWPrediction}} \subsection{Faure's Index theorem for TLCW} In condensed matter physics, the bulk-edge correspondence states that the gap Chern number equals the number of edge modes in the gap. Mathematically, the correspondence has been rigorously proved as the Atiyah-Patodi-Singer (APS) index theorem \citep{Atiyah1976} for spectral flows over $S^{1}$, which corresponds to the momentum parameter $k_{y}$ in the direction with spatial translation symmetry of a periodic lattice. However, for waves in classical continuous media, including waves in plasmas, the $k_{y}$ parameter is not periodic, and it takes value in $\mathbb{R}$. Therefore, the APS index theorem proved for spectral flow over $S^{1}$ is not applicable to waves in continuous media without modification.
Recently, Faure \citep{Faure2019} formulated an index theorem for spectral flows over $\mathbb{R}$-valued $k_{y}$, which links the spectral flow index to the gap Chern number of the eigenmode bundle over the boundary sphere of a 3D ball in the phase space of $(x,k_{x},k_{y}).$ Faure's index theorem applies to waves in classical continuous media. In this section, we apply Faure's index theorem and Theorem \ref{thm:BoundaryIso} to prove the existence of TLCW. For the bulk Hamiltonian symbol $H(x,k_{x},k_{y},k_{z})$ defined in Eq.\,(\ref{eq:Hx}), the global Hamiltonian PDO $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ defined in Eq.\,(\ref{eq:Hhatx}), and the 1D equilibrium profile specified by Eq.\,(\ref{eq: condition1}), we have the following theorems and definition adapted from Faure \citep{Faure2019}. \begin{thm} \label{thm:SpectralFlow}For a fixed $k_{z}$, assume that $[g_{1},g_{2}]$ is the common gap of $\omega_{1}(x,k_{x},k_{y})$ and $\omega_{2}(x,k_{x},k_{y})$ for parameters exterior to the ball $B_{1}^{3}$. For any $\lambda>0,$ there exists $\eta_{0}>0$ such that (i) for all $\eta<\eta_{0}$ and $k_{y}\in[-1-\lambda,1+\lambda]$, $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ has either no spectrum or a discrete spectrum in the gap $[g_{1}+\lambda,g_{2}-\lambda]$ that depends on $\eta$ and $k_{y}$ continuously; (ii) for all $\eta<\eta_{0}$, $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ has no spectrum in $[g_{1}-\lambda,g_{2}+\lambda]$ at $k_{y}=\pm(1+\lambda).$ \end{thm} \begin{proof} This theorem is a special case of Theorem 2.2 in Faure \citep{Faure2019}. \end{proof} Theorem \ref{thm:SpectralFlow} states that the spectrum of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ in the common gap $[g_{1}+\lambda,g_{2}-\lambda]$, if any, must consist of discrete dispersion curves parameterized by $k_{y}$. Theorem \ref{thm:SpectralFlow} also stipulates the following ``traffic rules'' for the flow of the spectrum.
The dispersion curves cannot enter or exit the rectangle region $[-1-\lambda,1+\lambda]\times[g_{1}+\lambda,g_{2}-\lambda]$ on the $k_{y}$-$\omega$ plane from the left or right sides. They can only enter or exit through the upper or lower sides (see Fig.\,\ref{fig:SpectralFlow}). Intuitively, a spectral flow is a dispersion curve of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ that traverses the rectangle. It flows between the lower band and the upper band, as if transporting one eigenmode upward or downward through the spectral gap of $H(x,k_{x},k_{y},k_{z})$ for parameters exterior to the ball $B_{1}^{3}$. We now formally define the spectral flow and spectral flow index. \begin{figure}[ht] \centering \includegraphics[width=10cm]{Fig_10} \caption{Illustration of possible spectral flows of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ in the common gap $[g_{1}+\lambda,g_{2}-\lambda]$. Theorem \ref{thm:SpectralFlow} stipulates the ``traffic rules'' for the flow of the spectrum. Red curves are spectral flows with index $1$, and blue curves are spectral flows with index $-1.$} \label{fig:SpectralFlow} \end{figure} \begin{defn} \label{def:SF}For a fixed $k_{z}$, assume that $[g_{1},g_{2}]$ is the common gap of $\omega_{1}(x,k_{x},k_{y})$ and $\omega_{2}(x,k_{x},k_{y})$ for parameters exterior to the ball $B_{1}^{3}$. A spectral flow is a smooth dispersion curve $\omega=f(k_{y},\eta)$ of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ satisfying the following condition: For any $\lambda>0,$ there exists an $\eta_{0}>0$ such that either (i) for all $\eta<\eta_{0}$, $f(-1-\lambda,\eta)<g_{1}+\lambda$ and $f(1+\lambda,\eta)>g_{2}-\lambda$ or (ii) for all $\eta<\eta_{0}$, $f(-1-\lambda,\eta)>g_{2}-\lambda$ and $f(1+\lambda,\eta)<g_{1}+\lambda$. For case (i), its index is $1$. For case (ii), its index is $-1$.
The spectral flow index $n_{\text{sf}}$ of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ is the sum of the indices of all its spectral flows. In the present context, a spectral flow of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ is a TLCW. But Theorem \ref{thm:SpectralFlow} and Definition \ref{def:SF} are valid for any generic $\hat{H}$. \end{defn} A few possible spectral flow configurations are illustrated in Fig.\,\ref{fig:SpectralFlow}. Strictly speaking, $n_{\text{sf}}$ is not necessarily the total number of all possible upward and downward spectral flows of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$. It is the net number of upward spectral flows. For plasma waves, the following theorem links the number of TLCWs to the first Chern number of the $E_{1}$ eigenmode bundle over a non-contractible, compact surface in the phase space of $(x,k_{x},k_{y}).$ \begin{thm} \label{thm:SpectralFlowIndex}For a fixed $k_{z},$ assume that the common gap condition for parameters exterior to the ball $B_{1}^{3}\equiv\left\{ (x,k_{x},k_{y})\mid x^{2}+k_{x}^{2}+k_{y}^{2}\le1\right\} $ is satisfied for the spectra $\omega_{1}(x,k_{x},k_{y},k_{z})$ and $\omega_{2}(x,k_{x},k_{y},k_{z})$ of $H(x,k_{x},k_{y},k_{z})$. The spectral flow index of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ in the gap equals the Chern number $n_{c}\left(E_{1}\rightarrow S_{1}^{2}\right)$ of the $E_{1}$ eigenmode bundle of $H(x,k_{x},k_{y},k_{z})$ over $S_{1}^{2}\equiv\left\{ (x,k_{x},k_{y})\mid x^{2}+k_{x}^{2}+k_{y}^{2}=1\right\} $, i.e., $n_{\text{sf}}=n_{c}\left(E_{1}\rightarrow S_{1}^{2}\right)$.
\end{thm} \begin{proof} This theorem is a direct specialization of Theorem 2.7 formulated by Faure in Ref.\,\citep{Faure2019}, which states that when a spectral gap exists between $\omega_{l}$ and $\omega_{l+1}$ of a bulk Hamiltonian for all parameters exterior to $B_{r}^{3}$, the spectral flow index $n_{\text{sf}}$ of the corresponding PDO in the gap equals the gap Chern number $n_{c}\left(\oplus_{j\le l}E_{j}\rightarrow S_{r}^{2}\right)$. For the plasma waves satisfying the common gap condition stated, $l=1$ and \[ n_{\text{sf}}=n_{c}\left(\oplus_{j\le1}E_{j}\rightarrow S_{1}^{2}\right)=n_{c}\left(E_{1}\rightarrow S_{1}^{2}\right), \] where use is made of Lemma \ref{lem:C1Property}. \end{proof} \subsection{Index calculation of TLCW using algebraic topological techniques } Theorem \ref{thm:SpectralFlowIndex} links the number of TLCWs, or the spectral flow index of $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$, to the Chern number $n_{c}\left(E_{1}\rightarrow S_{1}^{2}\right)$ of the $E_{1}$ eigenmode bundle of $H(x,k_{x},k_{y},k_{z})$ over $S_{1}^{2}\equiv\left\{ (x,k_{x},k_{y})\mid x^{2}+k_{x}^{2}+k_{y}^{2}=1\right\} $. However, it is not an easy task to calculate $n_{c}\left(E_{1}\rightarrow S_{1}^{2}\right)$ either analytically or numerically.
Here, we use the algebraic topological tools developed in Sec.\,\ref{subsec:NonTrivialTop} to analytically calculate $n_{c}\left(E_{1}\rightarrow S_{1}^{2}\right).$ Because for the 1D equilibrium profile specified by Eq.\,(\ref{eq: condition1}), the LC Weyl point only occurs at $x=0,$ and the eigenmode bundle $E_{1}$ is well-defined in $B_{1}^{3}\setminus\{(0,0,0)\}$, we can invoke Theorem \ref{thm:BoundaryIso} to calculate $n_{c}\left(E_{1}\rightarrow S_{1}^{2}\right)$ as \begin{equation} n_{c}\left(E_{1}\rightarrow S_{1}^{2}\right)=\lim_{\delta\rightarrow0^{+}}n_{c}\left(E_{1}\rightarrow S_{\delta}^{2}\right).\label{eq:ncepsilon} \end{equation} The right-hand side of Eq.\,(\ref{eq:ncepsilon}) is the first Chern number of the $E_{1}$ bundle over an infinitesimal sphere surrounding the Weyl point in the phase space of $(x,k_{x},k_{y})$, and it can be analytically evaluated using Taylor expansion at the Weyl point as follows. At the LC Weyl point $(x,k_{x},k_{y})=(0,0,0)$, the spectrum and eigenmodes of $H(x,k_{x},k_{y},k_{z})$ can be solved analytically. Denote by $\left(\omega_{j0},\psi_{j0}\right)$ the $j$-th eigenmode. At this point, two of the eigenmodes with positive frequencies are resonant, \begin{equation} \omega_{10}=\omega_{20}=\omega_{\text{pc}}=\dfrac{\sqrt{k_{z}^{4}+4k_{z}^{2}}-k_{z}^{2}}{2}, \end{equation} and the corresponding eigenmodes are \begin{alignat*}{1} \psi_{10} & =\left(0,0,-\frac{\mathrm{i}}{\sqrt{2}},0,0,\frac{1}{\sqrt{2}},0,0,0\right)^{\mathrm{T}},\\ \psi_{20} & =\left(\mathrm{i}k_{z},-k_{z},0,\frac{\omega_{\text{pc}}}{k_{z}},\mathrm{i}\frac{\omega_{\text{pc}}}{k_{z}},0,-\mathrm{i},1,0\right)^{\mathrm{T}}, \end{alignat*} where $\psi_{10}$ is the Langmuir wave and $\psi_{20}$ is the cyclotron wave.
In the infinitesimal neighborhood of the Weyl point, $k_{x}\sim k_{y}\sim x\sim\delta$, \begin{alignat}{1} H(x,k_{x},k_{y},k_{z}) & =H_{0}+\delta H,\\ H_{0}(x,k_{x},k_{y},k_{z}) & =\begin{pmatrix}\mathrm{i}\boldsymbol{e}_{z}\times & -\mathrm{i}\omega_{\text{pc}} & 0\\ \mathrm{i}\omega_{\text{pc}} & 0 & (0,0,-k_{z})\times\\ 0 & (0,0,k_{z})\times & 0 \end{pmatrix},\\ \delta H(x,k_{x},k_{y},k_{z}) & =\begin{pmatrix}0 & -\mathrm{i}\omega_{\text{p}}^{\prime}(x)x & 0\\ \mathrm{i}\omega_{\text{p}}^{\prime}(x)x & 0 & (-k_{x},-k_{y},0)\times\\ 0 & (k_{x},k_{y},0)\times & 0 \end{pmatrix}. \end{alignat} We can express $H$ in the basis of $\psi_{j0}$ $(-4\le j\le4)$. But for modes with $\delta\omega=\omega-\omega_{\text{pc}}\sim\delta$, $H$ can be approximated by the expansion using $\psi_{10}$ and $\psi_{20}$ only, and reduces to a $2\times2$ matrix, \begin{alignat}{1} H_{2} & (x,k_{x},k_{y},k_{z}):=\begin{pmatrix}\psi_{10}^{\dagger}H\psi_{10} & \psi_{10}^{\dagger}H\psi_{20}\\ \psi_{20}^{\dagger}H\psi_{10} & \psi_{20}^{\dagger}H\psi_{20} \end{pmatrix}=\begin{pmatrix}\omega_{\text{pc}}+\delta\omega_{\text{p}} & \dfrac{-k_{x}-\mathrm{i}k_{y}}{\sqrt{2}\alpha}\\[10pt] \dfrac{-k_{x}+\mathrm{i}k_{y}}{\sqrt{2}\alpha} & \omega_{\text{pc}}-\dfrac{4\omega_{\text{pc}}}{\alpha^{2}}\delta\omega_{\text{p}} \end{pmatrix},\label{eq:H2}\\[5pt] \alpha & \equiv\sqrt{4+3k_{z}^{2}-k_{z}\sqrt{4+k_{z}^{2}}}\thinspace,\quad\delta\omega_{\text{p}}=-\beta x\thinspace,\quad\beta\equiv\left|\dfrac{\mathrm{d}\omega_{\text{p}}}{\mathrm{d}x}\right|_{x=0}\geq0, \end{alignat} where we used the fact that the equilibrium profile $\omega_{\mathrm{p}}(x)$ selected in Eq.\,(\ref{eq: condition1}) decreases monotonically. The eigensystem of $H_{2}$ can be solved straightforwardly.
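As a numerical consistency check (with illustrative values of $k_{z}$, $\beta$, $x$, $k_{x}$, and $k_{y}$, not taken from the simulations), one can build the $2\times2$ matrix of Eq.\,(\ref{eq:H2}) and compare its eigenvalues, computed by a Hermitian eigensolver, with the closed-form branches $\omega_{1,2}$ of Eqs.\,(\ref{eq:om1}) and (\ref{eq:om2}) recorded next:

```python
import numpy as np

def branches(x, kx, ky, kz, beta=0.3):
    """Closed-form branches omega_1, omega_2 of the reduced 2x2 symbol H_2
    of Eq. (H2), together with its numerically computed eigenvalues."""
    w_pc = (np.sqrt(kz**4 + 4 * kz**2) - kz**2) / 2.0
    alpha = np.sqrt(4 + 3 * kz**2 - kz * np.sqrt(4 + kz**2))
    dw_p = -beta * x                       # delta omega_p = -beta x
    H2 = np.array([[w_pc + dw_p, (-kx - 1j * ky) / (np.sqrt(2) * alpha)],
                   [(-kx + 1j * ky) / (np.sqrt(2) * alpha),
                    w_pc - 4 * w_pc / alpha**2 * dw_p]])
    gamma = np.sqrt((kx**2 + ky**2) / (2 * alpha**2)
                    + x**2 * beta**2 / 4 * (1 + 4 * w_pc / alpha**2)**2)
    mid = w_pc - beta / 2 * (1 - 4 * w_pc / alpha**2) * x
    return (mid - gamma, mid + gamma), np.linalg.eigvalsh(H2)

# illustrative sample point; eigvalsh returns eigenvalues in ascending
# order, matching (omega_1, omega_2)
(w1, w2), eig = branches(x=0.05, kx=0.02, ky=-0.03, kz=1.0)
assert np.allclose([w1, w2], eig)
```

The agreement holds to machine precision, since the closed forms are exact for the $2\times2$ symbol.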
The two eigenfrequencies of $H_{2}$ are \begin{alignat}{1} \omega_{1} & =\omega_{\text{pc}}-\frac{\beta}{2}\left(1-\frac{4\omega_{\text{pc}}}{\alpha^{2}}\right)x-\gamma\thinspace,\label{eq:om1}\\ \omega_{2} & =\omega_{\text{pc}}-\frac{\beta}{2}\left(1-\frac{4\omega_{\text{pc}}}{\alpha^{2}}\right)x+\gamma\thinspace,\label{eq:om2}\\ \gamma & \equiv\sqrt{\frac{k_{x}^{2}+k_{y}^{2}}{2\alpha^{2}}+\frac{x^{2}\beta^{2}}{4}\left(1+\frac{4\omega_{\text{pc}}}{\alpha^{2}}\right)^{2}}\thinspace. \end{alignat} The corresponding eigenmodes, expressed in the basis of $\psi_{10}$ and $\psi_{20}$, are \begin{alignat}{1} \tilde{\psi}_{1} & =\left(\alpha\beta\left(1+\frac{4\omega_{\text{pc}}}{\alpha^{2}}\right)x+2\alpha\gamma,\sqrt{2}(k_{x}-\mathrm{i}k_{y})\right)^{\mathrm{T}}\thinspace,\\ \tilde{\psi}_{2} & =\left(\alpha\beta\left(1+\frac{4\omega_{\text{pc}}}{\alpha^{2}}\right)x-2\alpha\gamma,\sqrt{2}(k_{x}-\mathrm{i}k_{y})\right)^{\mathrm{T}}. \end{alignat} Everywhere except $(x,k_{x},k_{y})=(0,0,0)$ in the parameter space, we have $\omega_{1}<\omega_{2}$, so the $E_{1}$ eigenmode bundle of $H$ is faithfully represented by $\tilde{\psi}_{1}$ when $\delta$ is small but non-vanishing. What matters for the present study is the first Chern number $n_{c}\left(E_{1}\rightarrow S_{\delta}^{2}\right),$ which can be obtained by summing the indices of the zeros of $\tilde{\psi}_{1}$ on $S_{\delta}^{2}$, according to Theorem \ref{thm:Chern}. On $S_{\delta}^{2}$, $\tilde{\psi}_{1}$ is well-defined everywhere, and has one zero at $(x,k_{x},k_{y})=(-\delta,0,0)$. The index of this zero can be calculated according to Definition \ref{def:ZeroIndex} as follows. We select the following local frame for $E_{1}\rightarrow S_{\delta}^{2}$ in the neighborhood of $(x,k_{x},k_{y})=(-\delta,0,0)$, \begin{equation} e=\left(\frac{\alpha\beta\left(1+\frac{4\omega_{\text{pc}}}{\alpha^{2}}\right)x+2\alpha\gamma}{(k_{x}-\mathrm{i}k_{y})},\sqrt{2}\right)^{\mathrm{T}}\thinspace.
\end{equation} It is easy to verify that $e$ is well-defined in the neighborhood of $(x,k_{x},k_{y})=(-\delta,0,0)$ on $S_{\delta}^{2}$, especially at the point of $(x,k_{x},k_{y})=(-\delta,0,0)$ itself. Note that $e$ is singular at $(x,k_{x},k_{y})=(\delta,0,0)$ on $S_{\delta}^{2}$, therefore it is not a (global) section of the bundle $E_{1}\rightarrow S_{\delta}^{2}$. The expression of the section $\tilde{\psi}_{1}$ in the $e$ frame is $(k_{x}-\mathrm{i}k_{y})$. In one loop on $S_{\delta}^{2}$ around $(x,k_{x},k_{y})=(-\delta,0,0)$, for example on a circle with a fixed $x$ near $(x,k_{x},k_{y})=(-\delta,0,0)$, the phase increase of $(k_{x}-\mathrm{i}k_{y})$ is $2\pi.$ Thus, we conclude that $\text{Ind}\left((x,k_{x},k_{y})=(-\delta,0,0)\right)=1$. According to Theorem \ref{thm:Chern}, \[ n_{c}\left(E_{1}\rightarrow S_{\delta}^{2}\right)=\text{Ind}\left((x,k_{x},k_{y})=(-\delta,0,0)\right)=1. \] From Eq.\,(\ref{eq:ncepsilon}) and Theorem \ref{thm:SpectralFlowIndex}, \[ n_{\text{sf}}=n_{c}\left(E_{1}\rightarrow S_{1}^{2}\right)=n_{c}\left(E_{1}\rightarrow S_{\delta}^{2}\right)=1. \] We conclude that there is one net upward spectral flow, i.e., the TLCW, if the common gap condition is satisfied. \section{An analytical model for TLCW by a tilted phase space Dirac cone\label{sec:AnalyticalTDC}} As shown in Sec.\,\ref{sec:TLCWPrediction}, near the LC Weyl point only the Langmuir wave and the cyclotron wave are important, and the $9\times9$ bulk Hamiltonian symbol $H(x,k_{x},k_{y},k_{z})$ can be approximated by the $2\times2$ reduced bulk Hamiltonian symbol $H_{2}(x,k_{x},k_{y},k_{z})$. For the bulk modes of $H(x,k_{x},k_{y},k_{z})$, the prominent feature near the LC Weyl point is the tilted Dirac cone shown in Fig.\,\ref{fig:DiracCone}. This interesting structure is faithfully captured by $H_{2}(x,k_{x},k_{y},k_{z})$. For comparison, the tilted phase space Dirac cone of $H_{2}(x,k_{x},k_{y},k_{z})$ is plotted in Fig.\,\ref{fig:DiracConeH2}.
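The zero of $\tilde{\psi}_{1}$ and its index can also be verified numerically. The Python sketch below (with illustrative sample values of $k_{z}$, $\beta$, and $\delta$) confirms that $\tilde{\psi}_{1}$ vanishes at $(x,k_{x},k_{y})=(-\delta,0,0)$ but not at the opposite pole, and that the phase of its $e$-frame representation $(k_{x}-\mathrm{i}k_{y})$ winds exactly once around the zero; the sign of the computed winding depends on the orientation convention of the discretized loop.

```python
import numpy as np

# illustrative sample values for k_z, beta, and the sphere radius delta
kz, beta, delta = 1.0, 0.3, 1e-3
w_pc = (np.sqrt(kz**4 + 4 * kz**2) - kz**2) / 2.0
alpha = np.sqrt(4 + 3 * kz**2 - kz * np.sqrt(4 + kz**2))
tilt = 1.0 + 4.0 * w_pc / alpha**2

def psi1(x, kx, ky):
    """Section tilde-psi_1 in the (psi_10, psi_20) basis near the Weyl point."""
    gamma = np.sqrt((kx**2 + ky**2) / (2 * alpha**2)
                    + x**2 * beta**2 / 4 * tilt**2)
    return np.array([alpha * beta * tilt * x + 2 * alpha * gamma,
                     np.sqrt(2) * (kx - 1j * ky)])

# tilde-psi_1 vanishes at (-delta, 0, 0) but not at the opposite pole
assert np.linalg.norm(psi1(-delta, 0.0, 0.0)) < 1e-12
assert np.linalg.norm(psi1(delta, 0.0, 0.0)) > 1e-6

# phase winding of the e-frame representation (k_x - i k_y) on a small
# circle of fixed x around the zero; the sign follows the loop orientation
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
u = psi1(-delta, 0.1 * delta * np.cos(t), 0.1 * delta * np.sin(t))[1]
da = np.diff(np.append(np.angle(u), np.angle(u[0])))
winding = int(round(((da + np.pi) % (2 * np.pi) - np.pi).sum() / (2 * np.pi)))
assert abs(winding) == 1   # |Ind| = |n_c(E_1 -> S_delta^2)| = 1
```

This reproduces, up to orientation, the index calculation carried out analytically above.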
From the definition of $H_{2}(x,k_{x},k_{y},k_{z}),$ it is clear that the factor $4\omega_{\text{pc}}/\alpha^{2}$ is responsible for the tilt of the cone. \begin{figure}[ht] \centering \includegraphics[width=8cm]{Fig_11} \caption{The tilted phase space Dirac cone of $H_{2}(x,k_{x},k_{y},k_{z})$ at the LC Weyl point. It faithfully represents the tilted Dirac cone of $H(x,k_{x},k_{y},k_{z})$ shown in Fig.\,\ref{fig:DiracCone}.} \label{fig:DiracConeH2} \end{figure} For $H_{2}(x,k_{x},k_{y},k_{z})$, the corresponding PDO is \begin{equation} \hat{H}_{2}(x,k_{x},k_{y},k_{z})=\begin{pmatrix}\omega_{\text{pc}}-\beta x & \dfrac{\mathrm{i}}{\sqrt{2}\alpha}(\eta\partial_{x}-k_{y})\\[10pt] \dfrac{\mathrm{i}}{\sqrt{2}\alpha}(\eta\partial_{x}+k_{y}) & \omega_{\text{pc}}+\dfrac{4\omega_{\text{pc}}}{\alpha^{2}}\beta x \end{pmatrix}.\label{eq:H2hat} \end{equation} The theorems proved in Sec.\,\ref{sec:TLCWPrediction} show that the full system $\hat{H}(x,-\mathrm{i}\eta\partial_{x},k_{y},k_{z})$ admits one topological edge mode, i.e., the TLCW, as confirmed by numerical solutions in Secs.\,\ref{sec:Problem-statement} and \ref{sec:Numerial}. This property of the TLCW is also faithfully captured by the reduced system $\hat{H}_{2}(x,k_{x},k_{y},k_{z})$. In particular, Theorem \ref{thm:SpectralFlowIndex} applies to $\hat{H}_{2}(x,k_{x},k_{y},k_{z})$ as well. From Eqs.\,(\ref{eq:om1}) and (\ref{eq:om2}), the common gap condition is satisfied for $\omega_{1}$ and $\omega_{2}$, and the proof of Theorem \ref{thm:SpectralFlowIndex} shows that the Chern number $n_{c}\left(E_{1}\rightarrow S_{1}^{2}\right)$ of the first eigenmode bundle of $H_{2}(x,k_{x},k_{y},k_{z})$ over $S_{1}^{2}$ equals $1.$ Thus, $\hat{H}_{2}(x,k_{x},k_{y},k_{z})$ admits one spectral flow, i.e., the TLCW.
To thoroughly understand the physics of a tilted phase space Dirac cone and the TLCW, we present here the analytical solution of the entire spectrum of $\hat{H}_{2}(x,k_{x},k_{y},k_{z})$, including its spectral flow in the band gap. The analytical solution for the PDO corresponding to a $2\times2$ symbol of a straight Dirac cone has been given by Faure \citep{Faure2019}. But for the PDO corresponding to a $2\times2$ symbol of a tilted Dirac cone, we are not aware of any previous analytical solution. To analytically solve for its spectrum, we first simplify the matrix $\hat{H}_{2}(x,k_{x},k_{y},k_{z})$ in Eq.\,(\ref{eq:H2hat}). We subtract $\omega_{\mathrm{pc}}$ from the entire spectrum and renormalize $x$ and $k_{y}$ as follows, \begin{align} \tilde{x}:=\sqrt{\dfrac{\sqrt{2}\alpha\beta\kappa}{\eta}}x,\quad\tilde{k}_{y}:=\dfrac{k_{y}}{\sqrt{\sqrt{2}\alpha\beta\eta\kappa}}, \end{align} where $\kappa^{2}=4\omega_{\mathrm{pc}}/\alpha^{2}$. Matrix $\hat{H}_{2}$ in Eq.\,(\ref{eq:H2hat}) then simplifies to \begin{align} \hat{H}_{2}=\sqrt{\dfrac{\sqrt{2}\beta\eta\kappa}{\alpha}}\dfrac{1}{\sqrt{2}}\begin{pmatrix}-\tilde{x}/\kappa & \mathrm{i}(\partial_{\tilde{x}}-\tilde{k}_{y})\\ \mathrm{i}(\partial_{\tilde{x}}+\tilde{k}_{y}) & \kappa\tilde{x} \end{pmatrix}. \end{align} It is clear that $\kappa$ is a parameter measuring how tilted the Dirac cone is. When $\kappa=1$, the Dirac cone is straight. From now on in this section, the overscript tilde in $\tilde{x}$ and $\tilde{k}_{y}$ will be omitted for notational simplicity. We further transform $\hat{H}_{2}$ by a similarity transformation and scaling, \begin{align} \hat{H}_{2}' & =\sqrt{\dfrac{\alpha}{\sqrt{2}\beta\eta\kappa}}R\hat{H}_{2}R^{-1}=\dfrac{1}{\sqrt{2}}\begin{pmatrix}-x/\kappa & \mathrm{i}(\partial_{x}-k_{y})/\kappa\\ \mathrm{i}\kappa(\partial_{x}+k_{y}) & \kappa x \end{pmatrix},\\ R & =\mathrm{diag}(\kappa^{-1},1).
\end{align} $\hat{H}_{2}'$ can be expressed using Pauli matrices and the identity matrix $\sigma_{0}$ as \begin{align} \sqrt{2}\hat{H}_{2}' & =\mathrm{i}(\mu_{2}k_{y}+\mu_{1}\partial_{x})\sigma_{x}+(\mu_{1}k_{y}+\mu_{2}\partial_{x})\sigma_{y}-\mu_{1}x\sigma_{z}+\mu_{2}x\sigma_{0},\\[5pt] \mu_{1} & =\dfrac{1}{2}\left(\kappa+\dfrac{1}{\kappa}\right),\quad\mu_{2}=\dfrac{1}{2}\left(\kappa-\dfrac{1}{\kappa}\right). \end{align} We next apply a unitary transformation to cyclically rotate the Pauli matrices such that $(\sigma_{x},\sigma_{y},\sigma_{z},\sigma_{0})\to(\sigma_{y},\sigma_{z},\sigma_{x},\sigma_{0})$. Under this rotation, $\hat{H}_{2}'$ becomes \begin{align} \hat{H}_{2}''=\begin{pmatrix}\mu_{1}\lambda+\mu_{2}\hat{a} & \mu_{2}\lambda-\mu_{1}\hat{a}^{\dagger}\\ -\mu_{2}\lambda-\mu_{1}\hat{a} & -\mu_{1}\lambda+\mu_{2}\hat{a}^{\dagger} \end{pmatrix},\label{eq:H2transform} \end{align} where $\lambda=k_{y}/\sqrt{2}$ and \begin{align} \hat{a}=\dfrac{1}{\sqrt{2}}(x+\partial_{x}),\quad\hat{a}^{\dagger}=\dfrac{1}{\sqrt{2}}(x-\partial_{x}) \end{align} are the annihilation and creation operators, respectively. Notice that $\mu_{1}=1$ and $\mu_{2}=0$ when $\kappa=1$, and this is the special case when $\hat{H}_{2}''$ reduces to a Hamiltonian corresponding to a straight Dirac cone \citep{Faure2019,Delplace2022}. We now construct an analytical solution of $\hat{H}_{2}''$. Recall that the eigenstates of a quantum harmonic oscillator $|n\rangle$ can be represented by the Hermite polynomials $H_{n}(x)$ as \begin{align} \langle x|n\rangle=\varphi_{n}(x)=\frac{1}{\left(2^{n}n!\sqrt{\pi}\right)^{1/2}}\mathrm{e}^{-\frac{x^{2}}{2}}H_{n}(x). \end{align} Define a set of shifted wave functions $|n;\delta\rangle$ by \begin{equation} \langle x|n;\delta\rangle:=\varphi_{n}(x+\sqrt{2}\delta).
\end{equation} They satisfy the following recurrence relations, \begin{align} \hat{a}^{\dagger}|n;\delta\rangle & =\sqrt{n+1}|n+1;\delta\rangle-\delta|n;\delta\rangle,\\ \hat{a}|n;\delta\rangle & =\sqrt{n}|n-1;\delta\rangle-\delta|n;\delta\rangle. \end{align} With these shifted wave functions as a basis, it can be verified that $\hat{H}_{2}''$ has two sets of eigenvectors, \begin{align} \psi_{n}^{\pm}=\begin{pmatrix}|n+1;\delta_{n}^{\pm}\rangle\\ \gamma_{n}^{\pm}|n;\delta_{n}^{\pm}\rangle \end{pmatrix},\quad & n=0,1,2,\cdots\thinspace,\label{eq:eigenmods} \end{align} where \[ \gamma_{n}^{\pm}=\dfrac{\sqrt{n+1}}{-\lambda\mp\sqrt{\lambda^{2}+n+1}},\quad\delta_{n}^{\pm}=\pm\dfrac{\mu_{2}}{\mu_{1}}\sqrt{\lambda^{2}+n+1}. \] The corresponding eigenvalues are \begin{align} E_{n}^{\pm}=\pm\dfrac{2\kappa}{1+\kappa^{2}}\sqrt{\lambda^{2}+n+1},\quad n=0,1,2,\cdots\thinspace. \end{align} Importantly, there is one additional eigenstate that is not included in Eq.\,(\ref{eq:eigenmods}), which, in fact, represents the spectral flow. Its eigenvector and eigenvalue are \begin{align} \psi_{-1}=\begin{pmatrix}|0;\delta_{-1}\rangle\\ 0 \end{pmatrix},\quad E_{-1}=\dfrac{2\kappa}{1+\kappa^{2}}\lambda, \end{align} where \begin{align} \delta_{-1}=\dfrac{\mu_{2}}{\mu_{1}}\lambda. \end{align} Here, with a slight abuse of notation, we denote this eigenmode as the ``$n=-1$'' eigenstate. The spectral flow is a linear function of $k_{y}$, and its mode structure is a shifted Gaussian function. The spectrum of $\hat{H}_{2}''(x,-\mathrm{i}\partial_{x},\lambda)$ is plotted in Fig.\,\ref{fig:TDCspectrum}(a). The spectrum consists of three parts: the upper and lower parts are the global modes in the frequency bands of $H_{2}(x,k_{x},k_{y},k_{z})$, and the middle part is the single spectral flow connecting the left of the lower part to the right of the upper part. Note that the tilted Dirac cone of $H_{2}(x,k_{x},k_{y},k_{z})$ breaks up into two pieces in the global modes of $\hat{H}_{2}''(x,-\mathrm{i}\partial_{x},\lambda)$.
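These eigenpairs can be verified numerically. The following sketch (our own check, not part of the paper) applies $\hat{H}_{2}''$ of Eq.\,(\ref{eq:H2transform}) to the states above on a real-space grid, with $\partial_{x}$ approximated by second-order finite differences; the values of $\kappa$ and $\lambda$ are arbitrary test values.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# Numerical check (illustration only) of the eigenpairs of H_2''.
kappa, lam = 1.7, 0.6                              # arbitrary test values
mu1, mu2 = (kappa + 1/kappa) / 2, (kappa - 1/kappa) / 2

xg = np.linspace(-12.0, 12.0, 6001)
h = xg[1] - xg[0]

def phi(n, y):
    """Harmonic-oscillator eigenfunction phi_n(y)."""
    c = np.zeros(n + 1); c[n] = 1.0
    norm = np.sqrt(2.0**n * math.factorial(n) * np.sqrt(np.pi))
    return hermval(y, c) * np.exp(-y**2 / 2) / norm

a    = lambda f: (xg * f + np.gradient(f, h)) / np.sqrt(2)   # annihilation
adag = lambda f: (xg * f - np.gradient(f, h)) / np.sqrt(2)   # creation

def apply_H2pp(u, v):
    """Apply H_2'' to the two-component state (u, v)."""
    return (mu1*lam*u + mu2*a(u) + mu2*lam*v - mu1*adag(v),
            -mu2*lam*u - mu1*a(u) - mu1*lam*v + mu2*adag(v))

# spectral-flow ("n = -1") branch: a shifted Gaussian, E = 2*kappa*lam/(1+kappa^2)
delta = mu2 / mu1 * lam
u, v = phi(0, xg + np.sqrt(2)*delta), np.zeros_like(xg)
Hu, Hv = apply_H2pp(u, v)
E = 2 * kappa / (1 + kappa**2) * lam
assert np.allclose(Hu, E * u, atol=1e-4) and np.allclose(Hv, 0, atol=1e-4)

# n = 0, "+" branch of Eq. (eigenmods)
s = np.sqrt(lam**2 + 1)
gam, dlt = 1 / (-lam - s), mu2 / mu1 * s
u, v = phi(1, xg + np.sqrt(2)*dlt), gam * phi(0, xg + np.sqrt(2)*dlt)
Hu, Hv = apply_H2pp(u, v)
Ep = 2 * kappa / (1 + kappa**2) * s
assert np.allclose(Hu, Ep * u, atol=1e-4) and np.allclose(Hv, Ep * v, atol=1e-4)
```

Both assertions pass because $\mu_{1}^{2}-\mu_{2}^{2}=1$, which is what makes the coefficients $\gamma_{n}^{\pm}$ and shifts $\delta_{n}^{\pm}$ close the ladder-operator algebra.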
In Fig.\,\ref{fig:TDCspectrum}(b), the analytical solution of the mode structure of the spectral flow of $\hat{H}_{2}''(x,-\mathrm{i}\partial_{x},\lambda)$ is plotted. The analytical result displayed in Fig.\,\ref{fig:TDCspectrum} agrees well with the numerical solution shown in Fig.\,\ref{fig:1Dspectrum}. \begin{figure}[ht] \centering \includegraphics[height=5.7cm]{Fig_12a} \hspace{0.5cm} \includegraphics[height=6cm]{Fig_12b} \caption{(a) Analytical spectrum of $\hat{H}_{2}''(x,-\mathrm{i}\partial_{x},\lambda)$ as a function of $k_{y}$. (b) Analytical mode structure of the TLCW. The result agrees well with the numerical solution shown in Fig.\,\ref{fig:1Dspectrum}.} \label{fig:TDCspectrum} \end{figure} \section{Conclusions and discussions \label{sec:Conclusions} } Inspired by advances in topological materials in condensed matter physics \citep{thouless1982quantized,halperin1982quantized,hasan2010colloquium,bernevig2013topological,qi2011topological,armitage2018weyl}, study of topological waves in classical continuous media, such as electromagnetic materials \citep{silveirinha2015chern,silveirinha2016bulk,gangaraj2017berry,marciani2020chiral}, fluid systems \citep{delplace2017topological,Faure2019,perrot2019topological,tauber2019bulk,venaille2021wave,zhu2021topology,souslov2019topological,qin2019kelvin,fu2020physics,David2022}, and magnetized plasmas \citep{gao2016photonic,yang2016one,parker2020nontrivial,parker2020topological,parker2021topological,fu2021topological,Fu2022a,Rajawat2022,qin2021spontaneous}, has attracted much attention recently. The Topological Langmuir-Cyclotron Wave (TLCW) is a recently identified topological surface excitation in magnetized plasmas generated by the nontrivial topology at the Weyl point due to the Langmuir wave-cyclotron wave resonance \citep{fu2021topological,Fu2022a}. In this paper, we have systematically developed a theoretical framework to describe the TLCW. 
It has been realized that the theoretical methodology for studying topological material properties in condensed matter physics cannot be directly applied to classical continuous media, because the momentum (wavenumber) space for condensed matter is periodic, whereas that for classical continuous media is not. Specifically, the typical momentum space for classical continuous media is $\mathbb{R}^{n}$ $(n=1,2,3),$ and it is difficult to integrate the Berry curvature over $\mathbb{R}^{n}$ to obtain an integer number that can be called the Chern number. The difficulty has been attributed to the fact that $\mathbb{R}^{n}$ is not compact, and different remedies have been proposed accordingly. However, we demonstrated that the key issue is not whether the momentum space is non-compact, but rather that it is contractible. When the base manifold is contractible, all vector bundles on it are topologically trivial, and whether an integer index can be designed is irrelevant. For classical continuous media, nontrivial topology can be found only for vector bundles over phase space. Without modification, the Atiyah-Patodi-Singer (APS) index theorem \citep{Atiyah1976} proved for spectral flows over $S^{1}$ is only applicable to condensed matter, and Faure's index theorem \citep{Faure2019} for spectral flows over $\mathbb{R}$-valued $k_{y}$ should be adopted for classical continuous media. In the present study, the TLCW is defined as a spectral flow of a Pseudo-Differential-Operator (PDO) $\hat{H}$ for plasma waves in an inhomogeneous magnetized plasma, and the semi-classical parameter of the Weyl quantization operator is identified as the ratio between the electron gyro-radius and the scale length of the inhomogeneity.
We formally constructed the Hermitian eigenmode bundles of the bulk Hamiltonian symbol $H$ corresponding to the PDO $\hat{H},$ and emphasized that the properties of spectral flows are determined by the topology of the eigenmode bundles over non-contractible phase space manifolds. To calculate Chern numbers of eigenmode bundles over a 2D sphere in phase space, as required by Faure's index theorem, a boundary isomorphism theorem (Theorem \ref{thm:BoundaryIso}) was established. The TLCW is proved to exist in magnetized plasmas as a spectral flow with spectral flow index one. The Chern theorem (Theorem \ref{thm:Chern}), instead of the Berry connection or any other connection, was used to calculate the Chern numbers. Finally, we developed an analytically solvable model for the TLCW using a tilted phase space Dirac cone. An analytical solution of the PDO of a generic tilted phase space Dirac cone was found, which generalized the previous result for a straight Dirac cone \citep{Faure2019}. The spectral flow index of the tilted Dirac cone was calculated to be one, and the mode structure of the spectral flow was found to be a shifted Gaussian function. As a topological edge wave, the TLCW can propagate unidirectionally along complex boundaries without reflection or scattering. Due to this topological robustness, it might be relatively easy to excite the TLCW experimentally. Of course, laboratory and astrophysical plasmas are subject to many more physical effects that have not been included in the present model, such as collisions and finite temperature. For practical applications, these factors need to be carefully evaluated by experimental and theoretical methods. \begin{acknowledgments} This research was supported by the U.S. Department of Energy (DE-AC02-09CH11466). We thank Dr. F. Faure, Dr. P. Delplace, and Prof. B. Simon for fruitful discussions. The present study is inspired by their groundbreaking contributions.
\end{acknowledgments} \bibliography{ref} \end{document}
2205.02276v2
http://arxiv.org/abs/2205.02276v2
Iterated line graphs with only negative eigenvalues $-2$, their complements and energy
\documentclass[CCO,PDF,12pt]{cco_author} \usepackage{layout} \title{Iterated line graphs with only negative eigenvalues $-2$, their complements and energy} \shorttitle{Iterated line graphs with only negative eigenvalues $-2$, their complements and energy} \author{Harishchandra S. Ramane\inst{1}, B. Parvathalu\inst{2}\footnote{Corresponding Author}, K. Ashoka\inst{3}, Daneshwari Patil\inst{1} } \shortauthor{Harishchandra S. Ramane, B. Parvathalu, K. Ashoka, Daneshwari Patil} \institute{\centering \inst{1} Department of Mathematics, Karnatak University, Dharwad 580003, Karnataka, India \\ {\tt [email protected], [email protected]} \inst{2} Department of Mathematics, Karnatak University's Karnatak Science/Arts College Dharwad 580001, Karnataka, India \\ {\tt [email protected]} \inst{3} Department of Mathematics, Christ University, Bangalore 560029, Karnataka, India\\ {\tt [email protected]} } \abstract{Graphs with all equal negative or positive eigenvalues are a special kind in spectral graph theory. In this article, several iterated line graphs $\mathcal{L}^k(G)$ with all equal negative eigenvalues $-2$ are characterized for $k\ge 1$ and their energy consequences are presented. The spectra and the energy of the complements of these graphs are also obtained; interestingly, they have exactly two positive eigenvalues with different multiplicities. Moreover, we characterize a large class of equienergetic graphs which extends the existing results. } \keywords{Energy of a graph, Eigenvalues of graphs, Equitable partition, Iterated line graphs, Complement of a graph} \history{Received: ....; Revised: .....; Accepted: ....; Published Online: .... }\msc{05C50, 05C76, 05C12} \begin{document} \maketitle \section{Introduction}\label{sec1} Spectral graph theory is aimed at answering questions related to graph theory in terms of the eigenvalues of matrices which are naturally associated with graphs. Graphs with all equal positive or negative eigenvalues are of a very special kind.
The line graph of a graph has the special property that its least eigenvalue is not smaller than $-2$ \cite{Hoffman}. In the last two decades the graphs with least eigenvalue $-2$ have been well studied \cite{Cvetkovic_Sgen,Cvetten_yrs,VijayaKumar}. Of particular interest in this class are the graphs with all negative eigenvalues equal to $-2$. With the help of the relation \eqref{line_eig_snles_eqn}, this problem can be restated in terms of the eigenvalues of the signless Laplacian matrix $Q$: these are the line graphs of graphs whose signless Laplacian eigenvalues all belong to the set $[2, \infty)\cup\{0\}$. In \cite{HSRamane} Ramane et al. obtained the spectra and the energy of iterated regular line graphs $\mathcal{L}^k(G)$ for $k\ge 2$ and thus gave infinitely many pairs of non-trivial equienergetic graphs which belong to the above class of graphs. Let $\rho$ be the property that a graph $G$ has all its negative eigenvalues equal to $-2$. In this paper we are motivated to find several classes of iterated line graphs $\mathcal{L}^k(G)$ with the property $\rho$ for $k\ge 1$, the spectra of their complements and the energy consequences. As a consequence of the energy of these graphs we present a large class of equienergetic graphs which generalizes the existing results. The energy relation between a graph $G$ and its complement was studied in \cite{Mojallal,Nikiforov,ramathesis,Ramane_Bp,Ramane-2023}; we extend the results pertaining to $\mathcal{E}(\overline{G}) = \mathcal{E}(G)$ to non-regular iterated line graphs with the property $\rho$. The energy of line graphs is well studied in \cite{Kinkar_Das,Gutman_line,Hyper,HSRamane,OscarRojo}. In this paper, we also present hyperenergetic iterated line graphs and their complements. The software Sage \cite{sage} is used to verify some of the results.\\ This paper is organized as follows: Section $2$ introduces basic definitions, equitable partitions and known results on the spectra and energy of graphs.
In Section $3$, it is proven that the two distinct quotient matrices defined for an equitable partition of the $H$-join of regular graphs produce the same partial spectrum for the adjacency matrix, Laplacian matrix and signless Laplacian matrix. Section $4$ presents findings on the spectra and energy of iterated line graphs that satisfy the property $\rho$. Section $5$ discusses the spectra and energy of the complements of the graphs covered in Section $4$. Section $6$ provides an upper bound for the independence number of iterated line graphs and their complements, along with a theorem regarding the minimum order of a connected graph whose complement is also connected, and considers line graphs satisfying the property $\rho$. \section{Preliminaries} In this paper, simple, undirected and finite graphs $G$ are considered, with vertex set $V(G)=\{v_1, v_2, \ldots, v_n\}$ and edge set $E(G)=\{e_1, e_2, \ldots, e_m\}$. Let $e=v_iv_j$ be an edge of $G$ with end vertices $v_i$ and $v_j$. The order and size of a graph $G$ refer to the number of vertices and the number of edges, respectively. The complement $\overline{G}$ of a graph $G$ has the same vertex set as $G$, but two vertices are adjacent in $\overline{G}$ if and only if they are not adjacent in $G$. Let $G-\{v_1, v_2, \ldots, v_k\}$, $k<n$, denote the subgraph of $G$ obtained by deleting the vertices $v_1, v_2, \ldots, v_k$ and the edges incident to them, and simply write $G-v$ if one vertex $v$ and the edges incident to it are deleted. The {\bf adjacency matrix} $A(G)$ or simply $A$ of a graph $G$ of order $n$ is the $n\times n$ matrix indexed by $V(G)$ whose $(i, j)$-th entry is defined as $a_{ij} = 1$ if $v_iv_j\in E(G)$ and $0$ otherwise. The \textbf{Laplacian} $L$ and \textbf{signless Laplacian} $Q$ of a graph are defined as $L = D - A$ and $Q = D + A$, respectively, where $D = [d_i]$ is the diagonal degree matrix of appropriate order, and the $i$-th diagonal entry $d_i$ represents the degree of vertex $v_i$.
The matrix $Q$ is positive semidefinite and possesses real eigenvalues. The spectrum of a square matrix $M$ is the multiset of its eigenvalues, including their algebraic multiplicities. Denote the characteristic polynomial and the spectrum of a matrix $M$ associated to a graph $G$, respectively, by $\varphi(M(G);x)$ and $Sp_M(G)$. Let $a Sp_M(G)\pm b = \{a\lambda\pm b: \lambda\in Sp_M(G)\}$ for any real numbers $a$ and $b$. Let the difference between two sets $A$ and $B$ be denoted by $A \setminus B$. The spectrum of a graph is the spectrum of its adjacency matrix $A$. The inertia of a graph $G$ is the triplet $(n^+, n^-, n^0)$, representing the number of positive, negative and zero eigenvalues, respectively. Given a graph $G$, we denote the spectrum of the adjacency matrix and the signless Laplacian matrix respectively by $Sp_A(G)=\{\lambda_1^{m_1}, \lambda_2^{m_2}, \ldots, \lambda_k^{m_k}\}$ and $Sp_Q(G)=\{q_1^{m_1}, q_2^{m_2}, \ldots, q_k^{m_k}\}$, where $\lambda_{i}$'s and $q_{i}$'s are indexed in descending order, and $m_i$ is the multiplicity of the respective eigenvalue for $1\le i\le k$. Denote the least eigenvalue of the signless Laplacian by $q_{\min}$. The {\bf energy} \cite{Gutman} of a graph $G$ is defined as $\mathcal{E}(G) = \sum\limits_{i=1}^{n}\lvert\lambda_i\rvert = 2\sum\limits_{i=1}^{n^+}\lambda_{i} = -2\sum\limits_{i=1}^{n^-}\lambda_{n-i+1}$. Two graphs $G_1$ and $G_2$ having the same order are called {\bf equienergetic} if they share the same energy. \noindent A set of vertices in a graph $G$ is independent if no two vertices in the set are adjacent. The {\bf independence number} $\alpha{(G)}$ of a graph $G$ is the maximum cardinality of an independent set of $G$. As usual, the graphs $C_n, K_n$ and $P_n$ denote the cycle graph, complete graph and path graph on $n$ vertices, respectively. For additional notation and terminology, we refer to \cite{Cvetk_book}.
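As a tiny illustration of the energy definition (our example, not from the paper), the complete graph $K_4$ has spectrum $\{3, -1^{3}\}$, so $\mathcal{E}(K_4)=6$, and the three equivalent expressions above agree because the adjacency eigenvalues sum to zero:

```python
import numpy as np

# Graph energy E(G) = sum of |lambda_i| over adjacency eigenvalues.
def energy(A):
    return np.abs(np.linalg.eigvalsh(A)).sum()

n = 4
K4 = np.ones((n, n)) - np.eye(n)         # complete graph K_4, Sp = {3, -1^3}
assert np.isclose(energy(K4), 6.0)

# the three equivalent forms of the definition agree (trace of A is zero)
ev = np.linalg.eigvalsh(K4)
assert np.isclose(2 * ev[ev > 1e-9].sum(), energy(K4))
assert np.isclose(-2 * ev[ev < -1e-9].sum(), energy(K4))
```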
\begin{definition} \cite{Turandef} The {\bf Tur\'{a}n graph} $T_r(n)$, $r\ge 1$, is the complete $r$-partite graph of order $n$ with all parts of size either $\lfloor n/r \rfloor$ or $\lceil n/r \rceil$. \end{definition} \begin{definition}\cite{Harary} The {\bf line graph} $\mathcal{L}(G)$ of a graph $G$ is constructed by taking the edge set of $G$ as the vertex set of $\mathcal{L}(G)$. Two vertices in $\mathcal{L}(G)$ are connected by an edge if their corresponding edges in $G$ share a common vertex. The $k$-th {\bf iterated line graph} of $G$, denoted by $\mathcal{L}^k(G)$ for $k\in\{0, 1, 2,\ldots\}$, is defined as follows: $\mathcal{L}^0(G) = G$ and for $k \geq 1$, $\mathcal{L}^k(G) = \mathcal{L}(\mathcal{L}^{k-1}(G))$. \end{definition} Let $n_k$ and $m_k$ be the order and size of $\mathcal{L}^k(G)$ for $k \in \{0, 1, 2,\ldots\}$. It is noted that $n_k=m_{k-1}$ for $k \in \{1, 2,\ldots\}$. Let us denote the eigenvalues of $\mathcal{L}^k(G)$ and its complement $\overline{\mathcal{L}^k(G)} $ respectively by $\lambda_{k(j)}$ and $\overline{\lambda}_{k(j)}$ for $k \in \{1, 2,\ldots\}$ and $1\le j\le n_k$. The complement of a line graph is also called a jump graph \cite{Jump1}. \begin{theorem}{\rm \cite{Cvetk_book}}\label{reg_compl} Suppose $G$ is an $r$-regular graph of order $n$ with spectrum $Sp_A(G)=\{r, \lambda_2, \ldots, \lambda_n\}$; then its complement $\overline{G}$ is an $(n-r-1)$-regular graph with spectrum $Sp_A(\overline{G})=\{n-r-1, -1-\lambda_n, \ldots, -1-\lambda_2\}$. \end{theorem} \begin{theorem}{\rm \cite{Sachs}}\label{line_eigv} Suppose $G$ is an $r$-regular graph of order $n$ and size $m$ with spectrum $Sp_A(G)=\{r, \lambda_2, \ldots, \lambda_n\}$; then the line graph of $G$ is a $(2r-2)$-regular graph with spectrum $Sp_A(\mathcal{L}(G))=\{2r-2, \lambda_2+r-2, \ldots, \lambda_n+r-2, -2^{m-n}\}$. \end{theorem} \noindent Let $n$ and $m$ represent the order and size of a graph $G$, respectively.
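Theorem \ref{line_eigv} is easy to check numerically on a small case (our illustration, not from the paper): for the $3$-regular graph $K_4$ ($n=4$, $m=6$) the line graph $\mathcal{L}(K_4)$ is the octahedron, whose adjacency matrix can be built from the vertex-edge incidence matrix $B$ via the standard identity $A(\mathcal{L}(G))=B^{T}B-2I$.

```python
import itertools
import numpy as np

# Check Sp(L(K4)) = {2r-2} ∪ {lambda_i + r - 2} ∪ {-2^(m-n)} for K_4.
n, r = 4, 3
edges = list(itertools.combinations(range(n), 2))    # the 6 edges of K_4
m = len(edges)
B = np.zeros((n, m))
for j, (u, v) in enumerate(edges):
    B[u, j] = B[v, j] = 1                            # vertex-edge incidence
A_line = B.T @ B - 2 * np.eye(m)                     # adjacency of L(K_4)

spec = np.sort(np.linalg.eigvalsh(A_line))
lam = np.sort(np.linalg.eigvalsh(np.ones((n, n)) - np.eye(n)))  # Sp(K_4), ascending
predicted = np.sort(np.concatenate(
    [[2 * r - 2], lam[:-1] + r - 2, -2 * np.ones(m - n)]))
assert np.allclose(spec, predicted)                  # {-2, -2, 0, 0, 0, 4}
```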
The relationship between the eigenvalues of the line graph $\mathcal{L}(G)$ and the signless Laplacian $Q$ of $G$ is given by \cite{cvet_signless}: \begin{equation}\label{line_eig_snles_eqn} Sp_A\big(\mathcal{L}(G)\big) = \{-2^{m-n}\} \cup \big(Sp_Q(G)-2\big). \end{equation} The multiplicity of the eigenvalue $-2$ in $\mathcal{L}(G)$ is $m - n + 1$ if $G$ is bipartite, and $m - n$ if $G$ is not bipartite \cite{Cvetkovic_Sgen}. \begin{theorem}{\rm\cite{Kinkar_Das}}\label{Kinkar} Let $G$ be a graph with order $n (> 2)$, $m$ edges and minimum degree $\delta$. If $\delta \ge \frac{n}{2} + 1$, then the line graph $\mathcal{L}(G)$ satisfies the property $\rho$ and has energy $4(m-n)$. Thus, all line graphs of such graphs with order $n$, size $m$ and minimum degree $\delta \ge \frac{n}{2} + 1$ are mutually equienergetic. \end{theorem} \begin{theorem}{\rm\cite{Hyper}}\label{hyper_en} Let $G$ be a graph of order $n(\ge 5)$ and size $m$. If $m\ge 2n$, the line graph $\mathcal{L}(G)$ is hyperenergetic. \end{theorem} \noindent The following is an elegant, though not sharp, relation between the smallest eigenvalue $\lambda_{\text{min}}$ of $A$ and the smallest eigenvalue $q_{\text{min}}$ of $Q$. \begin{proposition}{\rm\cite{Desai}}\label{Desai_prop} If $G$ is a graph with minimum degree $\delta$ and maximum degree $\Delta$, then $$ q_{min}-\Delta \le \lambda_{min} \le q_{min}-\delta.$$ \end{proposition} \begin{proposition}{\rm\cite{de_Lima}}\label{d_Lima} If $G$ is a spanning subgraph of a graph $G'$, then $q_{min}(G)\le q_{min}(G')$. \end{proposition} \begin{theorem}{\rm\cite{HeChang}}\label{He_Chang} If $G$ is a graph with vertex $v$, then $q_{min}(G)-1\le q_{min}(G-v)$. \end{theorem} \noindent Let ${\bf j}$ denote the all-ones column vector, that is, ${\bf j}=(1,1,\ldots, 1)^T$.
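Relation \eqref{line_eig_snles_eqn} can also be verified on a non-regular example (our illustration, not from the paper), here the ``diamond'' $K_4$ minus one edge ($n=4$, $m=5$), using the incidence-matrix identities $Q=BB^{T}$ and $A(\mathcal{L}(G))=B^{T}B-2I$:

```python
import numpy as np

# Check Eq. (line_eig_snles_eqn): Sp(L(G)) = {-2^(m-n)} ∪ (Sp_Q(G) - 2)
# on the non-regular "diamond" graph K_4 minus one edge.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]], dtype=float)
n = len(A)
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j]]
m = len(edges)                                   # m = 5
B = np.zeros((n, m))
for k, (u, v) in enumerate(edges):
    B[u, k] = B[v, k] = 1
Q = B @ B.T                                      # signless Laplacian D + A
A_line = B.T @ B - 2 * np.eye(m)                 # adjacency of the line graph

lhs = np.sort(np.linalg.eigvalsh(A_line))
rhs = np.sort(np.concatenate([-2 * np.ones(m - n), np.linalg.eigvalsh(Q) - 2]))
assert np.allclose(lhs, rhs)
```

Since the diamond is non-bipartite, $Q$ is nonsingular and the multiplicity of $-2$ in the line graph is exactly $m-n=1$, as stated above.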
\begin{theorem}{\rm\cite{Hagos}}\label{compl-1-l} If $\lambda\in Sp_A(G)$, then $-1-\lambda\in Sp_A(\overline{G})$ if and only if ${\bf j}^TY=0$ for some eigenvector $Y$ of $G$ corresponding to the eigenvalue $\lambda$. \end{theorem} \begin{proposition}{\rm\cite{Cvetk_book}}\label{eig_space-2} In the line graph $\mathcal{L}(G)$ of a graph $G$, the eigenspace of the eigenvalue $-2$ is orthogonal to the vector ${\bf j}$. \end{proposition} \noindent Weyl's eigenvalue inequality \cite{Horn}, $\lambda_j(M_1)+\lambda_k(M_2)\le \lambda_i(M_1+M_2)$ for $j+k-n\ge i$, for the sum of two Hermitian matrices $M_1$ and $M_2$ of order $n$ gives the following useful eigenvalue inequality for a graph $G$ of order $n$ and its complement $\overline{G}$ \cite{Mojallal}: \begin{equation}\label{eig_inq} \,\lambda_j+\overline{\lambda}_{n-j+2}\le -1 \text{ for } j\in \{2, 3, \ldots , n\}. \end{equation} \begin{proposition}{\rm\cite{GodsilCD}}\label{prop_minor} Let $M$ be any square matrix of order $n$ with characteristic polynomial $\varphi(M;x)=\sum\limits_{r=0}^{n}(-1)^{r}m_rx^{n-r}$. Then $m_r$ is equal to the sum of the principal minors of $M$ of order $r$. \end{proposition} \begin{definition} If $G$ is a graph of order $n$ with vertices $v_1, v_2, \ldots, v_n$, then the graph $G[H_1, H_2, \ldots, H_n]$, called the {\bf generalized composition} or {\bf $H$-join} \cite{Schwenk, Cardoso_Spec}, is obtained from the graphs $H_1, H_2,\ldots, H_n$ by joining every vertex of $H_i$ to every vertex of $H_j$ whenever $v_i$ is adjacent to $v_j$ in $G$. \end{definition} \noindent Let $\pi=(\pi_1,\pi_2,\ldots,\pi_p)$ be a partition of the vertex set of a graph $G$. The partition $\pi$ is called an {\bf equitable partition} \cite{ChangAn,Schwenk} if for each $i,j=1,2,\ldots,p$ there exists a number $c_{ij}$ such that every vertex $v\in\pi_i$ has exactly $c_{ij}$ edges to the vertices in $\pi_j$.
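As a small illustration of an equitable partition (our example, not from the paper), take the star $K_{1,3}$ with $\pi=(\{\text{center}\},\{\text{leaves}\})$; then $c_{11}=0$, $c_{12}=3$, $c_{21}=1$, $c_{22}=0$, and the eigenvalues $\pm\sqrt{3}$ of the matrix $[c_{ij}]$ are also adjacency eigenvalues of $K_{1,3}$ itself:

```python
import numpy as np

# Equitable partition of the star K_{1,3}: pi = ({center}, {leaves}).
A = np.zeros((4, 4))
A[0, 1:] = 1
A[1:, 0] = 1                                  # adjacency matrix of K_{1,3}
C = np.array([[0.0, 3.0],                     # c_ij: edges from pi_i to pi_j
              [1.0, 0.0]])

ev_G = np.linalg.eigvalsh(A)                  # {-sqrt(3), 0, 0, sqrt(3)}
for mu in np.linalg.eigvals(C):               # {sqrt(3), -sqrt(3)}
    # every eigenvalue of [c_ij] appears among the eigenvalues of the graph
    assert np.min(np.abs(ev_G - mu.real)) < 1e-9
```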
If $\pi$ is an equitable partition, then the associated $p\times p$ matrix with rows and columns corresponding to the partite sets $\pi_1,\pi_2,\ldots,\pi_p$ is called the {\bf quotient matrix}. Let $A_{\pi}$ be the $p\times p$ matrix with $(i, j)$-th element $a^{\pi}_{ij}$ equal to $c_{ij}$, and let $D_{\pi}$ be the $p\times p$ diagonal matrix whose $i$-th diagonal element equals $\sum\limits_{k=1}^{p}c_{ik}$. If $\pi$ is an equitable partition, we denote the quotient matrices for the adjacency, Laplacian and signless Laplacian of a graph, respectively, by $A_{\pi}, L_{\pi}$ and $Q_{\pi}$. The matrices $A_{\pi}, L_{\pi}$ and $Q_{\pi}$ are given by $A_{\pi}=[a^{\pi}_{ij}]$ \cite{Schwenk}, $L_{\pi}=[l^{\pi}_{ij}]=D_{\pi}-A_{\pi}$ \cite{Cardoso} and $Q_{\pi}=[q^{\pi}_{ij}]=D_{\pi}+A_{\pi}$ \cite{ChangAn,Wang_SL}. It is noted that these need not be symmetric matrices.\\ Let $H_i$ be $r_i$-regular graphs of order $n_i$ for $i=1,2,\ldots,p$. In the case of the $H$-join $G[H_1, H_2,\ldots, H_p]$, where $G$ is a graph of order $p$, let us denote the quotient matrices for the adjacency, Laplacian and signless Laplacian of a graph, respectively, by $A^{H}_{\pi}=[a^{H}_{ij}], L^{H}_{\pi}=[l^{H}_{ij}]$ and $Q^{H}_{\pi}=[q^{H}_{ij}]$. These matrices are defined as \cite{Cardoso_Spec,WuBaoFeng} \begin{equation*} a^{H}_{ij} = \left\{ \begin{array}{ll} c_{ij} & \text{ if } {i=j}\\[3mm] \sqrt{n_in_j} & \text{ if } {v_iv_j}\in E(G)\\[3mm] 0 & \text{ otherwise,} \end{array} \right. \end{equation*} \begin{equation*} l^{H}_{ij} = \left\{ \begin{array}{ll} l^{\pi}_{ij} & \text{ if } {i=j}\\ [3mm] -\sqrt{n_in_j} & \text{ if } {v_iv_j}\in E(G)\\ [3mm] 0 & \text{ otherwise} \end{array} \right. \end{equation*} and \begin{equation*} q^{H}_{ij}= \left\{ \begin{array}{ll} q^{\pi}_{ij} & \text{ if } {i=j}\\ [3mm] \sqrt{n_in_j} & \text{ if } {v_iv_j}\in E(G)\\[3mm] 0 & \text{ otherwise.} \end{array} \right.
\end{equation*} \noindent It is noted that these are symmetric matrices.\\ If $\pi=(\pi_1,\pi_2,\ldots,\pi_p)$ is an equitable partition of a graph $G$ with $\lvert\pi_i\rvert = m_i$ for $i=1, \ldots, p$, then $\pi$ is also an equitable partition of its complement $\overline{G}$. The quotient matrix $\overline{A}_{\pi}$ of $\overline{G}$ is given by $\overline{A}_{\pi}={\bf J}_{\pi}-I-A_{\pi}$ \cite{Teranishi}, where ${\bf J}_{\pi}$ is the matrix of order $p$ whose $(i,j)$-th element equals $m_j$ and $I$ is the identity matrix of order $p$. \begin{proposition}{\rm\cite{Teranishi}}\label{Cospec_Prop} Let the graphs $G_1$ and $G_2$ be co-spectral with equitable partitions $\pi_1$ and $\pi_2$, respectively. If $A_{\pi_1} = A_{\pi_2}$, then the graphs $\overline{G_1}$ and $\overline{G_2}$ are co-spectral. \end{proposition} \begin{theorem}{\rm\cite{WuBaoFeng}}\label{SpecQ} Let $G$ be a graph of order $n$ and $H_i$ be an $r_i$-regular graph of order $n_i$ for $i=1, 2, \ldots, n$. If ${\Gamma}=G[H_1, H_2,\ldots, H_n]$, then $$Sp_Q(\Gamma)=\left(\cup_{i=1}^n\Big((q^{\pi}_{ii}-2r_i)+(Sp_Q(H_i)\setminus\{2r_i\})\Big)\right)\cup \Big(Sp(Q^{H}_{\pi})\Big).$$ \end{theorem} \noindent Suppose that $n\ge 2$ and $M=[m_{ij}]\in \mathbb{C}^{n\times n}$. The Ger\v{s}gorin discs $D_i, i=1,2,\ldots,n$ of the matrix $M$ are defined as the closed circular regions $D_i=\{z\in\mathbb{C}:\lvert z-m_{ii}\rvert\le R_i\}$ in the complex plane, where \begin{equation*} R_i= \sideset{}{}{\sum}_{\substack{ j=1\\ j\ne i }}^{n}\lvert m_{ij}\rvert \end{equation*} is the radius of $D_i$. \begin{theorem}[Ger\v{s}gorin]{\rm\cite{Horn}}\label{Gersgorin} Let $n\ge 2$ and $M\in \mathbb{C}^{n\times n}$. All eigenvalues of the matrix $M$ lie in the region $D=\cup_{i=1}^{n}D_i$, where $D_i, i=1,2,\ldots,n$ are the Ger\v{s}gorin discs of $M$.
\end{theorem} \section{Spectra of Quotient Matrices} \begin{theorem}\label{eqlquospec} Let $\Gamma = G[H_1, H_2,\ldots, H_n]$, where $H_i$ is an $r_i$-regular graph of order $n_i$ for $i=1, 2, \ldots, n$. Then the spectra of the quotient matrices $A_{\pi}, L_{\pi}$ and $Q_{\pi}$ are equal to the spectra of the quotient matrices $A^{H}_{\pi}, L^{H}_{\pi}$ and $Q^{H}_{\pi}$, respectively. \end{theorem} \begin{proof} It is clear that $\pi = (V(H_1), V(H_2), \ldots, V(H_n))$ is an equitable partition of $\Gamma$. Let us first prove the result for the matrices $Q_{\pi}$ and $Q^{H}_{\pi}$ using Proposition \ref{prop_minor}; a similar argument applies to the remaining matrices. The entries of the matrices $Q_{\pi}$ and $Q^{H}_{\pi}$ can be written as $q^{\pi}_{ij}=a_{ij}n_j$, $q^{H}_{ij}=a_{ij}\sqrt{n_in_j}$ for $i\neq j$ and $q^{\pi}_{ii}=q^{H}_{ii}$, where $a_{ij}$ is the entry of the adjacency matrix of $G$. Let $S_n$ be the set of all permutations over the set $\{1, 2, \ldots, n\}$ and for $\sigma\in S_n$ denote its sign by $sgn(\sigma)$. Let $M_r^{'}$ be the principal minor of order $r$ obtained by deleting any $n-r$ rows and the corresponding columns of the matrix $Q_{\pi}$, and let the respective principal minor of the matrix $Q^{H}_{\pi}$ be $M_r^{''}$ for $1\leq r\leq n$. Then $M_r^{'}=\sum\limits_{\sigma\in S_r}sgn(\sigma)\prod\limits_{k=1}^{r}q^{\pi}_{k\sigma(k)}=\sum\limits_{\sigma\in S_r}sgn(\sigma)\prod\limits_{k=1}^{r}a_{k\sigma(k)}n_{\sigma(k)}=\sum\limits_{\sigma\in S_r}sgn(\sigma)\prod\limits_{k=1}^{r}a_{k\sigma(k)}n_k$ and $M_r^{''}=\sum\limits_{\sigma\in S_r}sgn(\sigma)\prod\limits_{k=1}^{r}q^{H}_{k\sigma(k)}=\sum\limits_{\sigma\in S_r}sgn(\sigma)\prod\limits_{k=1}^{r}a_{k\sigma(k)}\sqrt{n_kn_{\sigma(k)}}=\sum\limits_{\sigma\in S_r}sgn(\sigma)\prod\limits_{k=1}^{r}a_{k\sigma(k)}n_k$ if $k\neq\sigma(k)$ for any $k=1,2,\ldots, r$.
If $k=\sigma(k)$ for $k=1,2,\ldots, p$ and $1\leq p\leq r$, then $M_r^{'}=M_r^{''}=\sum\limits_{\sigma\in S_r}sgn(\sigma)\prod\limits_{k=1}^{r-p}a_{k\sigma(k)}n_k(q^{\pi}_{kk})^p$. This shows that $M_r^{'}=M_r^{''}$ for each $r$ with $1\le r\le n$. Hence, the sums of the principal minors of order $r$ of the matrices $Q_{\pi}$ and $Q^{H}_{\pi}$ are equal. Therefore, the matrices $Q_{\pi}$ and $Q^{H}_{\pi}$ have the same characteristic polynomial by Proposition \ref{prop_minor}. Thus the matrices $Q_{\pi}$ and $Q^{H}_{\pi}$ have the same spectrum. \end{proof} Now, with the help of the above theorem, Theorem \ref{SpecQ} can be stated in the following way: \begin{proposition}\label{HSpecQeql} Let $G$ be a graph of order $n$ and $H_i$ be an $r_i$-regular graph of order $n_i$ for $i=1, 2, \ldots, n$. If $\Gamma=G[H_1, H_2,\ldots, H_n]$, then $$Sp_Q(\Gamma)=\left(\bigcup_{i=1}^n\Big(R_i+(Sp_Q(H_i)\setminus\{2r_i\})\Big)\right)\cup \Big(Sp(Q_{\pi})\Big),$$ where $R_i$ is the $i$-th row sum of $Q_{\pi}$ excluding the diagonal entry, for $i=1, 2, \ldots, n$. \end{proposition} \begin{proof} The proof follows from Theorem \ref{eqlquospec} and the observation that $q^{\pi}_{ii} = 2r_i+R_i$ for $i=1, 2, \ldots, n$. \end{proof} \section{Spectra and Energy of iterated line graphs with the property $\rho$} \begin{theorem}\label{Iter_line-2} Let $G$ be a graph with order $n_0$ and size $m_0$, where each edge $e = uv$ in $G$ satisfies $d_u + d_v \ge 6$. Then, the graphs $\mathcal{L}^k(G)$ have the property $\rho$ for $k \ge 2$. Moreover, all the iterated line graphs $\mathcal{L}^k(G)$ of such a graph $G$ are mutually equienergetic, with energy $4(n_k - n_{k-1})$ for $k \ge 2$. \end{theorem} \begin{proof} If $G$ is a graph of order $n_0$ and size $m_0$ with an edge $e=uv$, then in the line graph $\mathcal{L}(G)$, the degree of the vertex corresponding to the edge $e$ is $d_u + d_v - 2$. Given that $d_u + d_v \ge 6$ for every edge $e = uv$ in $G$, it follows that $d_u + d_v - 2 \ge 4$.
This implies that the minimum degree of each vertex in $\mathcal{L}(G)$ is at least four. It is well known that the least eigenvalue of the line graph $\mathcal{L}(G)$ is not smaller than $-2$. Therefore, by Proposition \ref{Desai_prop}, the least eigenvalue $\lambda_{\text{min}}$, the smallest signless Laplacian eigenvalue $q_{\text{min}}$, and the minimum degree $\delta$ of $\mathcal{L}(G)$ satisfy $q_{\text{min}} \ge \lambda_{\text{min}} + \delta \ge -2 + 4 = 2$. Now, by relation \eqref{line_eig_snles_eqn}, $\mathcal{L}(\mathcal{L}(G)) = \mathcal{L}^2(G)$ satisfies property $\rho$, with the multiplicity of $-2$ being equal to $m_1 - n_1$. It can be easily observed that the minimum degree $\delta$ in the line graphs $\mathcal{L}^k(G)$ increases as $k$ increases. This implies that $q_{\text{min}}$ of $\mathcal{L}^k(G)$ also increases with $k$ and is at least $2$. Hence, by relation \eqref{line_eig_snles_eqn}, the iterated line graphs $\mathcal{L}^k(G)$ satisfy property $\rho$ for each $k \ge 2$, with their energy equal to $4(m_{k-1} - n_{k-1}) = 4(n_k - n_{k-1})$. \end{proof} \noindent A tree is called a caterpillar if removing all its pendant vertices results in a path. The spectra and the energy of the line graphs of caterpillars are studied in \cite{OscarRojo}. \begin{example} The graph in Figure \ref{Fig1} is the caterpillar $C(4,3,4)$. This graph satisfies the conditions of Theorem \ref{Iter_line-2}, so the graphs $\mathcal{L}^k\big(C(4,3,4)\big)$ satisfy the property $\rho$ for $k\ge 2$. \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{Fig1}\\ \caption{Caterpillar $C(4,3,4)$} \label{Fig1} \end{figure} \end{example} \begin{corollary}\label{Cor_Iter_line-2} Let $G$ be a graph of order $n_0$ and size $m_0$ with minimum degree $\delta \ge 3$. Then, the graphs $\mathcal{L}^k(G)$ satisfy property $\rho$ for $k \ge 2$.
Moreover, all the iterated line graphs $\mathcal{L}^k(G)$ of such a graph $G$ are mutually equienergetic, with energy equal to $4(n_k - n_{k-1})$ for $k \ge 2$. \end{corollary} \begin{remark} In \cite{HSRamane}, Ramane et al. obtained the spectra and the energy of iterated line graphs of regular graphs with degree $r \ge 3$, thereby characterizing a large class of pairs of non-trivial equienergetic regular graphs. It is noted that all the results in their paper are special cases of Corollary \ref{Cor_Iter_line-2}. \end{remark} Now, Theorem \ref{Iter_line-2} naturally leads us to consider when $\mathcal{L}(G)$ satisfies property $\rho$, given that $\mathcal{L}^k(G)$ satisfies property $\rho$ for $k \ge 2$ in a graph $G$ where $d_u + d_v \ge 6$ for each edge $e = uv$. In this context, we have the following results. \begin{lemma}\label{leasteigQpi} Let $G$ be a connected graph of order $n(\ge 2)$ and $H_i$ be an $r_i$-regular graph of order $n_i$, where $r_i\ge 1$, for $i=1, 2, \ldots, n$. Then the least eigenvalue of the quotient matrix $Q_{\pi}$ of the graph $G[H_1, H_2,\ldots, H_n]$ is at least $2$. \end{lemma} \begin{proof} The signless Laplacian matrix $Q$ of a graph $G$ is positive semidefinite and hence possesses real eigenvalues. Consequently, by Proposition \ref{HSpecQeql}, the eigenvalues of the quotient matrix $Q_{\pi}$ are also real. In the $i$-th row of $Q_{\pi}$, the diagonal entry $q^{\pi}_{ii}$ satisfies $q^{\pi}_{ii} = 2r_i + R_i$, where $R_i$ represents the sum of the $i$-th row of $Q_{\pi}$ excluding $q^{\pi}_{ii}$. By applying Ger\v{s}gorin's Theorem \ref{Gersgorin}, all the eigenvalues of $Q_{\pi}$ are contained within the union of the closed intervals $[2r_i, 2(r_i + R_i)]$ for $i = 1, 2, \dots, n$. Now, the result follows from the condition that $r_i \geq 1$. \end{proof} \begin{theorem}\label{lineofjoin} Let $G$ be a connected graph of order $n(\ge 2)$ and $H_i$ be an $r_i$-regular graph of order $p_i$, where $r_i\ge 1$, for $i=1, 2, \ldots, n$.
If ${\Gamma}=G[H_1, H_2, \ldots, H_n]$ is a graph of order $n_0$ and size $m_0$, then the graphs $\mathcal{L}^k(\Gamma)$ satisfy the property $\rho$ for $k\ge 1$. Furthermore, the line graphs of the graphs $G'$ of order $n_k$ and size $m'_{k}$ for which $\mathcal{L}^k(\Gamma)$ is a spanning subgraph are mutually equienergetic with energy $4(m'_k-n_k)$ for $k\ge 0$. \end{theorem} \begin{proof} Since $G$ is a connected graph of order $n \geq 2$, for every vertex $v_i$ in $G$ there exists at least one adjacent vertex $v_j$. This implies that each vertex of the graph $H_i$ is adjacent to every vertex of $H_j$ for at least one $j$ in $\Gamma = G[H_1, H_2, \ldots, H_n]$. Moreover, since each $H_i$ is an $r_i$-regular graph of order $p_i$ with $r_i \geq 1$, it follows that $p_i \geq 2$. Thus the quotient matrix $Q_{\pi}$ corresponding to the equitable partition $\pi = (V(H_1), V(H_2), \ldots, V(H_n))$ has a non-diagonal entry $q^{\pi}_{ij} \geq 2$ in its $i$-th row for at least one $j$. Since the signless Laplacian $Q$ of any graph is positive semidefinite, it has non-negative eigenvalues. By applying Proposition \ref{HSpecQeql} and Lemma \ref{leasteigQpi}, we conclude that every signless Laplacian eigenvalue of $\Gamma$ is at least $2$. From the relation \eqref{line_eig_snles_eqn}, it follows that the line graph $\mathcal{L}(\Gamma)$ satisfies the property $\rho$. The graph $\Gamma$ of minimum order with the smallest possible degrees is $\Gamma = K_2[K_2, K_2]$, which shows that the minimum degree $\delta$ of $\Gamma$ is at least $3$. Thus, by Corollary \ref{Cor_Iter_line-2}, all the iterated line graphs $\mathcal{L}^k(\Gamma)$ for $k \geq 1$ satisfy the property $\rho$.
Since $\Gamma$ is a spanning subgraph of $G'$, Proposition \ref{d_Lima} implies that $G'$ also has $q_{\min} \geq 2$, which further implies that the graph $\mathcal{L}(G')$ satisfies the property $\rho$ with its energy equal to $4(m'_k - n_k)$ for $k \geq 0$ by relation \eqref{line_eig_snles_eqn}. This completes the proof. \end{proof} \begin{remark}\label{Join_remark} Let $G$ be a connected graph of order $n\ge 2$ and $H_i$ be an $r_i$-regular graph of order $p_i$ with $r_i\ge 1$, for $i=1, 2, \ldots, n$. Let $H_{s_1}$ and $H_{s_2}$ be two $r$-regular graphs of the same order with $r\ge 1$. If ${\Gamma_1}=G[H_1, \ldots, H_{s_1}, \ldots, H_n]$ and ${\Gamma_2}=G[H_1, \ldots, H_{s_2}, \ldots, H_n]$ are obtained by placing $H_{s_1}$ and $H_{s_2}$ in the same position $j$, $1\le j\le n$, then the graphs $\mathcal{L}^k(\Gamma_1)$ and $\mathcal{L}^k(\Gamma_2)$ are equienergetic for $k\ge 1$. Further, if $H_{s_1}$ and $H_{s_2}$ are non co-spectral (co-spectral) graphs, then we get non co-spectral (co-spectral) graphs $\mathcal{L}^k(\Gamma_1)$ and $\mathcal{L}^k(\Gamma_2)$ respectively, for $k\ge 0$, as they have the same quotient matrices. \end{remark} \begin{remark} In Theorem \ref{Kinkar}, Das et al. characterized a large class of equienergetic line graphs of graphs of order $n$ under the condition that the minimum degree $\delta \geq \frac{n}{2} + 1$. However, one can also construct equienergetic line graphs of graphs of order $n$ with the condition $\delta \leq \frac{n}{2} + 1$ by using Theorem \ref{lineofjoin}. For example, if $\Gamma = K_2[K_2, C_n]$ with $n \geq 6$, it is always possible to construct non-isomorphic equienergetic line graphs of $\Gamma$ with energy equal to $4\,\big(\text{size of } \Gamma - \text{order of } \Gamma\big)$ for $\delta = 4$ by adding edges between non-adjacent vertices of $\Gamma$. \end{remark} \begin{theorem}\label{vertex_del} Let $G$ be a connected graph of order $n(\ge 2)$ and $H_i$ be an $r_i$-regular graph of order $p_i$, where $r_i\ge 2$, for $i=1, 2, \ldots, n$.
If ${\Gamma}=G[H_1, H_2,\ldots, H_n]$, then the graphs $\mathcal{L}^k(\Gamma-v)$ satisfy the property $\rho$ for $k\ge 1$. Moreover, if $n_i\ge \min\{2r_i: r_i\ge s\ge2\}$, then the graphs $\mathcal{L}^k(\Gamma-\{v_1, v_2, \ldots, v_{2(s-1)}\})$ satisfy the property $\rho$ for $k\ge 1$. \end{theorem} \begin{proof} All the eigenvalues of $Q_{\pi}$ of $\Gamma$ belong to the union of the closed intervals $[2r_i, 2(r_i+R_i)]$ for $i=1, 2, \ldots, n$ by Lemma \ref{leasteigQpi}, which shows that each eigenvalue of $Q_{\pi}$ is at least $4$ since $r_i \ge 2$. Each row of the matrix $Q_{\pi}$ has at least one non-diagonal entry that is at least $3$, as $r_i \ge 2$ and $G$ is connected with order $n \ge 2$. By Proposition \ref{HSpecQeql}, the $q_{\min}$ of $\Gamma$ is at least $3$. By Theorem \ref{He_Chang}, the $q_{\min}$ of $\Gamma-v$ is at least $2$. Hence $\mathcal{L}(\Gamma-v)$ satisfies the property $\rho$ by relation \eqref{line_eig_snles_eqn}. If $n_i \ge \min\{2r_i: r_i \ge s \ge 2\}$, then the least eigenvalue of $Q_{\pi}$ is at least $2s$, and each row of $Q_{\pi}$ has at least one non-diagonal entry at least $2s$ with $s\ge 2$. By Proposition \ref{HSpecQeql}, the $q_{\min}$ of $\Gamma$ is at least $2s$. It is easy to observe by Theorem \ref{He_Chang} that $q_{\min}(\Gamma) - 2(s-1) \le q_{\min}\big(\Gamma-\{v_1, v_2, \ldots, v_{2(s-1)}\}\big)$. Hence $\mathcal{L}\big(\Gamma-\{v_1, v_2, \ldots, v_{2(s-1)}\}\big)$ satisfies the property $\rho$ by relation \eqref{line_eig_snles_eqn}. The graphs $\Gamma-v$ for $r_i \ge 2$ and $\Gamma-\{v_1, v_2, \ldots, v_{2(s-1)}\}$ for $n_i \ge \min\{2r_i: r_i \ge s \ge 2\}$ are both graphs with minimum degree $\delta \ge 3$. Thus by Corollary \ref{Cor_Iter_line-2}, all the iterated line graphs $\mathcal{L}^k(\Gamma-v)$ and $\mathcal{L}^k(\Gamma-\{v_1, v_2, \ldots, v_{2(s-1)}\})$ for $k \ge 1$ satisfy the property $\rho$.
\end{proof} \noindent The following is an interesting result arising from the minimum number of edges in the join of two connected graphs. \begin{corollary}\label{path_join_ite} If $m, n\ge 3$, then the graphs $\mathcal{L}^k\big(K_2[P_n, P_m]\big)$ satisfy the property $\rho$ for $k\ge 1$. \end{corollary} \begin{proof} Let $G = K_2$, $H_1 = C_{n+1}$ and $H_2 = C_{m+1}$ for $m, n \ge 3$ in Theorem \ref{vertex_del}. Let $v_1 \in H_1$ and $v_2 \in H_2$. By deleting the vertices $v_1$ and $v_2$ along with the edges incident to them in $H_1$ and $H_2$ respectively, we obtain $\Gamma - \{v_1, v_2\} = K_2[P_n, P_m]$. Therefore, the graphs $\mathcal{L}^k(K_2[P_n, P_m])$ satisfy the property $\rho$ for $k \ge 1$. \end{proof} \begin{definition}\cite{Gutman_Kn} Let $v$ be a vertex of the complete graph $K_n$, where $n \ge 3$, and let $e_i$, $i = 1, \ldots, p$ with $1 \le p \le n-1$, be distinct edges incident to $v$. The graph $Ka_n(p)$ is obtained by deleting the edges $e_i$ for $i = 1, \ldots, p$ from $K_n$. Note that $Ka_n(0) = K_n$. \end{definition} \noindent With this notation we have the following result. \begin{theorem}\label{ver_del_Kn} If $n\ge 6$, then the graphs $\mathcal{L}^k\big(Ka_n(p)\big)$, $1\le p\le n-4$, satisfy the property $\rho$ for $k\ge 1$. \end{theorem} \begin{proof} All the graphs of order up to $5$ whose line graphs satisfy the property $\rho$ are $C_4$, $K_4$, $K_{3,2}$ and $K_5$, according to Theorem \ref{min_order_thm}. It is noted that none of these graphs is of the type $Ka_n(p)$ for $p \ge 1$. If $n \ge 6$, the graph $Ka_n(n-4)$ can be expressed as $P_3[K_1, K_3, K_{n-4}]$. The quotient matrix $Q_\pi$ of $Ka_n(n-4)$ is given by \[ Q_\pi = \begin{bmatrix} 3 & 3 & 0 \\ 1 & n+1 & n-4 \\ 0 & 3 & 2n-7 \end{bmatrix} \] with its spectrum $$ Sp(Q_\pi) = \left\{n-\frac{1}{2}+\frac{1}{2}\sqrt{4n(n-7)+73}, n-2, n-\frac{1}{2}-\frac{1}{2}\sqrt{4n(n-7)+73}\right\}.
$$ The signless Laplacian spectrum of $Ka_n(n-4)$ is \[ Sp_Q\big(Ka_n(n-4)\big) = \{ (n-2)^2, (n-3)^{n-5}\} \cup Sp(Q_\pi). \] It is clear that, for $n \ge 6$, all the signless Laplacian eigenvalues of $Ka_n(n-4)$ are greater than or equal to $2$, with the possible exception of the third eigenvalue of $Sp(Q_\pi)$. However, this eigenvalue satisfies $n-\frac{1}{2}-\frac{1}{2}\sqrt{4n(n-7)+73} \ge 2$ whenever $n \ge 5$. Thus, $\mathcal{L}\big(Ka_n(p)\big)$ for $1 \le p \le n-4$ satisfies the property $\rho$ by using Proposition \ref{d_Lima} and the relation \eqref{line_eig_snles_eqn}. It is also easy to observe that the minimum degree of $Ka_n(p)$ for $1 \le p \le n-4$ is at least $3$. Hence, by Corollary \ref{Cor_Iter_line-2}, all the iterated line graphs $\mathcal{L}^k(Ka_n(p))$ for $k \ge 1$ and $1 \le p \le n-4$ satisfy the property $\rho$. \end{proof} There are certain classes of graphs with least eigenvalue $-2$, such as exceptional graphs and generalized line graphs \cite{Cvetkovic_Sgen}. If the minimum degree $\delta \ge 4$ in these classes of graphs, we have the following simple result. \begin{theorem}\label{min_deg4} Let $G$ be a graph with least eigenvalue $-2$ and minimum degree $\delta\ge 4$. Then the iterated line graphs $\mathcal{L}^k(G)$ satisfy the property $\rho$ for $k\ge 1$. \end{theorem} \begin{proof} By Proposition \ref{Desai_prop}, the least eigenvalue $\lambda_{min}$, the least signless Laplacian eigenvalue $q_{min}$ and the minimum degree $\delta$ of $G$ satisfy $q_{min}\ge \lambda_{min}+\delta\ge -2+4=2$. Therefore, by using the relation \eqref{line_eig_snles_eqn}, $\mathcal{L}(G)$ satisfies the property $\rho$. Now, by using Corollary \ref{Cor_Iter_line-2}, all the iterated line graphs $\mathcal{L}^k(G)$ satisfy the property $\rho$ for $k\ge 1$. \end{proof} \noindent Theorem \ref{Kinkar} of Das et al. can be extended to the iterated line graphs.
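As a numerical sanity check of the quotient spectrum computed in the proof of Theorem \ref{ver_del_Kn}, the sketch below (a minimal illustration, assuming Python with NumPy is available; the vertex labelling, with vertex $0$ playing the role of the $K_1$ part, is our own convention, and $n=8$ is an arbitrary concrete choice) builds $Ka_n(n-4)=P_3[K_1, K_3, K_{n-4}]$, compares the eigenvalues of $Q_\pi$ with the closed-form spectrum, and confirms that each quotient eigenvalue also occurs in the signless Laplacian spectrum of the graph itself, as the equitable-partition theory predicts.

```python
import numpy as np

n = 8  # any n >= 6 works; n = 8 is chosen only for concreteness

# Adjacency matrix of Ka_n(n-4) = P_3[K_1, K_3, K_{n-4}]:
# start from K_n, then delete the n-4 edges joining vertex 0 to vertices 4..n-1,
# so vertex 0 keeps exactly the three neighbours 1, 2, 3.
A = np.ones((n, n)) - np.eye(n)
A[0, 4:] = 0
A[4:, 0] = 0

Q = np.diag(A.sum(axis=1)) + A          # signless Laplacian Q = D + A

# Quotient matrix of Q for the equitable partition (K_1, K_3, K_{n-4})
Q_pi = np.array([[3.0, 3.0, 0.0],
                 [1.0, n + 1.0, n - 4.0],
                 [0.0, 3.0, 2.0 * n - 7.0]])

# Closed-form spectrum claimed in the proof
disc = np.sqrt(4.0 * n * (n - 7) + 73.0)
claimed = sorted([n - 0.5 + disc / 2.0, n - 2.0, n - 0.5 - disc / 2.0])

eig_pi = sorted(np.linalg.eigvals(Q_pi).real)   # Q_pi is non-symmetric but has real spectrum
eig_Q = np.linalg.eigvalsh(Q)

print(np.allclose(eig_pi, claimed))              # quotient spectrum matches the formula
print(all(np.any(np.isclose(eig_Q, t)) for t in eig_pi))  # quotient eigenvalues lie in Sp_Q
```

Both checks print `True`; the second reflects the general fact used throughout this section, namely that for an equitable partition $\pi$ one has $QS = SQ_\pi$ for the characteristic matrix $S$, so $Sp(Q_\pi)\subseteq Sp_Q(\Gamma)$.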
\begin{theorem}\label{Kinkar_ite_lin} Let $G$ be a graph of order $n_0( > 2)$ and size $m_0$ with minimum degree $\delta\ge \frac{n_0}{2} + 1$. Then the graphs $\mathcal{L}^k(G)$ satisfy the property $\rho$ for $k\ge 1$. \end{theorem} \begin{proof} If $G$ is a graph of order $n_0 > 2$ and size $m_0$ with minimum degree $\delta\ge \frac{n_0}{2} + 1$, then by Theorem \ref{Kinkar}, $\mathcal{L}(G)$ satisfies the property $\rho$. The existence of a graph $G$ with minimum degree $\delta\ge \frac{n_0}{2} + 1$ implies $n_0\ge 4$, and hence the minimum degree of $G$ is at least $3$. Therefore, by applying Corollary \ref{Cor_Iter_line-2}, all the iterated line graphs $\mathcal{L}^k(G)$ satisfy the property $\rho$ for $k\ge 1$. \end{proof} \noindent The following inequality was given by Leonardo de Lima et al. in \cite{Leonardo} for the Tur\'{a}n graph $T_r(n)$. \begin{equation*} (r-2)\Big\lfloor \frac{n}{r}\Big\rfloor<q_{min}\big(T_r(n)\big)\le \Big(1-\frac{1}{r}\Big)n. \end{equation*} \begin{note} The above inequality is not valid when $r=3$, $n=6, 7, 8$ or $r=4$, $n=5$, as seen from the following spectral values: $Sp_Q\big(T_3(6)\big)=\{8, 4^3, 2^2\}$, $Sp_Q\big(T_3(7)\big)=\{9.2745, 5^{2}, 4^{2}, 3, 1.7251\}$, $Sp_Q\big(T_3(8)\big)=\{10.6056, 6, 5^4, 3.3944, 2\}$ and $Sp_Q\big(T_4(5)\big)=\{7.3723, 3^3, 1.6277\}$. These values demonstrate that $q_{min}\big(T_3(6)\big)=2$, $q_{min}\big(T_3(7)\big)=1.7251$, $q_{min}\big(T_3(8)\big)=2$ and $q_{min}\big(T_4(5)\big)=1.6277$, whereas the inequality gives the strict lower bound $2$ for $q_{min}\big(T_r(n)\big)$ in these cases. However, with the help of this inequality we have the following result. \end{note} \begin{proposition}\label{Turan_itera} If $r = 3$, $n\ge 6$ and $n\ne 7$; or $r=4$ and $n\ne 5$; or $r\ge 5$, then the graphs $\mathcal{L}^k\big(T_r(n)\big)$ satisfy the property $\rho$ for $k\ge 1$.
\end{proposition} \begin{proof} The condition on $r$ and $n$ in the hypothesis, together with the above inequality (and, for $r=3$, $n=6, 8$, the spectral values listed in the note), guarantees that $q_{\text{min}}(T_r(n))$ is at least $2$. As a result, $\mathcal{L}(T_r(n))$ satisfies the property $\rho$ by applying the relation \eqref{line_eig_snles_eqn}. Moreover, it is evident that the minimum degree of $T_r(n)$ is at least $3$. Consequently, by Corollary \ref{Cor_Iter_line-2}, all iterated line graphs $\mathcal{L}^k\big(T_r(n)\big)$ satisfy the property $\rho$ for $k \geq 1$. \end{proof} \subsection{\bf Iterated regular line graphs with property $\rho$} Most of the results discussed so far involve non-regular iterated line graphs that satisfy the property $\rho$. We obtain iterated regular line graphs $\mathcal{L}^k(G)$ that satisfy the property $\rho$ from Theorem \ref{Iter_line-2} for $k \geq 2$ and from Theorem \ref{lineofjoin}, Theorem \ref{min_deg4} and Theorem \ref{Kinkar_ite_lin} for $k \geq 1$. In Proposition \ref{Turan_itera}, if $r$ divides $n$, we also get iterated regular line graphs $\mathcal{L}^k(G)$ that satisfy the property $\rho$ for $k \geq 1$. Here, we present additional iterated regular line graphs $\mathcal{L}^k(G)$ for $k \geq 1$ by taking regular graphs $G$. \begin{theorem}\label{reg1_ite_lin} If $G$ is an $r$-regular graph of order $n$ with $3\le r\le \frac{n-1}{3}$, then the graphs $\mathcal{L}^k(\overline{G})$ satisfy the property $\rho$ for $k\ge 1$. \end{theorem} \begin{proof} Let $Sp_A(G)=\{r, \lambda_2^{m_2}, \ldots, \lambda_t^{m_t}\}$ such that $1+\sum\limits_{i=2}^{t}m_i=n$. Then, by Theorem \ref{reg_compl}, $\overline{G}$ is also a regular graph with $Sp_A(\overline{G})=\{n-r-1, (-1-\lambda_t)^{m_t}, \ldots, (-1-\lambda_2)^{m_2}\}$ and by Theorem \ref{line_eigv}, $Sp_A(\mathcal{L}(\overline{G}))=\{2(n-1)-2r-2, (n-r-\lambda_t-4)^{m_t}, \ldots, (n-r-\lambda_2-4)^{m_2}, -2^{n(n-r-3)/2}\}$. We shall prove that all eigenvalues of $\mathcal{L}(\overline{G})$ other than $-2$ are non-negative.
Since $G$ is regular, $\mathcal{L}(\overline{G})$ is also regular with degree $2(n-1)-2r-2$, and the condition $r\le \frac{n-1}{3}$, that is $n\ge 3r+1$, implies $2(n-1)-2r-2>0$. The same condition implies $n-r-\lambda_i-4\ge 2r-\lambda_i-3\ge 0$, since $\lambda_i\le r$ and $r\ge 3$, for each $i=2, \ldots, t$. Hence $\mathcal{L}(\overline{G})$ satisfies the property $\rho$. Additionally, $n\ge 3r+1$ implies $n-r-1\ge 2r\ge 6$, which shows that $\overline{G}$ is a regular graph with minimum degree at least $6$. This implies that the graphs $\mathcal{L}^k(\overline{G})$ satisfy the property $\rho$ for each $k\ge 1$ by Corollary \ref{Cor_Iter_line-2}. \end{proof} \begin{theorem}\label{reg2_ite_lin} If $G$ is an $r$-regular graph of order $n\ge 8$ and $r\ge 1$, then the graphs $\mathcal{L}^k\big(\overline{\mathcal{L}(G)}\big)$, $k\ge 1$, satisfy the property $\rho$. \end{theorem} \begin{proof} Let $Sp_A(G)=\{r, \lambda_2^{m_2}, \ldots, \lambda_t^{m_t}\}$ such that $1+\sum\limits_{i=2}^{t}m_i=n$. Then, by Theorem \ref{line_eigv}, $Sp_A(\mathcal{L}(G))=\{2r-2, (\lambda_2+r-2)^{m_2}, \ldots, (\lambda_t+r-2)^{m_t}, -2^{n(r-2)/2}\}$. Since $\mathcal{L}(G)$ is regular, by Theorem \ref{reg_compl}, $\overline{\mathcal{L}(G)}$ is also a regular graph with $Sp_A\big(\overline{\mathcal{L}(G)}\big)=\{nr/2-2r+1, 1^{n(r-2)/2}, (1-r-\lambda_t)^{m_t}, \ldots, (1-r-\lambda_2)^{m_2}\}$. Again, by Theorem \ref{line_eigv}, $Sp_A\big(\mathcal{L}\big(\overline{\mathcal{L}(G)}\big)\big) = \{r(n-4), ((n-4)r/2)^{n(r-2)/2}, ((n-6)r/2-\lambda_t)^{m_t}, \ldots, ((n-6)r/2-\lambda_2)^{m_2}, -2^{nr(nr-4r-2)/8}\}$. It is easy to see that all the eigenvalues of $\mathcal{L}\big(\overline{\mathcal{L}(G)}\big)$ other than $-2$ are non-negative for $n\ge 8$. Hence, $\mathcal{L}\big(\overline{\mathcal{L}(G)}\big)$ satisfies the property $\rho$. It is noted that the degree of $\overline{\mathcal{L}(G)}$ is $nr/2-2r+1$, which is at least $3$ for $n\ge 8$ and $r\ge 1$.
This implies the graphs $\mathcal{L}^k\big(\overline{\mathcal{L}(G)}\big)$ satisfy the property $\rho$ for $k\ge 1$ by Corollary \ref{Cor_Iter_line-2}. \end{proof} \begin{remark} One can easily construct equienergetic graphs, similar to those in Theorem \ref{lineofjoin}, for the results in Theorems \ref{vertex_del} to \ref{reg2_ite_lin}. \end{remark} \begin{theorem}\label{hyper_en_itLine} Let $G$ be a graph with $d_u+d_v\ge 6$ for each edge $e=uv$ in $G$. Then the graphs $\mathcal{L}^k(G)$ are hyperenergetic for $k\ge 2$. \end{theorem} \begin{proof} Since $G$ is a graph where $d_u + d_v \geq 6$ for each edge $e = uv$, $\mathcal{L}(G)$ is a graph with minimum degree $\delta \geq 4$. The number of edges in $\mathcal{L}(G)$ is $m_1 = \frac{1}{2} \sum\limits_{i=1}^{m_0} d_i \geq \frac{1}{2}(4m_0) = 2m_0 = 2n_1$, which implies that the graph $\mathcal{L}^2(G)$ is hyperenergetic by Theorem \ref{hyper_en}. Note that the minimum degree increases in the line graphs $\mathcal{L}^k(G)$ as $k$ increases for $k \geq 2$ and it is at least $4$. Therefore, $m_k \geq 2n_k$ and by Theorem \ref{hyper_en}, all iterated line graphs $\mathcal{L}^k(G)$ are hyperenergetic for $k \geq 2$. \end{proof} \section{Spectra and Energy of complement of iterated line graphs with the property $\rho$} \begin{lemma}\label{compl_lem} Let $\mathcal{L}(G)$ be the line graph of a graph $G$ with order $n_0$ and size $m_0$. If $\mathcal{L}(G)$ has a non-negative eigenvalue $\lambda_{1(j)}$, then its complement $\overline{\mathcal{L}(G)}$ has a negative eigenvalue $\overline{\lambda}_{1(m_0-j+2)}$ for $j\in \{2, 3, \ldots , m_0\}$. In particular, if $\mathcal{L}(G)$ has eigenvalue $-2$, then its complement $\overline{\mathcal{L}(G)}$ has eigenvalue $1$. \end{lemma} \begin{proof} If $G$ is a graph of order $n_0$ and size $m_0$, then its line graph $\mathcal{L}(G)$ has order $m_0$.
If $\mathcal{L}(G)$ has a non-negative eigenvalue $\lambda_{1(j)}$, then its complement $\overline{\mathcal{L}(G)}$ has an eigenvalue $\overline{\lambda}_{1(m_0-j+2)}\le -1-\lambda_{1(j)}$ by inequality \eqref{eig_inq}, which is negative for $j\in \{2, 3, \ldots , m_0\}$. If $-2$ is an eigenvalue of $\mathcal{L}(G)$, then by Proposition \ref{eig_space-2}, its eigenspace is orthogonal to the all-ones vector ${\bf j}$. Now, by using Theorem \ref{compl-1-l}, $-1-(-2)=1$ is an eigenvalue of $\overline{\mathcal{L}(G)}$. This completes the proof. \end{proof} The papers \cite{Mojallal, Nikiforov, ramathesis, Ramane_Bp, Ramane-2023} explore exact relations between a regular graph $G$ and its complement. In the following, we extend these results to non-regular iterated line graphs with property $\rho$. \begin{theorem}\label{comple_spec_enrg} Let $G$ be a graph of order $n_0$ and size $m_0$. If the graphs ${\mathcal{L}^k(G)}$ satisfy the property $\rho$ with $-2$ having multiplicity $m_{k-1}-n_{k-1}$ for $k\ge 1$, then the graphs $\overline{\mathcal{L}^k(G)}$ for $k\ge 1$ have exactly two positive eigenvalues: the spectral radius and $1$ with multiplicity $m_{k-1}-n_{k-1}$. Furthermore, \begin{equation}\label{rel_line_cpml} \mathcal{E}(\overline{\mathcal{L}^k(G)})=2\overline{\lambda}_{k(1)}+\frac{\mathcal{E}(\mathcal{L}^k(G))}{2}.\end{equation} \end{theorem} \begin{proof} It is given that the iterated line graphs $\mathcal{L}^k(G)$ of $G$ for $k \ge 1$ have all negative eigenvalues equal to $-2$ with multiplicity $m_{k-1} - n_{k-1} = n_k - n_{k-1}$. This implies that the remaining $n_{k-1}$ eigenvalues of $\mathcal{L}^k(G)$ are non-negative. By Lemma \ref{compl_lem}, the complement of the iterated line graphs $\overline{\mathcal{L}^k(G)}$ has negative eigenvalues $\overline{\lambda}_{k(n_k-j+2)}$ for $j \in \{2, 3, \ldots, n_{k-1}\}$ and positive eigenvalues $\overline{\lambda}_{k(n_k-j+2)} = 1$ for $j \in \{n_{k-1}+1, \ldots, n_k\}$.
The only remaining eigenvalue of $\overline{\mathcal{L}^k(G)}$ is the spectral radius $\overline{\lambda}_{k(1)}$, which must be greater than or equal to $1$. If $\overline{\mathcal{L}^k(G)}$ is connected, then $\overline{\lambda}_{k(1)} > 1$. Thus, $\overline{\mathcal{L}^k(G)}$ has exactly two positive eigenvalues: $\overline{\lambda}_{k(1)}$ and $1$, with the latter having multiplicity $n_k - n_{k-1}$ for $k \ge 1$. Therefore, the energy of $\overline{\mathcal{L}^k(G)}$ is $\mathcal{E}(\overline{\mathcal{L}^k(G)}) = 2(\overline{\lambda}_{k(1)} + n_k - n_{k-1})$. Since $\mathcal{E}({\mathcal{L}^k(G)}) = 4(n_k - n_{k-1})$ as $-2$ is the only negative eigenvalue of ${\mathcal{L}^k(G)}$ with multiplicity $n_k - n_{k-1}$, we obtain the required energy relation between $\mathcal{E}(\overline{\mathcal{L}^k(G)})$ and $\mathcal{E}({\mathcal{L}^k(G)})$ for $k \ge 1$. \end{proof} \begin{corollary}\label{nec_suf_copl_equ} Let $G$ be a graph of order $n_0$ and size $m_0$. If the graphs ${\mathcal{L}^k(G)}$ satisfy the property $\rho$ with $-2$ having multiplicity $m_{k-1}-n_{k-1}$ for $k\ge 1$, then the graphs ${\mathcal{L}^k(G)}$ and $\overline{\mathcal{L}^k(G)}$ for $k\ge 1$ are equienergetic if and only if $n^-(\mathcal{L}^k(G))=\overline{\lambda}_{k(1)}$. \end{corollary} \begin{proof} The energy relation \eqref{rel_line_cpml} between $\mathcal{E}(\overline{\mathcal{L}^k(G)})$ and $\mathcal{E}(\mathcal{L}^k(G))$ by Theorem \ref{comple_spec_enrg} can be expressed as $2\mathcal{E}(\overline{\mathcal{L}^k(G)}) - \mathcal{E}(\mathcal{L}^k(G)) = 4\overline{\lambda}_{k(1)}$, or equivalently, $\mathcal{E}(\overline{\mathcal{L}^k(G)}) - \mathcal{E}(\mathcal{L}^k(G)) = 4\overline{\lambda}_{k(1)} - \mathcal{E}(\overline{\mathcal{L}^k(G)})$.
Now, the graphs $\mathcal{L}^k(G)$ and $\overline{\mathcal{L}^k(G)}$ for $k \ge 1$ are equienergetic if and only if $4\overline{\lambda}_{k(1)} - \mathcal{E}(\overline{\mathcal{L}^k(G)}) = 0$, that is, $4\overline{\lambda}_{k(1)} = \mathcal{E}(\overline{\mathcal{L}^k(G)})$. Since $\mathcal{E}(\overline{\mathcal{L}^k(G)}) = 2(\overline{\lambda}_{k(1)} + n_k - n_{k-1})$, we obtain $\overline{\lambda}_{k(1)} = n_k - n_{k-1}$. But, it is given that $n^-(\mathcal{L}^k(G)) = n_k - n_{k-1}$, which completes the proof. \end{proof} \begin{corollary}\label{compl_equi} Let $G$ be a graph of order $n_0$ and size $m_0$. If the graphs ${\mathcal{L}^k(G)}$ satisfy the property $\rho$ with $-2$ having multiplicity $m_{k-1} - n_{k-1}$ for $k \ge 1$, then the complements of the iterated line graphs $\overline{\mathcal{L}^k(G)}$ are mutually equienergetic for $k \ge 1$ if and only if they have the same spectral radius. \end{corollary} \begin{proof} If $G$ is a graph of order $n_0$ and size $m_0$, then the graphs ${\mathcal{L}^k(G)}$ have order $n_k$ and size $m_k$. If the graphs ${\mathcal{L}^k(G)}$ satisfy the property $\rho$ with $-2$ having multiplicity $m_{k-1} - n_{k-1}$ for $k \ge 1$, then all the iterated line graphs $\mathcal{L}^k(G)$ of such graphs $G$ with the same order $n_0$ and the same size $m_0$ are mutually equienergetic with energy $4(m_{k-1} - n_{k-1})$. Using this fact in the energy relation \eqref{rel_line_cpml} between $\mathcal{E}(\overline{\mathcal{L}^k(G)})$ and $\mathcal{E}(\mathcal{L}^k(G))$ from Theorem \ref{comple_spec_enrg} completes the proof. \end{proof} \begin{example} The graphs $\overline{\mathcal{L}^k(\Gamma_1)}$ and $\overline{\mathcal{L}^k(\Gamma_2)}$ in Remark \ref{Join_remark} are equienergetic for $k \ge 1$, as they have the same quotient matrices and the spectral radius of a quotient matrix coincides with the spectral radius of the corresponding graph.
Moreover, in Remark \ref{Join_remark}, if $H_{s_1}$ and $H_{s_2}$ are non co-spectral (co-spectral) graphs, then we obtain non co-spectral (co-spectral) graphs $\overline{\mathcal{L}^k(\Gamma_1)}$ and $\overline{\mathcal{L}^k(\Gamma_2)}$ respectively for $k \ge 0$ by using Proposition \ref{Cospec_Prop}. \end{example} \begin{remark} If $k \ge 1$, then the results in Theorem \ref{comple_spec_enrg}, Corollary \ref{nec_suf_copl_equ} and Corollary \ref{compl_equi} hold true for the iterated line graphs ${\mathcal{L}^k(G)}$ in Theorems \ref{lineofjoin} to \ref{reg2_ite_lin}. Similarly, if $k \ge 2$, these results hold true for the iterated line graphs ${\mathcal{L}^k(G)}$ in Theorem \ref{Iter_line-2} and Corollary \ref{Cor_Iter_line-2}. \end{remark} \begin{remark} In \cite{ramane_copl_equi}, Ramane et al. obtained equienergetic regular graphs using the complements of iterated regular line graphs $\overline{\mathcal{L}^k(G)}$ for $k\ge 2$ by taking regular graphs $G$ with the same order and the same degree $r\ge 3$. This approach characterizes a large class of pairs of non-trivial equienergetic regular graphs. Furthermore, it is noted that all the results in that paper are particular cases of Corollary \ref{compl_equi} and Corollary \ref{nec_suf_copl_equ}. \end{remark} \begin{theorem} Let $G$ be a graph with $d_u+d_v\ge 6$ for each edge $e=uv$ in $G$. Then the graphs $\overline{\mathcal{L}^k(G)}$ are hyperenergetic for $k\ge 2$ if ${\lambda}_{k(1)}\le \frac{n_k-1}{2}$. \end{theorem} \begin{proof} If $G$ is a graph with $d_u+d_v\ge 6$ for each edge $e=uv$, then the iterated line graphs $\mathcal{L}^k(G)$ for $k\ge 2$ are hyperenergetic by Theorem \ref{hyper_en_itLine}, that is, $\mathcal{E}({\mathcal{L}^k(G)})\ge 2(n_k-1)$.
The energy relation \eqref{rel_line_cpml} between $\mathcal{E}(\overline{\mathcal{L}^k(G)})$ and $\mathcal{E}({\mathcal{L}^k(G)})$ by Theorem \ref{comple_spec_enrg} is \begin{equation*} \mathcal{E}(\overline{\mathcal{L}^k(G)}) = 2\overline{\lambda}_{k(1)}+\frac{\mathcal{E}(\mathcal{L}^k(G))}{2} \ge 2\overline{\lambda}_{k(1)} +\frac{1}{2}\, 2(n_k-1) = 2\overline{\lambda}_{k(1)} + (n_k-1). \end{equation*} It is well known that $\lambda_{k(1)}+\overline{\lambda}_{k(1)}\ge n_k-1$, which implies $\overline{\lambda}_{k(1)}\ge n_k-1-\lambda_{k(1)}\ge n_k-1-\frac{n_k-1}{2} = \frac{n_k-1}{2}$ if ${\lambda}_{k(1)}\le \frac{n_k-1}{2}$. By using this we get $\mathcal{E}(\overline{\mathcal{L}^k(G)})\ge 2\cdot\frac{n_k-1}{2}+(n_k-1)=2(n_k-1)$, which completes the proof. \end{proof} \section{Other results} \begin{proposition}\label{indep_num_lin} Let $G$ be a graph of order $n_0$ and size $m_0$. If the graphs ${\mathcal{L}^k(G)}$ satisfy the property $\rho$ with $-2$ having multiplicity $m_{k-1}-n_{k-1}$ for $k\ge 1$, then $\alpha \big(\mathcal{L}^k(G)\big)\le n_{k-1}$ and $\alpha \big(\overline{\mathcal{L}^k(G)}\big)\le n^-\big(\mathcal{L}^k(G)\big)+1$. \end{proposition} \begin{proof} If $G$ is a graph of order $n_0$, it is well known that $n^-(G)\le n_0-\alpha(G)$. Using this fact for the graphs ${\mathcal{L}^k(G)}$, we get that $m_{k-1}-n_{k-1} = n_{k}-n_{k-1}\le n_k-\alpha\big(\mathcal{L}^k(G)\big)$, which implies $\alpha \big(\mathcal{L}^k(G)\big)\le n_{k-1}$ for $k\ge 1$. Again using the same fact for the graphs $\overline{\mathcal{L}^k(G)}$, we get that $n_{k-1}-1\le n_k-\alpha\big(\overline{\mathcal{L}^k(G)}\big)$, that is, $-(n_k-n_{k-1}+1)\le -\alpha\big(\overline{\mathcal{L}^k(G)}\big)$, which gives $\alpha \big(\overline{\mathcal{L}^k(G)}\big)\le n^-\big(\mathcal{L}^k(G)\big)+1$ for $k\ge 1$.
\end{proof} \noindent The following result concerns the minimum order of a graph $G$ such that both $G$ and its complement $\overline{G}$ are connected and the line graph $\mathcal{L}(G)$ satisfies the property $\rho$. \begin{theorem}\label{min_order_thm} The smallest possible order of a connected graph $G$, where its line graph satisfies the property $\rho$ and the complement graph $\overline{G}$ is also connected, is $7$. \end{theorem} \begin{proof} There are exactly $13$ non-isomorphic connected graphs of order up to $6$ whose line graphs satisfy the property \( \rho \). These graphs are $C_4, K_4, K_{3, 2}, K_5, K_{4, 2}, K_{3, 3}, K_6$, together with the graphs shown in Figure 2. \begin{figure}[h!] \centering \includegraphics[width=1.0\linewidth]{Fig2}\\ \caption{}\label{Fig2} \end{figure} \vspace{5mm} None of the graphs mentioned above have a connected complement. The only connected graph of order $7$ whose line graph satisfies the property $\rho$ and whose complement is also connected is shown in Figure 3, which completes the proof. \begin{figure}[h!] \centering \includegraphics[width=0.3\linewidth]{Fig3}\\ \caption{}\label{Fig3} \end{figure} \end{proof} \section*{Conclusion}\label{sec13} In this paper, we have described several iterated line graphs $\mathcal{L}^k(G)$ with all negative eigenvalues equal to $-2$ and discussed their energy. We also explored the spectra and the energy of their complements. Additionally, we presented a large class of equienergetic graphs, extending some of the previous results. Although we have identified many classes of iterated line graphs with all negative eigenvalues equal to $-2$, one important question remains: to characterize all graphs, not necessarily line graphs, whose negative eigenvalues are all equal to $-2$.\\ \noindent {\bf Data Availability:} There is no data associated with this article.\\ \noindent {\bf Conflicts of interest:} The authors have no conflict of interest.
\begin{thebibliography}{9} \bibitem{Cardoso} D. M. Cardoso, C. Delorme, P. Rama, Laplacian eigenvectors and eigenvalues and almost equitable partitions, European J. Combin. {\bf 28} (2007) 665--673. https://doi.org/10.1016/j.ejc.2005.03.006 \bibitem{Cardoso_Spec} D. M. Cardoso, M. A. A. de Freitas, E. A. Martins, M. Robbiano, Spectra of graphs obtained by a generalization of the join graph operation, Discrete Math. {\bf 313} (2013) 733--741. https://doi.org/10.1016/j.disc.2012.10.016 \bibitem{ChangAn} A. Chang, Some properties of the spectrum of graphs, Appl. Math. J. Chinese Univ. Ser. B. {\bf 14} (1999) 103--107. https://doi.org/10.1007/s11766-999-0061-7 \bibitem{Jump1} G. Chartrand, H. Hevia, E. B. Jarrett, M. Schultz, Subgraph distances in graphs defined by edge transfers, Discrete Math. {\bf 170} (1997) 63--79. https://doi.org/10.1016/0012-365X(95)00357-3 \bibitem{cvet_signless} D. Cvetkovi{\'c}, Signless Laplacians and line graphs, Bull. Classe des Sci. Math. Nat. {\bf 131} (2005) 85--92. \bibitem{Cvetk_book} D. M. Cvetkovi\'{c}, M. Doob, H. Sachs, Spectra of Graphs, Academic Press, New York, 1980. \bibitem{Cvetkovic_Sgen} D. Cvetkovi\'{c}, P. Rowlinson, S. Simi\'{c}, Spectral Generalizations of Line Graphs. Cambridge University Press, Cambridge, 2004. https://doi.org/10.1017/CBO9780511751752 \bibitem{Cvetten_yrs} D. Cvetkovi\'{c}, P. Rowlinson, S. Simi\'{c}, Graphs with least eigenvalue $-2$: ten years on, Linear Algebra Appl. {\bf 484} (2015) 504--539. https://doi.org/10.1016/j.laa.2015.06.012 \bibitem{Kinkar_Das} K. C. Das, S. A. Mojallal, I. Gutman, On energy of line graphs, Linear Algebra Appl. {\bf 499} (2016) 79--89. https://doi.org/10.1016/j.laa.2016.03.003 \bibitem{Leonardo} L. de Lima, V. Nikiforov, C. Oliveira, The clique number and the smallest $Q$-eigenvalue of graphs, Discrete Math. {\bf 339} (2016) 1744--1752. https://doi.org/10.1016/j.disc.2016.02.002 \bibitem{de_Lima} L. S. de Lima, C. S. Oliveira, N. M. M. de Abreu, V. 
Nikiforov, The smallest eigenvalue of the signless Laplacian, Linear Algebra Appl. {\bf 435} (2011) 2570--2584. https://doi.org/10.1016/j.laa.2011.03.059 \bibitem{Desai} M. Desai, V. Rao, A characterization of the smallest eigenvalue of a graph, J. Graph Theory {\bf 18} (1994) 181--194. https://doi.org/10.1002/jgt.3190180210 \bibitem{GodsilCD} C. D. Godsil, Algebraic Combinatorics, Chapman \& Hall Mathematics, New York, 1993. \bibitem{Gutman} I. Gutman, The energy of a graph, Ber. Math.-Statist. Sekt. Forsch. Graz {\bf 103} (1978) 1--22. \bibitem{Gutman_Kn} I. Gutman, L. Pavlovi\'{c}, The energy of some graphs with large number of edges, Bull. Cl. Sci. Math. Nat. Sci. Math. {\bf 24} (1999) 35--50. \bibitem{Gutman_line} I. Gutman, M. Robbiano, E. A. Martins, D. M. Cardoso, L. Medina, O. Rojo, Energy of line graphs, Linear Algebra Appl. {\bf 433} (2010) 1312--1323. https://doi.org/10.1016/j.laa.2010.05.009 \bibitem{Hagos} E. M. Hagos, Some results on graph spectra, Linear Algebra Appl. {\bf 356} (2002) 103--111. https://doi.org/10.1016/S0024-3795(02)00324-5 \bibitem{Harary} F. Harary, Graph Theory, Addison-Wesley Reading, 1969. \bibitem{HeChang} C. He, H. Pan, The smallest signless Laplacian eigenvalue of graphs under perturbation, Electron. J. Linear Algebra. {\bf 23} (2012) 473--482. https://doi.org/10.13001/1081-3810.1533 \bibitem{Hoffman} A. J. Hoffman, Some recent results on spectral properties of graphs, Beitr\"{a}ge zur Graphentheorie Kolloquium, Manebach (1967) pp. 75--80. \bibitem{Horn} R. A. Horn, C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 2012. \bibitem{Hyper} Y. Hou, I. Gutman, Hyperenergetic line graphs, MATCH Commun. Math. Comput. Chem. {\bf 43} (2001) 29--39. \bibitem{Hou_Xu} Y. Hou, L. Xu, Equienergetic bipartite graphs, MATCH Commun. Math. Comput. Chem. {\bf 57} (2007) 363--370. \bibitem{Turandef} F. Lazebnik, S. Tofts, An extremal property of Tur\'{a}n graphs, Electron. J. Combin. {\bf 17}, Research Paper 170 (2010). 
\bibitem{Mojallal} S. A. Mojallal, P. Hansen, On the difference of energies of a graph and its complement graph, Linear Algebra Appl. {\bf 595} (2020) 1--12. https://doi.org/10.1016/j.laa.2020.02.026 \bibitem{Nikiforov} V. Nikiforov, Remarks on the energy of regular graphs, Linear Algebra Appl. {\bf 508} (2016) 133--145. https://doi.org/10.1016/j.laa.2016.07.007 \bibitem{ramathesis} H. S. Ramane, Some Topics in Spectral Graph Theory, Ph.D. Thesis, Karnatak University, Dharwad, 2001. http://hdl.handle.net/10603/186446 \bibitem{ramane_copl_equi} H. S. Ramane, I. Gutman, H. B. Walikar, S. B. Halkarni, Equienergetic complement graphs, Kragujevac J. Sci. {\bf 27} (2005) 67--74. \bibitem{Ramane-2023} H. S. Ramane, B. Parvathalu, K. Ashoka, An upper bound for difference of energies of a graph and its complement, Examples and Counterexamples {\bf 3} (2023) 100100. https://doi.org/10.1016/j.exco.2023.100100 \bibitem{Ramane_Bp} H. S. Ramane, B. Parvathalu, D. Patil, K. Ashoka, Graphs equienergetic with their complements, MATCH Commun. Math. Comput. Chem. {\bf 82} (2019) 471--480. \bibitem{HSRamane} H. S. Ramane, H. B. Walikar, S. B. Rao, B. D. Acharya, P. R. Hampiholi, S. R. Jog, I. Gutman, Spectra and energies of iterated line graphs of regular graphs, Appl. Math. Lett. {\bf 18} (2005) 679--682. https://doi.org/10.1016/j.aml.2004.04.012 \bibitem{OscarRojo} O. Rojo, Line graph eigenvalues and line energy of caterpillars, Linear Algebra Appl. {\bf 435} (2011) 2077--2086. https://doi.org/10.1016/j.laa.2011.03.064 \bibitem{Sachs} H. Sachs, \"{U}ber Teiler, Faktoren und charakteristische Polynome von Graphen II, Wiss. Z. Tech. Hochsch. Ilmenau. {\bf 13} (1967) 405--412. \bibitem{Schwenk} A. J. Schwenk, Computing the characteristic polynomial of a graph, in: Graphs and Combinatorics, Lecture Notes in Math., pp. 153--172, Springer, Berlin, 1974. \bibitem{sage} W. A. Stein et al., Sage Mathematics Software (Version 8.9), The Sage Development Team (2019).
http://www.sagemath.org \bibitem{Teranishi} Y. Teranishi, Main eigenvalues of a graph, Linear Multilinear Algebra {\bf 49} (2001) 289--303. https://doi.org/10.1080/03081080108818702 \bibitem{VijayaKumar} G. Vijaykumar, S. B. Rao, N. M. Singhi, Graphs with eigenvalues at least $-2$, Linear Algebra Appl. {\bf 46} (1982) 27--42. https://doi.org/10.1016/0024-3795(82)90023-4 \bibitem{Wang_SL} J. Wang, F. Belardo, A note on the signless Laplacian eigenvalues of graphs, Linear Algebra Appl. {\bf 435} (2011) 2585--2590. https://doi.org/10.1016/j.laa.2011.04.004 \bibitem{WuBaoFeng} B. Wu, Y. Lou, C. He, Signless Laplacian and normalized Laplacian on the $H$-join operation of graphs, Discrete Math. Algorithms Appl. {\bf 6} (2014) 1450046. https://doi.org/10.1142/S1793830914500463 \end{thebibliography} \end{document}
2205.02196v2
http://arxiv.org/abs/2205.02196v2
On the monoid of partial isometries of a cycle graph
\documentclass[11pt]{article} \usepackage{amssymb,amsmath} \usepackage[mathscr]{eucal} \usepackage[cm]{fullpage} \usepackage[english]{babel} \usepackage[latin1]{inputenc} \def\dom{\mathop{\mathrm{Dom}}\nolimits} \def\im{\mathop{\mathrm{Im}}\nolimits} \def\d{\mathrm{d}} \def\id{\mathrm{id}} \def\N{\mathbb N} \def\PT{\mathcal{PT}} \def\T{\mathcal{T}} \def\Sym{\mathcal{S}} \def\DP{\mathcal{DP}} \def\A{\mathcal{A}} \def\B{\mathcal{B}} \def\C{\mathcal{C}} \def\D{\mathcal{D}} \def\DPS{\mathcal{DPS}} \def\DPC{\mathcal{DPC}} \def\ODP{\mathcal{ODP}} \def\PO{\mathcal{PO}} \def\POD{\mathcal{POD}} \def\POR{\mathcal{POR}} \def\I{\mathcal{I}} \def\ro{{\hspace{.2em}}\rho{\hspace{.2em}}} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newcommand{\NR}{{N\!\!R}} \newenvironment{proof}{\begin{trivlist}\item[\hskip\labelsep{\bf Proof.}]}{\qed\rm\end{trivlist}} \newcommand{\qed}{{\unskip\nobreak \hfil\penalty50\hskip .001pt \hbox{} \nobreak\hfil \vrule height 1.2ex width 1.1ex depth -.1ex \finalhyphendemerits=0\medbreak}} \newcommand{\lastpage}{\addresss} \newcommand{\addresss}{\small \sf \noindent{\sc V\'\i tor H. Fernandes}, Center for Mathematics and Applications (CMA), FCT NOVA and Department of Mathematics, FCT NOVA, Faculdade de Ci\^encias e Tecnologia, Universidade Nova de Lisboa, Monte da Caparica, 2829-516 Caparica, Portugal; e-mail: [email protected]. \medskip \noindent{\sc T\^ania Paulista}, Departamento de Matem\'atica, Faculdade de Ci\^encias e Tecnologia, Universidade NOVA de Lisboa, Monte da Caparica, 2829-516 Caparica, Portugal; e-mail: [email protected]. } \title{On the monoid of partial isometries of a cycle graph} \author{V\'\i tor H.
Fernandes\footnote{This work is funded by national funds through the FCT - Funda\c c\~ao para a Ci\^encia e a Tecnologia, I.P., under the scope of the projects UIDB/00297/2020 and UIDP/00297/2020 (Center for Mathematics and Applications).}~ and T\^ania Paulista } \begin{document} \maketitle \begin{abstract} In this paper we consider the monoid $\DPC_n$ of all partial isometries of an $n$-cycle graph $C_n$. We show that $\DPC_n$ is the submonoid of the monoid of all oriented partial permutations on an $n$-chain whose elements are precisely all restrictions of the permutations of the dihedral group of order $2n$. Our main aim is to exhibit a presentation of $\DPC_n$. We also describe Green's relations of $\DPC_n$ and calculate its cardinality and rank. \end{abstract} \medskip \noindent{\small 2020 \it Mathematics subject classification: \rm 20M20, 20M05, 05C12, 05C25.} \noindent{\small\it Keywords: \rm transformations, orientation, partial isometries, cycle graphs, rank, presentations.} \section*{Introduction}\label{presection} Let $\Omega$ be a finite set. As usual, let us denote by $\PT(\Omega)$ the monoid (under composition) of all partial transformations on $\Omega$, by $\T(\Omega)$ the submonoid of $\PT(\Omega)$ of all full transformations on $\Omega$, by $\I(\Omega)$ the \textit{symmetric inverse monoid} on $\Omega$, i.e. the inverse submonoid of $\PT(\Omega)$ of all partial permutations on $\Omega$, and by $\Sym(\Omega)$ the \textit{symmetric group} on $\Omega$, i.e. the subgroup of $\PT(\Omega)$ of all permutations on $\Omega$. \smallskip Recall that the \textit{rank} of a (finite) monoid $M$ is the minimum size of a generating set of $M$, i.e. the minimum of the set $\{|X|\mid \mbox{$X\subseteq M$ and $X$ generates $M$}\}$. Let $\Omega$ be a finite set with at least $3$ elements. It is well-known that $\Sym(\Omega)$ has rank $2$ (as a semigroup, a monoid or a group) and $\T(\Omega)$, $\I(\Omega)$ and $\PT(\Omega)$ have ranks $3$, $3$ and $4$, respectively.
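These rank facts can be confirmed by exhaustive search for a small $\Omega$. The sketch below is an illustration only (it is not from the paper): it fixes $|\Omega|=3$, encodes full transformations as tuples with $x\alpha$ written `a[x]`, and checks that two elements generate $\Sym(\Omega)$ while $\T(\Omega)$ needs three.

```python
from itertools import product

def compose(f, g):
    # (x)(fg) = ((x)f)g: apply f first, then g (transformations act on the right)
    return tuple(g[f[x]] for x in range(len(f)))

def generated_monoid(gens, n):
    # closure of gens (together with the identity) under composition
    ident = tuple(range(n))
    elems, frontier = {ident}, {ident}
    while frontier:
        frontier = {compose(a, g) for a in frontier for g in gens} - elems
        elems |= frontier
    return elems

n = 3
T_n = set(product(range(n), repeat=n))        # all 27 full transformations
S_n = {t for t in T_n if len(set(t)) == n}    # the 6 permutations

# Sym(Omega) has rank 2: a 3-cycle and a transposition suffice
assert generated_monoid([(1, 2, 0), (1, 0, 2)], n) == S_n

# T(Omega) has rank 3: adding one non-invertible map suffices ...
assert generated_monoid([(1, 2, 0), (1, 0, 2), (0, 0, 2)], n) == T_n

# ... and no pair of elements of T(Omega) generates it
assert all(generated_monoid([f, g], n) != T_n for f in T_n for g in T_n)
```

The last assertion ranges over all $27^2$ pairs, so it certifies that no generating set of size $2$ exists, i.e. the rank of $\T(\Omega)$ is exactly $3$ for $|\Omega|=3$.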
The survey \cite{Fernandes:2002survey} presents these results and similar ones for other classes of transformation monoids, in particular, for monoids of order-preserving transformations and for some of their extensions. For example, the rank of the extensively studied monoid of all order-preserving transformations of an $n$-chain is $n$, which was proved by Gomes and Howie \cite{Gomes&Howie:1992} in 1992. More recently, for instance, the papers \cite{ Araujo&al:2015, Fernandes&al:2014, Fernandes&al:2019, Fernandes&Quinteiro:2014, Fernandes&Sanwong:2014} are dedicated to the computation of the ranks of certain classes of transformation semigroups or monoids. \smallskip A \textit{monoid presentation} is an ordered pair $\langle A\mid R\rangle$, where $A$ is a set, often called an \textit{alphabet}, and $R\subseteq A^*\times A^*$ is a set of relations of the free monoid $A^*$ generated by $A$. A monoid $M$ is said to be \textit{defined by a presentation} $\langle A\mid R\rangle$ if $M$ is isomorphic to $A^*/\rho_R$, where $\rho_R$ denotes the smallest congruence on $A^*$ containing $R$. Given a finite monoid, it is clear that we can always exhibit a presentation for it, at worst by enumerating all elements from its multiplication table, but clearly this is of no interest, in general. So, by determining a presentation for a finite monoid, we mean to find in some sense a \textit{nice} presentation (e.g. with a small number of generators and relations). A presentation for the symmetric group $\Sym(\Omega)$ was determined by Moore \cite{Moore:1897} over a century ago (1897). For the full transformation monoid $\T(\Omega)$, a presentation was given in 1958 by A\u{\i}zen\v{s}tat \cite{Aizenstat:1958} in terms of a certain type of two-generator presentation for the symmetric group $\Sym(\Omega)$, plus an extra generator and seven more relations.
Presentations for the partial transformation monoid $\PT(\Omega)$ and for the symmetric inverse monoid $\I(\Omega)$ were found by Popova \cite{Popova:1961} in 1961. In 1962, A\u{\i}zen\v{s}tat \cite{Aizenstat:1962} and Popova \cite{Popova:1962} exhibited presentations for the monoids of all order-preserving transformations and of all order-preserving partial transformations of a finite chain, respectively, and from the sixties to the present day several authors have obtained presentations for many classes of monoids. See also \cite{Ruskuc:1995}, the survey \cite{Fernandes:2002survey} and, for example, \cite{Cicalo&al:2015, East:2011, Feng&al:2019, Fernandes:2000, Fernandes:2001, Fernandes&Gomes&Jesus:2004, Fernandes&Quinteiro:2016, Howie&Ruskuc:1995}. \medskip Now, let $G=(V,E)$ be a finite simple connected graph. The (\textit{geodesic}) \textit{distance} between two vertices $x$ and $y$ of $G$, denoted by $\d_G(x,y)$, is the length of a shortest path between $x$ and $y$, i.e. the number of edges in a shortest path between $x$ and $y$. Let $\alpha\in\PT(V)$. We say that $\alpha$ is a \textit{partial isometry} or \textit{distance-preserving partial transformation} of $G$ if $$ \d_G(x\alpha,y\alpha) = \d_G(x,y) , $$ for all $x,y\in\dom(\alpha)$. Denote by $\DP(G)$ the subset of $\PT(V)$ of all partial isometries of $G$. Clearly, $\DP(G)$ is a submonoid of $\PT(V)$. Moreover, as a consequence of the property $$ \d_G(x,y)=0 \quad \text{if and only if} \quad x=y, $$ for all $x,y\in V$, it immediately follows that $\DP(G)\subseteq\I(V)$. Furthermore, $\DP(G)$ is an inverse submonoid of $\I(V)$ (see \cite{Fernandes&Paulista:2022arxiv}). \smallskip Observe that, if $G=(V,E)$ is a complete graph, i.e. $E=\{\{x,y\}\mid x,y\in V, x\neq y\}$, then $\DP(G)=\I(V)$. On the other hand, for $n\in\N$, consider the undirected path $P_n$ with $n$ vertices, i.e. $$ P_n=\left(\{1,\ldots,n\},\{\{i,i+1\}\mid i=1,\ldots,n-1\}\right).
$$ Then, obviously, $\DP(P_n)$ coincides with the monoid $$ \DP_n=\{\alpha\in\I(\{1,2,\ldots,n\}) \mid |i\alpha-j\alpha|=|i-j|, \mbox{for all $i,j\in\dom(\alpha)$}\} $$ of all partial isometries on $\{1,2,\ldots,n\}$. The study of partial isometries on $\{1,2,\ldots,n\}$ was initiated by Al-Kharousi et al.~\cite{AlKharousi&Kehinde&Umar:2014,AlKharousi&Kehinde&Umar:2016}. The first of these two papers is dedicated to investigating some combinatorial properties of the monoid $\DP_n$ and of its submonoid $\ODP_n$ of all order-preserving (considering the usual order of $\N$) partial isometries, in particular, their cardinalities. The second paper presents the study of some of their algebraic properties, namely Green's structure and ranks. Presentations for both the monoids $\DP_n$ and $\ODP_n$ were given by the first author and Quinteiro in \cite{Fernandes&Quinteiro:2016}. The monoid $\DPS_n$ of all partial isometries of a star graph with $n$ vertices ($n\geqslant1$) was considered by the authors in \cite{Fernandes&Paulista:2022arxiv}. They determined the rank and size of $\DPS_n$ and described its Green's relations. A presentation for $\DPS_n$ was also exhibited in \cite{Fernandes&Paulista:2022arxiv}. \smallskip Now, for $n\geqslant3$, consider the \textit{cycle graph} $$ C_n=(\{1,2,\ldots, n\}, \{\{i,i+1\}\mid i=1,2,\ldots,n-1\}\cup\{\{1,n\}\}) $$ with $n$ vertices. Notice that cycle graphs and cycle subgraphs play a fundamental role in Graph Theory. \smallskip This paper is devoted to studying the monoid $\mathcal{DP}(C_n)$ of all partial isometries of $C_n$, which from now on we denote simply by $\DPC_n$. Observe that $\DPC_n$ is an inverse submonoid of the symmetric inverse monoid $\I_n$. \smallskip In Section \ref{basics} we start by giving a key characterization of $\DPC_n$, which allows for significantly simpler proofs of various results presented later.
Also in this section, a description of the Green's relations of $\DPC_n$ is given and the rank and the cardinality of $\DPC_n$ are calculated. Finally, in Section \ref{presenta}, we determine a presentation for the monoid $\DPC_n$ on $n+2$ generators, from which we deduce another presentation for $\DPC_n$ on $3$ generators. \smallskip For general background on Semigroup Theory and standard notations, we refer to Howie's book \cite{Howie:1995}. \smallskip We would like to point out that we made use of computational tools, namely GAP \cite{GAP4}. \section{Some properties of $\DPC_n$} \label{basics} We begin this section by introducing some concepts and notation. For $n\in\N$, let $\Omega_n$ be a set with $n$ elements. As usual, we denote $\PT(\Omega_n)$, $\I(\Omega_n)$ and $\Sym(\Omega_n)$ simply by $\PT_n$, $\I_n$ and $\Sym_n$, respectively. Let $\alpha\in\PT_n$. Recall that the \textit{rank} of $\alpha$ is the size of $\im(\alpha)$. Next, suppose that $\Omega_n$ is a chain, e.g. $\Omega_n=\{1<2<\cdots<n\}$. A partial transformation $\alpha\in\PT_n$ is called \textit{order-preserving} [\textit{order-reversing}] if $x\leqslant y$ implies $x\alpha\leqslant y\alpha$ [$x\alpha\geqslant y\alpha$], for all $x,y \in \dom(\alpha)$. It is clear that the product of two order-preserving or of two order-reversing transformations is order-preserving and the product of an order-preserving transformation by an order-reversing transformation, or vice-versa, is order-reversing. We denote by $\POD_n$ the submonoid of $\PT_n$ whose elements are all order-preserving or order-reversing transformations. Let $s=(a_1,a_2,\ldots,a_t)$ be a sequence of $t$ ($t\geqslant0$) elements from the chain $\Omega_n$. We say that $s$ is \textit{cyclic} [\textit{anti-cyclic}] if there exists no more than one index $i\in\{1,\ldots,t\}$ such that $a_i>a_{i+1}$ [$a_i<a_{i+1}$], where $a_{t+1}$ denotes $a_1$.
Notice that, the sequence $s$ is cyclic [anti-cyclic] if and only if $s$ is empty or there exists $i\in\{0,1,\ldots,t-1\}$ such that $a_{i+1}\leqslant a_{i+2}\leqslant \cdots\leqslant a_t\leqslant a_1\leqslant \cdots\leqslant a_i $ [$a_{i+1}\geqslant a_{i+2}\geqslant \cdots\geqslant a_t\geqslant a_1\geqslant \cdots\geqslant a_i $] (the index $i\in\{0,1,\ldots,t-1\}$ is unique unless $s$ is constant and $t\geqslant2$). We also say that $s$ is \textit{oriented} if $s$ is cyclic or $s$ is anti-cyclic. See \cite{Catarino&Higgins:1999,Higgins&Vernitski:2022,McAlister:1998}. Given a partial transformation $\alpha\in\PT_n$ such that $\dom(\alpha)=\{a_1<\cdots<a_t\}$, with $t\geqslant0$, we say that $\alpha$ is \textit{orientation-preserving} [\textit{orientation-reversing}, \textit{oriented}] if the sequence of its images $(a_1\alpha,\ldots,a_t\alpha)$ is cyclic [anti-cyclic, oriented]. It is easy to show that the product of two orientation-preserving or of two orientation-reversing transformations is orientation-preserving and the product of an orientation-preserving transformation by an orientation-reversing transformation, or vice-versa, is orientation-reversing. We denote by $\POR_n$ the submonoid of $\PT_n$ of all oriented transformations. Notice that $\POD_n\cap\I_n$ and $\POR_n\cap\I_n$ are inverse submonoids of $\I_n$. \smallskip Let us consider the following permutations of $\Omega_n$ of order $n$ and $2$, respectively: $$ g=\begin{pmatrix} 1&2&\cdots&n-1&n\\ 2&3&\cdots&n&1 \end{pmatrix} \quad\text{and}\quad h=\begin{pmatrix} 1&2&\cdots&n-1&n\\ n&n-1&\cdots&2&1 \end{pmatrix}. $$ It is clear that $g,h\in\POR_n\cap\I_n$. Moreover, for $n\geqslant3$, $g$ together with $h$ generate the well-known \textit{dihedral group} $\D_{2n}$ of order $2n$ (considered as a subgroup of $\Sym_n$). 
In fact, for $n\geqslant3$, $$ \D_{2n}=\langle g,h\mid g^n=1,h^2=1, hg=g^{n-1}h\rangle=\{1,g,g^2,\ldots,g^{n-1}, h,hg,hg^2,\ldots,hg^{n-1}\} $$ and we have $$ g^k=\begin{pmatrix} 1&2&\cdots&n-k&n-k+1&\cdots&n\\ 1+k&2+k&\cdots&n&1&\cdots&k \end{pmatrix}, \quad\text{i.e.}\quad ig^k=\left\{\begin{array}{lc} i+k & 1\leqslant i\leqslant n-k\\ i+k-n & n-k+1\leqslant i\leqslant n , \end{array}\right. $$ and $$ hg^k=\begin{pmatrix} 1&\cdots&k&k+1&\cdots&n\\ k&\cdots&1&n&\cdots&k+1 \end{pmatrix}, \quad\text{i.e.}\quad ihg^k=\left\{\begin{array}{lc} k-i+1 & 1\leqslant i\leqslant k\\ n+k-i+1 & k+1\leqslant i\leqslant n , \end{array}\right. $$ for $0\leqslant k\leqslant n-1$. Observe that, for $n\in\{1,2\}$, the dihedral group $\D_{2n}=\langle g,h\mid g^n=1, h^2=1, hg=g^{n-1}h\rangle$ of order $2n$ (also known as the \textit{Klein four-group} for $n=2$) cannot be considered as a subgroup of $\Sym_n$. Denote also by $\C_n$ the \textit{cyclic group} of order $n$ generated by $g$, i.e. $\C_n=\{1,g,g^2,\ldots,g^{n-1}\}$. \medskip Until the end of this paper, we will consider $n\geqslant3$. \smallskip Now, notice that, clearly, we have $$ \d_{C_n}(x,y)=\min \{|x-y|,n-|x-y|\} = \left\{ \begin{array}{ll} |x-y| &\mbox{if $|x-y|\leqslant\frac{n}{2}$}\\ n-|x-y| &\mbox{if $|x-y|>\frac{n}{2}$} \end{array} \right. $$ and so $0\leqslant\d_{C_n}(x,y)\leqslant\frac{n}{2}$, for all $x,y \in \{1,2,\ldots,n\}$. From now on, for any two vertices $x$ and $y$ of $C_n$, we denote the distance $\d_{C_n}(x,y)$ simply by $\d(x,y)$. Let $x,y \in \{1,2,\ldots,n\}$. Observe that $$ \d(x,y)=\frac{n}{2} \quad\Leftrightarrow\quad |x-y|=\frac{n}{2} \quad\Leftrightarrow\quad n-|x-y|=\displaystyle\frac{n}{2} \quad\Leftrightarrow\quad |x-y|=n-|x-y|, $$ in which case $n$ is even, and \begin{equation}\label{d1} |\left\{z\in \{1,2,\ldots,n\}\mid \d(x,z)=d\right\}|= \left\{ \begin{array}{ll} 1 &\mbox{if $d=\frac{n}{2}$}\\ 2 &\mbox{if $d<\frac{n}{2}$,} \end{array} \right. 
\end{equation} for all $1\leqslant d \leqslant\frac{n}{2}$. Moreover, it is a routine matter to show that $$ D=\left\{z\in \{1,2,\ldots,n\}\mid \d(x,z)=d\right\}=\left\{z\in \{1,2,\ldots,n\}\mid \d(y,z)=d'\right\} $$ implies \begin{equation}\label{d2} \d(x,y)=\left\{ \begin{array}{ll} \mbox{$0$ (i.e. $x=y$)} &\mbox{if $|D|=1$}\\ \frac{n}{2} &\mbox{if $|D|=2$,} \end{array} \right. \end{equation} for all $1\leqslant d,d' \leqslant\frac{n}{2}$. \medskip Recall that $\DP_n$ is an inverse submonoid of $\POD_n\cap\I_n$. This is an easy fact to prove and was observed by Al-Kharousi et al. in \cite{AlKharousi&Kehinde&Umar:2014,AlKharousi&Kehinde&Umar:2016}. A similar result is also valid for $\DPC_n$ and $\POR_n\cap\I_n$, as we will deduce below. First, notice that, it is easy to show that both permutations $g$ and $h$ of $\Omega_n$ belong to $\DPC_n$ and so the dihedral group $\D_{2n}$ is contained in $\DPC_n$. Furthermore, as we prove next, the elements of $\DPC_n$ are precisely the restrictions of the permutations of the dihedral group $\D_{2n}$. This is a key characterization of $\DPC_n$ that will allow us to prove in a simpler way some of the results that we present later in this paper. Observe that $$ \alpha=\sigma|_{\dom(\alpha)} \quad\Leftrightarrow\quad \alpha=\id_{\dom(\alpha)} \sigma \quad\Leftrightarrow\quad \alpha=\sigma\id_{\im(\alpha)}, $$ for all $\alpha\in\PT_n$ and $\sigma\in\I_n$. \begin{lemma}\label{fundlemma} Let $\alpha \in \PT_n$. Then $\alpha \in\DPC_n$ if and only if there exists $\sigma \in \D_{2n}$ such that $\alpha=\sigma|_{\dom(\alpha)}$. 
Furthermore, for $\alpha \in \DPC_n$, one has: \begin{enumerate} \item If either $|\dom(\alpha)|= 1$ or $|\dom(\alpha)|= 2$ and $\d(\min \dom(\alpha),\max \dom(\alpha))=\frac{n}{2}$ (in which case $n$ is even), then there exist exactly two (distinct) permutations $\sigma,\sigma' \in\D_{2n}$ such that $\alpha= \sigma|_{\dom(\alpha)} = \sigma'|_{\dom(\alpha)}$; \item If either $|\dom(\alpha)|= 2$ and $\d(\min \dom(\alpha),\max \dom(\alpha)) \neq \frac{n}{2}$ or $|\dom(\alpha)|\geqslant 3$, then there exists exactly one permutation $\sigma \in\mathcal{D}_{2n}$ such that $\alpha= \sigma|_{\dom(\alpha)}$. \end{enumerate} \end{lemma} \begin{proof} Let $\alpha \in \PT_n$. \smallskip If $\alpha=\sigma|_{\dom(\alpha)}$, for some $\sigma \in \D_{2n}$, then $\alpha\in\DPC_n$, since $\D_{2n}\subseteq\DPC_n$ and, clearly, any restriction of an element of $\DPC_n$ also belongs to $\DPC_n$. \smallskip Conversely, let us suppose that $\alpha\in\DPC_n$. First, observe that, for each pair $1\leqslant i,j\leqslant n$, there exists a unique $k\in\{0,1,\ldots,n-1\}$ such that $ig^k=j$ and there exists a unique $\ell\in\{0,1,\ldots,n-1\}$ such that $ihg^\ell=j$. In fact, for $1\leqslant i,j\leqslant n$ and $k,\ell\in\{0,1,\ldots,n-1\}$, it is easy to show that: \begin{description} \item if $i\leqslant j$ then $ig^k=j$ if and only if $k=j-i$; \item if $i>j$ then $ig^k=j$ if and only if $k=n+j-i$; \item if $i+j\leqslant n$ then $ihg^\ell=j$ if and only if $\ell=i+j-1$; \item if $i+j > n$ then $ihg^\ell=j$ if and only if $\ell=i+j-1-n$. 
\end{description} Therefore, we may conclude immediately that: \begin{enumerate} \item any nonempty transformation of $\DPC_n$ has at most two extensions in $\D_{2n}$ and, if there are two distinct ones, one must be an orientation-preserving transformation and the other an orientation-reversing transformation; \item any transformation of $\DPC_n$ with rank $1$ has two distinct extensions in $\D_{2n}$ (one being an orientation-preserving transformation and the other an orientation-reversing transformation). \end{enumerate} Notice that, as $g^n=g^{-n}=1$, we also have $ig^{j-i}=j$ and $ihg^{i+j-1}=j$, for all $1\leqslant i,j\leqslant n$. \smallskip Next, suppose that $\dom(\alpha)=\{i_1,i_2\}$. Then, there exist $\sigma\in\C_n$ and $\xi\in\D_{2n}\setminus\C_n$ (both unique) such that $i_1\sigma=i_1\alpha=i_1\xi$. Take $D=\left\{z\in \{1,2,\ldots,n\}\mid \d(i_1\alpha,z)=\d(i_1,i_2)\right\}$. Then $1\leqslant |D|\leqslant 2$ and $i_2\alpha,i_2\sigma,i_2\xi\in D$. Suppose that $i_2\sigma=i_2\xi$ and let $j_1=i_1\sigma$ and $j_2=i_2\sigma$. Then $\sigma=g^{j_1-i_1}=g^{j_2-i_2}$ and $\xi=hg^{i_1+j_1-1}=hg^{i_2+j_2-1}$. Hence, we have $j_1-i_1=j_2-i_2$ or $j_1-i_1=j_2-i_2\pm n$, from the first equality, and $i_1+j_1=i_2+j_2$ or $i_1+j_1=i_2+j_2\pm n$, from the second. Since $i_1\neq i_2$ and $i_2-i_1\neq n$, it is a routine matter to conclude that the only possibility is to have $i_2-i_1=\frac{n}{2}$ (in which case $n$ is even). Thus $\d(i_1,i_2)=\frac{n}{2}$. By (\ref{d1}) it follows that $|D|=1$ and so $i_2\alpha=i_2\sigma=i_2\xi$, i.e. $\alpha$ is extended by both $\sigma$ and $\xi$. If $i_2\sigma\neq i_2\xi$ then $|D|=2$ (whence $\d(i_1,i_2)<\frac{n}{2}$) and so either $i_2\alpha=i_2\sigma$ or $i_2\alpha=i_2\xi$. In this case, $\alpha$ is extended by exactly one permutation of $\D_{2n}$. \smallskip Now, suppose that $\dom(\alpha)=\{i_1<i_2<\cdots <i_k\}$, for some $3\leqslant k\leqslant n-1$.
Since $\sum_{p=1}^{k-1}(i_{p+1}-i_p) = i_k-i_1<n$, there exists at most one index $1\leqslant p\leqslant k-1$ such that $i_{p+1}-i_p\geqslant\frac{n}{2}$. Therefore, we may take $i,j\in\dom(\alpha)$ such that $i\neq j$ and $\d(i,j)\neq\frac{n}{2}$ and so, as $\alpha|_{\{i,j\}}\in\DPC_n$, by the above deductions, there exists a unique $\sigma\in\D_{2n}$ such that $\sigma|_{\{i,j\}}=\alpha|_{\{i,j\}}$. Let $\ell\in\dom(\alpha)\setminus\{i,j\}$. Then $$ \ell\alpha,\ell\sigma\in \left\{z\in \{1,2,\ldots,n\}\mid \d(i\alpha,z)=\d(i,\ell)\right\}\cap\left\{z\in \{1,2,\ldots,n\}\mid \d(j\alpha,z)=\d(j,\ell)\right\}. $$ In order to obtain a contradiction, suppose that $\ell\alpha\neq\ell\sigma$. Therefore, by (\ref{d1}), we have $$ \left\{z\in \{1,2,\ldots,n\}\mid \d(i\alpha,z)=\d(i,\ell)\right\} = \left\{\ell\alpha,\ell\sigma\right\}= \left\{z\in \{1,2,\ldots,n\}\mid \d(j\alpha,z)=\d(j,\ell)\right\} $$ and so, by (\ref{d2}), $\d(i,j)=\d(i\alpha,j\alpha)=\frac{n}{2}$, which is a contradiction. Hence $\ell\alpha=\ell\sigma$. Thus $\sigma$ is the unique permutation of $\D_{2n}$ such that $\alpha= \sigma|_{\dom(\alpha)}$, as required. \end{proof} Bearing in mind the previous lemma, it seems appropriate to call $\DPC_n$ the \textit{dihedral inverse monoid} on $\Omega_n$. \smallskip Since $\D_{2n}\subseteq\POR_n\cap\I_n$, which contains all the restrictions of its elements, we have immediately: \begin{corollary}\label{dpcpopi} The monoid $\DPC_n$ is contained in $\POR_n\cap\I_n$. \end{corollary} Observe that, as $\D_{2n}$ is the group of units of $\POR_n\cap\I_n$ (see \cite{Fernandes&Gomes&Jesus:2004,Fernandes&Gomes&Jesus:2009}), $\D_{2n}$ must also be the group of units of $\DPC_n$.
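For small $n$ the lemma, including its extension counts, can be confirmed by exhaustive search. The following sketch is an illustration only (it is not part of the paper); vertices are renumbered $0,\ldots,n-1$, so $g$ becomes $i\mapsto i+1 \pmod n$ and the reflections become $i\mapsto k-i \pmod n$.

```python
from itertools import combinations, permutations

def dist(x, y, n):
    # geodesic distance in the cycle graph C_n on vertices 0, ..., n-1
    d = abs(x - y)
    return min(d, n - d)

def dihedral(n):
    # D_2n as tuples: the n rotations i -> i+k and the n reflections i -> k-i (mod n)
    return ([tuple((i + k) % n for i in range(n)) for k in range(n)] +
            [tuple((k - i) % n for i in range(n)) for k in range(n)])

def partial_isometries(n):
    # DPC_n by brute force: all partial injections preserving the cycle distance
    found = [{}]
    for k in range(1, n + 1):
        for dom in combinations(range(n), k):
            for img in permutations(range(n), k):
                a = dict(zip(dom, img))
                if all(dist(a[x], a[y], n) == dist(x, y, n)
                       for x, y in combinations(dom, 2)):
                    found.append(a)
    return found

n = 6  # even, so the d(x, y) = n/2 case of the lemma occurs
D = dihedral(n)
for a in partial_isometries(n):
    exts = [s for s in D if all(s[x] == y for x, y in a.items())]
    assert exts                   # every partial isometry extends to D_2n
    if len(a) >= 3:
        assert len(exts) == 1     # case 2 of the lemma
    elif len(a) == 2:
        x, y = a
        assert len(exts) == (2 if dist(x, y, n) == n // 2 else 1)
    elif len(a) == 1:
        assert len(exts) == 2     # case 1 of the lemma
```

Running the same loop with an odd $n$ exercises the complementary branch, where every rank-$2$ element has exactly one extension.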
\medskip Next, recall that, given an inverse submonoid $M$ of $\I_n$, it is well known that the Green's relations $\mathscr{L}$, $\mathscr{R}$ and $\mathscr{H}$ of $M$ can be described as follows: for $\alpha, \beta \in M$, \begin{itemize} \item $\alpha \mathscr{L} \beta$ if and only if $\im(\alpha) = \im(\beta)$; \item $\alpha \mathscr{R} \beta$ if and only if $\dom(\alpha) = \dom(\beta)$; \item $\alpha \mathscr{H} \beta $ if and only if $\im(\alpha) = \im(\beta)$ and $\dom(\alpha) = \dom(\beta)$. \end{itemize} In $\I_n$ we also have \begin{itemize} \item $\alpha \mathscr{J} \beta$ if and only if $|\dom(\alpha)| = |\dom(\beta)|$ (if and only if $|\im(\alpha)| = |\im(\beta)|$). \end{itemize} Since $\DPC_n$ is an inverse submonoid of $\I_n$, it remains to describe its Green's relation $\mathscr{J}$. In fact, it is a routine matter to show that: \begin{proposition} \label{greenJ} Let $\alpha, \beta \in \DPC_n$. Then $\alpha \mathscr{J} \beta$ if and only if one of the following properties is satisfied: \begin{enumerate} \item $|\dom(\alpha)|=|\dom(\beta)|\leqslant1$; \item $|\dom(\alpha)|=|\dom(\beta)|=2$ and $\d(i_1,i_2)=\d(i'_1,i'_2)$, where $\dom(\alpha)=\{i_1,i_2\}$ and $\dom(\beta)=\{i'_1,i'_2\}$; \item $|\dom(\alpha)|=|\dom(\beta)|=k\geqslant3$ and there exists $\sigma\in\D_{2k}$ such that $$ \begin{pmatrix} i'_1&i'_2&\cdots&i'_k\\ i_{1\sigma}&i_{2\sigma}&\cdots&i_{k\sigma} \end{pmatrix} \in\DPC_n, $$ where $\dom(\alpha)=\{i_1<i_2<\dots<i_k\}$ and $\dom(\beta)=\{i'_1<i'_2<\cdots<i'_k\}$. \end{enumerate} \end{proposition} An alternative description of $\mathscr{J}$ can be found in the second author's M.Sc.~thesis \cite{Paulista:2022}. \medskip Next, we count the number of elements of $\DPC_n$. \begin{theorem} One has $|\DPC_n| = n2^{n+1}-\frac{(-1)^n+5}{4}n^2-2n+1$. \end{theorem} \begin{proof} Let $\A_i=\{\alpha\in\DPC_n\mid |\dom(\alpha)|=i\}$, for $i=0,1,\ldots,n$.
Since the sets $\A_0,\A_1,\ldots,\A_n$ are pairwise disjoint, we get $|\DPC_n|=\sum_{i=0}^{n} |\A_i|$. Clearly, $\A_0=\{\emptyset\}$ and $\A_1=\{\binom{i}{j}\mid 1\leqslant i,j\leqslant n\}$, whence $|\A_0|=1$ and $|\A_1|=n^2$. Moreover, for $i\geqslant3$, by Lemma \ref{fundlemma}, we have as many elements in $\A_i$ as there are restrictions of rank $i$ of permutations of $\D_{2n}$, i.e. we have $\binom{n}{i}$ distinct elements of $\A_i$ for each permutation of $\D_{2n}$, whence $|\A_i|=2n\binom{n}{i}$. Similarly, for odd $n$, by Lemma \ref{fundlemma}, we have $|\A_2|=2n\binom{n}{2}$. On the other hand, if $n$ is even, also by Lemma \ref{fundlemma}, we have as many elements in $\A_2$ as there are restrictions of rank $2$ of permutations of $\D_{2n}$ minus the number of elements of $\A_2$ that have two distinct extensions in $\D_{2n}$, i.e. $|\A_2|=2n\binom{n}{2}-|\B_2|$, where $$ \B_2=\{\alpha\in\DPC_n\mid \mbox{$|\dom(\alpha)|=2$ and $\d(\min \dom(\alpha),\max \dom(\alpha))=\frac{n}{2}$}\}. $$ It is easy to check that $$ \B_2=\left\{ \begin{pmatrix} i&i+\frac{n}{2}\\ j&j+\frac{n}{2} \end{pmatrix}, \begin{pmatrix} i&i+\frac{n}{2}\\ j+\frac{n}{2}&j \end{pmatrix} \mid 1\leqslant i,j\leqslant \frac{n}{2} \right\}, $$ whence $|\B_2|=2(\frac{n}{2})^2=\frac{1}{2}n^2$. Therefore $$ |\DPC_n|= \left\{\begin{array}{ll} 1+n^2+2n\sum_{i=2}^{n}\binom{n}{i} & \mbox{if $n$ is odd} \\\\ 1+n^2+2n\sum_{i=2}^{n}\binom{n}{i} -\frac{1}{2}n^2 & \mbox{if $n$ is even} \end{array}\right. = \left\{\begin{array}{ll} n2^{n+1}-n^2-2n+1 & \mbox{if $n$ is odd} \\\\ n2^{n+1}-\frac{3}{2}n^2-2n+1 & \mbox{if $n$ is even}, \end{array}\right. $$ as required. \end{proof} \medskip We finish this section by deducing that $\DPC_n$ has rank $3$. Let $$ e_i=\id_{\Omega_n\setminus\{i\}}= \begin{pmatrix} 1&\cdots&i-1&i+1&\cdots&n\\ 1&\cdots&i-1&i+1&\cdots&n \end{pmatrix}\in\DPC_n, $$ for $i=1,2,\ldots,n$.
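The closed formula for $|\DPC_n|$ obtained above can be confirmed by exhaustive enumeration for small $n$, viewing $\DPC_n$ as the set of injective partial maps of $\{1,\ldots,n\}$ preserving the cycle distance $\d$. The sketch below, with naming of our own, is intended only as a sanity check.

```python
from itertools import combinations, permutations

def d(i, j, n):
    # distance on the cycle with vertex set {1, ..., n}
    return min(abs(i - j), n - abs(i - j))

def dpc_size(n):
    # count the injective partial maps on {1, ..., n} preserving d
    pts = range(1, n + 1)
    count = 0
    for k in range(n + 1):
        for dom in combinations(pts, k):
            for img in permutations(pts, k):
                if all(d(img[a], img[b], n) == d(dom[a], dom[b], n)
                       for a in range(k) for b in range(a + 1, k)):
                    count += 1
    return count

def formula(n):
    # n 2^{n+1} - ((-1)^n + 5) n^2 / 4 - 2n + 1 (always an integer)
    return n * 2 ** (n + 1) - ((-1) ** n + 5) * n * n // 4 - 2 * n + 1

assert all(dpc_size(n) == formula(n) for n in range(1, 7))
```

For instance, $|\DPC_3|=34$ and $|\DPC_4|=97$, matching both the enumeration and the formula.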
Clearly, for $1\leqslant i,j\leqslant n$, we have $e_i^2=e_i$ and $e_ie_j=\id_{\Omega_n\setminus\{i,j\}}=e_je_i$. More generally, for any $X\subseteq\Omega_n$, we get $\prod_{i\in X}e_i=\id_{\Omega_n\setminus X}$. Now, take $\alpha\in\DPC_n$. Then, by Lemma \ref{fundlemma}, $\alpha=h^ig^j|_{\dom(\alpha)}$, for some $i\in\{0,1\}$ and $j\in\{0,1,\ldots,n-1\}$. Hence $\alpha=h^ig^j\id_{\im(\alpha)}=h^ig^j\prod_{k\in\Omega_n\setminus\im(\alpha)}e_k$. Therefore $ \{g,h,e_1,e_2,\ldots,e_n\} $ is a generating set of $\DPC_n$. Since $e_j=hg^{j-1}e_nhg^{j-1}$ for all $j\in\{1,2,\ldots,n\}$, it follows that $\{g,h,e_n\}$ is also a generating set of $\DPC_n$. As $\D_{2n}$ is the group of units of $\DPC_n$, which is a group of rank $2$, the monoid $\DPC_n$ cannot be generated by fewer than three elements. So, we have: \begin{theorem} The rank of the monoid $\DPC_n$ is $3$. \end{theorem} \section{Presentations for $\DPC_n$}\label{presenta} In this section, we aim to determine a presentation for $\DPC_n$. In fact, we first determine a presentation of $\DPC_n$ on $n+2$ generators and then, by applying \textit{Tietze transformations}, we deduce a presentation for $\DPC_n$ on $3$ generators. \smallskip We begin this section by recalling some notions related to the concept of a monoid presentation. \smallskip Let $A$ be an alphabet and consider the free monoid $A^*$ generated by $A$. The elements of $A$ and of $A^*$ are called \textit{letters} and \textit{words}, respectively. The empty word is denoted by $1$ and we write $A^+$ to express $A^*\setminus\{1\}$. A pair $(u,v)$ of $A^*\times A^*$ is called a \textit{relation} of $A^*$ and it is usually represented by $u=v$. To avoid confusion, given $u, v\in A^*$, we will write $u\equiv v$, instead of $u=v$, whenever we want to state precisely that $u$ and $v$ are identical words of $A^*$. Let $R\subseteq A^*\times A^*$ and denote by $\rho_R$ the smallest congruence on $A^*$ containing $R$; the quotient monoid $A^*/\rho_R$ is said to be \textit{defined by the presentation} $\langle A\mid R\rangle$. A relation $u=v$ of $A^*$ is said to be a \textit{consequence} of $R$ if $u{\hspace{.11em}}\rho_R{\hspace{.11em}}v$.
Let $M$ be a monoid, let $X$ be a generating set of $M$ and let $\phi: A\longrightarrow M$ be an injective mapping such that $A\phi=X$. Let $\varphi: A^*\longrightarrow M$ be the (surjective) homomorphism of monoids that extends $\phi$ to $A^*$. We say that $X$ satisfies (via $\varphi$) a relation $u=v$ of $A^*$ if $u\varphi=v\varphi$. For more details see \cite{Lallement:1979} or \cite{Ruskuc:1995}. A direct method to find a presentation for a monoid is described by the following well-known result (e.g. see \cite[Proposition 1.2.3]{Ruskuc:1995}). \begin{proposition}\label{provingpresentation} Let $M$ be a monoid generated by a set $X$, let $A$ be an alphabet and let $\phi: A\longrightarrow M$ be an injective mapping such that $A\phi=X$. Let $\varphi:A^*\longrightarrow M$ be the (surjective) homomorphism that extends $\phi$ to $A^*$ and let $R\subseteq A^*\times A^*$. Then $\langle A\mid R\rangle$ is a presentation for $M$ if and only if the following two conditions are satisfied: \begin{enumerate} \item The generating set $X$ of $M$ satisfies (via $\varphi$) all the relations from $R$; \item If $u,v\in A^*$ are any two words such that the generating set $X$ of $M$ satisfies (via $\varphi$) the relation $u=v$ then $u=v$ is a consequence of $R$. \end{enumerate} \end{proposition} \smallskip Given a presentation for a monoid, another method to find a new presentation consists in applying Tietze transformations.
For a monoid presentation $\langle A\mid R\rangle$, the four \emph{elementary Tietze transformations} are: \begin{description} \item(T1) Adding a new relation $u=v$ to $\langle A\mid R\rangle$, provided that $u=v$ is a consequence of $R$; \item(T2) Deleting a relation $u=v$ from $\langle A\mid R\rangle$, provided that $u=v$ is a consequence of $R\backslash\{u=v\}$; \item(T3) Adding a new generating symbol $b$ and a new relation $b=w$, where $w\in A^*$; \item(T4) If $\langle A\mid R\rangle$ possesses a relation of the form $b=w$, where $b\in A$, and $w\in(A\backslash\{b\})^*$, then deleting $b$ from the list of generating symbols, deleting the relation $b=w$, and replacing all remaining appearances of $b$ by $w$. \end{description} The next result is well-known (e.g. see \cite{Ruskuc:1995}): \begin{proposition} \label{tietze} Two finite presentations define the same monoid if and only if one can be obtained from the other by a finite number of elementary Tietze transformations $(T1)$, $(T2)$, $(T3)$ and $(T4)$. \end{proposition} \medskip Now, consider the alphabet $A=\{g,h,e_1,e_2,\ldots,e_n\}$ and the set $R$ formed by the following monoid relations: \begin{description} \item $(R_1)$ $g^n=1$, $h^2=1$ and $hg=g^{n-1}h$; \item $(R_2)$ $e_i^2=e_i$, for $1\leqslant i\leqslant n$; \item $(R_3)$ $e_ie_j=e_je_i$, for $1\leqslant i<j\leqslant n$; \item $(R_4)$ $ge_1=e_ng$ and $ge_{i+1}=e_ig$, for $1\leqslant i\leqslant n-1$; \item $(R_5)$ $he_i=e_{n-i+1}h$, for $1\leqslant i\leqslant n$; \item $(R_6^\text{o})$ $hge_2e_3\cdots e_n=e_2e_3\cdots e_n$, if $n$ is odd; \item $(R_6^\text{e})$ $hge_2\cdots e_{\frac{n}{2}}e_{\frac{n}{2}+2}\cdots e_n=e_2\cdots e_{\frac{n}{2}}e_{\frac{n}{2}+2}\cdots e_n$ and $he_1e_2\cdots e_n=e_1e_2\cdots e_n$, if $n$ is even. \end{description} Observe that $|R|=\frac{n^2+5n+9+(-1)^n}{2}$. \smallskip We aim to show that the monoid $\DPC_n$ is defined by the presentation $\langle A \mid R\rangle$.
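That the concrete maps $g$, $h$, $e_1,\ldots,e_n$ of $\DPC_n$ satisfy all the relations in $R$, and also the identity $e_j=hg^{j-1}e_nhg^{j-1}$ used above, can be verified numerically for small $n$. The sketch below is a sanity check of our own (not part of the proof); partial maps are dictionaries and, as in the text, maps act on the right, so words are composed left to right.

```python
from functools import reduce

def compose(a, b):
    # maps act on the right: i(ab) = (ia)b; composition of partial maps
    return {i: b[a[i]] for i in a if a[i] in b}

def word(n, *maps):
    # product of a sequence of partial maps on {1, ..., n}
    return reduce(compose, maps, {i: i for i in range(1, n + 1)})

def check_R(n):
    g = {i: i % n + 1 for i in range(1, n + 1)}          # i -> i + 1 (mod n)
    h = {i: n - i + 1 for i in range(1, n + 1)}          # reflection
    e = [None] + [{i: i for i in range(1, n + 1) if i != j}
                  for j in range(1, n + 1)]              # e[j] = e_j
    W = lambda *m: word(n, *m)
    assert W(*[g] * n) == W() and W(h, h) == W()                          # R1
    assert W(h, g) == W(*[g] * (n - 1), h)                                # R1
    assert all(W(e[i], e[i]) == e[i] for i in range(1, n + 1))            # R2
    assert all(W(e[i], e[j]) == W(e[j], e[i])
               for i in range(1, n + 1) for j in range(i + 1, n + 1))     # R3
    assert W(g, e[1]) == W(e[n], g)                                       # R4
    assert all(W(g, e[i + 1]) == W(e[i], g) for i in range(1, n))         # R4
    assert all(W(h, e[i]) == W(e[n - i + 1], h) for i in range(1, n + 1)) # R5
    if n % 2 == 1:
        tail = [e[i] for i in range(2, n + 1)]                            # R6 odd
    else:
        tail = [e[i] for i in range(2, n + 1) if i != n // 2 + 1]         # R6 even
        full = [e[i] for i in range(1, n + 1)]
        assert W(h, *full) == W(*full)
    assert W(h, g, *tail) == W(*tail)
    # the identity e_j = h g^{j-1} e_n h g^{j-1}
    for j in range(1, n + 1):
        gj = [g] * (j - 1)
        assert e[j] == W(h, *gj, e[n], h, *gj)

for n in (3, 4, 5, 6, 7):
    check_R(n)
```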
\smallskip Let $\phi:A\longrightarrow \DPC_n$ be the mapping defined by $$ g\phi=g ,\quad h\phi=h ,\quad e_i\phi=e_i, \mbox{~for $1\leqslant i\leqslant n$}, $$ and let $\varphi:A^*\longrightarrow \DPC_n$ be the homomorphism of monoids that extends $\phi$ to $A^*$. Notice that we are using the same symbols for the letters of the alphabet $A$ and for the generating set of $\DPC_n$, which simplifies notation and, within the context, will not cause ambiguity. \smallskip It is a routine matter to check the following lemma. \begin{lemma}\label{genrel} The set of generators $\{g,h,e_1,e_2,\ldots,e_n\}$ of $\DPC_n$ satisfies (via $\varphi$) all the relations from $R$. \end{lemma} Observe that this result assures us that, if $u,v\in A^*$ are such that the relation $u=v$ is a consequence of $R$, then $u\varphi=v\varphi$. \smallskip Next, in order to prove that any relation satisfied by the generating set of $\DPC_n$ is a consequence of $R$, we first present a series of three lemmas. In what follows, we denote the congruence $\rho_R$ of $A^*$ simply by $\rho$. \begin{lemma}\label{pre0} If $n$ is even then the relation $$ hg^{2j-1}e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_n = e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_n $$ is a consequence of $R$, for $1\leqslant j\leqslant \frac{n}{2}$. \end{lemma} \begin{proof} We proceed by induction on $j$. Let $j=1$. Then $hge_2\cdots e_{\frac{n}{2}}e_{\frac{n}{2}+2}\cdots e_n=e_2\cdots e_{\frac{n}{2}}e_{\frac{n}{2}+2}\cdots e_n$ is a relation of $R$. Next, suppose that $hg^{2j-1}e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_n = e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_n$, for some $1\leqslant j\leqslant \frac{n}{2}-1$. 
Then $$ \begin{array}{cll} & hg^{2(j+1)-1}e_1\cdots e_je_{j+2}\cdots e_{j+\frac{n}{2}}e_{j+\frac{n}{2}+2}\cdots e_n & \\ \equiv & hg^{2j+1}e_1\cdots e_je_{j+2}\cdots e_{j+\frac{n}{2}}e_{j+\frac{n}{2}+2}\cdots e_n & \\ \rho & hg^{2j}e_nge_2\cdots e_je_{j+2}\cdots e_{j+\frac{n}{2}}e_{j+\frac{n}{2}+2}\cdots e_n & \mbox{(by $R_4$)}\\ \rho & hgg^{2j-1}e_n e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_{n-1}g & \mbox{(by $R_4$)}\\ \rho & g^{n-1}hg^{2j-1}e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_ng & \mbox{(by $R_1$ and $R_3$)}\\ \rho & g^{n-1}e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_ng& \mbox{(by the induction hypothesis)}\\ \rho & g^{n-1}e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_{n-1}ge_1 & \mbox{(by $R_4$)}\\ \rho &g^{n-1}ge_2\cdots e_je_{j+2}\cdots e_{j+\frac{n}{2}}e_{j+\frac{n}{2}+2}\cdots e_ne_1 & \mbox{(by $R_4$)}\\ \rho &e_1\cdots e_je_{j+2}\cdots e_{j+\frac{n}{2}}e_{j+\frac{n}{2}+2}\cdots e_n & \mbox{(by $R_1$ and $R_3$)}, \end{array} $$ as required. \end{proof} \begin{lemma}\label{pre1} The relation $hg^{2i-1}e_1\cdots e_{i-1}e_{i+1}\cdots e_n=e_1\cdots e_{i-1}e_{i+1}\cdots e_n$ is a consequence of $R$, for $1\leqslant i\leqslant n$. \end{lemma} \begin{proof} We proceed by induction on $i$. Let $i=1$. If $n$ is odd then $hge_2e_3\cdots e_n=e_2e_3\cdots e_n$ is a relation of $R$. So, suppose that $n$ is even. Then $hge_2\cdots e_{\frac{n}{2}}e_{\frac{n}{2}+2}\cdots e_n=e_2\cdots e_{\frac{n}{2}}e_{\frac{n}{2}+2}\cdots e_n$ is a relation of $R$, whence $$ hge_2\cdots e_{\frac{n}{2}}e_{\frac{n}{2}+2}\cdots e_ne_{\frac{n}{2}+1}\ro e_2\cdots e_{\frac{n}{2}}e_{\frac{n}{2}+2}\cdots e_ne_{\frac{n}{2}+1} $$ and so $hge_2e_3\cdots e_n=e_2e_3\cdots e_n$, by $R_3$. Now, suppose that $hg^{2i-1}e_1\cdots e_{i-1}e_{i+1}\cdots e_n\ro e_1\cdots e_{i-1}e_{i+1}\cdots e_n$, for some $1\leqslant i\leqslant n-1$.
Then (with steps similar to the previous proof), we have $$ \begin{array}{rcll} hg^{2(i+1)-1}e_1\cdots e_{i}e_{i+2}\cdots e_n & \equiv & hg^{2i+1}e_1\cdots e_{i}e_{i+2}\cdots e_n & \\ & \rho & hg^{2i}e_nge_2\cdots e_{i}e_{i+2}\cdots e_n & \mbox{(by $R_4$)}\\ & \rho & hgg^{2i-1}e_ne_1\cdots e_{i-1}e_{i+1}\cdots e_{n-1}g & \mbox{(by $R_4$)}\\ & \rho & g^{n-1}hg^{2i-1}e_1\cdots e_{i-1}e_{i+1}\cdots e_ng & \mbox{(by $R_1$ and $R_3$)}\\ & \rho & g^{n-1}e_1\cdots e_{i-1}e_{i+1}\cdots e_n g & \mbox{(by the induction hypothesis)}\\ & \rho & g^{n-1}e_1\cdots e_{i-1}e_{i+1}\cdots e_{n-1}ge_1 & \mbox{(by $R_4$)}\\ & \rho &g^{n-1}ge_2\cdots e_{i}e_{i+2}\cdots e_ne_1 & \mbox{(by $R_4$)}\\ & \rho &e_1\cdots e_{i}e_{i+2}\cdots e_n & \mbox{(by $R_1$ and $R_3$)}, \end{array} $$ as required. \end{proof} \begin{lemma}\label{pre2} The relation $h^\ell g^me_1e_2\cdots e_n=e_1e_2\cdots e_n$ is a consequence of $R$, for $\ell,m\geqslant0$. \end{lemma} \begin{proof} First, we prove that the relation $he_1e_2\cdots e_n=e_1e_2\cdots e_n$ is a consequence of $R$. Since this relation belongs to $R$ for even $n$, it remains to show that $he_1e_2\cdots e_n\ro e_1e_2\cdots e_n$ for $n$ odd. Suppose that $n$ is odd. Hence, from $R_6^\text{o}$, we have $hge_2e_3\cdots e_ne_1\ro e_2e_3\cdots e_ne_1$, so $hge_1e_2\cdots e_n\ro e_1e_2\cdots e_n$ (by $R_3$), whence $ge_1e_2\cdots e_n\ro he_1e_2\cdots e_n$ (by $R_1$) and then $(ge_1e_2\cdots e_n)^n\ro (he_1e_2\cdots e_n)^n$. Now, by applying relations $R_4$ and $R_3$, we have $$ ge_1e_2\cdots e_n\ro e_nge_2\cdots e_n\ro e_ne_1\cdots e_{n-1}g \ro e_1e_2\cdots e_ng, $$ whence $(ge_1e_2\cdots e_n)^n\ro g^n(e_1e_2\cdots e_n)^n\ro e_1e_2\cdots e_n$, by relations $R_1$, $R_3$ and $R_2$. On the other hand, by applying relations $R_5$ and $R_3$, we get $$ he_1e_2\cdots e_n\ro e_ne_{n-1}\cdots e_1h\ro e_1e_2\cdots e_nh, $$ whence $(he_1e_2\cdots e_n)^n\ro h^n(e_1e_2\cdots e_n)^n\ro h e_1e_2\cdots e_n$, by relations $R_1$, $R_3$ and $R_2$, since $n$ is odd.
Therefore $he_1e_2\cdots e_n\ro e_1e_2\cdots e_n$. \smallskip Secondly, we prove that the relation $ge_1e_2\cdots e_n=e_1e_2\cdots e_n$ is a consequence of $R$. In fact, we have $$ \begin{array}{rcll} ge_1e_2\cdots e_n & \rho & ge_1hge_2\cdots e_n & \mbox{(by Lemma \ref{pre1})}\\ & \rho & e_nghge_2\cdots e_n & \mbox{(by $R_4$)}\\ & \rho & e_ngg^{n-1}he_2\cdots e_n & \mbox{(by $R_1$)}\\ & \rho & e_nhe_2\cdots e_n & \mbox{(by $R_1$)}\\ & \rho & he_1e_2\cdots e_n & \mbox{(by $R_5$)}\\ & \rho & e_1e_2\cdots e_n & \mbox{(by the first part).} \end{array} $$ \smallskip Now, clearly, for $\ell,m\geqslant0$, $h^\ell g^me_1e_2\cdots e_n\ro e_1e_2\cdots e_n$ follows immediately from $ge_1e_2\cdots e_n\ro e_1e_2\cdots e_n$ and $he_1e_2\cdots e_n\rho e_1e_2\cdots e_n$, which concludes the proof of the lemma. \end{proof} We are now in a position to prove the following result. \begin{theorem}\label{firstpres} The monoid $\DPC_n$ is defined by the presentation $\langle A \mid R\rangle$ on $n+2$ generators. \end{theorem} \begin{proof} In view of Proposition \ref{provingpresentation} and Lemma \ref{genrel}, it remains to prove that any relation satisfied by the generating set $\{g,h,e_1,e_2,\ldots,e_n\}$ of $\DPC_n$ is a consequence of $R$. Let $u,v\in A^*$ be such that $u\varphi=v\varphi$. We aim to show that $u\ro v$. Take $\alpha=u\varphi$. It is clear that relations $R_1$ to $R_5$ allow us to deduce that $u\ro h^\ell g^m e_{i_1}\cdots e_{i_k}$, for some $\ell\in\{0,1\}$, $m\in\{0,1,\ldots,n-1\}$, $1\leqslant i_1 < \cdots < i_k\leqslant n$ and $0\leqslant k\leqslant n$. Similarly, we have $v\ro h^{\ell'} g^{m'} e_{i'_1}\cdots e_{i'_{k'}}$, for some $\ell'\in\{0,1\}$, $m'\in\{0,1,\ldots,n-1\}$, $1\leqslant i'_1 < \cdots < i'_{k'}\leqslant n$ and $0\leqslant k'\leqslant n$. Since $\alpha=h^\ell g^m e_{i_1}\cdots e_{i_k}$, it follows that $\im(\alpha)=\Omega_n\setminus\{i_1,\ldots,i_k\}$ and $\alpha=h^\ell g^m|_{\dom(\alpha)}$.
Similarly, as also $\alpha=v\varphi$, from $\alpha= h^{\ell'} g^{m'} e_{i'_1}\cdots e_{i'_{k'}}$, we get $\im(\alpha)=\Omega_n\setminus\{i'_1,\ldots,i'_{k'}\}$ and $\alpha= h^{\ell'} g^{m'}|_{\dom(\alpha)}$. Hence $k'=k$ and $\{i'_1,\ldots,i'_k\}=\{i_1,\ldots,i_k\}$. Furthermore, if either $k=n-2$ and $\d(\min \dom(\alpha),\max \dom(\alpha)) \neq \frac{n}{2}$ or $k\leqslant n-3$, by Lemma \ref{fundlemma}, we obtain $\ell'=\ell$ and $m'=m$ and so $u\ro h^\ell g^m e_{i_1}\cdots e_{i_k} \ro v$. \smallskip If $h^{\ell'}g^{m'} = h^\ell g^m$ (even as elements of $\D_{2n}$) then $\ell'=\ell$ and $m'=m$ and so we get again $u\ro h^\ell g^m e_{i_1}\cdots e_{i_k} \ro v$. Therefore, let us suppose that $h^{\ell'}g^{m'}\neq h^\ell g^m$. Hence, by Lemma \ref{fundlemma}, we may conclude that $\alpha=\emptyset$ or $\ell'=\ell-1$ or $\ell'=\ell+1$. If $\alpha=\emptyset$, i.e. $k=n$, then $u\ro h^\ell g^m e_1e_2\cdots e_n \ro e_1e_2\cdots e_n \ro h^{\ell'}g^{m'} e_1e_2\cdots e_n \ro v$, by Lemma \ref{pre2}. Thus, we may suppose that $\alpha\neq\emptyset$ and, without loss of generality, also that $\ell'=\ell+1$, i.e. $\ell=0$ and $\ell'=1$. \smallskip Let $k=n-2$ and admit that $\d(\min \dom(\alpha),\max \dom(\alpha)) = \frac{n}{2}$ (in which case $n$ is even). Let $\alpha=\begin{pmatrix} i_1&i_2\\ j_1&j_2 \end{pmatrix} $, with $1\leqslant i_1<i_2\leqslant n$. Then $i_2-i_1=\frac{n}{2}=\d(i_1,i_2)=\d(j_1,j_2)=|j_2-j_1|$ and so $j_2\in\{j_1 -\frac{n}{2}, j_1 +\frac{n}{2}\}$. Let $j=\min\{j_1, j_2\}$ (notice that $1\leqslant j\leqslant\frac{n}{2}$) and $i=j\alpha^{-1}$. Hence $\im(\alpha)=\{j,j+\frac{n}{2}\}$ and $\alpha=g^{n+j-i}|_{\dom(\alpha)}=hg^{i+j-1-n}|_{\dom(\alpha)}$ (cf. proof of Lemma \ref{fundlemma}).
So, we have $$ u\ro g^m e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_n \quad\text{and}\quad v\ro hg^{m'} e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_n $$ and, by Lemma \ref{fundlemma}, $m=rn+j-i$, for some $r\in\{0,1\}$, and $m'=i+j-1-r'n$, for some $r'\in\{0,1\}$. Thus, we get $$ \begin{array}{rcll} u & \rho & g^me_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_n & \\ & \rho & g^m hg^{2j-1}e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_n & \mbox{(by Lemma \ref{pre0})} \\ & \rho & g^m hg^{2j-1+(r-r')n}e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_n & \mbox{(by $R_1$)} \\ & \rho & hg^{n-m} g^{m+m'}e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_n & \mbox{(by $R_1$)} \\ & \rho & hg^{m'}e_1\cdots e_{j-1}e_{j+1}\cdots e_{j+\frac{n}{2}-1}e_{j+\frac{n}{2}+1}\cdots e_n & \mbox{(by $R_1$)} \\ & \rho & v. & \end{array} $$ \smallskip Finally, consider that $k=n-1$. Let $i\in\Omega_n$ be such that $\Omega_n\setminus\{i_1,\ldots,i_{n-1}\}=\{i\}$. Then $\im(\alpha)=\{i\}$ and $\{i_1,\ldots,i_{n-1}\}=\{1,\ldots,i-1,i+1,\ldots,n\}$. Take $a=i\alpha^{-1}$. Then $ag^m=i=ahg^{m'}$. Since $ag^m=a+m-rn$, for some $r\in\{0,1\}$, and $ahg^{m'}=(n-a+1)g^{m'}=r'n-a+1+m'$, for some $r'\in\{0,1\}$, in a similar way to what we proved before, we have $$ \begin{array}{rcll} u & \rho & g^me_1\cdots e_{i-1}e_{i+1}\cdots e_n & \\ & \rho & g^m hg^{2i-1}e_1\cdots e_{i-1}e_{i+1}\cdots e_n & \mbox{(by Lemma \ref{pre1})} \\ & \rho & g^m hg^{2i-1+(r-r')n}e_1\cdots e_{i-1}e_{i+1}\cdots e_n & \mbox{(by $R_1$)} \\ & \rho & hg^{n-m} g^{m+m'}e_1\cdots e_{i-1}e_{i+1}\cdots e_n & \mbox{(by $R_1$)} \\ & \rho & hg^{m'}e_1\cdots e_{i-1}e_{i+1}\cdots e_n & \mbox{(by $R_1$)} \\ & \rho & v , & \end{array} $$ as required. 
\end{proof} Notice that, taking into account the relation $h^2=1$ of $R_1$, we could have taken only \textit{half} of the relations $R_5$, namely the relations $he_i=e_{n-i+1}h$ with $1\leqslant i\leqslant \lceil \frac{n}{2}\rceil$. \medskip Our next and final goal is to deduce from the previous presentation for $\DPC_n$ a new one on $3$ generators, by using Tietze transformations. \smallskip Recall that, towards the end of Section \ref{basics}, we observed that $e_i=hg^{i-1}e_nhg^{i-1}$ for all $i\in\{1,2,\ldots,n\}$. \smallskip We will proceed as follows: first, by applying T1, we add the relations $e_i=hg^{i-1}e_nhg^{i-1}$, for $1\leqslant i\leqslant n$; secondly, we apply T4 to each of the relations $e_i=hg^{i-1}e_nhg^{i-1}$ with $i\in\{1,2,\ldots,n-1\}$ and, in some cases, by convenience, we also replace $e_n$ by $hg^{n-1}e_nhg^{n-1}$; finally, by using the relations $R_1$, we simplify the new relations obtained, eliminating the trivial ones or those that are deduced from others. In what follows, we perform this procedure for each of the sets of relations $R_1$ to $R_6^\text{o}/R_6^\text{e}$. \begin{description} \item $(R_1)$ There is nothing to do for these relations. \item $(R_2)$ For $1\leqslant i\leqslant n-1$, from $e_i^2=e_i$, we have $ hg^{i-1}e_nhg^{i-1} hg^{i-1}e_nhg^{i-1} = hg^{i-1}e_nhg^{i-1}, $ which is equivalent to $e_n^2=e_n$. \item $(R_3)$ For $1\leqslant i<j\leqslant n$, from $e_ie_j=e_je_i$, we get $ hg^{i-1}e_nhg^{i-1} hg^{j-1}e_nhg^{j-1} = hg^{j-1}e_nhg^{j-1} hg^{i-1}e_nhg^{i-1} $ and it is a routine matter to check that this relation is equivalent to $e_ng^{j-i}e_ng^{n-j+i} = g^{j-i}e_ng^{n-j+i}e_n$. \item $(R_4)$ From $ge_1=e_ng$, we obtain $ ghe_nh=e_ng, $ which is equivalent to $hg^{n-1}e_nhg^{n-1}=e_n$ (and, obviously, also to $ghe_ngh=e_n$). On the other hand, for $1\leqslant i\leqslant n-1$, from $ge_{i+1}=e_ig$ we get $ g hg^ie_nhg^i = hg^{i-1}e_nhg^{i-1} g $ and this relation is equivalent to $e_n=e_n$. 
\item $(R_5)$ For $1\leqslant i\leqslant n$, from $he_i=e_{n-i+1}h$, we have $ h hg^{i-1}e_nhg^{i-1} = hg^{n-i}e_nhg^{n-i} h, $ which is a relation equivalent to $hg^{n-1}e_nhg^{n-1}=e_n$. \item $(R_6^\text{o})$ From $hge_2e_3\cdots e_n=e_2e_3\cdots e_n$ (with $n$ odd), we get \begin{align*} hg (hge_nhg) (hg^2e_nhg^2)\cdots (hg^{n-1}e_nhg^{n-1}) = (hge_nhg) (hg^2e_nhg^2)\cdots (hg^{n-1}e_nhg^{n-1}). \end{align*} It is easy to check that this relation is equivalent to $hg(e_ng)^{n-2}e_n=(e_ng)^{n-2}e_n$. \item $(R_6^\text{e})$ Now, for an even $n$, from $hge_2\cdots e_{\frac{n}{2}}e_{\frac{n}{2}+2}\cdots e_n=e_2\cdots e_{\frac{n}{2}}e_{\frac{n}{2}+2}\cdots e_n$, we obtain \begin{align*} hg (hge_nhg) \cdots (hg^{\frac{n}{2}-1}e_nhg^{\frac{n}{2}-1}) (hg^{\frac{n}{2}+1}e_nhg^{\frac{n}{2}+1})\cdots (hg^{n-1}e_nhg^{n-1}) = \\ (hge_nhg) \cdots (hg^{\frac{n}{2}-1}e_nhg^{\frac{n}{2}-1}) (hg^{\frac{n}{2}+1}e_nhg^{\frac{n}{2}+1})\cdots (hg^{n-1}e_nhg^{n-1}), \end{align*} which can routinely be verified to be equivalent to $hg (e_ng)^{\frac{n}{2}-1}g(e_ng)^{\frac{n}{2}-2}e_n= (e_ng)^{\frac{n}{2}-1}g(e_ng)^{\frac{n}{2}-2}e_n$. On the other hand, from $he_1e_2\cdots e_n=e_1e_2\cdots e_n$, we have \begin{align*} h (he_nh) (hge_nhg)\cdots (hg^{n-1}e_nhg^{n-1}) = (he_nh) (hge_nhg) \cdots (hg^{n-1}e_nhg^{n-1}), \end{align*} a relation that is equivalent to $h(e_ng)^{n-1}e_n=(e_ng)^{n-1}e_n$. \end{description} \smallskip So, let us consider the following set $Q$ of monoid relations on the alphabet $B=\{g,h,e\}$: \begin{description} \item $(Q_1)$ $g^n=1$, $h^2=1$ and $hg=g^{n-1}h$; \item $(Q_2)$ $e^2=e$ and $ghegh=e$; \item $(Q_3)$ $eg^{j-i}eg^{n-j+i} = g^{j-i}eg^{n-j+i}e$, for $1\leqslant i<j\leqslant n$; \item $(Q_4)$ $hg(eg)^{n-2}e=(eg)^{n-2}e$, if $n$ is odd; \item $(Q_5)$ $hg (eg)^{\frac{n}{2}-1}g(eg)^{\frac{n}{2}-2}e= (eg)^{\frac{n}{2}-1}g(eg)^{\frac{n}{2}-2}e$ and $h(eg)^{n-1}e=(eg)^{n-1}e$, if $n$ is even. \end{description} Notice that $|Q|=\frac{n^2-n+13+(-1)^n}{2}$. 
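As with $R$, the relations $Q$ can be checked numerically against the concrete maps $g$, $h$ and $e=e_n$ for small values of $n$. The sketch below is our own modelling (partial maps as dictionaries, acting on the right) and serves only as a sanity check of the computations carried out above.

```python
from functools import reduce

def compose(a, b):
    # maps act on the right: i(ab) = (ia)b
    return {i: b[a[i]] for i in a if a[i] in b}

def word(n, *maps):
    return reduce(compose, maps, {i: i for i in range(1, n + 1)})

def check_Q(n):
    g = {i: i % n + 1 for i in range(1, n + 1)}   # rotation i -> i + 1
    h = {i: n - i + 1 for i in range(1, n + 1)}   # reflection
    e = {i: i for i in range(1, n)}               # e = e_n, identity off {n}
    W = lambda *m: word(n, *m)
    G = lambda k: [g] * k                          # the word g^k
    EG = lambda k: [e, g] * k                      # the word (eg)^k
    assert W(*G(n)) == W() and W(h, h) == W()                          # Q1
    assert W(h, g) == W(*G(n - 1), h)                                  # Q1
    assert W(e, e) == e and W(g, h, e, g, h) == e                      # Q2
    for i in range(1, n + 1):                                          # Q3
        for j in range(i + 1, n + 1):
            assert (W(e, *G(j - i), e, *G(n - j + i)) ==
                    W(*G(j - i), e, *G(n - j + i), e))
    if n % 2 == 1:                                                     # Q4
        assert W(h, g, *EG(n - 2), e) == W(*EG(n - 2), e)
    else:                                                              # Q5
        w = EG(n // 2 - 1) + [g] + EG(n // 2 - 2)
        assert W(h, g, *w, e) == W(*w, e)
        assert W(h, *EG(n - 1), e) == W(*EG(n - 1), e)
    return True

assert all(check_Q(n) for n in (3, 4, 5, 6, 7, 8))
```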
Therefore, by considering the mapping $B\longrightarrow\DPC_n$ defined by $g\longmapsto g$, $h\longmapsto h$ and $e\longmapsto e_n$, we have: \begin{theorem}\label{rankpres} The monoid $\DPC_n$ is defined by the presentation $\langle B \mid Q\rangle$ on $3$ generators. \end{theorem} \begin{thebibliography}{00} \bibitem{Aizenstat:1958} A.Ya. A\u{\i}zen\v{s}tat, Defining relations of finite symmetric semigroups, Mat. Sb. N. S. 45 (1958), 261--280 (Russian). \bibitem{Aizenstat:1962} A.Ya. A\u{\i}zen\v{s}tat, The defining relations of the endomorphism semigroup of a finite linearly ordered set, Sibirsk. Mat. 3 (1962), 161--169 (Russian). \bibitem{AlKharousi&Kehinde&Umar:2014} F. Al-Kharousi, R. Kehinde and A. Umar, Combinatorial results for certain semigroups of partial isometries of a finite chain, Australas. J. Combin. 58 (2014), 365--375. \bibitem{AlKharousi&Kehinde&Umar:2016} F. Al-Kharousi, R. Kehinde and A. Umar, On the semigroup of partial isometries of a finite chain, Commun. Algebra 44 (2016), 639--647. \bibitem{Araujo&al:2015} J. Ara\'ujo, W. Bentz, J.D. Mitchell and C. Schneider, The rank of the semigroup of transformations stabilising a partition of a finite set, Mathematical Proceedings of the Cambridge Philosophical Society, 159 (2015), 339--353. \bibitem{Catarino&Higgins:1999} P.M. Catarino and P.M. Higgins, The monoid of orientation-preserving mappings on a chain, Semigroup Forum 58 (1999), 190--206. \bibitem{Cicalo&al:2015} S. Cical\`o, V.H. Fernandes and C. Schneider, Partial transformation monoids preserving a uniform partition, Semigroup Forum 90 (2015), 532--544. \bibitem{East:2011} J. East, Generators and relations for partition monoids and algebras, J. Algebra 339 (2011), 1--26. \bibitem{Feng&al:2019} Y.-Y. Feng, A. Al-Aadhami, I. Dolinka, J. East and V. Gould, Presentations for singular wreath products, J. Pure Appl. Algebra 223 (2019), 5106--5146. \bibitem{Fernandes:2000} V.H. 
Fernandes, The monoid of all injective orientation preserving partial transformations on a finite chain, Commun. Algebra 28 (2000), 3401--3426. \bibitem{Fernandes:2001} V.H. Fernandes, The monoid of all injective order preserving partial transformations on a finite chain, Semigroup Forum 62 (2001), 178--204. \bibitem{Fernandes:2002survey} V.H. Fernandes, Presentations for some monoids of partial transformations on a finite chain: a survey, Semigroups, Algorithms, Automata and Languages, eds. Gracinda M. S. Gomes \& Jean-\'Eric Pin \& Pedro V. Silva, World Scientific (2002), 363--378. \bibitem{Fernandes&Gomes&Jesus:2004} V.H. Fernandes, G.M.S. Gomes and M.M. Jesus, Presentations for some monoids of injective partial transformations on a finite chain, Southeast Asian Bull. Math. 28 (2004), 903--918. \bibitem{Fernandes&Gomes&Jesus:2009} V.H. Fernandes, G.M.S. Gomes and M.M. Jesus, Congruences on monoids of transformations preserving the orientation of a finite chain, J. Algebra 321 (2009), 743--757. \bibitem{Fernandes&al:2014} V.H. Fernandes, P. Honyam, T.M. Quinteiro and B. Singha, On semigroups of endomorphisms of a chain with restricted range, Semigroup Forum 89 (2014), 77--104. \bibitem{Fernandes&al:2019} V.H. Fernandes, J. Koppitz and T. Musunthia, The rank of the semigroup of all order-preserving transformations on a finite fence, Bulletin of the Malaysian Mathematical Sciences Society 42 (2019), 2191--2211. \bibitem{Fernandes&Paulista:2022arxiv} V.H. Fernandes and T. Paulista, On the monoid of partial isometries of a finite star graph, arXiv:2203.05504 (2022), https://doi.org/10.48550/arXiv.2203.05504 (21 pages). \bibitem{Fernandes&Quinteiro:2014} V.H. Fernandes and T.M. Quinteiro, On the ranks of certain monoids of transformations that preserve a uniform partition, Commun. Algebra 42 (2014), 615--636. \bibitem{Fernandes&Quinteiro:2016} V.H. Fernandes and T.M. Quinteiro, Presentations for monoids of finite partial isometries, Semigroup Forum 93 (2016), 97--110.
\bibitem{Fernandes&Sanwong:2014} V.H. Fernandes and J. Sanwong, On the rank of semigroups of transformations on a finite set with restricted range, Algebra Colloquium 21 (2014), 497--510. \bibitem{GAP4} The GAP~Group, \emph{GAP -- Groups, Algorithms, and Programming, Version 4.11.1}; 2021. \newline (https://www.gap-system.org) \bibitem{Gomes&Howie:1992} G.M.S. Gomes and J.M. Howie, On the ranks of certain semigroups of order-preserving transformations, Semigroup Forum 45 (1992), 272--282. \bibitem{Howie:1995} J.M. Howie, Fundamentals of Semigroup Theory, Oxford, Oxford University Press, 1995. \bibitem{Howie&Ruskuc:1995} J.M. Howie and N. Ru\v{s}kuc, Constructions and presentations for monoids, Commun. Algebra 22 (1994), 6209--6224. \bibitem{Higgins&Vernitski:2022} P.M. Higgins and A. Vernitski, Orientation-preserving and orientation-reversing mappings: a new description, Semigroup Forum 104 (2022), 509--514. \bibitem{Lallement:1979} G. Lallement, Semigroups and Combinatorial Applications, John Wiley \& Sons, New York, 1979. \bibitem{McAlister:1998} D. McAlister, Semigroups generated by a group and an idempotent, Commun. Algebra 26 (1998), 515--547. \bibitem{Moore:1897} E.H. Moore, Concerning the abstract groups of order $k!$ and $\frac12k!$ holohedrically isomorphic with the symmetric and the alternating substitution groups on $k$ letters, Proc. London Math. Soc. 28 (1897), 357--366. \bibitem{Paulista:2022} T. Paulista, Partial Isometries of Some Simple Connected Graphs, M.Sc. Thesis, School of Science and Technology of NOVA University Lisbon, 2022. \bibitem{Popova:1961} L.M. Popova, The defining relations of certain semigroups of partial transformations of a finite set, Leningrad. Gos. Ped. Inst. U\v cen. Zap. 218 (1961), 191--212 (Russian). \bibitem{Popova:1962} L.M. Popova, Defining relations of a semigroup of partial endomorphisms of a finite linearly ordered set, Leningrad. Gos. Ped. Inst. U\v cen. Zap. 238 (1962), 78--88 (Russian). \bibitem{Ruskuc:1995} N. 
Ru\v{s}kuc, Semigroup Presentations, Ph.D. Thesis, University of St Andrews, 1995. \end{thebibliography} \bigskip \end{document}
\tableofcontents \chapter{Introduction and Applications} \pagenumbering{arabic} \section{Introduction to Parts 1 and 2} \subsection{A Bit of History and Motivation} In 1986, van Est (1921-2002) published a novel proof of Lie's third theorem, which he ascribed to Cartan (1869-1951) — the person who is credited with originally proving the theorem. Recall that Lie's third theorem states that every Lie algebra has an integration, and it is considered to be the most difficult of Lie's theorems. The proof used the van Est map and the van Est isomorphism theorem; more precisely, given that every matrix Lie algebra integrates to a Lie group (which is much easier to prove) and that every Lie group has vanishing second homotopy group, the van Est isomorphism theorem completes the proof. \vspace{3mm}\\Let us give a brief synopsis of the van Est map and isomorphism theorem: let $G$ be a Lie group and let $E$ be a representation of $G\,.$ We can differentiate this structure to obtain the corresponding Lie algebra $\mathfrak{g}$ and the corresponding representation of $\mathfrak{g}$ on $E\,.$ From this data we get two cohomologies: the Lie group cohomology and Lie algebra cohomology with coefficients in $E\,,$ denoted $H^*(G,E)$ and $H^*(\mathfrak{g},E)\,,$ respectively. Originally, the van Est map $VE$ was a map \begin{equation} VE:H^*(G,E)\to H^*(\mathfrak{g},E)\,. \end{equation} More generally, given a compact subgroup $K\xhookrightarrow{}G\,,$ with Lie algebra $\mathfrak{k}\,,$ the van Est map factors through the relative Lie algebra cohomology $H^*(\mathfrak{g},\mathfrak{k},E)\,.$ That is, there is a map \begin{equation} VE_{G,K}:H^*(G,E)\to H^*(\mathfrak{g},\mathfrak{k},E)\,.
\end{equation} Forms in the relative Lie algebra complex are forms in the Lie algebra complex of $\mathfrak{g}$ which evaluate to $0$ when contracted with any vector in $\mathfrak{k}\,,$ and which are invariant under the conjugation action of $K\,.$ Classically, these maps are what is meant by van Est maps, and essentially what van Est proved amounts to the following theorems: \begin{theorem}\label{van est original} Suppose $G\rightrightarrows *$ has vanishing homotopy groups up to degree $n\,.$ Then $VE$ is an isomorphism up to and including degree $n\,,$ and is injective in degree $n+1\,.$ \end{theorem} \begin{theorem}\label{van est compact} Let $K$ be the maximal compact subgroup of $G\rightrightarrows *\,.$ The map $VE_{G,K}$ is an isomorphism in all degrees. \end{theorem} Later on, the van Est map was extended to Lie groupoids by Weinstein, Xu and others: given a Lie groupoid $G\rightrightarrows G^0$ and a representation $E$ of $G\,,$ we obtain through differentiation a corresponding Lie algebroid $\mathfrak{g}\to G^0$ and a corresponding representation of $\mathfrak{g}$ on $E\,.$ There is a van Est map, still denoted $VE\,,$ from the Lie groupoid cohomology to the Lie algebroid cohomology: \begin{equation} VE:H^*(G,E)\to H^*(\mathfrak{g},E)\,. \end{equation} Crainic proved the following result: \begin{theorem}\label{crainic} If the target fibers have vanishing homotopy groups up to and including degree $n\,,$ then $VE$ is an isomorphism up to and including degree $n\,,$ and is injective in degree $n+1\,.$ \end{theorem} Crainic also described the image of the map in degree $n+1$ (and in fact proved a more general result involving a proper action). There are, in particular, applications of this result to the integration of Poisson manifolds, and more generally to the integration of Lie algebroids. \vspace{3mm}\\The van Est map is one of the main tools we have to compute Lie groupoid cohomology.
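\vspace{3mm}\\For concreteness, let us record one common cochain-level formula for the van Est map (cf.~\cite{weinstein}; the precise signs and conventions vary between references). A section $\alpha$ of $\mathfrak{g}$ determines a right-invariant vector field tangent to the target fibers of $G\,,$ and hence an operator $R_\alpha$ taking a groupoid $k$-cochain $c$ to the $(k-1)$-cochain obtained by differentiating $c$ in its last argument along this vector field. Then
\begin{equation*}
VE(c)(\alpha_1,\ldots,\alpha_k)=\sum_{\sigma\in S_k}\text{sgn}(\sigma)\,R_{\alpha_{\sigma(k)}}\cdots R_{\alpha_{\sigma(1)}}(c)\,.
\end{equation*}
In degree one this is just differentiation of $c$ along the target fibers at the identity bisection, which already suggests the comparison with Lie algebroid cohomology.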
Many others have worked on van Est maps and isomorphism theorems, among them Arias Abad, Cabrera, Li-Bland, Meinrenken and Salazar; van Est maps have proven to be very useful. However, in the author's opinion the van Est map as currently defined has three drawbacks, which will be addressed in this thesis (in no particular order): \begin{enumerate} \item The van Est map is only defined for coefficients in a representation. However, we would like to consider more general coefficients so that we can use van Est theorems to prove a wider range of results. In particular, $S^1$-valued functions and our theorem are relevant to: computing characters and $S^1$-extensions of Lie groups; computing representations of Lie groupoids and the geometric quantization of Poisson manifolds and Courant algebroids; the basic gerbe over a compact simple Lie group. Indeed, the theorem we prove in this thesis can be used to derive the following classical results (either immediately, or with a small amount of work): \begin{itemize} \item Let $P\to N$ be a principal torus bundle with an action of a compact, simply connected Lie group $G$ on $N\,.$ Then the action of $G$ lifts to $P\,,$ and the lift is unique up to isomorphism.
\item If $G\rightrightarrows G^0$ has $n$-connected target fibers, then $H^k(G,\mathbb{Z})\cong H^k(G^0,\mathbb{Z})$ for $0\le k\le n$ (this is a special case of the theorem). \item The Poincar\'{e} lemma (this is a special case of the theorem). \item van Est's original result, \Cref{van est compact}. \end{itemize} The first two results are related to issues about coefficients, and the third and fourth results are related to point 3, which we will discuss in a moment (Weinstein and Xu in~\cite{weinstein}, Crainic and Zhu in~\cite{zhu} did consider a version of groupoid cohomology with coefficients in $S^1\,,$ and proved some isomorphism theorems in degrees one and two; van Est also wrote about more general coefficients in~\cite{van est 2}, as did Brylinski in~\cite{brylinski}). \item The second drawback is related to the first one: the van Est map is only defined in the smooth category, but it is desirable to have one in the holomorphic category as well (this is essentially changing coefficients from smooth to holomorphic functions). \item In the absence of a proper action, the van Est map for Lie groupoids doesn't give any information about the higher degree cohomology of the groupoid. We would like a more general theorem that contains \Cref{van est compact} as a special case, and allows us to compute higher degrees of cohomology at the infinitesimal level. In addition, van Est's result doesn't hold when one changes coefficients (similarly, it doesn't hold in the holomorphic category); therefore, we need a more general theorem to compute even the cohomology of Lie groups. \end{enumerate} Consider the following setting: let $\pi:Y\to X$ be a surjective submersion of smooth (complex) manifolds.
There is a morphism $H^*(X,\mathcal{O})\to H_{\pi}^*(Y)\,,$ where $H_{\pi}^*(Y)$ denotes the foliated de Rham cohomology of $Y\,.$ Explicitly, the map is given by first applying the map $H^*(X,\mathcal{O})\to H^*(Y,\pi^{-1}\mathcal{O})\,,$ and then taking a fiberwise de Rham resolution. Now, we have the following theorem (see criterion 1.9.4 in \cite{Bernstein}, \cite{buch}): \begin{theorem} Let $\pi:Y\to X$ be a surjective submersion of smooth (complex) manifolds, such that the fibers of $\pi$ are $n$-connected. Then the morphism $H^*(X,\mathcal{O})\to H_{\pi}^*(Y)$ is an isomorphism up to degree $n$ and injective in degree $n+1\,.$ \end{theorem} Of course, in the smooth setting $H^*(X,\mathcal{O})$ is zero in positive degrees; however, this isn't true in the holomorphic category, and the point is that one can consider any sheaf of functions on $X$ valued in some abelian Lie group, and an analogous result holds. The statement of this result is similar to the statement of the van Est theorem, with $Y$ playing the role of $G^0$ and $X$ playing the role of $G;$ a slight generalization of this result is used to prove the van Est theorem. That this result should, in addition, be a special case of a van Est-type isomorphism theorem was one of the author's main motivations for this direction of study. \subsection{Generalizing the van Est Map} In this thesis we are going to interpret the van Est map as a result about differentiable stacks. More precisely, a sufficiently nice map of stacks $[H^0/H]\to [G^0/G]$ determines a foliation of $[H^0/H]\,,$ and we can compute the foliated cohomology.
The van Est map will then be, roughly, a map from the cohomology of $[G^0/G]$ to the foliated cohomology of $[H^0/H]\,.$ The case of the van Est map for Lie groupoids corresponds to the case that the map of stacks is the one represented by the inclusion $G^0\xhookrightarrow{}G\,.$ van Est's original result, \Cref{van est compact}, is obtained by taking the map of stacks to be the one represented by the inclusion of the maximal compact subgroup $K\xhookrightarrow{} G\,.$ The Poincar\'{e} lemma will be obtained by letting the map of stacks be the one represented by $X\to *\,,$ where $X$ is a contractible space and $*$ is a point. \subsection{Rough Explanation of the van Est Map} A (nice enough) map $f:H\to G$ of Lie groupoids determines a ``foliation'' of $H\,,$ which determines a Lie algebroid-groupoid over $H\,.$ There is a canonical map from the groupoid cohomology of $G$ to the foliated cohomology of $H\,,$ obtained by first applying the inverse image functor to cohomology classes, and then taking a resolution by foliated differential forms. The aforementioned notion of foliation is not always the usual one associated to Lie groupoids, but is one that is appropriate when working in the (2,1)-category of Lie groupoids. For example, consider a Lie group $G\,;$ there is a canonical map $*\to G\,,$ which is just the inclusion of the identity element. The foliation of $*$ determined by this map would naively be the $0$ vector space, but with the notion of foliation we are using it is actually equivalent to the Lie algebra $\mathfrak{g}\,.$ \subsection{Addressing the Drawbacks} To address drawbacks one and two, we first need a more general definition of Lie algebroid cohomology that allows us to use coefficients that are not in a representation\footnote{Weinstein and Xu allude to this possibility in their paper on the quantization of symplectic groupoids}. For example, suppose we want to use $S^1$-coefficients.
Then, if we let $\mathfrak{g}=TX$ for some manifold $X\,,$ changing coefficients from $\mathcal{O}$ (with the trivial action of $TX\,)$ to $\mathcal{O}^*$ would involve passing from de Rham cohomology \begin{equation} \mathcal{O}_X\xrightarrow[]{\text{d}} \Omega^1_X\to \Omega^2_X\to\cdots \end{equation} to Deligne cohomology \begin{equation} \mathcal{O}^*_X\xrightarrow[]{\text{dlog}} \Omega^1_X\to \Omega^2_X\to\cdots\,. \end{equation} More generally, given any Lie algebroid $\mathfrak{g}\to X$ and any abelian Lie group $A\,,$ the Lie algebroid forms we get are: \begin{enumerate} \item In degree $0\,,$ functions on $X$ taking values in $A\,,$ \item In degree $n>0\,,$ Lie algebroid $n$-forms taking values in the Lie algebra $\mathfrak{a}$ of $A\,.$ \end{enumerate} In general, given a Lie groupoid $G\rightrightarrows G^0\,,$ the coefficients we consider are $G$-modules: these are essentially representations $\pi: M\to G^0$ of $G\,,$ except that, unlike a representation, the fibers of the map $\pi$ don't need to be vector spaces; they can be any abelian Lie group. Once this is done, drawbacks one and two are addressed by using a generalization of Crainic's proof of the van Est isomorphism theorem. \vspace{3mm}\\The third drawback is more subtle to resolve. In order to do this, it is best to think of the category of Lie groupoids as a (2,1)-category, where the 2-morphisms between maps of Lie groupoids are natural isomorphisms. In this (2,1)-category, there is a distinct notion of fibers of maps between Lie groupoids, as well as fibrations.
Using this notion, and thinking of $G^0$ as a Lie groupoid with only identity morphisms, the fibers of the natural map $G^0\xhookrightarrow{} G$ are simply the target fibers of $G\rightrightarrows G^0\,.$ Therefore, thinking of Lie groupoids as objects in a (2,1)-category, we can restate the van Est isomorphism theorem for Lie groupoids as follows: \begin{theorem} If the fibers of the natural map $G^0\xhookrightarrow{} G$ are $n$-connected (ie. have vanishing homotopy groups up to and including degree $n\,),$ then $VE$ is an isomorphism up to and including degree $n\,,$ and is injective in degree $n+1\,.$ \end{theorem} Now, we will show that to every nice enough homomorphism $f:H\to G$ of Lie groupoids, one can associate a Lie algebroid-groupoid over $H\,,$ which we will denote $D\to H\,.$ It is then natural to ask: how do the cohomologies of $G$ and this Lie algebroid-groupoid $D\to H$ compare? First, we will define a van Est map \begin{equation*} VE: H^*(G,M)\to H^*(D\to H, f^*M)\,. \end{equation*}We will then prove the following theorem: \begin{theorem}\label{van est morphism} Let $f:H\to G$ be a homomorphism of Lie groupoids, which is a surjective submersion at the level of objects. Suppose further that the fibers of $f$ are all $n$-connected. Then the van Est map is an isomorphism up to and including degree $n\,,$ and is injective in degree $n+1\,.$ \end{theorem} We will also describe its image in degree $n+1\,.$ Letting $H=G^0\,,$ we recover the usual Lie algebroid of $G$ and the usual van Est map. \subsection{The Lie Algebroid-Groupoid Associated to the Normal Bundle of a Subgroup} Before continuing the discussion of the van Est map, let's motivate one instance of associating a Lie algebroid-groupoid to a (nice enough) map $H\to G$ — it resolves the following conundrum: the normal bundle of the identity bisection inherits the structure of a Lie algebroid, so what structure does the normal bundle to a subgroupoid inherit?
It should model a small neighborhood of the subgroupoid, in the same way that the Lie algebroid models a small neighborhood of the identity bisection. \vspace{3mm}\\To illustrate how the normal bundle of $H^{(1)}\xhookrightarrow{}G^{(1)}$ inherits the structure of an LA-groupoid (short for Lie algebroid-groupoid), let's specialize to the case of Lie groups. Let $G$ be a Lie group and $H\xhookrightarrow{} G$ a subgroup. We claim to have the following Lie algebroid-groupoid: \begin{equation}\label{LA group} \begin{tikzcd} H\ltimes_{\text{Ad}}\mathfrak{h}\ltimes\mathfrak{g} \arrow[r, shift left] \arrow[r, shift right] \arrow[d] & \mathfrak{g} \arrow[d] \\ H \arrow[r, shift left] \arrow[r, shift right] & * \end{tikzcd} \end{equation} Here, the right column is just the Lie algebra of $G\,,$ and the bottom row is just the Lie group $H\,.$ The Lie algebroid of the left column comes from the identification $TH\cong H\times \mathfrak{h}$ (where $\mathfrak{h}$ is the Lie algebra of $H\rightrightarrows *\,).$ Then, this Lie algebroid is just the product of the Lie algebroids $TH\to H$ and $\mathfrak{g}\to *$ (it's really the trivial bundle of Lie algebras $\mathfrak{h}\times \mathfrak{g}$ over $H\,).$ Now for the top row: here $H\ltimes_{\text{Ad}} \mathfrak{h}$ is the semidirect product of $H$ and $\mathfrak{h}$ associated to the adjoint representation of $H$ on $\mathfrak{h}\,.$ There is a natural action of this group on $\mathfrak{g}:$ letting $(h,X_\mathfrak{h})\in H\ltimes \mathfrak{h}\,,\tilde{X}_\mathfrak{g}\in \mathfrak{g}\,,$ we have an action given by $(h,X_\mathfrak{h})\cdot \tilde{X}_\mathfrak{g}=\text{Ad}_h\tilde{X}_\mathfrak{g}+X_\mathfrak{h}\,.$ \vspace{3mm}\\Now to explain how the LA-groupoid in \ref{LA group} relates to the normal bundle of $H\xhookrightarrow{}G:$ applying the forgetful functor from LA-groupoids to VB-groupoids (ie.
a vector bundle over a Lie groupoid), we obtain the following VB-groupoid: \begin{equation}\label{VB group} \begin{tikzcd} H\ltimes_{\text{Ad}}\mathfrak{h}\ltimes\mathfrak{g} \arrow[r, shift left] \arrow[r, shift right] \arrow[d] & \mathfrak{g} \arrow[d] \\ H \arrow[r, shift left] \arrow[r, shift right] & * \end{tikzcd} \end{equation} ie. the diagram looks the same; we have just forgotten the Lie brackets. Now, the adjoint action of $H$ on $\mathfrak{g}$ descends to an action of $H$ on $\mathfrak{g}/\mathfrak{h}\,,$ and the groupoid $H\ltimes_{\text{Ad}}\mathfrak{h}\ltimes\mathfrak{g}\rightrightarrows\mathfrak{g}$ is Morita equivalent to $H\ltimes_{\text{Ad}}\mathfrak{g}/\mathfrak{h}\rightrightarrows \mathfrak{g}/\mathfrak{h}\,.$ As a result of this, the VB-groupoid \ref{VB group} is Morita equivalent to the following VB-groupoid\footnote{If $\mathfrak{h}\subset Z(\mathfrak{g})$ (the center of $\mathfrak{g}$) and if $\mathfrak{g}\cong\mathfrak{h}\oplus\mathfrak{g}/\mathfrak{h}\,,$ then this is a Morita equivalence of LA-groupoids.} \begin{equation}\label{VB group morita} \begin{tikzcd} H\ltimes_{\text{Ad}}\mathfrak{g}/\mathfrak{h} \arrow[r, shift left] \arrow[r, shift right] \arrow[d] & \mathfrak{g}/\mathfrak{h} \arrow[d] \\ H \arrow[r, shift left] \arrow[r, shift right] & * \end{tikzcd} \end{equation} Now applying the forgetful functor from VB-groupoids to vector bundles over manifolds, we get \begin{equation}\label{VB manifold group} \begin{tikzcd} H^{(1)}\times\mathfrak{g}/\mathfrak{h}\arrow[d] \\ H^{(1)} \end{tikzcd} \end{equation} The vector bundle \ref{VB manifold group} is naturally identified with the normal bundle of $H^{(1)}\xhookrightarrow{} G^{(1)}\,.$ Hence, in this sense, the normal bundle inherits the structure of an LA-groupoid.
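\vspace{3mm}\\As a quick check, the formula $(h,X_\mathfrak{h})\cdot \tilde{X}_\mathfrak{g}=\text{Ad}_h\tilde{X}_\mathfrak{g}+X_\mathfrak{h}$ used above really does define an action of $H\ltimes_{\text{Ad}}\mathfrak{h}$ on $\mathfrak{g}:$ using the semidirect product multiplication $(h_1,X_1)(h_2,X_2)=(h_1h_2,X_1+\text{Ad}_{h_1}X_2)\,,$ we compute
\begin{equation*}
(h_1,X_1)\cdot\big((h_2,X_2)\cdot\tilde{X}\big)=\text{Ad}_{h_1}\big(\text{Ad}_{h_2}\tilde{X}+X_2\big)+X_1=\text{Ad}_{h_1h_2}\tilde{X}+\big(X_1+\text{Ad}_{h_1}X_2\big)=\big((h_1,X_1)(h_2,X_2)\big)\cdot\tilde{X}\,.
\end{equation*}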
\subsection{A Sketch of the Proof of van Est's Original Result} Now let's specialize the LA-groupoid \ref{LA group} to the case where $H=K$ is the maximal compact subgroup and $E$ is a representation of $G\,.$ The goal here will be to sketch the proof of \Cref{van est compact}. Two facts will be relevant: \begin{enumerate} \item $K\xhookrightarrow{} G$ is a homotopy equivalence, \item The cohomology of a proper Lie groupoid, with coefficients in a representation, is trivial in all degrees higher than $0\,.$ \end{enumerate} Fact 1 implies that the fiber, which is Morita equivalent to $G/K\,,$ is contractible. Then, using \Cref{van est morphism}, we get that the cohomology of \ref{LA group} with coefficients in $E$ is isomorphic to $H^*(G,E)\,.$ Now, both the top and bottom groupoids in \ref{LA group} are proper; therefore, fact 2 implies that the cohomology of \ref{LA group} reduces to the invariant cohomology of the right column. To expound on this, the cohomology of the right column is just the Lie algebra cohomology of $\mathfrak{g}\,,$ ie. the cohomology of the complex \begin{equation}\label{che complex} E\xrightarrow[]{d} Hom(\mathfrak{g},E)\xrightarrow[]{d} Hom(\Lambda^2\mathfrak{g},E)\xrightarrow[]{d}\cdots\,. \end{equation} Now, the complex we get from \ref{LA group} is the subcomplex of \ref{che complex} consisting of those forms invariant under the action of $K\ltimes_{\text{Ad}}\mathfrak{k}\,,$ ie. forms which evaluate to $0$ upon contraction with any vector in $\mathfrak{k}\,,$ and which are invariant under the conjugation action of $K\,;$ this is exactly the aforementioned relative Lie algebra complex, therefore we have obtained van Est's result. \subsection{LA-Groupoids of Homomorphisms \texorpdfstring{$H\to G$}{H to G}} Now let's discuss an interpretation of the LA-groupoid associated to a map $H\to G\,.$ Recall that to every Lie groupoid $H$ we can associate an LA-groupoid $TH\to H$ by forming degreewise tangent bundles.
A foliation of a Lie groupoid $H$ is a wide sub-LA-groupoid of the tangent LA-groupoid $TH\to H\,.$ The LA-groupoid cohomology of a foliation of $H$ can be thought of as the tangential de Rham cohomology, ie. the cohomology of differential forms which take as inputs only vectors in the foliation. We will explain how the LA-groupoid determined by a (nice enough) map between groupoids $H\to G$ can be thought of as a foliation of $H$ associated to the map $H\to G$ (in the (2,1)-category sense, ie. after replacing $H$ with a Morita equivalent groupoid). In particular, a Lie algebra $\mathfrak{g}\to *$ is a foliation of $*\,.$ \subsection{LA-Groupoid of \texorpdfstring{$Y\to X$}{Y to X}} We will first describe how this works in the extreme (but more intuitive) case that the morphism $H\to G$ is just a surjective submersion\footnote{Much of the author's intuition about Lie groupoids comes from surjective submersions between smooth manifolds} between smooth manifolds $\pi:Y\to X\,,$ and where the coefficients are in $\mathcal{O}\,.$ In this case, we can form the submersion groupoid $Y\times_X Y\rightrightarrows Y$ (whose formation can be thought of as replacing the map $Y\to X$ with the cofibration $Y\to Y\times_X Y\rightrightarrows Y\,),$ and we can take its Lie algebroid. This is the same Lie algebroid as the foliation determined by the fibers of $\pi\,,$ and the cohomology of this Lie algebroid is just the de Rham cohomology of differential forms which only take as inputs tangent vectors in the foliation. Thus, we have two methods of obtaining the same Lie algebroid (which can be thought of as an LA-groupoid by considering manifolds to be groupoids with only identity morphisms).
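\vspace{3mm}\\To spell out the complex in question: writing $T_\pi Y=\ker(d\pi)\subset TY$ for the bundle of vectors tangent to the fibers of $\pi\,,$ the cohomology of this Lie algebroid is computed by the leafwise de Rham complex
\begin{equation*}
\mathcal{O}_Y\xrightarrow[]{\text{d}_\pi}\Gamma(T^*_\pi Y)\xrightarrow[]{\text{d}_\pi}\Gamma(\Lambda^2 T^*_\pi Y)\xrightarrow[]{\text{d}_\pi}\cdots\,,
\end{equation*}
where $\text{d}_\pi$ differentiates only in the fiber directions. In degree $0$ its kernel consists of the functions which are constant along the fibers, ie. (locally) pullbacks of functions on $X\,;$ this is the first hint of the comparison with the cohomology of $X\,.$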
\subsection{LA-Groupoid of \texorpdfstring{$*\to G$}{* to G}} For a more involved example, we consider the extreme case on the opposite side of the spectrum: the case of a Lie group $G\rightrightarrows *$ and the mapping $*\xhookrightarrow{}G\,.$ We can also form a ``submersion groupoid'', and the result is just the group $G\rightrightarrows *\,,$ and the Lie algebroid we get is $\mathfrak{g}\to *\,.$ Now, the claim is that this Lie algebra can be thought of as a foliation of $*\,.$ Naively, the tangent bundle of $*$ is the zero vector space; therefore, the foliation determined by this map seems like it should just be trivial. However, this isn't what we mean, as this isn't the Morita invariant notion of foliation. What we will do is replace the map $*\xhookrightarrow{} G$ with a fibration, in the context of the (2,1)-category of Lie groupoids, which in this case will be the commutative diagram \begin{equation} \begin{tikzcd} G\rtimes G \arrow[rd, "p_2"] & \\ * \arrow[u, "{(*,*)}"] \arrow[r,] & G \end{tikzcd} \end{equation} Here, $G\rtimes G$ is the action groupoid associated to the right action of $G$ on itself, and $p_2$ is the projection onto the second factor. Now, we can consider the LA-groupoid given by the tangent LA-groupoid $T(G\rtimes G)\to G\rtimes G\,,$ which has a natural map $p_{2*}$ to the tangent LA-groupoid $TG\to G\,,$ ie.
we have a map \begin{equation} \begin{tikzcd} T(G\rtimes G) \arrow[r, shift left] \arrow[r, shift right] \arrow[d] & TG \arrow[d] \arrow[rr, "p_{2*}", shift right=8] & & TG \arrow[d] \arrow[r, shift right] \arrow[r, shift left] & * \arrow[d] \\ G\rtimes G \arrow[r, shift left] \arrow[r, shift right] & G & & G \arrow[r, shift left] \arrow[r, shift right] & * \end{tikzcd} \end{equation} Taking kernels of $p_{2*}$ as maps of vector bundles, we get a foliation of $G\rtimes G$ and a natural map to $\mathfrak{g}\,.$ Explicitly, the foliation and map are given by the following: \begin{equation} \begin{tikzcd} TG\rtimes G \arrow[r, shift left] \arrow[r, shift right] \arrow[d] & TG \arrow[d] \arrow[rr, shift right=7] & & \mathfrak{g} \arrow[d] \arrow[r, shift left] \arrow[r, shift right] & \mathfrak{g} \arrow[d] \\ G\rtimes G \arrow[r, shift left] \arrow[r, shift right] & G & & * \arrow[r, shift left] \arrow[r, shift right] & * \end{tikzcd} \end{equation} Here, the maps from the left and right column to $\mathfrak{g}$ are obtained by right translating vectors in $TG$ to the origin. Now, the map of groupoids on the top row is a Morita equivalence, which implies that this map of LA-groupoids is an equivalence. Therefore, the foliation is indeed Morita equivalent to $\mathfrak{g}\,,$ and closely corresponds to how $\mathfrak{g}$ is usually thought of: as the right invariant vector fields on $G\,.$ In addition, we have again obtained equivalent LA-groupoids using two different methods. \subsection{Applications}\label{app} Here we will state some applications of \Cref{van est morphism}. Before stating the first theorem, we will make some remarks.
\vspace{4mm}\\Suppose we have a Lie group $G$ which acts on a manifold $N\,.$ Then given a subgroup $H$ of $G\,,$ there is an action of $H\ltimes_{\text{Ad}}\mathfrak{h}$ on $\mathfrak{g}\ltimes N\,,$ given by $(h, X_\mathfrak{h})\cdot (X_\mathfrak{g},n)=(\text{Ad}_h X_\mathfrak{g}+X_\mathfrak{h},h\cdot n)\,,$ where $X_\mathfrak{h}\in \mathfrak{h}\,, X_\mathfrak{g}\in\mathfrak{g}\,.$ From this, we get an action of $H\ltimes_{\text{Ad}}\mathfrak{h}$ on Lie algebroid forms on $\mathfrak{g}\ltimes N\,.$ Now we will state the first theorem, which generalizes \Cref{van est compact}, and is an application of \Cref{van est morphism} to the mapping $H\ltimes N\to G\ltimes N:$ \begin{theorem} Let $G$ be a Lie group and $K$ its maximal compact subgroup. Let $N$ be a smooth manifold on which $G$ acts, and let $E\to N$ be a representation of $G\ltimes N\rightrightarrows N\,.$ Then we have that \begin{equation} H^*(G\ltimes N, E)\cong H^*(\mathfrak{g}\ltimes N,\mathfrak{k}\ltimes N, E)\,, \end{equation} where the cohomology group on the right is the cohomology of the subcomplex of Lie algebroid forms on $\mathfrak{g}\ltimes N$ which are invariant under the action of $K\ltimes_{\text{Ad}}\mathfrak{k}\,.$ \end{theorem} \vspace{4mm}The next result concerns lifting projective representations to representations, and is an application of \Cref{van est morphism} to the mapping $*\to G\,,$ with coefficients in $\mathcal{O}^*:$ \begin{theorem} Let $G$ be a simply connected Lie group and let $V$ be a finite dimensional complex vector space. Let $\rho:G\to \text{PGL}(V)$ be a homomorphism. Then $\rho$ lifts to a homomorphism $\tilde{\rho}:G\to\text{GL}(V)\,.$ If $G$ is semisimple, this lift is unique.
\end{theorem} \vspace{4mm}The next theorem concerns lifting group actions to principal bundles, and is an application of \Cref{van est morphism} to the mapping $N\to G\ltimes N\,,$ with coefficients in $T^n$ (the $n$-dimensional torus): \begin{theorem} Let $G$ be a compact, simply connected Lie group acting on a manifold $N\,,$ and let $P\to N$ be a principal bundle for the $n$-torus $T^n\,.$ Then, up to isomorphism, there is a unique lift of the action of $G$ to $P\,.$ \end{theorem} \vspace{2mm}The next theorem generalizes a result proven by Crainic using different methods, and in particular gives a criterion for there to exist an integration of certain Lie algebroids. It is an application of \Cref{van est morphism} to the mapping $G^0\to G\rightrightarrows G^0\,,$ with coefficients in a $G$-module $M:$ \begin{theorem}\label{central extension1} Consider the exponential sequence $0\to Z \to\mathfrak{m}\overset{\exp}{\to} M\,.$ Let \begin{align}\label{LA extension1} 0\to\mathfrak{m}\to \mathfrak{a} \to \mathfrak{g}\to 0 \end{align} be the central extension of $\mathfrak{g}$ associated to $\omega\in H^2(\mathfrak{g},\mathfrak{m})\,.$ Suppose that $\mathfrak{g}$ has a simply connected integration $G\rightrightarrows X$ and that \begin{align} \int_{S^2_x}\omega\in Z \end{align} for all $x\in X$ and $S^2_x\,,$ where $S^2_x$ is a $2$-sphere contained in the source fiber over $x\,.$ Then $\mathfrak{a}$ integrates to a unique extension \begin{align} 1\to M\to A \to G\to 1\,. \end{align} \end{theorem} \vspace{4mm}\Cref{van est morphism} also ``knows'' about some very classical results, including the Poincaré lemma, which concerns the mapping $\mathbb{R}^n\to *$ and coefficients in $\mathcal{O}$ (we do not claim it is a proof as it is surely circular, but the point is it does contain the result as a ``subtheorem''): \begin{theorem} Every closed differential form on $\mathbb{R}^n\,,$ in degree higher than $0\,,$ is exact.
\end{theorem} \begin{proof}[Proof] The LA-groupoid associated to the map $\mathbb{R}^n\to *$ is $T\mathbb{R}^n\,,$ whose cohomology is the de Rham cohomology of $\mathbb{R}^n$. By \Cref{van est morphism}, since $\mathbb{R}^n$ is contractible, this cohomology is the cohomology of the point $*\,,$ which is trivial in degrees higher than $0\,.$ \end{proof} The next result, proved by Buchdahl in~\cite{buch}, is used when applying the Penrose transform in twistor theory (see section 4 of~\cite{bailey}, as well as~\cite{witten}). Again, this result is a special case of our theorem; however, it is essentially used in the proof. \begin{theorem} Let $f:Y\to X$ be a surjective submersion of complex manifolds with $n$-connected fibers. Then the natural map $H^*(X,\mathcal{O})\to H^*(Y,f^{-1}\mathcal{O})$ is an isomorphism up to degree $n$ and is injective in degree $n+1\,.$ \end{theorem} More generally, in the previous theorem we can take local sections of any holomorphic vector bundle rather than $\mathcal{O}$ (in fact, our theorem shows that we can use sheaves more general than local sections of vector bundles, and we also determine its image in degree $n+1)$. \vspace{3mm}\\Let us remark that the van Est theorem we prove can be derived from a van Est theorem for double Lie groupoids, using the association of a double Lie groupoid to a (nice enough) map of Lie groupoids. In addition, a van Est theorem for relative cohomology (which will be defined in Part 2) should be derivable in this way. \section{Introduction to Part 3} Part 3 of this thesis is mostly disjoint from Parts 1 and 2, with the exception of the use of LA-groupoids towards the end, and some mention of the van Est map (but a deep understanding of this map is not needed). This part is largely conjectural — our aim is to understand what Morita equivalences and generalized morphisms of Lie algebroids are (and for completeness, details need to be filled in).
\vspace{3mm}\\Towards the end of Part 3 we define a category of LA-groupoids (with $n$-equivalences), which contains both Lie groupoids and Lie algebroids as objects and provides a ``wormhole'' between them. In particular, this category unifies the category of differentiable stacks, the category of Lie algebroids and the homotopy category. This category has the following remarkable properties: \begin{enumerate} \item Manifolds $X\,,Y$ are Morita equivalent in this category if and only if they are diffeomorphic. \item More generally, Lie groupoids $G\,,H$ are Morita equivalent in this category if and only if they are Morita equivalent in the category of Lie groupoids. \item Two tangent bundles $TX\,,TY$ are Morita equivalent in this category if and only if $X\,,Y$ are homotopy equivalent. \item If $G\rightrightarrows G^0$ is source $n$-connected, there is a canonical $n$-equivalence $\mathcal{I}:\mathfrak{g}\to G\,.$ This morphism can be interpreted as the integration functor. If $n=\infty$ this is a Morita equivalence (here $\mathfrak{g}$ is the Lie algebroid of $G$). \item With regard to the previous point, the van Est map is given by a pullback: $\mathcal{I}^*:H^*(G,M)\to H^*(\mathfrak{g},M)\,.$ \item A Lie algebroid $\mathfrak{g}$ is integrable if and only if it is 1-equivalent to some Lie groupoid $G\,.$ \item This category induces a natural notion of smooth homotopy equivalence of Lie groupoids. \item A finite dimensional classifying space of a Lie groupoid $G$ is just a manifold $BG$ which is homotopy equivalent to $G$ (this may remind the reader of Grothendieck's homotopy hypothesis).
\item If $EG\to BG$ is finite dimensional, then the Atiyah algebroid $\text{at}(EG)$ is Morita equivalent to $G\,.$ In particular, if $G$ is discrete then $\text{at}(EG)=T(BG)\,,$ therefore $G$ is Morita equivalent to $T(BG)\,.$ \item Due to points 2, 3 and 4, we get the following result: suppose that $\mathcal{P}$ assigns to each LA-groupoid some property (eg.\ its cohomology) that is invariant under $n$-equivalence. Then if $X$ is homotopy equivalent to $Y\,,$ we get that $\mathcal{P}(TX)\cong \mathcal{P}(TY)\,;$ if $H\rightrightarrows H^0$ is Morita equivalent to $K\rightrightarrows K^0\,,$ we get that $\mathcal{P}(H)\cong \mathcal{P}(K)\,;$ and if $G\rightrightarrows G^0$ is source $n$-connected, we get that $\mathcal{P}(G)\cong \mathcal{P}(\mathfrak{g})\,.$ \end{enumerate} \vspace{3mm}To understand this we should start at the beginning. We make the following observation. The Lie algebroid cohomology of tangent bundles is invariant under homotopy equivalence of the underlying manifolds (for coefficients in $\mathcal{O}\,,$ this cohomology is the de Rham cohomology; for coefficients in $\mathbb{Z}$ it is the singular cohomology). Many interesting properties of tangent bundles are preserved under homotopy equivalences, and in addition we have the following result, due to Arias Abad, Quintero V\'{e}lez and V\'{e}lez V\'{a}squez: \begin{theorem}(see~\cite{Quintero}, Corollary 5.1) If $f:X\to Y$ is a smooth homotopy equivalence, then the pullback functor \begin{equation} f^*:\textbf{Loc}_{\infty}(Y)\to \textbf{Loc}_{\infty}(X) \end{equation} is a quasi-equivalence (ie. it is a quasi-equivalence between the dg categories of $\infty$-local systems). \end{theorem} We give a definition of Morita equivalence of Lie algebroids which says that two tangent bundles are Morita equivalent if and only if their underlying manifolds are homotopy equivalent. Before doing this, we motivate our definition. Once again, the definition we give is still conjectural.
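\vspace{3mm}As a first consistency check on such a definition (this is not part of the formal development), consider contractible manifolds: any definition under which $T\mathbb{R}^n$ and $T*$ become Morita equivalent must identify their cohomologies, and by the Poincaré lemma this indeed holds:
\begin{align*}
H^k(T\mathbb{R}^n,\mathcal{O})\cong H^k_{\text{dR}}(\mathbb{R}^n)\cong H^k(T*,\mathcal{O})=
\begin{cases}
\mathbb{R}\,, & k=0\,,\\
0\,, & k>0\,.
\end{cases}
\end{align*}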
\vspace{3mm}\\One advantage of our formulation is that it offers an explanation for why certain cohomologies of manifolds are invariant under homotopy equivalences and others are not: a cohomology should be invariant under homotopy equivalences if it can be interpreted as Lie algebroid cohomology of the tangent bundle. In fact, we do a little more: similarly to how topological spaces have $n$-equivalences (which in nice topological categories give homotopy equivalences when $n=\infty$), we find that Lie algebroids also come with a natural notion of $n$-equivalences. \vspace{3mm}\\Next we discuss how gerbes can be interpreted as generalized morphisms into a higher category: namely, $n$-gerbes are morphisms into the category of Lie $n$-groupoids (or $n$-fold groupoids; this may be related to Example 6.16 in~\cite{Nuiten}). For $n=1\,,$ this gives the usual interpretation of principal $G$-bundles as generalized morphisms into $G\,.$ This suggests an interpretation of the van Est map (for coefficients in a representation) as a map taking higher (generalized) morphisms of Lie groupoids into higher (generalized) morphisms of Lie algebroids. \vspace{3mm}\\The fact that tangent bundles are Morita equivalent (under our definition) if and only if their underlying manifolds are homotopy equivalent, together with the fact that the van Est map can be interpreted as a map taking higher generalized morphisms to higher generalized morphisms, suggests the existence of a smooth version of the homotopy hypothesis (see~\cite{groth}), roughly (and up to equivalence) \begin{equation} \text{Higher Lie groupoids}\cong\text{Higher Lie algebroids.} \end{equation} After this discussion, we move on to discuss differentiability of generalized morphisms.
Already, the fact that Morita maps of Lie groupoids don't differentiate to (anything like) equivalences of Lie algebroids suggests that this isn't always possible: $\text{Pair}(X)$ is Morita equivalent to $*\,,$ however the number of points in the leaf space of its Lie algebroid $TX$ can be arbitrarily large, whereas the leaf space of $T*$ consists of a single point. Furthermore, letting $X=S^1\,,$ $TS^1$ has nontrivial representations, whereas $T*$ does not. We suggest the situation is even more dire. We prove the following no-go result: \begin{theorem}\label{strong} There is no notion of generalized morphisms between Lie algebroids with the following properties: \begin{enumerate} \item Associated to any generalized morphism $P:G\to H$ is a generalized morphism $dP:\mathfrak{g}\to\mathfrak{h}\,.$ \item Generalized morphisms $\mathfrak{g}\to\mathfrak{h}$ induce pullback maps $H^{1}(\mathfrak{h})\to H^{1}(\mathfrak{g})\,,$ in such a way that the pullback of a trivial class is trivial.\footnote{One may assume that we are taking cohomology of cocycles valued in $\mathbb{R}\,.$} \item Pullback of cohomology commutes with the van Est map. That is, the following diagram is commutative: \begin{equation} \begin{tikzcd} {H^{1}(H)} \arrow[r, "P^*"] \arrow[d, "VE"'] & {H^{1}(G)} \arrow[d, "VE"] \\ {H^{1}(\mathfrak{h})} \arrow[r, "dP^*"] & {H^{1}(\mathfrak{g})} \end{tikzcd} \end{equation} \end{enumerate} \end{theorem} Our resolution is that property 1 should not be required to hold, and we discuss, from the perspective of the smooth version of the homotopy hypothesis, why some generalized morphisms cannot be differentiated. We then move on to discuss in which sense certain generalized morphisms can be differentiated. What we find is that a generalized morphism $G\to H$ can be differentiated ``up to degree $n$'' if $H$ has $n$-connected source fibers (this is sufficient but not necessary; of course, strict homomorphisms can always be differentiated).
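\vspace{3mm}To substantiate the claim above that $TS^1$ has nontrivial representations (a sketch, anticipating the definition of Lie algebroid representations given in Part 1): for each $\lambda\in\mathbb{R}\,,$ one can let $TS^1$ act on the trivial line bundle $S^1{\mathop{\times}}\mathbb{R}$ by
\begin{align*}
L_{f\partial_\theta}(s)=f\,\partial_\theta s+\lambda f s\,,\qquad f\,,s\in C^\infty(S^1)\,.
\end{align*}
One checks directly that this satisfies the axioms of a representation, and the corresponding flat line bundle has holonomy $e^{-2\pi\lambda}$ around $S^1\,,$ so distinct values of $\lambda$ give non-isomorphic representations. By contrast, a representation of $T*=0$ is nothing more than a vector space, which carries no analogous invariant.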
\vspace{3mm}\\The smooth homotopy hypothesis suggests that there should be morphisms between Lie algebroids and Lie groupoids. Towards the end of chapter 2 (Part 3) we will make this precise: we define \textit{exotic} morphisms between Lie algebroids and Lie groupoids by using the embeddings of both Lie algebroids and Lie groupoids into LA-groupoids, and this will give us another interpretation of the van Est map. We prove the following result: \begin{theorem} Let $G\rightrightarrows G^0$ be a source $n$-connected Lie groupoid. Then $\mathfrak{g}\to G^0$ is $n$-equivalent to $G\rightrightarrows G^0\,,$ via a canonical $n$-generalized morphism (if $n=\infty$ they are Morita equivalent). \end{theorem} In fact, what we show is that invariance of LA-groupoid cohomology under $n$-equivalence (which we expect to be true up to degree $n$) implies the van Est isomorphism theorem and the homotopy invariance of de Rham cohomology and singular cohomology, and of course the Morita invariance of Lie groupoid cohomology. \section{Structure of Thesis} Part 1 of the thesis is based on a paper published by the author. It describes a generalization of the van Est map and isomorphism theorem to cohomology with more general coefficients than functions valued in a representation. This part contains definitions and concepts that will be needed in part 2; the crucial material for understanding part 2 is contained in chapters 1 and 2 of part 1. In addition, this part describes a canonical module associated to a complex manifold with divisor, and a section called ``Integration by Prequantization'' which describes an alternative way of integrating Lie algebroid cohomology classes that doesn't use the van Est map. Furthermore, section 4 contains a list of applications of the van Est map. \vspace{3mm}\\Part 2 of this thesis contains the full generalization of the van Est map to stacks.
This part is more conceptual in nature and contains a few more concepts from category theory than part 1, mostly in the context of our definitions of fibrations, cofibrations and foliations. If the intention is to understand the van Est map, sections 2.2, 3.5, 4.2 and 5 can be skipped, though this is not necessarily recommended. These sections describe a higher category of double groupoids, which is necessary for our discussion of cofibrations and leads us to showing that a nice enough map between Lie groupoids can be replaced with a cofibration. Section 5, in particular, contains a description of the equivalence of two LA-groupoids associated to a groupoid homomorphism, as well as an explanation of how the normal bundle of a subgroupoid can be thought of as an LA-groupoid. Section 5 also describes how every wide subgroupoid of a Lie groupoid comes with a canonical representation. \vspace{3mm}\\Part 3 of this thesis is largely speculative; in it we give conjectural definitions of Morita equivalences and generalized morphisms of Lie algebroids. It also describes a link between Lie's second theorem and the van Est isomorphism theorem, which involves viewing gerbes as generalized morphisms in a higher category. We discuss connections with the homotopy hypothesis and we prove a no-go theorem regarding generalized morphisms of Lie algebroids. We define generalized morphisms between Lie algebroids and Lie groupoids, and a category of LA-groupoids (with $n$-equivalences). We conclude this part with conjectures and future directions. \vspace{3mm}\\We recommend reading the introduction first (at least the parts relevant to the reader's interest). Some of the material in this thesis will be known to experts (perhaps unbeknownst to the author), and with regard to this material no originality is claimed.
\part{Van Est Theory With Coefficients in a Module} \chapter{Basics of Simplicial Manifolds, Stacks, Sheaves} \subsection*{Outline of Part 1} Part 1 of this thesis is based on a paper written by the author, see~\cite{Lackman}. This part is organized as follows: section $1$ is a brief review of simplicial manifolds, Lie groupoids and stacks, but it is important for setting up notation, results and constructions which will be used in the next sections. This section contains all of the results about stacks which are needed for this part. Section $2$ contains a review and a generalization of the Chevalley-Eilenberg complex. Section $3$ is where we define the van Est map and prove the main theorem of part 1. The next sections of part 1 concern applications of the main theorem, various new constructions of geometric structures involving Lie groupoids, and examples. \\\\$\mathbf{Notation:}$ For the rest of the paper, we use the following notation: given a smooth (or holomorphic) surjective submersion $\pi:M\to X\,,$ we let $\mathcal{O}(M)$ denote the sheaf of smooth (or holomorphic) sections of $\pi\,.$ \section{Simplicial Manifolds} In this section we briefly review simplicial manifolds, sheaves on simplicial manifolds and their cohomology. \begin{definition}Let $Z^\bullet$ be a (semi) simplicial manifold, ie. a contravariant functor from the (semi) simplex category to the category of manifolds. 
A sheaf $\mathcal{S}_\bullet$ on $Z^\bullet$ is a sheaf $\mathcal{S}_n$ on $Z^n\,,$ for all $n\ge 0\,,$ such that for each morphism $f:[n]\to[m]$ we have a morphism $\mathcal{S}(f):Z(f)^{-1}\mathcal{S}_n\to \mathcal{S}_m\,,$ and such that $\mathcal{S}(f\circ g)=\mathcal{S}(f)\circ \mathcal{S}(g)\,.$ A morphism between sheaves $\mathcal{S}_\bullet$ and $\mathcal{G}_\bullet$ on $Z^\bullet$ is a morphism of sheaves $u^n:\mathcal{S}_n\to \mathcal{G}_n$ for each $n\ge 0$ such that for $f:[n]\to [m]$ we have that $u^m\circ \mathcal{S}(f)=\mathcal{G}(f)\circ u^n\,.$ We let $\textrm{Sh}(Z^\bullet)$ denote the category of sheaves on $Z^\bullet\,.$ $\blacksquare$\end{definition} \begin{definition} Given a sheaf $\mathcal{S}_\bullet$ on a (semi) simplicial manifold $Z^\bullet\,,$ we define (writing $\mathcal{Z}^n\,,\mathcal{B}^n$ to avoid confusion with the spaces $Z^n$) $\mathcal{Z}^n:=\text{Ker}\big[\Gamma(\mathcal{S}_n)\xrightarrow{\delta^*} \Gamma(\mathcal{S}_{n+1})\big]\,,\,\mathcal{B}^n:=\text{Im}\big[\Gamma(\mathcal{S}_{n-1})\xrightarrow{\delta^*}\Gamma(\mathcal{S}_n)\big]\,,$ where $\delta^*$ is the alternating sum of the face maps, ie. \begin{align*} \delta^*=\sum_{i=0}^n(-1)^id_{n,i}^{-1}\,, \end{align*} where $d_{n,i}:Z^n\to Z^{n-1}$ is the $i^{\textrm{th}}$ face map. We then define the naive cohomology (see \cite{nlab}) \begin{equation*}H^n_{\text{naive}}(Z^\bullet\,,\mathcal{S}_\bullet):=\mathcal{Z}^n/\mathcal{B}^n\,.
\end{equation*} $\blacksquare$\end{definition} \begin{definition}[see~\cite{Deligne}]\label{cohomology simplicial} Given a (semi) simplicial manifold $Z^\bullet\,,$ $\textrm{Sh}(Z^\bullet)$ has enough injectives, and we define \begin{equation*} H^n({Z^\bullet\,,\mathcal{S}_\bullet}):=R^n\Gamma_\text{inv}(\mathcal{S}_\bullet)\,, \end{equation*} where $\Gamma_\text{inv}:\textrm{Sh}(Z^\bullet)\to \mathbf{Ab}$ is given by $\mathcal{S}_\bullet \mapsto\text{Ker}\big[\Gamma(\mathcal{S}_0)\xrightarrow{\delta^*} \Gamma(\mathcal{S}_1)\big]\,.$ $\blacksquare$\end{definition} \theoremstyle{definition}\begin{rmk}\label{acyclic resolution} As usual, in addition to injective resolutions one can use acyclic resolutions to compute cohomology. \end{rmk} \theoremstyle{definition}\begin{rmk}[see \cite{Deligne}]\label{double complex} A convenient way to compute $H^*({Z^\bullet\,,\mathcal{S}_\bullet})$ is to choose a resolution \begin{align*} 0\xrightarrow{}\mathcal{S}_\bullet\xrightarrow{}\mathcal{A}^0_\bullet\xrightarrow{\partial_\bullet^0}\mathcal{A}^1_\bullet\xrightarrow{\partial_\bullet^1}\cdots \end{align*} such that \begin{align*} 0\xrightarrow{}\mathcal{S}_n\xrightarrow{}\mathcal{A}^0_n\xrightarrow{\partial_n^0}\mathcal{A}^1_n\xrightarrow{\partial_n^1}\cdots \end{align*} is an acyclic resolution of $\mathcal{S}_n\,,$ for all $n\ge 0\,,$ and then take the cohomology of the total complex of the double complex $C^p_q=\Gamma(\mathcal{A}^p_q)\,,$ with differentials $\delta^*$ and $\partial^p_q\,.$ \end{rmk} \\The following theorem is a well-known consequence of the Grothendieck spectral sequence: \begin{theorem}[Leray Spectral Sequence]\label{spectral} Let $f:X^\bullet\to Y^\bullet$ be a morphism of simplicial topological spaces, and let $\mathcal{S}_\bullet$ be a sheaf on $X^\bullet\,.$ Then there is a spectral sequence $E^{pq}_*\,,$ called the Leray spectral sequence, such that $E^{pq}_2=H^p(Y^\bullet,R^qf_*(\mathcal{S}_\bullet))$ and such that \begin{align*} E^{pq}_2\Rightarrow
H^{p+q}(X^\bullet,\mathcal{S}_\bullet)\,. \end{align*} \end{theorem} \subsection{Stacks} Here we briefly review the theory of differentiable stacks. A differentiable stack is in particular a category, and first we will define the objects of the category, and then the morphisms. All manifolds and maps can be taken to be in the smooth or holomorphic categories. The following definitions can be found in~\cite{Kai}. \begin{definition} Let $G\rightrightarrows G^0$ be a Lie groupoid. A $G$-principal bundle (or principal $G$-bundle) is a manifold $P$ together with a surjective submersion $P\overset{\pi}{\to}M$ and a map $P\overset{\rho}{\to} G^0\,,$ called the moment map, such that there is a right $G$-action on $P\,,$ ie. a map $P\sideset{_{\rho}}{_{t}}{\mathop{\times}} G\to P\,,$ denoted $(p,g)\mapsto p\cdot g\,,$ such that \begin{itemize} \item $\pi(p\cdot g)=\pi(p)$ \item $\rho(p\cdot g)=s(g)$ \item $(p\cdot g_1)\cdot g_2=p\cdot(g_1g_2)$ \end{itemize} and such that \begin{align*} P\sideset{_{\rho}}{_{t}}{\mathop{\times}} G\to P\sideset{_{\pi}}{_{\pi}}{\mathop{\times}} P\,,\;(p,g)\mapsto (p,p\cdot g) \end{align*} is a diffeomorphism. $\blacksquare$\end{definition} \begin{definition} A morphism between $G$-principal bundles $P\to M$ and $Q\to N$ is given by a commutative diagram of smooth maps \[ \begin{tikzcd} P\arrow{r}{\phi}\arrow{d} & Q\arrow{d} \\M\arrow{r} & N \end{tikzcd} \] such that $\phi(p\cdot g)=\phi(p)\cdot g\,.$ In particular this implies that $\rho\circ\phi(p)=\rho(p)\,.$ $\blacksquare$\end{definition} \begin{definition} Let $G\rightrightarrows G^0$ be a Lie groupoid. Then we define $[G^0/G]$ to be the category of $G$-principal bundles, together with its natural functor to the category of manifolds (which takes a $G$-principal bundle to its base manifold). We call $[G^0/G]$ a (differentiable or holomorphic) stack. 
$\blacksquare$\end{definition} \theoremstyle{definition}\begin{rmk} Given a Lie groupoid $G\rightrightarrows G^0\,,$ there is a canonical Grothendieck topology on $[G^0/G]\,,$ hence we can talk about sheaves on stacks and their cohomology. What is most important to know for the next sections is that a sheaf on $[G^0/G]$ is in particular a contravariant functor \[F:[G^0/G]\to \mathbf{Ab}\,. \] See Section~\ref{appendix:Cohomology of Sheaves on Stacks} for details. \end{rmk} \subsection{Groupoid Modules} We now define Lie groupoid modules. Their importance is due to the fact that these are the structures which differentiate to representations; they will be one of the main objects we study in this paper. \begin{definition} Let $X$ be a manifold. A family of groups over $X$ is a Lie groupoid $M\rightrightarrows X$ such that the source and target maps are equal. A family of groups will be called a family of abelian groups if the multiplication on $M$ induces the structure of an abelian group on its source fibers, ie. if $s(a)=s(b)$ then $m(a,b)=m(b,a)\,.$ $\blacksquare$\end{definition} \theoremstyle{definition}\begin{exmp}\label{trivial} Let $A$ be an abelian Lie group and let $X$ be a manifold. Then $X{\mathop{\times}} A$ is naturally a family of abelian groups, with the source and target maps being the projection onto the first factor $p_1:X{\mathop{\times}} A\to X\,.$ This will be called a trivial family of abelian groups, and will be denoted $A_X\,.$ \end{exmp} \theoremstyle{definition}\begin{exmp}\label{foag} One way of constructing families of abelian groups is as follows: Let $A$ be an abelian group, and let $\text{Aut}(A)$ be its automorphism group. 
Then to any principal $\text{Aut}(A)$-bundle $P$ there is associated a canonical family of abelian groups: the fiber bundle with fiber $A$ associated to $P\,.$ Families of abelian groups constructed in this way are locally trivial in the sense that locally they are isomorphic to the trivial family of abelian groups given by $A_{\mathbb{R}^n}\,,$ for some $n$ (compare this with vector bundles). \end{exmp} \begin{definition}(see \cite{Tu}) Let $G\rightrightarrows G^0$ be a Lie groupoid. A $G$-module $M$ is a family of abelian groups together with an action of $G$ on $M$ such that for $a\,,b\in G^0\,,$ $G(a,b):M_a\to M_b$ acts by homomorphisms $($here $G(a,b)$ is the set of morphisms with source $a$ and target $b)\,.$ If $M$ is a vector bundle\footnote{Here we are implicitly using the fact that the forgetful functor from the category of finite dimensional vector spaces to the category of simply connected abelian Lie groups is an equivalence of categories.}, $M$ will be called a representation of $G\,.$ $\blacksquare$\end{definition} \theoremstyle{definition}\begin{exmp} Let $G\rightrightarrows G^0$ be a groupoid and let $A$ be an abelian group. Then $A_{G^0}$ is a family of abelian groups (see Example~\ref{trivial}), and it is a $G$-module with the trivial action, that is, $g\in G(x,y)$ acts by $g\cdot (x,a)=(y,a)\,.$ We will call this a trivial $G$-module. \end{exmp} \theoremstyle{definition}\begin{exmp}\label{SL2} Let $G=SL(2,\mathbb{Z})\,,$ which is the mapping class group of the torus. Every $T^2$-bundle over $S^1$ is isomorphic to one with transition functions in $SL(2,\mathbb{Z})\,,$ with the standard open cover of $S^1$ using two open sets. All of these are naturally $\Pi_1(S^1)$-modules since $SL(2,\mathbb{Z})$ is discrete. In particular, the Heisenberg manifold is a $\Pi_1(S^1)$-module. Explicitly, consider the matrix \begin{align*} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\in SL(2,\mathbb{Z})\,.
\end{align*} This matrix defines a map from $T^2\to T^2\,,$ and it corresponds to a Dehn twist. The total space of the corresponding $T^2$-bundle is diffeomorphic to the Heisenberg manifold $H_M$, which is the quotient of the Heisenberg group by the right action of the integral Heisenberg subgroup on itself, ie. we make the identification \begin{align*} \begin{pmatrix} 1 & a & c\\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}\sim \begin{pmatrix} 1 & a+n & c+k+am\\ 0 & 1 & b+m \\ 0 & 0 & 1 \end{pmatrix}\,, \end{align*} where $a\,,b\,,c\in\mathbb{R}$ and $n\,,m\,,k\in\mathbb{Z}\,.$ The projection onto $S^1$ is given by mapping to $b\,.$ \\\\The fiberwise product associated to the bundle $H_M\to S^1$ is given by \begin{align*} \begin{pmatrix} 1 & a & c\\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}\cdot \begin{pmatrix} 1 & a' & c'\\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}= \begin{pmatrix} 1 & a+a' & c+c'\\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}\,. \end{align*} See Example~\ref{module example} for more. \end{exmp} \begin{definition} Let $M\,,N$ be $G$-modules. A morphism $f:M\to N$ is a morphism of the underlying groupoids such that if $s(g)=s(m)\,,$ then $f(g\cdot m)=g\cdot f(m)\,.$ $\blacksquare$\end{definition} \begin{proposition} Let $M\to X$ be a family of abelian groups. 
Then $H^1(X,\mathcal{O}(M))$ classifies principal $M$-bundles over $X$ for which $\rho=\pi\,.$ \end{proposition} Before concluding this section we will make a remark on notation: \theoremstyle{definition}\begin{rmk}Given a family of abelian groups $E\overset{\pi}{\to} Y\,,$ we can form its sheaf of sections, which as previously stated we denote by $\mathcal{O}(E)\,.$ In addition, given a map $f:X\to Y$ we get a family of abelian groups on $X\,,$ given by $f^*E=X{\mathop{\times}}_Y E\,.$ \end{rmk} \subsection{Sheaves on Lie Groupoids and Stacks} In this section we discuss the relationship between sheaves on $[G^0/G]\,,$ sheaves on $\mathbf{B}^\bullet G$ and $G$-modules ($\mathbf{B}^\bullet G$ is the nerve of $G\,,$ see appendix~\ref{appendix:Cohomology of Sheaves on Stacks} for more). \subsubsection{Sheaves: Lie Groupoids to Stacks}\label{sheaves on stacks} Here we discuss how to obtain a sheaf on the stack $[G^0/G]$ from a $G$-module. \\\\Let $M$ be a $G$-module for $G\rightrightarrows G^0\,.$ We obtain a sheaf on $[G^0/G]$ as follows: consider the object of $[G^0/G]$ given by \[ \begin{tikzcd} P \arrow{r}{\rho}\arrow{d}{\pi} & G^0 \\ X \end{tikzcd} \] We can form the action groupoid $G\ltimes P$ and consider the $(G\ltimes P)$-module given by $\rho^*M\,.$ To $P$ we assign the abelian group $\Gamma_\text{inv}(\rho^*\mathcal{O}(M))$ (ie. the sections invariant under the $G\ltimes P$ action). To a morphism between objects of $[G^0/G]$ the functor just assigns the set-theoretic pullback. This defines a sheaf on $[G^0/G]\,,$ denoted $\mathcal{O}(M)_{[G^0/G]}\,.$ \subsubsection{Sheaves: Stacks to Lie Groupoids}\label{stacks to lie groupoids} Here we discuss how to obtain a sheaf on $\mathbf{B}^\bullet G$ from a sheaf on $[G^0/G]\,,$ and we define the cohomology of a groupoid with coefficients taking values in a module. 
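For orientation, it may help to keep in mind the concrete complex that this cohomology recovers when the relevant sheaves are acyclic (a standard sketch; compare Remark~\ref{double complex}): cochains in degree $n$ are smooth maps $c:\mathbf{B}^nG\to M$ satisfying $\rho(c(g_1,\ldots,g_n))=t(g_1)\,,$ and the differential is the alternating sum of pullbacks along the face maps of the nerve:
\begin{align*}
(\delta^* c)(g_1,\ldots,g_{n+1})=g_1\cdot c(g_2,\ldots,g_{n+1})+\sum_{i=1}^{n}(-1)^i c(g_1,\ldots,g_ig_{i+1},\ldots,g_{n+1})+(-1)^{n+1}c(g_1,\ldots,g_n)\,.
\end{align*}
For $n=1$ the cocycle condition reads $c(g_1g_2)=g_1\cdot c(g_2)+c(g_1)\,,$ ie. a $1$-cocycle is a smooth crossed homomorphism valued in $M\,.$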
\\\\Let $G\rightrightarrows G^0$ be a Lie groupoid and let $\mathcal{S}$ be a sheaf on $[G^0/G]\,.$ Consider the object of $[G^0/G]$ given by \[ \begin{tikzcd} P \arrow{r}{\rho}\arrow{d}{\pi} & G^0 \\ X \end{tikzcd} \] We can associate to each open set $U\subset X$ the object of $[G^0/G]$ given by \[ \begin{tikzcd} P\vert_U \arrow{r}{\rho}\arrow{d}{\pi} & G^0 \\ U \end{tikzcd} \] We get a sheaf on $X$ by assigning to $U\subset X$ the abelian group $\mathcal{S}(P\vert_U)\,.$ \\\\Now for all $n\ge 0\,,$ the spaces $\mathbf{B}^{n} G$ are canonically identified with $G$-principal bundles, by identifying $\mathbf{B}^{n} G$ with the object of $[G^0/G]$ given by \begin{equation}\label{canonical object} \begin{tikzcd}[row sep=large, column sep=large] \mathbf{B}^{n+1} G \arrow{r}{d_{1,1}\circ p_{n+1}}\arrow{d}{d_{n+1,n+1}} & G^0 \\ \mathbf{B}^{n} G \end{tikzcd} \end{equation} where $p_{n+1}$ is the projection onto the $(n+1)^{\text{th}}$ factor. Hence given a sheaf $\mathcal{S}$ on $[G^0/G]$ we obtain a sheaf on $\mathbf{B}^{n} G\,,$ for all $n\ge 0\,,$ denoted $\mathcal{S}(\mathbf{B}^{n} G)\,,$ and together these form a sheaf on $B^\bullet G\,.$ Furthermore, given a $G$-module $M$ we have that \begin{align*} \mathcal{O}(M)_{[G^0/G]}({G^0})\cong \mathcal{O}(M)\,. \end{align*} Moreover, we have the following lemma: \begin{lemma} Let $M$ be a $G$-module. Then the sheaf on $\mathbf{B}^\bullet G$ given by $\mathcal{O}(M)_{[G^0/G]}(\mathbf{B}^{\bullet}G)\,,$ is isomorphic to the sheaf of sections of the simplicial family of abelian groups given by \begin{align*} \mathbf{B}^\bullet(G\ltimes M)\to \mathbf{B}^\bullet G\,. \end{align*} \end{lemma} \theoremstyle{definition}\begin{definition} Let $G\rightrightarrows G^0$ be a Lie groupoid and let $M$ be a $G$-module. We define \[ H^*(G,M):=H^*(\mathbf{B}^{\bullet}G,\mathcal{O}(M)_{[G^0/G]}(\mathbf{B}^{\bullet}G))\,. 
\] $\blacksquare$\end{definition} \theoremstyle{definition}\begin{rmk}[See~\cite{Kai}]\label{morita inv} Let $G\rightrightarrows G^0$ and $K\rightrightarrows K^0$ be Lie groupoids and let $\phi:G\to K$ be a Morita morphism (see \Cref{moritamapjeff}). Then the pullback $\phi^*$ induces an equivalence of categories \[ \phi^*:[K^0/K]\to[G^0/G]\,. \] Furthermore, let $\mathcal{S}$ be a sheaf on $[G^0/G]\,.$ Then the pushforward sheaf $\phi_*\mathcal{S}:=\mathcal{S}\circ\phi^*$ is a sheaf on $[K^0/K]$ and we have a natural isomorphism \[ H^*(\mathbf{B}^{\bullet}G,\mathcal{S}(\mathbf{B}^{\bullet}G))\cong H^*(\mathbf{B}^{\bullet}K,\phi_*\mathcal{S}(\mathbf{B}^{\bullet}K))\,. \] \end{rmk} \subsection{Godement Construction for Sheaves on Stacks} Here we discuss a version of the Godement resolution for sheaves on stacks, and we show how it can be used to compute cohomology. \begin{definition} Let $G\rightrightarrows G^0$ be a Lie groupoid and let $\mathcal{S}$ be a sheaf on $[G^0/G]\,.$ We define the Godement resolution of $\mathcal{S}$ as follows: Consider the object of $[G^0/G]$ given by \[ \begin{tikzcd} P \arrow{r}{\rho}\arrow{d}{\pi} & G^0 \\ X \end{tikzcd} \] and consider the corresponding sheaf on $X$ (see~\Cref{stacks to lie groupoids}), denoted by $\mathcal{S}(X)\,.$ We can then consider, for each $n\ge 0\,,$ the $n^\text{th}$ sheaf in the Godement resolution of $\mathcal{S}(X)\,,$ denoted $\mathbb{G}^n(\mathcal{S}(X))\,,$ and to $P$ we assign the abelian group $\Gamma(\mathbb{G}^n(\mathcal{S}(X)))\,.$ These define sheaves on $[G^0/G]$ which we denote by $\mathbb{G}^n(\mathcal{S})\,.$ $\blacksquare$\end{definition} For a sheaf $\mathcal{S}$ on $[G^0/G]$ we obtain a resolution by using $\mathbb{G}^\bullet(\mathcal{S})$ in the following way: \begin{align*} \mathcal{S}\xhookrightarrow{}\mathbb{G}^0(\mathcal{S})\to \mathbb{G}^1(\mathcal{S})\to\cdots\,.
\end{align*} The sheaves $\mathbb{G}^n(\mathcal{S})$ are not in general acyclic on stacks; however, the sheaves $\mathbb{G}^n(\mathcal{S})(\mathbf{B}^m G)$ are acyclic on $\mathbf{B}^m G$ and hence can be used to compute cohomology (see Theorem~\ref{stack groupoid cohomology} and Remark~\ref{double complex}). \subsection{Examples}\label{important examples} The constructions in the previous sections will be important in Section~\ref{van Est map} when defining the van Est map; it is crucial that modules define sheaves on stacks in order to use the Morita invariance of cohomology. Here we exhibit examples of the constructions from the previous sections which will be used in Section~\ref{van Est map}. \begin{proposition}\label{example surj sub}Let $f:Y\to X$ be a surjective submersion, and consider the submersion groupoid $Y{\mathop{\times}}_f Y\rightrightarrows Y\,.$ This groupoid is Morita equivalent (see \Cref{moritaequivjeff}) to the trivial $X\rightrightarrows X$ groupoid, hence their associated stacks, $[Y/(Y{\mathop{\times}}_f Y)]$ and $[X/X]\,,$ are categorically equivalent. \end{proposition} We now describe the functor $f^*:[X/X]\to[Y/(Y{\mathop{\times}}_f Y)]$ which gives this equivalence: \\\\An $X\rightrightarrows X$ principal bundle is given by a manifold $N$ together with a map $\rho:N\to X$ (the $\pi$ map here is the identity map $N\to N)\,.$ To such an object, we let $f^*(N,\rho)=N\sideset{_\rho}{_{f}}{\mathop{\times}} Y\,.$ This is a $Y{\mathop{\times}}_f Y$ principal bundle in the following way: \[\begin{tikzcd} N\sideset{_\rho}{_{f}}{\mathop{\times}} Y \arrow{r}{\rho = p_2}\arrow{d}{\pi = p_1} & Y \\ N \end{tikzcd} \] The functor $f^*$ is an equivalence of stacks. \\\\Now suppose we have a sheaf $\mathcal{S}$ on the stack $[Y/(Y{\mathop{\times}}_f Y)]\,.$ Then we obtain a sheaf on $[X/X]$ by using the pushforward of $f\,,$ ie.
to an object $(N,\rho)\in [X/X]$ we associate the abelian group $f_*\mathcal{S}(N,\rho):=\mathcal{S}(f^*(N,\rho))\,.$ We then obtain a sheaf on the simplicial space $\mathbf{B}^\bullet(X\rightrightarrows X)$ as follows: First note that $\mathbf{B}^n(X\rightrightarrows X)=X$ for all $n\ge 0\,,$ so the sheaves are the same on all levels. Now let $U\xhookrightarrow{\iota} X$ be open. Then $(U,\iota)\in [X/X]\,,$ so to this object we assign the abelian group $f_*\mathcal{S}(U,\iota)\,.$ \begin{proposition}\label{key example}Suppose $M$ is a $Y{\mathop{\times}}_f Y$-module and $f$ has a section $\sigma:X\to Y\,.$ We then obtain a sheaf (and its associated Godement sheaves) on $[X/X]\,,$ and in particular we obtain a sheaf (and its associated Godement sheaves) on $X\in [X/X]\,,$ which we describe as follows: \end{proposition} We use the notation in Proposition~\ref{example surj sub}. We have that \[f^*(U,\iota)=\begin{tikzcd} U_{\iota}{\mathop{\times}}_f Y \arrow{r}{\rho= p_2}\arrow{d}{\pi= p_1} & Y \\ U \end{tikzcd} =\begin{tikzcd} Y\vert_U \arrow{r}{}\arrow{d}{f} & Y \\ U \end{tikzcd} \] We then see that $\,\Gamma_\text{inv}(\rho^*\mathcal{O}(M))\cong \Gamma(\sigma\vert_U^*\mathcal{O}(M))\,,$ hence the sheaf we get on $X$ is simply $\sigma^*\mathcal{O}(M)\,.$ Furthermore, the sheaves we get on $X$ by applying the Godement construction to $\mathcal{O}(M)_{[Y/(Y{\mathop{\times}}_f Y)]}$ are simply $\mathbb{G}^\bullet(\sigma^*\mathcal{O}(M))\,.$ \begin{lemma}\label{acyclic edit} Suppose we have a sheaf $\mathcal{S}$ on the stack $[Y/(Y{\mathop{\times}}_f Y)]\,.$ Then the associated Godement sheaves $\mathbb{G}^\bullet(\mathcal{S})$ are acyclic. \end{lemma} \begin{proof} This follows from the fact that $[Y/(Y{\mathop{\times}}_f Y)]$ is Morita equivalent to $[X/X]\,,$ since cohomology is invariant under Morita equivalence of stacks and the Godement sheaves on a manifold are acyclic.
\end{proof} \theoremstyle{definition}\begin{rmk} Let $X$ be a manifold and let $X\rightrightarrows X$ be the trivial Lie groupoid. Let $\mathcal{S}$ be a sheaf on $[X/X]\,.$ Then we recover the usual cohomology: \[ H^*(\mathbf{B}^\bullet X,\mathcal{S}(\mathbf{B}^\bullet X)) =H^*(X,\mathcal{S}(X))\,. \] This will be important in computing the cohomology of submersion groupoids, since they are Morita equivalent to trivial groupoids. \end{rmk} \chapter{Chevalley-Eilenberg Complex for Modules}\label{Chevalley} In this section we review the Chevalley-Eilenberg complex associated to a representation of a Lie algebroid. Then we generalize Lie algebroid representations to Lie algebroid modules and define their Chevalley-Eilenberg complex. These will be used in Section~\ref{van Est map}. \section{Lie Algebroid Representations} \begin{definition}Let $\mathfrak{g}\overset{\pi}{\to} Y$ be a Lie algebroid, with anchor map $\alpha:\mathfrak{g}\to TY\,,$ and recall that $\mathcal{O}(\mathfrak{g})$ denotes the sheaf of sections of $\mathfrak{g}\overset{\pi}{\to} Y\,.$ A representation of $\mathfrak{g}$ is a vector bundle $E\to Y$ together with a map\begin{align*} \mathcal{O}(\mathfrak{g})\otimes\mathcal{O}(E)\to\mathcal{O}(E)\,,\,X\otimes s\mapsto L_X(s) \end{align*} such that for all open sets $\,U\subset Y$ and for all $f\in \mathcal{O}_Y(U)\,,X\in \mathcal{O}(\mathfrak{g})(U)\,, s\in \mathcal{O}(E)(U)\,,$ we have that \begin{enumerate} \item $L_{fX}(s)=fL_X(s)\,,$ \item $L_X(fs)=fL_X(s)+(\alpha(X)f)s\,,$ \item $L_{[X,Y]}(s)=[L_X,L_Y](s)\,.$ \end{enumerate} $\blacksquare$\end{definition} \begin{definition} Let $E$ be a representation of $\mathfrak{g}\,.$ Let $\mathcal{C}^n(\mathfrak{g},E)$ denote the sheaf of $E$-valued $n$-forms on $\mathfrak{g}\,,$ ie. 
the sheaf of sections of $\Lambda^n \mathfrak{g}^*\otimes E\,.$ There is a canonical differential\footnote{Meaning in particular that $d_\text{CE}^2=0\,.$} \begin{align*} d_\text{CE}:\mathcal{C}^n(\mathfrak{g},E)\to \mathcal{C}^{n+1}(\mathfrak{g},E)\,,\,n\ge0 \end{align*} defined as follows: let $\omega\in \mathcal{C}^n(\mathfrak{g},E)(V) $ for some open set $V\,.$ Then for $X_1\,,\ldots \,, X_{n+1}\in \pi^{-1}(m)\,,\,m\in V\,,$ choose local extensions $\mathbf{X}_1\,,\ldots\,,\mathbf{X}_{n+1}$ of these vectors, ie. choose \begin{align*} p\mapsto\mathbf{X}_1(p)\,,\ldots\,,p\mapsto\mathbf{X}_{n+1}(p)\in \mathcal{O}(\mathfrak{g})(U)\,, \end{align*} for some open set $U$ such that $m\in U\subset V\,,$ and such that $\mathbf{X}_i(m)=X_i$ for all $1\le i\le n+1\,.$ Then let \begin{align*} d_{\text{CE}}\omega(X_1\,,\ldots\,,X_{n+1})&=\sum_{i<j}(-1)^{i+j-1}\omega([\mathbf{X}_i,\mathbf{X}_j],\mathbf{X}_1,\ldots ,\hat{\mathbf{X}}_i,\ldots , \hat{\mathbf{X}}_j,\ldots , \mathbf{X}_{n+1}) \vert_{p=m} \\& +\sum_{i=1}^{n+1}(-1)^i L_{\mathbf{X}_i}(\omega(\mathbf{X}_1,\ldots , \hat{\mathbf{X}}_i ,\ldots , \mathbf{X}_{n+1}))\vert_{p=m}\,. \end{align*} This is well-defined and independent of the chosen extensions. $\blacksquare$\end{definition} \subsection{Lie Algebroid Modules} We will now define Lie algebroid modules and define their Chevalley-Eilenberg complexes; these will look like the Chevalley-Eilenberg complexes associated to representations, except possibly in degree zero (though representations will be seen to be special cases of Lie algebroid modules).
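For later comparison with the module complex defined below, it may help to record the lowest nontrivial instance of the differential defined above; with the signs exactly as written, for $\omega\in \mathcal{C}^1(\mathfrak{g},E)(V)$ one finds
\begin{align*}
d_\text{CE}\omega(X_1,X_2)=\omega([\mathbf{X}_1,\mathbf{X}_2])\vert_{p=m}-L_{\mathbf{X}_1}(\omega(\mathbf{X}_2))\vert_{p=m}+L_{\mathbf{X}_2}(\omega(\mathbf{X}_1))\vert_{p=m}\,.
\end{align*}
This differs from the perhaps more common convention $d\omega(X_1,X_2)=L_{X_1}(\omega(X_2))-L_{X_2}(\omega(X_1))-\omega([X_1,X_2])$ by an overall sign; since the discrepancy is an overall sign, it affects neither $d_\text{CE}^2=0$ nor any of the cohomology groups considered below.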
\begin{definition} Let $\mathfrak{g}\to Y$ be a Lie algebroid, and let $M$ be a family of abelian groups, with Lie algebroid $\mathfrak{m}$ and exponential map $\exp:\mathfrak{m}\to M\,.$\footnote{Note that $\mathfrak{m}$ is just a vector bundle and the exponential map is given by the fiberwise exponential map taking a Lie algebra to its corresponding Lie group.} Then a $\mathfrak{g}$-module structure on $M$ is given by the following: a $\mathfrak{g}$-representation structure on $\mathfrak{m}$ $($ie. a morphism $\mathcal{O}(\mathfrak{g})\otimes\mathcal{O}(\mathfrak{m})\to\mathcal{O}(\mathfrak{m})\,,\,X\otimes s\mapsto L_X(s))\,,$ together with a morphism of sheaves \begin{align*} \mathcal{O}(\mathfrak{g})\otimes_{\mathbb{Z}}\mathcal{O}(M)\to\mathcal{O}(\mathfrak{m})\,,\,X\otimes_{\mathbb{Z}} s\mapsto \tilde{L}_X(s) \end{align*} such that for all open sets $\,U\subset Y$ and for all $f\in \mathcal{O}_Y(U)\,,X\in \mathcal{O}(\mathfrak{g})(U)\,, s\in \mathcal{O}(M)(U)\,,\sigma\in \mathcal{O}(\mathfrak{m})(U)\,, $ we have that \begin{enumerate} \item $\tilde{L}_{fX}(s)=f\tilde{L}_X(s)\,,$ \item $\tilde{L}_{[X,Y]}(s)=(L_X\tilde{L}_Y-L_Y\tilde{L}_X)(s)\,,$ \item $\tilde{L}_X(\exp{\sigma})=L_X(\sigma)\,.$ \end{enumerate} If $M$ is endowed with such a structure we call it a $\mathfrak{g}$-module. $\blacksquare$\end{definition} \begin{definition}\label{forms}Let $\mathfrak{g}\to X$ be a Lie algebroid and let $M$ be a $\mathfrak{g}$-module. We then define sheaves on $X\,,$ called ``sheaves of $M$-valued forms'', as follows: let \begin{align*} &\mathcal{C}^0(\mathfrak{g},M)=\mathcal{O}(M)\,, \\& \mathcal{C}^n(\mathfrak{g},M)=\mathcal{O}(\Lambda^n \mathfrak{g}^*\otimes \mathfrak{m})\,,\;n> 0\,.
\end{align*} Furthermore, for $s\in\mathcal{O}(M)(U)\,,$ we define $d_\text{CE}\log s$ by $d_\text{CE}\log s(X):=\tilde{L}_X(s)\,.$ We then have a cochain complex of sheaves given by \begin{equation}\label{CE} \mathcal{C}^0(\mathfrak{g},M)\xrightarrow{d_{\text{CE}}\log}\mathcal{C}^1(\mathfrak{g},M)\xrightarrow{d_\text{CE}}\mathcal{C}^2(\mathfrak{g},M)\xrightarrow{d_\text{CE}}\cdots\,. \end{equation} $\blacksquare$\end{definition} \begin{definition} The sheaf cohomology of the above complex of sheaves is denoted by $H^*(\mathfrak{g},M)\,.$ $\blacksquare$\end{definition} \begin{definition} Let $M\,,N$ be $\mathfrak{g}$-modules. A morphism $f:M\to N$ is a morphism of the underlying families of abelian groups such that the induced map $df:\mathfrak{m}\to\mathfrak{n}$ satisfies $\tilde{L}_X(f\circ s)=df\circ \tilde{L}_X(s)\,,$ for all local sections $X$ of $\mathfrak{g}$ and $s$ of $M\,.$ $\blacksquare$\end{definition} \theoremstyle{definition}\begin{exmp} Here we will show that the notion of $\mathfrak{g}$-modules naturally extends the notion of $\mathfrak{g}$-representations. Let $E$ be a representation of $\mathfrak{g}\,.$ By thinking of the fibers of $E$ as abelian groups, $E$ defines a family of abelian groups. The exponential map $E\overset{\exp}{\to} E$ is the identity, hence its kernel is the zero section and $E$ naturally defines a $\mathfrak{g}$-module where $d_\text{CE}\log =d_\text{CE}\,.$ So the definition of a $\mathfrak{g}$-module and its Chevalley-Eilenberg complex recovers the definition of a $\mathfrak{g}$-representation and its Chevalley-Eilenberg complex given by Crainic in~\cite{Crainic}.
\end{exmp} \theoremstyle{definition}\begin{exmp}\label{representation} The group of isomorphism classes of $\mathfrak{g}$-representations on complex line bundles is isomorphic to $H^1(\mathfrak{g},\mathbb{C}^*_M)\,,$ where $\mathbb{C}^*_M$ is the $\mathfrak{g}$-module for which $\tilde{L}_X s=\mathrm{dlog}\,s(\alpha(X))\,,$ for a local section $s$ of $\mathbb{C}^*_M\,.$ The corresponding statement holds for real line bundles, with $\mathbb{C}^*_M$ replaced by $\mathbb{R}^*_M\,.$ \end{exmp} \theoremstyle{definition}\begin{exmp}(Deligne Complex) Let $X$ be a manifold and $\mathfrak{g}=TX\,.$ Then letting $M=\mathbb{C}^*_X\,,$ we have that $\mathfrak{m}=\mathbb{C}_X$ naturally carries a representation of $TX\,,$ ie. where the differentials are the de Rham differentials. Letting $\exp:\mathfrak{m}\to M$ be the usual exponential map, it follows that $M$ is a $\mathfrak{g}$-module, and in fact the complex~\eqref{CE} in this case is known as the Deligne complex. \end{exmp} For a less familiar example we have the following: \theoremstyle{definition}\begin{exmp}\label{module example} Consider the space $S^1$ and the group $\mathbb{Z}/2\mathbb{Z}=\{-1,1\}\,.$ This group is contained in the automorphism groups of $\mathbb{Z}\,,\mathbb{R}$ and $\mathbb{R}/\mathbb{Z}\,,$ hence we get nontrivial families of abelian groups over $S^1$ as follows (compare with Example~\ref{foag}): Let $A$ be any of the groups $\mathbb{Z}\,,\mathbb{R}\,,\mathbb{R}/\mathbb{Z}\,.$ Now cover $S^1$ in the standard way using two open sets $U_0\,,U_1\,,$ and glue together the bundles $U_0{\mathop{\times}} A\,,U_1{\mathop{\times}} A$ with the transition functions $-1\,,1$ on the two connected components of $U_0\cap U_1\,.$ Denote these families of abelian groups by $\tilde{\mathbb{Z}}\,,\tilde{\mathbb{R}}\,,\widetilde{\mathbb{R}/\mathbb{Z}}$ respectively. The space $\tilde{\mathbb{R}}$ is topologically the M\"{o}bius strip, and $\widetilde{\mathbb{R}/\mathbb{Z}}$ is topologically the Klein bottle.
\\\\Next, there is a canonical flat connection on these bundles of groups which is compatible with the fiberwise group structures, hence these families of abelian groups are modules for $\Pi_1(S^1)\,,$ the fundamental groupoid of $S^1\,.$ \\\\Furthermore, the $TS^1$-representation associated to the $TS^1$-module of $\tilde{\mathbb{Z}}$ is the rank $0$ vector bundle over $S^1\,,$ and the $TS^1$-representations associated to the $TS^1$-modules of $\tilde{\mathbb{R}}\,,\,\widetilde{\mathbb{R}/\mathbb{Z}}$ are isomorphic to the M\"{o}bius strip, ie. the line bundle obtained by gluing together $U_0{\mathop{\times}}\mathbb{R}\,,\,U_1{\mathop{\times}} \mathbb{R}$ using the same transition functions as discussed above. The Chevalley-Eilenberg differential, on each local trivialization $U_0{\mathop{\times}}\mathbb{R}\,,\,U_1{\mathop{\times}}\mathbb{R}\,,$ is just the de Rham differential. \\\\The cohomology groups are $H^i(TS^1,\tilde{\mathbb{R}})=0$ in all degrees, and \[ H^i(TS^1,\widetilde{\mathbb{R}/\mathbb{Z}})=\begin{cases} \mathbb{Z}/2\mathbb{Z}, & \text{if}\ i= 0 \\ 0, & \text{if}\ i>0\,. \end{cases} \] \end{exmp} \begin{theorem}\label{g-module} Suppose $G\rightrightarrows G^0$ is a Lie groupoid. There is a natural functor \begin{align*} F:G\text{-modules}\to \mathfrak{g}\text{-modules}\,. \end{align*} Furthermore, if $G$ is source simply connected then this functor restricts to an equivalence of categories on the subcategories of $G$-modules and $\mathfrak{g}$-modules for which $\exp:\mathfrak{m}\to M$ is a surjective submersion.
\end{theorem} \begin{proof}$\mathbf{}$ \\\\$\mathbf{1.}$ For the first part, let $M$ be a $G$-module and for $x\in G^0\,$ let $\gamma:(-1,1)\to G(x,\cdot)$ be a curve in the source fiber such that $\gamma(0)=\text{Id}(x)\,.$ We define \begin{align*} \tilde{L}_{\dot{\gamma}(0)}r:=\frac{d}{d\epsilon}\Big\vert_{\epsilon=0}r(x)^{-1}[\gamma(\epsilon)^{-1}\cdot r(t(\gamma(\epsilon)))] \end{align*} for a local section $r$ of $\mathcal{O}(M)\,.$ One can check that this is well-defined and that property 1 is satisfied. Now note that the action of $G$ on $M$ induces a linear action of $G$ on $\mathfrak{m}\,,$ and we get a $\mathfrak{g}$-representation on $\mathfrak{m}$ by defining \begin{align*} L_{\dot{\gamma}(0)}\sigma:=\frac{d}{d\epsilon}\Big\vert_{\epsilon=0}\sigma(x)^{-1}[\gamma(\epsilon)^{-1}\cdot \sigma(t(\gamma(\epsilon)))] \end{align*} for a local section $\sigma$ of $\mathfrak{m}\,.$ With these definitions property 2 is satisfied. \\\\Now note that this action of $G$ on $\mathfrak{m}$ preserves the kernel of $\exp:\mathfrak{m}\to M\,.$ Let $\sigma$ be a local section of $\mathfrak{m}$ around $x$ such that $\exp{\sigma}=e\,.$ Then \begin{align*} L_{\dot{\gamma}(0)}\sigma=\frac{d}{d\epsilon}\Big\vert_{\epsilon=0}\sigma(x)^{-1}[\gamma(\epsilon)^{-1}\cdot \sigma(t(\gamma(\epsilon)))]\,, \end{align*} and since the $G$-action preserves the kernel of $\exp\,,$ which is discrete, we have that \begin{align*} \gamma(\epsilon)^{-1}\cdot \sigma(t(\gamma(\epsilon)))=\sigma(x)\,, \end{align*} hence $L_{\dot{\gamma}(0)}(\sigma)=0\,,$ therefore $L(\sigma)=\tilde{L}(\exp{\sigma})=0\,,$ from which property 3 follows. Since it can be seen that morphisms of $G$-modules induce morphisms of $\mathfrak{g}$-modules, this completes the proof. \\\\$\mathbf{2.}$ For the second part, let $M$ be a $\mathfrak{g}$-module for which $\exp:\mathfrak{m}\to M$ is a surjective submersion, and suppose $G$ is source simply connected. 
Then in particular $\mathfrak{m}$ is a $\mathfrak{g}$-representation, and it is known that for source simply connected groupoids $\text{Rep}(G)\cong\text{Rep}(\mathfrak{g})$ (by Lie's second theorem for Lie groupoids), hence $\mathfrak{m}$ integrates to a $G$-representation. Property 3 implies that the $G$-action preserves the kernel of $\exp\,,$ hence the action of $G$ on $\mathfrak{m}$ descends to $M\,.$ More explicitly: let $g\in G(x,y)$ and let $m\in M_x\,,$ ie. the fiber of $M$ over $x\,.$ Let $\tilde{m}\in\mathfrak{m}_x$ be such that $\exp{\tilde{m}}=m$ and define \begin{align*} g\cdot m=\exp{(g\cdot{\tilde{m}})}\,. \end{align*} This is well-defined since the action of $G$ preserves the kernel of $\exp\,.$ Hence the functor is essentially surjective. Now again using the fact that for source simply connected groupoids $\text{Rep}(G)\cong\text{Rep}(\mathfrak{g})\,,$ it follows that the functor is fully faithful, and since it is also essentially surjective, this completes the proof. \end{proof} Note that by composing with $\exp$ we obtain a natural map \begin{align*} V_{\mathfrak{g}}^*\overset{\exp}{\to} V_{G}^*\,. \end{align*} Claim: this map is an isomorphism. To see this, let $f\in V^*_{\mathfrak{g}}$ and suppose $\exp{f}=0\,.$ Let $v\in V$ and $t\in \mathbb{R}\,.$ Then $\exp{f(tv)}=0\implies \exp{tf(v)}=0.$ But taking $t\ne 0$ small enough that $tf(v)$ lies in a neighborhood $U$ of $0$ on which $\exp$ is injective, we get that $tf(v)=0\,,$ hence $f(v)=0\,.$ So $\exp$ is injective.
\\\\Now let $f\in V^*_G\,.$ Let $v\in V\,.$ We must have $f(0)=0\,,$ so the map $[0,1]\to G$ defined by $t\mapsto f(tv)$ defines a path in $G$ starting at $0\,;$ therefore, since $\mathfrak{g}$ is a covering space of $G\,,$ by the unique path lifting property there is a unique path $X_v:[0,1]\to\mathfrak{g}$ starting at $0$ which lifts this path. So define $X\in V^*_{\mathfrak{g}}$ by $X(v)=X_v(1)\,.$ Then $X(tv)=X_{tv}(1)$ and this defines a path lifting $f(tv)$ and starting at $0\,,$ hence $X(tv)=tX(v)\,.$ Similarly, $X(a+b)=X_{a+b}(1)\,,$ and an analogous uniqueness-of-lifts argument gives $X(a+b)=X(a)+X(b)\,.$ Hence $X\in V^*_{\mathfrak{g}}$ with $\exp{X}=f\,,$ so $\exp$ is surjective, proving the claim. \chapter{Van Est Map}\label{van Est map} \section{Definition} In this section we will discuss a generalization of the van Est map that appears in~\cite{Crainic}. It will be a map $H^*(G,M)\to H^*(\mathfrak{g},M)\,,$ for a $G$-module $M\,,$ which will be an isomorphism up to a certain degree which depends on the connectivity of the source fibers of $G\,.$ Let us remark that one doesn't need to know the details of the map to understand the main theorem of this paper, \Cref{van Est image}, and if the reader wishes they may skip ahead to~\Cref{main theorem section}. \\\\Given a groupoid $G\rightrightarrows G^0\,,$ $G$ naturally defines a principal $G$-bundle with the moment map given by $t\,,$ ie. the action is given by the left multiplication of $G$ on itself. To be consistent with the previous notation, we denote the resulting action groupoid by $G\ltimes G$ and note that it is isomorphic to $G_s{\mathop{\times}}_{s} G\rightrightarrows G\,,$ hence it is Morita equivalent to the trivial groupoid $G^0\rightrightarrows G^0\,.$
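Concretely, with one common convention for action groupoids, an arrow of $G\ltimes G$ is a pair $(h,g)$ with $s(h)=t(g)\,,$ acting by $g\mapsto hg\,,$ and the structure maps together with the isomorphism just mentioned can be written as
\begin{align*}
&\text{source}(h,g)=g\,,\qquad \text{target}(h,g)=hg\,,\qquad (h',hg)\circ (h,g)=(h'h,g)\,,\\
&G\ltimes G\to G_s{\mathop{\times}}_{s} G\,,\qquad (h,g)\mapsto (hg,g)\,,
\end{align*}
where an arrow $(a,b)\in G_s{\mathop{\times}}_{s} G$ goes from $b$ to $a\,.$ In particular the orbits of $G\ltimes G$ are exactly the $s$-fibers and the orbit space is $G^0\,,$ consistent with the Morita equivalence with $G^0\rightrightarrows G^0\,.$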
\begin{definition} We let $\mathbf{E}^\bullet G:=\mathbf{B}^\bullet(G\ltimes G)\,.$ The simplicial map $\kappa:\mathbf{E}^\bullet G\to\mathbf{B}^\bullet G$ induced by the groupoid morphism $\pi_1:G\ltimes G\to G$ makes $\mathbf{E}^\bullet G$ into a simplicial principal $G$-bundle, and the fiber above $(g^1\,,\ldots\,,g^n)\in \mathbf{B}^n G$ is $t^{-1}(s(g^n))\,.$ $\blacksquare$\end{definition} \theoremstyle{definition}\begin{rmk}Note that $G\ltimes G$ is a groupoid object in $[G^0/G]\,,$ and as a principal $G$-bundle it is the canonical object associated to $G$ via diagram~\ref{canonical object}. \end{rmk} \begin{definition} Let $\Omega_{\kappa\;q}^p(\kappa^*M)$ denote the sheaf of sections of $\Lambda^p T^*_\kappa\, \mathbf{E}^qG\,(\kappa^*M)\,,$ the $\kappa$-foliated covectors taking values in $\kappa^*M\,.$ Succinctly, from $M$ we get a family of abelian groups on $\mathbf{B}^q G\,,$ given by \[ \mathbf{B}^q(G\ltimes M)\to \mathbf{B}^qG\,, \] which we denote by $M_{\mathbf{B}^qG}\,;$ we then have that $\kappa^*M_{\mathbf{B}^q G}$ is a module for the submersion groupoid \[\mathbf{E}^qG\times_{\mathbf{B}^qG} \mathbf{E}^qG\rightrightarrows \mathbf{E}^qG\,, \] and $\Omega_{\kappa\;q}^p(\kappa^*M)$ is the sheaf of $\kappa^*M_{\mathbf{B}^q G}$-valued $p$-forms associated to the corresponding Lie algebroid module (see~\Cref{forms}).
Explicitly, $\Omega_{\kappa\;q}^0(\kappa^*M)$ is the sheaf of sections of~\footnote{Really, we should write $\kappa^*M_{\mathbf{B}^q G}\to\mathbf{E}^q G\,,$ but for notational simplicity we suppress the subscript.} \[ \kappa^*M\to \mathbf{E}^q G\,, \] and for $p\ge 1\,,$ $\Omega_{\kappa\;q}^p(\kappa^*M)$ is the sheaf of $\kappa$-foliated $p$-forms taking values in $\kappa^*\mathfrak{m}\,.$ There is a differential \begin{equation} \Omega_{\kappa\;q}^0(\kappa^*M)\xrightarrow{\text{dlog}} \Omega_{\kappa\;q}^1(\kappa^*M) \end{equation} which is defined as follows: let $U$ be an open set in $\mathbf{E}^q G$ and let $X_g$ be a vector tangent to a $\kappa$-fiber at a point $g\in U\,.$ Let $f\in \Omega_{\kappa\;q}^0(\kappa^*M)(U)\,.$ Define $\text{dlog}\,f\in \Omega_{\kappa\;q}^1(\kappa^*M)(U)$ by \[ \text{dlog}\,f\,(X_g)=f(g)^{-1}f_*(X_g)\,, \] where in order to identify this with a point in $\kappa^*\mathfrak{m}_g$ we are implicitly using the canonical identification of $\kappa^*M_g$ with $\kappa^*M_{g'}$ for any two points $g\,,g'$ in the same $\kappa$-fiber (here $\kappa^*M_g$ is the fiber of $\kappa^*M$ over $g)\,.$ We also use the canonical identification of $\kappa^*\mathfrak{m}_g$ with $\kappa^*\mathfrak{m}_{g'}$ for any two points $g\,,g'$ in the same $\kappa$-fiber to define the differentials for $p>0:$ \begin{equation} \Omega_{\kappa\;q}^p(\kappa^*M)\overset{\text{d}}{\to}\Omega_{\kappa\;q}^{p+1}(\kappa^*M)\,. \end{equation} $\blacksquare$\end{definition} $\mathbf{}$ \begin{theorem} There is an isomorphism \begin{equation*} Q:H^*(\mathbf{E}^\bullet G\,,\kappa^{-1}\mathcal{O}(M))\to H^*(\mathfrak{g}\,,M)\,.
\end{equation*} \end{theorem} \begin{proof}Form the sheaf $\kappa^{-1}\mathcal{O}(M)$ on $\mathbf{E}^\bullet G\,.$ This sheaf is not in general a sheaf on the stack $[G/(G\ltimes G)]\,,$ but it is resolved by sheaves on stacks in the following way:\footnote{These are sheaves on stacks because \begin{equation*} \Lambda^n T^*_\kappa G(\kappa^*M)\cong\Lambda^n T^*_tG(t^*M) \end{equation*} (where $t$ is the target map), and the latter are $(G\ltimes G)$-modules.} \begin{equation}\label{deligne1} \kappa^{-1}\mathcal{O}(M)_\bullet\xhookrightarrow{} \mathcal{O}(\kappa^*M)_\bullet\overset{\text{dlog}}{\to}\Omega_{\kappa\,\bullet}^1(\kappa^*M) \overset{\text{d}}{\to}\Omega_{\kappa\,\bullet}^2(\kappa^*M)\to\cdots\,. \end{equation} We let, for all $q\ge 0\,,$ \begin{align}\label{C}C^\bullet_q:=\mathcal{O}(\kappa^*M)_q\to\Omega_{\kappa\,q}^1(\kappa^*M) \to\Omega_{\kappa\,q}^2(\kappa^*M)\to\cdots\,. \end{align} We can then take the Godement resolution of $C^\bullet_q$ and get a double complex for each $q\ge 0:$ \begin{align*} C^\bullet_q\xhookrightarrow{} \mathbb{G}^0(C^\bullet_q)\to \mathbb{G}^1(C^\bullet_q)\to\cdots\,. \end{align*} \\All of the sheaves $\mathbb{G}^p(C^r_\bullet)$ are sheaves on stacks, and it follows from \Cref{acyclic edit}, together with the fact that $G\ltimes G\rightrightarrows G$ is Morita equivalent to a submersion groupoid, that these sheaves are acyclic (as sheaves on stacks). Hence $\mathbb{G}^p(C^r_\bullet)$ can be used to compute cohomology (see Remark~\ref{acyclic resolution}) and we have that \begin{align*} H^*(\mathbf{E}^\bullet G, \kappa^{-1}\mathcal{O}(M))\cong H^*(\text{Tot}(\Gamma_\text{inv}(\mathbb{G}^\bullet (C^\bullet_0))))\,. \end{align*} Now we have that \begin{align*} \Gamma_\text{inv}(\mathbb{G}^p(C^q_0))=\Gamma(\mathbb{G}^p(i^*C^q_0))\,, \end{align*} where $i:G^0\to G$ is the identity bisection.
Since all of the differentials in~\eqref{C} preserve invariant sections, they descend to differentials on $\Gamma(\mathbb{G}^\bullet(i^*C^\bullet_0))\,,$ hence \begin{align*} H^*(\text{Tot}(\Gamma_\text{inv}(\mathbb{G}^\bullet (C^\bullet_0))))\cong H^*(\mathfrak{g},M)\,. \end{align*} \end{proof} \theoremstyle{definition}\begin{definition}\label{van Est} Let $M$ be a $G$-module. We define a map \begin{equation*} H^*(G\,,M)\to H^*(\mathfrak{g}\,,M) \end{equation*} given by the composition \begin{equation*} H^*(G\,,M)\overset{\kappa^{-1}}{\to} H^*(\mathbf{E}^\bullet G\,,\kappa^{-1}\mathcal{O}(M)_\bullet)\overset{Q}{\to}H^*(\mathfrak{g}\,,M)\,. \end{equation*} This is the van Est map; we denote it by $VE\,.$ \end{definition} \theoremstyle{definition}\begin{rmk} Taking $M=\mathbb{C}_{G^0}$ as a smooth family of abelian groups with the trivial $G$-action, the sheaves in the resolution of $\kappa^{-1}\mathcal{O}(M)$ in~\eqref{deligne1} are already acyclic (as sheaves on stacks). Hence our map coincides with the map \begin{align*} H^*(G\,,M)&\to H^*(\mathbf{E}^\bullet G\,,\kappa^{-1}\mathcal{O}(M)_\bullet) \\&=H^*(\Gamma_\text{inv}(\mathcal{O}(\kappa^*M)_0)\to\Gamma_\text{inv}(\Omega_{\kappa\,0}^1(M)) \to\cdots) \to H^*(\mathfrak{g}\,,M)\,, \end{align*} which is the van Est map as described in \cite{Meinrenken}. \end{rmk} \subsection{Van Est for Truncated Cohomology} In order to emphasize geometry on the space of morphisms rather than on the space of objects we perform a truncation. That is, we truncate the contribution of $G^0$ to $H^*(G,M)$ by considering instead the cohomology \[ H^*(\mathbf{B}^\bullet G,\mathcal{O}(M)^0_\bullet)\,, \] where $\mathcal{O}(M)^{0}_n=\mathcal{O}(M)_n$ for all $n\ge 1\,,$ and where $\mathcal{O}(M)^{0}_0$ is the trivial sheaf on $G^0\,,$ ie. the sheaf that assigns to every open set the group containing only the identity. \\\\We define \[ H^*_0(G,M):=H^{*+1}(\mathbf{B}^\bullet G,\mathcal{O}(M)^{0}_\bullet)\,.
\] There is a canonical map \[ H^*_0(G,M)\to H^{*+1}(G,M) \] induced by the morphism of sheaves on $\mathbf{B}^\bullet G$ given by $\mathcal{O}(M)^{0}_\bullet\xhookrightarrow{}\mathcal{O}(M)_\bullet\,.$ Similarly, we can truncate the contribution of $M$ to $H^*(\mathfrak{g},M)$ by considering instead \[ H^*_0(\mathfrak{g},M):=H^{*+1}(0\to\mathcal{C}^1(\mathfrak{g},M)\to\mathcal{C}^2(\mathfrak{g},M)\to\cdots)\,. \] Then in like manner there is a canonical map \[ H^*_0(\mathfrak{g},M)\to H^{*+1}(\mathfrak{g},M) \] induced by the inclusion of the truncated complex into the full one. \begin{theorem}\label{canonical edit} There is a canonical map $VE_0$ lifting $VE\,,$ ie. such that the following diagram commutes: \begin{equation}\label{diagram} \begin{tikzcd} H^*_0(G,M) \arrow{d}{}\arrow{r}{VE_0} & H^*_0(\mathfrak{g},M)\arrow{d} \\ H^{*+1}(G,M)\arrow{r}{VE} & H^{*+1}(\mathfrak{g},M) \end{tikzcd} \end{equation} where $H_0^*$ denotes truncated cohomology, as defined above. \end{theorem} \begin{proof} Consider the ``normalized'' sheaf on $\mathbf{E}^\bullet G$ given by $\widehat{\mathcal{O}(\kappa^*M)}_\bullet\,,$ where $\widehat{\mathcal{O}(\kappa^*M)}_n=\mathcal{O}(\kappa^*M)_n$ for $n\ge 1\,,$ and where $\widehat{\mathcal{O}(\kappa^*M)}_0$ is the subsheaf of $\mathcal{O}(\kappa^*M)_0$ consisting of local sections which are the identity on $G^0\,.$ Then \[ H^*(\mathbf{E}^\bullet G,\mathbb{G}^n(\widehat{\mathcal{O}(\kappa^*M)}_\bullet))=0\,, \] and in particular, $\mathbb{G}^n(\widehat{\mathcal{O}(\kappa^*M)}_\bullet)$ is acyclic.
$\mathbf{}$ \\\\Now consider the sheaf $\widehat{\kappa^{-1}\mathcal{O}(M)}_\bullet$ on $\mathbf{E}^\bullet G$ given by $\widehat{\kappa^{-1}\mathcal{O}(M)}_n=\kappa^{-1}\mathcal{O}(M)_n$ for $n\ge 1\,,$ and such that $\widehat{\kappa^{-1}\mathcal{O}(M)}_0$ is the subsheaf of $\kappa^{-1}\mathcal{O}(M)_0$ consisting of local sections which are the identity on $G^0\,.$ Then there is a canonical embedding $\kappa^{-1}\mathcal{O}(M)^0_\bullet\xhookrightarrow{}\widehat{\kappa^{-1}\mathcal{O}(M)}_\bullet\,,$ hence we get a map \begin{align*} H^*(\mathbf{B}^\bullet G,\mathcal{O}(M)^0_\bullet)\to H^*(\mathbf{E}^\bullet G,\widehat{\kappa^{-1}\mathcal{O}(M)}_\bullet)\,. \end{align*} Now we have that the following inclusion is a resolution: \begin{align*} \widehat{\kappa^{-1}\mathcal{O}(M)}_\bullet\xhookrightarrow{} \widehat{\mathcal{O}(\kappa^*M)}_\bullet\to\Omega_{\kappa\,\bullet}^1(M) \to\Omega_{\kappa\,\bullet}^2(M)\to\cdots\,. \end{align*} Then one can show that \begin{align*} & H^*(\mathbf{E}^\bullet G,\widehat{\mathcal{O}(\kappa^*M)}_\bullet\to\Omega_{\kappa\,\bullet}^1(M) \to\Omega_{\kappa\,\bullet}^2(M)\to\cdots) \\&\cong H^{*}(\mathbf{E}^\bullet G,0\to\Omega_{\kappa\,\bullet}^1(M) \to\Omega_{\kappa\,\bullet}^2(M)\to\cdots)\,, \end{align*} and since $\Omega_{\kappa\,\bullet}^1(M)\to\Omega_{\kappa\,\bullet}^2(M)\to\cdots$ is a complex of sheaves on stacks, by an argument similar to the one made when defining the van Est map in the previous section we get that \begin{align*} H^{*+1}(\mathbf{E}^\bullet G,0\to\Omega_{\kappa\,\bullet}^1(M)\to\Omega_{\kappa\,\bullet}^2(M)\to\cdots)\cong H^*_0(\mathfrak{g},M)\,.
\end{align*} Then $VE_0$ is the map \begin{align*} & H^*_0(G,M)=H^{*+1}(\mathbf{B}^\bullet G,\mathcal{O}(M)^0_\bullet)\xrightarrow{\kappa^{-1}} H^{*+1}(\mathbf{E}^\bullet G,\kappa^{-1}\mathcal{O}(M)^0_\bullet) \\&\to H^{*+1}(\mathbf{E}^\bullet G,\widehat{\kappa^{-1}\mathcal{O}(M)}_\bullet) \,{\cong}\, H^{*+1}(\mathbf{E}^\bullet G,\widehat{\mathcal{O}(\kappa^*M)}_\bullet\to\Omega_{\kappa\,\bullet}^1(M) \to\cdots) \\&{\cong}\,H^{*+1}(\mathbf{E}^\bullet G,0\to\Omega_{\kappa\,\bullet}^1(M) \to\cdots) \xrightarrow{\cong} H^{*}_0(\mathfrak{g},M)\,. \end{align*} \end{proof} \theoremstyle{definition}\begin{rmk} The van Est map (including the truncated version) factors through a local van Est map defined on the cohomology of the local groupoid (see~\cite{Meinrenken}), ie. to compute the van Est map one can first localize the cohomology classes to a neighborhood of the identity bisection. \end{rmk} \subsection{Properties of the van Est Map} In this section we discuss some properties of the van Est map; the main results pertain to its kernel and image. \\\\Recall that given a sheaf $\mathcal{S}_\bullet$ on a (semi) simplicial space $X^\bullet\,,$ we calculate its cohomology by taking an injective resolution $0\to\mathcal{S}_\bullet\to \mathcal{I}^0_\bullet\to\mathcal{I}^1_\bullet\to\cdots$ and computing \begin{align*} H^*(\,\Gamma_\text{inv}(\mathcal{I}^0_0)\to\Gamma_\text{inv}(\mathcal{I}^1_0)\to\cdots)\,. \end{align*} By considering the natural injection $\Gamma_\text{inv}(\mathcal{I}_0^n)\hookrightarrow\Gamma(\mathcal{I}^n_0)$ we get a map \begin{equation}\label{restriction} r:H^*(X^\bullet,\mathcal{S}_\bullet)\to H^*(X^0,\mathcal{S}_0)\,. \end{equation} Similarly, for a cochain complex of abelian groups $\mathcal{A}^0\to \mathcal{A}^1\to\cdots$ there is a map \begin{align*} H^*(\mathcal{A}^0\to \mathcal{A}^1\to\cdots)\overset{r}{\to} H^*(\mathcal{A}^0)\,.
\end{align*} Using this, we have the following result, which gives an enlargement of diagram \eqref{diagram}: \begin{lemma}\label{commute} The following diagram is commutative: \[ \begin{tikzcd} H^{*}(G^0,\mathcal{O}(M))\arrow{r}{\delta^*}\arrow{d}{\parallel} &H^*_0(G,M)\arrow{r}{}\arrow{d}{VE_0} &H^{*+1}(G,M) \arrow{r}{r} \arrow{d}{VE} & H^{*+1}(G^0,\mathcal{O}(M))\arrow{d}{\parallel} \\ H^{*}(G^0,\mathcal{O}(M))\arrow{r}{d_{CE}\log}& H^*_0(\mathfrak{g},M)\arrow{r}{}& H^{*+1}(\mathfrak{g},M) \arrow{r}{r}& H^{*+1}(G^0,\mathcal{O}(M)) \end{tikzcd} \] \end{lemma} \begin{lemma}\label{kernel} Suppose that $X\overset{\pi}{\to} Y$ is a surjective submersion with $(n-1)$-connected fibers, for some $n>0\,,$ and with a section $\sigma\,.$ Consider an exact sequence of families of abelian groups on $Y$ given by \begin{align*} 0\to Z\xrightarrow{}\mathfrak{m}\xrightarrow{\exp{}}M\,. \end{align*} Let $\omega\in H^0(X,\Omega^n_{\pi}(\pi^*\mathfrak{m}))$ be closed (ie. $\omega$ is a closed, foliated $n$-form on $X$) and suppose that \begin{equation}\label{in kernel} \int_{S^n(\pi^{-1}(y))}\omega \in Z\;\;\text{ for all } y \in Y \text{ and all } S^n(\pi^{-1}(y))\,, \end{equation} where $S^n(\pi^{-1}(y))$ is an $n$-sphere contained in the fiber over $y\,.$ Let $[\omega]$ denote the class $\omega$ defines in $H^n(X,\pi^{-1}\mathcal{O}(\mathfrak{m}))\,.$ Then $\exp{[\omega]}=0\,.$ \begin{proof} From Equation~\ref{in kernel} we know that $\exp{\omega}\vert_{\pi^{-1}(y)}=0$ for each $y\in Y\,,$ therefore since the fibers of $\pi$ are $(n-1)$-connected, by Theorem~\ref{spectral theorem} we have that \begin{align*} \exp{[\omega]}=\pi^{-1}\beta \end{align*} for some $\beta\in H^n(Y,\mathcal{O}(M))\,.$ Since $\pi\circ \sigma:Y\to Y$ is the identity this implies that \begin{align*} \exp{\sigma^{-1}[\omega]}=\beta\,, \end{align*} but $\sigma^{-1}[\omega]=0$ since $\omega$ is a global foliated form.
Hence $\beta=0\,,$ and therefore $\exp{[\omega]}=0\,.$ \end{proof} \end{lemma} \begin{corollary}\label{groupoid kernel} Suppose that $G\rightrightarrows X$ is source $(n-1)$-connected for some $n>0\,.$ Consider an exact sequence of families of abelian groups on $X$ given by \begin{align*} 0\xrightarrow{} Z\xrightarrow{}\mathfrak{m}\xrightarrow{\exp{}} M\,. \end{align*} Let $\omega\in H^0(X,\mathcal{C}^n(\mathfrak{g},\mathfrak{m}))$ be closed (ie. it is a closed $n$-form in the Chevalley-Eilenberg complex) and suppose that \begin{equation}\label{in kernel 2} \int_{S^n(s^{-1}(x))}\omega \in Z\;\;\text{ for all } x \in X \text{ and all } S^n(s^{-1}(x))\,, \end{equation} where in the above we have left translated $\omega$ to a source-foliated $n$-form, and where $S^n(s^{-1}(x))$ is an $n$-sphere contained in the source fiber over $x\,.$ Let $[\omega]$ denote the class $\omega$ defines in $H^n(\mathbf{E}^\bullet G,\kappa^{-1}\mathcal{O}(\mathfrak{m}))\,.$ Then $r(\exp{([\omega])})=0\,,$ where $r$ is as in Equation~\ref{restriction}. \end{corollary} \begin{proof} This follows directly from Lemma~\ref{kernel}. \end{proof} \subsection{Main Theorem}\label{main theorem section} Before proving the main theorem of the paper, we will discuss translation of Lie algebroid objects: similarly to how one can translate Lie algebroid forms to differential forms along the source fibers, one can translate all Lie algebroid cohomology classes (eg. 1-dimensional Lie algebroid representations) to cohomology classes along the source fibers (in the case of a Lie algebroid representation, translation will result in a principal bundle with flat connection along the source fibers). We will describe it in degree $1\,;$ the translations in higher degrees work analogously. \theoremstyle{definition}\begin{definition}\label{translation} Let $G\rightrightarrows G^0$ be a Lie groupoid and let $M$ be a $G$-module.
Let $\{U_i\}_i$ be an open cover of $G^0$ and let $\{(h_{ij},\alpha_i)\}_{ij}$ represent a class in $H^1(\mathfrak{g},M)$ (here the $h_{ij}$ are sections of $M$ over $U_i\cap U_j\,,$ and the $\alpha_i$ are Lie algebroid 1-forms taking values in $\mathfrak{m}$). Then on $\mathbf{B}^1G$ we get a class in foliated cohomology (foliated with respect to the source map), ie. a class in \begin{align} H^1(\mathcal{O}(s^*M)\overset{\text{dlog}}{\to}\Omega_{s}^1(s^*M) \overset{\text{d}}{\to}\Omega_{s}^2(s^*M)\to\cdots)\,, \end{align} defined as follows: we have an open cover of $\mathbf{B}^1G$ given by $\{t^{-1}(U_i)\}_i\,.$ We then get a principal $s^*M$-bundle over $\mathbf{B}^1G$ given by transition functions $t^*h_{ij}$ defined as follows: for $g\in t^{-1}(U_{ij})$ let \begin{align} t^*h_{ij}(g):=g^{-1}\cdot h_{ij}(t(g))\,. \end{align}Similarly, we define foliated 1-forms $t^*\alpha_i$ on $t^{-1}(U_i)$ as follows: for $g\in t^{-1}(U_i)$ with $s(g)=x\,,$ and for $V_g\in T_g(s^{-1}(x))\,,$ let \begin{align*} t^*\alpha_i(V_g):=g^{-1}\cdot\alpha_{i}(R_{g^{-1}}V_g)\,, \end{align*} where $R_{g^{-1}}$ denotes translation by $g^{-1}\,.$ Then the desired class is given by the cocycle $\{(t^*h_{ij},t^*\alpha_i)\}_{ij}$ on the open cover $\{t^{-1}(U_i)\}_i\,.$ For each $x\in G^0\,,$ by restricting the cocycle to $s^{-1}(x)$ we also get a class in \begin{align*} H^1(s^{-1}(x),\mathcal{O}(M_x)\xrightarrow{\text{dlog}}\Omega^1\otimes\mathfrak{m}_x\to\Omega^2\otimes\mathfrak{m}_x\to\cdots)\,.
\end{align*} \\Similarly, we can translate any class $\alpha\in H^{\bullet}(\mathfrak{g},M)$ to a class in \begin{align}\label{class} H^{\bullet}(\mathcal{O}(s^*M)\overset{\text{dlog}}{\to}\Omega_{s}^1(s^*M) \overset{\text{d}}{\to}\Omega_{s}^2(s^*M)\to\cdots)\,, \end{align} and we denote this class by $t^*\alpha\,.$ Furthermore, for each $x\in G^0$ we obtain a class in \begin{align*} H^\bullet(s^{-1}(x),\mathcal{O}(M_x)\xrightarrow{\text{dlog}}\Omega^1\otimes\mathfrak{m}_x\to\Omega^2\otimes\mathfrak{m}_x\to\cdots)\,, \end{align*} and we denote this class by $t_x^*\alpha\,.$ \\\\Alternatively, given a class $\alpha\in H_0^{\bullet}(\mathfrak{g},M)\,,$ we can translate this to a class in \begin{align} H^{\bullet+1}(0\to\Omega_{s}^1(s^*M) \overset{\text{d}}{\to}\Omega_{s}^2(s^*M)\to\cdots)\,, \end{align} and we denote this class by $t_0^*\alpha\,.$ In this case the notation $t^*\alpha$ will be used to mean the class in~\eqref{class} obtained by first viewing $\alpha$ as a class in $H^\bullet(\mathfrak{g},M)\,.$ $\blacksquare$ \end{definition} \begin{proposition} With the previous definition, we have the following commutative diagram: \[ \begin{tikzcd} H^\bullet_0(\mathfrak{g},M) \arrow{d}{}\arrow{r}{t_0^*} & H^{\bullet+1}(0\to\Omega_{s}^1(s^*M) \overset{\text{d}}{\to}\cdots) \arrow{d} \\ H^{\bullet+1}(\mathfrak{g},M)\arrow{r}{t^*} & H^{\bullet+1}(\mathcal{O}(s^*M)\to\Omega_{s}^1(s^*M) \overset{\text{d}}{\to}\cdots) \end{tikzcd} \] $\blacksquare$ \end{proposition} The importance of the previous definition is due to the fact that given a class $\alpha\in H^{\bullet}(\mathfrak{g},M)\,,$ the class $t^*\alpha$ defines a class in $H^{\bullet}(\mathbf{E}^\bullet G\,,\kappa^{-1}\mathcal{O}(M))$ (or if $\alpha\in H^\bullet_0(\mathfrak{g},M)\,,$ then $t_0^*\alpha$ defines a class in $H^{\bullet}(\mathbf{E}^\bullet G\,,\widehat{\kappa^{-1}\mathcal{O}(M)})\,,$ see~\Cref{van Est map}).
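For orientation, it may help to specialize Definition~\ref{translation} to the case where $G$ is a Lie group: then $G^0$ is a single point $x\,,$ the unique source fiber is $G$ itself, and, taking $M=\mathbb{R}$ with $\mathfrak{m}=\mathbb{R}\,,$ $\exp=\mathrm{id}$ and the trivial action, a class in $H^{n}(\mathfrak{g},M)$ is represented by a Lie algebra cochain $\omega\in\Lambda^{n}\mathfrak{g}^*\,.$ Since the module action is trivial, the translated cocycle is just the invariant form
\begin{align*}
(t^*\omega)_g(V_1,\ldots,V_{n})=\omega\big(R_{g^{-1}}V_1,\ldots,R_{g^{-1}}V_{n}\big)\,,\qquad V_1,\ldots,V_{n}\in T_gG\,,
\end{align*}
and triviality of the class $t_x^*\omega$ amounts to exactness of this invariant form in the de Rham complex of $G\,.$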
We are now ready to state and prove the main theorem of the paper: \begin{theorem}[Main Theorem]\label{van Est image} Suppose $G\rightrightarrows G^0$ is source $n$-connected and that $M$ is a $G$-module fitting into the exact sequence \[ 0\to Z\to \mathfrak{m}\overset{\exp}{\to} M\,, \] where $\mathfrak{m}$ is the Lie algebroid of $M\,.$ Then the van Est map $\,VE:H^*(G,M)\to H^*(\mathfrak{g},M)$ is an isomorphism in degrees $\le n$ and injective in degree $(n+1)\,.$ The image of $VE$ in degree $(n+1)$ consists of the classes $\alpha\in H^{n+1}(\mathfrak{g},M)$ such that for all $x\in G^0\,,$ the translated class $t_x^*\alpha$ (see Definition~\ref{translation}) is trivial in \begin{align*} H^{n+1}(s^{-1}(x),\mathcal{O}(M_x)\xrightarrow{\text{dlog}}\Omega^1\otimes\mathfrak{m}_x\to\Omega^2\otimes\mathfrak{m}_x\to\cdots)\,. \end{align*} The same statement holds for $VE_0:H_0^*(G,M)\to H_0^*(\mathfrak{g},M)$ with a degree shift, that is: the truncated van Est map $VE_0:H_0^*(G,M)\to H_0^*(\mathfrak{g},M)$ is an isomorphism in degrees $\le n-1$ and injective in degree $n\,.$ The image of $VE_0$ in degree $n$ consists of the classes $\alpha\in H_0^{n}(\mathfrak{g},M)$ such that for all $x\in G^0\,,$ the translated class $t_x^*\alpha$ is trivial in \begin{align*} H^{n+1}(s^{-1}(x),\mathcal{O}(M_x)\xrightarrow{\text{dlog}}\Omega^1\otimes\mathfrak{m}_x\to\Omega^2\otimes\mathfrak{m}_x\to\cdots)\,. \end{align*} In particular, let $\omega$ be a closed Lie algebroid $(n+1)$-form, ie. \[ \omega \in \ker\big[\Gamma(\mathcal{C}^{n+1}(\mathfrak{g},M))\xrightarrow{d_\text{CE}}\Gamma(\mathcal{C}^{n+2}(\mathfrak{g},M))\big]\,.
\] Then $[\omega]\in H^{n}_0(\mathfrak{g},M)$ is in the image of $VE_0$ if and only if \begin{equation}\label{van Est condition} \int_{S^{n+1}_x}\omega \in Z\;\;\text{ for all } x \in G^0 \text{ and all } S^{n+1}_x\,, \end{equation} where $S^{n+1}_x$ is an $(n+1)$-sphere contained in the source fiber over $x\,.$\footnote{For the case of smooth Lie groups, this seems to be shown in~\cite{wockel1}, although our proof is still different.} \end{theorem} \begin{proof} The statement regarding $VE$ follows from the fact that \begin{equation}\label{iso of coho} H^*(\mathfrak{g},M)= H^*(\mathbf{E}^\bullet G,\kappa^{-1}\mathcal{O}(M)_\bullet) \end{equation} and Theorem~\ref{spectral theorem}. For the statement regarding $VE_0$ we use the fact that \begin{align*} H^*_0(\mathfrak{g},M)\cong H^{*+1}(\mathbf{E}^\bullet G,\widehat{\kappa^{-1}\mathcal{O}(M)}_\bullet)\,, \end{align*} and the fact that the map \begin{align*} H^{*+1}(\mathbf{E}^\bullet G,\kappa^{-1}\mathcal{O}(M)^0_\bullet)\xrightarrow{}H^{*+1}(\mathbf{E}^\bullet G,\widehat{\kappa^{-1}\mathcal{O}(M)}_\bullet) \end{align*} is an isomorphism in degrees $\le n-1$ and is injective in degree $n\,.$ Furthermore, Theorem~\ref{spectral theorem} implies that \begin{align*} H^*(\mathbf{B}^{\bullet}G,\mathcal{O}(M)^0_\bullet)\xrightarrow{\kappa^{-1}}H^*(\mathbf{E}^{\bullet}G,\kappa^{-1}\mathcal{O}(M)^0_\bullet) \end{align*} is an isomorphism in degrees $\le n-1$ and is injective in degree $n\,,$ hence we get that the map $H^*_0(G,M)\to H^*_0(\mathfrak{g},M)$ is an isomorphism in degrees $\le n-1$ and injective in degree $n\,.$ The statement regarding its image in degree $n$ follows from Corollary~\ref{groupoid kernel}. \end{proof} \theoremstyle{definition}\begin{exmp}This is a continuation of Example~\ref{module example}.
The source fibers of $\Pi_1(S^1)\rightrightarrows S^1$ are contractible, hence Theorem~\ref{van Est image} shows that the cohomology groups are $H^i(\Pi_1(S^1),\tilde{\mathbb{R}})=0$ in all degrees, and \[ H^i(\Pi_1(S^1),\widetilde{\mathbb{R}/\mathbb{Z}})=H^{i+1}(\Pi_1(S^1),\tilde{\mathbb{Z}})=\begin{cases} \mathbb{Z}/2\mathbb{Z}, & \text{if}\ i= 0 \\ 0, & \text{if}\ i\ne 0\,, \end{cases} \] and this result agrees with the computation done in Example~\ref{module example}. We also have that $\Pi_1(S^1)$ is Morita equivalent to the fundamental group $\pi_1(S^1)\cong\mathbb{Z}\,,$ and the associated $\mathbb{Z}$-modules are the abelian groups $\mathbb{Z}\,,\,\mathbb{R}\,,\,\mathbb{R}/\mathbb{Z}\,,$ where even integers act trivially and odd integers act by inversion. One can also use this information to compute $H^{i}(\Pi_1(S^1),\tilde{\mathbb{Z}})$ and indeed find that \[ H^{i}(\Pi_1(S^1),\tilde{\mathbb{Z}})=\begin{cases} \mathbb{Z}/2\mathbb{Z}, & \text{if}\ i= 1 \\ 0, & \text{if}\ i\ne 1\,. \end{cases} \]\end{exmp} \subsection{Groupoid Extensions and the van Est Map} To every extension \begin{equation}\label{ab ext} 1\to A\to E\to G\to 1 \end{equation} of a Lie groupoid $G$ by an abelian group $A$ (see~\ref{abelian extensions}) one can associate a class in $H^1_0(\mathfrak{g},A)$ (where $\mathfrak{g}$ is the Lie algebroid of $G$) in two ways: one is given by the extension class of the short exact sequence $0\to \mathfrak{a}\to\mathfrak{e}\to\mathfrak{g}\to 0$ determined by~\ref{ab ext}, and the other is given by applying the van Est map to the class in $H^1_0(G,A)$ determined by~\ref{ab ext}. Here we will show that these two classes are the same. \begin{theorem}\label{groupoid extension} Let $M$ be a $G$-module and consider an extension of the form \begin{align*} 1\to M\to E\to G\to 1 \end{align*} and let $\alpha\in H^1_0(G,M)$ be its isomorphism class. 
Then the isomorphism class of the Lie algebroid associated to $VE(\alpha)\in H^1_0(\mathfrak{g},M)$ is equal to the isomorphism class of the Lie algebroid $\mathfrak{e}$ of $E\,.$ \end{theorem} \begin{proof} Let $\{U_i\}_i$ be an open cover of $G^0\xhookrightarrow{} G$ on which there are local sections $\sigma_i:U_i\to E$ such that each $\sigma_i$ takes $G^0\xhookrightarrow{}G$ to $G^0\xhookrightarrow{}E\,.$ These define a class $\alpha\in H^1_0(G,M)$ by taking $g_{ij}=\sigma^{-1}_i\cdot \sigma_j$ on $U_i\cap U_j\,,$ and where $h_{ijk}=\sigma_k^{-1}\cdot \sigma_i\cdot \sigma_j$ on $p_1^{-1}(U_i)\cap p_2^{-1}(U_j)\cap m^{-1}(U_k)\subset \mathbf{B}^{2}G\,.$ The sections $\sigma_i$ induce a splitting of \begin{align*} 0\to\mathfrak{m}\vert_{U_i}\to\mathfrak{e}\vert_{U_i}\to\mathfrak{g}\vert_{U_i}\to 0\,, \end{align*} which in turn gives a canonical closed 2-form $\omega_i\in C^2(\mathfrak{g}\vert_{U_i},M)\,,$ and the isomorphism given by $g_{ij}:E\vert_{U_i\cap U_j}\to E\vert_{U_i\cap U_j}$ induces an isomorphism $\mathfrak{e}\vert_{U_i\cap U_j}\to \mathfrak{e}\vert_{U_i\cap U_j}$ given by $g_{ij*}$ (ie. the pushforward). Now the argument in Theorem 5 in~\cite{Crainic} implies that $VE(h_{iii})=[\omega_i]\,,$ and then one can check that $VE(\alpha)$ is the class given by $\{(\omega_i,g_{ij*})\}_{ij}\,.$ \end{proof} \chapter{Applications} \section{Groupoid Extensions and Multiplicative Gerbes} Here we describe applications of the main theorem (\Cref{van Est image}) to the integration of Lie algebroid extensions, to representations, and to multiplicative gerbes. \\\\If we take $M=E$ to be a representation in Theorem~\ref{van Est image}, then $Z=\{0\}$ and we obtain the following result, due to Crainic (see~\cite{Crainic}, section 2.3). \begin{theorem}[\cite{Crainic}] Suppose $G\rightrightarrows X$ is source $(n-1)$-connected and that $E$ is a $G$-representation.
Then the van Est map $\,VE:H^*(G,E)\to H^*(\mathfrak{g},E)$ is an isomorphism in degrees $\le n-1$ and is injective in degree $n\,.$ Furthermore, $\omega\in H^n(\mathfrak{g},E)$ is in the image of $VE$ if and only if \begin{equation} \int_{S^n_x}\omega=0\;\;\text{ for all } x \in X \text{ and all } S^n_x\,, \end{equation} where $S^n_x$ is an $n$-sphere contained in the source fiber over $x\,.$ \end{theorem} Now we will prove a result about the integration of Lie algebroid extensions, which generalizes the above result in the $n=2$ case. At least in the case where $M=S^1$ this is due to Crainic and Zhu (see~\cite{zhu}), but their proof is different. \begin{theorem}\label{central extension} Consider the exponential sequence $0\to Z \to\mathfrak{m}\overset{\exp}{\to} M\,.$ Let \begin{align}\label{LA extension} 0\to\mathfrak{m}\to \mathfrak{a} \to \mathfrak{g}\to 0 \end{align} be the central extension of $\mathfrak{g}$ associated to $\omega\in H^2(\mathfrak{g},\mathfrak{m})\,.$ Suppose that $\mathfrak{g}$ has a simply connected integration $G\rightrightarrows X$ and that \begin{align}\label{condition Z} \int_{S^2_x}\omega\in Z \end{align} for all $x\in X$ and $S^2_x\,,$ where $S^2_x$ is a $2$-sphere contained in the source fiber over $x\,.$ Then $\mathfrak{a}$ integrates to a unique extension \begin{align}\label{LG extension} 1\to M\to A \to G\to 1\,. \end{align} \end{theorem} In particular, if $G$ and $M$ are Hausdorff then $\mathfrak{a}$ admits a Hausdorff integration.\footnote{This generalizes Theorem 5 in~\cite{Crainic}, with a different proof.} \begin{proof} By Theorem~\ref{van Est image}, $H^1_0(G,M)$ is isomorphic to the subgroup of $H^1_0(\mathfrak{g},M)$ consisting of classes whose periods along the source fibers lie in $Z\,.$ Hence by Theorem~\ref{groupoid extension} the Lie algebroid extension in~\ref{LA extension} integrates to an extension of the form~\ref{LG extension}.
Since in particular $\mathbf{B}^1A$ is a principal $M$-bundle over $G\,,$ it must be Hausdorff if $G$ and $M$ are. \end{proof} \begin{remark}Note that in fact a stronger result than the above theorem holds. Suppose we have an extension \begin{align}\label{nonabelian} 0\to\mathfrak{m}\xrightarrow{\iota}\mathfrak{a}\xrightarrow{\pi} \mathfrak{g}\to 0\,, \end{align}where now $\mathfrak{m}$ is not assumed to be abelian, so that $M$ is a nonabelian module. However, suppose there is a splitting of~\eqref{nonabelian} such that the curvature $\omega$ takes values in the center of $\mathfrak{m}\,,$ denoted $Z(\mathfrak{m})$ (which we assume is a vector bundle). Then two things occur: First, let $\sigma:\mathfrak{g}\to\mathfrak{a}$ denote the splitting. Then we get an action of $\mathfrak{g}$ on $\mathfrak{m}$ defined by $ \iota(L_{X}W):=[\sigma(X),\iota(W)]\,,$ for $X\in \mathcal{O}(\mathfrak{g})\,,W\in \mathcal{O}(\mathfrak{m})$ (here we are defining $L_X W\,;$ one can check that $[\sigma(X),\iota(W)]$ is in the image of $\iota$ and so defines a local section of $\mathcal{O}(\mathfrak{m})\,,$ and that this action is compatible with Lie brackets). Assume that this action integrates to an action of $G$ on $M\,,$ making $M$ into a (nonabelian) $G$-module. \\\\The second thing that occurs is that we get a central extension given by \begin{align}\label{nonabelian2} 0\to Z(\mathfrak{m})\to (Z(\mathfrak{m})\oplus\mathfrak{g},\omega)\to \mathfrak{g}\to 0\,, \end{align} where $\omega$ is the curvature of $\sigma\,,$ and $\mathfrak{g}$ acts on $Z(\mathfrak{m})$ as above.
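Concretely (a sketch in our notation; the sign convention for the curvature may differ elsewhere), on sections the curvature of the splitting and the bracket on the extension in~\eqref{nonabelian2} are \begin{align*} \iota\big(\omega(X,Y)\big)&=[\sigma(X),\sigma(Y)]-\sigma([X,Y])\,,\\ [(z_1,X_1),(z_2,X_2)]&=\big(L_{X_1}z_2-L_{X_2}z_1+\omega(X_1,X_2)\,,\,[X_1,X_2]\big)\,, \end{align*} with anchor $(z,X)\mapsto\rho(X)\,,$ where $\rho$ is the anchor of $\mathfrak{g}\,;$ the Jacobi identity for this bracket follows from $d_{\text{CE}}\,\omega=0\,.$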
The extension~\eqref{nonabelian2} is a reduction of~\eqref{nonabelian} in the following sense: we can form the Lie algebroid $Z(\mathfrak{m})\oplus \mathfrak{m}$ and this Lie algebroid has a natural action of $Z(\mathfrak{m})\,,$ and the quotient is isomorphic to $\mathfrak{m}\,.$ Similarly, we can form the Lie algebroid $(Z(\mathfrak{m})\oplus\mathfrak{g},\omega)\oplus \mathfrak{m}\,,$ and this Lie algebroid also has a natural action of $Z(\mathfrak{m})\,,$ and the quotient is isomorphic to $\mathfrak{a}\,.$ Therefore, the extension~\eqref{nonabelian} is associated to the extension~\eqref{nonabelian2} in a way that is analogous to the reduction of the structure group of a principal bundle. \\\\Assume now that the extension~\eqref{nonabelian2} integrates to an extension \begin{align}\label{abelian ext} 1\to Z(M)\xrightarrow{\iota} E \xrightarrow{\pi} G\to 1\,, \end{align} where $G$ is the source simply connected groupoid integrating $\mathfrak{g}\,.$ Then we can form the product Lie groupoid $E _{s}{\times}_{s} M:$ the multiplication is given by \begin{align*} (e,m)(e',m')=(ee',m(\pi(e)^{-1}\cdot m'))\,, \end{align*} where $t(e)=s(e')\,.$ Similarly to the Lie algebroid extension case, the family of abelian groups $Z(M)$ acts on the family of groups $Z(M)_{s}{\mathop{\times}}_s M\,,$ as well as on the Lie groupoid $E _{s}{\mathop{\times}}_s M\,,$ and the quotient of the former is isomorphic to $M\,,$ and the quotient of the latter integrates $\mathfrak{a}$ in~\eqref{nonabelian}. This gives us an extension \begin{align} 1\to M\to A\to G\to 1 \end{align} integrating~\eqref{nonabelian}. Therefore, if we can integrate~\eqref{nonabelian2} we can also integrate~\eqref{nonabelian}. \\\\One should notice the similarity between the construction we've just described and the construction described in Lemma 3.6 in~\cite{rui}, in the special case of a regular Lie algebroid (ie.
where the anchor map has constant rank), and where the extension is given by \begin{align} 0\to \text{ker}(\alpha)\to \mathfrak{a}\xrightarrow{\alpha}\text{im}(\alpha)\to 0\,, \end{align} where $\alpha$ is the anchor map of $\mathfrak{a}\,.$ The obstruction to integration described there coincides with the obstruction given by~\Cref{van Est image} for the integration of~\eqref{nonabelian2}, and we've shown that the vanishing of this obstruction is sufficient for an integration of~\eqref{nonabelian}, and hence of $\mathfrak{a}\,,$ to exist. \end{remark} The above results concerned the degree $1$ case in truncated cohomology. We will now apply the main theorem to the integration of rank one representations, which concerns degree $1$ in nontruncated cohomology. First we make use of the following result: \begin{proposition}\label{G representations} The group of isomorphism classes of representations of $G\rightrightarrows G^0$ on complex line bundles is isomorphic to $H^1(G\,,\mathbb{C}^*_{G^0})\,.$ The corresponding statement for real line bundles holds, with $\mathbb{C}^*_{G^0}$ replaced by $\mathbb{R}^*_{G^0}\,.$ See Example~\ref{representation}. \end{proposition} The following statement is already known; we are simply giving a cohomological proof. \begin{theorem} Let $G\rightrightarrows G^0$ be a source simply connected Lie groupoid. Then $\text{Rep}(G,1)\cong \text{Rep}(\mathfrak{g},1)\,,$ where $\text{Rep}(G,1)\,,\,\text{Rep}(\mathfrak{g},1)$ are the categories of 1-dimensional representations, ie. representations on line bundles. \end{theorem} \begin{proof} This follows directly from Theorem~\ref{van Est image}, Example~\ref{representation} and Proposition~\ref{G representations}. \end{proof} Now for the degree $2$ case in truncated cohomology: we use the main theorem to give a proof of an integration result concerning the multiplicative gerbe on compact, simple and simply connected Lie groups (see~\cite{konrad}).
For the purposes of this thesis it is enough to think of a gerbe as the data given by a degree 2 \v{C}ech cocycle. \begin{theorem} Let $G$ be a simply connected Lie group. Then for each $\alpha\in H^2_0(\mathfrak{g},\mathbb{R})$ which is integral on $G\,,$ there is a class in $H^2_0(G,S^1)$ integrating it. \end{theorem} \begin{proof} It is well known that simply connected Lie groups are 2-connected (since $\pi_2(G)=0$ for Lie groups), so Theorem~\ref{van Est image} immediately gives the result. \end{proof} \subsection{Group Actions and Lifting Problems} In this section we apply~\Cref{van Est image} to study the problems of lifting projective representations to representations, and of lifting Lie group actions to principal torus bundles. \subsubsection{Lifting Projective Representations} \begin{theorem} Let $G$ be a simply connected Lie group and let $V$ be a finite dimensional complex vector space. Let $\rho:G\to \text{PGL } (V)$ be a homomorphism. Then $\rho$ lifts to a homomorphism $\tilde{\rho}:G\to\text{GL } (V)\,.$ If $G$ is semisimple, this lift is unique. \end{theorem} \begin{proof} We have a central extension \begin{align}\label{GL extension} 1\to \mathbb{C}^*\to \text{GL } (V)\to \text{PGL } (V)\to 1\,, \end{align} and the corresponding Lie algebra extension splits: the Lie algebra of $\text{PGL } (V)$ is isomorphic to $\mathfrak{g}\mathfrak{l}(V)/\mathbb{C}\,,$ where $\lambda\in\mathbb{C}$ acts on $X\in\mathfrak{g}\mathfrak{l}(V)$ by taking $X\mapsto X+\lambda\,\mathbf{I}\,.$ The map \begin{align*} \mathfrak{g}\mathfrak{l}(V)/\mathbb{C}\to\mathfrak{g}\mathfrak{l}(V)\,,\quad X\mapsto X-\frac{\text{tr}(X)}{\text{dim}(V)}\mathbf{I} \end{align*} is a Lie algebra homomorphism. Therefore, since $G$ is simply connected,~\Cref{van Est image} implies that the extension of $G$ that we get by pulling back the extension given by \eqref{GL extension} via $\rho$ is trivial (since the pullback of a trivial Lie algebra extension is trivial).
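As a quick check (our computation), the displayed map is well defined on the quotient: for any $\lambda\in\mathbb{C}\,,$ \begin{align*} X+\lambda\,\mathbf{I}\;\longmapsto\; X+\lambda\,\mathbf{I}-\frac{\text{tr}(X)+\lambda\,\text{dim}(V)}{\text{dim}(V)}\,\mathbf{I}=X-\frac{\text{tr}(X)}{\text{dim}(V)}\,\mathbf{I}\,, \end{align*} so the image is independent of the representative, and since $\text{tr}([X,Y])=0$ the map is compatible with brackets, giving the claimed splitting.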
However, a trivialization of the pullback extension is the same thing as a lifting of the homomorphism $\rho$ to a homomorphism $\tilde{\rho}:G\to\text{GL } (V)\,,$ hence such a lifting exists. \\\\Now for uniqueness: it is easy to see that the liftings of $\rho$ are a torsor for $\text{Hom}(G,\mathbb{C}^*)\,,$ but again by~\Cref{van Est image} we have that $\text{Hom}(G,\mathbb{C}^*)\cong \text{Hom}(\mathfrak{g},\mathbb{C})\,,$ and the right-hand side is $0$ if $G$ is semisimple. Hence if $G$ is semisimple there is a unique lift. \end{proof} \begin{remark} One can also use the above method to give a proof of Bargmann's theorem (\cite{bargmann}), that is: if $H^2(\mathfrak{g},\mathbb{R})=0\,,$ then every projective representation on an (infinite dimensional) Hilbert space lifts to a representation. \end{remark} \subsubsection{Lifting Group Actions to Principal Bundles and Quantizations} Now we will look at a different lifting problem, one involving compact, semisimple Lie groups. First let us recall the following well-known result: \begin{lemma} A compact Lie group is semisimple if and only if its fundamental group is finite. \end{lemma} Now the aim of the rest of this section is to prove the following result: \begin{theorem}\label{compact group actions} Let $G$ be a compact, semisimple Lie group acting on a manifold $X\,.$ Suppose $P\to X$ is a principal bundle for the $n$-torus $T^n\,.$ Then the action of $G$ on $X$ lifts to an action of $G$ on $P^{|\pi_1(G)|}$ (here $P^{|\pi_1(G)|}$ is the principal $T^n$-bundle whose torsor over $x\in X$ is the product of the torsor over $x$ in $P$ with itself $|\pi_1(G)|$ times), and the lift is unique up to isomorphism (ie. any two lifts differ by a principal bundle automorphism).
\end{theorem} In particular, if $G$ is compact and simply connected, then actions of $G$ on a manifold $X$ lift to all principal $T^n$-bundles over $X\,.$ \theoremstyle{definition}\begin{exmp} Consider the standard action of $SO(3)$ on $S^2\,.$ We have that $\pi_1(SO(3))=\mathbb{Z}/2\mathbb{Z}\,,$ hence $|\pi_1(SO(3))|=2\,.$ Therefore,~\Cref{compact group actions} implies that the action of $SO(3)$ on $S^2$ lifts to an action on all even degree principal $S^1$-bundles over $S^2\,,$ in a unique way up to isomorphism. On the other hand, since $SU(2)$ is simply connected, its standard action on $S^2$ lifts to an action on all principal $S^1$-bundles over $S^2\,,$ again in a unique way up to isomorphism. \end{exmp} Before proving~\Cref{compact group actions}, we will prove the following result, which is interesting in its own right and is related to the Riemann-Hilbert correspondence.\footnote{In particular, this result determines exactly when a flat connection on a gerbe integrates to an action of the fundamental groupoid on the gerbe.} \begin{lemma}\label{fundamental group} Let $X$ be a connected manifold with universal cover $\tilde{X}$ and suppose that $\pi_k(X)=0$ for all $2\le k\le m\,.$ Let $T^n\xrightarrow{\text{dlog}} \mathbf{\Omega}^\bullet$ be the Deligne complex. Then $H^k(\pi_1(X),T^n)\cong H^k(X,T^n\xrightarrow{\text{dlog}} \mathbf{\Omega}^\bullet)$ for all $2\le k\le m\,,$ and the following sequence is exact: \begin{align} 0\to H^{m+1}(\pi_1(X),T^n)\to H^{m+1}(X,T^n\to \mathbf{\Omega}^\bullet)\to H^{m+1}(\tilde{X},T^n\to \mathbf{\Omega}^\bullet)\,. \end{align} \end{lemma} \begin{proof} We have that $\pi_1(X)$ is Morita equivalent to $\Pi_1(X)$ (the fundamental group and fundamental groupoid of $X\,,$ respectively), so by Morita invariance \begin{align*} H^{\bullet}(\pi_1(X),T^n)\cong H^{\bullet}(\Pi_1(X),T_X^n)\,.\end{align*} The source fibers of $\Pi_1(X)$ are diffeomorphic to $\tilde{X}\,,$ which is $m$-connected by hypothesis, so the result then follows from~\Cref{van Est image}.
\end{proof} \begin{corollary}\label{power is zero} Let $G$ be a connected Lie group and as usual let $\mathbf{B}^1G$ be the underlying manifold. Then for every class $\alpha \in H^1(\mathbf{B}^1G,T^n\to \mathbf{\Omega}^\bullet)\,,$ we have that $\alpha^{|\pi_1(G)|}=1\,.$ \end{corollary} \begin{proof} From~\Cref{fundamental group} we obtain the well-known result that $H^1(\mathbf{B}^1G,T^n\to \mathbf{\Omega}^\bullet)\cong H^1(\pi_1(G),T^n)\,.$ The latter is equal to $\text{Hom}(\pi_1(G),T^n)\,;$ however, every $f\in \text{Hom}(\pi_1(G),T^n)$ satisfies $f^{|\pi_1(G)|}=1\,,$ completing the proof. \end{proof} We now state a proposition that will be needed for the proof of~\Cref{compact group actions} (for a proof of this proposition, see~\cite{Crainic}). \begin{proposition}\label{vanishing cohomology prop} Let $G\rightrightarrows X$ be a proper Lie groupoid (ie. the map $(s,t):G\to X\times X$ is a proper map). Let $E\to X$ be a representation of $G\,.$ Then $H^k(G,E)=0$ for all $k\ge 1\,.$ \end{proposition} The key to proving~\Cref{compact group actions} is the following lemma: \begin{lemma}\label{vanishing cohomology} Let $G$ be a compact, simply connected Lie group acting on a manifold $X\,.$ Then $H^1_0(G\ltimes X,T^n_X)=0\,.$ \end{lemma} \begin{proof} Since $G$ is compact the action is proper, hence $G\ltimes X$ is a proper groupoid, and from~\Cref{vanishing cohomology prop} we see that $H^k(G\ltimes X,\mathbb{R}^n_X)=0$ for all $k\ge 1\,.$ This implies that $H^k_0(G\ltimes X,\mathbb{R}^n_X)=0$ for all $k\ge 2\,.$ Since simply connected Lie groups are $2$-connected,~\Cref{van Est image} implies that $H^2_0(G\ltimes X,\mathbb{Z}^n_X)=0\,.$ Hence, from the short exact sequence $0\to\mathbb{Z}^n\to\mathbb{R}^n\to T^n\to 0\,,$ we get that $H^1_0(G\ltimes X,T^n_X)=0\,.$ \end{proof} We are now ready to prove~\Cref{compact group actions} for simply connected groups.
\begin{lemma}\label{compact simply connected} Let $G$ be a compact, simply connected Lie group acting on a manifold $X\,.$ Suppose $P\to X$ is a principal bundle for the $n$-torus $T^n\,.$ Then the action of $G$ on $X$ lifts to an action of $G$ on $P\,,$ and the lift is unique up to isomorphism. \end{lemma} \begin{proof} Consider the gauge groupoid of $P$ given by $\text{At}(P):=P\times P/T^n\rightrightarrows X\,,$ where the action of $T^n$ is the diagonal action (here the source and target maps are the projections onto the first and second factors, respectively, and a morphism with source $x$ and target $y$ is a $T^n$-equivariant morphism between the fibers of $P$ lying over $x$ and $y\,,$ respectively). The gauge groupoid fits into a central extension of $\text{Pair}(X)\,,$ ie. \begin{align}\label{Atiyah sequence} 1\to T^n_X\to \text{At}(P)\to\text{Pair}(X)\to 1\,. \end{align} A lift of the $G$-action to $P\to X$ is equivalent to a lift of the canonical homomorphism $G\ltimes X\xrightarrow{(s,t)}\text{Pair}(X)$ to $\text{At}(P)\,,$ which is equivalent to a trivialization of the central extension of $G\ltimes X$ given by pulling back, via $(s,t)\,,$ the central extension given by \eqref{Atiyah sequence}. From~\Cref{vanishing cohomology} we know that such a trivialization exists, hence the $G$-action lifts to $P\,.$ \\\\Uniqueness up to isomorphism follows from the fact that the isomorphism classes of different lifts are a torsor for the image of $H^0_0(G\ltimes X,T^n)$ in $H^1(G\ltimes X,T^n)\,,$ and that the image is trivial follows from the exponential sequence $1\to \mathbb{Z}^n\to\mathbb{R}^n\to T^n\to 1\,,$ since both $H^1(G\ltimes X,\mathbb{R}^n_X)$ and $H^1_0(G\ltimes X,\mathbb{Z}^n_X)$ are trivial (the former follows from~\Cref{vanishing cohomology prop}, the latter follows from~\Cref{van Est image}). \end{proof} Now we can prove~\Cref{compact group actions}.
One way of doing this is to look at the action of $\pi_1(G)$ on its universal cover, another way is the following: \begin{proof}[Proof of Theorem \ref{compact group actions}] Let $\tilde{G}$ be the universal cover of $G\,.$ From~\Cref{compact simply connected} we know that the corresponding action of $\tilde{G}$ on $X$ lifts to an action on $P\,,$ giving us a class $\alpha\in H^1(\tilde{G}\ltimes X,T^n)$ whose underlying principal bundle on $X$ is $P\,.$ Hence after applying the van Est map we get a class $\text{VE}(\alpha)\in H^1(\mathfrak{g}\ltimes X,T^n)\,,$ whose underlying principal bundle on $X$ is also $P\,.$ \\\\After translating $\text{VE}(\alpha)$, we get a flat $T^n$-bundle on each source fiber of $G\ltimes X\,,$ ie. for each $x\in X$ we get a flat $T^n$-bundle on $G$, which we denote by $P_{(G,x)}\,.$ Then by~\Cref{power is zero} we have that $P_{(G,x)}^{|\pi_1(G)|}$ is trivial. However, $P_{(G,x)}^{|\pi_1(G)|}$ is the right translation of $|\pi_1(G)|\cdot\text{VE}(\alpha)$ (where the $\mathbb{Z}$-action is the natural one on cohomology classes), hence by~\Cref{van Est image} we get the existence of a lift of the $G$-action to $P^{|\pi_1(G)|}\,.$ \\\\Uniqueness follows from the same argument as in~\Cref{compact simply connected}. \end{proof} In the same vein, given a Hamiltonian group action on a symplectic manifold $G\, \rotatebox[origin=c]{-90}{$\circlearrowright$}\,(M,\omega)\,,$ one can show that, if $G$ is simply connected, this action lifts to a quantization. The reason is that if one considers the action Lie algebroid $\mathfrak{g}\ltimes M\,,$ with anchor $\alpha\,,$ the moment map $\mu$ trivializes $\alpha^*\omega$ in the truncated Lie algebroid complex. The rest follows from the van Est isomorphism theorem. \subsection{Quantization of Courant Algebroids} In this section we will discuss applications of our main theorem to the quantization of Courant algebroids, as discussed in~\cite{gen kahler}. 
\\\\Let $C$ be a smooth Courant algebroid over $X$ (see~\cite{gen kahler} for a definition) associated to a $3$-form $\omega\,,$ and suppose that it is prequantizable, that is, $\omega$ has integral periods. Let $g$ denote an $S^1$-gerbe prequantizing $\omega\,.$ Let $D\subset C$ be a Dirac structure. Then in particular, $D$ is a Lie algebroid, and as explained in~\cite{gen kahler} $g$ can be equipped with a flat $D$-connection, denoted $A\,.$ This determines a class $[(g,A)]\in H^2(D,S^1_X)\,.$ Suppose $D$ integrates to a Lie groupoid. We can then ask about the integrability of $[(g,A)]\,,$ or in other words: does the action of $D$ on $g$ integrate to an action of the corresponding source simply connected groupoid on $g\,?$ Here we give a class of examples that does integrate, and it relates to the basic gerbe on a compact, simple Lie group (see~\cite{erbe},~\cite{Severa}). \theoremstyle{definition}\begin{exmp} Let $G$ be a compact, simple Lie group with universal cover $\tilde{G}\,,$ and let $\langle \cdot,\cdot\rangle$ be the unique bi-invariant symmetric bilinear form which at the identity is equal to the Killing form. Associated to $\langle \cdot,\cdot\rangle$ is a bi-invariant and integral $3$-form $\omega$, called the Cartan $3$-form, given at the identity by \begin{align*}\omega\vert_e=\frac{\langle \,[\,\cdot\,,\,\cdot\,]\,,\cdot\rangle\vert_e}{2}\,. \end{align*} The Dirac structure in this case, called the Cartan-Dirac structure, is the action Lie algebroid $\mathfrak{g}\ltimes G\,,$ where the action is the adjoint action of $\mathfrak{g}$ on $G\,.$ From this there is a canonical class $\alpha\in H^2(\mathfrak{g}\ltimes G,S^1_G)\,,$ whose underlying gerbe on $G$ is called the basic gerbe. The source simply connected integration of $\mathfrak{g}\ltimes G$ is $\tilde{G}\ltimes G\,,$ where the action of $\tilde{G}$ on $G$ is the one lifting the action of $G$ on itself by conjugation.
Since the source fibers of $\tilde{G}\ltimes G$ are diffeomorphic to $\tilde{G}\,,$ which is necessarily $2$-connected, by Theorem~\ref{van Est image} we have that \begin{align*} H^2(\tilde{G}\ltimes G,S^1_G)\overset{VE}{\cong}H^2(\mathfrak{g}\ltimes G,S^1_G)\,. \end{align*} Hence $\alpha$ integrates to a class in $H^2(\tilde{G}\ltimes G,S^1_G)\,.$ \end{exmp}To summarize, we have proven the following [see~\cite{erbe},~\cite{krepski}]: \begin{theorem}[Integration of Cartan-Dirac structures] Let $G$ be a compact, simple Lie group with universal cover $\tilde{G}\,.$ Then the adjoint action of $\mathfrak{g}$ on the basic gerbe (where the action is given by the Cartan-Dirac structure) integrates to an action of $\tilde{G}$ on the basic gerbe. \end{theorem} \subsection{Integration of Lie \texorpdfstring{$\infty$-Algebroids}{infinity algebroids}} In this section we will discuss the integration and quantization of Lie $\infty$-algebroids. See~\cite{pym2} for more details. We consider Lie $\infty$-algebroids of the following form: \\\\Let $\mathfrak{g}$ be a Lie algebroid and let $M\to X$ be a $\mathfrak{g}$-module. Let $\omega\in C^n(\mathfrak{g},M)$ be closed, $n>2\,.$ We can define a two term Lie $(n-1)$-algebroid as follows: Let $\mathcal{L}=\mathfrak{m}\oplus\mathfrak{g}$ where $\mathfrak{m}$ has degree $2-n$ and $\mathfrak{g}$ has degree $0\,.$ Let all brackets be zero except for the degree $0$ and degree $2-n$ brackets.
Define the degree $0$ bracket as follows: for $U$ an open set in $X$ and for $m_1\,,m_2\in\mathcal{O}(\mathfrak{m})(U)\,,g_1\,,g_2\in\mathcal{O}(\mathfrak{g})(U)\,,$ let \begin{align*} [m_1+g_1,m_2+g_2]_0=[g_1,g_2]+d_{CE}m_2(g_1)-d_{CE}m_1(g_2)\,, \end{align*} where $[g_1,g_2]$ is the Lie bracket of $g_1\,,g_2$ in $\mathfrak{g}\,.$ Define the degree $2-n$ bracket as follows: for $g_1\,,\ldots\,,g_n\in\mathcal{O}(\mathfrak{g})(U)\,,$ let \begin{align*} [g_1\,,\ldots\,,g_n]_n=\omega(g_1\,,\ldots\,,g_n)\,, \end{align*} and if any of the inputs is in $\mathcal{O}(\mathfrak{m})(U)$ let the bracket be zero. This defines a Lie $(n-1)$-algebroid. \\\\Since the universal cover of a $k$-dimensional torus $($for $k\ge 1)$ is contractible,~\Cref{van Est image} gives us the following result, at the level of cohomology: \begin{corollary} All Lie $(n-1)$-algebroids associated to closed $n$-forms on the $k$-dimensional torus $T^k$ integrate to multiplicative $(n-2)$-gerbes. \end{corollary} We now apply the previous results to Lie $2$-algebras (an $L_\infty$-algebra concentrated in the two lowest degrees). As proved in~\cite{baez2}, all Lie $2$-algebras are equivalent to ones of the form \begin{align}\label{lie 2} V\to \mathfrak{g}\,, \end{align} where the only nonzero brackets are the degree $0$ and $-1$ brackets, and where the degree $-1$ bracket is given by a closed $3$-form. Furthermore, if $\omega\,,\omega'$ define equivalent Lie $2$-algebras, then $[\omega]=[\omega']$ in $H^3_0(\mathfrak{g},V)\,,$ implying that the map from Lie $2$-algebras to $H^3_0(\mathfrak{g},V)$ is canonical. Since simply connected Lie groups are $2$-connected, Theorem~\ref{van Est image} can help us determine when a Lie $2$-algebra integrates.
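A standard illustration (with normalization conventions left implicit): take $\mathfrak{g}$ compact and simple, $V=\mathbb{R}\,,$ and $\omega$ the Cartan $3$-form; the resulting Lie $2$-algebra is the string Lie $2$-algebra of $\mathfrak{g}\,.$ For the simply connected integration $G$ we have $\pi_3(G)\cong\mathbb{Z}\,,$ so the periods of $\omega$ form a discrete subgroup \begin{align*} P(\omega)=\lambda\mathbb{Z}\subset\mathbb{R} \end{align*} for some $\lambda\ge 0\,,$ and Theorem~\ref{van Est image} then produces an integrating class in $H^2_0(G,\mathbb{R}/\lambda\mathbb{Z})\,.$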
\begin{theorem}\label{2 algebra} Let $\mathcal{L}$ be a Lie $2$-algebra represented by the $3$-form $\omega\,.$ Let $G$ be the simply connected integration of $\mathfrak{g}\,.$ If the periods $P(\omega)$ of $\omega$ form a discrete subgroup of $V\,,$ then $\mathcal{L}$ integrates to a class in $H_0^2(G,V/P(\omega))\,.$ \end{theorem} \theoremstyle{definition}\begin{remark} Note that in~\cite{andre} it is shown that the obstruction to integrating a Lie $2$-algebra to a Lie $2$-group vanishes exactly when the periods of $\omega$ form a discrete subgroup of $V\,,$ ie. the condition is the same as the one in the above theorem. To explain this, we note the following: it is shown in~\cite{schommer} that to every class in $H^2_0(G,S^1)$ there corresponds an equivalence class of Lie $2$-groups. We expect that, under this correspondence,~\Cref{2 algebra} shows that the Lie $2$-algebras which satisfy the hypotheses of this theorem integrate to Lie $2$-groups. \end{remark} \subsection{Applications to Gerbes} Let us recall that isomorphism classes of gerbes with a flat connection on a manifold $X$ are classified by $H^2(TX,X\times\mathbb{C}^*)\,.$ If the underlying gerbe of a gerbe with a flat connection is trivial then there is a lift of its class to some $\omega\in H^2(TX,X\times\mathbb{C})\,.$ Furthermore, actions of $\Pi_1(X)$ on gerbes are classified by $H^2(\Pi_1(X),\mathbb{C}^*)\,.$ We will call a gerbe with a flat connection integrable if its isomorphism class is in the image of the van Est map \[ H^2(\Pi_1(X),X\times\mathbb{C}^*)\to H^2(TX,X\times\mathbb{C}^*)\,. \] \Cref{van Est image} then implies the following: \begin{theorem}\label{gerbe} If $\pi_2(X)=0$, then all gerbes with a flat connection are integrable, and the correspondence is one-to-one.
Otherwise, suppose the underlying gerbe of the gerbe with a flat connection is trivial and let $\pi:\tilde{X}\to X$ be the universal cover of $X\,.$ Then it is integrable if and only if $\pi^*\omega$ has integral periods, where $\omega\in H^2(TX,X\times\mathbb{C})$ is the lift of the class of the gerbe with a flat connection. Moreover, if this is the case, then the integration is unique. \end{theorem} \begin{remark} In fact, since \begin{align}\label{TX iso} H^2(TX,X\times\mathbb{C}^*)\cong H^2(X,\mathbb{C}^*)\,, \end{align} one can show that \[ H^2(\Pi_1(X),X\times\mathbb{C}^*)\overset{\text{VE}}{\cong} \ker[\pi^*:H^2(TX,X\times\mathbb{C}^*)\to H^2(T\tilde{X},\tilde{X}\times\mathbb{C}^*)]\,, \] where $\pi^*$ is the pullback induced by~\eqref{TX iso}. This is consistent with the result from Exercise 159 in~\cite{davis}, which states that there is an exact sequence \[ \pi_2(X)\to H_2(X,\mathbb{Z})\to H_2(\pi_1(X),\mathbb{Z})\to 0\,, \] from which one can deduce that \[ H^2(\pi_1(X),\mathbb{C}^*)\cong \ker[\pi^*:H^2(X,\mathbb{C}^*)\to H^2(\tilde{X},\mathbb{C}^*)]\,. \] \end{remark} $\mathbf{}$ \\Morita invariance of cohomology gives us the following: \begin{theorem}\label{gerbe fundamental group} Integrable gerbes with a flat connection on $X$ are in one-to-one correspondence with isomorphism classes of central extensions of the form \[ 0\to\mathbb{C}^*\to E\to \pi_1(X)\to 0\,, \] i.e.\ $H^2(\pi_1(X),\mathbb{C}^*)\,.$ \end{theorem} $\mathbf{}$ \\Combining~\Cref{gerbe} and~\Cref{gerbe fundamental group}, we get the following: \begin{corollary} There is a canonical embedding \[ H^2(\pi_1(X),\mathbb{C}^*)\xhookrightarrow{} H^2(TX,X\times\mathbb{C}^*)\,. \] If $\pi_2(X)=0$ then this embedding is an isomorphism.
\end{corollary} \begin{remark} One should compare the above results with the well-known theorem which states that line bundles with a flat connection are in one-to-one correspondence with $\text{Hom}(\pi_1(X),\mathbb{C}^*)\cong H^1(\pi_1(X),\mathbb{C}^*)\,.$ One can also prove this using the Morita invariance of cohomology and the fact that line bundles with flat connections always integrate uniquely to a class in $H^1(\Pi_1(X),X\times\mathbb{C}^*)\,.$ Note that $H^2(\pi_1(X),\mathbb{C}^*)$ is known as the Schur multiplier of $\pi_1(X)$ (or its dual, depending on conventions; see~\cite{greg}). \end{remark} \subsection{Van Est Map: Heisenberg Action Groupoids}\label{heisenberg action} In this section we will apply the tools developed in the previous sections to integrate a particular Lie algebroid extension and show that we get a Heisenberg action groupoid. \\\\Consider the space $\mathbb{C}^2$ with divisor $D=\{xy=0\}\,.$ Then the $2$-form \begin{align*} \omega=\frac{dx\wedge dy}{xy} \end{align*} is a closed form in $C_0^2(T_{\mathbb{C}^2}(-\log{D}),\mathbb{C}_{\mathbb{C}^2})\,.$\footnote{On $X\backslash D$ the $2$-form $\omega/2\pi i$ is the curvature of the Deligne line bundle associated to the holomorphic functions $x$ and $y\,.$} The source simply connected integration of $T_{\mathbb{C}^2}(-\log{D})$ is $\mathbb{C}^2\ltimes \mathbb{C}^2\,,$ where the action of $\mathbb{C}^2$ on itself is given by \begin{align*} (a,b)\cdot (x,y)=(e^ax,e^by)\,. \end{align*} Since the source fibers are contractible, Theorem~\ref{van Est image} tells us that the central extension of $T_{\mathbb{C}^2}(-\log{D})$ defined by $\omega$ integrates to a $\mathbb{C}_{\mathbb{C}^2}$-central extension of $\mathbb{C}^2\ltimes \mathbb{C}^2\,.$ We will describe the central extension here.
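\\\\To see why this action groupoid differentiates to $T_{\mathbb{C}^2}(-\log{D})\,,$ note that for $(a,b)$ in the Lie algebra of $\mathbb{C}^2$ the infinitesimal action is \begin{align*} \frac{d}{d\varepsilon}\Big\vert_{\varepsilon=0}(e^{\varepsilon a}x,e^{\varepsilon b}y)=(ax,by)\,, \end{align*} so the anchor map sends the constant section $(a,b)$ to the vector field $a\,x\partial_x+b\,y\partial_y\,.$ The vector fields $x\partial_x\,,y\partial_y$ generate the sheaf of vector fields tangent to $D=\{xy=0\}\,,$ and the source fibers of $\mathbb{C}^2\ltimes\mathbb{C}^2$ are copies of $\mathbb{C}^2\,,$ hence contractible, as used above.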
First we will compute the integration of $\omega:$ we define coordinates on $\mathbf{B}^{\bullet\le 2} (\mathbb{C}{\mathop{\times}}\mathbb{C}\ltimes \mathbb{C}{\mathop{\times}}\mathbb{C})$ as follows: \begin{align*} &(x,y)\in \mathbb{C}^2= \mathbf{B}^0 (\mathbb{C}{\mathop{\times}}\mathbb{C}\ltimes \mathbb{C}{\mathop{\times}}\mathbb{C})\,, \\&(a,b,x,y)\in \mathbb{C}^2{\mathop{\times}}\mathbb{C}^2=\mathbf{B}^1 (\mathbb{C}{\mathop{\times}}\mathbb{C}\ltimes \mathbb{C}{\mathop{\times}}\mathbb{C})\,, \\& (a',b',a,b,x,y)\in \mathbf{B}^2 (\mathbb{C}{\mathop{\times}}\mathbb{C}\ltimes \mathbb{C}{\mathop{\times}}\mathbb{C})\,. \end{align*} On $\mathbf{E}^{\bullet\le 2} (\mathbb{C}{\mathop{\times}}\mathbb{C}\ltimes \mathbb{C}{\mathop{\times}}\mathbb{C})$ we have coordinates \begin{align*} &(a,b,x,y)\in \mathbf{E}^0 (\mathbb{C}{\mathop{\times}}\mathbb{C}\ltimes \mathbb{C}{\mathop{\times}}\mathbb{C})\,, \\&(a',b',a,b,x,y)\in \mathbf{E}^1 (\mathbb{C}{\mathop{\times}}\mathbb{C}\ltimes \mathbb{C}{\mathop{\times}}\mathbb{C})\,, \\& (a'',b'',a',b',a,b,x,y)\in \mathbf{E}^2 (\mathbb{C}{\mathop{\times}}\mathbb{C}\ltimes \mathbb{C}{\mathop{\times}}\mathbb{C})\,, \end{align*} where the map $\kappa:\mathbf{E}^{\bullet\le 2} (\mathbb{C}{\mathop{\times}}\mathbb{C}\ltimes \mathbb{C}{\mathop{\times}}\mathbb{C})\to\mathbf{B}^{\bullet\le 2} (\mathbb{C}{\mathop{\times}}\mathbb{C}\ltimes \mathbb{C}{\mathop{\times}}\mathbb{C})$ is given by \begin{align*} &(a,b,x,y)\mapsto (e^a x,e^b y)\,, \\& (a',b',a,b,x,y)\mapsto (a',b',e^a x, e^b y)\,, \\& (a'',b'',a',b',a,b,x,y)\mapsto (a'',b'',a',b',e^a x, e^b y)\,. 
\end{align*} When we right translate $\omega$ to $\mathbf{E}^0(\mathbb{C}^2\ltimes\mathbb{C}^2)$ we get the fiberwise form $da\wedge db\,.$ This is exact, with primitive $a\,db\,.$ When we pull back $a\,db$ to $\mathbf{E}^1(\mathbb{C}^2\ltimes\mathbb{C}^2)$ we get the fiberwise form $a'\,db\,,$ and this is exact, with primitive $a'b\,.$ When we pull back $a'b$ to $\mathbf{E}^2(\mathbb{C}^2\ltimes\mathbb{C}^2)$ we get the function $a''b'\,,$ and this is $\kappa^*a'b\,.$ So the cocycle integrating $\omega$ is $f(a',b',a,b,x,y)=a'b\,.$ \\\\One can show that the central extension associated to this cocycle is an action groupoid of the complex Heisenberg group acting on $\mathbb{C}{\mathop{\times}}\mathbb{C}\,,$ i.e.\ we have the following proposition: \begin{proposition} The logarithmic $2$-form $\frac{dx\wedge dy}{xy}$ on $\mathbb{C}^2$ with divisor $xy=0$ defines a Lie algebroid extension of $T_{\mathbb{C}^2}(-\log{\{xy=0\}})\,.$ This Lie algebroid extension integrates to an extension of $\mathbb{C}^2\ltimes \mathbb{C}^2$ given by a Heisenberg action groupoid. More precisely, the extension is of the form \begin{align}\label{heisenberg extension} 0\to\mathbb{C}_{\mathbb{C}^2}\to H\ltimes\mathbb{C}^2 \to\mathbb{C}^2\ltimes \mathbb{C}^2\to 0\,, \end{align} where $H$ is the complex Heisenberg group, consisting of matrices of the form \begin{align*} \begin{pmatrix} 1 & a & c\\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} \end{align*} for $a\,,b\,,c\in\mathbb{C}\,,$ and the action on $\mathbb{C}{\mathop{\times}}\mathbb{C}$ is given by $(a,b,c)\cdot (x,y)=(e^a x,e^b y)\,,$ where $(a,b,c)$ represents the above matrix.
\end{proposition} \subsection{Van Est Map: \texorpdfstring{$\mathbb{C}\ltimes\mathbb{P}^1$}{C to P1}} In this section we will classify two different geometric structures: rank one holomorphic representations of $\Pi_1(\mathbb{P}^1,\{0,\infty\})\cong \mathbb{C}\ltimes\mathbb{P}^1\,,$ which are classified by \[ H^1(\Pi_1(\mathbb{P}^1,\{0,\infty\}),\mathbb{C}^*_{\mathbb{P}^1})\,, \] and rank one holomorphic representations of its Lie algebroid, denoted $T_{\mathbb{P}^1}(-\log{\{0,\infty\}})\,,$ which are classified by \[H^1(T_{\mathbb{P}^1}(-\log{\{0,\infty\}}),\mathbb{C}^*_{\mathbb{P}^1})\,. \] We then compute the van Est map between them and explicitly show that it is an isomorphism. \vspace{3mm}\\Let's begin: consider the action of $\mathbb{C}$ on $\mathbb{P}^1$ given by $a\cdot [z:w]=[e^{a}z:w]\,,$ and form the action groupoid $\mathbb{C}\ltimes\mathbb{P}^1\,.$ Then representations of $\mathbb{C}\ltimes\mathbb{P}^1$ on holomorphic line bundles are classified by $H^1(\mathbb{C}\ltimes\mathbb{P}^1,\mathbb{C}^*_{\mathbb{P}^1})\,,$ and these are the global versions of flat logarithmic connections on holomorphic line bundles, with poles at $\{0,\infty\}\,,$ also known as holomorphic representations of the Lie algebroid of $\mathbb{C}\ltimes\mathbb{P}^1\,.$ The sheaf of sections of the Lie algebroid of $\mathbb{C}\ltimes\mathbb{P}^1$ is isomorphic to the sheaf of sections of $T_{\mathbb{P}^1}$ which vanish at the origin and at $\infty\,.$ \begin{steps} \item Let $U_0\,,U_1$ be the standard open covering of $\mathbb{P}^1\,.$ Then we get an open covering of $\mathbf{B}^1(\mathbb{C}\ltimes\mathbb{P}^1)$ by using the open cover $\{s^{-1}U_i\cap t^{-1}U_j\}_{i,j\in \{0,1\}}\,.$ A standard Mayer-Vietoris argument shows that this is a good cover in degree one, i.e.\ it can be used to compute cohomology in degree one.
Let $U_{ij}=s^{-1}U_i\cap t^{-1}U_j\,.$ The inequivalent degree one cocycles are given by the following: \begin{align}\label{cocycle} &\sigma_{00}(a,z)=e^{(k+\lambda)a}\,,\,\sigma_{01}(a,z)=e^{\lambda a}z^{-k}\,, \\ &\sigma_{10}(a,z)=e^{(k+\lambda) a}z^k\,,\,\sigma_{11}(a,z)=e^{\lambda a}\,,\nonumber\\& g_{01}(z)=z^k\,,\nonumber \end{align} where $\sigma_{ij}$ are functions on $U_{ij}$ and $g_{01}$ is a function on $U_0\cap U_1$ representing the principal bundle, and where $k\in\mathbb{Z}\,,\lambda\in\mathbb{C}\,.$ Hence \begin{align*} H^1(\mathbb{C}\ltimes\mathbb{P}^1,\mathbb{C}^*_{\mathbb{P}^1})\cong \mathbb{C}{\mathop{\times}}\mathbb{Z}\,. \end{align*} Now we compute the van Est map on these classes. First recall that the van Est map factors through the cohomology of the local groupoid, so we only need to be concerned with a neighborhood of the identity bisection. $\mathbf{}$ \item Pull back the cocycle via $\kappa^{-1}$ to a neighborhood of $G$ in $G\ltimes G\,,$ and get the cocycle given by the functions \begin{align*} \kappa^{-1}\sigma_{00}\,,\,\kappa^{-1}\sigma_{11}\,, t^{-1}g_{01} \end{align*} defined on the open sets $\kappa^{-1}U_{00}\,,\kappa^{-1}U_{11}\,,t^{-1}U_{10}\,,$ respectively. 
\item Now by the condition that~\eqref{cocycle} is a cocycle, it follows that \begin{align*} &\kappa^{-1}\sigma_{00}=\delta^*\sigma_{00}\;\text{ on }\;\kappa^{-1}U_{00}\cap s^{-1}U_{00}\cap t^{-1}U_{00}\,, \\&\kappa^{-1}\sigma_{11}=\delta^*\sigma_{11}\;\text{ on }\;\kappa^{-1}U_{11}\cap s^{-1}U_{11}\cap t^{-1}U_{11}\,, \end{align*} and these two open sets cover a neighborhood of $G^0$ in $G\ltimes G\,.$ Explicitly, \begin{align*} \kappa^{-1}U_{ii}\cap s^{-1}U_{ii}\cap t^{-1}U_{ii}=\{(g_1,g_2)\in G{\mathop{\times}} G: g_1\,,g_2\,,g_1g_2\in U_{ii}\}\,, \end{align*} for $i=0\,,1\,.$ So from the second step we get the functions \begin{align*} \sigma_{00}\,,\,\sigma_{11}\,, s^{-1}g_{01}\,, \end{align*} defined on the open sets $U_{00}\,,U_{11}\,,U_{00}\cap U_{11}\,,$ respectively. \item Now apply $\mathrm{dlog}+\partial$ to $\sigma_{00}\,,\sigma_{11}\,,$ and we get the elements \begin{align*} \mathrm{dlog}\sigma_{00}\,,\,\mathrm{dlog}\sigma_{11}\,,\,\frac{\sigma_{00}}{\sigma_{11}}s^{-1}g_{01}\,, \end{align*} defined on the open sets $U_{00}\,,U_{11}\,,U_{00}\cap U_{11}\,,$ respectively. Explicitly, these are given, respectively, by \begin{align*} (k+\lambda)\frac{da}{2\pi i}\,,\, \lambda\frac{da}{2\pi i}\,,\, e^{ka}z^k\,. \end{align*} \item Now this cocycle is pulled back from the following cocycle in the Chevalley-Eilenberg complex via $t\,:$ \begin{align}\label{lie cocycle} \alpha_0(z)=(k+\lambda)\frac{da}{2\pi i}\,, \alpha_1(z)=\lambda\frac{da}{2\pi i}\,, g_{01}(z)=z^k\,, \end{align} where these maps are defined on $U_0\,,U_1\,, U_0\cap U_1\,,$ respectively.
\end{steps} $\mathbf{}$ \\The anchor map in this case is $\alpha(\partial_a\vert_{(0,z')})=z'\,\partial_z\vert_{z'}\,,$ which is an embedding of sheaves, and hence the sheaf of sections of the Lie algebroid is isomorphic to the sheaf on $\mathbb{P}^1$ generated by $z\,\partial_z$ on $U_0$ and $\tilde{z}\,\partial_{\tilde{z}}$ on $U_1\,.$ Under this isomorphism of sheaves, $da$ gets sent to $dz/z\,.$ We can use the isomorphism to identify the Lie algebroid cocycle in~\eqref{lie cocycle} with the cocycle in the sheaf of logarithmic differential forms given by \begin{align}\label{flat log connections} \alpha_0(z)=\frac{(k+\lambda)}{2\pi i}\,\frac{dz}{z}\,, \alpha_1(z)=\frac{\lambda}{2\pi i}\,\frac{dz}{z}\,, g_{01}(z)=z^k\,. \end{align} $\mathbf{}$ \\To summarize, we have the following: \begin{proposition} The cocycles in~\eqref{cocycle} give an isomorphism $H^1(\mathbb{C}\ltimes\mathbb{P}^1,\mathbb{C}^*_{\mathbb{P}^1})\cong\mathbb{C}{\mathop{\times}}\mathbb{Z}\,; $ the cocycles in~\eqref{flat log connections} give an isomorphism $H^1(T_{\mathbb{P}^1}(-\log{\{0,\infty\}}),\mathbb{C}^*_{\mathbb{P}^1})\cong \mathbb{C}{\mathop{\times}}\mathbb{Z}\,.$ Under these isomorphisms the van Est map \[ VE:H^1(\mathbb{C}\ltimes\mathbb{P}^1,\mathbb{C}^*_{\mathbb{P}^1})\to H^1(T_{\mathbb{P}^1}(-\log{\{0,\infty\}}),\mathbb{C}^*_{\mathbb{P}^1}) \] is given by $(\lambda,k)\mapsto(\lambda,k)\,.$ \end{proposition} \section{Heisenberg Manifold as a Higher Structure} In this section we will show that the Heisenberg manifold has several compatible geometric structures on it; in particular, it is a principal bundle in the category of groupoids, or a groupoid in the category of principal bundles (in fact, one can enhance the construction we make to obtain a $\Pi_1(S^1)$-module in the category of principal bundles with a connection, since the Heisenberg manifold is naturally a principal bundle with connection over $T^2$).
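\vspace{3mm}\\Before turning to these structures, we record a direct computation of the Heisenberg group law, which is used implicitly in the identifications below: \begin{align*} \begin{pmatrix} 1 & a & c\\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}\cdot \begin{pmatrix} 1 & a' & c'\\ 0 & 1 & b' \\ 0 & 0 & 1 \end{pmatrix}= \begin{pmatrix} 1 & a+a' & c+c'+ab'\\ 0 & 1 & b+b' \\ 0 & 0 & 1 \end{pmatrix}\,. \end{align*} In particular, right multiplication by an integral element (with entries $a'=n\,,b'=m\,,c'=k$ for $n\,,m\,,k\in\mathbb{Z}$) sends the matrix with entries $(a,b,c)$ to the one with entries $(a+n,b+m,c+k+am)\,.$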
\vspace{3mm}\\The Heisenberg manifold, denoted $H_M\,,$ is the quotient of the Heisenberg group by the right action of the integral Heisenberg subgroup on itself, i.e.\ we make the identification \begin{align*} \begin{pmatrix} 1 & a & c\\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}\sim \begin{pmatrix} 1 & a+n & c+k+am\\ 0 & 1 & b+m \\ 0 & 0 & 1 \end{pmatrix}\,, \end{align*} where $a\,,b\,,c\in\mathbb{R}$ and $n\,,m\,,k\in\mathbb{Z}\,.$ \\\\$H_M$ is a principal $S^1$-bundle over $T^2$ by projecting onto $(a,b)\,.$ Furthermore, we get a $T^2$-bundle over $S^1$ by projecting onto $b\,,$ making $H_M$ into a family of abelian groups over $S^1\,.$ \\\\More explicitly, the product associated to the bundle $H_M\to S^1$ is given by \begin{align*} \begin{pmatrix} 1 & a & c\\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}\cdot \begin{pmatrix} 1 & a' & c'\\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}= \begin{pmatrix} 1 & a+a' & c+c'\\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}\,. \end{align*} Putting this together, we have the following diagram: \begin{equation}\label{hei} \begin{tikzcd} H_M \arrow[r, shift right] \arrow[r, shift left] \arrow[d] & S^1 \arrow[d] \\ T^2 \arrow[r, shift right] \arrow[r, shift left] & S^1 \end{tikzcd} \end{equation} Here, the principal bundle on the right is the trivial principal bundle (the map is the identity), and the groupoid on the bottom is the trivial $S^1$ family of abelian groups over $S^1\,.$ \vspace{3mm}\\As mentioned earlier, we can enhance \eqref{hei} with connections to obtain a diagram of the following form: \begin{equation} \begin{tikzcd} H_M \arrow[d, "{(S^1\text{-principal bundle},\nabla_2)}"'] \arrow[rrr, "{(T^2\text{-family of groups},\nabla_1)}", shift left] \arrow[rrr, shift right] & & & S^1 \arrow[d, "\text{Trivial principal bundle}"] \\ T^2 \arrow[rrr, shift left] \arrow[rrr, "\text{Trivial } S^1\text{-family of groups}"', shift right] & & & S^1 \end{tikzcd} \end{equation} In the bottom row and right column, the connections are the trivial ones on the
trivial bundles. The connection on the top row is flat, making $H_M$ into a $\Pi_1(S^1)$-module, and the connection on the left is the one associated with the quantization of $T^2\,.$ One might say that the quantization of $T^2$ is a $\Pi_1(S^1)$-module. \section{The Canonical Module Associated to a Complex Manifold and Divisor} Given a complex manifold $X$ and a (simple normal crossing) divisor $D\,,$ we construct a natural module for the Lie groupoid $\text{Pair}(X,D)$ (which is the terminal integration of $T_X(-\log{D})\,,$ the Lie algebroid whose sheaf of sections is the sheaf of sections of $T_X$ which are tangent to $D)\,.$ These are modules for which the underlying surjective submersion does not define a fiber bundle, and in particular the underlying family of abelian groups is not locally trivial. Generically the fiber will be $\mathbb{C}^*\,,$ but over $D$ the fibers will degenerate to $\mathbb{C}^*{\mathop{\times}} \mathbb{Z}^k\,,$ for some $k$ depending on the point of $D\,.$ \subsection{The Module \texorpdfstring{$\mathbb{C}^*_{\mathbb{C}}(*\{0\})$}{C}} Here we will do a warm-up example for the general case to come in the next section. More precisely, we will construct a family of abelian groups whose sheaf of sections is isomorphic to the sheaf of nonvanishing meromorphic functions with a possible pole or zero only at the origin, and we will show that it is naturally a module for the terminal groupoid integrating $T_\mathbb{C}(-\log{\{0\}})\,,$ the Lie algebroid whose sheaf of sections is isomorphic to the sheaf of sections of $T\mathbb{C}$ vanishing at the origin. This space was defined in~\cite{luk}. \\\\Consider the action groupoid $\mathbb{C}^*\ltimes\mathbb{C}\rightrightarrows\mathbb{C}\,,$ where the action of $\mathbb{C}^*$ on $\mathbb{C}$ is given by \begin{align*} a\cdot x=ax\,.
\end{align*}This is the terminal groupoid integrating $T_\mathbb{C}(-\log{\{0\}})\,.$ We will construct a module for this groupoid as follows: consider the family of abelian groups given by \begin{align*} {\mathbb{C}}{\mathop{\times}}\mathbb{C}^*{\mathop{\times}}\mathbb{Z}\overset{p_1}{\to}\mathbb{C}\,. \end{align*} This family of abelian groups is a $\mathbb{C}^*\ltimes\mathbb{C}$-module with action given by \begin{align*} (a,x)\cdot(x,y,i)=(ax,a^{-i}y,i)\,. \end{align*} There is a submodule given by \begin{align*} \mathbb{C}{\mathop{\times}}\mathbb{Z}\backslash \{(0,j):j\ne 0\}\overset{p_1}{\to}\mathbb{C}\,, \end{align*} where the embedding into ${\mathbb{C}}{\mathop{\times}}\mathbb{C}^*{\mathop{\times}}\mathbb{Z}$ is given by $(x,j)\mapsto (x,x^{-j},j)\,,$ for $x\ne 0\,,$ and $(0,0)\mapsto (0,1,0)\,.$ We can then form the quotient to get another module, denoted $\mathbb{C}^*_{\mathbb{C}}(*\{0\})\,.$ Formally, we have the following: \theoremstyle{definition}\begin{definition} We define the space $\mathbb{C}^*_{\mathbb{C}}(*\{0\})$ as \begin{align*} \mathbb{C}^*_{\mathbb{C}}(*\{0\}):= {\mathbb{C}}{\mathop{\times}}\mathbb{C}^*{\mathop{\times}}\mathbb{Z}/\sim\,,\,(x,y,i)\sim (x,x^{-j}y,i+j)\,,\;x\ne 0\,. \end{align*} $\blacksquare$\end{definition} \begin{proposition}The space $\mathbb{C}^*_{\mathbb{C}}(*\{0\})$ is a complex manifold and there is a holomorphic surjective submersion $\pi:\mathbb{C}^*_{\mathbb{C}}(*\{0\})\to\mathbb{C}$ given by $\pi(x,y,i)=x\,.$ The space $\mathbb{C}^*_{\mathbb{C}}(*\{0\})$ is a family of abelian groups with product defined by \begin{align*} (x,y,i)\cdot(x,y',j)=(x,yy',i+j)\,.
\end{align*} It is a $\mathbb{C}^*\ltimes\mathbb{C}$-module with action given by \begin{align*} (a,x)\cdot(x,y,i)=(ax,a^{-i}y,i)\,, \end{align*} and there is a short exact sequence of modules given by \begin{align*} 0\to \mathbb{C}{\mathop{\times}}\mathbb{Z}\backslash \{(0,j):j\ne 0\}\to {\mathbb{C}}{\mathop{\times}}\mathbb{C}^*{\mathop{\times}}\mathbb{Z}\to \mathbb{C}^*_{\mathbb{C}}(*\{0\})\to 0\,. \end{align*} The fiber of $\mathbb{C}^*_{\mathbb{C}}(*\{0\})$ over a point $x\ne 0$ is isomorphic to $\mathbb{C}^*\,,$ and the fiber over $x=0$ is isomorphic to $\mathbb{C}^*{\mathop{\times}}\mathbb{Z}\,.$ \end{proposition} \begin{proof} We prove that it is a complex manifold. First we show that we can cover the space with charts whose transition functions are holomorphic. For each $i\in\mathbb{Z}\,,$ we get a chart given by $\mathbb{C}\times\mathbb{C}^*\,,$ taking $(x,y,i)\mapsto (x,y)\,.$ On the intersection of the $i$th and $j$th coordinate charts, the transition function is given by $(x,y)\mapsto (x,x^{i-j}y)\,,$ which is holomorphic. \newline\newline To prove the space is Hausdorff, we observe that away from $x=0\,,$ the space is just $\mathbb{C}^*\times\mathbb{C}^*\,.$ Now take two points $(0,y,i)\,,(x,y',j)\,,$ $x\ne 0\,.$ We get disjoint neighborhoods of these points by choosing small enough neighborhoods $U_i\,, U_j\,,$ such that the projections onto the $x$-coordinate are disjoint.
Now given two distinct points $(0,y,i)\,,(0,y',j)\,,$ with $j>i\,,$ we obtain two disjoint neighborhoods by choosing $\varepsilon>0$ such that $|x^{i-j}y|>|y'|$ whenever $0<|x|<\varepsilon\,,$ and then choosing small enough disks around $y\,,y'\,.$ Now suppose we take two distinct points $(0,y,i)\,,(0,y',i)\,.$ We get two disjoint neighborhoods by choosing disjoint neighborhoods of $y\,,y'\in\mathbb{C}^*\,,$ and taking all $x\in\mathbb{C}\,.$ \end{proof} \begin{proposition}\label{sheaf identification}The sheaf $\mathcal{O}(\mathbb{C}^*_{\mathbb{C}}(*\{0\}))$ (where sections here are taken to be holomorphic) is isomorphic to the sheaf of meromorphic functions on $\mathbb{C}$ with poles or zeroes only at $x=0\,,$ denoted $\mathcal{O}^*(*\{0\})\,.$ \end{proposition} \begin{proof} Consider the morphism of sheaves defined as follows: for an open set $U\subset\mathbb{C}$ and a holomorphic section $s(x)=(x,f(x),i)$ of $\mathbb{C}^*_{\mathbb{C}}(*\{0\})$ over $U\,,$ define a meromorphic function on $U\,,$ with a possible pole/zero only at $x=0\,,$ by $x^if(x)\,,\,x\in U\,.$ This map is an isomorphism of sheaves. \end{proof} Now to any $G$-module there is an associated $G$-representation, and the representation associated to $\mathbb{C}^*_{\mathbb{C}}(*\{0\})$ is the trivial one, i.e.\ $\mathfrak{m}\cong\mathbb{C}{\mathop{\times}}\mathbb{C}$ with the projection map being the projection onto the first factor, and the action of $\mathbb{C}^*\ltimes\mathbb{C}$ is given by \begin{align*} (a,x)\cdot(x,y)=(ax,y)\,.
\end{align*} We identify $\mathfrak{m}$ with points $(x,y,0)\in\mathbb{C}{\mathop{\times}}\mathbb{C}{\mathop{\times}}\mathbb{Z}\,,$ where the second $\mathbb{C}$ is identified with the Lie algebra of $\mathbb{C}^*\,.$ The sheaf of sections of $\mathfrak{m}$ is naturally isomorphic to the sheaf of $\mathbb{C}$-valued functions on $\mathbb{C}\,.$ \begin{proposition}The Chevalley-Eilenberg complex associated to $\mathbb{C}^*_{\mathbb{C}}(*\{0\})$ is isomorphic to the complex \begin{align*} \mathcal{O}_\mathbb{C}^*(*\{0\})\overset{\mathrm{dlog}}{\to}\Omega^1_\mathbb{C}(\log{\{0\}})\,. \end{align*} \end{proposition} \begin{proof}We will compute $\mathrm{d}_\text{CE}\,\mathrm{log}:$ consider the meromorphic function $x^nf(x)\,,$ $x\in U\,,$ where $f$ is holomorphic and nonvanishing. We identify it with the local section of $\mathbb{C}^*_{\mathbb{C}}(*\{0\})$ given by $s(x)=(x,f(x),n)\,.$ Now the anchor map is given by \begin{align*} \alpha:\mathrm{Lie}(\mathbb{C}^*\ltimes\mathbb{C})\to T\mathbb{C}\,,\,\alpha(\partial_x,x)=x\partial_x\,. \end{align*} Then we can compute that \begin{align*} &\tilde{L}_{(\partial_x,x)}s(x)=\frac{d}{d\varepsilon}\Big\vert_{\varepsilon=0}\,(x,e^{n\varepsilon}f(e^{\varepsilon}x)f(x)^{-1},0) =(x,n+xf'(x)f(x)^{-1},0) \\&=(x,\mathrm{dlog} (x^nf)\,(x\partial_x),0)=(x,\mathrm{dlog} (x^nf)\,\alpha(\partial_x,x),0)\,, \end{align*} so $x^nf$ differentiates to $\mathrm{dlog} (x^nf)\,,$ so that $\mathrm{d}_\text{CE}\,\mathrm{log}$ corresponds to $\mathrm{dlog}$ under the identification of sheaves used in Proposition~\ref{sheaf identification}. This completes the proof. \end{proof} \subsection{The Module \texorpdfstring{$\mathbb{C}^*_X(*D)$ and Pair$(X,D)$}{the module}} Here we will generalize the construction in the previous section to arbitrary complex manifolds and smooth divisors. \begin{proposition} Let $X$ be a complex manifold of complex dimension $n\,,$ and let $D$ be a smooth divisor.
Then there is a canonical family of abelian groups $\mathbb{C}^*_X(*D)\to X$ such that $\mathcal{O}(\mathbb{C}^*_X(*D))$ (where sections here are taken to be holomorphic) is isomorphic to $\mathcal{O}^*(*D)\,,$ the sheaf of nonvanishing meromorphic functions with poles or zeros only on $D\,.$ \end{proposition} \begin{proof}We can construct a family of abelian groups as follows: choose an open cover $\{\mathbb{D}^n_i\}_i$ of $X$ by polydiscs $($i.e.\ $\mathbb{D}_i=\{z\in\mathbb{C}:|z|<1\})\,,$ with coordinates $(x_{i,1},\mathbf{x}_i)=(x_{i,1}\,,x_{i,2}\,,\ldots\,,x_{i,n})$ on $\mathbb{D}^n_i\,,$ in such a way that \begin{align*}D\cap \mathbb{D}^n_i=\{x_{i,1}=0\}\,. \end{align*} Then on $\mathbb{D}^n_i$ form the family of abelian groups $\mathbb{D}^n_i{\mathop{\times}}\mathbb{C}^*{\mathop{\times}}\mathbb{Z}/\sim\,,$ where \begin{align*} (x_{i,1},\mathbf{x}_i,y,k)\sim (x_{i,1},\mathbf{x}_i,x_{i,1}^{-l}y,k+l) \text{ for } x_{i,1}\ne 0\,, \end{align*} where the surjective submersion is given by the projection onto $(x_{i,1},\mathbf{x}_i)\,,$ and where the product is given by \begin{align*} (x_{i,1},\mathbf{x}_i,y,k)\cdot (x_{i,1},\mathbf{x}_i,y',l)=(x_{i,1},\mathbf{x}_i,yy',k+l)\,. \end{align*} We can glue these families of abelian groups together in the following way: on $\mathbb{D}^n_i\cap\mathbb{D}^n_j$ we have a nonvanishing holomorphic function $g_{ij}$ satisfying $x_{j,1}=g_{ij}x_{i,1}\,.$ Now let \begin{align*} (x_{i,1},\mathbf{x}_i,y,k)\sim(x_{j,1},\mathbf{x}_j,g_{ij}^{-k}y,k)\,. \end{align*} This gluing preserves the fiberwise group structure, hence we obtain a family of abelian groups, denoted \begin{align*} \mathbb{C}^*_X(*D)\overset{\pi}{\to}X\,.
\end{align*} As in the previous section, where this was done for $(X,D)=(\mathbb{C},\{0\})\,,$ the sheaf $\mathcal{O}(\mathbb{C}^*_X(*D))$ is isomorphic to $\mathcal{O}^*(*D)\,.$ \end{proof} \begin{proposition}[see~\cite{pym}] There is a terminal integration, denoted $\text{Pair}(X,D)\,,$ of $T_X(-\log{} D)\,,$ the Lie algebroid whose sheaf of sections is isomorphic to the sheaf of sections of $T_X$ which are tangent to $D\,.$ \end{proposition} \begin{proof}The terminal integration, $\text{Pair}(X,D)\,,$ can be described locally as follows (here the notation is as in the previous proposition): the set of morphisms $\mathbb{D}^n_i\to\mathbb{D}^n_j$ is given by all \begin{align*} &(a,\mathbf{x}_j,x_{i,1},\mathbf{x}_i)\in\mathbb{C}^*{\mathop{\times}}\mathbb{D}_j^{n-1}{\mathop{\times}}\mathbb{D}_i{\mathop{\times}}\mathbb{D}_i^{n-1} \\&\text{such that } (ax_{i,1},\mathbf{x}_j)\in \mathbb{D}^n_j\,. \end{align*} The source, target and multiplication maps are: \begin{align*} &s(a,\mathbf{x}_j,x_{i,1}\,,\mathbf{x}_i)=(x_{i,1}\,,\mathbf{x}_i)\in \mathbb{D}_i^{n}\,, \\&t(a,\mathbf{x}_j,x_{i,1}\,,\mathbf{x}_i)=(ax_{i,1},\mathbf{x}_j)\in\mathbb{D}_j^{n}\,, \\&(a',\mathbf{x}_k,ax_{i,1},\mathbf{x}_j)\cdot(a,\mathbf{x}_j,x_{i,1}\,,\mathbf{x}_i) \\&=(a'a,\mathbf{x}_k,x_{i,1}\,,\mathbf{x}_i)\in\mathbb{C}^*{\mathop{\times}}\mathbb{D}_k^{n-1}{\mathop{\times}}\mathbb{D}_i{\mathop{\times}}\mathbb{D}_i^{n-1}\,. \end{align*} The gluing maps on the groupoid are induced by the gluing maps on $X\,,$ that is, \begin{align*} &(a,\mathbf{x}_j,x_{i,1}\,,\mathbf{x}_i)\sim \Big(a\frac{g_{jl}(ax_{i,1})}{g_{ik}(x_{i,1})},\mathbf{x}_l,x_{k,1}\,,\mathbf{x}_k\Big) \end{align*} if \begin{align*} &(x_{i,1},\mathbf{x}_i)\in \mathbb{D}_i^{n}\sim (x_{k,1},\mathbf{x}_k)\in \mathbb{D}_k^{n}\,, \\&(ax_{i,1},\mathbf{x}_j)\in \mathbb{D}_j^{n}\sim (x_{l,1},\mathbf{x}_l)\in \mathbb{D}_l^{n}\,.
\end{align*} \end{proof} \begin{proposition}The morphism \begin{align}\label{TX-module} \mathrm{dlog}:\mathcal{O}^*(*D)\to \Omega_X^1(\log{} D) \end{align} endows $\mathbb{C}^*_X(*D)$ with the structure of a $T_X(-\log D)$-module, and this structure integrates to give $\mathbb{C}^*_X(*D)$ the structure of a $\mathrm{Pair}(X,D)$-module. \end{proposition} \begin{proof}Define an action of $\mathrm{Pair}(X,D)$ on $\mathbb{C}^*_X(*D)$ as follows (the notation is as in the previous two propositions): \begin{align*} &(a,\mathbf{x}_j,x_{i,1}\,,\mathbf{x}_i)\cdot(x_{i,1},\mathbf{x}_i,y,k) \\&=(ax_{i,1},\mathbf{x}_j,a^{-k}y,k)\,. \end{align*} This is a well-defined action by fiberwise isomorphisms, and it indeed differentiates to the $T_X(-\log{}D)$-module defined by \eqref{TX-module}. \end{proof} Essentially the same construction can be done in the case that $D$ is a simple normal crossing divisor. In a neighborhood $U$ of a point of the divisor, with $U$ biholomorphic to a polydisc, we can choose coordinates $\mathbf{x}=(x_1,\ldots,x_n)$ on $\mathbb{D}^n$ such that the simple normal crossing divisor is given by $x_1\cdots x_k=0\,.$ Then \begin{align*} & \mathbb{C}^*_X(*D)\vert_{\mathbb{D}^n}=\mathbb{D}^n{\mathop{\times}}\mathbb{C}^*{\mathop{\times}}\mathbb{Z}^k/\sim\,, \,(\mathbf{x},y,\mathbf{i})\sim (\mathbf{x},x_{j_1}^{-m_{j_1}}\cdots x_{j_l}^{-m_{j_l}}y,\mathbf{i}+\mathbf{m}) \\&\text{away from } x_{j_1}\cdots x_{j_l}=0\,, \text{ where } j_1,\ldots,j_l\in \{1,\ldots,k\} \\&\text{and where } m_{j_1},\ldots, m_{j_l}\text{ are the nonzero components of }\mathbf{m}\in\mathbb{Z}^k\,.
\end{align*} Alternatively, it can locally be described as \begin{align*} \mathbb{C}^*_X(*{\{x_1=0\}})\otimes_{\mathbb{C}^*}\cdots\otimes_{\mathbb{C}^*}\mathbb{C}^*_X(*{\{x_k=0\}})\,, \end{align*} where the $\mathbb{C}^*$-action is the one induced by the action of $\mathbb{C}^*_U$ on $\mathbb{C}^*_X(*D)\,,$ which comes from the embedding $\mathbb{C}^*_U\xhookrightarrow{}\mathbb{C}^*_X(*D)\,.$ \newline\newline To summarize this section, we have proven the following: \begin{theorem} Let $X$ be a complex manifold and let $D$ be a simple normal crossing divisor. There is a family of abelian groups \begin{align*} \mathbb{C}^*_X(*D)\overset{\pi}{\to}X \end{align*} whose sheaf of holomorphic sections is isomorphic to $\mathcal{O}_X^*(*D)\,.$ Furthermore, there is a canonical action of $\mathrm{Pair}(X,D)$ on $\mathbb{C}^*_X(*D)$ making it into a $\mathrm{Pair}(X,D)$-module, and this module structure integrates the canonical $T_X(-\log{}D)$-module structure on $\mathbb{C}^*_X(*D)$ induced by the morphism \begin{align*} \mathrm{dlog}:\mathcal{O}_X^*(*D)\to \Omega_X^1(\log{} D)\,. \end{align*} \end{theorem} \section{Integration of Cohomology Classes by Prequantization} In this section we describe an alternative approach to integrating classes in Lie algebroid cohomology that may sometimes be used, and which does not directly involve the van Est map (more accurately, this method can be combined with the previous one). We call it integration by prequantization because, in the case that the Lie algebroid is the tangent bundle and one is trying to integrate a $2$-form $\omega\,,$ this method uses the line bundle whose first Chern class is the cohomology class of $\omega\,.$ We will first describe this method and then give some examples.
\\\\Suppose we have a $G$-module $N$ and we are interested in integrating a class in the cohomology of the truncated complex, $\alpha\in H^*_0(\mathfrak{g},N)\,.$ Now suppose we have a $G$-module $M$ such that $\mathfrak{m}=\mathfrak{n}\,,$ and such that there is a map $N\to M$ of $G$-modules which differentiates to the identity map on $\mathfrak{n}\,.$ In this case the morphism \[ \mathcal{O}(M)\xrightarrow{\text{d}_{\text{CE}}\text{log}}\mathcal{C}^1(\mathfrak{g},M)\cong \mathcal{C}^1(\mathfrak{g},N) \] induces a morphism \begin{align*} H^*(G^0,\mathcal{O}(M))\to H_0^*(\mathfrak{g},N)\,. \end{align*} Then one can try to lift $\alpha$ to a class $\tilde{\alpha}\in H^*(G^0,\mathcal{O}(M))\,.$ If a lift can be found, then one can attempt to integrate $\alpha$ to a class in $H^*_0(G,N)$ by showing that $\delta^*\tilde{\alpha}$ is in the image of the map $H^*_0(G,N)\to H^*_0(G,M)\,.$ If this succeeds then this class in $H^*_0(G,N)$ integrates $\alpha\,.$ We can summarize this method with the following proposition: \begin{proposition} Let $G$ be a Lie groupoid, and let $N,M$ be $G$-modules with the same underlying Lie algebroid $\mathfrak{n}\,.$ Suppose further that there is a map of $G$-modules $f:N\to M$ which differentiates to the identity on $\mathfrak{n}$ (in particular this means that the Lie algebroids of $N$ and $M$ are the same as $G$-representations). The following diagram is commutative: \[ \begin{tikzcd}[row sep=3em, column sep = 3em] & H^*_0(G,M) \arrow[bend left=60,"VE_0"]{dd} \\ H^*_0(G,N)\arrow{ur}{f}\arrow{d}{VE_0}& H^*(G^0,\mathcal{O}(M))\arrow{u}{\delta^*}\arrow{d}{\mathrm{d}_{\mathrm{CE}}\mathrm{log}} \\H^*_0(\mathfrak{g},N)\arrow{r}{\cong}& H^*_0(\mathfrak{g},M) \end{tikzcd} \] \end{proposition} \theoremstyle{definition}\begin{exmp} Let $X$ be a manifold and let $\omega$ be a closed $2$-form which has integral periods. Then there is a class $g\in H^1(X,\mathcal{O}^*)$ which lifts $\omega\,,$ i.e.\ a principal $\mathbb{C}^*$-bundle.
We then have that $\delta^*g\in H^1_0(\textrm{Pair}(X),\mathbb{C}^*_X)$ integrates $\omega\,.$ \end{exmp} \theoremstyle{definition}\begin{exmp} Consider the trivial $(\mathbb{C}^*\ltimes\mathbb{C}\rightrightarrows \mathbb{C})$-module $\mathbb{C}^*_{\mathbb{C}}\,,$ and let $\mathfrak{g}$ be its Lie algebroid. Consider the class in $H^0_0(\mathfrak{g},\mathbb{C}^*_{\mathbb{C}})$ given by $\frac{dz}{z}\,.$ This class is not in the image of \[ \mathcal{O}^*_{\mathbb{C}}\xrightarrow{\mathrm{d}\mathrm{log}}\mathcal{C}^1(\mathfrak{g},\mathbb{C}^*_{\mathbb{C}})\,. \] However, $\mathbb{C}^*_{\mathbb{C}}\xhookrightarrow{} \mathbb{C}^*_{\mathbb{C}}(*\{0\})$ $($where $\mathbb{C}^*_{\mathbb{C}}(*\{0\})$ is as in the previous section$)\,,$ and they have the same Lie algebroids, and in addition the class $\frac{dz}{z}$ is in the image of \[ \mathcal{O}^*_{\mathbb{C}}(*D)\xrightarrow{\mathrm{d}_{CE}\mathrm{log}}\mathcal{C}^1(\mathfrak{g},\mathbb{C}^*_{\mathbb{C}})\,, \] namely $\mathrm{d}\mathrm{log}\,z=\frac{dz}{z}\,.$ We then have that $\delta^*z\,(a,z)=a\,,$ which is $\mathbb{C}^*$-valued. Hence the morphism $(a,z)\mapsto a$ integrates $\frac{dz}{z}\,.$ \end{exmp} To get examples involving the integration of extensions, we have the following proposition: \begin{proposition} Let $X$ be a complex manifold with smooth divisor $D\,,$ and let $\Pi_1(X,D)\rightrightarrows X$ be the source simply connected integration of $T_X(-\log{D})\,.$ Then the subgroup of classes in $H^1_0(T_X(-\log{D}),\mathbb{C}_{X})$ which are integral on $X\backslash D$ embeds into $H^1_0(\Pi_1(X,D),\mathbb{C}^*_{X})\,.$ \end{proposition} \begin{proof} Let $\omega\in H^1_0(T_X(-\log{D}),\mathbb{C}_X)$ be a class which is prequantizable, which means that it is in the image of the map \[ H^1(X,\mathcal{O}^*_X(*D))\to H^1_0(T_X(-\log{D}),\mathbb{C}_X)\,. 
\] It is proved in~\cite{luk} that this is equivalent to $\omega$ being integral on $X\backslash D\,.$ \\\\There is a short exact sequence of $\Pi_1(X,D)$-modules \begin{align*} 0\to \mathbb{C}^*_X\overset{\iota}{\to} \mathbb{C}^*_X(*D)\overset{\pi}{\to} \mathrm{\acute{e}t}(\iota_*\mathcal{O}(\mathbb{Z}_D))\to 0 \end{align*} $($where $\iota:D\to X$ is the inclusion and \'{e}t means the \'{e}tal\'{e} space, which may be non-Hausdorff, but this is fine $)\,.$ From this we get the long exact sequence \begin{align*} &H^0_0(\Pi_1(X,D),\mathrm{\acute{e}t}(\iota_*\mathcal{O}(\mathbb{Z}_D)))\to H^1_0(\Pi_1(X,D),\mathbb{C}^*_X)\to H^1_0(\Pi_1(X,D),\mathbb{C}^*_X(*D)) \\&\to H^1_0(\Pi_1(X,D),\mathrm{\acute{e}t}(\iota_*\mathcal{O}(\mathbb{Z}_D)))\,. \end{align*} Now $H^0_0(\Pi_1(X,D),\mathrm{\acute{e}t}(\iota_*\mathcal{O}(\mathbb{Z}_D)))=0\,:$ a morphism of groupoids must be $0$ on the identity bisection, and since the fibers of $\mathrm{\acute{e}t}(\iota_*\mathcal{O}(\mathbb{Z}_D))$ are discrete and the source fibers of $\Pi_1(X,D)$ are connected, any such morphism must be identically $0\,.$ So we get the long exact sequence \begin{align*} 0 \to H^1_0(\Pi_1(X,D),\mathbb{C}^*_X)\to H^1_0(\Pi_1(X,D),\mathbb{C}^*_X(*D))\to H^1_0(\Pi_1(X,D),\mathrm{\acute{e}t}(\iota_*\mathcal{O}(\mathbb{Z}_D)))\,. \end{align*} If we let $\alpha\in H^1(X,\mathbb{C}_X^*(*D))\,,$ then $t^*\alpha-s^*\alpha\in H^1_0(\Pi_1(X,D),\mathbb{C}_X^*(*D))\,,$ and \begin{align*} \pi(t^*\alpha-s^*\alpha)=t^*\pi(\alpha)-s^*\pi(\alpha)=0\,, \end{align*} where the latter equality follows from the fact that $\pi(\alpha)$ is a module for the full subgroupoid over $D\,,$ which in turn follows from the following: there is a morphism from the full subgroupoid over $D$ to $\Pi_1(D)\,,$ and $\pi(\alpha)$ is a module for $\Pi_1(D)$ since $\pi(\alpha)$ is a local system.
\\\\Hence there is a unique lift of $\alpha$ to $H^1_0(\Pi_1(X,D),\mathbb{C}^*_X)\,.$ Therefore all of the prequantizable classes in $H^1_0(T_X(-\log{D}),\mathbb{C}_X)$ integrate to classes in $H^1_0(\Pi_1(X,D),\mathbb{C}^*_X)\,.$ \end{proof} What this proposition means is that any closed logarithmic $2$-form on a complex manifold $X$ with smooth divisor $D\,,$ which has integral periods on $X\backslash D\,,$ defines a $\mathbb{C}^*$-groupoid extension of $\Pi_1(X,D)\,.$ \theoremstyle{definition}\begin{exmp}We can specialize to the case where $X=\mathbb{P}^2$ and $D$ is a smooth projective curve of degree $\ge 3$ and genus $g$ in $\mathbb{P}^2\,.$ Then as proved in~\cite{luk}, the prequantizable subgroup of $H^1_0(T_{\mathbb{P}^2}(-\log{D}),\mathbb{C}^*_X)$ is isomorphic to $\mathbb{Z}^{2g}\,.$ Hence $\mathbb{Z}^{2g}\xhookrightarrow{}H^1_0(\Pi_1(\mathbb{P}^2,D),\mathbb{C}^*_{\mathbb{P}^2})\,.$ \end{exmp} \part{Van Est Theory on Geometric Stacks} \chapter{The (2,1)-Category of Lie Groupoids and Stacks} Here we will briefly describe the (2,1)-category of Lie groupoids and the (2,1)-category of geometric stacks. \section{(2,1)-Category of Lie groupoids} Let's start with the (2,1)-category of Lie groupoids: \begin{itemize} \item The objects are Lie groupoids \item The morphisms are homomorphisms of Lie groupoids \item The 2-morphisms are smooth (holomorphic) natural transformations \item The weak equivalences are Morita maps \end{itemize} There are two notions of fiber product: one coming from the category of Lie groupoids, which we will call the strong fiber product, and the other coming from the (2,1)-category of Lie groupoids, which we will call the fiber product. They are defined as follows: \begin{definition} Given two homomorphisms of Lie groupoids $f_1:H\to G\,,f_2:K\to G\,,$ we get a third groupoid by taking the fiber product at the level of objects and morphisms: \begin{equation} H^{(1)}\times_{G^{(1)}}K^{(1)}\rightrightarrows H^0\times_{G^0}K^0\,.
\end{equation} If the resulting groupoid is a Lie groupoid, we call it the strong fiber product (or strong pullback), denoted $H\times_{G!}K$ or $f_2^!H\,.$ This is in particular the case if the maps at the level of objects and arrows are transversal. \end{definition} Now given a morphism of Lie groupoids $f:H\to G$ and an object $g^0\in G^0\,,$ we call $f^{-1}(g^0)$ the kernel of $f$ over $g^0$ (assuming this kernel exists). Thinking of $\{g^0\}\rightrightarrows \{g^0\}$ as the trivial Lie groupoid, it comes with a natural map into $G\,,$ and the kernel of $f$ over $g^0$ is equivalently given by the strong fiber product $H\times_{G!}\{g^0\}\,.$ We make the following definition: \begin{definition} Given a map $f:H\to G$ of Lie groupoids and an object $g^0\in G^0\,,$ the kernel of $f$ over $g^0\,,$ if it exists, is given by $f^{-1}(g^0)\,,$ or equivalently it is given by $H\times_{G!}\{g^0\}\,.$ \end{definition} Now the second definition of fiber product, which is the main one we will be using and is the one that has the right universal property in the (2,1)-category of Lie groupoids, is given by the following: \begin{definition} Given two homomorphisms of Lie groupoids $f_1:H\to G\,,f_2:K\to G\,,$ we get a third groupoid as follows (see \cite{Moerdijk}): \begin{itemize} \item The objects are triples $(h^0,g,k^0)\in H^0\times G^{(1)}\times K^0$ where $g$ is an arrow $f_1(h^0)\to f_2(k^0)\,.$ \item An arrow between the objects $(h^0,g,k^0)\to (h'^{ 0},g',k'^{ 0})$ is given by a pair $(h,k)\in H^{(1)}\times K^{(1)}$ such that $h\,,k$ are arrows from $h^0\to h'^0\,,k^0\to k'^0\,,$ respectively, and such that $g'\,f_1(h)=f_2(k)\,g\,.$ \end{itemize} If this groupoid is a Lie groupoid, it will be called the fiber product (or pullback), and denoted $H\times_G K$ or $f_1^*K\,.$ This will be a Lie groupoid as long as the space of objects is a manifold.
This is in particular the case if $t\circ p_2:H^0\times_{G^0}G^{(1)}\to G^0$ is a submersion, where $p_2:H^0\times_{G^0}G^{(1)}\to G^{(1)}$ is the projection onto the second factor. \end{definition} Now given a morphism $f:H\to G$ of Lie groupoids, we can ask what the fibers of the map are. Fibers exist only over objects in $G^0\,,$ and the fiber over an object is taken with respect to the inclusion $g^0\xhookrightarrow{} G\,.$ \begin{definition} Let $f:H\to G$ be a map of Lie groupoids and let $g^0\xhookrightarrow{} G$ be an object in $G^0\,.$ We can consider the trivial Lie groupoid $\{g^0\}\rightrightarrows \{g^0\}\,,$ and this comes with a morphism into $G\,.$ Assuming the fiber product $H\times_G \{g^0\}$ exists, we call it the fiber of $f$ over $g^0\,.$ \end{definition} \subsection{Computing Fiber Products} Here we will collect some basic results about fibers and fiber products: \begin{exmp} If $f:H\to G$ is a homomorphism of Lie groups, then there is only one object, hence only one fiber, and it is given by $H\ltimes G\rightrightarrows G\,.$ If $H\xhookrightarrow{}G\,,$ then the fiber is Morita equivalent to $G/H\,.$ In particular, if $H\xhookrightarrow{}G$ is the maximal compact subgroup, then the fiber is contractible. \end{exmp} \begin{exmp} If $f:Y\to X$ is a map of smooth manifolds, thought of as groupoids, then the fibers are just the fibers as maps between manifolds (if they exist as smooth manifolds). \end{exmp} \begin{proposition} If $f:H\to G$ is a Morita map, then the fibers are all pair groupoids, hence the fibers are all Morita equivalent to a point. \end{proposition} \begin{exmp} If $f:G\to X$ is a map from a groupoid to a manifold, then the fiber (if it exists) over a point $x\in X$ is just the kernel $f^{-1}(x)\,.$ \end{exmp} \begin{exmp} If $f:X\to G$ is a map from a manifold to a groupoid, then the fiber over a point $g^0\in G^0$ is the manifold $X\sideset{_{f}}{_{s}}{\mathop{\times}} t^{-1}(g^0)\,.$ In particular, if $X=G^0\,,$ then the fibers are just the target fibers.
\end{exmp} \begin{proposition}\label{morita of fibers} Given morphisms of Lie groupoids $f_1:H\to G\,,f_2:K\to G\,,$ there is a canonical morphism of Lie groupoids $H\times_{G!}K\xhookrightarrow{}H\times_G K\,,$ assuming they exist. In particular, given a morphism $f:H\to G$ and an object $g^0\in G^0\,,$ there is a natural inclusion $f^{-1}(g^0)\xhookrightarrow{}H\times_G\{g^0\}\,.$ In \Cref{equiv1}, \Cref{equiv2}, we give conditions under which these inclusions are Morita equivalences. \end{proposition} \begin{proposition} Suppose $F:H\to G$ is a Lie groupoid homomorphism and $g$ is an arrow $g^0\to g'^0\,.$ If the fiber over $g^0$ exists, then so does the fiber over $g'^0\,,$ and they are isomorphic. \begin{proof} At the level of objects, the isomorphism is given by $(h^0,g')\to (h^0,gg')\,.$ The morphisms are already naturally identified. \end{proof} \end{proposition} \begin{corollary} Suppose $F:H\to G$ is a Lie groupoid homomorphism such that $G$ is a transitive groupoid. Then if one fiber of $F$ exists, they all exist and are all isomorphic.
\end{corollary} \begin{exmp} If $f:H\to G$ is a homomorphism, then $G^0\times_{G} H=H\ltimes P\rightrightarrows P\,,$ where $P$ is the bibundle associated to the morphism $f\,.$ We can also consider the map $f\vert_{H^0}:H^0\to G^0\xhookrightarrow{}G\,,$ where we consider $H^0\rightrightarrows H^0\,, G^0\rightrightarrows G^0$ to be the trivial Lie groupoids, and the fiber product $G^0\times_{G}H^0=P\,.$ \end{exmp} \begin{exmp} Suppose we have generalized morphisms $P_1:G\to H\,,\,P_2:H\to K\,.$ We then have two action groupoids given by $P_1\rtimes H\,,\,H\ltimes P_2$ with corresponding morphisms into $H\,.$ We can form the fiber product \begin{equation} H\ltimes P_2\times_H P_1\rtimes H\,, \end{equation} and this groupoid is Morita equivalent to $(P_2\times_{H^0} P_1)/H\,,$ hence can be identified with the composition $P_2\circ P_1\,.$ \end{exmp} \section{(2,1)-Category of Stacks} Now, to get the (2,1)-category of geometric stacks we localize the (2,1)-category of Lie groupoids at the weak equivalences. We can describe the category as follows: \begin{itemize} \item The objects are Lie groupoids \item The morphisms are anafunctors \item The 2-morphisms are ananatural transformations \end{itemize} In the above definition, an anafunctor $H\to G$ is a generalized morphism given by a roof (see \cite{Li}): \begin{equation} \begin{tikzcd} & K \arrow[ld, "\cong"'] \arrow[rd] & \\ H & & G \end{tikzcd} \end{equation} Here the left leg is a Morita equivalence and the map $K^0\to H^0$ is a surjective submersion (in fact, we can even take the map $K\to H$ to be a fibration).
We can compose anafunctors via the strong fiber product: \begin{equation} \begin{tikzcd} & & K\times_{H!} K' \arrow[ld, "\cong"'] \arrow[rd] & & \\ & K \arrow[ld, "\cong"'] \arrow[rd] & & K' \arrow[ld, "\cong"'] \arrow[rd] & \\ I & & H & & G \end{tikzcd} \end{equation} \vspace{3mm}\\ An ananatural transformation between two anafunctors $H\xleftarrow[]{\cong}K\to G\Rightarrow H\xleftarrow[]{\cong}K'\to G$ is given by a natural transformation between the composite functors $K\times _H K'\to K\to G$ and $K\times_H K'\to K'\to G\,.$ One can also compose ananatural transformations, but we won't define it here (see \cite{Li} for more). \chapter{Double Lie Groupoids and LA-Groupoids} In this chapter we will present the information about double Lie groupoids and LA-groupoids which is relevant to the theorem we wish to prove. Essentially, a double Lie groupoid is a Lie groupoid internal to the category of Lie groupoids, ie. the space of arrows and the space of objects are both Lie groupoids. An LA-groupoid is essentially a Lie algebroid internal to the category of Lie groupoids, ie. the total space and the base space of the Lie algebroid are Lie groupoids. There is a differentiation functor from double Lie groupoids to LA-groupoids. \begin{definition}(see page 5 of~\cite{mehtatang} for complete details) A double groupoid is a groupoid internal to the category of groupoids. We will denote it as follows: \begin{equation}\label{double groupoid} \begin{tikzcd} G^{01} \arrow[r, shift left] \arrow[r, shift right] \arrow[d, shift left] \arrow[d, shift right] & G^{11} \arrow[d, shift left] \arrow[d, shift right] \\ G^{00} \arrow[r, shift left] \arrow[r, shift right] & G^{10} \end{tikzcd} \end{equation} We will denote the source and target maps from $G^{ij}\to G^{kl}$ by $s_{ij,kl},\,t_{ij,kl}\,,$ respectively.
\end{definition} \begin{definition} A double Lie groupoid is a double groupoid such that all rows and columns are Lie groupoids, and such that the double source map \begin{equation} (s_{01,00},\,s_{01,11}):G^{01}\to G^{00}\sideset{_{s_{00,10}}}{_{s_{11,10}}}{\mathop{\times}} G^{11} \end{equation} is a surjective submersion. \end{definition} \vspace{3mm}Now we can describe the infinitesimal analogue of a double Lie groupoid, in the vertical direction. \begin{definition} An LA-groupoid (short for Lie algebroid-groupoid), denoted as in \ref{LA-groupoid def}, is a Lie algebroid internal to the category of Lie groupoids. That is, the top and bottom rows are Lie groupoids, the left and right columns are Lie algebroids, all structure maps are compatible, and the map (to be defined below) \begin{equation} (p_1\,,s_A):A^1\to M^1\sideset{_{s_M}}{_{p_0}}{\mathop{\times}}A^0 \end{equation} is a surjective submersion. Here, $p_0\,,p_1$ are the projection maps $A^0\to M^0\,,A^1\to M^1\,,$ respectively, and $s_A\,,s_M$ are the source maps $A^1\to A^0\,,M^1\to M^0\,,$ respectively. \begin{equation}\label{LA-groupoid def} \begin{tikzcd} A^1 \arrow[r, shift left] \arrow[r, shift right] \arrow[d] & A^0 \arrow[d] \\ M^1 \arrow[r, shift left] \arrow[r, shift right] & M^0 \end{tikzcd} \end{equation} \end{definition} \section{Category of Double Lie Groupoids and LA-Groupoids}\label{defLAM} Here we define morphisms of double Lie groupoids, LA-groupoids, and Morita equivalences (compare with~\cite{del hoyo}). \begin{definition}\label{mor double} A morphism $f$ between double Lie groupoids consists of four functions, \begin{equation} f^{00}\,,f^{10}\,,f^{01}\,,f^{11}\,, \end{equation} where $f^{ij}$ maps the $ij$ corner to the $ij$ corner, for which the corresponding maps of Lie groupoids are all morphisms.
\end{definition} \begin{definition}\label{mordo} A Morita map of double Lie groupoids is a morphism of double Lie groupoids for which the morphism between the top rows (or left columns) is a Morita equivalence. \end{definition} \begin{definition} A morphism of LA-groupoids consists of four functions, $f^{00}\,,f^{10}\,,f^{01}\,,f^{11}\,,$ where $f^{ij}$ maps the $ij$ corner to the $ij$ corner, for which the corresponding maps of Lie groupoids and Lie algebroids are all morphisms. \end{definition} \begin{definition} A Morita map of LA-groupoids is a morphism of LA-groupoids for which the morphism between the top rows is a Morita equivalence. \end{definition} \begin{remark}\label{generate} The definition of Morita map of double Lie groupoids we've given is not quite the definition we should give. This category comes with two notions of weak equivalence, one in the horizontal direction and one in the vertical direction. Really, we should take the smallest subcategory containing all of these weak equivalences to get a category with weak equivalences, or better, a homotopical category. Alternatively, there may be a nicer definition. \end{remark} \section{Higher Category of Double Lie Groupoids} Now since cofibrations will be mentioned several times in this thesis, we wish to define 2-morphisms between morphisms of double Lie groupoids. We will do this with respect to the top rows, however, by taking the transpose of the diagram we get the definition with respect to the left columns. 
\begin{definition} Consider morphisms $f_1\,,f_2$ between double Lie groupoids \begin{equation} \begin{tikzcd} H^{01} \arrow[d, shift left] \arrow[d, shift right] \arrow[r, shift left] \arrow[r, shift right] & H^{11} \arrow[rr, "f_2", shift right=5] \arrow[d, shift left] \arrow[d, shift right] & & G^{01} \arrow[d, shift right] \arrow[r, shift left] \arrow[d, shift left] \arrow[r, shift right] & G^{11} \arrow[d, shift right] \arrow[d, shift left] \\ H^{00} \arrow[r, shift left] \arrow[r, shift right] & H^{10} \arrow[rr, "f_1"', shift left=6] & & G^{00} \arrow[r, shift right] \arrow[r, shift left] & G^{10} \end{tikzcd} \end{equation} A 2-morphism $f_1\Rightarrow f_2$ is given by a functor \begin{equation} \begin{tikzcd} & & G^{01} \arrow[r, shift right] \arrow[r, shift left] & G^{11} \\ H^{00} \arrow[r, shift left] \arrow[r, shift right] & H^{10} \arrow[ru] & & \end{tikzcd} \end{equation} for which the induced map $H^{00}\to G^{01}$ defines a natural transformation between $f_1\,,f_2$ when restricted to the groupoids in the left column. \end{definition} Now we have a (2,1)-category of double Lie groupoids, and we may discuss cofibrations in this category, which we will do later. In addition, one can invert weak equivalences to obtain a new category, analogous to what is done for groupoids, but we won't be needing that. \begin{remark}Note that because 2-morphisms are functors, we also have morphisms of 2-morphisms, ie. 3-morphisms. Therefore we really have a (3,1)-category.\end{remark} \chapter{Replacing a Map With a Fibration/Cofibration} In the introduction, we mentioned obtaining equivalent LA-groupoids associated to a map using two different methods. The two methods of obtaining LA-groupoids given a (nice enough) map $H\to G$ correspond to the two methods of constructing ``groupoids" out of such a map, which we will call the fibration and cofibration replacements.
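For comparison, we recall the classical constructions from topology (standard facts, stated only as a reminder): a continuous map $f:Y\to X$ factors through its mapping path space $P_f$ and through its mapping cylinder $M_f\,,$ where \begin{align*} P_f=\setof*{(y,\gamma)\in Y\times X^{[0,1]}\,:\,\gamma(0)=f(y)}\,,\qquad M_f=\paren[\big]{(Y\times[0,1])\sqcup X}/\paren[\big]{(y,1)\sim f(y)}\,. \end{align*} The inclusion $Y\xhookrightarrow{}P_f\,,\,y\mapsto (y,\mathrm{const}_{f(y)})\,,$ is a homotopy equivalence and the evaluation $P_f\to X\,,\,(y,\gamma)\mapsto \gamma(1)\,,$ is a Hurewicz fibration; dually, $Y\xhookrightarrow{}M_f$ is a cofibration and the projection $M_f\to X$ is a homotopy equivalence.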
By analogy one should think back to homotopy theory, where one can replace a map $Y\to X$ with a fibration, namely the mapping path space, or a cofibration, namely the mapping cylinder. One should keep analogies with homotopy theory in mind when reading this chapter (if one thinks of homotopy theory as really being about $\infty$-groupoids, these are more than analogies). \section{Fibrations and Cofibrations} The context here is that we are thinking about the (2,1)-category of Lie groupoids (the one where no localization has been performed). Given any 2-category there is a notion of fibration (see \cite{str}, \cite{riehl}), and the first thing we will do here is define a fibration of Lie groupoids. There are several notions of ``fibrations" of Lie groupoids in the literature, but as far as the author can tell they are distinct from the one we are about to define, which should be thought of as analogous to Hurewicz fibrations (whereas, for example, Kan fibrations are analogous to Serre fibrations). Some fibrations currently defined in the literature are, in particular, what we call quasifibrations (ie. see \cite{Mackenzie}). \vspace{3mm}\\Let us expound on fibrations for a moment. A morphism of $\infty$-groupoids (modelled by simplicial spaces) is a Kan fibration if it has the ``homotopy lifting property" with respect to the standard $n$-simplices. For a map of groupoids $f:H\to G$ (ie. the groupoids have no higher morphisms) this is equivalent to $f$ having the ``homotopy lifting property" with respect to the standard $0$-simplex, meaning that if $h^0\in H^0\,,g\in G$ are such that $s(g)=f(h^0)\,,$ then there must exist an $h\in H$ such that $s(h)=h^0$ and $f(h)=g\,.$ Mackenzie (see~\cite{Mackenzie}, definition 2.4.3) requires the following stronger condition to hold: $f_0:H^0\to G^0$ and $H\to f_0^!G$ must be surjective submersions. \vspace{3mm}\\The fibrations we discuss will have the homotopy lifting property with respect to all Lie groupoids.
In particular, they will be Kan fibrations, and in the important examples we will consider they will be fibrations in the sense of Mackenzie. The two things that could prevent a fibration in the sense of this paper from being a fibration in the sense of Mackenzie are: \begin{itemize} \item the technical condition of certain maps being submersions, \item the definition of a Hurewicz fibration $X\to Y$ from topology doesn't imply that the map is surjective (which could happen if $Y$ isn't path connected), and analogously our definition of fibrations doesn't imply that $f_0$ is surjective. \end{itemize} \begin{remark} A remark on conventions and notation: in this thesis, when we speak of a 2-morphism (or natural transformation) of a morphism $f\,,$ we mean a 2-morphism/natural transformation $f\Rightarrow f'\,,$ where $f'$ is some morphism. We may not explicitly write $f'\,.$ This is justified by the following: a natural transformation between morphisms of groupoids $f,f':H\to G$ is determined by a map $g_H:H^0\to G^{(1)}$ which satisfies $s(g_H(h^0))=f(h^0)\,,t(g_H(h^0))=f'(h^0)$ and which satisfies the desired commutation relations. Conversely, since arrows in a Lie groupoid are invertible, a morphism $f:H\to G$ and a map $g_H:H^0\to G^{(1)}$ satisfying $s(g_H(h^0))=f(h^0)$ determines a morphism $f':H\to G$ and a 2-morphism $f\Rightarrow f'\,.$ As for notation, we may denote a natural transformation of a morphism $H\to G$ by using the notation $g_H:H^0\to G^{(1)}$ (here the $g$ in $g_H$ references the arrows in the codomain $G\,,$ and the $H$ references the domain). In addition, objects will be denoted with a superscript $0\,,$ ie. an object of $G$ will be denoted by $g^0$ (arrows will be denoted by $g\,,$ including identity arrows). \end{remark} \begin{definition} A morphism $F:E\to G$ of Lie groupoids is a fibration if it has the lifting property with respect to 2-morphisms.
That is, $F$ is a fibration if the following condition holds: let $f:H\to G$ be a morphism of Lie groupoids and let $g_H:H^0\to G^{(1)}$ define a natural transformation of $f\,.$ If there exists a lift $\tilde{f}:H\to E\,,$ ie. $f=F\circ\tilde{f}\,,$ then there must exist a lift of the natural transformation $g_H\,,$ ie. a map $e_H:H^0\to E^{(1)}$ satisfying $g_H=F\circ e_H$ which defines a natural transformation of $\tilde{f}\,.$ \end{definition} \begin{definition} A morphism $\iota:A\to G$ is a cofibration if it has the extension property with respect to 2-morphisms, ie. suppose we have a morphism $f:A\to H$ together with a natural transformation of $f\,,$ given by $h_A:A^0\to H^{(1)}\,.$ If there exists a map $\tilde{f}:G\to H$ satisfying $f=\tilde{f}\iota\,,$ there must be a natural transformation of $\tilde{f}\,,$ given by a map $h_G:G^0\to H^{(1)}\,,$ satisfying $h_A=h_G\iota\,.$ \end{definition} The previous definitions raise the following question: given a morphism of Lie groupoids, when can one replace it with an equivalent fibration/cofibration? The answer to the former is always. On the other hand, the author believes that a morphism which isn't already a cofibration seldom has a cofibration replacement (though we will see later that if we use double groupoids they exist far more often). \section{The Canonical Fibration Replacement} We now explain how to replace a morphism with a fibration (see~\cite{hoyor} section 2.2,~\cite{fernandes} page 6 for discussions about the same canonical factorization).
Given a map $H\to G$ there is a canonical bibundle $P=G^{(1)}\sideset{_t}{_f}{\mathop{\times}} H^0$ for $H$ and $G\,,$ where $H$ acts via a left action and $G$ acts via a right action, forming the groupoid $H\ltimes P\rtimes G\rightrightarrows P\,.$ From this, we get a canonical fibration replacement for the map $H\to G\,,$ given by the following commutative diagram: \begin{equation}\label{fibration rep} \begin{tikzcd} H\ltimes P\rtimes G \arrow[rd, "p_3"] & \\ H \arrow[u, "\iota"] \arrow[r, "A"'] & G \end{tikzcd} \end{equation} The map $p_3$ (projection onto the third factor) is a fibration and $\iota$ is a cofibration. In addition, letting \begin{equation*} p_1:H\ltimes P \rtimes G\to H \end{equation*} be the projection onto the first factor, we have that $p_1$ is a retraction of $\iota\,,$ and there is a 2-morphism $c:\iota p_1\Rightarrow \mathbbm{1}_{H\ltimes P\rtimes G}$ such that for $h^0\in H^0\,,$ $c\iota(h^0)=\mathbbm{1}_{\iota(h^0)}\,.$ In particular, $\iota$ is a Morita map. The groupoid $H\ltimes P\rtimes G$ is isomorphic to $G\times_G H$ (and as we will see, $H\ltimes P\rtimes G$ naturally has the structure of a double Lie groupoid). \begin{definition} Let $f:H\to G$ be a morphism of Lie groupoids. We will say a fibration $F:E\to G$ is a fibration replacement for $f$ if the following conditions hold: there are maps $\iota:H\to E\,,$ $p:E\to H$ such that there exist 2-morphisms $p\iota\Rightarrow \mathbbm{1}_H\,,\iota p\Rightarrow \mathbbm{1}_E\,,$ and such that $f=F\iota\,.$ \begin{remark} Rather than requiring that there are Morita maps both ways in the definition of fibration replacement, one might require instead that there is a map just one way, as in the definition of fibration in a model category, but we won't be needing to do this.
\end{remark} \end{definition} The discussion above proves the following: \begin{proposition} Given any morphism $f:H\to G$ of Lie groupoids, there is a fibration replacement for $f\,.$\footnote{If we allow the space of arrows to be non-Hausdorff, then we must allow the base to be non-Hausdorff as well, otherwise the fibration replacement may not exist in the category. We will assume everything is Hausdorff.} \end{proposition} Now given morphisms $H\to G\,, H'\to G'\,,$ we will call them equivalent if there is a diagram of one of the following forms, where the vertical arrows are Morita maps: \begin{equation}\label{morita maps} \begin{tikzcd} H' \arrow[r] & G' & H' \arrow[r] \arrow[d] \arrow[r, Rightarrow, bend right=74, shift right=2] & G' \\ H \arrow[r] \arrow[u] \arrow[ru, Rightarrow] & G \arrow[u] & H \arrow[r] & G \arrow[u] \\ H' \arrow[r] & G' \arrow[d] & H' \arrow[d] \arrow[r] \arrow[rd, Rightarrow] & G' \arrow[d] \\ H \arrow[r] \arrow[u] \arrow[r, Rightarrow, bend left=74, shift left=2] & G & H \arrow[r] & G \end{tikzcd} \end{equation} Now the following proposition shows that fibration replacements are essentially unique: \begin{proposition}\label{canonical map} Given any pair of fibration replacements for $H\to G\,,H'\to G'$ (fitting into the diagram in the top left) given by $E\to G\,,E'\to G'\,,$ respectively, there is a commutative diagram of the following form: \begin{equation} \begin{tikzcd} E \arrow[d] \arrow[r, dashed] & E' \arrow[d] \\ G \arrow[r, dashed] & G' \end{tikzcd} \end{equation} \end{proposition} \begin{proof} What we want to do is show that the map $E\twoheadrightarrow G'$ given by the composition $E\twoheadrightarrow G\twoheadrightarrow G'$ lifts to a map $E\to E'$ (note that we are using different arrows for ease of exposition; they do not carry any connotation). We will do this by showing that there is another map $E\rightharpoonup G'$ which factors through $E'$ and which is equivalent to the first map via a $2$-morphism.
The map in question is the one given by the following commutative diagram: \begin{equation} \begin{tikzcd} E \arrow[d,harpoon] & & E' \arrow[d] \\ H \arrow[r,harpoon] & H' \arrow[r,harpoon] \arrow[ru, dashed] & G' \end{tikzcd} \end{equation} Now the proof follows from the following diagram: \begin{equation} \begin{tikzcd} & & G \arrow[r,two heads] & G' \\ E \arrow[rr, harpoon,shift right=0.5] \arrow[rru, two heads, shift left=1] \arrow[rru, Rightarrow, bend right, shift left] & & H \arrow[r,harpoon] \arrow[u] \arrow[ru, Rightarrow] & H' \arrow[u,harpoon] \end{tikzcd} \end{equation} \end{proof} Now we have the following result, which already gives us one application of fibrations (see \Cref{morita of fibers} for more information): \begin{proposition}\label{equiv1} Suppose $F:E\to G$ is a fibration which is a surjective submersion at the level of objects, and let $f:H\to G$ be a morphism of Lie groupoids. Then the strong fiber product and the fiber product both exist, and the canonical morphism $E\times_{G!} H\xhookrightarrow{} E\times_G H$ is a Morita equivalence. \end{proposition} Given the previous result, it is in particular true that if $F:E\to G$ is a fibration which is a surjective submersion at the level of objects, then the canonical inclusion $F^{-1}(g^0)\xhookrightarrow{} E\times_G \{g^0\}$ is a Morita equivalence. In light of this, we make the following definition: \begin{definition}\label{quasifibration} We will call a map $f:H\to G$ a quasifibration if for each $g^0\in G^0$ the kernel over $g^0$ exists and if the canonical inclusion $f^{-1}(g^0)\xhookrightarrow{}H\times_G \{g^0\}$ is a Morita equivalence. \end{definition} Now we will define what it means for a map of Lie groupoids $f:H\to G$ to be a surjective submersion. This is what Mackenzie calls a fibration in \cite{Mackenzie}. They are the correct maps for defining simple foliations of Lie groupoids.
\begin{definition}\label{surj sub} We call a map $f:H\to G$ a surjective submersion of Lie groupoids if both the map on the space of objects and the map $H\to G\sideset{_s}{_{f^0}}{\mathop{\times}}H^0\,, h\mapsto (f(h),s(h))$ are surjective submersions. \end{definition} \begin{proposition}\label{equiv2} A surjective submersion of Lie groupoids is a quasifibration. \end{proposition} \begin{exmp} A map of Lie groups $H\to G$ is a quasifibration if and only if it is a surjective submersion at the level of arrows. It is also a surjective submersion if and only if it is a surjective submersion at the level of arrows. \end{exmp} \section{Properties of Fibrations} Now in homotopy theory, a fiber bundle is in particular a fibration, but this is not true for Lie groupoids. One might wonder, if the map $H\to G$ is a fiber bundle in the category of Lie groupoids, would there be an advantage to replacing this with a fibration? The answer is yes. \vspace{3mm}\\ Consider for example the homomorphism $\mathbb{R}\to S^1\,.$ This is a fiber bundle in the category of Lie groupoids. However, the fibration replacement $\mathbb{R}\ltimes S^1\rtimes S^1\to S^1$ has at least one interesting property that the map $\mathbb{R}\to S^1$ doesn't have (aside from the 2-morphism lifting property): the fibers of the map $\mathbb{R}\to S^1$ (as a map of spaces) are all diffeomorphic to the fiber over the identity, but not canonically. However, the fibers of the fibration replacement $\mathbb{R}\ltimes S^1\rtimes S^1\to S^1$ are all canonically identified with the fiber over the identity, $\mathbb{R}\ltimes S^1\,.$ We have the following result (a similar observation is made in~\cite{fernandes}): \begin{proposition}\label{fibration property} Let $F:E\to G$ be a morphism of Lie groupoids. 
Suppose there is an action of $G$ on $E$ with respect to the moment map $t\circ F\,,$ which is compatible with $F$ and the action of $G$ on itself with respect to the target map, in the sense that, if $F(e)=g$ and $s(g')=t(g)\,,$ then $F(g'\cdot e)=g'g$ and $s(g'\cdot e)=s(e)$ (in particular, $F$ is a morphism of Lie groupoids as well as $G$-spaces). Then $F$ is a fibration. \end{proposition} Such a $G$-action will in particular identify the fibers $F^{-1}(g)$ and $F^{-1}(g')\,$ if $s(g)=s(g')\,.$ Let us remark that, in particular, given such a $G$-action, we get a canonical section of \begin{equation} E\xrightarrow[]{(F,s)} G\sideset{_s}{_F}{\mathop{\times}}E^0\,. \end{equation} In the special case of Lie groups, one can ask what the fibrations are. As the following proposition shows, maps of Lie groups are almost never fibrations (however, in the case of discrete groups, the condition is equivalent to the map being surjective). \begin{proposition} Let $F:H\to G$ be a map of Lie groups; $F$ is a fibration if and only if, as manifolds, $H$ is a trivial fiber bundle with respect to $F\,,$ ie. $H\cong \text{ker }F\times G\,,$ with the map to $G$ being the projection onto the second factor. \end{proposition} \begin{proof} First, if $H\cong \text{ker }F\times G$ as manifolds, then $F$ admits a section, given by $g\mapsto (e,g)\,.$ This section allows us to lift 2-morphisms. Conversely, suppose $F:H\to G$ is a fibration. Consider the map $f:G\rightrightarrows G\to G\rightrightarrows *\,,$ which sends everything to the identity element. We have a 2-morphism given by the identity map $G\to G\,.$ The map $f$ factors through $F\,;$ therefore the 2-morphism must lift, ie.
there must be a section of $F\,.$ Since the kernel of $F$ acts on the fibers of $F\,,$ this section gives us the desired identification $H\cong \text{ker }F\times G\,.$ \end{proof} \section{Split Fibrations} We define a splitting of a Lie groupoid $A\rightrightarrows A^0$ to be an embedding of $A$ as the diagonal of a double Lie groupoid, ie. \begin{equation} \begin{tikzcd} A \arrow[r, shift right] \arrow[r, shift left] \arrow[d, shift left] \arrow[d, shift right] & C \arrow[d, shift right] \arrow[d, shift left] \\ B \arrow[r, shift left] \arrow[r, shift right] & A^0 \end{tikzcd} \end{equation} where the source and target maps of $A\rightrightarrows A^0$ are equal to the double source and double target maps of the above double groupoid (for more on obtaining a simplicial manifold from the diagonal of a bisimplicial manifold, see \cite{mehtatang}). \vspace{3mm}\\Now given a map $F:E\to G$ equipped with a compatible $G$-action as in \Cref{fibration property}, we get a splitting of $E\rightrightarrows E^0\,.$ The double groupoid is essentially an action groupoid, which we will describe below: \begin{equation} \begin{tikzcd} E \arrow[r, shift right] \arrow[r, shift left] \arrow[d, shift right] \arrow[d, shift left] & E^0\rtimes G \arrow[d, shift right] \arrow[d, shift left] \\ \text{Ker }F \arrow[r, shift right] \arrow[r, shift left] & E^0 \end{tikzcd} \end{equation} \begin{itemize} \item The groupoid on the bottom row is just the subgroupoid $\text{Ker }F$ of $E\,.$ \item Now for the groupoid in the left column: we have an identification of $E$ with $\text{Ker }F\sideset{_F}{_s}{\mathop{\times}}G\,,$ and associated to this identification is an action groupoid of $G$ on $\text{Ker }{F}\,.$ The action is defined as follows: let $e\in \text{Ker }F\,,$ and let $g\in G$ be such that $s(g)=F(e)\,.$ Then $(e,F(e))\cdot g=(e', g)\,,$ and we define $e\cdot {g}=e'\,.$ \item Now for the groupoid in the right column: let $e^0\in E^0\,,$ and $g$ be such that $s(g)=F(e^0)\,.$ We can
identify $e^0$ with the identity morphism in $E\,,$ denoted $\iota(e^0)\,,$ and we define $e^0\cdot g:=t(\iota(e^0)\cdot g)\,.$ \item Finally, for the groupoid in the top row: there is an action of $\text{Ker }F$ on $E^0\rtimes G\,,$ defined as follows: suppose $s(e)=e^0\,;$ we then define $e\cdot (e^0,g)=(t(e),g)\,.$ \end{itemize} We can think of this groupoid as an action groupoid of a groupoid on another groupoid, ie. $G$ acts on $\text{ker } F\rightrightarrows E^0\,.$ Due to this discussion, we make the following definition: \begin{definition}(see~\cite{Mackenzie}, definition 2.5.2) We call a fibration $E\to G$ with a choice of $G$-action, as in \Cref{fibration property}, a split fibration. \end{definition} \begin{remark} Note that our use of the term ``splitting'' refers exclusively to the fact that the groupoid ``splits'' into a double groupoid. Mackenzie (in~\cite{Mackenzie}, definition 2.5.2) coincidentally uses the terminology ``split fibration'' in a way that seems to agree with how we use the term split fibration. \end{remark} \begin{exmp} Let $A\,,B$ be Lie groups, and consider the fibration $A\times B\to A\,.$ We have an action of $A$ on $A\times B\,,$ given by $a'\cdot (a,b)=(aa'^{-1}\,,b)\,,$ and so in particular the action on the kernel of this map is trivial. We then have the following splitting of $A\times B$: \begin{equation} \begin{tikzcd} A\times B \arrow[r, shift left] \arrow[r, shift right] \arrow[d, shift left] \arrow[d, shift right] & A \arrow[d, shift left] \arrow[d, shift right] \\ B \arrow[r, shift left] \arrow[r, shift right] & * \end{tikzcd} \end{equation} \end{exmp} Previously we discussed what the fibrations of Lie groups are, and similarly one can ask what the split fibrations of Lie groups are. We have the following result (see~\cite{Mackenzie}): \begin{proposition} Let $f:H\to G$ be a map of Lie groups.
Then $f$ is a split fibration if and only if $H$ is a semidirect product of $\text{ker }f$ and $G\,.$ \end{proposition} \begin{exmp} Consider a semidirect product $N\rtimes H\,.$ There is a natural morphism $H\to N\rtimes H\,,$ and this defines the action of $H$ on $N\rtimes H\,.$ The splitting of this group is then given by the following double groupoid: \begin{equation} \begin{tikzcd} N\rtimes H \arrow[r, shift left] \arrow[r, shift right] \arrow[d, shift left] \arrow[d, shift right] & H \arrow[d, shift left] \arrow[d, shift right] \\ N \arrow[r, shift left] \arrow[r, shift right] & * \end{tikzcd} \end{equation} Here the groupoid in the left column is just the action groupoid of $H$ acting on $N$ as a space, and the groupoid in the top row is just the action groupoid of $N$ acting trivially on $H$ as a space, ie. it is a bundle of Lie groups over $H$ (so the source and target maps are just the projection onto $H$)\,. \end{exmp} \subsection{The Canonical Split Fibration} In the case that the fibration is of the form $G\times_G H\to G\,,$ there is a canonical $G$-action as in \Cref{fibration property}, and the associated splitting is given by \begin{equation}\label{replacement} \begin{tikzcd} H\ltimes P\rtimes G \arrow[r, shift left] \arrow[r, shift right] \arrow[d, shift right] \arrow[d, shift left] & P\rtimes G \arrow[d, shift right] \arrow[d, shift left] \\ H\ltimes P \arrow[r, shift right] \arrow[r, shift left] & P \end{tikzcd} \end{equation} There is an advantage to thinking of $H\ltimes P\rtimes G$ as a double groupoid, since the fibers of the map \begin{equation} \begin{tikzcd}\label{fol} H\ltimes P\rtimes G \arrow[d, shift left] \arrow[d, shift right] \arrow[r, shift left] \arrow[r, shift right] & P\rtimes G \arrow[d, shift left] \arrow[d, shift right] \arrow[r, shift right=6] & G \arrow[r, shift right] \arrow[r, shift left] \arrow[d, shift left] \arrow[d, shift right] & G \arrow[d, shift left] \arrow[d, shift right] \\ H\ltimes P \arrow[r, shift left]
\arrow[r, shift right] & P & G^0 \arrow[r, shift left] \arrow[r, shift right] & G^0 \end{tikzcd} \end{equation} at each vertical level of the corresponding bisimplicial space are the fibers of the map $H\to G\,.$ This is not true for the map \begin{equation} \begin{tikzcd} H\ltimes P\rtimes G \arrow[d, shift left] \arrow[d, shift right] \arrow[r, shift right=6] & G \arrow[d, shift left] \arrow[d, shift right] \\ P & G^0 \end{tikzcd} \end{equation} where the fibers only appear as the kernel over an object in $G^0$ (and this will be true for any split fibration). Even worse, typically for morphisms $H\to G$ the fibers aren't embedded in $H$ in any way. In the context of this thesis, this is the main reason for replacing a map $H\to G$ with a fibration; this will allow us to use results about simplicial/bisimplicial manifolds to study Lie groupoids. \section{The Canonical Cofibration}\label{cofibration replacement} The second construction one can make from a (nice enough) map $f:H\to G$ can be interpreted as replacing the map $f:H\to G$ with a cofibration; it may also be interpreted as presenting the stack $[G^0/G]$ by a double groupoid with base $H\rightrightarrows H^0\,.$ First we will motivate the construction. \vspace{3mm}\\Suppose $Y\to X$ is a surjective submersion. In the category of manifolds, isomorphisms are diffeomorphisms; therefore, unless this map is also injective (making it a cofibration), there can be no cofibration replacement. However, in the category of Lie groupoids we have the submersion groupoid $Y\times_X Y\rightrightarrows Y\,,$ which is Morita equivalent to $X\,,$ and $Y\xhookrightarrow{} Y\times_X Y$ is an injection; in addition it is a cofibration.
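To make this concrete (a standard example, chosen here only for illustration), take the double cover $Y=S^1\to X=S^1\,,$ $z\mapsto z^2\,.$ The submersion groupoid is \begin{equation} Y\times_X Y=\{(z,w)\in S^1\times S^1:z^2=w^2\}\cong S^1\rtimes\mathbb{Z}_2\rightrightarrows S^1\,, \end{equation} the action groupoid of $\mathbb{Z}_2$ acting on $S^1$ by $z\mapsto -z\,.$ Since this action is free with quotient $S^1\,,$ the groupoid is Morita equivalent to $X\,,$ and the cofibration $Y\xhookrightarrow{}Y\times_X Y$ is the inclusion of the identity arrows, $z\mapsto (z,z)\,.$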
Therefore, in the category of Lie groupoids, we can replace a surjective submersion (or any submersion) between manifolds with a cofibration, and $Y\times_X Y\rightrightarrows Y$ may be called a cofibration replacement of $Y\to X\,.$ Of course, it also gives a presentation of the stack $[X/X]\,.$ \vspace{3mm}\\For maps of Lie groupoids, we will generalize the construction of the submersion groupoid. The object we will be replacing $G$ with is $H\times_G H\,,$ which, though a priori only a Lie groupoid, actually has the structure of a double Lie groupoid; this is analogous to how, given two submersions of manifolds $Y,Z\to X\,,$ the fiber product $Y\times_X Z$ is just a manifold; however, in the special case that $Y=Z\,,$ the fiber product inherits the structure of a groupoid. Explicitly, the double groupoid is given by \begin{equation}\label{HH} \begin{tikzcd} H^{(1)}\sideset{_{f\circ s}}{_{t}}{\mathop{\times}} G^{(1)}\sideset{_s}{_{f\circ s}}{\mathop{\times}} H^{(1)} \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & H^{(0)}\sideset{_f}{_{t}}{\mathop{\times}} G^{(1)}\sideset{_s}{_{f}}{\mathop{\times}} H^{(0)} \arrow[d, shift right] \arrow[d, shift left] \\ H^{(1)} \arrow[r, shift right] \arrow[r, shift left] & H^0 \end{tikzcd} \end{equation} Here, the bottom groupoid is simply $H$ and the top groupoid is $H\times_G H\,.$ Equivalently, we can write \ref{HH} in a condensed form as \begin{equation} \begin{tikzcd} H^{(1)}\times_G H^{(1)} \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & H^0\times_G H^0 \arrow[d, shift right] \arrow[d, shift left] \\ H^{(1)} \arrow[r, shift right] \arrow[r, shift left] & H^0 \end{tikzcd} \end{equation} The fiber product $H^0\times_G H^0$ is with respect to the map $f^0:H^0\to G\,,$ and the fiber product $H^{(1)}\times_G H^{(1)}$ is with respect to the map $f\circ s:H^{(1)}\to G\,.$ Using the strong pullback, we can write this double groupoid as \begin{equation}
\begin{tikzcd} (f^0\circ s)^!G \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & f^{0!}G \arrow[d, shift right] \arrow[d, shift left] \\ H^{(1)} \arrow[r, shift right] \arrow[r, shift left] & H^0 \end{tikzcd} \end{equation} We will now make the following definitions: \begin{definition}\label{canonical cofi} Given a (nice enough) homomorphism $f:H\to G\,,$ we define $H\times_G H\rightrightarrows H$ to be the double Lie groupoid in \ref{HH}. We may also denote it by $f^!G\,.$ We will sometimes call this the canonical cofibration (associated to $f$). \end{definition} We will now explain what we mean by ``canonical cofibration''. First we make a definition: \begin{definition} Let $f:H\to G$ be a morphism (here $H\,,G$ may be Lie groupoids or double Lie groupoids). A cofibration replacement for $f$ is given by a pair of maps $\iota:H\to K\,,F:K\to G$ (where $K$ is a Lie groupoid or double Lie groupoid), such that $\iota$ is a cofibration, $F$ is a fibration which is also a Morita map, and such that $f=F\iota\,.$ \end{definition} We will now show that if $f:H\to G$ is essentially surjective then $H\times_G H\rightrightarrows H$ is Morita equivalent to $G\rightrightarrows G^0\,;$ while there is no strict morphism between them, there is a natural roof.
While we do this, we will also discuss the sense in which the map $H\xhookrightarrow{}H\times_G H\rightrightarrows{}H$ is a cofibration replacement for $H\to G\,.$ \vspace{3mm}\\Consider the map $H\ltimes P\rtimes G\to G\,.$ We can form the fiber product with respect to the objects and arrows to get the following double groupoid (with the appropriate fiber products in the top row): \begin{equation}\label{cofi} \begin{tikzcd} H\times H\ltimes(P\times P)\rtimes G \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & P\times_{G^0} P \arrow[d, shift right] \arrow[d, shift left] \\ H\ltimes P\rtimes G \arrow[r, shift right] \arrow[r, shift left] & P \end{tikzcd} \end{equation} Now there is a natural morphism from \ref{cofi} to $G\rightrightarrows G^0\,,$ which we think of as a double groupoid in the following way: \begin{equation}\label{G} \begin{tikzcd} G \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & G^0 \arrow[d, shift right] \arrow[d, shift left] \\ G \arrow[r, shift right] \arrow[r, shift left] & G^0 \end{tikzcd} \end{equation} The map from \ref{cofi} to \ref{G} is given by projection onto $G\,.$ This map is a Morita equivalence (where we view the double groupoid as the groupoid in the left column over the groupoid in the right column). \vspace{3mm}\\Now there is a natural inclusion from $H\rightrightarrows H^0$ to the double groupoid in \ref{cofi}, and composing this map with the map from \ref{cofi} to $G\rightrightarrows G^0$ gives us our original map $H\to G\,.$ Moreover, this inclusion is a cofibration. Therefore, the map from $H$ to \ref{cofi} is a cofibration replacement for $H\to G\,.$ In general, if the map $H\to G$ isn't essentially surjective but $H^0\to G^0$ is a submersion, we can form the disjoint union Lie groupoid $H\sqcup G^0$ which will map into $G\,,$ and will be essentially surjective and a submersion at the level of objects. 
We will now summarize this result: \begin{proposition} Let $f:H\to G$ be such that the induced map $H^0\to G^0$ is a submersion. Then a cofibration replacement for $f$ exists in the category of double Lie groupoids. \end{proposition} \vspace{3mm}Now on the other hand, we also have a Morita equivalence from \ref{cofi} to $f^!G\,.$ Therefore $f^!G\,,$ by definition, is Morita equivalent to $G\,.$ In addition, the natural inclusion $H\xhookrightarrow{}f^!G$ is a cofibration. Now there isn't a morphism $f^!G\to G\,,$ so $H\xhookrightarrow{}f^!G$ isn't exactly a cofibration replacement for $H\xhookrightarrow{} G\,;$ however, it is almost just as good, so we will call it the canonical cofibration. \vspace{3mm}\\Now we will state a sufficient condition for a map of Lie groupoids $H\to G$ to be a cofibration: \begin{proposition} Suppose $f:H\to G$ is a map of Lie groupoids such that the map on the space of objects is a diffeomorphism; then $f$ is a cofibration. In particular, all maps of Lie groups are cofibrations. \end{proposition} Homomorphisms which aren't diffeomorphisms at the level of objects are often not cofibrations (in general, a condition for a map $H\to G$ to be a cofibration is probably that the map $H^0\to G^0$ is a closed embedding). Here we will give an example: \begin{exmp} Let $G\rightrightarrows *$ be a Lie group, and consider the identity morphism $G\to G\,.$ We can consider the trivial groupoid $G\rightrightarrows G\,,$ and there is a unique homomorphism $f$ mapping into $G\rightrightarrows *\,,$ which sends everything to $*\,.$ We get a natural transformation $f\Rightarrow f$ by sending the space of objects of $G\rightrightarrows G$ to the space of arrows of $G\rightrightarrows *$ using the identity map.
Now $f$ factors through the identity morphism, so the identity morphism extends this map; however, there can be no extension of this natural transformation, since $G\rightrightarrows *$ has only one object, and so the identity map $G\to G$ can't factor through it. \end{exmp} \begin{remark} Due to these results about fibrations and cofibrations, one can probably put a model structure on the category of $\infty$-fold Lie groupoids (that is, the category consisting of Lie groupoids, double groupoids, triple groupoids, etc.), with respect to a certain nice class of maps (ie. submersions at the level of objects). \end{remark} \subsection{``Relative'' Lie Groupoid Cohomology} There is a global version of relative Lie algebra cohomology, and there is also a Lie groupoid analogue of this which we will now discuss. We put ``relative'' in quotation marks as there doesn't appear to be a long exact sequence associated with this cohomology which involves a groupoid and a subgroupoid. However, in \cref{analogies} we will exhibit a cohomology which does fit into such a long exact sequence. \vspace{3mm}\\Once again, one can think about the canonical cofibration associated to a map $H\to G$ as presenting the stack $[G^0/G]$ as a double groupoid over $H\rightrightarrows H^0\,.$ This is a useful construction to make when comparing the cohomology of two groupoids as it assembles both groupoids into a single object. \begin{exmp} Let's specialize \ref{HH} to the case where $H\xhookrightarrow{}G$ is a wide subgroupoid.
In this case, the double groupoid is \begin{equation}\label{properr} \begin{tikzcd} H^{(1)}\sideset{_s}{_t}{\mathop{\times}} G^{(1)}\sideset{_s}{_s}{\mathop{\times}} H^{(1)} \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & G^{(1)} \arrow[d, shift right] \arrow[d, shift left] \\ H^{(1)} \arrow[r, shift right] \arrow[r, shift left] & G^0 \end{tikzcd} \end{equation} Notice that, if $H$ is proper, then the groupoids in all of the rows of the corresponding bisimplicial manifold are proper. Since the cohomology of a proper groupoid with values in a representation vanishes in positive degree, the cohomology of \ref{properr} reduces to the cohomology of the right column, and thus one can work with cocycles for which the pullback by $\delta_h^*$ is trivial. Therefore, $H^*(G,E)$ is isomorphic to the cohomology of the subcomplex of $Z(G,E)$ consisting of functions $f:G^{(n)}\to E$ such that $\delta_h^*f=0\,.$ These are functions such that \begin{equation}\label{naive} f(g_1,g_2,\ldots,g_n)=f(h_1g_1h_2^{-1},h_2g_2h_3^{-1},\ldots,h_ng_nh_{n+1}^{-1})\,, \end{equation} whenever the expression on the right makes sense. In degree $0$ we get functions invariant under the action of $H\,,$ ie. $f(s(h))=f(t(h))\,.$ \vspace{3mm}\\Notice that this double groupoid relates the cohomology of $H,G$ and the ``$H$-invariant'' (or ``relative'') cohomology of $G\,,$ given by the kernel of $\delta_h^*\,.$ It also relates the cohomology of $H,G$ and the cohomology of the mapping cone (to be defined in \cref{analogies}), which in this case may be interpreted as the relative cohomology (of $G$ relative to $H$). \end{exmp} Let's rephrase what was previously said about reducing the cohomology of the double groupoid to that of the right column.
With any double complex there is an associated spectral sequence; actually, there are two, but we will focus on the one where we compute the first page using the horizontal differentials, and the second page is then computed using the vertical differentials. This spectral sequence converges to the cohomology of the total complex, which in this case is the cohomology of $G$ (assuming Morita invariance of cohomology of double groupoids). In the case that $H$ is proper, this spectral sequence collapses on the second page to the first column. Summarizing this: \begin{proposition} Let $G$ be a Lie groupoid with a representation $E\,,$ and let $K$ be a wide and proper Lie subgroupoid. Then $H^*(G,E)$ is isomorphic to the cohomology of the subcomplex of functions $f:G^{(n)}\to E$ consisting of those functions which satisfy \begin{equation}\label{k invariant} f(g_1,g_2,\ldots,g_n)=f(k_1g_1k_2^{-1},k_2g_2k_3^{-1},\ldots,k_ng_nk_{n+1}^{-1})\,, \end{equation} whenever the expression on the right makes sense. In degree $0$ we get functions invariant under the action of $K\,,$ ie. $f(s(k))=f(t(k))$ (here $g_i\in G^{(1)}\,, k_i\in K^{(1)})\,.$ \end{proposition} \begin{remark} The subcomplex of functions which satisfy \Cref{naive} seems to be the complex of functions on the ``naive'' quotient of $G$ by the double groupoid $\text{Pair}(H)\,.$ \end{remark} \subsection{LA-Groupoid Associated to the Canonical Cofibration} In the previous section we discussed replacing a nice enough map $f:H\to G$ with a cofibration. Now one can ask: what is the LA-groupoid associated to the canonical cofibration $H\times_G H\rightrightarrows H\,?$ It is given by the following: \begin{equation}\label{LAg} \begin{tikzcd} (f\circ s)^!\mathfrak{g} \arrow[d] \arrow[r, shift right] \arrow[r, shift left] & f^!\mathfrak{g} \arrow[d] \\ H^{(1)} \arrow[r, shift right] \arrow[r, shift left] & H^0 \end{tikzcd} \end{equation} We will discuss this LA-groupoid more in \Cref{equiv la group}.
We may denote it $f^!\mathfrak{g}\,.$ In light of this, we see that it can be useful to replace a map with the canonical cofibration even if the map is already a cofibration. \vspace{3mm}\\Now let's specialize \ref{LAg} to the case that $f:H\to G$ is an inclusion of Lie groups. The resulting LA-groupoid is the following: \begin{equation} \begin{tikzcd}\label{LAgroup} H\ltimes_{\text{Ad}}\mathfrak{h}\ltimes\mathfrak{g} \arrow[r, shift left] \arrow[r, shift right] \arrow[d] & \mathfrak{g} \arrow[d] \\ H \arrow[r, shift left] \arrow[r, shift right] & * \end{tikzcd} \end{equation} \vspace{3mm}\\Now in the case that $H\subset Z(G)\,,$ something special happens if the sequence $0\to\mathfrak{h}\to\mathfrak{g}\to\mathfrak{g}/\mathfrak{h}\to 0$ splits as a direct sum. This can be useful for computing cohomology as it gives a simpler model of the LA-groupoid: \begin{proposition}\label{semidirect} Suppose that $\mathfrak{h}\subset Z(\mathfrak{g})$ (the center of $\mathfrak{g}$) and that $\mathfrak{g}\cong\mathfrak{h}\oplus\mathfrak{g}/\mathfrak{h}\,.$ Then the map $(h,[X])\mapsto (h,0,[X])$ induces a Morita map of LA-groupoids between \begin{equation} \begin{tikzcd} H\times\mathfrak{g}/\mathfrak{h} \arrow[r, shift left] \arrow[r, shift right] \arrow[d] & \mathfrak{g}/\mathfrak{h} \arrow[d] \\ H \arrow[r, shift left] \arrow[r, shift right] & * \end{tikzcd} \end{equation} and \ref{LAgroup}. Here, the Lie algebroid on the left is just a trivial bundle of Lie algebras, ie. 
the pullback of $\mathfrak{g}/\mathfrak{h}\to *\,.$ \end{proposition} For completeness, we will state the global analogue of the previous proposition: \begin{proposition} Suppose $N\subset Z(G)$ and that $G\cong N\times G/N\,.$ Then letting $\iota:N\to G$ be the inclusion map, we have that $\iota^!G$ is Morita equivalent to the following double groupoid: \begin{equation} \begin{tikzcd} N\times G/N \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & G/N \arrow[d, shift right] \arrow[d, shift left] \\ N \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} Here the groupoids in the top row and left column are trivial bundles of groups. Note that this is a splitting of $N\times G/N\rightrightarrows *\,.$ \end{proposition} \section{Analogies Between Lie Groupoids and Homotopy Theory}\label{analogies} Here we will just collect some observations which the author believes display analogies between (double) Lie groupoids and homotopy theory (which are more than analogies when thinking about topological spaces as being equivalent to their fundamental $\infty$-groupoid). Already to make some of these constructions we've had to exit the category of Lie groupoids and enter the category of double Lie groupoids; in this section we will, in a sense, have to leave the category of double Lie groupoids (depending on how you interpret the constructions). We will in particular discuss mapping cones, relative cohomology and suspension. \vspace{3mm}\\We have already discussed fibrations and cofibrations, and the canonical fibration and cofibration replacements, which are analogous to the mapping path space and the mapping cylinder. Another construction we could make is of the mapping cone: given a map $f:H\to G$ of Lie groupoids which is a \hyperref[surj stacks]{surjective submersion of stacks} (ie.
$f$ is essentially surjective and is a submersion at the level of objects), we can form the canonical cofibration and then collapse the base to a point. We get the following (semi-) bisimplicial space\footnote{To be precise, it is a semi-simplicial manifold in the category of simplicial manifolds.}: \begin{equation}\label{cone} \begin{tikzcd} H^{(1)}\sideset{_{f\circ s}}{_{t}}{\mathop{\times}} G^{(1)}\sideset{_s}{_{f\circ s}}{\mathop{\times}} H^{(1)} \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & H^{(0)}\sideset{_f}{_{t}}{\mathop{\times}} G^{(1)}\sideset{_s}{_{f}}{\mathop{\times}} H^{(0)} \arrow[d, shift right] \arrow[d, shift left] \\ * \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} Now in the case that $H\to G$ isn't essentially surjective but is still a submersion at the level of objects, one can form the canonical cofibration by forming the canonical cofibration of $H\sqcup G^0\to G$ instead, and then one can collapse $H$ in the base to a point. We will denote \ref{cone} by $C(f)\,.$ If $H\xhookrightarrow{} G$ is a subgroupoid, one might call its cohomology the relative cohomology of $G$ (relative to $H$). \vspace{3mm}\\Now we can compute the cohomology of \ref{cone}. Since we've collapsed the bottom row to a point, we are going to shift the degree of cohomology by one, so that what we would naturally call degree $1$ is now degree $0\,.$ If the canonical cofibration is analogous to the mapping cylinder, then the above construction should be analogous to the mapping cone. To support this idea, we have the following proposition: \begin{proposition} Let $f:H\to G$ be a morphism which is a submersion at the level of objects.
We get the following long exact sequence (where the coefficients are associated to $M$): \begin{equation} \cdots \to H^{n}(G)\to H^{n}(H)\to H^{n}(C(f))\to H^{n+1}(G)\to H^{n+1}(H)\to\cdots \end{equation} \end{proposition} Here, the map $H^{n}(C(f))\to H^{n+1}(G)$ is the one associated to the inclusion of $C(f)\xhookrightarrow{}f^!G;$ the map $H^{n}(G)\to H^{n}(H)$ is given by restricting the cohomology classes of $f^!G$ to the bottom row, ie. $H\rightrightarrows H^0\,;$ the morphism $H^{n}(H)\to H^{n}(C(f))$ is given by pulling back cohomology classes from $H$ to $C(f)$ by using the embedding of $H\rightrightarrows H^0$ into the bottom row of $f^!G$ and pulling back cohomology classes to the second row of $C(f)$ via $\delta^*_v\,.$ \vspace{3mm}\\Note that we in particular get a long exact sequence by taking a Lie groupoid $G\rightrightarrows G^0$ and letting $H=G^0\,.$ In this case, we get a long exact sequence relating the cohomologies of $G^0,[G^0/G]$ and the cohomology classes on $G$ corresponding to multiplicative objects (what was called ``truncated cohomology'' in Part 1). That is, the truncated cohomology is the cohomology of the mapping cone of $G^0\xhookrightarrow{}G\,.$ In this case the corresponding long exact sequence was first communicated to the author by Francis Bischoff. \vspace{3mm}\\Finally, if we take the mapping cone of $H\to *$ we get the suspension of $H\,.$ Explicitly, this is given by collapsing the base $H$ of the double groupoid $\text{Pair}(H)\rightrightarrows H$ to a point. \chapter{Foliations of Lie Groupoids and Stacks} \section{The (2,1)-Category of Foliations} As opposed to groupoids internal to the category of smooth manifolds, one can consider groupoids internal to the category of foliations, ie. a Lie groupoid $G\rightrightarrows G^0$ such that $G^0,\,G^{(1)}$ are foliated manifolds, and such that all structure maps are maps of foliations.
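As a simple illustration of such a lift: let $G$ be a Lie group, regarded as a Lie groupoid $G\rightrightarrows *\,,$ and let $N\subset G$ be a closed normal subgroup. Foliating the space of arrows by the cosets \begin{equation} G=\bigsqcup_{[g]\in G/N}gN\,, \end{equation} and foliating $G^0=*$ trivially, gives a groupoid internal to foliations: multiplication takes a product of leaves to a leaf, since normality gives $(g_1N)(g_2N)=g_1g_2N\,,$ and inversion takes the leaf $gN$ to the leaf $g^{-1}N\,.$ This is the simple foliation (in the sense of \Cref{simple}) determined by the quotient map $G\to G/N\,.$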
There is a forgetful functor from groupoids internal to the category of foliations to the category of Lie groupoids. A foliation of a Lie groupoid $G\rightrightarrows G^0$ is essentially a lift of $G$ to a groupoid internal to foliated manifolds. \begin{definition} A foliation of a Lie groupoid $G\rightrightarrows G^0$ is a foliation of $G^0,\,G^{(1)}$ such that all structure maps are maps of foliations. \end{definition} Foliated Lie groupoids naturally form a (2,1)-category. A morphism of foliated groupoids \begin{equation*} f:H\to G \end{equation*} is a morphism of groupoids which is also a degreewise map of foliations. A 2-morphism between $f_1\,,f_2:H\to G$ is a natural transformation $f_1\Rightarrow f_2\,,$ ie. a map $g_h:H^0\to G^{(1)}\,,$ such that $g_h$ is a map of foliated manifolds. \vspace{3mm}\\Of course, one can talk about Morita equivalences of foliations; however, this is a little more subtle. Before doing so, we will briefly go over another way of thinking about foliations: Associated to a foliation of a Lie groupoid $G\rightrightarrows G^0$ is a Lie algebroid subbundle of the tangent bundle to $G\,,$ \begin{equation} \begin{tikzcd} TG \arrow[r] \arrow[d, shift left] \arrow[d, shift right] & G \arrow[d, shift left] \arrow[d, shift right] \\ TG^0 \arrow[r] & G^0 \end{tikzcd} \end{equation} where the subbundle of $TG^0\,,TG^{(1)}$ consists of vectors tangent to the leaves of the foliation. This in particular gives a Lie algebroid groupoid. We will identify this LA-groupoid with the corresponding foliation of the Lie groupoid. \begin{definition} A Morita map of foliated Lie groupoids $H\to G$ is a map of foliated Lie groupoids for which the induced map \begin{equation} \begin{tikzcd} TH \arrow[d, shift right] \arrow[d, shift left] \arrow[rr, shift right=7] & & TG \arrow[d, shift right] \arrow[d, shift left] \\ TH^0 & & TG^0 \end{tikzcd} \end{equation} is a Morita map.
\end{definition} \begin{proposition}\label{morita foliations} Let $F:H\to G\,,$ $f:G\to H$ be morphisms of foliated groupoids, such that $f\circ F\Rightarrow \mathbbm{1}_H\,, F\circ f\Rightarrow \mathbbm{1}_G$ (the 2-morphisms are required to be compatible with the foliations). Then $F$ and $f$ induce Morita maps of foliations. \end{proposition} The previous discussion implies the following proposition: \begin{proposition} Associated to a foliation of a Lie groupoid $H$ is a sub LA-groupoid of the tangent LA-groupoid $TH\to H\,.$ We will denote this sub LA-groupoid by $D\to H\,,$ or $D_H\to H$ if there is risk of any confusion. \end{proposition} We can now slightly rephrase the definition of Morita map: a map between foliated Lie groupoids $H\to G$ is a Morita map if the induced map of LA-groupoids from $D_H\to H$ to $D_G\to G$ is a Morita equivalence. By analogy with the relative tangent bundle of a submersion $Y\to X\,,$ we make the following definition: \begin{definition} A foliation $D\to H$ associated to a (nice enough) map $H\to G$ will be called the relative tangent bundle (of $TH$ relative to $TG$)\,.\footnote{See \Cref{Normal bundles} for a discussion on why we can think of this as a normal bundle.} \end{definition} We can now invert weak equivalences (ie. Morita maps) to obtain a new (2,1)-category. This category is just the (2,1)-category of anafunctors (ie. roofs), but where the objects are foliated Lie groupoids, and the morphisms and 2-morphisms are compatible with the foliations. This leads us into the next section, but first we will define simple foliations. \begin{definition}\label{simple} We will call a foliation of a Lie groupoid $H$ simple if the foliation, at the level of objects and arrows, is given by a \hyperref[surj sub]{surjective submersion} $H\to G$ (see \cite{eli}).
\end{definition} \begin{proposition}\label{cohomology of fol} Given a (nice enough) morphism $H\to G\,,$ the fibration replacement $G\times_G H\to G$ is a simple foliation. \end{proposition} Now, given a nice enough map $f:H\to G\,,$ we have discussed a way of obtaining a foliation of a fibration replacement of $f\,.$ However, if $f$ satisfies the conditions in \Cref{simple}, we have a foliation of $H$ itself. These two foliations are Morita equivalent. \begin{proposition} Suppose $f:H\to G$ satisfies the conditions in \Cref{simple}, so that we get a simple foliation of $H\,.$ Then the canonical map $H\to G\times_G H$ is a Morita equivalence of foliations. \end{proposition} Furthermore, given a simple foliation $f:H\to G\,,$ there is a canonical double groupoid integrating this LA-groupoid, given by the following: \begin{equation} \begin{tikzcd} H^{(1)}\times_{G^{(1)}}H^{(1)} \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & H^{0}\times_{G^{0}}H^{0} \arrow[d, shift right] \arrow[d, shift left] \\ H^{(1)} \arrow[r, shift right] \arrow[r, shift left] & H^0 \end{tikzcd} \end{equation} This is a double Lie groupoid over $H\rightrightarrows H^0\,.$ Of course, we have already described another double groupoid over $H\rightrightarrows H^0\,,$ given by the canonical cofibration associated to $f;$ these two double Lie groupoids are canonically Morita equivalent. Therefore, the constructions we've been making agree with the usual constructions in the case that the map $f:H\to G$ defines a simple foliation. \begin{lemma}\label{iso of coh} Let $f:H\to G$ be a simple foliation, and let $M$ be a $G$-module. 
Then we have an isomorphism of cohomology $H^*(\mathbf{B}^{\bullet}H,f^{-1}\mathcal{O}(M))\cong H^*(D\to H,f^*M)\,.$ \end{lemma} \begin{proof} The proof goes by forming the nerve of $D\to H$ and taking a resolution by fiberwise differential forms of $(f^{-1}\mathcal{O}(M))_{\mathbf{B}^n H}\,,$ for each $n\ge 0\,.$ \end{proof} Now let us summarize what we will call the (2,1)-category of foliations of Lie groupoids. \begin{itemize} \item The objects are foliations of Lie groupoids, which can equivalently be thought of as LA-groupoid subbundles of $TG\to G\,;$ equivalently, VB-subbundles for which sections are closed under the Lie bracket. \item The morphisms between foliations of $H$ and $G$ are morphisms $f:H\to G$ for which $f_*$ is a morphism of LA-groupoids (equivalently, a morphism of VB-groupoids). \item Given morphisms $f,g:H\to G$ of foliated Lie groupoids, a 2-morphism $f\Rightarrow g$ is given by a 2-morphism $f\Rightarrow g$ of maps between Lie groupoids for which the derivative maps into the subbundle, ie. a 2-morphism is given by a map $h:H^0\to G^{(1)}$ satisfying the standard conditions, such that $h_*$ maps vectors in the foliation of $H^0$ to vectors in the foliation of $G^{(1)}\,.$ \end{itemize} This category has weak equivalences, which are given by morphisms inducing a Morita equivalence of LA-groupoids. \subsection{(2,1)-Category of Foliations of Stacks} The (2,1)-category of foliations of stacks is essentially the (2,1)-category of stacks, but where all Lie groupoids are foliated and all morphisms and 2-morphisms are compatible with the foliations: \begin{itemize} \item The objects are foliated Lie groupoids. \item The morphisms are anafunctors for which the maps are maps of foliated Lie groupoids, and for which the left leg is a Morita map of foliated Lie groupoids. \item The 2-morphisms are 2-morphisms in the (2,1)-category of Lie groupoids which are maps of foliated manifolds. 
\end{itemize} \begin{definition} A foliation of a stack $\mathcal{G}=[G^0/G]$ is a foliation of $G$ up to Morita equivalence of foliations. \end{definition} Now we will define simple foliations of stacks; the foliations relevant to the van Est map are all simple: \begin{definition} A foliation of a stack is simple if it can be presented by a simple foliation of Lie groupoids (see \Cref{simple} for the definition of simple foliation of Lie groupoids). \end{definition} Given a map of stacks $\mathcal{F}:\mathcal{H}\to\mathcal{G}\,,$ we should have a criterion for determining when it can be presented by a \hyperref[surj sub]{surjective submersion} of Lie groupoids, so that it defines a simple foliation. Before doing this, we make a definition. \begin{definition}\label{surj stacks} A map of stacks $\mathcal{F}:\mathcal{H}\to\mathcal{G}$ is called a surjective submersion if it can be presented by a \hyperref[surj sub]{surjective submersion of Lie groupoids} $F:H\to G\,.$ Similarly, we may call a map of Lie groupoids $F:H\to G$ a surjective submersion of stacks if the induced map of stacks is a surjective submersion. \end{definition} \begin{lemma} If a map of stacks $\mathcal{F}:\mathcal{H}\to\mathcal{G}$ is a surjective submersion, then given any presentation of the map $f:H\to G\,,$ the map $G\sideset{_s}{_{f^0}}{\mathop{\times}}H^0\to G^0\,,(g,h^0)\mapsto t(g)$ will be a surjective submersion of Lie groupoids. \end{lemma} Given the previous result, in order to determine if a map of stacks is a surjective submersion we only need to check it on one presentation. 
\vspace{3mm}\\Now \Cref{canonical map}, \Cref{morita foliations} imply the following result: \begin{proposition} A surjective submersion of stacks $\mathcal{F}:\mathcal{H}\to\mathcal{G}$ determines a simple foliation of $\mathcal{H}\,.$ \end{proposition} \begin{remark} One way to understand these definitions of surjective submersions for Lie groupoids and stacks is to require the following property: a map $f:H\to G$ (thought of as Lie groupoids or stacks) should be a surjective submersion if and only if the double structure $H\times_G H$ associated to it exists and is Morita equivalent to $G\,.$ If we take the fiber product to be the strong one, we get the definition of surjective submersion for Lie groupoids, but if we take the fiber product to be the one appropriate for stacks, we get the definition of surjective submersion for stacks. \end{remark} We have the following simple but useful criterion for determining if a map of stacks is a surjective submersion. \begin{proposition} Suppose a map of stacks $\mathcal{F}:\mathcal{H}\to\mathcal{G}$ can be presented by a map $f:H\to G$ which is essentially surjective and which is a submersion at the level of objects. Then $\mathcal{F}$ is a surjective submersion. \end{proposition} Given the definition of foliation of a stack, every foliation of a manifold defines a foliation of the associated stack; however, it is also possible for a singular foliation of a manifold $X$ to ``lift" to a foliation of the associated stack. This will happen whenever the singular foliation is induced by an integrable Lie algebroid (due to the fact that every integrable Lie algebroid defines a foliation of the stack associated to the base). In particular, Lie algebroids with almost injective anchor maps (ie. the anchor map is injective on a dense open set in the base) are integrable and thus define simple foliations of the stack associated to the space of objects. 
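\vspace{3mm}\\A minimal instance of the latter phenomenon (a standard example, included here for illustration): consider the action Lie algebroid $\mathbb{R}\ltimes\mathbb{R}\to\mathbb{R}$ generated by the vector field $x\partial_x\,,$ ie. with anchor
\begin{equation}
\rho(x,s)=sx\,\partial_x\,,
\end{equation}
which is injective on the dense open set $\mathbb{R}-\{0\}\,.$ The induced singular foliation of $\mathbb{R}$ has leaves $(-\infty,0)\,,\{0\}\,,(0,\infty)\,,$ and the algebroid is integrated by the action groupoid $\mathbb{R}\rtimes\mathbb{R}\,,$ acting by $(x,t)\mapsto e^tx\,.$ Hence this singular foliation lifts to a foliation of the stack associated to $\mathbb{R}\,.$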
\begin{exmp}\label{singular foliation} Consider the singular foliation of $S^2$ induced by the action of $S^1\,.$ This foliation is singular at the north and south poles. We can form the groupoid $G=S^2\rtimes S^1\,,$ and then the canonical fibration replacement of the map $S^2\xhookrightarrow{} S^2\rtimes S^1\,,$ given by $G\rtimes G\,,$ is Morita equivalent to $S^2\,.$ The foliation of $G\rtimes G$ induced by the mapping $G\rtimes G\to G$ is Morita equivalent, as LA-groupoids, to the Lie algebroid $\mathfrak{g}\to S^2\,;$ away from the singular points of the foliation on $S^2\,,$ this Morita equivalence of LA-groupoids is a Morita equivalence of foliations. In this sense this singular foliation lifts to a (regular) foliation of the stack. \begin{remark} In the context of \Cref{singular foliation}, one can pull back the standard symplectic form on $S^2$ to $G\rtimes G$ via the target map. This will define a $0$-shifted symplectic form on $G\rtimes G\,,$ and one might be tempted to do a geometric quantization of $S^2$ by geometrically quantizing the stack (using $G\rtimes G$ as a representative); the motivation is due to the fact that the singular foliation lifts to a regular foliation on the stack. The foliation on the stack is generically Lagrangian (using the definition of Lagrangian for a $0$-shifted symplectic structure); however, the two leaves corresponding to the singular leaves of the north and south pole are not Lagrangian, but the symplectic form still vanishes on those leaves (this corresponds to the fact that the north and south poles are isotropic, but not coisotropic). This is however a maximally isotropic foliation in the sense that, even locally, there is no foliation by isotropic submanifolds whose leaves contain the leaves of this foliation as proper subsets. One can still compute the ``Bohr-Sommerfeld" leaves, and the dimension of the space of sections obtained agrees with the quantization of $S^2$ via the K\"ahler polarization. 
\end{remark} \end{exmp} Given a Lie algebra $\mathfrak{g}\,,$ we will denote its canonical integration by $\tilde{G}\,.$ The category of Lie algebras can be naturally upgraded to a (2,1)-category, where a 2-morphism $f_1\Rightarrow f_2$ between $f_1\,,f_2:\mathfrak{h}\to\mathfrak{g}$ is given by a $\tilde{g}\in\tilde{G}$ such that $f_1=\text{Ad}_{\tilde{g}}^*\,f_2\,.$ \vspace{3mm}\\Now given a Lie algebra $\mathfrak{g}\,,$ we get a simple foliation of $[*/*]$ given by $\tilde{G}\rtimes\tilde{G}\to \tilde{G}\,.$ Conversely, a foliation of $[*/*]$ is given, in particular, by a mapping $F:\text{Pair}(X)\to H\,,$ for some manifold $X$ and some Lie groupoid $H\rightrightarrows H^0\,.$ Letting $*$ be a point in $X\,,$ we get a Lie algebra $\mathfrak{g}$ by taking the Lie algebra of the isotropy group over $F(*)\,.$ Given any other point $*'\in X\,,$ we have a canonical isomorphism between the isotropy groups of $F(*)$ and $F(*')\,,$ so in this sense the Lie algebra $\mathfrak{g}$ is well-defined. We have the following result: \begin{proposition} The full (2,1)-subcategory of simple foliations of $[*/*]$ is equivalent to the (2,1)-category of Lie algebras. \end{proposition} \begin{remark} From this point of view, the Lie algebroid cohomology of $\mathfrak{g}\to G^0$ is the same as the foliated cohomology of the stack $[G^0/G^0]\,,$ with respect to the canonical foliation $G\rtimes G\to G\,.$ In particular, $H^1(\mathfrak{g},\mathbb{C}^*_{G^0})$ classifies line bundles with foliated flat connection on the stack $[G^0/G^0]\,.$ \end{remark} \begin{proposition} The foliation determined by an integrable Lie algebroid $\mathfrak{g}\to M$ is independent of the source-connected Lie groupoid integrating it. \begin{proof} The foliation associated to any source-connected Lie groupoid $G\rightrightarrows M$ integrating $\mathfrak{g}\to M$ is equivalent to the foliation associated to the source simply connected integration. 
\end{proof} \end{proposition} The following conjecture is a converse to the result that integrable Lie algebroids determine foliations of the base: \begin{conjecture} A Lie algebroid $\mathfrak{g}\to X$ is integrable if and only if it is equivalent, as an LA-groupoid, to a simple foliation. \end{conjecture} \section{Leaves of a Foliation} Given a foliation of a stack, we can present it by a foliation of a Lie groupoid, and the leaves in the space of arrows passing through the identity bisection are subgroupoids. We would like to say that the union of these leaves is the stack. We won't quite be able to say this, but something similar will hold. First we will describe categorical unions: \subsection{Categorical Union} In a category with a given object $\mathcal{C}\,,$ a subobject $\mathcal{A}\xhookrightarrow{} \mathcal{C}$ is defined to be an equivalence class of monomorphisms. In addition, one can form the category of subobjects of $\mathcal{C}\,,$ where a morphism between subobjects is essentially an inclusion. Now given two subobjects $\mathcal{A},\mathcal{B}\xhookrightarrow{} \mathcal{C}\,,$ their union is defined to be the coproduct of $\mathcal{A},\mathcal{B}$ in the category of subobjects of $\mathcal{C}\,.$ \vspace{3mm}\\Let's consider the category of sets. Consider a set $X$ and let $A,B\subset X\,.$ A morphism between subsets $A\to C$ is an inclusion $A\subset C\,.$ The coproduct of $A$ and $B$ is in particular a subset of $X$ receiving morphisms from $A\,,B\,;$ the coproduct is $A\cup B\,.$ Now given a third subset $C\subset X\,,$ one can form the union $(A\cup B)\cup C\,.$ In particular, given any collection of subsets $\{A_i\}_{i\in I}\,,$ one can form all finite unions, obtaining a new collection of sets $\{A_j\}_{j\in J}\,,$ where $J$ is a directed set, given by \begin{equation} J=\coprod_{n=1}^{\infty} I^n\,. 
\end{equation} For example, if $j=(i_1,i_2,i_3)\,,$ then $A_j=A_{i_1}\cup A_{i_2}\cup A_{i_3}\,.$ One can now form the union $\bigcup_{i\in I} A_i$ as a direct limit in this way. \subsection{Union of Leaves} Now in the category of groupoids, a subgroupoid is a subobject; however, in the (2,1)-category of groupoids, only full subgroupoids behave as subobjects. Given a Lie groupoid $G$ and two full subgroupoids $H,K\,,$ their union is the full subgroupoid over $H^0\cup K^0$ (meaning that it contains all morphisms between objects in $H^0\cup K^0$). Given a collection of full subgroupoids $\{G_i\}_{i\in I}$ such that $\cup_i G^0_i=G^0\,,$ their union is $\cup_i G_i=G\,.$ Therefore, if the foliation is by full subgroupoids we can say their union is the groupoid. \vspace{3mm}\\Now suppose we have a foliation of a stack, given by a foliation of a Lie groupoid $G\rightrightarrows G^0\,.$ Let $\{L_i\}_{i\in I}$ be the set of leaves in $G^{(1)}$ intersecting the space of objects. Typically these subgroupoids will not be full; however, the categorical image of $L_i$ in $G$ is just the full subgroupoid over $L_i^0\,,$ therefore we can say that the union of the images of the leaves of the stack is the stack itself (the image of a morphism in a category is essentially the smallest monomorphism through which the morphism factors). \vspace{3mm}\\ Another observation is the following: a Lie groupoid-principal bundle determines a foliation of the total space of the principal bundle, and a foliation of the Lie groupoid refines this foliation of the total space. Therefore, given a foliation of a Lie groupoid, we get a foliation of the objects of the associated stack. \subsection{Foliations of Lie Groups} Here we will show that foliations of Lie groupoids, in some sense, generalize normal subgroups. The context is foliations of a Lie group $G\to *$ (ie. the space of arrows is foliated, and the foliation is compatible with the structure maps). 
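\vspace{3mm}\\For a concrete picture (a standard example, included here for illustration), foliate the abelian Lie group $G=\mathbb{R}^2$ by the lines of a fixed slope $m\,,$ ie. with leaves
\begin{equation}
L_c=\{(x,y)\in\mathbb{R}^2:y=mx+c\}\,,\qquad c\in\mathbb{R}\,.
\end{equation}
Addition takes $L_c\times L_{c'}$ to $L_{c+c'}\,,$ so the foliation is compatible with the structure maps; the leaf through the origin, $L_0\,,$ is a (normal) subgroup, and the remaining leaves are its cosets.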
We have the following observation (made by Francis Bischoff, and a similar observation made by Eli Hawkins in \cite{eli}): \begin{proposition} Foliations of a Lie group are in bijective correspondence with normal subgroups. \end{proposition} \begin{proof} To see this, note that since the foliation is compatible with the composition, the composition must take two leaves to a third leaf. Hence, the set of leaves has a multiplication on it, and we can form the quotient to get a group. This implies that the leaf intersecting the origin is a normal subgroup, and the leaves are the cosets. \end{proof} \begin{remark} Note that the above result implies, in particular, that all foliations of Lie groups are simple, since the quotient of a Lie group by a normal Lie subgroup always exists. \end{remark} \chapter{LA-Groupoids Associated to a Map} Here we will further discuss the LA-groupoids associated to a (nice enough) map $f:H\to G\,.$ So far, given a (nice enough) map $f:H\to G\,,$ we have two ways of associating an LA-groupoid (see \Cref{cofibration replacement}): the first way is by forming the fibration replacement and taking the associated foliation, and the second way is by forming the canonical cofibration and taking the associated LA-groupoid. \section{Equivalence of the Two LA-groupoids}\label{equiv la group} Here we will show that the resulting LA-groupoids are Morita equivalent. First, we will give the construction of the two LA-groupoids: Let $f:H\to G$ be a (nice enough) map of Lie groupoids. 
First we form the fibration replacement $H\ltimes P\rtimes G\to G\,,$ and from this we get an LA-groupoid by taking the kernels of the left and right columns as a map of vector bundles: \begin{equation} \begin{tikzcd} TH\ltimes TP\rtimes TG \arrow[r, shift left] \arrow[d] \arrow[r, shift right] & TP \arrow[d] \arrow[r, "p_{3*}", shift right=8] & TG \arrow[d] \arrow[r, shift right] \arrow[r, shift left] & TG^0 \arrow[d] \\ H\ltimes P\rtimes G \arrow[r, shift left] \arrow[r, shift right] & P & G \arrow[r, shift left] \arrow[r, shift right] & G^0 \end{tikzcd} \end{equation} Explicitly, it is given by \begin{equation}\label{exx} \begin{tikzcd} TH\ltimes T_sP\rtimes G \arrow[r, shift left] \arrow[d] \arrow[r, shift right] & T_sP \arrow[d] \\ H\ltimes P\rtimes G \arrow[r, shift left] \arrow[r, shift right] & P \end{tikzcd} \end{equation} \vspace{3mm}\\Now the second way of obtaining an LA-groupoid (see \Cref{cofibration replacement}) is given by \begin{equation} \begin{tikzcd} (f\circ s)^!\mathfrak{g} \arrow[r, shift left] \arrow[r, shift right] \arrow[d] & f^!\mathfrak{g} \arrow[d] \\ H^{(1)} \arrow[r, shift right] \arrow[r, shift left] & H^0 \end{tikzcd} \end{equation} We can rewrite this as \begin{equation} \begin{tikzcd} TH\ltimes TH^0\times_{TG^0}\mathfrak{g} \arrow[d] \arrow[r, shift left] \arrow[r, shift right] & TH^0\times_{TG^0}\mathfrak{g} \arrow[d] \\ H \arrow[r, shift left] \arrow[r, shift right] & H^0 \end{tikzcd} \end{equation} Now, there is a natural map \begin{equation}\label{morita LA} \begin{tikzcd} TH\ltimes T_sP\rtimes G \arrow[r, shift left] \arrow[d] \arrow[r, shift right] & T_sP \arrow[d] \arrow[r, "", shift right=9] & TH\ltimes TH^0\times_{TG^0}\mathfrak{g} \arrow[r, shift left] \arrow[d] \arrow[r, shift right] & TH^0\times_{TG^0}\mathfrak{g} \arrow[d] \\ H\ltimes P\rtimes G \arrow[r, shift left] \arrow[r, shift right] & P & H \arrow[r, shift left] \arrow[r, shift right] & H^0 \end{tikzcd} \end{equation} where the map on the bottom row is given by the 
projection onto the first factor, and the map on the top row is given by right translation (via $G$) of vectors in $T_sP=TH^0\times_{TG^0}T_sG$ to vectors in $TH^0\times_{TG^0}\mathfrak{g}\,.$ This map is a Morita equivalence of the groupoids in the top row, therefore these LA-groupoids are Morita equivalent. \begin{remark} Specializing this result to the case of $\iota:G^0\xhookrightarrow{} G\,,$ we can interpret $\mathfrak{g}$ in the following way: we can equip $G\rightrightarrows G^0$ with the trivial LA-groupoid structure $0_G\to G$ (ie. the zero vector bundle). We can then interpret $\mathfrak{g}$ as being $\iota^!0_G$ (this pullback should be understood as the pullback appropriate to the (2,1)-category of groupoids, ie. we pull back $0_G\to G$ to the fibration replacement). \end{remark} \begin{remark} This gives one kind of duality between fibrations and cofibrations, ie. given a (nice enough) map $f:H\to G\,,$ we have two methods of obtaining LA-groupoids, one using fibrations and one using cofibrations, and they both agree. Related to this duality is another: given a (nice enough) map $f:H\to G\,,$ we can compute the fibers of the map by computing the kernel of the fibration replacement $G\times_G H\to G$ over an object in $G^0\,.$ Similarly, we can compute the fibers as the kernel of the target map from the top groupoid to the bottom groupoid in \ref{HH}, over an object in $H^0\,.$ \end{remark} \section{The Normal Bundle is an LA-Groupoid} Here we will show how the normal bundle of a wide Lie subgroupoid can be interpreted as an LA-groupoid. First we will specialize the previous construction to the case that $H=G$ (the one associated to the canonical cofibration). 
We have the following LA-groupoid: \begin{equation} \begin{tikzcd}\label{LA-groupoid2} TG\ltimes_{TG^0}\mathfrak{g} \arrow[d, shift left] \arrow[r] \arrow[d, shift right] & G^{(1)} \arrow[d, shift left] \arrow[d, shift right] \\ \mathfrak{g} \arrow[r] & G^0 \end{tikzcd} \end{equation} ie. there is a natural action of $TG$ on $\mathfrak{g}\,.$ To expand on the groupoid in the left column of \ref{LA-groupoid2}, note that given a groupoid $G\rightrightarrows G^0\,,$ we can form the tangent groupoid $TG\rightrightarrows TG^0\,.$ Now a groupoid naturally acts on itself, with moment map being the target. Therefore, we can form the groupoid $TG\rtimes TG\rightrightarrows TG\,,$ and we can form the subgroupoid \begin{equation}\label{TG action T_sG} T_sG\rtimes TG\rightrightarrows T_sG\,, \end{equation} which consists of the action of $TG$ on vectors in $TG$ which are tangent to the source fibers. Using right translation, we have an action of $TG$ on $\mathfrak{g}\,,$ \begin{equation} \mathfrak{g} \sideset{}{_{TG^0}}{\mathop{\rtimes}} TG \rightrightarrows \mathfrak{g}\,; \end{equation} here, the moment map for $\mathfrak{g}$ is just the anchor map. One way of describing this action is to choose an adjoint representation up to homotopy, ie. a splitting of the sequence \begin{equation} \begin{tikzcd} t^*\mathfrak{g} \arrow[r, "r"] & TG^{(1)} \arrow[r, "s_*"] \arrow[l, "\omega", dotted, bend left, shift left=2] & s^*TG^0 \end{tikzcd} \end{equation} such that the splitting is the canonical one when restricted to $G^0\,.$ Then, one obtains an adjoint action up to homotopy, given by \begin{equation} Ad_g(X_{s(g)}):=\omega_g(g(X_{s(g)}-\alpha(X_{s(g)})))g^{-1}\,. \end{equation} Now we may define an action of $TG$ on $\mathfrak{g}\,,$ given by \begin{equation} \tilde{X}_g\cdot X_{s(g)}=Ad_g(X_{s(g)})+\omega_g(\tilde{X}_g)g^{-1}\,. \end{equation} Let us emphasize that the above action of $TG$ is a bona fide action, and that it doesn't depend in any way on the choice of splitting. 
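\vspace{3mm}\\As a quick sanity check (a standard special case, included here for illustration): when $G$ is a Lie group, so that $G^0=*$ and $TG^0=0\,,$ the sequence above reduces to $\mathfrak{g}\to TG\to 0\,,$ and right translation provides a canonical splitting $\omega_g=(R_{g^{-1}})_*\,.$ The adjoint action up to homotopy then reduces to the usual adjoint representation,
\begin{equation}
Ad_g(X)=gXg^{-1}\,,\qquad X\in\mathfrak{g}\,,
\end{equation}
which in this case is a genuine action, independent of any choices.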
\subsection{The Normal Bundle} Using the action of $TG$ on $\mathfrak{g}$ given in the previous section, we deduce that, given a (nice enough) homomorphism $H\to G\,,$ we get a natural action of $TH$ on $\mathfrak{g}\,,$ and $TH\ltimes_{TG^0}\mathfrak{g}$ is an LA-groupoid over $H\,.$ In the case that $H\xhookrightarrow{}G$ is a wide Lie subgroupoid, this specializes to \begin{equation} \begin{tikzcd} TH\ltimes\mathfrak{g} \arrow[d, shift left] \arrow[r] \arrow[d, shift right] & H^{(1)} \arrow[d, shift left] \arrow[d, shift right] \\ \mathfrak{g} \arrow[r] & G^0 \end{tikzcd} \end{equation} where the action of $TH$ on $\mathfrak{g}$ is given by \begin{equation} \tilde{X}_g\cdot X_{s(g)}=Ad_g(X_{s(g)})+\omega_g(\tilde{X}_g)g^{-1}\,. \end{equation} We will now show how this LA-groupoid can be thought of as equipping the normal bundle of $H^{(1)}\xhookrightarrow{}G^{(1)}$ with additional structure. \vspace{3mm}\\First, apply the forgetful functor to VB-groupoids, so that we obtain the same diagram but have forgotten the Lie brackets. Now, consider the following VB-groupoid \begin{equation}\label{LA-groupoidss} \begin{tikzcd} H\ltimes(\mathfrak{g}/\mathfrak{h}) \arrow[d, shift left] \arrow[r] \arrow[d, shift right] & H \arrow[d, shift left] \arrow[d, shift right] \\ \mathfrak{g}/\mathfrak{h} \arrow[r] & G^0 \end{tikzcd} \end{equation} Here, \begin{equation}\label{action} h\cdot X=Ad_h(X_{s(h)})\end{equation} for $X\in\mathfrak{g}/\mathfrak{h}\,.$ This is well-defined, as we can see as follows: consider $X_{s(h)}+Y_{s(h)}\,,$ where $Y_{s(h)}\in \mathfrak{h}_{s(h)}\,.$ Let $W_h\in TH_h$ be such that $s_*W_h=\alpha(X_{s(h)}+Y_{s(h)})$ (this is possible, since $s$ is a submersion). Then \begin{equation}\label{normal3} W_h\cdot (X_{s(h)}+Y_{s(h)})=Ad_hX_{s(h)}+Ad_hY_{s(h)}+\omega_h(W_h)h^{-1}\,. 
\end{equation} Now recall that the action is independent of $\omega\,,$ and in addition this action at a point $g\in G$ only depends on $\omega_g\,;$ thus we may choose $\omega_h$ so that $\omega_h(TH)\subset t^*\mathfrak{h}\,.$ Then $Ad_hY_{s(h)}+\omega_h(W_h)h^{-1}\in \mathfrak{h}_{s(h)}\,,$ and we see that the action given in \ref{normal3} is independent of $Y_{s(h)}$ and $W_h\,,$ and thus the action given by \cref{action} is well-defined. Now, the natural homomorphism $TH\ltimes \mathfrak{g}\to H\ltimes\mathfrak{g}/\mathfrak{h}$ is a Morita equivalence\footnote{Note that, if $\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{g}/\mathfrak{h}\,,$ then this is a Morita equivalence of LA-groupoids.}, and from this we get that our VB-groupoid is Morita equivalent to \ref{LA-groupoidss}. Now, applying the forgetful functor from VB-groupoids to vector bundles over manifolds, we get \begin{equation} \begin{tikzcd}\label{LA-groupoid} H^{(1)}\times_{G^0}\mathfrak{g}/\mathfrak{h} \arrow[r] & H^{(1)}\,, \end{tikzcd} \end{equation} which is naturally identified, via translation, with the normal bundle of $H^{(1)}\xhookrightarrow{} G^{(1)}\,.$ \section{Geometric Construction of the Natural Representation of Wide Subgroupoids} In the previous section, we showed that given a wide subgroupoid (ie. a subgroupoid which contains all objects) $H\xhookrightarrow{} G\,,$ there is a natural action of $H$ on $\mathfrak{g}/\mathfrak{h}\,.$ We will now construct this action geometrically. 
\vspace{3mm}\\First, let $X_{s(h)}\in \mathfrak{g}/\mathfrak{h}\vert_{s(h)}\,.$ Choose a lift of $X_{s(h)}$ to $\tilde{X}_{s(h)}\in\mathfrak{g}\,.$ Now, since $H$ is a wide subgroupoid and $s$ is a submersion, there exists a vector $X'_{s(h)}\in TH_{s(h)}$ such that $t_* X'_{s(h)}=t_*(\tilde{X}_{s(h)})\,.$ Therefore, $t_*(\tilde{X}_{s(h)}-X'_{s(h)})=0\,,$ and using the groupoid $TG\rightrightarrows TG^0\,,$ we have that $(h,0)\in TH\vert_h$ is composable with $\tilde{X}_{s(h)}-X'_{s(h)}\,,$ and we get a vector $(h,0)\cdot(\tilde{X}_{s(h)}-X'_{s(h)})\in TG\vert_h\,.$ Now once again, since $H$ is a wide subgroupoid and $s$ is a submersion, there is a vector $Y_h\in TH\vert_h$ such that $s_*Y_h=s_*(\tilde{X}_{s(h)}-X'_{s(h)})\,,$ therefore $s_*((h,0)\cdot(\tilde{X}_{s(h)}-X'_{s(h)})-Y_h)=0\,.$ Hence, we can right translate to get \begin{equation} ((h,0)\cdot(\tilde{X}_{s(h)}-X'_{s(h)})-Y_h)\cdot h^{-1}\in \mathfrak{g}\vert_{t(h)}\,. \end{equation} After passing to the quotient, we get a well-defined action of $H$ on $\mathfrak{g}/\mathfrak{h}\,.$ \vspace{3mm}\\Furthermore, the analogous argument shows that, if $f:H\to G$ is a homomorphism which is a surjective submersion on the base, one gets a representation of $H$ on $f^*\mathfrak{g}/\mathfrak{h}\,.$ \begin{remark} One can study the cohomology and the truncated cohomology \begin{equation} H^*(H,\mathfrak{g}/\mathfrak{h})\,,\quad H_0^*(H,\mathfrak{g}/\mathfrak{h})\,, \end{equation} respectively. Roughly, these should classify deformations in the normal direction; note that $H^{(1)}$ is a bitorsor for $H\rightrightarrows H^0\,.$ The author's interpretation of these cohomologies is as follows: the degree $0$ cohomology should classify deformations of $H^{(1)}\xhookrightarrow{}G^{(1)}$ in the normal direction, as a bitorsor for $H\rightrightarrows H^0\,.$ In the second case, the degree $0$ cohomology should classify deformations of $H\xhookrightarrow{} G$ in the normal direction, as a Lie groupoid. 
\vspace{3mm}\\Furthermore, associated to any Lie groupoid representation $E\to G^0$ of $G\rightrightarrows G^0$ is a cohomology class in $H^1(G,\mathbb{C}^*_X)$ (or in $H^1(G,\mathbb{R}^*_X)$ if the representation is a real vector bundle); this class is obtained by taking the induced representation on the determinant bundle $\Lambda^{\text{top}} E\,,$ and since one-dimensional representations are classified by $H^1(G,\mathbb{C}^*_X)$ we get a natural cohomology class. Therefore, associated to any (nice enough) homomorphism $f:H\to G\,,$ there is a natural cohomology class in degree $1$ — this generalizes the fact that to any submanifold $Y\xhookrightarrow{}X$ there is a natural degree $1$ cohomology class associated to the determinant of the normal bundle. \end{remark} \section{Normal Bundles of Stacks in General}\label{Normal bundles} Here we will show how some of the constructions we've been making generalize the construction of normal bundles. \vspace{3mm}\\First, let us record an observation: consider an embedding $\iota:N\xhookrightarrow{}M\,;$ the normal bundle is defined as $\iota^*TM/TN\,.$ Now we can think of this in the following way: consider the following VB-groupoid \begin{equation} \begin{tikzcd} TN\ltimes\iota^*TM \arrow[d] \arrow[r, shift right] \arrow[r, shift left] & \iota^*TM \arrow[d] \\ N \arrow[r, shift right] \arrow[r, shift left] & N \end{tikzcd} \end{equation} There is a natural Morita map to the normal bundle, so they are equivalent descriptions. However, this description works for any smooth map $\pi:Y\to X$ between manifolds, ie. 
we can consider the following to be the normal bundle \begin{equation}\label{normal} \begin{tikzcd} TY\ltimes\pi^*TX \arrow[d] \arrow[r, shift right] \arrow[r, shift left] & \pi^*TX \arrow[d] \\ Y \arrow[r, shift right] \arrow[r, shift left] & Y \end{tikzcd} \end{equation} In the case that $\pi$ is a surjective submersion, $\pi^*TX/TY$ doesn't give the right vector bundle, however \ref{normal} is Morita equivalent to the relative tangent bundle, which is what we want. One should be able to make such a construction for any smooth morphism $H\to G$ of Lie groupoids, however it will probably be a Lie algebroid in the category of double Lie groupoids. In addition, in this case the relative tangent bundle really is a normal bundle, since the map $Y\to X$ is equivalent to the map $Y\xhookrightarrow{} Y\times_X Y\rightrightarrows Y$ and the relative tangent bundle is the normal bundle to this map. Since injectivity of objects isn't invariant under Morita equivalence, it makes sense to think of normal bundles in this way. 
\vspace{3mm}\\We have already generalized the relative tangent bundle for a (nice enough) map $f:H\to G$ between Lie groupoids, so we feel justified in calling it the normal bundle, at least in the context of double Lie groupoids, where we can always replace $f$ with an embedding. One can probably make sense of construction \ref{normal} in general for any smooth map $f:H\to G$ of Lie groupoids, and if $f$ is a submersion we should in addition get a Lie algebroid structure; but that's not the point of this thesis. \chapter{The van Est Map} The van Est map with respect to a (nice enough) homomorphism $f:H\to G$ is a map from the Lie groupoid cohomology of $G$ to the associated foliated cohomology of $H\,.$ Formally, it is given by the following composition: \begin{equation} \begin{tikzcd} {H^*(G,M)} \arrow[r] \arrow[rr, "VE", bend right, shift right=2] & {H^*(H,f^{-1}\mathcal{O}(M))} \arrow[r] & {H^*_{\text{dR}}(f:H\to G,M)} \end{tikzcd} \end{equation} Here, the second map is obtained by taking a fiberwise de Rham resolution, and is therefore an isomorphism. The first map is not in general an isomorphism. However, if the fibers of $f$ are $n$-connected (which is equivalent to the classifying space of its fibers being $n$-connected), it will be an isomorphism up to degree $n\,,$ and injective in degree $n+1\,.$ Its image in degree $n+1$ will consist of classes which pull back to a trivial cohomology class on each fiber. \vspace{3mm}\\Heuristically, this is true because any class in $H^*(H,f^{-1}\mathcal{O}(M))$ which vanishes on the fibers ``should" be pulled back from the base. There are obstructions to doing this, but the obstructions lie in lower degree cohomology (of a locally constant sheaf) of the fibers, which is zero in the degrees we are considering due to the connectivity assumption. 
\vspace{3mm}\\More precisely, consider the case of a surjective submersion between spaces $\pi:Y\to X\,,$ where we take the cohomology of $X$ with respect to functions valued in some abelian Lie group. If this were a fiber bundle, then we could locally write $\pi^{-1}(U)=U\times F\,,$ where $F$ is the fiber of $\pi\,.$ Then one could make a Leray spectral sequence argument to derive the result, since the local product formula would give us a good handle on the derived functors of $\pi\,.$ However, in the case that $\pi$ isn't a fiber bundle, the spectral sequence doesn't offer much help because the derived functors can be very complicated. To illustrate this, consider the following example (from the paper ``The Relative de Rham Sequence''): \begin{exmp}Let $Y=\mathbb{R}^2-\{(0,0)\}\,,$ $X=\mathbb{R}\,.$ Let $\pi$ be the projection onto the first factor; this is a surjective submersion (but not a fiber bundle). The sheaf we will put on $X$ is $\mathcal{O}_X\,,$ so that the sheaf we get on $Y$ is $\pi^{-1}\mathcal{O}_X\,.$ Consider the following foliated form, which defines a class in $H^1(Y,\pi^{-1}\mathcal{O}_X):$ \begin{equation} \frac{x\,dy}{x^2+y^2}\,. \end{equation} Notice that, when restricted to each fiber, this form is trivial. Away from $\{(x,y):x=0\}\,,$ all the primitives are of the form $g(x,y)=\arctan{(y/x)}+f(x)\,.$ However, due to the limiting behavior of $\arctan{(y/x)}$ as $x\to 0\,,$ there is no function $f(x)$ which will make $g$ continuous on all of $Y\,.$ Therefore, this one-form is not trivial over any neighborhood of $x=0\,;$ this displays the complexity of the derived functors of $\pi\,.$ \begin{remark}Of course, this example doesn't satisfy the connectivity assumptions, since the fibers are not all connected. However, such a phenomenon could not happen for a fiber bundle: locally a fiber bundle is of the form $U\times F\,,$ and a primitive for any foliated one-form which is trivial along each fiber can be found through integration.
\end{remark} \end{exmp} \section{Definition of the van Est map} Here we will state and prove the main theorem of this paper (there are some applications stated in~\Cref{app}). Let us first remark that some of what we do depends on the Morita invariance of LA-groupoid cohomology, which has been shown in \cite{Waldron} (see section 5.4.5) for coefficients in a representation. We are using more general coefficients, but we expect the Morita invariance to hold. That said, we don't need Morita invariance to state a theorem which is essentially equivalent. \vspace{3mm}\\Now the first thing we must do is define the van Est map. Let us first explain what it is in the case of a surjective submersion $Y\to X$ where the module is just $X\times S^1\,:$ \vspace{3mm}\\Consider a surjective submersion $\pi:Y\to X$ and a module for $X$ given by $X\times S^1$ (this is automatically a module since $X$ has only identity morphisms). We will denote the sheaf of functions of $X\times S^1$ by $\mathcal{O}^*\,.$ The map $\pi$ induces a morphism \begin{equation}\label{1} \pi^{-1}:H^*(X,\mathcal{O}^*)\to H^*(Y,\pi^{-1}\mathcal{O}^*)\,. \end{equation} Now we can take a leafwise resolution of $\pi^{-1}\mathcal{O}^*$ by leafwise differential forms, given by \begin{equation}\label{2} \pi^{-1}\mathcal{O}^*\to\mathcal{O}^*\to \Omega_{\pi}^1(Y)\to \Omega_{\pi}^2(Y)\to\cdots\,. \end{equation} From this, we get a map \begin{equation} H^*(Y,\pi^{-1}\mathcal{O}^*)\to H^*_{\pi}(Y,\mathcal{O}^*\to \Omega_{\pi}^1(Y)\to\cdots)\,, \end{equation} which is an isomorphism. Composing this isomorphism with \Cref{1}, we get a map \begin{equation} H^*(X,\mathcal{O}^*)\to H^*_{\pi}(Y,\mathcal{O}^*\to \Omega_{\pi}^1(Y)\to\cdots)\,; \end{equation} this is the van Est map in this special case. Now if the fibers of $\pi$ are $n$-connected, the van Est map is an isomorphism up to degree $n\,,$ injective in degree $n+1\,,$ and its image in degree $n+1$ consists of cohomology classes which pull back to zero along each fiber.
This follows from \Cref{bisimpliciall}. \vspace{3mm}\\The van Est map for Lie groupoids will be defined for a \hyperref[surj sub]{surjective submersion of Lie groupoids}, and will proceed directly analogously. For stacks, the van Est map will be defined for a \hyperref[surj stacks]{surjective submersion of stacks}, and it will proceed by presenting the surjective submersion of stacks by a surjective submersion of Lie groupoids, and then using the van Est map there. \vspace{3mm}\\In the following, $D\to H$ refers to a foliation of $H$ (ie. $D$ is a subbundle of $TH\rightrightarrows TH^0$), and $H^*(D\to H,f^*M)$ means the LA-groupoid cohomology of $D\to H$ with coefficients in $f^*M\,.$ Some good examples to keep in mind: when $M=G^0\times \mathbb{R}$ we are taking cohomology with respect to the sheaf of $\mathbb{R}$-valued functions on the nerve, and when $M=G^0\times S^1$ we are taking cohomology with respect to the sheaf of $S^1$-valued functions on the nerve (for more on LA-groupoid cohomology, see \cite{mehta}). \begin{definition} Let $f:H\to G$ be a simple foliation of Lie groupoids and let $M$ be a $G$-module. The van Est map is a map \begin{equation} VE:H^*(G,M)\to H^*(D\to H,f^*M)\,, \end{equation} given by pulling back cohomology classes \begin{equation} H^*(\mathbf{B}^\bullet G,\mathcal{O}(M))\xrightarrow[]{f^{-1}}H^*(\mathbf{B}^\bullet H, f^{-1}\mathcal{O}(M))\,, \end{equation} and then taking a resolution via leafwise differential forms. \end{definition} \begin{remark} By Morita invariance of LA-groupoid cohomology, given a (nice enough) map $f:H\to G\,,$ we have an isomorphism between the cohomologies $H^*(P\to G\times_G H, \tilde{f}^*M)$ and $H^*(f^!\mathfrak{g}\to H, f^*M)\,.$ Therefore, we get a map $H^*(G,M)\to H^*(f^!\mathfrak{g}\to H, f^*M)$ as well. This gives the usual van Est map in the case that the mapping is $G^0\xhookrightarrow{} G\,.$ One can also show this explicitly, similarly to what was done in Part 1.
\end{remark} We will now define the van Est map on stacks. In the following, we define a family of abelian groups $\mathcal{M}$ over a stack $\mathcal{G}$ to be a family of abelian groups associated to a $G$-module $M\,,$ where $G$ is a presentation of $\mathcal{G}\,$: \begin{definition} Let $\mathcal{F}:\mathcal{H}\to\mathcal{G}$ be a \hyperref[surj stacks]{surjective submersion} of differentiable (holomorphic) stacks, and let $\mathcal{M}$ be a family of abelian groups over $\mathcal{G}\,.$ The van Est map, \begin{equation} \mathcal{VE}:H^*(\mathcal{G},\mathcal{M})\to H^*(\mathcal{D}\to\mathcal{H},\mathcal{F}^*\mathcal{M}) \end{equation} is defined by choosing a Lie groupoid presentation of both the simple foliation associated to $\mathcal{F}:\mathcal{H}\to\mathcal{G}$ and $\mathcal{M}\,,$ and applying the van Est map associated to Lie groupoids. This is independent of any choices made, since any two choices of presentations of the simple foliation will be Morita equivalent. \end{definition} \begin{remark} In a similar way to what is done in Part 1, one could define a ``truncated" version of this more general van Est map (so that the sheaf on $G^0$ is the sheaf which assigns to an open set the one point group), and prove an isomorphism theorem analogous to the one we are about to prove. \end{remark} \subsection{The van Est Isomorphism Theorem} Before continuing, we need to make a remark about what it means for a Lie groupoid to be $n$-connected. There are two equivalent ways of doing this: one is by using a definition of homotopy groups for Lie groupoids (as in \cite{Xutu}, \cite{noohi}), in which case $n$-connected means that the first $n$ homotopy groups are trivial; the other is to require that the classifying space is $n$-connected.
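\begin{remark} To get a feel for this notion, recall the standard fact that for a Lie group $G\,,$ viewed as a Lie groupoid $G\rightrightarrows *\,,$ we have \begin{equation*} \pi_k(BG)\cong\pi_{k-1}(G)\,,\qquad k\geq 1\,. \end{equation*} So, for example, $B(\mathbb{Z}/2)\cong\mathbb{R}P^{\infty}$ is $0$-connected but not $1$-connected, while $BS^1\cong\mathbb{C}P^{\infty}$ is $1$-connected but not $2$-connected (its $\pi_2$ is $\mathbb{Z}$). Both of these classifying spaces will reappear in the example computations later in this chapter. \end{remark}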
\vspace{3mm}\\Before stating and proving the isomorphism theorems, we will state two lemmas: \begin{lemma}\label{bar} Consider the double Lie groupoid: \begin{equation}\label{dd} \begin{tikzcd} H\ltimes P\rtimes G \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & P\rtimes G \arrow[d, shift right] \arrow[d, shift left] \\ H\ltimes P \arrow[r, shift right] \arrow[r, shift left] & P \end{tikzcd} \end{equation} Associated to \ref{dd} is a bisimplicial manifold. Applying the bar functor gives us the total simplicial manifold; the total simplicial manifold is given by a fiber product of the antidiagonal components (see \cite{mehtatang}, \cite{cegarra}). In this special case, the total simplicial manifold is just the simplicial manifold associated to the diagonal, which is the nerve of $G\times_G H\,.$ Since the cohomology (with respect to some sheaf) of the total simplicial manifold is the cohomology of \ref{dd}, this implies that the cohomology of \ref{dd} is the cohomology of $G\times_G H\,.$ \end{lemma} \begin{proof} That the total simplicial manifold is the nerve of the diagonal follows from a computation of the total simplicial manifold. There is a canonical projection map from the components of the total simplicial manifold to the components of the bisimplicial manifold, and there is a canonical inclusion map from the components of the bisimplicial manifold to the components of the total simplicial manifold. These are chain maps and are mutual inverses at the level of cocycles. \end{proof} \begin{lemma}\label{bisimpliciall} Suppose $Y^{\bullet,\bullet}$ is a bisimplicial topological space, and $X^\bullet$ is a simplicial topological space (considered as a bisimplicial topological space $X^{\bullet,\bullet}$ that is constant in the first $\bullet\,,$ ie. $X^{i,\,j}=X^{0,\,j}$ for all $i$).
Suppose $f:Y^{\bullet,\bullet}\to X^{\bullet}$ is such that the restriction $f:Y^{i,\,j}\to X^{j}$ is a locally fibered map for all $i\,,j\,,$ and that the fibers $F^{\bullet,j}$ of the map $Y^{\bullet,j}\to X^j$ are $n$-connected as simplicial spaces. Let $\mathcal{A}$ be a sheaf on $X^{\bullet}\,.$ Then the map $H^*(X^\bullet,\mathcal{A})\to H^*(Y^{\bullet,\bullet},f^{-1}\mathcal{A})$ is an isomorphism up to degree $n\,,$ and is injective in degree $n+1\,.$ The image in degree $n+1$ consists of cohomology classes which vanish along each fiber. \end{lemma} \begin{proof} This follows from a generalization of \Cref{spectral theorem} (equivalently, this follows from a generalization of criterion 1.9.4 in \cite{Bernstein} to a mapping $X^{\bullet}\to Y$ and the Leray spectral sequence). \end{proof} \begin{proposition}\label{ve groupoid} Let $f:H\to G$ be a \hyperref[surj sub]{surjective submersion} of Lie groupoids such that the fibers of $f$ are all $n$-connected. Then the van Est map, $VE\,,$ is an isomorphism up to and including degree $n\,,$ it is injective in degree $n+1\,,$ and its image in degree $n+1$ consists of those cohomology classes which are trivial along the fibers. \end{proposition} \begin{proof} First we replace $H$ with its canonical fibration replacement, and then we split it using the canonical splitting. We get a map \begin{equation}\label{other la} \begin{tikzcd} H\ltimes P\rtimes G \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & P\rtimes G \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right=7] & G \arrow[d, shift right] \arrow[d, shift left] \\ H\ltimes P \arrow[r, shift right] \arrow[r, shift left] & P & G^0 \end{tikzcd} \end{equation} and using this map we can take the inverse image of cohomology of $G\,.$ That this map has the desired isomorphism properties follows from \Cref{bisimpliciall}\,, and that the same is true for the mapping $G\times_G H\to G$ follows from \Cref{bar}\,.
Finally, that the same is true for the mapping $H\to G$ follows from factoring this map as $H\to G\times_G H\to G$ and using Morita invariance of LA-groupoid cohomology. \end{proof} \begin{theorem}\label{ve stacks} Let $\mathcal{F}:\mathcal{H}\to\mathcal{G}$ be a \hyperref[surj stacks]{surjective submersion} of differentiable (holomorphic) stacks, and let $\mathcal{M}$ be a family of abelian groups over $\mathcal{G}\,.$ Suppose that the fibers of $\mathcal{F}$ are all $n$-connected. Then the van Est map, $\mathcal{V}\mathcal{E}\,,$ is an isomorphism up to and including degree $n\,,$ is injective in degree $n+1\,,$ and its image in degree $n+1$ consists of cohomology classes which vanish along the leaves of the associated foliation. \end{theorem} \begin{proof} After choosing a presentation of $\mathcal{F}$ and $\mathcal{M}\,,$ this follows from \Cref{ve groupoid}. \end{proof} \begin{remark} In light of \Cref{ve groupoid}, one way of showing that a class $\alpha\in H^n(\mathfrak{g},M)$ integrates to $H^n(G,M)$ is to show that there is a wide subgroupoid $\iota:H\xhookrightarrow{}G\,,$ with $n$-connected fibers, such that $\alpha$ integrates to $\iota^!\mathfrak{g}\,.$ Given this, a natural question is the following: if we pull back $\alpha$ to $\mathfrak{h}$ and this class integrates to $H\,,$ under what circumstances will it integrate to $\iota^!\mathfrak{g}\,?$ In degree one the answer seems to be always. \end{remark} \section{Computations Using van Est} In this section we will give some example computations. 
We have several different models that we can use to compute the foliated cohomology of a surjective submersion of stacks $f:H\to G\,.$ One is given by the LA-groupoid associated to the fibration replacement, a second one is given by the LA-groupoid associated to the canonical cofibration, and a third is obtained from the proof of \Cref{ve groupoid}: we can compute the LA-groupoid cohomology as the ``$G$-invariant" cohomology of the foliation \begin{equation}\label{inv} \begin{tikzcd} H\rtimes P \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right=7] & G^0 \arrow[d, shift right] \arrow[d, shift left] \\ P & G^0 \end{tikzcd} \end{equation} This map is the map in the bottom row of \ref{other la}. Note that since $P\rtimes G$ is Morita equivalent to a manifold, $H^0\,,$ the leafwise cohomology of \ref{other la} is the $G$-invariant cohomology of \ref{inv} (compare this with the computation of the van Est map in Part 1). Furthermore, the $G$-invariant forms on \ref{inv} can be identified with forms on $f^!\mathfrak{g}\,.$ Therefore, \ref{other la} gives us a method of computing the ``van Est map" from $H^*(G,M)$ to $H^*(f^!\mathfrak{g},M)\,.$ We put van Est in quotation marks here because $VE$ really maps into the foliated cohomology, but using Morita invariance we may identify it with a map into the cohomology of $f^!\mathfrak{g}\,.$ We will do this from now on. \vspace{3mm}\\While the simplest model to use when computing the van Est map is the LA-groupoid associated to the fibration replacement, the simplest model to use when computing cohomology is the LA-groupoid associated to the canonical cofibration. Therefore, when computing cohomology, we will use this model. \vspace{3mm}\\Now for the strategy.
Suppose one wants to compute the Lie groupoid cohomology of some groupoid $G$ up to degree $n\,.$ The idea is to choose some wide subgroupoid $H$ whose cohomology is easier to compute (though it doesn't necessarily have to be a subgroupoid) and for which the fibers of the map $H\to G$ are at least $(n-1)$-connected (but preferably $n$-connected). If the coefficients take values in a representation, then a good choice is a proper subgroupoid, since the cohomology a priori vanishes in positive degrees. From a cohomological perspective, the generalization of maximal compact subgroups from Lie theory is given by wide, proper subgroupoids for which the inclusion map has fibers which are connected and have vanishing homotopy groups in positive degree. These do not necessarily exist. \subsection{Examples} Now in all of the following examples, the cocycle described on the Lie groupoid is just specified by a function on one of the levels of the nerve, so we will not be needing a resolution. In this case then, for the double groupoid \ref{dd} there are three differentials: a vertical differential $\delta^*_v\,,$ a horizontal differential $\delta^*_h\,,$ and the foliated de Rham differential $d\,.$ These differentials all commute, and the associated bisimplicial complex is bigraded with respect to the vertical and horizontal directions. The differential on the total complex will be \begin{equation}\label{differential} \delta_v^*+(-1)^{\text{deg v}}\delta_h^*+(-1)^{\text{deg v+deg h}}\,d\,, \end{equation} where deg v, deg h mean the degrees in the vertical and horizontal directions, respectively. Note that $-1$ here denotes inversion in the module we are taking cohomology with respect to, eg.
if $f$ is an $S^1$-valued function, then $(-f)(x)$ means $1/f(x)\,;$ if $f$ is $\mathbb{R}$-valued, $(-f)(x)$ means $-f(x)\,.$ \vspace{3mm}\\Similarly, for the LA-groupoids in the following examples (corresponding to $f^!\mathfrak{g}\,,$ where $f:H\to G$ is a homomorphism), the total differential will be \begin{equation} d+(-1)^{\bullet}\delta_h^*\,, \end{equation} where $d$ is the Chevalley-Eilenberg differential and $\bullet$ is the form degree. \vspace{3mm}\\In general, there will be an additional differential, due to the fact that if we use sheaves that are not acyclic on manifolds then we must take a resolution in order to describe general cocycles. Therefore, in the context of the preceding paragraphs there would be four and three differentials comprising the total differential, respectively. \begin{exmp} Consider the groupoid $\mathbb{R}^*\,.$ We wish to compute the cohomology $H^*(\mathbb{R}^*,S^1)\,.$ The van Est map from Part 1 isn't useful here because $\mathbb{R}^*$ isn't connected. Furthermore, since $S^1$ isn't a vector space, van Est's original result doesn't apply. \vspace{3mm}\\ We will make use of the maximal compact subgroup $\mathbb{Z}/2\xhookrightarrow{}\mathbb{R}^*\,.$ Since the fibers of this map are contractible, \Cref{ve stacks} tells us that we can compute cohomology as the foliated cohomology of this map.
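\begin{remark} Concretely, the fibers of $\mathbb{Z}/2\xhookrightarrow{}\mathbb{R}^*$ can be identified with the quotient, and contractibility then follows from the decomposition \begin{equation*} \mathbb{R}^*\cong\mathbb{Z}/2\times\mathbb{R}_{>0}\,,\qquad x\mapsto(\operatorname{sgn}x,|x|)\,, \end{equation*} so that $\mathbb{R}^*/(\mathbb{Z}/2)\cong\mathbb{R}_{>0}\,,$ which is contractible. \end{remark}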
The model we will use to do this is the one associated to the cofibration replacement; the LA-groupoid is the following: \begin{equation} \begin{tikzcd} \mathbb{Z}/2\times\mathbb{R} \arrow[r, shift right] \arrow[r, shift left] \arrow[d] & \mathbb{R} \arrow[d] \\ \mathbb{Z}/2 \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} The Lie algebroid differentials here are trivial since $\mathbb{R}$ is abelian, and the groupoid in the top row is just the trivial bundle of $\mathbb{Z}/2$ groups over $\mathbb{R}\,.$ Now the degree $0$ cohomology of this LA-groupoid is just $S^1\,.$ \vspace{3mm}\\A cohomology class in degree $1$ is given by a closed Lie algebra $1$-form on $\mathbb{R}$ (and again, any form here is automatically closed since $\mathbb{R}$ is abelian) and a homomorphism $\mathbb{Z}/2\to S^1\,;$ the compatibility condition in this example is trivial. There is only one nontrivial homomorphism $\mathbb{Z}/2\to S^1\,;$ therefore, the cohomology in degree 1 is just $\mathbb{R}\times \mathbb{Z}/2\,.$ \vspace{3mm}\\Note that the Lie algebra $\mathbb{R}$ has no forms in degree higher than $1\,;$ therefore, cohomology classes in degree $n>1$ are given by a function $f_1:(\mathbb{Z}/2)^{n-1}\times\mathbb{R}\to\mathbb{R}\,,$ linear in $\mathbb{R}$ (representing a $1$-form), and a group cocycle $f_2:(\mathbb{Z}/2)^n\to S^1$ such that the pair $f_1,f_2$ satisfies the compatibility condition. The only nontrivial compatibility condition in this example is that $\delta^*f_1=0\,,$ and this can be true if and only if $n$ is odd, in which case the cocycle is necessarily trivial since the $\mathbb{Z}/2$ action on $\mathbb{R}$ is trivial.
Therefore, the cohomology in degree $n>1$ is just $H^{n}(\mathbb{Z}/2,S^1)\,.$ \vspace{3mm}\\By the exponential sequence $0\to \mathbb{Z}\to\mathbb{R}\to S^1\to 0$ and the fact that the cohomology valued in a vector space of a compact group is trivial, we find that the cohomology in degree $n>1$ is just $H^{n+1}(\mathbb{Z}/2,\mathbb{Z})\,.$ One way of computing this is by using the fact that $B(\mathbb{Z}/2)\cong \mathbb{R}P^{\infty}\,;$ using this, we see that $H^{n+1}(\mathbb{Z}/2,\mathbb{Z})=\mathbb{Z}/2$ if $n>1$ is odd, and is $0$ otherwise. \vspace{3mm}\\Summarizing, we get that \begin{equation} H^n(\mathbb{R}^*,S^1)= \begin{cases} S^1 & n=0 \\ \mathbb{R}\times\mathbb{Z}/2 & n=1 \\ \mathbb{Z}/2 & n>1\text{ is odd} \\ 0 & n>1\text{ is even}\\ \end{cases} \end{equation} We can explicitly write down generators. In degree $1\,,$ the cohomology classes are generated by the following cocycles: $f_1(x)=e^{ia\log{|x|}}\,,$ where $a\in\mathbb{R}\,,$ and $f_2(x)= -1$ if $x<0$ and is $1$ otherwise. In degree $n>1$ where $n$ is odd, the generating cocycle is given by $f:(\mathbb{R}^{*})^n\to S^1\,,\;f(x_1,\ldots,x_n)=-1$ if $x_1,\ldots,x_n<0\,,$ and is equal to $1$ otherwise. \end{exmp} In the next example we will compute the van Est map in degree 1, and we will then compute the cohomology in all degrees. \begin{exmp} Consider the inclusion of Lie groups $S^1\xhookrightarrow{}\mathbb{C}^*\,.$ Since the fibers of the map $S^1\xhookrightarrow{}\mathbb{C}^*$ are contractible, \Cref{ve stacks} tells us that we can compute the cohomology at the level of LA-groupoids.
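\begin{remark} The contractibility of the fibers, as well as the classification of homomorphisms $\mathbb{C}^*\to\mathbb{C}^*$ used below, can both be seen through the polar decomposition $\mathbb{C}^*\cong S^1\times\mathbb{R}_{>0}\,,\;z\mapsto(z/|z|,|z|)\,.$ The fibers can be identified with $\mathbb{C}^*/S^1\cong\mathbb{R}_{>0}\,,$ which is contractible, and \begin{equation*} \operatorname{Hom}(\mathbb{C}^*,\mathbb{C}^*)\cong\operatorname{Hom}(S^1,S^1)\times\operatorname{Hom}(\mathbb{R}_{>0},\mathbb{C}^*)\cong\mathbb{Z}\times\mathbb{C}\,, \end{equation*} since a homomorphism out of the compact group $S^1$ must land in $S^1\,,$ and a smooth homomorphism $\mathbb{R}_{>0}\to\mathbb{C}^*$ has the form $r\mapsto r^{\gamma}=e^{\gamma\log r}$ for some $\gamma\in\mathbb{C}\,.$ \end{remark}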
Let's compute the van Est map in degree one, with coefficients in $\mathbb{C}^*\,.$ Degree one cohomology classes with coefficients in $\mathbb{C}^*$ are homomorphisms $\mathbb{C}^*\to\mathbb{C}^*\,,$ which are generated by $f(z)=z\,,g(z)=|z|^{\gamma}\,,\gamma\in\mathbb{C}\,,$ so $H^1(\mathbb{C}^*,\mathbb{C}^*)=\mathbb{Z}\times\mathbb{C}\,.$ \begin{itemize} \item On the canonical fibration replacement, the van Est map is just given by the pullback with respect to the projection onto the third factor: \begin{equation} \begin{tikzcd} S^1\ltimes\mathbb{C}^*\rtimes\mathbb{C}^* \arrow[d, shift right] \arrow[d, shift left] \arrow[rr, shift right=7] & & \mathbb{C}^* \arrow[d, shift right] \arrow[d, shift left] \\ \mathbb{C}^* & & * \end{tikzcd} \end{equation} Therefore, \begin{equation}\label{ve ex} \mathcal{VE}(f)(e^{i\theta},\lambda,z)=z\,,\mathcal{VE}(g)(e^{i\theta},\lambda,z)=|z|^{\gamma}\,. \end{equation} \item Now the computation of the van Est map, when mapping into the LA-groupoid associated to the canonical cofibration, is more involved. The double groupoid morphism we get is \begin{equation}\label{s1} \begin{tikzcd} S^1\ltimes \mathbb{C}^*\rtimes \mathbb{C}^* \arrow[r, shift left] \arrow[r, shift right] \arrow[d, shift right] \arrow[d, shift left] & \mathbb{C}^*\rtimes \mathbb{C}^* \arrow[d, shift right] \arrow[d, shift left] \\ S^1\ltimes \mathbb{C}^* \arrow[r, shift right] \arrow[r, shift left] & \mathbb{C}^* \end{tikzcd} \longrightarrow \begin{tikzcd} \mathbb{C}^* \arrow[r, shift left] \arrow[r, shift right] \arrow[d, shift right] \arrow[d, shift left] & \mathbb{C}^* \arrow[d, shift right] \arrow[d, shift left] \\ \bullet \arrow[r, shift right] \arrow[r, shift left] & \bullet \end{tikzcd} \end{equation} Let's first apply the van Est map to $f\,,$ which lives on the top right corner of the diagram on the right (see \ref{differential} for the differential).
First we will pull back $f$ to $\mathbb{C}^*\rtimes \mathbb{C}^*$ via the projection $p$ onto the second factor. The map we get is $p^*f(\lambda\,,z)=z\,.$ \vspace{3mm}\\Now $p^*f=\delta_v^*w\,,$ where $w:\mathbb{C}^*\to\mathbb{C}^*\,,$ $w(\lambda)=\lambda\,.$ We have that \begin{equation*} -\text{dlog}\,1/w=\frac{d\lambda}{\lambda}\,,\quad(\delta^*_h w)(e^{i\theta},\lambda)=e^{i\theta}\,; \end{equation*} thus the pair \begin{equation*} \bigg(e^{i\theta}\,,\frac{d\lambda}{\lambda}\bigg) \end{equation*} forms a cocycle. Similarly, we can apply the van Est map to $g\,,$ and we'll get \begin{equation*} \bigg(1\,,\frac{\gamma}{2}\Big(\frac{d\lambda}{\lambda}+\frac{d\bar{\lambda}}{\bar{\lambda}}\Big)\bigg)\,. \end{equation*} Therefore, we arrive at the following\footnote{Note that this isn't quite where the van Est map is supposed to map to. We will discuss this further after completing this computation.}: \begin{equation}\label{ve f} \mathcal{VE}(f)=\bigg(e^{i\theta}\,,\frac{d\lambda}{\lambda}\bigg)\,, \mathcal{VE}(g)=\bigg(1\,,\frac{\gamma}{2}\Big(\frac{d\lambda}{\lambda}+\frac{d\bar{\lambda}}{\bar{\lambda}}\Big)\bigg)\,. \end{equation} \vspace{3mm}\\Now the fibers of the map $S^1\xhookrightarrow{}\mathbb{C}^*$ are contractible, which means $\mathcal{VE}$ is an isomorphism in all degrees. We will now show that the cocycles we've chosen do indeed generate the degree one cohomology by computing them at the level of the LA-groupoid.
\vspace{3mm}\\One-forms on $\mathbb{C}^*$ which pull back via $\delta^*_v$ to give $0$ are the invariant one-forms, which are of the form \begin{equation*} \alpha\frac{d\lambda}{\lambda}+ \beta\frac{d\bar{\lambda}}{\bar{\lambda}} \end{equation*} for $\alpha\,,\beta\in\mathbb{C}\,.$ Now when we pull back such a form via $\delta_h^*\,,$ we get $(\alpha-\beta) \,d\theta\,,$ which is only dlog exact if $\alpha-\beta\in\mathbb{Z}\,;$ in this case, the corresponding cocycle is given by \begin{equation} \Big(e^{i(\alpha-\beta)\theta}, \alpha\frac{d\lambda}{\lambda}+ \beta\frac{d\bar{\lambda}}{\bar{\lambda}}\Big)\,. \end{equation} Now we have the following equality: \begin{equation} \alpha\frac{d\lambda}{\lambda}=(\alpha-\beta)\frac{d\lambda}{\lambda}+\beta\frac{d\lambda}{\lambda}\,. \end{equation} Using this equality, we see that the degree one cohomology is generated by the following cocycles: \begin{equation} \Big(e^{i\alpha\theta}, \alpha\frac{d\lambda}{\lambda}\Big)\,,\;\; \bigg(1\,,\beta\Big(\frac{d\lambda}{\lambda}+\frac{d\bar{\lambda}}{\bar{\lambda}}\Big)\bigg)\,,\;\alpha\in\mathbb{Z}\,,\beta\in\mathbb{C}\,. \end{equation} Therefore, we see that $H^1(\mathbb{C}^*,\mathbb{C}^*)=\mathbb{C}\times\mathbb{Z}\,,$ agreeing with what we said earlier. \item Actually, looking at \ref{ve f}, we see that we haven't quite described the cohomology classes on an LA-groupoid. We will now show what $\mathcal{VE}(f)\,,\mathcal{VE}(g)$ look like on the two LA-groupoids we've been discussing. We will start with the LA-groupoid associated to the canonical cofibration, followed by the LA-groupoid associated to the foliation.
The LA-groupoid associated to the canonical cofibration is given by \begin{equation} \begin{tikzcd} S^1\times \mathbb{R}\ltimes\mathbb{C} \arrow[r, shift right] \arrow[r, shift left] \arrow[d] & \mathbb{C} \arrow[d] \\ S^1 \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} Here, $S^1$ is acting trivially. A degree 1 cocycle is given by a one-form for the Lie algebra $\mathbb{C}\to *$ together with a homomorphism $S^1\to \mathbb{C}^*$ satisfying the compatibility condition. In this context, \ref{ve f} takes the form \begin{equation} \mathcal{VE}(f)=\big(e^{i\theta}\,,d\lambda\big)\,, \mathcal{VE}(g)=\big(1\,,\frac{\gamma}{2}\big(d\lambda+d\bar{\lambda}\big)\big)\,. \end{equation} To arrive at this we just had to evaluate the one-forms at the identity in $\mathbb{C}^*\,.$ \item Finally, the van Est map is (again) really a map into the foliated cohomology of $[*/S^1]\,,$ which we can describe using the canonical fibration replacement: \begin{equation} \begin{tikzcd} S^1\ltimes\mathbb{C}^*\rtimes\mathbb{C}^* \arrow[d, shift right] \arrow[d, shift left] \arrow[rr, shift right=7] & & \mathbb{C}^* \arrow[d, shift right] \arrow[d, shift left] \\ \mathbb{C}^* & & * \end{tikzcd} \end{equation} We should describe these cocycles here too (using the isomorphism in \Cref{equiv la group}). The result is \begin{equation} \mathcal{VE}(f)=\bigg(e^{i\theta}\,,\frac{d\lambda}{\lambda}\bigg)\,, \mathcal{VE}(g)=\bigg(1\,,\frac{\gamma}{2}\Big(\frac{d\lambda}{\lambda}+\frac{d\bar{\lambda}}{\bar{\lambda}}\Big)\bigg)\,, \end{equation} which is the same as \ref{ve f}, but here $e^{i\theta}\,,\lambda\,,\bar{\lambda}$ refer to the functions $S^1\times\mathbb{C}^*\times\mathbb{C}^*\to\mathbb{C}^*\,,\;(e^{i\theta},\lambda,z)\mapsto e^{i\theta}\,,\lambda\,,\bar{\lambda}\,,$ respectively, and $d\lambda\,,d\bar{\lambda}$ are foliated one-forms along the $\mathbb{C}^*$ in the base. These cocycles are equivalent to the ones in \ref{ve ex}.
\item We will now finish the computation of the cohomology $H^*(\mathbb{C}^*,\mathbb{C}^*)\,.$ First note that since $S^1\times\mathbb{R}\cong \mathbb{C}^*\,,$ from \Cref{semidirect} we get that \ref{s1} is Morita equivalent to the following LA-groupoid, which provides a simpler model for computing the cohomology: \begin{equation} \begin{tikzcd} S^1\times\mathbb{R} \arrow[r, shift right] \arrow[r, shift left] \arrow[d] & \mathbb{R} \arrow[d] \\ S^1 \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} Therefore, the cohomology of $\mathbb{C}^*$ is quite literally the cohomology of the normal bundle of $S^1\xhookrightarrow{}\mathbb{C}^*\,.$ Here, the Lie algebroid differentials are all $0$ and the groupoid in the top row is the action groupoid associated to the trivial action of $S^1$ on $\mathbb{R}\,.$ Furthermore, since the cohomology of a proper groupoid vanishes in positive degrees, we can use the exponential sequence together with the fact that $BS^1=\mathbb{C}P^\infty$ to help us compute the cohomology. Putting this together, we can derive the cohomology groups in all degrees. They are given by: \begin{equation} H^n(\mathbb{C}^*,\mathbb{C}^*)= \begin{cases} \mathbb{C}^* & n=0 \\ \mathbb{C}\times\mathbb{Z} & n=1 \\ \mathbb{Z} & n>1\text{ is odd} \\ 0 & n>1\text{ is even}\\ \end{cases} \end{equation} \end{itemize} \end{exmp} \begin{exmp} \vspace{5mm}Consider the Lie group $\mathbb{C}^2\rightrightarrows *\,,$ with coordinates $(w,z)\,,$ and consider the subspace $\mathbb{C}\rightrightarrows *$ given by $w=z\,.$ Consider the trivial $\mathbb{C}$-module. We have a degree two cocycle given by $f(w_1,z_1,w_2,z_2)=w_1z_2\,,$ whose corresponding extension is the complex Heisenberg group. Let's compute the van Est map here.
\begin{itemize} \item First, the true van Est map (as defined in this thesis) is given by pullback with respect to the projection onto the third factor: \begin{equation} \begin{tikzcd} (\mathbb{C}\times\mathbb{C})\ltimes\mathbb{C}^2\rtimes(\mathbb{C}^2\times\mathbb{C}^2) \arrow[d, shift right=2] \arrow[d] \arrow[d, shift left=2] & & \mathbb{C}^2\times\mathbb{C}^2 \arrow[d, shift right=2] \arrow[d] \arrow[d, shift left=2] \\ \mathbb{C}\ltimes\mathbb{C}^2\rtimes\mathbb{C}^2 \arrow[d, shift right] \arrow[d, shift left] \arrow[rr, shift left] & & \mathbb{C}^2 \arrow[d, shift right] \arrow[d, shift left] \\ \mathbb{C}^2 & & * \end{tikzcd} \end{equation} Therefore, we have that \begin{equation}\label{ve ex2} \mathcal{VE}(f)(a_1,a_2,x,y,w_1,z_1,w_2,z_2)=w_1z_2\,. \end{equation} \item Now let's compute the van Est map with respect to the LA-groupoid associated to the canonical cofibration (see \ref{differential} for the differential). We have the following diagram: \begin{equation} \begin{tikzcd} \mathbb{C}\ltimes\mathbb{C}^2\rtimes\mathbb{C}^2 \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & \mathbb{C}^2\rtimes\mathbb{C}^2 \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right=7] & \mathbb{C}^2 \arrow[d, shift right] \arrow[d, shift left] \\ \mathbb{C}\ltimes\mathbb{C}^2 \arrow[r, shift right] \arrow[r, shift left] & \mathbb{C}^2 & * \end{tikzcd} \end{equation} Now since the cocycle we have lives in degree 2, we must further compute the nerves: \begin{equation} \begin{tikzcd} & & \mathbb{C}^2\rtimes(\mathbb{C}^2\times\mathbb{C}^2) \arrow[d, shift right=2] \arrow[d, shift left=2] \arrow[d] & \mathbb{C}^2\times\mathbb{C}^2 \arrow[d, shift right=2] \arrow[d] \arrow[d, shift left=2] \\ & \mathbb{C}\ltimes\mathbb{C}^2\rtimes\mathbb{C}^2 \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & \mathbb{C}^2\rtimes\mathbb{C}^2 \arrow[d, shift right] \arrow[d, shift left] \arrow[r, "p", shift
right=7] & \mathbb{C}^2 \arrow[d, shift right] \arrow[d, shift left] \\ (\mathbb{C}\times\mathbb{C})\ltimes\mathbb{C}^2 \arrow[r, shift right=2] \arrow[r, shift left=2] \arrow[r] & \mathbb{C}\ltimes\mathbb{C}^2 \arrow[r, shift right] \arrow[r, shift left] & \mathbb{C}^2 & * \end{tikzcd} \end{equation} Now we have that \begin{equation} p^*f(x,y,w_1,z_1,w_2,z_2)=f(w_1,z_1,w_2,z_2)=w_1z_2=\delta_v^*f_1(x,y,w_1,z_1,w_2,z_2)\,, \end{equation} where $f_1(x,y,w,z)=xz\,.$ For the next step, we apply $(-\delta_h^*-d)$ to $-f_1$ (recall that $d$ is the fiberwise de Rham differential). We get \begin{equation} \delta_h^*f_1(a,x,y,w,z)=az\,,\quad df_1=z\,dx\,. \end{equation} Now $df_1=\delta_v^*(y\,dx)$ and $\delta_h^*f_1(a,x,y,w,z)=\delta_v^*f_2\,,$ where $f_2(a,x,y)=ay\,.$ Next we have that \begin{equation} (\delta_h^*+d)(-y\,dx)=-yda-adx-ada+dx\wedge dy\,, \end{equation} and $\delta_h^*(-f_2)=-a_1a_2\,,$ $-d(-f_2)=a\,dy+y\,da\,.$ So in the end, the cocycle we get is \begin{equation}\label{cocycle h} -a_1a_2+a(dy-dx-da)+dx\wedge dy\,.
\end{equation} \item Now, the LA-groupoid corresponding to the canonical cofibration is \begin{equation} \begin{tikzcd} \mathbb{C}\times\mathbb{C}\ltimes\mathbb{C}^2 \arrow[d] \arrow[r, shift right] \arrow[r, shift left] & \mathbb{C}^2 \arrow[d] \\ \mathbb{C} \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} and further computing the nerve, we have \begin{equation}\label{cof h} \begin{tikzcd} (\mathbb{C}\times\mathbb{C})\times(\mathbb{C}\times\mathbb{C})\ltimes\mathbb{C}^2 \arrow[d] \arrow[r, shift right=2] \arrow[r] \arrow[r, shift left=2] & \mathbb{C}\times\mathbb{C}\ltimes\mathbb{C}^2 \arrow[d] \arrow[r, shift right] \arrow[r, shift left] & \mathbb{C}^2 \arrow[d] \\ \mathbb{C}\times\mathbb{C} \arrow[r, shift right=2] \arrow[r] \arrow[r, shift left=2] & \mathbb{C} \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} and we can identify the cocycle \ref{cocycle h} with a cocycle on \ref{cof h}, where $dx\,,dy$ are the Lie algebra $1$-forms on $\mathbb{C}^2\to *$ and $a$ is the coordinate on the Lie group $\mathbb{C}\rightrightarrows *\,.$ \item Finally, we can describe this cocycle on the foliation of $[*/\mathbb{C}]$ (using \Cref{equiv la group}), which is given by \begin{equation} \begin{tikzcd} \mathbb{C}\ltimes\mathbb{C}^2\rtimes\mathbb{C}^2 \arrow[d, shift right] \arrow[d, shift left] \arrow[rr, shift right=7] & & \mathbb{C}^2 \arrow[d, shift right] \arrow[d, shift left] \\ \mathbb{C}^2 & & * \end{tikzcd} \end{equation} Forming the next part of the nerve of the groupoid on the left, we have \begin{equation} \begin{tikzcd} (\mathbb{C}\times\mathbb{C})\ltimes\mathbb{C}^2\rtimes(\mathbb{C}^2\times\mathbb{C}^2) \arrow[d] \arrow[d, shift left=2] \arrow[d, shift right=2] \\ \mathbb{C}\ltimes\mathbb{C}^2\rtimes\mathbb{C}^2 \arrow[d, shift right] \arrow[d, shift left] \\ \mathbb{C}^2 \end{tikzcd} \end{equation} The cocycle is given by $-a_1a_2+a(dy-dx-da)+dx\wedge dy\,,$ where $-a_1a_2$ lives on the top, $a(dy-dx-da)$
lives in the middle and $dx\wedge dy$ lives on the bottom. This is equivalent to the cocycle given by \ref{ve ex2}. \end{itemize} \end{exmp} \part{Morita Equivalences of Lie Algebroids and LA-Groupoids, Higher Generalized Morphisms, Future Directions} \section*{Brief Summary of Part 3}By the end of this part (in~\Cref{finale}) we will have combined $n$-equivalences of Lie algebroids with Morita equivalences of Lie groupoids in the category of LA-groupoids (which contains both Lie groupoids and Lie algebroids as objects).\ This category has some \textit{surprising} but natural properties, including generalized morphisms between Lie algebroids and Lie groupoids (and even Morita equivalences between them). These properties include the following (though like much of Part 3, details need to be filled in): \begin{enumerate} \item Manifolds $X\,,Y$ are Morita equivalent in this category if and only if they are diffeomorphic. \item More generally, Lie groupoids $G\,,H$ are Morita equivalent in this category if and only if they are Morita equivalent in the category of Lie groupoids. \item Two tangent bundles $TX\,,TY$ are Morita equivalent in this category if and only if $X\,,Y$ are homotopy equivalent. \item There is a canonical generalized morphism $\mathcal{I}:\mathfrak{g}\to G\,,$ which represents the integration functor. If $G\rightrightarrows G^0$ is source $n$-connected then this morphism is an $n$-equivalence, and dual to this there is a canonical $n$-equivalence $\mathcal{D}:G\to \mathfrak{g}\,,$ which represents the differentiation functor. If $n=\infty$ these generalized morphisms are Morita equivalences. 
\item With regard to the previous two points, the van Est map is given by the pullback $\mathcal{I}^*:H^{\bullet}(G,M)\to H^{\bullet}(\mathfrak{g},M)\,.$ If $G$ is source $n$-connected then integration is given by $\mathcal{D}^*:H^{\bullet}(\mathfrak{g},M)\to H^{\bullet}(G,M)\,,$ for $\bullet\le n\,.$ \item A Lie algebroid $\mathfrak{g}$ is integrable if and only if it is $1$-equivalent to some Lie groupoid $G\,.$ \item This category induces a notion of homotopy equivalence on the category of Lie groupoids: a generalized morphism $P:G\to H$ is a homotopy equivalence if the induced generalized morphism in the category of LA-groupoids, $TP:TG\to TH\,,$ is a Morita equivalence. In particular, a Morita equivalence of Lie groupoids induces a homotopy equivalence. In addition, we get a natural notion of $n$-equivalence of Lie groupoids (in the homotopy sense). \item A finite dimensional classifying space of a Lie groupoid $G$ is just a manifold $BG$ which is homotopy equivalent to $G\,.$ \item If $EG\to BG$ is finite dimensional, then the Atiyah algebroid $\text{at}(EG)$ is Morita equivalent to $G\,.$ In particular, if $G$ is discrete then $\text{at}(EG)=T(BG)\,,$ therefore $G$ is Morita equivalent to $T(BG)\,.$ \item Due to points 2, 3 and 4, we get the following result: suppose that $\mathcal{P}$ assigns to each LA-groupoid some property (eg.\ its cohomology) that is invariant under $n$-equivalence. Then: if $X$ is homotopy equivalent to $Y$ then $\mathcal{P}(TX)\cong \mathcal{P}(TY)\,;$ if $H\rightrightarrows H^0$ is Morita equivalent to $K\rightrightarrows K^0$ then $\mathcal{P}(H)\cong \mathcal{P}(K)\,;$ and if $G\rightrightarrows G^0$ is source $n$-connected then $\mathcal{P}(G)\cong \mathcal{P}(\mathfrak{g})\,.$ \end{enumerate} \chapter{Lie Algebroids as Generalized Homotopy Theory} In this chapter we want to discuss generalized morphisms of Lie algebroids, defined in \Cref{MEA} (see summary of Part 3 on the previous page).
Some of what is said will be known to experts. Throughout this thesis, we have made use of Morita equivalences of LA-groupoids; however, we don't claim that all Morita equivalences are of this form, and there should be a more general notion. For Lie algebroids, the definition of Morita equivalences we've been using reduces to isomorphisms, but this is all we needed for this thesis. We want to explore a more general notion here for Lie algebroids. First we will motivate the definition; the guiding idea is that the right notion of Morita equivalences of Lie algebroids should give a generalization of homotopy theory in the smooth category. In particular, we conjecture the following result: \begin{conjecture} Suppose $A\to X\,, B\to Y$ are $n$-equivalent Lie algebroids. There is an equivalence of categories between representations up to homotopy (of length at most $n$) of $A$ and $B\,.$ \end{conjecture} \vspace{1mm}We will begin with a discussion and provide evidence for our claims, and then we will state precise definitions. Part 3 of this thesis was written after discussions with Francis Bischoff. At the end of this chapter we make a remark on (perhaps far-fetched) connections with geometric/deformation quantization. Let us emphasize that Part 3 of this thesis is largely conjectural and significant details need to be filled in for completion. \section{Some Definitions of Morita Equivalences of Lie Algebroids and \texorpdfstring{$\Pi_{\infty}(A)$}{Pi infinity}} Before getting into the abstract details that are to follow, we wish to prime the reader with a concrete result. It relates to the following conundrum: there is a functor from Lie groupoids to Lie algebroids, however Morita equivalences don't seem to get sent to anything like Morita equivalences, eg. Pair$(X)$ is Morita equivalent to a point, but $TX$ doesn't seem like it should be Morita equivalent to the zero vector space.
Our point of view is the following: it is often more instructive to think of Lie algebroids as structures which quotient to a Lie groupoid, rather than as infinitesimal approximations. This is already familiar in the case of connected abelian groups, where the Lie algebra is identified with the universal cover. This perspective would suggest that, rather than \begin{equation} \begin{tikzcd} H \arrow[rr, "\text{Morita map}"] & & G \arrow[r, Rightarrow] & \mathfrak{h} \arrow[rr, "\text{Morita map}"] & & \mathfrak{g}\,, \end{tikzcd} \end{equation} the implication should point in the opposite direction, ie. \begin{equation}\label{oppo} \begin{tikzcd} H \arrow[rr, "\text{Morita map}"] & & G & \mathfrak{h} \arrow[l, Rightarrow] \arrow[rr, "\text{Morita map}"] & & \mathfrak{g} \end{tikzcd} \end{equation} That is, Morita equivalences of Lie algebroids should integrate to Morita equivalences of Lie groupoids (we will make a clearer statement later, but $H\,,G$ here should be source simply connected). To support this, let's first give a notion of Morita equivalence due to Fernandes (see~\cite{ruif}, page 198) which we wish to generalize: \begin{definition}\label{mlf} Two Lie algebroids $A\to X\,,B\to Y$ are Morita equivalent if there is a manifold $Z$ with surjective submersions $\pi_X\,,\pi_Y$ to $X$ and $Y\,,$ with simply connected fibers, such that the pullbacks $\pi_X^!A\;,\pi_Y^!B$ are isomorphic. \end{definition} \vspace{1mm}In particular, if $\pi:Y\to X$ is a surjective submersion with simply connected fibers and $A\to X$ is a Lie algebroid, then by this definition $\pi^!A$ is Morita equivalent to $A\,.$ Now we give the following result, which supports~\ref{oppo}, and we will come back to this in~\Cref{defml} (see Lemma 1.14 in~\cite{haus} for the proof): \begin{proposition}\label{morgl} Let $A\to X$ be an integrable Lie algebroid, and let $\pi:Y\to X$ be a surjective submersion with simply connected fibers.
Then the source simply connected groupoids integrating $\pi^!A$ and $A$ are Morita equivalent. \end{proposition} This proposition supports~\ref{oppo}. Now we will move on with the discussion, and we will put the preceding result into context a little later. \vspace{3mm}\\In the author's opinion, at least in the optimal cases, every Lie algebroid should integrate to a canonical Lie $\infty$-groupoid, and the Lie algebroid and its integrating canonical Lie $\infty$-groupoid should be equivalent in a strong sense. Roughly, the canonical Lie $\infty$-groupoid of a Lie algebroid $A\to X$ should be given by the following simplicial space, denoted $\Pi_{\infty}(A)$ (see page 3 of~\cite{zhuc}, where it is denoted by $S(A)\,;$ see also section 1.4 of~\cite{andre},~\cite{getzler},~\cite{sometitle})\footnote{This part is motivational, we don't intend to be perfectly precise, however this will lead to a concrete definition.}: \begin{equation}\label{inftyg} \Pi_{\infty}^{(i)}(A)=\text{hom}(T\Delta^i,A)\,. \end{equation}In particular, they should have the same invariants, including cohomology and representations (up to homotopy). Our perspective is that a Lie algebroid $A$ is a finite dimensional model of $\Pi_{\infty}(A)\,.$ We call $\Pi_{\infty}(A)$ the fundamental $\infty$-groupoid of $A\,.$ As a special case, the tangent bundle $TX$ should be equivalent to the fundamental $\infty$-groupoid $\Pi_\infty(X)\,.$ An important property of $\Pi_{\infty}(A)$ is that its source fibers should be weakly contractible. Now the source simply connected integration of $A\,,$ denoted $\Pi_1(A)\,,$ (assuming it exists) should be a quotient (ie. truncation) of $\Pi_{\infty}(A)\,.$ It is in this sense that a Lie groupoid is a quotient of a Lie algebroid. \vspace{3mm}\\It was proven that given such a Morita equivalence (see~\Cref{mlf}), the cohomology groups of the Lie algebroids up to degree one agree, as do their categories of representations.
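To make~\ref{inftyg} slightly more concrete, let us record its low-degree pieces (a heuristic unpacking, consistent with the description of $A$-paths in~\cite{rui}): \begin{equation*} \Pi_{\infty}^{(0)}(A)=\text{hom}(T\Delta^0,A)\cong X\,,\qquad \Pi_{\infty}^{(1)}(A)=\text{hom}(T\Delta^1,A)=\{\text{$A$-paths}\}\,, \end{equation*} and when $A$ is integrable the source simply connected integration should be recovered as the quotient \begin{equation*} \Pi_1(A)\cong \Pi_{\infty}^{(1)}(A)/(\text{$A$-homotopy})\,, \end{equation*} which is the construction of the Weinstein groupoid in~\cite{rui}.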
Note that this definition is the analogue of the definition of Morita equivalence of groupoids (except for the additional simply connected assumption, which we will explain). In the same paper it is mentioned that, instead of asking that the fibers be simply connected, one can ask that they are $n$-connected for some $n\,.$ In the author's opinion, the right value of $n$ is $\infty\,,$ ie. the fibers should be contractible; any other value of $n$ should be thought of as a particular case of an $n$-equivalence of Lie algebroids, which is analogous to the notion of weak $n$-equivalence from topology (see page 144 in~\cite{dieck}). We cite the following two results as evidence for our claim: \begin{theorem}(Theorem 2 in~\cite{Crainic})\label{crainicmor} Let $f:X\to Y$ be a surjective submersion with $n$-connected fibers, and let $B\to Y$ be a Lie algebroid. Then the Lie algebroid cohomologies of $f^!B$ and $B$ are isomorphic up to degree $n\,.$ \end{theorem} \begin{theorem}(see Theorem 4.2 in~\cite{sparano}) Let $f:X\to Y$ be a surjective submersion with $n$-connected fibers, and let $B\to Y$ be a Lie algebroid. Then the Lie algebroids $f^!B$ and $B$ share the same deformation cohomology up to degree $n\,.$\footnote{For more properties invariant under this notion of equivalence, see Remark 3.3 in~\cite{sparano}.} \end{theorem} In particular, our definition of Morita equivalence of Lie algebroids will imply that two tangent bundles are Morita equivalent if and only if their underlying manifolds are homotopy equivalent. One piece of evidence supporting this is the fact that the Lie algebroid cohomology of tangent bundles is invariant under homotopy equivalences of the underlying manifolds (this includes the cohomology of constant sheaves).
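Both theorems above are statements about the pullback algebroid; let us recall its standard description, since the fiber product it involves is exactly condition 2 in the definitions of~\Cref{defml}: \begin{equation*} f^!B=TX\times_{TY}B=\{(v,b)\in TX\times B:\ f_*v=\alpha(b)\}\,, \end{equation*} with anchor $(v,b)\mapsto v\,;$ this is a Lie algebroid whenever $f$ is a submersion (more generally, whenever $f_*$ is transverse to $\alpha$). In particular, for $B=TY$ one has $f^!TY\cong TX$ for any smooth $f\,,$ which is what ties these results to the claim about tangent bundles.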
Further evidence supporting this is the following proposition (due to Arias Abad, Quintero V\'{e}lez and V\'{e}lez V\'{a}squez): \begin{theorem}(see~\cite{Quintero}, Corollary 5.1) If $f:X\to Y$ is a smooth homotopy equivalence, then the pullback functor \begin{equation} f^*:\textbf{Loc}_{\infty}(Y)\to \textbf{Loc}_{\infty}(X) \end{equation} is a quasi-equivalence (ie. it is a quasi-equivalence between the dg categories of $\infty$-local systems). \end{theorem} \vspace{1mm}Let us make a remark which is relevant to this discussion and which we will come back to later: a surjective submersion $f:X\to Y$ with $n$-connected fibers is a weak $n$-equivalence, ie. it defines an isomorphism of homotopy groups up to degree $n\,.$ For $n=\infty\,,$ $f$ is a homotopy equivalence (see Corollary 13 in~\cite{gael}). \vspace{3mm}\\For another example, we can embed manifolds into the category of Lie algebroids by assigning to a manifold $X$ the $0$ vector bundle over it. Our definition will imply that two manifolds are Morita equivalent if and only if they are diffeomorphic. Certainly, all reasonable properties of manifolds are invariant under diffeomorphisms. Note that the aforementioned embedding is the one that is consistent with the natural embedding of manifolds into groupoids/stacks, which assigns to $X$ the groupoid containing only identity morphisms (note that, assigning to $X$ the groupoid $\Pi_1(X)\,,$ for example, is not an embedding under Morita equivalences). \section{Motivation}Before giving our definition of Morita equivalences of Lie algebroids (see~\Cref{MEA}), we will motivate the definition. \vspace{3mm}\\Let's restrict to the subcategory of Lie algebroids given by tangent bundles. We believe that two tangent bundles $TX\,,TY$ should be Morita equivalent if and only if $X$ and $Y$ are homotopy equivalent.
One reason for this is that homotopy equivalence seems to preserve everything that should be an invariant of Lie algebroids, in particular the Lie algebroid cohomologies of $TX\,,TY$ are the same if $X$ and $Y$ are homotopy equivalent (if they are just weakly $n$-equivalent, meaning that there is a map $X\to Y$ which is an isomorphism on homotopy groups up to degree $n\,,$ then the Lie algebroid cohomologies will agree up to degree $n$). \vspace{3mm}\\We will return to Lie algebroids in a moment, but first let's recall the definition of a Morita map for Lie groupoids. A morphism $f:H\to G$ is a Morita equivalence if the following two conditions hold: \begin{enumerate} \item $H^0\times_{G^0}G^{(1)}\to G^0$ is a surjective submersion. \item The following diagram is a fiber product: \begin{equation} \begin{tikzcd} H^{(1)} \arrow[r, "f"] \arrow[d, "{(s,t)}"'] & G^{(1)} \arrow[d, "{(s,t)}"'] \\ H^0\times H^0 \arrow[r, "{(f,f)}"] & G^0\times G^0 \end{tikzcd} \end{equation} \end{enumerate} Now we want to generalize this to Lie algebroids. The map $H^0\times_{G^0}G^{(1)}\to G^0$ has a Lie algebroid analogue: given two Lie algebroids $A\to X\,,B\to Y$ and a morphism $f:A\to B\,,$ we can form its ``mapping path space". This is given by the fiber product $X\sideset{_f}{_{s}}{\mathop{\times}}P\,,$ where $P$ is the space of algebroid paths. Let's recall the definition: \begin{definition}\label{nodef1}(see section 1 in~\cite{rui} for more details) Let $A\to X$ be a Lie algebroid. An algebroid path (or A-path) is given by a $C^1$ curve $\gamma:[0,1]\to A$ such that $\pi\circ\gamma$ is a $C^2$ curve in $X$ and such that \begin{equation} \alpha(\gamma(t))=\frac{d}{dt}\pi(\gamma(t))\,. \end{equation} We denote the space of algebroid paths by $P_X\,.$ It is the same space as $\Pi_{\infty}^1(A)$ (see~\ref{inftyg}).
\end{definition} Now $P_X$ is a Banach manifold (see section 4.2 of~\cite{rui}), and it comes with two maps onto $X\,,$ which we will denote suggestively as follows: \begin{equation} s\,,t:P_X\to X\,, s(\gamma)=\pi(\gamma(0))\,,t(\gamma)=\pi(\gamma(1))\,. \end{equation} \begin{definition}\label{nodef} Given a map $f:A\to B$ of Lie algebroids $A\to X\,,B\to Y\,,$ we define the mapping path space to be $P_f=X\sideset{_f}{_{s}}{\mathop{\times}}P_B\,.$ \end{definition} Now given this definition, a guess for what a Morita equivalence of Lie algebroids might be is the following: given two Lie algebroids $A\to X\,,B\to Y\,,$ a morphism $f:A\to B$ is a Morita equivalence if the following two conditions hold: \begin{enumerate} \item The composition $P_f\to P_Y\xrightarrow[]{t}Y$ is a surjective submersion. \item The following diagram is a fiber product: \begin{equation} \begin{tikzcd} A \arrow[r, "f"] \arrow[d, "\alpha"'] & B \arrow[d, "\alpha"'] \\ TX \arrow[r, "f_*"] & TY \end{tikzcd} \end{equation} \end{enumerate} Now this definition doesn't work, and to see this, just consider the case of tangent bundles. Assuming $Y$ is path connected, these two conditions will hold automatically. To understand what is going on, let's consider the fundamental groupoid $\Pi_1(Y)\,.$ Given a map $f:X\to Y$ (assume they are connected for simplicity), $f^!\Pi_1(Y)$ is Morita equivalent to $\Pi_1(Y)\,,$ however it isn't necessarily Morita equivalent to the source simply connected integration of $f^!TY=TX\,,$ ie. 
$\Pi_1(X)\,.$ In order for this to be the case we need the map $f$ to induce an isomorphism $\pi_1(X)\to\pi_1(Y)\,.$ Let's emphasize this point: \textbf{the pullback of the source simply connected integration of a Lie algebroid is not necessarily equivalent to the source simply connected integration of the pullback of the Lie algebroid.} \vspace{3mm}\\Continuing the discussion, suppose that $Y$ has vanishing homotopy groups above degree $1\,,$ so that $\Pi_1(Y)$ has contractible source fibers. In this case, in the author's opinion, $\Pi_1(Y)$ is essentially completely equivalent to $TY$ (and hence $\Pi_{\infty}(Y))\,,$ in particular they have all the same cohomology groups, for all modules and all degrees, by the van Est isomorphism theorem. However, $f^!\Pi_1(Y)$ doesn't necessarily have contractible source (or equivalently target) fibers; for this to be true we need $f$ to be a homotopy equivalence. \vspace{3mm}\\Now let's consider the case of a map $f:X\to Y$ and let's consider the following question: when is $f^!\Pi_{\infty}(Y)$ equivalent to $\Pi_{\infty}(X)\,?$ In order for this to be true, $f^!\Pi_{\infty}(Y)$ should have contractible target fibers. Explicitly, a target fiber over a point $x_0\in X$ of $f^!\Pi_\infty(Y)$ is given by: \begin{equation} \{(x,\gamma):x\in X\,, \gamma:[0,1]\to Y\,,\gamma(0)=f(x)\,,\gamma(1)=f(x_0)\}\,, \end{equation} ie. a target fiber over $x_0$ is the homotopy fiber over $f(x_0)\,.$ Basic results about Serre fibrations from topology then imply that all target fibers are contractible if and only if $f:X\to Y$ is a homotopy equivalence. \vspace{3mm}\\More generally then, given a map $f:X\to Y$ with a Lie algebroid $B\to Y\,,$ we see that $f^!\Pi_{\infty}(B)$ (see \cref{inftyg}) doesn't need to be equivalent to $\Pi_{\infty}(f^!B)$ (assuming the pullbacks exist). That is, for general $f\,,$ \begin{equation} f^!\Pi_{\infty}\not\cong \Pi_{\infty}f^!\,.
\end{equation} The property that $f^!B$ is Morita equivalent to $B$ should be equivalent to the properties that $P_f\to Y$ is a surjective submersion and \begin{equation} f^!\Pi_{\infty}(B)\cong \Pi_{\infty}(f^!B)\,. \end{equation} If $f$ is a surjective submersion with contractible fibers, then these properties should be satisfied for all $B\,,$ in particular $f^!$ and $\Pi_{\infty}$ should commute. If $f$ is a surjective submersion with (strict) fibers that have vanishing homotopy groups up to degree $n\,,$ then $f^!$ and $\Pi_{\infty}$ should commute ``up to degree $n",$ meaning that $f^!$ and $\Pi_{n}$\footnote{We will not attempt to define $\Pi_n$ here, but it should be an $n$-truncation of $\Pi_{\infty}\,,$ see Theorem 1.2 in~\cite{zhuc}. For $n=1$ it is the ``functor" giving the source simply connected integration.} should commute (ignoring the issue of existence of the truncation). \vspace{3mm}\\Now that we understand why the two preceding conditions are not enough (see the discussion on the previous page, after~\Cref{nodef}) for Lie algebroids to be equivalent, let's discuss what more is needed. In order for $f^!\Pi_{\infty}(B)$ to be equivalent to $\Pi_{\infty}(f^!B)$ the target fibers of $f^!\Pi_{\infty}(B)$ should be (weakly) contractible. These are the analogue of homotopy fibers from topology for general Lie algebroids. \section{Definition of Morita Equivalences of Lie Algebroids}\label{defml} After the discussions in the previous sections, we are now ready to give our definition of Morita equivalences of Lie algebroids (see~\Cref{nodef1} and~\Cref{nodef} for the definitions of $P$ and $P_f$). In particular, we can apply this definition to foliations, in which case we obtain a definition of equivalence distinct from the notion of Morita equivalence introduced in~\cite{haus}.
One should compare it with the notion of homotopy equivalence of foliations introduced in~\cite{baum}\footnote{It is not clear to the author if our definition agrees with theirs.}: \begin{definition}\label{MEA} Let $A\to X\,,B\to Y$ be Lie algebroids. We say that a morphism $f:A \to B$ is a Morita equivalence if the following conditions hold: \begin{enumerate} \item The composition $P_f\xrightarrow[]{\pi} P_Y\xrightarrow[]{t}Y$ is a surjective submersion with (weakly) contractible fibers.\footnote{Here $\pi$ is the natural map $P_f\xrightarrow[]{\pi} P_Y\,.$} \item The following diagram is a fiber product: \begin{equation} \begin{tikzcd} A \arrow[r, "f"] \arrow[d, "\alpha"'] & B \arrow[d, "\alpha"'] \\ TX \arrow[r, "f_*"] & TY \end{tikzcd} \end{equation} \end{enumerate} \end{definition} The idea would then be to localize at Morita equivalences to obtain generalized morphisms. We also give the following definition: \begin{definition}\label{neq} Let $A\to X\,,B\to Y$ be Lie algebroids. We say that a morphism $f:A \to B$ is an $n$-equivalence if the following conditions hold: \begin{enumerate} \item The composition $P_f\xrightarrow[]{\pi} P_Y\xrightarrow[]{t}Y$ is a surjective submersion with $n$-connected fibers. \item The following diagram is a fiber product: \begin{equation} \begin{tikzcd} A \arrow[r, "f"] \arrow[d, "\alpha"'] & B \arrow[d, "\alpha"'] \\ TX \arrow[r, "f_*"] & TY \end{tikzcd} \end{equation} \end{enumerate} \end{definition} One can localize at $n$-equivalences to obtain $n$-generalized morphisms. Let us emphasize that an $n$-equivalence for $n=\infty$ is a Morita equivalence. \vspace{3mm}\\Let's \textbf{sketch} a proof that the previous definition generalizes~\Cref{mlf}: \begin{proposition} Let $f:X\to Y$ be a surjective submersion with $n$-connected fibers, and let $A\to Y$ be a Lie algebroid. Then $f^!A$ is $n$-equivalent to $A\,.$ \begin{proof}(\textbf{sketch}) The proof essentially follows the proof of Lemma 1.14 in~\cite{haus}.
First note that since $f$ is surjective, $P_f\to Y$ is also surjective. Furthermore, $t:P_Y\to Y$ is a submersion\footnote{This was communicated to the author by Rui Loja Fernandes.}, therefore since $f$ is also a submersion it follows that $P_f\to Y$ is a surjective submersion. Now the map $\pi:P_f\to P_Y$ induces a natural map $(t\circ\pi)^{-1}(y)\to t^{-1}(y)$ for $y\in Y\,,$ and the fibers of this map are the fibers of $f\,,$ which are $n$-connected. Since $t^{-1}(y)$ is contractible, it follows that the target fibers of $P_f$ are $n$-connected. The second condition is satisfied by the definition of $f^!A\,.$ \end{proof} \end{proposition} \vspace{2mm}Now given a Lie groupoid $G\,,$ there is a natural action of $TG$ on $\mathfrak{g}\,.$ Under the \textbf{assumption} that there is also a natural ``action" of $TP$ on $\mathfrak{g}\,,$ we get the following result: \begin{conjecture} Suppose $f:A\to B$ is a map of Lie algebroids which satisfies the hypotheses of~\Cref{neq}. Then $H^*(A,\mathcal{O})=H^*(B,\mathcal{O})\,.$ \end{conjecture} \begin{proof}(\textbf{idea}) The idea is the following: $P_f$ comes with two natural maps $\pi_X\,,\pi_Y$ to $X\,,Y\,,$ respectively ($\pi_Y$ is what we call $t\circ\pi$ in the definition above, and $\pi_X$ is the map coming from the fact that $P_f=X\times_Y P\,.$) These maps are surjective submersions with $n$-connected fibers, so by~\Cref{crainicmor} (assuming it holds in the appropriate infinite dimensional setting), the result follows if we can show that $\pi_X^!A\cong \pi_Y^! B\,.$ Note that, by assumption, $A=f\vert_X^!B\,,$ and it follows that an ``action" of $TP$ on $\mathfrak{g}$ would define an isomorphism $\pi_X^!A\cong \pi_Y^!
B\,.$ \end{proof} Under the assumption that these definitions are correct, and due to the infinite dimensional nature of $\Pi_{\infty}(A)\,,$ we believe it would be natural to generalize~\Cref{mlf} to allow the space $Z$ to be a (paracompact, Hausdorff) Banach manifold, such that the fibers of both maps are $n$-connected, and such that $\pi_X^!A\cong \pi_Y^!B\,.$ We believe that this would also generalize the definitions of equivalence given above, in which case it would be a theorem that a map satisfying the conditions of~\Cref{neq} would be this more general $n$-equivalence. Hence we give the following, more general definition as well: \begin{definition}(more general) Let $A\to X\,, B\to Y$ be Lie algebroids. We say $A$ and $B$ are $n$-equivalent if there is a (paracompact, Hausdorff) Banach manifold\footnote{Or perhaps, even more generally, a diffeological space.} $Z$ with surjective submersions $\pi_X:Z\to X\,,\pi_Y:Z\to Y\,,$ both with $n$-connected fibers, such that $\pi_X^!A\cong \pi_Y^!B\,.$ For $n=\infty$ we call this a Morita equivalence. \end{definition} \vspace{4mm}Now let's connect all of this to~\Cref{morgl}. Recall that we think of a Lie algebroid $A\to Y$ as being equivalent to $\Pi_{\infty}(A)\,,$ for which we can take quotients (ie. truncations) to get different groupoids $\Pi_n(A)\,.$ Now from this perspective, a Morita equivalence between $A\to Y$ and $B\to X$ should correspond to a Morita equivalence between $\Pi_{\infty}(A)$ and $\Pi_{\infty}(B)\,,$ and taking truncations should preserve this equivalence. Now if $f:Y\to X$ is a surjective submersion with contractible fibers and $A\cong f^!B\,,$ then $A$ and $B$ should be Morita equivalent.
In fact,~\Cref{morgl} tells us that, indeed, in this context $\Pi_1(A)$ is Morita equivalent to $\Pi_1(B)\,.$ More generally, since the fibers in the aforementioned proposition are only required to be simply connected, this result suggests that $1$-equivalences between $A$ and $B$ should integrate to Morita equivalences of $\Pi_1(A)$ and $\Pi_1(B)\,.$ This makes sense as there are no higher morphisms. \begin{remark} If the comments made in this section are correct, ie. that $A$ is equivalent to $\Pi_{\infty}(A)\,,$ it would imply that van Est is, in a sense, really about the descent of cohomology and there is perhaps a much more general theorem lying around. \end{remark} \section{Necessary Conditions for a Morphism to be a Morita Equivalence and Examples} Now let's determine a necessary condition on $f$ so that the fibers of $t:P_f\to Y$ are weakly contractible. Since the codomain of the map $P_f\to Y$ is finite dimensional the constant rank theorem holds, by theorem F in~\cite{glock}. Therefore the map $P_f\to Y$ is locally fibered (or what others call a topological submersion), and by Corollary 13 in~\cite{gael}, if such a map has weakly contractible fibers then it is a Serre fibration, hence it is also a weak homotopy equivalence. Now the canonical inclusion $\iota:X\to P_f$ is a deformation retract, and $f=t\circ\pi\circ\iota\,,$ therefore $t\circ\pi$ is a weak equivalence if and only if $f$ is a weak equivalence, which is equivalent to $f$ being a homotopy equivalence by Whitehead's theorem. Therefore, a necessary condition for the map $P_f\to Y$ to have (weakly) contractible fibers is that $f$ is a homotopy equivalence. \begin{remark} As a partial converse to the discussion above, if $f$ is a homotopy equivalence and if $P_f\to Y$ is a Serre fibration, then $P_f\to Y$ has (weakly) contractible fibers.
\end{remark} \vspace{3mm}Now let's look at two extreme examples, one where the Lie algebroid is transitive and the other where the anchor map is $0:$ \begin{exmp} Let $f:X\to Y$ be a map of manifolds, where we take the Lie algebroid on $Y$ given by $TY\to Y\,.$ Then the surjective submersion property of condition 1 is satisfied and $TX$ satisfies condition 2. The condition on the fibers is equivalent to the induced map $f:X\to Y$ being a homotopy equivalence. Therefore, we see that a map $TX\to TY$ is a Morita equivalence if and only if the induced map $X\to Y$ is a homotopy equivalence. \end{exmp} \begin{exmp} Consider a surjective submersion $f:X\to Y$ of manifolds, where we take the Lie algebroid on $Y$ to be the $0$-vector bundle, denoted $0_Y\to Y\,.$ We have that $f^!0_Y$ satisfies condition 2. We also have that $P_f=X$ and so $P_f\to Y$ is a surjective submersion with (weakly) contractible fibers if and only if $f$ has contractible fibers. That is, if the induced map $f:X\to Y$ is a surjective submersion, then the morphism $f^!0_Y\to 0_Y$ is a Morita equivalence if and only if $f$ has contractible fibers. \end{exmp} \vspace{3mm}Assuming our definition of Morita equivalences of Lie algebroids is correct, it would offer an explanation for why the cohomology of certain sheaves is invariant under homotopy equivalence while that of others isn't. For example, the sheaf cohomology of constant sheaves is invariant under homotopy equivalence, but the sheaf cohomology of $\mathcal{O}$ is not.
From the point of view of Lie algebroids, and restricting to tangent bundles, this can be explained by the fact that it should really be the Lie algebroid cohomology which is invariant, and the Lie algebroid cohomology of a constant sheaf happens to just be the sheaf cohomology of the constant sheaf.\footnote{See~\Cref{Chevalley} of Part 1 for the definition of Lie algebroid cohomology that we are using.} However, the Lie algebroid cohomology of $\mathcal{O}$ (with respect to the tangent bundle) is the de Rham cohomology of the underlying manifold, which is indeed a homotopy invariant. \vspace{3mm}\\On the other hand, $H^*(X,\mathcal{O})$ is the Lie algebroid cohomology of the $0$ vector bundle over $X\,.$ As mentioned earlier, two Lie algebroids consisting of the $0$ vector bundles over $X$ and $Y$ are Morita equivalent if and only if $X$ and $Y$ are diffeomorphic. Certainly, the cohomology of $\mathcal{O}$ is invariant under diffeomorphisms. \begin{remark} \begin{enumerate} \item Is there a connection between the Morita invariance of Lie algebroid cohomology and the problem of invariance of polarization in geometric quantization? Classical geometric quantization of a symplectic manifold $(M,\omega)$ involves first choosing a line bundle with connection whose curvature is $\omega$ (such a choice isn't unique, but if $M$ is simply connected it is unique up to isomorphism). Once this is done, one must choose a Lagrangian polarization; this in particular equips $M$ with a Lie algebroid and representation. One would normally proceed to then take the quantization to be the degree $0$ cohomology of this representation, however if there is a nontrivial Bohr-Sommerfeld condition this may result in the $0$ vector space, and so one must use ``discontinuous" global sections, given by sections over the Bohr-Sommerfeld leaves.
According to~\cite{snia} (chapter 1, or see Theorem 1.2 in~\cite{eva}), the dimension of the cohomology of the representation is (under certain conditions) the number of Bohr-Sommerfeld leaves. Different polarizations can lead to different results; however, they often give the same result. \vspace{3mm}\\For example, we can quantize $T^*S^1$ and there are two natural choices of polarization: one has quotient space $S^1\,,$ and the other has quotient space $\mathbb{R}\,.$ Computing the foliated cohomologies gives isomorphic results; however, the interesting thing is that the isomorphism isn't degree preserving. How does this happen? \item Can a deformation quantization of a Poisson manifold $(M,\Lambda)$ be obtained from geometric quantization of $\Pi_{\infty}(\Lambda)\,,$ using a generalization of the quantization procedure (due to Hawkins) found in~\cite{eli}? Indeed, the formula for deformation quantization via the path integral approach (due to Cattaneo and Felder, found on the first page of~\cite{catt}) is formally similar to the formula for the twisted convolution product, eg. page 22 in~\cite{eli}. In the former, it looks like the integral is, roughly, over morphisms $T\Delta\to T^*M\,,$ and functions $f:M\to\mathbb{R}$ give ``operators'' by pulling them back to $\Pi_{\infty}(\Lambda)$ via the source map. If this is true, it would suggest that the deformation quantization can be obtained via geometric quantization of $(M,\Lambda)\,,$ using the source simply connected groupoid, if its source fibers are contractible. This is consistent with examples 6.2 and 6.5 in~\cite{eli}.
\end{enumerate} \end{remark} \chapter{Higher Morphisms, Connections with van Est and a No-Go Result About Lie Algebroids} In the previous chapter we defined Morita equivalences and $n$-equivalences of Lie algebroids, and we conjecture that, after localizing with respect to these $n$-equivalences, we get an equivalence between two categories, one groupoid-like and one algebroid-like, giving a version of Grothendieck's homotopy hypothesis (see~\cite{groth}) for Lie algebroids (or a smooth homotopy hypothesis). Recall that the homotopy hypothesis essentially says that topological spaces, up to weak equivalence, are equivalent to (discrete) $\infty$-groupoids.\footnote{If we use Kan complexes to model $\infty$-groupoids then this becomes a theorem. See~\cite{Quillen}.} The connection between the homotopy hypothesis and the one we will state arises from the fact that, under our definition of Morita equivalences, two tangent bundles are Morita equivalent if and only if their underlying manifolds are weakly equivalent.\footnote{Furthermore, analogously to how the smooth fundamental groupoid $\Pi_1(X)$ of a space is equivalent to the discrete version, we expect the same result to hold for $\Pi_{\infty}(X)\,.$} In addition, for the case $n=1$ we obtain a generalization of Lie's second and third theorems. \vspace{3mm}\\To support the claims of the preceding paragraph, we state the following result, due to Block and Smith, which is a generalization of the Riemann-Hilbert correspondence from $\Pi_{1}(X)$ to $\Pi_{\infty}(X)$\footnote{Recall that the classical Riemann-Hilbert correspondence states that there is an equivalence of categories between flat bundles on a space $X$ and representations of $\Pi_1(X)\,.$}: \begin{theorem}(see~\cite{block}) Let $X$ be a manifold. There is an $A_{\infty}$-quasi-equivalence \begin{equation} \mathcal{RH}:\mathcal{P}_{\mathcal{A}}\to \textit{Loc}^{\mathcal{C}}(\Pi_{\infty}(X))\,.
\end{equation} Here, $\mathcal{P}_{\mathcal{A}}$ is the dg-category of graded bundles on $X$ with a flat $\mathbb{Z}$-graded connection, and $\textit{Loc}^{\mathcal{C}}(\Pi_{\infty}(X))$ is the dg-category of $\infty$-local systems on $X\,.$ \end{theorem} \vspace{1mm}Before getting into an algebroid version of the homotopy hypothesis, we (in particular) aim to do two things: describe how the van Est isomorphism theorem is a generalization of Lie's second theorem, and describe how gerbes can be thought of as generalized morphisms (analogously to how principal bundles correspond to generalized morphisms). Much of what is said in this section may be known to experts. Some references include~\cite{ginot},~\cite{konrad},~\cite{Murray},~\cite{wockel}. \vspace{3mm}\\We've already seen that for coefficients in an abelian Lie algebra, the van Est isomorphism theorem generalizes Lie's second theorem. For example, given a morphism $f:\mathfrak{g}\to\mathbb{C}\,,$ both Lie's second theorem and the van Est isomorphism theorem tell us that this morphism integrates to a morphism $G\to\mathbb{C}$ (or any other Lie group integrating the Lie algebra $\mathbb{C})\,,$ given that $G$ is source simply connected. \vspace{3mm}\\Of course, the van Est isomorphism theorem tells us more, namely it says that a degree $n$ cohomology class for the Lie algebroid integrates to the Lie groupoid as long as the groupoid has $n$-connected source fibers. Our perspective is that the van Est theorem should generalize to the higher integrations $\Pi_n(\mathfrak{g})\,,$\footnote{We won't attempt to define this here, but it should be an $n$-truncation of $\Pi_{\infty}(\mathfrak{g})\,.$ See Theorem 1.2 in~\cite{zhuc}.} where the van Est map should be an isomorphism up to degree $n\,,$ which is why when $n=\infty$ we should get an isomorphism in all degrees.
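\vspace{3mm}\\To illustrate the role of source simple connectivity with a standard example (not specific to this thesis): take $\mathfrak{g}=\mathbb{R}\,,$ which integrates both to the simply connected group $\mathbb{R}$ and to $S^1\,.$ The identity morphism $\mathrm{id}:\mathbb{R}\to\mathbb{R}$ of Lie algebras integrates to the identity $\mathbb{R}\to\mathbb{R}\,,$ but it does not integrate to a morphism with domain $S^1\,:$ the image of any Lie group homomorphism $f:S^1\to\mathbb{R}$ is a compact subgroup of $\mathbb{R}\,,$ so that
\begin{equation}
f(S^1)=\{0\}\,,\quad\text{hence}\quad df\vert_e=0\neq \mathrm{id}\,.
\end{equation}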
\vspace{4mm}\\Towards the end we will define generalized morphisms between Lie algebroids and Lie groupoids, which will give us another interpretation of the van Est map. We then argue that if $G\rightrightarrows G^0$ is source $n$-connected, then $G\rightrightarrows G^0$ is ``$n$-equivalent'' to $\mathfrak{g}\to G^0\,.$ \vspace{3mm}\\Once again, Part 3 of this thesis is largely speculative (and again, significant details need to be filled in for completion). \section{Gerbes as Generalized Morphisms} Let's flesh out the way in which van Est generalizes Lie's second theorem. First, think about a manifold $X$ for a moment, and consider the Lie group $\mathbb{C}^*\,.$ Classes in $H^0(X,\mathcal{O}^*)$ correspond to global functions $X\to \mathbb{C}^*\,,$ where here $\mathbb{C}^*$ is thought of as a manifold, not as a group. Classes in $H^1(X,\mathcal{O}^*)$ correspond to principal $\mathbb{C}^*$-bundles over $X\,.$ Now after embedding manifolds into groupoids (up to Morita equivalence, equivalently stacks) we learn that principal $\mathbb{C}^*$-bundles are the same as generalized morphisms $X\to \mathbb{C}^*\,,$ where now $\mathbb{C}^*$ is thought of as a group. A natural question is then: what happens when we embed manifolds into double groupoids (or 2-groupoids)? We will now argue that gerbes give higher generalized morphisms. \vspace{3mm}\\Consider a class in $H^2(X,\mathcal{O}^*)\,.$ We will think of this geometrically as a bundle gerbe (see~\cite{Murray}), which can be described as follows: we have a surjective submersion $\pi:Y\to X\,,$ together with a central $\mathbb{C}^*$-extension of $Y\times_X Y\rightrightarrows Y\,.$ We will denote this extension by $P\rightrightarrows Y\,,$ ie.
we have the following sequence of groupoids: \begin{equation} \begin{tikzcd} Y\times\mathbb{C}^* \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right=7] & P \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right=7] & Y\times_X Y \arrow[d, shift right] \arrow[d, shift left] \\ Y & Y & Y \end{tikzcd} \end{equation} Here, the groupoid on the left is just the product of the groupoids $Y\rightrightarrows Y$ with $\mathbb{C}^*\rightrightarrows *\,.$ In particular, $P\to Y\times_X Y$ is a principal $\mathbb{C}^*$-bundle. From this data, we can form the following double groupoid: \begin{equation} \begin{tikzcd} P\rtimes\mathbb{C}^* \arrow[r, shift right] \arrow[r, shift left] \arrow[d, shift right] \arrow[d, shift left] & P \arrow[d, shift right] \arrow[d, shift left] \\ Y \arrow[r, shift right] \arrow[r, shift left] & Y \end{tikzcd} \end{equation} Here, the groupoid in the top row is the action groupoid, corresponding to the fact that $P$ is a principal $\mathbb{C}^*$-bundle over $Y\times_X Y\,,$ and the groupoid in the left column is the product of $P\rightrightarrows Y$ with $\mathbb{C}^*\rightrightarrows *\,.$ Now, this double groupoid is Morita equivalent\footnote{See~\Cref{mordo} for the definition.} to \begin{equation} \begin{tikzcd} Y\times_X Y \arrow[d, shift right] \arrow[d, shift left] \\ Y \end{tikzcd} \end{equation} which is Morita equivalent to the manifold $X\,.$ On the other hand, we have a natural morphism \begin{equation} \begin{tikzcd} P\rtimes\mathbb{C}^* \arrow[r, shift right] \arrow[r, shift left] \arrow[d, shift right] \arrow[d, shift left] & P \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right=7] & \mathbb{C}^* \arrow[r, shift right] \arrow[r, shift left] \arrow[d, shift right] \arrow[d, shift left] & * \arrow[d, shift right] \arrow[d, shift left] \\ Y \arrow[r, shift right] \arrow[r, shift left] & Y & * \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} Therefore, using a 
$\mathbb{C}^*$-gerbe we get a generalized morphism with domain $X\,,$ ie. \begin{equation} \begin{tikzcd} X \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & X \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right=7] & \mathbb{C}^* \arrow[r, shift right] \arrow[r, shift left] \arrow[d, shift right] \arrow[d, shift left] & * \arrow[d, shift right] \arrow[d, shift left] \\ X \arrow[r, shift right] \arrow[r, shift left] & X & * \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} Observe that we are no longer thinking of $\mathbb{C}^*$ as a group; we are thinking of it as a double groupoid. Equivalently, all of the double groupoids we have seen in this section are in one-to-one correspondence with strict 2-groupoids (see Example 3.7 in~\cite{mehtatang}), so rather than using double groupoids we can use strict 2-groupoids. This result is in line with the fact that a bundle gerbe can be thought of as a principal 2-bundle for a 2-group (see~\cite{ginot}), so we might have expected, a priori, that a gerbe can be thought of as a generalized morphism into a 2-group.
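\vspace{3mm}\\For concreteness, let's recall the standard Čech presentation of such classes (see eg.~\cite{Murray}): given an open cover $\{U_i\}$ of $X\,,$ we may take $Y=\bigsqcup_i U_i$ above, so that $Y\times_X Y=\bigsqcup_{i,j}U_i\cap U_j\,,$ and local trivializations of the principal $\mathbb{C}^*$-bundle $P$ then yield functions $g_{ijk}:U_i\cap U_j\cap U_k\to\mathbb{C}^*$ satisfying the cocycle condition
\begin{equation}
g_{jkl}\,g_{ikl}^{-1}\,g_{ijl}\,g_{ijk}^{-1}=1\quad\text{on }U_i\cap U_j\cap U_k\cap U_l\,,
\end{equation}
just as a class in $H^1(X,\mathcal{O}^*)$ is presented by transition functions $g_{ij}$ satisfying $g_{jk}\,g_{ik}^{-1}\,g_{ij}=1\,.$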
\vspace{3mm}\\Let's look at another example: given a central extension $1\to A\to E\to G\to 1$ of Lie groups (which is described by a class in $H^2(G,A)$), we can form the following double groupoid: \begin{equation} \begin{tikzcd} E\rtimes A \arrow[r, shift right] \arrow[r, shift left] \arrow[d, shift right] \arrow[d, shift left] & E \arrow[d, shift right] \arrow[d, shift left] \\ * \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} where the groupoid in the left column is again the product of the groups $E$ and $A\,.$ This double groupoid is Morita equivalent to \begin{equation} \begin{tikzcd} G \arrow[d, shift right] \arrow[d, shift left] \\ * \end{tikzcd} \end{equation} and again, there is a natural morphism \begin{equation} \begin{tikzcd} E\rtimes A \arrow[r, shift right] \arrow[r, shift left] \arrow[d, shift right] \arrow[d, shift left] & E \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right=7] & A \arrow[r, shift right] \arrow[r, shift left] \arrow[d, shift right] \arrow[d, shift left] & * \arrow[d, shift right] \arrow[d, shift left] \\ * \arrow[r, shift right] \arrow[r, shift left] & * & * \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} Therefore, once again we get a generalized morphism with domain $G\rightrightarrows *\,,$ ie. \begin{equation} \begin{tikzcd} G \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & G \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right=7] & A \arrow[r, shift right] \arrow[r, shift left] \arrow[d, shift right] \arrow[d, shift left] & * \arrow[d, shift right] \arrow[d, shift left] \\ * \arrow[r, shift right] \arrow[r, shift left] & * & * \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} Observe the pattern: given an abelian Lie group $A\rightrightarrows*\,,$ the cohomology of a groupoid $G\rightrightarrows G^0$ in degree $0$ corresponds to morphisms $G\to A[-1]\,,$ where by
$A[-1]$ we mean the groupoid $A\rightrightarrows A\,.$ Cohomology in degree one corresponds to generalized morphisms $G\to A\,,$ where now we are thinking of $A$ as a Lie group. Cohomology in degree 2 corresponds to generalized morphisms $G\to A[1]\,,$ where by $A[1]$ we mean the following double groupoid (which can equivalently be thought of as a strict 2-groupoid, by example 3.7 in~\cite{mehtatang}): \begin{equation} \begin{tikzcd} A \arrow[d, shift right] \arrow[d, shift left] \arrow[r, shift right] \arrow[r, shift left] & * \arrow[d, shift right] \arrow[d, shift left] \\ * \arrow[r, shift right] \arrow[r, shift left] & * \end{tikzcd} \end{equation} Note that the 2-groupoid corresponding to this double groupoid is $*$ in degrees 0 and 1, and $A$ in degree 2 (see~\cite{mehtatang}). We might denote these morphisms, suggestively, by $\text{Hom}^{\bullet}(G,A)$ (where we shift degrees so that $\text{Hom}^{0}(G,A)$ denotes morphisms into $A\rightrightarrows A$). \vspace{3mm}\\We believe that this phenomenon continues to occur for general cohomology classes after embedding manifolds (or more generally, groupoids) into $n$-fold groupoids\footnote{By $n$-fold groupoids we mean the degree $n$ version of double groupoids, ie. groupoids, double groupoids, triple groupoids, etc.} (or $n$-groupoids). If we take $n=\infty\,,$ we should be able to do this for all cohomology classes simultaneously. This would be an $\infty$-category. \section{Perspectives on van Est and Lie's Second Theorem: The Homotopy Hypothesis} Now let's connect the discussion in the previous section to van Est and Lie's second theorem. The aforementioned generalized morphisms should be naturally graded, eg. for a manifold $X\,,$ a function has degree $0\,,$ a principal bundle has degree $1\,,$ a gerbe has degree $2\,,$ etc. Equivalently, the generalized morphism has degree $n$ if it maps into an $n$-groupoid. We should be able to think about Lie algebroid cohomology classes in a similar way (eg.
associated to a class in $H^2(\mathfrak{g},\mathcal{O})$ is a central extension of Lie algebroids, and associated to such an extension is a Lie $2$-algebroid which is equivalent to $\mathfrak{g}\,.$ This may be related to Example 6.16 in~\cite{Nuiten}). Then Lie's second theorem would be about degree $1$ morphisms only, whereas van Est would be about morphisms in all degrees (for coefficients in a representation, for example). \vspace{3mm}\\Consider a Lie algebroid $A$ and some $\infty$-groupoid integrating $A$ (eg. $\Pi_n(A)\,,$ for some $n$). A more general van Est isomorphism theorem should imply that, in particular, a degree $n$ generalized morphism with domain $A$ integrates to $\Pi_n(A)\,.$ \vspace{3mm}\\From this point of view, the true mathematical object corresponding to a Lie algebroid $A$ is not a groupoid or source simply connected groupoid, but rather a source $\infty$-connected $\infty$-groupoid, which can be taken to be $\Pi_{\infty}(A)\,.$ The van Est isomorphism theorem would prove that a generalized morphism with domain $A$ (and appropriate codomain) is in one-to-one correspondence with generalized morphisms with domain $\Pi_{\infty}(A)\,,$ where the morphisms can be of any degree. \vspace{3mm}\\Of course, this raises two questions: \begin{enumerate} \item How can we describe nonabelian gerbes as generalized morphisms? (Presumably by using their description in~\cite{bundle}). \item Is there a van Est map and isomorphism theorem for these generalized morphisms taking values in general Lie groupoids, rather than abelian Lie groups? Lie's second theorem would suggest this possibility. \end{enumerate} Ultimately, a more general van Est isomorphism theorem should imply that there is an equivalence between two appropriate $\infty$-categories, one Lie groupoid-like and one Lie algebroid-like.
There should be a Lie algebroid homotopy hypothesis, or smooth homotopy hypothesis, which, in spirit, should imply:\footnote{Technically, we are conjecturing that this is a meaningful statement that expresses truth. We give a related conjecture in~\ref{conjf}.} \begin{conjecture} Lie $n$-algebroids up to $n$-equivalence are equivalent to Lie $n$-groupoids which are source $n$-connected\footnote{We are being vague about what source $n$-connected means, but it probably means that the (groupoid) fibers of the inclusion map of the base into the higher groupoid are $n$-connected.} (up to equivalence).\footnote{Due to the failure of Lie's third theorem, in order for such a thing to be true in general we need to use a more general notion of Lie groupoids, eg. see~\cite{zhuc}.} In particular, generalized $n$-morphisms of Lie $n$-algebroids integrate to generalized $n$-morphisms of Lie $n$-groupoids. \end{conjecture} The functor inducing the equivalence should be Lie differentiation (see~\cite{bona},~\cite{Li},~\cite{jet} for some details about Lie $n$-algebroids/groupoids and differentiation, see~\cite{zhuc2} for a notion of Morita equivalence of higher groupoids). This should imply that a Lie algebroid $A\,,$ up to $n$-equivalence, is essentially indistinguishable from $\Pi_n(A)\,,$ ``up to degree $n$''.
For example, since degree $k$ cohomology classes can be viewed as morphisms into a $k$-algebroid/groupoid, this conjecture should imply that they have the same cohomology up to degree $n\,.$ \vspace{3mm}\\When restricted to Lie algebroids given by tangent bundles, this essentially says that a space $X\,,$ up to $n$-equivalence, is equivalent to $\Pi_n(X)\,.$ Very roughly speaking, we have the following sequence of implications: \begin{equation} \begin{tikzcd} \text{LA Homotopy Hypothesis} \arrow[r, Rightarrow] & \text{van Est} \arrow[r, Rightarrow] & \text{Lie \RNum{2}} \end{tikzcd} \end{equation} Now from this point of view, why do only degree $1$ cohomology classes (including representations) necessarily integrate to the source simply connected groupoid? It is because higher cohomology classes correspond to morphisms into higher Lie groupoids/algebroids,\footnote{At the algebroid level we should get this correspondence after associating to a cohomology class a higher Lie algebroid.} which means that by considering higher cohomology classes we are essentially leaving the category of Lie groupoids/algebroids, and since Lie \RNum{3} is only about $1$-groupoids, one might guess that we don't get a correspondence between higher degree cohomology groups. \begin{remark}\label{imrem} As a final remark of this section we wish to point out that, under the assumed equivalence of $A$ with $\Pi_{\infty}(A)\,,$ the van Est map can be viewed as a pullback map rather than as a differentiation map. Furthermore, integrating classes from Lie algebroid cohomology to Lie groupoid cohomology can be viewed as a question of descent. That is, pullback $\cong$ differentiation, integration $\cong$ descent.
\vspace{3mm}\\We can already see that this is true in the case of a connected abelian Lie group $G\,,$ since its Lie algebra $\mathfrak{g}$ is naturally identified with the simply connected integration of $\mathfrak{g}\,,$ which is the universal cover of $G$ (since $\mathfrak{g}$ is contractible it should be equivalent to $\Pi_{\infty}(\mathfrak{g})$\footnote{We hope that we aren't confusing the reader too much. We are thinking of $\mathfrak{g}$ in two different ways: 1. as the Lie algebra of $G$ and 2. as a simply connected abelian group.}). In this case we can readily observe that pullback $\cong$ differentiation, integration $\cong$ descent. For example, differentiating a morphism $f:G\to S^1$ to $df\vert_e:\mathfrak{g}\to\mathbb{R}$ is the same as pulling back $f$ to the universal cover of $G\,,$ ie. $\mathfrak{g}\,,$ and lifting this map to $\mathbb{R}\,.$ On the other hand, integrating a morphism $\mathfrak{g}\to\mathbb{R}$ to a morphism $G\to S^1$ is the same as quotienting this morphism to a map $\mathfrak{g}\to S^1$ and then asking if it descends to a map $G\to S^1\,.$ \end{remark} \section{Differentiating Generalized Morphisms? A No-Go Theorem} In this section we consider the problem of differentiating generalized morphisms. Perhaps surprisingly, the answer turns out to be negative, ie. generalized morphisms are not, in general, differentiable (even if one doesn't believe in our definition).
We prove the following result: \begin{theorem}\label{comdi} There is no notion of generalized morphisms between Lie algebroids with the following properties: \begin{enumerate} \item Associated to any generalized morphism $P:G\to H$ is a generalized morphism $dP:\mathfrak{g}\to\mathfrak{h}\,.$ \item Generalized morphisms $\mathfrak{g}\to\mathfrak{h}$ induce pullback maps $H^{\bullet}(\mathfrak{h},A)\to H^{\bullet}(\mathfrak{g},A)\,,$\footnote{See Part 1, sections 2 and 3 for the relevant definitions.} for any abelian Lie group $A\,,$ in such a way that the pullback of a trivial class is trivial. \item Pullback of cohomology commutes with the van Est map. That is, the following diagram is commutative: \begin{equation} \begin{tikzcd} {H^{\bullet}(H,A)} \arrow[r, "P^*"] \arrow[d, "VE"'] & {H^{\bullet}(G,A)} \arrow[d, "VE"] \\ {H^{\bullet}(\mathfrak{h},A)} \arrow[r, "dP^*"] & {H^{\bullet}(\mathfrak{g},A)} \end{tikzcd} \end{equation} \end{enumerate} \end{theorem} \begin{proof} Consider $\Pi_1(S^1)\cong \mathbb{R}\ltimes S^1\rightrightarrows S^1\,,$ whose Lie algebroid is $TS^1\,.$ The universal cover $\mathbb{R}\to S^1$ is a principal $\mathbb{Z}$-bundle, and therefore defines a generalized morphism $P:\Pi_1(S^1)\to\mathbb{Z}\,.$ There is a canonical class in $H^1(\mathbb{Z},\mathbb{R})\,,$ given by $n\mapsto n\,,$ and the pullback of this class to $H^1(\Pi_1(S^1),\mathbb{R})$ can be represented by the natural homomorphism $\mathbb{R}\ltimes S^1\to\mathbb{R}\,.$ After applying the van Est map, we get $d\theta\in H^1_{\text{dR}}(S^1)\,.$ On the other hand, the Lie algebra of $\mathbb{Z}$ is $0\to *\,,$ therefore the three properties imply that $d\theta$ must be exact, which it isn't (indeed, $\int_{S^1}d\theta=2\pi\neq 0$). \end{proof} \begin{remark} The proof shows that we can relax the conditions so that properties 2 and 3 only need to hold in degree $1\,,$ and we can assume that $A$ is a representation (see~\Cref{strong}).
In addition, even if we worked with source connected groupoids only, a similar proof shows that the three properties cannot hold simultaneously. Our resolution is to relax property 1 so that not all generalized morphisms are differentiable. \end{remark} Now a conundrum might be given by the following: associated to a rank $n$ representation of $G$ is a generalized morphism into $\text{GL}(n,\mathbb{C})$ (given by the frame bundle associated to the underlying vector bundle), and we can differentiate representations, so why might we not be able to differentiate the associated generalized morphism? The answer is that we can't (in general) \textit{really} differentiate representations. Of course we can, in a sense, differentiate representations; however, the point is that the differentiation is only happening on the domain side, not on the codomain side – when we differentiate representations we retain the $\text{GL}(n,\mathbb{C})$ data which describes the vector bundle. The only thing which gets differentiated is the action of $G\,.$ \vspace{3mm}\\Now, consider a generalized morphism $G\to H$ given by a bibundle $P\,.$ This doesn't need to differentiate to a generalized morphism $\mathfrak{g}\to\mathfrak{h}\,.$ On the other hand, if $H$ has contractible target fibers the morphism will differentiate to a generalized morphism: in this case the fibers of the map $\pi:P\to G^0$ are contractible, and therefore $\pi^!\mathfrak{g}$ is Morita equivalent to $\mathfrak{g}\,.$ However, we also have that $\pi^!(\mathfrak{g}) \cong\mathfrak{g}\ltimes P\rtimes \mathfrak{h}$ (ie.
they are isomorphic, since $\pi^!G\cong G\ltimes P\rtimes H$), and we have a natural morphism $\mathfrak{g}\ltimes P\rtimes \mathfrak{h}\to \mathfrak{h}\,.$ Therefore, we get the following generalized morphism of Lie algebroids: \begin{equation} \begin{tikzcd} & \pi^!\mathfrak{g} \arrow[ld,"\text{Morita}"'] \arrow[r, "\text{Iso}", no head, equal] & \mathfrak{g}\ltimes P\rtimes\mathfrak{h} \arrow[rd] & \\ \mathfrak{g} & & & \mathfrak{h} \end{tikzcd} \end{equation} More generally, if we work with Lie algebroids up to $1$-equivalence and if we work with source simply connected groupoids only, all generalized morphisms will differentiate to generalized morphisms of algebroids. In this case, property 3 in~\ref{comdi} will hold only up to degree $1\,.$ Even more generally, this is true as long as the codomain is source simply connected. \vspace{3mm}\\Now let's delve deeper into the theory to see what is going on. What is happening is more transparent if we think from the perspective of $\Pi_{\infty}(\mathfrak{g})\,,$ rather than from the perspective of $\mathfrak{g}$ (we will now only refer to groupoids which are source connected). From this perspective, we have a generalized morphism $G\to H$ and we can always ``lift" this to a generalized morphism $\Pi_{\infty}(\mathfrak{g})\to H\,,$ however these morphisms don't necessarily lift to morphisms $\Pi_{\infty}(\mathfrak{g})\to \Pi_{\infty}(\mathfrak{h})\,,$ and why should they? Lifting problems generally have topological obstructions, and this is no different. 
Diagrammatically, the problem of differentiation is given by the following: \begin{equation} \begin{tikzcd} \Pi_{\infty}(\mathfrak{g}) \arrow[d] \arrow[r, "?", dotted] & \Pi_{\infty}(\mathfrak{h}) \arrow[d] \\ G \arrow[r] & H \end{tikzcd} \end{equation} Now given a diagram of the following form \begin{equation} \begin{tikzcd} & Z \arrow[d, "\pi"] \\ X \arrow[r, "f"'] \arrow[ru, "\tilde{f}", dotted] & Y \end{tikzcd} \end{equation} there are obstructions to a lift $\tilde{f}$ existing. One such obstruction is the following: given any class $\alpha \in H^*(Y,\mathbb{Z})$ such that $f^*\alpha$ is nontrivial, it must also be the case that $\pi^*\alpha$ is nontrivial. \vspace{3mm}\\Let's consider the context of a generalized morphism $G\to \mathbb{C}^*\,,$ which can be described by a degree one cohomology class in $H^1(G,\mathbb{C}^*)\,.$ If this cohomology class is trivial when pulled back to the base $G^0\,,$ then this generalized morphism is equivalent to a homomorphism $G\to\mathbb{C}^*\,,$ which differentiates to a morphism $\mathfrak{g}\to\mathbb{C}\,.$ This is always the case if $H^2(G^0,\mathbb{Z})=0\,.$ Recall that $H^*(\mathfrak{g}\,,\mathbb{Z})=H^*(G^0,\mathbb{Z})\,,$ therefore this will always be the case if $H^2(\mathfrak{g}\,,\mathbb{Z})=0\,;$ in the spirit of the smooth homotopy hypothesis, this should be equivalent to $H^2(\Pi_{\infty}(\mathfrak{g}),\mathbb{Z})=0\,.$ Indeed, this is consistent with the fact that the classifying space of $\Pi_{\infty}(\mathfrak{g})$ should be $G^0\,.$ \vspace{3mm}\\Let us emphasize this point: there is an obstruction to lifting a generalized morphism $\Pi_{\infty}(\mathfrak{g})\to \mathbb{C}^*$ to a generalized morphism $\Pi_{\infty}(\mathfrak{g})\to\mathbb{C}\,,$ and this obstruction vanishes if $H^2(\Pi_{\infty}(\mathfrak{g}),\mathbb{Z})=0\,.$\footnote{Since $\mathbb{C}$ is contractible, $\Pi_{\infty}(\mathbb{C})$ should be equivalent to $\Pi_1(\mathbb{C})=\mathbb{C}\rightrightarrows *\,.$} \vspace{3mm}\\Why, then, do
strict morphisms $G\to H$ always differentiate? Again, under the assumption that $H^*(\Pi_{\infty}(\mathfrak{g}),\mathbb{Z})\cong H^*(G^0,\mathbb{Z})\,,$ pulling back a cohomology class \begin{equation} H^*(H,\mathbb{Z})\to H^*(\Pi_{\infty}(\mathfrak{g}),\mathbb{Z}) \end{equation} is equivalent to pulling back the induced class in $H^*(H^0,\mathbb{Z})$ to $H^*(G^0,\mathbb{Z})\,.$ The question is, if such a class is nontrivial after being pulled back to $G^0\,,$ can it be trivial when pulled back to $\Pi_{\infty}(\mathfrak{h})\,?$ This is impossible, since the latter condition would imply that the cohomology class, when pulled back to $H^0\,,$ is trivial, which means it must also be trivial when pulled back to $G^0\,.$ Therefore, the obstruction vanishes. \vspace{3mm}\\Let's emphasize another important point: classes in $H^1(\mathfrak{g}\,,\mathbb{C}^*)$ are essentially generalized morphisms $\mathfrak{g}\to\mathbb{C}^*\,,$ ie.\ we are computing generalized morphisms $\Pi_{\infty}(\mathfrak{g})\to\mathbb{C}^*$ (or generalized morphisms $\Pi_{1}(\mathfrak{g})\to\mathbb{C}^*\,,$ if working up to $1$-equivalence). This leads us into the next section. \section{Exotic Morphisms Between Algebroids and Groupoids, Differentiability}\label{generalized} Consider a function $f:X\to Y$ between smooth manifolds, with no assumption about differentiability of $f\,.$ This is equivalent to specifying a homomorphism $\text{Pair}(X)\to\text{Pair}(Y)\,.$ Asking if $f$ is smooth is essentially the same as asking if $f$ lifts to a smooth morphism $\Pi_{\infty}(TX)\to \Pi_{\infty}(TY)\,,$ via the map $\gamma(t)\mapsto f(\gamma(t))\,.$ Inspired by this and the discussion in the previous section, we make the following definition: \begin{definition} Let $P:G\to H$ be a generalized morphism of Lie groupoids. We say that $P$ is differentiable if it differentiates to a generalized morphism in the category of Lie algebroids, up to Morita equivalence.
We say that $P$ is $n$-differentiable if it differentiates to a generalized morphism, up to $n$-equivalence (we may call a generalized morphism between Lie algebroids, defined up to $n$-equivalence, an $n$-generalized morphism). \end{definition} \begin{exmp} Let $f:G\to H$ be a homomorphism. Then $f$ is differentiable. \end{exmp} In particular, non-differentiability of generalized morphisms is related to the fact that Morita maps are not invertible by (smooth) morphisms in general. For if a morphism were invertible (up to Morita equivalence), it would be equivalent to a homomorphism, which is differentiable. \begin{exmp} Let $P:G\to H$ be a generalized morphism between source $n$-connected Lie groupoids. Then $P$ is $n$-differentiable. \end{exmp} Of course, a $1$-generalized morphism between integrable Lie algebroids will integrate to a generalized morphism between their source simply connected integrations, generalizing Lie's second theorem. \vspace{3mm}\\Now, as mentioned in the previous section, we know that classes in $H^1(\mathfrak{g},\mathcal{O}^*)$ describe generalized morphisms $\Pi_1(\mathfrak{g})\to \mathbb{C}^*\,.$ One way to generalize this idea of mapping algebroids into groupoids is to use the embeddings of Lie groupoids and Lie algebroids into LA-groupoids and use generalized morphisms there — these provide a ``wormhole" between algebroids and groupoids. 
We make the following definition: \begin{definition} Consider a Lie algebroid $\mathfrak{g}\to X$ and a Lie groupoid $H\rightrightarrows H^0\,.$ Let $P\to X$ be a principal bundle for $H\,,$ and suppose that $P$ is equipped with an action of $\mathfrak{g}\,.$ We call $P$ a generalized morphism $\mathfrak{g}\to H$ if the following diagram is an LA-groupoid: \begin{equation} \begin{tikzcd} \mathfrak{g}\ltimes P\rtimes H \arrow[r] \arrow[d, shift right] \arrow[d, shift left] & P\rtimes H \arrow[d, shift right] \arrow[d, shift left] \\ \mathfrak{g}\ltimes P \arrow[r] & P \end{tikzcd} \end{equation} Essentially, we are saying that $P$ is a generalized morphism if it is a principal $H$-bundle with a flat $\mathfrak{g}$-connection. \vspace{3mm}\\Similarly, we can define generalized morphisms $G\to \mathfrak{h}$ as a space $P$ with actions of $G$ and $\mathfrak{h}\,,$ for which the following is an LA-groupoid: \begin{equation} \begin{tikzcd} G\ltimes P\rtimes \mathfrak{h} \arrow[r, shift right] \arrow[d] \arrow[r, shift left] & P\rtimes \mathfrak{h} \arrow[d] \\ G\ltimes P \arrow[r, shift right] \arrow[r, shift left] & P \end{tikzcd} \end{equation} In addition, we require that $P\rtimes\mathfrak{h}$ is Morita equivalent to $0_{G^0}\to G^0$ via the canonical morphism (which implies that the Lie algebroid in the left column is Morita equivalent to $0_{G^{(1)}}\to G^{(1)}$). This means that the fibers of $\rho$ should be contractible and that $P\rtimes\mathfrak{h}$ should be the Lie algebroid associated to the foliation $P\to G^0$ (similarly, we can define $n$-generalized morphisms). We may call these \textit{exotic} morphisms. 
\end{definition} \begin{remark} In Part 2 of this thesis, there was a lot of emphasis placed on the double groupoid \begin{equation} \begin{tikzcd} G\ltimes P\rtimes H \arrow[r, shift right] \arrow[d, shift right] \arrow[r, shift left] \arrow[d, shift left] & P\rtimes H \arrow[d, shift right] \arrow[d, shift left] \\ G\ltimes P \arrow[r, shift right] \arrow[r, shift left] & P \end{tikzcd} \end{equation} Most important was the LA-groupoid obtained by differentiating in the vertical direction; up until now we had never differentiated in the horizontal direction. We've now come full circle. \end{remark} \section{The Category of LA-Groupoids with n-Equivalences — Another Interpretation of the van Est Map}\label{finale} In Part 2 of this thesis we discussed an interpretation of the van Est map as a map from the cohomology of a groupoid to the foliated cohomology of the space of objects. Here we will discuss another (but not unrelated) interpretation of the van Est map. Previously in this chapter we discussed how, via the smooth homotopy hypothesis, we should be able to view the van Est map as a pullback map \begin{equation} H^*(G,M)\xrightarrow[]{VE}H^*(\Pi_{\infty}(\mathfrak{g}),M)\,. \end{equation} In this section, we will make this precise, at the level of $\mathfrak{g}\,.$ \vspace{3mm}\\If one imagines the category consisting of the maximal generalization of Lie algebroids and Lie groupoids, it should contain LA-groupoids. This implies that there is a natural notion of morphisms and equivalences between Lie algebroids and Lie groupoids. Indeed, both Lie algebroids and Lie groupoids have a notion of weak equivalences, and these two notions get mixed in this category of LA-groupoids, which results in some unexpected yet sensible features. In fact, since for each $n\ge 0$ we have a notion of $n$-equivalence for Lie algebroids, for each $n\ge 0$ we get a notion of $n$-equivalence for LA-groupoids (we implicitly used this in the previous section).
In particular, the category of LA-groupoids we define unifies the category of differentiable stacks, the category of Lie algebroids and the homotopy category; we nickname it \textbf{Worm}.\footnote{Due to its ``wormhole"-like property of providing ``paths" (ie.\ morphisms) between different ``universes" (ie.\ categories).} In this category, the differentiation and integration functors between groupoids and algebroids are represented by generalized morphisms. \vspace{3mm}\\Here we will show that there is a canonical \textit{exotic} morphism $\mathcal{I}:\mathfrak{g}\to G\,.$ One may interpret this morphism as the integration functor. In fact, we will show that if $G$ is source $n$-connected then this morphism is an $n$-equivalence, which is a Morita equivalence if $n=\infty\,.$ There are details which need to be filled in; however, the idea should (hopefully) be clear. \begin{definition} A morphism between LA-groupoids is a Morita morphism if the morphism restricted to the top Lie algebroids is a Morita morphism of Lie algebroids, or if the morphism restricted to the top Lie groupoids is a Morita equivalence of Lie groupoids.\footnote{There are two Lie algebroids and two Lie groupoids appearing in an LA-groupoid — by ``top" we are referring to the Lie algebroid/groupoid in the left column and upper row. Equivalently, the Morita morphism induces an equivalence of both Lie algebroids (or Lie groupoids).} A morphism is an $n$-equivalence if the morphism restricted to the top Lie algebroids is an $n$-equivalence, or if the morphism restricted to the top Lie groupoids is a Morita morphism.\footnote{In the latter case this means it is also a Morita morphism of LA-groupoids.} We can then take the subcategory generated by these two notions of $n$-equivalence (see~\Cref{generate1}) and invert $n$-equivalences to obtain more general equivalences and morphisms. We refer to this as the category of LA-groupoids (with $n$-equivalences).
For $n=\infty$ an $n$-equivalence is a Morita equivalence. \end{definition} \begin{lemma} Given a Lie groupoid $G\,,$ there is a canonical generalized morphism $\mathcal{I}:\mathfrak{g}\to G$ in the category of LA-groupoids. If $G$ is source $n$-connected, there is a canonical $n$-generalized morphism $\mathcal{D}:G\to\mathfrak{g}\,.$ \end{lemma} \begin{proof} We have the following generalized morphism $\mathcal{I}:\mathfrak{g}\to G:$ \begin{equation}\label{mid} \begin{tikzcd} \mathfrak{g} \arrow[r] \arrow[d, shift right] \arrow[d, shift left] & G^0 \arrow[d, shift right] \arrow[d, shift left] & \mathfrak{g}\ltimes G\rtimes G \arrow[r] \arrow[d, shift right] \arrow[d, shift left] \arrow[l, " \;\text{ Morita}"', shift left=7] & G\rtimes G \arrow[d, shift right] \arrow[d, shift left] & 0_G \arrow[r] \arrow[d, shift right] \arrow[d, shift left] & G \arrow[d, shift right] \arrow[d, shift left] \\ \mathfrak{g} \arrow[r] & G^0 & \mathfrak{g}\ltimes G \arrow[r] & G \arrow[r, shift left=8] & 0_{G^0} \arrow[r] & G^0 \end{tikzcd} \end{equation} Let us remind the reader that the notation $0_X\to X$ refers to the zero Lie algebroid over $X\,,$ and note that we are using the natural embeddings of Lie algebroids and Lie groupoids into LA-groupoids. 
Now in the case that $G$ is source $n$-connected, the map on the right is an $n$-equivalence, due to the fact that $\mathfrak{g}\ltimes G$ is $n$-equivalent to $0_{G^0}\to G^0\,,$ and $\mathfrak{g}\ltimes G\rtimes G$ is $n$-equivalent to $0_{G}\to G\,.$ Therefore, working up to $n$-equivalence, we get a generalized inverse morphism $\mathcal{D}:G\to\mathfrak{g}\,.$ \end{proof} Let's point out that the LA-groupoid in the middle of~\ref{mid} previously appeared in this thesis as an important object in Part 2 (see~\ref{exx}, where the map $H\to G$ is taken to be $G^0\to G$, and where the top right and bottom left corners have been switched), in the form \begin{equation} \begin{tikzcd} T_sG\rtimes G \arrow[d, shift right] \arrow[d, shift left] \arrow[r] & G\rtimes G \arrow[d, shift right] \arrow[d, shift left] \\ T_sG \arrow[r] & G \end{tikzcd} \end{equation} \begin{corollary}\label{CAG} The van Est map is given by $\mathcal{I}^*:H^{\bullet}(G,M)\to H^{\bullet}(\mathfrak{g},M)\,.$ Integration (up to degree $n$, where $G$ is source $n$-connected) is given by $\mathcal{D}^*:H^{\bullet}(\mathfrak{g},M)\to H^{\bullet}(G,M)\,.$ \end{corollary} We will now summarize the preceding observations: \begin{theorem}\label{LALG} Let $G\rightrightarrows G^0$ be a source $n$-connected Lie groupoid. Then $\mathfrak{g}\to G^0$ is $n$-equivalent to $G\rightrightarrows G^0$ (if $n=\infty$ they are Morita equivalent). \end{theorem} This theorem is consistent with the fact that, as far as the author knows, there is no ``significant" way of distinguishing a Lie groupoid from its Lie algebroid (up to ``degree $n$") if the source fibers are $n$-connected (eg.\ they have the same cohomologies up to degree $n$ and we expect them to have equivalent $n$-term representations up to homotopy).
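To illustrate~\Cref{LALG} in the simplest nontrivial case, here is a sketch using only standard facts about the pair groupoid:

```latex
% The theorem in the simplest nontrivial case: the pair groupoid of an
% n-connected manifold.
\begin{exmp}
Let $X$ be an $n$-connected manifold and let
$\mathrm{Pair}(X)\rightrightarrows X$ be its pair groupoid, whose source
fibers are all diffeomorphic to $X$ and whose Lie algebroid is $TX\,.$
\Cref{LALG} then asserts that $TX$ is $n$-equivalent to
$\mathrm{Pair}(X)\,,$ and since $\mathrm{Pair}(X)$ is Morita equivalent to
a point, $TX$ is $n$-equivalent to a point. In particular, if $X$ is
contractible then $TX$ is Morita equivalent to a point.
\end{exmp}
```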
In fact, invariance of LA-groupoid cohomology under $n$-equivalence (which we expect to be true up to degree $n$ and which we leave to future work) implies the van Est isomorphism theorem and the homotopy invariance of de Rham cohomology and singular cohomology, and of course the Morita invariance of Lie groupoid cohomology. \vspace{3mm}\\We emphasize that, under the assumption that the source fibers are $n$-connected, we expect that $G$ is equivalent to $\Pi_n(\mathfrak{g})$; via the smooth homotopy hypothesis, this is the higher groupoid interpretation of this theorem. \vspace{3mm}\\Now that we've defined $n$-equivalences of Lie algebroids, we may discuss ``$n$-Lie algebroids" of stacks (not to be confused with Lie $n$-algebroids): \begin{definition} Let $[H^0/H]$ be a differentiable stack. We define its $n$-Lie algebroid to be a Lie algebroid $\mathfrak{g}$ which is $n$-equivalent to $H\rightrightarrows H^0$ (if it exists). \end{definition} Essentially, the $n$-Lie algebroid of $H\rightrightarrows H^0$ is the Lie algebroid of a Morita equivalent Lie groupoid $G\rightrightarrows G^0\,,$ whose source fibers are $n$-connected. Of course, it is unique up to $n$-equivalence. Really, what one may want to do is define an algebroid version of differentiable stacks. Associated to a Lie algebroid $\mathfrak{g}$ one can assign the category of $n$-generalized morphisms from manifolds into $\mathfrak{g}$ (so there is probably a category of ``algebroid-stacks" for each $n\ge 0$). An object in this category is given by a manifold $M$ together with a surjective submersion $\pi:P\to M$ with $n$-connected fibers, together with an action of $\mathfrak{g}$ on $P\,,$ such that $P\rtimes \mathfrak{g}\cong \pi^!0_M$ (ie. the fiberwise tangent bundle). \begin{remark}\label{generate1} This category comes with two notions of weak equivalence, one in the horizontal direction and one in the vertical direction.
We should take the smallest subcategory containing all of these weak equivalences to get a category with weak equivalences, or better, a homotopical category. Alternatively, there may be a nicer definition. See~\Cref{generate} for a discussion about essentially the same point. \end{remark} \section{Properties of the Category of LA-Groupoids} Now that we've seen that there are \textit{exotic} morphisms between Lie algebroids and Lie groupoids (by which we mean generalized morphisms between algebroids and groupoids in \textbf{Worm}, ie. the category of LA-groupoids), we will list some remarkable properties of this category: \begin{enumerate} \item Manifolds $X\,,Y$\footnote{There are two natural embeddings of manifolds into LA-groupoids, one is via the $0$-Lie algebroid, one is via the groupoid containing only identity morphisms. However, $\mathcal{I}$ provides an isomorphism between the two, so it doesn't matter which one we use.} are Morita equivalent in this category if and only if they are diffeomorphic. \item More generally, Lie groupoids $G\,,H$ are Morita equivalent in this category if and only if they are Morita equivalent in the category of Lie groupoids. \item Two tangent bundles $TX\,,TY$ are Morita equivalent in this category if and only if $X\,,Y$ are homotopy equivalent. \item There is a canonical generalized morphism $\mathcal{I}:\mathfrak{g}\to G\,,$ which represents the integration functor. If $G\rightrightarrows G^0$ is source $n$-connected then this morphism is an $n$-equivalence, and dual to this there is a canonical $n$-equivalence $\mathcal{D}:G\to \mathfrak{g}\,,$ which represents the differentiation functor. If $n=\infty$ these generalized morphisms are Morita equivalences. 
\item With regard to the previous two points, the van Est map is given by the pullback $\mathcal{I}^*:H^*(G,M)\to H^*(\mathfrak{g},M)\,.$ If $G$ is source $n$-connected then integration is given by $\mathcal{D}^*:H^{\bullet}(\mathfrak{g},M)\to H^{\bullet}(G,M)\,,$ for $\bullet\le n\,.$ \item A Lie algebroid $\mathfrak{g}$ is integrable if and only if it is $1$-equivalent to some Lie groupoid $G\,.$ \item This category induces a notion of homotopy equivalence on the category of Lie groupoids: a generalized morphism $P:G\to H$ is a homotopy equivalence if the induced generalized morphism in the category of LA-groupoids, $TP:TG\to TH\,,$ is a Morita equivalence.\footnote{There is a notion of homotopy equivalence described in~\cite{noohi1}, page 58, but it's not clear to the author that these notions are the same in the smooth category.} In particular, a Morita equivalence of Lie groupoids induces a homotopy equivalence. In addition, we get a natural notion of $n$-equivalence of Lie groupoids (in the homotopy sense). \item A finite dimensional classifying space of a Lie groupoid $G$ is just a manifold $BG$ which is homotopy equivalent to $G\,.$\footnote{This is in line with Grothendieck's homotopy hypothesis, where geometric realization gives an equivalence of categories between simplicial sets and topological spaces. See chapter 2, section 3 of~\cite{Quillen}.
Note that, if $G$ is discrete, then $TG=G\,.$ Once again, we expect that $\Pi_{\infty}(TX)\,,$ as an infinite dimensional higher Lie groupoid, is Morita equivalent to $\Pi_{\infty}(TX)\,,$ as a discrete higher groupoid.} \item If $EG\to BG$ is finite dimensional, then the Atiyah algebroid $\text{at}(EG)$ is Morita equivalent to $G\,.$ In particular, if $G$ is discrete then $\text{at}(EG)=T(BG)\,,$ therefore $G$ is Morita equivalent to $T(BG)\,.$ \item Due to points 2, 3 and 4, we get the following result: suppose that $\mathcal{P}$ assigns to each LA-groupoid some property (eg.\ its cohomology) that is invariant under $n$-equivalence. Then if $X$ is homotopy equivalent to $Y\,;$ if $H\rightrightarrows H^0$ is Morita equivalent to $K\rightrightarrows K^0\,;$ if $G\rightrightarrows G^0$ is source $n$-connected, we get that $\mathcal{P}(TX)\cong \mathcal{P}(TY)\,;\mathcal{P}(H)\cong \mathcal{P}(K)\,;\mathcal{P}(G)\cong \mathcal{P}(\mathfrak{g})\,,$ respectively. \end{enumerate} \begin{itemize} \item About point 7: in particular, a homomorphism $f:G\to H$ is a homotopy equivalence if the induced map $f:G^{(1)}\to H^{(1)}$ is a homotopy equivalence of manifolds, eg. if $K\xhookrightarrow{}G$ is the inclusion of a maximal compact subgroup into a Lie group, then it is a homotopy equivalence. This is consistent with the fact that $BK$ is homotopy equivalent to $BG\,.$ In fact, these kinds of homomorphisms, together with Morita equivalences, generate homotopy equivalences. \vspace{3mm}\\In addition, a homotopy equivalence of discrete groupoids is the same as a Morita equivalence. For another example, if the source fibers of $G\rightrightarrows G^0$ are $n$-connected, then the morphism $G^0\to G$ is an $n$-equivalence. 
\item About point 8: if $EG\to BG$ is finite dimensional then $EG$ defines a generalized morphism $BG\to G\,,$ given by the following diagram \begin{equation} \begin{tikzcd} & EG\rtimes G \arrow[ld, "\text{Morita}"'] \arrow[rd] & \\ BG & & G \end{tikzcd} \end{equation} By the preceding bullet point, the morphism on the right is a homotopy equivalence, due to the fact that $(EG\rtimes G)^{(1)}\to G^{(1)}$ is a surjective submersion with contractible fibers, and such a map is automatically a homotopy equivalence (see~\cite{gael}, Corollary 13). \item About point 9: in particular, $\mathbb{Z}\rightrightarrows *$ is Morita equivalent to $TS^1\,,$ while $\mathbb{Z}\rightrightarrows *$ is homotopy equivalent to $S^1$ (here $S^1$ is just the manifold). \item About point 10: here a \textit{property} of LA-groupoids can often be formalized as a functor $\mathcal{P}$ with source the category of LA-groupoids. Examples of properties of LA-groupoids that we expect to be invariant under $n$-equivalence are cohomology up to degree $n$ and representations up to homotopy of length at most $n$ (once the latter is defined). In particular, any property of \textbf{Lie groupoids} which factors through a Morita invariant property of \textbf{LA-groupoids}, via taking their tangent LA-groupoids (eg. de Rham cohomology, cohomology of constant sheaves), will be invariant under homotopy equivalence. \end{itemize} \vspace{3mm}Let's connect the properties of this category to Lie's second and third theorems, which tell us that there is an equivalence of categories between source simply connected Lie groupoids and integrable Lie algebroids, given by the functor $\mathfrak{g}\mapsto\Pi_1(\mathfrak{g})\,.$ Of course, given this equivalence, there is still a natural distinction between $\mathfrak{g}$ and $\Pi_1(\mathfrak{g})\,,$ which arises from the existence of morphisms between source simply connected groupoids and non-source simply connected groupoids. 
However, after localizing at $1$-equivalences there is no such distinction in the category of LA-groupoids, since $\mathcal{I}$ provides an isomorphism $\mathfrak{g}\to \Pi_1(\mathfrak{g})\,.$ This is the essence of the smooth homotopy hypothesis. We can formulate the following generalization (which we have proved a special case of):\footnote{Again, we are really conjecturing that this is a meaningful statement that expresses truth.} \begin{conjecture}\label{conjf} In the category of $\text{L}_\infty$-algebroids over Lie $\infty$-groupoids, $\mathfrak{g}$ is Morita equivalent to $\Pi_{\infty}(\mathfrak{g})\,.$ More generally, if $G$ integrates $\mathfrak{g}$ and if $G$ is source $n$-connected,\footnote{In the appropriate sense of higher groupoids.} then $\mathfrak{g}$ is $n$-equivalent to $G\,.$ \end{conjecture} In fact, there are different levels to this, since one can now ask about equivalences between LA-groupoids and double groupoids. In order to do this, one can embed both of these into the category of ``LA-groupoids over double Lie groupoids". One can then consider the category of \textit{$\text{L}_\infty$-algebroids over Lie $\infty$-groupoids} \textbf{over} \textit{double Lie $\infty$-groupoids}. This would be ``level 2", and one can continue until ``level $\infty$". \begin{remark}Once again, if some notion defined with respect to Lie groupoids/algebroids can be interpreted exactly as a functor in the aforementioned higher category, then this notion will automatically be invariant under equivalence (eg. we expect this to be true for cohomology and representations up to homotopy). \end{remark} \begin{remark} This notion of smooth homotopy equivalence naturally generalizes to topological groupoids.\ However, in order for many of the same properties to hold one should put some restrictions on the source and target maps. 
A nice and fairly large class of maps, which includes both fibrations and topological submersions, is that of homotopic submersions (see~\cite{gael}), or microfibrations (see~\cite{raptis}). Noohi discusses some appropriate classes of maps to use in~\cite{noohi1}. \end{remark} \chapter{Conjectures About van Est (for Double Groupoids) and Cohomology} There are many natural ways to generalize what has been done in this thesis. One can go to higher groupoids ($L_{\infty}$-algebroids), $n$-fold groupoids, or a combination of the two. For example, one can prove a van Est theorem from double groupoids to LA-groupoids, and one from double groupoids to double Lie algebroids. \begin{conjecture} There is a van Est map from the cohomology of a double Lie groupoid to the cohomology of its LA-groupoid. If the differentiation takes place in the vertical direction and if the vertical source fiber (a fiber in the (2,1)-category sense of this thesis) is $n$-connected, then the van Est map is an isomorphism up to degree $n$ and is injective in degree $n+1\,.$ \end{conjecture} Let us remark once again that this result would imply the van Est isomorphism theorem given in Part 2 of this thesis. In addition, one can also generalize the results of this thesis to the Bott-Shulman-Stasheff complex (see~\cite{BS}). \vspace{3mm}\\ As a final remark of this thesis, we wish to make an observation about cohomology which we conjecture generalizes, and is related to the isomorphism of Alexander-Spanier cohomology with de Rham cohomology (see~\cite{Meinrenken}). Following this discussion we will state two conjectures. \vspace{3mm}\\We consider two examples.
\begin{enumerate} \item First, let $G\rightrightarrows G^0$ be a Lie groupoid and consider the map $\iota:G^0\to G\,.$ We can consider cohomology classes on $G$ with coefficients in $\mathcal{O}\,.$ We can form the nerves and we get an induced map $\iota^{\bullet}:B^{\bullet}G^0\to B^{\bullet}G\,.$ We can then use the inverse image functor to obtain a sheaf on $B^{\bullet}G^0\,.$ The cohomology of this sheaf, $H^*(B^{\bullet}G^0,\iota^{\bullet-1}\mathcal{O})\,,$\footnote{To be clear, the $-1$ here refers to the inverse image functor, not a shift in degree.} gives the cohomology of the local groupoid of $G\,,$ which is known to be isomorphic to $H^*(\mathfrak{g},\mathcal{O})$ (see~\cite{Meinrenken}). Now the map $\iota:G^0\to G$ is equivalent to the map $\pi_2:G\rtimes G\to G\,,$ and $H^*(B^{\bullet}(G\rtimes G),\pi_2^{\bullet-1}\mathcal{O})$ is also isomorphic to $H^*(\mathfrak{g},\mathcal{O})$ (by arguments shown in the proof of the main theorem in Part 1, for example), therefore \begin{equation} H^*(B^{\bullet}(G\rtimes G),\pi_2^{\bullet-1}\mathcal{O})\cong H^*(B^{\bullet}G^0,\iota^{\bullet-1}\mathcal{O})\,. \end{equation} Now let's emphasize the main point: these two cohomologies agree, and the morphisms $G^0\to G\,,$ $G\rtimes G\to G$ are equivalent (note that the cohomology group on the right side is really the cohomology of a cochain complex of sheaves on $G^0$). \item \vspace{3mm}Let's take a look at another example.
Let $X$ be a manifold, and consider the sheaf $\mathcal{O}$ over a point $*\,.$ Consider the map $X\to *\,.$ We can compute the inverse image sheaf, and we get the sheaf of locally constant $\mathbb{R}$-valued functions on $X\,.$ The cohomology of this sheaf is $H^*(X,\mathbb{R})\,.$\footnote{We could form the nerves of the spaces and map, still thinking of them as groupoids; however, since both groupoids are just manifolds we would obtain the same result.} Now the map $X\to *$ is equivalent to the map $\iota:X\to \text{Pair}(X)\,.$ We can form the nerves to get $\iota^{\bullet}:B^{\bullet}X\to B^{\bullet}\text{Pair}(X)\,,$ and we can take the cohomology of the inverse image sheaf, and what we get is the local cohomology of $\text{Pair}(X)$ (essentially, Alexander-Spanier cohomology), which is isomorphic to the de Rham cohomology of $X\,,$ ie.\ \begin{equation} H^*(X,\mathbb{R})\cong H^*(B^{\bullet}X\,,\iota^{\bullet-1}\mathcal{O})\,. \end{equation} Again, the point being that these two cohomologies are isomorphic and the morphisms $X\to *\,,X\to\text{Pair}(X)$ are equivalent (again, the cohomology group on the right side is really the cohomology of a cochain complex of sheaves on $X$). \end{enumerate} Both of these examples have the same properties. We now state the following conjecture: \vspace{1mm}\begin{conjecture} Let $f_1:H_1\to G_1\,, f_2:H_2\to G_2$ be morphisms of Lie groupoids with a natural isomorphism $f_1\Rightarrow f_2\,.$ Let $M_1$ be a $G_1$-module and let $M_2$ be the corresponding $G_2$-module. Then the natural isomorphism induces an isomorphism\footnote{Again, to be clear, the $-1$ in~\ref{someq} refers to the inverse image functor, not a shift in degree.} \begin{equation}\label{someq} H^*(B^\bullet H_1, f_1^{\bullet-1}\mathcal{O}(M_1))\cong H^*(B^\bullet H_2, f_2^{\bullet-1}\mathcal{O}(M_2))\,.
\end{equation} \end{conjecture} \vspace{4mm}Note that, in the simple case where $M_1=G_1^0\times A$ for some abelian Lie group $A\,,$ with the trivial $G_1$-action, we have that $M_2=G_2^0\times A\,,$ with the trivial $G_2$-action. \vspace{3mm}\\In particular, together with the work done in Part 2, this would imply the following result (which is essentially a generalization of~\Cref{important}, which is the key result used in proving the van Est isomorphism theorem): \begin{conjecture} Let $f:H\to G$ be a map of Lie groupoids such that $H^0\times_{G^0}G^{(1)}\to G^0$ is a surjective submersion, and let $M$ be a $G$-module. Suppose further that the (groupoid) fibers of $f$ are $n$-connected. Then we have an isomorphism up to degree $n\,,$ and an injection in degree $n+1\,,$ given by the following: \begin{equation} H^*(B^{\bullet}G,\mathcal{O}(M))\to H^*(B^{\bullet}H,f^{\bullet-1}\mathcal{O}(M))\,. \end{equation} \end{conjecture} \newpage \begin{quote} \centering ``Tell me,'' the great twentieth-century philosopher Ludwig Wittgenstein once asked a friend, ``why do people always say it was natural for mankind to assume that the Sun went around the Earth rather than that the Earth was rotating?'' His friend replied, ``Well, obviously because it just looks as though the Sun is going around the Earth.'' Wittgenstein responded, ``Well, what would it have looked like if it had looked as though the Earth was rotating?'' \vspace{2mm}\\― A conversation between Ludwig Wittgenstein and G. E. M. Anscombe. \end{quote} \appendix\chapter{Appendix} \section{Derived Functor Properties}\label{derived functor} In this section we discuss some vanishing results for the derived functors of locally fibered maps. These results are particularly useful when using the Leray spectral sequence.
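Since the arguments in this appendix repeatedly invoke the Leray spectral sequence, we recall its standard form for convenience (nothing here is specific to this thesis):

```latex
% The Leray spectral sequence of a continuous map, in the form used below.
Given a continuous map $f:X\to Y$ and a sheaf $\mathcal{S}$ on $X\,,$ the
Leray spectral sequence has second page
\begin{equation*}
E_2^{p,q}=H^p(Y,R^qf_*\mathcal{S})\Rightarrow H^{p+q}(X,\mathcal{S})\,.
\end{equation*}
In particular, if $\mathcal{S}=f^{-1}\mathcal{T}$ for a sheaf $\mathcal{T}$
on $Y\,,$ if the adjunction map $\mathcal{T}\to R^0f_*(f^{-1}\mathcal{T})$
is an isomorphism, and if $R^qf_*(f^{-1}\mathcal{T})=0$ for
$1\le q\le n\,,$ then the edge maps give isomorphisms
$H^k(Y,\mathcal{T})\cong H^k(X,f^{-1}\mathcal{T})$ for $k\le n$ and an
injection for $k=n+1\,.$
```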
\begin{definition} A map $f:X\to Y$ between topological spaces is called locally fibered if for all points $x\in X$ there exist open sets $U\ni x$ and $V\ni f(x)\,,$ a topological space $F$ and a homeomorphism $\phi:U\to F{\mathop{\times}} V$ such that the following diagram commutes: \[ \begin{tikzcd} U \arrow{r}{\phi}\arrow{dr}{f} & F{\mathop{\times}} V\arrow{d}{p_2} \\ & V \end{tikzcd} \] $\blacksquare$\end{definition} \theoremstyle{definition}\begin{exmp} If $X\,,Y$ are smooth manifolds and $f:X\to Y$ is a surjective submersion, then $f$ is locally fibered. \end{exmp} \begin{proposition}[The Canonical Resolution]\label{canonical resolution} Let $M\to X$ be a family of abelian groups. Then there is a canonical acyclic resolution of $\mathcal{O}(M)$ that differs from the Godement resolution. It is given by the following: for a sheaf $\mathcal{S}\,,$ let $\mathbb{G}^0(\mathcal{S})$ be the first sheaf in the Godement resolution of $\mathcal{S}\,,$ ie. the sheaf of germs of $\mathcal{S}\,.$ Let $\mathcal{G}^0(M)$ be the sheaf of all sections of $M$ (including discontinuous ones). Let \begin{align*} \mathcal{G}^{n+1}(M)=\mathbb{G}^0(\textrm{coker}[\mathcal{G}^{n-1}(M)\to \mathcal{G}^n(M)]) \end{align*} for $n\ge 0\,,$ where $\mathcal{G}^{-1}(M):=\mathcal{O}(M)\,.$ We then have the following acyclic resolution of $\mathcal{O}(M):$ \begin{align*} 0\to\mathcal{O}(M)\to \mathcal{G}^{0}(M)\to \mathcal{G}^1(M)\to\cdots\,. \end{align*} \end{proposition} \begin{definition}[see \cite{Bernstein}]\label{def n acyclic} A continuous map $f:X\to Y$ is called $n$-acyclic if it satisfies the following conditions: \begin{enumerate} \item For any sheaf $\mathcal{S}$ on $Y$ the adjunction morphism $\mathcal{S}\to R^0f_*(f^{-1}\mathcal{S})$ is an isomorphism and $R^if_*(f^{-1}\mathcal{S})=0$ for all $i=1\,,\ldots\,,n\,.$ \item For any base change $\tilde{Y}\to Y$ the induced map $f:X{\mathop{\times}}_Y \tilde{Y}\to\tilde{Y}$ satisfies property 1.
\end{enumerate} $\blacksquare$\end{definition} \begin{theorem}[see \cite{Bernstein}, criterion 1.9.4]\label{n acyclic} Let $f:X\to Y$ be a locally fibered map. Suppose that all fibers of $f$ are $n$-acyclic (ie. $n$-connected). Then $f$ is $n$-acyclic. \end{theorem} \begin{corollary}\label{restriction is zero} Let $f:X\to Y$ be a locally fibered map and suppose that all fibers of $f$ are $n$-acyclic. Let $M$ be a family of abelian groups. Then if \begin{align*} \alpha\in R^{n+1}f_*(f^{-1}\mathcal{O}(M))(Y) \end{align*} satisfies $\alpha\vert_{f^{-1}(y)}=0$ for all $y\in Y$ $($note that $\alpha\vert_{f^{-1}(y)}\in H^{n+1}(f^{-1}(y),f^{-1}M_y))\,,$ then $\alpha=0\,.$ \end{corollary} \begin{proof} By Proposition~\ref{canonical resolution} we have the following resolution of $\mathcal{O}(M)\,:$ \begin{align*} 0\to\mathcal{O}(M)\to \mathcal{G}^{0}(M)\to \mathcal{G}^1(M)\to\cdots\,. \end{align*} Since $f^{-1}$ is an exact functor, we obtain the following resolution of $f^{-1}\mathcal{O}(M):$ \begin{align}\label{resolution} 0\to f^{-1}\mathcal{O}(M)\to f^{-1}\mathcal{G}^{0}(M)\to f^{-1}\mathcal{G}^1(M)\to\cdots\,. \end{align} Hence, \begin{align*} R^\bullet f_*(f^{-1}\mathcal{O}(M))=R^\bullet f_*(f^{-1}\mathcal{G}^{0}(M)\to f^{-1}\mathcal{G}^1(M)\to\cdots)\,. \end{align*} One can show that $\alpha\vert_{f^{-1}(y)}=0$ for all $y\in Y\iff \alpha\mapsto 0$ under the map induced by $f^{-1}\mathcal{O}(M)\to f^{-1}\mathcal{G}^{0}(M)\,,$ hence we obtain the result by using Theorem~\ref{n acyclic}. \end{proof} \begin{theorem}\label{important} Let $f:X\to Y$ be a locally fibered map such that all its fibers are $n$-acyclic. Let $M\to Y$ be a family of abelian groups. Then the following is an exact sequence for $0\le k\le n+1:$ \begin{align*} & 0\to H^k(Y,\mathcal{O}(M))\overset{f^{-1}}{\to} H^k(X, f^{-1}\mathcal{O}(M)) \\& \to \prod_{y\in Y} H^k(f^{-1}(y),\mathcal{O}((f^*M)\vert_{f^{-1}(y)}))\,. 
\end{align*} In particular, $H^k(Y,\mathcal{O}(M))\overset{}{\to} H^k(X, f^{-1}\mathcal{O}(M))$ is an isomorphism for $k=0\,,\ldots\,,n$ and is injective for $k=n+1\,.$ \end{theorem} \begin{proof} This follows from the Leray spectral sequence (see Section 0) and Corollary~\ref{restriction is zero}. \end{proof} Similarly, we can generalize this result to simplicial manifolds: \begin{theorem}\label{spectral theorem} Let $f:X^\bullet\to Y^\bullet$ be a locally fibered morphism of simplicial topological spaces such that, in each degree, all of its fibers are $n$-acyclic. Let $M_\bullet\to Y^\bullet$ be a simplicial family of abelian groups. Then the following is an exact sequence for $0\le k\le n+1:$ \begin{align*}\label{Short exact} & 0\xrightarrow{} H^k(Y^\bullet,\mathcal{O}(M_\bullet))\xrightarrow{f^{-1}}H^k(X^\bullet, f^{-1}\mathcal{O}(M_\bullet)) \\ &\xrightarrow{} \prod_{y\in Y^0} H^k(f^{-1}(y),\mathcal{O}((f^*M_0)\vert_{f^{-1}(y)}))\,. \end{align*} In particular, $H^k(Y^\bullet,\mathcal{O}(M_\bullet))\overset{}{\to} H^k(X^\bullet, f^{-1}\mathcal{O}(M_\bullet))$ is an isomorphism for $k=0\,,\ldots\,,n$ and is injective for $k=n+1\,.$\footnote{See Section~\ref{derived functor} for more details.} \end{theorem} \subsection{Lie Groupoids}\label{BG functor} In this section we briefly review some important concepts in the theory of Lie groupoids. \begin{definition} A groupoid is a category $G\rightrightarrows G^0$ for which the objects $G^0$ and morphisms $G$ are sets and for which every morphism is invertible. A Lie groupoid is a groupoid $G\rightrightarrows G^0$ such that $G^0\,, G$ are smooth manifolds\footnote{We allow for the possibility that the manifolds are not Hausdorff, but all structure maps should be locally fibered.}, such that the source and target maps, denoted $s\,,t$ respectively, are submersions, and such that all structure maps are smooth, ie.
\begin{align*} & i:G^0\to G \\ & m:G\sideset{_s}{_{t}}{\mathop{\times}} G\to G \\& \text{inv}:G\to G \end{align*} are smooth (these maps are the identity, multiplication/composition and inversion, respectively). A morphism between Lie groupoids $G\to H$ is a smooth functor between them. $\blacksquare$\end{definition} \begin{definition}\label{moritamapjeff} Let $G\rightrightarrows G^0\,, K\rightrightarrows K^0$ be Lie groupoids. A Morita map $\phi:G\to K$ is a map such that \begin{enumerate} \item The composite map $G^0\sideset{_{\phi}}{_{s}}{\mathop{\times}}K^{(1)}\xrightarrow[]{p_2} K^{(1)}\xrightarrow[]{t} K^0$ is a surjective submersion. \item The following diagram is Cartesian \[ \begin{tikzcd} G \arrow{r}{(s,t)} \arrow{d}{\phi} & G^0{\mathop{\times}} G^0 \arrow{d}{(\phi\,,\phi)} \\ K \arrow{r}{(s,t)} & K^0{\mathop{\times}} K^0 \end{tikzcd} \] \end{enumerate} $\blacksquare$ \end{definition} \begin{definition}\label{moritaequivjeff} We say that $G\,,K$ are Morita equivalent Lie groupoids if there is a third Lie groupoid $H$ with Morita maps to both $G$ and $K\,.$ $\blacksquare$\end{definition} \begin{definition} There is a functor \[\mathbf{B}^\bullet:\text{groupoids}\to \text{simplicial spaces}\,,\,G\mapsto\mathbf{B}^\bullet G\,, \] where $\mathbf{B}^0 G=G^0\,,\,\mathbf{B}^1 G=G\,,$ and \[\mathbf{B}^n G=\underbrace{G\sideset{_t}{_{s}}{\mathop{\times}} G \sideset{_t}{_{s}}{\mathop{\times}} \cdots\sideset{_t}{_{s}}{\mathop{\times}} G}_{n \text{ times}}\,, \] the space of $n$-composable arrows. Here the face maps are the source and target maps for $n=1\,,$ and for $(g_0\,,\ldots\,,g_n)\in \mathbf{B}^{n+1}G\,,$ \begin{align*} & d_{n+1,0}(g_0\,,\ldots\,,g_n)=(g_1\,,\ldots\,,g_n)\,, \\& d_{n+1,i}(g_0\,,\ldots\,,g_n)=(g_0\,,\ldots\,,g_{i-1}g_i\,,\hat{g}_i\,,\ldots\,,g_n)\,,\,\;1\le i\le n \\& d_{n+1,n+1}(g_0\,,\ldots\,,g_n)=(g_0\,,\ldots\,,g_{n-1})\,. 
\end{align*} The degeneracy maps are $\text{Id}:G^0\to G$ for $n=0\,,$ and \begin{align*} & \sigma_{n-1,i}(g_0\,,\ldots\,,g_{n-1})=(g_0\,,\ldots\,,g_{i-1}\,,\text{Id}(t(g_i))\,,g_i\,,\ldots\,,g_{n-1})\,,\,\;0\le i\le n-1 \\& \sigma_{n-1,n}(g_0\,,\ldots\,,g_{n-1})=(g_0\,,\ldots\,,g_{n-1}\,,\text{Id}(s(g_{n-1})))\,. \end{align*} A morphism $f:G\to H$ gets sent to $\mathbf{B}^\bullet f:\mathbf{B}^\bullet G\to \mathbf{B}^\bullet H\,,$ which acts as $f$ does for $n=0\,,1\,,$ and \begin{align*} \mathbf{B}^n f(g_0\,,\ldots\,,g_{n-1})=(f(g_0)\,,\ldots\,,f(g_{n-1})) \end{align*} for $n> 1\,.$ $\blacksquare$\end{definition} \subsection{Cohomology of Sheaves on Stacks}\label{appendix:Cohomology of Sheaves on Stacks} In this section we briefly review the Grothendieck topology and sheaves on a differentiable stack, as well as their cohomology. The following definitions are based on~\cite{Kai}, which includes further details. \begin{definition} We call a family of morphisms $\{P_i\to P\}_i$ in $[G^0/G]$ a covering family if the corresponding family of morphisms on the base manifolds $\{M_i\to M\}_i$ is a covering family for the site of smooth manifolds, ie. a family of \'{e}tale maps such that $\coprod_i M_i\to M$ is surjective. This defines a Grothendieck topology on $[G^0/G]\,,$ thus we can now speak of sheaves on $[G^0/G]\,,$ ie. contravariant functors $\mathcal{S}:[G^0/G]\to\mathbf{Ab}$ such that the following diagram is an equalizer for all covering families $\{P_i\to P\}_i:$ \begin{align*} \mathcal{S}(P)\to\prod_i \mathcal{S}(P_i)\rightrightarrows \prod_{i,j} \mathcal{S}(P_i{\mathop{\times}}_P P_j)\,.
\end{align*} A morphism between sheaves $\mathcal{S}$ and $\mathcal{F}$ is a natural transformation from $\mathcal{S}$ to $\mathcal{F}\,.$ $\blacksquare$\end{definition} \begin{definition}\label{stack cohomology} Let $\mathcal{S}$ be a sheaf on $[G^0/G]\,.$ Define the global sections functor $\Gamma:\textrm{Sh}([G^0/G])\to \mathbf{Ab}$ by \begin{align*} \Gamma([G^0/G],\mathcal{S}):=\text{Hom}_{\textrm{Sh}([G^0/G])}(\mathbb{Z}\,,\,\mathcal{S})\,, \end{align*} where $\mathbb{Z}$ is the sheaf on $[G^0/G]$ which assigns to the object \[ \begin{tikzcd} P\arrow{r}{}\arrow{d} & G^0 \\M \end{tikzcd} \] the abelian group $H^0(M,\mathbb{Z})\,.$ $\blacksquare$\end{definition} \begin{definition} The global sections functor $\Gamma:\textrm{Sh}([G^0/G])\to \mathbf{Ab}$ is left exact and the category of sheaves on $[G^0/G]$ has enough injectives, so we define $H^*([G^0/G],\mathcal{S}):=R^*\Gamma(\mathcal{S})\,.$ $\blacksquare$\end{definition} \begin{theorem}[see~\cite{Kai}]\label{stack groupoid cohomology} Let $\mathcal{S}$ be a sheaf on $[G^0/G]\,.$ Then \begin{align*} H^*([G^0/G],\mathcal{S})\cong H^*(\mathbf{B}^\bullet G,\mathcal{S}(\mathbf{B}^\bullet G))\,. \end{align*} \end{theorem} \subsection{Abelian Extensions}\label{abelian extensions} Here we review abelian extensions and central extensions of Lie groupoids and Lie algebroids. \begin{definition}Let $M$ be a $G$-module for a Lie groupoid $G\rightrightarrows G^0$. A Lie groupoid extension of $G$ by $M$ is given by a Lie groupoid $E\rightrightarrows G^0$ and a sequence of morphisms \begin{align*} 1\to M\overset{\iota}{\to} E\overset{\pi}{\to} G\to 1\,, \end{align*} such that $\iota\,,\,\pi$ are the identity on $G^0\,,$ $\iota$ is an embedding, $\pi$ is a surjective submersion, and if $m\in M\,,\,e\in E$ satisfy $s(m)=s(e)\,,$ then $e\iota(m)=\iota(\pi(e)\cdot m)e\,;$ in addition, we require that $E\to G$ be a principal $M$-bundle with respect to the right action.
If $M$ is a trivial $G$-module then $E$ will be called a central extension. If $A$ is an abelian Lie group then associated to it is a canonical trivial $G$-module given by $A_{G^0}\,,$ and by an $A$-central extension of $G$ we will mean an extension of $G$ by the trivial $G$-module $A_{G^0}\,.$ $\blacksquare$\end{definition} \begin{definition}Let $\mathfrak{m}$ be a $\mathfrak{g}$-representation for a Lie algebroid $\mathfrak{g}\to N$. A Lie algebroid extension of $\mathfrak{g}$ by $\mathfrak{m}$ is given by a Lie algebroid $\mathfrak{e}\to N$ and an exact sequence of the form \begin{align*} 0\to \mathfrak{m}\overset{\iota}{\to}\mathfrak{e}\overset{\pi}{\to}\mathfrak{g}\to 0\,, \end{align*} such that $\iota\,,\,\pi$ are the identity on $N\,,$ and such that if $X\,,Y$ are local sections of $\mathfrak{m}\,,\mathfrak{e}\,,$ respectively, over an open set $U\subset N\,,$ then $\iota(L_{\pi(Y)}X)=[Y,\iota(X)]\,.$ If $\mathfrak{m}$ is a trivial $\mathfrak{g}$-module then $\mathfrak{e}$ will be called a central extension. Similarly to the previous definition, if $V$ is a finite dimensional vector space then associated to it is a canonical trivial $\mathfrak{g}$-module given by $N{\mathop{\times}} V\,,$ and by a $V$-central extension of $\mathfrak{g}$ we will mean an extension of $\mathfrak{g}$ by the trivial $\mathfrak{g}$-module $N{\mathop{\times}} V\,.$ $\blacksquare$\end{definition} \begin{proposition}[see \cite{Kai} and \cite{luk}] With the above definitions, $H^1_0(G,M)$ classifies extensions of $G$ by $M\,,$ and $H^1_0(\mathfrak{g},\mathfrak{m})$ classifies extensions of $\mathfrak{g}$ by $\mathfrak{m}\,.$ \end{proposition} \begin{thebibliography}{9} \bibitem{alveraz} Álvarez, Daniel. \textit{Leaves of stacky Lie algebroids.} Comptes Rendus. Mathématique, Volume 358 (2020) no. 2, pp. 217-226. doi: 10.5802/crmath.37.
https://comptes-rendus.academie-sciences.fr/mathematique/articles/10.5802/crmath.37/ \bibitem{BS} Arias Abad, Camilo, and Crainic, Marius. \textit{The Weil algebra and the Van Est isomorphism.} Ann. Inst. Fourier (Grenoble) 61 (2011), no. 3, 927–970. \bibitem{abadc} Arias Abad, Camilo and Crainic, Marius. \textit{Representations up to homotopy and Bott's spectral sequence for Lie groupoids.} Advances in Mathematics, Volume 248, 2013, Pages 416-452, ISSN 0001-8708, https://doi.org/10.1016/j.aim.2012.12.022. \bibitem{abad} Arias Abad, Camilo and Crainic, Marius. \textit{Representations up to homotopy of Lie algebroids.} Journal für die reine und angewandte Mathematik (Crelles Journal), vol. 2012, no. 663, 2012, pp. 91-126. \bibitem{Quintero} Arias Abad, Camilo and Quintero Vélez, Alexander and Vélez Vásquez, Sebastián. \textit{An $A_{\infty}$ version of the Poincar\'{e} lemma.} Pacific J. Math. 302 (2019), no. 2, 385–412. \bibitem{bundle} Aschieri, P. and Cantini, L. and Jur\v{c}o, B. \textit{Nonabelian Bundle Gerbes, Their Differential Geometry and Gauge Theory.} Commun. Math. Phys. 254, 367–400 (2005). https://doi.org/10.1007/s00220-004-1220-6 \bibitem{baez2} Baez, John C. and Lauda, Aaron D. \textit{Higher-dimensional algebra. V: 2-Groups.} Theory and Applications of Categories [electronic only] 12 (2004): 423-491. http://eudml.org/doc/124217. \bibitem{bailey} Bailey, Toby N. and Eastwood, M. and Gindikin, S. \textit{Smoothly parameterized Čech cohomology of complex manifolds.} The Journal of Geometric Analysis 15 (2004): 9-23. \bibitem{bargmann} Bargmann, Valentine. \textit{On unitary ray representations of continuous groups.} Annals of Mathematics (1954), 59 (1): 1–46, doi:10.2307/1969831, JSTOR 1969831. \bibitem{baum} Baum, Paul and Connes, Alain. \textit{Leafwise Homotopy Equivalence and Rational Pontrjagin Classes.} Advanced Studies in Pure Mathematics 5, (1985), pp. 1-14. \bibitem{Kai} Behrend, Kai and Xu, Ping. \textit{Differentiable Stacks and Gerbes.} J.
Symplectic Geom. 9 (2011), no. 3, 285-341. \bibitem{Bernstein} Bernstein, J. and Lunts, V. \textit{Equivariant Sheaves and Functors.} LNM 1578, (1994). \bibitem{block} Block, Jonathan and Smith, Aaron M. \textit{A Riemann Hilbert correspondence for infinity local systems.} arXiv: Algebraic Topology (2009). \bibitem{bona} Bonavolont\`{a}, Giuseppe and Poncin, Norbert. \textit{On the category of Lie n-algebroids.} Journal of Geometry and Physics, Volume 73 (2013), Pages 70-90. \bibitem{brylinski} Brylinski, Jean-Luc. \textit{Differentiable Cohomology of Gauge Groups.} arXiv:math/0011069 [math.DG], (2000). \bibitem{buch} Buchdahl, N. \textit{On the Relative De Rham Sequence.} Proceedings of the American Mathematical Society, vol. 87, no. 2, American Mathematical Society, 1983, pp. 363–66, https://doi.org/10.2307/2043718. \bibitem{cabrera} Cabrera, Alejandro and Drummond, Thiago. \textit{Van Est isomorphism for Homogeneous Cochains.} Pacific J. Math. 287 (2017), no. 2, 297–336. \bibitem{Canez} Cañez, Santiago. \textit{Double groupoids and the symplectic category.} Journal of Geometric Mechanics, 2018, 10 (2) : 217-250. doi: 10.3934/jgm.2018009 \bibitem{cegarra} Cegarra, A. and Remedios, J. \textit{The relationship between the diagonal and the bar constructions on a bisimplicial set.} Topology and its Applications, (2015), 153(1):21-51 \bibitem{catt} Cattaneo, A. and Felder, G. \textit{A Path Integral Approach to the Kontsevich Quantization Formula.} Comm Math Phys 212, 591–611 (2000). \bibitem{Crainic} Crainic, Marius. \textit{Differentiable and Algebroid Cohomology, van Est Isomorphisms, and Characteristic Classes.} Commentarii Mathematici Helvetici, Vol.78, (2003) pp. 681-721. \bibitem{rui} Crainic, Marius and Fernandes, Rui Loja. \textit{Integrability of Lie Brackets.} Annals of Mathematics, 157 (2003), 575-620. \bibitem{zhu} Crainic, Marius and Zhu, Chenchang. \textit{Integrability of Jacobi and Poisson structures.} Annales de l'Institut Fourier, Volume 57 (2007) no. 4, pp.
1181-1216. \bibitem{davis} Davis, James F. and Kirk, Paul. \textit{Lecture Notes in Algebraic Topology.} American Mathematical Society (2001). \bibitem{cueca} Cueca Ten, Miquel and \'{A}lvarez, Daniel. \textit{Stacky Lie algebroids.} https://utrechtgeometrycentre.nl/wp-content/uploads/2020/08/201009-Slides-Miquel-Cueca.pdf, (2014). \bibitem{hoyor} del Hoyo, Matias. \textit{On the homotopy type of a cofibred category.} Cahiers de Topologie et Geometrie Differentielle Categoriques 53 (2012), 82–114. \bibitem{fernandes} del Hoyo, Matias and Fernandes, Rui Loja. \textit{Riemannian metrics on differentiable stacks.} Mathematische Zeitschrift, 292 (2019), 103–132. \bibitem{del hoyo} del Hoyo, Matias and Ortiz, Cristian. \textit{Morita Equivalences of Vector Bundles.} International Mathematics Research Notices, Volume 2020, Issue 14, July 2020, Pages 4395–4432, https://doi.org/10.1093/imrn/rny149 \bibitem{Deligne} Deligne, P. \textit{Th\'{e}orie de Hodge. III.} Inst. Hautes \'{E}tudes Sci. Publ. Math No. $\mathbf{44}$ (1974), 5-77. \bibitem{dieck} tom Dieck, Tammo. \textit{Algebraic Topology.} European Mathematical Society, Zürich (2008). https://www.maths.ed.ac.uk/~v1ranick/papers/diecktop.pdf \bibitem{bo} Feng, Bo and Hanany, Amihay and He, Yang-Hui and Prezas, Nikolaos. \textit{Discrete Torsion, Non-Abelian Orbifolds and the Schur Multiplier.} Journal of High Energy Physics, 5 (2000). \bibitem{ruif} Fernandes, Rui. \textit{Invariants of Lie algebroids.} Differential Geometry and its Applications, Volume 19, Issue 2 (2003), Pages 223-243, ISSN 0926-2245, https://doi.org/10.1016/S0926-2245(03)00032-9. \bibitem{getzler} Getzler, Ezra. \textit{Lie theory for nilpotent $L_\infty$ algebras.} Annals of Mathematics, 170 (2009), 271–301 \bibitem{ginot} Ginot, Gr\'{e}gory and Sti\'{e}non, Mathieu. \textit{G-gerbes, principal 2-group bundles and characteristic classes.} J. Symplectic Geom., 13(4):1001–1047, (2015). \bibitem{glock} Gl\"{o}ckner, Helge.
\textit{Fundamentals of submersions and immersions between infinite-dimensional manifolds.} arXiv: Differential Geometry (2015). \bibitem{groth} Grothendieck, Alexander. \textit{Pursuing Stacks.} arXiv:2111.01000 [math.CT], (2010). \bibitem{grunenfelder} Grunenfelder, L. and Mastnak, M. \textit{Cohomology of abelian matched pairs and the Kac sequence.} Journal of Algebra, Vol. 276 (2004), pp. 706-736. \bibitem{gen kahler} Gualtieri, Marco. \textit{Generalized K\"{a}hler Geometry.} arXiv:1007.3485 [math.DG], (2010). \bibitem{pym} Gualtieri, Marco and Li, Songhao and Pym, Brent. \textit{The Stokes Groupoids.} Journal für die reine und angewandte Mathematik (Crelles Journal), Vol. 2018 (2013). \bibitem{luk} Gualtieri, Marco and Luk, Kevin. \textit{Logarithmic Picard Algebroids and Meromorphic Line Bundles.} arXiv:1712.10125 [math.AG], (2017). \bibitem{eli} Hawkins, Eli. \textit{A Groupoid Approach to Quantization.} J. Symplectic Geom. 6 (2008), no. 1, 61-125. \bibitem{haus} Garmendia, Alfonso and Zambon, Marco. \textit{Hausdorff Morita equivalence of singular foliations.} Annals of Global Analysis and Geometry 55 (2018): 99-132. \bibitem{andre} Henriques, André. \textit{Integrating $L_\infty$-algebras.} Compositio Mathematica (2008), Vol 144. \bibitem{greg} Karpilovsky, Gregory. \textit{The Schur Multiplier.} Oxford University Press (1987). \bibitem{krepski} Krepski, Derek. \textit{Basic equivariant gerbes on non-simply connected compact simple Lie groups.} Journal of Geometry and Physics Volume 133, November 2018, Pages 30-41. \bibitem{Lackman} Lackman, Joshua. \textit{Cohomology of Lie Groupoid Modules and the Generalized van Est Map.} International Mathematics Research Notices, rnab027, (2021), https://doi.org/10.1093/imrn/rnab027 \bibitem{Xutu} Laurent-Gengoux, C., Tu, J.-L. and Xu, P. \textit{Chern-Weil map for principal bundles over groupoids.} Mathematische Zeitschrift. 255 (2007), 451–491. \bibitem{Li} Li, Du.
\textit{Higher Groupoid Actions, Bibundles, and Differentiation.} Arxiv: Differential Geometry, (2014). https://arxiv.org/pdf/1512.04209 \bibitem{Meinrenken} Li-Bland, David and Meinrenken, Eckhard. \textit{On the van Est homomorphism for Lie groupoids.} L’Enseignement Mathématique, Vol. 61, (2014). \bibitem{riehl} Loregian, Fosco and Riehl, Emily. \textit{Categorical notions of fibration.} Expo. Math., 38(4):496–514, 2020. \bibitem{Mackenzie} Mackenzie, K. (2005). \textit{General Theory of Lie Groupoids and Lie Algebroids} (London Mathematical Society Lecture Note Series). Cambridge: Cambridge University Press. doi:10.1017/CBO9781107325883 \bibitem{mehta} Mehta, R. \textit{Q-algebroids and their cohomology.} J. Symplectic Geom. 7 (2009), no. 3, 263–293. \bibitem{mehtatang} Mehta, R.A. and Tang, X. \textit{From double Lie groupoids to local Lie 2-groupoids.} Bull Braz Math Soc, New Series 42, 651–681 (2011). https://doi.org/10.1007/s00574-011-0033-4 \bibitem{gael} Meigniez, Ga\"{e}l. \textit{Submersions, Fibrations and Bundles.} Transactions of the American Mathematical Society. Volume 354, Number 9, Pages 3771–3787. \bibitem{erbe} Meinrenken, Eckhard. \textit{The Basic Gerbe Over a Compact Simple Lie Group.} Enseign. Math. 49. (2002). \bibitem{salazar} Meinrenken, Eckhard and Salazar, María Amelia. \textit{Van Est Differentiation and Integration.} Math. Ann. 376 (2020), no. 3-4, 1395–1428. \bibitem{eva} Miranda, Eva and Presas, Francisco. \textit{Geometric Quantization of real polarizations via sheaves.} Journal of Symplectic Geometry, Volume 13, Number 2, 421–462, (2015) \bibitem{Moerdijk} Moerdijk, I. \textit{Orbifolds as groupoids: an introduction.} In Orbifolds in mathematics and physics (Madison, WI, 2001), volume 310 of Contemp. Math., pages 205–222. Amer. Math. Soc., Providence, RI, 2002. \bibitem{Murray} Murray, Michael. \textit{Bundle gerbes.} J. Lond. Math. Soc. 54 (1996), pp. 403-416. \bibitem{nlab} nlab authors. \textit{nlab: Lie Group Cohomology.} Revision 15. (2019).
https://ncatlab.org/nlab/show/Lie+group+cohomology. \bibitem{noohi1} Noohi, Behrang. \textit{Foundations of Topological Stacks I.} arXiv: Algebraic Geometry (2005). \bibitem{noohi} Noohi, Behrang. \textit{Fundamental Groups of Algebraic Stacks.} Journal of the Institute of Mathematics of Jussieu 3 (2004): 69-103. \bibitem{Nuiten} Nuiten, J. \textit{Homotopical Algebra for Lie Algebroids.} Appl Categor Struct 27, 493–534 (2019). https://doi.org/10.1007/s10485-019-09563-z \bibitem{ARANGO} Ochoa Arango, Jes\'{u}s Alonso and Tiraboschi, Alejandro. \textit{Double Groupoid Cohomology and Extensions.} Arxiv: K-Theory and Homology, (2016). https://arxiv.org/abs/1608.06712 \bibitem{Peters} Peters, Chris A.M. and Steenbrink, Joseph H.M. \textit{Mixed Hodge Structures (A Series of Modern Surveys in Mathematics).} Springer (2008). \bibitem{pym2} Pym, Brent and Safronov, Pavel. \textit{Shifted Symplectic Lie Algebroids.} International Mathematics Research Notices, rny215 (2018). \bibitem{Quillen} Quillen, Daniel. \textit{Homotopical Algebra.} Lecture Notes in Mathematics 43, Springer, (1967). \bibitem{raptis} Raptis, G. \textit{On Serre Microfibrations and a Lemma of M. Weiss.} Glasgow Mathematical Journal, 59(3), (2017) 649-657. doi:10.1017/S0017089516000458 \bibitem{schommer} Schommer-Pries, Christopher J. \textit{Central Extensions of Smooth 2-Groups and a Finite-Dimensional String 2-Group.} Geometry and Topology, Vol. 15 (2009). \bibitem{sometitle} \v{S}evera, Pavol. \textit{Some title containing the words “homotopy” and “symplectic”, e.g. this one.} Travaux math\'{e}matiques. Fasc. XVI. Vol. 16. Trav. Math. Univ. Luxemb., Luxembourg, (2005), pp. 121–137. \bibitem{jet} \v{S}evera, Pavol. \textit{$L_\infty$ algebras as 1-jets of simplicial manifolds (and a bit beyond).} arXiv:math/0612349 [math.DG], (2006). \bibitem{Severa} \v{S}evera, Pavol and Weinstein, Alan. \textit{Poisson Geometry with a 3-Form Background.} Progress of Theoretical Physics Supplement, Vol.
144, (2002), pp. 145-154. \bibitem{snia} \'{S}niatycki, J. \textit{On Cohomology Groups Appearing in Geometric Quantization.} Differential Geometric Methods in Mathematical Physics I (1975). \bibitem{sparano} Sparano, Giovanni and Vitagliano, Luca. \textit{Deformation cohomology of Lie algebroids and Morita equivalence.} C. R. Math. Acad. Sci. Paris 356 (2018), no. 4, 376–381. \bibitem{str} Street, Ross. \textit{Fibrations and Yoneda’s lemma in a 2-category.} Category Seminar (Proc. Sem., Sydney, 1972/1973), pp. 104--133. Lecture Notes in Math., Vol. 420, Springer, Berlin, 1974. \bibitem{Tu} Tu, Jean-Louis. \textit{Groupoid Cohomology and Extensions.} Transactions of the American Mathematical Society, Vol. 358, No. 11, (2006), pp. 4721-4747. \bibitem{van est} van Est, W. T. \textit{Group cohomology and Lie algebra cohomology in Lie groups. I, II.} Nederl. Akad. Wetensch. Proc. Ser. A. 56 = Indagationes Math. 15, (1953). 484–492, 493–504. \bibitem{van est 2} van Est, W. T. \textit{On the algebraic cohomology concepts in Lie groups. I, II.} Nederl. Akad. Wetensch. Proc. Ser. A. 58 = Indag. Math. 17, (1955). 225–233, 286–294. \bibitem{wockel1} Wagemann, Friedrich and Wockel, Christoph. \textit{A Cocycle Model for Topological and Lie Group Cohomology.} Trans. Amer. Math. Soc. 367 (2015), 1871-1909. \bibitem{konrad} Waldorf, Konrad. \textit{Multiplicative Bundle Gerbes With Connection.} Differential Geometry and its Applications, Vol. 28, Issue 3 (2010), pp. 313-340. \bibitem{Waldron} Waldron, J. \textit{Lie Algebroids over Differentiable Stacks.} Arxiv: Differential Geometry, (2015). https://arxiv.org/abs/1511.07366 \bibitem{weinstein} Weinstein, Alan and Xu, Ping. \textit{Extensions of symplectic groupoids and quantization.} Journal für die reine und angewandte Mathematik. Vol. 417, (1991) pp. 159-190. \bibitem{witten} Witten, E. \textit{Perturbative Gauge Theory as a String Theory in Twistor Space.} Commun. Math. Phys. 252, 189–258 (2004).
https://doi.org/10.1007/s00220-004-1187-3 \bibitem{wockel} Wockel, C. \textit{Principal 2-bundles and their gauge 2-groups.} Forum Math. 23 (3) (2011) 565, [arXiv:0803.3692 [math.DG]]. \bibitem{zhuc} Zhu, Chenchang. \textit{Lie II theorem for Lie algebroids via higher Lie groupoids.} Arxiv: Differential Geometry, (2010). arXiv:math/0701024v2 [math.DG]. \bibitem{zhuc2} Zhu, Chenchang. \textit{n-Groupoids and Stacky Groupoids.} International Mathematics Research Notices, (2009), 4087-4141. \end{thebibliography} \end{document}
2205.02067v1
http://arxiv.org/abs/2205.02067v1
Topological and homological properties of the orbit space of a simple three-dimensional compact linear Lie group
\documentclass{article} \usepackage[cp1251]{inputenc} \usepackage[russian]{babel} \usepackage{amssymb,amsfonts,amsmath,amsthm,amscd} \usepackage{graphicx} \usepackage{enumerate} \usepackage{soul} \usepackage[matrix,arrow,curve]{xy} \usepackage{longtable} \usepackage{array} \usepackage{bm} \usepackage{titling} \makeatletter \def\@maketitle{ \newpage \vskip0.5em UDK \udk \vskip0.5em \vskip1em \begin{center} \let\footnote\thanks {\Large\@author\par} \vskip1.5em {\bf\LARGE\@title\par} \vskip1em \end{center} \par \vskip1.5em} \def\@title{\@latex@warning@no@line{No \noexpand\title given}} \renewcommand{\thefootnote}{\arabic{footnote}} \newcommand{\fn}[1]{\hspace{2pt}\footnote{#1}} \renewcommand{\thanksmarkseries}[1]{ \def\@bsmarkseries{\renewcommand{\thefootnote}{\arabic{footnote}}}} \thanksmarkseries{arabic} \renewcommand{\symbolthanksmark}{\thanksmarkseries{arabic}} \title{Topological and homological properties\\ of the orbit space\\ of a~simple three-dimensional\\ compact linear Lie group} \author{Styrt O.\,G.\thanks{Russia, MIPT, oleg\[email protected]}} \newcommand{\udk}{512.815.1+512.815.6+512.816.1+512.816.2} \newdimen\defskip \defskip=3pt \newcommand{\thec}[1]{\textup{\arabic{#1}}} \renewcommand{\theequation}{\thec{equation}} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{prop}{Proposition} \newtheorem{stm}{Statement} \newtheorem{imp}{Corollary} \newcommand{\parr}{\par\addvspace{\defskip}} \newcommand{\deff}[2]{\newenvironment{#1}{\parr\textbf{#2.}}{\parr}} \deff{df}{Definition} \deff{note}{Remark} \deff{denote}{Notation} \deff{denotes}{Notations} \deff{hint}{Hint} \deff{answer}{Answer\:} \newenvironment{dfn}[1]{\parr\textbf{Definition~({#1}).}}{\parr} \def\@thm#1#2#3{ \normalfont \trivlist \let\thmheadnl\relax \let\thm@swap\@gobble \thm@notefont{\bfseries\upshape} \thm@headpunct{.} \thm@headsep 5\p@ plus\p@ minus\p@\relax \thm@space@setup #1 \@topsep \thm@preskip \@topsepadd \thm@postskip \def\@tempa{#2}\ifx\@empty\@tempa 
\def\@tempa{\@oparg{\@begintheorem{#3}{}}[]} \else \refstepcounter{#2} \def\@tempa{\@oparg{\@begintheorem{#3}{\csname the#2\endcsname}}[]} \@tempa } \def\proof{\relax\ifmmode \blacktriangleleft \mskip 36mu plus 0mu minus 18mu \else } \def\endproof{\ifmmode \mskip 36mu plus 0mu minus 18mu\blacktriangleright \else \unskip~$\blacktriangleright$\par } \def\Ann{\op{Ann}} \newcommand{\Ss}{\textup{\S\,}} \newcommand{\Sss}{\textup{\S\S\,}} \newcommand{\No}{\textup{\,}} \newcommand*{\clei}{\nobreak\hskip\z@skip} \DeclareRobustCommand*{\ti}{~\textemdash{} } \DeclareRobustCommand*{\dh}{\clei\hbox{-}\clei} \renewcommand{\"}{''} \renewcommand{\:}{\textup{:}} \renewcommand{\~}{\textup{;}} \newcommand{\?}{\,\nobreak\hskip0pt} \newcommand{\no}{} \makeatother \newcommand*{\bw}[1]{#1\nobreak\discretionary{}{\hbox{$\mathsurround=0pt #1$}}{}} \newcommand{\sco}{,\ldots,} \newcommand{\spl}{\bw+\ldots\bw+} \newcommand{\smn}{\bw-\ldots\bw-} \newcommand{\seq}{\bw=\ldots\bw=} \newcommand{\sle}{\bw\le\ldots\bw\le} \newcommand{\sge}{\bw\ge\ldots\bw\ge} \newcommand{\sop}{\bw\oplus\ldots\bw\oplus} \newcommand{\sd}{\bw\cdot\ldots\bw\cdot} \newcommand{\sti}{\bw\times\ldots\bw\times} \newcommand{\sot}{\bw\otimes\ldots\bw\otimes} \newcommand{\scu}{\bw\cup\ldots\bw\cup} \newcommand{\sca}{\bw\cap\ldots\bw\cap} \newcommand{\ssub}{\bw\subs\ldots\bw\subs} \newcommand{\ssup}{\bw\sups\ldots\bw\sups} \newcommand{\ha}[1]{\left\langle#1\right\rangle} \newcommand{\ba}[1]{\bigl\langle#1\bigr\rangle} \newcommand{\hr}[1]{\left(#1\right)} \newcommand{\br}[1]{\bigl(#1\bigr)} \newcommand{\Br}[1]{\Bigl(#1\Bigr)} \newcommand{\bbr}[1]{\biggl(#1\biggr)} \newcommand{\ter}[1]{\textup{(}#1\textup{)}} \newcommand{\bgm}[1]{\bigl|#1\bigr|} \newcommand{\Bm}[1]{\Bigl|#1\Bigr|} \newcommand{\hn}[1]{\left\|#1\right\|} \newcommand{\hnn}[1]{\|#1\|} \newcommand{\bn}[1]{\bigl\|#1\bigr\|} \newcommand{\Bn}[1]{\Bigl\|#1\Bigr\|} \newcommand{\hs}[1]{\left[#1\right]} \newcommand{\bs}[1]{\bigl[#1\bigr]} \newcommand{\BS}[1]{\Bigl[#1\Bigr]} 
\newcommand{\hc}[1]{\left\{#1\right\}} \newcommand{\bc}[1]{\bigl\{#1\bigr\}} \newcommand{\BC}[1]{\Bigl\{#1\Bigr\}} \newcommand{\hb}[1]{\hskip-1pt\left/#1\right.\hskip-1pt} \newcommand{\hbr}[1]{\left.\hskip-1pt#1\right/\hskip-1pt} \newcommand{\bb}{\bigm/} \newcommand{\Bb}{\Bigm/} \newcommand{\bbl}{\bigm\wo} \newcommand{\Bbl}{\Bigm\wo} \newcommand{\mbb}{\mathbb} \newcommand{\mbf}{\mathbf} \newcommand{\mcl}{\mathcal} \newcommand{\mfr}{\mathfrak} \newcommand{\mrm}{\mathrm} \newcommand{\R}{\mbb{R}} \newcommand{\Q}{\mbb{Q}} \newcommand{\Z}{\mbb{Z}} \newcommand{\N}{\mbb{N}} \newcommand{\T}{\mbb{T}} \newcommand{\F}{\mbb{F}} \newcommand{\Cbb}{\mbb{C}} \newcommand{\Hbb}{\mbb{H}} \newcommand{\RP}{\mbb{R}\mrm{P}} \newcommand{\CP}{\mbb{C}\mrm{P}} \newcommand{\Eb}{\mbb{E}} \newcommand{\idb}{\mbf{1}} \newcommand{\ib}{\mbf{i}} \newcommand{\jb}{\mbf{j}} \newcommand{\kb}{\mbf{k}} \newcommand{\Pc}{\mcl{P}} \newcommand{\Zc}{\mcl{Z}} \newcommand{\ggt}{\mfr{g}} \newcommand{\hgt}{\mfr{h}} \newcommand{\tgt}{\mfr{t}} \newcommand{\kgt}{\mfr{k}} \newcommand{\pd}{\partial} \newcommand{\liml}[1]{\lim\limits_{{#1}}} \newcommand{\intl}[2]{\int\limits_{{#1}}^{{#2}}} \newcommand{\ints}[1]{\int\limits_{{#1}}} \newcommand{\suml}[2]{\sum\limits_{{#1}}^{{#2}}} \newcommand{\sums}[1]{\sum\limits_{{#1}}} \newcommand{\sumiun}{\sum\limits_{i=1}^{n}} \newcommand{\prodl}[2]{\prod\limits_{{#1}}^{{#2}}} \newcommand{\prods}[1]{\prod\limits_{{#1}}} \newcommand{\oplusl}[2]{\bigoplus\limits_{{#1}}^{{#2}}} \newcommand{\opluss}[1]{\bigoplus\limits_{{#1}}^{}} \newcommand{\otimesl}[2]{\bigotimes\limits_{{#1}}^{{#2}}} \newcommand{\otimess}[1]{\bigotimes\limits_{{#1}}^{}} \newcommand{\capl}[2]{\bigcap\limits_{{#1}}^{{#2}}} \renewcommand{\caps}[1]{\bigcap\limits_{{#1}}} \newcommand{\cupl}[2]{\bigcup\limits_{{#1}}^{{#2}}} \newcommand{\cups}[1]{\bigcup\limits_{{#1}}} \newcommand{\sqcupl}[2]{\bigsqcup\limits_{{#1}}^{{#2}}} \newcommand{\sqcups}[1]{\bigsqcup\limits_{{#1}}} \newcommand{\alf}{\alpha} \newcommand{\be}{\beta} 
\newcommand{\ga}{\gamma} \newcommand{\Ga}{\Gamma} \newcommand{\de}{\delta} \newcommand{\De}{\Delta} \newcommand{\ep}{\varepsilon} \newcommand{\la}{\lambda} \newcommand{\La}{\Lambda} \newcommand{\nab}{\nabla} \newcommand{\rh}{\rho} \newcommand{\si}{\sigma} \newcommand{\Sig}{\Sigma} \newcommand{\ta}{\theta} \newcommand{\ph}{\varphi} \newcommand{\om}{\omega} \newcommand{\Om}{\Omega} \newcommand{\GL}{\mbf{GL}} \newcommand{\SL}{\mbf{SL}} \newcommand{\Or}{\mbf{O}} \newcommand{\SO}{\mbf{SO}} \newcommand{\Sp}{\mbf{Sp}} \newcommand{\Spin}{\mbf{Spin}} \newcommand{\Un}{\mbf{U}} \newcommand{\SU}{\mbf{SU}} \newcommand{\glg}{\mfr{gl}} \newcommand{\slg}{\mfr{sl}} \newcommand{\sog}{\mfr{so}} \newcommand{\un}{\mfr{u}} \newcommand{\sug}{\mfr{su}} \newcommand{\spg}{\mfr{sp}} \DeclareMathOperator{\Lie}{Lie} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Ad}{Ad} \DeclareMathOperator{\ad}{ad} \DeclareMathOperator{\Mat}{Mat} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\In}{In} \DeclareMathOperator{\Arg}{Arg} \DeclareMathOperator{\Int}{Int} \DeclareMathOperator{\Rea}{Re} \DeclareMathOperator{\Img}{Im} \DeclareMathOperator{\rk}{rk} \DeclareMathOperator{\tr}{tr} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\conv}{conv} \DeclareMathOperator{\Spec}{Spec} \DeclareMathOperator{\Pf}{Pf} \DeclareMathOperator{\sta}{st} \DeclareMathOperator{\Fin}{Fin} \DeclareMathOperator{\ps}{ps} \DeclareMathOperator{\dist}{dist} \newcommand{\labheadu}[1]{\textup{#1.}} \newcommand{\labheadi}[1]{\textup{#1)}} \newcommand{\labheadii}[1]{\textup{(#1)}} \newcommand{\labhi}[1]{\labheadi{\arabic{#1}}} \newcommand{\labhii}[1]{\labheadii{\roman{#1}}} \renewcommand{\labelenumi}{\labhi{enumi}} \renewcommand{\labelenumii}{\labhii{enumii}} \renewcommand{\theenumi}{\labhi{enumi}} \renewcommand{\theenumii}{\labhii{enumii}} \newenvironment{nums}[1]{\renewcommand{\no}{#1}\begin{enumerate}}{\end{enumerate}} 
\newcommand{\eqm}[1]{\begin{equation}#1\end{equation}} \newcommand{\equ}[1]{\begin{equation*}#1\end{equation*}} \newcommand{\mln}[1]{\begin{multline}#1\end{multline}} \newcommand{\ml}[1]{\begin{multline*}#1\end{multline*}} \newcommand{\eqna}[1]{\begin{eqnarray}#1\end{eqnarray}} \newcommand{\eqnu}[1]{\begin{eqnarray*}#1\end{eqnarray*}} \newcommand{\case}[1]{\begin{cases}#1\end{cases}} \newcommand{\cask}[1]{\begin{casks}#1\end{casks}} \newcommand{\matr}[1]{\begin{matrix}#1\end{matrix}} \newcommand{\rbmat}[1]{\begin{pmatrix}#1\end{pmatrix}} \renewcommand{\ge}{\geqslant} \renewcommand{\le}{\leqslant} \newcommand{\fa}{\,\forall\,} \newcommand{\exi}{\,\exists\,} \newcommand{\exu}{\,\exists\,!\;} \newcommand{\bes}{\infty} \newcommand{\es}{\varnothing} \newcommand{\eqi}{\equiv} \newcommand{\subs}{\subset} \newcommand{\sups}{\supset} \newcommand{\subsneq}{\subsetneqq} \newcommand{\sm}{\setminus} \newcommand{\wo}{\backslash} \newcommand{\swo}{\mathbin{\triangle}} \newcommand{\cln}{\colon} \newcommand{\nl}{\lhd} \newcommand{\wg}{\wedge} \newcommand{\lhdp}{\leftthreetimes} \newcommand{\Ra}{\Rightarrow} \newcommand{\Lra}{\Leftrightarrow} \newcommand{\xra}{\xrightarrow} \newcommand{\yra}[1]{\xra[#1]{}} \newcommand{\hra}{\hookrightarrow} \newcommand{\dv}{\smash{\mskip3mu\lower1pt\hbox{\vdots}\mskip3mu}} \newcommand{\us}{\underset} \newcommand{\ol}{\overline} \newcommand{\os}[1]{\overset{#1}} \newcommand{\Inn}[1]{\smash{\os{\circ}{\smash{#1}\vph{^{_{^{_{c}}}}}}}\vph{#1}} \newcommand{\wt}{\widetilde} \newcommand{\wh}{\widehat} \newcommand{\nd}{\,\&\,} \newcommand{\sst}[1]{\substack{#1}} \newcommand{\unbr}[2]{\underbrace{#1}_{#2}} \newcommand{\mb}[1]{$\bm{#1}$} \newcommand{\thra}{\twoheadrightarrow} \newcommand{\bom}{\boldmath} \newcommand{\phm}[1]{\phantom{#1}} \newcommand{\hph}[1]{\hphantom{#1}} \newcommand{\vph}[1]{\vphantom{#1}} \newcommand{\soo}{\soulomit} \newcommand{\ign}{\ignorespaces} \newcommand{\leqn}{\lefteqn} \newcommand{\lp}[1]{\llap{$#1$}} 
\newcommand{\rp}[1]{\rlap{$#1$}} \newcommand{\wtW}{\leqn{\hskip1pt\wt{\phm{N}}}W} \newcommand{\olQ}{\leqn{\hskip1pt\ol{\phm{J}}}Q} \newcommand{\mk}{\os{\text{def}}=} \newcommand{\MSC}{\textit{2000 MSC}\: } \renewcommand{\refname}{References} \begin{document} \maketitle The article is devoted to the question whether the orbit space of a~compact linear group is a~topological manifold and a~homological manifold. In this paper, the case of a~simple three-dimensional group is considered. An upper bound is obtained for the sum of the integer parts of the half-dimensions of the irreducible components of a~representation whose quotient space is a~homological manifold; this strengthens an earlier result giving the same bound when the quotient space of a~representation is a~smooth manifold. Most of the representations satisfying this bound have also been studied before. The proofs use standard arguments from linear algebra and from the theory of Lie groups, Lie algebras, and their representations. \smallskip \textit{Key words}\: Lie group, linear representation of a~group, topological quotient space of an action, topological manifold, homological manifold. \section{Introduction}\label{introd} Consider a~faithful linear representation of a~compact Lie group~$G$ in a~real vector space~$V$. The question is whether the topological quotient space $V/G$ of this action is a~topological manifold, and whether it is a~homological manifold. In what follows, we will, for brevity, say simply <<manifold>> for a~topological manifold. Without loss of generality, we may assume that $V$ is a~Euclidean space, $G$ is a~Lie subgroup of the group $\Or(V)$, and the representation $G\cln V$ is the tautological one. This problem was studied in~\cite{MAMich,Lange} for finite groups.
In addition, the author's papers \cite{My1,My2,My3,My4} study both topological and differential-geometric properties of the quotient space for several classes of groups\: for groups with commutative connected component~\cite{My1} and for simple groups of classical types~\cite{My2,My3,My4}. The author's articles \cite{homoarx,homo,homo1} also consider groups with commutative connected component and strengthen the <<topological>> part of the results of~\cite{My1}. This paper gives a~similar strengthening of the results of~\cite{My2} for simple three-dimensional groups. Denote by~$G^0$ the connected component of the group~$G$ and by~$\ggt$ its Lie algebra. Suppose that $\ggt\cong\sug_2$\ti or, equivalently, that the group~$G^0$ is isomorphic to one of the groups $\SU_2$ and $\SO_3$. Let $n_1\sco n_L$ be the dimensions of the irreducible components of the representation $\ggt\cln V$, counted with multiplicities and arranged in descending order. Since the representation $G\cln V$ is faithful, we have $n_1\sge n_l>1=n_{l+1}=\dots=n_L$, where $l\in\{1\sco L\}$. Denote by $q(V)$ the number $\suml{i=1}{L}\bs{\frac{n_i}{2}}=\suml{i=1}{l}\bs{\frac{n_i}{2}}\in\N$. The main result of the article is the following theorem. \begin{theorem}\label{main} If $\ggt\cong\sug_2$ and $V/G$ is a~homological manifold, then $q(V)\le4$. \end{theorem} \section{Auxiliary facts}\label{facts} This section contains a~number of auxiliary notations and statements, including ones taken from the articles cited (all new statements are given with proofs). \begin{lemma}\label{prop} Let $X$ be a~topological space and $n$ a~positive integer. \begin{nums}{-1} \item If $X$ is a~simply connected homological $n$\dh sphere, then $X\cong S^n$. \item The cone over the space~$X$ is a~homological $(n+1)$\dh manifold if and only if $X$ is a~homological $n$\dh sphere. \end{nums} \end{lemma} \begin{proof} See Theorem~2.3 and Lemma~2.6 in~\cite[\Ss2]{Lange}.
\end{proof} Traditionally, denote by~$\T$ the Lie group $\bc{\la\in\Cbb\cln|\la|=1}$ under multiplication. Suppose that we have a~Euclidean space~$V$ and a~compact group $G\bw\subs\Or(V)$ with Lie algebra $\ggt\subs\sog(V)$. Consider an arbitrary vector $v\in V$. The subspaces $\ggt v$ and $N_v:=(\ggt v)^{\perp}$ of the space~$V$ are invariant under the stationary subgroup~$G_v$ of the vector~$v$. The stationary subalgebra~$\ggt_v$ of the vector~$v$ coincides with $\Lie G_v$. Set $M_v:=N_v\cap(N_v^{G_v})^{\perp}\subs N_v$. Clearly, $N_v=N_v^{G_v}\oplus M_v\bw\subs V$ and $G_vM_v=M_v$. \begin{stm}\label{Mv} In each $G^0$\dh invariant subspace $V'\subs V$, there exists a~vector~$v$ such that $M_v\subs(V')^{\perp}$. \end{stm} \begin{proof} See Statement~2.2 in~\cite[\Ss2]{My1}. \end{proof} \begin{theorem}\label{slice} Let $v\in V$ be some vector. If $V/G$ is a~homological manifold, then $N_v/G_v$ and $M_v/G_v$ are homological manifolds. \end{theorem} \begin{proof} See Theorem~4 and Corollary~5 in~\cite{homo}. \end{proof} \begin{df} A~linear operator in a~space over some field is called \textit{a~reflection} (resp. \textit{a~pseudoreflection}) if the subspace of its fixed points has codimension $1$ (resp.~$2$). \end{df} \begin{df} Let $K$ be the Lie group $\bc{v\in\Hbb\cln\hn{v}=1}$ under multiplication and $\Ga\subs K$ the inverse image of the dodecahedral rotation group under the covering homomorphism $K\thra\SO_3$. \textit{The Poincar\'e group} is defined as the linear group obtained by restricting the action $K\cln\Hbb$ by left shifts to the subgroup $\Ga\subs K$.
\end{df} \begin{theorem}\label{lang} If the group $G\subs\Or(V)$ is finite and $V/G$ is a~homological manifold, then there are decompositions $G=G_0\times G_1\sti G_k$ and $V=V_0\oplus V_1\sop V_k$ \ter{$k\in\Z_{\ge0}$} such that \begin{itemize} \item the subspaces $V_0,V_1\sco V_k\subs V$ are pairwise orthogonal and $G$\dh invariant\~ \item for each $i,j=0\sco k$, the linear group $(G_i)|_{V_j}\subs\Or(V_j)$ is trivial if $i\ne j$, generated by pseudoreflections if $i=j=0$, and isomorphic to the Poincar\'e group if $i=j>0$ \ter{in particular, $\dim V_j=4$ for any $j=1\sco k$}. \end{itemize} \end{theorem} \begin{proof} See Proposition~3.13 in~\cite[\Ss3]{Lange}. \end{proof} On the space~$\ggt$, there is an $\Ad(G)$\dh invariant scalar product, by means of which we will henceforth identify the spaces $\ggt$ and~$\ggt^*$. If $\ggt'\subs\ggt$ is a~one-dimensional subalgebra and $V'\subs V$ is a~subspace, then $\ggt'V'\subs V$ is a~subspace of dimension no greater than $\dim V'$. Recall the definitions of \textit{$q$\dh stable} ($q\in\N$) and \textit{indecomposable} sets of vectors of finite-dimensional spaces over fields~\cite[\Ss1]{My1}, which are also needed in this article. A~decomposition of a~vector set of a~finite-dimensional linear space into components is defined as a~representation of the set as the union of subsets whose linear spans are linearly independent. If at least two of these linear spans are nontrivial, then such a~decomposition is called \textit{proper}. We say that a~vector set is \textit{indecomposable} if it does not admit any proper decomposition into components. Each vector set can be decomposed into indecomposable components uniquely (up to the distribution of zero vectors), and for any of its decompositions into components, each component is the union of some of its indecomposable components (again up to the zero vectors).
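The Poincar\'e group appearing in Theorem~\ref{lang} is, concretely, the binary icosahedral group of order $120$ acting on $\Hbb\cong\R^4$ by left multiplication. As a small computational sanity check (not part of the original text; the two generating quaternions below are a standard choice, $s=\tfrac12(1+i+j+k)$ and $t=\tfrac12(\varphi+\varphi^{-1}i+j)$ with $\varphi$ the golden ratio, and are an assumption of this sketch), its elements can be enumerated by closing the generators under multiplication:

```python
# Enumerate the binary icosahedral group (the "Poincare group" above) by
# closing a pair of generating unit quaternions under the Hamilton product.
PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)

def key(q):
    """Round coordinates so floating-point duplicates collapse."""
    return tuple(round(c, 6) for c in q)

s = (0.5, 0.5, 0.5, 0.5)                # order 6, s^3 = -1
t = (PHI / 2, 1 / (2 * PHI), 0.5, 0.0)  # order 10, t^5 = -1

group = {key(s): s, key(t): t}
stack = [s, t]
while stack:
    q = stack.pop()
    for g in (s, t):
        p = qmul(q, g)
        if key(p) not in group:
            group[key(p)] = p
            stack.append(p)

print(len(group))  # 120
```

Since both generators have finite order, the closure automatically contains the identity and all inverses; the computation confirms the order $120$ of the double cover of the dodecahedral rotation group.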
\begin{df} A~finite vector set of a~finite-dimensional space, counting multiplicities, is called \textit{$q$\dh stable} ($q\in\N$) if its linear span is preserved after deleting any at most $q$ of its vectors (again taking multiplicities into account). \end{df} For an arbitrary finite vector set~$P$ in a~finite-dimensional space over some field, counting multiplicities, the number of nonzero vectors of the set~$P$ (again counting multiplicities) will be denoted by~$\hn{P}$. Assume that the group~$G^0$ is commutative, i.\,e. a~torus. Each irreducible representation of the group~$G^0$ is one- or two-dimensional. Recall the concept of the weight of an irreducible representation of~$G^0$ given in~\cite[\Ss1]{My1}. An arbitrary two-dimensional irreducible representation of the group~$G^0$ has a~$G^0$\dh invariant complex structure, and we can consider it as a~one-dimensional complex representation of the group~$G^0$, naturally matching it with a~weight\ti a~Lie group homomorphism $\la\cln G^0\to\T$\ti and identifying the latter with its differential\ti the vector $\la\in\ggt^*$. A~one-dimensional representation of the group~$G^0$ is matched with the weight $\la:=0\in\ggt^*$. Classes of isomorphic irreducible representations of~$G^0$ are characterized by weights $\la\in\ggt^*=\ggt$ defined up to multiplication by $(-1)$. Let $P\subs\ggt$ be the set of weights $\la\in\ggt$ corresponding to a~decomposition of the representation $G^0\cln V$ into the direct sum of irreducible ones (counting multiplicities). The set $P\subs\ggt$ does not depend on the choice of this decomposition (up to multiplying the weights by $(-1)$). Since the representation $G\cln V$ is faithful, we have $\ha{P}=\ggt$. {\newcommand{\refr}{see \cite[Theorem~4]{homo1}} \begin{theorem}[\refr]\label{submain} Suppose that $V/G$ is a~homological manifold and $P\subs\ggt$ is a~$2$\dh stable set.
Then the representation $G\cln V$ is the direct product of representations $G_l\cln V_l$ \ter{$l=0\sco p$} such that \begin{nums}{-1} \item for any $l=0\sco p$, the quotient space $V_l/G_l$ is a~homological manifold\~ \item $|G_0|<\bes$\~ \item for each $l=1\sco p$, the group~$G_l$ is infinite and the weight set of the representation $G_l\cln V_l$ is indecomposable, $2$\dh stable and does not contain zeroes. \end{nums} \end{theorem}} \begin{theorem}\label{main1} Assume that $\dim G=1$ and the set $P\subs\ggt$ is $2$\dh stable and does not contain zeroes. If $V/G$ is a~homological manifold, then $\hn{P}=3$. \end{theorem} \begin{proof} Since $0\notin P$, the space~$V$ has a~$G^0$\dh invariant complex structure. If the group $G\subs\Or(V)$ does not contain complex reflections, then the statement follows from Theorem~6 of the paper~\cite{homo1}. As for the arbitrary case, it can be reduced (see~\cite[\Sss3,\,7]{My1}) to that of a~representation of a~one-dimensional group without complex reflections whose weight set is obtained from~$P$ by multiplying all weights by nonzero scalars. \end{proof} \begin{imp}\label{p3} Suppose that $\dim G=1$ and $P\subs\ggt$ is a~$2$\dh stable set. If $V/G$ is a~homological manifold, then $\hn{P}=3$. \end{imp} \begin{proof} Follows from Theorems \ref{submain} and~\ref{main1}. \end{proof} \begin{imp}\label{ple3} If $\dim G=1$ and $V/G$ is a~homological manifold, then $\hn{P}\bw\le3$ \ter{or, equivalently, $\dim(\ggt V)\le6$}. \end{imp} \begin{proof} Assume that $\hn{P}>3$. Then $P\subs\ggt$ is a~$2$\dh stable set. By Corollary~\ref{p3}, $\hn{P}=3$. Thus, we arrive at a~contradiction. \end{proof} \section{Proofs of the results}\label{prove} In this section, Theorem~\ref{main} is proved. For the rest of the paper, we will assume that $\ggt\cong\sug_2$, i.\,e. that the group~$G^0$ is isomorphic to $\SU_2$ or $\SO_3$. Set $V_0:=V^{G^0}\subs V$.
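For orientation, the invariant $q(V)$ from the Introduction is easy to evaluate in examples; the following sketch (with hypothetical dimension lists, not taken from the paper) illustrates the bound $q(V)\le4$ of Theorem~\ref{main}:

```python
def q(dims):
    """q(V) = sum over irreducible components of the integer part of n_i / 2."""
    return sum(n // 2 for n in dims)

# Real irreducible su_2-representations have dimension odd or divisible by 4.
print(q([3]))        # 1: the adjoint representation alone
print(q([4, 3]))     # 3: compatible with the bound q(V) <= 4
print(q([5, 4, 4]))  # 6: exceeds 4, so such a V/G is not a homological manifold
```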
In terms of~\S\,\ref{introd}, $L-l\bw=\dim V_0$, $V_0\ne V$, and the numbers $n_1\sco n_l$ are the dimensions of the irreducible components of the representation $\ggt\cln V_0^{\perp}$ (counting multiplicities), each of them being either divisible by~$4$ or odd. If $\ggt'\bw\subs\ggt$ is a~proper nonzero subalgebra, then $\dim\ggt'=1$, and the subspace $\ggt'V\subs V$ has dimension $2q(V)$. Therefore, $2q(V)\le\dim V$. It is enough to prove the theorem in the case $V_0=0$ (i.\,e. if the representation $G^0\cln V$ does not have one-dimensional irreducible components). Indeed, there exists a~vector $v\in V_0$ such that $M_v\subs V_0^{\perp}$ (see Statement~\ref{Mv}). We have $\ggt v=0$, $N_v=V$, $G_v\sups G^0$, $(G_v)^0\bw=G^0$, $\ggt_v=\ggt$. Further, $M_v=(V^{G_v})^{\perp}\bw\sups(V^{G^0})^{\perp}\bw=V_0^{\perp}\sups M_v$, and, hence, $M_v\bw=V_0^{\perp}$. By Theorem~\ref{slice}, if $V/G$ is a~homological manifold, then $M_v/G_v$ is a~homological manifold. As for the decompositions into irreducible components of the representations of the group $G^0\bw=(G_v)^0$ in the spaces $V$ and $M_v$, the latter is obtained from the former by deleting all one-dimensional components. Thus, $q(V)\bw=q(M_v)$. Further, let us assume that $V/G$ is a~homological manifold and $V_0=0$. We should prove that $q(V)\le4$. Suppose that there exists a~vector $v\in V$ such that $\dim G_v=1$. Then $\dim\ggt_v=1$, $\dim(\ggt v)=2$. Besides, by Theorem~\ref{slice}, $N_v/G_v$ is a~homological manifold. According to Corollary~\ref{ple3}, $\dim(\ggt_v N_v)\le6$, $2q(V)\bw=\dim(\ggt_v V)\bw\le\dim(\ggt_v N_v)+\dim(\ggt v)\le8$, $q(V)\le4$. Further, we will assume that the space~$V$ does not contain a~vector with one-dimensional stationary subgroup and that $q(V)>4$.
Consequently, \begin{itemize} \item $\dim V\ge2q(V)>8$\~ \item $G^0\cong\SU_2$\~ \item any irreducible component of the representation $G^0\cln V$ has dimension divisible by~$4$\~ the same can be said about each of its subrepresentations\~ \item $(\Ker\Ad)\cap G^0=\Zc(G^0)=\{\pm E\}\subs\Or(V)$. \end{itemize} Arbitrary operators $g\in G$ and $\xi\in\ggt^{\Ad(g)}$ in the space~$V$ commute. Hence, for any $g\in G$, the subspaces $V^g,(E-g)V\subs V$ are $(\ggt^{\Ad(g)})$\dh invariant\~ when $\Ad(g)=E$, they are $\ggt$\dh invariant and, thus, $\rk(E-g)\dv4$. Consider an arbitrary vector $v\in V$. If the subalgebra $\ggt_v\subs\ggt$ is proper and nonzero, then $\dim\ggt_v=1$, which contradicts the assumption. So, $\ggt_v=\ggt$ or $\ggt_v=0$. In the former case, we have $G_v\sups G^0$, $v\in V^{G^0}=V_0=0$. Therefore, if $v\ne0$, then $\ggt_v=0$, $|G_v|<\bes$, the map $\ggt\to(\ggt v),\,\xi\mapsto(\xi v)$ is a~linear isomorphism, and any $g\in G_v$ and $\xi\in\ggt$ satisfy $g(\xi v)=\br{\Ad(g)\xi}v$, which implies $(\xi v\in V^g)\Lra(\xi\in\ggt^{\Ad(g)})$. Consequently, if $v\ne0$ and $g\in G_v$, then $(\ggt v)^g=(\ggt^{\Ad(g)})v$. For arbitrary $g\in G$ and $\xi\in\ggt$, denote by~$\ph_{g,\xi}$ the linear map of the space~$V$ to the outer direct sum of two copies of the space $(E-g)V$ defined by the rule $v\mapsto\br{(E-g)v,(E-g)\xi v}$. \begin{lemma}\label{inj} For any $g\in G$ and $\xi\in\ggt\sm(\ggt^{\Ad(g)})$, we have $\Ker\ph_{g,\xi}=0$. \end{lemma} \begin{proof} If $v\ne0$ and $v\in\Ker\ph_{g,\xi}$, then $(E-g)v=(E-g)\xi v=0$, i.\,e. $g\in G_v$ and $\xi v\in V^g$, implying $\xi\in\ggt^{\Ad(g)}$, which contradicts the hypothesis. \end{proof} \begin{imp} If $g\in G$ and $\Ad(g)\ne E$, then $\dim V\le2\cdot\rk(E-g)$. \end{imp} Our next goal is to prove the following theorem. \begin{theorem}\label{stab} For each $v\in V\sm\{0\}$, we have $[G_v,G_v]=G_v\subs\Ker\Ad$. \end{theorem} To prove Theorem~\ref{stab}, fix an arbitrary vector $v\bw\in V\sm\{0\}$.
Denote by~$\pi$ the homomorphism $G_v\to\Or(N_v),\,g\mapsto g|_{N_v}$ and by~$H_v$ the subgroup $\pi(G_v)\subs\Or(N_v)$. By Theorem~\ref{slice}, $N_v/G_v$ is a~homological manifold. Besides, $|G_v|<\bes$. According to Theorem~\ref{lang}, there are decompositions $H_v\bw=H_0\times H_1\sti H_k$ and $N_v=W_0\oplus W_1\sop W_k$ \ter{$k\in\Z_{\ge0}$} such that \begin{itemize} \item the subspaces $W_0,W_1\sco W_k\subs N_v$ are pairwise orthogonal and $G_v$\dh invariant\~ \item for each $i,j=0\sco k$, the linear group $(H_i)|_{W_j}\subs\Or(W_j)$ is trivial if $i\ne j$, generated by pseudoreflections if $i=j=0$, and isomorphic to the Poincar\'e group if $i=j>0$ \ter{in particular, $\dim W_j=4$ for any $j=1\sco k$}. \end{itemize} It is well known that the Poincar\'e group coincides with its commutator subgroup\~ the same can be said about each of the groups $H_i$, $i=1\sco k$. If $g\in G_v$, then $\rk(E-g)-\dim\br{(E-g)N_v}=\dim\br{(E-g)(\ggt v)}\bw=\rk\br{E\bw-\Ad(g)}\bw\le2$\~ when $\Ad(g)=E$, we have $\rk(E-g)=\dim\br{(E-g)N_v}$. \begin{lemma}\label{triv} If $g\in G_v$ and $\dim\br{(E-g)N_v}\le2$, then $g=E$. \end{lemma} \begin{proof} By assumption, $\rk(E-g)\le4$. If $\Ad(g)\ne E$, then $\dim V\bw\le2\bw\cdot\rk(E\bw-g)\le8$ while $\dim V>8$. Hence, $\Ad(g)=E$, implying, first, $\rk(E-g)\dv4$ and, second, $\rk(E-g)=\dim\br{(E-g)N_v}\le2$. Thus, $\rk(E-g)=0$, $g=E$. \end{proof} According to Lemma~\ref{triv}, $\Ker\pi=\{E\}\subs G_v$, i.\,e. $\pi$ is an isomorphism $G_v\bw\to H_v$. Therefore, setting $G_i:=\pi^{-1}(H_i)\subs G_v$ \ter{$i=0\sco k$}, we obtain that \begin{itemize} \item $G_v=G_0\times G_1\sti G_k$\~ \item the group~$G_0$ is generated by the elements $g\in G_v$ such that $\dim\br{(E\bw-g)N_v}\le2$ (and, by Lemma~\ref{triv}, is trivial)\~ \item each of the groups $G_i$, $i=1\sco k$, coincides with its commutator subgroup\~ \item if $i\in\{1\sco k\}$ and $g\in G_i\sm\{E\}$, then $N_v^g=N_v\cap W_i^{\perp}$ and $(E-g)N_v=W_i$ (consequently, $\dim\br{(E-g)N_v}=4$, $\rk(E-g)\le6$).
\end{itemize} It follows from the above that $G_v=G_1\sti G_k=[G_v,G_v]$. \begin{lemma} Each of the groups $\Ad(G_i)$, $i=1\sco k$, is commutative. \end{lemma} \begin{proof} Suppose that there exist a~number $i\in\{1\sco k\}$ and elements $g,h\in G_i$ such that the operators $\Ad(g)$ and $\Ad(h)$ do not commute. We have $\Ad(g),\Ad(h)\ne E$, the subspaces $\ggt^{\Ad(g)},\ggt^{\Ad(h)}\subs\ggt$ being different and one-dimensional. Hence, $\ggt^{\Ad(h)}=\R\xi$ ($\xi\in(\ggt^{\Ad(h)})\sm(\ggt^{\Ad(g)})$), and thus $\xi V^h\bw\subs V^h$ and, also, $(\ggt v)^h=(\ggt^{\Ad(h)})v=\R(\xi v)$. Further, $g,h\in G_i\sm\{E\}$, implying, first, $\rk(E-h)\le6$ and, second, $N_v^g=N_v^h=N_v\cap W_i^{\perp}$, $V^h=N_v^g\oplus\br{\R(\xi v)}$, $(E-g)\xi V^h\subs(E-g)V^h=\R\br{(E-g)\xi v}$, $\dim(\ph_{g,\xi}V^h)\le2$. According to Lemma~\ref{inj}, $\Ker\ph_{g,\xi}=0$, implying $\dim V^h=\dim(\ph_{g,\xi}V^h)\le2$ and, consequently, $\dim V=\rk(E-h)+\dim V^h\le8$ while $\dim V>8$. The contradiction obtained completes the proof. \end{proof} For each $i=1\sco k$, we have $\Ad(G_i)\bw=\Ad\br{[G_i,G_i]}\bw=\bs{\Ad(G_i),\Ad(G_i)}\bw=\{E\}$, i.\,e. $G_i\subs\Ker\Ad$. Therefore, $G_v=G_1\sti G_k\subs\Ker\Ad$. This completes the proof of Theorem~\ref{stab}. \begin{imp}\label{strv} For any $v\in V\sm\{0\}$, we have $G_v\cap G^0=\{E\}$. \end{imp} \begin{proof} By Theorem~\ref{stab}, $G_v\subs\Ker\Ad$, so $G_v\cap G^0\subs(\Ker\Ad)\cap G^0=\{\pm E\}\bw\subs\Or(V)$; since $v\ne0$, we have $-E\notin G_v$, whence $G_v\cap G^0=\{E\}$. \end{proof} There exists an embedding $\T\hra G^0$; thus, the group~$\T$ can be identified with its image under this embedding and considered as a~subgroup of the group~$G^0$. According to Corollary~\ref{strv}, each irreducible subrepresentation of the representation $\T\cln V$ is faithful and, hence, isomorphic to the representation $\T\cln\Cbb$ by multiplication. Thus, the space~$V$ carries a~complex structure in terms of which the action $\T\cln V$ proceeds by scalar multiplication.
All operators of the group $\Ker\Ad$ commute with all operators of the group $G^0\sups\T$, and, moreover, $(\Ker\Ad)\cap G^0=\{\pm1\}\subs\T$. Therefore, $\Ker\Ad$ is a~finite subgroup of the group $\GL_{\Cbb}(V)$, each of whose operators~$g$ is semisimple over the field~$\Cbb$ and satisfies the relation $(\Spec_{\Cbb}g)\subs\T\subs G^0$. In the group~$G$, denote by~$H$ the subgroup generated by the union of all subgroups $G_v$, $v\in V\sm\{0\}$. By Theorem~\ref{stab}, $H\subs\Ker\Ad$. \begin{prop}\label{gla} For any $g\in\Ker\Ad$, we have $(\Spec_{\Cbb}g)\subs(gH)\cap\T$. \end{prop} \begin{proof} Take an arbitrary element $\la\in(\Spec_{\Cbb}g)$. There exists a~vector $v\bw\in V\sm\{0\}$ such that $gv=\la v$, so $\la\in gG_v\subs gH$. \end{proof} \begin{lemma} For each $g\in\Ker\Ad$, we have $g^2=E$. \end{lemma} \begin{proof} As noted above, $H\subs\Ker\Ad$, so $gH\subs g(\Ker\Ad)=\Ker\Ad$. By Proposition~\ref{gla}, $(\Spec_{\Cbb}g)\subs(gH)\cap\T\subs(\Ker\Ad)\cap\T=\{\pm1\}\subs\T$; since $g$ is semisimple, this implies $g^2=E$. \end{proof} \begin{imp}\label{coker} The group $\Ker\Ad$ is commutative. \end{imp} \begin{imp}\label{stri} For any $v\in V\sm\{0\}$, we have $G_v=\{E\}$. \end{imp} \begin{proof} Follows from Theorem~\ref{stab} and Corollary~\ref{coker}. \end{proof} \begin{imp}\label{htri} The subgroup $H\subs G$ is trivial. \end{imp} \begin{lemma}\label{ker0} We have $\Ker\Ad\subs G^0$. \end{lemma} \begin{proof} Let $g\in\Ker\Ad$ be any element. It follows from Proposition~\ref{gla} and Corollary~\ref{htri} that $(\Spec_{\Cbb}g)\subs\{g\}\cap\T$. On the other hand, $(\Spec_{\Cbb}g)\ne\es$, so $g\in\T\subs G^0$. \end{proof} Since $\Aut(\ggt)=\In(\ggt)$, we have $\Ad(G)=\Ad(G^0)$, implying (together with Lemma~\ref{ker0}) $G=G^0(\Ker\Ad)=G^0\cong\SU_2$. Thus, $\pi_3(G)\bw\cong\pi_3(\SU_2)\cong\pi_3(S^3)\cong\Z$. Let $S\subs V$ be the unit sphere and $M$ the quotient space $S/G$. We have $\dim V>8$, $\dim S>7$\~ hence, $S$ and~$M$ are connected topological spaces and, also, $\pi_k(S)=\{e\}$ \ter{$k=1\sco7$}.
According to Corollary~\ref{stri}, the action $G\cln S$ is free. The quotient map $S\thra M$ is a~locally trivial bundle with fibre~$G$. The corresponding exact homotopy sequence gives the relations $\pi_k(M)\bw\cong\pi_{k-1}(G)$ \ter{$k=2\sco7$} and $\pi_1(M)\cong G/G^0=\{e\}$. By Lemma~\ref{prop}, $M\cong S^m$ \ter{$m:=\dim S-3>4$}\~ on the other hand, $\pi_4(M)\cong\pi_3(G)\cong\Z$. Thus, we arrive at a~contradiction, which completes the proof of Theorem~\ref{main}. \newpage {\renewcommand{\refname}{References} \begin{thebibliography}{9} \bibitem{MAMich} Mikhailova\?M.\,A. On the quotient space modulo the action of a finite group generated by pseudoreflections // Mathematics of the USSR-Izvestiya. 1985. Vol.~24. No.~1. Pp. 99--119.\\ DOI: 10.1070/IM1985v024n01ABEH001216 \bibitem{Lange} Lange\?C. When is the underlying space of an orbifold a topological manifold? // arXiv: math.GN/1307.4875. \bibitem{My1} Styrt\?O.\,G. On the orbit space of a compact linear Lie group with commutative connected component // Tr. Mosk. Mat. O-va. 2009. Vol. 70. Pp. 235--287. \bibitem{My2} Styrt\?O.\,G. On the orbit space of a three-dimensional compact linear Lie group // Izv. RAN, Ser. math. 2011. Vol. 75. No. 4. Pp. 165--188. \bibitem{My3} Styrt\?O.\,G. On the orbit space of an irreducible representation of a special unitary group // Tr. Mosk. Mat. O-va. 2013. Vol. 74. No. 1. Pp. 175--199. \bibitem{My4} Styrt\?O.\,G. On the orbit spaces of irreducible representations of simple compact Lie groups of types $B$, $C$, and~$D$ // J.~Algebra. 2014. Vol. 415. Pp.~137--161. \bibitem{homoarx} Styrt\?O.\,G. Topological and homological properties of the orbit space of a compact linear Lie group with commutative connected component // arXiv: math.AG/1607.06907. \bibitem{homo} Styrt\?O.\,G. Topological and homological properties of the orbit space of a compact linear Lie group with a commutative connected component // Vestn. Mosk. Gos. Tekh. Univ. im. N.E. Baumana, Estestv.
Nauki (Herald of the Bauman Moscow State Tech. Univ., Nat. Sci.). 2018. No. 3. Pp. 68--81.\\ DOI: 10.18698/1812-3368-2018-3-68-81 \bibitem{homo1} Styrt\?O.\,G. More on the topological and homological properties of the orbit space in a compact linear Lie group featuring a commutative connected component. Conclusion // Vestn. Mosk. Gos. Tekh. Univ. im. N.E. Baumana, Estestv. Nauki (Herald of the Bauman Moscow State Tech. Univ., Nat. Sci.). 2018. No. 6. Pp. 48--63.\\ DOI: 10.18698/1812-3368-2018-6-48-63 \bibitem{Bredon} Bredon\?G.\,E. Introduction to compact transformation groups. Academic Press. 1972. 459~p. \end{thebibliography}} \end{document}
2205.02027v2
http://arxiv.org/abs/2205.02027v2
The degree of commutativity of wreath products with infinite cyclic top group
\documentclass[11pt,leqno]{amsart} \usepackage[mathcal]{eucal} \usepackage[utf8]{inputenc} \usepackage{amsfonts,amssymb} \usepackage{hyperref} \usepackage{graphicx} \usepackage{enumerate,float} \usepackage{caption} \usepackage{xfrac} \usepackage{tikz} \tikzset{every picture/.style={line width=0.75pt}} \addtolength{\hoffset}{-.8cm} \addtolength{\textwidth}{1.6cm} \addtolength{\voffset}{-1.5cm} \addtolength{\textheight}{1.6cm} \addtolength{\footskip}{.5cm} \pagestyle{plain} \numberwithin{equation}{section} \theoremstyle{plain} \newtheorem{lemma}{Lemma}[section] \newtheorem{lemma-defi}[lemma]{Lemma and Definition} \newtheorem{proposition}[lemma]{Proposition} \newtheorem{theorem}[lemma]{Theorem} \newtheorem{corollary}[lemma]{Corollary} \newtheorem{theoremABC}{Theorem} \renewcommand{\thetheoremABC}{\Alph{theoremABC}} \newtheorem{propositionABC}[theoremABC]{Proposition} \renewcommand{\thepropositionABC}{\Alph{propositionABC}} \theoremstyle{remark} \newtheorem{remark}[lemma]{Remark} \newtheorem*{notation*}{Notation} \newtheorem{question}[lemma]{Question} \theoremstyle{definition} \newtheorem{definition}[lemma]{Definition} \newtheorem{example}[lemma]{Example} \newtheorem{conjecture}[lemma]{Conjecture} \newtheorem{conjectureABC}[theoremABC]{Conjecture} \renewcommand{\theconjectureABC}{\Alph{conjectureABC}} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\dc}{dc} \DeclareMathOperator{\dens}{\delta} \DeclareMathOperator{\It}{It} \DeclareMathOperator{\maxit}{maxi} \DeclareMathOperator{\minit}{mini} \makeatletter \def\moverlay{\mathpalette\mov@rlay} \def\mov@rlay#1#2{\leavevmode\vtop{ \baselineskip\z@skip \lineskiplimit-\maxdimen \ialign{\hfil$\m@th#1##$\hfil\cr#2\crcr}}} \newcommand{\charfusion}[3][\mathord]{ \mathpalette\mov@rlay{#2\cr#3} } } \makeatother \newcommand{\cupdot}{\charfusion[\mathbin]{\cup}{\cdot}} \newcommand{\bigcupdot}{\charfusion[\mathop]{\bigcup}{\cdot}} \makeatletter \def\author@andify{ \nxandlist {\unskip ,\penalty-1 \space\ignorespaces} {\unskip {} 
\@@and~} {\unskip \penalty-2 \space \@@and~}} \makeatother \title[Degree of commutativity of wreath products]{The degree of commutativity of wreath products with infinite cyclic top group} \author[I. de las Heras]{Iker de las Heras} \address{Iker de las Heras: Mathematisches Institut, Heinrich-Heine-Universit\"at, 40225 D\"usseldorf, Germany; Department of Mathematics, University of the Basque Country UPV/EHU, 48940 Leioa, Spain} \email{[email protected]} \author[B. Klopsch]{Benjamin Klopsch} \address{Benjamin Klopsch: Mathematisches Institut, Heinrich-Heine-Universit\"at, 40225 D\"usseldorf, Germany} \email{[email protected]} \author[A. Zozaya]{Andoni Zozaya} \address{Andoni Zozaya: Department of Mathematics, University of the Basque Coun\-try UPV/EHU, 48940 Leioa, Spain} \email{[email protected]} \date{} \makeatletter \@namedef{subjclassname@2020}{\textup{2020} Mathematics Subject Classification} \makeatother \subjclass[2020]{20P05, 20E22, 20F65, 20F69, 20F05} \keywords{Degree of commutativity, wreath products, density, word growth} \thanks{The first author acknowledges support by the Basque Government, grant POS\_2021\_2\_0040. The third author is supported by Spanish Ministry of Science, Innovation and Universities' grant FPU17/04822. The first and third author acknowledge as well support by the Basque Government, project IT483-22, and the Spanish Government, project PID2020-117281GB-I00, partly with ERDF funds. The authors thank Heinrich-Heine-Universit\"{a}t D\"{u}sseldorf, where a large part of this research was carried out.} \begin{document} \maketitle \begin{abstract} The degree of commutativity of a finite group is the probability that two uniformly and randomly chosen elements commute. 
This notion extends naturally to finitely generated groups~$G$: the degree of commutativity $\dc_S(G)$, with respect to a given finite generating set~$S$, results from considering the fractions of commuting pairs of elements in increasing balls around $1_G$ in the Cayley graph $\mathcal{C}(G,S)$. We focus on restricted wreath products of the form $G = H \wr \langle t \rangle$, where $H \ne 1$ is finitely generated and the top group $\langle t \rangle$ is infinite cyclic. In accordance with a more general conjecture, we show that $\dc_S(G) = 0$ for such groups~$G$, regardless of the choice of~$S$. This extends results of Cox who considered lamplighter groups with respect to certain kinds of generating sets. We also derive a generalisation of Cox's main auxiliary result: in `reasonably large' homomorphic images of wreath products $G$ as above, the image of the base group has density zero, with respect to certain types of generating sets. \end{abstract} \section{Introduction} Let $G$ be a finitely generated group, with finite generating set~$S$. For $n \in \mathbb{N}_0$, let $B_S(n) = B_{G,S}(n)$ denote the ball of radius $n$ in the Cayley graph $\mathcal{C}(G,S)$ of $G$ with respect to~$S$. Following Antol\'in, Martino and Ventura~\cite{AnMaVe17}, we define the \emph{degree of commutativity} of $G$ with respect to $S$ as \begin{equation*} \dc_S(G) = \limsup_{n \rightarrow \infty} \frac{\vert\{ (g,h) \in B_S(n) \times B_S(n) \mid gh=hg\}\vert}{\vert B_S(n)\vert^2}. \end{equation*} We remark that this notion can be viewed as a special instance of a more general concept, where the degree of commutativity is defined with respect to `reasonable' sequences of probability measures on~$G$, as discussed in a preliminary \texttt{arXiv}-version of~\cite{AnMaVe17} or, in more detail, by Tointon in~\cite{To20}. 
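For a finite group the balls $B_S(n)$ eventually coincide with the whole group, so the definition reduces to the classical commuting probability, which equals $k(G)/\vert G\vert$ with $k(G)$ the number of conjugacy classes. A minimal brute-force sketch (using $S_3$ as an illustrative example of our own choosing, not one from the paper):

```python
from itertools import permutations

def commuting_probability(elements, op):
    """Fraction of ordered pairs (g, h) with gh = hg."""
    pairs = [(g, h) for g in elements for h in elements]
    return sum(1 for g, h in pairs if op(g, h) == op(h, g)) / len(pairs)

def compose(g, h):
    """Composition of permutations stored as tuples: (g o h)(i) = g[h[i]]."""
    return tuple(g[i] for i in h)

S3 = list(permutations(range(3)))
print(commuting_probability(S3, compose))  # 0.5 = 3 conjugacy classes / 6 elements
```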
If $G$ is finite, the invariant $\dc_S(G)$ does not depend on the particular choice of~$S$, as the balls stabilise and $\dc(G) = \dc_S(G)$ simply gives the probability that two uniformly and randomly chosen elements of~$G$ commute. This situation was studied already by Erd\H{o}s and Tur\'an~\cite{ErTu68}, and further results were obtained by various authors over the years; for example, see~\cite{GuRo06, Gu73, MaToVaVe21, Ne89, Ru79}. For infinite groups~$G$, it is generally not known whether $\dc_S(G)$ is independent of the particular choice of~$S$. The degree of commutativity is naturally linked to the following concept of density, which is employed, for instance, in~\cite{BuVe02}. The \emph{density} of a subset $X\subseteq G$ with respect to $S$ is \[ \dens_S(X) = \dens_{G,S}(X) = \limsup_{n \rightarrow \infty} \frac{\vert X \cap B_S(n)\vert }{\vert B_S(n)\vert }. \] If the group $G$ has sub-exponential word growth, then the density function~$\dens_S$ is bi-invariant; compare with~\cite[Prop.~2.3]{BuVe02}. Based on this fact, the following can be proved, initially for residually finite groups and then without this additional restriction, even in the more general context of suitable sequences of probability measures; see~\cite[Thm.~1.3]{AnMaVe17} and~\cite[Thms.~1.6 and 1.17]{To20}. \begin{theorem}[\textup{Antol\'in, Martino and Ventura~\cite{AnMaVe17}; Tointon~\cite{To20}}] \label{thm:AMV} Let $G$ be a finitely generated group of sub-exponential word growth, and let $S$ be a finite generating set of~$G$. Then $\dc_S(G) > 0$ if and only if $G$ is virtually abelian. Moreover, $\dc_S(G)$ does not depend on the particular choice of~$S$. \end{theorem} The situation is far less clear for groups of exponential word growth. In this context, the following conjecture was raised in~\cite{AnMaVe17}. 
\begin{conjecture}[\textup{Antol\'in, Martino and Ventura~\cite{AnMaVe17}}] \label{conjecture} Let $G$ be a finitely generated group of exponential word growth and let $S$ be a finite generating set of~$G$. Then, $\dc_S(G)=0$, irrespective of the choice of~$S$. \end{conjecture} In~\cite{AnMaVe17} the conjecture was already confirmed for non-elementary hyperbolic groups, and Valiunas~\cite{Va19} confirmed it for right-angled Artin groups (and more general graph products of groups) with respect to certain generating sets. Furthermore, Cox~\cite{Co18} showed that the conjecture holds, with respect to \emph{selected} generating sets, for (generalised) lamplighter groups, that is for restricted standard wreath products of the form $G = F \wr \langle t \rangle$, where $F \ne 1$ is finite and $\langle t \rangle$ is an infinite cyclic group. Wreath products of such a shape are basic examples of groups of exponential word growth; in Section~\ref{sec:preliminaries} we briefly recall the wreath product construction, and here we simply note that $G = N \rtimes \langle t \rangle$ with base group $N = \bigoplus_{i \in \mathbb{Z}} F^{t^i}$. In the present paper, we make a significant step forward in two directions, by confirming Conjecture~\ref{conjecture} for an even wider class of restricted standard wreath products and with respect to \emph{arbitrary} generating sets. \begin{theoremABC} \label{thm:main} Let $G = H \wr \langle t \rangle$ be the restricted wreath product of a finitely generated group~$H \ne 1$ and an infinite cyclic group $\langle t \rangle \cong C_\infty$. Then $G$ has degree of commutativity~$\dc_S(G)=0$, for every finite generating set $S$ of~$G$. \end{theoremABC} One of the key ideas in~\cite{Co18} is to reduce the desired conclusion $\dc_S(G) = 0$, for the wreath products $G = N \rtimes \langle t \rangle$ under consideration, to the claim that the base group $N$ has density $\dens_S(N) =0$ in~$G$.
We proceed in a similar way and derive Theorem~\ref{thm:main} from the following density result, which constitutes our main contribution. \begin{theoremABC} \label{thm:main-density} Let $G = H \wr \langle t \rangle$ be the restricted wreath product of a finitely generated group~$H$ and an infinite cyclic group $\langle t \rangle \cong C_\infty$. Then the base group $N = \bigoplus_{i\in \mathbb{Z}} H^{t^i}$ has density $\dens_S(N)=0$ in $G$, for every finite generating set $S$ of~$G$. \end{theoremABC} The limitation in~\cite{Co18} to special generating sets $S$ of lamplighter groups $G$ is due to the fact that the arguments used there rely on explicit minimal length expressions for elements $g\in G$ with respect to~$S$. If one restricts to generating sets which allow control over minimal length expressions in a similar, but somewhat weaker way, it is, in fact, possible to simplify and generalise the analysis considerably. In this way we arrive at the following improvement of the results in~\cite[\textsection 2.2]{Co18}, for homomorphic images of wreath products. \begin{theoremABC} \label{thm:main-2} Let $G$ be a finitely generated group of exponential word growth of the form $G= N \rtimes \langle t \rangle$, where \smallskip \begin{enumerate}[\rm (a)] \item the subgroup $\langle t \rangle$ is infinite cyclic; \item the normal subgroup $N = \langle \bigcup \big\{ H^{t^i} \mid i \in \mathbb{Z} \big\} \rangle$ is generated by the $\langle t \rangle$-conjugates of a finitely generated subgroup $H$ of $N$; \item the $\langle t \rangle$-conjugates of this group $H$ commute elementwise: $\big[H^{t^i}, H^{t^j} \big] = 1$ for all $i, j \in \mathbb{Z}$ with $H^{t^i} \ne H^{t^j}$. 
\end{enumerate} \smallskip \noindent Suppose further that $S_0$ is a finite generating set for $H$ and that the exponential growth rates of $H$ with respect to $S_0$ and of $G$ with respect to $S = S_0 \cup \{ t \}$ satisfy \begin{equation} \label{eq:inequality} \lim_{n \rightarrow \infty} \sqrt[n]{\vert B_{H,S_0}(n)\vert } < \lim_{n \rightarrow \infty} \sqrt[n]{\vert B_{G,S}(n)\vert }. \end{equation} Then $N$ has density $\dens_S(N)=0$ in $G$ with respect to~$S$. \end{theoremABC} For finitely generated groups $G$ of sub-exponential word growth, the density of a subgroup of infinite index, such as $N$ in $G = N \rtimes \langle t \rangle$ with $\langle t \rangle \cong C_\infty$, is always~$0$; see~\cite{BuVe02}. Thus Theorem~\ref{thm:main-2} has the following consequence. \begin{corollary} Let $G = A \rtimes \langle t \rangle$ be a finitely generated group, where $A$ is abelian and $\langle t \rangle \cong C_\infty$. Then $A$ has density $\dens_S(A) = 0$ in $G$, with respect to any finite generating set of $G$ that takes the form $S = S_0 \cup \{t\}$ with $S_0 \subseteq A$. \end{corollary} Next we give a very simple concrete example to illustrate that the technical condition~\eqref{eq:inequality} in Theorem~\ref{thm:main-2} is not redundant: the situation truly differs from the one for groups of sub-exponential word growth. It is not difficult to craft more complex examples. \begin{example} \label{ex:counterexample} Let $G = F \times \langle t\rangle$, where $F = \langle x,y \rangle$ is the free group on two generators and $\langle t \rangle \cong C_\infty$. Then $F$ has density $\dens_S(F) = 1/2 > 0$ in $G$ for the `obvious' generating set~$S=\{ x,y,t \}$. 
Indeed, for every $i \in \mathbb{Z}$ we have \[ B_{G,S}(n) \cap F t^i = \begin{cases} B_{F,\{ x, y\}}(n - \vert i\vert ) t^i & \text{if $n \in \mathbb{N}$ with $n \ge \vert i\vert $,} \\ \varnothing & \text{otherwise,} \end{cases} \] and hence, for all $n \in \mathbb{N}$, \[ \vert B_{G,S}(n) \cap F\vert = \vert B_{F,\{x,y\}}(n)\vert \] and \[ \vert B_{G,S}(n)\vert = \vert B_{F,\{x,y\}}(n)\vert + 2 \sum\nolimits_{i=1}^{n} \vert B_{F,\{x,y\}}(n-i)\vert . \] This yields \begin{multline*} \frac{\vert B_{G,S}(n) \cap F\vert }{\vert B_{G,S}(n)\vert } =\frac{2\cdot 3^n -1}{2\cdot 3^n -1 + 2\sum_{i=1}^{n}\left(2\cdot 3^{n-i} -1\right)} \\ = \frac{2\cdot 3^n -1}{4 \cdot 3^n -2n-3} \to \frac{1}{2} \qquad \text{as $n \to \infty$.} \end{multline*} We remark that in this example $F$ and $G$ have the same exponential growth rates: \[ \lim_{n \to \infty} \sqrt[n]{\vert B_{F,\{x,y\}}(n)\vert } = \lim_{n \to \infty} \sqrt[n]{\vert B_{G,S}(n)\vert } =3. \] Furthermore, the argument goes through, with the obvious modifications, for any finite generating set $S_0$ of $F$ in place of $\{x,y\}$. \end{example} Finally, we record an open question that suggests itself rather naturally. \begin{question} \label{que:alternative} Let $G$ be a finitely generated group such that $\dc_S(G) >0$ with respect to a finite generating set~$S$. Does it follow that there exists an abelian subgroup $A \leq G$ such that $\dens_S(A) >0$? \end{question} For groups $G$ of sub-exponential word growth the answer is ``yes'', as one can see by an easy argument from Theorem~\ref{thm:AMV}. An affirmative answer for groups of exponential word growth could be a step towards establishing Conjecture~\ref{conjecture} or provide a pathway to a possible alternative outcome. At a heuristic level, an affirmative answer to Question~\ref{que:alternative} would fit well with the results in~\cite{Sh18} and~\cite{To20}. \\ \textbf{Notation.} Our notation is mostly standard. For a set $X$, we denote by $\mathcal{P}(X)$ its power set.
For elements $g,h$ of a group $G$, we write $g^h=h^{-1}gh$ and $[g,h] = g^{-1} g^h$. For a finite generating set $S$ of $G$, we denote by $l_S(g)$ the length of~$g$ with respect to~$S$, i.e., the distance between $g$ and $1$ in the corresponding Cayley graph~$\mathcal{C}(G,S)$, so that \[ B_S(n) = B_{G,S}(n) = \{ g \in G \mid l_S(g) \le n \} \qquad \text{for $n \in \mathbb{N}_0$.} \] Given $a,b \in \mathbb{R}$ and $T \subseteq \mathbb{R}$, we write $[a,b]_T = \{ x \in T \mid a \le x \le b \}$; for instance, $[-2,\sqrt{2}]_\mathbb{Z} = \{-2,-1,0,1\}$. We repeatedly compare the limiting behaviour of real-valued functions defined on cofinite subsets of $\mathbb{N}_0$ which are eventually non-decreasing and take positive values. For this purpose we employ the conventional Landau symbols; specifically we write, for functions $f,g$ of the described type, \begin{align*} \text{$f \in o(g)$, or $g \in \omega(f)$,} & \quad\text{if $\lim_{n \to \infty} \tfrac{f(n)}{g(n)} = 0$, equivalently $\lim_{n \to \infty} \tfrac{g(n)}{f(n)} = \infty$.} \end{align*} As customary, we use suggestive short notation such as, for instance, $f \in o(\log n)$ in place of $f \in o(g)$ for $g \colon \mathbb{N}_{\ge 2} \to \mathbb{R}$, $n \mapsto \log(n)$. \\ \textbf{Acknowledgement.} We thank two independent referees for detailed and valuable feedback. Their comments prompted us to improve the exposition and to sort out a number of minor shortcomings. In particular, this gave rise to Proposition~\ref{pro:exists-q}. \section{Preliminaries} \label{sec:preliminaries} In this section, we collect preliminary and auxiliary results. Furthermore, we briefly recall the wreath product construction and establish basic notation. \subsection{Groups of exponential word growth} We concern ourselves with groups of exponential word growth.
These are finitely generated groups $G$ such that for any finite generating set $S$ of~$G$, the \emph{exponential growth rate} \begin{equation} \label{eq:exponentail-rate} \lambda_S(G) = \lim_{n\rightarrow \infty } \sqrt[n]{\vert B_S(n)\vert } = \inf \big\{ \sqrt[n]{\vert B_S(n)\vert } \mid n \in \mathbb{N}_0 \big\} \end{equation} of $G$ with respect to~$S$ satisfies $\lambda_S(G) > 1$. Since the word growth sequence $\vert B_S(n) \vert$, $n \in \mathbb{N}$, is submultiplicative, i.e., \[ \vert B_S(n + m) \vert \leq \vert B_S(n) \vert \vert B_S(m) \vert \qquad \text{for all } n, m \in \mathbb{N}, \] the limit in \eqref{eq:exponentail-rate} exists and is equal to the infimum as stated, by Fekete's lemma~\cite[Corollary VI.57]{Ha03}. We will use the following basic estimates: \begin{align*} \lambda_S(G)^n \le \vert B_S(n)\vert & \quad \text{for all $n \in \mathbb{N}_0$}, \\ \intertext{and, for each $\varepsilon \in \mathbb{R}_{>0}$,} \vert B_S(n)\vert \le (\lambda_S(G) + \varepsilon)^n & \quad \text{for all sufficiently large $n \in \mathbb{N}$.} \end{align*} In the proof of Theorem~\ref{thm:main-2}, the following two auxiliary results are used. \begin{lemma} \label{lem:stirling} For each $\alpha \in [0,1]_\mathbb{R}$, the sequences $\sqrt[n]{\binom{n+ \lceil \alpha n \rceil}{\lceil\alpha n\rceil}}$ and $\sqrt[n]{\binom{n}{\lceil\alpha n\rceil}}$, $n \in \mathbb{N}$, converge, and furthermore \[ \lim_{\alpha\rightarrow 0^+}\left(\lim_{n\rightarrow\infty}\sqrt[n]{\binom{n+ \lceil \alpha n \rceil}{\lceil\alpha n\rceil}}\right) = \lim_{\alpha\rightarrow 0^+}\left(\lim_{n\rightarrow\infty}\sqrt[n]{\binom{n}{\lceil\alpha n\rceil}}\right)=1. \] Consequently, if $f \colon \mathbb{N} \to \mathbb{R}_{>0}$ satisfies $f\in o(n)$, then the sequence $\binom{n +\lceil f(n)\rceil}{\lceil f(n)\rceil}$, $n \in \mathbb{N}$, grows sub-exponentially in~$n$, viz.\ $\sqrt[n]{\binom{n+\lceil f(n)\rceil}{\lceil f(n)\rceil}} \to 1$ as $n \to \infty$. 
\end{lemma} \begin{proof} For each $\alpha \in [0,1]_\mathbb{R}$, Stirling's approximation for factorials yields \begin{align*} \binom{n + \lceil \alpha n \rceil}{\lceil\alpha n \rceil} &\sim\frac{\sqrt{2\pi (n + \lceil \alpha n \rceil)} \big( (n + \lceil \alpha n \rceil)/e \big)^{(n + \lceil \alpha n \rceil)}} {\sqrt{2\pi \lceil\alpha n\rceil}(\lceil\alpha n\rceil/e)^{\lceil\alpha n\rceil} \sqrt{2\pi n}(n/e)^{n}}\\ &=\frac{\sqrt{n + \lceil \alpha n \rceil}}{\sqrt{2\pi n \lceil\alpha n \rceil}} \cdot \frac{\lceil n + \alpha n \rceil^{\lceil n + \alpha n \rceil}}{\lceil\alpha n\rceil^{\lceil \alpha n \rceil} n^n}, \quad \text{as $n \to \infty$,} \end{align*} i.e., the ratio of the left-hand term to the right-hand term tends to~$1$ as $n$ tends to infinity. Moreover, for all $n\in\mathbb{N}$, \[ \frac{\lceil n + \alpha n \rceil^{\lceil n + \alpha n \rceil}}{\lceil\alpha n\rceil^{\lceil \alpha n \rceil} n^n} \ge \frac{(n + \alpha n)^{n + \alpha n}}{(\alpha n+1)^{\alpha n +1} n^n} = n^{-1} \left( \frac{(1+\alpha)^{1+\alpha}}{(\alpha +\sfrac{1}{n})^{(\alpha + \sfrac{1}{n})}} \right)^{n} \] and similarly \[ \frac{\lceil n + \alpha n \rceil^{\lceil n + \alpha n \rceil}}{\lceil\alpha n\rceil^{\lceil \alpha n \rceil} n^n} \le \frac{(n + \alpha n + 1)^{n + \alpha n + 1}}{(\alpha n)^{\alpha n} n^n} = n \left( \frac{(1+\alpha +\sfrac{1}{n})^{(1+\alpha + \sfrac{1}{n})}}{\alpha^{\alpha} } \right)^{n}. \] This shows that \begin{equation*} \lim_{n\rightarrow\infty}\sqrt[n]{\binom{n + \lceil \alpha n \rceil}{\lceil \alpha n \rceil}} = \frac{(1+ \alpha)^{1 + \alpha}}{\alpha^\alpha}. \end{equation*} Since $\lim_{\alpha \to 0^+} \alpha^\alpha = 1$, we conclude that \begin{equation*} \lim_{\alpha\rightarrow 0^+}\left(\lim_{n\rightarrow\infty}\sqrt[n]{\binom{n+ \lceil \alpha n \rceil}{\lceil\alpha n\rceil}}\right) =1. \end{equation*} A similar computation yields that the second sequence $\sqrt[n]{\binom{n}{\lceil\alpha n\rceil}}$, $n \in \mathbb{N}$, converges. 
Again directly, or by virtue of \begin{equation*} 1 \leq \sqrt[n]{\binom{n}{ \lceil \alpha n \rceil}} \leq \sqrt[n]{\binom{n + \lceil \alpha n \rceil}{\lceil \alpha n \rceil }}, \end{equation*} we conclude that also the second limit, for $\alpha \to 0^+$, is equal to~$1$. \end{proof} \begin{proposition} \label{pro:exists-q} Let $G$ be a finitely generated group of exponential word growth, with finite generating set $S$. Then there exists a non-decreasing unbounded function $q \colon \mathbb{N} \to \mathbb{R}_{\ge 0}$ such that $q \in o(n)$ and \[ \frac{\vert B_S(n)\vert }{\vert B_S(n-q(n))\vert } \to \infty \qquad \text{as $n \to \infty$}. \] \end{proposition} \begin{proof} We put $\lambda = \lambda_S(G) > 1$ and write $\vert B_S(n) \vert = \lambda^{\sum_{i=1}^n b_i}$, with $b_i \in \mathbb{R}_{\ge 0}$ for $i \in \mathbb{N}$, so that the sequence $\sum_{i=1}^n b_i$, $n \in \mathbb{N}$, is subadditive and \[ \lim_{n \to \infty} \frac{1}{n} \sum\nolimits_{i=1}^n b_i = 1. \] In this notation, we seek a non-decreasing unbounded function $q \colon \mathbb{N} \to \mathbb{R}_{\ge 0}$ such that, simultaneously, \begin{equation} \label{eq:requirements-q} q(n)/n \to 0 \quad \text{and} \quad \sum\nolimits_{i = n - \lfloor q(n) \rfloor +1}^n b_i \to \infty \qquad \text{as $n \to \infty$}. \end{equation} We show below that for every $m \in \mathbb{N}$, \begin{equation} \label{eq:reduction-to-1/m} \sum\nolimits_{i=n - \lfloor n/m \rfloor +1}^n b_i \to \infty \qquad \text{as $n \to \infty$}. \end{equation} From this we see that there is an increasing sequence of positive integers $c(m)$, $m \in \mathbb{N}$, such that, for each $m$, \[ c(m) \ge m^2 \quad \text{and} \quad \forall n \in \mathbb{N}_{\ge c(m)}: \sum\nolimits_{i=n - \lfloor n/m \rfloor +1}^n b_i \ge m.
\] Setting $q_1(n) = \lfloor n/m \rfloor$ for $n \in \mathbb{N}$ with $c(m) \le n < c(m+1)$ and \[ q(n) = \max \{ q_1(k) \mid k \in [1,n]_\mathbb{Z} \}, \] we arrive at a function $q \colon \mathbb{N} \to \mathbb{R}_{\ge 1}$ meeting the requirements~\eqref{eq:requirements-q}. \smallskip It remains to establish~\eqref{eq:reduction-to-1/m}. Let $m \in \mathbb{N}$ and put $\varepsilon = \varepsilon_m = (6m)^{-1} \in \mathbb{R}_{>0}$. We choose $N = N_\varepsilon \in \mathbb{N}$ minimal subject to \[ 1 -\varepsilon \le \frac{1}{n} \sum\nolimits_{i=1}^n b_i \le 1+\varepsilon \qquad \text{for all $n \in \mathbb{N}_{\ge N}$.} \] In the following we deal repeatedly with sums of the form \[ \beta(k) = \sum\nolimits_{i=kN+1}^{kN+N} b_i, \] for $k \in \mathbb{N}$, and using subadditivity, we obtain \[ \beta(k) \le \beta(0) \le (1+\varepsilon) N \qquad \text{for all $k \in \mathbb{N}$.} \] We consider $n \in \mathbb{N}$ with $n \ge (1+\varepsilon) \varepsilon^{-1} N \ge N$ and write $n = l N + r$ with $l = l_n \in \mathbb{N}$ and $r = r_n \in [0,N-1]_\mathbb{Z}$. Furthermore, we set \[ t = t_n = \frac{\big\vert \big\{ k \in [0,l-1]_\mathbb{Z} \;\big\vert\; \beta(k) > \varepsilon N \big\} \big\vert}{l} \in [0,1]_\mathbb{R}. \] From our set-up, we deduce that \begin{multline*} 1 - \varepsilon \le \frac{1}{n} \sum\nolimits_{i=1}^n b_i \le \frac{1}{lN} \bigg( \Big( \sum\nolimits_{k=0}^{l-1} \beta(k) \Big) + \beta(l) \bigg) \\ \le \big( t (1+\varepsilon) + (1-t) \varepsilon \big) + \frac{1+\varepsilon}{l} \le t + 2\varepsilon, \end{multline*} hence $t \ge 1-3\varepsilon$ and consequently \begin{multline*} \big\vert \big\{ k \in [0,l-1]_\mathbb{Z} \mid \beta(k) > \varepsilon N \big\} \cap \big\{ k \in [0,l-1]_\mathbb{Z} \mid \lceil (1-6\varepsilon)l \rceil + 1 \le k \big\} \big\vert \\ \ge t l + \big( l - \lceil (1-6\varepsilon)l \rceil -1 \big) - l \ge \big( 1-3\varepsilon - (1-6\varepsilon) \big) l - 2 = 3\varepsilon l - 2. 
\end{multline*} Since \[ n- \lfloor n/m \rfloor = \lceil (1- 6 \varepsilon)n \rceil \le \lceil (1- 6 \varepsilon) (l+1) \rceil N \le (\lceil (1- 6 \varepsilon) l \rceil + 1 )N, \] this gives \begin{equation*} \sum\nolimits_{i=n- \lfloor n/m \rfloor +1}^n b_i \ge \sum\nolimits_{k= \lceil (1-6\varepsilon)l \rceil +1}^{l-1} \beta(k) \ge (3 \varepsilon l - 2) \varepsilon N, \end{equation*} which tends to infinity as $l \to \infty$. This proves~\eqref{eq:reduction-to-1/m}. \end{proof} In~\cite[Lemma 2.2]{Pi00} Pittet seems to claim that \begin{equation*} \liminf_{n \to \infty} \frac{\vert B_S(n)\vert }{\vert B_S(n-1)\vert } > 1, \end{equation*} from which Proposition~\ref{pro:exists-q} could be derived much more easily. However, we found the explanations in \cite{Pi00} not fully conclusive and thus opted to work out an independent argument. Naturally, it would be interesting to establish a more effective version of Proposition~\ref{pro:exists-q}, if possible. \subsection{Wreath products} \label{sec:wreath-products} We recall that a group $G = H \wr K$ is the restricted standard \emph{wreath product} of two subgroups $H$ and~$K$, if it decomposes as a semidirect product $G = N \rtimes K$, where the normal closure of $H$ is the direct sum $N = \bigoplus_{k \in K} H^k$ of the various conjugates of $H$ by elements of~$K$; the groups $N$ and $K$ are referred to as the \emph{base group} and the \emph{top group} of the wreath product~$G$, respectively. Since we do not consider complete standard wreath products or more general types of permutational wreath products, we shall drop the terms ``restricted'' and ``standard'' from now on. 
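To make the wreath product arithmetic just described concrete, the following minimal Python sketch (our illustration, not part of the paper's argument) models the lamplighter group $C_2 \wr \langle t \rangle$: since every coordinate lies in $C_2$, an element $\widetilde{g}\, t^{\rho}$ is stored as the pair consisting of the finite set of indices $i$ with non-trivial coordinate in $H^{t^i}$ and the exponent $\rho$; the names `mult` and `inverse` are our own labels, and the convention $a_i = a^{t^i} = t^{-i} a t^i$ is the one used in the text.

```python
# A minimal sketch (illustration only): arithmetic in C_2 wr <t>.
# An element g = g~ t^r is stored as (supp, r), where supp is the finite
# set of indices i with non-trivial coordinate g|i; this suffices because
# each coordinate lies in C_2.

def mult(g, h):
    """Product in C_2 wr <t>:  g~ t^r * h~ t^s = g~ (h~)^(t^(-r)) t^(r+s).

    Conjugating h~ by t^(-r) shifts its support by -r (since a_i^(t^(-r))
    = a_(i-r)), and multiplying coordinatewise in C_2 amounts to taking
    the symmetric difference of supports.
    """
    supp_g, r = g
    supp_h, s = h
    shifted = {i - r for i in supp_h}
    return (supp_g ^ shifted, r + s)

def inverse(g):
    """Inverse: (g~ t^r)^(-1) = (g~^(-1))^(t^r) t^(-r); support shifts by +r."""
    supp, r = g
    return ({i + r for i in supp}, -r)
```

With this encoding $a_i = (\{i\}, 0)$ and $t^k = (\varnothing, k)$; for instance, for the generator $s_1 = a_4 t^{-3}$ of Example~\ref{exa:itinery-etc} one computes `inverse(({4}, -3)) == ({1}, 3)`, matching $s_1^{-1} = a_1 t^3$.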
Throughout the rest of this section, let \begin{equation} \label{eq:standing-assumption-G} G = H \wr \langle t \rangle = N \rtimes \langle t \rangle \qquad \text{with base group} \qquad N = \bigoplus\nolimits_{i \in \mathbb{Z}} H^{t^i} \end{equation} be the wreath product of a finitely generated subgroup~$H$ and an infinite cyclic subgroup $\langle t \rangle \cong C_\infty$. Every element $g \in G$ can be written uniquely in the form \[ g = \widetilde{g} \, t^{\rho(g)}, \] where $\rho(g) \in \mathbb{Z}$ and $\widetilde{g} = \prod\nolimits_{i \in \mathbb{Z}} (g_{\vert i})^{\, t^i} \in N$ with `coordinates' $g_{\vert i} \in H$. The support of the product decomposition of $\widetilde{g}$ is finite and we write \[ \supp(g) = \{ i \in \mathbb{Z} \mid g_{\vert i} \neq 1 \}. \] Furthermore, it is convenient to fix a finite symmetric generating set $S$ of~$G$; thus $G = \langle S \rangle$, and $g \in S$ implies $g^{-1} \in S$. We put $d = \vert S\vert $ and fix an ordering of the elements of~$S$: \begin{equation}\label{eq:standing-assumption-S} S = \{ s_1, \ldots, s_d \}, \qquad \text{with $s_j = \widetilde{s_{j}} \, t^{\rho(s_j)}$ for $j \in [1,d]_\mathbb{Z}$,} \end{equation} where $\widetilde{s_1}, \ldots, \widetilde{s_d} \in N$. We write $r_S = \max \big\{ \rho(s_j) \mid j \in [1,d]_\mathbb{Z} \big\} \in \mathbb{N}$. \begin{definition} \label{def:itinerary} An \emph{$S$-expression} of an element $g \in G$ is (induced by) a word $W = \prod_{k=1}^l X_{\iota(k)}$ in the free semigroup $\langle X_1, \ldots, X_d \rangle$ such that \begin{equation} \label{eq:writing} g = W(s_1, \ldots, s_d) = \prod\nolimits_{k=1}^l s_{\iota(k)} ; \end{equation} here $W$ determines and is determined by the function $\iota = \iota_W \colon [1,l]_\mathbb{Z} \to [1,d]_\mathbb{Z}$. 
In this situation the standard process of collecting powers of $t$ to the right yields \begin{equation} \label{equ:recover-g-from-It} g = \widetilde{g} \, t^{-\sigma(l)} \qquad \text{with} \quad \widetilde{g} = \prod\nolimits_{k=1}^l \widetilde{s_{\iota(k)}}^{\, t^{\sigma(k-1)}}, \end{equation} where $\sigma = \sigma_{S,W}$ is short for the negative\footnote{At this stage the sign change is a price we pay for not introducing notation for left-conjugation; Example~\ref{exa:itinery-etc} illustrates that $\sigma$ plays a convenient role in the concept of itinerary.} cumulative exponent function \[ \sigma_{S,W} \colon [0,l]_\mathbb{Z} \to \mathbb{Z}, \quad k \mapsto - \sum\nolimits_{j=1}^k \rho \big( s_{\iota(j)} \big). \] We define the \emph{itinerary} of $g$ associated to the $S$-expression~\eqref{eq:writing} as the pair \[ \It(S,W) = (\iota_W,\sigma_{S,W}), \] and we say that $\It(S,W)$ has length~$l$, viz.\ the length of the word~$W$. For the purpose of concrete calculations it is helpful to depict the functions $\iota_{W}$ and $\sigma_{S, W}$ as finite sequences. The term `itinerary' refers to~\eqref{equ:recover-g-from-It}, which indicates how $g$ can be built stepwise from the sequences $\iota_W$ and $\sigma_{S,W}$; see Example~\ref{exa:itinery-etc} below. In particular, $g$ is uniquely determined by the itinerary $\It(S,W) = (\iota,\sigma)$ and, accordingly, we refer to $g$ as the element corresponding to that itinerary. But unless $G$ is trivial and $S$ is empty, the element $g$ has, of course, infinitely many $S$-expressions which in turn give rise to infinitely many distinct itineraries of one and the same element. For discussing features of the exponent function $\sigma_{S,W}$, we call \begin{equation*} \maxit \!\big( \It(S,W) \big) = \max (\sigma_{S,W}) \qquad \text{and} \qquad \minit \!\big( \It(S,W) \big) = \min (\sigma_{S,W}) \end{equation*} the \emph{maximal} and \emph{minimal itinerary points} of~$\It(S,W)$. 
Later we fix a representative function $\mathcal{W} \colon G \to \langle X_1, \ldots, X_d \rangle$, $g \mapsto W_g$ which yields for each element of $G$ an $S$-expression of shortest possible length. In that situation we suppress the reference to $S$ and refer to \[ \It_\mathcal{W}(g) = \It(S,W_g), \; \maxit_\mathcal{W}(g) = \maxit \!\big( \It_\mathcal{W}(g) \big), \; \minit_\mathcal{W}(g) = \minit \!\big( \It_\mathcal{W}(g) \big) \] as the $\mathcal{W}$-\emph{itinerary}, the \emph{maximal $\mathcal{W}$-itinerary point} and the \emph{minimal $\mathcal{W}$-itinerary point} of any given element~$g$. \end{definition} To illustrate the terminology we discuss a concrete example. \begin{example} \label{exa:itinery-etc} A typical example of the wreath products that we consider is the lamplighter group \[ G = \langle a, t \mid a^{2} = 1, [a, a^{t^i}]=1 \text{ for $i \in \mathbb{Z}$} \rangle = \bigoplus\nolimits_{i \in \mathbb{Z}} \langle a_i \rangle \rtimes \langle t \rangle \cong C_2\wr C_\infty, \] where $a_i = a^{\, t^i}$ for each $i \in \mathbb{Z}$. We consider the finite symmetric generating set \[ S = \{ s_1, \ldots, s_5 \} \] with \[ s_1 = a_4 t^{-3}, \, s_2 = t^{-2}, \, s_3 = s_1^{\, -1} = a_1 t^3, \, s_4 = s_2^{\, -1} = t^2, \, s_5 = a_0 = s_5^{\, -1}. \] Let $g =\widetilde{g} \, t^{3}$ be such that $g_{\vert i} = a$ for $i \in \{-1,1,2,6\}$ and $g_{\vert i} = 1$ otherwise. Then we have $\rho(g) = 3$, $\supp(g) = \{-1,1,2,6\}$, and \begin{equation} \label{eq:writing-example} g = t^{-2} \cdot a_0 \cdot a_4t^{-3} \cdot \big( t^2 \big)^{\, 2} \cdot a_0 \cdot t^{2} \cdot a_0 \cdot t^{2} = s_2 \, s_5 \, s_1 \, s_4^{\, 2} \, s_5 \, s_4 \, s_5 \, s_4 \end{equation} is an $S$-expression for $g$ of length~$9$, based on $W = X_2 X_5 X_1 X_4^{\, 2} X_5 X_4 X_5 X_4$. 
The itinerary $I = \It(S,W)$ associated to this $S$-expression for $g$ is \begin{equation} \label{eq:itinery-example} I = (\iota, \sigma) = \big( (2, 5, 1, 4, 4, 5, 4, 5, 4), (0,2,2,5,3,1,1,-1,-1,-3) \big), \end{equation} where we have written the maps $\iota$ and $\sigma$ in sequence notation. Furthermore, we see that $\maxit(I) = 5$ and $\minit(I) = -3$. Figure~\ref{fig:itinerary} gives a pictorial description of part of the information contained in~$I$. \begin{figure}[H] \centering \begin{tikzpicture}[x=0.5pt,y=0.5pt,yscale=-1,xscale=1] \draw (60,100) -- (650,100) ; \draw (80,95.5) -- (80,105.5) ; \draw (130,95) -- (130,105) ; \draw (180,95) -- (180,105) ; \draw (280,95) -- (280,105) ; \draw (430,95) -- (430,105) ; \draw (480,95) -- (480,105) ; \draw (530,95) -- (530,105) ; \draw (630,95) -- (630,105) ; \draw [draw opacity=0] (280,100) .. controls (280,100) and (280,100) .. (280,100) .. controls (280,72.39) and (302.39,50) .. (330,50) .. controls (357.61,50) and (380,72.39) .. (380,100) -- (330,100) -- cycle ; \draw (280,100) .. controls (280,100) and (280,100) .. (280,100) .. controls (280,72.39) and (302.39,50) .. (330,50) .. controls (357.61,50) and (380,72.39) .. (380,100) ; \draw [draw opacity=0] (380,100) .. controls (380,100) and (380,100) .. (380,100) .. controls (380,72.39) and (413.58,50) .. (455,50) .. controls (496.42,50) and (530,72.39) .. (530,100) -- (455,100) -- cycle ; \draw (380,100) .. controls (380,100) and (380,100) .. (380,100) .. controls (380,72.39) and (413.58,50) .. (455,50) .. controls (496.42,50) and (530,72.39) .. (530,100) ; \draw [draw opacity=0] (530,100) .. controls (530,100) and (530,100) .. (530,100) .. controls (530,127.61) and (507.61,150) .. (480,150) .. controls (452.39,150) and (430,127.61) .. (430,100) -- (480,100) -- cycle ; \draw (530,100) .. controls (530,100) and (530,100) .. (530,100) .. controls (530,127.61) and (507.61,150) .. (480,150) .. controls (452.39,150) and (430,127.61) .. 
(430,100) ; \draw [draw opacity=0] (430,100) .. controls (430,100) and (430,100) .. (430,100) .. controls (430,127.61) and (407.61,150) .. (380,150) .. controls (352.39,150) and (330,127.61) .. (330,100) -- (380,100) -- cycle ; \draw (430,100) .. controls (430,100) and (430,100) .. (430,100) .. controls (430,127.61) and (407.61,150) .. (380,150) .. controls (352.39,150) and (330,127.61) .. (330,100) ; \draw [draw opacity=0] (330,100) .. controls (330,100) and (330,100) .. (330,100) .. controls (330,127.61) and (307.61,150) .. (280,150) .. controls (252.39,150) and (230,127.61) .. (230,100) -- (280,100) -- cycle ; \draw (330,100) .. controls (330,100) and (330,100) .. (330,100) .. controls (330,127.61) and (307.61,150) .. (280,150) .. controls (252.39,150) and (230,127.61) .. (230,100) ; \draw (325,47.5) -- (332,50) -- (325,52.5) ; \draw (450,47.5) -- (457,50) -- (450,52.5) ; \draw (485,152.5) -- (477.96,150) -- (484.91,147.5) ; \draw (385,152.5) -- (377.96,150) -- (384.91,147.5) ; \draw (285,152.5) -- (277.96,150) -- (284.91,147.5) ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (226.5,100) .. controls (226.5,98.07) and (228.07,96.5) .. (230,96.5) .. controls (231.93,96.5) and (233.5,98.07) .. (233.5,100) .. controls (233.5,101.93) and (231.93,103.5) .. (230,103.5) .. controls (228.07,103.5) and (226.5,101.93) .. (226.5,100) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (326.5,100) .. controls (326.5,98.07) and (328.07,96.5) .. (330,96.5) .. controls (331.93,96.5) and (333.5,98.07) .. (333.5,100) .. controls (333.5,101.93) and (331.93,103.5) .. (330,103.5) .. controls (328.07,103.5) and (326.5,101.93) .. (326.5,100) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (376.5,100) .. controls (376.5,98.07) and (378.07,96.5) .. (380,96.5) .. controls (381.93,96.5) and (383.5,98.07) .. (383.5,100) .. controls (383.5,101.93) and (381.93,103.5) .. (380,103.5) .. 
controls (378.07,103.5) and (376.5,101.93) .. (376.5,100) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (576.5,100) .. controls (576.5,98.07) and (578.07,96.5) .. (580,96.5) .. controls (581.93,96.5) and (583.5,98.07) .. (583.5,100) .. controls (583.5,101.93) and (581.93,103.5) .. (580,103.5) .. controls (578.07,103.5) and (576.5,101.93) .. (576.5,100) -- cycle ; \draw [draw opacity=0] (230,100) .. controls (230,100) and (230,100) .. (230,100) .. controls (230,127.61) and (207.61,150) .. (180,150) .. controls (152.39,150) and (130,127.61) .. (130,100) -- (180,100) -- cycle ; \draw (230,100) .. controls (230,100) and (230,100) .. (230,100) .. controls (230,127.61) and (207.61,150) .. (180,150) .. controls (152.39,150) and (130,127.61) .. (130,100) ; \draw (185,152.5) -- (177.96,150) -- (184.91,147.5) ; \draw (229,99) -- (219,89) ; \draw [draw opacity=0] (219.39,89.41) .. controls (216.68,86.69) and (215,82.94) .. (215,78.8) .. controls (215,70.52) and (221.72,63.8) .. (230,63.8) .. controls (238.28,63.8) and (245,70.52) .. (245,78.8) .. controls (245,82.94) and (243.32,86.69) .. (240.61,89.41) -- (230,78.8) -- cycle ; \draw (219.39,89.41) .. controls (216.68,86.69) and (215,82.94) .. (215,78.8) .. controls (215,70.52) and (221.72,63.8) .. (230,63.8) .. controls (238.28,63.8) and (245,70.52) .. (245,78.8) .. controls (245,82.94) and (243.32,86.69) .. (240.61,89.41) ; \draw (231,99) -- (241,89) ; \draw (329,98.7) -- (319,88.7) ; \draw [draw opacity=0] (319.39,89.11) .. controls (316.68,86.39) and (315,82.64) .. (315,78.5) .. controls (315,70.22) and (321.72,63.5) .. (330,63.5) .. controls (338.28,63.5) and (345,70.22) .. (345,78.5) .. controls (345,82.64) and (343.32,86.39) .. (340.61,89.11) -- (330,78.5) -- cycle ; \draw (319.39,89.11) .. controls (316.68,86.39) and (315,82.64) .. (315,78.5) .. controls (315,70.22) and (321.72,63.5) .. (330,63.5) .. controls (338.28,63.5) and (345,70.22) .. (345,78.5) .. 
controls (345,82.64) and (343.32,86.39) .. (340.61,89.11) ; \draw (331,98.7) -- (341,88.7) ; \draw (380.94,100) -- (391.05,109.89) ; \draw [draw opacity=0] (390.65,109.48) .. controls (393.4,112.17) and (395.12,115.9) .. (395.17,120.04) .. controls (395.26,128.32) and (388.62,135.12) .. (380.34,135.21) .. controls (372.06,135.3) and (365.26,128.66) .. (365.17,120.38) .. controls (365.12,116.24) and (366.76,112.47) .. (369.44,109.72) -- (380.17,120.21) -- cycle ; \draw (390.65,109.48) .. controls (393.4,112.17) and (395.12,115.9) .. (395.17,120.04) .. controls (395.26,128.32) and (388.62,135.12) .. (380.34,135.21) .. controls (372.06,135.3) and (365.26,128.66) .. (365.17,120.38) .. controls (365.12,116.24) and (366.76,112.47) .. (369.44,109.72) ; \draw (378.94,100.02) -- (369.05,110.14) ; \draw (374.89,132.4) -- (381.89,134.9) -- (374.89,137.4) ; \draw (335,66.3) -- (327.96,63.8) -- (334.91,61.3) ; \draw (234.6,66.3) -- (227.56,63.8) -- (234.51,61.3) ; \draw (274,106) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize $0$}}; \draw (200,106) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize $-1$}}; \draw (160,106) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize $-2$}}; \draw (105,106) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize $-3$}}; \draw (60,106) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize $-4$}}; \draw (118,142) node [anchor=north west][inner sep=0.75pt] [align=left][rotate=90] {{\scriptsize $=$}}; \draw (98,146) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize $-\rho(g)$}}; \draw (314,106) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize $1$}}; \draw (374,106) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize $2$}}; \draw (414,106) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize $3$}}; \draw (475,106) node [anchor=north west][inner sep=0.75pt] [align=left] 
{{\scriptsize $4$}}; \draw (529.07,106) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize $5$}}; \draw (575,106) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize $6$}}; \draw (624.5,106) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize $7$}}; \end{tikzpicture} \caption{An illustration of the itinerary of $g$ in \eqref{eq:itinery-example} associated to the \mbox{$S$-expression} in \eqref{eq:writing-example}; the support of $\widetilde{g}$ is also indicated.} \label{fig:itinerary} \end{figure} An alternative $S$-expression for the same element $g$ is \begin{align} \label{eq:writing-example-2} g & = a_4 t^{-3} \cdot \big( t^2 \big)^2 \cdot a_0 \cdot a_1 t^3 \cdot \big( t^{-2} \big)^3 \cdot a_0 \cdot t^{-2} \cdot a_0 \cdot t^{-2} \cdot a_0 \cdot \big( t^2 \big)^3 \cdot a_0 \cdot a_1 t^{3} \nonumber \\ & = s_1 \, s_4^{\, 2} \, s_5 \, s_3 \, s_2^{\, 3} \, s_5 \, s_2 \, s_5 \, s_2 \, s_5 \, s_4^{\, 3} \, s_5 \, s_3. \end{align} It has length $18$ and is based on the semigroup word \[ W' = X_1 \, X_4^{\, 2} \, X_5 \, X_3 \, X_2^{\, 3} \, X_5 \, X_2 \, X_5 \, X_2 \, X_5 \, X_4^{\, 3} \, X_5 \, X_3. \] In this case, the itinerary associated to the $S$-expression \eqref{eq:writing-example-2} is \begin{multline*} I' = (\iota', \sigma') = \big( (1, 4, 4, 5 , 3 , 2 , 2 , 2 , 5 , 2 , 5 , 2 , 5 , 4, 4 , 4, 5, 3),\\ (0, 3, 1, -1, -1, -4, -2, 0, 2, 2, 4, 4, 6, 6, 4, 2, 0, 0, -3) \big), \end{multline*} and we observe that $\maxit(I') = 6$ and $\minit(I') = -4$. \end{example} There is a natural notion of a product of two itineraries, and it has the expected properties. We collect the precise details in a lemma. \begin{lemma-defi} \label{lem:itindecomposition} In the general set-up described above, suppose that $I_1 = (\iota_1, \sigma_1)$ and $I_2 = (\iota_2, \sigma_2)$ are itineraries, of lengths $l_1$ and $l_2$, associated to $S$-expressions $W_1, W_2$ for elements $g_1, g_2 \in G$.
Then $W = W_1W_2$ is an $S$-expression for $g = g_1g_2$; the associated itinerary \[ I = \It(S,W) = (\iota,\sigma) \] has length $l = l_1 + l_2$ and its components are given by \begin{align*} \iota(k) & = \begin{cases} \iota_1(k) & \text{if $k \in [1,l_1]_\mathbb{Z}$,} \\ \iota_2(k-l_1) & \text{if $k \in [l_1+1,l]_\mathbb{Z}$,} \end{cases} \\ \sigma(k) & = \begin{cases} \sigma_1(k) & \text{if $k \in [0,l_1]_\mathbb{Z}$,} \\ \sigma_1(l_1) + \sigma_2(k-l_1) & \text{if $k \in [l_1+1,l]_\mathbb{Z}$.} \end{cases} \end{align*} We refer to $I$ as the \emph{product itinerary} and write $I = I_1 \ast I_2$. Conversely, if $I$ is the itinerary of some element $g\in G$ associated to some $S$-expression of length~$l$ and if $l_1 \in[0,l]_\mathbb{Z}$, there is a unique decomposition $I = I_1 \ast I_2$ for itineraries $I_1$ of length $l_1$ and $I_2$ of length $l_2 = l-l_1$; the corresponding elements $g_1, g_2 \in G$ satisfy $g = g_1 g_2$. \end{lemma-defi} \begin{proof} The assertions in the first paragraph are easy to verify from \[ W= W_1 W_2 = \prod\nolimits_{k=1}^{l_1} X_{\iota_1(k)} \prod\nolimits_{k=1}^{l_2} X_{\iota_2(k)} = \prod\nolimits_{k=1}^{l_1} X_{\iota_1(k)} \prod\nolimits_{k=l_1 + 1}^{l_1 + l_2} X_{\iota_2(k-l_1)} \] and the observation that, for $k \in [0,l]_\mathbb{Z}$, \begin{multline*} \sigma(k) = - \sum\nolimits_{j=1}^{k} \rho(s_{\iota(j)}) \\ = \begin{cases} - \sum_{j=1}^{k} \rho \big(s_{\iota_1(j)} \big) = \sigma_1(k) & \text{if $k \le l_1$,} \\ - \sum_{j=1}^{l_1} \rho(s_{\iota_1(j)}) - \sum_{j=l_1+1}^{k} \rho\big(s_{\iota_2(j-l_1)} \big) = \sigma_1(l_1) + \sigma_2(k-l_1) & \text{if $k > l_1$.} \end{cases} \end{multline*} Conversely, let $I$ be the itinerary of an element $g$, associated to some $S$-expression $W = \prod_{k=1}^l X_{\iota(k)}$ of length~$l$, and let $l_1 \in [0,l]_\mathbb{Z}$.
Then $W$ decomposes uniquely as a product $W_1 W_2$ of semigroup words of lengths $l_1$ and $l_2 = l-l_1$, namely for $W_1 =\prod_{k=1}^{l_1} X_{\iota(k)}$ and $W_2 = \prod_{k=l_1+1}^{l} X_{\iota(k)}$. These are $S$-expressions for elements $g_1, g_2 \in G$ with $g = g_1 g_2$. Moreover, $W_1$ and $W_2$ give rise to itineraries $I_1, I_2$ such that $I = I_1 * I_2$. Since $W_1$ and $I_1$, respectively $W_2$ and $I_2$, determine one another uniquely, the decomposition $I = I_1 * I_2$ is unique. \end{proof} For a representative function $\mathcal{W} \colon G \to \langle X_1, \ldots, X_d \rangle$, $g \mapsto W_g$, as discussed in Definition~\ref{def:itinerary}, it is typically not the case that $W_{gh} = W_g W_h$ for $g,h \in G$. Consequently, it is typically not true that $\It_\mathcal{W}(gh) = \It_\mathcal{W}(g) \ast \It_\mathcal{W}(h)$. \begin{lemma} \label{lem:writing} Let $G = H \wr \langle t \rangle$ be a wreath product as in~\eqref{eq:standing-assumption-G}, with generating set $S$ as in~\eqref{eq:standing-assumption-S}. Put \[ C = C(S) = 1 + \max \big\{ \vert i\vert \mid i \in \supp(s) \text{ for some } s \in S \big\} \in \mathbb{N}. \] Then the following hold. \smallskip \begin{enumerate}[\rm (i)] \item \label{enu:lem-1} For every $g \in G$ with itinerary~$I$, \[ \minit(I) - C < \min(\supp(g)) \quad \text{and} \quad \max (\supp(g)) < \maxit(I) + C. \] \item \label{enu:lem-2} Let $u \in H$. Put $m_S = \max\{ C, r_S \} \in \mathbb{N}$ and \[ D = D(S,u) = l_S(u) + 2 \max \left\{ l_S\big(t^j \big) \mid 0 \le j \le m_S + r_S \right\} \in \mathbb{N}. \] Let $g \in G$ with itinerary~$I$, associated to an $S$-expression of length~$l_S(g)$.
Then, for every $j \in \mathbb{Z}$ with $\minit(I) - m_S \le j \le \maxit(I) + m_S$, the elements $h = g u^{t^{j+\rho(g)}}, \hbar = u^{t^j} g \in G$ satisfy $\rho(h) = \rho(\hbar) = \rho(g)$ and the `coordinates' of $h$, $\hbar$ are given by \[ h_{\vert i} = \begin{cases} g_{\vert i} & \text{if $i \ne j$,} \\ g_{\vert j} \, u & \text{if $i = j$,} \end{cases} \qquad \hbar_{\vert i} = \begin{cases} g_{\vert i} & \text{if $i \ne j$,} \\ u \, g_{\vert j} & \text{if $i = j$} \end{cases} \qquad \text{for $i \in \mathbb{Z}$}. \] Furthermore, \begin{equation*} l_S(h) \le l_S(g) + D \qquad \text{and} \qquad l_S(\hbar) \le l_S(g) + D. \end{equation*} \end{enumerate} \end{lemma} \begin{proof} We write $I = (\iota,\sigma)$ for the given itinerary of~$g$, and $l$ denotes the length of~$I$. \smallskip \eqref{enu:lem-1} From \eqref{equ:recover-g-from-It} it follows that \begin{align*} \supp(g) & \subseteq \bigcup_{1 \le k \le l} \left( \sigma(k-1) + \supp(s_{\iota(k)}) \right) \\ & \subseteq \bigcup_{1 \le k \le l} [\sigma(k-1)-C+1,\sigma(k-1)+C-1]_\mathbb{Z}; \end{align*} from this inclusion the two inequalities follow readily. \smallskip \eqref{enu:lem-2} In addition we now have $l = l_S(g)$. The first assertions are very easy to verify. We justify the upper bound for $l_S(h)$; the bound for $l_S(\hbar)$ is derived similarly. Since $\minit(I) - m_S \le j \le \maxit(I) + m_S$ and since itineraries proceed, in the second coordinate, by steps of length at most~$r_S \le m_S$, there exists $k \in [0,l]_\mathbb{Z}$ such that $\vert j - \sigma(k)\vert \leq m_S$. If $\vert j - \sigma(l)\vert \leq m_S$ we put $k = l-1$; otherwise we choose $k$ maximal with $\vert j - \sigma(k)\vert \leq m_S$. Next we decompose the itinerary $I$ as the product $I = I_1 \ast I_2$ of itineraries $I_1$ of length $l_1 = k+1$ and $I_2$ of length $l_2 = l-k-1$; compare with Lemma~\ref{lem:itindecomposition}. 
We denote by $g_1 = \widetilde{g_1} t^{-\sigma(k+1)}$ and $g_2 = \widetilde{g_2} t^{\sigma(k+1)+\rho(g)}$ the elements corresponding to $I_1$ and~$I_2$ so that $g = g_1 g_2 = \widetilde{g_1} \widetilde{g_2}^{t^{\sigma(k+1)}} t^{\rho(g)}$. Moreover, we observe from $\vert j - \sigma(k+1) \vert \le m_S+r_S$ that \[ g_3 = u^{t^{j-\sigma(k+1)}} = t^{-j+\sigma(k+1)} \, u \, t^{j-\sigma(k+1)} \] has length~$l_3 \le l_S(u) + 2\, l_S \big( t^{j-\sigma(k+1)} \big) \le D$. Our choice of $k$ guarantees that the support of $\widetilde{g_2}^{\, t^{\sigma(k+1)}}$ does not overlap with $\{j\} = \supp(u^{t^j})$; compare with~\eqref{enu:lem-1}. Thus $\widetilde{g_2}^{\, t^{\sigma(k+1)}}$ and $u^{t^{j}}$, both in the base group, commute with one another. This gives \begin{multline*} h = g u^{t^{j+\rho(g)}} = \widetilde{g_1} \widetilde{g_2}^{t^{\sigma(k+1)}} u^{t^j} t^{\rho(g)} = \widetilde{g_1} u^{t^j} \widetilde{g_2}^{t^{\sigma(k+1)}} t^{\rho(g)} \\ = g_1 t^{-j + \sigma(k+1)} u t^{j - \sigma(k+1)} g_2 = g_1 g_3 g_2, \end{multline*} and we conclude that $l_S(h) \le l_1 + l_2 + l_3 \leq l +D = l_S(g) + D$. \end{proof} \section{Proofs of Theorems~\ref{thm:main} and~\ref{thm:main-density}} \label{sec:proof-of-main-theorem} First we explain how Theorem~\ref{thm:main} follows from Theorem~\ref{thm:main-density}. The first ingredient is the following general lemma. \begin{lemma}[\textup{Antol\'in, Martino and Ventura~\cite[Lem.~3.1]{AnMaVe17}}] \label{lem:two steps} Let $G = \langle S \rangle$ be a group, with finite generating set~$S$. Suppose that there exists a subset $X \subseteq G$ satisfying \smallskip \begin{enumerate}[\rm (a)] \item $\dens_S(X) = 0$; \item $\sup \left\{ \frac{\vert C_G(g) \cap B_S(n)\vert }{\vert B_S(n)\vert } \mid g \in G \smallsetminus X \right\} \to 0$ as $n \to \infty$. \end{enumerate} \smallskip \noindent Then $G$ has degree of commutativity $\dc_S(G)=0$. 
\end{lemma} The second ingredient comes from~\cite[\textsection 2.1]{Co18}, where Cox shows the following. If $G = H \wr \langle t \rangle$ is the wreath product of a finitely generated group $H \ne 1$ and an infinite cyclic group $\langle t \rangle$, with base group~$N$, and if $S$ is any finite generating set for~$G$, then \[ \sup \left\{ \tfrac{\vert C_G(g) \cap B_S(n)\vert }{\vert B_S(n)\vert } \mid g \in G \smallsetminus N \right\} \to 0 \qquad \text{as} \quad n \to \infty. \] The idea behind Cox's proof is as follows. For $g \in G \smallsetminus N$, the centraliser $C_G(g)$ is cyclic and the `translation length' of $g$ with respect to~$S$ is uniformly bounded away from~$0$. The latter means that there exists $\tau_S > 0$ such that \[ \inf \left\{ \tfrac{ l_S(g^n) }{ n } \mid g \in G \smallsetminus N, \, n \in \mathbb{N} \right\} \geq \tau_S. \] Consequently, for $g \in G \smallsetminus N$ the function $n \mapsto \vert C_G(g) \cap B_S(n) \vert $ is bounded uniformly by a linear function, while $G$ has exponential word growth. Thus, Theorem~\ref{thm:main-density} implies Theorem~\ref{thm:main}, and it remains to establish Theorem~\ref{thm:main-density}. Throughout the rest of this section, let \begin{equation*} G = H \wr \langle t \rangle = N \rtimes \langle t \rangle \qquad \text{with base group} \qquad N = \bigoplus\nolimits_{i \in \mathbb{Z}} H^{t^i} \end{equation*} be the wreath product of a finitely generated subgroup~$H$ and an infinite cyclic subgroup $\langle t \rangle$, just as in~\eqref{eq:standing-assumption-G}. The exceptional case $H = 1$ poses no obstacle, for then $N = 1$ trivially has density zero; hence we assume $H \ne 1$ from now on. We fix a finite symmetric generating set $S = \{s_1, \ldots, s_d\}$ for~$G$ and employ the notation established around~\eqref{eq:standing-assumption-S}. Finally, we recall that $G$ has exponential word growth and we write \[ \lambda = \lambda_S(G) > 1 \] for the exponential growth rate of $G$ with respect to~$S$; see~\eqref{eq:exponentail-rate}.
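To fix ideas, we include a small worked example; it is purely illustrative and is not used in the arguments below.

\begin{example}
Let $G = \mathbb{Z}/2\mathbb{Z} \wr \langle t \rangle$ be the lamplighter group, where $H = \langle a \rangle$ has order~$2$, and take the finite symmetric generating set $S = \{a, t, t^{-1}\}$. Since $\supp(a) = \{0\}$ and $\supp(t^{\pm 1}) = \varnothing$, the constant of Lemma~\ref{lem:writing} is $C(S) = 1$. A direct enumeration of the Cayley graph gives $\vert B_S(1)\vert = 4$ and $\vert B_S(2)\vert = 10$: the ball of radius~$2$ consists of $1$, $a$, $t$, $t^{-1}$ together with the six new elements $t^2$, $t^{-2}$, $at$, $at^{-1}$, $ta$ and $t^{-1}a$. Continuing in this manner, one sees that $\vert B_S(n)\vert$ grows exponentially, in accordance with $\lambda = \lambda_S(G) > 1$.
\end{example}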
We start by showing that the subset of $N$ consisting of all elements with suitably bounded support is negligible in the computation of the density of~$N$. \begin{proposition} \label{prop:small-support} Fix a representative function $\mathcal{W}$ which yields for each element of $G$ an $S$-expression of shortest possible length, and let $q \colon \mathbb{N} \to \mathbb{R}_{\ge 1}$ be a non-decreasing unbounded function such that $q \in o(\log n)$. Then the sequence of sets \begin{equation*} R_q(n) = R_{\mathcal{W},q}(n) = \{g \in N \cap B_S(n) \mid \maxit_\mathcal{W}(g) - \minit_\mathcal{W}(g) \le q(n) \}, \end{equation*} indexed by $n \in \mathbb{N}$, satisfies \begin{equation*} \lim_{n \to \infty} \frac{\vert R_{q}(n)\vert }{\vert B_S(n)\vert } = 0. \end{equation*} \end{proposition} The proof of Proposition~\ref{prop:small-support} is preceded by some preparations and two auxiliary lemmata. We keep in place the set-up from Proposition~\ref{prop:small-support}. For $i \in \mathbb{Z}$, we write $H_i = H^{t^i}$. Using the notation established in Section~\ref{sec:wreath-products}, we accumulate the `coordinates' of elements of $S$ in a set \[ S_0 = \{ s_{\vert i} \mid s \in S, i \in \mathbb{Z} \} = \left\{ (s_j)_{\vert i} \mid 1 \le j \le d \text{ and } i \in \mathbb{Z} \right\} \subseteq H = H_0, \] and we set $S_i = S_0^{\, t^i} \subseteq H_i$ for $i \in \mathbb{Z}$. Then $S_i$ is a finite symmetric generating set of~$H_i$ for each $i \in \mathbb{Z}$. Indeed, every element $h \in H$ satisfies $h = \widetilde{h} = h_{\vert 0}$ and can thus be written in the form \[ h = \left( \prod\nolimits_{k=1}^l \widetilde{s_{\iota(k)}}^{\, t^{\sigma(k-1)}} \right) {\vert_0} = \prod\nolimits_{k=1}^{l} \big( \widetilde{s_{\iota(k)}}_{\vert \, -\sigma(k-1)} \big), \] based upon a suitable itinerary $I = (\iota,\sigma)$ of length~$l$.
We conclude that $H = \langle S_0 \rangle$ and consequently $H_i = \langle S_i \rangle$ for $i \in \mathbb{Z}$; the generating systems inherit from $S$ the property of being symmetric. Moreover, we have $\vert B_{H_i,S_i}(n)\vert = \vert B_{H,S_0}(n)\vert $ for all $n \in \mathbb{N}$; consequently, \[ \lambda_{S_0}(H) = \lambda_{S_i}(H_i) \qquad \text{for all $i \in \mathbb{Z}$.} \] It is convenient to split the analysis of the set~$R_q(n)$ from Proposition~\ref{prop:small-support} into two parts. First we take care of elements whose `coordinates' fall within sufficiently small balls around $1$ in~$H$, with respect to the generating set~$S_0$. \begin{lemma} \label{lem:small-coordinates} In addition to the set-up above, let $f \colon \mathbb{N} \rightarrow \mathbb{R}_{> 0}$ be a non-decreasing unbounded function such that $f \in o(n/q(n))$. Then the sequence of subsets \[ R_q^f(n) = R_{\mathcal{W},q}^f(n) =\big\{ g \in R_q(n) \mid g_{\vert i} \in B_{H,S_0}(f(n)) \text{ for all } i \in \mathbb{Z} \big\} \subseteq R_q(n), \] indexed by $n \in \mathbb{N}$, satisfies \[ \lim_{n \rightarrow \infty} \frac{\vert R_q^f(n) \vert}{\vert B_S(n) \vert } = 0. \] \end{lemma} \begin{proof} Let $C = C(S) \in \mathbb{N}$ be as in Lemma~\ref{lem:writing}\eqref{enu:lem-1} and choose $C' \in \mathbb{N}$ such that $\lambda^{C'} > \lambda_{S_0}(H)$. Then we have, for all sufficiently large $n \in \mathbb{N}$, \[ \big\vert R_q^f(n) \big\vert \leq \big\vert B_{H,S_0}(f(n)) \big\vert^{2q(n) +2C} \le \lambda^{2C'q(n)f(n) + 2C'Cf(n)} \le \lambda^{4C'C q(n) f(n)}. \] From $f\in o(n/q(n))$ we obtain \[ 4C'C q(n) f(n) - n \;\to\; -\infty \qquad \text{as $n \to \infty$} \] and hence \[ \frac{\vert R_q^f(n)\vert }{\vert B_S(n)\vert } \le \lambda^{4C'C q(n) f(n) - n} \;\to\; 0 \qquad \text{as $n \to \infty$.} \] \end{proof} Next we consider $R_q(n) \smallsetminus R_q^f(n)$, for a function $f$ as in Lemma~\ref{lem:small-coordinates} and $n \in \mathbb{N}$.
For every $g \in R_q(n) \smallsetminus R_q^f(n),$ we pick $i(g) \in \mathbb{Z}$ with \[ \minit_\mathcal{W}(g) - C < i(g) < \maxit_\mathcal{W}(g) +C \qquad \text{and} \qquad g_{\vert i(g)} \notin B_{S_0}(f(n)), \] where $C = C(S) \in \mathbb{N}$ continues to denote the constant from Lemma~\ref{lem:writing}\eqref{enu:lem-1}. Let $I = (\iota,\sigma)$, viz.\ $I_g = (\iota_g,\sigma_g)$, denote the $\mathcal{W}$-itinerary of~$g$. Then \[ g_{\vert i(g)} = \prod\nolimits_{k=1}^{l_S(g)} (s_{\iota(k)})_{\vert \, i(g)-\sigma(k-1)}. \] By successively cancelling sub-products of adjacent factors that evaluate to~$1$ and have maximal length with this property (in an orderly fashion, from left to right, say), we arrive at a `reduced' product expression \begin{equation} \label{eq:tower} g_{\vert i(g)} = \prod_{j=1}^{\ell} (s_{\iota(\kappa(j))})_{\vert \, i(g)-\sigma(\kappa(j)-1)}, \end{equation} for some $\ell = \ell_g \in [1,l_S(g)]_\mathbb{Z}$ and an increasing function $\kappa = \kappa_g \colon [1,\ell]_\mathbb{Z} \to [1, l_S(g)]_\mathbb{Z}$ that picks out a subsequence of factors. In particular, this means that, for $j_1, j_2 \in [1,\ell]_\mathbb{Z}$ with $j_1 < j_2$, \begin{equation} \label{eq:in-particular-kappa} \begin{split} \prod_{k=\kappa(j_1)+1}^{\kappa(j_2)} (s_{\iota(k)})_{\vert \, i(g)-\sigma(k-1)} & = \prod_{j = j_1+1}^{j_2} \; \prod_{k=\kappa(j-1)+1}^{\kappa(j)} (s_{\iota(k)})_{\vert \, i(g)-\sigma(k-1)} \\ & = \prod_{j = j_1+1}^{j_2} (s_{\iota(\kappa(j))})_{\vert \, i(g)-\sigma(\kappa(j)-1)} \ne 1, \end{split} \end{equation} and moreover we have $l_S(g) \ge \ell \geq l_{S_0}(g_{\vert i(g)}) \geq f(n).$ By means of suitable perturbations, we aim to produce from $g$ a collection of $\ell$ distinct elements $\dot g(1), \ldots, \dot g(\ell)$ which each carry sufficient information to `recover' the initial element~$g$. We proceed as follows. 
For each choice of $j \in [1,\ell]_\mathbb{Z}$ we decompose the itinerary $I$ for $g$ into a product $I = I_{j,1} \ast I_{j,2}$ of itineraries of length $\kappa(j)$ and $l_S(g) - \kappa(j)$; compare with Lemma~\ref{lem:itindecomposition}. Then $g = g_{j,1} g_{j,2}$, where $g_{j,1}, g_{j,2}$ denote the elements of $G$ corresponding to $I_{j,1}, I_{j,2}$. From $g \in R_q(n)$ it follows that $\maxit(I_{j,1}) - \minit(I_{j,1})$ and $\maxit(I_{j,2}) - \minit(I_{j,2})$ are bounded by~$q(n)$; in particular, $\rho(g_{j,1}) \in [-q(n),q(n)]_\mathbb{Z}$. We define \begin{equation} \label{eq:gdot} \dot g(j) = g_{j,1} \, t^{-3q(n)-4C} \, g_{j,2} \end{equation} with $C=C(S)$ as above; see Figure~\ref{fig:hi} for a pictorial illustration, which features an additional parameter $\tau$ that we introduce in the proof of Lemma~\ref{lem:big-coordinates}. \begin{figure}[H] \centering \hspace{-12pt} \begin{tikzpicture}[x=0.70pt,y=0.70pt,yscale=-1,xscale=1] \draw (100,100) -- (500,100) ; \draw (150,95) -- (150,105) ; \draw (200.72,95.13) .. controls (200.74,90.46) and (198.42,88.12) .. (193.75,88.1) -- (169.85,88) .. controls (163.18,87.97) and (159.86,85.63) .. (159.88,80.96) .. controls (159.86,85.63) and (156.52,87.95) .. (149.85,87.92)(152.85,87.94) -- (125.96,87.83) .. controls (121.29,87.81) and (118.95,90.13) .. (118.93,94.8) ; \draw (371,94.86) .. controls (371.02,90.19) and (368.7,87.85) .. (364.03,87.83) -- (340.14,87.73) .. controls (333.47,87.7) and (330.15,85.36) .. (330.17,80.69) .. controls (330.15,85.36) and (326.81,87.68) .. (320.14,87.65)(323.14,87.66) -- (296.25,87.56) .. controls (291.58,87.54) and (289.24,89.86) .. 
(289.22,94.53) ; \draw (203,83.61) -- (287,83.61) ; \draw [shift={(289,83.61)}, rotate = 180] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (9.6,-2.4) -- (0,0) -- (9.6,2.4) -- cycle ; \draw [shift={(201,83.61)}, rotate = 0] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (9.6,-2.4) -- (0,0) -- (9.6,2.4) -- cycle ; \draw (201,78.61) -- (201,88.61) ; \draw (289,78.61) -- (289,88.61) ; \draw (289,97.5) -- (289,102.5) ; \draw (201,97.5) -- (201,102.5) ; \draw (119,97.5) -- (119,102.5) ; \draw (371.5,97.5) -- (371.5,102.5) ; \draw (146,105) node [anchor=north west][inner sep=0.75pt] [align=left] {{\scriptsize 0}}; \draw (105,65) node [anchor=north west][inner sep=0.75pt] [font=\tiny] [align=left] {$\displaystyle \ \supp(g_{j,1})$}; \draw (315,65) node [anchor=north west][inner sep=0.75pt] [font=\tiny] [align=left] {$\displaystyle \supp(g_{j,2})$, shifted by $\displaystyle -\rho(g_{j,1})+3q(n)+4C$}; \draw (239.11,70) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] [align=left] {$\displaystyle \tau$}; \end{tikzpicture} \caption{An illustration of the factorisation $\dot g(j) = g_{j,1} \, t^{-3q(n)-4C} \, g_{j,2}$.} \label{fig:hi} \end{figure} \begin{lemma} \label{lem:big-coordinates} In the set-up above, the elements $\dot{g}(1), \ldots, \dot{g}(\ell)$ defined in \eqref{eq:gdot} satisfy the following: \begin{enumerate}[\rm (i)] \item \label{enu:ball} for each $j \in [1, \ell]_{\mathbb{Z}}$ the element $\dot g(j)$ lies in $B_S( n + (3 q(n) + 4C)l_S(t))$; \item \label{enu:recover} for each $j \in [1,\ell]_\mathbb{Z}$ the original element $g$ can be recovered from $\dot g(j)$; \item \label{enu:pw-distinct} the elements $\dot g(1), \ldots, \dot g(\ell)$ are pairwise distinct. 
\end{enumerate} \end{lemma} \begin{proof} \eqref{enu:ball} Lemma~\ref{lem:itindecomposition} gives $l_S(g_{j,1}) + l_S(g_{j,2}) \le l_S(g) \leq n$, and it is clear that $l_S\big(t^{-3q(n)-4C} \big) \leq (3q(n) + 4C)l_S(t)$. \smallskip \eqref{enu:recover} Let $j \in [1,\ell]_\mathbb{Z}$, and write $\mathcal{G}_1 = \supp(g_{j,1})$, $\mathcal{G}_2 = \supp(g_{j,2})$. Lemma~\ref{lem:writing}\eqref{enu:lem-1} implies that the sets $\mathcal{G}_1$ and $\mathcal{G}_2 - \rho(g_{j,1}) = \supp \big( t^{\rho(g_{j,1})} g_{j,2} \big)$ lie wholly within the interval $[-q(n)-C,q(n)+C]_\mathbb{Z}$, hence \begin{equation} \label{eq:supp-H1-H2} \supp\! \big(\dot g(j) \big) = \mathcal{G}_1 \cupdot \big( \mathcal{G}_2 - \rho(g_{j,1}) + 3q(n)+4C \big) \end{equation} with a gap \[ \tau = \underbrace{\min\!\big( \mathcal{G}_2 - \rho(g_{j,1}) + 3q(n)+4C \big)}_{\ge\, -q(n)-C+3q(n)+4C \, = \, 2q(n)+3C} - \underbrace{\max\!\big( \mathcal{G}_1 \big)}_{\le\, q(n)+C} \ge q(n) + 2C, \] subject to the standard conventions $\min \varnothing = +\infty$ and $\max \varnothing = -\infty$ in special circumstances; see Figure~\ref{fig:hi} for a pictorial illustration. In contrast, gaps between two elements in $\mathcal{G}_1$ or two elements in $\mathcal{G}_2$ are strictly less than~$q(n)+2C \le \tau$. Consequently, we can identify the two components in~\eqref{eq:supp-H1-H2} and thus $\mathcal{G}_1$ and $\mathcal{G}_2 - \rho(g_{j,1})$, without any prior knowledge of~$j$ or $g_{j,1}, g_{j,2}$. Therefore, for each $i \in \mathbb{Z}$ the $i$th coordinate of $g$ satisfies \[ g_{\vert i} = \begin{cases} \dot g(j)_{\vert i} \, \dot g(j)_{\vert \, i+3q(n)+4C} & \text{if $i \in [-q(n)-C,q(n)+C]_\mathbb{Z}$,} \\ 1 & \text{otherwise,} \end{cases} \] and hence $g$ can be recovered from~$\dot g(j)$.
\smallskip \eqref{enu:pw-distinct} For $j_1, j_2 \in [1,\ell]_\mathbb{Z}$ with $j_1< j_2$ we conclude from our choice of the `reduced' product expression~\eqref{eq:tower} and its consequence~\eqref{eq:in-particular-kappa} that \begin{multline*} \dot g(j_1)_{\vert i(g)} = \big( g_{j_1,1} \big)_{\vert i(g)}= \prod\nolimits_{k=1}^{\kappa(j_1)} \big( s_{\iota(k)} \big)_{\vert \, i(g)-\sigma(k-1)} \\ \ne \prod\nolimits_{k=1}^{\kappa(j_2)} \big( s_{\iota(k)} \big)_{\vert \, i(g)-\sigma(k-1)} = \big( g_{j_2,1} \big)_{\vert i(g)} = \dot g(j_2)_{\vert i(g)} \end{multline*} and hence $\dot g(j_1) \ne \dot g(j_2)$. \end{proof} For the proof of Proposition~\ref{prop:small-support} we now make a more careful choice of the non-decreasing unbounded function $f \colon \mathbb{N} \to \mathbb{R}_{>0}$, which entered the stage in Lemma~\ref{lem:small-coordinates}: we arrange that \[ f \in o \big( n/q(n) \big) \quad \text{and} \quad f \in \omega \big( (\lambda+1)^{m(n)} \big) \quad \text{for $m(n) = \big( 3 q(n) + 4 C \big)\hspace{1pt}l_S(t)$}, \] with $C = C(S)$ as in Lemma~\ref{lem:writing}\eqref{enu:lem-1}. For instance, we can take $f = f_\alpha$ for any real parameter $\alpha$ with $0 < \alpha < 1$, where $f_\alpha(n)=\max\left\{k^{\alpha}/q(k)\mid k\in[1,n]_{\mathbb{Z}}\right\}$ for $n\in\mathbb{N}$. Indeed, since $q(n) \in o(\log{n})$ and $q(n) \geq 1$ for all $n \in \mathbb{N}$, each of these functions satisfies \[ \lim_{n \rightarrow \infty} \frac{f_\alpha(n) q(n)}{n} \leq \lim_{n \rightarrow \infty} \frac{n^{\alpha} q(n)}{n} = 0. \] Furthermore, $q(n) \in o(\log n)$ implies $q(n)a^{q(n)} \in o(n^\beta)$ for all $a \in \mathbb{R}_{> 1}$ and $\beta \in \mathbb{R}_{> 0}$ so that \begin{align*} \lim_{n \rightarrow \infty} \frac{(\lambda + 1)^{m(n)}}{f_\alpha(n)} & \le \lim_{n \rightarrow \infty} \frac{q(n)(\lambda + 1)^{m(n)}}{n^{\alpha}} \\ & = (\lambda +1)^{4 C \, l_S(t)} \lim_{n \rightarrow \infty} \frac{q(n)(\lambda +1)^{3 l_S(t) q(n)}}{n^\alpha} \\ & = 0. 
\end{align*} \begin{proof}[Proof of Proposition \ref{prop:small-support}.] We continue with the set-up established above; in particular, we make use of the refined choice of $f$. In view of~Lemma \ref{lem:small-coordinates} it remains to show that \begin{equation*} \frac{\vert R_q(n) \smallsetminus R_q^f(n)\vert }{\vert B_S(n)\vert } \;\to\; 0 \qquad \text{as $n \to \infty$.} \end{equation*} We define a map \begin{align*} F_{n} \colon R_q(n) \smallsetminus R_q^f(n) & \,\rightarrow\, \mathcal{P} \big(B_S(n+m(n)) \big) \\ g & \,\mapsto\, \left\{ \dot g(j) \mid 1 \le j \le \ell_g \right\}; \end{align*} see~\eqref{eq:gdot} and Lemma~\ref{lem:big-coordinates}\eqref{enu:ball}. From Lemma~\ref{lem:big-coordinates}\eqref{enu:recover} we deduce that $F_{n}(g_1) \cap F_{n}(g_2) = \varnothing$ for all $g_1, g_2 \in R_q(n) \smallsetminus R_q^f(n)$ with $g_1 \ne g_2$. In addition, from $\ell_g \geq f(n)$ and Lemma~\ref{lem:big-coordinates}\eqref{enu:pw-distinct} we deduce that $\vert F_{n}(g)\vert \geq f(n)$ for all $g \in R_q(n) \smallsetminus R_q^f(n)$. This yields \[ \big\vert B_S(n+m(n)) \big\vert \geq f(n) \, \big\vert R_q(n) \smallsetminus R_q^f(n) \big\vert , \] and hence, by submultiplicativity, \begin{equation*} \begin{split} \frac{\vert R_q(n) \smallsetminus R_q^f(n)\vert }{\vert B_S(n)\vert } \le \frac{\vert B_S(n+m(n))\vert }{f(n) \, \vert B_S(n)\vert } &\le \frac{\vert B_S(m(n))\vert }{f(n)} \\ &\le \frac{(\lambda + 1)^{m(n)}}{f(n)} \;\to\; 0 \qquad \text{as $n \to \infty$.} \end{split} \end{equation*} \end{proof} \begin{remark} \label{rmk:small-support} Proposition~\ref{prop:small-support} can be established much more easily under the extra assumption that $H$ has sub-exponential word growth. 
Indeed, in this case, one can prove that \[ \lim_{n \to \infty} \frac{\left\vert R_q(n)\right\vert }{\vert B_S(n)\vert } = 0 \] for any non-decreasing unbounded function $q \colon \mathbb{N} \to \mathbb{R}_{>1}$ such that $q \in o(n)$; the proof is similar to the one of Lemma~\ref{lem:small-supp-general} below. If we assume that $H$ is finite, it is easy to see that there exists $\alpha \in \mathbb{R}_{>0}$ such that \[ \lim_{n \to \infty} \frac{\left\vert R_{q}(n)\right\vert }{\vert B_S(n)\vert } = 0 \qquad \text{for $q \colon \mathbb{N} \to \mathbb{R}_{>1}$, $n \mapsto 1+\alpha n$.} \] \end{remark} Next we establish Theorem~\ref{thm:main-density}, using ideas that are similar to those in the proof of Proposition~\ref{prop:small-support}: again we work with perturbations of a given element $g$ in such a manner that the original element can be retrieved easily. We begin with some preparations to establish an auxiliary lemma. Fix a representative function $\mathcal{W}$ which yields for each element of $G$ an $S$-expression of shortest possible length, and fix an element $u \in H \smallsetminus \{1\}$. Consider $g \in N$ with $\mathcal{W}$-itinerary $I = (\iota,\sigma)$, viz.\ $I_g = (\iota_g,\sigma_g)$. We put \[ \sigma^+ = \sigma_g^+ = \maxit_{\mathcal{W}}(g) \qquad \text{and} \qquad \sigma^- = \sigma_g^- = \minit_{\mathcal{W}}(g). \] For the time being, we suppose that \begin{align*} k^+ = k_{\mathcal{W},g}^+ & = \min \{ k \mid 0 \le k \le l_S(g) \text{ and } \sigma(k) = \sigma^+ \}, \\ k^- = k_{\mathcal{W},g}^- & = \min \{ k \mid 0 \le k \le l_S(g) \text{ and } \sigma(k) = \sigma^- \} \end{align*} satisfy $k^+\le k^-$. We decompose the itinerary for $g$ as $I = I_1 \ast I_2 \ast I_3$, where $I_1$, $I_2$, $I_3$ have lengths $k^+$, $k^- - k^+$, $l_S(g)-k^-$; compare with Lemma~\ref{lem:itindecomposition}. 
If $x = x_{\mathcal{W},g}$, $y = y_{\mathcal{W},g}$, $z = z_{\mathcal{W},g}$ denote the elements corresponding to $I_1$, $I_2$, $I_3$, then $g = xyz$; observe that the lengths of $I_1, I_2, I_3$ are automatically minimal, i.e., equal to $l_S(x), l_S(y), l_S(z)$. All this is illustrated schematically in Figure~\ref{fig:g}. Observe that $I_1$, associated to $x$, `starts' at $0$ and `ends' at $\sigma^+$, the shifted $I_2$, associated to $y$, `starts' at $\sigma^+$ and `ends' at $\sigma^-$, and the shifted $I_3$, associated to $z$, `starts' at $\sigma^-$ and `ends' at $0$. \begin{figure}[H] \centering \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-0.95,xscale=0.95] \draw (140,100) -- (570,100) ; \draw (350,97.5) -- (350,102.5) ; \draw (543.89,97.9) .. controls (543.89,93.3) and (541.56,90.9) .. (536.89,90.9) -- (444.65,90.9) .. controls (437.98,90.9) and (434.65,88.6) .. (434.65,83.93) .. controls (434.65,88.6) and (431.32,90.9) .. (424.65,90.9)(427.65,90.9) -- (332.42,90.9) .. controls (327.75,90.9) and (325.42,93.3) .. (325.42,97.9) ; \draw (403.5,95.75) .. controls (403.49,91.08) and (401.16,88.75) .. (396.49,88.76) -- (296.24,88.97) .. controls (289.57,88.99) and (286.23,86.67) .. (286.22,82) .. controls (286.23,86.67) and (282.91,89.01) .. (276.24,89.02)(279.24,89.01) -- (175.99,89.23) .. controls (171.32,89.24) and (168.99,91.57) .. (169,96.24) ; \draw (169,104.13) .. controls (169,108.8) and (171.33,111.13) .. (176,111.13) -- (341,111.13) .. controls (347.67,111.13) and (351,113.46) .. (351,118.13) .. controls (351,113.46) and (354.33,111.13) .. (361,111.13)(358,111.13) -- (537,111.13) .. controls (541.67,111.13) and (544,108.8) ..
(544,104.13) ; \draw (168.9,97.5) -- (168.9,102.5) ; \draw (543.86,98.21) -- (543.86,103.21) ; \draw (346.8,102.8) node [anchor=north west][inner sep=0.75pt] [font=\tiny] [align=left] {$\displaystyle 0$}; \draw (346,122) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] [align=left] {$\displaystyle y$}; \draw (282,70) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] [align=left] {$\displaystyle z$}; \draw (430.8,72) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] [align=left] {$\displaystyle x$}; \draw (150,113) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] [align=left] {$\displaystyle \sigma^- = \minit(I)$}; \draw (520,112) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] [align=left] {$\displaystyle \sigma^+ = \maxit(I)$}; \end{tikzpicture} \caption{A schematic illustration of the decomposition $g=xyz$.} \label{fig:g} \end{figure} \vspace{5pt} Next, we put to use the element $u \in H \smallsetminus\{1\}$ that was fixed and define, for any given $J \subseteq [\sigma^-,\sigma^+]_\mathbb{Z}$, perturbations \[ \dot x(J) = \dot x_{\mathcal{W},g}(J,u), \qquad \dot y(J) = \dot y_{\mathcal{W},g}(J,u), \qquad \dot z(J) = \dot z_{\mathcal{W},g}(J,u) \] of the elements $x,y,z$ that are specified by \begin{multline} \label{equ:rho-x-y-z} \rho(\dot x(J)) = \rho(x) = -\sigma^+, \quad \rho(\dot y(J)) = \rho(y) = -\sigma^- + \sigma^+, \\ \rho(\dot z(J)) = \rho(z) = \sigma^{-} \end{multline} and \begin{equation} \label{equ:def-xyz-dot} \begin{split} \dot x(J)_{\vert i} & = \begin{cases} x_{\vert i} \, u & \text{for $i \in J_{\ge 0}$,} \\ x_{\vert i} & \text{otherwise,} \end{cases} \\ \dot y(J)_{\vert i} & = \begin{cases} u^{\, -1} \, y_{\vert i} & \text{for $i \in \mathbb{Z}$ such that $i +\sigma^+ \in J_{\ge 0}$,} \\ y_{\vert i} \, u^{\, -1} & \text{for $i \in \mathbb{Z}$ such that $i +\sigma^+ \in J_{< 0}$,} \\ y_{\vert i} & \text{otherwise,} \end{cases} \\ \dot z(J)_{\vert i} & = \begin{cases} u \, z_{\vert i} 
& \text{for $i \in \mathbb{Z}$ such that $i + \sigma^- \in J_{<0}$,} \\ z_{\vert i} & \text{otherwise,} \end{cases} \end{split} \end{equation} where we suggestively write $J_{\geq 0} = \{ j \in J \mid j \geq 0 \}$ and $J_{<0} = \{ j \in J \mid j < 0 \}$. We observe that \begin{equation} \label{eq:recover-g} g = \dot x(J) \, \dot y(J) \, \dot z(J). \end{equation} Let $C = C(S) \in \mathbb{N}$ be as in Lemma~\ref{lem:writing}\eqref{enu:lem-1}. We call \[ \ddot g(J) = \dot x(J) \, t^{-2C} \,\dot y(J)^{-1} \, t^{-2C} \dot z(J) \] the \emph{$J$-variant of $g$}; see Figure~\ref{fig:g(I)} for a schematic illustration. \begin{figure}[H] \centering \begin{tikzpicture}[x=0.37pt,y=0.37pt,yscale=-0.95,xscale=0.95] \draw (29,104) -- (949,104) ; \draw (91.4,100) -- (91.4,108) ; \draw (864,100) -- (864,108) ; \draw (285.42,100.96) .. controls (285.42,96.29) and (283.09,93.96) .. (278.42,93.96) -- (186.19,93.99) .. controls (179.52,93.99) and (176.19,91.66) .. (176.18,86.99) .. controls (176.19,91.66) and (172.86,93.99) .. (166.19,94)(169.19,94) -- (73.95,94.03) .. controls (69.28,94.03) and (66.95,96.36) .. (66.96,101.03) ; \draw (917.35,100.98) .. controls (917.34,96.31) and (915,93.99) .. (910.33,94) -- (810.08,94.21) .. controls (803.41,94.22) and (800.08,91.9) .. (800.07,87.23) .. controls (800.08,91.9) and (796.75,94.24) .. (790.08,94.25)(793.08,94.24) -- (689.83,94.46) .. controls (685.16,94.47) and (682.84,96.8) .. (682.85,101.47) ; \draw (669.68,101.51) .. controls (669.68,96.84) and (667.35,94.51) .. (662.68,94.5) -- (501.36,94.33) .. controls (494.69,94.32) and (491.36,91.99) .. (491.36,87.32) .. controls (491.36,91.99) and (488.03,94.32) .. (481.36,94.31)(484.36,94.31) -- (309.27,94.13) .. controls (304.6,94.12) and (302.27,96.45) ..
(302.27,101.12) ; \draw (66.88,101.5) -- (66.88,106.5) ; \draw (917.33,101.5) -- (917.33,106.5) ; \draw (84,107) node [anchor=north west][inner sep=0.75pt] [font=\tiny] [align=left] {$\displaystyle 0$}; \draw (120,50) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] [align=left] {$\displaystyle \supp(\dot x( J))$}; \draw (360,45) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] [align=left] {$\displaystyle \supp(\dot y(J)^{-1})$, shifted}; \draw (685,50) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] [align=left] {$\displaystyle \supp(\dot z(J))$, shifted}; \draw (819,110) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] [align=left] {$\displaystyle -\rho(\ddot g(J))$}; \end{tikzpicture} \caption{A schematic illustration of the support components of $\ddot g(J)$.} \label{fig:g(I)} \end{figure} Observe that \begin{equation} \label{eq:ddot-rho} \ddot g(J) \in N t^{\rho(\ddot g(J))}, \qquad \text{where $\rho(\ddot g(J)) = 2 ( \sigma_g^- - \sigma_g^+ ) - 4C \le -4$.} \end{equation} Up to now we assumed that $k^+ \le k^-$. If instead $k^- < k^+$, a similar construction at this stage yields elements \begin{equation} \label{eq:ddot-rho-minus} \ddot g(J) \in Nt^{\rho(\ddot g(J))}, \qquad \text{where $\rho(\ddot g(J)) = 2 (\sigma_g^+ - \sigma_g^- ) + 4C \ge 4$;} \end{equation} in particular, there is no overlap between elements $\ddot g(J)$ arising from these two different cases. For our purposes, it suffices to work with subsets $J \subseteq [\sigma^-,\sigma^+]_\mathbb{Z}$ of size $\vert J \vert = 2$ and we streamline the discussion to this situation. \begin{lemma} \label{lem:ddot-properties} In the set-up described above, suppose that $J \subseteq [ \sigma^-, \sigma^+]_\mathbb{Z}$ with $\vert J \vert = 2$. Let $D = D(S,u) \in \mathbb{N}$ be as in Lemma~\ref{lem:writing}\eqref{enu:lem-2}. 
Then \begin{enumerate}[\rm (i)] \item\label{enu:ball-ddot} $ l_S(\ddot g(J)) \leq l_S(g) + D' $ for $D' = 6D + 2\, l_S \big( t^{2C} \big)$; \item \label{enu:recover-main} the element $g$ can be recovered from $\ddot g(J)$ and any one of $\sigma^+, \sigma^-$; \item \label{enu:pw-distinct-main} the resulting variants of $g$ are pairwise distinct, i.e., $\ddot g(J) \ne \ddot g(J')$ for all $ J' \subseteq [ \sigma^-, \sigma^+]_\mathbb{Z}$ with $\vert J' \vert =2$ and $J \ne J'$. \end{enumerate} \end{lemma} \begin{proof} \eqref{enu:ball-ddot} Since \begin{align*} J_{\ge 0} & \subseteq [0,\sigma^+]_\mathbb{Z} \subseteq [\minit(I_1),\maxit(I_1)]_\mathbb{Z},\\ J - \sigma^+ & \subseteq [\sigma^- - \sigma^+,0]_\mathbb{Z} = [\minit(I_2),\maxit(I_2)]_\mathbb{Z} ,\\ J_{<0} - \sigma^- & \subseteq [0,-\sigma^-]_\mathbb{Z} \subseteq [\minit(I_3),\maxit(I_3)]_\mathbb{Z} \end{align*} we can apply Lemma~\ref{lem:writing}\eqref{enu:lem-2}, if necessary twice, to deduce that \[ l_S( \dot x(J)) \leq l_S(x)+2D, \quad l_S( \dot y(J)) \leq l_S(y) +2D, \quad l_S ( \dot z(J)) \leq l_S(z) +2D. \] Since $l_S(x) + l_S(y) + l_S(z) = l_S(g)$, this gives \[ l_S(\ddot g(J)) \leq l_S(g) + D' \qquad \text{for $D' = 6D + 2 \, l_S \big( t^{2C} \big)$}. \] \eqref{enu:recover-main} As in the discussion above, suppose that $k^+ = k^+_{\mathcal{W},g}$ and $k^- = k^-_{\mathcal{W},g}$ satisfy $k^+ \le k^-$; the other case $k^- < k^+$ can be dealt with similarly. We have to check that $g$ can be recovered from~$\ddot g(J)$, if we are allowed to use one of the parameters $\sigma^+, \sigma^-$. Indeed, from $-\rho(\ddot g(J)) = 2 \big( \sigma^+ - \sigma^- \big) +4C$ we deduce that in such a case both $\sigma^+$ and $\sigma^-$ are available to us. Furthermore, Lemma~\ref{lem:writing}\eqref{enu:lem-1} gives \begin{align*} \supp\!
\big( \dot x (J) \big) & \subseteq [\sigma^-- C+1, \sigma^++ C-1]_\mathbb{Z}, \\ \supp \!\big( \dot y(J)^{-1} \big) & \subseteq [-C+1,\sigma^+ - \sigma^- +C-1]_\mathbb{Z}, \\ \supp\!\big(\dot z(J)\big) & \subseteq [-C+1, \sigma^+ -\sigma^-+C-1]_\mathbb{Z}, \end{align*} and thus \begin{multline*} \supp \!\big( \ddot g(J) \big) = \supp \!\big( \dot x (J) \big) \cupdot \left( \supp \!\big( \dot y(J)^{-1} \big) + \sigma^+ + 2 C \right) \\ \cupdot \left( \supp \!\big( \dot z(J) \big) + 2 \sigma^+ - \sigma^- + 4C \right) \end{multline*} allows us to recover $\dot x(J)$, $\dot y(J)$ and $\dot z(J)$ via \eqref{equ:rho-x-y-z} and \begin{align*} \dot x(J)_{\vert i} & = \begin{cases} \ddot g(J)_{\vert i} & \text{for $i \in [\sigma^-- C, \sigma^++ C]_\mathbb{Z}$,} \\ 1 & \text{for $i \in \mathbb{Z} \smallsetminus [\sigma^-- C, \sigma^++ C]_\mathbb{Z}$,} \end{cases} \\ (\dot y(J)^{-1})_{\vert i} & = \begin{cases} \ddot g(J)_{\vert \, i + \sigma^+ + 2C} & \text{for $i \in [-C, \sigma^+-\sigma^-+C]_\mathbb{Z}$,} \\ 1 & \text{for $i \in \mathbb{Z} \smallsetminus [-C, \sigma^+-\sigma^-+C]_\mathbb{Z}$,} \end{cases} \\ \dot z(J)_{\vert i} & = \begin{cases} \ddot g(J)_{\vert \, i + 2 \sigma^+ - \sigma^- + 4C} & \text{for $i \in [-C, \sigma^+-\sigma^-+C]_\mathbb{Z}$,} \\ 1 & \text{for $i \in \mathbb{Z} \smallsetminus [-C, \sigma^+-\sigma^-+C]_\mathbb{Z}$.} \end{cases} \end{align*} Using \eqref{eq:recover-g}, we recover~$g = \dot x(J) \, \dot y(J) \, \dot z(J)$. \smallskip \eqref{enu:pw-distinct-main} Again we suppose that $k^+ = k^+_{\mathcal{W},g}$ and $k^- = k^-_{\mathcal{W},g}$ satisfy $k^+ \le k^-$; the other case $k^- < k^+$ can be dealt with similarly. Let $J' \subseteq [\sigma^-, \sigma^+]_\mathbb{Z}$ with $\vert J' \vert = 2$ such that $\ddot g(J) = \ddot g(J')$. As explained above, we can not only recover $g$ but even $\dot x(J) = \dot x(J')$, $\dot y(J) = \dot y(J')$ and $\dot z(J) = \dot z(J')$ from $\ddot g(J) = \ddot g(J')$ and $\sigma^+$, say. 
Since $u \ne 1$ we deduce from \eqref{equ:def-xyz-dot} that $J = J'$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main-density}] We continue within the set-up established above; in particular, we employ the $J$-variants $\ddot g(J)$ of elements $g \in N$ for two-element subsets $J \subseteq [\sigma^-_g,\sigma^+_g]_\mathbb{Z}$, with respect to a fixed representative function $\mathcal{W}$ and a chosen element $u \in H \smallsetminus \{1\}$. Let $q \colon \mathbb{N} \to \mathbb{R}_{\ge 1}$ be a non-decreasing unbounded function such that $q \in o(\log n)$. We make use of the decomposition \begin{equation}\label{equ:A-cap-B-decomp} N \cap B_S(n) = R_{q}(n) \cupdot R_{q}^\flat(n), \qquad \text{for $n \in \mathbb{N}$,} \end{equation} where $R_{q}(n) = R_{\mathcal{W},q}(n)$ is defined as in Proposition~\ref{prop:small-support} and $R_{q}^\flat(n) = R_{\mathcal{W},q}^\flat(n)$ denotes the corresponding complement in $N \cap B_S(n)$. Let $D' \in \mathbb{N}$ be as in Lemma~\ref{lem:ddot-properties}\eqref{enu:ball-ddot}. Below we show that \begin{equation} \label{eq:D'-bound} \vert B_S(n + D')\vert > \frac{q(n)}{2} \, \vert R_{q}^\flat(n)\vert \qquad \text{for $n \in \mathbb{N}$.} \end{equation} This bound and submultiplicativity yield \[ \frac{\vert R_{q}^\flat(n)\vert }{\vert B_S(n)\vert } < \frac{2 \vert B_S(n + D')\vert }{q(n)\vert B_S(n)\vert } \le \frac{2 \vert B_S(D')\vert }{q(n)} \;\to\; 0 \quad \text{as $n \to \infty$.} \] Together with Proposition~\ref{prop:small-support} we deduce from \eqref{equ:A-cap-B-decomp} that $N$ has density zero: \[ \dens_S(N) = \lim_{n \to \infty}\frac{\vert N \cap B_S(n)\vert }{\vert B_S(n)\vert } = 0, \] where the limit genuinely exists, not merely as a limit superior. \medskip It remains to establish \eqref{eq:D'-bound}.
The set $R_{q}^\flat(n)$ decomposes into a disjoint union of subsets \[ R_{q,\ell}^\flat(n) = \{ g \in N \cap B_S(n) \mid \sigma_g^+ - \sigma_g^- =\ell \}, \quad \ell > q(n), \] and the map \begin{align*} F_n \colon R_{q}^\flat(n) & \to \mathcal{P} \!\left( B_S(n + D') \right), \\ g & \mapsto \big\{ \ddot g(J) \mid J \subseteq [\sigma_g^-, \sigma_g^+]_\mathbb{Z} \text{ with } \vert J\vert =2 \big\} \end{align*} restricts, for each $\ell \in \mathbb{N}$ with $\ell > q(n)$, to a mapping \[ F_{n,\ell} \colon R_{q,\ell}^\flat(n) \to \mathcal{P} \!\left( \big( N t^{-2\ell -4C} \cup N t^{2\ell+4C} \big) \cap B_S(n + D') \right); \] see Lemma~\ref{lem:ddot-properties}\eqref{enu:ball-ddot}, \eqref{eq:ddot-rho} and~\eqref{eq:ddot-rho-minus}. We contend that for every $h \in \big( N t^{-2\ell -4C} \cup N t^{2\ell +4C} \big) \cap B_S(n + D'),$ where $\ell > q(n)$, there are at most $\ell+1$ elements $g \in R_{q,\ell}^\flat(n)$ such that $h \in F_n(g)$. Indeed, suppose that $h \in N t^{2\ell +4C} \cap B_S(n + D')$, with $\ell > q(n)$, and suppose that $g \in R_{q,\ell}^\flat(n)$ such that $h = \ddot g(J)$ for some $J \subseteq [\sigma_g^-, \sigma_g^+]_\mathbb{Z}$ with $\vert J\vert =2$. Then $\sigma_g^+ \in [0,\ell]_\mathbb{Z}$ takes one of $\ell +1$ values, and once $\sigma^+$ is fixed, the element $g$ is uniquely determined, by Lemma~\ref{lem:ddot-properties}\eqref{enu:recover-main}. For $h \in Nt^{-2\ell -4C} \cap B_S(n + D')$ the argument is similar. From this observation and Lemma~\ref{lem:ddot-properties}\eqref{enu:pw-distinct-main} we conclude that \begin{multline*} \left\vert \big( N t^{-2\ell -4C} \cup N t^{2\ell+4C} \big) \cap B_S(n + D') \right\vert \ge \frac{1}{\ell+1} \binom{\ell +1}{2} \, \vert R_{q,\ell}^\flat(n)\vert \\ > \frac{q(n)}{2} \, \vert R_{q,\ell}^\flat(n)\vert .
\end{multline*} Hence \[ \vert B_S(n+D')\vert > \frac{q(n)}{2} \, \sum_{\ell > q(n)} \left\vert R_{q,\ell}^\flat(n) \right\vert = \frac{q(n)}{2} \, \left\vert R_{q}^\flat(n)\right\vert, \] which is the bound~\eqref{eq:D'-bound} we aimed for. \end{proof} \section{Proof of Theorem \ref{thm:main-2}} Throughout this section let $G$ denote a finitely generated group of exponential word growth of the form $G= N \rtimes \langle t \rangle$, where \begin{enumerate}[\rm (a)] \item the subgroup $\langle t \rangle$ is infinite cyclic; \item the normal subgroup $N = \langle \bigcup \big\{ H^{t^i} \mid i \in \mathbb{Z} \big\} \rangle$ is generated by the $\langle t \rangle$-conjugates of a finitely generated subgroup $H \le N$; \item \label{enu:conj-commute} the $\langle t \rangle$-conjugates of this group $H$ commute elementwise: $\big[H^{t^i}, H^{t^j} \big] = 1$ for all $i, j \in \mathbb{Z}$ with $H^{t^i} \ne H^{t^j}$. \end{enumerate} Suppose further that $S_0 = \{a_1, \dots, a_d\} \subseteq H$ is a finite symmetric generating set for $H$ and that the exponential growth rates of $H$ with respect to $S_0$ and of $G$ with respect to $S = S_0 \cup \{ t, t^{-1} \}$ satisfy \begin{equation} \label{eq:inequality-later} \lim_{n \rightarrow \infty} \sqrt[n]{\vert B_{H,S_0}(n)\vert } < \lim_{n \rightarrow \infty} \sqrt[n]{\vert B_{G,S}(n)\vert }. \end{equation} This is essentially the setting of Theorem~\ref{thm:main-2}; for technical reasons we prefer to work with symmetric generating sets. Our ultimate aim is to show that $\dens_S(N)=0$.
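For orientation, the hypotheses (a)--(c) are modelled on a standard situation; the following sketch (our illustration, with $H \wr \langle t \rangle$ denoting the restricted wreath product) is not needed in the sequel.

```latex
% A guiding example satisfying (a)--(c): the restricted wreath product
\[
  G \;=\; H \wr \langle t \rangle
    \;=\; \Big( \bigoplus_{i \in \mathbb{Z}} H^{t^i} \Big) \rtimes \langle t \rangle,
  \qquad
  N \;=\; \bigoplus_{i \in \mathbb{Z}} H^{t^i},
\]
% where distinct conjugates $H^{t^i}$, $H^{t^j}$ commute elementwise.
% If $H$ is a non-trivial finite group, then
\[
  \lim_{n \to \infty} \sqrt[n]{\vert B_{H,S_0}(n)\vert} \;=\; 1
  \;<\; \lim_{n \to \infty} \sqrt[n]{\vert B_{G,S}(n)\vert},
\]
% so condition (eq:inequality-later) holds, such lamplighter-type groups
% being of exponential word growth.
```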
Using the commutation rules recorded in~\eqref{enu:conj-commute}, it is not difficult to see that every $g \in N$ admits $S$-expressions of minimal length that take the special form \begin{align} \label{eq:minimal-1} g & = t^{-\sigma^-} \cdot \bigg( \prod_{i=\sigma^-}^{\sigma^+-1} \big( w_i(a_1, \dots, a_d) \, t^{-1} \big) \bigg) \cdot w_{\sigma^+}(a_1,\ldots,a_d) \cdot t^{\sigma^+}, \\ \label{eq:minimal-2} g & = t^{-\sigma^+} \cdot \bigg( \prod_{j=\sigma^-}^{\sigma^+ -1} \big( w_{\sigma^+ + \sigma^- - j}(a_1, \dots, a_d) \, t \big) \bigg) \cdot w_{\sigma^-}(a_1,\ldots,a_d) \cdot t^{\sigma^-}, \end{align} where the parameters $\sigma^-, \sigma^+ \in \mathbb{Z}$ satisfy $\sigma^- \le \sigma^+$ and, for every $i \in [\sigma^-,\sigma^+]_\mathbb{Z}$, we have picked a suitable semigroup word $w_i = w_i(Y_1,\ldots,Y_d)$ in $d$ variables of length $l_{S_0}(w_i(a_1,\ldots,a_d))$. The lengths of the expressions~\eqref{eq:minimal-1} and \eqref{eq:minimal-2} are equal to \[ l_S(g) = \vert \sigma^-\vert + (\sigma^+ - \sigma^-) + \vert \sigma^+\vert + \sum_{i=\sigma^-}^{\sigma^+} l_{S_0} \!\big(w_i(a_1,\ldots,a_d) \big). \] For the following we fix, for each $g \in N$, expressions as described and we use subscripts to stress the dependency on~$g$: we write $\sigma_g^-$, $\sigma_g^+$ and $w_{g,i}$ for $i \in [\sigma_g^-,\sigma_g^+]_\mathbb{Z}$, where necessary. The notation is meant to be reminiscent of the one introduced in Definition~\ref{def:itinerary}, but one needs to keep in mind that we are dealing with a larger class of groups now. \begin{lemma} \label{lem:small-supp-general} In addition to the general set-up described above, let $q \colon \mathbb{N} \to \mathbb{R}_{>0}$ be a non-decreasing unbounded function such that $q \in o(n)$. Then the sequence of sets \[ R_q(n) = \{ g \in N \cap B_S(n) \mid -q(n) \le \sigma_g^- \le \sigma_g^+ \le q(n) \}, \] indexed by $n \in \mathbb{N}$, satisfies \[ \lim_{n \to \infty} \frac{\vert R_q(n)\vert }{\vert B_S(n)\vert } = 0. 
\] \end{lemma} \begin{proof} For short we set $\mu = \lim_{n \to \infty}\sqrt[n]{\vert B_{H,S_0}(n)\vert }$ and $\lambda = \lim_{n \to \infty} \sqrt[n]{\vert B_{G,S}(n)\vert }$. According to \eqref{eq:inequality-later} we find $\varepsilon \in \mathbb{R}_{> 0}$ such that $(\mu + \varepsilon)/\lambda \le 1 - \varepsilon$ and $M = M_\varepsilon \in \mathbb{N}$ such that \[ \vert B_{H,S_0}(n)\vert \le M (\mu+\varepsilon)^n \quad \text{for all $n \in \mathbb{N}_0$.} \] This allows us to bound the number of possibilities for the elements $w_{g,i}(a_1,\ldots,a_d)$ in an $S$-expression of the form~\eqref{eq:minimal-1} for $g \in R_q(n)$ and, writing $\tilde q(n) = 2 \lfloor q(n) \rfloor +1$, we obtain \begin{align*} \vert R_q(n)\vert & \leq \sum_{\substack{m_{-\lfloor q(n) \rfloor}, \ldots, m_{\lfloor q(n) \rfloor} \in \mathbb{N}_0 \text{ st}\\ m_{-\lfloor q(n) \rfloor} + \ldots + m_{\lfloor q(n) \rfloor} \le n}} \;\; \prod_{i = -\lfloor q(n) \rfloor}^{\lfloor q(n) \rfloor} \vert B_{H,S_0}(m_i)\vert \\ & \leq \binom{n + \tilde q(n)}{\tilde q(n)} M^{\tilde q(n)} (\mu+\varepsilon)^n, \end{align*} and hence \begin{equation} \label{eq:polynomial} \frac{\vert R_q(n)\vert }{\vert B_S(n)\vert } \le \frac{\vert R_q(n)\vert }{\lambda^n} \le \binom{n+\tilde q(n)}{\tilde q(n)} M^{\tilde q(n)} (1-\varepsilon)^n \quad \text{for $n \in \mathbb{N}$.} \end{equation} We notice that $q \in o(n)$ implies $\tilde q \in o(n)$. Thus Lemma~\ref{lem:stirling} implies that $\binom{n + \tilde q(n)}{\tilde q(n)} M^{\tilde q(n)}$ grows sub-exponentially, and the term on the right-hand side of \eqref{eq:polynomial} tends to $0$ as $n$ tends to infinity. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main-2}] We continue to work in the notational set-up introduced above. 
In addition we fix a non-decreasing unbounded function $q \colon \mathbb{N} \to \mathbb{R}_{\ge 0}$ such that $q \in o(n)$ and \begin{equation}\label{eq:from-lem-exists-q} \frac{\vert B_S(n)\vert }{\vert B_S(n-q(n))\vert } \to \infty \qquad \text{as $n \to \infty$}; \end{equation} see Proposition~\ref{pro:exists-q}. As in the proof of Theorem~\ref{thm:main-density}, we make use of a decomposition \begin{equation*} N \cap B_S(n) = R_{q}(n) \cupdot R_{q}^\flat(n), \qquad \text{for $n \in \mathbb{N}$,} \end{equation*} where $R_{q}(n)$ is defined as in Lemma~\ref{lem:small-supp-general} and $R_{q}^\flat(n)$ denotes the corresponding complement in $N \cap B_S(n)$. In view of Lemma~\ref{lem:small-supp-general} it suffices to show that \begin{equation} \label{equ:R-flat-0} \frac{\vert R_{q}^\flat(n)\vert }{\vert B_S(n)\vert } \to 0 \quad \text{as $n \to \infty$.} \end{equation} It is enough to consider sufficiently large $n$ so that $n>q(n)$ holds. For every such $n$ and $g \in R_{q}^\flat(n)$, with chosen minimal $S$-expressions~\eqref{eq:minimal-1} and \eqref{eq:minimal-2}, we have $\sigma^- = \sigma_g^- < - q(n)$ or $\sigma^+ = \sigma_g^+ > q(n)$, hence \[ \left\{ g t^{-q(n)}, g t^{q(n)} \right\} \cap B_S(n-q(n)) \ne \varnothing. \] As each of the right translation maps $g \mapsto g t^{-q(n)}$ and $g \mapsto g t^{q(n)}$ is injective, we conclude that \[ \vert R_{q}^\flat(n)\vert \le 2 \vert B_S(n-q(n))\vert, \] and thus \eqref{equ:R-flat-0} follows from~\eqref{eq:from-lem-exists-q}. \end{proof} \medskip \begin{thebibliography}{99} \bibitem{AnMaVe17} Antol\'in,~Y., Martino,~A. and Ventura,~E.: Degree of commutativity of infinite groups, \textit{Proc. Amer. Math. Soc.} \textbf{145}(2) (2017), 479--485. \bibitem{BuVe02} Burillo,~J. and Ventura,~E.: Counting primitive elements in free groups, \textit{Geom. Dedicata} \textbf{93} (2002), 143--162. \bibitem{Co18} Cox,~C.\,G.: The degree of commutativity and lamplighter groups, \textit{Internat. J. 
Algebra Comput.} \textbf{28}(7) (2018), 1163--1173. \bibitem{ErTu68} Erd\H{o}s,~P. and Tur{\'a}n,~P.: On some problems in statistical group-theory IV, \textit{Acta Math. Acad. Sci. Hung.} \textbf{19} (1968), 413--435. \bibitem{GuRo06} Guralnick,~R.\,M. and Robinson,~G.\,R.: On the commuting probability in finite groups, \textit{J. Algebra} \textbf{300} (2006), 509--528. \bibitem{Gu73} Gustafson,~W.\,H.: What is the probability that two group elements commute?, \textit{Amer. Math. Monthly} \textbf{80}(9) (1973), 1031--1034. \bibitem{Ha03} De la Harpe,~P.: Topics in Geometric Group Theory. University of Chicago Press, Chicago, USA (2003). \bibitem{MaToVaVe21} Martino,~A., Tointon,~M., Valiunas,~M. and Ventura,~E.: Probabilistic nilpotence in infinite groups, \textit{Israel J. Math.} \textbf{244} (2021), 539--588. \bibitem{Ne89} Neumann,~P.\,M.: Two combinatorial problems in group theory, \textit{Bull. London Math. Soc.} \textbf{21} (1989), 456--458. \bibitem{Pi00} Pittet,~C.\,H.: The isoperimetric profile of homogeneous Riemannian manifolds, \textit{J. Differential Geom.} \textbf{54} (2000), 255--302. \bibitem{Ru79} Rusin,~D.\,J.: What is the probability that two elements of a finite group commute?, \textit{Pacific J. Math.} \textbf{82}(1) (1979), 237--247. \bibitem{Sh18} Shalev,~A.: Probabilistically nilpotent groups, \textit{Proc. Amer. Math. Soc.} \textbf{146} (2018), 1529--1536. \bibitem{To20} Tointon,~M.\,C.\,H.: Commuting probabilities of infinite groups, \textit{J. London Math. Soc.} \textbf{101}(3) (2020), 1280--1297. \bibitem{Va19} Valiunas,~M.: Rational growth and degree of commutativity of graph products, \textit{J. Algebra} \textbf{522} (2019), 309--331. \end{thebibliography} \end{document}
2205.01993v2
http://arxiv.org/abs/2205.01993v2
Horizontally quasiconvex envelope in the Heisenberg group
\documentclass[11pt, reqno]{amsart} \usepackage{latexsym,amsmath,amssymb, amsthm, mathscinet} \usepackage{cases, verbatim} \usepackage[active]{srcltx} \usepackage{hyperref} \usepackage{xcolor} \usepackage[margin=1.5in]{geometry} \setlength{\topmargin}{-0.1in} \setlength{\oddsidemargin}{0.3in} \setlength{\evensidemargin}{0.3in} \setlength{\textwidth}{5.9in} \setlength{\rightmargin}{0.7in} \setlength{\leftmargin}{-0.3in} \setlength{\textheight}{9.1in} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \theoremstyle{remark} \newtheorem{rmk}[thm]{Remark} \newtheorem{example}[thm]{Example} \numberwithin{equation}{section} \theoremstyle{definition} \newtheorem{defi}[thm]{Definition} \numberwithin{thm}{section} \numberwithin{equation}{section} \makeatletter \newcommand{\rmnum}[1]{\romannumeral #1} \newcommand{\Rmnum}[1]{\expandafter\@slowromancap\romannumeral #1@} \makeatother \def\ilap{\Delta_\infty} \def\Om{\Omega} \def\om{\omega} \def\vi{\varphi} \def\id{{\rm id\, }} \def\BBB{\widetilde{B}} \def\MM{\mathfrak{M}} \def\Mod{{\rm Mod}} \def\R{{\mathbb R}} \def\X{{\mathcal X}} \def\C{{\mathcal C}} \def\D{{\mathcal D}} \def\F{{\mathcal F}} \def\M{{\mathcal M}} \def\N{{\mathcal N}} \def\H{{\mathbb H}} \def\U{{\mathcal U}} \def\S{{\mathbf S}} \def\I{{\mathcal I}} \def\T{{\mathcal T}} \def\V{{\mathbb V}} \def\K{{\mathcal K}} \def\A{{\mathcal A}} \newcommand{\boldg}{{\mathbf g}} \newcommand{\cH}{\mathcal H} \newcommand{\cJ}{\mathcal J} \newcommand{\pO}{\partial\Omega} \newcommand{\Oba}{\overline{\Omega}} \newcommand{\ep}{\epsilon} \newcommand{\vep}{\varepsilon} \newcommand{\ol}{\overline} \newcommand{\ul}{\underline} \newcommand{\dive}{\operatorname{div}} \newcommand{\tr}{\operatorname{tr}} \newcommand{\proj}{\operatorname{proj}} \newcommand{\bpm}{\begin{pmatrix}} \newcommand{\epm}{\end{pmatrix}} \newcommand{\la}{\left\langle} \newcommand{\ra}{\right\rangle} \newcommand{\bu}{{\boldsymbol u}} 
\newcommand{\bv}{{\boldsymbol v}} \newcommand{\br}{{\boldsymbol r}} \newcommand{\bvarphi}{{\boldsymbol \varphi}} \newcommand{\bh}{{\boldsymbol h}} \newcommand{\bw}{{\boldsymbol w}} \newcommand{\bW}{{\boldsymbol W}} \newcommand{\bU}{{\mathbf {U}}} \newcommand{\bX}{{\mathbf X}} \newcommand{\beq}{\begin{equation}} \newcommand{\eeq}{\end{equation}} \newcommand{\coh}{\operatorname{co}} \newcommand{\spa}{\operatorname{span}} \DeclareMathOperator*{\limsups}{limsup^\ast} \DeclareMathOperator*{\liminfs}{liminf_\ast} \def\leb{{\mathcal L}} \def\rss{{\vert_{_{_{\!\!-\!\!-\!}}}}} \def\esssup{\mathop{\rm ess\,sup\,}} \def\essmax{\mathop{\rm ess\,max\,}} \def\osc{\mathop{\rm osc\,}} \def\diam{{\rm diam\,}} \def\dist{{\rm dist\,}} \def\divv{{\rm div\,}} \def\rot{{\rm rot\,}} \def\curl{{\rm curl\,}} \def\lip{{\rm Lip\,}} \def\rank{{\rm rank\,}} \def\supp{{\rm supp\,}} \def\loc{{\rm loc}} \newcommand{\fracm}[1]{\frac{1}{#1}} \newcommand{\brac}[1]{\left (#1 \right )} \belowdisplayskip=18pt plus 6pt minus 12pt \abovedisplayskip=18pt plus 6pt minus 12pt \parskip 8pt plus 1pt \def\tsp{\def\baselinestretch{2.65}\large \normalsize} \def\dsp{\def\baselinestretch{1.35}\large \normalsize} \title[H-quasiconvex envelope]{\protect{Horizontally quasiconvex envelope\\ in the Heisenberg group}} \author[A. Kijowski]{Antoni Kijowski} \address[Antoni Kijowski]{Analysis on Metric Spaces Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa 904-0495, Japan, {\tt [email protected]}} \author[Q. Liu]{Qing Liu} \address[Qing Liu]{Geometric Partial Differential Equations Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa 904-0495, Japan, {\tt [email protected]}} \author[X. 
Zhou]{Xiaodan Zhou} \address[Xiaodan Zhou]{Analysis on Metric Spaces Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa 904-0495, Japan, {\tt [email protected]}} \begin{document} \begin{abstract} This paper is concerned with a PDE-based approach to the horizontally quasiconvex (h-quasiconvex for short) envelope of a given continuous function in the Heisenberg group. We provide a characterization for upper semicontinuous, h-quasiconvex functions in terms of the viscosity subsolution to a first-order nonlocal Hamilton-Jacobi equation. We also construct the corresponding envelope of a continuous function by iterating the nonlocal operator. One important step in our arguments is to prove the uniqueness and existence of viscosity solutions to the Dirichlet boundary problem for the nonlocal Hamilton-Jacobi equation. Applications of our approach to the h-convex hull of a given set in the Heisenberg group are discussed as well. \end{abstract} \subjclass[2020]{35R03, 35D40, 26B25, 52A30} \keywords{Heisenberg group, h-quasiconvex functions, h-convex sets, Hamilton-Jacobi equations, viscosity solutions} \maketitle \section{Introduction}\label{sec:intro} \subsection{Background and motivation} Convex analysis is a classical and fundamental topic with numerous applications in various fields of mathematics and beyond. In contrast to the extensive literature on convex analysis in the Euclidean space, less is known about the case in a general geometric setting such as sub-Riemannian manifolds. This paper is mainly concerned with a PDE method to deal with a certain weak type of convexity for both sets and functions in the first Heisenberg group $\H$. The Heisenberg group $\mathbb{H}$ is $\mathbb{R}^{3}$ endowed with the non-commutative group multiplication \[ (x_p, y_p, z_p)\cdot (x_q, y_q, z_q)=\left(x_p+x_q, y_p+y_q, z_p+z_q+\frac{1}{2}(x_py_q-x_qy_p)\right), \] for all $p=(x_p, y_p, z_p)$ and $q=(x_q, y_q, z_q)$ in $\H$. 
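As a quick sanity check of this group law (a computation included here for concreteness; the sample points are chosen only for illustration):

```latex
% Non-commutativity of the multiplication: with $p=(1,0,0)$ and $q=(0,1,0)$,
\[
  p \cdot q \;=\; \Big( 1,\, 1,\, \tfrac{1}{2} \Big),
  \qquad
  q \cdot p \;=\; \Big( 1,\, 1,\, -\tfrac{1}{2} \Big).
\]
% The identity element is $(0,0,0)$, and inverses are given by
\[
  p^{-1} \;=\; \big( -x_p,\, -y_p,\, -z_p \big).
\]
```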
The differential structure of $\mathbb{H}$ is determined by the left-invariant vector fields \[ X_1=\frac{\partial}{\partial x}-\frac{y}{2}\frac{\partial}{\partial z}, \quad X_2=\frac{\partial}{\partial y}+\frac{x}{2}\frac{\partial}{\partial z}, \quad X_3=\frac{\partial}{\partial z}. \] One may easily verify the commutation relation $X_3=[X_1, X_2]= X_{1}X_{2}- X_{2}X_{1}$. Let \[ \mathbb{H}_0=\{h\in \mathbb{H}: h=(x, y, 0) \ \text{ for $x, y\in \mathbb{R}$}\}. \] For any $p\in \H$, the set \[ \H_p=\{p\cdot h: \ h\in \H_0\} \] is called the horizontal plane through $p$. It is clear that $\H_p=p+\spa\{X_1(p), X_2(p)\}$ for every $p\in \H$. See \cite{CDPT} for a detailed introduction to the Heisenberg group. Our primary interest is to understand how to convexify a given bounded set in the Heisenberg group $\H$; that is, we aim to find the smallest convex set that contains the given set. Let us first clarify the meaning of convex sets we consider in this work. We shall actually focus on the notion of weakly h-convex sets, which was first introduced in \cite{DGN1} and later studied in \cite{Rithesis, CCP1, ArCaMo} etc. A set $E\subset \H$ is said to be weakly h-convex if the horizontal segment connecting any two points in $E$ lies in $E$; see also Definition \ref{def h-set}. Hereafter we call such a set an h-convex set for simplicity of terminology. There are several other types of set convexity in $\H$ defined with different kinds of convex combination of two points. One seemingly natural notion is based on geodesics in $\H$. A set $E\subset \H$ is said to be geodetically convex if, for every pair of points $p, q\in E$, the set $E$ contains all geodesics joining $p$ and $q$. This notion is known to be a very strong one; the geodetically convex hull of any three points that are not on the same geodesic in $\H$ has to be the whole group \cite{MoR}.
A different notion, called strong h-convexity \cite{DGN1} or twisted convexity \cite{CCP1}, uses the dilation of the group quotient to combine two points. It is still a quite strong notion, much stronger than Euclidean convexity. In general the strongly h-convex hull of a bounded set consisting of more than two points could be unbounded \cite{CCP1}. We refer the reader to \cite{Rithesis, CCP1} for related discussions on these convexity notions. The notion of (weak) h-convexity is obviously weaker than Euclidean convexity as well as the other aforementioned notions because of the restriction to horizontal segments. Such relaxation gives rise to unexpected properties. An h-convex set can even be disconnected, as pointed out in \cite{CCP2}. One simple example of h-convex sets is the union of the two points $(0, 0, 1)$ and $(0, 0, -1)$ in $\H$. It is h-convex, because no horizontal segments exist to connect the points. A less trivial example of disconnected h-convex sets is presented in Example \ref{ex1}. Such an unusual character makes it challenging to find the h-convex hull of a given set in $\H$. In contrast to the Euclidean situation, where one can generate the convex hull of a set by simply connecting every pair of points in the set with a segment, building the h-convex hull of a given set in the Heisenberg group may require infinitely many iterations to include all necessary horizontal segments; see the proof of \cite[Lemma 4.1]{Riconv} by Rickly about this method. In general, it is not straightforward to describe and construct the h-convex hull of a set. This motivates us to search for analytic methods to solve this problem. A closely related problem we also intend to discuss is constructing the horizontally quasiconvex (or simply h-quasiconvex) envelope of a given function in an h-convex domain $\Omega \subset \H$. An h-quasiconvex function in $\Omega$ is defined to be a function whose sublevel sets are all h-convex.
It is equivalent to saying that \beq\label{h-quasi def} u(w) \leq \max\{u(p),u(q)\} \eeq holds for any $p\in \Omega$, $q \in \H_p\cap \Omega$ and $w \in [p,q]$. This notion is introduced in \cite{SuYa}. It is also studied in \cite{CCP2}, where such functions are called weakly h-quasiconvex functions. We again suppress the term ``weakly'' in this paper to avoid redundancy. Similar to the case of sets, for any function in $\Omega$, it is not trivial in general how one can find its h-quasiconvex envelope in a constructive way. We remark that a stronger notion of function convexity in $\H$, called horizontal convexity, was introduced in \cite{DGN1} and \cite{LMS}. Various properties and generalizations of such convex functions are discussed in \cite{BaRi, GuMo2, Wa, Mag, CaPi, BaDr, MaSc} etc. The corresponding convex envelope and its applications to convexity properties of sub-elliptic equations are studied in \cite{LZ2}. \subsection{Main results} Inspired by the Euclidean results in \cite{BGJ1}, in this work we provide a PDE-based approach to investigate h-convex hulls and h-quasiconvex envelopes. Our study starts from an improved characterization of h-quasiconvex functions. It is known \cite[Theorem 4.5]{CCP2} that any function $u\in C^1(\Omega)$ is h-quasiconvex if and only if \begin{equation}\label{USC} u(\xi) < u(p) \Rightarrow \langle \nabla_H u(p), (p^{-1}\cdot \xi)_h \rangle \leq 0 \end{equation} for any $\xi \in \H_p\cap \Omega$. Here, $\nabla_H u$ denotes the horizontal gradient of $u$, given by $\nabla_H u=(X_1 u, X_2u)$. Also, for each $\zeta=(x_\zeta, y_\zeta, z_\zeta)\in \H$, $\zeta_h$ represents its horizontal coordinates, that is, $\zeta_h=(x_\zeta, y_\zeta)$. For the sake of our applications to constructing h-convex hulls, we generalize this characterization to functions which are not necessarily of $C^1$ class.
Extending the Euclidean arguments in \cite{BGJ1} to $\H$, we show in Theorem \ref{thm char} that an upper semicontinuous (USC) function $u$ is h-quasiconvex if and only if \eqref{USC} holds in the viscosity sense. It is equivalent to saying that \beq\label{vis char} \sup\{\la \nabla_H \varphi(p), (p^{-1}\cdot \xi)_h\ra: \xi\in \H_p\cap \Omega, \ u(\xi)< u(p)\}\leq 0 \eeq whenever there exist $p\in \Omega$ and $\varphi\in C^1(\Omega)$ such that $u-\varphi$ attains a maximum at $p$. Further developing the generalized characterization, we adopt an iterative scheme to find the h-quasiconvex envelope of a continuous function $f$, denoted by $Q(f)$, in a bounded h-convex domain $\Omega$ under a particular Dirichlet boundary condition. The iteration is implemented by solving a sequence of nonlocal Hamilton-Jacobi equations, where the Hamiltonian is given by the left-hand side of \eqref{vis char}, that is, \[ H(p, u(p), \nabla_H u(p))=\sup\{\la \nabla_H u(p), (p^{-1}\cdot \xi)_h\ra: \xi\in \H_p\cap \Omega, \ u(\xi)< u(p)\}. \] We briefly describe our scheme in what follows. Let $f\in C(\Oba)$ be a given function satisfying \begin{numcases}{} f=K \quad &\text{on $\pO$,}\label{dirichlet-f} \\ f\leq K \quad &\text{in $\Oba$}\label{coercive-f} \end{numcases} with $K\in \R$. This set of conditions resembles the coercivity assumption on $f$ when $K>0$ is taken large. It guarantees the existence of an h-quasiconvex function $\ul{f}\in C(\Oba)$ such that $\ul{f}\leq f$ in $\Oba$ and $\ul{f}=K$ on $\partial \Omega$; see Proposition \ref{prop lower}. This in turn implies the existence of $Q(f)$ taking the same boundary value.
We set $u_0=f$ in $\Omega$ and take $u_n$ ($n=1, 2, \ldots$) to be the unique viscosity solution of \beq\label{iteration eq} u_n+H(p, u_n, \nabla_H u_n)=u_{n-1} \quad \text{in $\Omega$} \eeq satisfying the same set of conditions as in \eqref{dirichlet-f}--\eqref{coercive-f}, that is, \begin{numcases}{} u_n=K \quad &\text{on $\pO$,}\label{data bdry} \\ u_n\leq K \quad &\text{in $\Oba$.}\label{data bdry ineq} \end{numcases} It turns out that $u_n$ is a nonincreasing sequence and converges uniformly to $Q(f)$ as $n\to \infty$. This is in fact our main theorem. \begin{thm}[Iterative scheme for envelope]\label{thm scheme bdry} Suppose that $\Omega$ is a bounded h-convex domain in $\H$ and $f\in C(\Oba)$ satisfies \eqref{dirichlet-f}--\eqref{coercive-f} for some $K\in \R$. Let $Q(f)\in USC(\Omega)$ be the h-quasiconvex envelope of $f$ in $\Omega$. Let $u_0=f$ in $\Omega$ and $u_n$ be the unique solution of \eqref{iteration eq} satisfying \eqref{data bdry}--\eqref{data bdry ineq} for $n\geq 1$. Then $u_n\to Q(f)$ uniformly in $\Oba$ as $n\to \infty$. \end{thm} This type of nonlocal scheme was proposed in \cite{BGJ1} in the Euclidean case for general Dirichlet data. We remark that in the Euclidean space a similar class of nonlocal equations depending on the level sets of the unknown is also studied for applications in geometric evolutions and front propagation \cite{Car1, Car2, Sl, KLM}. Although our PDE looks analogous to theirs, the well-posedness in the sub-Riemannian case is not straightforward at all. The main difference from the Euclidean case lies in an additional constraint that requires $\xi\in \H_p\cap \Omega$, which depends on the space variable $p$. This extra constraint causes considerable difficulty in proving the comparison principle for \eqref{iteration eq}. It is the coercivity-like setting \eqref{data bdry}--\eqref{data bdry ineq} that enables us to overcome the difficulty and obtain the uniqueness of solutions. More details can be found in Section \ref{sec:comp}.
The existence of viscosity solutions, on the other hand, can be handled in a standard way by adapting Perron's method \cite{CIL}. Since the existence in the Euclidean case is not explicitly discussed in \cite{BGJ1}, we give full details of the arguments for our sub-Riemannian version in Section \ref{sec:exist}. Once the sequence $u_n$ is determined iteratively for all $n=1, 2, \ldots$, Theorem \ref{thm scheme bdry} can be proved by applying a stability argument for viscosity solutions. It is worth mentioning that, instead of adopting the restrictive setting \eqref{data bdry}--\eqref{data bdry ineq}, one can solve \eqref{iteration eq} with general boundary data and obtain the scheme convergence as in Theorem \ref{thm scheme bdry} if there exists an appropriate h-quasiconvex barrier $\ul{f}$ from below compatible with the boundary value; see Theorem \ref{thm existence2} and Remark \ref{rmk convergence2}. Another possible modification of the scheme is to consider the maximal subsolution of \eqref{iteration eq} rather than its solutions at each step. Although we only get pointwise convergence of the scheme in this case, it allows us to avoid the uniqueness issue and to construct the h-quasiconvex envelopes for a general class of upper semicontinuous functions, even in an unbounded domain $\Omega$. See Theorem \ref{thm scheme} for results in this relaxed setting. We would like to point out that, besides the PDE-based approach described above, there is a more direct constructive method to build $Q(f)$, which employs the following convexification operator: \beq\label{direct conv} T[f](w) = \inf \left\{ \max \{ f(p), f(q)\}: w \in [p, q], \ p \in \Omega,\ q \in \Omega \cap \H_p \right\}, \ \text{for }w\in \Omega. \eeq It turns out that $T[f]$ itself may not be h-quasiconvex in $\Omega$ but iterated application of $T$ yields a pointwise approximation of the h-quasiconvex envelope $Q(f)$; see Theorem \ref{thm direct} for details.
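Two elementary properties of $T$ explain why iteration approximates $Q(f)$ from above; the following short derivation from \eqref{direct conv} and \eqref{h-quasi def} is included for the reader's convenience.

```latex
% (i) Taking $p=q=w$ in \eqref{direct conv} gives $T[f] \le f$ in $\Omega$.
% (ii) If $v \le f$ is h-quasiconvex, then for every admissible triple
%      $w \in [p,q]$ with $p \in \Omega$ and $q \in \Omega \cap \H_p$,
\[
  v(w) \;\le\; \max\{v(p),\, v(q)\} \;\le\; \max\{f(p),\, f(q)\},
\]
% so taking the infimum over all such pairs yields $v \le T[f]$.
% Applying (i) and (ii) repeatedly produces the monotone chain
\[
  Q(f) \;\le\; \cdots \;\le\; T^{2}[f] \;\le\; T[f] \;\le\; f
  \quad \text{in } \Omega.
\]
```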
A similar idea is used in the proof of \cite[Lemma 4.1]{Riconv} for h-convex functions. It is also adopted in \cite{LZ2} to construct the h-convex envelope of a given function $f$. As an application of our constructive methods for h-quasiconvex envelopes, we study the h-convex hull of a given bounded set in $\H$. A key ingredient is the so-called level set formulation, which plays an important role in the study of geometric evolutions \cite{CGG, ES1, Gbook}. We can apply the same idea to our problem, since the nonlocal Hamiltonian is actually a geometric operator, homogeneous in $u$. Suppose that $E$ is a bounded open set in $\H$. Take a bounded h-convex domain $\Omega$ such that $E\subset \Omega$. We next choose a defining function $f\in C(\Oba)$ such that \beq\label{initial defining func} E=\{p\in \Omega: f(p)<0\} \eeq and \eqref{dirichlet-f}--\eqref{coercive-f} hold for some $K>0$. It turns out that the h-convex hull of $E$ coincides with the zero sublevel set of $Q(f)$, i.e., \beq\label{final defining func} \coh(E)=\{p\in \Omega: Q(f)(p)<0\}. \eeq We remark that $\coh(E)$ is independent of the choices of $f$ and $\Omega$. As long as \eqref{initial defining func} together with \eqref{dirichlet-f}--\eqref{coercive-f} holds, $\coh(E)$ obtained in \eqref{final defining func} will not change. See Theorem \ref{thm hull1} for more precise statements. This PDE approach leads us to a better understanding of h-convex hulls. One application concerns the inclusion principle. By definition, it is easily seen that $\coh(D)\subset \coh(E)$ holds for any sets $D, E\subset \H$ satisfying $D\subset E$. In Theorem \ref{thm sep}, we establish a quantitative version of the inclusion principle in $\H$. For any bounded open (or closed) sets $D, E\subset \H$, we obtain \beq\label{dist sep} \inf\left\{\tilde{d}_H(p, q): p\in \coh(D), q\in \H\setminus \coh(E) \right\}\geq \inf\left\{\tilde{d}_H(p, q): p\in D, q\in \H\setminus E\right\},
\eeq where $\tilde{d}_H$ denotes the right invariant gauge metric in $\H$; see \eqref{right metric} below. This property amounts to saying that taking h-convex hulls of two sets in $\H$, one contained in the other, does not reduce the shortest $\tilde{d}_H$ distance between their boundaries. If $E$ contains the right invariant $\delta$-neighborhood of $D$ for some $\delta>0$, then $\coh(E)$ also contains the right invariant $\delta$-neighborhood of $\coh(D)$. While such a result can be obtained comparatively easily in the Euclidean case, the proof is more involved in the Heisenberg group. Our proof is based on comparing the h-quasiconvex envelopes of defining functions of both sets combined with arguments involving sup-convolutions. It is not clear to us whether one can replace $\tilde{d}_H$ by the left invariant gauge metric $d_H$. This problem is related to the h-convexity preserving property for solutions of evolution equations in the Heisenberg group; see some partial results in \cite{LMZ, LZ2}. Another natural question is on the continuity (or stability) of $\coh(E)$ with respect to the set $E$, which we discuss in the last part of this paper. In contrast to the Euclidean case, in general the Hausdorff distance $d_H(\coh(E_j), \coh(E))$ between $\coh(E_j)$ and $\coh(E)$ does not necessarily converge to zero when $d_H (E_j,E)\to 0$ in the Heisenberg group; see Example \ref{ex2}. One can show rather easily that such a stability result holds under a strict star-shapedness assumption on the set $E$; see Proposition \ref{prop stable}. \subsection{Notations} We conclude the introduction by listing several notations that are often used in the work. Throughout this paper, $|\cdot |_G$ stands for the Kor\'{a}nyi gauge, i.e., for $p=(x, y, z)\in \H$ \[ |p|_G=\left((x^2+y^2)^2+16z^2\right)^{\frac{1}{4}}. \] The Kor\'{a}nyi gauge induces a left invariant metric $d_H$ on $\H$ with \[ d_H(p, q)=|p^{-1}\cdot q|_G, \quad p, q\in \H.
\] We also use the right invariant metric $\tilde{d}_H$, defined by \beq\label{right metric} \tilde{d}_H(p, q)=|p\cdot q^{-1}|_G, \quad p, q\in \H. \eeq The associated distances between a point $p\in \H$ and a set $E\subset \H$ are respectively denoted by $d_H(p, E)$ and $\tilde{d}_H(p, E)$. For two sets $D, E\subset \H$, we write $d_H(D, E)$ and $\tilde{d}_H(D, E)$ to denote respectively the Hausdorff distances between $D$ and $E$ with respect to the metrics $d_H$ and $\tilde{d}_H$, i.e., for $d=d_H$ or $d=\tilde{d}_H$, \[ d(D, E)=\max\left\{\sup_{p\in D}d(p, E),\ \sup_{p\in E} d(p, D)\right\}. \] We denote by $B_r(p)$ the open gauge ball in $\H$ centered at $p\in \H$ with radius $r>0$, that is, \[ B_r(p)=\{q\in \H: |p^{-1}\cdot q|_G< r\}, \] while $\tilde{B}_r(p)$ represents the corresponding right-invariant metric ball. Let $\delta_\lambda$ denote the non-isotropic dilation in $\H$ with $\lambda\geq 0$, that is, $\delta_\lambda(p)=(\lambda x, \lambda y, \lambda^2 z)$ for $p=(x, y, z)\in \H$. We write $\delta_\lambda(E)$ to denote the dilation of a given set $E\subset \H$, that is, $\delta_\lambda(E)=\{\delta_\lambda(p): p\in E\}$. The rest of the paper is organized in the following way. In Section \ref{sec:h-quasiconvex}, we first review the definitions and basic properties of h-convex sets and h-quasiconvex functions, and then present the viscosity characterization of upper semicontinuous h-quasiconvex functions. We also show how to construct the h-quasiconvex envelope by iterated application of the operator in \eqref{direct conv}. Section \ref{sec:nonlocal-hj} is devoted to the well-posedness of the nonlocal Hamilton-Jacobi equation, including the uniqueness and existence of viscosity solutions. Our PDE-based iterative scheme is introduced in Section \ref{sec:iteration}. We finally discuss applications of our results to the h-convex hull of a given open or closed set in Section \ref{sec:convex-hull}.
\section{H-quasiconvex functions}\label{sec:h-quasiconvex} \subsection{Definition and basic properties} Let us first go over the definition of h-convex sets. We restrict the original definition proposed in \cite{DGN1} for general Carnot groups to the case of $\H$. \begin{defi}[Definition 7.1 in \cite{DGN1}]\label{def h-set} We say that a set $E \subset \H$ is h-convex if for every $p\in E$ and $q\in \H_p\cap E$, the horizontal segment $[p, q]$ joining $p$ and $q$ stays in $E$. \end{defi} As pointed out in \cite[Proposition 7.4]{DGN1}, any gauge ball $B_R(p)$ with $p\in \H$ and $R>0$ is h-convex. The notion of h-convex sets is in fact very weak. There are numerous h-convex sets in $\H$ that are obviously not convex in the Euclidean sense. \begin{example}[Disconnected h-convex sets]\label{ex1} Denote by $\pi(0, \rho)$ the planar open disk centered at the origin with radius $\rho>0$, i.e., \begin{equation}\label{h-pi} \pi(0, \rho) := \{(x,y) \in \R^2: x^2+y^2 < \rho^2 \}. \end{equation} Let us consider a disconnected set $E =(\pi(0, r) \times \{0\}) \cup (\pi(0,R) \times \{t\})$, where $r, R, t>0$ are given. Such a set $E$ is h-convex under appropriate conditions on $r$, $R$ and $t$. To see this, we take the horizontal plane \[ Z_t=\{(x, y, z)\in \H: z=t\} \] and compute the distance between $q_t=(0, 0, t)$ and $\H_p\cap Z_t$ for each point $p=(x_p, y_p, 0)\in \pi(0, r)\times \{0\}$. It turns out that \[ d_H(q_t, \H_p\cap Z_t) = \frac{2 t}{\sqrt{x_p^2 + y_p^2}} \geq \frac{2t}{r}. \] If $d_H(q_t, \H_p\cap Z_t)\geq R$, then none of the horizontal planes of points $p\in \pi(0, r)\times \{0\}$ in the lower disk will intersect the upper disk $\pi(0, R)\times \{t\}$. This means that $E$ is h-convex if $2t\geq rR$. It is obvious that in general $E$ is not connected and thus cannot be convex as a subset of $\R^3$. It is also clear that $E$ is no longer h-convex if $2t<rR$. \end{example} Let us also recall from \cite{CCP2} the definition of h-quasiconvex functions in $\H$.
\begin{defi}[Definition 4.3 in \cite{CCP2}]\label{def h-fun} Suppose that $\Omega \subset \H$ is h-convex. We say that a function $u :\Omega \to \R$ is h-quasiconvex if \eqref{h-quasi def} holds for every $p\in \Omega$, $q \in \H_p\cap \Omega$ and $w \in [p,q]$. In other words, $u$ is h-quasiconvex if for every $\lambda \in \R$ the sublevel set $\{ w \in \Omega: u(w) \leq \lambda\}$ is h-convex. \end{defi} \begin{rmk} We remark that it is equivalent to define h-quasiconvex functions via the strict sublevel sets $\{w\in \Omega: u(w)<\lambda\}$. In fact, first note that \[ \{w\in \Omega: u(w)\le \lambda\}=\bigcap_{\vep>0}\{w\in \Omega: u(w)< \lambda+\vep\} \] and the intersection of h-convex sets is still h-convex. On the other hand, if $\{w\in \Omega: u(w)\le \lambda\}$ is h-convex for every $\lambda\in \R$, then \eqref{h-quasi def} holds for every $p\in \Omega$, $q \in \H_p\cap \Omega$ and $w \in [p, q]$. In particular, if $p, q\in \{w\in \Omega: u(w)<\lambda\}$ and $q \in \H_p\cap \Omega$, it follows that $u(w)<\lambda$ for all $w \in [p, q]$, and thus $\{w\in \Omega: u(w)<\lambda\}$ is h-convex. \end{rmk} For our later applications, below we provide a typical h-quasiconvex function associated with a given h-convex set. The construction is based on the right invariant metric $\tilde{d}_H$, as given in \eqref{right metric}. \begin{prop}[A metric-based h-quasiconvex function]\label{prop metric-quasi} Suppose that $\Omega$ is an h-convex domain in $\H$ and $E$ is an h-convex open subset of $\Omega$. Then $\psi_{E}\in C(\Omega)$ given by \[ \psi_E(p)=-\tilde{d}_H(p, \H\setminus E), \quad p\in \Omega, \] is an h-quasiconvex function in $\Omega$. \end{prop} \begin{proof} It suffices to show that \[ E_\lambda:=\{p\in \Omega: \psi_E(p)<\lambda\} \] is h-convex for every $\lambda\in \R$. We only need to consider the case $\lambda< 0$, since $E_\lambda=E$ if $\lambda=0$ and $E_\lambda=\Omega$ if $\lambda> 0$.
Assume by contradiction that there exist $p, q\in E_\lambda$ with $q\in \H_p$ as well as a point $w\in [p, q]\cap (\Omega\setminus E_\lambda)$. This means that \[ \tilde{d}_H(w, \H\setminus E)\le-\lambda \] and there exists $\zeta\in \H\setminus E$ such that $|\zeta\cdot w^{-1}|_G\leq -\lambda$. Taking $\xi=\zeta\cdot w^{-1}\cdot p, \quad \eta=\zeta\cdot w^{-1} \cdot q$, we can easily verify that $\eta\in \H_\xi$ and $\zeta\in [\xi, \eta]$. Suppose that $\xi \in \H\setminus E$ holds. Then, since \[ \tilde{d}_H(\xi, p)=|\xi\cdot p^{-1}|_G=|\zeta\cdot w^{-1}|_G\leq -\lambda, \] we have $\tilde{d}_H(p, \H\setminus E)\leq -\lambda$ or equivalently $\psi_E(p)\geq \lambda$, which is a contradiction to the condition $p\in E_\lambda$. We can similarly derive a contradiction if $\eta\in \H\setminus E$ holds. The remaining case when $\xi, \eta\in E$ is an obvious contradiction to the assumption that $E$ is h-convex, since $\zeta\in [\xi, \eta]$ and $\zeta\in \H\setminus E$. \end{proof} \subsection{Viscosity characterization of h-quasiconvexity} The following characterization of h-quasiconvexity is known for smooth functions \cite[Theorem 4.5]{CCP2}. We provide a generalized result in the nonsmooth case by extending \cite[Proposition 2.2]{BGJ1} to the Heisenberg group. \begin{thm}[Viscosity characterization of h-quasiconvexity]\label{thm char} Let $\Omega \subset \H$ be open and h-convex and $u :\Omega \to [-\infty,\infty)$ be upper semicontinuous. Then, $u$ is h-quasiconvex if and only if whenever there exist $p\in \Omega$ and $\varphi\in C^1(\Omega)$ such that $u-\varphi$ attains a maximum at $p$, \begin{equation}\label{usc} \langle \nabla_H \varphi(p), (p^{-1}\cdot \xi)_h \rangle \leq 0\quad \text{holds for any $\xi \in \H_p\cap \Omega$ satisfying $u(\xi) < u(p)$}. 
\end{equation} \end{thm} \begin{rmk} It is worth mentioning that, as in \cite{CCP2}, we can also express the inner product term $\la \nabla_H \varphi(p), (p^{-1}\cdot \xi)_h\ra$ in \eqref{usc} by $\la \nabla \varphi(p), \xi-p\ra$. This is possible because of the condition that $\xi\in \H_p$. We shall maintain the expression on the left hand side to suggest possible generalizations of our results to general Carnot groups, which are not elaborated in this paper. \end{rmk} \begin{proof}[Proof of Theorem \ref{thm char}] Let us prove the necessity of \eqref{usc} by contradiction. Suppose that $u$ is an upper semicontinuous h-quasiconvex function, $\varphi\in C^1(\Omega)$ is such that $u-\varphi$ attains a maximum at $p\in \Omega$, and there exists $\xi\in \H_p\cap \Omega$ with $u(\xi) < u(p)$ such that \[ \langle \nabla_H \varphi(p) , (p^{-1}\cdot \xi)_h \rangle>0. \] Then, for $\lambda>0$ small enough and $w = p\cdot \delta_\lambda (\xi^{-1}\cdot p)$ there holds $u(w) < u(p)$. Indeed, the directional derivative of $\varphi$ at $p$ in the direction $p^{-1}\cdot \xi$ is positive, and hence $\varphi(w) < \varphi(p)$ for $\lambda>0$ small enough. Since $u-\varphi$ attains a maximum at $p$, we obtain $u(p)>u(w)$. We thus have $\xi \in \H_w$, $p \in [w, \xi]$ and $u(p)>\max\{u(w),u(\xi)\}$, which contradicts the h-quasiconvexity of $u$. Now we are left with proving the sufficiency of \eqref{usc}. Suppose that $u$ is not h-quasiconvex. Then, without loss of generality, there exists a point $\xi=(x_\xi, y_\xi, 0) \in \H_0$ such that \[ \max_{[0,\xi]} u > \max\{u(0),u(\xi)\}. \] Denote by $Z$ the set of maximizers of $u$ on the segment $[0,\xi]$. By the upper semicontinuity of $u$ there exists $R \in (0, |\xi|/4)$ small enough such that \[ \min \{d_H(0,Z),d_H(\xi,Z)\}>R \] (i.e., $Z$ is in the relative interior of $[0,\xi]$) and $ u(q) < \max_{[0, \xi]} u$ for any $q \in B_R(0) \cup B_R (\xi)$.
Let \[ \mathcal{C}:=\{q \in \Omega: d_H(q,[0,\xi])<R, 0<\langle q,\xi \rangle < |\xi|^2 \} \] be a cylindrical neighborhood of the segment $[0,\xi]$. We define $\varphi_n$ by \[ \varphi_n(p) = \frac{1}{n} \langle p, \xi \rangle +n \left((x_py_\xi-y_px_\xi)^2+z_p^2\right), \quad p\in \Omega. \] As $n\to \infty$, $u-\varphi_n\to u$ pointwise in $[0,\xi]$ and $u-\varphi_n\to -\infty$ elsewhere in $\overline{\mathcal{C}}$. Then there exists a sequence $p_n=(x_{p_n},y_{p_n},z_{p_n}) \in \overline{\mathcal{C}}$ such that \[ \max_{\overline{\mathcal{C}}} (u-\varphi_n ) =u(p_n)- \varphi_n(p_n), \] and $p_n$ converges, along a subsequence, to a point in $Z$. Let us index the subsequence still by $n$ for notational simplicity. Suppose that the subsequence $p_n \to (tx_\xi, ty_\xi, 0)$ as $n \to \infty$ for some $t \in (R/|\xi|,1-R/|\xi|)$. Let us consider $w_n =(x_{p_n}/ t, y_{p_n}/t ,z_{p_n}) \in \H_{p_n}$. Observing that \[ d_H(\xi,w_n) = |\xi^{-1} \cdot w_n |_G \to 0, \] we get $u(w_n)<u(p_n)$ for $n$ large enough. Let us compute $\nabla_H \varphi_n(p_n)$ as follows: \[ \nabla_H \varphi_n (p_n) = \Bigg( \frac{x_\xi }{n} +2n y_\xi (x_{p_n}y_\xi -y_{p_n}x_\xi)- n y_{p_n} z_{p_n}, \frac{y_\xi }{n} -2nx_\xi (x_{p_n}y_\xi -y_{p_n}x_\xi)+ n x_{p_n} z_{p_n}\Bigg). \] Since \[ (p_n^{-1}\cdot w_n)_h = \left( \frac{1}{t}-1 \right) (x_{p_n}, y_{p_n}), \] we have \[\langle \nabla_H \varphi_n(p_n),(p_n^{-1}\cdot w_n)_h \rangle = \left(\frac{1}{t}-1\right) \left( \frac{\langle p_n,\xi \rangle}{n} +2n (x_{p_n}y_\xi -y_{p_n}x_\xi)^2 \right)>0, \] which contradicts \eqref{usc}. \end{proof} Theorem \ref{thm char} amounts to saying that $u\in USC(\Omega)$ is h-quasiconvex if and only if \eqref{vis char} holds in the viscosity sense, that is, \[ \sup\{\la \nabla_H \varphi(p), (p^{-1}\cdot \xi)_h\ra: \xi\in \H_p\cap \Omega, \ u(\xi)< u(p)\}\leq 0 \] whenever there exist $p\in \Omega$ and $\varphi\in C^1(\Omega)$ such that $u-\varphi$ attains a maximum at $p$.
As a standard remark in the viscosity solution theory, the maximum here can be replaced by a local maximum or a strict maximum. In spite of the nonlocal nature, we can obtain the following property by using the geometricity of the operator. \begin{lem}[Invariance with respect to composition]\label{lem geometric} Let $\Omega\subset \H$ be h-convex and $u: \Omega\to \R$ be bounded and upper semicontinuous. Assume that $g: \R\to \R$ is a nondecreasing continuous function. If $u$ is h-quasiconvex in $\Omega$, then so is $g\circ u$. \end{lem} \begin{proof} It is clear that $g\circ u$ is still upper semicontinuous. Assume first that $g\in C^1(\R)$ is strictly increasing. Suppose that there exist $p_0\in \Omega$ and $\varphi\in C^1(\Omega)$ such that $g\circ u-\varphi$ attains a maximum at $p_0$. Then, using the inverse function $g^{-1}$ of $g$, we see that $u-g^{-1}\circ\varphi$ also attains a maximum at $p_0$. Applying the characterization of h-quasiconvexity in Theorem \ref{thm char}, we get \[ \sup\{\la \nabla_H (g^{-1}\circ \varphi)(p_0), (p_0^{-1}\cdot \xi)_h\ra: \xi\in \H_{p_0}\cap \Omega, \ u(\xi)< u(p_0)\}\leq 0, \] which implies \beq\label{geometric eq1} \sup\{\la \nabla_H \varphi(p_0), (p_0^{-1}\cdot \xi)_h\ra: \xi\in \H_{p_0}\cap \Omega, \ (g\circ u)(\xi)< (g\circ u)(p_0)\}\leq 0. \eeq This shows that $g\circ u$ is h-quasiconvex. For the general case when $g\in C(\R)$ is nondecreasing, we can take a sequence of strictly increasing functions $g_k\in C^1(\R)$ such that $g_k\to g$ locally uniformly in $\R$ as $k\to \infty$. In this case, suppose that there exist $p_0\in \Omega$ and $\varphi\in C^1(\Omega)$ such that $g\circ u-\varphi$ attains a strict maximum at $p_0$. Then there exist $p_k\in \Omega$ such that $p_k\to p_0$ as $k\to \infty$ and $g_k\circ u-\varphi$ attains a local maximum at $p_k$.
Adopting the argument above, we obtain the h-quasiconvexity of $g_k\circ u$, that is, \[ \sup\{\la \nabla_H \varphi(p_k), (p_k^{-1}\cdot \xi)_h\ra: \xi\in \H_{p_k}\cap \Omega, \ (g_k\circ u)(\xi)< (g_k\circ u)(p_k)\}\leq 0. \] By the uniform convergence of $g_k\circ u$ to $g\circ u$ in $\Omega$ and the continuity of $\H_p$ with respect to $p$, we can pass to the limit as $k\to \infty$ and obtain the relation \eqref{geometric eq1} again. \end{proof} \subsection{H-quasiconvex envelope} In what follows, we introduce the h-quasiconvex envelope of a given function. \begin{defi}[Definition of h-quasiconvex envelope]\label{defi envelope} Let $\Omega$ be an h-convex domain in $\H$ and $f: \Omega \to \R$ be a given function. We say that $Q(f)$ is the h-quasiconvex envelope of $f$ if it is the greatest h-quasiconvex function majorized by $f$, that is, \[ Q(f)(p) := \sup \{g(p): g \leq f \text{ and } g \text{ is h-quasiconvex}\}, \] where we adopt the convention that $\sup \emptyset = -\infty$. \end{defi} By definition, $Q(f)$ is monotone in $f$; namely, $Q(f)\leq Q(g)$ in $\Omega$ holds provided that $f\leq g$ in $\Omega$. Moreover, $Q(f)$ is stable with respect to $f$ in the following sense. \begin{prop}[Stability of h-quasiconvex envelope]\label{prop stability} Suppose that $\Omega$ is an h-convex domain in $\H$ and $f, g: \Omega\to \R$ are given functions. Assume that both $Q(f)$ and $Q(g)$ exist in $\Omega$. Then there holds \beq\label{fun stable} \sup_{\Omega} |Q(f)-Q(g)|\leq \sup_{\Omega}|f-g|. \eeq \end{prop} \begin{proof} Let $M:=\sup_{\Omega}|f-g|$. Since $f-M\leq g$ in $\Omega$, by the monotonicity of $Q$, we get \beq\label{fun stable1} Q(f-M)\leq Q(g) \quad \text{in $\Omega$.} \eeq Noticing that $Q(f)-M$ is h-quasiconvex and $Q(f)\leq f$ in $\Omega$, by Definition \ref{defi envelope}, we deduce that \[ Q(f)-M\leq Q(f-M)\quad \text{in $\Omega$}, \] which, by \eqref{fun stable1}, yields \[ Q(f)-M\leq Q(g)\quad \text{in $\Omega$}.
\] Exchanging the roles of $f$ and $g$, we conclude the proof of \eqref{fun stable}. \end{proof} Let us now discuss how to find the h-quasiconvex envelope of a given function. A straightforward method is to employ a convexification operator. For an h-convex domain $\Omega \subset \H$ and $f: \Omega \to \R$, let $T[f]$ be given by \eqref{direct conv}. It is clear that $\inf_{\Omega} f \leq T[f] \leq f$ in $\Omega$. Also, it is easily seen that $T[f]=f$ in $\Omega$ if and only if $f$ is h-quasiconvex. This operator is inspired by its Euclidean analogue, which is given by \[ T_{eucl}[f](w)= \inf \left\{ \max \{ f(p),f(q)\}: w \in [p, q], p \in \Omega, q \in \Omega \right\},\quad w\in \Omega. \] In the Euclidean case, the quasiconvex envelope, written as $Q_{eucl}(f)$, satisfies \[ Q_{eucl}(f)=T_{eucl}[f] \] in a bounded convex domain $\Omega$. In contrast, the following example shows that in the Heisenberg group, in general $T[f]$ is not necessarily an h-quasiconvex function. \begin{example}\label{exa direct} Let $f:\H\to \R$ be defined as $f(p)=|1-z^2|$ for $p=(x, y, z)\in \H$. One can compute directly to get, for $p=(x, y, z)$, \[ T[f](p)=\begin{cases} z^2-1&\ |z|\ge 1, \\ 0&\ |z|<1\ \text{and}\ (x,y)\neq(0,0),\\ 1-z^2 &\ |z|<1\ \text{and}\ (x, y)=(0,0),\\ \end{cases} \] which fails to be h-quasiconvex. In fact, letting $p=(x, y, t)$, $q=(-x, -y, t)$ with $|t|<1$ and $(x, y)\neq (0,0)$, we see that $q\in \mathbb{H}_p$ and at $w=(0,0, t)\in [p, q]$, we have \[ T[f](w)=1-t^2>0=\max\{T[f](p), T[f](q)\}. \] However, if we apply the operator one more time, then we have, for $p=(x, y, z)\in \H$, \[ T^2[f](p)=\begin{cases} z^2-1&\ |z|\ge 1, \\ 0&\ |z|<1. \end{cases} \] It is not difficult to see that $Q(f)=T^2[f]$ in $\H$. Indeed, noticing that $T^2[f]$ is h-quasiconvex, by definition we have $T^2[f]\leq Q(f)$ in $\H$. On the other hand, the reverse inequality $Q(f)\leq T^2[f]$ can be obtained by applying the operator $T$ twice to the inequality $Q(f)\leq f$.
\end{example} It turns out that in general one can obtain the h-quasiconvex envelope by iterating the operator $T$. This type of iteration is also used in \cite{LZ2} to construct the h-convex envelope of a given continuous function in the Heisenberg group. \begin{thm}[Iterative scheme with direct convexification]\label{thm direct} Let $\Omega$ be an h-convex domain in $\H$. Suppose that $f: \Omega\to \R$ is bounded from below. Let $T$ be the operator given by \eqref{direct conv}. Then $T^n[f] \to Q(f)$ pointwise in $\Omega$ as $n\to \infty$. \end{thm} \begin{proof} Notice that, by the monotonicity of $T^n[f]$ in $n$ and the boundedness of $f$ from below, the pointwise limit of $T^n[f]$ exists. Let us denote it by $F$, i.e., \[ F:=\lim_{n\to \infty} T^n[f]. \] Let us fix $\varepsilon>0$, $p \in \Omega$, $q \in \Omega \cap \H_p$ and $w \in [p,q]$. For $n$ sufficiently large, there holds \[ F(p) \geq T^n[f](p) - \varepsilon, \qquad F(q) \geq T^n[f](q) - \varepsilon. \] Moreover, we have \[ \max \{ T^n[f](p),T^n[f](q) \} \geq T^{n+1} [f](w) \geq F(w), \] and therefore \[ \max\{ F(p), F(q) \} \geq F(w) -\varepsilon. \] Letting $\varepsilon \to 0$, we deduce that $F$ is h-quasiconvex and thus $F \leq Q(f)$ in $\Omega$. On the other hand, for any $w \in \Omega$ and $p \in \Omega$, $q \in \Omega \cap \H_p$ such that $w\in [p, q]$, there holds \[ \max \{ f(p), f(q)\} \geq \max\{ Q(f)(p),Q(f)(q)\} \geq Q(f)(w). \] It follows that $T[f] \geq Q(f)$ in $\Omega$. We can iterate this argument to obtain $T^n[f] \geq Q(f)$ in $\Omega$ for every $n$. Hence, sending $n\to \infty$, we are led to $F\geq Q(f)$ in $\Omega$, which completes the proof. \end{proof} As shown in Example \ref{exa direct}, $T^n[f]=Q(f)$ may hold for a finite $n$. We do not know in general how many iterations one needs to run to obtain $Q(f)$. It would be interesting to find a condition to guarantee the finiteness of $n$.
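The computations in Example \ref{exa direct} can be checked numerically. The Python sketch below is illustrative only; it assumes that \eqref{direct conv} takes the form analogous to the Euclidean formula displayed above, with horizontal segments in place of Euclidean ones, and it restricts the infimum to finitely many sampled segments, so it only produces an upper approximation of $T[f]$. Along the horizontal line through $w=(x,y,z)$ with direction $(\cos\theta,\sin\theta,0)$, the vertical coordinate moves linearly with slope $\tfrac12(x\sin\theta-y\cos\theta)$ per unit of horizontal length, which is what the code exploits.

```python
import math

def f(p):
    # The function from the example above: f(x, y, z) = |1 - z^2|.
    x, y, z = p
    return abs(1.0 - z * z)

def T_approx(func, w, n_dir=24, n_len=60, L=8.0):
    """Grid-search upper approximation of T[f](w): minimize
    max{f(p), f(q)} over sampled horizontal segments [p, q] through w."""
    x, y, z = w
    best = func(w)                       # degenerate segment p = q = w
    for i in range(n_dir):
        th = math.pi * i / n_dir
        vx, vy = math.cos(th), math.sin(th)
        slope = 0.5 * (x * vy - y * vx)  # vertical slope along this line
        steps = [L * j / n_len for j in range(n_len + 1)]
        for s in steps:                  # endpoint p at parameter -s
            p = (x - s * vx, y - s * vy, z - s * slope)
            fp = func(p)
            for t in steps:              # endpoint q at parameter +t
                q = (x + t * vx, y + t * vy, z + t * slope)
                best = min(best, max(fp, func(q)))
    return best
```

At $(1,0,0.5)$ the approximation is close to the claimed value $0$, at $(0,0,0.5)$ it equals $1-0.5^2=0.75$ exactly (the vertical slope vanishes on the $z$-axis), and the failure of h-quasiconvexity, $T[f](0,0,0.5)>\max\{T[f](\pm1,0,0.5)\}$, is visible directly.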
\section{Nonlocal Hamilton-Jacobi equation}\label{sec:nonlocal-hj} Inspired by \cite{BGJ1}, we would like to focus our attention on a PDE-based approach to build the h-quasiconvex envelope of a given function $f$ in a bounded domain $\Omega\subset \H$. We develop the idea in Theorem \ref{thm char}, which provides a characterization of h-quasiconvexity in terms of viscosity subsolutions of a nonlocal PDE. For convenience of notation below, for any function $u: \Omega\to \R$ and any $p\in \Omega$, we denote \[ S_p(u)=\{\xi\in \H_p\cap \Omega: u(\xi)<u(p)\}. \] We study the following nonlocal Hamilton-Jacobi equation: \begin{equation}\label{hj eq} u(p)+H(p, u(p), \nabla_H u(p))=f(p) \quad \text{in $\Omega$}, \end{equation} where the subsolutions are defined with \[ H(p, u(p), \nabla_H u(p))=\sup\left\{\la \nabla_H u(p), (p^{-1}\cdot \xi)_h\ra: \xi\in S_p(u)\right\} \] while the supersolutions are defined with \[ H(p, u(p), \nabla_H u(p))=\sup\left\{\la \nabla_H u(p), (p^{-1}\cdot \xi)_h\ra: \xi\in \hat{S}_p(u)\right\}. \] Here we set \[ \hat{S}_p(u):=\{\xi\in \H_p\cap \Omega: u(\xi)\leq u(p)\}. \] The major difficulty lies in the degeneracy and nonlocal nature of the first order operator. We mainly study the uniqueness and existence of solutions to this equation in a slightly restrictive setting, assuming that the solutions $u$ satisfy \begin{numcases}{} u=K \quad &\text{on $\pO$, }\label{dirichlet} \\ u\leq K \quad &\text{in $\Oba$}\label{coercive-like} \end{numcases} for some $K\in \R$. It turns out that we can obtain a unique solution if $f$ satisfies the same conditions. These conditions can be viewed as a bounded-domain variant of the coercivity assumption on $u$. If $u\in C(\H)$ is coercive in $\H$, that is, \[ \inf_{|p|_G\geq R} u(p)\to \infty \quad \text{as $R\to \infty$}, \] then we can take $K\in \R$ large and $\Omega=\{p: u(p)<K\}$ such that both \eqref{dirichlet} and \eqref{coercive-like} hold.
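For the record, the horizontal gradient appearing in \eqref{hj eq} is $\nabla_H u=(Xu, Yu)$ with the left invariant frame $X=\partial_x-\frac{y}{2}\partial_z$, $Y=\partial_y+\frac{x}{2}\partial_z$; this is the convention consistent with the explicit gradient computations in the proofs below. The following Python sketch (illustrative only, not part of the paper) verifies, for one sample function, that these formulas agree with difference quotients taken along the group translations $t\mapsto p\cdot(t,0,0)$ and $t\mapsto p\cdot(0,t,0)$.

```python
def mul(p, q):
    # Heisenberg group law (assumed convention matching the formulas below):
    # z-component is z1 + z2 + (x1*y2 - y1*x2)/2.
    x1, y1, z1 = p
    x2, y2, z2 = q
    return (x1 + x2, y1 + y2, z1 + z2 + 0.5 * (x1 * y2 - y1 * x2))

def hgrad_fd(u, p, h=1e-6):
    """Horizontal gradient by central differences along group translations:
    X u(p) = d/dt u(p.(t,0,0)) and Y u(p) = d/dt u(p.(0,t,0)) at t = 0."""
    Xu = (u(mul(p, (h, 0.0, 0.0))) - u(mul(p, (-h, 0.0, 0.0)))) / (2 * h)
    Yu = (u(mul(p, (0.0, h, 0.0))) - u(mul(p, (0.0, -h, 0.0)))) / (2 * h)
    return (Xu, Yu)

def u_test(p):
    # a sample smooth function: u = x^2 y + z^2 (hypothetical test case)
    x, y, z = p
    return x * x * y + z * z

def hgrad_exact(p):
    # by hand: X = d_x - (y/2) d_z and Y = d_y + (x/2) d_z applied to u_test
    x, y, z = p
    return (2 * x * y - y * z, x * x + x * z)
```

The two computations agree up to the finite-difference error, which pins down the sign conventions in the frame.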
\subsection{Definition and basic properties of solutions} Let us first present the definition of subsolutions of \eqref{hj eq}. \begin{defi}[Subsolutions]\label{def sub} Let $f\in USC(\Omega)$ be locally bounded in $\Omega$. A locally bounded upper semicontinuous function $u: \Omega\to \R$ is called a subsolution of \eqref{hj eq} if whenever there exist $p\in \Omega$ and $\varphi\in C^1(\Omega)$ such that $u-\varphi$ attains a maximum at $p$, \beq\label{sub-property} u(p)+\sup\left\{\la \nabla_H \varphi(p), (p^{-1}\cdot \xi)_h\ra:\ \xi\in S_p(u) \right\}\leq f(p). \eeq \end{defi} Note that the supremum in \eqref{sub-property} is well defined. We naturally interpret \[ \sup\left\{\la \nabla_H \varphi(p), (p^{-1}\cdot \xi)_h\ra:\ \xi\in S_p(u) \right\}=0 \] if $\nabla_H \varphi(p)=0$. It is also easily seen that $S_p(u)\neq \emptyset$ provided that $\nabla_H\varphi(p)\neq 0$ and $u-\varphi$ attains a local maximum at $p$. We next give a definition of supersolutions. \begin{defi}[Supersolutions]\label{def super1} Let $f\in LSC(\Omega)$ be locally bounded in $\Omega$. A locally bounded lower semicontinuous function $u: \Omega\to \R$ is called a supersolution of \eqref{hj eq} if whenever there exist $p\in \Omega$ and $\varphi\in C^1(\Omega)$ such that $u-\varphi$ attains a minimum at $p$, \beq\label{super-property} u(p)+\sup\left\{\la \nabla_H \varphi(p), (p^{-1}\cdot \xi)_h\ra:\ \xi\in \hat{S}_p(u) \right\}\geq f(p). \eeq \end{defi} For any point $p\in \Omega$, we also say that $u\in USC(\Omega)$ (resp., $LSC(\Omega)$) satisfies the subsolution (resp., supersolution) property at $p$ if \eqref{sub-property} (resp., \eqref{super-property}) holds for any $\varphi\in C^1(\Omega)$ such that $u-\varphi$ attains a maximum (resp., minimum) at $p$. As a standard remark in the theory of viscosity solutions, we may replace the maximum in the definitions above by a local maximum or a strict maximum. \begin{lem}[Upper bound]\label{lem sub} Suppose that $\Omega$ is a domain in $\H$.
Let $f\in USC(\Omega)$ be locally bounded in $\Omega$. If $u\in USC(\Omega)$ is a subsolution of \eqref{hj eq}, then $u\leq f$ in $\Omega$. \end{lem} \begin{proof} Let us first show $u\leq f$ at all points where $u$ can be tested. Assume that there exists $\varphi\in C^1(\Omega)$ such that $u-\varphi$ attains a maximum at some $p_0\in \Omega$. If $\nabla_H\varphi(p_0)=0$, then we immediately obtain the desired inequality $u(p_0)\leq f(p_0)$ by Definition \ref{def sub}. If $\nabla_H\varphi(p_0)\neq 0$ and thus $S_{p_0}(u)\neq \emptyset$, we can take a sequence $\xi_j\in S_{p_0}(u)$ such that $\xi_j\to p_0$ as $j\to \infty$. This yields \[ \sup\left\{\la \nabla_H \varphi(p_0), (p_0^{-1}\cdot \xi)_h\ra:\ \xi\in S_{p_0}(u) \right\} \geq \limsup_{j\to \infty}\la \nabla_H \varphi(p_0),\ (p_0^{-1}\cdot \xi_j)_h\ra=0. \] As a result, we get $u(p_0)\leq f(p_0)$ by Definition \ref{def sub} again. It remains to show that $u\leq f$ holds also at those points where $u$ cannot be tested. Fix $p_0\in \Omega$ arbitrarily. Since $u$ is locally bounded, for $\vep>0$ small, we can find $p_\vep\in \Omega$ in a neighborhood of $p_0$ such that \[ p\mapsto u(p)-\frac{1}{\vep} |p_0^{-1}\cdot p|_G^4 \] attains a local maximum in $\Omega$ at $p_\vep$. In particular, we have \beq\label{lem sub eq1} u(p_0)\leq u(p_\vep)-\frac{1}{\vep} |p_0^{-1}\cdot p_\vep|_G^4. \eeq By the local boundedness of $u$, we have $p_\vep\to p_0$ as $\vep\to 0$. Noticing that $u$ is tested from above by a smooth function at $p_\vep$, we may apply the result shown in the first part to deduce $u(p_\vep)\leq f(p_\vep)$. It follows from \eqref{lem sub eq1} that $u(p_0)\leq f(p_\vep)$. Sending $\vep\to 0$ and applying the upper semicontinuity of $f$, we end up with $u(p_0)\leq f(p_0)$. \end{proof} Let us present more properties for \eqref{hj eq}. \begin{lem}[Basic properties]\label{lem basic property} Suppose that $\Omega$ is a domain in $\H$.
For each locally bounded $f\in USC(\Omega)$, let $\A[f]$ denote the set of all subsolutions of \eqref{hj eq}. Then the following properties hold. \begin{enumerate} \item[(i)] (Monotonicity) For any locally bounded $f_1, f_2\in USC(\Omega)$ satisfying $f_1\leq f_2$ in $\Omega$, $\A[f_1]\subset \A[f_2]$ holds. \item[(ii)] (Constant addition invariance) For any $c\in \R$ and $u\in \A[f]$, $u+c\in \A[f+c]$ holds. \item[(iii)] (Left translation invariance) For any $\eta\in \H$, $u_\eta\in \A[f_\eta]$ holds, where $u_\eta$ and $f_\eta$ are given by \[ u_\eta(p)=u(\eta\cdot p), \quad f_\eta(p)=f(\eta\cdot p), \quad p\in \Omega_\eta:=\{\eta^{-1}\cdot q:\ q\in \Omega\}. \] \end{enumerate} Analogous properties to the above also hold for supersolutions of \eqref{hj eq}. \end{lem} We omit the details of the proof, since it is quite straightforward from the structure of the Hamiltonian. \subsection{Comparison principle}\label{sec:comp} We next establish a comparison principle for \eqref{hj eq} under the conditions \eqref{dirichlet} and \eqref{coercive-like}. General Dirichlet boundary problems will also be briefly discussed later. \begin{thm}[Comparison principle with constant boundary data]\label{thm comp} Let $\Omega$ be a bounded domain in $\H$ and $f\in C(\Omega)$. Let $u\in USC(\Oba)$ and $v\in LSC(\Oba)$ be respectively a subsolution and a supersolution of \eqref{hj eq}. Assume in addition that \beq\label{comp add1} u\leq v=K \quad \text{on $\partial\Omega$} \eeq and $u\leq K $ in $\Oba$ for some $K\in\R$. Then $u\leq v$ in $\Oba$. \end{thm} \begin{proof} Assume by contradiction that $\max_{\Oba} (u-v)=\sigma$ for some $\sigma>0$. For $\vep>0$ small, we consider \[ \Phi_\vep(p, q)=u(p)-v(q)-\frac{|p\cdot q^{-1}|_G^4}{\vep} \] for $p, q\in \Oba$. It is clear that $\Phi_\vep$ attains a positive maximum in $\Oba\times \Oba$. Let $(p_\vep, q_\vep)\in \Oba\times \Oba$ be a maximizer.
We thus obtain \beq\label{cp eq2} \Phi_\vep(p_\vep, q_\vep)\geq \sigma, \eeq which implies that \beq\label{cp eq0} u(p_\vep)-v(q_\vep)\geq \sigma \eeq and \beq\label{cp eq1} \frac{|p_\vep\cdot q_\vep^{-1}|_G^4}{\vep}\leq u(p_\vep)-v(q_\vep)-\sigma \eeq for all $\vep>0$. Due to the boundedness of $u$ and $v$, it follows from \eqref{cp eq1} that $|p_\vep\cdot q_\vep^{-1}|_G\to 0$, which in turn implies that $d_H(p_\vep, q_\vep)=|p_\vep^{-1}\cdot q_\vep|_G\to 0$ as $\vep\to 0$. By taking a subsequence, still indexed by $\vep$, we have $p_\vep, q_\vep\to p_0$ as $\vep\to 0$ for some point $p_0\in \Oba$. Thanks to \eqref{cp eq2} and the assumption that $u\leq v$ on $\partial \Omega$, we deduce that $p_0\in \Omega$ and $p_\vep, q_\vep\in \Omega$ for all $\vep>0$ small. We now apply the definition of subsolutions and supersolutions. Note that $u-\varphi_1$ attains a maximum at $p_\vep$ and $v-\varphi_2$ attains a minimum at $q_\vep$, where we take \[ \varphi_1(p)= v(q_\vep)+{|p\cdot q_\vep^{-1}|_G^4\over \vep}, \] \[ \varphi_2(q)=u(p_\vep)-{|p_\vep\cdot q^{-1}|_G^4\over \vep} \] for $p, q\in \H$. Writing \[ p_\vep=\left(x_{p_\vep}, y_{p_\vep}, z_{p_\vep}\right),\quad q_\vep=\left(x_{q_\vep}, y_{q_\vep}, z_{q_\vep}\right), \] we see, by direct calculations, that \beq\label{cp new5} \begin{aligned} &\nabla_H \varphi_1(p_\vep)=\nabla_H \varphi_2 (q_\vep)\\ &={1\over \vep}\big(4A_\vep (x_{p_\vep}-x_{q_\vep})-16 B_\vep(y_{p_\vep}+y_{q_\vep}),\ 4A_\vep(y_{p_\vep}-y_{q_\vep})+16B_\vep(x_{p_\vep}+x_{q_\vep})\big), \end{aligned} \eeq where \[ A_\vep=(x_{p_\vep}-x_{q_\vep})^2+(y_{p_\vep}-y_{q_\vep})^2,\quad B_\vep=z_{p_\vep}-z_{q_\vep}-{1\over 2}x_{p_\vep}y_{q_\vep}+{1\over 2}y_{p_\vep}x_{q_\vep}. \] We next discuss two different cases. Case 1. Suppose that there exists a subsequence, again indexed by $\vep$ for simplicity of notation, such that \[ \sup\{\la \nabla_H \varphi_2(q_\vep), (q_\vep^{-1}\cdot \eta)_h\ra: \eta\in \hat{S}_{q_\vep}(v)\}=0. 
\] Then applying the definition of supersolutions to $v$, we get \beq\label{cp eq3} v(q_\vep)\geq f(q_\vep). \eeq On the other hand, by Lemma \ref{lem sub}, we obtain \beq\label{cp eq4} u(p_\vep)\leq f(p_\vep). \eeq Combining \eqref{cp eq3} and \eqref{cp eq4}, we have \[ u(p_\vep)-v(q_\vep)\leq f(p_\vep)-f(q_\vep) \] for $\vep>0$ small. Letting $\vep\to 0$, we use the continuity of $f$ to get \[ \limsup_{\vep\to 0} \left(u(p_\vep)-v(q_\vep)\right)\leq 0, \] which is a contradiction to \eqref{cp eq0}. Case 2. Suppose that \[ \sup\left\{\la \nabla_H \varphi_2(q_\vep), (q_\vep^{-1}\cdot \eta)_h\ra: \eta\in \hat{S}_{q_\vep}(v)\right\}>0 \] for all $\vep>0$ small. We then can find a sequence $\eta_{\vep, n}\in \hat{S}_{q_\vep}(v)$ such that \beq\label{cp new4} \la \nabla_H \varphi_2(q_\vep), (w_n)_h\ra\geq \sup\left\{\la \nabla_H \varphi_2(q_\vep), (q_\vep^{-1}\cdot \eta)_h\ra: \eta\in \hat{S}_{q_\vep}(v)\right\}-{1\over n} \eeq for every $n\geq 1$, where $w_n=q_\vep^{-1}\cdot \eta_{\vep, n}\in \H_0$. In particular, we have $v(\eta_{\vep, n})\leq v(q_\vep)$. In view of \eqref{coercive-like} and \eqref{cp eq0}, we get $v(q_{\vep})\leq K-\sigma$. On the other hand, since $v=K$ on $\partial\Omega$ due to \eqref{comp add1}, we see that, as $n\to \infty$ and $\vep\to 0$, $\eta_{\vep, n}$ cannot converge to a boundary point. In other words, there exists $r>0$ such that $B_r(\eta_{\vep, n})\subset \Omega$ for all $n$ large and $\vep$ small. We may assume that $\vep$ is small enough so that $r>|p_\vep\cdot q_{\vep}^{-1} |_G$. Besides, it also follows from \eqref{cp new4} that \beq\label{cp new1} \la \nabla_H \varphi_2(q_\vep), (w_n)_h\ra>0 \eeq for any $n\geq 1$ large.
Taking \[ \mu(s)=|p_\vep\cdot (s w_n)\cdot \eta_{\vep, n}^{-1} |_G^4=|p_\vep\cdot ((s-1)w_n)\cdot q_\vep^{-1}|_G^4, \quad s\in \R \] with $sw_n$ denoting the usual constant multiple of $w_n$ by the factor $s$, i.e., \[ s w_n=(s x_{w_n}, s y_{w_n}, 0)\quad\text{for }\ w_n=(x_{w_n}, y_{w_n}, 0), \] we get by direct calculations \[ \begin{aligned} \mu'(s)&=4A_{\vep, s}(x_{p_\vep}-x_{q_\vep}+(s-1) x_{w_n}) x_{w_n}+4A_{\vep, s}(y_{p_\vep}-y_{q_\vep}+(s-1) y_{w_n}) y_{w_n}\\ &\quad -16B_{\vep, s} (y_{p_\vep}+y_{q_\vep})x_{w_n}+16B_{\vep, s}(x_{p_\vep}+x_{q_\vep})y_{w_n}, \end{aligned} \] where we let \[ A_{\vep, s}=(x_{p_\vep}-x_{q_\vep}+(s-1)x_{w_n})^2+(y_{p_\vep}-y_{q_\vep}+(s-1)y_{w_n})^2, \] \[ B_{\vep, s}=z_{p_\vep}-z_{q_\vep}-{1\over 2} x_{p_\vep} y_{q_\vep}+{1\over 2}y_{p_\vep}x_{q_\vep}-{1\over 2} (s-1) x_{w_n} y_{q_\vep}+{1\over 2} (s-1) y_{w_n} x_{q_\vep}. \] It is then clear that \[ \begin{aligned} \mu'(1)&=\big(4A_\vep(x_{p_\vep}-x_{q_\vep})-16 B_\vep(y_{p_\vep}+y_{q_\vep})\big) x_{w_n}+\big(4A_\vep(y_{p_\vep}-y_{q_\vep})+16B_\vep(x_{p_\vep}+x_{q_\vep})\big)y_{w_n}\\ &= \vep\la \nabla_H \varphi_2(q_\vep), (w_n)_h\ra. \end{aligned} \] Owing to \eqref{cp new1}, we are led to $\mu'(1)>0$, which implies \[ \mu(s)=|p_\vep\cdot (s w_n)\cdot \eta_{\vep, n}^{-1} |_G^4< |p_\vep\cdot q_{\vep}^{-1} |_G^4 \] for any $s<1$ sufficiently close to $1$. This amounts to saying that we can take $s_n\in (0, 1)$ such that $s_n\to 1$ as $n\to \infty$ and \beq\label{cp new2} |\xi_{\vep, n}\cdot \eta_{\vep, n}^{-1}|_G^4<|p_\vep\cdot q_{\vep}^{-1} |_G^4 \eeq for $\xi_{\vep, n}=p_\vep\cdot (s_n w_n)$. This yields $\xi_{\vep, n}\in B_r(\eta_{\vep, n})$ and thus $\xi_{\vep, n}\in \Omega$. It is also clear that $\xi_{\vep, n}\in \H_{p_{\vep}}$ and \beq\label{cp new3} \left|\left(p_\vep^{-1}\cdot \xi_{\vep, n}\right)_h -\left(q_\vep^{-1}\cdot \eta_{\vep, n}\right)_h\right|\to 0\quad \text{as $n\to \infty$}.
\eeq In view of the maximality of $\Phi_\vep$ at $(p_\vep, q_\vep)$, we have \[ u(\xi_{\vep, n})-v(\eta_{\vep, n})-{1\over \vep} |\xi_{\vep, n}\cdot \eta_{\vep, n}^{-1}|_G^4\leq u(p_\vep)-v(q_\vep)-{1\over \vep}|p_\vep\cdot q_\vep^{-1} |_G^4 \] which by \eqref{cp new2} yields \[ u(\xi_{\vep, n})-u(p_{\vep})< v(\eta_{\vep, n})-v(q_\vep)\leq 0. \] We have shown that $\xi_{\vep, n}\in S_{p_\vep}(u)$. Adopting the definition of subsolutions, we deduce that \[ u(p_\vep)+\la \nabla_H\varphi_1(p_\vep),\ \left(p_\vep^{-1}\cdot \xi_{\vep, n}\right)_h\ra\leq f(p_\vep). \] On the other hand, we can also apply the definition of supersolutions, together with \eqref{cp new4}, to get \beq\label{general comp2} v(q_\vep)+\la \nabla_H\varphi_2(q_\vep),\ \left(q_\vep^{-1}\cdot \eta_{\vep, n}\right)_h\ra\geq f(q_\vep)-{1\over n} \eeq for $n\geq 1$ large. Combining these two inequalities, we use \eqref{cp new5} to obtain \[ u(p_\vep)-v(q_\vep)\leq f(p_\vep)-f(q_\vep)-\la \nabla_H\varphi_1(p_\vep), \left(p_\vep^{-1}\cdot \xi_{\vep, n}\right)_h -\left(q_\vep^{-1}\cdot \eta_{\vep, n}\right)_h \ra+{1\over n}. \] Sending $n\to \infty$ and applying \eqref{cp new3}, we get \[ u(p_\vep)-v(q_\vep)\leq f(p_\vep)-f(q_\vep), \] which is a contradiction to \eqref{cp eq0} and the continuity of $f$. \end{proof} It is possible to give a slightly different comparison theorem for more general Dirichlet boundary problems under the h-convexity assumption on $\Omega$. To this end, we prove the following lemma using the intrinsic cone property of h-convexity given in \cite[Theorem 1.4]{ArCaMo}; see also \cite{MoM} for regularity results related to this property. \begin{lem}\label{rmk convex boundary} Let $\Omega \subset \H$ be an open set. If $\Omega$ is h-convex, then for every $p\in \partial \Omega$ and $q\in \H_p\cap \Omega$, the horizontal segment $(p, q]$ joining $p$ and $q$ stays in $\Omega$. 
\end{lem} \begin{proof} Assume by contradiction that there exist $p \in \partial \Omega$, $q \in \H_p \cap \Omega$ such that the half-open horizontal segment $(p, q]$ does not stay in $\Omega$. Let $w$ be the closest point to $q$ on $[q,p)$ that is not in $\Omega$, i.e., $w=p\cdot \lambda_0(p^{-1}\cdot q)$, where \[ \lambda_0=\sup\{\lambda\geq 0: p\cdot \lambda(p^{-1}\cdot q)\notin \Omega\}. \] Then $w \in \partial \Omega$, $q\in \H_w$ and $w\cdot \lambda(w^{-1}\cdot q)\in \Omega$ for all $\lambda\in (\lambda_0, 1)$. Such $w$ is a so-called non-characteristic point defined in \cite{ArCaMo}. In view of \cite[Theorem 1.4]{ArCaMo}, the h-convexity of $\Omega$ implies the existence of an intrinsic cone in the exterior of $\Omega$ with vertex $w$ and axis along the horizontal segment $(p, w]$, which further enables us to find a point $z \in (p, w)$ and $r>0$ such that $B_r(z) \subset \H \setminus \Omega$. Noticing that there exists a sequence $p_j \in \Omega$ such that $p_j \to p$ as $j \to \infty$, we can take $q_j=p_j \cdot p^{-1}\cdot q$ and $z_j = p_j \cdot p^{-1} \cdot z$ so that $q_j, z_j \in \H_{p_j}$ and $q_j\to q$, $z_j\to z$ as $j\to \infty$. We choose $j>0$ large such that $q_j\in \Omega$ and $z_j\in B_r(z)$. This contradicts the h-convexity of $\Omega$, since $p_j, q_j \in \Omega$, but the point $z_j$ on the horizontal segment $[p_j, q_j]$ is not in $\Omega$. \end{proof} \begin{thm}[Comparison principle under domain convexity]\label{thm comp2} Let $\Omega$ be a bounded h-convex domain in $\H$ and $f\in C(\Omega)$. Let $u\in USC(\Oba)$ and $v\in LSC(\Oba)$ be respectively a subsolution and a supersolution of \eqref{hj eq}. If $u\leq v$ on $\partial \Omega$, then $u\leq v$ in $\Oba$. \end{thm} \begin{proof} The proof is almost the same as that of Theorem \ref{thm comp} except for some necessary modifications to handle this general boundary condition. 
The only difference lies at the argument by contradiction in Case 2 on the occasion when $\eta_{\vep, n}\in \hat{S}_{q_\vep}$ converges to a boundary point $\eta_0\in \partial \Omega$ as $n\to \infty$ and $\vep\to 0$. (This gives a contradiction to the conditions \eqref{coercive-like} and \eqref{comp add1} in the setting of Theorem \ref{thm comp}.) Let us derive a contradiction only in this case. Thanks to the condition $u\leq v$ on $\partial \Omega$, we have $u(\eta_0)\leq v(\eta_0)$, which by \eqref{cp eq0} and the upper semicontinuity of $u$ yields the existence of $r>0$ such that \beq\label{general comp1} B_r(\eta_0)\cap \Omega\subset \{ z \in \Omega: u(z) < u(p_\vep)\} \eeq for any $\vep>0$ small. Since $p_\vep, q_\vep\to p_0\in \Omega$, by Lemma \ref{rmk convex boundary} we can utilize the h-convexity of $\Omega$ to deduce that \[ \eta^\lambda=(1-\lambda)p_0+\lambda \eta_0\in \Omega \] for all $0\leq \lambda<1$. We may also choose $\lambda$ close to $1$, depending only on $r$, such that $\eta^\lambda\in B_r(\eta_0)\cap\Omega$. Letting \[ \eta^\lambda_{\vep, n}=(1-\lambda)q_\vep+\lambda \eta_{\vep, n}=q_\vep\cdot (\lambda w_n), \] we have $\eta^\lambda_{\vep, n}\in B_r(\eta_0)\cap \Omega$ when $n$ is sufficiently large and $\vep$ is sufficiently small. Let us now take $\xi^\lambda_{\vep, n}=p_\vep\cdot (\lambda w_n)$. It is easily seen that \[ |\xi^\lambda_{\vep, n}\cdot (\eta^\lambda_{\vep, n})^{-1}|_G\to 0 \quad \text{as $n\to \infty$ and $\vep\to 0$}, \] and thus $\xi^\lambda_{\vep, n}\in B_r(\eta_0)$ for $n$ large and $\vep$ small. By \eqref{general comp1}, we have $\xi^\lambda_{\vep, n}\in S_{p_\vep} (u)$. It follows from the definition of subsolutions that \beq\label{general comp3} u(p_\vep)+\la \nabla_H\varphi_1(p_\vep),\ \lambda(w_n)_h\ra\leq f(p_\vep). 
\eeq Combining this with \eqref{general comp2} and using \eqref{cp eq0} and \eqref{cp new5}, we obtain \[ \sigma \leq u(p_\vep)-v(q_\vep)\leq {1\over n}+(1-\lambda)\la \nabla_H\varphi_1(p_\vep),\ (w_n)_h\ra. \] Applying \eqref{general comp3} and \eqref{cp eq0} again, we are led to \[ \sigma\leq {1\over n}+{1-\lambda \over \lambda}\left(f(p_\vep)-u(p_\vep)\right)\leq {1\over n}+{1-\lambda \over \lambda}\left(f(p_\vep)-v(q_\vep)\right). \] Sending $n\to \infty$ and $\vep\to 0$, we get \[ \sigma\leq {1-\lambda \over \lambda}\left(f(p_0)-v(p_0)\right). \] Passing to the limit as $\lambda\to 1$ immediately yields a contradiction. \end{proof} \subsection{Existence of solutions}\label{sec:exist} We also adapt Perron's method to show the existence of solutions of \eqref{hj eq} with $f$ satisfying \eqref{dirichlet-f} and \eqref{coercive-f}. In the sequel, for a locally bounded function $u:\Omega \to \mathbb{R}$, we denote by $u^*$ and $u_*$ its upper and lower semicontinuous envelopes, respectively. \begin{thm}[Existence of solutions]\label{thm existence} Let $\Omega\subset \H$ be a bounded domain. Assume that $f\in C(\Oba)$ satisfies \eqref{dirichlet-f} and \eqref{coercive-f} for some $K\in \R$. Assume that there exists a subsolution $\underline{u}\in C(\Oba)$ of \eqref{hj eq} satisfying $\underline{u}=K$ on $\partial \Omega$. For any $p\in \Oba$, let \beq\label{perron eq} U(p)=\sup\{u(p): \text{$u\in USC(\Oba)$ is a subsolution of \eqref{hj eq}}\}. \eeq Then $U^*$ is continuous in $\Oba$ and is the unique solution of \eqref{hj eq} satisfying $U^*\leq f\leq K$ in $\Oba$ and $U^*=f=K$ on $\partial \Omega$. \end{thm} \begin{rmk} In the theorem above, if the domain $\Omega$ is further assumed to be h-convex, then the existence of $\underline{u}$ is guaranteed; see Proposition \ref{prop lower} below for an explicit construction of $\underline{u}$ satisfying the required conditions.
\end{rmk} \begin{rmk} In view of Lemma \ref{lem sub}, we see that any subsolution $u$ satisfies $u\leq f$ in $\Omega$. Therefore $\underline{u}\leq u\leq f$ in $\Oba$ holds and $U$ in \eqref{perron eq} is well-defined. \end{rmk} Let us first show that $U$ is a subsolution. For this purpose, we prove the following result, where we do not need the coercivity-like conditions \eqref{dirichlet} and \eqref{coercive-like}. \begin{prop}[Maximum subsolution]\label{prop sub} Suppose that $\Omega$ is a domain in $\H$ and $f\in USC(\Omega)$. Let $\A$ be a family of subsolutions of \eqref{hj eq}. Let $v$ be given by \[ v(p)=\sup\{u(p): \text{$u\in \A$}\}, \quad p\in \Omega. \] Then $v^\ast$ is a subsolution of \eqref{hj eq}. \end{prop} \begin{proof} Suppose that there exist $\varphi\in C^1(\Omega)$ and $p_0\in \Omega$ such that $v^\ast-\varphi$ attains a strict maximum at $p_0$. Then we can find a sequence $u_j\in \A$ and $p_j\in \Omega$ such that $u_j-\varphi$ attains a maximum at $p_j$, and, as $j\to \infty$, \beq\label{exist sub1} p_j\to p_0, \quad u_j(p_j)\to v^\ast(p_0). \eeq Let us first consider the case when $\nabla_H\varphi(p_0)\neq 0$. It follows immediately that $S_{p_0}(v^\ast)\neq \emptyset$. For any $\delta>0$ small, there exists $q_\delta\in S_{p_0}(v^\ast)$ such that \beq\label{exist sub2} \sup\left\{\la \nabla_H \varphi(p_0),\ (p_0^{-1}\cdot \xi)_h\ra:\ \xi\in S_{p_0}(v^\ast) \right\}\leq \la \nabla_H \varphi(p_0),\ (p_0^{-1}\cdot q_\delta)_h\ra+\delta. \eeq Let us take $\xi_j\in \H_{p_j}\cap \Omega$ such that $\xi_j\to q_\delta$ as $j\to \infty$. Since $v^\ast(q_\delta)<v^\ast(p_0)$ and \[ \limsup_{j\to \infty}u_j(\xi_j)\leq \limsup_{j\to \infty} v^\ast(\xi_j)\leq v^\ast(q_\delta), \] we get $u_j(\xi_j)<v^\ast(p_0)$ and thus $\xi_j\in S_{p_j}(u_j)$ for $j\geq 1$ sufficiently large.
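For completeness, here is one explicit way to produce the points $\xi_j$ above; this is a sketch in the notation of the proof, using only the left-invariance of horizontal planes.

```latex
% Sketch: construction of \xi_j \in \H_{p_j} \cap \Omega with \xi_j \to q_\delta.
% Since q_\delta \in S_{p_0}(v^\ast) \subset \H_{p_0}, we may write
% q_\delta = p_0 \cdot w with w horizontal, and set
\[
\xi_j = p_j\cdot w = p_j\cdot \left(p_0^{-1}\cdot q_\delta\right).
\]
% Left translations map horizontal planes onto horizontal planes, so
% \xi_j \in \H_{p_j}; moreover, by continuity of the group operations,
\[
\xi_j \to p_0\cdot \left(p_0^{-1}\cdot q_\delta\right)=q_\delta
\quad\text{as } j\to\infty,
\]
% and \xi_j \in \Omega for j large, since \Omega is open and q_\delta \in \Omega.
```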
Applying Definition \ref{def sub}, we have \[ u_j(p_j)+\sup\left\{\la \nabla_H \varphi(p_j),\ (p_j^{-1}\cdot \xi)_h\ra:\ \xi\in S_{p_j}(u_j) \right\}\leq f(p_j), \] which implies \[ u_j(p_j)+\la \nabla_H \varphi(p_j),\ (p_j^{-1}\cdot \xi_j)_h\ra\leq f(p_j). \] Letting $j\to \infty$, by \eqref{exist sub1} and the upper semicontinuity of $f$, we are led to \[ v^\ast(p_0)+\la \nabla_H \varphi(p_0),\ (p_0^{-1}\cdot q_\delta)_h\ra\leq f(p_0). \] It follows from \eqref{exist sub2} that \[ v^\ast(p_0)+\sup\left\{\la \nabla_H \varphi(p_0),\ (p_0^{-1}\cdot \xi)_h\ra:\ \xi\in S_{p_0}(v^\ast) \right\}\leq f(p_0)+\delta. \] Due to the arbitrariness of $\delta>0$, we obtain \[ v^\ast(p_0)+\sup\left\{\la \nabla_H \varphi(p_0),\ (p_0^{-1}\cdot \xi)_h\ra:\ \xi\in S_{p_0}(v^\ast) \right\}\leq f(p_0). \] Let us discuss the case when $\nabla_H \varphi(p_0)=0$. Since $u_j-\varphi$ attains a maximum at $p_j$, by Lemma \ref{lem sub}, we get $u_j(p_j)\leq f(p_j)$. Sending $j\to \infty$, we have $v^\ast(p_0)\leq f(p_0)$, as desired. Our proof is now complete. \end{proof} \begin{rmk} Proposition \ref{prop sub} can be used in more general circumstances than the setting in Theorem \ref{thm existence}. Note that $f$ is only assumed to be upper semicontinuous in $\Omega$ in contrast to the continuity assumption in Theorem \ref{thm existence}. \end{rmk} \begin{rmk}\label{rmk supremum} The proof of Proposition \ref{prop sub}, which is just an adaptation of Perron's argument, is certainly not restricted to \eqref{hj eq}. It can be extended with ease to a more general class of nonlocal Hamilton-Jacobi equations, including \[ \sup\left\{\la \nabla_H u(p), (p^{-1}\cdot \xi)_h\ra: \xi\in \hat{S}_p(u)\right\}=0. \] In this case, in view of Theorem \ref{thm char}, our result simply means that the upper semicontinuous envelope of the pointwise supremum of a class of upper semicontinuous h-quasiconvex functions is also h-quasiconvex.
\end{rmk} The following result shows that the maximal subsolution is a solution. \begin{prop}[Locally greater subsolutions]\label{prop super} Let $\Omega\subset \H$ be a bounded domain and $f\in LSC(\Omega)$. Let $u$ be a subsolution of \eqref{hj eq}. Assume that $u_\ast\geq f$ on $\pO$. If $u_\ast$ fails to satisfy the supersolution property at $p_0\in \Omega$, then there exist $r>0$ and a subsolution $U_r$ of \eqref{hj eq} such that \beq\label{greater sub1} \sup_{B_r(p_0)} (U_r-u)>0 \eeq and \beq\label{greater sub2} U_r=u \quad \text{in $\Omega\setminus B_r(p_0)$.} \eeq \end{prop} \begin{proof} Since $u_\ast$ fails to satisfy the supersolution property at $p_0$, there exists $\varphi\in C^1(\Omega)$ such that $u_\ast-\varphi$ attains a minimum at $p_0$ but \beq\label{eq greater1} u_\ast(p_0)+\sup\left\{\la \nabla_H \varphi(p_0), (p_0^{-1}\cdot \xi)_h\ra:\ \xi\in \hat{S}_{p_0}(u_\ast) \right\}< f(p_0). \eeq Since the supremum is nonnegative, it is then clear that \beq\label{eq greater3} u_\ast(p_0)<f(p_0). \eeq By adding $a |p\cdot p_0^{-1}|_G^4+b$ to $\varphi$ with $a<0$ and $b\in \R$, we may additionally assume that there exists $r>0$ small satisfying \[ (u_\ast-\varphi)(p)>(u_\ast-\varphi)(p_0)=0 \] for all $p\in B_r(p_0)\setminus \{p_0\}$. We also take $\delta>0$ accordingly small such that \[ \varphi(p)+\delta\leq u(p) \quad \text{for $p\in B_r(p_0)\setminus B_{r/2}(p_0)$.} \] Letting \[ U_r(p)=\begin{cases} \max\{\varphi(p)+\delta, u(p)\} & \text{if $p\in B_r(p_0)$,}\\ u(p) & \text{if $p\notin B_r(p_0)$,} \end{cases} \] we see that \eqref{greater sub1} and \eqref{greater sub2} hold. In what follows we show that $U_r$ is a subsolution of \eqref{hj eq} for such $r>0$ and $\delta>0$ small. Suppose that $\psi\in C^1(\Omega)$ is a test function such that $U_r-\psi$ attains a maximum at some $q_0\in \Omega$. We divide our argument into two cases. Case 1. Suppose that $U_r(q_0)=u(q_0)$. It follows that $u-\psi$ attains a maximum at $q_0$. 
Since $u$ is a subsolution, we have \beq\label{eq greater2} u(q_0)+\sup\{\la \nabla_H \psi(q_0), (q_0^{-1}\cdot \eta)_h\ra: \eta\in S_{q_0}(u) \}\leq f(q_0). \eeq Noticing that $U_r\geq u$ in $\Omega$, we have $S_{q_0}(U_r)\subset S_{q_0}(u)$, which by \eqref{eq greater2} yields \[ U_r(q_0)+\sup\{\la \nabla_H \psi(q_0), (q_0^{-1}\cdot \eta)_h\ra: \eta\in S_{q_0}(U_r) \}\leq f(q_0), \] as desired. Case 2. Suppose that $U_r(q_0)=\varphi(q_0)+\delta$. In this case, we have $q_0\in B_r(p_0)$. Also, it is easily seen that $\varphi+\delta-\psi$ attains a maximum at $q_0$, which yields \[ \nabla_H \psi(q_0)=\nabla_H \varphi(q_0). \] It thus suffices to show that there exist $r, \delta>0$ small such that \beq U_r(q)+\sup\{\la\nabla_H\varphi(q), (q^{-1}\cdot \eta)_h\ra: \eta\in S_q(U_r)\}\leq f(q) \eeq for all $q\in B_r(p_0)$ satisfying $U_r(q)=\varphi(q)+\delta$. Arguing by contradiction, we take $r_n, \delta_n>0$ and $q_n\in B_{r_n}(p_0)$ such that $r_n, \delta_n\to 0$ as $n\to \infty$, \[ U_{r_n}(q_n)=\varphi(q_n)+\delta_n, \] and \beq\label{eq greater4} \varphi(q_n)+2\delta_n+\sup\{\la\nabla_H\varphi(q_n), (q_n^{-1}\cdot \eta)_h\ra: \eta\in S_{q_n}(U_{r_n})\}> f(q_n). \eeq Note that if there exists a subsequence of $q_n$ along which $\nabla_H\varphi(q_n)=0$, then sending $n\to \infty$ in \eqref{eq greater4} yields $\varphi(p_0)\geq f(p_0)$, which is clearly a contradiction to \eqref{eq greater3}. We thus only need to consider the case when $\nabla_H\varphi(q_n)\neq 0$ for all $n\geq 1$. By \eqref{eq greater4}, this implies the existence of $\eta_n\in S_{q_n}(U_{r_n})$ such that \beq\label{eq greater6} \varphi(q_n)+2\delta_n+\la\nabla_H\varphi(q_n), (q_n^{-1}\cdot \eta_n)_h\ra> f(q_n). \eeq In particular, we have $\eta_n\in \H_{q_n}\cap \Omega$ and \beq\label{eq greater5} u(\eta_n)\leq U_{r_n}(\eta_n)< \varphi(q_n)+\delta_n. \eeq Due to the boundedness of $\Omega$, we may find a subsequence, still indexed by $n$, such that $\eta_n\to \eta_0$ for some $\eta_0\in \Oba$.
Since the horizontal plane $\H_p$ is continuous in $p$, we have $\eta_0\in \H_{p_0}$. Moreover, passing to the limit in \eqref{eq greater5}, we obtain \[ u_\ast(\eta_0)\leq \varphi(p_0)=u_\ast(p_0), \] which by \eqref{eq greater3} implies $u_\ast(\eta_0)<f(p_0)$. In view of the assumption that $u_\ast\geq f$ on $\partial \Omega$, we immediately have $\eta_0\in \Omega$. In other words, we have shown that \beq\label{eq greater7} \eta_0\in \hat{S}_{p_0}(u_\ast). \eeq Finally, taking $\liminf_{n\to \infty}$ in \eqref{eq greater6}, we are led to \[ u_\ast(p_0)+\la \nabla_H\varphi(p_0), (p_0^{-1}\cdot \eta_0)_h\ra\geq f(p_0), \] which, together with \eqref{eq greater7}, yields a contradiction to \eqref{eq greater1}. \end{proof} Let us complete the proof of Theorem \ref{thm existence}. \begin{proof}[Proof of Theorem \ref{thm existence}] Let $U$ be given by \eqref{perron eq}. By Proposition \ref{prop sub}, $U^\ast$ is a subsolution of \eqref{hj eq}. Also, by Lemma \ref{lem sub} and the continuity of $f$ in $\Oba$, we obtain \beq\label{existence pf1} U^\ast\leq f\leq K \quad \text{ in $\Oba$.} \eeq By the assumptions on $\underline{u}$, we also have $U\geq \underline{u}$ in $\Oba$ and \beq\label{existence pf2} (U^\ast)_\ast \geq U_\ast\geq \underline{u}=K\quad\text{ on $\partial \Omega$.} \eeq Applying Proposition \ref{prop super}, we see that $(U^\ast)_\ast$ is a supersolution of \eqref{hj eq}, for otherwise we can construct a subsolution locally larger than $U^\ast$, which contradicts the definition of $U$. Since \eqref{existence pf1} and \eqref{existence pf2} together imply that \[ U^\ast=(U^\ast)_\ast=K\quad \text{on $\pO$,} \] the comparison principle, Theorem \ref{thm comp}, yields $U^\ast\leq (U^\ast)_\ast$ in $\Oba$. It follows that $U^\ast$ is continuous in $\Oba$ and is the unique solution of \eqref{hj eq} satisfying $U^\ast=f=K$ on $\partial\Omega$.
\end{proof} Following the proof above, one can obtain an existence result for general Dirichlet boundary problems under the h-convexity of $\Omega$. \begin{thm}[Existence for general Dirichlet problems]\label{thm existence2} Let $\Omega\subset \H$ be a bounded h-convex domain and $f\in C(\Oba)$. Assume that there exists a subsolution $\underline{u}\in C(\Oba)$ of \eqref{hj eq} such that $\underline{u}=f$ on $\partial \Omega$. Let $U$ be given by \eqref{perron eq}. Then $U^*$ is continuous in $\Oba$ and is the unique solution of \eqref{hj eq} satisfying $U^*=f$ on $\partial \Omega$. \end{thm} We omit the detailed proof but remark that Theorem \ref{thm comp2} enables us to handle general boundary data by arguments similar to those above. \section{H-quasiconvex envelope via PDE-based iteration}\label{sec:iteration} An iteration is introduced in Section \ref{sec:intro} to find the h-quasiconvex envelope $Q(f)$ of a given function $f$ in a bounded h-convex domain $\Omega\subset\H$ with $f$ satisfying \eqref{dirichlet-f} and \eqref{coercive-f}. In this section, we first prove our main result, Theorem \ref{thm scheme bdry}. We shall later study a more general case when $\Omega$ is possibly unbounded without boundary data prescribed on $\pO$. We begin with an easy construction of an h-quasiconvex function $\ul{f}$ as a lower bound of the whole scheme. \begin{prop}[Existence of h-quasiconvex barriers]\label{prop lower} Let $\Omega\subset \H$ be a bounded h-convex domain. Assume that $f\in C(\Oba)$ satisfies \eqref{dirichlet-f} and \eqref{coercive-f} for some $K\in \R$. Then there exists $\ul{f}\in C(\Oba)$ h-quasiconvex in $\Omega$ such that $\ul{f}\leq f$ in $\Oba$ and $\ul{f}=K$ on $\partial \Omega$. \end{prop} \begin{proof} By Proposition \ref{prop metric-quasi}, we see that \[ \psi:=-\tilde{d}_H(\cdot, \H\setminus \Omega) \] is an h-quasiconvex function in $\H$.
Since $f$ is continuous and $\Omega$ is bounded, there exists a modulus of continuity $\omega_f$, strictly increasing, such that \[ f(p)\geq -\omega_f (\tilde{d}_H(p, \H\setminus \Omega))+K \quad \text{for all $p\in \Oba$.} \] Taking \beq\label{g-fun} g(s)=-\omega_f(-s)+K, \quad s\le 0 \eeq we immediately get \[ f(p)\geq g(-\tilde{d}_H(p, \H\setminus \Omega))= g(\psi(p))\quad \text{for all $p\in \Omega$.} \] Let $\ul{f}=g\circ\psi$. We easily see that $\ul{f}\leq f$ in $\Oba$ and $\ul{f}=K$ on $\partial \Omega$. Noticing that $g$ is continuous and nondecreasing, we deduce by Lemma \ref{lem geometric} that $\ul{f}$ is h-quasiconvex in $\Omega$. \end{proof} Note that the existence of the h-quasiconvex envelope $Q(f)$ is guaranteed by the existence of $\ul{f}$. Moreover, due to the conditions above, we have $Q(f)=f=K$ on $\partial \Omega$. Let $u_0=f$. By Theorem \ref{thm existence}, for $n=1, 2, \ldots$ we can find a unique solution $u_n$ of \eqref{iteration eq} satisfying \eqref{data bdry} and \eqref{data bdry ineq}. By Lemma \ref{lem sub}, one can also see that \beq\label{monotone bdry} \ul{f}\leq \ldots \leq u_n\leq u_{n-1}\leq \ldots \leq u_0 = f \quad\text{in $\Omega$ for $n=1, 2, \ldots$}. \eeq We proceed to prove Theorem \ref{thm scheme bdry}. To this end, we mention a fundamental fact on monotone sequences of functions. \begin{rmk}\label{rmk limits} Since $u_n$ is non-increasing in $n$, the pointwise limit $\lim_{n\to \infty}u_n$ is equal to $\limsup_{n\to \infty}^\ast u_n$, defined by \[ \limsups_{n\to \infty} u_n(p)=\lim_{k\to \infty} \sup\left\{u_n(q): q\in B_{{1\over k}}(p), \ n\geq k\right\},\quad \text{$p\in \Omega$}. \] In fact, for any fixed $p\in \Omega$, $\vep>0$ and any $n\geq 1$, by the upper semicontinuity of $u_n$, we can take $k\geq 1$ sufficiently large to get \[ u_n(p)\geq \sup\left\{u_n(q): q\in B_{{1\over k}}(p)\right\}-\vep.
\] By the monotonicity of $u_n$ in $n$, we have \[ u_n(p)\geq \sup\{u_m(q): q\in B_{{1\over k}}(p),\ m\geq k\}-\vep, \] which yields \[ u_n(p)\geq \limsups_{m\to \infty} u_m(p)-\vep. \] Letting $n\to \infty$ and $\vep\to 0$, we are led to \[ \lim_{n\to \infty} u_n(p)\geq \limsups_{n\to \infty} u_n(p). \] Since the reverse inequality clearly holds, we thus obtain the equality. \end{rmk} \begin{prop}[Stability of h-quasiconvexity]\label{prop sub-stability} Suppose that $\Omega$ is a bounded h-convex domain in $\H$. Let $u_0=f\in USC(\Omega)$ and $u_n\in USC(\Omega)$ be a subsolution of \eqref{iteration eq}. Then \[ u=\limsups_{n\to \infty} u_n \] is h-quasiconvex in $\Omega$. \end{prop} \begin{proof} Our argument below is based on the characterization given in Theorem \ref{thm char}. Suppose that there exist $p_0\in \Omega$ and $\varphi\in C^1(\Omega)$ such that $u-\varphi$ attains a strict maximum in $\Omega$ at $p_0$. Then by Remark \ref{rmk limits}, there exists a sequence $p_n\in \Omega$ such that $u_n-\varphi$ attains a local maximum at $p_n$ and $p_n\to p_0$, $u_n(p_n)\to u(p_0)$ as $n\to \infty$. We may assume that $\nabla_H\varphi(p_0)\neq 0$, for otherwise it is clear that \[ \la\nabla_H \varphi(p_0), (p_0^{-1}\cdot \xi)_h\ra=0 \] holds for all $\xi\in S_{p_0}(u)$. We thus have $S_{p_0}(u)\neq \emptyset$. For any $\vep>0$, let $\xi_0\in S_{p_0}(u)$ satisfy \beq\label{scheme pf eq1} \la \nabla_H\varphi(p_0), (p_0^{-1}\cdot \xi_0)_h\ra\geq \sup\left\{\la \nabla_H \varphi(p_0), (p_0^{-1}\cdot \xi)_h\ra: \xi\in S_{p_0}(u)\right\}-\vep. \eeq Since $u(\xi_0)<u(p_0)$, we have $u_n(\xi_0)<u_n(p_n)$ when $n$ is sufficiently large. Then there is a point $\xi_n\in \H_{p_n}$ satisfying $u_n(\xi_n)<u_n(p_n)$ for $n$ large and $p_n^{-1}\cdot \xi_n=p_0^{-1}\cdot \xi_0$.
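The existence of such $\xi_n$ can be verified directly; the following is a sketch in the notation above, combining a left translation with Remark \ref{rmk limits}.

```latex
% Sketch: write \xi_0 = p_0 \cdot w with w horizontal and set \xi_n = p_n \cdot w,
% so that \xi_n \in \H_{p_n}, p_n^{-1} \cdot \xi_n = p_0^{-1} \cdot \xi_0 and
% \xi_n \to \xi_0. Since u = \limsups_{n\to\infty} u_n and \xi_n \to \xi_0,
\[
\limsup_{n\to \infty} u_n(\xi_n)\leq u(\xi_0)<u(p_0)=\lim_{n\to \infty} u_n(p_n),
\]
% and therefore u_n(\xi_n) < u_n(p_n) for all n sufficiently large.
```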
Applying the definition of subsolutions of \eqref{iteration eq}, we get \[ u_n(p_n)+\la \nabla_H\varphi(p_n), (p_n^{-1}\cdot \xi_n)_h\ra\leq u_{n-1}(p_n), \] which is equivalent to \[ u_n(p_n)+\la \nabla_H\varphi(p_n), (p_0^{-1}\cdot \xi_0)_h\ra\leq u_{n-1}(p_n). \] Passing to the limit as $n\to \infty$, we have \[ u(p_0)+\la \nabla_H\varphi(p_0), (p_0^{-1}\cdot \xi_0)_h\ra\leq u(p_0). \] It follows from \eqref{scheme pf eq1} that \[ \sup\left\{\la \nabla_H \varphi(p_0), (p_0^{-1}\cdot \xi)_h\ra: \xi\in S_{p_0}(u)\right\}\leq \vep. \] Letting $\vep\to 0$, we get \[ \sup\left\{\la \nabla_H \varphi(p_0), (p_0^{-1}\cdot \xi)_h\ra: \xi\in S_{p_0}(u)\right\}\leq 0. \] The proof for h-quasiconvexity of $u$ is now complete. \end{proof} We are now in a position to prove Theorem \ref{thm scheme bdry}. \begin{proof}[Proof of Theorem \ref{thm scheme bdry}] In view of Proposition \ref{prop lower}, there exists $\ul{f}\in C(\Oba)$ h-quasiconvex such that $\ul{f}\leq f$ in $\Oba$ and $\ul{f}=f=K$ on $\partial \Omega$. Since $u_n\in C(\Oba)$ is a monotone sequence and $u_n\geq \ul{f}$ in $\Oba$, it converges to $u\in USC(\Oba)$ pointwise as $n\to \infty$. We thus get $u= Q(f)=f=K$ on $\partial \Omega$. By Proposition \ref{prop sub-stability}, we see that $u$ is h-quasiconvex in $\Omega$. It follows immediately that $u\leq Q(f)$ in $\Omega$. Note that, for every $n$, $Q(f)$ is a subsolution of \eqref{iteration eq} and satisfies $Q(f)= f$ on $\partial\Omega$. Then by the comparison principle, we have $Q(f)\leq u_n$ in $\Oba$. We thus have $Q(f)\leq u$ in $\Oba$ and therefore $u_n\to Q(f)$ pointwise in $\Oba$ as $n\to \infty$. The uniform convergence of $u_n$ requires extra work.
Let us extend the definitions of $u_n$ ($n=0, 1, 2, \ldots $) by the constant $K$ to the whole space $\H$, that is, \[ u^K_n(p)=\begin{cases} u_n(p) & \text{if $p\in \Omega$,}\\ K & \text{if $p\in\H\setminus\Omega$.} \end{cases} \] Let us verify that $u^K_n$ is a solution of \beq\label{extend pde} u^K_n(p)+H(p, u^K_n(p), \nabla_H u^K_n(p))=u^K_{n-1}(p) \quad \text{in $\H$}. \eeq In fact, it is obvious that $u^K_n$ is a supersolution. Below we show that it is a subsolution. Suppose that there is a test function $\varphi\in C^1(\H)$ such that $u^K_n-\varphi$ attains a maximum at $p_0\in \H$. We easily see that \beq\label{extend sub} u^K_n(p_0)+\sup\left\{\la \nabla_H \varphi(p_0), (p_0^{-1}\cdot \xi)_h\ra:\ \xi\in S_{p_0}(u_n^K) \right\}\leq u^K_{n-1}(p_0) \eeq holds if $p_0\in \H\setminus \Oba$ or $p_0\in \Omega$. In the case that $p_0\in \partial \Omega$, extending $\ul{f}$ to get \[ \ul{f}^K(p)=\begin{cases} \ul{f}(p) & \text{if $p\in \Omega$,}\\ K & \text{if $p\in\H\setminus\Omega$,} \end{cases} \] we observe that $\ul{f}^K$ is h-quasiconvex in $\H$ and $\ul{f}^K-\varphi$ also attains a maximum at $p_0$; the latter is due to the facts that $\ul{f}^K\leq u^K_n$ in $\H$ and $\ul{f}^K=u^K_n=K$ on $\partial\Omega$. It follows from Theorem \ref{thm char} that \[ \sup\left\{\la \nabla_H \varphi(p_0), (p_0^{-1}\cdot \xi)_h\ra:\ \xi\in S_{p_0}\left(\ul{f}^K\right) \right\}\leq 0, \] which implies that \[ \sup\left\{\la \nabla_H \varphi(p_0), (p_0^{-1}\cdot \xi)_h\ra:\ \xi\in S_{p_0}\left(u_n^K\right) \right\}\leq 0. \] It is then clear that \eqref{extend sub} holds again, since $p_0\in \partial \Omega$ and thus $u_n^K(p_0)=u_{n-1}^K(p_0)=K$. Hence, $u_n^K$ is a subsolution of \eqref{extend pde}. We next translate the extended solutions. For any fixed $h\in \H$, set $u^K_{n, h}(p)=u^K_n(h\cdot p)$ for $p\in \H$ and $n=0, 1, 2, \ldots$. Then \beq\label{equiconti1} u^K_{n-1, h}(p)-\omega_f(|h|_G)\leq u^K_{n-1}(p)\quad \text{in $\H$} \eeq holds for $n=1$, where $\omega_f$ is the modulus of continuity of $f$ in $\Oba$.
Using Lemma \ref{lem basic property}, we can also deduce that $u^K_{n, h}$ (recall that $u^K_{n, h}(p)=u^K_n(h\cdot p)$) solves \[ u^K_{n, h}(p)+H(p, u^K_{n, h}(p), \nabla_H u^K_{n, h}(p))=u^K_{n-1, h}(p) \quad \text{in $\H$,} \] so that $u^K_{n, h}-\omega_f(|h|_G)$ is, thanks to \eqref{equiconti1}, a subsolution of \[ u(p)+H(p, u(p), \nabla_H u(p))=u^K_{n-1}(p) \quad \text{in $\H$} \] in the case that $n=1$. Take a bounded h-convex domain $\Omega'$ such that \[ \{p\in \H: |p^{-1}\cdot q|_G\leq |h|_G \ \text{for some } q\in \Oba\}\subset \Omega'. \] Applying the comparison principle, Theorem \ref{thm comp}, in $\Omega'$, we are led to \[ u^K_{1, h}(p)-\omega_f(|h|_G)\leq u^K_1(p)\quad \text{in $\Omega'$,} \] which implies \eqref{equiconti1} with $n=2$. We can repeat the argument to show that \eqref{equiconti1} holds for all $n\geq 1$. Exchanging the roles of $u^K_n$ and $u^K_{n, h}$, we deduce that \[ |u^K_{n, h}-u^K_{n}|\leq \omega_f(|h|_G) \quad \text{in $\H$}, \] which yields \[ |u_{n}(h\cdot p)-u_{n}(p)|\leq \omega_f(|h|_G) \] for all $p\in (h^{-1}\cdot\Oba)\cap \Oba$ and $n\geq 1$. Due to the arbitrariness of $h$, we get the equicontinuity of $u_n$ in $\Oba$ with modulus $\omega_f$. It follows that $Q(f)=\inf_{n\geq 1} u_n$ is also continuous in $\Oba$ with the same modulus. The uniform convergence of $u_n$ to $Q(f)$ in $\Oba$ as $n\to \infty$ is an immediate consequence of Dini's theorem. \end{proof} \begin{rmk}\label{rmk convergence2} It is possible to use the same scheme to approximate the h-quasiconvex envelope $Q(f)$ of a function $f\in C(\Oba)$ that takes general boundary values. In this case, the uniqueness and existence of $u_n$ are guaranteed by Theorem \ref{thm comp2} and Theorem \ref{thm existence2}. The pointwise convergence of $u_n$ to $Q(f)$ can be shown in the same way as in the proof of Theorem \ref{thm scheme bdry}.
It is also possible to obtain uniform convergence if $\ul{f}$ can be extended to an h-quasiconvex function in a neighborhood of $\Oba$. \end{rmk} We conclude this section by providing a relaxed version of Theorem \ref{thm scheme bdry} with no boundedness assumption on $\Omega$ and no boundary data prescribed on $\pO$. Based on Proposition \ref{prop sub}, for any $f\in USC(\Omega)$ that is bounded below, one can still obtain a sequence $u_n$ that converges pointwise to $Q(f)$ in $\Omega$ without even using the boundary value of $f$. In this general case, we can take $u_n$ to be the maximum subsolution of \eqref{iteration eq}. More precisely, let $u_0=f$ and, for $n=1, 2, \ldots$, let $u_n$ be the maximum subsolution of \eqref{iteration eq}, that is, \beq\label{iteration sub1} u_n=v_n^\ast\quad \text{in $\Omega$}, \eeq where, for any $p\in \Omega$, \beq\label{iteration sub2} v_n(p)=\sup\{u(p): \ \text{$u\in USC(\Omega)$ is a subsolution of \eqref{hj eq} with $f=u_{n-1}$}\}. \eeq By Proposition \ref{prop sub}, $u_n$ is indeed a subsolution of \eqref{hj eq} with $f=u_{n-1}$. \begin{thm}[Iterative scheme for envelope without boundary data]\label{thm scheme} Let $\Omega$ be an h-convex domain in $\H$ and $f\in USC(\Omega)$. Assume that there exists an h-quasiconvex function $\ul{f}\in USC(\Omega)$ such that $f\geq \ul{f}$ in $\Omega$. Let $u_n\in USC(\Omega)$ be iteratively defined by \eqref{iteration sub1}--\eqref{iteration sub2}. Then $u_n\to Q(f)$ pointwise in $\Omega$ as $n\to \infty$, where $Q(f)\in USC(\Omega)$ is the h-quasiconvex envelope of $f$. \end{thm} \begin{proof} By Lemma \ref{lem sub}, we still have \eqref{monotone bdry}. It then follows from Remark \ref{rmk limits} and Proposition \ref{prop sub-stability} that $\lim_{n\to \infty} u_n\in USC(\Omega)$ and \[ \lim_{n\to \infty} u_n\leq Q(f) \quad \text{in $\Omega$.} \] We can also get the reverse inequality by showing that $Q(f)\leq u_n$ in $\Omega$ for all $n\geq 1$.
Note that here we cannot prove it by the comparison principle, Theorem \ref{thm comp}, due to the possible unboundedness of $\Omega$ and loss of the boundary data. Instead, we use the maximality of $u_n$ among all subsolutions of \eqref{iteration eq}: since $Q(f)$ is h-quasiconvex in $\Omega$ and, by induction, $Q(f)\leq u_{n-1}$ in $\Omega$ (starting from $Q(f)\leq u_0=f$), it follows from Theorem \ref{thm char} that $Q(f)$ is a subsolution of \eqref{hj eq} with $f=u_{n-1}$, and therefore $Q(f)\leq v_n\leq u_n$ in $\Omega$ by \eqref{iteration sub1} and \eqref{iteration sub2}. \end{proof} \section{H-convex hull}\label{sec:convex-hull} In this section we study the h-convex hull of a given set in $\mathbb{H}$. Using the definition of h-convex sets introduced in Definition \ref{def h-set}, we can define the h-convex hull of a set in the following natural way. \begin{defi}[H-convex hull]\label{def hull} For a set $E \subset \H$ we denote by $\coh (E)$ the h-convex hull of $E$, defined to be the smallest h-convex set in $\H$ containing $E$, i.e., \[ \coh(E)=\bigcap \, \{D\subset \H: \text{$D$ is h-convex and satisfies $E\subset D$} \}. \] \end{defi} Below we attempt to understand several basic properties of h-convex hulls. \subsection{Level set formulation} We first establish a connection between h-quasiconvex envelopes and h-convex hulls. Our general process of convexifying an open or closed set is an adaptation of the so-called level set method, which can be summarized as follows. \begin{enumerate} \item For a given open (resp., closed) set $E\subset \H$, we take a function $f\in C(\H)$ such that \beq\label{appl1} E= \{p\in \H: f(p)<0\} \quad (\text{resp., } \ E=\{p\in \H: f(p)\leq 0\}). \eeq \item We construct the h-quasiconvex envelope $Q(f)$. \item The h-convex hull turns out to be the $0$-sublevel set of $Q(f)$, that is, \[ \coh(E)=\{p\in \H: Q(f)(p)<0\} \quad (\text{resp., } \ \coh(E)=\{p\in \H: Q(f)(p)\leq 0\}). \] \end{enumerate} Let us prove the result stated in step (3) above under more precise assumptions. We first examine the case when $E$ is a bounded open or closed set in $\H$.
Then we can take $\Omega=B_{R}(0)\supset \overline{\coh(E)}$ with $R>0$ large and use Theorem \ref{thm scheme bdry} to construct the h-quasiconvex envelope $Q(f)\in C(\Omega)$ for a defining function $f\in C(\Omega)$ of the set $E$. \begin{thm}[Level set method for h-convex hull]\label{thm hull1} Let $E\subset \H$ be a bounded open (resp., closed) set. Let $R>0$ be such that $\overline{\coh(E)}\subset B_R(0)$. Assume that $f\in C(\H)$ satisfies \eqref{appl1}. Assume also that there exists $K>0$ such that \beq\label{tech cond1} \begin{cases} f\leq K &\quad \text{in $B_R(0)$,}\\ f\equiv K &\quad \text{in $\H\setminus B_{R}(0)$.} \end{cases} \eeq Let $Q(f)$ be the h-quasiconvex envelope of $f$. Then, $Q(f)\in C(\H)$ and $Q(f)$ satisfies \beq\label{appl3} Q(f)\equiv K \quad \text{in $\H\setminus B_R(0)$} \eeq and \beq\label{appl2} \coh(E)=\{p\in \H: Q(f)(p)<0\} \quad (\text{resp., } \ \coh(E)=\{p\in \H: Q(f)(p)\leq 0\}). \eeq \end{thm} \begin{proof} Let us only give a proof in the case when $E$ is a bounded open set. One can use the same argument to prove the result for a bounded closed set $E$. In our current setting, we take $\Omega=B_R(0)$ and choose \[ \ul{f}(p)=\min\{L(|p|_G-R), 0\}+K, \quad p\in \H \] with $L>0$ large so that all of the assumptions in Theorem \ref{thm scheme bdry} are satisfied. We thus have $Q(f)\in C(\Oba)$ and \eqref{appl3}. Note that $\coh(E)\subset\{p\in \H: Q(f)(p)<0\}$ holds, since the right-hand side is an h-convex set containing $E$, being a sublevel set of the h-quasiconvex function $Q(f)$. In order to show \eqref{appl2}, it thus suffices to prove the reverse inclusion. To this end, we recall from Proposition \ref{prop metric-quasi} that \[ \psi_{\coh(E)}:=-\tilde{d}_H(\cdot, \H\setminus \coh(E)) \] is an h-quasiconvex function in $\H$.
We adapt the proof of Proposition \ref{prop lower} to construct an h-quasiconvex function $\ul{f}:=g\circ \psi_{\coh(E)}$ in $\Omega$ satisfying $f(p)\geq \ul{f}(p)$ for all $p\in \Omega$, where $g$ is determined by the modulus of continuity $\omega_f$ of $f$, given by \eqref{g-fun}. It follows that \[ Q(f)\geq Q(g\circ \psi_{\coh(E)})=g\circ \psi_{\coh(E)}. \] Since $g$ is nondecreasing with $g(0)=K>0$, we are led to \[ \begin{aligned} \{p\in \H: Q(f)(p)<0\}&\subset \{p\in \H: g(\psi_{\coh(E)}(p))<0\}\\ &\subset\{p\in \H: \psi_{\coh(E)}(p)<0\}\subset\coh(E). \end{aligned} \] The proof is now complete. \end{proof} As an immediate consequence of the result above, if $E\subset \H$ is a bounded open set, its h-convex hull $\coh(E)$ is bounded and open as well. Likewise, $\coh(E)$ is bounded and closed provided that $E\subset \H$ is bounded and closed. The h-convex hull $\coh(E)$ essentially does not depend on the choices of $\Omega$ and $f\in C(\H)$ as long as $\Omega$ is large enough to contain $\coh(E)$ and the $0$-sublevel set of $f$ agrees with $E$. Following an argument similar to that in the proof of Theorem \ref{thm hull1}, one can construct the h-convex hull of a general, possibly unbounded open set $E\subset \H$. In this case, we study $\coh(E)$ in $\Omega=\H$ without imposing technical assumptions like \eqref{tech cond1}. We can use Theorem \ref{thm scheme} to get $Q(f)\in USC(\H)$. \begin{thm}\label{thm hull2} Let $E\subset \H$ be an open set. Assume that $f\in C(\H)$ satisfies \[ E= \{p\in \H: f(p)<0\}. \] Let $Q(f)$ be the h-quasiconvex envelope of $f$. Then $Q(f)\in USC(\H)$ and \[ \coh(E)=\{p\in \H: Q(f)(p)<0\}. \] \end{thm} The proof is omitted, since it resembles that of Theorem \ref{thm hull1}. \subsection{Quantitative inclusion principle} The definition of h-convex hulls, Definition \ref{def hull}, immediately yields the inclusion principle: $\coh(D)\subset \coh(E)$ if $D\subset E$.
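For the reader's convenience, the (unquantified) inclusion principle is a one-line consequence of Definition \ref{def hull}:

```latex
% If D \subset E, then every h-convex set containing E also contains D, so the
% intersection defining \coh(D) runs over a larger family than that for \coh(E):
\[
\coh(D)=\bigcap\,\setof{D'\subset \H: \text{$D'$ is h-convex and } D\subset D'}
\subset \bigcap\,\setof{D'\subset \H: \text{$D'$ is h-convex and } E\subset D'}=\coh(E).
\]
```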
Using the formulation via h-quasiconvex envelopes, we can quantify this result for bounded open or closed sets. \begin{thm}[Quantitative inclusion principle]\label{thm sep} Let $D, E$ be two bounded open (resp., closed) sets in $\H$. If $D\subset E$, then $\coh(D)\subset \coh(E)$ and \eqref{dist sep} holds. \end{thm} In order to prove this result, let us introduce the following sup-convolution for a bounded continuous function $u$ with $\delta>0$: \beq\label{sup-convolution} u^\delta(p)=\sup_{\tilde{d}_H(p, q)< \delta} u(q),\quad p\in \H. \eeq Due to the continuity of $u$, we also have \[ u^\delta(p)=\max_{\tilde{d}_H(p, q)\leq \delta} u(q), \quad p\in \H. \] It turns out that this sup-convolution also preserves h-quasiconvexity. \begin{lem}[H-quasiconvexity preserving by sup-convolution]\label{lem sup} Assume that $u$ is a bounded continuous function in $\H$. Let $u^\delta\in C(\H)$ be given by \eqref{sup-convolution}. Then $u^\delta$ is h-quasiconvex in $\H$ if $u$ is h-quasiconvex in $\H$. \end{lem} \begin{proof} Note that $u^\delta$ is bounded and continuous in $\H$, since $u$ is bounded and continuous. We approximate $u^\delta$ by \beq\label{sup approx} u^\delta_\beta(p)=\sup_{q\in \H}\left\{u(q)-\frac{\tilde{d}_H(p, q)^\beta}{\delta^\beta}\right\} \eeq with $\beta>0$ large. It is easily seen that $u^\delta_\beta$ is also bounded and continuous in $\H$. For any $p_0\in \H$, we can find $q_{\delta, \beta}\in \H$ such that \beq\label{sep eq4} u^\delta_\beta(p_0)=u(q_{\delta, \beta})-\frac{\tilde{d}_H(p_0, q_{\delta, \beta})^\beta}{\delta^\beta}, \eeq which yields \[ \frac{\tilde{d}_H(p_0, q_{\delta, \beta})^\beta}{\delta^\beta}\leq u(q_{\delta, \beta})-u^\delta_\beta(p_0). \] Due to the boundedness of $u$, sending $\beta\to \infty$, we obtain \[ \limsup_{\beta\to \infty}\tilde{d}_H(p_0, q_{\delta, \beta})\leq \delta. 
\] We thus can take a subsequence of $q_{\delta, \beta}$ converging to a point $q_\delta\in \H$, which satisfies $\tilde{d}_H(p_0, q_{\delta})\leq \delta$. It follows that \beq\label{sep eq1} \limsup_{\beta\to \infty} u^\delta_\beta(p_0)\leq u(q_\delta)\leq \max_{\tilde{d}_H(p_0, q)\leq \delta} u(q)=u^\delta(p_0). \eeq On the other hand, noticing that \[ u^\delta_\beta(p_0)\geq \sup\left\{u(q)-\frac{\tilde{d}_H(p_0, q)^\beta}{\delta^\beta}: \tilde{d}_H(p_0, q)<\delta\right\}, \] we deduce that \beq\label{sep eq2} \sup_{\beta\geq 1} u^\delta_\beta(p_0)\geq \sup\left\{u(q): \tilde{d}_H(p_0, q)<\delta\right\}=u^\delta(p_0). \eeq Combining \eqref{sep eq1} and \eqref{sep eq2} with the arbitrariness of $p_0$, we are led to \beq\label{sep eq3} u^\delta(p)=\sup_{\beta\geq 1} u^\delta_\beta(p)\quad\text{for all $p\in \H$.} \eeq Suppose that there exist $\varphi\in C^1(\H)$ and $p_0\in \H$ such that $u^\delta_\beta-\varphi$ attains a local maximum at $p_0$. Then \beq\label{sep eq5} (p, q)\mapsto u(q)-\varphi(p)-\frac{\tilde{d}_H(p, q)^\beta}{\delta^\beta} \eeq attains a maximum at $(p_0, q_{\delta, \beta})$, where $q_{\delta, \beta}\in \H$ is the point satisfying \eqref{sep eq4}. The maximality of \eqref{sep eq5} implies that $u-\psi$ attains a maximum at $q_{\delta, \beta}$, where $\psi\in C^1(\H)$ is given by \[ \psi(q)=\varphi(p_0)+\frac{\tilde{d}_H(p_0, q)^\beta}{\delta^\beta}. \] Since $u$ is h-quasiconvex, we have \beq\label{sep eq6} \sup\left\{\la \nabla_H \psi(q_{\delta, \beta}), (q_{\delta, \beta}^{-1}\cdot \eta)_h\ra:\ \eta\in S_{q_{\delta, \beta}}(u) \right\}\leq 0. \eeq Note that for any $\xi \in S_{p_0} (u^\delta_\beta)$, we can choose $\eta=q_{\delta, \beta}\cdot p_0^{-1}\cdot \xi$ so that $\eta\in S_{q_{\delta, \beta}} (u)$.
Indeed, we can easily verify that $\eta\in \H_{q_{\delta, \beta}}$ and by \eqref{sup approx} and \eqref{sep eq4} we obtain \[ u(\eta)-\frac{\tilde{d}_H(\xi, \eta)^\beta}{\delta^\beta}\leq u^\delta_{\beta}(\xi)<u^\delta_\beta(p_0)=u(q_{\delta, \beta})-\frac{\tilde{d}_H(p_0, q_{\delta, \beta})^\beta}{\delta^\beta} \] which reduces to $u(\eta)<u(q_{\delta, \beta})$ due to the fact that \[ \tilde{d}_H(\xi, \eta)=\tilde{d}_H(p_0, q_{\delta, \beta}). \] Hence, we have $\eta\in S_{q_{\delta, \beta}} (u)$. Since $p_0^{-1}\cdot \xi=q_{\delta, \beta}^{-1}\cdot \eta$ and $\nabla_H\varphi(p_0)=\nabla_H \psi(q_{\delta, \beta})$ (similar to the calculations for \eqref{cp new5}), \eqref{sep eq6} can be rewritten as \[ \sup\left\{\la \nabla_H \varphi(p_0), ({p_0}^{-1}\cdot \xi)_h\ra:\ \xi \in S_{p_0} (u^\delta_\beta) \right\}\leq 0. \] By Theorem \ref{thm char}, we see that $u^\delta_\beta$ is h-quasiconvex in $\H$. Thanks to \eqref{sep eq3}, we conclude the proof of the h-quasiconvexity of $u^\delta$ by applying the result in Remark \ref{rmk supremum}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm sep}] Since $E\subset \coh(E)$, we get $D\subset \coh(E)$. Due to the h-convexity of $\coh(E)$, by Definition \ref{def hull} we obtain $\coh(D)\subset \coh(E)$. We next prove \eqref{dist sep}, assuming that $E, D$ are open. The case for closed sets can be similarly treated. Take $\Omega=B_{R}(0)$ and $f\in C(\H)$ as in Theorem \ref{thm hull1}. In this case, $Q(f)\in C(\H)$ and \eqref{appl2} hold. Since $Q(f)$ is constant outside $B_{R}(0)$, we can also obtain the boundedness of $Q(f)$. Applying Lemma \ref{lem sup}, we see that the sup-convolution $Q(f)^\delta$ of $Q(f)$ is h-quasiconvex in $\H$ for any $\delta>0$. It follows that \[ \coh(E)_\delta:=\{p\in \H: Q(f)^\delta(p)<0\}=\{p\in \H: \tilde{B}_p(\delta)\subset \coh(E)\} \] is h-convex, where we recall that $\tilde{B}_p(\delta)$ denotes the right-invariant metric ball centered at $p$ with radius $\delta$.
By taking $\delta=\inf_{p\in \H\setminus E}\tilde{d}_H(p, D)$, we have \[ D\subset \{p\in \H: \tilde{B}_p(\delta)\subset E\}\subset \coh(E)_\delta, \] which yields $\coh(D)\subset \coh(E)_\delta$ by the h-convexity of $\coh(E)_\delta$. This gives \eqref{dist sep} immediately. \end{proof} \subsection{Continuity under star-shapedness}\label{sec:cont} To conclude this work, we briefly mention the continuity of $\coh(E)$ with respect to $E$. Let us recall that for any two sets $D,E \subset \H$, the distance $d_H(D,E)$ is defined as \[ d_H(D,E)=\max \left\{\sup_{p \in D} d_H(p,E),\ \sup_{p \in E} d_H(p,D)\right\}. \] We are interested in the following question: for a bounded open or closed set $E\subset \H$ and a sequence of sets $E_j\subset \H$, is it true that \beq\label{hull conti} d_H(\coh(E), \coh(E_j))\to 0\quad \text{if}\quad d_H(E, E_j)\to 0 \eeq holds? In contrast to the Euclidean case, where the answer is affirmative, the situation in the Heisenberg group is less straightforward. In general, we cannot expect \eqref{hull conti} to hold, as indicated by the following example. \begin{example}\label{ex2} We slightly change the set $E$ in Example \ref{ex1} by taking \[ E =(\pi(0, r) \times (-\delta, 0)) \cup (\pi(0, r) \times (t,t+ \delta)) \] with $r, t, \delta>0$. In other words, $E$ is the union of two circular cylinders of height $\delta>0$. We may use the same argument as in Example \ref{ex1} to show that $E$ is h-convex if $r^2\leq 2t$ but not h-convex if $r^2>2t$. Consider the critical case when $r^2=2t$. Since $E$ is h-convex in this case, we have $\coh(E)=E$. On the other hand, it is not difficult to see that any $\vep$-neighborhood of $E$ (with respect to $d_H$ or other equivalent metrics), denoted by $\N_\vep(E)$, is not h-convex. Its h-convex hull $\coh(\N_\vep(E))$ at least contains horizontal segments joining the upper and lower cylinders. Hence, we can get $c>0$ such that \[ d_H(\coh(E), \coh(\N_\vep(E)))>c \] for all $\vep>0$.
This shows that \eqref{hull conti} fails to hold in general. \end{example} We have seen that the h-convex hull $\coh(E)$ is not stable with respect to perturbations of the set $E$. It is however worth emphasizing that the h-quasiconvex envelope is stable, as shown in Proposition \ref{prop stability}. The main reason for such a discrepancy is that the quasi-convexification process may fatten the level sets of functions, that is, a level set of $Q(f)$ may contain interior points while the level set of $f$ at the same level initially does not. See for example \cite{BSS, So3, Gbook} on the fattening phenomenon for the level-set formulation of geometric evolution equations in different contexts. We need additional assumptions to guarantee \eqref{hull conti}. One sufficient condition is the following type of star-shapedness of the set $E$. \begin{defi}[Strict star-shapedness] We say that a connected set $E\subset \H$ is strictly star-shaped with respect to the group identity $0\in \H$ if for any $0\leq \lambda<1$ we have $\delta_\lambda (E)\subset E$ and \beq\label{strict star eq} \inf_{x\in \delta_\lambda(E)} d_H(x, \H\setminus E)>0. \eeq We say $E$ is strictly star-shaped if there exists a point $p_0\in \H$ such that the left translation $p_0^{-1} \cdot E$ of $E$ is strictly star-shaped with respect to $0$. \end{defi} This notion is stronger than the star-shapedness property studied in \cite{DF, DGS}, where all of the conditions except \eqref{strict star eq} are required. Note that an h-convex set can be disconnected, and h-convexity implies neither strict nor weak star-shapedness, as observed in Example \ref{ex1}. Below, we present three examples of connected sets to compare these notions. We refer the interested reader to \cite{DF} for more examples of star-shaped sets.
\begin{example} Let us take \begin{align*} E_1 &= \textstyle\bigcup_{\lambda \in [0,1]} \delta_{\lambda}((1,1,1)) \cup \delta_{\lambda}((-1,-1,1)) ,\\ E_2 & =(\pi(0,1)\times \{0\})\cup (\pi(0,1)\times \{t\})\cup (\{(0,0)\}\times (0,1)),\\ E_3 &= \{(x, 0, 0): x\in (-1, 1)\}, \end{align*} where $\pi(0, 1)$ denotes the unit disk centered at the origin as defined in \eqref{h-pi}. All of these sets are bounded and connected. The set $E_1$ is strictly star-shaped with respect to the group identity but not h-convex; it is easily seen that $\delta_s(E_1)\subset E_1$ for all $0\leq s<1$ but the horizontal segment between $(1, 1, 1)$ and $(-1, -1, 1)$ does not stay in $E_1$. In view of Example \ref{ex1}, we see that $E_2$ is h-convex. But it is not star-shaped, since for any $p_0\in E_2$, there exists $p\in E_2$ such that the curve $\{p_0\cdot \delta_\lambda(p_0^{-1}\cdot p): \lambda\in [0, 1)\}$ contains points outside $E_2$. The horizontal segment $E_3$ is obviously h-convex and star-shaped with respect to every point in $E_3$. It is however not star-shaped with respect to points in $\H\setminus E_3$. It is not strictly star-shaped in $\H$ either. Indeed, for any $p_0\in E_3$, we can find $\lambda<1$ close to $1$ such that $0\in \delta_\lambda(p_0^{-1}\cdot E_3)$ and $d_H(0, \H\setminus (p_0^{-1}\cdot E_3))=0$; in other words, $p_0^{-1}\cdot E_3$ is not strictly star-shaped with respect to $0$. \end{example} Assuming strict star-shapedness, we prove the stability of h-convex hulls. \begin{prop}[Stability of h-convex hulls of star-shaped sets]\label{prop stable} Let $E\subset \H$ be a bounded open (resp., closed) set that is strictly star-shaped. Let $E_j$ be a sequence of bounded open (resp., closed) sets in $\H$ such that $d_H(E, E_j)\to 0$ as $j\to \infty$.
Then there holds \beq\label{set stability} d_H(\coh(E), \coh(E_j))\to 0\quad \text{as $j\to \infty$.} \eeq \end{prop} \begin{proof} By the definition of the h-convex hull and the fact that $\delta_\lambda (p)\in \H_{\delta_\lambda (q)}$ whenever $p\in \H_q$, it is not difficult to verify that \beq\label{starshape eq1} \coh(\delta_\lambda E)=\delta_\lambda (\coh(E)) \eeq for any $\lambda> 0$. Owing to the strict star-shapedness condition on $E$, for each fixed $0<\lambda<1$ we have $\delta_\lambda (E)\subset E_j$ when $j\geq 1$ is sufficiently large. It then follows that $\coh(\delta_\lambda (E))\subset \coh( E_j)$. By \eqref{starshape eq1}, we get $\delta_\lambda (\coh(E))\subset \coh(E_j)$ for all $j\geq 1$ large. Since the strict star-shapedness also implies $E\subset \delta_{1/\lambda} (E)$ and \[ d_H(E, \delta_{1/\lambda} (E))>0 \] for any $0<\lambda<1$, we can use a symmetric argument to show that $\coh(E_j)\subset \delta_{1/\lambda} (\coh(E))$ for $j\geq 1$ large. Hence, $\delta_{\lambda} (\coh(E))\subset \coh(E_j)\subset \delta_{1/\lambda} (\coh(E))$ and $\delta_{\lambda} (\coh(E))\subset \coh(E)\subset \delta_{1/\lambda} (\coh(E))$ imply that \[ d_H(\coh(E), \coh(E_j)) \leq d_H\left(\delta_\lambda (\coh(E)), \delta_{1/\lambda}(\coh(E))\right) \] for $j\geq 1$ large. Letting $j\to \infty$ and then $\lambda\to 1$ yields \eqref{set stability}. \end{proof} The strict star-shapedness is only a sufficient condition to guarantee the stability of the h-convex hull. It would be interesting to find a necessary and sufficient condition for this property. \subsection*{Acknowledgments} The authors are grateful to the anonymous referees for valuable comments, especially to one of the referees for pointing out useful references that helped us prove Lemma \ref{rmk convex boundary} and improve the result in Theorem \ref{thm comp2}. The work of the second author was supported by JSPS Grants-in-Aid for Scientific Research (No.~19K03574, No.~22K03396) and by funding (Grant No.
205004) from Fukuoka University. The work of the third author was supported by JSPS Grant-in-Aid for Research Activity Start-up (No.~20K22315) and JSPS Grant-in-Aid for Early-Career Scientists (No.~22K13947). \bibliographystyle{abbrv} \begin{thebibliography}{10} \bibitem{ArCaMo} G.~Arena, A.~O. Caruso, and R.~Monti. \newblock Regularity properties of {$H$}-convex sets. \newblock {\em J. Geom. Anal.}, 22(2):583--602, 2012. \bibitem{BaRi} Z.~M. Balogh and M.~Rickly. \newblock Regularity of convex functions on {H}eisenberg groups. \newblock {\em Ann. Sc. Norm. Super. Pisa Cl. Sci. (5)}, 2(4):847--868, 2003. \bibitem{BaDr} M.~Bardi and F.~Dragoni. \newblock Subdifferential and properties of convex functions with respect to vector fields. \newblock {\em J. Convex Anal.}, 21(3):785--810, 2014. \bibitem{BSS} G.~Barles, H.~M. Soner, and P.~E. Souganidis. \newblock Front propagation and phase field theory. \newblock {\em SIAM J. Control Optim.}, 31(2):439--469, 1993. \bibitem{BGJ1} E.~N. Barron, R.~Goebel, and R.~R. Jensen. \newblock The quasiconvex envelope through first-order partial differential equations which characterize quasiconvexity of nonsmooth functions. \newblock {\em Discrete Contin. Dyn. Syst. Ser. B}, 17(6):1693--1706, 2012. \bibitem{CCP1} A.~Calogero, G.~Carcano, and R.~Pini. \newblock Twisted convex hulls in the {H}eisenberg group. \newblock {\em J. Convex Anal.}, 14(3):607--619, 2007. \bibitem{CCP2} A.~Calogero, G.~Carcano, and R.~Pini. \newblock On weakly {H}-quasiconvex functions on the {H}eisenberg group. \newblock {\em J. Convex Anal.}, 15(4):753--766, 2008. \bibitem{CaPi} A.~Calogero and R.~Pini. \newblock Horizontal normal map on the {H}eisenberg group. \newblock {\em J. Nonlinear Convex Anal.}, 12(2):287--307, 2011. \bibitem{CDPT} L.~Capogna, D.~Danielli, S.~D. Pauls, and J.~T. Tyson. \newblock {\em An introduction to the {H}eisenberg group and the sub-{R}iemannian isoperimetric problem}, volume 259 of {\em Progress in Mathematics}. 
\newblock Birkh\"auser Verlag, Basel, 2007. \bibitem{Car1} P.~Cardaliaguet. \newblock On front propagation problems with nonlocal terms. \newblock {\em Adv. Differential Equations}, 5(1-3):213--268, 2000. \bibitem{Car2} P.~Cardaliaguet. \newblock Front propagation problems with nonlocal terms. {II}. \newblock {\em J. Math. Anal. Appl.}, 260(2):572--601, 2001. \bibitem{CGG} Y.~G. Chen, Y.~Giga, and S.~Goto. \newblock Uniqueness and existence of viscosity solutions of generalized mean curvature flow equations. \newblock {\em J. Differential Geom.}, 33(3):749--786, 1991. \bibitem{CIL} M.~G. Crandall, H.~Ishii, and P.-L. Lions. \newblock User's guide to viscosity solutions of second order partial differential equations. \newblock {\em Bull. Amer. Math. Soc. (N.S.)}, 27(1):1--67, 1992. \bibitem{DF} F.~Dragoni and D.~Filali. \newblock Starshaped and convex sets in Carnot groups and in the geometries of vector fields. \newblock {\em J. Convex Anal.}, 26(4):1349--1372, 2019. \bibitem{DGN1} D.~Danielli, N.~Garofalo, and D.-M. Nhieu. \newblock Notions of convexity in {C}arnot groups. \newblock {\em Comm. Anal. Geom.}, 11(2):263--341, 2003. \bibitem{DGS} F.~Dragoni, N.~Garofalo, and P.~Salani. \newblock Starshapedness for fully non-linear equations in {C}arnot groups. \newblock {\em J. Lond. Math. Soc. (2)}, 99(3):901--918, 2019. \bibitem{ES1} L.~C. Evans and J.~Spruck. \newblock Motion of level sets by mean curvature. {I}. \newblock {\em J. Differential Geom.}, 33(3):635--681, 1991. \bibitem{Gbook} Y.~Giga. \newblock {\em Surface evolution equations}, volume~99 of {\em Monographs in Mathematics}. \newblock Birkh\"auser Verlag, Basel, 2006. \bibitem{GuMo2} C.~E. Guti\'{e}rrez and A.~Montanari. \newblock On the second order derivatives of convex functions on the {H}eisenberg group. \newblock {\em Ann. Sc. Norm. Super. Pisa Cl. Sci. (5)}, 3(2):349--366, 2004. \bibitem{KLM} T.~Kagaya, Q.~Liu, and H.~Mitake.
\newblock Quasiconvexity preserving property for fully nonlinear nonlocal parabolic equations. \newblock {\em NoDEA Nonlinear Differential Equations Appl.}, 30(1): Paper No. 13, 2023. \bibitem{LMZ} Q.~Liu, J.~J. Manfredi, and X.~Zhou. \newblock Lipschitz continuity and convexity preserving for solutions of semilinear evolution equations in the {H}eisenberg group. \newblock {\em Calc. Var. Partial Differential Equations}, 55(4):55:80, 2016. \bibitem{LZ2} Q.~Liu and X.~Zhou. \newblock Horizontal convex envelope in the {H}eisenberg group and applications to sub-elliptic equations. \newblock {\em Ann. Sc. Norm. Super. Pisa Cl. Sci. (5)}, 22(4):2039--2076, 2021. \bibitem{LMS} G.~Lu, J.~J. Manfredi, and B.~Stroffolini. \newblock Convex functions on the {H}eisenberg group. \newblock {\em Calc. Var. Partial Differential Equations}, 19(1):1--22, 2004. \bibitem{Mag} V.~Magnani. \newblock Lipschitz continuity, {A}leksandrov theorem and characterizations for {$H$}-convex functions. \newblock {\em Math. Ann.}, 334(1):199--233, 2006. \bibitem{MaSc} V.~Magnani and M.~Scienza. \newblock Characterizations of differentiability for {$h$}-convex functions in stratified groups. \newblock {\em Ann. Sc. Norm. Super. Pisa Cl. Sci. (5)}, 13(3):675--697, 2014. \bibitem{MoR} R.~Monti and M.~Rickly. \newblock Geodetically convex sets in the {H}eisenberg group. \newblock {\em J. Convex Anal.}, 12(1):187--196, 2005. \bibitem{MoM} A.~Montanari and D.~Morbidelli. \newblock Multiexponential maps in {C}arnot groups with applications to convexity and differentiability. \newblock {\em Ann. Mat. Pura Appl. (4)}, 200(1):253--272, 2021. \bibitem{Riconv} M.~Rickly. \newblock First-order regularity of convex functions on {C}arnot groups. \newblock {\em J. Geom. Anal.}, 16(4):679--702, 2006. \bibitem{Rithesis} M.~Rickly. \newblock {\em On questions of existence and regularity related to notions of convexity in Carnot groups}. \newblock PhD thesis, Mathematisches Institut, Bern, 2005.
\bibitem{Sl} D.~Slep\v{c}ev. \newblock Approximation schemes for propagation of fronts with nonlocal velocities and {N}eumann boundary conditions. \newblock {\em Nonlinear Anal.}, 52(1):79--115, 2003. \bibitem{So3} H.~M. Soner. \newblock Motion of a set by the curvature of its boundary. \newblock {\em J. Differential Equations}, 101(2):313--372, 1993. \bibitem{SuYa} M.-B. Sun and X.-P. Yang. \newblock Some properties of quasiconvex functions on the {H}eisenberg group. \newblock {\em Acta Math. Appl. Sin. Engl. Ser.}, 21(4):571--580, 2005. \bibitem{Wa} C.~Wang. \newblock Viscosity convex functions on {C}arnot groups. \newblock {\em Proc. Amer. Math. Soc.}, 133(4):1247--1253 (electronic), 2005. \end{thebibliography} \end{document}
2205.01978v2
http://arxiv.org/abs/2205.01978v2
Small modules with interesting rank varieties
\documentclass[12pt,reqno]{amsart} \usepackage{amssymb,amsmath,tabularx,mathrsfs,mathbbol,yfonts,upgreek,hyperref} \usepackage{amsthm,verbatim,comment} \usepackage{geometry} \geometry{top=3.5cm, left=3cm, right=3cm, bottom=3cm} \usepackage{stmaryrd} \usepackage{paralist} \usepackage[all]{xy} \usepackage{mathdots} \usepackage{tikz} \usepackage{ytableau} \usetikzlibrary{arrows,matrix,positioning,fit} \usepackage{graphicx} \allowdisplaybreaks \usepackage{nicematrix} \setcounter{MaxMatrixCols}{20} \NiceMatrixOptions{ code-for-first-row = \color{red} , code-for-last-row = \color{red} , code-for-first-col = \color{blue} , code-for-last-col = \color{red} } \newenvironment{psmallmatrix} {\left(\begin{smallmatrix}} {\end{smallmatrix}\right)} \DeclareSymbolFontAlphabet{\mathbb}{AMSb} \DeclareSymbolFontAlphabet{\mathbbl}{bbold} \newcommand{\bigzero}{\mbox{\normalfont\Large\bfseries 0}} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{nota}[thm]{Notation} \newtheorem{eg}[thm]{Example} \newtheorem{rem}[thm]{Remark} \theoremstyle{remark} \newtheorem*{rem*}{Remarks} \newtheorem{ques}[thm]{Question} \renewcommand{\b}{\boldsymbol} \newcommand{\modcat}[1]{\text{$#1$-{$\mathrm{mod}$}}} \renewcommand{\mod}{\mathrm{mod}\ } \renewcommand{\a}{\textswab{a}} \newcommand{\TT}{\mathscr{T}} \newcommand{\TTS}{\mathscr{T}_\mathrm{sstd}} \newcommand{\T}{\mathrm{T}} \renewcommand{\SS}{\mathrm{S}} \newcommand{\sgn}{\mathrm{sgn}} \newcommand{\Char}{\operatorname{\mathrm{char}}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\stab}{\operatorname{stab}} \newcommand{\sym}[1]{\mathfrak{S}_{#1}} \newcommand{\e}[1]{\overline{e_{#1}}} \renewcommand{\t}[1]{\overline{\mathfrak{t}_{#1}}} \newcommand{\tsym}[2]{\mathfrak{S}_{#1}^{+#2}} \newcommand{\Ind}{\operatorname{\mathrm{Ind}}} \newcommand{\Inf}{\mathrm{Inf}} 
\newcommand{\Res}{\operatorname{\mathrm{Res}}} \newcommand{\GL}{\mathrm{GL}} \newcommand{\bb}[1]{\mathbbl{{#1}}} \newcommand{\R}{\mathscr{R}} \newcommand{\Y}{\mathscr{Y}} \newcommand{\C}{\mathscr{C}} \newcommand{\NN}{\mathbb{N}} \renewcommand{\O}{\mathcal{O}} \newcommand{\M}{\mathfrak{M}} \newcommand{\tlsym}[2]{\mathfrak{S}_{#1}^{+#2}} \newcommand{\tlxi}[2]{\xi_{#1}^{+#2}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\A}{\mathbb{A}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\Rho}{\pmb{\mathscr{V}}} \newcommand{\N}{\mathbb{N}} \renewcommand{\P}{\mathscr{P}} \newcommand{\RP}{\mathscr{RP}} \newcommand{\ch}{\mathrm{ch}} \newcommand{\seq}[1]{\mathbf{#1}} \newcommand{\Sch}{\mathrm{S}} \renewcommand{\L}{\mathscr{L}} \newcommand{\List}{\mathrm{List}} \newcommand{\m}{{\pmb{\mathscr{M}}}} \newcommand{\sgnK}{\mathscr{K}^\pm} \newcommand{\set}[1]{[#1]} \newcommand{\cont}{\text{{\tiny $\#$}}} \newcommand{\len}[1]{\wp(#1)} \newcommand{\power}[1]{\llbracket #1\rrbracket} \newcommand{\B}{\mathcal{B}} \newcommand{\CC}{\mathcal{C}} \newcommand{\W}{\mathscr{W}} \newcommand{\F}{\mathbb{F}} \newcommand{\U}[2]{{\mathrm{U}^\wedge_{{#1}}({#2})}} \newcommand{\ind}[2]{{#1}{\uparrow^{#2}}} \newcommand{\res}[2]{{#1}{\downarrow_{#2}}} \newcommand{\rad}{\mathrm{Rad}} \DeclareMathOperator{\rank}{rank} \newcommand{\0}{\text{\textcolor{white}{$0$}}} \numberwithin{equation}{section} \newcommand{\rk}[2]{V^\#_{#1}({#2})} \newcommand{\Nom}{\mathrm{N}} \newcommand{\s}{\mathfrak{s}} \newcommand{\KJ}[1]{\textcolor{blue}{#1}} \newcommand{\JL}[1]{\textcolor{red}{#1}} \begin{document} \title[The rank varieties]{Small modules with interesting rank varieties} \author{Kay Jin Lim} \address[K. J. Lim]{Division of Mathematical Sciences, Nanyang Technological University, SPMS-MAS-05-16, 21 Nanyang Link, Singapore 637371.} \email{[email protected]} \author{Jialin Wang} \address[J. 
Wang]{Division of Mathematical Sciences, Nanyang Technological University, SPMS-MAS-04-15, 21 Nanyang Link, Singapore 637371.} \email{[email protected]} \begin{abstract} This paper focuses on the rank varieties for modules over a group algebra $\F E$ where $E$ is an elementary abelian $p$-group and $p$ is the characteristic of an algebraically closed field $\F$. In the first part, we give a sufficient condition for a Green vertex of an indecomposable module to contain an elementary abelian $p$-group $E$ in terms of the rank variety of the module restricted to $E$. In the second part, given a homogeneous algebraic variety $V$, we explore the problem of finding a small module with rank variety $V$. In particular, we examine the simple module $D^{(kp-p+1,1^{p-1})}$ for the symmetric group $\sym{kp}$. \end{abstract} \subjclass[2010]{20C20, 20C30, 14J25} \thanks{We thank the referee for numerous suggestions, especially for a shorter proof of Lemma \ref{L: rkinduction}. The first author is supported by Singapore Ministry of Education AcRF Tier 1 grant RG17/20.} \maketitle \section{Introduction} Let $\F$ be an algebraically closed field with positive characteristic $p$. Most group algebras of finite $p$-groups in characteristic $p$ have wild representation type. As such, one cannot hope to classify the indecomposable modules up to isomorphism. Instead, various techniques have been brought in to study the representations without attempting such a classification. In this paper, we mainly focus on the complexity \cite{AE81,AE82}, Green vertex \cite{Green59} and rank variety \cite{carlson} for modules. These notions are interrelated. For example, if $E$ is an elementary abelian $p$-subgroup of rank $k$ of a finite group $G$ and $M$ is an indecomposable $\F G$-module with the rank variety of $\res{M}{E}$ the same as $\F^k$, then $E$ is a subgroup of a Green vertex of $M$. The first result in our paper improves this relation (see Theorem \ref{T: Green}).
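To make the notion of a rank variety concrete, here is a small computational sketch (our own illustration, not part of the paper's arguments) for $p=2$ and $E=C_2\times C_2$, using the shifted-subgroup description: a nonzero point $x$ lies in the rank variety of $M$ exactly when the restriction of $M$ to the cyclic shifted subgroup generated by $u_x=1+x_1(g_1-1)+x_2(g_2-1)$ is not free, which for $p=2$ amounts to $\operatorname{rank}(u_x-1)<\dim M/2$.

```python
import itertools

def rank_mod2(mat):
    # Row rank of a 0/1 matrix over GF(2) by Gaussian elimination.
    rows = [r[:] for r in mat]
    rank = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# E = C2 x C2 = <g1, g2>; basis of the regular module F2[E]: g1^i g2^j.
basis = list(itertools.product([0, 1], repeat=2))
idx = {b: m for m, b in enumerate(basis)}

def reg_action(k):
    # Matrix of g_k - 1 (= g_k + 1 in characteristic 2) on F2[E].
    m = [[0] * 4 for _ in range(4)]
    for (i, j) in basis:
        tgt = ((i + 1) % 2, j) if k == 1 else (i, (j + 1) % 2)
        m[idx[tgt]][idx[(i, j)]] ^= 1      # multiplication by g_k
        m[idx[(i, j)]][idx[(i, j)]] ^= 1   # plus the identity
    return m

def in_rank_variety(nils, x, dim):
    # For p = 2 the restriction to <u_x>, x != 0, is free iff
    # rank(u_x - 1) = dim/2; the rank variety is 0 together with
    # the points x where freeness fails.
    if not any(x):
        return True
    A = [[(x[0] * nils[0][i][j] + x[1] * nils[1][i][j]) % 2
          for j in range(dim)] for i in range(dim)]
    return rank_mod2(A) < dim // 2

pts = list(itertools.product([0, 1], repeat=2))
A1, A2 = reg_action(1), reg_action(2)
B = [[0, 1], [0, 0]]  # 2-dim module: both g_i - 1 act by the same nilpotent
print([x for x in pts if in_rank_variety([A1, A2], x, 4)])  # [(0, 0)]
print([x for x in pts if in_rank_variety([B, B], x, 2)])    # [(0, 0), (1, 1)]
```

As expected, the free module $\F E$ has rank variety $\{0\}$, while the two-dimensional module supported on the "diagonal" shifted subgroup has the line $x_1=x_2$ as its rank variety over $\F_2$.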
In the second part of this paper, we study the exterior power of the natural simple $\F\sym{n}$-module $D^{(n-1,1)}$. Our study is motivated by various sources. For instance, for each homogeneous algebraic variety $V$, Carlson \cite{Carlson84} constructed a module with rank variety $V$. The dimension of the module obtained in this manner is `pretty large'. In \cite[\S12.7]{benson2016}, with the help of the $L_\zeta$-technology, Benson constructed an $\F E$-module of dimension 36 with rank variety $V(p_3)$, where \[p_3=x_1^2x_2^2+x_1^2x_3^2+x_2^2x_3^2,\] when $p=3$ and $E=(C_3)^3$. This is smaller than the Specht module $S^{(3^3)}$ (which has dimension 42) with the same rank variety upon restriction to the elementary abelian $3$-subgroup generated by 3 disjoint 3-cycles (see \cite{L09}). In \cite[Appendix B, Problem 16]{benson2016}, for a homogeneous algebraic variety $V$, Benson asked for the smallest dimension $d_V$ of an $\F E$-module with rank variety $V$. The problem is obviously extremely difficult to answer in general. Nevertheless, we wish to offer some insight into this problem from the viewpoint of the representations of symmetric groups. More precisely, we want to find naturally occurring representations from the symmetric groups with interesting varieties. Apart from the more general Specht module $S^{(p^p)}$, the other examples are the simple module $D^{(n-1,1)}$ when $p=2$ (see \cite{Jiang21}) and some basic spin modules (see \cite{Uno}). In \cite[Theorem 4.9]{LT13}, Tan and the first author proved that all simple modules $D^\lambda$ for the symmetric groups belonging to the weight 2 block have complexity 2 except $\lambda=(p+1,1^{p-1})$. This suggested the study of the simple module $D(p-1):=D^{(kp-p+1,1^{p-1})}\cong \bigwedge^{p-1}D^{(kp-1,1)}$ for $k\geq 2$. Notice that, when $p\nmid n$, for all $1\leq r\leq n-1$, $\bigwedge^rD^{(n-1,1)}\cong \bigwedge^rS^{(n-1,1)}\cong S^{(n-r,1^r)}$, which is a simple Specht module.
As such, the variety of $\bigwedge^rD^{(n-1,1)}$ is `uninteresting' as shown in \cite{DL17}. When $p\mid n$ and $1\leq r<p-1$, we have $p\nmid \dim_\F D^{(n-r,1^r)}$ and its variety is again `uninteresting'. Hence, the module $D(p-1)$ is the `smallest' we could have picked. Suppose that $k\not\equiv 1 \pmod{p}$ and $p$ is odd. In this paper, we prove that $D(p-1)$ has complexity $k-1$. More precisely, we show that the rank variety of the restriction of $D(p-1)$ to the largest elementary abelian $p$-subgroup $E_k$ is the algebraic set $V(p_k)$ where $p_k$ is given as in Equation \ref{Eq: pk}. We consider the module to be rather small in the following sense. When $p=3$ and $k\geq 3$, Carlson's result (see Theorem \ref{T: dimension}) shows that any module with the rank variety $V(p_k)$ has dimension divisible by $3$ and at least $6(k-1)$. On the other hand, $\dim D(2)={3k-2\choose 2}=\frac{3(k-1)(3k-2)}{2}$. Its ratio with Carlson's bound is \[\frac{\dim D(2)}{6(k-1)}=\frac{3k-2}{4}\] which is a polynomial of degree $1$ in $k$. In the particular case $k=3$, the module $M:=\res{D^{(7,1,1)}}{E_3}$ has the rank variety $V(p_3)$ as above and has dimension 21, which is strictly smaller than the dimensions of the modules we have discussed in the previous paragraph, which are 36 and 42. Using Magma \cite{Magma}, we have checked that $M$ is indecomposable; we have not found a module of strictly smaller dimension than $M$ supporting $p_3$. The methods we have employed in our computation include the notion of generic and maximal Jordan types of modules, their relation with the Schur functor and brute-force calculation. We believe that the condition $k\not\equiv 1 \pmod{p}$ is unnecessary and further conjecture that $\bigwedge^rD^{(kp-1,1)}$ has interesting variety when $r\equiv p-1 \pmod{p}$. In the next section, we collate the basic knowledge we shall need in this paper.
In Section \ref{S: Green Vertex}, we prove our result Theorem \ref{T: Green} regarding the rank variety and Green vertex. In Section \ref{S: D1}, we compute the generic Jordan type and the maximal Jordan set of the module $\res{D(1)}{E_k}$ (see Theorem \ref{T: jtD}) as a preparation for the discussion in the next section. In the final section, we show that the rank variety of $\res{D(p-1)}{E_k}$ is the algebraic set $V(p_k)$ as in Theorem \ref{T: main thm}. \section{Preliminaries} For the basic knowledge required in this article, we refer the reader to the references \cite{Benson1,Benson2,benson2016,James78,Shafarevich}. Throughout, $\F$ is an algebraically closed field of positive characteristic $p$. \subsection{Modules for finite groups} Let $G$ be a finite group. All the $\F G$-modules we consider in this paper are finite-dimensional over $\F$. We use the notations $\ind{}{}$ and $\res{}{}$ for the induction and restriction of modules for finite groups. The direct sum and tensor product of two $\F G$-modules $M,N$ are denoted by $M\oplus N$ and $M\otimes_\F N$ (or simply $M\otimes N$) respectively. By abuse of notation, the trivial $\F G$-module is also denoted as $\F$ (or $\F_G$ if we wish to emphasize the group $G$). If $N$ is a direct summand of $M$, that is $M\cong N\oplus N'$ for another $\F G$-module $N'$, we write $N\mid M$. Let $M$ be an indecomposable $\F G$-module. A Green vertex $Q$ of $M$ is a minimal subgroup of $G$ such that $M\mid \ind{S}{G}$ for some indecomposable $\F Q$-module $S$. In this case, $S$ is called an $\F Q$-source of $M$ and $Q$ is necessarily a $p$-subgroup. Furthermore, all Green vertices of $M$ are conjugate in $G$. Let $H,K$ be subgroups of $G$ and $N$ be an $\F K$-module. We have Mackey's formula \[\res{\ind{N}{G}}{H}\cong \bigoplus_{HgK} \ind{\res{{}^gN}{H\cap {}^gK}}{H}\] where the sum runs over a complete set of double coset representatives of $(H,K)$ in $G$.
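At the level of dimensions, Mackey's formula for the trivial $\F K$-module asserts that $\sum_{HgK}[H:H\cap {}^gK]=[G:K]$. The following sketch (our own illustration; the group $\sym{3}$ and subgroups are chosen only for concreteness) verifies this count by enumerating double cosets directly.

```python
from itertools import permutations

# Permutations of {0,...,n-1} as tuples; compose(a, b) = "apply b, then a".
def compose(a, b):
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

n = 3
G = set(permutations(range(n)))
e = tuple(range(n))
H = {e, (1, 0, 2)}   # the subgroup generated by the transposition (1 2)
K = {e, (0, 2, 1)}   # the subgroup generated by the transposition (2 3)

# The double cosets HgK partition G.
double_cosets = set()
for g in G:
    double_cosets.add(frozenset(compose(compose(h, g), k) for h in H for k in K))

# Dimension count in Mackey's formula for N = trivial FK-module:
# dim Res_H Ind_K^G F = sum over HgK of [H : H n gKg^{-1}] = [G : K].
total = 0
for D in double_cosets:
    g = next(iter(D))
    gKginv = {compose(compose(g, k), inverse(g)) for k in K}
    total += len(H) // len(H & gKginv)
print(total, len(G) // len(K))  # 3 3
```

The index $[H:H\cap {}^gK]$ does not depend on the chosen representative $g$ of a double coset, since replacing $g$ by $hgk$ conjugates the intersection by $h$.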
For a finite group $G$ and an $\F G$-module $M$, the complexity of $M$ is the rate of growth of a minimal projective resolution of $M$ and is denoted as $c_G(M)$. Moreover, if $\mathscr{E}$ is a set of representatives of the maximal elementary abelian $p$-subgroups of $G$ up to conjugation, we have \begin{equation}\label{Eq: complexity} c_G(M)=\max_{E\in \mathscr{E}}c_E(\res{M}{E}). \end{equation} \subsection{The representations of symmetric groups}\label{SS: sym} For a finite set $A$, we denote by $\sym{A}$ the symmetric group permuting the elements in $A$. For a natural number $n$, we let \[\sym{n}=\sym{\{1,2,\ldots,n\}}.\] Let $k$ be a positive integer and $n=kp$. In this paper, we are interested in modules for the symmetric group $\sym{kp}$ restricted to the elementary abelian $p$-subgroup \[E_k=\langle g_1,\ldots,g_k\rangle\] where, for each $1\leq i\leq k$, $g_i$ is the $p$-cycle $((i-1)p+1,(i-1)p+2,\ldots,ip)$. Notice that, when $p$ is odd, $E_k$ is, up to conjugation, the only elementary abelian $p$-subgroup of $\sym{kp}$ of rank $k$; the remaining elementary abelian $p$-subgroups have rank strictly less than $k$. Notice that \[\mathrm{N}_{\sym{kp}}(E_k)/\mathrm{C}_{\sym{kp}}(E_k)\cong \F_p^\times\wr \sym{k}.\] A partition $\lambda$ of $n$ is a sequence of positive integers $(\lambda_1,\dots,\lambda_k)$ such that $\lambda_1\geq \lambda_2\geq \cdots\geq\lambda_k$ and $\lambda_1+\cdots+\lambda_k=n$. It is a hook partition if $\lambda_2=\cdots=\lambda_k=1$. It is $p$-regular if $\lambda$ does not contain $p$ parts of the same size. The Young diagram of $\lambda$ is the depiction of the set $\{(i,j): 1\leq i\leq k, 1\leq j\leq \lambda_i\}$ and an element in the set (or diagram) is called a node. A $\lambda$-tableau is an array obtained by filling the nodes of the Young diagram of $\lambda$ with the numbers $1,2,\dots,n$ without repeats. We say that a $\lambda$-tableau is standard if the numbers are increasing both in each row from left to right and in each column from top to bottom.
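The standardness condition just described is straightforward to check mechanically; the following short sketch (our own illustration, with a tableau encoded as a list of rows of partition shape) implements it.

```python
def is_standard(t):
    """Check that a tableau (list of rows of partition shape, entries
    1..n with no repeats) is standard: rows increase from left to right
    and columns increase from top to bottom."""
    entries = [x for row in t for x in row]
    n = len(entries)
    if sorted(entries) != list(range(1, n + 1)):
        return False  # not a bijective filling by 1..n
    rows_ok = all(row[j] < row[j + 1]
                  for row in t for j in range(len(row) - 1))
    cols_ok = all(t[i][j] < t[i + 1][j]
                  for i in range(len(t) - 1) for j in range(len(t[i + 1])))
    return rows_ok and cols_ok

print(is_standard([[1, 2, 5], [3, 4]]))  # True
print(is_standard([[1, 4, 5], [3, 2]]))  # False: second row decreases
```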
Given another partition $\mu$ of $n$, $\lambda$ dominates $\mu$ if, for all $r$, \[\sum^r_{i=1}\lambda_i\geq \sum^r_{i=1}\mu_i\] where we let $\lambda_i=0$ if $i>k$ and similarly for $\mu$. In this case, we write $\lambda\unrhd \mu$. Fix a partition $\lambda$ of $n$. The symmetric group $\sym{n}$ has a natural action on the $\lambda$-tableaux by permuting the numbers. The row (respectively, column) stabilizer $R_t$ (respectively, $C_t$) of a $\lambda$-tableau $t$ is the subgroup of $\sym n$ consisting of the elements fixing the rows (respectively, columns) of $t$ setwise. A $\lambda$-tabloid $\{t\}$ is the equivalence class containing $t$ under the equivalence relation defined by: for $\lambda$-tableaux $t,t'$, $t\sim t'$ if and only if $t=\pi t'$ for some $\pi\in R_t$. As such, $\sym n$ acts naturally on the $\lambda$-tabloids by permuting the numbers as well. Let $t$ be a $\lambda$-tableau. We define the polytabloid \[e_t=\sum_{\sigma \in C_t} (\sgn\sigma)\sigma\{t\}.\] We say that $e_t$ is standard if $t$ is standard. The Specht module $S^\lambda$ is the $\F\sym n$-module which is, as a vector space, spanned by the $\lambda$-polytabloids. The set of standard $\lambda$-polytabloids forms a characteristic-free basis for $S^\lambda$ and its dimension is given by the hook formula. In the case when $\lambda$ is $p$-regular, $S^\lambda$ has a simple head $D^\lambda$. Moreover, the set of the modules $D^\lambda$, where $\lambda$ runs over all $p$-regular partitions of $n$, gives a complete set of simple $\F\sym n$-modules up to isomorphism. Let $n>2$ and consider the natural simple $\F\sym{n}$-module $D(1):=D^{(n-1,1)}$. For any $r\leq \dim_\F(D(1))$, define the $r$th exterior power \[D(r):=\bigwedge^rD(1).\] In fact, the surjection $S^{(n-1,1)}\twoheadrightarrow D(1)$ induces a surjection $S^{(n-r,1^r)}\cong \bigwedge^r S^{(n-1,1)}\twoheadrightarrow D(r)$ where the isomorphism can be found in, for example, \cite[Proposition 2.3(a)]{MZ07}.
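The following small example, included for illustration only, makes the combinatorial definitions above concrete.
\begin{eg}
Let $n=3$ and $\lambda=(2,1)$, and let $t$ be the $\lambda$-tableau with rows $(1,2)$ and $(3)$. Then $R_t=\langle (1,2)\rangle$, $C_t=\langle (1,3)\rangle$ and \[e_t=\{t\}-(1,3)\{t\}.\] The standard $\lambda$-tableaux are $t$ and the tableau with rows $(1,3)$ and $(2)$, so $\dim_\F S^{(2,1)}=2$, in agreement with the hook formula $3!/(3\cdot 1\cdot 1)=2$. For $p=3$, the partition $(2,1)$ is $3$-regular and the simple head $D(1)=D^{(2,1)}$ of $S^{(2,1)}$ is one-dimensional.
\end{eg}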
In the case when $p$ is odd, we have that $D(r)$ is again a simple $\F\sym{n}$-module (see \cite{Danz07}). In this case, using \cite[6.3.59]{JK} and \cite{Peel}, we have $D(r)\cong D^{(n-r,1^r)^R}$ where $R$ denotes the $p$-regularisation of a partition; in particular, \[D(p-1)\cong D^{(n-p+1,1^{p-1})}.\] Suppose that $n=kp$ and $p$ is an arbitrary prime. In this case, $S^{(kp-1,1)}$ has composition factors $D^{(kp-1,1)}$ and $D^{(kp)}\cong \F$ from top to bottom. For each $i=1,\ldots,kp$, let $\mathfrak{t}_i$ denote the $(kp-1,1)$-tabloid with $i$ in the second row and, if $i\neq 1$, let $e_i=e_{\mathfrak{t}_i}=\mathfrak{t}_i-\mathfrak{t}_1$. For $\sigma \in \sym {kp}$, we have $\sigma e_i=\mathfrak{t}_{\sigma i}-\mathfrak{t}_{\sigma 1}$. As such, $\{e_i:i=2,\ldots,kp\}$ is a basis for $S^{(kp-1,1)}$ with the trivial submodule $D^{(kp)}$ spanned by $\sum_{i=2}^{kp}e_i$. Let $\e i=e_i+D^{(kp)}$. Thus we obtain a basis \[\{ \e i: i=3,\ldots,kp\}\] for $D(1)=D^{(kp-1,1)}$. In this case, for $1\leq r\leq kp-2$, \[\dim_\F D(r)=\binom{kp-2}{r}.\] In particular, $p\nmid \dim_\F D(r)$ for all $r=1,\ldots,p-2$. \subsection{Algebraic variety} Consider the polynomial ring $R:=\F[x_1,\ldots,x_k]$. For any ideal $I$ of $R$, let $V(I)$ denote the algebraic set of common zeros of the elements of $I$. The affine space $\A^k(\F)$ is a noetherian topological space with the algebraic sets as the closed sets. For the algebraic variety $V(I)$, we let $\dim V(I)$ denote its dimension, that is, the supremum of the lengths $d$ of the chains of distinct irreducible closed subsets of $V(I)$ of the form $V_d\subset \cdots\subset V_1\subset V_0$. Furthermore, by definition, $\dim V(I)$ is equal to the maximum of the dimensions of the irreducible components of $V(I)$. In the case when $I$ is prime, i.e., $V(I)$ is irreducible, $\dim V(I)$ is the Krull dimension of $R/I$. We shall need the following.
\begin{thm}[{\cite[Chapter 1, \S6, Theorem 1]{Shafarevich}}]\label{T: irred variety dim} Suppose that $V$ is irreducible and $W\subseteq V$. If $\dim W=\dim V$ then $W=V$. \end{thm} Suppose that $k\geq 2$ is an integer. Throughout the paper, we denote \begin{align}\label{Eq: pk} p_k:=\sum_{i=1}^{k}\left (x_1\cdots \widehat{x_i}\cdots x_k\right )^{p-1} \end{align} where $\widehat{x_i}$ indicates that $x_i$ does not appear as a factor in the corresponding summand. By convention, we denote $p_1=1$. When $k=2$, we have \[p_2=x_1^{p-1}+x_2^{p-1}=\prod_{\lambda\in I}(x_1-\lambda x_2)\] where $I$ is the complete set of $(p-1)$th roots of $-1$ in $\F$. However, when $k\geq 3$, the polynomial $p_k$ is irreducible, as we now show. \begin{lem}\label{irred} If $k\geq 3$, the polynomial $p_k$ is irreducible and the variety $V(p_k)$ is irreducible of dimension $k-1$. \end{lem} \begin{proof} We argue by induction on $k$. When $k=3$, $p_3=x^{p-1}y^{p-1}+x^{p-1}z^{p-1}+y^{p-1}z^{p-1}$. Suppose that $p_3=gh$ for some $g,h \in \F[x,y,z]$. Since $p_3$ is homogeneous, both $g,h$ are homogeneous, say of degrees $r,s$ respectively. Divide both sides by $z^{2p-2}$. Let $X=\frac{x}{z}$, $Y=\frac{y}{z}$, $P_3(X,Y)=\frac{1}{z^{2p-2}}p_3$, $G(X,Y)=\frac{1}{z^{r}}g$ and $H(X,Y)=\frac{1}{z^s}h$. We have \begin{align*} G(X,Y)H(X,Y)=P_3(X,Y)&=X^{p-1}Y^{p-1}+X^{p-1}+Y^{p-1}\\ &=X^{p-1}(1+Y^{p-1})+Y^{p-1}. \end{align*} Let $G(X,Y)=a_0X^0+a_1X^1+\ldots+a_iX^i$ and $H(X,Y)=b_0X^0+b_1X^1+\ldots+b_{p-1-i}X^{p-1-i}$ such that all the coefficients $a_0,\ldots,a_i,b_0,\ldots,b_{p-1-i}$ are in $\F[Y]$ and, without loss of generality, $0\leq i\leq p-1-i$. In particular, we have $a_ib_{p-1-i}=1+Y^{p-1}$.
Since $\F$ is algebraically closed, $1+Y^{p-1}$ has $p-1$ distinct roots $\lambda_1,\lambda_2,\ldots, \lambda_{p-1}$ in $\F$ and \[1+Y^{p-1}=\prod_{j=1}^{p-1} (Y-\lambda_j).\] Thus, for each $1\leq j\leq p-1$, $Y-\lambda_j$ is either a factor of $a_i$ or a factor of $b_{p-1-i}$ (and not both). As such, we must have either $a_i(\lambda_j)\neq 0$ or $b_{p-1-i}(\lambda_j)\neq 0$. On the other hand, for each $1\leq j\leq p-1$, $P_3(X,\lambda_j)=\lambda_j^{p-1}=-1\in \F$. This implies \[\deg(P_3(X,\lambda_j))=\deg(G(X,\lambda_j))+\deg(H(X,\lambda_j))=0,\] i.e., $\deg(G(X,\lambda_j))=\deg(H(X,\lambda_j))=0$ for any $\lambda_j$. If $Y-\lambda_j$ is a factor of $a_i$ then $b_{p-1-i}(\lambda_j)\neq 0$ and $\deg(H(X,\lambda_j))=p-1-i=0$. So $i=p-1>p-1-i=0$, a contradiction. This shows that $b_{p-1-i}=\frac{1}{\mu}(1+Y^{p-1})$ and $a_i=\mu\neq 0$ in $\F$. But $0=\deg(G(X,\lambda_j))=i$. Hence $G(X,Y)=\mu\neq 0$. In particular, $g(x,y,z)=\mu z^r$. But $z\nmid p_3$. Therefore $r=0$, i.e., $g=\mu\in \F$. Thus, $p_3$ is irreducible. Suppose now that $p_{k-1}(x_1,x_2,\ldots, x_{k-1})\in \F[x_1,x_2,\ldots, x_{k-1}]$ is irreducible for some $k\geq 4$. Let \[D=\F[x_1,x_2,\ldots, x_{k-1}]/(p_{k-1})\] which is an integral domain. For a polynomial $f\in \F[x_1,\ldots,x_k]$, write $f=f_0x_k^0+\cdots+f_mx_k^m$ where $f_0,\ldots,f_m\in \F[x_1,x_2,\ldots, x_{k-1}]$, and set \[\bar{f}=\overline{f_0}x_k^0+\cdots+\overline{f_m}x_k^m\in D[x_k]\] where $\overline{f_i}=f_i+(p_{k-1})\in D$. Suppose that $p_k=gh$ for some $g,h\in \F[x_1,x_2,\ldots, x_k]$. Since $p_k$ is homogeneous, both $g,h$ are homogeneous. Notice that $\overline{g}\overline{h}=\overline{p_k}=\overline{(x_1x_2\cdots x_{k-1})^{p-1}}$ and so \[\deg(\bar{g})+\deg(\bar{h})=\deg(\overline{p_k})=0.\] As such, $\deg(\bar g)=0=\deg(\bar h)$.
Thus, there exist $s,s'\in \F[x_1,x_2,\ldots, x_k]$ and $t,t'\in \F[x_1,x_2,\ldots, x_{k-1}]$ such that \begin{align*} g&=p_{k-1}s+t,\\ h&=p_{k-1}s'+t', \end{align*} where $t=\frac{1}{\mu}x_1^{i_1}x_2^{i_2}\cdots x_{k-1}^{i_{k-1}}$ and $t'=\mu x_1^{p-1-i_1}x_2^{p-1-i_2}\cdots x_{k-1}^{p-1-i_{k-1}}$ for some integers $0\leq i_1,i_2,\ldots,i_{k-1}\leq p-1$ and $0\neq\mu\in \F$. We claim that $\deg(g)=\deg(p_{k-1})+\deg(s)$ if $s\neq 0$. Let $d=\deg(s)$ and $s_d$ be the $d$th homogeneous component of $s$. Then \[p_{k-1}s_d+t=\frac{1}{\mu}x_1^{i_1}x_2^{i_2}\cdots x_{k-1}^{i_{k-1}}+\sum_{i=1}^{k-1}x_1^{p-1}\cdots \widehat{x_i^{p-1}}\cdots x_{k-1}^{p-1}s_d\neq 0\] and therefore $\deg(g)=\deg(p_{k-1})+\deg(s_d)=\deg(p_{k-1})+d$ as required. If $ss'\neq 0$, let $d'=\deg(s')$; we get \[(k-1)(p-1)=\deg(p_k)=\deg(gh)=2\deg(p_{k-1})+d+d'\geq 2(k-2)(p-1),\] i.e., $3\geq k$, a contradiction. Note that $s$ and $s'$ cannot both be zero, for otherwise $p_k=tt'$ would be a monomial. Without loss of generality, let us assume $s'=0$ (and hence $s\neq 0$). Then $g=p_{k-1}s+t$ and $h=t'\in \F[x_1,x_2,\ldots, x_{k-1}]$. On the other hand, \[x_k^{p-1}p_{k-1}+(x_1x_2\cdots x_{k-1})^{p-1}=p_{k-1}st'+tt', \] i.e., $st'=x_k^{p-1}$. Since $t'\in \F[x_1,x_2,\ldots, x_{k-1}]$ divides $x_k^{p-1}$, it is a nonzero constant; so $h=t'=\mu$ and $p_k$ is irreducible. Since $p_k$ is irreducible, $(p_k)$ is prime and the variety $V(p_k)$ is irreducible of dimension $k-1$. \end{proof} \subsection{Rank variety for a module} We now review the rank variety of a module as introduced by Carlson \cite{carlson}. Let $E=\langle g_1,\ldots,g_k\rangle$ be an elementary abelian $p$-group of rank $k$ with the generators in the order $g_1,\ldots,g_k$. For each $i=1,\ldots,k$, denote $X_i=g_i-1\in \F E$. The set $\{X_1+J^2,\dots,X_k+J^2\}$ forms a basis for $J/J^2$ where $J=\rad(\F E)$ is the Jacobson radical of $\F E$. Let $\A^k(\F)$ denote the affine $k$-space over $\F$ consisting of $k$-tuples $\alpha=(\alpha_1,\dots,\alpha_k)$ with each $\alpha_i\in \F$.
For $0\neq \alpha=(\alpha_1,\dots,\alpha_k)\in\A^k(\F)$, let \[X_\alpha=\alpha_1X_1+\cdots+\alpha_kX_k\in \F E\] and $u_\alpha=1+X_\alpha$. Since we are in characteristic $p$, we have $X_\alpha^p=0$ and $u_\alpha^p=1$, that is, $\langle u_\alpha\rangle$ is a cyclic subgroup of $(\F E)^\times$ of order $p$, called a cyclic shifted subgroup of $E$. Let $M$ be an $\F E$-module. We write $\res{M}{\langle u_\alpha\rangle}$ for $M$ regarded as an $\F\langle u_\alpha\rangle$-module. The rank variety $\rk{E}{M}$ of $M$ is defined as \[\rk{E}{M}=\{0\}\cup\{0\neq \alpha\in \A^k(\F): \text{$\res{M}{\langle u_\alpha\rangle}$ is not free}\}.\] {Up to isomorphism, the rank variety $V^\#_E(M)$ is independent of the choice of generators for $E$ (see \cite[Theorem 6.5]{carlson}).} By slight abuse of notation, for an $\F G$-module $M$ and an elementary abelian $p$-subgroup $E$ of $G$, we write $V^\#_E(M)$ for $V^\#_E(\res{M}{E})$. The rank variety of a projective $\F E$-module is described as follows. \begin{lem}[Dade's lemma {\cite[Lemma 11.8]{Dade78}}]\label{L: Dade} Let $M$ be an $\F E$-module. Then $M$ is projective if and only if $V^\#_E(M)=\{0\}$. \end{lem} Given a short exact sequence $0\to M'\to M\to M''\to 0$ of $\F E$-modules, by definition, we have $V^\#_E(M)\subseteq V^\#_E(M')\cup V^\#_E(M'')$. As a consequence, we get the following lemma. \begin{lem}\label{L: filtra} Let $M$ be an $\F E$-module and consider a filtration of $M$ with subquotients $Q_1,\ldots,Q_r$. Then \[V^\#_E(M)\subseteq \bigcup_{i=1}^r V^\#_E(Q_i).\] \end{lem} Moreover, we have the following basic properties regarding rank varieties. \begin{thm}\label{T: basic rank} Let $M$ and $N$ be $\F E$-modules.
Then \begin{enumerate}[(i)] \item $\rk{E}{M}$ is a closed homogeneous subvariety of $\rk{E}{\F}=\A^k(\F)$, \item the dimension of $\rk{E}{M}$ is equal to the complexity $c_E(M)$, \item $\rk{E}{M\oplus N}=\rk{E}{M}\cup\rk{E}{N}$ and $\rk{E}{M\otimes N}=\rk{E}{M}\cap\rk{E}{N}$, \item if $M$ is indecomposable, then the projective variety $\overline{V^\#_E(M)}$ is connected. \end{enumerate} \end{thm} Obviously, when $p\nmid \dim_\F M$, $\res{M}{\langle u_\alpha\rangle}$ is never free and therefore $V^\#_E(M)=\A^k(\F)$. The following theorem demonstrates a more subtle relation. \begin{thm}[{\cite[Corollary 7.7]{carlson} and \cite[Theorem 3.1]{Carlson93}}]\label{T: dimension} Suppose that $E$ has rank $k$, $M$ is an $\F E$-module, and $r,d$ are the dimension and degree of $V^\#_E(M)$ respectively. We have $p^{k-r}\mid \dim_\F M$ and $dp^{k-r}\leq \dim_\F M$. \end{thm} Following \cite[Appendix B, Problem 16]{benson2016}, we make the following definition. \begin{defn} Let $V$ be a homogeneous algebraic variety in $\A^k(\F)$. Define \[d_V=\min\{\dim_\F M:V^\#_E(M)\cong V\}\] where $E$ is an elementary abelian $p$-group of rank $k$ and $M$ runs over the $\F E$-modules which are finite-dimensional over $\F$. \end{defn} The set in the definition is not empty by \cite{Carlson84}, and Theorem \ref{T: dimension} gives a lower bound for $d_V$. {In the rank 1 case, the description of $d_V$ is simple. If $\dim V=0$ then $d_V=p$. If $\dim V=1$ then $d_V=1$. To address the rank 2 case, we need the following lemma. } \begin{lem}\label{L: linear space} Let $E$ be an elementary abelian $p$-group of rank $k$ and $W$ be a linear subspace of $\F^k$ of dimension $r$. There is an $\F E$-module $M$ of dimension $p^{k-r}$ such that $V^\#_E(M)=W$. In particular, we have $d_W=p^{k-r}$. \end{lem} \begin{proof} The existence of such a module has been shown in \cite[Example 12.1.2]{benson2016}. As such, $d_W$ is bounded above by $p^{k-r}$. Together with Theorem \ref{T: dimension}, we have the equality.
\end{proof} As such, we get the following result for $d_V$ in the rank 2 case. {\begin{thm} Suppose that $k=2$. Let $V$ be a homogeneous algebraic variety in $\A^2(\F)$ of dimension $r$. Then \[d_V=\left \{\begin{array}{ll}1&\text{if $r=2$,}\\ dp&\text{if $r=1$ and $d=\deg(\overline{V})$,}\\ p^2&\text{if $r=0$.}\end{array}\right .\] \end{thm} \begin{proof} Suppose that $r=2$. We have $V=\A^2(\F)$ and $d_V=1$. Suppose that $r=1$ and $d=\deg(\overline{V})$. We have that $\overline{V}$ is a union of $d$ distinct points $v_1,\ldots,v_d$. For each $1\leq i\leq d$, by Lemma \ref{L: linear space}, there is an $\F E$-module $M_i$ of dimension $p$ such that $V^\#_E(M_i)$ is the line passing through $v_i$. By Theorem \ref{T: basic rank}(iii), $V^\#_E(\bigoplus_{i=1}^d M_i)=V$. As such, $d_V$ is bounded above by $dp$. Using Theorem \ref{T: dimension}, we have $d_V=dp$. Suppose now that $r=0$. We get $V^\#_E(\F E)=\{0\}=V$. Therefore $d_V=p^2$. \end{proof}} Let $G$ be a finite group, $E$ be an elementary abelian $p$-subgroup of $G$ and $M$ be an $\F G$-module. There is a natural action of $\Nom_G(E)$ on $V^{\#}_E(M)$ given by the following. If $n \in \Nom_G(E)$ is such that $ng_in^{-1} = \prod_{j=1}^k g_j^{a_{ij}}$ for each $i$, and $\alpha = (\alpha_1,\dotsc, \alpha_k) \in \A^k(\F)$, then \[n \cdot \alpha = \left (\sum_{j=1}^k a_{1j}\alpha_j, \dotsc, \sum_{j=1}^k a_{kj}\alpha_j\right ).\] In this paper, we are interested in the case when $G=\sym{kp}$ and $E_k=\langle g_1,\ldots,g_k\rangle$ where the $g_i$'s are the $p$-cycles as in \S\ref{SS: sym}. In this case, the action on the rank variety can be translated as follows. \begin{lem}\label{L: symmetry} Let $M$ be an $\F\sym{kp}$-module.
We have $\mathrm{N}_{\sym{kp}}(E_k)/\mathrm{C}_{\sym{kp}}(E_k)\cong \F_p^\times\wr \sym{k}$ and, for $\gamma\in \F_p^\times$ in the $i$th component, $\sigma\in\sym{k}$ and $\alpha\in V^\#_{E_k}(M)$, \begin{align*} \gamma\cdot \alpha&=(\alpha_1,\ldots,\gamma\alpha_i,\ldots,\alpha_k),\\ \sigma\cdot \alpha&=(\alpha_{\sigma^{-1}(1)},\alpha_{\sigma^{-1}(2)},\ldots,\alpha_{\sigma^{-1}(k)}). \end{align*} \end{lem} \subsection{Maximal and generic Jordan types} The representation theory of the cyclic group of order $p$, {denoted by $C_p$}, is well known. Up to isomorphism, there is a unique simple $\F C_p$-module, and the indecomposable $\F C_p$-modules are $J_1,J_2,\ldots,J_p$ where $\dim_\F J_i=i$, each $J_i$ is uniserial and $J_p$ is projective. By abuse of notation, we also write $J_i$ for the Jordan block over $\F$ of size $i\times i$ with eigenvalue 0. As such, for $1\leq r\leq i\leq p$, we have \[\rank(J_i^r)=i-r. \] Let $E$ be an elementary abelian $p$-group of rank $k$, $M$ be an $\F E$-module and $B$ be a basis for $M$. For each element $x\in J=\rad(\F E)$, since the matrix representation $[x]_B$ of $x$ with respect to $B$ is nilpotent, $[x]_B$ is similar to a diagonal sum of Jordan blocks. Suppose that, for each $1\leq i\leq p$, $J_i$ appears with multiplicity $a_i\in\N_0$. In this case, we say that $x$ has Jordan type \[[p]^{a_p}[p-1]^{a_{p-1}}\cdots[1]^{a_1}.\] The dominance order $\unrhd$ of partitions gives rise to a partial order on the Jordan types, that is, $[p]^{a_p}\cdots[1]^{a_1}\succeq [p]^{b_p}\cdots[1]^{b_1}$ if and only if $(p^{a_p},\dots,1^{a_1})\unrhd (p^{b_p},\dots,1^{b_1})$ as partitions. Let $x\in J$. We say that $x$ has maximal Jordan type on $M$ if the Jordan type of $[x]_B$ is maximal, with respect to the dominance order, among the Jordan types of the elements of $J$. For the notion of the generic Jordan type of $M$, we refer the reader to \cite[Chapter 3]{benson2016}.
Roughly speaking, the generic Jordan type of $M$ is the Jordan type of $u_\alpha-1$ on $M$ for almost all points $\alpha\in \A^k(\F)$. Following the work \cite{FPS}, we may regard the set of elements of maximal Jordan type {(denoted as $U_{\max}(M)$ in \cite{benson2016})} as \[\U{E}{M}\subseteq (J/J^2)\backslash\{0\}\] and, as such, as a subset of $\A^k(\F)$. By slight abuse of notation, for an $\F G$-module $M$ and an elementary abelian $p$-subgroup $E$ of $G$, we also write $\U{E}{M}$ for $\U{E}{\res{M}{E}}$. For our computations in this paper, we require the following connection between the maximal and generic Jordan types. \begin{thm}[{\cite[\S4]{FPS}}]\label{T: generic eq maximal} Let $E$ be an elementary abelian $p$-group and $M$ be an $\F E$-module. All elements of maximal Jordan type on $M$ have the same Jordan type, and it coincides with the generic Jordan type of $M$. \end{thm} As a consequence, we have the following corollary. \begin{cor}\label{C: complement} Suppose that the elementary abelian $p$-group $E$ has rank $k$ and the generic Jordan type of $M$ is free. We have $V^\#_E(M)=\U{E}{M}^c$, the complement of $\U{E}{M}$ in $\A^k(\F)$. \end{cor} \begin{proof} By Theorem \ref{T: generic eq maximal}, the maximal Jordan type of $M$ is $[p]^m$ for some $m$. For any $\alpha\in \A^k(\F)$, $\res{M}{\langle u_\alpha\rangle}$ is not free if and only if $u_\alpha-1$ has Jordan type strictly less than $[p]^m$ in the dominance order. Therefore, $\alpha\in V^\#_E(M)$ if and only if $\alpha\not\in\U{E}{M}$. \end{proof} To conclude our preliminaries, we record the following result, which is a special case of \cite[Corollary~4.6.2]{benson2016} since the $r$th exterior power coincides with the Schur functor labelled by $(1^r)$. \begin{prop}\label{P: Umax} If $r<p$, the generic Jordan type of $\bigwedge^r(M)$ is $\bigwedge^r$ of the generic Jordan type of $M$, and $\U{E}{\bigwedge^r(M)}\supseteq\U{E}{M}$.
\end{prop} \section{A result on the Green vertex}\label{S: Green Vertex} In this section, we prove a result concerning Green vertices. In the literature, a sufficient condition for a Green vertex $Q$ of an indecomposable $\F G$-module $M$ to contain an elementary abelian $p$-subgroup $E$ of rank $k$ is $V^\#_E(M)=\F^k$. Our theorem (see Theorem \ref{T: Green}) generalises this result. We begin with a lemma. \begin{lem}\label{L: rkinduction} Let $E=\langle g_1,g_2,\dots,g_k\rangle$ be an elementary abelian $p$-group of rank $k$, $E'=\langle g_1^{w_{11}}\cdots g_k^{w_{1k}},\dots,g_1^{w_{s1}}\cdots g_k^{w_{sk}}\rangle$ be a subgroup of $E$ of rank $s$ where $0\leq w_{ij}\leq p-1$ for all admissible $i,j$ and $M$ be an $\F E'$-module. Then $V_E^\#(\ind{M}{E})$ lies in the subspace of $\A^k(\F)$ given by \[W=\text{span}\{ (w_{11},\dots ,w_{1k}),(w_{21},\dots ,w_{2k}),\dots, (w_{s1},\dots ,w_{sk})\}.\] In particular, if $E'$ acts trivially on $M$, then $V^\#_E(\ind{M}{E})=W$. \end{lem} \begin{proof} Since $E'$ is a $p$-group, all composition factors of $M$ are copies of $\F_{E'}$. Therefore, $\ind{M}{E}$ has a filtration with all factors copies of $\ind{\F_{E'}}{E}$. By Lemma \ref{L: filtra}, we have $V^\#_E(\ind{M}{E})\subseteq V^\#_E(\ind{\F_{E'}}{E})$. As such, we only need to prove that $V^\#_E(\ind{\F_{E'}}{E})=W$. For $1\leq i\leq s$, let $w_i=(w_{i1},\dots ,w_{ik})$, $f_i=g_1^{w_{i1}}\cdots g_k^{w_{ik}}$ and $\{e_i:1\leq i\leq k\}$ be the standard basis for $\A^k(\F)$. Let $J=\rad(\F E)$ and $E''=\langle g_{j_1},\ldots,g_{j_t}\rangle$ be a subgroup of $E$ such that $s+t=k$, $1\leq j_1<\cdots<j_t\leq k$ and $E=E'\times E''$. Let $0\neq \alpha\in \A^k(\F)$ and suppose that \begin{align*}\label{Eq: alpha} \alpha=\sum_{i=1}^s a_iw_i+\sum_{i=1}^t b_ie_{j_i}=\sum_{i=1}^sa_i\left (\sum_{j=1}^k w_{ij}e_j\right )+\sum_{i=1}^tb_ie_{j_i} \end{align*} where $a_i,b_i\in\F$.
For any $\beta_1,\dots,\beta_k\in \F_p$, we have \[\prod_{i=1}^k g_i^{\beta_i}=\prod_{i=1}^k (X_i+1)^{\beta_i} \equiv \sum_{i=1}^k \beta_iX_i +1 \pmod{J^2}. \] Consequently, \begin{align*} u_\alpha-1&=\sum_{i=1}^sa_i\left (\sum_{j=1}^k w_{ij}X_j\right )+\sum_{i=1}^t b_iX_{j_i} \nonumber\\ &\equiv\sum_{i=1}^sa_i\left (\prod_{j=1}^k g_j^{w_{ij}}-1\right )+\sum_{i=1}^tb_i(g_{j_i}-1)\pmod{J^2}\nonumber\\ &=\sum_{i=1}^sa_i(f_i-1)+\sum_{i=1}^tb_i(g_{j_i}-1). \label{Eq: 1} \end{align*} Let $z:=\sum_{i=1}^tb_i(g_{j_i}-1)$. Notice that $z=0$ if and only if $\alpha\in W$. Also, $N:=\ind{\F_{E'}}{E}\cong \F E''$ where $E'$ acts trivially and $E''$ acts regularly. Therefore, $u_\alpha-1$ acts as $z$ on $N$. As such, $z\neq 0$ if and only if $u_\alpha$ acts freely on $N$. So $V^\#_E(N)=W$. \end{proof} In view of Lemma \ref{L: rkinduction}, we have the following definition. {\begin{defn} For the vector space $V=\F^r$ over $\F$, a subspace $W$ of $V$ is called a base subspace if $W$ has a basis consisting of vectors of the form $(\lambda_1,\ldots,\lambda_r)\in \F_p^r$ where $\F_p$ is the prime subfield of $\F$. \end{defn}} {For example, $\F^r$ is a base subspace of $\F^r$ with the standard basis. The total number of base subspaces of $\F^r$ is obviously finite.} \begin{thm}\label{T: Green} Let $M$ be an indecomposable $\F G$-module and $E$ be an elementary abelian $p$-subgroup of $G$ of rank $r$. If $V^\#_E(\res{M}{E})$ is not contained in the union of proper base subspaces of $\F^r$ then $E$ is a subgroup of a Green vertex of $M$. \end{thm} \begin{proof} Let $Q$ be a Green vertex and $S$ be an $\F Q$-source of $M$. By Mackey's formula, we have \[\res{M}{E}\left | \bigoplus_{EgQ} \ind{\res{{}^gS}{E\cap {}^gQ}}{E}\right .\] where $g$ runs over a complete set of double coset representatives of $(E,Q)$ in $G$. Consequently, $V^\#_E(\res{M}{E})\subseteq \bigcup_{EgQ} V^\#_E(\ind{\res{{}^gS}{E\cap {}^gQ}}{E})$ by Theorem \ref{T: basic rank}(iii). If $E$ is not contained in any conjugate of $Q$, then each $E\cap {}^gQ$ is a proper subgroup of $E$.
Hence, by Lemma \ref{L: rkinduction}, each $V^\#_E(\ind{\res{{}^gS}{E\cap {}^gQ}}{E})$ is contained in a proper base subspace, so $V^\#_E(\res{M}{E})$ is contained in a union of proper base subspaces, a contradiction. \end{proof} \begin{eg} Let $p=3$, $E=\langle g_1,g_2\rangle\cong C_3\times C_3$ and consider the $\F E$-module $M_{\lambda,\mu}$ given in {\cite[Example 1.13.1]{benson2016}} where \begin{align*} X_1&\mapsto \begin{pmatrix} 0&0&0\\ 1&0&0\\ 0&1&0 \end{pmatrix},& X_2&\mapsto \begin{pmatrix} 0&0&0\\ \lambda&0&0\\ \mu&\lambda&0 \end{pmatrix}. \end{align*} For $(\alpha_1,\alpha_2)\in\A^2(\F)$, we have \[X_\alpha\mapsto \begin{pmatrix} 0&0&0\\ \alpha_1+\alpha_2\lambda&0&0\\ \alpha_2\mu&\alpha_1+\alpha_2\lambda&0 \end{pmatrix}=:U_\alpha.\] Clearly, $\alpha\in V^\#_E(M_{\lambda,\mu})$ if and only if $\alpha_1+\alpha_2\lambda=0$, i.e., $V^\#_E(M_{\lambda,\mu})$ is the line passing through $(-\lambda,1)$. If $\lambda\not\in \F_3$, then $(-\lambda,1)$ is a point of the rank variety which does not belong to the union of the base lines. In this case, by Theorem \ref{T: Green}, $E$ is the Green vertex of $M_{\lambda,\mu}$. The converse, however, fails even for this example. Let $\lambda=0$ and $\mu\neq 0$. {We claim that $E$ is the Green vertex of $M_{0,\mu}$ but $V^\#_E(M_{0,\mu})$ is contained in the union of some proper base lines.} It is clear that $V^\#_E(M_{0,\mu})$ is the line passing through $(0,1)$. Let $E'$ be a Green vertex of $M_{0,\mu}$. Since $M_{0,\mu}$ is not projective, $E'>\{1\}$. Suppose that $E'\neq E$, i.e., $E'=\langle g_1^ag_2^b\rangle$ for some $0\leq a,b\leq 2$. Since $M_{0,\mu}\mid \ind{\res{M_{0,\mu}}{E'}}{E}$, by Lemma \ref{L: rkinduction}, we have \[V^\#_E(M_{0,\mu})\subseteq V^\#_E(\ind{\res{M_{0,\mu}}{E'}}{E})\subseteq \mathrm{span}\{(a,b)\}\] and hence $a=0$ and, without loss of generality, $b=1$. However, $\res{M_{0,\mu}}{\langle g_2\rangle}=U_1\oplus U_2$ where $\dim_\F U_i=i$ and the $U_i$'s are indecomposable.
By Green's indecomposability theorem (see \cite[Theorem 3.13.3]{Benson1}) and the Krull--Schmidt theorem, for \[M_{0,\mu}\mid \ind{\res{M_{0,\mu}}{E'}}{E}\cong \ind{U_1}{E}\oplus \ind{U_2}{E},\] we must have $\ind{U_1}{E}\cong M_{0,\mu}$. However, as $U_1$ is the trivial $\F E'$-module, \[\res{\ind{U_1}{E}}{E'}\cong \F\oplus\F\oplus \F \not\cong U_1\oplus U_2\cong \res{M_{0,\mu}}{E'}.\] \end{eg} \section{The natural simple module $D(1)$}\label{S: D1} Let $n=kp>2$ and consider the natural simple module $D(1)=D^{(kp-1,1)}$. As in \S\ref{SS: sym}, let $E_k$ be the elementary abelian $p$-subgroup of $\sym{kp}$ generated by the $k$ disjoint $p$-cycles $g_1,\ldots,g_k$. Our main result describes the maximal (or generic) Jordan type of $\res{D(1)}{E_k}$ for $p\geq 3$ {and the corresponding maximal Jordan set}. We note that the $p=2$ case has been dealt with in \cite{Jiang21}. Although our method in this section also works in that case, we leave it out to keep the presentation neater. We assume $p\geq 3$ henceforth. To begin, we recall the $(kp-1,1)$-tabloids $\mathfrak{t}_i$, the elements $e_i=\mathfrak{t}_i-\mathfrak{t}_1$ (for $i\neq 1$) and the basis $\{ \e i: i=3,\ldots,kp\}$ we have described for $D(1)$, where $\e i=e_i+D^{(n)}$ and $D^{(n)}$ is identified with the trivial submodule of $S^{(n-1,1)}$ spanned by $\sum_{i= 2}^{kp}e_i$. Similarly, we write $\t i$ for $\mathfrak{t}_i+D^{(n)}$. This basis is not particularly convenient for our computations. As such, our first step is to pick another basis for $D(1)$ which interacts well with the action of $E_k$. For $2\leq i\leq k$ and $1\leq r\leq p$, let $e_{i,r}=e_{(i-1)p+r}$ and $\mathfrak{t}_{i,r}=\mathfrak{t}_{(i-1)p+r}$. Thus, $\e {i,r}=\t {i,r}-\t 1$ and $g_i\t {i,r}=\t {i,g_1r}$, where $g_1r$ denotes the image of $r$ under the $p$-cycle $g_1=(1,2,\ldots,p)$. Recall that, for each $i=1,\ldots,k$, we have $X_i=g_i-1$. \begin{defn} Let $b_1=\e 3$ if $p=3$ and $b_1=\e 3 -\e 4$ if $p\geq 5$, and let $\B_1=\{ b_1,X_1^1b_1,\ldots,X_1^{p-3}b_1\}$.
For $2\leq i\leq k$, let $b_i=\e {i,1} -\e 3$ and $\B_i=\{ b_i,X_i^1b_i,\ldots,X_i^{p-1}b_i\}$. Let \[\B=\bigcup_{i=1}^{k} \B_i.\] \end{defn} The next two lemmas describe the structure of $\res{D(1)}{E_k}$. \begin{lem}\label{L: D(1) basis} The set $\B$ is a basis for $D(1)$. Moreover, \begin{enumerate}[(i)] \item for $1\leq r \leq p-1$ and $2\leq i\leq k$, we have \[ X_i^rb_i=\sum_{s=1}^{r+1} (-1)^{r-s+1} \binom{r}{s-1} \e {i,s};\] \item if $p\geq 5$, then for $1\leq r\leq p-4$, we have \begin{align*} X_1^rb_1&=\sum_{s=1}^{r+2} (-1)^{r-s+3} \binom{r+1}{s-1} \e {s+2},\\ X_1^{p-3}b_1&=\sum_{s=1}^{p-2}s\e {s+2}. \end{align*} \end{enumerate} \end{lem} \begin{proof} We show the equations in the statement by induction on $r$. Suppose that $2\leq i\leq k$. Notice that \[b_i=\e {i,1} -\e 3=(\t {i,1}-\t 1)-(\t 3-\t 1)=\t {i,1}-\t 3,\] and we also have \begin{align*} \sum_{s=1}^{r+1} (-1)^{r-s+1} \binom{r}{s-1} \e {i,s} &=\sum_{s=1}^{r+1} (-1)^{r-s+1} \binom{r}{s-1}(\t {i,s}-\t 1)\\ &=\left(\sum_{s=1}^{r+1} (-1)^{r-s+1} \binom{r}{s-1}\t {i,s}\right )-\left (\sum_{s=1}^{r+1} (-1)^{r-s+1} \binom{r}{s-1}\right )\t 1\\ &=\sum_{s=1}^{r+1} (-1)^{r-s+1} \binom{r}{s-1}\t {i,s} \end{align*} since, for $r\geq 1$, the alternating sum $\sum_{s=1}^{r+1} (-1)^{r-s+1}\binom{r}{s-1}$ vanishes. When $r=1$, we have \begin{align*} X_ib_i&=(g_i-1)(\t {i,1}-\t 3) =(g_i\t {i,1}-g_i\t 3)-(\t {i,1}-\t 3)=\t {i,g_11}-\t 3-\t {i,1}+\t 3\\ &=\t {i,2}-\t {i,1}=-\e {i,1}+\e {i,2}. \end{align*} Suppose that part (i) holds for some $1\leq r\leq p-2$.
We have \begin{align*} X_i(X_i^rb_i)&=(g_i-1)\left (\sum_{s=1}^{r+1} (-1)^{r-s+1} \binom{r}{s-1} \t {i,s}\right )\\ &=\left (\sum_{s=1}^{r+1} (-1)^{r-s+1} \binom{r}{s-1} g_i\t {i,s}\right )-\left (\sum_{s=1}^{r+1} (-1)^{r-s+1} \binom{r}{s-1} \t {i,s}\right )\\ &=\sum_{s=1}^{r+1} (-1)^{r-s+1} \binom{r}{s-1} \t {i,s+1}+\sum_{s=1}^{r+1} (-1)^{r+1-s+1} \binom{r}{s-1} \t {i,s}\\ &=\sum_{s=2}^{r+2} (-1)^{r-s+2} \binom{r}{s-2} \t {i,s}+\sum_{s=1}^{r+1} (-1)^{r+1-s+1} \binom{r}{s-1} \t {i,s}\\ &=(-1)^{r+1}\t {i,1}+\t {i,r+2}+\sum_{s=2}^{r+1} (-1)^{r-s+2} \left (\binom{r}{s-2}+\binom{r}{s-1}\right ) \t {i,s}\\ &=\sum_{s=1}^{r+2} (-1)^{r-s+2} \binom{r+1}{s-1} \t {i,s}. \end{align*} For part (ii), suppose $p\geq 5$. If $r=1$, then \begin{align*} X_1b_1&=(g_1-1)(\e 3-\e 4)=(g_1-1)(\t 3-\t 4)=(\t 4-\t 5)-(\t 3-\t 4)\\ &=-\t 3+2\t 4-\t 5=-\e 3+2\e 4-\e 5. \end{align*} For $1\leq r\leq p-5$, we have \begin{align*} X_1(X_1^rb_1)&=(g_1-1)\left (\sum_{s=1}^{r+2} (-1)^{r-s+3} \binom{r+1}{s-1} \t {s+2}\right )\\ &=\sum_{s=1}^{r+2} (-1)^{r-s+3} \binom{r+1}{s-1} g_1\t {s+2}-\sum_{s=1}^{r+2} (-1)^{r-s+3} \binom{r+1}{s-1} \t {s+2}\\ &=\sum_{s=1}^{r+2} (-1)^{r-s+3} \binom{r+1}{s-1}\t {s+3}+\sum_{s=1}^{r+2} (-1)^{r+1-s+3} \binom{r+1}{s-1} \t {s+2}\\ &=\sum_{s=2}^{r+3} (-1)^{r+1-s+3} \binom{r+1}{s-2}\t {s+2}+\sum_{s=1}^{r+2} (-1)^{r+1-s+3} \binom{r+1}{s-1} \t {s+2}\\ &=(-1)^{r+3}\t {3}+(-1)\t {r+5}+\sum_{s=2}^{r+2} (-1)^{r+1-s+3} \left (\binom{r+1}{s-2}+\binom{r+1}{s-1}\right ) \t {s+2}\\ &=\sum_{s=1}^{r+3} (-1)^{r+1-s+3} \binom{r+2}{s-1} \e {s+2}. \end{align*} Lastly, when $r=p-4$, we have \begin{align*} X_1(X_1^{p-4}b_1)&=(g_1-1)\left (\sum_{s=1}^{p-2} (-1)^{p-1-s} \binom{p-3}{s-1} \t {s+2}\right )\\ &=\left (\sum_{s=1}^{p-2} (-1)^{s+1} \binom{p-2}{s-1} \t {s+2}\right )-g_1\t p\\ &=\left (\sum_{s=1}^{p-2} s \t {s+2}\right )- \t 1 =\t 2+2\t 3+\ldots+(p-1)\t p+\sum_{s=p+1}^{kp}\t s\\ &=\e 3+2\e 4+\ldots+(p-2)\e p. 
\end{align*} {Given the description above, it is clear that, for $2\leq i\leq k$, $\B_i$ is linearly independent and $\mathrm{span}(\B_i)+ \mathrm{span}\{\e 3\}=\mathrm{span}\{\e 3,\e{i,1},\dots,\e{i,p}\}$ as vector spaces. Furthermore, $\B$ is a basis if and only if the set $\{b_1,\dots, X_1^{p-4}b_1,\sum_{i=3}^{p}(i-2)\e {i}\}$ is linearly independent. We have $(b_1,\dots, X_1^{p-4}b_1,\sum_{i=3}^{p}(i-2)\e {i})^\top=A(\e 3,\e 4,\dots, \e p)^\top$ where $\top$ denotes the transpose of a matrix and $A$ is the $(p-2)\times (p-2)$ matrix where, for $1\leq i,j\leq p-2$, \[A_{ij}=(-1)^{i-j+2}\binom{i}{j-1}. \] Thus, the set $\{b_1,\dots, X_1^{p-4}b_1,\sum_{i=3}^{p}(i-2)\e {i}\}$ is linearly independent if and only if $\rank A=p-2$. For $1\leq i\leq p-3$, let $T_i$ represent the elementary row operation adding $\sum_{l=1}^{i}(-1)^{i-l+2}\binom{i+1}{l}R_l$ to the $(i+1)$th row, where $R_l$ denotes the $l$th row of the current matrix. Performing the row operations in the order $T_1, T_2, \dots, T_{p-3}$ on $A$, we obtain the following matrix $A'$: \[A'=\begin{pmatrix} 1&-1&\\ 1&0&-1&\\ \vdots&\vdots&\ddots&\ddots&\\ 1&0&\cdots&0&-1\\ 1&0&\cdots&0&0 \end{pmatrix}.\] Thus, $\rank A=\rank {A'}=p-2$. We conclude that $\B$ is a basis for $D(1)$. } \end{proof} \begin{lem}\label{L: action} Let $1\leq j\leq k$. We have \begin{enumerate}[(i)] \item for $2\leq i\leq k$ and $0\leq r\leq p-1$, \[ X_j(X_i^rb_i)=\left \{\begin{array}{ll} b_1&\text{if $j=1$ and $r=0$,}\\ 0 &\text{if $r=p-1$, or $j\neq i$ and $(j,r)\neq (1,0)$.}\end{array}\right . \] \item for $0\leq r\leq p-3$, \[ X_j(X_1^rb_1)=\left \{\begin{array}{ll} \sum_{i=2}^{k} X_i^{p-1}b_i &\text{if $j=1$ and $r=p-3$,}\\ 0 &\text{if $j\neq 1$.}\end{array}\right . \] \end{enumerate} In particular, $X_iX_j$ annihilates $D(1)$ for all $1\leq i\neq j\leq k$. \end{lem} \begin{proof} {Recall that $b_1= \t 3-\t 1$ if $p=3$ and $b_1=\t 3-\t 4$ if $p\geq 5$, and $b_i=\t {i,1}-\t 3$ for $2\leq i\leq k$.} Suppose that $2\leq i\leq k$.
If $p=3$, we have \begin{align*} X_1b_i&=(g_1-1)(\t {i,1} -\t 3)=(\t {i,1} -\t 1)-(\t {i,1}-\t {3})=\t 3-\t 1=b_1. \end{align*} If $p\geq 5$, we have \[ X_1b_i=(g_1-1)(\t {i,1} -\t 3)=(\t {i,1} -\t 4)-(\t {i,1}-\t {3})=\e 3-\e 4=b_1.\] For $2\leq i \leq k$, we have \[X_i(X_i^{p-1}b_i)=X_i^{p}b_i=0b_i=0. \] For the remaining cases of part (i), note that, for $r\geq 1$, $X_i^rb_i$ is a linear combination of the $\t {i,s}$'s, while $b_i=\t {i,1}-\t 3$; all of these tabloids are fixed by $g_j$ whenever $2\leq j\neq i\leq k$, and the $\t {i,s}$'s are also fixed by $g_1$. Hence $X_{j}(X_i^rb_i)=0$ in these cases. Thus, the equalities in part (i) hold. For part (ii), we only show $X_1(X_1^{p-3}b_1)=\sum_{i=2}^k X_i^{p-1}b_i$; the rest are very similar to the cases in part (i). If $p=3$, we have \begin{align*} X_1 b_1&=(g_1-1)(\t 3-\t 1)\\ &=(\t 1-\t 2)-(\t 3-\t 1)\\ &=-\e 2-\e 3\\ &=\sum_{s=4}^{3k} \e s=\sum_{i=2}^k X_i^{2}b_i. \end{align*} If $p\geq 5$, we have \begin{align*} X_1(X_1^{p-3}b_1)&=(g_1-1)\left (\t 2+2\t 3+\ldots+(p-1)\t p\right )\\ &=\t 3+2\t 4+\ldots+(p-2)\t p-\t 1-(\t 2+2\t 3+\ldots+(p-1)\t p)\\ &=-\t 2-\t 3-\ldots -\t p +\left (\sum_{s=2}^{kp}\t s\right )\\ &=\sum_{s=p+1}^{kp}\t s=\sum_{i=2}^k X_i^{p-1}b_i. \end{align*} \end{proof} The following diagram depicts the actions of the $X_j$'s on the basis $\B$ of $D(1)$. The blue arrows represent the action of $X_1$ and the remaining arrows represent the respective actions of the $X_j$'s for $j\neq 1$. The $+$ in the diagram indicates that $X_1^{p-2}b_1=\sum^k_{j=2}X_j^{p-1}b_j$.
\[\begin{tikzpicture} \draw[dashed] (0,0) ellipse (4cm and .5cm); \draw[dashed] (0,-6) ellipse (4cm and .5cm); \node (b2) at (-4,0) [circle,draw,fill=black,scale=.5] {}; \node (b21) at (-5,-1.1) [circle,draw,fill=black,scale=.5] {}; \node (b22) at (-5,-2.1) [circle,draw,fill=black,scale=.5] {}; \node at (-5,-3.1) {$\vdots$}; \node (b23) at (-5,-4.1) [circle,draw,fill=black,scale=.5] {}; \node (b24) at (-5,-5.1) [circle,draw,fill=black,scale=.5] {}; \node (b25) at (-4,-6) [circle,draw,fill=black,scale=.5] {}; \node (b3) at (-2,-0.45) [circle,draw,fill=black,scale=.5] {}; \node (b31) at (-3,-1.3) [circle,draw,fill=black,scale=.5] {}; \node (b32) at (-3,-2.3) [circle,draw,fill=black,scale=.5] {}; \node at (-3,-3.3) {$\vdots$}; \node (b33) at (-3,-4.3) [circle,draw,fill=black,scale=.5] {}; \node (b34) at (-3,-5.3) [circle,draw,fill=black,scale=.5] {}; \node (b35) at (-2,-6.5) [circle,draw,fill=black,scale=.5] {}; \node (b1) at (0,-1.5) [circle,draw,fill=black,scale=.5] {}; \node (b11) at (0,-2.5) [circle,draw,fill=black,scale=.5] {}; \node at (0,-3.25) {$\vdots$}; \node (b13) at (0,-4) [circle,draw,fill=black,scale=.5] {}; \node (b14) at (0,-5) [circle,draw,fill=black,scale=.5] {}; \node (b4) at (2,-0.45) [circle,draw,fill=black,scale=.5] {}; \node (b41) at (3,-1.3) [circle,draw,fill=black,scale=.5] {}; \node (b42) at (3,-2.3) [circle,draw,fill=black,scale=.5] {}; \node at (3,-3.3) {$\vdots$}; \node (b43) at (3,-4.3) [circle,draw,fill=black,scale=.5] {}; \node (b44) at (3,-5.3) [circle,draw,fill=black,scale=.5] {}; \node (b45) at (2,-6.5) [circle,draw,fill=black,scale=.5] {}; \draw[-angle 45,thick,color=blue] (b3) -- (b1); \draw[-angle 45] (b3) -- (b31); \draw[-angle 45] (b31) -- (b32); \draw[-angle 45] (b33) -- (b34); \draw[-angle 45] (b34) -- (b35); \draw[-angle 45,thick,color=blue] (b2) -- (b1); \draw[-angle 45] (b2) -- (b21); \draw[-angle 45] (b21) -- (b22); \draw[-angle 45] (b23) -- (b24); \draw[-angle 45] (b24) -- (b25); \draw[-angle 45,thick,color=blue] (b1) -- 
(b11); \draw[-angle 45,thick,color=blue] (b13) -- (b14); \draw[-angle 45,thick,color=blue] (b14) -- (b25); \draw[-angle 45,thick,color=blue] (b14) -- (b35); \draw[-angle 45,thick,color=blue] (b14) -- (b45); \draw[-angle 45,thick,color=blue] (b4) -- (b1); \draw[-angle 45] (b4) -- (b41); \draw[-angle 45] (b41) -- (b42); \draw[-angle 45] (b43) -- (b44); \draw[-angle 45] (b44) -- (b45); \node at (-4,0.5) {$b_3$}; \node at (-5.5,-0.8) {$X_3b_3$}; \node at (-5.6,-1.8) {$X_3^2b_3$}; \node at (-4,-6.5) {$X_3^{p-1}b_3$}; \node at (-2,0) {$b_2$}; \node at (-3.5,-1) {$X_2b_2$}; \node at (-3.6,-2) {$X_2^2b_2$}; \node at (-2,-7) {$X_2^{p-1}b_2$}; \node at (2,0) {$b_k$}; \node at (3.5,-1) {$X_kb_k$}; \node at (3.6,-2) {$X_k^2b_k$}; \node at (2,-7) {$X_k^{p-1}b_k$}; \node at (0.5,-1.5) {$b_1$}; \node at (0.6,-2.5) {$X_1b_1$}; \node at (0.9,-5) {$X_1^{p-3}b_1$}; \node at (0,-6) {$+$}; \end{tikzpicture}\] For $0\neq \alpha\in \A^k(\F)$, let $X_\alpha=\sum_{i=1}^k \alpha_iX_i$ as in the preliminary. By Lemma \ref{L: action}, we have \[ \begin{tikzpicture} \matrix(m)[matrix of math nodes,left delimiter=(,right delimiter=),row sep=0.05cm,column sep=0.05cm] { 0 &{} &{} &{} &\alpha_1 &{} &{} &{} &{}&{}&\alpha_1 &{} &{} &{} \\ \alpha_1 & 0 &{} &{} &{} &{} &{} &{} &\ldots&{}&{} &{} &{} &{} \\ {} &\ddots& \ddots &{} &{} &{} &{} &{} &{}&&{} &{} &{} &{} &{} \\ {} &{} & \alpha_1& 0 &{} &{} &{} &{} &{}&{}&{}&{} &{} &{} &{} \\ {} &{} &{} &{} & 0 &{} &{} &{} &{}&{}&{}&{} &{} &{} &{} \\ {} &{} &{} &{} &\alpha_2 & 0 &{} &{} &{}&{}&{}&{} &{} &{} &{} \\ {} &{} &{} &{} &{} & \ddots & \ddots&{} &{}&{}&{} &{} &{} &{} &{} \\ {} &{} &{} &\alpha_1&{} &{} &\alpha_2& 0 &{}&{}&{} &{} &{} &{} &{} \\ {} &\vdots&{}&{}&{}&{}&{}&{}&\ddots&{}&{}&{}&{} \\ {} &{} &{} &{} &{} &{} &{} &{} &{}&{}& 0 &{} &{} &{} \\ {} &{} &{} &{} &{} &{} &{} &{} &{}&{}& \alpha_k& 0 &{} &{} \\ {} &{} &{} &{} &{} &{} &{} &{} &{}&{}&{} & \ddots & \ddots&{} \\ {} &{} &{} &\alpha_1&{} &{} &{} &{} &{}&{}&{} &{} &\alpha_k &0 \\ }; 
\node[fit=(m-1-1)(m-4-4),draw=blue,dashed,inner sep=0.5mm]{}; \node[fit=(m-5-5)(m-8-8),draw=blue,dashed,inner sep=0.5mm]{}; \node[fit=(m-10-11)(m-13-14),draw=blue,dashed,inner sep=0.5mm]{}; \draw[dashed,color=orange] (-2,2) rectangle (0.8,4.8); \draw[dashed,color=orange] (2,2) rectangle (5,4.8); \draw[dashed,color=orange] (-5.1,-0.95) rectangle (-2.2,1.8); \draw[dashed,color=orange] (-5.1,-4.75) rectangle (-2.2,-1.9); \node at (-2.8,4.5) {$S(1,1)$}; \node at (0.1,4.5) {$S(1,2)$}; \node at (4.3,4.5) {$S(1,k)$}; \node at (-2.9,1.5) {$S(2,1)$}; \node at (0.2,1.5) {$S(2,2)$}; \node at (-2.9,-2.2) {$S(k,1)$}; \node at (4.2,-2.2) {$S(k,k)$}; \node at (-6.7,0) {$[X_\alpha]_\B=$}; \end{tikzpicture}\] where we have deliberately left out the zero entries; the matrix comprises the submatrices $S(i,j)$, where \begin{enumerate}[(a)] \item the diagonal submatrices $S(i,i)$ are square of sizes $(p-2)\times (p-2)$ if $i=1$ and $p\times p$ if $i\geq 2$, \item for $2\leq i\leq k$, $S(i,1)$ is the matrix with the $(p,p-2)$-entry $\alpha_1$ and zero elsewhere, \item for $2\leq j\leq k$, $S(1,j)$ is the matrix with the $(1,1)$-entry $\alpha_1$ and zero elsewhere, and \item $S(i,j)$ is the zero matrix otherwise. \end{enumerate} Notice that, in the case $k=1$, $[X_\alpha]_\B=S(1,1)$. In order to compute the Jordan type of $[X_\alpha]_\B$ in general, we need to know $\rank([X_\alpha]_\B^r)$ for all $1\leq r\leq p-1$. The next lemma addresses this computation. Recall the polynomial $p_k$ in Equation \ref{Eq: pk}. \begin{lem}\label{rank} Let $k\geq 2$ and $S$ be the matrix $[X_\alpha]_\B$. We have \begin{enumerate}[(i)] \item $\rank(S)\leq (k-1)(p-1)+p-3$ and equality holds if and only if all $\alpha_i$'s are nonzero; \item if $p\geq 5$ and all $\alpha_i$'s are nonzero, then $\rank(S^{p-3})=3k-2$ and $\rank(S^{p-2})=2k-2$; \item if all $\alpha_i$'s are nonzero, then $\rank(S^{p-1})\leq k-1$ and equality holds if and only if $p_k(\alpha_1,\alpha_2,\ldots,\alpha_k)\neq 0$.
\end{enumerate} \end{lem} \begin{proof} In this proof, for a matrix $A$, the $j$th row of $A$ is denoted by $A_j$ for all admissible $j$. Notice that $S$ has at least $k-1$ zero rows. If $\alpha_1=0$, then $\rank(S)\leq (k-1)(p-1)$. Also, if $\alpha_i=0$ for some $i=2,3,\ldots,k$, then $\rank(S)\leq (k-2)(p-1)+p-2$. Suppose $\alpha_i\neq 0$ for $i=1,2,\ldots,k$. We have \[S_1=\sum_{i=2}^{k} \frac{\alpha_1}{\alpha_i}S_{p(i-1)},\] and the remaining nonzero rows form a basis for the row space. In this case, $\rank S=kp-2-k$ and the proof for part (i) is complete. Now we suppose all the $\alpha_i$'s are nonzero. Arguing inductively on $m$, we can work out the explicit matrix for $S^m$. We leave it to the reader to observe what $S^m$ looks like in general; having presented $S$ above, we display $S^{p-3}$ and $S^{p-2}$ here. We have \[ S^{p-3}=\begin{pNiceMatrix}[first-row,first-col,small] &1 &2& \ldots &{p-2}&{p-1}&{p} &{p+1}&\ldots &2p-2&\ldots&{kp-p-1}&{kp-p} &{kp-p+1} & \dots&{kp-2}\\ 1 & & & & & & & & & & && & & & & \\ \vdots & & & & & && & & & && & & & & \\ {p-3} & 0 & & & &\alpha_1^{p-3}& & & & & \ldots &\alpha_1^{p-3}& & & & \\ {p-2} &\alpha_1^{p-3}&0& & & 0 & & & & & &0& & & & & \\ \hdottedline {p-1} &&& & & & & & & & && & & & & \\ \vdots & & & & & & & & & & && & & & & \\ {2p-4}& & & & &\alpha_2^{p-3}& & & & & && & & & & \\ {2p-3}& & & & & 0 &\alpha_2^{p-3}& & & & && & & & & \\ {2p-2}& 0 &\alpha_1^{p-3}&& & 0 & 0 &\alpha_2^{p-3}&& & && & & & & \\ \hdottedline \vdots&&\vdots&&&&&&&&\ddots&&&&\\ \hdottedline kp-p-1&&&&&&&&&&&&&&\\ \vdots&&&&&&&&&&&&&&\\ {kp-4}& & & & & & & & & & &\alpha_k^{p-3} & & & \\ {kp-3}& & & & & & & & & & & 0 &\alpha_k^{p-3}& & \\ {kp-2}& 0 &\alpha_1^{p-3}&& & & & & & & & 0 & 0 &\alpha_k^{p-3}& \\ \CodeAfter \tikz \draw[dotted,color=black] (1-|5) -- (16-|5); \tikz \draw[dotted,color=black] (1-|10) -- (16-|10); \tikz \draw[dotted,color=black] (1-|11) -- (16-|11); \end{pNiceMatrix}\] where zero entries are deliberately left out.
Notice that there are only $2+(k-1)3=3k-1$ nonzero rows with \[S^{p-3}_{p-3}=\sum_{i=2}^{k} \left (\frac{\alpha_1}{\alpha_i}\right )^{p-3}S^{p-3}_{ip-4}.\] Thus, $\rank(S^{p-3})=3k-2$. For $S^{p-2}$, we have \[ S^{p-2}=\begin{pNiceMatrix}[first-row,first-col,small] &1 &2& \ldots &{p-2}&{p-1}&{p} &\ldots &2p-2&\ldots&{kp-p-1}&{kp-p} & \dots&{kp-2}\\ 1 & & & & & & & & & & && & & & & \\ \vdots & & & & & && & & & && & & & & \\ {p-3} & & & & && & & & & && & & & \\ {p-2} &&& & & \alpha_1^{p-2} & & & & \ldots & \alpha_1^{p-2} & & & & & \\ \hdottedline {p-1} &&& & & & & & & & && & & & & \\ \vdots & & & & & & & & & & && & & & & \\ {2p-3}& & & & & \alpha_2^{p-2}&& & & & && & & & & \\ {2p-2}& \alpha_1^{p-2}&&& & 0 &\alpha_2^{p-2}&& & & && & & & & \\ \hdottedline \vdots&\vdots&&&&&&&&\ddots&&&&\\ \hdottedline kp-p-1&&&&&&&&&&&&&&\\ \vdots&&&&&&&&&&&&&&\\ {kp-3}& & & & & & & & & &\alpha_k^{p-2}& & \\ {kp-2} &\alpha_1^{p-2}&& & & & & & & & 0 &\alpha_k^{p-2}& \\ \CodeAfter \tikz \draw[dotted,color=black] (1-|5) -- (16-|5); \tikz \draw[dotted,color=black] (1-|9) -- (16-|9); \tikz \draw[dotted,color=black] (1-|10) -- (16-|10); \end{pNiceMatrix}.\] There are only $1+(k-1)2=2k-1$ nonzero rows in $S^{p-2}$ with \[S^{p-2}_{p-2}=\sum_{i=2}^{k} \left (\frac{\alpha_1}{\alpha_i}\right )^{p-2}S^{p-2}_{ip-3}.\] Thus, $\rank(S^{p-2})=2k-2$ and the proof for part (ii) is complete. We now prove part (iii). 
Using the matrix $S^{p-2}$, we obtain \[ S^{p-1}=\begin{pNiceMatrix}[first-row,first-col,small] &\ldots &{p-1} &\ldots &{2p-1}&\ldots &{kp-p-1} & \ldots&\\ \vdots & & & & & & & \\ {2p-2}& &\alpha_1^{p-1}+\alpha_2^{p-1}&&\alpha_1^{p-1}&\ldots& \alpha_1^{p-1} &\\ \vdots&&&&&&&\\ {3p-2}& &\alpha_1^{p-1}& &\alpha_1^{p-1}+\alpha_3^{p-1}&\ldots&\alpha_1^{p-1}&\\ \vdots&&\vdots&&\vdots&\ddots&\vdots&\\ {kp-2}& &\alpha_1^{p-1}& &\alpha_1^{p-1} & \ldots &\alpha_1^{p-1}+\alpha_k^{p-1}& \\ \end{pNiceMatrix}\] where we now only highlight the possibly nonzero entries with their corresponding rows and columns. There are only $k-1$ nonzero rows in $S^{p-1}$ and $\rank (S^{p-1})=k-1$ if and only if the following $(k-1)\times (k-1)$-matrix has full rank: \[B=\begin{pmatrix} \alpha_1^{p-1}+\alpha_2^{p-1}&\alpha_1^{p-1} &\ldots &\alpha_1^{p-1}\\ \alpha_1^{p-1} &\alpha_1^{p-1}+\alpha_3^{p-1}&\ldots &\alpha_1^{p-1}\\ \vdots &\vdots&\ddots&\vdots\\ \alpha_1^{p-1} &\alpha_1^{p-1} &\ldots &\alpha_1^{p-1}+\alpha_k^{p-1}\\ \end{pmatrix}. \] We claim that $\det B=p_k(\alpha_1,\ldots,\alpha_k)$, which proves part (iii). Performing row operations, we have \[\xymatrix{B\ar[r]^-T&{\begin{pmatrix} \alpha_2^{p-1}&0 &\ldots &-\alpha_k^{p-1}\\ 0 &\alpha_3^{p-1} &\ldots &-\alpha_k^{p-1}\\ \vdots &\vdots&\ddots&\vdots\\ \alpha_1^{p-1} &\alpha_1^{p-1} &\ldots &\alpha_1^{p-1}+\alpha_k^{p-1} \end{pmatrix}=:B'}\ar[d]^-U\\ & {B'':=\begin{pmatrix} \alpha_2^{p-1}&0 &\ldots &-\alpha_k^{p-1}\\ 0 &\alpha_3^{p-1} &\ldots &-\alpha_k^{p-1}\\ \vdots &\vdots&\ddots&\vdots\\ 0 &0 &\ldots &\alpha_1^{p-1}+\alpha_k^{p-1}+\sum_{i=2}^{k-1}\frac{\alpha_1^{p-1}\alpha_k^{p-1}}{\alpha_i^{p-1}} \end{pmatrix}}}\] where $T$ corresponds to the row operations of subtracting the $(k-1)$th row from the $i$th row, one for each $1\leq i\leq k-2$, and $U$ corresponds to the row operation $B'_{k-1}-\sum_{i=1}^{k-2}\left (\frac{\alpha_1}{\alpha_{i+1}}\right )^{p-1}B'_i$.
Therefore \[\det(B)=\det(B'')=\alpha_2^{p-1}\cdots\alpha_{k-1}^{p-1}\left (\alpha_1^{p-1}+\alpha_k^{p-1}+\sum_{i=2}^{k-1}\frac{\alpha_1^{p-1}\alpha_k^{p-1}}{\alpha_i^{p-1}}\right )=p_k(\alpha_1,\ldots,\alpha_k)\] as desired. \end{proof} We conclude the section with the following result concerning the generic Jordan type and maximal Jordan set of $D(1){\downarrow_{E_k}}$. \begin{thm}\label{T: jtD} The generic Jordan type of $D(1){\downarrow_{E_k}}$ is $[p-2][p]^{k-1}$ and it has maximal Jordan set $\U{E_k}{D(1)}$ the complement of \[V(p_k)\cup\bigcup_{i=1}^k V(x_i). \] \end{thm} \begin{proof} If $k=1$, $[X_\alpha]_\B=S(1,1)$ has rank $p-3$ for all $\alpha_1\neq 0$. In this case, the (generic) Jordan type of $D(1){\downarrow_{E_1}}$ is $[p-2]$. Also, $p_1=1$ by our convention and its maximal Jordan set is precisely $\F\backslash\{0\}$. Suppose now that $k\geq 2$. Let $M=D(1){\downarrow_{E_k}}$ and $W=V(p_k)\cup\bigcup_{i=1}^k V(x_i)$. For $0\neq\alpha\in \A^k(\F)$, let $S=[X_\alpha]_\B$. Suppose that the Jordan type of $S$ is given by $[p]^{a_p} [p-1]^{a_{p-1}}\ldots [1]^{a_1}$. By Lemma \ref{rank}, $a_p\leq k-1$ and $a_p=k-1$ if and only if $\alpha_i\neq 0$ for all $i$ and $p_k(\alpha_1,\alpha_2,\ldots,\alpha_k)\neq 0$, i.e., $\alpha\not\in W$. In this case, given $\rank(S^{p-3})=3k-2$ and $\rank(S^{p-2})=2k-2$, we obtain that $a_{p-2}+2a_{p-1}+3a_p=3k-2$ and $a_{p-1}+2a_p=2k-2$. Thus, $a_{p-2}=1$ and $a_{p-1}=0$. Therefore the Jordan type of $[X_\alpha]_\B$ is $[p-2][p]^{k-1}$ as $\dim M=kp-2$. Furthermore, $[p-2][p]^{k-1}$ is the most dominant Jordan type among all Jordan types of modules of dimension $kp-2$. As such, the maximal and generic Jordan type of $M$ is $[p-2][p]^{k-1}$ and $\U{E_k}{M}$ is the complement of $W$. \end{proof} \begin{rem} When $p=2$ and $k>1$, our calculation shows that the generic Jordan type of $D(1){\downarrow_{E_k}}$ is $[2]^{k-1}$ and it has maximal Jordan set $\U{E_k}{D(1)}$ the complement of $V(p_k)$.
This coincides with \cite{Jiang21} using Corollary \ref{C: complement}. \end{rem} \section{The simple module $D(p-1)$} As in the previous section, we assume $p\geq 3$, fix an integer $k\geq 2$ and let $n=kp$. Again, $E_k$ is the elementary abelian $p$-subgroup of $\sym{kp}$ generated by $k$ disjoint $p$-cycles and $p_k$ is the polynomial in Equation \ref{Eq: pk}. In this section, we shall make use of the fact that $D(p-1)$ is the $(p-1)$th exterior power of $D(1)$ to compute the rank variety for $\res{D(p-1)}{E_k}$ when $k\not\equiv 1\pmod{p}$. We would like to mention that the rank variety should also be $V(p_k)$ in the case $k\equiv 1\pmod{p}$, but our method unfortunately does not apply there. \begin{lem}\label{jtDp-1} The generic Jordan type of $\res{D(p-1)}{E_k}$ is $[p]^m$ where $mp=\binom{kp-2}{p-1}$. \end{lem} \begin{proof} It is well known that, for $\F G$-modules $M$ and $N$, we have \[\bigwedge^m(M\oplus N)\cong \bigoplus^m_{i=0} \bigwedge^i(M)\otimes\bigwedge^{m-i} (N).\] By Proposition \ref{P: Umax}, we conclude that the generic Jordan type of $\res{D(p-1)}{E_k}\cong \res{\left (\bigwedge^{p-1} D(1)\right )}{E_k}$ is the $(p-1)$th exterior power of the generic Jordan type of $D(1)$. By Theorem \ref{T: jtD}, the generic Jordan type of $D(1)$ is $[p-2][p]^{k-1}$. Notice that \begin{align*} N:=\bigwedge^{p-1}(J_{p-2}\oplus J_p^{\oplus k-1})\cong \bigoplus^{p-1}_{i=0} \bigwedge^{p-1-i} J_{p-2}\otimes \bigwedge^{i}(J_p^{\oplus k-1}). \end{align*} Unless $i=0$, $\bigwedge^{i}(J_p^{\oplus k-1})$ is free. However, when $i=0$, $\bigwedge^{p-1} J_{p-2}$ is the zero module. Therefore, $N$ is free. Since the dimension of $D(p-1)$ is $\binom{kp-2}{p-1}$, we get the number $m$ as desired. The proof is now complete. \end{proof} Since the generic Jordan type of $\res{D(p-1)}{E_k}$ is free, by Corollary \ref{C: complement}, we have $V^\#_{E_k}(D(p-1))=(\U{E_k}{D(p-1)})^c$.
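The divisibility properties of $\dim_\F D(p-1)=\binom{kp-2}{p-1}$ that underpin Lemma \ref{jtDp-1}, and the fact (used in the proof of the main theorem below) that $p$ but not $p^2$ divides this dimension when $k\not\equiv 1\pmod{p}$, can be sanity-checked by machine. The following short script is ours and purely illustrative; it is not part of the arguments.

```python
# Numerical sanity check (not part of the proofs): dim D(p-1) = C(kp-2, p-1)
# is divisible by p for every k >= 2 (consistent with the generic Jordan
# type [p]^m), and is divisible by p but not by p^2 whenever k is not
# congruent to 1 modulo p.
from math import comb

for p in [3, 5, 7]:
    for k in range(2, 12):
        d = comb(k * p - 2, p - 1)
        assert d % p == 0              # p divides dim D(p-1)
        if k % p != 1:
            assert d % (p * p) != 0    # p^2 does not divide dim D(p-1)
print("divisibility checks passed")
```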
By Theorem \ref{T: jtD} and Proposition \ref{P: Umax}, we conclude that \begin{equation}\label{subset} V_{E_k}^\#(D(p-1))\subseteq V(p_k)\cup\bigcup_{i=1}^k V(x_i). \end{equation} More precisely, we have the following lemma. In the proof, we need the modular branching rule. Since we do not require it elsewhere, we refer the readers to \cite{kleshchev} for the necessary details. \begin{lem}\label{subseteq} $V_{E_k}^\#(D(p-1))\subseteq V(p_k)$. \end{lem} \begin{proof} Given Equation \ref{subset}, we just need to show that, for any $1\leq i\leq k$, \begin{equation*} V_{E_k}^\#(D(p-1))\cap V(x_i)\subseteq \bigcup_{j\neq i} V(x_j,x_i) \subseteq V(p_k). \end{equation*} The second inclusion is clear since, if any two of the coordinates of a point are $0$, then $p_k$ vanishes at this point. Showing the first inclusion is equivalent to showing that, for any $\alpha=(\alpha_1,\dots,\alpha_k)\in V_{E_k}^\#(D(p-1))$ with $\alpha_i=0$ for some $i$, there exists $j\neq i$ such that $\alpha_j=0$. Let $\alpha$ be such a point. By Lemma \ref{L: symmetry}, we can further assume that $i=k$ and let $\alpha'=(\alpha_1,\ldots,\alpha_{k-1})$. Since \[u_\alpha=u_{\alpha'}=1+\sum_{i=1}^{k-1}\alpha_i(g_i-1)\in \F\sym{kp-1},\] we have $\res{D(p-1)}{\langle u_\alpha\rangle}=\res{(\res{D(p-1)}{\sym{kp-1}})}{\langle u_{\alpha'}\rangle}$ and therefore $\alpha'\in V^\#_{E_{k-1}}(D(p-1))$. By the modular branching rule \cite[Theorem 11.2.7]{kleshchev}, since the only good node in $(kp-p+1,1^{p-1})$ is the $(1,kp-p+1)$-node (see the diagram below, which indicates the necessary residues and the good node), we have $\res{D(p-1)}{\sym{kp-1}}\cong D^{(kp-p,1^{p-1})}$.
\[ \begin{ytableau} 0&1& \none[\dots]&\scriptstyle p-1&0& \none[\dots]&\scriptstyle p-1&*(yellow)0&\none[1] \cr \scriptstyle p-1 &\none[0] \cr \scriptstyle p-2 \cr \none[\vdots] \cr 1 \cr \none [0] \end{ytableau}\] Since $(kp-p,1^{p-1})$ is a hook of size coprime to $p$, by \cite[Theorem 2]{Peel} and \cite[Theorem 11.1]{James78}, we have $D^{(kp-p,1^{p-1})}\cong S^{(kp-p,1^{p-1})}$. By \cite[Theorem 7.3]{Wildon10}, $S^{(kp-p,1^{p-1})}$ has as a vertex a Sylow $p$-subgroup $P$ of $\sym{kp-2p}$ and has a trivial $\F P$-source. Therefore, using Mackey's formula, \[\res{D(p-1)}{E_{k-1}}\mid\res{\ind{\F_P}{\sym{kp-1}}}{E_{k-1}}\cong\bigoplus_{s\in E_{k-1}\backslash \sym{kp-1}/P} \ind{\res{{}^s\F_P}{E_{k-1}\cap sPs^{-1}}}{E_{k-1}} \] where each $E_{k-1}\cap sPs^{-1}$ is a proper subgroup of $E_{k-1}$. Consequently, we have \[\alpha'\in V_{E_{k-1}}^\#(D(p-1))\subseteq \bigcup_{s\in E_{k-1}\backslash \sym{kp-1}/P} V_{E_{k-1}}^\#(\ind{\res{{}^s\F_P}{E_{k-1}\cap sPs^{-1}}}{E_{k-1}}), \] and there exists some $s\in E_{k-1}\backslash \sym{kp-1}/P$ such that $\alpha' \in V_{E_{k-1}}^\#(\ind{\res{{}^s\F_P}{E_{k-1}\cap sPs^{-1}}}{E_{k-1}})$. Let $E'=E_{k-1}\cap sPs^{-1}$. Since $P\leq \sym{(k-2)p}$, we have $sPs^{-1} \leq\sym{\{s(1),s(2),\dots,s((k-2)p)\}}$. Therefore $sPs^{-1}$ permutes only $(k-2)p$ numbers and $E'$ is properly contained in $E_{k-1}$. We claim that there exists $1\leq j\leq k-1$ such that, for any $\prod_{i=1}^{k-1} g_i^{q_i}\in E'$, $q_j=0$. If not, then for any $1\leq j\leq k-1$, there exists some $\prod_{i=1}^{k-1} g_i^{q_i}\in E'$ such that $q_j\neq 0$. This shows that $E'$ permutes all $(k-1)p$ numbers, which is a contradiction. Since $E'$ avoids the generator $g_j$ completely, by Lemma \ref{L: rkinduction}, we have $w_{ij}=0$ for $1\leq i\leq k-1$ in that statement, i.e., $\alpha_j=0$. The proof is now complete. \end{proof} Now we are ready to state and prove the main theorem in this section.
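The factorisation $X^{p-1}+Y^{p-1}=\prod_{r\in I}(X-rY)$ over $\F$, which the proof of the theorem below uses in the case $k=2$, is also easy to check by machine. The following script is ours and purely illustrative: it models $\F_{p^2}=\F_p[w]/(w^2-a)$ for a quadratic non-residue $a$ and verifies the identity pointwise, which suffices since the degree $p-1$ is smaller than the field size $p^2$.

```python
# Illustrative check (not part of the argument): over F_{p^2}, the set I of
# (p-1)th roots of -1 has exactly p-1 elements, and X^{p-1} + Y^{p-1}
# equals prod_{r in I} (X - rY) at every point of F_{p^2} x F_{p^2}.

def field_p2(p):
    """Elements and multiplication of F_{p^2} = F_p(w) with w^2 = a."""
    # a = smallest quadratic non-residue mod p, so x^2 - a is irreducible
    a = next(c for c in range(2, p) if all(x * x % p != c for x in range(p)))
    elems = [(u, v) for u in range(p) for v in range(p)]
    def mul(s, t):
        (u1, v1), (u2, v2) = s, t
        return ((u1 * u2 + a * v1 * v2) % p, (u1 * v2 + u2 * v1) % p)
    return elems, mul

def power(mul, z, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, z)
    return r

for p in [3, 5, 7]:
    elems, mul = field_p2(p)
    def sub(s, t):
        return ((s[0] - t[0]) % p, (s[1] - t[1]) % p)
    # the (p-1)th roots of -1 in F_{p^2}
    I = [z for z in elems if z != (0, 0) and power(mul, z, p - 1) == (p - 1, 0)]
    assert len(I) == p - 1
    for x in elems:
        xp = power(mul, x, p - 1)
        for y in elems:
            yp = power(mul, y, p - 1)
            lhs = ((xp[0] + yp[0]) % p, (xp[1] + yp[1]) % p)
            rhs = (1, 0)
            for r in I:
                rhs = mul(rhs, sub(x, mul(r, y)))
            assert lhs == rhs
print("factorisation checks passed")
```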
\begin{thm}\label{T: main thm} If $k\not\equiv 1 \pmod{p}$, then \[V_{E_k}^\#(D(p-1))=V(p_k).\] \end{thm} \begin{proof} From Lemma \ref{subseteq}, we know that $W:=V_{E_k}^\#(D(p-1))\subseteq V(p_k)$ and \begin{align*} \dim_\F D(p-1)=\binom{kp-2}{p-1}&=\frac{(kp-2)(kp-3)\ldots(k-1)p}{(p-1)!}\\ &=\frac{(kp-2)(kp-3)\ldots((k-1)p+1)}{(p-1)!}\cdot (k-1)p. \end{align*} Since $k\not \equiv 1\pmod{p}$, $p$ divides $\dim_\F {D(p-1)}$ but $p^2$ does not. Let $r=\dim W$. By Theorem \ref{T: dimension}, we have $p^{k-r}\mid \dim_\F D(p-1)$ and so $r\geq k-1$. On the other hand, $\dim {V(p_k)}= k-1$. Therefore $r=k-1$. When $k\geq 3$, by Lemma \ref{irred}, $V(p_k)$ is irreducible and we must have $W= V(p_k)$ by Theorem \ref{T: irred variety dim}. Suppose now that $k=2$. Notice that $V(p_2)$ is reducible and we have a factorisation \[X^{p-1}+Y^{p-1}=\prod_{r\in I} (X-rY), \] where $I$ is the set of all $(p-1)$th roots of $-1$ in $\F$. We have $V(p_2)=\bigcup_{r\in I}V(X-rY)$ and this gives the irreducible components of $V(p_2)$. On the other hand, $\dim W=1=\dim V(p_2)$. Then $W=\bigcup_{r\in I'} V(X-rY)$ for some $\emptyset\neq I'\subseteq I$. Let $\lambda$ be a primitive $(2p-2)$th root of unity in $\F$ where $\langle\lambda\rangle$ is a subgroup of $\F_{p^2}^\times$. We have $I=\{\lambda^1,\lambda^3,\dots,\lambda^{2p-3}\}$ and $\{\lambda^2,\lambda^4,\dots,\lambda^{2p-2}\}=\F_p^{\times}$. Let $r'=\lambda^{2i+1}\in I'$. By Lemma \ref{L: symmetry}, for any $r=\lambda^{2j+1}\in I$, we have \[(r,1)=(\lambda^{2j-2i}\lambda^{2i+1},1)=\lambda^{2j-2i}\cdot (r',1)\in W\] where we regard $\lambda^{2j-2i}$ as an element of the first component of $\F_p^\times\wr \sym{2}$. As such, $I'=I$ and we have $W=V(p_2)$ as required. \end{proof} \begin{rem} The only obstruction to applying the proof of Theorem \ref{T: main thm} to the remaining case, namely $k\equiv 1\pmod{p}$, is that we could not deduce that $\dim V^\#_{E_k}(D(p-1))=k-1$.
If the dimension were indeed $k-1$, the rest of the proof follows. Also, Corollaries \ref{C: up bound pk}, \ref{C: variety kp-p-1} and \ref{C: comp} below would also follow at once. \end{rem} We conclude the paper with some corollaries of our result. \begin{cor}\label{C: up bound pk} For $k\not\equiv 1\pmod{p}$, we have $d_{V(p_k)}\leq \binom{kp-2}{p-1}$. \end{cor} \begin{cor}\label{C: variety kp-p-1} If $k\not\equiv 1\pmod{p}$, then $V^\#_{E_k}(D(kp-p-1))=V(p_k)$. \end{cor} \begin{proof} By \cite[Theorem 8.15]{James78}, $S^\lambda\otimes \sgn\cong (S^{\lambda'})^*$ where $\lambda'$ is the conjugate partition of $\lambda$. Since $D(p-1)$ appears as the head of $S^{(kp-p+1,1^{p-1})}$ and $D(kp-p-1)$ appears as the socle of $S^{(p,1^{kp-p})}$, we have $D(p-1)\otimes \sgn\cong D(kp-p-1)$. Therefore, by Theorem \ref{T: basic rank}(iii), the restrictions of the modules $D(p-1)$ and $D(kp-p-1)$ to $E_k$ have the same rank variety. \end{proof} \begin{cor}\label{C: comp} Suppose that $k\not\equiv 1 \pmod{p}$. The complexities of the simple modules $D(p-1)$ and $D(kp-p-1)$ are $k-1$. \end{cor} \begin{proof} Since $p\geq 3$, the $p$-rank of an elementary abelian $p$-subgroup $E$ of $\sym{kp}$ is strictly less than $k$ unless $E$ is conjugate to $E_k$. Since the dimension of $V^\#_{E_k}(D(p-1))$ is $k-1$ and the rest are at most $k-1$, the maximal value must be $k-1$. By Equation \ref{Eq: complexity} and Theorem \ref{T: basic rank}(ii), the complexity of $D(p-1)$ must be $k-1$. The same holds for $D(kp-p-1)$ using Corollary \ref{C: variety kp-p-1}. \end{proof} \begin{cor} Let $\lambda_1,\ldots,\lambda_{p-1}$ be the $(p-1)$th roots of $-1$ in $\F$. When $k=2$, the $\F\sym{2p}$-module $D(p-1)$ restricted to $E_2$ decomposes into $Q\oplus\bigoplus_{j=1}^{p-1} N_j$ such that $Q$ is projective, for each $1\leq j\leq p-1$, $N_j$ is projective-free and $V^\#_{E_2}(N_j)=V(X-\lambda_jY)$.
\end{cor} \begin{proof} By Theorem \ref{T: main thm}, when $k=2$, the connected components of the projective variety $\overline{V^\#_{E_2}(D(p-1))}$ are singleton points $(\lambda_j:1)$ one for each $1\leq j\leq p-1$. As such, by Theorem \ref{T: basic rank}(iv), there are summands $Q,N_1,\ldots,N_{p-1}$ of $\res{D(p-1)}{E_2}$ with the desired property. \end{proof} \begin{rem} By \cite{Danz07,DG15}, it is known that the Green vertices of $D(p-1)$ are the Sylow $p$-subgroups of $\sym{kp}$. In the case of $k\not\equiv 1(\mod p)$, Lemma \ref{irred}, Theorems \ref{T: Green} and \ref{T: main thm} confirm that $D(p-1)$ has a Green vertex containing $E_k$ as $V(p_k)$ is not contained in the union of the proper base subspaces. Moreover, the same holds for $D(kp-p-1)$. \end{rem} \begin{thebibliography}{111} \bibitem{AE81} J. L. Alperin and L. Evens, Representations, resolutions and Quillen's dimension theorem, J. Pure Appl. Algebra 22 (1981), no. 1, 1--9. \bibitem{AE82} J. L. Alperin and L. Evens, Varieties and elementary abelian groups, J. Pure Appl. Algebra 26 (1982), no. 3, 221--227. \bibitem{Benson1} D. J. Benson, Representations and Cohomology, I: Basic representation theory of finite groups and associative algebras, Cambridge Stud. Adv. Math. 30, Cambridge Univ. Press, 1991. \bibitem{Benson2} D. J. Benson, Representations and Cohomology, II: Cohomology of groups and modules, Cambridge Stud. Adv. Math. 31, Cambridge Univ. Press, 1991. \bibitem{benson2016} D. J. Benson, Representations of Elementary Abelian $p$-groups and Vector Bundles, Cambridge Tracts in Mathematics, 208, Cambridge University Press, 2017. \bibitem{Magma} W. Bosma, J. Cannon, and C. Playoust, The Magma algebra system. I. The user language, J. Symbolic Comput., 24 (1997), 235--265. \bibitem{carlson} J. F. Carlson, The varieties and the cohomology ring of a module, J. Algebra 85 (1983), no. 1, 104--143. \bibitem{Carlson84} J. F. Carlson, The variety of an indecomposable module is connected, Invent. Math. 
77 (1984), no. 2, 291--299. \bibitem{Carlson93} J. F. Carlson, Varieties and modules of small dimension, Arch. Math. (Basel) 60 (1993), no. 5, 425--430. \bibitem{Dade78} E. Dade, Endo-permutation modules over $p$-groups II, Ann. of Math. (2) 108 (1978), no. 2, 317--346. \bibitem{Danz07} S. Danz, On vertices of exterior powers of the natural simple module for the symmetric group in odd characteristic, Arch. Math. (Basel) 89 (2007), no. 6, 485--496. \bibitem{DG15} S. Danz and E. Giannelli, Vertices of simple modules of symmetric groups labelled by hook partitions, J. Group Theory 18 (2015), no. 2, 313--334. \bibitem{DL17} S. Danz and K. J. Lim, Signed Young modules and simple Specht modules, Adv. Math. 307 (2017), 369--416. \bibitem{FPS} E. M. Friedlander, J. Pevtsova and A. Suslin, Generic and maximal Jordan types, Invent. Math. 168 (2007), no. 3, 485--522. \bibitem{Green59} J. A. Green, On the indecomposable representations of a finite group, Math. Z. 70 (1959) 430--445. \bibitem{James78} G. D. James, The Representation Theory of the Symmetric Groups, Lecture Notes in Mathematics 682, Springer, Berlin, 1978. \bibitem{JK} G. James and A. Kerber, The Representation Theory of the Symmetric Group, Encyclopedia of Mathematics and its Applications, vol. 16, Addison–Wesley Publishing Co., Reading, MA, 1981. \bibitem{Jiang21} Y. Jiang, On the complexities of some simple modules of symmetric groups, Beitr. Algebra Geom. 60 (2019), no. 4, 599--625. \bibitem{kleshchev} A. Kleshchev, Linear and Projective Representations of Symmetric Groups, Cambridge Tracts in Mathematics, 163, Cambridge University Press, 2005. \bibitem{L09} K. J. Lim, The varieties for some Specht modules, J. Algebra 321 (2009), no. 8, 2287--2301. \bibitem{LT13} K. J. Lim and K. M. Tan, The complexities of some simple modules of the symmetric groups, Bull. Lond. Math. Soc. 45 (2013), no. 3, 497--510. \bibitem{MZ07} J. M\"{u}ller and R. 
Zimmermann, Green vertices and sources of simple modules of the symmetric group labelled by hook partitions, Arch. Math. (Basel) 89 (2) (2007) 97--108. \bibitem{Peel} M. H. Peel, Hook representations of symmetric groups, Glasgow Math. J. 12 (1971), 136--149. \bibitem{Shafarevich} I. R. Shafarevich, Basic Algebraic Geometry I: Varieties in projective space, second edition, Springer-Verlag, 1994. \bibitem{Uno} K. Uno, The rank variety of irreducible modules for symmetric groups, S\={u}rikaisekikenky\={u}sho K\={o}ky\={u}roku No. 1251 (2002), 8--15. \bibitem{Wildon10} M. Wildon, Vertices of Specht modules and blocks of the symmetric group, J. Algebra 323 (2010), no. 8, 2243--2256. \end{thebibliography} \end{document}
\documentclass{amsart} \usepackage[english]{babel} \usepackage[utf8x]{inputenc} \usepackage[T1]{fontenc} \usepackage{comment} \usepackage[none]{hyphenat} \usepackage{adjustbox} \usepackage{tikz} \usetikzlibrary{arrows, decorations.markings} \usepackage{ytableau} \usepackage{mathtools} \usepackage{cite} \usepackage{verbatim} \usepackage{comment} \usepackage{amsmath} \usepackage{enumitem} \usepackage{amssymb} \usepackage{amsthm} \usepackage{graphicx} \usepackage[colorinlistoftodos]{todonotes} \usepackage[colorlinks=true, allcolors=blue]{hyperref} \usepackage{url} \newcommand{\RNum}[1]{\uppercase\expandafter{\romannumeral #1\relax}} \newcommand{\EOl}{\mathrm{EO}\text{-largest}} \newcommand{\OEl}{\mathrm{OE}\text{-largest}} \newcommand{\size}{\mathfrak{s}} \newcommand{\partition}{\mathcal{C}_{s,s+1}} \newcommand{\peven}{\mathcal{C}_{s,s+1}^{\mathrm{E}}} \newcommand{\podd}{\mathcal{C}_{s,s+1}^{\mathrm{O}}} \newcommand{\oi}{\mathcal{O}_{s,s+1}} \newcommand{\oieo}{\mathcal{O}_{s,s+1}^{\mathrm{EO}}} \newcommand{\oioe}{\mathcal{O}_{s,s+1}^{\mathrm{OE}}} \renewcommand{\AA}{\mathbb{A}} \newcommand{\thth}{\textsuperscript{th}} \newcommand{\NN}{\mathbb{N}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\RR}{\mathbb{R}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\TT}{\mathcal{T}} \newcommand{\CC}{\mathbb{C}} \newcommand{\PP}{\mathbb{P}} \newcommand{\PPS}{\PP_{s, s+1}} \newcommand{\mm}{\mathfrak{m}} \newcommand{\pp}{\mathfrak{p}} \newcommand{\cC}{\mathcal{C}} \newcommand{\cD}{\mathcal{D}} \newcommand{\cF}{\mathcal{F}} \newcommand{\cO}{\mathcal{O}} \newcommand{\ra}{\rightarrow} \renewcommand{\aa}{\alpha} \newcommand{\bb}{\beta} \newcommand{\rr}{\gamma} \newcommand{\dd}{\partial} \newcommand{\set}[2]{\{#1 : #2\}} \DeclareMathOperator{\Id}{Id} \DeclareMathOperator{\im}{Im} \DeclareMathOperator{\Spec}{Spec} \DeclareMathOperator{\Res}{Res} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Set}{Set} \DeclareMathOperator{\coker}{coker} \DeclareMathOperator{\rank}{rank} 
\DeclareMathOperator{\nulity}{nulity} \DeclareMathOperator{\Ob}{Ob} \newcommand{\txt}[1]{\textnormal{#1}} \newcommand{\op}{\txt{op}} \newcommand{\Ab}{\txt{Ab}} \newcommand{\I}{\mathcal{I}} \newcommand{\J}{\mathcal{J}} \newcommand{\la}{\lambda} \newcommand{\overbar}[1]{\mkern 1.5mu\overline{\mkern-1.5mu#1\mkern-1.5mu}\mkern 1.5mu} \usepackage{mathtools} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \usepackage{mathrsfs} \newtheorem{thm}{Theorem} \theoremstyle{definition} \newtheorem{lem}[thm]{Lemma} \newtheorem{defn}[thm]{Definition} \newtheorem{prop}[thm]{Proposition} \newtheorem{rem}[thm]{Remark} \newtheorem{note}{Note} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{ex}[thm]{Example} \allowdisplaybreaks \newcommand{\ols}[1]{\mskip0\thinmuskip\overline{\mskip-.5\thinmuskip {#1} \mskip-2.5\thinmuskip}\mskip0\thinmuskip} \numberwithin{thm}{section} \title[bar-cores, CSYDs, and doubled distinct cores] {Results on bar-core partitions, core shifted Young diagrams, and doubled distinct cores} \author{Hyunsoo Cho} \address{Hyunsoo Cho, Institute of Mathematical Sciences, Ewha Womans University, Seoul, Republic of Korea} \email{[email protected]} \author{JiSun Huh} \address{JiSun Huh, Department of Mathematics, Ajou University, Suwon, Republic of Korea} \email{[email protected]} \author{Hayan Nam} \address{Hayan Nam, Department of Mathematics, Duksung Women's University, Seoul, Republic of Korea} \email{[email protected]} \author{Jaebum Sohn} \address{Jaebum Sohn, Department of Mathematics, Yonsei University, Seoul, Republic of Korea} \email{[email protected]} \begin{document} \begin{abstract} Simultaneous bar-cores, core shifted Young diagrams (or CSYDs), and doubled distinct cores have been studied since Morris and Yaseen introduced the concept of bar-cores. 
In this paper, our goal is to give formulas for the number of these core partitions among $(s,t)$-cores and among $(s,s+d,s+2d)$-cores in the remaining cases that have not been covered yet. In order to achieve this goal, we start from a characterization of $\overline{s}$-core partitions to obtain characterizations of doubled distinct $s$-core partitions and $s$-CSYDs. By using them, we construct $NE$ lattice path interpretations of these core partitions in the $(s,t)$-core case. Also, we give free Motzkin path interpretations of these core partitions in the $(s,s+d,s+2d)$-core case. \end{abstract} \maketitle \sloppy \section{Introduction} A \emph{partition} $\la = (\la_1, \la_2, \ldots, \la_{\ell})$ of $n$ is a non-increasing sequence of positive integers whose parts $\la_i$ sum to $n$. We write $\la_i \in \la$ when $\la_i$ is a part of $\la$, and visualize a partition $\la$ with its \emph{Young diagram} $D(\la)$. For a partition $\la$, $\la'$ is called the \emph{conjugate} of $\la$ if $D(\la')$ is the reflection of $D(\la)$ across the main diagonal, and $\la$ is called \emph{self-conjugate} if $\la=\la'$. An $(i,j)$-box of $D(\la)$ is the box at the $i$th row from the top and the $j$th column from the left. The \emph{hook length} of an $(i,j)$-box, denoted by $h_{i,j}(\la)$, is the total number of boxes to the right of the $(i,j)$-box, below it, and the $(i,j)$-box itself, and the \emph{hook set} $\mathcal{H}(\la)$ of $\la$ is the set of hook lengths of $\la$. We say that a partition $\la$ is an \emph{$s$-core} if $ks\notin\mathcal{H}(\la)$ for all $k \in \mathbb{N}$ and is an \emph{$(s_1, s_2, \dots, s_p)$-core} if it is an $s_i$-core for all $i=1,2,\dots,p$. Figure \ref{fig:ex} illustrates the Young diagram of a partition and a hook length. \begin{figure}[ht!]
\centering \small{ $D(\la)=$~\begin{ytableau} ~&~&~&~&~&~&~ \\ ~&~&~&~&~&~ \\ ~&~&~ \\ ~&~ \end{ytableau} \qquad \qquad \begin{ytableau} ~&*(gray!50)9&*(gray!50)&*(gray!50)&*(gray!50)&*(gray!50)&*(gray!50) \\ ~&*(gray!50)&~&~&~&~ \\ ~&*(gray!50)&~ \\ ~&*(gray!50) \end{ytableau}} \caption{The Young diagram of the partition $\la=(7,6,3,2)$ and a hook length $h_{1,2}(\la)=9$.} \label{fig:ex} \end{figure} There has been active research on the number of simultaneous core partitions and self-conjugate simultaneous core partitions since Anderson \cite{Anderson} counted the number of $(s,t)$-core partitions for coprime $s$ and $t$. For more information, see \cite{AL,FMS,Wang} for example. In this paper, we investigate three different types of core partitions, called bar-core partitions, core shifted Young diagrams, and doubled distinct core partitions. Researchers have studied them independently, but they are closely related to each other. We first give the definitions of these three objects, which we deal with only under the condition that the partition is \emph{strict}, meaning that its parts are all distinct. For a strict partition $\la=(\la_1, \la_2, \ldots, \la_{\ell})$, an element of the set \[ \{\la_i+\la_{i+1}, \la_i+\la_{i+2}, \dots, \la_i+\la_{\ell} \} \cup \left( \{ \la_{i}, \la_{i}-1, \dots, 1 \} \setminus \{\la_{i}-\la_{i+1}, \dots, \la_{i}-\la_{\ell}\} \right) \] is called a \emph{bar length} in the $i$th row. A strict partition $\la$ is called an \emph{$\overline{s}$-core} (\emph{$s$-bar-core}) if $s$ is not a bar length in any row of $\la$. For example, the sets of bar lengths in the rows of $\la=(7,6,3,2)$ are $\{13,10,9,7,6,3,2\}$, $\{9,8,6,5,2,1\}$, $\{5,3,2\}$, and $\{2,1\}$. Thus, $\la$ is an $\overline{s}$-core partition for $s=4,11,12$, or $s\geq 14$. The \emph{shifted Young diagram} $S(\la)$ of a strict partition $\la$ is obtained from $D(\la)$ by shifting the $i$th row to the right by $i-1$ boxes for each $i$.
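As a quick illustration of the definition above, the following Python sketch (our own code, not part of the paper; all function names are ours) computes the bar lengths of a strict partition row by row and tests the $\overline{s}$-core condition on the running example $\la=(7,6,3,2)$.

```python
def bar_lengths(la):
    """Sets of bar lengths in each row of a strict partition la (a decreasing tuple),
    following the set description above."""
    ell = len(la)
    rows = []
    for i in range(ell):
        sums = {la[i] + la[j] for j in range(i + 1, ell)}
        diffs = {la[i] - la[j] for j in range(i + 1, ell)}
        rows.append(sums | (set(range(1, la[i] + 1)) - diffs))
    return rows

def is_bar_core(la, s):
    """A strict partition is an s-bar-core if s is not a bar length in any row."""
    return all(s not in row for row in bar_lengths(la))

la = (7, 6, 3, 2)
print(sorted(bar_lengths(la)[0], reverse=True))        # [13, 10, 9, 7, 6, 3, 2]
print([s for s in range(1, 20) if is_bar_core(la, s)])  # [4, 11, 12, 14, 15, ..., 19]
```

The output reproduces the bar lengths of the first row and the values $s=4,11,12$, and $s\geq 14$ stated in the text.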
The \emph{shifted hook length} $h^*_{i,j}(\la)$ of an $(i,j)$-box in $S(\la)$ is the number of boxes to the right of, below, and including the $(i,j)$-box, together with the boxes in the $(j+1)$st row if that row exists. For example, the left diagram in Figure \ref{fig:bar} shows the shifted Young diagram of the partition $(7,6,3,2)$ with the shifted hook lengths. The shifted hook set $\mathcal{H}^*(\la)$ is the set of shifted hook lengths in $S(\la)$. A shifted Young diagram $S(\la)$ is called an \emph{$s$-core shifted Young diagram}, or $s$-CSYD for short, if none of the shifted hook lengths of $S(\la)$ are divisible by $s$. Sometimes we say that ``$\la$ is an $s$-CSYD'' instead of ``$S(\la)$ is an $s$-CSYD''. Given a strict partition $\la=(\la_1, \la_2, \ldots, \la_{\ell})$, the \emph{doubled distinct partition} of $\la$, denoted by $\la \la$, is a partition whose Young diagram $D(\la \la)$ is defined by adding $\la_i$ boxes to the $(i-1)$st column of $S(\la)$. In other words, the Frobenius symbol of the doubled distinct partition $\la\la$ is given by \[ \begin{pmatrix} \la_1 & \la_2 & \cdots &\la_{\ell}\\ \la_1 -1 & \la_2 -1 & \cdots & \la_{\ell} -1 \end{pmatrix}. \] The doubled distinct partition $\la\la$ is called a \emph{doubled distinct $s$-core} if none of the hook lengths are divisible by $s$. Note that the set of hook lengths of $D(\la\la)$ located strictly to the right of the main diagonal is the same as $\mathcal{H}^*(\la)$. Indeed, the hook lengths on the $(\ell+1)$st column of $D(\la\la)$ are the parts of $\la$, and the deletion of this column from $D(\la\la)$ gives a self-conjugate partition. See Figure \ref{fig:bar} for example. \begin{figure}[ht!]
{\small $S(\la)=~$\begin{ytableau} 13&10&9&7&6&3&2 \\ \none&9&8&6&5&2&1 \\ \none&\none&5&3&2 \\ \none&\none&\none&2&1 \\ \end{ytableau} \qquad \qquad $D(\la\la)=~$\begin{ytableau} *(gray!60)14&13&10&9&*(gray!20)7&6&3&2 \\ 13&*(gray!60)12&9&8&*(gray!20)6&5&2&1 \\ 10&9&*(gray!60)6&5&*(gray!20)3&2 \\ 9&8&5&*(gray!60)4&*(gray!20)2&1 \\ 6&5&2&1 \\ 3&2 \\ 2&1 \end{ytableau}} \caption{The shifted Young diagram $S(\la)$ with the shifted hook lengths and the doubled distinct partition $\la\la$ with the hook lengths for the strict partition $\la=(7,6,3,2)$.}\label{fig:bar} \end{figure} We extend the definition of simultaneous core partitions to bar-core partitions and CSYDs. We use the following notation for the various sets of core partitions: \begin{align*} \mathcal{SC}_{(s_1, s_2, \dots, s_p)} &: \text{~the set of self-conjugate $(s_1, s_2, \dots, s_p)$-cores},\\ \mathcal{BC}_{(s_1, s_2, \dots, s_p)} &: \text{~the set of $(\overline{s_1}, \overline{s_2},\dots, \overline{s_p})$-cores},\\ \mathcal{CS}_{(s_1, s_2, \dots, s_p)} &: \text{~the set of $(s_1, s_2, \dots, s_p)$-CSYDs},\\ \mathcal{DD}_{(s_1, s_2, \dots, s_p)} &: \text{~the set of doubled distinct $(s_1, s_2, \dots, s_p)$-cores}. \end{align*} There are a few results on counting simultaneous core partitions of the three objects: bar-cores, CSYDs, and doubled distinct cores. Bessenrodt and Olsson \cite{BO} adopted the Yin-Yang diagram to count the number of $(\ols{s\phantom{t}},\overline{t})$-core partitions for odd numbers $s$ and $t$, Wang and Yang \cite{WY} counted the same object when $s$ and $t$ have different parity, and Ding \cite{Ding} counted the number of $(s,s+1)$-CSYDs (as far as the authors know, these are the only counting results on the three objects to date). Our main goal is to complete the enumeration results on $(s,t)$-cores and $(s,s+d,s+2d)$-cores for the three objects by constructing suitable bijections.
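The relation between $\mathcal{H}^*(\la)$, $\mathcal{H}(\la\la)$, and the diagonal hook lengths $2\la_i$ visible in Figure \ref{fig:bar} can be checked mechanically. The following Python sketch (our own code; all names are ours) builds $S(\la)$ and $D(\la\la)$ as $0$-indexed cell sets, using the Frobenius-symbol description of $\la\la$, and verifies the example $\la=(7,6,3,2)$.

```python
def shifted_cells(la):
    """Cells (i, j) of S(la), 0-indexed: row i occupies columns i, ..., i + la[i] - 1."""
    return {(i, j) for i in range(len(la)) for j in range(i, i + la[i])}

def shifted_hooks(la):
    """Shifted hook length of each cell: boxes to its right (itself included),
    boxes below it in its column, and the boxes of row j + 1 (0-indexed)
    when that row exists."""
    ell, cells = len(la), shifted_cells(la)
    return {(i, j): (la[i] - (j - i))
                    + sum((r, j) in cells for r in range(i + 1, ell))
                    + (la[j + 1] if j + 1 < ell else 0)
            for (i, j) in cells}

def doubled_cells(la):
    """Cells of D(la la): the diagonal, S(la) pushed one column right, and the
    transpose of S(la); this matches the Frobenius symbol (la_i | la_i - 1)."""
    s = shifted_cells(la)
    return ({(i, i) for i in range(len(la))}
            | {(i, j + 1) for (i, j) in s}
            | {(j, i) for (i, j) in s})

def hooks(cells):
    """Ordinary hook lengths for an arbitrary set of Young-diagram cells."""
    return {(i, j): 1 + sum(r == i and c > j for (r, c) in cells)
                      + sum(c == j and r > i for (r, c) in cells)
            for (i, j) in cells}

la = (7, 6, 3, 2)
print([shifted_hooks(la)[(0, j)] for j in range(7)])             # [13, 10, 9, 7, 6, 3, 2]
print(set(hooks(doubled_cells(la)).values())
      == set(shifted_hooks(la).values()) | {2 * x for x in la})  # True
```

The first line reproduces the first row of $S(\la)$ in Figure \ref{fig:bar}; the second confirms that the hook set of $D(\la\la)$ is $\mathcal{H}^*(\la)$ together with the diagonal hooks $2\la_i$.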
Additionally, we employ a well-known class of objects, self-conjugate core partitions, to enumerate such core partitions. For instance, bar-core partitions and self-conjugate core partitions are related to each other: Yang \cite[Theorem 1.1]{Yang} constructed a bijection between the set of self-conjugate $s$-cores and that of $\overline{s}$-cores for odd $s$, and Gramain, Nath, and Sellers \cite[Theorem 4.12]{GNS} gave a bijection between self-conjugate $(s,t)$-core partitions and $(\ols{s\phantom{t}},\overline{t})$-core partitions, where $s$ and $t$ are coprime odd integers. The following theorems are the main results of this paper. \begin{thm}\label{thm:main1} For coprime positive integers $s$ and $t$, the number of doubled distinct $(s,t)$-core partitions is \[ |\mathcal{DD}_{(s,t)}|=\binom{\lfloor (s-1)/2 \rfloor + \lfloor (t-1)/2 \rfloor}{\lfloor (s-1)/2 \rfloor}, \] and the number of $(s,t)$-CSYDs is \[ |\mathcal{CS}_{(s,t)}|=\binom{\floor*{(s-1)/2} + \floor*{t/2} -1}{\floor*{(s-1)/2}} +\binom{\floor*{s/2} + \floor*{(t-1)/2}-1}{\floor*{(t-1)/2}}. \] \end{thm} \begin{thm}\label{thm:unifying} Let $s$ and $d$ be coprime positive integers. \begin{enumerate} \item[(a)] For odd $s$ and even $d$, \begin{align*} |\mathcal{BC}_{(s,s+d,s+2d)}|&=|\mathcal{CS}_{(s,s+d,s+2d)}|=|\mathcal{DD}_{(s,s+d,s+2d)}|\\ &=\sum_{i=0}^{(s-1)/2}\binom{(s+d-3)/2}{\lfloor i/2 \rfloor}\binom{(s+d-1)/2-\lfloor i/2 \rfloor}{(s-1)/2-i}. \end{align*} \item[(b)] For odd numbers $s$ and $d$, \begin{align*} &|\mathcal{BC}_{(s,s+d,s+2d)}|=|\mathcal{CS}_{(s,s+d,s+2d)}|\\ &~~=\sum_{i=0}^{(s-1)/2}\binom{(d-1)/2+i}{\lfloor i/2 \rfloor}\left( \binom{(s+d-2)/2}{(d-1)/2+i} + \binom{(s+d-4)/2}{(d-1)/2+i}\right).
\end{align*} \item[(c)] For even $s$ and odd $d$, \begin{align*} |\mathcal{BC}_{(s,s+d,s+2d)}|=&\sum_{i=0}^{s/2} \binom{(s+d-1)/2}{\lfloor i/2 \rfloor, \lfloor (d+i)/2\rfloor, s/2 -i}, \\ |\mathcal{CS}_{(s,s+d,s+2d)}|=&\sum_{i=0}^{(s-2)/2}\binom{(s+d-3)/2}{\lfloor i/2 \rfloor}\binom{(s+d-3)/2-\lfloor i/2 \rfloor}{(s-2)/2-i}\\ &+\sum_{i=0}^{(s-2)/2}\binom{(s+d-5)/2}{\lfloor i/2 \rfloor}\binom{(s+d-1)/2-\lfloor i/2 \rfloor}{(s-2)/2-i}. \end{align*} \item[(d)] For odd $d$, \[ |\mathcal{DD}_{(s,s+d,s+2d)}|=\sum_{i=0}^{ \lfloor(s-1)/2\rfloor} \binom{\lfloor (s+d-2)/2\rfloor }{\lfloor i/2 \rfloor, \lfloor (d+i)/2\rfloor, \lfloor(s-1)/2\rfloor -i}. \] \end{enumerate} \end{thm} This paper is organized as follows: In Section \ref{sec:2}, we establish useful propositions involving the three objects which are used frequently throughout this paper. Restricting these objects by the size of the partition, we obtain the generating functions of $\overline{s}$-cores and $s$-CSYDs for even $s$. Section \ref{sec:double} presents connections between sets of $NE$ lattice paths and the three objects under the $(s,t)$-core condition. We consider the Yin-Yang diagrams to find the number of doubled distinct $(s,t)$-core partitions and the number of $(s,t)$-CSYDs by constructing a bijection from each to a certain set of $NE$ lattice paths. In Section \ref{sec:triple}, we describe the relations between free Motzkin paths and the three objects under the condition of being $(s,s+d,s+2d)$-cores by using the $(\overline{s+d},d)$-abacus diagram, the $(\overline{s+d},d)$-abacus function, and their properties. From the bijections we set up, we count each type of $(s,s+d,s+2d)$-core partition by counting the corresponding free Motzkin paths. \section{Properties and generating functions}\label{sec:2} We begin this section by showing a property which follows directly from the definitions of the bar lengths and the shifted hook lengths.
\begin{lem}\label{lem:barhook} Let $\la = (\la_1, \la_2, \dots, \la_{\ell})$ be a strict partition. The set of bar lengths in the $i$th row of $\la$ is equal to the set of the shifted hook lengths in the $i$th row of $S(\la)$. \end{lem} \begin{proof} Let $\mu \coloneqq (\la_1 - \ell +1, \la_2 -\ell +2, \dots, \la_{\ell})$. By the definition of the shifted hook lengths, we have \[ h_{i,j}^*(\la)=\begin{cases} \la_i+\la_{j+1} & \text{ if }~ i \le j \le \ell-1,\\ h_{i, j-\ell+1}(\mu) & \text{ if }~ \ell \le j \le \la_i. \end{cases} \] We show that the statement is true for the first row. Assume, on the contrary, that $h_{1,j}^*(\la)=h_{1, j-\ell+1}(\mu)=\la_1-\la_k=h_{1,1}(\mu)-h_{k,1}(\mu)$ for some $k$. Then, by the definition of hook lengths, \[ \mu_1+\mu_{j-\ell+1}'-(j-\ell+1) = (\mu_1+\mu_1'-1)-(\mu_k+\mu_1' -k), \] which implies that $\mu_k+\mu_{j-\ell+1}'-(k+j-\ell)=h_{k, j-\ell+1}(\mu)=0$. Since the hook lengths are always nonzero, we get a contradiction. Similarly, this argument works for the $i$th row in general. \end{proof} \subsection{Characterizations} In the theory of core partitions, a partition $\la$ is an $s$-core if $s\notin \mathcal{H}(\la)$ or, equivalently, if $ms\notin\mathcal{H}(\la)$ for all $m$. In \cite[p. 31]{MY}, Morris and Yaseen gave a corollary that $\la$ is an $\overline{s}$-core if and only if none of the bar lengths in the rows of $\la$ are divisible by $s$. However, Olsson \cite[p. 27]{Olsson-book} pointed out that this corollary is not true when $s$ is even. In Figure \ref{fig:bar}, one can see that $\la=(7,6,3,2)$ is a $\overline{4}$-core partition, but $h^*_{2,3}(\la)=8$. Later, Wang and Yang \cite{WY} gave a characterization of $\overline{s}$-core partitions. \begin{prop}\cite{WY}\label{prop:bar} For a strict partition $\la=(\la_1,\la_2,\dots,\la_{\ell})$, $\la$ is an $\overline{s}$-core if and only if all the following hold: \begin{enumerate} \item[(a)] $s \notin \la$. 
\item[(b)] If $\la_i \in \la$ with $\la_i>s$, then $\la_i -s \in \la$. \item[(c)] If $\la_i, \la_j \in \la$, then $\la_i+\la_j \not\equiv 0 \pmod{s}$ except when $s$ is even and $\la_i,\la_j \equiv s/2 \pmod{s}$. \end{enumerate} \end{prop} We extend this characterization to doubled distinct $s$-core partitions and $s$-CSYDs. \begin{prop}\label{prop:dd} For a strict partition $\la=(\la_1,\la_2,\dots,\la_{\ell})$, $\la\la$ is a doubled distinct $s$-core partition if and only if all the following hold: \begin{enumerate} \item[(a)] $\la$ is an $\overline{s}$-core. \item[(b)] $s/2 \notin \la$ for even $s$. \end{enumerate} \end{prop} \begin{proof} It is known by Lemma \ref{lem:barhook} and the definition of $\la\la$ that $$\mathcal{H}(\la\la)=\mathcal{H}^*(\la) \cup \{h_{i,i}(\la\la)=2\la_i \mid i=1,2,\dots,\ell \}.$$ Therefore, for an $\overline{s}$-core partition $\la$ and even $s$, $s/2 \in \la$ if and only if $s \in \mathcal{H}(\la\la)$, meaning that $\la\la$ is not a doubled distinct $s$-core. \end{proof} \begin{prop}\label{prop:CSYD} For a strict partition $\la=(\la_1,\la_2,\dots,\la_{\ell})$, $S(\la)$ is an $s$-CSYD if and only if all the following hold: \begin{enumerate} \item[(a)] $\la$ is an $\overline{s}$-core. \item[(b)] $3s/2 \notin \la$ for even $s$. \end{enumerate} \end{prop} \begin{proof} Assume first that $S(\la)$ is an $s$-CSYD. By Lemma \ref{lem:barhook}, $\la$ is an $\overline{s}$-core. If $3s/2 \in \la$, then $s/2 \in \la$ by Proposition \ref{prop:bar} (b). This implies that there is a bar length of $2s$ in $\la$, which means that $S(\la)$ is not an $s$-CSYD. Conversely, suppose that two conditions (a) and (b) hold. If $\la$ is an $\overline{s}$-core but $S(\la)$ is not an $s$-CSYD, then there is a box $(i,j)$ in $S(\la)$ such that $h^*_{i,j}(\la)=sk$ for some $k\geq 2$. It follows from the definition of the bar lengths that there exist $\la_i,\la_j \in \la$ satisfying $\la_i+\la_j=sk$. 
Also, by Proposition~\ref{prop:bar}~(c), we deduce that $s$ is even and $\la_i,\la_j \equiv s/2 \pmod s$. Hence, when $\la_i > \la_j$, we can write $\la_i = (2m+1)s/2$ for some $m\geq 1$, and therefore $3s/2 \in \la$ by Proposition~\ref{prop:bar}~(b). It leads to a contradiction. \end{proof} \begin{rem} \label{rmk:oddoddodd} From the characterizations we observe that, for coprime odd integers $s_1,s_2,\dots,s_p$, we have \[ \mathcal{BC}_{(s_1, s_2, \dots, s_p)}=\mathcal{CS}_{(s_1, s_2, \dots, s_p)}=\mathcal{DD}_{(s_1, s_2, \dots, s_p)}. \] \end{rem} \subsection{Generating functions} In this subsection, we consider the generating functions of the following numbers, \begin{align*} sc_s(n) &: \text{~the number of self-conjugate $s$-core partitions of $n$},\\ bc_s(n) &: \text{~the number of $\overline{s}$-core partitions of $n$},\\ cs_s(n) &: \text{~the number of $s$-CSYDs of $n$},\\ dd_s(n) &: \text{~the number of doubled distinct $s$-core partitions of $n$}. \end{align*} Garvan, Kim, and Stanton \cite{GKS} obtained the generating functions of the numbers $sc_s(n)$ and $dd_s(n)$ by using the concept of the core and the quotient of a partition. As usual, we use the well-known $q$-product notation $$(a;q)_n=\prod\limits_{i=0}^{n-1}(1-aq^i) \quad \text{and} \quad (a;q)_{\infty}=\lim\limits_{n \to \infty} (a;q)_n \quad \text{for} ~ |q|<1.$$ \begin{prop}\cite[Equations (7.1a), (7.1b), (8.1a), and (8.1b)]{GKS}\label{prop:gf_GKS} For a positive integer $s$, we have \begin{align*} \sum_{n=0}^{\infty}sc_s(n)q^n&=\begin{dcases*} \frac{(-q;q^2)_\infty(q^{2s};q^{2s})^{(s-1)/2}_\infty}{(-q^s;q^{2s})_\infty} & \text{if $s$ is odd},\\ (-q;q^2)_\infty(q^{2s};q^{2s})^{s/2}_\infty & \text{if $s$ is even,} \end{dcases*}\\[2ex] \sum_{n=0}^{\infty}dd_s(n)q^n&=\begin{dcases*} \frac{(-q^2;q^2)_\infty(q^{2s};q^{2s})^{(s-1)/2}_\infty}{(-q^{2s};q^{2s})_\infty} & \text{if $s$ is odd},\\ \frac{(-q^2;q^2)_\infty(q^{2s};q^{2s})^{(s-2)/2}_\infty}{(-q^{s};q^{s})_\infty} & \text{if $s$ is even}. 
\end{dcases*} \end{align*} \end{prop} The generating function of the numbers $bc_s(n)$ for odd $s$ was found by Olsson \cite{Olsson-book}. Note that for odd $s$, it is clear that $bc_s(n)=cs_s(n)$ as a partition $\la$ is an $\overline{s}$-core if and only if it is an $s$-CSYD by Propositions \ref{prop:bar} and \ref{prop:CSYD}. \begin{prop}\cite[Proposition (9.9)]{Olsson-book} \label{prop:gf_O} For an odd integer $s$, we have \[ \sum_{n=0}^{\infty}bc_{s}(n)q^n=\sum_{n=0}^{\infty}cs_{s}(n)q^n=\frac{(-q;q)_\infty(q^{s};q^{s})^{(s-1)/2}_\infty}{(-q^s;q^s)_\infty}. \] \end{prop} From Propositions \ref{prop:gf_GKS} and \ref{prop:gf_O}, we also see that $dd_s(2n)=bc_{s}(n)$ when $s$ is odd. We now give generating functions of the numbers $bc_{s}(n)$ and $cs_s(n)$ for even $s$ by using Propositions \ref{prop:bar}, \ref{prop:dd}, and \ref{prop:CSYD}. \begin{prop}\label{prop:bargen} For an even integer $s$, we have \[ \sum_{n=0}^{\infty}bc_{s}(n)q^n=\frac{(-q;q)_\infty(q^{s};q^{s})^{(s-2)/2}_\infty}{(-q^{s/2};q^{s/2})_\infty}\sum_{n\geq 0} q^{sn^2/2}. \] \end{prop} \begin{proof} Let $s$ be a fixed even integer. From Propositions \ref{prop:bar} and \ref{prop:dd} we first see that the number of $\overline{s}$-core partitions $\la$ of $n$ for which $s/2\notin \la$ is equal to $dd_s(2n)$. We also notice that for a positive integer $i$, the number of $\overline{s}$-core partitions $\la$ of $n$ for which $(2i-1)s/2\in \la$ and $(2i+1)s/2\notin \la$ is equal to $dd_s(2n-i^2s)$ since $(2i-1)s/2\in \la$ implies $(2i-3)s/2, (2i-5)s/2, \dots, s/2 \in \la$ by Proposition \ref{prop:bar} (b). Therefore, we have \[ bc_s(n)=dd_s(2n)+dd_s(2n-s)+dd_s(2n-4s)+\cdots=\sum_{i\geq0} dd_s(2n-i^2s), \] which completes the proof from Proposition \ref{prop:gf_GKS}. \end{proof} \begin{prop} For an even integer $s$, we have \[ \sum_{n=0}^{\infty}cs_s(n)q^n=\frac{(-q;q)_\infty(q^{s};q^{s})^{(s-2)/2}_\infty}{(-q^s;q^{s/2})_\infty}. 
\] \end{prop} \begin{proof} Similar to the proof of Proposition \ref{prop:bargen}, $cs_s(n)=dd_s(2n)+dd_s(2n-s)$ for even $s$ by Propositions \ref{prop:dd} and \ref{prop:CSYD}. \end{proof} \section{Enumeration on $(s,t)$-cores} \label{sec:double} A \emph{north-east ($NE$) lattice path} from $(0,0)$ to $(s,t)$ is a lattice path which consists of steps $N=(0,1)$ and $E=(1,0)$. Let $\mathcal{NE}(s,t)$ denote the set of all $NE$ lattice paths from $(0,0)$ to $(s,t)$. In this section, we give $NE$ lattice path interpretations for $(\ols{s\phantom{t}},\overline{t})$-core related partitions and count such paths. Combining the results on self-conjugate $(s,t)$-core partitions and $(\ols{s\phantom{t}},\overline{t})$-core partitions, which were proved independently by Ford, Mai, and Sze \cite[Theorem 1]{FMS}, Bessenrodt and Olsson \cite[Theorem 3.2]{BO}, and Wang and Yang \cite[Theorem 1.3]{WY}, we get the following theorem. \begin{thm}\cite{FMS,BO,WY}\label{thm:selfbar} For coprime positive integers $s$ and $t$, \[ |\mathcal{BC}_{(s,t)}|=|\mathcal{SC}_{(s,t)}|=\binom{\lfloor s/2 \rfloor + \lfloor t/2 \rfloor}{\lfloor s/2 \rfloor}. \] \end{thm} Also, Ding \cite{Ding} examined the Hasse diagram of the poset structure of $(s,s+1)$-CSYDs to count them. \begin{thm}\cite[Theorem 3.5]{Ding}\label{thm:Ding} For any positive integer $s\geq 2$, \[ |\mathcal{CS}_{(s,s+1)}|=\binom{s-1}{\floor*{(s-1)/2}}+\binom{s-2}{\floor*{(s-1)/2}}. \] \end{thm} From now on, we count doubled distinct $(s,t)$-cores and $(s,t)$-CSYDs. When $s$ and $t$ are both odd, the numbers of such partitions are already known by Remark \ref{rmk:oddoddodd}. We focus on the case when $s$ is even and $t$ is odd.
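Theorem \ref{thm:main1} can be sanity-checked by brute force for small parameters, using the characterizations of Propositions \ref{prop:bar}, \ref{prop:dd}, and \ref{prop:CSYD}. The Python sketch below is our own code; it assumes (as holds in the small cases tested) that every part of an $(\ols{s\phantom{t}},\overline{t})$-core is at most $st$, and it applies condition (c) of Proposition \ref{prop:bar} to not necessarily distinct pairs of parts.

```python
from itertools import chain, combinations
from math import comb

def is_bar_core_set(parts, s):
    """Test the s-bar-core characterization on a set of parts of a strict partition:
    (a) s is not a part; (b) a part above s reduces by s; (c) no two (not
    necessarily distinct) parts sum to a multiple of s, apart from the stated
    exception for even s."""
    if s in parts:
        return False
    if any(p > s and p - s not in parts for p in parts):
        return False
    for p in parts:
        for q in parts:
            if (p + q) % s == 0 and not (s % 2 == 0 and p % s == s // 2 and q % s == s // 2):
                return False
    return True

def bar_cores(s, t, bound):
    """All (s-bar, t-bar)-core part sets with parts in {1, ..., bound}."""
    universe = range(1, bound + 1)
    subsets = chain.from_iterable(combinations(universe, r) for r in range(bound + 1))
    return [set(c) for c in subsets
            if is_bar_core_set(set(c), s) and is_bar_core_set(set(c), t)]

s, t = 4, 3                                   # even s, odd t, coprime
cores = bar_cores(s, t, s * t)
dd = [p for p in cores if s // 2 not in p]     # doubled distinct condition (even s)
cs = [p for p in cores if 3 * s // 2 not in p] # s-CSYD condition (even s)
print(len(dd), comb((s - 2) // 2 + (t - 1) // 2, (s - 2) // 2))                # 2 2
print(len(cs), comb((s - 2) // 2 + t // 2 - 1, (s - 2) // 2)
             + comb(s // 2 + (t - 1) // 2 - 1, (t - 1) // 2))                  # 3 3
```

The brute-force counts agree with the binomial formulas of Theorem \ref{thm:main1} in this case, and $|\mathcal{BC}_{(4,3)}|=3$ likewise matches Theorem \ref{thm:selfbar}.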
For $(\ols{s\phantom{t}},\overline{t})$-cores with coprime odd integers $s$ and $t$ such that $1<s<t$, Bessenrodt and Olsson \cite{BO} defined the Yin-Yang diagram as an array $A(s,t)=\{A_{i,j}\}$, where \[ A_{i,j}\coloneqq-\frac{s+1}{2}t+js+it \qquad \text{ for } 1 \le i \le \frac{s-1}{2} \text{ and } 1 \le j \le \frac{t-1}{2}. \] The location of $A_{i,j}$ is at the intersection of the $i$th row from the top and the $j$th column from the left. For fixed $s$ and $t$, they showed that the set of all possible parts of $(\ols{s\phantom{t}},\overline{t})$-core partitions is equal to the set of absolute values of the entries $A_{i,j}$ of $A(s,t)$. They also gave a bijection $\phi$ between $\mathcal{BC}_{(s,t)}$ and the set $\mathcal{NE}((t-1)/2, (s-1)/2)$, viewed as paths in the Yin-Yang diagram from the lower-left corner to the upper-right corner. For an $NE$ lattice path $P$ in the Yin-Yang diagram $A(s,t)$, let $M(P)$ denote the set consisting of positive entries above $P$ and the absolute values of negative entries below $P$. According to the bijection $\phi$, if $\la$ is an $(\ols{s\phantom{t}},\overline{t})$-core partition and $P=\phi(\la)$ is the corresponding path in $A(s,t)$, then $M(P)$ is equal to the set of parts in $\la$. For $(\ols{s\phantom{t}},\overline{t})$-cores with coprime even $s$ and odd $t$, Wang and Yang \cite{WY} defined the Yin-Yang diagram to be an array $B(s,t)$, where \[ B_{i,j}\coloneqq-\frac{s+2}{2}t+js+it \qquad \text{ for } 1 \le i \le \frac{s}{2} \text{ and } 1 \le j \le \frac{t-1}{2}, \] and gave a bijection $\psi$ between the sets $\mathcal{BC}_{(s,t)}$ and $\mathcal{NE}((t-1)/2, s/2)$, viewed as paths in $B(s,t)$ from the lower-left corner to the upper-right corner. Again, the map $\psi$ sends an $(\ols{s\phantom{t}},\overline{t})$-core $\la$ to the path $Q=\psi(\la)$ in $B(s,t)$, where $M(Q)$ is equal to the set of parts in $\la$. See Figure \ref{fig:YinYang} for example. \begin{figure}[ht!]
\centering \begin{tikzpicture}[scale=.5] \node at (0,0){ \begin{tabular}{ c c c c c c } -43 & -34 & -25 & -16 & -7 & 2\\ -30 & -21 & -12 & -3 & 6 & 15\\ -17 & -8 & 1 & 10 & 19 & 28\\ -4 & 5 & 14 & 23 & 32 & 41 \end{tabular}}; \node at (0,-3) {$A(9,13)$}; \end{tikzpicture} \qquad \quad \begin{tikzpicture}[scale=.5] \filldraw[color=gray!40] (-5.3,-2) rectangle (-3.5, -1) (-1.7,0) rectangle (1.9, 1) (3.7,1) rectangle (5.5, 2) ; \foreach \i in {0,1,2,3,4} \draw[dotted] (-5.3,-2+\i)--(5.5,-2+\i); \foreach \i in {0,1,2,3,4,5,6} \draw[dotted] (-5.3+1.8*\i,-2)--(-5.3+1.8*\i,2); \draw[thick] (-5.3,-2)--(-5.3,-1)--(-1.7,-1)--(-1.7,1)--(5.5,1)--(5.5,2); \node at (0,0){ \begin{tabular}{ c c c c c c } -43 & -34 & -25 & -16 & -7 & 2\\ -30 & -21 & -12 & -3 & 6 & 15\\ -17 & -8 & 1 & 10 & 19 & 28\\ -4 & 5 & 14 & 23 & 32 & 41 \end{tabular}}; \node at (0,-3) {$P=NEENNEEEEN$}; \end{tikzpicture}\\[2ex] \begin{tikzpicture}[scale=.5] \node at (0,0){ \begin{tabular}{ c c c c c c c} -44 & -36 & -28 & -20 & -12 & -4 \\ -31 & -23 & -15 & -7 & 1 & 9 \\ -18 & -10 & -2 & 6 & 14 & 22\\ -5 & 3 & 11 & 19 & 27 & 35 \end{tabular}}; \node at (0,-3) {$B(8,13)$}; \end{tikzpicture} \qquad \quad \begin{tikzpicture}[scale=.5] \filldraw[color=gray!40] (-5.3,-2) rectangle (-3.5, -1) (-1.7,-1) rectangle (0.1,0) (-1.7,0) rectangle (1.9, 1) ; \foreach \i in {0,1,2,3,4} \draw[dotted] (-5.3,-2+\i)--(5.5,-2+\i); \foreach \i in {0,1,2,3,4,5,6} \draw[dotted] (-5.3+1.8*\i,-2)--(-5.3+1.8*\i,2); \draw[thick] (-5.3,-2)--(-5.3,-1)--(-1.7,-1)--(-1.7,1)--(5.5,1)--(5.5,2); \node at (0,0){ \begin{tabular}{ c c c c c c c} -44 & -36 & -28 & -20 & -12 & -4 \\ -31 & -23 & -15 & -7 & 1 & 9 \\ -18 & -10 & -2 & 6 & 14 & 22\\ -5 & 3 & 11 & 19 & 27 & 35 \end{tabular}}; \node at (0,-3) {$Q=NEENNEEEEN$}; \end{tikzpicture} \caption{The Yin-Yang diagrams $A(9,13)$ and $B(8,13)$, and the paths $P=\phi((12,4,3,2))$ and $Q=\psi((15,7,5,2))$.}\label{fig:YinYang} \end{figure} Now we give path interpretations for doubled distinct $(s,t)$-cores and
$(s,t)$-CSYDs for even $s$ and odd $t$ by using this Yin-Yang diagram $B(s,t)$ together with Propositions~\ref{prop:dd} and \ref{prop:CSYD}. \begin{thm}\label{thm:dd2} For even $s$ and odd $t$ that are coprime, there is a bijection between the sets $\mathcal{DD}_{(s,t)}$ and $\mathcal{NE}((t-1)/2,(s-2)/2)$. In addition, \[ |\mathcal{DD}_{(s,t)}|=\binom{(s-2)/2 + (t-1)/2}{(s-2)/2}. \] \end{thm} \begin{proof} Recall the bijection $\psi$ between the sets $\mathcal{BC}_{(s,t)}$ and $\mathcal{NE}((t-1)/2, s/2)$ in the Yin-Yang diagram $B(s,t)$ from the lower-left corner to the upper-right corner. To find the desired bijection, we restrict the domain of $\psi$ to the set $\mathcal{DD}_{(s,t)}$. By Proposition~\ref{prop:dd}~(b) and the fact that $B_{1,(t-1)/2}=-s/2$, we see that $Q=\psi(\la)$ corresponds to a partition $\la$ such that $\la\la$ is a doubled distinct $(s,t)$-core if and only if $Q$ is a path in the set $\mathcal{NE}((t-1)/2, s/2)$ in the Yin-Yang diagram $B(s,t)$ that ends with a north step $N$, or equivalently, a path in $\mathcal{NE}((t-1)/2, (s-2)/2)$. Hence, the number of doubled distinct $(s,t)$-core partitions is given by $|\mathcal{NE}((t-1)/2, (s-2)/2)|$. \end{proof} \begin{thm}\label{thm:CSYD2} For even $s$ and odd $t$ that are coprime, there is a bijection between the sets $\mathcal{CS}_{(s,t)}$ and \[ \mathcal{NE}((t-1)/2,(s-2)/2)\cup \mathcal{NE}( (t-3)/2,(s-2)/2). \] In addition, \[ |\mathcal{CS}_{(s,t)}|=\binom{(s-2)/2 + (t-1)/2}{(s-2)/2}+\binom{(s-2)/2 + (t-3)/2}{(s-2)/2}. \] \end{thm} \begin{proof} It follows from Propositions~\ref{prop:bar} and \ref{prop:CSYD} that $\la$ is an $(s,t)$-CSYD if and only if $\la$ is an $(\ols{s\phantom{t}},\overline{t})$-core partition and $3s/2 \notin \la$. We first note that $\la\la$ is a doubled distinct $(s,t)$-core partition if and only if $\la$ is an $(s,t)$-CSYD and $s/2 \notin \la$.
Indeed, there is a bijection between the set of $(s,t)$-CSYDs $\la$ with $s/2 \notin \la$ and the set $\mathcal{NE}((t-1)/2, (s-2)/2)$ by Theorem~\ref{thm:dd2}. Therefore, it is sufficient to show that there is a bijection between the set of $(s,t)$-CSYDs $\la$ with $s/2 \in \la$ and the set $\mathcal{NE}((t-3)/2,(s-2)/2)$. Note that for an $(s,t)$-CSYD $\la$ such that $s/2 \in \la$, $Q=\psi(\la)$ is a path in the set $\mathcal{NE}((t-1)/2, s/2)$ in the Yin-Yang diagram $B(s,t)$ that must end with an east step preceded by a north step since $B_{1,(t-1)/2}=-s/2$ and $B_{1,(t-3)/2}=-3s/2$. Then, we get a bijection between the set of $(s,t)$-CSYDs $\la$ with $s/2 \in \la$ and the set $\mathcal{NE}((t-3)/2,(s-2)/2)$. Moreover, the number of $(s,t)$-CSYDs is obtained by counting the corresponding lattice paths. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main1}] It follows from Remark \ref{rmk:oddoddodd} and Theorems \ref{thm:selfbar}, \ref{thm:dd2}, and \ref{thm:CSYD2}. \end{proof} \section{Results on $(s,s+d,s+2d)$-cores}\label{sec:triple} A path $P$ is called a \emph{free Motzkin path of type $(s,t)$} if it is a path from $(0,0)$ to $(s,t)$ which consists of steps $U=(1,1)$, $F=(1,0)$, and $D=(1,-1)$. Let $\mathcal{F}(s,t)$ be the set of free Motzkin paths of type $(s,t)$. For given sets $A,B$ of sequences of steps, we denote by $\mathcal{F}(s,t \,;\, A,B)$ the set of free Motzkin paths $P$ of type $(s,t)$ such that $P$ does not start with any sequence in $A$ and does not end with any sequence in $B$. Recently, Cho and Huh \cite[Theorem 8]{ChoHuh} and Yan, Yan, and Zhou \cite[Theorems 1.1 and 1.2]{YYZ2} found a free Motzkin path interpretation of self-conjugate $(s,s+d,s+2d)$-core partitions and enumerated them independently.
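Free Motzkin paths with forbidden ending steps are easy to enumerate exhaustively. The following Python sketch (our own code; all names are ours) generates the paths of type $(n,k)$ as words in $\{U,F,D\}$ and compares $|\mathcal{F}((s+d)/2, -(d+1)/2\,;\,\emptyset,\{U\})|$ with the closed-form sum for odd $d$ quoted below from \cite{ChoHuh,YYZ2}, for a few small odd $s$ and $d$.

```python
from itertools import product
from math import factorial

STEP = {"U": 1, "F": 0, "D": -1}

def free_motzkin(n, k, banned_end=()):
    """All free Motzkin paths of type (n, k): words of length n in {U, F, D}
    with total height change k, discarding paths ending with a banned step."""
    return ["".join(w) for w in product("UFD", repeat=n)
            if sum(STEP[c] for c in w) == k and (not w or w[-1] not in banned_end)]

def multinomial(n, *parts):
    """Multinomial coefficient n! / (parts[0]! * parts[1]! * ...)."""
    if any(p < 0 for p in parts) or sum(parts) != n:
        return 0
    out = factorial(n)
    for p in parts:
        out //= factorial(p)
    return out

# |F((s+d)/2, -(d+1)/2 ; {}, {U})| against the closed form for odd s and d
for s, d in [(3, 1), (5, 1), (3, 5)]:
    count = len(free_motzkin((s + d) // 2, -((d + 1) // 2), banned_end="U"))
    closed = sum(multinomial((s + d - 1) // 2, i // 2, (d + i) // 2, s // 2 - i)
                 for i in range(s // 2 + 1))
    print((s, d), count, closed)   # (3, 1) 2 2, (5, 1) 5 5, (3, 5) 4 4
```

In each tested case the exhaustive path count matches the multinomial sum, as the bijection predicts.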
\begin{thm}\cite{ChoHuh,YYZ2} For coprime positive integers $s$ and $d$, there is a bijection between the sets $\mathcal{SC}_{(s,s+d,s+2d)}$ and \begin{enumerate} \item[(a)] $\mathcal{F}\left((s+d-1)/2,-d/2\right)$ if $s$ is odd and $d$ is even; \item[(b)] $\mathcal{F}\left((s+d)/2,-(d+1)/2 \,;\, \emptyset,\{U\}\right)$ if $s$ is odd and $d$ is odd; \item[(c)] $\mathcal{F}\left((s+d+1)/2,-(d+1)/2 \,;\, \emptyset,\{U\}\right)$ if $s$ is even and $d$ is odd. \end{enumerate} In addition, the number of self-conjugate $(s,s+d,s+2d)$-core partitions is \[ \displaystyle |\mathcal{SC}_{(s,s+d,s+2d)}|= \begin{cases} &\displaystyle\sum_{i=0}^{\lfloor s/4 \rfloor} \binom{(s+d-1)/2 }{i, d/2+i, (s-1)/2-2i} \qquad \text{if $d$ is even,}\\ &\\ &\displaystyle\sum_{i=0}^{\lfloor s/2\rfloor} \binom{\lfloor (s+d-1)/2 \rfloor}{\lfloor i/2 \rfloor, \lfloor (d+i)/2\rfloor, \lfloor s/2 \rfloor -i} \quad \text{if $d$ is odd.} \end{cases} \] \end{thm} Similar to the construction in \cite{ChoHuh}, we give an abacus construction and a path interpretation for each set of $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partitions, doubled distinct $(s,s+d,s+2d)$-core partitions, and $(s,s+d,s+2d)$-CSYDs.\\ \subsection{$(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partitions}\label{sec:bar} For coprime positive integers $s$ and $d$, let the \emph{$(\overline{s+d},d)$-abacus diagram} be a diagram with infinitely many rows labeled by $i \in \mathbb{Z}$ and $\floor*{(s+d+2)/2}$ columns labeled by $j \in \{0,1,\dots,\floor*{(s+d)/2}\}$ from bottom to top and left to right whose position $(i,j)$ is labeled by $(s+d)i+dj$. The following proposition guarantees that, for each positive integer $h$, there is at least one position on the $(\overline{s+d},d)$-abacus diagram labeled by either $h$ or $-h$. \begin{prop} \label{prop:injection} Let $s$ and $d$ be coprime positive integers and $h$ be a positive integer. 
For a given $(\overline{s+d},d)$-abacus diagram, we get the following properties. \begin{itemize} \item[(a)] If $h\not\equiv 0, (s+d)/2 \pmod{s+d}$, then there exists a unique position labeled by $h$ or $-h$. \item[(b)] If $h\equiv 0 \pmod{s+d}$, then there are two positions labeled by $h$ and $-h$, respectively, in the first column. \item[(c)] If $s+d$ is even and $h\equiv (s+d)/2 \pmod{s+d}$, then there are two positions labeled by $h$ and $-h$, respectively, in the last column. \end{itemize} \end{prop} \begin{proof} In the $(\overline{s+d},d)$-abacus diagram, the absolute values of the labels in column $j$ are congruent to $dj$ or $-dj$ modulo $s+d$. We claim that the values $dj$ and $-dj$ for $j\in\{0,1,\dots, \floor*{(s+d)/2}\}$ are all incongruent modulo $s+d$ except when $j=0$ or $j=(s+d)/2$. For $0 \leq j_1 < j_2\leq \floor*{(s+d)/2}$, it is clear that $dj_1$ and $dj_2$ are incongruent modulo $s+d$. Suppose that $dj_1 \equiv -dj_2 \pmod{s+d}$ for some $0 \leq j_1,j_2\leq \floor*{(s+d)/2}$. Then $d(j_1+j_2)$ is a multiple of $s+d$. Since $s$ and $d$ are coprime, so are $d$ and $s+d$, and hence $d(j_1+j_2)$ is not a multiple of $s+d$ except when $j_1=j_2=0$ or $j_1=j_2=(s+d)/2$; the latter case occurs only when both $s$ and $d$ are odd. This completes the proof of the claim. The claim implies that, for every positive integer $h$, there exists $j\in\{0,1,\dots, \floor*{(s+d)/2}\}$ such that $h$ is congruent to $dj$ or $-dj$ modulo $s+d$. In addition, if $h\not\equiv 0, (s+d)/2 \pmod{s+d}$, then there exists a unique position labeled by $h$ or $-h$ in the $(\overline{s+d},d)$-abacus diagram, which shows statement (a). Statements (b) and (c) follow immediately. \end{proof} For a strict partition $\la=(\la_1,\la_2,\dots)$, the \emph{$(\overline{s+d},d)$-abacus of $\la$} is obtained from the $(\overline{s+d},d)$-abacus diagram by placing a bead on the position labeled by $\la_i$ if it exists; otherwise, we place a bead on the position labeled by $-\la_i$. A position without a bead is called a \emph{spacer}.
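The case analysis of Proposition \ref{prop:injection} can be verified mechanically on small examples. The Python sketch below is our own code; the truncation \texttt{row\_range} of the infinite diagram is our choice and is large enough for the values of $h$ tested.

```python
def label_positions(s, d, h, row_range=60):
    """Positions (i, j) of the (s+d, d)-abacus diagram whose label
    (s+d)*i + d*j equals h or -h; columns are j = 0, ..., floor((s+d)/2)."""
    cols = (s + d) // 2 + 1
    return [(i, j) for i in range(-row_range, row_range + 1)
            for j in range(cols) if abs((s + d) * i + d * j) == h]

# s + d = 7 odd, (s, d) = (3, 4): a unique position for every h that is not a
# multiple of s + d, and two positions (in column 0) when it is
for h in range(1, 30):
    assert len(label_positions(3, 4, h)) == (2 if h % 7 == 0 else 1)

# s + d = 8 even, (s, d) = (3, 5): two positions when h is congruent to 0 or
# (s+d)/2 modulo s+d, landing in the first and last columns, respectively
print(label_positions(3, 5, 8))   # [(-1, 0), (1, 0)]
print(label_positions(3, 5, 4))   # [(-3, 4), (-2, 4)]
```

The asserted loop checks statement (a) and (b) for $s+d=7$, and the printed positions illustrate statements (b) and (c) for $s+d=8$.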
See Figure \ref{fig:abacus_bar} for example. We use this $(\overline{s+d},d)$-abacus when we deal with $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partitions. For the $(\overline{s+d},d)$-abacus of an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition $\la$, let $r(j)$ denote the row number such that position $(r(j),j)$ is labeled by a positive integer while position $(r(j)-1,j)$ is labeled by a non-positive integer. The arrangement of beads on the diagram can be determined by the following rules. \begin{lem}\label{lem:beads} Let $\la$ be a strict partition. For coprime positive integers $s$ and $d$, if $\la$ is an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core, then the $(\overline{s+d},d)$-abacus of $\la$ satisfies the following. \begin{enumerate} \item[(a)] If a bead is placed on position $(i,j)$ such that $i> r(j)$, then a bead is also placed on each of positions $(i-1,j), (i-2,j), \dots, (r(j),j)$. \item[(b)] If a bead is placed on position $(i,j)$ such that $i< r(j)-1$, then a bead is also placed on each of positions $(i+1,j), (i+2,j), \dots, (r(j)-1,j)$. \item[(c)] For each $j$, at most one bead is placed on positions $(r(j),j)$ or $(r(j)-1,j)$. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item[(a)] The fact that a bead is placed on position $(i,j)$ with $i>r(j)$ implies that $(s+d)i+dj$ is a part in $\la$. Since $\la$ is an $(\overline{s+d})$-core, it follows from Proposition~\ref{prop:bar}~(b) that $(s+d)(i-1)+dj$ is a part in $\la$. In a similar way, we also have $(s+d)(i-2)+dj, \dots, (s+d)r(j)+dj \in \la$ so that a bead is placed on each of positions $(i-1,j), (i-2,j), \dots, (r(j),j)$. \item[(b)] If a bead is placed on position $(i,j)$ with $i<r(j)-1$, then $-(s+d)i-dj$ is a part in $\la$. Again, it follows from Proposition~\ref{prop:bar}~(b) that $-(s+d)(i+1)-dj$ is a part in $\la$ and so are $-(s+d)(i+2)-dj, \dots, -(s+d)(r(j)-1)-dj \in \la$. 
Thus, we place a bead on each of positions $(i+1,j), (i+2,j), \dots, (r(j)-1,j)$. \item[(c)] Suppose that beads are placed on both positions $(r(j),j)$ and $(r(j)-1,j)$ labeled by $(s+d)r(j)+dj$ and $(s+d)(r(j)-1)+dj$, respectively. One can notice that $(s+d)(r(j)-1)+dj$ is a non-positive integer and the sum of the absolute values of $(s+d)r(j)+dj$ and $(s+d)(r(j)-1)+dj$ is $s+d$, which contradicts Proposition~\ref{prop:bar}~(c). In particular, if one of them is labeled by $(s+d)/2$, then the other must be labeled by $-(s+d)/2$, which also contradicts the definition of the $(\overline{s+d},d)$-abacus. \end{enumerate} \end{proof} For an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition $\la$, in order to explain the properties of the $(\overline{s+d},d)$-abacus of $\la$ more simply, we define the \emph{$(\overline{s+d},d)$-abacus function of $\la$} \[ f:\{0,1,\dots,\lfloor (s+d)/2 \rfloor\}\rightarrow \mathbb{Z} \] as follows: For each $j \in \{0,1,\dots,\lfloor (s+d)/2 \rfloor\}$, if there is a bead labeled by a positive integer in column $j$, let $f(j)$ be the largest row number $i$ such that a bead is placed on position $(i,j)$. Otherwise, let $f(j)$ be the largest row number $i$ such that position $(i,j)$ is a spacer labeled by a non-positive integer. The following propositions give some basic properties of the $(\overline{s+d},d)$-abacus function of an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition. \begin{prop}\label{prop:f_initial} Let $s$ and $d$ be coprime positive integers. If $\la$ is an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition, then the $(\overline{s+d},d)$-abacus function $f$ of $\la$ satisfies the following. \begin{enumerate} \item[(a)] $f(0)=0$ and $f(1)=0$ or $-1$. \item[(b)] $f(j-1)$ is equal to one of the three values $f(j)-1$, $f(j)$, and $f(j)+1$ for $j=1,2,\dots, \lfloor(s+d)/2\rfloor$.
\end{enumerate} \end{prop} \begin{proof} We consider the $(\overline{s+d},d)$-abacus of $\la$. \begin{enumerate} \item[(a)] Since positions $(0,0)$ and $(1,0)$ are labeled by $0$ and $s+d$, respectively, there is no bead in column $0$. Hence, $f(0)=0$. Similarly, since positions $(-1,1)$, $(0,1)$, and $(1,1)$ are labeled by $-s$, $d$, and $s+2d$, respectively, the only position in column $1$ that can have a bead is $(0,1)$. Hence, $f(1)=0$ or $-1$. \item[(b)] For a fixed $j$, let $f(j)=i$. Suppose that a bead is placed on position $(i,j)$ which is labeled by a positive integer. If position $(i-1,j-1)$ is labeled by a positive integer, then a bead is placed on this position by Proposition~\ref{prop:bar}~(b). Otherwise, position $(i-1,j-1)$ is a spacer by Proposition~\ref{prop:bar}~(c). In either case, it follows from the definition of $f$ that $f(j-1)\geq f(j)-1$. Additionally, since position $(i+1,j)$ is a spacer, position $(i+2,j-1)$ is a spacer by Proposition~\ref{prop:bar}~(b). Hence, $f(j-1)\leq f(j)+1$. Next, suppose that position $(i,j)$ is a spacer which is labeled by a negative integer. Since position $(i-1,j-1)$ is labeled by a negative integer, it is a spacer, so $f(j-1)\geq f(j)-1$. We now assume that $f(j-1)\geq i+2$. If position $(i+2,j-1)$ is labeled by a positive integer, then a bead is placed on this position by Lemma~\ref{lem:beads}~(a). In this case, position $(i+1,j)$ either has a bead labeled by a positive integer or is a spacer labeled by a negative integer by Proposition~\ref{prop:bar}~(b) and (c), which contradicts $f(j)=i$. Otherwise, if position $(i+2,j-1)$ is labeled by a negative integer, then it is a spacer. Therefore, position $(i+1,j)$ is a spacer by Proposition~\ref{prop:bar}~(b), which also contradicts $f(j)=i$. Hence, $f(j-1)\leq f(j)+1$. \end{enumerate} \end{proof} \begin{prop}\label{prop:barf} Let $s$ and $d$ be coprime positive integers.
For an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition $\la$, the $(\overline{s+d},d)$-abacus function $f$ of $\la$ satisfies the following. \begin{enumerate} \item [(a)] If $s$ is odd and $d$ is even, then $f(\frac{s+d-1}{2})\in \{-\frac{d+2}{2}, -\frac{d}{2}\}$. \item [(b)] If $s$ and $d$ are both odd, then $f(\frac{s+d}{2}) \in \{-\frac{d+1}{2},-\frac{d-1}{2}\}$. In addition, $f(\frac{s+d-2}{2})=-\frac{d+1}{2}$ when $f(\frac{s+d}{2})=-\frac{d-1}{2}$. \item [(c)] If $s$ is even and $d$ is odd, then $f(\frac{s+d-1}{2})\in \{-\frac{d+3}{2}, -\frac{d+1}{2}, -\frac{d-1}{2}\}$. \end{enumerate} \end{prop} \begin{proof} Let position $(a,b)$ denote position $(-\lfloor d/2 \rfloor,\lfloor (s+d)/2 \rfloor)$. \begin{enumerate} \item [(a)] Positions $(a-1,b)$, $(a,b)$, and $(a+1,b)$ are labeled by $-s-3d/2$, $-d/2$, and $s+d/2$, respectively. First we show that $s+d/2$ and $s+3d/2$ are not parts of $\la$. If $s+d/2 \in \la$, then $d/2\in\la$ by Proposition \ref{prop:bar} (b), which contradicts Proposition \ref{prop:bar} (c) since $(s+d/2)+d/2=s+d$. One can similarly show that $s+3d/2 \notin \la$. Hence, the only position in column $b$ that can have a bead is $(a,b)$. Thus, $f(b)=a-1$ or $a$. \item [(b)] Positions $(a-1,b)$, $(a,b)$, and $(a+1,b)$ are labeled by $-(s+d)/2$, $(s+d)/2$, and $(3s+3d)/2$, respectively. We first claim that there is no bead on position $(a+1,b)$. If $(3s+3d)/2 \in \la$, then $(s+d)/2,(s+3d)/2 \in \la$ by Proposition \ref{prop:bar} (b), which contradicts Proposition \ref{prop:bar} (c) since $(s+d)/2 + (s+3d)/2 = s+2d$. This completes the proof of the claim. Therefore, $f(b)=a$ when $(s+d)/2 \in \la$ and $f(b)=a-1$ otherwise. Furthermore, we show that $f(b-1)=a-1$ when $f(b)=a$. Consider positions $(a-1,b-1)$ and $(a,b-1)$, which are labeled by $-(s+3d)/2$ and $(s-d)/2$, respectively. Position $(a-1,b-1)$ is a spacer by Proposition \ref{prop:bar} (c) since $(s+3d)/2+(s+d)/2=s+2d$.
When $s>d$, position $(a,b-1)$ is also a spacer by Proposition \ref{prop:bar} (c) since $(s-d)/2+(s+d)/2=s$. Otherwise, $(s-d)/2$ is negative and a bead is placed on position $(a,b-1)$ since $(d-s)/2=(s+d)/2-s$. In either case, we conclude that $f(b-1)=a-1$. \item [(c)] Positions $(a-2,b)$, $(a-1,b)$, $(a,b)$, and $(a+1,b)$ are labeled by $-(3s+4d)/2$, $-(s+2d)/2$, $s/2$, and $(3s+2d)/2$, respectively. If $(3s+2d)/2 \in \la$, then $s/2, (s+2d)/2\in\la$ by Proposition \ref{prop:bar} (b), which contradicts Proposition \ref{prop:bar} (c). Thus, $(3s+2d)/2 \notin \la$ and $f(b)<a+1$. Similarly, $(3s+4d)/2 \notin \la$, which implies $f(b)\geq a-2$. \end{enumerate} \end{proof} For coprime positive integers $s$ and $d$, it is obvious that the map from the set of $(\ols{s\phantom{d}}, \overline{s+d}, \overline{s+2d})$-core partitions to the set of functions satisfying the conditions in Propositions \ref{prop:f_initial} and \ref{prop:barf} is well-defined and injective. The following proposition shows that this map is surjective. \begin{prop}\label{prop:barinv} For coprime positive integers $s$ and $d$, let $f$ be a function that satisfies the conditions in Propositions \ref{prop:f_initial} and \ref{prop:barf}. If $\la$ is a strict partition such that $f$ is the $(\overline{s+d},d)$-abacus function of $\la$, then $\la$ is an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition. \end{prop} \begin{proof} We show that $\la$ satisfies the conditions in Proposition \ref{prop:bar} (a), (b), and (c). \begin{enumerate} \item [(a)] It follows from Proposition \ref{prop:f_initial} (a) that $s,s+d,s+2d \notin \la$. \item [(b)] Assume that $h$ is a part in $\la$. If $h > s+d$, then $h - (s+d) \in \la$ by Lemma \ref{lem:beads}. Consider the $(\overline{s+d},d)$-abacus diagram and suppose to the contrary that $h > s$ but $h - s \notin \la$. Let $(i,j)$ be the position having a bead whose label $a$ satisfies $|a|=h$.
If $a>0$, then Proposition \ref{prop:barf} gives either $j<\floor*{(s+d)/2}$, or $h=(s+d)/2$ with $s<d$ where both $s$ and $d$ are odd. First, assume that $j<\floor*{(s+d)/2}$. Then, position $(i-1,j+1)$ is a spacer labeled by $h-s$, which implies $f(j)\geq i$ and $f(j+1)<i-1$, contradicting Proposition \ref{prop:f_initial} (b). Now, for odd numbers $s$ and $d$, let $h=(s+d)/2$ with $s<d$. Then, we have a bead on position $(-(d-1)/2,(s+d-2)/2)$ labeled by $(s-d)/2$ by Proposition \ref{prop:barf} (b), which gives a contradiction. If $a<0$, then position $(i+1,j-1)$ labeled by $-h+s$ is a spacer. This implies that $f(j-1) \geq i+1$ and $f(j) < i$, which contradicts Proposition \ref{prop:f_initial} (b). By a similar argument, one can show that $h > s+2d$ implies $h - (s+2d) \in \la$. \item [(c)] By Lemma \ref{lem:beads} (c) and the construction of $f$, it is sufficient to show that there are no $h_1,h_2 \in \la$ such that $h_1 \neq h_2$ and $h_1 + h_2 \in \{s,s+2d\}$. Assume that there exist $h_1,h_2 \in \la$ satisfying $h_1 + h_2 =s$. If $h_1, h_2\neq (s+d)/2$, then there are positions $(i,j)$ and $(i-1,j+1)$ that are labeled by $h_1$ and $-h_2$, respectively. In this case, we get $f(j)\geq i$ and $f(j+1) < i-1$, which contradicts Proposition \ref{prop:f_initial} (b). If $h_2=(s+d)/2$ (so both $s$ and $d$ are odd), then positions $(i,(s+d-2)/2)$ and $(i,(s+d)/2)$ are labeled by $h_1$ and $h_2$, respectively, and we get a contradiction to Proposition \ref{prop:barf} (b). A similar argument works for the case $h_1+h_2 = s+2d$. \end{enumerate} \end{proof} \begin{figure}[ht!]
\centering \begin{tikzpicture}[scale=.3] \small \draw[color=gray!70] (0,0)--(2,0)--(4,0)--(6,-1.2)--(8,-2.4)--(10,-3.6)--(12,-2.4); \filldraw[color=gray!70] (0,0) circle (2pt) (2,0) circle (2pt) (4,0) circle (2pt) (6,-1.2) circle (2pt) (8,-2.4) circle (2pt) (10,-3.6) circle (2pt) (12,-2.4) circle (2pt) ; \filldraw[color=gray!40] (2,0) circle (18pt); \filldraw[color=gray!40] (4,0) circle (18pt); \filldraw[color=gray!40] (6,-1.2) circle (18pt); \filldraw[color=gray!40] (10,-2.4) circle (18pt); \node at (-4,2.4) {$\mathbf{2}$}; \node at (-4,1.2) {$\mathbf{1}$}; \node at (-4,0) {$\mathbf{0}$}; \node at (-4,-1.2) {$\mathbf{-1}$}; \node at (-4,-2.4) {$\mathbf{-2}$}; \node at (-4,-3.6) {$\mathbf{-3}$}; \node at (-4,-4.8) {$\mathbf{-4}$}; \node at (-3.9,-7.4) {$\mathbf{i~/~j}$}; \node at (0,-7.4) {$\mathbf{0}$}; \node at (2,-7.4) {$\mathbf{1}$}; \node at (4,-7.4) {$\mathbf{2}$}; \node at (6,-7.4) {$\mathbf{3}$}; \node at (8,-7.4) {$\mathbf{4}$}; \node at (10,-7.4) {$\mathbf{5}$}; \node at (12,-7.4) {$\mathbf{6}$}; \foreach \i in {22,26,30,34,38,42} \node at (\i/2-22/2,2.4) {$\i$}; \foreach \i in {11,15,19,23,27,31} \node at (\i/2-11/2,1.2) {$\i$}; \foreach \i in {0,4,8,12,16,20} \node at (\i/2,0) {$\i$}; \foreach \i in {-11,-7,-3,1,5,9} \node at (\i/2+11/2,-1.2) {$\i$}; \foreach \i in {-22,-18,-14,-10,-6,-2} \node at (\i/2+22/2,-2.4) {$\i$}; \foreach \i in {-33,-29,-25,-21,-17,-13} \node at (\i/2+33/2,-3.6) {$\i$}; \foreach \i in {-44,-40,-36,-32,-28,-24} \node at (\i/2+44/2,-4.8) {$\i$}; \node at (5,4) {\vdots}; \node at (5,-5.7) {\vdots}; \draw (-2.5,4)--(-2.5,-8); \draw (-5,-6.7)--(13,-6.7); \node at (3,-9.5) {I.
$(\overline{11},4)$-abacus of $(8,4,2,1)$}; \end{tikzpicture} \quad \begin{tikzpicture}[scale=.3] \small \draw[color=gray!70] (0,0)--(2,0)--(4,-1.2)--(6,-2.4)--(8,-3.6)--(10,-2.4)--(12,-2.4); \filldraw[color=gray!70] (0,0) circle (2pt) (2,0) circle (2pt) (4,-1.2) circle (2pt) (6,-2.4) circle (2pt) (8,-3.6) circle (2pt) (10,-2.4) circle (2pt) (12,-2.4) circle (2pt) ; \filldraw[color=gray!40] (2,0) circle (18pt); \filldraw[color=gray!40] (6,-1.2) circle (18pt); \filldraw[color=gray!40] (8,-2.4) circle (18pt); \node at (-4,2.4) {$\mathbf{2}$}; \node at (-4,1.2) {$\mathbf{1}$}; \node at (-4,0) {$\mathbf{0}$}; \node at (-4,-1.2) {$\mathbf{-1}$}; \node at (-4,-2.4) {$\mathbf{-2}$}; \node at (-4,-3.6) {$\mathbf{-3}$}; \node at (-4,-4.8) {$\mathbf{-4}$}; \node at (-3.9,-7.4) {$\mathbf{i~/~j}$}; \node at (0,-7.4) {$\mathbf{0}$}; \node at (2,-7.4) {$\mathbf{1}$}; \node at (4,-7.4) {$\mathbf{2}$}; \node at (6,-7.4) {$\mathbf{3}$}; \node at (8,-7.4) {$\mathbf{4}$}; \node at (10,-7.4) {$\mathbf{5}$}; \node at (12,-7.4) {$\mathbf{6}$}; \foreach \i in {20,23,26,29,32,35} \node at (\i/1.5-20/1.5,2.4) {$\i$}; \foreach \i in {10,13,16,19,22,25} \node at (\i/1.5-10/1.5,1.2) {$\i$}; \foreach \i in {0,3,6,9,12,15} \node at (\i/1.5,0) {$\i$}; \foreach \i in {-10,-7,-4,-1,2,5} \node at (\i/1.5+10/1.5,-1.2) {$\i$}; \foreach \i in {-20,-17,-14,-11,-8,-5} \node at (\i/1.5+20/1.5,-2.4) {$\i$}; \foreach \i in {-30,-27,-24,-21,-18,-15} \node at (\i/1.5+30/1.5,-3.6) {$\i$}; \foreach \i in {-40,-37,-34,-31,-28,-25} \node at (\i/1.5+40/1.5,-4.8) {$\i$}; \node at (5,4) {\vdots}; \node at (5,-5.7) {\vdots}; \draw (-2.5,4)--(-2.5,-8); \draw (-5,-6.7)--(13,-6.7); \node at (3,-9.5) {II.
$(\overline{10},3)$-abacus of $(8,3,1)$}; \end{tikzpicture}\\ \begin{tikzpicture}[scale=.3] \small \draw[color=gray!70] (0,0)--(2,0)--(4,-1.2)--(6,-2.4)--(8,-2.4)--(10,-1.2)--(12,-2.4); \filldraw[color=gray!70] (0,0) circle (2pt) (2,0) circle (2pt) (4,-1.2) circle (2pt) (6,-2.4) circle (2pt) (8,-2.4) circle (2pt) (10,-1.2) circle (2pt) (12,-2.4) circle (2pt) ; \filldraw[color=gray!40] (2,0) circle (18pt); \filldraw[color=gray!40] (6,-1.2) circle (18pt); \filldraw[color=gray!40] (10,-1.2) circle (18pt); \node at (-4,2.4) {$\mathbf{2}$}; \node at (-4,1.2) {$\mathbf{1}$}; \node at (-4,0) {$\mathbf{0}$}; \node at (-4,-1.2) {$\mathbf{-1}$}; \node at (-4,-2.4) {$\mathbf{-2}$}; \node at (-4,-3.6) {$\mathbf{-3}$}; \node at (-4,-4.8) {$\mathbf{-4}$}; \node at (-3.9,-7.4) {$\mathbf{i~/~j}$}; \node at (0,-7.4) {$\mathbf{0}$}; \node at (2,-7.4) {$\mathbf{1}$}; \node at (4,-7.4) {$\mathbf{2}$}; \node at (6,-7.4) {$\mathbf{3}$}; \node at (8,-7.4) {$\mathbf{4}$}; \node at (10,-7.4) {$\mathbf{5}$}; \node at (12,-7.4) {$\mathbf{6}$}; \foreach \i in {20,23,26,29,32,35} \node at (\i/1.5-20/1.5,2.4) {$\i$}; \foreach \i in {10,13,16,19,22,25} \node at (\i/1.5-10/1.5,1.2) {$\i$}; \foreach \i in {0,3,6,9,12,15} \node at (\i/1.5,0) {$\i$}; \foreach \i in {-10,-7,-4,-1,2,5} \node at (\i/1.5+10/1.5,-1.2) {$\i$}; \foreach \i in {-20,-17,-14,-11,-8,-5} \node at (\i/1.5+20/1.5,-2.4) {$\i$}; \foreach \i in {-30,-27,-24,-21,-18,-15} \node at (\i/1.5+30/1.5,-3.6) {$\i$}; \foreach \i in {-40,-37,-34,-31,-28,-25} \node at (\i/1.5+40/1.5,-4.8) {$\i$}; \node at (5,4) {\vdots}; \node at (5,-5.7) {\vdots}; \draw (-2.5,4)--(-2.5,-8); \draw (-5,-6.7)--(13,-6.7); \node at (3,-9.5) {III.
$(\overline{10},3)$-abacus of $(5,3,1)$}; \end{tikzpicture} \quad \begin{tikzpicture}[scale=.3] \small \draw[color=gray!70] (0,0)--(2,0)--(4,0)--(6,-1.2)--(8,-2.4)--(10,-3.6)--(12,-2.4); \filldraw[color=gray!70] (0,0) circle (2pt) (2,0) circle (2pt) (4,0) circle (2pt) (6,-1.2) circle (2pt) (8,-2.4) circle (2pt) (10,-3.6) circle (2pt) (12,-2.4) circle (2pt) ; \filldraw[color=gray!40] (2,0) circle (18pt); \filldraw[color=gray!40] (4,0) circle (18pt); \filldraw[color=gray!40] (10,-2.4) circle (18pt); \node at (-4,2.4) {$\mathbf{2}$}; \node at (-4,1.2) {$\mathbf{1}$}; \node at (-4,0) {$\mathbf{0}$}; \node at (-4,-1.2) {$\mathbf{-1}$}; \node at (-4,-2.4) {$\mathbf{-2}$}; \node at (-4,-3.6) {$\mathbf{-3}$}; \node at (-4,-4.8) {$\mathbf{-4}$}; \node at (-3.9,-7.4) {$\mathbf{i~/~j}$}; \node at (0,-7.4) {$\mathbf{0}$}; \node at (2,-7.4) {$\mathbf{1}$}; \node at (4,-7.4) {$\mathbf{2}$}; \node at (6,-7.4) {$\mathbf{3}$}; \node at (8,-7.4) {$\mathbf{4}$}; \node at (10,-7.4) {$\mathbf{5}$}; \node at (12,-7.4) {$\mathbf{6}$}; \foreach \i in {22,25,28,31,34,37} \node at (\i/1.5-22/1.5,2.4) {$\i$}; \foreach \i in {11,14,17,20,23,26} \node at (\i/1.5-11/1.5,1.2) {$\i$}; \foreach \i in {0,3,6,9,12,15} \node at (\i/1.5,0) {$\i$}; \foreach \i in {-11,-8,-5,-2,1,4} \node at (\i/1.5+11/1.5,-1.2) {$\i$}; \foreach \i in {-22,-19,-16,-13,-10,-7} \node at (\i/1.5+22/1.5,-2.4) {$\i$}; \foreach \i in {-33,-30,-27,-24,-21,-18} \node at (\i/1.5+33/1.5,-3.6) {$\i$}; \foreach \i in {-44,-41,-38,-35,-32,-29} \node at (\i/1.5+44/1.5,-4.8) {$\i$}; \node at (5,4) {\vdots}; \node at (5,-5.7) {\vdots}; \draw (-2.5,4)--(-2.5,-8); \draw (-5,-6.7)--(13,-6.7); \node at (3,-9.5) {IV. $(\overline{11},3)$-abacus of $(7,6,3)$}; \end{tikzpicture} \caption{The $(\overline{s+d},d)$-abaci of several partitions and the corresponding free Motzkin paths}\label{fig:abacus_bar} \end{figure} For given coprime positive integers $s$ and $d$, let $\la$ be an $(\ols{s\phantom{d}}, \overline{s+d}, \overline{s+2d})$-core partition.
For the $(\overline{s+d},d)$-abacus function $f$ of $\la$, we set $f(\floor*{(s+d+2)/2})\coloneqq -\floor*{(d+1)/2}$ and define $\phi(\la)$ to be the path $P=P_1P_2 \cdots P_{\floor*{(s+d+2)/2}}$, where the $j$th step is given by $P_j=(1,f(j)-f(j-1))$ for each $j$. By Proposition \ref{prop:f_initial} (b), $P_j$ is one of the three steps $U=(1,1)$, $F=(1,0)$, and $D=(1,-1)$, so $P$ is a free Motzkin path. From this construction together with Proposition~\ref{prop:barf}, we obtain a path interpretation of an $(\ols{s\phantom{d}}, \overline{s+d}, \overline{s+2d})$-core partition as described in the following theorem. \begin{thm}\label{thm:barcore} For coprime positive integers $s$ and $d$, there is a bijection between the sets $\mathcal{BC}_{(s,s+d,s+2d)}$ and \begin{enumerate} \item[(a)] $\mathcal{F}(\frac{s+d+1}{2},-\frac{d}{2} \,;\, \{U\},\{D\})$ if $s$ is odd and $d$ is even; \item[(b)] $\mathcal{F}(\frac{s+d+2}{2},-\frac{d+1}{2} \,;\, \{U\},\{FD,DD,U\})$ if both $s$ and $d$ are odd; \item[(c)] $\mathcal{F}(\frac{s+d+1}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset)$ if $s$ is even and $d$ is odd. \end{enumerate} \end{thm} \begin{proof} All the bijections come from Propositions \ref{prop:f_initial}, \ref{prop:barf}, and \ref{prop:barinv}. 
By drawing the line segments that connect positions $(f(j),j)$ and $(f(j+1),j+1)$ to obtain $P=P_1P_2 \cdots P_{\floor*{(s+d)/2}}$ in the $(\overline{s+d},d)$-abacus, we obtain one-to-one correspondences between the set $\mathcal{BC}_{(s,s+d,s+2d)}$ and { \small \begin{align*} \text{(a) }& \mathcal{F}\left(\frac{s+d-1}{2},-\frac{d}{2}\,;\, \{U\},\emptyset\right)\cup\mathcal{F}\left(\frac{s+d-1}{2}, -\frac{d+2}{2}\,;\, \{U\},\emptyset\right);\\ \text{(b) }& \mathcal{F}\left(\frac{s+d}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset\right) \cup \mathcal{F}\left(\frac{s+d}{2},-\frac{d-1}{2} \,;\, \{U\},\{F,D\}\right);\\ \text{(c) }& \mathcal{F}\left(\frac{s+d-1}{2},-\frac{d-1}{2} \,;\, \{U\},\emptyset\right) \cup \mathcal{F}\left(\frac{s+d-1}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset\right) \\ &\hspace{55mm}\cup\mathcal{F}\left(\frac{s+d-1}{2},-\frac{d+3}{2} \,;\, \{U\},\emptyset\right). \end{align*} } The addition of the last step gives free Motzkin paths of type $(\lfloor (s+d+2)/2\rfloor,-\lfloor (d+1)/2 \rfloor)$, as desired. \end{proof} \begin{ex} For a $(\overline{7}, \overline{11}, \overline{15})$-core partition $\la=(8,4,2,1)$, Diagram I in Figure \ref{fig:abacus_bar} illustrates the $(\overline{11},4)$-abacus of $\la$. The $(\overline{11},4)$-abacus function $f$ of $\la$ is given by $$f(0)=0,~ f(1)=0,~ f(2)=0,~ f(3)=-1,~ f(4)=-2, ~f(5)=-3, ~f(6)=-2,$$ and its corresponding path is $P=\phi(\la)=FFDDDU$. \end{ex} \subsection{Doubled distinct $(s,s+d,s+2d)$-core partitions} Recall that for an $\overline{s}$-core partition $\la$ with even $s$, $\la\la$ is a doubled distinct $s$-core if and only if $s/2 \notin \la$. \begin{prop}\label{prop:dd_f} For a strict partition $\la$ such that $\la\la$ is a doubled distinct $(s,s+d,s+2d)$-core, the $(\overline{s+d},d)$-abacus function $f$ of $\la$ satisfies the following. \begin{enumerate} \item [(a)] If $s$ is odd and $d$ is even, then $f(\frac{s+d-1}{2})\in \{ -\frac{d+2}{2}, -\frac{d}{2}\}$.
\item [(b)] If $s$ and $d$ are both odd, then $f(\frac{s+d}{2})=-\frac{d+1}{2}$. \item [(c)] If $s$ is even and $d$ is odd, then $f(\frac{s+d-1}{2})=-\frac{d+1}{2}$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item [(a)] It follows from Proposition \ref{prop:barf} (a) since we do not need to consider the additional property of a doubled distinct core partition. \item [(b)] Positions $(-(d+1)/2,(s+d)/2)$ and $(-(d-1)/2,(s+d)/2)$ are labeled by $-(s+d)/2$ and $(s+d)/2$, respectively. Since $(s+d)/2 \notin \la$ by Proposition \ref{prop:dd} (b), there is no bead in column $(s+d)/2$, and $f((s+d)/2)=-(d+1)/2$. \item [(c)] Positions $(-(d+1)/2,(s+d-1)/2)$ and $(-(d-1)/2,(s+d-1)/2)$ are labeled by $-(s+2d)/2$ and $s/2$, respectively. We know that $s/2,(s+2d)/2 \notin \la$ by Proposition \ref{prop:dd} (b), so $f((s+d-1)/2)=-(d+1)/2$. \end{enumerate} \end{proof} Similarly to the bar-core case considered in Section \ref{sec:bar}, there is a one-to-one correspondence between the set of doubled distinct $(s,s+d,s+2d)$-cores and the set of functions satisfying the conditions in Propositions \ref{prop:f_initial} and \ref{prop:dd_f}. The following proposition completes the construction of the bijection. \begin{prop}\label{prop:dd_inverse} For coprime positive integers $s$ and $d$, let $f$ be a function that satisfies the conditions in Propositions \ref{prop:f_initial} and \ref{prop:dd_f}. If $\la$ is a strict partition such that $f$ is the $(\overline{s+d},d)$-abacus function of $\la$, then $\la\la$ is a doubled distinct $(s,s+d,s+2d)$-core. \end{prop} \begin{proof} It is sufficient to show that $\la$ satisfies Proposition \ref{prop:dd} (b). We consider cases according to the parity of $s$ and $d$. For odd $s$ and even $d$, all of $s,s+d,s+2d$ are odd, so we no longer need to consider the additional property of $\la\la$. For odd $s$ and $d$, there is no bead in column $(s+d)/2$ by Proposition \ref{prop:dd_f} (b).
Since column $(s+d)/2$ is the only column having labels whose absolute value is $(s+d)/2$, it follows that $(s+d)/2 \notin \la$. If $s$ is even and $d$ is odd, then $s$ and $s+2d$ are even. In a similar way, $s/2,(s+2d)/2 \notin \la$ by Proposition \ref{prop:dd_f} (c). \end{proof} Now we give a path interpretation for the doubled distinct $(s,s+d,s+2d)$-cores. \begin{thm}\label{thm:dd3} For coprime positive integers $s$ and $d$, there is a bijection between the sets $\mathcal{DD}_{(s,s+d,s+2d)}$ and \begin{enumerate} \item[(a)] $\mathcal{F}(\frac{s+d+1}{2},-\frac{d}{2} \,;\, \{U\},\{D\})$ if $s$ is odd and $d$ is even; \item[(b)] $\mathcal{F}(\frac{s+d}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset)$ if both $s$ and $d$ are odd; \item[(c)] $\mathcal{F}(\frac{s+d-1}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset)$ if $s$ is even and $d$ is odd. \end{enumerate} \end{thm} \begin{proof} Part (a) comes from Theorem \ref{thm:barcore} (a). Parts (b) and (c) follow from Propositions \ref{prop:f_initial} and \ref{prop:dd_f}. Note that the lengths of the corresponding paths in parts (b) and (c) differ from those in the original setting. Since parts (b) and (c) in Proposition \ref{prop:dd_f} give only one option for the value of $f$ at the second-to-last step, we no longer need to extend the corresponding path to the end point. \end{proof} \subsection{$(s,s+d,s+2d)$-CSYDs} We recall that for even $s$, $\la$ is an $s$-CSYD if and only if $\la$ is an $\overline{s}$-core and $3s/2 \notin \la$. \begin{prop}\label{prop:csyd_f} For a strict partition $\la$ such that $S(\la)$ is an $(s,s+d,s+2d)$-CSYD, the $(\overline{s+d},d)$-abacus function $f$ of $\la$ satisfies the following. \begin{enumerate} \item [(a)] If $s$ is odd and $d$ is even, then $f(\frac{s+d-1}{2})\in\{-\frac{d+2}{2},-\frac{d}{2}\}$. \item [(b)] If $s$ and $d$ are both odd, then $f(\frac{s+d}{2}) \in \{-\frac{d+1}{2},-\frac{d-1}{2}\}$. In addition, $f(\frac{s+d-2}{2})=-\frac{d+1}{2}$ when $f(\frac{s+d}{2})=-\frac{d-1}{2}$.
\item [(c)] If $s$ is even and $d$ is odd, then $f(\frac{s+d-1}{2}), f(\frac{s+d-3}{2}) \in \{ -\frac{d+3}{2}, -\frac{d+1}{2}, -\frac{d-1}{2}\}$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item [(a)] It also follows from Proposition \ref{prop:barf} (a) since we do not need to consider the additional property of $S(\la)$. \item [(b)] From the proof of Proposition \ref{prop:barf} (b), we have $(3s+3d)/2\notin \la$ for an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition $\la$. Therefore, $\la$ is an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition if and only if $S(\la)$ is an $(s,s+d,s+2d)$-CSYD for odd numbers $s$ and $d$. \item [(c)] Let $(a,b)=(-(d+3)/2,(s+d-3)/2)$. By the proof of Proposition \ref{prop:barf} (c), we have $f(b+1)=a,a+1$, or $a+2$. Note that positions $(a,b)$, $(a+1,b)$, $(a+2,b)$, and $(a+3,b)$ are labeled by $-(3s+6d)/2$, $-(s+4d)/2$, $(s-2d)/2$, and $3s/2$, respectively. Since $3s/2,(3s+6d)/2 \notin \la$ by Proposition \ref{prop:CSYD} (b), there is at most one bead labeled by $(s-2d)/2$ or $-(s+4d)/2$ in column $b$. Hence, $f(b)=a,a+1$, or $a+2$. \end{enumerate} \end{proof} Again, we construct a bijection between the set of $(s,s+d,s+2d)$-CSYDs and the set of functions satisfying the conditions in Propositions \ref{prop:f_initial} and \ref{prop:csyd_f}. \begin{prop} For coprime positive integers $s$ and $d$, let $f$ be a function that satisfies the conditions in Propositions \ref{prop:f_initial} and \ref{prop:csyd_f}. If $\la$ is a strict partition such that $f$ is the $(\overline{s+d},d)$-abacus function of $\la$, then $S(\la)$ is an $(s,s+d,s+2d)$-CSYD. \end{prop} \begin{proof} As in Proposition \ref{prop:dd_inverse}, it is sufficient to show that $\la$ satisfies Proposition \ref{prop:CSYD} (b). Also, we do not need to check the additional condition when $s$ is odd and $d$ is even.
If $s$ and $d$ are both odd, by Proposition \ref{prop:csyd_f} (b), there is at most one bead labeled by $(s+d)/2$ in column $(s+d)/2$. Since no column other than column $(s+d)/2$ has labels whose absolute value is $(3s+3d)/2$, it follows that $(3s+3d)/2 \notin \la$. If $s$ is even and $d$ is odd, then only the column $(s+d-3)/2$ has positions labeled by $-(3s+6d)/2$ and $3s/2$. Since there is at most one bead labeled by $-(s+4d)/2$ or $(s-2d)/2$ in column $(s+d-3)/2$ by Proposition \ref{prop:csyd_f} (c), we have $3s/2,(3s+6d)/2 \notin \la$. This completes the proof. \end{proof} Similarly, we give a path interpretation for $(s,s+d,s+2d)$-CSYDs. \begin{thm}\label{thm:csyd3} For coprime positive integers $s$ and $d$, there is a bijection between the sets $\mathcal{CS}_{(s,s+d,s+2d)}$ and \begin{enumerate} \item[(a)] $\mathcal{F}(\frac{s+d+1}{2},-\frac{d}{2} \,;\, \{U\},\{D\})$ if $s$ is odd and $d$ is even; \item[(b)] $\mathcal{F}(\frac{s+d+2}{2},-\frac{d+1}{2} \,;\, \{U\},\{FD,DD,U\})$ if both $s$ and $d$ are odd; \item[(c)] $\mathcal{F}(\frac{s+d+1}{2},-\frac{d+1}{2} \,;\, \{U\},\{UU,DD\})$ if $s$ is even and $d$ is odd. \end{enumerate} \end{thm} \begin{proof} Parts (a) and (b) follow from Theorem \ref{thm:barcore}. Now we need to construct a bijection for the set $\mathcal{CS}_{(s,s+d,s+2d)}$ when $s$ is even and $d$ is odd. Up to the second-to-last step, the corresponding free Motzkin paths must lie in one of the following sets: \begin{align*} &\mathcal{F}\left((s+d-1)/2,-(d-1)/2 \,;\, \{U\},\{D\}\right),\\ &\mathcal{F}\left((s+d-1)/2,-(d+1)/2 \,;\, \{U\},\emptyset\right),\\ &\mathcal{F}\left((s+d-1)/2,-(d+3)/2 \,;\, \{U\},\{U\}\right). \end{align*} By adding the last step to each path, we obtain the statement. \end{proof} \subsection{Enumerating $(s,s+d,s+2d)$-core partitions} In this subsection we give a proof of Theorem~\ref{thm:unifying}. We begin with a useful lemma. \begin{lem}\label{lem:path1} Let $a$ and $b$ be positive integers.
\begin{enumerate} \item[(a)] The total number of free Motzkin paths of type $(a+b,-b)$ that start with either a down step or a flat step is given by \[ |\mathcal{F}(a+b,-b \,;\, \{U\},\emptyset)|=\sum_{i=0}^{a}\binom{a+b-1}{\lfloor i/2 \rfloor, b+\lfloor (i-1)/2\rfloor, a-i}. \] \item[(b)] The total number of free Motzkin paths of type $(a+b,-b)$ that start with either a down step or a flat step and end with either an up step or a flat step is \[ |\mathcal{F}(a+b,-b \,;\, \{U\},\{D\})|=\sum_{i=0}^{a-1}\binom{a+b-2}{\lfloor i/2 \rfloor}\binom{a+b-1-\lfloor i/2 \rfloor}{a-i-1}. \] \item[(c)] The total number of free Motzkin paths of type $(a+b,-b)$ that start with either a down step or a flat step and end with either a down step or a flat step is \[ |\mathcal{F}(a+b,-b \,;\, \{U\},\{U\})|=\sum_{i=0}^{a}\binom{a+b-2}{\lfloor i/2 \rfloor}\binom{a+b-1-\lfloor i/2 \rfloor}{a-i}. \] \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item[(a)] A free Motzkin path of type $(a+b,-b)$ having $k$ up steps has $b+k$ down steps and $a-2k$ flat steps. The number of such paths that start with a down (resp. flat) step is $\binom{a+b-1}{k,b+k-1,a-2k}$ (resp. $\binom{a+b-1}{k,b+k,a-(2k+1)}$). Hence, the total number of free Motzkin paths of type $(a+b,-b)$ that start with either a down step or a flat step is \[ \sum_{k=0}^{\lfloor a/2 \rfloor}\binom{a+b-1}{k,b+k-1,a-2k} +\sum_{k=0}^{\lfloor (a-1)/2 \rfloor}\binom{a+b-1}{k,b+k,a-(2k+1)}, \] which can be written as in the statement. \item[(b)] Note that $|\mathcal{F}(a+b,-b\,;\,\{U\},\{D\})|$ is equal to the sum of the following two values, which are given by (a): \begin{align*} |\mathcal{F}(a+b-1,-b \,;\, \{U\},\emptyset)|&=\sum_{i=0}^{a-1}\binom{a+b-2}{\lfloor i/2 \rfloor, b+\lfloor (i-1)/2\rfloor, a-i-1},\\ |\mathcal{F}(a+b-1,-b-1 \,;\, \{U\},\emptyset)|&=\sum_{i=0}^{a-2}\binom{a+b-2}{\lfloor i/2 \rfloor, b+\lfloor (i+1)/2\rfloor, a-i-2}.
\end{align*} Hence, $|\mathcal{F}(a+b,-b\,;\,\{U\},\{D\})|$ is equal to \[ \sum_{i=0}^{a-1}\binom{a+b-2}{\lfloor i/2 \rfloor}\left(\binom{a+b-2-\lfloor i/2 \rfloor}{a-i-1} +\binom{a+b-2-\lfloor i/2 \rfloor}{a-i-2}\right), \] which can be written as in the statement. \item[(c)] Similarly to (b), the formula follows. \end{enumerate} \end{proof} For coprime positive integers $s$ and $d$, let $\mathfrak{sc}$, $\mathfrak{bc}$, $\mathfrak{cs}$, and $\mathfrak{dd}$ denote the cardinalities of the sets $\mathcal{SC}_{(s,s+d,s+2d)}$, $\mathcal{BC}_{(s,s+d,s+2d)}$, $\mathcal{CS}_{(s,s+d,s+2d)}$, and $\mathcal{DD}_{(s,s+d,s+2d)}$, respectively. \begin{proof}[Proof of Theorem~\ref{thm:unifying}] \begin{enumerate} \item[(a)] Recall that for odd $s$ and even $d$, the three sets $\mathcal{BC}_{(s,s+d,s+2d)}$, $\mathcal{DD}_{(s,s+d,s+2d)},$ and $\mathcal{CS}_{(s,s+d,s+2d)}$ are actually the same by Remark~\ref{rmk:oddoddodd}. By Theorem \ref{thm:barcore} (a), the set $\mathcal{BC}_{(s,s+d,s+2d)}$ is bijective with $\mathcal{F}((s+d+1)/2,-d/2 \,;\, \{U\},\{D\}).$ By setting $a=(s+1)/2$ and $b=d/2$ in Lemma~\ref{lem:path1}~(b), we obtain the desired formula. \item[(b)] For odd numbers $s$ and $d$, we have $\mathfrak{bc}=\mathfrak{cs}$ by Theorems \ref{thm:barcore} (b) and \ref{thm:csyd3} (b). By Lemma~\ref{lem:path1}~(a), we get \begin{align*} &\left|\mathcal{F}\left(\frac{s+d}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset\right)\right|=\sum_{i=0}^{(s-1)/2}\binom{(s+d-2)/2}{\lfloor i/2 \rfloor, \lfloor (d+i)/2\rfloor, (s-1)/2-i}, \\ &\left|\mathcal{F}\left(\frac{s+d}{2},-\frac{d-1}{2} \,;\, \{U\},\{F,D\}\right)\right|=\left|\mathcal{F}\left(\frac{s+d-2}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset\right)\right|\\ & \hspace{54.5mm} =\sum_{i=0}^{(s-3)/2}\binom{(s+d-4)/2}{\lfloor i/2 \rfloor, \lfloor (d+i)/2\rfloor, (s-3)/2-i}. \end{align*} As in the proof of Theorem \ref{thm:barcore}, $\mathfrak{bc}$ is equal to the sum of these two terms, which can be written as follows.
\[ \mathfrak{bc}=\mathfrak{cs}=\sum_{i=0}^{(s-1)/2}\binom{(d-1)/2+i}{\lfloor i/2 \rfloor}\left( \binom{(s+d-2)/2}{(d-1)/2+i} + \binom{(s+d-4)/2}{(d-1)/2+i}\right). \] \item[(c)] By Theorem \ref{thm:barcore} (c), the set $\mathcal{BC}_{(s,s+d,s+2d)}$ is bijective with the set $\mathcal{F}((s+d+1)/2,-(d+1)/2 \,;\, \{U\},\emptyset)$ for even $s$ and odd $d$. By Lemma~\ref{lem:path1}~(a), \[ \mathfrak{bc}=\sum_{i=0}^{s/2}\binom{(s+d-1)/2}{\lfloor i/2 \rfloor, (d+1)/2+\lfloor (i-1)/2\rfloor, s/2-i}. \] Now we consider the set $\mathcal{CS}_{(s,s+d,s+2d)}$. As in the proof of Theorem \ref{thm:csyd3}, $\mathfrak{cs}=|\mathcal{F}_1|+|\mathcal{F}_2|+|\mathcal{F}_3|$, where \begin{align*} \mathcal{F}_1&\coloneqq\mathcal{F}\left(\frac{s+d-1}{2},-\frac{d-1}{2} \,;\, \{U\},\{D\}\right)\!,\\ \mathcal{F}_2&\coloneqq\mathcal{F}\left(\frac{s+d-1}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset\right)\!,\\ \mathcal{F}_3&\coloneqq\mathcal{F}\left(\frac{s+d-1}{2},-\frac{d+3}{2} \,;\, \{U\},\{U\}\right)\!. \end{align*} From Lemma~\ref{lem:path1}, we obtain that \begin{align*} |\mathcal{F}_2|&=\sum_{i=0}^{(s-2)/2}\binom{(s+d-3)/2}{\left\lfloor i/2 \right\rfloor} \binom{(s+d-3)/2-\left\lfloor i/2 \right\rfloor}{(s-2)/2-i},\\ |\mathcal{F}_1|+|\mathcal{F}_3|&=\sum_{i=0}^{(s-2)/2}\binom{(s+d-5)/2}{\left\lfloor i/2 \right\rfloor} \binom{(s+d-1)/2-\left\lfloor i/2 \right\rfloor}{(s-2)/2-i}, \end{align*} which completes the proof. \item[(d)] Theorem \ref{thm:dd3} (b) and (c), and Lemma \ref{lem:path1} give an expression of $\mathfrak{dd}$ depending on the parity of $s$. By manipulating binomial terms, one can combine the two expressions into one. \end{enumerate} \end{proof} \begin{rem} From the path constructions, we can compare these cardinalities. \begin{enumerate} \item[(a)] If $s$ is odd and $d$ is even, then $\mathfrak{sc}<\mathfrak{bc}=\mathfrak{cs}=\mathfrak{dd}$. \item[(b)] If both $s$ and $d$ are odd, then $\mathfrak{sc}=\mathfrak{dd}<\mathfrak{bc}=\mathfrak{cs}$.
\item[(c)] If $s$ is even and $d$ is odd, then $\mathfrak{dd}<\mathfrak{cs}<\mathfrak{sc}=\mathfrak{bc}$. \end{enumerate} \end{rem} \section*{Acknowledgments} Hyunsoo Cho was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1C1C2007589) and the Ministry of Education (No. 2019R1A6A1A11051177). JiSun Huh was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1C1C1A01008524). Jaebum Sohn was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2020R1F1A1A01066216). \end{document} \endinput
2205.01872v2
http://arxiv.org/abs/2205.01872v2
A smectic liquid crystal model in the periodic setting
\documentclass[article,onefignum,onetabnum]{siamart171218} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{amssymb} \usepackage{mathrsfs} \usepackage{graphicx} \usepackage{placeins} \usepackage{comment} \usepackage{MnSymbol} \usepackage{enumerate} \usepackage{xfrac} \usepackage[utf8]{inputenc} \newcommand{\creflastconjunction}{, and~} \newcommand{\tr}[1]{\mathrm{tr}\,{#1}} \newcommand{\dv}[1]{\mathrm{div}\,{#1}} \newcommand{\cl}[1]{\mathrm{curl}\,{#1}} \newcommand{\eps}{\varepsilon} \newcommand{\dHM}{d\mathcal{H}_{\M}} \newcommand{\cg}{\mathcal{C}_g} \newcommand{\dvr}{\mathrm{div}} \newcommand{\bn}{\mathbf{n}} \newcommand{\bm}{{\bf m}} \newcommand{\e}{\varepsilon} \newcommand{\R}{\mathbb R} \newcommand{\mT}{\mathbb T} \newcommand{\mN}{\mathbb N} \newcommand{\mZ}{\mathbb Z} \newcommand{\M}{\mathcal M} \newcommand{\rr}{{\bf r}} \newcommand{\T}{{\bf T}} \newcommand{\N}{{\bf N}} \newcommand{\p}{{\bf p}} \newcommand{\q}{{\bf q}} \newcommand{\beq}{\begin{equation}} \newcommand{\eeq}{\end{equation}} \newcommand{\abs}[1]{\left\vert{#1}\right\vert} \newcommand{\sss}{\setcounter{equation}{0}} \newcommand{\n}{{\bf n}} \newcommand{\id}{\mathrm{I}} \newcommand{\dive}{\mathrm{div}\,} \newcommand{\curl}{\mathrm{curl}\,} \newcommand{\dz}{\partial_z} \newcommand{\dx}{\partial_x} \newcommand{\dy}{\partial_y} \newcommand{\dxx}{\dx^2} \newcommand{\dyy}{\dy^2} \newcommand{\dxy}{\dx\dy} \newcommand{\dyx}{\dy\dx} \newcommand{\dxz}{\dx\dz} \newcommand{\dzx}{\dz\dx} \newcommand{\dyz}{\dy\dz} \newcommand{\dzy}{\dz\dy} \newcommand{\ro}{\mathrm{R}\,} \newcommand{\mn}{m_n} \let\oldmp\mp \renewcommand\mp{m^+} \let\oldmn\mn \renewcommand\mn{m_n} \newcommand{\mm}{m^-} \newcommand{\mmo}{\mm_1} \newcommand{\mpo}{\mp_1} \newcommand{\mres}{\hspace{-.75mm}\lefthalfcup \hspace{-.75mm}} \newcommand{\parallelsum}{\mathbin{\!/\mkern-5mu/\!}} \newsiamremark{remark}{Remark} \newsiamremark{hypothesis}{Hypothesis} \crefname{hypothesis}{Hypothesis}{Hypotheses} \newsiamthm{claim}{Claim} \overfullrule=0pt 
\headers{A smectic liquid crystal model in the periodic setting}{Novack, Yan} \title{A smectic liquid crystal model in the periodic setting} \author{Michael Novack\thanks{Department of Mathematics, The University of Texas at Austin, Austin, TX, USA (\email{[email protected]}).} \and Xiaodong Yan\thanks{Department of Mathematics, The University of Connecticut, Storrs, CT, USA (\email{[email protected]}).} } \begin{document} \maketitle \begin{abstract} We consider the asymptotic behavior as $\e $ goes to zero of the 2D smectics model in the periodic setting given by \begin{equation*} \mathcal{E}_{\e }( w) =\frac{1}{2}\int_{\mathbb{T}^{2}}\frac{1}{\e }\left( \left\vert \partial _{1}\right\vert ^{-1}\left( \partial _{2}w-\partial _{1}\frac{1}{2}w^{2}\right) \right) ^{2}+\e \left( \partial _{1}w\right) ^{2}dx . \end{equation*}We show that the energy $\mathcal{E}_\e(w)$ controls suitable $L^p$ and Besov norms of $w$, and we use this to establish both the existence of minimizers for $\mathcal{E}_\e(w)$, which had not previously been proved for this smectics model, and the compactness in $L^p$ of energy-bounded sequences. We also prove an asymptotic lower bound for $\mathcal{E}_\e(w)$ as $\e \to 0$ by means of an entropy argument. \end{abstract} \section{Introduction}\label{sec:intro} We consider the variational model \begin{equation} \mathcal{E}_{\e }( w) =\frac{1}{2}\int_{\mathbb{T}^{2}}\frac{1}{\e }\left( \left\vert \partial _{1}\right\vert ^{-1}\left( \partial _{2}w-\partial _{1}\frac{1}{2}w^{2}\right) \right) ^{2}+\e \left( \partial _{1}w\right) ^{2}dx\,, \label{periodicenergy} \end{equation}where $w:\mathbb{T}^{2}\rightarrow \mathbb{R}$ is a periodic function with vanishing mean in $x_{1}$, that is, \begin{equation}\label{vanishing mean} \int_{0}^{1}w(x_1,x_2)\,dx_{1}=0\quad\textup{for any $x_2 \in [0,1)$}\,.
\footnotemark \end{equation}\footnotetext{More {\color{black}generally}, a periodic distribution $f$ on $\mathbb{T}^2$ has ``vanishing mean in $x_1$'' if for all $(k_1,k_2)=k\in \left( 2\pi \mathbb{Z}\right) ^{2}$ with $k_1=0$, $\widehat{f}(k)=0$. If $f$ corresponds to an $L^p$ function, $p\in [1,\infty)$, this is equivalent to the existence of a sequence $\{\varphi_k\}$ of smooth, periodic functions satisfying \eqref{vanishing mean} that converges in $L^p$ to $f$.} Here $\left\vert \partial _{1}\right\vert ^{-1}$ is defined via its Fourier coefficients \[ \widehat{\left\vert \partial _{1}\right\vert ^{-1}f}\left( k\right) =\left\vert k_{1}\right\vert ^{-1}\widehat{f}\left( k\right) \text{ \ for }k\in {\left( 2\pi \mathbb{Z}\right) ^{2}}\,, \] and is well defined when \eqref{vanishing mean} holds. This model is motivated by the following nonlinear functional, which has been proposed as an approximate model for smectic liquid crystals \cite{BreMar99, IL99, NY1, San06, SanKam03} in two space dimensions: \begin{equation} E_{\e }(u)=\frac{1}{2}\int_{\Omega }\frac{1}{\e }\left( \partial_2 u-\frac{1}{2}(\partial_1 u)^{2}\right) ^{2}+\e (\partial_{11} u)^{2}\,dx, \label{smecticenergy} \end{equation}where $u$ is the Eulerian deviation from the ground state $\Phi(x) = x_2$ and $\e $ is the characteristic length scale. The first term represents the compression energy and the second term represents the bending energy. For further background on the model, we refer to \cite{NY1,NY2} and the references contained therein. The 3D version of \eqref{smecticenergy}{\color{black}, which we analyzed in \cite{NY2} but do not consider further here,} is also used for example in the mathematical description of nuclear pasta in neutron stars \cite{CSH18}.
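Because $\left\vert \partial _{1}\right\vert ^{-1}$ acts diagonally on Fourier coefficients, it is straightforward to realize numerically. The following NumPy sketch (illustrative only, not part of the paper; the function name is ours) applies $|\partial_1|^{-1}$ to a grid function on $\mathbb{T}^2$, zeroing the $k_1=0$ modes in keeping with the vanishing-mean condition \eqref{vanishing mean}:

```python
import numpy as np

def inv_abs_d1(f):
    """Apply |d1|^{-1} on the discrete torus: divide each Fourier mode
    by |k1|, zeroing k1 = 0 modes (consistent with vanishing mean in x1)."""
    n1 = f.shape[0]
    fhat = np.fft.fft2(f)
    k1 = 2 * np.pi * np.fft.fftfreq(n1, d=1.0 / n1)  # wavenumbers in 2*pi*Z
    scale = np.zeros(n1)
    nonzero = k1 != 0
    scale[nonzero] = 1.0 / np.abs(k1[nonzero])
    return np.real(np.fft.ifft2(fhat * scale[:, None]))

# sanity check: for f = cos(2*pi*x1), |d1|^{-1} f = f / (2*pi)
n = 64
x1 = np.linspace(0.0, 1.0, n, endpoint=False)
f = np.cos(2 * np.pi * x1)[:, None] * np.ones((1, n))
g = inv_abs_d1(f)
assert np.allclose(g, f / (2 * np.pi))
```

On the test function $\cos(2\pi x_1)$ the operator simply divides by $|k_1|=2\pi$, which the final assertion checks.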
Assuming that $u$ is periodic on the torus $\mathbb{T}^{2}=\Omega$ and setting $w=\partial_1 u$, (\ref{smecticenergy}) becomes \[ E_{\e }(u)=\frac{1}{2}\int_{\mathbb{T}^{2}}\frac{1}{\e}\left( |\partial _{1}|^{-1}\left( \partial _{2}w-\partial _{1}\frac{1}{2}w^{2}\right) \right) ^{2}+\e \left( \partial _{1}w\right) ^{2}dx. \]{\color{black}Finally, a model similar to \eqref{periodicenergy}, with $|\partial_1|^{-1/2}$ replacing $|\partial_1|^{-1}$, has been derived in the context of micromagnetics \cite{IgnOtt19}; see also \cite{Ste11}}. The asymptotic behavior of \eqref{smecticenergy} as $\e$ goes to zero was studied in \cite{NY1}. Given $\varepsilon _{n}\rightarrow 0$ and a sequence $\left\{ u_{n}\right\} $ with bounded energies $E_{\varepsilon _{n}}( u_{n})$, the authors proved pre-compactness of {$\{\partial_1 u_n\}$ in $L^q$ for any $1\leq q<p$ and pre-compactness of $\{\partial_2 u_n\}$ in $L^{2}$} under the additional assumption $\| \partial_1 u_{n}\| _{L^{\color{black}p }}\leq C $ { for some $p >6$}. The compactness proof in \cite{NY1} uses a compensated compactness argument based on entropies, following the work of Tartar \cite{Tar79, Tar83, Tar05} and Murat \cite{Mur78, Mur81ASNP, Mur81JMPA}. In addition, a lower bound on $E_{\e}$ and a matching upper bound corresponding to a 1D ansatz were obtained as $\e \rightarrow 0$ under the assumption that the limiting function $u$ satisfies $\nabla u \in (L^\infty \cap BV)(\Omega)$. In this paper, we approach the compactness via a different argument in the periodic setting. Our proof is motivated by recent work on related variational models in the periodic setting \cite{C-AOttSte07,GolJosOtt15, IORT20, Ott09JFA, OttSte10, DabJamVen20}, where strong convergence of a weakly convergent $L^2$ sequence is proved via estimates on Fourier series.
Given a sequence $u_{\e}$ weakly converging in $L^2(\mT^2)$, to prove strong convergence of $u_{\e}$ in $L^2$ it is sufficient to show that there is no concentration in the high frequencies. The centerpiece of this approach is a family of estimates, in suitable Besov spaces, for solutions to the Burgers equation $$ -\partial_1\frac{1}{2}w^2+\partial_2 w=\eta. $$ This type of compactness argument also applies to a sequence $\{w_n\}$ with $\mathcal{E}_{\e}(w_n)\leq C$ for any fixed $\e$. As a direct corollary, we obtain the existence of minimizers of $E_{\e}$ in $W^{1,2}(\mT^2)$ (see Corollary \ref{cor:exis}) for any fixed $\e$. We observe that, to the best of our knowledge, the existence of minimizers of $E_{\e}$ in any setting was not previously known, due to the lack of compactness for a sequence $\{u_n\}$ satisfying $E_{\e}(u_n) \leq C$ with fixed $\e$. To further understand the minimization of $\mathcal{E}_\e$, we are also interested in a sharp lower bound for the asymptotic limit of $\mathcal{E}_\e$ as $\e $ approaches zero. In the literature for such problems (see for example \cite{AmbDeLMan99, AviGig99, IgnMer12, JinKoh00}), one useful technique in achieving such a bound is an ``entropy'' argument, in which the entropy production $\int \dive \Sigma(w)$ of a vector field $\Sigma(w)$ is used to bound the energy $\mathcal{E}_\e$ from below. For the 2D Aviles-Giga functional \begin{equation} \frac{1}{2}\int_\Omega \frac{1}{\e}(|\nabla u|^2 - 1)^2 + \e | \nabla^2 u|^2 \,dx\,, \end{equation} such vector fields were introduced in \cite{JinKoh00,DKMO01}. In \cite{NY1, NY2}, the analogues for the smectic energy, in 2D and 3D respectively, of the Jin-Kohn entropies from \cite{JinKoh00} were used to prove a sharp lower bound, which can be matched by a construction similar to \cite{ConDeL07,Pol07}.
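The entropy mechanism can be checked symbolically for the specific field used in this paper, $\Sigma(w) = (-w^3/3,\, w^2/2)$: for smooth $w$ the chain rule gives $\dive \Sigma(w) = w\,\eta_w$, where $\eta_w = \partial_2 w - \partial_1 \frac{1}{2}w^2$, so the entropy production vanishes exactly on solutions of the Burgers constraint $\eta_w = 0$. A short symbolic verification (a sympy sketch we add for illustration, not from the paper):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
w = sp.Function('w')(x1, x2)

# entropy field Sigma(w) = (-w^3/3, w^2/2) and its production div Sigma(w)
Sigma = (-w**3 / 3, w**2 / 2)
div_Sigma = sp.diff(Sigma[0], x1) + sp.diff(Sigma[1], x2)

# compression term eta_w = d2 w - d1 (w^2/2) of the smectic energy
eta = sp.diff(w, x2) - sp.diff(w**2 / 2, x1)

# chain rule: div Sigma(w) = w * eta_w, so the entropy production
# vanishes whenever the Burgers constraint eta_w = 0 holds
assert sp.simplify(div_Sigma - w * eta) == 0
```

This identity is what makes the production concentrate where the constraint fails, e.g. along the limiting jump curves as $\e \to 0$.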
In this paper, we use the vector field \begin{equation}\label{2dSigma} \Sigma(w) = \left(-\frac{1}{3}w^3, \frac{1}{2}w^2 \right) \end{equation} which is $(-(\partial_1 u)^3/3, (\partial_1 u)^2/2)$ in terms of $u$, to prove a sharp lower bound. As $\e\to 0$, entropy production concentrates along curves and approximates the total variation of the distributional divergence of a BV vector field. An interesting open direction which motivates studying \eqref{2dSigma} is utilizing the correct version of \eqref{2dSigma} (or the entropies from \cite{DKMO01,GhiraldinLamy}) in 3D, for example in a compactness argument. The paper is organized as follows. The pre-compactness of a sequence of functions with bounded energy is proved in Section \ref{sec:cpt}, for both fixed $\varepsilon$ and $\varepsilon \to 0$. The lower bound is established in Section \ref{sec:lbd}. \section{Compactness of a sequence with bounded energy} \label{sec:cpt} \subsection{Preliminaries} \label{sec:prelim} Let $\mathbf{e}_{1}=\left( 1,0\right) $ and $\mathbf{e}_{2}=\left( 0,1\right) $ be unit vectors in $\mathbb{R}^{2}.$ We recall some definitions from \cite{IORT20}. For $f:\mathbb{T}^{2}\rightarrow \mathbb{R}$, we write \[ \partial _{j}^{h}f\left( x\right) =f\left( x+h\mathbf{e}_{j}\right) -f\left( x\right) \text{ \ \ \ \ \ }x\in \mathbb{T}^{2},\text{ }h\in \mathbb{R}\text{. } \] \begin{definition} Given $f:\mathbb{T}^{2}\rightarrow \mathbb{R}$, $j\in \{1,2\},$ $s\in \left( 0,1\right] ,$ and $p\in \lbrack 1,\infty )$, the directional Besov seminorm is defined as \[ \left\Vert f\right\Vert _{\overset{\cdot }{\mathcal{B}}_{p;j}^{s}}=\sup_{h\in \left( 0,1\right] }\frac{1}{h^{s}}\left( \int_{\mathbb{T}^{2}}\left\vert \partial _{j}^{h}f\right\vert ^{p}dx\right) ^{\frac{1}{p}} \] \end{definition} \begin{remark} This is the $\mathcal{B}^{s;p,\infty }$ seminorm defined in each direction separately. 
\end{remark} \begin{remark}\label{IORT remark} For $p=2$ and $s\in \left( 0,1\right) ,$ given $s^{\prime }\in \left( s,1\right) ,$ the following inequality holds $\left( \cite[\textup{Equation }(2.2)]{IORT20}\right) $: \[ \int_{\mathbb{T}^{2}}\left\vert \left\vert \partial _{j}\right\vert ^{s}f\right\vert ^{2}=\sum \left\vert k_{j}\right\vert ^{2s}\left\vert \widehat{f}\left( k\right) \right\vert ^{2}=c_{s}\int_{\mathbb{R}}\frac{1}{\left\vert h\right\vert ^{2s}}\int_{\mathbb{T}^{2}}\left\vert \partial _{j}^{h}f\right\vert ^{2}dx\frac{dh}{\left\vert h\right\vert }\leq C( s,s^{\prime }) \left\Vert f\right\Vert^{2}_{\color{black}\overset{\cdot }{\mathcal{B}}_{2;j}^{s'}}. \] \end{remark} We quote two {\color{black}results} from \cite{IORT20}. \begin{lemma} \label{iortb9}\cite[Proposition B.9]{IORT20} For every $p\in \left( 1,\infty \right] $ and $q\in \left[ 1,p\right] $ with $\left( p,q\right) \neq \left( \infty ,1\right) ,$ there exists a constant $C( p,q) >0$ such that for every periodic function $f:\left[ 0,1\right) \rightarrow \mathbb{R}$ with vanishing mean, \begin{equation} \left( \int_{0}^{1}\left\vert f\left( z\right) \right\vert ^{p}dz\right) ^{\frac{1}{p}}\leq C( p,q) \int_{0}^{1}\frac{1}{h^{\frac{1}{q}-\frac{1}{p}}}\left( \int_{0}^{1}\left\vert \partial _{1}^{h}f\left( z\right) \right\vert ^{q}dz\right) ^{\frac{1}{q}}\frac{dh}{h}\,, \label{eqn:pqbd} \end{equation} with the usual interpretation for $p=\infty$ or $q=\infty$. \end{lemma} {\color{black}The following estimate was derived in the proof of Lemma B.10 in \cite{IORT20}. \begin{lemma} \cite[In the proof of Lemma B.10]{IORT20} For every $p\in \left[ 1,\infty \right) $, every periodic function $f:\left[ 0,1\right) \rightarrow \mathbb{R}$, and every $h \in (0,1]$, the following estimate holds.
\begin{equation} \left( \int_{0}^{1}\left\vert \partial _{1}^{h}f\left( z\right) \right\vert ^{p}dz\right) ^{\frac{1}{p}}\leq 2\left( \frac{1}{h}\int_{0}^{h}\int_{0}^{1}\left\vert \partial _{1}^{h^{\prime }}f\left( z\right) \right\vert ^{p}dz\,dh^{\prime }\right) ^{\frac{1}{p}}. \label{eqn:avebd} \end{equation} \end{lemma}} We define $\eta _{w}=\partial _{2}w-\partial _{1}\frac{1}{2}w^{2}$, and thus $\left( \ref{periodicenergy}\right) $ can be written as \begin{equation}\label{periodic} \mathcal{E}_{\varepsilon}(w)=\frac{1}{2}\int_{\mT^2}\frac{1}{\varepsilon}(|\partial_1|^{-1}\eta_w)^2+\varepsilon (\partial_1w)^2 dx. \end{equation} Finally, we introduce the $\e$-independent energy \begin{equation}\label{epsilon ind} \mathcal{E}(w) = \left( \int_{\mathbb{T}^{2}}\left( \left\vert \partial _{1}\right\vert ^{-1}\eta _{w}\right) ^{2}dx\right) ^{\frac{1}{2}}\left( \int_{\mathbb{T}^{2}}\left( \partial _{1}w\right) ^{2}dx\right) ^{\frac{1}{2}}\,, \end{equation} and note that \begin{equation}\label{trivial bound} \mathcal{E}(w) \leq \mathcal{E}_\e(w)\quad\textup{ for all $\e>0$}\,. \end{equation} \subsection{Besov and $L^p$ estimates} \label{sec:besov} We obtain the following estimates. {\color{black}The proofs follow closely those in \cite[Propositions 2.3-2.4]{IORT20}.} \begin{lemma}\label{lemma 2.6} There exists a universal constant $C_1>0$ such that if $w\in L^{2}\left( \mathbb{T}^{2}\right) $ has vanishing mean in $x_{1}$ and $h\in \left( 0,1\right]$, then \begin{equation} \int_{\mathbb{T}^{2}}\left\vert \partial _{1}^{h}w\right\vert ^{3}dx\leq C_1h\mathcal{E}(w) \label{l3estimate} \end{equation}and \begin{equation} {\color{black}\sup_{x_{2}\in \left[ 0,1\right) }\int_{0}^{h}\int_{0}^{1}\left\vert \partial _{1}^{h^{\prime }}w\left( x_{1},x_{2}\right) \right\vert ^{2}dx_{1}dh^{\prime }\leq C_1\left( h\mathcal{E}( w) +h^{\frac{5}{3}}\mathcal{E}^{\frac{2}{3}}( w) \right) \label{b2sestimate}}.
\end{equation}\end{lemma} \begin{proof} {Throughout the proof, we assume that $w$ is smooth; once the estimates hold for smooth $w$, they hold in generality by approximation. The constant $C_1$ may change from line to line.} Following \cite[Equations (2.5)-(2.6)]{IORT20}, we apply the modified Howarth-K\'{a}rm\'{a}n-Monin identities for the Burgers operator. For every $h^{\prime }\in \left( 0,1\color{black}\right] $, we have\begin{equation} \partial _{2}\frac{1}{2}\int_{0}^{1}\left\vert \partial _{1}^{h^{\prime }}w\right\vert \partial _{1}^{h^{\prime }}w\,dx_{1}-\frac{1}{6}\partial _{h^{\prime }}\int_{0}^{1}\left\vert \partial _{1}^{h^{\prime }}w\right\vert ^{3}dx_{1}=\int_{0}^{1}\partial _{1}^{h^{\prime }}\eta _{w}\left\vert \partial _{1}^{h^{\prime }}w\right\vert dx_{1}, \label{HKM1} \end{equation}\begin{equation} \partial _{2}\frac{1}{2}\int_{0}^{1}\left( \partial _{1}^{h^{\prime }}w\right) ^{2}dx_{1}-\frac{1}{6}\partial _{h^{\prime }}\int_{0}^{1}\left( \partial _{1}^{h^{\prime }}w\right) ^{3}dx_{1}=\int_{0}^{1}\partial _{1}^{h^{\prime }}\eta _{w}\partial _{1}^{h^{\prime }}w\,dx_{1}. \label{HKM2} \end{equation} Integrating \eqref{HKM1} over $x_{2}$ and using the periodicity of $w$ yields \begin{eqnarray} \partial _{h^{\prime }}\int_{\mathbb{T}^{2}}\left\vert \partial _{1}^{h^{\prime }}w\right\vert ^{3}dx &=&-6\int_{\mathbb{T}^{2}}\partial _{1}^{h^{\prime }}\eta _{w}\left\vert \partial _{1}^{h^{\prime }}w\right\vert dx \nonumber \\ &=&-6\int_{\mathbb{T}^{2}}\eta _{w}\partial _{1}^{-h^{\prime }}\left\vert \partial _{1}^{h^{\prime }}w\right\vert dx. 
\label{L3derivative} \end{eqnarray}Now \begin{eqnarray} \left\vert \int_{\mathbb{T}^{2}}\eta _{w}\partial _{1}^{-h^{\prime }}\left\vert \partial _{1}^{h^{\prime }}w\right\vert dx\right\vert &\leq &\left( \int_{\mathbb{T}^{2}}\left( \left\vert \partial _{1}\right\vert ^{-1}\eta _{w}\right) ^{2}dx\right) ^{\frac{1}{2}}\left( \int_{\mathbb{T}^{2}}\left( \partial _{1}\partial _{1}^{-h^{\prime }}\left\vert \partial _{1}^{h^{\prime }}w\right\vert \right) ^{2}dx\right) ^{\frac{1}{2}} \nonumber \\ &\leq &C_1\left( \int_{\mathbb{T}^{2}}\left( \left\vert \partial _{1}\right\vert ^{-1}\eta _{w}\right) ^{2}dx\right) ^{\frac{1}{2}}\left( \int_{\mathbb{T}^{2}}\left( \partial _{1}w\right) ^{2}dx\right) ^{\frac{1}{2}}, \notag \end{eqnarray}so that integrating \eqref{L3derivative} from $0$ to $h$ and using $\partial _{1}^{0}w=0,$ we have \begin{equation} \int_{\mathbb{T}^{2}}\left\vert \partial _{1}^{h}w\right\vert ^{3}dx\leq C_1\left( \int_{\mathbb{T}^{2}}\left( \left\vert \partial _{1}\right\vert ^{-1}\eta _{w}\right) ^{2}dx\right) ^{\frac{1}{2}}\left( \int_{\mathbb{T}^{2}}\left( \partial _{1}w\right) ^{2}dx\right) ^{\frac{1}{2}}h\leq C_1h\mathcal{E}(w) . \notag \end{equation} To prove \eqref{b2sestimate}, we integrate \eqref{HKM2} from $0$ to $h$ and again utilize $\partial _{1}^{0}w=0$ to obtain \begin{equation} \partial _{2}\frac{1}{2}\int_{0}^{h}\int_{0}^{1}\left( \partial _{1}^{h^{\prime }}w\right) ^{2}dx_{1}dh^{\prime }-\frac{1}{6}\int_{0}^{1}\left( \partial _{1}^{h}w\right) ^{3}dx_{1}=\int_{0}^{h}\int_{0}^{1}\partial _{1}^{h^{\prime }}\eta _{w}\partial _{1}^{h^{\prime }}w\,dx_{1}dh^{\prime }. 
\label{fderivative} \end{equation}We set $$f\left( x_{2}\right) =\int_{0}^{h}\int_{0}^{1}\left( \partial _{1}^{h^{\prime }}w\right) ^{2}dx_{1}dh^{\prime }\,,$$ and recall the Sobolev embedding inequality for $W^{1,1}\left( \mathbb{T}\right) \subset L^{\infty }\left( \mathbb{T}\right) $: \[ \sup_{z\in \mathbb{T}}\left\vert f\left( z\right) \right\vert \leq \int_{\mathbb{T}}\left\vert f\left( y\right) \right\vert dy+\int_{\mathbb{T}}\left\vert f^{\prime }\left( y\right) \right\vert dy\,. \]Then applying this to $f(x_2)$ and referring to $\left( \ref{fderivative}\right) $, we have \begin{eqnarray}\label{supbd} &&\sup_{x_{2}\in \left[ 0,1\right) }\int_{0}^{h}\int_{0}^{1}\left( \partial _{1}^{h^{\prime }}w\right) ^{2}dx_{1}dh^{\prime } \\ \notag &\leq &\int_{0}^{h}\int_{\mathbb{T}^{2}}\left( \partial _{1}^{h^{\prime }}w\right) ^{2}dx\,dh^{\prime } \\ \notag &&+\frac{1}{3}\int_{\mathbb{T}^{2}}\left\vert \partial _{1}^{h}w\right\vert ^{3}dx+2\int_{0}^{h}\int_{0}^{1}\left\vert \int_{0}^{1}\eta _{w}\partial _{1}^{-h^{\prime }}\left\vert \partial _{1}^{h^{\prime }}w\right\vert dx_{1}\right\vert dx_{2}\,dh^{\prime }.
\nonumber \end{eqnarray}Since \[ \int_{\mathbb{T}^{2}}\left( \partial _{1}^{h^{\prime }}w\right) ^{2}dx\leq \left( \int_{\mathbb{T}^{2}}\left\vert \partial _{1}^{h^{\prime }}w\right\vert ^{3}dx\right) ^{\frac{2}{3}}\leq {\color{black}C_1}\left( h^{\prime }\mathcal{E}( w) \right) ^{\frac{2}{3}}, \]and \begin{eqnarray*} &&\int_{0}^{1}\left\vert \int_{0}^{1}\eta _{w}\partial _{1}^{-h^{\prime }}\left\vert \partial _{1}^{h^{\prime }}w\right\vert dx_{1}\right\vert dx_{2} \\ &\leq &\left( \int_{\mathbb{T}^{2}}\left( \left\vert \partial _{1}\right\vert ^{-1}\eta _{w}\right) ^{2}dx\right) ^{\frac{1}{2}}\left( \int_{\mathbb{T}^{2}}\left( \partial _{1}\partial _{1}^{-h^{\prime }}\left\vert \partial _{1}^{h^{\prime }}w\right\vert \right) ^{2}dx\right) ^{\frac{1}{2}} \\ &\leq &C_1\left( \int_{\mathbb{T}^{2}}\left( \left\vert \partial _{1}\right\vert ^{-1}\eta _{w}\right) ^{2}dx\right) ^{\frac{1}{2}}\left( \int_{\mathbb{T}^{2}}\left( \partial _{1}w\right) ^{2}dx\right) ^{\frac{1}{2}}, \end{eqnarray*}$\left( \ref{supbd}\right) $ therefore implies \[ \sup_{x_{2}\in \left[ 0,1\right) }\int_{0}^{h}\int_{0}^{1}\left( \partial _{1}^{h^{\prime }}w\right) ^{2}dx_{1}dh^{\prime }\leq C_1\left( h^{\frac{5}{3}}\mathcal{E}^{\frac{2}{3}}(w)+h\mathcal{E}( w) \right)\,, \]which is $\left( \ref{b2sestimate}\right) $.
\end{proof} \begin{lemma} \label{lm:lpbd} {\color{black} If $w\in L^{2}\left( \mathbb{T}^{2}\right) $ and has vanishing mean in $x_{1}$, then the following estimates hold:} \begin{equation} \left\Vert w\right\Vert _{\overset{\cdot }{\mathcal{B}}_{3;1}^{s}}\leq C_1\mathcal{E}^{\frac{1}{3}} (w) \,, \text{ \ for every }s\in \left( {\color{black}0},\frac{1}{3}\right] , \label{b3sestimate} \end{equation} where $C_1$ is as in Lemma \ref{lemma 2.6};{\color{black} \begin{equation} \left\Vert w\right\Vert _{L^{p}\left( \mathbb{T}^{2}\right) }\leq C_2(p)\mathcal{E}^{\frac{2}{3\alpha}}(w)\big(\mathcal{E}(w) + \mathcal{E}^{\frac{2}{3}}(w) \big) ^{\frac{\alpha-2}{2\alpha}}\,, \label{lpestimate} \end{equation} for every $1\leq p < \frac{10}{3}$, where $\alpha= \max\{2,p \}$;} and {\color{black} \begin{equation} \left\Vert w\right\Vert _{L^{p}\left( \mathbb{T}^{2}\right) }\leq C_2(p){\e^{-\frac{1}{\alpha}}}\mathcal{E}_{\e}^{\frac{1}{\alpha}}(w)\big(\mathcal{E}_\varepsilon(w) + \mathcal{E}_\varepsilon^{\frac{2}{3}}(w) \big) ^{\frac{\alpha-2}{2\alpha}} \label{eqn:lpepsilon} \end{equation} for every $\e >0$ and $1\leq p <6$, where again $\alpha=\max\{2,p \}$. } \end{lemma} \begin{proof} The estimate $\left( \ref{b3sestimate}\right) $ follows from $\left( \ref{l3estimate}\right) $ and the definition of $\left\Vert \cdot \right\Vert _{\overset{\cdot }{\mathcal{B}}_{3;1}^{s}}.$ Turning to $\left( \ref{lpestimate}\right)$-{\color{black}\eqref{eqn:lpepsilon}, we first prove a preliminary estimate.
We} fix $x_{2}\in \left[ 0,1\right)$ and apply Lemma \ref{iortb9} to $f\left( z\right) =w\left( z,x_{2}\right) $ {\color{black}with {\color{black}$q=2,$ $p>2$} to deduce \[ \left( \int_{0}^{1}\left\vert w\left( x_{1},x_{2}\right) \right\vert ^{p}dx_{1}\right) ^{\frac{1}{p}}\leq C_2(p)\int_{0}^{1}\frac{1}{\color{black}h^{\frac{1}{2}-\frac{1}{p}}}\left( \int_{0}^{1}\left\vert \partial _{1}^{h}w\left( x_{1},x_{2}\right) \right\vert ^{\color{black}2}dx_{1}\right) ^{\color{black}\frac{1}{2}}\frac{dh}{h}. \] Integrating over $x_{2},$ we thus have by Minkowski's integral inequality \begin{align}\notag \left\Vert w\right\Vert _{L^{p}\left( \mathbb{T}^{2}\right) } &=\left( \int_{0}^{1}\int_{0}^{1}\left\vert w\left( x_{1},x_{2}\right) \right\vert ^{p}dx_{1}dx_{2}\right) ^{\frac{1}{p}} \\ \notag &\leq C_2(p) \left(\int_0^1 \left[\int_0^1 h^{\color{black}\frac{1}{p}-\frac{3}{2}}\left(\int_0^1\left \vert\partial_1^h w(x_1,x_2)\right \vert^{\color{black}2}dx_1\right)^{\color{black}\frac{1}{2}}dh\right]^pdx_2\right)^{\frac{1}{p}}\\ \notag &\leq C_2(p)\int_{0}^{1}h^{\color{black}\frac{1}{p}-\frac{3}{2}}\left[ \int_{0}^{1}\left( \int_{0}^{1}\left\vert \partial _{1}^{h}w\left( x_{1},x_{2}\right) \right\vert ^{\color{black}2}dx_{1}\right) ^{\color{black}\frac{p}{2}}dx_{2}\right]^{\frac{1}{p}} dh \\ \notag &\leq C_2(p)\int_{0}^{1}h^{\color{black}\frac{1}{p}-\frac{3}{2}}\sup_{x_2 \in [0,1)}\left( \int_{0}^{1}\left\vert \partial _{1}^{h}w\left( x_{1},x_{2}\right) \right\vert ^{\color{black}2}dx_{1}\right) ^{\frac{p-2}{2p}}\cdot \left( \int_{\mathbb{T}^{2}}\left\vert \partial _{1}^{h}w\left( x\right) \right\vert ^{2}dx\right) ^{\frac{1}{p}}dh\,. 
\end{align} {\color{black}The first term in the integrand can be estimated using \eqref{eqn:avebd} and \eqref{b2sestimate}, which gives \begin{align*} \sup_{x_2 \in [0,1)}\left( \int_{0}^{1}\left\vert \partial _{1}^{h}w\left( x_{1},x_{2}\right) \right\vert ^{2}dx_{1}\right) ^{\frac{p-2}{2p}} &\leq\sup_{x_2 \in [0,1)}\left( \frac{4}{h} \int_0^h \int_0^1 \left|\partial_1^{h'} w(x_1,x_2) \right|^2 \,dx_1dh' \right)^{\frac{p-2}{2p}} \\ &\leq C_1 \left(\mathcal{E}(w) + h^{\frac{2}{3}}\mathcal{E}^{\frac{2}{3}}(w) \right) ^{\frac{p-2}{2p}}\,, \end{align*} and therefore \begin{equation}\label{startingpoint} \| w \|_{L^p(\mathbb{T}^2)} \leq C_2(p) \big(\mathcal{E}(w) + \mathcal{E}^{\frac{2}{3}}(w) \big)^{\frac{p-2}{2p}}\int_0^1 h^{\frac{1}{p}-\frac{3}{2}}\left( \int_{\mathbb{T}^{2}}\left\vert \partial _{1}^{h}w\left( x\right) \right\vert ^{2}dx\right)^{\frac{1}{p}}dh\,. \end{equation} To prove \eqref{lpestimate} and \eqref{eqn:lpepsilon} we estimate the $h$-integrand in two different fashions before integrating. For \eqref{lpestimate}, using H{\"o}lder's inequality and \eqref{l3estimate}, we have the upper bound \begin{align}\notag \left( \int_{\mathbb{T}^{2}}\left\vert \partial _{1}^{h}w\left( x\right) \right\vert ^{2}dx\right) ^{\frac{1}{p}} \leq \left( \int_{\mathbb{T}^{2}}\left\vert \partial _{1}^{h}w\left( x\right) \right\vert ^{3}dx\right) ^{\frac{2}{3p}}\leq C_1 h^{\frac{2}{3p}} \mathcal{E}^{\frac{2}{3p}}(w)\,. \end{align} Inserting this into \eqref{startingpoint} and using $p\in (2,10/3)$ yields \begin{align*} \|w\|_{L^p(\mathbb{T}^2)} &\leq C_2(p) \mathcal{E}^{\frac{2}{3p}}(w)\big(\mathcal{E}(w) + \mathcal{E}^{\frac{2}{3}}(w) \big) ^{\frac{p-2}{2p}}\int_0^1 h^{\frac{5}{3p}- \frac{3}{2}}\,dh \\ \notag &=C_2(p)\mathcal{E}^{\frac{2}{3p}}(w)\big(\mathcal{E}(w) + \mathcal{E}^{\frac{2}{3}}(w) \big) ^{\frac{p-2}{2p}} \,, \end{align*} which is \eqref{lpestimate} when $p>2$. 
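The restriction $p<\frac{10}{3}$ in \eqref{lpestimate} comes precisely from the exponent $\frac{5}{3p}-\frac{3}{2}$ above: the $h$-integral converges if and only if $\frac{5}{3p}-\frac{3}{2}>-1$, that is, $p<\frac{10}{3}$. A quick symbolic check of this threshold (a sympy sketch, not part of the proof):

```python
import sympy as sp

h, p = sp.symbols('h p', positive=True)
exponent = sp.Rational(5, 3) / p - sp.Rational(3, 2)  # power of h in the integrand

# p = 3 < 10/3: exponent is -17/18 > -1, so the integral converges
assert sp.integrate(h ** exponent.subs(p, 3), (h, 0, 1)) == 18

# p = 4 > 10/3: exponent is -13/12 < -1, so the integral diverges
assert sp.integrate(h ** exponent.subs(p, 4), (h, 0, 1)) == sp.oo
```

The finite value of the integral is absorbed into the constant $C_2(p)$, which therefore blows up as $p \nearrow \frac{10}{3}$.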
For $p\leq 2$, we apply \eqref{lpestimate} with $p'>2$, use the fact that $\|w\|_{L^p}\leq \| w\|_{L^{p'}}$, and let $p' \searrow 2$. Now for \eqref{eqn:lpepsilon}, we instead use the fundamental theorem of calculus and Jensen's inequality to estimate \begin{align}\notag \left(\int_{\mathbb{T}^2} \left|\partial_1^h w(x) \right|^2\,dx \right)^{\frac{1}{p}} &\leq \left(h^2 \int_{\mathbb{T}^2} \left(\partial_1 w(x) \right)^2\,dx \right)^{\frac{1}{p}} \\ \notag &\leq h^{\frac{2}{p}}\varepsilon^{-\frac{1}{p}}\mathcal{E}_\varepsilon^{\frac{1}{p}}(w)\,. \end{align} When plugged into \eqref{startingpoint} and combined with \eqref{trivial bound}, this implies \begin{align*} \|w\|_{L^p(\mathbb{T}^2)} &\leq C_2(p) \varepsilon^{-\frac{1}{p}}\mathcal{E}_\varepsilon^{\frac{1}{p}}(w)\big(\mathcal{E}_\varepsilon(w) + \mathcal{E}_\varepsilon^{\frac{2}{3}}(w) \big) ^{\frac{p-2}{2p}}\int_0^1 h^{\frac{3}{p}- \frac{3}{2}}\,dh \\ \notag &=C_2(p)\varepsilon^{-\frac{1}{p}}\mathcal{E}_\varepsilon^{\frac{1}{p}}(w)\big(\mathcal{E}_\varepsilon(w) + \mathcal{E}_\varepsilon^{\frac{2}{3}}(w) \big) ^{\frac{p-2}{2p}} \end{align*} for $p\in (2,6)$. The case $p\in [1,2)$ is handled as in the proof of \eqref{lpestimate}.}} \end{proof} \begin{remark}\color{black} Generalizing the previous argument to the 3D smectics model from \cite{NY2} is open. An intermediate step would be analyzing the Aviles-Giga model (which is a special case of the energy in \cite{NY2}) on $\mathbb{T}^2$ using ideas of this type. \end{remark} \subsection{Compactness and existence} \label{sec:cptext} We prove compactness and existence theorems in this section.
First we define the admissible sets \[ \mathcal{A}_{\varepsilon }=\left\{ w\in L^{2}\left( \mathbb{T}^{2}\right) :\int_{0}^{1}w\left( x_{1},x_{2}\right) dx_{1}=0\text{ for each }x_{2}\in \left[ 0,1\right) \text{ and }\mathcal{E}_{\varepsilon }( w) <\infty \right\} \]{and \[ \mathcal{A}=\left\{ w\in L^{2}\left( \mathbb{T}^{2}\right) :\int_{0}^{1}w\left( x_{1},x_{2}\right) dx_{1}=0\text{ for each }x_{2}\in \left[ 0,1\right) \text{ and }\mathcal{E}( w) <\infty \right\} . \] Note that for any $\varepsilon>0$, \eqref{trivial bound} implies that $\mathcal{A}_\varepsilon \subset \mathcal{A}$.} We prove the following compactness result. \begin{proposition}\label{prop:l2cpt} If $\{w_{n}\}\subset \mathcal{A}$ satisfies $\mathcal{E}_{\varepsilon_n}( w_{n}) \leq {\color{black}C_3}<\infty$ {and $\sup_n| \varepsilon_n| \leq \varepsilon_0$}, then $\left\{ w_{n}\right\} $ is precompact in $L^{2}\left( \mathbb{T}^{2}\right) .$ \end{proposition} \begin{proof} By \eqref{lpestimate}, \begin{equation}\notag \left\Vert w_{n}\right\Vert _{\color{black}L^2\left( \mathbb{T}^{2}\right) }\leq C_2(p)\mathcal{E}^{\frac{2}{3\alpha}}(w_n)\big(\mathcal{E}(w_n) + \mathcal{E}^{\frac{2}{3}}(w_n) \big) ^{\frac{\alpha-2}{2\alpha}} , \end{equation} and {\color{black}thus, by \eqref{trivial bound} (that is, $\mathcal{E}(w)\leq \mathcal{E}_\varepsilon(w)$), $\left\Vert w_{n}\right\Vert _{L^{2}\left( \mathbb{T}^{2}\right) }\leq C_4$ depending on $p$ and $C_3$}.
As a consequence, we can find $w_{0}\in L^{2}\left( \mathbb{T}^{2}\right) $ such that up to a subsequence, $w_{n}\rightharpoonup w_{0}$ weakly in $L^{2}\left( \mathbb{T}^{2}\right) .$ Therefore, for each $k\in (2\pi\mathbb{Z})^2$, \begin{equation} \widehat{w_{n}}\left( k\right) \rightarrow \widehat{w_{0}}\left( k\right) ,\, \,\left\vert \widehat{w_{n}}\left( k\right) \right\vert \leq \left( \int_{\mathbb{T}^{2}}w_{n}^{2}\right) ^{\frac{1}{2}}\leq {\color{black}C_4},\,\text{ and }\left\vert \widehat{w_{n}^{2}}\left( k\right) \right\vert \leq \int_{\mathbb{T}^{2}}w_{n}^{2}\leq {\color{black}C_4^{2}}. \label{fourierbd} \end{equation}We therefore know that for any fixed $N\in \mathbb{N},$ \[ \sum_{\substack{ \left\vert k_{1}\right\vert \leq 2\pi N, \\ \left\vert k_{2}\right\vert \leq 2\pi N}}\left\vert \widehat{w_{n}}\left( k\right) -\widehat{w_{0}}\left( k\right) \right\vert ^{2}\rightarrow 0\text{ as }n\rightarrow \infty \,, \]and so the strong convergence of $w_{n}$ $\rightarrow w_{0}$ would follow if \begin{equation}\label{uniform decay} \sum_{\substack{ \left\vert k_{1}\right\vert >2\pi N \\ \textup{or} \\ \left\vert k_{2}\right\vert >2\pi N}}\left\vert \widehat{w_{n}}\left( k\right) \right\vert ^{2}\rightarrow 0\text{ uniformly in }n\text{ as }{N}\rightarrow \infty . \end{equation} The rest of the proof is dedicated to showing \eqref{uniform decay}. 
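The splitting behind \eqref{uniform decay} can be illustrated numerically: by Parseval's identity, the $L^2$ mass not captured by a low-frequency block is exactly the high-frequency tail, so weak convergence (which controls each fixed low mode) together with a tail bound uniform in $n$ upgrades to strong $L^2$ convergence. A small NumPy sketch of the frequency splitting (illustrative only; not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
w = rng.standard_normal((n, n))

def low_pass(w, N):
    """Zero out all Fourier modes with |k1| > N or |k2| > N."""
    what = np.fft.fft2(w)
    k = np.fft.fftfreq(w.shape[0], d=1.0 / w.shape[0])  # integer wavenumbers
    K1, K2 = np.meshgrid(k, k, indexing='ij')
    what[(np.abs(K1) > N) | (np.abs(K2) > N)] = 0.0
    return np.real(np.fft.ifft2(what))

# Parseval: the L^2 mass missed by the low-frequency block equals the
# high-frequency tail sum, which must be small uniformly in n.
N = 8
tail_sq = np.mean((w - low_pass(w, N)) ** 2)
what = np.fft.fft2(w) / w.size
k = np.fft.fftfreq(n, d=1.0 / n)
K1, K2 = np.meshgrid(k, k, indexing='ij')
tail_fourier = np.sum(np.abs(what[(np.abs(K1) > N) | (np.abs(K2) > N)]) ** 2)
assert np.isclose(tail_sq, tail_fourier)
```

In the proof, the role of the uniform tail bound is played by the Besov estimates, which control $|\widehat{w_n}(k)|$ for large $|k_1|$, and by the equation for $\eta_{w_n}$, which controls large $|k_2|$.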
{We fix $0<s<1/3$ and appeal to Remark \ref{IORT remark} and \eqref{b3sestimate} to calculate}\begin{eqnarray}\notag \int_{\mathbb{T}^{2}}\left\vert \left\vert \partial _{1}\right\vert ^{s}w_{n}\right\vert ^{2}&=&\dsum \left\vert k_{1}\right\vert ^{2s}\left\vert \widehat{w_{n}}\left( k\right) \right\vert ^{2}\leq C( s,\sfrac{1}{3}) \left\Vert w_{n}\right\Vert _{\overset{\cdot }{\mathcal{B}}_{2;1}^{1/3}}^{2}\\ &\leq &C( s,\sfrac{1}{3})\left\Vert w_{n}\right\Vert _{\overset{\cdot }{\mathcal{B}}_{3;1}^{1/3}}^{2}\leq C(s,\sfrac{1}{3}){\color{black}C_1}\mathcal{E}^{\frac{2}{3}}( w_{n}) \leq {\color{black}C_5}, \label{hsbd} \end{eqnarray}{\color{black}for suitable $C_5$.} We recall the formula \[ \eta _{w}=\partial _{2}w-\partial _{1}\frac{1}{2}w^{2}, \]which, in terms of Fourier coefficients, reads \[ \widehat{\eta _{w}}\left( k\right) =-ik_{2}\widehat{w}\left( k\right) +\frac{1}{2}ik_{1}\widehat{w^{2}}\left( k\right) . \]For $M_1$, $M_2\in \mathbb{N}$ to be chosen momentarily, we combine this with \eqref{fourierbd} and then \eqref{hsbd} to find \begin{align*} &\sum_{\substack{ \left\vert k_{1}\right\vert >2\pi M_1 \\ \textup{or} \\\left\vert k_{2}\right\vert >2\pi M_2}}\left\vert \widehat{w_{n}}\left( k\right) \right\vert ^{2} \\ &\leq \sum_{\left\vert k_{1}\right\vert >2\pi M_{1}}\left\vert \widehat{w_{n}}\left( k\right) \right\vert ^{2}+\sum_{\substack{ \left\vert k_{1}\right\vert \leq 2\pi M_{1} \\ \left\vert k_{2}\right\vert >2\pi M_{2}}}\left\vert \widehat{w_{n}}\left( k\right) \right\vert ^{2} \\ &\leq {\color{black}(2\pi M_{1})^{-2s}}\sum_{\left\vert k_{1}\right\vert >2\pi M_{1}}\left\vert k_{1}\right\vert ^{2s}\left\vert \widehat{w_{n}}\left( k\right) \right\vert ^{2} +2\sum_{\substack{ \left\vert k_{1}\right\vert \leq 2\pi M_{1} \\ \left\vert k_{2}\right\vert >2\pi M_{2}}}\frac{1}{\left\vert k_{2}\right\vert ^{2}}\left\vert \widehat{\eta _{w_{n}}}\left( k\right) \right\vert ^{2}\\ & \qquad +{\color{black}\frac{1}{2}}\sum_{\substack{ \left\vert k_{1}\right\vert 
\leq 2\pi M_{1} \\ \left\vert k_{2}\right\vert >2\pi M_{2}}}\frac{\left\vert k_{1}\right\vert ^{2}}{\left\vert k_{2}\right\vert ^{2}}\left\vert \widehat{w_{n}^{2}}\left( k\right) \right\vert ^{2} \\
&\leq (2\pi M_{1})^{-2s}\sum_{\left\vert k_{1}\right\vert >2\pi M_{1}}\left\vert k_{1}\right\vert ^{2s}\left\vert \widehat{w_{n}}\left( k\right) \right\vert ^{2}+\frac{2M_{1}^{2}}{M_{2}^{2}}\sum_{\substack{ \left\vert k_{1}\right\vert \leq 2\pi M_{1} \\ \left\vert k_{2}\right\vert >2\pi M_{2}}}\frac{1}{\left\vert k_{1}\right\vert ^{2}}\left\vert \widehat{\eta _{w_{n}}}\left( k\right) \right\vert ^{2}+\frac{C_4^{4}}{2}\sum_{\substack{ \left\vert k_{1}\right\vert \leq 2\pi M_{1} \\ \left\vert k_{2}\right\vert >2\pi M_{2}}}\frac{\left\vert k_{1}\right\vert ^{2}}{\left\vert k_{2}\right\vert ^{2}} \\
&\leq (2\pi M_{1})^{-2s}C_5+\frac{2M_{1}^{2}}{M_{2}^{2}}\times \varepsilon_0 \mathcal{E}_{\varepsilon_n}(w_n)+\frac{C_4^{4}}{2}\times 2(2\pi M_1)^3\times \frac{1}{\pi M_2}.
\end{align*}
Taking $M_{1}=M\in \mathbb{N}$ and $M_{2}=M^{4},$ we find that
\[
\sum_{\substack{ \left\vert k_{1}\right\vert >2\pi M \\ \textup{or} \\ \left\vert k_{2}\right\vert >2\pi M^4}}\left\vert \widehat{w_{n}}\left( k\right) \right\vert ^{2}\rightarrow 0\text{ uniformly in $n$ as }M\rightarrow \infty \,,
\]
which concludes the proof of \eqref{uniform decay}.
\end{proof}
\begin{corollary}\label{cor:lpconv}
If $\{w_{n}\}\subset \mathcal{A}$ satisfies $\mathcal{E}_{\varepsilon_n}( w_{n}) \leq C<\infty$ and $\sup_n| \varepsilon_n| \leq \varepsilon_0$, then $\left\{ w_{n}\right\}$ is precompact in $L^{p}\left( \mathbb{T}^{2}\right)$ for any $p\in [1,\frac{10}{3})$.
\end{corollary}
\begin{proof}
The conclusion follows from the precompactness of $\{w_n\}$ in $L^2(\mT^2)$, the bound \eqref{lpestimate} from Lemma \ref{lm:lpbd}, and interpolation.
\end{proof}
\begin{corollary} \label{cor:fixedlp}
If $\{w_{n}\}\subset \mathcal{A}$ satisfies $\mathcal{E}_{\varepsilon}( w_{n}) \leq C<\infty$ for a fixed $\e$, then $\left\{ w_{n}\right\}$ is precompact in $L^{p}\left( \mathbb{T}^{2}\right)$ for any $p\in [1,6)$.
\end{corollary}
\begin{proof}
We again appeal to the precompactness of $\{w_n\}$ in $L^2(\mT^2)$ (taking $\e_n=\e$ in Proposition \ref{prop:l2cpt}), but instead use the bound \eqref{eqn:lpepsilon} from Lemma \ref{lm:lpbd} before interpolating.
\end{proof}
As a direct application of Corollary \ref{cor:fixedlp}, we can prove an existence theorem for the original smectic energy $E_\varepsilon$ defined in \eqref{smecticenergy}. For any periodic $g: {\mathbb{T}^1}\to \mathbb{R}$, we define
\[
\widetilde{\mathcal{A}}_{\varepsilon,g }=\left\{ u\in W^{1,2}\left( \mathbb{T}^{2}\right) :E_{\varepsilon }\left( u\right) <\infty,\, \int_0^1 u(x_1,x_2)\, dx_1=g(x_2) \text{ for a.e. }x_2 \in [0,1) \right\} .
\]
We note that $\widetilde{\mathcal{A}}_{\varepsilon,g }$ is non-empty, for example, when $g$ is smooth.
\begin{corollary}\label{cor:exis}
For fixed $\varepsilon >0$, if $\widetilde{\mathcal{A}}_{\varepsilon,g}$ is non-empty, then there exists $u_{\varepsilon }\in \widetilde{\mathcal{A}}_{\varepsilon,g}$ such that $E_{\varepsilon }\left( u_{\varepsilon }\right) =\inf_{u\in \widetilde{\mathcal{A}}_{\varepsilon,g }}E_{\varepsilon }\left( u\right) .$
\end{corollary}
\begin{proof}
Since the admissible class is non-empty, we can let $u_{n}$ be a minimizing sequence for
$$
E_{\varepsilon }\left( u\right)=\frac{1}{2}\int_{\Omega }\frac{1}{\e }\left( \partial_2 u-\frac{1}{2}(\partial_1 u)^{2}\right) ^{2}+\e (\partial_{11} u)^{2}\,dx;
$$
{\color{black}in particular, the energies are uniformly bounded.
By Corollary \ref{cor:fixedlp}}, we have, up to a subsequence that we do not relabel,
\begin{equation}\label{L4 convergence}
\partial_1 u_{n}\rightarrow \partial_1 u_{0}\quad\textup{in $L^{4}\left( \mathbb{T}^{2}\right) $}
\end{equation}
for some $u_{0}$. Since $u_n$ is a minimizing sequence, the first term in $E_\varepsilon$ combined with the $L^4$-convergence of $\partial_1 u_n$ implies that $\{\partial_2 u_n \}$ are uniformly bounded in $L^2(\mathbb{T}^2)$. Thus, up to a further subsequence which we do not notate, there exists $v_0 \in L^2$ such that $\partial_2 u_n \rightharpoonup v_0$ weakly in $L^2(\mathbb{T}^2)$. Furthermore, by the uniqueness of weak limits, it must be that $v_0 = \partial_2 u_0$, so $u_0 \in W^{1,2}(\mathbb{T}^2)$. Expanding
\begin{equation*}
\int_{\Omega }\left( \partial_2 u_n-\frac{(\partial_1 u_n)^2}{2}\right)^2\,dx =\int_{\Omega }\left[ (\partial_2 u_n)^2-(\partial_1 u_n)^2\partial_2 u_n + \frac{1}{4}(\partial_1 u_n)^4\right]\,dx \,,
\end{equation*}
by \eqref{L4 convergence}, the lower semicontinuity of the $L^2$-norm under weak convergence, and the fact that
\[
\lim_{n\rightarrow \infty }\int_{\mathbb{T}^{2}}(\partial_1 u_{n})^2 \partial_2u_{n}\,dx=\int_{\mathbb{T}^{2}}(\partial_1 u_{0})^2 \partial_2u_{0}\,dx,
\]
we have
\begin{equation}\label{potential convergences}
\liminf_{n\to \infty}\int_{\mathbb{T}^2 }\left( \partial_2 u_n-\frac{(\partial_1 u_n)^2}{2}\right)^2\,dx \geq \int_{\mathbb{T}^2 }\left( \partial_2 u_0-\frac{(\partial_1 u_0)^2}{2}\right)^2\,dx .
\end{equation}
{\color{black}Also, the uniform $L^2$-bound on $\partial_{11} u_n$ and the uniqueness of limits imply that, up to a subsequence, $\partial_{11} u_n \rightharpoonup \partial_{11} u_0$ weakly in $L^2(\mathbb{T}^2)$, and thus
\begin{equation}\label{lsc of elastics}
\liminf_{n\to \infty}\int_{\mathbb{T}^2} (\partial_{11} u_n)^2 \,dx \geq \int_{\mathbb{T}^2} (\partial_{11} u_0)^2\,dx\,.
\end{equation}
Putting together \eqref{potential convergences}--\eqref{lsc of elastics}, we} conclude
\[
\inf_{\widetilde{\mathcal{A}}_{\varepsilon,g}}E_\varepsilon = \liminf_{n\rightarrow \infty }E_{\varepsilon }(u_{n}) \geq E_{\varepsilon }( u_{0}) .
\]
Finally, by Poincar\'e's inequality and the weak convergence of $\nabla u_n$ to $\nabla u_0$ in $L^2(\mathbb{T}^2)$, we conclude that $u_n$ converges to $u_0$ strongly in $L^2(\mathbb{T}^2)$. Hence
$$\int_0^1 u_0(x_1,x_2)\, dx_1 =\lim_{n \rightarrow \infty} \int_0^1 u_n(x_1,x_2)\, dx_1=g(x_2) \text{ for a.e. } x_2 \in [0,1),$$
and therefore $u_0$ belongs to $\widetilde{\mathcal{A}}_{\varepsilon,g}$ and is a minimizer.
\end{proof}
\section{Lower bound}\label{sec:lbd}
We consider the question of finding a limiting functional as a lower bound for $E_{\e}$ as $\e$ goes to zero. If $\{w_\varepsilon\}$ is a sequence with $\mathcal{E}_{\e} (w_{\e})\leq C$ and $\e\to 0$, then
\begin{equation}\label{distributionally to zero}
\int_{\mT^2}(|\partial_1|^{-1}\eta_{w_{\e}})^2\, dx \rightarrow 0.
\end{equation}
Therefore $\eta_{w_{\e}} \rightarrow 0$ distributionally, and the natural function space for the limiting problem is
$$
\mathcal{A}_0=\{w\in L^2(\mT^2):\, \eta_w=-\partial_1 \frac{1}{2}w^2+\partial_2 w =0 \text{ in } \mathcal{D}'\}.
$$
\subsection{Properties of BV functions}
Let $\Omega \subset \mathbb{R}^2$ be a bounded open set. We first recall the BV structure theorem. For $v\in [BV(\Omega)]^2$, the Radon measure $Dv$ can be decomposed as
$$
Dv=D^a v+D^c v+D^j v
$$
where $D^a v$ is the absolutely continuous part of $Dv$ with respect to Lebesgue measure $\mathcal{L}^2$ and $D^c v$, $D^j v$ are the Cantor part and the jump part, respectively. All three measures are mutually singular.
Furthermore, $D^a v=\nabla v \mathcal{L}^2\,\mres \Omega$ where $\nabla v$ is the approximate differential of $v$; $D^c v= D_s v \,\mres (\Omega\backslash S_v)$ and $D^j v= D_s v\, \mres{J_v} $, where $D_s v$ is the singular part of $Dv$ with respect to $\mathcal{L}^2$, $S_v$ is the set of approximate discontinuity points of $v$, and $J_v$ is the jump set of $v$. Since $J_v$ is countably $\mathcal{H}^1$-rectifiable, $D^j v$ can be expressed as $$ (v^+ -v^-)\otimes\nu \,\mathcal{H}^1\mres {J_v}, $$ where $\nu$ is orthogonal to the approximate tangent space at each point of $J_v$ and $v^+$, $v^-$ are the traces of $v$ from either side of $J_v$. Next we quote the following general chain rule formula for BV functions. \begin{theorem}(\cite[Theorem 3.96]{AmbFusPal00})\label{bvchain theorem} Let $w \in {\color{black}[BV(\Omega)]^2}$, $\Omega \subset \mathbb{R}^2$, and $f\in [C^1(\R^2)]^2$ be a Lipschitz function satisfying $f(0)=0$ if $|\Omega|=\infty$. Then $v=f\circ w$ belongs to $[BV(\Omega)]^2$ and \begin{equation}\label{eqn:bvchain} D v=\nabla f(w) \nabla w \mathcal{L}^2\,\mres \Omega+\nabla f(\tilde w) D^c w + (f(w^+)-f(w^-))\otimes \nu_w \mathcal{H}^{1}\mres { J_w}. \end{equation} Here $\tilde{w}(x)$ is the approximate limit of $w$ at $x$ and is defined on $\Omega \backslash J_w$. 
\end{theorem}
In what follows, we will use Theorem \ref{bvchain theorem} to compute the distributional divergence of such $f\circ w$ as the trace of the measure \eqref{eqn:bvchain}, that is,
\begin{equation}\label{divtrace}
\dive (f \circ w) = \mathrm{tr}\,(\nabla f(w) \nabla w) \mathcal{L}^2\,\mres \Omega + \mathrm{tr}\,(\nabla f(\tilde w) D^c w) + (f(w^+)-f(w^-))\cdot\nu_w \mathcal{H}^{1}\mres { J_w}
\end{equation}
as measures.
\begin{lemma}\label{A0 lemma}
If $w\in \mathcal{A}_0 \cap (BV \cap L^\infty)(\mathbb{T}^2)$, then denoting by $D_i^a w$ and $D_i^c w$ the $i$-th components of the measures $D^a w$ and $D^c w$, we have
\begin{equation}\notag
(-w D_1^a w + D_2^a w) = 0\quad \textit{and} \quad (-\tilde{w} D_1^c w + D_2^c w) = 0
\end{equation}
as measures, and, setting $\sigma(w) = (-w^2/2,w)$,
\begin{equation}\label{compatibility condition}
\left[ \sigma(w^+) - \sigma(w^-)\right]\cdot\nu_w = 0\quad\mathcal{H}^1\textit{-a.e. on }J_w\,.
\end{equation}
\end{lemma}
\begin{proof}
Let $\sigma(w) = (-w^2/2,w)$. By virtue of $w\in \mathcal{A}_0 \cap (BV \cap L^\infty)(\mathbb{T}^2)$ and \eqref{divtrace}, we know that, in the sense of distributions,
\begin{align}\notag
0 &= -\partial_1 \frac{1}{2}w^2+\partial_2 w \\ \notag
&= \dive \sigma(w) \\ \label{sum of three}
& = (-w D_1^a w + D_2^a w) + (-\tilde{w} D_1^c w + D_2^c w) + (\sigma(w^+) - \sigma(w^-))\cdot \nu_{w}\mathcal{H}^1\mres J_w\,.
\end{align}
But the measures $D^a w$, $D^c w$, and $D^j w$ are mutually singular, which implies that each individual term in \eqref{sum of three} is the zero measure. The lemma immediately follows.
\end{proof}
\subsection{Limiting functional and the proof of the lower bound}
Let
$$
\Sigma(w)=\left(-\frac{1}{3}w^3,\frac{1}{2}w^2\right)\,.
$$
If $w \in \mathcal{A}_0\cap(BV \cap L^{\infty})(\mT^2)$, we can apply the chain rule \eqref{eqn:bvchain} and Lemma \ref{A0 lemma} to $\Sigma(w)$, yielding
\begin{align} \notag
\dive \Sigma (w) &=w(-wD^a_1 w+D^a_2 w)+\tilde{w}(-\tilde{w}D_1^c w+D_2^c w) \\ \notag
&\qquad+\left ( \Sigma (w^{+})-\Sigma(w^{-})\right )\cdot \nu_w \mathcal{H}^1\mres J_{w}\\ \label{eqn:dsigw}
&= \left ( \Sigma (w^{+})-\Sigma(w^{-})\right )\cdot \nu_w \mathcal{H}^1\mres J_{w}\,.
\end{align}
\begin{remark}
Observe that if $w=u_x$ and $u_z=\frac{1}{2}u_x^2$, the entropy $\Sigma (w)$ here is exactly the entropy $\tilde{\Sigma}(\nabla u)=-(u_xu_z-\frac{1}{6}u_x^3, \frac{1}{2}u_x^2)$ which we used in the lower bound estimates in \cite{NY1}. In fact, the argument below also gives a proof of the lower bound on any domain $\Omega \subset \mathbb{R}^2$; the only necessary modification is that one does not use $|\partial_1|^{-1} \eta_w$ to represent the compression energy, but rather the original expression from \eqref{smecticenergy}.
\end{remark}
\begin{theorem}\label{thm:lower bound}
Let $\e_n \searrow 0$ and let $\{w_n\} \subset L^2(\mT^2)$ with $\partial_1 w_n \in L^2 (\mT^2)$ be such that
\begin{equation}
w_n \rightarrow w \text{ in } L^3(\mT^2),
\end{equation}
for some $w \in (BV \cap L^{\infty})(\mT^2)$. Then
\begin{equation}\label{liminfequation}
\liminf_{n \rightarrow \infty} \mathcal{E}_{\e_n}(w_n)\geq \int_{J_w}\frac{|w^+-w^-|^3}{12 \sqrt{1+\frac{1}{4}(w^++w^-)^2}} d\mathcal{H}^1.
\end{equation}
\end{theorem}
\begin{remark}
Due to recent progress on the rectifiability of the defect set for certain solutions of the Burgers equation \cite{Mar22}, the lower bound should in fact be valid among a larger class of limiting functions.
Specifically, if $w\in \mathcal{A}_0 \cap L^\infty(\mathbb{T}^2)$ and if, for every smooth convex entropy $\Phi:\mathbb{R}\to \mathbb{R}$ and corresponding entropy flux $\Psi:\mathbb{R}\to \mathbb{R}$ with $\Psi'(v) = -\Phi'(v)v$,
\begin{equation}\label{finite entropy condition}
\partial_1 \Psi(w) + \partial_2 \Phi(w) \textup{ is a finite Radon measure},
\end{equation}
then there exists an $\mathcal{H}^1$-rectifiable set $J_w$ with strong traces on either side such that
\begin{equation}\label{limiting energy}
|\dive \Sigma(w)| = \frac{|w^+-w^-|^3}{12 \sqrt{1+\frac{1}{4}(w^++w^-)^2}} \mathcal{H}^1\mres J_w.
\end{equation}
In particular, by substituting any entropy/entropy flux pair for $\Sigma$ in the argument below, one finds that for an energy-bounded sequence, any limiting function $w$ satisfies \eqref{finite entropy condition} and thus \eqref{limiting energy}. Technically, applying the results of \cite{Mar22} to deduce \eqref{limiting energy} would require extending the arguments there from $[0,T] \times \mathbb{R}$ to the bounded domain $\mathbb{T}^2$ as in \cite{MarARMA,MarAdvCalcVar} and proving that \eqref{finite entropy condition} implies that $w\in C^0([0,1];L^1(\mathbb{T}^1))$ (the continuous dependence on time being a technical assumption in \cite[Definition 1.1]{Mar22}). Regarding the regularity assumption, it is known (see e.g. \cite[Remark 5.2]{MarCalVarPDE}, \cite[pg. 191]{JOP}) that the argument of Vasseur \cite{Vas01} applies in this context and gives a representative of $w$ belonging to $C^0([0,1];L^1(\mathbb{T}^1))$. The extension of \cite{Mar22} to a bounded domain should not present serious difficulties, although we have not pursued the details further. {\color{black}The concentration of the entropy measures on an $\mathcal{H}^1$-rectifiable jump set should be a key step in obtaining the full $\Gamma$-convergence of $E_\varepsilon$ in \eqref{smecticenergy} to the limiting energy \eqref{limiting energy}.
The remaining obstacles to such a result are the construction of a recovery sequence for functions whose gradients do not belong to $BV \cap L^\infty$ (as the existing technology from \cite{ConDeL07,Pol07} uses both of those assumptions) and the strengthening of the results of \cite{Mar22} to include functions which do not belong to $L^\infty$.}
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:lower bound}]
Without loss of generality, we assume $\liminf_{n\rightarrow \infty} \mathcal{E}_{\e_n}(w_n) <\infty$, so that $w\in \mathcal{A}_0$ by \eqref{distributionally to zero}. Now for any smooth $v$, direct calculation shows
\begin{eqnarray}\label{eqn:divSig}
\dive \Sigma (v)&=&\partial_1(-\frac{1}{3}v^3)+\partial_2(\frac{1}{2}v^2)\\ \notag
&=&v(\partial_2 v-v\partial_1 v)=v\eta_v.
\end{eqnarray}
On the other hand, we can bound $\mathcal{E}_{\e}$ from below as follows:
\begin{eqnarray} \label{eqn:energybdbelow}
\mathcal{E}_{\e}(v)&=&\frac{1}{2}\int_{\mathbb{T}^2}\frac{1}{\e}\left(|\partial_1|^{-1}\left(\partial_2 v-\partial_1 \frac{1}{2} v^2\right)\right)^2 +\e (\partial_1 v)^2\, dx \\ \notag
& = & \frac{1}{2\e} \left\Arrowvert |\partial_1|^{-1} \eta_v\right\Arrowvert_{L^2(\mT^2)}^2+\frac{\e}{2}\left\Arrowvert \partial_1 v\right \Arrowvert^2_{L^2(\mT^2)}\\ \notag
& \geq & \left\Arrowvert |\partial_1|^{-1} \eta_v\right\Arrowvert_{L^2(\mT^2)}\left\Arrowvert \partial_1 v\right \Arrowvert_{L^2(\mT^2)}.
\end{eqnarray} From \eqref{eqn:divSig} and \eqref{eqn:energybdbelow}, given any smooth periodic function $\phi$, for any smooth ${\color{black}v}$, we have \begin{eqnarray}\label{eqn:upbdsmooth} &&\left|-\int_{{\mT}^2}\Sigma({\color{black}v})\cdot \nabla \phi\, dx \right|=\left|\int_{{\mT}^2}\dive \Sigma ({\color{black}v}) \phi \,dx\right| \\ \notag & \leq &\left(\int_{{\mT}^2}||\partial_1|^{-1} \eta_{{\color{black}v}}|^2 dx\right)^{\frac{1}{2}}\left(\int_{{\mT}^2}|\partial_1 ({\color{black}v} \phi)|^2dx\right)^{\frac{1}{2}}\\ \notag & \leq &\left\Arrowvert |\partial_1|^{-1} \eta_{{\color{black}v}}\right\Arrowvert_{L^2(\mT^2)}\left\Arrowvert \partial_1 {\color{black}v}\right \Arrowvert_{L^2(\mT^2)}\left\Arrowvert \phi\right\Arrowvert_{L^{\infty}(\mT^2)}\\ \notag &&+\left\Arrowvert |\partial_1|^{-1} \eta_{{\color{black}v}}\right\Arrowvert_{L^2(\mT^2)}\| {\color{black}v}\|_{L^2(\mT^2)}\left\Arrowvert \partial_1\phi\right\Arrowvert_{L^{\infty}(\mT^2)}\\ \notag & \leq & \mathcal{E}_{\e}({\color{black}v})\left\Arrowvert \phi\right\Arrowvert_{L^{\infty}(\mT^2)}+C\sqrt{\e}\mathcal{E}_{\e}({\color{black}v})^{\frac{1}{2}}\| {\color{black}v}\|_{L^2(\mT^2)}\left\Arrowvert \partial_1\phi\right\Arrowvert_{L^{\infty}(\mT^2)} \,. \end{eqnarray} By the density of smooth functions in $L^2(\mT^2)$, \eqref{eqn:upbdsmooth} holds for any ${\color{black}v} \in L^2(\mT^2)$ with $|\partial_1|^{-1} \eta_{\color{black}v}, \partial_1 {\color{black}v} \in L^2(\mT^2)$. Thus \begin{eqnarray} &&\left|-\int_{{\mT}^2}\Sigma(w_n)\cdot \nabla \phi \,dx \right|\\ \notag & \leq & \mathcal{E}_{\e_n}(w_n)\left\Arrowvert \phi\right\Arrowvert_{L^{\infty}(\mT^2)}+C\sqrt{\e_n}\mathcal{E}_{\e_n}(w_n)^{\frac{1}{2}}\| w_n\|_{L^2(\mT^2)}\left\Arrowvert \partial_1\phi\right\Arrowvert_{L^{\infty}(\mT^2)} \,. 
\end{eqnarray}
Letting $n \rightarrow \infty$, by the strong convergence of $w_n$ in $L^3(\mT^2)$, we have $\Sigma(w_n) \rightarrow \Sigma(w)$ in $L^1(\mT^2)$, so that
\begin{eqnarray}\label{eqn:limSig}
-\int_{{\mT}^2}\Sigma(w)\cdot \nabla \phi \,dx &=&-\lim_{n\rightarrow \infty}\int_{{\mT}^2}\Sigma(w_n)\cdot \nabla \phi\, dx \\ \notag
&\leq& \liminf_{n\rightarrow \infty}\mathcal{E}_{\e_n}(w_n)\left\Arrowvert \phi\right\Arrowvert_{L^{\infty}(\mT^2)}.
\end{eqnarray}
By taking the supremum over all smooth test functions $\phi$ with $\|\phi\|_{L^\infty}\leq 1$ in \eqref{eqn:limSig}, we see that $|\mathrm{div}\,\Sigma(w)|(\mathbb{T}^2)$ is a lower bound for the energies. To derive the explicit expression for this measure, we note that since $w\in \mathcal{A}_0 \cap (BV \cap L^\infty)(\mathbb{T}^2)$, \eqref{compatibility condition} and \eqref{eqn:dsigw} apply, so that
\begin{align*}\notag
|\mathrm{div}\,\Sigma(w)| = \left|\left[\Sigma(w^+) - \Sigma(w^-) \right]\cdot \frac{\left(\sigma(w^+)-\sigma(w^-)\right)^\perp}{|\sigma(w^+)-\sigma(w^-)|} \right| \mathcal{H}^1 \mres J_{w} \,.
\end{align*}
The right hand side of this equation can be calculated directly from the formulas for $\sigma(w)$ and $\Sigma(w)$ and simplifies to \eqref{liminfequation} (see \cite[Proof of Lemma 4.1, Equation (6.3)]{NY1}).
\end{proof}
\begin{remark}
When comparing with the lower bound proof from \cite{NY1}, this proof requires an extra integration by parts, as it does not rely on a pointwise lower bound on the energy density (see e.g. \cite[Equation (4.11)]{NY1}). The relationship between these two entropies and the structure of the corresponding arguments is exactly mirrored in the entropies devised in \cite{JinKoh00, DKMO01} for the Aviles--Giga problem: they are equal on the zero set of the potential term, and both give lower bounds, with only one of them (\cite{JinKoh00}) bounding the energy density from below pointwise.
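The algebra behind this simplification can also be verified numerically. The sketch below (an independent sanity check, assuming Python with \texttt{numpy}) draws random traces $w^\pm$ and confirms that $\big|[\Sigma(w^+)-\Sigma(w^-)]\cdot(\sigma(w^+)-\sigma(w^-))^\perp\big|/|\sigma(w^+)-\sigma(w^-)|$ equals the jump density $|w^+-w^-|^3\big/\big(12\sqrt{1+\tfrac14(w^++w^-)^2}\big)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(w):    # flux sigma(w) = (-w^2/2, w) from Lemma A0
    return np.array([-w**2 / 2, w])

def Sigma(w):    # entropy Sigma(w) = (-w^3/3, w^2/2)
    return np.array([-w**3 / 3, w**2 / 2])

def perp(v):     # rotate by 90 degrees: (a, b) -> (-b, a)
    return np.array([-v[1], v[0]])

for _ in range(1000):
    wp, wm = rng.uniform(-5, 5, size=2)
    if np.isclose(wp, wm):
        continue
    jump = sigma(wp) - sigma(wm)
    nu = perp(jump) / np.linalg.norm(jump)   # unit normal: [sigma] . nu = 0
    lhs = abs((Sigma(wp) - Sigma(wm)) @ nu)
    rhs = abs(wp - wm) ** 3 / (12 * np.sqrt(1 + 0.25 * (wp + wm) ** 2))
    assert np.isclose(lhs, rhs)
```

In exact arithmetic the numerator collapses to $(w^+-w^-)^4/12$, and dividing by $|\sigma(w^+)-\sigma(w^-)| = |w^+-w^-|\sqrt{1+\tfrac14(w^++w^-)^2}$ produces the density in \eqref{liminfequation}.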
\end{remark} \textbf{ACKNOWLEDGEMENTS} {\color{black}We thank both referees for useful comments that improved the article. M.N. also thanks Elio Marconi for helpful discussions regarding \cite{Mar22}.} M.N.'s research is supported by NSF grant RTG-DMS 1840314. X.Y.'s research is supported by {\color{black} Simons Collaboration Grant \#947054, together} with a Research Excellence Grant and a CLAS Dean's summer research grant from University of Connecticut. \FloatBarrier \bibliographystyle{siam} \bibliography{ref} \end{document}
2205.01847v2
http://arxiv.org/abs/2205.01847v2
Rates of estimation for high-dimensional multi-reference alignment
\documentclass[10pt]{article} \usepackage[margin=1in]{geometry} \RequirePackage{amsthm,amsmath,amsfonts,amssymb} \RequirePackage[round]{natbib} \RequirePackage[colorlinks,linkcolor=black,citecolor=blue,urlcolor=blue]{hyperref} \RequirePackage{enumerate,accents,rotating,bbm} \newtheorem{Theorem}{Theorem}[section] \newtheorem*{Theorem*}{Theorem} \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Assumption}[Theorem]{Assumption} \theoremstyle{definition} \newtheorem{Remark}[Theorem]{Remark} \setcounter{tocdepth}{1} \newcommand{\Ln}{\operatorname{Ln}} \newcommand{\iid}{\mathrm{iid}} \newcommand{\mX}{\mathcal X} \newcommand{\mY}{\mathcal Y} \newcommand{\mZ}{\mathcal Z} \newcommand{\mP}{\mathbb P} \newcommand{\conc}{\mathrm{conc}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Id}{\operatorname{Id}} \newcommand{\Unif}{\operatorname{Unif}} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \newcommand{\diag}{\operatorname{diag}} \newcommand{\Cov}{\operatorname{Cov}} \newcommand{\Var}{\operatorname{Var}} \newcommand{\Tr}{\operatorname{Tr}} \newcommand{\1}{\mathbbm{1}} \newcommand{\SO}{\operatorname{SO}} \newcommand{\HS}{\mathrm{HS}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \newcommand{\cP}{\mathcal{P}} \newcommand{\cS}{\mathcal{S}} \newcommand{\cE}{\mathcal{E}} \newcommand{\cF}{\mathcal{F}} \newcommand{\KL}{\mathrm{KL}} \newcommand{\TV}{\mathrm{TV}} \newcommand{\thetapK}{{{\theta'}^K}} \newcommand{\thetaK}{{\theta^K}} \newcommand{\rhosum}{\rho_{\mathrm{sum}}} \newcommand{\rhomax}{\rho_{\mathrm{max}}} \newcommand{\orbit}{\mathcal{O}} \newcommand{\der}{\mathrm{d}} \newcommand{\oracle}{\mathrm{oracle}} \newcommand{\opt}{\mathrm{opt}} \newcommand{\fm}{\mathrm{fm}} \newcommand{\MLE}{\mathrm{MLE}} \newcommand{\gen}{\mathrm{gen}} 
\newcommand{\rlower}{\underaccent{\bar}{r}} \newcommand{\rupper}{\bar{r}} \newcommand{\clower}{\underaccent{\bar}{c}} \newcommand{\cupper}{\bar{c}} \newcommand{\rprod}{r_{\mathrm{prod}}} \newcommand{\cI}{\mathcal{I}} \newcommand{\cA}{\mathcal{A}} \newcommand{\cB}{\mathcal{B}} \renewcommand{\P}{\mathbb{P}} \newcommand{\E}{\mathbb{E}} \newcommand{\eps}{\varepsilon} \newcommand{\Normal}{\mathcal{N}} \newcommand{\Arg}{\operatorname{Arg}} \renewcommand\footnotemark{} \newcommand{\revise}[1]{#1} \title{Rates of estimation for high-dimensional multi-reference alignment} \author{Zehao Dou, Zhou Fan, Harrison Zhou \footnote{Department of Statistics and Data Science, Yale University, USA. \newline \texttt{[email protected], [email protected], [email protected]}}} \begin{document} \maketitle \allowdisplaybreaks[4] \begin{abstract} We study the continuous multi-reference alignment model of estimating a periodic function on the circle from noisy and circularly-rotated observations. Motivated by analogous high-dimensional problems that arise in cryo-electron microscopy, we establish minimax rates for estimating generic signals that are explicit in the dimension $K$. In a high-noise regime with noise variance $\sigma^2 \gtrsim K$, \revise{for signals with Fourier coefficients of roughly uniform magnitude}, the rate scales as $\sigma^6$ and has no further dependence on the dimension. This rate is achieved by a bispectrum inversion procedure, and our analyses provide new stability bounds for bispectrum inversion that may be of independent interest. In a low-noise regime where $\sigma^2 \lesssim K/\log K$, the rate scales instead as $K\sigma^2$, and we establish this rate by a sharp analysis of the maximum likelihood estimator that marginalizes over latent rotations. A complementary lower bound that interpolates between these two regimes is obtained using Assouad's hypercube lemma. 
\revise{We extend these analyses also to signals whose Fourier coefficients have a slow power law decay.} \end{abstract} \tableofcontents \section{Introduction} Multi-reference alignment (MRA) refers to the problem of estimating an unknown signal from noisy samples that are subject to latent rotational transformations \citep{ritov1989estimating,bandeira2014multireference}. This problem has seen renewed interest in recent years, as a simplified model for molecular reconstruction in cryo-electron microscopy (cryo-EM) and related methods of molecular imaging \citep{bendory2020single,singer2020computational}. It arises also in various other applications in structural biology and image registration \citep{sadler1992shift,brown1992survey,diamond1992multiple}. Recent literature has established rates of estimation for MRA in fixed dimensions \citep{perry2019sample,bandeira2020optimal,abbe2018multireference,ghosh2021multi}, describing a rich picture of how these rates may depend on the signal-to-noise ratio and properties of the underlying signal. However, many applications of MRA involve high-dimensional signals, and there is currently limited understanding of optimal rates of estimation in high-dimensional settings. In the continuous MRA model---the focus of this work---the signal is a smooth periodic function $f$ on the circular domain $[-\pi,\pi)$. We observe independent samples of $f$ in additive white noise, where each sample has a uniformly random latent rotation of its domain \citep{bandeira2020optimal,fan2021maximum}. The true function $f$ is identifiable only up to rotation, and we will study its estimation under the rotation-invariant squared-error loss \begin{equation}\label{eq:functionloss} L(\hat{f},f)=\min_{\alpha \in [-\pi,\pi)} \int_{-\pi}^\pi \Big(\hat{f}(t)-f(t-\alpha \bmod 2\pi)\Big)^2\,\der t. 
\end{equation} In the closely related discrete MRA model, the signal is instead a vector $x \in \R^K$, observed in additive Gaussian noise with cyclic permutations of its coordinates \citep{bandeira2014multireference,perry2019sample}. The continuous and discrete models are similar, in that both rotational actions are diagonalized in the (continuous or discrete, resp.) Fourier basis, and these diagonal actions have similar forms. A recent line of work has studied rates of estimation for MRA in ``low dimensions'', treating as constant the dimension $K$ for discrete MRA, or the maximum Fourier frequency $K$ for continuous MRA. Many such results have specifically focused on a regime of high noise: In this regime, \cite{perry2019sample} showed that the squared-error risk for estimating ``generic'' signals scales with the noise standard deviation as $\sigma^6$. \cite{bandeira2020optimal} showed that this scaling for estimating a ``non-generic'' signal depends on its pattern of zero and non-zero Fourier coefficients, and derived rate-optimal upper and lower bounds over minimax classes of such signals. Rates of estimation for MRA with non-uniform rotations were studied in \cite{abbe2018multireference}, with a dihedral group of both rotations and reflections in \cite{bendory2022dihedral}, with sparse signals in \cite{ghosh2021multi}, and with down-sampled observations in a super-resolution context in \cite{bendory2020super}. It is empirically observed, for example in \citet[Section 5]{fan2021maximum}, that electric potential functions of protein molecules in cryo-EM applications may require basis representations with dimensions in the thousands to capture secondary structure, and even higher dimensions to achieve near-atomic resolution. Motivated by this observation, in this paper, we extend the above line of work to study the continuous MRA model in potentially high dimensions, in both high-noise and low-noise regimes. 
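On a uniform grid, the loss \eqref{eq:functionloss} can be approximated by restricting $\alpha$ to grid shifts: expanding the square shows that only the cross term depends on $\alpha$, so minimizing the loss is equivalent to maximizing a circular cross-correlation, which the FFT evaluates for all shifts simultaneously. The following sketch is illustrative only (it assumes Python with \texttt{numpy}) and is not an estimator from the literature:

```python
import numpy as np

def rotation_loss(f_hat, f):
    """Grid approximation of the rotation-invariant squared-error loss.

    f_hat and f are real samples on a uniform n-point grid of [-pi, pi).
    Restricting alpha to grid shifts and expanding the square, only the
    cross term depends on the shift, so minimizing the loss amounts to
    maximizing the circular cross-correlation, computed here via FFT.
    """
    dt = 2 * np.pi / len(f)
    corr = np.real(np.fft.ifft(np.fft.fft(f_hat) * np.conj(np.fft.fft(f))))
    return dt * (np.sum(f_hat**2) + np.sum(f**2) - 2 * np.max(corr))

t = np.linspace(-np.pi, np.pi, 512, endpoint=False)
f = np.cos(3 * t) + 0.5 * np.sin(5 * t)
g = np.roll(f, 37)                    # a cyclic rotation of f
assert rotation_loss(g, f) < 1e-10    # the loss vanishes for pure rotations
```

Since the true signal is identifiable only up to rotation, any estimator equal to a rotated copy of $f$ incurs (numerically) zero loss, as the final assertion checks.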
Our main results are described informally as follows: \revise{Let \[\theta^*=(\theta_{1,1}^*,\theta_{1,2}^*,\theta_{2,1}^*,\theta_{2,2}^*,\theta_{3,1}^*,\theta_{3,2}^*,\ldots)\] be the coefficients of $f$ in the real Fourier basis over $[-\pi,\pi)$, i.e., \[f(t) = \sum_{k=1}^{\infty} \frac{1}{\sqrt{\pi}}\theta_{k,1}^*\cos kt + \frac{1}{\sqrt{\pi}}\theta_{k,2}^*\sin kt,\] and let \begin{equation}\label{eq:polarcoordinates} (r_k \cos \phi_k, r_k\sin\phi_k)=(\theta_{k,1}^*,\theta_{k,2}^*) \end{equation} be the representation of the $k^\text{th}$ Fourier frequency in terms of the magnitude $r_k$ and phase $\phi_k$. Fixing a decay parameter $\beta \in [0,\frac{1}{2})$, we consider a class of signals $f$ represented by \[\Theta_\beta=\Big\{f:\,r_k \asymp k^{-\beta} \text{ for } k=1,\ldots,K,\; r_k=0 \text{ for all } k \geq K+1\Big\}\] where we bandlimit $f$ to its first $K$ Fourier frequencies.} Our results distinguish two separate signal-to-noise regimes for estimating $f$, based on the size of the entrywise noise variance $\sigma^2$ in the Fourier basis. We establish sharp minimax rates of estimation in both regimes, \revise{for sufficiently large sample size $N$}, that are explicit in their dependence on the dimension $K$.\revise{ \begin{Theorem*}[Informal] Let $\beta \in [0,\frac{1}{2})$. \begin{enumerate}[(a)] \item (High noise) If $\sigma^2 \gtrsim K^{1-2\beta}$ and $N \gtrsim K^{6\beta}\sigma^6\log K$, then \[\inf_{\hat{f}} \sup_{f \in \Theta_\beta} \E[L(\hat{f},f)] \asymp \frac{K^{4\beta}\sigma^6}{N}.\] \item (Low noise) If $\sigma^2 \lesssim K^{1-2\beta}/\log K$ and $N \gtrsim K^{1+2\beta}\sigma^2 \log K$, then \[\inf_{\hat{f}} \sup_{f \in \Theta_\beta} \E[L(\hat{f},f)] \asymp \frac{K\sigma^2}{N}.\] \end{enumerate} \end{Theorem*} \noindent We refer to Theorems \ref{thm:highnoise} and \ref{thm:lownoise} for precise statements of these results. 
Our signal class with power law decay $\beta<1/2$ is representative of a setting where the average power per Fourier frequency, $\|\theta^*\|^2/K \asymp K^{-2\beta}$, is of comparable magnitude to the power $r_k^2$ at a typical frequency $k \in \{1,\ldots,K\}$. Our analyses of the estimators that achieve these minimax rates apply more generally to signals of this form (cf.\ Theorems \ref{thm:oracleMoM} and \ref{thm-mle}).} \revise{For large $N$, this result implies that there is a sharp transition in the minimax estimation rate near the noise level $\sigma^2 \asymp K^{1-2\beta} \asymp \|\theta^*\|^2$, which separates the two signal-to-noise regimes of the problem. Such a transition may be anticipated by the results of \cite{bandeira2017estimation}, where $\sigma^2 \gtrsim \|\theta^*\|^2$ is the condition required to carry out the high-noise Taylor expansion of the chi-squared divergence, and of \cite{romanov2021multi}, which provided a sharp analysis of the sample complexity in the low-noise regime for an analogous discrete MRA model (see below). As $\sigma^2$ varies in the small parameter window from $K^{1-2\beta}/\log K$ to $K^{1-2\beta}$ between these ``low-noise'' and ``high-noise'' regimes, our result confirms that there must be a rapid increase in the minimax risk, from roughly the order $K^{2-2\beta}/N$ to $K^{3-2\beta}/N$.} In the high-noise regime where $\sigma^2\gtrsim \|\theta^*\|^2$, we show that the minimax rate is achieved by a variant of a third-order method-of-moments (MoM) procedure. The scaling with $\sigma^6$ matches previous results of \cite{perry2019sample}, and a notable new feature of the rate is its scaling with the dimension $K$---for example, when $\beta=0$, the rate has no explicit dependence on $K$.
In the MRA model, for functions having the Fourier coefficients (\ref{eq:polarcoordinates}), second-order moments correspond to the power spectrum \[\Big\{r_k^2:k=1,\ldots,K\Big\}\] and third-order moments to the Fourier bispectrum, whose phases are \[\Big\{\phi_{k+l}-\phi_k-\phi_l:k,l \in \{1,\ldots,K\} \text{ and } k+l \leq K\Big\}.\] Method-of-moments in this context is also known as bispectrum inversion \citep{sadler1992shift,bendory2017bispectrum}, which aims to estimate the Fourier phases $\{\phi_k\}$ from an estimate of the bispectrum. Results of \cite{bendory2017bispectrum,perry2019sample} imply that for signals where $r_k \neq 0$ for every $k=1,\ldots,K$, these phases are uniquely determined by the bispectrum. \revise{Our analyses quantify the conditioning of the linear system relating the bispectrum to the Fourier phases, which gives rise to the quantitative dependence of the estimation rate on $K$. To resolve phase ambiguities before solving this linear system, we also prove an important $\ell_\infty$ stability property of bispectrum inversion (cf.\ Lemma \ref{lemma:phasestability}), which is of independent interest.} Our definition of the low-noise regime $\sigma^2 \lesssim K^{1-2\beta}/\log K \asymp \|\theta^*\|^2/\log K$ and minimax rate in this regime are related to the work of \cite{romanov2021multi}, which studied instead the discrete MRA model in the asymptotic limit $K \to \infty$ and $(\sigma^2 \log K)/K \to 1/\alpha \in (0,\infty)$, for a Bayesian setting where $\theta^*$ has a standard Gaussian prior. This work showed a transition in the Bayes risk and associated sample complexity at the sharp threshold $\alpha=2$. The analysis in \cite{romanov2021multi} relied on the discreteness of the rotational model, analyzing a template matching procedure that exactly recovers the latent rotation for each sample.
For continuous MRA, this estimation of each rotation is possible only up to a per-sample error that is independent of the sample size $N$, and averaging the correspondingly rotated samples would yield an estimation bias that does not vanish with $N$. Our analysis shows that direct application of third-order method-of-moments also does not yield the optimal estimation rate across the entire low-noise regime. We instead analyze the maximum-likelihood estimator (MLE) that marginalizes over latent rotations, to obtain the minimax upper bound in this regime. \subsection{Further related literature} A body of work on MRA and related models focuses on the synchronization approach, which seeks to first estimate the latent rotation of each sample based on the relative rotational alignments between pairs of samples \citep{singer2011angular}. In the context of cryo-EM, this is known also as the ``common lines'' method \citep{singer2010detecting,singer2011three}. Algorithms developed and studied for estimating these pairwise alignments include spectral procedures \citep{singer2011angular,singer2011three,ling2022near}, semidefinite relaxations \citep{singer2011angular,singer2011three,bandeira2014multireference,bandeira2015non}, and iterative power method or approximate message passing approaches \citep{boumal2016nonconvex,perry2018message}. In high-noise regimes, synchronization-based estimation may fail to recover the latent rotations, or may lead to a biased and inconsistent estimate of the underlying signal. A separate line of work has studied alternative method-of-moments or maximum likelihood procedures for the MRA problem, which marginalize over the latent rotations \citep{abbe2018multireference,boumal2018heterogeneous,perry2019sample,bandeira2020optimal,ghosh2021multi,bendory2022dihedral}. 
These papers relate the rate of estimation in high noise to the order of moments needed to identify the true signal, which may differ depending on the sparsity pattern of its Fourier coefficients and the distribution of the latent random rotations. Related analyses have been performed for three-dimensional rotational actions, as arising in Procrustes alignment problems \citep{pumir2021generalized} and cryo-EM \citep{sharon2020method}. For cryo-EM, these methods encompass invariant-features approaches \citep{kam1980reconstruction} and expectation-maximization algorithms \citep{sigworth1998maximum,scheres2005maximum,scheres2012relion}. The works \cite{bandeira2017estimation,abbe2018estimation} studied method-of-moments estimators in problems with general rotational groups, where \cite{bandeira2017estimation} related the rates of estimation and numbers of moments needed to identify the true signal to the structure of the invariant polynomial algebra of the group action. In these general settings, \cite{brunel2019learning,fan2020likelihood,katsevich2020likelihood,fan2021maximum} studied also properties of the log-likelihood function, its optimization landscape, and the Fisher information matrix, relating the structure of the invariant algebra to asymptotic rates of estimation for the MLE. \subsection{Outline} Section \ref{sec-2} provides a formal statement of the continuous MRA model and of our main results. Section \ref{sec-3} provides some preliminaries that relate the loss function to the Fourier magnitudes and phases. Section \ref{sec-4} proposes and analyzes a third-order method-of-moments estimator, which determines the phases by inverting the Fourier bispectrum. This estimator attains the minimax upper bound for squared-error risk in the high-noise regime. Section \ref{sec-5} analyzes the maximum likelihood estimator that attains the minimax upper bound for squared-error risk in the low-noise regime. 
Section \ref{sec-6} gives a minimax lower bound using Assouad's lemma, which matches the upper bounds of Sections \ref{sec-4} and \ref{sec-5} while also interpolating between these two signal-to-noise regimes. \subsection{Notation} For a complex number $z=re^{i\theta} \in \C$, $\overline{z}=re^{-i\theta}$ is its complex conjugate. $\Arg z=\theta$ is its principal argument in the range $[-\pi,\pi)$. $\langle u,v \rangle=\sum_k u_k\overline{v_k}$ is the $\ell_2$ inner-product for real or complex vectors, and $\|u\|=\sqrt{\langle u,u \rangle}$ is the $\ell_2$ norm. $I_K \in \R^{K \times K}$ is the identity matrix in dimension $K$. $\Normal_\C(0,\sigma^2)$ is the complex mean-zero Gaussian distribution, with independent real and imaginary parts having real Gaussian distribution $\Normal(0,\frac{\sigma^2}{2})$. We write $a \wedge b=\min(a,b)$. For a function $F:\R^k \to \R$, we denote its gradient and Hessian by $\nabla F \in \R^k$ and $\nabla^2 F \in \R^{k \times k}$. For two distributions $P$ and $Q$, $D_{\KL}(P\|Q)=\int \log(\frac{P}{Q})dP$ is their Kullback-Leibler (KL) divergence. \section{Model and main results} \label{sec-2} \revise{Let $\cS^1=[-\pi,\pi)$ be identified with the unit circle, with addition modulo $2\pi$. Let $f:\cS^1 \to \R$ be a smooth periodic function on $\cS^1$. We represent rotations of the circle by angles $\alpha \in \cA=[-\pi,\pi)$,} and denote the function $f$ with domain rotated by $\alpha$ as \[f_{\alpha}(t)=f(t-\alpha \bmod 2\pi).\] We study estimation of $f$ from $N$ i.i.d.\ samples of the form \[f_{\alpha}(t)\,\der t+\sigma\,\der W(t), \qquad \alpha \sim \Unif([-\pi,\pi)).\] In each sample, $\alpha$ represents a different latent and uniformly random rotation of the domain of $f$, and the entire rotated function $f_\alpha$ is observed with additive continuous white noise $\sigma\,\der W(t)$ on the circle. 
\revise{An equivalent Gaussian sequence formulation of the model is discussed below.} We assume that $\sigma>0$ is a fixed and known noise level. As $f$ is identifiable only up to rotation, we consider the rotation-invariant loss (\ref{eq:functionloss}). Note that we may alternatively study a model where each rotated function $f_\alpha(t)$ is observed with Gaussian noise only at a discrete set of points $t \in \cS^1$ that are fixed or randomly sampled \citep{bandeira2020optimal,bendory2020super}. We study the above continuous observation model so as to abstract away aspects of the problem that are related to this discrete sampling. The mean value of $f$ over the circle is invariant to rotations, and is easily estimated by averaging across samples. Thus, let us assume for simplicity and without loss of generality that $f$ has known mean 0. Passing to the Fourier domain, we assume that $f$ is bandlimited to $K$ Fourier frequencies, i.e.\ $f$ admits the Fourier sequence representation \[f(t)=\sum_{k=1}^K \theta_{k,1} f_{k,1}(t)+\theta_{k,2} f_{k,2}(t), \qquad f_{k,1}(t)=\frac{1}{\sqrt{\pi}} \cos kt, \quad f_{k,2}(t)=\frac{1}{\sqrt{\pi}} \sin kt,\] where $\{f_{k,1},f_{k,2}:k=1,\ldots,K\}$ are orthonormal Fourier basis functions over $[-\pi,\pi)$, and \[\theta=(\theta_{1,1},\theta_{1,2},\ldots,\theta_{K,1},\theta_{K,2}) \in \R^{2K}\] are the Fourier coefficients of $f$. We assume implicitly throughout the paper that $K \geq 2$, and we are interested in applications with potentially large values of this bandlimit $K$. Importantly, due to the choice of Fourier basis, the $2K$-dimensional space of such bandlimited functions is closed under rotations of the circle. The rotation $f \mapsto f_\alpha$ induces a map from the Fourier coefficients of $f$ to those of $f_\alpha$, which we denote as $\theta \mapsto g(\alpha) \cdot \theta$ for an orthogonal matrix $g(\alpha) \in \R^{2K \times 2K}$. 
Explicitly, this map $\theta \mapsto g(\alpha) \cdot \theta$ is given separately for each Fourier frequency $k=1,\ldots,K$ by \begin{equation}\label{eq:thetarotation} \begin{pmatrix} \theta_{k,1} \\ \theta_{k,2} \end{pmatrix} \mapsto \begin{pmatrix} \cos k\alpha\; & -\sin k\alpha \\ \sin k\alpha & \cos k\alpha \end{pmatrix} \begin{pmatrix} \theta_{k,1} \\ \theta_{k,2} \end{pmatrix}, \end{equation} and $g(\alpha)$ is the block-diagonal matrix with these $2 \times 2$ blocks. Equivalently, writing \[(\theta_{k,1},\theta_{k,2})=(r_k \cos \phi_k, r_k\sin\phi_k)\] where $r_k \geq 0$ is the magnitude and $\phi_k \in \cA$ is the phase (identified modulo $2\pi$), this map is given for each $k=1,\ldots,K$ by \begin{equation}\label{eq:rphirotation} (r_k,\phi_k) \mapsto (r_k,\phi_k+k\alpha). \end{equation} The samples $f_\alpha(t)\,\der t+\sigma\,\der W(t)$ represented in this Fourier sequence space take the form \begin{equation}\label{eq:samples} y^{(m)}=g(\alpha^{(m)}) \cdot \theta+\sigma \eps^{(m)} \in \R^{2K} \text{ for } m=1,\ldots,N \end{equation} where $\alpha^{(1)},\ldots,\alpha^{(N)} \overset{\iid}{\sim} \Unif([-\pi,\pi))$, $\eps^{(1)},\ldots,\eps^{(N)} \overset{\iid}{\sim} \Normal(0,I_{2K})$, and these are independent. Writing $\hat{\theta} \in \R^{2K}$ for the Fourier coefficients of the estimated function $\hat{f}$ (which should likewise be bandlimited to $K$ Fourier frequencies), the loss (\ref{eq:functionloss}) is equivalent to \begin{equation}\label{eqn-loss} L(\hat{\theta},\theta)=\min_{\alpha \in \cA} \|\hat{\theta}-g(\alpha) \cdot \theta\|^2. \end{equation} In the remainder of this paper, we will consider the problem in this sequence form. \revise{We reserve the notation $\theta^*$ for the Fourier coefficients of the true unknown function. 
Fixing constants $\beta \in [0,\frac{1}{2})$ and $\clower,\cupper>0$, we consider a parameter space of ``generic'' Fourier coefficient vectors with power law decay rate $\beta$, given by \begin{equation}\label{eq:Thetar} \Theta_\beta=\Big\{\theta^* \in \R^{2K}:\,\clower k^{-\beta} \leq r_k(\theta^*) \leq \cupper k^{-\beta} \text{ for all } k=1,\ldots,K\Big\}. \end{equation} Here, ``generic'' refers to the quantitative lower bound for each value $r_k(\theta^*)$ that matches the assumed upper bound up to a constant factor. This condition may be viewed as an analogue of the genericity condition in \cite{perry2019sample} that all Fourier magnitudes are bounded above and below by a constant, in our high-dimensional setting of interest with potentially large $K$ and decaying Fourier magnitudes. Our main results are the following two theorems, which characterize the minimax rates of estimation over $\Theta_\beta$ in high-noise and low-noise regimes. \begin{Theorem}[Minimax risk in high noise]\label{thm:highnoise} Fix any $\beta \in [0,\frac{1}{2})$ and any constant $c_0>0$. If $\sigma^2 \geq c_0K^{1-2\beta}$, then for a constant $C_0>0$ depending only on $\beta,\clower,\cupper,c_0$ and for any $N \geq C_0K^{6\beta}\sigma^6\log K$, \[\inf_{\hat{\theta}} \sup_{\theta^* \in \Theta_\beta} \E_{\theta^*}[L(\hat{\theta},\theta^*)] \asymp \frac{K^{4\beta}\sigma^6}{N}.\] \end{Theorem} \begin{Theorem}[Minimax risk in low noise]\label{thm:lownoise} Fix any $\beta \in [0,\frac{1}{2})$. There exist constants $C_0,C_1>0$ depending only on $\beta,\clower,\cupper$ such that if $\sigma^2 \leq \frac{K^{1-2\beta}}{C_1\log K}$ and $N \geq C_0K^{1+2\beta}\sigma^2\log K$, then \[\inf_{\hat{\theta}} \sup_{\theta^* \in \Theta_\beta} \E_{\theta^*}[L(\hat{\theta},\theta^*)] \asymp \frac{K\sigma^2}{N}.\] \end{Theorem}} \noindent In both statements, $\E_{\theta^*}$ is the expectation over $N$ samples $y^{(1)},\ldots,y^{(N)}$ from the model (\ref{eq:samples}) with true parameter $\theta^*$. 
The infimum $\inf_{\hat{\theta}}$ is over all estimators $\hat{\theta}$ based on these samples, and $\asymp$ denotes upper and lower bounds up to constant multiplicative factors that depend only on $\beta,\clower,\cupper,c_0$. \section{Preliminaries} \label{sec-3} \subsection{Bounds for the loss} For $\phi,\phi' \in \cA=[-\pi,\pi)$, we define the circular distance \begin{equation}\label{eq:dA} |\phi-\phi'|_\cA=\min_{j \in \Z} |\phi-\phi'+2\pi j|. \end{equation} It is direct to check that $(\phi,\phi') \mapsto |\phi-\phi'|_\cA$ is a metric on $\cA$, satisfying the triangle inequality and the upper bound \begin{equation}\label{eq:dAupperbound} |\phi-\phi'|_{\cA} \leq \min(\pi, |\phi-\phi'|). \end{equation} We may express and bound the loss (\ref{eqn-loss}) in terms of the Fourier magnitudes and phases. \begin{Proposition}\label{prop-lossgeneral} Let $\theta=(r_k\cos \phi_k,r_k\sin\phi_k)_{k=1}^K$ and $\theta'=(r_k'\cos \phi_k',r_k'\sin\phi_k')_{k=1}^K$. Then \begin{equation}\label{eq:lossrphi} L(\theta,\theta')=\sum_{k=1}^K (r_k-r_k')^2+\inf_{\alpha \in \R} \sum_{k=1}^K 2r_kr_k'\Big[1-\cos(\phi_k-\phi_k'+k\alpha)\Big]. \end{equation} Consequently, for universal constants $C,c>0$, \begin{align*} &\sum_{k=1}^K (r_k-r_k')^2+c\inf_{\alpha \in \R} \sum_{k=1}^K r_kr_k'|\phi_k-\phi_k'+k\alpha|_\cA^2\\ &\hspace{1in}\leq L(\theta,\theta')\leq \sum_{k=1}^K (r_k-r_k')^2+C\inf_{\alpha \in \R} \sum_{k=1}^K r_kr_k'|\phi_k-\phi_k'+k\alpha|_\cA^2. \end{align*} \end{Proposition} \begin{proof} For any $\alpha \in \R$, we have \begin{align*} \|\theta'-g(\alpha)\cdot\theta\|^2 &= \sum_{k=1}^{K}\left[(r_k'\cos\phi_k' - r_k\cos(\phi_k+k\alpha))^2+(r_k'\sin\phi_k' - r_k\sin(\phi_k+k\alpha))^2\right] \\ &= \sum_{k=1}^{K} (r_k-r_k')^2+2r_kr_k'\left[1-\cos(\phi_k-\phi_k'+k\alpha)\right]. \end{align*} Taking the infimum over $\alpha$ gives (\ref{eq:lossrphi}). 
The consequent inequalities follow from the bounds $c|t|_{\cA}^2 \leq 1-\cos(t) \leq C|t|_{\cA}^2$ for universal constants $C,c>0$, applied with $t=\phi_k-\phi_k'+k\alpha$ for each $k$. \end{proof} \subsection{Complex representation}\label{sec:complexrepr} It will be notationally and conceptually convenient to pass between $\theta \in \R^{2K}$ and a complex representation by $\tilde\theta \in \C^K$. We use throughout \begin{equation}\label{eq:principalArg} \Arg z \in [-\pi,\pi) \end{equation} for the principal complex argument of $z \in \C$. Recalling the $k^\text{th}$ Fourier coefficient pair $(\theta_{k,1},\theta_{k,2})=(r_k\cos\phi_k,r_k\sin\phi_k)$, we set \begin{equation}\label{eq:complextheta} \tilde\theta_k=\theta_{k,1}+i\theta_{k,2}=r_ke^{i\phi_k} \in \C. \end{equation} For $\theta,\theta' \in \R^{2K}$, note then that \begin{equation}\label{eq:CRisometry} \langle \theta,\theta' \rangle =\sum_{k=1}^K \theta_{k,1}\theta_{k,1}'+\theta_{k,2}\theta_{k,2}' =\sum_{k=1}^K \Re \tilde\theta_k\overline{\tilde\theta_k'} =\frac{\langle \tilde\theta,\tilde\theta' \rangle+\langle \tilde\theta', \tilde\theta \rangle}{2} \end{equation} where the left side is the real inner-product, and the right side is the complex inner-product $\langle u,v \rangle=\sum_k u_k\overline{v_k}$. Similarly, we may represent the sample $y^{(m)} \in \R^{2K}$ from (\ref{eq:samples}) by $\tilde y^{(m)} \in \C^K$ where \[\tilde y^{(m)}_k=y^{(m)}_{k,1}+iy^{(m)}_{k,2} \in \C.\] Then, recalling the form of the rotational action (\ref{eq:rphirotation}), we have \begin{equation}\label{eq:tildey} \tilde y_k^{(m)}=r_k e^{i(\phi_k+k\alpha^{(m)})} +\sigma \tilde{\eps}_k^{(m)} \in \C \end{equation} where $\tilde{\eps}_k^{(m)}=\eps_{k,1}^{(m)}+i\eps_{k,2}^{(m)} \sim \Normal_\C(0,2)$ is complex Gaussian noise, independent across both frequencies $k=1,\ldots,K$ and samples $m=1,\ldots,N$. 
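As a quick numerical illustration (not part of the analysis), the agreement between the real rotation blocks (\ref{eq:thetarotation}) and multiplication by $e^{ik\alpha}$ in the complex representation (\ref{eq:rphirotation}) can be verified in a few lines of Python; the function names below are illustrative only.

```python
import cmath
import math

def rotate_real(theta, alpha):
    """Apply the block-diagonal rotation g(alpha) to the real coefficient
    vector theta = (theta_{1,1}, theta_{1,2}, ..., theta_{K,1}, theta_{K,2}),
    using the 2x2 rotation block acting on each frequency k."""
    K = len(theta) // 2
    out = []
    for k in range(1, K + 1):
        c, s = math.cos(k * alpha), math.sin(k * alpha)
        a, b = theta[2 * k - 2], theta[2 * k - 1]
        out += [c * a - s * b, s * a + c * b]
    return out

def rotate_complex(theta_c, alpha):
    """The same rotation in the complex representation:
    tilde-theta_k = r_k e^{i phi_k} is mapped to r_k e^{i(phi_k + k alpha)}."""
    return [z * cmath.exp(1j * k * alpha) for k, z in enumerate(theta_c, start=1)]
```

Packing each pair $(\theta_{k,1},\theta_{k,2})$ into $\tilde\theta_k=\theta_{k,1}+i\theta_{k,2}$ as in (\ref{eq:complextheta}), the two functions agree entry by entry.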
\section{Method-of-moments estimator} \label{sec-4} \revise{In this section, we analyze an estimator based on a third-order method-of-moments idea. We prove a general risk bound that depends on the smallest non-zero Fourier magnitude $\rlower=\min_k r_k(\theta^*)$ of the true signal, valid for any noise level $\sigma^2>0$, and we show in particular that this achieves the minimax upper bound of Theorem \ref{thm:highnoise} for signals $\theta^* \in \Theta_\beta$ in the high-noise regime.} Throughout this section, let us denote the Fourier magnitudes and phases of the true parameter as $\theta^*=(r_k\cos\phi_k,r_k\sin\phi_k)_{k=1}^K$ and write $\E$ for $\E_{\theta^*}$. Observe from (\ref{eq:tildey}) that for every $k=1,\ldots,K$, \[\E\big[|\tilde y_k^{(m)}|^2\big]=r_k^2+2\sigma^2.\] Then $N^{-1}\sum_{m=1}^N |\tilde y_k^{(m)}|^2-2\sigma^2$ provides an unbiased estimate of $r_k^2$. Furthermore, denote \begin{equation}\label{eq:indexset} \cI=\Big\{(k,l):k,l \in \{1,\ldots,K\} \text{ and } k+l \leq K\Big\}. \end{equation} Applying that $\{\tilde{\eps}_k^{(m)}:k=1,\ldots,K\}$ are independent with mean 0, and also $\E[(\tilde{\eps}_k^{(m)})^2]=0$ (cf.\ Proposition \ref{lemma-distribution-1} of Appendix \ref{appendix:MoM}), for any $(k,l) \in \cI$ including the case $k=l$ we have \begin{equation*} \begin{aligned} \E\left[\tilde y_{k+l}^{(m)}\cdot\overline{\tilde y_k^{(m)}}\cdot\overline{\tilde y_l^{(m)}}\right]&= \E\left[r_{k+l}e^{i(\phi_{k+l}+(k+l)\alpha^{(m)})} \cdot r_ke^{i(-\phi_k-k\alpha^{(m)})} \cdot r_l e^{i(-\phi_l-l\alpha^{(m)})}\right]\\ &=r_{k+l}r_kr_l e^{i(\phi_{k+l}-\phi_k-\phi_l)}. \end{aligned} \end{equation*} Thus the complex argument of $N^{-1} \sum_{m=1}^N \tilde y_{k+l}^{(m)} \cdot \overline{\tilde y_k^{(m)}}\cdot \overline{\tilde y_l^{(m)}}$ provides an estimate of the Fourier bispectrum component $\phi_{k+l}-\phi_k-\phi_l$ modulo $2\pi$, from which we may hope to recover the individual phases $\phi_k$. 
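These two moment identities are easy to confirm by simulation. The following Python sketch draws from the sequence model (\ref{eq:tildey}) with illustrative values of $K$, $r_k$, $\phi_k$, $\sigma$ (chosen here for demonstration, not taken from the analysis) and compares the empirical moments with their population values.

```python
import cmath
import math
import random

random.seed(0)
K, N, sigma = 3, 100_000, 0.5
r = [1.0, 0.8, 0.6]          # illustrative magnitudes r_k
phi = [0.3, -1.2, 2.0]       # illustrative phases phi_k

# Draw N samples tilde-y_k = r_k e^{i(phi_k + k alpha)} + sigma * eps_k,
# with alpha ~ Unif[-pi, pi) and eps_k complex Gaussian of variance 2.
samples = []
for _ in range(N):
    alpha = random.uniform(-math.pi, math.pi)
    samples.append([r[k] * cmath.exp(1j * (phi[k] + (k + 1) * alpha))
                    + sigma * complex(random.gauss(0, 1), random.gauss(0, 1))
                    for k in range(K)])

# Second moment: E|tilde-y_1|^2 = r_1^2 + 2 sigma^2 = 1.5 here.
p1 = sum(abs(y[0]) ** 2 for y in samples) / N
# Third moment, (k,l) = (1,1): E[tilde-y_2 * conj(tilde-y_1)^2]
#   = r_2 r_1^2 e^{i(phi_2 - 2 phi_1)}, the alpha-dependence cancelling.
b11 = sum(y[1] * y[0].conjugate() ** 2 for y in samples) / N
```

Up to Monte Carlo error, `p1` matches $r_1^2+2\sigma^2$ and the argument of `b11` matches the bispectrum phase $\phi_2-2\phi_1$.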
This motivates the following class of method-of-moments procedures: \begin{enumerate} \item[1.] For each $k=1,\ldots,K$, estimate $r_k$ by \begin{equation}\label{eq:hatrk} \hat{r}_k=\left(\frac{1}{N}\sum_{m=1}^N |\tilde y_k^{(m)}|^2 -2\sigma^2\right)_+^{1/2}. \end{equation} \item[2.] For each $(k,l) \in \cI$, compute \begin{equation}\label{eq:hatBkl} \hat{B}_{k,l}=\frac{1}{N}\sum_{m=1}^N \tilde y_{k+l}^{(m)} \cdot \overline{\tilde y_k^{(m)}} \cdot \overline{\tilde y_l^{(m)}}, \end{equation} and choose a version of its complex argument $\hat{\Phi}_{k,l}$ in $\R$ such that $\hat{\Phi}_{k,l}-\Arg \hat{B}_{k,l}=0 \bmod 2\pi$. \item[3.] Estimate $\phi=(\phi_k:k=1,\ldots,K)$ by the least-squares estimator \begin{equation}\label{eq:leastsquares} \hat{\phi}=\argmin_{\phi \in \R^K} \sum_{(k,l) \in \cI} \big(\hat{\Phi}_{k,l} -(\phi_{k+l}-\phi_k-\phi_l)\big)^2. \end{equation} Then estimate $\theta$ by $\hat{\theta}=(\hat{r}_k\cos\hat{\phi}_k,\hat{r}_k\sin\hat{\phi}_k)_{k=1}^K$. \end{enumerate} Here, (\ref{eq:leastsquares}) is defined using the squared difference over $\R$ rather than over the periodic domain $\cA$. Hence the final estimate $\hat{\theta}$ depends on the specific choice of argument $\hat{\Phi}_{k,l}$ in Step 2, which we have left ambiguous above. We proceed by first studying in Section \ref{sec:MoMoracle} an ``oracle'' version of this estimator, where $\hat{\Phi}_{k,l}$ is chosen in Step 2 using knowledge of the true phases $\phi_1,\ldots,\phi_K$ as the unique version of the argument of $\hat{B}_{k,l}$ for which $\hat{\Phi}_{k,l}-(\phi_{k+l}-\phi_k-\phi_l) \in [-\pi,\pi)$. This choice satisfies an exact distributional symmetry in sign. We leverage this symmetry to provide a risk bound for this oracle procedure. 
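A minimal end-to-end sketch of Steps 1--3 in Python follows. It replaces the careful choice of $\hat{\Phi}_{k,l}$ in Step 2 by the naive principal argument, which is adequate only when the true bispectrum phases lie well inside $(-\pi,\pi)$; the oracle and $\ell_\infty$ pilot constructions developed below handle the general case. The helper names (mom_estimate, gauss_solve, rotation_loss) are illustrative.

```python
import cmath
import math

def gauss_solve(A, b):
    """Solve A x = b for a nonsingular K x K system by Gaussian elimination
    with partial pivoting (sufficient for the small systems used here)."""
    K = len(b)
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(K):
        p = max(range(c, K), key=lambda i: abs(A[i][c]))
        A[c], A[p] = A[p], A[c]
        for i in range(c + 1, K):
            f = A[i][c] / A[c][c]
            for j in range(c, K + 1):
                A[i][j] -= f * A[c][j]
    x = [0.0] * K
    for c in reversed(range(K)):
        x[c] = (A[c][K] - sum(A[c][j] * x[j] for j in range(c + 1, K))) / A[c][c]
    return x

def mom_estimate(samples, sigma):
    """Steps 1-3 on complex samples tilde-y^{(m)} in C^K.  Step 2 uses the
    naive principal argument in [-pi, pi); resolving the general 2*pi
    wrap-around is the role of the oracle / l_infinity pilot."""
    N, K = len(samples), len(samples[0])
    # Step 1: magnitudes from the empirical power spectrum.
    r_hat = [math.sqrt(max(sum(abs(y[k]) ** 2 for y in samples) / N
                           - 2.0 * sigma ** 2, 0.0)) for k in range(K)]
    # Step 2: bispectrum phases over I = {(k,l): k + l <= K}.
    idx = [(k, l) for k in range(1, K + 1) for l in range(1, K + 1) if k + l <= K]
    Phi_hat = {}
    for (k, l) in idx:
        B = sum(y[k + l - 1] * y[k - 1].conjugate() * y[l - 1].conjugate()
                for y in samples) / N
        Phi_hat[(k, l)] = cmath.phase(B)
    # Step 3: least squares Phi_hat ~ phi_{k+l} - phi_k - phi_l.  Penalizing
    # the kernel direction v = (1,...,K) makes M^T M + v v^T invertible and
    # selects the least-squares solution orthogonal to v.
    v = list(range(1, K + 1))
    A = [[float(v[i] * v[j]) for j in range(K)] for i in range(K)]
    b = [0.0] * K
    for (k, l) in idx:
        row = [0.0] * K
        row[k + l - 1] += 1.0
        row[k - 1] -= 1.0
        row[l - 1] -= 1.0
        for i in range(K):
            b[i] += row[i] * Phi_hat[(k, l)]
            for j in range(K):
                A[i][j] += row[i] * row[j]
    phi_hat = gauss_solve(A, b)
    return [r_hat[k] * cmath.exp(1j * phi_hat[k]) for k in range(K)]

def rotation_loss(t_hat, t_star, grid=2000):
    """The rotation-invariant loss min_alpha ||t_hat - g(alpha) t_star||^2,
    approximated by grid search over alpha in the complex representation."""
    best = float("inf")
    for g in range(grid):
        a = -math.pi + 2.0 * math.pi * g / grid
        best = min(best, sum(abs(th - ts * cmath.exp(1j * (k + 1) * a)) ** 2
                             for k, (th, ts) in enumerate(zip(t_hat, t_star))))
    return best
```

In Step 3, adding the rank-one term $vv^\top$ for the kernel direction $v=(1,\ldots,K)$ returns the least-squares solution orthogonal to $v$, which is the same solution selected by the Moore--Penrose pseudo-inverse.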
To develop an actual estimator based on this oracle idea, we propose in Section \ref{sec:MoMopt} a method of mimicking this oracle using a pilot estimate of $\phi_1,\ldots,\phi_K$ that is obtained by first minimizing an $\ell_\infty$-type optimization objective. We prove an $\ell_\infty$-stability bound for bispectrum inversion, which implies that the resulting choice of $\hat{\Phi}_{k,l}$ coincides with the oracle choice with high probability as long as $N \gtrsim \frac{\sigma^6}{\rlower^6}\log K$. Consequently, this estimator attains the same estimation rate without oracle knowledge. \revise{We summarize these results as the following theorem.} \revise{ \begin{Theorem}\label{thm:oracleMoM} Let $\hat{\theta} \in \{\hat{\theta}^\oracle,\hat{\theta}^\opt\}$ be the above method-of-moments estimator, where $\hat{\Phi}_{k,l}$ is chosen either using the oracle of Section \ref{sec:MoMoracle} or the optimization procedure of Section \ref{sec:MoMopt}. Suppose $r_k \geq \rlower>0$ for each $k=1,\ldots,K$. There exist universal constants $C,C_0>0$ such that if $N \geq C_0(\frac{\sigma^6}{\rlower^6}\log K+\frac{\sigma^3}{\rlower^3}(\log K)^{3/2})$, then \begin{equation}\label{eq:oracleMoMbound} \E[L(\hat{\theta},\theta^*)] \leq CK \left(\frac{\sigma^2}{N}+\frac{\sigma^4}{N\rlower^2}\right) +\frac{C\|\theta^*\|^2}{K} \left(\frac{K\sigma^2}{N\rlower^2}+\frac{\sigma^6}{N\rlower^6}\right) \end{equation} \end{Theorem} \noindent We remark that for signals where $\|\theta^*\|^2/K \asymp \rlower^2$, as is the case for our signal class $\Theta_\beta$ of interest, this risk bound reduces to \[\E[L(\hat{\theta},\theta^*)] \leq \frac{C}{N} \left(K\sigma^2+\frac{K\sigma^4}{\rlower^2} +\frac{\sigma^6}{\rlower^4}\right)\]} \subsection{The oracle procedure}\label{sec:MoMoracle} Let us identify each entry of the true Fourier phase vector as a real value $\phi_k \in [-\pi,\pi)$, and set \begin{equation}\label{eq:Phidef} \Phi_{k,l}=\phi_{k+l}-\phi_k-\phi_l \in \R. 
\end{equation} We emphasize that this arithmetic is carried out in $\R$, not modulo $2\pi$. We consider an oracle version of the above method-of-moments procedure, where $\hat{\Phi}_{k,l}^\oracle \in [\Phi_{k,l}-\pi,\Phi_{k,l}+\pi)$ is chosen in Step 2 as the unique version of the complex argument of $\hat{B}_{k,l}$ that belongs to this range. Recalling the complex representation of $\theta$ in (\ref{eq:complextheta}) and defining \begin{equation}\label{eq:Bkl} B_{k,l}=\tilde\theta_{k+l} \cdot \overline{\tilde\theta_k} \cdot \overline{\tilde\theta_l} =r_{k+l}r_kr_l e^{i(\phi_{k+l}-\phi_k-\phi_l)} \in \C, \end{equation} note that this means, for the principal argument specified in (\ref{eq:principalArg}), \begin{equation}\label{eq:Argoracle} \hat{\Phi}_{k,l}^\oracle-\Phi_{k,l}=\Arg (\hat{B}_{k,l}/B_{k,l}) \in [-\pi,\pi). \end{equation} We will write $\hat{\Phi}^\oracle=\hat{\Phi}^\oracle(\phi)$ if we wish to make explicit the dependence of this definition on the phase vector $\phi$ of the true signal. We denote by $\hat{\phi}^\oracle$ the resulting least-squares estimate of $\phi$ in (\ref{eq:leastsquares}), and by $\hat{\theta}^\oracle$ the corresponding estimate of $\theta$. In the remainder of this subsection, we describe an argument showing that Theorem \ref{thm:oracleMoM} holds for $\hat{\theta}^\oracle$, deferring detailed proofs to Appendix \ref{appendix:MoM}. We divide the argument into the analysis of Step 1 of the MoM procedure for estimating the Fourier magnitudes $\{r_k\}_{k=1}^K$, Step 2 for estimating the bispectrum components $\{\Phi_{k,l}\}_{(k,l) \in \cI}$, and Step 3 for recovering the phases $\{\phi_k\}_{k=1}^K$ from the bispectrum.\\ {\bf Estimating $r_k$.} Standard Gaussian and chi-squared tail bounds show the following guarantee for estimating the Fourier magnitudes $r_k$ via $\hat{r}_k$, defined in (\ref{eq:hatrk}). 
\begin{Lemma}\label{lemma:rktail} For each $k=1,\ldots,K$ and a universal constant $c>0$, \begin{align} \P[\hat{r}_k \geq r_k(1+s)]&\leq 2\exp\left(-cNs^2\left(\frac{r_k^2}{\sigma^2} \wedge \frac{r_k^4}{\sigma^4}\right)\right) \text{ for all } s \geq 0,\label{eq:hatrupper}\\ \P[\hat{r}_k \leq r_k(1-s)]&\leq 2\exp\left(-cNs^2\left(\frac{r_k^2}{\sigma^2} \wedge \frac{r_k^4}{\sigma^4}\right)\right) \text{ for all } s \in [0,1).\label{eq:hatrlower} \end{align} \end{Lemma} \noindent Integrating these tail bounds yields the following immediate corollary. \begin{Corollary}\label{cor:MoMrriskbound} For each $k=1,\ldots,K$ and a universal constant $C>0$, \[\E[(\hat{r}_k-r_k)^2] \leq C\left(\frac{\sigma^2}{N}+ \frac{\sigma^4}{Nr_k^2}\right).\] \end{Corollary} {\bf Estimating $\Phi_{k,l}$.} Applying a concentration inequality for cubic polynomials in independent Gaussian random variables, derived from \cite{latala2006estimates}, we obtain the following tail bounds for estimating $B_{k,l}$ by $\hat{B}_{k,l}$ in Step 2, and for estimating the bispectrum component $\Phi_{k,l}$ by the oracle estimator $\hat{\Phi}_{k,l}^\text{oracle}$. \begin{Lemma}\label{lemma:Phikltail} Consider any $(k,l) \in \cI$ and suppose $r_{k+l},r_k,r_l \geq \rlower$. Then for universal constants $C,c>0$ and any $s>0$, \begin{equation} \label{eqn-B-final-bound} \P\Big[|\hat{B}_{k,l}/B_{k,l}-1| \geq s\Big] \leq C\exp\left(-c\left(\frac{Ns^2\rlower^2}{\sigma^2} \wedge \frac{Ns^2\rlower^6}{\sigma^6} \wedge \frac{(Ns)^{2/3}\rlower^2} {\sigma^2}\right)\right). \end{equation} Furthermore, for universal constants $C,c>0$ and any $s \in (0,\pi/2)$, \begin{equation}\label{eqn-Phi-final-bound} \P\Big[|\hat{\Phi}_{k,l}^\oracle-\Phi_{k,l}| \geq s\Big] \leq C\exp\left(-c\left(\frac{Ns^2\rlower^2}{\sigma^2} \wedge \frac{Ns^2\rlower^6}{\sigma^6} \wedge \frac{(Ns)^{2/3}\rlower^2}{\sigma^2}\right)\right). 
\end{equation} \end{Lemma} \begin{Corollary}\label{cor:riskPhikl} Consider any $(k,l) \in \cI$ and suppose $r_{k+l},r_k,r_l \geq \rlower$. Then for a universal constant $C>0$, \[\E[(\hat\Phi_{k,l}^\oracle-\Phi_{k,l})^2] \leq C\left( \frac{\sigma^2}{N\rlower^2}+\frac{\sigma^6}{N\rlower^6}\right).\] \end{Corollary} A key property of the oracle estimator $\hat{\Phi}_{k,l}^\text{oracle}$ is an exact distributional symmetry in sign, \begin{equation}\label{eq:symmetryinsign} \hat{\Phi}_{k,l}^\text{oracle}-\Phi_{k,l} \overset{L}{=}-\hat{\Phi}_{k,l}^\text{oracle}+\Phi_{k,l}. \end{equation} \revise{This implies that $\E[\hat{\Phi}_{k,l}^\text{oracle}-\Phi_{k,l}]=0$, and hence $\E[(\hat{\Phi}_{k,l}^\text{oracle}-\Phi_{k,l}) (\hat{\Phi}_{x,y}^\text{oracle}-\Phi_{x,y})]=0$ when these bispectral components do not have any overlapping index, as stated in part (a) of the following lemma. For $\Phi_{k,l}$ and $\Phi_{x,y}$ that have an overlapping index, the corresponding estimates $\hat{\Phi}_{k,l}^\text{oracle}$ and $\hat{\Phi}_{x,y}^\text{oracle}$ are not independent. Our proof of Theorem \ref{thm:oracleMoM} requires a sharper bound on the expected product of their errors than what is naively obtained from the preceding Corollary \ref{cor:riskPhikl} and Cauchy-Schwarz. Indeed, applying the representation (\ref{eq:Argoracle}) and a first-order Taylor approximation $\Arg z=\Im \Ln z \approx \Im (z-1)$ around $z=1$, we obtain $\E[(\hat\Phi_{k,l}^\oracle-\Phi_{k,l})(\hat\Phi_{x,y}^\oracle-\Phi_{x,y})] \approx \E[\Im(\hat{B}_{k,l}/B_{k,l}-1) \Im(\hat{B}_{x,y}/B_{x,y}-1)]$, and it is easily checked that this latter expectation is of size $O(\sigma^2/(N\rlower^2))$, exhibiting a cancellation of the $O(\sigma^6/(N\rlower^6))$ error. However, a naive bound for the error of this Taylor approximation remains of size $O(\sigma^6/(N\rlower^6))$.
Part (b) of the following lemma establishes a sharp bound for $\E[(\hat\Phi_{k,l}^\oracle-\Phi_{k,l})(\hat\Phi_{x,y}^\oracle-\Phi_{x,y})]$ by carrying out the Taylor expansion to a higher order $J \asymp N\rlower^6/\sigma^6$ with a remainder that is exponentially small in $N\rlower^6/\sigma^6$, and exhibiting a similar cancellation in expectation for all terms of the Taylor expansion up to this order $J$.} \begin{Lemma}\label{lemma:Phicovariance} Let $(k,l),(x,y) \in \cI$, and suppose $r_k,r_l,r_{k+l},r_x,r_y,r_{x+y} \geq \rlower$. For some universal constants $C,c>0$, \begin{enumerate}[(a)] \item If $\{k,l,k+l\}$ is disjoint from $\{x,y,x+y\}$, then \[\E[(\hat\Phi_{k,l}^\oracle-\Phi_{k,l})(\hat\Phi_{x,y}^\oracle-\Phi_{x,y})]=0.\] \item If $\{k,l,k+l\} \cap \{x,y,x+y\}$ has cardinality 1, then \begin{equation}\label{eq:card1bound} \Big|\E[(\hat\Phi_{k,l}^\oracle-\Phi_{k,l})(\hat\Phi_{x,y}^\oracle-\Phi_{x,y})]\Big| \leq C\left(\frac{\sigma^2}{N\rlower^2} +e^{-c\left(\frac{N\rlower^6}{\sigma^6} \wedge \frac{N^{2/3}\rlower^2}{\sigma^2}\right)}\right). \end{equation} \item For any $(k,l),(x,y) \in \cI$, \[\Big|\E[(\hat\Phi_{k,l}^\oracle-\Phi_{k,l})(\hat\Phi_{x,y}^\oracle-\Phi_{x,y})]\Big| \leq C\left(\frac{\sigma^2}{N\rlower^2}+\frac{\sigma^6}{N\rlower^6}\right).\] \end{enumerate} \end{Lemma} {\bf Estimating $\phi_k$.} We now translate the preceding bounds for estimating the Fourier bispectrum $\{\Phi_{k,l}\}$ to estimating the phases $\{\phi_k\}$ using the least squares procedure (\ref{eq:leastsquares}). Define the matrix $M \in \R^{\cI \times K}$ with rows indexed by the bispectrum index set $\cI$ from (\ref{eq:indexset}), such that the linear system (\ref{eq:Phidef}) may be expressed as $\Phi=M\phi$. That is, row $(k,l)$ of $M$ is given by $e_{k+l}-e_k-e_l$ where $e_k \in \R^K$ is the $k^\text{th}$ standard basis vector. 
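The incidence structure of $M$ is straightforward to reproduce numerically. The following Python sketch (with the illustrative choice $K=8$) checks the kernel direction and the quadratic-form sandwich implied by the eigenvalue bounds of Lemma \ref{lemma:Mproperties} below.

```python
import random

def bispectrum_matrix(K):
    """Rows e_{k+l} - e_k - e_l of M, indexed by I = {(k,l): k + l <= K}."""
    rows = []
    for k in range(1, K + 1):
        for l in range(1, K + 1):
            if k + l <= K:
                row = [0] * K
                row[k + l - 1] += 1
                row[k - 1] -= 1
                row[l - 1] -= 1
                rows.append(row)
    return rows

K = 8
M = bispectrum_matrix(K)
v = list(range(1, K + 1))                 # the kernel direction (1, 2, ..., K)
assert all(sum(r_[j] * v[j] for j in range(K)) == 0 for r_ in M)

# Quadratic-form sandwich implied by the eigenvalue bounds of M^T M: for any
# x orthogonal to v, (K+1) ||x||^2 <= ||M x||^2 <= (2K+1) ||x||^2.
random.seed(0)
for _ in range(100):
    x = [random.gauss(0, 1) for _ in range(K)]
    t = sum(xi * vi for xi, vi in zip(x, v)) / sum(vi * vi for vi in v)
    x = [xi - t * vi for xi, vi in zip(x, v)]      # project out v
    nx = sum(xi * xi for xi in x)
    nMx = sum(sum(r_[j] * x[j] for j in range(K)) ** 2 for r_ in M)
    assert (K + 1) * nx - 1e-9 <= nMx <= (2 * K + 1) * nx + 1e-9
```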
Then (\ref{eq:leastsquares}) is given explicitly by \begin{equation}\label{eq:Mdagger} \hat{\phi}=M^\dagger \hat{\Phi} \end{equation} where $M^\dagger$ is the Moore-Penrose pseudo-inverse. Recall that a rotation of the circular domain of $f$ induces the map (\ref{eq:rphirotation}), which does not change the bispectral components $\Phi_{k,l}$. This is reflected by the property that $(1,2,3,\ldots,K)$ belongs to the kernel of $M$. The following lemma shows that the kernel is exactly the span of this vector. Furthermore, $M$ is well-conditioned on the subspace orthogonal to this kernel, with all $K-1$ non-zero singular values of order $\sqrt{K}$. \begin{Lemma}\label{lemma:Mproperties} $M$ has rank exactly $K-1$, and the kernel of $M$ is the span of $(1,2,3,\ldots,K) \in \R^K$. All $K-1$ non-zero eigenvalues of $M^{\top}M \in \R^{K\times K}$ are integers in the interval $[K+1,2K+1]$. \end{Lemma} This yields the following corollary for estimation of the Fourier phases $\{\phi_k\}$, up to a global rotation that is represented by an additive shift in the direction of $(1,2,3,\ldots,K)$. \begin{Corollary}\label{cor:MoMphiriskbound} Suppose $r_k \geq \rlower$ for each $k=1,\ldots,K$. Then for universal constants $C,c>0$, \revise{ \begin{equation}\label{eq:phiriskbound} \E\left[\inf_{\alpha \in \R} \sum_{k=1}^K r_k^2 |\hat\phi_k^\oracle-\phi_k+k\alpha|_\cA^2\right] \leq \frac{C\|\theta^*\|^2}{K} \left(\frac{K\sigma^2}{N\rlower^2}+\frac{\sigma^6}{N\rlower^6} +Ke^{-c(\frac{N\rlower^6}{\sigma^6} \wedge \frac{N^{2/3}\rlower^2}{\sigma^2})}\right). \end{equation}} \end{Corollary} \begin{proof} By adding a multiple of $(1,2,3,\ldots,K)$ to $\phi$ and absorbing this shift into $\alpha$, we may assume without loss of generality that $\phi$ is orthogonal to $(1,2,3,\ldots,K)$. Under this assumption, we will then upper-bound the left side by choosing $\alpha=0$.
Since $\Phi=M\phi$, this implies $M^\dagger \Phi=M^\dagger M\phi=\phi$, the last equality holding because Lemma \ref{lemma:Mproperties} implies that $M^\dagger M$ is the projection orthogonal to $(1,2,3,\ldots,K)$. \revise{Set $D=\diag(r_k^2)_{k=1}^K \in \R^{K \times K}$.} Then applying $\Tr AB \leq \Tr B \cdot \|A\|_{\mathrm{op}}$ for positive semidefinite $A,B$, where $\|\cdot\|_{\mathrm{op}}$ is the $\ell_2 \to \ell_2$ operator norm, \revise{ \begin{align*} \E\left[\sum_{k=1}^K r_k^2|\hat\phi_k^\oracle-\phi_k|_\cA^2\right] &\leq \E[(\hat\phi^\oracle-\phi)^\top D(\hat\phi^\oracle-\phi)] =\E[(\hat\Phi^\oracle-\Phi)^\top M^{\dagger\top} DM^\dagger(\hat\Phi^\oracle-\Phi)]\\ &=\Tr M^{\dagger\top} DM^\dagger\E[(\hat\Phi^\oracle-\Phi) (\hat\Phi^\oracle-\Phi)^{\top}]\\ &\leq \Tr(M^{\dagger\top} DM^{\dagger})\cdot\|\E[(\hat\Phi^\oracle-\Phi)(\hat\Phi^\oracle-\Phi)^{\top}]\|_{\mathrm{op}}\\ &\leq \Tr D \cdot \|M^{\dagger}M^{\dagger\top}\|_{\mathrm{op}} \cdot\|\E[(\hat\Phi^\oracle-\Phi)(\hat\Phi^\oracle-\Phi)^{\top}]\|_{\mathrm{op}} \end{align*} Here, $\Tr D=\sum_{k=1}^K r_k^2=\|\theta^*\|^2$, and Lemma \ref{lemma:Mproperties} implies $\|M^{\dagger}M^{\dagger\top}\|_{\mathrm{op}} =\|(M^\top M)^\dagger\|_{\mathrm{op}} \leq 1/(K+1)$.} We have $\|A\|_{\mathrm{op}} \leq \|A\|_\infty$ for positive semidefinite $A$, where $\|A\|_\infty$ is the $\ell_\infty \to \ell_\infty$ operator norm given by the maximum absolute row sum. For a universal constant $C>0$ and each $(k,l) \in \cI$, there are at most $C$ pairs $(x,y) \in \cI$ for which $\{k,l,k+l\} \cap \{x,y,x+y\}$ has cardinality 2 or 3, and at most $CK$ pairs $(x,y) \in \cI$ for which $\{k,l,k+l\} \cap \{x,y,x+y\}$ has cardinality 1. 
Applying Lemma \ref{lemma:Phicovariance}(b) for those pairs for which this cardinality is 1, Lemma \ref{lemma:Phicovariance}(c) for those pairs for which this cardinality is 2 or 3, and Lemma \ref{lemma:Phicovariance}(a) for all remaining pairs, we obtain for different universal constants $C,c>0$ that \[\|\E[(\hat\Phi^\oracle-\Phi)(\hat\Phi^\oracle-\Phi)^{\top}]\|_\infty \leq C\left(\frac{K\sigma^2}{N\rlower^2}+\frac{\sigma^6}{N\rlower^6} +Ke^{-c(\frac{N\rlower^6}{\sigma^6} \wedge \frac{N^{2/3}\rlower^2}{\sigma^2})}\right).\] Combining the above concludes the proof. \end{proof} Let us remark that using Lemma \ref{lemma:Phicovariance}(b) in place of Lemma \ref{lemma:Phicovariance}(c) for the pairs where $\{k,l,k+l\}$ and $\{x,y,x+y\}$ overlap in one index is important for removing a factor of $K$ in the $\sigma^6/(N\rlower^6)$ component of the error, which will be the leading contribution to the overall estimation error in the high-noise regime. Theorem \ref{thm:oracleMoM} for $\hat\theta^\oracle$ now follows from the loss upper bound in Proposition \ref{prop-lossgeneral} in terms of the separate estimation errors for magnitude and phase, together with Corollaries \ref{cor:MoMrriskbound} and \ref{cor:MoMphiriskbound}. \subsection{Mimicking the oracle}\label{sec:MoMopt} We now consider the method-of-moments procedure where the choice of $\hat{\Phi}_{k,l}$ in Step 2 is determined instead by the following method: Compute a ``pilot'' estimate of $\phi$ as any minimizer of the $\ell_\infty$-type objective \begin{equation}\label{eq:tildephidef} \tilde{\phi}=\argmin_{\phi \in \cA^K} \max_{(k,l) \in \cI} |\Arg \hat{B}_{k,l}-(\phi_{k+l}-\phi_k-\phi_l)|_\cA, \end{equation} where a minimizer exists because $\cA$ is compact under $|\cdot|_{\cA}$. Identify each entry $\tilde{\phi}_k \in [-\pi,\pi)$ of this estimate as a real value, and set $\tilde{\Phi}_{k,l}=\tilde{\phi}_{k+l}-\tilde{\phi}_k-\tilde{\phi}_l$ where arithmetic is again carried out in $\R$, not modulo $2\pi$. 
Then choose $\hat{\Phi}_{k,l}^\opt \in [\tilde{\Phi}_{k,l}-\pi,\tilde{\Phi}_{k,l}+\pi)$ as the unique version of the complex argument of $\hat{B}_{k,l}$ belonging to this range. Let $\hat\phi^\opt$ be the resulting least-squares estimate of $\phi$ in (\ref{eq:leastsquares}), and let $\hat\theta^\opt$ be the corresponding estimate of $\theta$. \revise{We prove Theorem \ref{thm:oracleMoM} for $\hat\theta^\opt$ by showing that, with high probability, $\hat{\Phi}^\opt=\hat{\Phi}^\oracle(\phi')$ for some phase vector $\phi'$ that is equivalent to $\phi$. By ``equivalent'', we mean that $\phi$ and $\phi'$ represent the same Fourier phases up to rotation of the circular domain, i.e.\ there exists $\alpha \in \R$ for which \begin{equation}\label{eq:phiequivalence} |\phi_k'-\phi_k+k\alpha|_\cA=0 \text{ for each } k=1,\ldots,K. \end{equation} Then using $\hat{\Phi}^\opt$ achieves the same loss as using $\hat{\Phi}^\oracle(\phi)$.} The main additional ingredient in the proof is a deterministic $\ell_\infty$-stability bound for recovery of the Fourier phases from the bispectrum, stated in the following result. \begin{Lemma}\label{lemma:phasestability} Fix any $\delta \in (0,\pi/3)$ and $\phi,\phi' \in \R^K$. Denote $\Phi_{k,l}=\phi_{k+l}-\phi_k-\phi_l$ and $\Phi_{k,l}'=\phi_{k+l}'-\phi_k'-\phi_l'$. If \[|\Phi_{k,l}-\Phi_{k,l}'|_\cA \leq \delta \text{ for all } (k,l) \in \cI,\] then there exists some $\alpha \in \R$ such that \[|\phi_k-\phi_k'-k\alpha|_\cA \leq \delta \text{ for all } k=1,\ldots,K.\] \end{Lemma} \noindent This guarantees that, if $\tilde{\phi}$ yields a bispectrum $\tilde{\Phi}$ which is elementwise close to the true bispectrum $\Phi$ in the circular distance modulo $2\pi$, then $\tilde{\phi}$ must also be elementwise close to $\phi$ up to a rotation of the circular domain. 
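In code, the version selection described above is a standard phase-unwrapping step: shift $\Arg \hat{B}_{k,l}$ by the multiple of $2\pi$ that brings it within $\pi$ of the anchor $\tilde{\Phi}_{k,l}$. A minimal sketch (helper names are our own):

```python
import numpy as np

def circ(x):
    """Representative of x modulo 2*pi lying in [-pi, pi)."""
    return (x + np.pi) % (2.0 * np.pi) - np.pi

def version_near(arg_b, anchor):
    """The unique version of the angle arg_b (mod 2*pi) lying in [anchor - pi, anchor + pi)."""
    return anchor + circ(arg_b - anchor)
```

Here `anchor` plays the role of $\tilde{\Phi}_{k,l}$ and `arg_b` the role of $\Arg \hat{B}_{k,l}$; the output differs from `arg_b` by a multiple of $2\pi$ and lies within $\pi$ of the anchor.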
\revise{In other words, this is an $\ell_\infty \to \ell_\infty$ operator-norm bound for the matrix $M^\dagger$ from (\ref{eq:Mdagger}), where the $\ell_\infty$ norms are defined using the circular distance per coordinate and modulo the equivalence relation (\ref{eq:phiequivalence}).} The above guarantee is sufficient to show that if each quantity $\Arg \hat{B}_{k,l}$ estimates the true bispectral component $\Phi_{k,l}$ up to a small constant error in the circular distance $|\cdot|_\cA$, then its version $\hat{\Phi}_{k,l}^\text{opt}$ that is chosen using $\tilde{\phi}$ must coincide exactly with the oracle choice $\hat{\Phi}_{k,l}^\text{oracle}(\phi')$, based on a phase vector $\phi'$ that is equivalent to the true phase vector $\phi$. \begin{Corollary}\label{cor:optmimicsoracle} Let $\hat{B}_{k,l}$ be as defined in (\ref{eq:hatBkl}), and suppose $\phi \in \R^K$ is such that \begin{equation}\label{eq:linftyPhibound} |\Arg \hat{B}_{k,l}-(\phi_{k+l}-\phi_k-\phi_l)|_\cA <\pi/12 \text{ for every } (k,l) \in \cI. \end{equation} Then there exists $\phi'$ equivalent to $\phi$ such that $\hat\Phi^\opt=\hat\Phi^\oracle(\phi')$. \end{Corollary} \begin{proof} By the definition of the optimization procedure which defines $\tilde{\phi}$ in (\ref{eq:tildephidef}), \begin{equation}\label{eq:tildePhibound} \max_{(k,l) \in \cI} |\Arg \hat{B}_{k,l}-(\tilde{\phi}_{k+l}-\tilde{\phi}_k-\tilde{\phi}_l)|_\cA \leq \max_{(k,l) \in \cI} |\Arg \hat{B}_{k,l}-(\phi_{k+l}-\phi_k-\phi_l)|_\cA. \end{equation} By assumption, the right side is at most $\pi/12$. Then by the triangle inequality for $|\cdot|_\cA$, for every $(k,l) \in \cI$, we have $|(\tilde{\phi}_{k+l}-\tilde{\phi}_k-\tilde{\phi}_l) -(\phi_{k+l}-\phi_k-\phi_l)|_\cA<\pi/6$. Applying Lemma \ref{lemma:phasestability}, we obtain for some $\alpha \in \R$ and all $k=1,\ldots,K$ that $|\tilde{\phi}_k-\phi_k-k\alpha|_\cA<\pi/6$. 
This means that there exists $\phi'$ equivalent to $\phi$ for which, for the usual absolute value, \[|\tilde{\phi}_k-\phi_k'|<\pi/6 \text{ for all } k=1,\ldots,K.\] Then denoting $\Phi_{k,l}'=\phi_{k+l}'-\phi_k'-\phi_l'$, by the triangle inequality, $|\tilde\Phi_{k,l}-\Phi_{k,l}'|<\pi/2$ for all $(k,l) \in \cI$. Since $\phi'$ is equivalent to $\phi$, also \[|\Arg \hat{B}_{k,l}-(\phi_{k+l}'-\phi_k'-\phi_l')|_\cA =|\Arg \hat{B}_{k,l}-(\phi_{k+l}-\phi_k-\phi_l)|_\cA<\pi/12.\] So by the definition of $\hat{\Phi}^\oracle(\phi')$, we have $|\hat{\Phi}_{k,l}^\oracle(\phi')-\Phi_{k,l}'|<\pi/12$ for the usual absolute value. Then $|\hat{\Phi}_{k,l}^\oracle(\phi')-\tilde\Phi_{k,l}| <\pi/2+\pi/12<\pi$ for all $(k,l) \in \cI$, meaning that $\hat{\Phi}^\oracle(\phi')=\hat{\Phi}^\opt$. \end{proof} The tail bounds of Lemma \ref{lemma:Phikltail} may be used to show that the event (\ref{eq:linftyPhibound}) holds with high probability. On this event, the loss of $\hat{\theta}^\opt$ matches exactly that of $\hat{\theta}^\oracle$. Combining with a crude bound for the loss on the complementary event, which has exponentially small probability in $N$, we obtain Theorem \ref{thm:oracleMoM} for $\hat\theta^\opt$. \revise{ \begin{Remark} We study this two-stage estimation procedure primarily to enable a theoretical analysis of its risk. One may alternatively consider a more direct procedure where the least-squares objective (\ref{eq:leastsquares}) is defined using the squared distance $|\hat{\Phi}_{k,l}-(\phi_{k+l}-\phi_k-\phi_l)|_{\cA}^2$ over the periodic domain $\cA$, which would avoid the need to identify a version of $\hat{\Phi}_{k,l}$. However, analyzing the risk of such a procedure may require an $\ell_2$-analogue of the stability guarantee of Lemma \ref{lemma:phasestability}, which seems more challenging to obtain. 
Here, stability in the $\ell_\infty$ sense allows us to circumvent this issue by first estimating the oracle choices of $\hat{\Phi}_{k,l}$ using the $\ell_\infty$-objective (\ref{eq:tildephidef}). \end{Remark} } \revise{Finally, let us check that this estimation guarantee in Theorem \ref{thm:oracleMoM} coincides with our stated minimax rate in Theorem \ref{thm:highnoise} when restricted to parameters $\theta^* \in \Theta_\beta$ and to the high-noise regime. \begin{proof}[Proof of Theorem \ref{thm:highnoise}, upper bound] For $\theta^* \in \Theta_\beta$, we have $\rlower^2 \geq cK^{-2\beta}$ and $\|\theta^*\|^2 \leq CK^{1-2\beta}$, for ($\beta$-dependent) constants $C,c>0$. Thus the risk bound of Theorem \ref{thm:oracleMoM} reduces to \[\E[L(\hat{\theta}^\opt,\theta^*)] \leq \frac{C}{N}\left(K\sigma^2 +K^{1+2\beta}\sigma^4+K^{4\beta}\sigma^6\right) \leq \frac{C'K^{4\beta}\sigma^6}{N}\] for constants $C,C'>0$, the last inequality holding in the high-noise setting $\sigma^2 \geq c_0K^{1-2\beta}$. In this setting, there is a constant $c>0$ for which \[\frac{\sigma^6}{\rlower^6}\log K \geq \frac{c\sigma^3}{\rlower^3}(\log K)^{3/2}.\] Then the required condition for $N$ in Theorem \ref{thm:oracleMoM} is implied by $N \geq C_0'K^{6\beta}\sigma^6\log K$ for a sufficiently large constant $C_0'>0$, and this yields the minimax upper bound of Theorem \ref{thm:highnoise}. \end{proof}} We remark that Theorem \ref{thm:oracleMoM} gives an estimation guarantee not just in the high-noise regime, but for any noise level $\sigma^2$. In a regime of \emph{very} low noise \revise{$\sigma^2 \lesssim K^{-2\beta}$}, it also implies the upper bound of Theorem \ref{thm:lownoise}. 
\revise{ \begin{proof}[Proof of Theorem \ref{thm:lownoise}, upper bound, for $\sigma^2 \leq K^{-2\beta}$] For $\sigma^2 \leq K^{-2\beta}$, the risk bound of Theorem \ref{thm:oracleMoM} reduces instead to \[\E[L(\hat{\theta}^\opt,\theta^*)] \leq \frac{C}{N}\left(K\sigma^2 +K^{1+2\beta}\sigma^4+K^{4\beta}\sigma^6\right) \leq \frac{C'K\sigma^2}{N}\] The required condition for $N$ is implied by $N \geq C_0'K^{1+2\beta}\sigma^2\log K$ for a sufficiently large constant $C_0'>0$, and this yields the minimax upper bound of Theorem \ref{thm:lownoise}. \end{proof}} In high dimensions $K$ and the noise regime \revise{$K^{-2\beta} \ll \sigma^2 \ll K^{1-2\beta}/\log K$, (\ref{eq:oracleMoMbound}) exhibits the rate $K^{1+2\beta}\sigma^4/N$} which is larger than the minimax rate $K\sigma^2/N$. This arises from estimating the Fourier magnitudes $\{r_k\}$ without using phase information. In this regime, the above method-of-moments procedure becomes suboptimal. We will instead analyze in Section \ref{sec-5} the maximum likelihood estimator, to establish the minimax rate over the entire low-noise regime described by Theorem \ref{thm:lownoise}. \begin{Remark} This proof of the minimax upper bound is information-theoretic in nature, in that the pilot estimate used to mimic the oracle may require exponential time in $K$ to compute. We describe in Appendix \ref{appendix:freqmarching} an alternative ``frequency marching'' method, as discussed also in \cite[Section IV]{bendory2017bispectrum}, which provides a computationally efficient alternative to mimic the oracle at the expense of a larger requirement for the sample size $N$. This method sets $\tilde{\phi}_1=0$ and, for each $k=2,\ldots,K$, sets \[\tilde{\phi}_k=\Arg \hat{B}_{1,k-1}+\tilde{\phi}_{k-1} \bmod 2\pi\] to define a pilot estimator $\tilde{\phi}$ for $\phi$. 
We show that, resolving the phase ambiguity of $\hat{\Phi}$ using this pilot estimate and then re-estimating $\hat{\phi}$ by least squares, the resulting procedure achieves the same risk as described in Theorem \ref{thm:oracleMoM} under a requirement for $N$ that is larger by a factor of $K^2$. \end{Remark} \section{Maximum likelihood estimator}\label{sec-5} The method-of-moments procedure analyzed in the preceding section is not rate-optimal over the full low-noise regime described by Theorem \ref{thm:lownoise}. Motivated by this observation, and by the more common use of likelihood-based approaches in practice \citep{sigworth1998maximum,scheres2012relion}, in this section we analyze the maximum likelihood estimator (MLE) in the setting of Theorem \ref{thm:lownoise}. Define the log-likelihood function \begin{equation}\label{eq:ll} l(\theta,y)=\log p_\theta(y):=\log\left[\frac{1}{2\pi} \int_{-\pi}^\pi \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{2K}\exp\left(-\frac{\|y-g(\alpha)\cdot\theta\|^2}{2\sigma^2}\right)d\alpha\right] \end{equation} where $p_\theta(y)$ denotes the Gaussian mixture density that marginalizes over the unknown rotation. Then the MLE is given by \[\hat{\theta}^{\mathrm{MLE}}=\arg\min_{\theta \in \R^{2K}} R_N(\theta), \qquad R_N(\theta)=-\frac{1}{N}\sum_{m=1}^{N}l(\theta, y^{(m)}),\] where $R_N(\theta)$ denotes the negative empirical log-likelihood. \revise{For the results of this section, we isolate the following general condition for the Fourier magnitudes of $\theta^*$. \begin{Assumption}\label{assump:gen} There exists a constant $c_\gen>0$ such that for any $B \subseteq \{1,\ldots,K\}$ with $|B| \geq K/2$, \[\sum_{k \in B} r_k(\theta^*)^2 \geq c_\gen \|\theta^*\|^2.\] \end{Assumption} \noindent It is clear that this condition holds for our signal class $\Theta_\beta$ of interest.
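As a numerical illustration, the marginalized log-likelihood (\ref{eq:ll}) can be evaluated by quadrature over $\alpha$ when $K$ is small. The sketch below is our own construction, not part of the analysis; it uses the complex representation in which $g(\alpha)$ multiplies the $k$-th Fourier coefficient by $e^{ik\alpha}$, consistent with $\langle \theta,\,g(\alpha)\cdot\theta\rangle=\sum_k r_k(\theta)^2\cos k\alpha$, and checks the invariance of the marginalized density under a rotation of $\theta$:

```python
import numpy as np

def log_lik(theta, y, sigma, n_grid=4096):
    """log p_theta(y) as in (eq:ll): a Gaussian density in R^{2K}, averaged over a
    uniform rotation alpha on a grid, computed stably via log-sum-exp.
    theta, y: complex arrays of length K (real/imaginary parts give R^{2K})."""
    K = len(theta)
    alphas = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    k = np.arange(1, K + 1)
    rotated = np.exp(1j * np.outer(alphas, k)) * theta   # g(alpha) . theta for each alpha
    sq = np.sum(np.abs(y - rotated) ** 2, axis=1)        # ||y - g(alpha).theta||^2
    log_gauss = -K * np.log(2.0 * np.pi * sigma**2) - sq / (2.0 * sigma**2)
    m = log_gauss.max()
    return m + np.log(np.mean(np.exp(log_gauss - m)))    # (1/2pi) * integral over alpha

# Invariance p_theta(y) = p_{g(beta).theta}(y), since the rotation is marginalized out:
rng = np.random.default_rng(1)
K = 3
theta = rng.standard_normal(K) + 1j * rng.standard_normal(K)
y = theta + 0.5 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
beta = 2.0 * np.pi * 7 / 4096                            # a multiple of the grid spacing
theta_rot = np.exp(1j * np.arange(1, K + 1) * beta) * theta
print(log_lik(theta, y, 0.5), log_lik(theta_rot, y, 0.5))
```

Choosing `beta` as a multiple of the grid spacing makes the invariance exact for the discrete average, since the rotated grid is a relabeling of the original one.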
Our main result is then the following general risk bound for $\hat{\theta}^{\mathrm{MLE}}$ in the low-noise setting of Theorem \ref{thm:lownoise}.} \begin{Theorem}\label{thm-mle} \revise{Suppose Assumption \ref{assump:gen} holds. Then there exist constants $C,C_0,C_1>0$ depending only on $c_\gen$ such that if $\sigma^2 \leq \frac{K}{C_1\log K}$ and $N \geq C_0K(1+\frac{K\sigma^2}{\|\theta^*\|^2}) \log (K+\frac{\|\theta^*\|^2}{\sigma^2})$,} then \[\mathbb{E}_{\theta^*}[L(\hat{\theta}^{\mathrm{MLE}},\theta^*)] \leq \frac{CK\sigma^2}{N}.\] \end{Theorem} \noindent For \revise{$\sigma^2 \geq K^{-2\beta}$}, this requirement for $N$ reduces to that of Theorem \ref{thm:lownoise}, up to a modified constant $C_0>0$. Combined with the argument for \revise{$\sigma^2 \leq K^{-2\beta}$} in Section \ref{sec:MoMopt}, this immediately implies the minimax upper bound of Theorem \ref{thm:lownoise}. In the remainder of this section, we prove Theorem \ref{thm-mle}. The proof applies a classical idea of second-order Taylor expansion for the log-likelihood function. Observe first that the negative log-likelihood $R_N(\theta)$ satisfies the rotational invariance $R_N(\theta)=R_N(g(\alpha) \cdot \theta)$ for all $\alpha \in \cA$. Thus $\hat{\theta}^{\mathrm{MLE}}$ is defined only up to rotation, and all rotations of $\hat{\theta}^{\mathrm{MLE}}$ incur the same loss. To fix this rotation and ease notation in the analysis, let us denote by $\hat{\theta}^{\mathrm{MLE}}$ the rotation of the MLE such that \begin{equation}\label{eq:MLEversion} \|\hat{\theta}^\mathrm{MLE}-\theta^*\|^2= \min_{\alpha \in \cA} \|g(\alpha) \cdot \hat{\theta}^\mathrm{MLE}-\theta^*\|^2 =L(\hat{\theta}^\mathrm{MLE},\theta^*), \end{equation} where $\theta^*$ is the true parameter. Since $\hat{\theta}^{\mathrm{MLE}}$ minimizes $R_N(\theta)$, we have $0 \geq R_N(\hat{\theta}^{\mathrm{MLE}})-R_N(\theta^*)$. 
Then Taylor expansion (for this rotation of $\hat{\theta}^\mathrm{MLE}$ that satisfies (\ref{eq:MLEversion})) gives \begin{align} 0 &\geq R_N(\hat{\theta}^{\mathrm{MLE}})-R_N(\theta^*)\nonumber\\ &=\nabla R_N(\theta^*)^\top(\hat{\theta}^{\mathrm{MLE}}-\theta^*) +\frac{1}{2}(\hat{\theta}^{\mathrm{MLE}}-\theta^*)^\top \nabla^2 R_N(\tilde{\theta})(\hat{\theta}^{\mathrm{MLE}}-\theta^*) \label{eq:taylor} \end{align} where $\tilde{\theta} \in \R^{2K}$ is on the line segment between $\theta^*$ and $\hat{\theta}^{\mathrm{MLE}}$. Heuristically, Theorem \ref{thm-mle} will follow from the bounds \begin{align} \Big|\nabla R_N(\theta^*)^\top(\hat{\theta}^{\mathrm{MLE}}-\theta^*)\Big| &\lesssim \sqrt{\frac{K}{N\sigma^2}} \cdot \|\hat{\theta}^{\mathrm{MLE}}-\theta^*\|, \label{eq:heuristicgrad}\\ (\hat{\theta}^{\mathrm{MLE}}-\theta^*)^\top \nabla^2 R_N(\tilde{\theta})(\hat{\theta}^{\mathrm{MLE}}-\theta^*) &\gtrsim \frac{1}{\sigma^2} \cdot \|\hat{\theta}^{\mathrm{MLE}}-\theta^*\|^2. \label{eq:heuristichess} \end{align} Applying these to (\ref{eq:taylor}) and rearranging yields the desired result $\|\hat{\theta}^{\mathrm{MLE}}-\theta^*\|^2 \lesssim K\sigma^2/N$. The bulk of the proof lies in establishing an appropriate version of (\ref{eq:heuristichess}). This requires a delicate argument for large $K$, as naive uniform concentration and Lipschitz bounds for $\nabla^2 R_N(\theta) \in \R^{2K \times 2K}$ fail to establish (\ref{eq:heuristichess}) in the full ranges of $\sigma^2$ and $N$ that are specified by Theorem \ref{thm-mle}. In the remainder of this section, we describe the components of this argument, deferring detailed proofs to Appendix \ref{appendix:MLE}. 
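Explicitly, writing (\ref{eq:heuristicgrad}) and (\ref{eq:heuristichess}) with constants $C,c>0$ and substituting into (\ref{eq:taylor}), the rearrangement is:

```latex
0 \geq -C\sqrt{\frac{K}{N\sigma^2}}\,\|\hat{\theta}^{\mathrm{MLE}}-\theta^*\|
+\frac{c}{2\sigma^2}\,\|\hat{\theta}^{\mathrm{MLE}}-\theta^*\|^2
\quad\Longrightarrow\quad
\|\hat{\theta}^{\mathrm{MLE}}-\theta^*\|
\leq \frac{2C}{c}\,\sigma^2\sqrt{\frac{K}{N\sigma^2}}
=\frac{2C}{c}\sqrt{\frac{K\sigma^2}{N}}.
```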
\subsection{Gradient and Hessian of the log-likelihood} To simplify the model, observe that each sample $y^{(m)}$ satisfies the equality in law \[y^{(m)}=g(\alpha^{(m)}) \cdot \theta^*+\sigma \eps^{(m)} \overset{L}{=}g(\alpha^{(m)}) \cdot (\theta^*+\sigma \eps^{(m)}).\] Furthermore, $g(\alpha^{(m)})^{-1} g(\alpha)=g(\alpha-\alpha^{(m)})$ where, if $\alpha \sim \Unif([-\pi,\pi))$ is a uniformly random rotation, then $\alpha-\alpha^{(m)}$ is also uniformly random for any fixed $\alpha^{(m)}$. Applying these observations to the form (\ref{eq:ll}) of the log-likelihood function, we obtain the equality in law for the negative log-likelihood process \begin{equation}\label{eq:RNequalinlaw} \Big\{R_N(\theta):\theta \in \R^{2K}\Big\} \overset{L}{=} \left\{-\frac{1}{N}\sum_{m=1}^N l(\theta,\,\theta^*+\sigma \eps^{(m)}):\theta \in \R^{2K}\right\}. \end{equation} That is to say, having defined the log-likelihood function to marginalize over a uniformly random latent rotation, the distribution of $\{R_N(\theta):\theta \in \R^{2K}\}$ is the same under the model $y^{(m)}=g(\alpha^{(m)}) \cdot \theta^*+\sigma \eps^{(m)} \sim p_{\theta^*}$ as under a model $y^{(m)}=\theta^*+\sigma \eps^{(m)}$ without latent rotations. Thus, in the analysis, we will henceforth assume the simpler model \begin{equation}\label{eq:simplifyy} y^{(m)}=\theta^*+\sigma \eps^{(m)} \text{ for } m=1,\ldots,N, \qquad \eps^{(1)},\ldots,\eps^{(N)} \overset{\iid}{\sim} \mathcal{N}(0,I_{2K}). 
\end{equation} Under this model (\ref{eq:simplifyy}), expanding the square in the exponent of (\ref{eq:ll}), $R_N(\theta)$ may be written as \begin{align} R_N(\theta)&=\frac{1}{N}\sum_{m=1}^N K\log 2\pi \sigma^2 +\frac{\|\theta\|^2}{2\sigma^2}+\frac{\|\theta^*+\sigma \eps^{(m)}\|^2} {2\sigma^2}\nonumber\\ &\hspace{1in} -\log\left[\frac{1}{2\pi}\int_{-\pi}^\pi \exp\left(\frac{\langle \theta^*+\sigma \eps^{(m)}, g(\alpha)\cdot\theta\rangle}{\sigma^2}\right)d\alpha\right].\label{eq:llexpanded} \end{align} Given $\theta,\eps \in \R^{2K}$, define $\cP_{\theta,\eps}$ to be the tilted probability law over angles $\alpha \in \cA$ with density \begin{equation}\label{eq:Pthetaeps} \frac{d\cP_{\theta,\eps}(\alpha)}{d\alpha} =\exp\left(\frac{\langle \theta^*+\sigma \eps, \,g(\alpha) \cdot \theta \rangle}{\sigma^2}\right)\Bigg/ \int_{-\pi}^\pi \exp\left(\frac{\langle \theta^*+\sigma \eps, \,g(\alpha) \cdot \theta \rangle}{\sigma^2}\right)\,d\alpha. \end{equation} Then direct computation shows that the gradient and Hessian of $R_N(\theta)$ take the forms \begin{align} \nabla R_N(\theta)&=\frac{\theta}{\sigma^2}-\frac{1}{N}\sum_{m=1}^N \frac{1}{\sigma^2} \E_{\alpha \sim \cP_{\theta,\eps^{(m)}}}\Big[g(\alpha)^{-1}(\theta^*+\sigma \eps^{(m)})\Big]\label{eq:grad}\\ \nabla^2 R_N(\theta)&=\frac{1}{\sigma^2}I-\frac{1}{N}\sum_{m=1}^N \frac{1}{\sigma^4}\Cov_{\alpha \sim \cP_{\theta,\eps^{(m)}}} \Big[g(\alpha)^{-1}(\theta^*+\sigma \eps^{(m)})\Big]\label{eq:hess} \end{align} where the expectation and covariance are over the random rotation $\alpha \sim \cP_{\theta,\eps^{(m)}}$ (conditional on $\eps^{(m)}$) following the above law. \subsection{Tail bound} \label{section-5.2} As a first step of the proof, we fix a small constant $\delta_1 \in (0,1)$ to be determined, and define the domain \begin{equation}\label{eq:Bdelta} \cB(\delta_1)=\left\{\theta:\|\theta-\theta^*\| \leq \delta_1 \|\theta^*\|\right\} \subset \R^{2K}. 
\end{equation} We first establish the following lemma, which shows that $\hat{\theta}^\MLE$ belongs to this domain $\cB(\delta_1)$ with high probability, and provides also an upper bound for the fourth moment of $\hat{\theta}^{\text{MLE}}$. \begin{Lemma} \label{lemma-highprob-bound} \revise{Suppose that Assumption \ref{assump:gen} holds.} Fix any constant $\delta_1>0$, and define $\cB(\delta_1)$ by (\ref{eq:Bdelta}). Then there exist constants $C_0,C_1,C',c'>0$ depending only on $c_\gen,\delta_1$ such that if \revise{$\sigma^2 \leq \frac{\|\theta^*\|^2}{C_1\log K}$} and $N \geq C_0K$, then \begin{align} \P\left[\hat{\theta}^{\mathrm{MLE}} \in \cB(\delta_1)\right] &\geq 1-e^{-c'N(\log K)^2/K},\label{eq:MLElocalization}\\ \E[\|\hat{\theta}^{\mathrm{MLE}}\|^4] &\leq C'\|\theta^*\|^4. \label{eq:MLE4thmoment} \end{align} \end{Lemma} To show this lemma, define the population negative log-likelihood $R(\theta)=\E_{\theta^*}[R_N(\theta)]$, where the equality in law (\ref{eq:RNequalinlaw}) allows us to evaluate the expectation under the simplified model (\ref{eq:simplifyy}). Then the KL-divergence between $p_{\theta^*}$ and $p_\theta$ is given by \begin{equation}\label{eq:DKL} D_{\KL}(p_{\theta^*}\|p_\theta)=R(\theta)-R(\theta^*) =\E_{\theta^*}[R_N(\theta)]-\E_{\theta^*}[R_N(\theta^*)]. \end{equation} Recalling the form (\ref{eq:llexpanded}) for the negative log-likelihood $R_N(\theta)$, we have \begin{equation}\label{eqn-KL-lower-1} D_{\mathrm{KL}}(p_{\theta^*}\|p_{\theta})= \frac{\|\theta\|^2-\|\theta^*\|^2}{2\sigma^2}+\mathrm{I}-\mathrm{II} \end{equation} where \begin{align*} \mathrm{I}&=\mathbb{E}\log\frac{1}{2\pi}\int_{-\pi}^\pi \exp\left(\frac{\langle \theta^*+\sigma \eps, g(\alpha)\cdot\theta^*\rangle}{\sigma^2}\right)d\alpha\\ \mathrm{II}&=\mathbb{E}\log\frac{1}{2\pi}\int_{-\pi}^\pi \exp\left(\frac{\langle \theta^*+\sigma \eps, g(\alpha)\cdot\theta\rangle}{\sigma^2}\right)d\alpha \end{align*} and both expectations are over $\eps \sim \mathcal{N}(0,I_{2K})$. 
\revise{For sufficiently small $|\alpha|$,} we may apply a quadratic Taylor expansion of $\langle \theta^*,\,g(\alpha) \cdot \theta^* \rangle=\sum_k r_k(\theta^*)^2 \cos k\alpha$ around $\alpha=0$, to write \begin{equation}\label{eq:alpha0Taylor} \langle \theta^*,\,g(\alpha) \cdot \theta^* \rangle -\|\theta^*\|^2 \approx -\sum_{k=1}^K r_k(\theta^*)^2 \cdot \frac{k^2\alpha^2}{2} \asymp \revise{-K^2\|\theta^*\|^2\alpha^2} \end{equation} \revise{where this last approximation holds under Assumption \ref{assump:gen}.} Then $\int \exp(\langle \theta^*,\,g(\alpha) \cdot \theta^* \rangle/\sigma^2)\,d\alpha$ in $\mathrm{I}$ may be approximated by a Gaussian integral over $\alpha \in \R$. Upper bounding $\mathrm{II}$ by the supremum over $\alpha$, and applying a standard covering net argument to control the suprema of the Gaussian processes $\langle \eps,\,g(\alpha) \cdot \theta^* \rangle$ and $\langle \eps,\,g(\alpha) \cdot \theta \rangle$, we obtain the following lower bound on the KL-divergence. \begin{Lemma}\label{lemma-KL-lower} \revise{Suppose Assumption \ref{assump:gen} holds, and $\sigma^2 \leq \|\theta^*\|^2$.} Then there are constants $C_2,C_3>0$ depending only on $c_\gen$ such that for any $\theta\in\mathbb{R}^{2K}$, \[D_{\mathrm{KL}}(p_{\theta^*}\|p_{\theta}) \geq \frac{\min_{\alpha \in \cA} \|\theta^*-g(\alpha)\cdot\theta\|^2}{2\sigma^2} -\frac{1}{2}\log\revise{\left(\frac{C_2K^2\|\theta^*\|^2}{\sigma^2}\right)} -\frac{C_3(\|\theta^*\|+\|\theta\|)}{\sigma}\cdot \sqrt{\log K}.\] \end{Lemma} Comparing this with the rate of uniform concentration of the negative log-likelihood $R_N(\theta)$ around its mean $R(\theta)$ (cf.\ Lemma \ref{lemma-bounded-norm-start}), we obtain an exponential tail bound for the probability of the event \[\|\theta^*-\hat{\theta}^\MLE\| \in \big[n\delta_1\|\theta^*\|,(n+1)\delta_1\|\theta^*\|\big]\] for each integer $n \geq 1$. Summing this bound over all $n \geq 1$ yields Lemma \ref{lemma-highprob-bound}.
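As a numerical check of the Gaussian-integral approximation above: writing $f(\alpha)=\langle \theta^*,\,g(\alpha)\cdot\theta^*\rangle/\sigma^2 =\sum_k r_k(\theta^*)^2\cos(k\alpha)/\sigma^2$, the quadratic expansion (\ref{eq:alpha0Taylor}) suggests the Laplace approximation $\int_{-\pi}^\pi e^{f(\alpha)}\,d\alpha \approx e^{f(0)}\sqrt{2\pi\sigma^2/\sum_k k^2r_k(\theta^*)^2}$. The sketch below (with hypothetical magnitudes $r_k^2$ of our own choosing) compares the two on the log scale:

```python
import numpy as np

r2 = np.array([1.0, 1.0, 1.0, 1.0])   # hypothetical r_k^2, k = 1..4
sigma2 = 0.09
k = np.arange(1, len(r2) + 1)

alphas = np.linspace(-np.pi, np.pi, 200001)
d_alpha = alphas[1] - alphas[0]
f = (np.cos(np.outer(alphas, k)) @ r2) / sigma2
f0 = r2.sum() / sigma2                # f(0) = ||theta*||^2 / sigma^2

# Riemann sum for the integral and the Laplace formula, both on the log scale
log_integral = f0 + np.log(np.sum(np.exp(f - f0)) * d_alpha)
log_laplace = f0 + 0.5 * np.log(2.0 * np.pi * sigma2 / (k**2 @ r2))
print(log_integral - log_laplace)
```

For small $\sigma^2$ the two agree to within a fraction of a percent on the log scale, reflecting the exponential concentration of the integrand around $\alpha=0$.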
\subsection{Lower bound for the information matrix}\label{sec:localanalysis} In light of Lemma \ref{lemma-highprob-bound}, to show (\ref{eq:heuristichess}) with high probability, it suffices to establish a version of the lower bound \begin{equation}\label{eq:heuristichesslower} \nabla^2 R_N(\theta) \gtrsim \frac{1}{\sigma^2} \cdot I \qquad \text{ uniformly over } \theta \in \cB(\delta_1). \end{equation} Denote the tangent vector to the rotational orbit $\{g(\alpha) \cdot \theta^*: \alpha \in \cA\}$ at $\theta^*$ by \begin{equation}\label{eq:ustar} u^*=\frac{d}{d\alpha} g(\alpha) \cdot \theta^*\bigg|_{\alpha=0} =g'(0) \cdot \theta^*. \end{equation} From the rotational invariance of $R(\theta)$, it is easy to see that the expected (Fisher) information matrix $\E[\nabla^2 R_N(\theta^*)]=\nabla^2 R(\theta^*)$ must be singular, with $u^*$ belonging to its kernel. Thus we cannot expect the bound (\ref{eq:heuristichesslower}) to hold in all directions of $\R^{2K}$, but only in those directions orthogonal to $u^*$. This will suffice to show (\ref{eq:heuristichess}), because we will check that choosing $\hat{\theta}^\MLE$ to satisfy (\ref{eq:MLEversion}) also ensures $\hat{\theta}^\MLE-\theta^*$ is orthogonal to $u^*$. The statement (\ref{eq:heuristichesslower}) restricted to directions orthogonal to $u^*$ is formalized in the following lemma. \begin{Lemma}\label{lemma-delta-A} \revise{Suppose Assumption \ref{assump:gen} holds.} Fix any constant $\eta>0$. 
There exist constants $C_0,C_1,\delta_1,c>0$ depending only on $c_\gen,\eta$ such that if \revise{$\sigma^2 \leq \frac{\|\theta^*\|^2}{C_1\log K}$ and $N \geq C_0K(1+\frac{K\sigma^2}{\|\theta^*\|^2})\log (K+\frac{\|\theta^*\|^2}{\sigma^2})$}, then with probability at least $1-e^{-\frac{cN}{(1+K\sigma^2/\|\theta^*\|^2)^2}}$, the following holds: For every $\theta \in \cB(\delta_1)$ and every unit vector $v \in \R^{2K}$ satisfying $\langle u^*,v \rangle=0$, \[v^\top \nabla^2 R_N(\theta) v \geq \frac{1-\eta}{\sigma^2}.\] \end{Lemma} From the form of $\nabla^2 R_N(\theta)$ in (\ref{eq:hess}), observe that \begin{equation}\label{eq:hessexpansion} v^\top \nabla^2 R_N(\theta)v=\frac{1}{\sigma^2} -\frac{1}{N\sigma^4}\sum_{m=1}^N \Var_{\alpha \sim \cP_{\theta,\eps^{(m)}}}\Big[ v^\top g(\alpha)^{-1} (\theta^*+\sigma\eps^{(m)}) \Big]. \end{equation} The proof of Lemma \ref{lemma-delta-A} is based on a refinement of the argument in the preceding section, to approximate the distribution $\cP_{\theta,\eps}$ in the above variance by a Gaussian law over $\alpha$. Here, applying a separate bound to control the Gaussian process $\sup_\alpha \langle \eps,g(\alpha) \cdot \theta \rangle$ will be too loose to obtain the lemma. We instead perform a Taylor expansion of $\langle \theta^*+\sigma \eps,\,g(\alpha) \cdot \theta \rangle$ around its (random, $\eps$-dependent) mode \[\alpha_0=\argmax_\alpha \langle \theta^*+\sigma \eps,\,g(\alpha) \cdot \theta \rangle,\] and combine this with the condition $\theta \in \cB(\delta_1)$ to obtain a quadratic approximation \[\frac{\langle \theta^*+\sigma \eps,\,g(\alpha) \cdot \theta \rangle}{\sigma^2} -\text{constant} \asymp \revise{-\frac{K^2\|\theta^*\|^2}{\sigma^2}(\alpha-\alpha_0)^2}\] where the constant is independent of $\alpha$. Thus, $\cP_{\theta,\eps}$ for any $\theta \in \cB(\delta_1)$ may be approximated by a Gaussian law with mean $\alpha_0$ and variance on the order of $\frac{\sigma^2}{K^2\|\theta^*\|^2}$. 
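This Gaussian approximation can be illustrated numerically in the simplest case $\eps=0$ and $\theta=\theta^*$, where the tilted density (\ref{eq:Pthetaeps}) is proportional to $\exp(\sum_k r_k(\theta^*)^2\cos(k\alpha)/\sigma^2)$, with mode $\alpha_0=0$ and predicted variance $\sigma^2/\sum_k k^2r_k(\theta^*)^2$, of order $\frac{\sigma^2}{K^2\|\theta^*\|^2}$ under Assumption \ref{assump:gen}. The magnitudes below are hypothetical:

```python
import numpy as np

r2 = np.array([1.0, 0.5, 2.0])        # hypothetical r_k^2, k = 1..3
sigma2 = 0.02
k = np.arange(1, len(r2) + 1)

alphas = np.linspace(-np.pi, np.pi, 200001)
logw = (np.cos(np.outer(alphas, k)) @ r2) / sigma2
w = np.exp(logw - logw.max())
w /= w.sum()                           # tilted law on the alpha-grid, eps = 0, theta = theta*

mean = np.sum(w * alphas)
var_num = np.sum(w * (alphas - mean) ** 2)
var_gauss = sigma2 / (k**2 @ r2)       # variance of the approximating Gaussian law
print(var_num / var_gauss)
```

The ratio is close to 1 for small $\sigma^2$, with the deviation governed by the quartic term of the Taylor expansion at the mode.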
Applying a Taylor expansion also of $v^\top g(\alpha)^{-1}(\theta^*+\sigma \eps)$ around $\alpha=\alpha_0$, and approximating the variance over $\alpha \sim \cP_{\theta,\eps}$ by the variance with respect to this Gaussian law, we obtain a bound \[\Var_{\alpha \sim \cP_{\theta,\eps}}\Big[v^\top g(\alpha)^{-1} (\theta^*+\sigma \eps)\Big] \leq \eta \sigma^2\] for a small constant $\eta>0$, which is sufficient to show Lemma \ref{lemma-delta-A}. These Taylor expansion arguments may be formalized on a high-probability event for $\eps$, where this event is dependent on $\theta$ and $v$. More precisely, let \[\tilde{\theta}=(\theta_1,\ldots,\theta_K) \in \C^K, \qquad \tilde{v}=(v_1,\ldots,v_K) \in \C^K, \qquad \tilde{\eps}=(\eps_1,\ldots,\eps_K) \in \C^K\] denote the complex representations of $\theta,v,\eps$ as defined in Section \ref{sec:complexrepr}. For each $\theta \in \cB(\delta_1)$ and unit test vector $v \in \R^{2K}$ with $\langle u^*,v \rangle=0$, we define a $(\theta,v)$-dependent domain $\cE(\theta,v,\delta_1) \subset \R^{2K}$ by the four conditions \revise{ \begin{align*} \sup_{\alpha \in \cA} |\langle \eps,g(\alpha) \cdot \theta \rangle| &\leq \frac{\delta_1\|\theta^*\|^2}{\sigma}\\ \sup_{\alpha \in \cA} \Big|\langle \eps,g(\alpha) \cdot v \rangle\Big| & \leq \frac{\|\theta^*\|}{\sigma}\\ \sup_{\alpha,\alpha' \in [-\pi,\pi)} \frac{1}{\alpha^2} \left|\Re \sum_{k=1}^K \overline{\eps_k}e^{ik\alpha'} \Big(e^{ik\alpha}-1-ik\alpha\Big)\theta_k\right| &\leq \frac{\delta_1 K^2 \|\theta^*\|^2}{\sigma}\\ \sup_{\alpha,\alpha' \in [-\pi,\pi)} \frac{1}{|\alpha-\alpha'|} \left|\Re \sum_{k=1}^K \overline{\eps_k}\Big(e^{ik\alpha}-e^{ik\alpha'}\Big) v_k\right| &\leq \frac{\delta_1K \|\theta^*\|}{\sigma} \end{align*}} The following deterministic lemma holds on the event that $\eps \in \cE(\theta,v,\delta_1)$. \begin{Lemma}\label{lemma-infor-3} \revise{Suppose Assumption \ref{assump:gen} holds.} Fix any $\eta>0$. 
There exist constants $C_1,\delta_1>0$ depending only on $c_\gen,\eta$ such that if \revise{$\sigma^2 \leq \frac{\|\theta^*\|^2}{C_1\log K}$,} then the following holds: For any $\theta \in \cB(\delta_1)$, any unit vector $v \in \R^{2K}$ satisfying $\langle u^*,v \rangle=0$, and any (deterministic) $\eps \in \cE(\theta,v,\delta_1)$, \begin{align} \Var_{\alpha \sim \cP_{\theta,\eps}}\Big[v^\top g(\alpha)^{-1}(\theta^*+\sigma \eps)\Big] &\leq \eta\sigma^2. \label{eq:varbound} \end{align} \end{Lemma} Each of the four conditions defining $\cE(\theta,v,\delta_1)$ involves the supremum of a Gaussian process, which may be bounded using a standard covering net argument. We remark that each of these conditions is defined with the right side being a factor $\|\theta^*\|/\sigma$ larger than the mean value of the left side, so that their failure probabilities are exponentially small in $\|\theta^*\|^2/\sigma^2$. This is summarized in the following result. \begin{Lemma}\label{lemma:epsgoodprob} \revise{Suppose Assumption \ref{assump:gen} holds.} Fix any constant $\delta_1>0$, any $\theta \in \cB(\delta_1)$, and any unit vector $v$ satisfying $\langle u^*,v \rangle=0$. 
For some constants $C_1,c>0$ depending only on $c_\gen,\delta_1$, if \revise{$\sigma^2 \leq \frac{\|\theta^*\|^2}{C_1\log K}$, then \[\P_{\eps \sim \mathcal{N}(0,I)}\Big[\eps \notin \cE(\theta,v,\delta_1) \Big] \leq e^{-c\|\theta^*\|^2/\sigma^2}.\]} \end{Lemma} Finally, we combine Lemmas \ref{lemma-infor-3} and \ref{lemma:epsgoodprob} to conclude the proof of Lemma \ref{lemma-delta-A}: We may write the second term of (\ref{eq:hessexpansion}) as \begin{align*} &\frac{1}{N\sigma^4} \sum_{m=1}^N \Var_{\alpha \sim \cP_{\theta,\eps^{(m)}}}\Big[ v^\top g(\alpha)^{-1} (\theta^*+\sigma\eps^{(m)}) \Big] \cdot \1\{\eps^{(m)} \in \cE(\theta,v,\delta_1)\}\\ &\hspace{1in}+\frac{1}{N\sigma^4} \sum_{m=1}^N \Var_{\alpha \sim \cP_{\theta,\eps^{(m)}}}\Big[ v^\top g(\alpha)^{-1} (\theta^*+\sigma\eps^{(m)}) \Big] \cdot \1\{\eps^{(m)} \notin \cE(\theta,v,\delta_1)\}. \end{align*} The first sum is bounded by Lemma \ref{lemma-infor-3}, while the second sum is sparse by Lemma \ref{lemma:epsgoodprob} and may be controlled using a Chernoff bound for binomial random variables. Taking a union bound over a covering net of pairs $(\theta,v)$ shows Lemma \ref{lemma-delta-A}. \subsection{Proof of Theorem \ref{thm-mle}} We now combine the preceding lemmas to conclude the proof of Theorem \ref{thm-mle}. Let $C_0,C_1,\delta_1>0$ be such that the conclusions of Lemma \ref{lemma-delta-A} hold for $\eta=1/2$. Define the event \[\cE=\left\{\hat{\theta}^\MLE \in \cB(\delta_1) \text{ and } \sup_{\theta \in \cB(\delta_1)}\,\sup_{v:\|v\|=1,\langle u^*,v \rangle=0}\, v^\top \nabla^2 R_N(\theta) v \geq \frac{1}{2\sigma^2}\right\}.\] When $\cE$ holds, we have also $\tilde{\theta} \in \cB(\delta_1)$ in the Taylor expansion (\ref{eq:taylor}). Recall our choice of rotation (\ref{eq:MLEversion}) for $\hat{\theta}^{\text{MLE}}$. 
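As an aside, the rotational alignment in (\ref{eq:MLEversion}) can be computed numerically by a grid search over $\alpha$, acting on the complex representation via $\tilde\theta_k \mapsto e^{ik\alpha}\tilde\theta_k$; the sketch below is illustrative only (function and variable names are ours, not the paper's):

```python
import numpy as np

def align_rotation(theta_hat, theta_star, grid=10000):
    """Grid search for the alpha minimizing ||theta_hat - g(alpha).theta_star||,
    where g(alpha) multiplies the k-th complex coordinate by e^{i k alpha}."""
    K = len(theta_star)
    k = np.arange(1, K + 1)
    alphas = np.linspace(-np.pi, np.pi, grid, endpoint=False)
    dists = [np.linalg.norm(theta_hat - np.exp(1j * k * a) * theta_star)
             for a in alphas]
    return alphas[int(np.argmin(dists))]

# if theta_hat is an exactly rotated copy of theta_star, the grid search
# recovers that rotation up to the grid spacing
rng = np.random.default_rng(0)
theta_star = rng.standard_normal(5) + 1j * rng.standard_normal(5)
alpha_true = 0.7
theta_hat = np.exp(1j * np.arange(1, 6) * alpha_true) * theta_star
print(align_rotation(theta_hat, theta_star))  # close to 0.7
```

Aligning $\hat{\theta}^\MLE$ to $\theta^*$ by such a rotation is what makes $\alpha=0$ the minimizer, which is the source of the first-order condition used next.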
Then the first-order condition for (\ref{eq:MLEversion}) gives \[0=\frac{d}{d\alpha} \|\hat{\theta}^\mathrm{MLE}- g(\alpha) \cdot \theta^*\|^2\bigg|_{\alpha=0} =-2\langle u^*,\,\hat{\theta}^\mathrm{MLE}-\theta^* \rangle,\] so that $\langle u^*,\,\hat{\theta}^{\mathrm{MLE}}-\theta^* \rangle=0$. Then (\ref{eq:taylor}) and the definition of $\cE$ imply \[0 \geq \1\{\cE\}\left(\nabla R_N(\theta^*)^\top (\hat{\theta}^\MLE-\theta^*) +\frac{1}{4\sigma^2}\|\hat{\theta}^\MLE-\theta^*\|^2\right).\] Rearranging, we get \[\1\{\cE\}\|\hat{\theta}^\MLE-\theta^*\|^2 \leq -\1\{\cE\} \cdot 4\sigma^2 \cdot \nabla R_N(\theta^*)^\top (\hat{\theta}^\MLE-\theta^*) \leq 4\sigma^2 \cdot \|\nabla R_N(\theta^*)\| \cdot \|\hat{\theta}^\MLE-\theta^*\|.\] Dividing by $\|\hat{\theta}^\MLE-\theta^*\|$, squaring both sides, and taking expectation yields \begin{equation}\label{eq:MLEriskonE} \E\Big[\1\{\cE\}\|\hat{\theta}^\MLE-\theta^*\|^2\Big] \leq 16\sigma^4\E\Big[\|\nabla R_N(\theta^*)\|^2\Big]. \end{equation} From (\ref{eq:grad}), we have \[\nabla R_N(\theta^*)=\frac{1}{N}\sum_{m=1}^N \left(\frac{\theta^*}{\sigma^2} -\frac{1}{\sigma^2}\E_{\alpha \sim \cP_{\theta^*,\eps^{(m)}}}\Big[ g(\alpha)^{-1}(\theta^*+\sigma \eps^{(m)})\Big]\right).\] These summands (the per-sample score vectors) are independent random vectors with mean 0, by the first-order condition for $\theta^*$ minimizing $R(\theta)$. So, applying Jensen's inequality and the fact that $g(\alpha)^{-1}$ preserves the Euclidean norm, \begin{align*} \E\Big[\|\nabla R_N(\theta^*)\|^2\Big] &=\frac{1}{N}\E_{\eps \sim \mathcal{N}(0,I)} \left[\left\|\frac{\theta^*}{\sigma^2} -\frac{1}{\sigma^2}\E_{\alpha \sim \cP_{\theta^*,\eps}}\Big[ g(\alpha)^{-1}(\theta^*+\sigma \eps)\Big]\right\|^2\right]\\ &=\frac{1}{N\sigma^4} \E_{\eps \sim \mathcal{N}(0,I)} \left[\left\|\E_{\alpha \sim \cP_{\theta^*,\eps}}\Big[ g(\alpha)^{-1}(\theta^*+\sigma \eps)\Big]\right\|^2-\|\theta^*\|^2\right]\\ &\leq \frac{1}{N\sigma^4}\E_{\eps \sim \mathcal{N}(0,I)} \left[\|\theta^*+\sigma\eps\|^2-\|\theta^*\|^2\right]=\frac{2K}{N\sigma^2}.
\end{align*} Combining with (\ref{eq:MLEriskonE}), \[\E\Big[\1\{\cE\}\|\hat{\theta}^\MLE-\theta^*\|^2\Big] \leq \frac{32K\sigma^2}{N}.\] By Lemmas \ref{lemma-highprob-bound} and \ref{lemma-delta-A}, \revise{$\P[\cE^c] \leq e^{-\frac{cN}{(1+K\sigma^2/\|\theta^*\|^2)^2}}$} for some constant $c>0$. Then applying also (\ref{eq:MLE4thmoment}), for some constant $C>0$, \revise{ \[\E\Big[\1\{\cE^c\}\|\hat{\theta}^\MLE-\theta^*\|^2\Big] \leq \sqrt{\E[\|\hat{\theta}^\MLE-\theta^*\|^4]} \cdot \sqrt{\P[\cE^c]} \leq C\|\theta^*\|^2 \cdot e^{-\frac{cN}{2(1+K\sigma^2/\|\theta^*\|^2)^2}}.\] Under the given assumption $N \geq C_0K(1+\frac{K\sigma^2}{\|\theta^*\|^2})\log(K+\frac{\|\theta^*\|^2}{\sigma^2})$ for sufficiently large $C_0>0$, this implies also $N \geq C_0'K(1+\frac{K\sigma^2}{\|\theta^*\|^2})\log N$ for a large constant $C_0'>0$. (This is verified in the proof of Lemma \ref{lemma-delta-A}, cf.\ (\ref{eq:logKlogN}) of Appendix \ref{appendix:MLE}.) Then \[\E\Big[\1\{\cE^c\}\|\hat{\theta}^\MLE-\theta^*\|^2\Big] \leq C\|\theta^*\|^2 \cdot e^{-\frac{cN}{2(1+K\sigma^2/\|\theta^*\|^2)^2}} \leq \frac{C'\sigma^2}{N}.\]} Combining the above two risk bounds on $\cE$ and $\cE^c$ yields Theorem \ref{thm-mle}. \section{Minimax lower bounds} \label{sec-6} In this section, we show the minimax lower bounds of Theorems \ref{thm:highnoise} and \ref{thm:lownoise}. The lower bounds will be implied by estimation of the Fourier phases $\phi_k(\theta^*)$ only, even when the Fourier magnitudes $r_k(\theta^*)$ are known. \revise{Fix any $\beta \in [0,\frac{1}{2})$, and consider the parameter space \[\cP_\beta=\Big\{\theta^* \in \R^{2K}: r_k(\theta^*)=k^{-\beta} \text{ for all } k=1,\ldots,K \Big\}.\]} The main result of this section is the following minimax lower bound over $\cP_\beta$, which is valid for any noise level $\sigma^2>0$ and interpolates between the low-noise and high-noise regimes. \revise{ \begin{Lemma}\label{lemma:minimaxlowerP} Fix any $\beta \in [0,\frac{1}{2})$. 
Then for some $\beta$-dependent constants $C,c>0$ and any $\sigma^2>0$, \begin{equation}\label{eq:minimaxlowerlownoise} \inf_{\hat{\theta}} \sup_{\theta^* \in \cP_\beta} \E_{\theta^*}[L(\theta^*,\hat{\theta})] \geq c \cdot \min\left(\frac{1}{N} \cdot \max\left(K\sigma^2,\; \frac{K^{4\beta}\sigma^6} {e^{CK^{1-2\beta}/\sigma^2}}\right),\,K^{1-2\beta}\right). \end{equation} \end{Lemma}} Let us check that this implies the minimax lower bounds of Theorems \ref{thm:highnoise} and \ref{thm:lownoise}. \revise{ \begin{proof}[Proof of Theorems \ref{thm:highnoise} and \ref{thm:lownoise}, lower bounds] By rescaling, we may assume without loss of generality that $\clower \leq 1 \leq \cupper$, and hence $\cP_\beta \subset \Theta_\beta$. Assuming $\sigma^2 \geq c_0K^{1-2\beta}$, choosing the second argument of $\max(\cdot)$ in (\ref{eq:minimaxlowerlownoise}) gives \[\inf_{\hat{\theta}} \sup_{\theta^* \in \Theta_\beta} \E_{\theta^*}[L(\theta^*,\hat{\theta})] \geq \inf_{\hat{\theta}} \sup_{\theta^* \in \cP_\beta} \E_{\theta^*}[L(\theta^*,\hat{\theta})] \geq c \cdot \min\left(\frac{K^{4\beta}\sigma^6}{N},\,K^{1-2\beta}\right)\] for a constant $c>0$ depending on $c_0$. When $N \geq C_0K^{6\beta}\sigma^6\log K$ for sufficiently large $C_0>0$, we have $K^{4\beta}\sigma^6/N<K^{1-2\beta}$, so this gives the lower bound of Theorem \ref{thm:highnoise}. For any $\sigma^2>0$, choosing the first argument of $\max(\cdot)$ in (\ref{eq:minimaxlowerlownoise}) also gives \[\inf_{\hat{\theta}} \sup_{\theta^* \in \Theta_\beta} \E_{\theta^*}[L(\theta^*,\hat{\theta})] \geq \inf_{\hat{\theta}} \sup_{\theta^* \in \cP_\beta} \E_{\theta^*}[L(\theta^*,\hat{\theta})] \geq c \cdot \min\left(\frac{K\sigma^2}{N},\,K^{1-2\beta}\right).\] When $N \geq C_0K^{1+2\beta}\sigma^2 \log K$ for sufficiently large $C_0>0$, we have $K\sigma^2/N<K^{1-2\beta}$, so this gives the lower bound of Theorem \ref{thm:lownoise}. 
\end{proof}} Finally, we describe the arguments that show Lemma \ref{lemma:minimaxlowerP}, deferring detailed proofs to Appendix \ref{appendix:lower}. Denote $p_\theta(y)$ as the Gaussian mixture density of $y$, as in (\ref{eq:ll}). The proof will apply Assouad's hypercube construction together with an upper bound on the KL-divergence $D_{\KL}(p_\theta\|p_{\theta'})$. For the low-noise regime of Theorem \ref{thm:lownoise}, a tight upper bound is provided by (\ref{eq:KLupperlownoise}) below, which is immediate from the data processing inequality. For the high-noise regime of Theorem \ref{thm:highnoise}, we apply an argument from \cite{bandeira2020optimal} for bounding the $\chi^2$-divergence, and track carefully the dependence of this argument on the dimension $K$. \begin{Lemma}\label{lemma:KLupperbound} For any $\theta,\theta' \in \R^{2K}$, \begin{equation}\label{eq:KLupperlownoise} D_{\KL}(p_\theta\|p_{\theta'}) \leq \frac{\|\theta-\theta'\|^2}{2\sigma^2}. \end{equation} Furthermore, let $\theta=(r_k\cos\phi_k,r_k\sin\phi_k)_{k=1}^K$ and $\theta'=(r_k'\cos\phi_k',r_k'\sin\phi_k')_{k=1}^K$. Denote $R^2=\max(\sum_{k=1}^K r_k^2$, $\sum_{k=1}^K {r_k'}^2)$ and $\rupper=\max(\max_{k=1}^K r_k,\max_{k=1}^K r_k')$. Then also \begin{align} D_{\KL}(p_{\theta}\|p_{\theta'}) &\leq \frac{e^{R^2/2\sigma^2}}{4\sigma^4}\sum_{k=1}^K (r_k^2-{r_k'}^2)^2\nonumber\\ &\hspace{0.2in}+\frac{3\rupper^2R^2e^{3R^2/2\sigma^2}}{2\sigma^6} \cdot \inf_{\alpha \in \R} \sum_{k=1}^K \Big[(r_k-r_k')^2 +r_kr_k'(\phi_k-\phi_k'+k\alpha)^2\Big]. 
\label{eq:KLupperhighnoise} \end{align} \end{Lemma} \revise{The upper bound (\ref{eq:KLupperhighnoise}) is sufficient to prove Lemma \ref{lemma:minimaxlowerP} in the setting $\beta=0$, where the argument is as follows:} We restrict attention to a discrete space of $2^K$ parameters $\theta^\tau \in \cP_0$, indexed by the hypercube $\tau \in \{0,1\}^K$, where all Fourier magnitudes are equal to 1 and the Fourier phases $\phi^\tau=(\phi_1^\tau,\ldots,\phi_K^\tau)$ are given by \[\phi^\tau_k=\tau_k \cdot \phi.\] Here, the value $\phi \in \R$ is chosen maximally while ensuring that $D_{\KL}(p_{\theta^\tau}\|p_{\theta^{\tau'}}) \leq H(\tau,\tau')/N$ by the bounds of Lemma \ref{lemma:KLupperbound}, where $H(\tau,\tau')$ is the Hamming distance on the hypercube. Applying Proposition \ref{prop-lossgeneral}, we may show that the loss between such parameters is also lower bounded in terms of Hamming distance as $L(\theta^\tau,\theta^{\tau'}) \gtrsim r^2\phi^2 \cdot H(\tau,\tau')$. Assouad's lemma, see e.g.\ \citep[Lemma 2]{cai2012optimal}, then implies a minimax lower bound over the discrete parameter space $\{\theta^\tau:\tau \in \{0,1\}^K\}$, which in turn implies the lower bound of Lemma \ref{lemma:minimaxlowerP} over $\cP_0$. \revise{For more general decay parameters $\beta \in [0,\frac{1}{2})$, we apply a variation of this argument where the parameters $\theta^\tau$ are defined such that only the Fourier phases $\phi_k^\tau$ for $k>K/2$ are non-zero. We establish a modified version of (\ref{eq:KLupperhighnoise}) for the corresponding vectors $\theta^\tau$, where $\rupper$ may be replaced by the maximum of $(r_k,r_k')$ over $k>K/2$. The remainder of the proof is then similar to the $\beta=0$ setting.} \appendix \section{Proofs for method-of-moments estimation}\label{appendix:MoM} We prove the results of Section \ref{sec-4} on the method-of-moments estimator. \begin{Proposition}\label{lemma-distribution-1} Let $\eta \sim \Normal_\C(0,2)$. 
Then we have the equalities in law $\eta \overset{L}{=} \overline{\eta}$ and $\eta \overset{L}{=} e^{i\phi} \eta$ for any $\phi \in \R$. Furthermore, \begin{align} \E[\eta^j\overline{\eta}^k]&=0 \text{ for all integers } j \neq k, \label{eq:gaussiansymmetry}\\ \E[|\eta|^{2j}] &\leq 4^j j! \text{ for all integers } j \geq 1.\label{eq:gaussianmoments} \end{align} \end{Proposition} \begin{proof} We may represent $\eta=Re^{i\alpha}$ where $R^2 \sim \chi_2^2$ is independent of $\alpha \sim \Unif([-\pi,\pi))$. Then $\bar{\eta}=Re^{-i\alpha}$, $e^{i\phi}\eta=Re^{i(\phi+\alpha)}$, and $\eta^j\bar{\eta}^k=R^{j+k} e^{i(j-k)\alpha}$, so $\eta \overset{L}{=} \overline{\eta}$, $\eta \overset{L}{=} e^{i\phi} \eta$, and (\ref{eq:gaussiansymmetry}) follow. For (\ref{eq:gaussianmoments}), write $|\eta|^2=R^2=Z^2+{Z'}^2$ where $Z,Z' \overset{\iid}{\sim} \Normal(0,1)$. Then $\E[|\eta|^{2j}]=\E[(Z^2+{Z'}^2)^j] \leq 2^j\E[Z^{2j}+{Z'}^{2j}]$. We have $\E[Z^{2j}]=(2j-1)!! \leq 2^{j-1} \cdot j!$, showing (\ref{eq:gaussianmoments}). \end{proof} \subsection{Estimation of $r_k$} \begin{proof}[Proof of Lemma \ref{lemma:rktail}] Write $\theta_k=(\theta_{k,1},\theta_{k,2}) \in \R^2$ and $\eps_k^{(m)}=(\eps_{k,1}^{(m)},\eps_{k,2}^{(m)}) \in \R^2$. 
Since $|\tilde y_k^{(m)}|^2=\|\theta_k+\sigma\eps_k^{(m)}\|^2$ and $\|\theta_k\|^2=r_k^2$, we have \[\frac{1}{N}\sum_{m=1}^N |\tilde y_k^{(m)}|^2-2\sigma^2 =r_k^2+\frac{1}{N}\sum_{m=1}^N 2 \sigma \langle \eps_k^{(m)},\theta_k \rangle+\frac{\sigma^2}{N} \sum_{m=1}^N (\|\eps_k^{(m)}\|^2-2).\] Applying $N^{-1}\sum_m 2\sigma \langle \eps_k^{(m)},\theta_k \rangle \sim \Normal(0,4\sigma^2r_k^2/N)$, $\sum_m \|\eps_k^{(m)}\|^2 \sim \chi^2_{2N}$, and standard Gaussian and chi-squared tail bounds, for a universal constant $c>0$ and any $t>0$ we have \[\P\left[\frac{1}{N}\sum_{m=1}^N 2 \sigma \langle \eps_k^{(m)},\theta_k \rangle \geq t\right] \leq e^{-cNt^2/\sigma^2r_k^2}, \quad \P\left[\frac{1}{N}\sum_{m=1}^N (\|\eps_k^{(m)}\|^2-2) \geq t \right] \leq e^{-cN(t \wedge t^2)}.\] Then for a universal constant $c'>0$, \begin{align*} \P\left[\frac{1}{N}\sum_{m=1}^N |\tilde y_k^{(m)}|^2-2\sigma^2 \geq (1+t)r_k^2\right] &\leq \P\left[\frac{1}{N}\sum_{m=1}^N 2 \sigma \langle \eps_k^{(m)},\theta_k \rangle \geq \frac{tr_k^2}{2}\right] +\P\left[\frac{\sigma^2}{N}\sum_{m=1}^N (\|\eps_k^{(m)}\|^2-2) \geq \frac{tr_k^2}{2}\right]\\ &\leq 2\exp\left(-c'N\left(\frac{t^2r_k^2}{\sigma^2} \wedge \frac{tr_k^2}{\sigma^2} \wedge \frac{t^2r_k^4}{\sigma^4}\right)\right). \end{align*} Applying this with $t=2s+s^2$ and recalling the definition of $\hat{r}_k$ from (\ref{eq:hatrk}), the left side is exactly $\P[\hat{r}_k \geq r_k(1+s)]$. Then, considering separately the cases $s \geq 1$ and $s \leq 1$, the right side reduces to the upper bound (\ref{eq:hatrupper}). For the lower bound, similarly for any $t>0$, a lower chi-squared tail bound gives \[\P\left[\frac{1}{N}\sum_{m=1}^N (\|\eps_k^{(m)}\|^2-2) \leq -t\right] \leq e^{-cNt^2}.\] Then we obtain analogously \begin{align*} \P\left[\frac{1}{N}\sum_{m=1}^N |\tilde y_k^{(m)}|^2-2\sigma^2 \leq r_k^2(1-t)\right] & \leq 2\exp\left(-cN\left(\frac{t^2r_k^2}{\sigma^2} \wedge \frac{t^2r_k^4}{\sigma^4}\right)\right). 
\end{align*} Applying this with $t=2s-s^2 \geq s$ for $s \in [0,1)$, we obtain (\ref{eq:hatrlower}). \end{proof} \begin{proof}[Proof of Corollary \ref{cor:MoMrriskbound}] We apply $\E[X^2]=\E[\int_0^\infty \1\{|X| \geq s\}\cdot 2s\,ds]=\int_0^\infty \P[|X| \geq s] \cdot 2s\,ds$ with $X=\hat{r}_k/r_k-1$, and $\int_0^\infty s\,e^{-\alpha s^2}ds=\alpha^{-1} \int_0^\infty t\,e^{-t^2}dt \leq C/\alpha$. Then Lemma \ref{lemma:rktail} gives \[\E[(\hat{r}_k-r_k)^2]=r_k^2 \cdot \E[(\hat{r}_k/r_k-1)^2] \leq r_k^2 \cdot \int_0^\infty 8s\left(e^{-cNs^2r_k^2/\sigma^2} +e^{-cNs^2r_k^4/\sigma^4}\right) ds \leq C\left(\frac{\sigma^2}{N}+ \frac{\sigma^4}{Nr_k^2}\right).\] \end{proof} \subsection{Oracle estimation of $\Phi_{k,l}$} \begin{proof}[Proof of Lemma \ref{lemma:Phikltail}] Recall $B_{k,l}$ from (\ref{eq:Bkl}) and $\hat{B}_{k,l}$ from (\ref{eq:hatBkl}). We first show concentration of $\hat{B}_{k,l}$ around $B_{k,l}$. Let us write \[\tilde y_k^{(m)}=r_ke^{i(\phi_k+k\alpha^{(m)})}+\sigma \tilde{\eps}_k^{(m)} =e^{ik\alpha^{(m)}}\tilde\theta_k(1+(\sigma/r_k) \eta_k^{(m)})\] where $\tilde\theta_k=r_k e^{i\phi_k}$ is the complex representation of $(\theta_{k,1},\theta_{k,2})$, and $\eta_k^{(m)}=e^{-ik\alpha^{(m)}}(r_k/\tilde{\theta}_k) \tilde{\eps}_k^{(m)}$ is a rotation of the Gaussian noise. By Proposition \ref{lemma-distribution-1}, we still have $\eta_k^{(m)} \sim \Normal_\C(0,2)$ where these remain independent across all $k=1,\ldots,K$ and $m=1,\ldots,N$. 
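The rotation invariance underlying this bispectrum estimator can be checked numerically: in the noiseless case, the triple product $\tilde y_{k+l}\,\overline{\tilde y_k \tilde y_l}$ does not depend on the rotation $\alpha$. A minimal sketch (illustrative names, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 6
theta = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # theta[k-1] = tilde-theta_k

def triple_product(y, k, l):
    # the (k,l) bispectrum entry of one observation y (1-indexed coefficients)
    return y[k + l - 1] * np.conj(y[k - 1]) * np.conj(y[l - 1])

k, l = 2, 3
for alpha in [0.0, 0.4, -1.3]:
    y = np.exp(1j * np.arange(1, K + 1) * alpha) * theta  # noiseless rotated observation
    # e^{i(k+l)alpha} e^{-ik alpha} e^{-il alpha} = 1, so the entry is alpha-free
    assert np.isclose(triple_product(y, k, l), triple_product(theta, k, l))
```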
Applying this to (\ref{eq:hatBkl}), the factors $e^{ik\alpha^{(m)}},e^{il\alpha^{(m)}},e^{i(k+l)\alpha^{(m)}}$ cancel to yield \begin{equation}\label{eq:hatBexpansion} \hat{B}_{k,l}=\frac{1}{N}\sum_{m=1}^N \tilde\theta_{k+l}\overline{\tilde\theta_k\tilde\theta_l} \left(1+(\sigma/r_{k+l}) \eta_{k+l}^{(m)}\right) \left(1+(\sigma/r_k) \overline{\eta_k^{(m)}}\right) \left(1+(\sigma/r_l) \overline{\eta_l^{(m)}}\right) =B_{k,l}(1+\mathrm{I}+\mathrm{II}+\mathrm{III}) \end{equation} where \begin{align*} \mathrm{I}&=\frac{\sigma}{N}\sum_{m=1}^N \frac{\overline{\eta_k^{(m)}}}{r_k}+\frac{\overline{\eta_l^{(m)}}}{r_l} +\frac{\eta_{k+l}^{(m)}}{r_{k+l}}\\ \mathrm{II}&=\frac{\sigma^2}{N}\sum_{m=1}^N \frac{\overline{\eta_k^{(m)}}\overline{\eta_l^{(m)}}}{r_kr_l}+ \frac{\overline{\eta_k^{(m)}}\eta_{k+l}^{(m)}}{r_kr_{k+l}} +\frac{\overline{\eta_l^{(m)}} \eta_{k+l}^{(m)}}{r_lr_{k+l}}\\ \mathrm{III}&=\frac{\sigma^3}{N}\sum_{m=1}^N \frac{\overline{\eta_k^{(m)}}\overline{\eta_l^{(m)}}\eta_{k+l}^{(m)}}{r_kr_lr_{k+l}}. \end{align*} To bound $\mathrm{I}$, observe that \begin{equation}\label{eq:deg1term} \frac{\sigma}{N} \sum_{m=1}^N \frac{\Re \eta_k^{(m)}}{r_k} \sim \Normal\left(0,\frac{\sigma^2}{Nr_k^2}\right), \end{equation} and similarly for the imaginary part and for the other two terms of $\mathrm{I}$. Then by a Gaussian tail bound, \begin{equation}\label{eq:deg1finalbound} \P[|\mathrm{I}| \geq t] \leq C\exp(-cNt^2\rlower^2/\sigma^2). \end{equation} To bound $\mathrm{II}$, consider first $k \neq l$ and $\sum_m \Re \overline{\eta_k^{(m)}} \cdot \Re \overline{\eta_l^{(m)}}$. Each term $\Re \overline{\eta_k^{(m)}} \cdot \Re \overline{\eta_l^{(m)}}$ is the product of two independent standard Gaussian variables. 
Then applying \cite[Corollary 1]{latala2006estimates} with $d=2$, $A=I$, $\|A\|_{\{1,2\}}=\sqrt{N}$, and $\|A\|_{\{1\},\{2\}}=1$, we have \[\P\left[\left|\frac{1}{N}\sum_{m=1}^N \Re \overline{\eta_k^{(m)}} \cdot \Re \overline{\eta_l^{(m)}}\right| \geq t\right] \leq Ce^{-cN(t \wedge t^2)}.\] So \begin{equation}\label{eq:degree2bound} \P\left[\left|\frac{\sigma^2}{N}\sum_{m=1}^N \frac{\Re \overline{\eta_k^{(m)}}}{r_k} \cdot \frac{\Re \overline{\eta_l^{(m)}}}{r_l}\right| \geq t\right] \leq C\exp\left(-cN\left(\frac{t\rlower^2}{\sigma^2} \wedge \frac{t^2\rlower^4}{\sigma^4}\right)\right). \end{equation} The same bound holds for all products of real and imaginary parts of $\eta_k^{(m)}$ and $\eta_l^{(m)}$, except for $\Re \eta_k^{(m)} \cdot \Re \eta_l^{(m)}$ and $\Im \eta_k^{(m)} \cdot \Im \eta_l^{(m)}$ when $k=l$. For these products, we may consider them together and apply \[\frac{\sigma^2}{N}\sum_{m=1}^N \frac{(\Re \eta_k^{(m)})^2}{r_k^2} -\frac{(\Im \eta_k^{(m)})^2}{r_k^2} =\frac{2\sigma^2}{Nr_k^2} \sum_{m=1}^N \frac{\Re \eta_k^{(m)}-\Im \eta_k^{(m)}}{\sqrt{2}} \cdot \frac{\Re \eta_k^{(m)}+\Im \eta_k^{(m)}}{\sqrt{2}}\] where now $(\Re \eta_k^{(m)}-\Im \eta_k^{(m)})/\sqrt{2}$ and $(\Re \eta_k^{(m)}+\Im \eta_k^{(m)})/\sqrt{2}$ are independent standard Gaussian variables. The bound (\ref{eq:degree2bound}) then holds for this sum, and this shows \[\P\left[\left|\frac{\sigma^2}{N}\sum_{m=1}^N \frac{\overline{\eta_k^{(m)}} \overline{\eta_l^{(m)}}}{r_kr_l}\right|>t\right] \leq C\exp\left(-cN\left(\frac{t\rlower^2}{\sigma^2} \wedge \frac{t^2\rlower^4}{\sigma^4}\right)\right)\] for the first term of $\mathrm{II}$. Applying the same argument for the remaining two terms of $\mathrm{II}$, \begin{equation}\label{eq:deg2finalbound} \P[|\mathrm{II}| \geq t] \leq C\exp\left(-cN\left(\frac{t\rlower^2}{\sigma^2} \wedge \frac{t^2\rlower^4}{\sigma^4}\right)\right). \end{equation} We apply a similar argument to bound $\mathrm{III}$. 
Consider first $k \neq l$ and $\sum_m \Re \eta_k^{(m)} \cdot \Re \eta_l^{(m)} \cdot \Re \eta_{k+l}^{(m)}$. Each term $\Re \eta_k^{(m)} \cdot \Re \eta_l^{(m)} \cdot \Re \eta_{k+l}^{(m)}$ is the product of three independent standard Gaussian variables. Then applying \cite[Corollary 1]{latala2006estimates} with $d=3$, $A=\sum_{m=1}^N e_m \otimes e_m \otimes e_m$, $\|A\|_{\{1,2,3\}}=\sqrt{N}$, $\|A\|_{\{1,2\},\{3\}}=1$, and $\|A\|_{\{1\},\{2\},\{3\}}=1$, \[\P\left[\left|\frac{1}{N}\sum_{m=1}^N \Re \eta_k^{(m)} \cdot \Re \eta_l^{(m)} \cdot \Re \eta_{k+l}^{(m)}\right| \geq t\right] \leq Ce^{-c(Nt^2 \wedge Nt \wedge (Nt)^{2/3})} \leq Ce^{-cN(t^2 \wedge \frac{t^{2/3}}{N^{1/3}})}.\] (The second inequality applies $t \geq t^2 \wedge \frac{t^{2/3}}{N^{1/3}}$ for any $N \geq 1$ and $t \geq 0$.) The same bound holds for all combinations of real and imaginary parts of $\eta_k^{(m)},\eta_l^{(m)},\eta_{k+l}^{(m)}$, except again for products having $\Re \eta_k^{(m)} \cdot \Re \eta_l^{(m)}$ or $\Im \eta_k^{(m)} \cdot \Im \eta_l^{(m)}$ when $k=l$. These products may be bounded by applying \[\frac{1}{2} \Re \eta_{2k}^{(m)} \cdot \left((\Re \eta_k^{(m)})^2 -(\Im \eta_k^{(m)})^2\right) =\Re \eta_{2k}^{(m)} \cdot \frac{\Re \eta_k^{(m)}-\Im \eta_k^{(m)}}{\sqrt{2}} \cdot \frac{\Re \eta_k^{(m)}+\Im \eta_k^{(m)}}{\sqrt{2}}\] and similarly for $\Im \eta_{2k}^{(m)}\cdot ((\Re \eta_k^{(m)})^2 -(\Im \eta_k^{(m)})^2)$, where $\Re \eta_{2k}^{(m)}$, $\Im \eta_{2k}^{(m)}$, $(\Re \eta_k^{(m)}-\Im \eta_k^{(m)})/\sqrt{2}$, and $(\Re \eta_k^{(m)}+\Im \eta_k^{(m)})/\sqrt{2}$ are independent standard Gaussian variables. Thus \begin{equation}\label{eq:deg3finalbound} \P\left[|\mathrm{III}| \geq t\right] \leq C\exp\left(-cN\left(\frac{t^2\rlower^6}{\sigma^6} \wedge \frac{t^{2/3}\rlower^2}{N^{1/3}\sigma^2}\right)\right). 
\end{equation} Combining (\ref{eq:deg1finalbound}), (\ref{eq:deg2finalbound}), and (\ref{eq:deg3finalbound}), for any $s>0$ we obtain \[\P\Big[|\hat{B}_{k,l}/B_{k,l}-1| \geq s\Big] \leq C\exp\left(-cN\left(\frac{s^2\rlower^2}{\sigma^2} \wedge \frac{s\rlower^2}{\sigma^2} \wedge \frac{s^2\rlower^4}{\sigma^4} \wedge \frac{s^2\rlower^6}{\sigma^6} \wedge \frac{s^{2/3}\rlower^2}{N^{1/3}\sigma^2}\right)\right).\] We have \[\frac{s\rlower^2}{\sigma^2} \geq \frac{s^2\rlower^2}{\sigma^2} \wedge \frac{s^{2/3}\rlower^2}{N^{1/3}\sigma^2}, \qquad \frac{s^2\rlower^4}{\sigma^4} \geq \frac{s^2\rlower^2}{\sigma^2} \wedge \frac{s^2\rlower^6}{\sigma^6},\] so this simplifies to (\ref{eqn-B-final-bound}). Finally, for any $z \in \C$ and any $s \in (0,1)$, observe that $|z-1|<s$ implies $|\Arg z|<\arcsin s<\pi s/2$ for the principal argument (\ref{eq:principalArg}). Then, recalling that $\hat{\Phi}_{k,l}^\oracle-\Phi_{k,l}=\Arg (\hat{B}_{k,l}/B_{k,l})$ from (\ref{eq:Argoracle}), we obtain for any $s \in (0,\pi/2)$ that \[\P\Big[|\hat{\Phi}_{k,l}^\oracle-\Phi_{k,l}| \geq s\Big] \leq \P\Big[|\hat{B}_{k,l}/B_{k,l}-1| \geq \frac{2s}{\pi}\Big]\] and (\ref{eqn-Phi-final-bound}) follows. \end{proof} \begin{proof}[Proof of Corollary \ref{cor:riskPhikl}] We apply $\E[X^2]=\int_0^\infty \P[|X| \geq s] \cdot 2s\,ds$ and Lemma \ref{lemma:Phikltail} to obtain, for universal constants $C,c>0$, \begin{align*} \E[(\hat\Phi_{k,l}^\oracle-\Phi_{k,l})^2] &\leq \int_0^{\pi/2} Cs\left(e^{-cNs^2\rlower^2/\sigma^2}+ e^{-cNs^2\rlower^6/\sigma^6}+e^{-c(Ns)^{2/3}\rlower^2/\sigma^2}\right)ds\\ &\hspace{2in}+C\left(e^{-cN\rlower^2/\sigma^2}+ e^{-cN\rlower^6/\sigma^6}+e^{-cN^{2/3}\rlower^2/\sigma^2}\right), \end{align*} where the second term bounds the integral from $s=\pi/2$ to $s=\pi$. 
The result then follows from applying $\int_0^\infty se^{-\alpha s^2}ds=\alpha^{-1}\int_0^\infty te^{-t^2}dt \leq C/\alpha$ and $\int_0^\infty se^{-\alpha s^{2/3}}ds =\alpha^{-3}\int_0^\infty t^3e^{-t^2} \cdot 3t^2\,dt \leq C/\alpha^3$ for the first term, $e^{-cx} \leq C/x$ and $e^{-cx} \leq C/x^3$ for the second term, and $\sigma^6/N^2\rlower^6 \leq \sigma^6/N\rlower^6$. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:Phicovariance}] Part (c) follows from Corollary \ref{cor:riskPhikl} and Cauchy-Schwarz. For part (a), recall from (\ref{eq:Argoracle}) and the expression for $\hat{B}_{k,l}$ in (\ref{eq:hatBexpansion}) that \[\hat\Phi_{k,l}^\oracle-\Phi_{k,l}=\Arg(\hat{B}_{k,l}/B_{k,l}) =\Arg \frac{1}{N}\sum_{m=1}^N \left(1+(\sigma/r_{k+l}) \eta_{k+l}^{(m)} \right)\left(1+(\sigma/r_k) \overline{\eta_k^{(m)}}\right) \left(1+(\sigma/r_l) \overline{\eta_l^{(m)}}\right)\] Since $\eta_k^{(m)}$ are independent across $k=1,\ldots,K$ and $m=1,\ldots,N$, we obtain in the setting of part (a) that $\hat\Phi_{k,l}^\oracle-\Phi_{k,l}$ is independent of $\hat\Phi_{x,y}^\oracle-\Phi_{x,y}$. Furthermore, applying the conjugation symmetry of Proposition \ref{lemma-distribution-1} to the variables $\eta_k^{(m)}$, we have the equality in law $\hat{B}_{k,l}/B_{k,l} \overset{L}{=} \overline{\hat{B}_{k,l}/B_{k,l}}$ for the quantity inside $\Arg(\cdot)$. Since $\Arg z=-\Arg \overline{z}$ whenever $\Arg z \neq -\pi$, and the probability is 0 that $\Arg \hat{B}_{k,l}/B_{k,l}=-\pi$ exactly, this equality in law implies the sign symmetry (\ref{eq:symmetryinsign}). Hence $\E[\hat{\Phi}_{k,l}^\oracle-\Phi_{k,l}]=0$. This shows part (a). It remains to show part (b). Let $\Ln$ denote the principal value of the complex logarithm with branch cut on the negative real line, so that $\Arg z=\Im \Ln z$ whenever $\Arg z \neq -\pi$. Denote $\delta_{k,l}=\hat B_{k,l}/B_{k,l}-1$. 
Then (with probability 1) \begin{align*} (\hat\Phi_{k,l}^\oracle-\Phi_{k,l})(\hat\Phi_{x,y}^\oracle-\Phi_{x,y}) &=\Arg \frac{\hat B_{k,l}}{B_{k,l}} \cdot \Arg \frac{\hat B_{x,y}}{B_{x,y}} =\Im \Ln\left(1+\delta_{k,l}\right) \cdot \Im\Ln\left(1+\delta_{x,y}\right). \end{align*} Let us fix an integer $J=J(N,\rlower^2,\sigma^2) \geq 1$ to be determined, and apply a Taylor expansion of $t \mapsto \Ln(1+t\delta)$ around $t=0$ to write \[\Im \Ln(1+\delta)=q(\delta)+r(\delta), \qquad q(\delta)=\Im \sum_{j=1}^J \frac{(-1)^{j-1}}{j} \delta^j, \qquad r(\delta)=\Im \int_0^1 \delta^{J+1} \cdot \frac{(-1)^J(1-t)^J}{(1+t\delta)^{J+1}}dt.\] Define the event $\cE=\{|\delta_{k,l}|<1/2 \text{ and } |\delta_{x,y}|<1/2\}$. We may then apply the approximation \[\E\left[(\hat\Phi_{k,l}^\oracle-\Phi_{k,l})(\hat\Phi_{x,y}^\oracle-\Phi_{x,y})\right]=\E\Big[q(\delta_{k,l}) \cdot q(\delta_{x,y})\Big] +\mathrm{I}+\mathrm{II}+\mathrm{III}\] where we define the three error terms \begin{align*} \mathrm{I}&=-\E\Big[\1\{\cE^c\} \cdot q(\delta_{k,l}) \cdot q(\delta_{x,y})\Big]\\ \mathrm{II}&=\E\left[\1\{\cE\} \Big(q(\delta_{k,l}) \cdot r(\delta_{x,y}) +r(\delta_{k,l}) \cdot q(\delta_{x,y}) +r(\delta_{k,l}) \cdot r(\delta_{x,y})\Big)\right]\\ \mathrm{III}&=\E\left[\1\{\cE^c\} \cdot (\hat\Phi_{k,l}^\oracle-\Phi_{k,l})(\hat\Phi_{x,y}^\oracle-\Phi_{x,y})\right] \end{align*} To bound these errors, let $C,C',c,c'>0$ denote universal constants changing from instance to instance. Recall from (\ref{eqn-B-final-bound}) that \begin{equation}\label{eq:Tayloreventbound} \P[\cE^c] \leq \P[|\delta_{k,l}| \geq 1/2]+\P[|\delta_{x,y}| \geq 1/2] \leq Ce^{-c(\frac{N\rlower^6}{\sigma^6} \wedge \frac{N^{2/3}\rlower^2}{\sigma^2})}. 
\end{equation} Also, for any $j \geq 1$, applying $\E[|X|^j]=\int_0^\infty \P[|X| \geq s] \cdot js^{j-1}\,ds$ with $X=2\delta_{k,l}$, and applying also for $Z \sim \Normal(0,1)$ that $\E[|Z|^j] \leq \E[Z^{2j}]^{1/2}=[(2j-1)!!]^{1/2} \leq (2j)^{(j-1)/2}$, we have from (\ref{eqn-B-final-bound}) that \begin{align*} \E[|2\delta_{k,l}|^j] &\leq \int_0^\infty js^{j-1} \cdot C(e^{-cNs^2\rlower^2/\sigma^2} +e^{-cNs^2\rlower^6/\sigma^6}+e^{-c(Ns)^{2/3}\rlower^2/\sigma^2})\,ds\\ &=Cj \left(\left(\frac{\sigma^j}{\rlower^j N^{j/2}} +\frac{\sigma^{3j}}{\rlower^{3j} N^{j/2}} \right) \cdot \int_0^\infty t^{j-1}e^{-ct^2}dt +\left(\frac{\sigma^{3j}}{\rlower^{3j} N^j}\right) \cdot \int_0^\infty t^{3j-3}e^{-ct^2}\cdot 3t^2\,dt\right)\\ &\leq (C_0j)^{\frac{j}{2}}\left(\frac{\sigma^j}{\rlower^j N^{j/2}} +\frac{\sigma^{3j}}{\rlower^{3j} N^{j/2}} \right) +(C_0j)^{\frac{3j}{2}}\left(\frac{\sigma^{3j}}{\rlower^{3j} N^j}\right) \end{align*} where $C_0$ in the last line is a universal constant, which we will later assume satisfies $C_0 \geq 3$. Let us set \begin{equation}\label{eq:Jchoice} J=\left\lfloor \frac{1}{4C_0e} \left(\frac{N\rlower^6}{\sigma^6} \wedge \frac{N^{2/3}\rlower^2}{\sigma^2}\right)\right\rfloor \end{equation} for this constant $C_0>0$. Note that if the quantity inside $\lfloor \cdot \rfloor$ is less than 1, then the statement of part (b) holds since the left side of (\ref{eq:card1bound}) is at most $\pi^2$, and the right side is an arbitrarily large constant. Thus, we may assume henceforth that $J \geq 1$. 
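As a numerical sanity check of the decomposition $\Im \Ln(1+\delta)=q(\delta)+r(\delta)$ (a check of ours, not part of the proof), both sides can be evaluated at a sample point, approximating the remainder integral by a midpoint rule:

```python
import cmath
import numpy as np

def q(delta, J):
    # truncated series Im sum_{j=1}^J (-1)^{j-1} delta^j / j
    return sum((-1) ** (j - 1) * delta ** j / j for j in range(1, J + 1)).imag

def r(delta, J, n=200000):
    # remainder Im int_0^1 delta^{J+1} (-1)^J (1-t)^J / (1+t*delta)^{J+1} dt,
    # approximated by a midpoint rule with n points
    t = (np.arange(n) + 0.5) / n
    vals = delta ** (J + 1) * (-1) ** J * (1 - t) ** J / (1 + t * delta) ** (J + 1)
    return vals.mean().imag

delta, J = 0.3 + 0.2j, 4
lhs = cmath.log(1 + delta).imag  # principal branch, so Im Ln(1+delta) = Arg(1+delta)
print(abs(lhs - (q(delta, J) + r(delta, J))))  # small quadrature error
```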
The moment bound above then gives \[\E[|2\delta_{k,l}|^j] \leq 2\left(\frac{j}{4eJ}\right)^{j/2} +\left(\frac{j}{4eJ}\right)^{3j/2}.\] Applying $|q(\delta)| \leq J\max(|\delta|,|\delta|^J)$, and also $|r(\delta)| \leq |2\delta|^{J+1}$ for $|\delta|<1/2$, this shows \begin{align*} \E[|q(\delta_{k,l})|^2] &\leq J^2\E[|\delta_{k,l}|^2] +J^2\E[|\delta_{k,l}|^{2J}] \leq CJ+J^2e^{-cJ},\\ \E[|q(\delta_{k,l})|^3] &\leq J^3\E[|\delta_{k,l}|^3] +J^3\E[|\delta_{k,l}|^{3J}] \leq CJ^{3/2}+J^3e^{-cJ},\\ \E[\1\{\cE\}|r(\delta_{k,l})|^2] &\leq \E[|\delta_{k,l}|^{2J+2}] \leq Ce^{-cJ}. \end{align*} Then, applying these bounds together with (\ref{eq:Tayloreventbound}), H\"older's inequality, and Cauchy-Schwarz, \[|\mathrm{I}|+|\mathrm{II}|+|\mathrm{III}| \leq CJ^2 e^{-cJ} \leq C'e^{-c'J}.\] This gives the second term on the right side of (\ref{eq:card1bound}). Finally, let us bound the dominant term $\E[q(\delta_{k,l})q(\delta_{x,y})]$ using the condition that $\{k,l,k+l\} \cap \{x,y,x+y\}$ has cardinality 1. Applying $2\Im u \cdot \Im v=\Re u\bar{v}-\Re uv$, we have \[\E[q(\delta_{k,l})q(\delta_{x,y})]=\sum_{i,j=1}^J \frac{(-1)^{i+j}}{ij} \E[\Im \delta_{k,l}^i \cdot \Im \delta_{x,y}^j] =\sum_{i,j=1}^J \frac{(-1)^{i+j}}{2ij} \Big(\Re \E[\delta_{k,l}^i\overline{\delta_{x,y}^j}] -\Re \E[\delta_{k,l}^i\delta_{x,y}^j]\Big).\] From the expression for $\hat{B}_{k,l}$ in (\ref{eq:hatBexpansion}), observe that \[\delta_{k,l}=\frac{\hat{B}_{k,l}}{B_{k,l}}-1=\frac{1}{N}\sum_{m=1}^N \left(1+\frac{\sigma}{r_{k+l}}\eta_{k+l}^{(m)}\right)\left(1+\frac{\sigma}{r_k}\overline{\eta_k^{(m)}}\right)\left(1+\frac{\sigma}{r_l}\overline{\eta_l^{(m)}}\right)-1.\] We view this as a polynomial in the variables $\{\eta_{k+l}^{(m)}, \overline{\eta_k^{(m)}},\overline{\eta_l^{(m)}}:m=1,\ldots,N\}$ where, after canceling $+1$ with $-1$, each monomial has total degree at least 1 in these variables. We consider three cases. {\bf Case 1:} $k+l=x+y$.
This allows possibly $k=l$ and/or $x=y$, but ensures $\{k,l\} \cap \{x,y\}=\emptyset$ since $\{k,l,k+l\} \cap \{x,y,x+y\}$ has cardinality 1. We may expand $\delta_{k,l}^i\delta_{x,y}^j$ as a sum of monomials in $\eta_{k+l}^{(m)},\overline{\eta_k^{(m)}}, \overline{\eta_l^{(m)}},\overline{\eta_x^{(m)}},\overline{\eta_y^{(m)}}$ with degree at least 1, and observe that $k+l=x+y$ is distinct from $\{k,l,x,y\}$ because it is strictly greater in value. Then (\ref{eq:gaussiansymmetry}) from Proposition \ref{lemma-distribution-1} implies $\E[\delta_{k,l}^i \delta_{x,y}^j]=0$. We may also expand $\delta_{k,l}^i \overline{\delta_{x,y}^j}$ as a sum of monomials in $\eta_{k+l}^{(m)}, \overline{\eta_k^{(m)}}, \overline{\eta_l^{(m)}},\overline{\eta_{k+l}^{(m)}},\eta_x^{(m)}, \eta_y^{(m)}$. Since $\{k,l\}$ are distinct from $\{x,y,k+l\}$, any monomial involving $\overline{\eta_k^{(m)}},\overline{\eta_l^{(m)}}$ has vanishing expectation. Similarly, any monomial involving $\eta_x^{(m)},\eta_y^{(m)}$ has vanishing expectation. Thus the only non-vanishing terms are \begin{equation}\label{eq:deltaprodexpansion1} \E[\delta_{k,l}^i\overline{\delta_{x,y}^j}] =\E\left[\left(\frac{1}{N}\sum_{m=1}^N \frac{\sigma}{r_{k+l}}\eta_{k+l}^{(m)}\right)^i \left(\frac{1}{N}\sum_{m=1}^N \frac{\sigma}{r_{k+l}}\overline{\eta_{k+l}^{(m)}}\right)^j\right]. \end{equation} Then applying the equality in law $N^{-1}\sum_{m=1}^N (\sigma/r_{k+l})\eta_{k+l}^{(m)} \overset{L}{=} \eta \cdot \sigma/(r_{k+l}\sqrt{N})$ where $\eta \sim \Normal_\C(0,2)$, together with (\ref{eq:gaussiansymmetry}) and (\ref{eq:gaussianmoments}) and the bound $j! 
\leq j^j$, \[\Big|\E[\delta_{k,l}^i\overline{\delta_{x,y}^j}]\Big| =\left(\frac{\sigma}{r_{k+l}\sqrt{N}}\right)^{i+j}\E[\eta^i\overline{\eta^j}] \leq \1\{i=j\}\left(\frac{4j\sigma^2}{N\rlower^2}\right)^j.\] So, recalling the definition of $J$ from (\ref{eq:Jchoice}) where $C_0 \geq 3$, \[\Big|\E[q(\delta_{k,l})q(\delta_{x,y})]\Big| \leq \sum_{j=1}^J \frac{1}{2j^2} \left(\frac{4j\sigma^2}{N\rlower^2}\right)^j \leq \left(\frac{2\sigma^2}{N\rlower^2}\right) \sum_{j=1}^J \left(\frac{4J\sigma^2}{N\rlower^2}\right)^{j-1} \leq \left(\frac{2\sigma^2}{N\rlower^2}\right)\sum_{j=1}^\infty \left(\frac{1}{C_0e}\right)^{j-1}.\] Thus we obtain, for a universal constant $C>0$, \begin{equation}\label{eq:IVbound} \Big|\E[q(\delta_{k,l})q(\delta_{x,y})]\Big| \leq \frac{C\sigma^2}{N\rlower^2}. \end{equation} This concludes the proof in Case 1. {\bf Case 2:} $k=x+y$. (By symmetry, this addresses also $l=x+y$, $x=k+l$, and $y=k+l$.) Then $\{k,l\} \cap \{x,y,k+l\}=\emptyset$ and $\{k+l\} \cap \{x,y\}=\emptyset$, because $k+l$ is greater than $\{k,l\}$, $k$ is greater than $\{x,y\}$, and $l=x$ or $l=y$ would imply that $\{k,l,k+l\} \cap \{x,y,x+y\}$ has cardinality 2. We may expand $\delta_{k,l}^i\overline{\delta_{x,y}^j}$ as a sum of monomials in $\eta_{k+l}^{(m)},\overline{\eta_k^{(m)}}, \overline{\eta_l^{(m)}},\eta_x^{(m)},\eta_y^{(m)}$. Since $\{k,l\}$ are distinct from $\{x,y,k+l\}$, (\ref{eq:gaussiansymmetry}) implies $\E[\delta_{k,l}^i\overline{\delta_{x,y}^j}]=0$. We may also expand $\delta_{k,l}^i\delta_{x,y}^j$ as a sum of monomials in $\eta_{k+l}^{(m)},\overline{\eta_k^{(m)}},\overline{\eta_l^{(m)}}, \eta_k^{(m)},\overline{\eta_x^{(m)}},\overline{\eta_y^{(m)}}$. Here, $k+l$ is distinct from $\{k,l,x,y\}$, and $\{x,y\}$ are distinct from $\{k,k+l\}$, so any monomial involving $\eta_{k+l}^{(m)},\overline{\eta_x^{(m)}},\overline{\eta_y^{(m)}}$ has vanishing expectation. 
If $l \neq k$, then also $l$ is distinct from $\{k,k+l\}$, so monomials involving $\overline{\eta_l^{(m)}}$ have vanishing expectation, yielding \[\E[\delta_{k,l}^i\delta_{x,y}^j] =\E\left[\left(\frac{1}{N}\sum_{m=1}^N \frac{\sigma}{r_k}\overline{\eta_k^{(m)}}\right)^i \left(\frac{1}{N}\sum_{m=1}^N \frac{\sigma}{r_k}\eta_k^{(m)}\right)^j\right].\] This is analogous to (\ref{eq:deltaprodexpansion1}), and the same argument as above gives (\ref{eq:IVbound}). If instead $l=k$, then we obtain that the only non-zero terms of $\E[\delta_{k,l}^i\delta_{x,y}^j]$ are \begin{equation}\label{eq:deltaprodexpansion2} \E\left[\delta_{k,l}^i\delta_{x,y}^j\right] =\E\left[\left(\frac{1}{N}\sum_{m=1}^N \frac{2\sigma}{r_k}\overline{\eta_k^{(m)}}+\frac{\sigma^2}{r_k^2} (\overline{\eta_k^{(m)}})^2\right)^i \left(\frac{1}{N}\sum_{m=1}^N \frac{\sigma}{r_k}\eta_k^{(m)}\right)^j\right]. \end{equation} Let us distribute this product and then factor the expectations of the resulting terms, using independence of $\{\eta_k^{(m)}:m=1,\ldots,N\}$. We write $\sum_{(a_1,\ldots,a_N)|a}$ for the sum over all tuples of nonnegative integers $(a_1,\ldots,a_N)$ that sum to $a$.
Then the above may be rewritten as \begin{align*} \E\left[\delta_{k,l}^i\delta_{x,y}^j\right] &=\mathop{\sum_{a,b \geq 0}}_{a+b=i} \binom{i}{a} \E\left[\left(\frac{1}{N}\sum_{m=1}^N \frac{2\sigma}{r_k}\overline{\eta_k^{(m)}}\right)^a \left(\frac{1}{N}\sum_{m=1}^N \frac{\sigma^2}{r_k^2}(\overline{\eta_k^{(m)}})^2\right)^b \left(\frac{1}{N}\sum_{m=1}^N \frac{\sigma}{r_k}\eta_k^{(m)}\right)^j\right]\\ &=\mathop{\sum_{a,b \geq 0}}_{a+b=i} \binom{i}{a}2^aN^b \E\left[ \left(\sum_{m=1}^N \frac{\sigma}{Nr_k}\overline{\eta_k^{(m)}}\right)^a \left(\sum_{m=1}^N \left(\frac{\sigma}{Nr_k}\overline{\eta_k^{(m)}}\right)^2\right)^b \left(\sum_{m=1}^N \frac{\sigma}{Nr_k}\eta_k^{(m)}\right)^j\right]\\ &=\mathop{\sum_{a,b \geq 0}}_{a+b=i} \binom{i}{a} 2^aN^b \sum_{(a_1,\ldots,a_N)|a}\,\sum_{(b_1,\ldots,b_N)|b}\, \sum_{(j_1,\ldots,j_N)|j}\\ &\hspace{0.2in} \binom{a}{a_1,\ldots,a_N}\binom{b}{b_1,\ldots,b_N} \binom{j}{j_1,\ldots,j_N} \prod_{m=1}^N \E\left[\left(\frac{\sigma \overline{\eta_k^{(m)}}}{Nr_k}\right)^{a_m+2b_m} \left(\frac{\sigma \eta_k^{(m)}}{Nr_k}\right)^{j_m}\right] \end{align*} where the last line uses that there are $\binom{a}{a_1,\ldots,a_N}$ ways to choose $a_1$ of the factors $(\sum_{m=1}^N \frac{\sigma}{Nr_k}\overline{\eta_k^{(m)}})^a$ to correspond to $m=1$, $a_2$ to correspond to $m=2$, etc., and similarly for $b$ and $j$. Then, applying (\ref{eq:gaussiansymmetry}) and (\ref{eq:gaussianmoments}), \begin{align*} \left|\E\left[\delta_{k,l}^i\delta_{x,y}^j\right]\right| &\leq \mathop{\sum_{a,b \geq 0}}_{a+b=i} \binom{i}{a} 2^aN^b \sum_{(a_1,\ldots,a_N)|a}\,\sum_{(b_1,\ldots,b_N)|b}\, \sum_{(j_1,\ldots,j_N)|j} \\ &\hspace{0.5in} \binom{a}{a_1,\ldots,a_N}\binom{b}{b_1,\ldots,b_N} \binom{j}{j_1,\ldots,j_N} \prod_{m=1}^N \1\{a_m+2b_m=j_m\} \left(\frac{4\sigma^2}{N^2r_k^2}\right)^{j_m} j_m! \end{align*} Observe that $\binom{j}{j_1,\ldots,j_N} \cdot \prod_{m=1}^N j_m!=j! \leq j^j$. 
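The combinatorial identities used in this step and the next, $\binom{j}{j_1,\ldots,j_N}\prod_{m=1}^N j_m!=j!$ and $\sum_{(a_1,\ldots,a_N)|a}\binom{a}{a_1,\ldots,a_N}=N^a$, admit an exhaustive check for small parameters; the helper \texttt{multinomial} below is an illustrative name, not notation from the text.

```python
from itertools import product
from math import factorial, prod

def multinomial(parts):
    """Multinomial coefficient binom(sum(parts); parts)."""
    return factorial(sum(parts)) // prod(factorial(p) for p in parts)

N, a = 3, 4
# all compositions (a_1, ..., a_N) of a into N nonnegative parts
comps = [c for c in product(range(a + 1), repeat=N) if sum(c) == a]

# binom(a; a_1,...,a_N) * prod_m a_m! = a!  for every composition
assert all(multinomial(c) * prod(factorial(p) for p in c) == factorial(a)
           for c in comps)
# summing the multinomial coefficients over all compositions counts the
# assignments of a labeled objects to N bins, i.e. equals N^a
assert sum(multinomial(c) for c in comps) == N**a
```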
The condition $a_m+2b_m=j_m$ for every $m=1,\ldots,N$ requires $a+2b=\sum_m a_m+2b_m=\sum_m j_m=j$. For $(a,b,j)$ satisfying this requirement, fixing any partitions $(a_1,\ldots,a_N)|a$ and $(b_1,\ldots,b_N)|b$, there is exactly one partition $(j_1,\ldots,j_N)|j$ for which $a_m+2b_m=j_m$ holds for every $m=1,\ldots,N$. Thus, the above gives \begin{align*} \left|\E\left[\delta_{k,l}^i\delta_{x,y}^j\right]\right| \leq \left(\frac{4j\sigma^2}{N^2\rlower^2}\right)^j \mathop{\sum_{a,b \geq 0}}_{a+b=i,\,a+2b=j} \binom{i}{a} 2^aN^b \sum_{(a_1,\ldots,a_N)|a}\,\sum_{(b_1,\ldots,b_N)|b}\, \binom{a}{a_1,\ldots,a_N}\binom{b}{b_1,\ldots,b_N}. \end{align*} Observe now that $\sum_{(a_1,\ldots,a_N)|a} \binom{a}{a_1,\ldots,a_N}$ counts exactly the number of assignments of each of $a$ labeled objects to $N$ bins, by first determining the number of objects in each bin, followed by their identities. So $\sum_{(a_1,\ldots,a_N)|a} \binom{a}{a_1,\ldots,a_N}=N^a$. Applying the similar identity for $b$ and $N^{a+2b}=N^j$ above, \[\left|\E\left[\delta_{k,l}^i\delta_{x,y}^j\right]\right| \leq \left(\frac{4j\sigma^2}{N\rlower^2}\right)^j \mathop{\sum_{a,b \geq 0}}_{a+b=i,\,a+2b=j} \binom{i}{a} 2^a.\] Then, recalling that $\E\left[\delta_{k,l}^i\overline{\delta_{x,y}^j}\right]=0$, \[\Big|\E[q(\delta_{k,l})q(\delta_{x,y})]\Big| \leq \sum_{i,j=1}^J \frac{1}{2ij} \left|\E\left[\delta_{k,l}^i\delta_{x,y}^j\right]\right| \leq \sum_{j=1}^J \frac{1}{2j} \left(\frac{4j\sigma^2}{N\rlower^2}\right)^j \mathop{\sum_{a,b \geq 0}}_{a+2b=j} \frac{1}{a+b}\binom{a+b}{a}2^a.\] We may apply \[\mathop{\sum_{a,b \geq 0}}_{a+2b=j} \frac{1}{a+b}\binom{a+b}{a}2^a \leq \sum_{a=0}^j \binom{j}{a}2^a=3^j.\] Then, recalling the definition of $J$ from (\ref{eq:Jchoice}) where $C_0 \geq 3$, \[\Big|\E[q(\delta_{k,l})q(\delta_{x,y})]\Big| \leq \sum_{j=1}^J \frac{1}{2j} \left(\frac{12j\sigma^2}{N\rlower^2}\right)^j \leq \frac{6\sigma^2}{N\rlower^2}\sum_{j=1}^J \left(\frac{12J\sigma^2}{N\rlower^2}\right)^{j-1} \leq 
\frac{6\sigma^2}{N\rlower^2}\sum_{j=1}^\infty \left(\frac{3}{C_0e}\right)^{j-1}.\] This again yields (\ref{eq:IVbound}), and concludes the proof in Case 2. {\bf Case 3:} $k=x$. (By symmetry, this addresses also $k=y$, $l=x$, and $l=y$.) This ensures that $\{k+l,k+y\} \cap \{k,l,y\}=\emptyset$ and $k+l \neq k+y$, because $k+l$ is greater than $\{k,l\}$, $k+y$ is greater than $\{k,y\}$, and $k+l=y$ or $k+y=l$ or $k+l=k+y$ would lead to $\{k,l,k+l\} \cap \{x,y,x+y\}$ having cardinality 2. We may expand $\delta_{k,l}^i\delta_{x,y}^j$ as monomials in $\eta_{k+l}^{(m)},\overline{\eta_k^{(m)}},\overline{\eta_l^{(m)}}, \eta_{k+y}^{(m)},\overline{\eta_y^{(m)}}$. Since $\{k+l,k+y\}$ are distinct from $\{k,l,y\}$, (\ref{eq:gaussiansymmetry}) implies $\E[\delta_{k,l}^i\delta_{x,y}^j]=0$. We may also expand $\delta_{k,l}^i\overline{\delta_{x,y}^j}$ as monomials in $\eta_{k+l}^{(m)},\overline{\eta_k^{(m)}},\overline{\eta_l^{(m)}}, \overline{\eta_{k+y}^{(m)}},\eta_k^{(m)},\eta_y^{(m)}$. Since $k+l$ is distinct from $\{k,l,k+y\}$ and $k+y$ is distinct from $\{k,y,k+l\}$, (\ref{eq:gaussiansymmetry}) implies that any monomials involving $\eta_{k+l}^{(m)}$ or $\overline{\eta_{k+y}^{(m)}}$ have vanishing expectation. Note that since $k+l \neq k+y$, also $l \neq y$. If $k$ is distinct from $\{l,y\}$, then monomials involving $\overline{\eta_l^{(m)}}$ or $\eta_y^{(m)}$ also have vanishing expectation, so \[\E[\delta_{k,l}^i\overline{\delta_{x,y}^j}] =\E\left[\left(\frac{1}{N}\sum_{m=1}^N \frac{\sigma}{r_k}\overline{\eta_k^{(m)}}\right)^i \left(\frac{1}{N}\sum_{m=1}^N \frac{\sigma}{r_k}\eta_k^{(m)}\right)^j\right].\] This is analogous to (\ref{eq:deltaprodexpansion1}), and the same argument as in Case 1 leads to (\ref{eq:IVbound}).
If instead $k=l$ (which by symmetry addresses also $k=y$), then \[\E[\delta_{k,l}^i\overline{\delta_{x,y}^j}] =\E\left[\left(\frac{1}{N}\sum_{m=1}^N \frac{2\sigma}{r_k}\overline{\eta_k^{(m)}} +\frac{\sigma^2}{r_k^2}(\overline{\eta_k^{(m)}})^2\right)^i \left(\frac{1}{N}\sum_{m=1}^N \frac{\sigma}{r_k}\eta_k^{(m)}\right)^j\right].\] This is analogous to (\ref{eq:deltaprodexpansion2}), and the same argument as in Case 2 leads to (\ref{eq:IVbound}). This concludes the proof in Case 3. Combining these three cases shows (\ref{eq:card1bound}). \end{proof} \subsection{Oracle estimation of $\phi_k$} \begin{proof}[Proof of Lemma \ref{lemma:Mproperties}] Suppose $M\phi=0$. Then for all $(k,l) \in \cI$, $\phi_{k+l}=\phi_k+\phi_l$. Then $\phi_2=\phi_1+\phi_1=2\phi_1$, $\phi_3=\phi_2+\phi_1=3\phi_1$, etc., and $\phi_K=\phi_{K-1}+\phi_1=K\phi_1$, so $\phi$ is a multiple of $(1,2,3,\ldots,K)$. Conversely, any multiple of $(1,2,3,\ldots,K)$ satisfies $M\phi=0$ by the definition of $M$. This shows the first statement. Denote $T=M^\top M$. We explicitly compute $T$: Let $M_k \in \R^{\cI}$ be the $k^\text{th}$ column of $M$. The diagonal entries of $T$ are $T_{kk}=\|M_k\|^2$. If $2k>K$, then the non-zero entries of $M_k$ correspond to the $(i,j)$ pairs \begin{align*} (i,j)=(1,k-1),\ldots,(k-1,1)&: M_{(i,j),k}=1\\ (i,j)=(1,k),\ldots,(K-k,k)&: M_{(i,j),k}=-1\\ (i,j)=(k,1),\ldots,(k,K-k)&: M_{(i,j),k}=-1 \end{align*} So $T_{kk}=(k-1)+2(K-k)=2K-1-k$. If $2k \leq K$, then the non-zero entries of $M_k$ correspond to the $(i,j)$ pairs \begin{align*} (i,j)=(1,k-1),\ldots,(k-1,1)&: M_{(i,j),k}=1\\ (i,j)=(1,k),\ldots,(k-1,k),(k+1,k),\ldots,(K-k,k)&: M_{(i,j),k}=-1\\ (i,j)=(k,1),\ldots,(k,k-1),(k,k+1),\ldots,(k,K-k)&: M_{(i,j),k}=-1\\ (i,j)=(k,k)&: M_{(i,j),k}=-2 \end{align*} So $T_{kk}=(k-1)+2(K-k-1)+4=2K+1-k$. 
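As a numerical sanity check of these entries (and of the null-space and spectral conclusions of the lemma), one can build $M$ for a small $K$ and inspect $T=M^\top M$ directly. The sketch below assumes the index set $\cI=\{(i,j):i,j \geq 1,\ i+j \leq K\}$, matching the rows enumerated above; the choice $K=7$ is illustrative.

```python
import numpy as np

K = 7
pairs = [(i, j) for i in range(1, K) for j in range(1, K) if i + j <= K]
M = np.zeros((len(pairs), K))
for r, (i, j) in enumerate(pairs):
    # row encoding the constraint phi_{i+j} - phi_i - phi_j
    M[r, i + j - 1] += 1
    M[r, i - 1] -= 1
    M[r, j - 1] -= 1

T = M.T @ M
# diagonal entries: T_kk = 2K + 1 - k - 2 * 1{2k > K}
assert all(T[k - 1, k - 1] == 2 * K + 1 - k - 2 * (2 * k > K)
           for k in range(1, K + 1))
# null space is spanned by (1, 2, ..., K)
assert np.allclose(M @ np.arange(1, K + 1), 0)
# every positive eigenvalue lies in {K+1, ..., 2K+1}
eig = np.sort(np.linalg.eigvalsh(T))
assert eig[0] < 1e-8
assert all(K + 1 - 1e-8 <= e <= 2 * K + 1 + 1e-8 for e in eig[1:])
```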
Thus, for all $k=1,\ldots,K$, \[T_{kk}=2K+1-k-2\cdot \1\{2k>K\}.\] For $1\leq j<k \leq K$, when $j+k>K$, we have $T_{jk}=M_j^\top M_k=-2$ where the only non-zero contributions to this inner-product come from rows $(k-j,j),(j,k-j) \in \cI$. When $j+k \leq K$, the non-zero contributions come from rows $(k-j,j),(j,k-j),(j,k),(k,j) \in \cI$, and these cancel exactly to yield $T_{jk}=M_j^\top M_k=0$. Combining these diagonal and off-diagonal components, $T$ has the form \[T=\left(\begin{matrix}2K&~&~&~\\~&2K-1&~&~\\~&~&\ddots&~\\~&~&~&K+1\end{matrix}\right)-\left(\begin{matrix}~&~&~&2\\~&~&2&2\\~&\begin{rotate}{90}$\ddots$\end{rotate}& \vdots & \vdots \\2&\ldots&2&2\end{matrix}\right)\] where the second matrix accounts also for the term $-2 \cdot \1\{2k>K\}$ of the diagonal entries. Now let $\lambda>0$ be a positive eigenvalue of $T$, with non-zero eigenvector $x=(x_1,x_2,\ldots,x_K)$. This must be orthogonal to the null vector $(1,2,\ldots,K)$ so $x_1+2x_2+\ldots+Kx_K=0$. From the above form of $T$, the equation $Tx=\lambda x$ may be arranged as the linear system \begin{align*} (2K-\lambda)x_1&= 2x_K\\ (2K-1-\lambda)x_2&= 2(x_{K-1}+x_K)\\ &\vdots\\ (K+2-\lambda)x_{K-1}&=2(x_2+\ldots+x_K)\\ (K+1-\lambda)x_K &= 2(x_1+x_2+\ldots+x_K). \end{align*} Summing these equations and adding $x_1+2x_2+\ldots+Kx_K$ to both sides, we obtain \[(2K+1-\lambda)(x_1+x_2+\ldots+x_K)= 2(x_1+2x_2+\ldots+Kx_K)+(x_1+2x_2+\ldots+Kx_K)=0.\] If $x_1+x_2+\ldots+x_K \neq 0$, then this implies $\lambda=2K+1$. If $x_1+x_2+\ldots+x_K=0$, but $\lambda\notin\{K+1,K+2,\ldots,2K\}$, then from the above linear system, we have the implications \begin{align*} (K+1-\lambda)x_K = 2(x_1+x_2+\ldots+x_K)=0 &\Rightarrow x_K=0\\ (2K-\lambda)x_1 = 2x_K=0 &\Rightarrow x_1=0\\ (K+2-\lambda)x_{K-1}=2(x_2+x_3+\ldots+x_K)=2(0-x_1)=0 &\Rightarrow x_{K-1}=0\\ (2K-1-\lambda)x_2= 2(x_{K-1}+x_K)=0 &\Rightarrow x_2=0 \end{align*} and so forth. Then $x_1=x_2=\ldots=x_K=0$, which contradicts $x\neq 0$. 
Thus any positive eigenvalue $\lambda$ of $T$ is one of the values $\{K+1,K+2,\ldots,2K+1\}$. \end{proof} \begin{proof}[\revise{Proof of Theorem \ref{thm:oracleMoM}, $\hat{\theta}=\hat{\theta}^\oracle$}] Recall the loss upper bound from Proposition \ref{prop-lossgeneral}. For constants $C,C'>0$, applying $ab \leq a^2+b^2$, \begin{align*} L(\hat\theta^\oracle,\theta^*) &\leq \sum_{k=1}^K (\hat{r}_k-r_k)^2 +C\inf_{\alpha \in \R} \sum_{k=1}^K \hat{r}_kr_k \big|\hat\phi_k^\oracle-\phi_k+k\alpha\big|_{\cA}^2\\ &=\sum_{k=1}^K (\hat{r}_k-r_k)^2 +C\inf_{\alpha \in \R} \sum_{k=1}^K [r_k^2+r_k(\hat{r}_k-r_k)] \big|\hat\phi_k^\oracle-\phi_k+k\alpha\big|_{\cA}^2\\ &\leq (C+1)\sum_{k=1}^K (\hat{r}_k-r_k)^2 +C\inf_{\alpha \in \R} \sum_{k=1}^K r_k^2\left( \big|\hat\phi_k^\oracle-\phi_k+k\alpha\big|_{\cA}^2 +\big|\hat\phi_k^\oracle-\phi_k+k\alpha\big|_{\cA}^4\right)\\ &\leq C'\left(\sum_{k=1}^K (\hat{r}_k-r_k)^2 +\inf_{\alpha \in \R} \sum_{k=1}^K r_k^2 \big|\hat\phi_k^\oracle-\phi_k+k\alpha\big|_\cA^2\right). \end{align*} The expectation may be bounded using Corollaries \ref{cor:MoMrriskbound} and \ref{cor:MoMphiriskbound}. For the bound (\ref{eq:phiriskbound}) of Corollary \ref{cor:MoMphiriskbound}, letting $c>0$ be the constant in the exponent, observe that the given condition for $N$ with $C_0>0$ large enough implies \[c\left(\frac{N\rlower^6}{\sigma^6} \wedge \frac{N^{2/3}\rlower^2}{\sigma^2}\right) \geq \frac{c}{2}\left(\frac{N\rlower^6}{\sigma^6} \wedge \frac{N^{2/3}\rlower^2}{\sigma^2}\right) +3\log K.\] We may then apply $e^{-(c/2)x},e^{-(c/2)x^{2/3}} \leq C/x$ for a constant $C>0$ to obtain \begin{equation}\label{eq:exptopoly} e^{-c(\frac{N\rlower^6}{\sigma^6} \wedge \frac{N^{2/3}\rlower^2}{\sigma^2})} \leq \frac{C}{K^3}\left( \frac{\sigma^6}{N\rlower^6}+\frac{\sigma^3}{N\rlower^3}\right) \leq \frac{C'}{K^3}\left(\frac{\sigma^2}{N\rlower^2} +\frac{\sigma^6}{N\rlower^6}\right).
\end{equation} Then Corollary \ref{cor:MoMphiriskbound} gives simply \revise{ \[\E\left[\inf_{\alpha \in \R} \sum_{k=1}^K r_k^2|\hat\phi_k^\oracle-\phi_k+k\alpha|_\cA^2\right] \leq \frac{C'\|\theta^*\|^2}{K} \left(\frac{K\sigma^2}{N\rlower^2}+\frac{\sigma^6}{N\rlower^6}\right)\]} and combining this with Corollary \ref{cor:MoMrriskbound} yields the theorem. \end{proof} \subsection{Estimation of $\Phi_{k,l}$ by optimization} \begin{proof}[Proof of Lemma \ref{lemma:phasestability}] Set $v=\phi-\phi'$. We must show: If $v \in \R^K$ is such that \begin{equation}\label{eq:Mphismall} |v_{k+l}-v_k-v_l|_{\cA} \leq \delta \text{ for all } (k,l) \in \cI \end{equation} then there exists $\alpha \in \R$ with $|v_k-k\alpha|_\cA \leq \delta$ for all $k=1,\ldots,K$. We induct on $K$. For $K=1$ the result holds trivially by setting $\alpha=v_1$. Suppose the result holds for $K-1$. Consider $v \in \R^K$ that satisfies (\ref{eq:Mphismall}). By the induction hypothesis, there exists $\alpha \in \R$ such that $|v_k-k\alpha|_{\cA} \leq \delta$ for $k=1,\ldots,K-1$. Then by the triangle inequality, it is immediate that \[|v_K-K\alpha|_\cA \leq |v_K-v_1-v_{K-1}|_\cA +|v_1-\alpha|_\cA+|v_{K-1}-(K-1)\alpha|_\cA \leq 3\delta.\] To complete the induction, we must show the stronger bound of $\delta$ instead of $3\delta$. For this, let \[\alpha_*=\argmin_{\alpha \in \R} \Big(\max_{k=1}^{K-1} |v_k-k\alpha|_\cA\Big), \qquad \eps_k=|v_k-k\alpha_*|_\cA \text{ for } k=1,\ldots,K-1, \qquad \eps=\max_{k=1}^{K-1} \eps_k.\] Note that a minimizing $\alpha_*$ exists because the minimum may equivalently be restricted to the compact domain $[-\pi,\pi]$. The induction hypothesis implies $\eps \leq \delta$.
By definition of $|\cdot|_\cA$, there exists $j_k \in \Z$ for each $k=1,\ldots,K-1$ such that \[\eps_k=|v_k-k\alpha_*+2\pi j_k|.\] Furthermore, we claim that there must exist two indices $k,l \in \{1,\ldots,K-1\}$ for which \[\eps=v_k-k\alpha_*+2\pi j_k \quad \text{ and } \quad {-}\eps=v_l-l\alpha_*+2\pi j_l.\] This is because \[\Big\{k \in \{1,\ldots,K-1\}:\eps_k=\eps\Big\}=\Big\{k \in \{1,\ldots,K-1\}:|v_k-k\alpha_*+2\pi j_k|=\eps\Big\}\] is non-empty by definition of $\eps$. If $v_k-k\alpha_*+2\pi j_k=\eps$ for every $k$ belonging to this set, then we may decrease the value of $\max_{k=1}^{K-1} |v_k-k\alpha_*|_\cA$ by slightly increasing $\alpha_*$, which contradicts the optimality of $\alpha_*$. Similarly if $v_k-k\alpha_*+2\pi j_k=-\eps$ for all $k$ in this set, then we may decrease $\max_{k=1}^{K-1} |v_k-k\alpha_*|_\cA$ by slightly decreasing $\alpha_*$, again contradicting the optimality of $\alpha_*$. Thus the claimed indices $k,l$ exist. Then, for this index $k \in \{1,\ldots,K-1\}$, we have \[v_k-k\alpha_* \in 2\pi \Z+\eps, \qquad v_{K-k}-(K-k)\alpha_* \in 2\pi \Z+[-\eps,\eps], \qquad v_K-v_k-v_{K-k} \in 2\pi \Z+[-\delta,\delta].\] Adding these three conditions and applying $\eps \leq \delta$, \[v_K-K\alpha_* \in 2\pi \Z+[-\delta,2\eps+\delta] \subseteq 2\pi \Z+[-\delta,3\delta].\] Similarly, for this index $l \in \{1,\ldots,K-1\}$, we have \[v_l-l\alpha_* \in 2\pi \Z-\eps, \qquad v_{K-l}-(K-l)\alpha_* \in 2\pi \Z+[-\eps,\eps], \qquad v_K-v_l-v_{K-l} \in 2\pi \Z+[-\delta,\delta].\] Then adding these conditions, also \[v_K-K\alpha_* \in 2\pi \Z+[-2\eps-\delta,\delta] \subseteq 2\pi \Z+[-3\delta,\delta].\] Since $3\delta<\pi$ strictly, the above two conditions combine to show that $v_K-K\alpha_* \in 2\pi \Z+[-\delta,\delta]$, i.e.\ $|v_K-K\alpha_*|_\cA \leq \delta$. This completes the induction. 
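The conclusion of the induction can be illustrated numerically: draw $v$ close modulo $2\pi$ to a multiple of $(1,\ldots,K)$, compute $\delta$ as the largest violation in (\ref{eq:Mphismall}), and verify by grid search that some $\alpha$ aligns every coordinate to within $\delta$. The noise level and grid resolution below are illustrative choices; the small slack in the final assertion covers the grid discretization.

```python
import numpy as np

def circ(x):
    """Circular absolute value |x|_A: distance to the nearest multiple of 2*pi."""
    return np.abs((x + np.pi) % (2 * np.pi) - np.pi)

rng = np.random.default_rng(0)
K, alpha0 = 8, 0.3
v = alpha0 * np.arange(1, K + 1) + rng.uniform(-0.05, 0.05, size=K)
v = (v + np.pi) % (2 * np.pi) - np.pi            # wrap into [-pi, pi)

# delta: largest violation |v_{k+l} - v_k - v_l|_A over (k, l) in I
delta = max(circ(v[k + l - 1] - v[k - 1] - v[l - 1])
            for k in range(1, K) for l in range(1, K - k + 1))

# grid search; the lemma guarantees min_alpha max_k |v_k - k*alpha|_A <= delta
grid = np.linspace(-np.pi, np.pi, 200001)
errs = circ(v[None, :] - grid[:, None] * np.arange(1, K + 1)[None, :])
best = errs.max(axis=1).min()
assert best <= delta + 1e-3                      # slack covers grid resolution
```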
\end{proof} \begin{proof}[\revise{Proof of Theorem \ref{thm:oracleMoM}, $\hat{\theta}=\hat{\theta}^\opt$}] Let $\cE$ be the event where (\ref{eq:linftyPhibound}) holds. Note that this is exactly the event where $|\hat{\Phi}_{k,l}^\oracle(\phi)-\Phi_{k,l}|<\pi/12$ for all $(k,l) \in \cI$. Then by Lemma \ref{lemma:Phikltail} and a union bound, \[\P[\cE^c] \leq CK^2\,e^{-c(\frac{N\rlower^6}{\sigma^6} \wedge \frac{N^{2/3}\rlower^2}{\sigma^2})} \leq \frac{C'}{K} \left(\frac{\sigma^2}{N\rlower^2}+\frac{\sigma^6}{N\rlower^6}\right)\] where the second inequality holds under the given condition for $N$ as argued in (\ref{eq:exptopoly}). Applying Corollaries \ref{cor:optmimicsoracle} and \ref{cor:MoMphiriskbound}, \revise{ \begin{align*} &\E\left[\inf_{\alpha \in \R} \sum_{k=1}^K r_k^2 |\hat\phi_k^\opt-\phi_k+k\alpha|_\cA^2\right]\\ &=\E\left[\1\{\cE\}\inf_{\alpha \in \R} \sum_{k=1}^K r_k^2 |\hat\phi_k^\opt-\phi_k+k\alpha|_\cA^2\right] +\E\left[\1\{\cE^c\}\inf_{\alpha \in \R} \sum_{k=1}^K r_k^2 |\hat\phi_k^\opt-\phi_k+k\alpha|_\cA^2\right]\\ &\leq \E\left[\inf_{\alpha \in \R} \sum_{k=1}^K r_k^2|\hat\phi_k^\oracle(\phi')-\phi_k'+k\alpha|_\cA^2\right] +C\|\theta^*\|^2 \cdot \P[\cE^c] \leq \frac{C'\|\theta^*\|^2}{K}\left(\frac{K\sigma^2}{N\rlower^2} +\frac{\sigma^6}{N\rlower^6}\right). \end{align*}} This is (up to a universal constant) the same risk bound as established for the oracle estimator itself in Corollary \ref{cor:MoMphiriskbound}. The remainder of the proof is then the same as that of Theorem \ref{thm:oracleMoM} for $\hat{\theta}=\hat{\theta}^\oracle$. \end{proof} \subsection{Estimation of $\Phi_{k,l}$ by frequency marching}\label{appendix:freqmarching} We describe in this section an alternative frequency marching method for mimicking the oracle estimator, which is more explicit and computationally efficient but requires a larger sample size $N$ to succeed. 
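In code, the frequency-marching recursion defined next amounts to the following noiseless sketch, where the dictionary \texttt{arg\_B} of exact bispectrum phases $\phi_k-\phi_1-\phi_{k-1}$ is an illustrative stand-in for the estimates $\Arg \hat{B}_{1,k-1}$; in this idealized setting the recursion recovers $\phi_k-k\phi_1$ modulo $2\pi$.

```python
import numpy as np

def circ(x):
    """Circular absolute value: distance to the nearest multiple of 2*pi."""
    return np.abs((x + np.pi) % (2 * np.pi) - np.pi)

rng = np.random.default_rng(1)
K = 6
phi = rng.uniform(-np.pi, np.pi, K + 1)   # phi[k] for k = 1..K; phi[0] unused

# noiseless bispectrum phases Arg B_{1,k-1} = phi_k - phi_1 - phi_{k-1}
arg_B = {k: phi[k] - phi[1] - phi[k - 1] for k in range(2, K + 1)}

# frequency marching: tilde_phi_1 = 0, tilde_phi_k = Arg B_{1,k-1} + tilde_phi_{k-1}
tilde = np.zeros(K + 1)
for k in range(2, K + 1):
    tilde[k] = (arg_B[k] + tilde[k - 1] + np.pi) % (2 * np.pi) - np.pi

# in the noiseless case tilde_phi_k equals phi_k - k*phi_1 modulo 2*pi
for k in range(1, K + 1):
    assert circ(tilde[k] - (phi[k] - k * phi[1])) < 1e-10
```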
Let \[\tilde{\phi}_1=0\] and, for each $k=2,\ldots,K$, set \[\tilde{\phi}_k=\Arg \hat{B}_{1,k-1}+\tilde{\phi}_{k-1} \bmod 2\pi.\] This defines a vector $\tilde\phi \in [-\pi,\pi)^K$, which we use in place of (\ref{eq:tildephidef}). Then, as in Section \ref{sec:MoMopt}, define $\tilde\Phi_{k,l}=\tilde\phi_{k+l}-\tilde\phi_k-\tilde\phi_l$ with arithmetic carried out over $\R$, and choose $\hat{\Phi}_{k,l}^\fm \in [\tilde\Phi_{k,l}-\pi,\tilde\Phi_{k,l}+\pi)$ as the unique version of the phase of $\hat{B}_{k,l}$ belonging to this range. Finally, let $\hat\phi^\fm$ be the resulting least-squares estimate of $\phi$ in (\ref{eq:leastsquares}), and let $\hat\theta^\fm$ be the resulting estimate of $\theta$. Again, this procedure uses the frequency-marching estimate $\tilde{\phi}$ only as a pilot estimate to resolve the phase ambiguity of the estimated bispectrum, which is then inverted using a least-squares approach. The following lemma is analogous to Corollary \ref{cor:optmimicsoracle}, but requires an improvement for the error of $\Arg \hat{B}_{k,l}$ by a factor of $1/K$. \begin{Lemma}\label{lemma:fmmimicsoracle} Suppose \begin{equation}\label{eq:linftyfm} |\Arg \hat{B}_{k,l}-(\phi_{k+l}-\phi_k-\phi_l)|_\cA<\pi/(6K) \text{ for every } (k,l) \in \cI. \end{equation} Then there exists $\phi'$ equivalent to $\phi$ such that $\hat\Phi^\fm=\hat\Phi^\oracle(\phi')$. 
\end{Lemma} \begin{proof} For each $k=2,\ldots,K$, by the definition of $\tilde\phi_k$ and the triangle inequality, \[\big|\tilde\phi_k-\phi_k+k\phi_1\big|_\cA \leq \big|\Arg \hat{B}_{1,k-1}-(\phi_k-\phi_1-\phi_{k-1})\big|_\cA +\big|\tilde\phi_{k-1}-\phi_{k-1}+(k-1)\phi_1\big|_\cA.\] Under the given condition, recursively applying this bound and using $\tilde\phi_1-\phi_1+\phi_1=0$ for $k=1$, \[\big|\tilde\phi_k-\phi_k+k\alpha\big|_\cA \leq \frac{\pi(k-1)}{6K}<\frac{\pi}{6} \text{ for } \alpha=\phi_1 \text{ and all } k=1,\ldots,K.\] This means there exists $\phi'$ equivalent to $\phi$ for which $|\tilde\phi_k-\phi_k'|<\pi/6$ for all $k=1,\ldots,K$, and the remainder of the argument is the same as in Corollary \ref{cor:optmimicsoracle}. \end{proof} The following guarantee is then analogous to Theorem \ref{thm:oracleMoM}, now describing the estimator $\hat\theta^\fm$ under a requirement for $N$ that is larger by a factor of $K^2$. \begin{Proposition} Suppose $r_k \geq \rlower$ for each $k=1,\ldots,K$. There exist universal constants $C,C_0>0$ such that if \revise{$N \geq C_0(\frac{K^2\sigma^6}{\rlower^6}\log K+\frac{K\sigma^3}{\rlower^3} (\log K)^{3/2})$}, then the guarantee (\ref{eq:oracleMoMbound}) holds also for $\hat{\theta}^\fm$. \end{Proposition} \begin{proof} Let $\cE$ be the event where (\ref{eq:linftyfm}) holds. This is exactly the event where $|\hat{\Phi}_{k,l}^\oracle(\phi)-\Phi_{k,l}|<\pi/(6K)$ for all $(k,l) \in \cI$. Then by Lemma \ref{lemma:Phikltail} and a union bound, \[\P[\cE^c] \leq CK^2\,e^{-c(\frac{N\rlower^2}{K^2\sigma^2} \wedge \frac{N\rlower^6}{K^2\sigma^6}\wedge \frac{N^{2/3}\rlower^2}{K^{2/3}\sigma^2})} .\] Under the given condition for $N$, an argument similar to (\ref{eq:exptopoly}) shows that this implies \[\P[\cE^c] \leq \frac{C'}{K}\left(\frac{\sigma^2}{N\rlower^2} +\frac{\sigma^6}{N\rlower^6}\right),\] and the remainder of the proof is the same as that of Theorem \ref{thm:oracleMoM}. 
\end{proof} \section{Proofs for maximum likelihood estimation in low noise}\label{appendix:MLE} \subsection{KL divergence and tail bound for the MLE} The following lemma bounds the Gaussian process $\langle \eps,\,g(\alpha) \cdot \theta \rangle$ which appears in (\ref{eq:llexpanded}). \begin{Lemma} \label{lemma-infor-1} Let $\eps \sim \mathcal{N}(0,I_{2K})$. For a universal constant $C>0$, any $\theta \in \R^{2K}$, and any $s,t>0$, \begin{align} \P\left[\sup_{\alpha \in \cA} |\langle \eps, g(\alpha) \cdot \theta \rangle|>t \text{ and } \|\eps\| \leq s\right] &\leq \frac{8\pi\|\theta\|Ks}{t}\cdot e^{-\frac{t^2}{8\|\theta\|^2}} \label{eq:GPtail}\\ \E\left[\sup_{\alpha \in \cA} |\langle \eps, g(\alpha) \cdot \theta \rangle| \right] &\leq C\|\theta\|\sqrt{\log K}.\label{eq:GPexpectation} \end{align} \end{Lemma} \begin{proof} For each fixed $\alpha\in \cA$, we have $\langle \varepsilon, g(\alpha)\cdot\theta\rangle \sim \mathcal N(0, \|\theta\|^2)$. Thus by a Gaussian tail bound, \[\P[|\langle \eps,g(\alpha) \cdot \theta \rangle|>t/2] \leq 2e^{-\frac{t^2}{8\|\theta\|^2}}.\] We set $\delta=t/(2\|\theta\|Ks)$ and take $N_\delta \subset \cA$ as a $\delta$-net of $\cA=[-\pi,\pi)$ in the metric $|\cdot|_\cA$, having cardinality \[|N_\delta|=\frac{2\pi}{\delta}=\frac{4\pi\|\theta\|Ks}{t}.\] For any $\alpha, \alpha'\in \cA$ such that $|\alpha-\alpha'|_\cA \leq \delta$, from the definition (\ref{eq:thetarotation}) of the diagonal blocks of $g(\alpha)$, we have $\|g(\alpha)-g(\alpha')\|_{\text{op}} \leq K\delta$. 
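This operator-norm bound can be verified numerically under our reading of (\ref{eq:thetarotation}), namely that $g(\alpha)$ is block-diagonal with $k$-th $2\times 2$ block rotating by $k\alpha$, so that the $k$-th block of $g(\alpha)-g(\alpha')$ has norm $2|\sin(k(\alpha-\alpha')/2)| \leq k|\alpha-\alpha'|$. The function \texttt{g} below encodes this assumed form.

```python
import numpy as np

def g(alpha, K):
    """Block-diagonal rotation: the k-th 2x2 block rotates by k*alpha
    (assumed form of the group action on theta)."""
    G = np.zeros((2 * K, 2 * K))
    for k in range(1, K + 1):
        c, s = np.cos(k * alpha), np.sin(k * alpha)
        G[2 * k - 2:2 * k, 2 * k - 2:2 * k] = [[c, -s], [s, c]]
    return G

K = 5
for a, d in [(0.3, 0.01), (1.0, 0.02), (-0.5, 0.05)]:
    # spectral norm of the difference is at most K * |alpha - alpha'|
    op = np.linalg.norm(g(a, K) - g(a + d, K), ord=2)
    assert op <= K * d + 1e-12
```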
Thus, on the event $\{\|\eps\| \leq s\}$, \[\big|\langle \eps,g(\alpha) \cdot \theta \rangle -\langle \eps,g(\alpha') \cdot \theta \rangle\big| \leq K\delta s\|\theta\|=t/2.\] So \[\P\left[\sup_{\alpha \in \cA} |\langle \eps,g(\alpha) \cdot \theta \rangle| >t \text{ and } \|\eps\| \leq s \right] \leq \P\left[\sup_{\alpha \in N_\delta} |\langle \eps,g(\alpha) \cdot \theta \rangle|>t/2\right] \leq |N_\delta| \cdot 2e^{-\frac{t^2}{8\|\theta\|^2}}\] which yields (\ref{eq:GPtail}). Applying (\ref{eq:GPtail}) with $s=\sqrt{4K}$ and integrating from $t=4\|\theta\|\sqrt{\log K}$ to $t=\infty$, \begin{align*} &\E\left[\sup_{\alpha \in \cA} |\langle\varepsilon, g(\alpha)\cdot\theta\rangle|\cdot \mathbf{1}\{\|\eps\| \leq \sqrt{4K}\}\right]\\ &\leq\E\left[4\|\theta\|\sqrt{\log K} +\int_{4\|\theta\|\sqrt{\log K}}^{\infty}\mathbf{1} \left\{\sup_{\alpha \in \cA} |\langle\varepsilon, g(\alpha)\cdot\theta\rangle| >t \text{ and } \|\eps\| \leq \sqrt{4K} \right\}dt\right]\\ &\leq 4\|\theta\|\sqrt{\log K}+\int_{4\|\theta\|\sqrt{\log K}}^\infty \frac{16\pi \|\theta\| K^{3/2}}{t} e^{-t^2/8\|\theta\|^2}dt\\ &=4\|\theta\|\sqrt{\log K}+16\pi\|\theta\|K^{3/2}\int_{4\sqrt{\log K}}^\infty e^{-t^2/8}dt \leq C\|\theta\|\sqrt{\log K} \end{align*} for a universal constant $C>0$ and any $K \geq 2$. Applying a chi-squared tail bound, we have also \begin{align*} \E\left[\sup_{\alpha \in \cA} |\langle \eps,g(\alpha) \cdot \theta \rangle| \cdot \mathbf{1}\{\|\eps\| \geq \sqrt{4K}\}\right] &\leq \|\theta\| \cdot \E\Big[\|\eps\| \cdot \mathbf{1}\{\|\eps\| \geq \sqrt{4K}\} \Big]\\ &\leq \|\theta\| \cdot \E[\|\eps\|^2]^{1/2} \P[\|\eps\| \geq \sqrt{4K}]^{1/2} \leq \|\theta\| \cdot \sqrt{2K} \cdot e^{-cK} \end{align*} for a universal constant $c>0$. Combining the above gives (\ref{eq:GPexpectation}). \end{proof} The next lemma formalizes the statement (\ref{eq:alpha0Taylor}) obtained by a Taylor expansion around $\alpha=0$. 
\begin{Lemma} \label{lemma-infor-2} \revise{Suppose Assumption \ref{assump:gen} holds. Fix any constant $\delta_0 \in [0,3c_\gen/8]$.} Then there are constants $C,c>0$ depending only on $c_\gen$ (and independent of $\delta_0$) such that for all $\alpha \in [-\frac{\delta_0}{K},\frac{\delta_0}{K}]$, \begin{equation}\label{eq:maintermlower} \revise{cK^2\|\theta^*\|^2\alpha^2 \leq \|\theta^*\|^2-\langle \theta^*,g(\alpha) \cdot \theta^* \rangle \leq CK^2\|\theta^*\|^2\alpha^2.} \end{equation} Furthermore, there is a constant $\iota>0$ depending only on $c_\gen,\delta_0$ such that for all $\alpha \in [-\pi,\pi) \setminus [-\frac{\delta_0}{K},\frac{\delta_0}{K}]$, \begin{equation}\label{eq:maintermupper} \langle \theta^*,g(\alpha) \cdot \theta^* \rangle \leq (1-\iota)\|\theta^*\|^2. \end{equation} \end{Lemma} \begin{proof} We write as shorthand $r_k=r_k(\theta^*)$. From (\ref{eq:rphirotation}), observe that \begin{equation}\label{eq:maintermtmp} \langle \theta^*,\,g(\alpha) \cdot \theta^* \rangle =\sum_{k=1}^K r_k^2\cos k\alpha =\|\theta^*\|^2-\sum_{k=1}^K r_k^2(1-\cos k\alpha). \end{equation} This is an even function of $\alpha$, so it suffices to consider $\alpha \in [0,\pi]$. Suppose first that $0\leq \alpha\leq \delta_0/K$. By Taylor expansion around $\alpha=0$, \[\sum_{k=1}^{K}r_k^2(1-\cos k\alpha)=\sum_{k=1}^{K} r_k^2 \cdot \frac{k^2\alpha^2}{2}+r(\alpha),\] where $|r(\alpha)| \leq \sum_{k=1}^{K} r_k^2(k\alpha)^3/6$. \revise{Observe that \begin{equation}\label{eq:krsumbounds} \sum_{k=1}^K k^3r_k^2 \leq K^3\|\theta^*\|^2, \qquad \sum_{k=1}^K k^2r_k^2 \leq K^2\|\theta^*\|^2, \qquad \sum_{k=1}^K k^2r_k^2 \geq (K/2)^2 \sum_{k=\lceil K/2 \rceil}^K r_k^2 \geq \frac{c_\gen K^2\|\theta^*\|^2}{4}, \end{equation} the last inequality applying Assumption \ref{assump:gen}. Then for $0 \leq \alpha \leq \frac{\delta_0}{K} \leq \frac{3c_\gen}{8K}$, applying the first and third of these bounds,} we have $|r(\alpha)| \leq \sum_{k=1}^K r_k^2 (k\alpha)^2/4$. 
Applying this and the above Taylor expansion to (\ref{eq:maintermtmp}) gives \[\sum_{k=1}^K r_k^2 \cdot \frac{k^2\alpha^2}{4} \leq \|\theta^*\|^2-\langle \theta^*,g(\alpha) \cdot \theta^* \rangle \leq \sum_{k=1}^K r_k^2 \cdot \frac{3k^2\alpha^2}{4},\] which implies (\ref{eq:maintermlower}) \revise{by the second and third bounds of (\ref{eq:krsumbounds}).} Now consider $\delta_0/K<\alpha \leq \pi$. In the sequence $(\cos\alpha, \cos2\alpha, \ldots, \cos K\alpha)$, we claim that there are at most $\lceil K/2 \rceil$ items belonging to the interval $[\cos L, 1]$, where $L=\min(\delta_0/2,\pi/8)$: \begin{itemize} \item If $\frac{\delta_0}{K}<\alpha<\frac{\pi}{K}$, then $\alpha, 2\alpha, \ldots,K\alpha\in (0,\pi)$. So $\cos(k\alpha) \in [\cos L,1]$ implies that $k\alpha\in [0,L]$, and the number of such items is at most $L/\alpha \leq (\delta_0/2)/(\delta_0/K)=K/2$. \item If $\frac{t\pi}{K} \leq \alpha < \frac{(t+1)\pi}{K}$ for some $1 \leq t\leq \frac{K}{4}-1$, then $\alpha, 2\alpha, \ldots,K\alpha\in(0,(t+1)\pi)$. So $\cos(k\alpha) \in [\cos L,1]$ implies that $k\alpha$ falls into one of $t+1$ closed intervals of width $L$, and the number of such items is at most \[(t+1)\cdot \left\lceil\frac{L}{\alpha}\right\rceil \leq (t+1)\cdot\left(\frac{\pi/8}{t\pi/K}+1\right)=\frac{K}{8} +\frac{K}{8t}+t+1 \leq \frac{K}{8}+\frac{K}{8}+\frac{K}{4}=\frac{K}{2}.\] \item If $\frac{\pi}{4}<\alpha \leq \pi$, then any two consecutive items $\cos k\alpha$ and $\cos (k+1)\alpha$ cannot both belong to $[\cos L, 1]$, since $\alpha>\frac{\pi}{4} \geq 2L$. Therefore, the number of items would not exceed $\lceil K/2 \rceil$. 
\end{itemize} Denoting $B=\{k:\cos k\alpha\notin[\cos L, 1]\}$, we then have $|B| \geq \lfloor K/2 \rfloor$ and $1-\cos L \geq c$ for a small constant $c>0$ depending on $\delta_0$, so \[\sum_{k=1}^K r_k^2(1-\cos k\alpha) \geq \sum_{k \in B} r_k^2(1-\cos L) \geq \revise{c \cdot c_\gen\|\theta^*\|^2} \geq \iota\|\theta^*\|^2\] for a constant $\iota>0$ depending only on $c_\gen,\delta_0$. Applying this to (\ref{eq:maintermtmp}) gives (\ref{eq:maintermupper}). \end{proof} \begin{proof}[Proof of Lemma \ref{lemma-KL-lower}] Recall the form (\ref{eqn-KL-lower-1}) for the KL divergence. For $\mathrm{II}$, upper bounding the average over $\alpha$ by the maximum, \begin{align} \mathrm{II} \leq \E\log \sup_{\alpha \in \cA} \exp\left(\frac{\langle \theta^*+\sigma \eps, g(\alpha)\cdot\theta\rangle}{\sigma^2}\right) &\leq \frac{\sup_{\alpha \in \cA} \langle\theta^*, g(\alpha)\cdot\theta\rangle}{\sigma^2} + \mathbb{E} \left[ \frac{\sup_{\alpha \in \cA} \langle\varepsilon, g(\alpha)\cdot\theta\rangle}{\sigma}\right]\nonumber\\ &\leq \frac{\sup_{\alpha \in \cA} \langle \theta^*, g(\alpha) \cdot \theta \rangle}{\sigma^2}+\frac{C\|\theta\|}{\sigma}\cdot\sqrt{\log K},\label{eqn-KL-lower-2-1} \end{align} where the last inequality applies (\ref{eq:GPexpectation}) from Lemma \ref{lemma-infor-1}.
Similarly, to lower bound $\mathrm{I}$, let us set \revise{$\delta_0=3c_\gen/8$} and apply \begin{align*} \mathrm{I}& \geq \E \left[\log \frac{1}{2\pi}\int_{-\pi}^\pi \exp\left(\frac{\langle \theta^*,g(\alpha) \cdot \theta^* \rangle}{\sigma^2}\right) d\alpha \cdot \inf_{\alpha} \exp\left(\frac{\langle \eps,g(\alpha) \cdot \theta^* \rangle}{\sigma}\right)\right]\\ &= \log\frac{1}{2\pi}\int_{-\pi}^{\pi} \exp\left(\frac{\langle \theta^*, g(\alpha)\cdot\theta^*\rangle}{\sigma^2}\right)d\alpha -\mathbb{E}\left[\frac{\sup_{\alpha \in \cA} \langle\varepsilon, g(\alpha)\cdot\theta^*\rangle}{\sigma}\right]\\ &\geq \log\frac{1}{2\pi}\int_{-\delta_0/K}^{\delta_0/K} \exp\left(\frac{\langle \theta^*, g(\alpha)\cdot\theta^*\rangle}{\sigma^2}\right)d\alpha -\frac{C\|\theta^*\|}{\sigma} \cdot \sqrt{\log K}. \end{align*} Applying the upper bound of (\ref{eq:maintermlower}) from Lemma \ref{lemma-infor-2}, we have for a constant $C>0$ that \begin{align*} \int_{-\delta_0/K}^{\delta_0/K} \exp\left(\frac{\langle \theta^*, g(\alpha)\cdot\theta^*\rangle}{\sigma^2}\right)d\alpha &\geq \exp\left(\frac{\|\theta^*\|^2}{\sigma^2}\right)\int_{-\delta_0/K}^{\delta_0/K} \revise{\exp\left(-\frac{CK^2\|\theta^*\|^2}{\sigma^2}\alpha^2\right)}d\alpha\\ &=\exp\left(\frac{\|\theta^*\|^2}{\sigma^2}\right)\revise{\left(\frac{2CK^2\|\theta^*\|^2}{\sigma^2}\right)^{-1/2}}\cdot\sqrt{2\pi} \left(1-2\widetilde{\Phi}\left(\revise{\sqrt{\frac{2CK^2\|\theta^*\|^2}{\sigma^2}}} \cdot \frac{\delta_0}{K}\right) \right) \end{align*} where $\widetilde{\Phi}(x)=\int_x^\infty\frac{1}{\sqrt{2\pi}}e^{-t^2/2}dt$ is the right tail probability of the standard Gaussian law. Applying the given condition \revise{$\sigma^2 \leq \|\theta^*\|^2$}, the input to $\widetilde{\Phi}$ is bounded below by a positive constant.
Then the value for $\widetilde{\Phi}$ is bounded away from $1/2$, so for a constant $C_2>0$, \begin{equation}\label{eq:gaussianintegral} \frac{1}{2\pi}\int_{-\delta_0/K}^{\delta_0/K} \exp\left(\frac{\langle \theta^*, g(\alpha)\cdot\theta^*\rangle}{\sigma^2}\right)d\alpha \geq \exp\left(\frac{\|\theta^*\|^2}{\sigma^2}\right) \cdot \left(\frac{C_2K^2\|\theta^*\|^2}{\sigma^2}\right)^{-1/2}. \end{equation} Thus \begin{equation}\label{eqn-KL-lower-3-3} \mathrm{I} \geq \frac{\|\theta^*\|^2}{\sigma^2}-\frac{1}{2}\log\left(\frac{C_2K^2\|\theta^*\|^2}{\sigma^2}\right)-\frac{C\|\theta^*\|}{\sigma}\cdot\sqrt{\log K}. \end{equation} Combining (\ref{eqn-KL-lower-1}), (\ref{eqn-KL-lower-2-1}), and (\ref{eqn-KL-lower-3-3}) and applying \[\min_{\alpha \in \cA} \|\theta^*-g(\alpha) \cdot \theta\|^2 =\|\theta^*\|^2+\|\theta\|^2-2 \sup_{\alpha \in \cA} \langle \theta^*, g(\alpha) \cdot \theta \rangle\] yields the lemma. \end{proof} The following lemma establishes concentration of $R_N(\theta)$ around its mean $R(\theta)$, uniformly over bounded domains of $\theta$. 
\begin{Lemma}\label{lemma-bounded-norm-start} For a universal constant $c>0$, any $M>0$, and any $t>0$, \begin{align} &\P\left[\sup_{\theta:\|\theta\| \leq M} |R_N(\theta)-R(\theta)|>4t\right] \nonumber\\ &\hspace{1in}\leq 2\left(1+\tfrac{2M\sqrt{2\|\theta^*\|^2+(4K+4t)\sigma^2}}{t\sigma^2}\right)^{2K} e^{-\frac{cN\sigma^2t^2}{M^2}}+4e^{-cN(t \wedge \frac{t^2}{K} \wedge \frac{t^2\sigma^2}{\|\theta^*\|^2})}.\label{eq:unifconc} \end{align} \end{Lemma} \begin{proof} Recalling the form of $R_N(\theta)$ from (\ref{eq:llexpanded}), we have \[R_N(\theta)=\mathrm{I}-\mathrm{II}(\theta)+\operatorname{const}(\theta)\] where \begin{align*} \mathrm{I}&=\frac{1}{N}\sum_{m=1}^N \frac{\|\theta^*+\sigma \eps^{(m)}\|^2}{2\sigma^2},\\ \mathrm{II}(\theta)&=\frac{1}{N}\sum_{m=1}^N f(\eps^{(m)},\theta) :=\frac{1}{N}\sum_{m=1}^N \log \frac{1}{2\pi} \int_{-\pi}^\pi \exp\left(\frac{\langle \theta^*+\sigma \eps^{(m)}, g(\alpha) \cdot \theta \rangle}{\sigma^2}\right) d\alpha, \end{align*} and $\operatorname{const}(\theta)$ is a term not depending on the randomness $\{\eps^{(m)}\}$. We analyze separately the concentration of the terms $\mathrm{I}$ and $\mathrm{II}(\theta)$. For the given value $t>0$, define the event $\cE=\{|\mathrm{I}-\E[\mathrm{I}]|<2t\}$. We have \[\frac{\|\theta^*+\sigma \eps^{(m)}\|^2}{2\sigma^2}=\frac{\|\theta^*\|^2}{2\sigma^2}+\frac{\langle \theta^*, \varepsilon^{(m)}\rangle}{\sigma}+\frac{\|\varepsilon^{(m)}\|^2}{2}.\] Here the first term is deterministic, and the latter two terms satisfy $N^{-1}\sum_{m=1}^N \langle \theta^*, \eps^{(m)} \rangle \sim \mathcal{N}(0,\|\theta^*\|^2/N)$ and $\sum_{m=1}^N \|\eps^{(m)}\|^2 \sim \chi_{2KN}^2$. 
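For completeness, these distributional identities follow from $\eps^{(m)} \sim \mathcal{N}(0,I_{2K})$: the variables $\langle \theta^*,\eps^{(m)} \rangle$ are i.i.d.\ $\mathcal{N}(0,\|\theta^*\|^2)$, so their average satisfies
\[\Var\left[\frac{1}{N}\sum_{m=1}^N \langle \theta^*,\eps^{(m)} \rangle\right]
=\frac{1}{N^2}\sum_{m=1}^N \|\theta^*\|^2=\frac{\|\theta^*\|^2}{N},\]
while $\sum_{m=1}^N \|\eps^{(m)}\|^2$ is a sum of $2KN$ independent squared standard Gaussian coordinates, hence $\chi_{2KN}^2$-distributed.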
Then by standard Gaussian and chi-squared tail bounds, for a universal constant $c>0$ and any $t>0$, \[\P\left[\left|\frac{1}{N}\sum_{m=1}^N \frac{\langle \theta^*,\eps^{(m)} \rangle}{\sigma}\right| \geq t\right] \leq 2e^{-\frac{Nt^2\sigma^2}{2\|\theta^*\|^2}}, \qquad \mathbb{P}\left[\left|\frac{1}{N}\sum_{m=1}^N\frac{\|\varepsilon^{(m)}\|^2}{2}-K\right| \geq t\right] \leq 2e^{-cNK(\frac{t}{K} \wedge \frac{t^2}{K^2})}.\] So \begin{equation}\label{eq:Etprobbound} \P[\cE^c]=\P\Big[|\mathrm{I}-\E[\mathrm{I}]| \geq 2t\Big] \leq 2e^{-\frac{Nt^2\sigma^2}{2\|\theta^*\|^2}} +2e^{-cNK(\frac{t}{K} \wedge \frac{t^2}{K^2})} \leq 4e^{-c'N(t \wedge \frac{t^2}{K} \wedge \frac{t^2\sigma^2}{\|\theta^*\|^2})}. \end{equation} On the event $\cE$, we have $|R_N(\theta)-R(\theta)| \leq 2t+|\mathrm{II}(\theta)-\E[\mathrm{II}(\theta)]|$ as well as \begin{equation}\label{eq:ynormbound} \frac{1}{N}\sum_{m=1}^N \|\theta^*+\sigma \eps^{(m)}\|^2 \leq \E[\|\theta^*+\sigma \eps^{(m)}\|^2] +4t\sigma^2=\|\theta^*\|^2+(2K+4t)\sigma^2. \end{equation} Recalling the probability law $\cP_{\theta,\eps}$ from (\ref{eq:Pthetaeps}), the $\eps$-gradient of the function $f(\eps,\theta)$ defining $\mathrm{II}(\theta)$ is bounded as \[\|\nabla_\eps f(\eps,\theta)\|=\left\|\frac{1}{\sigma}\mathbb{E}_{\alpha \sim \cP_{\theta,\eps}} \left[g(\alpha)\cdot\theta\right]\right\| \leq \frac{1}{\sigma}\mathbb{E}_{\alpha \sim \cP_{\theta,\eps}} \left[\|g(\alpha)\cdot\theta\|\right] =\frac{\|\theta\|}{\sigma}.\] Thus $f(\eps,\theta)$ is $\frac{\|\theta\|}{\sigma}$-Lipschitz in $\eps$. 
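For completeness, the gradient identity used here follows by differentiating under the integral sign: writing, in shorthand, $E(\alpha)=\exp\big(\frac{\langle \theta^*+\sigma\eps,\,g(\alpha) \cdot \theta \rangle}{\sigma^2}\big)$, we have
\[\nabla_\eps f(\eps,\theta)
=\frac{\int_{-\pi}^\pi \frac{g(\alpha)\cdot\theta}{\sigma}\,E(\alpha)\,d\alpha}
{\int_{-\pi}^\pi E(\alpha)\,d\alpha}
=\frac{1}{\sigma}\,\mathbb{E}_{\alpha \sim \cP_{\theta,\eps}}[g(\alpha)\cdot\theta],\]
and the norm identity $\|g(\alpha)\cdot\theta\|=\|\theta\|$ holds since each $g(\alpha)$ is a rotation.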
Then by Gaussian concentration of measure, for universal constants $C,c>0$, we have that $f(\eps,\theta)-\E[f(\eps,\theta)]$ is $\frac{C\|\theta\|}{\sigma}$-subgaussian, and Hoeffding's inequality yields \[\P\left[|\mathrm{II}(\theta)-\E[\mathrm{II}(\theta)]|>t\right] =\P\left[\left|\frac1N\sum_{m=1}^N f(\eps^{(m)},\theta)-\E[f(\eps^{(m)},\theta)]\right|>t\right]\leq 2e^{-\frac{cN\sigma^2 t^2}{\|\theta\|^2}}.\] Now to obtain uniform concentration over $\{\theta:\|\theta\| \leq M\}$, set $\delta=t\sigma^2/\sqrt{2\|\theta^*\|^2+(4K+4t)\sigma^2}$, and let $N_\delta$ be a $\delta$-net of $\{\theta \in \R^{2K}:\|\theta\| \leq M\}$ having cardinality \[|N_\delta| \leq \left(1+\frac{2M}{\delta}\right)^{2K} =\left(1+\frac{2M\sqrt{2\|\theta^*\|^2+(4K+4t)\sigma^2}}{t\sigma^2}\right)^{2K}.\] The $\theta$-gradient of $\E[\mathrm{II}(\theta)]=\E[f(\eps^{(m)},\theta)]$ is bounded as \begin{align*} \|\nabla_\theta \E[\mathrm{II}(\theta)]\| &=\left\|\frac{1}{\sigma^2}\E_{\eps \sim \mathcal{N}(0,I)} \left[\E_{\alpha \sim \cP_{\theta,\eps}}[g(\alpha)^{-1} (\theta^*+\sigma\eps)]\right]\right\|\\ &\leq \frac{1}{\sigma^2}\E[\|\theta^*+\sigma \eps\|] \leq \frac{1}{\sigma^2}\sqrt{\E[\|\theta^*+\sigma \eps\|^2]} =\frac{1}{\sigma^2}\sqrt{\|\theta^*\|^2+2K\sigma^2}. \end{align*} Similarly, on the event $\cE$, the $\theta$-gradient of $\mathrm{II}(\theta)$ without expectation is bounded as \begin{align*} \|\nabla_\theta \mathrm{II}(\theta)\|&=\left\|\frac{1}{\sigma^2}\cdot\frac{1}{N}\sum_{m=1}^{N}\mathbb{E}_{\alpha \sim \cP_{\theta,\eps^{(m)}}} [g(\alpha)^{-1}(\theta^*+\sigma \eps^{(m)})] \right\|\\ &\leq \frac{1}{\sigma^2} \cdot \frac{1}{N}\sum_{m=1}^N \|\theta^*+\sigma\eps^{(m)}\| \leq \frac{1}{\sigma^2}\sqrt{\frac{1}{N}\sum_{m=1}^N \|\theta^*+\sigma \eps^{(m)}\|^2} \leq \frac{1}{\sigma^2} \sqrt{\|\theta^*\|^2+(2K+4t)\sigma^2}, \end{align*} the last inequality applying (\ref{eq:ynormbound}). 
Therefore $\mathrm{II}(\theta)-\E[\mathrm{II}(\theta)]$ is Lipschitz in $\theta$, with Lipschitz constant at most \[\frac{1}{\sigma^2}\sqrt{\|\theta^*\|^2+2K\sigma^2} +\frac{1}{\sigma^2} \sqrt{\|\theta^*\|^2+(2K+4t)\sigma^2} \leq \frac{1}{\sigma^2}\sqrt{2\|\theta^*\|^2+(4K+4t)\sigma^2}=\frac{t}{\delta}.\] Then \begin{align*} \P\left[\sup_{\theta:\|\theta\| \leq M} |R_N(\theta)-R(\theta)|>4t \text{ and } \cE\right] &\leq \P\left[\sup_{\theta:\|\theta\| \leq M} |\mathrm{II}(\theta)-\E[\mathrm{II}(\theta)]|>2t \text{ and } \cE\right]\\ &\leq \P\left[\sup_{\theta \in N_\delta} |\mathrm{II}(\theta)-\E[\mathrm{II}(\theta)]|>t\right] \leq 2|N_\delta|e^{-\frac{cN\sigma^2t^2}{M^2}}. \end{align*} Combining this with (\ref{eq:Etprobbound}) gives (\ref{eq:unifconc}). \end{proof} \begin{proof}[Proof of Lemma \ref{lemma-highprob-bound}] For the given value of $\delta_1$ and each integer $n \geq 1$, define \[\Gamma_n=\left\{\theta:n\delta_1\|\theta^*\|\leq \min_{\alpha \in \cA} \|\theta^*-g(\alpha) \cdot \theta\|<(n+1)\delta_1\|\theta^*\|\right\} \subset \R^{2K}.\] Observe that $\|\theta\| \leq [1+(n+1)\delta_1]\|\theta^*\|$ for $\theta \in \Gamma_n$, so Lemma \ref{lemma-KL-lower} implies \[D_{\KL}(p_{\theta^*}\| p_\theta) \geq \frac{n^2\delta_1^2\|\theta^*\|^2}{2\sigma^2} -\frac{1}{2}\log \left(\frac{C_2K^2\|\theta^*\|^2}{\sigma^2}\right) -\frac{[2+(n+1)\delta_1]C_3\|\theta^*\|}{\sigma}\sqrt{\log K} \text{ for all } \theta \in \Gamma_n.\] \revise{Then, under the given assumption that $\frac{\|\theta^*\|^2}{\sigma^2} \geq C_1\log K$ for a sufficiently large constant $C_1>0$ (depending on $c_\gen,\delta_1$), setting \[t_n=c_0n^2\|\theta^*\|^2/\sigma^2\]} for a sufficiently small constant $c_0>0$, the above implies that \[D_{\KL}(p_{\theta^*}\| p_\theta) \geq 10t_n \text{ for all } \theta \in \Gamma_n.\] Applying (\ref{eq:unifconc}) with $t=t_n$ and $M=[1+(n+1)\delta_1]\|\theta^*\|$ gives, for some constants $C,C',c,c'>0$ depending on $\delta_1$, \begin{align*} 
&\P\left[\sup_{\theta:\|\theta\| \leq [1+(n+1)\delta_1]\|\theta^*\|} |R_N(\theta)-R(\theta)|>4t_n\right]\\ &\leq \revise{2\left(1+C\sqrt{1+\tfrac{K\sigma^2}{n^2\|\theta^*\|^2}}\right)^{2K} e^{-cn^2N \cdot \frac{\|\theta^*\|^2}{\sigma^2}} +4e^{-cn^2N \cdot \frac{\|\theta^*\|^2}{\sigma^2} (1 \wedge \frac{n^2\|\theta^*\|^2}{K\sigma^2})}}\\ &\leq (C'K)^K e^{-c'nN \log K}+e^{-c'nN(\log K)^2/K} \end{align*} where the last line applies $n \geq 1$ and $\frac{\|\theta^*\|^2}{\sigma^2} \geq C_1\log K$ to simplify the bound. On the event where $|R_N(\theta)-R(\theta)| \leq 4t_n$ and $|R_N(\theta^*)-R(\theta^*)| \leq 4t_n$, since $D_{\KL}(p_{\theta^*}\|p_\theta)=R(\theta)-R(\theta^*) \geq 10t_n$, we must then have $R_N(\theta)-R_N(\theta^*) \geq 2t_n>0$ so that $\theta$ is not the MLE. Thus, \begin{equation}\label{eq:Gammanprob} \P\left[\hat{\theta}^{\text{MLE}} \in \Gamma_n\right] \leq (C'K)^K e^{-c'nN \log K}+e^{-c'nN(\log K)^2/K}. \end{equation} Summing over all $n \geq 1$ and recalling our choice of rotation for $\hat{\theta}^{\text{MLE}}$ that satisfies (\ref{eq:MLEversion}), \begin{align*} \P\left[\|\hat{\theta}^{\text{MLE}}-\theta^*\| \geq \delta_1\|\theta^*\|\right] &\leq \sum_{n=1}^\infty \P\left[\hat{\theta}^{\text{MLE}} \in \Gamma_n\right] \leq (C'K)^K \sum_{n=1}^\infty e^{-c'nN \log K}+\sum_{n=1}^\infty e^{-c'nN(\log K)^2/K}. \end{align*} Under the given assumption $N \geq C_0K$ for a sufficiently large constant $C_0>0$, both exponents $c'N\log K$ and $c'N(\log K)^2/K$ are bounded below by a constant. Then summing these geometric series gives, for some modified constants $C,c,c'>0$, \[\P\left[\|\hat{\theta}^{\text{MLE}}-\theta^*\| \geq \delta_1\|\theta^*\|\right] \leq (CK)^K e^{-cN\log K}+e^{-cN(\log K)^2/K} \leq e^{-c'N(\log K)^2/K}\] where the second inequality holds again under the assumption $N \geq C_0K$. This shows (\ref{eq:MLElocalization}). 
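For completeness, the geometric series above are summed as follows: if $a>0$ denotes either exponent $c'N\log K$ or $c'N(\log K)^2/K$, then
\[\sum_{n=1}^\infty e^{-an}=\frac{e^{-a}}{1-e^{-a}} \leq \frac{e^{-a}}{1-e^{-a_0}}
\qquad \text{whenever } a \geq a_0>0,\]
so each series is at most a constant multiple of its leading term once $a$ is bounded below by a constant.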
To show (\ref{eq:MLE4thmoment}), we apply $\|\theta\| \leq [1+(n+1)\delta_1]\|\theta^*\|$ for $\theta \in \Gamma_n$ and (\ref{eq:Gammanprob}) to get, for some constants $C,C',c'>0$ depending on $\delta_1$, \begin{align*} \E[\|\hat{\theta}^{\text{MLE}}\|^4] &\leq [(1+\delta_1)\|\theta^*\|]^4+\sum_{n=1}^\infty [(1+(n+1)\delta_1) \|\theta^*\|]^4 \cdot \P[\hat{\theta}^{\text{MLE}} \in \Gamma_n]\\ &\leq C\|\theta^*\|^4\left(1+(C'K)^K\sum_{n=1}^\infty n^4 e^{-c'nN\log K}+\sum_{n=1}^\infty n^4e^{-c'nN(\log K)^2/K}\right). \end{align*} For a sufficiently large constant $A>0$, we have $n^4e^{-An}<e^{-An/2}$ for all $n \geq 1$. Hence, under the condition $N \geq C_0K$ for sufficiently large $C_0>0$, we have \begin{align*} \E[\|\hat{\theta}^{\text{MLE}}\|^4] &\leq C\|\theta^*\|^4\left(1+(C'K)^K\sum_{n=1}^\infty e^{-(c'/2)nN\log K}+\sum_{n=1}^\infty e^{-(c'/2)nN(\log K)^2/K}\right)\\ &\leq C\|\theta^*\|^4\left(1+(C'K)^K e^{-c'' N\log K} +e^{-c''N(\log K)^2/K}\right) \leq C'\|\theta^*\|^4. \end{align*} \end{proof} \subsection{Lower bound for the information matrix} Define the domain \begin{equation}\label{eq:epsthetaalign} \cF_1(\theta,\delta_1)=\left\{\eps: \sup_{\alpha \in \cA} |\langle \eps,g(\alpha) \cdot \theta \rangle| \leq \frac{\delta_1\|\theta^*\|^2}{\sigma}\right\} \subset \R^{2K}. \end{equation} The following deterministic lemma guarantees that the law $\cP_{\theta,\eps}$ concentrates near 0 when $\eps \in \cF_1(\theta,\delta_1)$. \begin{Lemma}\label{lemma:angleconc} \revise{Suppose Assumption \ref{assump:gen} holds.} Fix any $\delta_0>0$. 
Then there exist constants $C_1,\delta_1>0$ depending only on \revise{$c_\gen,\delta_0$ such that if $\sigma^2 \leq \frac{\|\theta^*\|^2}{C_1\log K}$}, then the following holds: For any $\theta \in \cB(\delta_1)$ and any (deterministic) $\eps \in \cF_1(\theta,\delta_1)$, \begin{equation}\label{eq:maxangle} \sup_{\alpha \in [-\frac{\delta_0}{K},\frac{\delta_0}{K}]} \langle \theta^*+\sigma \eps,\,g(\alpha) \cdot \theta \rangle >\sup_{\alpha \in [-\pi,\pi) \setminus [-\frac{\delta_0}{K},\frac{\delta_0}{K}]} \langle \theta^*+\sigma \eps,\,g(\alpha)\cdot \theta \rangle. \end{equation} Furthermore, for a constant $c>0$ depending only on $\clower,\cupper,\delta_0$, \begin{equation}\label{eq:angleconc} \P_{\alpha \sim \cP_{\theta,\eps}}\Big[|\alpha|_\cA>\delta_0/K\Big] \leq \revise{e^{-c\|\theta^*\|^2/\sigma^2}}. \end{equation} \end{Lemma} \begin{proof} Define $I_1=[-\frac{\delta_0}{K},\frac{\delta_0}{K}]$ and $I_2=[-\pi,\pi) \setminus [-\frac{\delta_0}{K},\frac{\delta_0}{K}]$. Let us write \[\langle \theta^*+\sigma \eps, g(\alpha)\cdot\theta\rangle =\langle \theta^*,g(\alpha)\theta^* \rangle +\langle \theta^*,g(\alpha)(\theta-\theta^*) \rangle +\sigma \langle \eps,g(\alpha)\theta \rangle.\] The conditions $\theta \in \cB(\delta_1)$ and $\eps \in \cF_1(\theta,\delta_1)$ show for the second and third terms \begin{equation}\label{eq:exp23} |\langle \theta^*,g(\alpha)(\theta-\theta^*) \rangle| \leq \|\theta^*\|\cdot \|\theta-\theta^*\| \leq \delta_1 \|\theta^*\|^2, \qquad |\sigma\langle \eps,g(\alpha)\theta \rangle| \leq \delta_1 \|\theta^*\|^2. \end{equation} For the first term, Lemma \ref{lemma-infor-2} implies that for constants $C>0$ and $\iota>0$, \begin{equation}\label{eq:exp1} \langle \theta^*,g(\alpha)\theta^* \rangle \leq (1-\iota) \cdot \|\theta^*\|^2 \text{ if } \alpha \in I_2, \qquad \langle \theta^*,g(\alpha)\theta^* \rangle \geq \|\theta^*\|^2-\revise{CK^2\|\theta^*\|^2}\alpha^2 \text{ if } \alpha \in I_1. 
\end{equation} Then for all $\alpha \in I_2$, we have $\langle \theta^*+\sigma \eps, g(\alpha)\cdot\theta\rangle \leq (1-\iota+2\delta_1)\|\theta^*\|^2$, while for $\alpha=0 \in I_1$, we have $\langle \theta^*+\sigma \eps, g(\alpha)\cdot\theta\rangle \geq (1-2\delta_1)\|\theta^*\|^2$. Setting $\delta_1<\iota/4$, this shows (\ref{eq:maxangle}). To show (\ref{eq:angleconc}), we may correspondingly write the density (\ref{eq:Pthetaeps}) for the distribution $\cP_{\theta,\eps}$ as \[\frac{d\cP_{\theta,\eps}(\alpha)}{d\alpha} \propto \exp\left(\frac{\langle \theta^*+\sigma \eps, g(\alpha)\cdot\theta\rangle}{\sigma^2}\right)=\exp\left( \frac{\langle \theta^*,g(\alpha)\theta^* \rangle}{\sigma^2} +\frac{\langle \theta^*,g(\alpha)(\theta-\theta^*) \rangle}{\sigma^2} +\frac{\sigma \langle \eps,g(\alpha)\theta \rangle}{\sigma^2}\right).\] Then \begin{align*} \int_{I_2}\exp\left(\frac{\langle \theta^*+\sigma \eps, g(\alpha)\cdot\theta\rangle}{\sigma^2}\right)d\alpha &\leq \exp\left(\frac{(1-\iota+2\delta_1)\|\theta^*\|^2}{\sigma^2}\right),\\ \int_{I_1}\exp\left(\frac{\langle \theta^*+\sigma \eps, g(\alpha)\cdot\theta\rangle}{\sigma^2}\right)d\alpha &\geq \exp\left(\frac{(1-2\delta_1)\|\theta^*\|^2}{\sigma^2}\right) \cdot \int_{-\delta_0/K}^{\delta_0/K} \revise{\exp\left(-\frac{CK^2\|\theta^*\|^2\alpha^2}{\sigma^2}\right)}d\alpha. 
\end{align*} Lower bounding this Gaussian integral using the same argument as (\ref{eq:gaussianintegral}), for a constant $C'>0$, \[\int_{-\delta_0/K}^{\delta_0/K} \exp\left(-\frac{CK^2\|\theta^*\|^2\alpha^2}{\sigma^2} \right)d\alpha \geq \left(\frac{C'K^2\|\theta^*\|^2}{\sigma^2}\right)^{-1/2}\sqrt{2\pi}.\] Combining these bounds and choosing $\delta_1<\iota/4$, for a constant $c>0$, \[J:=\frac{\int_{I_1}\exp\left(\frac{\langle \theta^*+\sigma \eps, g(\alpha)\cdot\theta\rangle}{\sigma^2}\right)d\alpha} {\int_{I_2}\exp\left(\frac{\langle \theta^*+\sigma \eps, g(\alpha)\cdot\theta\rangle}{\sigma^2}\right)d\alpha} \geq \revise{\left(\frac{C'K^2\|\theta^*\|^2}{\sigma^2}\right)^{-1/2} \sqrt{2\pi}\exp\left(\frac{c\|\theta^*\|^2}{\sigma^2}\right) \geq \exp\left(\frac{c\|\theta^*\|^2}{2\sigma^2}\right)}\] where the last inequality \revise{holds for $\frac{\|\theta^*\|^2}{\sigma^2} \geq C_1\log K$ and a sufficiently large constant $C_1>0$.} Since $\P_{\alpha \sim \cP_{\theta,\eps}}[|\alpha|_\cA \geq \frac{\delta_0}{K}] =1/(1+J)<1/J$, this shows (\ref{eq:angleconc}). 
\end{proof} Next, recall the complex representations \[\tilde{\theta}=(\theta_1,\ldots,\theta_K) \in \C^K, \qquad \tilde{v}=(v_1,\ldots,v_K) \in \C^K, \qquad \tilde{\eps}=(\eps_1,\ldots,\eps_K) \in \C^K\] and define $\cF_2(\theta,v,\delta_1) \subset \R^{2K}$ as the set of vectors $\eps \in \R^{2K}$ that satisfy the following three conditions: \revise{ \begin{align} \sup_{\alpha \in \cA} \Big|\langle \eps,g(\alpha) \cdot v \rangle\Big| &\leq \frac{\|\theta^*\|}{\sigma}\label{eq:epsvthetaalign1}\\ \sup_{\alpha,\alpha' \in [-\pi,\pi)} \frac{1}{\alpha^2}\left|\Re \sum_{k=1}^K \overline{\eps_k}e^{ik\alpha'} \Big(e^{ik\alpha}-1-ik\alpha\Big)\theta_k\right| &\leq \frac{\delta_1 K^2 \|\theta^*\|^2}{\sigma}\label{eq:epsvthetaalign2}\\ \sup_{\alpha,\alpha' \in [-\pi,\pi)} \frac{1}{|\alpha-\alpha'|} \left|\Re \sum_{k=1}^K \overline{\eps_k}\Big(e^{ik\alpha}-e^{ik\alpha'}\Big) v_k\right| &\leq \frac{\delta_1 K \|\theta^*\|}{\sigma} \label{eq:epsvthetaalign3} \end{align}} The domain $\cE(\theta,v,\delta_1)$ in Lemma \ref{lemma-infor-3} is given by $\cF_1(\theta,\delta_1) \cap \cF_2(\theta,v,\delta_1)$. \begin{proof}[Proof of Lemma \ref{lemma-infor-3}] Let us denote \[y=\theta^*+\sigma \eps\] The idea will be to approximate $\Var_{\alpha \sim \cP_{\theta,\eps}}$ by the variance with respect to a Gaussian law over $\alpha$. We fix a small constant $\delta_0>0$ to be determined, and take $C_1>0$ large enough and $\delta_1>0$ small enough so that the conclusions of Lemma \ref{lemma:angleconc} hold. Let \begin{equation}\label{eq:alpha0} \alpha_0=\argmax_{\alpha \in \cA} \langle y,\, g(\alpha) \cdot \theta \rangle \end{equation} (where we may take any maximizer if it is not unique). Then (\ref{eq:maxangle}) guarantees that $\alpha_0 \in [-\frac{\delta_0}{K},\frac{\delta_0}{K}]$. 
In the sense of Section \ref{sec:complexrepr}, denote the complex representations of $\theta^*,\eps,y,\theta,v \in \R^{2K}$ by \[\tilde{\theta}^*=(\theta_1^*,\ldots,\theta_K^*) \in \C^K, \qquad \tilde{\eps}=(\eps_1,\ldots,\eps_K) \in \C^K, \qquad \tilde{y}=(y_1,\ldots,y_K) \in \C^K,\] \[\tilde{\theta}=(\theta_1,\ldots,\theta_K) \in \C^K, \qquad \tilde{v}=(v_1,\ldots,v_K) \in \C^K.\] Then the complex representation of $g(\alpha) \cdot \theta$ is $(e^{ik\alpha}\theta_k:k=1,\ldots,K)$. By the inner-product relation (\ref{eq:CRisometry}), we have \begin{equation}\label{eq:weightexpr} \langle y,g(\alpha) \cdot \theta \rangle =\Re \sum_{k=1}^K \overline{y_k} \cdot e^{ik\alpha}\theta_k. \end{equation} The first-order condition for optimality of $\alpha_0$ in (\ref{eq:alpha0}) yields \[0=\frac{d}{d\alpha} \langle y,g(\alpha) \cdot \theta \rangle\Big|_{\alpha=\alpha_0}=\Re \sum_{k=1}^K \overline{y_k} \cdot ike^{ik\alpha_0} \cdot \theta_k.\] Applying this condition and the decomposition \begin{align*} e^{ik\alpha}=e^{ik\alpha_0}\Big[1+ik(\alpha-\alpha_0)+ \Big(e^{ik(\alpha-\alpha_0)}-1-ik(\alpha-\alpha_0)\Big)\Big] \end{align*} to (\ref{eq:weightexpr}), we may write the density function (\ref{eq:Pthetaeps}) for the distribution $\cP_{\theta,\eps}$ as \[\frac{d\cP_{\theta,\eps}(\alpha)}{d\alpha} \propto \exp\left(\frac{\langle y,g(\alpha)\cdot\theta\rangle}{\sigma^2}\right) \propto \exp\left(\frac{p(\alpha)}{\sigma^2}\right),\] where (also dropping constant terms that do not depend on $\alpha$) \[p(\alpha)=\Re \sum_{k=1}^K \overline{y_k} \cdot e^{ik\alpha_0}\Big(e^{ik(\alpha-\alpha_0)}-1-ik(\alpha-\alpha_0)\Big)\theta_k.\] For $\alpha \in [-\frac{\delta_0}{K},\frac{\delta_0}{K}]$, we now establish a quadratic approximation for $p(\alpha)$. 
We have \[p(\alpha)=\mathrm{I}(\alpha)+\mathrm{II}(\alpha)+\mathrm{III}(\alpha)\] where \begin{align*} \mathrm{I}(\alpha)&=\Re \sum_{k=1}^K \overline{\theta_k^*} \cdot e^{ik\alpha_0}\Big(e^{ik(\alpha-\alpha_0)}-1-ik(\alpha-\alpha_0)\Big) \theta_k^*\\ \mathrm{II}(\alpha)&=\Re \sum_{k=1}^K \overline{\theta_k^*} \cdot e^{ik\alpha_0}\Big(e^{ik(\alpha-\alpha_0)}-1-ik(\alpha-\alpha_0)\Big) (\theta_k-\theta_k^*)\\ \mathrm{III}(\alpha)&=\Re \sum_{k=1}^K \sigma \overline{\eps_k} \cdot e^{ik\alpha_0}\Big(e^{ik(\alpha-\alpha_0)}-1-ik(\alpha-\alpha_0)\Big)\theta_k. \end{align*} For $\mathrm{I}(\alpha)$, observe that $\overline{\theta_k^*}\theta_k^*=|\theta_k^*|^2$ is real, and \begin{align*} \Re e^{ik\alpha_0}\Big(e^{ik(\alpha-\alpha_0)}-1-ik(\alpha-\alpha_0)\Big) &=\cos(k\alpha)-\cos(k\alpha_0)+k(\alpha-\alpha_0)\sin(k\alpha_0)\\ &=-\frac{k^2(\alpha-\alpha_0)^2}{2} \cos(k\tilde{\alpha}), \end{align*} for some $\tilde{\alpha}$ between $\alpha$ and $\alpha_0$. Since $\alpha,\alpha_0 \in [-\frac{\delta_0}{K},\frac{\delta_0}{K}]$ and $k \leq K$, for sufficiently small $\delta_0$ this implies \[-\frac{k^2(\alpha-\alpha_0)^2}{4} \geq \Re e^{ik\alpha_0}\Big(e^{ik(\alpha-\alpha_0)}-1-ik(\alpha-\alpha_0)\Big) \geq -\frac{3k^2(\alpha-\alpha_0)^2}{4}.\] Then, applying \revise{the second and third bounds of (\ref{eq:krsumbounds}), there are constants $C,c>0$ (independent of $\delta_0$) such that \[-cK^2\|\theta^*\|^2(\alpha-\alpha_0)^2 \geq \mathrm{I}(\alpha) \geq -CK^2\|\theta^*\|^2(\alpha-\alpha_0)^2.\]} For $\mathrm{II}(\alpha)$, we apply $|e^{is}-1-is| \leq s^2$ for all real values $s \in \R$, Cauchy-Schwarz, and the condition $\theta \in \cB(\delta_1)$ to obtain \[|\mathrm{II}(\alpha)| \leq \sum_{k=1}^K k^2(\alpha-\alpha_0)^2 |\overline{\theta_k^*}||\theta_k-\theta_k^*| \leq \revise{(\alpha-\alpha_0)^2 K^2\sqrt{\sum_{k=1}^K |\theta_k^*|^2} \sqrt{\sum_{k=1}^K |\theta_k-\theta_k^*|^2} \leq \delta_1 (\alpha-\alpha_0)^2 K^2\|\theta^*\|^2}.\] For $\mathrm{III}(\alpha)$, we apply the 
condition (\ref{eq:epsvthetaalign2}) for $\eps \in \cF_2(\theta,v,\delta_1)$ to obtain \[|\mathrm{III}(\alpha)| \leq \revise{\delta_1(\alpha-\alpha_0)^2 K^2\|\theta^*\|^2}.\] Combining these bounds, for sufficiently small $\delta_1>0$ and some constants $C_0,c_0>0$ which we may take independent of $\delta_0,\delta_1$, we arrive at the desired quadratic approximation \revise{ \begin{equation}\label{eq:quadraticapprox} -c_0K^2\|\theta^*\|^2(\alpha-\alpha_0)^2 \geq p(\alpha) \geq -C_0K^2\|\theta^*\|^2(\alpha-\alpha_0)^2 \qquad \text{ for } \alpha \in [-\tfrac{\delta_0}{K},\tfrac{\delta_0}{K}]. \end{equation}} This implies the following variance bound: Denote $I_1=[-\frac{\delta_0}{K}, \frac{\delta_0}{K}]$ and $I_2=[-\pi,\pi) \setminus [-\frac{\delta_0}{K}, \frac{\delta_0}{K}]$. For any bounded function $f:[-\pi,\pi) \to \R$, denote $\|f\|_\infty=\sup_{\alpha \in [-\pi,\pi)} |f(\alpha)|$. Then \begin{align} \Var_{\alpha \sim \cP_{\theta,\eps}}[f(\alpha)] &=\inf_{x \in \R} \frac{\int_{-\pi}^\pi (f(\alpha)-x)^2 e^{p(\alpha)/\sigma^2}d\alpha} {\int_{-\pi}^\pi e^{p(\alpha)/\sigma^2}d\alpha}\nonumber\\ &\leq \inf_{x \in \R} \frac{\int_{I_1} (f(\alpha)-x)^2 e^{p(\alpha)/\sigma^2}d\alpha+4\|f\|_\infty^2 \int_{I_2} e^{p(\alpha)/\sigma^2} d\alpha}{\int_{-\pi}^\pi e^{p(\alpha)/\sigma^2}d\alpha}\nonumber\\ &\leq \revise{\inf_{x \in \R} \frac{\int_{I_1} (f(\alpha)-x)^2 e^{-c_0K^2\|\theta^*\|^2(\alpha-\alpha_0)^2/\sigma^2}d\alpha} {\int_{I_1} e^{-C_0K^2\|\theta^*\|^2(\alpha-\alpha_0)^2/\sigma^2}d\alpha} +4\|f\|_\infty^2 e^{-c(\delta_0)\|\theta^*\|^2/\sigma^2}}\label{eq:varexpr1} \end{align} where, in the last line, we have used (\ref{eq:quadraticapprox}) as well as (\ref{eq:angleconc}) to bound $\P_{\alpha \sim \cP_{\theta,\eps}}[\alpha \in I_2] \leq \revise{e^{-c(\delta_0)\|\theta^*\|^2/\sigma^2}}$ for a constant $c(\delta_0)>0$ depending on $\delta_0$. 
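Here the first equality of (\ref{eq:varexpr1}) uses the variational characterization of variance: for any random variable $X$ with finite second moment, $\E[(X-x)^2]=\Var[X]+(x-\E[X])^2$, so
\[\Var[X]=\inf_{x \in \R} \E[(X-x)^2],\]
with the infimum attained at $x=\E[X]$; replacing the infimum by any fixed $x$ can only increase the resulting bound.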
For the denominator of the first term of (\ref{eq:varexpr1}), we may evaluate the Gaussian integral as \revise{ \begin{align*} &\int_{I_1} e^{-C_0K^2\|\theta^*\|^2(\alpha-\alpha_0)^2/\sigma^2}d\alpha\\ &=\left(\frac{2C_0K^2\|\theta^*\|^2}{\sigma^2}\right)^{-1/2} \sqrt{2\pi}\left(1-\tilde{\Phi}\left[ \sqrt{\frac{2C_0K^2\|\theta^*\|^2}{\sigma^2}} \left(\frac{\delta_0}{K}+\alpha_0\right)\right] -\tilde{\Phi}\left[\sqrt{\frac{2C_0K^2\|\theta^*\|^2}{\sigma^2}} \left(\frac{\delta_0}{K}-\alpha_0\right)\right]\right). \end{align*}} Here, since $|\alpha_0| \leq \delta_0/K$, both values of $\tilde{\Phi}$ are at most $1/2$. Furthermore, under the condition $\frac{\|\theta^*\|^2}{\sigma^2} \geq C_1\log K$ for $C_1>0$ large enough depending on $\delta_0$, at least one value of $\tilde{\Phi}$ is less than $1/4$. Thus, for a constant $C>0$ independent of $\delta_0,\delta_1$, we have simply \revise{ \[\int_{I_1} e^{-C_0K^2\|\theta^*\|^2(\alpha-\alpha_0)^2/\sigma^2}d\alpha \geq \left(\frac{CK^2\|\theta^*\|^2}{\sigma^2}\right)^{-1/2}.\]} Combining this with the normalization constant for the Gaussian law in the numerator of the first term of (\ref{eq:varexpr1}), we obtain \begin{equation}\label{eq:varexpr2} \Var_{\alpha \sim \cP_{\theta,\eps}}[f(\alpha)] \leq C\Var_{\alpha \sim \Normal(\alpha_0,\tau^2)} [f(\alpha)]+ 4\|f\|_\infty^2e^{-c(\delta_0)\|\theta^*\|^2/\sigma^2}, \qquad \tau^2:=\frac{C'\sigma^2}{K^2\|\theta^*\|^2}. \end{equation} Here $C,C'>0$ are some constants depending only on $c_\gen$ and independent of $\delta_0,\delta_1$, whereas $c(\delta_0)$ depends also on $\delta_0$. Finally, we apply this bound (\ref{eq:varexpr2}) to the function $f(\alpha)=v^\top g(\alpha)^{-1} y=\langle y,g(\alpha)v \rangle$. 
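The two expressions for $f(\alpha)$ agree because each $g(\alpha)$ is an orthogonal matrix, so that $g(\alpha)^{-1}=g(\alpha)^\top$ and
\[v^\top g(\alpha)^{-1} y=v^\top g(\alpha)^\top y
=\langle g(\alpha) \cdot v,\,y \rangle=\langle y,\,g(\alpha) \cdot v \rangle.\]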
Observe that \begin{equation}\label{eq:finftybound} \|f\|_\infty \leq \|\theta^*\|\|v\|+\sigma \sup_{\alpha \in [-\pi,\pi)} |\langle \eps,g(\alpha)v \rangle| \leq C\|\theta^*\|, \end{equation} the last inequality using $\|v\|=1$ and (\ref{eq:epsvthetaalign1}) for $\eps \in \cF_2(\theta,v,\delta_1)$. To bound $\Var_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}[f(\alpha)]$, we apply again the inner-product relation (\ref{eq:CRisometry}) to write the complex representation of $f(\alpha)$ as \[f(\alpha)=\Re \sum_{k=1}^K \overline{y_k} \cdot e^{ik\alpha}v_k =\mathrm{I}(\alpha)+\mathrm{II}(\alpha)+\mathrm{III}(\alpha)\] where \begin{align*} \mathrm{I}(\alpha)&=\Re \sum_{k=1}^K \overline{\theta_k^*} \cdot e^{ik\alpha_0}\Big(1+ik(\alpha-\alpha_0)\Big)v_k\\ \mathrm{II}(\alpha)&=\Re \sum_{k=1}^K \overline{\theta_k^*} \cdot e^{ik\alpha_0}\Big(e^{ik(\alpha-\alpha_0)}-1-ik(\alpha-\alpha_0)\Big)v_k\\ \mathrm{III}(\alpha)&=\Re \sum_{k=1}^K \sigma \overline{\eps_k} \cdot e^{ik\alpha} v_k. \end{align*} We now upper bound the variance of each term under $\alpha\sim \mathcal N(\alpha_0, \tau^2)$. For $\mathrm{I}(\alpha)$, we may drop the constant term that is independent of $\alpha$ and write \begin{equation}\label{eq:VarI} \Var_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}[\mathrm{I}(\alpha)] =\Var_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}\left[\Re \sum_{k=1}^K \overline{\theta_k^*} \cdot ik\alpha \cdot v_k +\Re \sum_{k=1}^K \overline{\theta_k^*} \cdot (e^{ik\alpha_0}-1)ik\alpha \cdot v_k\right]. 
\end{equation} Recalling the tangent vector $u^*$ from (\ref{eq:ustar}), observe that its complex representation is \[\tilde{u}^*=\frac{d}{d\alpha}\Big(e^{ik\alpha}\theta_k^*: k=1,\ldots,K\Big)\Big|_{\alpha=0}=\Big(ik\theta_k^*:k=1,\ldots,K\Big).\] Then the inner-product relation (\ref{eq:CRisometry}) and the given orthogonality condition $\langle u^*,v \rangle=0$ imply \[\Re \sum_{k=1}^K -ik \cdot \overline{\theta_k^*} \cdot v_k=0,\] so the first term inside the variance of (\ref{eq:VarI}) is 0. Applying $|e^{ik\alpha_0}-1| \leq k|\alpha_0| \leq \delta_0 k/K$ for the second term, \begin{align*} \Var_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}[\mathrm{I}(\alpha)] &=\Var_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}[\alpha] \cdot \left(\Re \sum_{k=1}^K \overline{\theta_k^*} \cdot (e^{ik\alpha_0}-1)ik\cdot v_k\right)^2\\ &\leq \Var_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}[\alpha] \cdot \left(\sum_{k=1}^K |\theta_k^*| \cdot \frac{\delta_0k^2}{K} \cdot |v_k|\right)^2\\ &\leq \revise{\Var_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}[\alpha] \cdot \delta_0^2 K^2 \sum_{k=1}^K |\theta^*_k|^2 \cdot\sum_{k=1}^K |v_k|^2 =\tau^2 \delta_0^2 K^2\|\theta^*\|^2.} \end{align*} For $\mathrm{II}(\alpha)$, applying $|e^{is}-1-is| \leq s^2$ for any real value $s \in \R$, \begin{align*} \Var_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}[\mathrm{II}(\alpha)] &\leq \E_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}[\mathrm{II}(\alpha)^2]\\ &\leq \E_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}\left[ \left(\sum_{k=1}^K k^2(\alpha-\alpha_0)^2|\theta_k^*||v_k|\right)^2\right]\\ &\leq \revise{\E_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}[(\alpha-\alpha_0)^4] K^4\sum_{k=1}^K |\theta_k^*|^2 \sum_{k=1}^K |v_k|^2 =3\tau^4K^4\|\theta^*\|^2}. 
\end{align*} For $\mathrm{III}(\alpha)$, we may center by a constant independent of $\alpha$ and apply (\ref{eq:epsvthetaalign3}) to obtain \begin{align*} \Var_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}[\mathrm{III}(\alpha)] &\leq \E_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)} \left[\left(\Re \sum_{k=1}^K \sigma \overline{\eps_k}\Big(e^{ik\alpha} -e^{ik\alpha_0}\Big)v_k\right)^2\right]\\ &\leq \revise{\sigma^2 \cdot \E_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}[(\alpha-\alpha_0)^2] \cdot \frac{\delta_1^2K^2\|\theta^*\|^2}{\sigma^2} =\tau^2 \delta_1^2 K^2\|\theta^*\|^2}. \end{align*} Combining all of the above, we have \begin{align} \Var_{\alpha \sim \mathcal{N}(\alpha_0,\tau^2)}[f(\alpha)] &\leq 3\Var[\mathrm{I}(\alpha)]+3\Var[\mathrm{II}(\alpha)] +3\Var[\mathrm{III}(\alpha)]\nonumber\\ &\leq \revise{\tau^2 \cdot C(\delta_0^2+\delta_1^2)K^2\|\theta^*\|^2 +\tau^4 \cdot CK^4\|\theta^*\|^2}\label{eq:finalvarfbound} \end{align} for a constant $C>0$ independent of $\delta_0,\delta_1$. Applying (\ref{eq:finftybound}) and (\ref{eq:finalvarfbound}) and the value of $\tau^2$ to (\ref{eq:varexpr2}), for a constant $C'>0$ independent of $\delta_0,\delta_1$, \revise{ \[\Var_{\alpha \sim \cP_{\theta,\eps}}[f(\alpha)] \leq C'\left(\delta_0^2+\delta_1^2+\frac{\sigma^2}{\|\theta^*\|^2} +\frac{\|\theta^*\|^2}{\sigma^2}e^{-c(\delta_0)\|\theta^*\|^2/\sigma^2}\right)\sigma^2.\]} Then, choosing $\delta_0,\delta_1>0$ sufficiently small depending on $\eta$, and applying $\frac{\|\theta^*\|^2}{\sigma^2} \geq C_1\log K$ for a sufficiently large constant $C_1>0$ depending on $\delta_0$ and $\eta$, we obtain $\Var_{\alpha \sim \cP_{\theta,\eps}}[f(\alpha)] \leq \eta \sigma^2$ as desired. 
\end{proof} \begin{proof}[Proof of Lemma \ref{lemma:epsgoodprob}] Applying (\ref{eq:GPtail}) with $t=\delta_1\|\theta^*\|^2/\sigma$, \[\P\left[\sup_{\alpha \in \cA} |\langle \eps,g(\alpha) \cdot \theta \rangle| >\frac{\delta_1\|\theta^*\|^2}{\sigma} \text{ and } \|\eps\| \leq s\right] \leq \revise{\frac{C\sigma}{\|\theta^*\|} \cdot Ks \cdot e^{-c\|\theta^*\|^2/\sigma^2}}\] for constants $C,c>0$ (depending on $\delta_1$). Let us take \[s=\max(\sqrt{4K},\revise{\|\theta^*\|/\sigma}).\] Then applying $\frac{\|\theta^*\|^2}{\sigma^2} \geq C_1\log K$ for sufficiently large $C_1>0$, this probability bound is at most $e^{-c'\|\theta^*\|^2/\sigma^2}$. By a chi-squared tail bound, since $\|\eps\|^2 \sim \chi_{2K}^2$ and $s^2 \geq 4K$, we have $\P[\|\eps\|^2>s^2] \leq e^{-cs^2} \leq e^{-c\|\theta^*\|^2/\sigma^2}$. Combining these bounds gives, for a constant $c>0$, \[\P\left[\sup_{\alpha \in \cA} |\langle \eps,g(\alpha) \cdot \theta \rangle| >\frac{\delta_1\|\theta^*\|^2}{\sigma}\right] \leq e^{-c\|\theta^*\|^2/\sigma^2},\] so $\eps \in \cF_1(\theta,\delta_1)$ with probability at least $1-e^{-c\|\theta^*\|^2/\sigma^2}$. The same argument applied with the unit vector $v$ in place of $\theta$ shows \revise{ \[\P\left[\sup_{\alpha \in \cA} |\langle \eps,g(\alpha) \cdot v \rangle| >\frac{\|\theta^*\|}{\sigma}\right] \leq e^{-c\|\theta^*\|^2/\sigma^2},\]} so (\ref{eq:epsvthetaalign1}) holds with probability at least $1-e^{-c\|\theta^*\|^2/\sigma^2}$. For the condition (\ref{eq:epsvthetaalign2}), note that $\eps_k \sim \mathcal{N}_\C(0,2)$ and these are independent for $k=1,\ldots,K$. Then \[f_{\alpha,\alpha'}(\eps):=\alpha^{-2}\sum_{k=1}^K \overline{\eps_k} e^{ik\alpha'}(e^{ik\alpha}-1-ik\alpha)\theta_k\] has distribution $\mathcal{N}_\C(0,2\tau^2)$ where \[\tau^2=\alpha^{-4}\sum_{k=1}^K |e^{ik\alpha'}(e^{ik\alpha}-1-ik\alpha)\theta_k|^2.\] So $\Re f_{\alpha,\alpha'}(\eps) \sim \mathcal{N}(0,\tau^2)$. 
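For completeness, the law of $f_{\alpha,\alpha'}(\eps)$ follows from the standard fact that for fixed coefficients $c_1,\ldots,c_K \in \C$ and independent $\eps_k \sim \mathcal{N}_\C(0,2)$,
\[\sum_{k=1}^K \overline{\eps_k}\,c_k \sim \mathcal{N}_\C\left(0,\,2\sum_{k=1}^K |c_k|^2\right),
\qquad \Re \sum_{k=1}^K \overline{\eps_k}\,c_k \sim \mathcal{N}\left(0,\,\sum_{k=1}^K |c_k|^2\right),\]
applied here with $c_k=\alpha^{-2}e^{ik\alpha'}(e^{ik\alpha}-1-ik\alpha)\theta_k$.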
Applying $|e^{is}-1-is| \leq s^2$ for all $s \in \R$, we have \revise{ \[\tau^2 \leq \sum_{k=1}^K k^4|\theta_k|^2 \leq K^4\|\theta^*\|^2.\] Then setting \[t=\frac{\delta_1 K^2\|\theta^*\|^2}{\sigma},\]} a Gaussian tail bound yields, for a constant $c>0$ (depending on $\delta_1$), \[\P[|\Re f_{\alpha,\alpha'}(\eps)|>t/2] \leq e^{-c\|\theta^*\|^2/\sigma^2}.\] Differentiating in $\alpha$ and $\alpha'$ and applying $|e^{is}-1-is| \leq s^2$ and $|e^{is}-1-is+s^2/2| \leq |s|^3$ for $s \in \R$, we have \begin{align*} |\partial_{\alpha'} \Re f_{\alpha,\alpha'}(\eps)| &=\left|\alpha^{-2}\Re \sum_{k=1}^K \overline{\eps_k} \cdot ike^{ik\alpha'}(e^{ik\alpha}-1-ik\alpha)\theta_k\right| \leq \sum_{k=1}^K k^3|\eps_k||\theta_k| \leq K^3\|\eps\|\|\theta\|,\\ |\partial_\alpha \Re f_{\alpha,\alpha'}(\eps)| &=\left|\alpha^{-3}\Re \sum_{k=1}^K \overline{\eps_k} e^{ik\alpha'}(e^{ik\alpha}(ik\alpha-2)+2+ik\alpha)\theta_k\right| \leq C\sum_{k=1}^K k^3|\eps_k||\theta_k| \leq CK^3\|\eps\|\|\theta\|. \end{align*} (For the second line, we have explicitly evaluated the derivative, and then applied $e^{ik\alpha}=1+ik\alpha-k^2\alpha^2/2+O(k^3\alpha^3)$ and canceled terms to yield the first inequality.) Then, on an event $\{\|\eps\|<s\}$, $\Re f_{\alpha,\alpha'}(\eps)$ is $L$-Lipschitz in both $\alpha$ and $\alpha'$, for \revise{$L=C_0K^3\|\theta^*\|s$} and a constant $C_0>0$. 
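The covering argument that follows rests on the standard discretization step: if $\Re f_{\alpha,\alpha'}(\eps)$ is $L$-Lipschitz in each of $\alpha$ and $\alpha'$, and $N_\delta$ is a $\delta$-net with $\delta=t/(4L)$, then any pair $(\alpha,\alpha')$ has a net pair $(\beta,\beta')$ within distance $\delta$ in each coordinate, so
\[|\Re f_{\alpha,\alpha'}(\eps)| \leq |\Re f_{\beta,\beta'}(\eps)|+L|\alpha-\beta|+L|\alpha'-\beta'|
\leq |\Re f_{\beta,\beta'}(\eps)|+\frac{t}{2},\]
and the supremum over $[-\pi,\pi)^2$ exceeding $t$ forces the supremum over net pairs to exceed $t/2$.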
We set $\delta=t/(4L)$ and let $N_\delta$ be a $\delta$-net of $[-\pi,\pi)$ having cardinality \[|N_\delta|=\frac{8\pi L}{t} \leq \revise{\frac{C\sigma}{\|\theta^*\|}} \cdot Ks.\] Then \[\P\left[\sup_{\alpha,\alpha' \in [-\pi,\pi)} |\Re f_{\alpha,\alpha'}(\eps)|>t \text{ and } \|\eps\|<s\right] \leq \P\left[\sup_{\alpha,\alpha' \in N_\delta} |\Re f_{\alpha,\alpha'}(\eps)|>t/2 \right] \leq |N_\delta|^2 \cdot e^{-c\|\theta^*\|^2/\sigma^2}.\] Then, setting $s=\max(\sqrt{4K},\revise{\|\theta^*\|/\sigma})$ and applying the same argument as above shows \[\P\left[\sup_{\alpha,\alpha' \in [-\pi,\pi)} |\Re f_{\alpha,\alpha'}(\eps)|>t\right] \leq e^{-c\|\theta^*\|^2/\sigma^2},\] so (\ref{eq:epsvthetaalign2}) holds with probability at least $1-e^{-c\|\theta^*\|^2/\sigma^2}$. The argument for (\ref{eq:epsvthetaalign3}) is analogous. We define $\gamma=\alpha-\alpha'$, \[f_{\alpha',\gamma}(\eps):=\gamma^{-1} \sum_{k=1}^K \overline{\eps_k}e^{ik\alpha'}(e^{ik\gamma}-1)v_k \sim \mathcal{N}_\C(0,2\tau^2), \qquad \tau^2:=\gamma^{-2}\sum_{k=1}^K |e^{ik\alpha'}(e^{ik\gamma}-1)v_k|^2\] and set \revise{$t=\delta_1K\|\theta^*\|/\sigma$}. Applying $|e^{ik\gamma}-1| \leq k|\gamma|$ and $\|v\|=1$, we have $\tau^2 \leq \sum_{k=1}^K k^2|v_k|^2 \leq K^2$, so that a Gaussian tail bound yields $\P[|\Re f_{\alpha',\gamma}(\eps)|>t/2] \leq e^{-c\|\theta^*\|^2/\sigma^2}$. On the event $\{\|\eps\|<s\}$, we have the Lipschitz bounds \begin{align*} |\partial_{\alpha'} \Re f_{\alpha',\gamma}(\eps)| &=\left|\gamma^{-1} \Re \sum_{k=1}^K \overline{\eps_k} \cdot ike^{ik\alpha'} (e^{ik\gamma}-1)v_k\right| \leq \sum_{k=1}^K k^2|\eps_k||v_k| \leq K^2\|\eps\|\|v\| \leq K^2s,\\ |\partial_\gamma \Re f_{\alpha',\gamma}(\eps)| &=\left|\gamma^{-2} \Re \sum_{k=1}^K \overline{\eps_k} \cdot e^{ik\alpha'} (e^{ik\gamma}(ik\gamma-1)+1)v_k\right| \leq C\sum_{k=1}^K k^2|\eps_k||v_k| \leq CK^2\|\eps\|\|v\| \leq CK^2s, \end{align*} where we have applied $e^{ik\gamma}=1+ik\gamma+O(k^2\gamma^2)$.
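Similarly, the derivative identity underlying the $\partial_\gamma$ bound can be verified in one line:

```latex
\[
\partial_\gamma\Big[\gamma^{-1}\big(e^{ik\gamma}-1\big)\Big]
=-\gamma^{-2}\big(e^{ik\gamma}-1\big)+\gamma^{-1}\cdot ike^{ik\gamma}
=\gamma^{-2}\big(e^{ik\gamma}(ik\gamma-1)+1\big),
\]
% and substituting e^{ik\gamma} = 1 + ik\gamma + O(k^2\gamma^2) gives
%   e^{ik\gamma}(ik\gamma-1)+1 = -k^2\gamma^2 + O(k^3\gamma^3) = O(k^2\gamma^2),
% so each summand of \partial_\gamma \Re f_{\alpha',\gamma}(\eps) is
% at most C k^2 |\eps_k| |v_k|.
```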
Then applying a covering net argument as above, whose details we omit for brevity, we obtain \[\P\left[\sup_{\alpha' \in [-\pi,\pi),\;\gamma \in [-2\pi,2\pi)} |\Re f_{\alpha',\gamma}(\eps)|>t\right] \leq e^{-c\|\theta^*\|^2/\sigma^2},\] so (\ref{eq:epsvthetaalign3}) holds with probability at least $1-e^{-c\|\theta^*\|^2/\sigma^2}$. Combining these bounds yields the lemma. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma-delta-A}] Throughout the proof, $C,C',c,c'$ etc.\ are positive constants depending only on $c_\gen,\eta$ and changing from instance to instance. Recall the expression (\ref{eq:hessexpansion}). Let $C_1,\delta_1>0$ be such that the conclusion of Lemma \ref{lemma-infor-3} holds with $\eta/6$ in place of $\eta$. For any $\theta \in \cB(\delta_1)$ and unit vector $v$ satisfying $\langle u^*,v\rangle=0$, let us apply \[\Var_{\alpha \sim \cP_{\theta,\eps}} \Big[v^\top g(\alpha)^{-1}(\theta^*+\sigma \eps)\Big] \leq \E_{\alpha \sim \cP_{\theta,\eps}} \Big[\big(v^\top g(\alpha)^{-1}(\theta^*+\sigma \eps)\big)^2\Big] \leq \|\theta^*+\sigma \eps\|^2\] to upper-bound the second term of (\ref{eq:hessexpansion}) as \begin{equation}\label{eq:hessboundI123} \frac{1}{N}\sum_{m=1}^N \Var_{\alpha \sim \cP_{\theta,\eps^{(m)}}}\Big[v^\top g(\alpha)^{-1}(\theta^*+\sigma \eps^{(m)})\Big] \leq I_1(\theta,v)+I_2(\theta,v)+I_3 \end{equation} where \begin{align*} I_1(\theta,v) &= \frac1N\sum_{m=1}^N \Var_{\alpha \sim \cP_{\theta,\eps^{(m)}}}\Big[v^\top g(\alpha)^{-1}(\theta^*+\sigma \eps^{(m)})\Big] \cdot \mathbf{1}[\varepsilon^{(m)}\in \cE(\theta,v,\delta_1)],\\ I_2(\theta,v) &= \frac1N\sum_{m=1}^N \|\theta^*+\sigma\eps^{(m)}\|^2 \cdot \mathbf{1}\left[\varepsilon^{(m)} \notin \cE(\theta,v,\delta_1) \text{ and } \|\varepsilon^{(m)}\|^2 \leq 4K+\revise{\frac{\|\theta^*\|^2}{\sigma^2}}\right],\\ I_3 &= \frac1N\sum_{m=1}^N \|\theta^*+\sigma \eps^{(m)}\|^2 \cdot \mathbf{1}\left[\|\varepsilon^{(m)}\|^2>4K+\revise{\frac{\|\theta^*\|^2}{\sigma^2}}\right]. 
\end{align*} Here $I_1,I_2$ are dependent on $(\theta,v)$, whereas $I_3$ is independent of $(\theta,v)$. Lemma \ref{lemma-infor-3} applied with $\eta/6$ immediately gives the deterministic bound \begin{equation}\label{eq:I1bound} I_1(\theta,v) \leq \eta\sigma^2/6. \end{equation} For $I_2(\theta,v)$, on the event $\|\eps\|^2 \leq 4K+\|\theta^*\|^2/\sigma^2$, we have for a constant $C_2>0$ that \[\|\theta^*+\sigma \eps\|^2 \leq 2\|\theta^*\|^2+2\sigma^2\|\eps\|^2 \leq \revise{C_2(\|\theta^*\|^2+K\sigma^2)}.\] Thus \[I_2(\theta,v) \leq \revise{C_2(\|\theta^*\|^2+K\sigma^2)} \cdot \frac1N\sum_{m=1}^N \mathbf{1}[\varepsilon^{(m)} \notin \cE(\theta,v,\delta_1)].\] Denote $p=\P[\eps^{(m)} \notin \cE(\theta,v,\delta_1)]$ and \revise{$q=\frac{\eta\sigma^2}{6C_2(\|\theta^*\|^2+K\sigma^2)}$. By Lemma \ref{lemma:epsgoodprob}, \[p \leq e^{-c\|\theta^*\|^2/\sigma^2},\] so in particular $p<q$ for $\frac{\|\theta^*\|^2}{\sigma^2} \geq C_1\log K$ and sufficiently large $C_1>0$.} Then by a Chernoff bound for binomial random variables \citep[Theorem 2.3.1]{vershynin2018high}, \[\P[I_2(\theta,v) \geq \eta \sigma^2/6]=\P\left[\sum_{m=1}^N \mathbf{1}[\varepsilon^{(m)} \notin \cE(\theta,v,\delta_1)] \geq Nq\right] \leq \left(\frac{ep}{q}\right)^{Nq}.\] We have $(ep/q) \leq e^{-c'\|\theta^*\|^2/\sigma^2}$ when $\frac{\|\theta^*\|^2}{\sigma^2} \geq C_1\log K$ for sufficiently large $C_1>0$, so this yields \begin{equation}\label{eq:I2bound} \P[I_2(\theta,v) \geq \eta \sigma^2/6] \leq \revise{e^{-\frac{cN}{1+K\sigma^2/\|\theta^*\|^2}}}. \end{equation} For $I_3$, we bound separately its mean and its concentration. Denote the summand of $I_3$ as \[z^{(m)}=\|\theta^*+\sigma \eps^{(m)}\|^2 \cdot \mathbf{1}\left[\|\eps^{(m)}\|^2>4K+\frac{\|\theta^*\|^2}{\sigma^2}\right].\] Let $p'=\P[\|\eps^{(m)}\|^2>4K+\|\theta^*\|^2/\sigma^2]$. Then applying $\|\eps^{(m)}\|^2 \sim \chi_{2K}^2$ and a chi-squared tail bound, $p' \leq e^{-c(2K+\|\theta^*\|^2/\sigma^2)} \leq e^{-c\|\theta^*\|^2/\sigma^2}$. 
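The chi-squared tail bound for $p'$ used here follows from a standard Chernoff argument, which we sketch; the constant $c=(1-\log 2)/2$ is one admissible choice:

```latex
% Chernoff bound for \|\eps^{(m)}\|^2 ~ \chi^2_{2K}, taking t = 1/4:
\[
\P\big[\|\eps^{(m)}\|^2 > 4K+u\big]
\leq e^{-t(4K+u)}\,\E\big[e^{t\|\eps^{(m)}\|^2}\big]
= e^{-t(4K+u)}(1-2t)^{-K}
= e^{-K(1-\log 2)-u/4}.
\]
% Since (1-\log 2)/2 < 1/4, we have K(1-\log 2) + u/4 >= c(2K+u) with
% c = (1-\log 2)/2, so with u = \|\theta^*\|^2/\sigma^2:
\[
\P\Big[\|\eps^{(m)}\|^2 > 4K+\tfrac{\|\theta^*\|^2}{\sigma^2}\Big]
\leq e^{-c(2K+\|\theta^*\|^2/\sigma^2)}.
\]
```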
So by Cauchy-Schwarz, \begin{equation}\label{eq:Ezbound} \mathbb{E}[z^{(m)}] \leq \sqrt{\E \left[\|\theta^*+\sigma \eps^{(m)}\|^4\right]} \cdot \sqrt{p'} \leq \revise{C(\|\theta^*\|^2+K\sigma^2) \cdot e^{-c\|\theta^*\|^2/2\sigma^2}} \leq \eta\sigma^2/12 \end{equation} with the last inequality holding for $\frac{\|\theta^*\|^2}{\sigma^2} \geq C_1\log K$ and sufficiently large $C_1>0$. For the concentration, let $\|X\|_{\psi_1}=\inf\{s>0:\E[e^{|X|/s}] \leq 2\}$ denote the sub-exponential norm of a random variable $X$. Observe that similarly by Cauchy-Schwarz, \begin{align*} \E\Big[e^{\frac{|z^{(m)}|}{s}}\Big] &=1-p'+\E\left[\exp\left(\frac{\|\theta^*+\sigma \eps^{(m)}\|^2}{s}\right) \cdot \mathbf{1}\left[\|\eps^{(m)}\|^2>4K+\frac{\|\theta^*\|^2}{\sigma^2}\right]\right]\\ &\leq 1+\revise{\sqrt{\E\Big[\exp\Big((4\|\theta^*\|^2+4\sigma^2\|\eps^{(m)}\|^2)/s\Big)\Big]}} \cdot \sqrt{p'}. \end{align*} Applying the moment generating function bound $\E[\exp(t\|\eps^{(m)}\|^2)]=(1-2t)^{-K} \leq e^{4tK}$ for $t<1/4$, we have \[\E\Big[e^{\frac{|z^{(m)}|}{s}}\Big] \leq 1+e^{2\|\theta^*\|^2/s} \cdot e^{8K\sigma^2/s} \cdot e^{-c\|\theta^*\|^2/2\sigma^2} \leq 2\] when \revise{$s \geq C'\sigma^2(1+K\sigma^2/\|\theta^*\|^2)$ for a sufficiently large constant $C'>0$. So $\|z^{(m)}\|_{\psi_1} \leq C\sigma^2(1+K\sigma^2/\|\theta^*\|^2)$, and Bernstein's inequality \citep[Theorem 2.8.1]{vershynin2018high} gives \[\mathbb{P}\left[\frac1N\sum_{m=1}^N z^{(m)}-\mathbb{E}[z^{(m)}]>\eta \sigma^2/12\right] \leq e^{-\frac{cN}{(1+K\sigma^2/\|\theta^*\|^2)^2}}.\] Combining with (\ref{eq:Ezbound}), \begin{equation}\label{eq:I3bound} \P[I_3 \geq \eta\sigma^2/6] =\P\left[\frac{1}{N}\sum_{m=1}^N z^{(m)}>\eta \sigma^2/6\right] \leq e^{-\frac{cN}{(1+K\sigma^2/\|\theta^*\|^2)^2}}.
\end{equation}} Applying (\ref{eq:hessexpansion}), (\ref{eq:hessboundI123}), (\ref{eq:I1bound}), and (\ref{eq:I2bound}), we have \begin{align} \P\left[v^\top \nabla^2 R_N(\theta)v \leq \frac{1}{\sigma^2}-\frac{\eta}{2\sigma^2} \text{ and } I_3 \leq \frac{\eta\sigma^2}{6}\right] &\leq \P\left[I_1(\theta,v)+I_2(\theta,v)+I_3 \geq \frac{\eta\sigma^2}{2} \text{ and } I_3 \leq \frac{\eta\sigma^2}{6}\right]\nonumber\\ &\leq \P\left[I_2(\theta,v) \geq \frac{\eta\sigma^2}{6}\right] \leq e^{-\frac{cN}{1+K\sigma^2/\|\theta^*\|^2}}.\label{eq:hesspointwisebound} \end{align} Let us apply a covering net argument to take a union bound over $(\theta,v)$, and then combine with the bound (\ref{eq:I3bound}) for $I_3$ which is independent of $(\theta,v)$. We compute the Lipschitz constant of $v^\top \nabla^2 R_N(\theta)v$ in both $v$ and $\theta$: Taking the gradient in $v$, \[\left\|\nabla_v \Big[v^\top \nabla^2 R_N(\theta)v\Big]\right\| =\left\|2\nabla^2 R_N(\theta)v\right\| =2\sup_{u:\|u\|=1} u^\top \nabla^2 R_N(\theta)v.\] Then denoting $y^{(m)}=\theta^*+\sigma \eps^{(m)}$ and applying (\ref{eq:hess}), \begin{align} \left\|\nabla_v \Big[v^\top \nabla^2 R_N(\theta)v\Big]\right\| &\leq 2\left(\sup_{u:\|u\|=1} \frac{u^\top v}{\sigma^2}-\frac{1}{N\sigma^4} \sum_{m=1}^N \Cov_{\alpha \sim \cP_{\theta,\eps^{(m)}}} [u^\top g(\alpha)^{-1}y^{(m)}, v^\top g(\alpha)^{-1}y^{(m)}]\right)\nonumber\\ &\leq \frac{2}{\sigma^2} +\frac{2}{N\sigma^4}\sum_{m=1}^N \|y^{(m)}\|^2 \leq \frac{2}{\sigma^2}+\frac{4\|\theta^*\|^2}{\sigma^4} +\frac{4}{N\sigma^2} \sum_{m=1}^N \|\eps^{(m)}\|^2.\label{eq:gradquadv} \end{align} Similarly, taking the gradient in $\theta$, \[\left\|\nabla_\theta \left[v^\top \nabla^2 R_N(\theta)v\right]\right\| =\sup_{u:\|u\|=1} \nabla^3 R_N(\theta)[u,v,v]\] where $\nabla^3 R_N(\theta)[u,v,v]$ is the 3rd-derivative tensor of $R_N(\theta)$ evaluated at $u \otimes v \otimes v \in \R^{2K \times 2K \times 2K}$. 
We have \[\nabla^3 R_N(\theta)[u,v,v] =-\frac{1}{N\sigma^6}\sum_{m=1}^N \kappa_{\alpha \sim \cP_{\theta,\eps^{(m)}}}^3 [u^\top g(\alpha)^{-1}y^{(m)}, v^\top g(\alpha)^{-1}y^{(m)}, v^\top g(\alpha)^{-1}y^{(m)}]\] where $\kappa_{\alpha \sim \cP_{\theta,\eps^{(m)}}}^3[\cdot,\cdot,\cdot]$ denotes the 3rd-order mixed cumulant with respect to $\alpha \sim \cP_{\theta,\eps^{(m)}}$. For any random variables $X,Y,Z$, the moment-cumulant relations and H\"older's inequality give \[|\kappa^3[X,Y,Z]| \leq C \cdot \E[|X|^3]^{1/3}\E[|Y|^3]^{1/3}\E[|Z|^3]^{1/3}.\] Thus, \begin{equation}\label{eq:gradquadtheta} \left\|\nabla_\theta \left[v^\top \nabla^2 R_N(\theta)v\right]\right\| \leq \frac{C}{N\sigma^6}\sum_{m=1}^N \|y^{(m)}\|^3 \leq \frac{4C\|\theta^*\|^3}{\sigma^6}+\frac{4C}{N\sigma^3} \sum_{m=1}^N \|\eps^{(m)}\|^3. \end{equation} On an event \[\cA=\left\{\|\eps^{(m)}\|^2 \leq N \text{ for all } m=1,\ldots,N\right\},\] (\ref{eq:gradquadv}) and (\ref{eq:gradquadtheta}) imply that $v^\top \nabla^2 R_N(\theta)v$ is $L_v$-Lipschitz in $v$ and $L_\theta$-Lipschitz in $\theta$, for \revise{ \[L_v=C'\left(\frac{\|\theta^*\|^2}{\sigma^2}+N\right)\frac{1}{\sigma^2}, \qquad L_\theta=C'\left(\frac{\|\theta^*\|^2}{\sigma^2}+N\right)^{3/2}\frac{1}{\sigma^3}.\]} Let $N_v$ be a $\delta_v$-net of $\{v:\|v\|=1,\langle u^*,v \rangle=0\}$ and $N_\theta$ a $\delta_\theta$-net of $\{\theta: \theta \in \cB(\delta_1)\}$, for $\delta_v=\eta/(4L_v\sigma^2)$ and $\delta_\theta=\eta/(4L_\theta \sigma^2)$. 
This guarantees, for each $\theta \in \cB(\delta_1)$ and unit vector $v$ with $\langle u^*,v \rangle=0$, there exists $(\theta',v') \in N_\theta \times N_v$ such that \[\Big|v^\top \nabla^2 R_N(\theta)v-{v'}^\top \nabla^2 R_N(\theta')v'\Big| \leq L_\theta \|\theta-\theta'\|+L_v \|v-v'\| \leq \eta/2\sigma^2.\] Then, applying these Lipschitz bounds together with the pointwise bound (\ref{eq:hesspointwisebound}), \begin{align} &\P\left[\sup_{\theta \in \cB(\delta_1)}\, \sup_{v:\|v\|=1,\,\langle u^*,v \rangle=0}\, v^\top \nabla^2 R_N(\theta)v \leq \frac{1}{\sigma^2}-\frac{\eta}{\sigma^2} \text{ and } I_3 \leq \frac{\eta \sigma^2}{6} \text{ and } \cA \right]\nonumber\\ &\hspace{0.2in}\leq \P\left[\sup_{\theta \in N_\theta}\,\sup_{v \in N_v} v^\top \nabla^2 R_N(\theta)v \leq \frac{1}{\sigma^2} -\frac{\eta}{2\sigma^2} \text{ and } I_3 \leq \frac{\eta\sigma^2}{6}\right] \leq |N_v| \cdot |N_\theta| \cdot e^{-\frac{cN}{1+K\sigma^2/\|\theta^*\|^2}}. \label{eq:hessunionbound} \end{align} We may take the above nets to have cardinalities \revise{ \begin{align*} |N_v| \leq \left(1+\frac{2}{\delta_v}\right)^{2K} &\leq \left[C'\left(\frac{\|\theta^*\|^2}{\sigma^2}+N\right)\right]^{2K},\\ |N_\theta| \leq \left(1+\frac{C\|\theta^*\|}{\delta_\theta}\right)^{2K} &\leq \left[C'\left(\frac{\|\theta^*\|^2}{\sigma^2}+N\right)^{3/2} \frac{\|\theta^*\|}{\sigma}\right]^{2K}. 
\end{align*} Observe that under the given assumptions $N \geq C_0K(1+\frac{K\sigma^2}{\|\theta^*\|^2}) \log (K+\frac{\|\theta^*\|^2}{\sigma^2})$ and $\frac{\|\theta^*\|^2}{\sigma^2} \geq C_1\log K$, we have \[\frac{N}{\log N} \geq \frac{C_0K(1+\frac{K\sigma^2}{\|\theta^*\|^2}) \log (K+\frac{\|\theta^*\|^2}{\sigma^2})}{\log [C_0K(1+\frac{K\sigma^2}{\|\theta^*\|^2})\log (K+\frac{\|\theta^*\|^2}{\sigma^2})]}.\] Considering separately the cases $\frac{K\sigma^2}{\|\theta^*\|^2} \leq 1$ and $1 \leq \frac{K\sigma^2}{\|\theta^*\|^2} \leq \frac{K}{C_1\log K}$, it may be checked that this implies \begin{equation}\label{eq:logKlogN} \frac{N}{\log N} \geq C_0'K\left(1+\frac{K\sigma^2}{\|\theta^*\|^2}\right) \end{equation} where $C_0'$ may be taken to be a small absolute constant times $C_0/\log C_0$.} Then for sufficiently large $C_0,C_0'>0$ and some constant $c'>0$, the right side of (\ref{eq:hessunionbound}) may be bounded as $|N_v| \cdot |N_\theta| \cdot e^{-\frac{cN}{1+K\sigma^2/\|\theta^*\|^2}} \leq e^{-\frac{c'N}{1+K\sigma^2/\|\theta^*\|^2}}$. Combining this with (\ref{eq:I3bound}) and the chi-squared tail bound $\P[\cA^c] \leq N \cdot \P[\|\eps^{(m)}\|^2>N] \leq Ne^{-cN} \leq e^{-c'N}$ for $N \geq C_0K$, we obtain \[\P\left[\sup_{\theta \in \cB(\delta_1)}\, \sup_{v:\|v\|=1,\,\langle u^*,v \rangle=0}\, v^\top \nabla^2 R_N(\theta)v \leq \frac{1-\eta}{\sigma^2} \right] \leq e^{-\frac{c'N}{1+K\sigma^2/\|\theta^*\|^2}}+\P[I_3>\tfrac{\eta \sigma^2}{6}] +\P[\cA^c] \leq e^{-\frac{cN}{(1+K\sigma^2/\|\theta^*\|^2)^2}}\] which implies the lemma. \end{proof} \section{Proofs for minimax lower bound}\label{appendix:lower} \revise{For expositional clarity, we first show Lemma \ref{lemma:minimaxlowerP} in the setting $\beta=0$ where the Fourier coefficients do not decay.
At the conclusion of this section, we extend the proof to all $\beta \in [0,\frac{1}{2})$.} \begin{proof}[Proof of Lemma \ref{lemma:KLupperbound}] The first bound (\ref{eq:KLupperlownoise}) is basic and due to the data processing inequality: Let $q_\theta(\alpha,y)$ denote the joint density of $\alpha \sim \Unif([-\pi,\pi))$ and $y=g(\alpha) \cdot \theta+\sigma \eps$. Then the data processing inequality implies \begin{align*} D_{\KL}(p_\theta \| p_{\theta'}) \leq D_{\KL}(q_\theta \| q_{\theta'}) &=\E_{(\alpha,y) \sim q_\theta} \left[-\frac{\|y-g(\alpha) \cdot \theta\|^2}{2\sigma^2} +\frac{\|y-g(\alpha) \cdot \theta'\|^2}{2\sigma^2}\right]\\ &=\mathop{\E_{\alpha \sim \Unif([-\pi,\pi))}}_{\eps \sim \mathcal{N}(0,I)} \left[-\frac{\|\sigma \eps\|^2}{2\sigma^2} +\frac{\|g(\alpha) \cdot (\theta-\theta')+\sigma \eps\|^2}{2\sigma^2}\right] =\frac{\|\theta-\theta'\|^2}{2\sigma^2}. \end{align*} In the remainder of the proof, we show (\ref{eq:KLupperhighnoise}). Let us write $\alpha,\alpha' \sim \Unif([-\pi,\pi))$ for independent random rotations, $\E$ for the expectation over only $\alpha,\alpha'$ (fixing $y$ and $\eps$), and $g=g(\alpha)$ and $g'=g(\alpha')$. We have \[D_{\KL}(p_{\theta}\|p_{\theta'}) \leq \chi^2(p_{\theta}\|p_{\theta'}) :=\int_{\R^{2K}} \frac{[p_\theta(y)-p_{\theta'}(y)]^2}{p_\theta(y)}dy\] where the right side is the $\chi^2$-divergence, see e.g.\ \cite[Lemma 2.7]{tsybakov2008introduction}. We derive an upper bound for $\chi^2(p_\theta\|p_{\theta'})$ using the idea of \cite[Theorem 9]{bandeira2020optimal}: Let $\varphi(z)=(2\pi \sigma^2)^{-K}\exp(-\|z\|^2/2\sigma^2)$ be the density of $\mathcal{N}(0,\sigma^2 I_{2K})$. Then \begin{equation}\label{eq:pthetaexpr} p_\theta(y)=\E[\varphi(y-g\theta)] =\E\Big[\varphi(y) \cdot e^{\frac{y^\top g\theta}{\sigma^2}-\frac{\|\theta\|^2}{2\sigma^2}}\Big]. 
\end{equation} By Jensen's inequality and the condition $\E[g]=0$, \begin{equation}\label{eq:pthetalower} p_\theta(y) \geq \varphi(y) \cdot e^{\frac{y^\top \E[g]\theta}{\sigma^2} -\frac{\|\theta\|^2}{2\sigma^2}} =\varphi(y) \cdot e^{-\frac{\|\theta\|^2}{2\sigma^2}}. \end{equation} Then applying (\ref{eq:pthetaexpr}), (\ref{eq:pthetalower}), and the moment generating function \[\int \varphi(y)e^{\frac{y^\top (g\theta+g'\theta')}{\sigma^2}}dy =e^{\frac{\|g\theta+g'\theta'\|^2}{2\sigma^2}} =e^{\frac{\|\theta\|^2+\|\theta'\|^2+2\langle g\theta,g'\theta'\rangle} {2\sigma^2}},\] we get \begin{align} \chi^2(p_\theta\|p_{\theta'}) &\leq \int \frac{\Big(\E\Big[\varphi(y) \cdot e^{\frac{y^\top g\theta}{\sigma^2}-\frac{\|\theta\|^2}{2\sigma^2}} \Big] -\E\Big[\varphi(y) \cdot e^{\frac{y^\top g\theta'}{\sigma^2} -\frac{\|\theta'\|^2}{2\sigma^2}}\Big]\Big)^2} {\varphi(y) \cdot e^{-\frac{\|\theta\|^2}{2\sigma^2}}}dy\nonumber\\ &=\int \varphi(y) \Big(e^{-\frac{\|\theta\|^2}{2\sigma^2}} \E\Big[e^{\frac{y^\top g\theta}{\sigma^2}+\frac{y^\top g'\theta}{\sigma^2}}\Big] -2e^{-\frac{\|\theta'\|^2}{2\sigma^2}}\E\Big[e^{\frac{y^\top g\theta}{\sigma^2} +\frac{y^\top g'\theta'}{\sigma^2}}\Big] +e^{-\frac{\|\theta'\|^2}{\sigma^2}+\frac{\|\theta\|^2}{2\sigma^2}} \E\Big[e^{\frac{y^\top g\theta'}{\sigma^2} +\frac{y^\top g'\theta'}{\sigma^2}}\Big]\Big)dy\nonumber\\ &=e^{\frac{\|\theta\|^2}{2\sigma^2}} \E\left[e^{\frac{\langle g\theta,g'\theta \rangle}{\sigma^2}} -2e^{\frac{\langle g\theta,g'\theta' \rangle}{\sigma^2}} +e^{\frac{\langle g\theta',g'\theta' \rangle}{\sigma^2}}\right]\nonumber\\ &=e^{\frac{\|\theta\|^2}{2\sigma^2}} \E\left[e^{\frac{\langle \theta,g\theta \rangle}{\sigma^2}} -2e^{\frac{\langle \theta,g\theta' \rangle}{\sigma^2}} +e^{\frac{\langle \theta',g\theta' \rangle}{\sigma^2}}\right]\nonumber\\ &=e^{\frac{\|\theta\|^2}{2\sigma^2}} \sum_{m=0}^\infty \frac{1}{\sigma^{2m}m!} \E\big[\langle \theta,g\theta \rangle^m -2\langle \theta,g\theta' \rangle^m+\langle \theta',g\theta' 
\rangle^m\big]. \label{eq:chisqbound} \end{align} For $m=0$ and $m=1$, the summand of (\ref{eq:chisqbound}) is 0. We evaluate the summand for $m=2$, and upper bound it for $m \geq 3$. Let \[\tilde{\theta}=(\theta_1,\ldots,\theta_K) \in \C^K, \qquad \tilde{\theta}'=(\theta_1',\ldots,\theta_K') \in \C^K\] be the complex representations of $\theta,\theta'$ as defined in Section \ref{sec:complexrepr}. For $m=2$, applying (\ref{eq:CRisometry}), \[\E[\langle \theta,g(\alpha)\theta' \rangle^2] =\sum_{k_1,k_2=1}^K \E\Big[\Re(\overline{\theta_{k_1}}e^{ik_1\alpha} \theta_{k_1}') \cdot \Re (\overline{\theta_{k_2}}e^{ik_2\alpha}\theta_{k_2}')\Big].\] For any $k_1,k_2 \in \{1,\ldots,K\}$, applying $\Re \bar{x}y =(x\bar{y}+\bar{x}y)/2$ and $\E[e^{ik\alpha}]=0$ for any non-zero integer $k$, \begin{align*} \E\Big[\Re(\overline{\theta_{k_1}}e^{ik_1\alpha} \theta_{k_1}') \cdot \Re (\overline{\theta_{k_2}}e^{ik_2\alpha}\theta_{k_2}')\Big] &=\frac{1}{4}\E\Big[(e^{-ik_1\alpha}\theta_{k_1}\overline{\theta_{k_1}'}+e^{ik_1\alpha}\overline{\theta_{k_1}}\theta_{k_1}') (e^{-ik_2\alpha}\theta_{k_2}\overline{\theta_{k_2}'}+e^{ik_2\alpha} \overline{\theta_{k_2}}\theta_{k_2}')\Big]\\ &=\frac{1}{2}\1\{k_1=k_2\}|\theta_{k_1}|^2|\theta_{k_1}'|^2. \end{align*} Then $\E[\langle \theta,g\theta' \rangle^2]=\frac{1}{2}\sum_{k=1}^K r_k^2{r_k'}^2$. This identity holds also with $\theta=\theta'$, so \begin{equation}\label{eq:m2term} \E\big[\langle \theta,g\theta \rangle^2 -2\langle \theta,g\theta' \rangle^2+\langle \theta',g\theta' \rangle^2\big] =\frac{1}{2}\sum_{k=1}^K (r_k^2-{r_k'}^2)^2. 
\end{equation} For any $m \geq 3$ and every $k_1,\ldots,k_m \in \{1,\ldots,K\}$, applying again $\E[e^{ik\alpha}]=0$ for $k \neq 0$, we have similarly \begin{align*} \E\left[\prod_{\ell=1}^m \Re(\overline{\theta_{k_\ell}} e^{ik_\ell\alpha}\theta_{k_\ell}')\right] &=\frac{1}{2^m}\E\left[\prod_{\ell=1}^m (e^{-ik_\ell\alpha}\theta_{k_\ell}\overline{\theta_{k_\ell}'}+e^{ik_\ell\alpha} \overline{\theta_{k_\ell}} \theta_{k_\ell}')\right]\\ &=\frac{1}{2^m}\sum_{s_1,\ldots,s_m \in \{+1,-1\}} \1\{s_1k_1+\ldots+s_mk_m=0\} \cdot \prod_{\ell:s_\ell=+1} \theta_{k_\ell}\overline{\theta_{k_\ell}'} \cdot \prod_{\ell:s_\ell=-1} \overline{\theta_{k_\ell}}\theta_{k_\ell}' \end{align*} Noting that the left side is real and taking the real part on the right side, this is equal to \[\frac{1}{2^m} \sum_{s_1,\ldots,s_m \in \{+1,-1\}} \1\{s_1k_1+\ldots+s_mk_m=0\} \left(\prod_{\ell=1}^m r_{k_\ell}r_{k_\ell}'\right) \cos\left(\sum_{\ell=1}^m s_\ell \phi_{k_\ell} -s_\ell \phi_{k_\ell}'\right).\] Then, summing over all $k_1,\ldots,k_m \in \{1,\ldots,K\}$ and applying this also for $\theta=\theta'$, \begin{align} &\E\big[\langle \theta,g\theta \rangle^m -2\langle \theta,g\theta' \rangle^m+\langle \theta',g\theta' \rangle^m\big]\nonumber\\ &=\frac{1}{2^m}\sum_{k_1,\ldots,k_m=1}^K \sum_{s_1,\ldots,s_m \in \{+1,-1\}} \1\{s_1k_1+\ldots+s_mk_m=0\} \cdot \nonumber\\ &\hspace{1in} \left[\left(\prod_{\ell=1}^m r_{k_\ell}-\prod_{\ell=1}^m r_{k_\ell}'\right)^2 +2\left(\prod_{\ell=1}^m r_{k_\ell}r_{k_\ell}'\right) \left(1-\cos\left(\sum_{\ell=1}^m s_\ell \phi_{k_\ell} -s_\ell \phi_{k_\ell}'\right)\right)\right]\label{eq:mthmoment} =:\mathrm{I}+\mathrm{II}, \end{align} where $\mathrm{I}$ is the term involving $(\prod_\ell r_{k_\ell}-\prod_\ell r_{k_\ell}')^2$, and $\mathrm{II}$ is the term involving $\cos(\sum_\ell s_\ell \phi_{k_\ell}-s_\ell \phi_{k_\ell}')$. 
To upper bound $\mathrm{I}$, let us write \[\prod_{\ell=1}^m r_{k_\ell}-\prod_{\ell=1}^m r_{k_\ell}' =\sum_{j=1}^m (r_{k_j}-r_{k_j}')r_{k_1}\ldots r_{k_{j-1}} r_{k_{j+1}}'\ldots r_{k_m}'.\] Then \[\left(\prod_{\ell=1}^m r_{k_\ell}-\prod_{\ell=1}^m r_{k_\ell}'\right)^2 \leq m \cdot \sum_{j=1}^m (r_{k_j}-r_{k_j}')^2 \left(r_{k_1}\ldots r_{k_{j-1}} r_{k_{j+1}}'\ldots r_{k_m}'\right)^2.\] So \begin{align*} \mathrm{I} &\leq \sum_{j=1}^m \sum_{k_1,\ldots,k_m=1}^K \sum_{s_1,\ldots,s_m \in \{+1,-1\}} \1\{s_1k_1+\ldots+s_mk_m=0\} \cdot \frac{m}{2^m} (r_{k_j}-r_{k_j}')^2 \left(r_{k_1}\ldots r_{k_{j-1}} r_{k_{j+1}}'\ldots r_{k_m}'\right)^2 \end{align*} Consider this summand for $j=1$. Note that fixing $s_1,\ldots,s_m$ and $k_1,\ldots,k_{m-1}$, there is at most one choice for the remaining index $k_m \in \{1,\ldots,K\}$ that satisfies $s_1k_1+\ldots+s_mk_m=0$. Thus, the summand for $j=1$ is at most \[\max_{k_m=1}^K {r_{k_m}'}^2 \cdot \sum_{k_1,\ldots,k_{m-1}=1}^K \sum_{s_1,\ldots,s_m \in \{+1,-1\}} \frac{m}{2^m} (r_{k_1}-r_{k_1}')^2 \left(r_{k_2}'\ldots r_{k_{m-1}}'\right)^2 \leq m\sum_{k=1}^K (r_k-r_k')^2 \rupper^2 R^{2(m-2)}.\] The same bound holds for each summand $j=1,\ldots,m$, yielding \[\mathrm{I} \leq m^2\sum_{k=1}^K (r_k-r_k')^2 \rupper^2 R^{2(m-2)}.\] To bound $\mathrm{II}$, observe that when $s_1k_1+\ldots+s_mk_m=0$, we have \[\cos\left(\sum_{\ell=1}^m s_\ell \phi_{k_\ell}-s_\ell \phi_{k_\ell}'\right)= \cos\left(\sum_{\ell=1}^m s_\ell \phi_{k_\ell}-s_\ell \phi_{k_\ell}' +\alpha \cdot s_\ell k_\ell\right)\] for any $\alpha \in \R$. 
Then applying $1-\cos(x) \leq x^2/2$ for any $x \in \R$, we obtain \[2\left(1-\cos\left(\sum_{\ell=1}^m s_\ell \phi_{k_\ell} -s_\ell \phi_{k_\ell}'\right)\right) \leq \inf_{\alpha \in \R} \left(\sum_{\ell=1}^m s_\ell \phi_{k_\ell} -s_\ell \phi_{k_\ell}' +\alpha \cdot s_\ell k_\ell\right)^2 =\inf_{\alpha \in \R} m\sum_{j=1}^m (\phi_{k_j}-\phi_{k_j}'+\alpha k_j)^2.\] So \begin{equation}\label{eq:IIbound} \mathrm{II} \leq \inf_{\alpha \in \R} \sum_{j=1}^m \sum_{k_1,\ldots,k_m=1}^K \sum_{s_1,\ldots,s_m \in \{+1,-1\}} \1\{s_1k_1+\ldots+s_mk_m=0\} \cdot \frac{m}{2^m}(\phi_{k_j}-\phi_{k_j}'+\alpha k_j)^2 \left(\prod_{\ell=1}^m r_{k_\ell}r_{k_\ell}'\right). \end{equation} Applying $\sum_{k=1}^K r_kr_k' \leq \sum_{k=1}^K (r_k^2+{r_k'}^2)/2 \leq R^2$ and a similar argument as above, for any fixed $\alpha \in \R$, this summand for $j=1$ is at most \begin{align*} &\max_{k_m=1}^K r_{k_m}r_{k_m}' \cdot \sum_{k_1,\ldots,k_{m-1}=1}^K \sum_{s_1,\ldots,s_m \in \{+1,-1\}} \frac{m}{2^m}(\phi_{k_1}-\phi_{k_1}'+\alpha k_1)^2\Big(r_{k_1}\ldots r_{k_{m-1}} r_{k_1}'\ldots r_{k_{m-1}}'\Big)\\ &\leq m\sum_{k=1}^K r_kr_k'(\phi_k-\phi_k'+\alpha k)^2\rupper^2 R^{2(m-2)}. 
\end{align*} The same bound holds for each summand $j=1,\ldots,m$, yielding \[\mathrm{II} \leq \inf_{\alpha \in \R} m^2\sum_{k=1}^K r_kr_k'(\phi_k-\phi_k'+\alpha k)^2 \cdot \rupper^2 \cdot R^{2(m-2)}.\] Combining these bounds for $\mathrm{I}$ and $\mathrm{II}$, we arrive at \[\E\big[\langle \theta,g\theta \rangle^m -2\langle \theta,g\theta' \rangle^m+\langle \theta',g\theta' \rangle^m\big] \leq m^2\rupper^2 \cdot R^{2(m-2)} \cdot \inf_{\alpha \in \R} \sum_{k=1}^K (r_k-r_k')^2+r_kr_k'(\phi_k-\phi_k'+\alpha k)^2.\] Let us now apply this to (\ref{eq:chisqbound}) and sum over $m \geq 3$: We have \[\sum_{m=3}^\infty \frac{m^2R^{2(m-2)}}{\sigma^{2m}m!} =\sum_{m=3}^\infty \frac{mR^2}{(m-1)(m-2)\sigma^6} \cdot \frac{R^{2(m-3)}}{\sigma^{2(m-3)}(m-3)!} \leq \frac{3R^2}{2\sigma^6}e^{R^2/\sigma^2}.\] Then \begin{align*} &\sum_{m=3}^\infty \frac{e^{\|\theta\|^2/2\sigma^2}}{\sigma^{2m}m!} \E\big[\langle \theta,g\theta \rangle^m -2\langle \theta,g\theta' \rangle^m+\langle \theta',g\theta' \rangle^m\big]\\ &\leq \frac{3\rupper^2 R^2e^{3R^2/2\sigma^2}}{2\sigma^6} \cdot \inf_{\alpha \in \R} \sum_{k=1}^K (r_k-r_k')^2+r_kr_k'(\phi_k-\phi_k'+\alpha k)^2. \end{align*} Applying this and (\ref{eq:m2term}) to (\ref{eq:chisqbound}) gives (\ref{eq:KLupperhighnoise}). \end{proof} For a specific regime of parameters $\theta,\theta' \in \R^{2K}$, we simplify the lower bound for the loss in Proposition \ref{prop-lossgeneral} by expressing the squared distance $|\phi_k-\phi_k'+k\alpha|_\cA^2$ on the circle $\cA$ in terms of the usual squared distance $(\phi_k-\phi_k'+k\alpha)^2$ on $\R$. \begin{Lemma}\label{lemma:lossbounds} Fix any $\theta,\theta' \in \R^{2K}$ and let $\theta=(r_k\cos\phi_k,r_k\sin\phi_k)_{k=1}^K$ and $\theta'=(r_k'\cos\phi_k',r_k'\sin\phi_k')_{k=1}^K$. For each $\alpha \in \R$, let $K_0(\alpha) \in [0,K]$ be the largest integer for which $|K_0(\alpha) \cdot \alpha| \leq \pi/2$. 
If $r_k,r_k' \geq \rlower$ and $|\phi_k-\phi_k'| \leq \pi/3$ for each $k=1,\ldots,K$, then for a universal constant $c>0$, \begin{equation}\label{eq:losslowerbound} L(\theta,\theta') \geq \sum_{k=1}^K (r_k-r_k')^2 +c \inf_{\alpha \in \R} \left( (K-K_0(\alpha))\rlower^2+ \sum_{k=1}^{K_0(\alpha)} r_kr_k'(\phi_k-\phi_k'+k\alpha)^2 \right) \end{equation} where the second summation is understood as 0 if $K_0(\alpha)=0$. \end{Lemma} \begin{proof} Recall the form (\ref{eq:lossrphi}) of the loss from Proposition \ref{prop-lossgeneral}, where the infimum over $\alpha$ may be restricted to $[-\pi,\pi]$ by periodicity. We provide a lower bound for $\alpha \in [0,\pi]$, and the case $\alpha \in [-\pi,0]$ is analogous. If $\alpha=0$, let $K_0=K_1=\ldots=K$. Otherwise if $\alpha \in (0,\pi]$, let $0 \leq K_0 \leq K_1 \leq K_2 \leq \ldots$ be such that each $K_m$ is the largest integer in $[0,K]$ for which $K_m \cdot \alpha \leq 2\pi m+\frac{\pi}{2}$. Note that if $K_m<K$ strictly, then we must have also $K_m<K_{m+1}$. If $K_0 \geq 1$, then for every $k \in [1,K_0]$, we have $k\alpha \in [0,\pi/2]$, so $\phi_k-\phi_k'+k\alpha \in [-\pi/3,5\pi/6]$ and \[1-\cos(\phi_k-\phi_k'+k\alpha) \geq c(\phi_k-\phi_k'+k\alpha)^2\] for a universal constant $c>0$. Thus \begin{equation}\label{eq:losslowerI} \sum_{k=1}^{K_0} r_kr_k'\Big[1-\cos(\phi_k-\phi_k'+k\alpha)\Big] \geq c\sum_{k=1}^{K_0} r_kr_k'(\phi_k-\phi_k'+k\alpha)^2. \end{equation} This bound is also trivially true if $K_0=0$. Now fix any $m \geq 0$ where $K_m<K$ strictly. Consider the values \[k \in \{K_m+1,\ldots,K_{m+1}\}.\] For each such $k$, we have $k\alpha \in (2\pi m+\frac{\pi}{2},2\pi m+\frac{5\pi}{2}]$. Let $a$ be the number of such values $k$ where $k\alpha \in (2\pi m+\frac{\pi}{2},2\pi m+\frac{3\pi}{2}]$, and let $b$ be the number of such values $k$ where $k\alpha \in (2\pi m+\frac{3\pi}{2},2\pi m+\frac{5\pi}{2}]$. Then we must have $a \geq 1$ because $\alpha \in [0,\pi]$. 
Also, the number of multiples of $\alpha$ belonging to $(2\pi m+\frac{3\pi}{2},2\pi m+\frac{5\pi}{2}]$ is at most 1 more than the number of multiples of $\alpha$ belonging to $(2\pi m+\frac{\pi}{2},2\pi m+\frac{3\pi}{2}]$, so $b \leq a+1$. Thus \[\frac{a}{K_{m+1}-K_m}=\frac{a}{a+b} \geq \frac{a}{2a+1} \geq \frac13.\] For $k=K_m+1,\ldots,K_m+a$, we must have $\phi_k-\phi_k'+k\alpha \in (2\pi m+\frac{\pi}{6}, 2\pi m+\frac{11\pi}{6}]$, so $1-\cos(\phi_k-\phi_k'+k\alpha) \geq c$ for a universal constant $c>0$. Then \[\sum_{k=K_m+1}^{K_{m+1}} r_kr_k'\Big[1-\cos(\phi_k-\phi_k'+k\alpha)\Big] \geq \sum_{k=K_m+1}^{K_m+a} c \cdot r_kr_k' \geq c\rlower^2 \cdot \frac{K_{m+1}-K_m}{3}.\] Now summing over all $m \geq 0$ where $K_m<K$, \begin{equation}\label{eq:losslowerII} \sum_{k=K_0+1}^K r_kr_k'\Big[1-\cos(\phi_k-\phi_k'+k\alpha)\Big] \geq \frac{c\rlower^2}{3} \cdot (K-K_0). \end{equation} Applying (\ref{eq:losslowerI}) and (\ref{eq:losslowerII}) and the analogous bounds for $\alpha \in [-\pi,0]$ to (\ref{eq:lossrphi}), and taking the infimum over $\alpha \in [-\pi,\pi]$, we obtain (\ref{eq:losslowerbound}). \end{proof} We conclude the proof of Lemma \ref{lemma:minimaxlowerP} \revise{for $\beta=0$} using the following version of Assouad's hypercube lower bound from \cite[Lemma 2]{cai2012optimal}. \begin{Lemma}\label{lemma:assouad} Fix $m \geq 1$, let $\{P_\tau:\tau \in \{0,1\}^m\}$ be any $2^m$ probability distributions, and let $\psi(P_\tau)$ take values in a metric space with metric $d$. Then for any $s>0$ and any estimator $\hat{\psi}(X)$ based on $X \sim P_\tau$, \begin{align*} &\sup_{\tau \in \{0,1\}^m} \E_{X \sim P_\tau}\Big[d(\hat{\psi}(X),\psi(P_\tau))^s\Big]\\ &\geq \frac{m}{2^{s+1}} \cdot \min_{H(\tau,\tau') \geq 1} \frac{d(\psi(P_\tau),\psi(P_{\tau'}))^s} {H(\tau,\tau')} \cdot \min_{H(\tau,\tau')=1} \Big(1-D_\TV(P_\tau,P_{\tau'})\Big). 
\end{align*} Here, $H(\tau,\tau')=\sum_{i=1}^m \1\{\tau_i \neq \tau_i'\}$ is the Hamming distance between $\tau$ and $\tau'$, and $D_\TV(P_\tau,P_{\tau'})$ is the total-variation distance between $P_\tau$ and $P_{\tau'}$. \end{Lemma} \begin{proof}[Proof of Lemma \ref{lemma:minimaxlowerP}, \revise{$\beta=0$}] We define $2^K$ parameters $\theta^\tau \in \cP_0$ indexed by $\tau \in \{0,1\}^K$: Fix a value $\phi \in [0,\pi/3]$ to be determined. For each $\tau \in \{0,1\}^K$, set \begin{equation}\label{eq:lowerboundPconstruction} \phi_k^\tau=\tau_k \phi=\begin{cases} \phi & \text{ if } \tau_k=1 \\ 0 & \text{ if } \tau_k=0. \end{cases} \end{equation} Then let $\theta^\tau$ be the vector where $r_k(\theta)=1$ and $\phi_k(\theta)=\phi_k^\tau$ for each $k=1,\ldots,K$. Let $P_\tau=p_{\theta^\tau}^N$ denote the law of $N$ samples $y^{(1)},\ldots,y^{(N)} \overset{iid}{\sim} p_{\theta^\tau}$. Let $\orbit_\theta=\{g(\alpha) \cdot \theta:\alpha \in \cA\}$ be the rotational orbit of $\theta$. Then $d(\orbit_\theta,\orbit_{\theta'}):=L(\theta,\theta')^{1/2} =\min_{\alpha \in \cA} \|\theta'-g(\alpha) \cdot \theta\|$ defines a metric over the space of all such orbits. We apply Lemma \ref{lemma:assouad} with $m=K$, $\psi(P_\tau)=\orbit_{\theta^\tau}$, this metric $d(\orbit_\theta,\orbit_{\theta'})$, and $s=2$. Applying (\ref{eq:KLupperlownoise}) and $|e^{is}-e^{it}| \leq |s-t|$ for all $s,t \in \R$, \begin{equation}\label{eq:DKLHamminglownoise} D_{\KL}(p_{\theta^\tau}\|p_{\theta^{\tau'}}) \leq \frac{\|\theta^\tau-\theta^{\tau'}\|^2}{2\sigma^2} =\frac{1}{2\sigma^2} \sum_{k=1}^K \big|e^{i\phi_k^\tau} -e^{i\phi_k^{\tau'}}\big|^2 \leq \frac{\phi^2}{2\sigma^2} \cdot H(\tau,\tau') \end{equation} where $H(\tau,\tau')$ is the Hamming distance. 
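The step $|e^{is}-e^{it}| \leq |s-t|$ used in (\ref{eq:DKLHamminglownoise}) is the chord-arc inequality, which follows by integration:

```latex
\[
\big|e^{is}-e^{it}\big|
=\Big|\int_t^s ie^{iu}\,du\Big|
\leq |s-t| \qquad \text{for all } s,t \in \R,
\]
% so each coordinate contributes
%   |e^{i\phi_k^\tau} - e^{i\phi_k^{\tau'}}|^2 <= \phi^2 \1{\tau_k != \tau_k'},
% and summing over k = 1,...,K yields the factor \phi^2 H(\tau,\tau').
```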
Applying (\ref{eq:KLupperhighnoise}) with the right side evaluated at $\alpha=0$, \revise{where $\rupper=1$ and $R^2=K$}, also \begin{equation}\label{eq:DKLHamminghighnoise} D_{\KL}(p_{\theta^\tau}\|p_{\theta^{\tau'}}) \leq \frac{\phi^2}{A} \cdot H(\tau,\tau'), \qquad \revise{A:=\frac{2\sigma^6}{3Ke^{3K/2\sigma^2}}} \end{equation} Then, setting \begin{equation}\label{eq:phistar} \phi=\min\left(\frac{1}{\sqrt{N}} \cdot \max(\sqrt{2\sigma^2},\sqrt{A}), \frac{\pi}{3}\right), \end{equation} these bounds imply for both cases of the max that $D_{\KL}(p_{\theta^\tau}\|p_{\theta^{\tau'}}) \leq H(\tau,\tau')/N$. Then by Pinsker's inequality (see e.g.\ \citep[Lemma 2.5]{tsybakov2008introduction}), \[D_{\TV}(P_\tau,P_{\tau'}) \leq \sqrt{\frac{1}{2}D_{\KL}(P_\tau\|P_{\tau'})} =\sqrt{\frac{N}{2}D_{\KL}(p_{\theta^\tau}\|p_{\theta^{\tau'}})} \leq \sqrt{\frac{1}{2}H(\tau,\tau')},\] so \[\min_{H(\tau,\tau')=1} \Big(1-D_{\TV}(P_\tau,P_{\tau'})\Big) \geq 1-\sqrt{1/2}>0.\] Since $\phi_k \in [0,\pi/3]$ for every $k$, we may apply Lemma \ref{lemma:lossbounds} to lower-bound the loss: For a universal constant $c>0$, we have \begin{equation}\label{eq:losslowerP} L(\theta^\tau,\theta^{\tau'}) \geq c \inf_{K_0 \in [0,K]} \inf_{\alpha \in \R} \left(K-K_0 +\sum_{k=1}^{K_0} (\phi_k^\tau-\phi_k^{\tau'}+k\alpha)^2 \right). 
\end{equation} For any fixed $K_0 \in [2,K]$, the inner infimum over $\alpha$ is attained at $\alpha=-\sum_{k=1}^{K_0} k(\phi_k^\tau-\phi_k^{\tau'}) /\sum_{k=1}^{K_0} k^2$, and we have \begin{align*} \inf_{\alpha \in \R} \sum_{k=1}^{K_0} (\phi_k^\tau-\phi_k^{\tau'}+k\alpha)^2 &=\sum_{k=1}^{K_0} (\phi_k^\tau-\phi_k^{\tau'})^2-\frac{\left(\sum_{k=1}^{K_0} k(\phi_k^\tau-\phi_k^{\tau'})\right)^2}{\sum_{k=1}^{K_0} k^2}\\ &=\phi^2 \cdot \left(H(\tau^{K_0},{\tau'}^{K_0})-\frac{\left(\sum_{k=1}^{K_0} k(\tau_k-\tau_k')\right)^2}{\sum_{k=1}^{K_0} k^2}\right) \end{align*} where $\tau^{K_0}=(\tau_1,\ldots,\tau_{K_0})$, ${\tau'}^{K_0}=(\tau_1',\ldots,\tau_{K_0}')$, and $H(\tau^{K_0},{\tau'}^{K_0})$ is the Hamming distance of these subvectors in $\{0,1\}^{K_0}$. Subject to a constraint that $H(\tau^{K_0},{\tau'}^{K_0})=h$, we have \begin{align*} \left(\sum_{k=1}^{K_0} k(\tau_k-\tau_k')\right)^2 &\leq \Big(K_0+(K_0-1)+\ldots+(K_0-h+1)\Big)^2\\ &=\left(\frac{h(2K_0-h+1)}{2}\right)^2 =h \cdot \frac{h(2K_0-h+1)^2}{4} \leq h \cdot \frac{(2K_0+1)^3}{27}, \end{align*} where the last inequality is tight at the maximizer $h=(2K_0+1)/3$. Then for any $K_0 \in [2,K]$, \begin{align*} H(\tau^{K_0},{\tau'}^{K_0})-\frac{\left(\sum_{k=1}^{K_0} k(\tau_k-\tau_k')\right)^2}{\sum_{k=1}^{K_0} k^2} &\geq H(\tau^{K_0},{\tau'}^{K_0}) \cdot \left(1-\frac{(2K_0+1)^3/27}{K_0(K_0+1)(2K_0+1)/6}\right) \geq \frac{2H(\tau^{K_0},{\tau'}^{K_0})}{27}. \end{align*} Applying also $K-K_0 \geq H((\tau_{K_0+1},\ldots,\tau_K),(\tau_{K_0+1}',\ldots,\tau_K'))$ and $\phi \leq \pi/3$, we get \begin{equation}\label{eq:finallossbound} \inf_{\alpha \in \R} \left(K-K_0+\sum_{k=1}^{K_0} (\phi_k^\tau-\phi_k^{\tau'}+k\alpha)^2\right) \geq c\phi^2 H(\tau,\tau') \end{equation} for a universal constant $c>0$. For $K_0=0$ or $K_0=1$ (and any $K \geq 2$), we may instead lower bound the left side by $K-K_0 \geq K/2 \geq H(\tau,\tau')/2$, so that this bound (\ref{eq:finallossbound}) holds also. 
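Both elementary inequalities in this display, the extremal value of the signed sum and the resulting $2/27$ gap, can be confirmed with exact rational arithmetic; a brief sketch (the function names are ours):

```python
from fractions import Fraction

def max_signed_sum_sq(K0, h):
    """(K0 + (K0-1) + ... + (K0-h+1))^2: the extremal value of
    (sum_k k(tau_k - tau_k'))^2 subject to Hamming distance h."""
    return Fraction(h * (2 * K0 - h + 1), 2) ** 2

def ratio_gap(K0):
    """1 - ((2 K0 + 1)^3 / 27) / (sum_{k<=K0} k^2)."""
    cube = Fraction((2 * K0 + 1) ** 3, 27)
    sum_sq = Fraction(K0 * (K0 + 1) * (2 * K0 + 1), 6)
    return 1 - cube / sum_sq

for K0 in range(2, 60):
    for h in range(K0 + 1):
        # h * h(2 K0 - h + 1)^2 / 4  <=  h * (2 K0 + 1)^3 / 27
        assert max_signed_sum_sq(K0, h) <= h * Fraction((2 * K0 + 1) ** 3, 27)
    # the gap is worst at K0 = 2, where it equals 2/27 exactly
    assert ratio_gap(K0) >= Fraction(2, 27)
```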
Thus, taking the infimum in (\ref{eq:losslowerP}) over $K_0 \in [0,K]$, \begin{equation}\label{eq:dpsilower} d(\psi(P_\tau),\psi(P_{\tau'}))^2 =L(\theta^\tau,\theta^{\tau'}) \geq c'\phi^2 \cdot H(\tau,\tau') \end{equation} for a universal constant $c'>0$. Applying Lemma \ref{lemma:assouad} with $m=K$ and $s=2$, we obtain \begin{equation}\label{eq:assouadapplication} \inf_{\hat{\theta}} \sup_{\theta^* \in \cP(r)} \E_{\theta^*}[L(\theta^*,\hat{\theta})] \geq \inf_{\hat{\theta}} \sup_{\theta^\tau:\tau \in \{0,1\}^K} \E_{\theta^\tau}[L(\theta^\tau,\hat{\theta})] \geq c''K\phi^2. \end{equation} Applying the form of $\phi$ from (\ref{eq:phistar}) concludes the proof. \end{proof} \revise{We now extend this argument to the more general case of $\beta \in [0,\frac{1}{2})$. Consider the subset of $\cP_\beta$ defined by \[\cP_\beta^*=\Big\{\theta^* \in \R^{2K}: r_k(\theta^*)=k^{-\beta} \text{ for all } k\in \{1,\ldots,K\}~~\text{and}~~\phi_k(\theta^*)=0\text{ for all } k \leq K/2\Big\}\] where the phases of the first half of the Fourier frequencies are fixed to 0. For $\theta,\theta'$ belonging to $\cP_\beta^*$, the following lemma modifies the KL upper bound (\ref{eq:KLupperhighnoise}) from Lemma \ref{lemma:KLupperbound}, replacing the factor $\rupper=\max_k r_k=1$ by a multiple of $\rlower=K^{-2\beta}$. \begin{Lemma}\label{lemma-6.2.1} Fix any $\beta \in [0,\frac{1}{2})$, and denote $R^2=\sum_{k=1}^K k^{-2\beta}$ and $\rlower^2=K^{-2\beta}$. Then for all $\theta, \theta' \in \cP_\beta^*$, \begin{equation} D_{\KL}(p_{\theta}\|p_{\theta'}) \leq \frac{27\rlower^2 R^2e^{5R^2/2\sigma^2}}{\sigma^6} \cdot \inf_{\alpha \in \R} \sum_{k=1}^K k^{-2\beta}(\phi_k-\phi_k'+\alpha k)^2 \label{eq:KLupperhighnoise-new} \end{equation} \end{Lemma} \begin{proof} Following the proof of Lemma \ref{lemma:KLupperbound}, we provide a new bound for the quantity $\mathrm{I}+\mathrm{II}$ in (\ref{eq:mthmoment}). Since $r_k(\theta)=r_k(\theta')$ for all $k$, we have $\mathrm{I}=0$. 
For $\mathrm{II}$, notice that we may restrict the summation over $k_1,\ldots,k_m$ in its definition to tuples where $\max(k_1,\ldots,k_m)>K/2$, since otherwise the summand is 0 upon setting $\alpha=0$, by the condition $\phi_k(\theta)=\phi_k(\theta')$ for all $k \leq K/2$. Therefore, we have similarly to (\ref{eq:IIbound}), \begin{align*} \mathrm{II} &\leq \inf_{\alpha \in \R} \sum_{j=1}^m \mathop{\sum_{k_1,\ldots,k_m=1}}_{\max(k_1,\ldots,k_m)>K/2}^K\;\; \mathop{\sum_{s_1,\ldots,s_m \in \{+1,-1\}}}_{s_1k_1+\ldots+s_mk_m=0} \frac{m}{2^m}(\phi_{k_j}-\phi_{k_j}'+\alpha k_j)^2 \left(\prod_{\ell=1}^m r_{k_\ell}r_{k_\ell}'\right). \end{align*} Fix $\alpha \in \R$, consider the summand for $j=1$, and notice that if $s_1k_1+\ldots + s_mk_m = 0$ and $\max(k_1,\ldots,k_m)>K/2$, then the second largest index amongst $k_1,\ldots,k_m$ is at least $K/(2m)$. Then, the summand for $j=1$ is bounded by \begin{align*} &\sum_{i=2}^m \mathop{\mathop{\sum_{k_1,\ldots,k_m=1}^K}_{\max(k_1,\ldots,k_m)>K/2}}_{ k_i \geq \max(k_2,\ldots,k_m)}\;\; \mathop{\sum_{s_1,\ldots,s_m \in \{+1,-1\}}}_{s_1k_1+\ldots+s_mk_m=0} \frac{m}{2^m}(\phi_{k_1}-\phi_{k_1}'+\alpha k_1)^2 \left(\prod_{\ell=1}^m r_{k_\ell}r_{k_\ell}'\right)\\ &\leq \sum_{i=2}^m\;\max_{k_i>\frac{K}{2m}} r_{k_i}r_{k_i}' \;\sum_{k_1,\ldots,k_{i-1},k_{i+1},\ldots,k_m=1}^K \sum_{s_1,\ldots,s_m \in \{+1,-1\}} \frac{m}{2^m}(\phi_{k_1}-\phi_{k_1}'+\alpha k_1)^2 \left(\prod_{\ell \neq i} r_{k_\ell}r_{k_\ell}'\right)\\ &\leq m^2 r^2\left(\frac{K}{2m}\right)^{-2\beta}\sum_{k_1=1}^K r_{k_1}r_{k_1}'(\phi_{k_1}-\phi_{k_1}'+\alpha k_1)^2 R^{2(m-2)}. \end{align*} In the second line, for each fixed $i \in \{2,\ldots,m\}$, we have used that for every choice of $s_1,\ldots,s_m$ and $k_1,\ldots,k_{i-1},k_{i+1},\ldots,k_m$, there is at most one choice of $k_i$ for which conditions $s_1k_1+\ldots+s_mk_m=0$, $\max(k_1,\ldots,k_m)>K/2$, and $k_i \geq \max(k_2,\ldots,k_m)$ all hold, and such a value $k_i$ must be at least $K/(2m)$. 
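The combinatorial claim here, that the cancellation $s_1k_1+\ldots+s_mk_m=0$ together with $\max(k_1,\ldots,k_m)>K/2$ forces the second-largest index to be at least $K/(2m)$, can be brute-forced for small $K$ and $m$ (an illustrative check; the helper name is ours):

```python
from itertools import product

def second_largest_claim(K, m):
    """Check that every cancelling tuple with max(k) > K/2 has its
    second-largest index at least K / (2m)."""
    for ks in product(range(1, K + 1), repeat=m):
        if max(ks) <= K / 2:
            continue
        for ss in product((1, -1), repeat=m):
            if sum(s * k for s, k in zip(ss, ks)) == 0:
                if sorted(ks)[-2] < K / (2 * m):
                    return False
    return True

assert second_largest_claim(12, 3)
assert second_largest_claim(10, 4)
```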
The same bound holds for each summand $j=1,\ldots,m$. Then, applying $(K/2m)^{-2\beta}=(2m)^{2\beta}\rlower^2 <2m\rlower^2$ for $\beta<1/2$, and taking the infimum over $\alpha \in \R$, this gives \[\mathrm{II} \leq \inf_{\alpha \in \R} 2m^4 R^{2(m-2)}\rlower^2 \sum_{k=1}^K k^{-2\beta}(\phi_k-\phi_k'+\alpha k)^2.\] Thus for all $m \geq 3$, \[\E[\langle \theta,g\theta \rangle^m-2\langle \theta,g\theta' \rangle^m +\langle \theta',g\theta' \rangle^m] \leq 2m^4 R^{2(m-2)}\rlower^2 \cdot \inf_{\alpha \in \R} \sum_{k=1}^K k^{-2\beta} (\phi_k-\phi_k'+\alpha k)^2,\] and the left side is 0 for $m=0,1,2$ because $r_k(\theta)=r_k(\theta')$ for all $k$. Now we apply this to (\ref{eq:chisqbound}) and sum over $m \geq 3$, using \begin{align*} \sum_{m=3}^\infty \frac{2m^4R^{2(m-2)}}{\sigma^{2m}m!} \leq \max_{m=3}^\infty \frac{2m^2R^2}{(m-1)(m-2)\sigma^6} \cdot \sum_{m=3}^\infty \frac{m \cdot R^{2(m-3)}}{\sigma^{2(m-3)}(m-3)!} =\frac{9R^2}{\sigma^6} \cdot \left(\frac{R^2}{\sigma^2}+3\right)e^{R^2/\sigma^2} \leq \frac{27R^2}{\sigma^6}e^{2R^2/\sigma^2} \end{align*} This yields as desired \[D_{\KL}(p_\theta\|p_{\theta'}) \leq \chi^2(p_\theta\|p_{\theta'}) \leq \frac{27R^2}{\sigma^6}e^{5R^2/2\sigma^2} \rlower^2 \cdot \inf_{\alpha \in \R} \sum_{k=1}^K k^{-2\beta}(\phi_k-\phi_k'+\alpha k)^2.\] \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:minimaxlowerP}] Consider the subset of points $\tau \in \{0,1\}^K$ such that $\tau_k=0$ for all $k \leq K/2$. Fixing a value $\phi \in [0,\pi/3]$, we define corresponding to each such $\tau \in \{0,1\}^K$ the parameter $\theta^\tau \in \cP_\beta^*$ where $r_k(\theta)=k^{-\beta}$ and $\phi_k(\theta)=\tau_k\phi$ for each $k=1,\ldots,K$. These parameter vectors $\theta^\tau$ are thus identified with the vertices of a hypercube of dimension $m=\lfloor K/2 \rfloor$. 
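A quick numerical check of this construction (the helper name is ours): with $r_k=k^{-\beta}$ and phases supported on the upper half of the frequencies, $\|\theta^\tau-\theta^{\tau'}\|^2/(2\sigma^2)$ is bounded by $\phi^2H(\tau,\tau')/(K^{2\beta}\sigma^2)$, which uses $(K/2)^{-2\beta}<2K^{-2\beta}$ for $\beta<1/2$:

```python
import cmath
import random

def theta(tau, K, beta, phi):
    """Complex Fourier coefficients of theta^tau: r_k = k^{-beta}, phi_k = tau_k * phi."""
    return [k ** (-beta) * cmath.exp(1j * phi * tau[k - 1]) for k in range(1, K + 1)]

random.seed(1)
K, beta, phi, sigma = 16, 0.3, 0.5, 1.0
for _ in range(50):
    # tau_k = 0 for k <= K/2; free in the upper half of the hypercube
    tau = [0] * (K // 2) + [random.randint(0, 1) for _ in range(K - K // 2)]
    tau_p = [0] * (K // 2) + [random.randint(0, 1) for _ in range(K - K // 2)]
    dist2 = sum(abs(a - b) ** 2
                for a, b in zip(theta(tau, K, beta, phi), theta(tau_p, K, beta, phi)))
    H = sum(a != b for a, b in zip(tau, tau_p))
    assert dist2 / (2 * sigma ** 2) <= phi ** 2 * H / (K ** (2 * beta) * sigma ** 2) + 1e-12
```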
For any two such vectors $\theta^\tau$ and $\theta^{\tau'}$, applying (\ref{eq:KLupperlownoise}), we have analogously to (\ref{eq:DKLHamminglownoise}) that \[D_{\KL}(p_{\theta^\tau} \| p_{\theta^{\tau'}}) \leq \frac{\|\theta^\tau-\theta^{\tau'}\|^2}{2\sigma^2} =\frac{1}{2\sigma^2}\sum_{k>K/2} k^{-2\beta} \big|e^{i\phi_k^\tau}-e^{i\phi_k^{\tau'}}\big|^2 \leq \frac{\phi^2}{K^{2\beta}\sigma^2} \cdot H(\tau,\tau'),\] the last inequality using $(K/2)^{-2\beta}<2K^{-2\beta}$ when $\beta<1/2$. Applying (\ref{eq:KLupperhighnoise-new}) with $\alpha=0$, where $\rlower^2=K^{-2\beta}$ and $R^2=CK^{1-2\beta}$, we have analogously to (\ref{eq:DKLHamminghighnoise}) that for constants $C,c>0$ depending only on $\beta$, \[D_{\KL}(p_{\theta^{\tau}} \| p_{\theta^{\tau'}}) \leq \frac{\phi^2}{A} \cdot H(\tau,\tau'), \qquad A:=\frac{c\sigma^6K^{4\beta}}{K^{1-2\beta}e^{CK^{1-2\beta}/2\sigma^2}}\] Lower bounding both $\rlower^2$ and $r_kr_k'$ by $K^{-2\beta}$ in Lemma \ref{lemma:lossbounds}, and applying (\ref{eq:finallossbound}), we have \[L(\theta^{\tau}, \theta^{\tau'}) \geq c' K^{-2\beta} \phi^2 H(\tau, \tau')\] for a universal constant $c'>0$. We can choose \[\phi=\min\left(\frac{1}{\sqrt{N}}\cdot \max(\sqrt{K^{2\beta}\sigma^2},\sqrt{A}),\frac{\pi}{3}\right)\] to ensure $1-D_{\TV}(P_\tau,P_{\tau'}) \geq 1-\sqrt{1/2}>0$ whenever $H(\tau,\tau')=1$, as before. Finally, by using Lemma \ref{lemma:assouad} with $m=\lfloor K/2 \rfloor$, we can conclude analogously to (\ref{eq:assouadapplication}) that \[\inf_{\hat{\theta}} \sup_{\theta^* \in \cP_\beta^*} \E_{\theta^*}[L(\theta^*,\hat{\theta})] \geq c''K^{1-2\beta}\phi^2,\] and applying the above choice of $\phi$ concludes the proof. \end{proof}} \paragraph{Acknowledgements} The authors would like to thank Yihong Wu for a helpful discussion about KL-divergence in mixture models. \paragraph{Funding} Z. Fan is supported in part by NSF DMS 1916198, DMS 2142476. H. H. 
Zhou is supported in part by NSF grants DMS 2112918, DMS 1918925, and NIH grant 1P50MH115716. \bibliographystyle{plainnat} \bibliography{bibliography} \end{document}
2205.01737v1
http://arxiv.org/abs/2205.01737v1
Surfaces and their Profile Curves
\documentclass{amsart} \usepackage{amsfonts,amssymb,graphicx} \usepackage{epsfig,amsmath,latexsym} \usepackage{amscd, amssymb, amsthm, amsmath,eucal, verbatim} \usepackage{epstopdf} \DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `basename #1 .tif`.png} \newcommand{\bbslash}{\backslash\backslash} \newcommand{\bproof}{\paragraph {\bf Proof.}} \newcommand{\ds}{\displaystyle} \newcommand{\bv}{{\bf v}} \newcommand{\bw}{{\bf w}} \newcommand{\R}{{\mathbb{R}}} \newcommand{\ZZ}{{\mathbb{Z}}} \newcommand{\sC}{{\mathcal{C}}} \newcommand{\sH}{{\mathbb{H}}} \newcommand{\sG}{{\mathcal{G}}} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{claim}{Claim} \newtheorem{example}{Example} \begin{document} \title{Surfaces and their Profile Curves} \date{\today} \author{Joel Hass} \address{University of California, Davis, California 95616} \email{[email protected]} \thanks{This work was carried out while the author was visiting the Oxford Mathematical Institute and ISTA, and was a Christensen Fellow at St. Catherine's College. Research partially supported by NSF grant DMS:FRG 1760485 and BSF grant 2018313} \subjclass{Primary 57M50; Secondary 57M25, 65D17, 68U05} \keywords{profile curve, contour generator, silhouette, apparent contour, knot, 3-manifold, knotted surface} \begin{abstract} This paper examines the relationship between the profile curves of a surface in $\R^3$ and the isotopy class of the surface. \end{abstract} \maketitle \section{Profile curves} The points on a smoothly embedded surface in $\R^3$ that have vertical tangent planes form a collection of {\em profile curves}. These curves are often the most prominent features of a surface image, as indicated in Figure~\ref{profileseg}, and play an important role in image analysis and surface reconstruction. 
\begin{figure}[htbp] \centering \includegraphics[width=1.5in]{jellyfish.jpg} \caption{Profile curves can be dominant features of a surface image.} \label{profileseg} \end{figure} In this paper we study the relationship between a profile curve and the surface on which it lies. Profile curves occur where the orthogonal projection of a smooth embedded surface to the $xy$-plane fails to be an immersion. Whitney determined the local singularities of maps from a surface to the plane \cite{Whitney}. Haefliger then determined when such a singularity set results from following an immersion of a surface to $\R^3$ by orthogonal projection to the plane \cite{Haefliger}, see also \cite{PlantingaVegter}. For an open, everywhere-dense subset of surface maps to the plane, the singularities of the projection consist of points where the map has either a fold along a smooth arc, or a cusp at an isolated point, as shown in Figure~\ref{fig:cusp}. Cusps are discussed further in Section~\ref{curvescusps}. We call surface projections with these properties {\em generic}. Any smoothly embedded (or immersed) surface in $\R^3$ can be perturbed slightly so that its projection to the $xy$-plane is generic. \begin{figure}[htbp] \centering \includegraphics[width=2.in]{cusp3.jpg} \caption{The local singularities of generic surface projections, consisting of folds and cusps, occur along smooth profile curves on the surface. These project to piecewise-smooth curves in the plane.} \label{fig:cusp} \end{figure} Profile curves play an important role in the imaging, visualization and reconstruction of an object in three dimensions. They are referred to by a variety of terms, including the {\em contour generator}, {\em critical set}, {\em rim}, and {\em outline} of a surface. In some contexts, such as with images of transparent objects, the profile curves are clearly visible while the rest of the surface is not clearly seen. 
Thus profile curves play an important role in image analysis and surface reconstruction. The {\em profile curve projection}, the projection to the plane of a profile curve, is variously called the {\em apparent contour}, {\em occluding contour}, {\em silhouette} or {\em extremal boundary} of the surface \cite{BoyerBerger, CipollaBlake, FukudaYamamoto, Giblin, Pignoni, PlantingaVegter, VaillantFaugeras}. The {\em reconstruction problem} seeks to reconstruct a surface in $\R^3$ (up to some equivalence) from the full collection of profile curve projections. Generally this cannot be done without additional data, such as a labeling indicating the number of sheets above each arc, and then the problem can be solved algorithmically \cite{Bellettini}. Profile curves also play a role in understanding surfaces in $\R^4$ which can be projected first to $\R^3$ and then to the plane \cite{Carter}. Our focus is rather different, centering on the relationship between a profile curve and the isotopy type of its generating surface. We work in two settings, topological and geometric. We first ask what surfaces can generate a profile curve with a given knot type. As an example, the knotted torus in Figure~\ref{trefoils} has two parallel profile curves, each a trefoil knot. This might suggest that knotted surfaces always generate knotted profile curves but this is far from true. In Section~\ref{knotted} we show that any genus-$g$ surface, no matter how convoluted its embedding in $\R^3$, is isotopic to a surface whose profile curves form an unlink with $g+1$ components. So any surface can be deformed so that it generates a collection of profile curves forming an unlink. \begin{figure}[htbp] \centering \includegraphics[width=1.5in]{trefoil3wprofiles} \includegraphics[width=1.5in]{trefoil3wotorus} \caption{A knotted torus in $\R^3$ with two profile curves.} \label{trefoils} \end{figure} We then ask if it is possible for unknotted surfaces to generate knotted profile curves. 
In Section~\ref{unknotted} we show that this is always possible, and can even be achieved with a surface of smallest possible genus. Theorem~\ref{writhe} states that given a smooth curve $\gamma$ embedded on a surface in $\R^3$, we can isotop the curve and the surface so that the curve becomes a profile curve. Moreover, this can be done without moving the curve $\gamma$ if and only if a simple obstruction vanishes. Thus there is no general connection between the isotopy class of a profile curve and that of a surface that generates it, but there is an obstruction to obtaining a particular geometric realization of a curve. In Section~\ref{curvescusps} we extend this result to curves whose projections have cusps. In Section~\ref{restrictions} we give some applications of these results. As a consequence of Theorem~\ref{writhe}, we show that the unknotted curve in Figure~\ref{figeight}, which can be embedded on a sphere and also on a knotted torus, cannot be a profile curve for either of these surfaces. \begin{figure}[htbp] \centering \includegraphics[width=1.in]{spherefig8.jpg} \caption{This curve lies on a sphere, but cannot be the profile curve of a sphere or a knotted torus projected to the viewing plane.} \label{figeight} \end{figure} In contrast, the knotted curve in Figure~\ref{profiletrefoil} can be realized as a profile curve for either a knotted or an unknotted torus. \begin{figure}[htbp] \centering \includegraphics[width=1.25in]{trefoil6} \caption{A knotted profile curve generated by an unknotted torus.} \label{profiletrefoil} \end{figure} Observations of this type are of interest when studying surfaces of unknown topology using images that reveal their profile curves. For example, if an image seen through a microscope exhibits a profile curve of the type shown in Figure~\ref{figeight}, then it follows that the generating surface is not homeomorphic to a 2-sphere. In addition, we can conclude that the generating surface is not a knotted torus. 
While we look at individual profile curves generated by a surface, for some purposes it makes sense to look at the entire collection of profile curves. Note that this full collection of profile curves is not by itself sufficient to determine the topology of the surface generating the curves. Figure~\ref{profilet} shows two tori that have identical profile curves, shown at right, when projected in the indicated direction. The first torus is knotted while the second is unknotted. The same pair of profile curves is also generated by a pair of disjoint 2-spheres. \begin{figure}[htbp] \centering \includegraphics[width=1.in]{knottorus} \hspace{.3in} \includegraphics[width=1.in]{knottorus1} \hspace{.3in} \includegraphics[width=1.in]{knottorus2} \caption{A knotted and unknotted torus sharing the same pair of profile curves, shown at right.} \label{profilet} \end{figure} To use profile curves to reconstruct surfaces in $\R^3$ requires additional assumptions, or additional geometric or topological information. Algorithms for reconstructing surfaces in $\R^3$ from the planar projections of their profile curves are discussed in \cite{Bellettini}, using labels on arcs of the projection curves that indicate surface multiplicities. See also \cite{Hacon, Hacon2, MenascoNichols}. \section{Any surface generates a trivial profile curve link after isotopy} \label{knotted} We say that a genus-$g$ surface is {\em standardly embedded} if it is the boundary of a tubular neighborhood of a planar graph in the $xy$-plane. The profile curve projections form an unlink consisting of $g+1$ disjoint embedded loops. A surface in $\R^3$ is {\em unknotted} if it is isotopic to a standardly embedded surface. Figure~\ref{fig:profile1} shows a standardly embedded torus and the projections of its profile curves. The profile curves of a standardly embedded genus-$g$ surface form an unlink with $g+1$ components. 
\begin{figure}[htbp] \centering \includegraphics[width=.5in]{torus2c} \hspace{.1in} \includegraphics[width=.5in]{2curve} \caption{Profile curves of a standardly embedded torus and their planar projections.} \label{fig:profile1} \end{figure} Our first result shows that, after an isotopy, any surface can be positioned so that it generates a collection of profile curves that form an unlink with $g+1$ components. The isotopy class of the full collection of profile curves carries no information about the knotting of its generating surface. \begin{theorem} An embedded genus-$g$ surface $F \subset \R^3$ can be isotoped so that its profile curves form an unlink with $g+1$ components. \end{theorem} \begin{proof} If $F$ is a 2-sphere then it is isotopic to a round sphere and the result follows, so we assume the genus of $F$ is positive. The Loop Theorem implies that $F$ is compressible \cite{Papakyriakopoulos}. It follows that a genus $g$ surface $F$ can be compressed repeatedly until it is reduced to a collection of 2-spheres. Reversing this process, a surface isotopic to $F$ can be constructed by starting with a collection of disjoint round 2-spheres, each located so that its equator is a profile curve lying in the $xy$-plane, and then successively adding thin tubes to create a surface $F_1$. Each of the added tubes is a neighborhood of a near horizontal arc that starts and ends on the equator of a 2-sphere. These tubes can lie on either side of a 2-sphere, and tubes can run through previously constructed tubes. The resulting surface $F_1$ is isotopic to $F$, and its profile curves are formed from arcs on the equators of the 2-spheres joined to two arcs for each tube, as in Figure~\ref{standard}. \begin{figure}[htbp] \centering \includegraphics[width=2in]{bands.jpeg} \caption{Any surface can be isotoped so that its profile curves are equators of spheres banded together along near horizontal bands. 
The curves run along tubes that can start on either side of the 2-spheres, and one tube can run through a second tube.} \label{standard} \end{figure} We claim that at each stage of the construction, a connected component of genus $g$ has $g+1$ profile curves, and that each tube has two profile curves running over it, one of which is {\em special}, in that it runs over no other tube. Initially there is a single component, a sphere with one equatorial profile curve, so the number of profile curves equals $1$, with $g=0$ being the genus of the component, and our claim holds. Adding a disjoint 2-sphere creates a new component, and each component continues to satisfy the claim. If we add a tube that starts and ends on the same component, then an isotopy slides the two attaching points of the arc defining the tube so that they are adjacent on the equator of a sphere and the arc is nearly horizontal. The new tube then generates a new profile curve that runs once over it, and is disjoint from other tubes. Moreover, previous special profile curves are not changed by this tube addition. The genus $g$ of the component with the added tube is increased by one and the number of its profile curves increases by one, to $g+2$, so the claim continues to hold. Finally we consider the effect of adding a tube connecting two distinct components of genus $n_1$ and $n_2$. The tube is a neighborhood of an arc running from the equator of one sphere to that of a second sphere, away from the special profile curves. This tube addition joins two profile curves and so decreases the number of profile curves by one, while the genus of the two components adds. The resulting new connected component has $(n_1+1) + (n_2+1) - 1 = n_1+n_2+1$ profile curves and genus $n_1+n_2$. When all tubes are added, the profile curves of the final connected surface form a link with $g+1$ components, and this link contains $g$ special profile curves, each running once over a tube. 
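The bookkeeping behind this induction can be sketched as a toy simulation that tracks each component as a pair (genus, number of profile curves) under the three moves: new sphere, tube on one component, and tube joining two components. This is our own illustration of the count $g+1$, not the geometric argument itself:

```python
import random

def add_sphere(comps):
    comps.append((0, 1))                      # genus 0, one equatorial profile curve

def add_tube_same(comps, i):
    g, c = comps[i]
    comps[i] = (g + 1, c + 1)                 # one new profile curve over the tube

def add_tube_join(comps, i, j):
    (g1, c1), (g2, c2) = comps[i], comps[j]
    merged = (g1 + g2, c1 + c2 - 1)           # joining merges two profile curves
    comps[:] = [comp for idx, comp in enumerate(comps) if idx not in (i, j)] + [merged]

random.seed(2)
comps = [(0, 1)]
for _ in range(200):
    op = random.choice(["sphere", "same", "join"])
    if op == "sphere":
        add_sphere(comps)
    elif op == "same":
        add_tube_same(comps, random.randrange(len(comps)))
    elif len(comps) >= 2:
        i, j = random.sample(range(len(comps)), 2)
        add_tube_join(comps, i, j)
    # invariant: a component of genus g carries g + 1 profile curves
    assert all(c == g + 1 for g, c in comps)
```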
We now describe an isotopy that transforms the profile curves of $F_1$ to an unlink. This isotopy is supported in a neighborhood of a meridian of the $g$ tubes that have special profile curves running over them. The isotopy starts by taking meridian curves for each of the $g$ tubes in turn, and rotating the tube near the meridian, along with its interior and any tubes going through it, to create a horizontal neck. The resulting surface has a new profile curve going around this neck, as in Figure~\ref{move}. \begin{figure}[htbp] \centering \includegraphics[width=1.5in]{simplify.jpg}\hspace{.7in} \includegraphics[width=1.8in]{simplify2.jpg} \caption{A local isotopy and its effect on a surface's profile curves.} \label{move} \end{figure} During this isotopy two profile curves going over the tube have been combined and one new profile curve has been created, so the total number of profile curves remains at $g+1$. This isotopy is repeated in turn for each of the $g$ tubes. If other tubes go through the interior of this tube, they can be positioned so that they remain near horizontal after the isotopy and so that the profile curves that run along them are moved by an isotopy. Repeating this process for the selected $g$ tubes results in a surface isotopic to $F$ whose profile curves form a link with $g+1$ components. To see that the resulting link is trivial, start with an innermost tube. The profile curve arcs on this tube can be isotoped inside the tube till they lie on the 2-sphere to which the tube is attached. This can be done successively on a sequence of tubes which are innermost relative to the link of profile curves, until the entire collection of profile curves has been isotoped to a union of disjoint curves lying on spheres. These form an unlink. \end{proof} \section{Curve to Profile} \label{unknotted} In this section we determine when an isotopy of a surface can turn a curve on the surface into a profile curve. 
An example of this question is whether a curve representing a trefoil knot can be a profile curve for an unknotted torus. We fix some terminology. Let $F \subset \R^3$ be a smooth embedded orientable surface with normal unit vector field $\nu_F$. If $\gamma$ is an oriented regular curve on $F$, then the {\em surface normal framing} of $\gamma$ is the restriction of $\nu_F$ to $\gamma$. This framing can be used to produce a push-off curve in $\R^3$, $ \gamma_+ = \gamma + \epsilon \nu_F$ for $\epsilon>0$ a small constant. Define the {\em surface linking number} $\lambda (\gamma, F)$ of $\gamma$ in $F$ to be the linking number of $\gamma_+$ and $\gamma$. The value of $\lambda (\gamma, F)$ is independent of the orientation of $F$ or $\gamma$, and of $\epsilon$ for $\epsilon$ sufficiently small. Moreover $\lambda (\gamma, F)$ is preserved by an isotopy of $F$, since linking number is preserved by isotopy. Now consider an oriented smooth curve $\gamma$ in $\R^3$ with a finite number of vertical tangencies, at which the curvature and torsion are non-zero. Profile curves of generic surfaces satisfy these properties. The {\em blackboard framing } of $\gamma$ is the unit horizontal vector field $\beta$ along $\gamma$ that is normal to $\dot \gamma$. It is obtained by lifting to $\gamma$ a continuous unit planar vector field normal to the projection $\pi(\gamma)$ in the $xy$-plane. Note that at a cusp the projection $\pi_*(\beta) $ flips from one side of $\pi(\gamma)$ to the other, as in Figure~\ref{pushoff}. \begin{figure}[htbp] \centering \includegraphics[width=.75in]{cuspproj1.jpg} \caption{The projection of a curve $\gamma$ near a cusp, and a planar vector field that lifts to the blackboard framing.} \label{pushoff} \end{figure} Let $\gamma_+ = \gamma + \epsilon \beta$, for $\epsilon >0$, be a push-off of $\gamma$ relative to this framing. 
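Both framed push-offs above lead to linking numbers of $\gamma$ with a companion curve, and such linking numbers can be evaluated numerically with the Gauss linking integral. The following discretization is an illustrative sketch (helper names are ours), tested on a Hopf link, where the answer is $\pm 1$:

```python
import math

def linking_number(c1, c2, n=300):
    """Approximate the Gauss linking integral of two disjoint closed curves.
    c1, c2: functions [0,1) -> R^3 returning (x, y, z) tuples."""
    p = [c1(i / n) for i in range(n)]
    q = [c2(j / n) for j in range(n)]
    total = 0.0
    for i in range(n):
        a = p[i]
        da = [p[(i + 1) % n][k] - a[k] for k in range(3)]
        for j in range(n):
            b = q[j]
            db = [q[(j + 1) % n][k] - b[k] for k in range(3)]
            r = [a[k] - b[k] for k in range(3)]
            cross = [da[1] * db[2] - da[2] * db[1],
                     da[2] * db[0] - da[0] * db[2],
                     da[0] * db[1] - da[1] * db[0]]
            total += sum(cross[k] * r[k] for k in range(3)) / math.dist(a, b) ** 3
    return total / (4 * math.pi)

# Hopf link: unit circle in the xy-plane, and a unit circle in the xz-plane
# centered at (1, 0, 0); the two circles have linking number +1 or -1.
circle1 = lambda t: (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t), 0.0)
circle2 = lambda t: (1 + math.cos(2 * math.pi * t), 0.0, math.sin(2 * math.pi * t))
```

Taking the companion curve to be $\gamma_+$ for the surface normal framing or the blackboard framing would, in the same way, approximate $\lambda(\gamma,F)$ or the writhe.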
The linking number of $\gamma_+$ and $\gamma$ is an integer $w(\gamma)$, called the {\em writhe}, and is independent of $\epsilon$ for $\epsilon$ sufficiently small. The writhe is not an isotopy invariant. We now show that starting with a surface $F_0$ and a smooth, embedded curve $\gamma \subset F_0$, there exists an isotopy, fixing $\gamma$, from $F_0$ to a surface $F_1$ for which $\gamma$ is a profile curve if and only if the writhe of $\gamma$ is equal to $\lambda (\gamma, F)$. We first establish this for curves whose projections have no cusps. \begin{theorem} \label{writhe} Let $\gamma$ be a smooth embedded curve on an embedded surface $F_0 \subset \R^3$ with $\dot \gamma$ not vertical. Then there is an isotopy from $F_0$ to a surface $F_1$ for which $\gamma$ is a profile curve, with $\gamma$ left fixed by this isotopy, if and only if $w(\gamma) = \lambda (\gamma, F_0)$. \end{theorem} \begin{proof} Suppose that there is an isotopy from $F_0$ to a surface $F_1$ that fixes $\gamma$ and that $\gamma$ is a profile curve of $F_1$. With appropriate orientations, the blackboard framing of $\gamma$ and the surface normal framing exactly coincide along a profile curve, with each a horizontal unit vector field along $\gamma$ normal to $F_1$. It follows that the writhe of $\gamma$ is equal to $\lambda (\gamma, F_1)$. Moreover an isotopy from $F_0$ to $F_1$ that fixes $\gamma$ preserves both invariants, so that $w(\gamma) = \lambda (\gamma, F_0)$. Now assume that $w(\gamma) = \lambda (\gamma, F_0)$. We construct an isotopy from $F_0$ to $F_1$ with support on an annular neighborhood $A$ of $\gamma$ that turns $\gamma$ into a profile curve. The annulus can be parametrized as $\{ A(t,s) , ~ 0 \le t \le 1, ~ -1 \le s \le 1 \}$ with $A(t,0) = \gamma(t)$ and $A(0,s) = A(1,s)$. The isotopy twists a neighborhood of $\gamma$ to make the surface normals along $\gamma$ horizontal, matching the blackboard framing, while leaving $\gamma$ and $\partial A$ fixed. 
Moreover in $F_1$ the transverse arc $\alpha_t(s) = \{ A(t, s), ~ -1 \le s \le 1 \}$ satisfies $\ddot \alpha_t(0) \cdot \nu_F >0$ for $0 \le t \le 1$, so that $\alpha_t(s)$ is always convex in the same direction relative to the surface normal. After this isotopy, as $\gamma$ is traversed till it returns to its starting point in $F_1$, the annulus around $\gamma$ is rotated some number of times relative to its initial position, with the total number of rotations equal to the difference between the writhe and the surface linking number. When these two agree, $A(0,s)$ matches up with $A(1,s)$ and the boundary of the twisted annular neighborhood of $\gamma$ matches up with the rest of the surface. The isotopy fixes $\gamma$ throughout, and $\gamma$ ends up being a profile curve of $F_1$. \end{proof} The twisting of the annular neighborhood of $\gamma$ taking place in Theorem~\ref{writhe} is illustrated in Figure~\ref{twist} for a (1,1) curve on an unknotted torus. In this example the writhe is zero and the surface linking number is one, so the conditions of the Theorem do not apply. The initial arc $A(0,s)$ does not match up with the final arc $A(1,s)$, and the twisted annulus has boundary that does not match up with the torus. The torus cannot be isotoped so that this (1,1) curve is a profile curve. However, by isotoping the (1,1) curve on the torus to change its writhe to equal one, we can arrange for Theorem~\ref{writhe} to apply. \begin{figure}[htbp] \centering \includegraphics[width=1in]{torus.jpeg} \hspace{1in} \includegraphics[width=1in]{twisttorus.jpeg} \\ \includegraphics[width=0.8in]{circ1.jpeg} \includegraphics[width=0.7in]{circ2.jpeg} \includegraphics[width=0.85in]{circ3.jpeg} \includegraphics[width=0.7in]{circ4.jpeg} \includegraphics[width=0.9in]{circ5.jpeg} \caption{A (1,1) curve on a torus has an annular neighborhood along it twisted so that its tangent plane is always vertical. 
Five cross-sections indicate twisting of an annular neighborhood to turn the curve into a profile curve. The last does not match the first, due to the addition of a full twist while traversing the curve. An isotopy of the curve that changes its writhe, as in the second torus, resolves this issue.} \label{twist} \end{figure} \begin{corollary} \label{afterisotopy} Let $\gamma_0$ be a smooth embedded curve on an embedded surface $F_0 \subset \R^3$. Then $\gamma_0$ is isotopic in $F_0$ to a curve $\gamma_1$ which is a profile curve of a surface $F_1$ that is smoothly isotopic to $F_0$. \end{corollary} \begin{proof} Suppose $\gamma_0$ has surface linking number $\lambda$ and writhe $w$. Let $\gamma_1$ be obtained from $\gamma_0$ by an isotopy that performs $\lambda-w$ positive Reidemeister I moves on $\gamma_0$, if $\lambda \ge w$, or $w-\lambda$ negative Reidemeister I moves if $w > \lambda$, as in Figure~\ref{twist}. Such an isotopy always exists, and does not change the knot type of $\gamma_0$, but the resulting curve $\gamma_1$ has writhe equal to its surface linking number. Theorem~\ref{writhe} now applies, and there is an isotopy of the surface that fixes $\gamma_1$ and ends with $\gamma_1$ a profile curve. \end{proof} The {\em surface genus} of a curve $\gamma$ is the smallest genus of an unknotted surface in $\R^3$ that contains a curve isotopic to $\gamma$. \begin{corollary} A knot $K$ with surface genus $g(K)$ can be realized as a profile curve of a standardly embedded surface $F \subset \R^3$ having genus $g(K)$. \end{corollary} \begin{example} \label{multi} Any torus knot is realized as a profile curve generated by an unknotted torus. \end{example} \section{Curves with cusps} \label{curvescusps} In this section we extend Theorem~\ref{writhe} to curves whose projections have cusps. We start by reviewing the definition of a cusp. 
It follows from the work of Whitney and Haefliger that a generic smooth surface $F \subset \R^3$ has profile curves that are smooth and regular, so that the tangent vectors $\dot \gamma$ are never zero. Moreover, $\dot \gamma$ is vertical at only a finite number of points, and these project to isolated cusps \cite{Whitney, Haefliger}, see also \cite{Arnold}, \cite{PlantingaVegter}. The local behavior of the projection of $F$ near $\gamma \subset \R^3$ is modeled by two sheets that fold along the curve, except at the points where the profile curve is vertical. At a vertical tangency the projection of the surface has an ordinary cusp singularity, meaning that $\gamma$ is modeled by the parametric curve $\gamma(t) = (t^2, t^3, t)$ with projection to the $xy$-plane given by $\pi(\gamma)(t) = (t^2, t^3)$. This curve has an ordinary cusp singularity at $t=0$. When we say that $\gamma$ projects to an ordinary cusp at a point where $\dot \gamma$ is vertical, we mean that after a smooth change of coordinates it has this form. We will drop the term {\em ordinary}, since we only consider this type of cusp. The notions of writhe and surface linking number extend without change to curves whose projections have a finite number of cusps. We say that the {\em chirality} of a curve at a cusp is {\em right-handed} if the projection curve takes a right turn when proceeding through the cusp, and {\em left-handed} otherwise, as in Figure~\ref{cuspparity}. This notion is based on the orientation of the plane, not that of a surface $F$. \begin{figure}[htbp] \centering \includegraphics[width=1.25in]{cuspparity.jpg} \caption{Right-handed and left-handed cusps in a profile curve projection.} \label{cuspparity} \end{figure} We now look at a smooth curve in $\R^3$ whose projection has a finite number of cusps. We consider the number and the chirality of the cusps that can occur if the curve is a profile curve generated by some surface.
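As a quick sanity check on the model cusp, one can verify numerically that the curve $\gamma(t)=(t^2,t^3,t)$ has a vertical tangent only at $t=0$, and that the velocity of its projection $(t^2,t^3)$ vanishes exactly there. The following Python sketch is our illustration, not part of the paper:

```python
def gamma_dot(t):
    # Derivative of the model curve gamma(t) = (t^2, t^3, t).
    return (2*t, 3*t**2, 1)

# The tangent is vertical (parallel to the z-axis) exactly when both
# horizontal components vanish -- here only at t = 0.
for t in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    horizontal = gamma_dot(t)[:2]
    assert (horizontal == (0.0, 0.0)) == (t == 0.0)

def proj_dot(t):
    # Velocity of the projection pi(gamma)(t) = (t^2, t^3).
    return (2*t, 3*t**2)

# The projected velocity vanishes at t = 0: the ordinary cusp of the
# plane curve (t^2, t^3).
assert proj_dot(0.0) == (0.0, 0.0)
assert proj_dot(0.5) != (0.0, 0.0)
```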
Haefliger considered a map from a surface to the plane and a planar curve that is a component of the image of the singular points of this map. He showed that such a map can be factored through an immersion of the surface into $\R^3$ followed by projection to the plane if the curve has an annular neighborhood on the surface and has an even number of cusps, or if it is orientation reversing on the surface and has an odd number of cusps \cite{Haefliger}. The following lemma considers a related but somewhat different scenario. We start with a curve in $\R^3$, not initially associated to any surface map. We show that for this curve to be the profile curve of a projection of some surface requires the conditions specified by Haefliger and an additional condition involving cusp chirality. \begin{lemma} \label{chirality} A smooth curve $\gamma \subset \R^3$ is a profile curve generated by an annular neighborhood $A$ of $\gamma$ if and only if $\gamma$ has an even number of points that project to cusps, and each of these projects to a cusp with the same chirality. Similarly, $\gamma$ is a profile curve generated by a M\"obius strip neighborhood of $\gamma$ if and only if it has an odd number of points that project to cusps, and each of these projects to a cusp with the same chirality. \end{lemma} \begin{proof} Suppose $\gamma$ is a profile curve with an annular neighborhood $A$. There are transverse arcs $\alpha_t \subset A$ normal to $\gamma$ at $\gamma(t)$ for $\{ t_0 - \delta \le t \le t_0 + \delta\}$ a neighborhood of the cusp at $\gamma(t_0)$. Each transverse arc $\alpha_t$ has non-zero curvature except for $\alpha_{t_0}$, which passes through the cusp vertically. The planar projection of the curvature vector $\pi_*(\ddot \alpha_t)$ points towards the cusp (zero-angle) side of $\pi(\gamma)$. See Figure~\ref{forbidden}.
\begin{figure}[htbp] \centering \includegraphics[width=.5in] {crosssections1.jpg} \hspace{.33in} \includegraphics[width=1.5in]{forbidden.jpg} \hspace{.15in} \includegraphics[width=1.5in]{forbiddencusppair} \caption{A cross section of a surface $F$ indicating the curvature vector of arcs transverse to $\gamma$ near a cusp. The projected curvature vector points towards the zero-angle side of the curve near a cusp. This forces all cusps in a profile curve projection to have the same chirality. Two adjacent cusps with opposite chirality, as shown on the curve at right, cannot occur in the projection of a profile curve.} \label{forbidden} \end{figure} The projection of the curvature vector to $\alpha$ flips direction relative to $\gamma$ when passing through the cusp, so that in both the incoming and outgoing arcs, $\pi_*(\ddot \alpha_t)$ points towards the cusp side of $\pi(\gamma)$. Since this flip occurs only at a cusp and the curvature matches up after a full traversal of $\gamma$, there must be an even number of cusps on each profile curve when $\gamma$ is orientation preserving, and an odd number otherwise. Two successive cusps must have the same chirality, as otherwise the projected curvature vector of $\alpha_t$ would point away from the cusp side as it approached one of them, as indicated in the rightmost configuration in Figure~\ref{forbidden}. It follows that all cusps on the projection of $\gamma$ have the same chirality. \end{proof} This has implications for image analysis. \begin{example} \label{multi2} If a curve in the plane contains a right-handed and a left-handed cusp, then it is not the projection of a profile curve generated by an immersed surface in $\R^3$. \end{example} \begin{proof} Lemma~\ref{chirality} implies that the chirality of all cusps must be the same for the projection of a profile curve. \end{proof} An example of such a curve is shown in Figure~\ref{forbidden}. We now extend Theorem~\ref{writhe} to curves with cusps.
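The turn direction at a cusp can be estimated from sample points of the projection. In the sketch below (our illustration; the sign convention for left and right turns is ours), the model cusp $(t^2,t^3)$ and its mirror image turn in opposite directions, so by Lemma~\ref{chirality} they cannot both occur on the projection of a single profile curve:

```python
def turn_sign(p_before, p_cusp, p_after):
    # Sign of the z-component of the cross product of the two secant
    # vectors; +1 is a left turn, -1 a right turn (our convention).
    v1 = (p_cusp[0] - p_before[0], p_cusp[1] - p_before[1])
    v2 = (p_after[0] - p_cusp[0], p_after[1] - p_cusp[1])
    cross = v1[0]*v2[1] - v1[1]*v2[0]
    return (cross > 0) - (cross < 0)

eps = 0.1
cusp = lambda t: (t**2, t**3)       # the model cusp
mirror = lambda t: (t**2, -t**3)    # its mirror image

s1 = turn_sign(cusp(-eps), cusp(0.0), cusp(eps))
s2 = turn_sign(mirror(-eps), mirror(0.0), mirror(eps))
# The two cusps turn in opposite directions, i.e. opposite chirality.
assert s1 == -s2 and s1 != 0
```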
\begin{theorem} \label{writhecusps} Let $\gamma$ be a piecewise-smooth embedded curve on an embedded surface $F_0 \subset \R^3$ that is smooth and non-vertical except at an even number of points which project to cusps, all of the same chirality. Then the following two statements are equivalent. \begin{enumerate} \item $F_0$ is isotopic to a surface $F_1$ for which $\gamma$ is a profile curve, with $\gamma$ fixed by this isotopy. \item The writhe of $\gamma$ is equal to its surface linking number in $F_0$. \end{enumerate} \end{theorem} \begin{proof} The proof is similar to that of Theorem~\ref{writhe}. When isotoping the surface along $\gamma$ to turn it into a profile curve, it may be necessary to twist it above a cusp, but this will always be through an integer multiple of $\pi$ since the tangent plane of $F_0$ is already vertical at a point that projects to a cusp, so the local structure will be preserved. The total number of twists of the surface relative to the blackboard framing will be zero when $w(\gamma) = \lambda (\gamma, F_0)$. The convexity of the surface along the transverse arcs $\alpha_t$ will match at $t=0$ and $t=1$ because $F_0$ is oriented, and so there are an even number of points projecting to cusps along $\gamma$, where the convexity direction flips. \end{proof} \section{Geometric obstructions} \label{restrictions} In this section we give examples showing restrictions imposed on a surface $F$ by the condition that it contains a given curve $\gamma$ as a profile curve. \begin{example} The writhe 1 curve $\gamma$ shown in Figure~\ref{figeight} cannot be a profile curve generated by a sphere. \end{example} \begin{proof} The surface linking number determined by the sphere for any curve lying on it is zero, since the curve bounds a disk on the sphere disjoint from a normal push-off. Thus the profile curve of a sphere cannot contain a curve whose writhe is non-zero.
\end{proof} An embedded planar loop can be a profile curve generated by either a knotted or unknotted torus. In contrast, a curve with a single crossing can only be generated as a profile curve by an unknotted torus. \begin{example} The curve $\gamma$ in Figure~\ref{figeight} cannot be a profile curve generated by a knotted torus. \end{example} \begin{proof} Suppose that the curve $\gamma$ is a profile curve. It does not bound a disk on the torus, since it has writhe one and a curve bounding a disk on a torus has surface linking number zero. Similarly, $\gamma$ cannot represent a torus meridian, which also has surface linking number equal to zero. Thus $\gamma$ must represent a homotopically nontrivial, non-meridional curve on a possibly knotted torus. Since $\gamma$ is unknotted, this is possible only if the torus is unknotted and $\gamma$ is either a $(p,1)$ or a $(1,q)$ curve. \end{proof} The restrictions on profile curves due to Theorem~\ref{writhe} can apply even when knowing the writhe only up to sign. This situation occurs in photographic images of surfaces, where it is often unclear which profile curve arc is in front and which in back when their projections cross. \begin{example} The immersed planar figure-eight curve, obtained by ignoring the over- and under-crossing information in Figure~\ref{figeight}, cannot be the projection to the plane of a profile curve generated by a sphere or a knotted torus. \end{example} \begin{proof} The planar figure-eight curve is a projection of a curve whose writhe is either $+1$ or $-1$. Theorem~\ref{writhe} rules out either possibility for a profile curve on the stated surfaces. \end{proof} \begin{example} If a curve $\gamma$ isotopic to a trefoil is a profile curve for a torus and if the number of crossings and cusps in the projection of $\gamma$ is less than six, then the torus is knotted.
\end{example} \begin{proof} There are at least three crossings, so the condition on the number of crossings and cusps implies that the writhe of $\gamma$ is non-zero (mod $6$). The surface linking number of a trefoil on the surface of an unknotted torus is zero (mod $6$). Since these are not equal, Theorem~\ref{writhe} implies that $\gamma$ cannot be a profile curve for an unknotted torus. \end{proof} \begin{thebibliography}{HHH} \bibitem{Adams} C. C. Adams, {\it The Knot Book,} New York: W.H. Freeman (1994). \bibitem{Arnold} Arnold, V.I., Gusein-Zade, S.M., Varchenko, A.N., {\it Singularities of Differentiable Maps, Volume I,} Birkh\"auser, Boston (1985). \bibitem{Bellettini} G. Bellettini, V. Beorchia, M. Paolini, F. Pasquarelli, {\it Shape Reconstruction from Apparent Contours: Theory and Algorithms,} Springer Berlin Heidelberg, February 2015. \bibitem{BoyerBerger} E. Boyer, M. O. Berger, {\it 3d surface reconstruction using occluding contours,} Int. Journal of Computer Vision 22 (3) (1997) 219--233. \bibitem{Carter} J.S. Carter, S. Kamada, M. Saito, {\it Surfaces in 4-Space,} Encyclopedia of Mathematical Sciences, vol. 142. Springer, New York (2004). \bibitem{CipollaBlake} R. Cipolla, A. Blake, {\it Surface shape from the deformation of apparent contours,} Int. Journal of Computer Vision 9 (2) (1992) 83--112. \bibitem{FukudaYamamoto} T. Fukuda and T. Yamamoto, {\it Apparent contours of stable maps into the sphere}, Journal of Singularities, 3 (2011), 113--125. \bibitem{Giblin} P. Giblin, {\it Apparent contours: an outline,} Phil. Trans. R. Soc. Lond. A (1998) 356, 1087--1102. \bibitem{GolubitskyGuillemin} M. Golubitsky and V. Guillemin, {\it Stable Mappings and Their Singularities}, (1973) Graduate Texts in Mathematics 14. Springer-Verlag, Berlin. \bibitem{Hacon} D. Hacon, C. Mendes de Jesus, and M. C. Romero Fuster, {\it Fold Maps from the Sphere to the Plane}, Experimental Mathematics, Vol. 15, (2006), No. 4, 491--497. \bibitem{Hacon2} D. Hacon, C.
Mendes de Jesus and M. C. Romero Fuster, {\it Topological invariants of stable maps from a surface to the plane from a global viewpoint}. Real and Complex Singularities. Informa UK Limited, 2003. \bibitem{Haefliger} A. Haefliger, {\it Quelques remarques sur les applications differentiables d'une surface dans le plan}, Annales de l'Institut Fourier, Tome 10 (1960), 47--60. \bibitem{MenascoNichols} W. Menasco and M. Nichols, {\it Surface Embeddings in $\R^2 \times \R$}, in preparation. \bibitem{Papakyriakopoulos} C. D. Papakyriakopoulos, {\it On Dehn's lemma and the asphericity of knots}, Ann. of Math. (2) 66 (1957), 1--26. \bibitem{Pignoni} R. Pignoni, {\it On surfaces and their contours}, Manuscripta Mathematica, 72, (1991) 223--249. \bibitem{PlantingaVegter} S. Plantinga and G. Vegter, {\it Contour generators of evolving implicit surfaces,} Proceedings of the Eighth ACM Symposium on Solid Modeling and Applications 2003, Seattle, Washington, USA, June 16--20, 2003. \bibitem{VaillantFaugeras} R. Vaillant and O. Faugeras, {\it Using Extremal Boundaries for 3-D Object Modeling,} IEEE Trans. Pattern Anal. Mach. Intell. 14 (1992), 157--173. \bibitem{Whitney} H. Whitney, {\it On singularities of mappings of euclidean spaces. I. Mappings of the plane into the plane}, Ann. of Math. (2) 62 (1955), 374--410. \end{thebibliography} \end{document}
2205.01734v1
http://arxiv.org/abs/2205.01734v1
Squared distance matrices of trees with matrix weights
\documentclass[12pt]{article} \usepackage[utf8]{inputenc} \usepackage{graphicx} \usepackage{amsmath} \usepackage{fullpage} \usepackage{mathtools} \usepackage{csquotes} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsthm} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}{Lemma}[section] \newtheorem{prop}{Proposition}[section] \newtheorem{ex}{Example}[section] \newtheorem{dfn}{Definition}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{rmk}{Remark}[section] \title{Squared distance matrices of trees with matrix weights} \author{Iswar Mahato\thanks{Department of Mathematics, Indian Institute of Technology Kharagpur, Kharagpur 721302, India. Email: [email protected]} \and M. Rajesh Kannan\thanks{Department of Mathematics, Indian Institute of Technology Kharagpur, Kharagpur 721302, India. Email: [email protected], [email protected] }} \date{\today} \begin{document} \maketitle \baselineskip=0.25in \begin{abstract} Let $T$ be a tree on $n$ vertices whose edge weights are positive definite matrices of order $s$. The squared distance matrix of $T$, denoted by $\Delta$, is the $ns \times ns$ block matrix with $\Delta_{ij}=d(i,j)^2$, where $d(i,j)$ is the sum of the weights of the edges in the unique $(i,j)$-path. In this article, we obtain a formula for the determinant of $\Delta$ and find ${\Delta}^{-1}$ under some conditions. \end{abstract} {\bf AMS Subject Classification(2010):} 05C22, 05C50. \textbf{Keywords. } Tree, Distance matrix, Squared distance matrix, Matrix weight, Determinant, Inverse. \section{Introduction} Let $T$ be a tree with vertex set $V(T)=\{1,\hdots,n\}$ and edge set $E(T)=\{e_1,\hdots,e_{n-1}\}$. If two vertices $i$ and $j$ are adjacent, we write $i\sim j$. Let us assign an orientation to each edge of $T$. 
Two edges $e_i=(p,q)$ and $e_j=(r,s)$ of $T$ are \textit{similarly oriented}, denoted by $e_i\Rightarrow e_j$, if $d(p,r)=d(q,s)$; otherwise they are \textit{oppositely oriented}, denoted by $e_i \rightleftharpoons e_j$. The \textit{edge orientation matrix} $H=(h_{ij})$ of $T$ is the $(n-1)\times (n-1)$ matrix whose rows and columns are indexed by the edges of $T$ and the entries are defined \cite{bapat2013product} as $$h_{ij}= \begin{cases} \text{$1$} & \quad\text{if $e_i\Rightarrow e_j$, $i \neq j$};\\ \text{$-1$} & \quad\text{if $e_i \rightleftharpoons e_j$, $i \neq j$};\\ \text{$1$} & \quad\text{if $i=j$.} \end{cases}$$ The \textit{incidence matrix} $Q$ of $T$ is the $n \times (n-1)$ matrix with its rows indexed by $V(T)$ and the columns indexed by $E(T)$. The entry corresponding to the row $i$ and column $e_j$ of $Q$ is $1$ if $e_j$ originates at $i$, $-1$ if $e_j$ terminates at $i$, and zero if $e_j$ and $i$ are not incident. We assume that the same orientation is used while defining the edge orientation matrix $H$ and the incidence matrix $Q$. The \emph{distance} between the vertices $i,j\in V(T)$, denoted by $d(i,j)$, is the length of the shortest path between them in $T$. The \emph{distance matrix} of $T$, denoted by $D(T)$, is the $n \times n$ matrix whose rows and columns are indexed by the vertices of $T$ and the entries are defined as follows: $D(T)=(d_{ij})$, where $d_{ij}=d(i,j)$. In \cite{bapat2013product}, the authors introduced the notion of the \emph{squared distance matrix} $\Delta$, which is defined to be the Hadamard product $D\circ D$, that is, the $(i,j)$-th element of $\Delta$ is $d_{ij}^2$. For the unweighted tree $T$, the determinant of $\Delta$ is obtained in \cite{bapat2013product}, while the inverse and the inertia of $\Delta$ are considered in \cite{bapat2016squared}.
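To make the definitions concrete, the following Python sketch (our illustration, assuming \texttt{numpy}; the star tree and orientations are hypothetical) builds $D$, $\Delta$, $H$ and $Q$ for the directed star with centre $2$ and edges $e_1=(1,2)$, $e_2=(2,3)$, $e_3=(2,4)$, and checks the identities $\det H=2^{n-2}\prod_{i}\tau_i$ and $H^{-1}=\frac{1}{2}Q^{\prime}\hat{\tau}Q$ (with $\tau_i=2-\delta_i$) that are recalled in Section 2:

```python
import numpy as np

# Directed star: vertices 1..4, centre 2, edges e1=(1,2), e2=(2,3), e3=(2,4).
# Graph distances in the underlying unweighted tree.
D = np.array([[0, 1, 2, 2],
              [1, 0, 1, 1],
              [2, 1, 0, 2],
              [2, 1, 2, 0]], dtype=float)
Delta = D ** 2          # squared distance matrix: Hadamard square of D

# Edge orientation matrix: e_i => e_j iff d(p, r) == d(q, s).
edges = [(1, 2), (2, 3), (2, 4)]
H = np.array([[1.0 if i == j or D[p-1, r-1] == D[q-1, s-1] else -1.0
               for j, (r, s) in enumerate(edges)]
              for i, (p, q) in enumerate(edges)])

# Incidence matrix: +1 where an edge originates, -1 where it terminates.
Q = np.zeros((4, 3))
for j, (p, q) in enumerate(edges):
    Q[p-1, j], Q[q-1, j] = 1.0, -1.0

tau = 2 - np.count_nonzero(Q, axis=1)      # tau_i = 2 - deg(i) = (1,-1,1,1)
assert np.isclose(np.linalg.det(H), 2**2 * np.prod(tau))
assert np.allclose(np.linalg.inv(H), 0.5 * Q.T @ np.diag(1.0 / tau) @ Q)
```

Here $\det H=-4=2^{2}\cdot\big(1\cdot(-1)\cdot 1\cdot 1\big)$, matching the determinant formula.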
In \cite{bapat2019}, the author considered an extension of these results to a weighted tree each of whose edges is assigned a positive scalar weight and found the determinant and inverse of $\Delta$. Recently, in \cite{das2020squared}, the authors determined the inertia and energy of the squared distance matrix of a complete multipartite graph. Also, they characterized the graphs among all complete $t$-partite graphs on $n$ vertices for which the spectral radius of the squared distance matrix and the squared distance energy are maximum and minimum, respectively. In this article, we consider a weighted tree $T$ on $n$ vertices whose edge weights are positive definite matrices of order $s$. For $i,j \in V(T)$, the distance $d(i,j)$ between $i$ and $j$ is the sum of the weight matrices in the unique $(i,j)$-path of $T$. Thus, the distance matrix $D=(d_{ij})$ of $T$ is the block matrix of order $ns\times ns$ whose $(i,j)$-th block is $d_{ij}=d(i,j)$ if $i\neq j$, and the $s \times s$ zero matrix if $i=j$. The squared distance matrix $\Delta$ of $T$ is the $ns\times ns$ block matrix whose $(i,j)$-th block is $d(i,j)^2$ if $i\neq j$, and the $s \times s$ zero matrix if $i=j$. The Laplacian matrix $L=(l_{ij})$ of $T$ is the $ns \times ns$ block matrix defined as follows: For $i,j \in V(T)$, $i\neq j$, the $(i,j)$-th block $l_{ij}=-(W(i,j))^{-1}$ if $i \sim j$, where $W(i,j)$ is the matrix weight of the edge joining the vertices $i$ and $j$, and the zero matrix otherwise. For $i \in V(T)$, the $(i,i)$-th block of $L$ is $\sum_{j\sim i}(W(i,j))^{-1}$. In the context of classical distance, matrix weights have been studied in \cite{atik2017distance} and \cite{Bapat2006}. The Laplacian matrix with matrix weights has been studied in \cite{atik2017distance,Sumit2022laplacian} and \cite{hansen2021expansion}.
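For a concrete instance of these block matrices, the sketch below (our illustration, assuming \texttt{numpy}; the $2\times 2$ positive definite weights are hypothetical) builds $D$, $\Delta$ and $L$ for the path $1-2-3$ and checks that $L$ annihilates $\textbf{1}_n\otimes I_s$, together with the identity $DL=\textbf{1}_n\tau^{\prime}\otimes I_s-2I_n\otimes I_s$ recalled in Section 2:

```python
import numpy as np

# Hypothetical 2x2 positive definite weights on the path 1 - 2 - 3.
W1 = np.diag([2.0, 1.0])
W2 = np.diag([1.0, 3.0])
Z = np.zeros((2, 2))

# Block distance matrix D (sums of weights along paths) and
# squared distance matrix Delta (blockwise squares).
d12, d23, d13 = W1, W2, W1 + W2
D = np.block([[Z, d12, d13], [d12, Z, d23], [d13, d23, Z]])
Delta = np.block([[Z, d12 @ d12, d13 @ d13],
                  [d12 @ d12, Z, d23 @ d23],
                  [d13 @ d13, d23 @ d23, Z]])

# Block Laplacian: -W(i,j)^{-1} off the diagonal, sums of inverses on it.
iW1, iW2 = np.linalg.inv(W1), np.linalg.inv(W2)
L = np.block([[iW1, -iW1, Z], [-iW1, iW1 + iW2, -iW2], [Z, -iW2, iW2]])

# Each block row of L sums to the zero block, so L annihilates 1_n (x) I_s.
ones_block = np.vstack([np.eye(2)] * 3)
assert np.allclose(L @ ones_block, 0)

# DL = 1_n tau' (x) I_s - 2 I_n (x) I_s, with tau = (1, 0, 1) on this path.
tau = np.array([1.0, 0.0, 1.0])
rhs = np.kron(np.outer(np.ones(3), tau), np.eye(2)) - 2 * np.eye(6)
assert np.allclose(D @ L, rhs)
```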
The resistance distance matrix and the product distance matrix with matrix weights have been considered in \cite{Atik-resistance} and \cite{Product-matrix}, respectively. In this article, we consider the squared distance matrix $\Delta$ of a tree $T$ with matrix weights and find formulae for the determinant and inverse of $\Delta$, which generalize the results of \cite{bapat2013product,bapat2016squared,bapat2019}. This article is organized as follows. In Section $2$, we fix the necessary notation and state some preliminary results, which will be used in the subsequent sections. In Section $3$, we establish relations between the squared distance matrix and the incidence, Laplacian, and distance matrices. In Sections $4$ and $5$, we obtain formulae for the determinant and the inverse of $\Delta$, respectively. \section{Notations and preliminary results} In this section, we define some useful notations and state some known results which will be needed to prove our main results. The $n\times 1$ column vector with all ones and the identity matrix of order $n$ are denoted by $\textbf{1}_n$ and $I_n$, respectively. Let $J$ denote the matrix of appropriate size with all entries equal to $1$. The transpose of a matrix $A$ is denoted by $A^{\prime}$. Let $A$ be an $n\times n$ matrix partitioned as $ A=\left[ {\begin{array}{cc} A_{11} & A_{12} \\ A_{21} & A_{22} \\ \end{array} } \right]$, where $A_{11}$ and $A_{22}$ are square matrices. If $A_{11}$ is nonsingular, then the \textit{Schur complement} of $A_{11}$ in $A$ is defined as $A_{22}-A_{21}{A_{11}^{-1}}A_{12}$. The following is the well-known Schur complement formula: $ \det A= (\det A_{11})\det(A_{22}-A_{21}{A_{11}^{-1}}A_{12})$. The \textit{Kronecker product} of two matrices $A=(a_{ij})_{m\times n}$ and $B=(b_{ij})_{p\times q}$, denoted by $A\otimes B$, is defined to be the $mp\times nq$ block matrix $[a_{ij}B]$.
It is known that for the matrices $A,B,C$ and $D$, $(A\otimes B)(C\otimes D)=AC\otimes BD$, whenever the products $AC$ and $BD$ are defined. Also $(A\otimes B)^{-1}=A^{-1}\otimes B^{-1}$, if $A$ and $B$ are nonsingular. Moreover, if $A$ and $B$ are $n \times n$ and $p\times p$ matrices, then $\det(A\otimes B)=(\det A)^p(\det B)^n$. For more details about the Kronecker product, we refer to \cite{matrix-analysis}. Let $H$ be the edge-orientation matrix, and $Q$ be the incidence matrix of the underlying unweighted tree with an orientation assigned to each edge. The edge-orientation matrix of a weighted tree whose edge weights are positive definite matrices of order $s$ is defined by replacing $1$ and $-1$ by $I_s$ and $-I_s$, respectively. The incidence matrix of a weighted tree is defined in a similar way. That is, for the matrix weighted tree $T$, the edge-orientation matrix and the incidence matrix are defined as $(H\otimes I_s)$ and $(Q\otimes I_s)$, respectively. Now we introduce some more notations. Let $T$ be a tree with vertex set $V(T)=\{1,\hdots,n\}$ and edge set $E(T)=\{e_1,\hdots,e_{n-1}\}$. Let $W_i$ be the edge weight matrix associated with the edge $e_i$ of $T$, $i=1,2,\hdots,n-1$. Let $\delta_i$ be the degree of the vertex $i$ and set $\tau_i=2-\delta_i$ for $i=1,2,\hdots,n$. Let $\tau$ be the $n \times 1$ column vector with components $\tau_1,\hdots,\tau_n$ and $\Tilde{\tau}$ be the diagonal matrix with diagonal entries $\tau_1,\tau_2,\hdots,\tau_n$. Let $\hat{\delta_i}$ be the matrix weighted degree of $i$, which is defined as $$\hat{\delta_i}=\sum_{j:j\sim i}W(i,j), ~~i= 1,\hdots,n.$$ Let $\hat{\delta}$ be the $ns\times s$ block matrix with the components $\hat{\delta_1},\hdots,\hat{\delta_n}$. Let $F$ be the block diagonal matrix with diagonal blocks $W_1,W_2,\hdots,W_{n-1}$. It can be verified that $L=(Q\otimes I_s){F}^{-1} (Q^{\prime}\otimes I_s)$. A tree $T$ is said to be a directed tree if each of its edges is assigned a direction.
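These Kronecker product rules, and the factorization $L=(Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)$, can be checked numerically; the sketch below is our illustration (assuming \texttt{numpy}), with hypothetical weights on the directed path $1\to 2\to 3$:

```python
import numpy as np

rng = np.random.default_rng(0)
A, C = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
B, Dm = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

# Mixed-product rule and determinant rule for the Kronecker product
# (A is 3x3 and B is 2x2, so det(A (x) B) = det(A)^2 det(B)^3).
assert np.allclose(np.kron(A, B) @ np.kron(C, Dm), np.kron(A @ C, B @ Dm))
assert np.isclose(np.linalg.det(np.kron(A, B)),
                  np.linalg.det(A)**2 * np.linalg.det(B)**3)

# L = (Q (x) I_s) F^{-1} (Q' (x) I_s) on the path 1 -> 2 -> 3 with
# hypothetical 2x2 weights W1, W2; F is block diagonal in W1, W2.
W1, W2 = np.diag([2.0, 1.0]), np.diag([1.0, 3.0])
Q = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])   # incidence matrix
F = np.block([[W1, np.zeros((2, 2))], [np.zeros((2, 2)), W2]])
L_from_Q = np.kron(Q, np.eye(2)) @ np.linalg.inv(F) @ np.kron(Q.T, np.eye(2))

# Direct definition of the block Laplacian for comparison.
iW1, iW2 = np.linalg.inv(W1), np.linalg.inv(W2)
Z = np.zeros((2, 2))
L_direct = np.block([[iW1, -iW1, Z], [-iW1, iW1 + iW2, -iW2], [Z, -iW2, iW2]])
assert np.allclose(L_from_Q, L_direct)
```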
If the tree $T$ has no vertex of degree $2$, then we let $\hat{\tau}$ denote the diagonal matrix with diagonal elements $1/\tau_1,1/\tau_2,\hdots,1/\tau_n$. In the following theorem, we state a basic result about the edge-orientation matrix $H$ of an unweighted tree $T$, which is a combination of Theorem $9$ of \cite{bapat2013product} and Theorem $11$ of \cite{bapat2016squared}. \begin{thm}\cite{bapat2013product,bapat2016squared}\label{detH} Let $T$ be a directed tree on $n$ vertices and let $H$ and $Q$ be the edge-orientation matrix and incidence matrix of $T$, respectively. Then $\det H=2^{n-2}\prod_{i=1}^n \tau_i$. Furthermore, if $T$ has no vertex of degree $2$, then $H$ is nonsingular and $H^{-1}=\frac{1}{2}Q^{\prime}\hat{\tau}Q$. \end{thm} Next, we state a known result related to the distance matrix of a tree with matrix weights. \begin{thm}[{\cite[Theorem 3.4]{atik2017distance}}]\label{thm:DL} Let $T$ be a tree on $n$ vertices each of whose edges is assigned a positive definite matrix of order $s$. Let $L$ and $D$ be the Laplacian matrix and distance matrix of $T$, respectively. If $D$ is invertible, then the following assertions hold: \begin{enumerate} \item $LD=\tau \textbf{1}_n^{\prime}\otimes I_s-2I_n\otimes I_s$. \item $DL=\textbf{1}_n{\tau}^{\prime}\otimes I_s-2I_n\otimes I_s.$ \end{enumerate} \end{thm} \section{Properties of the squared distance matrices of trees} In this section, we find the relations of the squared distance matrix with other matrices, such as the distance matrix, the Laplacian matrix, and the incidence matrix. We will use these results to obtain the formulae for determinants and inverses of the squared distance matrices of directed trees. \begin{lem}\label{lem:Ddel} Let $T$ be a tree with vertex set $\{1,2,\hdots,n\}$ in which each edge is assigned a positive definite matrix weight of order $s$. Let $D$ and $\Delta$ be the distance matrix and the squared distance matrix of $T$, respectively.
Then $\Delta (\tau \otimes I_s) =D \hat{\delta}.$ \end{lem} \begin{proof} Let $i \in \{1,2,\hdots,n\}$ be fixed. For $j \neq i$, let $p(j)$ be the predecessor of $j$ on the $(i,j)$-path of the underlying tree. Let $e_j$ be the edge between the vertices $p(j)$ and $j$. For $j \neq i$, let $W_j$ denote the weight of the edge $e_j$, and set $X_j=\hat{\delta_j}-W_j$. Therefore, \begin{eqnarray*} 2\sum_{j=1}^n d(i,j)^2 &=& \sum_{j=1}^n d(i,j)^2+\sum_{j\neq i} \Big(d(i,p(j))+W_j\Big)^2\\ &=&\sum_{j=1}^n d(i,j)^2+\sum_{j\neq i} d(i,p(j))^2+2\sum_{j\neq i} d(i,p(j))W_j+\sum_{j\neq i}W_j^2. \end{eqnarray*} Since the vertex $j$ is the predecessor of $\delta_j-1$ vertices in the paths from $i$, we have $$\sum_{j\neq i} d(i,p(j))^2=\sum_{j=1}^n(\delta_j-1)d(i,j)^2.$$ Thus, \begin{eqnarray*} 2\sum_{j=1}^n d(i,j)^2 &=& \sum_{j=1}^n d(i,j)^2+\sum_{j=1}^n(\delta_j-1)d(i,j)^2+2\sum_{j\neq i} d(i,p(j))W_j+\sum_{j\neq i}W_j^2\\ &=& \sum_{j=1}^n\delta_jd(i,j)^2+2\sum_{j\neq i} d(i,p(j))W_j+\sum_{j\neq i}W_j^2. \end{eqnarray*} Therefore, the $i$-th block of $\Delta (\tau \otimes I_s)$ is \begin{align*} (\Delta (\tau \otimes I_s))_{i}= \sum_{j=1}^n(2-\delta_j) d(i,j)^2=2\sum_{j\neq i} d(i,p(j))W_j+\sum_{j\neq i}W_j^2. \end{align*} Now, let us compute the $i$-th block of $D \hat{\delta}$. \begin{eqnarray*} (D \hat{\delta})_{i}=\sum_{j=1}^n d(i,j)\hat{\delta_j} &=& \sum_{j\neq i}\Big(d(i,p(j))+W_j\Big)(W_j+X_j)\\ &=&\sum_{j\neq i} d(i,p(j))W_j+\sum_{j\neq i}W_j^2+\sum_{j\neq i}\Big(d(i,p(j))+W_j\Big)X_j. \end{eqnarray*} Note that $X_j$ is the sum of the weights of all edges incident to $j$, except $e_j$. Hence, \begin{align*} \big(d(i,p(j))+W_j\big)X_j =d(i,j)X_j= \sum_{l\sim j,l\neq p(j)} d(i,p(l))W_l. \end{align*} Therefore, $$\sum_{j\neq i}\big(d(i,p(j))+W_j\big)X_j=\sum_{j\neq i}\sum_{l\sim j,l\neq p(j)} d(i,p(l))W_l=\sum_{j\neq i} d(i,p(j))W_j.
$$ Thus, \begin{align*} (D \hat{\delta})_{i}= \sum_{j=1}^n d(i,j)\hat{\delta_j}=2\sum_{j\neq i} d(i,p(j))W_j+\sum_{j\neq i}W_j^2=(\Delta (\tau \otimes I_s))_{i}. \end{align*} This completes the proof. \end{proof} \begin{lem}\label{lem:FHF} Let $T$ be a directed tree with vertex set $\{1,\hdots,n\}$ and edge set $\{e_1,\hdots,e_{n-1}\}$ in which each edge $e_i$ is assigned a positive definite matrix weight $W_i$ of order $s$, $1 \leq i \leq n-1$. Let $H$ and $Q$ be the edge orientation matrix and incidence matrix of $T$, respectively. If $F$ is the block diagonal matrix with diagonal blocks $W_1,W_2,\hdots,W_{n-1}$, then $$(Q^{\prime}\otimes I_s)\Delta (Q\otimes I_s)=-2F(H\otimes I_s)F.$$ \end{lem} \begin{proof} For $i,j\in \{1,2,\hdots,n-1\}$, let $e_i$ and $e_j$ be two edges of $T$ such that $e_i$ is directed from $p$ to $q$ and $e_j$ is directed from $r$ to $s$. Let $W_i$ and $W_j$ be the weights of the edges $e_i$ and $e_j$, respectively. If $d(q,r)=Y$, then it is easy to see that \begin{eqnarray*} \Big((Q^{\prime}\otimes I_s)\Delta (Q\otimes I_s)\Big)_{ij} &=& \begin{cases} \text{$(W_i+Y)^2+(W_j+Y)^2-(W_i+W_j+Y)^2-Y^2$,} & \text{if $e_i\Rightarrow e_j$,}\\ \text{$-(W_i+Y)^2-(W_j+Y)^2+(W_i+W_j+Y)^2+Y^2$,}& \text{if $e_i \rightleftharpoons e_j$.}\\ \end{cases}\\ &=& \begin{cases} \text{$-2W_iW_j$,} & \text{if $e_i\Rightarrow e_j$,}\\ \text{$2W_iW_j$,}& \text{if $e_i \rightleftharpoons e_j$.}\\ \end{cases} \end{eqnarray*} Note that $(F(H\otimes I_s)F)_{ij}= \begin{cases} \text{$W_iW_j$} & \quad\text{if $e_i\Rightarrow e_j$,}\\ \text{$-W_iW_j$}& \quad\text{if $e_i \rightleftharpoons e_j$.} \end{cases}$\\ Thus, $\Big((Q^{\prime}\otimes I_s)\Delta (Q\otimes I_s)\Big)_{ij}=-2(F(H\otimes I_s)F)_{ij}.$ \end{proof} \begin{lem}\label{deltaL} Let $T$ be a tree with vertex set $\{1,\hdots,n\}$ and edge set $\{e_1,\hdots,e_{n-1}\}$ in which each edge $e_i$ is assigned a positive definite matrix weight $W_i$ of order $s$, $1 \leq i \leq n-1$.
Let $L,D$ and $\Delta$ be the Laplacian matrix, the distance matrix and the squared distance matrix of $T$, respectively. Then $\Delta L=2D(\Tilde{\tau}\otimes I_s)-\textbf{1}_n\otimes {\hat{\delta}^\prime}$. \end{lem} \begin{proof} Let $i,j\in V(T)$, and let the degree of the vertex $j$ be $t$. Suppose $j$ is adjacent to the vertices $v_1,v_2,\hdots,v_t$, and let $e_1,e_2,\hdots,e_t$ be the corresponding edges with edge weights $W_1,W_2,\hdots,W_t$, respectively.\\ \textbf{Case 1.} For $i=j$, we have \begin{eqnarray*} (\Delta L)_{ii}&=&\sum_{k=1}^n d(i,k)^2 l_{ki}\\ &=&\sum_{k\sim i} d(i,k)^2 l_{ki}\\ &=& W_1^2(-W_1)^{-1}+\hdots +W_t^2(-W_t)^{-1}\\ &=&-(W_1+W_2+\hdots +W_t)\\ &=&-\hat{\delta_i}\\ &=& \big(2D(\Tilde{\tau}\otimes I_s)-\textbf{1}_n\otimes {\hat{\delta}^\prime}\big)_{ii}. \end{eqnarray*} \textbf{Case 2.} Let $i\neq j$. Without loss of generality, assume that the $(i,j)$-path passes through the vertex $v_1$ (it is possible that $i=v_1$). If $d(i,j)=Z$, then $d(i,v_1)=Z-W_1$, $d(i,v_2)=Z+W_2$, $d(i,v_3)=Z+W_3$, $\hdots, d(i,v_t)=Z+W_t$. Therefore, \begin{eqnarray*} (\Delta L)_{ij}&=&\sum_{k=1}^n d(i,k)^2 l_{kj}\\ &=&\sum_{k\sim j} d(i,k)^2 l_{kj}+d(i,j)^2 l_{jj}\\ &=& {d(i,v_1)}^2(-W_1)^{-1}+{d(i,v_2)}^2(-W_2)^{-1}+\hdots +{d(i,v_t)}^2(-W_t)^{-1}+d(i,j)^2 l_{jj}\\ &=&(Z-W_1)^2(-W_1)^{-1}+(Z+W_2)^2(-W_2)^{-1}+(Z+W_3)^2(-W_3)^{-1}\\ & &+\hdots +(Z+W_t)^2(-W_t)^{-1}+Z^2\big((W_1)^{-1}+(W_2)^{-1}+\hdots+(W_t)^{-1}\big)\\ &=&(W_1^2-2ZW_1)(-W_1)^{-1}+(W_2^2+2ZW_2)(-W_2)^{-1}+(W_3^2+2ZW_3)(-W_3)^{-1}\\ & & +\hdots+(W_t^2+2ZW_t)(-W_t)^{-1}\\ &=&-(W_1+W_2+\hdots +W_t)+2Z-2(t-1)Z\\ &=& 2(2-t)Z-(W_1+W_2+\hdots +W_t)\\ &=& 2\tau_j Z-\hat{\delta_j}\\ &=& \big(2D(\Tilde{\tau}\otimes I_s)-\textbf{1}_n\otimes {\hat{\delta}^\prime}\big)_{ij}. \end{eqnarray*} This completes the proof.
\end{proof} \section{Determinant of the squared distance matrix} In this section, we obtain a formula for the determinant of the squared distance matrix of a tree with positive definite matrix weights. First, we consider the trees with no vertex of degree $2$. \begin{thm}\label{det1} Let $T$ be a tree on $n$ vertices, and let $W_i$ be the weight of the edge $e_i$, where the $W_i$'s are positive definite matrices of order $s$, $i=1,2,\hdots,n-1$. If $T$ has no vertex of degree $2$, then $$\det (\Delta)=(-1)^{(n-1)s}2^{(2n-5)s}\prod_{i=1}^n {(\tau_i)^s}\prod_{i=1}^{n-1}\det (W_i^2) \det\bigg(\sum_{i=1}^n \frac{\hat{\delta_i}^2}{\tau_i}\bigg ).$$ \end{thm} \begin{proof} Let us assign an orientation to each edge of $T$, and let $H$ be the edge orientation matrix and $Q$ be the incidence matrix of the underlying unweighted tree. Let $\Delta_i$ denote the $i$-th column block of the block matrix $\Delta$. Let $t_i$ be the $n \times 1$ column vector with $1$ at the $i$-th position and $0$ elsewhere, $i=1,2,\hdots,n$. Then \begin{equation}\label{eqn1} \left[ {\begin{array}{c} Q^{\prime}\otimes I_s\\ t_1^{\prime}\otimes I_s\\ \end{array} } \right] \Delta \left[ {\begin{array}{cc} Q\otimes I_s & t_1\otimes I_s\\ \end{array} } \right]= \left[ {\begin{array}{cc} (Q^{\prime}\otimes I_s)\Delta (Q\otimes I_s) & (Q^{\prime}\otimes I_s)\Delta_1\\ \Delta_1^{\prime}(Q\otimes I_s) & 0\\ \end{array} } \right]. \end{equation} Since $\det\left[ {\begin{array}{c} Q^{\prime}\otimes I_s\\ t_1^{\prime}\otimes I_s\\ \end{array} } \right]=\det \Bigg( \left[ {\begin{array}{c} Q^{\prime}\\ t_1^{\prime}\\ \end{array} } \right]\otimes I_s \Bigg)=\pm 1$, by taking determinants on both sides of equation (\ref{eqn1}), we have \begin{align*} \det (\Delta) =& \det \left[ {\begin{array}{cc} (Q^{\prime}\otimes I_s)\Delta (Q\otimes I_s) & (Q^{\prime}\otimes I_s)\Delta_1\\ \Delta_1^{\prime}(Q\otimes I_s) & 0\\ \end{array} } \right].
\end{align*} Using Lemma \ref{lem:FHF}, we have $\det (\Delta)=\det \left[ {\begin{array}{cc} -2F(H\otimes I_s)F & (Q^{\prime}\otimes I_s)\Delta_1\\ \Delta_1^{\prime}(Q\otimes I_s) & 0\\ \end{array} } \right].$ By Theorem \ref{detH}, we have $\det H=2^{n-2}\prod_{i=1}^n \tau_i$ and hence $\det(H\otimes I_s)=(\det H)^s=2^{(n-2)s}\prod_{i=1}^n \tau_i^s$. Thus, $-2F(H\otimes I_s)F$ is nonsingular, and by the Schur complement formula, we have \begin{eqnarray*} \det (\Delta) &=& \det\left[ {\begin{array}{cc} -2F(H\otimes I_s)F & (Q^{\prime}\otimes I_s)\Delta_1\\ \Delta_1^{\prime}(Q\otimes I_s) & 0\\ \end{array} } \right]\\ &=& \det(-2F(H\otimes I_s)F)\det \Big(-\Delta_1^{\prime}(Q\otimes I_s)(-2F(H\otimes I_s)F)^{-1}(Q^{\prime}\otimes I_s)\Delta_1\Big)\\ &=&(-1)^{(n-1)s}2^{(n-2)s}\prod_{i=1}^{n-1}\det(W_i^2) \det(H\otimes I_s)\det\Big(\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(H\otimes I_s)^{-1}F^{-1}(Q^{\prime}\otimes I_s)\Delta_1\Big). \end{eqnarray*} Now, from Theorem \ref{detH}, it follows that $(H\otimes I_s)^{-1}=H^{-1}\otimes I_s=\frac{1}{2}Q^{\prime}\hat{\tau}Q\otimes I_s=\frac{1}{2}(Q^{\prime}\hat{\tau}Q\otimes I_s)$. Therefore, \begin{equation}\label{eqn det} \det (\Delta)=(-1)^{(n-1)s}2^{(2n-5)s}\prod_{i=1}^n {(\tau_i)^s}\prod_{i=1}^{n-1}\det(W_i^2)\det \Big(\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(Q^{\prime}\hat{\tau}Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)\Delta_1\Big).
\end{equation} Now, by Lemma \ref{deltaL} and Lemma \ref{lem:Ddel}, we have \begin{eqnarray*} & &\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(Q^{\prime}\hat{\tau}Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)\Delta_1\\ &=&\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)(\hat{\tau}\otimes I_s)(Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)\Delta_1\\ &=&\Big(\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)\Big)(\hat{\tau}\otimes I_s)\Big(\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)\Big)^{\prime}\\ &=&\big(\Delta_1^{\prime}L\big)(\hat{\tau}\otimes I_s)\big(\Delta_1^{\prime}L\big)^{\prime}\\ &=&\sum_i\big(2\tau_i d_{1i}-\hat{\delta_i}\big)^2\frac{1}{\tau_i}\\ &=&\sum_i\big(4{\tau_i}^2 d_{1i}^2+{\hat{\delta_i}}^2-4\tau_i d_{1i}\hat{\delta_i}\big)\frac{1}{\tau_i}\\ &=&\sum_i 4\tau_i d_{1i}^2+\sum_i \frac{\hat{\delta_i}^2}{\tau_i}-\sum_i 4d_{1i}\hat{\delta_i}\\ &=&\sum_i \frac{\hat{\delta_i}^2}{\tau_i}, \end{eqnarray*} where the last equality holds because $\sum_i \tau_i d_{1i}^2=\sum_i d_{1i}\hat{\delta_i}$, which is the first block row of the identity $\Delta (\tau \otimes I_s) =D \hat{\delta}$ of Lemma \ref{lem:Ddel}. Substituting the value of $\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(Q^{\prime}\hat{\tau}Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)\Delta_1$ in (\ref{eqn det}), we get the required result. \end{proof} \begin{figure} \centering \includegraphics[scale= 0.50]{sqdst1.jpg} \caption{Tree $T_1$ on 4 vertices} \label{fig1} \end{figure} Next, let us illustrate the above theorem by an example. \begin{ex} Consider the tree $T_1$ in Figure \ref{fig1}, where the edge weights are \begin{align*} W_1=\left[ {\begin{array}{cc} 1 & 0\\ 0 & 1\\ \end{array} } \right], \qquad W_2=\left[ {\begin{array}{cc} 2 & 0\\ 0 & 1\\ \end{array} } \right], \qquad W_3=\left[ {\begin{array}{cc} 1 & 0\\ 0 & 2\\ \end{array} } \right].
\end{align*} \end{ex} Then, \begin{align*} \Delta =&\left[ {\begin{array}{cccc} 0 & W_1^2 & (W_1+W_2)^2 & (W_1+W_3)^2\\ W_1^2 & 0 & W_2^2 & W_3^2\\ (W_1+W_2)^2 & W_2^2 & 0 & (W_2+W_3)^2\\ (W_1+W_3)^2 & W_3^2 & (W_2+W_3)^2 & 0\\ \end{array} } \right] \\ =&\left[ {\begin{array}{cccccccc} 0 & 0 & 1 & 0 & 9 & 0 & 4 & 0\\ 0 & 0 & 0 & 1 & 0 & 4 & 0 & 9\\ 1 & 0 & 0 & 0 & 4 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 4\\ 9 & 0 & 4 & 0 & 0 & 0 & 9 & 0\\ 0 & 4 & 0 & 1 & 0 & 0 & 0 & 9\\ 4 & 0 & 1 & 0 & 9 & 0 & 0 & 0 \\ 0 & 9 & 0 & 4 & 0 & 9 & 0 & 0\\ \end{array} } \right] ~ \text{and}\\ \sum_{i=1}^4 \frac{\hat{\delta_i}^2}{\tau_i}=& W_1^2+W_2^2+W_3^2-(W_1+W_2+W_3)^2= \left[ {\begin{array}{cc} -10 & 0\\ 0 & -10\\ \end{array} } \right]. \end{align*} One can verify that, $$\det (\Delta)= 102400= (-1)^{6}2^{6}\prod_{i=1}^4 {(\tau_i)^2}\prod_{i=1}^{3}\det({W_i}^2) \det\Big (\sum_{i=1}^4 \frac{\hat{\delta_i}^2}{\tau_i}\Big ).$$ Next, we obtain a formula for the determinant of the squared distance matrix of a tree $T$ which has exactly one vertex of degree $2$. \begin{thm}\label{det} Let $T$ be a tree on $n$ vertices with the edge set $E(T)=\{e_1,e_2,\hdots,e_{n-1}\}$. Let the positive definite matrices $W_1,W_2,\hdots,W_{n-1}$ of order $s$ be the weights of the edges $e_1,e_2,\hdots,e_{n-1}$, respectively. Let $v$ be the vertex of degree $2$ and $u$ and $w$ be its neighbours in $T$. If $e_i=(u,v)$ and $e_j=(v,w)$, then $$\det (\Delta)=(-1)^{(n-1)s}2^{(2n-5)s}\det(W_i+W_j)^2 \prod_{k=1}^{n-1} \det(W_k^2)\prod_{k\neq v}\tau_k^s.$$ \end{thm} \begin{proof} Let us assign an orientation to each edge of $T$. Without loss of generality, assume that the edge $e_i$ is directed from $u$ to $v$ and the edge $e_j$ is directed from $v$ to $w$. Let $\Delta_i$ denote the $i$-th column block of the block matrix $\Delta$. Let $t_i$ be the $n \times 1$ column vector with $1$ at the $i$-th position and $0$ elsewhere, $i=1,2,\hdots,n$.
Therefore, by using Lemma \ref{lem:FHF}, we have \begin{eqnarray*} \left[ {\begin{array}{c} Q^{\prime}\otimes I_s\\ t_v^{\prime}\otimes I_s\\ \end{array} } \right] \Delta \left[ {\begin{array}{cc} Q\otimes I_s & t_v\otimes I_s\\ \end{array} } \right] &=& \left[ {\begin{array}{cc} (Q^{\prime}\otimes I_s)\Delta (Q\otimes I_s) & (Q^{\prime}\otimes I_s)\Delta_v\\ \Delta_v^{\prime}(Q\otimes I_s) & 0\\ \end{array} } \right]\\ &=& \left[ {\begin{array}{cc} -2F(H\otimes I_s)F & (Q^{\prime}\otimes I_s)\Delta_v\\ \Delta_v^{\prime}(Q\otimes I_s) & 0\\ \end{array} } \right]. \end{eqnarray*} Pre-multiplying and post-multiplying the above equation by $\left[ {\begin{array}{cc} F^{-1}& 0\\ 0 & I_s\\ \end{array} } \right]$, we get \begin{eqnarray*} \left[ {\begin{array}{cc} F^{-1}& 0\\ 0 & I_s\\ \end{array} } \right] \left[ {\begin{array}{c} Q^{\prime}\otimes I_s\\ t_v^{\prime}\otimes I_s\\ \end{array} } \right] \Delta \left[ {\begin{array}{cc} Q\otimes I_s & t_v\otimes I_s\\ \end{array} } \right] \left[ {\begin{array}{cc} F^{-1}& 0\\ 0 & I_s\\ \end{array} } \right] &=& \left[ {\begin{array}{cc} -2(H\otimes I_s) & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\ \Delta_v^{\prime}(Q\otimes I_s)F^{-1} & 0\\ \end{array} } \right], \end{eqnarray*} which implies that \begin{eqnarray*} (\det(F^{-1}))^2 \det(\Delta) =\det \left[ {\begin{array}{cc} -2(H\otimes I_s) & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\ \Delta_v^{\prime}(Q\otimes I_s)F^{-1} & 0\\ \end{array} } \right]. \end{eqnarray*} Let $H(j|j)$ denote the $(n-2)s\times (n-2)s$ submatrix obtained by deleting all the blocks in the $j$-th row and $j$-th column from $H\otimes I_s$. Let $R_i$ and $C_i$ denote the $i$-th row and $i$-th column of the matrix $\left[ {\begin{array}{cc} -2(H\otimes I_s) & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\ \Delta_v^{\prime}(Q\otimes I_s)F^{-1} & 0\\ \end{array} } \right]$, respectively. Note that the blocks in the $i$-th and $j$-th columns of $H\otimes I_s$ are identical.
Now, perform the operations $R_j-R_i$ and $C_j-C_i$ in $\left[ {\begin{array}{cc} -2(H\otimes I_s) & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\ \Delta_v^{\prime}(Q\otimes I_s)F^{-1} & 0\\ \end{array} } \right]$, and then interchange $R_j$ with $R_{n-1}$ and $C_j$ with $C_{n-1}$. Since $\big(\Delta_v^{\prime}(Q\otimes I_s)F^{-1}\big)_j-\big(\Delta_v^{\prime}(Q\otimes I_s)F^{-1}\big)_i=-W_j-W_i$, we obtain \begin{equation} \det \left[ {\begin{array}{cc} -2(H\otimes I_s) & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\ \Delta_v^{\prime}(Q\otimes I_s)F^{-1} & 0\\ \end{array} } \right] = \det \left[ {\begin{array}{ccc} -2H(j|j) & 0 & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\ 0 & 0 & -W_j-W_i\\ \Delta_v^{\prime}(Q\otimes I_s)F^{-1} & -W_j-W_i & 0\\ \end{array} } \right]. \end{equation} Since $H(j|j)$ is the edge orientation matrix of the tree obtained by deleting the vertex $v$ and replacing the edges $e_i$ and $e_j$ by a single edge directed from $u$ to $w$ in the tree, by Theorem \ref{detH}, we have $\det(H(j|j))=2^{(n-3)s}\prod_{k \neq v}\tau_k^s$, which is nonzero. Therefore, by applying the Schur complement formula, we have \begin{eqnarray*} & &\det \left[ {\begin{array}{ccc} -2H(j|j) & 0 & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\ 0 & 0 & -W_j-W_i\\ \Delta_v^{\prime}(Q\otimes I_s)F^{-1} & -W_j-W_i & 0\\ \end{array} } \right] \\ &=& \det(-2H(j|j)) \det \bigg(\left[ {\begin{array}{cc} 0 & -W_j-W_i\\ -W_j-W_i & 0\\ \end{array} } \right]-\\ & &~~~~~~~~~~~~~~~~~~~~~~~~~~~ \left[ {\begin{array}{cc} 0 & 0 \\ 0 & \Delta_v^{\prime}(Q\otimes I_s)F^{-1}(-2H(j|j))^{-1}F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\ \end{array} } \right] \bigg)\\ &=&(-2)^{(n-2)s}\det(H(j|j)) \det \left[ {\begin{array}{cc} 0 & -W_j-W_i\\ -W_j-W_i & -\Delta_v^{\prime}(Q\otimes I_s)F^{-1}(-2H(j|j))^{-1}F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\ \end{array} } \right].
\end{eqnarray*} Again, by the proof of Theorem \ref{det1}, we have $$\Delta_v^{\prime}(Q\otimes I_s)F^{-1}(-2H(j|j))^{-1}F^{-1}(Q^{\prime}\otimes I_s)\Delta_v=-\frac{1}{4}\sum_{i\neq v} \frac{\hat{\delta_i}^2}{\tau_i}.$$ Therefore, \begin{eqnarray*} & &\det \left[ {\begin{array}{ccc} -2H(j|j) & 0 & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\ 0 & 0 & -W_j-W_i\\ \Delta_v^{\prime}(Q\otimes I_s)F^{-1} & -W_j-W_i & 0\\ \end{array} } \right] \\ &=& (-2)^{(n-2)s}\det(H(j|j)) \det \left[ {\begin{array}{cc} 0 & -W_j-W_i\\ -W_j-W_i & \frac{1}{4}\sum_{i\neq v} \frac{\hat{\delta_i}^2}{\tau_i}\\ \end{array} } \right]\\ &=& (-2)^{(n-2)s}\det(H(j|j)) \det \left[ {\begin{array}{cc} 0 & W_j+W_i\\ W_j+W_i & -\frac{1}{4}\sum_{i\neq v} \frac{\hat{\delta_i}^2}{\tau_i}\\ \end{array} } \right]. \end{eqnarray*} Since $\det \Big(-\frac{1}{4}\sum_{i\neq v} \frac{\hat{\delta_i}^2}{\tau_i}\Big)\neq 0$, by the Schur complement formula, we have \begin{eqnarray*} \det \left[ {\begin{array}{cc} 0 & W_j+W_i\\ W_j+W_i & -\frac{1}{4}\sum_{i\neq v} \frac{\hat{\delta_i}^2}{\tau_i}\\ \end{array} } \right] &=&\det \bigg(-\frac{1}{4}\sum_{i\neq v} \frac{\hat{\delta_i}^2}{\tau_i}\bigg) \det \bigg[0-(W_j+W_i) \bigg(-\frac{1}{4}\sum_{i\neq v} \frac{\hat{\delta_i}^2}{\tau_i}\bigg)^{-1}(W_j+W_i)\bigg]\\ &=&(-1)^s \det \bigg(-\frac{1}{4}\sum_{i\neq v} \frac{\hat{\delta_i}^2}{\tau_i}\bigg) \det \bigg(-\frac{1}{4}\sum_{i\neq v} \frac{\hat{\delta_i}^2}{\tau_i}\bigg)^{-1} \det(W_j+W_i)^2\\ &=&(-1)^s \det(W_i+W_j)^2. \end{eqnarray*} Thus, \begin{eqnarray*} \det (\Delta) &=&(\det F)^2(-1)^{s}(-2)^{(n-2)s}2^{(n-3)s}\prod_{k\neq v}\tau_k^s~\det(W_i+W_j)^2\\ &=&(-1)^{(n-1)s}2^{(2n-5)s}\det(W_i+W_j)^2\prod_{k=1}^{n-1}\det(W_k^2)\prod_{k\neq v}\tau_k^s. \end{eqnarray*} This completes the proof. \end{proof} \begin{figure} \centering \includegraphics[scale= 0.50]{sqdst2.jpg} \caption{Tree $T_2$ on 5 vertices} \label{fig2} \end{figure} Now, we illustrate the above theorem by the following example.
\begin{ex} Consider the tree $T_2$ in Figure \ref{fig2}, where the edge weights are \begin{align*} W_1=\left[ {\begin{array}{cc} 1 & 0\\ 0 & 1\\ \end{array} } \right], \qquad W_2=\left[ {\begin{array}{cc} 2 & 0\\ 0 & 1\\ \end{array} } \right], \qquad W_3=\left[ {\begin{array}{cc} 1 & 0\\ 0 & 2\\ \end{array} } \right], \qquad W_4=\left[ {\begin{array}{cc} 2 & 0\\ 0 & 2\\ \end{array} } \right]. \end{align*} \end{ex} Then, \begin{eqnarray*} \Delta &=&\left[ {\begin{array}{ccccc} 0 & W_1^2 & (W_1+W_2)^2 & (W_1+W_2+W_3)^2 & (W_1+W_2+W_4)^2\\ W_1^2 & 0 & W_2^2 & (W_2+W_3)^2 & (W_2+W_4)^2\\ (W_1+W_2)^2 & W_2^2 & 0 & W_3^2 & W_4^2\\ (W_1+W_2+W_3)^2 &(W_2+W_3)^2 & W_3^2 & 0 & (W_3+W_4)^2\\ (W_1+W_2+W_4)^2 & (W_2+W_4)^2 & W_4^2 & (W_3+W_4)^2 & 0\\ \end{array} } \right] \\ &=&\left[ {\begin{array}{cccccccccc} 0 & 0 & 1 & 0 & 9 & 0 & 16 & 0 & 25 & 0\\ 0 & 0 & 0 & 1 & 0 & 4 & 0 & 16 & 0 & 16\\ 1 & 0 & 0 & 0 & 4 & 0 & 9 & 0 & 16 & 0\\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 9 & 0 & 9\\ 9 & 0 & 4 & 0 & 0 & 0 & 1 & 0 & 4 & 0\\ 0 & 4 & 0 & 1 & 0 & 0 & 0 & 4 & 0 & 4\\ 16 & 0 & 9 & 0 & 1 & 0 & 0 & 0 & 9 & 0\\ 0 & 16 & 0 & 9 & 0 & 4 & 0 & 0 & 0 & 16\\ 25 & 0 & 16 & 0 & 4 & 0 & 9 & 0 & 0 & 0 \\ 0 & 16 & 0 & 9 & 0 & 4 & 0 & 16 & 0 & 0 \\ \end{array} } \right]. \end{eqnarray*} One can verify that, $$\det (\Delta)= 9437184= (-1)^{8}2^{10}\det(W_1+W_2)^2 \prod_{i=1}^{4} \det(W_i^2)\prod_{k\neq 2}\tau_k^s.$$ \begin{cor} Let $T$ be a tree on $n$ vertices, in which each edge $e_i$ is assigned a positive definite matrix $W_i$ of order $s$, $i=1,2,\hdots,n-1$. If $T$ has at least two vertices of degree $2$, then $\det (\Delta)=0$. \end{cor} \begin{proof} The result follows from Theorem \ref{det}, since $\tau_k=0$ for at least two vertices $k$, so that the product $\prod_{k\neq v}\tau_k^s$ vanishes. \end{proof} \section{Inverse of the squared distance matrix} This section considers trees with no vertex of degree $2$ and obtains an explicit formula for the inverse of the squared distance matrix.
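Before turning to the inverse, the determinant formula of Theorem \ref{det1} admits a quick numerical sanity check on the example $T_1$ above. The following sketch (an illustration assuming Python with \texttt{numpy}; it is not part of the proofs) rebuilds $\Delta$ from the weights $W_1,W_2,W_3$ and compares both sides of the formula.

```python
import numpy as np

# Edge weights of the example tree T_1 (a star whose centre is vertex 2)
W1, W2, W3 = np.diag([1., 1.]), np.diag([2., 1.]), np.diag([1., 2.])
n, s = 4, 2
sq = lambda M: M @ M          # squared matrix weight
Z = np.zeros((s, s))

# Squared distance matrix of T_1, built block by block
Delta = np.block([
    [Z,         sq(W1),    sq(W1+W2), sq(W1+W3)],
    [sq(W1),    Z,         sq(W2),    sq(W3)],
    [sq(W1+W2), sq(W2),    Z,         sq(W2+W3)],
    [sq(W1+W3), sq(W3),    sq(W2+W3), Z],
])

tau = [1, -1, 1, 1]           # tau_i = 2 - deg(i)
# beta = sum_i delta_i^2 / tau_i, where delta_i is the sum of weights at vertex i
beta = sq(W1) + sq(W2) + sq(W3) - sq(W1 + W2 + W3)

lhs = np.linalg.det(Delta)
rhs = ((-1)**((n-1)*s) * 2**((2*n-5)*s) * int(np.prod(tau))**s
       * np.linalg.det(sq(W1)) * np.linalg.det(sq(W2)) * np.linalg.det(sq(W3))
       * np.linalg.det(beta))
print(round(lhs), round(rhs))  # 102400 102400
```

Both sides agree with the value $102400$ computed in the example.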
First, let us prove the following lemma which will be used to find $\Delta^{-1}$. \begin{lem}\label{lem:inv} Let $T$ be a tree of order $n$ with no vertex of degree $2$, in which each edge is assigned a positive definite matrix weight of order $s$. If $\beta=\Hat{{\delta}^{\prime}}(\Hat{\tau}\otimes I_s)\Hat{\delta}$ and $\eta=2\tau \otimes I_s-L(\hat{\tau}\otimes I_s)\Hat{\delta}$, then $$\Delta \eta =\textbf{1}_n \otimes \beta.$$ \end{lem} \begin{proof} By Lemma \ref{deltaL}, we have $\Delta L=2D(\Tilde{\tau}\otimes I_s)-\textbf{1}_n \otimes {\hat{\delta}^\prime}$. Hence, \begin{eqnarray*} \Delta L(\Hat{\tau}\otimes I_s)\hat{\delta}&=&2D\hat{\delta}-(\textbf{1}_n \otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)\hat{\delta}\\ &=&2D\hat{\delta}-\textbf{1}_n \otimes \sum_{i=1}^n\frac{\hat{\delta_i}^2}{\tau_i}. \end{eqnarray*} Since $\beta=\Hat{{\delta}^{\prime}}(\Hat{\tau}\otimes I_s)\Hat{\delta}=\sum_{i=1}^n\frac{\hat{\delta_i}^2}{\tau_i}$, it follows that $\Delta L(\Hat{\tau}\otimes I_s)\hat{\delta}=2D\hat{\delta}-\textbf{1}_n \otimes \beta$. By Lemma \ref{lem:Ddel}, we have $\Delta (\tau \otimes I_s) =D \hat{\delta}$ and hence $\Delta L(\Hat{\tau}\otimes I_s)\hat{\delta}= 2\Delta (\tau \otimes I_s)-\textbf{1}_n\otimes \beta$. Rearranging gives $\Delta \big(2\tau \otimes I_s-L(\Hat{\tau}\otimes I_s)\hat{\delta}\big)=\textbf{1}_n\otimes \beta$, that is, $\Delta \eta=\textbf{1}_n\otimes \beta$. This completes the proof. \end{proof} If the tree $T$ has no vertex of degree $2$ and $\det(\beta) \neq 0$, then by Theorem \ref{det1}, $\Delta$ is nonsingular, that is, ${\Delta}^{-1}$ exists. In the next theorem, we determine the formula for ${\Delta}^{-1}$. \begin{thm} Let $T$ be a tree of order $n$ with no vertex of degree $2$, in which each edge is assigned a positive definite matrix weight of order $s$. Let $\beta=\Hat{{\delta}^{\prime}}(\Hat{\tau}\otimes I_s)\Hat{\delta}$ and $\eta=2\tau \otimes I_s-L(\hat{\tau}\otimes I_s)\Hat{\delta}$.
If $\det(\beta) \neq 0$, then $${\Delta}^{-1}=-\frac{1}{4}L(\Hat{\tau}\otimes I_s)L+\frac{1}{4}\eta {\beta}^{-1} {\eta}^{\prime}.$$ \end{thm} \begin{proof} Let $X=-\frac{1}{4}L(\Hat{\tau}\otimes I_s)L+\frac{1}{4}\eta {\beta}^{-1} {\eta}^{\prime}$. Then, \begin{equation}\label{eqn:inv1} \Delta X=-\frac{1}{4}\Delta L(\Hat{\tau}\otimes I_s)L+\frac{1}{4}\Delta \eta {\beta}^{-1} {\eta}^{\prime}. \end{equation} By Lemma \ref{deltaL}, we have $\Delta L=2D(\Tilde{\tau}\otimes I_s)-\textbf{1}_n\otimes {\hat{\delta}^\prime}$. Therefore, $$\Delta L(\Hat{\tau}\otimes I_s)L=2DL-(\textbf{1}_n\otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)L. $$ By Theorem \ref{thm:DL}, $DL=\textbf{1}_n{\tau}^{\prime}\otimes I_s-2I_n\otimes I_s$ and hence \begin{equation}\label{eqn:inv2} \Delta L(\Hat{\tau}\otimes I_s)L=2\Big(\textbf{1}_n{\tau}^{\prime}\otimes I_s-2I_n\otimes I_s\Big)-(\textbf{1}_n\otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)L. \end{equation} By Lemma \ref{lem:inv}, we have $\Delta \eta =\textbf{1}_n\otimes \beta=(\textbf{1}_n\otimes I_s)\beta$. Therefore, from equations (\ref{eqn:inv1}) and (\ref{eqn:inv2}), we have \begin{eqnarray*} \Delta X &=& -\frac{1}{2}\Big(\textbf{1}_n{\tau}^{\prime}\otimes I_s-2I_n\otimes I_s\Big)+\frac{1}{4}(\textbf{1}_n\otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)L+\frac{1}{4}(\textbf{1}_n \otimes I_s){\eta}^{\prime}\\ & = & -\frac{1}{2}\textbf{1}_n{\tau}^{\prime}\otimes I_s+I_n\otimes I_s+\frac{1}{4}(\textbf{1}_n\otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)L+\frac{1}{4}(\textbf{1}_n\otimes I_s)\Big(2\tau \otimes I_s-L(\hat{\tau}\otimes I_s)\Hat{\delta}\Big)^{\prime}\\ & = & -\frac{1}{2}\textbf{1}_n{\tau}^{\prime}\otimes I_s+I_n\otimes I_s+\frac{1}{4}(\textbf{1}_n\otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)L+\frac{1}{4}(\textbf{1}_n\otimes I_s)\Big(2\tau^{\prime} \otimes I_s-{\Hat{\delta}}^{\prime}(\hat{\tau}\otimes I_s)L\Big)\\ &=& I_n\otimes I_s=I_{ns}, \end{eqnarray*} where the last equality holds because $(\textbf{1}_n\otimes I_s)(\tau^{\prime} \otimes I_s)=\textbf{1}_n{\tau}^{\prime}\otimes I_s$ and $(\textbf{1}_n\otimes I_s){\Hat{\delta}}^{\prime}(\hat{\tau}\otimes I_s)L=(\textbf{1}_n\otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)L$, so the remaining terms cancel in pairs. This completes the proof.
\end{proof} Now, let us illustrate the above formula for $\Delta^{-1}$ by an example. \begin{ex} Consider the tree $T_1$ in Figure 1, where the edge weights are \begin{align*} W_1=\left[ {\begin{array}{cc} 1 & 0\\ 0 & 1\\ \end{array} } \right], \qquad W_2=\left[ {\begin{array}{cc} 2 & 0\\ 0 & 1\\ \end{array} } \right], \qquad W_3=\left[ {\begin{array}{cc} 1 & 0\\ 0 & 2\\ \end{array} } \right]. \end{align*} \end{ex} Then, \begin{align*} \Delta =&\left[ {\begin{array}{cccc} 0 & W_1^2 & (W_1+W_2)^2 & (W_1+W_3)^2\\ W_1^2 & 0 & W_2^2 & W_3^2\\ (W_1+W_2)^2 & W_2^2 & 0 & (W_2+W_3)^2\\ (W_1+W_3)^2 & W_3^2 & (W_2+W_3)^2 & 0\\ \end{array} } \right] \\ =&\left[ {\begin{array}{cccccccc} 0 & 0 & 1 & 0 & 9 & 0 & 4 & 0\\ 0 & 0 & 0 & 1 & 0 & 4 & 0 & 9\\ 1 & 0 & 0 & 0 & 4 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 4\\ 9 & 0 & 4 & 0 & 0 & 0 & 9 & 0\\ 0 & 4 & 0 & 1 & 0 & 0 & 0 & 9\\ 4 & 0 & 1 & 0 & 9 & 0 & 0 & 0 \\ 0 & 9 & 0 & 4 & 0 & 9 & 0 & 0\\ \end{array} } \right],\\ L=&\left[ {\begin{array}{cccc} W_1^{-1}& -W_1^{-1} & 0 & 0\\ -W_1^{-1} & W_1^{-1}+W_2^{-1}+W_3^{-1} & -W_2^{-1} & -W_3^{-1}\\ 0 & -W_2^{-1} & W_2^{-1} & 0 \\ 0 & -W_3^{-1} & 0 &W_3^{-1}\\ \end{array} } \right] \\ =&\left[ {\begin{array}{cccccccc} 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & -1 & 0 & 0 & 0 & 0\\ -1 & 0 & 2.5 & 0 & -0.5 & 0 & -1 & 0\\ 0 & -1 & 0 & 2.5 & 0 & -1 & 0 & -0.5\\ 0 & 0 & -0.5 & 0 & 0.5 & 0 & 0 & 0\\ 0 & 0 & 0 & -1 & 0 & 1 & 0 & 0\\ 0 & 0 & -1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -0.5 & 0 & 0 & 0 & 0.5\\ \end{array} } \right], \end{align*} \begin{align*} \beta =\sum_{i=1}^4 \frac{\hat{\delta_i}^2}{\tau_i}=& W_1^2+W_2^2+W_3^2-(W_1+W_2+W_3)^2=\left[ {\begin{array}{cc} -10 & 0\\ 0 & -10\\ \end{array} } \right], ~ \text{and}\\ {\eta}^{\prime} =& \left[ {\begin{array}{cccccccc} -3 & 0 & 11 & 0 & -1 & 0 & -3 & 0\\ 0 & -3 & 0 & 11 & 0 & -3 & 0 & -1\\ \end{array} } \right]. 
\end{align*} Therefore, \begin{align*} L(\Hat{\tau}\otimes I_s)L=& \left[ {\begin{array}{cccccccc} 0 & 0 & 1.5 & 0 & -0.5 & 0 & -1 & 0\\ 0 & 0 & 0 & 1.5 & 0 & -1 & 0 & -0.5\\ 1.5 & 0 & -4& 0 & 1 & 0 & 1.5 & 0\\ 0 & 1.5 & 0 & -4 & 0 & 1.5 & 0 & 1\\ -0.5 & 0 & 1 & 0 & 0 & 0 & -0.5 & 0\\ 0 & -1 & 0 & 1.5 & 0 & 0 & 0 & -0.5\\ -1 & 0 & 1.5 & 0 & -0.5 & 0 & 0 & 0 \\ 0 & -0.5 & 0 & 1 & 0 & -0.5 & 0 & 0\\ \end{array} } \right],~ \text{and} \\ \eta {\beta}^{-1} {\eta}^{\prime}=& \left[ {\begin{array}{cccccccc} -0.9 & 0 & 3.3 & 0 & -0.3 & 0 & -0.9 & 0\\ 0 & -0.9 & 0 & 3.3 & 0 & -0.9 & 0 & -0.3\\ 3.3 & 0 & -12.1 & 0 & 1.1 & 0 & 3.3 & 0\\ 0 & 3.3 & 0 & -12.1 & 0 & 3.3 & 0 & 1.1\\ -0.3 & 0 & 1.1 & 0 & -0.1 & 0 & -0.3 & 0\\ 0 & -0.9 & 0 & 3.3 & 0 & -0.9 & 0 & -0.3\\ -0.9 & 0 & 3.3 & 0 & -0.3 & 0 & -0.9 & 0 \\ 0 & -0.3 & 0 & 1.1 & 0 & -0.3 & 0 & -0.1\\ \end{array} } \right]. \end{align*} One can verify that, $$\Delta^{-1}=-\frac{1}{4}L(\Hat{\tau}\otimes I_s)L+\frac{1}{4}\eta {\beta}^{-1} {\eta}^{\prime}.$$ \bibliographystyle{plain} \bibliography{reference} \end{document}
2205.01660v1
http://arxiv.org/abs/2205.01660v1
On the Lie algebra structure of integrable derivations
\documentclass{amsart} \usepackage{amsfonts,amssymb,amsmath,mathtools,mathrsfs,bm,stmaryrd,amsthm, enumerate} \usepackage[utf8]{inputenc} \usepackage[hidelinks]{hyperref} \usepackage[T1]{fontenc} \usepackage[english]{babel} \usepackage[capitalise]{cleveref} \usepackage[parfill]{parskip} \usepackage{wrapfig,tikz, tikz-cd} \usetikzlibrary{arrows} \newtheorem{theorem}{Theorem} \newtheorem{theorema}{Theorem} \renewcommand{\thetheorema}{\Alph{theorema}} \newtheorem{lemma}{Lemma} \newtheorem{proposition}{Proposition} \newtheorem{corollary}{Corollary} \theoremstyle{definition} \newtheorem{definition}{Definition} \newtheorem{remark}{Remark} \newtheorem{example}{Example} \newtheorem{question}{Question} \newenvironment{lleonard}{\color{blue}} \newcommand{\Hom}{{\operatorname{Hom}}} \newcommand{\sHom}{\underline{\operatorname{Hom}}} \newcommand{\HH}{{\operatorname{HH}}} \newcommand{\sHH}{{\underline{\operatorname{HH}}}} \newcommand{\Ext}{{\operatorname{Ext}}} \newcommand{\sExt}{\underline{\operatorname{Ext}}} \newcommand{\RHom}{{\operatorname{RHom}}} \newcommand{\sing}{\underline{\operatorname{D}}^{\sf b}} \newcommand{\dcat}{{\operatorname{D}}} \newcommand{\dbcat}{{\operatorname{D}^{\sf b}}} \newcommand{\csing}{\underline{\operatorname{D}}^{\sf B}} \newcommand{\cdbcat}{\operatorname{D}^{\sf B}} \newcommand{\proj}{\operatorname{Proj}} \renewcommand{\flat}{\operatorname{Flat}} \DeclareMathOperator{\chr}{char} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Der}{Der} \DeclareMathOperator{\iDer}{Der_{\rm int}} \DeclareMathOperator{\obs}{obs} \newcommand{\susp}{{\mathtt{\Sigma}}} \newcommand{\tosim}{\xrightarrow{\sim}} \renewcommand{\-}{\text{-}} \newcommand{\op}{\textsf{op}} \def\sl{\mathfrak{sl}} \def\e{\mathfrak{e}} \def\f{\mathfrak{f}} \def\h{\mathfrak{h}} \title{On the Lie algebra structure of integrable derivations} \author{Benjamin Briggs} \address{ Mathematical Sciences Research Institute, Berkeley, California} \email{[email protected]} \author{Lleonard Rubio y 
Degrassi} \address{Dipartimento di Informatica - Settore di Matematica, Universit\`{a} degli Studi di Verona, Strada le Grazie 15 - Ca’ Vignal, I-37134 Verona, Italy} \email{[email protected]} \thanks{{\it Funding.} During this work the first author was hosted by the Mathematical Sciences Research Institute in Berkeley, California, supported by the National Science Foundation under Grant No.\ 1928930. The second author has been partially supported by EPSRC grant EP/L01078X/1 and the project PRIN 2017 - Categories, Algebras: Ring-Theoretical and Homological Approaches. He also acknowledges support from the project {\it REDCOM: Reducing complexity in algebra, logic, combinatorics}, financed by the programme {\it Ricerca Scientifica di Eccellenza 2018} of the Fondazione Cariverona.\\} \begin{document} \maketitle \begin{abstract} Building on work of Gerstenhaber, we show that the space of integrable derivations on an Artin algebra $A$ forms a Lie algebra, and a restricted Lie algebra if $A$ contains a field of characteristic $p$. We deduce that the space of integrable classes in $\HH^1(A)$ forms a (restricted) Lie algebra that is invariant under derived equivalences, and under stable equivalences of Morita type between self-injective algebras. We also provide negative answers to questions about integrable derivations posed by Linckelmann and by Farkas, Geiss and Marcos. \end{abstract} \section{Introduction} For any algebra $A$ over a commutative ring $k$, we consider the Lie algebra of $k$-linear derivations on $A$. The subspace of integrable derivations was introduced by Hasse and Schmidt in \cite{HS}, and has since been important in geometry and commutative algebra, especially in regard to deformations, jet spaces, and automorphism groups \cite{Ma,Mo,Vo}. More recently, integrable derivations have been used as a source of invariants in representation theory \cite{FGM,ML}.
The first Hochschild cohomology Lie algebra $\HH^1(A)$, consisting of all derivations modulo inner derivations, is a critically important invariant, and the subspace $\HH^1_{\rm int}(A)$ spanned by integrable derivations is the main object of interest in this work. This class of derivations is known to have good invariance properties: Farkas, Geiss and Marcos prove that $\HH^1_{\rm int}(A)$ is an invariant under Morita equivalences \cite{FGM}, and Linckelmann proves, for self-injective algebras, that $\HH^1_{\rm int}(A)$ is an invariant under stable equivalences of Morita type \cite{ML}. The first part of this work builds heavily on Gerstenhaber's work on integrable derivations \cite{G}, which seems not to be well-known. Following this work, we prove that for Artin algebras, the integrable derivations form a (restricted) Lie algebra. \begin{theorema}[See Corollary \ref{cor_Lie}] \label{theo_lie} If $A$ is an Artin algebra over a commutative Artinian ring $k$, then $\HH^1_{\rm int}(A)$ is a Lie subalgebra of $\HH^1(A)$. If moreover $A$ contains a field of characteristic $p$, then $\HH^1_{\rm int}(A)$ is a restricted Lie subalgebra of $\HH^1(A)$. \end{theorema} We also show that this (restricted) Lie algebra is invariant under derived equivalences and stable equivalences of Morita type, extending the work of Farkas, Geiss and Marcos \cite{FGM} and Linckelmann \cite{ML}. \begin{theorema}[See Theorem \ref{invarianceint}] \label{theoinvarianceint} Let $A$ and $B$ be two finite dimensional split algebras over a field $k$. Assume either that $A$ and $B$ are derived equivalent, or that $A$ and $B$ are self-injective and stably equivalent of Morita type. Then $\HH_{\rm int}^1(A)\cong \HH_{\rm int}^1(B)$ as Lie algebras, and this is an isomorphism of restricted Lie algebras if $k$ is of positive characteristic. \end{theorema} In the second part of this work we answer several questions posed in \cite{FGM} and \cite{L} about integrable derivations on group algebras.
We show by example that the Lie algebra of integrable derivations of a block of a finite group algebra is not always solvable, giving a negative answer to {\cite[Question 8.2]{L}}. \begin{theorema}[See Theorem \ref{notsolvable}] \label{theonotsolvable} Let $k$ be a field of characteristic $p\geq 3$, let $C_p$ be the cyclic group of order $p$ and let $A=k(C_p\times C_p)$. Then $\HH^1_{\rm int}(A)$ is not solvable. \end{theorema} By \cite[Theorem 2.2]{FGM} the group algebra of a finite $p$-group, over a field of characteristic $p$, always admits a non-integrable derivation. The authors of that work ask whether this is true of all finite groups with order divisible by $p$, and our next example shows that this is not the case. \begin{theorema}[See Theorem \ref{Spint}] \label{theospint} Let $k$ be a field of characteristic $p\geq 3$ and let $kS_p$ be the group algebra of the symmetric group on $p$ letters. Then $\mathrm{dim}_k(\HH^1(kS_p))=1$ and $\HH^1(kS_p)= \HH^1_{\rm int}(kS_p)$. \end{theorema} Along the way, in Theorem \ref{dim} we give a formula for the dimension of $\HH^1(kS_n)$ for any $n$. The same formula has also been obtained independently in the recent work \cite{BKL}. The first part of Theorem \ref{Spint}, that $\mathrm{dim}_k(\HH^1(kS_p))=1$, is an immediate consequence. \subsection*{Outline} The sections of this paper can be read independently. In Section \ref{sec_int_Ger} we survey some of the work of Gerstenhaber on integrable derivations and note some consequences. Here we prove Theorems \ref{theo_lie} and \ref{theoinvarianceint}. The main result of Section \ref{cexamplesolv} is Theorem \ref{theonotsolvable} which provides the first counterexample. In Section \ref{HH1symgroup} we give a formula for the dimension of $\HH^1(kS_n)$. The main result in Section \ref{cexampleexist} is Theorem \ref{theospint} which provides the second counterexample.
The appendix contains a dictionary explaining the more general terminology used by Gerstenhaber in \cite{G}, and how it relates to the setting considered here. \section{Integrable derivations}\label{sec_int_Ger} Let $A$ be an algebra over a commutative ring $k$, and let $\Der(A)$ denote the space of $k$-linear derivations on $A$. Gerstenhaber investigated integrable derivations in \cite{G}, but since his work was written in substantial generality, in the language of ``composition complexes'', many of its results are not well-known. In this section we present some of the results of \cite{G} in more familiar terms (but in particular Lemma \ref{local_lem} and its consequences are new to this work). The main result of this section is that if $A$ is an Artin algebra, the class of integrable derivations forms a Lie subalgebra of $\Der(A)$. In order to define integrable derivations we consider the $k$-algebras $A[t]/(t^n)$ and their limit $A\llbracket t\rrbracket$. We denote by $\Aut_1(A[t]/(t^n))$ the group of $k[t]/(t^n)$-algebra automorphisms $\alpha$ which yield the identity modulo $t$. Any such automorphism can be expanded \[ \alpha = 1 + \alpha_1t+\alpha_{2}t^{2}+\cdots + \alpha_{n-1}t^{n-1} \] for some $k$-linear maps $\alpha_i\colon A\to A$. The first of these, $\alpha_1$, is a derivation on $A$, and in general the maps satisfy $\alpha_i(xy)=x\alpha_i(y) + \alpha_1(x)\alpha_{i-1}(y)+\dots + \alpha_i(x)y$ for all $i$. A sequence $\alpha_1,...,\alpha_{n-1}$ of linear endomorphisms of $A$ satisfying these identities is called a Hasse-Schmidt derivation of order $n$. One similarly interprets elements $\alpha\in \Aut_1(A\llbracket t\rrbracket)$ as infinite sequences $\alpha_1,\alpha_2,...$ of linear endomorphisms of $A$, and these are called Hasse-Schmidt derivations of infinite order. These were studied in \cite{HS} under the name higher derivation.
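For a concrete instance of these identities, take $A=\mathbb{Q}[x]$ and $\alpha_i=\frac{1}{i!}\big(\frac{d}{dx}\big)^i$, so that $\alpha$ is the substitution automorphism $x\mapsto x+t$ of $A\llbracket t\rrbracket$. The following sketch (an illustrative check assuming Python with \texttt{sympy}; the choice of algebra and derivation is ours) verifies the identities $\alpha_i(fg)=\sum_{j=0}^{i}\alpha_j(f)\,\alpha_{i-j}(g)$, with $\alpha_0=\mathrm{id}$, for sample polynomials.

```python
import sympy as sp
from math import factorial

x = sp.symbols('x')

# Hasse-Schmidt derivation of infinite order on Q[x] (illustrative choice):
# alpha_i = (d/dx)^i / i!, so alpha = 1 + alpha_1 t + alpha_2 t^2 + ...
# is the substitution automorphism x -> x + t.
def a(i, f):
    return sp.diff(f, x, i) / factorial(i)

f = x**3 + 2*x
g = x**2 - 1

# Check alpha_i(f g) = sum_{j=0}^{i} alpha_j(f) alpha_{i-j}(g)
for i in range(6):
    lhs = sp.expand(a(i, f*g))
    rhs = sp.expand(sum(a(j, f) * a(i - j, g) for j in range(i + 1)))
    assert lhs == rhs
print("Hasse-Schmidt identities hold up to order 5")
```

Here the identities reduce to the divided-power Leibniz rule for higher derivatives.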
Extending this slightly, we set $\Aut_m(A[t]/(t^n))$ to be the group of $k[t]/(t^n)$-algebra automorphisms $\alpha$ which yield the identity modulo $t^m$. Any such automorphism can be expanded \[ \alpha = 1 + \alpha_mt^m+\alpha_{m+1}t^{m+1}+\cdots + \alpha_{n-1}t^{n-1} \] for some $\alpha_i\colon A\to A$. The first nonvanishing coefficient $\alpha_m$ is always a derivation on $A$. The same goes for the power series algebra $A\llbracket t\rrbracket$ and the corresponding automorphisms in $\Aut_m(A\llbracket t\rrbracket)$. \begin{definition}[\cite{HS,G,R}] A $k$-linear derivation $D$ on $A$ will be called $[m,n)$-{integrable} if there is an automorphism $\alpha\in \Aut_m(A[t]/(t^n))$ such that $D=\alpha_m$. We say that $D$ is {$[m,\infty)$-integrable} if it is {$[m,n)$-integrable} for all $n$. And we say that $D$ is {$[m,\infty]$-integrable} if there is an automorphism $\alpha\in \Aut_m(A\llbracket t\rrbracket)$ such that $D=\alpha_m$. We will write \[ \Der_{[m,n)}(A) =\big\{ \ k\text{-linear } [m,n)\text{-integrable derivations on } A\ \big\}, \] and we will use similar notation for $[m,\infty)$- and $[m,\infty]$-integrable derivations. The derivations that are $[1,\infty]$-integrable are simply known as integrable, and we will also use the notation $\iDer(A)=\Der_{[1,\infty]}(A)$. \end{definition} \begin{remark} The $[m,n)$-{integrable} derivations are closed under addition and subtraction---that is, $\Der_{[m,n)}(A)$ is an additive subgroup of $\Der(A)$. If $\alpha,\alpha'\in \Aut_m(A[t]/(t^n))$ then the automorphisms \[ \alpha\alpha' = 1 + (\alpha_m+\alpha'_m)t^m+\cdots \quad\text{and}\quad \alpha^{-1}= 1-\alpha_mt^m+\cdots \] are witness to the fact that $\alpha_m+\alpha'_m$ and $-\alpha_m$ are $[m,n)$-{integrable}. The same goes for $[m,\infty)$- and $[m,\infty]$-integrable derivations. In the case $m=1$, $\Der_{[1,n)}(A)$ is moreover a submodule of $\Der(A)$ over ${\rm Z}(A)$---the centre of $A$.
This can be seen from the automorphism \[ 1+z\alpha_1t+z^2\alpha_2t^2+\cdots \] that exists for any $\alpha \in \Aut_1(A[t]/(t^n))$ and $z\in {\rm Z}(A)$. \end{remark} \begin{remark} All inner derivations are integrable. Indeed if $a\in A$, then $1+at$ is a unit in $A\llbracket t\rrbracket$ and the automorphism ${\rm ad}(1+at) = 1 + [a,-]t+ \cdots $ shows that $[a,-]$ is integrable. \end{remark} \begin{definition} The first Hochschild cohomology of $A$ over $k$ is the quotient $\HH^1(A)$ of $\Der(A)$ by the space of inner derivations. We denote by $\HH^1_{\rm int}(A)$ the image of $\iDer(A)$ in $\HH^1(A)$. By the previous two remarks, a class in Hochschild cohomology is integrable if and only if all or any one of its representatives is integrable. \end{definition} \begin{remark} \label{rem_bracket_restr} If $D$ and $D'$ are derivations that are $[m,n)$-integrable and $[m',n)$-integrable respectively, then the commutator $[D,D']$ is an $[m+m',n)$-integrable derivation. Indeed, a computation shows that \[ \alpha \alpha'\alpha^{-1}\alpha'^{-1} = 1 + (\alpha_m\alpha'_{m'}-\alpha'_{m'}\alpha_{m})t^{m+m'}+\cdots \] for $\alpha \in \Aut_m(A[t]/(t^n))$ and $\alpha' \in \Aut_{m'}(A[t]/(t^{n}))$. If $A$ contains a field of characteristic $p$, then the $p$th power of any derivation is again a derivation, and together with the commutator bracket this makes $\Der(A)$ into a restricted Lie algebra. If $D$ is an $[m,n)$-integrable derivation then $D^p$ is a $[pm,n)$-integrable derivation. Indeed, this time one computes \[ \alpha^p = 1 + \alpha_m^pt^{pm}+\cdots \] for $\alpha \in \Aut_m(A[t]/(t^n))$. This structure is studied in \cite{R}. This remark shows that $[m,n)$-integrable derivations arise even if one is interested only in $[1,n)$-integrable derivations. \end{remark} Gerstenhaber works locally in \cite{G}, assuming that $k$ is an algebra over $\mathbb{Z}_p$ (the integers localised at $p$) for some prime $p$.
Using the next lemma, in which we write $k_p=k\otimes_{\mathbb{Z}}\mathbb{Z}_p$ and $A_p=A\otimes_{\mathbb{Z}}\mathbb{Z}_p$, we can readily reduce to this case. Readers interested in algebras over local rings or fields can disregard the lemma. \begin{lemma}\label{local_lem} Let $A$ be a Noether algebra over a commutative Noetherian ring $k$. A $k$-linear derivation on $A$ is {$[m,n)$-integrable} if and only if for all primes $p$, the induced $k_p$-linear derivation on $A_p$ is {$[m,n)$-integrable}. \end{lemma} \begin{proof} The forward implication is clear. Conversely, assume that for each prime $p$ there is an automorphism $\alpha= 1+ \alpha_mt^m +\cdots + \alpha_{n-1}t^{n-1}$ in $\Aut_m(A_p[t]/(t^n))$ such that $\alpha_m=D$. Consider the element $(\alpha_i)\in \bigoplus_{i=m}^{n-1}\Hom_{k_p}(A_p,A_p)$. Since $k$ is Noetherian and $A$ is finitely generated as a $k$-module, $\bigoplus_{i=m}^{n-1}\Hom_{k_p}(A_p,A_p)\cong \big(\bigoplus_{i=m}^{n-1}\Hom_{k}(A,A)\big)_p$, and therefore there is a sequence $(\beta_i)\in \bigoplus_{i=m}^{n-1}\Hom_{k}(A,A)$ and an integer $u$ coprime to $p$ such that $(\alpha_i)=(\frac{\beta_i}{u})$ in $\bigoplus_{i=m}^{n-1}\Hom_{k_p}(A_p,A_p)$, and we can assume that $\beta_m= uD$. There is then an equality \[ 1+ u^m\alpha_mt^m +\cdots + u^{n-1}\alpha_{n-1}t^{n-1} = 1+ u^{m-1}\beta_mt^m +\cdots + u^{n-2}\beta_{n-1}t^{n-1} \] in $\Aut_m(A_p[t]/(t^n))$. In particular, for each $m\leqslant i<n$ the Hasse-Schmidt identity \[ u^{i-1}\beta_i(xy)-u^{i-1}x\beta_i(y) - u^{i-2}\beta_m(x)\beta_{i-m}(y)-\dots -u^{i-1} \beta_i(x)y =0 \] holds, when interpreted in $\Hom_{k_p}(A_p\otimes_{k_p}A_p,A_p)$. Since $\Hom_{k_p}(A_p\otimes_{k_p}A_p,A_p)\cong \Hom_{k}(A\otimes_{k}A,A)_p$ we may find an integer $v_i$ coprime to $p$ such that \[ v_i\big[u^{i-1}\beta_i(xy)-u^{i-1}x\beta_i(y) - u^{i-2}\beta_m(x)\beta_{i-m}(y)-\dots -u^{i-1} \beta_i(x)y\big] =0 \] holds in $\Hom_{k}(A\otimes_{k}A,A)$. Now set $v=v_m\cdots v_{n-1}$.
It follows that the sequence \[ (v^mu^{m-1}\beta_m, \ldots, v^{n-1} u^{n-2}\beta_{n-1}) \] satisfies the Hasse-Schmidt identities in $\Hom_{k}(A\otimes_{k}A,A)$ for all $m\leqslant i<n$, and therefore defines an element of $\Aut_m(A[t]/(t^n))$. Now set \[ \gamma_p = 1+v^mu^{m-1}\beta_mt^m+ \cdots+ v^{n-1} u^{n-2}\beta_{n-1}t^{n-1} \quad\text{and}\quad w_p=v^mu^m \] (up until now our notation has not indicated the dependence on $p$). Note that $w_pD=v^mu^{m-1}\beta_m$ by construction. The ideal $(w_p ~:~ p\text{ is a prime}) \subseteq \mathbb{Z}$ is contained in no prime ideal $(p)$, since $w_p$ is coprime to $p$, and is consequently the unit ideal. This means there are primes $p_1,\ldots,p_r$ and integers $a_1,\ldots,a_r$ such that $a_1w_{p_1}+\cdots +a_rw_{p_r}=1$. The automorphism \[ \gamma=\gamma_{p_1}^{a_1}\cdots \gamma_{p_r}^{a_r}\in \Aut_m(A[t]/(t^n)) \] has as its $t^m$ coefficient $a_1w_{p_1}D+\cdots+ a_rw_{p_r}D=D$. Therefore $\gamma$ shows that $D$ is $[m,n)$-integrable. \end{proof} Certainly all $[m,\infty]$-integrable derivations are $[m,\infty)$-integrable. It seems to be open in general whether the inclusion $\Der_{[m,\infty]}(A)\subseteq \Der_{[m,\infty)}(A)$ can be strict. However, the next result, which is essentially due to Gerstenhaber, shows that equality holds for Artin algebras. \begin{lemma}\label{lem_infty} Let $A$ be an Artin algebra over a commutative Artinian ring $k$. A $k$-linear derivation on $A$ is {$[m,\infty)$-integrable} if and only if it is {$[m,\infty]$-integrable}. \end{lemma} \begin{proof} By Lemma \ref{local_lem} we may assume that $k$ is an algebra over $\mathbb{Z}_p$, so that the results of \cite{G} apply. Since $\Der(A)$ is an Artinian $k$-module, the sequence of submodules $\Der_{[m,n)}(A)$ must eventually stabilise, and so there is an $n$ such that $\Der_{[m,n)}(A)=\Der_{[m,\infty)}(A)$. By {\cite[Theorem 5]{G}} this implies that $\Der_{[m,\infty)}(A)=\Der_{[m,\infty]}(A)$.
\end{proof} It is easy to show that a $[1,\infty]$-integrable derivation is $[m,\infty]$-integrable for every $m$ (see \cite[Theorem 3.6.6]{R2}). The next result, due to Gerstenhaber, shows that the converse is also true. \begin{theorem}\label{th_m_int} Let $A$ be an Artin algebra over a commutative Artinian ring $k$. A $k$-linear derivation on $A$ is {$[1,\infty]$-integrable} if and only if it is {$[m,\infty]$-integrable} for some $m\geq 1$. \end{theorem} \begin{proof} By Lemma \ref{local_lem} we may assume that $k$ is an algebra over $\mathbb{Z}_p$. Let $D$ be an {$[m,\infty]$-integrable} derivation on $A$. Since $\Der(A)$ is an Artinian $k$-module there is an $n$ such that $\Der_{[1,n)}(A)=\Der_{[1,\infty)}(A)$, and by Lemma \ref{lem_infty} this equals $\Der_{[1,\infty]}(A)$, so it suffices to show that $D$ is $[1,n)$-integrable. There is an automorphism \[ \alpha = 1+Dt^m+\alpha_{m+1}t^{m+1}+\cdots +\alpha_{mn}t^{mn} \] in $\Aut_m(A[t]/(t^{mn+1}))$, and applying \cite[Theorem 3]{G} to this yields an automorphism \[ \alpha' = 1+Dt^m+\alpha_{2m}'t^{2m}+\cdots +\alpha_{mn}'t^{mn} \] involving only powers of $t^m$. Replacing $t^m$ with $t$ finishes the proof. \end{proof} At this point we can deduce that $\iDer(A)=\Der_{[1,\infty]}(A)$ forms a Lie algebra; this was originally proven by Gerstenhaber \cite[Corollary 1]{G}, assuming that $k=k_p$ for some prime $p$. \begin{corollary}\label{cor_Lie} If $A$ is an Artin algebra over a commutative Artinian ring $k$, then $\iDer(A)$ is a Lie subalgebra of $\Der(A)$, and if moreover $A$ contains a field of characteristic $p$, then $\iDer(A)$ is a restricted Lie subalgebra of $\Der(A)$. By the same token, $\HH^1_{\rm int}(A)$ is a (restricted) Lie subalgebra of $\HH^1(A)$. \end{corollary} \begin{proof} This follows from Remark \ref{rem_bracket_restr} and Theorem \ref{th_m_int}. \end{proof} The class of integrable derivations is known to have good invariance properties, and we may use Corollary \ref{cor_Lie} to upgrade these invariance results to statements about Lie algebras.
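The leading-term formula in Remark \ref{rem_bracket_restr}, which underlies Corollary \ref{cor_Lie}, is a purely formal identity about truncated series $1+\alpha_mt^m+\cdots$ whose coefficients need only compose bilinearly, so it can be sanity-checked numerically. The following sketch (our own illustration, not from \cite{G}; all names are ours) models the coefficient maps by $2\times 2$ integer matrices and a series by its list of coefficients:

```python
# Our own numerical check: for alpha in Aut_m and alpha' in Aut_{m'}, the
# group commutator is 1 + [alpha_m, alpha'_{m'}] t^{m+m'} + (higher order).
# Series are lists of N matrix coefficients, multiplied by convolution.

N = 8                      # work modulo t^N
ZERO = [[0, 0], [0, 0]]
ONE = [[1, 0], [0, 1]]

def mmul(x, y):            # 2x2 matrix product
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(x, y):
    return [[x[i][j] + y[i][j] for j in range(2)] for i in range(2)]

def mneg(x):
    return [[-e for e in row] for row in x]

def smul(a, b):            # product of two truncated series
    out = []
    for k in range(N):
        c = ZERO
        for i in range(k + 1):
            c = madd(c, mmul(a[i], b[k - i]))
        out.append(c)
    return out

def sinv(a):               # inverse of a series with constant coefficient 1
    b = [ONE] + [ZERO] * (N - 1)
    for k in range(1, N):
        c = ZERO
        for i in range(1, k + 1):
            c = madd(c, mmul(a[i], b[k - i]))
        b[k] = mneg(c)
    return b

m, mp = 2, 3               # orders of vanishing, as in the remark
am = [[1, 2], [3, 4]]      # the coefficient alpha_m      (arbitrary choice)
bmp = [[0, 1], [1, 0]]     # the coefficient alpha'_{m'}  (arbitrary choice)
alpha = [ONE] + [ZERO] * (m - 1) + [am] + [[[5, 0], [0, 7]]] * (N - m - 1)
beta = [ONE] + [ZERO] * (mp - 1) + [bmp] + [[[2, 2], [0, 1]]] * (N - mp - 1)

comm = smul(smul(alpha, beta), smul(sinv(alpha), sinv(beta)))
bracket = madd(mmul(am, bmp), mneg(mmul(bmp, am)))
assert all(comm[k] == ZERO for k in range(1, m + mp))  # vanishing below t^(m+m')
assert comm[m + mp] == bracket                         # leading coefficient
```

The two assertions verify exactly the display in Remark \ref{rem_bracket_restr}: every coefficient of the group commutator below $t^{m+m'}$ vanishes, and the coefficient of $t^{m+m'}$ is the commutator of the leading coefficients, regardless of the higher coefficients chosen.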
The next result builds on the work of Rouquier, Huisgen-Zimmermann, Saor\'in, Keller, and Linckelmann. When $k$ has characteristic zero all derivations are integrable, so that $\HH_{\rm int}^1(A)=\HH^1(A)$, and the theorem is well-known in this case. \begin{theorem} \label{invarianceint} Let $A$ and $B$ be two finite dimensional split algebras over a field $k$. Assume either that $A$ and $B$ are derived equivalent, or that $A$ and $B$ are self-injective and stably equivalent of Morita type. Then $\HH_{\rm int}^1(A)\cong \HH_{\rm int}^1(B)$ as Lie algebras, and this is an isomorphism of restricted Lie algebras if $k$ is of positive characteristic. \end{theorem} \begin{proof} Assume first that $A$ and $B$ are derived equivalent. The identity component $\mathrm{Out}(A)^\circ$ of the outer automorphism group scheme of $A$ is a derived invariant by \cite{HZS,Ruq}. Therefore the set \[ \mathrm{Out}_1(A\llbracket t\rrbracket)^\circ=\{ f\in \Hom_{k\text{-scheme}}({\rm Spec}( k\llbracket t\rrbracket),\mathrm{Out}(A)^\circ) ~:~ f(0)=1\} \] is a derived invariant as well, and we have $\mathrm{Out}_1(A\llbracket t\rrbracket)^\circ\cong \mathrm{Out}_1(B\llbracket t\rrbracket)^\circ$. Since $\HH^1(A)\cong \Ext^1_{A^{\rm op}\otimes A}(A,A)$ the first Hochschild cohomology is also a derived invariant, and we have $\HH^1(A)\cong \HH^1(B)$. Both isomorphisms above are realised by tensoring with a complex of $A$-$B$ bimodules \cite{Rick}, and it follows that the \emph{map} below is a derived invariant: \[ \pi\colon \mathrm{Out}_1(A\llbracket t\rrbracket)^\circ \longrightarrow \HH^1(A) \quad \alpha = 1 + \alpha_1t+\alpha_2t^2+\cdots \mapsto \alpha_1. \] Therefore the image $\HH^1_{\rm int}(A)={\rm im}(\pi)$ is a derived invariant as well (cf.~the proof of \cite[Theorem 5.1]{ML} for a similar argument). The fact that the isomorphism $\HH^1_{\rm int}(A)\cong \HH^1_{\rm int}(B)$ is one of Lie algebras now follows from Corollary \ref{cor_Lie} and \cite[Section 4]{KellerIII}.
In positive characteristic this respects the $p$-power structure by \cite[(3.2)]{KellerI} combined with \cite[Theorem 2]{BR}. For the case of self-injective algebras that are stably equivalent of Morita type, there is by \cite[Theorem 5.1]{ML} an isomorphism $\HH_{\rm int}^1(A)\cong \HH_{\rm int}^1(B)$ induced by a transfer map, and this is an isomorphism of restricted Lie algebras by Corollary \ref{cor_Lie} together with \cite[Theorem 1.1]{R} and \cite[Theorem 1]{BR}. \end{proof} \subsection{Obstructions to integrability}\label{sub_obs} The results of \cite{G} are proven using an obstruction theory for integrability, which we explain now. For any automorphism $\alpha = 1+\alpha_1t+\cdots+\alpha_{n-1}t^{n-1}$ in $\Aut_1(A[t]/(t^n))$ we define a $k$-linear map $\widetilde{\obs}(\alpha)\colon A\otimes A\to A$ by the rule \[ \widetilde{\obs}(\alpha)(x\otimes y)=\alpha_1(x)\alpha_{n-1}(y)+\dots + \alpha_{n-1}(x)\alpha_1(y). \] Viewed as a degree $2$ element in the Hochschild cochain complex $C^*(A)$, one checks that $\widetilde{\obs}(\alpha)$ is a cocycle, and therefore defines a cohomology class \[ \obs(\alpha)=[\widetilde{\obs}(\alpha)] \quad\text{in}\quad \HH^2(A). \] In order to extend $\alpha$ to an element of $\Aut_1(A[t]/(t^{n+1}))$, we must find a $k$-linear endomorphism $\alpha_n\colon A\to A$ satisfying the Hasse-Schmidt identity \[ \alpha_n(xy)=x\alpha_n(y) + \alpha_1(x)\alpha_{n-1}(y)+\dots + \alpha_n(x)y. \] This may be rearranged and formulated using the Hochschild cochain complex: \[ \partial(\alpha_n) = \widetilde{\obs}(\alpha)\quad\text{in}\quad C^2(A). \] We obtain the statement of \cite[Proposition 5]{G}: an automorphism $\alpha\in \Aut_1(A[t]/(t^{n})) $ can be extended to $\Aut_1(A[t]/(t^{n+1}))$ if and only if $\obs(\alpha)=0$. For any $[m,n)$-integrable derivation $D$, there is by definition an automorphism $\alpha = 1+\alpha_mt^m+\cdots+\alpha_{n-1}t^{n-1}$ in $\Aut_m(A[t]/(t^n))$ with $\alpha_m=D$.
The above result shows that if $\obs(\alpha)=0$, then $D$ is $[m,n+1)$-integrable. We caution that $D$ being $[m,n+1)$-integrable does not necessarily imply that $\obs(\alpha)=0$, only that \emph{some} automorphism in $\Aut_m(A[t]/(t^n))$ extending $1+Dt^m$ is unobstructed. The \emph{obstruction order} of an automorphism $\alpha$ in $ \Aut_1(A[t]/(t^i))$ is $n$ if it admits an extension to an automorphism in $\Aut_1(A[t]/(t^n))$, but admits no extension to an automorphism in $\Aut_1(A[t]/(t^{n+1}))$. Gerstenhaber proves the following two facts about the above obstruction theory. \begin{theorem}[{\cite[Theorem 1, Theorem 2, and Corollary 1]{G}}]\label{th_obs} Let $A$ be an algebra over a commutative ring $k$. \begin{enumerate} \item\label{th_obs_1} If $\alpha,\alpha'\in \Aut_1(A[t]/(t^n))$ then $\obs(\alpha\alpha')=\obs(\alpha)+\obs(\alpha')$ in $\HH^2(A)$. \item\label{th_obs_2} Assume $k=k_p$ for some prime $p$. If $D$ is a derivation then the obstruction order of $1+Dt$ is (if finite) of the form $p^e$. Moreover, in this case the obstruction order of $1+Dt^m$ is $mp^e$. \end{enumerate} \end{theorem} Because of (\ref{th_obs_2}), a derivation $D$ is said to have \emph{obstruction exponent} $e$ if $1+Dt$ has obstruction order $p^e$. To illustrate how the obstruction theory is used, we give Gerstenhaber's beautiful proof of the analogue of Corollary \ref{cor_Lie} for finitely integrable derivations (which are especially connected with jet spaces). \begin{corollary}[\cite{G}] If $A$ is a Noether algebra over a commutative Noetherian ring $k$, then $\Der_{[1,n)}(A)$ is a Lie subalgebra of $\Der(A)$, and if moreover $A$ contains a field of characteristic $p$, then $\Der_{[1,n)}(A)$ is a restricted Lie subalgebra of $\Der(A)$. \end{corollary} \begin{proof} By Lemma \ref{local_lem} we may assume that $k=k_p$, so that Theorem \ref{th_obs} part (\ref{th_obs_2}) applies.
Taking $D,D'\in \Der_{[1,n)}(A)$, we may assume by Theorem \ref{th_obs} part (\ref{th_obs_2}) that $n=p^e$ is a power of $p$. Write $D=\alpha_1$ and $D'=\alpha_1'$ for two $\alpha,\alpha'\in \Aut_1(A[t]/(t^{p^e}))$. As in Remark \ref{rem_bracket_restr} the automorphism $\alpha \alpha'\alpha^{-1}\alpha'^{-1} $ shows that $[D,D']$ is $[2,p^e)$-integrable. However, by Theorem \ref{th_obs} part (\ref{th_obs_1}) we have $\obs(\alpha\alpha'\alpha^{-1}\alpha'^{-1})=0$, therefore $[D,D']$ is in fact $[2,p^e+1)$-integrable. By Theorem \ref{th_obs} part (\ref{th_obs_2}) it must therefore be $[2,2p^e)$-integrable, as $2p^{e-1}<p^e+1$. By \cite[Theorem 3]{G} this implies that $[D,D']$ is $[1,p^e)$-integrable. A similar argument yields the second claim, since $\obs(\alpha^p)=p\obs(\alpha)=0$ using Theorem \ref{th_obs} part (\ref{th_obs_1}). \end{proof} \begin{remark} Linckelmann considers a more general notion of integrable derivation in \cite{ML}, replacing $k\llbracket t \rrbracket$ with any discrete valuation ring. It would be interesting to develop the obstruction theory using this definition, and to see whether the results above extend to this context as well. \end{remark} \section{Counterexamples to solvability} \label{cexamplesolv} \subsubsection*{Notation} For the next three sections we denote by $S_n$ the symmetric group on $n$ letters, by $A_n$ the alternating group and by $C_n$ the cyclic group of order $n$. When dealing with groups we use the superscript notation for the $n$-fold direct product and we denote by $N\rtimes H$ the semidirect product of the normal subgroup $N$ by $H$. We denote by $k$ a field and by $k^{+}$ its additive group. In this section we give a counterexample concerning the solvability of the Lie algebra of integrable derivations. \begin{question} [{\cite[Question 8.2]{L}}] When is the Lie algebra $\HH_{\rm int}^1(A)$ solvable?
\end{question} In the same article Linckelmann suggests that, based on the examples, $\HH^1_{\rm int}(A)$ should be a solvable Lie algebra if $A$ is a block of a finite group algebra $kG$ over an algebraically closed field of prime characteristic. We provide a negative answer to this suggestion by considering the group algebra $kP$ of an elementary abelian $p$-group $P$ of rank $2$, that is, $kP=k(C_p\times C_p)$ where $C_p$ is a cyclic group of order $p$. In this case the group algebra $kP$ coincides with its unique block. \begin{theorem} \label{notsolvable} Let $k$ be a field of characteristic $p\geq 3$. Let $A=k(C_p\times C_p)$. Then $\HH^1_{\rm int}(A)$ is not solvable. \end{theorem} \begin{proof} Note that $A=k(C_p\times C_p)\cong k[x,y]/(x^p, y^p)$. Then $\HH^1(A)$ is a Jacobson-Witt algebra \cite{J} and has a $k$-basis given by the derivations \[ \{f_{a,b}\ |\ 0 \leq a,b\leq p-1\}\cup \{g_{c,d}\ |\ 0 \leq c,d\leq p-1\} \] where $f_{a,b}(x)=x^ay^b$, $f_{a,b}(y)=0$ and $g_{c,d}(y)=x^cy^d$, $g_{c,d}(x)=0$. Note that $\HH^1_{\rm int}(A)$ has a $k$-basis given by the above basis of $\HH^1(A)$ with the derivations $f_{0,0}$ and $g_{0,0}$ removed. In order to prove that $f_{1,0}$ is an integrable derivation we exhibit an automorphism $\alpha\in \mathrm{Aut}_1(A\llbracket t\rrbracket)$ with first coefficient $f_{1,0}$, namely the $k\llbracket t\rrbracket$-algebra automorphism determined by $\alpha(x)=x+xt$ and $\alpha(y)=y$. It is easy to check that this preserves the relations $x^p=y^p=0$. Using the same argument we can prove that $g_{0,1}$ is integrable as well. Then we use the fact that the space of integrable derivations forms a ${\rm Z}(A)$-module. Since $kP$ is a commutative algebra, so that ${\rm Z}(A)=A$, we obtain that the rest of the derivations in the basis, aside from $f_{0,0}$ and $g_{0,0}$, are integrable. The derivations $f_{0,0}$ and $g_{0,0}$ do not preserve the Jacobson radical, hence they are not integrable (see Corollary 2.1 in \cite{FGM}). Let $\f,\e,\h$ be a basis of $\sl_2(k)$ satisfying $[\e,\f] = \h$, $[\h,\f] = -2\f$, and $[\h,\e] = 2\e$.
The derived subalgebra of $\HH^1_{\rm int}(A)$ contains the Lie algebra $\sl_2(k)$ via the map sending $\f$ to $f_{0,1}(=[f_{0,1},f_{1,0}])$, $\h$ to $f_{1,0}-g_{0,1}(=[g_{1,0},f_{0,1}])$, and $\e$ to $g_{1,0}(=[g_{1,0},g_{0,1}])$. The statement follows. \end{proof} By the same argument one shows: \begin{corollary} Let $k$ be a field of characteristic $p$. Let $P$ be an elementary abelian $p$-group of rank greater than $2$. Then $\HH^1_{\rm int}(kP)$ is not solvable. \end{corollary} \section{The first Hochschild cohomology of the symmetric group} \label{HH1symgroup} In this section we give a formula for the dimension of $\HH^1(kS_n)$. We start by recalling some basic facts on the representation theory of the symmetric group. \begin{definition} A \emph{partition} of a nonnegative integer $n$ is a decreasing sequence of positive integers $\lambda_1 > \lambda_2 > \dots > \lambda_s >0$ together with positive integers $e_1,\dots, e_s$ such that $e_1\lambda_1+\dots +e_s\lambda_s=n$. We use the notation $\lambda = (\lambda_1^{e_1},\dots,\lambda_s^{e_s})$, and we say that $\lambda_i$ is the \emph{$i$th part} of $\lambda$, and $e_i$ is the \emph{multiplicity} of $\lambda_i$. We denote by $\mathcal{P}(n)$ the set of all partitions of $n$. \end{definition} We recall that the conjugacy classes of $S_n$ are in bijection with the partitions of $n$, with the conjugacy class of an element $x$ corresponding to the partition $\lambda$ determined by the cycle type of $x$. That is, if $x=c_{1,1} \dots c_{1,e_1}\dots c_{s,1} \dots c_{s,e_s}$ in disjoint cycle notation (including cycles of length one), with $c_{i,j}$ a cycle of length $\lambda_i$, then $\lambda = (\lambda_1^{e_1},\dots,\lambda_s^{e_s})$. We begin by computing $\HH^1(kS_n)$ using the centraliser decomposition of Hochschild cohomology. \begin{theorem} \label{dim} Let $k$ be a field of characteristic $p$ and let $S_n$ be the symmetric group on $n$ letters.
Then we have the following decomposition: \[ \HH^1(kS_n)=\bigoplus_{\lambda\in \mathcal{P}(n)}\Hom \Big (\prod_{i=1}^{s} (C_{\lambda_i}\times (S_{e_i}/A_{e_i})),k^{+} \Big ). \] \end{theorem} \begin{proof} For each conjugacy class of $S_n$, corresponding to the partition $\lambda$, let $x$ be a representative element. Using the decomposition of $\mathrm{HH}^{1}(kS_n)$ into the direct sum of the first group cohomology of centraliser subgroups we have: \[\HH^1(kS_n)=\bigoplus_{\lambda \in \mathcal{P}(n)} \mathrm{H}^1(C_{S_n}(x),k)=\bigoplus_{\lambda \in \mathcal{P}(n)} \Hom(C_{S_n}(x),k^{+}).\] The first step is to study the centraliser $C_{S_n}(x)$. As a consequence of the fact that conjugation permutes cycles of the same length we have that $C_{S_n}(x)=\prod_{i=1}^sC_{\lambda_i} \wr S_{e_i}$ where $\wr$ denotes the wreath product of $C_{\lambda_i}$ by $S_{e_i}$. In fact, there are two groups that sit inside $C_{S_n}(x)$ and that generate $C_{S_n}(x)$. The first one is $E:=S_{e_1}\times \dots \times S_{e_s}$ and the second is $\prod_{i=1}^s C^{e_i}_{\lambda_i}$. It is easy to check that $C_{S_n}(x)=\prod_{i=1}^s(C^{e_i}_{\lambda_i} \rtimes S_{e_i})$ where $S_{e_i}$ acts on the direct product $C^{e_i}_{\lambda_i}$ by permutation. The next step is to study the abelianisation of $C_{S_n}(x)$. Note that the derived subgroup of $C_{S_n}(x)$ is given by \[\prod_{i=1}^s[C_{\lambda_i}\wr S_{e_i}, C_{\lambda_i}\wr S_{e_i}].\] In general, the derived subgroup of a semi-direct product $ N\rtimes H$ is equal to $([N,N][N,H])\rtimes[H,H]$. In our case $H=S_{e_i}$ and $N=C^{e_i}_{\lambda_i}$. So $[S_{e_i}, S_{e_i}]=A_{e_i}$ and $[N,N]=1$. It is easy to check that $[C^{e_i}_{\lambda_i}, S_{e_i}]$ is isomorphic to $C_{\lambda_i}^{e_{i}-1}$. Hence \[[C_{\lambda_i}\wr S_{e_i},C_{\lambda_i}\wr S_{e_i}]\cong C_{\lambda_i}^{e_{i}-1}\rtimes A_{e_i}.\] Consequently the abelianisation of $C_{S_n}(x)$ is isomorphic to \[\prod_{i=1}^s (C_{\lambda_i}\times S_{e_i}/A_{e_i}).\] The statement follows.
\end{proof} \begin{lemma}\label{lem_counting} Let $p$ be a prime, and $n$ a nonnegative integer. The number of parts of length divisible by $p$ in all partitions of $n$, counted without multiplicity, is equal to the number of parts of length $p$ in all partitions of $n$, counted with multiplicity. \end{lemma} \begin{proof} Using the notation $\lambda=(\lambda_1^{e_1},\dots,\lambda_s^{e_s})$ for a partition of $n$, we consider the set $\mathcal{S}_1$ of pairs $(\lambda, e)$ with $\lambda_i =p$ for some $i$ and $1\leq e\leq e_i$, and the set $\mathcal{S}_2$ of pairs $(\lambda, \lambda_i)$ with $p\mid \lambda_i$. We define a function $\mathcal{S}_1\to \mathcal{S}_2$ by the rule $(\lambda,e)\mapsto (\lambda',ep)$, where $\lambda'=(\lambda_1^{e_1},\dots,(ep)^{e'},\dots ,p^{e_i-e},\dots,\lambda_s^{e_s})$ and where $e'=e_j+1$ if $\lambda_j=ep$ was already a part of $\lambda$, or $e'=1$ if not (if it happens that $e=1$, then $\lambda=\lambda'$). This function is a bijection, and therefore $|\mathcal{S}_1|=|\mathcal{S}_2|$. Since $|\mathcal{S}_1|$ is the number of parts of length $p$ in all partitions of $n$, counted with multiplicity, and $|\mathcal{S}_2|$ is the number of parts of length divisible by $p$ in all partitions of $n$, counted without multiplicity, we are done. \end{proof} \begin{theorem} \label{dimHH1Snot2} If the characteristic of the field $k$ is different from $2$, then \[ \HH^1(kS_n)\cong\bigoplus_{\lambda\in \mathcal{P}(n)}\Hom \Big (\prod_{p | \lambda_i} C_{\lambda_i},k^{+}\Big). \] Therefore, $\dim_k(\HH^1(kS_n))$ is equal to the total number of parts divisible by $p$ in all partitions of $n$, counted without multiplicity. These numbers are equal to the number of parts equal to $p$ in all partitions of $n$, counted with multiplicity, and they are given by the generating series \[ \sum_{n\geq 0} \dim_k(\HH^1(kS_n))t^n=\frac{t^{p}}{1-t^{p}}\prod_{n\geq 1}\frac{1}{(1-t^n)}.
\] \end{theorem} \begin{proof} In Theorem \ref{dim}, the term $S_{e_i}/A_{e_i}$ does not contribute since the characteristic of the field is different from $2$ and $S_{e_i}/A_{e_i}$ is either trivial or isomorphic to $C_2$. This yields the first statement. We also learn that $\dim_k(\HH^1(kS_n))$ is equal to the total number of parts divisible by $p$ in all partitions of $n$, counted without multiplicity. By Lemma \ref{lem_counting}, this is the number of times $p$ occurs as a part in the partitions of $n$, counted with multiplicity. The generating series for this sequence can be found in \cite{Ri}. More precisely, we can associate to any sequence $(a_i)$ the function on partitions $L(\lambda):=\sum a_i \kappa_i$, where $\kappa_i$ is the number of parts of size $i$ in $\lambda$; then \cite[p.~185, Equation 23]{Ri} reads \[ \sum_{n\geq0}t^n\sum_{\lambda\in \mathcal{P}(n)}L(\lambda)= \left(\sum_{n\geq 1}\frac{a_nt^{n}}{1-t^{n}}\right)\prod_{n\geq 1}\frac{1}{(1-t^n)}. \] If we take $a_p=1$ and $a_i=0$ for $i\neq p$ then we obtain the desired series. \end{proof} \begin{theorem} If $k$ is a field of characteristic $2$ then \[ \sum_{n\geq 0} \dim_k(\HH^1(kS_n))t^n=\frac{2t^{2}}{1-t^{2}}\prod_{n\geq 1}\frac{1}{(1-t^n)}. \] \end{theorem} \begin{proof} By Theorem \ref{dim} \[ \HH^1(kS_n)\cong\bigoplus_{\lambda\in \mathcal{P}(n)}\Big(\Hom \big(\prod_{2| \lambda_i} C_{\lambda_i},k^{+}\big)\oplus \Hom \big(\prod_{e_i\geq 2} C_{2},k^{+}\big)\Big). \] So the computation of $\dim_k(\HH^1(kS_n))$ splits into two parts. For the first summand we count the number of parts in partitions of $n$ that are divisible by $2$, counted without multiplicity; as in the proof of Theorem \ref{dimHH1Snot2} this is given by the generating series \[ \frac{t^{2}}{1-t^{2}}\prod_{n\geq 1}\frac{1}{(1-t^n)}. \] For the second summand we must count the number of parts with multiplicity $2$ or more in all partitions of $n$.
In the usual formula for the total number of partitions \[ \sum_{n\geq0} |\mathcal{P}(n)|\, t^n =\prod_{n\geq 1}\frac{1}{(1-t^n)} \] (cf.\ \cite{Ri}), the factor $1/{(1-t^i)}=(1+t^i+t^{2i}+\cdots)$ corresponds to parts of length $i$, with the term $t^{ie}$ accounting for those partitions containing the part $i$ with multiplicity exactly $e$. To modify this formula so as to count partitions with a chosen part of multiplicity $e\geq 2$, we simply replace this factor with $t^{2i}/{(1-t^i)}=(t^{2i}+t^{3i}+t^{4i}+\cdots)$. In total we get \[ \sum_{i\geq 1} \left(t^{2i}\prod_{n\geq 1}\frac{1}{(1-t^n)}\right) = \frac{t^{2}}{1-t^{2}}\prod_{n\geq 1}\frac{1}{(1-t^n)}. \] The statement of the theorem follows. \end{proof} An element $x$ in a finite group is \emph{$p$-regular} if its order is coprime to $p$, and otherwise it is called \emph{$p$-singular}. In the case of $S_n$, the $p$-singular elements are those containing at least one cycle of length divisible by $p$. In other words, the corresponding partition contains a part divisible by $p$. We write $\mathcal{SP}(n)$ for the set of all partitions of $n$ corresponding to conjugacy classes of $p$-singular elements. \begin{corollary} If $k$ is a field of characteristic $p$ then $\mathrm{dim}_k(\HH^1(kS_n))\geq |\mathcal{SP}(n)|$. \end{corollary} Finally, in the next section we will need the following fact. \begin{corollary} \label{S_p} If $k$ is a field of characteristic $p>2$ then $\mathrm{dim}_k(\HH^1(kS_p))=1$. \end{corollary} \begin{proof} By Theorem \ref{dim} we just need to count the number of parts of length $p$ in all partitions of $p$, and there is clearly just one. \end{proof} \begin{remark} Recently, the authors of \cite{BKL} have also computed $\dim_k(\HH^1(kS_n))$ in terms of generating functions. \end{remark} \section{Counterexamples to the existence of non-integrable derivations} \label{cexampleexist} In this section we answer a question considered by Farkas, Geiss and Marcos.
\begin{question}[\cite{FGM}] \label{question2} Let $G$ be a finite group and let $k$ be a field such that $\mathrm{char}(k)$ divides the order of $G$. Must $kG$ admit a non-integrable derivation? \end{question} Since all inner derivations are integrable, a necessary condition for a positive answer is the following: if $G$ is a finite group and the characteristic of the field $k$ divides the order of $G$, then $\HH^1(kG)\neq 0$. This has been shown in \cite{FJLM} using the classification of finite simple groups. The authors state their question in terms of the automorphism group scheme, writing that \emph{It is tempting to conjecture that $kG$ does not have a smooth automorphism group scheme} \cite[below Theorem 2.2]{FGM}. Their question is equivalent to Question \ref{question2} by Theorem 1.2 in \cite{FGM}: the automorphism group scheme of a finite dimensional algebra $A$ is smooth if and only if every derivation of $A$ is integrable. In the following theorem we exhibit a family of counterexamples over any field of odd prime characteristic. \begin{theorem} \label{Spint} Let $k$ be a field of characteristic $p\geq 3$ and let $kS_p$ be the group algebra of the symmetric group on $p$ letters. Then $\HH^1(kS_p)$ has a $k$-basis given by a single integrable derivation. \end{theorem} The first part of Theorem \ref{Spint} will follow from Corollary \ref{S_p}. To prove that the only outer derivation in $\HH^1(kS_p)$ is integrable, we will use the fact that the only non-semisimple block of $kS_p$ is derived equivalent to a symmetric Nakayama algebra. We recall some basic results about blocks of symmetric groups. A \emph {node} $(i,j)$ in the Young diagram $[\lambda]$ of $\lambda$ forms part of the \emph{rim} if $(i + 1, j + 1) \notin [\lambda]$. A \emph{$p$-hook} in $\lambda$ is a connected part of the rim of $[\lambda]$ consisting of exactly $p$ nodes, whose removal leaves the Young diagram of a partition.
The \emph{$p$-core} of $\lambda$, usually denoted by $\gamma(\lambda)$, is the partition obtained by repeatedly removing all $p$-hooks from $\lambda$. The number of $p$-hooks we remove is the \emph{$p$-weight} of $\lambda$, usually denoted by $w$. It is easy to note that the $p$-core of a partition is well-defined, that is, it is independent of the way in which we remove the $p$-hooks. The blocks of group algebras of symmetric groups are determined by $p$-cores and weights: \begin{theorem}[Nakayama Conjecture] The blocks of the symmetric group $S_n$ are labelled by pairs $(\gamma,w)$, where $\gamma$ is a $p$-core and $w$ is the associated $p$-weight such that $n = |\gamma| + pw$. Hence the Specht module $S^{\lambda}$ lies in the block labelled by $(\gamma, w)$ of $kS_n$ if and only if $\lambda$ has $p$-core $\gamma$ and weight $w$. \end{theorem} Note that the statement above holds also for the simple modules, see the paragraph after Theorem 8.3.1 in \cite{Cra}. It is easy to see that blocks of weight $0$ are matrix algebras and blocks of weight $1$ have cyclic defect group. Since we will be mainly interested in blocks with cyclic defect group, we recall some background on Brauer trees and Nakayama algebras. A Brauer graph consists of a finite undirected connected graph together with, for each vertex, a cyclic ordering of the edges incident to it and an integer greater than or equal to one, called the multiplicity of the vertex, see Definition 4.18.1 in \cite{Benson}. To each Brauer graph we can associate a finite dimensional algebra, called a Brauer graph algebra. A Brauer tree is a Brauer graph which is a tree, and has at most one vertex with multiplicity greater than one. Such a vertex is called the exceptional vertex and its multiplicity is called the exceptional multiplicity. A Brauer star is a Brauer tree of star shape, having $n$ edges and the exceptional vertex in the middle; a Brauer star algebra is the associated Brauer graph algebra.
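The $p$-core and $p$-weight defined above are easy to compute in practice: encoding a partition by its beta-numbers $\beta_i=\lambda_i+(\ell-i)$ (the first-column hook lengths), removing a $p$-hook amounts to replacing some beta-number $b$ by $b-p$. The following is a minimal sketch in Python (our own illustration; the function names are ours), representing a partition as a weakly decreasing list:

```python
# Compute the p-core and p-weight of a partition by stripping p-hooks,
# via beta-numbers: removing a p-hook replaces a beta-number b by b - p.

def p_core_and_weight(partition, p):
    l = len(partition)
    beta = {partition[i] + (l - 1 - i) for i in range(l)}  # distinct beta-numbers
    weight, changed = 0, True
    while changed:
        changed = False
        for b in sorted(beta):
            if b >= p and (b - p) not in beta:
                beta.remove(b)
                beta.add(b - p)
                weight += 1
                changed = True
    bs = sorted(beta, reverse=True)
    core = [bs[i] - (len(bs) - 1 - i) for i in range(len(bs))]
    return [x for x in core if x > 0], weight

def partitions(n, max_part=None):
    # all weakly decreasing lists of positive integers summing to n
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

# removing the unique 3-hook of (4,1) leaves the 3-core (1,1)
assert p_core_and_weight([4, 1], 3) == ([1, 1], 1)
# the partitions of 5 with empty 5-core (weight 1) are the 5 hook partitions
assert sum(1 for lam in partitions(5) if p_core_and_weight(lam, 5) == ([], 1)) == 5
```

Both assertions are instances of general facts: a partition of weight $1$ consists of its $p$-core with a single $p$-hook attached, and the partitions of $p$ with empty $p$-core are exactly the hook partitions of $p$.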
Brauer tree algebras are important in modular representation theory because of the following result: A block $B$ with cyclic defect group of order $p^d $, having $m$ simple modules, is a Brauer tree algebra with $m$ edges and with exceptional multiplicity $\frac{p^d-1}{m}$, see Theorem 6.5.5. in \cite{Benson}. For further background on modular representation theory of finite groups see also \cite{L1} and \cite{L2}. A particularly nice class of self-injective algebras is that of self-injective Nakayama algebras. The basic algebra of a (connected) self-injective Nakayama algebra is of the form $N^n_m:=kC_m/J_{m,n}$ where $C_m$ is the cyclic quiver with $m$ vertices, that is, the extended Dynkin quiver of type $\tilde{A}_{m-1}$, and $J_{m,n}$ is the ideal in the path algebra $kC_m$ generated by the compositions of $n+1$ consecutive arrows. Note also that every basic self-injective Nakayama algebra is a monomial algebra. In addition, the algebra $N^n_m$ is symmetric if and only if $m$ divides $n$, see \cite{Skow} for example. The symmetric Nakayama algebra $N^{em}_{e}$ having $e$ simple modules and Loewy length $em+1$ is the Brauer tree algebra of the Brauer star with $e$ edges and with exceptional multiplicity $m$. It is worth noting that not every Brauer tree algebra is isomorphic to a symmetric Nakayama algebra, however, a result due to Rickard \cite{Rick} shows that every Brauer tree algebra is derived equivalent to a symmetric Nakayama algebra, see also \cite[Theorem 6.10.1]{Zim}. \begin{theorem} \label{Naktree} Let $k$ be a field and let $A$ be a Brauer tree algebra associated to a Brauer tree with $e$ edges and with exceptional multiplicity $m$. Then $A$ is derived equivalent to $N^{me}_{e}$ where $N^{me}_{e}$ is the symmetric Nakayama algebra having $e$ vertices and admissible ideal $J_{e,me}$.
\end{theorem} We have all the ingredients to prove Theorem \ref{Spint}: \begin{proof}[Proof of Theorem \ref{Spint}] The principal block of $kS_p$, denoted by $B_0$, has $p-1$ simple modules. Indeed, by the Nakayama Conjecture the partitions lying in $B_0$ are those with the same $p$-core as the partition $(p)$ labelling the trivial module, namely the $p$ hook partitions of $p$, and the simple modules of $B_0$ are labelled by the $p-1$ of them that are $p$-regular. The weight of $B_0$ is $1$, hence $B_0$ has cyclic defect group $C_p$, and $B_0$ is a Brauer tree algebra for a Brauer tree with $p-1$ edges and exceptional multiplicity $\frac{p-1}{p-1}=1$, see after Example 5.1.4 in \cite{Cra}. By Theorem \ref{Naktree} we have that $B_0$ is derived equivalent to the Nakayama algebra $N^{p-1}_{p-1}$. The Gabriel quiver associated with $N^{p-1}_{p-1}$ has a set of vertices given by $\{e_i\}_{i=1}^{p-1}$ and it has $p-1$ arrows $\{a_i\}_{i=1}^{p-1}$ such that $t(a_i)=s(a_{i+1})=e_{i+1}$ for $i\neq p-1$ and $t(a_{p-1})=s(a_1)$. The rest of the blocks of $kS_p$ are matrix algebras since they have weight $0$. Therefore $\HH^1(kS_p)\cong\HH^1(N^{p-1}_{p-1})$ and this restricts to an isomorphism $\HH^1_{\rm int}(kS_p)\cong\HH^1_{\rm int}(N^{p-1}_{p-1})$. The first Betti number $\beta(Q)$ of the underlying graph of $N^{p-1}_{p-1}$ is $1$. This is because the Gabriel quiver is connected, the number of edges is $p-1$, the number of vertices is $p-1$ and consequently $\beta(Q)=(p-1)-(p-1)+1=1$. Since $N^{p-1}_{p-1}$ is monomial, by Theorem C in \cite{BR2} we have that the maximal toral rank is $1$ and it is easy to see that the map $f$ sending $a_1$ to $a_1$ and sending any other arrow to zero is a diagonal outer derivation. From Corollary \ref{S_p}, we have $\mathrm{dim}_k(\HH^1(kS_p))=1$. We deduce that there are no other outer derivations.
Recall that $N^{p-1}_{p-1}$ is the bound quiver algebra $kC_{p-1}/J_{p-1,p-1}$ where $J_{p-1,p-1}$ is the ideal in the path algebra $kC_{p-1}$ generated by the compositions of $p$ consecutive arrows. Let $\rho_1=(a_1\dots a_{p-1}a_1=0)$. Then every other relation generating the ideal $J_{p-1,p-1}$ is given by the path of length $p$ that starts and ends with the arrow $a_i$, for $2\leq i\leq p-1$. We construct the automorphism $\alpha=1+ ft\in \mathrm{Aut}_1(A\llbracket t\rrbracket)$. This $k\llbracket t\rrbracket$-algebra automorphism preserves the relations. We check this for $\rho_1$; for the remaining generating relations the proof is similar. We have \begin{equation*} \begin{split} \alpha(a_1\dots a_{p-1}a_1)&=\alpha(a_1)\dots\alpha(a_{p-1})\alpha(a_1)=(a_1+a_1t)a_2\dots a_{p-1}(a_1+a_1t)\\ &=(a_1\dots a_{p-1}+a_1\dots a_{p-1}t)(a_1+a_1t)=0. \end{split} \end{equation*} Therefore $f$ is integrable and the statement follows. \end{proof} \begin{remark} Note that in order to construct the previous counterexample we have considered a Gabriel quiver without loops. In \cite{FGM} the authors consider $p$-groups, whose group algebras are local, hence all the arrows in their Gabriel quivers are loops. \end{remark} \appendix \section*{Appendix. Gerstenhaber's composition complexes} In Section \ref{sec_int_Ger} we used results of Gerstenhaber \cite{G} to establish facts about the Lie algebra of integrable derivations on an algebra. In this appendix we survey the more general definitions in \cite{G} and compare them with the context of Section \ref{sec_int_Ger}. \begin{definition}[\cite{G}]\label{ccdef} Let $k$ be a commutative ring.
A composition complex $C$ over $k$ is a sequence $C^0,C^1,\dots$ of $k$-modules, together with, for each $m,n$ and $0\leq i\leq m-1$, a bilinear composition operation \[ C^m\times C^n \to C^{m+n-1}, \quad (f,g) \mapsto f\circ_ig, \] as well as, for each $m,n$, a bilinear cup product operation \[ C^m\times C^n \to C^{m+n},\quad (f,g) \mapsto f\smile g, \] satisfying, for any $f\in C^m$, $g\in C^n$ and $h\in C^l$, the conditions \[ (f\circ_i g)\circ_jh = \begin{cases} (f\circ_j h)\circ_{i+l-1} g& \text{if } 0\leq j \leq i-1\\ f\circ_i (g\circ_{j-i}h) & \text{if } i\leq j \leq i+n-1, \end{cases} \] and \[ (f\smile g)\circ_ih = \begin{cases} (f\circ_i h)\smile g & \text{if } 0\leq i \leq m-1\\ f\smile (g\circ_{i-m}h) & \text{if } m\leq i \leq m+n-1. \end{cases} \] We further assume that the cup product of $C$ is associative, and that there is a unit element $1\in C^1$ such that $1\circ_0 f = f\circ_i 1 = f$ for any $f\in C^m$ and $0\leq i\leq m-1$. \end{definition} The key example of a composition complex is the Hochschild cochain complex of a $k$-algebra $A$: \[ C^n(A) = \Hom(A^{\otimes n}, A) \quad \text{with} \quad f\circ_ig = f \circ (1^{\otimes i} \otimes g \otimes 1^{\otimes m-i-1}) \] for $f\in C^m$ and $g\in C^n$, and with the usual cup product \[ (f\smile g)(x_1\otimes \cdots \otimes x_{m+n}) = f(x_1\otimes \cdots \otimes x_m)g(x_{m+1}\otimes \cdots \otimes x_{m+n}). \] Other examples of composition complexes given in \cite{G} are the singular cochain complex of a topological space, and the cobar construction on a Hopf algebra. In general, one can work with any composition complex by mimicking constructions which are standard for the Hochschild cochain complex, as the next definition shows. \begin{definition}[\cite{G}]\label{cc_defs} We provide a brief dictionary between the context of this paper and that of composition complexes.
\begin{enumerate} \item\label{cc_1} For $f\in C^m$ and $g\in C^n$ we define the circle product and the bracket \[ f\circ g =\sum_{i=0}^{m-1} (-1)^{i(n-1)}f\circ_i g, \quad [f,g]=f\circ g-(-1)^{(m-1)(n-1)}g\circ f \] in $C^{m+n-1}$. These correspond to the usual circle product and Gerstenhaber bracket when $C=C^*(A)$. \item\label{cc_2} We call $m=1\smile 1\in C^2$ the multiplication element of $C$---in the case of the Hochschild cochain complex this is the multiplication map $A\otimes A \to A$. We then define a differential on $C$ by the rule $\partial(f)= [m,f]$, and $C$ becomes a complex $ C^0\xrightarrow{\partial}C^1\xrightarrow{\partial}C^2\xrightarrow{\partial}\cdots$. In particular we obtain cohomology groups ${\rm H}^i(C)$. When $C=C^*(A)$, these yield the usual Hochschild differential and Hochschild cohomology groups $\HH^i(A)$. \item A derivation in $C$ is a degree one cycle $D\in \Der(C)=Z^1(C)=\ker(C^1\to C^2)$. An automorphism in $C$ is an element $\alpha\in C^1$, invertible with respect to the circle product, such that $\alpha\circ m = \alpha\smile\alpha$. When $C=C^*(A)$, these correspond to derivations and automorphisms of the $k$-algebra $A$. \item By base change, $C$ gives rise to a composition complex $C\llbracket t\rrbracket$ over the ring $k\llbracket t\rrbracket$. A one-parameter family of automorphisms in $C$ is an automorphism in $C\llbracket t\rrbracket$, and we write $\Aut_1(C\llbracket t\rrbracket)$ for the set of one-parameter families of automorphisms of the form $\alpha = 1 + \alpha_1t+\alpha_2t^2+\cdots$. A derivation $D\in \Der(C)$ is called integrable if $D=\alpha_1$ for some $\alpha\in \Aut_1(C\llbracket t\rrbracket)$. One can similarly define $[m,n)$-integrable derivations for any $m,n$ by considering automorphisms in $C[t]/(t^n)$. Once again, if $C=C^*(A)$ this yields the usual notion of integrable derivation.
\item Finally, if $\alpha\in \Aut_1(C[t]/(t^n))$, the obstruction theory of Subsection \ref{sub_obs} can be generalised by setting $\obs(\alpha)= [\alpha_1\smile \alpha_{n-1}+\cdots + \alpha_{n-1}\smile \alpha_1]\in {\rm H}^2(C)$. \end{enumerate} \end{definition} With these definitions in place, the results stated in Section \ref{sec_int_Ger} all hold in the generality of an arbitrary composition complex. Since Gerstenhaber was primarily concerned with automorphisms and derivations, which can be understood from the first few degrees, the results of \cite{G} are stated even more generally for composition complexes \emph{truncated in degree $2$}, that is, $k$-modules $C^0,C^1,C^2$ having the structure and properties of Definition \ref{ccdef} to the extent that they are meaningful. \begin{remark} In modern terminology, a composition complex is the same thing as a nonsymmetric operad with multiplication \cite{GV}. For example, the composition complex $C^*(A)$ is the endomorphism operad of $A$. In \cite{GV} Gerstenhaber and Voronov explain how any nonsymmetric operad with multiplication inherits the structure of a B$_\infty$-algebra. This construction mirrors some of the ideas from Definition \ref{cc_defs}; in particular, they construct the bracket (\ref{cc_1}) and differential (\ref{cc_2}) exactly as was done originally in \cite{G}. Conversely, there are many interesting examples of operads with multiplication (for example the Kontsevich operad used in \cite{GV}), and each can be considered as a composition complex, which thereby acquires a notion of integrable derivation and an obstruction theory as in Definition \ref{cc_defs}. \end{remark} \begin{thebibliography}{} \bibitem{Benson} D.\ J.\ Benson, {\em Representations and Cohomology, Vol. I: Cohomology of groups and modules}, Cambridge Studies in Advanced Mathematics {\bf 30}, Cambridge University Press (1991). \bibitem{BKL} D.
Benson, R.\ Kessar, M.\ Linckelmann, {\em Hochschild cohomology of symmetric groups in low degrees}, arXiv:2204.09970. \bibitem{BR} B.\ Briggs, L.\ Rubio y Degrassi, {\em Stable invariance of the restricted Lie algebra structure of Hochschild cohomology}, arXiv:2006.13871v2 (2020). \bibitem{BR2} B.\ Briggs, L.\ Rubio y Degrassi, {\em Maximal tori in $\HH^1$ and the fundamental group}, Int.\ Math.\ Res.\ Not. \bibitem{Cra} D.\ Craven, {\em Representation theory of finite groups: a guidebook}, Universitext, Springer, Cham (2019). \bibitem{FGM} D.\ R.\ Farkas, C.\ Geiss, E.\ N.\ Marcos, {\em Smooth automorphism group schemes}, Representations of algebras (S\~ao Paulo, 1999), 71--89, Lecture Notes in Pure and Appl. Math. {\bf 224}, Dekker, New York (2002). \bibitem{FJLM} P.\ Fleischmann, I.\ Janiszczak, W.\ Lempken, {\em Finite groups have local non-Schur centralizers}, Manuscripta Math. {\bf 80} (1993), no. 2, 213--224. \bibitem{G} M.\ Gerstenhaber, {\em On the deformation of rings and algebras. III}, Ann. of Math. (2) {\bf 88} (1968), 1--34. \bibitem{GV} M.\ Gerstenhaber, A.\ A.\ Voronov, {\em Homotopy G-algebras and moduli space}, Int. Math. Res. Not. 1995, no. 3, 141--153. \bibitem{GJ} E.\ Getzler, J.\ D.\ S.\ Jones, {\em Operads, homotopy algebra, and iterated integrals for double loop spaces}, hep-th/9403055. \bibitem{HS} H.\ Hasse, F.\ K.\ Schmidt, {\em Noch eine Begründung der Theorie der höheren Differentialquotienten in einem algebraischen Funktionenkörper einer Unbestimmten}, J. Reine Angew. Math. {\bf 177} (1937), 215--237. \bibitem{HZS} B.\ Huisgen-Zimmermann, M.\ Saor\'in, {\em Geometry of chain complexes and outer automorphisms under derived equivalence}, Trans. Amer. Math. Soc. {\bf 353} (2001), no. 12, 4757--4777. \bibitem{J} N.\ Jacobson, {\em Classes of restricted Lie algebras of characteristic $p$. II}, Duke Math. J.
{\bf 10} (1943), 107--121. \bibitem{KellerI} B.\ Keller, {\em Derived invariance of higher structures on the Hochschild complex}, preprint, https://webusers.imj-prg.fr/~bernhard.keller/publ/dih.pdf, 2003. \bibitem{KellerIII} B.\ Keller, {\em Hochschild cohomology and derived Picard groups}, J. Pure Appl. Algebra {\bf 190} (2004), no. 1--3, 177--196. \bibitem{L} M.\ Linckelmann, {\em Finite-dimensional algebras arising as blocks of finite group algebras}, Contemporary Mathematics {\bf 705} (2018), 155--188. \bibitem{ML} M.\ Linckelmann, {\em Integrable derivations and stable equivalences of Morita type}, Proc. Edinb. Math. Soc. (2) {\bf 61} (2018), no. 2, 343--362. \bibitem{L1} M.\ Linckelmann, {\em The Block Theory of Finite Group Algebras, Volume 1}, London Mathematical Society Student Texts {\bf 91} (2018). \bibitem{L2} M.\ Linckelmann, {\em The Block Theory of Finite Group Algebras, Volume 2}, London Mathematical Society Student Texts {\bf 92} (2018). \bibitem{Ma} H.\ Matsumura, {\em Integrable derivations}, Nagoya Math. J. {\bf 87} (1982), 227--245. \bibitem{Mo} S.\ Molinelli, {\em Sul modulo delle derivazioni integrabili in caratteristica positiva}, Ann. Mat. Pura Appl. {\bf 121} (1979), 25--38. \bibitem{Rick} J.\ Rickard, {\em Derived categories and stable equivalence}, J. Pure Appl. Algebra {\bf 61} (1989), no. 3, 303--317. \bibitem{Ri} J.\ Riordan, {\em Combinatorial identities}, John Wiley \& Sons, Inc., New York-London-Sydney (1968), xiii+256 pp. \bibitem{Ruq} R.\ Rouquier, {\em Groupes d'automorphismes et \'equivalences stables ou d\'eriv\'ees}, AMA-Algebra Montpellier Announcements-01-2003 (2003). \bibitem{R} L.\ Rubio y Degrassi, {\em Invariance of the restricted $p$-power map on integrable derivations under stable equivalences}, J. Algebra {\bf 469} (2017), 288--301. \bibitem{R2} L.\ Rubio y Degrassi, {\em On Hochschild cohomology and modular representation theory}, PhD thesis (2016).
\bibitem{Skow} A.\ Skowro\'nski, {\em Self-injective algebras: finite and tame type}, in: Trends in Representation Theory of Algebras and Related Topics, Contemporary Math. {\bf 406}, Amer. Math. Soc., Providence, RI (2006), 169--238. \bibitem{Vo} P.\ Vojta, {\em Jets via Hasse--Schmidt derivations}, in: Diophantine geometry, CRM Series, vol. 4, Ed. Norm., Pisa (2007), 335--361. \bibitem{Zim} A.\ Zimmermann, {\em Representation Theory. A Homological Algebra Point of View}, Springer (2014). \end{thebibliography} \end{document}
2205.01479v1
http://arxiv.org/abs/2205.01479v1
Dwork-type congruences and $p$-adic KZ connection
\documentclass[12pt]{amsart} \usepackage{amssymb,amscd} \usepackage{verbatim} \usepackage{amsmath,amssymb,graphicx,mathrsfs} \usepackage{enumerate} \usepackage[colorlinks=true,allcolors = blue]{hyperref} \usepackage{tikz} \usetikzlibrary{matrix} \usepackage[all]{xy} \textwidth 6.5truein \textheight 8.67truein \oddsidemargin 0truein \evensidemargin 0truein \topmargin 0truein \let\frak\mathfrak \let\Bbb\mathbb } } \def\vsk#1>{\vskip#1\baselineskip} \def\vv#1>{\vadjust{\vsk#1>}\ignorespaces} \def\vvn#1>{\vadjust{\nobreak\vsk#1>\nobreak}\ignorespaces} \def\vvgood{\vadjust{\penalty-500}} \let\alb\allowbreak \def\fratop{\genfrac{}{}{0pt}1} \def\satop#1#2{\fratop{\scriptstyle#1}{\scriptstyle#2}} \let\dsize\displaystyle \let\tsize\textstyle \let\ssize\scriptstyle \let\sssize\scriptscriptstyle \def\tfrac{\textstyle\frac} \let\phan\phantom \let\vp\vphantom \let\hp\hphantom \def\sskip{\par\vsk.2>} \let\Medskip\medskip \def\medskip{\par\Medskip} \let\Bigskip\bigskip \def\bigskip{\par\Bigskip} \let\Maketitle\maketitle \def\maketitle{\Maketitle\thispagestyle{empty}\let\maketitle\empty} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{conj}[thm]{Conjecture} \newtheorem{defn}[thm]{Definition} \newtheorem{ex}[thm]{Example} \newtheorem{exmp}[thm]{Example} \newtheorem{Rem}[thm]{Remark} \theoremstyle{definition} \numberwithin{equation}{section} \theoremstyle{definition} \newtheorem*{rem}{Remark} \newtheorem*{example}{Example} \newtheorem*{exam}{Example \exno} \let\mc\mathcal \let\nc\newcommand \def\flati{\def\=##1{\rlap##1\hphantom b)}} \let\al\alpha \let\bt\beta \let\dl\delta \let\Dl\Delta \let\eps\varepsilon \let\gm\gamma \let\Gm\Gamma \let\ka\kappa \let\la\lambda \let\La\Lambda \let\pho\phi \let\phi\varphi \let\si\sigma \let\Si\Sigma \let\Sig\varSigma \let\Tht\Theta \let\tht\theta \let\thi\vartheta \let\Ups\Upsilon \let\om\omega \let\Om\Omega \let\der\partial \let\Hat\widehat 
\let\minus\setminus \let\ox\otimes \let\Tilde\widetilde \let\bra\langle \let\ket\rangle \let\ge\geqslant \let\geq\geqslant \let\le\leqslant \let\leq\leqslant \let\on\operatorname \let\bi\bibitem \let\bs\boldsymbol \let\xlto\xleftarrow \let\xto\xrightarrow \def\C{{\mathbb C}} \def\Z{{\mathbb Z}} \def\R{{\mathbb R}} \def\PP{{\mathbb P}} \def\Ac{{\mc A}} \def\B{{\mc B}} \def\F{{\mathbb F}} \def\Hc{{\mc H}} \def\Jc{{\mc J}} \def\Oc{{\mc O}} \def\Sc{{\mc S}} \def\V{{\mc V}} \def\+#1{^{\{#1\}}} \def\lsym#1{#1\alb\dots\relax#1\alb} \def\lc{\lsym,} \def\lox{\lsym\ox} \def\diag{\on{diag}} \def\End{\on{End}} \def\Gr{\on{Gr}} \def\Hom{\on{Hom}} \def\id{{\on{id}}} \def\Res{\on{Res}} \def\qdet{\on{qdet}} \def\rdet{\on{rdet}} \def\rank{\on{rank}} \def\sign{\on{sign}} \def\tbigoplus{\mathop{\textstyle{\bigoplus}}\limits} \def\tbigcap{\mathrel{\textstyle{\bigcap}}} \def\tr{\on{tr}} \def\Wr{\on{Wr}} \def\ii{i,\<\>i} \def\ij{i,\<\>j} \def\ik{i,\<\>k} \def\il{i,\<\>l} \def\ji{j,\<\>i} \def\jj{j,\<\>j} \def\jk{j,\<\>k} \def\kj{k,\<\>j} \def\kl{k,\<\>l} \def\lk{l,\<\>k} \def\II{I\<\<,\<\>I} \def\IJ{I\<\<,J} \def\ioi{i+1,\<\>i} \def\pp{p,\<\>p} \def\ppo{p,\<\>p+1} \def\pop{p+1,\<\>p} \def\pci{p,\<\>i} \def\pcj{p,\<\>j} \def\poi{p+1,\<\>i} \def\poj{p+1,\<\>j} \def\gl{\mathfrak{gl}} \def\gln{\mathfrak{gl}_N} \def\sln{\mathfrak{sl}_N} \def\slnn{\mathfrak{sl}_{\>n}} \def\glnt{\gln[t]} \def\Ugln{U(\gln)} \def\Uglnt{U(\glnt)} \def\Yn{Y\<(\gln)} \def\slnt{\sln[t]} \def\Usln{U(\sln)} \def\Uslnt{U(\slnt)} \def\beq{\begin{equation}} \def\eeq{\end{equation}} \def\be{\begin{equation*}} \def\ee{\end{equation*}} \nc{\bea}{\begin{eqnarray*}} \nc{\eea}{\end{eqnarray*}} \nc{\bean}{\begin{eqnarray}} \nc{\eean}{\end{eqnarray}} \def\g{{\mathfrak g}} \def\h{{\mathfrak h}} \def\n{{\mathfrak n}} \let\ga\gamma \let\Ga\Gamma \nc{\Il}{{\mc I_{\bs\la}}} \nc{\bla}{{\bs\la}} \nc{\Fla}{\F_\bla} \nc{\tfl}{{T^*\Fla}} \nc{\GL}{{GL_n(\C)}} \nc{\GLC}{{GL_n(\C)\times\C^*}} \let\aal\al \def\mub{{\bs\mu}} \def\Dc{\check D} 
\def\Di{{\tfrac1D}} \def\Dci{{\tfrac1\Dc}} \def\DV{\Di\V^-} \def\DL{\DV_\bla} \def\Vl{\V^+_\bla} \def\DVe{\Dci\V^=} \def\DLe{\DVe_\bla} \def\xxx{x_1\lc x_n} \def\yyy{y_1\lc y_n} \def\zzz{z_1\lc z_n} \def\Cx{\C[\xxx]} \def\Czh{\C[\zzz,h]} \def\Vz{V\<\ox\Czh} \def\ty{\Tilde Y\<(\gln)} \def\tb{\Tilde\B} \def\IMA{{I^{\<\>\max}}} \def\IMI{{I^{\<\>\min}}} \def\CZH{\C[\zzz]^{\>S_n}\!\ox\C[h]} \def\Sla{S_{\la_1}\!\lsym\times S_{\la_N}} \def\SlN{S_{\la_N}\!\lsym\times S_{\la_1}} \def\Czhl{\C[\zzz]^{\>\Sla}\!\ox\C[h]} \def\Czghl{\C[\<\>\zb\<\>;\Gmm\>]^{\>S_n\times\Sla}\!\ox\C[h]} \def\Czs{\C[\zzz]^{\>S_n}} \def\Ct{\Tilde C} \def\fc{\check f} \def\Hg{{\mathfrak H}} \def\Ic{\check I} \def\Ih{\hat I} \def\It{\tilde I} \def\Jt{\tilde J} \def\Pit{\Tilde\Pi} \def\Qc{\check Q} \let\sd s \def\sh{\hat s} \def\st{\tilde s} \def\sih{\hat\si} \def\sit{\tilde\si} \def\nablat{\Tilde\nabla} \def\Ut{\Tilde U} \def\Xt{\Tilde X} \def\Dh{\Hat D} \def\Vh{\Hat V} \def\Wh{\Hat W} \def\ib{\bs i} \def\jb{\bs j} \def\kb{\bs k} \def\iib{\ib,\<\>\ib} \def\ijb{\ib,\<\>\jb} \def\ikb{\ib,\<\>\kb} \def\jib{\jb,\<\>\ib} \def\jkb{\jb,\<\>\kb} \def\kjb{\kb,\<\>\jb} \def\zb{\bs z} \def\zzi{z_1\lc z_i,z_{i+1}\lc z_n} \def\zzii{z_1\lc z_{i+1},z_i\lc z_n} \def\zzzn{z_n\lc z_1} \def\xxi{x_1\lc x_i,x_{i+1}\lc x_n} \def\xxii{x_1\lc x_{i+1},x_i\lc x_n} \def\ip{\<\>i\>\prime} \def\ipi{\>\prime\<\>i} \def\jp{\prime j} \def\iset{\{\<\>i\<\>\}} \def\jset{\{\<\>j\<\>\}} \def\IMIp{{I^{\min,i\prime}}} \let\Gmm\Gamma \def\top{\on{top}} \def\phoo{\pho_{\>0}} \def\Vy#1{V^{\bra#1\ket}} \def\Vyh#1{\Vh^{\bra#1\ket}} \def\Hh{\Hg_n} \def\bmo{\nabla^{\>\sssize\mathrm{BMO}}} \def\Bin{B^{\<\>\infty}} \def\Bci{\B^{\<\>\infty}} \def\tbi{\tb^{\<\>\infty}} \def\Xin{X^\infty} \def\Bb{\rlap{$\<\>\bar{\phan\B}$}\rlap{$\>\bar{\phan\B}$}\B} \def\Hb{\rlap{$\bar{\,\,\phan H}$}\bar H} \def\Hbc{\rlap{$\<\>\bar{\phan\Hc}$}\rlap{$\,\bar{\phan\Hc}$}\Hc} \def\Xb{\rlap{$\bar{\;\>\phan X}$}\rlap{$\bar{\<\phan X}$}X} \def\Bk{B^{\<\>\kk}} 
\def\Bck{\B^{\<\>\kk}} \def\Bbk{\Bb^{\<\>\kk}} \def\Hck{\Hc^{\<\>\kk}} \def\Hbck{\Hbc^{\<\>\kk}} \def\mukp{\mu^{\kk+}} \def\muke{\mu^{\kk=}} \def\mukm{\mu^{\kk-}} \def\mukpm{\mu^{\kk\pm}} \def\nukp{\nu^{\<\>\kk+}} \def\nuke{\nu^{\<\>\kk=}} \def\nukm{\nu^{\<\>\kk-}} \def\nukpm{\nu^{\<\>\kk\pm}} \def\rhkm{\rho^{\<\>\kk-}} \def\rhkpm{\rho^{\<\>\kk\pm}} \def\tbk{\tb^{\<\>\kk}} \def\tbkp{\tb^{\<\>\kk+}} \def\tbkm{\tb^{\<\>\kk-}} \def\tbkpm{\tb^{\<\>\kk\pm}} \def\tbbp{\Bb^{\<\>\kk+}} \def\tbbm{\Bb^{\<\>\kk-}} \def\Sk{S^{\>\kk}} \def\Uk{U^\kk} \def\Upk{U'^{\<\>\kk}} \def\Wk{W^\kk} \def\Whk{\Wh^\kk} \def\Xk{X^\kk} \def\Xkp{X^{\kk+}} \def\Xkm{X^{\kk-}} \def\Xkpm{X^{\kk\pm}} \def\Xtk{\Xt^\kk} \def\Xbk{\Xb^{\<\>\kk}} \def\Yk{Y^{\<\>\kk}} \def\ddk_#1{\kk_{#1}\<\>\frac\der{\der\<\>\kk_{#1}}} \def\zno{\big/\bra\>z_1\<\lsym+z_n\<=0\,\ket} \def\vone{v_1\<\lox v_1} \def\Hplus{\bigoplus_\bla H_\bla} \def\Pone{\mathbb P^{\<\>1}} \def\FFF{\mathbb{F}} \def\bul{\mathbin{\raise.2ex\hbox{$\sssize\bullet$}}} \def\intt{\mathchoice {\mathop{\raise.2ex\rlap{$\,\,\ssize\backslash$}{\intop}}\nolimits} {\mathop{\raise.3ex\rlap{$\,\sssize\backslash$}{\intop}}\nolimits} {\mathop{\raise.1ex\rlap{$\sssize\>\backslash$}{\intop}}\nolimits} {\mathop{\rlap{$\sssize\<\>\backslash$}{\intop}}\nolimits}} \def\nablas{\nabla^{\<\>\star}} \def\nablab{\nabla^{\<\>\bul}} \def\chib{\chi^{\<\>\bul}} \def\zla{\zeta_{\>\bla}^{\vp:}} \let\gak\gamma \let\kk q \let\kp\kappa \let\cc c \def\kkk{\kk_1\lc\kk_N} \def\kkn{\kk_1\lc\kk_n} \let\Ko K \def\Kh{\Hat\Ko} \def\zzip{z_1\lc z_{i-1},z_i\<-\kp,z_{i+1}\lc z_n} \def\prodl{\prod^{\longleftarrow}} \def\prodr{\prod^{\longrightarrow}} \def\GZ/{Gelfand-Zetlin} \def\KZ/{{\slshape KZ\/}} \def\qKZ/{{\slshape qKZ\/}} \def\XXX/{{\slshape XXX\/}} \def\zz{{\bs z}} \def\qq{{\bs q}} \def\TT{{\bs t}} \def\glN{{\frak{gl}_N}} \def\ts{\bs t} \def\Sym{\on{Sym}} \def\Cts{\C[\<\>\ts\<\>]^{\>S_{\la^{(1)}}\<\<\lsym\times S_{\la^{(N-1)}}}} \def\ss{{\bs s}} \def\BL{{\tilde{\mc B}^q(H_\bla)}} 
\nc{\A}{{\mc A}} \def\glnn{{\frak{gl}_n}} \def\uu{{\bs u}} \def\VV{{\bs v}} \def\GG{{\bs \Ga}} \def\II{{\mc I}} \def\XX{{\mc X}} \def\a{{\frak{a}}} \def\CC{{\frak{C}}} \def\St{{\on{Stab}}} \def\xx{{\bs x}} \def\Czh{{(\C^N)^{\otimes n}\otimes\C(\zz;h)}} \def\WW{{\check{W}}} \def\FF{{\mathbb F}} \def\Sing{{\on{Sing}}} \def\sll{{\frak{sl}}} \def\slt{{\frak{sl}_2}} \def\Ant{{\on{Ant}}} \def\Ik{{\mc I_k}} \def\Q{{\mathbb Q}} \def\K{{\mathbb K}} \def\Fpz{{\F_p(z)}} \def\Fz{{\F_p(z)}} \def\Fzx{{\F_p(z)[x]}} \def\Mz{{\mc M_\Fz}} \nc{\hsl}{\widehat{{\frak{sl}_2}}} \nc{\BC}{{ \mathbb C}} \nc{\lra}{\longrightarrow} \nc{\CO}{{\mathcal{O}}} \nc{\BZ}{{ \mathbb Z}} \nc{\hfn}{\hat{\frak{n}}} \nc\Zs{{\Z/p^s\Z}} \nc\Zo{{\Zs[z]^0}} \nc\gr{{\on{gr}}} \nc\fD{{\frak D}} \let\ol\overline \let\wt\widetilde \let\wh\widehat \let\ovr\overleftarrow \renewcommand{\d}{{\mathrm d}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \newcommand{\Cf}{\operatorname{Coeff}} \begin{document} \hrule width0pt \vsk-> \title[Dwork--type congruences and $p$-adic \KZ/ connection] {Dwork--type congruences and $p$-adic \KZ/ connection} \author[Alexander Varchenko] {Alexander Varchenko} \maketitle \begin{center} {\it $^{\star}$ Department of Mathematics, University of North Carolina at Chapel Hill\\ Chapel Hill, NC 27599-3250, USA\/} \end{center} \vsk> {\leftskip3pc \rightskip\leftskip \parindent0pt \Small {\it Key words\/}: \KZ/ equations; Dwork--type congruences; Hasse--Witt matrices. 
\vsk.6> {\it 2020 Mathematics Subject Classification\/}: 11D79 (12H25, 32G34, 33C05, 33E30) \par} {\let\thefootnote\relax \footnotetext{\vsk-.8>\noindent $^\star\<${\sl E\>-mail}:\enspace [email protected], supported in part by NSF grant DMS-1954266 }} \begin{abstract} We show that the $p$-adic \KZ/ connection associated with the family of curves $y^q=(t-z_1)\dots (t-z_{qg+1})$ has an invariant subbundle of rank $g$, while the corresponding complex \KZ/ connection has no nontrivial proper subbundles due to the irreducibility of its monodromy representation. The construction of the invariant subbundle is based on new Dwork--type congruences for associated Hasse--Witt matrices. \end{abstract} {\small\tableofcontents\par} \setcounter{footnote}{0} \renewcommand{\thefootnote}{\arabic{footnote}} \section{Introduction} The Knizhnik--Zamolodchikov (\KZ/) differential equations are objects of conformal field theory, representation theory, and enumerative geometry, see for example \cite{KZ, Dr, EFK, MO, V2}. The solutions of the \KZ/ equations have the form of multidimensional hypergeometric functions, see \cite{SV1}. In this paper we discuss the analog of the hypergeometric solutions of the \KZ/ equations considered over a $p$-adic field instead of the field of complex numbers. \vsk.2> More precisely, we consider the \KZ/ equations in the special case in which the complex hypergeometric solutions are given by integrals of the form \bean \label{Iq} I(z_1,\dots,z_{qg+1})= \int_C\frac{R(t,z_1,\dots, z_{qg+1})\, dt} {\sqrt[q]{(t-z_1)\dots(t-z_{qg+1})}} \eean where $q, g$ are positive integer parameters, and $R(t,z)$ is a suitable rational function. In this case the space of solutions of the \KZ/ equations is a $qg$-dimensional complex vector space. We also consider the $p$-adic version of the same differential equations.
We assume that $q$ is a prime number (this is a technical assumption) and show that the $qg$-dimensional space of local solutions of these $p$-adic \KZ/ equations has a remarkable $g$-dimensional subspace of solutions which can be analytically continued $p$-adically, as a subspace, to a large domain $\frak D_{\on{KZ}}^{(m),o}$ in the space where the \KZ/ equations are defined, see Theorems \ref{thm inv} and \ref{thm rk g} for precise statements. This $g$-dimensional global subspace of solutions is defined as the uniform $p$-adic limit of a $g$-dimensional space of polynomial solutions of these \KZ/ equations modulo $p^s$ as $s\to\infty$. For $q=2$ and $g=1$ this construction was deduced in \cite{V5} from B.\,Dwork's classical paper \cite{Dw}, see also \cite{VZ1}. For $q=2$ and any $g$ the corresponding construction was developed in \cite{VZ2}. \vsk.2> In \cite{SV2} general \KZ/ equations were considered over the field $\F_p$ and their polynomial solutions were constructed as $p$-approximations of hypergeometric integrals. In the current paper that construction is modified to obtain polynomial solutions modulo $p^s$ of the \KZ/ equations related to the integrals in formula \eqref{Iq}. The polynomial solutions are vectors of polynomials with integer coefficients. We call them the $p^s$-hypergeometric solutions. While the complex analytic integrals in \eqref{Iq} give the whole $qg$-dimensional space of solutions of the complex \KZ/ equations, the $p^s$-hypergeometric solutions span only a $g$-dimensional subspace. Then the $p$-adic limit of that subspace as $s\to\infty$ gives the desired globally defined subspace of solutions. For other $p$-approximations of hypergeometric periods, see \cite{SV2, RV1, RV2, VZ1, VZ2}. \vsk.2> In order to prove Theorems \ref{thm inv} and \ref{thm rk g} we develop new matrix Dwork--type congruences in Section \ref{sec 2}.
In Section \ref{sec 3} we show how our Dwork--type congruences imply the uniform $p$-adic convergence of certain sequences of matrices on suitable domains of the space of their parameters. In Section \ref{sec 4} we define our \KZ/ equations and construct their complex holomorphic solutions. In Section \ref{sec 5} we describe the $p^s$-hypergeometric solutions of the same equations. In Section \ref{sec 6} we formulate and prove the main Theorems \ref{thm inv} and \ref{thm rk g}. \vsk.2> This paper may be viewed as a continuation of the paper \cite{VZ2}, in which the case $q=2$ is developed. \medskip The author thanks Louis Funar, Toshitake Kohno, Nick Salter, and Wadim Zudilin for useful discussions. The author thanks the Max Planck Institute for Mathematics in Bonn for hospitality in May--June of 2022. \section{Dwork--type congruences} \label{sec 2} Dwork--type congruences originated in B.\,Dwork's classical paper \cite{Dw}. For more on Dwork--type congruences see, for example, \cite{Dw, Me, MV, Vl, VZ1, VZ2}. \vsk.2> In this paper $p$ is an odd prime. We denote by $\Z_p[w^{\pm1}]$ the ring of Laurent polynomials in the variables $w$ with coefficients in $\Z_p$. A congruence $F(w)\equiv G(w)\pmod{p^s}$ for two Laurent polynomials from this ring is understood as the divisibility by $p^s$ of all coefficients of $F(w)-\nobreak G(w)$. \vsk.2> For a Laurent polynomial $G(w)$ we define $\si(G(w))=G(w^p)$. \vsk.2> We denote $x=(t,z)$, where $t=(t_1,\dots,t_r)$ and $z=(z_1,\dots,z_n)$ are two groups of variables. \subsection{Definition of ghosts} Let $ {\bf e}=(e_1, \dots, e_{l})$ be a tuple of positive integers and $\La=(\La_0(x),\La_1(x), \dots, \La_l(x))$ a tuple of Laurent polynomials in $\Z_p[x^{\pm1}]$. Define $V_0(x)=\La_0(x)$.
For $s=1,\dots,l$, define $V_s(x)$ by the recursive formula \bean \label{dls+} && \La_0(x)\La_1(x)^{p^{e_1}}\dots \La_s(x)^{p^{e_1+\dots+e_s}} =V_s(x) + V_{s-1}(x) \La_s(x^{p^{e_1+\dots+e_s}}) + \\ && \notag \phantom{aaa} + V_{s-2}(x) \La_{s-1}(x^{p^{e_1+\dots+e_{s-1}}}) \La_{s}(x^{p^{e_1+\dots+e_{s-1}}})^{p^{e_s}} + \dots + \\ && \notag \phantom{aaaaaa} + V_{0}(x) \La_{1}(x^{p^{e_1}})\La_{2}(x^{p^{e_1}})^{p^{e_2}}\cdots \La_{s}(x^{p^{e_1}})^{p^{e_2+\dots+e_s}}. \eean The Laurent polynomials $V_0(x), \dots, V_l(x) \in \Z_p[x^{\pm1}]$ are called the {\it ghosts} associated with the tuples ${\bf e}$ and $\La$. \vsk.2> For every $0\leq j\leq s\leq l$, denote \bea W_s(x) &:=& \La_0(x)\La_1(x)^{p^{e_1}}\cdots \La_s(x)^{p^{e_1+\dots+e_s}}, \\ W_s^{(j)}(x) &:=& \La_j(x)\La_{j+1}(x)^{p^{e_{j+1}}}\cdots \La_s(x)^{p^{e_{j+1}+\dots +e_s}}. \eea Then \eqref{dls+} can be reformulated as \begin{equation} \label{dls} W_s(x) =V_s(x) + \sum_{j=1}^sV_{j-1}(x)W_s^{(j)}(x^{p^{e_{1}+\dots +e_j}}), \end{equation} or as \begin{equation} \label{dls-} W_s(x) =V_s(x) + \sum_{j=1}^sV_{j-1}(x)\si^{e_{1}+\dots +e_j}(W_s^{(j)}(x)). \end{equation} \begin{lem} \label{lem dl} For $s=0,1,\dots,l$, we have $V_s(x) \equiv 0 \pmod{p^{s}}$. \end{lem} \begin{proof} In the proof we use the congruence $F(x^{p})^{p^{i-1}}\equiv F(x)^{p^{i}}\pmod{p^{i}}$, valid for $i> 0$. For $s=0$ we have $V_0(x)=\La_0(x)$ and there is no divisibility requirement. For $s=1$, we have \bea V_1(x) = \La_0(x)\La_1(x)^{p^{e_1}} -V_0(x)\La_1(x^{p^{e_1}}) = \La_0(x)(\La_1(x)^{p^{e_1}} -\La_1(x^{p^{e_1}})), \eea and \bean \label{L1} && \\ && \notag \La_1(x^{ p^{e_1}}) \stackrel{\pmod{p}}{\equiv} \La_1(x^{p^{e_1-1}})^p \stackrel{\pmod{p^{2}}}{\equiv} \La_1(x^{p^{e_1-2}})^{p^2} \stackrel{\pmod{p^{3}}}{\equiv} \dots \stackrel{\pmod{p^{e_1}}}{\equiv} \La_1(x)^{p^{e_1}}. \eean This proves the lemma for $s=1$. For $s>1$ the proof is by induction on $s$. Assume that the lemma is proved for all $j<s$.
Then similarly to \eqref{L1} we obtain $\La_s(x^{p^{e_1+\dots+e_j}})^{p^{e_{j+1}+\dots+e_s}}{\equiv} \La_s(x)^{p^{e_{1}+\dots+e_s}} \pmod{p^{1+e_{j+1}+\dots+e_s}}$ and hence \bea V_{j-1}(x)\La_s(x^{p^{e_1+\dots+e_j}})^{p^{e_{j+1}+\dots+e_s}} &\equiv& V_{j-1}(x) \La_s(x)^{p^{e_1+\dots+e_s}} \pmod{p^{j+e_{j+1}+\dots+e_s}} \\ &\equiv& V_{j-1}(x) \La_s(x)^{p^{e_1+\dots+e_s}} \pmod{p^{s}} \eea since $e_i\geq 1$ for all $i$. Then we deduce modulo $p^{s}$: \begin{align*} V_s(x) &=W_{s-1}(x)\La_s(x)^{p^{e_{1}+\dots +e_s}} -\sum_{j=1}^{s-1}V_{j-1}(x) W_{s-1}^{(j)}(x^{p^{e_{1}+\dots +e_j}}) \La_s(x^{p^{e_{1}+\dots +e_j}})^{p^{e_{j+1}+\dots +e_s}} \\ & -V_{s-1}(x)\La_s(x^{p^{e_{1}+\dots +e_s}}) \equiv \\ & \equiv\bigg(W_{s-1}(x) -\sum_{j=1}^{s-1}V_{j-1}(x)W_{s-1}^{(j)}(x^{p^{e_{1}+\dots +e_j}}) -V_{s-1}(x)\bigg)\La_s(x)^{p^{e_{1}+\dots +e_s}} =0, \end{align*} obtaining the required statement. \end{proof} For a Laurent polynomial $F(t,z)$ in $t,z$, let $N(F)\subset \R^r$ be the Newton polytope of $F(t,z)$ with respect to the $t$ \emph{variables only}. \begin{lem} \label{lem 1.2} For $s=0,1,\dots,l$, we have \bea N(V_s) \subset N(\La_0)+ p^{e_1}N(\La_1) + \dots+p^{e_1+\dots+e_s}N(\La_s)\,. \eea \end{lem} \begin{proof} This follows from \eqref{dls} by induction on $s$. \end{proof} \smallskip \subsection{Convex polytopes} Let $\Dl=(\Dl_0, \dots, \Dl_l)$ be a tuple of nonempty finite subsets of $\Z^r$ of the same size $\# \Dl_j=g$ for some positive integer $g$. \begin{defn} \label{defN} A tuple $(N_0,N_1,\dots,N_l)$ of convex polytopes in $\R^r$ is called $(\Dl,{\bf e})$-\emph{admissible} if for any $0\le i\le j < l$ we have \bean \label{def ad} \phantom{aaaaaa} \big(\Dl_i + N_i+ p^{e_{i+1}}N_{i+1} + \dots+p^{e_{i+1}+\dots +e_{j}}N_j\big) \cap p^{e_{i+1}+\dots +e_{j+1}}\Z^r \subset p^{e_{i+1}+\dots +e_{j+1}}\Dl_{j+1}\,. 
\eean \end{defn} Notice that any sub-tuple $(N_i,N_{i+1},\dots,N_j)$ of a $(\Dl,{\bf e})$-admissible tuple $(N_0,N_1,\dots,N_l)$ is $(\Dl',{\bf e}')$-admissible where $\Dl'=(\Dl_i,\dots,\Dl_j)$ and ${\bf e}'=(e_{i+1},\dots,e_j)$. \begin{defn} \label{defn} A tuple $(\La_0(t,z),\La_1(t,z),\dots,\La_l(t,z))$ of Laurent polynomials is called $(\Dl,{\bf e})$-\emph{ad\-missible} if the tuple $\big(N(\La_0), N(\La_1), \dots, N(\La_l)\big)$ is $(\Dl,{\bf e})$-admissible. \end{defn} \begin{example} Let $r=1$, $n=13$, ${\bf e}=(2,2,\dots,2)$, $\Ga=\{1,2,3,4\}\subset \Z$, $\Dl=(\Ga,\Ga, \dots,\Ga)$, \linebreak $N=[0,13(p^2-1)/3]\subset \R$, $F(t_1,z) =\prod_{i=1}^{13}(t_1-z_i)^{(p^2-1)/3}$. Then the tuple $(N,N, \dots, N)$ of intervals in $\R$ and the tuple of polynomials $(F(t_1,z), F(t_1,z), \dots, F(t_1,z))$ are $(\Dl,{\bf e})$-admissible. \end{example} \smallskip \subsection{Hasse--Witt matrices} For $v\in\Z^r$ denote by $\Cf_v F(t,z)$ the coefficient of $t^v$ in the Laurent polynomial $F(t,z)$. This is a Laurent polynomial in $z$. \vsk.2> Given $m\ge 1$ and finite subsets $\Dl',\Dl''\subset \Z^r$, define the \emph{Hasse--Witt matrix} of the Laurent polynomial $F(t,z)$ by the formula \bean \label{Cuv+} A(m, \Dl',\Dl'', F(t,z)) := \big( \Cf_{p^mv-u} F(t,z)\big)_{u\in\Dl', v\in\Dl''}\,. \eean \begin{lem} \label{lem 2.5} Let $\La$ be a $(\Dl,{\bf e})$-admissible tuple of Laurent polynomials in \linebreak $\Z_p[x^{\pm1}]=\Z_p[t^{\pm1}, z^{\pm1}]$. 
Then for $0\le s\le l$ we have \begin{alignat*}{2} \textup{(i)} &\;\; A(e_{1}+\dots+e_{s+1}, \Dl_0,\Dl_{s+1}, V_s) \equiv 0 \pmod{p^s}; \\ \textup{(ii)} &\;\; A\big(e_{1}+\dots+e_{s+1}, \Dl_0,\Dl_{s+1}, W_s\big) = \\ & = A\big(e_1,\Dl_0,\Dl_{1}, V_0\big) \cdot \si^{e_1}\big(A\big(e_{2}+\dots+e_{s+1},\Dl_1,\Dl_{s+1}, W_s^{(1)}\big)\big) + \\ & +A\big(e_1+e_2,\Dl_0,\Dl_{2}, V_1\big) \cdot \si^{e_{1}+e_{2}}\big(A\big(e_{3}+\dots+e_{s+1}, \Dl_2,\Dl_{s+1}, W_s^{(2)}\big)\big) +\dots + \\ & +A\big(e_{1}+\dots+e_{s},\Dl_0,\Dl_{s}, V_{s-1}\big) \cdot \si^{e_{1}+\dots+e_{s}}\big(A\big(e_{s+1},\Dl_s,\Dl_{s+1}, W_s^{(s)}\big)\big) + \\ & + A\big(e_{1}+\dots+e_{s+1},\Dl_0,\Dl_{s+1}, V_s\big). \end{alignat*} \end{lem} Notice that all these matrices are $g\times g$-matrices. \begin{proof} Part (i) follows from Lemma \ref{lem dl}. To prove (ii) consider the identity \begin{align} \label{jth} & \La_0(t,z)\La_1(t,z)^{p^{e_1}}\dots \La_s(t,z)^{p^{e_1+\dots+e_s}} = \sum_{j=1}^sV_{j-1}(t,z)\La_j(t^{p^{e_1+\dots+e_j}},z^{p^{e_1+\dots+e_j}}) \times \\ & \notag \qquad \times \La_{j+1}(t^{p^{e_1+\dots+e_j}},z^{p^{e_1+\dots+e_j}})^{p^{e_{j+1}}}\dots \La_s(t^{p^{e_1+\dots+e_j}},z^{p^{e_1+\dots+e_j}})^{p^{e_{j+1}+\dots+e_s}}+ V_s(t,z), \end{align} which is nothing else but \eqref{dls+}. Let $u\in\Dl_0,v\in\Dl_{s+1}$. In order to calculate the coefficient of $t^{p^{e_{1}+\dots+e_{s+1}}v-u}$ in the $j$-th summand on the right-hand side of \eqref{jth}, we look for all pairs of vectors $w\in N(V_{j-1})$ and $y \in N(\La_j(t,z)\dots\La_s(t,z)^{p^{e_{j+1}+\dots+e_{s}}})$ such that \bea w+p^{e_1+\dots+e_j}y = p^{e_1+\dots+e_{s+1}}v-u. \eea Hence $u+w\in p^{e_1+\dots+e_j} \Z^r$. On the other hand, it follows from Lemma \ref{lem 1.2} that $w \in N(\La_0)+ p^{e_1}N(\La_1) + \dots+p^{e_1+\dots+e_{j-1}}N(\La_{j-1})$, so that \bea u+w\in \Dl_0 + N(\La_0)+ p^{e_1}N(\La_1) + \dots+p^{e_1+\dots+e_{j-1}}N(\La_{j-1}).
\eea From the $(\Dl,{\bf e})$-admissibility we deduce that $u+w = p^{e_1+\dots+e_{j}} \dl$ for some $\dl\in\Dl_j$, thus $w=p^{e_1+\dots+e_j} \dl -\nobreak u$, \ $y=p^{e_{j+1}+\dots+e_{s+1}}v-\dl$ and \begin{align*} & \Cf_{p^{e_1+\dots+e_{s+1}}v-u} \big(V_{j-1}(t,z)\La_j(t^{p^{e_1+\dots+e_{j}}}, z^{p^{e_1+\dots+e_{j}}})\dots\La_s(t^{p^{e_1+\dots+e_j}},z^{p^{e_1+\dots+e_j}})^{p^{e_{j+1}+\dots+e_s}} \big) = \\ & \phantom{aaa} =\sum_{\dl\in\Dl_j} \Cf_{p^{e_1+\dots+e_{j}}\dl-u}(V_{j-1}(t,z))\, \cdot \\ & \phantom{aaaaaa} \cdot\, \si^{e_1+\dots+e_j}\big(\Cf_{p^{e_{j+1}+\dots+e_{s+1}}v-\dl} \big(\La_j(t,z)\La_{j+1}(t,z)^{p^{e_{j+1}} }\dots \La_s(t,z)^{p^{e_{j+1}+\dots+e_{s}}}\big)\big). \end{align*} This proves (ii). \end{proof} \subsection{Congruences} The next results discuss congruences of the type \\ $F_1(z)F_2(z)^{-1}\equiv G_1(z)G_2(z)^{-1}\pmod{p^s}$, where $F_1,F_2,G_1,G_2$ are $g\times g$ matrices whose entries are Laurent polynomials in $z$. We consider such congruences when the determinants $\det F_2(z)$ and $\det G_2(z)$ are Laurent polynomials both nonzero modulo~$p$. Using Cramer's rule we write the entries of the inverse matrix $F_2(z)^{-1}$ in the form $f_{ij}(z)/\det F_2(z)$ for $f_{ij}(z)\in\Z_p[z^{\pm1}]$ and do a similar computation for $G_2(z)$. This presents the congruence $F_1(z)F_2(z)^{-1}\equiv G_1(z)G_2(z)^{-1}\pmod{p^s}$ in the form \bean \label{ff=gg} \frac1{\det F_2(z)}\cdot F(z)\ \equiv\ \frac1{\det G_2(z)}\cdot G(z) \pmod{p^s} \eean for some $g\times g$ matrices $F(z), G(z)$ with entries in $\Z_p[z^{\pm1}]$, while \eqref{ff=gg} is nothing else but the congruence $F(z)\cdot \det G_2(z)\equiv G(z)\cdot \det F_2(z) \pmod{p^s}$. \begin{thm} \label{thm 1.6} Let $(\La_0(t,z),\La_1(t,z),\dots,\La_l(t,z))$ be a $(\Dl,{\bf e})$-admissible tuple of Laurent polynomials in $\Z_p[x^{\pm1}]=\Z_p[t^{\pm1}, z^{\pm1}]$. 
\begin{enumerate} \item[\textup{(i)}] For $0\leq s\leq l$ we have \begin{align} \notag & A\big(e_1+\dots+e_{s+1}, \Dl_0,\Dl_{s+1}, \La_0(x)\La_1(x)^{p^{e_1}}\cdots \La_s(x)^{p^{e_1+\dots+e_s}} \big)\equiv \\ \notag & \equiv A\big(e_1,\Dl_0,\Dl_1, \La_0(x)\big) \cdot \si^{e_1}\big(A\big(e_2,\Dl_1,\Dl_2, \La_1(x)\big)\big) \cdots \si^{e_1+\dots+e_s}\big(A\big(e_{s+1},\Dl_s,\Dl_{s+1}, \La_s(x)\big)\big) \end{align} modulo $p$. \item[\textup{(ii)}] Assume that the determinants of the matrices $ A\big(e_{i+1},\Dl_i,\Dl_{i+1}, \La_i(t,z)\big)$, $i=0,1,\dots,l$, are Laurent polynomials all nonzero modulo~$p$. Then for $1\leq s \leq l$ the determinant of the matrix $A\big(e_2+\dots+e_{s+1},\Dl_1, \Dl_{s+1}, \La_1(x)\La_2(x)^{p^{e_2}}\cdots \La_s(x)^{p^{e_2+\dots+e_{s}}} \big)$ is a Laurent polynomial nonzero modulo~$p$ and we have modulo $p^s$\,\textup: \begin{align} \label{s cong} & A\big(e_1+\dots+e_{s+1}, \Dl_0,\Dl_{s+1}, \La_0(x)\La_1(x)^{p^{e_1}}\cdots \La_s(x)^{p^{e_1+\dots+e_s}}\big) \cdot \\ \notag & \cdot \si^{e_1}\big( A\big(e_2+\dots+e_{s+1},\Dl_1, \Dl_{s+1}, \La_1(x)\La_2(x)^{p^{e_2}}\cdots \La_s(x)^{p^{e_2+\dots+e_{s}}} \big)\big)^{-1} \equiv \\ \notag & \equiv A\big(e_1+\dots+e_{s},\Dl_0,\Dl_{s}, \La_0(x)\La_1(x)^{p^{e_1}}\cdots \La_{s-1}(x)^{p^{e_1+\dots+e_{s-1}}}\big) \cdot \\ \notag & \cdot \si^{e_1}\big(A\big(e_2+\dots+e_s,\Dl_1,\Dl_s, \La_1(x)\La_2(x)^{p^{e_2}}\cdots \La_{s-1}(x)^{p^{e_2+\dots+e_{s-1}}}\big)\big)^{-1}, \end{align} where in this congruence for $s=1$ we understand the second factor on the right-hand side as the $g\times g$ identity matrix, see formula \eqref{s=1} below. 
\end{enumerate} \end{thm} \begin{proof} By Lemma \ref{lem 2.5} we have \begin{align*} & A\big(e_1+\dots+e_{s+1},\Dl_0,\Dl_{s+1}, \La_0(x)\La_1(x)^{p^{e_1}}\dots \La_s(x)^{p^{e_1+\dots+e_{s}}}\big) \equiv \\ & \equiv A\big(e_1,\Dl_0,\Dl_{1}, \La_0(x)) \cdot \si^{e_1}\big(A\big(e_{2}+\dots+e_{s+1},\Dl_1,\Dl_{s+1}, \La_1(x)\La_2(x)^{p^{e_2}}\dots \La_s(x)^{p^{e_2+\dots+e_{s}}}\big)\big) \end{align*} modulo $p$. Iteration gives part (i) of the theorem. \vsk.2> If the determinants of the matrices $ A\big(e_{i+1},\Dl_i,\Dl_{i+1}, \La_i(t,z)\big)$, $i=0,1,\dots,l$, are Laurent polynomials all nonzero modulo~$p$, then part (i) implies that the determinant \bea & \det A\big(e_2+\dots+e_{s+1},\Dl_1, \Dl_{s+1}, \La_1(x)\La_2(x)^{p^{e_2}}\cdots \La_s(x)^{p^{e_2+\dots+e_{s}}} \big) \equiv \\ & \equiv \prod_{j=1}^s \det \si^{e_2+\dots+e_{j}}\big(A\big(e_{j+1},\Dl_j,\Dl_{j+1}, \La_j(t,z)\big)\big) \pmod{p}, \eea is a Laurent polynomial nonzero modulo $p$. This proves the first statement of part (ii) of the theorem and allows us to consider the inverse matrices in the congruence of part (ii). \vsk.2> We prove part (ii) by induction on $s$. For $s=1$, congruence \eqref{s cong} takes the form \bean \label{s=1} & \\ \notag & A\big(e_1+e_2,\Dl_0,\Dl_2,\La_0(x)\La_1(x)^{p^{e_1}}\big)\cdot \si^{e_1}\big(A\big(e_2,\Dl_1,\Dl_2,\La_1(x)\big)\big)^{-1} \equiv A\big(e_1,\Dl_0,\Dl_1,\La_0(x)\big) \eean modulo $p$. This congruence follows from part (i).
\vsk.2> For $1< s \le l$ we substitute the expressions for $A\big(e_1+\dots+e_{s+1}, \Dl_0,\Dl_{s+1}, \La_0(x)\La_1(x)^{p^{e_1}}\cdots $ $\cdots\La_s(x)^{p^{e_1+\dots+e_s}}\big)$ and $A\big(e_1+\dots+e_{s},\Dl_0,\Dl_{s}, \La_0(x)\La_1(x)^{p^{e_1}}\cdots \La_{s-1}(x)^{p^{e_1+\dots+e_{s-1}}}\big)$ from part (ii) of Lemma~\ref{lem 2.5} into the two sides of the desired congruence: \bean \label{con1} & A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, \La_0(x)\La_1(x)^{p^{e_1}}\cdots \La_s(x)^{p^{e_1+\dots+e_s}}\big) \cdot \\ \notag & \cdot \si^{e_1}\big(A\big(\sum_{a=2}^{s+1}e_a,\Dl_1, \Dl_{s+1}, \La_1(x)\La_2(x)^{p^{e_2}}\cdots \La_s(x)^{p^{e_2+\dots+e_{s}}} \big)\big)^{-1} = A\big(e_1,\Dl_0,\Dl_{1}, V_0) + \\ \notag & + \sum_{j=2}^s A\big(\sum_{a=1}^{j}e_a ,\Dl_0,\Dl_{j}, V_{j-1}) \cdot \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s+1}e_{a},\Dl_j,\Dl_{s+1}, W_s^{(j)}\big)\big) \cdot \\ & \notag \cdot \, \si^{e_1}\big(A\big(\sum_{a=2}^{s+1}e_a,\Dl_1, \Dl_{s+1}, W_s^{(1)} \big)\big)^{-1} + \\ \notag & + \, A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1},V_s \big)\cdot \si^{e_1}\big(A\big(\sum_{a=2}^{s+1}e_a,\Dl_1, \Dl_{s+1}, W_s^{(1)} \big)\big)^{-1} \eean and \bean \label{con2} & A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, \La_0(x)\La_1(x)^{p^{e_1}}\cdots \La_{s-1}(x)^{p^{e_1+\dots+e_{s-1}}}\big) \cdot \\ \notag & \cdot \si^{e_1}\big(A\big(\sum_{a=2}^{s}e_a,\Dl_1, \Dl_{s}, \La_1(x)\La_2(x)^{p^{e_2}}\cdots \La_{s-1}(x)^{p^{e_2+\dots+e_{s-1}}} \big)\big)^{-1} = A\big(e_1,\Dl_0,\Dl_{1}, V_0) + \\ \notag & + \sum_{j=2}^s A\big(\sum_{a=1}^{j}e_a ,\Dl_0,\Dl_{j}, V_{j-1}) \cdot \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s}e_{a},\Dl_j,\Dl_{s}, W_{s-1}^{(j)}\big)\big) \cdot \\ & \notag \cdot \, \si^{e_1}\big(A\big(\sum_{a=2}^{s}e_a,\Dl_1, \Dl_{s}, W_{s-1}^{(1)} \big)\big)^{-1}. \eean Since we want to compare these two expressions modulo $p^s$, the last term in \eqref{con1} containing $V_s \equiv 0 \pmod{p^s}$ can be ignored.
Given $j = 2, \dots , s$, we use the inductive hypothesis as follows: \bea & A\big(\sum_{a=i+1}^{s+1}e_a, \Dl_{i},\Dl_{s+1}, W^{(i)}_{s}\big) \cdot \si^{e_{i+1}}\big( A\big(\sum_{a=i+2}^{s+1}e_a,\Dl_{i+1}, \Dl_{s+1},W^{(i+1)}_{s} \big)\big)^{-1} \equiv \\ \notag & \equiv A\big(\sum_{a=i+1}^{s}e_a, \Dl_{i},\Dl_{s}, W^{(i)}_{s-1}\big) \cdot \si^{e_{i+1}}\big( A\big(\sum_{a=i+2}^{s}e_a,\Dl_{i+1}, \Dl_{s},W^{(i+1)}_{s-1} \big)\big)^{-1} \pmod{p^{s-i}} \eea for $i=1,\dots,j-1$. Applying $\sigma^{\sum_{a=1}^ie_a}$ to the $i$-th congruence and multiplying them out lead to telescoping products on both sides: \bea & \si^{e_1}\big(A\big(\sum_{a=2}^{s+1}e_a,\Dl_1, \Dl_{s+1}, W_s^{(1)} \big)\big) \cdot \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s+1}e_{a},\Dl_j,\Dl_{s+1}, W_s^{(j)}\big)\big)^{-1}\equiv \\ & \equiv \si^{e_1}\big(A\big(\sum_{a=2}^{s}e_a,\Dl_1, \Dl_{s}, W_{s-1}^{(1)} \big)\big) \cdot \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s}e_{a},\Dl_j,\Dl_{s}, W_{s-1}^{(j)}\big)\big)^{-1} \eea modulo $p^{s-j+1}$. By our assumptions these four matrices are invertible. Therefore, we can invert them to obtain the congruence \bean \label{17} & \phantom{aaa} \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s+1}e_{a},\Dl_j,\Dl_{s+1}, W_s^{(j)}\big)\big) \cdot \si^{e_1}\big(A\big(\sum_{a=2}^{s+1}e_a,\Dl_1, \Dl_{s+1}, W_s^{(1)} \big)\big)^{-1} \equiv \\ & \notag \phantom{aaa} \equiv \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s}e_{a},\Dl_j,\Dl_{s}, W_{s-1}^{(j)}\big)\big)\cdot \si^{e_1}\big(A\big(\sum_{a=2}^{s}e_a,\Dl_1, \Dl_{s}, W_{s-1}^{(1)} \big)\big)^{-1} \eean modulo $p^{s-j+1}$. 
Since $V_{j-1}\equiv 0\pmod{p^{j-1}}$, we obtain the congruence \bea & A\big(\sum_{a=1}^{j}e_a ,\Dl_0,\Dl_{j}, V_{j-1}) \cdot \\ & \cdot\, \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s+1}e_{a},\Dl_j,\Dl_{s+1}, W_s^{(j)}\big)\big) \cdot \si^{e_1}\big(A\big(\sum_{a=2}^{s+1}e_a,\Dl_1, \Dl_{s+1}, W_s^{(1)} \big)\big)^{-1} \equiv \\ & \equiv A\big(\sum_{a=1}^{j}e_a ,\Dl_0,\Dl_{j}, V_{j-1}) \cdot \\ & \cdot\, \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s}e_{a},\Dl_j,\Dl_{s}, W_{s-1}^{(j)}\big)\big)\cdot \si^{e_1}\big(A\big(\sum_{a=2}^{s}e_a,\Dl_1, \Dl_{s}, W_{s-1}^{(1)} \big)\big)^{-1} \eea modulo $p^{s}$. This shows that the $j$-th summands in \eqref{con1} and \eqref{con2} are congruent modulo $p^s$. The theorem is proved. \end{proof} \begin{cor} Under the assumptions of part \textup{(ii)} of Theorem \textup{\ref{thm 1.6}} for $1\leq s \leq l$ we have\,\textup: \begin{align*} & \det A\big(e_1+\dots+e_{s+1}, \Dl_0,\Dl_{s+1}, \La_0(x)\La_1(x)^{p^{e_1}}\cdots \La_s(x)^{p^{e_1+\dots+e_s}}\big) \cdot \\ \notag & \cdot \det \si^{e_1}\big(A\big(e_2+\dots+e_s,\Dl_1,\Dl_s, \La_1(x)\La_2(x)^{p^{e_2}}\cdots \La_{s-1}(x)^{p^{e_2+\dots+e_{s-1}}}\big)\big) \equiv \\ \notag & \equiv \det A\big(e_1+\dots+e_{s},\Dl_0,\Dl_{s}, \La_0(x)\La_1(x)^{p^{e_1}}\cdots \La_{s-1}(x)^{p^{e_1+\dots+e_{s-1}}}\big) \cdot \\ \notag & \cdot \det \si^{e_1}\big( A\big(e_2+\dots+e_{s+1},\Dl_1, \Dl_{s+1}, \La_1(x)\La_2(x)^{p^{e_2}}\cdots \La_s(x)^{p^{e_2+\dots+e_{s}}} \big)\big) \end{align*} modulo $p^s$. \end{cor} \subsection{Derivations} Recall that $z=(z_1,\dots,z_n)$. Denote \bea D_v=\frac{\der}{\der z_v}, \quad v=1,\dots,n. \eea Let $F_1(z),F_2(z),G_1(z),G_2(z) \in \Z_p[z^{\pm1}]$ and $\ell\geq 1$.
If \bea D_v(F_1(z))\cdot F_2(z)\equiv D_v(G_1(z))\cdot G_2(z)\pmod{p^s}\,, \eea then \begin{align} \label{1.5} & D_v(\si^\ell(F_1(z)))\cdot\si^\ell(F_2(z)) - D_v(\si^\ell(G_1(z)))\cdot\si^\ell(G_2(z)) = \\ &\qquad = D_v(F_1(z^{p^\ell}))\cdot F_2(z^{p^\ell}) - D_v(G_1(z^{p^\ell}))\cdot G_2(z^{p^\ell}) = \notag \\ &\qquad = p^\ell z_v^{p^\ell-1}\big(D_v(F_1(z))\cdot F_2(z) - D_v(G_1(z))\cdot G_2(z)\big)\big|_{z\to z^{p^\ell}} \equiv \notag \\ &\qquad \equiv 0 \pmod{p^{s+\ell}}. \notag \end{align} \vsk.2> \begin{thm} \label{thm der} Let $(\La_0(t,z),\La_1(t,z), \dots, \La_l(t,z))$ be a $(\Dl,{\bf e})$-admissible tuple of Laurent polynomials in $\Z_p[x^{\pm1}]=\Z_p[t^{\pm1},z^{\pm1}]$. Let $D=D_v$ for some $v=1,\dots,n$. Then under the assumptions of part \textup{(ii)} of Theorem \textup{\ref{thm 1.6}} we have \bean \label{Der} & D\big(\si^\ell\big(A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)\big)\big) \cdot \si^\ell\big(A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)\big)^{-1} \equiv \\ & \notag \equiv D\big(\si^\ell\big(A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, W_{s-1}\big)\big)\big) \cdot \si^\ell\big(A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, W_{s-1}\big)\big)^{-1} \pmod{p^{s+\ell}} \eean for $1\leq s \leq l$ and $0\leq \ell$. \end{thm} \begin{proof} Notice that it is sufficient to establish the congruences \eqref{Der} for $\ell=0$, as the general $\ell$ case follows from \eqref{1.5}. So, we assume that $\ell=0$ and proceed by induction on $s\ge0$. For $s=0$ the statement is trivially true. 
Using part (ii) of Lemma \ref{lem 2.5} we can write \bean \label{con3} & D\big(A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)\big) \cdot A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)^{-1} = \\ & \notag = \sum_{j=1}^{s+1} D\big(A\big(\sum_{a=1}^{j}e_a ,\Dl_0,\Dl_{j}, V_{j-1}\big)\big) \cdot \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s+1}e_{a},\Dl_j,\Dl_{s+1}, W_s^{(j)}\big)\big) \cdot \\ & \notag \cdot \, A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)^{-1} + \\ & \notag + \sum_{j=1}^{s+1} A\big(\sum_{a=1}^{j}e_a ,\Dl_0,\Dl_{j}, V_{j-1}) \cdot D\big(\si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s+1}e_{a},\Dl_j,\Dl_{s+1}, W_s^{(j)}\big)\big)\big) \cdot \\ & \notag \cdot \, A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)^{-1} \eean and \bean \label{con4} & D\big(A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, W_{s-1}\big)\big) \cdot A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, W_{s-1}\big)^{-1}= \\ \notag & = \sum_{j=1}^s D\big(A\big(\sum_{a=1}^{j}e_a ,\Dl_0,\Dl_{j}, V_{j-1}\big)\big) \cdot \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s}e_{a},\Dl_j,\Dl_{s}, W_{s-1}^{(j)}\big)\big) \cdot \\ & \notag \cdot \, A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, W_{s-1}\big)^{-1} + \\ \notag & + \sum_{j=1}^s A\big(\sum_{a=1}^{j}e_a ,\Dl_0,\Dl_{j}, V_{j-1}) \cdot D\big( \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s}e_{a},\Dl_j,\Dl_{s}, W_{s-1}^{(j)}\big) \big)\big) \cdot \\ & \notag \cdot \, A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, W_{s-1}\big)^{-1}. \eean The summands corresponding to $j=s+1$ in \eqref{con3} vanish modulo $p^s$ and can be ignored since $V_s \equiv 0\pmod{p^s}$. For the same reason \bean \label{saj} & D\big(A\big(\sum_{a=1}^{j}e_a ,\Dl_0,\Dl_{j}, V_{j-1}\big)\big)\equiv0\pmod{p^{j-1}}.
\eean We also have \bean \label{17a} & \phantom{aaa} \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s+1}e_{a},\Dl_j,\Dl_{s+1}, W_s^{(j)}\big)\big) \cdot A\big(\sum_{a=1}^{s+1}e_a,\Dl_0, \Dl_{s+1}, W_s \big)^{-1} \equiv \\ & \notag \phantom{aaa} \equiv \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s}e_{a},\Dl_j,\Dl_{s}, W_{s-1}^{(j)}\big)\big)\cdot A\big(\sum_{a=1}^{s}e_a,\Dl_0, \Dl_{s}, W_{s-1} \big)^{-1} \pmod{p^{s-j+1}}. \eean This follows from \eqref{17}, in which we take $j+1$ and $s+1$ for $j$ and $s$ and use $W_s$ instead of $W_{s+1}^{(1)}$. Multiplying congruences \eqref{saj} and \eqref{17a} we get \bean \label{17b} & D\big(A\big(\sum_{a=1}^{j}e_a ,\Dl_0,\Dl_{j+1}, V_{j-1}\big)\big)\cdot \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s+1}e_{a},\Dl_j,\Dl_{s+1}, W_s^{(j)}\big)\big) \cdot \\ & \notag \cdot\,A\big(\sum_{a=1}^{s+1}e_a,\Dl_0, \Dl_{s+1}, W_s \big)^{-1} \equiv \\ & \notag \equiv D\big(A\big(\sum_{a=1}^{j}e_a ,\Dl_0,\Dl_{j+1}, V_{j-1}\big)\big) \cdot \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s}e_{a},\Dl_j,\Dl_{s}, W_{s-1}^{(j)}\big)\big)\cdot \\ \notag & \cdot\, A\big(\sum_{a=1}^{s}e_a,\Dl_0, \Dl_{s}, W_{s-1} \big)^{-1} \pmod{p^s}. \eean Congruence \eqref{17b} implies that the first sum in \eqref{con3} is congruent to the first sum in \eqref{con4} modulo $p^s$. 
To match the second sums we recall the inductive hypothesis in the form \bean \label{18.1} & \\ & \notag D\big(\si^{\sum_{a=1}^{j}e_a} \big(A\big(\sum_{a=j+1}^{s+1}e_a, \Dl_j,\Dl_{s+1}, W_s^{(j)}\big)\big)\big) \cdot \phantom{aaaaaaaaaaaa} \\ & \notag \cdot\, \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s+1}e_a, \Dl_j,\Dl_{s+1}, W_s^{(j)}\big)\big)^{-1} \equiv \\& \notag \equiv\, D\big(\si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s}e_a, \Dl_j,\Dl_{s}, W_{s-1}^{(j)}\big)\big)\big) \cdot \phantom{aaaaaaaaa} \\ \notag & \phantom{aaaaaaaaa} \cdot\, \si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s}e_a, \Dl_j,\Dl_{s}, W_{s-1}^{(j)}\big)\big)^{-1} \pmod{p^s}, \eean and notice that both sides in \eqref{18.1} are congruent to zero modulo $p^{\sum_{a=1}^{j}e_a}$ by formula \eqref{1.5}. Therefore, multiplying congruences \eqref{18.1} and \eqref{17a} we obtain \bea & \notag D\big(\si^{\sum_{a=1}^{j}e_a} \big(A\big(\sum_{a=j+1}^{s+1}e_a, \Dl_j,\Dl_{s+1}, W_s^{(j)}\big)\big)\big) \cdot A\big(\sum_{a=1}^{s+1}e_a,\Dl_0, \Dl_{s+1}, W_s \big)^{-1} \equiv \\& \notag \equiv D\big(\si^{\sum_{a=1}^{j}e_a}\big(A\big(\sum_{a=j+1}^{s}e_a, \Dl_j,\Dl_{s}, W_{s-1}^{(j)}\big)\big)\big) \cdot A\big(\sum_{a=1}^{s}e_a,\Dl_0, \Dl_{s}, W_{s-1} \big)^{-1} \pmod{p^s}. \eea Multiplying both sides of this congruence by $A\big(\sum_{a=1}^{j}e_a ,\Dl_0,\Dl_{j}, V_{j-1}) $ we conclude that the second sum in \eqref{con3} is congruent to the second sum in \eqref{con4} modulo $p^s$. The theorem is proved. \end{proof} \vsk.2> There are similar congruences for higher order derivatives of the matrices \linebreak $A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)$. We restrict ourselves to the second order derivatives. \begin{thm} \label{thm der2} Let $(\La_0(t,z),\La_1(t,z), \dots, \La_l(t,z))$ be a $(\Dl, {\bf e})$-admissible tuple of Laurent polynomials in $\Z_p[x^{\pm1}]=\Z_p[t^{\pm1},z^{\pm1}]$.
Then under the assumptions of part \textup{(ii)} of Theorem \textup{\ref{thm 1.6}} we have \bean \label{Der2} & D_u\big(D_v\big(A\big(\sum_{a=1}^{s+1}e_a,\Dl_0,\Dl_{s+1},W_{s}\big)\big)\big) \,\cdot\, A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1},W_{s}\big)^{-1} \equiv \\ & \notag \equiv D_u\big(D_v\big(A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s},W_{s-1}\big)\big)\big) \cdot A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s},W_{s-1}\big)^{-1} \pmod{p^{s}} \eean for all $1\leq u, v\leq n$ and $1\leq s\leq l$. \end{thm} \begin{proof} Notice that, for an invertible matrix $F(z)$ and a derivation $D$, we have $D(F^{-1})=-F^{-1}\,D(F)\,F^{-1}$. We apply the derivation $D_u$ to congruence \eqref{Der} with $D=D_v$: \bea & D_u\big(D_v\big(A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)\big)\big) \cdot A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)^{-1} - \\ & -\,D_v\big(A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)\big) \cdot A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)^{-1} \cdot \\ & \cdot\, D_u\big(A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)\big) \cdot A\big(\sum_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\big)^{-1} \equiv \\ & \notag \equiv D_u\big(D_v\big(A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, W_{s-1}\big)\big)\big) \cdot A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, W_{s-1}\big)^{-1} - \\ & -\, D_v\big(A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, W_{s-1}\big)\big) \cdot A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, W_{s-1}\big)^{-1}\cdot \\& \cdot\, D_u\big(A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, W_{s-1}\big)\big) \cdot A\big(\sum_{a=1}^{s}e_a, \Dl_0,\Dl_{s}, W_{s-1}\big)^{-1} \eea modulo $p^s$. It remains to apply \eqref{Der} with $D=D_u$ and $D=D_v$ and $\ell=0$ to see that the second terms on both sides agree modulo~$p^s$. After their cancellation we are left with the required congruences in~\eqref{Der2}.
\end{proof} \begin{rem} The results of Section \ref{sec 2} in the case $ {\bf e}=(e_1, \dots, e_{l})=(1,\dots,1)$ and $\Dl=(\Dl_0, \dots, \Dl_l)$ such that $\Dl_0 =\dots = \Dl_l$ were obtained in \cite{VZ2}. \end{rem} \section{Convergence} \label{sec 3} \subsection{Unramified extensions of $\Q_p$} We fix an algebraic closure $\overline{\Q_p}$ of $\Q_p$. For every $m\geq 1$, there is a unique unramified extension of $\Q_p$ in $\overline{\Q_p}$ of degree $m$, denoted by $\Q_p^{(m)}$. This extension can be obtained by adjoining to $\Q_p$ a primitive root of unity of order $p^m-1$. The norm $|\cdot|_p$ on $\Q_p$ extends to a norm $|\cdot|_p$ on $\Q_p^{(m)}$. Let \bea \Z_p^{(m)} = \{ a\in \Q_p^{(m)} \mid |a|_p\leq 1\} \eea denote the ring of integers in $\Q_p^{(m)}$. The ring $\Z_p^{(m)}$ has a unique maximal ideal \bea \mathbb M_p^{(m)} = \{ a\in \Q_p^{(m)} \mid |a|_p <1\}, \eea such that $\mathbb Z_p^{(m)}\big/ \mathbb M_p^{(m)}$ is isomorphic to the finite field $\F_{p^m}$. For every $u\in\F_{p^m}$ there is a unique lift $\tilde u\in \mathbb Z_p^{(m)}$ of $u$ such that $\tilde u^{p^m}=\tilde u$. The element $\tilde u$ is called the Teichm\"uller lift of $u$. \subsection{Domain $\frak D_B$} For $u\in\F_{p^m}$ and $r>0$ denote \bea D_{u,r} = \{ a\in \Z_p^{(m)}\mid |a-\tilde u|_p<r\}\,. \eea We have the partition \bea \Z_p^{(m)} = \bigcup_{u\in\F_{p^m}} D_{u,1}\,. \eea Recall $z=(z_1,\dots,z_n)$. For $B(z) \in \Z[z]$, define \bea \frak D_B \ =\ \{ a\in (\Z_p^{(m)})^n\, \mid \ |B(a)|_p=1\} . \eea Let $\bar B(z)$ be the projection of $B(z)$ to $\F_p[z]\subset \F_{p^m}[z]$. Then $\frak D_B$ is the union of unit polydiscs, \bea \frak D_B = \bigcup_{\substack{u_1,\dots,u_n\in \F_{p^m}\\ \bar B(u_1,\dots, u_n)\ne 0}} \ D_{u_1,1}\times \dots \times D_{u_n,1}\,.
\eea For any $k$ we have \begin{align} \notag \{ a\in (\Z_p^{(m)})^n \mid \ |B(a^{p^k})|_p=1\} &=\bigcup_{\substack{u_1,\dots,u_n\in \F_{p^m}\\ \si^k(\bar B(u_1,\dots, u_n))\ne 0}} \ D_{u_1,1}\times \dots \times D_{u_n,1} = \\ \notag &=\bigcup_{\substack{u_1,\dots,u_n\in \F_{p^m}\\ \bar B(u_1,\dots, u_n)\ne 0}} \ D_{u_1,1}\times \dots \times D_{u_n,1} = \frak D_B \,. \end{align} \begin{lem} [{\cite[Lemma 6.1]{VZ2}}] \label{lem nonempty} Let $\bar B_1(z), \dots, \bar B_k(z) \in \F_p[z]$ be nonzero polynomials such that $\deg \bar B_j(z)\leq d$, $j=1,\dots,k$, for some $d$. If $kd+1< p^m$, then the set \bea \{ a\in (\F_{p^m})^n\mid \bar B_j(a) \ne 0, \, j=1,\dots, k\} \eea is nonempty. \end{lem} \subsection{Uniqueness theorem} Let $\frak D \subset (\Z_p^{(m)})^n$ be the union of some of the unit polydiscs \linebreak $ D_{u_1,1}\times \dots \times D_{u_n,1}$\,, where $u_1,\dots,u_n\in\F_{p^m}$. Let $(F_i(z))_{i=1}^\infty$ and $(G_i(z))_{i=1}^\infty$ be two sequences of rational functions in $z$. Assume that each of the rational functions has the form $P(z)/Q(z)$, where $P(z), Q(z)\in\Z[z]$, and for any polydisc $ D_{u_1,1}\times \dots \times D_{u_n,1}\,\subset \frak D$, we have $|Q(\tilde u_1,\dots,\tilde u_n)|_p=1$, which implies that \bea |Q(a_1,\dots,a_n)|_p=1,\qquad \forall\ (a_1,\dots,a_n)\in \frak D. \eea Assume that the sequences $(F_i(z))_{i=1}^\infty$ and $(G_i(z))_{i=1}^\infty$ uniformly converge on $\frak D$ to analytic functions, which we denote by $F(z)$ and $G(z)$, respectively. \begin{thm} [\cite{VZ2}] \label{thm U} Under these assumptions, if $F(z)=G(z)$ on an open nonempty subset of~$\frak D$, then $F(z)=G(z)$ on~$\frak D$. \end{thm} \subsection{Infinite tuples} Let ${\bf e}=(e_1, e_2, \dots)$ be an infinite tuple of positive integers. Let $\Dl=(\Dl_0, \Dl_1, \dots)$ be an infinite tuple of nonempty finite subsets of $\Z^r$ of the same size $\# \Dl_j=g$ for some positive integer $g$.
Let $\La=(\La_0(x),\La_1(x), \dots)$ be an infinite tuple of Laurent polynomials in $\Z_p[x^{\pm1}]=\Z_p[t^{\pm1},z^{\pm1}]$. \vsk.2> Assume that the tuple $\La$ is $(\Dl,{\bf e})$-admissible. \vsk.2> Assume that each of the tuples ${\bf e}, \Dl, \La$ has only finitely many distinct elements. This means that there is a finite set of 4-tuples \bean \label{tuples} \mc T = \{(e^j, \bar \Dl^j, \tilde \Dl^j,\La^j) \mid j=1,\dots,k\} \eean such that for any $l\geq 0$ the 4-tuple $(e_{l+1}, \Dl_l,\Dl_{l+1}, \La_l)$ equals one of the 4-tuples in $\mc T$. \begin{defn} \label{def F} The $(\Dl,{\bf e})$-admissible tuple $\La$ is called \emph{nondegenerate} if for any $j=1,\dots,k$, the Laurent polynomial \bea \det A\big (e^j, \bar\Dl^j,\tilde \Dl^j, \La^j\big) \ \in \ \Z_p[z^{\pm1}] \eea is nonzero modulo~$p$. \end{defn} Recall the notation: \bea W_s(x) &:=& \La_0(x)\La_1(x)^{p^{e_1}}\cdots \La_s(x)^{p^{e_1+\dots+e_s}}, \\ W_s^{(j)}(x) &:=& \La_j(x)\La_{j+1}(x)^{p^{e_{j+1}}}\cdots \La_s(x)^{p^{e_{j+1}+\dots +e_s}}. \eea If a $(\Dl,{\bf e})$-admissible tuple $\La$ is nondegenerate, then for any $0\leq j\leq s$, the Laurent polynomials $\det A\big(\sum_{a=j+1}^{s+1}e_a, \Dl_j,\Dl_{s+1}, W_s^{(j)}\big)\in \Z_p[z^{\pm1}]$ are not congruent to zero modulo~$p$ and we may consider congruences involving the inverse matrices $A\big(\sum_{a=j+1}^{s+1}e_a, \Dl_j,\Dl_{s+1}, W_s^{(j)}\big)^{-1}$. \subsection{Domain of convergence} Assume that $\La$ is an infinite nondegenerate $(\Dl,{\bf e})$-admissible tuple and $m$ is a positive integer. Denote \bea \frak D^{(m)} = \{a \in (\Z_p^{(m)})^{n} \ \mid \ |\det A\big (e^j, \bar\Dl^j,\tilde \Dl^j, \La^j(t,a) \big)|_p=1, \,\,j=1,\dots,k\}. \eea \vsk.2> \begin{lem} \label{lem |det|} For any $0\leq j\leq s$ and $a\in \frak D^{(m)}$ we have \bea \Big\vert\det A\Big({\sum}_{a=j+1}^{s+1}e_{a},\Dl_j,\Dl_{s+1}, W_s^{(j)}(t,a)\Big) \Big\vert_p =1.
\eea \qed \end{lem} \begin{cor} All entries of $A\big(\sum_{a=j+1}^{s+1}e_{a},\Dl_j,\Dl_{s+1}, W_s^{(j)}(t,z)\big)^{-1}$ are rational functions in $z$ regular on $\frak D^{(m)}$. For every $a\in\frak D^{(m)}$ all entries of $A\big(\sum_{a=j+1}^{s+1}e_{a},\Dl_j,\Dl_{s+1}, W_s^{(j)}(t,a)\big)$ and $A\big(\sum_{a=j+1}^{s+1}e_{a},\Dl_j,\Dl_{s+1}, W_s^{(j)}(t,a)\big)^{-1}$ are elements of $\Z_p^{(m)}$. \qed \end{cor} \vsk.2> \begin{thm} \label{thm conv} Let $\La$ be an infinite nondegenerate $(\Dl,{\bf e})$-admissible tuple. Consider the sequence of $g\times g$ matrices \bean \label{sec of m} \phantom{aaa} \Big( A\Big({\sum}_{a=1}^{s+1}e_{a},\Dl_0,\Dl_{s+1}, W_s(t,z)\Big) \cdot \si^{e_1}\Big(A\Big({\sum}_{a=2}^{s+1}e_{a},\Dl_1,\Dl_{s+1}, W_s^{(1)}(t,z)\Big)\Big)^{-1} \Big)_{s\geq 0} \eean whose entries are rational functions in $z$ regular on the domain $\frak D^{(m)}$. This sequence uniformly converges on $\frak D^{(m)}$ as $s\to\infty$ to an analytic $g\times g$ matrix with values in $\Z_p^{(m)}$. Denote this matrix by $\mc A_\La(z)$. For $a\in\frak D^{(m)}$ we have \bean \label{det 1} \Big\vert \det \mc A_\La(a) \Big\vert_p=1\, \eean and the matrix $\mc A_\La(a)$ is invertible. \end{thm} \vsk.2> \begin{proof} By part (i) of Theorem \ref{thm 1.6} we have $\vert\det \si^{e_1}\big(A\big({\sum}_{a=2}^{s+1}e_{a},\Dl_1,\Dl_{s+1}, W_s^{(1)}(t,a)\big)\big)\vert_p =1$ for $a\in\frak D^{(m)}$. Hence the matrix in \eqref{sec of m} is a matrix of rational functions in $z$ regular on $\frak D^{(m)}$. Moreover, if $a\in\frak D^{(m)}$, then every entry of this matrix is an element of $\Z_p^{(m)}$. The uniform convergence on $\frak D^{(m)}$ of the sequence \eqref{sec of m} is a corollary of part (ii) of Theorem \ref{thm 1.6}. Equation \eqref{det 1} follows from part (i) of Theorem \ref{thm 1.6}. The theorem is proved. \end{proof} \vsk.2> \begin{thm} \label{thm conv2} Let $\La$ be an infinite nondegenerate $(\Dl,{\bf e})$-admissible tuple, and $D=D_v$, $v=1,\dots,n$. 
Given $\ell\geq 0$ consider the sequence of $g\times g$ matrices \bea \Big(\,D\Big(\si^\ell\Big(A\Big({\sum}_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\Big)\Big)\Big) \cdot \si^\ell\Big(A\Big({\sum}_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\Big)\Big)^{-1}\, \Big)_{s\geq 0} \eea whose entries are rational functions in $z$ regular on the domain $\frak D^{(m)}$. This sequence uniformly converges on $\frak D^{(m)}$ as $s\to\infty$ to an analytic $g\times g$ matrix with values in $\Z_p^{(m)}$. Denote this matrix by $\mc A_{\La,D\si^\ell}(z)$. \end{thm} \begin{proof} The theorem is a corollary of Theorem \ref{thm der}. \end{proof} \begin{thm} \label{thm conv3} Let $\La=(\La_0(x), \La_1(x), \La_2(x),\dots )$ be an infinite nondegenerate $(\Dl,{\bf e})$-admis\-sible tuple. Given $1\leq u,v\leq n$, consider the sequence of $g\times g$ matrices \bea \Big(\,D_u\Big(D_v\Big(A\Big({\sum}_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\Big)\Big)\Big) \cdot A\Big({\sum}_{a=1}^{s+1}e_a, \Dl_0,\Dl_{s+1}, W_s\Big)^{-1}\,\Big)_{s\geq 0} \eea whose entries are rational functions in $z$ regular on the domain $\frak D^{(m)}$. This sequence uniformly converges on $\frak D^{(m)}$ as $s\to\infty$ to an analytic $g\times g$ matrix with values in $\Z_p^{(m)}$. Denote this matrix by $\mc A_{\La,D_uD_v}(z)$. \end{thm} \begin{proof} The theorem is a corollary of Theorem \ref{thm der2}. \end{proof} \vsk.2> Let $\La=(\La_0(x), \La_1(x), \La_2(x),\dots )$ be an infinite nondegenerate $(\Dl,{\bf e})$-admissible tuple. Consider the $g\times g$ matrix valued functions $\mc A_{\La,\frac{\der}{\der z_u}\si^0}(z)$, $\mc A_{\La,\frac{\der}{\der z_v}\si^0}(z)$ in Theorem \ref{thm conv2} and denote them by $\mc A_u(z)$, $\mc A_v(z)$, respectively. Consider the $g\times g$ matrix valued function $\mc A_{\La,\frac{\der}{\der z_u}\frac{\der}{\der z_v}}(z)$ in Theorem \ref{thm conv3} and denote it by $\mc A_{u,v}(z)$. All three functions are analytic on $\frak D^{(m)}$.
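At every finite level $s$ the congruences behind these limits rest on the exact matrix identity $\partial_u\big(D_v(W)\,W^{-1}\big) = D_uD_v(W)\,W^{-1} - \big(D_v(W)W^{-1}\big)\big(D_u(W)W^{-1}\big)$, a consequence of $D(W^{-1})=-W^{-1}D(W)W^{-1}$. The following Python sketch (all helper names are ours, not notation of this paper) verifies this identity for one concrete $2\times 2$ matrix with polynomial entries, after clearing the denominator $\det W$; polynomials in $z_1,z_2$ are encoded as dictionaries mapping exponent pairs to integer coefficients.

```python
from collections import defaultdict

def pmul(f, g):
    """Multiply two polynomials given as {(a, b): c} for c * z1^a * z2^b."""
    h = defaultdict(int)
    for (a, b), c in f.items():
        for (a2, b2), c2 in g.items():
            h[(a + a2, b + b2)] += c * c2
    return {k: v for k, v in h.items() if v}

def padd(f, g, sign=1):
    """Return f + sign*g with zero coefficients dropped."""
    h = defaultdict(int, f)
    for k, v in g.items():
        h[k] += sign * v
    return {k: v for k, v in h.items() if v}

def pdiff(f, var):
    """Partial derivative: var = 0 for d/dz1, var = 1 for d/dz2."""
    h = {}
    for (a, b), c in f.items():
        e = (a, b)[var]
        if e:
            k = (a - 1, b) if var == 0 else (a, b - 1)
            h[k] = h.get(k, 0) + e * c
    return h

def mmul(A, B):   # product of 2x2 matrices of polynomials
    return [[padd(pmul(A[i][0], B[0][j]), pmul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

def mdiff(A, var):  # entrywise partial derivative
    return [[pdiff(A[i][j], var) for j in range(2)] for i in range(2)]

# W(z1, z2) = [[z1, z2], [1, z1*z2]], an invertible matrix of polynomials
W = [[{(1, 0): 1}, {(0, 1): 1}], [{(0, 0): 1}, {(1, 1): 1}]]
adjW = [[W[1][1], padd({}, W[0][1], -1)], [padd({}, W[1][0], -1), W[0][0]]]
detW = padd(pmul(W[0][0], W[1][1]), pmul(W[0][1], W[1][0]), -1)

u, v = 0, 1
N = mmul(mdiff(W, v), adjW)                    # numerator of A_v = D_v(W) W^{-1}
# LHS: quotient rule for d/dz_u (N / detW), numerator over detW^2
lhs = [[padd(pmul(pdiff(N[i][j], u), detW),
             pmul(N[i][j], pdiff(detW, u)), -1) for j in range(2)] for i in range(2)]
M1 = mmul(mdiff(mdiff(W, v), u), adjW)          # numerator of A_{u,v}
M2 = mmul(N, mmul(mdiff(W, u), adjW))           # numerator of A_v A_u (times detW^2)
rhs = [[padd(pmul(M1[i][j], detW), M2[i][j], -1) for j in range(2)] for i in range(2)]
assert lhs == rhs
```

In the limit $s\to\infty$, entry-by-entry bookkeeping of this kind is what Lemma \ref{lem conv3} below records for the matrices $\mc A_u$, $\mc A_v$, $\mc A_{u,v}$.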
\begin{lem} [{\cite[Lemma 3.7]{VZ2}}] \label{lem conv3} We have \bea \frac{\der}{\der z_u}\mc A_v = \mc A_{u,v} - \mc A_v\mc A_u\,. \eea \qed \end{lem} \section{\KZ/ equations and complex solutions} \label{sec 4} \subsection{\KZ/ equations} Let $\g$ be a simple Lie algebra with an invariant scalar product. The \emph{Casimir element} is $\Om = {\sum}_i \,h_i\ox h_i \in \g \ox \g$, where $(h_i)\subset\g$ is an orthonormal basis. Let $V=\otimes_{i=1}^n V_i$ be a tensor product of $\g$-modules and let $\ka\in\C^\times$. The {\it \KZ/ equations} are the system of differential equations on a $V$-valued function $I(z_1,\dots,z_n)$, \bea \frac{\der I}{\der z_i}\ =\ \frac 1\ka\,{\sum}_{j\ne i}\, \frac{\Om_{i,j}}{z_i-z_j} I, \qquad i=1,\dots,n, \eea where $\Om_{i,j}:V\to V$ is the Casimir operator acting in the $i$th and $j$th tensor factors, see \cite{KZ, EFK}. \vsk.2> This is a system of Fuchsian first order linear differential equations. The equations are defined on the complement in $\C^n$ of the union of all diagonal hyperplanes. \vsk.2> The object of our discussion is the following particular case. Let $n, q$ be positive integers. We consider the following system of differential and algebraic equations for a column $n$-vector $I=(I_1,\dots,I_n)$ depending on variables $z=(z_1,\dots,z_n)$\,: \bean \label{KZ} \phantom{aaa} \frac{\partial I}{\partial z_i} \ = \ {\frac 1 q} \sum_{j \ne i} \frac{\Omega_{ij}}{z_i - z_j} I , \quad i = 1, \dots , n, \qquad I_1+\dots+I_{n}=0, \eean where the $n\times n$-matrices $\Om_{ij}$ have the form \bea \Omega_{ij} \ = \ \begin{pmatrix} & \vdots^{\kern-1.2mm i} & & \vdots^{\kern-1.2mm j} & \\ {\scriptstyle i} \cdots & {-1} & \cdots & 1 & \cdots \\ & \vdots & & \vdots & \\ {\scriptstyle j} \cdots & 1 & \cdots & -1& \cdots \\ & \vdots & & \vdots & \end{pmatrix} , \eea and all other entries are zero.
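Concretely, $\Om_{ij}$ has $-1$ at the diagonal positions $(i,i)$ and $(j,j)$, $+1$ at $(i,j)$ and $(j,i)$, and zeros elsewhere. A minimal Python sketch (function names are ours) builds these matrices together with the operators ${\frac 1q}\sum_{j\ne i}\Om_{ij}/(z_i-z_j)$ appearing in \eqref{KZ}, and checks that each such operator is symmetric and annihilates the vector $(1,\dots,1)$; the latter is what makes the algebraic constraint $I_1+\dots+I_n=0$ compatible with the differential equations.

```python
from fractions import Fraction

def omega(n, i, j):
    """n x n matrix Omega_{ij} (0-based i, j): -1 at (i,i), (j,j); +1 at (i,j), (j,i)."""
    M = [[0] * n for _ in range(n)]
    M[i][i] = M[j][j] = -1
    M[i][j] = M[j][i] = 1
    return M

def gaudin(n, q, z, i):
    """The operator (1/q) * sum_{j != i} Omega_{ij} / (z_i - z_j) from the KZ system."""
    H = [[Fraction(0)] * n for _ in range(n)]
    for j in range(n):
        if j == i:
            continue
        c = Fraction(1, q) / (z[i] - z[j])
        O = omega(n, i, j)
        for a in range(n):
            for b in range(n):
                H[a][b] += c * O[a][b]
    return H

n, q = 4, 3
z = [Fraction(k) for k in (1, 2, 5, 7)]   # sample pairwise distinct points
for i in range(n):
    H = gaudin(n, q, z, i)
    # each operator annihilates (1, ..., 1) and is symmetric
    assert all(sum(H[a][b] for b in range(n)) == 0 for a in range(n))
    assert all(H[a][b] == H[b][a] for a in range(n) for b in range(n))
```

Since each $\Om_{ij}$ kills $(1,\dots,1)$ on both sides, the sum $I_1+\dots+I_n$ is constant along solutions of the differential equations, so the algebraic constraint selects an invariant subspace.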
This joint system of {\it differential and algebraic equations} will be called the {\it system of \KZ/ equations} in this paper. \vsk.2> For $i=1,\dots,n$ denote \bean \label{GH} & H_i(z) = {\frac 1q} \sum_{j\ne i} \frac{\Omega_{ij}}{z_i - z_j}\,, \qquad \nabla_i^{\on{KZ}} = \frac{\der}{\der z_i} - H_i(z), \qquad i=1,\dots,n. \eean The linear operators $H_i(z)$ are called the Gaudin Hamiltonians. The \KZ/ equations can be written as the system of equations, \bea \nabla_i^{\on{KZ}}I=0, \quad i=1,\dots,n,\qquad I_1+\dots + I_n =0. \eea \vsk.2> System \eqref{KZ} is the system of differential \KZ/ equations with parameter $\ka=q$ associated with the Lie algebra $\sll_2$ and the subspace of singular vectors of weight $n-2$ of the $n$-th tensor power $(\C^2)^{\ox {n}}$ of the two-dimensional irreducible $\sll_2$-module, up to a gauge transformation, see this example in \cite[Section 1.1]{V2}, see also \cite{V3}. \subsection{Solutions over $\C$} \label{sec 11.4} Define the {\it master function} \bea \Phi(t,z) = (t-z_1)^{-1/q}\dots (t-z_n)^{-1/q} \eea and the column $n$-vector \bean \label{KZ sol} I^{(C)}(z) = (I_1,\dots,I_n):= \int_{C} \Big(\frac {\Phi(t,z)}{t-z_1}, \dots , \frac {\Phi(t,z)}{t-z_n}\Big)dt \,, \eean where $C\subset \C-\{z_1,\dots,z_n\}$ is a closed contour on which the integrand returns to its initial value after $t$ goes around $C$. \begin{thm} The function $I^{(C)}(z)$ is a solution of system \eqref{KZ}. \end{thm} This theorem is a very particular case of the results in \cite{SV1}.
\begin{proof} The theorem follows from Stokes' theorem and the two identities: \bean \label{i1} -\frac 1q\, \Big(\frac {\Phi(t,z)}{t-z_1} + \dots + \frac {\Phi(t,z)}{t-z_n}\Big)\, =\, \frac{\der\Phi}{\der t}(t,z)\,, \eean \bean \label{i2} \Big(\frac{\der }{\der z_i}-\frac1q \sum_{j\ne i} \frac {\Omega_{i,j}}{z_i-z_j} \Big) \Big(\frac {\Phi(t,z)}{t-z_1}, \dots, \frac {\Phi(t,z)}{t-z_n}\Big)\, = \frac{\der \Psi^i}{\der t} (t,z), \eean where $\Psi^i(t,z)$ is the column $n$-vector $(0,\dots,0,-\frac{\Phi(t,z)}{t-z_i},0,\dots,0)$ with the nonzero element at the $i$-th place. \end{proof} \begin{thm} [{cf.~\cite[Formula (1.3)]{V1}}] \label{thm dim} All solutions of system \eqref{KZ} have this form. Namely, the complex vector space of solutions of the form \eqref{KZ sol} is $(n-1)$-dimensional. \end{thm} \subsection{Solutions as vectors of first derivatives} \label{sec 11.5} Consider the integral \bea T(z) = T^{(C)}(z) = \int_C \Phi(t,z) \,dt. \eea Then \bea I^{(C)}(z) = \, q\, \Big(\frac {\der T^{(C)}}{\der z_1}, \dots , \frac {\der T^{(C)}}{\der z_n}\Big). \eea Denote $\nabla T = \Big(\frac {\der T}{\der z_1},\dots, \frac {\der T}{\der z_n}\Big)$. Then the column gradient vector $\nabla T$ of the function $T(z)$ satisfies the following system of \KZ/ equations \bea \nabla_i^{\on{KZ}} \nabla T =0, \quad i=1,\dots,n,\qquad \frac {\der T}{\der z_1} +\dots + \frac {\der T}{\der z_n}=0. \eea This is a system of second order linear differential equations on the function $T(z)$. \section{Solutions modulo powers of $p$} \label{sec 5} \subsection{Assumptions} \label{sec ass} ${}$ Let $p, q$, $p>q$, be prime numbers. Let $e$ be the order of $p$ modulo $q$, that is, the least positive integer such that $p^e\equiv 1\pmod{q}$. Hence $(p^e-1)/q$ is a positive integer. Let $n=gq+1$ for some positive integer $g$. Assume that $p^e> n$ and $p\geq n+q-2$. 
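For concrete parameters these assumptions are easy to test; for example $p=5$, $q=3$ give $e=2$ and $(p^e-1)/q=8$, so with $g=1$, $n=4$ all conditions hold. A minimal Python check (function names are ours):

```python
def mult_order(p, q):
    """Least e >= 1 with p^e congruent to 1 modulo q (p not divisible by q)."""
    e, x = 1, p % q
    while x != 1:
        x = (x * p) % q
        e += 1
    return e

def check_assumptions(p, q, g):
    """Return (e, n) if (p, q, g) satisfies the standing assumptions, else None."""
    e = mult_order(p, q)
    n = g * q + 1
    assert (p**e - 1) % q == 0          # (p^e - 1)/q is a positive integer
    if p > q and p**e > n and p >= n + q - 2:
        return e, n
    return None

# p = 5, q = 3: 5 = 2 (mod 3), 5^2 = 25 = 1 (mod 3), so e = 2 and (p^e - 1)/q = 8
assert check_assumptions(5, 3, 1) == (2, 4)
```

Note that the conditions genuinely restrict the parameters: for $p=7$, $q=3$, $g=2$ one has $e=1$ and $n=7$, and the requirement $p^e>n$ fails.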
In this paper we consider the system of \KZ/ equations \eqref{KZ} with $n=gq+1$ and $\ka = q$ and study polynomial solutions of the \KZ/ equations modulo powers of $p$. \subsection{Polynomial solutions} \label{sec:new} For an integer $s\geq 1$ define the {\it master polynomial} \bea \Phi_s(t,z) = \big((t-z_1)\dots(t-z_n)\big)^{(p^{es}-1)/q}. \eea For $\ell=1,\dots, g$ define the column $n$-vector \bea I_{s,\ell} (z)=(I_{s,\ell,1}, \dots, I_{s,\ell,n}) \eea as the coefficient of $t^{\ell p^{es}-1}$ in the column $n$-vector of polynomials $\big(\frac{\Phi_s(t,z)}{t-z_1}, \dots, \frac {\Phi_s(t,z)}{t-z_n}\big)$. Notice that \bea \deg_t \frac{\Phi_s(t,z)}{t-z_i} = (gq+1)\frac{p^{es}-1}q-1 = gp^{es}-1 + \frac{p^{es}-1}q -g. \eea If $\ell >g$, then the polynomial $\frac{\Phi_s(t,z)}{t-z_i}$ does not have the monomial $t^{\ell p^{es}-1}$. \vsk.2> \begin{thm} [cf. \cite{V5, VZ2}] \label{thm 7.3} The column $n$-vector $I_{s,\ell}(z)$ of polynomials in $z$ is a solution of the system of \KZ/ equations \eqref{KZ} modulo $p^{es}$. \end{thm} \vsk.2> We call the column $n$-vectors $I_{s,\ell}(z)$, $\ell=1,\dots,g$, the {\it $p^{es}$-hypergeometric solutions} of the \KZ/ equations \eqref{KZ}. \vsk.2> \begin{proof} We have the following modifications of identities \eqref{i1}, \eqref{i2}\,: \bea \frac {p^{es}-1}q\, \Big(\frac {\Phi_s(t,z)}{t-z_1} + \dots + \frac {\Phi_s(t,z)}{t-z_n}\Big)\, =\, \frac{\der\Phi_s}{\der t}(t,z)\,, \eea \bea \Big(\frac{\der }{\der z_i} + \frac {p^{es}-1}q \sum_{j\ne i} \frac {\Omega_{i,j}}{z_i-z_j} \Big) \Big(\frac {\Phi_s(t,z)}{t-z_1}, \dots, \frac {\Phi_s(t,z)}{t-z_n}\Big)\, = \frac{\der \Psi_s^i}{\der t} (t,z), \eea where $\Psi_s^i(t,z)$ is the column $n$-vector $(0,\dots,0,-\frac{\Phi_s(t,z)}{t-z_i},0,\dots,0)$ with the nonzero element at the $i$-th place. Theorem \ref{thm 7.3} follows from these identities.
\end{proof} Consider the $n\times g$ matrix \bea I_s(z) = (I_{s,1},\dots,I_{s,g}) = \big(I_{s, \ell, i}\big)_{\ell = 1,\dots,g}^{i=1,\dots,n}\ , \eea where $I_{s, \ell, i}$ stands in the $\ell$-th column and $i$-th row. The matrix $I_s(z)$ satisfies the \KZ/ equations, \bea \nabla_i^{\on{KZ}}I_s(z) =0, \quad i=1,\dots,n,\qquad I_{s,\ell,1}(z)+\dots + I_{s,\ell,n}(z) =0, \quad \ell=1,\dots,g, \eea modulo $p^{es}$. \subsection{Coefficients of solutions} Consider the lexicographical ordering of monomials \linebreak $z_1^{d_1}\dots z_{n}^{d_{n}}$ generated by $z_1>\dots > z_{n}$. For a nonzero Laurent polynomial $f(z)=\sum_{d_1,\dots,d_{n}} a_{d_1,\dots,d_{n}} z_1^{d_1}\dots z_{n}^{d_{n}}$ with coefficients in $\Z$, the nonzero summand $a_{d_1,\dots,d_{n}} z_1^{d_1}\dots z_n^{d_{n}}$ with the largest monomial $z_1^{d_1}\dots z_{n}^{d_{n}}$ is called the {\it leading term} of $f(z)$. \vsk.2> If $f(z)$ and $g(z)$ are two nonzero Laurent polynomials, then the leading term of $f(z)g(z)$ equals the product of the leading terms of $f(z)$ and $g(z)$. \begin{lem} \label{lem leco} For $\ell=1,\dots,g$, the leading term of the vector-polynomial $I_{1,\ell}$ equals \linebreak $C_{\ell}\cdot (z_1\dots z_{q(g-\ell)+1})^{(p^e-1)/q}/z_{q(g-\ell)+1}^\ell$, \bean \label{leco} C_{\ell} \,=\, \pm\binom{(p^e-1)/q-1}{\ell-1}\Big(0, \dots, 0, 1,\frac{p^e-1}{q\ell},\dots, \frac{p^e-1}{q\ell}\Big), \eean where $\frac{p^e-1}{q\ell}$ is repeated $q\ell$ times, and \bean \label{bino} \binom{(p^e-1)/q-1}{\ell-1}\not\equiv 0 \pmod{p}. \eean \end{lem} \begin{proof} Formula \eqref{leco} is obtained by inspection. To prove \eqref{bino} consider the $p$-ary expansion $(p^e-1)/q-1 = a_0 + a_1p +\dots$ with $0\leq a_i\leq p-1$. The noncongruence \eqref{bino} follows from the inequality $a_0\geq g-1$ and Lucas' theorem. We prove that $a_0\geq g-1$ under our assumption $p\geq n+q-2$. Indeed, $p^e = 1 + q(1+a_0) + qa_1p+ \dots$. Hence $1+ q(1+a_0)\geq p$.
Let $p=qk+r$ for some integers $k,r$, $1\leq r\leq q-1$. Then $1+q(1+a_0)\geq qk+r$ or $q(1+a_0)\geq qk+r-1\geq kq$ or $a_0\geq k-1$. Hence $a_0\geq g-1$ if $k\geq g$. The inequality $p\geq n+q-2$ can be written as $kq+r\geq gq+1+q-2$ or $kq\geq gq+ q-r-1$. Hence $k\geq g$. The lemma is proved. \end{proof} \begin{lem} \label{lem minor} Consider the $n\times g$ matrix $I_1(z) = (I_{1,1},\dots,I_{1,g})$ and its $g\times g$ minor $M(z)$ in rows with indices $q(g-\ell) +1$ where $\ell=1,\dots,g$. Then $M(z)$ is a homogeneous polynomial of degree \bean \label{deg M} d_M = \frac{p^e-1}{q}\cdot\frac{qg^2+2g-qg}2 - \frac{g(g+1)}2\,, \eean and the polynomial $M(z)$ is nonzero modulo $p$. \end{lem} \begin{proof} Every entry of the column $I_{1,\ell}$ is a homogeneous polynomial. Hence $M(z)$ is a homogeneous polynomial. By Lemma \ref{lem leco} the leading term of $M(z)$ equals \bean \label{lcd} \pm \prod_{\ell=1}^g \binom{(p^e-1)/q-1}{\ell-1} (z_1\dots z_{q(g-\ell)+1})^{(p^e-1)/q}/z_{q(g-\ell)+1}^\ell\,. \eean This expression is nonzero modulo $p$ by Lemma \ref{lem leco}. Formula \eqref{lcd} implies \eqref{deg M}. \end{proof} \section{Congruences for solutions of \KZ/ equations} \label{sec 6} \subsection{Congruences for Hasse--Witt matrices of \KZ/ equations} Let $r=1$, $n=gq+1$, ${\bf e}=(e,e,\dots)$, where $e$ is defined in Section \ref{sec ass}. Let \bean \label{DN} & \Ga =\{1,\dots,g\}\subset \Z, \qquad \Dl=(\Ga,\Ga, \dots ), \\ \notag & N=[0, gp^e + (p^e-1)/q-g]\subset \R. \eean The infinite tuple $(N, N,\dots)$ of intervals is $(\Dl,{\bf e})$-admissible, see Definition \textup{\ref{defN}}. Recall the polynomial \bea \Phi_1(t,z) = \big((t-z_1)\dots(t-z_n)\big)^{(p^e-1)/q}. \eea The Newton polytope of $\Phi_1(t,z)$ with respect to the variable $t$ is the interval \linebreak $N=[0, gp^e + (p^e-1)/q -g]$. We also have \bea \Phi_s(t,z) = \Phi_1(t,z)\cdot \Phi_1(t,z)^{p^e}\dots \Phi_1(t,z)^{p^{e(s-1)}}\,.
\eea The infinite tuple $(\Phi_1(t,z), \Phi_1(t,z),\dots)$ is $(\Dl,{\bf e})$-admissible, see Definition \ref{defn}. \vsk.2> For $s\geq 1$ consider the Hasse--Witt $g\times g$ matrix \bea A(\Phi_s(t,z)) := A(es, \Ga, \Ga, \Phi_s(t,z)) = \big( \Cf_{p^{es}v-u}(\Phi_s(t,z))\big)_{u,v=1,\dots,g}\,, \eea see \eqref{Cuv+}. The entries of this matrix are polynomials in $z$. \vsk.2> \begin{thm} \label{thm F ne} The determinant $\det A(\Phi_1(t,z))$ is a homogeneous polynomial in $z$ of degree \bean \label{deg d} d_\Phi = \frac{p^e-1}{q}\cdot\frac{qg^2+2g-qg}2\,, \eean and the determinant is nonzero modulo $p$. \end{thm} \begin{proof} Denote $A(\Phi_1(t,z)) = : (A_{u,v}(z))_{u,v=1,\dots,g}$\,. \begin{lem} The leading term of $A_{u,v}(z)$ equals \bea && \pm\binom{(p^e-1)/q}{v-u} (z_1z_2\dots z_{qg+1-qv})^{(p^e-1)/q}/ z_{qg+1-qv}^{v-u}\,, \qquad \text{if}\;\; v\geq u, \\ \notag && \pm\binom{(p^e-1)/q}{u-v} (z_1z_2\dots z_{qg+1-qv})^{(p^e-1)/q} z_{qg+2-qv}^{u-v}\,, \qquad \quad\ \text{if}\;\; v\leq u. \eea For example, for $g=2$ the matrix of leading terms is \bean \label{ex d} \begin{pmatrix} \pm(z_1\dots z_{q+1})^{(p^e-1)/q} & \pm \binom{(p^e-1)/q}{1}z_1^{(p^e-1)/q}/z_1 \\ \pm \binom{(p^e-1)/q}{1}(z_1\dots z_{q+1})^{(p^e-1)/q}z_{q+2} & \pm z_1^{(p^e-1)/q} \end{pmatrix} . \eean \end{lem} \begin{proof} The proof is by inspection. \end{proof} The fact that $\det A(\Phi_1(t,z))$ is a homogeneous polynomial easily follows from the definition of $A(\Phi_1(t,z))$. It is also easy to see that the leading term of the determinant of the matrix of leading terms of $A_{u,v}(z)$ equals the product of diagonal elements, \bean \label{Deg} \pm \,\prod_{v=1}^{g}( z_1\dots z_{qg+1-qv})^{(p^e-1)/q}. \eean This expression is not congruent to zero modulo $p$. Counting the degree of the monomial in \eqref{Deg} we obtain \eqref{deg d}. This proves Theorem \ref{thm F ne}.
\end{proof} \begin{cor} \label{cor F ne} The infinite nondegenerate $(\Dl, {\bf e})$-admissible tuple $(\Phi_1(t,z), \Phi_1(t,z),\dots )$ satisfies the assumptions of Theorem \textup{\ref{thm 1.6}}. Therefore, \begin{enumerate} \item[\textup{(i)}] for $s\geq 1$ we have \begin{align} \label{alal+} A(\Phi_{s}(t,z)) \equiv A(\Phi_1(t,z))\cdot\si^e(A(\Phi_1(t,z)))\cdots \si^{e(s-1)}(A(\Phi_1(t,z))) \pmod{p}\,; \end{align} \item[\textup{(ii)}] for $s\geq 1$ the determinant of the matrix $A(\Phi_s(t,z))$ is a polynomial, which is nonzero modulo~$p$, and we have modulo $p^s$\,\textup: \begin{align*} & A(\Phi_{s+1}(t,z))\cdot \si^e(A(\Phi_{s}(t,z)))^{-1} \equiv A(\Phi_{s}(t,z))\cdot \si^e(A(\Phi_{s-1}(t,z)))^{-1}, \end{align*} where for $s=1$ we understand the second factor on the right-hand side as the $g\times g$ identity matrix. \end{enumerate} \end{cor} \begin{proof} The corollary follows from Theorems \ref{thm F ne} and \ref{thm 1.6}. \end{proof} \vsk.2> \subsection{Congruences for frames of solutions of \KZ/ equations} \begin{thm} \label{thm coS} We have the following congruences of $n\times g$ matrices. \begin{enumerate} \item[\textup{(i)}] For $s\geq 1$, \bea I_{s+1}(z) \cdot A( \Phi_{s+1}(t,z))^{-1} \equiv I_{s}(z) \cdot A(\Phi_{s}(t,z))^{-1} \pmod{p^s}\,. \eea \item[\textup{(ii)}] For $s\geq 1$ and $j=1,\dots, n$, \bea \frac {\der I_{s+1}}{\der z_j} (z) \cdot A(\Phi_{s+1}(t,z))^{-1} \equiv \frac{\der I_{s}}{\der z_j}(z) \cdot A(\Phi_{s}(t,z))^{-1} \pmod{p^s}\,. \eea \end{enumerate} \end{thm} \vsk.2> \begin{proof} Consider the first row of the Hasse--Witt matrix $A(\Phi_s(t,z))$, \bea \big(A_{1,1}(\Phi_s(t,z)),\dots, A_{1,g}(\Phi_s(t,z))\big), \quad A_{1,\ell}(\Phi_s(t,z)) = \on{Coeff}_{\ell p^{es}-1}(\Phi_{s}(t,z)). \eea For each $A_{1,\ell}(\Phi_s(t,z))$ we view the gradient \bea \nabla A_{1,\ell}(\Phi_s(t,z))=\Big(\frac{\der A_{1,\ell}(\Phi_s(t,z))}{\der z_1}, \dots, \frac{\der A_{1,\ell}(\Phi_s(t,z))}{\der z_n} \Big) \eea as a column $n$-vector.
The resulting $n\times g$ matrix of gradients \bea \nabla A(s,z):=(\nabla A_{1,1}(\Phi_s(t,z)),\dots,\nabla A_{1,g}(\Phi_s(t,z))) \eea is proportional to the matrix $I_s(z)$, $\nabla A(s,z) = \frac{1-p^{es}}q I_s(z)$. By Theorems \ref{thm der} and \ref{thm der2} we have modulo $p^s$, \bea & \nabla A(s+1,z) \cdot A(\Phi_{s+1}(t,z))^{-1} \equiv \nabla A(s,z) \cdot A(\Phi_{s}(t,z))^{-1}, \\ & \frac{\der}{\der z_j}\big(\nabla A(s+1,z)\big) \cdot A(\Phi_{s+1}(t,z))^{-1} \equiv \frac{\der}{\der z_j}\big(\nabla A(s,z)\big) \cdot A(\Phi_{s}(t,z))^{-1}. \eea These congruences imply the theorem. \end{proof} \vsk.2> \begin{cor} \label{thm KZ mod p} For $s\geq 1$ we have \bea I_{s}(z) \cdot A(\Phi_{s}(t,z))^{-1} \equiv I_{1}(z) \cdot A(\Phi_{1}(t,z))^{-1} \pmod{p}. \eea \end{cor} \vsk.2> \subsection{Domain of convergence} By Theorem \ref{thm F ne} the polynomial $\det A(\Phi_1(t,z)) \in \Z[z]$ is of degree $d_\Phi$ and this polynomial is nonzero modulo $p$. For a positive integer $m$ define \bea \frak D^{(m)}_{\on{KZ}} = \{a \in (\Z_p^{(m)})^{n} \mid\ |\det A(\Phi_1(t,a))|_p=1\}\,. \eea By Lemma \ref{lem nonempty} the domain $\frak D^{(m)}_{\on{KZ}}$ is nonempty if $p^m> d_\Phi$. In what follows we assume that $p^m>d_\Phi$. We have $\big\vert\det A(\Phi_{s}(t,a))\big\vert_p =1$ for $ a\in \frak D^{(m)}_{\on{KZ}}$. All entries of $A(\Phi_{s}(t,z))^{-1}$ are rational functions in $z$ regular on $\frak D^{(m)}_{\on{KZ}}$. For every $a\in\frak D^{(m)}_{\on{KZ}}$ all entries of $A(\Phi_{s}(t,a))$ and $A(\Phi_{s}(t,a))^{-1}$ are elements of $\Z_p^{(m)}$. \begin{thm} \label{thm coKZ} The sequence of $g\times g$ matrices \bea \big(A\big(\Phi_{s}(t,z)\big)\cdot \si^e\big(A\big(\Phi_{s-1}(t,z)\big)\big)^{-1}\big)_{s\geq 1}\,, \eea whose entries are rational functions in $z$ regular on $\frak D^{(m)}_{\on{KZ}}$, uniformly converges on $\frak D^{(m)}_{\on{KZ}}$ as $s\to\infty$ to an analytic $g\times g$ matrix which will be denoted by $\mc A(z)$.
For $a\in\frak D^{(m)}_{\on{KZ}}$ we have \bea \big\vert \det \mc A(a) \big\vert_p=1\, \eea and the matrix $\mc A(a)$ is invertible. \end{thm} \begin{proof} The theorem follows from Theorem \ref{thm conv}. \end{proof} \begin{thm} \label{thm coKZ2} For $i=1,\dots,n$ the sequence of $g\times g$ matrices \bea \Big(\Big(\frac{\der}{\der z_i}A\big(\Phi_{s}(t,z)\big)\Big) \cdot A\big(\Phi_{s}(t,z)\big)^{-1}\Big)_{s\geq 1}\,, \eea whose entries are rational functions in $z$ regular on $\frak D^{(m)}_{\on{KZ}}$, uniformly converges on $\frak D^{(m)}_{\on{KZ}}$ as $s\to\infty$ to an analytic $g\times g$ matrix, which will be denoted by $\mc A^{(i)}(z)$. The sequence of $n\times g$ matrices \bea \big(I_s(z)\cdot A\big(\Phi_{s}(t,z)\big)^{-1}\big)_{s\geq 1}\,, \eea whose entries are rational functions in $z$ regular on $\frak D^{(m)}_{\on{KZ}}$, uniformly converges on $\frak D^{(m)}_{\on{KZ}}$ as $s\to\infty$ to an analytic $n\times g$ matrix which will be denoted by $\mc I(z)$. For $i=1,\dots, n$ the sequence of $n\times g$ matrices \bea \Big(\frac{\der I_s}{\der z_i}(z)\cdot A\big(\Phi_{s}(t,z)\big)^{-1}\Big)_{s\geq 1}\,, \eea whose entries are rational functions in $z$ regular on $\frak D^{(m)}_{\on{KZ}}$, uniformly converges on $\frak D^{(m)}_{\on{KZ}}$ as $s\to\infty$ to an analytic $n\times g$ matrix which will be denoted by $\mc I^{(i)}(z)$. We have \bea \frac{\der \mc I}{\der z_i} = \mc I^{(i)} - \mc I \cdot \mc A^{(i)}\,. \eea \end{thm} \begin{proof} The theorem follows from Theorems \ref{thm conv2}, \ref{thm conv3}, and Lemma \ref{lem conv3}. \end{proof} \begin{thm} \label{thm KZ mc} We have the following system of equations on $\frak D^{(m)}_{\on{KZ}}$\,: \bea \mc I^{(i)} = H_i \cdot \mc I, \qquad i=1,\dots, n, \eea where $H_i$ are the Gaudin Hamiltonians defined in \eqref{GH}. \end{thm} \begin{proof} The theorem is a corollary of Theorem \ref{thm 7.3}.
\end{proof} \begin{cor} \label{cor mc I 1} For $a\in \frak D^{(m)}_{\on{KZ}}$ we have \bea \mc I(a) \equiv I_1(a)\cdot A\big(\Phi_{1}(t,a)\big)^{-1} \pmod{p}. \eea \end{cor} \begin{proof} The corollary follows from Corollary \ref{thm KZ mod p} and Theorem \ref{thm coKZ}. \end{proof} \subsection{Vector bundle $\mc L \,\to\, \frak D^{(m),o}_{\on{KZ}}$} Denote \bea W=\{(I_1,\dots,I_n)\in (\Q_p^{(m)})^n\ |\ I_1+\dots+I_n=0\}. \eea We consider vectors $(I_1,\dots,I_n)$ as column vectors. The differential operators $\nabla^{\on{KZ}}_i$, $i=1,\dots,n$, define a connection on the trivial bundle $W\times \frak D^{(m)}_{\on{KZ}} \to \frak D^{(m)}_{\on{KZ}}$, called the \KZ/ connection. The connection has singularities at the diagonal hyperplanes in $(\Z_p^{(m)})^n$ and is well-defined over \bea \frak D^{(m),o}_{\on{KZ}} = \{ a=(a_1,\dots,a_n)\in(\Z_p^{(m)})^n \mid |\det A(\Phi_1(t,a))|_p=1, \, a_i\ne a_j\,\ \forall i,j\}. \eea \vsk.2> \noindent The \KZ/ connection is flat, \bea \big[\nabla^{\on{KZ}}_i, \nabla^{\on{KZ}}_j\big]=0 \qquad \forall\,i,j, \eea see \cite{EFK}. The flat sections of the \KZ/ connection are solutions of system \eqref{KZ} of \KZ/ equations. \vsk.2> For any $a\in \frak D^{(m)}_{\on{KZ}}$ let $\mc L_a \subset W$ be the vector subspace generated by columns of the $n\times g$ matrix $\mc I(a)$. Then \bea \mc L := \bigcup\nolimits_{a\in \frak D^{(m)}_{\on{KZ}}}\,\mc L_a \,\to\, \frak D^{(m)}_{\on{KZ}} \eea is an analytic distribution of vector subspaces in the fibers of the trivial bundle $W\times \frak D^{(m)}_{\on{KZ}} \to \frak D^{(m)}_{\on{KZ}}$. \begin{thm} [{\cite[Theorem 6.7]{VZ2}}] \label{thm inv} The distribution $\mc L \,\to\, \frak D^{(m)}_{\on{KZ}}$ is invariant with respect to the \KZ/ connection. In other words, if $s(z)$ is a local section of $\mc L \,\to\, \frak D^{(m)}_{\on{KZ}}$, then the sections $\nabla_i^{\on{KZ}} s(z)$, $i=1,\dots,n$, also are sections of $\mc L \,\to\, \frak D^{(m)}_{\on{KZ}}$. 
\end{thm} \begin{proof} Let $\mc I(z)= (\mc I_1(z), \dots, \mc I_g(z))$ be columns of the $n\times g$ matrix $\mc I(z)$. Let $a\in \frak D^{(m)}_{\on{KZ}}$. Let $c(z) = (c_1(z),\dots,c_g(z))$ be a column vector of analytic functions at $a$. Consider a local section of the distribution $\mc L \,\to\, \frak D^{(m)}_{\on{KZ}}$, $ s(z)\, =\, \sum_{j=1}^g \,c_j(z) \mc I_j(z)\, = : \, \mc I \cdot c$. Then \bea \nabla^{\on{KZ}}_i s(z) &=& - H_i\cdot \mc I\cdot c +\frac{\der \mc I}{\der z_i} \cdot c + \mc I \cdot \frac{\der c}{\der z_i} \\ &=& - H_i\cdot \mc I\cdot c +( \mc I^{(i)} - \mc I \cdot \mc A^{(i)})\cdot c + \mc I \cdot \frac{\der c}{\der z_i} \\ &=& - H_i\cdot \mc I\cdot c +( H_i \cdot \mc I - \mc I \cdot \mc A^{(i)})\cdot c + \mc I \cdot \frac{\der c}{\der z_i} \\ &=& - \mc I \cdot \mc A^{(i)} \cdot c + \mc I \cdot \frac{\der c}{\der z_i} \,. \eea Clearly, the last expression is a local section of $\mc L \,\to\, \frak D^{(m)}_{\on{KZ}}$. \end{proof} \begin{thm} \label{thm rk} The function $a \mapsto \dim_{\Q_p^{(m)}} \mc L_a$ is constant on $\frak D^{(m), o}_{\on{KZ}}$, in other words, $\mc L \,\to\, \frak D^{(m)}_{\on{KZ}}$ is a vector bundle over $\frak D^{(m),o}_{\on{KZ}}\subset \frak D^{(m)}_{\on{KZ}}$. \end{thm} The proof coincides with the proof of Theorem 6.8 in \cite{VZ2}. \vsk.2> Recall that $d_\Phi$ is the degree of the polynomial $\det A(\Phi_1(t,z))$ and $d_M$ is the degree of the minor defined in Lemma \ref{lem minor}. \begin{thm} \label{thm rk g} If $p^m> d_\Phi + d_M$, then the analytic vector bundle $\mc L\to \frak D^{(m),o}_{\on{KZ}}$ is of rank $g$. \end{thm} \begin{proof} If $p^m> d_\Phi + d_M$, then the minor $M(z)$ defines a function on $\frak D^{(m),o}_{\on{KZ}}$ nonzero modulo $p$ by Lemma \ref{lem nonempty}. Then by Corollary \ref{cor mc I 1}, the $n\times g$ matrix valued function $\mc I(z)$ has a $g\times g$ minor nonzero on $\frak D^{(m)}_{\on{KZ}}$. This proves the theorem. 
\end{proof} \subsection{Remarks} \subsubsection{} One may expect that the subbundle $\mc L\to \frak D^{(m),o}_{\on{KZ}}$ can be extended across $\frak D^{(m)}_{\on{KZ}} - \frak D^{(m),o}_{\on{KZ}}$, the union of the diagonal hyperplanes in $\frak D^{(m)}_{\on{KZ}}$, to a rank $g$ subbundle over all of $\frak D^{(m)}_{\on{KZ}}$. \subsubsection{} Following Dwork we may expect that locally at any point $a\in\frak D^{(m),o}_{\on{KZ}}$, the solutions of the KZ equations with values in $\mc L\to \frak D^{(m),o}_{\on{KZ}}$ are given at $a$ by power series in $z_i-a_i$, $i=1,\dots,n$, bounded in their polydiscs of convergence, while any other local solution at $a$ is given by a power series unbounded in its polydisc of convergence, cf. \cite{Dw} and \cite[Theorem A.4]{V5}. \subsubsection{} The KZ connection $\nabla^{\on{KZ}}_i$, $i=1,\dots,n$, over $\C$ has no nontrivial proper invariant subbundles due to the irreducibility of its monodromy representation, see \cite[Lemma 6]{Fo}. Thus the existence of the invariant subbundle $\mc L\to \frak D^{(m),o}_{\on{KZ}}$ is a $p$-adic feature. \subsubsection{} The invariant subbundles of the KZ connection over $\C$ are usually related to some additional conformal block constructions, for example see \cite{FSV, SV2, V3, V4}. Apparently our subbundle $\mc L\to \frak D^{(m),o}_{\on{KZ}}$ is of a different $p$-adic nature. \bigskip \begin{thebibliography}{[COGP]} \normalsize \frenchspacing \raggedbottom \bi[Dr]{Dr} V.\,G.\,Drinfeld, {\it Quasi-Hopf algebras and Knizhnik-Zamolodchikov equations}, Problems of modern quantum field theory (Alushta, 1989), 1--13, Res. Rep. Phys., Springer 1989. \bi[Dw]{Dw} B.\,Dwork, {\it $p$-adic cycles}, Publ. Math. de l'IH\'ES {\bf 37} (1969), 27--115 \bi[EFK]{EFK} P.\,Etingof, I.\,Frenkel, A.\,Kirillov, {\it Lectures on representation theory and Knizhnik--Zamolodchikov equations}, Mathematical Surveys and Monographs, vol. 58 (AMS, Providence, RI, 1998), xiv+198 pp.
ISBN: 0-8218-0496-0 \bi[FSV]{FSV} B.\,Feigin, V.\,Schechtman, A.\,Varchenko, {\it On algebraic equations satisfied by hypergeometric correlators in WZW models}, I, Comm. Math. Phys. {\bf 163} (1994), 173--184; II, Comm. Math. Phys. {\bf 170} (1995), 219--247 \bi[Fo]{Fo} E.\,Formanek, {\it Braid group representations of low degree}, Proc. London Math. Soc. (3) {\bf 73} (1996), no. 2, 279--322 \bi[KZ]{KZ} V.\,Knizhnik, A.\,Zamolodchikov, {\it Current algebra and the Wess--Zumino model in two dimensions}, Nucl. Phys. {\bf B247} (1984), 83--103 \bi[MO]{MO} D.\,Maulik, A.\,Okounkov, {\it Quantum groups and quantum cohomology}, Ast\'erisque, t.~408 (Soci\'et\'e Math\'ematique de France, 2019), 1--277; \\ {\tt https://doi.org/10.24033/ast.1074} \bi[Me]{Me} A.\,Mellit, {\it A proof of Dwork's congruences}, unpublished (October 20, 2009), 1--3 \bi[MV]{MV} A.\:Mellit, M.\:Vlasenko, {\it Dwork's congruences for the constant terms of powers of a Laurent polynomial}, Int. J. Number Theory {\bf 12} (2016), no. 2, 313--321 \bi[RV1]{RV1} R.\,Rim\'anyi, A.\:Varchenko, {\it The $\F_p$-Selberg integral}, {\tt arXiv:2011.14248}, 1--19 \bi[RV2]{RV2} R.\,Rim\'anyi, A.\:Varchenko, {\it The $\mathbb F_p$-Selberg integral of type $A_n$}, {\tt arXiv:2012.01391}, 1--21 \bi[SV1]{SV1} V.\,Schechtman, A.\,Varchenko, {\it Arrangements of hyperplanes and Lie algebra homology}, Invent. Math. {\bf 106} (1991), 139--194 \bi[SV2]{SV2} V.\,Schechtman, A.\,Varchenko, {\it Solutions of KZ differential equations modulo $p$}, Ramanujan J. {\bf 48} (2019), no. 3, 655--683; \\ {\tt https://doi.org/10.1007/s11139-018-0068-x}, {\tt arXiv:1707.02615} \bi[V1]{V1} A.\,Varchenko, {\it Beta-function of Euler, Vandermonde determinant, Legendre equation and critical values of linear functions of configuration of hyperplanes}, I. Izv. Akademii Nauk USSR, Seriya Mat. {\bf53} (1989), no. 6, 1206--1235; II, Izv. Akademii Nauk USSR, Seriya Mat. {\bf54} (1990), no.
1, 146--158 \bi[V2]{V2} A.\,Varchenko, {\it Multidimensional Hypergeometric Functions and Representation Theory of Lie Algebras and Quantum Groups}, Advanced Series in Mathematical Physics, Vol. 21, World Scientific, 1995 \bi[V3]{V3} A.\,Varchenko, {\it Special functions, KZ type equations, and representation theory}, CBMS Regional Conference Series in Math., vol. 98 (AMS, Providence, RI, 2003), viii+118 pp. ISBN: 0-8218-2867-3 \bi[V4]{V4} A.\,Varchenko, {\it An invariant subbundle of the KZ connection mod $p$ and reducibility of $\hsl$ Verma modules mod $p$}, Math. Notes {\bf 109} (2021), no. 3, 386--397 \bi[V5]{V5} A.\,Varchenko, {\it Notes on solutions of KZ equations modulo $p^s$ and $p$-adic limit \linebreak $s\to\infty$}, with Appendix written jointly with S.\,Sperber, {\tt arXiv:2103.01725}, 1--42 \bi[VZ1]{VZ1} A.\,Varchenko and W.\,Zudilin, {\it Ghosts and congruences for $p^s$-approximations of hypergeometric periods}, {\tt arXiv:2107.08548}, 1--29 \bi[VZ2]{VZ2} A.\,Varchenko and W.\,Zudilin, {\it Congruences for Hasse--Witt matrices and solutions of $p$-adic KZ equations}, {\tt arXiv:2108.12679}, 1--26 \bi[Vl]{Vl} M.\:Vlasenko, {\it Higher Hasse--Witt matrices}, Indag. Math. {\bf 29} (2018), 1411--1424 \end{thebibliography} \end{document}
2205.01430v1
http://arxiv.org/abs/2205.01430v1
A Riccati-Lyapunov Approach to Nonfeedback Capacity of MIMO Gaussian Channels Driven by Stable and Unstable Noise
\documentclass[conference,letterpaper]{IEEEtran} \IEEEoverridecommandlockouts \usepackage[letterpaper, left=.64in,right=.64in,top= 0.64in,bottom= 0.64in]{geometry} \usepackage{amsfonts} \usepackage{epsfig} \let\proof\relax \let\endproof\relax \usepackage{amsthm} \usepackage{graphicx} \usepackage{latexsym} \usepackage{amssymb} \usepackage{amsmath} \usepackage{parskip} \usepackage{multirow} \usepackage{cite} \usepackage{mathtools} \usepackage{epstopdf} \usepackage{etoolbox} \usepackage{array} \usepackage{mathptmx} \usepackage[bottom]{footmisc} \usepackage{tikz} \newcommand{\tikzmark}[2]{\tikz[overlay,remember picture,baseline] \node [anchor=base] (#1) {$#2$};} \usetikzlibrary{decorations.pathreplacing} \usepackage{verbatim} \usepackage{calrsfs} \usepackage{booktabs} \newcommand{\CDC}[1]{\textcolor{blue}{{#1}}} \newcommand{\CDCN}[1]{\textcolor{red}{{#1}}} \newcommand{\CDCC}[1]{\textcolor{green}{{#1}}} \DeclareMathAlphabet{\pazocal}{OMS}{zplm}{m}{n} \newcommand{\pazS}{\pazocal{S}} \newcommand{\pazC}{\pazocal{C}} \newcommand{\pazP}{\pazocal{P}} \newcommand{\pazK}{\pazocal{K}} \newcommand{\pazE}{\pazocal{E}} \newcommand{\pazU}{\pazocal{U}} \newcommand{\mb}{\mathbb} \let\bbordermatrix\bordermatrix \patchcmd{\bbordermatrix}{8.75}{4.75}{}{} \patchcmd{\bbordermatrix}{\left(}{\left[}{}{} \patchcmd{\bbordermatrix}{\right)}{\right]}{}{} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\thefigure}{\thesection.\arabic{figure}} \renewcommand{\thetable}{\thesection.\arabic{table}} \newcommand{\eq}[1]{\mbox{{\rm(\hspace{-.35em}~\ref{#1})}}} \newcommand{\T}{{\cal T}} \newcommand{\F}{{\cal F}} \newcommand{\norm}[1]{\mid \! #1 \! 
\mid} \newcommand{\dnorm}[1]{|\!| #1 |\!|} \newcommand{\B}[2]{{\cal B}(#1,#2)} \newcommand{\ip}[2]{\langle #1,#2 \rangle} \newcommand{\ips}[3]{ \langle #1,#2 \rangle \raisebox{-.2ex}{$_{#3}$} } \newcommand{\real}[1]{\mbox{R\hspace{-2.0ex}I~$^{#1}$}} \newcommand{\re}{\Re} \newcommand{\fal}{\forall} \newcommand{\suchthat}{\, \mid \,} \newcommand{\sr}{\stackrel} \newcommand{\deq}{\doteq} \newcommand{\dar}{\downarrow} \newcommand{\rar}{\rightarrow} \newcommand{\lar}{\leftarrow} \newcommand{\Rar}{\Rightarrow} \newcommand{\til}{\tilde} \newcommand{\wh}{\widehat} \newcommand{\cd}{\cdot} \newcommand{\tri}{\sr{\triangle}{=}} \newcommand{\utri}{\sr{\nabla}{=}} \newcommand{\crar}{\sr{C}{\rar}} \newcommand{\clar}{\sr{C}{\lar}} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\bea}{\begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}} \newcommand{\bes}{\begin{eqnarray*}} \newcommand{\ees}{\end{eqnarray*}} \newcommand{\bce}{\begin{center}} \newcommand{\ece}{\end{center}} \newcommand{\beae}{\begin{IEEEeqnarray}{rCl}} \newcommand{\eeae}{\end{IEEEeqnarray}} \newcommand{\nms}{\IEEEeqnarraynumspace} \def\VR{\kern-\arraycolsep\strut\vrule &\kern-\arraycolsep} \def\vr{\kern-\arraycolsep & \kern-\arraycolsep} \newcommand{\ben}{\begin{enumerate}} \newcommand{\een}{\end{enumerate}} \newcommand{\hso}{\hspace{.1in}} \newcommand{\hst}{\hspace{.2in}} \newcommand{\hsf}{\hspace{.5in}} \newcommand{\noi}{\noindent} \newcommand{\mc}{\multicolumn} \newtheorem{theorem}{Theorem}[section] \newtheorem{problem}{Problem}[section] \newtheorem{remark}{Remark}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{notation}{Notation}[section] \newtheorem{contribution}{Contribution}[section] \newtheorem{assumptions}{Assumptions}[section] \newtheorem{definition}{Definition}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{example}{Example}[section] \newtheorem{application}{Application}[section] \newtheorem{model}{Models}[section] 
\newtheorem{proposition}{Proposition}[section] \newtheorem{conclusion}{Conclusion}[section] \newtheorem{conjecture}{Conjecture}[section] \newtheorem{observation}{Observation}[section] \newtheorem{claim}{Claim}[section] \newtheorem{counterexample}{Counterexample}[section] \newtheorem{conditions}{Conditions}[section] \begin{document} \title{A Riccati-Lyapunov Approach to Nonfeedback Capacity of MIMO Gaussian Channels Driven by Stable and Unstable Noise\\ } \author{\IEEEauthorblockN{ Charalambos D. Charalambous and Stelios Louka} \IEEEauthorblockA{ \textit{University of Cyprus}\\ [email protected],[email protected]} } \maketitle \begin{abstract} In this paper it is shown that the nonfeedback capacity of multiple-input multiple-output (MIMO) additive Gaussian noise (AGN) channels, when the noise is nonstationary and unstable, is characterized by an asymptotic optimization problem that involves a generalized matrix algebraic Riccati equation (ARE) of filtering theory and a matrix Lyapunov equation of stability theory of Gaussian systems. Furthermore, conditions are identified such that the characterization of nonfeedback capacity corresponds to the uniform asymptotic per unit time limit, over all initial distributions, of the characterization of a finite block, or transmission without feedback information (FTwFI), capacity, which involves two generalized matrix difference Riccati equations (DREs) and a matrix difference Lyapunov equation. \end{abstract} \section{Introduction, Problem, and Main Results} \label{sect:problem} \par Shannon's information theoretic definition of nonfeedback capacity of additive Gaussian noise (AGN) channels with memory is a fundamental mathematical tool for the analysis and synthesis of reliable data transmission over noisy communication channels.
The frequency-domain characterization of nonfeedback capacity for stationary or asymptotically stationary channels, i.e., when the channel noise, inputs, and outputs are asymptotically stationary, gave rise to the so-called water-filling solution, which is documented in \cite{gallager1968,ihara1993,cover-thomas2006,yeung2008} and in several research papers, such as Tsybakov \cite{tsybakov2006}. The analysis of channel capacity via asymptotically equivalent matrices for MIMO Gaussian channels is found in \cite{brandenburg-wyner:1974,gutierrez-crespo:2008} and more recently in \cite{gutierrez-crespo-rodriguez-hogstad:2017}; Gaussian channels with intersymbol interference are treated in \cite{hirt-massey:1988}. Bounds on nonfeedback capacity for single-input single-output (SISO) AGN channels with stable noise with memory are derived in \cite{yanaki1992,yanaki1994,chen-yanaki1999}, together with comparisons to feedback capacity. Recently, sequential time-domain characterizations of nonfeedback capacity for SISO AGN channels with unstable, finite-memory autoregressive noise were presented in \cite{kourtellaris-charalambous-loyka:2020a}. Sequential characterizations of nonfeedback capacity for SISO AGN channels with general unstable noise with memory are obtained in \cite{charalambous2020new}, and lower bounds in \cite[Corollary~II.1]{charalambous-kourtellaris-loykaIEEEITC2019}. Additional lower bounds, equivalent to the Cover and Pombra \cite{cover-pombra1989} nonfeedback capacity formula, are discussed in \cite{louka-kourtellaris-charalambous:2020b}, \cite{kourtellaris-charalambous-loyka:2020b}. Equivalent characterizations of the Cover and Pombra $n$-finite transmission (block length) capacity for MIMO AGN channels with memory are presented in \cite{charalambous-kourtellaris-louka:2020a}.
The purpose of this paper is to derive new results on nonfeedback capacity for multiple-input multiple-output (MIMO) AGN channels driven by unstable, nonstationary, and nonergodic noise in the time domain, described by \begin{align} Y_t=H_tX_t+V_t, \ \ t=1, \ldots, n, \ \ \frac{1}{n} {\bf E} \Big\{\sum_{t=1}^{n} ||X_t||_{{\mathbb R}^{n_x}}^2\Big\} \leq \kappa \label{g_cp_1} \end{align} where $\kappa \in [0,\infty)$, $X_t : \Omega \rar {\mathbb X}\tri {\mathbb R}^{n_x}$, $Y_t : \Omega \rar {\mathbb Y}\tri {\mathbb R}^{n_y}$, and $V_t : \Omega \rar {\mathbb V}\tri {\mathbb R}^{n_y}$, are the channel input, channel output and noise random variables (RVs), respectively, $(n_x, n_y)$ are finite positive integers, $H_t \in {\mathbb R}^{n_y\times n_x}$ is nonrandom and the distribution of the sequence $V^n = \{ V_1, \ldots, V_n\}$, i.e., ${\bf P}_{V^n}\tri {\mathbb P}\{V_1\leq v_1, \ldots, V_n\leq v_n\}$, is jointly Gaussian. \\ The main difference from previous characterizations of nonfeedback capacity found in the literature, i.e., \cite{gallager1968,ihara1993,cover-thomas2006,yeung2008,tsybakov2006}, is that nonergodic, time-varying, and unstable noise gives higher achievable rates than stable noise (see \cite{charalambous2020new,louka-kourtellaris-charalambous:2020b,kourtellaris-charalambous-loyka:2020b,kourtellaris-charalambous-loyka:2020a}).
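As a minimal illustration of the channel model (\ref{g_cp_1}), the following sketch simulates a hypothetical scalar instance with white noise (all numerical values are samples chosen for illustration, and the noise here is stable rather than the unstable noise of interest) to show the average power constraint and the resulting output second moment.

```python
import random

random.seed(1)

H = 1.5          # scalar channel gain H_t (time-invariant for this sketch)
kappa = 2.0      # average power budget
n = 50000        # block length for the empirical averages

# i.i.d. Gaussian input with E X_t^2 = kappa meets the constraint with equality
X = [random.gauss(0.0, kappa ** 0.5) for _ in range(n)]
V = [random.gauss(0.0, 1.0) for _ in range(n)]   # white Gaussian noise, Var V_t = 1
Y = [H * x + v for x, v in zip(X, V)]            # Y_t = H X_t + V_t

avg_power = sum(x * x for x in X) / n
assert abs(avg_power - kappa) < 0.1              # (1/n) E sum ||X_t||^2 ≈ kappa

# for independent input and noise, E Y_t^2 = H^2 kappa + 1
avg_out = sum(y * y for y in Y) / n
assert abs(avg_out - (H * H * kappa + 1.0)) < 0.25
```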
{\it Operational Nonfeedback Code.} The code for the MIMO AGN channel (\ref{g_cp_1}) is denoted by $\{(n, {\cal M}^{(n)}, \epsilon_n, \kappa):n=0, 1, \dots\}$ and consists of (a) a set of uniformly distributed messages $M: \Omega \rar {\cal M}^{(n)} \tri \{ 1, \ldots, M^{(n)}\}$, known to the encoder and decoder, (b) a set of encoder strategies mapping messages $M=m$ and past channel inputs into current inputs, defined by\footnote{The superscript on the expectation operator ${\bf E}^g$ indicates that the corresponding distribution ${\bf P}= {\bf P}^g$ depends on the encoding strategy $g$.} \begin{align} &{\cal E}_n(\kappa) \triangleq \Big\{g_i: {\cal M}^{(n)} \times {\mathbb X}^{i-1} \rar {\mb X}_i, \:x_1=g_1(m), \; x_2=g_2(m,x_1), \nonumber \\ & \ldots, x_n=g_n(m,x^{n-1})\; \Big|\; \frac{1}{n} {\bf E}^g \Big\{\sum_{t=1}^n ||X_t||_{{\mathbb R}^{n_x}}^2\Big\} \leq \kappa \Big\} \label{NCM-6} \end{align} where $g_i(\cdot)$ are measurable maps and (c) a decoder $d_{n}(\cdot):{\mb Y}^n\rar {\cal M}^{(n)}$, with average probability of decoding error \begin{align} {\bf P}_{error}^{(n)} \triangleq& \frac{1}{M^{(n)}} \sum_{m \in {\cal M}^{(n)}} {\bf P}^g \Big\{d_{n}(Y^{n}) \neq m \big| M=m\Big\} \leq \epsilon_n. \label{NCM-7} \end{align} Note that ${\bf P}_{error}^{(n)}$ depends on the distribution of $V_1$, i.e., ${\bf P}_{V_1}$. The messages $M: \Omega \rar {\cal M}^{(n)}$ are assumed independent of the noise sequences $V^n$, that is, $ {\bf P}_{V^n|M}={\bf P}_{V^n}$. The {\it code rate} is defined by $r_n\triangleq \frac{1}{n} \log M^{(n)}$. A rate $R$ is called an {\it achievable rate} if there exists an encoder and decoder sequence satisfying $\lim_{n\longrightarrow\infty} {\epsilon}_n=0$ and $\liminf_{n \longrightarrow\infty}\frac{1}{n}\log{M}^{(n)}\geq R$. The {\it nonfeedback capacity} is defined operationally by $C^{OP}(\kappa)\triangleq \sup \{R\big| R \: \mbox{is achievable}\}$ for every ${\bf P}_{V_1}$, i.e., it does not depend on ${\bf P}_{V_1}$.
\subsection{Main Results of the Paper} Throughout this paper, we consider the noise of Definition~\ref{def_nr_2}. \begin{definition} \label{def_nr_2} A time-varying partially observable state space (PO-SS) realization of the Gaussian noise $V^n$ is defined by \begin{align} &S_{t+1}=A_t S_{t}+ B_t W_t, \hso t=1, \ldots, n-1,\label{real_1a}\\ &V_t= C_t S_{t} + N_t W_t, \hso t=1, \ldots, n, \label{real_1_ab}\\ & S_1\in G(\mu_{S_1},K_{S_1}), \hso K_{S_1} \succeq 0, \\ &W_t\in G(0,K_{W_t}),\hso K_{W_t} \succ 0, \hso t=1, \ldots, n, \\ & S_t : \Omega \rar {\mathbb R}^{n_s}, \ W_t : \Omega \rar {\mathbb R}^{n_w}, \ V_t : \Omega \rar {\mathbb R}^{n_y}, \\ &R_t\tri N_t K_{W_t} N_t^T \succ 0,\hso t=1, \ldots, n \label{cp_e_ar2_s1_a_new} \end{align} where $W_t, t=1, \ldots, n$ is an independent Gaussian process, independent of $S_1$, $n_y, n_s, n_w$ are arbitrary positive integers, $(A_t, B_t, C_t, N_t, K_{S_1}, K_{W_t})$ are nonrandom and $X \in G(\mu_X, K_X)$ means $X$ is Gaussian distributed with mean $\mu_X$ and covariance $K_X$. For any matrix $Q \in {\mathbb R}^{n \times n}$, the notation $Q\succeq 0$ (resp. $Q \succ 0$) means $Q$ is symmetric positive semidefinite (resp. definite). \end{definition} {\it Converse Coding Theorem.} Suppose there exists a sequence of achievable nonfeedback codes with error probability ${\bf P}_{error}^{(n)}\rightarrow 0$, as $n \rightarrow \infty$. Then $ R \leq \lim_{n \longrightarrow \infty}\frac{1}{n} C_n(\kappa, {\bf P}_{Y_1})$, where $C_n(\kappa, {\bf P}_{Y_1})$ is the sequential characterization of the $n-$finite block length, or transmission without feedback information ($n$-FTwFI) capacity formula \cite[Section~I, III]{charalambous-kourtellaris-louka:2020a}, given as follows.
\begin{align} C_n(\kappa, {\bf P}_{Y_1}&)= \sup_{ \frac{1}{n} {\bf E} \big\{\sum_{t=1}^{n} ||X_t||_{{\mathbb R}^{n_x}}^2\big\} \leq \kappa }\sum_{t=1}^n I(X_t, V^{t-1};Y_t| Y^{t-1})\label{seq_2_nfb}\\ =&\sup_{ \frac{1}{n} {\bf E} \big\{\sum_{t=1}^{n} ||X_t||_{{\mathbb R}^{n_x}}^2\big\} \leq \kappa } H(Y^n)-H(V^n) \in [0,\infty] \label{seq_2_nfb_a} \end{align} where (\ref{seq_2_nfb_a}) follows from the channel definition (\ref{g_cp_1}) (provided the probability density functions exist) and the supremum is over ${\bf P}_{X_t|X^{t-1}}, t=1, \ldots, n$ induced by, \begin{align} &X_t = \Lambda_t {\bf X}^{t-1} + Z_t, \hso X_1=Z_1, \label{ch_in_1} \\ &{\bf X}^n\tri \left[ \begin{array}{cccc} X_1^T &X_2^T &\ldots &X_n^T\end{array}\right]^T, \\ &Z_t\in G(0, K_{Z_t}), \; K_{Z_t}\succeq 0, \; t=1, \ldots, n, \mbox{ indep. Gaussian}, \\ & Z_t \; \mbox{independent of} \; (V^{t-1},X^{t-1},Y^{t-1},Z^{t-1}), t=1, \ldots, n, \\ &{\Lambda}_t \in {\mathbb R}^{n_x\times (t-1)n_x}, \; \mbox{is nonrandom $\forall t$}.\label{ch_in_2} \end{align} By recursive substitution of ${\bf X}^{t-1}$ into the right hand side of (\ref{ch_in_1}), we obtain $X_t=\overline{Z}_t$, where $\overline{Z}_t \in G(0, K_{\overline{Z}_t}), t=1, \ldots, n$, is a correlated Gaussian process, as given in \cite{cover-pombra1989}. However, for the purpose of our asymptotic analysis, we prefer (\ref{ch_in_1})-(\ref{ch_in_2}), because it is much easier to analyse.
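As a sanity check on (\ref{ch_in_1})-(\ref{ch_in_2}), the following minimal sketch of the memory-one special case $X_t=\Lambda X_{t-1}+Z_t$ (the values of $\Lambda$ and $K_Z$ are assumed for illustration) shows how recursive substitution yields a correlated Gaussian process whose stationary covariance solves $K = \Lambda K \Lambda^T + K_Z$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_x = 1000, 2

# Memory-one special case of the input recursion: X_t = Lam X_{t-1} + Z_t.
Lam = 0.5 * np.eye(n_x)      # illustrative, exponentially stable
K_Z = np.eye(n_x)
L = np.linalg.cholesky(K_Z)

X = np.zeros((n, n_x))
X[0] = L @ rng.standard_normal(n_x)            # X_1 = Z_1
for t in range(1, n):
    X[t] = Lam @ X[t - 1] + L @ rng.standard_normal(n_x)

# Recursive substitution gives a zero-mean correlated Gaussian process whose
# covariance iterates K <- Lam K Lam^T + K_Z converge (Lam is stable).
K = K_Z.copy()
for _ in range(200):
    K = Lam @ K @ Lam.T + K_Z
print(K[0, 0])  # converges to 1/(1 - 0.25) = 4/3 for this choice
```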
We emphasize that the consideration of unstable noise $V^n$ implies $Y^n$ is also unstable and therefore, for our asymptotic analysis, we need to use the two innovations processes of $V^n$ and $Y^n$, as in \cite{charalambous2020new,louka-kourtellaris-charalambous:2020b,kourtellaris-charalambous-loyka:2020b,kourtellaris-charalambous-loyka:2020a}, giving rise to the following characterization of ${C}_{n}(\kappa, {\bf P}_{Y_1})\in [0,\infty]$: \\ \begin{align} {C}_{n}(\kappa, {\bf P}_{Y_1}) = \sup_{(\Lambda_t, K_{\overline{Z}_t}), t=1, \ldots, n, \frac{1}{n} {\bf E} \big\{\sum_{t=1}^{n} ||X_t||_{ {\mathbb R}^{n_x}}^2\big\} \leq \kappa }\sum_{t=1}^n \Big\{ H(I_t)-H(\hat{I}_t)\Big\} \label{ftfic_is_g_in_nfb} \end{align} where $I_t, \hat{I}_t$ are the innovations processes of $Y^n, V^n$, \begin{align} I_t \tri Y_t- {\bf E}\big\{Y_t\Big|Y^{t-1}\big\}, \hst \hat{I}_t \tri V_t- {\bf E}\big\{V_t\Big|V^{t-1}\big\}. \label{inn_intr_thm1_nfb} \end{align} Clearly, the convergence properties of $\lim_{n \longrightarrow \infty} \frac{1}{n} C_n(\kappa,{\bf P}_{Y_1})$ and its independence of ${\bf P}_{Y_1}$ are directly related to the convergence properties of $(I_t, \hat{I}_t, X_t), t=1,2, \ldots, n$, as $n \rightarrow \infty$. {\it State Space Realization of Channel Input.} For the analysis of the limit $\lim_{n \longrightarrow \infty} \frac{1}{n} C_n(\kappa,{\bf P}_{Y_1})$, we consider the following alternative state space realization of the Gaussian input $X^n$. \begin{align} &\Xi_{t+1}=F_{t} \Xi_{t}+ G_{t} Z_t, \hso t=1, \ldots, n-1,\label{real_1aa}\\ &X_t= \Gamma_t \Xi_{t} + D_t Z_t, \hso t=1, \ldots, n, \label{real_1_abb}\\ & \Xi_1\in G(\mu_{\Xi_1},K_{\Xi_1}), \; K_{\Xi_1} \succeq 0, \\ &Z_t\in G(0,K_{Z_t}),\hso K_{Z_t} \succeq 0, \hso t=1, \ldots, n, \\ &\mbox{$Z^n$ indep.
seq.}, \hso \mbox{$(\Xi_1, Z^n, W^n)$ mutually indep.} \\ & \Xi_t : \Omega \rar {\mathbb R}^{n_\xi}, \ Z_t : \Omega \rar {\mathbb R}^{n_z}, \ X_t : \Omega \rar {\mathbb R}^{n_x} \label{real_1aaa} \end{align} where $n_\xi, n_z$ are arbitrary positive integers and $(F_{t}, G_{t}, \Gamma_t, D_t, K_{\Xi_1}, K_{Z_t})$ are nonrandom matrices $\forall t$. \\ Note that any finite-memory $AR$ input $X_t = \sum_{j=1}^M \Lambda_{t,j} X_{t-j} + Z_t$, with arbitrarily large $M$, approximates (\ref{ch_in_1}), and this is a special case of (\ref{real_1aa})-(\ref{real_1aaa}). The performance of such inputs with $M=1$ is discussed in \cite[Section~IV]{kourtellaris-charalambous-loyka:2020a} and, for IID inputs, in \cite[Theorem~III.3]{louka-kourtellaris-charalambous:2020b}. {\it Asymptotic Characterization of Capacity.} The purpose of this paper is to analyze the two problems listed below. \\ {\bf Problem \#1.} Identify conditions such that the asymptotic limit exists and does not depend on ${\bf P}_{Y_1}$, \vspace{-0.1cm} \begin{align} C^{o}(\kappa,{\bf P}_{Y_1})\tri \lim_{n \longrightarrow \infty} \frac{1}{n} C_n(\kappa,{\bf P}_{Y_1}) =C(\kappa)\in [0,\infty), \hso \forall {\bf P}_{Y_1} \label{inter_change_limit_a} \end{align} and characterize $C(\kappa)$, which is independent of ${\bf P}_{Y_1}$. \begin{assumptions} We consider the following two cases of noise and channel input realizations for the asymptotic analysis. \\ {\bf Case 1:} Time-invariant, \begin{align} &(A_n, B_n, C_n, N_n, K_{W_n})=(A, B, C, N, K_{W}),\; K_{W}\succ 0, \label{tic_1} \hso \forall n \\ &(F_n,G_n, \Gamma_n, D_n, K_{Z_n})=(F,G, \Gamma, D, K_{Z}), K_Z \succeq 0, \hso \forall n.
\label{tic_2} \end{align} {\bf Case 2:} Asymptotically time-invariant, \begin{align} &\lim_{n\longrightarrow \infty}(A_n, B_n, C_n, N_n, K_{W_n})=(A, B, C, N, K_{W}), \: K_W\succ 0, \label{atic_1} \\ & \lim_{n\longrightarrow \infty}(F_n,G_n, \Gamma_n, D_n, K_{Z_n})=(F,G, \Gamma, D, K_{Z}), \: K_Z \succeq 0\label{atic_2} \end{align} where the limits are element-wise. For both Cases 1 and 2, the time-invariant realizations are assumed to be of minimal dimensions. \end{assumptions} For Cases 1 and 2, we identify conditions on 1) the channel model matrices $(H, A, B, C, N, K_{W})$ and 2) the channel input matrices $(F,G, K_{Z}, \Gamma, D), K_Z \succeq 0$, such that the limit in (\ref{inter_change_limit_a}) exists, is independent of ${\bf P}_{Y_1}$ and is characterized by \begin{align} &C^{o}(\kappa,{\bf P}_{Y_1})= C(\kappa) \tri \sup \frac{1}{2} \ln \big\{\frac{ \det \big( {\bf C} \Pi {\bf C}^T + {\bf D} K_{\overline{W}}{\bf D}^T \big) }{\det\big(C \Sigma C^T +N K_{W} N^T\big)}\big\}^+\label{ll_3_in}\\ &\mbox{the supremum is over} \; (F, G, \Gamma, D, K_{Z}) \; \; \mbox{and} \nonumber\\ &K_Z\succeq 0, \; tr(\Gamma P \Gamma^T +D K_Z D^T) \leq \kappa, \\ & \Pi \succeq 0, \; \Sigma \succeq 0 \; \mbox{satisfy matrix algebraic Riccati equations}, \\ &P \succeq 0 \hso \mbox{satisfies a matrix Lyapunov equation} \label{ll_4} \end{align} and where $\{\cdot\}^+\tri \max \{1, \cdot\}$ and $({\bf C}, {\bf D}, K_{\overline{W}})$ are specific matrices related to the channel model and channel input matrices. {\bf Problem \#2.} Under the conditions of Problem \#1, we also show that the limit and the supremum can be interchanged and \begin{align} C^{\infty}(\kappa, {\bf P}_{Y_1}) \tri & \sup_{\lim_{n \longrightarrow \infty} \frac{1}{n} {\bf E} \big\{\sum_{t=1}^{n} ||X_t||_{ {\mathbb R}^{n_x}}^2\big\} \leq \kappa } \lim_{n \longrightarrow \infty} \frac{1}{n}\Big( \sum_{t=1}^n H(I_t)-H(\hat{I}_t)\Big) \label{inter_in}\\ =& C^{o}(\kappa,{\bf P}_{Y_1})= C(\kappa)\in [0,\infty), \hso \forall {\bf P}_{Y_1}.
\end{align} {\it Direct Coding Theorem.} By (\ref{ll_3_in}) and (\ref{inter_in}), the convergence is uniform over all ${\bf P}_{Y_1}$. Hence, the asymptotic equipartition property (AEP) and information stability hold, from which it follows directly that $C(\kappa)$ is the nonfeedback capacity, even for unstable channels, similar to the feedback capacity in \cite{kourtellaris-charalambousIT2015_Part_1,charalambous-kourtellaris-loykaIT2015_Part_2}. {\it Notation.} \\ ${\mathbb Z}_+ \tri \{1,2, \ldots\}, {\mathbb Z}_+^n \tri \{1,2, \ldots, n\}$, where $n$ is a finite positive integer. ${\mathbb R}\tri (-\infty, \infty)$, and ${\mathbb R}^m$ is the vector space of $m$-tuples of real numbers for an integer $m\in {\mathbb Z}_+$. ${\mathbb R}^{n \times m}$ is the set of $n$ by $m$ matrices with real entries for $(n,m)\in {\mathbb Z}_+\times {\mathbb Z}_+$. $I_{n} \in {\mathbb R}^{n\times n}, n\in {\mathbb Z}_{+}$ denotes the identity matrix, and $tr\big(A\big)$ denotes the trace of any matrix $A \in {\mathbb R}^{n\times n}, n\in {\mathbb Z}_+$. \\ ${\mathbb C} \tri \{a+j b: (a,b) \in {\mathbb R} \times {\mathbb R}\}$ is the space of complex numbers. \\ ${\mathbb D}_o \tri \big\{c \in {\mathbb C}: |c| <1\big\}$ is the open unit disc of the space of complex numbers ${\mathbb C}$. $spec(A) \subset {\mathbb C}$ is the spectrum of a matrix $A \in {\mathbb R}^{q \times q}, q \in {\mathbb Z}_+$ (the set of all its eigenvalues). A matrix $A \in {\mathbb R}^{q \times q}$ is called exponentially stable if all its eigenvalues are within the open unit disc, that is, $spec(A) \subset {\mathbb D}_o$.\\ $X\in G(\mu_{X}, K_{X}), K_{X}\succeq 0$ denotes a Gaussian distributed RV $X$, with mean $\mu_{X}={\bf E}\{X\}$ and covariance $K_{X}\succeq 0$, $ K_X = cov(X,X) \tri {\bf E}\big\{\big(X-{\bf E}\big\{X\big\}\big) \big(X-{\bf E}\big\{X\big\}\big)^T \big\}$.
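The exponential stability notion just defined reduces to a spectral-radius test, which can be sketched numerically as follows (the two matrices below are assumed purely for illustration):

```python
import numpy as np

def is_exp_stable(A: np.ndarray) -> bool:
    """A is exponentially stable iff spec(A) lies in the open unit disc D_o."""
    return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1.0)

print(is_exp_stable(np.array([[0.5, 0.1], [0.0, 0.9]])))   # True: eigenvalues 0.5, 0.9
print(is_exp_stable(np.array([[1.05, 0.0], [0.0, 0.2]])))  # False: eigenvalue 1.05
```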
Given another Gaussian RV $Y: \Omega \rar {\mathbb R}^{n_y}$, which is jointly Gaussian distributed with $X$, i.e., with joint distribution ${\bf P}_{X,Y}$, the conditional covariance of $X$ given $Y$ is (by properties of Gaussian RVs) \begin{align} K_{X|Y} = cov(X,X\Big|Y) \tri {\bf E}\big\{\big(X-{\bf E}\big\{X\Big|Y\big\}\big) \big(X-{\bf E}\big\{X\Big|Y\big\}\big)^T \big\}.\nonumber \end{align} \section{Asymptotic Characterization of Capacity} \subsection{Sequential Characterizations of $n-$FTwFI Capacity} \label{sect:AGN} We recall the characterization of $C_n(\kappa, {\bf P}_{Y_1})$ of (\ref{ftfic_is_g_in_nfb}), which shows its dependence on two difference Riccati equations (DREs) and a Lyapunov equation. \begin{theorem} \cite[Theorem III.2]{charalambous-kourtellaris-louka:2020a} Sequential characterization of ${C}_n(\kappa,{\bf P}_{Y_1})$. \label{thm_SS_nfb} Consider the MIMO AGN channel (\ref{g_cp_1}), the noise of Definition~\ref{def_nr_2} and the input (\ref{real_1aa})-(\ref{real_1aaa}). Define \begin{align*} &\Theta_t \tri \left(\begin{array}{cc} \Xi_t^T & S_{t}^T\end{array}\right)^T, \hso \overline{W}_t \tri \left(\begin{array}{cc} Z_t^T & W_{t}^T\end{array}\right)^T, \hso \overline{W}_t \in G(0,K_{\overline{W}_t}) , \\ &\Pi_{t} \tri cov\big(\Theta_t,\Theta_t\Big|Y^{t-1}\big)= {\bf E}\Big\{\Big(\Theta_t- \widehat{\Theta}_{t}\Big)\Big(\Theta_t-\widehat{\Theta}_{t} \Big)^T\Big\}, \\ & \widehat{\Theta}_{t} \tri {\bf E}\Big\{\Theta_t\Big|Y^{t-1}\Big\}, \; t=2, \ldots, n, \; \widehat{\Theta}_1 \tri \mu_{\Theta_1}, \; \Pi_1 \tri K_{\Theta_1} , \\ &P_t\tri cov\big(\Xi_t,\Xi_t)= {\bf E}\Big\{\Big(\Xi_t - {\bf E}\Big\{\Xi_t\Big\}\Big)\Big(\Xi_t - {\bf E}\Big\{\Xi_t\Big\}\Big)^T\Big\}.
\end{align*} (i) The joint Gaussian process $(X^n,Y^n, V^n)$ is represented by \begin{align} &\Theta_{t+1}= {\bf A}_t \Theta_t + {\bf B}_t \overline{W}_t, \hso t=1, \ldots, n-1,\\ &Y_t={\bf C}_t \Theta_t + {\bf D}_t \overline{W}_t, \hso t=1, \ldots, n \\ &{\bf A}_t \tri \left( \begin{array}{cc} F_t & 0 \\ 0 & A_t \end{array} \right), \hso {\bf B}_t \tri \left( \begin{array}{cc} G_t & 0 \\ 0 & B_t \end{array} \right), \\ & {\bf C}_t \tri \left( \begin{array}{cc} H_t \Gamma_t & C_t \end{array} \right), \hso {\bf D}_t \tri \left( \begin{array}{cc} H_t D_t & N_t \end{array} \right). \end{align} where ${\bf A}_t, {\bf B}_t, {\bf C}_t, {\bf D}_t$ are appropriate matrices.\\ (ii) The covariance of the error, $\Pi_t \tri {\bf E} \big\{\widehat{E}_t \widehat{E}_t^T\big\}$,$\widehat{E}_t =\Theta_t-\widehat{\Theta}_t$, satisfies the generalized matrix DRE \begin{align} &\Pi_{t+1}= {\bf A}_{t} \Pi_{t}{\bf A}_{t}^T + {\bf B}_{t}K_{\overline{W}_{t}}{\bf B}_{t}^T -\Big({\bf A}_{t} \Pi_{t}{\bf C}_{t}^T+{\bf B}_{t}K_{\overline{W}_{t}}{\bf D}_{t}^T \Big) \nonumber \\ & \hspace{0.1cm} .\Big({\bf D}_{t} K_{\overline{W}_{t}} {\bf D}_{t}^T+{\bf C}_{t} \Pi_{t} {\bf C}_{t}^T\Big)^{-1}\Big( {\bf A}_{t} \Pi_{t}{\bf C}_{t}^T+ {\bf B}_{t} K_{\overline{W}_{t}}{\bf D}_{t}^T \Big)^T,\nonumber\\ & \hspace{0.1cm} \hso \Pi_t \succeq 0, \hso \Pi_{1}=K_{\Theta_1}\succeq 0, \hso t=1, \ldots, n.\label{DREy} \end{align} Moreover, the error $\widehat{E}_t \tri {\Theta}_t - \widehat{\Theta}_t$ satisfies the recursion \begin{align} &\widehat{E}_{t+1} = {\bf F}^{CL}_t (\Pi_t)\widehat{E}_t + \big({\bf B}_t - {\bf F}_t(\Pi_t){\bf D}_t\big) \overline{W}_t,\; t = 1,\dots, n, \label{error2}\\ & {\bf F}^{CL}_t(\Pi_t) = {\bf A}_t - {\bf F}_t(\Pi_t){\bf C}_t, \\ &{\bf F}_t(\Pi_t) = \big ({\bf A}_t\Pi_t{\bf C}_t^T +{\bf B}_t K_{\overline{W}_t}{\bf D}_t^T \big) \big({\bf D}_t K_{\overline{W}_t} {\bf D}_t^T + {\bf C}_t \Pi_t {\bf C}_t^T \big)^{-1}.\nonumber \end{align} (iii) The innovations process $I_t$ of $Y^n$ for 
$t=1, \ldots, n$, is \begin{align} &I_t \tri Y_t -{\bf E} \Big\{Y_t\Big|Y^{t-1}\Big\}= {\bf C}_t \big(\Theta_t-\widehat{\Theta}_{t}\big)+ {\bf D}_t \overline{W}_t, \label{kf_m_2_nfb_a} \\ & I_t\in G(0, K_{I_t}), \hso K_{I_t} ={\bf C}_t \Pi_t {\bf C}_t^T + {\bf D}_t K_{\overline{W}_t}{\bf D}_t^T. \end{align} (iv) The matrix $P_t = cov\big(\Xi_t,\Xi_t)$ satisfies the Lyapunov recursion, \begin{align} &P_{t+1} = F_t P_t F_t^T + G_t K_{Z_t}G_t^T, \hso P_t\succeq 0, \hso P_1=K_{\Xi_1} \label{lyapunov}. \end{align} (v) The average power constraint is \begin{align} &\frac{1}{n} {\bf E} \Big\{\sum_{t=1}^{n} ||X_t||_{{\mathbb R}^{n_x}}^2\Big\}= \frac{1}{n} \sum_{t=1}^{n} tr \Big(\Gamma_t P_t \Gamma_t^T +D_t K_{Z_t}D_t^T\Big) \label{cp_9_al_n_nfb_a}. \end{align} (vi) The entropy of $Y^n$, $H(Y^n)=\sum_{t=1}^n H({I}_t)$, is given by \begin{align} H(Y^n) = \frac{1}{2}\sum_{t=1}^n \ln \big( (2\pi e)^{n_y} \det\big({\bf C}_t \Pi_t {\bf C}_t^T + {\bf D}_t K_{\overline{W}_t}{\bf D}_t^T\big) \big) \label{entr_output} \end{align} and the entropy of $V^n$, $H(V^n)=\sum_{t=1}^n H(\hat{I}_t)$, is given by \begin{align} &H(V^n) = \frac{1}{2}\sum_{t=1}^n \ln \big( (2\pi e)^{n_y} \det\big(C_t \Sigma_{t} C_t^T + N_t K_{W_t} N_t^T\big) \big) \label{entr_noise}\\ & \hat{I}_t \in G(0, K_{\hat{I}_t}) \hso \mbox{an orth. innov. proc. indep. of $V^{t-1}$}, \label{inn_po_2}\\ &K_{\hat{I}_t} \tri cov(\hat{I}_t, \hat{I}_t)= C_t \Sigma_t C_t^T +N_t K_{W_t} N_t^T=K_{V_t|V^{t-1}}. \label{cov_in_noise} \end{align} where $\Sigma_t $ satisfies the generalized matrix DRE \begin{align} \Sigma_{t+1}= &A_t \Sigma_{t}A_t^T + B_tK_{W_t}B_t^T -\Big(A_t \Sigma_{t}C_t^T+B_tK_{W_{t}}N_t^T \Big) \nonumber \\ & \hspace{0.1cm} . \Big(N_t K_{W_t} N_t^T+C_t \Sigma_{t} C_t^T\Big)^{-1}\Big( A_{t} \Sigma_{t}C_t^T+ B_t K_{W_t}N_t^T \Big)^T,\nonumber\\ & \hspace{0.1cm}\hso \Sigma_t \succeq 0, \hso t=1, \ldots, n, \hso \Sigma_{1}=K_{S_1}\succeq 0.
\label{dre_1} \end{align} (vii) An equivalent characterization of ${C}_n(\kappa, {\bf P}_{Y_1})$ is \begin{align} &{C}_n(\kappa, {\bf P}_{Y_1}) = \sup_{ \frac{1}{n} {\bf E}\big\{\sum_{t=1}^n ||X_t||_{{\mathbb R}^{n_x}}^2\big\}\leq \kappa } \frac{1}{2} \sum_{t=1}^n \ln \Big\{ \frac{ \det( K_{I_t}) }{\det(K_{\hat{I}_t})}\Big\}^+ \end{align} where the supremum is over $(F_t, G_t, \Gamma_t, D_t, K_{Z_t}), t=1, \ldots, n$. \end{theorem} \label{sect_POSS} \subsection{Asymptotic Analysis} \label{sect:q1} To address the limit $\lim_{n \longrightarrow \infty}\frac{1}{n} C_{n}(\kappa, {\bf P}_{Y_1})$, under Case 1 and Case 2, we investigate the convergence properties of the generalized matrix DREs (\ref{DREy}), (\ref{dre_1}) and the Lyapunov matrix difference equation (\ref{lyapunov}) to their limits. Such properties are summarized in \cite[Theorem A.1]{charalambous-kourtellaris-loykaIT2015_Part_2} and \cite[Theorem III.2]{charalambous2020new}. We present sufficient conditions for convergence in Corollary~\ref{cor_POSS_IH}, Theorem~\ref{thm_fc_IH} and Theorem~\ref{lemma_vanschuppen}, irrespective of whether the noise is stable or unstable. \begin{corollary} \label{cor_POSS_IH} Consider Case 1 or Case 2. \\ Let $\Sigma_t, t=1, 2, \ldots $ denote the solution of the matrix DRE (\ref{dre_1}). \\ Let $\Sigma=\Sigma^{T} \succeq 0 $ be a solution of the corresponding ARE \begin{align} \Sigma= &A \Sigma A^T + B K_{W}B^T -\Big(A \Sigma C^T+B K_{W}N^T \Big) \nonumber \\&\hspace{0.1cm} . \Big(N K_{W} N^T+C \Sigma C^T\Big)^{-1} \Big( A \Sigma C^T+ B K_{W}N^T \Big)^T. \label{dre_1_SS} \end{align} Define the matrices \begin{align} &A^*\tri A- B K_W N^T \big(N K_W N^T\big)^{-1} C, \hso G\tri B, \nonumber \\& B^* \tri K_W- K_W N^T \Big(N K_W N^T\Big)^{-1} \Big(K_W N^T\Big)^T.
\label{noise_st} \end{align} Suppose (see \cite{kailath-sayed-hassibi,caines1988,vanschuppen2021} for definitions) \begin{align} &\mbox{$ \{A, C\}$ is detectable, and $\{A^*, G B^{*,\frac{1}{2}}\}$ is stabilizable.} \label{st} \end{align} Any solution $\Sigma_{t}, t=1, 2, \ldots,n$ to the generalized matrix DRE (\ref{dre_1}) with arbitrary initial condition $\Sigma_{1}\succeq 0$ is such that $\lim_{n \longrightarrow \infty} \Sigma_{n} =\Sigma$, where $\Sigma\succeq 0$ is the unique solution of the generalized matrix ARE (\ref{dre_1_SS}) with $spec\big(M^{CL}(\Sigma)\big) \subset {\mathbb D}_o$. \end{corollary} \vspace*{-.3cm} \begin{proof} For Case 1, the convergence of $\Sigma_n, n=1,2, \ldots$, follows from the detectability and stabilizability conditions. For Case 2, the statements of convergence of $\Sigma_n, n=1,2, \ldots$ hold, due to the continuity property of solutions of generalized difference Riccati equations with respect to their coefficients. \end{proof} \vspace{-0.5cm} \begin{theorem} \label{thm_fc_IH} Consider Case 1 or Case 2. \\ Let $\Pi_t, t=1, \ldots, $ denote the solution of the DRE (\ref{DREy}). \\ Let $\Pi = \Pi^{T} \succeq 0$ be a solution of the corresponding ARE \begin{align} &\Pi= {\bf A} \Pi{\bf A}^T + {\bf B}K_{\overline{W}}{\bf B}^T -\Big({\bf A} \Pi{\bf C}^T+{\bf B}K_{\overline{W}}{\bf D}^T \Big) \nonumber \\ & \hspace{0.1cm} . \Big({\bf D} K_{\overline{W}} {\bf D}^T+{\bf C} \Pi {\bf C}^T\Big)^{-1}\Big( {\bf A} \Pi {\bf C}^T+ {\bf B} K_{\overline{W}}{\bf D}^T \Big)^T.\label{kf_m_4_a_TI_ARE} \end{align} Define the matrices \cite{kailath-sayed-hassibi,caines1988,vanschuppen2021} \begin{align} &{\bf A}^*\tri {\bf A} - {\bf B} K_{\overline W} {\bf D}^T \big({\bf D} K_{\overline W} {\bf D}^T\big)^{-1} {\bf C}, \hso {\bf G}\tri {\bf B}, \nonumber \\& {\bf B}^* \tri K_{\overline W}- K_{\overline W} {\bf D}^T \Big({\bf D} K_{\overline W} {\bf D}^T\Big)^{-1} \Big(K_{\overline W} {\bf D}^T\Big)^T.
\label{input_st} \end{align} Suppose \cite{kailath-sayed-hassibi,caines1988,vanschuppen2021} \begin{align} &\mbox{$\{{\bf A}, {\bf C}\}$ is detectable and $\{{\bf A}^*, {\bf G} {\bf B}^{*,\frac{1}{2}}\}$ is stabilizable.} \label{kt} \end{align} Any solution $\Pi_{t}, t=1, 2, \ldots,n$ to the generalized matrix DRE (\ref{DREy}) with arbitrary initial condition $\Pi_{1}\succeq 0$ is such that $\lim_{n \longrightarrow \infty} \Pi_{n}=\Pi$, where $\Pi\succeq 0$ is the unique solution of the generalized matrix ARE (\ref{kf_m_4_a_TI_ARE}), with $spec\big({\bf F}^{CL}(\Pi)\big) \subset {\mathbb D}_o$. \end{theorem} \begin{proof} Similar to Corollary~\ref{cor_POSS_IH}. \end{proof} Theorem~\ref{lemma_vanschuppen} identifies conditions for the average power (\ref{cp_9_al_n_nfb_a}) to converge, using $P_t = cov\big(\Xi_t,\Xi_t)$, which satisfies (\ref{lyapunov}). \begin{theorem} Convergence of average power\\ \label{lemma_vanschuppen} Consider the average power of Thm~\ref{thm_SS_nfb}, for Case 1 or 2. \\ Let $P_t, t=1, \ldots, n$ be a solution of the Lyapunov recursion (\ref{lyapunov}). \\ Let $P\succeq 0$ be a solution of \begin{align} P=F P F^T +G K_{Z} G^T. \label{MP_1_new} \end{align} Suppose $F$ is an exponentially stable matrix. Any solution $P_{t}, t=1, 2, \ldots,n$ to the Lyapunov recursion (\ref{lyapunov}), with arbitrary initial condition $P_{1}\succeq 0$, is such that $\lim_{n \longrightarrow \infty} P_{n}=P$, where $P\succeq 0$ is the unique solution of (\ref{MP_1_new}). Moreover, \begin{align} \lim_{n \longrightarrow \infty}& \frac{1}{n} {\bf E} \Big\{\sum_{t=1}^{n} ||X_t||_{{\mathbb R}^{n_x}}^2\Big\}= \lim_{n \longrightarrow \infty} \frac{1}{n} \sum_{t=1}^{n} tr \Big(\Gamma P_t \Gamma^T +D K_{Z}D^T\Big) \nonumber \\ =& tr \Big(\Gamma P \Gamma^T +D K_{Z}D^T\Big), \; \forall P_1\succeq 0. \label{cp_9_al_n_nfb_a2} \end{align} \end{theorem} \begin{proof} For Case 1, these are known \cite{vanschuppen2021}.
For Case 2, the statements follow from the continuity property of solutions of Lyapunov equations with respect to their coefficients. \end{proof} \subsection{Asymptotic Characterizations of Nonfeedback Capacity } \label{sect:cor-solu} \begin{theorem} Characterization of $C^\infty(\kappa,{\bf P}_{Y_1})$ for Case 1 \\ \label{prob_1_in} Consider the time-invariant noise and channel input strategies of Case 1, i.e., (\ref{tic_1}) and (\ref{tic_2}) hold. \\ Define the per unit time limit and supremum by\footnote{If at any time $t$, the information $H(Y_t|Y^{t-1})-H(V_t|V^{t-1})=+\infty$, then it is removed, as is usually the case \cite{pinsker1964}.} \begin{align} C^\infty(\kappa,{\bf P}_{Y_1}) \tri & \sup_{{\cal P}_{\infty}(\kappa)}\lim_{n \longrightarrow\infty}\frac{1}{2n} \sum_{t=1}^n \ln \Big\{ \frac{ \det \big( {\bf C} \Pi_t {\bf C}^T + {\bf D} K_{\overline{W}}{\bf D}^T \big) }{\det\big(C \Sigma_t C^T +N K_{W} N^T\big)}\Big\}^+ \label{i_ll_2} \end{align} where the average power constraint is defined by \begin{align} &{\cal P}_{\infty}(\kappa)\tri \Big\{(F, G, \Gamma, D, K_{Z}) \in {\cal P}^\infty \Big| \\ &\hspace*{1.cm} \lim_{n \longrightarrow \infty}\frac{1}{n} \sum_{t=1}^n tr (\Gamma P_t {\Gamma}^T +D K_Z {D}^T) \leq \kappa\Big\}, \label{Q_1_10_s1_new}\\ & {\cal P}^\infty \tri \Big\{(F, G, \Gamma, D, K_{Z}), \mbox{ such that the following hold}\nonumber \\&\mbox{(i) the detectability and stabilizability of (\ref{st})},\\ & \mbox{(ii) the detectability and stabilizability of (\ref{kt})},\\ & \mbox{(iii) $F$ is exponentially stable}\Big\}.
\label{adm_set} \end{align} Then, $C^\infty(\kappa,{\bf P}_{Y_1})$ is given by \begin{align} C^\infty(\kappa,{\bf P}_{Y_1}) & = \sup_{ {\cal P}^\infty(\kappa)} \frac{1}{2} \ln \Big\{ \frac{ \det \big( {\bf C} \Pi {\bf C}^T + {\bf D} K_{\overline{W}}{\bf D}^T \big) }{\det\big(C \Sigma C^T +N K_{W} N^T\big)}\Big\}^+ \nonumber \\& \tri C^{\infty}(\kappa),\hso \forall {\bf P}_{Y_1} \label{ll_3} \end{align} where ${\cal P}^\infty(\kappa)$ is defined by \begin{align} {\cal P}^\infty(\kappa)&\tri \Big\{(F, G, \Gamma, D, K_{Z})\in {\cal P}^\infty\Big| \nonumber \\ &K_Z\succeq 0, \hso tr( \Gamma P{\Gamma}^T +D K_ZD^T) \leq \kappa \Big\} \label{ll_4_set} \end{align} and $\Sigma \succeq 0$ and $\Pi \succeq 0$ are the unique stabilizing solutions, i.e., $spec\big(M^{CL}(\Sigma)\big) \subset {\mathbb D}_o$ and $spec\big({\bf F}^{CL}(\Pi)\big) \subset {\mathbb D}_o$, of the generalized matrix AREs (\ref{dre_1_SS}) and (\ref{kf_m_4_a_TI_ARE}) respectively, and $P\succeq 0$ is the unique solution of the matrix Lyapunov equation (\ref{MP_1_new}), provided there exists $\kappa\in [0,\infty)$ such that ${\cal P}^\infty(\kappa)$ is non-empty.\\ Moreover, the optimal $(F, G, \Gamma, D, K_{Z}) \in {\cal P}^\infty(\kappa)$ is such that \\ (i) if the noise is stable, then the input and the output processes $(X_t, Y_t), t=1, \ldots$ are asymptotically stationary and \\ (ii) if the noise is unstable, then the input and the innovations processes $(X_t, I_t), t=1, \ldots$ are asymptotically stationary. \end{theorem} \vspace{-0.5cm} \begin{proof} By the definition of the set ${\cal P}^\infty$, Corollary~\ref{cor_POSS_IH}, Theorem~\ref{thm_fc_IH} and Theorem~\ref{lemma_vanschuppen} hold.
Hence, the following limits exist in $[0,\infty)$ and the convergence is uniform, $\forall {\bf P}_{Y_1}$: \begin{align} &\lim_{n \longrightarrow\infty}\frac{1}{n} \sum_{t=1}^n tr(\Gamma P_t \Gamma^T +D K_Z D^T) = tr(\Gamma P \Gamma^T +D K_Z D^T), \label{conve_1} \\ & \lim_{n \longrightarrow\infty}\frac{1}{2n} \Big\{ \sum_{t=1}^n \ln \big( \frac{ \det \big( {\bf C} \Pi_t {\bf C}^T + {\bf D} K_{\overline{W}}{\bf D}^T \big) }{\det\big(C \Sigma_t C^T +N K_{W} N^T\big)}\big)\Big\} \nonumber \\ &=\frac{1}{2} \ln \big( \frac{ \det \big( {\bf C} \Pi {\bf C}^T + {\bf D} K_{\overline{W}}{\bf D}^T \big) }{\det\big(C \Sigma C^T +N K_{W} N^T\big)}\big), \hso \forall {\bf P}_{Y_1} . \label{conve_2} \end{align} The last part of the theorem follows from the asymptotic properties of the Kalman filter, as follows. For (i), $\Xi_{t}, t=1, \ldots$ is asymptotically stationary, which implies that $X_t= \Gamma \Xi_{t} + D Z_t, t=2, \ldots, n$, with $X_1 = \Gamma {\Xi}_1 + D Z_1$ and $Z_t \in G(0, K_{Z}), K_Z\succeq 0$, as well as $I_t, t=1, \ldots$ and $Y_t=HX_t+ V_t, t=1, \ldots$, are asymptotically stationary. Similarly for (ii), with the exception that $Y_t=H X_t+ V_t, t=1, \ldots$ is not asymptotically stationary, because $V_t, t=1, \ldots $ is unstable. \end{proof} Next, we show that Theorem~\ref{prob_1_in} remains valid for Case 2.
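Before turning to Case 2, the role of the AREs and the Lyapunov equation in (\ref{ll_3}) can be made concrete with a minimal fixed-point sketch for a scalar instance of Case 1. All numerical values below (the noise parameters $(a,b,c,n)$, the input parameters $(f,g,\gamma,d)$, the channel gain $h$ and the covariances $K_W, K_Z$) are assumed purely for illustration, and no optimization over the input parameters is attempted; only the objective of (\ref{ll_3}) is evaluated at one admissible point.

```python
import numpy as np

def dre_step(P, A, B, C, D, KW):
    """One step of the generalized matrix DRE used for Sigma_t and Pi_t."""
    S = D @ KW @ D.T + C @ P @ C.T
    M = A @ P @ C.T + B @ KW @ D.T
    return A @ P @ A.T + B @ KW @ B.T - M @ np.linalg.solve(S, M.T)

def fixed_point(step, P0, iters=2000):
    P = P0
    for _ in range(iters):
        P = step(P)
    return P

# Scalar illustrative data (assumed, not from the paper): noise (A,B,C,N),
# input realization (F,G,Gamma,D) and channel gain h.
a, b, c, nn = 0.9, 1.0, 1.0, 1.0
f, g, gam, d = 0.5, 1.0, 1.0, 0.0
K_W, K_Z, h = 1.0, 1.0, 1.0

# Augmented matrices for Theta_t = (Xi_t, S_t), as in the sequential theorem.
A_, B_ = np.diag([f, a]), np.diag([g, b])
C_, D_ = np.array([[h * gam, c]]), np.array([[h * d, nn]])
KWbar = np.diag([K_Z, K_W])

# Noise ARE and augmented-system ARE solved by fixed-point iteration of the DREs.
Sigma = fixed_point(lambda P: dre_step(P, np.array([[a]]), np.array([[b]]),
                                       np.array([[c]]), np.array([[nn]]),
                                       np.array([[K_W]])), np.zeros((1, 1)))
Pi = fixed_point(lambda P: dre_step(P, A_, B_, C_, D_, KWbar), np.zeros((2, 2)))

# Lyapunov equation P = f P f + g K_Z g and the resulting average input power.
P = fixed_point(lambda p: f * p * f + g * K_Z * g, 0.0)
power = gam * P * gam + d * K_Z * d

# Objective of the asymptotic characterization, with {.}^+ = max{1, .}.
num = (C_ @ Pi @ C_.T + D_ @ KWbar @ D_.T).item()
den = c * Sigma.item() * c + nn * K_W * nn
rate = 0.5 * np.log(max(1.0, num / den))
```

Under the detectability and stabilizability conditions of (\ref{st}) and (\ref{kt}), and exponential stability of $F$, the iterates converge for any positive semidefinite initial condition, mirroring Corollary~\ref{cor_POSS_IH}, Theorem~\ref{thm_fc_IH} and Theorem~\ref{lemma_vanschuppen}.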
\begin{corollary} Characterization of $C^\infty(\kappa,{\bf P}_{Y_1})$ for Case 2 \\ \label{the_limsup} Consider the asymptotically time-invariant noise and channel input strategies of Case 2, i.e., (\ref{atic_1}) and (\ref{atic_2}) hold.\\ Define the per unit time limit and supremum by \begin{align} &C^{\infty,+}(\kappa,{\bf P}_{Y_1}) \tri \sup_{{\cal P}_{\infty}^{+}(\kappa) }\lim_{n \longrightarrow\infty}\frac{1}{2n} \Big\{ \nonumber \\ & \hspace*{1.0cm} \sum_{t=1}^n \ln \Big\{ \frac{ \det \big( {\bf C}_t \Pi_t {\bf C}_t^T + {\bf D}_t K_{\overline{W}_t}{\bf D}_t^T \big) }{\det\big(C_t \Sigma_t C_t^T +N_t K_{W_t} N_t^T\big)}\Big\}^+\Big\} \label{i_ll_2_ati}\\ &{\cal P}_{\infty}^+(\kappa)\tri \Big\{\{(F_n, G_n, \Gamma_n, D_n, K_{Z_n})| n=1,2, \ldots \} \in {\cal P}_{\infty}^+\Big| \nonumber \\ &\hspace*{1.0cm} \lim_{n \longrightarrow\infty}\frac{1}{n} \sum_{t=1}^n tr\big(\Gamma_t P_t \Gamma_t^T +D_t K_{Z_t} D_t^T\big) \leq \kappa\Big\}, \label{Q_1_10_s1_new_ati} \\ & {\cal P}_\infty^+ \tri \Big\{\{ (F_n, G_n, \Gamma_n, D_n, K_{Z_n}) |n = 1,2,\dots\}\Big| \nonumber \\ & \lim_{n \longrightarrow \infty} (F_n, G_n, \Gamma_n, D_n, K_{Z_n})=(F, G, \Gamma, D, K_{Z})\in {\cal P}^\infty \Big\}. \label{adm_set_ati} \end{align} Then, \begin{align} C^{\infty,+}(\kappa,{\bf P}_{Y_1})=C^{\infty}(\kappa,{\bf P}_{Y_1})=C^{\infty}(\kappa)=\mbox{(\ref{ll_3})}, \hso \forall {\bf P}_{Y_1} \end{align} and the statements of Theorem~\ref{prob_1_in}.(i), (ii) remain valid. \end{corollary} \begin{proof} The solutions of the DREs and the Lyapunov equation are $\Sigma_{n+1}= \Sigma_{n+1}(\Sigma_n,A_n,B_n,C_n,N_n,K_{W_n})$, $\Pi_{n+1}=\Pi_{n+1}(\Pi_{n},\Sigma_{n},{\bf A}_n, {\bf B}_n,{\bf C}_n,{\bf D}_n, K_{\overline{ W}_n}), $\\$ P_{n+1} = P_{n+1}(P_n,F_n,G_n,\Gamma_n,D_n, K_{Z_n}),\;n=1,2, \ldots$ and these are continuous with respect to their coefficients.
Moreover, for all elements of the set ${\cal P}^\infty$, by (\ref{atic_1}) and (\ref{atic_2}), \begin{align} &\lim_{n \longrightarrow\infty}\frac{1}{n}\sum_{t=1}^n tr (\Gamma_t P_t {\Gamma_t}^T +D_t K_{Z_t} {D_t}^T) =tr( \Gamma P \Gamma^T +D K_Z D^T), \; \forall {\bf P}_{Y_1}, \label{conve_1_tv} \\ & \lim_{n \longrightarrow\infty}\frac{1}{2n} \sum_{t=1}^n \ln \Big\{ \frac{ \det \big( {\bf C}_t \Pi_t {\bf C}_t^T + {\bf D}_t K_{\overline{W}_t}{\bf D}_t^T \big) }{\det\big(C_t \Sigma_t C_t^T +N_t K_{W_t} N_t^T\big)}\Big\}^+ \nonumber \\ &=\frac{1}{2} \ln \big( \frac{ \det \big( {\bf C} \Pi {\bf C}^T + {\bf D} K_{\overline{W}}{\bf D}^T \big) }{\det\big(C \Sigma C^T +N K_{W} N^T\big)}\big), \hso \forall {\bf P}_{Y_1}. \label{conve_2_tv} \end{align} The rest follows by repeating the proof of Theorem~\ref{prob_1_in}. \end{proof} The identity $C^{o}(\kappa,{\bf P}_{Y_1}) =C^{\infty}(\kappa,{\bf P}_{Y_1}) = C^{\infty}(\kappa), \forall {\bf P}_{Y_1}$ for Case 2 follows from the uniform convergence of Theorem~\ref{prob_1_in} and Corollary~\ref{the_limsup}; the derivation is omitted due to space limitations. \begin{theorem} Characterization of $C^o(\kappa,{\bf P}_{Y_1})$ for Case 2 \\ \label{thm:limsup_ati} Consider the asymptotically time-invariant noise and channel input strategies of Case 2, i.e., (\ref{atic_1}) and (\ref{atic_2}) hold.
\\ Define the per unit time limit and supremum by \begin{align} &C^{o}(\kappa,{\bf P}_{Y_1}) \tri \lim_{n \longrightarrow\infty} \sup_{{\cal P}_{n}^{o,+}(\kappa)}\frac{1}{2n} \Big\{ \nonumber \\ &\hspace*{1.0cm} \sum_{t=1}^n \ln \Big\{ \frac{ \det \big( {\bf C}_t \Pi_t {\bf C}_t^T + {\bf D}_t K_{\overline{W}_t}{\bf D}_t^T \big) }{\det\big(C_t \Sigma_t C_t^T +N_t K_{W_t} N_t^T\big)}\Big\}^+\Big\} \label{limsup_ati_1}\\ &{\cal P}_{n}^{o,+}(\kappa)\tri \Big\{\{(F_n, G_n, \Gamma_n, D_n, K_{Z_n})| n=1,2, \ldots \} \in {\cal P}_{\infty}^+\Big| \nonumber \\ &\hspace*{1.0cm} \frac{1}{n} \sum_{t=1}^n tr(\Gamma_t P_t \Gamma_t^T +D_t K_{Z_t} D_t^T) \leq \kappa\Big\}. \label{limsup_ati_2} \end{align} Then, \begin{align} C^{o}(\kappa,{\bf P}_{Y_1})=C^{\infty}(\kappa,{\bf P}_{Y_1})=C^{\infty}(\kappa)=\mbox{(\ref{ll_3})}, \: \forall {\bf P}_{Y_1} \end{align} and the statements of Theorem~\ref{prob_1_in}.(i), (ii) remain valid. \end{theorem} \begin{proof} The derivation uses the uniform limits (\ref{conve_1_tv}) and (\ref{conve_2_tv}), Theorem~\ref{prob_1_in} and Corollary~\ref{the_limsup}. \end{proof} \begin{remark} If the {\it stabilizability condition} is replaced by {\it unit circle controllability} (see \cite{kailath-sayed-hassibi} for the definition and \cite{charalambous2020new,louka-kourtellaris-charalambous:2020b,kourtellaris-charalambous-loyka:2020b,charalambous-kourtellaris-loykaIEEEITC2019} for specific examples), then Theorem~\ref{prob_1_in} and Theorem~\ref{thm:limsup_ati} remain valid with the fundamental difference that the limits are not uniform over all ${\bf P}_{V_1}$. For such a relaxation, the limits depend on ${\bf P}_{V_1}, \Sigma_1, P_1, \Pi_1$ and the asymptotic optimization problem $C^\infty(\kappa)$ may not be convex. Specific examples are found in \cite{louka-kourtellaris-charalambous:2020b}.
\end{remark} \section{Conclusion} This paper presents new asymptotic characterizations of the nonfeedback capacity of MIMO additive Gaussian noise (AGN) channels, when the noise is nonstationary and unstable. The asymptotic characterizations of nonfeedback capacity involve two generalized matrix algebraic Riccati equations (AREs) of filtering theory and a Lyapunov matrix equation of the stability theory of Gaussian systems. We identify conditions for uniform convergence of the asymptotic limits, which imply that the nonfeedback capacity is independent of the initial states. \newpage \bibliographystyle{IEEEtran} \bibliography{Bibliography_capacity} \end{document}
2205.01320v1
http://arxiv.org/abs/2205.01320v1
Bernstein inequality on conic domains and triangle
\documentclass{amsart} \usepackage{amsmath,amsthm} \usepackage{amsfonts,amssymb} \usepackage{accents} \usepackage{enumerate} \usepackage{accents,color} \usepackage{graphicx} \hfuzz1pc \addtolength{\textwidth}{0.5cm} \newcommand{\lvt}{\left|\kern-1.35pt\left|\kern-1.3pt\left|} \newcommand{\rvt}{\right|\kern-1.3pt\right|\kern-1.35pt\right|} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{exam}[thm]{Example} \newtheorem{ax}{Axiom} \newtheorem{defn}[thm]{Definition} \theoremstyle{remark} \newtheorem{rem}{Remark}[section]\newtheorem*{notation}{Notation} \renewcommand{\theequation}{\thesection.\arabic{equation}} \newcommand{\thmref}[1]{Theorem~\ref{#1}} \newcommand{\secref}[1]{\S\ref{#1}} \newcommand{\lemref}[1]{Lemma~\ref{#1}} \def\la{{\langle}} \def\ra{{\rangle}} \def\ve{{\varepsilon}} \def\d{\mathrm{d}} \def\sph{{\mathbb{S}^{d-1}}} \def\sB{{\mathsf B}} \def\sC{{\mathsf C}} \def\sD{{\mathsf D}} \def\sE{{\mathsf E}} \def\sG{{\mathsf G}} \def\sH{{\mathsf H}} \def\sK{{\mathsf K}} \def\sL{{\mathsf L}} \def\sM{{\mathsf M}} \def\sP{{\mathsf P}} \def\sQ{{\mathsf Q}} \def\sS{{\mathsf S}} \def\sU{{\mathsf U}} \def\bs{{\mathsf b}} \def\sc{{\mathsf c}} \def\sd{{\mathsf d}} \def\sm{{\mathsf m}} \def\sw{{\mathsf w}} \def\ff{{\mathfrak f}} \def\fh{{\mathfrak h}} \def\fp{{\mathfrak p}} \def\fs{{\mathfrak s}} \def\fS{{\mathfrak S}} \def\fD{{\mathfrak D}} \def\a{{\alpha}} \def\b{{\beta}} \def\g{{\gamma}} \def\k{{\kappa}} \def\t{{\theta}} \def\l{{\lambda}} \def\o{{\omega}} \def\s{\sigma} \def\la{{\langle}} \def\ra{{\rangle}} \def\ve{{\varepsilon}} \def\ab{{\mathbf a}} \def\bb{{\mathbf b}} \def\cb{{\mathbf c}} \def\ib{{\mathbf i}} \def\jb{{\mathbf j}} \def\kb{{\mathbf k}} \def\lb{{\mathbf l}} \def\mb{{\mathbf m}} \def\nb{{\mathbf n}} \def\rb{{\mathbf r}} \def\sb{{\mathbf s}} \def\tb{{\mathbf t}} \def\ub{{\mathbf u}} \def\vb{{\mathbf v}} \def\xb{{\mathbf x}} \def\yb{{\mathbf y}} 
\def\Ab{{\mathbf A}} \def\Bb{{\mathbf B}} \def\Db{{\mathbf D}} \def\Eb{{\mathbf E}} \def\Fb{{\mathbf F}} \def\Gb{{\mathbf G}} \def\Hb{{\mathbf H}} \def\Jb{{\mathbf J}} \def\Kb{{\mathbf K}} \def\Lb{{\mathbf L}} \def\Pb{{\mathbf P}} \def\Sb{{\mathbf S}} \def\CB{{\mathcal B}} \def\CD{{\mathcal D}} \def\CF{{\mathcal F}} \def\CH{{\mathcal H}} \def\CI{{\mathcal I}} \def\CJ{{\mathcal J}} \def\CL{{\mathcal L}} \def\CO{{\mathcal O}} \def\CP{{\mathcal P}} \def\CM{{\mathcal M}} \def\CT{{\mathcal T}} \def\CU{{\mathcal U}} \def\CV{{\mathcal V}} \def\CA{{\mathcal A}} \def\CW{{\mathcal W}} \def\BB{{\mathbb B}} \def\CC{{\mathbb C}} \def\KK{{\mathbb K}} \def\NN{{\mathbb N}} \def\PP{{\mathbb P}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\SS{{\mathbb S}} \def\TT{{\mathbb T}} \def\VV{{\mathbb V}} \def\XX{{\mathbb X}} \def\ZZ{{\mathbb Z}} \def\proj{\operatorname{proj}} \def\Lip{\operatorname{Lip}} \def\vi{\varphi} \def\lla{\langle{\kern-2.5pt}\langle} \def\rra{\rangle{\kern-2.5pt}\rangle} \newcommand{\wt}{\widetilde} \newcommand{\wh}{\widehat} \def\rhow{{\lceil\tfrac{\varrho}{2}\rceil}} \newcommand{\rhoh}{{\lfloor\tfrac{\varrho}{2}\rfloor}} \DeclareMathOperator{\IM}{Im} \DeclareMathOperator{\esssup}{ess\,sup} \DeclareMathOperator{\meas}{meas} \def\brho{{\boldsymbol{\rho}}} \def\fD{{\mathfrak D}} \def\f{\frac} \def\va{\varepsilon} \def\sub{\substack} \graphicspath{{./}} \begin{document} \title[Bernstein inequality on conic domains and triangles] {Bernstein inequality on conic domains and triangles} \author{Yuan~Xu} \address{Department of Mathematics, University of Oregon, Eugene, OR 97403--1222, USA} \email{[email protected]} \thanks{The author is partially supported by Simons Foundation Grant \#849676 and by an Alexander von Humboldt award.} \date{\today} \subjclass[2010]{41A10, 41A63, 42C10, 42C40} \keywords{Bernstein inequality, polynomials, doubling weight, conic domain, triangle} \begin{abstract} We establish weighted Bernstein inequalities in $L^p$ space for the doubling 
weight on the conic surface $\VV_0^{d+1} = \{(x,t): \|x\| = t, x \in \RR^d, t\in [0,1]\}$ as well as on the solid cone bounded by the conic surface and the hyperplane $t =1$, which becomes a triangle on the plane when $d=1$. While the inequalities for the derivatives in the $t$ variable behave as expected, there are inequalities for the derivatives in the $x$ variables that are stronger than what one may have expected. As an example, on the triangle $\{(x_1,x_2): x_1 \ge 0, \, x_2 \ge 0,\, x_1+x_2 \le 1\}$, the usual Bernstein inequality for the derivative $\partial_1$ states that $\|\phi_1 \partial_1 f\|_{p,w} \le c n \|f\|_{p,w}$ with $\phi_1(x_1,x_2):= \sqrt{x_1(1-x_1-x_2)}$, whereas our new result gives $$\| (1-x_2)^{-1/2} \phi_1 \partial_1 f\|_{p,w} \le c n \|f\|_{p,w}.$$ The new inequality is stronger and points out a phenomenon hitherto unobserved for polygonal domains. \end{abstract} \maketitle \section{Introduction} \setcounter{equation}{0} The Bernstein inequalities are fundamental in approximation theory, as seen in the inverse estimate in the characterization of best approximation and numerous other applications (see, for example, \cite{DG, DT}). Starting from algebraic polynomials on $[0,1]$, the Bernstein or Bernstein-Markov inequalities have been refined and generalized extensively by many authors. The most notable recent extensions are the inequalities in the $L^p$ norm with a doubling weight, initiated in \cite{MT1}, and inequalities for multivariable polynomials on various domains, such as polytopes, convex domains, and domains with smooth boundaries; see \cite{B1, B2, BLMT, BM, BLMR, Dai1, DP2, DaiX, Kroo0, Kroo1, Kroo2, Kroo3, KR, LWW, T1, T2, X05} and references therein.
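The strengthened triangle inequality displayed in the abstract can be probed numerically. The following Python sketch is an illustration only, not part of the proofs: it uses the uniform norm, the constant weight $w\equiv 1$, and the test family $f(y_1,y_2)=T_n(1-2y_1)$ built from Chebyshev polynomials, all choices of ours, and checks on a grid over the triangle that the left-hand side stays below $n\,\|f\|_\infty$.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Numerical probe of || (1-y2)^{-1/2} phi_1 d_1 f ||_inf <= c n ||f||_inf on the
# triangle, for the test family f(y1, y2) = T_n(1 - 2*y1).  (Illustration only;
# the test family and the grid are our choices.)
y = np.linspace(0.0, 1.0, 801)
Y1, Y2 = np.meshgrid(y, y)
mask = Y1 + Y2 <= 1.0                      # grid points inside the triangle
for n in range(1, 12):
    cn = np.zeros(n + 1); cn[n] = 1.0      # Chebyshev coefficients of T_n
    u = 1 - 2*Y1
    f = C.chebval(u, cn)
    d1f = -2*C.chebval(u, C.chebder(cn))   # d/dy1 of T_n(1 - 2*y1)
    phi1 = np.sqrt(np.clip(Y1*(1 - Y1 - Y2), 0.0, None))
    lhs = np.max((phi1*np.abs(d1f)/np.sqrt(1 - Y2 + 1e-15))[mask])
    rhs = n*np.max(np.abs(f)[mask])        # n * ||f||_inf
    assert lhs <= rhs*(1 + 1e-6)           # holds with constant c = 1 here
```

For this family the bound is attained (up to grid resolution) along the edge $y_2=0$, which is consistent with the factor $(1-y_2)^{-1/2}$ being harmless at that edge and strongest near the corner $(0,1)$.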
The purpose of this paper is to establish the Bernstein inequalities, in the uniform norm and in the $L^p$ norm with a doubling weight, for polynomials on the conic surface $$ \VV_0^{d+1} = \left \{(x,t): \|x\| = t, \quad x \in \RR^d, \, 0 \le t \le 1 \right\}, $$ and on the solid cone $\VV^{d+1}$ bounded by $\VV_0^{d+1}$ and the hyperplane $t =1$, as well as on the planar triangle. The domain has a singularity at the vertex. How the singularity affects the Bernstein inequalities motivates our study. We will establish several inequalities that demonstrate the impact of the singularity at the vertex for the conic domains. Our results also lead to new inequalities on the triangle that are stronger than those known in the literature, which are somewhat unexpected and reveal new phenomena hitherto unnoticed. To be more precise, let us first recall the Bernstein inequalities in the $L^p$ norm with a doubling weight on the interval \cite{MT1} and on the unit sphere \cite{Dai1, DaiX}. Let $\ell$ be a positive integer. For a doubling weight $w$ on $[0,1]$ and $1 \le p < \infty$, it is known \cite[Theorem 7.3]{MT1} that \begin{equation}\label{eq:Bernstein[0,1]} \left \|f^{(\ell)} \right\|_{L^p([0,1],w)} \le c n^{2 \ell} \|f\|_{L^p([0,1],w)}, \qquad \deg f \le n; \end{equation} moreover, setting $\varphi(x) = \sqrt{x(1-x)}$, then \cite[Theorem 7.4]{MT1} \begin{equation}\label{eq:Bernstein[0,1]phi} \left \|\varphi^\ell f^{(\ell)} \right\|_{L^p([0,1],w)} \le c n^\ell \|f\|_{L^p([0,1],w)}, \qquad \deg f \le n.
\end{equation} For polynomials on the unit sphere $\sph$ and a doubling weight $w$ on $\sph$, the Bernstein inequalities are of the form \cite[Theorem 5.5.2]{DaiX} \begin{equation}\label{eq:BernsteinSph} \left \|D_{i,j}^\ell f \right \|_{L^p(\sph,w)} \le c n^\ell \|f\|_{L^p(\sph,w)}, \qquad \deg f \le n, \end{equation} where the derivatives $D_{i,j}$ are the angular derivatives on the unit sphere defined by \begin{equation}\label{eq:Dij} D_{i,j} = x_i \f{\partial}{\partial x_j} - x_j \f{\partial}{\partial x_i}, \quad 1 \le i < j \le d. \end{equation} These inequalities also hold for the uniform norm in the respective domain. The Bernstein inequalities on the conic surface are closely related to the above inequalities. Indeed, parametrizing $\VV_0^{d+1}$ as $\{(t \xi, t): \xi \in \sph, 0 \le t \le 1\}$, we see that the space of polynomials on $\VV_0^{d+1}$ contains the subspace of polynomials in the $t$ variable as well as spherical polynomials on the unit sphere $\sph$. Moreover, writing $(x,t) = (t\xi, t) \in \VV_0^{d+1}$ with $\xi \in \sph$, it follows that the derivatives on the surface $\VV_0^{d+1}$ are the partial derivative in the $t$ variable and the angular derivatives $D_{i,j}$ in the $x$ variable. The conic surface, however, is not a smooth surface because of its vertex at the origin of $\RR^{d+1}$. Our result shows that the inequality for the partial derivative in the $t$ variable behaves as expected, but the Bernstein inequality for $D_{i,j}$ on $\VV_0^{d+1}$ turns out to satisfy $$ \left \|\frac{1}{\sqrt{t}^\ell} D_{i,j}^\ell f \right \|_{L^p(\VV_0^{d+1},\sw)} \le c n^\ell \|f\|_{L^p(\VV_0^{d+1}, \sw)}, \quad \ell = 1,2, \quad \deg f \le n, $$ where $\sw$ is a doubling weight on the conic surface, but not for $\ell > 2$ in general. The factor $1/\sqrt{t}$ reflects the geometry of the conic surface. A similar result will also be established on the solid cone $\VV^{d+1}$. 
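The one-dimensional inequality \eqref{eq:Bernstein[0,1]phi} with $\ell=1$ can be illustrated numerically; the following sketch (our own choice of test family, uniform norm, $w\equiv 1$) uses $f(x)=T_n(2x-1)$ on $[0,1]$, for which $\max \varphi|f'|=n$ while $\|f\|_\infty=1$, so the inequality holds with $c=1$.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sanity check of the weighted Bernstein inequality ||varphi f'|| <= c n ||f||
# in the uniform norm on [0,1], using f(x) = T_n(2x - 1).  (Illustration only.)
xs = np.linspace(0.0, 1.0, 20001)
u = 2*xs - 1
for n in range(1, 25):
    cn = np.zeros(n + 1); cn[n] = 1.0      # Chebyshev coefficients of T_n
    f = C.chebval(u, cn)
    df = 2*C.chebval(u, C.chebder(cn))     # derivative of T_n(2x - 1)
    lhs = np.max(np.sqrt(xs*(1 - xs))*np.abs(df))
    rhs = n*np.max(np.abs(f))
    assert lhs <= rhs*(1 + 1e-8)
```

The identity $\sqrt{1-u^2}\,T_n'(u)=n\sin(n\arccos u)$ explains why the maximum is exactly $n$, which is the classical sharpness statement behind \eqref{eq:Bernstein[0,1]phi}.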
For $d=1$, the cone $\VV^2$ is a triangle and it is mapped, by an affine transform, to the standard triangle $$ \TT^2 = \{(y_1,y_2): y_1\ge 0, y_2\ge 0, y_1+y_2 \le 1\} $$ in $\RR^2$, so that we obtain new Bernstein inequalities on the triangle. In the literature, a typical Bernstein inequality on the triangle takes the form \begin{equation}\label{eq:triangleS1} \|\phi_{1} \partial_1 f \|_{L^p(\TT^2, w)}\le c\, n \|f\|_{L^p(\TT^2, w)}, \quad \phi_1(y) = \sqrt{y_1(1-y_1-y_2)},\quad \deg f \le n, \end{equation} for $\partial_1 = \frac{\partial} {\partial y_1}$, for example (cf. \cite{BX, DZ, DT}), and it holds also for the uniform norm. The inequality \eqref{eq:triangleS1} is accepted as an appropriate generalization of the Bernstein inequality \eqref{eq:Bernstein[0,1]phi} on the triangle, since $\phi_1$ takes into account the boundary of $\TT^2$, just like $\varphi$ on $[0,1]$ in \eqref{eq:Bernstein[0,1]phi}. Nevertheless, we obtain a stronger inequality: \begin{equation}\label{eq:triangleS} \left \| \frac{1} {\sqrt{1-y_2}} \phi_{1} \partial_1 f \right\|_{L^p(\TT^2, w)}\le c\, n \|f\|_{L^p(\TT^2, w)}, \quad \deg f \le n. \end{equation} The inequality \eqref{eq:triangleS} is somewhat surprising. Indeed, the additional factor $\f{1}{\sqrt{1-y_2}}$ appears to be a new phenomenon that has not been observed before. In hindsight, the inequality \eqref{eq:triangleS1} takes into account the boundary of the triangle, whereas the new one takes into account the singularity at the corners as well. On several regular domains, including the conic domains, there exist second-order differential operators that have orthogonal polynomials as eigenfunctions. For the unit sphere, it is the Laplace-Beltrami operator.
On the conic surface $\VV_0^{d+1}$, it is given by $$ \Delta_{0,\g} = t(1-t)\partial_t^2 + \big( d-1 - (d+\g)t \big) \partial_t+ t^{-1} \sum_{1\le i < j \le d } \left( D_{i,j}^{(x)} \right)^2, $$ and it has the orthogonal polynomials with respect to $t^{-1}(1-t)^\g$ on $\VV_0^{d+1}$ as eigenfunctions. The Bernstein inequality for such an operator can be established easily. For example, it was shown in \cite[Theorem 3.1.7]{X21} that $$ \left \|(- \Delta_{0,\g})^\ell f \right \|_{L^p(\VV_0^{d+1},\sw)} \le c n^{2\ell} \|f\|_{L^p(\VV_0^{d+1}, \sw)}, \quad \deg f \le n, \quad \ell =1,2,\ldots. $$ The proof relies only on the eigenvalues of $\Delta_{0,\g}$ and follows from a general result for all such operators on localizable homogeneous spaces. As it is, the dependence on the domain is opaque and hidden in the proof. This inequality nevertheless motivates our study of the Bernstein inequalities for the first-order derivatives. It can also be used to show the sharpness of our new results for special values of $\ell$ when $p=2$. Our analysis relies on orthogonal structures on the conic domains \cite{X20}. It is part of an ongoing program that aims at extending the results in approximation theory and harmonic analysis from the unit sphere to quadratic surfaces of revolution \cite{X20, X21a, X21, X21b}. Our main tools are the closed-form formula for the reproducing kernels in \cite{X20} and the highly localized kernel studied in \cite{X21}. The approach follows the analysis on the unit sphere \cite{DaiX}, but the conic domain has its intrinsic complexity. For example, the distance function on $\VV_0^{d+1}$ is not intuitively evident and satisfies a formula much more involved than the geodesic distance on the unit sphere (see \eqref{eq:distV0} below).
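The eigenfunction property of $\Delta_{0,\g}$ displayed above can be verified symbolically in the lowest-dimensional case $d=2$, where the eigenfunctions take the separated form $P_{n-m}^{(2m,\g)}(1-2t)\,t^m\cos m\t$ in the surface coordinates $(t,\t)$ and the eigenvalue is $-n(n+\g+d-1)=-n(n+\g+1)$; this is a hedged sanity check (the separated form is spelled out here for the check), not a proof.

```python
import sympy as sp

# Symbolic check of the eigenvalue equation for Delta_{0,gamma} when d = 2,
# in the surface coordinates (t, theta): the claimed eigenfunctions are
# u = P_{n-m}^{(2m,gamma)}(1 - 2t) t^m cos(m*theta), with eigenvalue
# -n(n + gamma + 1).  (Hedged illustration for small degrees.)
t, th, g = sp.symbols('t theta gamma', positive=True)
for n in range(0, 4):
    for m in range(0, n + 1):
        u = sp.jacobi(n - m, 2*m, g, 1 - 2*t) * t**m * sp.cos(m*th)
        Du = (t*(1 - t)*sp.diff(u, t, 2)        # t(1-t) d^2/dt^2
              + (1 - (2 + g)*t)*sp.diff(u, t)   # (d-1-(d+g)t) d/dt with d = 2
              + sp.diff(u, th, 2)/t)            # t^{-1} Laplace-Beltrami on S^1
        assert sp.simplify(sp.expand(Du + n*(n + g + 1)*u)) == 0
```

The check keeps $\g$ symbolic, so it exercises the full one-parameter family of weights $t^{-1}(1-t)^\g$ at once, albeit only for degrees $n\le 3$.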
The study in \cite{X21} establishes a framework, assuming the existence of highly localized kernels, for approximation and localized frames on a localizable homogeneous space and, along the way, provides a toolbox for carrying out analysis on localizable homogeneous spaces. For the conic domains, the highly localized kernels are established by delicate estimates. For our proof of the Bernstein inequalities, we shall establish sharp estimates for the derivatives of these kernels. The paper is organized as follows. The main results are stated and discussed in Section 2. The proofs of the main results are in Section 3 for the conic surface and Section 4 for the solid cone. Throughout the paper, we let $c, c_1, c_2, \ldots$ denote positive constants that depend only on fixed parameters; their values may change from line to line. We write $A \sim B$ if $c_1 B \le A \le c_2 B$. \section{Main results}\label{sec:Main} \setcounter{equation}{0} We state and discuss our main results in this section. The Bernstein inequalities on the conic surface are stated and discussed in the first subsection and those on the solid cone are stated in the second subsection. The inequalities on the triangle follow from those on the solid cone and will be discussed in the third subsection. \subsection{Main results on the conic surface} Parametrizing $\VV_0^{d+1}$ as $\{(t \xi, t): \xi \in \sph, 0 \le t \le 1\}$, we see that the first-order derivatives on the conic surface are $\partial_t = \f{\partial}{\partial t}$ in the $t$ variable and the angular derivatives $D_{i,j}$, $1\le i < j \le d$, in the $x$ variables, which we denote by $D_{i,j}^{(x)}$.
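The identity $D_{i,j} = \partial/\partial\t_{i,j}$ behind the angular derivatives can be confirmed by a short symbolic chain-rule computation; the test polynomial below is an arbitrary choice of ours, and the check is a sanity check rather than anything needed for the proofs.

```python
import sympy as sp

# Chain-rule verification in the (x1, x2)-plane that the angular derivative
# D_{1,2} = x1 d/dx2 - x2 d/dx1 coincides with d/dtheta in polar coordinates.
r, th, x1, x2 = sp.symbols('r theta x1 x2', real=True)
g = x1**3*x2 + x2**2                                   # arbitrary test polynomial
Dg = x1*sp.diff(g, x2) - x2*sp.diff(g, x1)             # D_{1,2} g in Cartesian form
polar = {x1: r*sp.cos(th), x2: r*sp.sin(th)}
lhs = sp.simplify(Dg.subs(polar))
rhs = sp.simplify(sp.diff(g.subs(polar), th))          # d/dtheta of g(r cos, r sin)
assert sp.simplify(lhs - rhs) == 0                     # the two expressions agree
```

Since the result contains no derivative in $r$, the computation also makes visible the scale invariance of $D_{i,j}$ noted below.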
The latter are called angular derivatives since if $\t_{i,j}$ is the angle of polar coordinates in the $(x_i,x_j)$-plane, defined by $(x_i,x_j) = r_{i,j} (\cos \t_{i,j},\sin \t_{i,j})$, where $r_{i,j} \ge 0$ and $0 \le \t_{i,j} \le 2 \pi$, then a quick computation shows that $$ D_{i,j} = x_i \frac{\partial}{\partial x_j} - x_j \frac{\partial}{\partial x_i} = \frac{\partial}{\partial \t_{i,j}}, \qquad 1 \le i < j \le d. $$ In particular, they are independent of the scale of $x$. We state our Bernstein inequalities for the derivatives $\partial_t$ and $D_{i,j}^{(x)}$ on the conic surface. Let $\sw$ be a doubling weight on $\VV_0^{d+1}$; see Subsection \ref{sec:OPconicSurface} for the definition. We denote by $\|\cdot\|_{p,\sw}$ the weighted $L^p$ norm on $\VV_0^{d+1}$ $$ \|f\|_{p,\sw} = \left( \int_{\VV_0^{d+1}} | f(x,t)|^p \sw(x,t) \d \sm (x,t)\right)^{1/p}, \qquad 1 \le p < \infty, $$ where $\sm$ denotes the Lebesgue measure on the conic surface. For $ p =\infty$, we denote by $\|\cdot\|_\infty$ the uniform norm on $\VV_0^{d+1}$. For $n = 0,1,2, \ldots$, let $\Pi_n(\VV_0^{d+1})$ be the space of polynomials of degree at most $n$ on $\VV_0^{d+1}$. \begin{thm} \label{thm:BI-V0} Let $\sw$ be a doubling weight on $\VV_0^{d+1}$ and let $f\in \Pi_n (\VV_0^{d+1})$. For $\ell \in \NN$ and $1 \le p < \infty$, \begin{equation} \label{eq:BI-V0-1} \|\partial_t^\ell f\|_{p,\sw} \le c_p n^{2 \ell} \|f\|_{p,\sw}, \end{equation} and, with $\varphi(t) = \sqrt{t(1-t)}$, \begin{equation}\label{eq:BI-V0-2} \|\varphi^\ell \partial_t^\ell f\|_{p,\sw} \le c_p n^{\ell} \|f\|_{p,\sw}.
\end{equation} Moreover, for $D_{i,j} = D_{i,j}^{(x)}$, $1 \le i < j \le d$, and $\ell \in \NN$, \begin{equation}\label{eq:BI-V0-4} \left \|D_{i,j}^\ell f\right \|_{p,\sw} \le c_p n^\ell \|f\|_{p,\sw}, \end{equation} and, furthermore, for $\ell =1, 2$, \begin{equation}\label{eq:BI-V0-3} \left \| \frac{1}{\sqrt{t}^\ell } D_{i,j}^\ell f\right \|_{p,\sw} \le c_p n^\ell \|f\|_{p,\sw}, \end{equation} but not for $\ell \ge 3$ in general. Finally, these inequalities also hold when the norm is the uniform norm $\|\cdot\|_\infty$ on $\VV_0^{d+1}$. \end{thm} The inequality \eqref{eq:BI-V0-3} is stronger than \eqref{eq:BI-V0-4} when $\ell = 1, 2$, since $0 \le t \le 1$. Moreover, for $\ell > 2$, the inequality \eqref{eq:BI-V0-4} follows from iteration. It is worth mentioning that the inequality \eqref{eq:BI-V0-3} for $\ell =1$ cannot be iterated to obtain the inequality for $\ell =2$, since the presence of $1/\sqrt{t}$ means that the function involved in the iteration is no longer a polynomial. We now discuss these inequalities in light of the spectral operator $\Delta_{0,\g}$ on the conic surface, which is defined for $\g > -1$ by \begin{equation}\label{eq:LB-operator} \Delta_{0,\g}:= t(1-t)\partial_t^2 + \big( d-1 - (d+\g)t \big) \partial_t+ t^{-1} \Delta_0^{(\xi)}, \end{equation} where $\xi = x/t$ and $\Delta_0^{(\xi)}$ denotes the Laplace-Beltrami operator $\Delta_0$ on the unit sphere $\sph$ in the $\xi$ variables. For $\g > -1$, the operator $- \Delta_{0,\g}$ is non-negative and has orthogonal polynomials with respect to the weight function $t^{-1}(1-t)^\g$ on $\VV_0^{d+1}$ as eigenfunctions; see Theorem \ref{thm:Jacobi-DE-V0} below. The latter property allows us to define $(-\Delta_{0,\g})^{\a}$ for $\a \in \RR$. It is shown in \cite[Theorem 3.1.7]{X21} that the following Bernstein inequality holds. \begin{thm} \label{thm:BernsteinLB} Let $\sw$ be a doubling weight on $\VV_0^{d+1}$. Let $\g > -1$.
Then, for $r > 0$ and $1 \le p \le \infty$, \begin{equation}\label{eq:BernsteinLB} \| (- \Delta_{0,\g})^{\f r 2}f \|_{p,\sw} \le c n^r \|f\|_{p,\sw}, \qquad \forall f\in \Pi_n(\VV_0^{d+1}). \end{equation} \end{thm} The proof of this theorem, however, does not use the explicit formula of the operator $\Delta_{0,\g}$ and relies only on the fact that the operator has orthogonal polynomials as eigenfunctions. The operator $\Delta_{0,\g}$ in \eqref{eq:LB-operator} contains a factor $t^{-1} \Delta_0^{(\xi)}$. It is known that the Laplace-Beltrami operator $\Delta_0$ can be decomposed in terms of angular derivatives (cf. \cite[Theorem 1.8.2]{DaiX}), \begin{equation}\label{eq:D=Dij} \Delta_0 = \sum_{1\le i < j \le d} D_{i,j}^2. \end{equation} Hence, the factor $t^{-1} \Delta_0^{(\xi)}$ in the operator $\Delta_{0,\g}$ can be decomposed as a sum of $\frac{1}{\sqrt{t}^{2}} (D_{i,j}^{(\xi)})^2$. Thus, the inequality \eqref{eq:BernsteinLB} with $r=2$ follows from the inequalities in Theorem \ref{thm:BI-V0}. Notice, however, that \eqref{eq:BI-V0-3} holds only for $\ell = 1$ and $2$. This makes the higher order Bernstein inequality for the spectral operator $\Delta_{0,\g}$ that much more special. The operator $- \Delta_{0,\g}$ on the conic surface $\VV_0^{d+1}$ is self-adjoint, which is evident from the following identity that relies on the first-order derivatives on the conic surface. \begin{prop} \label{prop:self-adJacobi} For $\g > -1$, let $\sw_{-1,\g}(t) = t^{-1}(1-t)^\g$, $0 \le t \le 1$. Then \begin{align*} - \int_{\VV_0^{d+1}} \Delta_{0,\g} f(x,t) \cdot g(x,t) \sw_{-1,\g}(t) \d\sm(x,t) = \int_{\VV^{d+1}_0} t(1-t) \frac{\d f}{\d t} \frac{\d g}{\d t} \sw_{-1,\g}(t) \d \sm (x,t) & \\ + \sum_{1 \le i < j\le d} \int_{\VV_0^{d+1}} t^{-1}D_{i,j}^{(x)} f(x,t) D_{i,j}^{(x)} g (x,t) \sw_{-1,\g}(t) \d \sm(x,t)&, \end{align*} where $\frac{\d f}{\d t} = \frac{\d}{\d t} f(t\xi,t)$. In particular, $-\Delta_{0,\g}$ is self-adjoint in $L^2( \VV_0^{d+1}, \sw_{-1,\g})$.
\end{prop} The proof of these results will be given in Subsection \ref{sect:proof_corollary}. It is used to deduce the following corollary. \begin{cor} \label{cor:B-V0_p=2} Let $d \ge 2$ and $\g > -1$. Then, for $\sw = \sw_{-1,\g}$ and any polynomial $f$, $$ \left \| \varphi \partial_t f \right \|_{2,\sw}^2 + \sum_{1\le i< j \le d} \left \|\f{1}{\sqrt{t}} D_{i,j}^{(x)} f \right \|_{2,\sw}^2 \le \left \| f \right \|_{2,\sw} \left \| \Delta_{0,\g} f \right \|_{2,\sw}. $$ In particular, for $f \in \Pi_n(\VV_0^{d+1})$, the inequality \eqref{eq:BernsteinLB} with $r =2$ and $p=2$ implies the inequalities \eqref{eq:BI-V0-2} and \eqref{eq:BI-V0-3} with $\ell =1$ and $p=2$ when $\sw = \sw_{-1,\g}$. \end{cor} In the other direction, the explicit formulas \eqref{eq:LB-operator} and \eqref{eq:D=Dij} show immediately that $\|\Delta_{0,\g} f\|_{p,\sw}$ is bounded by the sum of $\|\varphi^2 \partial_t^2 f\|_{p,\sw}$, $\| \partial_t f\|_{p,\sw}$ and $\| (\frac{1}{\sqrt{t}}D_{i,j})^2 f\|_{p,\sw}$. In particular, the inequalities in Theorem \ref{thm:BI-V0} imply \eqref{eq:BernsteinLB} with $r =2$. By the spectral property, the inequality \eqref{eq:BernsteinLB} is sharp. The corollary and the discussion above provide some assurance that the inequalities \eqref{eq:BI-V0-2} and \eqref{eq:BI-V0-3} are sharp when $p =2$ and $\sw= \sw_{-1,\g}$. \subsection{Main results on the cone} Here the domain is the solid cone in $\RR^{d+1}$ for $d \ge 1$, $$ \VV^{d+1} = \left \{(x,t): \|x\| \le t, \, 0 \le t \le 1, \, x \in \RR^d \right \}.
$$ Writing $\VV^{d+1}$ as $\{(t y, t): y \in \BB^d, 0 \le t \le 1\}$, where $\BB^d$ is the unit ball of $\RR^d$, we see that the first-order derivatives on the cone are $\partial_t = \f{\partial}{\partial t}$ in the $t$ variable and the angular derivatives $D_{i,j}$, $1\le i < j \le d$, in the $x$ variables, which we denote by $D_{i,j}^{(x)}$, as well as one more partial derivative, denoted by $D_{x_j}$ and defined by \begin{equation} \label{eq:Dxj} D_{x_j} = \sqrt{t^2-\|x\|^2} \frac{\partial}{\partial x_j}, \qquad 1 \le j \le d, \quad (x,t) \in \VV^{d+1}. \end{equation} Let $W$ be a doubling weight on $\VV^{d+1}$; see Subsection \ref{sec:OPcone} for the definition. We denote by $\|\cdot\|_{p,W}$ the weighted $L^p$ norm on $\VV^{d+1}$ $$ \|f\|_{p,W} = \left( \int_{\VV^{d+1}} | f(x,t)|^p W(x,t) \d x \d t \right)^{1/p}, \qquad 1 \le p < \infty. $$ \begin{thm} \label{thm:BI-V} Let $W$ be a doubling weight on $\VV^{d+1}$ and let $f\in \Pi_n$. For $\ell \in \NN$ and $1 \le p < \infty$, \begin{equation} \label{eq:BI-V-1} \left \|\partial_t^\ell f \right \|_{p,W} \le c_p n^{2 \ell} \|f\|_{p,W} \quad \hbox{and} \quad \left \|\varphi^\ell \partial_t^\ell f\right \|_{p,W} \le c_p n^{\ell} \|f\|_{p,W}, \end{equation} where $\varphi(t) = \sqrt{t(1-t)}$; moreover, for $ 1 \le i < j \le d$, \begin{equation}\label{eq:BI-V-1B} \left \|D_{i,j}^\ell f\right \|_{p,W} \le c_p n^\ell \|f\|_{p,W}, \end{equation} and, for $\ell = 1,2$, \begin{equation}\label{eq:BI-V-1C} \left \| \frac{1}{\sqrt{t}^\ell } D_{i,j}^\ell f\right \|_{p,W} \le c_p n^\ell \|f\|_{p,W}, \end{equation} which does not hold for $\ell =3,4,\ldots$ in general.
Furthermore, for $\ell \in \NN$ and $1 \le j \le d$, \begin{equation}\label{eq:BI-V-2} \left \|\partial_{x_j}^\ell f \right \|_{p,W} \le c_p n^{2 \ell} \|f\|_{p,W} \quad \hbox{and} \quad \left \|\Phi^\ell \partial_{x_j}^\ell f\right \|_{p,W} \le c_p n^{\ell} \|f\|_{p,W}, \end{equation} where $\Phi(x,t) = \sqrt{t^2-\|x\|^2}$, and, for $\ell = 1, 2$, \begin{equation}\label{eq:BI-V-2B} \left \| \frac{1}{\sqrt{t}^\ell } \Phi^\ell \partial_{x_j}^\ell f\right \|_{p,W} \le c_p n^\ell \|f\|_{p,W}. \end{equation} Finally, these inequalities also hold when the norm is the uniform norm $\|\cdot\|_\infty$ on $\VV^{d+1}$. \end{thm} In contrast to \eqref{eq:BI-V-1C}, we do not know whether \eqref{eq:BI-V-2B} holds for $\ell \ge 3$. As will be shown in Section 4, the two cases are closely related, but the example that disproves \eqref{eq:BI-V-1C} for $\ell \ge 3$ does not work for \eqref{eq:BI-V-2B}. As in the case of the conic surface, there is also a spectral operator on the cone $\VV^{d+1}$. For $\mu > -\f12$ and $\g> -1$, define the second order differential operator \begin{align} \label{eq:V-DE} \fD_{\mu,\g} : = & \, t(1-t)\partial_t^2 + 2 (1-t) \la x,\nabla_x \ra \partial_t + t \Delta_x - \langle x, \nabla_x \rangle^2 \\ & + (2\mu+d)\partial_t - (2\mu+\g+d+1)( \la x,\nabla_x\ra + t \partial_t) + \langle x, \nabla_x \rangle. \notag \end{align} It is proved in \cite{X21} that orthogonal polynomials with respect to the weight function $$ W_{\mu,\g}(x,t)= (t^2-\|x\|^2)^{\mu-\f12} (1-t)^\g, \qquad \mu > -\tfrac12, \g > -1 $$ on $\VV^{d+1}$ are the eigenfunctions of the operator $\fD_{\mu,\g}$ (see Theorem \ref{thm:Delta0V} below). In particular, the operator satisfies the following Bernstein inequality \cite[Theorem 3.1.7]{X21}. \begin{thm} \label{thm:BernsteinLB-V} Let $W$ be a doubling weight on $\VV^{d+1}$. Let $\g > -1$ and $\mu > -\f12$.
For $r > 0$ and $1 \le p \le \infty$, \begin{equation}\label{eq:BernsteinLB-V} \| (- \fD_{\mu,\g})^{\f r 2}f \|_{p,W} \le c n^r \|f\|_{p,W}, \qquad \forall f\in \Pi_n. \end{equation} \end{thm} The operator $\fD_{\mu,\g}$ is self-adjoint, as can be seen in the following theorem. \begin{thm} \label{thm:self-adjointV} For $\mu > -\f12$ and $\g > -1$, \begin{align} \label{eq:intJacobiSolid} \int_{\VV^{d+1}} - \fD_{\mu,\g} f(x,t) & \cdot g(x,t) W_{\mu,\g}(x,t) \d x \d t = \int_{\VV^{d+1}} t \frac{\d f}{\d t} \frac{\d g}{\d t} W_{\mu,\g+1}(x,t) \d x \d t\\ & + \sum_{i=1}^d \int_{\VV^{d+1}} D_{x_i} f(x, t) \cdot D_{x_i} g(x,t) t^{-1} W_{\mu,\g}(x,t) \d x \d t \notag \\ & + \int_{\VV^{d+1}} \sum_{i < j} D_{i,j}^{(x)} f(x,t) \cdot D_{i,j}^{(x)} g(x,t) t^{-1} W_{\mu,\g} (x,t) \d x\,\d t, \notag \end{align} where $\frac{\d}{\d t} f = \frac{\d}{\d t} [f(t y,t)]$ for $y \in \BB^d$. \end{thm} The identity \eqref{eq:intJacobiSolid} is proved in Subsection \ref{sect:self-adjointV} and it implies immediately the following corollary. \begin{cor} Let $d \ge 2$, $\mu > -\f12$ and $\g > -1$. Let $W = W_{\mu,\g}$. Then, for any polynomial $f$, $$ \left \| \varphi \partial_t f \right \|_{2,W}^2 + \sum_{j=1}^d \left \|\f{1}{\sqrt{t}} D_{x_j} f\right \|_{2,W}^2 + \sum_{1\le i< j \le d} \left \|\f{1}{\sqrt{t}} D_{i,j}^{(x)} f \right \|_{2,W}^2 \le \left \| f \right \|_{2,W} \left \| \fD_{\mu,\g} f \right \|_{2,W}. $$ In particular, the inequality \eqref{eq:BernsteinLB-V} for $\fD_{\mu,\g}$ with $r=2$ implies the inequality \eqref{eq:BI-V-1} for $\varphi \partial_t$ as well as \eqref{eq:BI-V-1C} for $D_{i,j}^{(x)}$ and \eqref{eq:BI-V-2B} for $D_{x_j}$, with $\ell =1$ and $p=2$, for $W = W_{\mu,\g}$. \end{cor} The corollary follows as in Corollary \ref{cor:B-V0_p=2}. The Bernstein inequalities for the first-order derivatives in Theorem \ref{thm:BI-V} imply \eqref{eq:BernsteinLB-V} when $r =2$. Together, they provide some assurance that the inequalities in Theorem \ref{thm:BI-V} are sharp.
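The decomposition \eqref{eq:D=Dij} of the Laplace-Beltrami operator into angular derivatives, which underlies the comparisons between the spectral Bernstein inequalities and the first-order ones, can be checked symbolically on harmonic polynomials: on a harmonic polynomial of degree $m$ in $\RR^3$, the sum $\sum_{i<j} D_{i,j}^2$ acts as multiplication by $-m(m+d-2)$ with $d=3$. The test polynomials below are our own choices; this is a hedged illustration.

```python
import sympy as sp

# Check that sum_{i<j} D_{i,j}^2 acts on a degree-m harmonic polynomial in R^3
# as multiplication by -m(m + 1), the spherical-harmonic eigenvalue for d = 3.
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
X = [x1, x2, x3]
def D(i, j, f):                                   # angular derivative D_{i,j}
    return X[i]*sp.diff(f, X[j]) - X[j]*sp.diff(f, X[i])
for m, Y in [(1, x1), (2, x1*x2), (3, x1*x2*x3)]:
    assert sum(sp.diff(Y, v, 2) for v in X) == 0  # Y is harmonic
    LY = sum(D(i, j, D(i, j, Y)) for i in range(3) for j in range(i + 1, 3))
    assert sp.expand(LY + m*(m + 1)*Y) == 0       # eigenvalue -m(m+1)
```

Since each $D_{i,j}$ preserves the space of harmonic polynomials of a fixed degree, the computation confirms the eigenvalue without ever restricting to the sphere.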
\subsection{Bernstein inequality on the triangle} For $d =1$, the cone becomes the triangle $\VV^2 = \{(x,t): 0 \le t \le 1, \, |x| \le t\}$ of $\RR^2$. Making an affine change of variable $(x,t) = (y_1-y_2, y_1 + y_2)$, the triangle $\VV^2$ becomes the standard triangle domain $$ \TT^2 = \{(y_1, y_2): y_1 \ge 0, \, y_2 \ge 0, \, y_1+ y_2 \le 1\}. $$ The triangle $\TT^2$ is symmetric under permutation of $\{y_1,y_2,1-y_1-y_2\}$ and it is customary to consider the derivatives (cf. \cite{BX, DX}) $$ \partial_1 = \partial_{y_1}, \quad \partial_2 = \partial_{y_2}, \quad \partial_3 = \partial_{y_2} - \partial_{y_1}. $$ We further define, for $y = (y_1,y_2) \in \TT^2$, $$ \phi_1(y) = \sqrt{y_1(1-y_1-y_2)}, \quad \phi_2(y) = \sqrt{y_2(1-y_1-y_2)}, \quad \phi_3(y) = \sqrt{y_1 y_2}. $$ Let $\|\cdot\|_{p,w}$ be the $L^p$ norm on the triangle $\TT^2$ for $1\le p < \infty$ and the uniform norm on $\TT^2$ if $p = \infty$. \begin{thm} Let $w$ be a doubling weight on $\TT^2$. Let $f \in \Pi_n^2$. Then, for $1 \le p < \infty$, $\ell \in \NN$ and $i =1,2,3$, \begin{equation} \label{eq:triangle} \left \| \partial_i^\ell f\right\| _{p,w} \le c n^{2\ell} \|f \|_{p,w}\quad \hbox{and}\quad \left \| \phi_i^\ell \partial_i^\ell f\right\| _{p,w} \le c n^\ell \|f\|_{p,w}. \end{equation} Furthermore, for $\ell =1,2$, \begin{align} \left \| \frac{1}{(1-y_2)^{\ell/2}} \phi_1^\ell \partial_{1}^\ell f \right \|_{p,w} \le c n^\ell \|f\|_{p,w}, \label{eq:tri1}\\ \left \| \frac{1}{(1-y_1)^{\ell/2}} \phi_2^\ell \partial_{2}^\ell f \right \|_{p,w} \le c n^\ell \|f\|_{p,w}, \label{eq:tri2}\\ \left \| \frac{1}{(y_1+y_2)^{\ell/2}} \phi_3^\ell \partial_{3}^\ell f \right \|_{p,w} \le c n^\ell \|f\|_{p,w}. \label{eq:tri3} \end{align} Moreover, these inequalities hold when the norm is replaced by the uniform norm.
\end{thm} \begin{proof} Under the change of variables $(x,t) = (y_1-y_2, y_1 + y_2)$ that maps $\TT^2$ onto $\VV^2$, we see that $$ \partial_x= \frac12 (\partial_{y_1} - \partial_{y_2}) = -\frac12 \partial_3 \quad\hbox{and} \quad \sqrt{t^2-|x|^2} = 2 \sqrt{y_1 y_2} = 2 \phi_3(y). $$ Hence, for $i = 3$, the two inequalities in \eqref{eq:triangle} follow from \eqref{eq:BI-V-2}, where $d=1$, and the inequality \eqref{eq:tri3} follows from \eqref{eq:BI-V-2B} in Theorem \ref{thm:BI-V}. The inequalities for $\partial_1$ and $\partial_2$ follow from those for $\partial_3$ by a change of variables that amounts to permuting the variables $(y_1,y_2,1-y_1-y_2)$. \end{proof} The inequalities in \eqref{eq:triangle} are known when $w$ is a constant weight (cf. \cite{BX, DT, T1}) or the Jacobi weight. They are accepted as a natural generalization of the Bernstein inequality \eqref{eq:Bernstein[0,1]phi} of one variable, since the left-hand side of the inequality in the uniform norm gives a pointwise inequality that reduces to \eqref{eq:Bernstein[0,1]phi} when restricted to the boundary of the triangle. The latter property, however, is satisfied by \eqref{eq:tri1}--\eqref{eq:tri3} as well. Moreover, the factors on the left-hand side remain bounded; for example, $1-y_1-y_2 \le 1 - y_2$ in \eqref{eq:tri1}. The inequalities \eqref{eq:tri1}--\eqref{eq:tri3} are surprising and, as far as we are aware, they are new. It is natural to ask whether the same phenomenon appears on other domains, such as polytopes \cite{T1,T2}. \section{Bernstein inequalities on the conic surface}\label{sec:OP-conicS} \setcounter{equation}{0} We work on the conic surface in this section. In the first subsection, we recall what is needed for our analysis on the domain, based on orthogonal polynomials and highly localized kernels. Several auxiliary inequalities are recalled or proved in the second subsection, to be used in the rest of the section.
The proofs of the main results are based on the estimates of the kernel functions, which are carried out in the third subsection for the derivatives in the $t$-variable and the fourth subsection for the angular derivatives. Finally, the proof of Theorem \ref{thm:self-adjointV} and its corollary is given in the fifth subsection. \subsection{Orthogonal polynomials and localized kernels}\label{sec:OPconicSurface} Let $\Pi(\VV_0^{d+1})$ denote the space of polynomials restricted on the conic surface $\VV_0^{d+1}$ and, for $n = 0,1,2,\ldots$, let $\Pi_n(\VV_0^{d+1})$ be the subspace of polynomials in $\Pi(\VV_0^{d+1})$ of total degree at most $n$. Since $\VV_0^{d+1}$ is a quadratic surface, it is known that $$ \dim \Pi_n(\VV_0^{d+1}) = \binom{n+d}{n}+\binom{n+d-1}{n-1}. $$ For $\b > - d$ and $\g > -1$, we define the weight function $\sw_{\b,\g}$ by $$ \sw_{\b,\g}(t) = t^\b (1-t)^\g, \qquad 0 \le t \le 1. $$ Orthogonal polynomials with respect to $ \sw_{\b,\g}$ on $\VV_0^{d+1}$ are studied in \cite{X20}. Let $$ \la f, g\ra_{\sw_{\b,\g}} =\bs_{\b,\g} \int_{\VV_0^{d+1}} f(x,t) g(x,t) \sw_{\b,\g}(t) \d \sm(x,t), $$ where $\d \sm$ denotes the Lebesgue measure on the conic surface; this is a well-defined inner product on $\Pi(\VV_0^{d+1})$. Let $\CV_n(\VV_0^{d+1},\sw_{\b,\g})$ be the space of orthogonal polynomials of degree $n$. Then $\dim \CV_0(\VV_0^{d+1},\sw_{\b,\g}) =1$ and $$ \dim \CV_n(\VV_0^{d+1},\sw_{\b,\g}) = \binom{n+d-1}{n}+\binom{n+d-2}{n-1},\quad n=1,2,3,\ldots. $$ Let $\CH_m(\sph)$ be the space of spherical harmonics of degree $m$ in $d$ variables. Let $\{Y_\ell^m: 1 \le \ell \le \dim \CH_m(\sph)\}$ denote an orthonormal basis of $\CH_m(\sph)$. Then the polynomials \begin{equation*} \sS_{m, \ell}^n (x,t) = P_{n-m}^{(2m + \b + d-1,\g)} (1-2t) Y_\ell^m (x), \quad 0 \le m \le n, \,\, 1 \le \ell \le \dim \CH_m(\sph), \end{equation*} form an orthogonal basis of $\CV_n(\VV_0^{d+1}, \sw_{\b,\g})$.
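The orthogonality of the basis $\sS^n_{m,\ell}$ can be checked symbolically in the simplest case $d=2$, $\b=-1$, $\g=0$: writing $x = t(\cos\t,\sin\t)$, the harmonics $Y^m_\ell(x)$ contribute the factor $t^m\cos m\t$ (or $t^m\sin m\t$), the surface measure contributes $t^{d-1}\,\d t\,\d\t = t\,\d t\,\d\t$, and the weight is $t^{-1}$. The following is a hedged spot check of a few small cases, not a proof.

```python
import sympy as sp

# Spot check of orthogonality of S^n_m(t, theta) = P_{n-m}^{(2m,0)}(1-2t) t^m cos(m theta)
# on the conic surface for d = 2, beta = -1, gamma = 0.
t, th = sp.symbols('t theta', positive=True)
def S(n, m):
    return sp.jacobi(n - m, 2*m, 0, 1 - 2*t) * t**m * sp.cos(m*th)
def ip(f, g):          # inner product: weight t^{-1}, surface measure t dt dtheta
    return sp.integrate(sp.integrate(f*g * t**(-1) * t, (th, 0, 2*sp.pi)), (t, 0, 1))
assert ip(S(2, 0), S(1, 0)) == 0       # different degrees, same m
assert ip(S(2, 1), S(1, 1)) == 0       # the Jacobi index 2m + beta + d - 1 = 2m
assert ip(S(2, 1), S(2, 2)) == 0       # same degree, different m
assert ip(S(1, 1), S(1, 1)) != 0       # nonzero norm
```

The $\t$-integral enforces orthogonality across different $m$, while the extra factor $t^{2m}$ from $Y^m(t\xi)=t^mY^m(\xi)$ explains the shifted Jacobi index $2m+\b+d-1$ in the $t$-part of the basis.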
Let $\Delta_{0,\g}$ be the differential operator defined in \eqref{eq:LB-operator}. It has orthogonal polynomials as eigenfunctions \cite[Theorem 7.2]{X20}. \begin{thm}\label{thm:Jacobi-DE-V0} Let $d\ge 2$ and $\g > -1$. The orthogonal polynomials in $\CV_n(\VV_0^{d+1}, \sw_{-1,\g})$ are eigenfunctions of $\Delta_{0,\g}$; more precisely, \begin{equation}\label{eq:eigen-eqn} \Delta_{0,\g} u = -n (n+\g+d-1) u, \qquad \forall u \in \CV_n(\VV_0^{d+1}, \sw_{-1,\g}). \end{equation} \end{thm} The reproducing kernel of the space $\CV_n(\VV_0^{d+1}, \sw_{\b,\g})$ is denoted by $\sP_n(\sw_{\b,\g};\cdot,\cdot)$, which can be written as $$ \sP_n\big(\sw_{\b,\g}; (x,t),(y,s) \big) = \sum_{m=0}^n \sum_{\ell=1}^{\dim \CH_m(\sph)} \frac{ \sS_{m, \ell}^n(x,t) \sS_{m, \ell}^n(y,s)}{\la \sS_{m, \ell}^n, \sS_{m, \ell}^{n} \ra_{\sw_{\b,\g}}}. $$ Let $\proj_n(\sw_{\b,\g}): L^2(\VV_0^{d+1},\sw_{\b,\g}) \to \CV_n(\VV_0^{d+1}, \sw_{\b,\g})$ be the orthogonal projection operator. It is an integral operator with $\sP_n\big(\sw_{\b,\g}; \cdot,\cdot \big)$ as its kernel, $$ \proj_n(\sw_{\b,\g};f) = \int_{\VV_0^{d+1}} f(y,s) \sP_n\big(\sw_{\b,\g}; \,\cdot, (y,s) \big) \sw_{\b,\g}(s) \d\sm(y,s). $$ The kernel $ \sP_n\big(\sw_{\b,\g}; \,\cdot, \cdot \big)$ satisfies a closed form formula, called the addition formula since it is akin to the classical addition formula for the spherical harmonics. The addition formula is given in terms of the Jacobi polynomial $P_n^{(\a,\b)}$, the orthogonal polynomial with respect to $w_{\a,\b}(t) = (1-t)^\a(1+t)^\b$ on $[-1,1]$. Let $$ Z_{n}^{(\a,\b)}(t) = \frac{P_n^{(\a,\b)}(t) P_n^{(\a,\b)}(1)}{h_n^{(\a,\b)}}, \quad \a,\b > -1, $$ where $h_n^{(\a,\b)}$ is the $L^2$ norm of $P_n^{(\a,\b)}$ in $L^2([-1,1],w_{\a,\b})$. The addition formula is of the simplest form when $\b = -1$. \begin{thm} \label{thm:sfPbCone2} Let $d \ge 2$ and $\g \ge -\f12$.
Then, for $(x,t), (y,s) \in \VV_0^{d+1}$, \begin{align} \label{eq:sfPbCone} \sP_n \big(\sw_{-1,\g}; (x,t), (y,s)\big) = b_{\g,d} \int_{[-1,1]^2} & Z_{n}^{(\g+d-\f32,-\f12)} \big( 2 \zeta (x,t,y,s; v)^2-1\big) \\ & \times (1-v_1^2)^{\f{d-4}{2}} (1-v_2^2)^{\g-\f12} \d v, \notag \end{align} where $b_{\g,d}$ is a constant so that $\sP_0\big(\sw_{-1,\g}; (x,t), (y,s)\big) =1$ and \begin{equation}\label{eq:zetaV0} \zeta (x,t,y,s; v) = v_1 \sqrt{\tfrac{st + \la x,y \ra}2}+ v_2 \sqrt{1-t}\sqrt{1-s}; \end{equation} moreover, the identity holds under limit when $\g = -\f12$ and/or $d = 2$. \end{thm} Our main tool for establishing polynomial inequalities is the highly localized kernel defined via a smooth cut-off function $\wh a \in C^\infty(\RR)$, which is a non-negative function and satisfies $\mathrm{supp}\, \wh a \subset [0, 2]$ and $\wh a(t) = 1$, $t\in [0, 1]$. The kernel is defined by \begin{equation} \label{def:Ln-gen} \sL_n (\sw_{-1,\g}; (x,t),(y,s)) = \sum_{k=0}^{\infty} \wh a \left(\frac{k}{n}\right) \sP_k(\sw_{-1,\g}; (x,t),(y,s)). \end{equation} Since $\wh a$ is supported on $[0,2]$, this is a kernel of polynomials of degree at most $2n$ in either the $x$ or the $y$ variable. Using the closed form of the reproducing kernel, we can write $\sL_n(\sw_{-1,\g})$ in terms of the kernel of the Jacobi polynomials defined by $$ L_n^{(\l,-\f12)} (t) = \sum_{k=0}^\infty\wh a \left(\frac{k}{n}\right) Z_k^{(\l,-\f12)}(t). $$ Indeed, it follows immediately from \eqref{eq:sfPbCone} that \begin{align}\label{eq:Ln-intV0} \sL_n (\sw_{-1,\g}; (x,t), (y,s) )= b_{\g,d} \int_{[-1,1]^2} & L_n ^{(\g+d-\f32,-\f12)}\big(2 \zeta (x,t,y,s; v)^2-1 \big)\\ & \times (1-v_1^2)^{\f{d-2}2-1}(1-v_2^2)^{\g-\f12} \d v. \notag \end{align} To show that this kernel is highly localized, we need the distance on the conic surface.
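Before turning to the distance function, we note that an admissible cut-off $\wh a$ is easy to construct explicitly. The following sketch (our own illustration; any function with the stated properties works) glues the constant $1$ on $[0,1]$ to $0$ beyond $2$ through the standard $e^{-1/x}$ bump, so that all derivatives match at the gluing points:

```python
import math

def _f(x):
    # e^{-1/x} for x > 0, extended by 0; all derivatives vanish at x = 0
    return math.exp(-1.0 / x) if x > 0 else 0.0

def smooth_step(x):
    # C^infinity transition: equals 0 for x <= 0 and 1 for x >= 1
    return _f(x) / (_f(x) + _f(1.0 - x))

def a_hat(t):
    # cut-off on [0, infinity): equals 1 on [0, 1], vanishes on [2, infinity)
    if t <= 1.0:
        return 1.0
    if t >= 2.0:
        return 0.0
    return smooth_step(2.0 - t)

# only the values a_hat(k/n), k >= 0, enter the kernel; they vanish for k >= 2n
n = 8
coeffs = [a_hat(k / n) for k in range(3 * n)]
assert coeffs[: n + 1] == [1.0] * (n + 1)     # a_hat = 1 on [0, 1]
assert all(c == 0.0 for c in coeffs[2 * n:])  # support contained in [0, 2]
```

This truncation is exactly why the sum defining $\sL_n$ is finite, producing a polynomial kernel of degree at most $2n$.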
For $(x,t)$ and $(y,s)$ on $\VV_0^{d+1}$, the distance function $\sd_{\VV_0}$ on $\VV_0^{d+1}$ is defined by \begin{equation}\label{eq:distV0} \sd_{\VV_0} ((x,t), (y,s)): = \arccos \left(\sqrt{\frac{\la x,y\ra + t s}{2}} + \sqrt{1-t}\sqrt{1-s}\right). \end{equation} For $r > 0$ and $(x,t)$ on $\VV_0^{d+1}$, we let $\sc((x,t), r)$ be the ball centered at $(x,t)$ with radius $r$ in terms of this distance function; that is, $$ \sc((x,t), r): = \left\{ (y,s) \in \VV_0^{d+1}: \sd_{\VV_0} \big((x,t),(y,s)\big)\le r \right\}. $$ Let $E$ be a subset of $\VV_0^{d+1}$ and let $\sm$ denote the Lebesgue measure on $\VV_0^{d+1}$. We define $$ \sw (E) = \int_E \sw(x,t) \d \sm(x,t). $$ A weight function $\sw$ is a doubling weight if there is a constant $L > 0$ such that $$ \sw\big(\sc((x,t), 2 r)\big) \le L \, \sw\big(\sc((x,t), r)\big), \quad r >0. $$ The least such constant $L$ is called the doubling constant, and the doubling index $\a(\sw)$ is the least index for which $\sup_{(x,t), r} \sw\big(\sc((x,t), 2^m r)\big) /\sw\big(\sc((x,t), r)\big) \le c_{L(\sw)} 2^{m \a (\sw)}$, $m=1,2,\ldots$. As an example, the weight $\sw_{\b,\g}$ is a doubling weight on $\VV_0^{d+1}$ \cite[Proposition 4.6]{X21} and, for $n = 1,2, \ldots$, $\sw_{-1,\g}\big(\sc((x,t), n^{-2})\big)\sim \sw_{\g,d} (n; t)$ with \begin{equation}\label{eq:w(n;t)} \sw_{\g,d} (n; t) = \big(1-t+n^{-2}\big)^{\g+\f12}\big(t+n^{-2}\big)^{\f{d-2}{2}}. \end{equation} It is shown in \cite[Theorem 4.10]{X21} that, for $d\ge 2$, $\g \ge -\f12$, and any $\k > 0$, the kernel $\sL_n (\sw_{-1,\g}; \cdot,\cdot)$ satisfies the estimate \begin{equation}\label{eq:L-localized} \left |\sL_n (\sw_{-1,\g}; (x,t), (y,s))\right| \le \frac{c_\k n^d}{\sqrt{ \sw_{\g,d} (n; t) }\sqrt{ \sw_{\g,d} (n; s) }} \big(1 + n \sd_{\VV_0}( (x,t), (y,s)) \big)^{-\k}. \end{equation} This shows, in particular, that the kernel decays away from $(x,t) = (y,s)$ faster than any polynomial rate.
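Two elementary facts about $\sd_{\VV_0}$ are worth noting: the argument of $\arccos$ in \eqref{eq:distV0} always lies in $[0,1]$ (by the Cauchy--Schwarz inequality applied to $(\sqrt{t},\sqrt{1-t})$ and $(\sqrt{s},\sqrt{1-s})$), so the distance is well defined, and $\sd_{\VV_0}$ dominates both $|\sqrt{t}-\sqrt{s}|$ and $|\sqrt{1-t}-\sqrt{1-s}|$. A small numerical sanity check (our own illustration, not part of the proofs), writing $x = t\xi$, $y = s\eta$ so that $\la x,y\ra = ts\la\xi,\eta\ra$:

```python
import math
import random

def dist_V0(t, s, cos_xe):
    # d_{V_0}((x,t),(y,s)) with x = t*xi, y = s*eta and cos_xe = <xi, eta>
    arg = math.sqrt(t * s * (1.0 + cos_xe) / 2.0) + math.sqrt((1.0 - t) * (1.0 - s))
    assert arg <= 1.0 + 1e-12          # arccos is well defined
    return math.acos(min(arg, 1.0))

rng = random.Random(0)
for _ in range(5000):
    t, s = rng.random(), rng.random()
    cos_xe = rng.uniform(-1.0, 1.0)    # any achievable value of <xi, eta>
    dd = dist_V0(t, s, cos_xe)
    # the two bounds on |sqrt(t)-sqrt(s)| and |sqrt(1-t)-sqrt(1-s)|
    assert abs(math.sqrt(t) - math.sqrt(s)) <= dd + 1e-10
    assert abs(math.sqrt(1 - t) - math.sqrt(1 - s)) <= dd + 1e-10
```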
The decaying estimate is established using the case $m = 0$ of the following lemma \cite[Theorem 2.6.7]{DaiX}. \begin{lem} Let $\ell$ be a positive integer and let $\eta$ be a function that satisfies $\eta\in C^{3\ell-1}(\RR)$, $\mathrm{supp}\, \eta \subset [0,2]$ and $\eta^{(j)} (0) = 0$ for $j = 0,1,2,\ldots, 3 \ell-2$. Then, for $\a \ge \b \ge -\f12$, $t \in [-1,1]$ and $n\in \NN$, \begin{equation} \label{eq:DLn(t,1)} \left| \frac{d^m}{dt^m} L_n^{(\a,\b)}(t) \right| \le c_{\ell,m,\a}\left\|\eta^{(3\ell-1)}\right\|_\infty \frac{n^{2 \a + 2m+2}}{(1+n\sqrt{1-t})^{\ell}}, \quad m=0,1,2,\ldots. \end{equation} \end{lem} The proof of the localization of the kernel requires further properties of the distance function. Let $\sd_{[0,1]}(\cdot,\cdot)$ be the distance function on the interval $[0,1]$ defined by $$ \sd_{[0,1]}(t, s) = \arccos \left(\sqrt{t}\sqrt{s} + \sqrt{1-t}\sqrt{1-s}\right), \qquad t, s \in [0,1] $$ and let $\sd_{\SS}(\cdot,\cdot)$ denote the geodesic distance on the unit sphere $\sph$ defined by $$ \sd_\SS (\xi,\eta) = \arccos \la \xi,\eta\ra, \qquad \xi, \eta \in \sph. $$ Then, for $(x,t), (y,s) \in \VV_0^{d+1}$ and setting $x= t \xi$ and $y = s \eta$, the distance $\sd_{\VV_0}(\cdot,\cdot)$ satisfies, as shown in \cite[Proposition 4.3]{X21}, \begin{equation} \label{eq:d2=d2+d2} c_1 \sd_{\VV_0} ((x,t), (y,s)) \le \sd_{[0,1]}(t,s) + (t s )^{\f14} \sd_{\SS}(\xi,\eta) \le c_2 \sd_{\VV_0} ((x,t), (y,s)). \end{equation} Another two inequalities that we shall need are \begin{equation} \label{eq:|s-t|} \big| \sqrt{t} - \sqrt{s} \big|\le \sd_{\VV_0} ((x,t), (y,s)) \quad \hbox{and} \quad \big| \sqrt{1-t} - \sqrt{1-s} \big| \le \sd_{\VV_0} ((x,t), (y,s)) \end{equation} for $(x,t), (y,s) \in \VV_0^{d+1}$, established in \cite[Lemma 4.4]{X21}. \subsection{Auxiliary inequalities} We state several auxiliary results in this subsection.
First of all, we will need the maximal function $f_{\b,n}^\ast$ defined by \begin{equation} \label{eq:fbn*} f_{\b,n}^\ast(x,t) = \max_{(y,s)\in \VV_0^{d+1}} \frac{|f(y,s)|} {\left(1+n \sd_{\VV_0}((x,t),(y,s)) \right)^\b}. \end{equation} This maximal function satisfies the following property \cite[Corollary 2.11]{X21}. \begin{prop}\label{prop:fbn-bound} If $ 0< p\leq \infty$, $ f\in\Pi_n(\VV_0^{d+1})$ and $\b > \a(\sw)/p$, then \begin{equation} \label{eq:fbn*bound} \|f\|_{p, \sw} \leq \|f_{\b,n}^\ast\|_{p,\sw} \leq c \|f\|_{p,\sw}, \end{equation} where $c$ depends also on $L(\sw)$ and $\b$ when $\b$ is either large or close to $\a(\sw)/p$. \end{prop} For our next result, we need the concept of a maximal $\ve$-separated set on the conic surface. However, the concept will be needed for the unit ball and the solid cone later in the paper, so we give its definition on a domain $\Omega$ equipped with a distance function $\sd(\cdot,\cdot)$. Let $B(x,\ve) = \{y \in \Omega:\sd(x,y) \le \ve\}$ be the ball centered at $x$ and with radius $\ve$ in $\Omega$. \begin{defn}\label{defn:separated-pts} Let $\Xi$ be a discrete set in $\Omega$. \begin{enumerate} [ \quad (a)] \item Let $\ve>0$. A discrete subset $\Xi$ of $\Omega$ is called $\ve$-separated if $\sd(x,y) \ge\ve$ for every two distinct points $x, y \in \Xi$. \item $\Xi$ is called maximal if there is a constant $c_d > 1$ such that \begin{equation*} 1 \le \sum_{z\in \Xi} \chi_{B(z, \ve)}(x) \le c_d, \qquad \forall x \in \Omega, \end{equation*} where $\chi_E$ denotes the characteristic function of the set $E$. \end{enumerate} \end{defn} A maximal $\ve$-separated subset is constructed in \cite[Proposition 4.17]{X21} using the separated sets in the $t$-variable on $[0,1]$ and in the $\xi$ variable on $\sph$, where $(x,t) = (t \xi, t) \in \VV_0^{d+1}$. We recall what is necessary for our purpose. Let $\ve > 0$ and let $N = \lfloor \frac{\pi}{2}\ve^{-1} \rfloor$. 
We define \begin{equation} \label{eq:tj-epj} t_j = \sin^2 \frac{(2j-1)\pi}{4 N} \quad \hbox{and}\quad \ve_j = \frac{\pi \ve} {2 \sqrt{t_j}}, \quad 1 \le j \le N. \end{equation} Let $\Xi_\SS(\ve_j)$ be the maximal $\ve_j$-separated set of $\sph$, so that there is a family of sets $\{\SS_\xi(\ve_j): \xi \in \Xi_\SS(\ve_j)\}$ which forms a partition $\sph = \bigcup_{\eta \in \Xi_\SS(\ve_j)} \SS_\eta(\ve_j)$. Then \begin{equation} \label{eq:separateV0} \Xi_{\VV_0} = \big\{(t_j \xi, t_j): \, \xi \in \Xi_\SS(\ve_j), \, 1\le j \le N \big\} \end{equation} defines a maximal $\ve$-separated subset of $\VV_0^{d+1}$. Such a set is used to establish the Marcinkiewicz-Zygmund inequality on the conic surface, which we need below. Our next result is of interest in itself. For polynomials of one variable, it is known \cite[(7.1.7)]{MT1} that $$ \int_{-1}^1 |f(t)|^p w(t) \d t \le c_\delta \int_{-1+ \delta n^{-2}}^{1-\delta n^{-2}} |f(t)|^p w(t) \d t, \quad \deg f \le n, $$ where $w$ is a doubling weight and $\delta$ is a positive constant. The following proposition is an analog of the above inequality on the conic surface. \begin{prop} \label{prop:Remz} Let $\sw$ be a doubling weight function on $\VV_0^{d+1}$. For $n \in \NN$, let $\chi_{n,\delta}(t)$ denote the characteristic function of the interval $[\f \delta{n^2}, 1- \f \delta {n^2}]$. Then, for $f \in \Pi_n(\VV_0^{d+1})$, $1 \le p < \infty$, and every $\delta > 0$, \begin{equation}\label{eq:Remz} \int_{\VV_0^{d+1}} |f(x,t)|^p \sw(x,t) \d \sm(x,t) \le c_\delta \int_{\VV_{0}^{d+1}} |f(x,t)|^p \chi_{n,\delta}(t) \sw(x,t) \d \sm(x,t). \end{equation} Moreover, when $p = \infty$, \begin{equation}\label{eq:Remz2} \|f\|_\infty \le c \|f \chi_{n,\delta}(t)\|_{\infty}. \end{equation} \end{prop} \begin{proof} The proof uses the Marcinkiewicz-Zygmund inequality on the conic surface. Let $\Xi_{\VV_0}$ be the maximal $\ve$-separated set in \eqref{eq:separateV0}.
For $\ve = \b n^{-1}$ with $\b > 0$, the Marcinkiewicz-Zygmund inequality states \cite[Theorem 4.18]{X21} that, for $1 \le p < \infty$, \begin{enumerate}[$(i)$] \item for $f\in\Pi_m(\VV_0^{d+1})$ with $n \le m \le c n$, \begin{equation*} \sum_{(z,r) \in \Xi_{\VV_0}} \Big( \max_{(x,t)\in \sc \big((z,r), n^{-1}\big)} |f(x,t)|^p \Big) \sw\!\left(\sc \big((z, r), n^{-1}\big) \right) \leq c_{\sw} \|f\|_{p,\sw}^p; \end{equation*} \item for $f \in \Pi_n(\VV_0^{d+1})$, \begin{align*} \|f\|_{p,\sw}^p \le c_{\sw} \sum_{(z,r) \in\Xi_{\VV_0}} \Big(\min_{(x,t)\in \sc\bigl((z,r), n^{-1}\bigr)} |f(x,t)|^p\Big) \sw\bigl(\sc\big((z,r),n^{-1}\big)\bigr). \end{align*} \end{enumerate} Clearly $t_1 = \sin^2 \frac{\pi}{4 N} \sim n^{-2}$ and $1- t_N \sim n^{-2}$, with constants depending on $\b$. Hence, for a fixed $\delta > 0$, by choosing $\b$ sufficiently large, we see that $f(z,r) = f(z,r) \chi_{n,\delta}(r)$ for all $(z,r) \in \Xi_{\VV_0}$. Consequently, it follows from (ii) and (i) that \begin{align*} \|f\|_{p,\sw}^p \, & \le c_{\sw} \sum_{(z,r) \in\Xi_{\VV_0}} |f(z,r) \chi_{n,\delta}(r) |^p \sw\bigl(\sc\big((z,r),n^{-1}\big)\bigr) \\ & \le c \int_{\VV_{0}^{d+1}} |f(x,t)|^p \chi_{n,\delta}(t) \sw(x,t) \d \sm(x,t) \end{align*} for all $f\in \Pi_n(\VV_0^{d+1})$, which is the desired inequality \eqref{eq:Remz}. For the uniform norm, $$ |f(x,t)| \le c \max_{(z,r) \in\Xi_{\VV_0}} |f(z,r)| $$ by \cite[Theorem 2.14]{X21}, from which the inequality \eqref{eq:Remz2} follows readily. \end{proof} \begin{rem}\label{rem:remark1} Setting $x = t\xi$, $\xi \in \sph$, shows that the space $\Pi_n(\VV_0^{d+1})$ is a subspace of $\Pi_n^*$, which contains all polynomials of degree $n$ in $t$ and in $\xi$. Examining the proof in \cite{X21} carefully, it is not difficult to see that the Marcinkiewicz-Zygmund inequalities in \cite[Theorem 4.18]{X21} hold for $\Pi_n^*$ and, as a consequence, so does \eqref{eq:Remz}.
\end{rem} Our next lemma is technical and reduces to \cite[Lemma 4.14]{X21} when $\rho =0$ and $\tau_1=\tau_2=0$. The general version is needed to handle the derivatives of the highly localized kernels. \begin{lem}\label{lem:intLn} Let $d\ge 2$ and $\g > -1$. Let $\rho$ satisfy $0 \le \rho \le (d-2)/2$, let $\tau_1$ satisfy $0 \le \tau_1 \le d/2$, and let $\tau_2$ satisfy $0 \le \tau_2 \le 1/2$. For $0 < p < \infty$, assume $\k > \frac{2d}{p} + (\g+\f{d-1}{2}) |\f1p-\f12|$. Then for $x = t \xi$ and $y = s \eta$ and $(x,t) \in \VV_0^{d+1}$, \begin{align}\label{eq:intLn1} \int_{\VV_0^{d+1}} \frac{ s^{-\tau_1}(1-s)^{-\tau_2} (1 + \la \xi, \eta \ra)^{-\rho} \sw_{-1,\g}(s) \d \sm(y,s) }{ \sw_{\g,d} (n; s)^{\f{p}2} \big(1 + n \sd_{\VV_0}( (x,t), (y,s)) \big)^{\k p}} \le c n^{-d +2 |\tau|} \sw_{\g,d} (n; t)^{1-\f{p}{2}}, \end{align} where $|\tau| = \tau_1 + \tau_2$. \end{lem} \begin{proof} For $\rho =0$ and $\tau_i =0$, this estimate is established in \cite[Lemma 4.14]{X21}. The proof follows the same argument; we give an account of the necessary modifications. The first step is to show that it is sufficient to consider only $p=2$ by using the doubling property of $\sw_{-1,\g}$, which remains valid when $\rho > 0$ and $\tau_i > 0$. Let $J_2$ denote the left-hand side of \eqref{eq:intLn1} with $p=2$.
Then, using \begin{equation} \label{eq:intSS} \int_{\sph} g(\la \xi,\eta\ra) \d\s(\eta) = \o_{d-1} \int_{-1}^1 g(u) (1-u^2)^{\f{d-3}{2}} \d u \end{equation} and $\t \sim 2 \sin\f{\t}2 = \sqrt{2}\sqrt{1-\cos \t}$ for $\t \in [0,\pi]$, we deduce that \begin{align*} J_{2} \, &\le c \int_0^1 \int_{-1}^1 \frac{ s^{d-1-\tau_1-\rho} (1-s)^{-\tau_2} (1+u)^{-\rho}(1-u^2)^{\f{d-3}{2}}\sw_{-1,\g}(s)} {\sw_{\g,d} (n; s) \big(1 + n \arccos \big( \sqrt{ts} \sqrt{\frac{1+u}{2}} + \sqrt{1-t}\sqrt{1-s} \big)\big)^{2\k} } \d u \d s \\ & \le c \int_0^1 \int_{0}^1 \frac{s^{d-1-\tau_1-\rho} (1-s)^{-\tau_2} v^{d-2-2\rho}(1-v^2)^{\f{d-3}{2}}\sw_{-1,\g}(s) } {\sw_{\g,d} (n; s) \left(1 + n \sqrt{1- \sqrt{ts} v - \sqrt{1-t}\sqrt{1-s}} \right)^{2\k}} \d v \d s, \end{align*} where the second step follows from the change of variables $ \sqrt{\frac{1+u}{2}} \mapsto v$. Making a further change of variables $v \mapsto z/\sqrt{s}$ gives \begin{align*} J_{2} \,& \le c \int_0^1 \int_{0}^{\sqrt{s}} \frac{s^{1-\tau_1+\rho} (1-s)^{-\tau_2}\, z^{d-2-2\rho}(s- z^2)^{\f{d-3}{2}} \sw_{-1,\g}(s) }{\sw_{\g,d} (n; s) \left(1 + n \sqrt{1- \sqrt{t}\,z - \sqrt{1-t}\sqrt{1-s}} \right)^{2\k} } \d z \d s. \end{align*} We now use $s^{1-\tau_1+\rho} z^{d-2 -2 \rho} \le s^{\frac{d}{2} - \tau_1} \le (s+n^{-2})^{\frac{d}{2}-\tau_1} \le n^{2 \tau_1}(s+n^{-2})^{\frac{d}{2}}$, which holds for $\rho \le (d-2)/2$ and $\tau_1 \le d/2$, and the formulas for $\sw_{-1,\g}(s)$ and $\sw_{\g,d} (n; s)$ to deduce \begin{align*} J_2 \le c n^{2 \tau_1} \int_0^1 \int_{0}^{\sqrt{s}} \frac{ (1-s)^{-\tau_2}(s- z^2)^{\f{d-3}{2}} }{ (1-s+n^{-2})^{\f12} \left(1 + n \sqrt{1- \sqrt{t}\, z - \sqrt{1-t}\sqrt{1-s}} \right)^{2\k} } \d z \d s.
\end{align*} We make one more change of variables $s\mapsto 1-w^2$, so that $1-s = w^2$, to obtain \begin{align*} J_{2} \, & \le c n^{2 \tau_1} \int_0^1 \int_{0}^{\sqrt{1-w^2}} \frac{w^{1-2\tau_2} (1-w^2- z^2)^{\f{d-3}{2}} }{ \left(w^2+n^{-2}\right)^{\f12} \left(1 + n \sqrt{1- \sqrt{t} z - \sqrt{1-t} \, w} \right)^{2\k}} \d z \d w \\ \, & \le c n^{2 \tau_1+2\tau_2} \int_0^1 \int_{0}^{\sqrt{1-w^2}} \frac{(1-w^2- z^2)^{\f{d-3}{2}} } {\left(1 + n \sqrt{1- \sqrt{t} z - \sqrt{1-t} \, w} \right)^{2\k}} \d z \d w, \end{align*} where the second inequality follows from $w^{1-2\tau_2} \le (w^2+n^{-2})^{\f12 - \tau_2} \le n^{2\tau_2}(w^2+n^{-2})^{\f12}$, which holds for $\tau_2 \le \f12$. The last integral on the right-hand side has already appeared and been estimated in the proof of \cite[Lemma 4.14]{X21}, from which the desired estimate follows. \end{proof} Finally we recall the following lemma \cite[Lemma 4.11]{X21}, which plays an important role in establishing the localization of the kernel $\sL_n(\sw_{-1,\g})$. \begin{lem} \label{lem:kernelV0} Let $d \ge 2$ and $\g > -\f12$. Then, for $\b \ge 2\g+ d+1$, \begin{align*} \int_{[-1,1]^2} & \frac{(1-v_1)^{\f{d-2}2-1}(1-v_2)^{\g-\f12}} {\big(1+n\sqrt{1- \zeta (x,t,y,s; v)}\,\big)^{\b}} \d v\\ & \qquad \le \frac{cn^{- (2\g+d-1)}} {\sqrt{\sw_{\g,d}(n; t)}\sqrt{\sw_{\g,d}(n; s)}\big(1+n \sd_{\VV_0}((x,t),(y,s))\big)^{\b - 3\g- \frac{3d+1}{2}}} . \end{align*} \end{lem} \subsection{Bernstein inequality for $\partial_t$ on the conic surface}\label{sec:BernsteinV0} To prove the Bernstein inequality for the first-order derivatives, we need to understand the action of these derivatives on the highly localized kernels. We start with the case $\partial_t$. The key ingredient lies in the estimates below. \begin{lem} \label{lem:D2kernelV0} Let $d\ge 2$ and $\g \ge -\f12$.
Then for any $\k > 0$, \begin{equation*} \left | \partial_t \sL_n (\sw_{-1,\g}; (x,t), (y,s))\right| \le \frac{c_\k n^{d+1}}{ \varphi(t) \sqrt{ \sw_{\g,d} (n; t) }\sqrt{ \sw_{\g,d} (n; s) } \big(1 + n \sd_{\VV_0}( (x,t), (y,s)) \big)^{\k}}. \end{equation*} \end{lem} \begin{proof} Taking the derivative of the integral expression \eqref{eq:Ln-intV0} of the kernel and writing $\zeta(v)$ in place of $\zeta (x,t,y,s; v)$, we obtain, with $\l = \g+ d-1$, \begin{align}\label{eq:DLn-intV0} \partial_t \sL_n (\sw_{-1,\g}; (x,t), (y,s) ) & = c_{\g} \int_{[-1,1]^2} \partial L_n ^{(\l-\f12,-\f12)}\big(2 \zeta (v)^2-1 \big)\\ & \times 4 \zeta (v) \partial_t \zeta (v) (1-v_1^2)^{\f{d-2}2-1}(1-v_2^2)^{\g-\f12} \d v, \notag \end{align} where $\partial L_n ^{(\l-\f12,-\f12)}(z) = \frac{d}{d z} L_n ^{(\l-\f12,-\f12)}(z)$. We first prove the following estimate \begin{align}\label{eq:Dt-zeta} \left|2 \varphi(t) \partial_t \zeta (x,t,y,s; v)\right| \le \Sigma_1 + \Sigma_2(v_1) + \Sigma_3(v_2), \end{align} where $$ \Sigma_1= \sd_{\VV_0} \big((x,t),(y,s)\big),\quad \Sigma_2(v_1) = (1 -v_1) \sqrt{s}, \quad \Sigma_3(v_2)= (1-v_2) \sqrt{1-s}. $$ Recall $\zeta (x,t,y,s; v) = v_1 \sqrt{\tfrac{st + \la x,y \ra}2}+ v_2 \sqrt{1-t}\sqrt{1-s}$ and $|\zeta (x,t,y,s; v)|\le 1$. For simplicity, we write $\Psi = \sqrt{\frac{1+\la \xi,\eta \ra}{2}}$. Since $\la \xi,\eta \ra = \cos \sd_\SS(\xi,\eta)$ in terms of the geodesic distance on $\sph$, it follows that $\Psi = \cos \frac{\sd_{\SS}(\xi,\eta)}{2}$. Taking the derivative of $\zeta$, \begin{align*} \partial_t \zeta (x,t,y,s; v) = v_1\frac{\sqrt{s}}{2 \sqrt{t}} \, \Psi - v_2 \frac{\sqrt{1-s}}{2 \sqrt{1-t}}, \end{align*} and writing $v_i = 1 - (1-v_i)$, it follows readily that $$ |2 \varphi(t) \partial_t \zeta (x,t,y,s; v)| \le (1 - v_1)\sqrt{s} + (1- v_2) \sqrt{1-s}+ \left| \sqrt{1-t} \sqrt{s}\,\Psi - \sqrt{t} \sqrt{1-s}\right|.
$$ If $t \ge s$, we write the last term as $$ \sqrt{1-t} \sqrt{s}\,\Psi - \sqrt{t} \sqrt{1-s} = - \sqrt{1-t} \sqrt{s} (1- \Psi) + \sqrt{1-t}\sqrt{s} - \sqrt{t} \sqrt{1-s}, $$ whereas if $t \le s$, we write $$ \sqrt{1-t} \sqrt{s}\,\Psi - \sqrt{t} \sqrt{1-s} = \left(\sqrt{1-t}\sqrt{s} - \sqrt{t}\sqrt{1-s}\right)\Psi- \sqrt{t} \sqrt{1-s} (1- \Psi). $$ From these identities, we deduce the estimate $$ \left|\sqrt{1-t} \sqrt{s}\,\Psi - \sqrt{t} \sqrt{1-s}\right| \le \left| \sqrt{1-t}\sqrt{s} - \sqrt{t}\sqrt{1-s}\right| + (t s)^{\f14} (1-\Psi). $$ Since $\Psi = \cos \frac{\sd_{\SS}(\xi,\eta)}{2}$, we obtain an upper bound $1 - \Psi \le \f12 \sd_{\SS}(\xi,\eta)$, which implies by \eqref{eq:d2=d2+d2} that $(t s)^{\f14} \sd_{\SS}(\xi,\eta) \le c\,\sd_{\VV_0} \big( (x,t), (y,s)\big)$. Furthermore, we also have \begin{align*} | \sqrt{1-t} \sqrt{s} - \sqrt{t} \sqrt{1-s} | \, & \le |\sqrt{1-t}-\sqrt{1-s}| \sqrt{s} + |\sqrt{s}-\sqrt{t}| \sqrt{1-s} \\ & \le 2 \sd_{\VV_0}\big( (x,t),(y,s)\big), \end{align*} using \eqref{eq:|s-t|}. Putting these inequalities together, we have proved \eqref{eq:Dt-zeta}. Using this estimate, we can then use \eqref{eq:DLn(t,1)} in \eqref{eq:DLn-intV0} to obtain for a positive number $\b$, \begin{align*} & \left | \varphi(t) \partial_t \sL_n \big(\sw_{-1,\g}; (x,t), (y,s)\big) \right | \le \, c \int_{[-1,1]^2} \frac{ n^{2\l+3}} {\left(1+n \sqrt{1- \zeta(x,t,y,s;v)}\right)^\beta} \\ & \qquad\qquad\qquad \times \big( \Sigma_1 + \Sigma_2(v_1) + \Sigma_3(v_2) \big) (1-v_1^2)^{\f{d-2}2-1}(1-v_2^2)^{\g-\f12} \d v. \notag \end{align*} The integral with the $\Sigma_1$ term is bounded by applying Lemma \ref{lem:kernelV0} and choosing $\beta$ appropriately. The integral with the $\Sigma_2(v_1)$ term is bounded by applying the same lemma but with $(1-v_1)^{\frac{d-2}{2} -1}$ replaced by $(1-v_1)^{\f{d}{2}-1}$ and then using $n^{-1} \sqrt{s} \le (\sqrt{t} + n^{-1}) (\sqrt{s} + n^{-1})$, whereas the integral with the $\Sigma_3(v_2)$ term can be handled similarly.
In fact, the estimates of the last two integrals already appeared in the proof of \cite[Theorem 4.13]{X21}. This completes the proof. \end{proof} We are now ready to prove the Bernstein inequality for $\partial_t$ on the conic surface. \begin{thm} \label{thm:Bernstein} Let $\sw$ be a doubling weight on $\VV_0^{d+1}$. Let $\ell$ be a positive integer and $1 \le p < \infty$. Then \begin{equation}\label{eq:BernV0-1} \|\partial_t^\ell f\|_{p,\sw} \le c n^{2\ell} \|f\|_{p,\sw}, \quad \forall f\in \Pi_n(\VV_0^{d+1}), \end{equation} and, with $\varphi(t) = \sqrt{t(1-t)}$, \begin{equation}\label{eq:BernV0-2} \left \| \varphi^\ell \partial_t^\ell f \right \|_{p,\sw} \le c n^\ell \|f\|_{p,\sw}, \quad \forall f\in \Pi_n(\VV_0^{d+1}). \end{equation} \end{thm} \begin{proof} By the definition of $ \sL_n\left(\sw_{-1,\g}; \cdot,\cdot\right)$, every $f\in \Pi_n(\VV_0^{d+1})$ satisfies \begin{equation}\label{eq:reprodLn} f(x,t) = \int_{\VV_0^{d+1} } f(y,s) \sL_n\left(\sw_{-1,\g}; (x,t), (y,s)\right) \sw_{-1,\g}(s) \d \sm(y,s). \end{equation} Taking the $\partial_t$ derivative and using the maximal function $f_{\b,n}^\ast$ in \eqref{eq:fbn*}, we obtain \begin{align*} \left |\partial_t f(x,t) \right| \le c f_{\b,n}^\ast(x,t) \int_{\VV_0^{d+1}} \frac{ \left| \partial_t \sL_n\left(\sw_{-1,\g}; (x,t), (y,s)\right) \right| } {\big(1+n \sd_{\VV_0} ((x,t),(y,s)) \big)^\b} \sw_{-1,\g}(s) \d \sm (y,s). \end{align*} Using the estimate in Lemma \ref{lem:D2kernelV0} and choosing $\b= 2 \a(\sw)/p$ and $\k > \b + 2 d + \frac12 (\g+\f{d-2}{2})$, we see that the integral in the right-hand side is bounded by $$ \int_{\VV_0^{d+1} } \frac{n^{d+1} \sw_{-1,\g}(s) \d \sm(y,s)} {\varphi(t) \sqrt{ \sw_{\g,d} (n; t) }\sqrt{ \sw_{\g,d} (n; s) }\big(1 + n \sd_{\VV_0}( (x,t), (y,s)) \big)^{\k - \b}} \le c \frac{n}{\varphi(t)} $$ where the last step follows from \eqref{eq:intLn1} with $p =1$ and $\rho = 0$.
Consequently, we obtain \begin{align}\label{eq:partial_t} \left| \partial_t f(x,t) \right| \le c \frac{n}{\varphi(t)} f_{\b,n}^\ast(x,t). \end{align} In particular, it follows by Proposition \ref{prop:Remz} and $\varphi(t) \ge c n^{-1}$ for $\delta n^{-2} \le t \le 1- \delta n^{-2}$ that \begin{align*} \|\partial_t f\|_{p,\sw} \le c \|\partial_t f \chi_{n,\delta}(t)\|_{p,\sw} \le c n^2 \|f_{\b,n}^\ast \|_{p,\sw} \le c n^2 \|f\|_{p,\sw}, \end{align*} where the last step follows from \eqref{eq:fbn*bound}, which proves the inequality \eqref{eq:BernV0-1} for $\ell =1$. Iterating the inequality for $\ell =1$ proves \eqref{eq:BernV0-1} for $\ell > 1$. In the case of $p=\infty$, we use $\|f\|_\infty$ instead of $f_{\b,n}^*$ in \eqref{eq:partial_t}; the resulting estimate immediately implies \eqref{eq:BernV0-2} and, with the help of \eqref{eq:Remz2}, also \eqref{eq:BernV0-1}. Since $\left| \varphi(t) \partial_t f(x,t) \right| \le c n f_{\b,n}^\ast(x,t)$ by \eqref{eq:partial_t}, we obtain immediately, by \eqref{eq:fbn*bound}, that $\left \|\varphi \partial_t f \right \|_{p,\sw} \le c n \|f\|_{p,\sw}$, which proves \eqref{eq:BernV0-2} for $\ell =1$. Moreover, since $$ \left( \varphi(t) \partial_t \right)^2 = \tfrac12 (1-2t) \partial_t + \varphi(t)^2 \partial^2_t, $$ we obtain immediately that $$ \left \|\varphi^2 \partial_t^2 f \right \|_{p,\sw} \le \|\partial_t f\|_{p,\sw} + \|\left( \varphi(t) \partial_t \right)^2 f\|_{p,\sw} \le c n^2 \|f\|_{p,\sw}, $$ where we have applied the inequality \eqref{eq:BernV0-1} with $\ell =1$ and the inequality \eqref{eq:BernV0-2} with $\ell =1$ twice in the second step. This proves the inequality \eqref{eq:BernV0-2} for $\ell =2$. Since $\varphi^2 \partial_t^2 f$ is a polynomial in $\Pi_n(\VV_0^{d+1})$, we can iterate the inequality \eqref{eq:BernV0-2} with $\ell = 2$ to establish the inequality for all even $\ell$ and, similarly, establish the inequality for odd $\ell$ after applying the inequality with $\ell =1$ once. This completes the proof.
\end{proof} \subsection{Bernstein inequality for $D_{i,j}$ on the conic surface}\label{sec:BernsteinV0B} We start with the estimate of the derivative of the localized kernel. \begin{lem} \label{lem:DkernelV0} Let $d\ge 2$ and $\g \ge -\f12$. Let $D_{i,j}$ be the operator acting on the $x$ variable. Then for any $\k > 0$ and $(x,t), (y,s) \in \VV_0^{d+1}$, \begin{equation*}\left | D_{i,j} \sL_n (\sw_{-1,\g}; (x,t), (y,s))\right| \le \frac{c_\k n^{d+1} \sqrt{t}}{\sqrt{ \sw_{\g,d} (n; t) }\sqrt{ \sw_{\g,d} (n; s) }\big(1 + n \sd_{\VV_0}( (x,t), (y,s)) \big)^{\k}} \end{equation*} for $1 \le i< j \le d$ and, furthermore, \begin{equation*}\left | D_{i,j}^2 \sL_n (\sw_{-1,\g}; (x,t), (y,s))\right| \le \frac{c_\k n^{d+2} \big (t+ \sqrt{t}\sqrt{s}(1+ \la \xi,\eta\ra)^{-\f12}\big)}{\sqrt{ \sw_{\g,d} (n; t) }\sqrt{ \sw_{\g,d} (n; s) } \big(1 + n \sd_{\VV_0}( (x,t), (y,s)) \big)^{\k}}. \end{equation*} \end{lem} \begin{proof} The identity \eqref{eq:DLn-intV0} holds with $\partial_t$ replaced by $D_{i,j}$. First, we need an estimate of $|D_{i,j} \zeta (x,t,y,s; v)|$. If $\xi, \eta \in \sph$, then $2 \sqrt{\frac{1+\la \xi,\eta \ra}{2}} = \|\xi+\eta\|$. Hence, for $x = t\xi$ and $y = s \eta$, $$ D_{i,j} \zeta (x,t,y,s; v) = v_1 \frac{x_i y_j - x_j y_i} {4\sqrt{\tfrac{t s + \la x, y\ra}2}} = v_1\sqrt{t}\sqrt{s}\,\frac{\xi_i\eta_j - \xi_j \eta_i} {2 \|\xi + \eta\|}. $$ Now, using the identity \begin{align*} 2 (\xi_i \eta_j - \xi_j \eta_i) = (\xi_i - \eta_i ) (\xi_j + \eta_j ) - (\xi_j - \eta_j ) (\xi_i + \eta_i ) \end{align*} and $\|\xi - \eta\| \le \sd_{\SS}(\xi,\eta)$ for $\xi,\eta \in \sph$, it follows immediately that $$ | \xi_i \eta_j - \xi_j \eta_i | \le \|\xi- \eta\|\, \| \xi + \eta \| \le \|\xi+ \eta\|\,\sd_{\SS}(\xi,\eta). $$ Putting these together, we obtain the inequality \begin{align*} | D_{i,j} \zeta (x,t,y,s; v)| \le |v_1| \sqrt{t}\sqrt{s} \, \sd_{\SS} (\xi,\eta) \le \sqrt{t}\sqrt{s} \, \sd_{\SS} (\xi,\eta).
\end{align*} If $s \le t$, then $\sqrt{s} \, \sd_{\SS} (\xi,\eta) \le (t s)^\f14 \sd_{\SS} (\xi,\eta) \le c\, \sd_{\VV_0}((x,t), (y,s))$ by \eqref{eq:d2=d2+d2}. If $s \ge t$, then $\sqrt{s} \le (t s)^{\f14} + \sd_{\VV_0}((x,t), (y,s))$ by \eqref{eq:|s-t|}, which implies, by \eqref{eq:d2=d2+d2}, $$ \sqrt{s} \,\sd_{\SS} (\xi,\eta) \le (t s)^{\f14} \sd_{\SS} (\xi,\eta) + \pi \sd_{\VV_0}((x,t), (y,s)) \le c \, \sd_{\VV_0}((x,t), (y,s)). $$ Consequently, we have proved the inequality \begin{align}\label{eq:Dijz-bd} | D_{i,j} \zeta (x,t,y,s; v)| \le c \sqrt{t} \sd_{\VV_0}((x,t), (y,s)). \end{align} Using this inequality and $|\zeta (x,t,y,s;v)| \le 1$, we then use \eqref{eq:DLn(t,1)} in \eqref{eq:DLn-intV0} to obtain \begin{align*} \left | D_{i,j} \sL_n \big(\sw_{-1,\g}; (x,t), (y,s)\big) \right | \le \, & c \int_{[-1,1]^2} \frac{ n^{2\l+3} \sqrt{t} \sd_{\VV_0}\big((x,t),(y,s)\big)} {\left(1+n \sqrt{1- \zeta(x,t,y,s;v)}\right)^\k} \\ & \times (1-v_1^2)^{\f{d-2}2-1}(1-v_2^2)^{\g-\f12} \d v \notag \end{align*} from which the desired estimate follows from Lemma \ref{lem:kernelV0}. Now, taking the derivative $D_{i,j}$ one more time, we obtain \begin{align*} &D_{i,j}^2 \sL_n (\sw_{-1,\g}; (x,t), (y,s) ) \\ = \, & c_{\g} \int_{[-1,1]^2} \partial^2 L_n ^{(\l-\f12,-\f12)}\big(2 \zeta (v)^2-1 \big) \left(4 \zeta (v) D_{i,j} \zeta (v) \right)^2 (1-v_1^2)^{\f{d-2}2-1}(1-v_2^2)^{\g-\f12} \d v \\ + \,& c_{\g} \int_{[-1,1]^2} \partial L_n ^{(\l-\f12,-\f12)}\big(2 \zeta (v)^2-1 \big) 4 D_{i,j} \left[ \zeta (v) D_{i,j} \zeta (v) \right](1-v_1^2)^{\f{d-2}2-1}(1-v_2^2)^{\g-\f12} \d v. 
\end{align*} Applying \eqref{eq:DLn(t,1)} with $m=2$ and \eqref{eq:Dijz-bd}, it is easy to see that the first integral in the right-hand side is bounded by \begin{align*} c \int_{[-1,1]^2} \frac{ n^{2\l+5} t \sd_{\VV_0}\big((x,t),(y,s)\big)^2} {\left(1+n \sqrt{1- \zeta(x,t,y,s;v)}\right)^\k} (1-v_1^2)^{\f{d-2}2-1}(1-v_2^2)^{\g-\f12} \d v, \end{align*} which can be estimated as in the case of $D_{i,j} \sL_n$ to give the desired upper bound. Moreover, the second integral is a sum of two integrals according to $$ D_{i,j} \left[ \zeta (v) D_{i,j} \zeta (v) \right] = \left[D_{i,j} \zeta (v) \right]^2 + \zeta(v) D_{i,j}^2\zeta(v); $$ the first of these, by \eqref{eq:Dijz-bd}, is dominated by the first integral, while the second can be bounded using the identity $$ D_{i,j}^2 \zeta(x,t,y,s;v) = - v_1 \sqrt{t}\sqrt{s} \left( \frac{\xi_i \eta_i + \xi_j \eta_j}{2 \|\xi+\eta\|} + \frac{ (\xi_i \eta_j- \xi_j \eta_i)^2}{2 \|\xi+\eta\|^3}\right), $$ which implies immediately that $$ |D_{i,j}^2 \zeta(x,t,y,s;v)| \le \sqrt{t}\sqrt{s} \frac{1+\sd_{\SS}(\xi,\eta)^2}{2 \|\xi+\eta\|} \le c \frac{\sqrt{t}\sqrt{s}}{\sqrt{1+\la \xi,\eta\ra}} $$ and leads to the upper bound $$ c \frac{\sqrt{t}\sqrt{s}}{\sqrt{1+\la \xi,\eta\ra}} \int_{[-1,1]^2} \frac{ n^{2\l+3}} {\left(1+n \sqrt{1- \zeta(x,t,y,s;v)}\right)^\k} (1-v_1^2)^{\f{d-2}2-1}(1-v_2^2)^{\g-\f12} \d v $$ which is again seen to be bounded by the desired upper bound. This completes the proof. \end{proof} \begin{thm} \label{thm:Bernstein2} Let $\sw$ be a doubling weight on $\VV_0^{d+1}$. If $f\in \Pi_n(\VV_0^{d+1})$ and $1\le p < \infty$, then for $1 \le i,j \le d$, \begin{equation}\label{eq:BernV0-3} \left \| \frac{1}{\sqrt{t}} D_{i,j} f \right \|_{p,\sw} \le c n \|f\|_{p,\sw} \quad \hbox{and}\quad \left \| \frac{1}{t} D_{i,j}^2 f \right \|_{p,\sw} \le c n^2 \|f\|_{p,\sw}. \end{equation} \end{thm} \begin{proof} For $f\in \Pi_n(\VV_0^{d+1})$, we start with the identity \eqref{eq:reprodLn}.
Taking the $D_{i,j}$ derivative and using the maximal function $f_{\b,n}^\ast$ in \eqref{eq:fbn*}, we obtain \begin{equation} \label{eq:Dijf-bd} \left| D_{i,j}^\ell f(x,t) \right| \le f_{\b,n}^\ast(x,t) \int_{\VV_0^{d+1} } \frac{ \left| D_{i,j}^\ell \sL_n(\sw_{-1,\g}; (x,t),(y,s)) \right| }{\big(1+n\sd_{\VV_0}((x,t),(y,s))\big)^\b} \sw_{-1,\g}(s) \d \sm (y,s). \end{equation} Using the estimate in Lemma \ref{lem:DkernelV0} and choosing $\b= 2 \a(\sw)/p$ and $\k > \b + 2 d + \frac12 (\g+\f{d-2}{2})$, it follows that, for $\ell =1$, the integral in the right-hand side is bounded by $$ c \int_{\VV_0^{d+1} } \frac{n^{d+1} \sqrt{t} \sw_{-1,\g}(s) \d \sm (y,s)} {\sqrt{ \sw_{\g,d} (n; t) }\sqrt{ \sw_{\g,d} (n; s) }\big(1 + n \sd_{\VV_0}( (x,t), (y,s)) \big)^{\k-\b}} \le c n \sqrt{t}, $$ where the last step follows from \eqref{eq:intLn1} with $p =1$ and $\rho = 0$. Consequently, we obtain \begin{align*} \left| \frac{1}{\sqrt{t}}D_{i,j} f(x,t) \right| \le c n f_{\b,n}^\ast(x,t). \end{align*} For $1 \le p < \infty$, we can then apply \eqref{eq:fbn*bound} for $\VV_0^{d+1}$ to conclude that $$ \left\| \frac{1}{\sqrt{t}}D_{i,j} f\right\|_{p,\sw} \le c n \| f_{\b,n}^\ast\|_{p,\sw} \le c n \| f \|_{p,\sw}. $$ This proves \eqref{eq:BernV0-3} for $D_{i,j}$. For $D_{i,j}^2$, we first observe that, using $$ D_{i,j}^2 = - x_i \partial_i - x_j \partial_j + x_i D_{i,j} \partial_j - x_j D_{i,j} \partial_i, $$ the function $t^{-1} D_{i,j}^2 f(t\xi, t)$ is a polynomial in the variables $t$ and $\xi$. Hence, by Proposition \ref{prop:Remz} and Remark \ref{rem:remark1}, there is a $\delta> 0$ such that, with $\chi_{n,\d}(t) = \chi_{[\delta n^{-2}, 1- \delta n^{-2}]}(t)$, $$ \left \| \frac{1}{t} D_{i,j}^2 f \right \|_{p,\sw} \le c_\delta \left \| \frac{1}{t} D_{i,j}^2 f \cdot \chi_{n,\delta} \right \|_{p,\sw}.
$$ Using $\sqrt{s} \le \sqrt{t} + \sd_{\VV_0}((x,t),(y,s))$ and the estimate in Lemma \ref{lem:DkernelV0}, the integral in the right-hand side of \eqref{eq:Dijf-bd} with $\ell =2$ is bounded by \begin{align*} & c_\k t \int_{\VV_0^{d+1} } \frac{n^{d+2}(1+\la \xi,\eta\ra)^{-\f12} \sw_{-1,\g}(s) \d \sm (y,s)} {\sqrt{ \sw_{\g,d} (n; t) }\sqrt{ \sw_{\g,d} (n; s) }\big(1 + n \sd_{\VV_0}( (x,t), (y,s)) \big)^{\k-\b}} \\ & + c_\k t \int_{\VV_0^{d+1} } \frac{n^{d+1} (1+\la \xi,\eta\ra)^{-\f12} \sw_{-1,\g}(s) \d \sm (y,s)} {\sqrt{t} \sqrt{ \sw_{\g,d} (n; t) }\sqrt{ \sw_{\g,d} (n; s) }\big(1 + n \sd_{\VV_0}( (x,t), (y,s)) \big)^{\k-\b-1}}, \end{align*} which, for $t \in [\f{\delta}{n}, 1- \f{\delta}{n}]$ so that $1/\sqrt{t} \le n$, is further bounded by $$ c_\k t \int_{\VV_0^{d+1} } \frac{n^{d+2} (1+\la \xi,\eta\ra)^{-\f12} \sw_{-1,\g}(s) \d \sm (y,s)} {\sqrt{t} \sqrt{ \sw_{\g,d} (n; t) }\sqrt{ \sw_{\g,d} (n; s) }\big(1 + n \sd_{\VV_0}( (x,t), (y,s)) \big)^{\k-\b-1}} \le c_\k t\, n^2, $$ by applying \eqref{eq:intLn1} with $p =1$ and $\rho = \f12$. Thus, we obtain $$ \left| \frac{1}{t}D_{i,j}^2 f(x,t) \right| \chi_{n,\delta}(t) \le c n^2 f_{\b,n}^\ast(x,t), $$ and the desired bound then follows readily from the boundedness of $f_{\b,n}^\ast$. Using \eqref{eq:Remz2} and $\|f\|_\infty$ instead of $f_{\b,n}^\ast$, the above proof also applies to the case $p = \infty$. \end{proof} The inequalities in \eqref{eq:BernV0-3} are the cases $\ell = 1$ and $\ell =2$ of \eqref{eq:BI-V0-3}. We now show that the inequality \eqref{eq:BI-V0-3} does not hold in general for $\ell > 2$. Consider, for example, the polynomial $f_0(x,t) = x_1$. It is easy to see that $$ D_{1,j}^{2\ell} f_0(x,t) = (-1)^\ell x_1 \quad \hbox{and}\quad D_{1,j}^{2\ell -1} f_0(x,t) = (-1)^\ell x_j, $$ so that $(\sqrt{t})^{-\ell} D_{i,j}^\ell f_0(t\xi,t)$ contains the factor $t^{-\ell/2 +1}$, which is singular at $t = 0$ for $\ell > 2$.
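The displayed formulas for the iterated $D_{1,j}$ derivatives of $f_0$ are easy to spot-check symbolically; a minimal sketch with sympy for $d = 2$ and $j = 2$, assuming the standard convention $D_{i,j} = x_i \partial_j - x_j \partial_i$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# angular derivative D_{1,2} = x_1 d/dx_2 - x_2 d/dx_1 (standard convention, assumed here)
def D(h):
    return sp.expand(x1*sp.diff(h, x2) - x2*sp.diff(h, x1))

f, powers = x1, []
for _ in range(4):
    f = D(f)
    powers.append(f)

# matches D^{2l} f0 = (-1)^l x_1 and D^{2l-1} f0 = (-1)^l x_j for l = 1, 2
assert powers == [-x2, -x1, x2, x1]
```

In particular, every derivative is again $\pm x_1$ or $\pm x_j$, so the factor $t$ produced by $f_0(t\xi, t) = t\xi_1$ is never improved by further differentiation.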
Hence, taking $\sw = \sw_{-1,\g}$ on $\VV_0^3$, for example, it is easy to see that $\|t^{-\ell/2} D_{i,j}^\ell f_0\|_{2,\sw_{-1,\g}}$ is infinite if $\ell \ge 3$. Thus, \eqref{eq:BI-V0-3} does not hold for $\ell \ge 3$ in general. Finally, for $\ell = 1,2, \ldots$ and any doubling weight $\sw$, the inequality \eqref{eq:BI-V0-4} holds. Indeed, the case $\ell = 1$ of \eqref{eq:BI-V0-4} follows readily from, and is in fact weaker than, \eqref{eq:BernV0-3}. The case $\ell > 1$ of \eqref{eq:BI-V0-4} then follows by iteration. \subsection{Self-adjointness of $-\Delta_{-1,\g}$} \label{sect:proof_corollary} In this subsection we give a proof of Proposition \ref{prop:self-adJacobi} and Corollary \ref{cor:B-V0_p=2}. \medskip \begin{proof}[Proof of Proposition \ref{prop:self-adJacobi}] First we assume $\g > 0$. The differential operator can be written as $$ \Delta_{0,\g} = \frac{1}{t^{d-2}(1-t)^\g} \frac{\partial}{\partial t} \left ( t^{d-1} (1-t)^{\g+1} \frac{\partial}{\partial t} \right) + t^{-1} \Delta_0^{(\xi)}. $$ Setting $x = t \xi$, the integral over the conic surface can be written as $$ \int_{\VV_0^{d+1}}f(x,t) \d\sm(x,t) = \int_0^1 t^{d-1} \int_{\sph} f(t \xi, t) \d \s(\xi) \d t. $$ Hence, we can write \begin{align*} \int_{\VV_0^{d+1}} & \Delta_{0,\g} f(x,t) \cdot g(x,t) \sw_{-1,\g}(t) \d\sm(x,t) \\ = & \int_0^1 \int_{\sph} \frac{\partial}{\partial t} \left ( t^{d-1} (1-t)^{\g+1} \frac{\partial f}{\partial t}\right) \cdot g(t \xi, t) \d \s(\xi)\d t \\ & + \sum_{1 \le i < j \le d} \int_0^1 t^{d-2} \int_{\sph} D_{i,j}^2 f(t \xi,t) \cdot g(t \xi, t) \d \s(\xi) \sw_{-1,\g}(t) \d t. \end{align*} We then integrate by parts in the $t$ variable, using the assumption that $\g > 0$ and $d \ge 2$, in the first integral in the right-hand side, and applying (cf.
\cite[Proposition 1.8.4]{DaiX}) \begin{equation}\label{eq:Dij-selfadj} \int_{\sph} f(\xi)D_{i,j} g(\xi) \d \s(\xi) = - \int_{\sph} D_{i,j}f(\xi) g(\xi) \d \s(\xi), \quad 1 \le i,j \le d, \end{equation} on the second integral in the right-hand side to establish the desired formula. Analytic continuation shows that the identity holds for $\g > -1$. \end{proof} \begin{proof}[Proof of Corollary \ref{cor:B-V0_p=2}] The inequality follows from setting $g =f$ in Proposition \ref{prop:self-adJacobi} and the Cauchy inequality. If $f \in \Pi_n$, then the inequality \eqref{eq:BernsteinLB} shows that the right-hand side of the stated inequality is bounded by $c n^2 \left \| f \right \|_{\sw, 2}^2$, so that each term of the left-hand side is bounded by the same quantity and, hence, we obtain \eqref{eq:BI-V0-2} and \eqref{eq:BI-V0-3} for $\ell =1$ and $p=2$. \end{proof} \section{Bernstein inequalities on the cone}\label{sec:OP-cone} \setcounter{equation}{0} We work with the solid cone $\VV^{d+1}$ in this section. In the first subsection, we recall the orthogonal polynomials and their kernels on the cone. Instead of estimating the derivatives of the kernels as in the previous section, we rely on an intimate connection between analysis on the cone and on the conic surface, which is explained in the second subsection. The proof of the Bernstein inequalities on the cone, based on this connection, is given in the third subsection. Finally, the proof of Theorem \ref{thm:self-adjointV} and its corollary is given in the fourth subsection. \subsection{Orthogonal polynomials and localized kernels}\label{sec:OPcone} Let $\Pi_n^d$ denote the space of polynomials of degree at most $n$ in $d$ variables. For $\mu > -\f12$ and $\g > -1$, we define the weight function $W_{\mu,\g}$ by $$ W_{\mu,\g}(x,t) = (t^2-\|x\|^2)^{\mu-\f12} (1-t)^\g, \qquad \|x\| \le t, \quad 0 \le t \le 1.
$$ Orthogonal polynomials with respect to $W_{\mu,\g}$ on $\VV^{d+1}$ were studied in \cite{X20}; they are orthogonal with respect to the inner product $$ \la f, g\ra_{W_{\mu,\g}} =\bs_{\mu,\g} \int_{\VV^{d+1}} f(x,t) g(x,t) W_{\mu,\g}(x,t) \d x \d t. $$ Let $\CV_n(\VV^{d+1},W_{\mu,\g})$ be the space of orthogonal polynomials of degree $n$. Then $$ \dim \CV_n(\VV^{d+1},W_{\mu,\g}) = \binom{n+d}{n}, \quad n=0, 1,2,\ldots. $$ An orthogonal basis of $\CV_n(\VV^{d+1}, W_{\mu,\g})$ can be given in terms of the Jacobi polynomials and the orthogonal polynomials on the unit ball. For $m =0,1,2,\ldots$, let $\{P_\kb^m(W_\mu): |\kb| = m, \kb \in \NN_0^d\}$ be an orthonormal basis of $\CV_m(\BB^d, W_\mu)$ on the unit ball. Let \begin{equation} \label{eq:coneJ} \Jb_{m,\kb}^n(x,t):= P_{n-m}^{(2m+2\mu+d-1, \g)}(1- 2t) t^m P_\kb^m\left(W_\mu; \frac{x}{t}\right). \end{equation} Then $\{\Jb_{m,\kb}^n(x,t): |\kb| = m, \, 0 \le m\le n\}$ is an orthogonal basis of $\CV_n(\VV^{d+1},W_{\mu,\g})$. The following theorem is established in \cite[Theorem 3.2]{X20}. \begin{thm}\label{thm:Delta0V} Let $\mu > -\tfrac12$, $\g > -1$ and $n \in \NN_0$. Let $\fD_{\mu,\g}$ be the second-order differential operator defined in \eqref{eq:V-DE}. Then the polynomials in $\CV_n(\VV^{d+1},W_{\mu,\g})$ are eigenfunctions of $\fD_{\mu,\g}$; more precisely, \begin{equation}\label{eq:cone-eigen} \fD_{\mu,\g} u = -n (n+2\mu+\g+d) u, \qquad \forall u \in \CV_n(\VV^{d+1},W_{\mu,\g}). \end{equation} \end{thm} The reproducing kernel of the space $\CV_n(\VV^{d+1}, W_{\mu,\g})$, denoted by $\Pb_n(W_{\mu,\g};\cdot,\cdot)$, can be written in terms of the above basis, $$ \Pb_n\big(W_{\mu,\g}; (x,t),(y,s) \big) = \sum_{m=0}^n \sum_{|\kb|=m} \frac{ \Jb_{m, \kb}^n(x,t) \Jb_{m, \kb}^n(y,s)}{\la \Jb_{m, \kb}^n, \Jb_{m, \kb}^n \ra_{W_{\mu,\g}}}.
$$ It is the kernel of the projection $\proj_n(W_{\mu,\g}): L^2(\VV^{d+1},W_{\mu,\g}) \to \CV_n(\VV^{d+1}, W_{\mu,\g})$, $$ \proj_n(W_{\mu,\g};f) = \int_{\VV^{d+1}} f(y,s) \Pb_n\big(W_{\mu,\g}; \,\cdot, (y,s) \big) W_{\mu,\g}(y,s) \d y \d s. $$ This kernel enjoys an addition formula that can be written as a triple integral over $[-1,1]^3$ for $\mu > 0$; see \cite[Theorem 4.3]{X20}. For our purpose, we only need the closed-form formula for the degenerate case $\mu = 0$; see \cite[Theorem 5.8]{X21}. \begin{thm} \label{thm:PnCone2} Let $d \ge 2$, $\mu =0$ and $\g \ge -\f12$. Then, for $n =0,1,2,\ldots$, \begin{align}\label{eq:PbCone3} \Pb_n \big(W_{0,\g}; (x,t), (y,s)\big) = c_{\g,d} & \int_{[-1,1]^2} \left[ Z_{n}^{(\g+d-\f12,-\f12)} (\xi (x, t, y, s; 1, \vb)) \right. \\ & \qquad\quad \left. +Z_n^{(\g+d-\f12,-\f12)}(\xi (x, t, y, s; -1, \vb))\right] \notag \\ & \times (1-v_1^2)^{\f{d-1}2 - 1}(1-v_2^2)^{\g-\f12} \d \vb, \notag \end{align} where $c_{\g,d}$ is the constant normalized so that $\Pb_0 =1$, and $\xi (x,t, y,s; u, \vb) \in [-1,1]$ is defined by \begin{align} \label{eq:xi} \xi (x,t, y,s; u, \vb) = &\, v_1 \sqrt{\tfrac12 \left(ts+\la x,y \ra + \sqrt{t^2-\|x\|^2} \sqrt{s^2-\|y\|^2} \, u \right)}\\ & + v_2 \sqrt{1-t}\sqrt{1-s}. \notag \end{align} \end{thm} Let $\wh a$ be an admissible cut-off function. For $(x,t)$, $(y,s) \in \VV^{d+1}$, the localized kernel $\Lb_n(W_{0,\g}; \cdot,\cdot)$ is defined by $$ \Lb_n\left(W_{0,\g}; (x,t),(y,s)\right) = \sum_{j=0}^\infty \wh a\left( \frac{j}{n} \right) \Pb_j\left(W_{0,\g}; (x,t), (y,s)\right). $$ Like the kernel $\sL_n$ on the conic surface, the kernel $\Lb_n(W_{0,\g})$ is also highly localized via the distance function $\sd_{\VV}(\cdot, \cdot)$ on the cone. There is, however, no need to carry out the estimate for these kernels as in the case of the conic surface.
Indeed, there is a direct connection between the kernels for $W_{0,\g}$ on $\VV^{d+1}$ and those for $\sw_{-1,\g}$ on the conic surface $\VV_0^{d+2}$ in $\RR^{d+2}$, as will be explained in the next subsection. \subsection{The analysis on $\VV^{d+1}$ and on $\VV_0^{d+2}$} Let $(x,t)$ and $(y,s)$ be elements of the solid cone $\VV^{d+1}$. We shall adopt the notation \begin{equation} \label{eq:XY} X:=\big(x,\sqrt{t^2-\|x\|^2}\big), \quad Y:=\big(y,\sqrt{s^2-\|y\|^2}\big), \quad Y_*:=\big(y, - \sqrt{s^2-\|y\|^2}\big). \end{equation} It is evident that $\|X\| = t$, $\|Y\| = \|Y_*\| =s$, so that $(X,t)$, $(Y,s)$ and $(Y_*,s)$ are elements of the conic surface $\VV_0^{d+2}$ in $\RR^{d+2}$. \begin{prop} Let $\sL_n(\sw_{-1,\g}; \cdot,\cdot)$ be the kernel defined in \eqref{eq:Ln-intV0} for $\VV_0^{d+2}$. Then \begin{equation} \label{eq:LnW-Lnw} \Lb_n \left(W_{0,\g}; (x,t), (y,s)\right) = \sL_n \big(\sw_{-1,\g}; (X,t), (Y,s)\big) + \sL_n \big(\sw_{-1,\g}; (X,t), (Y_*,s)\big). \end{equation} \end{prop} \begin{proof} By \eqref{eq:PbCone3}, we can write $$ \Lb_n\left(W_{0,\g}; (x,t), (y,s)\right) = \Lb_n^+ \left(W_{0,\g}; (x,t), (y,s)\right)+ \Lb_n^-\left(W_{0,\g}; (x,t), (y,s)\right), $$ where, with $\l = \g+d$, \begin{align*} \Lb_n^\pm\left(W_{0,\g}; (x,t), (y,s)\right) = c_{0,\g,d} \int_{[-1,1]^2}&L_n^{(\l-\f12, -\f12)}\left(2 \xi(x,t,y,s; \pm 1,v)^2-1\right)\\ & \qquad \times (1-v_1^2)^{\f{d-1}{2}-1}(1-v_2^2)^{\g-\f12} \d v. \end{align*} Write $X = t X'$ and $Y = s Y'$, so that $X' = (x',\sqrt{1-\|x'\|^2}) \in \SS^d$ and $Y' = (y',\sqrt{1-\|y'\|^2}) \in \SS^d$. Using the explicit formulas of $\xi$ in \eqref{eq:xi} and $\zeta$ in \eqref{eq:zetaV0}, it follows that \begin{equation*} \xi(x,t,y,s; 1,v) = v_1 \sqrt{t s} \sqrt{\tfrac12 (1+ \la X',Y'\ra)}+v_2 \sqrt{1-t}\sqrt{1-s} = \zeta(X ,t, Y, s; 1,v). \end{equation*} A similar identity holds for $\xi(x,t, y,s; -1, v)$ if we replace $Y$ by $Y_*$ in the right-hand side.
As a consequence, comparing with \eqref{eq:Ln-intV0}, we obtain \begin{align*} \Lb_n^+ \left(W_{0,\g}; (x,t), (y,s)\right) &= \sL_n \left(\sw_{-1,\g}; (X,t), (Y,s)\right), \\ \Lb_n^- \left(W_{0,\g}; (x,t), (y,s)\right) & = \sL_n \left(\sw_{-1,\g}; (X,t), (Y_*,s)\right), \end{align*} from which the stated identity follows readily. \end{proof} The connection between the cone $\VV^{d+1}$ and the conic surface $\VV_0^{d+2}$ is further manifested in the following integral relation. \begin{prop}\label{prop:IntV0V} Let $f: \RR^{d+2} \to \RR$ be a continuous function. Let $x_{d+1} = \sqrt{t^2-\|x\|^2}$ for $\|x\| \le t$. Then $$ \int_{\VV_0^{d+2}} f(y,s) w(s) \d \sm(y,s) = \int_{\VV^{d+1}} \big [ f\big((x,x_{d+1}),t\big) + f\big((x, - x_{d+1}),t\big)\big ] t w(t) \frac{\d x\d t}{\sqrt{t^2-\|x\|^2}} . $$ \end{prop} \begin{proof} Let $\d \s$ be the surface measure on $\SS^d$. We write the integral on $\VV_0^{d+2}$ as \begin{align*} &\int_{\VV_0^{d+2}} f(y,s) w(s) \d \sm(y,s) = \int_0^1 s^d \int_{\SS^d} f(s \xi,s) w(s) \d\s(\xi) \d s \\ & = \int_0^1 s^d \int_{\BB^d} \left[ f\big(s (u,\sqrt{1-\|u\|^2}), s\big) + f\big(s (u, - \sqrt{1-\|u\|^2}), s\big) \right] \frac{\d u}{\sqrt{1-\|u\|^2}}\, w(s) \d s, \end{align*} where the second identity uses \cite[(A.5.4)]{DaiX}. Rewriting the right-hand side as an integral over $\VV^{d+1}$, with $x = s u$, proves the stated identity. \end{proof} The doubling weight on $\VV^{d+1}$ is defined as usual via the distance function $\sd_{\VV}$. The latter can be defined in terms of the distance function on the conic surface $\VV_0^{d+2}$. Indeed, using $X$ and $Y$ in \eqref{eq:XY}, we define \begin{equation} \label{eq:distV} \sd_{\VV} ((x,t), (y,s)) : = \arccos \left(\sqrt{\frac{\la X,Y\ra +ts}2} + \sqrt{1-t}\sqrt{1-s}\right), \end{equation} which is indeed a distance function on the solid cone $\VV^{d+1}$, as shown in \cite{X21}.
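The coupling of $(x,t)$ with the lifted points $(X,t)$, $(Y,s)$, $(Y_*,s)$ makes the identity $\xi(x,t,y,s;\pm 1,v) = \zeta(X,t,Y,s;1,v)$ (with $Y_*$ in place of $Y$ when $u = -1$), and the claim $\xi \in [-1,1]$, easy to test numerically; a small sketch with numpy over random points in the cone, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def lift(x, t, sign=1.0):
    # (x,t) in the solid cone -> (X,t) on the conic surface, as in (eq:XY);
    # sign = -1 gives the reflected point Y_*
    return np.append(x, sign*np.sqrt(max(t*t - x @ x, 0.0)))

for _ in range(200):
    t, s = rng.uniform(0.05, 0.95, size=2)
    x = rng.normal(size=3); x *= rng.uniform(0, 0.95)*t/np.linalg.norm(x)  # ||x|| <= t
    y = rng.normal(size=3); y *= rng.uniform(0, 0.95)*s/np.linalg.norm(y)  # ||y|| <= s
    v1, v2 = rng.uniform(-1, 1, size=2)
    u = rng.choice([-1.0, 1.0])

    # xi(x,t,y,s; u, v) from (eq:xi)
    inner = 0.5*(t*s + x @ y + np.sqrt(t*t - x @ x)*np.sqrt(s*s - y @ y)*u)
    xi = v1*np.sqrt(max(inner, 0.0)) + v2*np.sqrt((1 - t)*(1 - s))

    # zeta evaluated at the lifted points (Y_* when u = -1)
    X, Y = lift(x, t), lift(y, s, sign=u)
    zeta = v1*np.sqrt(max(0.5*(t*s + X @ Y), 0.0)) + v2*np.sqrt((1 - t)*(1 - s))

    assert abs(xi - zeta) < 1e-9
    assert abs(xi) <= 1 + 1e-12
```

The bound $|\xi| \le 1$ reflects $\la X,Y\ra \le ts$ together with the Cauchy--Schwarz inequality applied to $(\sqrt{t},\sqrt{1-t})$ and $(\sqrt{s},\sqrt{1-s})$.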
In particular, we have the relation \begin{equation}\label{eq:dV=dV0} \sd_{\VV^{d+1}} ((x,t), (y,s)) = \sd_{\VV_0^{d+2}} ((X,t), (Y,s)), \quad (x,t), (y,s) \in \VV^{d+1}, \end{equation} where we write $\sd_{\VV} = \sd_{\VV^{d+1}}$ and $ \sd_{\VV_0} = \sd_{\VV_0^{d+2}}$ to emphasize the dimensions of the domains. \subsection{Proof of main results} We will need an analog of Proposition \ref{prop:Remz} that is of interest in itself. \begin{prop} \label{prop:RemzV} Let $W$ be a doubling weight function on $\VV^{d+1}$. For $n \in \NN$, let $\chi_{n,\delta}(x,t)$ denote the characteristic function of the set $$ \left \{(x,t): \frac \delta{n^2} \le t \le 1- \frac \delta {n^2}, \quad \sqrt{t^2- \|x\|^2} \ge \delta \frac{\sqrt{t}}{n} \right \}. $$ Then, for $f \in \Pi_n^d$, $1 \le p < \infty$, and every $\delta > 0$, \begin{equation}\label{eq:RemzV} \int_{\VV^{d+1}} |f(x,t)|^p W(x,t) \d x \d t \le c_\delta \int_{\VV^{d+1}} |f(x,t)|^p \chi_{n,\delta}(x,t) W(x,t) \d x \d t. \end{equation} \end{prop} \begin{proof} The proof uses the Marcinkiewicz--Zygmund inequality on the cone established in \cite[Theorem 5.6]{X21}, which uses a maximal $\ve$-separated subset $\Xi_{\VV}$ of $\VV^{d+1}$. It follows almost verbatim the proof of Proposition \ref{prop:Remz}, once it is shown that if $(x,t) \in \Xi_{\VV}$ and $\ve \approx n^{-1}$, then \begin{equation} \label{eq:tj-epjV} \frac \delta{n^2} \le t \le 1- \frac \delta {n^2} \quad\hbox{and}\quad \sqrt{t^2- \|x\|^2} \ge \delta \frac{\sqrt{t}}{n}. \end{equation} The set $\Xi_\VV$ is constructed in \cite[Proposition 5.15]{X21} using separated sets in the $t$-variable on $[0,1]$ and in the $x'$-variable on $\BB^d$, where $(x,t) = (t x', t) \in \VV^{d+1}$. The construction is similar to that of $\Xi_{\VV_0}$.
In fact, for $\ve > 0$ and $N = \lfloor \frac{\pi}{2}\ve^{-1} \rfloor$, we choose the same $t_j$ and $\ve_j$ as given in \eqref{eq:tj-epj} and define $$ \Xi_{\VV} = \big\{(t_j x', t_j): \, x' \in \Xi_\BB(\ve_j), \, 1\le j \le N \big\}, $$ where $\Xi_\BB(\ve_j)$ is a maximal $\ve_j$-separated set of $\BB^d$. Choose again $\ve = \b n^{-1}$, $\b > 0$, so that $\ve_j \sim (n \sqrt{t_j})^{-1}$. By the choices of $t_j$, the bounds for $t_j$ in \eqref{eq:tj-epjV} are immediate. Setting $x = t x'$, $x' \in \BB^d$, so that $\sqrt{t^2 - \|x\|^2} = t \sqrt{1-\|x'\|^2}$, we see that, since $\ve_j = \frac{\pi}{2} \frac{\ve}{\sqrt{t_j}}$, the second inequality in \eqref{eq:tj-epjV} is equivalent to $$ \sqrt{1-\|x'\|^2} \ge c\, \ve_j, \qquad x' \in \Xi_{\BB}(\ve_j), \quad 1 \le j \le N. $$ Thus, the question is reduced to the existence of a maximal $\ve$-separated set of $\BB^d$ that satisfies the above inequality. A construction is given below. We show that there is a maximal $\ve$-separated set $\Xi_\BB(\ve)$ on $\BB^d$ such that \begin{equation} \label{eq:Xi_B} \sqrt{1-\|u\|^2} \ge c \ve, \qquad u \in \Xi_\BB(\ve) \end{equation} for any given $\ve> 0$. We choose $N = \lfloor \frac{\pi}{2}\ve^{-1} \rfloor$ as above. For $1\le j \le N$ we define $$ \t_j:= \frac{(2 j-1)\pi}{2 N}, \qquad \t_j^- := \t_j- \frac{\pi}{2 N} \quad \hbox{and} \quad \t_j^+ := \t_j +\frac{\pi}{2 N}. $$ Let $r_j = \sin \frac{\t_j}2$ and define $r_j^-$ and $r_j^+$ accordingly. In particular, $r_1^- = 0$ and $r_N^+ = 1$. Then $\t_{j+1}^- =\t_j^+$ and $\BB^d$ can be partitioned by $$ \BB^d = \bigcup_{j=1}^N \BB_0^{(j)}, \quad \hbox{where}\quad \BB_0^{(j)}:= \left\{ r \xi \in \BB^d: r_j^- < r \le r_j^+, \, \xi \in \sph \right\}. $$ Let $\s_j := (2 r_j)^{-1} \pi \ve$. Let $\Xi_\SS(\s_j)$ be a maximal $\s_j$-separated set of $\sph$, such that $\{\SS_\xi(\s_j): \xi \in \Xi_\SS(\s_j)\}$ is a partition of the sphere, $\sph = \bigcup_{\eta \in \Xi_\SS(\s_j)}\SS_\eta(\s_j)$.
We define $$ \Xi_{\BB}(\ve) = \left\{ r_j \xi: \xi \in \Xi_\SS(\s_j), \, 1 \le j \le N \right\}. $$ By Definition \ref{defn:separated-pts}, to show that $\Xi_\BB(\ve)$ is $\ve$-separated, we need to show that $\sd_\BB(r_j \xi, r_k \eta) \ge \ve$ for any two distinct points $r_j \xi$ and $r_k \eta$ in $\Xi_\BB(\ve)$. It is known that the distance function $\sd_\BB$ of $\BB^d$ is defined by $$ \sd_{\BB}(x,y) = \arccos \left( \la x,y\ra + \sqrt{1-\|x\|^2} \sqrt{1-\|y\|^2} \right), \qquad x, y \in \BB^d. $$ Writing $x = t \xi$ and $y = s \eta$ with $t,s \in [0,1]$ and $\xi,\eta \in \sph$, it is easy to verify that \begin{equation} \label{eq:dB} 1 - \cos \sd_{\BB}(x,y) = 1- \cos \sd_{[0,1]}(t^2,s^2) + t s \left(1-\cos \sd_{\SS}(\xi,\eta) \right) \end{equation} using $\cos \sd_{[0,1]}(t,s) = \sqrt{t}\sqrt{s} + \sqrt{1-t}\sqrt{1-s}$. We use \eqref{eq:dB} with $1- \cos \phi = 2 \sin^2 \f\phi 2$ and also use $\sd_{[0,1]}(t,s) = \f12 |\t - \phi|$ if $t = \sin^2 \f{\t}{2}$ and $s = \sin^2 \f{\phi}2$. If $j \ne k$ then, by \eqref{eq:dB}, $$ \sd_\BB(r_j \xi, r_k \eta) \ge \sd_{[0,1]}(r_j^2, r_k^2) =\frac12 |\t_j - \t_k| \ge \frac{\pi}{2N} \ge \ve. $$ If $j = k$, then $\xi$ and $\eta$ belong to the same $\Xi_{\SS}(\s_j)$, so that $\sd_{\SS}(\xi,\eta) \ge \s_j$. Hence, using $\frac{2}{\pi} \phi \le \sin \phi \le \phi$, we deduce from \eqref{eq:dB} that $$ \sd_\BB(r_j \xi,r_j\eta) \ge\frac{2}{\pi} r_j \sd_\SS(\xi,\eta) \ge \frac{2}{\pi} r_j \s_j = \ve. $$ Thus, $\Xi_\BB(\ve)$ is an $\ve$-separated set. Moreover, it is easy to see that $\Xi_\BB(\ve)$ is maximal. Now, if $x \in \Xi_{\BB}(\ve)$, then $\|x\| \le r_N = \sin \frac12 \t_N = \cos \frac{\pi}{4N}$, which implies that $\sqrt{1- \|x\|^2} \ge \sin \frac{\pi}{4N} \ge c \ve$. Thus, \eqref{eq:Xi_B} holds. This completes the proof. \end{proof} The unit ball $\BB^d$ equipped with the weight function $(1-\|u\|^2)^{\mu-\f12}$ is also a localizable homogeneous space in the sense of \cite{X21}, since it possesses highly localized kernels \cite{PX}.
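For $d = 2$, the construction of $\Xi_\BB(\ve)$ in the proof above can be carried out and checked directly; a sketch with numpy, using equally spaced angles as the $\s_j$-separated sets on $\SS^1$ (a convenient choice for illustration; any maximal $\s_j$-separated sets would do):

```python
import numpy as np

eps = 0.3
N = int(np.floor(np.pi/(2*eps)))
pts = []
for j in range(1, N + 1):
    theta_j = (2*j - 1)*np.pi/(2*N)
    r_j = np.sin(theta_j/2)
    sigma_j = np.pi*eps/(2*r_j)
    M_j = int(np.floor(2*np.pi/sigma_j))      # equally spaced angles, spacing >= sigma_j
    for k in range(M_j):
        phi = 2*np.pi*k/M_j
        pts.append(r_j*np.array([np.cos(phi), np.sin(phi)]))

def d_B(x, y):
    # distance function of the unit disk B^2
    c = x @ y + np.sqrt(1 - x @ x)*np.sqrt(1 - y @ y)
    return np.arccos(np.clip(c, -1.0, 1.0))

# eps-separation of Xi_B(eps)
dmin = min(d_B(p, q) for i, p in enumerate(pts) for q in pts[i+1:])
assert dmin >= eps - 1e-9

# the normal component stays >= c*eps: sqrt(1-||u||^2) >= sin(pi/(4N)) >= eps/pi
assert all(np.sqrt(1 - p @ p) >= eps/np.pi for p in pts)
```

The two assertions are precisely the $\ve$-separation and the lower bound \eqref{eq:Xi_B} established in the proof.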
As a result, our construction of the maximal $\ve$-separated set on the unit ball also leads to the following proposition, which will not be used in this paper but is recorded here for possible future use. \begin{prop} \label{prop:RemzBall} Let $W$ be a doubling weight function on $\BB^d$. For $n \in \NN$, let $\chi_{n,\delta}(x)$ denote the characteristic function of the set $\{x\in \BB^d: \|x\| \le 1- \frac{\delta}{n^2}\}$. Then for $f \in \Pi_n^d$, $1 \le p < \infty$, and every $\delta > 0$, \begin{equation}\label{eq:RemzBB} \int_{\BB^d} |f(x)|^p W(x) \d x \le c_\delta \int_{\BB^d} |f(x)|^p \chi_{n,\delta}(x) W(x) \d x. \end{equation} \end{prop} The proof of the Bernstein inequalities on the cone follows closely the proof on the conic surface. We will use the maximal function $f_{\b,n}^\ast$ in \eqref{eq:fbn*}, defined similarly by \begin{equation} \label{eq:fbn*V} f_{\b,n}^\ast (x,t) = \max_{(y,s) \in \VV^{d+1}} \frac{|f(y,s)|}{(1+ n \sd_{\VV}((x,t),(y,s)))^\b} \end{equation} using the distance function $\sd_{\VV}$ on the cone. It again satisfies \begin{equation} \label{eq:fbn*Vbd} \|f\|_{p, W} \le \|f_{\b,n}^\ast \|_{p, W} \le c \|f\|_{p, W} \end{equation} for every doubling weight $W$ on $\VV^{d+1}$ and $1 \le p \le \infty$. \begin{lem} Let $f$ be defined on $\VV^{d+1}$. Define \begin{equation} \label{eq:F=f} F \big((x,x_{d+1}),t\big) = f(x,t), \qquad \big( (x,x_{d+1}), t\big) \in \VV_0^{d+2}, \end{equation} so that $f$ can be regarded as a function on $\VV_0^{d+2}$. Then \begin{equation} \label{eq:F*=f*} f_{\b,n}^*(x,t) = F_{\b,n}^*\big( (x,x_{d+1}),t\big), \qquad (x,t) \in \VV^{d+1}, \quad \big((x,x_{d+1}),t\big) \in \VV_0^{d+2}. \end{equation} \end{lem} \begin{proof} Since $F$ is independent of $x_{d+1}$, it follows by the definition of $F_{\b,n}^*$ in \eqref{eq:fbn*} that $F_{\b,n}^*((x,x_{d+1}), t) = F_{\b,n}^* ((x,-x_{d+1}),t)$.
In particular, we can take the maximum in the definition of $F_{\b,n}^*$ over $\VV_{0,+}^{d+2} = \{(( y,y_{d+1}), t) \in \VV_0^{d+2}: y_{d+1} \ge 0\}$ instead of over $\VV_0^{d+2}$. Then \eqref{eq:F*=f*} follows from \eqref{eq:dV=dV0} after replacing the maximum in \eqref{eq:fbn*V} by one over $(Y,s) \in \VV_{0,+}^{d+2}$. \end{proof} \medskip \begin{proof}[Proof of Theorem \ref{thm:BI-V}] Let $f$ be a polynomial of degree at most $n$. Then $$ f(x,t) = \int_{\VV^{d+1} } f(y,s) \Lb_n\left(W_{0,\g}; (x,t), (y,s)\right) W_{0,\g}(y,s) \d y \d s. $$ Let $T_{x,t}$ be a differential operator in the $(x,t)$ variables. Then, by \eqref{eq:LnW-Lnw} and using the maximal function, we obtain from \eqref{eq:F*=f*} \begin{align*} | T_{x,t} f(x,t)| \le f_{\b,n}^*(x,t) & \left[\int_{\VV^{d+1} } \frac{ \left| T_{x,t} \sL_n\left(w_{-1,\g}; (X,t), (Y,s)\right)\right|} {\big(1+n \sd_{\VV_0}\big( (X,t),(Y,s)\big) \big)^\b} W_{0,\g}(y,s) \d y \d s \right.\\ & \left. + \int_{\VV^{d+1} } \frac{ \left| T_{x,t} \sL_n\left(w_{-1,\g}; (X,t), (Y_*,s)\right)\right|} {\big(1+n \sd_{\VV_0}\big( (X,t),(Y_*,s)\big) \big)^\b} W_{0,\g}(y,s) \d y \d s \right]. \end{align*} By the identity in Proposition \ref{prop:IntV0V}, we conclude that \begin{align}\label{eq:Txtf} | T_{x,t} f(x,t)| \le f_{\b,n}^*(x,t) \int_{\VV_0^{d+2}} \frac{ \left| T_{x,t} \sL_n\left(w_{-1,\g}; (X,t), (y,s)\right)\right|} {\big(1+n \sd_{\VV_0}\big( (X,t),(y,s)\big) \big)^\b} \sw_{-1,\g}(s) \d \sm(y,s). \end{align} The integral in the right-hand side has been estimated in the proofs of Theorems \ref{thm:Bernstein} and \ref{thm:Bernstein2} in the previous section. Hence, using \eqref{eq:fbn*Vbd} and \eqref{eq:RemzV} when necessary, we have proved the inequalities \eqref{eq:BI-V-1} for $\partial_t$, \eqref{eq:BI-V-1B} and \eqref{eq:BI-V-1C} for $D_{i,j}$, $1\le i, j \le d$. For $\partial_{x_j}$, we make the following observation.
If $G(x,t) = g\big(x, \sqrt{t^2-\|x\|^2}\big)$ for $x\in \RR^d$ and $\|x\| \le t$, then $$ \sqrt{t^2-\|x\|^2} \partial_j G = \sqrt{t^2-\|x\|^2} \partial_j g - x_j \partial_{d+1} g = - D_{j,d+1}^{(X)} g $$ for $1 \le j \le d$, where the derivatives of $g$ are evaluated at $X$. Consequently, following the procedure in the previous paragraph and using the proof of Theorem \ref{thm:Bernstein2} for $D_{j,d+1}$, we conclude that \begin{equation} \label{eq:PhiD-power} \left \|(\Phi \partial_{x_j})^\ell f\right\|_{p,W} \le c n^\ell \|f\|_{p,W} \quad \hbox{and} \quad \left \| \frac{1}{(\sqrt{t})^\ell} (\Phi \partial_{x_j})^\ell f\right\|_{p,W} \le c n^\ell \|f\|_{p,W}, \end{equation} where the first inequality holds for all $\ell \in \NN$ and the second one holds for $\ell = 1,2$. Furthermore, the estimate in Lemma \ref{lem:DkernelV0} shows that \begin{align*} \frac{\left|\partial_{x_j} \sL_n\left(w_{-1,\g}; (X,t), (y,s)\right)\right|} {\big(1+n \sd_{\VV_0}\big( (X,t),(y,s)\big) \big)^\b} & \le \frac{c_\k n^{d+1} \sqrt{t} \big(1+n \sd_{\VV_0}\big( (X,t),(y,s)\big) \big)^{-\k+\b}}{ \Phi(x,t) \sqrt{\sw_{\g,d}(n;t)}\sqrt{\sw_{\g,d}(n;s)} } \\ & \le \frac{c_\k n^{d+2} \big(1+n \sd_{\VV_0}\big( (X,t),(y,s)\big) \big)^{-\k+\b}}{ \sqrt{\sw_{\g,d}(n;t)}\sqrt{\sw_{\g,d}(n;s)} }, \end{align*} if $(x,t)$ satisfies $\Phi(x,t) \ge \delta \sqrt{t} n^{-1}$. Consequently, following the proof of Theorem \ref{thm:Bernstein2}, we conclude that $$ \left | \partial_{x_j} f(x,t) \right | \chi_{n,\delta}(x,t) \le c n^2 f_{\b,n}^*(x,t), \quad 1 \le j \le d, $$ where $\chi_{n,\delta}$ is defined in Proposition \ref{prop:RemzV}. Hence, by \eqref{eq:RemzV} and \eqref{eq:fbn*Vbd}, we have proved the inequality $\left \|\partial_{x_j} f \right \|_{p,W} \le c n^2 \|f\|_{p,W}$. Iterating this inequality proves the first inequality in \eqref{eq:BI-V-2}.
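The chain-rule identity used above for $\partial_{x_j}$ can be confirmed symbolically; a minimal sketch with sympy for $d = 2$ and $j = 1$, using a concrete test function chosen arbitrarily for illustration:

```python
import sympy as sp

x1, x2, t, u = sp.symbols('x1 x2 t u', positive=True)

def g(a, b, c):
    # arbitrary smooth test function of (x_1, x_2, x_{d+1}); any choice works
    return a*c + b*c**2 + sp.sin(a*c)

Phi = sp.sqrt(t**2 - x1**2 - x2**2)   # Phi(x,t) = sqrt(t^2 - ||x||^2)
G = g(x1, x2, Phi)                    # G(x,t) = g(X) with X = (x, Phi)

lhs = Phi*sp.diff(G, x1)
# partial derivatives of g evaluated at X
g1 = sp.diff(g(x1, x2, u), x1).subs(u, Phi)
g3 = sp.diff(g(x1, x2, u), u).subs(u, Phi)

# Phi * d_{x_1} G = Phi * d_1 g - x_1 * d_3 g
assert sp.simplify(lhs - (Phi*g1 - x1*g3)) == 0
```

The check confirms that one power of $\Phi$ converts $\partial_{x_j}$ acting on the cone into a first-order angular derivative acting on the lifted function, which is what allows the conic-surface estimates to be reused.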
Using the identity $$ \left( \Phi(x,t) \partial_{x_j}\right)^2 = (\Phi(x,t))^2 \partial_{x_j}^2 - x_j \partial_{x_j}, $$ the first inequality in \eqref{eq:BI-V-2} with $\ell = 1$ and the first inequality in \eqref{eq:PhiD-power}, we conclude that $$ \left\| \Phi^2 \partial_{x_j}^2 f \right\|_{p,W} \le \left\|(\Phi \partial_{x_j})^2 f \right\|_{p,W} + \left\| \partial_{x_j} f \right\|_{p,W} \le c n^2 \|f\|_{p,W}, $$ which proves the second inequality in \eqref{eq:BI-V-2} with $\ell =2$. Since $\Phi^2$ is a polynomial in $(x,t)$, the second inequality in \eqref{eq:BI-V-2} for $\ell > 2$ can be derived by iteration from the cases $\ell =1$ and $2$. Finally, using the second inequality of \eqref{eq:PhiD-power}, the same argument shows that $$ \left\| \frac{1}{t} \Phi^2 \partial_{x_j}^2 f \right\|_{p,W} \le \left\| \frac{1}{t} (\Phi \partial_{x_j})^2 f \right\|_{p,W} + \left\| \frac{x_j}{t} \partial_{x_j} f \right\|_{p,W} \le c n^2 \|f\|_{p,W}, $$ where we have used $|x_j| \le \|x\| \le t$. This completes the proof. \end{proof} \subsection{Self-adjointness of the spectral operator on the cone}\label{sect:self-adjointV} We prove Theorem \ref{thm:self-adjointV}, which shows that the differential operator $\fD_{\mu,\g}$ defined in \eqref{eq:V-DE} is self-adjoint. We start with a representation of $\fD_{\mu,\g}$ that is of interest in itself. It shows, in particular, that the operator $\fD_{\mu,\g}$ can be written in terms of the first-order differential operators that appear in Theorem \ref{thm:BI-V}. \begin{lem} The operator $\fD_{\mu,\g}$ can be written as \begin{align}\label{eq:deq-cone} \fD_{\mu,\g} &= \frac{1}{t^d W_{\mu,\g}(x,t)} ( \partial_t + t^{-1}\la x,\nabla_x\ra) t^{d+1} W_{\mu,\g+1}(x,t) ( \partial_t + t^{-1}\la x,\nabla_x\ra) \\ & + t^{-1} \bigg( \sum_{i=1}^d \frac{1}{W_{\mu,\g} (x,t)} \partial_{x_i} \big( W_{\mu+1,\g}(x,t) \partial_{x_i} \big) + \sum_{1 \le i < j \le d } ( D_{i,j}^{(x)})^2 \bigg).
\notag \end{align} \end{lem} \begin{proof} We start with an observation that follows from a quick computation, \begin{align*} t (\partial_t + t^{-1} \la x,\nabla_x \ra) (t^2 - \|x\|^2)^{\mu-\f12} & = t (2 \mu-1) (t^2 - \|x\|^2)^{\mu-\f32} (t -t^{-1} \la x,x\ra) \\ & = (2 \mu-1) (t^2 - \|x\|^2)^{\mu-\f12}, \end{align*} from which it follows that \begin{align*} & \frac{1}{t^d W_{\mu,\g} (x,t)} \left((\partial_t+ t^{-1}\la x,\nabla_x \ra) t^{d+1} W_{\mu,\g+1} (x,t) (\partial_t+ t^{-1}\la x,\nabla_x \ra)\right) \\ & \qquad = t^{-1}(2\mu+d) \la x,\nabla_x \ra + (2\mu+d) \partial_t -(2\mu+d+\g+1)(t \partial_t + \la x,\nabla_x \ra)\\ & \qquad\quad +t (1-t) (t^{-1} \la x,\nabla_x \ra + \partial_t)^2. \end{align*} The last term can be written as \begin{align*} &t(1-t) (\partial_t+ t^{-1} \la x,\nabla_x \ra )^2 = (1-t) (t \partial_t+ \la x,\nabla_x \ra )(\partial_t+ t^{-1} \la x,\nabla_x \ra ) \\ & \quad= t (1-t) \partial_t^2+ 2(1-t) \la x,\nabla_x \ra \partial_t - \la x,\nabla_x \ra^2 + \la x,\nabla_x \ra + t^{-1} \left ( \la x,\nabla_x \ra^2 - \la x,\nabla_x \ra \right) . \end{align*} Putting these together shows that \begin{align*} \fD_{\mu,\g} = & \frac{1}{t^d W_{\mu,\g} (x,t)} \left( (t^{-1} \la x,\nabla_x \ra + \partial_t) t^{d+1} W_{\mu,\g+1}(x,t) (t^{-1}\la x,\nabla_x \ra + \partial_t)\right) \\ & +t^{-1} \left (t^2 \Delta_x - \la x,\nabla_x\ra^2 - (2\mu + d-1) \la x,\nabla_x\ra\right). \end{align*} To conclude the proof, we need the following identity, \begin{align*} & t^2 \Delta_x - \la x,\nabla_x\ra^2 - (2\mu + d-1) \la x,\nabla_x\ra \\ & \quad = \frac{1}{W_{\mu,\g} (x,t)} \sum_{i=1}^d \partial_{x_i} \big( W_{\mu+1,\g}(x,t) \partial_{x_i} \big) + \sum_{1 \le i < j \le d } ( D_{i,j}^{(x)})^2, \end{align*} which we claim to hold.
Indeed, if we dilate this identity by $x = t y$, and use the relations $ t^2 \Delta_x=\Delta_y$, $ \la x,\nabla_x \ra= \la y,\nabla_y\ra$, $ D_{i,j}^{(x)} = D_{i,j}^{(y)}$, and \begin{equation} \label{eq:x=ytBall} \frac{1}{W_{\mu,\g}(x,t)} \partial_{x_i} \left( W_{\mu+1,\g}(x,t) \partial_{x_i} \right) = \frac{1}{\varpi_\mu (y)} \partial_{y_i} \left( \varpi_{\mu+1}(y) \partial_{y_i} \right), \end{equation} where $\varpi_\mu(y) = (1-\|y\|^2)^{\mu-\f12}$, we see that the claimed identity becomes \begin{align*} & \Delta_y - \la y,\nabla_y\ra^2 - (2\mu + d-1) \la y,\nabla_y\ra \\ & \qquad = \frac{1}{\varpi_\mu (y)} \sum_{i=1}^d \partial_{y_i} \left( \varpi_{\mu+1}(y) \partial_{y_i} \right) + \sum_{1 \le i < j \le d } ( D_{i,j}^{(y)})^2, \end{align*} which can be easily verified and is in fact the decomposition of the spectral differential operator for the unit ball $\BB^d$ (cf. \cite[Section 5.2]{DX}). This completes the proof. \end{proof} \noindent \begin{proof}[Proof of Theorem \ref{thm:self-adjointV}] Using the decomposition \eqref{eq:deq-cone} of $\fD_{\mu,\g}$ and making a change of variable $x = yt$, we can write \begin{align*} & \int_{\VV^{d+1}} \fD_{\mu,\g} f \cdot g\, W_{\mu,\g} \d x\d t = \int_0^1 t^d \int_{\BB^d} \fD_{\mu,\g} f(t y,t) \cdot g(t y,t) W_{\mu,\g}(t y,t) \d y\d t \\ & = \int_{\BB^d} \int_0^1 ( \partial_t + t^{-1}\la x,\nabla_x\ra) \left( t^{d+1} W_{\mu,\g+1}(x,t) ( \partial_t + t^{-1}\la x,\nabla_x\ra) \right) f(t y,t) \cdot g(x,t)\d t \d y \\ & \qquad\quad + \int_0^1 t^{2\mu+d-2} \sum_{i=1}^d \int_{\BB^d} \partial_{y_i} \left( (1-\|y\|^2)^{\mu+\f12} \partial_{y_i} \right) f(t y,t) \cdot g(x,t) \d y (1-t)^\g \d t \\ & \qquad\quad + \sum_{1 \le i < j \le d } \int_0^1 t^{2\mu+ d-2} \int_{\BB^d} ( D_{i,j}^{(y)})^2 f(t y,t) \cdot g(t y,t) \varpi_\mu (y) \d y\, (1-t)^\g\d t, \end{align*} where we have used \eqref{eq:x=ytBall} in the second term in the right-hand side and $ D_{i,j}^{(x)} = D_{i,j}^{(y)}$ in the third term in the right-hand side.
Assuming $\g > 0$, we observe that $$ \frac{\d}{\d t} [f(t y,t)] = \big( (t^{-1} \la x,\nabla_x \ra + \partial_t) f \big) (t y, t), $$ so that we can integrate by parts with respect to $t$ in the first integral in the right-hand side. Integration by parts also shows that the second integral in the right-hand side is equal to \begin{align*} & - \int_0^1 t^{2\mu+d-2} \sum_{i=1}^d \int_{\BB^d} \partial_{y_i} f(t y,t) \cdot \partial_{y_i} g(t y,t)\varpi_{\mu+1}(y) \d y (1-t)^\g\d t \\ &\qquad = - \int_0^1 t^{d-1} \sum_{i=1}^d \int_{\BB^d} \partial_{x_i} f(x, t) \cdot \partial_{x_i} g(x,t)W_{\mu+1,\g}(x,t) \d x \d t, \end{align*} which is the second integral in the right-hand side of \eqref{eq:intJacobiSolid}. Next, using the result on the unit ball, the third integral in the right-hand side is equal to \begin{align*} & - \sum_{1 \le i < j \le d } \int_0^1 t^{2\mu+ d-2} \int_{\BB^d} D_{i,j}^{(y)} f(t y,t) \cdot D_{i,j}^{(y)} g(t y,t) \varpi_\mu (y) \d y\, (1-t)^\g\d t \\ & \qquad \quad = - \sum_{1 \le i < j \le d } \int_0^1 t^{d-1} \int_{\BB^d} D_{i,j}^{(x)} f(x,t) \cdot D_{i,j}^{(x)} g(x,t) W_{\mu,\g} (x,t) \d x\,\d t, \end{align*} which gives the third integral in the right-hand side of \eqref{eq:intJacobiSolid}. Finally, analytic continuation shows that the identity holds for $\g > -1$. \end{proof} \begin{thebibliography}{99} \bibitem{B1} M. Baran, Bernstein type theorems for compact sets in $\RR^n$, \textit{J. Approx. Theory} \textbf{69} (1992), 156--166. \bibitem{B2} M. Baran, Bernstein type theorems for compact sets in $\RR^n$ revisited, \textit{J. Approx. Theory} \textbf{79} (1994), 190--198. \bibitem{BX} H. Berens and Y. Xu, K-moduli, moduli of smoothness, and Bernstein polynomials on simplices, \textit{Indag. Math. (N.S.)} \textbf{2} (1991), 411--421. \bibitem{BLMT} L. Bos, N. Levenberg, P. Milman and B. A. Taylor, Tangential Markov inequalities characterize algebraic submanifolds of $\RR^N$, \textit{Indiana Univ. Math.
J.} \textbf{44} (1995), 115--138. \bibitem{BM} L. Bos and P. Milman, Tangential Markov inequalities on singular varieties, \textit{Indiana Univ. Math. J.} \textbf{55} (2006), 65--73. \bibitem{BLMR} D. Burns, N. Levenberg, S. Mau, and Sz. R\'ev\'esz, Monge-Amp\`{e}re measures for convex bodies and Bernstein-Markov type inequalities, \textit{Trans. Amer. Math. Soc.} \textbf{362} (2010), 6325--6340. \bibitem{Dai1} F. Dai, Multivariate polynomial inequalities with respect to doubling weights and $A_\infty$ weights, \textit{J. Funct. Anal.} \textbf{235} (2006), 137--170. \bibitem{DP2} F. Dai and A. Prymak, $L^p$-Bernstein inequalities on $C^2$-domains and applications to discretization, \textit{Trans. Amer. Math. Soc.} \textbf{375} (2022), 1933--1976. \bibitem{DaiX} F. Dai and Y. Xu, \textit{Approximation theory and harmonic analysis on spheres and balls}, Springer Monographs in Mathematics, Springer, 2013. \bibitem{DG} R. DeVore and G. Lorentz, \textit{Constructive approximation}, Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], \textbf{303}, Springer-Verlag, Berlin, 1993. \bibitem{DZ} Z. Ditzian, Multivariate Bernstein and Markov inequalities, \textit{J. Approx. Theory} \textbf{70} (1992), 273--283. \bibitem{DT} Z. Ditzian and V. Totik, \textit{Moduli of smoothness}, Springer Series in Computational Mathematics, \textbf{9}, Springer-Verlag, New York, 1987. \bibitem{DX} C. Dunkl and Y. Xu, \textit{Orthogonal polynomials of several variables}, Encyclopedia of Mathematics and its Applications, \textbf{155}, Cambridge Univ. Press, Cambridge, 2014. \bibitem{Kroo0} A. Kr\'{o}o, On Bernstein-Markov-type inequalities for multivariate polynomials in $L_q$-norm, \textit{J. Approx. Theory} \textbf{159} (2009), 85--96. \bibitem{Kroo1} A. Kr\'{o}o, Bernstein type inequalities on star-like domains in $\RR^d$ with application to norming sets, \textit{Bull. Math. Sci.} \textbf{3} (2013), 349--361. \bibitem{Kroo2} A.
Kr\'{o}o, Sharp $L_p$ Markov type inequality for cuspidal domains in $\RR^d$ \textit{J. Approx. Theory} \textbf{250} (2020), 105336, 6 pp. \bibitem{Kroo3} A. Kr\'{o}o, Sharp $L_p$ Bernstein type inequality for cuspidal domains in $\RR^d$. \textit{J. Approx. Theory} \textbf{267} (2021), Paper No. 105593, 11 pp. \bibitem{KR} A. Kr\'{o}o and S. R\'ev\'esz, On Bernstein and Markov-type inequalities for multivariate polynomials on convex bodies, \textit{J. Approx. Theory} \textbf{99} (1999), 134--152. \bibitem{LWW} J. Li, H. Wang and K. Wang, Weighted $L_p$ Markov factors with doubling weights on the ball. arXiv:2201.06711, 2022. \bibitem{MT1} G. Mastroianni and V. Totik Weighted polynomial inequalities with doubling and $A_\infty$ weights. \textit{Const. Approx.} \textbf{16} (2000), 37--71. \bibitem{PX} P. Petrushev and Y. Xu, Localized polynomial frames on the ball. \textit{Constr. Approx.} \textbf{27} (2008), 121--148. \bibitem{T1} V. Totik, Polynomial approximation on polytopes. \textit{Mem. Amer. Math. Soc.} \textbf{232} (2014), no. 1091. vi+112 pp. \bibitem{T2} V. Totik, Polynomial approximation in several variables, \textit{J. Approx. Theory} \textbf{252}: 105364 (2020). \bibitem{X05} Y. Xu, Weighted approximation of functions on the unit sphere. \textit{Const. Approx.} \textbf{21} (2005), 1--28. \bibitem{X20} Y. Xu, Fourier series in orthogonal polynomials on a cone of revolution, \textit{J. Fourier Anal. Appl.}, \textbf{26} (2020), Paper No. 36, 42 pp. \bibitem{X21a} Y. Xu, Orthogonal structure and orthogonal series in and on a double cone or a hyperboloid. \textit{Trans. Amer. Math. Soc.} \textbf{374} (2021), 3603--3657. \bibitem{X21} Y. Xu, Approximation and localized polynomial frame on conic domains. \textit{J. Functional Anal.} \textbf{281} (2021), no. 12, Paper No. 109257, 94 pp. \bibitem{X21b} Y. Xu, Fourier orthogonal series on a paraboloid. \textit{J. d'Analyse Math.} accepted. arXiv:2108.00247 \end{thebibliography} \end{document}
2205.01256v1
http://arxiv.org/abs/2205.01256v1
Hybrid Finite Difference Schemes for Elliptic Interface Problems with Discontinuous and High-Contrast Variable Coefficients
\documentclass[12pt,reqno]{amsart} \usepackage{amsmath} \usepackage{amssymb} \usepackage{verbatim} \usepackage{graphicx} \usepackage[font=footnotesize]{caption} \usepackage{subcaption} \captionsetup[subfigure]{labelfont=rm} \usepackage{epstopdf} \usepackage{multirow} \usepackage{url} \usepackage{makecell} \usepackage{tabularx} \usepackage{longtable} \usepackage{tikz} \usepackage{pgfplots} \usepackage{xcolor} \usepackage{pgfplotstable} \allowdisplaybreaks \usepackage{bm} \usepackage[toc]{appendix} \usepackage[capitalise]{cleveref} \usepackage[top=0.5in,bottom=0.5in,left=0.7in,right=0.7in]{geometry} \newtheorem{lemma}{Lemma} \newtheorem{prop}[lemma]{Proposition} \newtheorem{cor}[lemma]{Corollary} \newtheorem{theorem}[lemma]{Theorem} \newtheorem{remark}[lemma]{Remark} \newtheorem{assmp}[lemma]{Assumption} \newtheorem{example}[lemma]{Example} \newtheorem{ex}{Exercise} \newtheorem{definition}[lemma]{Definition} \newtheorem{algorithm}{Algorithm} \newcommand{\C}{\mathbb{C}} \newcommand{\N}{\mathbb{N}} \newcommand{\NN}{\mathbb{N}_0} \newcommand{\R}{\mathbb{R}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\xb}{x_b} \newcommand{\eu}{u_e} \newcommand{\cp}{c^{+}} \newcommand{\cm}{c^{-}} \newcommand{\EE}{\mathcal{E}} \newcommand{\EF}{\mathcal{F}} \newcommand{\nv}{\vec{n}} \newcommand{\bo}{\mathcal{O}} \newcommand{\Op}{\Omega^+} \newcommand{\Om}{\Omega^-} \newtheorem{EffIm}{Implementation} \newcommand{\ka}{\textsf{k}} \newcommand{\ia}{\textsf{i}} \newcommand{\wa}{\textsf{w}} \newcommand{\ra}{\textsf{r}} \newcommand{\B}{\mathcal{B}} \newcommand{\rr}{\mathcal{R}} \newcommand{\id}{\mathbf{I}_d} \newcommand{\gd}{g_D} \newcommand{\gn}{g_N} \DeclareMathOperator*{\argmin}{arg\,min} \newcommand{\be}{ \begin{equation} } \newcommand{\ee}{ \end{equation} } \newcommand{\odd}{\operatorname{odd}} \newcommand{\ind}{\Lambda} \numberwithin{equation}{section} \numberwithin{lemma}{section} \renewcommand{\figurename}{Table \& Figure}
\begin{document} \title[ Hybrid Finite Difference Scheme for Elliptic Interface Problems]{ Hybrid Finite Difference Scheme for Elliptic Interface Problems with Discontinuous and High-Contrast Variable Coefficients} \author{Qiwei Feng, Bin Han, and Peter Minev} \thanks{Research supported in part by Natural Sciences and Engineering Research Council (NSERC) of Canada under grants RGPIN-2019-04276 (Bin Han), RGPIN-2017-04152 (Peter Minev), Westgrid (www.westgrid.ca), and Compute Canada Calcul Canada (www.computecanada.ca)} \address{Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta, Canada T6G 2G1. \quad {\tt [email protected]} \quad {\tt [email protected]} \quad {\tt [email protected]} } \makeatletter \@addtoreset{equation}{section} \makeatother \begin{abstract} For elliptic interface problems with discontinuous coefficients, the maximum accuracy order of a compact 9-point finite difference scheme at irregular points is three \cite{FHM21b}. In realistic problems, such as flows in porous media, the discontinuous coefficients usually have abrupt jumps across the interface curve, which causes a pollution effect on numerical methods. Hence, to obtain a reasonable numerical solution of such problems, a higher order scheme and its effective implementation are necessary. In this paper, we propose an efficient and flexible way to implement a hybrid finite difference scheme (a 9-point scheme with sixth order accuracy at interior regular points and a 13-point scheme with fifth order accuracy at interior irregular points) on uniform meshes for elliptic interface problems with discontinuous and high-contrast piecewise smooth coefficients in a rectangle $\Omega$. We also derive $6$-point and $4$-point finite difference schemes on uniform meshes with sixth order accuracy at the side points and corner points, respectively, for various mixed boundary conditions (Dirichlet, Neumann and Robin) of elliptic equations in a rectangle.
Our numerical experiments confirm the flexibility of the proposed hybrid scheme and its sixth order accuracy in the $l_2$ and $l_{\infty}$ norms. \end{abstract} \keywords{Elliptic interface problems, hybrid finite difference schemes, fifth or sixth order accuracy, mixed boundary conditions, corner treatments, high-contrast coefficients, discontinuous and variable coefficients} \subjclass[2010]{65N06, 35J15, 76S05, 41A58} \maketitle \pagenumbering{arabic} \section{Introduction and motivations} Elliptic interface problems with discontinuous coefficients appear in many real-world applications: composite materials, fluid mechanics, nuclear waste disposal, and many others. One possible avenue to solve such problems, the so-called immersed interface method (IIM), was proposed by LeVeque and Li. It has been combined with finite difference, finite volume, and finite element spatial discretizations, with various degrees of accuracy. Some of the most important developments include: the second order IIM \cite{CFL19,LeLi94}, the second order immersed finite volume element methods \cite{EwingLLL99}, the second order immersed finite element methods \cite{GongLiLi08,HeLL2011}, the second order fast iterative immersed interface methods of \cite{Li98}, the second order explicit-jump immersed interface methods of \cite{WieBube00}, the third order compact finite difference scheme of \cite{PanHeLi21}, and the fourth order IIM of \cite{XiaolinZhong07}. Another possible approach for the resolution of elliptic interface problems with discontinuous coefficients is the matched interface and boundary (MIB) method. The MIB literature on elliptic interface problems includes the second order MIB \cite{YuZhouWei07}, the fourth order MIB \cite{ZW06}, the fourth order MIB with FFT acceleration \cite{FendZhao20pp109677}, and the sixth order MIB \cite{YuWei073D,ZZFW06}.
For the anisotropic elliptic interface problems with discontinuous and matrix coefficients, \cite{DFL20} proposed a new finite element-finite difference (FE-FD) method with second order accuracy. In \cite{FHM21b} we developed a compact 9-point finite difference scheme for elliptic problems that is formally fourth order accurate away from the interface of singularity of the solution (regular points), and third order accurate in the vicinity of this interface (irregular points). The numerical experiments in \cite{FHM21b} demonstrate that the proposed scheme achieves fourth order accuracy in the $l_2$ norm. Since the maximum accuracy order of a compact 9-point finite difference stencil at regular points is six, and a 13-point stencil at irregular points can achieve a fifth order of accuracy, in the present paper we derive a hybrid scheme that utilizes a 9-point stencil for regular points and a 13-point stencil for irregular points, for the case of elliptic problems with discontinuous scalar coefficients. In \cite{FHM21b} we demonstrated that if the coefficient of the problem is continuous, the stencil of a 9-point scheme in 2D can be partitioned into 72 different configurations by the interface of singularity of the solution. In the case of discontinuous coefficients, we need to use a 13-point stencil at irregular points, and this results in more possibilities for the stencil partitioning (see \cref{Extend the compact}). Thus, in the present paper we also derive an efficient way to implement the proposed hybrid scheme. A comprehensive literature review of the finite difference approximation of mixed boundary conditions in rectangular domains can be found in \cite{LiPan2021}.
In addition, one should also mention the following literature concerned with the discretization of the boundary conditions for elliptic problems: the sixth order 6-point finite difference scheme for 1-side Neumann and 3-side Dirichlet boundary conditions of Helmholtz equations with constant wave numbers \cite{Nabavi07}, the sixth order 5-point or 6-point finite difference schemes for 1-side Neumann/Robin and 3-side Dirichlet boundary conditions of Helmholtz equations with variable wave numbers \cite{TGGT13}, the fourth order MIB for 4-side Robin boundary conditions of elliptic interface problems \cite{FendZhao20pp109677}, and the up to eighth order MIB for mixed Dirichlet, Neumann and Robin boundary conditions with all constant coefficients of Poisson/Helmholtz equations \cite{FendZhao20pp109391}. Compact finite differences have also been successfully applied to elliptic problems with various boundary conditions in non-rectangular domains. In \cite{RenFengZhao2022} a fourth order MIB for Dirichlet, Neumann, and Robin boundary conditions has been proposed. \cite{WieBube00} developed a second order explicit-jump immersed interface method for problems with Dirichlet and Neumann boundary conditions, and \cite{ItoLiKyei05,LiIto06} proposed fourth order compact finite difference schemes for various combinations of boundary conditions. In \cite{FHM21Helmholtz}, we discussed the $6$-point and $4$-point finite difference schemes with sixth order accuracy for the side points and corner points, respectively, of the Helmholtz equation with a constant wave number $\ka$ in a rectangle.
In this paper, we also extend the above results in \cite{FHM21Helmholtz} to the elliptic equations with variable coefficients and mixed combinations of Dirichlet $u|_{\Gamma_i}=g_i$, Neumann $\tfrac{\partial u}{\partial \nv}|_{\Gamma_j}=g_j$ and Robin $\tfrac{\partial u}{\partial \nv}+\alpha u|_{\Gamma_k}=g_k$ with smooth functions $\alpha$, $g_i$, $g_j$ and $g_k$, where $\Gamma_i/\Gamma_j/\Gamma_k$ for $i,j,k=1,2,3,4$ is one side of the rectangle (see \cref{fig:boundary} for an example of the mixed boundary conditions). \begin{figure}[h] \centering \hspace{12mm} \begin{subfigure}[b]{0.4\textwidth} \begin{tikzpicture}[scale = 1.5] \draw[help lines,step = 1] (0,0) grid (4,4); \node at (1,1)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (1,2)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (1,3)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (2,1)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (2,2)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (2,3)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (3,1)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (3,2)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (3,3)[circle,fill,inner sep=2.5pt,color=black]{}; \draw[line width=1.5pt, red] plot [smooth,tension=0.8] coordinates {(0,1.1) (1,1.4) (2,2.4) (2.7,4)}; \end{tikzpicture} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \begin{tikzpicture}[scale = 1.5] \draw[help lines,step = 1] (0,0) grid (4,4); \node at (1,1)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (1,2)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (1,3)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (2,1)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (2,2)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (2,3)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (3,1)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (3,2)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (3,3)[circle,fill,inner sep=2.5pt,color=black]{}; 
\node at (0,2)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (4,2)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (2,0)[circle,fill,inner sep=2.5pt,color=black]{}; \node at (2,4)[circle,fill,inner sep=2.5pt,color=black]{}; \draw[line width=1.5pt, red] plot [smooth,tension=0.8] coordinates {(0,1.1) (1,1.4) (2,2.4) (2.7,4)}; \end{tikzpicture} \end{subfigure} \caption {For irregular points, the 9-point scheme (left) and the 13-point scheme (right). The curve in red color is the interface curve $\Gamma_I$.} \label{Extend the compact} \end{figure} In order to define the subject of the present paper, let $\Omega=(l_1, l_2)\times(l_3, l_4)$ and $\psi$ be a smooth two-dimensional function. Consider a smooth curve $\Gamma_I:=\{(x,y)\in \Omega: \psi(x,y)=0\}$, which partitions $\Omega$ into two subregions: $\Op:=\{(x,y)\in \Omega\; :\; \psi(x,y)>0\}$ and $\Om:=\{(x,y)\in \Omega\; : \; \psi(x,y)<0\}$. We also define $ a_{\pm}:=a \chi_{\Omega^{\pm}}$, $f_{\pm}:=f \chi_{\Omega^{\pm}}$ and $u_{\pm}:=u \chi_{\Omega^{\pm}}. 
$ The model problem in this paper is defined as follows: \begin{equation} \label{Qeques2} \begin{cases} -\nabla \cdot( a\nabla u)=f &\text{in $\Omega \setminus \Gamma_I$},\\ \left[u\right]=\gd, \quad \left[a\nabla u \cdot \nv \right]=\gn &\text{on $\Gamma_I$},\\ \B_1 u =g_1 \text{ on } \Gamma_1:=\{l_{1}\} \times (l_3,l_4), & \B_2 u =g_2 \text{ on } \Gamma_2:=\{l_{2}\} \times (l_3,l_4),\\ \B_3 u =g_3 \text{ on } \Gamma_3:=(l_1,l_2) \times \{l_{3}\}, & \B_4 u =g_4 \text{ on } \Gamma_4:=(l_1,l_2) \times \{l_{4}\}, \end{cases} \end{equation} where $f$ is the source term, and for any point $(x_0,y_0)\in \Gamma_I$, \begin{align*} [u](x_0,y_0) & :=\lim_{(x,y)\in \Op, (x,y) \to (x_0,y_0) }u(x,y)- \lim_{(x,y)\in \Om, (x,y) \to (x_0,y_0) }u(x,y),\\ [ a\nabla u \cdot \nv](x_0,y_0) & := \lim_{(x,y)\in \Op, (x,y) \to (x_0,y_0) } a\nabla u(x,y) \cdot \nv- \lim_{(x,y)\in \Om, (x,y) \to (x_0,y_0) } a \nabla u(x,y) \cdot \nv, \end{align*} where $\nv$ is the unit normal vector of $\Gamma_I$ pointing towards $\Op$. In \eqref{Qeques2}, the boundary operators satisfy $\B_1,\ldots,\B_4 \in \{\id,\frac{\partial }{\partial \nv}+ \alpha \id\}$, where $\id$ represents the Dirichlet boundary condition; $\frac{\partial }{\partial \nv}+\alpha \id$ with $\alpha=0$, i.e., $\frac{\partial }{\partial \nv}$, represents the Neumann boundary condition; and $\frac{\partial }{\partial \nv}+\alpha \id$ with a nonzero smooth 1D function $\alpha$ represents the Robin boundary condition. An example of the boundary conditions of \eqref{Qeques2} is shown in \cref{fig:boundary}.
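To make the two transmission conditions in \eqref{Qeques2} concrete, the following minimal sketch (hypothetical data, not taken from this paper: the circular interface $\psi=x^2+y^2-1$ with $\Op=\{\psi>0\}$ and manufactured piecewise $a$ and $u$) evaluates the jumps $\gd=[u]$ and $\gn=[a\nabla u\cdot\nv]$ symbolically, with $\nv=\nabla\psi/\|\nabla\psi\|_2$ pointing towards $\Op$:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical example data (not from the paper): circular interface psi = 0
# with Omega^+ = {psi > 0}, and manufactured piecewise coefficient/solution.
psi = x**2 + y**2 - 1
a_plus, a_minus = sp.Integer(10), sp.Rational(1, 2)      # a_+ and a_-
u_plus, u_minus = sp.sin(x) * sp.exp(y), x**2 - y**2     # u_+ and u_-

# Unit normal of Gamma_I pointing towards Omega^+ (psi increases into Omega^+).
grad_psi = sp.Matrix([sp.diff(psi, x), sp.diff(psi, y)])
n = grad_psi / sp.sqrt(grad_psi.dot(grad_psi))

def conormal_flux(a, u):
    # a * (grad u . n), the quantity whose jump is g_N
    return a * (sp.diff(u, x) * n[0] + sp.diff(u, y) * n[1])

# The two interface data of the model problem: g_D = [u], g_N = [a grad u . n].
g_D = sp.simplify(u_plus - u_minus)
g_N = sp.simplify(conormal_flux(a_plus, u_plus) - conormal_flux(a_minus, u_minus))

# Evaluate both jumps at the interface point (1, 0), where n = (1, 0).
print(g_D.subs({x: 1, y: 0}))   # jump [u] at (1,0)
print(g_N.subs({x: 1, y: 0}))   # jump [a grad u . n] at (1,0)
```

The same symbolic computation is a convenient way to manufacture consistent interface data $\gd$, $\gn$ for convergence tests with a known exact solution.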
\begin{figure}[htbp] \begin{tikzpicture} \draw (-pi, -pi) -- (-pi, pi) -- (pi, pi) -- (pi, -pi) --(-pi,-pi); \node (A) at (-2.8,0) {$\Gamma_1$}; \node (A) at (2.8,0) {$\Gamma_2$}; \node (A) at (0,-2.8) {$\Gamma_3$}; \node (A) at (0,2.8) {$\Gamma_4$}; \node (A) at (-5,0) {$\B_1u = \tfrac{\partial u}{\partial \nv}+\alpha u=g_1$}; \node (A) at (4.4,0) {$\B_2u=u=g_2$}; \node (A) at (0,-3.5) {$\B_3u=\tfrac{\partial u}{\partial \nv}=g_3$}; \node (A) at (0,3.5) {$\B_4u = \tfrac{\partial u}{\partial \nv}+\beta u=g_4$}; \end{tikzpicture} \caption {An example of the boundary configuration in \eqref{Qeques2}, where $\alpha$ and $\beta$ are two smooth 1D functions in the $y$ and $x$ directions, respectively.} \label{fig:boundary} \end{figure} We derive a hybrid finite difference scheme to solve \eqref{Qeques2} given the following assumptions: \begin{itemize} \item[(A1)] The coefficient $a$ is positive, piecewise smooth and has uniformly continuous partial derivatives of (total) orders up to six in each of the subregions $\Op$ and $\Om$. The coefficient $a$ is discontinuous across the interface $\Gamma_I$. \item[(A2)] The solution $u$ and the source term $f$ have uniformly continuous partial derivatives of (total) orders up to seven and five respectively in each of the subregions $\Op$ and $\Om$. Both $u$ and $f$ can be discontinuous across the interface $\Gamma_I$. \item[(A3)] The interface curve $\Gamma_I$ is smooth in the sense that for each $(x^*,y^*)\in \Gamma_I$, there exists a local parametric equation: $\gamma: (-\epsilon,\epsilon)\rightarrow \Gamma_I$ with $\epsilon>0$ such that $\gamma(0)=(x^*,y^*)$ and $\|\gamma'(0)\|_{2}\ne 0$. \item[(A4)] The 1D interface functions $\gd \circ \gamma$ and $\gn \circ \gamma$ have uniformly continuous derivatives of orders up to five and four, respectively, on the interface $\Gamma_I$, where $\gamma$ is given in (A3).
\item[(A5)] Each of the 1D boundary functions $g_1,\ldots,g_4$ in \eqref{Qeques2} and $\alpha$ in the Robin boundary conditions has uniformly continuous derivatives of order up to five on the corresponding boundary side $\Gamma_j$, $j=1,\ldots,4$. \end{itemize} The organization of this paper is as follows. In \cref{subsec:regular}, we derive the compact 9-point finite difference scheme with sixth order accuracy for regular points in \cref{thm:regular:interior}. In \cref{Boundarypoints}, we propose the $6$-point schemes with sixth order accuracy for the side points of the boundary conditions $\tfrac{\partial u}{\partial \nv}+\alpha u|_{\Gamma_1}=g_1$, $\tfrac{\partial u}{\partial \nv}|_{\Gamma_3}=g_3$ and $\tfrac{\partial u}{\partial \nv}+\beta u|_{\Gamma_4}=g_4$ in \cref{hybrid:thm:regular:Robin:1,hybrid:thm:regular:Neu:3,hybrid:thm:regular:Robin:4} with two smooth functions $\alpha$ and $\beta$. In \cref{Cornerpoints}, we construct the $4$-point schemes with sixth order accuracy for the corner points of the boundary conditions $\tfrac{\partial u}{\partial \nv}+\alpha u|_{\Gamma_1}=g_1$, $\tfrac{\partial u}{\partial \nv}|_{\Gamma_3}=g_3$ and $\tfrac{\partial u}{\partial \nv}+\beta u|_{\Gamma_4}=g_4$ in \cref{thm:corner:1,thm:corner:2} with two smooth functions $\alpha$ and $\beta$. In \cref{hybrid:Irregular:points}, we first propose a simpler version of the transmission equation for the interface curve $\Gamma_I$ in \cref{thm:interface}. Then the 13-point finite difference scheme with fifth order accuracy for irregular points is shown in \cref{13point:Inter}. To implement the 13-point scheme efficiently, we provide implementation details in \eqref{ChandX} to \eqref{ckln:irregular}. In \cref{sec:numerical}, we present $10$ numerical examples, including $5$ examples with known exact solutions $u$, for our proposed hybrid finite difference scheme with contrast ratios $\sup(a_+)/\inf(a_-)=10^{-3},10^{-6},10^{6},10^{7}$.
Our numerical experiments confirm the flexibility of our proposed hybrid scheme and its sixth order accuracy in the $l_2$ and $l_{\infty}$ norms. For the coefficient $a(x,y)$, the jump functions $g_D,g_N$, the interface curve $\Gamma_I$ and the boundary conditions, we test the following cases: \begin{itemize} \item Either $a_+/a_-$ or $a_-/a_+$ is very large on the interface $\Gamma_I$ for high contrast coefficients $a$. \item The jump functions $g_D$ and $g_N$ are both either constant or non-constant. \item The interface curve $\Gamma_I$ is either smooth or sharp-edged. \item 4-side Dirichlet boundary conditions. \item 3-side Dirichlet and 1-side Robin boundary conditions. \item 1-side Dirichlet, 1-side Neumann and 2-side Robin boundary conditions. \end{itemize} In \cref{sec:hybrid:Conclu}, we summarize the main contributions of this paper. Finally, in \cref{hybrid:sec:proofs} we present the proofs of the results stated in \cref{sec:sixord}. \section{Hybrid finite difference method on uniform Cartesian grids} \label{sec:sixord} We follow the same setup as in \cite{FHM21a,FHM21b,FHM21Helmholtz}. Let $\Omega=(l_1,l_2)\times (l_3,l_4)$ and assume $l_4-l_3=N_0 (l_2-l_1)$ for some $N_0 \in \N$. For any positive integer $N_1$, we define $N_2:=N_0 N_1$ and so the grid size is $h:=(l_2-l_1)/N_1=(l_4-l_3)/N_2$. Let \be \label{xiyj} x_i=l_1+i h, \quad i=0,\ldots,N_1, \quad \text{and} \quad y_j=l_3+j h, \quad j=0,\ldots,N_2. \ee Recall that a compact stencil centered at $(x_i,y_j)$ contains nine points $(x_i+kh, y_j+lh)$ for $k,l\in \{-1,0,1\}$. Define \be\label{Compact:Set} \begin{split} & d_{i,j}^+:=\{(k,\ell) \; : \; k,\ell\in \{-1,0,1\}, \psi(x_i+kh, y_j+\ell h)\ge 0\}, \quad \mbox{and}\\ & d_{i,j}^-:=\{(k,\ell) \; : \; k,\ell\in \{-1,0,1\}, \psi(x_i+kh, y_j+\ell h)<0\}. 
\end{split} \ee Thus, the interface curve $\Gamma_I:=\{(x,y)\in \Omega \; :\; \psi(x,y)=0\}$ splits the nine points in our compact stencil into two disjoint sets $\{(x_{i+k}, y_{j+\ell})\; : \; (k,\ell)\in d_{i,j}^+\} \subseteq \Op \cup \Gamma_I$ and $\{(x_{i+k}, y_{j+\ell})\; : \; (k,\ell)\in d_{i,j}^-\} \subseteq \Om$. We refer to a grid/center point $(x_i,y_j)$ as \emph{a regular point} if $d_{i,j}^+=\emptyset$ or $d_{i,j}^-=\emptyset$, that is, if all nine points of its stencil lie in $\Op \cup \Gamma_I$ (hence $d_{i,j}^-=\emptyset$) or all lie in $\Om$ (hence $d_{i,j}^+=\emptyset$). Otherwise, i.e., if both $d_{i,j}^+$ and $d_{i,j}^-$ are nonempty, the center point $(x_i,y_j)$ is referred to as \emph{an irregular point}. Now, let us pick and fix a base point $(x_i^*,y_j^*)$ inside the open square $(x_i-h,x_i+h)\times (y_j-h,y_j+h)$, which can be written as \be \label{base:pt} x_i^*=x_i-v_0h \quad \mbox{and}\quad y_j^*=y_j-w_0h \quad \mbox{with}\quad -1<v_0, w_0<1. \ee Throughout the paper, we shall use the following notations: \be\label{ufmn} \begin{split} &\alpha^{(n)}:=\frac{d^{n} \alpha}{ dy^n }(y_j^*), \quad {g_1}^{(n)}:=\frac{d^{n} g_1}{ dy^n }(y_j^*), \\ & \beta^{(m)}:=\frac{d^{m} \beta}{ dx^m }(x_i^*), \quad {g_3}^{(m)}:=\frac{d^{m} g_3}{ dx^m }(x_i^*), \quad {g_4}^{(m)}:=\frac{d^{m} g_4}{ dx^m }(x_i^*),\\ &a^{(m,n)}:=\frac{\partial^{m+n} a}{ \partial^m x \partial^n y}(x_i^*,y_j^*), \quad u^{(m,n)}:=\frac{\partial^{m+n} u}{ \partial^m x \partial^n y}(x_i^*,y_j^*),\quad f^{(m,n)}:=\frac{\partial^{m+n} f}{ \partial^m x \partial^n y}(x_i^*,y_j^*), \end{split} \ee which denote the derivatives of these functions evaluated at the base point $(x_i^*,y_j^*)$.
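The sets $d_{i,j}^{\pm}$ make the regular/irregular classification straightforward to automate. A minimal sketch (with a hypothetical circular level-set function $\psi(x,y)=x^2+y^2-1$, not one of the paper's test problems) classifies each stencil center by the signs of $\psi$ at the nine stencil points:

```python
import numpy as np

# A minimal sketch (not the paper's code) of the regular/irregular
# classification: the center (x_i, y_j) is regular when all nine points of
# its compact stencil lie on one side of psi = 0 (psi >= 0 counts as the
# Omega^+ side, matching d_{i,j}^+), and irregular otherwise.
def classify(psi, xi, yj, h):
    in_plus = [psi(xi + k * h, yj + l * h) >= 0    # (k, l) in d_{i,j}^+ ?
               for k in (-1, 0, 1) for l in (-1, 0, 1)]
    return "regular" if all(in_plus) or not any(in_plus) else "irregular"

# Hypothetical example: circular interface psi(x, y) = x^2 + y^2 - 1.
psi = lambda x, y: x * x + y * y - 1.0
h = 0.1
grid = np.arange(-2.0, 2.0 + h / 2, h)
irregular = [(x, y) for x in grid for y in grid
             if classify(psi, x, y, h) == "irregular"]

# Irregular points cluster in an O(h) neighbourhood of the interface circle.
assert all(abs(np.hypot(x, y) - 1.0) <= 2 * h for x, y in irregular)
print(len(irregular), "irregular points")
```

Only at the (relatively few) irregular points does the hybrid method switch from the 9-point to the 13-point stencil, which keeps the extra cost localized to a thin band around $\Gamma_I$.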
By \cite[(2.13)]{FHM21b}, we have \be \label{u:approx:key:V} u(x+x_i^*,y+y_j^*) = \sum_{(m,n)\in \ind_{M+1}^{V,1}} u^{(m,n)} G^V_{M,m,n}(x,y) +\sum_{(m,n)\in \ind_{M-1}} f^{(m,n)} Q^V_{M,m,n}(x,y)+\bo(h^{M+2}), \ee for $x,y\in (-2h,2h)$, where $u$ is the exact solution for \eqref{Qeques2}, the index sets $\ind_{M-1}$ and $\ind_{M+1}^{V,1}$ are defined in \eqref{Sk} and \eqref{indV12} respectively, and the functions $G^V_{M,m,n}$ and $Q^V_{M,m,n}$ are defined in \eqref{GVmn} and \eqref{QVmn} respectively. By \cite[(2.13) and (2.14)]{FHM21Helmholtz}, we also have \be \label{u:approx:key:H} u(x+x_i^*,y+y_j^*) = \sum_{(m,n)\in \ind_{M+1}^{H, 1}} u^{(m,n)} G^{H}_{M,m,n}(x,y) + \sum_{(m,n)\in \ind_{M-1}} f^{(m,n)} Q^H_{M,m,n}(x,y) + \mathcal{O}(h^{M+2}), \ee where the index sets $\ind_{M-1}$ and $\ind_{M+1}^{H,1}$ are defined in \eqref{Sk} and \eqref{indH12} respectively, and the functions $G^H_{M,m,n}$ and $Q^H_{M,m,n}$ are defined in \eqref{GHmn} and \eqref{QHmn} respectively. For the sake of better readability, all technical proofs of this section are provided in \cref{hybrid:sec:proofs}. \subsection{Stencils for regular points (interior)} \label{subsec:regular} We now extend the fourth order compact scheme in \cite[Theorem~3.1]{FHM21b} to a sixth order compact scheme. We only need to choose $M=6$ and replace $G_{m,n}$, $H_{m,n}$ and $\ind_{M+1}^{1}$ in \cite{FHM21b} by $G^V_{M,m,n}$ in \eqref{GVmn}, $Q^V_{M,m,n}$ in \eqref{QVmn}, and $\ind_{M+1}^{V,1}$ in \eqref{indV12}. We choose $(x_i^*,y_j^*)$ to be the center point of the 9-point compact scheme, i.e., $(x_i^*,y_j^*)=(x_i,y_j)$ and $v_0=w_0=0$ in \eqref{base:pt}. \begin{theorem}\label{thm:regular:interior} Let a grid point $(x_i,y_j)$ be an interior regular point, i.e., $d_{i,j}^+=\emptyset$ or $d_{i,j}^-=\emptyset$, and $(x_i,y_j) \notin \partial \Omega$. 
Let $(u_{h})_{i,j}$ denote the numerical approximation of the exact solution $u$ of the elliptic interface problem \eqref{Qeques2} at an interior regular point $(x_i, y_j)$. Then the following difference scheme on a stencil centered at $(x_{i},y_j)$: \be\label{stencil:regular:interior:V} \mathcal{L}_h u_h := \begin{aligned} &C_{-1,-1}(u_{h})_{i-1,j-1}& +&C_{0,-1}(u_{h})_{i,j-1}& +&C_{1,-1}(u_{h})_{i+1,j-1} \\ +&C_{-1,0}(u_{h})_{i-1,j}& +&C_{0,0}(u_{h})_{i,j}& +&C_{1,0}(u_{h})_{i+1,j}\\ +&C_{-1,1}(u_{h})_{i-1,j+1}& +&C_{0,1}(u_{h})_{i,j+1}& +&C_{1,1}(u_{h})_{i+1,j+1}\\ \end{aligned} =\sum_{(m,n)\in \ind_{5}} f^{(m,n)}C_{f,m,n}, \ee achieves sixth order of accuracy for $-\nabla \cdot( a\nabla u)=f$ at the point $(x_i,y_j)$, where \[ C_{f,m,n}:= \sum_{k=-1}^1\sum_{\ell=-1}^1 C_{k,\ell} Q^{V}_{6,m,n}(kh, \ell h), \quad \mbox{for all} \quad (m,n)\in \ind_{5}, \] \begin{equation}\label{Ckell} C_{k,\ell}(h):=\sum_{i=0}^{M+1} c_{k,\ell,i}h^i, \qquad k,\ell \in \{-1,0,1\}, \end{equation} and $\{c_{k,\ell,i}\}$ is any non-trivial solution to the linear system induced by \cite[(3.5)]{FHM21b} with $M=6$. Moreover, the maximum accuracy order of a compact finite difference scheme for $-\nabla \cdot( a\nabla u)=f$ at the point $(x_i,y_j)$ is six. \end{theorem} To verify \cref{thm:regular:interior} with the numerical experiments in \cref{sec:numerical}, we use the unique solution $\{c_{k,\ell,i}\}$ to \cite[(3.5)]{FHM21b} with $M=6$ and the normalization condition $c_{-1,-1,0}=1$, setting to zero all $c_{-1,0,7}, c_{0,-1,7}, c_{0,0,6}, c_{0,0,7}, c_{-1,1,i_1}, c_{0,1,i_2}, c_{1,-1,i_2}, c_{1,0,i_3}, c_{1,1,i_4}$ for $i_1=1,6,7$, $i_2=5,6,7$, $i_3=4,5,6,7$ and $i_4=2,3,4,5,6,7$. \subsection{Stencils for boundary points}\label{All:Boundary} In this subsection, we extend \cite[Section~2.2]{FHM21Helmholtz} and discuss how to find a compact finite difference scheme with accuracy order six centered at $(x_i,y_j) \in \partial \Omega$. 
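As a symbolic sanity check of the structure of \eqref{stencil:regular:interior:V}, whose right-hand side combines derivatives $f^{(m,n)}$ of the source term, consider the simplest special case $a\equiv 1$. There a 9-point compact Laplacian with $f$-derivative corrections on the right-hand side (the classical sixth order compact scheme for the Poisson equation; this is not the paper's variable-coefficient construction) has $O(h^6)$ truncation error, which can be verified with a Taylor expansion:

```python
import sympy as sp

x, y, h = sp.symbols("x y h")

# Sanity check (not the paper's variable-coefficient scheme): for a == 1 the
# classical sixth order compact 9-point scheme for -Laplace(u) = f couples
# the stencil with derivatives of f on the right-hand side, analogously to
# the C_{f,m,n} terms, and its truncation error is O(h^6).
u = sp.sin(x) * sp.exp(2 * y)                 # arbitrary smooth test function
lap = lambda g: sp.diff(g, x, 2) + sp.diff(g, y, 2)
f = -lap(u)

shift = lambda k, l: u.subs({x: x + k * h, y: y + l * h}, simultaneous=True)
edges = shift(1, 0) + shift(-1, 0) + shift(0, 1) + shift(0, -1)
corners = shift(1, 1) + shift(1, -1) + shift(-1, 1) + shift(-1, -1)

# 9-point compact discrete Laplacian applied to the exact solution u.
lhs = -(4 * edges + corners - 20 * u) / (6 * h**2)
# Right-hand side with f-derivative corrections.
rhs = (f + h**2 / 12 * lap(f) + h**4 / 360 * lap(lap(f))
       + h**4 / 180 * sp.diff(f, x, 2, y, 2))

# All truncation terms up to h^5 cancel, i.e. the scheme is sixth order.
err = sp.series(sp.expand(lhs - rhs), h, 0, 6).removeO()
assert sp.simplify(err) == 0
print("O(h^6) truncation confirmed")
```

The same Taylor-expansion test can be used to verify the accuracy order of any candidate stencil and normalization before assembling the global linear system.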
For clarity of presentation, we consider the following boundary conditions \be\label{model:corner} \begin{aligned} &\B_1u=\tfrac{\partial u}{\partial \nv}+\alpha u=g_1 \;\; \text{on} \;\; \Gamma_1,\qquad && \B_2u=u=g_2 \;\; \text{on} \;\; \Gamma_2,\\ &\B_3u=\tfrac{\partial u}{\partial \nv}=g_3 \;\; \text{on} \;\; \Gamma_3, &&\B_4u=\tfrac{\partial u}{\partial \nv}+\beta u=g_4 \;\; \text{on} \;\; \Gamma_4, \end{aligned} \ee where $\alpha$ and $\beta$ are two smooth 1D functions in the $y$ and $x$ directions, respectively. For the 6-point and 4-point schemes in this subsection, we choose $(x_i^*,y_j^*)=(x_i,y_j)$ and $v_0=w_0=0$ in \eqref{base:pt}. An illustration of \eqref{model:corner} is shown in \cref{fig:boundary}. For the following identities in \eqref{CB1:EQ} and \eqref{CB4:EQ}, we define \be\label{delta:Fun} \delta_{a,a}:=1 \quad \mbox{and}\quad \delta_{a,b}:=0 \quad \mbox{for } a\ne b. \ee \subsubsection{Side points on the boundary $\partial \Omega$}\label{Boundarypoints} \begin{theorem}\label{hybrid:thm:regular:Robin:1} Let $(u_{h})_{i,j}$ denote the numerical approximation of the exact solution $u$ of the elliptic interface problem \eqref{Qeques2} at the point $(x_i, y_j)$. 
The following discretization on a stencil centered at $(x_0,y_j)\in \Gamma_1$: \be \label{stencil:regular:Robin:1} \mathcal{L}^{\mathcal{B}_1}_h u_h := \begin{aligned} &C^{\mathcal{B}_1}_{0,-1}(u_{h})_{0,j-1}& +&C^{\mathcal{B}_1}_{1,-1}(u_{h})_{1,j-1} \\ +&C^{\mathcal{B}_1}_{0,0}(u_{h})_{0,j}& +&C^{\mathcal{B}_1}_{1,0}(u_{h})_{1,j}\\ +&C^{\mathcal{B}_1}_{0,1}(u_{h})_{0,j+1}& +&C^{\mathcal{B}_1}_{1,1}(u_{h})_{1,j+1}\\ \end{aligned} = \sum_{(m,n)\in \ind_{4}} f^{(m,n)}C^{\mathcal{B}_1}_{f,m,n}+\sum_{n=0}^{5}g_{1}^{(n)}C_{g_{1},n}^{\B_1}, \ee achieves sixth order of accuracy for $\B_1u=\frac{\partial u}{\partial \nv}+\alpha u=g_1$ at the point $(x_0,y_j) \in \Gamma_1$, where \[ C^{\mathcal{B}_1}_{f,m,n} := \sum\limits_{k=0}^1 \sum\limits_{\ell=-1}^1 C^{\mathcal{B}_1}_{k,\ell} Q^{V}_{5,m,n}(kh, \ell h),\quad \mbox{for all} \quad (m,n) \in \ind_4, \] \[ C_{g_1,n}^{\B_1} := -\sum\limits_{k=0}^1 \sum\limits_{\ell=-1}^1 C^{\mathcal{B}_1}_{k,\ell} G^{V}_{5,1,n}(kh, \ell h), \quad \mbox{for all} \quad n=0, \dots, 5, \] \[ C^{\B_1}_{k,\ell}(h):=\sum_{i=0}^{6} c^{\B_1}_{k,\ell,i}h^i, \qquad k\in\{0,1\},\ell\in\{-1,0,1\}, \] and $\{c^{\B_1}_{k,\ell,i}\}$ is any non-trivial solution to the linear system induced by \be \begin{split}\label{CB1:EQ} &\sum_{k=0}^1 \sum_{\ell=-1}^1 C^{\B_1}_{k,\ell} \left( G^{V}_{5,0,n}(kh, \ell h) + \sum_{i=n}^5 {i\choose n} {\alpha}^{(i-n)} G^{V}_{5,1,i}(kh, \ell h) (1-\delta_{n,6}) \right)\\ &=\bo(h^{7}),\quad \mbox{for all } \quad n=0,1,\dots,6. \end{split} \ee Moreover, the maximum accuracy order of a 6-point finite difference scheme for $\B_1u=\frac{\partial u}{\partial \nv} +\alpha u=g_1$ at the point $(x_0,y_j) \in \Gamma_1$ with two smooth functions $\alpha(y)$ and $a(x,y)$ is six. 
\end{theorem} In our numerical experiments in \cref{sec:numerical}, we use the unique solution $\{c^{\B_1}_{k,\ell,i}\}$ to \eqref{CB1:EQ} with the normalization condition $c^{\B_1}_{1,1,0}=1$, where all $c^{\B_1}_{0,0,6}, c^{\B_1}_{0,1,5}, c^{\B_1}_{0,1,6}, c^{\B_1}_{1,-1,i_1}, c^{\B_1}_{1,0,i_2}, c^{\B_1}_{1,1,i_3}$ for $i_1=1,4,5,6$, $i_2=3,4,5,6$, and $i_3=2,3,4,5,6$ are set to zero. In particular, if $a$ in \eqref{Qeques2} is a discontinuous piecewise constant coefficient and $\B_1u=\frac{\partial u}{\partial \nv}+\alpha u=g_1$ with a constant $\alpha$, then the coefficients in \eqref{stencil:regular:Robin:1} are {\small{ \be \label{stencil:CB1:R} \begin{aligned} &C^{\mathcal{B}_1}_{0,1} =\frac{1}{75}{\alpha}^2h^2+\frac{1}{5}{\alpha}h+2,\ \ C^{\mathcal{B}_1}_{0,0} =\frac{8}{675}{\alpha}^5h^5-\frac{16}{675}{\alpha}^4h^4+\frac{16}{225}{\alpha}^3h^3-\frac{8}{25}{\alpha}^2h^2-\frac{34}{5}{\alpha}h-10,\\ & C^{\mathcal{B}_1}_{1,1} =1,\ \ \ C^{\mathcal{B}_1}_{1,0} = -\frac{8}{675}{\alpha}^4h^4+\frac{8}{225}{\alpha}^3h^3-\frac{8}{75}{\alpha}^2h^2+\frac{2}{5}{\alpha}h+4,\ \ C^{\mathcal{B}_1}_{0,-1}=C^{\mathcal{B}_1}_{0,1},\ \ C^{\mathcal{B}_1}_{1,-1}=C^{\mathcal{B}_1}_{1,1}. \end{aligned} \ee }} Similarly, we obtain the following \cref{hybrid:thm:regular:Neu:3,hybrid:thm:regular:Robin:4}. \begin{theorem}\label{hybrid:thm:regular:Neu:3} Let $(u_{h})_{i,j}$ be the numerical approximation of the exact solution $u$ of the elliptic interface problem \eqref{Qeques2} at the point $(x_i, y_j)$. 
Then the following discretization on a stencil centered at $(x_i,y_0)\in \Gamma_3$: \be \label{stencil:regular:H:Neu:3} \mathcal{L}^{\mathcal{B}_3}_h u_h := \begin{aligned} &C^{\mathcal{B}_3}_{-1,0}(u_{h})_{i-1,0}& + &C^{\mathcal{B}_3}_{0,0}(u_{h})_{i,0}& +&C^{\mathcal{B}_3}_{1,0}(u_{h})_{i+1,0}\\ + &C^{\mathcal{B}_3}_{-1,1}(u_{h})_{i-1,1}& +&C^{\mathcal{B}_3}_{0,1}(u_{h})_{i,1}& +&C^{\mathcal{B}_3}_{1,1}(u_{h})_{i+1,1}\\ \end{aligned} = \sum_{(m,n)\in \ind_{4}} f^{(m,n)}C^{\mathcal{B}_3}_{f,m,n}+\sum_{n=0}^{5}g_{3}^{(n)}C_{g_{3},n}^{\B_3}, \ee achieves sixth order of accuracy for $\B_3u=\frac{\partial u}{\partial \nv}=g_3$ at the point $(x_i,y_0) \in \Gamma_3$, where \[ C^{\mathcal{B}_3}_{f,m,n} := \sum\limits_{k=-1}^1 \sum\limits_{\ell=0}^1 C^{\mathcal{B}_3}_{k,\ell} Q^{H}_{5,m,n}(kh, \ell h), \quad \mbox{for all} \quad (m,n) \in \ind_4, \] \[ C_{g_3,n}^{\B_3} := -\sum\limits_{k=-1}^1 \sum\limits_{\ell=0}^1 C^{\mathcal{B}_3}_{k,\ell} G^{H}_{5,n,1}(kh, \ell h), \quad \mbox{for all} \quad n=0, \dots, 5, \] \[ C^{\B_3}_{k,\ell}(h):=\sum_{i=0}^{6} c^{\B_3}_{k,\ell,i}h^i, \qquad k\in\{-1,0,1\},\ell\in\{0,1\}, \] and $\{c^{\B_3}_{k,\ell,i}\}$ is any non-trivial solution to the linear system induced by \begin{equation}\label{CB3:EQ} \sum_{k=-1}^1 \sum_{\ell=0}^1 C^{\B_3}_{k,\ell} G^{H}_{5,n,0}(kh, \ell h) =\bo(h^{7}),\quad \mbox{for all } \quad n=0,1,\dots,6. \end{equation} Moreover, the maximum accuracy order of a 6-point finite difference scheme for $\B_3u=\frac{\partial u}{\partial \nv} =g_3$ at the point $(x_i,y_0) \in \Gamma_3$ with a smooth function $a(x,y)$ is six. \end{theorem} For our numerical experiments in \cref{sec:numerical}, we use the unique solution $\{c^{\B_3}_{k,\ell,i}\}$ to \eqref{CB3:EQ} with the normalization condition $c^{\B_3}_{1,1,0}=1$, presetting to zero all $c^{\B_3}_{0,0,6}, c^{\B_3}_{-1,1,i_1}, c^{\B_3}_{0,1,i_2}, c^{\B_3}_{1,0,i_3}, c^{\B_3}_{1,1,i_4}$ for $i_1=1,5,6$, $i_2=4,5,6$, $i_3=3,4,5,6$, and $i_4=2,3,4,5,6$. 
In particular, if $a$ is a discontinuous piecewise constant coefficient in \eqref{Qeques2}, then the coefficients in \eqref{stencil:regular:H:Neu:3} are \be \label{stencil:CB3:N} C^{\mathcal{B}_3}_{1,0} =2,\ \ C^{\mathcal{B}_3}_{1,1} =1,\ \ C^{\mathcal{B}_3}_{0,0} =-10,\ \ C^{\mathcal{B}_3}_{0,1} =4,\ \ C^{\mathcal{B}_3}_{-1,0}=C^{\mathcal{B}_3}_{1,0},\ \ C^{\mathcal{B}_3}_{-1,1}=C^{\mathcal{B}_3}_{1,1}. \ee \begin{theorem}\label{hybrid:thm:regular:Robin:4} Let $(u_{h})_{i,j}$ be the numerical approximation of the exact solution $u$ of the elliptic interface problem \eqref{Qeques2} at the point $(x_i, y_j)$. Then the following discretization stencil centered at $(x_i,y_{N_2})\in \Gamma_4$: \be \label{stencil:regular:H:Robin:4} \mathcal{L}^{\mathcal{B}_4}_h u_h := \begin{aligned} &C^{\mathcal{B}_4}_{-1,-1}(u_{h})_{i-1,-1}& +&C^{\mathcal{B}_4}_{0,-1}(u_{h})_{i,-1}& +&C^{\mathcal{B}_4}_{1,-1}(u_{h})_{i+1,-1}\\ +&C^{\mathcal{B}_4}_{-1,0}(u_{h})_{i-1,0}& + &C^{\mathcal{B}_4}_{0,0}(u_{h})_{i,0}& +&C^{\mathcal{B}_4}_{1,0}(u_{h})_{i+1,0}\\ \end{aligned} = \sum_{(m,n)\in \ind_{4}} f^{(m,n)}C^{\mathcal{B}_4}_{f,m,n}+\sum_{n=0}^{5}g_{4}^{(n)}C_{g_{4},n}^{\B_4}, \ee achieves sixth order of accuracy for $\B_4u=\frac{\partial u}{\partial \nv}+\beta u=g_4$ at the point $(x_i,y_{N_2}) \in \Gamma_4$, where \[ C^{\mathcal{B}_4}_{f,m,n} := \sum\limits_{k=-1}^1 \sum\limits_{\ell=-1}^0 C^{\mathcal{B}_4}_{k,\ell} Q^{H}_{5,m,n}(kh, \ell h), \quad \mbox{for all} \quad (m,n) \in \ind_4, \] \[ C_{g_4,n}^{\B_4} := \sum\limits_{k=-1}^1 \sum\limits_{\ell=-1}^0 C^{\mathcal{B}_4}_{k,\ell} G^{H}_{5,n,1}(kh, \ell h), \quad \mbox{for all} \quad n=0, \dots, 5, \] \[ C^{\B_4}_{k,\ell}(h):=\sum_{i=0}^{6} c^{\B_4}_{k,\ell,i}h^i, \qquad k\in\{-1,0,1\},\ell\in\{-1,0\}, \] and $\{c^{\B_4}_{k,\ell,i}\}$ is any non-trivial solution to the linear system induced by \be \begin{split}\label{CB4:EQ} &\sum_{k=-1}^1 \sum_{\ell=-1}^0 C^{\B_4}_{k,\ell} \left( G^{H}_{5,n,0}(kh, \ell h) - \sum_{i=n}^5 {i\choose n}
{\beta}^{(i-n)} G^{H}_{5,i,1}(kh, \ell h) (1-\delta_{n,6}) \right)\\ &=\bo(h^{7}),\quad \mbox{for all } \quad n=0,1,\dots,6. \end{split} \ee Moreover, the maximum accuracy order of a 6-point finite difference scheme for $\B_4u=\frac{\partial u}{\partial \nv} +\beta u=g_4$ at the point $(x_i,y_{N_2}) \in \Gamma_4$ with two smooth functions $\beta(x)$ and $a(x,y)$ is six. \end{theorem} For our numerical experiments in \cref{sec:numerical}, we use the unique solution $\{c^{\B_4}_{k,\ell,i}\}$ to \eqref{CB4:EQ} with the normalization condition $c^{\B_4}_{1,-1,0}=1$, presetting to zero all $c^{\B_4}_{0,-1,6}, c^{\B_4}_{-1,0,5}, c^{\B_4}_{-1,0,6}, c^{\B_4}_{0,0,i_1}, c^{\B_4}_{1,-1,i_2}, c^{\B_4}_{1,0,i_3}$ with $i_1=4,5,6$, $i_2=2,3,4,5,6$, $i_3=1,3,4,5,6$. In particular, if $a$ is a discontinuous piecewise constant coefficient and $\B_4u=\frac{\partial u}{\partial \nv}+\beta u=g_4$ with a constant $\beta$, then the coefficients in \eqref{stencil:regular:H:Robin:4} are \be \label{stencil:CB4:R} \begin{aligned} &C^{\mathcal{B}_4}_{1,-1} =1,\ \ C^{\mathcal{B}_4}_{0,-1} =-\frac{8}{675}{\beta}^4h^4+\frac{8}{225}{\beta}^3h^3-\frac{8}{75}{\beta}^2h^2+\frac{2}{5}{\beta}h+4,\\ & C^{\mathcal{B}_4}_{1,0} =\frac{1}{75}{\beta}^2h^2+\frac{1}{5}{\beta}h+2 ,\ \ \ C^{\mathcal{B}_4}_{0,0} =\frac{8}{675}{\beta}^5h^5-\frac{16}{675}{\beta}^4h^4+\frac{16}{225}{\beta}^3h^3-\frac{8}{25}{\beta}^2h^2-\frac{34}{5}{\beta}h-10,\\ & C^{\mathcal{B}_4}_{-1,-1}=C^{\mathcal{B}_4}_{1,-1},\ \ \ C^{\mathcal{B}_4}_{-1,0}=C^{\mathcal{B}_4}_{1,0}. \end{aligned} \ee \subsubsection{Stencils for corner points}\label{Cornerpoints} \begin{theorem} \label{thm:corner:1} Let $(u_{h})_{i,j}$ be the numerical approximation of the exact solution $u$ of the elliptic interface problem \eqref{Qeques2} at the point $(x_i, y_j)$. 
Then the following discretization on a stencil centered at the corner point $(x_0,y_0)$: \be \label{stencil:corner:1} \begin{aligned} \mathcal{L}^{\rr_1}_h u_h := \begin{aligned} &C^{\mathcal{R}_1}_{0,0}(u_{h})_{0,0}& +&C^{\mathcal{R}_1}_{1,0}(u_{h})_{1,0}\\ +&C^{\mathcal{R}_1}_{0,1}(u_{h})_{0,1}& +&C^{\mathcal{R}_1}_{1,1}(u_{h})_{1,1} \end{aligned} = \sum_{(m,n)\in \ind_{4}} f^{(m,n)}C^{\rr_1}_{f,m,n} + \sum_{n=0}^{5}g_{1}^{(n)}C_{g_{1},n}^{\rr_1} + \sum_{n=0}^{5}g_{3}^{(n)}C_{g_{3},n}^{\rr_1}, \end{aligned} \ee achieves sixth order of accuracy for $\B_1u=\frac{\partial u}{\partial \nv}+\alpha u=g_1$ and $\B_3u=\frac{\partial u}{\partial \nv}=g_3$ at the point $(x_0,y_0)$, where $\{C^{\rr_1}_{k,\ell}\}_{k,\ell \in \{0,1\}}$, $\{C^{\rr_1}_{f,m,n}\}_{(m,n) \in \ind_4}$, $\{C^{\rr_1}_{g_1,n}\}_{n=0}^5$ and $\{C^{\rr_1}_{g_3,n}\}_{n=0}^5$ can be calculated by replacing $\B_1u=\frac{\partial u}{\partial \nv}- \ia \ka u=g_1$ by $\B_1u=\frac{\partial u}{\partial \nv}+\alpha u=g_1$ in \cite[Theorem 2.4]{FHM21Helmholtz} with $M=M_f=M_{g_1}=M_{g_3}=5$, and replacing $G^{V}_{M,m,n}$, $Q^{V}_{M,m,n}$, $G^{H}_{M,m,n}$ and $Q^{H}_{M,m,n}$ in \cite{FHM21Helmholtz} by \eqref{GVmn}, \eqref{QVmn}, \eqref{GHmn} and \eqref{QHmn}, respectively. Moreover, the maximum accuracy order of a 4-point finite difference scheme for $\B_1u=\frac{\partial u}{\partial \nv} +\alpha u=g_1$ and $\B_3u=\frac{\partial u}{\partial \nv}=g_3$ at the point $(x_0,y_0)$ with two smooth functions $\alpha(y)$ and $a(x,y)$ is six. 
\end{theorem} In particular, if $a$ in \eqref{Qeques2} is a discontinuous piecewise constant coefficient, and $\B_1u=\frac{\partial u}{\partial \nv}+\alpha u=g_1$ with a constant $\alpha$, then the coefficients in \eqref{stencil:corner:1} are \be \label{CR1:Corner:1} \begin{aligned} &C^{\mathcal{R}_1}_{0,0} = \frac{4}{675}{\alpha}^5h^5-\frac{8}{675}{\alpha}^4h^4+\frac{8}{225}{\alpha}^3h^3-\frac{4}{25}{\alpha}^2h^2-\frac{17}{5}{\alpha}h-5,\ \ C^{\mathcal{R}_1}_{0,1} = \frac{1}{75}{\alpha}^2h^2+\frac{1}{5}{\alpha}h+2,\\ &C^{\mathcal{R}_1}_{1,0} = -\frac{4}{675}{\alpha}^4h^4+\frac{4}{225}{\alpha}^3h^3-\frac{4}{75}{\alpha}^2h^2+\frac{1}{5}{\alpha}h+2,\ \ C^{\mathcal{R}_1}_{1,1} = 1.\\ \end{aligned} \ee \begin{theorem} \label{thm:corner:2} Let $(u_{h})_{i,j}$ be the numerical approximation of the exact solution $u$ of the elliptic interface problem \eqref{Qeques2} at the point $(x_i, y_j)$. Then the following discretization on a stencil centered at the corner point $(x_0,y_{N_2})$: \be \label{stencil:corner:2} {\footnotesize{ \begin{aligned} \mathcal{L}^{\rr_2}_h u_h := \begin{aligned} &C^{\mathcal{R}_2}_{0,-1}(u_{h})_{0,N_2-1}& +&C^{\mathcal{R}_2}_{1,-1}(u_{h})_{1,N_2-1}\\ +&C^{\mathcal{R}_2}_{0,0}(u_{h})_{0,N_2}& +&C^{\mathcal{R}_2}_{1,0}(u_{h})_{1,N_2} \end{aligned} = \sum_{(m,n)\in \ind_{4}} f^{(m,n)}C^{\rr_2}_{f,m,n} + \sum_{n=0}^{5}g_{1}^{(n)}C_{g_{1},n}^{\rr_2} + \sum_{n=0}^{5}g_{4}^{(n)}C_{g_{4},n}^{\rr_2}, \end{aligned} } } \ee achieves sixth order of accuracy for $\B_1u=\frac{\partial u}{\partial \nv}+\alpha u=g_1$ and $\B_4u=\frac{\partial u}{\partial \nv}+\beta u=g_4$ at the point $(x_0,y_{N_2})$, where $\{C^{\rr_2}_{k,\ell}\}_{k\in \{0,1\},\ell \in \{-1,0\}}$, $\{C^{\rr_2}_{f,m,n}\}_{(m,n) \in \ind_4}$, $\{C^{\rr_2}_{g_1,n}\}_{n=0}^5$ and $\{C^{\rr_2}_{g_4,n}\}_{n=0}^5$ can be calculated by replacing $\B_1u=\frac{\partial u}{\partial \nv}- \ia \ka u=g_1$ and $\B_4u=\frac{\partial u}{\partial \nv}-\ia \ka u=g_4$ by $\B_1u=\frac{\partial u}{\partial 
\nv}+\alpha u=g_1$ and $\B_4u=\frac{\partial u}{\partial \nv}+\beta u=g_4$ respectively in \cite[Theorem 2.5]{FHM21Helmholtz} with $M=M_f=M_{g_1}=M_{g_4}=5$ and replacing $G^{V}_{M,m,n}$, $Q^{V}_{M,m,n}$, $G^{H}_{M,m,n}$ and $Q^{H}_{M,m,n}$ in \cite{FHM21Helmholtz} by \eqref{GVmn}, \eqref{QVmn}, \eqref{GHmn} and \eqref{QHmn}, respectively. Moreover, the maximum accuracy order of a 4-point finite difference scheme for $\B_1u=\frac{\partial u}{\partial \nv} +\alpha u=g_1$ and $\B_4u=\frac{\partial u}{\partial \nv} +\beta u=g_4$ at the point $(x_0,y_{N_2})$ with three smooth functions $\alpha(y)$, $\beta(x)$ and $a(x,y)$ is six, where $\alpha(y_{N_2})\ne \beta(x_0)$. \end{theorem} Again, if $a$ in \eqref{Qeques2} is a discontinuous piecewise constant coefficient, $\B_1u=\frac{\partial u}{\partial \nv}+\alpha u=g_1$ and $\B_4u=\frac{\partial u}{\partial \nv}+\beta u=g_4$ with $\alpha$ and $\beta$ being constant, then the coefficients on the left hand side in \eqref{stencil:corner:2} are \be \label{CR1:Corner:2} \begin{aligned} C^{\mathcal{R}_2}_{0,-1} &=\frac{1}{675}(4{\alpha}^5-6{\alpha}^4{\beta}+6{\alpha}^3{\beta}^2-4{\alpha}^2{\beta}^3)h^5+\frac{1}{675}(4{\alpha}^4-6{\alpha}^3{\beta}+6{\alpha}^2{\beta}^2-4{\alpha}{\beta}^3)h^4\\ &+\frac{1}{675}(9{\alpha}^2+63{\alpha}{\beta}-36{\beta}^2)h^2+\frac{1}{675}(135{\beta}+135{\alpha})h+2,\\ C^{\mathcal{R}_2}_{0,0} &= \frac{1}{225}(-4{\alpha}^4+6{\alpha}^3{\beta}-6{\alpha}^2{\beta}^2+4{\alpha}{\beta}^3)h^4+\frac{1}{225}(8{\alpha}^3-18{\alpha}^2{\beta}-30{\alpha}{\beta}^2+16{\beta}^3)h^3\\ &+\frac{1}{225}(-36{\alpha}^2-357{\alpha}{\beta}-36{\beta}^2)h^2+\frac{1}{225}(-765{\alpha}-765{\beta})h-5,\\ C^{\mathcal{R}_2}_{1,-1} &=\frac{1}{675}(-4{\alpha}^4+6{\alpha}^3{\beta}-6{\alpha}^2{\beta}^2+4{\alpha}{\beta}^3)h^4+1, \\ C^{\mathcal{R}_2}_{1,0} &= \frac{1}{225}(4{\alpha}^3-6{\alpha}^2{\beta}+6{\alpha}{\beta}^2-4{\beta}^3)h^3+\frac{1}{225}(-12{\alpha}^2+21{\alpha}{\beta}+3{\beta}^2)h^2\\ &+\frac{1}{225}(45{\beta}+45{\alpha})h+2.
\end{aligned} \ee When $\alpha=\beta$, we further have $C^{\mathcal{R}_2}_{0,-1}=C^{\mathcal{R}_2}_{1,0}=\frac{4}{75}{\beta}^2h^2+\frac{2}{5}{\beta}h+2$ and $C^{\mathcal{R}_2}_{1,-1}=1$ in \eqref{CR1:Corner:2}. \subsection{Stencils for irregular points}\label{hybrid:Irregular:points} Let $(x_i,y_j)$ be an irregular point (i.e., both $d_{i,j}^+$ and $d_{i,j}^-$ are nonempty, see \cref{Extend the compact} for an example) and choose the base point $(x^*_i,y^*_j)\in \Gamma_I \cap (x_i-h,x_i+h)\times (y_j-h,y_j+h)$. By \eqref{base:pt}, we have \begin{equation} \label{base:pt:gamma} x_i^*=x_i-v_0h \quad \mbox{and}\quad y_j^*=y_j-w_0h \quad \mbox{with}\quad -1<v_0, w_0<1 \quad \mbox{and}\quad (x_i^*,y_j^*)\in \Gamma_I. \end{equation} Let $a_{\pm}$, $u_{\pm}$ and $f_{\pm}$ represent the coefficient function $a$, the solution $u$ and source term $f$ in $\Omega^{\pm}$. Similar to \eqref{ufmn}, we define that \begin{align*} & a_{\pm}^{(m,n)}:=\frac{\partial^{m+n} a_{\pm}}{ \partial^m x \partial^n y}(x^*_i,y^*_j),\qquad u_{\pm}^{(m,n)}:=\frac{\partial^{m+n} u_{\pm}}{ \partial^m x \partial^n y}(x^*_i,y^*_j),\qquad f_{\pm}^{(m,n)}:=\frac{\partial^{m+n} f_{\pm}}{ \partial^m x \partial^n y}(x^*_i,y^*_j), \\ & \gd^{(m,n)}:=\frac{\partial^{m+n} \gd}{ \partial^m x \partial^n y}(x^*_i,y^*_j),\qquad \gn^{(m,n)}:=\frac{\partial^{m+n} \gn}{ \partial^m x \partial^n y}(x^*_i,y^*_j). \end{align*} Similar to \eqref{u:approx:key:V}, we have \begin{align*} \label{u:approx:ir:key:2} u_\pm (x+x_i^*,y+y_j^*) & =\sum_{(m,n)\in \ind_{M+1}^{V,1}} u_\pm^{(m,n)} G^{\pm,V}_{M,m,n}(x,y) +\sum_{(m,n)\in \ind_{M-1}} f_\pm ^{(m,n)} Q^{\pm,V}_{M,m,n}(x,y)+\bo(h^{M+2}), \end{align*} for $x,y\in (-2h,2h)$, where $\ind_{M-1}$ and $\ind_{M+1}^{V,1}$ are defined in \eqref{Sk} and \eqref{indV12} respectively, $G^{\pm,V}_{M,m,n}(x,y)$ and $Q^{\pm,V}_{M,m,n}(x,y)$ are obtained by replacing $\{a^{(m,n)}: (m,n) \in \ind_{M}\}$ by $\{a_{\pm}^{(m,n)}: (m,n) \in \ind_{M}\}$ in \eqref{GVmn} and \eqref{QVmn}. 
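In practice, a base point satisfying \eqref{base:pt:gamma} can be obtained by projecting the irregular point $(x_i,y_j)$ onto the zero level set of $\psi$. A minimal sketch (assuming $\psi$ and $\nabla\psi$ are available in closed form; the function names are ours, and the circle $\psi=x^2+y^2-2$ is the interface of \cref{hybrid:ex2}):

```python
def base_point(psi, grad_psi, xi, yj, tol=1e-14, maxit=50):
    """Project (xi, yj) onto {psi = 0} by Newton steps along grad(psi);
    converges to a nearby interface point when grad(psi) != 0 near Gamma_I."""
    x, y = xi, yj
    for _ in range(maxit):
        p = psi(x, y)
        if abs(p) < tol:
            break
        gx, gy = grad_psi(x, y)
        g2 = gx * gx + gy * gy          # assumed nonzero near the interface
        x, y = x - p * gx / g2, y - p * gy / g2
    return x, y

# circle interface psi = x^2 + y^2 - 2
psi  = lambda x, y: x * x + y * y - 2.0
grad = lambda x, y: (2.0 * x, 2.0 * y)
xs, ys = base_point(psi, grad, 1.0, 1.1)
# the projected point lies on Gamma_I and stays within one cell of (1.0, 1.1)
print(abs(psi(xs, ys)) < 1e-12, abs(xs - 1.0) < 0.1, abs(ys - 1.1) < 0.1)
```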
As in \cite{FHM21a,FHM21b,FHM21Helmholtz}, near the point $(x_i^*,y_j^*)$, the parametric equation of $\Gamma_I$ can be written as: \be \label{parametric} x=r(t)+x_i^*,\quad y=s(t)+y_j^*,\quad (r'(t))^2+(s'(t))^2>0 \quad \mbox{for}\;\; t\in (-\epsilon,\epsilon) \quad \mbox{with}\quad \epsilon>0, \ee where $r$ and $s$ are smooth functions. Similarly to the definition of the 9-point compact stencil in \eqref{Compact:Set}, we define the following 4-point set for the 13-point scheme: \be\label{13point:Set} \begin{split} & e_{i,j}^+:=\{(k,\ell) \; : \; (k,\ell)\in \{(-2,0),(0,-2),(0,2),(2,0)\}, \psi(x_i+kh, y_j+\ell h)\ge 0\}, \quad \mbox{and}\\ & e_{i,j}^-:=\{(k,\ell) \; : \; (k,\ell)\in \{(-2,0),(0,-2),(0,2),(2,0)\}, \psi(x_i+kh, y_j+\ell h)<0\}. \end{split} \ee In the next theorem we present a simplified version of \cite[Theorem 3.2]{FHM21b}, adapted to the aim of developing a fifth order hybrid 13-point scheme for irregular points. \begin{theorem}\label{thm:interface} Let $u$ be the solution to the elliptic interface problem in \eqref{Qeques2} and let $\Gamma_I$ be parameterized near $(x_i^*,y_j^*)$ by \eqref{parametric}. Then \be \label{tranmiss:cond} \begin{split} u_-^{(m',n')}&=\sum_{ \substack{ (m,n)\in \ind_{M+1}^{V,1} \\ m+n \le m'+n'} }T^{u_{+}}_{m',n',m,n}u_+^{(m,n)}+\sum_{(m,n)\in \ind_{M-1}} \left(T^+_{m',n',m,n} f_+^{(m,n)} + T^-_{m',n',m,n} f_{-}^{(m,n)}\right)\\ &+\sum_{(m,n)\in \ind_{M+1}} T^{g_D}_{m',n',m,n} g_{D}^{(m,n)} +\sum_{(m,n)\in \ind_{M}} T^{g_N}_{m',n',m,n} g_N^{(m,n)},\qquad \forall\; (m',n')\in \ind_{M+1}^{V,1}, \end{split} \ee where all the transmission coefficients $T^{u_+}, T^{\pm}, T^{\gd}, T^{\gn}$ are uniquely determined by $r^{(k)}(0)$, $s^{(k)}(0)$ for $k=0,\ldots,M+1$ and $\{a_{\pm}^{(m,n)}: (m,n)\in \ind_{M}\}$. Moreover, let $T^{u_+}_{m',n',m,n}$ be the transmission coefficient of $u_+^{(m,n)}$ in \eqref{tranmiss:cond} with $(m,n)\in \ind_{M+1}^{V,1}$, $m+n=m'+n'$ and $(m',n')\in \ind_{M+1}^{V,1}$.
Then $T^{u_+}_{m',n',m,n}$ only depends on $r^{(k)}(0)$, $s^{(k)}(0)$ for $k=0,\ldots,M+1$ of \eqref{parametric} and $a_{\pm}^{(0,0)}$. Particularly, \be\label{T0000} T^{u_+}_{0,0,0,0}=1 \quad \mbox{and}\quad T^{u_+}_{m',n',0,0}=0 \quad \mbox{if } (m',n')\ne (0,0). \ee \end{theorem} Next, we provide the 13-point finite difference scheme for interior irregular points. \begin{theorem}\label{13point:Inter} Let $(u_{h})_{i,j}$ be the numerical approximation to the solution of \eqref{Qeques2} at an interior irregular point $(x_i, y_j)$. Pick a base point $(x_i^*,y_j^*)$ as in \eqref{base:pt:gamma}. Then the following 13-point scheme centered at the interior irregular point $(x_i,y_j)$: \begin{align}\label{13point:interface} \mathcal{L}^{\Gamma_I}_h & := \begin{aligned} & & & & &C_{0,-2}(u_{h})_{i,j-2}&\\ & &+&C_{-1,-1}(u_{h})_{i-1,j-1}& +&C_{0,-1}(u_{h})_{i,j-1}& +&C_{1,-1}(u_{h})_{i+1,j-1}& \\ +&C_{-2,0}(u_{h})_{i-2,j}&+&C_{-1,0}(u_{h})_{i-1,j}& +&C_{0,0}(u_{h})_{i,j}& +&C_{1,0}(u_{h})_{i+1,j} &+&C_{2,0}(u_{h})_{i+2,j}&\\ & &+&C_{-1,1}(u_{h})_{i-1,j+1}& +&C_{0,1}(u_{h})_{i,j+1}& +&C_{1,1}(u_{h})_{i+1,j+1}& \\ & & & & +&C_{0,2}(u_{h})_{i,j+2}& \end{aligned}\\ \nonumber &=\sum_{(m,n)\in \ind_{3} } f_+^{(m,n)}J^{+}_{m,n} + \sum_{(m,n)\in \ind_{3}} f_-^{(m,n)}J^{-}_{m,n} +\sum_{(m,n)\in \ind_{5}} \gd^{(m,n)}J^{\gd}_{m,n} + \sum_{(m,n)\in \ind_{4}} \gn^{(m,n)}J^{\gn}_{m,n}, \end{align} achieves fifth order accuracy, where all $\{C_{k,\ell}\}$ in \eqref{13point:interface} are calculated by \eqref{AX0}, $J^{\pm}_{m,n}:= J_{m,n}^{\pm,0}+J^{\pm,T}_{m,n}$ for all $(m,n)\in \ind_{3}$, {\footnotesize \begin{align*} & J^{\pm,0}_{m,n}:=\sum_{(k,\ell)\in d_{i,j}^\pm \cup e_{i,j}^\pm} C_{k,\ell} Q^{\pm,V}_{4,m,n}((v_0 +k)h,(w_0+\ell) h), \quad J^{\pm,T}_{m,n}:= \sum_{ \substack{ (m',n')\in \ind_{5}^{V,1} }} I^{-}_{m',n'} T^{\pm}_{m',n',m,n}, \quad \forall (m,n) \in \ind_{3},\\ & J^{\gd}_{m,n}:= \sum_{ \substack{ (m',n')\in \ind_{5}^{V,1} }} I^{-}_{m',n'} T^{\gd}_{m',n',m,n}, \quad 
\forall (m,n) \in \ind_{5}, \quad J^{\gn}_{m,n}:= \sum_{ \substack{ (m',n')\in \ind_{5}^{V,1} }} I^{-}_{m',n'} T^{\gn}_{m',n',m,n}, \quad \forall (m,n) \in \ind_{4},\\ & I^{-}_{m,n}:=\sum_{(k,\ell)\in d_{i,j}^- \cup e_{i,j}^-} C_{k,\ell} G^{-,V}_{4,m,n}((v_0+k)h,(w_0+\ell) h), \quad \forall (m,n) \in \ind_{5}^{V,1}. \end{align*} } Moreover, the maximum accuracy order of a 13-point finite difference stencil for \eqref{Qeques2} at an interior irregular point $(x_i, y_j)$ is five. \end{theorem} For the $13$-point scheme in \cref{13point:Inter}, if only one point in the set $\{(x_i-h,y_j-h),(x_i-h,y_j+h),(x_i+h,y_j-h),(x_i+h,y_j+h)\}$ belongs to $\Omega^{-}$ and the other 12 points all belong to $\Omega^{+}$, we can set $C_{k,\ell}=0$ for $(x_i+k h, y_j+\ell h)\in \Omega^{-}$, $x^*_i=x_i$, $y^*_j=y_j$ to achieve sixth order accuracy at $(x_i,y_j)$. Finally, we provide a way of achieving an efficient implementation of the $13$-point scheme at irregular points in \cref{13point:Inter}. \textbf{Efficient implementation details:}\\ By \cref{thm:interface}, a simpler $J^{u_+,T}_{m,n}(h)$ in \cite[(3.26)]{FHM21b} can be written as: \be\label{newJuT} J^{u_+,T}_{m,n}(h):= \sum_{ \substack{ (m',n')\in \ind_{M+1}^{V,1} \\ m'+n' \ge m+n}} I^-_{m',n'}(h) T^{u_+}_{m',n',m,n}. \ee Replacing $\ind_{M+1}^{1}$ by $\ind_{M+1}^{V,1}$ in \cite[(3.28) and (3.29)]{FHM21b}, we have \be\label{IJT} I^+_{m,n}(h)+J^{u_+,T}_{m,n}(h)=\bo(h^{M+2}), \ h\to 0, \; \mbox{ for all }\; (m,n)\in \ind_{M+1}^{V,1}.
\ee Replacing $G^{\pm}_{m,n}$, $H^{\pm}_{m,n}$ and $d_{i,j}^\pm$ by $G^{\pm,V}_{M,m,n}$, $Q^{\pm,V}_{M,m,n}$ and $d_{i,j}^\pm \cup e_{i,j}^\pm$ in \cite[(3.25) and (3.26)]{FHM21b}, we obtain \[ \sum_{(k,\ell)\in d_{i,j}^+\cup e_{i,j}^+} C_{k,\ell}(h) G^{+,V}_{M,m,n}(v_0h+kh,w_0h+\ell h)+\sum_{ \substack{ (m',n')\in \ind_{M+1}^{V,1} \\ m'+n' \ge m+n}} I^-_{m',n'}(h) T^{u_+}_{m',n',m,n}=\bo(h^{M+2}), \] and \[ \begin{split} &\sum_{(k,\ell)\in d_{i,j}^+\cup e_{i,j}^+} C_{k,\ell}(h) G^{+,V}_{M,m,n}(v_0h+kh,w_0h+\ell h)\\ &+\sum_{ \substack{ (m',n')\in \ind_{M+1}^{V,1} \\ m'+n' \ge m+n}} \sum_{(k,\ell)\in d_{i,j}^-\cup e_{i,j}^-} C_{k,\ell}(h) G^{-,V}_{M,m',n'}(v_0h+kh,w_0h+\ell h) T^{u_+}_{m',n',m,n}=\bo(h^{M+2}). \end{split} \] So, \eqref{IJT} is equivalent to \be\label{eq:interface:sum} \begin{split} &\sum_{(k,\ell)\in d_{i,j}^-\cup e_{i,j}^-} C_{k,\ell}(h) \sum_{ \substack{ (m',n')\in \ind_{M+1}^{V,1} \\ m'+n' \ge m+n}} G^{-,V}_{M,m',n'}(v_0h+kh,w_0h+\ell h) T^{u_+}_{m',n',m,n}\\ &+\sum_{(k,\ell)\in d_{i,j}^+\cup e_{i,j}^+} C_{k,\ell}(h) G^{+,V}_{M,m,n}(v_0h+kh,w_0h+\ell h)=\bo(h^{M+2}), \qquad \mbox{for all } \ (m,n)\in \ind_{M+1}^{V,1}. \end{split} \ee Let \be\label{ChandX} C_{k,\ell}(h):=\sum_{i=0}^{M+1} c_{k,\ell,i}h^i, \qquad X_{k,\ell}:=( c_{k,\ell,0}, c_{k,\ell,1},\dots,c_{k,\ell,M+1})^T. \ee Since $G^{\pm,V}_{M,m,n}((k+v_0)h,(\ell+w_0)h)$ is a polynomial in $h$ in which every term has a non-negative degree, we deduce that \be\label{eq:plus} C_{k,\ell}(h) G^{+,V}_{M,m,n}((k+v_0)h,(\ell+w_0)h)=DA^{+,m,n}_{k,\ell}X_{k,\ell}+\bo(h^{M+2}), \ee \be\label{eq:minus} C_{k,\ell}(h) \sum_{ \substack{ (m',n')\in \ind_{M+1}^{V,1} \\ m'+n' \ge m+n}} G^{-,V}_{M,m',n'}((k+v_0)h,(\ell+w_0)h) T^{u_+}_{m',n',m,n}=DA^{-,m,n}_{k,\ell}X_{k,\ell}+\bo(h^{M+2}), \ee where \[ D=(h^0,h^1,\dots,h^{M+1}), \] and $A^{\pm,m,n}_{k,\ell}$ is independent of $h$ for all $(m,n)\in \ind_{M+1}^{V,1}$.
So \eqref{eq:interface:sum} is equivalent to \be\label{eq:interface:sum:matrix} \begin{split} \sum_{(k,\ell)\in d_{i,j}^+\cup e_{i,j}^+} DA^{+,m,n}_{k,\ell}X_{k,\ell}+\sum_{(k,\ell)\in d_{i,j}^-\cup e_{i,j}^-} DA^{-,m,n}_{k,\ell}X_{k,\ell}=\bo(h^{M+2}), \qquad \mbox{for all } \ (m,n)\in \ind_{M+1}^{V,1}. \end{split} \ee Define \be\label{Amnkl} A^{m,n}_{k,\ell}:= \begin{cases} A^{+,m,n}_{k,\ell}, &\text{if } (k,\ell)\in d_{i,j}^+\cup e_{i,j}^+,\\ A^{-,m,n}_{k,\ell}, &\text{if } (k,\ell)\in d_{i,j}^-\cup e_{i,j}^-. \end{cases} \ee Then \eqref{eq:interface:sum:matrix} is equivalent to \[ A^{m,n}X=0, \qquad \mbox{for all } \ (m,n)\in \ind_{M+1}^{V,1}, \] where \be\label{Amn:Totla} A^{m,n}=(A^{m,n}_{-1,-1},A^{m,n}_{-1,0},A^{m,n}_{-1,1},A^{m,n}_{0,-1},A^{m,n}_{0,0},A^{m,n}_{0,1},A^{m,n}_{1,-1},A^{m,n}_{1,0},A^{m,n}_{1,1},A^{m,n}_{-2,0},A^{m,n}_{2,0},A^{m,n}_{0,-2},A^{m,n}_{0,2}), \ee and \be\label{X:Totla} X=(X_{-1,-1}^T,X_{-1,0}^T,X_{-1,1}^T,X_{0,-1}^T,X_{0,0}^T,X_{0,1}^T,X_{1,-1}^T,X_{1,0}^T,X_{1,1}^T,X_{-2,0}^T,X_{2,0}^T,X_{0,-2}^T,X_{0,2}^T)^T. \ee Let \be\label{TotalA} A=\left( (A^{0,0})^T, (A^{0,1})^T, \dots, (A^{0,M+1})^T, (A^{1,0})^T, (A^{1,1})^T, \dots, (A^{1,M})^T\right)^T. \ee Finally, \eqref{eq:interface:sum} is equivalent to \be\label{AX0} AX=0. \ee Since we use a 13-point scheme at the irregular points, there are 13 components in \eqref{Amn:Totla} and \eqref{X:Totla}. If we use the 9-point compact scheme at the irregular points, we only need to delete the last four components in \eqref{Amn:Totla} and \eqref{X:Totla}. For the 25-point or 36-point schemes at the irregular points, the only change is to add more $A^{m,n}_{k,\ell}$ and $X_{k,\ell}$ to \eqref{Amn:Totla} and \eqref{X:Totla}.
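In an implementation, the homogeneous system \eqref{AX0} is solved once per irregular point by computing a null vector of the small matrix $A$, e.g.\ from its singular value decomposition. A minimal sketch (the function names are ours; the size $36\times 78$ and the packing of $X$ into $13$ points $\times$ $6$ polynomial coefficients, cf.\ \eqref{ChandX}, follow the fifth order $13$-point scheme, and the random $A$ is only a stand-in for the assembled matrix):

```python
import numpy as np

def nullspace_coefficients(A, n_points=13, n_coef=6):
    """Return the c_{k,l,i} packed as an (n_points, n_coef) array, taken as
    the right-singular vector of the smallest singular value, so A @ X ~ 0."""
    _, _, Vt = np.linalg.svd(A)      # Vt is (78, 78); its last rows span null(A)
    X = Vt[-1]
    return X.reshape(n_points, n_coef)

def C_of_h(c, h):
    """Evaluate the stencil polynomials C_{k,l}(h) = sum_i c_{k,l,i} h**i."""
    return c @ (h ** np.arange(c.shape[1]))

# toy check on a random matrix of the stated size (36 x 78, 78 = 13 * 6)
rng = np.random.default_rng(0)
A = rng.standard_normal((36, 78))
c = nullspace_coefficients(A)
print(np.allclose(A @ c.ravel(), 0.0, atol=1e-10), C_of_h(c, 0.1).shape)
```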
Even though there are many different cases for the $13$-point schemes at irregular points, depending on how the interface curve $\Gamma_I$ partitions the $13$ points of the stencil, we can repeatedly use $A^{\pm,m,n}_{k,\ell}$ in \eqref{eq:plus}, \eqref{eq:minus} and \eqref{Amnkl} to cover all the cases, which significantly reduces the computational cost and makes the implementation effective and flexible. Furthermore, to obtain lower or higher order finite difference schemes at irregular points, we only need to delete or add some $A^{0,n+1}$ and $A^{1,n}$ in \eqref{TotalA}. After the above simplification, the matrix $A$ in \eqref{AX0} is a 36 by 78 matrix for the 13-point scheme with fifth order accuracy, while $A$ is a 16 by 36 matrix for the 9-point scheme with third order accuracy. Observing the following identity (whose proof is given in \cref{hybrid:sec:proofs}) \be\label{ckln:irregular} c_{0,-2,i}+c_{-2,0,i}+c_{2,0,i}+c_{0,2,i}+\sum\limits_{k=-1}^{1} \sum\limits_{\ell=-1}^{1} c_{k,\ell,i}=0, \quad \mbox{for} \quad i=0,1,\dots, M+1, \ee we can further reduce the size of the matrix $A$ in \eqref{AX0} to $30$ by $72$ for the $13$-point scheme. \section{Numerical experiments} \label{sec:numerical} Let $\Omega=(l_1,l_2)\times(l_3,l_4)$ with $l_4-l_3=N_0(l_2-l_1)$ for some positive integer $N_0$. For a given $J\in \NN$, we define $h:=(l_2-l_1)/N_1$ with $N_1:=2^J$ and let $x_i=l_1+ih$ and $y_j=l_3+jh$ for $i=0,1,\dots,N_1$ and $j=0,1,\dots,N_2$ with $N_2:=N_0N_1$. Let $u(x,y)$ be the exact solution of \eqref{Qeques2} and $(u_{h})_{i,j}$ be a numerical solution at $(x_i, y_j)$ using the mesh size $h$. We measure the consistency of the proposed scheme in the $l_2$ norm by the relative error $\frac{\|u_{h}-u\|_{2}}{\|u\|_{2}}$, if the exact solution $u$ is available.
If it is not, then we quantify the consistency error by ${\|u_{h}-u_{h/2}\|_{2}}$, where \begin{align*} \|u_{h}-u\|_{2}^2:= h^2&\sum_{i=0}^{N_1}\sum_{j=0}^{N_2} \left((u_h)_{i,j}-u(x_i,y_j)\right)^2, \ \ \|u\|_{2}^2:=h^2 \sum_{i=0}^{N_1}\sum_{j=0}^{N_2} \left(u(x_i,y_j)\right)^2,\\ &\|u_{h}-u_{h/2}\|_{2}^2:= h^2\sum_{i=0}^{N_1}\sum_{j=0}^{N_2} \left((u_{h})_{i,j}-(u_{h/2})_{2i,2j}\right)^2. \end{align*} In addition, we provide results for the infinity norm of the errors given by: \[ \|u_h-u\|_\infty :=\max_{0\le i\le N_1, 0\le j\le N_2} \left|(u_h)_{i,j}-u(x_i,y_j)\right|, \quad \|u_{h}-u_{h/2}\|_\infty:=\max_{0\le i\le N_1,0\le j\le N_2} \left|(u_{h})_{i,j}-(u_{h/2})_{2i,2j}\right|. \] \subsection{Numerical examples with known $u$} In this subsection, we provide five numerical examples with a known solution $u$ of \eqref{Qeques2}. Note that the maximum accuracy order for the compact 9-point finite difference scheme at irregular and regular points, for elliptic interface problems with discontinuous coefficients, is three and six, respectively. So, in \cref{hybrid:ex3,hybrid:ex5} we compare the proposed hybrid scheme with the compact 9-point scheme, which has sixth order accuracy at regular points and third order accuracy at irregular points. That is, both use the same compact $9$-point stencils with accuracy order six at all regular points; they differ only at irregular points, where the proposed hybrid scheme uses $13$-point stencils with fifth order accuracy, while the compact $9$-point scheme uses $9$-point stencils with third order accuracy. Their computational costs are comparable, because the percentage of irregular points among all grid points decays exponentially to $0$ at the rate $\bo(2^{-J})$; e.g., this percentage is less than or around $1\%$ at level $J=9$ for all our numerical examples.
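The "order" columns in the tables below are computed from the errors on two successive meshes as $\log_2(e_h/e_{h/2})$. A minimal sketch (function names are ours; the sample errors are the relative $l_2$ errors of the hybrid scheme in \cref{hybrid:table:QSp3} for $J=4,\dots,8$):

```python
import math

def rel_l2(uh, u):
    """Discrete relative l2 error ||u_h - u||_2 / ||u||_2 over the grid
    values; the common factor h^2 in the squared norms cancels."""
    num = sum((a - b) ** 2 for a, b in zip(uh, u))
    den = sum(b * b for b in u)
    return math.sqrt(num / den)

def orders(errors):
    """Convergence order between consecutive levels J: log2(e_h / e_{h/2})."""
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

errs = [1.493e-01, 3.124e-03, 6.081e-05, 1.238e-06, 1.803e-08]
print([round(p, 1) for p in orders(errs)])   # matches the reported orders
```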
The five numerical examples can be characterized as follows: \begin{itemize} \item \cref{hybrid:ex3,hybrid:ex5} compare the proposed hybrid scheme and the $9$-point compact scheme. \item In all examples, either $a_+/a_-$ or $a_-/a_+$ is very large on $\Gamma_I$ for high contrast coefficients $a$. \item 4-side Dirichlet boundary conditions are demonstrated in \cref{hybrid:ex3,hybrid:ex4,hybrid:ex5}. \item 1-side Dirichlet, 1-side Neumann and 2-side Robin boundary conditions are considered in \cref{hybrid:ex1,hybrid:ex2}. \item Results for smooth interface curves $\Gamma_I$ are presented in \cref{hybrid:ex1,hybrid:ex2,hybrid:ex3,hybrid:ex4}. \item Results for a sharp-edged interface curve $\Gamma_I$ are demonstrated in \cref{hybrid:ex5}. \item Results for two constant jump functions $g_D$ and $g_N$ are shown in \cref{hybrid:ex1,hybrid:ex2,hybrid:ex3,hybrid:ex4}. \item Results for two non-constant jump functions $g_D$ and $g_N$ are presented in \cref{hybrid:ex5}. \end{itemize} \begin{example}\label{hybrid:ex3} \normalfont Let $\Omega=(-1.5,1.5)^2$ and the interface curve be given by $\Gamma_I:=\{(x,y)\in \Omega:\; \psi(x,y)=0\}$ with $\psi (x,y)=y^2+\frac{2x^2}{x^2+1}-1$. The functions in \eqref{Qeques2} are given by \begin{align*} &a_{+}=10^3(2+\sin(x)\sin(y)), \qquad a_{-}=10^{-3}(2+\sin(x)\sin(y)), \qquad g_D=-200, \qquad g_N=0,\\ &u_{+}=10^{-3}\sin(4 x)\sin(4 y)(y^2(x^2+1)+x^2-1),\\ &u_{-}=10^{3}\sin(4 x)\sin(4 y)(y^2(x^2+1)+x^2-1)+200,\\ & u(-1.5,y)=g_1, \qquad u(1.5,y)=g_2, \qquad \mbox{for} \qquad y\in(-1.5,1.5),\\ & u(x,-1.5)=g_3, \qquad u(x,1.5)=g_4, \qquad \mbox{for} \qquad x\in(-1.5,1.5), \end{align*} the other functions $f^{\pm}$, $g_1, \ldots,g_4$ in \eqref{Qeques2} can be obtained by plugging the above functions into \eqref{Qeques2}. Note the high contrast $a_+/a_-=10^6$ on $\Gamma_I$. The numerical results are presented in \cref{hybrid:table:QSp3} and \cref{hybrid:fig:QSp3}. 
\end{example} \begin{table}[htbp] \caption{Performance in \cref{hybrid:ex3} of our proposed hybrid finite difference scheme and compact 9-point scheme on uniform Cartesian meshes with $h=2^{-J}\times 3$. $\kappa$ is the condition number of the coefficient matrix.} \centering \setlength{\tabcolsep}{0.5mm}{ \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline \multicolumn{1}{c|}{} & \multicolumn{5}{c|}{Our proposed hybrid scheme} & \multicolumn{5}{c}{Compact 9-point scheme} \\ \cline{1-11} $J$ & $\frac{\|u_{h}-u\|_{2}}{\|u\|_{2}}$ &order & $\|u_{h}-u\|_{\infty}$ &order & $\kappa$ & $\frac{\|u_{h}-u\|_{2}}{\|u\|_{2}}$ &order & $\|u_{h}-u\|_{\infty}$ &order & $\kappa$ \\ \hline 4 &1.493E-01 &0 &1.362E+02 &0 &2.136E+02 &5.465E-01 &0 &4.515E+02 &0 &8.685E+01\\ 5 &3.124E-03 &5.6 &3.872E+00 &5.1 &4.262E+02 &4.751E-02 &3.5 &4.453E+01 &3.3 &4.896E+02\\ 6 &6.081E-05 &5.7 &7.168E-02 &5.8 &6.261E+03 &2.464E-03 &4.3 &2.890E+00 &3.9 &2.069E+03\\ 7 &1.238E-06 &5.6 &1.490E-03 &5.6 &1.701E+04 &2.745E-04 &3.2 &3.318E-01 &3.1 &9.171E+03\\ 8 &1.803E-08 &6.1 &3.305E-05 &5.5 &1.169E+05 &1.557E-05 &4.1 &1.894E-02 &4.1 &4.054E+04\\ 9 & & & & & &9.053E-07 &4.1 &1.185E-03 &4.0 &1.648E+05\\ \hline \end{tabular}} \label{hybrid:table:QSp3} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=3.0cm,height=3cm]{HyCUR3.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyAA3.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyUU3.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyERR3.pdf} \end{subfigure} \caption {\cref{hybrid:ex3}: the interface curve $\Gamma_I$ (first panel), the coefficient $a(x,y)$ (second panel), the numerical solution $u_h$ (third panel), and the error $|u_h-u|$ (fourth panel) with $h=2^{-8}\times 3$, where $u_h$ is computed by our proposed hybrid finite difference scheme.} 
\label{hybrid:fig:QSp3} \end{figure} \begin{example}\label{hybrid:ex5} \normalfont Let $\Omega=(-4.5,4.5)^2$ and the interface curve be given by $\Gamma_I:=\{(x,y)\in \Omega:\; \psi(x,y)=0\}$ which is shown in \cref{hybrid:fig:QSp5}. Precisely, the sharp-edged interface is a square with 4 corner points $(-2,0)$, $(0,2)$, $(2,0)$ and $(0,-2)$. The functions in \eqref{Qeques2} are given by \begin{align*} &a_{+}=10^{-3}, \qquad a_{-}=10^{3}, \qquad u_{+}=10^{3}\sin(x-y), \quad u_{-}=10^{-3}\cos(x)\cos(y)+1000,\\ & u(-4.5,y)= g_1, \qquad \qquad u(4.5,y)= g_2,\qquad \mbox{for} \qquad y\in(-4.5,4.5),\\ & u(x,-4.5)= g_3, \qquad \qquad u(x,4.5)= g_4, \qquad \mbox{for} \qquad x\in(-4.5,4.5), \end{align*} the other functions $f^{\pm}$, $g_D$, $g_N$, $g_1, \ldots,g_4$ in \eqref{Qeques2} can be obtained by plugging the above functions into \eqref{Qeques2}. Clearly, $g_D$ and $g_N$ are not constants. Note the high contrast $a_-/a_+=10^6$ on $\Gamma_I$. The numerical results are presented in \cref{hybrid:table:QSp5} and \cref{hybrid:fig:QSp5}. \end{example} \begin{table}[htbp] \caption{Performance in \cref{hybrid:ex5} of our proposed hybrid finite difference scheme and compact 9-point scheme on uniform Cartesian meshes with $h=2^{-J}\times 9$. 
$\kappa$ is the condition number of the coefficient matrix.} \centering \setlength{\tabcolsep}{0.5mm}{ \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline \multicolumn{1}{c|}{} & \multicolumn{5}{c|}{Our proposed hybrid scheme} & \multicolumn{5}{c}{Compact 9-point scheme} \\ \cline{1-11} $J$ & $\frac{\|u_{h}-u\|_{2}}{\|u\|_{2}}$ &order & $\|u_{h}-u\|_{\infty}$ &order & $\kappa$ & $\frac{\|u_{h}-u\|_{2}}{\|u\|_{2}}$ &order & $\|u_{h}-u\|_{\infty}$ &order & $\kappa$ \\ \hline 4 &7.431E-03 &0 &2.062E+01 &0 &1.337E+03 &6.254E-02 &0 &1.574E+02 &0 &1.238E+03\\ 5 &4.505E-04 &4.0 &1.322E+00 &4.0 &1.020E+04 &1.110E-02 &2.5 &2.837E+01 &2.5 &6.529E+03\\ 6 &5.701E-06 &6.3 &1.778E-02 &6.2 &6.394E+04 &6.953E-04 &4.0 &1.929E+00 &3.9 &4.152E+04\\ 7 &4.937E-08 &6.9 &1.869E-04 &6.6 &3.920E+05 &2.993E-05 &4.5 &1.059E-01 &4.2 &3.286E+05\\ 8 &6.087E-10 &6.3 &2.942E-06 &6.0 &2.132E+07 &1.155E-06 &4.7 &4.177E-03 &4.7 &1.474E+06\\ 9 & & & & & &8.390E-08 &3.8 &3.391E-04 &3.6 &1.006E+07\\ \hline \end{tabular}} \label{hybrid:table:QSp5} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.2\textwidth} \includegraphics[width=3.0cm,height=3.0cm]{HyCUR5.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyAA5.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyUU5.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyERR5.pdf} \end{subfigure} \caption {\cref{hybrid:ex5}: the interface curve $\Gamma_I$ (first panel), the coefficient $a(x,y)$ (second panel), the numerical solution $u_h$ (third panel), and the error $|u_h-u|$ (fourth panel) with $h=2^{-7}\times 9$, where $u_h$ is computed by our proposed hybrid finite difference scheme.} \label{hybrid:fig:QSp5} \end{figure} \begin{example}\label{hybrid:ex1} \normalfont Let $\Omega=(-2.5,2.5)^2$ and the interface curve be given by $\Gamma_I:=\{(x,y)\in \Omega:\; \psi(x,y)=0\}$ with $\psi 
(x,y)=x^4+2y^4-2$. The functions in \eqref{Qeques2} are given by \begin{align*} &a_{+}=10^{-3}(2+\sin(x)\sin(y)), \qquad a_{-}=10^3(2+\sin(x)\sin(y)), \qquad g_D=-10^5, \qquad g_N=0,\\ &u_{+}=10^3\sin(4\pi x)\sin(4\pi y)(x^4+2y^4-2), \qquad u_{-}=10^{-3}\sin(4\pi x)\sin(4\pi y)(x^4+2y^4-2)+10^5,\\ & -u_x(-2.5,y)+\alpha u(-2.5,y)= g_1, \qquad \qquad u(2.5,y)= g_2, \qquad \alpha=\sin(y),\qquad \mbox{for} \qquad y\in(-2.5,2.5),\\ & -u_y(x,-2.5)= g_3, \qquad \qquad u_y(x,2.5)+\beta u(x,2.5)= g_4, \qquad \beta=\cos(x), \qquad \mbox{for} \qquad x\in(-2.5,2.5), \end{align*} the other functions $f^{\pm}$, $g_1, \ldots,g_4$ in \eqref{Qeques2} can be obtained by plugging the above functions into \eqref{Qeques2}. Note the high contrast $a_-/a_+=10^6$ on $\Gamma_I$. The numerical results are presented in \cref{hybrid:table:QSp1} and \cref{hybrid:fig:QSp1}. \end{example} \begin{table}[htbp] \caption{Performance in \cref{hybrid:ex1} of our proposed hybrid finite difference scheme on uniform Cartesian meshes with $h=2^{-J}\times 5$. 
} \centering \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline $J$ & $\frac{\|u_{h}-u\|_{2}}{\|u\|_{2}}$ &order & $\|u_{h}-u\|_{\infty}$ &order & ${\|u_{h}-u_{h/2}\|_{2}}$ &order & $\|u_{h}-u_{h/2}\|_{\infty}$ &order \\ \hline 5 &8.167E-01 & 0 &1.758E+05 & 0 &1.811E+05 & 0 &1.734E+05 & 0 \\ 6 &1.123E-02 &6.2 &2.488E+03 &6.1 &2.471E+03 &6.2 &2.441E+03 &6.2 \\ 7 &2.059E-04 &5.8 &4.711E+01 &5.7 &4.550E+01 &5.8 &4.640E+01 &5.7 \\ 8 &3.035E-06 &6.1 &7.028E-01 &6.1 &6.701E-01 &6.1 &6.919E-01 &6.1 \\ 9 &4.632E-08 &6.0 &1.087E-02 &6.0 &9.946E-03 &6.1 &1.037E-02 &6.1 \\ \hline \end{tabular}} \label{hybrid:table:QSp1} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=3.0cm,height=3.0cm]{HyCUR1.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyAA1.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyUU1.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyERR1.pdf} \end{subfigure} \caption {\cref{hybrid:ex1}: the interface curve $\Gamma_I$ (first panel), the coefficient $a(x,y)$ (second panel), the numerical solution $u_h$ (third panel), and the error $u-u_h$ (fourth panel) with $h=2^{-8}\times 5$, where $u_h$ is computed by our proposed hybrid finite difference scheme.} \label{hybrid:fig:QSp1} \end{figure} \begin{example}\label{hybrid:ex2} \normalfont Let $\Omega=(-2,2)^2$ and the interface curve be given by $\Gamma_I:=\{(x,y)\in \Omega:\; \psi(x,y)=0\}$ with $\psi (x,y)=x^2+y^2-2$. 
The functions in \eqref{Qeques2} are given by \begin{align*} &a_{+}=10^3(2+\sin(x+y)), \qquad a_{-}=10^{-3}(2+\sin(x+y)), \qquad g_D=-10^3, \qquad g_N=0,\\ &u_{+}=10^{-3}\cos(4 (x-y))(x^2+y^2-2), \qquad u_{-}=10^{3}\cos(4 (x-y))(x^2+y^2-2)+10^3,\\ & -u_x(-2,y)+\alpha u(-2,y)= g_1, \qquad \qquad u(2,y)= g_2, \qquad \alpha=\sin(y),\qquad \mbox{for} \qquad y\in(-2,2),\\ & -u_y(x,-2)= g_3, \qquad \qquad u_y(x,2)+\beta u(x,2)= g_4, \qquad \beta=\cos(x), \qquad \mbox{for} \qquad x\in(-2,2). \end{align*} The other functions $f^{\pm}$, $g_1, \ldots,g_4$ in \eqref{Qeques2} are obtained by plugging the above functions into \eqref{Qeques2}. Note the high contrast $a_+/a_-=10^6$ on $\Gamma_I$. The numerical results are presented in \cref{hybrid:table:QSp2} and \cref{hybrid:fig:QSp2}. \end{example} \begin{table}[htbp] \caption{Performance in \cref{hybrid:ex2} of our proposed hybrid finite difference scheme on uniform Cartesian meshes with $h=2^{-J}\times 4$. } \centering \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline $J$ & $\frac{\|u_{h}-u\|_{2}}{\|u\|_{2}}$ &order & $\|u_{h}-u\|_{\infty}$ &order & ${\|u_{h}-u_{h/2}\|_{2}}$ &order & $\|u_{h}-u_{h/2}\|_{\infty}$ &order \\ \hline 4 &8.087E-01 &0 &4.191E+03 &0 &2.568E+03 &0 &4.141E+03 &0 \\ 5 &1.443E-02 &5.8 &1.061E+02 &5.3 &4.623E+01 &5.8 &1.048E+02 &5.3 \\ 6 &2.679E-04 &5.8 &2.154E+00 &5.6 &8.629E-01 &5.7 &2.132E+00 &5.6 \\ 7 &3.432E-06 &6.3 &3.518E-02 &5.9 &1.100E-02 &6.3 &3.477E-02 &5.9 \\ 8 &6.625E-08 &5.7 &6.192E-04 &5.8 &2.120E-04 &5.7 &6.118E-04 &5.8 \\ \hline \end{tabular}} \label{hybrid:table:QSp2} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=3.0cm,height=3.0cm]{HyCUR2.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyAA2.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyUU2.pdf} \end{subfigure}
\begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyERR2.pdf} \end{subfigure} \caption {\cref{hybrid:ex2}: the interface curve $\Gamma_I$ (first panel), the coefficient $a(x,y)$ (second panel), the numerical solution $u_h$ (third panel), and the error $|u_h-u|$ (fourth panel) with $h=2^{-8}\times 4$, where $u_h$ is computed by our proposed hybrid finite difference scheme.} \label{hybrid:fig:QSp2} \end{figure} \begin{example}\label{hybrid:ex4} \normalfont Let $\Omega=(-2.5,2.5)^2$ and the interface curve be given by $\Gamma_I:=\{(x,y)\in \Omega:\; \psi(x,y)=0\}$ with $\psi (x,y)=y^2-2x^2+x^4-\frac{1}{4}$. The functions in \eqref{Qeques2} are given by \begin{align*} &a_{+}=10^{-3}(2+\sin(x-y)), \qquad a_{-}=10^3(2+\sin(x-y)), \qquad g_D=-1.5\times 10^4, \qquad g_N=0,\\ &u_{+}=10^{3}\sin(16 (x+y))(y^2-2x^2+x^4-1/4),\\ &u_{-}=10^{-3}\sin(16 (x+y))(y^2-2x^2+x^4-1/4)+1.5\times 10^4,\\ & u(-2.5,y)= g_1, \qquad \qquad u(2.5,y)= g_2,\qquad \mbox{for} \qquad y\in(-2.5,2.5),\\ & u(x,-2.5)= g_3, \qquad \qquad u(x,2.5)= g_4, \qquad \mbox{for} \qquad x\in(-2.5,2.5). \end{align*} The other functions $f^{\pm}$, $g_1, \ldots,g_4$ in \eqref{Qeques2} are obtained by plugging the above functions into \eqref{Qeques2}. Note the high contrast $a_-/a_+=10^6$ on $\Gamma_I$. The numerical results are presented in \cref{hybrid:table:QSp4} and \cref{hybrid:fig:QSp4}.
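As a quick sanity check (independent of the scheme itself), one can verify numerically that the chosen $u_{\pm}$ satisfy the Dirichlet jump condition on $\Gamma_I$; here we assume, as the construction suggests, that $g_D$ denotes the jump $u_+-u_-$ on the interface. A minimal Python sketch for the data of this example:

```python
import math

def psi(x, y):                       # level set: y^2 - 2x^2 + x^4 - 1/4
    return y**2 - 2.0 * x**2 + x**4 - 0.25

def u_plus(x, y):
    return 1e3 * math.sin(16.0 * (x + y)) * psi(x, y)

def u_minus(x, y):
    return 1e-3 * math.sin(16.0 * (x + y)) * psi(x, y) + 1.5e4

g_D = -1.5e4                         # assumed to be the jump u_+ - u_- on Gamma_I

# Sample points on Gamma_I: solve psi = 0 for y at a few x values.
for x in (0.0, 0.3, 0.7, 1.0):
    y = math.sqrt(2.0 * x**2 - x**4 + 0.25)
    assert abs(psi(x, y)) < 1e-12
    assert abs((u_plus(x, y) - u_minus(x, y)) - g_D) < 1e-9
print("u_+ - u_- = g_D verified at sample points on the interface")
```

On $\Gamma_I$ the factor $\psi$ vanishes, so $u_+-u_-=-1.5\times 10^4=g_D$ exactly; the check above confirms this up to rounding.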
\end{example} \begin{table}[htbp] \caption{Performance in \cref{hybrid:ex4} of our proposed hybrid finite difference scheme on uniform Cartesian meshes with $h=2^{-J}\times 5$.} \centering \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline $J$ & $\frac{\|u_{h}-u\|_{2}}{\|u\|_{2}}$ &order & $\|u_{h}-u\|_{\infty}$ &order & ${\|u_{h}-u_{h/2}\|_{2}}$ &order & $\|u_{h}-u_{h/2}\|_{\infty}$ &order \\ \hline 5 &8.627E-01 &0 &9.480E+04 &0 &4.284E+04 &0 &9.338E+04 &0 \\ 6 &2.854E-02 &4.9 &2.758E+03 &5.1 &1.360E+03 &5.0 &2.736E+03 &5.1 \\ 7 &4.543E-04 &6.0 &5.673E+01 &5.6 &2.128E+01 &6.0 &5.658E+01 &5.6 \\ 8 &6.195E-06 &6.2 &1.184E+00 &5.6 &2.856E-01 &6.2 &1.177E+00 &5.6 \\ 9 &8.902E-08 &6.1 &1.738E-02 &6.1 &4.441E-03 &6.0 &1.788E-02 &6.0 \\ \hline \end{tabular}} \label{hybrid:table:QSp4} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.20\textwidth} \includegraphics[width=3.0cm,height=3.0cm]{HyCUR4.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyAA4.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyUU4.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=3.5cm,height=3.5cm]{HyERR4.pdf} \end{subfigure} \caption {\cref{hybrid:ex4}: the interface curve $\Gamma_I$ (first panel), the coefficient $a(x,y)$ (second panel), the numerical solution $u_h$ (third panel), and the error $|u_h-u|$ (fourth panel) with $h=2^{-8}\times 5$, where $u_h$ is computed by our proposed hybrid finite difference scheme.} \label{hybrid:fig:QSp4} \end{figure} \subsection{Numerical examples with unknown $u$} In this subsection, we provide five numerical examples with unknown $u$ of \eqref{Qeques2}. They can be characterized as follows. \begin{itemize} \item In all examples, either $a_+/a_-$ or $a_-/a_+$ is very large on $\Gamma_I$ for high-contrast coefficients $a$. 
\item 4-side Dirichlet boundary conditions are demonstrated in \cref{hybrid:unknown:ex1,hybrid:unknown:ex4}. \item 3-side Dirichlet and 1-side Robin boundary conditions are demonstrated in \cref{hybrid:unknown:ex2,hybrid:unknown:ex3}. \item 1-side Dirichlet, 1-side Neumann and 2-side Robin boundary conditions are demonstrated in \cref{hybrid:unknown:ex5}. \item All the interface curves $\Gamma_I$ are smooth and all the jump functions $g_D$ and $g_N$ are non-constant. \end{itemize} \begin{example}\label{hybrid:unknown:ex1} \normalfont Let $\Omega=(-2.5,2.5)^2$ and the interface curve be given by $\Gamma_I:=\{(x,y)\in \Omega:\; \psi(x,y)=0\}$ with $\psi (x,y)=x^4+2y^4-2$. The functions in \eqref{Qeques2} are given by \begin{align*} &a_{+}=2+\cos(x)\cos(y), \qquad a_{-}=10^3(2+\sin(x)\sin(y)), \qquad g_D=\sin(x)\sin(y)-1, \\ &f_{+}=\sin(4\pi x)\sin(4\pi y), \qquad f_{-}=\cos(4\pi x)\cos(4\pi y), \qquad g_N=\cos(x)\cos(y),\\ & u(-2.5,y)= 0, \qquad \qquad u(2.5,y)= 0, \qquad \mbox{for} \qquad y\in(-2.5,2.5),\\ & u(x,-2.5)= 0, \qquad \qquad u(x,2.5)= 0, \qquad \mbox{for} \qquad x\in(-2.5,2.5). \end{align*} Note the high contrast $a_-/a_+\approx 10^3$ on $\Gamma_I$. The numerical results are presented in \cref{hybrid:table:unknown:QSp1} and \cref{hybrid:fig:unknown:QSp1}. \end{example} \begin{table}[htbp] \caption{Performance in \cref{hybrid:unknown:ex1} of our proposed hybrid finite difference scheme on uniform Cartesian meshes with $h=2^{-J}\times 5$.
} \centering \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{c|c|c|c|c} \hline $J$ & ${\|u_{h}-u_{h/2}\|_{2}}$ &order & $\|u_{h}-u_{h/2}\|_{\infty}$ &order \\ \hline 4 &9.83385E+02 &0 &3.29078E+02 &0 \\ 5 &1.93678E+01 &5.7 &6.50631E+00 &5.7 \\ 6 &3.13024E-01 &6.0 &1.04785E-01 &6.0 \\ 8 &9.47776E-05 &5.8 &3.20754E-05 &5.8 \\ \hline \end{tabular}} \label{hybrid:table:unknown:QSp1} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{HyCUR1.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{HyAA1.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{NTUU1.pdf} \end{subfigure} \caption {\cref{hybrid:unknown:ex1}: the interface curve $\Gamma_I$ (left), the coefficient $a(x,y)$ (middle) and the numerical solution $u_h$ (right) with $h=2^{-8}\times 5$, where $u_h$ is computed by our proposed hybrid finite difference scheme. In order to show the graph of $a(x,y)$ clearly, we rotate the graph of $a(x,y)$ by $\pi/2$ in this figure.} \label{hybrid:fig:unknown:QSp1} \end{figure} \begin{example}\label{hybrid:unknown:ex2} \normalfont Let $\Omega=(-\pi,\pi)^2$ and the interface curve be given by $\Gamma_I:=\{(x,y)\in \Omega:\; \psi(x,y)=0\}$ with $\psi (x,y)=x^2+y^2-2$. The functions in \eqref{Qeques2} are given by \begin{align*} &a_{+}=2+\cos(x-y), \qquad a_{-}=10^3(2+\cos(x-y)), \qquad g_D=\sin(x-y)-2, \\ &f_{+}=\sin(8 x)\sin(8 y), \qquad f_{-}=\cos(8 x)\cos(8 y), \qquad g_N=\cos(x+y),\\ & -u_x(-\pi,y)+\cos(y) u(-\pi,y)= \cos(y)+1, \qquad \qquad u(\pi,y)= 0, \qquad \mbox{for} \qquad y\in(-\pi,\pi),\\ & u(x,-\pi)= 0, \qquad \qquad u(x,\pi)= 0, \qquad \mbox{for} \qquad x\in(-\pi,\pi). \end{align*} Note the high contrast $a_-/a_+=10^3$ on $\Gamma_I$. The numerical results are presented in \cref{hybrid:table:unknown:QSp2} and \cref{hybrid:fig:unknown:QSp2}. 
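Since the exact solution is unknown in this subsection, the reported orders are estimated from successive refinements: because $h$ is halved as $J$ increases by one, order $\approx \log_2\big(\|u_{h}-u_{h/2}\|/\|u_{h/2}-u_{h/4}\|\big)$. A short sketch reproducing the $l_2$ order column of \cref{hybrid:table:unknown:QSp2} from its error values:

```python
import math

# Successive-refinement errors ||u_h - u_{h/2}||_2 from the table (J = 4..8).
errs = [7.02037e+02, 9.69424e+00, 2.26556e-01, 2.57284e-03, 5.07886e-05]

# Halving h multiplies an O(h^p) error by 2^{-p}, so p ~ log2(e_{2h} / e_h).
orders = [round(math.log2(errs[i] / errs[i + 1]), 1) for i in range(len(errs) - 1)]
print(orders)  # -> [6.2, 5.4, 6.5, 5.7]
```

The same computation reproduces the order columns of the other tables in this section.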
\end{example} \begin{table}[htbp] \caption{Performance in \cref{hybrid:unknown:ex2} of our proposed hybrid finite difference scheme on uniform Cartesian meshes with $h=2^{-J}\times 2\pi$. } \centering \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{c|c|c|c|c} \hline $J$ & ${\|u_{h}-u_{h/2}\|_{2}}$ &order & $\|u_{h}-u_{h/2}\|_{\infty}$ &order \\ \hline 4 &7.02037E+02 &0 &1.84708E+02 &0 \\ 5 &9.69424E+00 &6.2 &2.54978E+00 &6.2 \\ 6 &2.26556E-01 &5.4 &5.97145E-02 &5.4 \\ 7 &2.57284E-03 &6.5 &6.79725E-04 &6.5 \\ 8 &5.07886E-05 &5.7 &1.34801E-05 &5.7 \\ \hline \end{tabular}} \label{hybrid:table:unknown:QSp2} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{NTCUR2.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{NTAA2.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{NTUU2.pdf} \end{subfigure} \caption {\cref{hybrid:unknown:ex2}: the interface curve $\Gamma_I$ (left), the coefficient $a(x,y)$ (middle) and the numerical solution $u_h$ (right) with $h=2^{-8}\times 2\pi$, where $u_h$ is computed by our proposed hybrid finite difference scheme. In order to show the graph of $a(x,y)$ clearly, we rotate the graph of $a(x,y)$ by $\pi/2$ in this figure.} \label{hybrid:fig:unknown:QSp2} \end{figure} \begin{example}\label{hybrid:unknown:ex3} \normalfont Let $\Omega=(-\frac{\pi}{2},\frac{\pi}{2})^2$ and the interface curve be given by $\Gamma_I:=\{(x,y)\in \Omega:\; \psi(x,y)=0\}$ with $\psi (x,y)=y^2+\frac{2x^2}{x^2+1}-1$. 
The functions in \eqref{Qeques2} are given by \begin{align*} &a_{+}=10^{3}(2+\sin(x+y)), \qquad a_{-}=10^{-3}(2+\cos(x-y)), \qquad g_D=\sin(x)\cos(y)-2, \\ &f_{+}=\sin(6 x)\sin(6 y), \qquad f_{-}=\cos(6 x)\cos(6 y), \qquad g_N=\cos(x+y),\\ & -u_x(-\frac{\pi}{2},y)+\cos(y) u(-\frac{\pi}{2},y)= \sin(y+\frac{\pi}{2})(y-\frac{\pi}{2}), \qquad \qquad u(\frac{\pi}{2},y)= 0, \qquad \mbox{for} \qquad y\in(-\frac{\pi}{2},\frac{\pi}{2}),\\ & u(x,-\frac{\pi}{2})= 0, \qquad \qquad u(x,\frac{\pi}{2})= 0, \qquad \mbox{for} \qquad x\in(-\frac{\pi}{2},\frac{\pi}{2}). \end{align*} The high contrast $a_+/a_-\approx 10^6$ on $\Gamma_I$. The numerical results are presented in \cref{hybrid:table:unknown:QSp3} and \cref{hybrid:fig:unknown:QSp3}. \end{example} \begin{table}[htbp] \caption{Performance in \cref{hybrid:unknown:ex3} of our proposed hybrid finite difference scheme on uniform Cartesian meshes with $h=2^{-J}\times \pi$. } \centering \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{c|c|c|c|c} \hline $J$ & ${\|u_{h}-u_{h/2}\|_{2}}$ &order & $\|u_{h}-u_{h/2}\|_{\infty}$ &order \\ \hline 5 &1.17512E-01 &0 &1.95534E-01 &0 \\ 6 &1.34603E-03 &6.4 &5.01334E-03 &5.3 \\ 7 &2.97345E-05 &5.5 &9.62920E-05 &5.7 \\ 8 &3.63705E-07 &6.4 &1.11523E-06 &6.4 \\ \hline \end{tabular}} \label{hybrid:table:unknown:QSp3} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{NTCUR3.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{NTAA3.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{NTUU3.pdf} \end{subfigure} \caption {\cref{hybrid:unknown:ex3}: the interface curve $\Gamma_I$ (left), the coefficient $a(x,y)$ (middle) and the numerical solution $u_h$ (right) with $h=2^{-8}\times \pi$, where $u_h$ is computed by our proposed hybrid finite difference scheme.} \label{hybrid:fig:unknown:QSp3} \end{figure} 
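The hybrid scheme applies the compact 9-point stencil at regular points and the 13-point stencil at irregular points. A plausible way to locate the irregular points for a given level set (our reading of the regular/irregular split; the paper's precise criterion is given in \cref{sec:sixord}) is to flag grid points whose $3\times 3$ neighbourhood straddles $\{\psi=0\}$. A sketch for the interface of \cref{hybrid:unknown:ex3}:

```python
import math

def psi(x, y):
    # Level set of Example unknown:ex3: psi(x,y) = y^2 + 2x^2/(x^2+1) - 1
    # on the square (-pi/2, pi/2)^2.
    return y**2 + 2.0 * x**2 / (x**2 + 1.0) - 1.0

J = 5
n = 2**J                                     # mesh size h = 2^{-J} * pi
lo, h = -math.pi / 2, math.pi / n
grid = [lo + k * h for k in range(n + 1)]

def is_irregular(i, j):
    # Hypothetical classifier: psi changes sign over the 3x3 neighbourhood,
    # i.e. the interface crosses the compact stencil of the point (i, j).
    vals = [psi(grid[i + di], grid[j + dj])
            for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    return min(vals) < 0.0 < max(vals)

irregular = [(i, j) for i in range(1, n) for j in range(1, n) if is_irregular(i, j)]
print(len(irregular), "of", (n - 1)**2, "interior points are irregular")
```

Only an $O(n)$ band of points along the interface is flagged, which is why switching these points to the 13-point stencil adds little to the overall complexity of the linear system.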
\begin{example}\label{hybrid:unknown:ex4} \normalfont Let $\Omega=(-2.5,2.5)^2$ and the interface curve be given by $\Gamma_I:=\{(x,y)\in \Omega:\; \psi(x,y)=0\}$ with $\psi (x,y)=y^2-2x^2+x^4-\frac{1}{4}$. The functions in \eqref{Qeques2} are given by \begin{align*} &a_{+}=10^{3}(10+\cos(x)\cos(y)), \qquad a_{-}=10^{-3}(10+\sin(x)\sin(y)), \qquad g_D=\sin(x)-2, \\ &f_{+}=\sin(4\pi x)\sin(4\pi y), \qquad f_{-}=\cos(4\pi x)\cos(4\pi y), \qquad g_N=\cos(y),\\ & u(-2.5,y)= 0, \qquad \qquad u(2.5,y)= 0, \qquad \mbox{for} \qquad y\in(-2.5,2.5),\\ & u(x,-2.5)= 0, \qquad \qquad u(x,2.5)= 0, \qquad \mbox{for} \qquad x\in(-2.5,2.5). \end{align*} The high contrast $a_+/a_-\approx 10^6$ on $\Gamma_I$. The numerical results are presented in \cref{hybrid:table:unknown:QSp4} and \cref{hybrid:fig:unknown:QSp4}. \end{example} \begin{table}[htbp] \caption{Performance in \cref{hybrid:unknown:ex4} of our proposed hybrid finite difference scheme on uniform Cartesian meshes with $h=2^{-J}\times 5$. } \centering \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{c|c|c|c|c} \hline $J$ & ${\|u_{h}-u_{h/2}\|_{2}}$ &order & $\|u_{h}-u_{h/2}\|_{\infty}$ &order \\ \hline 5 &6.18678E+00 &0 &9.88338E+00 &0 \\ 6 &9.69535E-02 &6.0 &2.17089E-01 &5.5 \\ 7 &1.67043E-03 &5.9 &3.52407E-03 &5.9 \\ 8 &2.43148E-05 &6.1 &5.22530E-05 &6.1 \\ \hline \end{tabular}} \label{hybrid:table:unknown:QSp4} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{HyCUR4.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{NTAA4.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{NTUU4.pdf} \end{subfigure} \caption {\cref{hybrid:unknown:ex4}: the interface curve $\Gamma_I$ (left), the coefficient $a(x,y)$ (middle) and the numerical solution $u_h$ (right) with $h=2^{-8}\times 5$. 
In order to show the graph of $u_h$ clearly, we rotate the graph of $u_h$ by $\pi/2$ in this figure.} \label{hybrid:fig:unknown:QSp4} \end{figure} \begin{example}\label{hybrid:unknown:ex5} \normalfont Let $\Omega=(-\pi,\pi)^2$ and the interface curve be given by $\Gamma_I:=\{(x,y)\in \Omega:\; \psi(x,y)=0\}$ with $\psi (x,y)=x^2+y^2-4$. The functions in \eqref{Qeques2} are given by \begin{align*} &a_{+}=10(2+\cos(x-y)), \qquad a_{-}=10^{-6}(2+\sin(x)\sin(y)), \qquad g_D=\sin(y)-10, \\ &f_{+}=\sin(6 x)\sin(6 y), \qquad f_{-}=\cos(6 x)\cos(6 y), \qquad g_N=\cos(x),\\ & -u_x(-\pi,y)+\sin(y) u(-\pi,y)= \cos(y), \qquad \qquad u(\pi,y)=0, \qquad \mbox{for} \qquad y\in(-\pi,\pi),\\ & -u_y(x,-\pi)= \sin(x-\pi), \qquad \qquad u_y(x,\pi)+\cos(x) u(x,\pi)=\cos(x)+1, \qquad \mbox{for} \qquad x\in(-\pi,\pi). \end{align*} The high contrast $a_+/a_-\approx 10^7$ on $\Gamma_I$. The numerical results are presented in \cref{hybrid:table:unknown:QSp5} and \cref{hybrid:fig:unknown:QSp5}. \end{example} \begin{table}[htbp] \caption{Performance in \cref{hybrid:unknown:ex5} of our proposed hybrid finite difference scheme on uniform Cartesian meshes with $h=2^{-J}\times 2\pi$.} \centering \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{c|c|c|c|c} \hline $J$ & ${\|u_{h}-u_{h/2}\|_{2}}$ &order & $\|u_{h}-u_{h/2}\|_{\infty}$ &order \\ \hline 5 &1.60217E+04 &0 &1.39059E+04 &0 \\ 6 &2.94197E+02 &5.8 &2.79828E+02 &5.6 \\ 7 &4.54676E+00 &6.0 &6.36193E+00 &5.5 \\ 8 &5.82759E-02 &6.3 &1.02577E-01 &6.0 \\ \hline \end{tabular}} \label{hybrid:table:unknown:QSp5} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{NTCUR5.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{NTAA5.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=5.5cm,height=3.5cm]{NTUU5.pdf} \end{subfigure} \caption {\cref{hybrid:unknown:ex5}: the interface curve $\Gamma_I$ (left), the 
coefficient $a(x,y)$ (middle) and the numerical solution $u_h$ (right) with $h=2^{-8}\times 2\pi$, where $u_h$ is computed by our proposed hybrid finite difference scheme.} \label{hybrid:fig:unknown:QSp5} \end{figure} \section{Conclusion}\label{sec:hybrid:Conclu} To the best of our knowledge, no 13-point finite difference schemes for irregular points that achieve fifth or sixth order accuracy for elliptic interface problems with discontinuous coefficients were previously available in the literature. The contributions of this paper are as follows: \begin{itemize} \item We propose a hybrid (13-point for irregular points and compact 9-point for interior regular points) finite difference scheme, which demonstrates sixth order accuracy in all our numerical experiments, for elliptic interface problems with discontinuous, variable and high-contrast coefficients, discontinuous source terms and two non-homogeneous jump conditions. \item The proposed hybrid scheme demonstrates robust high-order convergence for the challenging cases of high-contrast ratios of the coefficients $a_{\pm}$: $\sup(a_+)/\inf(a_-)=10^{-3},10^{-6},10^{6},10^{7}$. \item Due to the flexibility and efficiency of the implementation, it is straightforward to extend it to 25-point or 36-point schemes for irregular points of elliptic interface problems and Helmholtz interface equations with discontinuous wave numbers. \item From the results in \cref{hybrid:table:QSp3,hybrid:table:QSp5}, we find that if we only replace the $13$-point scheme at irregular points by a $9$-point scheme, then the numerical errors increase significantly while the condition number decreases only slightly. Thus, the proposed hybrid scheme significantly improves the numerical performance at the cost of only a slight increase in the complexity of the corresponding linear system.
\item We also derive $6$-point/$4$-point schemes with sixth order accuracy at the side/corner points for the case of smooth coefficients $\alpha$ and $\beta$ in the Robin boundary conditions $\tfrac{\partial u}{\partial \nv}+\alpha u=g_1$ and $\tfrac{\partial u}{\partial \nv}+\beta u=g_4$. \item The presented numerical experiments confirm the sixth order accuracy in the $l_2$ and $l_{\infty}$ norms of our proposed hybrid scheme. \end{itemize} \section{Appendix} \label{hybrid:sec:proofs} Let us first present the definitions of several index sets $\ind_{M+1}, \ind_{M+1}^{V, 1}, \ind_{M+1}^{V, 2}, \ind_{M+1}^{H, 1}, \ind_{M+1}^{H, 2}$, which are employed in \cref{sec:sixord}. Define $\NN:=\N\cup\{0\}$, the set of all nonnegative integers. Given $M+1\in \NN$, we use the same definitions as in \cite[(2.4) and (2.7)]{FHM21Helmholtz}: \be \label{Sk} \ind_{M+1}:=\{(m,n-m) \; : \; n=0,\ldots,M+1 \; \mbox{ and }\; m=0,\ldots,n\}, \qquad M+1\in \NN, \ee \be \label{indV12} \ind_{M+1}^{V, 2}:=\ind_{M+1}\setminus \ind_{M+1}^{V, 1}\quad \mbox{with}\quad \ind_{M+1}^{V, 1}:=\{(\ell,k-\ell) \; : k=\ell,\ldots, M+1-\ell\;\; \mbox{and} \;\;\ell=0,1\; \}, \ee \be \label{indH12} \ind_{M+1}^{H, j}:=\{(n,m):(m,n) \in \ind_{M+1}^{V,j}\}, \qquad j=1,2.
\ee For all $(m,n)\in \ind_{M+1}^{V,1}$, we define \be\label{GVmn} G^V_{M,m,n}(x,y):=\sum_{\ell=0}^{\lfloor \frac{n}{2}\rfloor} \frac{(-1)^\ell x^{m+2\ell} y^{n-2\ell}}{(m+2\ell)!(n-2\ell)!}+\sum_{(m',n')\in \ind_{M+1}^{V,2} \setminus \ind_{m+n}^{V,2} }A^{V,u}_{m',n',m,n} \frac{ x^{m'} y^{n'}}{m'!n'!}, \ee and for all $(m,n)\in \ind_{M-1}$, \be\label{QVmn} \begin{split} Q^V_{M,m,n}(x,y):=\sum_{\ell=1}^{1+\lfloor \frac{n}{2}\rfloor} \frac{(-1)^{\ell} x^{m+2\ell} y^{n-2\ell+2}}{(m+2\ell)!(n-2\ell+2)!}\frac{1}{a^{(0,0)}} +\sum_{(m',n')\in \ind_{M+1}^{V,2} \setminus \ind_{m+n+2}^{V,2} }A^{V,f}_{m',n',m,n} \frac{ x^{m'} y^{n'}}{m'!n'!}, \end{split} \ee where $A^{V,u}_{m',n',m,n}$ and $A^{V,f}_{m',n',m,n}$ are constants which are uniquely determined by $\{a^{(m,n)}: (m,n) \in \ind_{M}\}$, and the floor function $\lfloor x\rfloor$ is defined to be the largest integer less than or equal to $x\in \R$. For all $(m,n)\in \ind_{M+1}^{H,1}$, we define \be\label{GHmn} G^H_{M,m,n}(x,y):=\sum_{\ell=0}^{\lfloor \frac{m}{2}\rfloor} \frac{(-1)^\ell y^{n+2\ell} x^{m-2\ell}}{(n+2\ell)!(m-2\ell)!}+\sum_{(m',n')\in \ind_{M+1}^{H,2} \setminus \ind_{m+n}^{H,2} }A^{H,u}_{m',n',m,n} \frac{ x^{m'} y^{n'}}{m'!n'!}, \ee and for all $(m,n)\in \ind_{M-1}$, \be\label{QHmn} \begin{split} Q^H_{M,m,n}(x,y):=\sum_{\ell=1}^{1+\lfloor \frac{m}{2}\rfloor} \frac{(-1)^{\ell} y^{n+2\ell} x^{m-2\ell+2}}{(n+2\ell)!(m-2\ell+2)!}\frac{1}{a^{(0,0)}} +\sum_{(m',n')\in \ind_{M+1}^{H,2} \setminus \ind_{m+n+2}^{H,2} }A^{H,f}_{m',n',m,n} \frac{ x^{m'} y^{n'}}{m'!n'!}, \end{split} \ee where $A^{H,u}_{m',n',m,n}$ and $A^{H,f}_{m',n',m,n}$ are constants which are uniquely determined by $\{a^{(m,n)}: (m,n) \in \ind_{M}\}$. In this appendix, we provide the proofs of all the technical results stated in \cref{sec:sixord}.
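The index sets above are small and can be enumerated directly, which gives a convenient check of the cardinalities involved: from the definitions, $\#\ind_{M+1}=\tfrac{(M+2)(M+3)}{2}$ and $\#\ind_{M+1}^{V,1}=2M+2$. A sketch for $M=6$ (the value used for the interior stencil):

```python
def Lam(Mp1):
    # \Lambda_{M+1}: all (m, n-m) with n = 0..M+1 and m = 0..n,
    # i.e. all nonnegative index pairs of total degree <= M+1.
    return {(m, n - m) for n in range(Mp1 + 1) for m in range(n + 1)}

def Lam_V1(Mp1):
    # \Lambda^{V,1}_{M+1}: pairs (l, k-l) with k = l..M+1-l and l = 0, 1.
    return {(l, k - l) for l in (0, 1) for k in range(l, Mp1 - l + 1)}

M = 6
full, V1 = Lam(M + 1), Lam_V1(M + 1)
V2 = full - V1                                  # \Lambda^{V,2}: the complement
H1 = {(n, m) for (m, n) in V1}                  # \Lambda^{H,1}: transposed pairs

assert len(full) == (M + 2) * (M + 3) // 2      # 36 pairs for M = 6
assert len(V1) == 2 * M + 2                     # 14 pairs for M = 6
assert V1 <= full and H1 <= full
print(len(full), len(V1), len(V2))              # -> 36 14 22
```

In particular, $\ind_{M+1}^{V,1}$ consists of the two "columns" $m\in\{0,1\}$, which is why the vertical expansions below are indexed by $u^{(0,n)}$ and $u^{(1,n)}$ only.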
\begin{proof}[Proof of \cref{thm:regular:interior}] Choose $M=6$ and replace $G_{m,n}$, $H_{m,n}$ and $\ind_{M+1}^{1}$ in \cite{FHM21b} by $G^V_{M,m,n}$ given in \eqref{GVmn}, $Q^V_{M,m,n}$ in \eqref{QVmn}, and $\ind_{M+1}^{V,1}$ in \eqref{indV12}. \end{proof} \begin{proof}[Proof of \cref{hybrid:thm:regular:Robin:1}] Let $M_f=M_{g_1}=M$ in the proof of \cite[Theorem 2.3]{FHM21Helmholtz}. Then \cite[(4.7)]{FHM21Helmholtz} implies \be \label{stencil:regular:EQ:V:Robin:1} \sum_{k=0}^1 \sum_{\ell=-1}^1 C^{\mathcal{B}_1}_{k,\ell} u(x_i+kh,y_j+\ell h)= \sum_{(m,n)\in \ind_{M-1}} f^{(m,n)}C^{\mathcal{B}_1}_{f,m,n}+\sum_{n=0}^{M} g_1^{(n)}C^{\mathcal{B}_1}_{g_1,n}+\bo(h^{M+2}), \qquad h \to 0. \ee Since $-u_x+\alpha u=g_1$ on $\Gamma_1$, we have $u^{(1,n)} = \sum_{i=0}^n {n\choose i} {\alpha}^{(n-i)}u^{(0,i)} - g_1^{(n)}$ for all $n = 0,\dots, M$. By \eqref{u:approx:key:V}, {\footnotesize{ \begin{align*} &u(x+x_i^*,y+y_j^*)= \sum_{n=0}^{M+1} u^{(0,n)} G^{V}_{M,0,n}(x,y)+\sum_{n=0}^{M} u^{(1,n)} G^{V}_{M,1,n}(x,y) +\sum_{(m,n)\in \ind_{M-1}} f^{(m,n)} Q^{V}_{M,m,n}(x,y) +\bo(h^{M+2}) \\ & = \sum_{n=0}^{M+1} u^{(0,n)} G^{V}_{M,0,n}(x,y)+\sum_{n=0}^{M} \bigg( \sum_{i=0}^n {n\choose i} {\alpha}^{(n-i)}u^{(0,i)} -g_1^{(n)} \bigg) G^{V}_{M,1,n}(x,y) +\sum_{(m,n)\in \ind_{M-1}} f^{(m,n)} Q^{V}_{M,m,n}(x,y)\\ & \qquad +\bo(h^{M+2})\\ &=\sum_{n=0}^{M+1} u^{(0,n)} G^{V}_{M,0,n}(x,y) +\sum_{n=0}^{M} \sum_{i=0}^n {n\choose i} {\alpha}^{(n-i)}u^{(0,i)} G^{V}_{M,1,n}(x,y) -\sum_{n=0}^{M} g_{1}^{(n)} G^{V}_{M,1,n}(x,y) \\ & \qquad +\sum_{(m,n)\in \ind_{M-1}} f^{(m,n)} Q^{V}_{M,m,n}(x,y) +\bo(h^{M+2})\\ &=\sum_{n=0}^{M+1} u^{(0,n)} G^{V}_{M,0,n}(x,y) +\sum_{i=0}^{M} \sum_{n=i}^M {n\choose i} {\alpha}^{(n-i)}u^{(0,i)} G^{V}_{M,1,n}(x,y) -\sum_{n=0}^{M} g_{1}^{(n)} G^{V}_{M,1,n}(x,y)\\ & \qquad +\sum_{(m,n)\in \ind_{M-1}}
f^{(m,n)} Q^{V}_{M,m,n}(x,y) +\bo(h^{M+2})\\ &=u^{(0,M+1)} G^{V}_{M,0,M+1}(x,y)+\sum_{n=0}^{M} u^{(0,n)} G^{V}_{M,0,n}(x,y) +\sum_{n=0}^{M} \sum_{i=n}^M {i\choose n} {\alpha}^{(i-n)}u^{(0,n)} G^{V}_{M,1,i}(x,y) -\sum_{n=0}^{M} g_{1}^{(n)} G^{V}_{M,1,n}(x,y)\\ & \qquad +\sum_{(m,n)\in \ind_{M-1}} f^{(m,n)} Q^{V}_{M,m,n}(x,y) +\bo(h^{M+2}), \quad \mbox{for } x,y\in (-2h,2h). \end{align*} } } So \eqref{stencil:regular:EQ:V:Robin:1} leads to \be \label{stencil:regular:EQ2:V:Robin:1} \sum_{n=0}^{M+1} u^{(0,n)} I^{\mathcal{B}_1}_{n}+ \sum_{(m,n)\in \ind_{M-1}} f^{(m,n)} \left(J^{\B_1}_{m,n}-C^{\B_1}_{f,m,n}\right) +\sum_{n=0}^{M} g_1^{(n)}\left(K^{\B_1}_{n}-C^{\B_1}_{g_{1},n}\right) =\bo(h^{M+2}), \ee as $h \to 0$, where \begin{align} \nonumber &I^{\B_1}_{n}:=\sum_{k=0}^1 \sum_{\ell=-1}^1 C^{\B_1}_{k,\ell} \left( G^{V}_{M,0,n}(kh, \ell h) + \sum_{i=n}^M {i\choose n} {\alpha}^{(i-n)} G^{V}_{M,1,i}(kh, \ell h) (1-\delta_{n,M+1}) \right),\\ \label{IB1n} & J^{\B_1}_{m,n}:=\sum_{k=0}^1 \sum_{\ell=-1}^1 C^{\B_1}_{k,\ell} Q^{V}_{M,m,n}(kh, \ell h), \quad K^{\B_1}_{n}:=-\sum_{k=0}^1 \sum_{\ell=-1}^1 C^{\B_1}_{k,\ell} G^{V}_{M,1,n}(kh, \ell h), \end{align} $\delta_{a,a}=1$, and $\delta_{a,b}=0$ for $a \neq b$. \end{proof} \begin{proof}[Proof of \cref{hybrid:thm:regular:Neu:3}] The proof is almost identical to the proof of \cref{hybrid:thm:regular:Robin:1}. \end{proof} \begin{proof}[Proof of \cref{hybrid:thm:regular:Robin:4}] The proof is almost identical to the proof of \cref{hybrid:thm:regular:Robin:1}. \end{proof} \begin{proof}[Proof of \cref{thm:corner:1}] The proof is similar to the proof of \cite[Theorem 2.4]{FHM21Helmholtz}. 
Precisely, replace $\B_1u=\frac{\partial u}{\partial \nv}- \ia \ka u=g_1$ by $\B_1u=\frac{\partial u}{\partial \nv}+\alpha u=g_1$ in the proof of \cite[Theorem 2.4]{FHM21Helmholtz} with $M=M_f=M_{g_1}=M_{g_3}=5$, and replace \cite[$G^{V}_{M,m,n}$, $Q^{V}_{M,m,n}$, $G^{H}_{M,m,n}$ and $Q^{H}_{M,m,n}$]{FHM21Helmholtz} by \eqref{GVmn}, \eqref{QVmn}, \eqref{GHmn} and \eqref{QHmn}. \end{proof} \begin{proof}[Proof of \cref{thm:corner:2}] The proof is similar to the proof of \cite[Theorem 2.5]{FHM21Helmholtz}. Precisely, replace $\B_1u=\frac{\partial u}{\partial \nv}- \ia \ka u=g_1$ and $\B_4u=\frac{\partial u}{\partial \nv}-\ia \ka u=g_4$ by $\B_1u=\frac{\partial u}{\partial \nv}+\alpha u=g_1$ and $\B_4u=\frac{\partial u}{\partial \nv}+\beta u=g_4$ respectively in the proof of \cite[Theorem 2.5]{FHM21Helmholtz} with $M=M_f=M_{g_1}=M_{g_4}=5$ and replace \cite[$G^{V}_{M,m,n}$, $Q^{V}_{M,m,n}$, $G^{H}_{M,m,n}$ and $Q^{H}_{M,m,n}$]{FHM21Helmholtz} by \eqref{GVmn}, \eqref{QVmn}, \eqref{GHmn} and \eqref{QHmn}. \end{proof} \begin{proof}[Proof of \cref{thm:interface}] \eqref{T0000} can be obtained from $u_-^{(0,0)}=u_+^{(0,0)}-g_{D}^{(0,0)}$ and \cite[(7.18)]{FHM21b}. The rest of the proof is straightforward and follows from \cite[(7.8), (7.10), (7.16), and (7.18)]{FHM21b}. \end{proof} \begin{proof}[Proof of \cref{13point:Inter}] Choose $M=4$ and replace $\ind_{M+1}^{1}$, $G^{\pm}_{m,n}$, $H^{\pm}_{m,n}$, and $d_{i,j}^\pm$ in \cite[Theorem 3.3]{FHM21b} by $\ind_{M+1}^{V,1}$, $G^{\pm,V}_{M,m,n}$, $Q^{\pm,V}_{M,m,n}$, and $d_{i,j}^\pm \cup e_{i,j}^\pm$ in this paper, respectively. \end{proof} \begin{proof}[Proof of \eqref{ckln:irregular}] Note that when we use the formulas of \cite{FHM21b} in this proof, we need to replace $\ind_{M+1}^{1}$, $G^{\pm}_{m,n}$, $H^{\pm}_{m,n}$, and $d_{i,j}^\pm$ in \cite{FHM21b} by $\ind_{M+1}^{V,1}$, $G^{\pm,V}_{M,m,n}$, $Q^{\pm,V}_{M,m,n}$, and $d_{i,j}^\pm \cup e_{i,j}^\pm$ in this paper, respectively. Consider $I_{0,0}(h)=\bo(h^{M+2})$ in \cite[(3.29)]{FHM21b}.
According to \cite[(3.28)]{FHM21b} and \eqref{newJuT} in this paper, $I_{0,0}(h)=\bo(h^{M+2})$ implies \be \label{U00:coeff} \sum_{(k,\ell)\in d_{i,j}^+\cup e_{i,j}^+} C_{k,\ell}(h) G^{+,V}_{M,0,0}(v_0h+kh,w_0h+\ell h)+ \sum_{ \substack{ (m',n')\in \ind_{M+1}^{V,1} \\ m'+n' \ge 0}} I^-_{m',n'}(h) T^{u_+}_{m',n',0,0}=\bo(h^{M+2}). \ee By \eqref{T0000}, \eqref{U00:coeff} is equivalent to \[\sum_{(k,\ell)\in d_{i,j}^+\cup e_{i,j}^+} C_{k,\ell}(h) G^{+,V}_{M,0,0}(v_0h+kh,w_0h+\ell h)+ I^-_{0,0}(h)=\bo(h^{M+2}),\] i.e., \be \label{U00:coeff:2} \sum_{(k,\ell)\in d_{i,j}^+\cup e_{i,j}^+} C_{k,\ell}(h) G^{+,V}_{M,0,0}(v_0h+kh,w_0h+\ell h)+ \sum_{(k,\ell)\in d_{i,j}^-\cup e_{i,j}^-} C_{k,\ell}(h) G^{-,V}_{M,0,0}(v_0h+kh,w_0h+\ell h)=\bo(h^{M+2}). \ee According to the proof of \cite[Lemma 2.1]{FHM21b} and \eqref{GVmn}, \be\label{G00} G^{\pm,V}_{M,0,0}(x,y):=1. \ee Consider the coefficients of $h^i$ for $i=0,1,\dots, M+1$ in \eqref{U00:coeff:2}, then \eqref{G00} implies \be\label{ckl0} \sum_{(k,\ell)\in d_{i,j}^+\cup e_{i,j}^+} c_{k,\ell,i} + \sum_{(k,\ell)\in d_{i,j}^-\cup e_{i,j}^-} c_{k,\ell,i} =0, \quad \mbox{for} \quad i=0,1,\dots, M+1. \ee This proves \eqref{ckln:irregular}. \end{proof} \begin{thebibliography}{99} \bibitem{CFL19} X.~Chen, X.~Feng, and Z.~Li, A direct method for accurate solution and gradient computations for elliptic interface problems. \emph{Numer. Algorithms.} \textbf{80} (2019), 709-740. \bibitem{DFL20} B.~Dong, X.~Feng, and Z.~Li, An FE-FD method for anisotropic elliptic interface problems. \emph{SIAM J. Sci. Comput.} \textbf{42} (2020), B1041-B1066. \bibitem{EwingLLL99} R.~Ewing, Z.~Li, T.~Lin, and Y.~Lin, The immersed finite volume element methods for the elliptic interface problems. \emph{Math. Comput. Simul.} \textbf{50} (1999), 63-76. \bibitem{FendZhao20pp109677} H.~Feng and S.~Zhao, A fourth order finite difference method for solving elliptic interface problems with the FFT acceleration. \emph{J. Comput. Phys.} \textbf{419} (2020), 109677. 
\bibitem{FendZhao20pp109391} H.~Feng and S.~Zhao, FFT-based high order central difference schemes for three-dimensional Poisson's equation with various types of boundary conditions. \emph{J. Comput. Phys.} \textbf{410} (2020), 109391. \bibitem{FHM21a} Q.~Feng, B.~Han, and P.~Minev, Sixth order compact finite difference schemes for Poisson interface problems with singular sources. \emph{Comput. Math. Appl.} \textbf{99} (2021), 2-25. \bibitem{FHM21b} Q.~Feng, B.~Han, and P.~Minev, A high order compact finite difference scheme for elliptic interface problems with discontinuous and high-contrast coefficients, arXiv:2105.04600 (2021), 30 pp. \bibitem{FHM21Helmholtz} Q.~Feng, B.~Han, and M.~Michelle, Sixth order compact finite difference method for 2D Helmholtz equations with singular sources and reduced pollution effect, arXiv:2112.07154 (2021), 20 pp. \bibitem{GongLiLi08} Y.~Gong, B.~Li, and Z.~Li, Immersed-interface finite-element methods for elliptic interface problems with nonhomogeneous jump conditions. \emph{SIAM J. Numer. Anal.} \textbf{46} (2008), 472-495. \bibitem{HeLL2011} X.~He, T.~Lin, and Y.~Lin, Immersed finite element methods for elliptic interface problems with non-homogeneous jump conditions. \emph{Int. J. Numer. Anal. Model.} \textbf{8} (2011), 284-301. \bibitem{ItoLiKyei05} K.~Ito, Z.~Li, and Y.~Kyei, Higher-order, Cartesian grid based finite difference schemes for elliptic equations on irregular domains. \emph{SIAM J. Sci. Comput.} \textbf{27} (2005), 346-367. \bibitem{LiIto06} Z.~Li and K.~Ito, The immersed interface method: numerical solutions of PDEs involving interfaces and irregular domains. Society for Industrial and Applied Mathematics, 2006. \bibitem{Li98} Z.~Li, A fast iterative algorithm for elliptic interface problems. \emph{SIAM J. Numer. Anal.} \textbf{35} (1998), 230-254. \bibitem{LiPan2021} Z.~Li and K.~Pan, Can 4th-order compact schemes exist for flux type BCs? arXiv:2109.05638 (2021), 22 pp.
\bibitem{LeLi94} R.~J.~LeVeque and Z.~Li, The immersed interface method for elliptic equations with discontinuous coefficients and singular sources. \emph{SIAM J. Numer. Anal.} \textbf{31} (1994), 1019-1044. \bibitem{Nabavi07} M.~Nabavi, M.~H.~K.~Siddiqui, and J.~Dargahi, A new 9-point sixth-order accurate compact finite-difference method for the Helmholtz equation. \emph{J. Sound Vib.} \textbf{307} (2007), 972-982. \bibitem{PanHeLi21} K.~Pan, D.~He, and Z.~Li, A high order compact FD framework for elliptic BVPs involving singular sources, interfaces, and irregular domains. \emph{J. Sci. Comput.} \textbf{88} (2021), 1-25. \bibitem{RenFengZhao2022} Y.~Ren, H.~Feng, and S.~Zhao, A FFT accelerated high order finite difference method for elliptic boundary value problems over irregular domains. \emph{J. Comput. Phys.} \textbf{448} (2022), 110762. \bibitem{TGGT13} E.~Turkel, D.~Gordon, R.~Gordon, and S.~Tsynkov, Compact 2D and 3D sixth order schemes for the Helmholtz equation with variable wave number. \emph{J. Comput. Phys.} \textbf{232} (2013), 272-287. \bibitem{WieBube00} A.~Wiegmann and K.~P.~Bube, The explicit-jump immersed interface method: finite difference methods for PDEs with piecewise smooth solutions. \emph{SIAM J. Numer. Anal.} \textbf{37} (2000), 827-862. \bibitem{YuZhouWei07} S.~Yu, Y.~Zhou, and G.~W.~Wei, Matched interface and boundary (MIB) method for elliptic problems with sharp-edged interfaces. \emph{J. Comput. Phys.} \textbf{224} (2007), 729-756. \bibitem{YuWei073D} S.~Yu and G.~W.~Wei, Three-dimensional matched interface and boundary (MIB) method for treating geometric singularities. \emph{J. Comput. Phys.} \textbf{227} (2007), 602-632. \bibitem{ZZFW06} Y.~C.~Zhou, S.~Zhao, M.~Feig, and G.~W.~Wei, High order matched interface and boundary method for elliptic equations with discontinuous coefficients and singular sources. \emph{J. Comput. Phys.} \textbf{213} (2006), 1-30.
\bibitem{ZW06} Y.~C.~Zhou and G.~W.~Wei, On the fictitious-domain and interpolation formulations of the matched interface and boundary (MIB) method. \emph{J. Comput. Phys.} \textbf{219} (2006), 228-246. \bibitem{XiaolinZhong07} X.~Zhong, A new high-order immersed interface method for solving elliptic equations with imbedded interface of discontinuity. \emph{J. Comput. Phys.} \textbf{225} (2007), 1066-1099. \end{thebibliography} \end{document}
2205.01196v2
http://arxiv.org/abs/2205.01196v2
Strong Stationarity Conditions for Optimal Control Problems Governed by a Rate-Independent Evolution Variational Inequality
\documentclass[onefignum,onetabnum]{siamart190516} \ifpdf \DeclareGraphicsExtensions{.eps,.pdf,.png,.jpg} \else \DeclareGraphicsExtensions{.eps} \newcommand{\creflastconjunction}{, and~} \newsiamremark{remark}{Remark} \newsiamremark{example}{Example} \newsiamremark{hypothesis}{Hypothesis} \crefname{hypothesis}{Hypothesis}{Hypotheses} \newsiamthm{claim}{Claim} \newsiamthm{assumption}{Assumption} \crefname{assumption}{Assumption}{Assumptions} \headers{Strong Stationarity for a Rate-Independent EVI} {Martin Brokate and Constantin Christof} \title{Strong Stationarity Conditions for Optimal Control Problems Governed by a Rate-Independent Evolution Variational Inequality\thanks{Submitted to the editors DATE. }} \author{Martin Brokate\thanks{Technische Universit\"at M\"unchen, Department of Mathematics, M6, Boltzmannstra{\ss}e 3, 85748 Garching bei M\"unchen, Germany; Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstra{\ss}e 39, 10117 Berlin, Germany; Faculty of Civil Engineering, Czech Technical University in Prague, Th\'{a}kurova 7, 16629 Praha 6, Czech Republic; \url{https://www.professoren.tum.de/brokate-martin}, \email{[email protected]}}\and Constantin Christof\thanks{Technische Universit\"at M\"unchen, Department of Mathematics, M17, Boltzmannstra{\ss}e 3, 85748 Garching bei M\"unchen, Germany, \url{https://www-m17.ma.tum.de/Lehrstuhl/ConstantinChristof}, \email{[email protected]} }} \usepackage{amsopn} \usepackage{amsfonts} \usepackage{graphicx} \usepackage{amssymb} \usepackage{enumitem} \usepackage{dsfont} \newcommand{\sgn}{\operatorname{sgn}} \newcommand{\closure}{\operatorname{cl}} \newcommand{\var}{\operatorname{var}} \newcommand{\dd}{\,\mathrm{d}} \newcommand{\crit}{\mathrm{crit}} \newcommand{\red}{\mathrm{red}} \newcommand{\rad}{\mathrm{rad}} \newcommand{\supp}{\operatorname{supp}} \newcommand{\ptw}{\mathrm{ptw}} \newcommand{\Uad}{U_{\textup{ad}}} \renewcommand{\AA}{\mathcal{A}} \newcommand{\BB}{\mathcal{B}} \newcommand{\CC}{\mathcal{C}} 
\newcommand{\DD}{\mathcal{D}} \newcommand{\EE}{\mathcal{E}} \newcommand{\FF}{\mathcal{F}} \newcommand{\GG}{\mathcal{G}} \newcommand{\HH}{\mathcal{H}} \newcommand{\II}{\mathcal{I}} \newcommand{\JJ}{\mathcal{J}} \newcommand{\KK}{\mathcal{K}} \newcommand{\LL}{\mathcal{L}} \newcommand{\MM}{\mathcal{M}} \newcommand{\NN}{\mathcal{N}} \newcommand{\OO}{\mathcal{O}} \newcommand{\PP}{\mathcal{P}} \newcommand{\QQ}{\mathcal{Q}} \newcommand{\RR}{\mathcal{R}} \renewcommand{\S}{\mathcal{S}} \newcommand{\TT}{\mathcal{T}} \newcommand{\UU}{\mathcal{U}} \newcommand{\VV}{\mathcal{V}} \newcommand{\WW}{\mathcal{W}} \newcommand{\XX}{\mathcal{X}} \newcommand{\YY}{\mathcal{Y}} \newcommand{\ZZ}{\mathcal{Z}} \newcommand{\LLL}{\mathscr{L}} \newcommand{\CCC}{\mathscr{C}} \newcommand{\A}{\mathbb{A}} \newcommand{\C}{\mathbb{C}} \renewcommand{\H}{\mathbb{H}} \newcommand{\I}{\mathbb{I}} \newcommand{\N}{\mathbb{N}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\Z}{\mathbb{Z}} \renewcommand{\L}{\mathbb{L}} \newcommand{\K}{\mathbb{K}} \newcommand{\W}{\mathbb{W}} \newcommand{\ddp}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\ddt}{\frac{\mathrm{d}}{\mathrm{dt}}} \newcommand{\weakly}{\rightharpoonup} \newcommand{\weaklystar}{\stackrel\star\rightharpoonup} \definecolor{darkgreen}{rgb}{0,0.5,0} \definecolor{darkred}{rgb}{0.8,0,0} \usepackage{marginnote} } \usepackage{etoolbox} \makeatletter \patchcmd{\@addmarginpar}{\ifodd\c@page}{\ifodd\c@page\@tempcnta\m@ne}{}{} \makeatother \reversemarginpar \newcommand{\ccnote}[1]{\marginpar{\tiny\textbf{\color{darkgreen}{CC: #1}}}} \newcommand{\mbnote}[1]{\marginpar{\tiny\textbf{\color{darkred}{MB: #1}}}} \newcommand{\ccchange}[2][\marginnote{\textup{!}}]{{\color{darkgreen}#1#2}} \begin{document} \maketitle \begin{abstract}We prove strong stationarity conditions for optimal control problems that are governed by a prototypical rate-independent evolution variational inequality, i.e., first-order necessary optimality conditions in the form of a 
primal-dual multiplier system that are equivalent to the purely primal notion of Bouligand stationarity. Our analysis relies on recent results on the Hadamard directional differentiability of the scalar stop operator and a new concept of temporal polyhedricity that generalizes classical ideas of Mignot. The established strong stationarity system is compared with known optimality conditions for optimal control problems governed by elliptic obstacle-type variational inequalities and stationarity systems obtained by regularization. \end{abstract} \begin{keywords} optimal control, rate independence, stop operator, variational inequality, sweeping process, strong stationarity, Bouligand stationarity, Kurzweil integral, polyhedricity, hysteresis \end{keywords} \begin{AMS} 49J40, 47J40, 34C55, 49K21, 49K27 \end{AMS} \section{Introduction and summary of results}\label{sec:1}This paper is concerned with the derivation of first-order necessary optimality conditions for optimal control problems of the type \begin{equation*} \label{eq:P} \tag{P} \left \{~~ \begin{aligned} \text{Minimize} \quad & \JJ(y, y(T), u) \\ \text{w.r.t.} \quad &y \in CBV[0, T], \quad u \in \Uad,\\ \text{s.t.} \quad & \int_0^T (v - y)\dd (y - u)\geq 0~~ \forall v \in C([0, T]; Z), \\ &y(t) \in Z\quad \forall t \in [0, T],\quad y(0) = y_0. \end{aligned} \right. 
\end{equation*} Here, $y$ denotes the state; $u$ denotes the control; $T>0$ is given; $CBV[0,T]$ is the space of real-valued continuous functions of bounded variation on $[0, T]$; $\Uad$ is a subset of a suitable control space $U \subset CBV[0,T]$; $\JJ\colon L^\infty(0, T) \times \R \times U \to \R$ is a sufficiently smooth objective function; $Z = [-r,r]$ is a given interval with $r>0$; $C([0, T]; Z)$ is the set of continuous functions on $[0,T]$ with values in $Z$; $y_0 \in Z$ is a given initial value; and the integral in the governing variational inequality is understood in the sense of Kurzweil-Stieltjes (see \cite{Monteiro2019} and the \hyperref[sec:appendix]{appendix} of this paper for details on this type of integral). For the precise assumptions on the quantities in \eqref{eq:P}, we refer to \cref{sec:3}. The main result of this work -- \cref{th:main} -- establishes a so-called strong stationarity system for the problem \eqref{eq:P}. This is a first-order necessary optimality condition in primal-dual form that is satisfied by a control $\bar u \in \Uad$ if and only if $\bar u$ is a Bouligand stationary point of \eqref{eq:P}, i.e., if and only if the directional derivative of the reduced objective function of \eqref{eq:P} at $\bar u$ is nonnegative in all admissible directions. See also \eqref{eq:strongstatsys-2} below for the resulting stationarity system. \subsection{Background and relation to prior work} Before we present and discuss the strong stationarity system derived in \cref{th:main} in more detail, let us give some background. To keep the discussion concise, we focus on strong stationarity conditions for infinite-dimensional optimization problems arising in optimal control. For related results in finite dimensions, see \cite{Flegel2007,Harder2017,Hoheisel2013,Luo1996,Scheel2000} and the references therein. 
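Before turning to the infinite-dimensional background, it may help to see the dynamics governed by \eqref{eq:P} in action. The following Python fragment is a purely illustrative sketch and is not taken from the paper: it approximates the scalar stop operator for piecewise-linear inputs by the elementary catch-up scheme that propagates the input increments and projects the state back onto $Z=[-r,r]$; the function names and the sample input are our own choices.

```python
# Illustrative time discretization (catch-up scheme) of the scalar stop
# operator with characteristic interval Z = [-r, r]: the state follows the
# input increments and is projected back onto Z whenever it would leave it.
# Sketch for intuition only; not the construction used in the paper.

def stop_operator(u, y0, r):
    """Discrete scalar stop: y_k = proj_[-r,r](y_{k-1} + (u_k - u_{k-1}))."""
    assert -r <= y0 <= r, "initial value must lie in Z = [-r, r]"
    y = [y0]
    for k in range(1, len(u)):
        y_new = y[-1] + (u[k] - u[k - 1])
        y.append(max(-r, min(r, y_new)))  # projection onto Z
    return y

if __name__ == "__main__":
    n, r = 1000, 1.0
    # input ramps from 0 up to 3 and back down to 0
    u = [3.0 * k / n for k in range(n + 1)] \
        + [3.0 - 3.0 * k / n for k in range(1, n + 1)]
    y = stop_operator(u, y0=0.0, r=r)
    p = [ui - yi for ui, yi in zip(u, y)]  # scalar play: P(u) = u - S(u)
    print(max(y), y[-1], p[-1])  # prints: 1.0 -1.0 1.0
```

Running the script shows the characteristic hysteresis of the input--output relation: on the way up the state saturates at $r=1$, on the way down it travels through $Z$ and saturates at $-r$, so the final state depends on the history of $u$ and not only on its final value.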
In the field of infinite-dimensional nonsmooth optimization, strong stationarity conditions (although originally not referred to as such) have first been derived for optimal control problems governed by elliptic obstacle-type variational inequalities in the seminal works \cite{Mignot1976,MignotPuel1984} of Mignot and Puel in the nineteen-seventies and -eighties. If we use a notation analogous to that in \eqref{eq:P}, then this kind of problem can be formulated (in its most primitive form) as follows: \begin{equation} \label{eq:optstaclecontrol} \begin{aligned} \text{Minimize} \quad & \JJ(y, u) \\ \text{w.r.t.} \quad &y \in H_0^1(\Omega), \quad u \in \Uad \subset L^2(\Omega),\\ \text{s.t.} \quad & y \in Z, \quad \int_\Omega \nabla y \cdot \nabla (v - y)\dd x \geq \int_\Omega u(v - y) \dd x \quad \forall v\in Z. \end{aligned} \end{equation} Here, $\Omega \subset \R^d$, $d \in \mathbb{N}$, is a nonempty open bounded set; $H_0^1(\Omega)$ and $L^2(\Omega)$ are defined as usual, see \cite{Evans2010,GilbargTrudinger1977}; $\JJ\colon H_0^1(\Omega) \times L^2(\Omega) \to \R$ is a Fr\'{e}chet differentiable objective function with partial derivatives $\partial_1 \JJ(y,u) \in H^{-1}(\Omega)$ and $\partial_2 \JJ(y,u) \in L^2(\Omega)$ (where $H^{-1}(\Omega)$ denotes the topological dual of $H_0^1(\Omega)$); $\Uad \subset L^2(\Omega)$ is a convex, nonempty, and closed set; $\nabla$ is the weak gradient; and $Z$ is a nonempty set of the type \[ Z:= \left \{ v \in H_0^1(\Omega) \colon \psi_1 \leq v \leq \psi_2 \text{ a.e.\ in } \Omega\right \} \] involving two given measurable functions $\psi_1, \psi_2\colon \Omega \to [-\infty, \infty]$. The main difficulty that arises when deriving first-order necessary optimality conditions for problems like \eqref{eq:optstaclecontrol} is that the governing variational inequality causes the control-to-state operator $S\colon L^2(\Omega) \to H_0^1(\Omega)$, $u \mapsto y$, to be nondifferentiable (in the sense of G\^{a}teaux and Fr\'{e}chet). 
This nonsmoothness prevents classical adjoint-based approaches as found, e.g., in \cite{Troeltzsch2010} from being applicable and makes it necessary to develop tailored strategies to establish stationarity systems for local minimizers. In \cite{Mignot1976,MignotPuel1984}, the problem of deriving first-order optimality conditions for \eqref{eq:optstaclecontrol} was tackled by exploiting that the solution mapping $S\colon L^2(\Omega) \to H_0^1(\Omega)$, $u \mapsto y$, of the lower-level variational inequality in \eqref{eq:optstaclecontrol} is Hadamard directionally differentiable with directional derivatives $\delta := S'(u;h)$, $u, h \in L^2(\Omega)$, that are uniquely characterized by the auxiliary problem \begin{equation} \label{eq:dirdiffcharobstacleproblem} \delta \in K_\crit(y,u), \quad \int_\Omega \nabla \delta \cdot \nabla (z - \delta) \dd x \geq \int_\Omega h(z - \delta) \dd x \quad \forall z\in K_\crit(y,u). \end{equation} Here, $K_\crit(y,u) := K_{\tan}(y) \cap (u + \Delta y)^\perp$ denotes the so-called \emph{critical cone} associated with $u$ and $y := S(u)$, i.e., the intersection of the kernel \[ (u + \Delta y)^\perp := \left \{ z \in H_0^1(\Omega)\colon \int_\Omega u z - \nabla y \cdot \nabla z \dd x = 0 \right \} \] of the functional $u + \Delta y \in H^{-1}(\Omega)$ and the tangent cone $K_{\tan}(y) \subset H_0^1(\Omega)$ to $Z$ at $y$ which is obtained by taking the closure of the radial cone $ K_{\rad}(y) := \R_+(Z - y)$ in $H_0^1(\Omega)$, cf.\ \cite[section 2]{Harder2017} and \cite{Haraux1977,Mignot1976}. 
By proceeding along the lines of \cite{Mignot1976,MignotPuel1984}, one obtains the following main result for the optimal control problem \eqref{eq:optstaclecontrol}: If a control $\bar u \in \Uad$ with state $\bar y := S(\bar u)$ is given such that the set $\R_+(\Uad - \bar u)$ is dense in $L^2(\Omega)$, then $\bar u$ is a Bouligand stationary point of \eqref{eq:optstaclecontrol} in the sense that \begin{equation} \label{eq:Bouligandobstacle} \left \langle \partial_1 \JJ(\bar y, \bar u), S'(\bar u;h)\right \rangle_{H_0^1} + \left (\partial_2 \JJ(\bar y, \bar u), h \right)_{L^2} \geq 0 \quad \forall h \in \R_+(\Uad - \bar u) \end{equation} holds if and only if there exist an adjoint state $\bar p \in H^1_0(\Omega)$ and a multiplier $\bar \mu \in H^{-1}(\Omega)$ such that $\bar u$, $\bar y$, $\bar p$, and $\bar \mu$ satisfy the system \begin{equation}\label{eq:sstatobst} \begin{gathered} \bar p + \partial_2 \JJ(\bar y, \bar u) = 0~~\text{ in } L^2(\Omega), \\ - \Delta \bar p = \partial_1 \JJ(\bar y, \bar u) - \bar \mu ~~\text{ in } H^{-1}(\Omega), \\ \bar p\in K_\crit(\bar y, \bar u), \quad \left \langle \bar \mu ,z\right \rangle_{H_0^1} \geq 0 \quad \forall z\in K_\crit(\bar y, \bar u). \end{gathered} \end{equation} Here and in what follows, the symbols $\langle \cdot, \cdot \rangle$ and $(\cdot, \cdot)$ denote a dual pairing and a scalar product, respectively. For a proof of the above result, see \cite[Corollary~6.1.11]{ChristofPhd2018}. 
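To make the elliptic prototype \eqref{eq:optstaclecontrol} concrete, the following sketch discretizes the lower-obstacle case $\psi_1 \equiv 0$, $\psi_2 \equiv +\infty$ in one spatial dimension and solves it with projected Gauss--Seidel; at the solution, the discrete residual is nonnegative and complementary to $y$. This is an illustrative stand-in only: the grid size, load, and solver are ad-hoc choices of ours, not taken from the cited works.

```python
# Minimal discrete analogue of the obstacle problem: A y >= u, y >= 0,
# (A y - u)^T y = 0, with A the 1D finite-difference Laplacian on (0,1)
# and homogeneous Dirichlet boundary values, solved by projected
# Gauss-Seidel.  Purely illustrative.
import math

def solve_obstacle(u, n, sweeps=20000):
    """Projected Gauss-Seidel for the discrete lower-obstacle problem."""
    h = 1.0 / (n + 1)
    y = [0.0] * (n + 2)  # includes boundary values y[0] = y[n+1] = 0
    for _ in range(sweeps):
        for i in range(1, n + 1):
            # unconstrained Gauss-Seidel update, projected onto {y_i >= 0}
            y[i] = max(0.0, 0.5 * (y[i - 1] + y[i + 1] + h * h * u[i]))
    return y, h

if __name__ == "__main__":
    n = 50
    h = 1.0 / (n + 1)
    # load pushing the membrane up on (0, 1/2) and down on (1/2, 1)
    u = [20.0 * math.sin(2.0 * math.pi * i * h) for i in range(n + 2)]
    y, h = solve_obstacle(u, n)
    # discrete residual (A y)_i - u_i; nonnegative where the obstacle is active
    res = [(2 * y[i] - y[i - 1] - y[i + 1]) / h**2 - u[i] for i in range(1, n + 1)]
    print(min(y), max(y))
```

With this load, the solution detaches from the obstacle on the left part of the domain and is in contact ($y = 0$, residual $\geq 0$) on part of the right half, which is exactly the complementarity structure that makes the control-to-state map $S$ nondifferentiable.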
Note that the inequality \eqref{eq:Bouligandobstacle} expresses that the directional derivatives of the reduced objective function $L^2(\Omega) \ni u \mapsto \JJ(S(u), u) \in \R$ of \eqref{eq:optstaclecontrol} are nonnegative at $\bar u$ in all admissible directions $h \in \R_+(\Uad - \bar u)$ and is thus the most natural first-order necessary optimality condition obtainable for a directionally differentiable function. Since, moreover, the conditions \eqref{eq:Bouligandobstacle} and \eqref{eq:sstatobst} are equivalent, the system \eqref{eq:sstatobst} can be considered the most precise first-order primal-dual necessary optimality condition possible for \eqref{eq:optstaclecontrol}. This is the reason why systems of the type \eqref{eq:sstatobst} have become known as \emph{strong stationarity conditions} since their initial appearance in \cite{Mignot1976,MignotPuel1984}. The main appeal of the system \eqref{eq:sstatobst} is, of course, its equivalence to the Bouligand stationarity condition \eqref{eq:Bouligandobstacle}. This characteristic property distinguishes \eqref{eq:sstatobst} from other first-order necessary optimality conditions and makes \eqref{eq:sstatobst} an important tool, e.g., for assessing which information about $\bar p$ and $\bar \mu$ is lost when a stationarity system is derived by means of a regularization or discretization approach. For details on this topic, we refer to the survey article \cite{Harder2017}. Because of these advantageous properties, strong stationarity conditions have come to play a distinct role in the field of optimal control of nonsmooth systems and have received considerable attention in the recent past.
See, e.g., \cite{Betz2021,ChristofPhd2018,Christof2022,ReyesMeyer2016,Herzog2013,Hintermueller2009,Wachmuth2014,Wachsmuth2020} for contributions on strong stationarity conditions for optimal control problems governed by various elliptic variational inequalities of the first and the second kind, \cite{Betz2019,Christof2018nonsmoothPDE,Clason2021,Meyer2017} for extensions to optimal control problems governed by nonsmooth semi- and quasilinear PDEs, and \cite{Christof2021} for a generalization to the multiobjective setting. All of these works on strong stationarity, however, are concerned only with elliptic variational inequalities or PDEs involving nonsmooth terms. What has -- at least to the best of our knowledge -- not been accomplished so far in the literature is the derivation of a necessary optimality condition analogous to \eqref{eq:sstatobst} for an optimal control problem that is governed by a true evolution variational inequality (where ``true'' means that the inequality cannot be reformulated as a nonsmooth PDE or an elliptic problem, cf.\ \cite{Betz2019}). In fact, such an extension is even mentioned as an open problem in the seminal works of Mignot and Puel; see \cite[section 4]{MignotPuel1984} and \cite{MignotPuel1984:2}, where strong stationarity conditions for parabolic obstacle problems are conjectured. This absence of results on strong stationarity systems for evolution variational inequalities is unsatisfying in view of the multitude of processes that are modeled by this type of variational problem in finance, mechanics, and physics; see \cite{Mielke2015,Sofonea2012}.
The main reason for the lack of contributions on strong stationarity conditions for evolution variational inequalities since the nineteen-seventies is that directional differentiability results analogous to the one for the elliptic obstacle problem in \eqref{eq:dirdiffcharobstacleproblem} were not available in the instationary setting for a long time. See, e.g., \cite[p.\ 582]{BonnansShapiro2000}, where this problem is still referred to as open. Progress in this direction has been made only recently. In \cite{Brokate2020,Brokate2015}, it was proved by means of a semi-explicit solution formula involving the cumulated maximum that the control-to-state operator of the problem \eqref{eq:P} -- the so-called \emph{scalar stop operator} -- is Hadamard directionally differentiable in a pointwise manner; see \cref{th:dirdiff} below. In \cite{Christof2019parob}, it was further shown by means of pointwise-a.e.\ convexity properties that the solution mapping of the parabolic obstacle problem is Hadamard directionally differentiable as a function into all Lebesgue spaces. This paper also establishes that the directional derivatives of the solution operator of the parabolic obstacle problem are the (not necessarily unique) solutions of a weakly formulated auxiliary variational inequality analogous to \eqref{eq:dirdiffcharobstacleproblem}, see \cite[Theorem 4.1]{Christof2019parob}. Very recently, in \cite{Brokate2021}, an auxiliary problem for the directional derivatives of the scalar stop operator in \eqref{eq:P} has also been obtained by means of a careful analysis of jump directions and approximation arguments. This auxiliary problem even yields a unique characterization; see \cref{th:dirdiffVI} below.
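The pointwise directional differentiability just cited can also be observed in a discrete surrogate. With the elementary projection scheme as a stand-in for $\S$, the difference quotients $(\S(u + \tau h) - \S(u))/\tau$ stabilize as $\tau \to 0^+$, and at a time where the constraint becomes active the quotients for the directions $h = u$ and $h = -u$ differ in magnitude, so no G\^{a}teaux derivative can exist there. The following is a hypothetical sketch; the grid, input, and helper names are chosen by us for illustration.

```python
# Finite-difference sketch of the one-sided directional differentiability of
# the scalar stop operator, using the elementary time-stepping scheme
# y_k = proj_[-r,r](y_{k-1} + (u_k - u_{k-1})) as a discrete stand-in for S.

def stop(u, y0=0.0, r=1.0):
    y = [y0]
    for k in range(1, len(u)):
        y.append(max(-r, min(r, y[-1] + u[k] - u[k - 1])))
    return y

def dir_quotient(u, hdir, k, tau=1e-6):
    """Difference quotient (S(u + tau * h) - S(u))[k] / tau."""
    up = [ui + tau * hi for ui, hi in zip(u, hdir)]
    return (stop(up)[k] - stop(u)[k]) / tau

if __name__ == "__main__":
    n = 200
    u = [2.0 * k / n for k in range(n + 1)]  # ramp u(t) = t on [0, 2]
    hminus = [-ui for ui in u]
    k_act = n // 2   # time with u = 1 = r: constraint becomes active
    k_in = n // 4    # time with u = 1/2: constraint inactive
    # at k_act the one-sided derivatives in directions u and -u differ:
    print(dir_quotient(u, u, k_act), dir_quotient(u, hminus, k_act))
    # at k_in the map still behaves linearly:
    print(dir_quotient(u, u, k_in), dir_quotient(u, hminus, k_in))
```

At the inactive time the two quotients are $\pm 1/2$ (the identity direction scaled by $u$), while at the active time they are approximately $0$ and $-1$: the derivative is positively homogeneous but not linear in $h$, matching the Hadamard (rather than G\^{a}teaux) differentiability discussed above.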
\subsection{Main result and contribution of the paper} The purpose of the present paper is to show that the recent developments in \cite{Brokate2020,Brokate2015,Brokate2021} make it possible to prove a strong stationarity system for the optimal control problem \eqref{eq:P}. As far as we are aware, our analysis is the first to establish such a system for a true evolution variational inequality. The result in the literature that comes closest to the one derived in this paper is \cite[Theorem 5.5]{Christof2019parob}, which establishes a multiplier system for optimal control problems governed by parabolic obstacle-type variational inequalities that is equivalent to Bouligand stationarity if the adjoint state enjoys additional regularity properties -- a deficit caused by a mismatch between certain notions of capacity; see the discussion in \cite[section 5]{Christof2019parob}. In the present work, we do not require such additional regularity assumptions and obtain a strong stationarity system for \eqref{eq:P} that is fully equivalent to the notion of Bouligand stationarity.
Our main result can be summarized as follows: If $\bar u \in \Uad$ is a control of \eqref{eq:P} with associated state $\bar y$ such that the set $\R_+(\Uad - \bar u)$ is dense in the control space $U$, then $\bar u$ is a Bouligand stationary point of \eqref{eq:P} (in a sense analogous to that of \eqref{eq:Bouligandobstacle}, see \cref{def:Bouligandstationary} below) if and only if there exist an adjoint state $\bar p \in BV[0,T]$ and a multiplier $\bar \mu \in G_r[0,T]^*$ such that $\bar u$, $\bar y$, $\bar p$, and $\bar \mu$ satisfy the system \begin{equation} \label{eq:strongstatsys-2} \begin{gathered} \bar p(0) = \bar p(T) = 0, \qquad \bar p(t) = \bar p(t-)~\forall t \in [0,T), \\ \bar p(t-) \in K^\ptw_{\crit}(\bar y, \bar u)(t)~\forall t \in [0,T], \\ \left \langle \bar \mu, z \right \rangle_{G_r} \geq 0\quad \forall z \in \KK_{G_r}^{\red,\crit}(\bar y, \bar u), \\ \int_0^T h \dd \bar p = \left \langle \partial_3 \JJ(\bar y, \bar y(T), \bar u), h \right \rangle_{U} ~\forall h \in U, \\ -\int_0^T z \dd \bar p = \left \langle \partial_1 \JJ(\bar y, \bar y(T), \bar u), z\right \rangle_{L^\infty} + \partial_2 \JJ(\bar y, \bar y(T), \bar u)z(T) - \left \langle \bar \mu, z \right \rangle_{G_r} \\ \hspace{8.5cm}\forall z \in G_r[0,T]. \end{gathered} \end{equation} Here, $BV[0,T]$ denotes the space of real-valued functions of bounded variation on $[0,T]$; $G_r[0,T]$ is the space of real-valued, regulated, and right-continuous functions on $[0,T]$; $G_r[0,T]^*$ is the topological dual space of $G_r[0,T]$; the partial derivatives of $\JJ$ are denoted by $\partial_i \JJ$, $i=1,2,3$; the minus in the argument of $\bar p$ denotes a left limit; and $K^\ptw_{\crit}(\bar y,\bar u)(t)$, $t \in [0,T]$, and $\KK_{G_r}^{\red,\crit}(\bar y, \bar u)$ are suitably defined cones (see \cref{def:ptwcritcone,def:6.1}). For the precise statement of the above result, see \cref{th:main}. 
Several things are noteworthy regarding the system \eqref{eq:strongstatsys-2}: First of all, it can be seen that the adjoint state $\bar p$ lacks regularity in comparison with the optimal state $\bar y$ ($BV[0,T]$ instead of $CBV[0,T]$). This reduced regularity reflects that the directional derivatives of the control-to-state mapping of \eqref{eq:P} are not continuous in time and thus significantly less regular than the states $y$ -- a behavior that is completely absent in the elliptic problem \eqref{eq:optstaclecontrol}. For details on this topic, see also \cite[section 3]{Christof2019parob} and \cite[Example 4.1]{Brokate2021} which demonstrate that all types of jump discontinuities of the derivatives are possible in the situation of \eqref{eq:P} and that the derivatives cannot be expected to possess, e.g., $H^{1/2}(0,T)$-regularity, cf.\ \cite{Jarusek2003}. Second, one observes that not the adjoint state $\bar p$ but its left limits are contained in the critical cone $K^\ptw_{\crit}(\bar y, \bar u)(t)$ for all $t \in [0,T]$ in \eqref{eq:strongstatsys-2}. As we will see below, this condition on the limiting behavior -- along with the left-continuity of $\bar p$ in the first line of \eqref{eq:strongstatsys-2} -- arises from certain properties of the jumps of the directional derivatives of the control-to-state mapping and the fact that the adjoint system evolves backwards in time (in contrast to the variational inequality for the directional derivatives of the control-to-state mapping which evolves in a forward manner). Note that these additional properties of the left limit of the adjoint state are not visible in stationarity systems derived by regularization, cf.\ \cite{Barbu1984,Cao2016,Colombo2020,dePinho2019,Ito2010,Stefanelli2017,Wachmuth2016}. This shows that \eqref{eq:strongstatsys-2} contains information that is not recoverable with regularization approaches. 
Lastly, it should be noted that the coupling between the adjoint state $\bar p$ and the partial derivative $\partial_3 \JJ(\bar y, \bar y(T), \bar u)$ of the objective $\JJ$ w.r.t.\ the control in \eqref{eq:strongstatsys-2} is not as direct as in \eqref{eq:sstatobst} but involves an integration step. This is a consequence of the rate-independence of the variational inequality governing \eqref{eq:P} and ultimately also the reason for the nonstandard start- and endpoint conditions $\bar p(0) = \bar p(T) = 0$ for $\bar p$ in \eqref{eq:strongstatsys-2}. We remark that these conditions reflect that the partial derivative $\partial_2 \JJ(\bar y, \bar y(T), \bar u)$ manifests itself -- in a distributional sense -- in the jump of $\bar p$ at the terminal time $T$, see the comments at the end of \cref{sec:7}. A similar behavior can also be observed in optimal control problems for parabolic PDEs, see \cite[section 5.5.1]{Troeltzsch2010}. Regarding the derivation of the strong stationarity system in \cref{th:main}, we would like to point out that -- even with the results of \cite{Brokate2020,Brokate2015,Brokate2021} at hand and even though the variational inequality in \eqref{eq:P} is one of the simplest evolution variational inequalities imaginable -- the proof of \eqref{eq:strongstatsys-2} is still quite involved. The main difficulty in the context of \eqref{eq:P} is that, due to the lack of weak-star continuity properties of the scalar stop operator, one has to discuss this problem in a control space $U$ whose topology is significantly stronger than that of $BV[0,T]$ to be able to ensure that \eqref{eq:P} is well posed; see the comments in \cref{sec:5} below. Since the directional derivatives of the scalar stop are only in $BV[0,T]$, the need for such a ``small'' control space $U$ makes it necessary to employ a careful limit analysis to ensure that the control space is \emph{ample} enough to be able to arrive at a strong stationarity system. 
Compare also with the comments on this topic in \cite{Christof2022,Herzog2013} and the results in \cite{Wachmuth2014} in this context. In our analysis, we tackle this problem by generalizing the classical concept of \emph{polyhedricity} to the time-dependent setting. This is a density property which, in the situation of the elliptic problem \eqref{eq:optstaclecontrol}, ensures that the set of critical radial directions $ K_{\rad}(y) \cap (u + \Delta y)^\perp$ is $H_0^1(\Omega)$-dense in $K_{\tan}(y) \cap (u + \Delta y)^\perp$ and which plays an important role in the sensitivity analysis of elliptic obstacle-type variational inequalities as well as the theory of second-order optimality conditions, see \cite{Haraux1977,Christof2018SSC,Wachsmuth2019}. For the approximation result that we establish in this context and that we refer to as ``temporal polyhedricity'', see \cref{theorem:tempoly}. We expect that \cref{theorem:tempoly}, along with the insights provided by \eqref{eq:strongstatsys-2}, is also helpful for the analysis of optimal control problems governed by more complicated evolution variational inequalities, cf.\ the problems studied in \cite{Christof2019parob,Muench2018,Samsonyuk2019}. \subsection{Structure of the remainder of the paper} We conclude this section with an overview of the content and the structure of the remainder of the paper. \Cref{sec:2,sec:3} are concerned with preliminaries. Here, we introduce the notation and the standing assumptions that we use throughout this work. In \cref{sec:4}, we collect basic results on the properties of the control-to-state mapping of \eqref{eq:P} -- the scalar stop operator. This section also recalls the directional differentiability results of \cite{Brokate2020,Brokate2015,Brokate2021} and discusses some of their consequences. \Cref{sec:5} addresses the solvability of \eqref{eq:P} and introduces the concept of Bouligand stationarity for this problem. 
This section also contains an example which shows that, to be able to prove the existence of solutions for \eqref{eq:P} by means of the direct method of the calculus of variations, one indeed has to consider a control space significantly smaller than $BV[0,T]$. In \cref{sec:6}, we prove the already mentioned temporal polyhedricity property for \eqref{eq:P}. The main result of this section is \cref{theorem:tempoly}. \Cref{sec:7} is concerned with the proof of the strong stationarity system \eqref{eq:strongstatsys-2}, see \cref{th:main}. The \hyperref[sec:appendix]{appendix} of the paper collects some results on the Kurzweil-Stieltjes integral that are needed for our analysis. \section{Notation} \label{sec:2} Throughout this work, $T>0$ is a given and fixed number. We denote the space of real-valued continuous functions on $[0,T]$ by $C[0,T]$ and the space of real-valued regulated functions on $[0,T]$ (i.e., the space of all functions that are uniform limits of step functions, see \cite[Definition 4.1.1, Theorem 4.1.5]{Monteiro2019}) by $G[0,T]$. We equip both $C[0,T]$ and $G[0,T]$ with the supremum norm $\|\cdot\|_\infty$. Recall that this makes $C[0,T]$ and $G[0,T]$ Banach spaces and that every $v \in G[0,T]$ possesses left and right limits, see \cite[chapter 4]{Monteiro2019}. Given $v \in G[0,T]$, we denote these limits by $v(t-)$ and $v(t+)$, respectively, with the usual conventions at the endpoints of $[0,T]$, i.e., \begin{align*} v(t-) &:= \lim_{[0, T] \ni s \to t^-} v(s)\quad \forall t \in (0, T],\qquad v(0-) := v(0), \\ v(t+) &:= \lim_{[0, T] \ni s \to t^+} v(s)\quad \forall t \in [0, T),\qquad v(T+) := v(T). \end{align*} For the left- and the right-limit function associated with a function $v \in G[0,T]$, we use the symbols $v_-$ and $v_+$, i.e., $v_-(t) := v(t-)$ and $v_+(t) := v(t+)$ for all $t \in [0,T]$. We further define $G_r[0,T] := \left \{ v \in G[0,T] \colon v = v_+ \right \}$. 
It is easy to check that this set of right-continuous regulated functions is a closed subspace of $(G[0,T], \|\cdot\|_\infty)$. The space of real-valued functions of bounded variation on $[0,T]$ is denoted by $BV[0,T]$. We emphasize that we do not consider elements of $BV[0,T]$ as equivalence classes in this paper but as classical functions $v\colon [0,T] \to \R$, as in \cite[chapter~2]{Monteiro2019}. For a discussion of different approaches to $BV[0,T]$, see \cite{Ambrosio2000}. We denote the variation of a function $v\colon[0,T]\to\R$ by $\var(v)$, and we define the total variation norm on $BV[0,T]$ as $\|v\|_{BV} := |v(0)| + \var(v)$. Recall that $(BV[0,T], \|\cdot \|_{BV})$ is a Banach space that is continuously embedded into $(G[0,T], \|\cdot \|_{\infty})$; see \cite[Theorem~2.2.2]{Monteiro2019}. We define $CBV[0,T] := BV[0,T] \cap C[0,T]$ and $BV_r[0,T] := BV[0,T] \cap G_r[0,T]$. Note that both of these sets are closed subspaces of $(BV[0,T], \|\cdot \|_{BV})$. Given a set-valued function $K\colon [0,T] \rightrightarrows \R$ and $0 \leq s < \tau \leq T$, we use the symbols $C([s,\tau];K)$ and $G([s,\tau];K)$ to denote the sets of continuous and regulated functions $v$ on $[s,\tau]$ which satisfy $v(t) \in K(t)$ for all $t \in [s,\tau]$, respectively. Sets $K \subset \R$ are interpreted as set-valued functions that are constant in time in this notation. We further set $C^\infty[0,T] := \{v \in C[0,T] \colon \exists \tilde v \in C^\infty(\R)\, \text{s.t.}\, v(t) = \tilde v(t) ~\forall t \in [0,T]\}$. For the classical Lebesgue and Sobolev spaces, we use the standard notation \mbox{$(L^p(0,T), \|\cdot\|_{L^p})$} and $(W^{k,p}(0,T), \|\cdot\|_{W^{k,p}})$, $1 \leq p \leq \infty$, $k \in \mathbb{N}$. The weak derivative of a function $v \in W^{1,p}(0,T)$ is denoted by $v' \in L^p(0,T)$. 
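The continuous embedding of $(BV[0,T], \|\cdot\|_{BV})$ into $(G[0,T], \|\cdot\|_\infty)$ recalled above rests on the elementary estimate $|v(t)| \le |v(0)| + \var(v)$. A small, purely illustrative Python sketch of the discrete analogue (the helper names are ours):

```python
# Discrete illustration of the embedding (BV, ||.||_BV) -> (G, ||.||_inf):
# for sampled functions, max_t |v(t)| <= |v(0)| + var(v), since each value
# v(t) = v(0) + (sum of increments up to t) is bounded by |v(0)| plus the
# sum of the absolute increments.  Sketch only.
import math

def variation(vals):
    """Total variation of a sampled function."""
    return sum(abs(b - a) for a, b in zip(vals, vals[1:]))

def bv_norm(vals):
    """Discrete analogue of ||v||_BV = |v(0)| + var(v)."""
    return abs(vals[0]) + variation(vals)

if __name__ == "__main__":
    n = 1000
    # a smooth oscillation plus a jump at the midpoint
    v = [math.sin(5 * k / n) + (0.5 if k > n // 2 else 0.0)
         for k in range(n + 1)]
    assert max(abs(x) for x in v) <= bv_norm(v)
    print(bv_norm(v), max(abs(x) for x in v))
```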
For the topological dual of a normed space $(X, \|\cdot\|_X)$, we use the symbol $X^*$, and for a dual pairing, the brackets $\langle\cdot, \cdot \rangle$ equipped with a subscript that clarifies the space. A closure is denoted by $\closure(\cdot)$. Weak, weak-star, and strong convergence are indicated by $\weakly$, \smash{$\weaklystar$}, and $\to$, respectively. Given a set $D \subset [0,T]$, we define $\mathds{1}_{D}\colon [0,T] \to \{0,1\}$ to be the characteristic function of $D$, i.e., the function that equals 1 on $D$ and 0 everywhere else. \section{Main problem and standing assumptions} \label{sec:3} As already mentioned in the introduction, the aim of this paper is to study optimal control problems of the type \begin{equation*} \tag{P} \left \{~~ \begin{aligned} \text{Minimize} \quad & \JJ(y, y(T), u) \\ \text{w.r.t.}\quad &y \in CBV[0, T], \quad u \in \Uad,\\ \text{s.t.}\quad & y = \S(u), \end{aligned} \right. \end{equation*} where $\S$ is the \emph{scalar stop operator}, i.e., the solution map $\S\colon CBV[0,T] \to CBV[0,T]$, $u \mapsto y$, of the rate-independent evolution variational inequality \begin{equation} \label{eq:V} \tag{V} \left \{~~ \begin{aligned} &\int_0^T (v - y)\dd (y - u)\geq 0 &&\forall v \in C([0, T]; Z), \\ &y(t) \in Z\quad \forall t \in [0, T], &&y(0) = y_0. \end{aligned} \right. \end{equation} General references for the properties of the function $\S$ are \cite{BrokateSprekels1996,Krejci1996,Krejci1999}; some of them will be discussed in detail in \cref{sec:4}. Note that, from the application point of view, \eqref{eq:P} can be interpreted as an optimal control problem for a one-dimensional sweeping process with characteristic set $Z = [-r,r]$, i.e., a problem that aims to control the trajectory of a body with one degree of freedom that is placed on a slippery surface within $Z$ and moved (swept) by moving $Z$ back and forth, see \cite[section 1.1]{Mielke2015}. 
(In this case, the trajectory is described by the \emph{scalar play operator} $\PP(u) := u - \S(u)$ and the control function $u$ models the movement of $Z$.) This physical interpretation, however, is mainly secondary in this work. We are primarily interested in the problem \eqref{eq:P} because it is the instationary counterpart of the optimal control problem \eqref{eq:optstaclecontrol} for the elliptic obstacle problem and captures the effects of ``pure'' evolution without any additional spatial dependencies (as present, e.g., in the parabolic obstacle problem, cf.\ \cite{Christof2018SSC}). We hope that the insights provided by our analysis are also helpful for the analysis of optimal control problems governed by more complicated systems arising, e.g., in the field of elasto-plasticity, which often involve the play and stop operator to incorporate hysteresis effects, cf.\ \cite{Mielke2015, Muench2018,Samsonyuk2019}. We would like to emphasize that the integral in \eqref{eq:V} -- along with all other integrals appearing in the remainder of this paper -- is to be understood in the sense of Kurzweil-Stieltjes. For an in-depth introduction to the integration theory for this type of integral, we refer to \cite{Monteiro2019}. A collection of basic definitions, elementary properties, and fundamental results related to the Kurzweil-Stieltjes integral can also be found in the \hyperref[sec:appendix]{appendix} of this paper. The use of the Kurzweil-Stieltjes integral for the variational inequality approach to rate-independent evolutions goes back to \cite{Krejci2003,Krejci2006,KrejciLiero2009} where it was employed for the study of discontinuous input functions $u$. For this kind of $u$, the integrand and the integrator (i.e., the function behind the ``$\dd$'') in \eqref{eq:V} usually have discontinuities at common points $t\in [0,T]$ so that the Riemann-Stieltjes integral no longer works. 
Such common discontinuities also appear naturally in the variational inequality that characterizes the directional derivatives of $\S$, cf.\ \cref{th:dirdiffVI} below. For a treatment based on the Young integral, see \cite{KrejciLaurencot2002}. Alternatively, the Lebesgue-Stieltjes integral can be used since, for the types of integrands and integrators appearing in this paper, it is equivalent to the Kurzweil-Stieltjes integral, see \cite[section 6.12]{Monteiro2019}. However, for this type of integral, a careful handling of statements involving ``almost everywhere'' is necessary since the $\sigma$-algebra and the family of its sets of measure zero depend on the integrator. In particular, a singleton $\{t\}$ has nonzero measure if the integrator is discontinuous at $t$. For ease of reference, we collect our standing assumptions on the quantities in the optimal control problem \eqref{eq:P} and the variational inequality \eqref{eq:V} in: \begin{assumption}[standing assumptions]~\label{ass:standing} \begin{itemize} \item $T>0$ is given and fixed. \item $U \subset CBV[0,T]$ is a real vector space that is endowed with a norm $\|\cdot\|_U$ and that is continuously and densely embedded into $(C[0,T],\|\cdot\|_\infty)$. \item $\Uad$ is a nonempty and convex subset of $U$. \item $\JJ\colon L^\infty(0, T) \times \R \times U \to \R$ is a Fr\'{e}chet differentiable function whose partial derivative w.r.t.\ the first argument satisfies $\partial_1 \JJ(y, y(T), u) \in L^1(0, T) $ for all $(y, u) \in CBV[0,T] \times U$. Here, $L^1(0,T)$ is interpreted as a subset of $L^\infty(0,T)^*$ via the canonical embedding into the bidual. \item $Z$ is an interval of the form $Z = [-r,r]$ with an arbitrary but fixed $r>0$. \item $y_0 \in Z$ is a given and fixed starting value. \end{itemize} \end{assumption} The above assumptions are taken to hold throughout the following sections, even when not explicitly mentioned. 
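Before studying $\S$ analytically, it may help to have a concrete numerical picture of the stop operator. The following minimal sketch (pure Python, with our own naming; it is illustrative only and not part of the analysis) implements the standard catch-up time discretization of \eqref{eq:V}: in each step, the discrete analogue of the variational inequality, $(v - y_k)(y_k - y_{k-1} - (u_k - u_{k-1})) \geq 0$ for all $v \in Z$ with $y_k \in Z$, is solved exactly by projecting $y_{k-1} + (u_k - u_{k-1})$ onto $Z = [-r,r]$.

```python
def stop(u, y0, r):
    """Catch-up scheme for the scalar stop operator S(u) on a time grid:
    the state follows each control increment and is projected back onto
    Z = [-r, r].  u is a list of control values, y0 lies in [-r, r]."""
    y = [y0]
    for k in range(1, len(u)):
        y.append(max(-r, min(r, y[-1] + u[k] - u[k - 1])))
    return y

def play(u, y0, r):
    """Scalar play operator P(u) = u - S(u) on the same grid."""
    return [uk - yk for uk, yk in zip(u, stop(u, y0, r))]
```

For instance, for the control values $u_k = k/100$, $k = 0,\dots,200$ (i.e., $u(t) = t$ on $[0,2]$) with $y_0 = 0$ and $r = 1$, the discrete state increases with the control until it reaches the upper bound $r$ and then remains there, while the play $u - y$ records the accumulated overshoot.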
We remark that, to be able to prove the existence of solutions for \eqref{eq:P}, one requires more information about $\JJ$, $\Uad$, etc.\ than provided by \cref{ass:standing}; see \cref{cor:solex}. For the derivation of the strong stationarity system \eqref{eq:strongstatsys-2}, however, this is not relevant. An example of a control space $U$ that satisfies the conditions in \cref{ass:standing} and that allows us to prove the existence of minimizers for \eqref{eq:P} is the space $H^1(0,T)$, see \cref{sec:5} and the comments therein. \section{Properties of the scalar stop operator \texorpdfstring{$\boldsymbol{\S}$}{S}}\label{sec:4}In this section, we collect properties of the solution map $\S\colon u \mapsto y$ of the variational inequality \eqref{eq:V} that are needed for our analysis. We begin with fundamental results on the well-definedness, monotonicity, and directional differentiability of $\S$. \begin{theorem}[well-definedness and Lipschitz continuity]\label{th:Swellposed}The variational inequality \eqref{eq:V} possesses a unique solution $\S(u) := y\in CBV[0, T]$ for all $u \in CBV[0, T]$. For all $u\in W^{1,1}(0, T)$, it holds $y = \S(u) \in W^{1,1}(0, T)$ and \begin{equation}\label{eq;Vabscont} (v - y(t)) (y'(t) - u'(t)) \ge 0 \qquad \forall v\in Z \qquad\text{for a.a.}~t \in (0,T). \end{equation} Further, $\S$ satisfies the Lipschitz estimate \begin{equation} \label{eq;LipschitzSinfty} \|\S(u_1) - \S(u_2)\|_\infty \leq 2\|u_1 - u_2\|_\infty\qquad \forall u_1, u_2 \in CBV[0, T]. \end{equation} \end{theorem} \begin{proof} Proofs of the unique solvability of \eqref{eq:V} in $CBV[0, T]$ and of \eqref{eq;Vabscont} can be found in \cite[Theorem 4.1, Proposition 4.1]{Krejci1999}. The Lipschitz estimate \eqref{eq;LipschitzSinfty} follows from \cite[Theorem 7.1]{Krejci1999}; see also \cite[p.\ 49f.]{Krejci1996} and \cite[Proposition 2.3.4]{BrokateSprekels1996}. 
\end{proof} \begin{lemma}[general test functions]\label{lemma:gentest}Let $u\in CBV[0,T]$ and $0\le s < \tau\le T$. Then $y := \S(u)$ satisfies \begin{equation}\label{eq:gentest} \int_s^\tau (v-y)\dd(y-u) \ge 0 \qquad \forall v\in G([s,\tau];Z). \end{equation} \end{lemma} \begin{proof} Since $y + \mathds{1}_{[s,\tau]}(v-y) \in G([0,T];Z)$ for all $v \in G([s,\tau];Z)$ and due to \cref{lemma:subintervals}, it suffices to consider the case $[s,\tau] = [0,T]$. Let $v\colon[0,T]\to Z$ be a step function of the form \[ v = \sum_{j=1}^N \mathds{1}_{(t_{j-1},t_j)}\zeta_j + \sum_{j=0}^N \mathds{1}_{\{t_j\}}\hat{\zeta}_j \] with $\zeta_j,\hat{\zeta}_j\in Z$ and $0 = t_0 < \cdots < t_N = T$. Since $v = \lim_{n\to \infty} v_n$ pointwise for suitable $v_n\in C([0,T];Z)$, \eqref{eq:gentest} for $v$ follows from the bounded convergence theorem, \cref{th:boundedconv}. As step functions are dense in $G([0,T];Z)$ by \cite[Theorem 4.1.5]{Monteiro2019}, \eqref{eq:gentest} holds for arbitrary $v\in G([0,T];Z)$, again by the bounded convergence theorem. \end{proof} \begin{lemma}[piecewise monotonicity]\label{lemma:yumon}Let $u\in CBV[0,T]$ and set $y := \S(u)$. Let $J$ be an open nonempty subinterval of $[0,T]$. \begin{enumerate}[label=\roman*)] \item\label{lemma:yumon:item:i} If $J \subset \{ t\in [0,T]\colon y(t) > -r\}$, then $y-u$ is nonincreasing on $\closure{(J)}$. \item\label{lemma:yumon:item:ii} If $J\subset \{ t\in [0,T] \colon y(t) < r\}$, then $y-u$ is nondecreasing on $\closure{(J)}$. \end{enumerate} \end{lemma} \begin{proof} We prove \ref{lemma:yumon:item:i}. (The proof of \ref{lemma:yumon:item:ii} is analogous.) Let $s,\tau\in J$ with $s < \tau$. Then $y \ge - r + \varepsilon$ on $[s,\tau]$ for some $\varepsilon > 0$. As $v := y - \varepsilon \in G([s,\tau];Z)$, we can apply \cref{lemma:gentest} to obtain \[ 0 \le \int_s^\tau (v-y)\dd(y-u) = - \varepsilon ((y-u)(\tau) - (y-u)(s)). \] Thus, $y-u$ is nonincreasing on $J$, and hence on $\closure{(J)}$ since $y-u$ is continuous. 
\end{proof} A proof of the foregoing lemma based on an explicit representation of $y-u$ can be found in \cite[section 5]{Brokate2015}. \begin{lemma}[comparison principle]\label{lemma:monotonicity}Let $u_1, u_2 \in CBV[0, T]$ be given such that $u_2 - u_1$ is nondecreasing in $[0, T]$. Then it holds $\S(u_2)(t) \geq \S(u_1)(t)$ for all $t \in [0, T]$. \end{lemma} \begin{proof} First, let us assume that $u_1,u_2\in W^{1,1}(0,T)$. From \eqref{eq;Vabscont}, we obtain that $y_1 := \S(u_1)$ and $y_2 := \S(u_2)$ satisfy \begin{equation} \label{eq:randomeq2736} (v - y_i(t))(y_i'(t)- u_i'(t))\geq 0\qquad \forall v \in Z \qquad \text{for a.a.}~t \in (0, T) \qquad i=1,2. \end{equation} Testing \eqref{eq:randomeq2736} for $i=1$ with $v = y_1(t) - \max\{0, y_1(t) - y_2(t)\} \in Z$ and for $i=2$ with $v = y_2(t) + \max\{0, y_1(t) - y_2(t)\} \in Z$ and adding the resulting inequalities gives \[ \max\{0, y_1 - y_2 \}\cdot (y_1' - y_2') \leq \max\{0, y_1 - y_2\}\cdot (u_1' - u_2') \leq 0 \quad \text{a.e.\ in $(0,T)$} \] as $u_2 - u_1$ is nondecreasing. By a classical result of Stampacchia, see, for instance, \cite[Lemmas~7.5 and 7.6]{GilbargTrudinger1977}, we have \[ \ddt \frac12 \big( \max\{0, y_1 - y_2\}\big)^2 = \max\{0, y_1 - y_2\}\cdot(y_1' - y_2') \qquad \text{a.e.\ in $(0,T)$.} \] Since $y_2(0) = y_1(0)$, we conclude that $\max\{0, y_1 - y_2\} \le 0$ on $[0,T]$. Thus, $y_2 \ge y_1$ on $[0,T]$ as claimed. In the general case $u_1,u_2\in CBV[0, T]$, we choose piecewise affine interpolants $u_1^n,u_2^n$ of $u_1,u_2$ on partitions $\Delta_n$ of $[0,T]$ whose widths go to zero for $n \to \infty$. Since $u_2^n - u_1^n$ is nondecreasing, too, it follows that $\S(u_2^n) \ge \S(u_1^n)$ on $[0,T]$ for all $n$. As $u_i^n \to u_i$ uniformly, by virtue of \eqref{eq;LipschitzSinfty}, we may pass to the limit, and the claim follows. 
\end{proof} \begin{theorem}[pointwise directional differentiability of $\S$]\label{th:dirdiff}The solution operator $\S\colon CBV[0, T] \to CBV[0, T]$ of \eqref{eq:V} is pointwise directionally differentiable in the sense that, for all $u, h \in CBV[0, T]$, there is a unique $\S'(u;h) \in BV[0, T]$ satisfying \[ \lim_{\alpha \to 0^+} \frac{\S(u + \alpha h)(t) - \S(u)(t)}{\alpha} = \S'(u;h)(t)\qquad \forall t \in [0,T]. \] \end{theorem} \begin{proof} See \cite[Corollary 5.4, Proposition 6.3]{Brokate2015} and also \cite[Theorem 2.1]{Brokate2021}. \end{proof}\pagebreak Similarly to the classical result \eqref{eq:dirdiffcharobstacleproblem} for the obstacle problem, the derivatives $\S'(u;h)$ in \cref{th:dirdiff} are characterized by an auxiliary variational inequality. To be able to state this inequality, we require some additional notation from \cite{Brokate2021}. \begin{definition}[inactive, biactive, and strictly active set]\label{def:biactiveetc}Let $u \in CBV[0,T]$ be a control with state $y := \S(u) \in CBV[0,T]$. We introduce: \begin{itemize} \item the inactive set: \[ I(y) := \{t \in [0, T] \colon |y (t)| < r\}, \] \item the biactive set associated with the upper bound of $Z$: \[ B_+(y,u):= \{t \in [0, T] \colon y (t) = r \text{ and } \exists \varepsilon > 0 \text{ s.t. } y - u = \mathrm{const}\text{ on } [t, t + \varepsilon)\}, \] \item the biactive set associated with the lower bound of $Z$: \[ B_-(y,u) := \{t \in [0, T] \colon y (t) = -r \text{ and } \exists \varepsilon > 0 \text{ s.t. } y - u = \mathrm{const}\text{ on } [t, t + \varepsilon)\}, \] \item the biactive set: \[ B(y,u) := B_+(y,u) \cup B_-(y,u), \] \item the strictly active set: \[ A(y,u) := \{t \in [0, T) \colon | y (t)| = r \text{ and } \nexists \varepsilon > 0 \text{ s.t. } y - u = \mathrm{const}\text{ on } [t, t + \varepsilon)\}. \] \end{itemize} Here and in what follows, we use the convention $T \in B_\pm(y,u)$ in the case $y(T) = \pm r$. 
\end{definition} \begin{definition}[radial and critical cone mapping] \label{def:ptwcritcone}Given an input function $u \in CBV[0,T]$ with state $y := \S(u) \in CBV[0,T]$, we define: \begin{itemize} \item the set-valued pointwise radial cone mapping: \[ K_\rad^\ptw(y)\colon [0, T] \rightrightarrows \R, \qquad K_\rad^\ptw(y)(t):= \begin{cases} \R & \text{ if } |y(t)| < r, \\ (-\infty, 0] & \text{ if } y(t) = r, \\ [0, \infty) & \text{ if } y(t) = -r, \end{cases} \] \item the set-valued pointwise critical cone mapping: \[ K^\ptw_{\crit}(y,u) \colon [0, T] \rightrightarrows \R, \qquad K^\ptw_{\crit}(y,u) (t):= \begin{cases} \R & \text{ if } t \in I(y), \\ (-\infty, 0] & \text{ if } t \in B_+(y,u), \\ [0, \infty) & \text{ if } t \in B_-(y,u), \\ \{0\} & \text{ if } t \in A(y,u). \end{cases} \] \end{itemize} \end{definition} Obviously, \[ K^\ptw_{\crit}(y,u)(t) \subset K_\rad^\ptw(y)(t) \quad\forall t \in [0,T]. \] Note that a function $z \in C^\infty[0, T]$ satisfying $z(t) \in K_\rad^\ptw(y)(t)$ for all $t \in [0, T]$ is not necessarily an element of the ``global'' radial cone associated with \eqref{eq:V}, i.e., does not necessarily satisfy $y(t) + \alpha z(t) \in Z$ for all $t \in [0, T]$ for a number $\alpha > 0$ independent of $t$. A possible counterexample here is $r=1$, $y_0 = 0$, $T = \pi/2$, $y(t) = u(t) = \sin(t)$, and $z(t) = \sin(2t)$. Indeed, for these $r$, $y_0$, $T$, $y$, $u$, and $z$, we clearly have $y = \S(u)$, $z(T) = 0 \in K_\rad^\ptw(y)(T) = (-\infty, 0]$, and $z(t) \in K_\rad^\ptw(y)(t) = \R$ for all $t \in [0, T)$. Due to the identities $y(T) = 1 = r$, $y'(T) = 0$, and $z'(T) = -2$, it further holds $y'(T) + \alpha z'(T) = - 2 \alpha < 0$ for all $\alpha > 0$. This implies that, for all $\alpha > 0$, there exists $t \in [0,T]$ satisfying $y(t) + \alpha z(t) > r$. We thus have $z(t) \in K_\rad^\ptw(y)(t)$ for all $t \in [0,T]$ but there does not exist $\alpha > 0$ satisfying $y(t) + \alpha z(t) \in Z$ for all $t \in [0, T]$. 
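The counterexample above can also be checked numerically. The following sketch (pure Python; the grid resolution and the tested values of $\alpha$ are our own choices, purely for illustration) confirms that $z(t) = \sin(2t)$ takes values in $K_\rad^\ptw(y)(t)$ everywhere while $y + \alpha z$ leaves $Z = [-1,1]$ for each tested $\alpha > 0$:

```python
import math

# Data of the counterexample: r = 1, T = pi/2, y(t) = u(t) = sin(t),
# z(t) = sin(2t).
T = math.pi / 2
ts = [i * T / 100000 for i in range(100001)]

def max_of_y_plus_alpha_z(alpha):
    """Grid approximation of max over [0, T] of y(t) + alpha * z(t)."""
    return max(math.sin(t) + alpha * math.sin(2.0 * t) for t in ts)

# z is pointwise radial: |y(t)| < 1 = r for t < T, and z(T) = sin(pi) = 0
# lies in K_rad(y)(T) = (-inf, 0].  Nevertheless, y + alpha*z exceeds r for
# every alpha > 0; for small alpha, the excess is roughly 2*alpha**2,
# attained near t = T - 2*alpha.
```

The quadratic size $2\alpha^2$ of the excess reflects the first-order expansion $y(T-\varepsilon) + \alpha z(T-\varepsilon) \approx 1 - \varepsilon^2/2 + 2\alpha\varepsilon$, which is maximal at $\varepsilon = 2\alpha$.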
\cref{prop:classicalcritical,cor:classicalcritical} below establish a connection between the pointwise critical cone mapping $K^\ptw_{\crit}(y,u)\colon [0,T] \rightrightarrows \R$ and the classical notion of criticality, that is, the property of being an element of the kernel of the multiplier that appears in the variational inequality \eqref{eq:V}, cf.\ the definition of $K_\crit(y,u)$ in \eqref{eq:dirdiffcharobstacleproblem}. As a preparation for \cref{prop:classicalcritical,cor:classicalcritical}, we prove the following lemma. \begin{lemma}\label{lemma:ortho} Let $u \in CBV[0,T]$ be a control with state $y := \S(u) \in CBV[0,T]$ and let $z\in G[0,T]$ be a function satisfying $z = 0$ on $ A(y,u)$. Then \begin{equation}\label{eq:ortho} \int_s^\tau z\dd (y-u) = 0 \qquad \forall\;0\le s < \tau\le T. \end{equation} \end{lemma} \begin{proof} Define $D := \left ( I(y) \cup B(y,u) \right )\setminus \{T\}$. The continuity of $y$ and the definitions of $I(y)$ and $B(y,u)$ imply that, for every $t \in D$, there exists $\varepsilon > 0$ with $[t, t + \varepsilon) \subset D$. This entails that the set $D$ decomposes into disjoint connected components $\{D_i\}_{i \in \II}$ with $\II$ being finite or equal to $\mathbb{N}$ and $D_i$ being an interval with a nonempty interior for all $i \in \II$. Using \cref{lemma:yumon} and again the definition of $B(y,u)$, one easily checks that, for each $t \in D$, there exists $\varepsilon > 0$ such that $y - u$ is constant on $[t, t + \varepsilon)$. Since $y-u$ is continuous, this implies $y-u =: c_i = \mathrm{const}$ on each $[a_i,b_i] := \closure{(D_i)}$. Now $\smash{\int_0^T \mathds{1}_{\{T\}}z\dd(y-u) = 0}$ by \cref{eq:singlepointmass}. 
Using this identity, the fact that $z=0$ holds on $ A(y,u)$, \cref{lemma:subintervals}, and (in the case $\II = \mathbb{N}$) the bounded convergence theorem (\cref{th:boundedconv}), we see that \begin{equation} \label{eq:randomeq2536} \begin{aligned} \int_0^T z\dd (y-u) &= \int_0^T \mathds{1}_D z\dd (y-u) = \int_0^T \sum_{i\in \II} \mathds{1}_{D_i} z\dd (y-u) \\ &= \sum_{i\in \II} \int_0^T \mathds{1}_{D_i} z\dd (y-u) = \sum_{i\in \II} \int_{a_i}^{b_i} \mathds{1}_{D_i} z\dd c_i = 0. \end{aligned} \end{equation} Choosing $\mathds{1}_{[s,\tau]}z$ instead of $z$ in \eqref{eq:randomeq2536} yields \eqref{eq:ortho}, again due to \cref{lemma:subintervals}. \end{proof} \begin{proposition}[relation to the classical notion of criticality] \label{prop:classicalcritical}Suppose that a control $u \in CBV[0,T]$ with state $y := \S(u) \in CBV[0,T]$ and a function $z\in G[0,T]$ satisfying $z(t) \in K_\rad^\ptw(y)(t)$ for all $t \in [0,T]$ are given. Then it holds \begin{equation}\label{eq:classicalcritical:item:ii} \int_s^\tau z\dd (y-u) \ge 0 \qquad \forall\;0\le s < \tau\le T. \end{equation} Moreover, it is true that \vspace{0.05cm} \begin{equation}\label{eq:classicalcritical:item:i} z(t) \in K^\ptw_{\crit}(y,u)(t)~\forall t \in [0,T] \quad\Rightarrow\quad \int_s^\tau z\dd(y-u) = 0~~\forall\,0 \leq s < \tau \leq T, \end{equation} and, if $z$ possesses the additional regularity $z\in G_r[0,T]$, then we also have \begin{equation}\label{eq:classicalcritical:item:iii} \int_0^T z\dd(y-u) = 0 \quad \Rightarrow\quad z(t) \in K^\ptw_{\crit}(y,u)(t)~\forall t \in [0,T]. \end{equation} \end{proposition} \begin{proof} In order to prove \eqref{eq:classicalcritical:item:ii}, let $0 \leq s < \tau \leq T$ be given. We first assume that \mbox{$y(t) \in (-r,r]$} holds for all $t \in [s, \tau]$. By \cref{lemma:yumon}, $u - y$ is nondecreasing on $[s, \tau]$. 
Using the definition of $K_\rad^\ptw(y)$, it is easy to check that $\hat{z}(t) := \max\{0,z(t)\} \mathds{1}_{[s,\tau]}(t)$ satisfies the assumptions of \cref{lemma:ortho}. Therefore, \[ \int_s^\tau z\dd(y - u) = \int_s^\tau \min\{0,z\} \dd(y - u) = \int_s^\tau \max\{0,-z\} \dd(u - y) \geq 0. \] This proves \eqref{eq:classicalcritical:item:ii} in the case $y(t) \in (-r,r]$ for all $t \in [s, \tau]$. In the case $y(t) \in [-r,r)$ for all $t \in [s, \tau]$, we can use the exact same arguments as above with reversed signs to establish \eqref{eq:classicalcritical:item:ii}. To finally obtain \eqref{eq:classicalcritical:item:ii} for arbitrary $[s,\tau]$, it suffices to consider a subdivision of $[s,\tau]$ into subintervals of the above two types and to use \eqref{eq:decomposeInterval}. The implication \eqref{eq:classicalcritical:item:i} follows directly from \cref{lemma:ortho} since $z(t)\in K^\ptw_{\crit}(y,u)(t)$ for all $t \in [0,T]$ implies $z = 0$ on $ A(y,u)$. It remains to prove \eqref{eq:classicalcritical:item:iii}. Since $z(t)\in K_\rad^\ptw(y)(t) \setminus K^\ptw_{\crit}(y,u)(t)$ for some $t \in [0,T]$ if and only if $z(t) \neq 0$ and $t\in A(y,u)$, it suffices to show that the integral on the left side of \eqref{eq:classicalcritical:item:iii} is nonzero if a time $t$ with the latter property exists. So let $t \in A(y,u)$ be arbitrary but fixed and suppose that $z(t) \neq 0$. We assume w.l.o.g.\ that $y(t) = r$. (The case $y(t) = -r$ is analogous.) From $0 \neq z(t) \in K_\rad^\ptw(y)(t)$, we obtain that $z(t) < 0$ holds, and from the right-continuity of $z$, the definition of $ A(y,u)$, and the continuity of $y$, that $t \neq T$ and that there exist numbers $c, \varepsilon > 0$ such that $z(s) \leq -c $ and $y(s) \in (-r, r]$ holds for all $s \in [t, t + \varepsilon] \subset [0, T]$ and such that $y-u$ is not constant on $[t,t+\varepsilon)$. By \cref{lemma:yumon}, $y-u$ is nonincreasing on $[t, t + \varepsilon]$. 
It thus follows that \[ \int_{t}^{t + \varepsilon} z \dd(y-u) \geq c \int_{t}^{t+\varepsilon} \dd(u-y) = c\left ( (u - y)(t + \varepsilon) - (u - y)(t)\right ) > 0. \] Using \eqref{eq:classicalcritical:item:ii}, we conclude \[ \int_{0}^{T} z \dd(y-u) = \int_{0}^{t} z \dd(y-u) + \int_{t}^{t + \varepsilon} z \dd(y-u) + \int_{t + \varepsilon}^{T} z \dd(y-u) > 0. \] \end{proof} \begin{corollary}\label{cor:classicalcritical} Let $u \in CBV[0,T]$ be a control with state $y := \S(u)$ and let $z\in G_r[0,T]$ be a given function. Then \begin{equation*} z(t) \in K^\ptw_{\crit}(y,u)(t)~\forall t \in [0,T]\quad\Leftrightarrow\quad \left \{~~ \begin{aligned} &z(t) \in K^\ptw_{\rad}(y)(t)~\forall t \in [0,T] \text{ and } \\ & \int_s^\tau z\dd(y-u) = 0~~\forall\,0 \leq s < \tau \leq T. \end{aligned}\right. \end{equation*} \end{corollary} As \cref{cor:classicalcritical} shows, a function $z \in G_r[0,T]$ is ``critical in the pointwise sense'' if and only if it takes values in $\smash{K_\rad^\ptw(y)(t)}$ for all $t\in [0,T]$ and is contained in the kernel of the linear and continuous function $ \smash{G[0,T] \ni v \mapsto \int_s^\tau v \dd(y-u) \in \R} $ for all $0 \leq s < \tau \leq T$. For elements of $G_r[0,T]$, the pointwise notion of criticality introduced in \cref{def:ptwcritcone} is thus closely related to the notion of criticality appearing in the context of the classical obstacle problem, cf.\ \eqref{eq:dirdiffcharobstacleproblem}. This relation does not exist anymore in general when the assumption of right-continuity is dropped. 
Indeed, as the integrator $y-u$ of the integrals in \cref{prop:classicalcritical,cor:classicalcritical} does not assign mass to singletons due to the continuity of $u$ and $y$ and \cref{eq:singlepointmass}, for every $t \in A(y,u)$, the function $z(s) := -\sgn(y(t)) \mathds{1}_{\{t\}}(s)$ satisfies $z \in G[0, T]$, \smash{$z(s) \in K_\rad^\ptw(y)(s)$} for all $s \in [0, T]$, and $\smash{\int_s^\tau z\dd(y - u) = 0}$ for all $0 \leq s < \tau \leq T$ but does not vanish on the strictly active set $ A(y,u)$. In all situations in which $ A(y,u)$ is nonempty, the pointwise notion of criticality in \cref{def:ptwcritcone} thus differs from the ordinary, multiplier-based one as soon as the regularity of the considered functions is too poor. We are now in the position to state the auxiliary problem that characterizes the pointwise directional derivatives $\S'(u;h)$ of $\S$ in the situation of \cref{th:dirdiff}. \begin{theorem}[variational inequality for directional derivatives]\label{th:dirdiffVI}Consider a fixed control $u \in CBV[0, T]$ with associated state $y := \S(u) \in CBV[0, T]$. Then, for \mbox{every} $h \in CBV[0, T]$, the pointwise directional derivative $\delta := \S'(u; h) \in BV[0, T]$ of $\S$ at $u$ in direction $h$ is the unique solution in $BV[0, T]$ of the system \begin{equation} \label{eq:dirdiffVI} \begin{gathered} \int_0^s (z - \delta_+)\dd(\delta - h) \geq 0\quad \forall z \in G\left ([0, s]; K^\ptw_{\crit}(y,u) \right) \quad \forall s \in (0, T], \\ \delta_+(t) \in K^\ptw_{\crit}(y,u)(t)~\forall t \in [0, T], \qquad \delta(0) = 0. \end{gathered} \end{equation} Moreover, it holds $\delta(t) \in \{\delta(t+), \delta(t-)\}$ for all $t \in [0, T]$ and $\var(\delta) \leq 2\var(h)$. \end{theorem} \begin{proof} This follows from \cite[Theorem 2.1]{Brokate2021}, where the result is stated for the scalar play operator $\PP(u) := u - \S(u)$. 
\end{proof} As $z=0$ and $z = 2\delta_+$ are admissible test functions in \eqref{eq:dirdiffVI}, this variational inequality implies in particular that \begin{equation} \label{eq:dirdiffeq} \int_0^s \delta_+\dd(\delta - h) = 0 \quad \forall s \in (0, T]. \end{equation} We remark that, using the inclusion $\delta(t) \in \{\delta(t+), \delta(t-)\}$ and \cite[Lemma 6.3.3]{Monteiro2019}, it is easy to check that the inequality in \eqref{eq:dirdiffVI} is satisfied by $\delta$ regardless of whether the right limit $\delta_+$ in the integral is defined w.r.t.\ $[0, s]$ or w.r.t.\ $[0, T]$. To achieve that $\delta$ is uniquely characterized by \eqref{eq:dirdiffVI}, the definition w.r.t.\ $[0,s]$ and the corresponding convention for the endpoint $s$ have to be used, see \cite[proof of Theorem 2.1]{Brokate2021}. Regarding the regularity properties of the derivatives $\S'(u;h)$ in \cref{th:dirdiffVI}, it should be noted that $\S'(u;h)$ can satisfy $\S'(u;h)_+ \neq \S'(u;h) \neq \S'(u;h)_-$ even when $u$ and $h$ are smooth, see \cite[Example 4.1]{Brokate2021}. There is, however, a logic behind the jumps of $\S'(u;h)$ as the following corollary shows. \begin{corollary}[direction of jumps]\label{corollary:dirdiffjumps}Consider the situation in \cref{th:dirdiffVI} for some fixed $u, h \in CBV[0, T]$. Then, for all $t\in [0,T]$, it holds \begin{gather} \label{eq:dirdiffjumps1} (\delta(t+)-\delta(t-))\zeta \geq 0 \quad \forall \zeta \in K^\ptw_{\crit}(y,u) (t), \\ \label{eq:dirdiffjumps2} \delta(t+)(\delta(t+)-\delta(t-)) = \delta(t+)(\delta(t+)-\delta(t)) = 0. \end{gather} In particular, if $t \in [0, T]$ is a point of discontinuity of $\delta = \S'(u;h) \in BV[0, T]$, i.e., if $\delta(t+) \neq \delta(t-)$, then it holds $\delta(t+) = 0$. Moreover, we have $\delta(0+) = \delta(0) = 0$. 
\end{corollary} \begin{proof} For the test function $z = \mathds{1}_{\{t\}}\zeta$ with $t\in [0,T]$ and $\zeta \in K^\ptw_{\crit}(y,u) (t)$, we obtain from \eqref{eq:dirdiffVI}, using \eqref{eq:dirdiffeq} as well as \cref{eq:singlepointmass}, \[ 0 \leq \int_0^T \mathds{1}_{\{t\}}\zeta\dd(\delta - h) = \zeta((\delta - h)(t+) - (\delta - h)(t-))= \zeta(\delta(t+) - \delta(t-)) \] with the conventions $\delta(0-) = \delta(0)$ and $\delta(T+) = \delta(T)$. This proves \eqref{eq:dirdiffjumps1}. Using the test functions $z = \delta_+ \pm \mathds{1}_{\{t\}}\delta_+(t)$ in \eqref{eq:dirdiffVI}, we obtain analogously \[ 0 \leq \int_0^T \pm \mathds{1}_{\{t\}}\delta_+(t)\dd(\delta - h) = \pm \delta(t+)(\delta(t+) - \delta(t-)). \] Since $\delta(t) \in \{\delta(t-),\delta(t+)\}$, both equalities in \eqref{eq:dirdiffjumps2} follow. All other assertions are immediate consequences of \eqref{eq:dirdiffjumps1}, \eqref{eq:dirdiffjumps2}, and the initial condition $\delta(0) = 0$. \end{proof} We would like to point out that jump conditions similar to those in \cref{corollary:dirdiffjumps} also have to be studied in order to establish the system \eqref{eq:dirdiffVI}, see \cite[section 5]{Brokate2021}. We deduce \cref{corollary:dirdiffjumps} from \cref{th:dirdiffVI} here to simplify the presentation and to avoid recalling major parts of the analysis in \cite{Brokate2021}. As an immediate consequence of \cref{th:dirdiffVI,corollary:dirdiffjumps}, we obtain: \begin{corollary}[variational inequality for the right limits of the derivatives]\label{cor:dirdiffVI+}Consider an arbitrary but fixed $u \in CBV[0, T]$ with state $y := \S(u) \in CBV[0, T]$. 
Then, for every $h \in CBV[0, T]$, the right limit $\eta := \S'(u; h)_+ \in BV_r[0, T]$ of the pointwise directional derivative $\S'(u; h)$ of $\S$ at $u$ in direction $h$ is the unique solution in $BV_r[0, T]$ of the variational inequality \begin{equation} \label{eq:dirdiffVI+} \begin{gathered} \int_0^T (z - \eta)\dd(\eta- h) \geq 0\quad \forall z \in G\left ([0, T]; K^\ptw_{\crit}(y,u) \right), \\ \eta(t) \in K^\ptw_{\crit}(y,u)(t)~\forall t \in [0, T], \qquad \eta(0) = 0. \end{gathered} \end{equation} Moreover, for all $s\in (0,T]$, it is true that \begin{equation} \label{eq:dirdiffVI+s} \int_0^s (z - \eta)\dd(\eta- h) \geq 0\quad \forall z \in G\left ([0, s]; K^\ptw_{\crit}(y,u) \right). \end{equation} \end{corollary} \begin{proof} That $\eta$ satisfies the second line of \eqref{eq:dirdiffVI+} follows from \cref{th:dirdiffVI} and \cref{corollary:dirdiffjumps}. Since $\S'(u;h)\in BV[0,T]$ has at most countably many discontinuity points by \cite[Theorem 2.3.2]{Monteiro2019}, and because $(\eta - \S'(u;h))(T) = 0$ by convention and $(\eta - \S'(u;h))(0) = 0$ by \cref{corollary:dirdiffjumps}, it follows from \cref{lemma:intgzeroexcept} that \[ \int_0^T f\dd(\eta- \S'(u;h)) = 0\quad \forall f \in G[0, T]. \] If we combine this identity with \eqref{eq:dirdiffVI} for $s = T$ and the linearity of the Kurzweil-Stieltjes integral, then the variational inequality in \eqref{eq:dirdiffVI+} follows immediately. To establish \eqref{eq:dirdiffVI+s}, it suffices to consider functions of the form $z := \mathds{1}_{[0,s]}\tilde z + \mathds{1}_{(s,T]}\eta$, $s \in (0, T]$, $\smash{\tilde z \in G\left ([0, s]; K^\ptw_{\crit}(y,u) \right )}$, in \eqref{eq:dirdiffVI+} and to exploit \eqref{eq:decomposeInterval} and \eqref{eq:singlepointmass}. Suppose now that there are two $\eta_1, \eta_2 \in BV_r[0,T]$ satisfying \eqref{eq:dirdiffVI+}. 
In this case, we can consider functions of the form $z := \mathds{1}_{[0,s]}\eta_2 + \mathds{1}_{(s,T]}\eta_1$ and $z := \mathds{1}_{[0,s]}\eta_1 + \mathds{1}_{(s,T]}\eta_2$ in the inequalities for $\eta_1$ and $\eta_2$, respectively, and add the resulting estimates to obtain with \eqref{eq:decomposeInterval} and \eqref{eq:singlepointmass} that $ \int_0^s (\eta_2 - \eta_1)\dd(\eta_2 - \eta_1) \le 0 $ holds for all $s\in (0,T]$. Due to \cref{prop:partialIntegration} and $\eta_1(0) = \eta_2(0) = 0$, this yields $(\eta_1(s) - \eta_2(s))^2 \leq 0$ for all $s \in [0,T]$. This proves that \eqref{eq:dirdiffVI+} possesses at most one solution in $BV_r[0,T]$. \end{proof} Note that the system \eqref{eq:dirdiffVI+} has the same structure as ``usual'' rate-independent systems posed in $BV_r[0,T]$, cf.\ \cite[Theorem 3.3]{Recupero2020}. Because of this, \eqref{eq:dirdiffVI+} is easier to work with than \eqref{eq:dirdiffVI}, which involves the additional varying parameter $s \in (0,T]$. \section{First consequences for the optimal control problem (P)}\label{sec:5} As a direct consequence of the results for $\S$ in the last section, we obtain: \begin{corollary}[existence of solutions]\label{cor:solex}Assume, in addition to the conditions in our standing \cref{ass:standing}, that: \begin{itemize} \item $(U, \|\cdot\|_U)$ is a reflexive Banach space that is compactly embedded into $C[0,T]$, \item $\Uad$ is a closed subset of $(U, \|\cdot\|_U)$, \item $\JJ$ is lower semicontinuous in the sense that, for all $\{(y_n, z_n, u_n) \} \subset C[0, T] \times \R \times U$ satisfying $y_n \to y$ in $C[0, T]$, $z_n \to z$ in $\R$, and $u_n \weakly u$ in $U$, we have \[ \liminf_{n \to \infty} \JJ(y_n, z_n, u_n) \geq \JJ(y,z,u), \] \item $\JJ$ is radially unbounded in the sense that there exists a function $\rho\colon [0, \infty) \to \R$ satisfying $\rho(s) \to \infty$ for $s \to \infty$ and \[ \JJ(y, z, u) \geq \rho\left ( \|u\|_U \right ) \qquad \forall (y, z, u) \in C[0, T] \times \R \times 
U. \] \end{itemize} Then the problem \eqref{eq:P} possesses at least one globally optimal control-state pair $(\bar u, \bar y)$. \end{corollary}\pagebreak \begin{proof} This follows straightforwardly from the direct method of the calculus of variations and the Lipschitz continuity property in \eqref{eq;LipschitzSinfty}. \end{proof} A prototypical example of a space $U$ satisfying the conditions in \cref{cor:solex} is $H^1(0,T)$. We would like to point out that it is, in general, not possible to use the direct method of the calculus of variations in the situation of \cref{cor:solex} if the control space $U$ is not compactly embedded into $C[0,T]$ and if the convergence $u_n \weakly u$ in $U$ only implies \smash{$u_n \weaklystar u$} in $BV[0,T]$. To see this, suppose that $r = y_0 = 1$, that $T = 2$, and that $\varphi \in C^\infty(\R)$ is a function that is identically zero in $\R \setminus (0,2)$, equal to 2 at $t=1$, monotonically increasing in $[0,1]$, and monotonically decreasing in $[1,2]$. For such $r$, $y_0$, $T$, and $\varphi$, it is easy to check that the controls $u_n(t) := \varphi(n t)$, $t \in [0,T]$, $n \in \mathbb{N}$, satisfy $u_n \in C^\infty[0,T]$, $\|u_n\|_{BV} = \var(u_n) = 4$, and $\S(u_n) = \mathds{1}_{[0, 1/n)} + \mathds{1}_{[1/n, T]} (u_n - 1)$ for all $n$ as well as $u_n(t) \to 0$ for all $t \in [0,T]$ as $n \to \infty$. In particular, we have $\|\S(u_n)\|_{BV} = 1 + \var(\S(u_n)) = 3$ for all $n$ and $\S(u_n)(t) \to \mathds{1}_{\{0\}}(t) - \mathds{1}_{(0, T]}(t)$ for all $t \in [0,T]$ as $n \to \infty$. In view of \cite[Proposition~3.13]{Ambrosio2000}, this yields \smash{$C^\infty[0,T] \ni u_n \weaklystar 0$} and \smash{$CBV[0,T] \ni \S(u_n) \weaklystar \mathds{1}_{\{0\}} - \mathds{1}_{(0, T]} \neq \mathds{1}_{[0, T]} = \S(0)$} in $BV[0,T]$. 
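The preceding computation can be reproduced numerically with the standard catch-up time discretization of \eqref{eq:V}. The following sketch is illustrative only; the concrete bump $\varphi$, the grid, and the tolerances are our own choices (any $\varphi$ with the stated properties yields the same values).

```python
import math

def phi(t):
    """One admissible bump: C^infinity, zero outside (0, 2), phi(1) = 2,
    increasing on [0, 1], decreasing on [1, 2]."""
    s = (t - 1.0) ** 2
    if s >= 1.0:               # t outside (0, 2)
        return 0.0
    return 2.0 * math.exp(1.0 - 1.0 / (1.0 - s))

def stop(u, y0, r):
    """Catch-up discretization of the scalar stop operator: each step
    follows the control increment and projects back onto [-r, r]."""
    y = [y0]
    for k in range(1, len(u)):
        y.append(max(-r, min(r, y[-1] + u[k] - u[k - 1])))
    return y

n, N = 4, 20000
ts = [2.0 * k / N for k in range(N + 1)]     # grid on [0, T] with T = 2
un = [phi(n * t) for t in ts]                # u_n(t) = phi(n t)
yn = stop(un, 1.0, 1.0)                      # y_0 = r = 1

var_u = sum(abs(un[k] - un[k - 1]) for k in range(1, N + 1))
var_y = sum(abs(yn[k] - yn[k - 1]) for k in range(1, N + 1))
# claimed closed form: S(u_n) = 1 on [0, 1/n) and u_n - 1 on [1/n, T]
err = max(abs(yn[k] - (1.0 if ts[k] < 1.0 / n else un[k] - 1.0))
          for k in range(N + 1))
```

For $n = 4$, the discrete state stays at $1$ while $u_n$ rises to its peak, then tracks $u_n - 1$ down to $-1$ and remains there, matching the closed form above, while the zero control gives the constant state $1$.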
The map $\S$ is thus not continuous w.r.t.\ weak-star convergence in $BV[0,T]$ -- even along sequences of smooth functions -- and we may conclude that it is indeed not possible to apply the direct method of the calculus of variations to establish the solvability of \eqref{eq:P} if the space $U$ only provides weak-star convergence in $BV[0,T]$ for minimizing sequences. We remark that the compact embedding $U \hookrightarrow C[0,T]$ needed in \cref{cor:solex} significantly complicates the derivation of the strong stationarity system \eqref{eq:strongstatsys-2} since it makes it impossible to find sequences that converge weakly or strongly in $U$ to the discontinuous directional derivatives $\S'(u;h)$. In fact, this difficulty already arises due to the embedding $U \hookrightarrow C[0,T]$ in \cref{ass:standing}. We will circumvent this problem in \cref{sec:6} by means of a careful analysis of pointwise limits. The next corollary is concerned with the Bouligand stationarity condition that arises from \cref{th:dirdiff}. \begin{corollary}[Bouligand stationarity condition]Suppose that $\bar u \in \Uad$ is a locally optimal control of \eqref{eq:P} with associated state $\bar y := \S(\bar u)$. Then it holds \begin{equation} \label{eq:Bouligand} \begin{aligned} \left \langle \partial_1 \JJ(\bar y, \bar y(T), \bar u), \S'(\bar u; h) \right \rangle_{L^\infty} &+ \partial_2 \JJ(\bar y, \bar y(T), \bar u)\S'(\bar u; h)(T) \\ &+ \left \langle \partial_3 \JJ(\bar y, \bar y(T), \bar u), h \right \rangle_{U} \geq 0 \quad \forall h \in \R_+(\Uad - \bar u). \end{aligned} \end{equation} Here, $ \partial_1 \JJ(\bar y, \bar y(T), \bar u)\in L^1(0, T)$, $\partial_2 \JJ(\bar y, \bar y(T), \bar u) \in \R$, and $\partial_3 \JJ(\bar y, \bar y(T), \bar u) \in U^*$ are the partial Fr\'{e}chet derivatives of the objective function $\JJ\colon L^\infty(0, T) \times \R \times U \to \R$. 
\end{corollary} \begin{proof} This follows along standard lines from the convexity of $\Uad$, the Fr\'{e}chet differentiability of $\JJ$, \cref{th:dirdiff}, the Lipschitz estimate \eqref{eq;LipschitzSinfty}, and the $L^1$-regularity of $ \partial_1 \JJ(\bar y, \bar y(T), \bar u)$. See, e.g., \cite[Proposition 6.1.2]{ChristofPhd2018} or \cite[section 3]{Herzog2013}. \end{proof} The last result motivates: \begin{definition}[Bouligand stationary point]\label{def:Bouligandstationary}A control $\bar u \in \Uad$ with associated state $\bar y := \S(\bar u)$ is called a Bouligand stationary point of \eqref{eq:P} if $(\bar u,\bar y)$ satisfies \eqref{eq:Bouligand}. \end{definition} Due to its implicit nature, the Bouligand stationarity condition \eqref{eq:Bouligand} is typically not very helpful in practice. This is one of the main motivations for the derivation of strong stationarity systems. To establish such a system for \eqref{eq:P}, we study: \section{Temporal polyhedricity properties} \label{sec:6} Throughout this section, we assume that an arbitrary but fixed $u \in CBV[0,T]$ with state $y := \S(u) \in CBV[0,T]$ is given. For these $u$ and $y$, we introduce: \begin{definition}[reduced critical cone and smooth critical radial directions]\label{def:6.1} We define the reduced critical cone in $G_r[0, T]$ associated with $(y,u)$ to be the set \begin{equation*} \begin{aligned} \KK_{G_r}^{\red,\crit}(y,u) := \big \{ z \in G_r[0, T] \colon &z(t) \in K_{\crit}^{\ptw}(y,u)(t)\,\forall t \in [0,T],~ z(0) = 0, \\ &\text{and } z(t) = 0~\forall t \in [0,T] \text{ with } z(t-) \neq z(t) \big\} \end{aligned} \end{equation*} and the cone of smooth critical radial directions associated with $(y,u)$ to be the set \begin{equation*} \begin{aligned} \KK_{C^\infty}^{\rad,\crit}(y,u) := \big\{ z \in C^\infty[0, T] \colon &z(t) \in K^\ptw_{\crit}(y,u)(t)~\forall t \in [0,T], \, z(0) = 0, \\ &\text{and } \exists \alpha > 0 \text{ s.t. } y(t) + \alpha z(t) \in Z~\forall t \in [0, T] \big\}. 
\end{aligned} \end{equation*} \end{definition} Note that $\smash{\KK_{C^\infty}^{\rad,\crit}(y,u)}$ is a subset of $\smash{\KK_{G_r}^{\red,\crit}(y,u)}$, that both $\smash{\KK_{C^\infty}^{\rad,\crit}(y,u)}$ and $\smash{\KK_{G_r}^{\red,\crit}(y,u)}$ are cones containing the zero function, and that $\smash{\KK_{C^\infty}^{\rad,\crit}(y,u)}$ is convex. The cone \smash{$\KK_{G_r}^{\red,\crit}(y,u)$} is typically not convex due to the additional conditions on the points of discontinuity. From \cref{corollary:dirdiffjumps,cor:dirdiffVI+}, it follows that $\S'(u;h)_+$ is an element of \smash{$\KK_{G_r}^{\red,\crit}(y,u)$} for all $h \in CBV[0,T]$. In fact, $\smash{\KK_{G_r}^{\red,\crit}(y,u)}$ collects all information about the pointwise properties of the right limits of the derivatives $\S'(u;h)$ that we have derived so far. This motivates the name ``reduced critical cone'', cf.\ the analysis for elliptic variational inequalities in \cite{ChristofPhd2018}. From \cref{prop:classicalcritical}, we obtain that \begin{align*} \KK_{G_r}^{\red,\crit}(y,u) &= \Bigg\{ z \in G_r[0, T] \colon z(t) \in K_{\rad}^{\ptw}(y)(t)\,\forall t \in [0,T], \int_0^T z \dd(y-u) = 0, \\[-0.1cm] &\hspace{2.8cm}z(0) = 0, z(t) = 0~\forall t \in [0,T] \text{ with } z(t-) \neq z(t) \Bigg\} \end{align*} and \begin{align*} \KK_{C^\infty}^{\rad,\crit}(y,u) &= \Bigg \{ z \in C^\infty[0, T] \colon z(0) = 0, ~ \int_{0}^{T} z \dd(y-u) = 0, \text{ and} \\[-0.1cm] &\hspace{2.9cm}\exists \alpha > 0 \text{ s.t. } y(t) + \alpha z(t) \in Z~\forall t \in [0, T] \Bigg \}. \end{align*} The main result of this section -- \cref{theorem:tempoly} -- shows that the cone $\smash{\KK_{C^\infty}^{\rad,\crit}(y,u)}$ is, in a suitably defined sense, dense in $\smash{\KK_{G_r}^{\red,\crit}(y,u)}$. This density property extends the concept of polyhedricity to the setting considered in this paper. 
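The one-sided smoothing that makes this density result work can be made concrete with a standard $C^\infty$ cutoff. The sketch below is illustrative only (the $e^{-1/x}$ building block and all names are our own, not objects from the paper): it realizes a function $\varphi$ with $\varphi = 1$ on $(-\infty,-1]$ and $\varphi = 0$ on $[0,\infty)$ and checks that the rescaled cutoffs $s \mapsto \varphi\big((s - t + 1/i)/(1/i)\big)$ vanish at a prescribed jump point $t$ for every $i$ while converging pointwise to $\mathds{1}_{(-\infty,t)}$.

```python
import numpy as np

def f(x):
    # standard C^infinity building block: exp(-1/x) for x > 0, else 0
    x = np.asarray(x, dtype=float)
    return np.where(x > 0.0, np.exp(-1.0 / np.where(x > 0.0, x, 1.0)), 0.0)

def cutoff(x):
    """C^infinity transition: equals 1 on (-inf, -1] and 0 on [0, inf)."""
    return f(-x) / (f(-x) + f(x + 1.0))

t_jump = 1.0                      # a prescribed point of discontinuity
s = np.linspace(0.0, 2.0, 9)      # sample grid containing t_jump

for i in (10, 100, 1000):
    z_i = cutoff((s - t_jump + 1.0 / i) / (1.0 / i))  # rescaled cutoff factor
    print(i, np.round(z_i, 3))
# every z_i is smooth and exactly zero at s = t_jump;
# the pointwise limit is the indicator of (-inf, t_jump)
```

Multiplying such a factor by a smooth bump $\psi$ supported near $t$ and by the left limit $z(t-)$ produces smooth functions that approximate a right-continuous step pointwise, which is the mechanism behind the density theorem of this section.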
In the case of the elliptic problem \eqref{eq:optstaclecontrol}, polyhedricity expresses that the set $ K_{\rad}(y) \cap (u + \Delta y)^\perp$ is $H_0^1(\Omega)$-dense in the critical cone $K_{\tan}(y) \cap (u + \Delta y)^\perp$, see \cite{Haraux1977,Wachsmuth2019}. For the study of the inequality \eqref{eq:V}, the set \smash{$\KK_{C^\infty}^{\rad,\crit}(y,u)$} is relevant because of the following observation. \begin{lemma}[directional derivative in smooth critical radial directions]\label{lemma:dirdifcritrad}Let $h$ be an arbitrary but fixed element of the set \smash{$\KK_{C^\infty}^{\rad,\crit}(y,u)$}. Then there exists $\alpha > 0$ such that $\S(u + \beta h) = \S(u) + \beta h$ holds for all $\beta \in (0, \alpha)$. In particular, $\S'(u;h) = h$. \end{lemma} \begin{proof} According to the definition of the set $\KK_{C^\infty}^{\rad,\crit}(y,u) $, we can find a number $\alpha > 0$ such that $y(t) + \alpha h(t) \in Z$ holds for all $t \in [0, T]$. Since $Z$ is convex, this also yields $y(t) + \beta h(t) \in Z$ for all $t \in [0, T]$ and all $\beta \in (0, \alpha)$. From \cref{prop:classicalcritical} and the variational inequality \eqref{eq:V} for $y$, we moreover obtain that \[ \int_0^T h\dd(y-u) = 0 \qquad \text{and} \qquad \int_0^T (v - y)\dd (y - u)\geq 0 ~~\forall v \in C([0, T]; Z). \] If we combine the above with the initial conditions $y(0) = y_0$ and $h(0) = 0$ and our previous considerations, then it follows that \begin{equation*} \begin{aligned} & \int_0^T (v - (y + \beta h) )\dd (y + \beta h - (u + \beta h))\geq 0 &&\forall v \in C([0, T]; Z), \\ &y(t) + \beta h(t) \in Z\quad \forall t \in [0, T], &&y(0)+ \beta h(0) = y_0, \end{aligned} \end{equation*} holds for all $\beta \in (0, \alpha)$. Thus, $\S(u + \beta h) = y + \beta h$ for all $\beta \in (0, \alpha)$ by \cref{th:Swellposed} as claimed. The assertion about the directional derivative follows immediately from this identity. This completes the proof. 
\end{proof} Note that \cref{lemma:dirdifcritrad} remains valid when the space $C^\infty[0, T]$ in the definition of \smash{$\KK_{C^\infty}^{\rad,\crit}(y,u)$} is replaced with the space $CBV[0,T]$. We consider smooth critical radial directions in our analysis because this gives rise to a stronger density result in \cref{theorem:tempoly}. As we will see in \cref{sec:7}, \cref{lemma:dirdifcritrad} makes it possible to prove the strong stationarity system \eqref{eq:strongstatsys-2} once the polyhedricity property in \cref{theorem:tempoly} is established. To obtain the latter, we require the following result. \begin{lemma}\label{lemma:polyprep1}Suppose that $z \in \KK_{G_r}^{\red,\crit}(y,u)$ and $\xi > 0$ are given. Let $t \in [0, T]$ be an arbitrary but fixed point of continuity of $z$, i.e., a point with $z(t) = z(t-)$. Then there exists $\varepsilon > 0$ such that the step function \[ \zeta \colon [0,T] \to \R, \quad \zeta(s) := z(t)\mathds{1}_{J_\varepsilon(t)}(s), \quad J_\varepsilon(t) := [t-\varepsilon,t+\varepsilon]\cap [0,T], \] possesses all of the following properties: \begin{enumerate}[label=\roman*)] \item\label{lemma:polyprep1:i} It is true that \[ \sup_{s \in [t-\varepsilon,t+\varepsilon]\cap [0,T]} \left | z(s) - \zeta(s) \right | \leq \xi. \] \item\label{lemma:polyprep1:ii} It holds \[ \zeta(s) \in K^\ptw_{\crit}(y,u)(s) \qquad \forall s \in [0,T]. \] \item\label{lemma:polyprep1:iii} For every $0\leq \psi \in C_c^\infty(\R)$ with support $\supp(\psi) \subset (t-\varepsilon, t + \varepsilon)$, the function $\psi \zeta \in G[0,T]$ is an element of the cone $\KK_{C^\infty}^{\rad,\crit}(y,u)$. \end{enumerate} \end{lemma} \begin{proof} Since $z$ is continuous at $t$, we can find $\varepsilon > 0$ such that \ref{lemma:polyprep1:i} holds. If $z(t) = 0$, then $\zeta = 0$ and \ref{lemma:polyprep1:ii} and \ref{lemma:polyprep1:iii} hold trivially for this $\varepsilon$. 
Due to the definition of the set \smash{$\KK_{G_r}^{\red,\crit}(y,u)$} and the continuity of $z$ at $t$, this case covers in particular the situations $t = 0$ and $t\in \closure(A(y,u))$. In what follows, we may thus assume that \begin{equation} \label{eq:randomeq3746} z(t) \neq 0 \qquad \text{and}\qquad J_\varepsilon(t) \subset \big ( I(y) \cup B(y,u) \big ) \cap (0, T] \end{equation} and have to prove that, for a potentially smaller $\varepsilon$, we have \ref{lemma:polyprep1:ii} and \ref{lemma:polyprep1:iii}. To this end, we distinguish between three cases. Case 1: $t\in I(y)$. In this case, it follows from the continuity of $y$ that, after possibly making $\varepsilon$ smaller, we have $J_\varepsilon(t) \subset I(y)$ and $|y| \le r-\gamma$ on $J_\varepsilon(t)$ for some $\gamma > 0$. This implies in particular that $K^\ptw_{\crit}(y,u)(s)= \R$ for all $s\in J_\varepsilon(t)$. Case 2: $t\in B_+(y,u)$. In this case, it follows from the continuity of $y$ that, after possibly making $\varepsilon$ smaller, we have $J_\varepsilon(t) \subset I(y)\cup B_+(y,u)$ and $y \ge -r+\gamma$ on $J_\varepsilon(t)$ for some $\gamma > 0$. Due to the definition of $K^\ptw_{\crit}(y,u)$, this implies in particular that $z(t) \in K^\ptw_{\crit}(y,u)(t)= (-\infty,0] \subset K^\ptw_{\crit}(y,u)(s)$ for all $s \in J_\varepsilon(t)$. Case 3: $t\in B_-(y,u)$. In this case, it follows from the continuity of $y$ that, after possibly making $\varepsilon$ smaller, we have $J_\varepsilon(t) \subset I(y)\cup B_-(y,u)$ and $y \le r-\gamma$ on $J_\varepsilon(t)$ for some $\gamma > 0$. Due to the definition of $K^\ptw_{\crit}(y,u)$, this implies in particular that $z(t) \in K^\ptw_{\crit}(y,u)(t)= [0, \infty) \subset K^\ptw_{\crit}(y,u)(s)$ for all $s \in J_\varepsilon(t)$. 
In all of the above cases, the resulting $\varepsilon > 0$ satisfies $z(t) = \zeta(s) \in K^\ptw_{\crit}(y,u)(s)$ and $(y + \alpha \zeta)(s) \in Z$ for all $s\in J_\varepsilon(t)$ and all $0 < \alpha \leq \gamma \|\zeta\|_\infty^{-1}$. Since $\zeta(s) = 0$ for $s\notin J_\varepsilon(t)$, these inclusions for $\zeta$ are also true for all $s \in [0,T]$. This proves \ref{lemma:polyprep1:ii}. Consider now a function $0 \leq \psi\in C_c^\infty(\R)$ with $\supp(\psi)\subset (t-\varepsilon,t+\varepsilon)$. Then $\psi\zeta\in C^\infty[0,T]$ and it follows from the nonnegativity of $\psi$, the properties of $\zeta$, the cone property of $K^\ptw_{\crit}(y,u)(s)$, and \eqref{eq:randomeq3746} that $(\psi\zeta)(0) = 0$ holds and that $(\psi\zeta)(s)\in K^\ptw_{\crit}(y,u)(s)$ and $(y+\alpha\psi\zeta)(s)\in Z$ for all $s\in [0,T]$ and all $0 < \alpha \leq \gamma \|\psi \|_\infty^{-1}\|\zeta\|_\infty^{-1}$. This shows \smash{$\psi \zeta \in \KK_{C^\infty}^{\rad,\crit}(y,u)$}, establishes \ref{lemma:polyprep1:iii}, and completes the proof. \end{proof} The next lemma is a version of \cref{lemma:polyprep1} for points of discontinuity. \begin{lemma}\label{lemma:polyprep2}Suppose that $z \in \KK_{G_r}^{\red,\crit}(y,u)$ and $\xi > 0$ are given. Let $t \in [0, T]$ be an arbitrary but fixed point of discontinuity of $z$, i.e., a point with $z(t) \neq z(t-)$. Then there exists $\varepsilon > 0$ such that the step function \[ \zeta \colon [0,T] \to \R, \qquad \zeta (s) := z(t-)\mathds{1}_{J_\varepsilon^-(t)}(s), \quad J_\varepsilon^-(t) := [t-\varepsilon,t) \cap [0,T], \] possesses the following properties: \begin{enumerate}[label=\roman*)] \item\label{lemma:polyprep2:i} It is true that \[ \sup_{s \in [t-\varepsilon, t + \varepsilon] \cap [0,T]} \left | z(s) - \zeta(s) \right | \leq \xi. \] \item\label{lemma:polyprep2:ii} It holds \[ \zeta(s) \in K^\ptw_{\crit}(y,u)(s) \qquad \forall s \in [0,T]. 
\] \item\label{lemma:polyprep2:iii} For every $0 \leq \psi \in C_c^\infty(\R)$ with support $\supp(\psi) \subset (t-\varepsilon, t)$, the function $\psi \zeta \in G[0,T]$ is an element of the cone $\KK_{C^\infty}^{\rad,\crit}(y,u)$. \end{enumerate} \end{lemma} \begin{proof} Since $z\in\KK_{G_r}^{\red,\crit}(y,u)$, it necessarily holds $t > 0$ and $z(t) = 0$. As $z$ is right-continuous, this implies that there exists $\varepsilon > 0$ such that \ref{lemma:polyprep2:i} is satisfied. Moreover, $z(t-) \neq 0$ because $z$ is assumed to be discontinuous at $t$. Since $z = 0$ on $ A(y,u)$, it follows that, for a potentially smaller $\varepsilon$, we have \begin{equation}\label{eq:polyprep2} J_\varepsilon^-(t) \subset \big ( I(y) \cup B(y,u) \big ) \cap (0, T]. \end{equation} We now again distinguish between three cases. Case 1: After possibly making $\varepsilon$ smaller, we have $J_\varepsilon^-(t)\subset I(y)$. In this case, it holds $K^\ptw_{\crit}(y,u)(s)= \R$ for all $s\in J_\varepsilon^-(t)$ and it follows from the continuity of $y$ that, for every compact set $E \subset J_\varepsilon^-(t)$, we can find a number $\gamma > 0$ with $|y| \le r-\gamma$ on $E$. Case 2: There exists a sequence $\{s_n\}\subset B_+(y,u)$ with $s_n\to t^-$. In this case, we have $y(s_n) = r$ and $z(s_n)\in K^\ptw_{\crit}(y,u)(s_n)= (-\infty,0]$ for all $n$ and it follows that $y(t) = r$ and $z(t-)\le 0$. Due to the continuity of $y$ and \eqref{eq:polyprep2}, this implies that, after possibly making $\varepsilon$ smaller, we have $J_\varepsilon^-(t) \subset I(y) \cup B_+(y,u)$. In particular, it holds $z(t-) \in (-\infty,0] \subset K^\ptw_{\crit}(y,u)(s)$ for all $s\in J_\varepsilon^-(t)$ and, for all compact $E \subset J_\varepsilon^-(t)$, we can find a number $\gamma > 0$ with $y \ge -r + \gamma$ on $E$. Case 3: There exists a sequence $\{s_n\}\subset B_-(y,u)$ with $s_n\to t^-$. 
In this case, we have $y(s_n) = -r$ and $z(s_n)\in K^\ptw_{\crit}(y,u)(s_n)= [0, \infty)$ for all $n$ and it follows that $y(t) = -r$ and $z(t-)\ge 0$. Due to the continuity of $y$ and \eqref{eq:polyprep2}, this implies that, after possibly making $\varepsilon$ smaller, we have $J_\varepsilon^-(t) \subset I(y) \cup B_-(y,u)$. In particular, it holds $z(t-) \in [0, \infty) \subset K^\ptw_{\crit}(y,u)(s)$ for all $s\in J_\varepsilon^-(t)$ and, for all compact $E \subset J_\varepsilon^-(t)$, we can find a number $\gamma > 0$ with $y \le r - \gamma$ on $E$. In all of the above cases, the resulting $\varepsilon > 0$ satisfies $z(t-) = \zeta(s) \in K^\ptw_{\crit}(y,u)(s)$ for all $s\in J_\varepsilon^-(t)$. Since $\zeta(s) = 0$ for $s\notin J_\varepsilon^-(t)$, this proves \ref{lemma:polyprep2:ii}. Moreover, we obtain from the above construction that, for every compact set $E \subset J_\varepsilon^-(t)$, there exists a number $\gamma > 0$ with $(y + \alpha \zeta)(s) \in Z$ for all $s \in E$ and all $0 < \alpha \leq \gamma \|\zeta\|_\infty^{-1}$. If a function $\psi\in C_c^\infty(\R)$ with $\psi\ge 0$ and support $E := \supp(\psi)\subset (t-\varepsilon,t)$ is given, then this implies that $(y+\alpha\psi\zeta)(s)\in Z$ holds for all $s\in [0,T]$ and all $0 < \alpha \leq \gamma \|\psi\|_\infty^{-1}\|\zeta\|_\infty^{-1}$. Due to the nonnegativity of $\psi$, the properties of $\zeta$, and the cone property of $K^\ptw_{\crit}(y,u)(s)$, one further obtains that $(\psi\zeta)(s)\in K^\ptw_{\crit}(y,u)(s)$ holds for all $s\in [0,T]$, and due to \eqref{eq:polyprep2} and the properties of $\supp(\psi)$, that $(\psi\zeta)(0) = 0$ and $\psi \zeta \in C^\infty[0,T]$. Thus, \smash{$\psi \zeta \in \KK_{C^\infty}^{\rad,\crit}(y,u)$}. This establishes \ref{lemma:polyprep2:iii} and completes the proof. \end{proof} We can now prove the main result of this section. \begin{theorem}[temporal polyhedricity]\label{theorem:tempoly}Let $z \in \KK_{G_r}^{\red,\crit}(y,u)$ be given.
Then there exist functions $z_{i,j}, z_j \in G_r[0,T]$, $i,j \in \mathbb{N}$, such that the following is true: \begin{equation*} \begin{gathered} z_{i,j} \in \KK_{C^\infty}^{\rad,\crit}(y,u), \qquad \|z_{i,j}\|_\infty \leq \|z\|_\infty~\forall i,j, \\ z_{j} \in \KK_{G_r}^{\red,\crit}(y,u), \qquad \|z_{j}\|_\infty \leq \|z\|_\infty~\forall j, \\ z_{i,j} \to z_j \text{ pointwise in } [0,T] \text{ for } i \to \infty \text{ for all }j, \\ z_j \to z \text{ uniformly in } [0,T] \text{ for } j \to \infty. \end{gathered} \end{equation*} \end{theorem} \begin{proof} Consider an arbitrary but fixed $j \in \mathbb{N}$ and define $\xi := 1/j$. For every $t \in [0,T]$, we choose $\varepsilon_t > 0$ for this $\xi$ as in \cref{lemma:polyprep1,lemma:polyprep2}. This results in a collection of open intervals $(t - \varepsilon_t, t + \varepsilon_t)$ that covers $[0,T]$. By compactness, we can choose a finite subcover of this collection. We denote the time points of this cover with $t_k$, $k=1,...,N$, $N \in \mathbb{N}$, and the associated $\varepsilon_{t_k}$ with $\varepsilon_k$, $k=1,...,N$. We assume w.l.o.g.\ that there are no $k,l$ satisfying $(t_k - \varepsilon_k, t_k + \varepsilon_k) \subset (t_l - \varepsilon_l, t_l + \varepsilon_l)$ and $k \neq l$. In this case, by possibly making the intervals $(t_k - \varepsilon_k, t_k + \varepsilon_k)$ smaller, we can construct intervals $(t_k - a_k, t_k + b_k)$, $a_k, b_k > 0$, such that \begin{equation*} \begin{gathered} (t_k - a_k, t_k + b_k) \subset (t_k - \varepsilon_k, t_k + \varepsilon_k) ~\forall k = 1,...,N, \qquad [0,T] \subset \bigcup_{k=1}^N (t_k - a_k, t_k + b_k), \\ \text{and } t_k \notin (t_l - a_l, t_l+ b_l)~\forall k \neq l. 
\end{gathered} \end{equation*} Consider now a smooth partition of unity on $[0,T]$ subordinate to the modified cover $(t_k - a_k, t_k + b_k)$, $k=1,...,N$, i.e., a collection of functions $\psi_k$, $k=1,...,N$, satisfying \begin{equation*} \begin{gathered} \psi_k \in C_c^\infty(\R),\quad 0 \leq \psi_k(t) \leq 1~\forall t \in \R, \quad \supp(\psi_k) \subset (t_k - a_k, t_k + b_k) ~\forall k=1,...,N, \\ \sum_{k=1}^N \psi_k(t) = 1~\forall t \in [0,T], \end{gathered} \end{equation*} see, e.g., \cite{Evans2010}, and choose an arbitrary but fixed function $\varphi \in C^\infty(\R)$ satisfying \[ 0 \leq \varphi(t) \leq 1~\forall t \in \R,\quad \varphi(t) = 1~\forall t \in (-\infty, -1],\quad \varphi(t) = 0~\forall t \in [0,\infty). \] Define \begin{equation*} z_{i,j}(s) := \sum_{k \colon z(t_k)= z(t_k-)} z(t_k) \psi_k(s) + \sum_{k \colon z(t_k) \neq z(t_k-)} z(t_k-) \psi_k(s) \varphi\left ( \frac{s - t_k + 1/i}{1/i} \right ) \end{equation*} for all $i \in \mathbb{N}$ and $s \in [0,T]$. We claim that $z_{i,j} \in \KK_{C^\infty}^{\rad,\crit}(y,u)$ holds for all $i \in \mathbb{N}$. To see this, we first note that we have \[ z(t_k) \psi_k(\cdot)\big |_{[0,T]} \in \KK_{C^\infty}^{\rad,\crit}(y,u)\quad \forall k\colon z(t_k)= z(t_k-) \] by \cref{lemma:polyprep1}\ref{lemma:polyprep1:iii} and the condition $\supp(\psi_k) \subset (t_k - a_k, t_k + b_k) \subset (t_k - \varepsilon_k, t_k + \varepsilon_k)$ for all $k$. Analogously, we also have \[ z(t_k-) \psi_k(\cdot) \varphi\left ( \frac{\cdot - t_k + 1/i}{1/i} \right )\Bigg |_{[0,T]} \in \KK_{C^\infty}^{\rad,\crit}(y,u)\quad \forall k\colon z(t_k)\neq z(t_k-) \] by the properties of $\psi_k$ and $\varphi$ and \cref{lemma:polyprep2}\ref{lemma:polyprep2:iii}. By combining these facts with the observation that $\smash{ \KK_{C^\infty}^{\rad,\crit}(y,u)}$ is a convex cone, the inclusion $z_{i,j} \in \KK_{C^\infty}^{\rad,\crit}(y,u)$ follows immediately. 
Due to the properties of $\varphi$, we further have \[ z_{i,j}(s) \to \sum_{k \colon z(t_k)= z(t_k-)} z(t_k) \psi_k(s) + \sum_{k \colon z(t_k) \neq z(t_k-)} z(t_k-) \psi_k(s) \mathds{1}_{(-\infty, t_k)}(s) \] for all $s \in [0,T]$ for $i \to \infty$. Let us denote the function on the right of the last limit with $z_j$. By construction, the points of discontinuity of this function $z_j$ are precisely the points $t_k$ with $z(t_k) \neq z(t_k-)$. Further, at these points, the function $z_j$ is clearly right-continuous and, by the choice of the functions $\psi_k$ and the condition $ t_k \notin (t_l - a_l, t_l+ b_l)$ for all $k \neq l$, we have \[ z_j(t_k) = z(t_k-) \psi_k(t_k) \mathds{1}_{(-\infty, t_k)}(t_k) = 0 \] for all $k$ with $z(t_k) \neq z(t_k-)$. In combination with the choice of the functions $\psi_k$, this yields $z_j \in G_r[0,T]$, $z_j(t) = z_j(t+) = 0$ for all $t \in [0,T]$ with $z_j(t) \neq z_j(t-)$, and $z_j(0) = 0$. Due to the properties of $\psi_k$, the inclusion $(t_k - a_k, t_k + b_k) \subset (t_k - \varepsilon_k, t_k + \varepsilon_k)$ for all $k$, the second points of \cref{lemma:polyprep1,lemma:polyprep2}, and the fact that $K^\ptw_{\crit}(y,u)(s)$ is a convex cone for all $s \in [0,T]$, we also have $\smash{z_j(s) \in K^\ptw_{\crit}(y,u)(s)}$ for all $s \in [0,T]$. In summary, this allows us to conclude that $z_j \in \KK_{G_r}^{\red,\crit}(y,u)$ holds as desired. It remains to establish the uniform convergence of $z_j$ to $z$ for $j \to \infty$. To this end, we note that, due to the properties of the partition of unity $\{\psi_k\}$, we have \begin{equation*} \begin{aligned} &\sup_{s \in [0,T]}\left | z(s) - z_j(s) \right | \\ &= \sup_{s \in [0,T]}\left | z(s) - \sum_{k \colon z(t_k)= z(t_k-)} z(t_k) \psi_k(s) - \sum_{k \colon z(t_k) \neq z(t_k-)} z(t_k-) \psi_k(s) \mathds{1}_{(-\infty, t_k)}(s) \right | \\ &= \sup_{s \in [0,T]}\left | \sum_{k \colon z(t_k)= z(t_k-)} \left (z(s) - z(t_k) \right ) \psi_k(s) \right. \\ &\qquad\qquad\quad \left. 
+ \sum_{k \colon z(t_k) \neq z(t_k-)} \Big( z(s) - z(t_k-) \mathds{1}_{(-\infty, t_k)}(s)\Big )\psi_k(s) \right | \\ &\leq \sup_{s \in [0,T]} \left ( \sum_{k \colon z(t_k)= z(t_k-)} \sup_{\tau \in [t_k - \varepsilon_k, t_k + \varepsilon_k] \cap [0,T]} \left | z(\tau) - z(t_k) \right | \psi_k(s) \right. \\ &\qquad\qquad\quad + \left. \sum_{k \colon z(t_k) \neq z(t_k-)} \sup_{\tau \in [t_k - \varepsilon_k, t_k + \varepsilon_k] \cap [0,T]} \Big| z(\tau) - z(t_k-) \mathds{1}_{(-\infty, t_k)}(\tau)\Big | \psi_k(s) \right ). \end{aligned} \end{equation*} Due to the inequalities in \cref{lemma:polyprep1}\ref{lemma:polyprep1:i} and \cref{lemma:polyprep2}\ref{lemma:polyprep2:i}, our choice $\xi = 1/j$, and the properties of $\psi_k$, the last estimate yields \[ \sup_{s \in [0,T]}\left | z(s) - z_j(s) \right | \leq \sup_{s \in [0,T]} \left ( \sum_{k \colon z(t_k)= z(t_k-)} \frac{\psi_k(s)}{j} + \sum_{k \colon z(t_k) \neq z(t_k-)} \frac{\psi_k(s)}{j} \right ) = \frac{1}{j}. \] This shows that the sequence $\{z_j\}$ indeed converges uniformly to $z$ for $j \to \infty$. That we have $\|z_{i,j}\|_\infty \leq \|z\|_\infty$ and $\|z_{j}\|_\infty \leq \|z\|_\infty$ follows immediately from our construction and the properties of $\psi_k$ and $\varphi$. This completes the proof. \end{proof} Note that, to be able to establish that $\smash{\KK_{C^\infty}^{\rad,\crit}(y,u)}$ is dense in $\smash{\KK_{G_r}^{\red,\crit}(y,u)}$, one necessarily has to consider a type of convergence weaker than uniform convergence since otherwise it is not possible to leave the space $C[0,T] \supset \smash{\KK_{C^\infty}^{\rad,\crit}(y,u)}$. 
This is a major difference between the temporal polyhedricity result in \cref{theorem:tempoly} and the classical notion of polyhedricity for the elliptic obstacle problem in \eqref{eq:optstaclecontrol} which yields the density of the set of critical radial directions $ K_{\rad}(y) \cap (u + \Delta y)^\perp$ in the critical cone $K_{\tan}(y) \cap (u + \Delta y)^\perp$ in $(H_0^1(\Omega), \|\cdot\|_{H_0^1})$ and thus in the topology that is natural for the underlying variational inequality. For \eqref{eq:V}, this natural choice of the topology would be that of uniform convergence as the Lipschitz estimate \eqref{eq;LipschitzSinfty} shows. Before we apply \cref{theorem:tempoly} to derive strong stationarity conditions for \eqref{eq:P}, we prove a further auxiliary result. \begin{lemma}\label{lemma:polarconeatoms}Suppose that $t \in [0,T]$ is given and let $c \in \R$ be an element of the polar cone $K^\ptw_{\crit}(y,u)(t)^\circ$, i.e., the set \begin{equation} \label{eq:ptwnormalconedef} K^\ptw_{\crit}(y,u)(t)^\circ := \begin{cases} \{0\} &\text{ if } t \in I(y), \\ [0, \infty) & \text{ if } t \in B_+(y,u), \\ (-\infty, 0]& \text{ if } t \in B_-(y,u), \\ \R &\text{ if } t \in A(y,u). \end{cases} \end{equation} Then there exists a sequence $\{h_i\} \subset C^\infty[0,T]$ such that the following holds: \begin{equation*} \begin{gathered} \|h_i\|_\infty \leq |c| \text{ and } \|\S'(u;h_i)\|_\infty \leq 2|c|~\forall i \in \mathbb{N}, \\ \S'(u;h_i)_+ \to 0 \text{ pointwise in } [0,T] \text{ for } i \to \infty, \\ h_i \to c \mathds{1}_{[t,T]} \text{ pointwise in } [0,T] \text{ for } i \to \infty. \end{gathered} \end{equation*} \end{lemma} \begin{proof} If $t \in I(y)$, then we necessarily have $c=0$ and we can simply choose the sequence $h_i = 0$ for all $i$. If $t=0$, then the sequence defined by $h_i = c$ for all $i$ satisfies all assertions because $\S'(u;c\mathds{1}_{[0,T]}) = \S'(u;c\mathds{1}_{[0,T]})_+ = 0$ by \cref{th:dirdiffVI} in view of \cref{eq:integralofconstant}. 
We may thus assume that \[ 0 < t \in B(y,u)\cup A(y,u). \] Consider an arbitrary but fixed function $\varphi$ with the following properties: \[ \varphi \in C^\infty(\R), \quad \varphi(s) = 0 ~\forall s \in (-\infty,-1], \quad \varphi(s) = 1 ~\forall s \in [0, \infty), \quad \varphi'(s) \geq 0~\forall s \in \R. \] We define $\{h_i\}$ via \[ h_i(s) := c \varphi\left ( \frac{s - t}{1/i}\right )\quad \forall s \in [0,T]\quad \forall i \in \mathbb{N}. \] This sequence clearly satisfies $\{h_i\} \subset C^\infty[0,T]$, $h_i(s) \to c \mathds{1}_{[t,T]}(s)$ for all $s \in [0,T]$ and $i \to \infty$, and $\|h_i\|_\infty = |c|$ for all $i$. Due to the Lipschitz estimate \eqref{eq;LipschitzSinfty}, this also implies that $\|\S'(u;h_i) \|_\infty \leq 2|c|$ holds for all $i$. It remains to establish the pointwise convergence of $ \S'(u;h_i)_+$ to zero. For this to hold, it suffices to prove that $\eta_i := \S'(u;h_i)_+$ satisfies $\eta_i = 0$ on $[0,t-1/i] \cup [t,T]$ for all $i$ with $1/i < t$. That $\eta_i$ vanishes on $[0,t-1/i]$ follows easily from the fact that $h_i$ is zero on $[0, t-1/i]$, \cref{prop:partialIntegration}, and \eqref{eq:dirdiffVI+s} with $z = 0$, $z = 2 \eta_i$, and $0 < s \leq t - 1/i$. Next, we prove that $\eta_i(t) = 0$ by distinguishing three cases. Case 1: $t\in A(y,u)$. In this case, we have $\eta_i(t) \in K^\ptw_{\crit}(y,u)(t) = \{0\}$. Case 2: $t\in B_+(y,u)$. In this case, we have $\eta_i(t) \in K^\ptw_{\crit}(y,u)(t) = (-\infty, 0]$, it holds $c \in [0, \infty)$, and $h_i$ is nondecreasing on $[0,T]$. By \cref{lemma:monotonicity}, this yields $\S(u+\alpha h_i) \geq \S(u)$ in $[0, T]$ for all $\alpha > 0$ and all $i \in \mathbb{N}$. Hence, $\S'(u;h_i) \geq 0$ in $[0,T]$ and, consequently, $\eta_i = \S'(u;h_i)_+ \geq 0$ in $[0,T]$. It follows that $\eta_i(t) = 0$. Case 3: $t\in B_-(y,u)$.
In this case, it holds $\eta_i(t) \in K^\ptw_{\crit}(y,u)(t) = [0, \infty)$ and $c \in (-\infty, 0]$, and we can proceed completely analogously to Case 2 (with reversed signs) to obtain that $\eta_i(t) = 0$. It remains to prove that $\eta_i = 0$ on $(t,T]$ if $t < T$. Let $\hat{\eta}_i := \mathds{1}_{[0,t]}\eta_i = \mathds{1}_{[0,t)}\eta_i$. By the definition of $h_i$, the function $\hat{\eta}_i - h_i$ has the constant value $-c$ on $[t,T]$. Using \cref{eq:decomposeInterval} combined with \cref{eq:integralofconstant}, we obtain that, for all $ z \in G\left ([0, T]; K^\ptw_{\crit}(y,u) \right)$, we have \begin{align*} \int_0^T (z - \hat{\eta}_i)\dd(\hat{\eta}_i - h_i) &= \int_0^t (z - \hat{\eta}_i)\dd(\hat{\eta}_i - h_i) + \int_t^T (z - \hat{\eta}_i)\dd(\hat{\eta}_i - h_i) \\ &= \int_0^t (z - \hat{\eta}_i)\dd(\hat{\eta}_i - h_i) \\ &= \int_0^t (z - \eta_i)\dd(\eta_i - h_i) \ge 0, \end{align*} where the last inequality holds by \cref{cor:dirdiffVI+}. Since $\hat{\eta}_i(s) \in K^\ptw_{\crit}(y,u)(s)$ for all $s\in [0,T]$, we conclude that $\hat{\eta}_i$ solves \cref{eq:dirdiffVI+} for $h = h_i$. As $\eta_i$ is the unique solution of \cref{eq:dirdiffVI+}, we must have $\hat{\eta}_i = \eta_i$. Thus, $\eta_i = 0$ on $(t,T]$ and the proof is complete. \end{proof} \section{Strong stationarity condition}\label{sec:7} We are now in the position to prove the strong stationarity system \eqref{eq:strongstatsys-2}. \begin{theorem}[strong stationarity]\label{th:main}Consider the situation in \cref{ass:standing} and suppose that $\bar u \in \Uad$ is a control with state $\bar y := \S(\bar u)$ such that the set $\R_+(\Uad - \bar u)$ is dense in $U$. 
Then $\bar u$ is a Bouligand stationary point of \eqref{eq:P}, i.e., satisfies \eqref{eq:Bouligand}, if and only if there exist an adjoint state $\bar p \in BV[0,T]$ and a multiplier $\bar \mu \in G_r[0,T]^*$ such that the following system is satisfied: \begin{equation} \label{eq:strongstatsys} \begin{gathered} \bar p(0) = \bar p(T) = 0, \quad \bar p(t) = \bar p(t-)~\forall t \in [0,T), \\ \bar p(t-) \in K^\ptw_{\crit}(\bar y, \bar u)(t)~\forall t \in [0,T], \\ \left \langle \bar \mu, z \right \rangle_{G_r} \geq 0\quad \forall z \in \KK_{G_r}^{\red,\crit}(\bar y, \bar u), \\ \int_0^T h \dd \bar p = \left \langle \partial_3 \JJ(\bar y, \bar y(T), \bar u), h \right \rangle_{U} ~\forall h \in U, \\ -\int_0^T z \dd \bar p = \left \langle \partial_1 \JJ(\bar y, \bar y(T), \bar u), z\right \rangle_{L^\infty} + \partial_2 \JJ(\bar y, \bar y(T), \bar u)z(T) - \left \langle \bar \mu, z \right \rangle_{G_r} \\ \hspace{8.5cm}\forall z \in G_r[0,T]. \end{gathered} \end{equation} \end{theorem} \begin{proof} We begin with the proof of the implication ``\eqref{eq:Bouligand} $\Rightarrow$ \eqref{eq:strongstatsys}'': Suppose that a control $\bar u \in \Uad$ with state $\bar y := \S(\bar u)$ is given such that the set $\R_+(\Uad - \bar u)$ is dense in $U$ and such that \eqref{eq:Bouligand} holds. Then it follows from \eqref{eq:Bouligand}, the fact that \eqref{eq;LipschitzSinfty} implies that $ \|\S'(\bar u;h_1) - \S'(\bar u; h_2)\|_\infty \leq 2\|h_1 - h_2\|_\infty $ holds for all $h_1, h_2 \in CBV[0,T]$, the inclusion $U \subset CBV[0,T]$, and the continuity of the embedding $U \hookrightarrow C[0,T]$ that \begin{equation} \label{eq:Bouligandstat2} \begin{aligned} \left \langle \partial_1 \JJ(\bar y, \bar y(T), \bar u), \S'(\bar u; h) \right \rangle_{L^\infty} &+ \partial_2 \JJ(\bar y, \bar y(T), \bar u)\S'(\bar u; h)(T) \\ &+ \left \langle \partial_3 \JJ(\bar y, \bar y(T), \bar u), h \right \rangle_{U} \geq 0 \qquad \forall h \in U.
\end{aligned} \end{equation} Again due to \eqref{eq;LipschitzSinfty} and since $-h \in U$ holds for all $h \in U$, \eqref{eq:Bouligandstat2} yields \[ \left | \left \langle \partial_3 \JJ(\bar y, \bar y(T), \bar u), h \right \rangle_{U} \right | \leq 2 \left ( \left \| \partial_1 \JJ(\bar y, \bar y(T), \bar u)\right \|_{L^1} + \left | \partial_2 \JJ(\bar y, \bar y(T), \bar u) \right | \right )\|h\|_{\infty} \] for all $h \in U$. In combination with the Hahn-Banach theorem, this shows that the linear functional $U \ni h \mapsto \left \langle \partial_3 \JJ(\bar y, \bar y(T), \bar u), h \right \rangle_{U} \in \R$ can be extended to an element of the dual space $C[0,T]^*$. In view of the classical Riesz representation theorem (see, e.g., \cite[section 8.1]{Monteiro2019}) and \cref{lemma:intgzeroexcept}, this means that there exists a function $\bar p \in BV[0,T]$ satisfying $\bar p(t) = \bar p(t-)$ for all $t \in (0,T)$, $\bar p(T) = 0$, and \[ \int_0^T h \dd \bar p = \left \langle \partial_3 \JJ(\bar y, \bar y(T), \bar u), h \right \rangle_{U} ~\forall h \in U. \] Since $\partial_1 \JJ(\bar y, \bar y(T), \bar u) \in L^1(0,T)$, $\S'(\bar u; h) = \S'(\bar u; h)_+$ a.e.\ in $(0,T)$ by \cite[Theorem 2.3.2]{Monteiro2019}, and $\S'(\bar u; h)(T) = \S'(\bar u; h)(T+) = \S'(\bar u; h)_+(T)$ by definition, we may now rewrite \eqref{eq:Bouligandstat2} as follows: \begin{equation} \label{eq:randomeq263536} \begin{aligned} &\int_0^T h \dd \bar p + \left \langle \partial_1 \JJ(\bar y, \bar y(T), \bar u), \S'(\bar u; h)_+ \right \rangle_{L^\infty} + \partial_2 \JJ(\bar y, \bar y(T), \bar u)\S'(\bar u; h)_+(T) \geq 0 \\ &\hspace{10.5cm} \forall h \in U. \end{aligned} \end{equation} Note that, again due to the Lipschitz estimate $\|\S'(\bar u;h_1) - \S'(\bar u; h_2)\|_\infty \leq 2\|h_1 - h_2\|_\infty$ for $h_1, h_2 \in CBV[0,T]$ and since $U$ is dense in $C[0,T]$, \eqref{eq:randomeq263536} remains valid when the test space $U$ is replaced by $CBV[0,T]$. 
We define $\bar \mu \in G_r[0,T]^*$ via \[ \left \langle \bar \mu, z \right \rangle_{G_r} := \int_0^T z \dd \bar p + \left \langle \partial_1 \JJ(\bar y, \bar y(T), \bar u), z \right \rangle_{L^\infty} + \partial_2 \JJ(\bar y, \bar y(T), \bar u)z(T) \quad \forall z \in G_r[0,T]. \] Then the last line in \eqref{eq:strongstatsys} holds, and it follows from \eqref{eq:randomeq263536} with test space $CBV[0,T]$ and \cref{lemma:dirdifcritrad} that \[ \int_0^T z \dd \bar p + \left \langle \partial_1 \JJ(\bar y, \bar y(T), \bar u), z \right \rangle_{L^\infty} + \partial_2 \JJ(\bar y, \bar y(T), \bar u)z(T) \geq 0 \quad \forall z \in \KK_{C^\infty}^{\rad,\crit}(\bar y, \bar u). \] Due to \cref{theorem:tempoly} and the bounded convergence theorem (\cref{th:boundedconv}), we can extend the last inequality to the set $\KK_{G_r}^{\red,\crit}(\bar y, \bar u)$ by approximation, i.e., we have \[ \int_0^T z \dd \bar p + \left \langle \partial_1 \JJ(\bar y, \bar y(T), \bar u), z \right \rangle_{L^\infty} + \partial_2 \JJ(\bar y, \bar y(T), \bar u)z(T) = \left \langle \bar \mu, z \right \rangle_{G_r} \geq 0 \] for all $z \in \KK_{G_r}^{\red,\crit}(\bar y, \bar u)$. This proves the third line in \cref{eq:strongstatsys}. It remains to establish the pointwise properties of $\bar p$ in \cref{eq:strongstatsys}. To this end, we again use that \cref{cor:dirdiffVI+} and \cref{eq:integralofconstant} imply that $\S'(\bar u; c\mathds{1}_{[0,T]})_+ = 0$ holds for all $c \in \R$. By \eqref{eq:randomeq263536} with test space $CBV[0,T]$, this yields \begin{equation*} \begin{aligned} 0 \leq c\int_0^T \dd \bar p = c\left ( \bar p(T) - \bar p(0)\right )\quad \forall c \in \R. \end{aligned} \end{equation*} Thus, $\bar p(0) = \bar p(T)$. Since $\bar p(T) = 0$ and $\bar p(t) = \bar p(t-)$ for all $t \in (0,T)$, and since $\bar p(0) = \bar p(0-)$ holds by definition, this establishes the first line of \cref{eq:strongstatsys}. 
Next, by invoking \cref{lemma:polarconeatoms}, by setting $h = h_i$ in \eqref{eq:randomeq263536} with test space $CBV[0,T]$, and by passing to the limit $i\to\infty$ by means of \cref{th:boundedconv} and the dominated convergence theorem, we obtain that, for every $t \in [0,T]$ and every $c \in K^\ptw_{\crit}(\bar y,\bar u)(t)^\circ$, we have \begin{equation} \label{eq:randomeq2635} 0 \leq \int_0^T c \mathds{1}_{[t,T]} \dd \bar p = c \left ( \bar p(T) - \bar p(t-) \right ) = - c \bar p(t-). \end{equation} Here, the last two equations follow from \cite[Lemma 6.3.3]{Monteiro2019} and the identity $\bar p(T) = 0$. By using the definition \eqref{eq:ptwnormalconedef} of the polar cone $K^\ptw_{\crit}(\bar y, \bar u)(t)^\circ$ in \eqref{eq:randomeq2635}, one readily obtains that $\bar p(t-) \in K^\ptw_{\crit}(\bar y, \bar u) (t)$ holds for all $t \in [0,T]$. This establishes the second line in \eqref{eq:strongstatsys} and proves, in combination with the previous steps, that the strong stationarity system \eqref{eq:strongstatsys} is indeed a necessary condition for Bouligand stationarity. Next, we prove the implication ``\eqref{eq:strongstatsys} $\Rightarrow$ \eqref{eq:Bouligand}''. Suppose that $\bar u \in \Uad$ is a control with state $\bar y := \S(\bar u)$ such that there exist $\bar p \in BV[0,T]$ and $\bar \mu \in G_r[0,T]^*$ satisfying \eqref{eq:strongstatsys}. Assume further that a direction $h \in U$ is given and define $\eta := \S'(\bar u;h)_+$. 
Then it follows from the properties of $\bar p$, \eqref{eq:dirdiffVI+} with $z := \eta + \bar p$, and the integration by parts formula for the Kurzweil-Stieltjes integral \cite[Theorem 6.4.2]{Monteiro2019} that \begin{equation*} \begin{aligned} 0 \leq \int_0^T \bar p\dd(\eta- h) &= \int_0^T (h - \eta) \dd \bar p + \bar p(T)(\eta- h)(T) - \bar p(0)(\eta- h)(0) \\ &\qquad + \sum_{t \in [0,T]} \left ( \bar p(t) - \bar p(t-)\right ) \left ( (\eta- h)(t) - (\eta- h)(t-) \right ) \\ &\qquad - \sum_{t \in [0,T]} \left ( \bar p(t) - \bar p(t+)\right ) \left ( (\eta- h)(t) - (\eta- h)(t+) \right ). \end{aligned} \end{equation*} Due to the identities $\bar p(0) = \bar p(T) = 0$ and $\eta(0) = \eta(0-) = 0$ and due to the left- and right-continuity properties of $\bar p$, $h$, and $\eta = \S'(\bar u;h)_+$, the last estimate simplifies to \[ 0 \leq \int_0^T (h - \eta) \dd \bar p - \bar p(T-) \left ( \eta(T) - \eta(T-) \right ). \] Note that \eqref{eq:dirdiffjumps1}, $\bar p(T-) \in K^\ptw_{\crit}(\bar y,\bar u) (T)$, and the convention $\eta(T) = \eta(T+)$ imply that $\bar p(T-) ( \eta(T) - \eta(T-) ) = \bar p(T-) ( \eta(T+) - \eta(T-) ) \geq 0$ holds. We thus obtain \[ 0 \leq \int_0^T (h - \eta) \dd \bar p = \int_0^T h \dd \bar p - \int_0^T \eta \dd \bar p, \] and, by the last three lines of \cref{eq:strongstatsys} and the properties of $\eta$, \begin{equation*} \begin{aligned} 0 &\leq \left \langle \partial_3 \JJ(\bar y, \bar y(T), \bar u), h \right \rangle_{U} + \left \langle \partial_1 \JJ(\bar y, \bar y(T), \bar u), \eta \right \rangle_{L^\infty} + \partial_2 \JJ(\bar y, \bar y(T), \bar u)\eta(T) - \left \langle \bar \mu, \eta \right \rangle_{G_r} \\ &\leq \left \langle \partial_3 \JJ(\bar y, \bar y(T), \bar u), h \right \rangle_{U} + \left \langle \partial_1 \JJ(\bar y, \bar y(T), \bar u), \eta\right \rangle_{L^\infty} + \partial_2 \JJ(\bar y, \bar y(T), \bar u)\eta(T). 
\end{aligned} \end{equation*} If we now exploit that $\partial_1 \JJ(\bar y, \bar y(T), \bar u) \in L^1(0,T)$, that $\S'(\bar u; h)(T) = \S'(\bar u; h)(T+)$, and that $\eta = \S'(\bar u;h)$ a.e., then \eqref{eq:Bouligand} follows. This completes the proof. \end{proof} Note that, in the case $T \in I(\bar y)$, there exists $m > 0$ such that the function $z(t) := c \mathds{1}_{[T -\varepsilon, T]}(t) (t - T + \varepsilon)/\varepsilon$ is an element of \smash{$\KK_{G_r}^{\red,\crit}(\bar y, \bar u)$} for all $c \in \R$ and all $0 < \varepsilon < m$. For such a function $z$, the third line of \eqref{eq:strongstatsys} becomes $\left \langle \bar \mu, z \right \rangle_{G_r} = 0$. Using this in the fifth line of \eqref{eq:strongstatsys} and subsequently passing to the limit $\varepsilon \to 0^+$ by means of \cref{th:boundedconv} yields, due to the $L^1$-regularity of $\partial_1 \JJ(\bar y, \bar y(T), \bar u)$ and \cref{eq:singlepointmass}, that \[ -c \int_0^T \mathds{1}_{\{T\}} \dd \bar p = -c (\bar p(T) - \bar p(T-)) = \partial_2 \JJ(\bar y, \bar y(T), \bar u)c\qquad \forall c \in \R. \] Thus, $\bar p(T-) - \bar p(T) = \partial_2 \JJ(\bar y, \bar y(T), \bar u)$ and we obtain that the partial derivative $\partial_2 \JJ(\bar y, \bar y(T), \bar u) \in \R$ determines the jump of $\bar p$ at $T$, as mentioned in \cref{sec:1}. We remark that, by redefining $\bar p$, this implicit jump condition on the adjoint state in \eqref{eq:strongstatsys} can also be transformed into a condition on the function value at $T$.
Indeed, by introducing the modified adjoint state $\bar q := \bar p + \partial_2 \JJ(\bar y, \bar y(T), \bar u)\mathds{1}_{\{T\}} \in BV[0,T]$, by using the integration by parts formula in \cite[Theorem 6.4.2]{Monteiro2019} in the fourth line of \eqref{eq:strongstatsys}, and by employing \eqref{eq:singlepointmass} and \cite[Lemma 6.3.2]{Monteiro2019}, one easily checks that the strong stationarity system in \cref{th:main} can also be formulated as follows: \begin{equation*} \begin{gathered} \bar q(0) = 0, \quad \bar q(T) = \partial_2 \JJ(\bar y, \bar y(T), \bar u), \quad \bar q(t) = \bar q(t-)~\forall t \in [0,T), \\ \bar q(t-) \in K^\ptw_{\crit}(\bar y, \bar u)(t)~\forall t \in [0,T], \\ \left \langle \bar \mu, z \right \rangle_{G_r} \geq 0\quad \forall z \in \KK_{G_r}^{\red,\crit}(\bar y, \bar u) , \\ - \int_0^T \bar q \dd h = \left \langle \partial_3 \JJ(\bar y, \bar y(T), \bar u), h \right \rangle_{U} ~\forall h \in U, \\ -\int_0^T z \dd \bar q = \left \langle \partial_1 \JJ(\bar y, \bar y(T), \bar u), z\right \rangle_{L^\infty} - \left \langle \bar \mu, z \right \rangle_{G_r}\quad \forall z \in G_r[0,T]. \end{gathered} \end{equation*} Regarding the assumption that the set $\R_+(\Uad - \bar u)$ is dense in $U$, we would like to point out that this so-called ``ample control'' condition in \cref{th:main} is rather restrictive and rarely satisfied if $\Uad \neq U$. Using techniques from \cite{Wachmuth2014}, it might be possible to establish a strong stationarity system for \eqref{eq:P} also under weaker assumptions on the control constraints. We leave this topic for future research. \appendix \section{Results on the Kurzweil-Stieltjes integral}\label{sec:appendix}Let $a,b \in \R$ with $a < b$ be given. For $f, g \in G[a,b]$, the Kurzweil-Stieltjes integral with \emph{integrand} $f$ and \emph{integrator} $g$ exists if at least one of the functions $f$ and $g$ has bounded variation, see \cite[Theorem 6.3.11]{Monteiro2019}. 
In this case, it yields a real number which we denote by \[ \int_a^b f\dd g \qquad \text{or} \qquad \int_a^b f(t)\dd g(t). \] The Kurzweil-Stieltjes integral coincides with the Riemann-Stieltjes integral whenever the latter exists, see \cite[Theorem 6.2.12]{Monteiro2019}. This holds in particular if $f\in C[a,b]$ and $g\in BV[a,b]$, see \cite[Theorem 5.6.1]{Monteiro2019}. If $c\in\R$ is interpreted as a constant function, then it holds \begin{equation}\label{eq:integralofconstant} \int_a^b c\dd g = c(g(b) - g(a)) \qquad \text{and} \qquad \int_a^b f\dd c = 0 \end{equation} for all $f, g \in G[a,b]$, see \cite[Remark 6.3.1]{Monteiro2019}. The Kurzweil-Stieltjes integral is linear w.r.t.\ the integrand $f$ and w.r.t.\ the integrator $g$, see \cite[Theorem 6.2.7]{Monteiro2019}. Further, for all $c\in (a,b)$, it holds \begin{equation}\label{eq:decomposeInterval} \int_a^b f\dd g = \int_a^c f\dd g + \int_c^b f\dd g \end{equation} provided the first integral exists, see \cite[Theorems 6.2.9, 6.2.10]{Monteiro2019}. For $t\in [a,b]$ and $g \in G[a,b]$, we have (see \cite[Lemma 6.3.3]{Monteiro2019}) \begin{equation}\label{eq:singlepointmass} \int_a^b \mathds{1}_{\{t\}}\dd g = g(t+) - g(t-) \end{equation} with the conventions $g(b+) := g(b)$ and $g(a-) := g(a)$. In particular, the integral in \eqref{eq:singlepointmass} equals zero if $g$ is continuous at $t$. \begin{lemma}\label{lemma:subintervals} Let $f\in G[a,b]$, $g\in BV_r[a,b]$, $a\le s < \tau\le b$, and $J := (s,\tau]$. Then \begin{equation}\label{eq:subintervals} \int_s^\tau f\dd g = \int_a^b \mathds{1}_J f\dd g. \end{equation} If $g\in CBV[a,b]$, then \cref{eq:subintervals} is also true for $J = [s,\tau]$, $J = (s,\tau)$, and $J = [s,\tau)$. \end{lemma} \begin{proof} This is a special case of \cite[Theorem 6.9.7]{Monteiro2019}. 
\end{proof} \begin{lemma}\label{lemma:intgzeroexcept} Let $g\in BV[a,b]$ be given such that $g(a) = g(b) = 0$ holds and such that the set $\{t\in [a,b]\colon g(t)\neq 0\}$ is finite or countably infinite. Then \[ \int_a^b f\dd g = 0 \qquad \forall f\in G[a,b]. \] \end{lemma} \begin{proof} This is a special case of \cite[Lemma 6.3.15]{Monteiro2019}. \end{proof} \begin{proposition}\label{prop:partialIntegration} Let $g\in BV_r[a,b]$. Then \begin{equation}\label{partialIntegration} \int_a^b g\dd g = \frac{1}{2}(g(b)^2 - g(a)^2) + \frac{1}{2} \sum_{t\in [a,b]} (g(t) - g(t-))^2. \end{equation} \end{proposition} \begin{proof} This is a special case of \cite[Corollary 2.12]{Krejci2003} or of \cite[Corollary 1.13]{KrejciLiero2009}. \end{proof} \begin{theorem}[bounded convergence theorem]\label{th:boundedconv}Let $g\in BV[a,b]$, $f_n\in G[a,b]$ with $\sup_n \|f_n\|_\infty < \infty$ and $f_n\to f$ pointwise in $[a,b]$ be given. Then the integral $ \int_a^b f\dd g $ exists and it holds \[ \lim_{n\to\infty} \int_a^b f_n\dd g = \int_a^b f\dd g. \] \end{theorem} \begin{proof} This is a special case of \cite[Theorem 6.8.13]{Monteiro2019}. \end{proof} \bibliographystyle{siamplain} \bibliography{references} \end{document}
2205.01158v1
http://arxiv.org/abs/2205.01158v1
Reproducing Kernels and New Approaches in Compositional Data Analysis
\documentclass[aos]{imsart} \RequirePackage{amsthm,amsmath,amsfonts,amssymb} \RequirePackage[numbers]{natbib} \RequirePackage[colorlinks,citecolor=blue,urlcolor=blue]{hyperref} \RequirePackage{graphicx, subcaption} \RequirePackage{xypic} \usepackage{mathtools} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \usepackage[bottom]{footmisc} \startlocaldefs \newtheorem{axiom}{Axiom} \newtheorem{claim}[axiom]{Claim} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}{Remark} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{remark} \newtheorem{definition}{Definition} \newtheorem*{example}{Example} \newtheorem*{fact}{Fact} \endlocaldefs \newcommand{\F}{\mathfrak F} \newcommand{\G}{\mathfrak G} \newcommand{\N}{\mathfrak N} \newcommand{\QQ}{\mathbb Q} \newcommand{\ZZ}{\mathbb Z} \newcommand{\CC}{\mathbb C} \newcommand{\RR}{\mathbb R} \newcommand{\HH}{\mathbb H} \newcommand{\SSS}{\mathcal S} \newcommand{\sL}{\mathcal L} \newcommand{\cH}{{\mathcal H}} \newcommand{\A}{\mathfrak A} \newcommand{\B}{\mathfrak B} \newcommand{\mega}{\overline{\omega}} \newcommand{\id}{\operatorname{id}} \newcommand{\im}{\operatorname{im}} \newcommand{\Op}{\operatorname{Op}} \newcommand{\Sym}{\operatorname{Sym}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\Con}{\operatorname{Con}} \newcommand{\QCon}{\operatorname{QCon}} \newcommand{\Diff}{\operatorname{Diff}} \newcommand{\Isom}{\operatorname{Isom}} \newcommand{\isom}{\operatorname{\mathfrak{isom}}} \newcommand{\Homeo}{\operatorname{Homeo}} \newcommand{\Teich}{\operatorname{Teich}} \newcommand{\Homeq}{\operatorname{Homeq}} \newcommand{\Homeqbar}{\operatorname{\overline{Homeq}}} \newcommand{\Stab}{\operatorname{Stab}} \newcommand{\Nbd}{\operatorname{Nbd}} \newcommand{\colim}{\operatornamewithlimits{colim}} \newcommand{\acts}{\curvearrowright} \newcommand{\PGL}{\operatorname{PGL}} \newcommand{\GL}{\operatorname{GL}} \mathchardef\mhyphen"2D 
\newcommand{\free}{\mathrm{free}} \newcommand{\refl}{\mathrm{refl}} \newcommand{\trefl}{\mathrm{trefl}} \newcommand{\tame}{\mathrm{tame}} \newcommand{\wild}{\mathrm{wild}} \newcommand{\Emb}{\operatorname{Emb}} \newcommand{\Aut}{\operatorname{Aut}} \newcommand{\Out}{\operatorname{Out}} \newcommand{\Maps}{\operatorname{Maps}} \newcommand{\rel}{\operatorname{rel}} \newcommand{\tildetimes}{\mathbin{\tilde\times}} \newcommand{\h}{\overset h} \newcommand{\spn}{\mathrm{span}} \newcommand{\dis}{\displaystyle} \newcommand{\orb}{\mathrm{Orbit}} \newcommand{\catname}[1]{\textbf{#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\mtfr}[1]{\mathfrak{#1}} \newcommand{\dist}{\mathrm{dist}} \newcommand{\ant}{\mathrm{ant}} \newcommand{\var}{\mathrm{Var}} \begin{document} \begin{frontmatter} \title{Reproducing Kernels and New Approaches in Compositional Data Analysis } \runtitle{New Foundation of Compositional Data Analysis} \begin{aug} \author[A]{\fnms{Binglin} \snm{Li}\ead[label=e1]{[email protected]}} and \author[B]{\fnms{Jeongyoun} \snm{Ahn}\ead[label=e2]{[email protected]}} \address[A]{Department of Statistics, University of Georgia, \printead{e1}} \address[B]{Department of Industrial and Systems Engineering, KAIST, \printead{e2}} \end{aug} \begin{abstract} Compositional data, such as human gut microbiomes, consist of non-negative variables of which only the values relative to the other variables are available. Analyzing such data requires a careful treatment of their geometry. A common geometrical understanding of compositional data is via a regular simplex. The majority of existing approaches rely on log-ratio or power transformations to overcome the innate simplicial geometry.
In this work, based on the key observation that compositional data are projective in nature, and on the intrinsic connection between projective and spherical geometry, we re-interpret the compositional domain as the quotient of a sphere modded out by a group action, so that compositional statistics becomes a ``group-invariant'' form of directional statistics. This re-interpretation allows us to understand the function theory on compositional domains in terms of that on spheres, and then to use spherical harmonics theory (along with reflection group actions) for the construction of ``Compositional Reproducing Kernel Hilbert Spaces''. This construction of an RKHS theory for compositional data opens wide research avenues for future methodology developments; in particular, modern insights (e.g. kernel methods) from machine learning can be introduced to compositional data analysis. Supported by a representer theorem, one can apply kernel methods such as support vector machines to compositional data. The polynomial nature of the compositional RKHS has both theoretical and computational benefits. Applications such as nonparametric density estimation and a compositional kernel exponential family are also discussed. \end{abstract} \begin{keyword} \kwd{Machine Learning} \kwd{Microbiome} \kwd{RKHS} \kwd{Spherical harmonics} \kwd{Directional Statistics} \end{keyword} \end{frontmatter} \section{Introduction}\label{sec:intro} The recent popularity of human gut microbiome research has presented many data-analytic and statistical challenges \citep{calle2019statistical}. Among the many features of microbiome, or meta-genomics, data, we address their \emph{compositional} nature in this work. Compositional data consist of $n$ observations of $(d+1)$ non-negative variables whose values represent proportions relative to the other variables in the data.
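As a minimal numerical illustration of how such observations arise (the counts below are hypothetical; in practice they would be, e.g., sequencing read counts), raw non-negative measurements are turned into a composition by normalizing to unit sum:

```python
import numpy as np

def to_composition(counts):
    """Normalize non-negative counts to relative proportions on the simplex."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    if total <= 0:
        raise ValueError("counts must have a positive sum")
    return counts / total

# hypothetical abundance counts for d + 1 = 4 taxa (one observation)
x = to_composition([120, 30, 0, 50])
print(x)  # [0.6, 0.15, 0., 0.25]: proportions summing to 1; zeros are preserved
```

Note that zero counts remain exact zeros after normalization, which is why the treatment of zeros discussed next matters.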
Compositional data are commonly observed in many scientific fields, such as biochemistry, ecology, finance, and economics, to name just a few. The most notable aspect of compositional data is the restriction on their domain, which stems from the condition that the sum of the variables is fixed. The compositional domain is not a classical vector space; instead, it can be modeled by the (regular) simplex \begin{equation}\label{eq:simplex} \Delta^d=\{(x_1,\dots,x_{d+1})\in \mathbb R^{d+1} \;| \sum_{i=1}^{d+1}x_i=1,\ x_i\geq 0, \forall i \}, \end{equation} which is topologically compact. The inclusion of zeros in (\ref{eq:simplex}) is crucial, as most microbiome data have a substantial number of zeros. Arguably the most prominent approach to handling data on a simplex is to take a log-ratio transformation \citep{aitch86}, for which one has to consider only the open interior of $\Delta^d$, denoted by $\mathcal S^d$. Zeros are usually taken care of by adding a small number; however, it has been noted that the results of the analysis can depend strongly on how the zeros are replaced \citep{lubbe2021comparison}. \citet{micomp} pointed out ``the dangers inherent in ignoring the compositional nature of the data'' and argued that microbiome datasets must be treated as compositions at all stages of analysis. Recently, the need to handle compositional data without any transformation has been gaining popularity \citep{li2020s, rasmussen2020zero}. The approach proposed in this paper is to construct reproducing kernels for compositional data by interpreting compositional domains via projective and spherical geometries. \subsection{New Trends of Techniques in Machine Learning}\label{machine} Besides the motivation from microbiology, another source of inspiration for this work is the current exciting development in statistics and machine learning.
In particular, we are inspired by the rising popularity of higher tensors and kernel techniques, which allow multivariate methods to extend to exotic structures beyond traditional vector spaces, e.g., graphs (\cite{graphrkhs}), manifolds (\cite{vecman}), or images (\cite{tensorbrain}). This work serves as an attempt to construct reproducing kernel structures for compositional data, so that new philosophies from modern machine learning can be introduced to this classical field of statistics. The approach in this work is to model the compositional domain as a group quotient of a sphere, $\mathbb S^d/\Gamma$ (see (\ref{allsame})), which gives a new connection between compositional data analysis and directional statistics. The idea of representing data by tensors and frames is not new in directional statistics (see \cite{ambro}), but we find it more convenient to construct reproducing kernels for $\mathbb S^d/\Gamma$ (the reason is given in Section \ref{whyrkhs}). We do want to mention that the construction of reproducing kernels for compositional data indicates a potential new paradigm for compositional data analysis: the traditional approach aimed to find direct analogues of multivariate concepts, such as means, variance-covariance matrices, and regression frameworks based on those concepts. However, finding a mean point on a non-linear space, e.g., on a manifold, is not an easy task, and in the worst case a mean point might not even exist on the underlying space (e.g., the mean point of the uniform distribution on the unit circle does \emph{not} lie on the circle).
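The circle example is easy to verify numerically; the following sketch shows that the Euclidean mean of points sampled uniformly from the unit circle lands near the center, which is not a point of the circle:

```python
import numpy as np

# Sample uniformly from the unit circle and take the ordinary Euclidean mean.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=100_000)
points = np.column_stack([np.cos(theta), np.sin(theta)])  # every row has norm 1

mean = points.mean(axis=0)
# The mean is near the center of the circle, so its norm is far from 1:
print(np.linalg.norm(mean))
```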
However, the philosophy of kernel mean embedding (surveyed in \cite{kermean}), formulated in our language, offers a way out: instead of finding a ``\emph{physical}'' point for the mean of a distribution, one should do statistics \emph{distributionally}; in other words, the mean or expectation is considered as a \emph{linear functional} on the RKHS, and this functional is represented by an actual function in the Hilbert space, referred to as the ``kernel mean $\mathbb E(k(X,\cdot))$'' in \cite{kermean}. Instead of trying to find another compositional point as the empirical mean of a compositional data set, one can construct the ``kernel mean'' $\big(\sum_{i=1}^nk(X_i,\cdot)\big)/n$ as a replacement for the traditional empirical mean. Moreover, one can also construct an analogue of the variance-covariance matrix purely from kernels; in fact, in \cite{fbj09}, the authors considered ``Gram matrices'' constructed out of reproducing kernels as consistent estimators of cross-covariance operators (these operators play the role of covariance and cross-covariance matrices in classical Euclidean spaces). Although we only construct reproducing kernels for compositional data, this does not mean that ``higher tensors'' are abandoned in our considerations. In fact, higher-tensor-valued reproducing kernels also belong to kernel techniques, with applications in manifold regularization (\cite{vecman}) and shape analysis (\cite{matrixvaluedker}). These approaches to higher-tensor-valued reproducing kernels indicate further possibilities for regression frameworks $f:\ \mathcal X\rightarrow \mathcal Y$ between exotic spaces, with both the source $\mathcal X$ and the target $\mathcal Y$ non-linear in nature; this extends the intuition of multivariate analysis to nonlinear contexts, and compositional domains (traditionally modeled by an ``Aitchison simplex'') are an interesting class of examples that can be treated non-linearly.
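A minimal sketch of the empirical kernel mean $\big(\sum_{i=1}^nk(X_i,\cdot)\big)/n$ follows. We stress the assumptions: the Gaussian kernel below is a generic stand-in (the actual compositional kernels are constructed later, in Section \ref{sec:rkhs}), and the sample points are hypothetical compositions:

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=0.5):
    # A generic Gaussian kernel as a stand-in; NOT the polynomial
    # compositional kernels built from spherical harmonics in the paper.
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * bandwidth ** 2))

def kernel_mean(sample):
    """The empirical kernel mean (1/n) * sum_i k(X_i, .) as a callable function."""
    return lambda t: float(np.mean([gaussian_kernel(x_i, t) for x_i in sample]))

# hypothetical compositional observations (rows sum to 1)
sample = np.array([[0.60, 0.15, 0.00, 0.25],
                   [0.50, 0.20, 0.10, 0.20],
                   [0.70, 0.10, 0.00, 0.20]])
mu_hat = kernel_mean(sample)
print(mu_hat(sample[0]))  # evaluating the "mean", a function, at a data point
```

The point of the construction is that `mu_hat` is an element of the function space, not another compositional point, so no question of "living on the domain" arises.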
Since we remodel the compositional domain using projective/spherical geometry, the compositional domain is \emph{not} treated as a vector space but as a quotient topological space $\mathbb S^d/\Gamma$. Instead of ``putting a linear structure on an Aitchison simplex'' (see \cite{aitch86}) or using the square root transformation (which still starts from an Aitchison simplex), we choose to ``linearize'' compositional data points via kernel techniques (and possibly higher-tensor constructions) from modern machine learning, so that one can still do ``multivariate analysis''. Our construction in this work initiates such an attempt to introduce these new streams of kernel and tensor techniques from statistical learning theory into compositional data analysis. \subsection{Highlights of the Present Work} Our contribution in this paper is threefold. First, we propose a new geometric foundation for compositional data analysis, $\mathbb P^d_{\geq 0}$, a subspace of a full projective space $\mathbb P^d$. Based on the close connection of spheres with projective spaces, we also describe $\mathbb P^d_{\geq 0}$ in terms of $\mathbb S^d/\Gamma$, a sphere acted on by a reflection group; the fundamental domain of this action is the first orthant $\mathbb S^d_{\geq 0}$ (a totally different reason for using ``$\mathbb S^d_{\geq 0}$'' than in the traditional approach). Secondly, based on the new geometric foundation of compositional domains, we give a new compositional density estimation theory by making use of the long-developed spherical density estimation theory, and we prove a central limit theorem for integrated squared errors, which leads to a goodness-of-fit test. Thirdly, also through this new geometric foundation, function spaces on compositional domains can be related to those on spheres. The space $L^2(\mathbb S^d)$ of square integrable functions on the sphere is the focus of an ancient subject in mathematics and physics called ``spherical harmonics''.
Moreover, spherical harmonics theory also tells us that each Laplacian eigenspace of $L^2(\mathbb S^d)$ is a reproducing kernel Hilbert space, and this allows us to construct reproducing kernels for compositional data points via ``orbital integrals'', which opens a door for machine learning techniques to be applied to compositional data. We also propose a compositional exponential family as a general distributional family for compositional data modeling. \subsection{Why Projective and Spherical Geometries?}\label{whysph} According to Aitchison in \cite{ai94}, ``any meaningful function of a composition must satisfy the requirement $f(ax)=f(x)$ for any $a\neq 0$''. In geometry and topology, the space whose points are identified under scaling by non-zero constants, so that exactly such functions are well defined on it, is called a \emph{projective space}, denoted by $\mathbb P^d$; therefore, projective geometry should be the natural candidate for modeling compositional data, rather than a simplex. Since the coordinates of a compositional point cannot have opposite signs, a compositional domain is in fact a ``positive cone'' $\mathbb P_{\geq 0}^d$ inside a full projective space. A key property of projective spaces is that stretching or shrinking the length of a vector in $\mathbb P^d$ does \emph{not} alter the point. Thus one can stretch a point in $\Delta^d$ to a point on the first orthant sphere by dividing it by its $\ell_2$ norm. Figure \ref{fig:stretch} illustrates this stretching (``stretching'' is not a transformation from the projective geometry point of view) in action. In short, projective geometry is the more natural model for compositional data, in accordance with Aitchison's original philosophy in \cite{ai94}. However, spheres are easier to work with: mathematically speaking, the function space on spheres is a well-studied subject in spherical harmonics theory, and statistically speaking, we can connect with directional statistics in a more natural way.
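The stretching map is elementary to implement; the sketch below checks that the stretched point lies on the sphere while all pairwise ratios, i.e., the projective point, are preserved:

```python
import numpy as np

def stretch(p):
    """Send a composition p (a point of the simplex) to the first-orthant sphere
    by dividing by its l2 norm; the relative proportions p_i / p_j are unchanged."""
    p = np.asarray(p, dtype=float)
    return p / np.linalg.norm(p)

p = np.array([0.5, 0.3, 0.2])       # a point of the simplex Delta^2
s = stretch(p)
print(np.linalg.norm(s))            # numerically 1: s lies on the sphere S^2
print(p[0] / p[1], s[0] / s[1])     # equal ratios: the same projective point
```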
Our compositional domain $\mathbb P^d_{\geq 0}$ can be naturally identified with $\mathbb S^d/\Gamma$, a sphere modded out by a reflection group action. This reflection group $\Gamma$ acts on the sphere $\mathbb S^d$ by reflection, and the \emph{fundamental domain} of this action is $\mathbb S^d_{\geq 0}$ (notions of group actions, fundamental domains and reflection groups are all discussed in Section \ref{sec:sphere}). Thus our connection with the first orthant sphere $\mathbb S^d_{\geq 0}$ is a natural consequence of projective geometry and its connection with spheres with group actions, having nothing to do with square root transformations. \subsection{Why Reproducing Kernels}\label{whyrkhs} As explained in Section \ref{machine}, we strive to use new ideas of tensors and kernel techniques in machine learning to propose another framework for compositional data analysis, and Section \ref{whysph} explains the new connection with spherical geometry and directional statistics. However, the idea of using tensors to represent data points is not new in directional statistics (see \cite{ambro}). A naive idea would thus be to mimic the directional statistics treatment of ambiguous rotations: in \cite{ambro}, the authors studied how to do statistics over the coset space $SO(3)/K$, where $K$ is a finite subgroup of $SO(3)$. In their case, the subgroup $K$ has to belong to a special class of subgroups of \emph{special} orthogonal groups, and within this class they manage to construct the corresponding tensors and frames, which give inner product structures between different data points. However, in our case, a compositional domain is $\mathbb S^d/\Gamma=O(d)\backslash O(d+1)/\Gamma$, a double coset space.
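The double coset structure need not be an obstacle computationally. As a numerical sketch of the kernel-level group-averaging idea (with two stated assumptions: $\Gamma$ is realized concretely as the group of coordinate sign flips, whose fundamental domain is the first orthant, and a Gaussian kernel stands in for the spherical harmonics kernels of Section \ref{sec:rkhs}):

```python
import itertools
import numpy as np

def sphere_kernel(x, y, bandwidth=1.0):
    # stand-in kernel on the sphere; the paper builds its kernels
    # from spherical harmonics instead
    d = x - y
    return np.exp(-np.dot(d, d) / (2.0 * bandwidth ** 2))

def quotient_kernel(x, y):
    """Average a spherical kernel over the orbit of y under the reflection group
    Gamma of coordinate sign flips, giving a Gamma-invariant kernel on S^d/Gamma."""
    vals = [sphere_kernel(x, np.array(signs) * y)
            for signs in itertools.product([1.0, -1.0], repeat=len(y))]
    return float(np.mean(vals))

x = np.array([0.60, 0.48, 0.64])   # unit vectors in the first orthant of S^2
y = np.array([0.36, 0.48, 0.80])
# invariance under the group action: replacing y by any reflection of y
# leaves the averaged kernel unchanged
print(np.isclose(quotient_kernel(x, y), quotient_kernel(x, -y)))  # True
```

The invariance holds because averaging over the full orbit merely reindexes the sum; this is the sense in which the group action is "averaged out" at the kernel level.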
Unlike \cite{ambro}, which only considered the case $d=3$, our dimension $d$ is completely general; moreover, our reflection group $\Gamma$ is \emph{not} a subgroup of any special orthogonal group, so the constructions of tensors and frames in \cite{ambro} do not apply to our situation directly. Part of the novelty of this work is to get around this issue by making use of the reproducing kernel Hilbert space (RKHS) structures on spheres and ``averaging out'' the group action at the reproducing kernel level, which in return gives us a reproducing kernel structure on compositional domains. Once we have an RKHS in hand, we can ``add'' and take the ``inner product'' of two data points, so our linearization strategy can also be regarded as a combination of ``the averaging approach'' and ``the embedding approach'' in the sense of \cite{ambro}. In fact, an abstract function space together with reproducing kernels plays an important role in modern data analysis. Below we provide some philosophical motivations for the importance of the function space over the underlying data set: \begin{itemize} \item[(a)] Hilbert spaces of functions are naturally linear, with an inner product structure. With the existence of (reproducing) kernels, data points are naturally incorporated into the function space, which leads to interesting interactions between the data set and the functions defined over it. There is a large literature on embedding distributions into an RKHS, e.g., \cite{disemd}, and on using reproducing kernels to recover exponential families, e.g., \cite{expdual}. RKHS has also been used to recover classical statistical tests (e.g., the goodness-of-fit test in \cite{kergof}) and regression in \cite{rkrgr}. These works do not concern the analysis of the function space itself but primarily focus on the data analysis of the underlying data set; nevertheless, all of them proceed by passing over to an RKHS.
This reflects the increasing recognition of the importance of abstract function spaces with (reproducing) kernel structures. \item[(b)] Mathematically speaking, given a geometric space $M$, the function space on $M$ can recover the underlying space $M$ itself, and this principle has played a major role in modern geometry. Modern algebraic geometry, following the philosophy of Grothendieck, is based upon this insight. Function spaces can be generalized to matrix-valued function spaces, and this generalization gives rise to non-commutative RKHS (used in shape analysis in \cite{matrixvaluedker}); moreover, non-commutative RKHS is connected with free probability theory (see \cite{ncrkhs}), which has been used in random effect and linear mixed models (see \cite{fj19} and \cite{princbulk}). \end{itemize} \subsubsection{Structure of this paper} We briefly describe the content of the main sections of this article: \begin{itemize} \item In Section \ref{sec:sphere}, we rebuild the geometric foundation of compositional domains by using projective and spherical geometry. We also point out that the old model using the closed simplex $\Delta^d$ is topologically the same as the new foundation. In diagrammatic form, we establish the following topological equivalences: \begin{equation}\label{4things} \Delta^d\cong \mathbb P^d_{\geq 0}\cong \mathbb S^d/\Gamma\cong\mathbb S^d_{\geq 0}, \end{equation} \noindent where $\mathbb S^d_{\geq 0}$ is the first orthant sphere, which is also the fundamental domain of the group action $\Gamma\acts \mathbb S^d$. All four spaces in (\ref{4things}) will be referred to as ``compositional domains''.
As a direct application, we propose a compositional density estimation method by using spherical density estimation theory via a spread-out construction through the quotient map $\pi:\ \mathbb S^d\rightarrow\mathbb S^d/\Gamma$, and prove that the integral squared error of our compositional density estimator satisfies a central limit theorem (Theorem \ref{ourclt}), which can be used for goodness-of-fit tests. \item Section \ref{sec:rkhs} is devoted to constructing compositional reproducing kernel Hilbert spaces. Our construction relies on the reproducing kernel structures on spheres, which are given by spherical harmonics theory. Wahba in \cite{wahba1981d} constructed splines using reproducing kernel structures on $\mathbb S^2$ (the $2$-dimensional sphere), in which she also used the spherical harmonics theory in \cite{sasone}, which only treated the $2$-dimensional case. Our theory deals with the general $d$-dimensional case, so we need the full power of spherical harmonics theory, which will be reviewed at the beginning of Section \ref{sec:rkhs}; we then use spherical harmonics theory to construct compositional reproducing kernels using an ``orbital integral'' type of idea. \item Section \ref{sec:app} gives a couple of applications of our construction of compositional reproducing kernels. (i) The first example is the representer theorem, but with one caveat: our RKHS is finite dimensional, consisting of degree-$2m$ homogeneous polynomials with no transcendental functions, so linear independence for distinct data points is not directly available; however, we show that when the degree $m$ is high enough, linear independence still holds. Our statement of the representer theorem is not new purely from the RKHS-theory point of view. Our point is to demonstrate that intuitions from traditional statistical learning can still be used in compositional data analysis, with some extra care.
(ii) Secondly, we construct the compositional exponential family, which can be used to model the underlying distribution of compositional data. This flexible construction will enable us to utilize the distribution family in many statistical problems such as mean tests. \end{itemize} \section{New Geometric Foundations of Compositional Domains}\label{sec:sphere} \begin{figure} \centering \includegraphics[width = .4\textwidth]{SqrtCompare.png} \caption{Illustration of the stretching action on $\Delta^1$ to $\mathbb S^1$. Note that the stretching keeps the relative compositions, whereas the square root transformation fails to do so. } \label{fig:stretch} \end{figure} In this section, we give a new interpretation of compositional domains as a cone $\mathbb P^d_{\geq 0}$ in a projective space, based on which compositional domains can be interpreted as spherical quotients by reflection groups. This connection will yield a ``spread-out'' construction on spheres, and we demonstrate an immediate application of this new approach to compositional density estimation. \subsection{Projective and Spherical Geometries and a Spread-out Construction}\label{sec:spread} Compositional data consist of relative proportions of $d+1$ variables, which implies that each observation belongs to a projective space. A $d$-dimensional projective space $\mathbb P^d$ is the set of one-dimensional linear subspaces of $\mathbb R^{d+1}$. A one-dimensional subspace of a vector space is just a line through the origin, and in projective geometry, all points on a line through the origin are regarded as the same point in the projective space. In contrast to the classical linear coordinates $(x_1, \cdots,x_{d+1})$, a point in $\mathbb P^d$ can be represented by projective coordinates $(x_1 : \cdots : x_{d+1})$, with the following property \[ (x_1 : x_2: \cdots : x_{d+1}) = (\lambda x_1 : \lambda x_2: \cdots : \lambda x_{d+1}), ~~~\text{for any } \lambda \ne 0.
\] It is natural that an appropriate ambient space for compositional data is the \emph{non-negative projective space}, which is defined as \begin{equation}\label{eq:proj} \mathbb P^d_{\ge 0} = \{(x_1 : x_2: \cdots : x_{d+1})\in \mathbb P^d \;| \; (x_1 : x_2: \cdots : x_{d+1}) = (|x_1| : |x_2|: \cdots : |x_{d+1}|)\}. \end{equation} It is clear that the common representation of compositional data with a (closed) simplex $\Delta^d$ in (\ref{eq:simplex}) is in fact equivalent to (\ref{eq:proj}), thus we have the first equivalence: \begin{equation}\label{projtosimp} \mathbb P^d_{\geq 0}\cong \Delta^d. \end{equation} Let $\mathbb S^d$ denote the $d$-dimensional unit sphere, defined as \[ \mathbb S^d=\left\{(x_1,x_2,\dots, x_{d+1})\in \mathbb R^{d+1} \; | \sum_{i=1}^{d+1}x_i^2=1\right\}, \] and let $\mathbb S^d_{\geq 0}$ denote the first orthant of $\mathbb S^d$, the subset in which all coordinates are non-negative. The following lemma states that $\mathbb S^d_{\geq 0}$ can serve as a new domain for compositional data, as there exists a bijective map between $\Delta^d$ and $\mathbb S^d_{\geq 0}$. \begin{lemma}\label{compcone} There is a canonical identification of $\Delta^d$ with $\mathbb S^d_{\geq 0}$, namely, $$ \xymatrix{\Delta^d\ar@<.4ex>[r]^f& \mathbb S^d_{\geq 0}\ar@<.4ex>[l]^g}, $$ where $f$ is the inflation map and $g$ is the contraction map, with both $f$ and $g$ being continuous and inverse to each other. \end{lemma} \begin{proof}It is straightforward to construct the inflation map $f$: for $v \in \Delta^d$, define $ f(v) = v/\|v\|_2, $ where $\|v\|_2 $ is the $\ell_2$ norm of $v$; it is easy to see that $f(v) \in \mathbb S^d_{\ge 0}$. Note that the inflation map ensures that $f(v)$ represents the same projective point as $v$. To construct the contraction map $g$, for $s\in \mathbb S^d_{\geq 0}$ we define $ g(s) = s / \|s\|_1, $ where $\|s\|_1$ is the $\ell_1$ norm of $s$, and see that $g(s)\in\Delta^d$. One can easily check that both $f$ and $g$ are continuous and inverse to each other.
\end{proof} Based on Lemma \ref{compcone}, we now identify $\Delta^d$ alternatively with the quotient topological space $\mathbb S^d/\Gamma$ for some group action $\Gamma$. In order to do so, we first show that the cone $\mathbb S^d_{\geq 0}$ is a strict fundamental domain of $\Gamma$, i.e., $\mathbb S^d_{\geq 0}\cong \mathbb S^d/\Gamma$. We start by defining a \emph{coordinate hyperplane} for a group. The $i$-th coordinate hyperplane $H_i\subset \mathbb R^{d+1}$ with respect to a choice of a standard basis $\{e_1,e_2,\dots, e_{d+1}\}$ is the codimension one linear subspace defined as \[ H_i=\{(x_1,\dots, x_i,\dots, x_{d+1})\in \mathbb R^{d+1}:\ x_i=0\}, ~~ i = 1, \ldots, d+1. \] We define the reflection group $\Gamma$ with respect to the coordinate hyperplanes as follows: \begin{definition}\label{reflect} The reflection group $\Gamma$ is the subgroup of the general linear group $GL(d+1)$ generated by $\{\gamma_i, i = 1, \ldots, {d+1}\}$. Given the same basis $\{e_1,\dots, e_{d+1}\}$ for $\mathbb R^{d+1}$, the reflection $\gamma_i$ is the linear map specified via: \[ \gamma_i:\ (x_1,\dots,x_{i-1}, x_i, x_{i+1},\dots, x_{d+1})\mapsto (x_1,\dots,x_{i-1}, -x_i, x_{i+1},\dots, x_{d+1}). \] \end{definition} Note that, restricted to $\mathbb S^d$, each $\gamma_i$ is an isometry of the unit sphere $\mathbb S^d$; we denote the resulting group action by $\Gamma\acts\mathbb S^d$. Thus, one can treat the group $\Gamma$ as a discrete subgroup of the isometry group of $\mathbb S^d$. In what follows we establish that $\mathbb S^d_{\ge 0}$ is a fundamental domain of the group action $\Gamma\acts \mathbb S^d$ in the topological sense. In general, there is no uniform treatment of a fundamental domain, but we will follow the approach of \cite{bear}. To introduce a fundamental domain, let us define an \emph{orbit} first.
For a point $z\in \mathbb S^d$, the orbit of the group $\Gamma$ through $z$ is the following set: \begin{equation}\label{eq:orbit} \orb^{\Gamma}_z=\{\gamma(z):\ \gamma\in \Gamma\}. \end{equation} Note that one can decompose $\mathbb S^d$ into a disjoint union of orbits. The size of an orbit is not necessarily the same as the size of the group $|\Gamma|$, because of the existence of a \emph{stabilizer subgroup}, which is defined as \begin{equation}\label{stable} \Gamma_z=\{\gamma\in \Gamma:\ \gamma (z)=z\}. \end{equation} The set $\Gamma_z$ forms a group itself, and we call $\Gamma_z$ the \emph{stabilizer subgroup} of $\Gamma$ at $z$. All elements of $\orb^{\Gamma}_z$ have isomorphic stabilizer subgroups, so the size of $\orb^{\Gamma}_z$ is the quotient $|\Gamma|/|\Gamma_z|$, where $|\cdot|$ denotes the cardinality of a set. There are only finitely many possibilities for the size of a stabilizer subgroup of the action $\Gamma\acts \mathbb S^d$, and the size of the stabilizer subgroup of a point depends on the number of coordinate hyperplanes containing that point. \begin{definition}\label{fundomain} Let $G$ act properly and discontinuously on a $d$-dimensional sphere, with $d>1$. A \emph{fundamental domain} for the group action $G$ is a closed subset $F$ of the sphere such that every orbit of $G$ intersects $F$ in at least one point, and if an orbit intersects the interior of $F$, then it intersects $F$ at only that one point. \end{definition} A fundamental domain is \emph{strict} if every orbit of $G$ intersects $F$ at exactly one point. The following proposition identifies $\mathbb S^d_{\geq 0}$ as the quotient topological space $\mathbb S^d/\Gamma$, i.e., $\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma$. \begin{proposition}\label{conedom} Let $\Gamma\acts \mathbb S^d$ be the group action described in Definition \ref{reflect}; then $\mathbb S^d_{\geq 0}$ is a strict fundamental domain. \end{proposition} In topology, there is a natural quotient map $\mathbb S^d\rightarrow \mathbb S^d/\Gamma$.
With the identification $\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma$, there should be a natural map $\mathbb S^d\rightarrow\mathbb S^d_{\geq 0}$. Now define a contraction map $c: \mathbb S^d\rightarrow \mathbb S^d_{\geq 0}$ via $(x_1,\dots,x_{d+1})\mapsto (|x_1|,\dots, |x_{d+1}|)$, i.e., by taking component-wise absolute values. It is then straightforward to see that $c$ is indeed the topological quotient map $\mathbb S^d\rightarrow \mathbb S^d/\Gamma$, under the identification $\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma$. So far, via (\ref{projtosimp}), Lemma \ref{compcone} and Proposition \ref{conedom}, we have established the following equivalences: \begin{equation}\label{allsame} \mathbb P^d_{\geq 0}=\Delta^d=\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma. \end{equation} For the rest of the paper we will use these four characterizations of a compositional domain interchangeably. \paragraph{Spread-Out Construction} Based on (\ref{allsame}), one can turn a compositional data analysis problem into one on a sphere via the \emph{spread-out construction}. The key idea is to associate one compositional data point $z\in \Delta^d=\mathbb S^d_{\geq 0}$ with the $\Gamma$-orbit of data points $\orb^{\Gamma}_z\subset \mathbb S^d$ in (\ref{eq:orbit}). Formally, given a point $z\in \Delta^d$, we construct the following \emph{data set} (\emph{not necessarily a set} because of possible repetitions): \begin{equation}\label{sprd} c^{-1}(z) = \left\{|\Gamma_{z'}|\ \text{copies of }z', \ \text{for}\ z'\in \text{Orbit}_z^\Gamma \right\}, \end{equation} where $\Gamma_{z'}$ is the stabilizer subgroup of $\Gamma$ with respect to $z'$ in (\ref{stable}). In general, if there are $n$ observations in $\Delta^d$, the spread-out construction will create a data set with $n2^{d+1}$ observations on $\mathbb S^d$, in which observations with zero coordinates are repeated. Figures \ref{fig:spread} (a) and (b) illustrate this idea with a toy data set with $d = 2$.
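The spread-out construction (\ref{sprd}) is concrete enough to sketch in code. The following Python fragment is a minimal illustration (the helper names \texttt{spread\_out} and \texttt{contract} are ours, not from any package): enumerating all $2^{d+1}$ sign patterns automatically produces $|\Gamma_{z'}|$ copies of each orbit point $z'$ whenever $z$ has zero coordinates.

```python
import itertools
import numpy as np

def spread_out(z):
    """Multiset c^{-1}(z): apply every sign pattern in Gamma to z.

    Enumerating all 2^(d+1) sign flips automatically yields |Gamma_z'|
    copies of each orbit point z' when some coordinates of z are zero."""
    z = np.asarray(z, dtype=float)
    return [tuple(s * z) for s in itertools.product([1.0, -1.0], repeat=len(z))]

def contract(x):
    """The quotient map c: S^d -> S^d_{>=0}, component-wise absolute value."""
    return tuple(abs(v) for v in x)

# A point on S^2_{>=0} with one zero coordinate, so d = 2 and |Gamma| = 2^3 = 8.
z = (0.6, 0.8, 0.0)
orbit = spread_out(z)
assert len(orbit) == 8           # 2^(d+1) points, counted with multiplicity
assert len(set(orbit)) == 4      # |Orbit_z| = |Gamma| / |Gamma_z| = 8 / 2
assert all(contract(p) == z for p in orbit)  # every copy contracts back to z
```

Here the stabilizer of $z=(0.6, 0.8, 0)$ has size $2$ (the flip of the vanishing third coordinate), so each of the four distinct orbit points appears twice, matching the multiset description in (\ref{sprd}).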
\subsection{Compositional Density Estimation}\label{compdensec} \begin{figure}[ht] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width = \textwidth]{simplex_eg.png} \caption{Compositional data on $\Delta^2$} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width = \textwidth]{simplex_kde_3d.png} \caption{``Spread-out'' data on $\mathbb S^2$} \end{subfigure}\\ \begin{subfigure}[b]{0.50\textwidth} \includegraphics[width = \textwidth]{sphere_kde_3d.png} \caption{Density estimate on $\mathbb S^2$} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \includegraphics[width = \textwidth]{simplex_kde.png} \caption{``Pulled-back'' estimate on $\Delta^2$} \end{subfigure} \caption{Toy compositional data on the simplex $\Delta^2$ in (a) are spread out to the sphere $\mathbb S^2$ in (b). The density estimate on $\mathbb S^2$ in (c) is pulled back to $\Delta^2$ in (d).}\label{fig:kde} \end{figure} The spread-out construction in (\ref{sprd}) provides an intimate new relation between directional statistics and compositional data analysis. Indeed, this construction produces a directional data set out of a compositional data set, so we can literally transform a compositional data problem into a directional statistics problem via the spread-out construction. For example, we can perform compositional independence/uniformity tests by doing directional independence/uniformity tests (as in \cite{sobind} and \cite{sobuni}) through spread-out constructions. In this section, we give a new compositional density estimation framework (different from the one in \cite{oldden}) by using spread-out constructions. In directional statistics, density estimation for spherical data has a long history dating back to the late 1970s in \cite{expdirect}. In the 1980s, Hall and his collaborators in \cite{hallden}, along with Bai and his collaborators in \cite{baiden}, established a systematic framework for spherical density estimation theory.
Spherical density estimation theory became popular partly because its integral squared error (ISE) is closely related to goodness-of-fit testing, as in \cite{chineseclt} and the recent work \cite{portclt}. The rich development of spherical density estimation theory yields a compositional density framework via spread-out constructions. In the following we apply this idea to nonparametric density estimation for compositional data. Instead of directly estimating the density on $\Delta^d$, one can perform the estimation with the spread-out data on $\mathbb S^d$, from which a density estimate for compositional data can be obtained. Let $p(\cdot)$ denote a probability density function of a random vector $Z$ on $\mathbb S^d_{\geq 0}$, or equivalently on $\Delta^d$. The following proposition gives the form of the density of the spread-out random vector $\Gamma(Z)$ on the whole sphere $\mathbb S^d$. \begin{proposition}\label{induced} Let $Z$ be a random variable on $\mathbb S^d_{\geq 0}$ with probability density $p(\cdot)$; then the induced random variable $\Gamma(Z)=\{\gamma(Z)\}_{\gamma\in \Gamma}$ has the following density $\tilde{p}(\cdot)$ on $\mathbb S^d$: \begin{equation}\label{cshriek} \tilde{p}(z)=\frac{|\Gamma_z|}{|\Gamma|} p(c(z)), \ z\in \mathbb S^d, \end{equation} where $|\Gamma_z|$ is the cardinality of the stabilizer subgroup $\Gamma_z$ of $z$ (see (\ref{stable})). \end{proposition} Let $c_*$ denote the operation on functions analogous to the contraction map $c$ on data points. It is clear that given a probability density $\tilde p$ on $\mathbb S^d$, we can recover the original density on the compositional domain via the ``pull back'' operation $c_*$: \[ p(z) = c_*(\tilde p)(z)=\sum_{x\in c^{-1}(z)}\tilde p(x), ~~ z\in \mathbb S^d_{\geq 0}. \] Now consider estimating the density on $\mathbb S^d$ with the spread-out data. Density estimation for data on a sphere has been well studied in directional statistics \citep{hallden, baiden}.
For $x_1, \ldots, x_n \in \mathbb S^d$, a density estimate for the underlying density is \[ \hat{f}_n(z)=\frac{c_h}{n}\sum_{i=1}^nK\left(\frac{1-z^T x_i}{h_n}\right),\ z\in \mathbb S^d, \] where $K$ is a kernel function that satisfies the common assumptions in Assumption \ref{kband}, and $c_h$ is a normalizing constant. Applying this to the spread-out data $c^{-1}(x_i)$, $i = 1, \ldots, n$, we have a density estimate of $\tilde p (\cdot) $ defined on $\mathbb S^d$: \begin{equation}\label{fhattilde} \hat{f}^{\Gamma}_n(z)=\dis \frac{c_h}{n|\Gamma|}\sum_{1\leq i\leq n,\gamma\in \Gamma}K\left(\frac{1-z^T \gamma( x_i)}{h_n}\right), \ z\in \mathbb S^d, \end{equation} from which a density estimate on the compositional domain is obtained by applying $c_*$. That is, \[ \hat{p}_n(z)=c_*\hat{f}^{\Gamma}_n(z)=\sum_{x\in c^{-1}(z)}\hat{f}^\Gamma_n(x),\ \ z\in \mathbb S^d_{\geq 0}. \] Fig. \ref{fig:kde} (c) and (d) illustrate this density estimation process with a toy example. The consistency of the spherical density estimate $\hat f_n$ is established in \cite{ chineseclt, portclt}, where it is shown that the integral squared error (ISE) of $\hat f_n$, $\int_{\mathbb S^d} (\hat f_n - f)^2 dz$, follows a central limit theorem. It is straightforward to show that the ISE of the proposed compositional density estimator $\hat{p}_n$ on the compositional domain is also asymptotically normally distributed. However, the CLT for the ISE of spherical densities in \cite{chineseclt} contains an unnecessary finite support assumption on the density kernel function $K$ (not to be confused with a reproducing kernel); although this finite support condition is dropped in \cite{portclt}, their result concerns directional-linear data, and their proof does not directly apply to the purely directional context.
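To make the estimators concrete, the following Python sketch implements an unnormalized version of the spread-out estimate $\hat f^{\Gamma}_n$ in (\ref{fhattilde}) and its pull-back $\hat p_n$. The exponential kernel $K(u)=e^{-u}$ is purely a placeholder choice, the constant $c_h$ is dropped (it only rescales the estimate), and the function names are ours:

```python
import itertools
import numpy as np

def f_hat_gamma(z, data, h):
    """Unnormalized spread-out kernel density estimate at z on S^d.

    Since each gamma is a diagonal +-1 matrix, z^T gamma(x_i) = gamma(z)^T x_i,
    so we may flip the signs of z instead of flipping each data point."""
    K = lambda u: np.exp(-u)                              # placeholder kernel
    d1 = data.shape[1]
    total = 0.0
    for s in itertools.product([1.0, -1.0], repeat=d1):   # gamma in Gamma
        total += K((1.0 - data @ (np.array(s) * z)) / h).sum()
    return total / (len(data) * 2**d1)

def p_hat(z, data, h):
    """Pull-back estimate: sum f_hat_gamma over the multiset c^{-1}(z)."""
    d1 = len(z)
    return sum(f_hat_gamma(np.array(s) * np.asarray(z), data, h)
               for s in itertools.product([1.0, -1.0], repeat=d1))

rng = np.random.default_rng(0)
raw = np.abs(rng.normal(size=(50, 3)))
data = raw / np.linalg.norm(raw, axis=1, keepdims=True)   # toy sample on S^2_{>=0}
z = np.array([0.6, 0.8, 0.0])
# Gamma-invariance: the spread-out estimate is unchanged under sign flips of z,
# which is exactly why it descends to the compositional domain.
assert np.isclose(f_hat_gamma(z, data, 0.3),
                  f_hat_gamma(z * np.array([-1.0, 1.0, -1.0]), data, 0.3))
```

Because the spread-out estimate is $\Gamma$-invariant, the pull-back at $z$ equals $2^{d+1}$ times its value at $z$, summed over the multiset $c^{-1}(z)$.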
For the readers' convenience, we provide the proof of the CLT for the ISE for both compositional and spherical data, without the finite support condition of \cite{chineseclt}. \begin{theorem}\label{ourclt} The CLT for the ISE holds for both directional and compositional data under the mild conditions (H1, H2 and H3) in Section \ref{cltprf}, without the finite support condition on the density kernel function $K$. \end{theorem} The details of the proof of Theorem \ref{ourclt}, together with the statements of the technical conditions, can be found in Section \ref{cltprf}. \section{Reproducing Kernels of Compositional Data}\label{sec:rkhs} This section is devoted to constructing reproducing kernel structures on compositional domains, based on the topological reinterpretation of $\Delta^d$ in Section \ref{sec:sphere}. The key idea is that, based on the quotient map $\pi:\ \mathbb S^d\rightarrow \mathbb S^d/\Gamma=\Delta^d$, we can use function spaces on spheres to understand function spaces on compositional domains. Moreover, we can construct reproducing kernel structures of a compositional domain $\Delta^d$ based on those on $\mathbb S^d$. The reproducing kernel was first introduced in 1907 by Zaremba when he studied boundary value problems for harmonic and biharmonic functions, but the systematic development of the subject was carried out in the early 1950s by Aronszajn and Bergman. Reproducing kernels on $\mathbb S^d$ were essentially discovered by Laplace and Legendre in the 19th century, although the reproducing kernels on spheres were called \emph{zonal spherical functions} at that time. Both spherical harmonics theory and RKHS have found applications in theoretical subjects like functional analysis, representation theory of Lie groups and quantum mechanics. In statistics, Wahba's successful application of RKHS in spline models popularized RKHS theory for $\mathbb S^d$. In particular, Wahba used spherical harmonics theory to construct an RKHS on $\mathbb S^2$ in \cite{wah81}.
Generally speaking, for a fixed topological space $X$, there exist (and one can construct) multiple reproducing kernel Hilbert spaces on $X$; in \cite{wah81}, Wahba constructed an RKHS on $\mathbb S^2$ by considering a \emph{subspace} of $L^2(\mathbb S^2)$ under a finiteness condition, and her reproducing kernels were also built out of zonal spherical functions. Her RKHS on $\mathbb S^2$ was motivated by studying spline models on the sphere, while our motivation has nothing to do with spline models of any kind. In this work we consider reproducing structures on spheres which are \emph{different} from the one in \cite{wah81}, but spherical harmonics theory provides building blocks for both our approach and Wahba's in \cite{wah81}. Starting from the reinterpretation of a compositional domain $\Delta^d$ as $\mathbb S^d/\Gamma$, we construct reproducing kernels of compositional domains by using reproducing kernel structures on spheres. Spherical harmonics theory gives reproducing kernel structures on $\mathbb S^d$, and a compositional domain $\Delta^d$ is topologically covered by the sphere, with deck transformation group $\Gamma$. Thus we naturally wonder (i) whether function spaces on $\Delta^d$ can be identified with the subspace of $\Gamma$-invariant functions on $\mathbb S^d$, and (ii) whether one might ``build'' $\Gamma$-invariant kernels out of spherical reproducing kernels, hoping that the $\Gamma$-invariant kernels can play the role of ``reproducing kernels'' on $\Delta^d$. It turns out that the answers to both (i) and (ii) are positive (see Remark \ref{dreami} and Theorem \ref{reprcomp}). The discovery of reproducing kernel structures on $\Delta^d$ is crucially based on the reinterpretation of compositional domains via projective and spherical geometries in Section \ref{sec:sphere}.
By considering $\Gamma$-invariant objects in spherical function spaces, we construct reproducing kernel structures for compositional domains, and hence compositional reproducing kernel Hilbert spaces. Although a compositional RKHS was first considered as a candidate ``inner product space'' for data points to be mapped into, the benefit of working with an RKHS goes far beyond this, due to the exciting development of kernel techniques in machine learning theory that can be applied to compositional data analysis, as mentioned in Section \ref{machine}. This offers the chance to construct a new framework for compositional data analysis, in which compositional data points are ``upgraded'' to functions (via reproducing kernels), and classical statistical notions, like means and variance-covariances, are ``upgraded'' to linear functionals and linear operators over the function space. Traditionally important statistical topics such as dimension reduction, regression analysis, and many inference problems can then be re-addressed in the light of these new ``kernel techniques'' in the sense of \cite{kermean}. \subsection{Recollection of Basic Facts from Spherical Harmonics Theory} We give a brief review of the theory of spherical harmonics in the following. See \citet{atkinson2012spherical} for a general introduction to the topic. In classical linear algebra, a finite dimensional linear space with a self-adjoint linear map to itself can be decomposed into a direct sum of eigenspaces. Such a phenomenon still holds for $L^2(\mathbb S^d)$, with the Laplacian as the linear operator. Recall that the Laplacian operator on a function $f$ of $d+1$ variables is \[ \dis\Delta f=\sum_{i=1}^{d+1}\frac{\partial ^2 f}{\partial x_i^2}. \] Let $\mathcal H_i$ be the $i$-th eigenspace of the Laplacian operator.
It is known that $L^2(\mathbb S^d)$ can be orthogonally decomposed as \begin{equation}\label{L2} L^2(\mathbb S^d)=\bigoplus_{i=0}^{\infty}\mathcal H_i, \end{equation} where the orthogonality is with respect to the inner product in $L^2(\mathbb S^d)$: $\langle f, g \rangle = \dis\int_{\mathbb S^d} f \bar{g}$. Let $\mathcal P_{i}(d+1)$ be the space of homogeneous polynomials of degree $i$ in $d+1$ coordinates on $\mathbb S^d$. A homogeneous polynomial is a polynomial whose terms are all monomials of the same degree, e.g., $\mathcal P_4(3)$ includes $xy^3 + x^2yz$. Further, let $ H_{i}(d+1)$ be the space of homogeneous harmonic polynomials of degree $i$ on $\mathbb S^d$, i.e., \begin{equation}\label{eq:harmonic} H_{i}(d+1)=\{P\in \mathcal P_{i}(d+1)|\; \Delta P=0\}. \end{equation} For example, $x^3y - xy^3$ and $x^4 - 6x^2 y^2 + y^4$ are members of $H_4(3)$. Importantly, spherical harmonics theory has established that each eigenspace $\mathcal H_{i}$ in \eqref{L2} is indeed the same space as $ H_i(d+1)$. This implies that any function in $L^2(\mathbb S^d)$ can be approximated by an accumulated direct sum of orthogonal homogeneous harmonic polynomials. The following well-known proposition further reveals that the Laplacian constraint in (\ref{eq:harmonic}) is not necessary to characterize the function space on the sphere. \begin{proposition}\label{accudirct} Let $\mathcal P_{m}(d+1)$ be the space of degree $m$ homogeneous polynomials in $d+1$ variables on the unit sphere and $\mathcal H_i$ be the $i$th eigenspace of $L^2(\mathbb S^d)$. Then $$ \mathcal P_{m}(d+1)=\dis\bigoplus_{i=0}^{\floor{m/2}}\mathcal H_{m-2i}, $$ where $\floor{\cdot}$ stands for the floor (round-down) integer function. \end{proposition} From Proposition \ref{accudirct}, one can see that any $L^2$ function on $\mathbb S^d$ can be approximated by homogeneous polynomials.
An important feature of spherical harmonics theory is that it gives reproducing structures on spheres, and we now recall this fact. For the following discussion, we fix a Laplacian eigenspace $\mathcal H_i$ inside $L^2(\mathbb S^d)$, so $\mathcal H_i$ is a finite dimensional Hilbert space of functions on $\mathbb S^d$; such a restriction to a single piece $\mathcal H_i$ is necessary because the entire Hilbert space $L^2(\mathbb S^d)$ does not have a reproducing kernel, given that the Dirac delta functional on $L^2(\mathbb S^d)$ is \emph{not} a bounded functional\footnote{At first sight, this might seem to contradict Wahba's study of splines on $2$-dimensional spheres in \cite{wah81}, but a careful reader will find that Wahba put a finiteness constraint on $L^2(\mathbb S^2)$, and she \emph{never} claimed that $L^2(\mathbb S^2)$ is an RKHS; her RKHS on $\mathbb S^2$ is a subspace of $L^2(\mathbb S^2)$.}. \subsection{Zonal Spherical Functions as Reproducing Kernels in $\mathcal H_i$} On each Laplacian eigenspace $\mathcal H_i$ inside $L^2(\mathbb S^d)$ on the general $d$-dimensional sphere, we define a linear functional $L_x$ on $\mathcal H_i$ such that, for a fixed point $x\in \mathbb S^d$ and each $Y\in \mathcal H_i$, $L_x(Y)=Y(x)$. General spherical harmonics theory tells us that there exists $k_i(x,t)$ such that: $$ L_x(Y)=Y(x)=\dis\int_{\mathbb S^d} Y(t)k_i(x,t)dt,\ x\in \mathbb S^d; $$ \noindent this function $k_i(x,t)$ is the representing function of the functional $L_x$, and classical spherical harmonics theory refers to $k_i(x,t)$ as the \emph{zonal spherical function}. With the eventual formulation of reproducing kernel theory by Aronszajn in 1950, zonal spherical functions were recognized as ``reproducing kernels'' inside $\mathcal H_i\subset L^2(\mathbb S^d)$ in the sense of \cite{aron50}.
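As a concrete instance of the reproducing identity above, consider the first eigenspace $\mathcal H_1$ on $\mathbb S^2$: it is spanned by the three coordinate functions, so its dimension is $3$, and its zonal kernel is $k_1(x,t)=\frac{3}{4\pi}\langle x,t\rangle$ (built from the orthonormal basis $\sqrt{3/(4\pi)}\,t_j$; note $k_1(x,x)=3/\mathrm{vol}(\mathbb S^2)$ on the sphere). The following sympy sketch (ours, for illustration) verifies $L_x(Y)=Y(x)$ for $Y(t)=t_1$:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
x1, x2, x3 = sp.symbols('x1 x2 x3')

# A point t on S^2 in spherical coordinates.
t = (sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), sp.cos(theta))

# Zonal kernel of H_1 on S^2: k_1(x, t) = 3/(4*pi) * <x, t>.
k1 = 3/(4*sp.pi) * (x1*t[0] + x2*t[1] + x3*t[2])

Y = t[0]  # the element Y(t) = t_1 of H_1

# Reproducing identity: integrating Y against k_1(x, .) over S^2 returns Y(x) = x_1.
Lx = sp.integrate(Y * k1 * sp.sin(theta), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
assert sp.simplify(Lx - x1) == 0
```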
Another way to appreciate spherical harmonics theory is that it tells us that each Laplacian eigenspace $\mathcal H_i\subset L^2(\mathbb S^d)$ is actually a reproducing kernel Hilbert space on $\mathbb S^d$, a special case of which was used by Wahba for $d=2$ in \cite{wah81}. Let us recollect some basic facts about zonal spherical functions for the readers' convenience; their proofs can be found in almost any modern spherical harmonics reference, in particular in Chapter IV of \cite{stein71}, which gives the following proposition: \begin{proposition}\label{reprsph} The following properties hold for the zonal spherical function $k_i(x,t)$, which is also the reproducing kernel inside $\mathcal H_i\subset L^2(\mathbb S^d)$ with dimension $a_i$. \begin{itemize} \item[(a)]For a choice of orthonormal basis $\{Y_1, \dots, Y_{a_i}\}$ in $\mathcal H_i$, we can express the kernel as $k_i(x,t)=\dis\sum_{j=1}^{a_i}\overline{Y_{j}(x)}Y_{j}(t)$, but $k_i(x,t)$ does not depend on the choice of basis. \item[(b)]$k_i(x,t)$ is real-valued and symmetric, i.e., $k_i(x,t)=k_i(t,x)$. \item[(c)]For any orthogonal matrix $R\in O(d+1)$, we have $k_i(x,t)=k_i(Rx, Rt)$. \item[(d)] $k_i(x,x)=\dis\frac{a_i}{\mathrm{vol}(\mathbb S^d)}$ for any point $x\in \mathbb S^d$. \item[(e)]$k_i(x,t)\leq \dis\frac{a_i}{\mathrm{vol}(\mathbb S^d)}$ for any $x,\ t\in \mathbb S^d$. \end{itemize} \end{proposition} \begin{remark} \normalfont The above proposition ``\emph{seems}'' obvious from traditional perspectives, as if it could be found in any textbook, so readers with rich experience in RKHS theory might think that we are stating something trivial. However, we want to point out two facts: \begin{itemize} \item [(1)]Function spaces over underlying spaces with different topological structures behave very differently.
Spheres are compact and without boundary, their function spaces admit Laplacian operators whose eigenspaces are finite dimensional, \emph{and} each of these finite dimensional eigenspaces carries a reproducing kernel structure. These coincidences are not expected to happen over general topological spaces. \item[(2)]Relative to the classical topological spaces on which RKHSs are more often used, e.g. unit intervals or vector spaces, spheres are more ``exotic'' topological structures (simply connected, but with nontrivial higher homotopy groups), while intervals and vector spaces are contractible, with trivial homotopy groups. One way to appreciate this result is that classical ``naive'' expectations can still be met, even on spheres, with the aid of spherical harmonics theory. \end{itemize} \end{remark} In the next subsection we discuss the corresponding function space on the compositional domain $\Delta^d$. \subsection{Function Spaces on Compositional Domains}\label{sec:fcomp} With the identification $\Delta^d=\mathbb S^d/\Gamma$, the function space $L^2(\Delta^d)$ can be identified with $L^2(\mathbb S^d/\Gamma)$, i.e., $L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)$. The function space $L^2(\mathbb S^d)$ is well understood via spherical harmonics theory as above, so we want to relate $L^2(\mathbb S^d/\Gamma)$ with $L^2(\mathbb S^d)$ as follows. Notice that a function $h\in L^2(\mathbb S^d/\Gamma)$ is a map from $\mathbb S^d/\Gamma$ to the (real or complex) numbers. Thus a natural associated function $\pi^*(h)\in L^2(\mathbb S^d)$ is given by the following composition of maps: $$ h\circ \pi:\ \ \xymatrix{\mathbb S^d\ar[r]^-{\pi}&\mathbb S^d/\Gamma\ar[r]^{\ \ h}& \mathbb C}. $$ Therefore, the composition $h\circ \pi=\pi^*(h)\in L^2(\mathbb S^d)$ gives rise to a natural embedding of the function space of compositional domains into that of the sphere, $\pi^*:\ L^2(\mathbb S^d/\Gamma)\rightarrow L^2(\mathbb S^d)$.
The embedding $\pi^*$ identifies the Hilbert space of compositional domains as a subspace of the Hilbert space of spheres. A natural question is how to characterize the subspace of $L^2(\mathbb S^d)$ that corresponds to functions on compositional domains. The following proposition states that $f\in \im(\pi^*)$ if and only if $f$ is constant on fibers of the projection map $\pi:\ \mathbb S^d\rightarrow \mathbb S^d/\Gamma$, almost everywhere. In other words, $f$ takes the same values on all $\Gamma$-orbits, i.e., on the sets of points which are connected to each other by ``sign flippings''. \begin{proposition}\label{compfun} The image of the embedding $\pi^*: L^2(\mathbb S^d/\Gamma)\rightarrow L^2(\mathbb S^d)$ consists of the functions $f\in L^2(\mathbb S^d)$ such that, up to a measure zero set, $f$ is constant on $\pi^{-1}(x)$ for every $x\in \mathbb S^d/\Gamma$, where $\pi$ is the natural projection $\mathbb S^d\rightarrow \mathbb S^d/\Gamma$. \end{proposition} We call a function $f\in L^2(\mathbb S^d)$ that lies in the image of the embedding $\pi^*$ a \emph{$\Gamma$-invariant function}. Now we construct the contraction map $\pi_{*}: L^2(\mathbb S^d)\rightarrow L^2(\mathbb S^d/\Gamma)$; this map descends every function on the sphere to a function on the compositional domain. To construct $\pi_*$, it suffices to associate a $\Gamma$-invariant function to every function in $L^2(\mathbb S^d)$. For a point $z\in \mathbb S^d$ and a reflection $\gamma\in \Gamma$, the point $\gamma(z)$ lies in the set $\orb_z^\Gamma$ defined in (\ref{eq:orbit}). Starting with a function $f\in L^2(\mathbb S^d)$, we define the associated $\Gamma$-invariant function $f^{\Gamma}$ as follows: \begin{proposition} Let $f$ be a function in $L^2(\mathbb S^d)$. Then the function $f^\Gamma$ defined by \begin{equation}\label{eq:invfun} f^{\Gamma}(z)=\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}f(\gamma(z)), ~~~ z \in \mathbb S^d, \end{equation} is a $\Gamma$-invariant function.
\end{proposition} \begin{proof} Each fiber of the projection map $\pi:\ \mathbb S^d\rightarrow \mathbb S^d/\Gamma$ is $\orb_z$ for some $z$ in the fiber. For any other point $y$ in the same fiber as $z$ under the projection $\pi$, there exists a reflection $\gamma\in \Gamma$ such that $y=\gamma (z)$. The proposition then follows from the identity $f^{\Gamma}(z)=f^{\Gamma}(\gamma(z))$, which can be easily checked. \end{proof} The contraction $ f\mapsto f^{\Gamma}$ on spheres naturally gives the following map \begin{equation}\label{lowerstar} \pi_*:\ L^2(\mathbb S^d)\rightarrow L^2(\mathbb S^d/\Gamma),\ \text{with}\ f\mapsto f^{\Gamma}. \end{equation} \begin{remark} \normalfont Some readers might argue that each element in an $L^2$ space is a \emph{function class} rather than a function, so that in this sense $\pi_*(f)=f^{\Gamma}$ is not obviously well defined. Note, however, that each element in $L^2(\mathbb S^d)$ can be approximated by polynomials, and $\pi_*$, which is well defined on individual polynomials, induces a well defined map on function classes. \end{remark} \begin{theorem}\label{invfunsp} The contraction map $\pi_*:\ L^2(\mathbb S^d)\rightarrow L^2(\mathbb S^d/\Gamma)$ (defined in Equation (\ref{lowerstar})) has a section given by $\pi^*$, namely the composition $\pi_*\circ \pi^*$ induces the identity map from $L^2(\mathbb S^d/\Gamma)$ to itself. In particular, the contraction map $\pi_*$ is a surjection. \end{theorem} \begin{proof} One way to look at the relation between the two maps $\pi_*$ and $\pi^*$ is through the diagram $\xymatrix{L^2(\mathbb S^d/\Gamma)\ar@<.35ex>[r]^-{\pi^*}&L^2(\mathbb S^d)\ar@<.35ex>[l]^-{\pi_*}}$. The image of $\pi^*$ consists of the $\Gamma$-invariant functions in $L^2(\mathbb S^d)$. Conversely, for a $\Gamma$-invariant function $g\in L^2(\mathbb S^d)$, the map $g\mapsto g^{\Gamma}$ is the identity, i.e. $g=g^{\Gamma}$, and thus the theorem follows.
\end{proof} \begin{remark}\label{dreami} \normalfont Theorem \ref{invfunsp} identifies functions on compositional domains with $\Gamma$-invariant functions in $L^2(\mathbb S^d)$. For any function $f\in L^2(\mathbb S^d)$, we can produce the corresponding $\Gamma$-invariant function $f^{\Gamma}$ by (\ref{eq:invfun}). More importantly, we can ``recover'' $L^2(\Delta^d)$ from $L^2(\mathbb S^d)$ without losing any information. This allows us to construct reproducing kernels of $\Delta^d$ from $L^2(\mathbb S^d)$ in Section \ref{sec:rkhsc}. \end{remark} \subsection{Reduction to Homogeneous Polynomials of Even Degrees}\label{redsurg} We want to understand the finite direct sum space $\bigoplus_{i=0}^{m}\mathcal H_i$ in terms of homogeneous polynomials. Proposition \ref{accudirct} tells us that if $m$ is even, then $\mathcal P_{m}(d+1)=\bigoplus_{i=0}^{m/2}\mathcal H_{2i}$, and that if $m$ is odd then $\mathcal P_{m}(d+1)=\bigoplus_{i=0}^{(m-1)/2}\mathcal H_{2i+1}$, where $\mathcal P_{m}(d+1)$ is the space of degree $m$ homogeneous polynomials in $d+1$ variables. In either case ($m$ even or odd), the largest eigenspace index appearing in the decomposition of $\mathcal P_{m}(d+1)$ equals the degree $m$ itself, as in Proposition \ref{accudirct}. Therefore we can decompose the finite direct sum space $\bigoplus_{i=0}^{m}\mathcal H_i$ into the direct sum of two spaces of homogeneous polynomials: $$ \dis\bigoplus_{i=0}^{m}\mathcal H_i=\mathcal P_{m}(d+1)\bigoplus \mathcal P_{m-1}(d+1). $$ However, we will show that any monomial in which some variable appears with an odd exponent collapses to zero under $\Gamma$-averaging, so only one of the two homogeneous polynomial spaces above ``survives'' the contraction map $\pi_*$. This further simplifies the function space, which in turn facilitates easy computation.
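The collapse of odd monomials under $\Gamma$-averaging announced above can be checked directly in a short numerical sketch: the average $\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}f(\gamma z)$ returns a monomial unchanged when every exponent is even, and returns the zero function as soon as a single exponent is odd.

```python
import itertools

import numpy as np

def gamma_average(f, z):
    # f^Gamma(z) = (1/|Gamma|) * sum over sign flips gamma of f(gamma z)
    d1 = len(z)
    signs = list(itertools.product([1.0, -1.0], repeat=d1))
    return sum(f(np.array(s) * z) for s in signs) / len(signs)

# a monomial prod_i x_i^{alpha_i} as a callable
monomial = lambda alpha: (lambda z: np.prod(z ** np.array(alpha)))

rng = np.random.default_rng(1)
z = rng.normal(size=3)
z /= np.linalg.norm(z)  # a point on S^2

# all exponents even: the Gamma-average leaves the monomial unchanged
even = monomial([2, 4, 0])
assert np.isclose(gamma_average(even, z), even(z))

# one exponent odd: the Gamma-average is identically zero
odd = monomial([2, 3, 0])
assert np.isclose(gamma_average(odd, z), 0.0)
```

The cancellation is exact: summing $\prod_i s_i^{\alpha_i}$ over all sign vectors $s$ factors as $\prod_i(1^{\alpha_i}+(-1)^{\alpha_i})$, which vanishes whenever some $\alpha_i$ is odd.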
Specifically, when working with the accumulated direct sums $\bigoplus_{i=0}^{m}\mathcal H_i$ on spheres, not every function is a meaningful function on $\Delta^d=\mathbb S^d/\Gamma$; e.g., we can find a nonzero function $f\in \bigoplus_{i=1}^{m}\mathcal H_i$ with $f^{\Gamma}=0$. In fact, all of the odd pieces $\mathcal H_m$ with $m$ odd contribute nothing to $L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)$. In other words, the accumulated direct sum $\bigoplus_{i=0}^m\mathcal H_{2i+1}$ is ``killed'' to zero under $\pi_*$, as shown by the following lemma: \begin{lemma}\label{killodd} For every monomial $\prod_{i=1}^{d+1}x_i^{\alpha_i}$ (each $\alpha_i\geq 0$), if there exists $ k$ with $\alpha_k$ odd, then the monomial $\prod_{i=1}^{d+1}x_i^{\alpha_i}$ is a shadow function, that is, $(\prod_{i=1}^{d+1}x_i^{\alpha_i})^{\Gamma}=0$. \end{lemma} An important implication of this lemma is that, since each homogeneous polynomial in $\bigoplus_{i=0}^k\mathcal H_{2i+1}$ is a linear combination of monomials each with at least one odd exponent, it is killed under $\pi_*$. This implies that all ``odd'' pieces in $L^2(\mathbb S^d) = \bigoplus_{i=0}^{\infty} \mathcal H_i$ contribute nothing to $L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)$. Therefore, whenever using spherical harmonics theory to understand function spaces of compositional domains, it suffices to consider only even $i$ for $\mathcal H_{i}$ in $L^2(\mathbb S^d)$. In summary, the function space on the compositional domain $\Delta^d=\mathbb S^d/\Gamma$ has the following eigenspace decomposition: \begin{equation}\label{compfun} L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)=\dis\bigoplus_{i=0}^{\infty}\mathcal H_{2i}^{\Gamma}, \end{equation} \noindent where $\mathcal H_{2i}^{\Gamma}:=\{h\in \mathcal H_{2i},\ h=h^{\Gamma}\}$. \subsection{Reproducing Kernels for Compositional Domains}\label{sec:rkhsc} The main goal of this section is to establish reproducing kernels for compositional data.
Inside each Laplacian eigenspace $\mathcal H_i$ in $L^2(\mathbb S^d)$, recall that Equation (\ref{compfun}) tells us that the $\Gamma$-invariant subspace $\mathcal H_i^{\Gamma}$ can be regarded as a function space on $\Delta^d=\mathbb S^d/\Gamma$. To find a candidate reproducing kernel inside $\mathcal H_i^{\Gamma}$, we first find the representing function for the following linear functional: \subsubsection{ $\Gamma$-invariant Functionals on $\mathcal H_i$} Let $L_z^{\Gamma}$ be the functional on $\mathcal H_i$ defined as follows: for any function $Y\in \mathcal H_i$, \[ L_{z}^{\Gamma}(Y)=Y^{\Gamma}(z)=\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}Y(\gamma z), \] for a given $z\in \mathbb S^d$. We can also define a delta functional on $\mathcal H_i$ by $L_{z}(Y)=Y(z)$ for any $Y\in \mathcal H_i$. One easily sees that $L_z^{\Gamma}$ and $L_z$ agree on the subspace $\mathcal H_i^{\Gamma}$ of $\mathcal H_i$. Also note that $L_z^{\Gamma}$ can be seen as the composed map $L_z^{\Gamma}=L_z\circ\pi_*:\ \mathcal H_i\rightarrow \mathcal H_i^{\Gamma}\rightarrow\mathbb C$ (recall that $\pi_*$ was defined in Equation (\ref{lowerstar})), so although $L_z^{\Gamma}$ is defined on $\mathcal H_i$, it can actually be seen as a ``delta functional'' on $\mathbb S^d/\Gamma=\Delta^d$. To find the representing function for $L_z^{\Gamma}$, we will use zonal spherical functions: Let $k_i(\cdot, \cdot)$ be the reproducing kernel of the eigenspace $\mathcal H_i$. Define the ``compositional'' kernel $k_i^{\Gamma}(\cdot, \cdot)$ for $\mathcal H_i$ as \begin{equation}\label{fake} k_i^{\Gamma}(x, t)=\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma x, t), \ \ \forall x, t\in \mathbb S^d, \end{equation} \noindent from which it is straightforward to check, simply by following the definitions, that $k_i^\Gamma(z,\cdot)$ represents the linear functional $L_{z}^{\Gamma}$.
\begin{remark} \normalfont The above definition of ``compositional kernels'' (in Equation (\ref{fake})) is not merely a trick to get rid of the ``redundant points'' on spheres. This definition is inspired by the notion of ``orbital integrals'' in analysis and geometry. In our case, the ``integral'' is a discrete version, because the ``compact subgroup'' in our situation is replaced by the finite discrete reflection group $\Gamma$. In fact, this kind of ``discrete orbital integral'' construction is not new in statistical learning theory; e.g., in \cite{equimatr}, the authors also used an ``orbital integral'' type of construction to study equivariant matrix valued kernels. \end{remark} At first sight, a compositional kernel is not symmetric on the nose, because we are only ``averaging'' over the group orbit in the first variable of the function $k_i(x,t)$. However, recall that $k_i(x,t)$ is both symmetric and orthogonally invariant by Proposition \ref{reprsph}, so, quite counter-intuitively, compositional kernels are actually symmetric: \begin{proposition}\label{sym} Compositional kernels are symmetric, namely $k_i^{\Gamma}(x, t)= k_i^{\Gamma}(t, x)$. \end{proposition} \begin{proof} Recall that $k_i(x,t)=k_i(t,x)$ and that $k_i(G x,G t)=k_i(x,t)$ for any orthogonal matrix $G$.
Notice that every reflection $\gamma\in \Gamma$ can be realized as an orthogonal matrix; then we have $$ \begin{array}{rcl} k_i^{\Gamma}(x, t)&=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma x, t)\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(t,\gamma x)=\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma^{-1}t,\gamma^{-1}(\gamma x))\ \ \\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma^{-1}t,x)\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma t,x)\\ &=& k_i^{\Gamma}(t, x) \end{array} $$ \end{proof} Recall that $\mathcal H_i^{\Gamma}$ is the space of $\Gamma$-invariant functions inside $\mathcal H_i$, and by Equation (\ref{compfun}), $\mathcal H_i^{\Gamma}$ is the $i$-th subspace of the compositional function space $L^2(\Delta^d)$. Let $w_i(x,t)$ be the reproducing kernel (if it exists) inside $\mathcal H_i^{\Gamma}$, the $\Gamma$-invariant subspace of $\mathcal H_i$. A first naive candidate for $w_i(x,t)$ might be the spherical reproducing kernel $k_i(x,t)$, but $k_i(x,t)$ is not $\Gamma$-invariant. However, it turns out that the compositional kernels are actually reproducing with respect to all $\Gamma$-invariant functions in $\mathcal H_i$, while being $\Gamma$-invariant in both arguments. \begin{theorem}\label{reprcomp} Inside $\mathcal H_i$, the compositional kernel $k_i^{\Gamma}(x, t)$ is $\Gamma$-invariant in both arguments $x$ and $t$, and moreover $k_i^{\Gamma}(x, t)=w_i(x,t)$, i.e., the compositional kernel is the reproducing kernel for $\mathcal H^{\Gamma}_i$. \end{theorem} \begin{proof} First, by the definition of $k_i^{\Gamma}(x, t)$, it is $\Gamma$-invariant in the first argument $x$; by the symmetry of $k_i^{\Gamma}(x, t)$ in Proposition \ref{sym}, it is then also $\Gamma$-invariant in the second argument $t$; hence the compositional kernel $k_i^{\Gamma}(x, t)$ is a kernel inside $\mathcal H^{\Gamma}_i$. Secondly, let us prove the reproducing property of $k_i^{\Gamma}(x, t)$.
For any $\Gamma$-invariant function $f\in \mathcal H_i^{\Gamma}\subset \mathcal H_i$, $$ \begin{array}{rcl} <f(t),k_i^{\Gamma}(x, t)> &=& <f(t),\dis\sum_{\gamma\in \Gamma}\frac{1}{|\Gamma|}k_i(\gamma x, t)>\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}<f(t),k_i(\gamma x, t)>\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}f(\gamma x)=\dis\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}f(x)\ \ (f\ \text{is}\ \Gamma\text{-invariant})\\ &=&f(x) \end{array} $$ \end{proof} \begin{remark} \normalfont Theorem \ref{reprcomp} justifies that a compositional kernel is actually the reproducing kernel for the $\Gamma$-invariant functions inside $\mathcal H_i$, i.e., for $\mathcal H_i^{\Gamma}$. Although the compositional kernel $k_i^{\Gamma}(x, t)$ is symmetric, as proved in Proposition \ref{sym}, we will still use $w_i(x,t)$ to denote $k_i^{\Gamma}(x, t)$ because $w_i(x,t)$ is, notationally speaking, more visually symmetric than the notation for compositional kernels. \end{remark} Recall that functions on compositional domains are identified with $\Gamma$-invariant functions on $\mathbb S^d$, and points on compositional domains are $\Gamma$-orbits on the sphere, so reproducing kernels for $\Delta^d$ should reproduce $\Gamma$-invariant functions, as Theorem \ref{reprcomp} shows. \paragraph{Compositional RKHS and Spaces of Homogeneous Polynomials} Recall that based on Theorem \ref{accudirct}, the accumulated direct sum of even (resp.\ odd) indexed eigenspaces can be expressed as a space of homogeneous polynomials of a fixed degree. Further recall that the direct sum decomposition $L^2(\mathbb S^d)=\bigoplus_{i=0}^{\infty}\mathcal H_i$ is an orthogonal one, and so is the direct sum $L^2(\Delta^d)=\bigoplus_{i=0}^{\infty}\mathcal H_i^{\Gamma}$. By the orthogonality between eigenspaces, the reproducing kernel for the finite direct sum $\bigoplus_{i=0}^m\mathcal H^{\Gamma}_i$ is naturally the sum $\sum_{i=0}^m w_i$. Note that by Lemma \ref{killodd}, it suffices to consider the even pieces of eigenspaces $\mathcal H_{2i}$.
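These constructions can be checked concretely in the lowest-dimensional case $d=1$, a toy instantiation of our own (not from the paper): there $\mathcal H_i=\mathrm{span}\{\cos i\theta,\sin i\theta\}$ with zonal reproducing kernel $k_i(\theta,\phi)=\cos(i(\theta-\phi))/\pi$, and $\Gamma$ consists of the four sign flips of $(\cos\theta,\sin\theta)$. The sketch below forms $k_i^{\Gamma}$ by orbit averaging and verifies both the symmetry of Proposition \ref{sym} and the reproducing property of Theorem \ref{reprcomp} for an even, $\Gamma$-invariant eigenfunction.

```python
import numpy as np

i = 2  # an even eigenspace index, so H_i contains Gamma-invariant functions

# zonal reproducing kernel of H_i = span{cos(i t), sin(i t)} on S^1
k = lambda a, b: np.cos(i * (a - b)) / np.pi

def k_gamma(a, b):
    # average over the four sign flips of (cos t, sin t):
    # t -> t, -t, pi - t, pi + t
    orbit = [a, -a, np.pi - a, np.pi + a]
    return sum(k(g, b) for g in orbit) / 4.0

x, t = 0.3, 1.1
# symmetry of the compositional kernel
assert np.isclose(k_gamma(x, t), k_gamma(t, x))

# reproducing property for the Gamma-invariant eigenfunction f(t) = cos(2 t)
f = lambda a: np.cos(i * a)
n = 4096
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
inner = np.sum(f(phi) * k_gamma(x, phi)) * (2.0 * np.pi / n)  # <f, k^Gamma(x,.)>
assert np.isclose(inner, f(x), atol=1e-6)
```

The inner product is computed by a uniform Riemann sum, which is essentially exact here since the integrand is a low-degree trigonometric polynomial.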
Finally, we give a formal definition of ``the degree $m$ reproducing kernel Hilbert space'' on $\Delta^d$, consisting of degree $2m$ homogeneous polynomials: \begin{definition}\label{lthrkhs} Let $w_i$ be the reproducing kernel for the $\Gamma$-invariant functions in the $i$th eigenspace $\mathcal H_i \subset L^2(\mathbb S^d)$. The degree $m$ compositional reproducing kernel Hilbert space is defined to be the finite direct sum $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$, and the reproducing kernel for the degree $m$ compositional reproducing kernel Hilbert space is \begin{equation}\label{mcompker} \omega_m(\cdot, \cdot) = \sum_{i=0}^m w_{2i}(\cdot,\cdot); \end{equation} \noindent then the degree $m$ RKHS for the compositional domain is the pair $\big(\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}, \omega_{m} \big)$, where $\mathcal H^{\Gamma}_{2i}$ is the space of $\Gamma$-invariant functions in $\mathcal H_{2i}$. \end{definition} Recall that the direct sum $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ can be identified with a subspace of $\bigoplus_{i=0}^{m}\mathcal H_{2i}$, which is isomorphic to the space of degree $2m$ homogeneous polynomials, so each function in $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ can be written as a degree $2m$ homogeneous polynomial, including the reproducing kernel $\omega_m(x,\cdot)$, although this is not obvious from Equation (\ref{mcompker}). Notice that for a point $(x_1,x_2,\dots, x_{d+1})\in \mathbb S^d$, the sum $\sum_{i=1}^{d+1}x_i^2=1$, so one can always use this sum to turn each element of $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ into a homogeneous polynomial. For example, $x^2+1$ is not a homogeneous polynomial, but each point $(x,y,z)\in \mathbb S^2$ satisfies $x^2+y^2+z^2=1$, so we have $x^2+1=x^2+x^2+y^2+z^2=2x^2+y^2+z^2$, which is a homogeneous polynomial on the sphere $\mathbb S^2$. In fact, we can say something more about $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$.
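The homogenization trick in this example is immediate to confirm numerically: on $\mathbb S^2$, the inhomogeneous polynomial $x^2+1$ and its homogenization $2x^2+y^2+z^2$ agree at every point.

```python
import numpy as np

rng = np.random.default_rng(2)
p = rng.normal(size=3)
p /= np.linalg.norm(p)  # a point (x, y, z) on the unit sphere S^2
x, y, z = p

# on S^2, x^2 + 1 = x^2 + (x^2 + y^2 + z^2) = 2 x^2 + y^2 + z^2
assert np.isclose(x ** 2 + 1.0, 2.0 * x ** 2 + y ** 2 + z ** 2)
```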
Recall that Lemma \ref{killodd} ``killed'' the contributions from the ``odd pieces'' $\mathcal H_{2k+1}$ under the contraction map $\pi_*:\ L^2(\mathbb S^d)\rightarrow L^2(\Delta^d)$. However, even inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$, only a subspace can be identified with a compositional function space, namely the $\Gamma$-invariant homogeneous polynomials. The following proposition characterizes which homogeneous polynomials inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ come from the subspace $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$: \begin{proposition}\label{mpolynomial} Given any element $\theta\in \bigoplus_{i=0}^m\mathcal H^{\Gamma}_{2i}\subset \bigoplus_{i=0}^m\mathcal H_{2i}\subset L^2(\mathbb S^d)$, there exists a degree $m$ homogeneous polynomial $p_m$, such that \begin{equation}\label{mplynomial} \theta(x_1, x_2,\dots, x_{d+1})=p_m(x_1^2, x_2^2,\cdots, x_{d+1}^{2}). \end{equation} \end{proposition} \begin{proof} Note that $\theta$ is a degree $2m$ homogeneous $\Gamma$-invariant polynomial, so each monomial in $\theta$ has the form $\prod_{i=1}^{d+1}x_i^{a_i}$ with $\sum_{i=1}^{d+1}a_i=2m$. Suppose $\theta$ contained a monomial $\prod_{i=1}^{d+1}x_i^{a_i}$ with nonzero coefficient such that $a_i$ is odd for some $1\leq i\leq d+1$. Since $\theta$ is $\Gamma$-invariant, i.e. $\theta=\theta^{\Gamma}$, this monomial would have to satisfy $\prod_{i=1}^{d+1}x_i^{a_i}=(\prod_{i=1}^{d+1}x_i^{a_i})^{\Gamma}$, but the term $(\prod_{i=1}^{d+1}x_i^{a_i})^{\Gamma}$ is zero by Lemma \ref{killodd}, a contradiction. Hence $\theta$ is a linear combination of monomials of the form $\prod_{i=1}^{d+1}x_i^{a_i}=\prod_{i=1}^{d+1}(x_i^2)^{a_i/2}$ with each $a_i$ even and $\sum_i a_i/2=m$, and the proposition follows.
\end{proof} Recall that the degree $m$ compositional RKHS is $\big(\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}, \omega_{m} \big)$ from Definition \ref{lthrkhs}, and $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ consists of degree $2m$ homogeneous polynomials, of which $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ is a subspace. Proposition \ref{mpolynomial} tells us that one also has a concrete description of the subspace $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$, via degree $m$ homogeneous polynomials in the squared variables. \section{Applications of Compositional Reproducing Kernels}\label{sec:app} The availability of compositional reproducing kernels opens the door to many statistical/machine learning techniques for compositional data analysis. Here we present two application scenarios, as an initial demonstration of the influence of RKHS theory on compositional data analysis. The first application is a representer theorem, which is motivated by newly developed kernel methods in machine learning, especially by the rich theory of vector valued regression (\cite{vecfun} and \cite{vecman}). The second is the construction of exponential families on compositional domains. The parameters of a compositional exponential model live in a compositional RKHS, which exists for each degree $m$, so we obtain infinitely many exponential distributions on compositional domains. To the authors' knowledge, these are the first class of nontrivial examples of explicit distributions on compositional domains with \emph{non-vanishing} densities on the boundary. \subsection{Compositional Representer Theorems} Beyond their successful applications in traditional spline models, representer theorems are increasingly needed due to the new kernel techniques in modern machine learning. We will consider minimal norm interpolations and least squares regularizations in this paper.
Regularizations are especially important in many situations, like structured prediction, multi-task learning, multi-label classification and related themes that attempt to exploit output structure (see \cite{vecman}). A common theme in the above-mentioned contexts is non-parametric estimation of a vector-valued function $f:\ \mathcal X\rightarrow \mathcal Y$ between a structured input space $\mathcal X$ and a structured output space $\mathcal Y$. An important framework adopted in those analyses is that of ``Vector-valued Reproducing Kernel Hilbert Spaces'' in \cite{vecfun}. Unsurprisingly, representer theorems not only are necessary, but also call for further generalizations in modern machine learning: \begin{itemize} \item [(i)]In classical spline models, the most frequently used versions of representer theorems concern scalar valued kernels, but besides the above-mentioned scenario $f:\ \mathcal X\rightarrow \mathcal Y$ in the manifold regularization context, in which vector valued representer theorems are needed, higher tensor valued kernels and their corresponding representer theorems are also desirable. In \cite{equimatr}, matrix valued kernels and their representer theorems are studied, with applications in image processing. \item[(ii)]Another related application lies in the popular kernel mean embedding theories, in particular conditional mean embedding. Conditional mean embedding theory essentially gives an operator from one RKHS to another (see \cite{condmean}), and to learn such operators, vector-valued regressions plus the corresponding representer theorems are used. \end{itemize} In the vector-valued regression framework, an important assumption discussed in representer theorems is a \emph{linear independence} condition (see \cite{vecfun}). Our construction of compositional RKHS yields finite dimensional spaces of polynomials, so the linear independence conditions are not automatically satisfied on the nose; we address this problem in this paper.
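To see that linear independence is genuinely at issue in a finite dimensional RKHS, here is a toy illustration with the single homogeneous kernel piece $(x\cdot y)^{2m}$ standing in for a reproducing kernel (our own simplification, not the paper's $\omega_m$): when $m$ is too small, the kernel sections at $n$ points cannot be linearly independent, since they live in a polynomial space of dimension smaller than $n$.

```python
import numpy as np

# eight distinct points on the first orthant of S^2 (compositional representatives)
P = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0],
              [1, 0, 1], [0, 1, 1], [1, 1, 1], [2, 1, 1]], dtype=float)
X = P / np.linalg.norm(P, axis=1, keepdims=True)

# Gram matrix of the stand-in kernel sections y -> (x_i . y)^{2m}
gram = lambda m: (X @ X.T) ** (2 * m)

# m = 0: every section is the constant function, so the rank is 1
assert np.linalg.matrix_rank(gram(0)) == 1

# m = 1: sections y -> (x_i . y)^2 live in the 6-dimensional space of
# quadratic forms in 3 variables, so at most 6 of the 8 can be independent
assert np.linalg.matrix_rank(gram(1)) <= 6
```

Linear independence can thus only hold once $m$ is large relative to the number of data points, which is exactly the regime of the theorem below.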
Instead of dealing with vector-valued kernels, we will only focus on the special case of scalar valued (reproducing) kernels, but the issue can be clearly seen in this special case. \subsubsection{Linear Independence of Compositional Reproducing Kernels}\label{twist} The compositional RKHS constructed in Section \ref{sec:rkhs} takes the form $\big(\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}, \omega_{m} \big)$, indexed by $m$. Given the finite dimensional nature of the compositional RKHS, it is not even clear whether different points yield different functions $\omega_{m}(x_i, \cdot)$ inside $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$. We will give a positive answer when $m$ is large enough. Given a set of distinct compositional data points $\{x_i\}_{i=1}^n\subset \Delta^d$, we will show that the corresponding set of reproducing functions $\{\omega_{m}(x_i, \cdot)\}_{i=1}^n$ forms a linearly independent set inside $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ if $m$ is large enough. \begin{theorem}\label{abundencelm} Let $\{x_i\}_{i=1}^n$ be distinct data points on a compositional domain $\Delta^d$. Then there exists a positive integer $M$, such that for any $m>M$, the set of functions $\{\omega_{m}(x_i, \cdot)\}_{i=1}^n$ is a linearly independent set in $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$. \end{theorem} \begin{proof} The quotient map $c_\Delta: \mathbb S^d\rightarrow \Delta^d$ factors through a projective space, i.e., $c_\Delta:\ \mathbb S^d\rightarrow\mathbb P^d\rightarrow \Delta^d$. The main idea is to prove a stronger statement: distinct data points in $\mathbb P^d$ give linear independence of \emph{projective kernels} for large enough $m$, where projective kernels are reproducing kernels on $\mathbb P^d$, whose definition is given in Section \ref{abundprf}. We then construct two vector subspaces $V_1^{m}$ and $V_2^{m}$ and a linear map $g_m$ from $V_1^{m}$ to $V_2^{m}$.
The key trick is that the matrix representing the linear map $g_m$ becomes diagonally dominant when $m$ is large enough, which forces the spanning sets of both $V_1^{m}$ and $V_2^{m}$ to be linearly independent. More details of the proof are given in Section \ref{abundprf}. \end{proof} In the proof of Theorem \ref{abundencelm}, we make use of the homogeneous polynomials $(y_i\cdot {t})^{2m}$, which do \emph{not} live inside a single piece $\mathcal H_{2i}$; this is why we use the direct sum space $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ for our construction of the RKHS. One might wonder whether the same argument works without projective kernels; the issue is that the matrix might then have $\pm 1$ off-diagonal entries, and would fail to be diagonally dominant no matter how large $m$ grows. We reduce to the linear independence of projective kernels at distinct points because reproducing kernels at distinct compositional data points are linear combinations of distinct projective kernels; in this way, the off-diagonal terms are powers of inner products of vectors that are neither antipodal nor identical, so the $m$-th powers of the off-diagonal terms go to zero as $m$ goes to infinity. Another consequence of Theorem \ref{abundencelm} is that $\omega_{m}(x_i,\cdot)\neq \omega_{m}(x_j, \cdot)$ whenever $i\neq j$, for $m$ large enough. Not only does a large enough $m$ separate points at the level of reproducing kernels, it also gives each data point its ``own dimension''. \subsubsection{Minimal Norm Interpolation and Least Squares Regularization}\label{represent} Once the linear independence (again, as required in \cite{vecfun}) is established in Theorem \ref{abundencelm}, it is an easy corollary to establish the representer theorems for minimal norm interpolation and least squares regularization. Nothing is new from the point of view of general RKHS theory, but we include these theorems and proofs for completeness.
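The diagonal-dominance mechanism behind Theorem \ref{abundencelm} can be illustrated numerically (a toy version with the stand-in matrix $\big((x_i\cdot x_j)^{2m}\big)_{ij}$, not the actual $V_1^m$, $V_2^m$, and $g_m$ of Section \ref{abundprf}): for distinct, non-antipodal unit vectors the off-diagonal entries decay geometrically in $m$, so the matrix eventually becomes strictly diagonally dominant, hence nonsingular.

```python
import numpy as np

# five well-separated points on the first orthant of S^2
P = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
X = P / np.linalg.norm(P, axis=1, keepdims=True)

m = 20
G = X @ X.T          # pairwise inner products; off-diagonal values lie in [0, 1)
M = G ** (2 * m)     # entries (x_i . x_j)^{2m}; diagonal entries equal 1

# off-diagonal entries shrink geometrically in m, giving strict diagonal
# dominance and hence nonsingularity once m is large enough
off = M - np.diag(np.diag(M))
assert np.all(np.abs(off).sum(axis=1) < np.diag(M))
assert np.linalg.matrix_rank(M) == len(X)
```

A strictly diagonally dominant matrix is nonsingular (Levy-Desplanques), which is exactly what forces the spanning sets in the proof to be linearly independent.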
Again, we will focus on scalar-valued (reproducing) kernels and functions, instead of vector-valued kernels and regressions. However, Theorem \ref{abundencelm} sheds important light on linear independence issues, and interested readers can generalize these compositional representer theorems to vector-valued cases by following \cite{vecfun}. The first representer theorem we provide is a solution to the minimal norm interpolation problem: for a fixed set of distinct points $\{x_i\}_{i=1}^n$ in $\Delta^d$ and a set of numbers $y=\{y_i\in \mathbb R\}_{i=1}^n$, let $I_y^{m}$ be the set of functions that interpolate the data \[ I_y^m = \{f\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}:\ f(x_i)=y_i\}, \] and our goal is to find $f_0$ with minimum norm, i.e., \[ \norm{f_0}=\inf\{\norm{f}, f\in I_y^{m}\}.\] \begin{theorem}\label{minorm} Choose $m$ large enough so that the reproducing kernels $\{\omega_m(x_i,t)\}_{i=1}^n$ are linearly independent; then the unique solution of the minimal norm interpolation problem $\min\{\norm{f},f\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}:\ f(x_i)=y_i\}$ is given by the linear combination of the kernels: $$ f_0(t)=\dis\sum_{i=1}^nc_i \; \omega_{m}(x_i,t) $$ where $\{c_i\}_{i=1}^n$ is the unique solution of the following system of linear equations: $$ \dis\sum_{j=1}^n\omega_{m}(x_i,x_j)c_j=y_i, \ \ 1\leq i\leq n. $$ \end{theorem} \begin{proof} For any other $f$ in $I_y^{m}$, define $g=f-f_0$. By considering the decomposition $\norm{f}^2=\norm{g+f_0}^2=\norm{g}^2+2<f_0,g>+\norm{f_0}^2$, one can argue that the cross term $<f_0,g>=0$. Details can be found in Section \ref{representerproofs}. We point out that the linear independence of the reproducing kernels guarantees the existence and uniqueness of $f_0$.
\end{proof} The second representer theorem is for a more realistic scenario with $\ell_2$ regularization, which has the following objective: \begin{equation}\label{l2obj} \sum_{i=1}^n |f(x_i)-y_i|^2+\mu\norm{f}^2. \end{equation} The goal is to find the $\Gamma$-invariant function $f_{\mu}\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ that minimizes (\ref{l2obj}). The solution to this problem is provided by the following representer theorem: \begin{theorem}\label{regularization} For a set of distinct compositional data points $\{x_i\}_{i=1}^n$, choose $m$ large enough such that the reproducing kernels $\{\omega_{m}(x_i, t)\}_{i=1}^n$ are linearly independent. Then the solution to (\ref{l2obj}) is given by \[ f_{\mu}(t) =\sum_{i=1}^n c_i \; \omega_{m}(x_i,t), \] where $\{c_i\}_{i=1}^n$ is the solution of the following system of linear equations: \[ \mu c_i+\sum_{j=1}^n \omega_{m}(x_i,x_j)c_j=y_i,\ \ 1\leq i\leq n. \] \end{theorem} \begin{proof} Details of this proof can be found in Section \ref{representerproofs}, but we point out how the linear independence condition plays a role here. In the middle of the proof we need to show that $\mu f_{\mu}(t)=\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i)) \omega_{m}(x_i,t)\big]$, where $f_{\mu}=\sum_{i=1}^n \omega_{m}(x_i,t)c_i$. We use the linear independence in Theorem \ref{abundencelm} to establish the equivalence between this linear system in $\{c_i\}_{i=1}^n$ and the one given in the theorem. \end{proof} \subsection{Compositional Exponential Family} With the construction of the RKHS in hand, one can produce exponential families using the technique developed in \cite{cs06}.
Recall that for a function space $\mathcal H$ with inner product $<\cdot,\cdot>$ on a general topological space $\mathcal X$, whose reproducing kernel is given by $k(x, \cdot)$, the exponential family density with parameter $\theta\in \mathcal H$ is given by: \[ p(x, \theta)=\exp\{<\theta(\cdot),k(x,\cdot)>-g(\theta) \},\ \] where $g(\theta)=\log \dis\int_{\mathcal{X}}\exp\big(<\theta(\cdot),k(x,\cdot) >\big) dx$. For compositional data we define the density of the $m$th exponential family as \begin{equation}\label{expfamily} p_{m}(x, \theta)=\exp\{<\theta(\cdot),\omega_{m}(x,\cdot)>-g(\theta) \},\ \forall x \in \mathbb S^d/\Gamma, \end{equation} where $\theta\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ and $g(\theta)=\log \int_{\mathbb S^d/\Gamma}\exp(<\theta(\cdot),\omega_{m}(x,\cdot) >) dx$. Note that this density can be made more explicit by using homogeneous polynomials. Recall that any function in $\bigoplus_{i=0}^m\mathcal H_{2i}^{\Gamma}$ can be written as a degree $m$ homogeneous polynomial in the \emph{squared} variables by Proposition \ref{mpolynomial}. Thus the density in (\ref{expfamily}) can be simplified to the following form: for $x=(x_1,\dots, x_{d+1})\in \mathbb S^{d}_{\geq 0}$, \begin{equation}\label{mpoly} p_{m}(x,\theta)=\exp\{s_{m}(x_1^2, x_2^2, \dots, x_{d+1}^2; \theta)-g(\theta)\}, \end{equation} where $s_m$ is a polynomial in the squared variables $x_i^2$ with coefficients determined by $\theta$. Note that $s_m$ is invariant under ``sign-flippings'', and the normalizing constant can be computed via integration over the entire sphere as follows: $$ g(\theta)=\log\int_{\mathbb S^d/\Gamma}\exp(s_m)dx=\log\Big(\frac{1}{|\Gamma|}\int_{\mathbb S^d}\exp(s_m)dx\Big). $$ Fitting the model in (\ref{mpoly}) to compositional data can be done via maximum likelihood estimation or the regression method proposed by \cite{expdirect} for data on the unit sphere, i.e., directional data. Further development of the model estimation is suggested as future work.
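A minimal numerical sketch of the density (\ref{mpoly}) for $d=2$ and $m=2$, with an illustrative symmetric coefficient matrix of our own choosing (an assumption, not one of the parameters used in the paper): we evaluate $s_m$ on the squared variables and approximate the normalizing constant by Monte Carlo over the sphere, using $\int_{\mathbb S^d/\Gamma}=|\Gamma|^{-1}\int_{\mathbb S^d}$.

```python
import numpy as np

# illustrative symmetric coefficient matrix for s_m (not from the paper)
Theta = np.array([[-2.0,  4.5,  4.5],
                  [ 4.5, -2.0, -1.0],
                  [ 4.5, -1.0, -3.0]])

def s_m(U):
    # s_m(x_1^2, x_2^2, x_3^2; theta) = u^T Theta u for each row u of U
    return np.einsum('ni,ij,nj->n', U, Theta, U)

# Monte Carlo estimate of exp(g(theta)) = (1/|Gamma|) * int_{S^2} exp(s_m) dx
rng = np.random.default_rng(4)
Z = rng.normal(size=(200000, 3))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)  # uniform sample on S^2
exp_g = (1.0 / 8.0) * 4.0 * np.pi * np.exp(s_m(Z ** 2)).mean()  # |Gamma| = 2^3

def density(x):
    # p_m(x, theta) for a unit vector x in the first orthant
    return float(np.exp(s_m((x ** 2)[None, :]))[0]) / exp_g

x0 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
assert exp_g > 0.0 and np.isfinite(exp_g)
assert density(x0) > 0.0
```

Because $s_m$ depends on $x$ only through the squared coordinates, the density is automatically invariant under sign flips, so it descends to a well-defined density on $\Delta^2$.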
Figure \ref{fig:expfamily} displays three examples of densities of the form (\ref{expfamily}). The three densities have the following parameters $\theta$: \begin{eqnarray*} \theta_1 &=& -2 x_1^4 -2 x_2^4 -3 x_3^4 + 9 x_1^2 x_2^2 + 9 x_1^2 x_3^2 -2 x_2^2 x_3^2\\ \theta_2 &=& - x_1^4 - x_2^4 - x_3^4 - x_1^2 x_2^2 - x_1^2 x_3^2 - x_2^2 x_3^2\\ \theta_3 &=& - 3 x_1^4 - 2 x_2^4 - x_3^4 +9 x_1^2 x_2^2 - 5 x_1^2 x_3^2 - 5 x_2^2 x_3^2 \end{eqnarray*} The various shapes of the densities in the figure imply that the compositional exponential family can be used to model data with a wide range of locations and correlation structures. \begin{figure}[h] \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{expfamily1.png} \caption{$p_4(x,\theta_1)$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{expfamily2.png} \caption{$p_4(x,\theta_2)$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{expfamily3.png} \caption{$p_4(x,\theta_3)$} \end{subfigure} \caption{Three example densities from the compositional exponential family. See text for the specification of the parameters $\theta_i, i = 1, \ldots, 3.$ }\label{fig:expfamily} \end{figure} \section{Discussion}\label{sec:conclude} The key point of this work is to use projective and spherical geometries to reinterpret compositional domains, which allows us to construct reproducing kernels for compositional data points by using spherical harmonics under group actions. With the rapid development of modern machine learning theory, especially kernel techniques and kernel mean embedding theories, a sequence of questions can be asked about how to introduce modern kernel theories into compositional data analysis. Historically, Kent models were introduced in directional statistics because of their geometric intuition (concentration) and their applications in regression analysis, as in \cite{sphkentreg}.
Such features are also attractive to compositional data researchers, for example in \cite{sw11} and \cite{sw12}. In either case, an important notion is that of the mean or expectation: in all of the work related to Kent models, whether in directional statistics or in compositional data analysis, the hope, in inference or regression analysis, is to find the mean or expectation in the \emph{underlying space} where the data points live; for instance, the mean of a Kent distribution is a \emph{point on the sphere}. Our approach suggests a new direction in compositional data analysis, by which we mean the new kernel techniques. If we look for the mean of a compositional distribution \emph{inside} the compositional domain, then different representations of compositional domains (simplices, spherical quotients, first orthant spheres, etc.) will give different mean points on compositional domains. However, the new kernel techniques suggest a new way of understanding means and expectations: instead of finding the \emph{physical mean point} on the compositional domain, a replacement for the expectation is the kernel mean $\mathbb E[k(X,\cdot)]$, which is surveyed in great detail in \cite{kermean}. We gave a construction of kernel exponential models for compositional domains, but we did not discuss how to compute means and variance-covariance matrices for those exponential models. Of course, one can attempt to find the mean point on the compositional domain, but based on the philosophy of \cite{kermean}, one should compute the kernel mean $\mathbb E[k(X,\cdot)]$ and the cross-covariance operator as replacements for the traditional means and variance-covariances of multivariate analysis. In forthcoming work, the authors will develop further techniques to address kernel means and cross-covariance operators for compositional exponential models, by applying deeper functional analysis techniques.
\appendix \section{Supplementary Proofs} \subsection{Proof of Central Limit Theorems on Integral Squared Errors (ISE) in Section \ref{compdensec}}\label{cltprf} \begin{assumption}\label{kband} For all kernel density estimators and bandwidth parameters in this paper, we assume the following: \begin{itemize} \item[{\bf{H1}}] The kernel function $K:[0,\infty)\rightarrow [0,\infty)$ is continuous, and both $\lambda_d(K)$ and $\lambda_d(K^2)$ are bounded for $d\geq 1$, where $\lambda_d(K)=2^{d/2-1}\mathrm{vol}(S^d)\dis\int_{0}^{\infty}K(r)r^{d/2-1}dr$. \item[\bf{H2}] If a function $f$ on $\mathbb S^d\subset \mathbb R^{d+1}$ is extended to all of $\mathbb R^{d+1}/\{0\}$ via $f(x)=f(x/\norm{x})$, then the extended function $f$ is required to have bounded first three derivatives. \item[\bf{H3}] The bandwidth parameter satisfies $h_n\rightarrow 0$ and $nh_n^d\rightarrow\infty$ as $n\rightarrow\infty$. \end{itemize} \end{assumption} Let $f$ be the function extended from $\mathbb S^d$ to $\mathbb R^{d+1}/\{0\}$ via $f(x)=f(x/\norm{x})$, and let \[ \phi(f,x)=-x^T \nabla f(x) + d^{-1}\big(\nabla^2 f(x) - x^T (\mathcal H_xf) x\big) = d^{-1}\mathrm{tr} [\mathcal H_xf(x)], \] where $\mathcal H_x f$ is the Hessian matrix of $f$ at the point $x$. The term $b_d(K)$ in the statement of Theorem \ref{ourclt} is defined to be \[ b_d(K)=\dis\frac{\dis\int_0^{\infty}K(r)r^{d/2}dr}{\dis\int_0^{\infty}K(r)r^{d/2-1}dr}, \] and the term $\phi(h_n)$ in the statement of Theorem \ref{ourclt} is defined to be \[ \phi(h_n)=\dis\frac{4b_d(K)^2}{d^2}\sigma_x^2h_n^4. \] Proof of Theorem \ref{ourclt}: \begin{proof} The strategy of \cite{chineseclt} in the directional set-up follows that of \cite{hallclt}, whose key idea is to give asymptotic bounds for degenerate U-statistics, so that one can use martingale theory to derive the central limit theorem.
The step where the finite support condition was used in \cite{chineseclt} is in proving the asymptotic bound $E(G_n^2(X_1,X_2))=O(h^{7d})$, where $G_n(x,y)=E[H_n(X,x)H_n(X,y)]$ with $H_n(x,y)=\dis\int_{\mathbb S^d}K_n(z,x)K_n(z,y)dz$ and the centered kernel $K_n(x,y)=K[(1-x'y)/h^2]-E\{K[(1-x'X)/h^2]\}$. In that proof, they show that the term \[ \begin{array}{rcl} T_1&=&\dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)dy\times\\ &&\left\{ \dis\int_{\mathbb S^d}f(z)dz\dis\int_{\mathbb S^d}K[(1-u'x)/h^2]K[(1-u'z)/h^2]du \cdot \int_{\mathbb S^d}K[(1-u'y)/h^2]K[(1-u'z)/h^2]du \right\}^2 \end{array} \] satisfies $T_1=O(h^{7d})$. In this step, the finite support condition was used substantially to obtain an upper bound for $T_1$. The idea for avoiding this assumption is based on an observation in \cite{portclt}, which concerns only the directional-linear CLT, whose result cannot be applied directly to the purely directional case. Based on the method provided in Lemma 10 of \cite{portclt}, one can easily deduce the following asymptotic equivalence: $$ \dis\int_{\mathbb S^d}K^j(\frac{1-x^Ty}{h^2})\phi^i(y)dy\sim h^d\lambda_d(K^j)\phi^i(x), $$ where $\lambda_d(K^j)=2^{d/2-1}\mathrm{vol}(\mathbb S^{d-1})\dis\int_{0}^{\infty}K^j(r)r^{d/2-1}dr$. As a special case we have: $$ \dis\int_{\mathbb S^d}K^2(\dis\frac{1-x^Ty}{h^2})dy\sim h^d\lambda_d(K^2)C,\ \text{with}\ C\ \text{being a positive constant}.
$$ Now we proceed with the proof without the finite support condition: $$ \begin{array}{rcl} T_1&=&\dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)dy\\ &&\times\left\{ \dis\int_{\mathbb S^d}f(z)dz\dis\int_{\mathbb S^d}K[(1-u'x)/h^2]K[(1-u'z)/h^2]du \cdot \int_{\mathbb S^d}K[(1-u'y)/h^2]K[(1-u'z)/h^2]du \right\}^2\\ &\sim& \dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)dy\left\{\int_{\mathbb S^d}f(z)\big[\lambda_d(K)h^dK(\frac{1-x^Tz}{h^2})\big]\times \big[\lambda_d(K)h^dK(\dis\frac{1-y^Tz}{h^2})\big]dz \right\}^2\\ &\sim& \lambda_d(K)^4h^{4d}\dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)\big[ \lambda_d(K)h^d K(\dis\frac{1-x^Ty}{h^2})f(y)\big]^2dy\\ &=&\lambda_d(K)^6h^{6d}\dis\int_{\mathbb S^d} \left\{\int_{\mathbb S^d} K^2(\dis\frac{1-x^Ty}{h^2})f^3(y)dy \right\}f(x)dx\\ &\sim&\lambda_d(K)^6h^{6d}\dis\int_{\mathbb S^d} \lambda_d(K^2)h^dC\cdot f^3(x)f(x)dx\\ &=&C\lambda_d(K)^6\lambda_d(K^2)h^{7d}\dis\int_{\mathbb S^d}f^4(x)dx=O(h^{7d}). \end{array} $$ Thus we have proved $T_1=O(h^{7d})$ without the finite support assumption, and the rest of the proof follows through as in \cite{chineseclt}. \end{proof} Observe the identity: \begin{equation}\label{csden} \dis\int_{\mathbb S^d_{\geq 0}}(\hat{p}_n-p)^2dx=\dis|\Gamma|\int_{\mathbb S^d}(\hat{f}_n-\tilde{p})^2dy; \end{equation} then the CLT for the compositional ISE follows from the identity (\ref{csden}) and our proof of the CLT for the spherical ISE without finite support conditions on kernels.
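As a numerical aside, the directional kernel density estimator whose ISE is analyzed above can be sketched as follows. The kernel $K(r)=e^{-r}$ is a hypothetical choice satisfying {\bf H1}, and the normalizing constant $c_{h,K}$ is omitted in this sketch, so only relative density values are meaningful.

```python
import numpy as np

def directional_kde(x, data, h):
    # Unnormalized directional KDE: (1/n) * sum_i K((1 - x'X_i)/h^2),
    # with the hypothetical kernel K(r) = exp(-r); the normalizing
    # constant c_{h,K} of the estimator is omitted here.
    r = (1.0 - data @ x) / h**2
    return float(np.mean(np.exp(-r)))

rng = np.random.default_rng(1)
Z = rng.normal(size=(500, 3)) + np.array([0.0, 0.0, 3.0])
Z /= np.linalg.norm(Z, axis=1, keepdims=True)   # sample concentrated near the pole

pole = np.array([0.0, 0.0, 1.0])
antipode = np.array([0.0, 0.0, -1.0])
near = directional_kde(pole, Z, h=0.3)        # large: the data concentrate here
opposite = directional_kde(antipode, Z, h=0.3)  # nearly zero
```

The estimate is largest where the directional sample concentrates, which is the behavior the ISE analysis above quantifies.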
\subsection{Proofs of Shadow Monomials in Section \ref{sec:rkhs} }\label{reproproof} Proof of Proposition \ref{killodd}: \begin{proof} A direct computation yields: $$ \begin{array}{rcl} (\prod_{i=1}^{d+1}x_i^{\alpha_i})^{\Gamma}&=&\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\}}\prod_{i=1}^{d+1}(s_ix_i)^{\alpha_i}\\ &=&\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\}}\prod_{i\neq k}(s_ix_i)^{\alpha_i} x_k^{\alpha_k}+\sum_{s_i\in \{\pm 1\}}\prod_{i\neq k}(s_ix_i)^{\alpha_i} (-x_k)^{\alpha_k}\\ &=&x_k^{\alpha_k}\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\}}\prod_{i\neq k}(s_ix_i)^{\alpha_i}-x_k^{\alpha_k}\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\}}\prod_{i\neq k}(s_ix_i)^{\alpha_i} \\ &=&0. \end{array} $$ \end{proof} \subsection{Proof of Linear Independence of Reproducing Kernels in Theorem \ref{abundencelm}}\label{abundprf} We sketch a slight of more detailed (not complete) proof: \begin{proof} This is the most technical lemma in this article. We will sketch the philosophy of the proof in here, which can be intuitively understood topologically. Recall that we can produce a projective space $\mathbb P^d$ by identifying every pair of antipodal points of a sphere $\mathbb S^d$ (identify $x$ with $-x$), in other words $\mathbb P^d=\mathbb S^d/\mathbb Z_2$ where $\mathbb Z_2=\{0,1\}$ is a cyclic group of order $2$. Then we can define a projective kernel in $\mathcal H_i\subset L^2(\mathbb S^d)$ to be $k^p_i(x, \cdot)=[k_i(x,\cdot)+k_i(-x,\cdot)]/2$. We can also denote the projective kernel inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ by $\underline{k}_m^p(x, \cdot)=\sum_{i=0}^{m}k^p_{2i}(x,\cdot)$. Now we spread out the data set $\{x_i\}_{i=1}^n$ by ``spread-out'' construction in Section \ref{sec:spread}, and denote the spread-out data set as $\{\Gamma \cdot x_i\}_{i=1}^n=\{{c^{-1}_{\Delta}(x_i)}\}_{i=1}^{n}$ (a data set, not a set because of repetitions). 
A compositional reproducing kernel is a sum of spherical reproducing kernels over ${c^{-1}_{\Delta}(x_i)}$, divided by the number of elements of ${c^{-1}_{\Delta}(x_i)}$. Since this data set has antipodal symmetry, a compositional kernel is a linear combination of projective kernels. Notice that \emph{different} fake kernels are linear combinations of \emph{different} projective kernels. It therefore suffices to show the linear independence of the projective kernels at distinct data points for large enough $m$, which implies the linear independence of the fake kernels $\{\underline{k}^{\Gamma}_{m}(x_i, \cdot)\}_{i=1}^n$. We now focus on the linear independence of projective kernels. A projective kernel can be seen as a reproducing kernel for a point in $\mathbb P^d$. For a set of distinct points $\{y_i\}_{i=1}^l\subset \mathbb P^{d}$, we will show that the corresponding set of projective kernels $\{\underline{k}^{p}_{m}(y_i, \cdot)\}_{i=1}^l\subset \bigoplus_{i=0}^{m}\mathcal H_{2i}$ is linearly independent for large enough $m$. Consider the two vector subspaces $V_1^{m}=\spn \big[ \{(y_i\cdot {t})^{2m}\}_{i=1}^l\big]$ and $V_2^{m}=\spn\big[\{\underline{k}^{p}_{m}(y_i, t)\}_{i=1}^l\big]$, both inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}\subset L^2(\mathbb S^d)$. Define a linear map $h_{m}:\ V_1^{m}\rightarrow V_2^{m}$ by setting $h_{m}((y_i\cdot {t})^{2m})=\sum_{j=1}^l<(y_i\cdot {t})^{2m},\underline{k}^{p}_{m}(y_j, t)>\underline{k}^{p}_{m}(y_j, t)$. This linear map $h_{m}$ is represented by an $l\times l$ symmetric matrix whose diagonal elements are $1$'s and whose off-diagonal elements are $[(y_i\cdot y_j)]^{2m}$. Since $y_i\neq y_j$ in $\mathbb P^d$, the representatives are neither equal nor antipodal in $\mathbb S^d$, thus $|y_i\cdot y_j|<1$.
When $m$ is large enough, all off-diagonal elements go to zero while the diagonal elements stay constant, so the matrix representing $h_{m}$ becomes \emph{diagonally dominant} and hence of full rank. When the linear map $h_{m}$ has full rank, the spanning sets $ \{(y_i\cdot {t})^{2m}\}_{i=1}^l$ and $\{\underline{k}^{p}_{m}(y_i, t)\}_{i=1}^l$ must be bases of $V_1^{m}$ and $V_2^{m}$ respectively, so the projective kernels $\{\underline{k}^{p}_{m}(y_i, t)\}_{i=1}^l$ are linearly independent when $m$ is large enough. \end{proof} \subsection{Proof of Representer Theorems in Section \ref{represent}}\label{representerproofs} Proof of Theorem \ref{minorm} on minimal norm interpolation: \begin{proof} Note that the set $I_y^{m}=\{f\in \bigoplus_{i=0}^{m}\mathcal H_{2i}:\ f(x_i)=y_i\}$ is non-empty, because the $f_0$ defined by the linear system of equations lies in $I_y^{m}$. Let $f$ be any other element of $I_y^{m}$ and define $g=f-f_0$; then we have: $$ \norm{f}^2=\norm{g+f_0}^2=\norm{g}^2+2<f_0,g>+\norm{f_0}^2. $$ Since $g\in \bigoplus_{i=0}^{m}\mathcal H_{2i}$ and $g(x_i)=0$ for $1\leq i\leq n$, we have: $$ \begin{array}{rcl} <f_0,g>&=&<\sum_{i=1}^n\omega_{m}(x_i,\cdot)c_i,g(\cdot)>\\ &=&\dis\sum_{i=1}^nc_i<\omega_{m}(x_i,\cdot),g(\cdot)>\\ &=&\dis\sum_{i=1}^nc_ig(x_i)=0. \end{array} $$ Thus $\norm{f}^2=\norm{g+f_0}^2=\norm{g}^2+\norm{f_0}^2$, which implies that $f_0$ is the solution to the minimal norm interpolation problem. \end{proof} Proof of Theorem \ref{regularization} on regularization problems: \begin{proof} First define the loss functional $E(f)=\sum_{i=1}^n|f(x_i)-y_i|^2+\mu\norm{f}^2$. For any $\Gamma$-invariant function $f=f^{\Gamma}\in \bigoplus_{i=0}^{m}\mathcal H_{2i}$, let $g=f-f_{\mu}$; a simple computation yields: $$ \dis E(f)=E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2 -2\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)+2\mu<f_{\mu},g>+\mu\norm{g}^2.
$$ We want to show that $\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)=\mu<f_{\mu},g>$; an equivalent way of writing this equality is: $$ \dis\sum_{i=1}^n<(y_i-f_{\mu}(x_i))\omega_{m}(x_i,t), g(t)>=\mu<f_{\mu},g>. $$ We claim that $\mu f_{\mu}(t)=\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t)\big]$, which implies the above equality. To prove this claim, plug the linear combination $f_{\mu}=\sum_{i=1}^nc_i\cdot \omega_{m}(x_i,t)$ into the claim; this reduces the claim to a system of linear equations in $\{c_i\}_{i=1}^n$. Since $\{\omega_{m}(x_i,t)\}_{i=1}^n$ is a linearly independent set, this system of linear equations holds if and only if $\{c_i\}_{i=1}^n$ satisfies $\mu c_k+\sum_{i=1}^n c_i\cdot \omega_{m}(x_i,x_k)=y_k$ for every $k$ with $1\leq k\leq n$, which is exactly the hypothesis of the theorem. Therefore the claim $\mu f_{\mu}(t)=\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t)\big]$ holds. To finish the proof of the theorem, notice that $$ \begin{array}{rcl} \dis E(f)&=&E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2 -2\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)+2\mu<f_{\mu},g>+\mu\norm{g}^2\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2+2\big[\mu<f_{\mu},g>-\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)\big]\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2+2\big[\mu<f_{\mu},g>-\sum_{i=1}^n<(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t), g(t)>\big]\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2+2\big[<\underbrace{\big(\mu f_{\mu}(t)-\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t)\big]\big)}_{=0}, g(t)>\big]\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2.
\end{array} $$ The term $\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2$ in the above equality is always non-negative, thus $E(f_{\mu})\leq E(f)$, then the theorem follows. \end{proof} \bibliographystyle{asa} \bibliography{geometric} \end{document} \documentclass[12pt]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath,amssymb,amsthm,amscd, natbib, tikz,tikz-cd} \usepackage{wasysym, graphicx, hyperref, float} \usepackage[all,cmtip]{xy} \usepackage{tikz-cd} \usepackage{ gensymb } \usepackage{ textcomp, subcaption} \usepackage{authblk} \usepackage{enumitem} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{reason}[theorem]{Reasons} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{convention}[theorem]{Convention} \newtheorem{construction}[theorem]{Construction} \newtheorem{assumption}[theorem]{Assumption} \newcommand{\F}{\mathfrak F} \newcommand{\G}{\mathfrak G} \newcommand{\N}{\mathfrak N} \newcommand{\QQ}{\mathbb Q} \newcommand{\ZZ}{\mathbb Z} \newcommand{\CC}{\mathbb C} \newcommand{\RR}{\mathbb R} \newcommand{\HH}{\mathbb H} \newcommand{\SSS}{\mathcal S} \newcommand{\sL}{\mathcal L} \newcommand{\cH}{\mathcal H} \newcommand{\A}{\mathfrak A} \newcommand{\B}{\mathfrak B} \newcommand{\mega}{\overline{\omega}} \newcommand{\id}{\operatorname{id}} \newcommand{\im}{\operatorname{im}} \newcommand{\Op}{\operatorname{Op}} \newcommand{\Sym}{\operatorname{Sym}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\Con}{\operatorname{Con}} \newcommand{\QCon}{\operatorname{QCon}} \newcommand{\Diff}{\operatorname{Diff}} \newcommand{\Isom}{\operatorname{Isom}} \newcommand{\isom}{\operatorname{\mathfrak{isom}}} \newcommand{\Homeo}{\operatorname{Homeo}} \newcommand{\Teich}{\operatorname{Teich}} \newcommand{\Homeq}{\operatorname{Homeq}} 
\newcommand{\Homeqbar}{\operatorname{\overline{Homeq}}} \newcommand{\Stab}{\operatorname{Stab}} \newcommand{\Nbd}{\operatorname{Nbd}} \newcommand{\colim}{\operatornamewithlimits{colim}} \newcommand{\acts}{\curvearrowright} \newcommand{\PGL}{\operatorname{PGL}} \newcommand{\GL}{\operatorname{GL}} \mathchardef\mhyphen"2D \newcommand{\free}{\mathrm{free}} \newcommand{\refl}{\mathrm{refl}} \newcommand{\trefl}{\mathrm{trefl}} \newcommand{\tame}{\mathrm{tame}} \newcommand{\wild}{\mathrm{wild}} \newcommand{\Emb}{\operatorname{Emb}} \newcommand{\Aut}{\operatorname{Aut}} \newcommand{\Out}{\operatorname{Out}} \newcommand{\Maps}{\operatorname{Maps}} \newcommand{\rel}{\operatorname{rel}} \newcommand{\tildetimes}{\mathbin{\tilde\times}} \newcommand{\h}{\overset h} \newcommand{\spn}{\mathrm{span}} \newcommand{\dis}{\displaystyle} \newcommand{\orb}{\mathrm{Orbit}} \newcommand{\catname}[1]{\textbf{#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\mtfr}[1]{\mathfrak{#1}} \newcommand{\dist}{\mathrm{dist}} \newcommand{\ant}{\mathrm{ant}} \newcommand{\var}{\mathrm{Var}} \oddsidemargin 0in \textwidth 6.5in \textheight 9in \topmargin 0in \headheight 0in \headsep 0in \usepackage{subcaption} \usepackage{amsthm,amsmath,amsfonts,amssymb, mathtools} \usepackage{xypic} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \newcommand{\BlackBox}{\rule{1.5ex}{1.5ex}} \ifdefined\proof \renewenvironment{proof}{\par\noindent{\bf Proof\ }}{\hfill\BlackBox\\[2mm]} \else \newenvironment{proof}{\par\noindent{\bf Proof\ }}{\hfill\BlackBox\\[2mm]} \fi \begin{document} \title{Reproducing Kernels and New Approaches in Compositional Data Analysis} \author[1]{Binglin Li} \author[2]{Jeongyoun Ahn} \affil[1]{University of Georgia} \affil[2]{Korea Advanced Institute of Science and Technology} \maketitle \begin{abstract}Compositional data, such as human gut microbiomes, consist of non-negative variables of which only the values relative to other variables are
available. Analyzing compositional data such as human gut microbiomes requires careful treatment of the geometry of the data. A common geometrical understanding of compositional data is via a regular simplex. The majority of existing approaches rely on log-ratio or power transformations to overcome the innate simplicial geometry. In this work, based on the key observation that compositional data are projective in nature, and on the intrinsic connection between projective and spherical geometry, we re-interpret the compositional domain as the quotient topology of a sphere modded out by a group action. This re-interpretation allows us to understand the function space on compositional domains in terms of that on spheres and to use spherical harmonics theory along with reflection group actions to construct a \emph{compositional Reproducing Kernel Hilbert Space (RKHS)}. This construction of an RKHS for compositional data opens broad research avenues for future methodological developments. In particular, well-developed kernel embedding methods can now be introduced to compositional data analysis. The polynomial nature of the compositional RKHS has both theoretical and computational benefits. The wide applicability of the proposed theoretical framework is exemplified with nonparametric density estimation and a kernel exponential family for compositional data. \end{abstract} \section{Introduction} The recent popularity of human gut microbiome research has presented many data-analytic and statistical challenges \citep{calle2019statistical}. Among the many features of microbiome, or meta-genomic, data, we address their \emph{compositional} nature in this work. Compositional data consist of $n$ observations of $(d+1)$ non-negative variables whose values represent relative proportions with respect to the other variables in the data. Compositional data are commonly observed in many scientific fields, such as biochemistry, ecology, finance, and economics, to name just a few.
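As a minimal illustration of this compositional structure, raw non-negative measurements are carried to relative proportions by dividing each observation by its total (the "closure" operation); the count table below is a made-up, microbiome-style example, not real data.

```python
import numpy as np

def closure(counts):
    # Map non-negative count vectors to the simplex Delta^d:
    # divide each row by its sum so the entries are proportions summing to 1.
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

# hypothetical microbiome-style count table: 3 samples, 4 taxa (zeros kept as-is)
raw = np.array([[10, 0, 5, 85],
                [3, 7, 0, 90],
                [25, 25, 25, 25]])
comp = closure(raw)
```

Note that zeros in the raw counts remain zeros in the composition; no pseudo-count is added at this stage.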
The most notable aspect of compositional data is the restriction on their domain, specifically that the sum of the variables is fixed. The compositional domain is not a classical vector space, but rather the (regular) simplex \begin{equation}\label{eq:simplex} \Delta^d=\left\{(x_1,\dots,x_{d+1})\in \mathbb R^{d+1} \;| \sum_{i=1}^{d+1}x_i=1,\ x_i\geq 0, \forall i \right\}, \end{equation} which is topologically compact. The inclusion of zeros in (\ref{eq:simplex}) is crucial, as most microbiome data have a substantial number of zeros. Arguably the most prominent approach to handling data on a simplex is to take a log-ratio transformation \citep{aitch86}, for which one has to consider only the open interior of $\Delta^d$, denoted by $\mathcal S^d$. Zeros are usually taken care of by adding a small number; however, it has been noted that the results of an analysis can depend strongly on how the zeros are handled \citep{lubbe2021comparison}. \citet{micomp} pointed out ``the dangers inherent in ignoring the compositional nature of the data'' and argued that microbiome datasets must be treated as compositions at all stages of analysis. Recently, approaches that analyze compositional data without any transformation have been gaining popularity \citep{li2020s, rasmussen2020zero}. The approach proposed in this paper is to construct reproducing kernels for compositional data by interpreting compositional domains via projective and spherical geometries. \subsection{Methodological Motivation}\label{machine} Besides the motivation from microbiome studies, another source of inspiration for this work is the current exciting developments in statistics and machine learning.
In particular, the rising popularity of applying higher tensors and kernel techniques allows multivariate techniques to be extended to exotic structures beyond traditional vector spaces, e.g., graphs \citep{graphrkhs}, manifolds \citep{vecman} or images \citep{tensorbrain}. This work is an attempt to construct reproducing kernel structures for compositional data, so that recent developments in (reproducing) kernel techniques from machine learning theory can be introduced to this classical field of statistics. The approach in this work is to model compositional data as a group quotient of a sphere, $\mathbb S^d/\Gamma$ (see (\ref{allsame})), which gives a new connection between compositional data analysis and directional statistics. The idea of representing data by using tensors and frames is not new in directional statistics \citep{ambro}, but the authors find it more convenient to construct reproducing kernels for $\mathbb S^d/\Gamma$ (the reason is given in Section \ref{whyrkhs}). We do want to mention that the construction of reproducing kernels for compositional data indicates a potential new paradigm for compositional data analysis: traditional approaches aim to find direct analogues of multivariate concepts, such as the mean, the variance-covariance matrix, and regression frameworks built on them. However, finding the mean point over a non-linear space, e.g. a manifold, is not an easy job, and in the worst case the mean point might not even exist on the underlying space (e.g. the mean point of the uniform distribution on a unit circle does \emph{not} lie on the circle). In this work we take the perspective of kernel mean embedding \citep{kermean}. Roughly speaking, instead of finding the ``\emph{physical}'' point for the mean of a distribution, one can do statistics \emph{distributionally}.
In other words, the mean or expectation is considered a \emph{linear functional} on the RKHS, and this functional is represented by an actual function in the Hilbert space, referred to as the ``kernel mean $\mathbb E[k(X,\cdot)]$''. Instead of trying to find another compositional point as the empirical mean of a compositional data set, one can construct the empirical kernel mean, which is just $\sum_{i=1}^nk(X_i,\cdot)/n$, as a replacement for the traditional empirical mean. Moreover, one can also construct the analogue of the variance-covariance matrix purely from kernels; in fact, \citet{fbj09} considered Gram matrices constructed out of reproducing kernels as consistent estimators of cross-variance operators (these operators play the role of covariance and cross-covariance matrices in classical Euclidean spaces). Since we remodel the compositional domain using projective/spherical geometry, the compositional domain is \emph{not} treated as a vector space, but as a quotient topological space $\mathbb S^d/\Gamma$. Instead of ``putting a linear structure on an Aitchison simplex'' \citep{aitch86}, or using the square root transformation (which still starts from an Aitchison simplex), we choose to ``linearize'' compositional data points by using kernel techniques (and possibly higher-tensor constructions), so that one can still do ``multivariate analysis''. Our construction in this work initiates such an attempt to introduce these recent developments in kernel and tensor techniques from statistical learning theory into compositional data analysis. \subsection{Contributions of the Present Work} Our contribution in this paper is threefold. First, we propose a new geometric foundation for compositional data analysis: $\mathbb P^d_{\geq 0}$, a subspace of the full projective space $\mathbb P^d$.
Based on the close connection of spheres with projective spaces, we also describe $\mathbb P^d_{\geq 0}$ in terms of $\mathbb S^d/\Gamma$, a sphere modded out by a reflection group action whose fundamental domain is the first orthant $\mathbb S^d_{\geq 0}$ (a totally different reason for using ``$\mathbb S^d_{\geq 0}$'' than in the traditional approach). Secondly, based on the new geometric foundation of compositional domains, we propose a new nonparametric compositional density estimator by making use of the well-developed spherical density estimation theory. Furthermore, we provide a central limit theorem for integral squared errors, which leads to a goodness-of-fit test. Thirdly, through this new geometric foundation, function spaces on compositional domains can be related to those on spheres. The space $L^2(\mathbb S^d)$ of square integrable functions on the sphere is the focus of a classical subject in mathematics and physics called ``spherical harmonics''. Moreover, spherical harmonics theory tells us that each Laplacian eigenspace of $L^2(\mathbb S^d)$ is a reproducing kernel Hilbert space, and this allows us to construct reproducing kernels for compositional data points via ``orbital integrals'', which opens a door for machine learning techniques to be applied to compositional data. We also propose a compositional exponential family as a general distributional family for modeling compositional data. \subsection{Why Projective and Spherical Geometries?}\label{whysph} According to \cite{ai94}, ``any meaningful function of a composition must satisfy the requirement $f(ax)=f(x)$ for any $a\neq 0$.'' In geometry and topology, the space on which such functions live is called a \emph{projective space}, denoted by $\mathbb P^d$; therefore, projective geometry is the natural candidate for modeling compositional data, rather than a simplex.
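The requirement $f(ax)=f(x)$ can be checked concretely for a simple ratio-preserving statistic; the function below is our own illustrative choice, not one taken from \cite{ai94}.

```python
import numpy as np

def composition_of(x):
    # A scale-invariant ("meaningful") function of a composition: it depends
    # only on the ratios between parts, so f(a*x) = f(x) for any a != 0,
    # since (a*x) / (a*x).sum() == x / x.sum().
    x = np.asarray(x, dtype=float)
    return x / x.sum()

x = np.array([2.0, 3.0, 5.0])
same = bool(np.allclose(composition_of(x), composition_of(7.0 * x)))
```

Any statistic built from such ratio-based quantities automatically descends to the projective space $\mathbb P^d$.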
Since the components of a compositional point cannot have opposite signs, a compositional domain is in fact the ``positive cone'' $\mathbb P_{\geq 0}^d$ inside the full projective space. A key property of projective spaces is that stretching or shrinking the length of a vector does \emph{not} alter the corresponding point of $\mathbb P^d$. Thus one can stretch a point in $\Delta^d$ to a point on the first orthant sphere by dividing it by its $\ell_2$ norm. Figure \ref{fig:stretch} illustrates this stretching (``stretching'' is not a transformation from the projective geometry point of view) in action. In short, projective geometry is more natural for modeling compositional data according to the original philosophy of \cite{ai94}. However, spheres are easier to work with: mathematically speaking, the function space on spheres is a well-studied subject in spherical harmonics theory, and statistically speaking, we can connect with directional statistics in a natural way. Our compositional domain $\mathbb P^d_{\geq 0}$ can be naturally identified with $\mathbb S^d/\Gamma$, a sphere modded out by a reflection group action. This reflection group $\Gamma$ acts on the sphere $\mathbb S^d$ by reflections, and the \emph{fundamental domain} of this action is $\mathbb S^d_{\geq 0}$ (the notions of group actions, fundamental domains and reflection groups are all discussed in Section \ref{sec:sphere}). Thus our connection with the first orthant sphere $\mathbb S^d_{\geq 0}$ is a natural consequence of projective geometry and its connection with spheres under group actions, and has nothing to do with square root transformations. \subsection{Why Reproducing Kernels?}\label{whyrkhs} As explained in Section \ref{machine}, we strive to use the new ideas of tensors and kernel techniques in machine learning to propose another framework for compositional data analysis, and Section \ref{whysph} explains the new connection with spherical geometry and directional statistics.
The idea of using tensors to represent data points is not new in directional statistics: in the study of ambiguous rotations, \cite{ambro} considered how to do statistics over the coset space $SO(3)/K$, where $K$ is a finite subgroup of $SO(3)$. In their case, the subgroup $K$ has to belong to a special class of subgroups of \emph{special} orthogonal groups, and within this class they managed to construct the corresponding tensors and frames, which give the inner product structures between data points. In our case, however, a compositional domain is $\mathbb S^d/\Gamma=O(d)\setminus O(d+1)/\Gamma$, a double coset space. Unlike \cite{ambro}, which only considered the case $d=3$, our dimension $d$ is completely general; moreover, our reflection group $\Gamma$ is \emph{not} a subgroup of any special orthogonal group, so the constructions of tensors and frames in \cite{ambro} do not apply to our situation directly. Part of the novelty of this work is to get around this issue by making use of the reproducing kernel Hilbert space (RKHS) structures on spheres and ``averaging out'' the group action at the level of reproducing kernels, which in return gives us a reproducing kernel structure on compositional domains. Once we have an RKHS in hand, we can ``add'' and take the ``inner product'' of two data points, so our linearization strategy can also be regarded as a combination of ``the averaging approach'' and ``the embedding approach'' of \cite{ambro}. In fact, abstract function spaces equipped with reproducing kernels play an increasingly important role. Below we provide some philosophical motivation for the importance of the function space over the underlying data set: \begin{itemize} \item[(a)]Hilbert spaces of functions are naturally linear with an inner product structure.
With the existence of (reproducing) kernels, data points are naturally incorporated into the function space, which leads to interesting interactions between the data set and the functions defined over it. There is a large literature on embedding distributions into an RKHS, e.g. \cite{disemd}, and on using reproducing kernels to recover exponential families, e.g. \cite{expdual}. Reproducing kernels have also been used to recover classical statistical tests, e.g. the goodness-of-fit test in \cite{kergof}, and regression in \cite{rkrgr}. These works do not concern the analysis of the function space itself but focus primarily on data analysis on the underlying data set; nonetheless, all of them proceed by passing to an RKHS. This reflects the increasing recognition of the importance of abstract function spaces with (reproducing) kernel structure. \item[(b)] Mathematically speaking, given a geometric space $M$, the function space on $M$ can recover the underlying geometric space $M$ itself, and this principle has played a big role in different areas of geometry; in particular, modern algebraic geometry, following the philosophy of Grothendieck, is based on this insight. Function spaces can be generalized to matrix-valued function spaces, and this generalization gives rise to non-commutative RKHS, which is used in shape analysis in \citet{matrixvaluedker}; moreover, non-commutative RKHS is connected with free probability theory \citep{ncrkhs}, which has been used in random effects and linear mixed effects models \citep{fj19, princbulk}. \end{itemize} \subsection{Structure of the Paper} We briefly describe the content of the main sections of this article: \begin{itemize} \item In Section \ref{sec:sphere}, we rebuild the geometric foundation of compositional domains by using projective and spherical geometry. We also point out that the old model using the closed simplex $\Delta^d$ is topologically the same as the new foundation.
Diagrammatically, we establish the following topological equivalences: \begin{equation}\label{4things} \Delta^d\cong \mathbb P^d_{\geq 0}\cong \mathbb S^d/\Gamma\cong\mathbb S^d_{\geq 0}, \end{equation} \noindent where $\mathbb S^d_{\geq 0}$ is the first orthant sphere, which is also the fundamental domain of the group action $\Gamma\acts \mathbb S^d$. All four spaces in (\ref{4things}) will be referred to as ``compositional domains''. As a direct application, we propose a compositional density estimation method that uses spherical density estimation theory via a spread-out construction through the quotient map $\pi:\ \mathbb S^d\rightarrow\mathbb S^d/\Gamma$, and we prove that the integral squared error of our compositional density estimator satisfies a central limit theorem (Theorem \ref{ourclt}), which can be used for goodness-of-fit tests. \item Section \ref{sec:rkhs} is devoted to constructing compositional reproducing kernel Hilbert spaces. Our construction relies on the reproducing kernel structures on spheres, which are given by spherical harmonics theory. \citet{wah81} constructed splines using reproducing kernel structures on $\mathbb S^2$ (the $2$-dimensional sphere), in which she also used the spherical harmonics theory of \cite{sasone}, which treats only the $2$-dimensional case. Our theory deals with the general $d$-dimensional case, so we need the full power of spherical harmonics theory, which is reviewed at the beginning of Section \ref{sec:rkhs}; we then use it to construct compositional reproducing kernels using an ``orbital integral'' type of idea. \item Section \ref{sec:app} gives a couple of applications of our construction of compositional reproducing kernels.
(i) The first example is the representer theorem, but with one caveat: our RKHS is finite dimensional, consisting of degree-$2m$ homogeneous polynomials with no transcendental functions, so linear independence of the kernel functions at distinct data points is not directly available; however, we show that when the degree $m$ is high enough, linear independence still holds. Our statement of the representer theorem is not new from a purely RKHS-theoretic point of view; our point is to demonstrate that intuitions from traditional statistical learning can still be used in compositional data analysis, with some extra care. (ii) Second, we construct the compositional exponential family, which can be used to model the underlying distribution of compositional data. Its flexible construction will enable us to use this distribution family in many statistical problems, such as mean tests. \end{itemize} \section{New Geometric Foundations of Compositional Domains}\label{sec:sphere} \begin{figure} \centering \includegraphics[width = .4\textwidth]{SqrtCompare.png} \caption{Illustration of the stretching action on $\Delta^1$ to $\mathbb S^1$. Note that stretching preserves relative compositions, whereas the square root transformation does not. } \label{fig:stretch} \end{figure} In this section, we give a new interpretation of compositional domains as a cone $\mathbb P^d_{\geq 0}$ in a projective space, based on which compositional domains can be interpreted as spherical quotients by reflection groups. This connection yields a ``spread-out'' construction on spheres, and we demonstrate an immediate application of this new approach to compositional density estimation. \subsection{Projective and Spherical Geometries and a Spread-out Construction}\label{sec:spread} Compositional data consist of relative proportions of $d+1$ variables, which implies that each observation belongs to a projective space.
A $d$-dimensional projective space $\mathbb P^d$ is the set of one-dimensional linear subspaces of $\mathbb R^{d+1}$. A one-dimensional subspace of a vector space is just a line through the origin, and in projective geometry, all points on a line through the origin are regarded as the same point of the projective space. In contrast to the classical linear coordinates $(x_1, \cdots,x_{d+1})$, a point in $\mathbb P^d$ is represented by a projective coordinate $(x_1 : \cdots : x_{d+1})$, with the following property \[ (x_1 : x_2: \cdots : x_{d+1}) = (\lambda x_1 : \lambda x_2: \cdots : \lambda x_{d+1}), ~~~\text{for any } \lambda \ne 0. \] It is natural that an appropriate ambient space for compositional data is the \emph{non-negative projective space}, which is defined as \begin{equation}\label{eq:proj} \mathbb P^d_{\ge 0} = \left\{(x_1 : x_2: \cdots : x_{d+1})\in \mathbb P^d \;| \; (x_1 : x_2 : \cdots : x_{d+1}) = (|x_1| : |x_2|: \cdots : |x_{d+1}|)\right \}. \end{equation} It is clear that the common representation of compositional data by the (closed) simplex $\Delta^d$ in (\ref{eq:simplex}) is in fact equivalent to (\ref{eq:proj}), so we have the first equivalence: \begin{equation}\label{projtosimp} \mathbb P^d_{\geq 0}\cong \Delta^d. \end{equation} Let $\mathbb S^d$ denote the $d$-dimensional unit sphere, defined as \[ \mathbb S^d=\left\{(x_1,x_2,\dots, x_{d+1})\in \mathbb R^{d+1} \; | \; \sum_{i=1}^{d+1}x_i^2=1\right\}, \] and let $\mathbb S^d_{\geq 0}$ denote the first orthant of $\mathbb S^d$, the subset in which all coordinates are non-negative. The following lemma states that $\mathbb S^d_{\geq 0}$ can serve as a new domain for compositional data, as there exists a bijective map between $\Delta^d$ and $\mathbb S^d_{\geq 0}$.
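Concretely, the bijection is just $\ell_2$-normalization (simplex to sphere) in one direction and $\ell_1$-normalization (sphere to simplex) in the other. Here is a minimal numerical sketch in Python (the function names are ours, purely for illustration):

```python
import math

def inflate(v):
    # Delta^d -> S^d_{>=0}: divide by the l2 norm (same projective point)
    r = math.sqrt(sum(x * x for x in v))
    return [x / r for x in v]

def contract(s):
    # S^d_{>=0} -> Delta^d: divide by the l1 norm (coordinates non-negative)
    t = sum(s)
    return [x / t for x in s]

v = [0.2, 0.3, 0.5]                  # a point of Delta^2
s = inflate(v)                       # lies on the unit sphere
assert abs(sum(x * x for x in s) - 1.0) < 1e-12
assert all(abs(a - b) < 1e-12 for a, b in zip(contract(s), v))
```

Both maps preserve the ratios between coordinates, which is the defining feature of a compositional observation.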
\begin{lemma} \label{compcone} There is a canonical identification of $\Delta^d$ with $\mathbb S^d_{\geq 0}$, namely, $$ \xymatrix{\Delta^d\ar@<.4ex>[r]^f& \mathbb S^d_{\geq 0}\ar@<.4ex>[l]^g}, $$ where $f$ is the inflation map and $g$ is the contraction map, with both $f$ and $g$ being continuous and inverse to each other. \end{lemma} \begin{proof} It is straightforward to construct the inflation map $f$: for $v \in \Delta^d$, define $ f(v) = v/\|v\|_2, $ where $\|v\|_2 $ is the $\ell_2$ norm of $v$; it is easy to see that $f(v) \in \mathbb S^d_{\ge 0}$. Note that the inflation map ensures that $f(v)$ determines the same projective point as $v$. To construct the contraction map $g$, for $s\in \mathbb S^d_{\geq 0}$ we define $ g(s) = s / \|s\|_1, $ where $\|s\|_1$ is the $\ell_1$ norm of $s$, and see that $g(s)\in\Delta^d$. One can easily check that both $f$ and $g$ are continuous and inverse to each other. \end{proof} Based on Lemma \ref{compcone}, we now identify $\Delta^d$ alternatively with the quotient topological space $\mathbb S^d/\Gamma$ for a suitable group action $\Gamma$. In order to do so, we first show that the first orthant $\mathbb S^d_{\geq 0}$ is a strict fundamental domain of $\Gamma$, i.e., $\mathbb S^d_{\geq 0}\cong \mathbb S^d/\Gamma$. We start by defining \emph{coordinate hyperplanes}. The $i$-th coordinate hyperplane $H_i\subset \mathbb R^{d+1}$, with respect to a choice of a standard basis $\{e_1,e_2,\dots, e_{d+1}\}$, is the codimension-one linear subspace defined as \[ H_i=\{(x_1,\dots, x_i,\dots, x_{d+1})\in \mathbb R^{d+1}:\ x_i=0\}, ~~ i = 1, \ldots, d+1. \] We define the reflection group $\Gamma$ with respect to the coordinate hyperplanes as follows: \begin{definition}\label{reflect} The reflection group $\Gamma$ is the subgroup of the general linear group $GL(d+1)$ generated by $\{\gamma_i, i = 1, \ldots, {d+1}\}$.
Given the same basis $\{e_1,\dots, e_{d+1}\}$ for $\mathbb R^{d+1}$, the reflection $\gamma_i$ is the linear map specified via: \[ \gamma_i:\ (x_1,\dots,x_{i-1}, x_i, x_{i+1},\dots, x_{d+1})\mapsto (x_1,\dots,x_{i-1}, -x_i, x_{i+1},\dots, x_{d+1}). \] \end{definition} Note that when restricted to $\mathbb S^d$, each $\gamma_i$ is an isometry from the unit sphere $\mathbb S^d$ to itself; we denote this action by $\Gamma\acts\mathbb S^d$. Thus, one can treat the group $\Gamma$ as a discrete subgroup of the isometry group of $\mathbb S^d$. In what follows we establish that $\mathbb S^d_{\ge 0}$ is a fundamental domain of the group action $\Gamma\acts \mathbb S^d$ in the topological sense. In general, there is no uniform treatment of fundamental domains, but we will follow the approach of \cite{bear}. To introduce a fundamental domain, let us first define an \emph{orbit}. For a point $z\in \mathbb S^d$, the orbit of $z$ under the group $\Gamma$ is the set: \begin{equation}\label{eq:orbit} \orb^{\Gamma}_z=\{\gamma(z),\ \gamma\in \Gamma\}. \end{equation} Note that one can decompose $\mathbb S^d$ into a disjoint union of orbits. The size of an orbit is not necessarily the same as the size $|\Gamma|$ of the group, because of the existence of a \emph{stabilizer subgroup}, defined as \begin{equation}\label{stable} \Gamma_z=\{\gamma\in \Gamma:\ \gamma (z)=z\}. \end{equation} The set $\Gamma_z$ is itself a group, called the \emph{stabilizer subgroup} of $z$ in $\Gamma$. All elements of $\orb^{\Gamma}_z$ have isomorphic stabilizer subgroups, so the size of $\orb^{\Gamma}_z$ is the quotient $|\Gamma|/|\Gamma_z|$, where $|\cdot|$ denotes the cardinality of a set. There are only finitely many possibilities for the size of a stabilizer subgroup under the action $\Gamma\acts \mathbb S^d$, and the size of $\Gamma_z$ depends on the number of coordinate hyperplanes on which $z$ lies.
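To make these counts concrete, the following Python sketch (our own illustration, not from any package) enumerates the $\Gamma$-orbit of a point under coordinate sign flips and checks the identity $|\orb^{\Gamma}_z| = |\Gamma|/|\Gamma_z|$:

```python
import itertools

def orbit_and_stabilizer(z):
    """Enumerate Gamma = {sign flips on d+1 coordinates} acting on z.

    Returns the orbit (as a set of tuples) and the stabilizer size |Gamma_z|.
    """
    n = len(z)
    orbit = set()
    stab = 0
    for signs in itertools.product([1.0, -1.0], repeat=n):
        y = tuple(s * x for s, x in zip(signs, z))
        orbit.add(y)
        if y == tuple(z):
            stab += 1
    return orbit, stab

z = (0.0, 0.6, 0.8)                    # one zero coordinate, d = 2
orb, stab = orbit_and_stabilizer(z)
assert stab == 2                       # flipping the zero coordinate fixes z
assert len(orb) == 2 ** 3 // stab      # |Orb| = |Gamma| / |Gamma_z| = 4
```

A point with no zero coordinates has trivial stabilizer and a full orbit of $2^{d+1}$ points.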
\begin{definition}\label{fundomain} Let $G$ act properly discontinuously on a $d$-dimensional sphere, with $d>1$. A \emph{fundamental domain} for the group action $G$ is a closed subset $F$ of the sphere such that every orbit of $G$ intersects $F$ in at least one point, and if an orbit intersects the interior of $F$, then it intersects $F$ in only that one point. \end{definition} A fundamental domain is \emph{strict} if every orbit of $G$ intersects $F$ in exactly one point. The following proposition identifies $\mathbb S^d_{\geq 0}$ with the quotient topological space $\mathbb S^d/\Gamma$, i.e., $\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma$. \begin{proposition}\label{conedom} Let $\Gamma\acts \mathbb S^d$ be the group action described in Definition \ref{reflect}; then $\mathbb S^d_{\geq 0}$ is a strict fundamental domain. \end{proposition} In topology, there is a natural quotient map $\mathbb S^d\rightarrow \mathbb S^d/\Gamma$. With the identification $\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma$, there should be a natural map $\mathbb S^d\rightarrow\mathbb S^d_{\geq 0}$. Now define a contraction map $c: \mathbb S^d\rightarrow \mathbb S^d_{\geq 0}$ via $(x_1,\dots,x_{d+1})\mapsto (|x_1|,\dots, |x_{d+1}|)$, taking component-wise absolute values. It is then straightforward to see that $c$ is indeed the topological quotient map $\mathbb S^d\rightarrow \mathbb S^d/\Gamma$ under the identification $\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma$. So far, via (\ref{projtosimp}), Lemma \ref{compcone} and Proposition \ref{conedom}, we have established the following equivalences: \begin{equation}\label{allsame} \mathbb P^d_{\geq 0}=\Delta^d=\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma. \end{equation} For the rest of the paper we will use these four characterizations of a compositional domain interchangeably. \subsubsection{Spread-Out Construction} Based on (\ref{allsame}), one can turn a compositional data analysis problem into one on a sphere via a \emph{spread-out construction}.
The key idea is to associate to each compositional data point $z\in \Delta^d=\mathbb S^d_{\geq 0}$ the $\Gamma$-orbit of data points $\orb^{\Gamma}_z\subset \mathbb S^d$ in (\ref{eq:orbit}). Formally, given a point $z\in \Delta^d$, we construct the following \emph{data set} (\emph{not necessarily a set}, because of possible repetitions): \begin{equation}\label{sprd} c^{-1}(z) = \left\{|\Gamma_{z'}|\ \text{copies of }z', \ \text{for}\ z'\in \text{Orbit}_z^\Gamma \right\}, \end{equation} where $\Gamma_{z'}$ is the stabilizer subgroup of $\Gamma$ with respect to $z'$ in (\ref{stable}). In general, if there are $n$ observations in $\Delta^d$, the spread-out construction creates a data set with $n2^{d+1}$ observations on $\mathbb S^d$, in which observations with zero coordinates are repeated. Figure \ref{fig:kde} (a) and (b) illustrate this idea with a toy data set with $d = 2$. \subsection{Illustration: Compositional Density Estimation}\label{compdensec} \begin{figure}[ht] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width = \textwidth]{simplex_eg.png} \caption{Compositional data on $\Delta^2$} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width = \textwidth]{simplex_kde_3d.png} \caption{``Spread-out'' data on $\mathbb S^2$} \end{subfigure}\\ \begin{subfigure}[b]{0.50\textwidth} \includegraphics[width = \textwidth]{sphere_kde_3d.png} \caption{Density estimate on $\mathbb S^2$} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \includegraphics[width = \textwidth]{simplex_kde.png} \caption{``Pulled-back'' estimate on $\Delta^2$} \end{subfigure} \caption{Toy compositional data on the simplex $\Delta^2$ in (a) are spread out to the sphere $\mathbb S^2$ in (b). The density estimate on $\mathbb S^2$ in (c) is pulled back to $\Delta^2$ in (d).}\label{fig:kde} \end{figure} The spread-out construction in (\ref{sprd}) provides an intimate new relation between directional statistics and compositional data analysis.
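As a minimal illustration (our own sketch, with hypothetical helper names), the spread-out data set $c^{-1}(z)$ in (\ref{sprd}) can be generated by enumerating all sign patterns of the inflated point; enumerating all $2^{d+1}$ patterns automatically produces $|\Gamma_{z'}|$ copies of each orbit point $z'$:

```python
import itertools
import math

def spread_out(z):
    """Multiset c^{-1}(z): every sign flip of the l2-normalized point."""
    r = math.sqrt(sum(x * x for x in z))
    s = [x / r for x in z]                     # inflate Delta^d -> S^d_{>=0}
    return [tuple(e * x for e, x in zip(signs, s))
            for signs in itertools.product([1.0, -1.0], repeat=len(s))]

pts = spread_out([0.2, 0.3, 0.5])              # d = 2, so 2^3 = 8 points
assert len(pts) == 8
assert all(abs(sum(x * x for x in p) - 1.0) < 1e-12 for p in pts)
assert len(set(pts)) == 8                      # no zero coordinates: all distinct
```

Applying this to each of $n$ observations yields the $n2^{d+1}$ spherical observations described above.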
Indeed, this construction produces a directional data set out of a compositional data set, so we can transform a compositional data problem into a directional statistics problem via the spread-out construction. For example, we can perform compositional independence/uniformity tests by applying directional independence/uniformity tests \citep{sobind, sobuni} to the spread-out data. In this section, we give a new compositional density estimation framework based on the spread-out construction. In directional statistics, density estimation for spherical data has a long history, dating back to the late 1970s in \cite{expdirect}. In the 1980s, \cite{hallden} and \cite{baiden} established a systematic framework for spherical density estimation theory. Spherical density estimation theory later became popular partly because its integrated squared error (ISE) is closely related to goodness-of-fit testing, as in \cite{chineseclt} and the recent work \cite{portclt}. The rich development of spherical density estimation theory yields a compositional density framework via the spread-out construction. In the following we apply this idea to nonparametric density estimation for compositional data. Instead of directly estimating the density on $\Delta^d$, one can perform the estimation with the spread-out data on $\mathbb S^d$, from which a density estimate for compositional data can be obtained. Let $p(\cdot)$ denote a probability density function of a random vector $Z$ on $\mathbb S^d_{\geq 0}$, or equivalently on $\Delta^d$. The following proposition gives the form of the density of the spread-out random vector $\Gamma(Z)$ on the whole sphere $\mathbb S^d$.
\begin{proposition}\label{induced} Let $Z$ be a random variable on $\mathbb S^d_{\geq 0}$ with probability density $p(\cdot)$. Then the induced random variable $\Gamma(Z)=\{\gamma(Z)\}_{\gamma\in \Gamma}$ has the following density $\tilde{p}(\cdot)$ on $\mathbb S^d$: \begin{equation}\label{cshriek} \tilde{p}(z)=\frac{|\Gamma_z|}{|\Gamma|} p(c(z)), \ z\in \mathbb S^d, \end{equation} where $|\Gamma_z|$ is the cardinality of the stabilizer subgroup $\Gamma_z$ of $z$. \end{proposition} Let $c_*$ denote the operation on functions analogous to the contraction map $c$ on data points. It is clear that given a probability density $\tilde p$ on $\mathbb S^d$, we can recover the original density on the compositional domain via the ``pull-back'' operation $c_*$: \[ p(z) = c_*(\tilde p)(z)=\sum_{x\in c^{-1}(z)}\tilde p(x), ~~ z\in \mathbb S^d_{\geq 0}. \] Now consider estimating the density on $\mathbb S^d$ with the spread-out data. Density estimation for data on a sphere has been well studied in directional statistics \citep{hallden, baiden}. For $x_1, \ldots, x_n \in \mathbb S^d$, a density estimate for the underlying density is \[ \hat{f}_n(z)=\frac{c_h}{n}\sum_{i=1}^nK\left(\frac{1-z^T x_i}{h_n}\right),\ z\in \mathbb S^d, \] where $K$ is a kernel function satisfying the common assumptions in Assumption \ref{kband}, and $c_h$ is a normalizing constant. Applying this to the spread-out data $c^{-1}(x_i)$, $i = 1, \ldots, n$, we obtain a density estimate of $\tilde p (\cdot) $ defined on $\mathbb S^d$: \begin{equation}\label{fhattilde} \hat{f}^{\Gamma}_n(z)=\dis \frac{c_h}{n|\Gamma|}\sum_{1\leq i\leq n,\gamma\in \Gamma}K\left(\frac{1-z^T \gamma( x_i)}{h_n}\right), \ z\in \mathbb S^d, \end{equation} from which a density estimate on the compositional domain is obtained by applying $c_*$. That is, \[ \hat{p}_n(z)=c_*\hat{f}^{\Gamma}_n(z)=\sum_{x\in c^{-1}(z)}\hat{f}^\Gamma_n(x),\ \ z\in \mathbb S^d_{\geq 0}.
\] Figure \ref{fig:kde} (c) and (d) illustrate this density estimation process with a toy example. The consistency of the spherical density estimate $\hat f_n$ is established in \cite{chineseclt, portclt}, where it is shown that the integrated squared error (ISE) of $\hat f_n$, $\int_{\mathbb S^d} (\hat f_n - f)^2 dz$, satisfies a central limit theorem. It is straightforward to show that the ISE of the proposed compositional density estimator $\hat{p}_n$ on the compositional domain is also asymptotically normally distributed. However, the CLT for the ISE of spherical densities in \cite{chineseclt} contains an unnecessary finite-support assumption on the density kernel function $K$ (very different from reproducing kernels); although \cite{portclt} dropped the finite-support condition, their result concerns directional-linear data, and their proof does not directly apply to the purely directional context. For the reader's convenience, we provide a proof of the CLT for the ISE for both compositional and spherical data, without the finite-support condition of \cite{chineseclt}. \begin{theorem}\label{ourclt} The CLT for the ISE holds for both directional and compositional data under the mild conditions (H1, H2 and H3) in Section \ref{cltprf}, without the finite-support condition on the density kernel function $K$. \end{theorem} The details of the proof of Theorem \ref{ourclt}, together with the statements of the technical conditions, can be found in Section \ref{cltprf}. \section{Reproducing Kernels of Compositional Data}\label{sec:rkhs} This section is devoted to constructing reproducing kernel structures on compositional domains, based on the topological reinterpretation of $\Delta^d$ in Section \ref{sec:sphere}. The key idea is that, through the quotient map $\pi:\ \mathbb S^d\rightarrow \mathbb S^d/\Gamma=\Delta^d$, we can use function spaces on spheres to understand function spaces on compositional domains.
Moreover, we can construct reproducing kernel structures on a compositional domain $\Delta^d$ from those on $\mathbb S^d$. The reproducing kernel was first introduced in 1907 by Zaremba in his study of boundary value problems for harmonic and biharmonic functions, but the systematic development of the subject was carried out in the early 1950s by \citet{aron50}. Reproducing kernels on $\mathbb S^d$ were essentially discovered by Laplace and Legendre in the 19th century, although at that time the reproducing kernels on spheres were called \emph{zonal spherical functions}. Both spherical harmonics theory and RKHS have found applications in theoretical subjects such as functional analysis, representation theory of Lie groups, and quantum mechanics. In statistics, the successful application of RKHS to spline models by \citet{wah81} popularized RKHS theory for $\mathbb S^d$; in particular, she used spherical harmonics theory to construct an RKHS on $\mathbb S^2$. Generally speaking, for a fixed topological space $X$, there exist (and one can construct) multiple reproducing kernel Hilbert spaces on $X$. In that work, an RKHS on $\mathbb S^2$ was constructed by considering a \emph{subspace} of $L^2(\mathbb S^2)$ under a finiteness condition, and the reproducing kernels were also built out of zonal spherical functions. That work was motivated by spline models on the sphere, whereas our motivation has nothing to do with spline models. In this work we consider reproducing kernel structures on spheres that are \emph{different} from the one in \cite{wah81}, but we share the same building blocks, namely spherical harmonics theory. Building on the reinterpretation of a compositional domain $\Delta^d$ as $\mathbb S^d/\Gamma$, we will construct reproducing kernels on compositional domains by using reproducing kernel structures on spheres.
Spherical harmonics theory gives reproducing kernel structures on $\mathbb S^d$, and a compositional domain $\Delta^d$ is topologically covered by the sphere, with deck transformation group $\Gamma$. Thus it is natural to ask (i) whether function spaces on $\Delta^d$ can be identified with the subspace of $\Gamma$-invariant functions on $\mathbb S^d$, and (ii) whether one can ``build'' $\Gamma$-invariant kernels out of spherical reproducing kernels, with the $\Gamma$-invariant kernels playing the role of ``reproducing kernels'' on $\Delta^d$. It turns out that the answers to both (i) and (ii) are positive (see Remark \ref{dreami} and Theorem \ref{reprcomp}). The discovery of reproducing kernel structures on $\Delta^d$ rests crucially on the reinterpretation of compositional domains via projective and spherical geometries in Section \ref{sec:sphere}. By considering $\Gamma$-invariant objects in spherical function spaces, we construct reproducing kernel structures for compositional domains, and hence compositional reproducing kernel Hilbert spaces. Although a compositional RKHS is first of all a candidate ``inner product space'' into which data points can be mapped, the benefit of working with an RKHS goes far beyond this, owing to the exciting development of kernel techniques in machine learning theory that can be applied to compositional data analysis, as mentioned in Section \ref{machine}. This opens the door to a new framework for compositional data analysis, in which compositional data points are ``upgraded'' to functions (via reproducing kernels), and classical statistical notions, such as means and variance-covariances, are ``upgraded'' to linear functionals and linear operators over the function space. Traditionally important statistical topics, such as dimension reduction, regression analysis, and many inference problems, can then be re-addressed in the light of these new kernel techniques.
\subsection{Recollection of Basic Facts from Spherical Harmonics Theory} We give a brief review of the theory of spherical harmonics below; see \citet{atkinson2012spherical} for a general introduction to the topic. In classical linear algebra, a finite dimensional linear space with a linear map to itself can be decomposed into a direct sum of eigenspaces. An analogous phenomenon holds for $L^2(\mathbb S^d)$, with the Laplacian as the linear operator. Recall that the Laplacian of a function $f$ of $d+1$ variables is \[ \dis\Delta f=\sum_{i=1}^{d+1}\frac{\partial ^2 f}{\partial x_i^2}. \] Let $\mathcal H_i$ be the $i$-th eigenspace of the Laplacian operator. It is known that $L^2(\mathbb S^d)$ can be orthogonally decomposed as \begin{equation}\label{L2} L^2(\mathbb S^d)=\bigoplus_{i=0}^{\infty}\mathcal H_i, \end{equation} where the orthogonality is with respect to the inner product in $L^2(\mathbb S^d)$: $\langle f, g \rangle = \dis\int_{\mathbb S^d} f \bar{g}$. Let $\mathcal P_{i}(d+1)$ be the space of homogeneous polynomials of degree $i$ in $d+1$ coordinates on $\mathbb S^d$. A homogeneous polynomial is a polynomial whose terms are all monomials of the same degree; e.g., $\mathcal P_4(3)$ contains $xy^3 + x^2yz$. Further, let $ H_{i}(d+1)$ be the space of homogeneous harmonic polynomials of degree $i$ on $\mathbb S^d$, i.e., \begin{equation}\label{eq:harmonic} H_{i}(d+1)=\{P\in \mathcal P_{i}(d+1)|\; \Delta P=0\}. \end{equation} For example, $x^3y + xy^3 - 6xyz^2$ and $x^4 - 6x^2 y^2 + y^4$ are members of $H_4(3)$. Importantly, spherical harmonics theory establishes that each eigenspace $\mathcal H_{i}$ in \eqref{L2} is precisely $ H_i(d+1)$. This implies that any function in $L^2(\mathbb S^d)$ can be approximated by an accumulated direct sum of orthogonal homogeneous harmonic polynomials.
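Harmonicity of such examples can be checked mechanically. The following Python sketch (our own illustration; polynomials are encoded as dictionaries mapping exponent tuples to coefficients) verifies that $x^4 - 6x^2y^2 + y^4$ is harmonic, while $xy^3 + x^2yz$, though homogeneous of degree $4$, is not:

```python
def laplacian(poly, nvars=3):
    """Apply sum_i d^2/dx_i^2 to a polynomial {exponent-tuple: coefficient}."""
    out = {}
    for exps, c in poly.items():
        for i in range(nvars):
            e = exps[i]
            if e >= 2:                    # d^2/dx^2 of x^e is e(e-1) x^(e-2)
                new = list(exps)
                new[i] = e - 2
                k = tuple(new)
                out[k] = out.get(k, 0) + c * e * (e - 1)
    return {k: v for k, v in out.items() if v != 0}

h = {(4, 0, 0): 1, (2, 2, 0): -6, (0, 4, 0): 1}   # x^4 - 6x^2y^2 + y^4
p = {(1, 3, 0): 1, (2, 1, 1): 1}                  # xy^3 + x^2yz

assert laplacian(h) == {}     # harmonic: Laplacian vanishes identically
assert laplacian(p) != {}     # homogeneous of degree 4, but not harmonic
```

The same representation makes it easy to experiment with the decompositions discussed below.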
The following well-known proposition further reveals that the Laplacian constraint in (\ref{eq:harmonic}) is not necessary to characterize the function space on the sphere. \begin{proposition}\label{accudirct} Let $\mathcal P_{m}(d+1)$ be the space of degree $m$ homogeneous polynomials in $d+1$ variables on the unit sphere and $\mathcal H_i$ be the $i$-th eigenspace of $L^2(\mathbb S^d)$. Then \[ \mathcal P_{m}(d+1)=\dis\bigoplus_{j=0}^{\floor{m/2}}\mathcal H_{m-2j}, \] where $\floor{\cdot}$ stands for rounding down to the nearest integer. \end{proposition} From Proposition \ref{accudirct}, one can see that any $L^2$ function on $\mathbb S^d$ can be approximated by homogeneous polynomials. An important feature of spherical harmonics theory is that it gives reproducing kernel structures on spheres, and we now recall this fact. For the following discussion, we fix a Laplacian eigenspace $\mathcal H_i$ inside $L^2(\mathbb S^d)$, so that $\mathcal H_i$ is a finite dimensional Hilbert space on $\mathbb S^d$; such a restriction to a single piece $\mathcal H_i$ is necessary because the entire Hilbert space $L^2(\mathbb S^d)$ does not have a reproducing kernel, given that the evaluation (Dirac delta) functional on $L^2(\mathbb S^d)$ is \emph{not} a bounded functional\footnote{At first sight, this might seem to contradict the discussion of splines on $2$-dimensional spheres in \cite{wah81}, but a careful reader will find that a finiteness constraint was imposed there, and it was \emph{never} claimed that $L^2(\mathbb S^2)$ is an RKHS. That is, the RKHS on $\mathbb S^2$ there is a subspace of $L^2(\mathbb S^2)$.}. \subsection{Zonal Spherical Functions as Reproducing Kernels in $\mathcal H_i$} On each Laplacian eigenspace $\mathcal H_i$ inside $L^2(\mathbb S^d)$ on a general $d$-dimensional sphere, we define a linear functional $L_x$ on $\mathcal H_i$ such that, for each $Y\in \mathcal H_i$, $L_x(Y)=Y(x)$ for a fixed point $x\in \mathbb S^d$.
General spherical harmonics theory tells us that there exists $k_i(x,t)$ such that: $$ L_x(Y)=Y(x)=\dis\int_{\mathbb S^d} Y(t)k_i(x,t)dt,\ x\in \mathbb S^d; $$ \noindent this function $k_i(x,t)$ is the representing function of the functional $L_x$. Classical spherical harmonics theory refers to $k_i(x,t)$ as the \emph{zonal spherical function}; furthermore, these functions are precisely the reproducing kernels inside $\mathcal H_i\subset L^2(\mathbb S^d)$ in the sense of \cite{aron50}. Another way to appreciate spherical harmonics theory is that it shows that each Laplacian eigenspace $\mathcal H_i\subset L^2(\mathbb S^d)$ is a reproducing kernel Hilbert space on $\mathbb S^d$; the special case $d = 2$ was used in \cite{wah81}. We collect some basic facts about zonal spherical functions, for the reader's convenience, in the next proposition. Their proofs can be found in almost any modern reference on spherical harmonics, in particular in \citet[Chapter IV]{stein71}: \begin{proposition}\label{reprsph} The following properties hold for the zonal spherical function $k_i(x,t)$, which is also the reproducing kernel inside $\mathcal H_i\subset L^2(\mathbb S^d)$ with dimension $a_i$. \begin{itemize} \item[(a)]For a choice of orthonormal basis $\{Y_1, \dots, Y_{a_i}\}$ of $\mathcal H_i$, we can express the kernel as $k_i(x,t)=\dis\sum_{j=1}^{a_i}\overline{Y_{j}(x)}Y_{j}(t)$; moreover, $k_i(x,t)$ does not depend on the choice of basis. \item[(b)]$k_i(x,t)$ is real-valued and symmetric, i.e., $k_i(x,t)=k_i(t,x)$. \item[(c)]For any orthogonal matrix $R\in O(d+1)$, we have $k_i(x,t)=k_i(Rx, Rt)$. \item[(d)] $k_i(x,x)=\dis\frac{a_i}{\mathrm{vol}(\mathbb S^d)}$ for any point $x\in \mathbb S^d$. \item[(e)]$k_i(x,t)\leq \dis\frac{a_i}{\mathrm{vol}(\mathbb S^d)}$ for any $x,\ t\in \mathbb S^d$.
\end{itemize} \end{proposition} \begin{remark} \normalfont The above proposition may ``\emph{seem}'' obvious from traditional perspectives, as if it could be found in any textbook, so readers with rich experience in RKHS theory might think that we are stating something trivial. However, we want to point out two facts. \begin{itemize} \item [(1)] Function spaces over underlying spaces with different topological structures behave very differently. Spheres are compact without boundary, and the Laplacian operators on their function spaces have finite dimensional eigenspaces, which possess reproducing kernel structures. Such coincidences are not expected over general topological spaces. \item[(2)]Relative to the classical topological spaces whose RKHSs are used more often, e.g., unit intervals or vector spaces, spheres have more ``exotic'' topological structures (simply connected, but with nontrivial higher homotopy groups), whereas intervals and vector spaces are contractible, with trivial homotopy groups. One way to appreciate spherical harmonics theory is that these classical ``naive'' expectations still hold on spheres. \end{itemize} \end{remark} In the next subsection we discuss the corresponding function space on the compositional domain $\Delta^d$. \subsection{Function Spaces on Compositional Domains}\label{sec:fcomp} With the identification $\Delta^d=\mathbb S^d/\Gamma$, the function space $L^2(\Delta^d)$ can be identified with $L^2(\mathbb S^d/\Gamma)$, i.e., $L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)$. The function space $L^2(\mathbb S^d)$ is well understood via spherical harmonics theory as above, so we want to relate $L^2(\mathbb S^d/\Gamma)$ with $L^2(\mathbb S^d)$, as follows. Notice that a function $h\in L^2(\mathbb S^d/\Gamma)$ is a map from $\mathbb S^d/\Gamma$ to the (real or complex) numbers.
Thus a naturally associated function $\pi^*(h)\in L^2(\mathbb S^d)$ is given by the following composition of maps: $$ h\circ \pi:\ \ \xymatrix{\mathbb S^d\ar[r]^-{\pi}&\mathbb S^d/\Gamma\ar[r]^{\ \ h}& \mathbb C}. $$ Therefore, the composition $h\circ \pi=\pi^*(h)\in L^2(\mathbb S^d)$ gives rise to a natural embedding of the function space of compositional domains into that of the sphere, $\pi^*:\ L^2(\mathbb S^d/\Gamma)\rightarrow L^2(\mathbb S^d)$. The embedding $\pi^*$ identifies the Hilbert space of compositional domains with a subspace of the Hilbert space of spheres. A natural question is how to characterize the subspace of $L^2(\mathbb S^d)$ that corresponds to functions on compositional domains. The following proposition states that $f\in \im(\pi^*)$ if and only if $f$ is constant on fibers of the projection map $\pi:\ \mathbb S^d\rightarrow \mathbb S^d/\Gamma$, almost everywhere. In other words, $f$ takes the same value on each $\Gamma$-orbit, i.e., on each set of points that are connected to one another by ``sign flips''. \begin{proposition}\label{compfun} The image of the embedding $\pi^*: L^2(\mathbb S^d/\Gamma)\rightarrow L^2(\mathbb S^d)$ consists of the functions $f\in L^2(\mathbb S^d)$ such that, up to a measure zero set, $f$ is constant on $\pi^{-1}(x)$ for every $x\in \mathbb S^d/\Gamma$, where $\pi$ is the natural projection $\mathbb S^d\rightarrow \mathbb S^d/\Gamma$. \end{proposition} We call a function $f\in L^2(\mathbb S^d)$ that lies in the image of the embedding $\pi^*$ a \emph{$\Gamma$-invariant function}. We now construct the contraction map $\pi_{*}: L^2(\mathbb S^d)\rightarrow L^2(\mathbb S^d/\Gamma)$, which descends every function on the sphere to a function on the compositional domain. To construct $\pi_*$, it suffices to associate a $\Gamma$-invariant function to every function in $L^2(\mathbb S^d)$. For a point $z\in \mathbb S^d$ and a reflection $\gamma\in \Gamma$, the point $\gamma(z)$ lies in the set $\orb_z^\Gamma$ defined in (\ref{eq:orbit}).
Starting with a function $f\in L^2(\mathbb S^d)$, we define the associated $\Gamma$-invariant function $f^{\Gamma}$ as follows: \begin{proposition} Let $f$ be a function in $L^2(\mathbb S^d)$. Then the function $f^\Gamma$ given by \begin{equation}\label{eq:invfun} f^{\Gamma}(z)=\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}f(\gamma(z)), ~~~ z \in \mathbb S^d, \end{equation} is a $\Gamma$-invariant function. \end{proposition} \begin{proof} Each fiber of the projection map $\pi:\ \mathbb S^d\rightarrow \mathbb S^d/\Gamma$ is $\orb_z^\Gamma$ for some $z$ in the fiber. For any other point $y$ on the same fiber as $z$, there exists a reflection $\gamma\in \Gamma$ such that $y=\gamma (z)$. The proposition then follows from the identity $f^{\Gamma}(z)=f^{\Gamma}(\gamma(z))$, which is easily checked. \end{proof} The contraction $ f\mapsto f^{\Gamma}$ on spheres naturally gives the following map: \begin{equation}\label{lowerstar} \pi_*:\ L^2(\mathbb S^d)\rightarrow L^2(\mathbb S^d/\Gamma),\ \text{with}\ f\mapsto f^{\Gamma}. \end{equation} \begin{remark} \normalfont Some readers might object that each element of an $L^2$ space is a \emph{function class} rather than a function, so that in this sense $\pi_*(f)=f^{\Gamma}$ is not well defined. Note, however, that each element of $L^2(\mathbb S^d)$ can be approximated by polynomials, and $\pi_*$, which is well defined on individual polynomials, induces a well defined map on function classes. \end{remark} \begin{theorem}\label{invfunsp} The contraction map $\pi_*:\ L^2(\mathbb S^d)\rightarrow L^2(\mathbb S^d/\Gamma)$ defined in (\ref{lowerstar}) has a section given by $\pi^*$; namely, the composition $\pi_*\circ \pi^*$ is the identity map from $L^2(\mathbb S^d/\Gamma)$ to itself. In particular, the contraction map $\pi_*$ is a surjection.
\end{theorem} \begin{proof} One way to look at the relation between the two maps $\pi^*$ and $\pi_*$ is through the diagram $\xymatrix{L^2(\mathbb S^d/\Gamma)\ar@<.35ex>[r]^-{\pi^*}&L^2(\mathbb S^d)\ar@<.35ex>[l]^-{\pi_*}}$. The image of $\pi^*$ consists of the $\Gamma$-invariant functions in $L^2(\mathbb S^d)$. Conversely, for a $\Gamma$-invariant function $g\in L^2(\mathbb S^d)$ we have $g^{\Gamma}=g$, so $\pi_*$ fixes the image of $\pi^*$; thus the theorem follows. \end{proof} \begin{remark}\label{dreami} \normalfont Theorem \ref{invfunsp} identifies functions on compositional domains with $\Gamma$-invariant functions in $L^2(\mathbb S^d)$. For any function $f\in L^2(\mathbb S^d)$, we can produce the corresponding $\Gamma$-invariant function $f^{\Gamma}$ by (\ref{eq:invfun}). More importantly, we can ``recover'' $L^2(\Delta^d)$ from $L^2(\mathbb S^d)$ without losing any information. This allows us to construct reproducing kernels of $\Delta^d$ from $L^2(\mathbb S^d)$ in Section \ref{sec:rkhsc}. \end{remark} \subsection{ Further Reduction to Homogeneous Polynomials of Even Degrees}\label{redsurg} In this section we provide a further simplification of the homogeneous polynomials in the finite direct sum space $\bigoplus_{i=0}^{m}\mathcal H_i$. Proposition \ref{accudirct} tells us that if $m$ is even, then $\mathcal P_{m}(d+1)=\bigoplus_{i=0}^{m/2}\mathcal H_{2i}$, and that if $m$ is odd then $\mathcal P_{m}(d+1)=\bigoplus_{i=0}^{(m-1)/2}\mathcal H_{2i+1}$, where $\mathcal P_{m}(d+1)$ is the space of degree $m$ homogeneous polynomials in $d+1$ variables. Applying this to the even-degree and odd-degree pieces separately, the pieces of one parity assemble into $\mathcal P_{m}(d+1)$ and those of the other parity into $\mathcal P_{m-1}(d+1)$. Therefore we can decompose the finite direct sum space $\bigoplus_{i=0}^{m}\mathcal H_i$ into the direct sum of two homogeneous polynomial spaces: $$ \dis\bigoplus_{i=0}^{m}\mathcal H_i=\mathcal P_{m}(d+1)\bigoplus \mathcal P_{m-1}(d+1).
$$ However, we will show that any monomial containing an odd-power term collapses to zero under $\Gamma$-averaging, so only one piece of the above homogeneous polynomial space ``survives'' under the contraction map $\pi_*$. This further simplifies the function space, which in turn facilitates easy computation. Specifically, when working with accumulated direct sums $\bigoplus_{i=0}^{m}\mathcal H_i$ on spheres, not every function is a meaningful function on $\Delta^d=\mathbb S^d/\Gamma$; e.g., we can find a nonzero function $f\in \bigoplus_{i=0}^{m}\mathcal H_i$ with $f^{\Gamma}=0$. In fact, the odd-degree eigenspaces $\mathcal H_m$ ($m$ odd) contribute nothing to $L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)$. In other words, the accumulated direct sum $\bigoplus_{i=0}^m\mathcal H_{2i+1}$ is ``killed'' to zero under $\pi_*$, as shown by the following lemma: \begin{lemma}\label{killodd} For every monomial $\prod_{i=1}^{d+1}x_i^{\alpha_i}$ (each $\alpha_i\geq 0$), if there exists $k$ with $\alpha_k$ odd, then the monomial $\prod_{i=1}^{d+1}x_i^{\alpha_i}$ is a shadow function, that is, $(\prod_{i=1}^{d+1}x_i^{\alpha_i})^{\Gamma}=0$. \end{lemma} An important implication of this lemma is that, since each homogeneous polynomial in $\bigoplus_{i=0}^k\mathcal H_{2i+1}$ is a linear combination of monomials with at least one odd exponent, it is killed under $\pi_*$. This implies that all ``odd'' pieces in $L^2(\mathbb S^d) = \bigoplus_{i=0}^{\infty} \mathcal H_i$ contribute nothing to $L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)$. Therefore, whenever using spherical harmonics theory to understand function spaces of compositional domains, it suffices to consider only even $i$ for $\mathcal H_{i}$ in $L^2(\mathbb S^d)$.
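The statement of Lemma \ref{killodd} can be verified by brute force over all sign flips. The following sketch (our own illustration, using exact rational arithmetic) averages a monomial over $\Gamma$ and shows that a single odd exponent kills it, while all-even monomials survive:

```python
import itertools
from fractions import Fraction

def monomial_gamma_average(exponents, point):
    """(prod x_i^{a_i})^Gamma at `point`: average over all coordinate sign flips."""
    signs = list(itertools.product([1, -1], repeat=len(point)))
    total = Fraction(0)
    for sg in signs:
        term = Fraction(1)
        for s, x, a in zip(sg, point, exponents):
            term *= (s * x) ** a   # s = +/- 1, exact rational arithmetic
        total += term
    return total / len(signs)

pt = (Fraction(1, 2), Fraction(2, 3), Fraction(3, 4))
print(monomial_gamma_average((2, 3, 0), pt))  # 0: one odd exponent kills the monomial
print(monomial_gamma_average((2, 4, 0), pt))  # 4/81: all exponents even, it survives
```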
In summary, the function space on the compositional domain $\Delta^d=\mathbb S^d/\Gamma$ has the following eigenspace decomposition: \begin{equation}\label{compfun} L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)=\dis\bigoplus_{i=0}^{\infty}\mathcal H_{2i}^{\Gamma}, \end{equation} \noindent where $\mathcal H_{2i}^{\Gamma}:=\{h\in \mathcal H_{2i}:\ h=h^{\Gamma}\}$. \subsection{Reproducing Kernels for Compositional Domains}\label{sec:rkhsc} With the understanding of function spaces on compositional domains as invariant functions on spheres, we are ready to use spherical harmonic theory to construct reproducing kernel structures on compositional domains. \subsubsection{ $\Gamma$-invariant Functionals on $\mathcal H_i$} The main goal of this section is to establish reproducing kernels for compositional data. Inside each Laplacian eigenspace $\mathcal H_i$ in $L^2(\mathbb S^d)$, recall that the $\Gamma$-invariant subspace $\mathcal H_i^{\Gamma}$ can be regarded as a function space on $\Delta^d=\mathbb S^d/\Gamma$, based on (\ref{compfun}). To find a candidate for the reproducing kernel inside $\mathcal H_i^{\Gamma}$, we first identify the representing function for the linear functional $L_z^{\Gamma}$ on $\mathcal H_i$, defined as follows: for any function $Y\in \mathcal H_i$, \[ L_{z}^{\Gamma}(Y)=Y^{\Gamma}(z)=\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}Y(\gamma z), \] for a given $z\in \mathbb S^d$. One can easily see that $L_z^{\Gamma}$ and $L_z$ agree on the subspace $\mathcal H_i^{\Gamma}$ inside $\mathcal H_i$, and also that $L_z^{\Gamma}$ can be seen as the composed map $L_z^{\Gamma}=L_z\circ\pi_*:\ \mathcal H_i\rightarrow \mathcal H_i^{\Gamma}\rightarrow\mathbb C$. Note that although $L_z^{\Gamma}$ is defined on $\mathcal H_i$, it can actually be seen as a ``Delta functional'' on $\mathbb S^d/\Gamma=\Delta^d$.
To find the representing function for $L_z^{\Gamma}$, we will use zonal spherical functions: Let $k_i(\cdot, \cdot)$ be the reproducing kernel of the eigenspace $\mathcal H_i$. Define the ``compositional'' kernel $k_i^{\Gamma}(\cdot, \cdot)$ for $\mathcal H_i$ as \begin{equation}\label{fake} k_i^{\Gamma}(x, y)=\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma x, y), \ \ \forall x, y\in \mathbb S^d, \end{equation} \noindent from which it is straightforward to check that $k_i^\Gamma(z,\cdot)$ represents linear functionals of the form $L_{z}^{\Gamma}$, simply by following the definitions. \begin{remark} \normalfont The above definition of ``compositional kernels'' in (\ref{fake}) is not merely a trick to get rid of the ``redundant points'' on spheres. This definition is inspired by the notion of ``orbital integrals'' in analysis and geometry. In our case, the ``integral'' is a discrete version, because the ``compact subgroup'' in our situation is replaced by a finite discrete reflection group $\Gamma$. In fact, this kind of ``discrete orbital integral'' construction is not new in statistical learning theory; e.g., \cite{equimatr} also used an ``orbital integral'' type of construction to study equivariant matrix-valued kernels. \end{remark} At first sight, a compositional kernel is not obviously symmetric, because we only ``average'' over the group orbit in the first variable of the function $k_i(x,y)$. However, since $k_i(x,y)$ is both symmetric and orthogonally invariant by Proposition \ref{reprsph}, compositional kernels are, perhaps counter-intuitively, actually symmetric: \begin{proposition}\label{sym} Compositional kernels are symmetric, namely $k_i^{\Gamma}(x, y)= k_i^{\Gamma}(y, x)$. \end{proposition} \begin{proof} Recall that $k_i(x,y)=k_i(y,x)$ and that $k_i(G x,G y)=k_i(x,y)$ for any orthogonal matrix $G$.
Notice that every reflection $\gamma\in \Gamma$ can be realized as an orthogonal matrix, so we have $$ \begin{array}{rcl} k_i^{\Gamma}(x, y)&=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma x, y)\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(y,\gamma x)=\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma^{-1}y,\gamma^{-1}(\gamma x))\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma^{-1}y,x)\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma y,x)\\ &=& k_i^{\Gamma}(y, x). \end{array} $$ \end{proof} Recall that $\mathcal H_i^{\Gamma}$ is the space of $\Gamma$-invariant functions inside $\mathcal H_i$, and by (\ref{compfun}), $\mathcal H_i^{\Gamma}$ is the $i$-th subspace of the compositional function space $L^2(\Delta^d)$. Let $w_i(x,y)$ denote the reproducing kernel of $\mathcal H_i^{\Gamma}$. A na\"ive candidate for $w_i(x,y)$ might be the spherical reproducing kernel $k_i(x,y)$, but $k_i(x,y)$ is not $\Gamma$-invariant. It turns out that the compositional kernels are actually reproducing with respect to all $\Gamma$-invariant functions in $\mathcal H_i$, while being $\Gamma$-invariant in both arguments. \begin{theorem}\label{reprcomp} Inside $\mathcal H_i$, the compositional kernel $k_i^{\Gamma}(x, y)$ is $\Gamma$-invariant in both arguments $x$ and $y$, and moreover $k_i^{\Gamma}(x, y)=w_i(x,y)$, i.e., the compositional kernel is the reproducing kernel for $\mathcal H^{\Gamma}_i$. \end{theorem} \begin{proof} First, by definition, $k_i^{\Gamma}(x, y)$ is $\Gamma$-invariant in the first argument $x$; by the symmetry of $k_i^{\Gamma}(x, y)$ in Proposition \ref{sym}, it is then also $\Gamma$-invariant in the second argument $y$; hence the compositional kernel $k_i^{\Gamma}(x, y)$ is a kernel inside $\mathcal H^{\Gamma}_i$. Second, let us prove the reproducing property of $k_i^{\Gamma}(x, y)$.
For any $\Gamma$-invariant function $f\in \mathcal H_i^{\Gamma}\subset \mathcal H_i$, $$ \begin{array}{rcl} \langle f(t),k_i^{\Gamma}(x, t)\rangle &=& \langle f(t),\dis\sum_{\gamma\in \Gamma}\frac{1}{|\Gamma|}k_i(\gamma x, t)\rangle\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}\langle f(t),k_i(\gamma x, t)\rangle\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}f(\gamma x)=\dis\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}f(x)\ \ (f\ \text{is}\ \Gamma\text{-invariant})\\ &=&f(x). \end{array} $$ \end{proof} \begin{remark} \normalfont Theorem \ref{reprcomp} justifies that a compositional kernel is actually the reproducing kernel for functions inside $\mathcal H_i^\Gamma$. Although the compositional kernel $k_i^{\Gamma}(x, y)$ is symmetric, as proved in Proposition \ref{sym}, we will still use $w_i(x,y)$ to denote $k_i^{\Gamma}(x, y)$ because $w_i(x,y)$ is, notationally speaking, more visually symmetric than the notation for compositional kernels. \end{remark} \subsubsection{Compositional RKHS and Spaces of Homogeneous Polynomials} Recall that based on Proposition \ref{accudirct}, the accumulated direct sum of eigenspaces of a fixed parity can be expressed as a space of homogeneous polynomials of a fixed degree. Further recall that the direct sum decomposition $L^2(\mathbb S^d)=\bigoplus_{i=0}^{\infty}\mathcal H_i$ is an orthogonal one, and so is the direct sum $L^2(\Delta^d)=\bigoplus_{i=0}^{\infty}\mathcal H_i^\Gamma$. By the orthogonality between eigenspaces, the reproducing kernel for the finite direct sum $\bigoplus_{i=0}^m\mathcal H^{\Gamma}_i$ is naturally the sum $\sum_{i=0}^m w_i$. Note that by Lemma \ref{killodd}, it suffices to consider only the even pieces of eigenspaces $\mathcal H_{2i}$.
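To make the construction concrete, here is a small numerical sketch for $d=2$ (our own illustration). It assumes the standard zonal form of the reproducing kernel of $\mathcal H_i$ on $\mathbb S^2$, $k_i(x,y)\propto P_i(\langle x,y\rangle)$ with $P_i$ the Legendre polynomial; the normalizing constant is irrelevant for checking the symmetry asserted in Proposition \ref{sym}:

```python
import itertools

def legendre(n, t):
    """Legendre polynomial P_n(t) via the three-term recurrence."""
    p_prev, p_curr = 1.0, t
    if n == 0:
        return p_prev
    for k in range(2, n + 1):
        p_prev, p_curr = p_curr, ((2 * k - 1) * t * p_curr - (k - 1) * p_prev) / k
    return p_curr

def k(i, x, y):
    """Zonal kernel of H_i on S^2, up to a positive normalizing constant."""
    return legendre(i, sum(a * b for a, b in zip(x, y)))

def k_gamma(i, x, y):
    """Compositional kernel: average of k_i(gamma x, y) over all sign flips."""
    signs = list(itertools.product([1, -1], repeat=len(x)))
    return sum(k(i, [s * a for s, a in zip(sg, x)], y) for sg in signs) / len(signs)

x, y = (0.6, 0.0, 0.8), (0.48, 0.6, 0.64)                # two points on S^2
print(abs(k_gamma(4, x, y) - k_gamma(4, y, x)) < 1e-12)  # True: symmetry holds
```

Averaging only over the first argument nevertheless yields a symmetric kernel, exactly as the orthogonal-invariance argument in the proof predicts.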
Finally, we give a formal definition of ``the degree $m$ reproducing kernel Hilbert space'' on $\Delta^d$, consisting of degree $2m$ homogeneous polynomials: \begin{definition}\label{lthrkhs} Let $w_i$ be the reproducing kernel for $\Gamma$-invariant functions in the $i$th eigenspace $\mathcal H_i \subset L^2(\mathbb S^d)$. The degree $m$ compositional reproducing kernel Hilbert space is defined to be the finite direct sum $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$, and the reproducing kernel for the degree $m$ compositional reproducing kernel Hilbert space is \begin{equation}\label{mcompker} \omega_m(\cdot, \cdot) = \sum_{i=0}^m w_{2i}(\cdot,\cdot). \end{equation} \noindent Thus the degree $m$ RKHS for the compositional domain is the pair $\big(\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}, \omega_{m} \big)$. \end{definition} Recall that the direct sum $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ can be identified with a subspace of $\bigoplus_{i=0}^{m}\mathcal H_{2i}$, which is isomorphic to the space of degree $2m$ homogeneous polynomials, so each function in $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ can be written as a degree $2m$ homogeneous polynomial, including the reproducing kernel $\omega_m(x,\cdot)$, although this is not obvious from (\ref{mcompker}). Notice that for a point $(x_1,x_2,\dots, x_{d+1})\in \mathbb S^d$, the sum $\sum_{i=1}^{d+1}x_i^2=1$, so one can always use this identity to turn each element in $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ into a homogeneous polynomial. For example, $x^2+1$ is not a homogeneous polynomial, but each point $(x,y,z)\in \mathbb S^2$ satisfies $x^2+y^2+z^2=1$, so we have $x^2+1=x^2+x^2+y^2+z^2=2x^2+y^2+z^2$, which is a homogeneous polynomial on the sphere $\mathbb S^2$. In fact, we can say something more about $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$.
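The homogenization trick just described is easy to check numerically. This small sketch (ours, not the paper's code) verifies on random points of $\mathbb S^2$ that $x^2+1$ and $2x^2+y^2+z^2$ agree:

```python
import math
import random

def random_sphere_point(dim=3):
    """Uniform point on the unit sphere via normalized Gaussians."""
    v = [random.gauss(0, 1) for _ in range(dim)]
    r = math.sqrt(sum(t * t for t in v))
    return [t / r for t in v]

random.seed(1)
for _ in range(1000):
    x, y, z = random_sphere_point()
    # on S^2 we may add multiples of x^2 + y^2 + z^2 = 1 to homogenize
    assert abs((x**2 + 1) - (2*x**2 + y**2 + z**2)) < 1e-12
print("x^2 + 1 agrees with 2x^2 + y^2 + z^2 on S^2")
```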
Recall that Lemma \ref{killodd} ``killed'' the contributions from the ``odd pieces'' $\mathcal H_{2k+1}$ under the contraction map $\pi_*:\ L^2(\mathbb S^d)\rightarrow L^2(\Delta^d)$. However, even inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$, only a subspace can be identified with a compositional function space, namely, the $\Gamma$-invariant homogeneous polynomials. The following proposition gives a characterization of which homogeneous polynomials inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ come from the subspace $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$: \begin{proposition}\label{mpolynomial} Given any element $\theta\in \bigoplus_{i=0}^m\mathcal H^{\Gamma}_{2i}\subset \bigoplus_{i=0}^m\mathcal H_{2i}\subset L^2(\mathbb S^d)$, there exists a degree $m$ homogeneous polynomial $p_m$, such that \begin{equation}\label{mplynomial} \theta(x_1, x_2,\dots, x_{d+1})=p_m(x_1^2, x_2^2,\dots, x_{d+1}^{2}). \end{equation} \end{proposition} \begin{proof} Note that $\theta$ is a degree $2m$ homogeneous $\Gamma$-invariant polynomial, so each monomial in $\theta$ has the form $\prod_{i=1}^{d+1}x_i^{a_i}$ with $\sum_{i=1}^{d+1}a_i=2m$. Suppose $\theta$ contained, with nonzero coefficient, a monomial $\prod_{i=1}^{d+1}x_i^{a_i}$ such that $a_k$ is odd for some $1\leq k\leq d+1$. Since $\theta$ is $\Gamma$-invariant, i.e., $\theta=\theta^{\Gamma}$, and $\Gamma$-averaging kills every such monomial by Lemma \ref{killodd}, this monomial cannot appear in $\theta^{\Gamma}=\theta$, a contradiction. Thus $\theta$ is a linear combination of monomials of the form $\prod_{i=1}^{d+1}x_i^{a_i}=\prod_{i=1}^{d+1}(x_i^2)^{a_i/2}$ with each $a_i$ even and $\sum_i a_i/2=m$, and the proposition follows.
\end{proof} Recall that the degree $m$ compositional RKHS is $\big(\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}, \omega_{m} \big)$ in Definition \ref{lthrkhs}, and $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ consists of degree $2m$ homogeneous polynomials while $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ is just a subspace of it. Proposition \ref{mpolynomial} tells us that one can also have a concrete description of the subspace $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ via degree $m$ homogeneous polynomials in the squared variables. \section{Applications of Compositional Reproducing Kernels}\label{sec:app} The availability of compositional reproducing kernels opens the door to many statistical/machine learning techniques for compositional data analysis. However, we will only present two application scenarios, as an initial demonstration of the influence of RKHS theory on compositional data analysis. The first application is the representer theorem, which is motivated by newly developed kernel-based machine learning, especially by the rich theory of vector-valued regression \citep{vecfun, vecman}. The second one is constructing exponential families on compositional domains, whose parameters are elements of compositional reproducing kernel Hilbert spaces. To the best of the authors' knowledge, these are the first class of nontrivial examples of explicit distributions on compositional domains with \emph{non-vanishing} densities on the boundary. \subsection{Compositional Representer Theorems} Beyond their successful applications in traditional spline models, representer theorems are increasingly relevant due to the new kernel techniques in machine learning. We will consider minimal norm interpolation and least squares regularization in this paper. Regularizations are especially important in many situations, like structured prediction, multi-task learning, multi-label classification and related themes that attempt to exploit output structure.
A common theme in the above-mentioned contexts is non-parametric estimation of a vector-valued function $f:\ \mathcal X\rightarrow \mathcal Y$ between a structured input space $\mathcal X$ and a structured output space $\mathcal Y$. An important framework adopted in those analyses is that of ``vector-valued reproducing kernel Hilbert spaces'' in \cite{vecfun}. Unsurprisingly, representer theorems are not only necessary, but also call for further generalizations in various contexts: \begin{itemize} \item [(i)] In classical spline models, the most frequently used versions of representer theorems concern scalar-valued kernels. Beyond the above-mentioned scenario $f:\ \mathcal X\rightarrow \mathcal Y$ in the manifold regularization context, in which vector-valued representer theorems are needed, higher tensor-valued kernels and their corresponding representer theorems are also desirable. In \cite{equimatr}, matrix-valued kernels and their representer theorems are studied, with applications in image processing. \item[(ii)] Another related application lies in the popular kernel mean embedding theories, in particular, conditional mean embedding. Conditional mean embedding theory essentially gives an operator from one RKHS to another \citep{condmean}. In order to learn such operators, vector-valued regressions plus corresponding representer theorems are used. \end{itemize} In the vector-valued regression framework, an important assumption discussed in representer theorems is the \emph{linear independence} condition \citep{vecfun}. As our construction of the compositional RKHS is based on finite dimensional spaces of polynomials, the linear independence condition is not automatically satisfied, so we address this problem in this paper. Instead of dealing with vector-valued kernels, we will focus only on the special case of scalar-valued (reproducing) kernels, but the issue can be seen clearly already in this special case.
\subsubsection{Linear Independence of Compositional Reproducing Kernels}\label{twist} The compositional RKHS constructed in Section \ref{sec:rkhs} takes the form $\big(\bigoplus_{i=0}^{m}\mathcal H_{2i}^\Gamma, \omega_{m} \big)$, indexed by $m$. Due to the finite dimensional nature of the compositional RKHS, it is not even clear whether distinct points yield distinct functions $\omega_{m}(x_i, \cdot)$ inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}^\Gamma$. We will give a positive answer when $m$ is high enough. Given a set of distinct compositional data points $\{x_i\}_{i=1}^n\subset \Delta^d$, we will show that the corresponding set of reproducing functions $\{\omega_{m}(x_i, \cdot)\}_{i=1}^n$ forms a linearly independent set inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}^\Gamma$ if $m$ is high enough. \begin{theorem}\label{abundencelm} Let $\{x_i\}_{i=1}^n$ be distinct data points on a compositional domain $\Delta^d$. Then there exists a positive integer $M$, such that for any $m>M$, the set of functions $\{\omega_{m}(x_i, \cdot)\}_{i=1}^n$ is a linearly independent set in $\bigoplus_{i=0}^{m}\mathcal H_{2i}^\Gamma$. \end{theorem} \begin{proof} The quotient map $c_\Delta: \mathbb S^d\rightarrow \Delta^d$ factors through a projective space, i.e., $c_\Delta:\ \mathbb S^d\rightarrow\mathbb P^d\rightarrow \Delta^d$. The main idea is to prove a stronger statement: distinct data points in $\mathbb P^d$ give linear independence of \emph{projective kernels} for large enough $m$, where projective kernels are reproducing kernels in $\mathbb P^d$ whose definition is given in Section \ref{abundprf}. Then we construct two vector subspaces $V_1^{m}$ and $V_2^{m}$ and a linear map $g_m$ from $V_1^{m}$ to $V_2^{m}$. The key trick is that the matrix representing the linear map $g_m$ becomes diagonally dominant when $m$ is large enough, which forces the spanning sets of both $V_1^{m}$ and $V_2^{m}$ to be linearly independent.
More details of the proof are given in Section \ref{abundprf}. \end{proof} In the proof of Theorem \ref{abundencelm}, we make use of the homogeneous polynomials $(y_i\cdot {t})^{2m}$, which do \emph{not} live inside a single piece $\mathcal H_{2i}$; thus we had to use the direct sum space $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ for our construction of the RKHS. One might wonder whether the same argument works without projective kernels; however, the matrix might then have $\pm 1$ off-diagonal entries, and so would fail to be diagonally dominant no matter how large $m$ grows. We reduce the problem to linear independence of projective kernels at distinct points, because reproducing kernels at distinct compositional data points are linear combinations of distinct projective kernels. In this way, each off-diagonal term is a power of the inner product of two vectors that are neither antipodal nor identical, so the off-diagonal terms go to zero as $m$ increases. Another consequence of Theorem \ref{abundencelm} is that $\omega_{m}(x_i,\cdot)\neq \omega_{m}(x_j, \cdot)$ for $i\neq j$ when $m$ is large enough. Not only does large enough $m$ separate points at the level of reproducing kernels, it also gives each data point its ``own dimension.'' \subsubsection{Minimal Norm Interpolation and Least Squares Regularization}\label{represent} Once the linear independence is established in Theorem \ref{abundencelm}, it is an easy corollary to establish the representer theorems for minimal norm interpolation and least squares regularization. Nothing is new from the point of view of general RKHS theory, but we include these theorems and proofs for the sake of completeness. Again, we will focus on scalar-valued (reproducing) kernels and functions, instead of vector-valued kernels and regressions.
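The decay mechanism behind Theorem \ref{abundencelm} can be illustrated numerically. Using $(\langle x_i, x_j\rangle)^{2m}$ as a stand-in for the projective kernels (a simplification of ours, not the paper's exact kernel), the off-diagonal Gram entries vanish as $m$ grows while the diagonal stays at one:

```python
pts = [(1.0, 0.0, 0.0), (0.8, 0.6, 0.0), (0.6, 0.0, 0.8)]  # distinct, non-antipodal unit vectors

def gram(m):
    """Gram matrix of the surrogate kernel (x . y)^(2m) at the points above."""
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    return [[dot(x, y) ** (2 * m) for y in pts] for x in pts]

for m in (1, 5, 25):
    G = gram(m)
    off = max(abs(G[i][j]) for i in range(3) for j in range(3) if i != j)
    print(m, off)   # off-diagonal entries decay geometrically in m
```

For $m=25$ the matrix is overwhelmingly diagonally dominant, which is exactly what forces linear independence in the proof.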
However, Theorem \ref{abundencelm} sheds important light on the linear independence issue, and interested readers can generalize these compositional representer theorems to vector-valued cases by following \cite{vecfun}. The first representer theorem we provide is a solution to the minimal norm interpolation problem: for a fixed set of distinct points $\{x_i\}_{i=1}^n$ in $\Delta^d$ and a set of numbers $y=\{y_i\in \mathbb R\}_{i=1}^n$, let $I_y^{m}$ be the set of functions that interpolate the data \[ I_y^m = \{f\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}:\ f(x_i)=y_i\}, \] and our goal is to find $f_0$ with minimum $\ell_2$ norm, i.e., \[ \norm{f_0}=\inf\{\norm{f}, f\in I_y^{m}\}.\] \begin{theorem}\label{minorm} Choose $m$ large enough so that the reproducing kernels $\{\omega_m(x_i,t)\}_{i=1}^n$ are linearly independent; then the unique solution of the minimal norm interpolation problem $\min\{\norm{f},f\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}:\ f(x_i)=y_i\}$ is given by the linear combination of the kernels: $$ f_0(t)=\dis\sum_{i=1}^nc_i \; \omega_{m}(x_i,t), $$ where $\{c_i\}_{i=1}^n$ is the unique solution of the following system of linear equations: $$ \dis\sum_{j=1}^n\omega_{m}(x_i,x_j)c_j=y_i, \ \ 1\leq i\leq n. $$ \end{theorem} \begin{proof} For any other $f$ in $I_y^{m}$, define $g=f-f_0$. By considering the decomposition $\norm{f}^2=\norm{g+f_0}^2=\norm{g}^2+2\langle f_0,g\rangle+\norm{f_0}^2$, one can argue that the cross term $\langle f_0,g\rangle=0$. The details can be found in Section \ref{representerproofs}. We point out that the linear independence of the reproducing kernels guarantees the existence and uniqueness of $f_0$. \end{proof} The second representer theorem is for a more realistic scenario with $\ell_2$ regularization, which has the following objective: \begin{equation}\label{l2obj} \sum_{i=1}^n \left ( f(x_i)-y_i \right )^2+\mu\norm{f}^2.
\end{equation} The goal is to find the $\Gamma$-invariant function $f_{\mu}\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ that minimizes (\ref{l2obj}). The solution to this problem is provided by the following representer theorem: \begin{theorem}\label{regularization} For a set of distinct compositional data points $\{x_i\}_{i=1}^n$, choose $m$ large enough such that the reproducing kernels $\{\omega_{m}(x_i, t)\}_{i=1}^n$ are linearly independent. Then the solution to (\ref{l2obj}) is given by \[ f_{\mu}(t) =\sum_{i=1}^n c_i \; \omega_{m}(x_i,t), \] where $\{c_i\}_{i=1}^n$ is the solution of the following system of linear equations: \[ \mu c_i+\sum_{j=1}^n \omega_{m}(x_i,x_j)c_j=y_i,\ \ 1\leq i\leq n. \] \end{theorem} \begin{proof} The details of this proof can be found in Section \ref{representerproofs}, but we point out how the linear independence condition plays a role here. In the middle of the proof we need to show that $\mu f_{\mu}(t)=\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i)) \omega_{m}(x_i,t)\big]$, where $f_{\mu}(t) =\sum_{i=1}^n \omega_{m}(x_i,t)c_i$. We use the linear independence in Theorem \ref{abundencelm} to establish the equivalence between this linear system in $\{c_i\}_{i=1}^n$ and the one given in the theorem. \end{proof} \subsection{Compositional Exponential Family} With the construction of the RKHS in hand, one can produce exponential families using the technique developed in \cite{cs06}. Recall that for a function space $\mathcal H$ with inner product $\langle\cdot,\cdot\rangle$ on a general topological space $\mathcal X$, whose reproducing kernel is given by $k(x, \cdot)$, the exponential family density $p(x, \theta)$ with parameter $\theta\in \mathcal H$ is given by: \[ p(x, \theta)=\exp\{\langle\theta(\cdot),k(x,\cdot)\rangle-g(\theta) \},\ \] where $g(\theta)=\log \dis\int_{\mathcal{X}}\exp\big(\langle\theta(\cdot),k(x,\cdot) \rangle\big) dx$.
For compositional data we define the density of the $m$th degree exponential family as \begin{equation}\label{expfamily} p_{m}(x, \theta)=\exp\left\{\langle\theta(\cdot),\omega_{m}(x,\cdot)\rangle-g(\theta) \right \},\quad x \in \mathbb S^d/\Gamma, \end{equation} where $\theta\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ and $g(\theta)=\log \int_{\mathbb S^d/\Gamma}\exp(\langle\theta(\cdot),\omega_{m}(x,\cdot) \rangle) dx$. Note that this density can be made more explicit by using homogeneous polynomials. Recall that any function in $\bigoplus_{i=0}^m\mathcal H_{2i}^{\Gamma}$ can be written as a degree $m$ homogeneous polynomial in the \emph{squared} variables by Proposition \ref{mpolynomial}. Thus the density in (\ref{expfamily}) can be simplified to the following form: for $x=(x_1,\dots, x_{d+1})\in \mathbb S^{d}_{\geq 0}$, \begin{equation}\label{mpoly} p_{m}(x,\theta)=\exp\{s_{m}(x_1^2, x_2^2, \dots, x_{d+1}^2; \theta)-g(\theta)\}, \end{equation} where $s_m$ is a polynomial in the squared variables $x_i^2$ with $\theta$ as coefficients. Note that $s_m$ is invariant under ``sign flippings'', so the normalizing constant can be computed via integration over the entire sphere as follows: $$ g(\theta)=\log\int_{\mathbb S^d/\Gamma}\exp(s_m)dx=\log\Big(\frac{1}{|\Gamma|}\int_{\mathbb S^d}\exp(s_m)dx\Big). $$ Figure \ref{fig:expfamily} displays three examples of the compositional exponential distribution. The three densities respectively have the following $\theta$: \begin{eqnarray*} \theta_1 &=& -2 x_1^4 -2 x_2^4 -3 x_3^4 + 9 x_1^2 x_2^2 + 9 x_1^2 x_3^2 -2 x_2^2 x_3^2,\\ \theta_2 &=& - x_1^4 - x_2^4 - x_3^4 - x_1^2 x_2^2 - x_1^2 x_3^2 - x_2^2 x_3^2,\\ \theta_3 &=& - 3 x_1^4 - 2 x_2^4 - x_3^4 +9 x_1^2 x_2^2 - 5 x_1^2 x_3^2 - 5 x_2^2 x_3^2. \end{eqnarray*} The various shapes of the densities in Figure \ref{fig:expfamily} imply that the compositional exponential family can be used to model data with a wide range of locations and correlation structures.
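As a sketch of how (\ref{mpoly}) can be evaluated in practice, the following Python snippet (our own illustration; the sample size and Monte Carlo normalization are ad hoc) computes an unnormalized density with the parameter $\theta_2$ above and approximates the normalizing constant by sampling the whole sphere, which is legitimate because $s_m$ is sign-flip invariant:

```python
import math
import random

def s_theta2(x1, x2, x3):
    """theta_2 from the text, a polynomial in the squared coordinates."""
    a, b, c = x1 * x1, x2 * x2, x3 * x3
    return -(a * a + b * b + c * c) - (a * b + a * c + b * c)

def sphere_point():
    v = [random.gauss(0, 1) for _ in range(3)]
    r = math.sqrt(sum(t * t for t in v))
    return tuple(t / r for t in v)

# Since s_m is sign-flip invariant, (1/|Gamma|) int_{S^2} exp(s_m) dx can be
# estimated by averaging over the whole sphere (up to the fixed area factor).
random.seed(0)
Z = sum(math.exp(s_theta2(*sphere_point())) for _ in range(20000)) / 20000

def p4(x1, x2, x3):
    """Monte Carlo approximation of p_4(x, theta_2), up to the area factor."""
    return math.exp(s_theta2(x1, x2, x3)) / Z

print(p4(1.0, 0.0, 0.0) > 0.0)   # True: the density does not vanish on the boundary
```

The positivity at the vertex $(1,0,0)$ illustrates the non-vanishing boundary behavior emphasized in the text.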
\begin{figure}[h] \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{expfamily1.png} \caption{$p_4(x,\theta_1)$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{expfamily2.png} \caption{$p_4(x,\theta_2)$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{expfamily3.png} \caption{$p_4(x,\theta_3)$} \end{subfigure} \caption{Three example densities from the compositional exponential family. See text for the specification of the parameters $\theta_1, \theta_2$, and $\theta_3$. }\label{fig:expfamily} \end{figure} Further investigation is needed on the estimation of the parameters of the compositional exponential model (\ref{mpoly}), which is suggested as a future direction of research. A natural starting point is maximum likelihood estimation and a regression-based method such as the one discussed by \cite{expdirect}. \section{Discussion}\label{sec:conclude} A main contribution of this work is that we use projective and spherical geometries to reinterpret compositional domains, which allows us to construct reproducing kernels for compositional data points by using spherical harmonics under group actions. With the rapid development of kernel techniques (especially the kernel mean embedding philosophy) in machine learning theory, this work makes it possible to introduce reproducing kernel theories to compositional data analysis. Let us, for example, consider the mean estimation problem for compositional data. Under the kernel mean embedding framework surveyed by \citet{kermean}, one can focus on the kernel mean $\mathbb E[k(X,\cdot)]$ in the function space, rather than a physical mean that exists in the compositional domain. The latter is known to be difficult even to define properly \citep{sphkentreg, sw11}. On the other hand, the kernel mean is endowed with the flexibility and linear structure of the function space.
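As a toy illustration of this kernel-mean viewpoint (ours alone, not the paper's method, and with the surrogate even kernel $(\langle x, y\rangle)^{2m}$ standing in for $\omega_m$), the empirical kernel mean is simply the average of the kernel sections over the sample:

```python
data = [(0.8, 0.6, 0.0), (0.6, 0.0, 0.8), (0.0, 0.6, 0.8)]  # sample, as unit vectors

def k(x, y, m=3):
    """Surrogate even kernel (x . y)^(2m), standing in for omega_m."""
    return sum(a * b for a, b in zip(x, y)) ** (2 * m)

def kernel_mean(t):
    """Empirical kernel mean (1/n) * sum_i k(x_i, .), evaluated at t."""
    return sum(k(x, t) for x in data) / len(data)

print(round(kernel_mean((1.0, 0.0, 0.0)), 6))  # 0.102933
```

The object $t\mapsto$ `kernel_mean(t)` lives in the function space, so averages, differences, and inner products are all well defined without any simplex geometry.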
Although inspired by the kernel mean embedding techniques, we did not address the computation of the kernel mean $\mathbb E[k(X,\cdot)]$ and the cross-variance operator as replacements of the means and variance-covariances in traditional multivariate analysis. The authors will, in forthcoming work, develop further techniques to come back to this issue of kernel means and cross-variance operators for compositional exponential models, by applying deeper functional analysis techniques. Although we only construct reproducing kernels for compositional data, this does not mean that ``higher tensors'' are abandoned from our consideration. In fact, higher tensor-valued reproducing kernels are also included in kernel techniques with applications in manifold regularization \citep{vecman} and shape analysis \citep{matrixvaluedker}. These approaches to higher tensor-valued reproducing kernels indicate further possibilities of regression frameworks between exotic spaces $f:\ \mathcal X\rightarrow \mathcal Y$ with both the source $\mathcal X$ and the target $\mathcal Y$ being non-linear in nature, which extends the intuition of multivariate analysis further to nonlinear contexts, and compositional domains (traditionally modeled by an ``Aitchison simplex'') are an interesting class of examples which can be treated non-linearly. \appendix \section{Supplementary Proofs} \subsection{ Proof of Central Limit Theorems on Integral Squared Errors (ISE) in Section \ref{compdensec}}\label{cltprf} \begin{assumption}\label{kband} For all kernel density estimators and bandwidth parameters in this paper, we assume the following: \begin{itemize} \item[{\bf{H1}}] The kernel function $K:[0,\infty)\rightarrow [0,\infty)$ is continuous and such that both $\lambda_d(K)$ and $\lambda_d(K^2)$ are bounded for $d\geq 1$, where $\lambda_d(K)=2^{d/2-1}\mathrm{vol}(\mathbb S^d)\dis\int_{0}^{\infty}K(r)r^{d/2-1}dr$.
\item[\bf{H2}] If a function $f$ on $\mathbb S^d\subset \mathbb R^{d+1}$ is extended to the entire $\mathbb R^{d+1}\setminus\{0\}$ via $f(x)=f(x/\norm{x})$, then the extended function $f$ needs to have its first three derivatives bounded. \item[\bf{H3}] Assume the bandwidth parameter satisfies $h_n\rightarrow 0$ and $nh_n^d\rightarrow\infty$ as $n\rightarrow\infty$. \end{itemize} \end{assumption} Let $f$ be the function extended from $\mathbb S^d$ to $\mathbb R^{d+1}\setminus\{0\}$ via $f(x)=f(x/\norm{x})$, and let $$ \phi (f,x)= -x^T \nabla f(x)+d^{-1}(\nabla^2 f(x)-x^T (\mathcal H_x f) x) = d^{-1}\mathrm{tr} [\mathcal H_xf(x)], $$ where $\mathcal H_x f$ is the Hessian matrix of $f$ at the point $x$. The term $b_d(K)$ in the statement of Theorem \ref{ourclt} is defined to be: \[ b_d(K)=\dis\frac{\dis\int_0^{\infty}K(r)r^{d/2}dr}{\dis\int_0^{\infty}K(r)r^{d/2-1}dr}. \] The term $\phi(h_n)$ in the statement of Theorem \ref{ourclt} is defined to be: \[ \phi(h_n)=\dis\frac{4b_d(K)^2}{d^2}\sigma_x^2h_n^4. \] Proof of Theorem \ref{ourclt}: \begin{proof} The strategy in \cite{chineseclt} in the directional set-up follows that in \cite{hallclt}, whose key idea is to give asymptotic bounds for degenerate U-statistics, so that one can use martingale theory to derive the central limit theorem. The step where the finite support condition was used in \cite{chineseclt} is in proving the asymptotic bound $E(G_n^2(X_1,X_2))=O(h^{7d})$, where $G_n(x,y)=E[H_n(X,x)H_n(X,y)]$ with $H_n(x,y)=\dis\int_{\mathbb S^d}K_n(z,x)K_n(z,y)dz$ and the centered kernel $K_n(x,y)=K[(1-x'y)/h^2]-E\{K[(1-x'X)/h^2]\}$. In that proof, they show that the term \[ \begin{array}{rcl} T_1&=&\dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)dy\times\\ &&\left\{ \dis\int_{\mathbb S^d}f(z)dz\dis\int_{\mathbb S^d}K[(1-u'x)/h^2]K[(1-u'z)/h^2]du \cdot \int_{\mathbb S^d}K[(1-u'y)/h^2]K[(1-u'z)/h^2]du \right\}^2, \end{array} \] satisfies $T_1=O(h^{7d})$.
During this step, in order to give an upper bound for $T_1$, the finite support condition was used substantially. The idea for avoiding this assumption is based on an observation in \cite{portclt}, which concerns only the directional-linear CLT, whose result cannot be applied directly to the purely directional case. Based on the method provided in Lemma 10 of \cite{portclt}, one can deduce the following asymptotic equivalence: $$ \dis\int_{\mathbb S^d}K^j(\frac{1-x^Ty}{h^2})\phi^i(y)dy\sim h^d\lambda_d(K^j)\phi^i(x), $$ where $\lambda_d(K^j)=2^{d/2-1}\mathrm{vol}(\mathbb S^{d-1})\dis\int_{0}^{\infty}K^j(r)r^{d/2-1}dr$. As a special case we have: $$ \dis\int_{\mathbb S^d}K^2(\dis\frac{1-x^Ty}{h^2})dy\sim h^d\lambda_d(K^2)C,\ \text{with}\ C\ \text{being a positive constant}. $$ Now we proceed with the proof without the finite support condition: $$ \begin{array}{rcl} T_1&=&\dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)dy\\ &&\times\left\{ \dis\int_{\mathbb S^d}f(z)dz\dis\int_{\mathbb S^d}K[(1-u'x)/h^2]K[(1-u'z)/h^2]du \cdot \int_{\mathbb S^d}K[(1-u'y)/h^2]K[(1-u'z)/h^2]du \right\}^2\\ &\sim& \dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)dy\left\{\int_{\mathbb S^d}f(z)\big[\lambda_d(K)h^dK(\frac{1-x^Tz}{h^2})\big]\times \big[\lambda_d(K)h^dK(\dis\frac{1-y^Tz}{h^2})\big]dz \right\}^2\\ &\sim& \lambda_d(K)^4h^{4d}\dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)\big[ \lambda_d(K)h^d K(\dis\frac{1-x^Ty}{h^2})f(y)\big]^2dy\\ &=&\lambda_d(K)^6h^{6d}\dis\int_{\mathbb S^d} \left\{\int_{\mathbb S^d} K^2(\dis\frac{1-x^Ty}{h^2})f^3(y)dy \right\}f(x)dx\\ &\sim&\lambda_d(K)^6h^{6d}\dis\int_{\mathbb S^d} \lambda_d(K^2)h^dC\cdot f^3(x)f(x)dx\\ &=&C\lambda_d(K)^6\lambda_d(K^2)h^{7d}\dis\int_{\mathbb S^d}f^4(x)dx=O(h^{7d}). \end{array} $$ Thus we have proved $T_1=O(h^{7d})$ without the finite support assumption, and the rest of the proof goes through as in \cite{chineseclt}.
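As a quick numerical sanity check of the asymptotic equivalence above (an illustration only, not part of the formal argument), take $d=2$ and the hypothetical kernel choice $K(r)=e^{-r}$, for which $\lambda_2(K)=2^{0}\,\mathrm{vol}(\mathbb S^{1})\int_0^\infty e^{-r}dr=2\pi$, so the claim with $\phi\equiv 1$ reduces to $\int_{\mathbb S^2}K\big((1-x^Ty)/h^2\big)dy\approx 2\pi h^2$:

```python
import math

def kernel_integral_on_sphere(K, h, n=200_000):
    """Approximate the integral of K((1 - x.y)/h^2) over S^2, x fixed.

    By rotational symmetry the surface integral reduces to the 1-D integral
    2*pi * int_{-1}^{1} K((1 - t)/h^2) dt, where t is the cosine of the
    polar angle; we evaluate it with the midpoint rule.
    """
    s, dt = 0.0, 2.0 / n
    for i in range(n):
        t = -1.0 + (i + 0.5) * dt
        s += K((1.0 - t) / h ** 2)
    return 2.0 * math.pi * s * dt

K = lambda r: math.exp(-r)            # illustrative kernel choice (assumption)
h = 0.1
numeric = kernel_integral_on_sphere(K, h)
predicted = h ** 2 * 2.0 * math.pi    # h^d * lambda_d(K) with d = 2
print(numeric, predicted)             # the two values agree closely
```

Shrinking $h$ further makes the relative error smaller, in line with the stated $h\to 0$ asymptotics.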
\end{proof} Observe the identity: \begin{equation}\label{csden} \dis\int_{\mathbb S^d_{\geq 0}}(\hat{p}_n-p)^2dx=\dis|\Gamma|\int_{\mathbb S^d}(\hat{f}_n-\tilde{p})^2dy; \end{equation} the CLT for the compositional ISE then follows from the identity (\ref{csden}) and our proof of the CLT for the spherical ISE without finite support conditions on the kernels. \subsection{Proofs of Shadow Monomials in Section \ref{sec:rkhs}}\label{reproproof} Proof of Proposition \ref{killodd}: \begin{proof} Let $k$ be an index for which $\alpha_k$ is odd. Splitting the sum over $s_k=\pm 1$, a direct computation yields: $$ \begin{array}{rcl} (\prod_{i=1}^{d+1}x_i^{\alpha_i})^{\Gamma}&=&\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\}}\prod_{i=1}^{d+1}(s_ix_i)^{\alpha_i}\\ &=&\dis\frac{1}{|\Gamma|}\Big[\sum_{s_i\in \{\pm 1\}}\prod_{i\neq k}(s_ix_i)^{\alpha_i}\, x_k^{\alpha_k}+\sum_{s_i\in \{\pm 1\}}\prod_{i\neq k}(s_ix_i)^{\alpha_i}\, (-x_k)^{\alpha_k}\Big]\\ &=&x_k^{\alpha_k}\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\}}\prod_{i\neq k}(s_ix_i)^{\alpha_i}-x_k^{\alpha_k}\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\}}\prod_{i\neq k}(s_ix_i)^{\alpha_i} \\ &=&0, \end{array} $$ where the cancellation uses that $\alpha_k$ is odd, so $(-x_k)^{\alpha_k}=-x_k^{\alpha_k}$. \end{proof} \subsection{Proof of Linear Independence of Reproducing Kernels in Theorem \ref{abundencelm}}\label{abundprf} We sketch a slightly more detailed (though not complete) proof: \begin{proof} This is the most technical lemma in this article. We sketch the philosophy of the proof here, which can be understood topologically. Recall that we can produce a projective space $\mathbb P^d$ by identifying every pair of antipodal points of a sphere $\mathbb S^d$ (identifying $x$ with $-x$); in other words $\mathbb P^d=\mathbb S^d/\mathbb Z_2$, where $\mathbb Z_2=\{0,1\}$ is a cyclic group of order $2$. Then we can define a projective kernel in $\mathcal H_i\subset L^2(\mathbb S^d)$ to be $k^p_i(x, \cdot)=[k_i(x,\cdot)+k_i(-x,\cdot)]/2$. We also denote the projective kernel inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ by $\underline{k}_m^p(x, \cdot)=\sum_{i=0}^{m}k^p_{2i}(x,\cdot)$.
Now we spread out the data set $\{x_i\}_{i=1}^n$ by the ``spread-out'' construction in Section \ref{sec:spread}, and denote the spread-out data set by $\{\Gamma \cdot x_i\}_{i=1}^n=\{{c^{-1}_{\Delta}(x_i)}\}_{i=1}^{n}$ (a multiset rather than a set, because of repetitions). A compositional reproducing kernel is a sum of spherical reproducing kernels over the points of ${c^{-1}_{\Delta}(x_i)}$, divided by the number of elements of ${c^{-1}_{\Delta}(x_i)}$. This data set ${c^{-1}_{\Delta}(x_i)}$ has antipodal symmetry, so a compositional kernel is a linear combination of projective kernels. Notice that \emph{different} compositional kernels are linear combinations of \emph{different} projective kernels. It therefore suffices to show the linear independence of projective kernels at distinct data points for large enough $m$, which implies the linear independence of the compositional kernels $\{\underline{k}^{\Gamma}_{m}(x_i, \cdot)\}_{i=1}^n$. We now focus on the linear independence of projective kernels. A projective kernel can be seen as a reproducing kernel for a point in $\mathbb P^d$. For a set of distinct points $\{y_i\}_{i=1}^l\subset \mathbb P^{d}$, we will show that the corresponding set of projective kernels $\{\underline{k}^{p}_{m}(y_i, \cdot)\}_{i=1}^l\subset \bigoplus_{i=0}^{m}\mathcal H_{2i}$ is linearly independent for any integer $l$ and large enough $m$. Consider the two vector subspaces $V_1^{m}=\spn \big[ \{(y_i\cdot {t})^{2m}\}_{i=1}^l\big]$ and $V_2^{m}=\spn\big[\{\underline{k}^{p}_{m}(y_i, t)\}_{i=1}^l\big]$, both inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}\subset L^2(\mathbb S^d)$. Then we can define a linear map $h_{m}:\ V_1^{m}\rightarrow V_2^{m}$ by setting $h_{m}((y_i\cdot {t})^{2m})=\sum_{j=1}^l<(y_i\cdot {t})^{2m},\underline{k}^{p}_{m}(y_j, t)>\underline{k}^{p}_{m}(y_j, t)$. This linear map $h_{m}$ is represented by an $l\times l$ symmetric matrix whose diagonal elements are $1$'s and whose off-diagonal elements are $(y_i\cdot y_j)^{2m}$.
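To make the diagonal dominance phenomenon concrete, the following sketch (with three arbitrarily chosen, hypothetical unit vectors; it is not part of the proof) builds the $l\times l$ matrix with unit diagonal and off-diagonal entries $(y_i\cdot y_j)^{2m}$ and checks that it becomes strictly diagonally dominant once $m$ is large:

```python
# Three pairwise non-antipodal unit vectors in R^3 (arbitrary illustration).
ys = [(1.0, 0.0, 0.0),
      (0.8, 0.6, 0.0),
      (0.6, 0.0, 0.8)]

def h_matrix(ys, m):
    """Matrix of h_m: unit diagonal, off-diagonal entries (y_i . y_j)^(2m)."""
    l = len(ys)
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    return [[1.0 if i == j else dot(ys[i], ys[j]) ** (2 * m)
             for j in range(l)] for i in range(l)]

def diagonally_dominant(M):
    return all(abs(M[i][i]) > sum(abs(M[i][j]) for j in range(len(M)) if j != i)
               for i in range(len(M)))

# Since |y_i . y_j| < 1 off the diagonal, the off-diagonal entries decay
# geometrically in m, so dominance holds for every sufficiently large m.
print(diagonally_dominant(h_matrix(ys, 1)), diagonally_dominant(h_matrix(ys, 10)))
```

For these particular points dominance fails at $m=1$ but holds already at $m=10$, illustrating why a ``large enough $m$'' is needed in the lemma.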
Notice that $y_i\neq y_j$ in $\mathbb P^d$, which means that they are neither equal nor antipodal to each other in $\mathbb S^d$, thus $|y_i\cdot y_j|<1$. When $m$ is large enough, all off-diagonal elements go to zero while the diagonal elements stay constant, so the matrix representing $h_{m}$ becomes \emph{diagonally dominant}, hence of full rank. When the linear map $h_{m}$ has full rank, the spanning sets $ \{(y_i\cdot {t})^{2m}\}_{i=1}^l$ and $\{\underline{k}^{p}_{m}(y_i, t)\}_{i=1}^l$ must be bases for $V_1^{m}$ and $V_2^{m}$ respectively, so the set of projective kernels $\{\underline{k}^{p}_{m}(y_i, t)\}_{i=1}^l$ must be linearly independent when $m$ is large enough. \end{proof} \subsection{Proof of Representer Theorems in Section \ref{represent}}\label{representerproofs} Proof of Theorem \ref{minorm} on minimal norm interpolation: \begin{proof} Note that the set $I_y^{m}=\{f\in \bigoplus_{i=0}^{m}\mathcal H_{2i}:\ f(x_i)=y_i\}$ is non-empty, because the $f_0$ defined by the linear system of equations is naturally in $I_y^{m}$. Let $f$ be any other element in $I_y^{m}$ and define $g=f-f_0$; then we have: $$ \norm{f}^2=\norm{g+f_0}^2=\norm{g}^2+2<f_0,g>+\norm{f_0}^2. $$ Since $g\in \bigoplus_{i=0}^{m}\mathcal H_{2i}$ and $g(x_i)=0$ for $1\leq i\leq n$, we have: $$ \begin{array}{rcl} <f_0,g>&=&<\sum_{i=1}^n\omega_{m}(x_i,\cdot)c_i,g(\cdot)>\\ &=&\dis\sum_{i=1}^nc_i<\omega_{m}(x_i,\cdot),g(\cdot)>\\ &=&\dis\sum_{i=1}^nc_ig(x_i)=0. \end{array} $$ Thus $\norm{f}^2=\norm{g+f_0}^2=\norm{g}^2+\norm{f_0}^2$, which implies that $f_0$ is the solution to the minimal norm interpolation problem. \end{proof} Proof of Theorem \ref{regularization}, on regularization problems: \begin{proof} First define the loss functional $E(f)=\sum_{i=1}^n|f(x_i)-y_i|^2+\mu\norm{f}^2$.
For any $\Gamma$-invariant function $f=f^{\Gamma}\in \bigoplus_{i=0}^{m}\mathcal H_{2i}$, let $g=f-f_{\mu}$; then a simple computation yields: $$ \dis E(f)=E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2 -2\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)+2\mu<f_{\mu},g>+\mu\norm{g}^2. $$ We want to show $\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)=\mu<f_{\mu},g>$, and an equivalent way of writing this equality is: $$ \dis\sum_{i=1}^n<(y_i-f_{\mu}(x_i))\omega_{m}(x_i,t), g(t)>=\mu<f_{\mu},g>. $$ We now claim that $\mu f_{\mu}(t)=\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t)\big]$, which implies the above equality. To prove this claim, plug the linear combination $f_{\mu}=\sum_{i=1}^nc_i\cdot \omega_{m}(x_i,t)$ into the claim; this produces a system of linear equations in $\{c_i\}_{i=1}^n$, so the proof of the claim reduces to checking this system. Since $\{\omega_{m}(x_i,t)\}_{i=1}^n$ is a linearly independent set, the system of linear equations produced by the claim holds if and only if $\{c_i\}_{i=1}^n$ satisfy $\mu c_k+\sum_{i=1}^n c_i\cdot \omega_{m}(x_i,x_k)=y_k$ for every $k$ with $1\leq k\leq n$, which is exactly the condition of this theorem; the equivalence of the two systems of linear equations follows from the linear independence of the set $\{\omega_{m}(x_i, t)\}_{i=1}^n$. Therefore we conclude that the claim $\mu f_{\mu}(t)=\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t)\big]$ is true.
To finish the proof of this theorem, notice that $$ \begin{array}{rcl} \dis E(f)&=&E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2 -2\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)+2\mu<f_{\mu},g>+\mu\norm{g}^2\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2+2\big[\mu<f_{\mu},g>-\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)\big]\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2+2\big[\mu<f_{\mu},g>-\sum_{i=1}^n<(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t), g(t)>\big]\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2+2\big[<\underbrace{\big(\mu f_{\mu}(t)-\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t)\big]\big)}_{=0}, g(t)>\big]\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2. \end{array} $$ The term $\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2$ in the above equality is always non-negative, thus $E(f_{\mu})\leq E(f)$, then the theorem follows. \end{proof} \bibliographystyle{asa} \bibliography{geometrycomp.bib} \end{document} \documentclass[aos]{imsart} \RequirePackage{amsthm,amsmath,amsfonts,amssymb} \RequirePackage[numbers]{natbib} \RequirePackage[colorlinks,citecolor=blue,urlcolor=blue]{hyperref} \RequirePackage{graphicx, subcaption} \RequirePackage{xypic} \usepackage{mathtools} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \usepackage[bottom]{footmisc} \startlocaldefs \newtheorem{axiom}{Axiom} \newtheorem{claim}[axiom]{Claim} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}{Remark} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{remark} \newtheorem{definition}{Definition} \newtheorem*{example}{Example} \newtheorem*{fact}{Fact} \endlocaldefs \newcommand{\F}{\mathfrak F} \newcommand{\G}{\mathfrak G} \newcommand{\N}{\mathfrak N} \newcommand{\QQ}{\mathbb Q} \newcommand{\ZZ}{\mathbb Z} \newcommand{\CC}{\mathbb C} \newcommand{\RR}{\mathbb R} \newcommand{\HH}{\mathbb H} \newcommand{\SSS}{\mathcal S} \newcommand{\sL}{\mathcal L} \newcommand{\cH}{{\mathcal 
H}} \newcommand{\A}{\mathfrak A} \newcommand{\B}{\mathfrak B} \newcommand{\mega}{\overline{\omega}} \newcommand{\id}{\operatorname{id}} \newcommand{\im}{\operatorname{im}} \newcommand{\Op}{\operatorname{Op}} \newcommand{\Sym}{\operatorname{Sym}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\Con}{\operatorname{Con}} \newcommand{\QCon}{\operatorname{QCon}} \newcommand{\Diff}{\operatorname{Diff}} \newcommand{\Isom}{\operatorname{Isom}} \newcommand{\isom}{\operatorname{\mathfrak{isom}}} \newcommand{\Homeo}{\operatorname{Homeo}} \newcommand{\Teich}{\operatorname{Teich}} \newcommand{\Homeq}{\operatorname{Homeq}} \newcommand{\Homeqbar}{\operatorname{\overline{Homeq}}} \newcommand{\Stab}{\operatorname{Stab}} \newcommand{\Nbd}{\operatorname{Nbd}} \newcommand{\colim}{\operatornamewithlimits{colim}} \newcommand{\acts}{\curvearrowright} \newcommand{\PGL}{\operatorname{PGL}} \newcommand{\GL}{\operatorname{GL}} \mathchardef\mhyphen"2D \newcommand{\free}{\mathrm{free}} \newcommand{\refl}{\mathrm{refl}} \newcommand{\trefl}{\mathrm{trefl}} \newcommand{\tame}{\mathrm{tame}} \newcommand{\wild}{\mathrm{wild}} \newcommand{\Emb}{\operatorname{Emb}} \newcommand{\Aut}{\operatorname{Aut}} \newcommand{\Out}{\operatorname{Out}} \newcommand{\Maps}{\operatorname{Maps}} \newcommand{\rel}{\operatorname{rel}} \newcommand{\tildetimes}{\mathbin{\tilde\times}} \newcommand{\h}{\overset h} \newcommand{\spn}{\mathrm{span}} \newcommand{\dis}{\displaystyle} \newcommand{\orb}{\mathrm{Orbit}} \newcommand{\catname}[1]{\textbf{#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\mtfr}[1]{\mathfrak{#1}} \newcommand{\dist}{\mathrm{dist}} \newcommand{\ant}{\mathrm{ant}} \newcommand{\var}{\mathrm{Var}} \begin{document} \begin{frontmatter} \title{Reproducing Kernels and New Approaches in Compositional Data Analysis } \runtitle{New Foundation of Compositional Data Analysis} \begin{aug} \author[A]{\fnms{Binglin} \snm{Li}\ead[label=e1]{[email protected]}} and \author[B]{\fnms{Jeongyoun} 
\snm{Ahn}\ead[label=e2]{[email protected]}} \address[A]{Department of Statistics, University of Georgia, \printead{e1}} \address[B]{Department of Industrial and Systems Engineering, KAIST, \printead{e2}} \end{aug} \begin{abstract} Compositional data, such as human gut microbiomes, consist of non-negative variables for which only the values relative to the other variables are available. Analyzing such data requires a careful treatment of the geometry of the domain. A common geometric model for compositional data is a regular simplex, and the majority of existing approaches rely on log-ratio or power transformations to overcome the innate simplicial geometry. In this work, based on the key observation that compositional data are projective in nature, and on the intrinsic connection between projective and spherical geometry, we re-interpret the compositional domain as the quotient topological space of a sphere modded out by a group action; compositional statistics thus becomes ``group-invariant'' directional statistics. This re-interpretation allows us to understand the function theory on compositional domains in terms of that on spheres, and to use spherical harmonics theory (along with reflection group actions) for the construction of ``compositional reproducing kernel Hilbert spaces''. This construction of an RKHS theory for compositional data opens wide research avenues for future methodological developments; in particular, modern insights from machine learning (e.g. kernel methods) can be introduced to compositional data analysis. Supported by a representer theorem, one can apply kernel methods such as support vector machines to compositional data. The polynomial nature of the compositional RKHS has both theoretical and computational benefits. Applications such as nonparametric density estimation and the compositional kernel exponential family are also discussed.
\end{abstract} \begin{keyword} \kwd{Machine Learning} \kwd{Microbiome} \kwd{RKHS} \kwd{Spherical harmonics} \kwd{Directional Statistics} \end{keyword} \end{frontmatter} \section{Introduction}\label{sec:intro} The recent popularity of human gut microbiome research has presented many data-analytic, statistical challenges \citep{calle2019statistical}. Among the many features of microbiome, or meta-genomic, data, we address their \emph{compositional} nature in this work. Compositional data consist of $n$ observations of $(d+1)$ non-negative variables whose values represent relative proportions with respect to the other variables in the data. Compositional data are commonly observed in many scientific fields, such as biochemistry, ecology, finance, and economics, to name just a few. The most notable aspect of compositional data is the restriction on their domain, which comes from the condition that the sum of the variables is fixed. The compositional domain is not a classical vector space; instead it can be modeled by the (regular) simplex \begin{equation}\label{eq:simplex} \Delta^d=\{(x_1,\dots,x_{d+1})\in \mathbb R^{d+1} \;| \sum_{i=1}^{d+1}x_i=1,\ x_i\geq 0, \forall i \}, \end{equation} which is topologically compact. The inclusion of zeros in (\ref{eq:simplex}) is crucial, as most microbiome data have a substantial number of zeros. Arguably the most prominent approach to handling data on a simplex is to take a log-ratio transformation \citep{aitch86}, for which one has to consider only the open interior of $\Delta^d$, denoted by $\mathcal S^d$. Zeros are usually taken care of by adding a small number; however, it has been noted that the results of an analysis can depend strongly on how the zeros are replaced \citep{lubbe2021comparison}. \citet{micomp} pointed out ``the dangers inherent in ignoring the compositional nature of the data'' and argued that microbiome datasets must be treated as compositions at all stages of analysis.
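To fix ideas, here is a minimal sketch (with made-up, hypothetical counts) of how a raw microbiome-style count vector becomes a compositional observation in $\Delta^d$, retaining its zeros:

```python
# Hypothetical raw taxa counts for one subject (made-up numbers, one zero).
counts = [120, 0, 45, 835]
total = sum(counts)
composition = [c / total for c in counts]   # a point of the closed simplex Delta^3

# The normalized vector is non-negative and sums to one, so it lies in the
# simplex of equation (eq:simplex); the zero count stays an exact zero.
assert abs(sum(composition) - 1.0) < 1e-12 and min(composition) >= 0.0
print(composition)
```

The retained zero is exactly the kind of boundary point that log-ratio methods must perturb, while the geometric approach of this paper keeps it as-is.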
Recently, approaches that handle compositional data without any transformation have been gaining popularity \citep{li2020s, rasmussen2020zero}. The approach proposed in this paper is to construct reproducing kernels for compositional data by interpreting compositional domains via projective and spherical geometries. \subsection{New Trends of Techniques in Machine Learning}\label{machine} Besides the motivation from microbiology, another source of inspiration for this work is the current exciting development in statistics and machine learning, in particular the rising popularity of higher tensors and kernel techniques, which allow multivariate techniques to extend to exotic structures beyond traditional vector spaces, e.g., graphs (\cite{graphrkhs}), manifolds (\cite{vecman}) or images (\cite{tensorbrain}). This work serves as an attempt to construct reproducing kernel structures for compositional data, so that new philosophies from modern machine learning can be introduced to this classical field of statistics. The approach in this work is to model compositional data as a group quotient of a sphere, $\mathbb S^d/\Gamma$ (see (\ref{allsame})), which gives a new connection between compositional data analysis and directional statistics. The idea of representing data by using tensors and frames is not new in directional statistics (see \cite{ambro}), but the authors find it more convenient to construct reproducing kernels for $\mathbb S^d/\Gamma$ (the reason is given in Section \ref{whyrkhs}). We do want to mention that the construction of reproducing kernels for compositional data indicates a potential new paradigm for compositional data analysis: the traditional approach aims to find direct analogues of multivariate concepts, like the mean, the variance-covariance matrix, and regression frameworks built on those concepts. However, finding the mean point over a non-linear space, e.g.
on a manifold, is not an easy job, and in the worst case the mean point might not even exist on the underlying space (e.g. the mean of the uniform distribution on a unit circle does \emph{not} lie on the circle). However, from the perspective of the new philosophy of kernel mean embedding (surveyed in \cite{kermean}), formulated in our language: instead of finding the ``\emph{physical}'' point for the mean of a distribution, one should do statistics \emph{distributionally}; in other words, the mean or expectation is considered as a \emph{linear functional} on the RKHS, and this functional is represented by an actual function in the Hilbert space, which is referred to as the ``kernel mean $\mathbb E(k(X,\cdot))$'' in \cite{kermean}. Instead of trying to find another compositional point as the empirical mean of a compositional data set, one can construct the ``kernel mean'' as a replacement of the traditional empirical mean, which is just $\big(\sum_{i=1}^nk(X_i,\cdot)\big)/n$. Moreover, one can also construct the analogue of the variance-covariance matrix purely from kernels; in fact, in \cite{fbj09}, the authors considered the ``Gram matrices'' constructed out of reproducing kernels as consistent estimators of cross-covariance operators (these operators play the role of covariance and cross-covariance matrices in classical Euclidean spaces). Although we only construct reproducing kernels for compositional data, this does not mean that ``higher tensors'' are abandoned in our consideration. In fact, higher-tensor-valued reproducing kernels are also included in kernel techniques, with applications in manifold regularization (\cite{vecman}) and shape analysis (\cite{matrixvaluedker}).
These approaches to higher-tensor-valued reproducing kernels indicate further possibilities for regression frameworks $f:\ \mathcal X\rightarrow \mathcal Y$ between exotic spaces, with both the source $\mathcal X$ and the target $\mathcal Y$ non-linear in nature; this extends the intuition of multivariate analysis further to nonlinear contexts, and compositional domains (traditionally modeled by an ``Aitchison simplex'') are an interesting class of examples that can be treated non-linearly. Since we remodel the compositional domain using projective/spherical geometry, the compositional domain is \emph{not} treated as a vector space, but as a quotient topological space $\mathbb S^d/\Gamma$. Instead of ``putting a linear structure on an Aitchison simplex'' (see \cite{aitch86}), or using the square root transformation (which still starts from an Aitchison simplex), we choose to use modern machine learning techniques to ``linearize'' compositional data points through kernel techniques (and possibly higher-tensor constructions), so that one can still do ``multivariate analysis''. Our construction in this work initiates such an attempt to introduce these new streams of kernel and tensor techniques from statistical learning theory into compositional data analysis. \subsection{Highlights of the Present Work} Our contribution in this paper is threefold. First, we propose a new geometric foundation for compositional data analysis, $\mathbb P^d_{\geq 0}$, a subspace of a full projective space $\mathbb P^d$. Based on the close connection of spheres with projective spaces, we will also describe $\mathbb P^d_{\geq 0}$ in terms of $\mathbb S^d/\Gamma$, a sphere modded out by a reflection group action, whose fundamental domain is the first orthant $\mathbb S^d_{\geq 0}$ (a totally different reason for using ``$\mathbb S^d_{\geq 0}$'' than in the traditional approach).
Secondly, based on the new geometric foundation of compositional domains, we give a new compositional density estimation theory by making use of the long-developed spherical density estimation theory, and we give a central limit theorem for integrated squared errors, which leads to a goodness-of-fit test. Thirdly, also through this new geometric foundation, function spaces on compositional domains can be related to those on spheres. The space $L^2(\mathbb S^d)$ of square integrable functions on the sphere is the focus of a classical subject in mathematics and physics called ``spherical harmonics''. Moreover, spherical harmonics theory tells us that each Laplacian eigenspace of $L^2(\mathbb S^d)$ is a reproducing kernel Hilbert space, and this allows us to construct reproducing kernels for compositional data points via ``orbital integrals'', which opens a door for machine learning techniques to be applied to compositional data. We also propose a compositional exponential family as a general distributional family for compositional data modeling. \subsection{Why Projective and Spherical Geometries?}\label{whysph} According to Aitchison \cite{ai94}, ``any meaningful function of a composition must satisfy the requirement $f(ax)=f(x)$ for any $a\neq 0$''. In geometry and topology, a space whose (meaningful) functions all satisfy this requirement is exactly a \emph{projective space}, denoted by $\mathbb P^d$; therefore, projective geometry is the natural candidate for modeling compositional data, rather than a simplex. Since the coordinates of a compositional data point cannot have opposite signs, a compositional domain is in fact a ``positive cone'' $\mathbb P_{\geq 0}^d$ inside a full projective space. A key property of projective spaces is that stretching or shrinking the length of a vector in $\mathbb P^d$ does \emph{not} alter the point. Thus one can stretch a point in $\Delta^d$ to a point in the first orthant sphere by dividing it by its $\ell_2$ norm.
Figure \ref{fig:spread} illustrates this stretching in action (from the projective geometry point of view, ``stretching'' is not even a transformation, since it does not change the projective point). In short, projective geometry is the more natural model for compositional data according to Aitchison's original philosophy in \cite{ai94}. However, spheres are easier to work with: mathematically, the function space on spheres is a well-studied subject through spherical harmonics theory, and statistically, spheres connect with directional statistics in a natural way. Our compositional domain $\mathbb P^d_{\geq 0}$ can be naturally identified with $\mathbb S^d/\Gamma$, a sphere modded out by a reflection group action. This reflection group $\Gamma$ acts on the sphere $\mathbb S^d$ by reflections, and the \emph{fundamental domain} of this action is $\mathbb S^d_{\geq 0}$ (the notions of group actions, fundamental domains and reflection groups are all discussed in Section \ref{sec:sphere}). Thus our connection with the first orthant sphere $\mathbb S^d_{\geq 0}$ is a natural consequence of projective geometry and its connection with spheres under group actions, having nothing to do with square root transformations. \subsection{Why Reproducing Kernels?}\label{whyrkhs} As explained in Section \ref{machine}, we strive to use new ideas of tensors and kernel techniques from machine learning to propose another framework for compositional data analysis, and Section \ref{whysph} explains the new connection with spherical geometry and directional statistics. The idea of using tensors to represent data points is not new in directional statistics (see \cite{ambro}). So a naive idea would be to mimic directional statistics in the study of ambiguous rotations: in \cite{ambro}, the authors studied how to do statistics over the coset space $SO(3)/K$, where $K$ is a finite subgroup of $SO(3)$.
In their case, the subgroup $K$ must belong to a special class of subgroups of \emph{special} orthogonal groups, and within this class they manage to study the corresponding tensors and frames, which give the inner product structures between different data points. However, in our case a compositional domain is $\mathbb S^d/\Gamma=O(d)\backslash O(d+1)/\Gamma$, a double coset space. Unlike \cite{ambro}, which only considered the case $d=3$, our dimension $d$ is completely general; moreover, our reflection group $\Gamma$ is \emph{not} a subgroup of any special orthogonal group, so the constructions of tensors and frames in \cite{ambro} do not apply to our situation directly. Part of the novelty of this work is to get around this issue by making use of the reproducing kernel Hilbert space (RKHS) structures on spheres and ``averaging out'' the group action at the reproducing kernel level, which in turn gives us a reproducing kernel structure on compositional domains. Once we have an RKHS in hand, we can ``add'' and take the ``inner product'' of two data points, so our linearization strategy can also be regarded as a combination of ``the averaging approach'' and ``the embedding approach'' of \cite{ambro}. In fact, an abstract function space together with reproducing kernels plays an important role in modern data analysis. Below we provide some philosophical motivation for the importance of the function space over the underlying data set: \begin{itemize} \item[(a)] Hilbert spaces of functions are naturally linear with an inner product structure. With the existence of (reproducing) kernels, data points are naturally incorporated into the function space, which leads to interesting interactions between the data set and the functions defined over it. There is a large literature on embedding distributions into an RKHS, e.g. \cite{disemd}, and on using reproducing kernels to recover exponential families, e.g. \cite{expdual}.
RKHSs have also been used to recover classical statistical tests (e.g. the goodness-of-fit test in \cite{kergof}) and regression \cite{rkrgr}. Those works do not concern the analysis of the function space itself, but primarily focus on data analysis on the underlying data set; nevertheless, all of them proceed by passing to an RKHS. This reflects the increasing recognition of the importance of abstract function spaces with (reproducing) kernel structures. \item[(b)] Mathematically speaking, given a geometric space $M$, the function space on $M$ can recover the underlying geometric space $M$ itself, and this principle has played a big role in modern geometry. Modern algebraic geometry, following the philosophy of Grothendieck, is based upon this insight. Function spaces can be generalized to matrix-valued function spaces, and this generalization gives rise to non-commutative RKHSs (used in shape analysis in \cite{matrixvaluedker}); moreover, non-commutative RKHS theory is connected with free probability theory (see \cite{ncrkhs}), which has been used in random effect and linear mixed models (see \cite{fj19} and \cite{princbulk}). \end{itemize} \subsubsection{Structure of this paper} We briefly describe the content of the main sections of this article: \begin{itemize} \item In Section \ref{sec:sphere}, we rebuild the geometric foundation of compositional domains by using projective and spherical geometry. We also point out that the old model using the closed simplex $\Delta^d$ is topologically the same as the new foundation. Diagrammatically, we establish the following topological equivalences: \begin{equation}\label{4things} \Delta^d\cong \mathbb P^d_{\geq 0}\cong \mathbb S^d/\Gamma\cong\mathbb S^d_{\geq 0}, \end{equation} \noindent where $\mathbb S^d_{\geq 0}$ is the first orthant sphere, which is also the fundamental domain of the group action $\Gamma\acts \mathbb S^d$. All four spaces in (\ref{4things}) will be referred to as ``compositional domains''.
As a direct application, we propose a compositional density estimation method based on spherical density estimation theory via a spread-out construction through the quotient map $\pi:\ \mathbb S^d\rightarrow\mathbb S^d/\Gamma$, and we prove that the integrated squared error of our compositional density estimator satisfies a central limit theorem (Theorem \ref{ourclt}), which can be used for goodness-of-fit tests. \item Section \ref{sec:rkhs} is devoted to constructing compositional reproducing kernel Hilbert spaces. Our construction relies on the reproducing kernel structures on spheres, which are given by spherical harmonics theory. Wahba \cite{wahba1981d} constructed splines using reproducing kernel structures on $\mathbb S^2$ (the $2$-dimensional sphere), also using the spherical harmonics theory of \cite{sasone}, which only treats the $2$-dimensional case. Our theory deals with the general $d$-dimensional case, so we need the full power of spherical harmonics theory, which is reviewed at the beginning of Section \ref{sec:rkhs}; we then use spherical harmonics theory to construct compositional reproducing kernels using an ``orbital integral'' type of idea. \item Section \ref{sec:app} gives a couple of applications of our construction of compositional reproducing kernels. (i) The first example is the representer theorem, with one caveat: our RKHS is finite dimensional, consisting of degree-$2m$ homogeneous polynomials with no transcendental functions, so linear independence of the kernels at distinct data points is not automatic; however, we show that when the degree $m$ is high enough, linear independence still holds. Our statement of the representer theorem is not new purely from the RKHS-theoretic point of view. Our point is to demonstrate that intuitions from traditional statistical learning can still be used in compositional data analysis, with some extra care.
(ii) Second, we construct the compositional exponential family, which can be used to model the underlying distribution of compositional data. The flexible construction will enable us to utilize the distribution family in many statistical problems such as mean tests. \end{itemize} \section{New Geometric Foundations of Compositional Domains}\label{sec:sphere} \begin{figure} \centering \includegraphics[width = .4\textwidth]{SqrtCompare.png} \caption{Illustration of the stretching action on $\Delta^1$ to $\mathbb S^1$. Note that the stretching keeps the relative compositions, whereas the square root transformation fails to do so. } \label{fig:stretch} \end{figure} In this section, we give a new interpretation of compositional domains as a cone $\mathbb P^d_{\geq 0}$ in a projective space, based on which compositional domains can be interpreted as spherical quotients by reflection groups. This connection will yield a ``spread-out'' construction on spheres, and we demonstrate an immediate application of this new approach to compositional density estimation. \subsection{Projective and Spherical Geometries and a Spread-out Construction}\label{sec:spread} Compositional data consist of relative proportions of $d+1$ variables, which implies that each observation belongs to a projective space. A $d$-dimensional projective space $\mathbb P^d$ is the set of one-dimensional linear subspaces of $\mathbb R^{d+1}$. A one-dimensional subspace of a vector space is just a line through the origin, and in projective geometry, all points on a line through the origin are regarded as the same point in a projective space. In contrast to the classical linear coordinates $(x_1, \cdots,x_{d+1})$, a point in $\mathbb P^d$ can be represented by a projective coordinate $(x_1 : \cdots : x_{d+1})$, with the following property \[ (x_1 : x_2: \cdots : x_{d+1}) = (\lambda x_1 : \lambda x_2: \cdots : \lambda x_{d+1}), ~~~\text{for any } \lambda \ne 0.
\] It is natural that an appropriate ambient space for compositional data is \emph{non-negative projective space}, which is defined as \begin{equation}\label{eq:proj} \mathbb P^d_{\ge 0} = \{(x_1 : x_2: \cdots : x_{d+1})\in \mathbb P^d \;| \; (x_1 : x_2: \cdots : x_{d+1}) = (|x_1| : |x_2|: \cdots : |x_{d+1}|)\}. \end{equation} It is clear that the common representation of compositional data with a (closed) simplex $\Delta^d$ in (\ref{eq:simplex}) is in fact equivalent to (\ref{eq:proj}), thus we have the first equivalence: \begin{equation}\label{projtosimp} \mathbb P^d_{\geq 0}\cong \Delta^d. \end{equation} Let $\mathbb S^d$ denote the $d$-dimensional unit sphere, defined as \[ \mathbb S^d=\left\{(x_1,x_2,\dots, x_{d+1})\in \mathbb R^{d+1} \; | \sum_{i=1}^{d+1}x_i^2=1\right\}, \] and let $\mathbb S^d_{\geq 0}$ denote the first orthant of $\mathbb S^d$, the subset in which all coordinates are non-negative. The following lemma states that $\mathbb S^d_{\geq 0}$ can serve as a new domain for compositional data, as there exists a bijective map between $\Delta^d$ and $\mathbb S^d_{\geq 0}$. \begin{lemma}\label{compcone} There is a canonical identification of $\Delta^d$ with $\mathbb S^d_{\geq 0}$, namely, $$ \xymatrix{\Delta^d\ar@<.4ex>[r]^f& \mathbb S^d_{\geq 0}\ar@<.4ex>[l]^g}, $$ where $f$ is the inflation map and $g$ is the contraction map, with both $f$ and $g$ being continuous and inverse to each other. \end{lemma} \begin{proof}It is straightforward to construct the inflation map $f$. For $v \in \Delta^d$, it is easy to see that $f(v) \in \mathbb S^d_{\ge 0}$, where $ f(v) = v/\|v\|_2 $ and $\|v\|_2 $ is the $\ell_2$ norm of $v$. Note that the inflation map ensures that $f(v)$ represents the same projective point as $v$. To construct the contraction map $g$, for $s\in \mathbb S^d_{\geq 0}$ we define $ g(s) = s / \|s\|_1, $ where $\|s\|_1$ is the $\ell_1$ norm of $s$, and see that $g(s)\in\Delta^d$. One can easily check that both $f$ and $g$ are continuous and inverse to each other.
\end{proof} Based on Lemma \ref{compcone}, we now identify $\Delta^d$ alternatively with the quotient topological space $\mathbb S^d/\Gamma$ for some group action $\Gamma$. In order to do so, we first show that the cone $\mathbb S^d_{\geq 0}$ is a strict fundamental domain of $\Gamma$, i.e., $\mathbb S^d_{\geq 0}\cong \mathbb S^d/\Gamma$. We start by defining a \emph{coordinate hyperplane}. The $i$-th coordinate hyperplane $H_i\subset \mathbb R^{d+1}$ with respect to a choice of a standard basis $\{e_1,e_2,\dots, e_{d+1}\}$ is a codimension one linear subspace which is defined as \[ H_i=\{(x_1,\dots, x_i,\dots, x_{d+1})\in \mathbb R^{d+1}:\ x_i=0\}, ~~ i = 1, \ldots, d+1. \] We define the reflection group $\Gamma$ with respect to the coordinate hyperplanes as follows: \begin{definition}\label{reflect} The reflection group $\Gamma$ is the subgroup of the general linear group $GL(d+1)$ generated by $\{\gamma_i, i = 1, \ldots, {d+1}\}$. Given the same basis $\{e_1,\dots, e_{d+1}\}$ for $\mathbb R^{d+1}$, the reflection $\gamma_i$ is the linear map specified via: \[ \gamma_i:\ (x_1,\dots,x_{i-1}, x_i, x_{i+1},\dots, x_{d+1})\mapsto (x_1,\dots,x_{i-1}, -x_i, x_{i+1},\dots, x_{d+1}). \] \end{definition} Note that when restricted to $\mathbb S^d$, each $\gamma_i$ is an isometry from the unit sphere $\mathbb S^d$ to itself; we denote this action by $\Gamma\acts\mathbb S^d$. Thus, one can treat the group $\Gamma$ as a discrete subgroup of the isometry group of $\mathbb S^d$. In what follows we establish that $\mathbb S^d_{\ge 0}$ is a fundamental domain of the group action $\Gamma\acts \mathbb S^d$ in the topological sense. In general, there is no uniform treatment of a fundamental domain, but we will follow the approach of \cite{bear}. To introduce a fundamental domain, let us define an \emph{orbit} first.
For a point $z\in \mathbb S^d$, the orbit of $z$ under the group $\Gamma$ is the following set: \begin{equation}\label{eq:orbit} \orb^{\Gamma}_z=\{\gamma(z):\ \gamma\in \Gamma\}. \end{equation} Note that one can decompose $\mathbb S^d$ into a disjoint union of orbits. The size of an orbit is not necessarily the same as the size of the group $|\Gamma|$, because of the existence of a \emph{stabilizer subgroup}, which is defined as \begin{equation}\label{stable} \Gamma_z=\{\gamma\in \Gamma:\ \gamma (z)=z\}. \end{equation} The set $\Gamma_z$ forms a group itself, and we call $\Gamma_z$ the \emph{stabilizer subgroup} of $z$ in $\Gamma$. Every element in $\orb^{\Gamma}_z$ has an isomorphic stabilizer subgroup, thus the size of $\orb^{\Gamma}_z$ is the quotient $|\Gamma|/|\Gamma_z|$, where $|\cdot|$ denotes the cardinality of a set. There are only finitely many possibilities for the size of a stabilizer subgroup for the action $\Gamma\acts \mathbb S^d$, and the size of a stabilizer subgroup depends on the codimensions of coordinate hyperplanes. \begin{definition}\label{fundomain} Let $G$ act properly and discontinuously on a $d$-dimensional sphere, with $d>1$. A \emph{fundamental domain} for the group action $G$ is a closed subset $F$ of the sphere such that every orbit of $G$ intersects $F$ in at least one point, and if an orbit intersects the interior of $F$, then it intersects $F$ at only one point. \end{definition} A fundamental domain is \emph{strict} if every orbit of $G$ intersects $F$ at exactly one point. The following proposition identifies $\mathbb S^d_{\geq 0}$ as the quotient topological space $\mathbb S^d/\Gamma$, i.e., $\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma$. \begin{proposition}\label{conedom} Let $\Gamma\acts \mathbb S^d$ be the group action described in Definition \ref{reflect}; then $\mathbb S^d_{\geq 0}$ is a strict fundamental domain. \end{proposition} In topology, there is a natural quotient map $\mathbb S^d\rightarrow \mathbb S^d/\Gamma$.
With the identification $\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma$, there should be a natural map $\mathbb S^d\rightarrow\mathbb S^d_{\geq 0}$. Now define a contraction map $c: \mathbb S^d\rightarrow \mathbb S^d_{\geq 0}$ via $(x_1,\dots,x_{d+1})\mapsto (|x_1|,\dots, |x_{d+1}|)$, i.e., by taking component-wise absolute values. It is then straightforward to see that $c$ is indeed the topological quotient map $\mathbb S^d\rightarrow \mathbb S^d/\Gamma$, under the identification $\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma$. So far, via (\ref{projtosimp}), Lemma \ref{compcone} and Proposition \ref{conedom}, we have established the following equivalence: \begin{equation}\label{allsame} \mathbb P^d_{\geq 0}=\Delta^d=\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma. \end{equation} For the rest of the paper we will use the four characterizations of a compositional domain interchangeably. \paragraph{Spread-Out Construction} Based on (\ref{allsame}), one can turn a compositional data analysis problem into one on a sphere via the \emph{spread-out construction}. The key idea is to associate one compositional data point $z\in \Delta^d=\mathbb S^d_{\geq 0}$ with the $\Gamma$-orbit of data points $\orb^{\Gamma}_z\subset \mathbb S^d$ in (\ref{eq:orbit}). Formally, given a point $z\in \Delta^d$, we construct the following \emph{data set} (\emph{not necessarily a set} because of possible repetitions): \begin{equation}\label{sprd} c^{-1}(z) = \left\{|\Gamma_{z'}|\ \text{copies of }z', \ \text{for}\ z'\in \orb_z^\Gamma \right\}, \end{equation} where $\Gamma_{z'}$ is the stabilizer subgroup of $\Gamma$ with respect to $z'$ in (\ref{stable}). In general, if there are $n$ observations in $\Delta^d$, the spread-out construction will create a data set with $n2^{d+1}$ observations on $\mathbb S^d$, in which observations with zero coordinates are repeated. Figures \ref{fig:kde} (a) and (b) illustrate this idea with a toy data set with $d = 2$.
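To make the spread-out construction concrete, the following short sketch (our own illustration in Python/NumPy, not code from the original implementation; the function name \texttt{spread\_out} is hypothetical) generates the $2^{d+1}$ sign-flipped copies of a point on $\mathbb S^d_{\geq 0}$ prescribed by (\ref{sprd}), with points on coordinate hyperplanes automatically appearing with multiplicity $|\Gamma_{z'}|$.

```python
import itertools
import numpy as np

def spread_out(z):
    """Return the spread-out multiset c^{-1}(z): one sign-flipped copy of z
    per element of the reflection group Gamma (|Gamma| = 2^(d+1)), so points
    with zero coordinates automatically repeat with multiplicity |Gamma_z'|."""
    z = np.asarray(z, dtype=float)
    signs = np.array(list(itertools.product([1.0, -1.0], repeat=z.size)))
    # "+ 0.0" normalizes -0.0 to +0.0 so repeated copies compare equal.
    return signs * z + 0.0

# Toy example with d = 2: a point on S^2_{>=0} with one zero coordinate.
z = np.array([3.0, 4.0, 0.0]) / 5.0
orbit = spread_out(z)                       # 2^3 = 8 points, with repetitions
n_distinct = len(np.unique(orbit, axis=0))
print(orbit.shape, n_distinct)              # (8, 3) 4, since |Gamma|/|Gamma_z| = 8/2
```

Here the stabilizer of $z$ is $\{\mathrm{id}, \gamma_3\}$, so the orbit has $8/2=4$ distinct points, each appearing twice in the multiset.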
\subsection{Compositional Density Estimation}\label{compdensec} \begin{figure}[ht] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width = \textwidth]{simplex_eg.png} \caption{Compositional data on $\Delta^2$} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width = \textwidth]{simplex_kde_3d.png} \caption{``Spread-out'' data on $\mathbb S^2$} \end{subfigure}\\ \begin{subfigure}[b]{0.50\textwidth} \includegraphics[width = \textwidth]{sphere_kde_3d.png} \caption{Density estimate on $\mathbb S^2$} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \includegraphics[width = \textwidth]{simplex_kde.png} \caption{``Pulled-back'' estimate on $\Delta^2$} \end{subfigure} \caption{Toy compositional data on the simplex $\Delta^2$ in (a) are spread out to a sphere $\mathbb S^2$ in (b). The density estimate on $\mathbb S^2$ in (c) is pulled back to $\Delta^2$ in (d).}\label{fig:kde} \end{figure} The spread-out construction in (\ref{sprd}) provides a new intimate relation between directional statistics and compositional data analysis. Indeed, this construction produces a directional data set out of a compositional data set, so that we can literally transform a compositional data problem into a directional statistics problem via the spread-out construction. For example, we can perform compositional independence/uniformity tests by performing directional independence/uniformity tests (as in \cite{sobind} and \cite{sobuni}) through spread-out constructions. In this section, we will give a new compositional density estimation framework (different from the one in \cite{oldden}) by using spread-out constructions. In directional statistics, density estimation for spherical data has a long history dating back to the late 1970s \cite{expdirect}. In the 1980s, Hall and his collaborators \cite{hallden}, along with Bai and his collaborators \cite{baiden}, established a systematic framework for spherical density estimation theory.
Spherical density estimation theory later became popular partly because its integral squared error (ISE) is closely related to goodness-of-fit testing, as in \cite{chineseclt} and the recent work \cite{portclt}. The rich development in spherical density estimation theory will yield a compositional density framework via spread-out constructions. In the following we apply this idea to nonparametric density estimation for compositional data. Instead of directly estimating the density on $\Delta^d$, one can perform the estimation with the spread-out data on $\mathbb S^d$, from which a density estimate for compositional data can be obtained. Let $p(\cdot)$ denote a probability density function of a random vector $Z$ on $\mathbb S^d_{\geq 0}$, or equivalently on $\Delta^d$. The following proposition gives the form of the density of the spread-out random vector $\Gamma(Z)$ on the whole sphere $\mathbb S^d$. \begin{proposition}\label{induced} Let $Z$ be a random vector on $\mathbb S^d_{\geq 0}$ with probability density $p(\cdot)$; then the induced random vector $\Gamma(Z)=\{\gamma(Z)\}_{\gamma\in \Gamma}$ has the following density $\tilde{p}(\cdot)$ on $\mathbb S^d$: \begin{equation}\label{cshriek} \tilde{p}(z)=\frac{|\Gamma_z|}{|\Gamma|} p(c(z)), \ z\in \mathbb S^d, \end{equation} where $|\Gamma_z|$ is the cardinality of the stabilizer subgroup $\Gamma_z$ of $z$ (see (\ref{stable})). \end{proposition} Let $c_*$ denote the operation on functions analogous to the contraction map $c$ on data points. It is clear that given a probability density $\tilde p$ on $\mathbb S^d$, we can obtain the original density on the compositional domain via the ``pull back'' operation $c_*$: \[ p(z) = c_*(\tilde p)(z)=\sum_{x\in c^{-1}(z)}\tilde p(x), ~~ z\in \mathbb S^d_{\geq 0}. \] Now consider estimating the density on $\mathbb S^d$ with the spread-out data. Density estimation for data on a sphere has been well studied in directional statistics \citep{hallden, baiden}.
For $x_1, \ldots, x_n \in \mathbb S^d$, a density estimate for the underlying density is \[ \hat{f}_n(z)=\frac{c_h}{n}\sum_{i=1}^nK\left(\frac{1-z^T x_i}{h_n}\right),\ z\in \mathbb S^d, \] where $K$ is a kernel function that satisfies the common assumptions in Assumption \ref{kband}, and $c_h$ is a normalizing constant. Applying this to the spread-out data $c^{-1}(x_i)$, $i = 1, \ldots, n$, we have a density estimate of $\tilde p (\cdot) $ defined on $\mathbb S^d$: \begin{equation}\label{fhattilde} \hat{f}^{\Gamma}_n(z)=\dis \frac{c_h}{n|\Gamma|}\sum_{1\leq i\leq n,\gamma\in \Gamma}K\left(\frac{1-z^T \gamma( x_i)}{h_n}\right), \ z\in \mathbb S^d, \end{equation} from which a density estimate on the compositional domain is obtained by applying $c_*$. That is, \[ \hat{p}_n(z)=c_*\hat{f}^{\Gamma}_n(z)=\sum_{x\in c^{-1}(z)}\hat{f}^\Gamma_n(x),\ \ z\in \mathbb S^d_{\geq 0}. \] Fig. \ref{fig:kde} (c) and (d) illustrate this density estimation process with a toy example. The consistency of the spherical density estimate $\hat f_n$ is established in \cite{ chineseclt, portclt}, where it is shown that the integral squared error (ISE) of $\hat f_n$, $\int_{\mathbb S^d} (\hat f_n - f)^2 dz$, follows a central limit theorem. It is straightforward to show that the ISE of the proposed compositional density estimator $\hat{p}_n$ on the compositional domain is also asymptotically normally distributed. However, the CLT for the ISE of spherical densities in \cite{chineseclt} contains an unnecessary finite support assumption on the density kernel function $K$ (very different from reproducing kernels); although in \cite{portclt} this finite support condition is dropped, their result concerns directional-linear data, and their proof does not directly apply to the purely directional context.
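As an illustration of (\ref{fhattilde}) and its pull-back (our own sketch in Python/NumPy, not the paper's code), we take $d=2$ and the particular kernel choice $K(u)=e^{-u}$, for which $c_h K\big((1-z^Tx)/h\big)$ is a von Mises--Fisher density on $\mathbb S^2$ with concentration $\kappa=1/h$, so $c_h$ is available in closed form; any kernel satisfying the assumptions of the paper could be substituted.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy compositional sample on S^2_{>=0} (d = 2): normalized absolute Gaussians.
x = np.abs(rng.normal(size=(200, 3)))
x /= np.linalg.norm(x, axis=1, keepdims=True)

# Spread-out data: all 2^3 sign flips of each observation, as in (sprd).
signs = np.array(list(itertools.product([1.0, -1.0], repeat=3)))
x_spread = (signs[None, :, :] * x[:, None, :]).reshape(-1, 3)

h = 0.1                       # bandwidth h_n
kappa = 1.0 / h
# For K(u) = exp(-u), c_h * K((1 - z.x)/h) is the von Mises-Fisher density
# on S^2 with concentration kappa, whose normalizer is known in closed form.
c_h = kappa * np.exp(kappa) / (4.0 * np.pi * np.sinh(kappa))

def f_hat(z):
    """Spherical KDE of the spread-out density (the 1/|Gamma| factor is
    absorbed because x_spread holds |Gamma| copies per observation)."""
    u = (1.0 - x_spread @ z) / h
    return c_h * np.exp(-u).mean()

def p_hat(z):
    """Pull-back estimate on the compositional domain: sum f_hat over the
    |Gamma| sign-flipped copies of z, i.e. over the multiset c^{-1}(z)."""
    return sum(f_hat(s * z) for s in signs)

z = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
print(f_hat(z), p_hat(z))
```

By construction $\hat f^{\Gamma}_n$ is $\Gamma$-invariant, so for an interior point $z$ (no zero coordinates) the pulled-back value is simply $|\Gamma|$ times the spherical value.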
For the readers' convenience, we provide the proof of the CLT for the ISE for both compositional and spherical data, without the finite support condition of \cite{chineseclt}. \begin{theorem}\label{ourclt} The CLT for the ISE holds for both directional and compositional data under the mild conditions (H1, H2 and H3) in Section \ref{cltprf}, without the finite support condition on the density kernel function $K$. \end{theorem} The details of the proof of Theorem \ref{ourclt}, together with the statements of the technical conditions, can be found in Section \ref{cltprf}. \section{Reproducing Kernels of Compositional Data}\label{sec:rkhs} This section is devoted to constructing reproducing kernel structures on compositional domains, based on the topological re-interpretation of $\Delta^d$ in Section \ref{sec:sphere}. The key idea is that, based on the quotient map $\pi:\ \mathbb S^d\rightarrow \mathbb S^d/\Gamma=\Delta^d$, we can use function spaces on spheres to understand function spaces on compositional domains. Moreover, we can construct reproducing kernel structures on a compositional domain $\Delta^d$ based on those on $\mathbb S^d$. The reproducing kernel was first introduced in 1907 by Zaremba in his study of boundary value problems for harmonic and biharmonic functions, but the systematic development of the subject was carried out in the early 1950s by Aronszajn and Bergman. Reproducing kernels on $\mathbb S^d$ were essentially discovered by Laplace and Legendre in the 19th century, although reproducing kernels on spheres were called \emph{zonal spherical functions} at that time. Both spherical harmonics theory and RKHS have found applications in theoretical subjects like functional analysis, representation theory of Lie groups and quantum mechanics. In statistics, Wahba's successful application of RKHS in spline models popularized RKHS theory for $\mathbb S^d$. In particular, Wahba used spherical harmonics theory to construct an RKHS on $\mathbb S^2$ in \cite{wah81}.
Generally speaking, for a fixed topological space $X$, there exist (and one can construct) multiple reproducing kernel Hilbert spaces on $X$; in \cite{wah81}, Wahba constructed an RKHS on $\mathbb S^2$ by considering a \emph{subspace} of $L^2(\mathbb S^2)$ under a finiteness condition, and her reproducing kernels were also built out of zonal spherical functions. Her RKHS on $\mathbb S^2$ was motivated by the study of spline models on the sphere, while our motivation has nothing to do with spline models of any kind. In this work we consider reproducing structures on spheres which are \emph{different} from the one in \cite{wah81}, but spherical harmonics theory provides the building blocks for both our approach and Wahba's in \cite{wah81}. Evolving from the re-interpretation of a compositional domain $\Delta^d$ as $\mathbb S^d/\Gamma$, we will construct reproducing kernels of compositional domains by using reproducing kernel structures on spheres. Spherical harmonics theory gives reproducing kernel structures on $\mathbb S^d$, and a compositional domain $\Delta^d$ is topologically covered by the sphere with deck transformation group $\Gamma$. Thus we naturally wonder (i) whether function spaces on $\Delta^d$ can be identified with the subspace of $\Gamma$-invariant functions on $\mathbb S^d$, and (ii) whether one might ``build'' $\Gamma$-invariant kernels out of spherical reproducing kernels, and hope that the $\Gamma$-invariant kernels can play the role of ``reproducing kernels'' on $\Delta^d$. It turns out that the answers to both (i) and (ii) are positive (see Remark \ref{dreami} and Theorem \ref{reprcomp}). The discovery of reproducing kernel structures on $\Delta^d$ is crucially based on the reinterpretation of compositional domains via projective and spherical geometries in Section \ref{sec:sphere}.
By considering $\Gamma$-invariant objects in spherical function spaces, we manage to construct reproducing kernel structures for compositional domains, and hence compositional reproducing kernel Hilbert spaces. Although the compositional RKHS was first conceived as a candidate ``inner product space'' into which data points can be mapped, the benefit of working with an RKHS goes far beyond this, due to the exciting development of kernel techniques in machine learning theory that can be applied to compositional data analysis, as mentioned in Section \ref{machine}. This gives the chance to construct a new framework for compositional data analysis, in which we ``upgrade'' compositional data points to functions (via reproducing kernels), and classical statistical notions, like means and variance-covariances, are ``upgraded'' to linear functionals and linear operators over the function space. Traditionally important statistical topics such as dimension reduction, regression analysis, and many inference problems can be re-addressed in the light of these new ``kernel techniques'' in the sense of \cite{kermean}. \subsection{Recollection of Basic Facts from Spherical Harmonics Theory} We give a brief review of the theory of spherical harmonics in the following. See \citet{atkinson2012spherical} for a general introduction to the topic. In classical linear algebra, a finite dimensional linear space with a linear map to itself can be decomposed into a direct sum of eigenspaces. Such a phenomenon still holds for $L^2(\mathbb S^d)$, with the Laplacian as the linear operator. Recall that the Laplacian operator on a function $f$ of $d+1$ variables is \[ \dis\Delta f=\sum_{i=1}^{d+1}\frac{\partial ^2 f}{\partial x_i^2}. \] Let $\mathcal H_i$ be the $i$-th eigenspace of the Laplacian operator.
It is known that $L^2(\mathbb S^d)$ can be orthogonally decomposed as \begin{equation}\label{L2} L^2(\mathbb S^d)=\bigoplus_{i=0}^{\infty}\mathcal H_i, \end{equation} where the orthogonality is endowed with respect to the inner product in $L^2(\mathbb S^d)$: $\langle f, g \rangle = \dis\int_{\mathbb S^d} f \bar{g}$. Let $\mathcal P_{i}(d+1)$ be the space of homogeneous polynomials of degree $i$ in $d+1$ coordinates on $\mathbb S^d$. A homogeneous polynomial is a polynomial whose terms are all monomials of the same degree, e.g., $\mathcal P_4(3)$ includes $xy^3 + x^2yz$. Further, let $ H_{i}(d+1)$ be the space of homogeneous harmonic polynomials of degree $i$ on $\mathbb S^d$, i.e., \begin{equation}\label{eq:harmonic} H_{i}(d+1)=\{P\in \mathcal P_{i}(d+1)|\; \Delta P=0\}. \end{equation} For example, $x^3y + xy^3 - 6xyz^2$ and $x^4 - 6x^2 y^2 + y^4$ are members of $H_4(3)$. Importantly, spherical harmonics theory has established that each eigenspace $\mathcal H_{i}$ in \eqref{L2} is indeed the same space as $ H_i(d+1)$. This implies that any function in $L^2(\mathbb S^d)$ can be approximated by an accumulated direct sum of orthogonal homogeneous harmonic polynomials. The following well-known proposition further reveals that the Laplacian constraint in (\ref{eq:harmonic}) is not necessary to characterize the function space on the sphere. \begin{proposition}\label{accudirct} Let $\mathcal P_{m}(d+1)$ be the space of degree $m$ homogeneous polynomials in $d+1$ variables on the unit sphere and $\mathcal H_i$ be the $i$th eigenspace of $L^2(\mathbb S^d)$. Then $$ \mathcal P_{m}(d+1)=\dis\bigoplus_{i=0}^{\floor{m/2}}\mathcal H_{m-2i}, $$ where $\floor{\cdot}$ stands for the round-down integer. \end{proposition} From Proposition \ref{accudirct}, one can see that any $L^2$ function on $\mathbb S^d$ can be approximated by homogeneous polynomials.
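As a quick sanity check (our own illustration in Python/SymPy, not part of the paper), harmonicity of candidate elements of $H_i(d+1)$ can be verified symbolically, and the standard dimension count $\dim H_i(d+1)=\binom{d+i}{d}-\binom{d+i-2}{d}$ can be evaluated; the polynomials below are our own examples.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

def laplacian(p):
    """Ambient Laplacian in the coordinates (x, y, z), as in the text."""
    return sum(sp.diff(p, v, 2) for v in (x, y, z))

def is_harmonic(p):
    return sp.simplify(laplacian(p)) == 0

p1 = x*y*(x**2 - y**2)   # homogeneous of degree 4 and harmonic: p1 in H_4(3)
p2 = x**2 * y**2         # homogeneous of degree 4 but NOT harmonic
print(is_harmonic(p1), is_harmonic(p2))   # True False

# Dimension count: dim H_i(d+1) = C(d+i, d) - C(d+i-2, d);
# for d = 2, i = 4 this gives 15 - 6 = 9 = 2*4 + 1, the familiar count on S^2.
dim_H4 = sp.binomial(6, 2) - sp.binomial(4, 2)
print(dim_H4)                             # 9
```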
An important feature of spherical harmonics theory is that it gives reproducing structures on spheres, and we now recall this fact. For the following discussion, we fix a Laplacian eigenspace $\mathcal H_i$ inside $L^2(\mathbb S^d)$, so that $\mathcal H_i$ is a finite dimensional Hilbert space on $\mathbb S^d$; such a restriction to a single piece $\mathcal H_i$ is necessary because the entire Hilbert space $L^2(\mathbb S^d)$ does not have a reproducing kernel, given that the delta functional on $L^2(\mathbb S^d)$ is \emph{not} a bounded functional\footnote{At first sight, this might seem to contradict Wahba's study of splines on $2$-dimensional spheres in \cite{wah81}, but a careful reader will find that Wahba put a finiteness constraint on $L^2(\mathbb S^2)$ and \emph{never} claimed that $L^2(\mathbb S^2)$ is an RKHS; her RKHS on $\mathbb S^2$ is a subspace of $L^2(\mathbb S^2)$.}. \subsection{Zonal Spherical Functions as Reproducing Kernels in $\mathcal H_i$} On each Laplacian eigenspace $\mathcal H_i$ inside $L^2(\mathbb S^d)$, for a fixed point $x\in \mathbb S^d$ we define a linear functional $L_x$ on $\mathcal H_i$ by $L_x(Y)=Y(x)$ for each $Y\in \mathcal H_i$. General spherical harmonics theory tells us that there exists $k_i(x,t)$ such that: $$ L_x(Y)=Y(x)=\dis\int_{\mathbb S^d} Y(t)k_i(x,t)dt,\ x\in \mathbb S^d; $$ \noindent this function $k_i(x,t)$ is the representing function of the functional $L_x$, and classical spherical harmonics theory refers to $k_i(x,t)$ as the \emph{zonal spherical function}. With the eventual formulation of reproducing kernel theory by Aronszajn in 1950, zonal spherical functions are recognized as ``reproducing kernels'' inside $\mathcal H_i\subset L^2(\mathbb S^d)$ in the sense of \cite{aron50}.
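The reproducing property can be checked numerically; the sketch below (our own, in Python/NumPy) works on the circle $\mathbb S^1$ for simplicity, where the $i$-th eigenspace is spanned by $\cos(i\theta)$ and $\sin(i\theta)$ and the zonal kernel is $k_i(\theta,\phi)=\cos\!\big(i(\theta-\phi)\big)/\pi$; integrating $Y(\phi)\,k_i(\theta,\phi)$ over the circle recovers $Y(\theta)$.

```python
import numpy as np

i = 3                                        # eigenspace index on the circle S^1
grid = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)

def k(theta, phi):
    """Zonal kernel of H_i on S^1: summing the orthonormal basis
    cos(i.)/sqrt(pi), sin(i.)/sqrt(pi) as in part (a) of the proposition
    gives cos(i*(theta - phi))/pi."""
    return np.cos(i * (theta - phi)) / np.pi

def Y(t):
    # An arbitrary element of the eigenspace H_i.
    return 2.0 * np.cos(i * t) - 0.5 * np.sin(i * t)

theta = 0.7
# Periodic rectangle rule, which is exact (to machine precision) for
# trigonometric polynomials of this degree.
reproduced = (Y(grid) * k(theta, grid)).sum() * (2.0 * np.pi / grid.size)
print(reproduced, Y(theta))                  # the two values agree
```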
Another way to appreciate spherical harmonics theory is that it tells us that each Laplacian eigenspace $\mathcal H_i\subset L^2(\mathbb S^d)$ is actually a reproducing kernel Hilbert space on $\mathbb S^d$, a special case of which was used by Wahba for $d=2$ in \cite{wah81}. Let us recollect some basic facts about zonal spherical functions for the readers' convenience; their proofs can be found in almost any modern reference on spherical harmonics, in particular Chapter IV of \cite{stein71}, which gives the following proposition: \begin{proposition}\label{reprsph} The following properties hold for the zonal spherical function $k_i(x,t)$, which is also the reproducing kernel inside $\mathcal H_i\subset L^2(\mathbb S^d)$ with dimension $a_i$. \begin{itemize} \item[(a)]For a choice of orthonormal basis $\{Y_1, \dots, Y_{a_i}\}$ in $\mathcal H_i$, we can express the kernel as $k_i(x,t)=\dis\sum_{j=1}^{a_i}\overline{Y_{j}(x)}Y_{j}(t)$, but $k_i(x,t)$ does not depend on the choice of basis. \item[(b)]$k_i(x,t)$ is a real-valued function and symmetric, i.e., $k_i(x,t)=k_i(t,x)$. \item[(c)]For any orthogonal matrix $R\in O(d+1)$, we have $k_i(x,t)=k_i(Rx, Rt)$. \item[(d)] $k_i(x,x)=\dis\frac{a_i}{\mathrm{vol}(\mathbb S^d)}$ for any point $x\in \mathbb S^d$. \item[(e)]$k_i(x,t)\leq \dis\frac{a_i}{\mathrm{vol}(\mathbb S^d)}$ for any $x,\ t\in \mathbb S^d$. \end{itemize} \end{proposition} \begin{remark} \normalfont The above proposition ``\emph{seems}'' obvious from traditional perspectives, as if it could be found in any textbook, so readers with rich experience in RKHS theory might think that we are stating something trivial. However, we want to point out two facts: \begin{itemize} \item [(1)]Function spaces over underlying spaces with different topological structures behave very differently.
Spheres are compact without boundary, their function spaces have Laplacian operators whose eigenspaces are finite dimensional, \emph{and} there are reproducing kernel structures inside \emph{each} such finite dimensional eigenspace. These coincidences are not expected to happen over other general topological spaces. \item[(2)]Relative to the classical topological spaces whose RKHSs are used more often, e.g., unit intervals or vector spaces, spheres have a more ``exotic'' topological structure (simply connected, but with nontrivial higher homotopy groups), while intervals and vector spaces are contractible with trivial homotopy groups. One way to appreciate this result is that classical ``naive'' expectations can still hold even on spheres, with the aid of spherical harmonics theory. \end{itemize} \end{remark} In the next subsection we discuss the corresponding function space on the compositional domain $\Delta^d$. \subsection{Function Spaces on Compositional Domains}\label{sec:fcomp} With the identification $\Delta^d=\mathbb S^d/\Gamma$, the function space $L^2(\Delta^d)$ can be identified with $L^2(\mathbb S^d/\Gamma)$, i.e., $L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)$. The function space $L^2(\mathbb S^d)$ is well understood via spherical harmonics theory as above, so we want to relate $L^2(\mathbb S^d/\Gamma)$ with $L^2(\mathbb S^d)$ as follows. Notice that a function $h\in L^2(\mathbb S^d/\Gamma)$ is a map from $\mathbb S^d/\Gamma$ to the (real or complex) numbers. Thus a natural associated function $\pi^*(h)\in L^2(\mathbb S^d)$ is given by the following composition of maps: $$ h\circ \pi:\ \ \xymatrix{\mathbb S^d\ar[r]^-{\pi}&\mathbb S^d/\Gamma\ar[r]^{\ \ h}& \mathbb C}. $$ Therefore, the composition $h\circ \pi=\pi^*(h)\in L^2(\mathbb S^d)$ gives rise to a natural embedding of the function space of compositional domains into that of a sphere: $\pi^*:\ L^2(\mathbb S^d/\Gamma)\rightarrow L^2(\mathbb S^d)$.
The embedding $\pi^*$ identifies the Hilbert space of compositional domains as a subspace of the Hilbert space of spheres. A natural question is how to characterize the subspace of $L^2(\mathbb S^d)$ that corresponds to functions on compositional domains. The following proposition states that $f\in \im(\pi^*)$ if and only if $f$ is constant on fibers of the projection map $\pi:\ \mathbb S^d\rightarrow \mathbb S^d/\Gamma$, almost everywhere. In other words, $f$ takes the same values on each $\Gamma$-orbit, i.e., on every set of points which are connected to each other by ``sign flippings''. \begin{proposition}\label{compfun} The image of the embedding $\pi^*: L^2(\mathbb S^d/\Gamma)\rightarrow L^2(\mathbb S^d)$ consists of the functions $f\in L^2(\mathbb S^d)$ such that $f$, up to a measure zero set, is constant on $\pi^{-1}(x)$ for every $x\in \mathbb S^d/\Gamma$, where $\pi$ is the natural projection $\mathbb S^d\rightarrow \mathbb S^d/\Gamma$. \end{proposition} We call a function $f\in L^2(\mathbb S^d)$ that lies in the image of the embedding $\pi^*$ a \emph{$\Gamma$-invariant function}. Now we construct the contraction map $\pi_{*}: L^2(\mathbb S^d)\rightarrow L^2(\mathbb S^d/\Gamma)$; this map descends every function on the sphere to a function on the compositional domain. To construct $\pi_*$, it suffices to associate a $\Gamma$-invariant function to every function in $L^2(\mathbb S^d)$. For a point $z\in \mathbb S^d$ and a reflection $\gamma\in \Gamma$, the point $\gamma(z)$ lies in the set $\orb_z^\Gamma$ defined in (\ref{eq:orbit}). Starting with a function $f\in L^2(\mathbb S^d)$, we define the associated $\Gamma$-invariant function $f^{\Gamma}$ as follows: \begin{proposition} Let $f$ be a function in $L^2(\mathbb S^d)$. Then the following $f^\Gamma$ \begin{equation}\label{eq:invfun} f^{\Gamma}(z)=\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}f(\gamma(z)), ~~~ z \in \mathbb S^d, \end{equation} is a $\Gamma$-invariant function.
\end{proposition} \begin{proof} Each fiber of the projection map $\pi:\ \mathbb S^d\rightarrow \mathbb S^d/\Gamma$ is $\orb^{\Gamma}_z$ for some $z$ in the fiber. For any other point $y$ on the same fiber as $z$, there exists a reflection $\gamma\in \Gamma$ such that $y=\gamma (z)$. Then this proposition follows from the identity $f^{\Gamma}(z)=f^{\Gamma}(\gamma(z))$, which can be easily checked. \end{proof} The contraction $ f\mapsto f^{\Gamma}$ on spheres naturally gives the following map \begin{equation}\label{lowerstar} \pi_*:\ L^2(\mathbb S^d)\rightarrow L^2(\mathbb S^d/\Gamma),\ \text{with}\ f\mapsto f^{\Gamma}. \end{equation} \begin{remark} \normalfont Some readers might argue that each element of an $L^2$ space is a \emph{function class} rather than a function, so that in this sense $\pi_*(f)=f^{\Gamma}$ is not well-defined; but note that each element in $L^2(\mathbb S^d)$ can be approximated by polynomials, and $\pi_*$, which is well defined on individual polynomials, induces a well defined map on function classes. \end{remark} \begin{theorem}\label{invfunsp} The contraction map $\pi_*:\ L^2(\mathbb S^d)\rightarrow L^2(\mathbb S^d/\Gamma)$ (defined in Equation (\ref{lowerstar})) has a section given by $\pi^*$, namely the composition $\pi_*\circ \pi^*$ induces the identity map from $L^2(\mathbb S^d/\Gamma)$ to itself. In particular, the contraction map $\pi_*$ is a surjection. \end{theorem} \begin{proof} One way to look at the relation between the two maps $\pi^*$ and $\pi_*$ is through the diagram $\xymatrix{L^2(\mathbb S^d/\Gamma)\ar@<.35ex>[r]^-{\pi^*}&L^2(\mathbb S^d)\ar@<.35ex>[l]^-{\pi_*}}$. The image of $\pi^*$ consists of the $\Gamma$-invariant functions in $L^2(\mathbb S^d)$. Conversely, given a $\Gamma$-invariant function $g\in L^2(\mathbb S^d)$, the map $g\mapsto g^{\Gamma}$ is the identity, i.e., $g=g^{\Gamma}$, and thus the theorem follows.
\end{proof} \begin{remark}\label{dreami} \normalfont Theorem \ref{invfunsp} identifies functions on compositional domains with $\Gamma$-invariant functions in $L^2(\mathbb S^d)$. For any function $f\in L^2(\mathbb S^d)$, we can produce the corresponding $\Gamma$-invariant function $f^{\Gamma}$ by (\ref{eq:invfun}). More importantly, we can ``recover'' $L^2(\Delta^d)$ from $L^2(\mathbb S^d)$, without losing any information. This allows us to construct reproducing kernels of $\Delta^d$ from $L^2(\mathbb S^d)$ in Section \ref{sec:rkhsc}. \end{remark} \subsection{Reduction to Homogeneous Polynomials of Even Degrees}\label{redsurg} We want to understand the finite direct sum space $\bigoplus_{i=0}^{m}\mathcal H_i$ in terms of homogeneous polynomials. Proposition \ref{accudirct} tells us that if $m$ is even, then $\mathcal P_{m}(d+1)=\bigoplus_{i=0}^{m/2}\mathcal H_{2i}$, and that if $m$ is odd then $\mathcal P_{m}(d+1)=\bigoplus_{i=0}^{(m-1)/2}\mathcal H_{2i+1}$, where $\mathcal P_{m}(d+1)$ is the space of degree $m$ homogeneous polynomials in $d+1$ variables. In either case ($m$ even or odd), the degree $m$ of the homogeneous polynomials is the same as $\max\{2i,\ \ceil{m/2}-\floor{m/2}\leq i\leq \floor{m/2}\}$ as in Proposition \ref{accudirct}. Therefore we can decompose the finite direct sum space $\bigoplus_{i=0}^{m}\mathcal H_i$ into the direct sum of two spaces of homogeneous polynomials: $$ \dis\bigoplus_{i=0}^{m}\mathcal H_i=\mathcal P_{m}(d+1)\bigoplus \mathcal P_{m-1}(d+1). $$ However, we will show that any monomial containing an odd-degree factor collapses to zero under the $\Gamma$-averaging in (\ref{eq:invfun}), so only one of the two homogeneous polynomial spaces above ``survives'' under the contraction map $\pi_*$. This further simplifies the function space, which in turn facilitates easy computation.
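The collapsing claim above is easy to check numerically. The following sketch (our illustration, not part of the paper's code; the helper name is ours and numpy is assumed) implements the $\Gamma$-average of (\ref{eq:invfun}) for the sign-flip group $\Gamma=\{\pm 1\}^{d+1}$ and verifies that a monomial with an odd exponent averages to zero, while a monomial with all even exponents is unchanged.

```python
# Numerical sketch: the Gamma-average for the sign-flip group Gamma = {+1,-1}^{d+1}
# acting on S^d inside R^{d+1}.
import itertools
import numpy as np

def gamma_average(f, z):
    """f^Gamma(z) = (1/|Gamma|) * sum over sign patterns s of f(s * z)."""
    z = np.asarray(z, dtype=float)
    patterns = list(itertools.product([1.0, -1.0], repeat=len(z)))
    return sum(f(np.array(s) * z) for s in patterns) / len(patterns)

z = np.array([0.6, 0.8, 0.0])           # a point on S^2
odd  = lambda x: x[0] ** 2 * x[1]       # the exponent of x_2 is odd
even = lambda x: x[0] ** 2 * x[1] ** 2  # all exponents even

assert abs(gamma_average(odd, z)) < 1e-12           # odd monomial collapses to 0
assert np.isclose(gamma_average(even, z), even(z))  # even monomial is fixed
```

The flip of the single odd coordinate cancels the two halves of the sum, which is exactly the mechanism behind the lemma in the next paragraph.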
Specifically, when working with accumulated direct sums $\bigoplus_{i=0}^{m}\mathcal H_i$ on spheres, not every function is a meaningful function on $\Delta^d=\mathbb S^d/\Gamma$; e.g., we can find a nonzero function $f\in \bigoplus_{i=1}^{m}\mathcal H_i$ with $f^{\Gamma}=0$. In fact, the odd pieces $\mathcal H_m$ with $m$ odd contribute nothing to $L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)$. In other words, the accumulated direct sum $\bigoplus_{i=0}^m\mathcal H_{2i+1}$ is ``killed'' to zero under $\pi_*$, as shown by the following lemma: \begin{lemma}\label{killodd} For every monomial $\prod_{i=1}^{d+1}x_i^{\alpha_i}$ (each $\alpha_i\geq 0$), if there exists $k$ with $\alpha_k$ odd, then the monomial $\prod_{i=1}^{d+1}x_i^{\alpha_i}$ is a shadow function, that is, $(\prod_{i=1}^{d+1}x_i^{\alpha_i})^{\Gamma}=0$. \end{lemma} An important implication of this lemma is that since each homogeneous polynomial in $\bigoplus_{i=0}^k\mathcal H_{2i+1}$ is a linear combination of monomials with at least one odd exponent, it is killed under $\pi_*$. This implies that all ``odd'' pieces in $L^2(\mathbb S^d) = \bigoplus_{i=0}^{\infty} \mathcal H_i$ contribute nothing to $L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)$. Therefore, whenever using spherical harmonics theory to understand function spaces of compositional domains, it suffices to consider only even $i$ for $\mathcal H_{i}$ in $L^2(\mathbb S^d)$. In summary, the function space on the compositional domain $\Delta^d=\mathbb S^d/\Gamma$ has the following eigenspace decomposition: \begin{equation}\label{compfun} L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)=\dis\bigoplus_{i=0}^{\infty}\mathcal H_{2i}^{\Gamma}, \end{equation} \noindent where $\mathcal H_{2i}^{\Gamma}:=\{h\in \mathcal H_{2i}:\ h=h^{\Gamma}\}$. \subsection{Reproducing Kernels for Compositional Domain}\label{sec:rkhsc} The main goal of this section is to establish reproducing kernels for compositional data.
Inside each Laplacian eigenspace $\mathcal H_i$ in $L^2(\mathbb S^d)$, recall that Equation (\ref{compfun}) tells us that the $\Gamma$-invariant subspace $\mathcal H_i^{\Gamma}$ can be regarded as a function space on $\Delta^d=\mathbb S^d/\Gamma$. To find a candidate for the reproducing kernel inside $\mathcal H_i^{\Gamma}$, we first find the representing function for the following linear functional: \subsubsection{ $\Gamma$-invariant Functionals on $\mathcal H_i$} Let $L_z^{\Gamma}$ be a functional on $\mathcal H_i$ defined as follows: for any function $Y\in \mathcal H_i$, \[ L_{z}^{\Gamma}(Y)=Y^{\Gamma}(z)=\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}Y(\gamma z), \] for a given $z\in \mathbb S^d$. We can also define a Delta functional on $\mathcal H_i$ by $L_{z}(Y)=Y(z)$ for any $Y\in \mathcal H_i$. One easily sees that $L_z^{\Gamma}$ and $L_z$ agree on the subspace $\mathcal H_i^{\Gamma}$ of $\mathcal H_i$. Note also that $L_z^{\Gamma}$ can be seen as the composed map $L_z^{\Gamma}=L_z\pi_*:\ \mathcal H_i\rightarrow \mathcal H_i^{\Gamma}\rightarrow\mathbb C$ (recall that $\pi_*$ was defined in Equation (\ref{lowerstar})); so although $L_z^{\Gamma}$ is defined on $\mathcal H_i$, it can actually be seen as a ``Delta functional'' on $\mathbb S^d/\Gamma=\Delta^d$. To find the representing function for $L_z^{\Gamma}$, we will use zonal spherical functions. Let $k_i(\cdot, \cdot)$ be the reproducing kernel of the eigenspace $\mathcal H_i$. Define the ``compositional'' kernel $k_i^{\Gamma}(\cdot, \cdot)$ for $\mathcal H_i$ as \begin{equation}\label{fake} k_i^{\Gamma}(x, t)=\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma x, t), \ \ \forall x, t\in \mathbb S^d, \end{equation} \noindent from which it is straightforward to check, simply by following the definitions, that $k_i^\Gamma(z,\cdot)$ represents the linear functional $L_{z}^{\Gamma}$.
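As a concrete illustration (ours, not from the paper): on $\mathbb S^2$ the zonal kernel of $\mathcal H_i$ is proportional to the Legendre polynomial $P_i(\langle x,t\rangle)$ by the addition theorem; the normalizing constant is omitted below since it does not affect invariance or symmetry. The sketch builds $k_i^{\Gamma}$ by averaging over sign flips in the first argument only.

```python
# Sketch: compositional kernel k_i^Gamma on S^2, built from the zonal kernel
# k_i(x,t) ∝ P_i(<x,t>) (normalization omitted) by averaging over sign flips.
import itertools
import numpy as np
from numpy.polynomial import legendre

def zonal_kernel(i, x, t):
    """Degree-i zonal kernel on S^2 up to a constant: P_i(<x,t>)."""
    coeffs = np.zeros(i + 1)
    coeffs[i] = 1.0
    return legendre.legval(float(np.dot(x, t)), coeffs)

def compositional_kernel(i, x, t):
    """k_i^Gamma(x,t) = (1/|Gamma|) sum_gamma k_i(gamma x, t)."""
    patterns = list(itertools.product([1.0, -1.0], repeat=len(x)))
    return sum(zonal_kernel(i, np.array(s) * x, t) for s in patterns) / len(patterns)

x = np.array([0.6, 0.8, 0.0])
t = np.array([0.0, 0.6, 0.8])
flip = np.array([-1.0, 1.0, -1.0])
# Gamma-invariance in the first argument holds by construction:
assert np.isclose(compositional_kernel(4, flip * x, t), compositional_kernel(4, x, t))
# and, less obviously, the kernel is symmetric in its two arguments:
assert np.isclose(compositional_kernel(4, x, t), compositional_kernel(4, t, x))
```

The numerically observed symmetry is exactly the content of the proposition proved next.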
\begin{remark} \normalfont The above definition of ``compositional kernels'' (in Equation (\ref{fake})) is not just a trick to get rid of the ``redundant points'' on spheres. This definition is inspired by the notion of ``orbital integrals'' in analysis and geometry. In our case, the ``integral'' is a discrete version, because the ``compact subgroup'' in our situation is replaced by a finite discrete reflection group $\Gamma$. In fact, this kind of ``discrete orbital integral'' construction is not new in statistical learning theory; e.g., in \cite{equimatr}, the authors also used an ``orbital integral'' type of construction to study equivariant matrix-valued kernels. \end{remark} At first sight, a compositional kernel is not symmetric on the nose, because we only ``average'' over the group orbit in the first variable of the function $k_i(x,t)$. However, recall that $k_i(x,t)$ is both symmetric and orthogonally invariant by Proposition \ref{reprsph}, so, quite counter-intuitively, compositional kernels are actually symmetric: \begin{proposition}\label{sym} Compositional kernels are symmetric, namely $k_i^{\Gamma}(x, t)= k_i^{\Gamma}(t, x)$. \end{proposition} \begin{proof} Recall that $k_i(x,t)=k_i(t,x)$ and that $k_i(G x,G t)=k_i(x,t)$ for any orthogonal matrix $G$.
Notice that every reflection $\gamma\in \Gamma$ can be realized as an orthogonal matrix; then we have $$ \begin{array}{rcl} k_i^{\Gamma}(x, t)&=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma x, t)\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(t,\gamma x)=\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma^{-1}t,\gamma^{-1}(\gamma x))\ \ \\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma^{-1}t,x)\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma t,x)\\ &=& k_i^{\Gamma}(t, x) \end{array} $$ \end{proof} Recall that $\mathcal H_i^{\Gamma}$ is the space of $\Gamma$-invariant functions inside $\mathcal H_i$, and by Equation (\ref{compfun}), $\mathcal H_i^{\Gamma}$ is the $i$-th subspace of the compositional function space $L^2(\Delta^d)$. Let $w_i(x,y)$ be the reproducing kernel (if it exists) of $\mathcal H_i^{\Gamma}$, the $\Gamma$-invariant subspace of $\mathcal H_i$. A first naive candidate for $w_i(x,y)$ might be the spherical reproducing kernel $k_i(x,y)$, but $k_i(x,y)$ is not $\Gamma$-invariant. However, it turns out that the compositional kernels are actually reproducing with respect to all $\Gamma$-invariant functions in $\mathcal H_i$, while being $\Gamma$-invariant in both arguments. \begin{theorem}\label{reprcomp} Inside $\mathcal H_i$, the compositional kernel $k_i^{\Gamma}(x, t)$ is $\Gamma$-invariant in both arguments $x$ and $t$, and moreover $k_i^{\Gamma}(x, t)=w_i(x,t)$, i.e., the compositional kernel is the reproducing kernel for $\mathcal H^{\Gamma}_i$. \end{theorem} \begin{proof} First, by the definition of $k_i^{\Gamma}(x, t)$, it is $\Gamma$-invariant in the first argument $x$; by the symmetry of $k_i^{\Gamma}(x, t)$ in Proposition \ref{sym}, it is then also $\Gamma$-invariant in the second argument $t$, hence the compositional kernel $k_i^{\Gamma}(x, t)$ is a kernel inside $\mathcal H^{\Gamma}_i$. Secondly, let us prove the reproducing property of $k_i^{\Gamma}(x, t)$.
For any $\Gamma$-invariant function $f\in \mathcal H_i^{\Gamma}\subset \mathcal H_i$, $$ \begin{array}{rcl} <f(t),k_i^{\Gamma}(x, t)> &=& <f(t),\dis\sum_{\gamma\in \Gamma}\frac{1}{|\Gamma|}k_i(\gamma x, t)>\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}<f(t),k_i(\gamma x, t)>\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}f(\gamma x)=\dis\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}f(x)\ \ (f\ \text{is}\ \Gamma\text{-invariant})\\ &=&f(x) \end{array} $$ \end{proof} \begin{remark} \normalfont Theorem \ref{reprcomp} justifies that a compositional kernel is actually the reproducing kernel for the $\Gamma$-invariant functions inside $\mathcal H_i$. Although the compositional kernel $k_i^{\Gamma}(x, t)$ is symmetric, as proved in Proposition \ref{sym}, we will still use $w_i(x,y)$ to denote $k_i^{\Gamma}(x, t)$, because the notation $w_i(x,y)$ is more visually symmetric than the notation for compositional kernels. \end{remark} Recall that functions on compositional domains are identified with $\Gamma$-invariant functions on $\mathbb S^d$ and points on compositional domains are $\Gamma$-orbits on the sphere, so reproducing kernels for $\Delta^d$ should reproduce $\Gamma$-invariant functions, as Theorem \ref{reprcomp} shows. \paragraph{Compositional RKHS and Spaces of Homogeneous Polynomials} Recall that, based on Proposition \ref{accudirct}, the direct sum of an even (resp. odd) number of eigenspaces can be expressed as the set of homogeneous polynomials of fixed degrees. Further recall that the direct sum decomposition $L^2(\mathbb S^d)=\bigoplus_{i=0}^{\infty}\mathcal H_i$ is an orthogonal one, and so is the direct sum $L^2(\Delta^d)=\bigoplus_{i=0}^{\infty}\mathcal H_i^{\Gamma}$. By the orthogonality between eigenspaces, the reproducing kernel for the finite direct sum $\bigoplus_{i=0}^m\mathcal H^{\Gamma}_i$ is naturally the sum $\sum_{i=0}^m w_i$. Note that by Lemma \ref{killodd}, it suffices to consider the even pieces of eigenspaces $\mathcal H_{2i}$.
Finally, we give a formal definition of ``the degree $m$ reproducing kernel Hilbert space'' on $\Delta^d$, consisting of degree $2m$ homogeneous polynomials: \begin{definition}\label{lthrkhs} Let $w_i$ be the reproducing kernel for $\Gamma$-invariant functions in the $i$th eigenspace $\mathcal H_i \subset L^2(\mathbb S^d)$. The degree $m$ compositional reproducing kernel Hilbert space is defined to be the finite direct sum $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$, and the reproducing kernel for the degree $m$ compositional reproducing kernel Hilbert space is \begin{equation}\label{mcompker} \omega_m(\cdot, \cdot) = \sum_{i=0}^m w_{2i}(\cdot,\cdot); \end{equation} \noindent then the degree $m$ RKHS for the compositional domain is the pair $\big(\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}, \omega_{m} \big)$, where $\mathcal H^{\Gamma}_{2i}$ is the space of $\Gamma$-invariant functions in $\mathcal H_{2i}$. \end{definition} Recall that the direct sum $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ can be identified with a subspace of $\bigoplus_{i=0}^{m}\mathcal H_{2i}$, which is isomorphic to the space of degree $2m$ homogeneous polynomials, so each function in $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ can be written as a degree $2m$ homogeneous polynomial, including the reproducing kernel $\omega_m(x,\cdot)$, although this is not obvious from Equation (\ref{mcompker}). Notice that for a point $(x_1,x_2,\dots, x_{d+1})\in \mathbb S^d$, the sum $\sum_{i=1}^{d+1}x_i^2=1$, so one can always use this sum to turn each element of $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ into a homogeneous polynomial. For example, $x^2+1$ is not a homogeneous polynomial, but each point $(x,y,z)\in \mathbb S^2$ satisfies $x^2+y^2+z^2=1$, so we have $x^2+1=x^2+x^2+y^2+z^2=2x^2+y^2+z^2$, which is a homogeneous polynomial on the sphere $\mathbb S^2$. In fact, we can say something more about $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$.
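The homogenization step just illustrated ($x^2+1=2x^2+y^2+z^2$ on $\mathbb S^2$) is mechanical: pad every lower-degree term with powers of $x^2+y^2+z^2$. A small symbolic sketch (our own illustration, assuming sympy; the helper name is ours):

```python
# Symbolic sketch: homogenize a polynomial on S^2 by multiplying lower-degree
# terms with powers of x^2 + y^2 + z^2, which is identically 1 on the sphere.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r2 = x**2 + y**2 + z**2   # equals 1 at every point of S^2

def homogenize(poly, degree):
    """Pad each term of `poly` with powers of r2 up to total degree `degree`."""
    result = sp.Integer(0)
    for term in sp.expand(poly).as_ordered_terms():
        deficit = degree - sp.Poly(term, x, y, z).total_degree()
        assert deficit % 2 == 0, "degree deficit must be even"
        result += term * r2 ** (deficit // 2)
    return sp.expand(result)

# x^2 + 1 becomes 2x^2 + y^2 + z^2, matching the example in the text:
assert homogenize(x**2 + 1, 2) == sp.expand(2*x**2 + y**2 + z**2)
```

The parity assertion reflects the fact that only gaps of even degree can be filled this way, which is consistent with the even-degree decomposition used throughout this section.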
Recall that Lemma \ref{killodd} ``killed'' the contributions from the ``odd pieces'' $\mathcal H_{2k+1}$ under the contraction map $\pi_*:\ L^2(\mathbb S^d)\rightarrow L^2(\Delta^d)$. However, even inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$, only a subspace can be identified as a compositional function space, namely, the $\Gamma$-invariant homogeneous polynomials. The following proposition characterizes which homogeneous polynomials inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ come from the subspace $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$: \begin{proposition}\label{mpolynomial} Given any element $\theta\in \bigoplus_{i=0}^m\mathcal H^{\Gamma}_{2i}\subset \bigoplus_{i=0}^m\mathcal H_{2i}\subset L^2(\mathbb S^d)$, there exists a degree $m$ homogeneous polynomial $p_m$ such that \begin{equation}\label{mplynomial} \theta(x_1, x_2,\dots, x_{d+1})=p_m(x_1^2, x_2^2,\cdots, x_{d+1}^{2}). \end{equation} \end{proposition} \begin{proof} Note that $\theta$ is a degree $2m$ homogeneous $\Gamma$-invariant polynomial, so each monomial in $\theta$ has the form $\prod_{i=1}^{d+1}x_i^{a_i}$ with $\sum_{i=1}^{d+1}a_i=2m$. Apply the $\Gamma$-averaging termwise: every monomial $\prod_{i=1}^{d+1}x_i^{a_i}$ with $a_i$ odd for some $1\leq i\leq d+1$ is sent to zero by Lemma \ref{killodd}, while every monomial with all exponents even is fixed. Since $\theta=\theta^{\Gamma}$, no monomial with an odd exponent can appear in $\theta$ with nonzero coefficient. Hence $\theta$ is a linear combination of monomials of the form $\prod_{i=1}^{d+1}x_i^{a_i}=\prod_{i=1}^{d+1}(x_i^2)^{a_i/2}$ with each $a_i$ even and $\sum_i a_i/2=m$, and thus the proposition follows.
\end{proof} Recall that the degree $m$ compositional RKHS is $\big(\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}, \omega_{m} \big)$ in Definition \ref{lthrkhs}, and $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ consists of degree $2m$ homogeneous polynomials, while $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ is just a subspace of it. Proposition \ref{mpolynomial} tells us that one can also give a concrete description of the subspace $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ via degree $m$ homogeneous polynomials in the squared variables. \section{Applications of Compositional Reproducing Kernels}\label{sec:app} The availability of compositional reproducing kernels opens the door to many statistical/machine learning techniques for compositional data analysis. Here we present two application scenarios, as an initial demonstration of the influence of RKHS theory on compositional data analysis. The first application is a representer theorem, which is motivated by newly developed kernel methods in machine learning, especially by the rich theory of vector-valued regression (\cite{vecfun} and \cite{vecman}). The second one is the construction of exponential families on compositional domains. Parameters of compositional exponential models live in compositional reproducing kernel Hilbert spaces, which exist for every degree $m$, so we get infinitely many exponential distributions on compositional domains. To the authors' knowledge, these will be the first class of nontrivial examples of explicit distributions on compositional domains with \emph{non-vanishing} densities on the boundary. \subsection{Compositional Representer Theorems} Beyond their successful applications in traditional spline models, representer theorems are increasingly needed due to the new kernel techniques in modern machine learning. We will consider minimal norm interpolation and least squares regularization in this paper.
Regularization is especially important in many situations, like structured prediction, multi-task learning, multi-label classification and related themes that attempt to exploit output structure (see \cite{vecman}). A common theme in the above-mentioned contexts is non-parametric estimation of a vector-valued function $f:\ \mathcal X\rightarrow \mathcal Y$ between a structured input space $\mathcal X$ and a structured output space $\mathcal Y$. An important framework adopted in those analyses is that of ``Vector-valued Reproducing Kernel Hilbert Spaces'' in \cite{vecfun}. Unsurprisingly, representer theorems not only remain necessary, but also call for further generalizations in modern machine learning: \begin{itemize} \item [(i)] In classical spline models, the most frequently used representer theorems concern scalar-valued kernels; but besides the above-mentioned scenario $f:\ \mathcal X\rightarrow \mathcal Y$ in the manifold regularization context, in which vector-valued representer theorems are needed, higher tensor-valued kernels and their corresponding representer theorems are also desirable. In \cite{equimatr}, matrix-valued kernels and their representer theorems are studied, with applications in image processing. \item[(ii)] Another related application lies in the popular kernel mean embedding theories, in particular, conditional mean embedding. Conditional mean embedding theory essentially gives an operator from one RKHS to another (see \cite{condmean}), and to learn such operators, vector-valued regressions plus the corresponding representer theorems are used. \end{itemize} In the vector-valued regression framework, an important assumption discussed in representer theorems is the \emph{linear independence} condition (see \cite{vecfun}). Our construction of compositional RKHSs yields finite dimensional spaces of polynomials, so the linear independence conditions are not satisfied for free on the nose; we address this problem in this paper.
Instead of dealing with vector-valued kernels, we will only focus on the special case of scalar-valued (reproducing) kernels, but the issue can be clearly seen already in this special case. \subsubsection{Linear Independence of Compositional Reproducing Kernels}\label{twist} The compositional RKHS constructed in Section \ref{sec:rkhs} takes the form $\big(\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}, \omega_{m} \big)$, indexed by $m$. Given the finite dimensional nature of compositional RKHSs, it is not even clear whether different points yield different functions $\omega_{m}(x_i, \cdot)$ inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$. We will give a positive answer when $m$ is high enough. Given a set of distinct compositional data points $\{x_i\}_{i=1}^n\subset \Delta^d$, we will show that the corresponding set of reproducing functions $\{\omega_{m}(x_i, \cdot)\}_{i=1}^n$ forms a linearly independent set inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ if $m$ is high enough. \begin{theorem}\label{abundencelm} Let $\{x_i\}_{i=1}^n$ be distinct data points on a compositional domain $\Delta^d$. Then there exists a positive integer $M$, such that for any $m>M$, the set of functions $\{\omega_{m}(x_i, \cdot)\}_{i=1}^n$ is a linearly independent set in $\bigoplus_{i=0}^{m}\mathcal H_{2i}$. \end{theorem} \begin{proof} The quotient map $c_\Delta: \mathbb S^d\rightarrow \Delta^d$ factors through a projective space, i.e., $c_\Delta:\ \mathbb S^d\rightarrow\mathbb P^d\rightarrow \Delta^d$. The main idea is to prove a stronger statement: distinct data points in $\mathbb P^d$ give linear independence of \emph{projective kernels} for large enough $m$, where projective kernels are reproducing kernels on $\mathbb P^d$, defined in Section \ref{abundprf}. Then we construct two vector subspaces $V_1^{m}$ and $V_2^{m}$ and a linear map $g_m$ from $V_1^{m}$ to $V_2^{m}$.
The key trick is that the matrix representing the linear map $g_m$ becomes diagonally dominant when $m$ is large enough, which forces the spanning sets of both $V_1^{m}$ and $V_2^{m}$ to be linearly independent. More details of the proof are given in Section \ref{abundprf}. \end{proof} In the proof of Theorem \ref{abundencelm}, we make use of the homogeneous polynomials $(y_i\cdot {t})^{2m}$, which do \emph{not} live inside a single piece $\mathcal H_{2i}$; thus we had to use the direct sum space $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ for our construction of the RKHS. Without using projective kernels, one might wonder whether the same argument works; however, the issue is that the matrix might have $\pm 1$ at off-diagonal entries, which would fail to be diagonally dominant as $m$ grows large. We reduce the problem to the linear independence of projective kernels at distinct points, because reproducing kernels for distinct compositional data points are linear combinations of distinct projective kernels; in this way, each off-diagonal term is a power of the inner product of two vectors that are neither antipodal nor identical, so the off-diagonal terms go to zero as $m$ goes to infinity. Another consequence of Theorem \ref{abundencelm} is that $\omega_{m}(x_i,\cdot)\neq \omega_{m}(x_j, \cdot)$ whenever $i\neq j$, provided $m$ is large enough. Not only does large enough $m$ separate points at the level of reproducing kernels, it also gives each data point its ``own dimension''. \subsubsection{Minimal Norm Interpolation and Least Squares Regularization}\label{represent} Once the linear independence (which, again, was required in \cite{vecfun}) is established in Theorem \ref{abundencelm}, it is an easy corollary to establish the representer theorems for minimal norm interpolation and least squares regularization. Nothing is new from the point of view of general RKHS theory, but we include these theorems and proofs for completeness.
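The diagonal dominance mechanism just described, and the linear systems appearing in these representer theorems, can be sketched numerically. In the snippet below (our illustration; the helper name is ours, and the unnormalized projective kernel $(x_i\cdot x_j)^{2m}$ stands in for $\omega_m$), the Gram matrix becomes diagonally dominant, hence nonsingular, for large $m$:

```python
# Numerical sketch: Gram matrices of the projective kernels (x_i . x_j)^{2m}
# become diagonally dominant as m grows, so kernel functions at distinct,
# non-antipodal points are linearly independent for large m.
import numpy as np

def projective_gram(points, m):
    """Gram matrix K[i,j] = (x_i . x_j)^{2m}, unnormalized."""
    G = np.asarray(points) @ np.asarray(points).T
    return G ** (2 * m)

pts = np.array([[1.0, 0.0, 0.0],
                [0.6, 0.8, 0.0],
                [0.0, 0.6, 0.8]])   # distinct unit vectors, pairwise non-antipodal
K = projective_gram(pts, m=20)

off_diagonal_mass = np.abs(K - np.diag(np.diag(K))).sum(axis=1)
assert np.all(off_diagonal_mass < np.diag(K))   # diagonally dominant
assert np.linalg.matrix_rank(K) == len(pts)     # hence nonsingular

# With an invertible Gram matrix, an interpolation system of the form
# sum_j K[i,j] c_j = y_i has a unique solution:
y = np.array([1.0, 2.0, 3.0])
c = np.linalg.solve(K, y)
assert np.allclose(K @ c, y)
```

This invertibility is exactly what the linear independence condition buys in the theorems that follow.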
Again, we will focus on scalar-valued (reproducing) kernels and functions, instead of vector-valued kernels and regressions. However, Theorem \ref{abundencelm} sheds important light on linear independence issues, and interested readers can generalize these compositional representer theorems to vector-valued cases by following \cite{vecfun}. The first representer theorem we provide is a solution to the minimal norm interpolation problem: for a fixed set of distinct points $\{x_i\}_{i=1}^n$ in $\Delta^d$ and a set of numbers $y=\{y_i\in \mathbb R\}_{i=1}^n$, let $I_y^{m}$ be the set of functions that interpolate the data \[ I_y^m = \{f\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}:\ f(x_i)=y_i\}, \] and our goal is to find $f_0$ with minimum $\ell_2$ norm, i.e., \[ \norm{f_0}=\inf\{\norm{f}, f\in I_y^{m}\}.\] \begin{theorem}\label{minorm} Choose $m$ large enough so that the reproducing kernels $\{\omega_m(x_i,t)\}_{i=1}^n$ are linearly independent; then the unique solution of the minimal norm interpolation problem $\min\{\norm{f},f\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}:\ f(x_i)=y_i\}$ is given by the linear combination of the kernels: $$ f_0(t)=\dis\sum_{i=1}^nc_i \; \omega_{m}(x_i,t) $$ where $\{c_i\}_{i=1}^n$ is the unique solution of the following system of linear equations: $$ \dis\sum_{j=1}^n\omega_{m}(x_i,x_j)c_j=y_i, \ \ 1\leq i\leq n. $$ \end{theorem} \begin{proof} For any other $f$ in $I_y^{m}$, define $g=f-f_0$. By considering the decomposition $\norm{f}^2=\norm{g+f_0}^2=\norm{g}^2+2<f_0,g>+\norm{f_0}^2$, one can argue that the cross term $<f_0,g>=0$. The details can be found in Section \ref{representerproofs}. We want to point out that the linear independence of the reproducing kernels guarantees the existence and uniqueness of $f_0$.
\end{proof} The second representer theorem is for a more realistic scenario with $\ell_2$ regularization, which has the following objective: \begin{equation}\label{l2obj} \sum_{i=1}^n |f(x_i)-y_i|^2+\mu\norm{f}^2. \end{equation} The goal is to find the $\Gamma$-invariant function $f_{\mu}\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ that minimizes (\ref{l2obj}). The solution to this problem is provided by the following representer theorem: \begin{theorem}\label{regularization} For a set of distinct compositional data points $\{x_i\}_{i=1}^n$, choose $m$ large enough such that the reproducing kernels $\{\omega_{m}(x_i, t)\}_{i=1}^n$ are linearly independent. Then the solution to (\ref{l2obj}) is given by \[ f_{\mu}(t) =\sum_{i=1}^n c_i \; \omega_{m}(x_i,t), \] where $\{c_i\}_{i=1}^n$ is the solution of the following system of linear equations: \[ \mu c_i+\sum_{j=1}^n \omega_{m}(x_i,x_j)c_j=y_i,\ \ 1\leq i\leq n. \] \end{theorem} \begin{proof} The details of this proof can be found in Section \ref{representerproofs}, but we want to point out how the linear independence condition plays a role here. In the middle of the proof we need to show that $\mu f_{\mu}(t)=\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i)) \omega_{m}(x_i,t)\big]$, where $f_{\mu}(t)=\sum_{i=1}^n c_i\,\omega_{m}(x_i,t)$. We use the linear independence in Theorem \ref{abundencelm} to establish the equivalence between this system of linear equations for $\{c_i\}_{i=1}^n$ and the one given in the theorem. \end{proof} \subsection{Compositional Exponential Family} With the construction of the RKHS in hand, one can produce exponential families using the technique developed in \cite{cs06}.
Recall that for a function space $\mathcal H$ with inner product $<\cdot,\cdot>$ on a general topological space $\mathcal X$, whose reproducing kernel is given by $k(x, \cdot)$, the exponential family density $p(x, \theta)$ with parameter $\theta\in \mathcal H$ is given by: \[ p(x, \theta)=\exp\{<\theta(\cdot),k(x,\cdot)>-g(\theta) \},\ \] where $g(\theta)=\log \dis\int_{\mathcal{X}}\exp\big(<\theta(\cdot),k(x,\cdot) >\big) dx$. For compositional data we define the density of the $m$th exponential family as \begin{equation}\label{expfamily} p_{m}(x, \theta)=\exp\{<\theta(\cdot),\omega_{m}(x,\cdot)>-g(\theta) \},\ \forall x \in \mathbb S^d/\Gamma, \end{equation} where $\theta\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ and $g(\theta)=\log \int_{\mathbb S^d/\Gamma}\exp(<\theta(\cdot),\omega_{m}(x,\cdot) >) dx$. Note that this density can be made more explicit by using homogeneous polynomials. Recall that any function in $\bigoplus_{i=0}^m\mathcal H_{2i}^{\Gamma}$ can be written as a degree $m$ homogeneous polynomial in the \emph{squared} variables by Proposition \ref{mpolynomial}. Thus the density in (\ref{expfamily}) can be simplified to the following form: for $x=(x_1,\dots, x_{d+1})\in \mathbb S^{d}_{\geq 0}$, \begin{equation}\label{mpoly} p_{m}(x,\theta)=\exp\{s_{m}(x_1^2, x_2^2, \dots, x_{d+1}^2; \theta)-g(\theta)\}, \end{equation} where $s_m$ is a polynomial in the squared variables $x_i^2$ with $\theta$ as coefficients. Note that $s_m$ is invariant under ``sign-flippings'', so the normalizing constant can be computed via integration over the entire sphere as follows: $$ g(\theta)=\log\int_{\mathbb S^d/\Gamma}\exp(s_m)dx=\log\Big(\frac{1}{|\Gamma|}\int_{\mathbb S^d}\exp(s_m)dx\Big). $$ Fitting the model in (\ref{mpoly}) to compositional data can be done via maximum likelihood estimation or the regression method proposed by \cite{expdirect} for data on the unit sphere, or directional data. Further development of the model estimation is suggested as future work.
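A minimal sketch of this normalizing computation (our own illustration, assuming numpy; the quartic $s_m$ below is an arbitrary stand-in with the required sign-flip invariance) estimates $g(\theta)$ by Monte Carlo over uniform samples on $\mathbb S^2$, using the identity $\int_{\mathbb S^d/\Gamma}=\frac{1}{|\Gamma|}\int_{\mathbb S^d}$ with $|\Gamma|=2^3$ for $d=2$:

```python
# Monte Carlo sketch: estimate the normalizer g(theta) of the compositional
# exponential density by integrating exp(s_m) over S^2 and dividing by |Gamma|.
import numpy as np

rng = np.random.default_rng(0)

def s_m(x):
    """An arbitrary polynomial in the squared variables (stand-in for s_m)."""
    x2 = x ** 2
    return -2.0 * x2[..., 0] ** 2 - x2[..., 1] ** 2 + 3.0 * x2[..., 0] * x2[..., 2]

# uniform samples on S^2 via normalized Gaussian vectors
samples = rng.normal(size=(100_000, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)

area_sphere = 4.0 * np.pi   # surface area of S^2
g_theta = np.log(area_sphere * np.exp(s_m(samples)).mean() / 8.0)

# s_m is sign-flip invariant, so the integrand descends to the quotient S^2/Gamma:
assert np.allclose(s_m(samples[:10]), s_m(-samples[:10]))
assert np.isfinite(g_theta)
```

Sampling the full sphere rather than the first orthant is legitimate precisely because $s_m$ is a polynomial in the squared coordinates.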
Figure \ref{fig:expfamily} displays three examples of densities of the form (\ref{expfamily}). The three densities have the following $\theta$: \begin{eqnarray*} \theta_1 &=& -2 x_1^4 -2 x_2^4 -3 x_3^4 + 9 x_1^2 x_2^2 + 9 x_1^2 x_3^2 -2 x_2^2 x_3^2\\ \theta_2 &=& - x_1^4 - x_2^4 - x_3^4 - x_1^2 x_2^2 - x_1^2 x_3^2 - x_2^2 x_3^2\\ \theta_3 &=& - 3 x_1^4 - 2 x_2^4 - x_3^4 +9 x_1^2 x_2^2 - 5 x_1^2 x_3^2 - 5 x_2^2 x_3^2 \end{eqnarray*} The various shapes of the densities in the figure imply that the compositional exponential family can be used to model data with a wide range of locations and correlation structures. \begin{figure}[h] \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{expfamily1.png} \caption{$p_4(x,\theta_1)$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{expfamily2.png} \caption{$p_4(x,\theta_2)$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{expfamily3.png} \caption{$p_4(x,\theta_3)$} \end{subfigure} \caption{Three example densities from the compositional exponential family. See text for the specification of the parameters $\theta_i, i = 1, \ldots, 3.$ }\label{fig:expfamily} \end{figure} \section{Discussion}\label{sec:conclude} The key point of this work is to use projective and spherical geometries to reinterpret compositional domains, which allows us to construct reproducing kernels for compositional data points by using spherical harmonics under group actions. With the rapid development of modern machine learning theory, especially kernel techniques and kernel mean embedding theories, a sequence of questions can be asked about how to introduce modern kernel theories into compositional data analysis. Historically, Kent models were introduced in directional statistics because of their geometric intuition (concentration) and their application in regression analysis, as in \cite{sphkentreg}.
Such features are also attractive to compositional data researchers, for example in \cite{sw11} and \cite{sw12}. In either case, an important notion is that of the mean or expectation. In all of the work related to Kent models, whether in directional statistics or in compositional data analysis, the hope, in inference or regression analysis, is to find the mean or expectation in the \emph{underlying space} where the data points live; for instance, the mean of a Kent distribution is a \emph{point on the sphere}. Our approach suggests a new direction in compositional data analysis, namely, the new kernel techniques. If we look for the mean of a compositional distribution \emph{inside} the compositional domain, then different representations of compositional domains (simplex, spherical quotients, first orthant spheres, etc.) will give different mean points on compositional domains. However, the new kernel techniques suggest a new way of understanding means or expectations: instead of finding the \emph{physical mean point} on compositional domains, a replacement for the expectation is the kernel mean $\mathbb E[k(X,\cdot)]$, which is surveyed in great detail in \cite{kermean}. We gave a construction of kernel exponential models for compositional domains, but we did not discuss how to compute means and variance-covariance matrices for those exponential models. Of course, one can attempt to find the mean point on the compositional domain, but based on the philosophy of \cite{kermean}, one should compute the kernel mean $\mathbb E[k(X,\cdot)]$ and the cross-covariance operator as replacements for the traditional means and variance-covariances of classical multivariate analysis. The authors will, in forthcoming work, develop further techniques to come back to this issue of kernel means and cross-covariance operators for compositional exponential models, by applying deeper functional analysis techniques.
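As a pointer to what the kernel mean looks like in practice, its empirical version is simply the average of kernel sections at the sample points. A schematic sketch (ours, with a generic projective-type kernel standing in for a compositional kernel such as $\omega_m$):

```python
# Sketch: empirical kernel mean (1/n) sum_i k(x_i, .) evaluated at a query point,
# with a simple polynomial kernel standing in for a compositional kernel.
import numpy as np

def kernel(x, t, m=3):
    """Stand-in kernel (x . t)^{2m}; any compositional kernel could be swapped in."""
    return float(np.dot(x, t)) ** (2 * m)

def empirical_kernel_mean(data, t):
    """hat{E}[k(X, .)] evaluated at t, i.e., (1/n) sum_i k(x_i, t)."""
    return np.mean([kernel(x, t) for x in data])

data = np.array([[0.6, 0.8, 0.0], [0.0, 0.6, 0.8], [1.0, 0.0, 0.0]])
t = np.array([0.8, 0.6, 0.0])
mu_t = empirical_kernel_mean(data, t)
# each term (x_i . t)^{2m} lies in [0, 1] for unit vectors, hence so does the mean
assert 0.0 <= mu_t <= 1.0
```

The forthcoming work alluded to above would replace this naive average with estimators grounded in the RKHS structure of the compositional exponential model.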
\appendix \section{Supplementary Proofs} \subsection{ Proof of Central Limit Theorems on Integral Squared Errors (ISE) in Section \ref{compdensec}}\label{cltprf} \begin{assumption}\label{kband} For all kernel density estimators and bandwidth parameters in this paper, we assume the following: \begin{itemize} \item[{\bf{H1}}] The kernel function $K:[0,\infty)\rightarrow [0,\infty)$ is continuous and such that both $\lambda_d(K)$ and $\lambda_d(K^2)$ are bounded for $d\geq 1$, where $\lambda_d(K)=2^{d/2-1}\mathrm{vol}(\mathbb S^d)\dis\int_{0}^{\infty}K(r)r^{d/2-1}dr$. \item[\bf{H2}] If a function $f$ on $\mathbb S^d\subset \mathbb R^{d+1}$ is extended to all of $\mathbb R^{d+1}\setminus\{0\}$ via $f(x)=f(x/\norm{x})$, then the extended function $f$ needs to have its first three derivatives bounded. \item[\bf{H3}] Assume the bandwidth parameter satisfies $h_n\rightarrow 0$ and $nh_n^d\rightarrow\infty$ as $n\rightarrow\infty$. \end{itemize} \end{assumption} Let $f$ be the function extended from $\mathbb S^d$ to $\mathbb R^{d+1}\setminus\{0\}$ via $f(x)=f(x/\norm{x})$, and let \[ \phi(f,x)=-x^T \nabla f(x) + d^{-1}\big(\nabla^2 f(x) - x^T (\mathcal H_xf) x\big) = d^{-1}\mathrm{tr} [\mathcal H_xf(x)], \] where $\mathcal H_x f$ is the Hessian matrix of $f$ at the point $x$. The term $b_d(K)$ in the statement of Theorem \ref{ourclt} is defined to be: \[ b_d(K)=\dis\frac{\dis\int_0^{\infty}K(r)r^{d/2}dr}{\dis\int_0^{\infty}K(r)r^{d/2-1}dr} \] The term $\phi(h_n)$ in the statement of Theorem \ref{ourclt} is defined to be: \[ \phi(h_n)=\dis\frac{4b_d(K)^2}{d^2}\sigma_x^2h_n^4 \] Proof of Theorem \ref{ourclt}: \begin{proof} The strategy of \cite{chineseclt} in the directional set-up follows that of \cite{hallclt}, whose key idea is to give asymptotic bounds for degenerate U-statistics, so that one can use martingale theory to derive the central limit theorem.
The step where the finite support condition was used in \cite{chineseclt} is in proving the asymptotic bound $E(G_n^2(X_1,X_2))=O(h^{7d})$, where $G_n(x,y)=E[H_n(X,x)H_n(X,y)]$ with $H_n(x,y)=\dis\int_{\mathbb S^d}K_n(z,x)K_n(z,y)dz$ and the centered kernel $K_n(x,y)=K[(1-x'y)/h^2]-E\{K[(1-x'X)/h^2]\}$. In that proof, they needed to show that the term \[ \begin{array}{rcl} T_1&=&\dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)dy\times\\ &&\left\{ \dis\int_{\mathbb S^d}f(z)dz\dis\int_{\mathbb S^d}K[(1-u'x)/h^2]K[(1-u'z)/h^2]du \cdot \int_{\mathbb S^d}K[(1-u'y)/h^2]K[(1-u'z)/h^2]du \right\}^2 \end{array} \] satisfies $T_1=O(h^{7d})$. In this step, the finite support condition was used substantially to obtain an upper bound for $T_1$. Our way around this assumption is based on an observation in \cite{portclt}, which concerns only the directional-linear CLT, so its result cannot be applied directly to the purely directional case. Based on the method of Lemma 10 in \cite{portclt}, however, one can deduce the following asymptotic equivalence: $$ \dis\int_{\mathbb S^d}K^j\Big(\frac{1-x^Ty}{h^2}\Big)\phi^i(y)dy\sim h^d\lambda_d(K^j)\phi^i(x), $$ where $\lambda_d(K^j)=2^{d/2-1}\mathrm{vol}(\mathbb S^{d-1})\dis\int_{0}^{\infty}K^j(r)r^{d/2-1}dr$. As a special case we have: $$ \dis\int_{\mathbb S^d}K^2(\dis\frac{1-x^Ty}{h^2})dy\sim h^d\lambda_d(K^2)C,\ \text{with}\ C\ \text{being a positive constant}.
$$ We now proceed with the proof without the finite support condition: $$ \begin{array}{rcl} T_1&=&\dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)dy\\ &&\times\left\{ \dis\int_{\mathbb S^d}f(z)dz\dis\int_{\mathbb S^d}K[(1-u'x)/h^2]K[(1-u'z)/h^2]du \cdot \int_{\mathbb S^d}K[(1-u'y)/h^2]K[(1-u'z)/h^2]du \right\}^2\\ &\sim& \dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)dy\left\{\int_{\mathbb S^d}f(z)\big[\lambda_d(K)h^dK(\frac{1-x^Tz}{h^2})\big]\times \big[\lambda_d(K)h^dK(\dis\frac{1-y^Tz}{h^2})\big]dz \right\}^2\\ &\sim& \lambda_d(K)^4h^{4d}\dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)\big[ \lambda_d(K)h^d K(\dis\frac{1-x^Ty}{h^2})f(y)\big]^2dy\\ &=&\lambda_d(K)^6h^{6d}\dis\int_{\mathbb S^d} \left\{\int_{\mathbb S^d} K^2(\dis\frac{1-x^Ty}{h^2})f^3(y)dy \right\}f(x)dx\\ &\sim&\lambda_d(K)^6h^{6d}\dis\int_{\mathbb S^d} \lambda_d(K^2)h^dC\cdot f^3(x)f(x)dx\\ &=&C\lambda_d(K)^6\lambda_d(K^2)h^{7d}\dis\int_{\mathbb S^d}f^4(x)dx=O(h^{7d}). \end{array} $$ Thus we have proved $T_1=O(h^{7d})$ without the finite support assumption, and the rest of the proof follows through as in \cite{chineseclt}. \end{proof} Observe the identity: \begin{equation}\label{csden} \dis\int_{\mathbb S^d_{\geq 0}}(\hat{p}_n-p)^2dx=\dis|\Gamma|\int_{\mathbb S^d}(\hat{f}_n-\tilde{p})^2dy, \end{equation} then the CLT for the compositional ISE follows from the identity (\ref{csden}) and our proof of the CLT for the spherical ISE without finite support conditions on kernels.
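The asymptotic equivalence used above can be checked numerically in a simple case. The sketch below (our own, with the hypothetical integrable kernel $K(r)=e^{-r}$ and $d=2$) compares $\int_{\mathbb S^2}K\big((1-x^Ty)/h^2\big)\,dy$, computed via the exact reduction $\int_{\mathbb S^2}g(x^Ty)\,dy=2\pi\int_{-1}^{1}g(t)\,dt$, against $h^d\lambda_d(K)$, where $\lambda_2(K)=2^{d/2-1}\mathrm{vol}(\mathbb S^{1})\int_0^\infty e^{-r}\,dr=2\pi$ for this kernel.

```python
import numpy as np

# Check numerically:  int_{S^d} K((1 - x.y)/h^2) dy  ~  h^d * lambda_d(K)
# for d = 2 and the integrable kernel K(r) = exp(-r).
d, h = 2, 0.05

# For d = 2 the spherical integral reduces to 2*pi * int_{-1}^{1} K((1-t)/h^2) dt.
t = np.linspace(-1.0, 1.0, 200001)
vals = np.exp(-(1.0 - t) / h**2)
dt = t[1] - t[0]
lhs = 2.0 * np.pi * dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule

# lambda_2(K) = 2^{d/2-1} vol(S^{d-1}) int_0^inf K(r) r^{d/2-1} dr = 2*pi.
rhs = h**d * 2.0 * np.pi

rel_err = abs(lhs - rhs) / rhs
print(rel_err < 1e-3)  # the two sides agree closely for small h
```

The agreement tightens as $h\rightarrow 0$, which is exactly the regime of assumption {\bf H3}.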
\subsection{Proofs of Shadow Monomials in Section \ref{sec:rkhs}}\label{reproproof} Proof of Proposition \ref{killodd}: \begin{proof} Suppose $\alpha_k$ is odd. A direct computation yields: $$ \begin{array}{rcl} \left(\prod_{i=1}^{d+1}x_i^{\alpha_i}\right)^{\Gamma}&=&\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\}}\prod_{i=1}^{d+1}(s_ix_i)^{\alpha_i}\\ &=&\dis\frac{1}{|\Gamma|}\Big[\sum_{s_i\in \{\pm 1\},\ i\neq k}\prod_{i\neq k}(s_ix_i)^{\alpha_i}\, x_k^{\alpha_k}+\sum_{s_i\in \{\pm 1\},\ i\neq k}\prod_{i\neq k}(s_ix_i)^{\alpha_i}\, (-x_k)^{\alpha_k}\Big]\\ &=&x_k^{\alpha_k}\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\},\ i\neq k}\prod_{i\neq k}(s_ix_i)^{\alpha_i}-x_k^{\alpha_k}\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\},\ i\neq k}\prod_{i\neq k}(s_ix_i)^{\alpha_i} \\ &=&0, \end{array} $$ where we split the sum according to $s_k=\pm 1$ and used that $(-x_k)^{\alpha_k}=-x_k^{\alpha_k}$ since $\alpha_k$ is odd. \end{proof} \subsection{Proof of Linear Independence of Reproducing Kernels in Theorem \ref{abundencelm}}\label{abundprf} We sketch a slightly more detailed (though not complete) proof: \begin{proof} This is the most technical lemma in this article. We sketch the philosophy of the proof here, which can be understood intuitively in topological terms. Recall that the projective space $\mathbb P^d$ is produced by identifying every pair of antipodal points of the sphere $\mathbb S^d$ (identifying $x$ with $-x$); in other words, $\mathbb P^d=\mathbb S^d/\mathbb Z_2$, where $\mathbb Z_2=\{\pm 1\}$ is the cyclic group of order $2$ acting by sign change. We can then define a projective kernel in $\mathcal H_i\subset L^2(\mathbb S^d)$ by $k^p_i(x, \cdot)=[k_i(x,\cdot)+k_i(-x,\cdot)]/2$, and denote the projective kernel inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ by $\underline{k}_m^p(x, \cdot)=\sum_{i=0}^{m}k^p_{2i}(x,\cdot)$. Now we spread out the data set $\{x_i\}_{i=1}^n$ by the ``spread-out'' construction of Section \ref{sec:spread}, and denote the spread-out data set by $\{\Gamma \cdot x_i\}_{i=1}^n=\{{c^{-1}_{\Delta}(x_i)}\}_{i=1}^{n}$ (a data set, not a set, because of repetitions).
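For intuition, the spread-out of a single point under the sign-flip group $\Gamma$ can be sketched numerically; this is a minimal illustration in our own notation, not the construction of Section \ref{sec:spread} itself:

```python
import itertools
import numpy as np

def spread_out(x):
    """All sign-flipped copies of x: the orbit Gamma.x, kept with repetitions
    (a data set rather than a set, as in the text)."""
    return [np.array(s) * x for s in itertools.product([1.0, -1.0], repeat=len(x))]

x = np.array([0.6, 0.8, 0.0])   # a point on S^2 with a zero coordinate
orbit = spread_out(x)
n_copies = len(orbit)
print(n_copies)                  # 2^{d+1} = 8 signed copies
# The orbit is antipodally symmetric: -y belongs to it for every y in it.
sym = all(any(np.allclose(-y, z) for z in orbit) for y in orbit)
print(sym)
```

The antipodal symmetry of the orbit is exactly what makes the averaged kernel a linear combination of projective kernels in the argument that follows.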
A compositional reproducing kernel is a summation of the spherical reproducing kernels on ${c^{-1}_{\Delta}(x_i)}$, divided by the number of elements of ${c^{-1}_{\Delta}(x_i)}$. Since the data set ${c^{-1}_{\Delta}(x_i)}$ has antipodal symmetry, a compositional kernel is a linear combination of projective kernels. Notice that \emph{different} fake kernels are linear combinations of \emph{different} projective kernels. It thus suffices to show the linear independence of the projective kernels at distinct data points for large enough $m$, which implies the linear independence of the fake kernels $\{\underline{k}^{\Gamma}_{m}(x_i, \cdot)\}_{i=1}^n$. We now focus on the linear independence of projective kernels. A projective kernel can be seen as a reproducing kernel for a point in $\mathbb P^d$. For a set of distinct points $\{y_i\}_{i=1}^l\subset \mathbb P^{d}$, we will show that the corresponding set of projective kernels $\{\underline{k}^{p}_{m}(y_i, \cdot)\}_{i=1}^l\subset \bigoplus_{i=0}^{m}\mathcal H_{2i}$ is linearly independent for any integer $l$ and large enough $m$. Consider the two vector subspaces $V_1^{m}=\spn \big[ \{(y_i\cdot {t})^{2m}\}_{i=1}^l\big]$ and $V_2^{m}=\spn\big[\{\underline{k}^{p}_{m}(y_i, t)\}_{i=1}^l\big]$, both inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}\subset L^2(\mathbb S^d)$. We define a linear map $h_{m}:\ V_1^{m}\rightarrow V_2^{m}$ by setting $h_{m}((y_i\cdot {t})^{2m})=\sum_{j=1}^l<(y_i\cdot {t})^{2m},\underline{k}^{p}_{m}(y_j, t)>\underline{k}^{p}_{m}(y_j, t)$. This linear map $h_{m}$ is represented by an $l\times l$ symmetric matrix whose diagonal elements are $1$'s and whose off-diagonal elements are $[(y_i\cdot y_j)]^{2m}$. Notice that $y_i\neq y_j$ in $\mathbb P^d$, which means that they are neither equal nor antipodal in $\mathbb S^d$; thus $|y_i\cdot y_j|<1$.
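The behavior of this matrix is easy to verify numerically. The following sketch (with hypothetical points, in our own notation) builds the matrix with unit diagonal and off-diagonal entries $(y_i\cdot y_j)^{2m}$ and checks that it is diagonally dominant, hence of full rank, already for a moderate $m$:

```python
import numpy as np

# Hypothetical distinct points on S^2, no two equal or antipodal (viewed in P^2).
Y = np.array([[1.0, 0, 0], [1, 1, 0], [1, 1, 1], [0, 1, 1], [0, 0, 1.0]])
Y /= np.linalg.norm(Y, axis=1, keepdims=True)

def h_matrix(Y, m):
    """Matrix of h_m: unit diagonal, off-diagonal entries (y_i . y_j)^{2m}."""
    G = (Y @ Y.T) ** (2 * m)
    np.fill_diagonal(G, 1.0)
    return G

G = h_matrix(Y, 10)
off = np.abs(G).sum(axis=1) - 1.0          # off-diagonal row sums
dominant = bool(np.all(off < 1.0))
print(dominant)                             # diagonally dominant at m = 10
print(int(np.linalg.matrix_rank(G)))        # full rank: 5
```

Since $|y_i\cdot y_j|<1$ for all $i\neq j$, the off-diagonal entries decay geometrically in $m$, which is the mechanism exploited in the argument below.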
When $m$ is large enough, all off-diagonal elements go to zero while the diagonal elements stay constant, so the matrix representing $h_{m}$ becomes \emph{diagonally dominant}, and hence of full rank. When the linear map $h_{m}$ has full rank, the spanning sets $ \{(y_i\cdot {t})^{2m}\}_{i=1}^l$ and $\{\underline{k}^{p}_{m}(y_i, t)\}_{i=1}^l$ must be bases for $V_1^{m}$ and $V_2^{m}$ respectively, so the set of projective kernels $\{\underline{k}^{p}_{m}(y_i, t)\}_{i=1}^l$ is linearly independent when $m$ is large enough. \end{proof} \subsection{Proof of Representer Theorems in Section \ref{represent}}\label{representerproofs} Proof of Theorem \ref{minorm} on minimal norm interpolation: \begin{proof} Note that the set $I_y^{m}=\{f\in \bigoplus_{i=0}^{m}\mathcal H_{2i}:\ f(x_i)=y_i\}$ is non-empty, because the $f_0$ defined by the linear system of equations is naturally in $I_y^{m}$. Let $f$ be any other element of $I_y^{m}$ and define $g=f-f_0$; then we have: $$ \norm{f}^2=\norm{g+f_0}^2=\norm{g}^2+2<f_0,g>+\norm{f_0}^2. $$ Since $g\in \bigoplus_{i=0}^{m}\mathcal H_{2i}$ and $g(x_i)=0$ for $1\leq i\leq n$, we have: $$ \begin{array}{rcl} <f_0,g>&=&<\sum_{i=1}^n\omega_{m}(x_i,\cdot)c_i,g(\cdot)>\\ &=&\dis\sum_{i=1}^nc_i<\omega_{m}(x_i,\cdot),g(\cdot)>\\ &=&\dis\sum_{i=1}^nc_ig(x_i)=0. \end{array} $$ Thus $\norm{f}^2=\norm{g+f_0}^2=\norm{g}^2+\norm{f_0}^2$, which implies that $f_0$ is the solution to the minimal norm interpolation problem. \end{proof} Proof of Theorem \ref{regularization}, on regularization problems: \begin{proof} First define the loss functional $E(f)=\sum_{i=1}^n|f(x_i)-y_i|^2+\mu\norm{f}^2$. For any $\Gamma$-invariant function $f=f^{\Gamma}\in \bigoplus_{i=0}^{m}\mathcal H_{2i}$, let $g=f-f_{\mu}$; a simple computation then yields: $$ \dis E(f)=E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2 -2\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)+2\mu<f_{\mu},g>+\mu\norm{g}^2.
$$ We want to show that $\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)=\mu<f_{\mu},g>$; an equivalent way of writing this equality is: $$ \dis\sum_{i=1}^n<(y_i-f_{\mu}(x_i))\omega_{m}(x_i,t), g(t)>=\mu<f_{\mu},g>. $$ We now claim that $\mu f_{\mu}(t)=\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t)\big]$, which implies the above equality. To prove this claim, plug the linear combination $f_{\mu}=\sum_{i=1}^nc_i\cdot \omega_{m}(x_i,t)$ into the claim; this reduces the claim to a system of linear equations in $\{c_i\}_{i=1}^n$. Since $\{\omega_{m}(x_i,t)\}_{i=1}^n$ is a linearly independent set, this system of linear equations holds if and only if $\{c_i\}_{i=1}^n$ satisfies $\mu c_k+\sum_{i=1}^n c_i\cdot \omega_{m}(x_i,x_k)=y_k$ for every $k$ with $1\leq k\leq n$, which is exactly the condition of this theorem. Therefore the claim $\mu f_{\mu}(t)=\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t)\big]$ holds. To finish the proof of the theorem, notice that $$ \begin{array}{rcl} \dis E(f)&=&E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2 -2\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)+2\mu<f_{\mu},g>+\mu\norm{g}^2\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2+2\big[\mu<f_{\mu},g>-\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)\big]\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2+2\big[\mu<f_{\mu},g>-\sum_{i=1}^n<(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t), g(t)>\big]\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2+2\big[<\underbrace{\big(\mu f_{\mu}(t)-\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t)\big]\big)}_{=0}, g(t)>\big]\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2.
\end{array} $$ The term $\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2$ in the above equality is always non-negative, thus $E(f_{\mu})\leq E(f)$, and the theorem follows. \end{proof} \bibliographystyle{asa} \bibliography{geometric} \end{document} \documentclass[12pt]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath,amssymb,amsthm,amscd, natbib, tikz,tikz-cd} \usepackage{wasysym, graphicx, hyperref, float} \usepackage[all,cmtip]{xy} \usepackage{tikz-cd} \usepackage{ gensymb } \usepackage{ textcomp, subcaption} \usepackage{authblk} \usepackage{enumitem} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{reason}[theorem]{Reasons} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{convention}[theorem]{Convention} \newtheorem{construction}[theorem]{Construction} \newtheorem{assumption}[theorem]{Assumption} \newcommand{\F}{\mathfrak F} \newcommand{\G}{\mathfrak G} \newcommand{\N}{\mathfrak N} \newcommand{\QQ}{\mathbb Q} \newcommand{\ZZ}{\mathbb Z} \newcommand{\CC}{\mathbb C} \newcommand{\RR}{\mathbb R} \newcommand{\HH}{\mathbb H} \newcommand{\SSS}{\mathcal S} \newcommand{\sL}{\mathcal L} \newcommand{\cH}{\mathcal H} \newcommand{\A}{\mathfrak A} \newcommand{\B}{\mathfrak B} \newcommand{\mega}{\overline{\omega}} \newcommand{\id}{\operatorname{id}} \newcommand{\im}{\operatorname{im}} \newcommand{\Op}{\operatorname{Op}} \newcommand{\Sym}{\operatorname{Sym}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\Con}{\operatorname{Con}} \newcommand{\QCon}{\operatorname{QCon}} \newcommand{\Diff}{\operatorname{Diff}} \newcommand{\Isom}{\operatorname{Isom}} \newcommand{\isom}{\operatorname{\mathfrak{isom}}} \newcommand{\Homeo}{\operatorname{Homeo}} \newcommand{\Teich}{\operatorname{Teich}} \newcommand{\Homeq}{\operatorname{Homeq}}
\newcommand{\Homeqbar}{\operatorname{\overline{Homeq}}} \newcommand{\Stab}{\operatorname{Stab}} \newcommand{\Nbd}{\operatorname{Nbd}} \newcommand{\colim}{\operatornamewithlimits{colim}} \newcommand{\acts}{\curvearrowright} \newcommand{\PGL}{\operatorname{PGL}} \newcommand{\GL}{\operatorname{GL}} \mathchardef\mhyphen"2D \newcommand{\free}{\mathrm{free}} \newcommand{\refl}{\mathrm{refl}} \newcommand{\trefl}{\mathrm{trefl}} \newcommand{\tame}{\mathrm{tame}} \newcommand{\wild}{\mathrm{wild}} \newcommand{\Emb}{\operatorname{Emb}} \newcommand{\Aut}{\operatorname{Aut}} \newcommand{\Out}{\operatorname{Out}} \newcommand{\Maps}{\operatorname{Maps}} \newcommand{\rel}{\operatorname{rel}} \newcommand{\tildetimes}{\mathbin{\tilde\times}} \newcommand{\h}{\overset h} \newcommand{\spn}{\mathrm{span}} \newcommand{\dis}{\displaystyle} \newcommand{\orb}{\mathrm{Orbit}} \newcommand{\catname}[1]{\textbf{#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\mtfr}[1]{\mathfrak{#1}} \newcommand{\dist}{\mathrm{dist}} \newcommand{\ant}{\mathrm{ant}} \newcommand{\var}{\mathrm{Var}} \oddsidemargin 0in \textwidth 6.5in \textheight 9in \topmargin 0in \headheight 0in \headsep 0in \usepackage{subcaption} \usepackage{amsthm,amsmath,amsfonts,amssymb, mathtools} \usepackage{xypic} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \newcommand{\BlackBox}{\rule{1.5ex}{1.5ex}} \ifdefined\proof \renewenvironment{proof}{\par\noindent{\bf Proof\ }}{\hfill\BlackBox\\[2mm]} \else \newenvironment{proof}{\par\noindent{\bf Proof\ }}{\hfill\BlackBox\\[2mm]} \fi \begin{document} \title{Reproducing Kernels and New Approaches in Compositional Data Analysis} \author[1]{Binglin Li} \author[2]{Jeongyoun Ahn} \affil[1]{University of Georgia} \affil[2]{Korea Advanced Institute of Science and Technology} \maketitle \begin{abstract}Compositional data, such as human gut microbiomes, consist of non-negative variables for which only the values relative to other variables are
available. Analyzing compositional data requires a careful treatment of the geometry of the data. A common geometrical understanding of compositional data is via a regular simplex. The majority of existing approaches rely on log-ratio or power transformations to overcome the innate simplicial geometry. In this work, based on the key observation that compositional data are projective in nature, and on the intrinsic connection between projective and spherical geometries, we re-interpret the compositional domain as the quotient space of a sphere modded out by a group action. This re-interpretation allows us to understand the function space on compositional domains in terms of that on spheres and to use spherical harmonics theory along with reflection group actions for constructing a \emph{compositional Reproducing Kernel Hilbert Space (RKHS)}. This construction of an RKHS for compositional data opens wide research avenues for future methodological developments. In particular, well-developed kernel embedding methods can now be introduced to compositional data analysis. The polynomial nature of compositional RKHS has both theoretical and computational benefits. The wide applicability of the proposed theoretical framework is exemplified with nonparametric density estimation and kernel exponential family for compositional data. \end{abstract} \section{Introduction} The recent popularity of human gut microbiome research has presented many data-analytic, statistical challenges \citep{calle2019statistical}. Among the many features of microbiome, or meta-genomic, data, we address their \emph{compositional} nature in this work. Compositional data consist of $n$ observations of $(d+1)$ non-negative variables whose values represent the relative proportions with respect to the other variables in the data. Compositional data are commonly observed in many scientific fields, such as biochemistry, ecology, finance, and economics, to name just a few.
The most notable aspect of compositional data is the restriction on their domain, specifically that the sum of the variables is fixed. The compositional domain is not a classical vector space, but a (regular) simplex, \begin{equation}\label{eq:simplex} \Delta^d=\left\{(x_1,\dots,x_{d+1})\in \mathbb R^{d+1} \;\middle|\; \sum_{i=1}^{d+1}x_i=1,\ x_i\geq 0,\ \forall i \right\}, \end{equation} which is topologically compact. The inclusion of zeros in (\ref{eq:simplex}) is crucial, as most microbiome data have a substantial number of zeros. Arguably the most prominent approach to handling data on a simplex is to take a log-ratio transformation \citep{aitch86}, for which one has to consider only the open interior of $\Delta^d$, denoted by $\mathcal S^d$. Zeros are usually taken care of by adding a small number; however, it has been noted that the results of analysis can be quite dependent on how the zeros are handled \citep{lubbe2021comparison}. \citet{micomp} pointed out ``the dangers inherent in ignoring the compositional nature of the data'' and argued that microbiome datasets must be treated as compositions at all stages of analysis. Recently, some approaches that analyze compositional data without any transformation have been gaining popularity \citep{li2020s, rasmussen2020zero}. The approach proposed in this paper is to construct reproducing kernels for compositional data by interpreting compositional domains via projective and spherical geometries. \subsection{Methodological Motivation}\label{machine} Besides the motivation from microbiome studies, another source of inspiration for this work is the current exciting development in statistics and machine learning.
In particular, the rising popularity of applying higher tensors and kernel techniques allows multivariate methods to be extended to exotic structures beyond traditional vector spaces, e.g., graphs \citep{graphrkhs}, manifolds \citep{vecman} or images \citep{tensorbrain}. This work serves as an attempt to construct reproducing kernel structures for compositional data, so that recent developments in (reproducing) kernel techniques from machine learning theory can be introduced to this classical field of statistics. Our approach models the compositional domain as a group quotient of a sphere $\mathbb S^d/\Gamma$ (see (\ref{allsame})), which gives a new connection between compositional data analysis and directional statistics. The idea of representing data by using tensors and frames is not new in directional statistics \citep{ambro}, but the authors find it more convenient to construct reproducing kernels for $\mathbb S^d/\Gamma$ (the reason is given in Section \ref{whyrkhs}). We do want to mention that the construction of reproducing kernels for compositional data points to a potential new paradigm for compositional data analysis: traditional approaches aim to find direct analogues of multivariate concepts, like the mean, the variance-covariance matrix, and suitable regression analysis frameworks based on those concepts. However, finding the mean point over a non-linear space, e.g. a manifold, is not an easy task, and in the worst case scenario the mean point might not even exist on the underlying space (e.g. the mean point of the uniform distribution on a unit circle does \emph{not} lie on the circle). In this work we take the perspective of kernel mean embedding \citep{kermean}. Roughly speaking, instead of finding the ``\emph{physical}'' point for the mean of a distribution, one can do statistics \emph{distributionally}.
In other words, the mean or expectation is considered as a \emph{linear functional} on the RKHS, and this functional is represented by an actual function in the Hilbert space, referred to as the ``kernel mean $\mathbb E[k(X,\cdot)]$''. Instead of trying to find another compositional point as the empirical mean of a compositional data set, one can construct the empirical kernel mean $\sum_{i=1}^nk(X_i,\cdot)/n$ as a replacement for the traditional empirical mean. Moreover, one can also construct the analogue of the variance-covariance matrix purely from kernels; in fact, \citet{fbj09} considered Gram matrices constructed out of reproducing kernels as consistent estimators of cross-covariance operators (these operators play the role of covariance and cross-covariance matrices in classical Euclidean spaces). Since we remodel the compositional domain using projective/spherical geometry, the compositional domain is \emph{not} treated as a vector space, but as a quotient topological space $\mathbb S^d/\Gamma$. Instead of ``putting a linear structure on an Aitchison simplex'' \citep{aitch86}, or using the square root transformation (which still starts from an Aitchison simplex), we choose to ``linearize'' compositional data points by using kernel techniques (and possibly higher-tensor constructions), so that one can still do ``multivariate analysis''. Our construction in this work initiates such an attempt to introduce these recent developments in kernel and tensor techniques from statistical learning theory into compositional data analysis. \subsection{Contributions of the Present Work} Our contribution in this paper is threefold. First, we propose a new geometric foundation for compositional data analysis: $\mathbb P^d_{\geq 0}$, a subspace of a full projective space $\mathbb P^d$.
Based on the close connection of spheres with projective spaces, we also describe $\mathbb P^d_{\geq 0}$ in terms of $\mathbb S^d/\Gamma$, a sphere modded out by a reflection group action, whose fundamental domain is the first orthant $\mathbb S^d_{\geq 0}$ (a totally different reason for using ``$\mathbb S^d_{\geq 0}$'' than in the traditional approach). Secondly, based on this new geometric foundation of compositional domains, we propose a new nonparametric compositional density estimator by making use of well-developed spherical density estimation theory. Furthermore, we provide a central limit theorem for its integral squared error, which leads to a goodness-of-fit test. Thirdly, also through this new geometric foundation, function spaces on compositional domains can be related to those on spheres. The space $L^2(\mathbb S^d)$ of square integrable functions on the sphere is the focus of an ancient subject in mathematics and physics called ``spherical harmonics''. Moreover, spherical harmonics theory tells us that each Laplacian eigenspace of $L^2(\mathbb S^d)$ is a reproducing kernel Hilbert space, which allows us to construct reproducing kernels for compositional data points via ``orbital integrals'' and opens a door for machine learning techniques to be applied to compositional data. We also propose a compositional exponential family as a general distributional family for compositional data modeling. \subsection{Why Projective and Spherical Geometries?}\label{whysph} According to \cite{ai94}, ``any meaningful function of a composition must satisfy the requirement $f(ax)=f(x)$ for any $a\neq 0$.'' In geometry and topology, a space whose functions satisfy this requirement is called a \emph{projective space}, denoted by $\mathbb P^d$; therefore projective geometry, rather than a simplex, should be the natural candidate for modeling compositional data.
Since the points of a compositional domain cannot have coordinates of opposite signs, a compositional domain is in fact a ``positive cone'' $\mathbb P_{\geq 0}^d$ inside a full projective space. A key property of projective spaces is that stretching or shrinking the length of a vector does \emph{not} alter the corresponding point in $\mathbb P^d$. Thus one can stretch a point in $\Delta^d$ to a point on the first orthant sphere by dividing it by its $\ell_2$ norm. Figure \ref{fig:stretch} illustrates this stretching (``stretching'' is not a transformation from the projective geometry point of view) in action. In short, projective geometry is the more natural model for compositional data according to the original philosophy of \cite{ai94}. However, spheres are easier to work with: mathematically speaking, the function space on spheres is a well-studied subject in spherical harmonics theory, and statistically speaking, spheres connect with directional statistics in a more natural way. Our compositional domain $\mathbb P^d_{\geq 0}$ can be naturally identified with $\mathbb S^d/\Gamma$, a sphere modded out by a reflection group action. The reflection group $\Gamma$ acts on the sphere $\mathbb S^d$ by reflections, and the \emph{fundamental domain} of this action is $\mathbb S^d_{\geq 0}$ (group actions, fundamental domains and reflection groups are all discussed in Section \ref{sec:sphere}). Thus our connection with the first orthant sphere $\mathbb S^d_{\geq 0}$ is a natural consequence of projective geometry and its connection with spheres under group actions, having nothing to do with square root transformations. \subsection{Why Reproducing Kernels?}\label{whyrkhs} As explained in Section \ref{machine}, we strive to use new ideas of tensors and kernel techniques in machine learning to propose another framework for compositional data analysis, and Section \ref{whysph} explains the new connection with spherical geometry and directional statistics.
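The stretching just described, and its inverse, are simple to state in code. In this minimal sketch (the function names are ours), $\ell_2$-normalization sends a composition in $\Delta^d$ onto $\mathbb S^d_{\geq 0}$, $\ell_1$-normalization sends it back, and the relative compositions (the projective point) are preserved throughout:

```python
import numpy as np

def inflate(v):
    """Stretch a composition in the simplex onto the first orthant sphere."""
    return v / np.linalg.norm(v, 2)

def contract(s):
    """Pull a point on the first orthant sphere back into the simplex."""
    return s / np.linalg.norm(s, 1)

v = np.array([0.2, 0.3, 0.5])               # a composition in Delta^2
s = inflate(v)
on_sphere = bool(np.isclose((s**2).sum(), 1.0))
print(on_sphere)                             # lands on S^2
print(np.allclose(contract(s), v))           # the two maps are inverse
print(np.allclose(s / s.sum(), v))           # relative compositions preserved
```

Note that zero coordinates pass through both maps unchanged, in contrast to log-ratio transformations, which require strictly positive parts.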
However, the idea of using tensors to represent data points is not new in directional statistics \citep{ambro}. A naive idea would thus be to mimic the directional statistics treatment of ambiguous rotations: \cite{ambro} studied how to do statistics over the coset space $SO(3)/K$, where $K$ is a finite subgroup of $SO(3)$. In their case, the subgroup $K$ has to belong to a special class of subgroups of \emph{special} orthogonal groups, and within this class they manage to construct the corresponding tensors and frames, which give inner products between data points. In our case, however, a compositional domain is $\mathbb S^d/\Gamma=O(d)\setminus O(d+1)/\Gamma$, a double coset space. Unlike \cite{ambro}, which only considered the $d=3$ case, our dimension $d$ is completely general; moreover, our reflection group $\Gamma$ is \emph{not} a subgroup of any special orthogonal group, so the constructions of tensors and frames in \cite{ambro} do not apply to our situation directly. Part of the novelty of this work is to get around this issue by making use of the reproducing kernel Hilbert space (RKHS) structures on spheres and ``averaging out'' the group action at the level of reproducing kernels, which in turn gives us a reproducing kernel structure on compositional domains. Once we have an RKHS in hand, we can ``add'' and take the ``inner product'' of two data points, so our linearization strategy can also be regarded as a combination of ``the averaging approach'' and ``the embedding approach'' of \cite{ambro}. In fact, abstract function spaces together with reproducing kernels play an increasingly important role. Below we provide some philosophical motivation for the importance of the function space over the underlying data set: \begin{itemize} \item[(a)] Hilbert spaces of functions are naturally linear with an inner product structure.
With the existence of (reproducing) kernels, data points are naturally incorporated into the function space, which leads to interesting interactions between the data set and the functions defined over it. There is a large literature on embedding distributions into an RKHS, e.g. \cite{disemd}, and on using reproducing kernels to recover exponential families, e.g. \cite{expdual}. RKHSs have also been used to recover classical statistical tests, e.g. the goodness-of-fit test in \cite{kergof}, and regression in \cite{rkrgr}. These works do not concern the analysis of the function space itself but primarily focus on data analysis on the underlying data set; nevertheless, all of them proceed by passing to an RKHS. This reflects the increasing recognition of the importance of abstract function spaces with (reproducing) kernel structures. \item[(b)] Mathematically speaking, given a geometric space $M$, the function space on $M$ can recover the underlying geometric space $M$ itself, and this principle has been playing a big role in different areas of geometry; in particular, modern algebraic geometry, following the philosophy of Grothendieck, is based upon this insight. Function spaces can be generalized to matrix-valued function spaces, and this generalization gives rise to non-commutative RKHSs, which are used in shape analysis in \citet{matrixvaluedker}; moreover, non-commutative RKHSs are connected with free probability theory \citep{ncrkhs}, which has been used in random effects and linear mixed effects models \citep{fj19, princbulk}. \end{itemize} \subsection{Structure of the Paper} We briefly describe the content of the main sections of this article: \begin{itemize} \item In Section \ref{sec:sphere}, we rebuild the geometric foundation of compositional domains by using projective and spherical geometry. We also point out that the old model using the closed simplex $\Delta^d$ is topologically the same as the new foundation.
In diagrammatic form, we establish the following topological equivalences: \begin{equation}\label{4things} \Delta^d\cong \mathbb P^d_{\geq 0}\cong \mathbb S^d/\Gamma\cong\mathbb S^d_{\geq 0}, \end{equation} \noindent where $\mathbb S^d_{\geq 0}$ is the first orthant sphere, which is also the fundamental domain of the group action $\Gamma\acts \mathbb S^d$. All four spaces in (\ref{4things}) will be referred to as ``compositional domains''. As a direct application, we propose a compositional density estimation method that uses spherical density estimation theory via a spread-out construction through the quotient map $\pi:\ \mathbb S^d\rightarrow\mathbb S^d/\Gamma$, and we prove that our compositional density estimator possesses an integral squared error satisfying a central limit theorem (Theorem \ref{ourclt}), which can be used for goodness-of-fit tests. \item Section \ref{sec:rkhs} is devoted to constructing compositional reproducing kernel Hilbert spaces. Our construction relies on the reproducing kernel structures on spheres, which are given by spherical harmonics theory. \citet{wah81} constructed splines using reproducing kernel structures on $\mathbb S^2$ (the $2$-dimensional sphere), in which she also used the spherical harmonics theory of \cite{sasone}, which only treated the $2$-dimensional case. Our theory deals with the general $d$-dimensional case, so we need the full power of spherical harmonics theory, which will be reviewed at the beginning of Section \ref{sec:rkhs}; we then use it to construct compositional reproducing kernels via an ``orbital integral'' type of idea. \item Section \ref{sec:app} gives a couple of applications of our construction of compositional reproducing kernels.
(i) The first example is the representer theorem, with one caveat: our RKHS is finite dimensional, consisting of degree $2m$ homogeneous polynomials with no transcendental functions, so linear independence of the kernels at distinct data points is not directly available; however, we show that linear independence still holds when the degree $m$ is high enough. Our statement of the representer theorem is not new purely from the RKHS theory point of view; our point is to demonstrate that intuitions from traditional statistical learning can still be used in compositional data analysis, with some extra care. (ii) Secondly, we construct the compositional exponential family, which can be used to model the underlying distribution of compositional data. The flexible construction will enable us to utilize this distribution family in many statistical problems, such as mean tests. \end{itemize} \section{New Geometric Foundations of Compositional Domains}\label{sec:sphere} \begin{figure} \centering \includegraphics[width = .4\textwidth]{SqrtCompare.png} \caption{Illustration of the stretching action on $\Delta^1$ to $\mathbb S^1$. Note that the stretching keeps the relative compositions, whereas the square root transformation fails to do so. } \label{fig:stretch} \end{figure} In this section, we give a new interpretation of compositional domains as a cone $\mathbb P^d_{\geq 0}$ in a projective space, based on which compositional domains can be interpreted as spherical quotients by reflection groups. This connection yields a ``spread-out'' construction on spheres, and we demonstrate an immediate application of this new approach to compositional density estimation. \subsection{Projective and Spherical Geometries and a Spread-out Construction}\label{sec:spread} Compositional data consist of the relative proportions of $d+1$ variables, which implies that each observation belongs to a projective space.
A $d$-dimensional projective space $\mathbb P^d$ is the set of one-dimensional linear subspaces of $\mathbb R^{d+1}$. A one-dimensional subspace of a vector space is just a line through the origin, and in projective geometry, all points on a line through the origin are regarded as the same point in a projective space. In contrast to the classical linear coordinates $(x_1, \cdots,x_{d+1})$, a point in $\mathbb P^d$ can be represented by a projective coordinate $(x_1 : \cdots : x_{d+1})$, with the following property \[ (x_1 : x_2: \cdots : x_{d+1}) = (\lambda x_1 : \lambda x_2: \cdots : \lambda x_{d+1}), ~~~\text{for any } \lambda \ne 0. \] It is natural that an appropriate ambient space for compositional data is the \emph{non-negative projective space}, which is defined as \begin{equation}\label{eq:proj} \mathbb P^d_{\ge 0} = \left\{(x_1 : x_2: \cdots : x_{d+1})\in \mathbb P^d \;| \; (x_1 : x_2: \cdots : x_{d+1}) = (|x_1| : |x_2|: \cdots : |x_{d+1}|)\right \}. \end{equation} It is clear that the common representation of compositional data with a (closed) simplex $\Delta^d$ in (\ref{eq:simplex}) is in fact equivalent to (\ref{eq:proj}), thus we have the first equivalence: \begin{equation}\label{projtosimp} \mathbb P^d_{\geq 0}\cong \Delta^d. \end{equation} Let $\mathbb S^d$ denote a $d$-dimensional unit sphere, defined as \[ \mathbb S^d=\left\{(x_1,x_2,\dots, x_{d+1})\in \mathbb R^{d+1} \; | \sum_{i=1}^{d+1}x_i^2=1\right\}, \] and let $\mathbb S^d_{\geq 0}$ denote the first orthant of $\mathbb S^d$, the subset in which all coordinates are non-negative. The following lemma states that $\mathbb S^d_{\geq 0}$ can serve as a new domain for compositional data, as there exists a bijective map between $\Delta^d$ and $\mathbb S^d_{\geq 0}$.
\begin{lemma} \label{compcone} There is a canonical identification of $\Delta^d$ with $\mathbb S^d_{\geq 0}$, namely, $$ \xymatrix{\Delta^d\ar@<.4ex>[r]^f& \mathbb S^d_{\geq 0}\ar@<.4ex>[l]^g}, $$ where $f$ is the inflation map and $g$ is the contraction map, with both $f$ and $g$ being continuous and inverse to each other. \end{lemma} \begin{proof} It is straightforward to construct the inflation map $f$. For $v \in \Delta^d$, define $ f(v) = v/\|v\|_2, $ where $\|v\|_2 $ is the $\ell_2$ norm of $v$; it is easy to see that $f(v) \in \mathbb S^d_{\ge 0}$. Note that the inflation map ensures that $f(v)$ represents the same projective point as $v$. To construct the contraction map $g$, for $s\in \mathbb S^d_{\geq 0}$ we define $ g(s) = s / \|s\|_1, $ where $\|s\|_1$ is the $\ell_1$ norm of $s$, and see that $g(s)\in\Delta^d$. One can easily check that both $f$ and $g$ are continuous and inverse to each other. \end{proof} Based on Lemma \ref{compcone}, we now identify $\Delta^d$ alternatively with the quotient topological space $\mathbb S^d/\Gamma$ for some group action $\Gamma$. In order to do so, we first show that the cone $\mathbb S^d_{\geq 0}$ is a strict fundamental domain of $\Gamma$, i.e., $\mathbb S^d_{\geq 0}\cong \mathbb S^d/\Gamma$. We start by defining a \emph{coordinate hyperplane} for a group. The $i$-th coordinate hyperplane $H_i\subset \mathbb R^{d+1}$ with respect to a choice of a standard basis $\{e_1,e_2,\dots, e_{d+1}\}$ is the codimension-one linear subspace defined as \[ H_i=\{(x_1,\dots, x_i,\dots, x_{d+1})\in \mathbb R^{d+1}:\ x_i=0\}, ~~ i = 1, \ldots, d+1. \] We define the reflection group $\Gamma$ with respect to coordinate hyperplanes as follows: \begin{definition}\label{reflect} The reflection group $\Gamma$ is a subgroup of the general linear group $GL(d+1)$ and it is generated by $\{\gamma_i, i = 1, \ldots, {d+1}\}$.
Given the same basis $\{e_1,\dots, e_{d+1}\}$ for $\mathbb R^{d+1}$, the reflection $\gamma_i$ is a linear map specified via: \[ \gamma_i:\ (x_1,\dots,x_{i-1}, x_i, x_{i+1},\dots, x_{d+1})\mapsto (x_1,\dots,x_{i-1}, -x_i, x_{i+1},\dots, x_{d+1}). \] \end{definition} Note that when restricted to $\mathbb S^d$, $\gamma_i$ is an isometry from the unit sphere $\mathbb S^d$ to itself; we denote this action by $\Gamma\acts\mathbb S^d$. Thus, one can treat the group $\Gamma$ as a discrete subgroup of the isometry group of $\mathbb S^d$. In what follows we establish that $\mathbb S^d_{\ge 0}$ is a fundamental domain of the group action $\Gamma\acts \mathbb S^d$ in the topological sense. In general, there is no uniform treatment of a fundamental domain, but we will follow the approach of \cite{bear}. To introduce a fundamental domain, let us define an \emph{orbit} first. For a point $z\in \mathbb S^d$, the orbit of $z$ under the group $\Gamma$ is the following set: \begin{equation}\label{eq:orbit} \orb^{\Gamma}_z=\{\gamma(z), \forall \gamma\in \Gamma\}. \end{equation} Note that one can decompose $\mathbb S^d$ into a disjoint union of orbits. The size of an orbit is not necessarily the same as the size of the group $|\Gamma|$, because of the existence of a \emph{stabilizer subgroup}, which is defined as \begin{equation}\label{stable} \Gamma_z=\{\gamma\in \Gamma, \gamma (z)=z\}. \end{equation} The set $\Gamma_z$ forms a group itself, and we call $\Gamma_z$ the \emph{stabilizer subgroup} of $\Gamma$ at $z$. Every element of $\orb^{\Gamma}_z$ has an isomorphic stabilizer subgroup, so the size of $\orb^{\Gamma}_z$ is the quotient $|\Gamma|/|\Gamma_z|$, where $|\cdot|$ denotes the cardinality of a set. There are only finitely many possibilities for the size of a stabilizer subgroup for the action $\Gamma\acts \mathbb S^d$, and the size of a stabilizer subgroup depends on the codimensions of the coordinate hyperplanes containing the point.
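The orbit--stabilizer count $|\orb_z^{\Gamma}|=|\Gamma|/|\Gamma_z|$ can be checked directly. The following minimal sketch (our own illustration; the helper names are hypothetical) enumerates the orbit of a point on $\mathbb S^2$ under all $2^{d+1}$ sign flips:

```python
from itertools import product

def gamma_orbit(z):
    """Orbit of z under the reflection group Gamma: the set of all
    distinct componentwise sign flips of z."""
    return {tuple(s * x for s, x in zip(signs, z))
            for signs in product((1, -1), repeat=len(z))}

def stabilizer_size(z):
    """|Gamma_z|: sign patterns fixing z -- one factor of 2 per zero coordinate."""
    return 2 ** sum(1 for x in z if x == 0)

z = (0.6, 0.8, 0.0)            # a point on S^2 lying on one coordinate hyperplane
orbit = gamma_orbit(z)
# |Orbit| = |Gamma| / |Gamma_z| = 2^(d+1) / 2^(#zero coordinates)
assert len(orbit) == 2 ** len(z) // stabilizer_size(z)   # 8 / 2 = 4
```

A point with no zero coordinates has the full orbit of size $2^{d+1}$, while, e.g., a vertex such as $(1,0,0)$ has orbit size $2$.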
\begin{definition}\label{fundomain} Let $G$ act properly and discontinuously on a $d$-dimensional sphere, with $d>1$. A \emph{fundamental domain} for the group action $G$ is a closed subset $F$ of the sphere such that every orbit of $G$ intersects $F$ in at least one point, and if an orbit intersects the interior of $F$, then it intersects $F$ in only one point. \end{definition} A fundamental domain is \emph{strict} if every orbit of $G$ intersects $F$ at exactly one point. The following proposition identifies $\mathbb S^d_{\geq 0}$ with the quotient topological space $\mathbb S^d/\Gamma$, i.e., $\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma$. \begin{proposition}\label{conedom} Let $\Gamma\acts \mathbb S^d$ be the group action described in Definition \ref{reflect}; then $\mathbb S^d_{\geq 0}$ is a strict fundamental domain. \end{proposition} In topology, there is a natural quotient map $\mathbb S^d\rightarrow \mathbb S^d/\Gamma$. With the identification $\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma$, there should be a natural map $\mathbb S^d\rightarrow\mathbb S^d_{\geq 0}$. Now define a contraction map $c: \mathbb S^d\rightarrow \mathbb S^d_{\geq 0}$ via $(x_1,\dots,x_{d+1})\mapsto (|x_1|,\dots, |x_{d+1}|)$, taking component-wise absolute values. It is then straightforward to see that $c$ is indeed the topological quotient map $\mathbb S^d\rightarrow \mathbb S^d/\Gamma$, under the identification $\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma$. So far, via (\ref{projtosimp}), Lemma \ref{compcone} and Proposition \ref{conedom}, we have established the following equivalences: \begin{equation}\label{allsame} \mathbb P^d_{\geq 0}=\Delta^d=\mathbb S^d_{\geq 0}=\mathbb S^d/\Gamma. \end{equation} For the rest of the paper we will use these four characterizations of a compositional domain interchangeably. \subsubsection{Spread-Out Construction} Based on (\ref{allsame}), one can turn a compositional data analysis problem into one on a sphere via the \emph{spread-out construction}.
The key idea is to associate one compositional data point $z\in \Delta^d=\mathbb S^d_{\geq 0}$ with a $\Gamma$-orbit of data points $\orb^{\Gamma}_z\subset \mathbb S^d$ in (\ref{eq:orbit}). Formally, given a point $z\in \Delta^d$, we construct the following \emph{data set} (\emph{not necessarily a set} because of possible repetitions): \begin{equation}\label{sprd} c^{-1}(z) = \left\{|\Gamma_{z'}|\ \text{copies of }z', \ \text{for}\ z'\in \text{Orbit}_z^\Gamma \right\}, \end{equation} where $\Gamma_{z'}$ is the stabilizer subgroup of $\Gamma$ with respect to $z'$ in (\ref{stable}). In general, if there are $n$ observations in $\Delta^d$, the spread-out construction will create a data set with $n2^{d+1}$ observations on $\mathbb S^d$, in which observations with zero coordinates are repeated. Figure \ref{fig:kde} (a) and (b) illustrate this idea with a toy data set with $d = 2$. \subsection{Illustration: Compositional Density Estimation}\label{compdensec} \begin{figure}[ht] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width = \textwidth]{simplex_eg.png} \caption{Compositional data on $\Delta^2$} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width = \textwidth]{simplex_kde_3d.png} \caption{``Spread-out'' data on $\mathbb S^2$} \end{subfigure}\\ \begin{subfigure}[b]{0.50\textwidth} \includegraphics[width = \textwidth]{sphere_kde_3d.png} \caption{Density estimate on $\mathbb S^2$} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \includegraphics[width = \textwidth]{simplex_kde.png} \caption{``Pulled-back'' estimate on $\Delta^2$} \end{subfigure} \caption{Toy compositional data on the simplex $\Delta^2$ in (a) are spread out to a sphere $\mathbb S^2$ in (b). The density estimate on $\mathbb S^2$ in (c) is pulled back to $\Delta^2$ in (d).}\label{fig:kde} \end{figure} The spread-out construction in (\ref{sprd}) provides a new intimate relation between directional statistics and compositional data analysis.
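To make the construction concrete, the following minimal sketch (our own illustration; the function names are hypothetical) spreads a toy compositional data set out to $\mathbb S^2$, inflating each composition to $\mathbb S^d_{\geq 0}$ and then taking all $2^{d+1}$ signed copies:

```python
from itertools import product
from math import sqrt

def spread_out(data):
    """Spread-out construction: map each composition in the simplex to
    its 2^(d+1) signed copies on S^d, with repetitions when coordinates
    vanish, as in the multiset c^{-1}(z)."""
    spread = []
    for v in data:
        norm = sqrt(sum(x * x for x in v))
        s = tuple(x / norm for x in v)        # inflation: Delta^d -> S^d_{>=0}
        for signs in product((1, -1), repeat=len(s)):
            spread.append(tuple(e * x for e, x in zip(signs, s)))
    return spread

data = [(0.2, 0.3, 0.5), (0.5, 0.5, 0.0)]     # n = 2 compositions, d = 2
cloud = spread_out(data)
assert len(cloud) == len(data) * 2 ** 3       # n * 2^(d+1) points on S^2
```

For the second observation, whose third coordinate is zero, the $2^3$ signed copies contain each distinct orbit point twice, matching the multiplicity $|\Gamma_{z'}|$ in (\ref{sprd}).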
Indeed, this construction produces a directional data set out of a compositional data set, so we can transform a compositional data problem into a directional statistics problem via this spread-out construction. For example, we can perform compositional independence/uniformity tests by performing directional independence/uniformity tests \citep{sobind, sobuni} through spread-out constructions. In this section, we give a new compositional density estimation framework by using spread-out constructions. In directional statistics, density estimation for spherical data has a long history dating back to the late 70s in \cite{expdirect}. In the 80s, \cite{hallden} and \cite{baiden} established a systematic framework for spherical density estimation theory. Spherical density estimation theory later became popular partly because its integral squared error (ISE) is closely related to goodness-of-fit testing, as in \cite{chineseclt} and the recent work \cite{portclt}. The rich development of spherical density estimation theory will yield a compositional density framework via spread-out constructions. In the following we apply this idea to nonparametric density estimation for compositional data. Instead of directly estimating the density on $\Delta^d$, one can perform the estimation with the spread-out data on $\mathbb S^d$, from which a density estimate for compositional data can be obtained. Let $p(\cdot)$ denote a probability density function of a random vector $Z$ on $\mathbb S^d_{\geq 0}$, or equivalently on $\Delta^d$. The following proposition gives a form of the density of the spread-out random vector $\Gamma(Z)$ on the whole sphere $\mathbb S^d$.
\begin{proposition}\label{induced} Let $Z$ be a random variable on $\mathbb S^d_{\geq 0}$ with probability density $p(\cdot)$; then the induced random variable $\Gamma(Z)=\{\gamma(Z)\}_{\gamma\in \Gamma}$ has the following density $\tilde{p}(\cdot)$ on $\mathbb S^d$: \begin{equation}\label{cshriek} \tilde{p}(z)=\frac{|\Gamma_z|}{|\Gamma|} p(c(z)), \ z\in \mathbb S^d, \end{equation} where $|\Gamma_z|$ is the cardinality of the stabilizer subgroup $\Gamma_z$ of $z$. \end{proposition} Let $c_*$ denote the operation on functions analogous to the contraction map $c$ on data points. It is clear that given a probability density $\tilde p$ on $\mathbb S^d$, we can obtain the original density on the compositional domain via the ``pull back'' operation $c_*$: \[ p(z) = c_*(\tilde p)(z)=\sum_{x\in c^{-1}(z)}\tilde p(x), ~~ z\in \mathbb S^d_{\geq 0}. \] Now consider estimating the density on $\mathbb S^d$ with the spread-out data. Density estimation for data on a sphere has been well studied in directional statistics \citep{hallden, baiden}. For $x_1, \ldots, x_n \in \mathbb S^d$, a density estimate for the underlying density is \[ \hat{f}_n(z)=\frac{c_h}{n}\sum_{i=1}^nK\left(\frac{1-z^T x_i}{h_n}\right),\ z\in \mathbb S^d, \] where $K$ is a kernel function satisfying the common assumptions in Assumption \ref{kband}, and $c_h$ is a normalizing constant. Applying this to the spread-out data $c^{-1}(x_i)$, $i = 1, \ldots, n$, we have a density estimate of $\tilde p (\cdot) $ defined on $\mathbb S^d$: \begin{equation}\label{fhattilde} \hat{f}^{\Gamma}_n(z)=\dis \frac{c_h}{n|\Gamma|}\sum_{1\leq i\leq n,\gamma\in \Gamma}K\left(\frac{1-z^T \gamma( x_i)}{h_n}\right), \ z\in \mathbb S^d, \end{equation} from which a density estimate on the compositional domain is obtained by applying $c_*$. That is, \[ \hat{p}_n(z)=c_*\hat{f}^{\Gamma}_n(z)=\sum_{x\in c^{-1}(z)}\hat{f}^\Gamma_n(x),\ \ z\in \mathbb S^d_{\geq 0}.
\] Figure \ref{fig:kde} (c) and (d) illustrate this density estimation process with a toy example. The consistency of the spherical density estimate $\hat f_n$ is established by \cite{ chineseclt, portclt}, where it is shown that the integral squared error (ISE) of $\hat f_n$, $\int_{\mathbb S^d} (\hat f_n - f)^2 dz$, follows a central limit theorem. It is straightforward to show that the ISE of the proposed compositional density estimator $\hat{p}_n$ on the compositional domain is also asymptotically normally distributed. However, the CLT for the ISE of spherical densities in \cite{chineseclt} contains an unnecessary finite support assumption on the density kernel function $K$ (very different from reproducing kernels); although in \cite{portclt} this finite support condition is dropped, their result concerns directional-linear data, and their proof does not directly apply to the purely directional context. For the reader's convenience, we provide the proof of the CLT for the ISE for both compositional and spherical data, without the finite support condition of \cite{chineseclt}. \begin{theorem}\label{ourclt} The CLT for the ISE holds for both directional and compositional data under the mild conditions (H1, H2 and H3) in Section \ref{cltprf}, without the finite support condition on the density kernel function $K$. \end{theorem} The details of the proof of Theorem \ref{ourclt}, together with the statements of the technical conditions, can be found in Section \ref{cltprf}. \section{Reproducing Kernels of Compositional Data}\label{sec:rkhs} This section is devoted to constructing reproducing kernel structures on compositional domains, based on the topological re-interpretation of $\Delta^d$ in Section \ref{sec:sphere}. The key idea is that, based on the quotient map $\pi:\ \mathbb S^d\rightarrow \mathbb S^d/\Gamma=\Delta^d$, we can use function spaces on spheres to understand function spaces on compositional domains.
Moreover, we can construct reproducing kernel structures of a compositional domain $\Delta^d$ based on those on $\mathbb S^d$. The reproducing kernel was first introduced in 1907 by Zaremba in his study of boundary value problems for harmonic and biharmonic functions, but the systematic development of the subject was carried out in the early 1950s by \citet{aron50}. Reproducing kernels on $\mathbb S^d$ were essentially discovered by Laplace and Legendre in the 19th century, although reproducing kernels on spheres were called \emph{zonal spherical functions} at that time. Both spherical harmonics theory and RKHS theory have found applications in theoretical subjects like functional analysis, representation theory of Lie groups and quantum mechanics. In statistics, the successful application of RKHS theory to spline models by \citet{wah81} popularized RKHS theory for $\mathbb S^d$. In particular, she used spherical harmonics theory to construct an RKHS on $\mathbb S^2$. Generally speaking, for a fixed topological space $X$, there exist (and one can construct) multiple reproducing kernel Hilbert spaces on $X$. In \cite{wah81}, an RKHS on $\mathbb S^2$ was constructed by considering a \emph{subspace} of $L^2(\mathbb S^2)$ under a finiteness condition, and the reproducing kernels were also built out of zonal spherical functions. That work was motivated by spline models on the sphere, while our motivation is independent of spline models of any kind. In this work we consider reproducing kernel structures on spheres which are \emph{different} from the one in \cite{wah81}, but we share the same building block, namely spherical harmonics theory. Starting from the re-interpretation of a compositional domain $\Delta^d$ as $\mathbb S^d/\Gamma$, we will construct reproducing kernels on compositional domains by using reproducing kernel structures on spheres.
Spherical harmonics theory gives reproducing kernel structures on $\mathbb S^d$, and a compositional domain $\Delta^d$ is topologically covered by the sphere, with deck transformation group $\Gamma$. It is therefore natural to ask (i) whether function spaces on $\Delta^d$ can be identified with the subspace of $\Gamma$-invariant functions on $\mathbb S^d$, and (ii) whether one can ``build'' $\Gamma$-invariant kernels out of spherical reproducing kernels, so that the $\Gamma$-invariant kernels play the role of ``reproducing kernels'' on $\Delta^d$. It turns out that the answers to both (i) and (ii) are positive (see Remark \ref{dreami} and Theorem \ref{reprcomp}). The discovery of reproducing kernel structures on $\Delta^d$ is crucially based on the reinterpretation of compositional domains via projective and spherical geometries in Section \ref{sec:sphere}. By considering $\Gamma$-invariant objects in spherical function spaces, we construct reproducing kernel structures for compositional domains, and hence compositional reproducing kernel Hilbert spaces. Although a compositional RKHS was first considered as a candidate ``inner product space'' into which data points can be mapped, the benefit of working with an RKHS goes far beyond this, due to the exciting development of kernel techniques in machine learning theory that can be applied to compositional data analysis, as mentioned in Section \ref{machine}. This opens the door to a new framework for compositional data analysis, in which compositional data points are ``upgraded'' to functions (via reproducing kernels), and classical statistical notions, like means and variance-covariances, are ``upgraded'' to linear functionals and linear operators over the function space. Traditionally important statistical topics such as dimension reduction, regression analysis, and many inference problems can then be re-addressed in the light of these ``kernel techniques''.
\subsection{Recollection of Basic Facts from Spherical Harmonics Theory} We give a brief review of the theory of spherical harmonics in the following. See \citet{atkinson2012spherical} for a general introduction to the topic. In classical linear algebra, a finite dimensional linear space with a linear map to itself can be decomposed into direct sums of eigenspaces. Such a phenomenon still holds for $L^2(\mathbb S^d)$, with the Laplacian being the linear operator to itself. Recall that the Laplacian operator on a function $f$ of $d+1$ variables is \[ \dis\Delta f=\sum_{i=1}^{d+1}\frac{\partial ^2 f}{\partial x_i^2}. \] Let $\mathcal H_i$ be the $i$-th eigenspace of the Laplacian operator. It is known that $L^2(\mathbb S^d)$ can be orthogonally decomposed as \begin{equation}\label{L2} L^2(\mathbb S^d)=\bigoplus_{i=0}^{\infty}\mathcal H_i, \end{equation} where the orthogonality is endowed with respect to the inner product in $L^2(\mathbb S^d)$: $\langle f, g \rangle = \dis\int_{\mathbb S^d} f \bar{g}$. Let $\mathcal P_{i}(d+1)$ be the space of homogeneous polynomials of degree $i$ in $d+1$ coordinates on $\mathbb S^d$. A homogeneous polynomial is a polynomial whose terms are all monomials of the same degree, e.g., $\mathcal P_4(3)$ includes $xy^3 + x^2yz$. Further, let $ H_{i}(d+1)$ be the space of homogeneous harmonic polynomials of degree $i$ on $\mathbb S^d$, i.e., \begin{equation}\label{eq:harmonic} H_{i}(d+1)=\{P\in \mathcal P_{i}(d+1)|\; \Delta P=0\}. \end{equation} For example, $x^3y + xy^3 - 6xyz^2$ and $x^4 - 6x^2 y^2 + y^4$ are members of $H_4(3)$. Importantly, spherical harmonics theory has established that each eigenspace $\mathcal H_{i}$ in \eqref{L2} is indeed the same space as $ H_i(d+1)$. This implies that any function in $L^2(\mathbb S^d)$ can be approximated by an accumulated direct sum of orthogonal homogeneous harmonic polynomials.
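As a quick sanity check of \eqref{eq:harmonic}, the Laplacian of a polynomial can be computed symbolically. The following self-contained sketch (our own illustration, with a dict-based polynomial representation mapping exponent tuples to coefficients) verifies that $x^3y+xy^3-6xyz^2$ is harmonic while $x^2y^2$, though homogeneous of the same degree, is not:

```python
def laplacian(poly):
    """Euclidean Laplacian of a polynomial in d+1 variables, represented
    as a dict mapping exponent tuples to coefficients."""
    out = {}
    for exps, c in poly.items():
        for i, a in enumerate(exps):
            if a >= 2:                       # d^2/dx_i^2 of x_i^a = a(a-1) x_i^(a-2)
                e = list(exps); e[i] = a - 2; e = tuple(e)
                out[e] = out.get(e, 0) + c * a * (a - 1)
    return {e: c for e, c in out.items() if c != 0}

# x^3 y + x y^3 - 6 x y z^2: homogeneous of degree 4 and harmonic, so in H_4(3)
p = {(3, 1, 0): 1, (1, 3, 0): 1, (1, 1, 2): -6}
assert laplacian(p) == {}
# x^2 y^2 is in P_4(3) but NOT in H_4(3)
q = {(2, 2, 0): 1}
assert laplacian(q) != {}
```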
The following well-known proposition further reveals that the Laplacian constraint in (\ref{eq:harmonic}) is not necessary to characterize the function space on the sphere. \begin{proposition}\label{accudirct} Let $\mathcal P_{m}(d+1)$ be the space of degree $m$ homogeneous polynomials in $d+1$ variables restricted to the unit sphere, and let $\mathcal H_i$ be the $i$-th eigenspace of $L^2(\mathbb S^d)$. Then \[ \mathcal P_{m}(d+1)=\dis\bigoplus_{i=0}^{\floor{m/2}}\mathcal H_{m-2i}, \] where $\floor{\cdot}$ stands for the round-down (floor) integer. \end{proposition} From Proposition \ref{accudirct}, one can see that any $L^2$ function on $\mathbb S^d$ can be approximated by homogeneous polynomials. An important feature of spherical harmonics theory is that it gives reproducing structures on spheres, and we now recall this fact. For the following discussion, we will fix a Laplacian eigenspace $\mathcal H_i$ inside $L^2(\mathbb S^d)$, so $\mathcal H_i$ is a finite dimensional Hilbert space on $\mathbb S^d$; such a restriction to a single piece $\mathcal H_i$ is necessary because the entire Hilbert space $L^2(\mathbb S^d)$ does not have a reproducing kernel, given that the Delta functional on $L^2(\mathbb S^d)$ is \emph{not} a bounded functional\footnote{At first sight, this might seem to contradict the discussion on splines on $2$-dimensional spheres in \cite{wah81}, but a careful reader can find that a finiteness constraint was imposed there, and it was \emph{never} claimed that $L^2(\mathbb S^2)$ is an RKHS. That is, their RKHS on $\mathbb S^2$ is a subspace of $L^2(\mathbb S^2)$.}. \subsection{Zonal Spherical Functions as Reproducing Kernels in $\mathcal H_i$} On each Laplacian eigenspace $\mathcal H_i$ inside $L^2(\mathbb S^d)$ on general $d$-dimensional spheres, we define a linear functional $L_x$ on $\mathcal H_i$, such that for each $Y\in \mathcal H_i$, $L_x(Y)=Y(x)$ for a fixed point $x\in \mathbb S^d$.
General spherical harmonics theory tells us that there exists $k_i(x,t)$ such that: $$ L_x(Y)=Y(x)=\dis\int_{\mathbb S^d} Y(t)k_i(x,t)dt,\ x\in \mathbb S^d; $$ \noindent this function $k_i(x,t)$ is the representing function of the functional $L_x(Y)$. Classical spherical harmonics theory refers to the function $k_i(x,t)$ as the \emph{zonal spherical function}; furthermore, these functions are actually ``reproducing kernels'' inside $\mathcal H_i\subset L^2(\mathbb S^d)$ in the sense of \cite{aron50}. Another way to appreciate spherical harmonics theory is that it tells us that each Laplacian eigenspace $\mathcal H_i\subset L^2(\mathbb S^d)$ is actually a reproducing kernel Hilbert space on $\mathbb S^d$; the special case $d = 2$ was used in \cite{wah81}. Let us recollect some basic facts about zonal spherical functions, for the reader's convenience, in the next proposition. One can find their proofs in almost any modern spherical harmonics reference, in particular in \citet[Chapter IV]{stein71}: \begin{proposition}\label{reprsph} The following properties hold for the zonal spherical function $k_i(x,t)$, which is also the reproducing kernel inside $\mathcal H_i\subset L^2(\mathbb S^d)$ with dimension $a_i$. \begin{itemize} \item[(a)]For a choice of orthonormal basis $\{Y_1, \dots, Y_{a_i}\}$ in $\mathcal H_i$, we can express the kernel as $k_i(x,t)=\dis\sum_{j=1}^{a_i}\overline{Y_{j}(x)}Y_{j}(t)$; moreover, $k_i(x,t)$ does not depend on the choice of basis. \item[(b)]$k_i(x,t)$ is a real-valued function and symmetric, i.e., $k_i(x,t)=k_i(t,x)$. \item[(c)]For any orthogonal matrix $R\in O(d+1)$, we have $k_i(x,t)=k_i(Rx, Rt)$. \item[(d)] $k_i(x,x)=\dis\frac{a_i}{\mathrm{vol}(\mathbb S^d)}$ for any point $x\in \mathbb S^d$. \item[(e)]$k_i(x,t)\leq \dis\frac{a_i}{\mathrm{vol}(\mathbb S^d)}$ for any $x,\ t\in \mathbb S^d$.
\end{itemize} \end{proposition} \begin{remark} \normalfont The above proposition ``\emph{seems}'' obvious from traditional perspectives, as if it could be found in any textbook, so readers with rich experience with RKHS theory might think that we are stating something trivial. However, we want to point out two facts. \begin{itemize} \item [(1)] Function spaces over underlying spaces with different topological structures behave very differently. Spheres are compact with no boundary, and their function spaces carry a Laplacian operator whose eigenspaces are finite dimensional, and these finite dimensional eigenspaces possess reproducing kernel structures. These coincidences are not expected to happen over other general topological spaces. \item[(2)]Relative to the classical topological spaces whose RKHS are used more often, e.g., unit intervals or vector spaces, spheres are more ``exotic'' topological structures (simply connected spaces, but with nontrivial higher homotopy groups), while intervals and vector spaces are contractible with trivial homotopy groups. One way to appreciate spherical harmonics theory is that classical ``naive'' expectations can still hold on spheres. \end{itemize} \end{remark} In the next subsection we discuss the corresponding function space on the compositional domain $\Delta^d$. \subsection{Function Spaces on Compositional Domains}\label{sec:fcomp} With the identification $\Delta^d=\mathbb S^d/\Gamma$, the function space $L^2(\Delta^d)$ can be identified with $L^2(\mathbb S^d/\Gamma)$, i.e., $L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)$. The function space $L^2(\mathbb S^d)$ is well understood by spherical harmonics theory as above, so we want to relate $L^2(\mathbb S^d/\Gamma)$ with $L^2(\mathbb S^d)$ as follows. Notice that a function $h\in L^2(\mathbb S^d/\Gamma)$ is a map from $\mathbb S^d/\Gamma$ to (real or complex) numbers.
Thus a natural associated function $\pi^*(h)\in L^2(\mathbb S^d)$ is given by the following composition of maps: $$ h\circ \pi:\ \ \xymatrix{\mathbb S^d\ar[r]^-{\pi}&\mathbb S^d/\Gamma\ar[r]^{\ \ h}& \mathbb C}. $$ Therefore, the composition $h\circ \pi=\pi^*(h)\in L^2(\mathbb S^d)$ gives rise to a natural embedding of the function space of compositional domains into that of a sphere, $\pi^*:\ L^2(\mathbb S^d/\Gamma)\rightarrow L^2(\mathbb S^d)$. The embedding $\pi^*$ identifies the Hilbert space of compositional domains as a subspace of the Hilbert space of spheres. A natural question is how to characterize the subspace in $L^2(\mathbb S^d)$ that corresponds to functions on compositional domains. The following proposition states that $f\in \im(\pi^*)$ if and only if $f$ is constant on fibers of the projection map $\pi:\ \mathbb S^d\rightarrow \mathbb S^d/\Gamma$, almost everywhere. In other words, $f$ takes the same values on each $\Gamma$-orbit, i.e., on the set of points which are connected to each other by ``sign flippings''. \begin{proposition}\label{compfunprop} The image of the embedding $\pi^*: L^2(\mathbb S^d/\Gamma)\rightarrow L^2(\mathbb S^d)$ consists of functions $f\in L^2(\mathbb S^d)$ such that, up to a measure zero set, $f$ is constant on $\pi^{-1}(x)$ for every $x\in \mathbb S^d/\Gamma$, where $\pi$ is the natural projection $\mathbb S^d\rightarrow \mathbb S^d/\Gamma$. \end{proposition} We call a function $f\in L^2(\mathbb S^d)$ that lies in the image of the embedding $\pi^*$ a \emph{$\Gamma$-invariant function}. Now we construct the contraction map $\pi_{*}: L^2(\mathbb S^d)\rightarrow L^2(\mathbb S^d/\Gamma)$, which descends every function on the sphere to a function on the compositional domain. To construct $\pi_*$, it suffices to associate a $\Gamma$-invariant function to every function in $L^2(\mathbb S^d)$. For a point $z\in \mathbb S^d$ and a reflection $\gamma\in \Gamma$, the point $\gamma(z)$ lies in the set $\orb_z^\Gamma$ defined in (\ref{eq:orbit}).
Starting with a function $f\in L^2(\mathbb S^d)$, we define the associated $\Gamma$-invariant function $f^{\Gamma}$ as follows: \begin{proposition} Let $f$ be a function in $L^2(\mathbb S^d)$. Then the following $f^\Gamma$ \begin{equation}\label{eq:invfun} f^{\Gamma}(z)=\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}f(\gamma(z)), ~~~ z \in \mathbb S^d, \end{equation} is a $\Gamma$-invariant function. \end{proposition} \begin{proof} Each fiber of the projection map $\pi:\ \mathbb S^d\rightarrow \mathbb S^d/\Gamma$ is $\orb_z^\Gamma$ for some $z$ in the fiber. For any other point $y$ in the same fiber as $z$, there exists a reflection $\gamma\in \Gamma$ such that $y=\gamma (z)$. The proposition then follows from the identity $f^{\Gamma}(z)=f^{\Gamma}(\gamma(z))$, which can be easily checked. \end{proof} The contraction $ f\mapsto f^{\Gamma}$ on spheres naturally gives the following map \begin{equation}\label{lowerstar} \pi_*:\ L^2(\mathbb S^d)\rightarrow L^2(\mathbb S^d/\Gamma),\ \text{with}\ f\mapsto f^{\Gamma}. \end{equation} \begin{remark} \normalfont Some readers might argue that each element in an $L^2$ space is a \emph{function class} rather than a function, so in that sense $\pi_*(f)=f^{\Gamma}$ is not well-defined; but note that each element in $L^2(\mathbb S^d)$ can be approximated by polynomials, and $\pi_*$, which is well defined on individual polynomials, induces a well-defined map on function classes. \end{remark} \begin{theorem}\label{invfunsp} The contraction map $\pi_*:\ L^2(\mathbb S^d)\rightarrow L^2(\mathbb S^d/\Gamma)$, as defined in (\ref{lowerstar}), has a section given by $\pi^*$, namely the composition $\pi_*\circ \pi^*$ induces the identity map from $L^2(\mathbb S^d/\Gamma)$ to itself. In particular, the contraction map $\pi_*$ is a surjection.
\end{theorem} \begin{proof} One way to look at the relation of the two maps $\pi^*$ and $\pi_*$ is through the diagram $\xymatrix{L^2(\mathbb S^d/\Gamma)\ar@<.35ex>[r]^-{\pi^*}&L^2(\mathbb S^d)\ar@<.35ex>[l]^-{\pi_*}}$. The image of $\pi^*$ consists of $\Gamma$-invariant functions in $L^2(\mathbb S^d)$. Conversely, given a $\Gamma$-invariant function $g\in L^2(\mathbb S^d)$, the map $g\mapsto g^{\Gamma}$ is the identity, i.e., $g=g^{\Gamma}$; thus the theorem follows. \end{proof} \begin{remark}\label{dreami} \normalfont Theorem \ref{invfunsp} identifies functions on compositional domains as $\Gamma$-invariant functions in $L^2(\mathbb S^d)$. For any function $f\in L^2(\mathbb S^d)$, we can produce the corresponding $\Gamma$-invariant function $f^{\Gamma}$ by (\ref{eq:invfun}). More importantly, we can ``recover'' $L^2(\Delta^d)$ from $L^2(\mathbb S^d)$, without losing any information. This allows us to construct reproducing kernels of $\Delta^d$ from $L^2(\mathbb S^d)$ in Section \ref{sec:rkhsc}. \end{remark} \subsection{ Further Reduction to Homogeneous Polynomials of Even Degrees}\label{redsurg} In this section we provide a further simplification of the homogeneous polynomials in the finite direct sum space $\bigoplus_{i=0}^{m}\mathcal H_i$. Proposition \ref{accudirct} tells us that if $m$ is even, then $\mathcal P_{m}(d+1)=\bigoplus_{i=0}^{m/2}\mathcal H_{2i}$, and that if $m$ is odd then $\mathcal P_{m}(d+1)=\bigoplus_{i=0}^{(m-1)/2}\mathcal H_{2i+1}$, where $\mathcal P_{m}(d+1)$ is the space of degree $m$ homogeneous polynomials in $d+1$ variables. In either case ($m$ even or odd), the eigenspaces appearing in $\mathcal P_{m}(d+1)$ are exactly those $\mathcal H_i$ with $i\leq m$ and $i$ of the same parity as $m$, the top index being $m$ itself. Therefore we can decompose the finite direct sum space $\bigoplus_{i=0}^{m}\mathcal H_i$ into the direct sum of two homogeneous polynomial spaces: $$ \dis\bigoplus_{i=0}^{m}\mathcal H_i=\mathcal P_{m}(d+1)\bigoplus \mathcal P_{m-1}(d+1).
$$ However, we will show that any monomial containing a variable with an odd exponent collapses to zero under $\Gamma$-averaging, so only one piece of the above direct sum of homogeneous polynomial spaces ``survives'' under the contraction map $\pi_*$. This further simplifies the function space, which in turn facilitates easy computation. Specifically, when working with accumulated direct sums $\bigoplus_{i=0}^{m}\mathcal H_i$ on spheres, not every function is a meaningful function on $\Delta^d=\mathbb S^d/\Gamma$; e.g., one can find a nonzero function $f\in \bigoplus_{i=0}^{m}\mathcal H_i$ with $f^{\Gamma}=0$. In fact, the eigenspaces $\mathcal H_m$ with $m$ odd contribute nothing to $L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)$. In other words, the accumulated direct sum $\bigoplus_{i=0}^m\mathcal H_{2i+1}$ is ``killed'' to zero under $\pi_*$, as shown by the following lemma: \begin{lemma}\label{killodd} For every monomial $\prod_{i=1}^{d+1}x_i^{\alpha_i}$ (each $\alpha_i\geq 0$), if there exists $k$ with $\alpha_k$ odd, then the monomial $\prod_{i=1}^{d+1}x_i^{\alpha_i}$ is a shadow function, that is, $(\prod_{i=1}^{d+1}x_i^{\alpha_i})^{\Gamma}=0$. \end{lemma} An important implication of this lemma is that each homogeneous polynomial in $\bigoplus_{i=0}^k\mathcal H_{2i+1}$, being a linear combination of monomials each containing at least one odd exponent, is killed under $\pi_*$. This implies that all ``odd'' pieces in $L^2(\mathbb S^d) = \bigoplus_{i=0}^{\infty} \mathcal H_i$ contribute nothing to $L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)$. Therefore, whenever using spherical harmonics theory to understand function spaces on compositional domains, it suffices to consider only even $i$ for $\mathcal H_{i}$ in $L^2(\mathbb S^d)$.
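The cancellation asserted in Lemma \ref{killodd} is elementary to verify numerically. The following sketch is purely illustrative (the dimension and the exponent vectors are arbitrary choices); it averages monomials over the sign-flip group $\Gamma=\{\pm 1\}^{d+1}$ acting coordinate-wise:

```python
import itertools
import random

def gamma_average(exponents, x):
    """Average the monomial prod_i x_i^{a_i} over the sign-flip group
    Gamma = {+1, -1}^{d+1} acting coordinate-wise on x."""
    n = len(x)
    total = 0.0
    for signs in itertools.product((1.0, -1.0), repeat=n):
        term = 1.0
        for s, xi, a in zip(signs, x, exponents):
            term *= (s * xi) ** a
        total += term
    return total / 2 ** n

random.seed(0)
x = [random.uniform(0.1, 1.0) for _ in range(3)]

# One odd exponent: the Gamma-average collapses to zero (a "shadow" monomial).
odd_avg = gamma_average((2, 1, 2), x)
# All exponents even: the monomial is already Gamma-invariant.
even_avg = gamma_average((2, 0, 2), x)
even_direct = x[0] ** 2 * x[2] ** 2
```

Any exponent vector with at least one odd entry averages to zero, while all-even monomials are fixed by the averaging.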
In summary, the function space on the compositional domain $\Delta^d=\mathbb S^d/\Gamma$ has the following eigenspace decomposition: \begin{equation}\label{compfun} L^2(\Delta^d)=L^2(\mathbb S^d/\Gamma)=\dis\bigoplus_{i=0}^{\infty}\mathcal H_{2i}^{\Gamma}, \end{equation} \noindent where $\mathcal H_{2i}^{\Gamma}:=\{h\in \mathcal H_{2i}:\ h=h^{\Gamma}\}$. \subsection{Reproducing Kernels for Compositional Domains}\label{sec:rkhsc} With the understanding of function spaces on compositional domains as invariant functions on spheres, we are ready to use spherical harmonic theory to construct reproducing kernel structures on compositional domains. \subsubsection{ $\Gamma$-invariant Functionals on $\mathcal H_i$} The main goal of this section is to establish reproducing kernels for compositional data. Inside each Laplacian eigenspace $\mathcal H_i$ in $L^2(\mathbb S^d)$, recall that the $\Gamma$-invariant subspace $\mathcal H_i^{\Gamma}$ can be regarded as a function space on $\Delta^d=\mathbb S^d/\Gamma$, by (\ref{compfun}). To find a candidate reproducing kernel inside $\mathcal H_i^{\Gamma}$, we first identify the representing function for the linear functional $L_z^{\Gamma}$ on $\mathcal H_i$, defined as follows: for any function $Y\in \mathcal H_i$, \[ L_{z}^{\Gamma}(Y)=Y^{\Gamma}(z)=\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}Y(\gamma z), \] for a given $z\in \mathbb S^d$. One can easily see that $L_z^{\Gamma}$ and $L_z$ agree on the subspace $\mathcal H_i^{\Gamma}$ of $\mathcal H_i$, and also that $L_z^{\Gamma}$ can be seen as the composition $L_z^{\Gamma}=L_z\circ\pi_*:\ \mathcal H_i\rightarrow \mathcal H_i^{\Gamma}\rightarrow\mathbb C$. Note that although $L_z^{\Gamma}$ is defined on $\mathcal H_i$, it can actually be seen as a ``delta functional'' on $\mathbb S^d/\Gamma=\Delta^d$.
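The averaging map $Y\mapsto Y^{\Gamma}$ above, the same construction as (\ref{eq:invfun}), is easy to illustrate numerically. In the following sketch an arbitrary test polynomial stands in for a function in $L^2(\mathbb S^2)$, and its $\Gamma$-average is checked to be invariant under a reflection in $\Gamma$:

```python
import itertools
import random

def f(p):
    # An arbitrary non-invariant polynomial standing in for f in L^2(S^2).
    x, y, z = p
    return x ** 3 + 2.0 * x * y + z ** 2 - 0.5 * y

def f_gamma(p):
    """The Gamma-invariant average of f over the sign-flip group on R^3."""
    total = 0.0
    for signs in itertools.product((1.0, -1.0), repeat=3):
        total += f([s * c for s, c in zip(signs, p)])
    return total / 8.0

random.seed(1)
v = [random.gauss(0.0, 1.0) for _ in range(3)]
norm = sum(c * c for c in v) ** 0.5
z = [c / norm for c in v]            # a point on S^2
gz = [z[0], -z[1], z[2]]             # the image of z under one reflection in Gamma

invariance_gap = abs(f_gamma(z) - f_gamma(gz))  # should vanish
```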
To find the representing function for $L_z^{\Gamma}$, we will use zonal spherical functions. Let $k_i(\cdot, \cdot)$ be the reproducing kernel of the eigenspace $\mathcal H_i$. Define the ``compositional'' kernel $k_i^{\Gamma}(\cdot, \cdot)$ for $\mathcal H_i$ as \begin{equation}\label{fake} k_i^{\Gamma}(x, y)=\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma x, y), \ \ \forall x, y\in \mathbb S^d, \end{equation} \noindent from which it is straightforward to check, directly from the definitions, that $k_i^\Gamma(z,\cdot)$ represents linear functionals of the form $L_{z}^{\Gamma}$. \begin{remark} \normalfont The above definition of ``compositional kernels'' in (\ref{fake}) is not merely a trick to get rid of the ``redundant points'' on spheres. It is inspired by the notion of ``orbital integrals'' in analysis and geometry. In our case, the ``integral'' is a discrete version, because the ``compact subgroup'' in our situation is replaced by the finite discrete reflection group $\Gamma$. In fact, this kind of ``discrete orbital integral'' construction is not new in statistical learning theory; e.g., \cite{equimatr} also used an ``orbital integral'' type of construction to study equivariant matrix valued kernels. \end{remark} At first sight, a compositional kernel is not obviously symmetric, because we only ``average'' over the group orbit in the first variable of the function $k_i(x,y)$. However, since $k_i(x,y)$ is both symmetric and orthogonally invariant by Proposition \ref{reprsph}, compositional kernels are, perhaps counter-intuitively, symmetric: \begin{proposition}\label{sym} Compositional kernels are symmetric, namely $k_i^{\Gamma}(x, y)= k_i^{\Gamma}(y, x)$. \end{proposition} \begin{proof} Recall that $k_i(x,y)=k_i(y,x)$ and that $k_i(G x,G y)=k_i(x,y)$ for any orthogonal matrix $G$.
Notice that every reflection $\gamma\in \Gamma$ can be realized as an orthogonal matrix; then we have $$ \begin{array}{rcl} k_i^{\Gamma}(x, y)&=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma x, y)\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(y,\gamma x)=\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma^{-1}y,\gamma^{-1}(\gamma x))\ \ \\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma^{-1}y,x)\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma}k_i(\gamma y,x)\\ &=& k_i^{\Gamma}(y, x). \end{array} $$ \end{proof} Recall that $\mathcal H_i^{\Gamma}$ is the space of $\Gamma$-invariant functions inside $\mathcal H_i$, and by (\ref{compfun}), $\mathcal H_i^{\Gamma}$ is the $i$-th summand of the compositional function space $L^2(\Delta^d)$. A na\"ive candidate for the reproducing kernel of $\mathcal H_i^{\Gamma}$, denoted $w_i(x,y)$, might be the spherical reproducing kernel $k_i(x,y)$, but $k_i(x,y)$ is not $\Gamma$-invariant. It turns out that the compositional kernels are reproducing for all $\Gamma$-invariant functions in $\mathcal H_i$, while being $\Gamma$-invariant in both arguments. \begin{theorem}\label{reprcomp} Inside $\mathcal H_i$, the compositional kernel $k_i^{\Gamma}(x, y)$ is $\Gamma$-invariant in both arguments $x$ and $y$, and moreover $k_i^{\Gamma}(x, y)=w_i(x,y)$, i.e., the compositional kernel is the reproducing kernel for $\mathcal H^{\Gamma}_i$. \end{theorem} \begin{proof} First, by definition, $k_i^{\Gamma}(x, y)$ is $\Gamma$-invariant in the first argument $x$; by the symmetry of $k_i^{\Gamma}(x, y)$ established in Proposition \ref{sym}, it is then also $\Gamma$-invariant in the second argument $y$, hence the compositional kernel $k_i^{\Gamma}(x, y)$ is a kernel inside $\mathcal H^{\Gamma}_i$. Second, let us prove the reproducing property of $k_i^{\Gamma}(x, y)$.
For any $\Gamma$-invariant function $f\in \mathcal H_i^{\Gamma}\subset \mathcal H_i$, $$ \begin{array}{rcl} <f(t),k_i^{\Gamma}(x, t)> &=& <f(t),\dis\sum_{\gamma\in \Gamma}\frac{1}{|\Gamma|}k_i(\gamma x, t)>\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}<f(t),k_i(\gamma x, t)>\\ &=&\dis\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}f(\gamma x)=\dis\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}f(x)\ \ (f\ \text{is}\ \Gamma\text{-invariant})\\ &=&f(x). \end{array} $$ \end{proof} \begin{remark} \normalfont Theorem \ref{reprcomp} shows that a compositional kernel is indeed the reproducing kernel for functions inside $\mathcal H_i^\Gamma$. Although the compositional kernel $k_i^{\Gamma}(x, y)$ is symmetric, as proved in Proposition \ref{sym}, we will still use $w_i(x,y)$ to denote $k_i^{\Gamma}(x, y)$, because the notation $w_i(x,y)$ is visually more symmetric than the notation for compositional kernels. \end{remark} \subsubsection{Compositional RKHS and Spaces of Homogeneous Polynomials} Recall that by Proposition \ref{accudirct}, the direct sum of an even (resp. odd) number of eigenspaces can be expressed as the space of homogeneous polynomials of a fixed degree. Further recall that the direct sum decomposition $L^2(\mathbb S^d)=\bigoplus_{i=0}^{\infty}\mathcal H_i$ is orthogonal, and so is the direct sum $L^2(\Delta^d)=\bigoplus_{i=0}^{\infty}\mathcal H_i^\Gamma$. By the orthogonality between eigenspaces, the reproducing kernel for the finite direct sum $\bigoplus_{i=0}^m\mathcal H^{\Gamma}_i$ is naturally the sum $\sum_{i=0}^m w_i$. Note that by Lemma \ref{killodd}, it suffices to consider only the even eigenspaces $\mathcal H_{2i}$.
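The symmetry of Proposition \ref{sym} can also be checked numerically. In the sketch below, a generic orthogonally invariant kernel (a hypothetical stand-in for $k_i$, chosen only because it depends on $x\cdot y$ alone) is averaged over the $\Gamma$-orbit of the first argument, and the resulting compositional kernel is verified to be symmetric:

```python
import itertools
import random

def k(x, y):
    # Hypothetical stand-in for k_i: symmetric and orthogonally invariant,
    # because it depends on x and y only through the inner product x . y.
    dot = sum(a * b for a, b in zip(x, y))
    return (1.0 + dot) ** 2

def k_gamma(x, y):
    """Compositional kernel: average k over the Gamma-orbit of the first argument only."""
    n = len(x)
    total = 0.0
    for signs in itertools.product((1.0, -1.0), repeat=n):
        gx = [s * c for s, c in zip(signs, x)]
        total += k(gx, y)
    return total / 2 ** n

def rand_sphere(rng):
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    norm = sum(c * c for c in v) ** 0.5
    return [c / norm for c in v]

rng = random.Random(2)
x, y = rand_sphere(rng), rand_sphere(rng)
symmetry_gap = abs(k_gamma(x, y) - k_gamma(y, x))  # zero despite one-sided averaging
```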
Finally, we give a formal definition of ``the degree $m$ reproducing kernel Hilbert space'' on $\Delta^d$, consisting of degree $2m$ homogeneous polynomials: \begin{definition}\label{lthrkhs} Let $w_i$ be the reproducing kernel for $\Gamma$-invariant functions in the $i$th eigenspace $\mathcal H_i \subset L^2(\mathbb S^d)$. The degree $m$ compositional reproducing kernel Hilbert space is defined to be the finite direct sum $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$, and the reproducing kernel for the degree $m$ compositional reproducing kernel Hilbert space is \begin{equation}\label{mcompker} \omega_m(\cdot, \cdot) = \sum_{i=0}^m w_{2i}(\cdot,\cdot). \end{equation} \noindent Thus the degree $m$ RKHS for the compositional domain is the pair $\big(\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}, \omega_{m} \big)$. \end{definition} Recall that the direct sum $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ can be identified with a subspace of $\bigoplus_{i=0}^{m}\mathcal H_{2i}$, which is isomorphic to the space of degree $2m$ homogeneous polynomials, so each function in $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ can be written as a degree $2m$ homogeneous polynomial, including the reproducing kernel $\omega_m(x,\cdot)$, although this is not obvious from (\ref{mcompker}). Notice that for a point $(x_1,x_2,\dots, x_{d+1})\in \mathbb S^d$ we have $\sum_{i=1}^{d+1}x_i^2=1$, so one can always use this identity to turn each element of $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ into a homogeneous polynomial. For example, $x^2+1$ is not a homogeneous polynomial, but each point $(x,y,z)\in \mathbb S^2$ satisfies $x^2+y^2+z^2=1$, so $x^2+1=x^2+x^2+y^2+z^2=2x^2+y^2+z^2$, which is a homogeneous polynomial on the sphere $\mathbb S^2$. In fact, we can say something more about $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$.
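The homogenization trick above is easy to check on sample points: on $\mathbb S^2$, the non-homogeneous $x^2+1$ agrees with the homogeneous $2x^2+y^2+z^2$:

```python
import random

rng = random.Random(3)
v = [rng.gauss(0.0, 1.0) for _ in range(3)]
norm = sum(c * c for c in v) ** 0.5
x, y, z = (c / norm for c in v)                # (x, y, z) lies on S^2

inhomogeneous = x ** 2 + 1.0                   # not homogeneous as a polynomial on R^3
homogenized = 2.0 * x ** 2 + y ** 2 + z ** 2   # equals it on the sphere, via x^2+y^2+z^2 = 1
gap = abs(inhomogeneous - homogenized)
```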
Recall that Lemma \ref{killodd} ``killed'' the contributions from the ``odd pieces'' $\mathcal H_{2k+1}$ under the contraction map $\pi_*:\ L^2(\mathbb S^d)\rightarrow L^2(\Delta^d)$. However, even inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$, only a subspace can be identified with a compositional function space, namely, the $\Gamma$-invariant homogeneous polynomials. The following proposition characterizes which homogeneous polynomials inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ come from the subspace $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$: \begin{proposition}\label{mpolynomial} Given any element $\theta\in \bigoplus_{i=0}^m\mathcal H^{\Gamma}_{2i}\subset \bigoplus_{i=0}^m\mathcal H_{2i}\subset L^2(\mathbb S^d)$, there exists a degree $m$ homogeneous polynomial $p_m$ such that \begin{equation}\label{mplynomial} \theta(x_1, x_2,\dots, x_{d+1})=p_m(x_1^2, x_2^2,\cdots, x_{d+1}^{2}). \end{equation} \end{proposition} \begin{proof} Note that $\theta$ is a degree $2m$ homogeneous $\Gamma$-invariant polynomial, so each monomial in $\theta$ has the form $\prod_{i=1}^{d+1}x_i^{a_i}$ with $\sum_{i=1}^{d+1}a_i=2m$. Suppose $\theta$ contained, with nonzero coefficient, a monomial $\prod_{i=1}^{d+1}x_i^{a_i}$ such that $a_i$ is odd for some $1\leq i\leq d+1$. Since $\theta$ is $\Gamma$-invariant, i.e., $\theta=\theta^{\Gamma}$, applying the $\Gamma$-average termwise sends this monomial to $(\prod_{i=1}^{d+1}x_i^{a_i})^{\Gamma}$, which is zero by Lemma \ref{killodd}; hence no such monomial can occur. Thus $\theta$ is a linear combination of monomials of the form $\prod_{i=1}^{d+1}x_i^{a_i}=\prod_{i=1}^{d+1}(x_i^2)^{a_i/2}$ with each $a_i$ even and $\sum_i a_i/2=m$, and the proposition follows.
\end{proof} Recall that the degree $m$ compositional RKHS is $\big(\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}, \omega_{m} \big)$ from Definition \ref{lthrkhs}, and that $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ consists of degree $2m$ homogeneous polynomials, of which $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ is a subspace. Proposition \ref{mpolynomial} gives a concrete description of the subspace $\bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ as the degree $m$ homogeneous polynomials in the squared variables. \section{Applications of Compositional Reproducing Kernels}\label{sec:app} The availability of compositional reproducing kernels opens the door to many statistical/machine learning techniques for compositional data analysis. Here we present two applications, as an initial demonstration of the influence of RKHS theory on compositional data analysis. The first application is the representer theorem, which is motivated by newly developed kernel-based machine learning, especially the rich theory of vector valued regression \citep{vecfun, vecman}. The second is the construction of exponential families on compositional domains, whose parameters are functions in the compositional RKHS. To the best of the authors' knowledge, these are the first class of nontrivial examples of explicit distributions on compositional domains with \emph{non-vanishing} densities on the boundary. \subsection{Compositional Representer Theorems} Beyond their successful applications in traditional spline models, representer theorems are increasingly relevant due to new kernel techniques in machine learning. In this paper we consider minimal norm interpolation and least squares regularization. Regularization is especially important in settings such as structured prediction, multi-task learning, multi-label classification and related themes that attempt to exploit output structure.
A common theme in the above-mentioned contexts is non-parametric estimation of a vector-valued function $f:\ \mathcal X\rightarrow \mathcal Y$ between a structured input space $\mathcal X$ and a structured output space $\mathcal Y$. An important framework adopted in those analyses is the ``vector-valued reproducing kernel Hilbert space'' of \cite{vecfun}. Unsurprisingly, representer theorems are not only necessary, but also call for further generalizations in various contexts: \begin{itemize} \item [(i)] In classical spline models, the most frequently used representer theorems concern scalar valued kernels. Beyond the scenario $f:\ \mathcal X\rightarrow \mathcal Y$ in the manifold regularization context mentioned above, where vector valued representer theorems are needed, higher tensor valued kernels and their corresponding representer theorems are also desirable. In \cite{equimatr}, matrix valued kernels and their representer theorems are studied, with applications in image processing. \item[(ii)] Another related application lies in the popular kernel mean embedding theory, in particular conditional mean embedding. Conditional mean embedding theory essentially gives an operator from one RKHS to another \citep{condmean}. In order to learn such operators, vector-valued regression and the corresponding representer theorems are used. \end{itemize} In the vector-valued regression framework, an important assumption discussed in representer theorems is the \emph{linear independence} condition \citep{vecfun}. As our construction of the compositional RKHS is based on finite dimensional spaces of polynomials, the linear independence condition is not automatically satisfied, so we address this problem in this paper. Instead of dealing with vector-valued kernels, we focus on the special case of scalar valued (reproducing) kernels, in which the issue can already be clearly seen.
\subsubsection{Linear Independence of Compositional Reproducing Kernels}\label{twist} The compositional RKHS constructed in Section \ref{sec:rkhs} takes the form $\big(\bigoplus_{i=0}^{m}\mathcal H_{2i}^\Gamma, \omega_{m} \big)$, indexed by $m$. Given the finite dimensional nature of the compositional RKHS, it is not even clear whether different points yield different functions $\omega_{m}(x_i, \cdot)$ inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}^\Gamma$. We give a positive answer when $m$ is high enough. Given a set of distinct compositional data points $\{x_i\}_{i=1}^n\subset \Delta^d$, we will show that the corresponding set of reproducing functions $\{\omega_{m}(x_i, \cdot)\}_{i=1}^n$ forms a linearly independent set inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}^\Gamma$ if $m$ is high enough. \begin{theorem}\label{abundencelm} Let $\{x_i\}_{i=1}^n$ be distinct data points on a compositional domain $\Delta^d$. Then there exists a positive integer $M$ such that for any $m>M$, the set of functions $\{\omega_{m}(x_i, \cdot)\}_{i=1}^n$ is a linearly independent set in $\bigoplus_{i=0}^{m}\mathcal H_{2i}^\Gamma$. \end{theorem} \begin{proof} The quotient map $c_\Delta: \mathbb S^d\rightarrow \Delta^d$ factors through a projective space, i.e., $c_\Delta:\ \mathbb S^d\rightarrow\mathbb P^d\rightarrow \Delta^d$. The main idea is to prove a stronger statement: distinct data points in $\mathbb P^d$ give linearly independent \emph{projective kernels} for large enough $m$, where projective kernels are reproducing kernels on $\mathbb P^d$ whose definition is given in Section \ref{abundprf}. We then construct two vector subspaces $V_1^{m}$ and $V_2^{m}$ and a linear map $h_m$ from $V_1^{m}$ to $V_2^{m}$. The key trick is that the matrix representing the linear map $h_m$ becomes diagonally dominant when $m$ is large enough, which forces the spanning sets of both $V_1^{m}$ and $V_2^{m}$ to be linearly independent.
More details of the proof are given in Section \ref{abundprf}. \end{proof} In the proof of Theorem \ref{abundencelm}, we make use of the homogeneous polynomials $(y_i\cdot {t})^{2m}$, which do \emph{not} live inside a single piece $\mathcal H_{2i}$; this is why we use the direct sum space $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ in our construction of the RKHS. One might wonder whether the same argument works without projective kernels; the issue is that the matrix might then have $\pm 1$ off-diagonal entries, so it would fail to be diagonally dominant no matter how large $m$ grows. We reduce to the linear independence of projective kernels for distinct points because reproducing kernels for distinct compositional data points are linear combinations of distinct projective kernels. In this way, each off-diagonal term is a power of the inner product of two vectors that are neither antipodal nor identical, so the $m$-th powers of the off-diagonal terms go to zero as $m$ increases. Another consequence of Theorem \ref{abundencelm} is that $\omega_{m}(x_i,\cdot)\neq \omega_{m}(x_j, \cdot)$ whenever $i\neq j$, provided $m$ is large enough. Not only does a large enough $m$ separate points at the level of reproducing kernels, it also gives each data point its ``own dimension.'' \subsubsection{Minimal Norm Interpolation and Least Squares Regularization}\label{represent} Once linear independence is established in Theorem \ref{abundencelm}, it is an easy corollary to establish the representer theorems for minimal norm interpolation and least squares regularization. Nothing here is new from the point of view of general RKHS theory, but we include these theorems and proofs for completeness. Again, we focus on scalar-valued (reproducing) kernels and functions, instead of vector-valued kernels and regressions.
However, Theorem \ref{abundencelm} sheds important light on linear independence issues, and interested readers can generalize these compositional representer theorems to vector-valued cases by following \cite{vecfun}. The first representer theorem we provide is a solution to the minimal norm interpolation problem: for a fixed set of distinct points $\{x_i\}_{i=1}^n$ in $\Delta^d$ and a set of numbers $y=\{y_i\in \mathbb R\}_{i=1}^n$, let $I_y^{m}$ be the set of functions that interpolate the data \[ I_y^m = \{f\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}:\ f(x_i)=y_i\}, \] and our goal is to find $f_0$ with minimum norm, i.e., \[ \norm{f_0}=\inf\{\norm{f}, f\in I_y^{m}\}.\] \begin{theorem}\label{minorm} Choose $m$ large enough that the reproducing kernels $\{\omega_m(x_i,t)\}_{i=1}^n$ are linearly independent. Then the unique solution of the minimal norm interpolation problem $\min\{\norm{f},f\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}:\ f(x_i)=y_i\}$ is given by the linear combination of the kernels: $$ f_0(t)=\dis\sum_{i=1}^nc_i \; \omega_{m}(x_i,t) $$ where $\{c_i\}_{i=1}^n$ is the unique solution of the following system of linear equations: $$ \dis\sum_{j=1}^n\omega_{m}(x_i,x_j)c_j=y_i, \ \ 1\leq i\leq n. $$ \end{theorem} \begin{proof} For any other $f$ in $I_y^{m}$, define $g=f-f_0$. By considering the decomposition $\norm{f}^2=\norm{g+f_0}^2=\norm{g}^2+2<f_0,g>+\norm{f_0}^2$, one can argue that the cross term $<f_0,g>=0$. The details can be found in Section \ref{representerproofs}. We point out that the linear independence of the reproducing kernels guarantees the existence and uniqueness of $f_0$. \end{proof} The second representer theorem is for the more realistic scenario with $\ell_2$ regularization, which has the following objective: \begin{equation}\label{l2obj} \sum_{i=1}^n \left ( f(x_i)-y_i \right )^2+\mu\norm{f}^2.
\end{equation} The goal is to find the $\Gamma$-invariant function $f_{\mu}\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ that minimizes (\ref{l2obj}). The solution to this problem is provided by the following representer theorem: \begin{theorem}\label{regularization} For a set of distinct compositional data points $\{x_i\}_{i=1}^n$, choose $m$ large enough that the reproducing kernels $\{\omega_{m}(x_i, t)\}_{i=1}^n$ are linearly independent. Then the solution to (\ref{l2obj}) is given by \[ f_{\mu}(t) =\sum_{i=1}^n c_i \; \omega_{m}(x_i,t), \] where $\{c_i\}_{i=1}^n$ is the solution of the following system of linear equations: \[ \mu c_i+\sum_{j=1}^n \omega_{m}(x_i,x_j)c_j=y_i,\ \ 1\leq i\leq n. \] \end{theorem} \begin{proof} The details of this proof can be found in Section \ref{representerproofs}, but we point out how the linear independence condition plays a role here. In the middle of the proof we need to show that $\mu f_{\mu}(t)=\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i)) \omega_{m}(x_i,t)\big]$, where $f_{\mu}(t) =\sum_{i=1}^n \omega_{m}(x_i,t)c_i$. We use the linear independence established in Theorem \ref{abundencelm} to show the equivalence between this system of linear equations in $\{c_i\}_{i=1}^n$ and the one given in the theorem. \end{proof} \subsection{Compositional Exponential Family} With the construction of the RKHS in hand, one can produce exponential families using the technique developed in \cite{cs06}. Recall that for a function space $\mathcal H$ with inner product $<\cdot,\cdot>$ on a general topological space $\mathcal X$, whose reproducing kernel is given by $k(x, \cdot)$, the exponential family density $p(x, \theta)$ with parameter $\theta\in \mathcal H$ is given by: \[ p(x, \theta)=\exp\{<\theta(\cdot),k(x,\cdot)>-g(\theta) \},\ \] where $g(\theta)=\log \dis\int_{\mathcal{X}}\exp\big(<\theta(\cdot),k(x,\cdot) >\big) dx$.
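The two representer theorems above reduce fitting to an $n\times n$ linear system in the Gram matrix $\omega_m(x_i,x_j)$. The sketch below illustrates the minimal norm interpolation system with a hypothetical stand-in kernel in squared coordinates (\emph{not} the actual $\omega_m$, which requires spherical harmonics):

```python
import random

def omega(x, y):
    # Hypothetical stand-in for omega_m: a symmetric, Gamma-invariant polynomial
    # kernel in the squared coordinates (NOT the spherical-harmonic kernel).
    dot_sq = sum((a * a) * (b * b) for a, b in zip(x, y))
    return (1.0 + dot_sq) ** 4

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    sol = [0.0] * n
    for r in range(n - 1, -1, -1):
        sol[r] = (M[r][n] - sum(M[r][j] * sol[j] for j in range(r + 1, n))) / M[r][r]
    return sol

def rand_sphere(rng):
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    norm = sum(u * u for u in v) ** 0.5
    return [u / norm for u in v]

rng = random.Random(4)
pts = [rand_sphere(rng) for _ in range(4)]     # distinct data points
ys = [1.0, -0.5, 2.0, 0.3]

# Minimal norm interpolant: solve the Gram system W c = y, then
# f0(t) = sum_i c_i * omega(x_i, t) interpolates the data.
W = [[omega(p, q) for q in pts] for p in pts]
c = solve(W, ys)

def f0(t):
    return sum(ci * omega(p, t) for ci, p in zip(c, pts))

residual = max(abs(f0(p) - yi) for p, yi in zip(pts, ys))
```

Replacing the system $Wc=y$ by $(\mu I + W)c=y$ in the same sketch gives the coefficients of the regularized solution.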
For compositional data we define the density of the $m$th degree exponential family as \begin{equation}\label{expfamily} p_{m}(x, \theta)=\exp\left\{<\theta(\cdot),\omega_{m}(x,\cdot)>-g(\theta) \right \},\quad x \in \mathbb S^d/\Gamma, \end{equation} where $\theta\in \bigoplus_{i=0}^{m}\mathcal H^{\Gamma}_{2i}$ and $g(\theta)=\log \int_{\mathbb S^d/\Gamma}\exp(<\theta(\cdot),\omega_{m}(x,\cdot) >) dx$. Note that this density can be made more explicit by using homogeneous polynomials. Recall that any function in $\bigoplus_{i=0}^m\mathcal H_{2i}^{\Gamma}$ can be written as a degree $m$ homogeneous polynomial in the \emph{squared} variables by Proposition \ref{mpolynomial}. Thus the density in (\ref{expfamily}) simplifies to the following form: for $x=(x_1,\dots, x_{d+1})\in \mathbb S^{d}_{\geq 0}$, \begin{equation}\label{mpoly} p_{m}(x,\theta)=\exp\{s_{m}(x_1^2, x_2^2, \dots, x_{d+1}^2; \theta)-g(\theta)\}, \end{equation} where $s_m$ is a polynomial in the squared variables $x_i^2$ whose coefficients are determined by $\theta$. Note that $s_m$ is invariant under ``sign-flippings'', so the normalizing constant can be computed via integration over the entire sphere as follows: $$ g(\theta)=\log\int_{\mathbb S^d/\Gamma}\exp(s_m)dx=\log\Big(\frac{1}{|\Gamma|}\int_{\mathbb S^d}\exp(s_m)dx\Big). $$ Figure \ref{fig:expfamily} displays three example densities from the compositional exponential family. The three densities respectively have the following $\theta$: \begin{eqnarray*} \theta_1 &=& -2 x_1^4 -2 x_2^4 -3 x_3^4 + 9 x_1^2 x_2^2 + 9 x_1^2 x_3^2 -2 x_2^2 x_3^2,\\ \theta_2 &=& - x_1^4 - x_2^4 - x_3^4 - x_1^2 x_2^2 - x_1^2 x_3^2 - x_2^2 x_3^2,\\ \theta_3 &=& - 3 x_1^4 - 2 x_2^4 - x_3^4 +9 x_1^2 x_2^2 - 5 x_1^2 x_3^2 - 5 x_2^2 x_3^2. \end{eqnarray*} The various shapes of these densities imply that the compositional exponential family can model data with a wide range of locations and correlation structures.
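As a purely illustrative sketch (not the authors' estimation procedure), the density (\ref{mpoly}) for $\theta_1$ on $\mathbb S^2$ can be evaluated by estimating the normalizing constant with plain Monte Carlo over the whole sphere, using $g(\theta)=\log\frac{1}{|\Gamma|}\int_{\mathbb S^d}\exp(s_m)\,dx$ with $|\Gamma|=2^3=8$:

```python
import math
import random

def s_theta1(x1, x2, x3):
    # The polynomial theta_1 from the text, expressed in the squared variables.
    a, b, c = x1 * x1, x2 * x2, x3 * x3
    return (-2.0 * a * a - 2.0 * b * b - 3.0 * c * c
            + 9.0 * a * b + 9.0 * a * c - 2.0 * b * c)

def rand_sphere(rng):
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    norm = sum(u * u for u in v) ** 0.5
    return [u / norm for u in v]

rng = random.Random(5)

# exp(g(theta)) = (1/|Gamma|) * integral over S^2 of exp(s); |Gamma| = 8.
# Estimate the integral by uniform Monte Carlo sampling (surface area 4*pi).
n = 20000
mc_mean = sum(math.exp(s_theta1(*rand_sphere(rng))) for _ in range(n)) / n
normalizer = 4.0 * math.pi * mc_mean / 8.0     # estimate of exp(g(theta_1))

def density(x):
    return math.exp(s_theta1(*x)) / normalizer

p_bary = density([1.0 / 3 ** 0.5] * 3)         # density at the barycenter direction
```

The sign-flip invariance of $s_m$, which justifies integrating over the whole sphere, is immediate since only squared variables appear.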
\begin{figure}[h] \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{expfamily1.png} \caption{$p_4(x,\theta_1)$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{expfamily2.png} \caption{$p_4(x,\theta_2)$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{expfamily3.png} \caption{$p_4(x,\theta_3)$} \end{subfigure} \caption{Three example densities from the compositional exponential family. See the text for the specification of the parameters $\theta_1, \theta_2$, and $\theta_3$. }\label{fig:expfamily} \end{figure} Further investigation is needed on the estimation of the parameters of the compositional exponential model (\ref{mpoly}), which we suggest as a future direction of research. A natural starting point is maximum likelihood estimation and a regression-based method such as the one discussed by \cite{expdirect}. \section{Discussion}\label{sec:conclude} A main contribution of this work is the use of projective and spherical geometries to reinterpret compositional domains, which allows us to construct reproducing kernels for compositional data points using spherical harmonics under group actions. With the rapid development of kernel techniques (especially the kernel mean embedding philosophy) in machine learning theory, this work makes it possible to introduce reproducing kernel theories to compositional data analysis. Consider, for example, the mean estimation problem for compositional data. Under the kernel mean embedding framework surveyed by \citet{kermean}, one can focus on the kernel mean $\mathbb E[k(X,\cdot)]$ in the function space, rather than a physical mean in the compositional domain. The latter is known to be difficult even to define properly \citep{sphkentreg, sw11}. On the other hand, the kernel mean is endowed with the flexibility and linear structure of the function space.
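The empirical version of the kernel mean is a simple average of kernel sections; a minimal sketch, with a hypothetical $\Gamma$-invariant stand-in kernel in the squared coordinates:

```python
import random

def k(x, y):
    # Hypothetical Gamma-invariant stand-in kernel in the squared coordinates.
    return (1.0 + sum((a * a) * (b * b) for a, b in zip(x, y))) ** 2

def rand_sphere(rng):
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    norm = sum(u * u for u in v) ** 0.5
    return [u / norm for u in v]

rng = random.Random(6)
sample = [rand_sphere(rng) for _ in range(50)]

# Empirical kernel mean: t -> (1/n) sum_i k(x_i, t), the sample version of
# E[k(X, .)], which lives in the RKHS rather than in the compositional domain.
def kernel_mean(t):
    return sum(k(x, t) for x in sample) / len(sample)

val = kernel_mean(rand_sphere(rng))
```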
Although inspired by kernel mean embedding techniques, we did not address the computation of the kernel mean $\mathbb E[k(X,\cdot)]$ and the cross-covariance operator as replacements for the means and variance-covariances of traditional multivariate analysis. In forthcoming work, the authors will develop further techniques to return to the issue of kernel means and cross-covariance operators for compositional exponential models, by applying deeper functional analysis techniques. Although we only construct reproducing kernels for compositional data, this does not mean that ``higher tensors'' are abandoned from our consideration. In fact, higher-tensor valued reproducing kernels are also included in kernel techniques, with applications in manifold regularization \citep{vecman} and shape analysis \citep{matrixvaluedker}. These approaches to higher-tensor valued reproducing kernels indicate further possibilities for regression frameworks $f:\ \mathcal X\rightarrow \mathcal Y$ between exotic spaces, with both the source $\mathcal X$ and the target $\mathcal Y$ non-linear in nature. This extends the intuition of multivariate analysis further into nonlinear contexts, and compositional domains (traditionally modeled by an ``Aitchison simplex'') form an interesting class of examples that can be treated non-linearly. \appendix \section{Supplementary Proofs} \subsection{ Proof of Central Limit Theorems on Integral Squared Errors (ISE) in Section \ref{compdensec}}\label{cltprf} \begin{assumption}\label{kband} For all kernel density estimators and bandwidth parameters in this paper, we assume the following: \begin{itemize} \item[{\bf{H1}}] The kernel function $K:[0,\infty)\rightarrow [0,\infty)$ is continuous and such that both $\lambda_d(K)$ and $\lambda_d(K^2)$ are bounded for $d\geq 1$, where $\lambda_d(K)=2^{d/2-1}\mathrm{vol}(S^d)\dis\int_{0}^{\infty}K(r)r^{d/2-1}dr$.
\item[\bf{H2}] If a function $f$ on $\mathbb S^d\subset \mathbb R^{d+1}$ is extended to the whole of $\mathbb R^{d+1}\setminus\{0\}$ via $f(x)=f(x/\norm{x})$, then the extended function $f$ has its first three derivatives bounded. \item[\bf{H3}] The bandwidth parameter satisfies $h_n\rightarrow 0$ and $nh_n^d\rightarrow\infty$ as $n\rightarrow\infty$. \end{itemize} \end{assumption} Let $f$ be the function extended from $\mathbb S^d$ to $\mathbb R^{d+1}\setminus\{0\}$ via $f(x)=f(x/\norm{x})$, and let $$ \phi (f,x)= -x^T \nabla f(x)+d^{-1}(\nabla^2 f(x)-x^T (\mathcal H_x f) x) = d^{-1}\mathrm{tr} [\mathcal H_xf(x)], $$ where $\mathcal H_x f$ is the Hessian matrix of $f$ at the point $x$. The term $b_d(K)$ in the statement of Theorem \ref{ourclt} is defined to be: \[ b_d(K)=\dis\frac{\dis\int_0^{\infty}K(r)r^{d/2}dr}{\dis\int_0^{\infty}K(r)r^{d/2-1}dr}. \] The term $\phi(h_n)$ in the statement of Theorem \ref{ourclt} is defined to be: \[ \phi(h_n)=\dis\frac{4b_d(K)^2}{d^2}\sigma_x^2h_n^4. \] Proof of Theorem \ref{ourclt}: \begin{proof} The strategy of \cite{chineseclt} in the directional set-up follows that of \cite{hallclt}, whose key idea is to give asymptotic bounds for degenerate U-statistics, so that one can use martingale theory to derive the central limit theorem. The step where the finite support condition was used in \cite{chineseclt} is in proving the asymptotic bound $E(G_n^2(X_1,X_2))=O(h^{7d})$, where $G_n(x,y)=E[H_n(X,x)H_n(X,y)]$ with $H_n(x,y)=\dis\int_{\mathbb S^d}K_n(z,x)K_n(z,y)dz$ and the centered kernel $K_n(x,y)=K[(1-x'y)/h^2]-E\{K[(1-x'X)/h^2]\}$. During that proof, they show that the term \[ \begin{array}{rcl} T_1&=&\dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)dy\times\\ &&\left\{ \dis\int_{\mathbb S^d}f(z)dz\dis\int_{\mathbb S^d}K[(1-u'x)/h^2]K[(1-u'z)/h^2]du \cdot \int_{\mathbb S^d}K[(1-u'y)/h^2]K[(1-u'z)/h^2]du \right\}^2, \end{array} \] satisfies $T_1=O(h^{7d})$.
During this step, the finite support condition was used substantially to give an upper bound for $T_1$. The idea for avoiding this assumption is based on an observation in \cite{portclt}, which concerns only the directional-linear CLT, whose result cannot be directly applied to the purely directional case. Based on the method provided in Lemma 10 of \cite{portclt}, one can deduce the following asymptotic equivalence: $$ \dis\int_{\mathbb S^d}K^j(\frac{1-x^Ty}{h^2})\phi^i(y)dy\sim h^d\lambda_d(K^j)\phi^i(x), $$ where $\lambda_d(K^j)=2^{d/2-1}\mathrm{vol}(\mathbb S^{d-1})\dis\int_{0}^{\infty}K^j(r)r^{d/2-1}dr$. As a special case we have: $$ \dis\int_{\mathbb S^d}K^2(\dis\frac{1-x^Ty}{h^2})dy\sim h^d\lambda_d(K^2)C,\ \text{with}\ C\ \text{a positive constant}. $$ Now we proceed with the proof without the finite support condition: $$ \begin{array}{rcl} T_1&=&\dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)dy\\ &&\times\left\{ \dis\int_{\mathbb S^d}f(z)dz\dis\int_{\mathbb S^d}K[(1-u'x)/h^2]K[(1-u'z)/h^2]du \cdot \int_{\mathbb S^d}K[(1-u'y)/h^2]K[(1-u'z)/h^2]du \right\}^2\\ &\sim& \dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)dy\left\{\int_{\mathbb S^d}f(z)\big[\lambda_d(K)h^dK(\frac{1-x^Tz}{h^2})\big]\times \big[\lambda_d(K)h^dK(\dis\frac{1-y^Tz}{h^2})\big]dz \right\}^2\\ &\sim& \lambda_d(K)^4h^{4d}\dis\int_{\mathbb S^d}f(x)dx\int_{\mathbb S^d}f(y)\big[ \lambda_d(K)h^d K(\dis\frac{1-x^Ty}{h^2})f(y)\big]^2dy\\ &=&\lambda_d(K)^6h^{6d}\dis\int_{\mathbb S^d} \left\{\int_{\mathbb S^d} K^2(\dis\frac{1-x^Ty}{h^2})f^3(y)dy \right\}f(x)dx\\ &\sim&\lambda_d(K)^6h^{6d}\dis\int_{\mathbb S^d} \lambda_d(K^2)h^dC\cdot f^3(x)f(x)dx\\ &=&C\lambda_d(K)^6\lambda_d(K^2)h^{7d}\dis\int_{\mathbb S^d}f^4(x)dx=O(h^{7d}). \end{array} $$ Thus we have proved $T_1=O(h^{7d})$ without the finite support assumption, and the rest of the proof follows through as in \cite{chineseclt}.
\end{proof} Observe the identity: \begin{equation}\label{csden} \dis\int_{\mathbb S^d_{\geq 0}}(\hat{p}_n-p)^2dx=\dis|\Gamma|\int_{\mathbb S^d}(\hat{f}_n-\tilde{p})^2dy; \end{equation} then the CLT of the compositional ISE follows from the identity (\ref{csden}) and our proof of the CLT for the spherical ISE without finite support conditions on kernels. \subsection{Proofs of Shadow Monomials in Section \ref{sec:rkhs} }\label{reproproof} Proof of Proposition \ref{killodd}: \begin{proof} Let $k$ be an index for which $\alpha_k$ is odd. Splitting the sum according to $s_k=1$ or $s_k=-1$, a direct computation yields: $$ \begin{array}{rcl} (\prod_{i=1}^{d+1}x_i^{\alpha_i})^{\Gamma}&=&\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\}}\prod_{i=1}^{d+1}(s_ix_i)^{\alpha_i}\\ &=&\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\},\, i\neq k}\prod_{i\neq k}(s_ix_i)^{\alpha_i}\, x_k^{\alpha_k}+\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\},\, i\neq k}\prod_{i\neq k}(s_ix_i)^{\alpha_i}\, (-x_k)^{\alpha_k}\\ &=&x_k^{\alpha_k}\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\},\, i\neq k}\prod_{i\neq k}(s_ix_i)^{\alpha_i}-x_k^{\alpha_k}\dis\frac{1}{|\Gamma|}\sum_{s_i\in \{\pm 1\},\, i\neq k}\prod_{i\neq k}(s_ix_i)^{\alpha_i} \\ &=&0, \end{array} $$ where the third equality uses $(-x_k)^{\alpha_k}=-x_k^{\alpha_k}$, since $\alpha_k$ is odd. \end{proof} \subsection{Proof of Linear Independence of Reproducing Kernels in Theorem \ref{abundencelm}}\label{abundprf} We sketch a slightly more detailed (though not complete) proof: \begin{proof} This is the most technical lemma in this article. We sketch the philosophy of the proof here; it can be understood intuitively in topological terms. Recall that we can produce a projective space $\mathbb P^d$ by identifying every pair of antipodal points of a sphere $\mathbb S^d$ (identify $x$ with $-x$); in other words $\mathbb P^d=\mathbb S^d/\mathbb Z_2$, where $\mathbb Z_2=\{0,1\}$ is a cyclic group of order $2$. Then we can define a projective kernel in $\mathcal H_i\subset L^2(\mathbb S^d)$ to be $k^p_i(x, \cdot)=[k_i(x,\cdot)+k_i(-x,\cdot)]/2$. We can also denote the projective kernel inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}$ by $\underline{k}_m^p(x, \cdot)=\sum_{i=0}^{m}k^p_{2i}(x,\cdot)$.
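As an aside, the sign-averaging computation in the proof of Proposition \ref{killodd} above can be confirmed numerically in a few lines; the sketch below (illustrative only) evaluates the symmetrized monomial at an arbitrary test point:

```python
from itertools import product

def shadow(alphas):
    # Average prod_i (s_i * x_i)^{alpha_i} over all sign choices s in {+-1}^n,
    # i.e. the Gamma-symmetrization of the monomial, evaluated at a fixed point.
    xs = [2.0, 3.0, 5.0, 7.0][:len(alphas)]      # arbitrary test point
    total = 0.0
    for signs in product([1, -1], repeat=len(alphas)):
        term = 1.0
        for s, x, a in zip(signs, xs, alphas):
            term *= (s * x) ** a
        total += term
    return total / 2 ** len(alphas)

print(shadow([2, 4, 2]))  # all exponents even: the monomial survives (8100.0)
print(shadow([2, 3, 2]))  # one odd exponent: the shadow vanishes (0.0)
```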
Now we spread out the data set $\{x_i\}_{i=1}^n$ by the ``spread-out'' construction in Section \ref{sec:spread}, and denote the spread-out data set by $\{\Gamma \cdot x_i\}_{i=1}^n=\{{c^{-1}_{\Delta}(x_i)}\}_{i=1}^{n}$ (a data set rather than a set, because of repetitions). A compositional reproducing kernel is a sum of spherical reproducing kernels based at the points of ${c^{-1}_{\Delta}(x_i)}$, divided by the number of elements of ${c^{-1}_{\Delta}(x_i)}$. The data set ${c^{-1}_{\Delta}(x_i)}$ has antipodal symmetry, so a compositional kernel is a linear combination of projective kernels. Notice that \emph{different} fake kernels are linear combinations of \emph{different} projective kernels. It therefore suffices to show the linear independence of projective kernels for distinct data points and large enough $m$, which implies the linear independence of the fake kernels $\{\underline{k}^{\Gamma}_{m}(x_i, \cdot)\}_{i=1}^n$. We now focus on the linear independence of projective kernels. A projective kernel can be seen as a reproducing kernel for a point in $\mathbb P^d$. For a set of distinct points $\{y_i\}_{i=1}^l\subset \mathbb P^{d}$, we will show that the corresponding set of projective kernels $\{\underline{k}^{p}_{m}(y_i, \cdot)\}_{i=1}^l\subset \bigoplus_{i=0}^{m}\mathcal H_{2i}$ is linearly independent once $m$ is large enough. Consider the two vector subspaces $V_1^{m}=\spn \big[ \{(y_i\cdot {t})^{2m}\}_{i=1}^l\big]$ and $V_2^{m}=\spn\big[\{\underline{k}^{p}_{m}(y_i, t)\}_{i=1}^l\big]$, both of which lie inside $\bigoplus_{i=0}^{m}\mathcal H_{2i}\subset L^2(\mathbb S^d)$. Then we can define a linear map $h_{m}:\ V_1^{m}\rightarrow V_2^{m}$ by setting $h_{m}((y_i\cdot {t})^{2m})=\sum_{j=1}^l<(y_i\cdot {t})^{2m},\underline{k}^{p}_{m}(y_j, t)>\underline{k}^{p}_{m}(y_j, t)$. This linear map $h_{m}$ is represented by an $l\times l$ symmetric matrix whose diagonal elements are $1$'s, and whose off-diagonal elements are $(y_i\cdot y_j)^{2m}$.
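The behavior of this matrix is easy to watch numerically (an illustrative aside with random points; the sizes chosen are arbitrary): the off-diagonal entries $(y_i\cdot y_j)^{2m}$ collapse to zero as $m$ grows, while the diagonal stays at $1$:

```python
import numpy as np

rng = np.random.default_rng(0)
l, d = 5, 3
# l random unit vectors in R^{d+1}; with probability one no two are parallel
# or antipodal, so |y_i . y_j| < 1 for i != j.
Y = rng.standard_normal((l, d + 1))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)

for m in [1, 5, 50]:
    G = (Y @ Y.T) ** (2 * m)       # matrix of h_m: entries (y_i . y_j)^{2m}
    off_mass = np.abs(G - np.eye(l)).sum(axis=1).max()
    print(m, off_mass, np.linalg.matrix_rank(G))
# the off-diagonal mass tends to 0, so the matrix becomes diagonally
# dominant (hence full rank) for large m
```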
Notice that $y_i\neq y_j$ in $\mathbb P^d$, which means that they are not antipodal to each other in $\mathbb S^d$, thus $|y_i\cdot y_j|<1$. When $m$ is large enough, all off-diagonal elements go to zero while the diagonal elements stay constant, so the matrix representing $h_{m}$ becomes a \emph{diagonally dominant} matrix, which has full rank. When the linear map $h_{m}$ has full rank, the spanning sets $ \{(y_i\cdot {t})^{2m}\}_{i=1}^l$ and $\{\underline{k}^{p}_{m}(y_i, t)\}_{i=1}^l$ must be bases of $V_1^{m}$ and $V_2^{m}$ respectively, so the set of projective kernels $\{\underline{k}^{p}_{m}(y_i, t)\}_{i=1}^l$ is linearly independent when $m$ is large enough. \end{proof} \subsection{Proof of Representer Theorems in Section \ref{represent}}\label{representerproofs} Proof of Theorem \ref{minorm} on minimal norm interpolation: \begin{proof} Note that the set $I_y^{m}=\{f\in \bigoplus_{i=0}^{m}\mathcal H_{2i}:\ f(x_i)=y_i\}$ is non-empty, because the $f_0$ defined by the linear system of equations is naturally in $I_y^{m}$. Let $f$ be any other element of $I_y^{m}$ and define $g=f-f_0$; then we have: $$ \norm{f}^2=\norm{g+f_0}^2=\norm{g}^2+2<f_0,g>+\norm{f_0}^2. $$ Since $g\in \bigoplus_{i=0}^{m}\mathcal H_{2i}$ and $g(x_i)=0$ for $1\leq i\leq n$, we have: $$ \begin{array}{rcl} <f_0,g>&=&<\sum_{i=1}^n\omega_{m}(x_i,\cdot)c_i,g(\cdot)>\\ &=&\dis\sum_{i=1}^nc_i<\omega_{m}(x_i,\cdot),g(\cdot)>\\ &=&\dis\sum_{i=1}^nc_ig(x_i)=0. \end{array} $$ Thus $\norm{f}^2=\norm{g+f_0}^2=\norm{g}^2+\norm{f_0}^2$, which implies that $f_0$ is the solution to the minimal norm interpolation problem. \end{proof} Proof of Theorem \ref{regularization}, on regularization problems: \begin{proof} First define the loss functional $E(f)=\sum_{i=1}^n|f(x_i)-y_i|^2+\mu\norm{f}^2$.
For any $\Gamma$-invariant function $f=f^{\Gamma}\in \bigoplus_{i=0}^{m}\mathcal H_{2i}$, let $g=f-f_{\mu}$; then a simple computation yields: $$ \dis E(f)=E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2 -2\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)+2\mu<f_{\mu},g>+\mu\norm{g}^2. $$ We want to show $\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)=\mu<f_{\mu},g>$, and an equivalent way of writing this equality is: $$ \dis\sum_{i=1}^n<(y_i-f_{\mu}(x_i))\omega_{m}(x_i,t), g(t)>=\mu<f_{\mu},g>. $$ We now claim that $\mu f_{\mu}(t)=\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t)\big]$, which implies the above equality. To prove this claim, plug the linear combination $f_{\mu}=\sum_{i=1}^nc_i\cdot \omega_{m}(x_i,t)$ into the claim; this turns the claim into a system of linear equations in $\{c_i\}_{i=1}^n$. Since $\{\omega_{m}(x_i,t)\}_{i=1}^n$ is a linearly independent set, this system holds if and only if $\{c_i\}_{i=1}^n$ satisfy $\mu c_k+\sum_{i=1}^n c_i\cdot \omega_{m}(x_i,x_k)=y_k$ for every $k$ with $1\leq k\leq n$, which is exactly the hypothesis of this theorem. Therefore we conclude that the claim $\mu f_{\mu}(t)=\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t)\big]$ is true.
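At the level of coefficients the claim just established is elementary linear algebra, and can be checked numerically (an illustrative sketch; the positive-definite matrix below is a hypothetical stand-in for the kernel matrix $\omega_{m}(x_i,x_k)$, not computed from actual kernels):

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu = 6, 0.1
A = rng.standard_normal((n, n))
G = A @ A.T + n * np.eye(n)        # stand-in for the kernel matrix omega_m(x_i, x_k)
y = rng.standard_normal(n)

# Coefficients from the theorem's linear system: mu*c_k + sum_i c_i*G[i,k] = y_k
c = np.linalg.solve(G + mu * np.eye(n), y)

# The claim mu*f_mu = sum_i (y_i - f_mu(x_i)) * omega_m(x_i, .), read off at the
# data points, becomes mu*c = y - G@c coefficient-wise.
f_at_data = G @ c
print(np.allclose(mu * c, y - f_at_data))   # True
```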
To finish the proof of this theorem, notice that $$ \begin{array}{rcl} \dis E(f)&=&E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2 -2\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)+2\mu<f_{\mu},g>+\mu\norm{g}^2\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2+2\big[\mu<f_{\mu},g>-\sum_{i=1}^n(y_i-f_{\mu}(x_i))g(x_i)\big]\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2+2\big[\mu<f_{\mu},g>-\sum_{i=1}^n<(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t), g(t)>\big]\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2+2\big[<\underbrace{\big(\mu f_{\mu}(t)-\sum_{i=1}^n\big[(y_i-f_{\mu}(x_i))\cdot \omega_{m}(x_i,t)\big]\big)}_{=0}, g(t)>\big]\\ &=&\dis E(f_{\mu})+\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2. \end{array} $$ The term $\sum_{i=1}^n |g(x_i)|^2+\mu\norm{g}^2$ in the above equality is always non-negative, so $E(f_{\mu})\leq E(f)$, and the theorem follows. \end{proof} \bibliographystyle{asa} \bibliography{geometrycomp.bib} \end{document}
2205.01146v3
http://arxiv.org/abs/2205.01146v3
O(2)-symmetry of 3D steady gradient Ricci solitons
\documentclass[12pt]{amsart} \makeatletter \def\l@section{\@tocline{1}{10pt}{1pc}{}{}} \def\l@subsection{\@tocline{2}{0pt}{1pc}{4.6em}{}} \def\l@subsubsection{\@tocline{3}{0pt}{1pc}{7.6em}{}} \renewcommand{\tocsection}[3]{ \indentlabel{\@ifnotempty{#2}{\makebox[2.3em][l]{ \ignorespaces#1 #2.\hfill}}}\textbf{#3}} \renewcommand{\tocsubsection}[3]{ \indentlabel{\@ifnotempty{#2}{\hspace*{2.3em}\makebox[2.3em][l]{ \ignorespaces#1 #2.\hfill}}}#3} \renewcommand{\tocsubsubsection}[3]{ \indentlabel{\@ifnotempty{#2}{\hspace*{4.6em}\makebox[3em][l]{ \ignorespaces#1 #2.\hfill}}}#3} \makeatother \setcounter{tocdepth}{4} \usepackage{amsmath,amssymb,amscd,amsthm,graphicx,enumerate,tikz-cd} \usepackage{hyperref} \hypersetup{breaklinks=true} \usepackage{enumitem} \newlist{condenum}{enumerate}{1} \setlist[condenum]{label=\bfseries Condition \arabic*., ref=\arabic*, wide} \numberwithin{equation}{section} \setcounter{secnumdepth}{3} \setcounter{tocdepth}{3} \setlength{\parskip}{1ex} \theoremstyle{plain} \makeatletter \makeatother \usepackage[utf8]{inputenc} \usepackage{amssymb} \usepackage{amsthm} \usepackage{mathrsfs} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{mathtools} \usepackage{makecell} \usepackage{xcolor} \bibliographystyle{alpha} \usepackage[margin=1.25in]{geometry} \usepackage{lipsum} \makeatletter \def\ps@pprintTitle{ \let\@oddhead\@empty \let\@evenhead\@empty \def\@oddfoot{} \let\@evenfoot\@oddfoot} \makeatother \newcommand{\dds}[1]{\frac{d #1}{d s}} \newcommand{\dd}[2]{\frac{d #1}{d #2}} \newcommand{\pp}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\ip}[1]{\left\langle #1 \right\rangle} \newcommand{\avg}[1]{ #1 _{avg}} \newcommand{\R}{\mathbb{R}} \newcommand{\RR}{\mathbb{R}^2} \newcommand{\mat}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\intmi}{\int\displaylimits_{M_{i}}} \newcommand{\intmii}{\int\displaylimits_{M_{i-1}}} \newcommand{\intm}{\int\displaylimits_{M_{1}}} \newcommand{\ps}{\partial_s} \newcommand{\pr}{\partial_r} 
\newcommand{\pth}{\partial_{\theta}} \newcommand{\ap}{\overline{P}} \newcommand{\al}{\overline{L}} \newcommand{\az}{\overline{\zeta}} \newcommand{\lmin}{\lambda_{min}} \newcommand{\LL}{\mathcal{L}} \newcommand{\scal}{\textnormal{scal}} \newcommand{\Rm}{\textnormal{Rm}} \newcommand{\scalar}{R} \newcommand{\Ric}{\textnormal{Ric}} \newcommand{\ptt}{\frac{\partial}{\partial t^+}} \newcommand{\aLL}{\overline{\mathcal{L}}} \newcommand{\pt}{\partial_t} \newcommand{\psp}{\frac{\partial^+}{\partial s}} \newcommand{\ptp}{\frac{\partial^+}{\partial t}} \newcommand{\M}{\mathcal{M}} \newcommand{\ot}{\overline{t}} \newcommand{\ox}{\overline{x}} \newcommand{\oq}{\overline{Q}} \newcommand{\sym}{\mathbb{Z}_2\times SO(2)} \newcommand{\rii}{\rightarrow\infty} \newcommand{\ri}{\rightarrow} \newcommand{\AVR}{\textnormal{AVR}} \newcommand{\diam}{\textnormal{diam}} \newcommand{\sy}{\mathbb{Z}_2\times SO(2)} \newcommand{\cigar}{\textnormal{Cigar}} \newcommand{\gs}{\Gamma(s)} \newcommand{\og}{\overline{g}} \newcommand{\ga}{\Gamma_{\ge A}} \newcommand{\gafa}{\Gamma_{A}} \newcommand{\Fi}{\phi_{iT_0}} \newcommand{\Fnegativei}{\phi_{-iT_0}} \newcommand{\Fiplus}{\phi_{(i+1)T_0}} \newcommand{\Finegativeplus}{\phi_{-(i+1)T_0}} \newcommand{\N}{\mathbb{N}} \newcommand{\sff}{\mathrm{I\!I}} \newcommand{\yi}[1]{\textcolor{black}{#1}} \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{lem}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{prop}[theorem]{Proposition} \newtheorem{claim}[theorem]{Claim} \newtheorem{cor}[theorem]{Corollary} \newtheorem{con}[theorem]{Conjecture} \theoremstyle{definition} \newtheorem{defn}[theorem]{Definition} \newtheorem*{theorem*}{Theorem} \usepackage{xpatch} \makeatletter \xpatchcmd{\tableofcontents}{\contentsname \@mkboth}{\small\contentsname \@mkboth}{}{} \xpatchcmd{\listoffigures}{\chapter *{\listfigurename }}{\chapter *{\small\listfigurename }}{}{} \makeatother \makeatletter 
\def\blfootnote{\xdef\@thefnmark{}\@footnotetext} \makeatother \begin{document} \begin{abstract} We prove that all 3D steady gradient Ricci solitons are O(2)-symmetric. The O(2)-symmetry is the most universal symmetry among Ricci flows with any type of symmetry. Our theorem is also the first instance of a symmetry theorem for Ricci flows that are not rotationally symmetric. We also show that the Bryant soliton is the unique 3D steady gradient Ricci soliton with positive curvature that is asymptotic to a ray. \end{abstract} \blfootnote{The author was supported by the NSF grant DMS-2203310.} \title[O(2)-symmetry of 3D steady gradient Ricci solitons]{O(2)-symmetry of 3D steady gradient Ricci solitons} \author[Yi Lai]{Yi Lai} \email{[email protected]} \address[]{Department of Mathematics, Stanford University, CA 94305, USA} \maketitle \tableofcontents \begin{section}{Introduction}\label{s: intro} \subsection{Statement of the main results} The concept of Ricci solitons was introduced by Hamilton \cite{cigar}. Ricci solitons generate self-similar solutions of Hamilton's Ricci flow \cite{Hamilton_ric}, and often arise as singularity models of Ricci flows \cite{supR,distanceH,Cao-Kahler,Chen-Zhu-Pinch}. They can be viewed as the fixed points of the Ricci flow in the space of Riemannian metrics modulo rescalings and diffeomorphisms. Ricci solitons are also natural generalizations of Einstein metrics and constant curvature metrics. A complete Riemannian manifold $(M,g)$ is called a Ricci soliton if there exist a vector field $X$ and a constant $\lambda\in\R$ such that \begin{equation*} \Ric=\frac{1}{2}\LL_Xg+\lambda\,g. \end{equation*} The soliton is called \textit{shrinking} if $\lambda>0$, \textit{expanding} if $\lambda<0$, and \textit{steady} if $\lambda=0$. Moreover, if the vector field $X$ is the gradient of some smooth function $f$, then we say it is a \textit{gradient Ricci soliton}, and $f$ is the potential function.
In particular, a steady gradient Ricci soliton satisfies the equation \begin{equation*} \Ric=\nabla^2 f. \end{equation*} The goal of this paper is to study steady gradient Ricci solitons in dimension 3. We assume they are non-flat. In dimension 2, the only steady gradient Ricci soliton is Hamilton's cigar soliton, which is rotationally symmetric \cite{cigar}. In dimension $n\ge3$, Bryant constructed a steady gradient Ricci soliton which is rotationally symmetric \cite{bryant}. See \cite{CaoHD,FIK,Lai2020_flying_wing} for more examples of Ricci solitons in dimension $n\ge 4$. In dimension 3, we know that all steady gradient Ricci solitons are non-negatively curved \cite{ChenBL}, and they are asymptotic to sectors of angle $\alpha\in[0,\pi]$. In particular, the Bryant soliton is asymptotic to a ray ($\alpha=0$), and the soliton $\R\times\cigar$ is asymptotic to a half-plane ($\alpha=\pi$). If the soliton has positive curvature, it must be diffeomorphic to $\R^3$ \cite{petersen}, and asymptotic to a sector of angle in $[0,\pi)$ \cite{Lai2020_flying_wing}. If the curvature is not strictly positive, then it is a metric quotient of $\R\times\cigar$ \cite{MT}. Hamilton conjectured that there exists a 3D steady gradient Ricci soliton that is asymptotic to a sector with angle in $(0,\pi)$, which is called a \textit{3D flying wing} \cite{CaoHD,infinitesimal,Catino,Chow2007a,DZ,HaRF}. The author confirmed this conjecture by constructing a family of $\mathbb{Z}_2\times O(2)$-symmetric 3D flying wings \cite{Lai2020_flying_wing}. More recently, the author showed that the asymptotic cone angles of these flying wings can take arbitrary values in $(0,\pi)$. It is then interesting to see whether a 3D steady gradient Ricci soliton with positive curvature must be either a flying wing or the Bryant soliton. This is equivalent to asking whether the Bryant soliton is the unique 3D steady gradient Ricci soliton with positive curvature that is asymptotic to a ray.
Our first main theorem gives an affirmative answer to this. \begin{theorem}[Uniqueness theorem]\label{t: must look like a flying wing} Let $(M,g)$ be a 3D steady gradient Ricci soliton with positive curvature. If $(M,g)$ is asymptotic to a ray, then it must be isometric to the Bryant soliton up to a scaling. \end{theorem} We mention that there are many other uniqueness results for the 3D Bryant soliton under various additional assumptions. First, Bryant showed in his construction that the Bryant soliton is the unique rotationally symmetric steady gradient Ricci soliton \cite{bryant}. More recently, a well-known theorem of Brendle shows that the Bryant soliton is the unique steady gradient Ricci soliton that is non-collapsed in dimension 3 \cite{brendlesteady3d}. See also \cite{DZ,Cao2009OnLC,Cao2011BachflatGS,Chen2011OnFA,Munteanu2019PoissonEO,Catino} for more uniqueness theorems for the Bryant soliton and the cigar soliton. Our Theorem \ref{t: must look like a flying wing} is the Ricci flow analogue of X.J. Wang's well-known theorem in mean curvature flow, which proves that the bowl soliton is the unique entire convex graphical translator in $\R^3$ \cite{Wangxujia}. Note that the analogues of 3D steady Ricci solitons in mean curvature flow are convex translators in $\R^3$, where the rotationally symmetric solutions are called bowl solitons. Moreover, a 3D steady Ricci soliton asymptotic to a ray can be compared to a convex graphical translator whose domain of definition is the entire $\R^2$. There have been many exciting symmetry theorems in geometric flows \cite{Huisken2015ConvexAS,Brendle2011AncientST,angenent2022unique,Bamler2021OnTR,Brendle_jdg_high,zhu2022so,Zhu2021RotationalSO,Bourni_jdg,Bourni_convex,Brendle2019UniquenessOC,Brendle2021OC,brendle2023rotational,BrendleNaffDasSesum,du2021hearing}.
If one views the rotational symmetry as the `strongest' symmetry, then the $O(2)$-symmetry is naturally the `weakest', and the most universal, symmetry in all ancient Ricci flow solutions. For example, in dimension 2, the non-flat ancient Ricci flows are the shrinking sphere, the cigar soliton, and the sausage solution \cite{2dancientcompact,Sesum}, and they are all rotationally symmetric (i.e. $O(2)$-symmetric). In dimension 3, the author's flying wing examples and Fateev's examples \cite{Fateev} (see also \cite{Bakas2009AncientSO}) are all $O(2)$-symmetric but not rotationally symmetric (i.e. $O(3)$-symmetric). It was conjectured by Hamilton and Cao that the 3D flying wings are $O(2)$-symmetric. Our second main theorem confirms this conjecture. In particular, this is the first instance of a symmetry theorem for Ricci flows that are not rotationally symmetric. \begin{theorem}\label{t: symmetry of flying wing} Let $(M,g)$ be a 3D flying wing. Then $(M,g)$ is $O(2)$-symmetric. \end{theorem} Here we say a complete 3D manifold is $O(2)$-symmetric if it admits an effective isometric $O(2)$-action, and the action fixes a complete geodesic $\Gamma$, such that the metric is a warped product metric on $M\setminus\Gamma$ with $S^1$-orbits. It is easy to see that the Bryant soliton and $\R\times\cigar$ are also $O(2)$-symmetric. Therefore, combining Theorem \ref{t: must look like a flying wing} and \ref{t: symmetry of flying wing}, we see that all 3D steady gradient Ricci solitons are $O(2)$-symmetric. \begin{theorem}\label{t: symmetry of all solitons} Let $(M,g)$ be a 3D steady gradient Ricci soliton. Then $(M,g)$ is $O(2)$-symmetric. \end{theorem} In mean curvature flow, the `weakest' symmetry is the $\mathbb{Z}_2$-symmetry, which is usually obtained using the standard maximum principle method.
More precisely, if we compare 3D steady gradient Ricci solitons with convex translators in $\R^3$, then the $O(2)$-symmetry is compared with the $\mathbb{Z}_2$-symmetry (reflectional symmetry) there. The convex translators in $\R^3$ have been classified to be the tilted Grim Reapers, the flying wings, and the bowl soliton, all of which are $\mathbb{Z}_2$-symmetric \cite{white}. However, as its analogue in Ricci flow, the $O(2)$-symmetry is not `discrete' at all, and no maximum principle is available. We also obtain some geometric properties of the 3D flying wings. First, we show that the scalar curvature $R$ always attains its maximum at some point, which is also the critical point of $f$. The analogue of this statement in mean curvature flow is that the graph of the convex translator has a maximum point, which relies on the well-known convexity theorem of Spruck-Xiao \cite{Spruck2020CompleteTS}. \begin{theorem}\label{t': max} Let $(M,g,f)$ be a 3D steady gradient Ricci soliton with positive curvature. Then there exists $p\in M$ which is a critical point of the potential function $f$, and the scalar curvature $R$ achieves its maximum at $p$. \end{theorem} We study the asymptotic geometry of 3D flying wings. First, we show that the soliton is $\mathbb{Z}_2$-symmetric at infinity, in the sense that the limits of $R$ at the two ends of $\Gamma$ are equal to the same positive number. Here $\Gamma$ is a complete geodesic fixed by the $O(2)$-isometry. After a rescaling we may assume this positive number is $4$; then we show that there are two asymptotic limits, one being $\R\times\cigar$ with $R(x_{tip})=4$, and the other $\RR\times S^1$ with the diameter of the $S^1$-factor equal to $\pi$. Note that in a cigar soliton with $R=4$ at the tip, the diameter of the $S^1$-fibers in the warped-product metric converges to $\pi$ at infinity.
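The normalization in the last sentence can be checked directly in coordinates (an illustrative aside, using the standard conformal form of the cigar metric, not a computation from the paper):

```python
import math

# Cigar soliton g = (dx^2 + dy^2)/(1 + x^2 + y^2): its scalar curvature is
# R = 4/(1 + x^2 + y^2), so R = 4 at the tip (the origin).  The coordinate
# circle {x^2 + y^2 = r^2} has g-circumference 2*pi*r/sqrt(1 + r^2).
def fiber_diameter(r):
    circumference = 2 * math.pi * r / math.sqrt(1 + r**2)
    return circumference / 2      # intrinsic diameter of a round circle

for r in [1, 10, 1000]:
    print(r, fiber_diameter(r))   # approaches pi as r -> infinity
```

So with $R(x_{tip})=4$ the $S^1$-fibers have diameter tending to $\pi$, matching the statement above.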
See \cite{distanceH,Lu_Wang_Ricci_shrinker,De15,chow2022four,Lai2020_flying_wing} for more discussions on the asymptotic geometry of Ricci solitons. \begin{theorem}[$\mathbb{Z}_2$-symmetry at infinity]\label{t: R_1=R_2} Let $(M,g,f)$ be a 3D flying wing. Then after a rescaling we have \begin{equation*} \lim_{s\rii}R(\Gamma(s))=\lim_{s\ri-\infty}R(\Gamma(s))=4. \end{equation*} For any sequence of points $p_i\rii$, the pointed manifolds $(M,g,p_i)$ smoothly converge to either $\R\times\cigar$ with $R(x_{tip})=4$ or $\RR\times S^1$ with the diameter of the $S^1$-factor equal to $\pi$. Moreover, if $p_i\in\Gamma$, then the limit is $(\R\times\cigar,x_{tip})$, and if $d_g(\Gamma,p_i)\rii$, then the limit is $\R^2\times S^1$. \end{theorem} We also obtain a quantitative relation between the limit of $R$ along $\Gamma$, the asymptotic cone angle, and $R(p)$, where $p$ is the critical point of $f$. This relation also holds in the Bryant soliton, and thus holds for all 3D steady gradient Ricci solitons with positive curvature. \begin{theorem}\label{t': quantitative relation} Let $(M,g,f,p)$ be a 3D steady gradient Ricci soliton with positive curvature. Assume $(M,g)$ is asymptotic to a sector with angle $\alpha$. Then we have \begin{equation*} \lim_{s\rii}R(\Gamma(s))=\lim_{s\ri-\infty}R(\Gamma(s))=R(p)\sin^2\frac{\alpha}{2}. \end{equation*} \end{theorem} It has been conjectured that there is a dichotomy in the curvature decay rate of steady gradient solitons: the curvature decays either exactly linearly or exponentially \cite{Munteanu2019PoissonEO,DZ,chan2022dichotomy,Chan2019CurvatureEF}. In dimension 3, the curvature of the Bryant soliton decays linearly in the distance to the tip, and the curvature of $\R\times\cigar$ decays exponentially in the distance to the line of cigar tips.
In this paper, we prove that in a 3D flying wing, the curvature decays faster than any polynomial function in $r$, and slower than an exponential function in $r$, where $r$ is the distance function to $\Gamma$. \begin{theorem}\label{t': curvature estimate} Let $(M,g,f,p)$ be a 3D flying wing. Suppose $\lim_{s\rii}R(\Gamma(s))=4$. Then for any $\epsilon_0>0$ there exists $C(\epsilon_0)>0$, and for any $k\in\mathbb{N}$ there exists $C_k>0$, such that the following holds for all $x\in M$, \begin{equation*} C^{-1}e^{-2(1+\epsilon_0)\,d_g(x,\Gamma)}\le R(x)\le C_k\,d_g^{-k}(x,\Gamma). \end{equation*} \end{theorem} \subsection{Outline of difficulties and proofs} In Ricci flow, the `strongest' symmetry, i.e. the rotational symmetry, was first studied by Brendle in dimension 3 with many novel ideas that successfully generalize to prove rotational symmetry in higher dimensions \cite{brendlesteady3d,Brendle_jdg_high,brendle2023rotational,BrendleNaffDasSesum}. In contrast to the rotational symmetry, one of the major difficulties in studying any weaker symmetry is the non-uniqueness of asymptotic limits. For the $O(2)$-symmetry, this requires us to study the two different asymptotic limits $\mathbb{R}\times\cigar$ and $\R^2\times S^1$ separately and glue these estimates together in a delicate way. Our $O(2)$-symmetry theorem is the first instance of tackling this issue in Ricci flow. Our method may be generalized to study the $O(n-k)$-symmetries, $k=1,...,n-2$, which are weaker than the rotational symmetry (i.e. $O(n)$-symmetry). For example, the author constructed $n$-dimensional steady solitons that are non-collapsed, $O(n-1)$-symmetric but not $O(n)$-symmetric \cite{Lai2020_flying_wing}. Therefore, we need to develop new tools and methods to prove the $O(2)$-symmetry.
Some of our methods are independent of the soliton structure and the dimension, such as the distance distortion estimates, the curvature estimates, and the symmetry improvement theorem, which should have more applications. In particular, as a consequence of the non-uniqueness issue of limits, Brendle's construction of the killing field is not applicable in our setting. So we introduce a new stability method to construct the killing field (see more in Section \ref{s: lie} and \ref{s: killing field}). Our stability method generalizes Brendle's method (see \cite[Lemma 4.1]{brendlesteady3d}) from steady solitons to flows, in the sense that it is much less restricted by the steady soliton structure, and hence may be applied in more general settings. For example, our method was recently applied in \cite{ZhaoZhu} to study the rigidity of the non-collapsed Bryant soliton in any dimension $n\ge 4$. We now outline the structure of the paper. In the following we assume $(M,g,f)$ is a 3D steady gradient Ricci soliton that is not the Bryant soliton. Let $\{\phi_t\}_{t\in\R}$ be the diffeomorphisms generated by $\nabla f$, $\phi_0=id$, and let $g(t)=\phi_{-t}^*g$. Then $(M,g(t))$, $t\in(-\infty,\infty)$, is the Ricci flow of the soliton. \textbf{Section \ref{s: pre}} We give most of the definitions and standard Ricci flow results that will be used in the following proofs. \textbf{Section \ref{s: asymptotic geometry}} We study the asymptotic geometry of the soliton in this section. First, by the splitting theorem of 3D Ricci flow we can show that for any sequence of points $x_i\in M$ going to infinity as $i\rii$, the rescaled manifolds $(M,r^{-2}(x_i)g,x_i)$ converge to a smooth limit which splits off a line, where $r(x_i)>0$ is some non-collapsing scale at $x_i$. We show that such an asymptotic limit is either isometric to $\R\times\cigar$ or $\R^2\times S^1$. 
Moreover, we find two integral curves $\Gamma_1,\Gamma_2$ of $\nabla f$ tending to infinity at one end, such that the asymptotic limits are isometric to $\R\times\cigar$ along them, and are $\RR\times S^1$ away from them. The two integral curves also correspond to the two edge rays in the sector which is the blow-down limit of the soliton. Second, we prove Theorem \ref{t': max} on the existence of the maximum point of $R$. This is also the unique critical point of $f$. We do this by a contradiction argument. Suppose there does not exist a maximum of $R$. Then we can find an integral curve $\gamma$ of $\nabla f$ which goes to infinity at both ends. We show that $R$ is non-increasing along the curve and has a positive limit at one end. Using that the asymptotic limits along both ends of $\gamma$ are isometric to $\R\times\cigar$, we can compare the geometry at the two ends, and by a convexity argument we can show that $R$ is actually constant along $\gamma$, so that the soliton is isometric to $\R\times\cigar$. This contradicts our positive curvature assumption. So we have a closed subset $\Gamma$, which is the union of the critical point and two integral curves of $\nabla f$, such that $\Gamma$ is invariant under the diffeomorphisms generated by $\nabla f$, and the soliton converges to $(\R\times\cigar,x_{tip})$ under rescalings along the two ends of $\Gamma$. Next, we prove a quadratic curvature decay away from the edge $\Gamma$. This corresponds to the case $k=2$ in Theorem \ref{t': curvature estimate}. The proof uses Perelman's curvature estimate, which gives the upper bound $R(x,0)\le\frac{C}{r^2}$ on the scalar curvature in a non-negatively curved Ricci flow $(M,g(t))$, $t\in[-r^2,0]$, assuming the flow is non-collapsed at $x$ on scale $r$. In our situation, we will show by methods of metric comparison geometry that the soliton, though possibly collapsed at $x$ on scale $d_g(x,\Gamma)$, is non-collapsed at this scale in a local universal covering.
So we can apply Perelman's estimates in the local universal covering and obtain the desired quadratic decay $R(x)\le\frac{C}{d_g^2(x,\Gamma)}$. Lastly, we prove Theorem \ref{t: must look like a flying wing} and \ref{t: R_1=R_2}. In proving the two theorems, we will work in the backwards Ricci flow $(M,g(-\tau))$, $\tau\ge0$, and reduce the change of various geometric quantities to the distortion of distances and lengths under the flow. More specifically, fix a point $x\in M$ at which $(M,g(-\tau))$ is close to $\RR\times S^1$ on scale $h(\tau)$, and let $H(\tau)$ be the $g(-\tau)$-distance from $x$ to $\Gamma$. Then we can show \begin{equation*} \begin{cases} H'(\tau)\ge C^{-1}\cdot h^{-1}(\tau)\\ h'(\tau)\le C\cdot H^{-2}(\tau)\cdot h(\tau), \end{cases} \end{equation*} which guarantees that $h(\tau)$ stays bounded, and $H(\tau)$ grows at least linearly as $\tau\rii$: as long as $h$ stays bounded, the first inequality forces $H$ to grow at least linearly, and then the second inequality makes $(\log h)'$ integrable in $\tau$, which closes the bootstrap. Using this we can show that $R$ has two positive limits $R_1,R_2$ at the two ends of $\Gamma$, and the asymptotic cone angle is non-zero. To show $R_1=R_2$, first we can find two points $x_1,x_2$ at which the soliton $(M,g)$ is $\epsilon(\tau)$-close to $\RR\times S^1$, on the scales $2R_1^{-1/2}$ and $2R_2^{-1/2}$. Here $\epsilon(\tau)\ri0$ as $\tau\rii$. Then we can show that $x_1,x_2$ stay within a bounded distance of each other as we move backwards along the Ricci flow, and hence $(M,g(-\tau))$ is $\epsilon(\tau)$-close to $\RR\times S^1$ at $x_1,x_2$ on a uniform scale. So $R_1=R_2$ follows by controlling the change of scale at $x_1,x_2$ along the flow. Theorem \ref{t: R_1=R_2} is a key ingredient in proving the $O(2)$-symmetry theorem. \textbf{Section \ref{s: curvature etimates}} We prove Theorem \ref{t': curvature estimate} on the curvature estimates in this section. It is needed in the proof of the $O(2)$-symmetry. First, we derive the exponential curvature lower bound of $R$, which needs an improved Harnack inequality for non-negatively curved Ricci flows.
For a Ricci flow solution with non-negative curvature operator, the following conventional integrated Harnack inequality can be obtained by integrating Hamilton's differential Harnack inequality and using the inequality $\Ric(v,v)\le|v|^2R$, \begin{equation*} \frac{R(x_2,t_2)}{R(x_1,t_1)}\ge \exp\left(-\frac{1}{2}\frac{d^2_{g(t_1)}(x_1,x_2)}{t_2-t_1}\right), \end{equation*} see for example \cite[Theorem 4.40]{MT}. We observe that the inequality $\Ric(v,v)\le|v|^2R$ can be improved to $\Ric(v,v)\le\frac{1}{2}|v|^2R$ (for a unit vector $v=e_1$ in an orthonormal frame, non-negativity of the sectional curvatures gives $\Ric(e_1,e_1)=\sum_{j\ge 2}K(e_1,e_j)\le\sum_{i<j}K(e_i,e_j)=\frac{1}{2}R$), using which we can prove the following improved Harnack inequality \begin{equation*} \frac{R(x_2,t_2)}{R(x_1,t_1)}\ge \exp\left(-\frac{1}{4}\frac{d^2_{g(t_1)}(x_1,x_2)}{t_2-t_1}\right). \end{equation*} Using this improved Harnack inequality and some distance distortion estimates we can prove the exponential curvature lower bound. Note that this exponential lower bound $C^{-1}(\epsilon_0)e^{-(2+\epsilon_0)\,d_g(\cdot,\Gamma)}$ is sharp, because $\epsilon_0$ can be arbitrarily small, and in $\R\times\cigar$ with $R=4$ at the cigar tip, we have $\Gamma=\R\times\{x_{tip}\}$ and $R$ decays like $e^{-2\,d_g(\cdot,\Gamma)}$. Next, we derive the polynomial upper bound of $R$, which states that $R$ decays faster than $d^{-k}_g(\cdot,\Gamma)$, for any $k\in\mathbb{N}$. We prove this by induction. First, the case $k=2$ is proved in Section \ref{s: asymptotic geometry}. Now assume by induction that $R\le C_k\, d^{-k}_g(\cdot,\Gamma)$ for some $k\ge 2$. Since $R$ evolves by $\partial_t R=\Delta R+2|\Ric|^2(x,t)$ under the Ricci flow, we have the following reproduction formula for all $s<t$, \begin{equation*}\begin{split} R(x,t)&=\int_M G(x,t;y,s)R(y,s)\,d_sy+2\int_s^t\int_M G(x,t;z,\tau)|\Ric|^2(z,\tau)\,d_{\tau}z\,d\tau, \end{split}\end{equation*} where $G$ is the heat kernel of the heat equation $\pt u=\Delta u$.
Using a heat kernel estimate on $G$ and the inductive assumption, we can show that the first term goes to zero as $s\ri-\infty$, and that the second term is bounded by $C\cdot d_{g(t)}^{-(2k-1)}(x,\Gamma)$. Since $k\ge 2$ implies $2k-1>k$, this completes the induction. \textbf{Section \ref{s: semi-local}} In this section we prove a local stability theorem, which is another key ingredient of the $O(2)$-symmetry theorem. It states that the degree of $SO(2)$-symmetry improves as we move forward in time along the Ricci flow of the soliton. Here $SO(2)$-symmetric means that the manifold admits an isometric $SO(2)$-action whose principal orbits are circles. First, we prove the symmetry improvement theorem in the linear case. A symmetric 2-tensor $h$ on an $SO(2)$-symmetric manifold decomposes as a sum of a rotationally invariant mode and an oscillatory mode. We show that if $h$ satisfies the linearized Ricci-Deturck flow $\partial_t h=\Delta_L h$ on the cylindrical plane $\RR\times S^1$, then the oscillatory mode of $h$ decays exponentially in time. By a limiting argument we generalize this theorem to the non-linear case of a Ricci-Deturck flow perturbation whose background is an $SO(2)$-symmetric Ricci flow that is sufficiently close to $\R^2\times S^1$. Moreover, the symmetry improvement theorem also describes the decay of $|h|$ in the case that it is bounded by an exponential function instead of a constant. More precisely, for $x_0\in M$, if $|h|(\cdot,0)\le e^{\alpha\,d_g(x_0,\cdot)}$ for any $\alpha\in[0,2.02]$, then $|h|(x_0,T)\le e^{-\delta_0 \,T}\cdot e^{2\alpha\,T}$ holds for some $\delta_0>0$. When we apply the theorem to a 3D flying wing in which $R$ limits to $4$ along the edges, the increasing factor $e^{2\alpha\,T}$ is compensated by the cigar tip contracting along the edges under the Ricci flow.
It is crucial that $\alpha$ can be slightly greater than $2$, using which we can construct an $SO(2)$-symmetric approximating metric in Section \ref{s: Approximating $SO(2)$-symmetric metrics}, so that the error decays like $e^{-(2+\delta)\,d_g(\cdot,\Gamma)}$ for some small but positive $\delta$. So the error decays faster than $R$ does, by the exponential lower bound in Theorem \ref{t': curvature estimate}. We will use this fact to construct a Killing field in Sections \ref{s: lie} and \ref{s: killing field}. \textbf{Section \ref{s: Approximating $SO(2)$-symmetric metrics}} In this section we construct an approximating $SO(2)$-symmetric metric $\overline{g}$ satisfying suitable error estimates. First, we construct an $SO(2)$-symmetric metric $\overline{g}_1$ away from $\Gamma$, which satisfies \begin{equation}\label{e: g_2} |\overline{g}_1-g|_{C^{100}}\le e^{-(2+\epsilon_0)\,d_g(\cdot,\Gamma)}, \end{equation} for some $\epsilon_0>0$. To show this, we impose the following inductive assumption. \textbf{Inductive assumption one:} There are a constant $\delta\in(0,0.01)$ and an increasing arithmetic sequence $\alpha_n>0$ with $\delta\le\alpha_{n+1}-\alpha_n\le0.01$ such that if $\alpha_n\le2.02$, then there is an $SO(2)$-symmetric metric $\widehat{g}_{n}$ with \begin{equation}\label{e: intoinduction on n} |\widehat{g}_{n}-g|_{C^{100}}\le e^{-\alpha_nd_g(\cdot,\Gamma)}. \end{equation} If this is true for all $n\in\mathbb{N}$, then $\widehat{g}_N$ will satisfy \eqref{e: g_2} for a large enough $N\in\mathbb{N}$. Now assume that inductive assumption one holds for $n$. To show that it also holds for $n+1$, we want to apply the symmetry improvement theorem to the Ricci flow of the soliton. After applying the symmetry improvement theorem $i$ times, the error to a symmetric metric will decay by a factor of $C^{-i}$ for some $C>1$.
So for points at larger distance to $\Gamma$, we need to apply the symmetry improvement theorem more times to achieve the error estimate $e^{-\delta\,d_g(\cdot,\Gamma)}$. Therefore, we need a second induction to apply the symmetry improvement theorem infinitely many times, so that eventually the error estimate $e^{-\delta\,d_g(\cdot,\Gamma)}$ holds everywhere. \textbf{Inductive assumption two:} There is a sequence of $SO(2)$-symmetric metrics $\{\widehat{g}_{n,k}\}_{k=1}^{\infty}$ such that $\widehat{g}_{n,k}$ satisfies \eqref{e: intoinduction on n}, and for some $C>1$ we have \begin{equation}\begin{split}\label{e: intoinduction on k} |\widehat{g}_{n,k}-g|_{C^{100}}\le e^{-\alpha_{n} d_g(\cdot,\Gamma)}\cdot C^{-i}\quad \textit{on}\quad\Gamma_{\ge iD},\quad i=0,...,k, \end{split}\end{equation} where $\Gamma_{\ge iD}=\{x\in M:d_g(x,\Gamma)\ge iD\}$. If inductive assumption two is true for all $k\in\mathbb{N}$, we take $\widehat{g}_{n+1}$ to be a subsequential limit of $\widehat{g}_{n,k}$ as $k\rii$; then $\widehat{g}_{n+1}$ satisfies \eqref{e: intoinduction on n} for $n+1$. Inductive assumption two clearly holds for $k=0$, by taking $\widehat{g}_{n,0}=\widehat{g}_n$ and using inductive assumption one for $n$. Now assume that it holds for some $k\ge0$; we verify it for $k+1$ by applying the symmetry improvement theorem. More precisely, we consider the harmonic map heat flow from $(M,g(t))$ to the Ricci flow $\widehat{g}_{n,k}(t)$ starting from $\widehat{g}_{n,k}$ on $[0,T]$. The error between $g(t)$ and $\widehat{g}_{n,k}(t)$ is then described by a Ricci-Deturck flow perturbation. Let $\widehat{g}_{n,k+1}$ be the final-time metric $\widehat{g}_{n,k}(T)$ modulo the rotationally invariant part of the error and a diffeomorphism. Since the oscillatory part of the error decays exponentially in time by the symmetry improvement theorem, we can show that $\widehat{g}_{n,k+1}$ satisfies \eqref{e: intoinduction on k}.
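To illustrate how \eqref{e: intoinduction on k} yields the improved exponent in the limit, we sketch the elementary bookkeeping (the constants here are only indicative): if $iD\le d_g(x,\Gamma)<(i+1)D$, then $C^{-i}\le C\cdot C^{-d_g(x,\Gamma)/D}$, so
\begin{equation*}
e^{-\alpha_n d_g(x,\Gamma)}\cdot C^{-i}\le C\cdot e^{-\left(\alpha_n+\frac{\ln C}{D}\right)\,d_g(x,\Gamma)}.
\end{equation*}
Hence, choosing $D$ so that $\frac{\ln C}{D}\ge\alpha_{n+1}-\alpha_n$ (note that the stepwise factor improves the estimate only when $C>1$) and absorbing the multiplicative constant $C$ at large distances, the limit $\widehat{g}_{n+1}$ satisfies \eqref{e: intoinduction on n} with $\alpha_{n+1}$ in place of $\alpha_n$.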
This completes the two inductions, and hence we obtain an $SO(2)$-symmetric metric $\overline{g}_1$ satisfying \eqref{e: g_2}. Lastly, we modify the metric $\overline{g}_1$ to obtain the desired approximating $SO(2)$-symmetric metric $\overline{g}$, which satisfies both \eqref{e: g_2} and \begin{equation}\label{e: g3} |\overline{g}-g|_{C^{100}}(x)\ri0\quad\textit{as}\quad x\rii. \end{equation} Note that $\overline{g}_1$ already satisfies \eqref{e: g3} as we move away from $\Gamma$; we just need to extend this estimate near $\Gamma$. Since the soliton converges along the two ends of $\Gamma$ to $\R\times\cigar$, which is $SO(2)$-symmetric, we can obtain $\overline{g}$ by gluing $\overline{g}_1$ with $\R\times\cigar$ in suitable neighborhoods of the two ends of $\Gamma$. \textbf{Section \ref{s: lie} } The goal of Sections \ref{s: lie} and \ref{s: killing field} is to construct a non-trivial Killing field of the soliton. We do this by a global stability argument using a heat kernel method, which is consistent with our curvature estimates and the estimates of the approximating metrics. In this section, we study the solution of the following initial value problem for the linearized Ricci-Deturck flow equation: \begin{equation}\begin{cases}\label{e: special heat equation} \pt h(t)=\Delta_{L,g(t)}h(t),\\ h(0)=\LL_Xg, \end{cases}\end{equation} where $X$ is the Killing field of the approximating $SO(2)$-symmetric metric obtained in Section \ref{s: Approximating $SO(2)$-symmetric metrics}. By the conditions \eqref{e: g_2} and \eqref{e: g3}, and the exponential lower bound from Theorem \ref{t': curvature estimate}, we can deduce that $|\frac{\LL_Xg}{R}|(x)\ri0$ as $x\rii$. We show that $|h(t)|\ri0$ as $t\rii$.
To prove this, we first observe by Anderson-Chow's curvature pinching \cite{AC} that $|h|$ satisfies the following inequality: \begin{equation}\label{e: think} |h(x,t)|\le\int_M G(x,t;y,0)|h|(y,0)\,d_0y, \end{equation} where $G(x,t;y,s)$ is the heat kernel of the following heat-type equation: \begin{equation}\label{e: ok} \partial_t u=\Delta u+\frac{2|\Ric|^2(x,t)}{R(x,t)}u. \end{equation} Our key estimate is a vanishing theorem for the heat kernel $G(x,t;y,s)$ as $t\rii$, for any fixed pair $(y,s)$. Using this vanishing theorem, we can show that the integral in \eqref{e: think} over any compact subset is arbitrarily small as $t\rii$. For the integral outside the compact subset, by the initial condition it is an arbitrarily small multiple of $R$ integrated against the heat kernel $G$, which is bounded by the maximum of $R$, seeing that $R$ is also a solution of \eqref{e: ok}. \textbf{Section \ref{s: killing field} } In this section we construct a Killing field of the soliton metric. Let $X$ be the Killing field of the approximating $SO(2)$-symmetric metric obtained in Section \ref{s: Approximating $SO(2)$-symmetric metrics}. Let $(M,g(t))$ be the Ricci flow of the soliton. Let $Q(t)=\pt \phi_{t*}(X)-\Delta_{g(t)} \phi_{t*}(X)-\Ric_{g(t)}(\phi_{t*}(X))$, and let $Y(t)$ be a time-dependent vector field which solves the equation \begin{equation*} \begin{cases} \pt Y(t)-\Delta Y(t)-\Ric(Y(t))=Q(t),\\ Y(0)=0. \end{cases} \end{equation*} Moreover, let $X(t):=\phi_{t*}(X)-Y(t)$. Then $X(t)$ solves the following initial value problem: \begin{equation*} \begin{cases} \partial_t X(t)-\Delta X(t)-\Ric(X(t))=0,\\ X(0)=X, \end{cases} \end{equation*} and the symmetric 2-tensor field $\LL_{X(t)}g(t)$ satisfies the equation \eqref{e: special heat equation}. Therefore, by the result from Section \ref{s: lie} we see that $\LL_{X(t)}g(t)$ tends to zero as $t\rii$. So the limit of $X(t)$ as $t\rii$ is a Killing field of $(M,g)$.
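The passage from the vector field equation to \eqref{e: special heat equation} can be sketched as follows, using a standard commutation identity along a Ricci flow background, which we state without proof:
\begin{equation*}
(\partial_t-\Delta_{L,g(t)})\,\LL_{X(t)}g(t)=\LL_{(\partial_t-\Delta-\Ric)X(t)}\,g(t).
\end{equation*}
Since $X(t)$ solves $\partial_t X-\Delta X-\Ric(X)=0$, the right-hand side vanishes, so $h(t)=\LL_{X(t)}g(t)$ solves $\pt h=\Delta_{L,g(t)}h$ with $h(0)=\LL_Xg$, which is exactly \eqref{e: special heat equation}.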
To show that the Killing field is non-zero, we first show that $Q(t)$ satisfies a polynomial decay away from $\Gamma$, as a consequence of \eqref{e: g_2} and the polynomial curvature upper bound from Theorem \ref{t': curvature estimate}. Then by some heat kernel estimates on $|Y(t)|$ we show that it also satisfies the polynomial decay away from $\Gamma$, which guarantees the non-vanishing of the limit of $X(t)$ as $t\rii$. \textbf{Section \ref{s: O(2)-symmetry} } We prove Theorem \ref{t: symmetry of flying wing}, the $O(2)$-symmetry, in this section. First, let $X$ be the Killing field obtained in Section \ref{s: killing field}, and let $\chi_{\theta}$, $\theta\in\R$, be the isometries generated by $X$. We show that the $\chi_{\theta}$ commute with the diffeomorphisms $\phi_t$ generated by $\nabla f$. Then we show that $\chi_{\theta}$ is an $SO(2)$-isometry. This uses the existence of a maximum point of $R$, which must be fixed by the isometries $\chi_{\theta}$. Since the maximum point of $R$ is also a critical point of $f$, it follows that $f$ is invariant under the isometries. Using this we can show that $\chi_{\theta}$ is an $SO(2)$-isometry which fixes the edge $\Gamma$. Lastly, in order to show that the soliton is also $O(2)$-symmetric, it remains to show that the curvature form of the $SO(2)$-isometry vanishes everywhere. By using the soliton equation and the curvature formula under the $SO(2)$-isometry, we can reduce this to the vanishing of a scaling invariant quantity at a point on $\Gamma$. By a limiting argument and the scaling invariance, this can be further reduced to the Euclidean space $\R^3$, where the $SO(2)$-isometry is the rotation around the $z$-axis, and hence the vanishing assertion clearly holds. \subsection{Acknowledgments} I would like to thank Richard Bamler for his constant help and encouragement, and Otis Chodosh, Huai-Dong Cao, Bennett Chow, Xiaohua Zhu, and Brian White for helpful discussions and comments.
\end{section} \section{Preliminaries}\label{s: pre} In the following we present most of the definitions and concepts that are needed in the statements and proofs of the main results of this paper. \subsection{Steady gradient Ricci solitons} \begin{defn}[Steady Ricci soliton] We say a smooth complete Riemannian manifold $(M,g)$ is a steady Ricci soliton if it satisfies \begin{equation}\label{e: soliton} \Ric=\frac{1}{2}\LL_X g, \end{equation} for some smooth vector field $X$. If moreover the vector field is the gradient of some smooth function $f$, then we say it is a steady gradient Ricci soliton, and $f$ is the potential function. In this case, the soliton satisfies the equation \begin{equation*} \Ric=\nabla^2 f. \end{equation*} By a direct computation using \eqref{e: soliton}, the family of metrics $g(t)=\phi_t^*(g)$, $t\in(-\infty,\infty)$, satisfies the Ricci flow equation, where $\{\phi_t\}_{t\in(-\infty,\infty)}$ is the one-parameter group of diffeomorphisms generated by $-\nabla f$ with $\phi_0$ the identity. We say $g(t)$ is the Ricci flow of the soliton. \end{defn} Throughout the paper, we use the triple $(M,g,f)$ to denote a steady gradient soliton $(M,g)$ with a potential function $f$, and use the quadruple $(M,g,f,p)$ to denote the soliton when $p\in M$ is a critical point of $f$. By the maximum principle, 3D steady gradient Ricci solitons must have non-negative sectional curvature \cite{ChenBL}. Moreover, by the strong maximum principle, see e.g. \cite[Lemma 4.13, Corollary 4.19]{MT}, a 3D steady gradient Ricci soliton must be isometric to a quotient of $\R\times\cigar$ if the curvature is not strictly positive everywhere. Therefore, throughout the paper we will assume that our soliton has positive curvature. So by the soul theorem, the manifold is diffeomorphic to $\R^3$, see e.g. \cite{petersen}. There are several important identities for steady gradient Ricci solitons due to Hamilton, see e.g. \cite{HaRF}.
In particular, we will frequently use \begin{equation}\label{e: two} \begin{split} \langle\nabla R,\nabla f\rangle&=-2\Ric(\nabla f,\nabla f),\\ R+|\nabla f|^2&=\textnormal{const}. \end{split} \end{equation} By the second equation, a critical point of $f$ must be a maximum point of $R$. For a 3D steady gradient Ricci soliton, since the Ricci curvature is positive, the first equation implies that a maximum point of $R$ is also a critical point of $f$. We will show in Section \ref{s: asymptotic geometry} that the critical point exists in all 3D steady gradient Ricci solitons. Hamilton's cigar soliton is the first example of a Ricci soliton \cite{cigar}. It is rotationally symmetric and has positive curvature. The cigar soliton is an important notion in this paper. In the following we review the definition of the cigar soliton and some properties we will use, including a precise description of the curvature decay and the tip contracting rate. \begin{defn}[Cigar soliton, c.f. \cite{cigar}]\label{d: cigar} Hamilton's cigar soliton is a complete Riemannian surface $(\R^2,g_c,f)$, where \begin{equation*} g_c:=\frac{dx^2+dy^2}{1+x^2+y^2},\quad\textit{and}\quad f=\log(1+x^2+y^2). \end{equation*} As a solution of the Ricci flow, its time-dependent version is \begin{equation*} g_c(t):=\frac{dx^2+dy^2}{e^{4t}+x^2+y^2}. \end{equation*} Letting $s$ denote the distance to the cigar tip $(0,0)$, we may rewrite $g_c$ as \begin{equation}\label{e: warped cigar} g_c=ds^2+\tanh^2s\,d\theta^2, \end{equation} and the scalar curvature of $g_c$ is \begin{equation}\label{e: cigar R} R_{\Sigma}=4\,\textnormal{sech}^2s=\frac{16}{(e^s+e^{-s})^2}. \end{equation} In particular, $R(x_{\textnormal{tip}})=4$ and $K(x_{\textnormal{tip}})=2$.
For a fixed $\theta_0\in[0,2\pi)$, the curve $\gamma(s):=(\theta_0,s)$ is a unit speed ray starting from the tip, and we can also compute that \begin{equation}\label{e: integrate Ricci} \int_{0}^{r}\Ric_{\Sigma}(\gamma'(s),\gamma'(s))\,ds=\int_0^{r}2\,\textnormal{sech}^2s\,ds=2\tanh s|^{r}_0=2\left(1-\frac{2}{e^{2r}+1}\right), \end{equation} which converges to $2$ as $r\rii$. Note that this integral is the speed at which a point at distance $r$ drifts away from the tip in the backward Ricci flow $g_c(t)$. \end{defn} Throughout the paper, by abuse of notation, we will use $g_c$ to denote both the metric on the cigar and the product metric on $\R\times\cigar$ such that $R(x_{tip})=4$; and we will use $g_{stan}$ to denote the product metrics on $\R\times S^1$ and $\RR\times S^1$ such that the length of the $S^1$-fiber is equal to $2\pi$. With this convention, it is easy to see from \eqref{e: warped cigar} that for any sequence of points $x_i\rii$, the pointed manifolds $(\cigar,g_c,x_i)$ smoothly converge to $(\R\times S^1,g_{stan})$ in the Cheeger-Gromov sense. Next, we introduce the concepts of collapsing and non-collapsing. \begin{defn}[Collapsing and non-collapsing] Let $(M^n,g)$ be an n-dimensional Riemannian manifold. We say that it is non-collapsed (resp. collapsed) if there exists (resp. does not exist) a constant $\kappa>0$ such that the following holds: For all $x\in M$, if $|\Rm|<r^{-2}$ on $B_g(x,r)$ for some $r>0$, then \begin{equation*} vol_g(B_g(x,r))\ge\kappa r^n. \end{equation*} \end{defn} It is easy to see that an n-dimensional Riemannian manifold is collapsed if it has an asymptotic limit isometric to $\R^{n-2}\times\cigar$. \begin{lem}\label{l: cigar implies collapsing} Let $(M^n,g)$ be an n-dimensional Riemannian manifold. Suppose there exist a sequence of points $x_i\in M$ and constants $r_i>0$ such that the pointed manifolds $(M,r_i^{-2}g,x_i)$ smoothly converge to $\R^{n-2}\times\cigar$. Then $(M,g)$ is collapsed.
\end{lem} \begin{proof} By the assumption we may choose a sequence of points $y_i\in M$ such that $(M,r_i^{-2}g,y_i)$ converge to $\R^{n-1}\times S^1$. So there is a sequence of constants $A_i\rii$ such that $|\Rm|_{r_i^{-2}g}\le A_i^{-2}$ on $B_{r_i^{-2}g}(y_i,A_i)$, and $\lim_{i\rii}\frac{vol_{r_i^{-2}g}B_{r_i^{-2}g}(y_i,A_i)}{A_i^n}=0$. After rescaling, this implies $|\Rm|_{g}\le (A_ir_i)^{-2}$ on $B_{g}(y_i,A_ir_i)$ and $\lim_{i\rii}\frac{vol_{g}B_{g}(y_i,A_ir_i)}{(A_ir_i)^n}=0$. So $(M,g)$ is collapsed. \end{proof} In Section \ref{s: asymptotic geometry} we will see that the converse of Lemma \ref{l: cigar implies collapsing} is also true for all 3D steady gradient Ricci solitons, i.e. any collapsed soliton has asymptotic limits isometric to $\R\times\cigar$. Note that all 3D steady gradient Ricci solitons except the Bryant soliton are collapsed \cite{brendlesteady3d}. \subsection{Local geometry models} We will show in Section \ref{s: asymptotic geometry} that $\R\times\cigar$ and $\RR\times S^1$ arise as asymptotic limits in 3D steady gradient Ricci solitons that are not Bryant solitons. In this subsection we define $\epsilon$-necks, $\epsilon$-cylindrical planes and $\epsilon$-tip points, which are local geometry models corresponding to these asymptotic limits. Moreover, to obtain the asymptotic limits, we need to rescale the soliton by factors that are comparable to the volume scale at the points. \begin{defn}[Volume scale] Let $(M,g)$ be a 3D Riemannian manifold. We define the volume scale $r(\cdot)$ to be \begin{equation*} r(x)=\sup\{s>0: vol_g(B_g(x,s))\ge\omega_0s^3\}, \end{equation*} where $\omega_0>0$ is chosen such that $r(x)=1$ for all $x\in \RR\times S^1$. It is clear that $\omega_0$ is less than the volume of the unit ball in the Euclidean space $\R^3$. \end{defn} We measure the closeness of two pointed Riemannian manifolds by using the following notion of $\epsilon$-isometry.
\begin{defn}[$\epsilon$-isometry between manifolds]\label{d: epsilon isometry} Let $\epsilon>0$ and $m\in\mathbb{N}$. Let $(M^n_i,g_i)$, $i=1,2$, be n-dimensional Riemannian manifolds, and let $x_i\in M_i$. We say a smooth map $\phi:B_{g_1}(x_1,\epsilon^{-1})\ri M_2$, $\phi(x_1)=x_2$, is an $\epsilon$-isometry in the $C^m$-norm if it is a diffeomorphism onto its image, and \begin{equation}\label{e: parti} |\nabla^{k}(\phi^*g_2-g_1)|\le \epsilon\quad \textit{on}\quad B_{g_1}(x_1,\epsilon^{-1}),\quad k=0,1,...,m, \end{equation} where the covariant derivatives and norms are taken with respect to $g_1$. We also say $(M_2,g_2,x_2)$ is $\epsilon$-close to $(M_1,g_1,x_1)$ in the $C^m$-norm. In particular, if $m=[\epsilon^{-1}]$, then we simply say $(M_2,g_2,x_2)$ is $\epsilon$-close to $(M_1,g_1,x_1)$ and $\phi$ is an $\epsilon$-isometry. \end{defn} \begin{defn}[$\epsilon$-isometry between Ricci flows]\label{d: RF-epsilon isometry} Let $\epsilon>0$. Let $(M^n_i,g_i(t))$, $i=1,2$, $t\in[-\epsilon^{-1},\epsilon^{-1}]$, be n-dimensional Ricci flows, and let $x_i\in M_i$. We say a smooth map $\phi:B_{g_1(0)}(x_1,\epsilon^{-1})\ri M_2$, $\phi(x_1)=x_2$, is an $\epsilon$-isometry between the two Ricci flows if it is a diffeomorphism onto its image, and \begin{equation*} |\nabla^{k}(\phi^*g_2(t)-g_1(t))|\le \epsilon\quad \textit{on}\quad B_{g_1(0)}(x_1,\epsilon^{-1})\times[-\epsilon^{-1},\epsilon^{-1}],\quad k=0,1,...,[\epsilon^{-1}], \end{equation*} where the covariant derivatives and norms are taken with respect to $g_1(0)$. We also say $(M_2,g_2(t),x_2)$ is $\epsilon$-close to $(M_1,g_1(t),x_1)$. \end{defn} In the following, we will choose the target manifolds to be the cylinder and the cylindrical plane, and call the manifolds that are close to them $\epsilon$-necks and $\epsilon$-cylindrical planes.
\begin{defn}[$\epsilon$-neck]\label{d: neck} We say a 2D Riemannian manifold $(N,g)$ is an $\epsilon$-neck at $x_0\in N$ for some $\epsilon>0$, if there exists a constant $r>0$ such that $(N,r^{-2}g,x_0)$ is $\epsilon$-close to the cylinder $\R\times S^1$. We say $r$ is the scale of the $\epsilon$-neck and $x_0$ is a center of an $\epsilon$-neck. On $\R\times S^1$, let $x:\R\times S^1\ri\R$ be the projection onto the $\R$-factor, and let $\partial_{x}$ be the corresponding coordinate vector field. Then by an abuse of notation, we will denote the corresponding function and vector field on $N$ modulo the $\epsilon$-isometry by $x$ and $\partial_{x}$. \end{defn} \begin{defn}[$\epsilon$-cap] Let $(M,g)$ be a complete 2D Riemannian manifold. We say a compact subset $\mathcal{C}\subset M$ is an $\epsilon$-cap if $\mathcal{C}$ is diffeomorphic to a 2-ball and the boundary $\partial\mathcal{C}$ is the central circle of an $\epsilon$-neck $N$ in $(M,g)$. We say that the points in $\mathcal{C}\setminus N$ are centers of the $\epsilon$-cap. \end{defn} \begin{defn}[$\epsilon$-cylindrical plane]\label{d: cylindrical plane} We say a 3D Riemannian manifold $(M,g)$ is an $\epsilon$-cylindrical plane at $x_0\in M$ for some $\epsilon>0$, if there exists a constant $r>0$ such that $(M,r^{-2}g,x_0)$ is $\epsilon$-close to the cylindrical plane $\RR\times S^1$. We say $r$ is the scale of the $\epsilon$-cylindrical plane, and $x_0$ is a center of the $\epsilon$-cylindrical plane. Let $x,y:\R^2\times S^1\ri\R$ and $\theta:\R^2\times S^1\ri S^1$ be the projections onto the three product factors. By abuse of notation, we use $\partial_{x},\partial_{y},\partial_{\theta}$ to denote the corresponding vector fields on $M$ modulo the $\epsilon$-isometry. In particular, we call $\partial_{\theta}$ the $SO(2)$-Killing field of the $\epsilon$-cylindrical plane. \end{defn} At a center of an $\epsilon$-cylindrical plane, we introduce another scale in the following, which is comparable to the volume scale.
Since this scale is measured by the lengths of curves, it is more useful than the volume scale in the Ricci flow of the soliton when combined with suitable distance distortion estimates. \begin{defn}[Scale at an $\epsilon$-cylindrical plane]\label{d: h} Let $(M,g)$ be a 3D Riemannian manifold. Suppose $x\in M$ is a center of an $\epsilon$-cylindrical plane. We denote by $h(x)$ the infimum of the lengths of all smooth closed curves at $x$ that are homotopic to the $S^1$-factor of the $\epsilon$-cylindrical plane in $B(x,1000 r(x))$, where $r(x)>0$ is the volume scale at $x$. It is clear that $h(x)$ is achieved by a geodesic loop at $x$. Moreover, by the definition of the volume scale we have $r(x)=1$ and $h(x)=2\pi$ for all $x\in\RR\times S^1$. So when $\epsilon$ is sufficiently small we have \begin{equation*} 1.9\pi\,r(x)\le h(x)\le 2.1\pi\,r(x). \end{equation*} \end{defn} In a manifold that is $\epsilon$-close to the cigar soliton, we call the points that are $\epsilon$-close to the tip of the cigar under the $\epsilon$-isometry the $\epsilon$-tip points. \begin{defn}[$\epsilon$-tip point]\label{d: tip points} Let $\epsilon>0$. Let $(M,g)$ be a 2D Riemannian manifold and $x\in M$. If there is an $\epsilon$-isometry from $(M,r^{-2}(x)g,x)$ to $(\cigar,r^{-2}(x_0)g_c,x_0)$ for some $x_0\in \cigar$ such that $d_{g_c}(x_0,x_{tip})\le\epsilon$, then we say $x$ is an $\epsilon$-tip point. Similarly, let $(M,g)$ be a 3D Riemannian manifold and $x\in M$. Suppose there is an $\epsilon$-isometry from $(M,r^{-2}(x)g,x)$ to $(\R\times\cigar,r^{-2}(x_0)g_c,x_0)$ for some $x_0\in \R\times\cigar$ such that $d_{g_c}(x_0,x_{tip})\le\epsilon$, where $x_{tip}$ is the tip of the cigar with the same $\R$-coordinate as $x_0$. Then we say that $x$ is an $\epsilon$-tip point.
\end{defn} \subsection{Distance distortion estimates and curvature estimates} In this subsection, we review some standard distance distortion estimates and curvature estimates, which are originally due to Hamilton and Perelman \cite{Hamilton_singularity_formation,Pel1}. The following lemma gives an upper bound on the speed of distance shrinking between two points, using only local curvature bounds near the two points. The proof uses the second variation formula, see e.g. \cite[Theorem 18.7]{RFTandA3}. \begin{lem}\label{l: distance laplacian} Let $(M,g(t))_{t\in[0,T]}$ be a Ricci flow of dimension $n$. Let $K,r_0>0$. \begin{enumerate} \item Let $x_0\in M$ and $t_0\in(0,T)$. Suppose that $\Ric\le(n-1)K$ on $B_{t_0}(x_0,r_0)$. Then the distance function $d(x,t)=d_t(x,x_0)$ satisfies the following inequality outside of $B_{t_0}(x_0,r_0)$: \begin{equation*} (\pt-\Delta)|_{t=t_0}d\ge -(n-1)\left(\frac{2}{3}Kr_0+r_0^{-1}\right). \end{equation*} \item Let $t_0\in[0,T)$ and $x_0,x_1\in M$. Suppose \begin{equation*} \Ric(x,t_0)\le(n-1)K, \end{equation*} for all $x\in B_{t_0}(x_0,r_0)\cup B_{t_0}(x_1,r_0)$. Then \begin{equation*} \pt|_{t=t_0}d_t(x_0,x_1)\ge -2(n-1)(Kr_0+r_0^{-1}). \end{equation*} \end{enumerate} \end{lem} The following lemmas control how fast a metric ball shrinks and expands along the Ricci flow, under nearby curvature assumptions. They can be proved by using the Ricci flow equation, see e.g. \cite[Lemma 2.1, 2.2]{simon2021local}. \begin{lem}\label{l: expanding lemma} Let $(M,g(t))_{t\in[0,T]}$ be a Ricci flow of dimension $n$, $x_0\in M$. Let $K,A>0$. \begin{enumerate} \item Suppose $\Ric_{g(t)}\ge -K$ on $ B_t(x_0,A)\subset\subset M$ for all $t\in[0,T]$. Then the following holds for all $t\in[0,T]$: \begin{equation*} B_0(x_0,A\,e^{-KT})\subset B_t(x_0,A\,e^{-K(T-t)}). \end{equation*} \item Suppose $\Ric_{g(t)}\le K$ on $ B_t(x_0,A)\subset\subset M$ for all $t\in[0,T]$.
Then the following holds for all $t\in[0,T]$: \begin{equation*} B_t(x_0,A \,e^{-Kt})\subset B_0(x_0,A). \end{equation*} \end{enumerate} \end{lem} The following curvature estimate is also due to Perelman \cite[Corollary 11.6]{Pel1}. It provides a curvature upper bound at points in a Ricci flow, if the local volume has a positive lower bound. For a more general version of this estimate see \cite[Proposition 3.2]{BamA}. \begin{lem}[Perelman's curvature estimate]\label{l: Perelman} For any $\kappa>0$ and $n\in \mathbb{N}$, there exists $C>0$ such that the following holds: Let $(M^n,g(t))$, $t\in[-T,0]$, be an $n$-dimensional Ricci flow (not necessarily complete). Let $x\in M$ be a point with $B_g(x,r)\times[-r^2,0]\subset\subset M\times [-T,0]$ for some $r>0$. Assume also $vol(B_g(x,r))\ge \kappa r^n$. Then $R(x,0)\le \frac{C}{r^2}$. \end{lem} \subsection{Metric comparisons} We need the following notions and facts from metric comparison geometry, see \cite{BGP}. Let $(M,g)$ be a complete n-dimensional Riemannian manifold with non-negative sectional curvature. \begin{lem}[Monotonicity of angles] For any triple of points $o,p,q\in M$, the comparison angle $\widetilde{\measuredangle}poq$ is the angle at the vertex corresponding to $o$ in the Euclidean triangle with side lengths $d_g(o,p),d_g(o,q),d_g(p,q)$. Let $op,oq$ be two minimizing geodesics in $M$ between $o,p$ and $o,q$, and let $\measuredangle poq$ be the angle between them at $o$. Then $\measuredangle poq\ge\widetilde{\measuredangle}poq$. Moreover, for any $p'\in op$ and $q'\in oq$, we have $\widetilde{\measuredangle}p'oq'\ge \widetilde{\measuredangle}poq$. \end{lem} In a non-negatively curved complete non-compact Riemannian manifold, we can equip the space of geodesic rays with a length metric. Moreover, a blow-down sequence of this manifold converges to the metric cone over the space of rays in the Gromov-Hausdorff sense, see e.g. \cite[Prop 5.31]{MT}.
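Explicitly, the comparison angle above is computed from the Euclidean law of cosines:
\begin{equation*}
\cos\widetilde{\measuredangle}poq=\frac{d_g^2(o,p)+d_g^2(o,q)-d_g^2(p,q)}{2\,d_g(o,p)\,d_g(o,q)}.
\end{equation*}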
Let $\gamma_1,\gamma_2$ be two unit speed rays starting from a point $p\in M$. By the monotonicity of angles, the limit $\lim_{r\rii}\widetilde{\measuredangle}\gamma_1(r)p\gamma_2(r)$ exists; we say it is the angle at infinity between $\gamma_1$ and $\gamma_2$, and denote it by $\measuredangle(\gamma_1,\gamma_2)$. \begin{lem}[Space of rays] Let $p\in M$ and let $S_{\infty}(M,p)$ be the space of equivalence classes of rays starting from $p$, where two rays are equivalent if and only if the angle at infinity between them is zero, and the distance between two rays is the angle at infinity between them. Then $S_{\infty}(M,p)$ is a compact length space. \end{lem} \begin{lem}[Asymptotic cone]\label{l: converge to asymp} Let $p\in M$ and let $\mathcal{T}(M,p)$ be the metric cone over $S_{\infty}(M,p)$. Then for any $\lambda_i\rii$, the sequence of pointed manifolds $(M,\lambda_i^{-1}g,p)$ converges to $\mathcal{T}(M,p)$, with $p$ converging to the cone point, in the pointed Gromov-Hausdorff sense. Moreover, for $p,q\in M$, the cone $\mathcal{T}(M,p)$ is isometric to $\mathcal{T}(M,q)$. We say $(M,g)$ is asymptotic to $\mathcal{T}(M,p)$, and $\mathcal{T}(M,p)$ is the asymptotic cone of $M$. \end{lem} So the asymptotic cone $\mathcal{T}(M,p)$ is in fact independent of the choice of $p$. It is easy to see that the asymptotic cones of the Bryant soliton and $\R\times\cigar$ are a ray and a half-plane, respectively. In \cite{Lai2020_flying_wing}, the author constructed a family of 3D steady gradient Ricci solitons that are asymptotic to metric cones over an interval $[0,a]$, $a\in(0,\pi)$. We will show in Section \ref{s: asymptotic geometry} that this is true for all 3D steady gradient Ricci solitons with positive curvature that are not Bryant solitons. In the rest of this subsection we introduce a very useful technical notion called a strainer \cite{BGP}.
It is similar to the notion of an orthogonal frame in the Euclidean space $\R^n$, in that it provides a local coordinate system in the metric space. \begin{defn}[$(m,\delta)$-strainer] Let $\delta>0$ and $1\le m\in\mathbb{N}$. A $2m$-tuple $(a_1,b_1,...,a_m,b_m)$ of points in a metric space $(X,d)$ is called an $(m,\delta)$-strainer around a point $x\in X$ if \begin{equation*} \begin{cases} \widetilde{\measuredangle}a_ixb_j>\frac{\pi}{2}-\delta \quad\textit{for all }i\neq j,\quad i,j=1,...,m,\\ \widetilde{\measuredangle}a_ixa_j>\frac{\pi}{2}-\delta \quad\textit{for all }i\neq j,\quad i,j=1,...,m,\\ \widetilde{\measuredangle}b_ixb_j>\frac{\pi}{2}-\delta \quad\textit{for all }i\neq j,\quad i,j=1,...,m,\\ \widetilde{\measuredangle}a_ixb_i>\pi-\delta\quad\textit{for all }i=1,...,m. \end{cases} \end{equation*} The strainer is said to have size $r$ if $d(x,a_i)=d(x,b_i)=r$ for all $i=1,...,m$. It is said to have size at least $r$ if $d(x,a_i)\ge r$ and $d(x,b_i)\ge r$ for all $i=1,...,m$. \end{defn} We also introduce the notion of $(m+\frac{1}{2},\delta)$-strainers. Similarly, this notion provides a local coordinate system in a metric space that looks like a half-space $\R^n_+=\R^{n-1}\times\R_+$. \begin{defn}[$(m+\frac{1}{2},\delta)$-strainer] Let $\delta>0$ and $1\le m\in\mathbb{N}$. A $(2m+1)$-tuple $(a_1,b_1,...,a_m,b_m,a_{m+1})$ of points in a metric space $(X,d)$ is called an $(m+\frac{1}{2},\delta)$-strainer around a point $x\in X$ if \begin{equation*} \begin{cases} \widetilde{\measuredangle}a_ixb_j>\frac{\pi}{2}-\delta , &\text{for all } i\neq j,\quad i=1,...,m+1,\quad j=1,...,m,\\ \widetilde{\measuredangle}a_ixa_j>\frac{\pi}{2}-\delta , &\text{for all } i\neq j,\quad i,j=1,...,m+1,\\ \widetilde{\measuredangle}b_ixb_j>\frac{\pi}{2}-\delta , &\text{for all } i\neq j,\quad i,j=1,...,m,\\ \widetilde{\measuredangle}a_ixb_i>\pi-\delta , &\text{for all } i=1,...,m.
\end{cases} \end{equation*} The strainer is said to have size $r$ if $d(x,a_i)=d(x,b_j)=r$ for all $i=1,...,m+1$ and $j=1,...,m$. It is said to have size at least $r$ if $d(x,a_i)\ge r$ for all $i=1,...,m+1$ and $d(x,b_j)\ge r$ for all $j=1,...,m$. \end{defn} \subsection{Heat kernel estimates} We prove a few lemmas using standard heat kernel estimates for the heat equation under Ricci flow. Let $G(x,t;y,s)$, $x,y\in M$, $s<t$, be the heat kernel of the heat equation $\pt u=\Delta u$ under $g(t)$, that is, \begin{equation}\label{e: standard heat kernel} \begin{split} \pt G(x,t;y,s)&=\Delta_{x,t} G(x,t;y,s),\\ \lim_{t\searrow s}G(\cdot,t;y,s)&=\delta_{y}. \end{split} \end{equation} It is easy to see that $G(x,t;y,s)$ is also the heat kernel of the conjugate heat equation, that is, \begin{equation*} \begin{split} -\partial_s G(x,t;y,s)&=\Delta_{y,s} G(x,t;y,s)-R(y,s)\,G(x,t;y,s),\\ \lim_{s\nearrow t}G(x,t;\cdot,s)&=\delta_{x}. \end{split} \end{equation*} We have the following Gaussian upper bound for $G$. \begin{lem}[Upper bound of the heat kernel for an evolving metric](cf. \cite[Theorem 26.25]{RFTandA3})\label{l: heat kernel lower bound implies upper bound} Let $(M^n,g(t))$ be a complete Ricci flow on $[0,T]$ with $|\Rm|\le K$. There exists a constant $C<\infty$ depending only on $n,T,K$ such that the conjugate heat kernel satisfies \begin{equation}\label{e: theorem 2625} G(x,t;y,s)\le \frac{C}{vol^{1/2}(B_s(x,\sqrt{\frac{t-s}{2}}))\cdot vol^{1/2}(B_s(y,\sqrt{\frac{t-s}{2}}))}\, \textnormal{exp}\left(-\frac{d^2_s(x,y)}{C(t-s)}\right) \end{equation} for any $x,y\in M$ and $0\le s<t\le T$. \end{lem} Using this we can prove the following lemma, which gives a time- and distance-dependent upper bound on subsolutions of the heat equation satisfying a certain initial upper bound.
\begin{lem}\label{l: heat equation} For any $C_0>0$, $\alpha>0$ and $T>0$, there is a constant $C(\alpha,T,C_0)<\infty$ such that the following holds: Let $(M,g(t),y_0)$ be a complete Ricci flow on $[0,T]$. Assume the following: \begin{enumerate} \item $|\Rm|(x,t)\le C_0$ for all $x\in M$ and $t\in[0,T]$. \item $u(x,t)$ is a function with $\partial_t u\le\Delta u$, satisfying \begin{equation*} |u|(x,0)\le e^{\alpha\,d_0(x,y_0)}. \end{equation*} \end{enumerate} Then $|u|(x,t)\le C(\alpha,T,C_0)\,e^{(\alpha+1)D}$ for any $D>0$ and $(x,t)\in B_0(y_0,D)\times[0,T]$. \end{lem} \begin{proof} First, by the curvature assumption $|\Rm|\le C_0$, it follows immediately from the Ricci flow equation that the metrics at different times are comparable to each other, i.e. \begin{equation*} e^{-CC_0}g(s)\le g(t)\le e^{CC_0} g(s), \end{equation*} where $C$ depends only on $n$ and $T$; indeed, $|\partial_t\log g_t(v,v)|=2|\textnormal{Ric}(v,v)|/g_t(v,v)\le C(n)\,C_0$ for any nonzero tangent vector $v$. From now on we will use $C$ to denote constants that depend only on $T,C_0$ (and the dimension), which may vary from line to line. Since $u$ is a subsolution of the heat equation, the reproduction formula and the maximum principle give \begin{equation}\label{e: I+II} u(x,t)\le\int_M G(x,t;y,0)\cdot |u|(y,0)\,d_0y; \end{equation} as $-u$ is also a subsolution with the same initial bound, it suffices to bound the right-hand side. Now assume $x\in B_0(y_0,D)$, and split the integral into two parts \begin{equation*} \int_M G(x,t;y,0)\cdot |u|(y,0)\,d_0y=\int_{B_0(x,1)} G(x,t;y,0)\cdot |u|(y,0)\,d_0y+\int_{M\setminus B_0(x,1)} G(x,t;y,0)\cdot |u|(y,0)\,d_0y. \end{equation*} For the first part, note that for any $y\in B_0(x,1)$, we have \begin{equation*} d_0(y,y_0)\le d_0(y,x)+d_0(x,y_0)\le 1 +D. \end{equation*} So by the assumption on $u$ we have $|u|(y,0)\le e^{\alpha(1+D)}$, and hence, using $\int_M G(x,t;y,0)\,d_0y=1$, \begin{equation}\label{e: I} \int_{B_0(x,1)} G(x,t;y,0)\cdot |u|(y,0)\,d_0y\le e^{\alpha(1+D)}\int_{B_0(x,1)} G(x,t;y,0)\,d_0y\le e^{\alpha(1+D)}.
\end{equation} To estimate the second part in \eqref{e: I+II}, we first claim that for any $y\in M\setminus B_0(x,1)$, the following holds: \begin{equation}\label{claim: heat kernel} G(x,t;y,0)\le C \cdot\textnormal{exp}\left(-\frac{d^2_0(x,y)}{Ct}\right). \end{equation} To this end, we note that if $t\ge 1$, the volumes of the two balls $B_0(x,\sqrt{\frac{t}{2}})$ and $B_0(y,\sqrt{\frac{t}{2}})$ are bounded below by $C^{-1}$. So the claim follows immediately from \eqref{e: theorem 2625}. If $t<1$, then by the assumption on the injectivity radius and the curvature, we see that the volumes of the two balls $B_0(x,\sqrt{\frac{t}{2}})$ and $B_0(y,\sqrt{\frac{t}{2}})$ are bounded below by $C^{-1}\left(\frac{t}{2}\right)^{n/2}$. Note also that $\left(\frac{t}{2}\right)^{-n/2}\le C\cdot\textnormal{exp}\left(\frac{1}{2Ct}\right)$ for $t<1$, while $d_0(x,y)\ge1$ gives $\textnormal{exp}\left(-\frac{d^2_0(x,y)}{Ct}\right)\le \textnormal{exp}\left(-\frac{1}{2Ct}\right)\cdot\textnormal{exp}\left(-\frac{d^2_0(x,y)}{2Ct}\right)$. Hence we obtain \begin{equation*} G(x,t;y,0)\le C\left(\frac{t}{2}\right)^{-n/2}\textnormal{exp}\left(-\frac{d^2_0(x,y)}{Ct}\right)\le C \cdot\textnormal{exp}\left(-\frac{d^2_0(x,y)}{2Ct}\right), \end{equation*} which proves the claim, after enlarging the constant $C$ in \eqref{claim: heat kernel}. For any $y\in M\setminus B_0(x,1)$, we have \begin{equation*} |u|(y,0)\le e^{\alpha d_0(y_0,y)}\le e^{\alpha(d_0(y_0,x)+d_0(x,y))}\le e^{\alpha D}\cdot e^{\alpha d_0(x,y)}. \end{equation*} Combining this with \eqref{claim: heat kernel}, we see that the second part in \eqref{e: I+II} satisfies \begin{equation}\begin{split}\label{e: II final} \int_{M\setminus B_0(x,1)} G(x,t;y,0)\cdot |u|(y,0)\,d_0y&\le C\,e^{\alpha D}\int_{M} \textnormal{exp}\left(-\frac{d^2_0(x,y)}{Ct}\right)\cdot e^{\alpha d_0(x,y)}\,d_0y\\ &\le C(\alpha,T)\,e^{\alpha D}\int_{M} \textnormal{exp}\left(-\frac{d^2_0(x,y)}{Ct}\right)\,d_0y\\ &\le C(\alpha,T,C_0)\,e^{\alpha D}, \end{split}\end{equation} where in the last inequality we used the curvature bound $|\Rm|\le C_0$ and $t\le T$, which allow us to apply a volume comparison to estimate the last integral term, see also \cite[Lemma 2.8]{Lai2019}.
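For the reader's convenience, we sketch how the volume comparison yields this last step; the computation is standard. Since $|\Rm|\le C_0$ implies $\textnormal{Ric}\ge-(n-1)C_0\,g$ at time $0$, the Bishop-Gromov volume comparison gives $\textnormal{vol}(B_0(x,\rho))\le C(n,C_0)\,e^{(n-1)\sqrt{C_0}\,\rho}$ for all $\rho\ge1$. Decomposing $M$ into the annuli $B_0(x,j+1)\setminus B_0(x,j)$, $j\ge0$, and using $t\le T$, we get
\begin{equation*}
\int_{M} \textnormal{exp}\left(-\frac{d^2_0(x,y)}{Ct}\right)d_0y\le\sum_{j=0}^{\infty}\textnormal{vol}\left(B_0(x,j+1)\right)\textnormal{exp}\left(-\frac{j^2}{CT}\right)\le C(n,C_0)\sum_{j=0}^{\infty}\textnormal{exp}\left((n-1)\sqrt{C_0}\,(j+1)-\frac{j^2}{CT}\right)<\infty.
\end{equation*}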
Combining the two inequalities \eqref{e: I} and \eqref{e: II final}, we get $u(x,t)\le C(\alpha,T,C_0)\,e^{(\alpha+1)D}$; applying the same argument to $-u$ yields the bound for $|u|(x,t)$, which proves the lemma. \end{proof} \section{Asymptotic geometry at infinity}\label{s: asymptotic geometry} In this section, we study the asymptotic geometry of a 3D steady gradient soliton that is not a Bryant soliton. We show that it dimension reduces to the cigar soliton along two integral curves of $\nabla f$. Moreover, we show that the asymptotic cone of the soliton is isometric to a metric cone over an interval $[0,a]$, $a\in(0,\pi)$. We also prove a few other geometric properties of the soliton, one of which is that the scalar curvature attains its maximum at some point. \subsection{Classification of asymptotic limits} The main result in this subsection is Lemma \ref{l: DR to cigar}, which states that the soliton has exactly two asymptotic limits, namely $\RR\times S^1$ and $\R\times\cigar$. We will use the following lemma to show that $\R\times T^2$ cannot be an asymptotic limit. \begin{lem}\label{l: T2} Let $(M,g)$ be a 3D complete Riemannian manifold with positive curvature, let $\R\times T^2=\R\times S^1\times S^1$ be equipped with some product metric $g_0$ (in which the lengths of the two $S^1$-fibers are not necessarily equal), and let $x_0\in\R\times T^2$. Then there exists an $\epsilon_0>0$ such that for any $y\in M$, the pointed manifold $(M,r^{-2}(y)g,y)$ is not $\epsilon_0$-close to $(\R\times T^2,r^{-2}(x_0)g_0,x_0)$. \end{lem} \begin{proof} Suppose no such $\epsilon_0$ exists. Then there exists a sequence of points $y_k\in M$ such that $(M,r^{-2}(y_k)g,y_k)$ smoothly converges to $(\R\times T^2,r^{-2}(x_0)g_0,x_0)$. It is clear that $y_k\rii$, because otherwise $M$ would be isometric to $\R\times T^2$, contradicting the positive curvature.
So there exist $\epsilon_k\ri0$, open neighborhoods $U_k$ of $y_k$ and diffeomorphisms $\phi_k:[-\epsilon_k^{-1},\epsilon_k^{-1}]\times T^2\ri U_k\subset M$ mapping $x_0\in\{0\}\times T^2$ to $y_k$, such that each $\phi_k$ is an $\epsilon_k$-isometry. We call $T^2_k:=\phi_k(\{0\}\times T^2)$ the central torus, which is homeomorphic to $T^2$. In the rest of the proof we show that the connected component through $y_k$ of the level set of the function $d_{y_0}(\cdot):=d_g(y_0,\cdot)$ is homeomorphic to a 2-torus. Suppose this claim is true. Since $d_g(y_0,y_k)\rii$ as $k\rii$, this contradicts the fact that a level set of the distance function to a fixed point in a positively-curved 3D Riemannian manifold must be homeomorphic to a 2-sphere at all large distances, see e.g. \cite[Corollary 2.11]{MT}; this will prove the lemma. To show the claim, without loss of generality, we may assume after a rescaling that $r(y_k)=1$. Let $s:\R\times T^2\ri \R$ be the coordinate function in the $\R$-direction of $\R\times T^2$, and let $X=\phi_{k*}(\partial_{s})$ be the induced vector field on $U_k\subset M$. For any small $\epsilon>0$, it is easy to see that for all large $k$, the angle formed by any minimizing geodesic from $y_0$ to a point $p\in U_k$ and $X(p)$ is less than $\epsilon$. Let $\chi_{\mu}$, $\mu\in(-100,100)$, be the flow generated by $X$ on $\phi_k((-100,100)\times T^2)\subset M$; then the distance function $d_{y_0}(\cdot)$ increases along $\chi_{\mu}$ at a rate bounded below by $1-C_0\epsilon$, where $C_0>0$ is a universal constant. In particular, an integral curve of $X$ intersects a level set of $d_{y_0}(\cdot)$ in a single point. Therefore, there is a continuous function $\tilde{s}:T^2_k\ri\R$ such that for any $x\in T^2_k$, we have $ d_{y_0}(\chi_{\tilde{s}(x)}(x))=d_{y_0}(y_k)$. Let $F:T^2_k\ri d_{y_0}^{-1}(d_{y_0}(y_k))$ be defined by $F(x)=\chi_{\tilde{s}(x)}(x)$; then $F$ is continuous.
We show that $F$ is injective: suppose $F(x_1)=F(x_2)=y$; then $x_i=\chi_{-\tilde{s}(x_i)}(y)$, $i=1,2$. Since $x_1,x_2\in T^2_k$, it follows that $(\phi_k^{-1})^*s(x_1)=(\phi_k^{-1})^*s(x_2)=0$ and \begin{equation*} 0=(\phi_k^{-1})^*s(x_1)-(\phi_k^{-1})^*s(x_2) =\tilde{s}(x_2)-\tilde{s}(x_1). \end{equation*} So $\tilde{s}(x_2)=\tilde{s}(x_1)$, and hence $x_1=x_2$. Since $T^2_k$ is compact, $F$ is a homeomorphism from the 2-torus onto its image, which is a connected component of $d_{y_0}^{-1}(d_{y_0}(y_k))$. This proves the claim. \end{proof} The following lemma will be used to show that every asymptotic limit splits off a line. \begin{lem}\label{l: splitting} Let $(M,g)$ be a complete Riemannian manifold with non-negative sectional curvature, and let $\{y_k\}_{k=0}^{\infty}\subset M$ be a sequence of points with $d_g(y_0,y_k)\rii$. Then after passing to a subsequence of $\{y_k\}_{k=0}^{\infty}$, there exist a ray $\sigma:[0,\infty)\ri M$ with $\sigma(0)= y_0$ and a sequence of numbers $s_k\rii$ such that for $z_k=\sigma(s_k)$, we have $d_g(z_k,y_k)=d_g(y_0,y_k)$ and \begin{equation*} \widetilde{\measuredangle}y_0y_kz_k\ri\pi, \end{equation*} as $k\rii$. \end{lem} \begin{proof} This follows from a standard metric comparison argument; see for example \cite[Lemma 5.1.5]{Bamler_thesis}. \end{proof} \begin{lem}\label{l: DR to cigar} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. Let $p\in M$ be a fixed point. Then for any $\epsilon>0$, there is $D(\epsilon)>0$ such that for all $x\in M\setminus B_{g}(p,D)$, the pointed manifold $(M,r^{-2}(x)g,x)$ is $\epsilon$-close to exactly one of the following: \begin{enumerate} \item $(\mathbb{R}\times\cigar,r^{-2}(\widetilde{x})g_c,\widetilde{x})$; \item $(\RR\times S^1, g_{stan},\widetilde{x})$. \end{enumerate} We call these two limits the asymptotic limits of the soliton.
\end{lem} \begin{proof} First, we show that there exists $D>0$ such that $(M,r^{-2}(x)g,x)$ is $\epsilon$-close to some product space with an $\R$-factor, for all $x\in M\setminus B_{g}(p,D)$. For this, it suffices to show that for any sequence of points $\{y_k\}_{k=0}^{\infty}$ going to infinity, the rescaled Ricci flows $(M,r^{-2}(y_k)g(r^2(y_k)t),(y_k,0))$ subsequentially converge to an ancient Ricci flow that splits off a line. First, we claim that $\frac{r(y_k)}{d_g(y_0,y_k)}\ri0$ as $k\rii$. If this were not true, then $(M,g)$ would have Euclidean volume growth, and hence would be flat by Perelman's curvature estimate (Lemma \ref{l: Perelman}), a contradiction. So by passing to a subsequence we may assume that $(M,r^{-2}(y_k)g(r^2(y_k)t),(y_k,0))$ converges to an ancient 3D Ricci flow, see \cite[Lemma 3.3]{Lai2020_flying_wing}. So by Lemma \ref{l: splitting} and the strong maximum principle \cite[Lemma 4.13, Corollary 4.19]{MT}, the limit flow splits off an $\R$-factor. Therefore, by the classification of ancient 2D Ricci flows \cite{Chu,Sesum}, the limit flow must be isometric to one of the following Ricci flows up to a rescaling: \begin{enumerate} \item $\R\times\cigar$; \item $\RR\times S^1$; \item $\R\times T^2$; \item $\R\times S^2$; \item $\R\times \textnormal{sausage solution}$. \end{enumerate} \yi{First, item (3) is impossible by Lemma \ref{l: T2}. Second, we can argue in the same way as \cite[Theorem 3.7]{Lai2020_flying_wing} to exclude item (5). Note that in \cite[Theorem 3.7]{Lai2020_flying_wing}, we argued under the $O(2)$-symmetry assumption, which is not available here: there, we used the curve $\Gamma(s)$ fixed under the $O(2)$-symmetry to define a diameter function $F(s)$, and showed $\lim_{s\rightarrow\infty}F(s)=\infty$. In our setting, we first find a curve $\tilde{\Gamma}(s):[0,\infty)\rightarrow M$ going to infinity, such that for all $s\ge0$, $(M,R(\tilde{\Gamma}(s))g,\tilde{\Gamma}(s))$ is $\epsilon$-close to either $\R\times\cigar$ or a time-slice of the $\R\times\textnormal{sausage solution}$ at the tip.
Then we can define the diameter function $F(s)$ as in \cite[Theorem 3.7]{Lai2020_flying_wing} using $\tilde{\Gamma}(s)$, and show $\lim_{s\rightarrow\infty}F(s)=\infty$ in the same way. This implies that $\R\times \textnormal{sausage solution}$ cannot appear as a blow-up limit. } \yi{Finally, we exclude item (4) as follows. Suppose $\R\times S^2$ is an asymptotic limit; we claim that then all asymptotic limits are $\R\times S^2$. If so, then by Brendle's result \cite{brendlesteady3d}, a 3D steady Ricci soliton must be the Bryant soliton if it dimension reduces to $S^2$ along any sequence of points tending to infinity. This contradicts our assumption that the soliton is not a Bryant soliton. To show the claim, suppose by contradiction that there is another type of limit, which by the above argument has to be item (1) or (2). First, we observe the following fact: Let $(M_i,g_i)$ denote the limit in item $(i)$, $i=1,2,4$, and let $p\in M_i$ be an arbitrary point. For any $\epsilon>0$, there exists $C(\epsilon)>0$ so that for any $C> C(\epsilon)$, the metric space $(M_i,d_{C^{-1}g_i},p)$ is $\epsilon$-close in the pointed Gromov-Hausdorff sense to an $\epsilon^{-1}$-ball in $\R\times\R_+$ (for $i=1$), $\R^2$ (for $i=2$), or $\R$ (for $i=4$). In particular, there exist $\epsilon_0>0$ and $C_0>0$ such that for all $C\ge C_0$, $(M_4,d_{C^{-1}g_4},p)$ is not $\epsilon_0$-close in the pointed Gromov-Hausdorff sense to either $(M_1,d_{C^{-1}g_1},p)$ or $(M_2,d_{C^{-1}g_2},p)$.} \yi{Since $\R\times S^2$ is not the unique limit, let $C>C(\frac{\epsilon_0}{3})$; then by an open-closed argument we can find a sequence of points $z_k\in M$ going to infinity as $k\rii$, such that $(M,C^{-1}r^{-2}(z_k)g,z_k)$ is not $\frac{\epsilon_0}{3}$-close to any of the limits (1), (2), (4) in the pointed Gromov-Hausdorff sense, where $r(z_k)$ is the volume scale at $z_k$. This contradicts the above fact and hence proves the claim.
} \end{proof} The following corollary shows that, in the Ricci flow of the soliton, closeness of a time-slice to an asymptotic limit implies closeness on a parabolic region of a certain size. \begin{cor}\label{c: RF closeness} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. Let $p\in M$ be a fixed point. Then for any $\epsilon>0$, there is $D(\epsilon)>0$ such that for all $x\in M\setminus B_{g}(p,D)$, the pointed rescaled Ricci flow $(M,r^{-2}(x)g(r^2(x)t),x)$ is $\epsilon$-close to exactly one of the following two Ricci flows: \begin{enumerate} \item $(\mathbb{R}\times\cigar,r^{-2}(\widetilde{x})g_c(r^2(\widetilde{x})t),\widetilde{x})$; \item $(\RR\times S^1, g_{stan},\widetilde{x})$. \end{enumerate} \end{cor} \begin{proof} The Ricci flows $g_{\mathbb{R}\times\cigar}(t)$ and $g_{\mathbb{R}^2\times S^1}(t)$ on $\R\times\cigar$ and $\RR\times S^1$ are both eternal Ricci flows with bounded curvature. The assertion now follows from Lemma \ref{l: DR to cigar} by a standard limiting argument as in \cite[Lemma 3.4]{Lai2021Producing3R}. \end{proof} In the remainder of this section we show that $\R\times\cigar$ is a stable asymptotic limit when we move forward in the Ricci flow of the soliton, in the sense that a region close to $\R\times\cigar$ stays close to it until it is no longer close to any asymptotic limit. This is based on the observation that the Ricci flow of the cigar soliton contracts all points towards the tip as we move forward in time. Using this and the closeness to the Ricci flow of the cigar, we show in the next lemma that an $\epsilon$-tip point $x$ (see Definition \ref{d: tip points}) stays a $2\epsilon$-tip point outside a compact subset, when we move along the integral curve $\phi_{-t}(x)$ of $-\nabla f$.
Note that this amounts to moving forward along the Ricci flow of the soliton, since $g(t)=\phi_{-t}^*g$ satisfies the Ricci flow equation, where $\{\phi_t\}_{t\in\R}$ are the diffeomorphisms generated by $\nabla f$. \begin{lem}\label{l: tip contracting} Fix some $p\in M$. For any $\epsilon>0$, there exists $D(\epsilon)>0$ such that the following holds. For any point $x\in M\setminus B_g(p,D)$, let $\phi_{-t}(x)$, $t\in[0,\infty)$, be the integral curve of $-\nabla f$ starting at $x$. Suppose $x$ is an $\epsilon$-tip point. Then $\phi_{-t}(x)$ is a $2\epsilon$-tip point for all $t\in[0,t(x))$, where $t(x)\in(0,\infty]$ is the supremum of all $t$ such that $\phi_{-t}(x)\in M\setminus B_g(p,D)$. \end{lem} \begin{proof} For the fixed $\epsilon$, we choose $T(\epsilon)>0$ to be the constant such that in the Ricci flow of the cigar soliton, the metric ball of radius $2\epsilon$ at time $0$ centered at the tip contracts to a metric ball of radius $\epsilon$ at time $T(\epsilon)$. Let $0<\epsilon_1\ll\epsilon$ be a constant whose value will be chosen later, and choose $D(\epsilon_1)>0$ to be the constant from Corollary \ref{c: RF closeness}. If $x_0\notin B_g(p,D)$ is an $\epsilon$-tip point, in the following we will show that $\phi_{-r^2(x_0)\,t}(x_0)$ is a $2\epsilon$-tip point for all $t\in[0,T(\epsilon)]$ such that $\phi_{-r^2(x_0)\,t}(x_0)\notin B_g(p,D)$, and that $\phi_{-r^2(x_0)\,T(\epsilon)}(x_0)$ is again an $\epsilon$-tip point. Assuming this, the lemma follows immediately by induction. By Corollary \ref{c: RF closeness}, there is an $\epsilon_1$-isometry $\psi$ between the two pointed Ricci flows $(M,r^{-2}(x_0)g(r^2(x_0)t),x_0)$ and $(\R\times\cigar,r^{-2}(\psi(x_0))g_0(r^{2}(\psi(x_0))t),\psi(x_0))$. Note that $\psi$ is also a $100\epsilon_1$-isometry between time-slices, and hence an $\epsilon$-isometry for $\epsilon_1<\frac{\epsilon}{100}$. Let $x_{tip}$ be the tip of the cigar in $\R\times\cigar$ which has the same $\R$-coordinate as $\psi(x_0)$.
Then by taking $\epsilon_1$ sufficiently small depending on $\epsilon$, and using the distance shrinking of the cigar, it is easy to see that $d_t(\psi(x_0),x_{tip})<2\epsilon$ in the Ricci flow $r^{-2}(\psi(x_0))g_0(r^{2}(\psi(x_0))t)$ for all $t\ge0$. This implies the first half of the claim, namely that $\phi_{-r^2(x_0)\,t}(x_0)$ is a $2\epsilon$-tip point. The second half of the claim follows from the choice of $T(\epsilon)$, after taking $\epsilon_1$ sufficiently small such that $\epsilon_1^{-1}>T(\epsilon)$. \end{proof} \subsection{The geometry near the edges} We study the local and global geometry at points that look like $\R\times\cigar$. First, we show in Lemma \ref{l: two chains} that there are two chains of infinitely many topological 3-balls that cover all $\epsilon$-tip points. Using this we show in Lemma \ref{l: two rays} that the asymptotic cone of the soliton is a metric cone over an interval $[0,a]$, $a\in[0,\pi)$, and that the points in these two chains correspond to the boundary points of the cone. Next, in Lemma \ref{l: Gamma} we construct two smooth curves going to infinity inside the two chains, such that they are integral curves of $\nabla f$ or $-\nabla f$, and along them the soliton converges to $(\mathbb{R}\times\cigar,x_{tip})$. Fix a point $p\in M$. In the following technical lemma we show that the velocity vector of a minimizing geodesic from $p$ to an $\epsilon$-tip point is almost parallel to the $\R$-direction in $\R\times\cigar$. The idea is to study the geometry near an $\epsilon$-tip point $q$ at three different scales: at the largest scale $d(p,q)$, the soliton looks like its asymptotic cone; at the smallest scale, the volume scale $r(q)$, it looks like $\R\times\cigar$; and at some intermediate scale between $r(q)$ and $d(p,q)$, it looks like a 2-dimensional upper half-plane. When there is no risk of confusion, we will omit the subscript $g$ and write $d_g(\cdot,\cdot)$ as $d(\cdot,\cdot)$ and $B_g(p,\cdot)$ as $B(p,\cdot)$.
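For intuition, the intermediate-scale picture can be made explicit in the model $\R\times\cigar$ by the following standard computation, which is not needed in the proofs. Writing the cigar metric in coordinates $z=(x,y)\in\R^2$ as $g_c=\frac{dx^2+dy^2}{1+x^2+y^2}$, the radial lines are geodesics by symmetry, so
\begin{equation*}
d_{g_c}(x_{tip},z)=\int_0^{|z|}\frac{d\rho}{\sqrt{1+\rho^2}}=\textnormal{arcsinh}|z|,
\end{equation*}
while the circle $\{|z|=\rho\}$ has length $\frac{2\pi \rho}{\sqrt{1+\rho^2}}<2\pi$. Thus the cigar is asymptotic to a cylinder of circumference $2\pi$, and for any $\lambda_i\rii$ the rescaled spaces $(\cigar,\lambda_i^{-1}d_{g_c},x_{tip})$ converge in the pointed Gromov-Hausdorff sense to the ray $[0,\infty)$. Consequently, $\R\times\cigar$ rescales to the half-plane $\R\times\R_+$, which is the 2-dimensional upper half-plane seen at intermediate scales.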
\begin{lem}\label{l: parallel} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. Let $p\in M$ be a fixed point. For any $\delta>0$, there exists $\epsilon>0$ such that the following holds: Let $q\in M$ be an $\epsilon$-tip point, and let $\phi$ be the inverse of the corresponding $\epsilon$-isometry. Let $\partial_{r}$ be the unit vector field in the $\R$-direction in $\R\times\cigar$. Let $\gamma:[0,1]\ri M$ be a minimizing geodesic from $p$ to $q$. Then $||\cos\measuredangle(\gamma'(1),\phi_*\left(\partial_{r}\right))|-1|\le\delta$. \end{lem} \begin{proof} Suppose not. Then there are $\delta>0$, $\epsilon_i\ri0$ and $\epsilon_i$-tip points $q_i\rii$ such that \begin{equation}\label{e: bigger than delta} \measuredangle\left(\gamma_i'(1),\phi_{i*}\left(\partial_{r}\right)\right)\in(\delta,\pi-\delta), \end{equation} where $\gamma_i:[0,1]\ri M$ is a minimizing geodesic from $p$ to $q_i$, and $\phi_i$ is the inverse of an $\epsilon_i$-isometry at $q_i$. For convenience, we will use $\epsilon_i$ to denote any sequence of the form $C\epsilon_i$, where $C>0$ is a constant independent of $i$. Since $d_g(p,q_i)\rii$ and the curvature is positive, after passing to a subsequence we may assume that the rescaled manifolds $(M,d^{-2}(q_i,p)g,p)$ converge to the asymptotic cone in the Gromov-Hausdorff sense, see Lemma \ref{l: converge to asymp}. So we can find points $z_i\in M$ such that the pair $(p,z_i)$ is a $(1,\epsilon_i)$-strainer at $q_i$ of size $d_g(q_i,p)$. Let $p_i$ be a point on $\gamma_i$ and $w_i$ a point on the minimizing geodesic connecting $q_i,z_i$ such that $d(w_i,q_i)=d(p_i,q_i)=\frac{r(q_i)}{\delta_i}$, where $\delta_i>\frac{r(q_i)}{d(q_i,p)}$ is a sequence converging to zero, which we may adjust later. Since $\frac{r(q_i)}{\delta_i}<d(q_i,p)$, by the monotonicity of angles, $(p_i,w_i)$ is a $(1,\epsilon_i)$-strainer at $q_i$ of size $\frac{r(q_i)}{\delta_i}$.
So \begin{equation}\label{e: large angle} \widetilde{\measuredangle} p_iq_iw_i\ge\pi-\epsilon_i. \end{equation} Next, consider the rescaled pointed manifold $(M,r^{-2}(q_i)g,q_i)$, which is $\epsilon_i$-close to $(\R\times\cigar,x_{tip})$. Then there is a sequence of points $o_i\in M$ with $d(o_i,q_i)=\frac{r(q_i)}{\delta_i}$ such that the minimizing geodesic $\sigma_i:[0,1]\ri M$ from $q_i$ to $o_i$ satisfies $\measuredangle\left(\sigma_i'(0),\phi_{i*}\left(\partial_{r}\right)\right)\ri0$ as $i\rii$. Combining this fact with \eqref{e: bigger than delta} we get \begin{equation*} \measuredangle p_iq_io_i\in\left(\delta/2,\pi-\delta/2\right). \end{equation*} Since $\epsilon_i\ri0$, by choosing $\delta_i\ri0$ properly we have $|\widetilde{\measuredangle}p_iq_io_i-\measuredangle p_iq_io_i|<\frac{\delta}{10}$, and hence \begin{equation}\label{e: small angle} \widetilde{\measuredangle} p_iq_io_i\in\left(\delta/4,\pi-\delta/4\right) \end{equation} for all sufficiently large $i$. Now consider the rescaled pointed manifold $(M,\delta^2_ir^{-2}(q_i)g,q_i)$. Since $\delta_i\ri0$, after passing to a subsequence we may assume that it converges to the upper half-plane $\R\times\R_+$ in the Gromov-Hausdorff sense, with $q_i\ri(0,0)$ and $o_i\ri(1,0)\in \R\times\R_+$ modulo the approximation maps. Assume $p_i$ converges to $(x_0,y_0)\in \R\times\R_+$; then by \eqref{e: small angle} we have $y_0>\frac{\delta}{100}$. On the other hand, \eqref{e: large angle} implies that $(x_0,y_0)$ is one point of a $(1,0)$-strainer at $(0,0)$ of size $1$. So it is clear that $|x_0|=1$ and $y_0=0$, a contradiction. This proves the lemma. \end{proof} In the next few definitions, we introduce the concept of $\epsilon$-solid cylinders. These are topological 3-balls that look like a large neighborhood of the tip in $\R\times\cigar$. A chain of $\epsilon$-solid cylinders is a sequence of these cylinders meeting nicely.
In this subsection, we will show in Lemma \ref{l: two chains} that all $\epsilon$-tip points are covered by exactly two such chains. \begin{defn}[$\epsilon$-solid cylinder] Let $x\in M$ be an $\epsilon$-tip point, and let $\phi_x$ be the inverse of the corresponding $\epsilon$-isometry. We say that the neighborhood $\nu_{L,D}:=\phi_x([-L,L]\times B_{g_c}(x_{tip},D))$ is an $\epsilon$-solid cylinder centered at $x$, where $L,D>0$ are constants. \end{defn} In order to make sure that the union of two intersecting $\epsilon$-solid cylinders is still a topological 3-ball, we want them to meet nicely. So we introduce the concept of a good intersection between two $\epsilon$-solid cylinders, see e.g. \cite[Section 5.6]{MT2}. \begin{defn}[Good intersection]\label{d: good intersection} Let $y_1,y_2$ be two $\epsilon$-tip points, and let $\phi_{y_i}$ be the inverses of the corresponding $\epsilon$-isometries. Let $\widetilde{\gamma}_i:[-L_i,L_i]\ri M$ be the curve through $y_i$ with $\phi_{y_i}^{-1}(\widetilde{\gamma}_i(s))=(s,x_{tip})$. We say $\nu(i)=\phi_{y_i}([-L_i,L_i]\times B_{g_c}(x_{tip},D_i))$, $i=1,2$, have a good intersection if, after possibly reversing the directions of either or both of the $\R$-factors, the following hold: \begin{enumerate} \item The projection $r_1$ onto the $\R$-direction is an increasing function along $\widetilde{\gamma}_2$ at any point of $\widetilde{\gamma}_2\cap \nu(1)$. \item There is a point in the negative end of $\nu(2)$ that is contained in $\nu(1)$, and the positive end of $\nu(2)$ is disjoint from $\nu(1)$. \item Either $(1.1)D_1r(y_1)\le D_2r(y_2)$ or $D_2r(y_2)\le (0.9)D_1r(y_1)$. \end{enumerate} \end{defn} With the notion above, if two $\epsilon$-solid cylinders have a good intersection, then the intersection is homeomorphic to a 3-ball, see \cite[Lemma 5.19]{MT2}.
\begin{defn}[Chain] Suppose that we have a sequence of $\epsilon$-solid cylinders $\{\nu(1),\cdots,\nu(k)\}$, $k\in\mathbb{N}\cup\{\infty\}$, with the curves $\widetilde{\gamma}_i$ from Definition \ref{d: good intersection}. We say that they form a chain of $\epsilon$-solid cylinders if the following hold: \begin{enumerate} \item For each $1\le i<k$ the open sets $\nu(i)$ and $\nu(i+1)$ have a good intersection with the given orientations. \item If $\nu(i)\cap\nu(j)\neq\emptyset$ for some $i\neq j$, then $|i-j|=1$. \end{enumerate} \end{defn} \begin{lem}(cf. \cite[Lemma 5.22]{MT2}) Suppose that $\{\nu(1),\cdots,\nu(k)\}$ is a chain of $\epsilon$-solid cylinders. Then $\nu(1)\cup\cdots\cup\nu(k)$ is homeomorphic to a 3-ball, and its boundary is the union of the negative end of $\nu(1)$, the positive end of $\nu(k)$, and an annulus $A$. \end{lem} For an $\epsilon$-tip point $x$, let $\nu$ be an $\epsilon$-solid cylinder centered at it. In Lemma \ref{l: one chain}, we construct a chain of $\epsilon$-solid cylinders starting from $\nu$ which extends to infinity on one end. Moreover, the $\epsilon$-solid cylinder on the other end meets the metric sphere $\partial B(p,D_0)$ (a 2-sphere) in a spanning disk, where $D_0>0$ is independent of $x$. \begin{lem}[Extending an $\epsilon$-solid cylinder to a chain]\label{l: one chain} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. Let $p\in M$ be a fixed point. There exists $\overline{D}>0$ such that the following is true.
For any $\epsilon$-tip point $x\in M\setminus B(p,\overline{D})$, there exist an integer $N\in\mathbb{N}$, a sequence of $\epsilon$-tip points $\{x_i\}_{i=-N}^{\infty}$ going to infinity with $x_{-N}\in \partial B(p,\overline{D})$, and an infinite chain of $\epsilon$-solid cylinders $\{\nu(i)\}_{i=-N}^{\infty}$ centered at the $x_i$, where $\nu(i)=\phi_{x_i}([-900,900]\times B_{g_c}(x_{tip}, L_i))$, $L_i\in[500,1000]$, such that if $\nu(i)$ intersects $\partial B(p,D)$ for some $D\ge \overline{D}$, then $\nu(i)$ meets $\partial B(p,D)$ in a spanning disk. \end{lem} \begin{proof} By Lemma \ref{l: DR to cigar} we can choose $\overline{D}>0$ sufficiently large such that the rescaled soliton at any point of $M\setminus B(p,\overline{D}/2)$ is $\epsilon$-close to one of the asymptotic limits. For an $\epsilon$-tip point $x_0\in M\setminus B(p,\overline{D})$, assume $d(p,x_0)=D_0$. Let $\phi_{x_0}$ be the inverse of the $\epsilon$-isometry. Then $\nu(0):=\phi_{x_0}([-900,900]\times B_{g_c}(x_{tip}, 1000))$ is an $\epsilon$-solid cylinder. Let $r$ be the projection onto the $\R$-direction in $\R\times\cigar$. Then by Lemma \ref{l: parallel}, after possibly replacing $r$ by $-r$, we may assume $\measuredangle(\nabla d(p,\cdot),\phi_{x_0*}(\partial_{r}))<\epsilon$. So the two points $y_{\pm}:=\phi_{x_0}((\pm1000,x_{tip}))\in M$ satisfy \begin{equation*} d(y_+,p)\ge D_0+(1-\epsilon)1000\,r(x_0)\quad\textit{and}\quad d(y_-,p)\le D_0-(1-\epsilon)1000\,r(x_0). \end{equation*} By the choice of $\overline{D}$ we can find two $\epsilon$-tip points $x_{\pm1}\in B(y_{\pm},r(x_0))$. In particular, we have $x_{\pm1}\notin \phi_{x_0}([-900,900]\times B_{g_c}(x_{tip}, 1000))$, and \begin{equation*} d(x_1,p)>D_0+900\,r(x_0)\quad \textit{and}\quad d(x_{-1},p)<D_0-900\,r(x_0)
\end{equation*} Similarly, letting $\phi_{x_{\pm1}}$ be the inverses of the $\epsilon$-isometries at $x_{\pm1}$, the sets $\nu(\pm1):=\phi_{x_{\pm1}}([-900,900]\times B_{g_c}(x_{tip}, 500))$ are $\epsilon$-solid cylinders centered at $x_{\pm1}$. It is clear that $\nu(0)$ and $\nu(\pm1)$ have good intersections. Repeating this, we obtain a sequence of $\epsilon$-tip points $\{x_i\}_{i=-N}^{\infty}$, $x_{-N}\in B(p,\overline{D})$, and a sequence of $\epsilon$-solid cylinders $\nu(i):=\phi_{x_i}([-900,900]\times B_{g_c}(x_{tip}, D_i))$ centered at the $x_i$, where $D_i=500$ when $i$ is odd and $D_i=1000$ when $i$ is even, such that $x_{i+1}\in B(\phi_{x_i}((1000,x_{tip})),10 r(x_i))$. Therefore, by triangle inequalities it is easy to see that $\nu(i)$ intersects only $\nu(i-1)$ and $\nu(i+1)$, and has good intersections with them. In particular, for all $i\ge0$ we have \begin{equation*} d(x_i,p)>D_0+900 \sum_{k=0}^{i-1}r(x_k), \end{equation*} which tends to infinity as $i\rii$. So $\{\nu(i)\}_{i=-N}^{\infty}$ is an infinite chain of $\epsilon$-solid cylinders. Moreover, $\nu(i)$ meets each metric sphere $\partial B(p,D)$, $D\ge \overline{D}$, that it intersects in a spanning disk, since by Lemma \ref{l: parallel} we have $\measuredangle(\nabla d(p,\cdot),\phi_{x_i*}(\partial_{r}))<\epsilon$ for all $i$. \end{proof} In the following, we use Lemma \ref{l: one chain} to show that all $\epsilon$-tip points outside of a compact subset are contained in the union of finitely many chains of $\epsilon$-solid cylinders. \begin{lem}[Disjoint chains containing all $\epsilon$-tip points]\label{l: finitely many chains} Assume the hypotheses of Lemma \ref{l: one chain}, and let $\overline{D}$ be the constant from that lemma. There exist $k$ chains of $\epsilon$-solid cylinders $\mathcal{C}_1,\cdots,\mathcal{C}_k$, $k\in\mathbb{N}$, each of which satisfies the conclusions of Lemma \ref{l: one chain}.
Moreover, all $\epsilon$-tip points in $M\setminus B(p,\overline{D})$ are contained in the union of $\mathcal{C}_1,\cdots,\mathcal{C}_k$. \end{lem} \begin{proof} Assume $x_1$ is an $\epsilon$-tip point with $x_1\in M\setminus B(p,\overline{D})$. Let $\mathcal{C}_1$ be a chain of $\epsilon$-solid cylinders produced by Lemma \ref{l: one chain} whose union contains $x_1$. If there exists an $\epsilon$-tip point $x_2\in M\setminus(\mathcal{C}_1\cup B(p,\overline{D}))$, then by Lemma \ref{l: one chain} we can construct a new chain $\mathcal{C}_2$ of $\epsilon$-solid cylinders containing $x_2$. We claim that $\mathcal{C}_1\cap\mathcal{C}_2=\emptyset$, i.e. the union of all $\epsilon$-solid cylinders in $\mathcal{C}_1$ is disjoint from that in $\mathcal{C}_2$. First, for two $\epsilon$-tip points $x,y$ with $d(x,y)\le1000\,r(x)$, it is easy to see that $d(x,y)\le 0.1r(x)$ when $\epsilon$ is sufficiently small. Using this fact we see that $x_2$ is at least $900r(x_2)$-away from $\mathcal{C}_1$, and hence $\mathcal{C}_1\cap\mathcal{C}_2\cap \partial B(p,D_1)=\emptyset$, where $D_1=d(x_2,p)\ge \overline{D}$. Let $a,b>0$ be the infimum and supremum, respectively, of the radii $r$ around $D_1$ for which $\mathcal{C}_1\cap\mathcal{C}_2\cap \partial B(p,r)=\emptyset$; then $a<D_1<b$. Suppose by contradiction that $a\ge \overline{D}$. Then $\mathcal{C}_1\cap\mathcal{C}_2\cap \partial B(p,a)\neq\emptyset$, and we can find an $\epsilon$-tip point $y\in\mathcal{C}_1\cap\mathcal{C}_2\cap \partial B(p,a)$. However, by Lemma \ref{l: parallel} this implies that $\nu$, the $\epsilon$-solid cylinder centered at $y$, is contained in $\mathcal{C}_1\cap\mathcal{C}_2$, and the positive end of $\nu$ is at distance $a+1000r(y)$ from $p$, which is greater than $a$. This contradicts the choice of $a$ as an infimum. By a similar argument we can show $b=\infty$. This proves the claim. Repeating this procedure, we obtain a sequence of chains of $\epsilon$-solid cylinders with pairwise disjoint unions.
This procedure must stop after finitely many steps, because these chains intersect $\partial B(p,\overline{D})$ in spanning disks whose areas are uniformly bounded below. So we may assume these chains are $\mathcal{C}_1,\cdots,\mathcal{C}_k$ and they contain all $\epsilon$-tip points. \end{proof} Now we show that the number of these chains is exactly two. To do this, we need the following lemma, \cite[Proposition 4.4]{MT2}, which enables us to glue together $\epsilon$-cylindrical planes that intersect, and produce a global $S^1$-fibration on their union. \begin{lem}[global $S^1$-fibration]\label{l: MT-glue} (\cite[Proposition 4.4]{MT2}) Let $M$ be a 3D Riemannian manifold. Given $\epsilon'>0$, the following holds for all $\epsilon>0$ less than a positive constant $\epsilon_1(\epsilon')$. Suppose $K\subset M$ is a compact subset and each $x\in K$ is the center of an $\epsilon$-cylindrical plane. Then there is an open subset $V$ containing $K$ and a smooth $S^1$-fibration structure on $V$. Furthermore, if $U$ is an $\epsilon$-cylindrical plane that contains a fiber $F$ of the fibration on $V$, then $F$ is $\epsilon'$-close to a vertical $S^1$-factor in $U$ and $F$ generates the fundamental group of $U$. In particular, the diameter of $F$ is at most twice the length of any circle in the $\epsilon$-cylindrical plane centered at any point of $F$. \end{lem} \begin{lem}[Two chains containing all $\epsilon$-tip points]\label{l: two chains} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. Let $p\in M$ be a fixed point. There exists $\overline{D}>0$ such that the following is true. There are exactly $2$ chains of $\epsilon$-solid cylinders $\mathcal{C}_1,\mathcal{C}_2$, each of which satisfies the conclusions in Lemma \ref{l: one chain}. Moreover, all $\epsilon$-tip points in $M\setminus B(p,\overline{D})$ are contained in the union of $\mathcal{C}_1$ and $\mathcal{C}_2$.
\end{lem} \begin{proof} First, we show $k\ge 2$; in particular $k\ge1$, which means that $\R\times\cigar$ is indeed an asymptotic limit of the soliton $M$. Suppose by contradiction that $k\le 1$. On the one hand, since all $\epsilon$-tip points are contained in $\mathcal{C}_1,\cdots,\mathcal{C}_k$, their complement in $M\setminus B(p,\overline{D})$ is covered by $\epsilon$-cylindrical planes. Then by Lemma \ref{l: MT-glue} we can find a connected open subset $V_1$ containing $B(p,2\overline{D})-B(p,\overline{D})$ which carries a smooth $S^1$-fibration. Consider the homotopy exact sequence \begin{equation*} \cdots\ri\pi_2(V_{1,0})\ri\pi_1(S^1)\ri\pi_1(V_1)\ri\pi_1(V_{1,0})\ri\cdots \end{equation*} Since the base space $V_{1,0}$ of this fibration is connected and non-compact, we have $\pi_2(V_{1,0})=0$, and it follows that the $S^1$-fiber is incompressible in $V_1$, i.e. the map $\pi_1(S^1)\ri\pi_1(V_1)$ is an injection, see e.g. \cite{Hatcher}. On the other hand, let $U$ be an $\epsilon$-cylindrical plane contained in $B(p,2\overline{D})-B(p,\overline{D})$, $x\in U$ be the center of the $\epsilon$-cylindrical plane, and $F$ be the $S^1$-fiber of the fibration on $V$ that passes through $x$. Then by Lemma \ref{l: MT-glue}, the fiber $F$ is $\epsilon$-close to the vertical $S^1$-factor in $U$, and hence $F$ is contained in $U$, and thus in $B(p,2\overline{D})-B(p,\overline{D})$. Note that the metric spheres $\partial B(p,\overline{D})$ and $\partial B(p,2\overline{D})$ are diffeomorphic to 2-spheres, and $B(p,2\overline{D})-B(p,\overline{D})$ is diffeomorphic to $\R\times S^2$. So if $k=0$, we see that $B(p,2\overline{D})-B(p,\overline{D})-\mathcal{C}_1-\cdots-\mathcal{C}_k$ is diffeomorphic to $\R\times S^2$; if $k=1$, it is diffeomorphic to $\R^3$, both of which have trivial fundamental groups. Therefore, $F$ bounds a disk in $B(p,2\overline{D})-B(p,\overline{D})$. However, this contradicts the above fact that the fiber $F$ is incompressible. So $k\ge 2$.
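In other words, the contradiction may be recorded as follows: exactness together with $\pi_2(V_{1,0})=0$ shows that the class $[F]$ of a fiber is the image of a generator of $\pi_1(S^1)\cong\mathbb{Z}$ under an injection, while the disk bounded by $F$ lies in $B(p,2\overline{D})-B(p,\overline{D})\subset V_1$, so that \begin{equation*} 0\neq[F]\in\pi_1(V_1)\quad\text{and}\quad [F]=0\in\pi_1(V_1), \end{equation*} which is absurd.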
Next, we show $k=2$. Suppose not; then $k\ge3$. Since the subset $K:=B(p,2\overline{D})-B(p,\overline{D})-\mathcal{C}_1-\mathcal{C}_2-\cdots-\mathcal{C}_k$ is covered by $\epsilon$-cylindrical planes, by Lemma \ref{l: MT-glue} we can obtain a smooth $S^1$-fibration on a subset $V$ containing $K$. Let $C_1,C_2$ be two circles of the fibration on $V$ that are contained in $\mathcal{C}_1,\mathcal{C}_2$, respectively. Then by Lemma \ref{l: MT-glue}, $C_i$ bounds a spanning disk $D_i$ in $\mathcal{C}_i$, $i=1,2$. Let $A\subset B(p,2\overline{D})-B(p,\overline{D})$ be an annulus bounded by $C_1,C_2$, which is saturated by $S^1$-fibers. Then the union $S:=D_1\cup D_2\cup A$ is a 2-sphere which is isotopic to the two metric spheres $\partial B(p,\overline{D})$ and $\partial B(p,2\overline{D})$. In particular, $S$ separates $\partial B(p,\overline{D})$ from $\partial B(p,2\overline{D})$. Since $k\ge3$, it follows that $\mathcal{C}_3$ intersects $S$ at some $\epsilon$-tip point $y$. First, $y$ cannot be in the annulus $A$, because $A$ is covered by $\epsilon_0$-cylindrical planes for a very small $\epsilon_0>0$. Second, $y$ cannot be in either of the two spanning disks $D_1$ and $D_2$, because $\mathcal{C}_3$ is disjoint from $\mathcal{C}_i$, $i=1,2$. This contradiction proves $k=2$. \end{proof} It is clear that at the volume scale, an $\epsilon$-tip point is different from a point at which the manifold is an $\epsilon$-cylindrical plane. We show in the following technical lemma that they can also be distinguished from each other at an even larger scale. Roughly speaking, the former looks like a boundary point of a half-plane, while the latter looks like an interior point of a plane. \begin{lem}\label{l: tip point has to be edge} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. Let $p\in M$ be a fixed point.
For any $\delta>0$, there exists $\epsilon>0$ such that for any $\epsilon$-tip point $x$, there exists a constant $0<r_x<\delta d(p,x)$ such that there is no 2-strainer at $x$ of size larger than $r_x$. \end{lem} \begin{proof} First, note that for $x$ sufficiently far away from $p$, we have by volume comparison that $\frac{r(x)}{d(p,x)}\le \delta^2$. Take $r_x=r(x)/\delta$; then $r_x<\delta d(p,x)$ and the rescaling $(M,r_x^{-2}g, x)$ is the scaling down of $(M,r^{-2}(x)g, x)$ by $\delta^{-1}$. The scaling down of $(\R\times\cigar,x_{tip})$ by $\delta^{-1}$ is $C\delta$-close to the 2-dimensional upper half-plane $(\R\times\R_+,(0,0))$ in the pointed Gromov-Hausdorff sense. Take $\epsilon<\delta$; then by the definition of an $\epsilon$-tip point, we see that $(M,r^{-2}(x)g, x)$ is $\epsilon$-close to $(\R\times\cigar,x_{tip})$. Therefore, $(M,r_x^{-2}g, x)$ is $C\epsilon\delta$-close in the Gromov-Hausdorff sense to the metric ball $B((0,0),1)$ in the 2-dimensional upper half-plane $(\R\times\R_+,(0,0))$. It is easy to see that there is no $(2,\epsilon_0)$-strainer at $(0,0)$ in $\R\times\R_+$ of size larger than $1$, and the same holds in $(M,r_x^{-2}g, x)$. Scaling back, it follows that there is no $(2,\epsilon_0)$-strainer at $x$ in $(M,g)$ of size larger than $r_x$. \end{proof} Recall that $S_{\infty}(M,p)$ is the metric space of equivalence classes of rays starting from $p$. We show in the following lemma that $S_{\infty}(M,p)$ is a closed interval $[0,\theta]$ for some $\theta\in[0,\pi)$. Moreover, the two chains of $\epsilon$-solid cylinders $\mathcal{C}_1,\mathcal{C}_2$ are two ``edges'' of the soliton $(M,g)$, in the sense that the two rays $\gamma_1,\gamma_2$ correspond to the two endpoints in $[0,\theta]$. Note we will show $\theta>0$ in Corollary \ref{c: Asymptotic to a sector}, so that the soliton is indeed a flying wing. \begin{lem}\label{l: two rays} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton.
Let $p\in M$ be a fixed point. Then $S_{\infty}(M,p)$ is isometric to an interval $[0,\theta]$ for some $\theta\in[0,\pi)$. Moreover, for any sequence of points $p^{(i)}_k\in\mathcal{C}_i$, $i=1,2$, with $p^{(i)}_k\rii$ as $k\rii$, the minimizing geodesics $pp^{(i)}_k$ subsequentially converge to two rays $\gamma_1,\gamma_2$, such that $[\gamma_1]=0$, $[\gamma_2]=\theta$, after possibly switching $\gamma_1,\gamma_2$. \end{lem} \begin{proof} Fix some $p\in M$. We shall say that two quantities $d_1,d_2>0$ are comparable if $C^{-1}d_1<d_2<Cd_1$ for some universal constant $C>0$. By the non-Euclidean volume growth of $(M,g)$, the asymptotic cone is a 2-dimensional metric cone over $S_{\infty}(M,p)$, where $S_{\infty}(M,p)$ is a 1-dimensional Alexandrov space, and hence is an interval or a circle. In the latter case, we can find a $(2,\epsilon)$-strainer of size comparable to $d(p,x)$ at any point $x\in M$. However, this is impossible at an $\epsilon$-tip point by Lemma \ref{l: tip point has to be edge}. So $S_{\infty}(M,p)$ is isometric to an interval $[0,\theta]$, $\theta\in[0,\pi]$. Moreover, we have $\theta<\pi$, because otherwise the manifold splits off a line and is isometric to $\R\times\cigar$, which does not have strictly positive curvature. First, we show that for any points going to infinity in $\mathcal{C}_1$, the minimizing geodesics from $p$ to them subsequentially converge to rays that are in the same equivalence class in $S_{\infty}(M,p)$. Suppose by contradiction that this is not true. Then we can find two sequences of points $p_{1k},p_{2k}\in\mathcal{C}_1$ going to infinity, such that the minimizing geodesics $pp_{1k},pp_{2k}$ converge to two rays $\sigma_1,\sigma_2$ with $\widetilde{\measuredangle}(\sigma_1,\sigma_2)>0$. Let $\beta_k:[0,1]\ri\mathcal{C}_1$ be a smooth curve joining $p_{1k},p_{2k}$, which consists of $\epsilon$-tip points. By Lemma \ref{l: parallel} we may assume that $d(p,\beta_k(s))$ is monotone in $s$.
Let $\theta_{ik}(s)=\widetilde{\measuredangle}\sigma_i(d(p,\beta_k(s)))\,p\,\beta_k(s)$, $i=1,2$. Then it is clear that $\theta_{1k}(0),\theta_{2k}(1)$ converge to $0$, and $\theta_{1k}(1),\theta_{2k}(0)$ converge to $\widetilde{\measuredangle}(\sigma_1,\sigma_2)>0$, as $k\rii$. So for sufficiently large $k$, we have $\frac{\theta_{1k}(0)}{\theta_{2k}(0)}<\epsilon$ and $\frac{\theta_{1k}(1)}{\theta_{2k}(1)}>\epsilon^{-1}$. By continuity, there exists a point $q_k=\beta_k(s_k)$, $s_k\in(0,1)$, such that $\theta_{1k}(s_k)=\theta_{2k}(s_k)$, and hence \begin{equation*} \widetilde{\measuredangle}\sigma_1(d(p,q_k))\,p\,q_k=\widetilde{\measuredangle}\sigma_2(d(p,q_k))\,p\,q_k. \end{equation*} Since $d(p,p_{1k}),d(p,p_{2k})\rii$, we have $d(p,q_k)\rii$. Therefore, after passing to a subsequence, $pq_k$ converges to a ray $\sigma_3$, which satisfies $\widetilde{\measuredangle}(\sigma_3,\sigma_1)=\widetilde{\measuredangle}(\sigma_3,\sigma_2)$. Since $S_{\infty}(M,p)$ is an interval, this implies \begin{equation*} \widetilde{\measuredangle}(\sigma_3,\sigma_1)=\widetilde{\measuredangle}(\sigma_3,\sigma_2)= \widetilde{\measuredangle}(\sigma_1,\sigma_2)/2. \end{equation*} So for sufficiently large $k$, we can find a $(2,\epsilon)$-strainer at $q_k$ of size comparable to $d(q_k,p)$, which is impossible by Lemma \ref{l: tip point has to be edge} because $q_k$ is an $\epsilon$-tip point. Now it remains to show that the ray $\sigma_1$ corresponds to one of the two endpoints in $S_{\infty}(M,p)=[0,\theta]$. This can be shown by a similar argument: Suppose $\theta>0$ and $[\sigma_1]\in (0,\theta)$. Then we can find $(2,\epsilon_0)$-strainers at a sequence of $\epsilon$-tip points $x_k\rii$ at scales comparable to $d(p,x_k)$, which contradicts Lemma \ref{l: tip point has to be edge}. \end{proof} Now we prove our main result in this subsection.
We construct two smooth curves $\Gamma_i:[0,\infty)\ri\mathcal{C}_i$ tending to infinity, $i=1,2$, which are integral curves of either $\nabla f$ or $-\nabla f$, such that the rescaled manifold converges to $\R\times\cigar$ pointed at the tip along $\Gamma_i$. \begin{lem}\label{l: Gamma} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. For $i=1,2$, there exists a smooth curve $\Gamma_i:[0,\infty)\ri\mathcal{C}_i$ which is an integral curve of $\nabla f$ or $-\nabla f$, such that $\lim_{s\rii}\Gamma_i(s)=\infty$, and for any sequence of points $x_k\rii$ along $\Gamma_i$, the pointed manifolds $(M,r^{-2}(x_k)g,x_k)$ smoothly converge to $(\R\times\cigar,r^{-2}(x_{tip})g_c,x_{tip})$. \end{lem} \begin{proof} Fix some point $p\in M$, and let $D_0>0$ be a large constant such that if $f$ has a critical point (which must be unique), then $B(p,D_0)$ contains the critical point. We will use $\epsilon(D)>0$ to denote all positive constants depending on $D$ such that $\epsilon\ri0$ as $D\rii$. Take a sequence of $\epsilon_k$-tip points $p_k\in \mathcal{C}_1$ going to infinity, where $\epsilon_k\ri0$. Assume $p_k\notin B(p,D_0)$ for all $k$. Denote by $\gamma_{p_k}:[0,s_k]\ri M$ the integral curve of $-\nabla f$ starting from $p_k$, where $s_k\in(0,\infty]$ is the smallest value of $s$ such that $\gamma_{p_k}(s)\in \partial B(p,D_0)$. It is easy to see that $s_k\rii$ as $k\rii$. By Lemma \ref{l: tip contracting} we see that $\gamma_{p_k}([0,s_k])\subset\mathcal{C}_1$ and if $\gamma_{p_k}(s)\notin B(p,D)$ for some $D>0$, then $\gamma_{p_k}(s)$ is an $\epsilon(D)$-tip point, where $\epsilon(D)\ri0$ as $D\rii$ is independent of $k$. First, assume $s_k$ is finite for all $k$. Let $q_k=\gamma_{p_k}(s_k)\in\mathcal{C}_1\cap \partial B(p,D_0)$; then after passing to a subsequence, $q_k$ converges to a point $q_{\infty}\in\mathcal{C}_1\cap \partial B(p,D_0)$. Denote by $\widetilde{\gamma}_{q_k}$ the integral curve of $\nabla f$ starting from $q_k$.
Then $\widetilde{\gamma}_{q_k}$ converges to the integral curve $\Gamma_1:[0,\infty)\ri\mathcal{C}_1$ of $\nabla f$ starting from $q_{\infty}$, which satisfies all the assertions. Now assume $s_k=\infty$ for some $k=k_0$. Since the critical point of $f$ (if it exists) is contained in $B(p,D_0)$, this implies that $\gamma_{p_{k_0}}(s)\rii$ as $s\rii$. Therefore, we may take $\gamma_{p_{k_0}}$ to be $\Gamma_1$, which is an integral curve of $-\nabla f$ and satisfies all assertions as a consequence of Lemma \ref{l: tip contracting}. By the same argument we can find $\Gamma_2\subset\mathcal{C}_2$ satisfying the assertions. \end{proof} \begin{remark} When $(M,g)$ is isometric to $\R\times\cigar$, the potential function $f$ could contain a non-constant linear summand in the $\R$-direction, in which case one of $\Gamma_1,\Gamma_2$ is an integral curve of $\nabla f$ and the other is an integral curve of $-\nabla f$. However, if $(M,g)$ has strictly positive curvature, we will show in Theorem \ref{t: Rmax critical point} that $f$ has a unique critical point, and $\Gamma_1,\Gamma_2$ are both integral curves of $\nabla f$. \end{remark} Let $\Gamma=\Gamma_1([0,\infty))\cup\Gamma_2([0,\infty))$. The following lemma shows that the distance to the subset $\Gamma$ must be achieved at interior points, so that the minimizing geodesics connecting them to $\Gamma$ are orthogonal to $\Gamma$ by the first variation formula. \begin{lem}\label{l: inf achieved on the non-compact portion} Assume the same assumptions as in Lemma \ref{l: Gamma}. Suppose there are a sequence of points $p_i\rii$ and a constant $s_0>0$, such that the following holds: \begin{equation*} d(p_i,\Gamma)=d(p_i,\Gamma_1([0,s_0])\cup\Gamma_2([0,s_0])) \end{equation*} Then $M$ is isometric to $\R\times\cigar$.
\end{lem} \begin{proof} We will show that after passing to a subsequence, the minimizing geodesics from $p$ to $p_i$ converge to a geodesic ray $\gamma$ such that $\measuredangle(\gamma,\gamma_i)\ge\pi/2$ for $i=1,2$, where $\gamma_1,\gamma_2$ are the two rays corresponding to the two endpoints in $S_{\infty}(M,p)$, see Lemma \ref{l: two rays}. Then by Lemma \ref{l: two rays} this implies $S_{\infty}(M,p)=[0,\pi]$ and the assertion of the lemma follows immediately. In the proof we use $\epsilon$ to denote all positive constants that go to zero as $i\rii$. Let $q_i\in\Gamma_1([0,s_0])\cup\Gamma_2([0,s_0])$ be such that $d(p_i,\Gamma)=d(p_i,q_i)$. Choose a sequence of points $o_i\in \Gamma_1$ with $d(q_i,o_i)=\alpha_i d(p_i,q_i)$, such that $\alpha_i\ri0$ and $d(q_i,o_i)\rii$. Since $\alpha_i\ri0$, we have $\widetilde{\measuredangle}o_ip_iq_i\ri0$, and hence \begin{equation*} \widetilde{\measuredangle}o_iq_ip_i+\widetilde{\measuredangle}q_io_ip_i\ge\pi-\epsilon. \end{equation*} Since $d(p_i,o_i)\ge d(p_i,\Gamma)=d(p_i,q_i)$, the side $p_io_i$ is at least as long as $p_iq_i$ in the comparison triangle $o_ip_iq_i$, so it must be opposite to the larger comparison angle, i.e. \begin{equation*} \widetilde{\measuredangle}o_iq_ip_i\ge \widetilde{\measuredangle}q_io_ip_i. \end{equation*} So the last two inequalities imply \begin{equation}\label{e: soon} \widetilde{\measuredangle}o_iq_ip_i\ge\pi/2-\epsilon. \end{equation} By Lemma \ref{l: two rays} the minimizing geodesics $po_i$ converge to the ray $\gamma_1$. After passing to a subsequence we may assume $pp_i$ converge to a ray $\gamma$. Then by the boundedness of $\{q_i\}$ and \eqref{e: soon}, it is easy to see that $\measuredangle (\gamma_1,\gamma)\ge\pi/2$. By a symmetric argument we also have $\measuredangle (\gamma_2,\gamma)\ge\pi/2$. So $\measuredangle (\gamma_1,\gamma_2)\ge\pi$ by Lemma \ref{l: two rays}, which implies a splitting of $(M,g)$, and hence it is isometric to $\R\times\cigar$.
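To spell out the last step: since $S_{\infty}(M,p)$ is an interval with endpoints $[\gamma_1],[\gamma_2]$ and $[\gamma]$ lies between them, the angles add along the interval, so \begin{equation*} \measuredangle(\gamma_1,\gamma_2)=\measuredangle(\gamma_1,\gamma)+\measuredangle(\gamma,\gamma_2)\ge\frac{\pi}{2}+\frac{\pi}{2}=\pi. \end{equation*} As always $\measuredangle(\gamma_1,\gamma_2)\le\pi$, equality holds, so $\gamma_1$ and $\gamma_2$ fit together into a line, which forces the splitting.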
\end{proof} \subsection{Quadratic curvature decay} The main result in this subsection is the following theorem on quadratic curvature decay, which corresponds to the upper bound in Theorem \ref{t': curvature estimate}. We show that there is a uniform $C>0$ such that the scalar curvature has the upper bound $R\le\frac{C}{d^2(\cdot,\Gamma)}$, where $\Gamma=\Gamma_1([0,\infty))\cup\Gamma_2([0,\infty))$ and $\Gamma_1,\Gamma_2$ are from Lemma \ref{l: Gamma}. \begin{theorem}\label{l: curvature upper bound initial} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. There exists $C>0$ such that for any $x\in M\setminus\Gamma$, \begin{equation*} R(x)\le\frac{C}{d^2(x,\Gamma)}. \end{equation*} \end{theorem} In the following we will introduce some constants, whose values may be further adjusted. The dependence of these constants is subject to the following order, \begin{equation*} \delta,\alpha,D_2,\epsilon,D_1,\omega,C, \end{equation*} such that each constant is chosen depending only on the preceding ones. More precisely, we will choose $D_1,D_2$ sufficiently large such that $\delta$ can be arbitrarily close to zero. Let $\epsilon>0$ be some very small number, and fix a point $p_0\in M$. Then by Lemmas \ref{l: DR to cigar}, \ref{l: two chains} and \ref{l: Gamma} we see that there are $D_1>D_2>0$ such that the following holds: First, the soliton is an $\epsilon$-cylindrical plane at all points $x$ which satisfy $d(x,p_0)\ge \frac{1}{2}D_1$ and $d(x,\Gamma)\ge \frac{1}{2}D_2 \,r(x)$. Second, the soliton is $\epsilon$-close to $(\R\times\cigar,r^{-2}(x_0)g_c,x_0)$ for some $x_0$, $d_{g_c}(x_{tip},x_0)<10 D_2$. By compactness we may choose $C>0$ large enough, so that Theorem \ref{l: curvature upper bound initial} holds in $B(p_0,D_1)$.
Moreover, for points $x\in M\setminus B(p_0,D_1)$ which satisfy $d(x,p_0)\ge D_1$ and $d(x,\Gamma)\le D_2 \,r(x)$, by the definition of the volume scale and a volume comparison argument, it is clear that there exists $\omega>0$ such that \begin{equation*} vol(B(x,d(x,\Gamma)))\ge \omega\,d(x,\Gamma)^3. \end{equation*} So Theorem \ref{l: curvature upper bound initial} holds at $x$ by Perelman's curvature estimate, see Lemma \ref{l: Perelman}. Therefore, from now on we assume that \begin{equation}\label{e: captain} d(x,p_0)\ge D_1\quad\textit{and}\quad d(x,\Gamma)\ge D_2 \,r(x). \end{equation} So the metric ball $B(x,d(x,\Gamma)/2)$ is covered by $\epsilon$-cylindrical planes. By the $S^1$-fibration Lemma \ref{l: MT-glue}, we see that there is an open subset $U$ containing $B(x,d(x,\Gamma)/2)$ which has a global $S^1$-fibration, and the $S^1$-fibers are incompressible. The following lemma gives a uniform lower bound for the volume ratio on the universal covering of $U$. So by applying Perelman's curvature estimate in the universal covering we obtain the inequality in Theorem \ref{l: curvature upper bound initial} at a lift of $x$, which implies the same inequality at $x$. So Theorem \ref{l: curvature upper bound initial} reduces to the following lemma. \begin{lem}\label{l: non-collapsing in universal cover} There exists $\omega>0$ such that in the universal cover $\widetilde{U}$ of $U$, we have $B(\widetilde{x},d(x,\Gamma)/2)\subset\subset \widetilde{U}$, and \begin{equation*} vol(B(\widetilde{x},d(x,\Gamma)/2))\ge \omega\,d^3(x,\Gamma), \end{equation*} where $\widetilde{x}\in\widetilde{U}$ is a lift of $x$.
\end{lem} \begin{proof}[Proof of Lemma \ref{l: non-collapsing in universal cover}] To prove the lemma, we will follow the idea in \cite[Lemma 2.2]{BamD} to construct a $(3,\delta)$-strainer near $\widetilde{x}$ of size comparable to $d(x,\Gamma)$ for some small $\delta>0$, then use this to obtain a lower bound on the volume ratio as in \cite[Lemma 2.2]{BamD} and \cite[Theorem 10.8.18]{BuragoBuragoIvanov}. We first construct a $(2,\delta)$-strainer at $\widetilde{x}$ using Claims \ref{c: alpha triangle} and \ref{l: alp}. \begin{claim}\label{c: alpha triangle} There exist $p,q\in M$ such that the triple of points $\{p,q,x\}$ forms a $\frac{\pi}{6}$-triangle and $d(x,p)=d(x,\Gamma)$. Here by a $\mu$-triangle we mean that $\mu>0$ and \begin{equation*} \widetilde{\measuredangle}xpq,\widetilde{\measuredangle}xqp,\widetilde{\measuredangle}pxq>\mu. \end{equation*} \end{claim} \begin{proof} Let $\epsilon\in(0,\frac{1}{100})$ be some fixed small number. Let $p\in\Gamma$ be the closest point to $x$ on $\Gamma$; then by Lemma \ref{l: inf achieved on the non-compact portion}, $p$ is an interior point of $\Gamma$, and $p$ is an $\epsilon$-tip point by Lemma \ref{l: Gamma}. Therefore, it is easy to see that the minimizing geodesic $px$ and $\psi_{p*}(\partial_{r})$ are almost orthogonal at $p$, where $\psi_p$ is the $\epsilon$-isometry to $(\R\times\cigar,r^{-2}(x_{tip})g_c,x_{tip})$ at $p$, i.e. \begin{equation}\label{e: plane} \left|\measuredangle \left(px,\psi_{p*}\left(\partial_{r}\right)\right)-\frac{\pi}{2}\right|\le\epsilon. \end{equation} Note that $d(x,\Gamma)>D_2\,r(x)$ and $d(x,p)=d(x,\Gamma)$. We claim that \begin{equation}\label{e: hudson} d(x,\Gamma)>\frac{1}{10}\,D_2\,r(p). \end{equation} Otherwise, $d(x,p)\le \frac{1}{10}\,D_2\,r(p)$; choosing $D_1$ sufficiently large (depending on $D_2$), this implies $r(p)\le 10r(x)$, and hence $d(x,p)\le \,D_2\,r(x)$, a contradiction. So \eqref{e: hudson} holds.
By Lemma \ref{l: Gamma} we can take $D_1$ to be large so that $\epsilon$ is sufficiently small, where $\epsilon$ is determined by $D_2$ such that the following holds: By \eqref{e: plane} and the $\epsilon$-closeness to $\R\times\cigar$ in the region containing $B(p,D_2\,r(p))$, we have for all points $y\in B(p,D_2\,r(p))$ that \begin{equation}\label{e: Sully} \widetilde{\measuredangle}ypx'\le\frac{\pi}{2}+\frac{1}{100}\le\frac{2\pi}{3}, \end{equation} where $x'$ is a point on the minimizing geodesic $px$ such that \begin{equation*} d(p,x')=\frac{1}{10}\,D_2\,r(p)<d(x,\Gamma). \end{equation*} Now let $q\in\Gamma$ be a point such that $d(q,p)=d(x,\Gamma)$. By \eqref{e: hudson} we can choose a point $q'$ on the minimizing geodesic $pq$ such that \begin{equation*} d(p,q')=\frac{1}{10}\,D_2\,r(p)<d(x,\Gamma). \end{equation*} Then using \eqref{e: Sully} we obtain \begin{equation*} \widetilde{\measuredangle}q'px'\le\frac{\pi}{2}+\frac{1}{100}\le\frac{2\pi}{3}. \end{equation*} By the monotonicity of angles, the last three inequalities imply \begin{equation*} \widetilde{\measuredangle}qpx\le\frac{\pi}{2}+\frac{1}{100}\le\frac{2\pi}{3}. \end{equation*} Note $d(x,q)\ge d(x,\Gamma)=d(x,p)=d(p,q)$. The segment $|\widetilde{x}\widetilde{q}|$ is the longest in the comparison triangle $\widetilde{\triangle}\widetilde{x}\widetilde{q}\widetilde{p}$, and thus it must be opposite to the largest comparison angle, i.e. \begin{equation*} \widetilde{\measuredangle}qpx\ge \widetilde{\measuredangle}qxp=\widetilde{\measuredangle}pqx, \end{equation*} where the equality holds because $d(x,p)=d(p,q)$. This combined with $\widetilde{\measuredangle}qpx+ \widetilde{\measuredangle}qxp+\widetilde{\measuredangle}pqx=\pi$ implies \begin{equation*} \widetilde{\measuredangle}qpx\ge \widetilde{\measuredangle}qxp=\widetilde{\measuredangle}pqx\ge\frac{\pi}{6}. \end{equation*} So $\{p,q,x\}$ forms a $\frac{\pi}{6}$-triangle.
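Explicitly, writing $\beta=\widetilde{\measuredangle}qpx$, the last two displays give $\widetilde{\measuredangle}qxp=\widetilde{\measuredangle}pqx=(\pi-\beta)/2$ with $\beta\le\frac{2\pi}{3}$, so \begin{equation*} \widetilde{\measuredangle}qxp=\widetilde{\measuredangle}pqx=\frac{\pi-\beta}{2}\ge\frac{\pi-\frac{2\pi}{3}}{2}=\frac{\pi}{6},\qquad \widetilde{\measuredangle}qpx=\beta\ge\widetilde{\measuredangle}qxp\ge\frac{\pi}{6}. \end{equation*}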
\end{proof} \begin{claim}\label{l: alp} There exists $\alpha>0$ such that when $D_1,D_2$ are sufficiently large, the following holds for all $x\in M$ satisfying \eqref{e: captain}: There is a point $x'\in B(x,\frac{1}{10} \,d(x,\Gamma))$ such that there is a $(2,\delta)$-strainer at $x'$ of size $\alpha\,d(x,\Gamma)$. \end{claim} \begin{proof} Suppose this is not true. Then there is a sequence of points $x_i\in M$ going to infinity with $d(x_i,\Gamma)\rii$ such that the claim fails. Let $p_i,q_i$ be points from Claim \ref{c: alpha triangle} which form a $\frac{\pi}{6}$-triangle together with $x_i$, and $d(p_i,x_i)=d(x_i,\Gamma)$. After passing to a subsequence, the pointed manifolds $(M,d^{-2}(x_i,\Gamma)g,x_i)$ converge to a complete non-compact length space $(X,d_X)$, which is an Alexandrov space with non-negative curvature \cite{BGP}. The triples $(p_i,q_i,x_i)$ converge to a triple $(p_{\infty},q_{\infty},x_{\infty})$ in the limit space, which forms a $\frac{\pi}{6}$-triangle. So the Alexandrov dimension of the limit space is at least two, because otherwise $X$ would be isometric to a ray or a line, which contains no $\frac{\pi}{6}$-triangle. Since the set of points admitting $(2,\delta)$-strainers is open and dense in an Alexandrov space of dimension 2, we can find a point $x'\in B_{d_X}(x_{\infty},\frac{1}{10})$ such that there is a $(2,\delta)$-strainer at $x'$ of size $\alpha$ for some $\alpha>0$. This yields a contradiction for sufficiently large $i$. \end{proof} If we can show that the volume of $B(x',1)$ is bounded below, then a volume comparison using $x'\in B(x,\frac{1}{10} \,d(x,\Gamma))$ would imply a lower bound on the volume of $B(x,1)$. Therefore, we may assume without loss of generality that there is a $(2,\delta)$-strainer $(a_1,b_1,a_2,b_2)$ at $x$ of size $\alpha\,d(x,\Gamma)$.
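Here the volume comparison is the standard Bishop--Gromov argument (normalizing $d(x,\Gamma)=1$ and using that the curvature is non-negative): since $r\mapsto vol(B(x,r))/r^3$ is non-increasing and $B(x',1)\subset B(x,\frac{11}{10})$, \begin{equation*} vol(B(x,1))\ge\Big(\frac{10}{11}\Big)^3\, vol\Big(B\Big(x,\frac{11}{10}\Big)\Big)\ge\Big(\frac{10}{11}\Big)^3\,vol(B(x',1)). \end{equation*}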
From now on we will work with the rescaled metric $h=d^{-2}(x,\Gamma)g$ and bound the volume of the 1-ball around a lift of $x$ in the universal cover from below by a universal constant. Let $\pi:\widetilde{U}\ri U$ be a universal covering, and $\widetilde{x},\widetilde{a}_i,\widetilde{b}_i$ be lifts of $x,a_i,b_i$ in the universal cover $\widetilde{U}$ such that \begin{equation}\label{e: equal length} d(\widetilde{x},\widetilde{a}_i)=d(x,a_i)=d(\widetilde{x},\widetilde{b}_i)=d(x,b_i)=\alpha. \end{equation} Then since the covering map $\pi$ is $1$-Lipschitz, we have \begin{equation}\label{e: bigger length} d(\widetilde{a}_i,\widetilde{b}_j)\ge d(a_i,b_j),\quad d(\widetilde{a}_i,\widetilde{a}_j)\ge d(a_i,a_j),\quad d(\widetilde{b}_i,\widetilde{b}_j)\ge d(b_i,b_j). \end{equation} So the comparison angles between the lifts at $\widetilde{x}$ are at least as large as the corresponding ones between $x,a_i,b_j$ at $x$, i.e. \begin{equation*} \widetilde{\measuredangle}\widetilde{a}_i\widetilde{x}\widetilde{b}_j\ge \widetilde{\measuredangle} a_ixb_j,\quad \widetilde{\measuredangle}\widetilde{a}_1\widetilde{x}\widetilde{a}_2\ge \widetilde{\measuredangle} a_1xa_2,\quad \widetilde{\measuredangle}\widetilde{b}_1\widetilde{x}\widetilde{b}_2\ge \widetilde{\measuredangle} b_1xb_2. \end{equation*} So $(\widetilde{a}_1,\widetilde{a}_2,\widetilde{b}_1,\widetilde{b}_2)$ is a $(2,\delta)$-strainer at $\widetilde{x}$. Next, we will extend the $(2,\delta)$-strainer to a $(2+\frac{1}{2},\delta)$-strainer at $\widetilde{x}$. Since the $S^1$-fiber in $U$ is incompressible, we can find a sequence $\widetilde{x}_i$ of lifts of $x$ that is unbounded. We may assume that the consecutive distances of $\{\widetilde{x}_i\}$ are at most $D_2^{-1/2}$. Because otherwise, there would be two consecutive points $\widetilde{x}_i,\widetilde{x}_{i+1}$ such that $d(\widetilde{x}_i,\widetilde{x}_{i+1})\ge D_2^{-1/2}$.
Rescaling back to the metric $g$ and by Lemma \ref{l: MT-glue}, we see that the length of the circle in the $\epsilon$-cylindrical plane at $x$ is at least $D_2^{-1/2}\,d(x,\Gamma)$. This implies $r(x)\ge D_2^{-1/2}\,d(x,\Gamma)$, which contradicts our assumption \eqref{e: captain}. So we can find an $i\in\mathbb{N}$ such that with $\widetilde{y}=\widetilde{x}_i$ we have \begin{equation*} |d(\widetilde{y},\widetilde{x})-D_2^{-\frac{1}{4}}|<D_2^{-\frac{1}{2}}. \end{equation*} \begin{claim} The tuple $(\widetilde{a}_1,\widetilde{a}_2,\widetilde{b}_1,\widetilde{b}_2,\widetilde{y})$ is a $(2+\frac{1}{2},\delta)$-tuple at $\widetilde{x}$ of size at least $D_2^{-\frac{1}{4}}-D_2^{-\frac{1}{2}}$. \end{claim} \begin{proof} Note that in the triangle $\triangle \widetilde{y}\widetilde{x}\widetilde{a}_i$, the segment $|\widetilde{y}\widetilde{a}_i|$ has the longest length, since $d(\widetilde{y},\widetilde{a}_i)\ge d(x,a_i)=d(\widetilde{x},\widetilde{a}_i)$, and thus it must be opposite to the largest comparison angle, i.e. \begin{equation*} \widetilde{\measuredangle}\widetilde{a}_i\widetilde{x}\widetilde{y}\ge \widetilde{\measuredangle}\widetilde{x}\widetilde{y}\widetilde{a}_i. \end{equation*} Since $d(\widetilde{x},\widetilde{y})\ri0$ as $D_2\rii$, we find \begin{equation}\label{e: smaller than delta} \widetilde{\measuredangle}\widetilde{y}\widetilde{a}_i\widetilde{x}<\delta. \end{equation} We also have \begin{equation}\label{e: euclid equal to pi} \widetilde{\measuredangle}\widetilde{y}\widetilde{a}_i\widetilde{x}+\widetilde{\measuredangle}\widetilde{a}_i\widetilde{x}\widetilde{y}+\widetilde{\measuredangle}\widetilde{x}\widetilde{y}\widetilde{a}_i=\pi. \end{equation} So the last three relations imply $\widetilde{\measuredangle}\widetilde{a}_i\widetilde{x}\widetilde{y}\ge\frac{\pi}{2}-\delta$. The same is true with $\widetilde{a}_i$ replaced by $\widetilde{b}_i$.
So \begin{equation}\label{e: hero} \widetilde{\measuredangle}\widetilde{a}_i\widetilde{x}\widetilde{y}\ge\frac{\pi}{2}-\delta\quad \textit{and}\quad \widetilde{\measuredangle}\widetilde{b}_i\widetilde{x}\widetilde{y}\ge\frac{\pi}{2}-\delta. \end{equation} and hence the claim holds. \end{proof} \begin{claim}\label{c: 2half strainer at y} The tuple $(\widetilde{a}_1,\widetilde{a}_2,\widetilde{b}_1,\widetilde{b}_2,\widetilde{x})$ is a $(2+\frac{1}{2},\delta)$-tuple at $\widetilde{y}$ of size at least $D_2^{-\frac{1}{4}}-D_2^{-\frac{1}{2}}$. \end{claim} \begin{proof} First, since $|d(\widetilde{y},\widetilde{a}_i)-d(\widetilde{x},\widetilde{a}_i)|<D_2^{-\frac{1}{4}}$ and $|d(\widetilde{y},\widetilde{b}_i)-d(\widetilde{x},\widetilde{b}_i)|<D_2^{-\frac{1}{4}}$, we see that $(\widetilde{a}_1,\widetilde{a}_2,\widetilde{b}_1,\widetilde{b}_2)$ is a $(2,\delta)$-strainer at $\widetilde{y}$ of size at least $\alpha-D_2^{-\frac{1}{4}}-2D_2^{-\frac{1}{2}}$. Note we may assume $D_2$ sufficiently large such that this is at least $\frac{\alpha}{2}>D_2^{-\frac{1}{4}}-D_2^{-\frac{1}{2}}$. By metric comparison we have \begin{equation*} \widetilde{\measuredangle}\widetilde{a}_i\widetilde{x}\widetilde{y}+ \widetilde{\measuredangle}\widetilde{b}_i\widetilde{x}\widetilde{y}+ \widetilde{\measuredangle}\widetilde{a}_i\widetilde{x}\widetilde{b}_i\le2\pi, \end{equation*} which by \eqref{e: hero} and $\widetilde{\measuredangle}\widetilde{a}_i\widetilde{x}\widetilde{b}_i\ge\pi-\delta$ implies \begin{equation*} \widetilde{\measuredangle}\widetilde{a}_i\widetilde{x}\widetilde{y}\le\frac{\pi}{2}+\delta\quad \textit{and}\quad \widetilde{\measuredangle}\widetilde{b}_i\widetilde{x}\widetilde{y}\le\frac{\pi}{2}+\delta. 
\end{equation*} Combining this with \eqref{e: euclid equal to pi} and \eqref{e: smaller than delta} we obtain \begin{equation*} \widetilde{\measuredangle}\widetilde{x}\widetilde{y}\widetilde{a}_i\ge\frac{\pi}{2}-\delta\quad\textit{and}\quad \widetilde{\measuredangle}\widetilde{x}\widetilde{y}\widetilde{b}_i\ge\frac{\pi}{2}-\delta. \end{equation*} So the claim holds. \end{proof} Now take $\widetilde{m}$ to be the midpoint of a minimizing geodesic between $\widetilde{y}$ and $\widetilde{x}$. \begin{claim}\label{c: 3-strainer} The tuple $(\widetilde{a}_1,\widetilde{a}_2,\widetilde{b}_1,\widetilde{b}_2,\widetilde{y},\widetilde{x})$ is a $(3,\delta)$-strainer at $\widetilde{m}$ of size at least $\frac{1}{2}D_2^{-\frac{1}{4}}-D_2^{-\frac{1}{2}}$. \end{claim} \begin{proof} First, by the monotonicity of comparison angles we have \begin{equation*} \widetilde{\measuredangle}\widetilde{m}\widetilde{x}\widetilde{a}_i\ge \widetilde{\measuredangle}\widetilde{y}\widetilde{x}\widetilde{a}_i\ge\frac{\pi}{2}-\delta\quad\textit{and}\quad \widetilde{\measuredangle}\widetilde{m}\widetilde{x}\widetilde{b}_i\ge \widetilde{\measuredangle}\widetilde{y}\widetilde{x}\widetilde{b}_i\ge\frac{\pi}{2}-\delta. \end{equation*} Then repeating the same argument as in Claim \ref{c: 2half strainer at y} replacing $\widetilde{y}$ by $\widetilde{m}$, we see that \begin{equation*} \widetilde{\measuredangle}\widetilde{a}_i\widetilde{m}\widetilde{x},\; \widetilde{\measuredangle}\widetilde{b}_i\widetilde{m}\widetilde{x}>\frac{\pi}{2}-\delta. \end{equation*} Similarly, replacing $\widetilde{x}$ by $\widetilde{y}$, we obtain \begin{equation*} \widetilde{\measuredangle}\widetilde{a}_i\widetilde{m}\widetilde{y},\; \widetilde{\measuredangle}\widetilde{b}_i\widetilde{m}\widetilde{y}>\frac{\pi}{2}-\delta. \end{equation*} Moreover, as before we can see that $(\widetilde{a}_1,\widetilde{a}_2,\widetilde{b}_1,\widetilde{b}_2)$ is a $(2,\delta)$-strainer at $\widetilde{m}$.
Finally, $\widetilde{\measuredangle}\widetilde{y}\widetilde{m}\widetilde{x}=\pi$ is trivially true. So the claim holds. \end{proof} Now using the $(3,\delta)$-strainer in Claim \ref{c: 3-strainer}, one can construct a $100$-bilipschitz map $f:B(\widetilde{m},\lambda \,D_2^{-\frac{1}{4}})\ri\R^3$ for some sufficiently small $\lambda$ as in \cite[Lemma 2.2(i)]{BamD} and \cite[Theorem 10.8.18]{BuragoBuragoIvanov}, which implies \begin{equation*} \mathrm{vol}(B(\widetilde{m},\lambda \,D_2^{-\frac{1}{4}}))>c(\lambda \,D_2^{-\frac{1}{4}})^3, \end{equation*} for some universal $c>0$. This implies Lemma \ref{l: non-collapsing in universal cover} by volume comparison. \end{proof} Now Theorem \ref{l: curvature upper bound initial} follows from Lemma \ref{l: non-collapsing in universal cover} and Perelman's curvature estimate. \subsection{Existence of a critical point} The main result in this subsection is Theorem \ref{t: Rmax critical point}, which establishes the existence of a maximum point of the scalar curvature; this point is also a critical point of the potential function. In Lemmas \ref{l: geometry of level set near tip} and \ref{l: 3D neck is 2D neck}, we study the geometry of level sets of the potential function $f$. To start, we note that the second fundamental form of a level set of $f$ satisfies \begin{equation}\label{e: second fundamental form} \mathrm{I\!I}=-\left.\frac{\nabla ^2 f}{|\nabla f|}\right|_{f^{-1}(a)}=-\left.\frac{\Ric}{|\nabla f|}\right|_{f^{-1}(a)}\le 0.
\end{equation} Recall the Gauss equation: for a manifold $N$ embedded in a Riemannian manifold $(M,g)$, the curvature tensor $\Rm_N$ of $N$ with the induced metric can be expressed in terms of the second fundamental form and $\Rm_M$, the curvature tensor of $M$: \begin{equation*} \langle \Rm_N(u,v)w,z \rangle=\langle \Rm_M(u,v)w,z \rangle+\langle\mathrm{I\!I}(u,z),\mathrm{I\!I}(v,w)\rangle-\langle\mathrm{I\!I}(u,w),\mathrm{I\!I}(v,z)\rangle. \end{equation*} So \eqref{e: second fundamental form} implies the level sets of $f$ with the induced metric have positive curvature. More precisely, for a level set of $f$ which passes through an $\epsilon$-tip point, Lemma \ref{l: geometry of level set near tip} shows that at a point that is sufficiently far away from this $\epsilon$-tip point in the level set, the soliton is close to $\RR\times S^1$ under a suitable rescaling. We show this by a limiting argument. \begin{lem}\label{l: geometry of level set near tip} Let $(M,g,f)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. For any $\delta>0$ and $C_0>0$, there exists $\overline{D}>0$ such that the following holds: For all $D\ge\overline{D}$, there exists $\epsilon>0$ such that the following holds: Suppose $q\in M$ is an $\epsilon$-tip point with $|\nabla f|(q)\ge C_0^{-1}$, and let $\Sigma=f^{-1}(f(q))$ be the level set passing through $q$. Suppose also that $z\in\Sigma$ is a point with $d_{\Sigma}(q,z)=D\,R^{-1/2}(q)$, where $d_{\Sigma}$ is the length metric of the induced metric on $\Sigma$. Then the soliton $(M,g)$ is a $\delta$-cylindrical plane at $z$.
\end{lem} \begin{proof} Suppose this is false. Then there is some $\delta>0$ such that for any large $\overline{D}$, there exist a constant $D>\overline{D}$, a sequence of constants $\epsilon_k\ri0$, a sequence of $\epsilon_k$-tip points $q_k\rii$, and a sequence of points $z_k\in \Sigma_k:=f^{-1}(f(q_k))$ with $d_{\Sigma_k}(q_k,z_k)= D\,R^{-1/2}(q_k)$, such that the soliton $(M,g)$ is not a $\delta$-cylindrical plane at $z_k$. In the following we will show that when $\overline{D}$ is large enough, the level sets $\Sigma_k$ under suitable rescalings converge to a level set of a smooth function on $\R\times\cigar$, and $z_k$ converge to a point at which $\R\times\cigar$ is a $\delta/2$-cylindrical plane, which will imply a contradiction for sufficiently large $k$. We now divide the discussion into two cases depending on the limit of $R(q_k)$. To start, note that $R+|\nabla f|^2=C_1^2$ for some $C_1>0$. So $R\le C_1^2$ and $|\nabla f|\le C_1$. \textbf{Case 1:} $\limsup_{k\rii} R(q_k)>C^{-2}>0$ for some $C>0$ which may depend on the sequence. Then we can deduce from the soliton equation $\nabla^2 f=\Ric$ that the derivatives of order at least two of the functions $\widetilde{f}_k:=f-f(q_k)$ are uniformly bounded. Moreover, at $q_k$ we have $\widetilde{f}_k(q_k)=0$ and \begin{equation*} C_0^{-1}\le|\nabla \widetilde{f}_k|(q_k)=|\nabla f|(q_k)\le C_1. \end{equation*} Let $\widetilde{\nabla}$ denote the covariant derivative of $\widetilde{g}_k=\frac{1}{16}R(q_k)g$. Then $|\widetilde{\nabla} \widetilde{f}_k|_{\widetilde{g}_k}= 4R^{-1/2}(q_k)|\nabla f|$ is uniformly bounded above, and in particular at $q_k$ we have \begin{equation*} |\widetilde{\nabla} \widetilde{f}_k|(q_k)\ge 4\,C_0^{-1}R^{-1/2}(q_k)\ge 4\,C_0^{-1}C_1^{-1}.
\end{equation*} So after passing to a subsequence we may assume that the functions $\widetilde{f}_k$ converge to a smooth function $f_{\infty}$ on $\R\times\cigar$, which satisfies $f_{\infty}(x_{tip})=0$, $\nabla^2 f_{\infty}=\Ric$ and \begin{equation*} |\nabla f_{\infty}|(x_{tip})\ge 4\,C_0^{-1}C_1^{-1}. \end{equation*} Note that $C_0,C_1$ are constants that depend only on the soliton and not on the sequence. Since $\nabla^2 f_{\infty}=\Ric$, by the uniqueness of the potential function on the cigar soliton we see that $f_{\infty}$ is the sum of the potential function on $\cigar$ which vanishes at the tip and a linear function along the $\R$-factor whose derivative has absolute value at least $4\,C_0^{-1}C_1^{-1}$ and vanishes at $x_{tip}$. In particular, $0$ is a regular value of $f_{\infty}$ and the level set $\Sigma_{\infty}:=f^{-1}_{\infty}(0)$ is a non-compact complete rotationally symmetric 2D manifold. Therefore, after passing to a subsequence, as the manifolds $(M,\frac{1}{16}R(q_k)g,q_k)$ smoothly converge to $(\R\times\cigar,g_c,x_{tip})$, the level sets $(\Sigma_k,\frac{1}{16}R(q_k)g_{\Sigma_k},q_k)$ of $\widetilde{f}_k$ with the induced metrics smoothly converge to the level set $(\Sigma_{\infty},g_{\Sigma_{\infty}},x_{tip})$ of $f_{\infty}$, and $z_k\in\Sigma_k$ converge to a point $z_{\infty}\in\Sigma_{\infty}$ with $d_{\Sigma_{\infty}}(x_{tip},z_{\infty})=\frac{1}{4}D$. Since $\R\times\cigar$ is a $\delta/2$-cylindrical plane at $z_{\infty}$ when $\overline{D}$ is sufficiently large depending on $\delta$ and $4\,C_0^{-1}C_1^{-1}$, we obtain a contradiction for all sufficiently large $k$. \textbf{Case 2:} $\lim_{k\rii} R(q_k)=0$.
Consider the rescaled metrics $\widetilde{g}_k=\frac{1}{16}R(q_k)g$ and the rescaled functions $\widetilde{f}_k:=\frac{f-f(q_k)}{4R^{-1/2}(q_k)}$, then $\widetilde{f}_k$ satisfies $\widetilde{f}_k(q_k)=0$ and \begin{equation}\label{e: higher derivatives of f} \widetilde{\nabla}^2\widetilde{f}_k=\nabla^2\widetilde{f}_k=\frac{\nabla^2 f}{4R^{-1/2}(q_k)}=\frac{\Ric}{4R^{-1/2}(q_k)}=\frac{\widetilde{\Ric}}{4R^{-1/2}(q_k)}, \end{equation} and also \begin{equation*} |\widetilde{\nabla}\widetilde{f}_k|_{\widetilde{g}_k}= 4R^{-1/2}(q_k)|\nabla\widetilde{f}_k|= |\nabla f|\le C_1. \end{equation*} In particular, at $q_k$ we have \begin{equation}\label{e: the norm of gradient} |\widetilde{\nabla}\widetilde{f}_k|_{\widetilde{g}_k}(q_k)=|\nabla f|(q_k)\in[C_0^{-1},C_1]. \end{equation} By \eqref{e: higher derivatives of f} and \eqref{e: the norm of gradient}, the derivatives of $\widetilde{f}_k$ are uniformly bounded. Thus there is a subsequence of $\widetilde{f}_k$ converging to a smooth function $f_{\infty}$ on $\R\times\cigar$ with $f_{\infty}(x_{tip})=0$. By \eqref{e: higher derivatives of f} and \eqref{e: the norm of gradient} we have $\nabla^2f_{\infty}=0$ and $|\nabla f_{\infty}|(x_{tip})>0$. So $f_{\infty}$ is a non-constant linear function in the $\R$-direction. In particular, $0$ is a regular value of $f_{\infty}$, and the level set $\Sigma_{\infty}:=f_{\infty}^{-1}(0)$ is equal to $\{a\}\times\cigar\subset\R\times\cigar$, for some $a\in\R$. Therefore, after passing to a subsequence, as the manifolds $(M,\frac{1}{16}\,R(q_k)g,q_k)$ smoothly converge to $(\R\times\cigar,g_c,x_{tip})$, the level sets $(\Sigma_k,\frac{1}{16}\,R(q_k)g_{\Sigma_k},q_k)$ of $\widetilde{f}_k$ with induced metrics smoothly converge to the level set $(\Sigma_{\infty},g_{\Sigma_{\infty}},x_{tip})$ of $f_{\infty}$, and the points $z_k\in\Sigma_k$ converge to a point $z_{\infty}\in\Sigma_{\infty}$ with $d_{\Sigma_{\infty}}(x_{tip},z_{\infty})=\frac{1}{4}D$. 
So it follows when $\overline{D}$ is sufficiently large that $\R\times\cigar$ is a $\delta/2$-cylindrical plane at $z_{\infty}$. This is a contradiction for large $k$. \end{proof} The next lemma shows that for a point at which the soliton looks sufficiently like the cylindrical plane $\RR\times S^1$, the level set of $f$ passing through it looks like the cylinder $\R\times S^1$. We prove this lemma by a limiting argument. \begin{lem}\label{l: 3D neck is 2D neck} Let $(M,g,f)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. For any $\delta>0$ and $C_0>0$, there exists $\epsilon>0$ such that if $M$ is an $\epsilon$-cylindrical plane at $x\in M$ and $r(x)\ge C_0^{-1}$, then the level set $f^{-1}(f(x))$ of $f$ passing through $x$ is a $\delta$-neck at $x$ at scale $r(x)$. \end{lem} \begin{proof} Suppose the conclusion is not true, then for some $\delta>0$ and $C_0>0$, there is a sequence of points $x_i\in M$ at which $(M,g)$ is an $\epsilon_i$-cylindrical plane, $\epsilon_i\ri0$, such that $\Sigma_i:=f^{-1}(f(x_i))$ is not a $\delta$-neck at $x_i$ at scale $r(x_i)$. Consider the rescalings of the metrics $\widetilde{g}_i:=r^{-2}(x_i)g$, and the rescalings of the functions $\widetilde{f}_i:=\frac{f_i-f_i(x_i)}{r(x_i)}$. We have $\widetilde{f}_i(x_i)=0$ and \begin{equation*} \begin{split} \widetilde{\nabla}^{k+2}\widetilde{f}_i&=r^{-1}(x_i)\widetilde{\nabla}^{k+2} f_i =r^{-1}(x_i)\widetilde{\nabla}^{k} (\widetilde{\nabla}^{2}f_i) =r^{-1}(x_i)\widetilde{\nabla}^{k}(\widetilde{\Ric}),\quad k\ge0\\ \end{split} \end{equation*} and also \begin{equation*} \widetilde{\nabla}\widetilde{f}_i=r^2(x_i)\nabla\widetilde{f}_i=r(x_i)\nabla f_i. 
\end{equation*} Therefore, using $r(x_i)\ge C_0^{-1}$ we obtain \begin{equation*} \begin{split} |\widetilde{\nabla}^{k+2}\widetilde{f}_i|_{\widetilde{g}_i}&=r^{-1}(x_i)|\widetilde{\nabla}^{k}(\widetilde{\Ric})|_{\widetilde{g}_i}\le C_0 |\widetilde{\nabla}^{k}(\widetilde{\Ric})|_{\widetilde{g}_i}\\ \end{split} \end{equation*} which goes to zero for each $k\ge 0$ since $\epsilon_i\ri0$. Note that $R+|\nabla f|^2=C_1^2$ for some $C_1>0$; hence $|\nabla f|\le C_1$ and \begin{equation*} |\widetilde{\nabla} \widetilde{f}_i|_{\widetilde{g}_i}=|\nabla f|\le C_1. \end{equation*} In particular, since $r(x_i)\ge C_0^{-1}$ and $\epsilon_i\ri0$, it follows that $R(x_i)\ri0$ and $|\nabla f|(x_i)\ge\frac{1}{2}C_1$ for all large $i$. So at $x_i$ we have \begin{equation*} |\widetilde{\nabla} \widetilde{f}_i|_{\widetilde{g}_i}(x_i)=|\nabla f|(x_i)\in\left[C_1/2,C_1\right]. \end{equation*} So after passing to a subsequence we may assume that the manifolds $(M,\widetilde{g}_i,x_i)$ smoothly converge to $(\RR\times S^1,g_{stan},x_{\infty})$, and the functions $\widetilde{f}_i$ converge to a smooth function $f_{\infty}$ on $\RR\times S^1$, which satisfies $f_{\infty}(x_{\infty})=0$ and \begin{equation}\label{e: happyday} |\nabla^{k+2} f_{\infty}|=0\quad\text{for all } k\ge0,\quad\text{and}\quad |\nabla f_{\infty}|(x_{\infty})\in\left[C_1/2,C_1\right]. \end{equation} By \eqref{e: happyday} it is easy to see that $f_{\infty}$ is constant on each $S^1$-factor in $\RR\times S^1$, and a non-constant linear function on the $\RR$-factor. After a possible rotation of $\RR$, we may assume the level set $\Sigma_{\infty}:=f^{-1}_{\infty}(0)=\{(x,0,\theta):x\in\R,\theta\in[0,2\pi)\}$. In particular, $0$ is a regular value of $f_{\infty}$, and $\Sigma_{\infty}$ is isometric to $(\R\times S^1,g_{stan})$. Therefore, the level sets $(\Sigma_i,r^{-2}(x_i)g_{\Sigma_i},x_i)$ of $\widetilde{f}_i$ smoothly converge to the level set $(\Sigma_{\infty},g_{stan},x_{\infty})$ of $f_{\infty}$.
This implies that $\Sigma_i$ is a $\delta$-neck when $i$ is sufficiently large, a contradiction. \end{proof} The following lemma compares the values of $f$ at two points $x$ and $y$ when the minimizing geodesic from $y$ to $x$ is orthogonal to $\nabla f$ at $y$. \begin{lem}\label{l: compare f} Let $x,y$ be two points in $M$. Suppose that a minimizing geodesic $\sigma$ from $y$ to $x$ is orthogonal to $\nabla f$ at $y$. Then $f(x)\ge f(y)$. \end{lem} \begin{proof} Since $\langle\nabla f(\sigma(0)),\sigma'(0)\rangle=0$, a direct computation gives \begin{equation*} \begin{split} f(x)-f(y)&=f(\sigma(1))-f(\sigma(0)) =\int_{0}^{1}\langle\nabla f(\sigma(r)),\sigma'(r)\rangle\,dr\\ &=\int_{0}^1\int_0^r\nabla^2f(\sigma'(s),\sigma'(s))\,ds\,dr\\ &=\int_{0}^1\int_0^r\Ric(\sigma'(s),\sigma'(s))\,ds\,dr \ge 0. \end{split} \end{equation*} \end{proof} The following lemma compares the scales of two $\epsilon$-necks in a positively-curved non-compact complete 2D manifold. Roughly speaking, the scale of the inner $\epsilon$-neck cannot exceed that of the outer $\epsilon$-neck by more than a factor close to one. \begin{lem}\label{l: 2D surface} For any $\delta>0$, there exists $\epsilon>0$ such that the following holds: Let $(M,g)$ be a 2D complete non-compact Riemannian manifold with positive curvature and let $p$ be a soul for it. Then for any $\epsilon$-neck $N$ disjoint from $p$, the central circle of $N$ separates the soul from the end of the manifold. In particular, if two $\epsilon$-necks $N_1$ and $N_2$ in $M$ are disjoint from each other and from $p$, then the central circles of $N_1$ and $N_2$ are the boundary components of a region in $M$ diffeomorphic to $S^1\times I$. Moreover, if $N_1$ is contained in the 2-ball bounded by the central circle of $N_2$, then the scales $r_1,r_2$ of $N_1,N_2$ satisfy \begin{equation*} r_1<(1+\delta)r_2. \end{equation*} \end{lem} \begin{proof} The proof is a slight modification of the proof of \cite[Lemma 2.20]{MT} using Busemann functions.
\end{proof} Now we prove the critical point theorem. \begin{theorem}\label{t: Rmax critical point} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. Then there exists $p\in M$ such that $R$ attains its maximum at $p$ and $\nabla f(p)=0$. \end{theorem} \begin{proof} Let $\epsilon>0$ be some constant we can take arbitrarily small, and we will use $\delta>0$ to denote all constants that converge to zero as $\epsilon\ri0$. We suppose by contradiction that $R$ does not attain its maximum. First, the level sets of $f$ are non-compact: Suppose not; then for some $a\in\R$, the level set $f^{-1}(a)$ is compact and hence diffeomorphic to $S^2$. So $f^{-1}(a)$ separates the manifold into a compact and a non-compact connected component. Since $f$ is convex and non-constant, by the maximum principle it attains its minimum in the compact region, so $f$ has a critical point there; since $R+|\nabla f|^2$ is constant, $R$ attains its maximum at this critical point, which contradicts our assumption. Let $\Gamma_1,\Gamma_2:[0,\infty)\ri M$ be the two integral curves of $\nabla f$ or $-\nabla f$ from Lemma \ref{l: Gamma}, which extend to infinity on the open ends. First, we claim that $\Gamma_1$ and $\Gamma_2$ cannot both be integral curves of $-\nabla f$. Otherwise, on the one hand, we have $d(\Gamma_1(s),\Gamma_2(s))\rii$ as $s\rii$ by Lemma \ref{l: two rays}. On the other hand, since $\Gamma_1,\Gamma_2$ are integral curves of $-\nabla f$, it follows from the positive curvature and the distance expansion along the backwards Ricci flow of the soliton that $d(\Gamma_1(s),\Gamma_2(s))\le d(\Gamma_1(s_0),\Gamma_2(s_0))$ for any $s \ge s_0$. This contradiction shows the claim. So we may assume $\Gamma_1$ is an integral curve of $\nabla f$.
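The distance monotonicity used in the claim above can be sketched explicitly; the following computation is a sketch, where the flow $\psi_s$ of $-\nabla f$ and the rescaled family $g(s)$ are auxiliary notation introduced here only for illustration:

```latex
% A sketch: let \psi_s denote the flow of -\nabla f, so \Gamma_i(s)=\psi_s(\Gamma_i(0)),
% and set g(s):=\psi_s^*g. Since \mathcal{L}_{-\nabla f}g=-2\nabla^2 f=-2\Ric, the
% family g(s) solves the Ricci flow \partial_s g(s)=-2\Ric(g(s)). Hence
\begin{equation*}
d\big(\Gamma_1(s),\Gamma_2(s)\big)
=d_{\psi_s^*g}\big(\Gamma_1(0),\Gamma_2(0)\big)
=d_{g(s)}\big(\Gamma_1(0),\Gamma_2(0)\big),
\end{equation*}
% and since \Ric\ge0 gives \partial_s g(s)\le0 pointwise, distances measured in
% g(s) are non-increasing in s; this yields
% d(\Gamma_1(s),\Gamma_2(s))\le d(\Gamma_1(s_0),\Gamma_2(s_0)) for s\ge s_0.
```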
In Claims \ref{c: gamma goes to infinity}, \ref{c: DR}, and \ref{c: in C_2}, we will construct a complete integral curve $\Gamma:(-\infty,+\infty)\ri M$ of $\nabla f$ such that for some $s_0>0$, $\Gamma([s_0,\infty))\subset \mathcal{C}_1,\Gamma((-\infty,-s_0])\subset \mathcal{C}_2$ and moreover the manifolds $(M,r^{-2}(\Gamma(s))g,\Gamma(s))$ converge to $(\R\times\cigar,r^{-2}(x_{tip})g_c,x_{tip})$ as $s\ri\pm\infty$. Note that $\Gamma((-\infty,\infty))$ is invariant under the diffeomorphisms $\phi_t$ generated by $\nabla f$. \begin{claim}\label{c: gamma goes to infinity} Take $\gamma_1:[0,\infty)\ri M$ to be the integral curve of $-\nabla f$ starting from $\Gamma_1(0)$. Then $\gamma_1(s)$ goes to infinity as $s\rii$. \end{claim} \begin{proof} Suppose otherwise; then there are $s_i\rii$ and a compact subset $V$ such that $\gamma_1(s_i)\in V$. By the compactness of $V$, there is $c>0$ such that $\Ric\ge c g$ and $c\le|\nabla f|\le c^{-1}$ in $V$. So by the first identity in \eqref{e: two} we have \begin{equation*} \frac{d}{ds}|_{s=s_i}R(\gamma_1(s))\ge c. \end{equation*} Moreover, since $R(\gamma_1(s))$ is increasing, we have $\frac{d}{ds}R(\gamma_1(s))\ge 0$ for all $s$. It is clear that there is a uniform $C_0>0$ such that $\left|\frac{d^2}{ds^2}R(\gamma_1(s))\right|\le C_0$ for all $s$. We may choose the sequence $s_i$ such that $s_{i+1}>s_i+1$. Then \begin{equation*} R(\gamma_1(s_{i+1}))\ge R(\gamma_1(s_1)) + \sum_{k=1}^{i}(R(\gamma_1(s_{k+1}))-R(\gamma_1(s_{k}))) \ge R(\gamma_1(s_1)) + i\,c^2C_0^{-1}/2\rii, \end{equation*} which is impossible. \end{proof} \begin{claim}\label{c: DR} The manifolds $(M,r^{-2}(\gamma_1(s))g,\gamma_1(s))$ converge smoothly to the manifold $(\R\times\cigar,r^{-2}(x_{tip})g_{\Sigma},x_{tip})$ as $s\rii$. In particular, $\gamma_1(s)$ is an $\epsilon$-tip point for all sufficiently large $s$.
\end{claim} \begin{proof}[Proof of Claim \ref{c: DR}] It follows from Theorem \ref{l: DR to cigar} that $(M,r^{-2}(\gamma_1(s))g,\gamma_1(s))$ converge smoothly to either $(\R\times\cigar,r^{-2}(x_0)g_{\Sigma},x_0)$, or $(\RR\times S^1,g_{stan},x_0)$. We show that it must be the first case: Since $\liminf_{s\rii}R(\gamma_1(s))>0$, by the quadratic curvature decay Theorem \ref{l: curvature upper bound initial}, it follows that $\gamma_1(s)$ is within uniformly bounded distance to the $\epsilon$-tip points on $\Gamma_1\cup\Gamma_2$. So the limit must be $(\R\times\cigar,r^{-2}(x_0)g_{\Sigma},x_0)$. Moreover, since $\gamma_1(s)$ is an integral curve of $-\nabla f$, by the distance shrinking in the cigar soliton it is easy to see that $x_0$ must be a tip point in $\R\times\cigar$. \end{proof} \begin{claim}\label{c: in C_2} $\gamma_1(s)\in\mathcal{C}_2$ for all $s$ sufficiently large. \end{claim} \begin{proof}[Proof of Claim \ref{c: in C_2}] Since the two chains $\mathcal{C}_1,\mathcal{C}_2$ contain all $\epsilon$-tip points by Lemma \ref{l: two chains}, we have either $\gamma_1(s)\in \mathcal{C}_2$ or $\gamma_1(s)\in\mathcal{C}_1$ for all sufficiently large $s$. Suppose $\gamma_1(s)\in\mathcal{C}_1$ for all large $s$. On the one hand, by Claim \ref{c: DR}, we have $d(\gamma_1(s),\Gamma_1)\ri0$ as $s\rii$. Since $\Gamma_1(s)$ is the integral curve of $\nabla f$, we see that $|\nabla f|(\Gamma_1(s))$ increases in $s$, and hence $\liminf_{s\rii}|\nabla f|(\gamma_1(s))=\liminf_{s\rii}|\nabla f|(\Gamma_1(s))>0$. So by Claim \ref{c: DR} we have that $\measuredangle(\nabla f,\phi_{*}(\partial_{r}))<\epsilon$ at all $\epsilon$-tip points in $\mathcal{C}_1$ after possibly replacing $r$ by $-r$. On the other hand, for a fixed point $p_0$, by Lemma \ref{l: parallel}, $\measuredangle(\nabla d(p_0,\cdot),\phi_{*}(\partial_{r}))<\epsilon$ holds at all $\epsilon$-tip points in $\mathcal{C}_1$ after possibly replacing $r$ by $-r$.
So either (1) $\measuredangle(\nabla f,\nabla d(p_0,\cdot))<\epsilon$ or (2) $\measuredangle(-\nabla f,\nabla d(p_0,\cdot))<\epsilon$ has to hold at all $\epsilon$-tip points in $\mathcal{C}_1$. Note that Claim \ref{c: gamma goes to infinity} implies that $d(p_0,\gamma_1(s))\rii$ as $s\rii$ along the integral curve $\gamma_1$ of $-\nabla f$, and hence (2) must hold. But the fact that $d(p_0,\Gamma_1(s))\rii$ as $s\rii$ along the integral curve $\Gamma_1$ of $\nabla f$ implies that (1) must hold, a contradiction. \end{proof} Therefore, letting $\Gamma(s)=\Gamma_1(s)$ for $s\ge 0$ and $\Gamma(s)=\gamma_1(-s)$ for $s\le 0$, we get the desired complete integral curve $\Gamma:(-\infty,+\infty)\ri M$ of $\nabla f$. So we may assume $\Gamma_2(s)=\gamma_1(s-s_0)$ for $s\ge s_0$; then $\Gamma_1,\Gamma_2$ still satisfy the conclusions in Lemma \ref{l: Gamma}, and moreover satisfy the additional properties that $\lim_{s\rii}R(\Gamma_2(s))>0$ and $\Gamma_1,\Gamma_2$ are both parts of a complete integral curve $\Gamma$. After a rescaling we may assume $\lim_{s\rii}R(\Gamma_2(s))=4$. Then for some $s_1>0$ whose value will be determined later, we can find a point $p$ which is the center of an $\epsilon$-cylindrical plane, such that $|h(p)-2\pi|\le\epsilon$ and $d(p,\Gamma)=d(p,\Gamma_2)=s_1$; see Definition \ref{d: h} for $h(\cdot)$. Let $\gamma_p(t)$ be the integral curve of $\nabla f$ starting from $p$. Then $d(\gamma_p(t),\Gamma)$ increases in $t$. In particular, $d(\gamma_p(t),\Gamma(0))\ge d(\gamma_p(t),\Gamma)\ge s_1$. So by Lemma \ref{l: inf achieved on the non-compact portion} we see that when $s_1$ is sufficiently large, the distance $d(\gamma_p(t),\Gamma)$ for any fixed $t$ is always attained in $\Gamma((-\infty,-s_0)\cup(s_0,\infty))$, where the points $\Gamma(s)$ are $\epsilon$-tip points. In particular, the minimizing geodesic connecting $\gamma_p(t)$ to some point $y_t\in\Gamma((-\infty,-s_0)\cup(s_0,\infty))$ such that $d(\gamma_p(t),y_t)=d(\gamma_p(t),\Gamma)$ is orthogonal to $\Gamma$ at the $\epsilon$-tip point $y_t$, and $y_t\rii$ as $t\rii$.
On the one hand, by the distance distortion estimate we have \begin{equation}\label{e: fast} \frac{d}{dt}d(\gamma_p(t),\Gamma)\ge\sup_{\gamma\in \mathcal{Z}(t)}\int_{\gamma}\Ric(\gamma'(s),\gamma'(s))\,ds \end{equation} in the backward difference quotient sense (see \cite[Lemma 18.1]{RFTandA3}), where $\mathcal{Z}(t)$ is the space of minimizing geodesics $\gamma$ that realize the distance $d(\gamma_p(t),\Gamma)$. In particular, if $y_t\in\Gamma_2$ and $\gamma$ is a minimizing geodesic connecting $\gamma_p(t)$ to $y_t$, we have \begin{equation}\label{e: cigarricci} \frac{d}{dt}d(\gamma_p(t),\Gamma)\ge\int_{\gamma}\Ric(\gamma'(s),\gamma'(s))\,ds\ge 2-\epsilon \quad \textit{for all}\quad t\in[0,T], \end{equation} where in the second inequality we used \eqref{e: integrate Ricci}, which says that in a cigar soliton with $R(x_{tip})=4$, the integral of the Ricci curvature along a geodesic ray starting from the tip is equal to $2$. On the other hand, we have \begin{equation}\label{e: slow} \frac{d}{dt}d(\gamma_p(t),\Gamma_1(s_0+t))\le\inf_{\gamma\in \mathcal{W}(t)}\int_{\gamma}\Ric(\gamma'(s),\gamma'(s))\,ds, \end{equation} in the forward difference quotient sense, where $\mathcal{W}(t)$ is the space of all minimizing geodesics between $\gamma_p(t)$ and $\Gamma_1(s_0+t)$. Since $R(\Gamma(s))$ strictly decreases in $s$ (otherwise $(M,g)$ would be isometric to $\R\times\cigar$), we may assume that for some $c_1>0$ we have $R\le 4-c_1$ on $\Gamma_1([s_0,\infty))$. So by \eqref{e: slow} there exists some $c_2>0$ such that \begin{equation}\label{e: largecigarricci} \frac{d}{dt}d(\gamma_p(t),\Gamma_1(s_0+t))\le2-c_2. \end{equation} Therefore, it follows from \eqref{e: cigarricci} and \eqref{e: largecigarricci} that $d(\gamma_p(t),\Gamma)=d(\gamma_p(t),\Gamma_1)$ for sufficiently large $t$.
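The last deduction can be made quantitative; the following comparison is a sketch, valid once $\epsilon<c_2$ (which holds since $\epsilon$ is taken arbitrarily small):

```latex
% As long as the distance d(\gamma_p(t),\Gamma) is attained on \Gamma_2,
% subtracting \eqref{e: cigarricci} from \eqref{e: largecigarricci} gives, in the
% sense of difference quotients,
\begin{equation*}
\frac{d}{dt}\Big(d\big(\gamma_p(t),\Gamma_1(s_0+t)\big)-d\big(\gamma_p(t),\Gamma_2\big)\Big)
\le(2-c_2)-(2-\epsilon)=\epsilon-c_2<0,
\end{equation*}
% so this difference decreases at a definite rate; since
% d(\gamma_p(t),\Gamma_1)\le d(\gamma_p(t),\Gamma_1(s_0+t)), the distance to
% \Gamma must eventually be attained on \Gamma_1.
```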
Therefore, we may let \begin{equation*} T=\sup\{t: d(\gamma_p(t),\Gamma)=d(\gamma_p(t),\Gamma_2)\}<\infty, \end{equation*} so that $d(\gamma_p(T),\Gamma_1)=d(\gamma_p(T),\Gamma_2)$ and $d(\gamma_p(t),\Gamma)=d(\gamma_p(t),\Gamma_2)$ for all $t\le T$. Integrating \eqref{e: cigarricci} we obtain, for all $t\in[0,T]$, \begin{equation*} d(\gamma_p(t),\Gamma)\ge d(p,\Gamma)+(2-\epsilon)t\ge s_1+(2-\epsilon)t. \end{equation*} Since $\gamma_p(t)$ is the integral curve of $\nabla f$ starting from $p$, it follows by the definition of $h$ (see Definition \ref{d: h}) that $h(\gamma_p(t))$ is equal to the length of a minimizing geodesic loop at $p$ with respect to $g(t)$. So by the Ricci flow equation, $\Rm\ge0$, and $|\Ric|\le C R$, we see that $h(\gamma_p(t))$ is non-decreasing in $t$ and the following evolution inequality holds \begin{equation*} \frac{d}{dt} h(\gamma_p(t))\le C\cdot R(\gamma_p(t))\cdot h(\gamma_p(t)). \end{equation*} Combining this with the following curvature upper bound from Theorem \ref{l: curvature upper bound initial}, \begin{equation*} R(\gamma_p(t))\le\frac{C}{d^2(\gamma_p(t),\Gamma)}\le \frac{C}{(s_1+(2-\epsilon)t)^2}, \end{equation*} we obtain \begin{equation*} \frac{d}{dt} h(\gamma_p(t))\le \frac{C}{(s_1+(2-\epsilon)t)^2}\cdot h(\gamma_p(t)). \end{equation*} Assuming $s_1$ is sufficiently large and integrating this we obtain \begin{equation}\label{e: ell less than} h(\gamma_p(t))\le h(p)(1+\epsilon)\le 2\pi(1+\epsilon). \end{equation} Let $q\in \Gamma_1$ be a point such that \begin{equation*} d(\gamma_p(T),q)=d(\gamma_p(T),\Gamma). \end{equation*} So by Lemma \ref{l: compare f} we have \begin{equation*} f(q)\le f(\gamma_p(T)). \end{equation*} Since $R(\Gamma_1(s))$ decreases and $f(\Gamma_1(s))$ increases in $s$, there is $q_2\in\Gamma_1$ such that \begin{equation*} f(q_2)=f(\gamma_p(T))\ge f(q) \quad \textit{and} \quad R(q)\ge R(q_2). \end{equation*} In the rest of the proof we will show $R(q_2)\ge 4-\delta$.
First, if $d(\gamma_p(T),q_2)<\delta^{-1}R^{-1/2}(q_2)$, then since $q_2$ is an $\epsilon$-tip point, we obtain \begin{equation*} |h(\gamma_p(T))-4\pi\cdot R^{-1/2}(q_2)|\le\delta. \end{equation*} This fact combined with \eqref{e: ell less than} gives \begin{equation}\label{e: upper bound of h} R^{-1/2}(q_2)\le\frac{1}{4\pi}\,h(\gamma_p(T))+\delta \le \frac{1}{2}+\delta, \end{equation} and hence $R(q_2)\ge 4-\delta$. So we may assume from now on that $d(\gamma_p(T),q_2)\ge\delta^{-1}R^{-1/2}(q_2)$. \begin{claim}\label{claim: neck} There exists a $\delta$-cylindrical plane at some $z\in f^{-1}(f(q_2))$ at scale $r=R^{-1/2}(q_2)$. \end{claim} \begin{proof}[Proof of Claim \ref{claim: neck}] Let $\phi:(\mathbb{R}\times\cigar,x_{tip})\ri (M,q_2)$ be the inverse of an $\epsilon$-isometry. For the interval $[-1/\delta,1/\delta]\subset\R$ and the ball $B_{g_c}(x_{tip},1/\sqrt{\delta})$ in $(\cigar,g_c)$, we consider the image of their product $U:=\phi([-1/\delta,1/\delta]\times B_{g_c}(x_{tip},1/\sqrt{\delta}))$. Let $\sigma$ be a smooth curve connecting $q_2$ to $\gamma_p(T)$ in the level set $f^{-1}(f(q_2))$. Since $\gamma_p(T)\notin U$, by continuity $\sigma$ must exit $U$ at some $z\in\partial U$. Denote $\partial U_{\pm}:=\phi(\{\pm1/\delta\}\times B_{g_c}(x_{tip},1/\sqrt{\delta}))$. We will show that $z\in \phi([-1/\delta,1/\delta]\times \partial B_{g_c}(x_{tip},1/\sqrt{\delta}))=\partial U\setminus(\partial U_-\cup\partial U_+)$. Replacing $+,-$ if necessary we may assume $f(\phi((1/\delta,x_{tip})))>f(q_2)>f(\phi((-1/\delta,x_{tip})))$. On the one hand, for any $y\in \partial U_-$, let $\sigma:[0,\ell]\ri M$ be a unit speed minimizing geodesic from some point $q_-\in \phi([-1/\delta,1/\delta]\times \{x_{tip}\})$ to $y$, where $q_-$ achieves the distance from $y$ to this curve.
Then we have \begin{equation*} \begin{split} f(y)-f(q_-)&=\int_0^{\ell}\langle\nabla f,\sigma'(r)\rangle dr =\int_0^{\ell}\int_0^r\nabla^2 f(\sigma'(s),\sigma'(s))\,ds dr\\ &\le \int_0^{\ell}\int_0^{\ell}\Ric(\sigma'(s),\sigma'(s))\,ds dr \le C/\sqrt{\delta}, \end{split} \end{equation*} where we used that the length of $\sigma$ satisfies $\ell\in[\delta^{-1/2}R^{-1/2}(q_2),2\delta^{-1/2}R^{-1/2}(q_2)]$. This then implies \begin{equation*} f(y)\le f(q_-)+\frac{C}{\sqrt{\delta}}\le f(q_2)-\frac{1}{C\delta}+\frac{C}{\sqrt{\delta}}<f(q_2), \end{equation*} and hence $\partial U_-$ is disjoint from $f^{-1}(f(q_2))$. On the other hand, for any $y\in \partial U_+$, let $q_+$ be a point in $\phi([-1/\delta,1/\delta]\times \{x_{tip}\})$ that is closest to $y$. Then \begin{equation*} f(y)\ge f(q_+)\ge f(q_2)+\frac{1}{C\delta}>f(q_2), \end{equation*} and hence $\partial U_+$ is also disjoint from $f^{-1}(f(q_2))$. So the claim holds. \end{proof} By Lemma \ref{l: 3D neck is 2D neck}, the two $\delta$-cylindrical planes at $\gamma_p(T)$ and $z$ produce two $\delta$-necks in the level set surface $f^{-1}(f(q_2))$: One $\delta$-neck, denoted by $N_1$, is centered at $\gamma_p(T)$ and has scale at most $\frac{1}{2}(1+\delta)$, because $h(\gamma_p(T))\le 2\pi(1+\delta)$ by \eqref{e: ell less than}, and $h(\gamma_p(T))\ge h(p)\ge 2\pi-\epsilon$ by the choice of $p$ and the monotonicity of $h$ along integral curves; the other $\delta$-neck, denoted by $N_2$, is centered at $z$ with scale $R^{-1/2}(q_2)$. By the choice of $N_2$ and $d(\gamma_p(T),q_2)\ge\delta^{-1}R^{-1/2}(q_2)$, it is clear that $N_2$ is in the 2-ball bounded by the central circle of $N_1$. Since $f^{-1}(f(q_2))$ is positively-curved, we can apply Lemma \ref{l: 2D surface} and deduce that \begin{equation*} R^{-1/2}(q_2)\le \frac{1}{2}(1+\delta)^2, \quad\textit{hence}\quad R(q_2)\ge 4(1-\delta). \end{equation*} Now letting $\epsilon$ go to zero, by the monotonicity of $R$ along $\Gamma$, this implies that $R$ is a constant along $\Gamma$.
So $R\equiv4$ on $\Gamma$. So by the soliton identity, \begin{equation*} \Ric(\nabla f,\nabla f)=-\frac{1}{2}\langle\nabla R,\nabla f\rangle=0. \end{equation*} The Ricci curvature thus vanishes along $\Gamma$ in the direction of $\nabla f$. So the soliton splits off a line and it is isometric to $\R\times\cigar$, a contradiction. This proves the existence of a critical point of $f$. \end{proof} The following corollary follows immediately from Theorem \ref{t: Rmax critical point} and Lemmas \ref{l: Gamma} and \ref{l: two rays}. \begin{cor}\label{l: new Gamma} There are two integral curves $\Gamma_i:(-\infty,\infty)\ri M$ of $\nabla f$, $i=1,2$, such that the following hold: \begin{enumerate} \item Let $p$ be the critical point of $f$. Then $\lim_{s\ri-\infty}\Gamma_i(s)=p$; \item The pointed manifolds $(M,r^{-2}(\Gamma_i(s))g,\Gamma_i(s))$ smoothly converge to the manifold $(\R\times\cigar,r^{-2}(x_{tip})g_c,x_{tip})$ as $s\rii$; \item For any $p^{(i)}_k\in\Gamma_i$, $p^{(i)}_k\rii$ as $k\rii$, the minimizing geodesics $pp^{(i)}_k$ subsequentially converge to two rays $\gamma_1,\gamma_2$, such that $[\gamma_1]=0,\,[\gamma_2]=\theta\in S_{\infty}(M,p)=[0,\theta]$, $\theta\in[0,\pi)$. \end{enumerate} \end{cor} \subsection{An ODE lemma for distance distortion estimates} We will use the following ODE lemma of two time-dependent scalar functions to estimate certain distance distortion in Theorem \ref{l: wing-like}. This method generalizes the bootstrap argument in \cite[Theorem 1.3]{Lai2020_flying_wing}, which relies on the $O(2)$-symmetric structure of the soliton. \begin{lem}(An ODE Lemma)\label{l: ODE} Let $H,h:[0,T]\rightarrow(0,\infty)$ be two differentiable functions satisfying the following \begin{equation}\label{e: ODE derivative assump} \begin{cases} H'(t)\ge C_1\cdot h^{-1}(t)\\ h'(t)\le C_2\cdot H^{-2}(t)\cdot h(t), \end{cases} \end{equation} for some constants $C_1,C_2>0$. Suppose \begin{equation}\label{e: C5} \frac{H(0)}{h(0)}> \frac{C_2}{C_1}. \end{equation} Let $C_3:=C_1 h^{-1}(0)-C_2H^{-1}(0)>0$.
Then we have \begin{equation*} \begin{cases} H(t)\ge C_3t+H(0)\\ h(t)\le h(0)e^{\frac{C_2}{C_3H(0)}}, \end{cases} \end{equation*} for all $t\in[0,T]$. \end{lem} In Section \ref{s: asymptotic geometry} we will show that the soliton outside of a compact subset is covered by two regions: The edge region consists of two solid cylindrical chains where the local geometry is close to $\R\times\cigar$, and the almost flat region carries an $S^1$-fibration and the local geometry looks like $\RR\times S^1$. Fix a point $x$ in the almost flat region; the two functions $h(t)$ and $H(t)$ are essentially the length of the $S^1$-fiber at $\phi_t(x)$, and the distance from $\phi_t(x)$ to $\Gamma$, where $\{\phi_t\}_{t\in\R}$ are the diffeomorphisms generated by $\nabla f$. \begin{proof} Dividing both sides of the second inequality in \eqref{e: ODE derivative assump} by $h(t)$ we get \begin{equation*} \pt(\ln h(t))\le C_2H^{-2}(t). \end{equation*} Integrating this from $0$ to $t$ we get \begin{equation*} \ln h(t)\le C_2\int_{0}^{t}H^{-2}(s)ds + \ln h(0), \end{equation*} and hence \begin{equation*} h(t)\le h(0)\,e^{C_2\int_{0}^{t}H^{-2}(s)\,ds}. \end{equation*} Plugging this into the first inequality in \eqref{e: ODE derivative assump} we get \begin{equation*} H'(t)\ge C_1\,h^{-1}(0)\,e^{-C_2\int_0^{t}H^{-2}(s)\,ds}. \end{equation*} Let $H_0(t)$ be a solution to the following problem: \begin{equation}\label{e: H_0} \begin{cases} H_0(0)=H(0)\\ H'_0(t)=C_1\,h^{-1}(0)\,e^{-C_2\int_0^{t}H_0^{-2}(s)\,ds}. \end{cases} \end{equation} Then it is easy to see that \begin{equation}\label{e: H bigger than H0} H(t)\ge H_0(t)>0, \end{equation} for all $t\ge0$. The second equation in \eqref{e: H_0} implies \begin{equation*} \ln (H'_0(t))=\ln (C_1\, h^{-1}(0))-C_2\int_0^t H_0^{-2}(s)ds, \end{equation*} differentiating which at both sides we obtain \begin{equation*} \begin{split} \pt( H'_0(t)&-C_2H_0^{-1}(t))=0.
\end{split} \end{equation*} Integrating this and using \eqref{e: H_0},\eqref{e: C5} we obtain \begin{equation*} H'_0(t)-C_2H_0^{-1}(t)= H'_0(0)-C_2H_0^{-1}(0)=C_1 h^{-1}(0)-C_2H^{-1}(0)=C_3>0. \end{equation*} So by \eqref{e: H bigger than H0} we obtain \begin{equation*} H(t)\ge H_0(t)\ge C_3t+H(0). \end{equation*} Substituting this into the second inequality in \eqref{e: ODE derivative assump} we get \begin{equation*} \pt(\ln h(t))\le \frac{C_2}{(C_3t+H(0))^2}, \end{equation*} integrating which we obtain \begin{equation*} h(t)\le h(0)e^{\frac{C_2}{C_3H(0)}-\frac{C_2}{C_3(C_3t+H(0))}}\le h(0)e^{\frac{C_2}{C_3H(0)}}. \end{equation*} \end{proof} \subsection{Asymptotic cone is not a ray} In this subsection, we show that the scalar curvature has positive limits along the two integral curves $\Gamma_1,\Gamma_2$. As a consequence, the asymptotic cone is isometric to a sector with non-zero angle. A key step in the proof is to choose two suitable functions that evolve by the conditions in Lemma \ref{l: ODE}. Roughly speaking, for a fixed point $x$ at which the soliton is an $\epsilon$-cylindrical plane, the two functions $h(t)$ and $H(t)$ are essentially the time-$t$ length of the $S^1$-fiber at $x$, and the time-$t$ distance to the edges $\Gamma$, where $t$ is the backwards time variable in the Ricci flow of the soliton. These two functions satisfy the inequalities in Lemma \ref{l: ODE}: On the one hand, by Perelman's curvature estimate we will see that the curvature in the almost flat region is bounded above by $H^{-2}(t)$. So by the Ricci flow equation, $h(t)$ evolves by the second inequality in \eqref{e: ODE derivative assump}. On the other hand, the increase of $H(t)$ is contributed by the regions that look like $\R\times\cigar$, whose volume scale is roughly $h(t)$. So $H(t)$ evolves by the first inequality in \eqref{e: ODE derivative assump}. Then applying the lemma we will see that $H(t)$ increases at least linearly, and $h(t)$ stays bounded as $t\rii$.
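The following elementary example (with constants chosen purely for illustration, and not used elsewhere) records the quantitative content of Lemma \ref{l: ODE} in this bootstrap scheme.

\begin{Example}
Take $C_1=C_2=1$, $h(0)=1$ and $H(0)=2$ in Lemma \ref{l: ODE}, so that \eqref{e: C5} holds and $C_3=C_1h^{-1}(0)-C_2H^{-1}(0)=\frac{1}{2}$. The lemma then gives
\begin{equation*}
H(t)\ge \frac{t}{2}+2\quad\textit{and}\quad h(t)\le h(0)\,e^{\frac{C_2}{C_3H(0)}}=e,
\end{equation*}
for all $t\in[0,T]$: the function $H$ grows at least linearly, while $h$ remains bounded by a constant independent of $T$.
\end{Example}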
We first prove a technical lemma using metric comparison. \begin{lem}\label{l: exactly two caps in 2D} There exists $\epsilon>0$ such that the following holds: Let $\Sigma$ be a 2D complete Riemannian manifold with non-negative curvature. Then there cannot be more than two disjoint $\epsilon$-caps. Moreover, suppose there are two disjoint $\epsilon$-caps centered at $p_1,p_2$, and $\Sigma$ is an $\epsilon$-neck at a point $p\in\Sigma$ such that $p$ is not in the two $\epsilon$-caps. Then the central circle at $p$ separates $p_1$ and $p_2$. \end{lem} \begin{proof} For the first claim, suppose by contradiction that there are three disjoint $\epsilon$-caps $\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3$ centered at $p_1,p_2,p_3$. We shall use $\delta(\epsilon)$ to denote all constants that go to zero as $\epsilon$ goes to zero. Assume the minimizing geodesics $p_1p_2,p_1p_3$ intersect the boundary of $\mathcal{C}_1$ at $x_2$ and $x_3$ respectively, which are centers of two $\epsilon$-necks. So we have \begin{equation*} d(x_2,x_3)<\delta(\epsilon)d(x_2,p_1), \end{equation*} which by the monotonicity of angles implies \begin{equation*} \widetilde{\measuredangle}p_2p_1p_3\le\widetilde{\measuredangle}x_2p_1x_3\le \delta(\epsilon). \end{equation*} In the same way we obtain that $\widetilde{\measuredangle}p_1p_2p_3,\widetilde{\measuredangle}p_1p_3p_2\le\delta(\epsilon)$. But then we have \begin{equation*} \widetilde{\measuredangle}p_1p_2p_3+\widetilde{\measuredangle}p_2p_1p_3+\widetilde{\measuredangle}p_1p_3p_2\le3\delta(\epsilon)<\pi, \end{equation*} which is impossible, since the comparison angles of a triangle sum to exactly $\pi$. For the second claim, suppose the central circle at $p$ does not separate $p_1$ and $p_2$. Let $\psi:(-\epsilon^{-1},\epsilon^{-1})\times S^1\ri \Sigma$ be the inverse of the $\epsilon$-isometry of the $\epsilon$-neck at $p$. Let $\gamma_{\pm}=\psi(\{\pm\epsilon^{-1}\}\times S^1)$.
Then after possibly replacing $+$ with $-$, we claim that the minimizing geodesics $pp_1,pp_2,p_1p_2$ are all contained in the component of $\Sigma$ separated by $\gamma_{-}$ which contains $\gamma_{+}$: First, since $p_1,p_2$ are in the same component of $\Sigma$ separated by $\psi(\{0\}\times S^1)$, if $pp_1$ intersects $\gamma_{+}$, then it follows that $pp_2$ also intersects $\gamma_{+}$, and the claim follows by the minimality of these geodesics. By the claim, we can use a similar argument as before to deduce \begin{equation*} \widetilde{\measuredangle}p_1p_2p+\widetilde{\measuredangle}p_2p_1p+\widetilde{\measuredangle}p_1pp_2<\pi, \end{equation*} which is a contradiction. \end{proof} Now we prove the main theorem in this section. First, it states that the soliton is $\mathbb{Z}_2$-symmetric at infinity, in the sense that $R$ has equal positive limits along the two ends of $\Gamma$. Moreover, assuming this positive limit is equal to $4$ after a proper rescaling, any sequence of points going to infinity converges to either $\RR\times S^1$ or $\R\times\cigar$, without any rescalings. We remark that this $\mathbb{Z}_2$-symmetry at infinity is also true in mean curvature flow: A mean curvature flow flying wing in $\R^3$ is a graph over a finite slab. Moreover, the slab width is equal to that of its asymptotic translators, which are two tilted Grim-Reaper hypersurfaces \cite{Spruck2020CompleteTS,white}. \begin{theorem}\label{l: wing-like} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. Let $\Gamma_1,\Gamma_2$ be the two integral curves of $\nabla f$ from Corollary \ref{l: new Gamma}. Then after a rescaling of $(M,g)$, we have \begin{equation*} \lim_{s\rii}R(\Gamma_1(s))=\lim_{s\rii}R(\Gamma_2(s))=4. \end{equation*} Moreover, for any sequence of points $q_k\rii$, the sequence of pointed manifolds $(M,g,q_k)$ converges to either $(\RR\times S^1,g_{stan})$ or $(\R\times\cigar,g_c)$.
In particular, if $\{q_k\}\subset\Gamma_1\cup\Gamma_2$, then $(M,g,q_k)$ converges to $\R\times\cigar$. \end{theorem} \begin{proof} We will first prove $\lim_{s\rii}R(\Gamma_i(s))>0$, $i=1,2$. By Theorem \ref{t: Rmax critical point} we know that $f$ has a unique critical point $x_0$. Assume $f(x_0)=0$. Then it is easy to see that the level sets $f^{-1}(s)$, $s>0$, are diffeomorphic to 2-spheres, and the induced metrics have positive curvature. Suppose by contradiction that $\lim_{s\rii}R(\Gamma_i(s))>0$, $i=1,2$, does not hold. Let $\Gamma_1,\Gamma_2$ be from Corollary \ref{l: new Gamma}, and $\Gamma=\Gamma_1(-\infty,\infty)\cup\Gamma_2(-\infty,\infty)\cup\{x_0\}$. Then the subset $\Gamma$ is invariant under the diffeomorphisms $\phi_t$. Let $C$ denote all positive universal constants, $\epsilon$ denote all positive constants that we may take arbitrarily small, and $\delta$ denote all positive constants that converge to zero as $\epsilon\ri0$. The value of $\delta$ may change from line to line. By the contradiction assumption, we may assume $\lim_{s\rii}R(\Gamma_1(s))=0$. Choose a point $p\in M$ such that $(M,g)$ is an $\epsilon$-cylindrical plane at $p$, and $d(p,\Gamma)=d(p,\Gamma_1)$. Let $\gamma_p(t)$ be the integral curve of $\nabla f$ starting from $p$. Then by Lemma \ref{l: tip contracting} it follows that $(M,g)$ is always an $\epsilon$-cylindrical plane at $\gamma_p(t)$ for $t\ge0$. So we can define $h(\gamma_p(t))$ as in Definition \ref{d: h}. Denote $d(\gamma_p(t),\Gamma)$ by $H(\gamma_p(t))$, and abbreviate $\gamma_p(t)$ as $p_t$.
Let $q_t\in\Gamma$ be a point such that $d(p_t,\Gamma)=d(p_t,q_t)$. By some distortion estimates and Theorem \ref{l: curvature upper bound initial} it is easy to see that \begin{equation}\label{e: H h initial} \begin{cases} \pt H(p_t)\ge C^{-1}\cdot r^{-1}(q_t)\\ \pt h(p_t)\le C\cdot H^{-2}(p_t)\cdot h(p_t). \end{cases} \end{equation} In the following we will show that $r(q_t)<C\,h(p_t)$, so that $H(p_t),h(p_t)$ satisfy the conditions in the ODE Lemma \ref{l: ODE}. First, we claim that $R(q_t)\ri0$ as $t\rii$. This is clear if $\lim_{s\rii}R(\Gamma_2(s))=0$, because $q_t\rii$ as $t\rii$ by Lemma \ref{l: inf achieved on the non-compact portion}. Suppose instead that $\lim_{s\rii}R(\Gamma_2(s))>0$. Then we may assume \begin{equation}\label{e: inf bigger than sup} \sup_{s\in[s_1,\infty)} R(\Gamma_1(s))< 100\inf_{s\in[s_1,\infty)} R(\Gamma_2(s)), \end{equation} for some $s_1>0$. We may also assume by Lemma \ref{l: inf achieved on the non-compact portion} that $q_t\in\Gamma_1([s_1,\infty))\cup\Gamma_2([s_1,\infty))$. Since $d(p,\Gamma)=d(p,\Gamma_1)$, by \eqref{e: inf bigger than sup} and a distance distortion estimate we see that the closest point $q_t$ on $\Gamma$ is always on the segment $\Gamma_1([s_1,\infty))$ for all $t\ge 0$. So $R(q_t)\ri0$, and the claim holds. We fix some sufficiently large $t$ so that $R(q_t)<\frac{1}{2000C^2}$, where the value of $C>0$ will be clear later. For simplicity, we will omit the subscript $t$ in $q_t$ and $p_t$. Let $\Sigma:=f^{-1}(f(q))$. Then $\Sigma$ is diffeomorphic to $S^2$, and it separates $M$ into a bounded component $f^{-1}([0,f(q)))$ diffeomorphic to a 3-ball, and an unbounded component $f^{-1}((f(q),\infty))$ diffeomorphic to $\R\times S^2$. So $\Gamma_2\cap\Sigma\neq\emptyset$. Let $q_2\in\Gamma_2\cap\Sigma$, and we may assume it is an $\epsilon$-tip point. \begin{claim}\label{c: one half} $d_{\Sigma}(p_1,q_2)\ge\frac{1}{2}\,d_{\Sigma}(q,p_1)$.
\end{claim} \begin{proof}[Proof of the claim] Let $\gamma:[0,1]\ri M$ be a minimizing geodesic from $q$ to $p$. Then by Lemma \ref{l: compare f} we have $f(\gamma([0,1]))\ge f(q)$, so $p\in f^{-1}([f(q),\infty))$. Therefore, there is a smooth non-negative function $T:[0,1]\ri\R$ such that $\overline{\gamma}(r):=\phi_{-T(r)}(\gamma(r))\in\Sigma$ for all $r\in[0,1]$. Let $p_1=\overline{\gamma}(1)=\phi_{-T(1)}(p)$. Then by the discussion in the beginning of the proof, we may assume $(M,g)$ is an $\epsilon$-cylindrical plane at $p_1$. First, by the positive curvature and distance shrinking in the Ricci flow $g(t)=\phi_{-t}^*g$ of the soliton, we have \begin{equation}\label{e: two d} h(p_1)\le h(p). \end{equation} Consider the smooth map $\chi:[0,1]\times\R\ri M$ defined by $\chi(r,t)=\phi_t(\overline{\gamma}(r))$. Since $\phi_t$ is the flow of $\nabla f$, we have $f\circ\chi(r,t)=t+f(q)$ and hence $\langle\chi_*(\partial_t),\chi_*(\partial_r)\rangle=\langle\nabla f,\chi_*(\partial_r)\rangle=0$. So we can compute that \begin{equation}\label{e: useful} \begin{split} d(p,q)=L(\gamma)&=\int_0^1|\chi_{*(r,T(r))}(\partial_r)+T'(r)\cdot\chi_{*(r,T(r))}(\partial_t)|\,dr\\ &\ge\int_0^1|\chi_{*(r,T(r))}(\partial_r)|\,dr =\int_0^1|\phi_{T(r)*}(\overline{\gamma}'(r))|\,dr\\ &\ge\int_0^1|\overline{\gamma}'(r)|\,dr=L(\overline{\gamma})\ge d_{\Sigma}(q,p_1), \end{split} \end{equation} where $d_{\Sigma}$ denotes the intrinsic metric on $\Sigma$. Since $|\nabla f|\ge C^{-1}>0$ on $M\setminus B(x_0,1)$, we have \begin{equation}\label{e: one half} \begin{split} d(p,p_1)\le C(f(p)-f(p_1))&=C(f(p)-f(q))\\ &=C\int_0^1\int_0^r\Ric(\gamma'(s),\gamma'(s))\,ds\,dr\\ &\le C\,d(p,q)\int_0^1\Ric(\gamma'(r),\gamma'(r))\,dr \le \frac{1}{2} d(p,q), \end{split} \end{equation} where in the last inequality we used $\int_0^1\Ric(\gamma'(r),\gamma'(r))\,dr\le \frac{1}{2C}$, which follows from the second variation formula and the assumption $R(q)<\frac{1}{2000C^2}$.
By the choice of $q$ we have $d(p,q_2)\ge d(p,\Gamma) =d(p,q)$. Together with inequalities \eqref{e: useful} and \eqref{e: one half} this implies \begin{equation*}\begin{split}\label{e: distance compare for volume comparison} d_{\Sigma}(p_1,q_2)\ge d(p_1,q_2)\ge d(p,q_2)-d(p,p_1) \ge d(p,q)-d(p,p_1)\ge\frac{1}{2}d_{\Sigma}(q,p_1). \end{split}\end{equation*} This proves the claim. \end{proof} \begin{claim}\label{c: compare r} $r(q)\le 1200\,h(p)$. \end{claim} \begin{proof}[Proof of the claim] Since $q$ is an $\epsilon$-tip point, we may assume without loss of generality that $d_{\Sigma}(q,p_1)\ge \overline{D} R^{-1/2}(q)$ where $\overline{D}$ is from Lemma \ref{l: geometry of level set near tip}, because otherwise the claim clearly holds for sufficiently small $\epsilon$. So we can find a point $q'_1$ on the $\Sigma$-minimizing geodesic between $q$ and $p_1$ such that $d_{\Sigma}(q,q'_1)= \overline{D} R^{-1/2}(q)$ and hence \begin{equation}\label{e: point} r(q)\le 10\,r(q'_1). \end{equation} Moreover, by Lemma \ref{l: geometry of level set near tip} it follows that $(M,g)$ is a $\delta$-cylindrical plane at $q'_1$. So by Lemma \ref{l: 3D neck is 2D neck} this implies that the level set $\Sigma$ is a $\delta$-neck at $p_1$ and $q'_1$ at scales $r(p_1)$ and $r(q'_1)$, respectively. We may also assume $d_{\Sigma}(p_1,q'_1)>10\,r(p_1)$, because otherwise $r(q)\le 10 r(q'_1)\le 20\,r(p_1)$ and the claim holds. We will show that the following inequalities hold: \begin{equation}\label{e: VC} d_{\Sigma}(q_2,p_1)\le d_{\Sigma}(q_2,q'_1)\le 3\,d_{\Sigma}(q_2,p_1). \end{equation} For the first inequality in \eqref{e: VC}, since $q_2$ and $q'_1$ are in two disjoint $\delta$-caps, it follows by Lemma \ref{l: exactly two caps in 2D} that the central circle at $p_1$ separates $q'_1$ and $q_2$ in $\Sigma$.
So a minimizing geodesic between $q'_1$ and $q_2$ intersects the central circle at $p_1$, and hence \begin{equation*} d_{\Sigma}(q_2,q'_1)\ge d_{\Sigma}(p_1,q_2)-10 h(p_1)+d_{\Sigma}(p_1,q'_1)\ge d_{\Sigma}(p_1,q_2), \end{equation*} where in the last inequality we used $d_{\Sigma}(p_1,q'_1)>10\,h(p_1)$. For the second inequality in \eqref{e: VC}, by Claim \ref{c: one half} we have \begin{equation*} d_{\Sigma}(q_2,q'_1)\le d_{\Sigma}(q_2,p_1)+d_{\Sigma}(p_1,q'_1) \le d_{\Sigma}(q_2,p_1)+d_{\Sigma}(p_1,q)\le 3\,d_{\Sigma}(q_2,p_1). \end{equation*} By the first inequality in \eqref{e: VC} and the positive curvature on $\Sigma$, we can deduce by the volume comparison that \begin{equation}\label{e: oil} \frac{|\partial B_{\Sigma}(q_2,d_{\Sigma}(q_2,q'_1))|}{d_{\Sigma}(q_2,q'_1)}\le \frac{|\partial B_{\Sigma}(q_2,d_{\Sigma}(q_2,p_1))|}{d_{\Sigma}(q_2,p_1)}. \end{equation} Since $\Sigma$ is a $\delta$-neck at both points $p_1$ and $q'_1$, it is easy to see that \begin{equation*} \frac{1}{2}\le\frac{|\partial B_{\Sigma}(q_2,d_{\Sigma}(q_2,q'_1))|}{r(q'_1)}\le2\quad\textit{and}\quad \frac{1}{2}\le\frac{|\partial B_{\Sigma}(q_2,d_{\Sigma}(q_2,p_1))|}{r(p_1)}\le2. \end{equation*} Therefore, by the second inequality in \eqref{e: VC} and \eqref{e: oil} we get $r(q'_1)\le 12\,r(p_1)$. Then by \eqref{e: point}, $r(p_1)\le 10\,h(p_1)$, and \eqref{e: two d} we get \begin{equation*} r(q)\le 120\,r(p_1)\le 1200\,h(p_1)\le 1200\,h(p). \end{equation*} \end{proof} Restoring the subscript $t$ in $p_t,q_t$, we proved $r(q_t)<1200\,h(p_t)$. So by the evolution inequalities \eqref{e: H h initial} we see that $h(p_t)$ and $H(p_t)$ satisfy the following inequalities \begin{equation*} \begin{cases} \pt H(p_t)\ge (1200C)^{-1}\cdot h^{-1}(p_t),\\ \pt h(p_t)\le C\cdot H^{-2}(p_t)\cdot h(p_t). \end{cases} \end{equation*} Since $p$ is the center of an $\epsilon$-cylindrical plane, by assuming $\epsilon$ to be sufficiently small we have $\frac{H(p)}{h(p)}>1200\,C^2$.
Therefore, the two functions $H(p_t),h(p_t)$ satisfy all assumptions in the ODE Lemma \ref{l: ODE}, applying which we can deduce \begin{equation*} H(p_t)\ge C^{-1}t\quad\textit{and}\quad h(p_t)\le C, \end{equation*} for all sufficiently large $t$. So by Claim \ref{c: compare r} and $r(p_t)\le 10\,h(p_t)$ we obtain \begin{equation*} r(q_t)\le 12000\,h(p_t)\le C. \end{equation*} Note that the points $q_t\in \Gamma_1$ are $\epsilon$-tip points and $q_t\rii$ as $t\rii$. This implies $\lim_{s\rii}R(\Gamma_1(s))>0$, a contradiction. Therefore, we proved $\lim_{s\rii}R(\Gamma_i(s))>0$. Lastly, we prove $\lim_{s\rii}R(\Gamma_1(s))=\lim_{s\rii}R(\Gamma_2(s))$ and the remaining assertions of the theorem. Let $x_1,x_2\in M$ be any two $\epsilon$-cylindrical points. We will show $|h(x_1)-h(x_2)|\le \epsilon$. If this is true, then combined with the convergence to $\R\times\cigar$ after rescalings along $\Gamma_1$ and $\Gamma_2$, it implies the theorem. On the one hand, arguing similarly as before, we can show that \begin{equation}\label{enheng} |h(x_i)-h(\phi_t(x_i))|\le\epsilon,\quad i=1,2. \end{equation} On the other hand, we claim \begin{claim}\label{c: lnt} $\lim_{t\rii} d(\phi_t(x_1),\phi_t(x_2))<\infty$. \end{claim} \begin{proof}[Proof of the claim] First, we have that $d(\phi_t(x_i),\Gamma)\ge d(x_i,\Gamma)+C^{-1}t$, $i=1,2$, so by the distance distortion Lemma \ref{l: distance laplacian} and Theorem \ref{l: curvature upper bound initial} we have \begin{equation*} \frac{d}{dt}d(\phi_t(x_1),\phi_t(x_2))\le \max\left\{\frac{C}{d(x_1,\Gamma)+C^{-1}t},\frac{C}{d(x_2,\Gamma)+C^{-1}t}\right\}, \end{equation*} integrating which we obtain \begin{equation}\label{e: lnt} d(\phi_t(x_1),\phi_t(x_2))\le d(x_1,x_2)+C\ln t. \end{equation} Therefore, for any sufficiently large $t$, let $\gamma:[0,1]\ri M$ be a minimizing geodesic between $\phi_t(x_1),\phi_t(x_2)$; by triangle inequalities we have $d(\gamma([0,1]),\Gamma)> C^{-1}t$.
So by Theorem \ref{l: curvature upper bound initial} we have $\sup_{s\in[0,1]}R(\gamma(s))\le \frac{C}{t^2}$, and hence \eqref{e: lnt} implies \begin{equation*} \frac{d}{dt}d(\phi_t(x_1),\phi_t(x_2))\le\int_{\gamma}\Ric(\gamma'(s),\gamma'(s))\,ds\le \frac{C}{t^{\frac{3}{2}}}, \end{equation*} integrating which proves the claim. \end{proof} Note that the points $\phi_t(x_2)$ converge to a rescaling of $\RR\times S^1$ as $t\rii$, so by Claim \ref{c: lnt} we see that \begin{equation*} |h(\phi_t(x_1))-h(\phi_t(x_2))|\le\epsilon, \end{equation*} for all sufficiently large $t$. Combining this with \eqref{enheng}, we obtain \begin{equation*} |h(x_1)-h(x_2)|\le\epsilon, \end{equation*} which proves the theorem. \end{proof} In the following we show that the soliton is asymptotic to a sector. Therefore, 3D steady gradient solitons are all flying wings except the Bryant soliton. \begin{cor}[Asymptotic to a sector]\label{c: Asymptotic to a sector} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature. If the asymptotic cone of $(M,g)$ is a ray, then $(M,g)$ is isometric to a Bryant soliton. \end{cor} \begin{proof} Suppose that $(M,g)$ is not a Bryant soliton. Let $C>0$ denote all constants depending on the soliton $(M,g)$, and let $\epsilon>0$ be some sufficiently small number. Let $\Gamma_1,\Gamma_2$ be the integral curves from Corollary \ref{l: new Gamma}. By Theorem \ref{l: wing-like} we may assume $\lim_{s\rii}R(\Gamma_i(s))=4$. Let $\Gamma=\Gamma_1([0,\infty))\cup\Gamma_2([0,\infty))$. Let $p\in M$ be the center of an $\epsilon$-cylindrical plane; then we have \begin{equation}\label{e: scallop0} d(\phi_t(p),\Gamma)\ge 1.9\,t+d(p,\Gamma), \end{equation} for $t\ge0$. Since $\max R=R(x_0)\le C$, we have \begin{equation*} d(x_0,\phi_t(p))\le 20\,C\,t, \end{equation*} which combined with \eqref{e: scallop0} implies \begin{equation}\label{e: scallop} d(\phi_t(p),\Gamma)\ge C^{-1}d(x_0,\phi_t(p)) \end{equation} for all large $t$.
Let $q_k\in\Gamma_1$, $\overline{q}_k\in\Gamma_2$ be sequences of points such that $d(x_0,q_k)=d(x_0,\overline{q}_k)$, and let $\sigma_k:[0,1]\ri M$ be a minimizing geodesic connecting $q_k,\overline{q}_k$. Then it is easy to see $d(x_0,\sigma_k([0,1]))\rii$ as $k\rii$. Since $d(x_0,\phi_t(p))\rii$ as $t\rii$, the integral curve $\phi_t(p)$ must pass through the $S^1$-factor of an $\epsilon$-cylindrical plane centered at some point on $\sigma_k((0,1))$. In particular, we can find $t_k>0$ and $s_k\in(0,1)$ such that $t_k\rii$ as $k\rii$, and \begin{equation*} d(\phi_{t_k}(p),\sigma_k(s_k))<2\pi. \end{equation*} Since $d(q_k,\overline{q}_k)\ge d(\sigma_k(s_k),q_k)\ge d(\sigma_k(s_k),\Gamma)$, this implies by the triangle inequality that \begin{equation*} d(q_k,\overline{q}_k) \ge d(\phi_{t_k}(p),\Gamma)-d(\phi_{t_k}(p),\sigma_k(s_k)) \ge d(\phi_{t_k}(p),\Gamma)-2\pi, \end{equation*} which together with \eqref{e: scallop} implies \begin{equation}\label{berry} d(q_k,\overline{q}_k)\ge C^{-1}d(x_0,\phi_{t_k}(p)) \end{equation} for all large $k$. Since $(M,g)$ is not isometric to $\R\times\cigar$, we have \begin{equation*} d(q_k,\overline{q}_k)<(2-C^{-1})d(x_0,q_k). \end{equation*} Combining this with the triangle inequality \begin{equation*} d(x_0,\phi_{t_k}(p))+d(q_k,\overline{q}_k)\ge d(x_0,q_k)+d(x_0,\overline{q}_k)-2\pi=2d(x_0,q_k)-2\pi, \end{equation*} we obtain \begin{equation*} d(x_0,\phi_{t_k}(p))\ge C^{-1}d(x_0,q_k). \end{equation*} So by \eqref{berry} this implies $d(q_k,\overline{q}_k)\ge C^{-1}d(x_0,q_k)$ and thus $\widetilde{\measuredangle}q_kx_0\overline{q}_k\ge C^{-1}$. Lastly, by Lemma \ref{l: two rays}, the minimizing geodesics $x_0q_k,x_0\overline{q}_k$ converge to two rays $\gamma_1,\gamma_2$ with $\widetilde{\measuredangle}(\gamma_1,\gamma_2)\ge C^{-1}>0$. So the soliton is asymptotic to a sector.
\end{proof} \section{Upper and lower curvature estimates}\label{s: curvature etimates} In this section, we prove the two-sided curvature estimates of Theorem \ref{t': curvature estimate}. For the lower bound, Theorem \ref{l: R>e^{-2r}} shows that $R$ decays at most exponentially fast away from $\Gamma$, using the improved Harnack inequality in Corollary \ref{l: Harnack}. For the upper bound, Theorem \ref{t: R upper bd} shows that $R$ decays at least polynomially fast, at any given rate, away from $\Gamma$. Theorem \ref{t: R upper bd} is proved using the quadratic curvature decay from Theorem \ref{l: curvature upper bound initial} and a heat kernel method. \subsection{Improved integrated Harnack inequality} In this subsection, we prove an improved integrated Harnack inequality for Ricci flows with non-negative curvature operators. This improved Harnack inequality will be used to deduce the exponential curvature lower bound in Theorem \ref{l: R>e^{-2r}}. First, we state Hamilton's traced differential Harnack inequality and its integrated version. \begin{theorem} Let $(M,g(t)),t\in(0,T]$ be an n-dimensional Ricci flow with complete time slices and non-negative curvature operator. Assume furthermore that the curvature is bounded on compact time intervals. Then for any $(x,t)\in M\times(0,T]$ and $v\in T_xM$, \begin{equation}\label{e: Harnack} \pt R(x,t)+\frac{R}{t}+2\langle v,\nabla R\rangle+2\Ric(v,v)\ge 0. \end{equation} Moreover, integrating this inequality appropriately yields: For any $(x_1,t_1),(x_2,t_2)\in M\times (0,T]$ with $t_1<t_2$, we have \begin{equation}\label{e: integrated version} \frac{R(x_2,t_2)}{R(x_1,t_1)}\ge\frac{t_1}{t_2} \exp\left(-\frac{1}{2}\frac{d^2_{g(t_1)}(x_1,x_2)}{t_2-t_1}\right).
\end{equation} \end{theorem} \begin{remark} By the soliton identities it is not hard to see that the equality in the differential Harnack inequality \eqref{e: Harnack} is achieved if $(M,g(t))$ is the Ricci flow of an expanding gradient Ricci soliton with non-negative curvature operator and $v=\nabla f_t$. Note we adopt the convention that the flow satisfies $\Ric(g(t))+\frac{1}{2t}g(t)=\nabla^2 f_t$, $t>0$, see e.g. \cite[Chapter 10.4]{HaRF}. \end{remark} \begin{remark} In dimension 2, using $\Ric=\frac{1}{2}R\,g$ one can prove the following slightly better integrated Harnack inequality, \begin{equation}\label{e: integrated version better} \frac{R(x_2,t_2)}{R(x_1,t_1)}\ge\frac{t_1}{t_2} \exp\left(-\frac{1}{4}\frac{d^2_{g(t_1)}(x_1,x_2)}{t_2-t_1}\right). \end{equation} \end{remark} The main result of this subsection shows that \eqref{e: integrated version better} actually holds in all dimensions. Our key observation is the following curvature inequality. \begin{lem}\label{l: Ric compare to R} Let $(M,g)$ be an n-dimensional Riemannian manifold with non-negative curvature operator. Then \begin{equation}\label{e: Ric} \Ric\le \frac{1}{2}\,R\,g. \end{equation} \end{lem} \begin{proof} Let $p\in M$ and choose an orthonormal basis $\{e_i\}_{i=1}^n$ of $T_pM$ under which the Ricci curvature is diagonal: \begin{equation*} \Ric=\mathrm{diag}(\lambda_1,\dots,\lambda_n), \end{equation*} where $\lambda_1\ge\dots\ge\lambda_n$ are the $n$ eigenvalues. Let $k_{ij}=\Rm(e_i,e_j,e_j,e_i)$. Then since $\Rm\ge0$, we have \begin{equation*} k_{1i}\le\sum_{j\neq i}k_{ji}=\lambda_i \end{equation*} for all $i=2,...,n$. So \begin{equation*} \lambda_1=k_{12}+k_{13}+\dots+k_{1n}\le\lambda_2+\lambda_3+\dots+\lambda_n, \end{equation*} that is, $2\lambda_1\le\lambda_1+\dots+\lambda_n=R$. Hence for any $v\in T_pM$ we have \begin{equation*} \Ric(v,v)\le\lambda_1|v|^2\le\left(\frac{\lambda_1+\dots+\lambda_n}{2}\right)|v|^2=\frac{1}{2}\,R\,|v|^2, \end{equation*} which proves the lemma. \end{proof} Now we prove the improved integrated Harnack inequality.
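The proof below combines \eqref{e: Harnack} and Lemma \ref{l: Ric compare to R} with the elementary estimate, valid for all real $a,b$,
\begin{equation*}
a^2-ab=\Big(a-\frac{b}{2}\Big)^2-\frac{b^2}{4}\ge-\frac{b^2}{4},
\end{equation*}
obtained by completing the square; this estimate accounts for the factor $\frac{1}{4}$ in the exponent.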
\begin{theorem}[Improved integrated Harnack inequality] \label{l: Harnack_with_time} Let $(M,g(t))$, $t\in[0,T]$, be a Ricci flow with non-negative curvature operator. Then for any $x_1,x_2\in M$ and $0<t_1<t_2\le T$, we have \begin{equation*} \frac{R(x_2,t_2)}{R(x_1,t_1)}\ge\frac{t_1}{t_2} \exp\left(-\frac{1}{4}\frac{d^2_{g(t_1)}(x_1,x_2)}{t_2-t_1}\right). \end{equation*} \end{theorem} \begin{proof} Taking $v=-\nabla(\log R)(x,t)$ in the Harnack inequality \eqref{e: Harnack} and dividing by $R$, we get \begin{equation}\label{e: Harnack_0} R^{-1}\pt R+\frac{1}{t}-2|\nabla\log R|^2+\frac{2\Ric(\nabla\log R,\nabla\log R)}{R}\ge0. \end{equation} Then by Lemma \ref{l: Ric compare to R} we obtain \begin{equation}\label{e: equality holds on cigar} \frac{\partial}{\partial t}\log(tR)\ge|\nabla \log R|^2. \end{equation} Let $\mu:[t_1,t_2]\rightarrow M$ be a $g(t_1)$-minimizing geodesic from $x_1$ to $x_2$, parametrized with constant speed. Then \begin{equation*} \begin{split} \log\left(\frac{t_2R(x_2,t_2)}{t_1R(x_1,t_1)}\right)&=\int_{t_1}^{t_2}\frac{d}{dt}\log(tR(\mu(t),t))\;dt\\ &=\int_{t_1}^{t_2} \frac{\partial}{\partial t}\log(tR)+\left\langle\nabla\log (tR),\frac{d\mu}{dt}\right\rangle\;dt\\ &\ge\int_{t_1}^{t_2} |\nabla \log R|^2+\left\langle\nabla\log R,\frac{d\mu}{dt}\right\rangle\;dt\\ &\ge\int_{t_1}^{t_2}|\nabla\log R|^2-|\nabla\log R|\left|\frac{d\mu}{dt}\right|\;dt\\ &\ge-\frac{1}{4}\int_{t_1}^{t_2}\left|\frac{d\mu}{dt}\right|^2\;dt \ge-\frac{1}{4}\frac{d^2_{g(t_1)}(x_1,x_2)}{t_2-t_1}. \end{split} \end{equation*} \end{proof} Note that if moreover the Ricci flow $(M,g(t))$ is ancient, then \eqref{e: equality holds on cigar} becomes \begin{equation}\label{e: equality holds on cigar2} \frac{\partial}{\partial t}\log R\ge|\nabla \log R|^2. \end{equation} In particular, the following integrated Harnack inequality is a direct consequence of Theorem \ref{l: Harnack_with_time}. \begin{cor}[Improved Harnack inequality, ancient flow]\label{l: Harnack} Let $(M,g(t))$, $t\in(-\infty,0]$, be a Ricci flow.
Suppose $\Rm_{g(t)}\ge0$ for all $t\in(-\infty,0]$. Then for any $x_1,x_2\in M$ and $t_1<t_2$, we have \begin{equation*} \frac{R(x_2,t_2)}{R(x_1,t_1)}\ge \exp\left(-\frac{1}{4}\frac{d^2_{g(t_1)}(x_1,x_2)}{t_2-t_1}\right). \end{equation*} \end{cor} \begin{remark} Note that the equality in Lemma \ref{l: Ric compare to R} holds for any 2-dimensional solution. In the cigar soliton, it is easy to see that the equality in \eqref{e: equality holds on cigar2} is achieved, but the equality is lost in the integrated version. Nevertheless, the factor $\frac{1}{4}$ in Corollary \ref{l: Harnack} is still sharp in the sense that using it we can obtain a curvature lower bound on the cigar soliton which is arbitrarily close to the actual curvature decay in the cigar at infinity: Let $(\Sigma,g_c)$ be a cigar soliton and $R(x_{tip})=4$. Let $(\Sigma,g_c(t))$ be the Ricci flow of the soliton. For any $x\in \Sigma$, let $t=-\frac{d_0(x,x_{tip})}{2}$; then by the distance distortion estimate \eqref{e: integrate Ricci} in $g_c(t)$, we have \begin{equation*} d_t(x,x_{tip})\le d_0(x,x_{tip})+(2-\epsilon)(-t), \end{equation*} where $\epsilon>0$ denotes all constants depending on $d_0(x,x_{tip})$, such that $\epsilon\ri0$ as $d_0(x,x_{tip})\rii$. So applying the improved Harnack inequality we get \begin{equation*} R(x,0)\ge R(x_{tip},t)\,e^{-(2-\epsilon)\,d_0(x,x_{tip})}= 4\,e^{-(2-\epsilon)\,d_0(x,x_{tip})}. \end{equation*} This can be compared with the curvature formula of the cigar soliton \eqref{e: cigar R}, \begin{equation*} R(x,0)=\frac{16}{(e^{d_0(x,x_{tip})}+e^{-d_0(x,x_{tip})})^2}\le 16 \,e^{-2\,d_0(x,x_{tip})}. \end{equation*} \end{remark} \subsection{Exponential lower bound of the curvature} In this subsection we use the improved Harnack inequality to deduce the exponential curvature lower bound. \begin{theorem}[Scalar curvature exponential lower bound]\label{l: R>e^{-2r}} Let $(M,g,f,p)$ be a 3D steady gradient soliton that is not a Bryant soliton.
Assume $\lim_{s\rii}R(\Gamma_1(s))=\lim_{s\rii}R(\Gamma_2(s))=4$. Then for any $\epsilon_0>0$, there exists $C>0$ such that \begin{equation}\label{e: conde} R(x)\ge C^{-1}e^{-2(1+\epsilon_0)d_g(x,\Gamma)}. \end{equation} \end{theorem} \begin{proof} For any $\epsilon_0>0$, let $\epsilon>0$ denote all small constants depending on $\epsilon_0$ whose values may change from line to line. Let $(M,g(t))$, $t\in(-\infty,\infty)$, be the Ricci flow associated to the soliton $(M,g)$, $g(0)=g$. Consider the subset $U$ consisting of all points $x$ such that, for all $t\le0$, the distance $d_t(x,\Gamma)$ is achieved at an $\epsilon$-tip point on $\Gamma$. Then it is clear by Lemma \ref{l: inf achieved on the non-compact portion} that the complement of $U$ is compact. So we can find a constant $C>0$ such that $R\ge C^{-1}$ on $M\setminus U$. Therefore, it suffices to prove the curvature lower bound \eqref{e: conde} for points $(x,0)\in U\times\{0\}\subset M\times(-\infty,\infty)$. Let $x\in U$ and $t\le0$. By a distance distortion estimate we have \begin{equation}\label{e: seminar} -\frac{d}{dt}d_t(x,\Gamma)\le\sup_{\gamma\in\mathcal{Z}(t)}\int_{\gamma}\Ric(\gamma'(s),\gamma'(s))\,ds, \end{equation} where the derivative is the backward difference quotient, and $\mathcal{Z}(t)$ is the space of all minimizing geodesics $\gamma$ which realize the distance $d_t(x,\Gamma)$. For any such $\gamma$ connecting $x$ to a point $y_t\in\Gamma$ with $d_t(x,y_t)=d_t(x,\Gamma)$, we have that $y_t$ is an $\epsilon$-tip point since $x\in U$, and $\gamma$ is orthogonal to $\Gamma$ at $y_t$. Moreover, by the assumption $\lim_{s\rii}R(\Gamma_1(s))=\lim_{s\rii}R(\Gamma_2(s))=4$ we have \begin{equation}\label{e: oo} R(y_t,t)\ge4-\epsilon. \end{equation} So by taking $\epsilon$ small, \eqref{e: seminar} implies \begin{equation*} -\frac{d}{dt}d_t(x,\Gamma)\le2(1+\epsilon_0), \end{equation*} integrating which we get \begin{equation}\label{e: tt} d_t(x,y_t)=d_t(x,\Gamma)\le d_0(x,\Gamma)+2(1+\epsilon_0)(-t).
\end{equation} Now applying Corollary \ref{l: Harnack} (the improved Harnack inequality) and using \eqref{e: oo},\eqref{e: tt} we obtain \begin{equation*} \begin{split} R(x,0)\ge R(y_t,t)e^{-\frac{d^2_{t}(x,y_t)}{4(-t)}} \ge (4-\epsilon) e^{-\frac{(d_0(x,\Gamma)+2(1+\epsilon_0) (-t))^2}{4(-t)}}. \end{split} \end{equation*} Letting $t=-\frac{d_0(x,\Gamma)}{2}$, this implies \begin{equation*} R(x,0)\ge (4-\epsilon)e^{-2(1+\epsilon_0)d_0(x,\Gamma)}. \end{equation*} \end{proof} \begin{remark} Note that this curvature estimate is sharp: In the manifold $\R\times\cigar$, the curvature decays like $e^{-2\,d_g(\cdot,\Gamma)}$ up to a constant factor, so the rate in our lower bound $C^{-1}e^{-2(1+\epsilon_0)\,d_g(\cdot,\Gamma)}$ can be made arbitrarily close to the actual decay rate by taking $\epsilon_0$ small. \end{remark} \subsection{Polynomial upper bound of the curvature} In Theorem \ref{t: R upper bd} we show that the quadratic curvature decay from Theorem \ref{l: curvature upper bound initial} can be improved to polynomial decay at any rate. The proof relies on the following heat kernel estimate. This estimate shows that the heat kernel starting from $(x,t)$ behaves like a Gaussian centered at $(x,s)$ for all $s< t-2$. For $s\in[t-2,t)$, the Gaussian bound also holds by Lemma \ref{l: heat kernel lower bound implies upper bound}. \begin{lem}\label{l: L-geodesic} Let $(M,g,f,p)$ be a 3D steady gradient soliton that is not a Bryant soliton and $(M,g(t))$ be the Ricci flow of the soliton. Let $G(x,t;y,s)$, $x,y\in M$, $s< t-2$, be the heat kernel of the heat equation $\pt u=\Delta u$ under $g(t)$. Then there exists $C>0$ such that \begin{equation*} G(x,t;y,s)\le C\,(t-s)^\frac{3}{2}\, \textnormal{exp}\left(-\frac{d^2_s(x,y)}{4C(t-s)}\right). \end{equation*} \end{lem} \begin{proof} After a rescaling we assume $\lim_{s\rii}R(\Gamma_1(s))=\lim_{s\rii}R(\Gamma_2(s))=4$. We shall use $C$ to denote all constants depending only on the soliton $(M,g)$.
Without loss of generality, we may assume $s=0$. For any $s\in[0,1]$ and $z\in B_s(x,1)$, let $\gamma:[s,t]\ri M$ be a curve such that $\gamma|_{[s,2]}$ is a minimizing geodesic connecting $x$ and $z$ with respect to $g(0)$, and $\gamma|_{[2,t]}\equiv y$. For any $\tau\in[0,t-s]$, by Theorem \ref{l: curvature upper bound initial} we have $R(y,t-\tau)\le \frac{C}{r^2(y,t-\tau)}$, where we denote $d_t(x,\Gamma)$ by $r(x,t)$. Moreover, by Theorem \ref{l: wing-like} and distance distortion estimates, we have \begin{equation}\label{e: distance change} r(x,t)+1.9(t-s)\le r(x,s)\le r(x,t)+2.1\,(t-s). \end{equation} So $R(y,t-\tau)\le \frac{C}{(r(x,t)+C^{-1}\tau)^2}$. For $\tau\in[t-2,t-s]$, we have $R(\gamma(t-\tau),t-\tau)\le C$ and $|\gamma'|(t-\tau)\le C$. Putting these together we can estimate the $\LL$-length of $\gamma$, \begin{equation*} \begin{split} \mathcal{L}(\gamma)&=\int_{0}^{t-2}\sqrt{\tau}R(y,t-\tau)\,d\tau+ \int_{t-2}^{t-s}\sqrt{\tau}(R(\gamma(t-\tau),t-\tau)+|\gamma'|^2)\,d\tau\\ &\le \int_{0}^{t-2}\sqrt{\tau}\frac{C}{(r(x,t)+C^{-1}\tau)^2}\,d\tau+\int_{t-2}^{t-s}C\sqrt{\tau}\,d\tau\le C\sqrt{t}. \end{split} \end{equation*} Let $\ell(z,s):=\ell_{(x,t)}(z,s)$ be the reduced length from $(x,t)$ to $(z,s)$; then \begin{equation*} \ell(z,s)=\frac{\mathcal{L}_{(x,t)}(z,s)}{2\sqrt{t-s}}\le \frac{\mathcal{L}(\gamma)}{2\sqrt{t-s}}\le C. \end{equation*} Recalling the heat kernel lower bound of Perelman \cite[Corollary 9.5]{Pel1}, we get \begin{equation}\label{reduced length} G(x,t;z,s)\ge \frac{1}{4\pi(t-s)^{3/2}}e^{-\ell(z,s)}\ge\frac{C}{t^{\frac{3}{2}}}, \end{equation} for all $s\in[0,1]$ and $z\in B_s(x,1)$, integrating which over $B_s(x,1)$ we get \begin{equation}\label{low} \int_{B_s(x,1)}G(x,t;z,s)\,d_sz\ge \frac{C}{t^{3/2}}, \end{equation} for all $s\in [0,1]$.
Let $y\in M$. Then by the multiplication inequality for the heat kernel in \cite[Theorem 1.30]{HN} we have \begin{equation}\label{e: hein-naber} \left(\int_{B_s(x,1)}G(x,t;z,s)\,d_sz\right)\left(\int_{B_s(y,1)}G(x,t;z,s)\,d_sz\right)\le C\, \textnormal{exp}\left(-\frac{(d_s(x,y)-2)^2}{4C(t-s)}\right). \end{equation} So by substituting \eqref{low} into \eqref{e: hein-naber} and using the distance distortion estimate $d_s(x,y)-2\ge C^{-1}d_0(x,y)-2\ge (2C)^{-1}d_0(x,y)$, we obtain \begin{equation*} \left(\int_{B_s(y,1)}G(x,t;z,s)\,d_sz\right)\le C\,t^\frac{3}{2}\, \textnormal{exp}\left(-\frac{d_0(x,y)^2}{4C(t-s)}\right). \end{equation*} Integrating this over $s\in[0,1]$, and then applying the parabolic mean value inequality (see e.g. \cite{RFTandA3}) to $G(x,t;\cdot,\cdot)$ at $(y,0)$, we obtain \begin{equation*} G(x,t;y,0)\le C\,t^\frac{3}{2}\, \textnormal{exp}\left(-\frac{d^2_0(x,y)}{4Ct}\right). \end{equation*} \end{proof} \begin{theorem}[Scalar curvature polynomial upper bound]\label{t: R upper bd} Let $(M,g,f)$ be a 3D steady gradient soliton that is not a Bryant soliton. Then for any integer $k\ge2$, there exists $C_k>0$ such that \begin{equation*} R\le \frac{C_k}{d_g^k(\cdot,\Gamma)}. \end{equation*} \end{theorem} \begin{proof} By Theorem \ref{l: curvature upper bound initial} this is true for $k=2$. Let $(M,g(t))$ be the Ricci flow of the soliton. We denote $d_t(x,\Gamma)$ by $r(x,t)$. After a rescaling we assume $\lim_{s\rii}R(\Gamma_1(s))=\lim_{s\rii}R(\Gamma_2(s))=4$, so \eqref{e: distance change} holds. Suppose by induction that this is true for some $k\ge2$; we will show that it is also true for $k+1$. In the following $C$ denotes all positive constants that depend on $k$, the maximum of $R$ and the limits of $R$ at the two ends of $\Gamma$.
Since $R$ satisfies the evolution equation \begin{equation*} \partial_tR=\Delta R+2|\Ric|^2, \end{equation*} for a fixed pair $(x,t)\in M\times(-\infty,\infty)$ we have \begin{equation*}\begin{split} R(x,t)&=\int_M G(x,t;y,s)R(y,s)\,d_sy+2\int_s^t\int_M G(x,t;z,\tau)|\Ric|^2(z,\tau)\,d_{\tau}z\,d\tau\\ &=:I(s)+II(s). \end{split}\end{equation*} First, we claim that $\lim_{s\ri-\infty}I(s)=0$. To show this, we split $I(s)$ into two integrals on $B_s(x,\frac{r(x,s)}{1000})$ and $M\setminus B_s(x,\frac{r(x,s)}{1000})$, and denote them respectively by $I_1(s)$ and $I_2(s)$. Then for $I_1(s)$, using $\int_M G(x,t;y,s)\,d_sy=1$ we can estimate that \begin{equation*} I_1(s)\le\left(\int_M G(x,t;y,s)\,d_sy\right)\cdot\left(\sup_{B_{s}(x,\frac{r(x,s)}{1000})}R(\cdot,s)\right)=\sup_{B_{s}(x,\frac{r(x,s)}{1000})}R(\cdot,s). \end{equation*} For any $y\in B_{s}(x,\frac{r(x,s)}{1000})$, we have $r(y,s)\ge r(x,s)-\frac{r(x,s)}{1000}\ge\frac{r(x,s)}{2}$. So $R(y,s)\le\frac{C}{r^k(x,s)}$ by the inductive assumption. So it follows by \eqref{e: distance change} that $I_1(s)\le\frac{C}{r^k(x,s)}$, which goes to zero as $s\ri-\infty$. For $I_2(s)$, since by \eqref{e: distance change} we have \begin{equation*} \frac{d_s^2(y,x)}{t-s}\ge C^{-1}r(x,s)\ge C^{-1}(t-s), \end{equation*} for all $y\in M\setminus B_s(x,\frac{r(x,s)}{1000})$, it follows by the heat kernel estimates Lemma \ref{l: L-geodesic} and Lemma \ref{l: heat kernel lower bound implies upper bound} that \begin{equation*} G(x,t;y,s)\le\,C\,e^{-\frac{d_s^2(y,x)}{C(t-s)}}, \end{equation*} which implies \begin{equation*} I_2(s)\le\,C\,\int_{M\setminus B_s(x,\frac{r(x,s)}{1000})}e^{-\frac{d_s^2(y,x)}{C(t-s)}}\,d_sy\le Ce^{-\frac{r(x,s)}{C}}\ri0, \quad\textit{as }s\ri-\infty. \end{equation*} Next, we estimate $II(s)$.
Let \begin{equation*} J(\tau):=\int_M G(x,t;z,\tau)|\Ric|^2(z,\tau)\,d_{\tau}z, \end{equation*} and split it into two integrals on $B_{\tau}(x,\frac{r(x,\tau)}{1000})$ and $M\setminus B_{\tau}(x,\frac{r(x,\tau)}{1000})$, and denote them respectively by $J_1(\tau)$ and $J_2(\tau)$. Then by a similar argument as above and using the inductive assumption, we see that $J_1(\tau)\le\frac{C}{r^{2k}(x,\tau)}$ and $J_2(\tau)\le C\,e^{-\frac{r(x,\tau)}{C}}\le \frac{C}{r^{2k}(x,\tau)}$. Therefore, integrating $J(\tau)$ over $[s,t]$ we obtain \begin{equation*}\begin{split} II(s)&=2\int_s^tJ(\tau)\,d\tau=2\int_s^t(J_1(\tau)+J_2(\tau))\,d\tau \le\int_s^t\frac{C}{r^{2k}(x,\tau)}\,d\tau\\ &\le\int_s^t\frac{C}{(r(x,t)+1.9\,(t-\tau))^{2k}}\,d\tau\\ &=C\left(\frac{1}{r^{2k-1}(x,t)}-\frac{1}{(r(x,t)+1.9\,(t-s))^{2k-1}}\right) \le\frac{C}{r^{2k-1}(x,t)}. \end{split}\end{equation*} Combining this with the estimate on $I(s)$, it follows that \begin{equation*} R(x,t)\le\limsup_{s\ri-\infty}(I(s)+II(s))\le\frac{C}{r^{2k-1}(x,t)}. \end{equation*} Since $k\ge2$, we have $2k-1\ge k+1$, which proves the theorem by induction. \end{proof} \section{Symmetry improvement theorems}\label{s: semi-local} In this section we will study the Ricci DeTurck perturbations $h$ whose background metric is a $SO(2)$-symmetric complete Ricci flow which is sufficiently close to the cylindrical plane $\RR\times S^1$. Such a symmetric 2-tensor can be decomposed as $h=h_++h_-$, where $h_+$ is the rotationally invariant mode and $h_-$ is the oscillatory mode. We show that the oscillatory mode $h_-$ decays in time exponentially in a certain sense. We will first prove the linear version of this symmetry improvement theorem, that is, the oscillatory mode of a linearized Ricci DeTurck flow on $\RR\times S^1$ decays exponentially in time. Then we can obtain the theorem from its linear version by using a limiting argument.
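The basic mechanism is already visible in a model computation on the circle factor (a sketch for orientation only; the actual argument below works with the full linearized equation and the plane directions): expanding an oscillatory function on $S^1$ in Fourier modes and solving the heat equation term by term, every mode decays at least like $e^{-t}$,
\begin{equation*}
u(\theta,0)=\sum_{j=1}^{\infty}a_j\cos(j\theta)+b_j\sin(j\theta)\quad\Longrightarrow\quad u(\theta,t)=\sum_{j=1}^{\infty}\left(a_j\cos(j\theta)+b_j\sin(j\theta)\right)e^{-j^2t},
\end{equation*}
so that $|u|(\cdot,t)\le e^{-t}\sum_{j=1}^{\infty}(|a_j|+|b_j|)$ for $t\ge0$. The absence of the $j=0$ mode, i.e.\ of the rotationally invariant part, is exactly what produces the exponential decay.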
More explicitly, $|h_-|$ decays exponentially in time in the following sense: First, if $|h_-|$ is initially bounded uniformly by a constant, then the theorem shows that it decays as $e^{-\delta_0t}$ for some $\delta_0>0$. Moreover, if $|h_-|(\cdot,0)$ has an exponential growth in the space direction, then the theorem shows that $|h_-|(\cdot,t)$ still decays as $e^{-\delta_0t}$ modulo the same exponential growth rate in the space direction. \subsection{SO(2)-decomposition of a symmetric 2-tensor} For a 3D Riemannian manifold $(M,g)$, we say it is $SO(2)$-symmetric if it admits an effective isometric $SO(2)$-action. Equivalently, this means that there is a one parameter group of isometries $\psi_{\theta}$, $\theta\in\R$, such that $\psi_{\theta}=\mathrm{id}$ if and only if $\theta=2k\pi$, $k\in\mathbb{Z}$. Throughout this section, we will moreover assume that $(M,g)$ is a 3D $SO(2)$-symmetric Riemannian manifold such that there exists a 2D Riemannian manifold $(N,g_0)$ and a Riemannian submersion $\pi:(M,g)\ri(N,g_0)$ which maps an orbit of the $SO(2)$-action to a point in $N$. This can be ensured when the $SO(2)$-action is free. Let $U$ be a local coordinate chart on $N$ with coordinates $\rho:(x,y)\in U_0\subset\RR\ri\rho(x,y)\in U$. Take a section $s: U\ri\pi^{-1}(U)\subset M$. Parametrize $SO(2)$ by $\theta:[0,2\pi)\ri SO(2)$. Then we obtain local coordinates on $\pi^{-1}(U)$ by $(x,y,\theta)\ri \theta\cdot s(\rho(x,y))$. Let $h$ be a symmetric 2-tensor on $M$, and let \begin{equation}\label{e: h_+} h_+(y):=\frac{1}{2\pi}\int_0^{2\pi}(\theta^*h)(y)\,d\theta, \end{equation} and $h_-:=h-h_+$. Then $h_+,h_-$ are two symmetric 2-tensors. For any $\theta_0\in SO(2)$, we have \begin{equation*} \theta_0^*h_+=\frac{1}{2\pi}\int_0^{2\pi}\theta_0^*(\theta^*h)\,d\theta=\frac{1}{2\pi}\int_0^{2\pi}(\theta_0+\theta)^*h\,d\theta=\frac{1}{2\pi}\int_{\theta_0}^{2\pi+\theta_0}\theta^*h\,d\theta=h_+.
\end{equation*} So we say $h_+$ is the rotationally invariant part and $h_-$ is the oscillatory part of $h$, and $h=h_++h_-$ the $SO(2)$-decomposition of $h$. Similarly, we say $h$ is rotationally invariant if $h=h_+$, and oscillatory if $h_+=0$. We now analyze the structure of the oscillatory mode more carefully. Since the one-forms $dx,dy,d\theta$ are invariant under the $SO(2)$-action, it follows that the basis elements $\{dx^2,dy^2,dxdy,dxd\theta,dyd\theta,d\theta^2\}$ of the space of all symmetric 2-tensors are rotationally invariant. So the $SO(2)$-decomposition of $h$ reduces to the decomposition of its components in this basis: $h$ can be written in the local coordinates as \begin{equation*} h=F_{1}dx^2+F_{2}dy^2+F_3dxdy+F_4dxd\theta+F_5dyd\theta+F_6d\theta^2, \end{equation*} where $F_i:U_0\times S^1\ri\R$ are functions of $(x,y,\theta)$. Let $F_{i,\pm}$ be the $i$-th component in $h_{\pm}$. Then \begin{equation*} F_{i,+}(x,y,\theta)=\frac{1}{2\pi}\int_0^{2\pi}F_{i}(x,y,\theta')\,d\theta', \end{equation*} which is independent of $\theta$, and \begin{equation*} F_{i,-}(x,y,\theta)=\sum_{j=1}^{\infty}A_{i,j}(x,y)\cos(j\theta)+B_{i,j}(x,y)\sin(j\theta), \end{equation*} where \begin{equation*}\begin{split} A_{i,j}(x,y)&=\frac{1}{\pi}\int_0^{2\pi}F_i(x,y,\theta')\cos(j\theta')\,d\theta',\\ B_{i,j}(x,y)&=\frac{1}{\pi}\int_0^{2\pi}F_i(x,y,\theta')\sin(j\theta')\,d\theta'. \end{split} \end{equation*} We have the following observations: Suppose $\{M_i,g_i,x_i\}$ is a sequence of $SO(2)$-symmetric Riemannian manifolds which smoothly converges to a $SO(2)$-symmetric Riemannian manifold $(M_{\infty},g_{\infty},x_{\infty})$, and the convergence is $SO(2)$-equivariant. Suppose also that $h_i$ is a sequence of symmetric 2-tensors on $M_i$ that smoothly converges to a symmetric 2-tensor $h_{\infty}$ on $M_{\infty}$. Write $h_i=h_{i,+}+h_{i,-}$ and $h_{\infty}=h_{\infty,+}+h_{\infty,-}$ for the $SO(2)$-decomposition.
Then $h_{i,+}$ smoothly converges to $h_{\infty,+}$, and $h_{i,-}$ smoothly converges to $h_{\infty,-}$. \subsection{A symmetry improvement theorem in the linear case} Note that a straightforward computation shows that the decomposition $h=h_++h_-$ is compatible with the linearized Ricci DeTurck flow $\partial_th=\Delta_Lh$ on the cylindrical plane $\RR\times S^1$. In the following, we consider an oscillatory symmetric 2-tensor $h$ on $\R^2\times S^1$ which solves the linearized Ricci DeTurck equation. Assume that $|h|(\cdot,0)$ satisfies an exponential growth bound. Then the following proposition shows that $|h|$ decays exponentially in time in a certain sense. \begin{prop}(On $\RR\times S^1$, linear)\label{l: h on R2xS1} Let $\delta_0=0.01$. There exists $T_0>0$ such that the following holds for all \begin{equation*} \alpha\in[0,2.02],\quad T\ge T_0. \end{equation*} Let $h(\cdot,t)$, $t\in[0,T]$, be a continuous family of oscillatory tensors on $\RR\times S^1$, which is smooth on $t\in(0,T]$ and satisfies the linearized Ricci DeTurck flow $\partial_t h=\Delta_L h$. Suppose we have \begin{equation}\label{e: bound on |h(0)|} |h(x,y,\theta,0)|\le A\,e^{\alpha\sqrt{x^2+y^2}} \end{equation} for any $(x,y,\theta)\in\RR\times S^1$. Then \begin{equation*} |h|(0,0,\theta,T)\le A\,e^{2\alpha T}\cdot e^{-\delta_0 T}. \end{equation*} \end{prop} Note that at $(0,0,\theta)$, the upper bound $|h|\le A$ at time $0$ becomes $|h|\le A\,e^{2\alpha T}\cdot e^{-\delta_0T}$ at time $T$. For $\alpha=0$, the bound at time $T$ is $|h|\le A\cdot e^{-\delta_0T}$, in which case the exponential decay is clear. For $\alpha\neq0$, there is an extra increasing factor $e^{2\alpha T}$ which seems to cancel out the effect of the decreasing factor $e^{-\delta_0T}$. In this case, the exponential decay rate is measured by the time-dependent distance to the `base point' in a suitable Ricci flow.
So the increasing factor $e^{2\alpha T}$ will be compensated by the shrinking of the distance as one moves forward along the flow. In Section \ref{s: Approximating $SO(2)$-symmetric metrics}, we will apply the non-linear version of this proposition to the 3D flying wing with $\lim_{s\rii}R(\Gamma_i(s))=4$, $i=1,2$. We will consider $h$ satisfying the initial bound $|h|\le e^{\alpha\,d_g(\cdot,\Gamma)}$. Since the soliton converges to $\R\times\cigar$ along $\Gamma$, it follows that the distance to $\Gamma$ shrinks at a speed arbitrarily close to $2$. This will outweigh the growth caused by $e^{2\alpha T}$. It is crucial that $\alpha$ can be slightly greater than $2$ since we will rely on this to find a $SO(2)$-symmetric metric sufficiently close to the soliton metric so that the error decays like $e^{-(2+\delta)\,d_g(\cdot,\Gamma)}$ for some small but positive $\delta$. So the error can decay faster than the scalar curvature as a consequence of Theorem \ref{t': curvature estimate}. \begin{proof} Since $h$ is oscillatory, we can write it as \begin{equation*}\begin{split} h(x,y,\theta,t)=F_1&(x,y,\theta,t)dx^2+F_2(x,y,\theta,t)dxdy+F_3(x,y,\theta,t)dy^2\\ &+F_4(x,y,\theta,t)dxd\theta+F_5(x,y,\theta,t)dyd\theta +F_6(x,y,\theta,t)d\theta^2, \end{split}\end{equation*} where the $F_i$ are of the following form \begin{equation*}\begin{split}\label{e: F_i} F_i(x,y,\theta,t)&=\sum_{j=1}^{\infty}A_{i,j}(x,y,t)\cos (j\theta) + B_{i,j}(x,y,t)\sin (j\theta),\\ A_{i,j}(x,y,t)&=\frac{1}{\pi}\int_0^{2\pi}F_i(x,y,\theta',t)\cos(j\theta')\,d\theta'\\ B_{i,j}(x,y,t)&=\frac{1}{\pi}\int_0^{2\pi}F_i(x,y,\theta',t)\sin(j\theta')\,d\theta'. \end{split}\end{equation*} So by the assumption \eqref{e: bound on |h(0)|} we have $|F_i(x,y,\theta,0)|\le A\,e^{\alpha\sqrt{x^2+y^2}}$, and hence \begin{equation}\label{e: A and B upper bound} |A_{i,j}|(x,y,0),|B_{i,j}|(x,y,0)\le 2A\,e^{\alpha\sqrt{x^2+y^2}}.
\end{equation} Since $h$ satisfies $\partial_th=\Delta_L h$, which in the coordinate $(x,y,\theta)$ is equivalent to \begin{equation}\label{e: solve} \partial_t F_i(x,y,\theta,t)=(\partial_{xx}+\partial_{yy}+\partial_{\theta\theta})F_i(x,y,\theta,t)=(\Delta_{\R^2}+\partial_{\theta\theta})F_i(x,y,\theta,t). \end{equation} Solving \eqref{e: solve} term by term we see \begin{equation*} \begin{split} \partial_t A_{i,j}= \Delta_{\R^2} A_{i,j}-j^2\,A_{i,j},\quad \partial_t B_{i,j}= \Delta_{\R^2} B_{i,j}-j^2\,B_{i,j}. \end{split} \end{equation*} So $A_{i,j}(x,y,t)\cdot e^{j^2(t-T)}$ and $B_{i,j}(x,y,t)\cdot e^{j^2(t-T)}$ satisfy the heat equation on $\R^2$. In the following, we will estimate these terms from above at $(0,0,T)$. For convenience, we will omit the indices for a moment and let \begin{equation}\label{e: firstapril} u(x,y,t)=A_{i,j}(x,y,t)\cdot e^{j^2(t-T)}. \end{equation} Then $u$ satisfies the heat equation \begin{equation*} \partial_t u=\Delta_{\R^2} u, \end{equation*} and by \eqref{e: A and B upper bound} we have \begin{equation}\label{e: april} |u|(x,y,0)\le 2A\,e^{\alpha\sqrt{x^2+y^2}}\cdot e^{-j^2T}. \end{equation} Since $\cos 0.4\ge \frac{1}{1.1}$, it follows that for any $(x,y)\in\RR$, there is \begin{equation*} \sqrt{x^2+y^2}\le 1.1\,(x\cos\theta+y\sin\theta), \end{equation*} for all $\theta\in[\theta_0-0.4,\theta_0+0.4]$, where $\theta_0$ satisfies $\cos\theta_0=\frac{x}{\sqrt{x^2+y^2}}$ and $\sin\theta_0=\frac{y}{\sqrt{x^2+y^2}}$. So for any $\alpha$ we have that \begin{equation}\label{e: exp} e^{\alpha\sqrt{x^2+y^2}}\le \frac{1}{0.8} \int_0^{2\pi}e^{1.1\,\alpha(x\cos\theta+y\sin\theta)} \;d\theta. \end{equation} Let \begin{equation*} v(x,y,t)=2A\,e^{-j^2T}\cdot e^{(1.1\alpha)^2t}\cdot\frac{1}{0.8} \int_0^{2\pi}e^{(1.1\alpha)(x\cos\theta+y\sin\theta)} \;d\theta. 
\end{equation*} For any fixed $\theta$, by a straightforward computation we see that the function $e^{(1.1\alpha)^2t}\cdot e^{(1.1\alpha)(x\cos\theta+y\sin\theta)}$ is a solution to the heat equation on $\R^2$. So it follows that $v$ also satisfies the heat equation, i.e. \begin{equation*} \partial_t v=\Delta_{\R^2} v. \end{equation*} Note by \eqref{e: april} and \eqref{e: exp} we have $|u|(x,y,0)\le v(x,y,0)$. Moreover, by Lemma \ref{l: heat equation} we have a linear exponential growth bound on $|u|(x,y,t)$ for all later times $t\in[0,T]$ which may depend on $T$. This allows us to use the maximum principle (see e.g. \cite{Evans2010PartialDE}) and deduce that \begin{equation}\label{e: h_i} |u|(0,0,T)\le v(0,0,T)=5\pi \,A\,e^{-j^2T} \cdot e^{(1.1\alpha)^2 T}. \end{equation} Since $\alpha\in[0,2.02]$, it is easy to check that the following holds \begin{equation*} 2\alpha-(1.1\alpha)^2+1\ge0.1 \end{equation*} (the concave function $\alpha\mapsto 2\alpha-1.21\alpha^2+1$ attains its minimum over $[0,2.02]$ at the endpoint $\alpha=2.02$, where its value is $0.1027\ldots$). Take $T_0=\frac{\ln (5\pi)}{0.05}$, then $e^{0.05\,T}\ge 5\pi$ for all $T\ge T_0$, and hence \begin{equation*} 5\pi\, e^{-T}\cdot e^{(1.1\alpha)^2 T}\le e^{2\alpha T}\cdot e^{-0.05\, T}. \end{equation*} Substituting this into \eqref{e: h_i} we obtain \begin{equation*} |u|(0,0,T)\le A\,e^{-(j^2-1)T}\cdot e^{2\alpha T}\cdot e^{-0.05\, T}. \end{equation*} Restoring the indices in \eqref{e: firstapril}, we obtain \begin{equation*} |A_{i,j}|(0,0,T)\le A\,e^{-(j^2-1)T}\cdot e^{2\alpha T}\cdot e^{-0.05\, T}. \end{equation*} Similarly, we can show that $|B_{i,j}|(0,0,T)$ satisfies the same inequality. Therefore, assuming $T_0\ge\frac{\ln 400}{0.04}$, we obtain \begin{equation*}\begin{split} |F_i|(0,0,T)&\le \sum_{j=1}^{\infty}|A_{i,j}|(0,0,T)+\sum_{j=1}^{\infty}|B_{i,j}|(0,0,T)\\ &\le 2A\,e^{2\alpha T}\cdot e^{-0.05\, T}\cdot\sum_{j=1}^{\infty}e^{-(j^2-1)T}\\ &\le 4A\,e^{2\alpha T}\cdot e^{-0.05\, T} \le \frac{1}{100}A\,e^{2\alpha T}\cdot e^{-0.01\, T}, \end{split}\end{equation*} which implies $|h|(0,0,T)\le A\,e^{2\alpha T}\cdot e^{-0.01\, T}$, and hence proves the proposition.
\end{proof} \subsection{A symmetry improvement theorem in the nonlinear case} In Theorem \ref{t: symmetry improvement}, we prove the non-linear version of Proposition \ref{l: h on R2xS1}. In the theorem, $h$ is a symmetric 2-tensor satisfying the Ricci DeTurck flow perturbation equation with background metric $g(t)$ being a $SO(2)$-symmetric complete Ricci flow which is sufficiently close to $\RR\times S^1$ at a base point. We will show that the oscillatory part of $h$ has a similar exponential decay in time as in Proposition \ref{l: h on R2xS1}. We briefly recall some facts about Ricci DeTurck flow perturbations from \cite[Appendix A]{bamler2022uniqueness}. Let $(M,g(t))$ be a complete Ricci flow and $h$ be a solution to the following Ricci DeTurck flow perturbation equation with background metric $g(t)$: \begin{equation*} \nabla_{\partial_t}h=\Delta_{g(t)}h+2\,\Rm_{g(t)}(h)+\mathcal{Q}_{g(t)}[h], \end{equation*} where $\mathcal{Q}_{g(t)}[h]$ is quadratic in $h$ and its spatial derivatives, and the left-hand side contains the conventional Uhlenbeck trick: \begin{equation*} (\nabla_{\partial_t}h)_{ij}=(\partial_th)_{ij}+g^{pq}(h_{pj}\Ric_{qi}+h_{ip}\Ric_{qj}). \end{equation*} Then $\widetilde{h}:=\alpha^{-1}h$ (here $\alpha>0$ is a rescaling parameter) satisfies the rescaled Ricci DeTurck flow perturbation equation \begin{equation*} \nabla_{\partial_t}\widetilde{h}=\Delta_{g(t)}\widetilde{h}+2\,\Rm_{g(t)}(\widetilde{h})+\mathcal{Q}^{(\alpha)}_{g(t)}[\widetilde{h}], \end{equation*} which converges to the linearized Ricci DeTurck equation as $\alpha\ri0$, \begin{equation*} \nabla_{\partial_t}\widetilde{h}=\Delta_{g(t)}\widetilde{h}+2\,\Rm_{g(t)}(\widetilde{h}), \end{equation*} which can also be written as $\partial_t\widetilde{h}=\Delta_{L}\widetilde{h}$, where $\Delta_{L}$ is the Lichnerowicz Laplacian \begin{equation*} \Delta_{L}h_{ij}=\Delta h_{ij}+2\,g^{kp}g^{\ell q}R_{kij\ell}h_{pq}-g^{pq}(h_{pj}\Ric_{qi}+h_{ip}\Ric_{qj}).
\end{equation*} Theorem \ref{t: symmetry improvement} is proved using a limiting argument: We consider a sequence of blow-ups of solutions $h_i$ to the Ricci DeTurck flow perturbation equation, and show that they converge to a solution to the linearized Ricci DeTurck flow to which we can apply Proposition \ref{l: h on R2xS1}. To take the limit, we need to derive uniform bounds for $h_i$ and the derivatives. To this end, we first observe that for a solution $h$ to the Ricci DeTurck flow perturbation equation, $|h|^2$ satisfies the following evolution inequality, see \cite[Appendix A.1]{bamler2022uniqueness}, \begin{equation}\label{e: original}\begin{split} \partial_t |h|^2\le (g+h)^{ij}\nabla^2_{ij}|h|^2-2(g+h)^{ij}g^{pq}g^{uv}\nabla_ih_{pu}\nabla_jh_{qv}\\ +C(n)|\Rm_g|\cdot|h|^2+C(n)|h|\cdot|\nabla h|^2, \end{split}\end{equation} where $C(n)>0$ is some dimensional constant. Note that the elliptic operator $(g+h)^{ij}\nabla^2_{ij}$ is not exactly the Laplacian of a metric. So in order to use the standard heat kernel estimates, we compare this operator with the exact Laplacian $\Delta_{g(t)+h(t)}$ in the following lemma and show that $\partial_t |h|^2\le \Delta_{g(t)+h(t)}|h|^2+C\,|h|^2$. \begin{lem}\label{l: laplacian} For any $n\in\mathbb{N}$, there are constants $C_0(n),C_1(n)$ such that the following holds: Let $(M^n,g(t))$, $t\in[0,T]$, be a Ricci flow (not necessarily complete) with $|\Rm|\le 1$ and $\textnormal{inj}\ge 1$, and let $h(t)$ be a Ricci DeTurck flow perturbation with background metric $g(t)$. Suppose $|\nabla^kh|\le\frac{1}{C_0(n)}<\frac{1}{100}$, $k=0,1$; then $|h|^2$ satisfies the following evolution inequality \begin{equation}\label{e: garden} \partial_t |h|^2\le \Delta_{g(t)+h(t)}|h|^2+C_1(n)\,|h|^2. \end{equation} \end{lem} \begin{proof} In the following the covariant derivatives and curvature quantities are taken with respect to $g(t)$ and the time-index $t$ in $g(t),h(t)$ is suppressed. Let $C$ denote all dimensional constants whose values may change from line to line.
Let $(x^1,x^2,...,x^n)$ be local coordinates on an open subset $U\subset M$ such that $|\partial^kg|\le C$, $k=0,1$; for example, we may choose distance coordinates \cite[Theorem 74]{petersen}. By the formula of the Hessian $\nabla^2_{ij} f=\partial^2_{ij}f-\Gamma^k_{ij}\partial_kf$ for any smooth function $f$ on $U$, it is easy to see \begin{equation*}\begin{split} (g+h)^{ij}\nabla^2_{ij}|h|^2&=(g+h)^{ij}\partial^2_{ij}|h|^2-(g+h)^{ij}\Gamma^k_{ij}\partial_k|h|^2,\\ \Delta_{g+h}|h|^2&=(g+h)^{ij}\partial^2_{ij}|h|^2-(g+h)^{ij}\widetilde{\Gamma}^k_{ij}\partial_k|h|^2, \end{split}\end{equation*} where $\widetilde{\Gamma}^k_{ij}$ is the Christoffel symbol of $g+h$. So we have \begin{equation*}\begin{split}\label{e: lemon} (g+h)^{ij}\nabla^2_{ij}|h|^2-\Delta_{g+h}|h|^2&=(g+h)^{ij}(\widetilde{\Gamma}^k_{ij}-\Gamma^k_{ij})\partial_k|h|^2. \end{split}\end{equation*} Noting that $|\widetilde{\Gamma}^k_{ij}-\Gamma^k_{ij}|\le C(|h|+|\nabla h|)$ and using the assumption $|\nabla^kh|\le\frac{1}{C_0(n)}$, $k=0,1$, for sufficiently large $C_0(n)$ we obtain \begin{equation*}\begin{split} |(g+h)^{ij}\nabla^2_{ij}|h|^2-\Delta_{g+h}|h|^2|&\le C\,|h|^2\,|\nabla h|+C\,|h|\,|\nabla h|^2 \le \frac{1}{2}\,|h|^2+\frac{1}{2}|\nabla h|^2. \end{split}\end{equation*} Combining this with \eqref{e: original}, and noting that \begin{equation*} 2(g+h)^{ij}g^{pq}g^{uv}\nabla_ih_{pu}\nabla_jh_{qv}\ge 1.8\,|\nabla h|^2, \end{equation*} we obtain \eqref{e: garden}. \end{proof} Now we prove the main result of this section. \begin{theorem}[On almost cylindrical part, non-linear]\label{t: symmetry improvement} There exist $\delta_0,T_0>0$ such that for any $T\ge T_0$, there exist $\overline{\epsilon}(T),\overline{\delta}(T),\underline{D}(T)>0$ such that for any \begin{equation*} \alpha\in[0,2.02],\quad \epsilon<\overline{\epsilon},\quad\delta<\overline{\delta},\quad D_{\#}>\underline{D},
\end{equation*} the following holds: Let $(M,g(t),x_0)$, $t\in[0,T]$, be a $SO(2)$-symmetric complete Ricci flow with $|\Rm|_{g(t)}\le 1$, such that $(M,g(0),x_0)$ is $\delta$-close to $(\RR\times S^1,g_{stan})$ in the $C^{98}$-norm. Let $h(t)$ be a Ricci DeTurck flow perturbation with background metric $g(t)$ on $B_0(x_0,D_{\#})\times[0,T]$ and $|\nabla^{k}h(t)|\le\frac{1}{1000}$, $k=0,1$. Suppose also \begin{equation}\label{e: double} \begin{cases} |\nabla^kh|(x,0)\le \epsilon\cdot e^{100\,d_0(x,x_0)} \quad&\textit{for}\quad x\in B_0(x_0,D_{\#}),\quad k=0,1,2,\\ |h|(x,t)\le \epsilon\cdot e^{10\,D_{\#}}\quad&\textit{for}\quad (x,t)\in \partial B_0(x_0,D_{\#})\times[0,T]. \end{cases} \end{equation} Suppose also \begin{equation}\label{e: alpha} |h|(x,0)\le\epsilon\cdot e^{\alpha\,d_{g(0)+h(0)}(x,x_0)}\quad\textit{for}\quad x\in B_0(x_0,D_{\#}). \end{equation} Then we have \begin{equation*} |\nabla^kh_-|(x_0,T)\le \epsilon\cdot e^{-\delta_0T}\cdot e^{2\alpha T},\quad k=0,1,...,100, \end{equation*} where $h_-$ is the oscillatory part of $h$. \end{theorem} \begin{proof}[Proof of Theorem \ref{t: symmetry improvement}] Let $T_0>0$ be from Proposition \ref{l: h on R2xS1}, and the value of $\delta_0$ will be determined later. Suppose the assertion does not hold for some $T\ge T_0$.
Then there are sequences of numbers $\epsilon_i\ri0,\delta_i\ri0,D_i\ri\infty$, a sequence of $SO(2)$-symmetric complete Ricci flows $(M_i,g_i(t))$, $t\in[0,T]$, which is $\delta_i$-close to $\RR\times S^1$ at $(x_i,0)\in M_i\times[0,T]$, and a sequence of Ricci DeTurck flow perturbations $h_i(t)$ with background metrics $g_i(t)$, defined on $B_0(x_i,D_i)\times[0,T]$, which satisfy \eqref{e: double} and \begin{equation}\label{e: initial bd on h} |h_i|(x,0)\le\epsilon_i\cdot e^{\alpha\, d_{g_i(0)+h_i(0)}(x,x_i)}, \end{equation} but there is some $k_i\in\{0,1,...,100\}$ such that \begin{equation}\label{e: h-lower bound} |\nabla^{k_i}h_{i,-}|(x_i,T)\ge \epsilon_i\cdot e^{-\delta_0 T}\cdot e^{2\alpha T}. \end{equation} After passing to a subsequence we may assume that the pointed Ricci flows $(M_i,g_i(t),x_i)$ on $[0,T]$ converge to $(\RR\times S^1,g_{stan},x_0)$ in the $C^{96}$-sense. We will show that $\frac{h_i}{\epsilon_i}$ converge to a solution to the linearized Ricci DeTurck flow on $\RR\times S^1$. To this end, since $|\nabla^{k}h_i|\le\frac{1}{1000}$, $k=0,1$, by Lemma \ref{l: laplacian}, there is $C_0>0$ such that \begin{equation*} \partial_t|h_i|^2\le\Delta_{g_i+h_i}|h_i|^2+ C_0\cdot|h_i|^2 \quad\textit{on}\quad B_0(x_i,D_i)\times[0,T]. \end{equation*} Let $u_i:=e^{-C_0t}|h_i|^2$; then this implies $\partial_t u_i\le\Delta_{g_i+h_i}u_i$. Moreover, \eqref{e: double} and \eqref{e: initial bd on h} imply \begin{equation*}\begin{cases} u_i(x,0) \le \epsilon_i^2\cdot e^{20\, d_{g_i(0)+h_i(0)}(x,x_i)} &\textit{for}\quad x\in B_0(x_i,D_{i}),\\ u_i(x,t)\le \epsilon_i^2\cdot e^{20\,D_{i}} &\textit{for}\quad x\in\partial B_0(x_i,D_{i}), t\in[0,T].
\end{cases} \end{equation*} Applying the heat kernel estimate Lemma \ref{l: heat equation} and the weak maximum principle on $B_0(x_i,D_i)\times[0,T]$, we obtain bounds on $|u_i|$ which are independent of $i$: For any $A>0$, there exists $C(A,T)>0$, which is uniform for all $i$, such that \begin{equation*} |u_i|(x,t)\le C(A,T)\cdot \epsilon_i^2\quad\textit{on}\quad B_0(x_i,A)\times[0,T], \end{equation*} and hence \begin{equation}\label{e: C0 bound} |h_i|(x,t)\le C(A,T)\cdot \epsilon_i\quad\textit{on}\quad B_0(x_i,A)\times[0,T], \end{equation} with a possibly larger $C(A,T)$. By the local derivative estimates for Ricci DeTurck flow perturbations \cite[Lemma A.14]{bamler2022uniqueness}, this implies bounds for higher derivatives, \begin{equation*} |\nabla^mh_i|\le C_m(A,T)\cdot\epsilon_i\cdot t^{-m/2}\quad\textit{on}\quad B_0(x_i,A/2)\times(0,T], \end{equation*} where $m\in\mathbb{N}$ and $C_m(A,T)>0$ are constants depending on $A,T,m$. Moreover, the first inequality in \eqref{e: double} implies \begin{equation*} |\nabla^k h_i|\le C(A,T)\cdot\epsilon_i\quad\textit{on}\quad B_0(x_i,A/2)\times[0,T],\quad k=0,1,2. \end{equation*} Therefore, letting $H_i=\frac{h_i}{\epsilon_i}$, we have \begin{equation*} |\nabla^mH_i|\le C_m(A,T)\cdot t^{-m/2}\quad\textit{on}\quad B_0(x_i,A/2)\times(0,T], \end{equation*} and also \begin{equation*} |\nabla^k H_i|\le C(A,T)\quad\textit{on}\quad B_0(x_i,A/2)\times[0,T],\quad k=0,1,2. \end{equation*} So after passing to a subsequence, $H_i$ converges to a symmetric 2-tensor $H_{\infty}$ on $(\RR\times S^1)\times[0,T]$ in the $C^0$-sense, and the convergence is smooth on $(0,T]$. On the one hand, by the contradiction assumption \eqref{e: h-lower bound} there is some $k_0\in\{0,...,100\}$ such that \begin{equation}\label{e: greater than} |\nabla^{k_0}H_{\infty,-}|(x_0,T)\ge e^{-\delta_0T}\cdot e^{2\alpha T}.
\end{equation} On the other hand, since $\epsilon_i\ri0$, it follows that $H_{\infty}$ satisfies the linearized Ricci DeTurck equation $\partial_t H_{\infty}=\Delta_L H_{\infty}$. The initial bound \eqref{e: initial bd on h} passes to the limit and implies \begin{equation*} |H_{\infty}|(x,0)\le e^{\alpha \,d_{g_{stan}}(x,x_0)}. \end{equation*} So we can apply Proposition \ref{l: h on R2xS1} to the oscillatory part $H_{\infty,-}$ at every point in the backward parabolic neighborhood $U:=B_T(x_0,1)\times[T-1,T]\subset (\RR\times S^1)\times[0,T]$ centered at $(x_0,T)$, and obtain \begin{equation*} |H_{\infty,-}|< e^{\alpha+0.01}\cdot e^{-0.01\,T}\cdot e^{2\alpha T}\quad \textit{on}\quad U. \end{equation*} Since the components of $H_{\infty,-}$ satisfy the heat equation \eqref{e: solve}, it follows by the standard derivative estimates of heat equations that for all $k=0,1,...,100$, \begin{equation*} |\nabla^kH_{\infty,-}|(x_0,T)< C_k\cdot e^{-0.01\,T}\cdot e^{2\alpha T}\le e^{-\delta_0T}\cdot e^{2\alpha T}, \end{equation*} for some $\delta_0<0.01$, which contradicts \eqref{e: greater than}. \end{proof} \section{Construction of an approximating SO(2)-symmetric metric}\label{s: Approximating $SO(2)$-symmetric metrics} The main goal in this section is to construct an approximating $SO(2)$-symmetric metric which approximates the soliton metric within error $e^{-2(1+\epsilon_0)\,d_g(\cdot,\Gamma)}$, for some constant $\epsilon_0>0$, and moreover the error goes to zero as we move towards the infinity of the soliton. Here $\Gamma=\Gamma_1(-\infty,\infty)\cup\Gamma_2(-\infty,\infty)\cup\{p\}$, where $p$ is the critical point of $f$, and $\Gamma_1,\Gamma_2$ are two integral curves from Corollary \ref{l: new Gamma}.
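To see why this error rate suffices, recall from Theorem \ref{t': curvature estimate} that for any $\epsilon_0'>0$ there exists $C>0$ with $R\ge C^{-1}e^{-2(1+\epsilon_0')\,d_g(\cdot,\Gamma)}$. Taking for instance $\epsilon_0'=\frac{\epsilon_0}{2}$, the ratio of the approximation error to the scalar curvature satisfies
\begin{equation*}
\frac{e^{-2(1+\epsilon_0)\,d_g(\cdot,\Gamma)}}{R}\le C\,e^{-\epsilon_0\,d_g(\cdot,\Gamma)}\ri0,\quad\textit{as}\quad d_g(\cdot,\Gamma)\ri\infty,
\end{equation*}
so the error is negligible compared to the curvature scale near infinity.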
The construction consists of two parts: First, in subsection \ref{ss: the desired exponential decay part 2}, we do an inductive construction to obtain a $SO(2)$-symmetric metric $\overline{g}$ that approximates the soliton metric within the error $e^{-2(1+\epsilon_0)d_g(\cdot,\Gamma)}$. Next, in subsection \ref{ss: near the edge part 3}, we extend $\overline{g}$ to a neighborhood of $\Gamma$ to obtain the desired approximating metric. In the first step, we repeat the following process in an induction scheme: We consider the harmonic map heat flows from the Ricci flow of the soliton to the Ricci flow of some approximating $SO(2)$-symmetric metric. The error between the two flows is characterized by the Ricci DeTurck flow perturbation, whose oscillatory mode decays in time by our symmetry improvement theorem. Therefore, the accuracy of the approximation will improve along the flow, after adding the rotationally invariant mode of the Ricci DeTurck flow perturbation to the approximating metric. Note that the perturbation could grow very fast in the compact regions because the soliton is not close to $\RR\times S^1$ there and we do not have a symmetry improvement theorem there. In order to deal with this, we will perform surgeries on the soliton metric $g$ and the approximating $SO(2)$-symmetric metrics, by cutting off their compact regions and gluing back regions that are sufficiently close to $\RR\times S^1$. The resulting manifolds are diffeomorphic to $\RR\times S^1$, and close to $\RR\times S^1$ everywhere. So the harmonic map heat flows between the flows of these manifolds exist up to a long enough time for us to apply Theorem \ref{t: symmetry improvement}. In the surgeries we need to glue up $\epsilon$-cylindrical planes, and for the $SO(2)$-symmetric metrics we also need to preserve the $SO(2)$-symmetry in the resulting metrics. This needs some gluing-up lemmas in subsection \ref{ss: glueup}.
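Schematically, one step of the induction in subsection \ref{ss: the desired exponential decay part 2} can be summarized as follows (the notation $\overline{g}_k,h_k$ is only indicative; the precise statement is given there): if $\overline{g}_k$ denotes the current $SO(2)$-symmetric approximation and $h_k$ the Ricci DeTurck flow perturbation measuring the error between the flow of $g$ and the flow of $\overline{g}_k$, then
\begin{equation*}
h_k=h_{k,+}+h_{k,-},\qquad \overline{g}_{k+1}:=\overline{g}_k+h_{k,+},
\end{equation*}
so that the remaining error is the purely oscillatory mode $h_{k,-}$, which Theorem \ref{t: symmetry improvement} makes smaller by a definite factor after a long time.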
We conduct the surgeries in subsections \ref{ss: Surgery on the soliton metric part1} and \ref{ss: Surgery on the background SO(2)-symmetric metrics part 1}. \subsection{Glue up SO(2)-symmetric metrics}\label{ss: glueup} In order to do the surgeries, we need to know how to glue up $SO(2)$-symmetric metrics, which is the goal of this subsection. Recall that a 3D Riemannian manifold $(M,g)$ is called $SO(2)$-symmetric if there is a one-parameter group of isometries $\psi_{\theta}$, $\theta\in\R$, such that $\psi_{\theta}=i.d.$ if and only if $\theta=2k\pi$, $k\in\mathbb{Z}$. In this subsection, we show how to glue up several $SO(2)$-symmetric metrics which are close to $(\RR\times S^1,g_{stan})$ and also close to each other on their intersections. Since the metrics throughout this subsection are $\epsilon$-close to $\RR\times S^1$ for some very small $\epsilon$, we will take derivatives and measure norms using the metric $g_{stan}$ on $\RR\times S^1$. First, we show in the following lemma that if a 3D Riemannian manifold $(M,g)$ is $\epsilon$-close to $\RR\times S^1$ at $x_0\in M$ under two $\epsilon$-isometries $\phi_{1}$ and $\phi_{2}$, then the two vector fields $\phi_{1*}\left(\partial_{\theta}\right)$ and $\phi_{2*}\left(\partial_{\theta}\right)$ are $C_0\epsilon$-close. Therefore, the vector field $\partial_{\theta}$ is well-defined on an $\epsilon$-cylindrical plane up to sign and an error of order $\epsilon$. \begin{lem}\label{l: two epsilon cylindrical planes are close} Let $k\in\mathbb{N}$. There exist $C_0,\overline{\epsilon}>0$ such that the following holds for all $\epsilon<\overline{\epsilon}$. Let $(M,g)$ be a 3-dimensional Riemannian manifold. Suppose $(M,g,x_0)$ is $\epsilon$-close to $(\RR\times S^1,g_{stan})$ in the $C^k$-norm under two $\epsilon$-isometries $\phi_i:S^1\times(-\epsilon^{-1},\epsilon^{-1})\times(-\epsilon^{-1},\epsilon^{-1})\rightarrow U_i\subset\subset M$, $i=1,2$, where $V:=\phi_1(S^1\times(-100,100)\times(-100,100))\subset U_2$. 
Then after possibly replacing $\phi_2$ by $\phi_2\circ p$, where $p(\theta,x,y)=(-\theta,x,y)$ for $\theta\in[0,2\pi)$ and $x,y\in(-\epsilon^{-1},\epsilon^{-1})$, we have \begin{equation*} \left|\phi_{1*}\left(\partial_{\theta}\right)-\phi_{2*}\left(\partial_{\theta}\right)\right|_{C^{k-1}(V)}\le C_0\epsilon. \end{equation*} \end{lem} \begin{proof} We shall use $\epsilon$ to denote all constants $C_0\epsilon$, where $C_0>0$ is a constant depending only on $k$. Let $g_i=(\phi_i^{-1})^{*}g_{stan}$ and $X_i=\phi_{i*}\left(\partial_{\theta}\right)$, $i=1,2$. Let $(x,y,\theta)$ be the coordinates on $U_1$ induced by $\phi_1$ such that $g_1$ can be written as $d\theta^2+dx^2+dy^2$, where $x,y\in(-\epsilon^{-1},\epsilon^{-1})$ and $\theta\in [0,2\pi)$. So $X_1=\partial_{\theta}$. The coordinate function $\theta$ can be lifted to a function $z$ on the universal covering $\widetilde{U}_1\rightarrow U_1$, so that the metric on $\widetilde{U}_1$ can be written as $dz^2+dx^2+dy^2$ under the coordinates $(x,y,z)$. If $\epsilon$ is sufficiently small, then \begin{equation}\label{e: conference} |g_1-g_2|_{C^k(V)}\le|g_1-g|_{C^k(V)}+|g_2-g|_{C^k(V)}\le\epsilon. \end{equation} Since $\LL_{X_2}g_2=0$, this implies \begin{equation}\label{e: sloan} \left|\LL_{X_2}g_1\right|_{C^{k-1}(V)}\le\left|\LL_{X_2}(g_2+(g_1-g_2))\right|_{C^{k-1}(V)}=\left|\LL_{X_2}(g_1-g_2)\right|_{C^{k-1}(V)}\le\epsilon. \end{equation} Recall that every killing field on flat $\R^3$ has the following form, \begin{equation*} a_1\partial_{x}+a_2\partial_{y}+a_3\partial_{z}+b_1(x\partial_{z}-z\partial_{x})+b_2(x\partial_{y}-y\partial_{x})+b_3(z\partial_{y}-y\partial_{z}), \end{equation*} where $a_1,a_2,a_3,b_1,b_2,b_3\in\R$. 
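For the reader's convenience, here is the standard computation behind this normal form. Killing's equation for the flat metric in the coordinates $(x,y,z)$ states that $\partial_iX_j+\partial_jX_i=0$; differentiating and permuting indices gives

```latex
\begin{align*}
\partial_k\partial_i X_j
  &= -\partial_k\partial_j X_i
   = -\partial_j\partial_k X_i
   = \partial_j\partial_i X_k \\
  &= \partial_i\partial_j X_k
   = -\partial_i\partial_k X_j
   = -\partial_k\partial_i X_j,
\end{align*}
```

so all second derivatives of $X$ vanish. Hence $X_j=a_j+b_{jk}x^k$ with $b_{jk}=-b_{kj}$, which is precisely the six-parameter family displayed above: three translations $a_j$ and three rotations encoded by the antisymmetric part $b_{jk}$.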
By a direct computation using \eqref{e: sloan}, we see that $X_2$ is $\epsilon$-close in the $C^{k-1}$-norm to a vector field on $V$ of the following form, \begin{equation}\label{e: rutgers} a_1\partial_{x}+a_2\partial_{y}+a_3\partial_{\theta}+b_1(x\partial_{\theta}-{\theta}\partial_{x})+b_2(x\partial_{y}-y\partial_{x})+b_3({\theta}\partial_{y}-y\partial_{\theta}). \end{equation} In the following we will estimate these coefficients and show that $|a_3\pm1|\le\epsilon$ and that the other coefficients are bounded in absolute value by $\epsilon$. First, for a vector field in \eqref{e: rutgers} to be well-defined at $\theta=0$, we must have \begin{equation*} |b_1|+|b_3|\le\epsilon. \end{equation*} Next, since $\nabla^{g_2} X_2=0$, it follows by \eqref{e: conference} that $|\nabla^{g_1} X_2|_{C^{k-1}(V)}\le\epsilon$, which implies $|b_2|\le\epsilon$. So $X_2$ is $\epsilon$-close in the $C^{k-1}$-norm to the vector field \begin{equation*} Y:=a_1\partial_{x}+a_2\partial_{y}+a_3\partial_{\theta}, \end{equation*} and hence the flow generated by $Y$, which is \begin{equation*} \begin{cases} x(t;x_0,y_0,\theta_0)=x_0+a_1t,\\ y(t;x_0,y_0,\theta_0)=y_0+a_2t,\\ \theta(t;x_0,y_0,\theta_0)=\theta_0+a_3t, \end{cases} \end{equation*} is $\epsilon$-close in the $C^{k-1}$-norm to the flow generated by $X_2$ on $V$. Since the flow of $X_2$ is $2\pi$-periodic, it follows that \begin{equation}\label{e: a1a2a3} |a_1|+|a_2|+|2\pi\,a_3-2m\pi|\le \epsilon, \end{equation} for some $m\in\mathbb{N}$. Since $X_i$ is the unit-speed velocity vector of a $g_i$-minimal geodesic loop, $i=1,2$, it is easy to see that for sufficiently small $\epsilon$ we must have either $|X_2-X_1|\le\frac{1}{1000}$ or $|X_2+X_1|\le\frac{1}{1000}$. This implies $|a_3\pm1|\le\frac{1}{100}$, which combined with \eqref{e: a1a2a3} implies \begin{equation*} |a_1|+|a_2|+|a_3\pm1|\le\epsilon, \end{equation*} which proves the lemma. 
\end{proof} The next lemma goes a step further than Lemma \ref{l: two epsilon cylindrical planes are close}, showing that if an $SO(2)$-symmetric metric is $\epsilon$-close to an $\epsilon$-cylindrical plane, then their killing fields are $C_0\epsilon$-close. \begin{lem}\label{l: killing fields are close} Let $k\in\mathbb{N}$. There exist $C_0,\overline{\epsilon}>0$ such that the following holds for all $\epsilon<\overline{\epsilon}$. Let $(U,g)$ be a 3D Riemannian manifold, $x_0\in U$. Suppose $g$ is an $SO(2)$-symmetric metric and $X$ is the killing field of the $SO(2)$-isometry. Suppose also $(U,g,x_0)$ is $\epsilon$-close to $(\RR\times S^1,g_{stan})$ in the $C^k$-norm, and \begin{equation*} |X-\partial_{\theta}|\le\frac{1}{1000}\quad \textit{on}\quad B_g(x_0,1000)\subset U, \end{equation*} where $\partial_{\theta}$ denotes the killing field along the $S^1$-direction in an $\epsilon$-cylindrical plane. Then \begin{equation*} \left|X-\partial_{\theta}\right|_{C^{k-1}(B_g(x_0,1000))}\le C_0\epsilon. \end{equation*} \end{lem} Note that the assumption $|X-\partial_{\theta}|\le\frac{1}{1000}$ in this lemma is necessary, even though we use it to derive a better bound. For example, on $\RR\times S^1$, an $SO(2)$-isometry could be either a rotation in the $xy$-plane around the origin, or a rotation in the $S^1$-factor, but their killing fields are not close to each other. \begin{proof} We shall use $\epsilon$ to denote all constants $C_0\epsilon$, where $C_0>0$ is a constant independent of $\epsilon$. First, let $\phi_1$ be the $\epsilon$-isometry to $(\RR\times S^1,g_{stan})$, and let $(x,y,\theta)$, $x,y\in(-\epsilon^{-1},\epsilon^{-1})$ and $\theta\in [0,2\pi)$, be local coordinates on an open subset $V$ containing $B_g(x_0,1000)$, such that $g_1:=(\phi_1^{-1})^{*}g_{stan}$ can be written as $d\theta^2+dx^2+dy^2$. Then $\theta$ can be lifted to a function $z$ on the universal covering $\widetilde{V}\ri V$, and the induced metric on $\widetilde{V}$ is $dz^2+dx^2+dy^2$. 
If $\epsilon$ is sufficiently small, then \begin{equation*} |g-g_1|_{C^k(U)}\le\epsilon. \end{equation*} Since also $\LL_Xg=0$, this implies \begin{equation*} \left|\LL_{X}g_1\right|_{C^{k-1}(U)} =\left|\LL_{X}(g_1-g)\right|_{C^{k-1}(U)}\le\epsilon. \end{equation*} As in the proof of Lemma \ref{l: two epsilon cylindrical planes are close}, this implies that $X$ is $\epsilon$-close in the $C^{k-1}$-norm to the following vector field on $V$, \begin{equation}\label{e: Y vector} Y:=a_1\partial_{x}+a_2\partial_{y}+a_3\partial_{\theta}+b_2(x\partial_{y}-y\partial_{x}), \end{equation} where $a_1,a_2,a_3,b_2\in\R$. In the following we will show that $|a_3-1|\le\epsilon$ and $|a_1|,|a_2|,|b_2|\le\epsilon$. First, assume $b_2\neq0$. Then the flow generated by $Y$ is \begin{equation}\label{e: case} \begin{cases} y(t;x_0,y_0,\theta_0)=y_0\cos b_2t+x_0\sin b_2t+\frac{a_1}{b_2}(1-\cos b_2t)+\frac{a_2}{b_2}\sin b_2t,\\ x(t;x_0,y_0,\theta_0)=x_0\cos b_2t-y_0\sin b_2t+\frac{a_2}{b_2}(\cos b_2t-1)+\frac{a_1}{b_2}\sin b_2t,\\ \theta(t;x_0,y_0,\theta_0)=\theta_0+a_3t. \end{cases} \end{equation} Since the flow generated by $X$ is $2\pi$-periodic and $Y$ is $\epsilon$-close to $X$, it follows that \begin{equation}\label{e: y0x0} |x(2\pi;x_0,y_0,\theta_0)-x_0|+|y(2\pi;x_0,y_0,\theta_0)-y_0|\le\epsilon. \end{equation} Next, by $|X-\partial_{\theta}|\le\frac{1}{1000}$ we have \begin{equation*} \left|Y-\partial_{\theta}\right|\le |X-Y|+\left|X-\partial_{\theta}\right|\le \epsilon+\left|X-\partial_{\theta}\right|\le\frac{1}{500}, \end{equation*} which implies $|b_2|\le\frac{1}{100}$. Moreover, by taking $x_0=0$ and $y_0=100,-100$ in \eqref{e: y0x0} and using \eqref{e: case}, we get \begin{equation*} 200|\cos (2\pi b_2)-1|= |y(2\pi;0,100,0)-100-y(2\pi;0,-100,0)+100|\le\epsilon, \end{equation*} which combined with $|b_2|\le\frac{1}{100}$ implies $|b_2|\le\epsilon$. 
So we may assume $b_2=0$ in \eqref{e: Y vector}; then $X$ is $\epsilon$-close to the following vector field in the $C^{k-1}$-norm on $V$, \begin{equation}\label{e: Z vector} Z:=a_1\partial_{x}+a_2\partial_{y}+a_3\partial_{\theta}, \end{equation} which generates the flow \begin{equation}\label{e: case again} \begin{cases} y(t;x_0,y_0,\theta_0)=y_0+a_2t;\\ x(t;x_0,y_0,\theta_0)=x_0+a_1t;\\ \theta(t;x_0,y_0,\theta_0)=\theta_0+a_3t. \end{cases} \end{equation} The $2\pi$-periodicity of the flow of $X$ immediately implies \begin{equation*} |a_1|+|a_2|+|2\pi a_3-2m\pi|\le \epsilon, \end{equation*} for some $m\in\mathbb{N}$. Using $|X-\partial_{\theta}|\le\frac{1}{1000}$, this implies $|a_3-1|\le\epsilon$, which proves the lemma. \end{proof} In the following lemma, we show that one can glue up one-parameter groups of diffeomorphisms that are $\epsilon$-close to each other on their intersections, to obtain a global one-parameter group of diffeomorphisms that is $C_0\epsilon$-close to them, where the constant $C_0$ does not depend on $\epsilon$. \begin{lem}\label{l: glue up local diffeomorphisms from N x S1 to M} Let $m,k\in\mathbb{N}$. There exist $C_0,\overline{\epsilon}>0$ such that the following holds for all $\epsilon<\overline{\epsilon}$. Let $(M,g)$ be a 3D Riemannian manifold. Suppose $(M,g)$ is $\epsilon$-close to $(\RR\times S^1,g_{stan})$ at all $x\in M$. Suppose $\{U_i\}_{i=1}^{\infty}$ is an open covering of $M$ such that at most $m$ of them intersect at one point. Moreover, there is a one-parameter group of diffeomorphisms $\{\phi_{i,t}\}_{t\in\R}$ on each $U_i$, which satisfies: \begin{enumerate} \item\label{it: 1} $\phi_{i,0}=\phi_{i,2\pi}=i.d.$. \item\label{it: 2} $|\phi_{i,t*}(\partial_{t})-\partial_{\theta}|\le\frac{1}{1000}$, where $\partial_{\theta}$ denotes the killing field along the $S^1$-direction in an $\epsilon$-cylindrical plane up to sign. 
\item\label{it: 3} $|\phi_{i}-\phi_{j}|_{C^{k}((U_i\cap U_j)\times S^1)}\le\epsilon$, where $\phi_i(x,\theta)=\phi_{i,\theta}(x)$ for any $(x,\theta)\in U_i\times S^1$. \end{enumerate} Then there exists a one-parameter group of diffeomorphisms $\{\psi_{t}\}_{t\in\R}$ on $M$ satisfying \begin{enumerate} \item\label{property 1} $\psi_{0}=\psi_{2\pi}=i.d.$. \item\label{property 2} $|\psi-\phi_{i}|_{C^{k}(U_i\times S^1)}\le C_0 \epsilon$ for all $i$. \item\label{property 3} $\psi_{t}=\phi_{i,t}$ on $\{x\in M: B_g(x,1000\,r(x))\subset U_i\}$. \end{enumerate} \end{lem} \begin{proof} In the following we denote $\phi_{i,t}$ by $\phi_{i,\theta}$, $\theta\in[0,2\pi]$, and $C_0$ denotes all positive constants that depend on $k$ and $m$. Since $(M,g)$ is covered by $\epsilon$-cylindrical planes, by a standard gluing-up argument, we can find a smooth complete surface $N$ embedded in $M$, such that the tangent space of $N$ is $\frac{1}{100}$-almost orthogonal to $\partial_{\theta}$ in each $\epsilon$-cylindrical plane. Equip the manifold $N\times S^1$ with the product metric $\overline{g}=g_N+d\theta^2$, $\theta\in[0,2\pi)$, where $g_N$ is the induced metric of $(M,g)$ on $N$. First, we will use the local one-parameter groups $\phi_{i,\theta}$ to construct local diffeomorphisms $F_i: N\times S^1\ri M$. Let $V_i=U_i\cap N$, then $\{V_i\}_{i=1}^{\infty}$ is an open covering of $N$, and at most $m$ of them intersect at one point. Let $F_i: V_i\times S^1\ri U_i$ be defined by \begin{equation*} F_i(x,\theta)=\phi_{i,\theta}(x). \end{equation*} Then $F_i$ is a diffeomorphism, and \begin{equation*} |F_i-F_j|_{C^k((V_i\cap V_j)\times S^1)}\le C_0\epsilon. \end{equation*} Next, we will construct a global diffeomorphism $F:N\times S^1\ri M$ by gluing up the diffeomorphisms $F_i$, such that $F$ is $C_0\epsilon$-close to each $F_i$. 
Suppose $\overline{\epsilon}$ is sufficiently small such that for any $x\in M$, $B_g(x,\overline{\epsilon})$ is a convex neighborhood of $x$, i.e. the minimizing geodesics connecting any two points in $B_g(x,\overline{\epsilon})$ are unique and contained in $B_g(x,\overline{\epsilon})$. Let $\Delta_2$ be the following neighborhood of the diagonal $\{(x,x):x\in M\}$ in $M^2$, \begin{equation*} \Delta_2=\{(x,y)\in M^2: d_g(x,y)\le\overline{\epsilon}\}\subset M^2. \end{equation*} Define the smooth map \begin{equation*} \Sigma_{2}: \{(s_1,s_2)\in[0,1]^2: s_1+s_2=1\}\times \Delta_2\ri M, \end{equation*} as $\Sigma_2(s_1,s_2,x_1,x_2)=\gamma_{x_2,x_1}(s_1)$, where $\gamma_{x_2,x_1}:[0,1]\ri M$ is the minimizing geodesic from $x_2$ to $x_1$. Then $\Sigma_2$ satisfies the following properties for all $s_1,s_2\in[0,1]$, $s_1+s_2=1$, and $x,x_1,x_2\in M$: \begin{enumerate} \item $\Sigma_2(1,0,x_1,x_2)=x_1$, $\Sigma_2(0,1,x_1,x_2)=x_2$. \item $\Sigma_2(s_1,s_2,x,x)=x$. \end{enumerate} Then for each $K\in\mathbb{N}$, we can inductively construct a neighborhood of the diagonal $\{(x,...,x):x\in M\}$ in $M^K$ (see also \cite{Bamler2020CompactnessTO}), \begin{equation*} \Delta_K=\{(x_1,...,x_K)\in M^K: d_g(x_i,x_j)\le\overline{\epsilon}\}\subset M^K, \end{equation*} and the smooth map \begin{equation*} \Sigma_K: \{(s_1,...,s_K)\in[0,1]^K: s_1+\cdots+s_K=1\}\times \Delta_K\ri M, \end{equation*} by defining \begin{equation*} \Sigma_K(s_1,...,s_K,x_1,...,x_K):=\Sigma_2(1-s_K,s_K,\Sigma_{K-1}(s_1,...,s_{K-1},x_1,...,x_{K-1}),x_K), \end{equation*} with the following properties for all $s_1,...,s_K\in[0,1]$, $s_1+\cdots+s_K=1$, and $x,x_1,x_2,...,x_K\in M$: \begin{enumerate} \item If for some $j\in\{1,...,K\}$ we have $s_j=1$ and $s_i=0$ for all $i\neq j$, then $\Sigma_K(s_1,...,s_K,x_1,...,x_K)=x_j$. \item $\Sigma_K(s_1,...,s_K,x,...,x)=x$. \item If $s_{K-i+1}=...=s_K=0$ for some $i\ge 1$, then $\Sigma_K(s_1,...,s_K,x_1,...,x_K)=\Sigma_{K-i}(s_1,...,s_{K-i},x_1,...,x_{K-i})$. 
\end{enumerate} Let $\{h_i\}_{i=1}^{\infty}$ be a partition of unity of $N\times S^1$ subordinate to $\{V_i\times S^1\}_{i=1}^{\infty}$, such that $h_i$ is constant on each $S^1$-factor, and $h_i\equiv1$ on $\widetilde{V}_i\times S^1$, where $\widetilde{V}_i=\{x\in N: B_{g_N}(x,500\,r(x))\subset V_i\}$. Then let $F: N\times S^1\ri M$ be such that for any $x\in N\times S^1$, \begin{equation*} F(x):=\Sigma_K(h_1(x),...,h_K(x),F_1(x),...,F_K(x)), \end{equation*} where $K\in\mathbb{N}$ is some integer such that $h_{K'}(x)=0$ for any $K'>K$. By the properties of $\Delta_K$ and $\Sigma_K$, we see that $F$ is a well-defined smooth map, and it satisfies \begin{equation}\label{e: diffeos close} |F-F_i|_{C^k(V_i\times S^1)}\le C_0\epsilon. \end{equation} Next, we will show that $F$ is a diffeomorphism. First, by \eqref{e: diffeos close} and the definition of $\overline{g}$ we see that for any $p\in N\times S^1$ and $v\in T_p(N\times S^1)$ we have \begin{equation}\label{e: covering map} |F_{*p}(v)|_g\ge0.9\,|v|_{\overline{g}}. \end{equation} So $F_*$ is non-degenerate. Next, we argue that $F$ is injective. To see this, observe that by \eqref{e: diffeos close} and assumption \eqref{it: 2} we have that $F$ is injective on $B_{g_N}(x,2)\times S^1$ for any $x\in N$, and $F(x_1\times S^1)\cap F(x_2\times S^1)=\emptyset$ for any $x_1,x_2\in N$ such that $d_{g_N}(x_1,x_2)\ge 1$. Now suppose $F(x_1,\theta_1)=F(x_2,\theta_2)$; then we must have $d_{g_N}(x_1,x_2)<1$, and hence $x_1=x_2$, $\theta_1=\theta_2$, as desired. Therefore, $F$ is a diffeomorphism. Next, let $\theta\in[0,2\pi)$ be the parametrization of $S^1$, and let $X:=F_*(\partial_{\theta})$. Then $X$ generates a $2\pi$-periodic one-parameter group of diffeomorphisms $\psi_{\theta}$, $\theta\in[0,2\pi)$, on $M$. In the following we will show that $\psi_{\theta}$ satisfies all the required properties. Denote $\psi(x,\theta)=\psi_{\theta}(x)$ for all $(x,\theta)\in M\times S^1$. 
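As a sanity check on the recursive geodesic-averaging map $\Sigma_K$ used in the definition of $F$, one can run the same recursion in the Euclidean model, where minimizing geodesics are straight segments and $\Sigma_2(s_1,s_2,x_1,x_2)=x_2+s_1(x_1-x_2)$. The sketch below renormalizes the first $K-1$ weights before the recursive call (an implicit convention here, since $s_1+\cdots+s_{K-1}=1-s_K$ need not equal $1$); the function names are ours, not from any library.

```python
# Euclidean model of the recursive averaging map Sigma_K.
# Points are tuples of floats; sigma2 is linear interpolation along
# the segment from x2 to x1 (the "minimizing geodesic").

def sigma2(s1, s2, x1, x2):
    assert abs(s1 + s2 - 1.0) < 1e-9
    return tuple(b + s1 * (a - b) for a, b in zip(x1, x2))

def sigmaK(s, xs):
    """Sigma_K(s_1,...,s_K, x_1,...,x_K), with the leading K-1 weights
    renormalized before the recursive call (a hypothetical convention,
    implicit in the text)."""
    assert abs(sum(s) - 1.0) < 1e-9
    if len(s) == 1:
        return xs[0]
    sK = s[-1]
    if abs(1.0 - sK) < 1e-12:        # s_K = 1: the map must return x_K
        return xs[-1]
    head = [si / (1.0 - sK) for si in s[:-1]]
    return sigma2(1.0 - sK, sK, sigmaK(head, xs[:-1]), xs[-1])
```

With this convention, $\Sigma_K$ reduces to the affine combination $\sum_i s_ix_i$ in the Euclidean model, so the three listed properties (a unit weight returns the corresponding point, equal points are fixed, trailing zero weights drop out) can be checked directly.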
First, let $\widetilde{U}_i=\{x\in M: B_g(x,1000\,r(x))\subset U_i\}$. Then it is easy to see that $\widetilde{U}_i\subset F(\widetilde{V}_i\times S^1)$. Since $h_i\equiv1$ on $\widetilde{V}_i\times S^1$, it follows that $F=F_i$ there, and hence $\psi_{\theta}=\phi_{i,\theta}$ on $\widetilde{U}_i$, which verifies property \eqref{property 3}. Lastly, to verify property \eqref{property 2}, we note that any $x\in U_i=F_i(V_i\times S^1)$ can be written as $x=F_i(z,\theta_1)$; then for any $\theta\in[0,2\pi)$, we have \begin{equation*} \phi_{i,\theta}(x)=\phi_{i,\theta+\theta_1}(z)=F_i(z,\theta_1+\theta), \end{equation*} and also \begin{equation*} \psi(F\circ F_i^{-1}(x),\theta) =\psi(F(z,\theta_1),\theta)=F(z,\theta_1+\theta). \end{equation*} Therefore, using the closeness \eqref{e: diffeos close} of $F$ and $F_i$, as well as the closeness of $F\circ F_i^{-1}:U_i\ri M$ to $i.d.: U_i\ri M$, we can deduce that \begin{equation*} |\psi-\phi_{i}|_{C^k(U_i\times S^1)}\le C_0\epsilon, \end{equation*} which verifies \eqref{property 2}. \end{proof} Now we prove the main result of this subsection, which shows that if there are several $SO(2)$-symmetric metrics which are $\epsilon$-close to each other, then we can glue them up to obtain a global $SO(2)$-symmetric metric which is $C_0\epsilon$-close to the original metrics. \begin{lem}\label{l: glue cylindrical plane} Let $k,m\in\mathbb{N}$. There exist $C_0,\overline{\epsilon}>0$ such that the following holds for all $\epsilon<\overline{\epsilon}$. Let $(M,g)$ be a 3D Riemannian manifold diffeomorphic to $\RR\times S^1$, which is $\epsilon$-close to $(\RR\times S^1,g_{stan})$ at all $x\in M$. 
Suppose $\{U_i\}_{i=1}^{\infty}$ is a locally finite covering of $M$ such that at most $m$ of them intersect at one point, and there is an $SO(2)$-symmetric metric $g_i$ on $W_i:=\bigcup_{x\in U_i}B_g(x,1000r(x))$, with the $SO(2)$-isometry $\{\phi_{i,\theta}\}_{\theta\in[0,2\pi)}$ and killing field $X_i$, which satisfies \begin{equation}\label{e: gi and g} |g_i-g|_{C^k(W_i)}\le\epsilon,\quad\textit{and}\quad |X_i-\partial_{\theta}|_{C^0(W_i)}\le\frac{1}{1000}, \end{equation} where $\partial_{\theta}$ denotes the killing field along the $S^1$-direction in an $\epsilon$-cylindrical plane up to sign. Let $\{h_i\}_{i=1}^{\infty}$ be a partition of unity subordinate to $\{U_i\}_{i=1}^{\infty}$. Then we can find an $SO(2)$-symmetric metric $\overline{g}$ on $M$ with $SO(2)$-isometry $\{\psi_{\theta}\}_{\theta\in[0,2\pi)}$ such that \begin{equation}\label{e: g3 and g} |\overline{g}-g|_{C^{k-1}(M)}\le C_0\epsilon\quad\textit{and}\quad|\psi_{\theta}-\phi_{i,\theta}|_{C^{k-1}(U_i)}\le C_0\epsilon. \end{equation} Moreover, $\overline{g}=g_i$ on the subset $\{x\in M: h_i(\phi_{i,\theta}(x))=1,\,\theta\in[0,2\pi)\}$. \end{lem} \begin{proof} In the following, $C_0$ denotes all positive constants that depend on $k$ and $m$. Let \begin{equation*} V_i=\{x\in M: h_i(\phi_{i,\theta}(x))=1,\,\theta\in[0,2\pi)\}. \end{equation*} First, applying Lemma \ref{l: killing fields are close} to $g_i$ we have \begin{equation}\label{e: Xtheta} \left|X_i-\partial_{\theta}\right|_{C^{k-1}(U_i)}\le C_0\epsilon, \end{equation} where $\partial_{\theta}$ is the killing field along the $S^1$-direction in an $\epsilon$-cylindrical plane up to sign. As in Lemma \ref{l: glue up local diffeomorphisms from N x S1 to M}, let $N$ be a 2D complete surface smoothly embedded in $M$, whose tangent space is $\frac{1}{100}$-almost orthogonal to $\partial_{\theta}$. 
Next, we will construct a diffeomorphism $\sigma:N\times S^1\ri M$, such that $\sigma|_{N\times\{0\}}=id_N$ and $\sigma_*(\partial_{\theta})$ is $\frac{1}{100}$-close to the $S^1$-factor of any $\epsilon$-cylindrical plane. To do this, first we can find a covering of $M$ by $\epsilon$-cylindrical planes such that the number of them intersecting at any point is bounded by a universal constant. Then we claim that after reversing the $\theta$-coordinate in certain $\epsilon$-cylindrical planes, we can arrange that the vector fields $\partial_{\theta}$ are $C_0\epsilon$-close on the intersections. Suppose the claim does not hold; then by Lemma \ref{l: two epsilon cylindrical planes are close} it is easy to find an embedded Klein bottle in $M$. Since $M$ is diffeomorphic to $\RR\times S^1$, which can be embedded into $\R^3$ as a tubular neighborhood of a circle, it follows that the Klein bottle can be embedded in $\R^3$, which is impossible by \cite[Corollary 3.25]{Hatcher:478079}. Now the diffeomorphism $\sigma$ follows immediately from applying Lemma \ref{l: glue up local diffeomorphisms from N x S1 to M}. Therefore, we can replace $X_i$ by $-X_i$ for some $i$ so that they are all $\frac{1}{100}$-close to $\sigma_*(\partial_{\theta})$. So \eqref{e: Xtheta} implies \begin{equation}\label{e: vector field close} |X_i-X_j|_{C^{k-1}(U_i\cap U_j)}\le C_0\epsilon. \end{equation} Replacing $\phi_{i,\theta}$ by $\phi_{i,-\theta}$ for such $i$, we then obtain \begin{equation}\label{e: closeness of maps} |\phi_{i,\theta}-\phi_{j,\theta}|_{C^{k-1}(U_i\cap U_j)}\le C_0\epsilon. \end{equation} Then by Lemma \ref{l: glue up local diffeomorphisms from N x S1 to M} we can construct a one-parameter group of diffeomorphisms $\{\psi_{\theta}\}$ on $M$, such that $\psi_0=\psi_{2\pi}=i.d.$ and \begin{equation*} |\psi_{\theta}-\phi_{i,\theta}|_{C^{k-1}(U_i)}\le C_0\epsilon, \end{equation*} and $\psi_{\theta}=\phi_{i,\theta}$ on $V_i$. 
Let $\widehat{g}=\sum_{i=1}^{\infty}h_i\cdot g_i$, then $\widehat{g}=g_i$ on $V_i$ and by \eqref{e: gi and g} we have \begin{equation*} |\widehat{g}-g|_{C^{k-1}}\le C_0\epsilon\quad\textit{on}\quad M. \end{equation*} Let \begin{equation*} \overline{g}=\frac{1}{2\pi}\int_0^{2\pi}\psi^*_{\theta}\widehat{g}\,d\theta, \end{equation*} then $(M,\overline{g})$ is $SO(2)$-symmetric under the isometries $\psi_{\theta}$, since $\psi_{\varphi}^*\overline{g}=\frac{1}{2\pi}\int_0^{2\pi}\psi^*_{\theta+\varphi}\widehat{g}\,d\theta=\overline{g}$ by the $2\pi$-periodicity of $\psi_{\theta}$. Moreover, \begin{equation*}\begin{split} |\overline{g}-g_i|_{C^{k-2}(U_i)}& \le\frac{1}{2\pi}\int_0^{2\pi}|\psi_{\theta}^*\widehat{g}-\phi^*_{i,\theta}g_i|_{C^{k-2}(U_i)}\,d\theta\\ &\le\frac{1}{2\pi}\int_0^{2\pi}|\psi_{\theta}^*(\widehat{g}-g_i)|_{C^{k-2}(U_i)}+|\psi_{\theta}^*g_i-\phi^*_{i,\theta}g_i|_{C^{k-2}(U_i)}\,d\theta\le C_0\epsilon, \end{split}\end{equation*} which combined with \eqref{e: gi and g} implies \eqref{e: g3 and g}. Moreover, if $x\in V_i$, then $\psi_{\theta}(x)=\phi_{i,\theta}(x)$ and $\widehat{g}(x)=g_i(x)$, so we have \begin{equation*}\begin{split} \overline{g}(x) =\frac{1}{2\pi}\int_0^{2\pi}\psi^*_{\theta}(\widehat{g}(\psi_{\theta}(x)))\,d\theta =\frac{1}{2\pi}\int_0^{2\pi}\phi^*_{i,\theta}(g_i(\phi_{i,\theta}(x)))\,d\theta=g_i(x), \end{split}\end{equation*} which finishes the proof. \end{proof} \subsection{Surgery on the soliton metric} \label{ss: Surgery on the soliton metric part1} In this subsection we will conduct a surgery on the soliton by first removing a neighborhood of the edges $\Gamma$ and then grafting a region covered by $\epsilon$-cylindrical planes onto the soliton. After the surgery, we obtain a complete metric on $\RR\times S^1$, which is covered everywhere by $\epsilon$-cylindrical planes. We fix some conventions and notations. First, in the rest of this section we assume $(M,g)$ is a 3D steady gradient soliton with positive curvature that is not a Bryant soliton. Then by Theorem \ref{l: wing-like} we may assume after a rescaling that \begin{equation}\label{e: dinner} \lim_{s\rii}R(\Gamma_1(s))=\lim_{s\rii}R(\Gamma_2(s))= 4. 
\end{equation} Next, let $\rho=d_g(\cdot,\Gamma)$, and for any $0<A<B$ write $\Gamma_{[A,B]}=\rho^{-1}([A,B])$, $\Gamma_{\le A}=\rho^{-1}([0,A])$, $\Gamma_{\ge A}=\rho^{-1}([A,\infty))$, and $\Gamma_A=\rho^{-1}(A)$. In the following lemma we extend the soliton metric on $\Gamma_{\ge A}$ to obtain a complete manifold $(M',g')$ diffeomorphic to $\RR\times S^1$, which is covered by $\epsilon$-cylindrical planes. \begin{lem}[The $\epsilon$-grafted soliton metric $g'$]\label{l: lemma one} Let $(M,g)$ be a 3D steady gradient soliton with positive curvature that is not a Bryant soliton satisfying \eqref{e: dinner}. For any $\epsilon>0$ and $m\in\mathbb{N}$, there exist $A>0$ and a complete Riemannian manifold $(M',g')$ diffeomorphic to $\RR\times S^1$ such that the following hold: \begin{enumerate} \item There is an isometric embedding $\phi:(\Gamma_{> A},g)\ri (M',g')$. \item $(M',g')$ is an $\epsilon$-cylindrical plane at any point $x\in M'$ in the $C^m$-norm. \end{enumerate} \end{lem} \begin{proof} Let $A>0$ be sufficiently large such that $\Gamma_{\ge A}$ is covered by $\epsilon$-cylindrical planes. We may furthermore increase $A$ depending on $m,\epsilon$ and the soliton $(M,g)$. By using the Ricci flow equation $\partial_tg(t)=-2\Ric(g(t))$, $g(t)=\phi_{-t}^*g$, the quadratic curvature upper bound in Theorem \ref{t: R upper bd}, and Shi's derivative estimates \cite{Shi1987derivative1}, we may assume when $A$ is sufficiently large that \begin{equation*} |\nabla^{\ell}(g-\phi^*_{-(k+1)}g)|\le\epsilon\quad\textit{on}\quad\phi_{k}(\ga),\quad k\in\mathbb{N},\ \ell=0,...,m, \end{equation*} where the covariant derivatives are taken with respect to $g$. 
Therefore, for each $k\in\mathbb{N}$, by a standard gluing-up argument, we can construct a metric $g_k$ on $\ga$ which satisfies \begin{equation*} g_k=\phi^*_{-k}g \quad\textit{on}\quad \phi_{k}(\ga), \end{equation*} and \begin{equation}\label{costco} |\nabla^{\ell}(g_k-g)|\le C_0\,\epsilon\quad \textit{on}\quad \ga,\quad \ell=0,...,m, \end{equation} where here and below $C_0$ denotes all positive constants that only depend on the soliton and $m$. Now fix a point $p\in\ga\subset M$ and let $p_k:=\phi_{k}(p)$. Then after passing to a subsequence we may assume that the pointed manifolds $(\ga,g_k,p_k)$ converge to a smooth manifold $(M',g',p')$. At the same time, the isometric embeddings $\phi_{k}:(\ga,g,p)\ri(\phi_{k}(\ga),g_k,p_k)\subset(M,g_k,p_k)$ smoothly converge to an isometric embedding $\pi:(\ga,g,p)\rightarrow(M',g',p')$ in the $C^m$-sense. Furthermore, by \eqref{costco}, we see that $(M',g',p')$ is $C_0\epsilon$-close to the smooth limit of $(M,g,p_k)$, which by Lemma \ref{l: DR to cigar} must be isometric to $\RR\times S^1$ after a suitable rescaling. In particular, this implies that $(M',g')$ is complete, diffeomorphic to $\RR\times S^1$, and covered by $C_0\epsilon$-cylindrical planes. \end{proof} We now use the $\epsilon$-grafted soliton metric $g'$ from Lemma \ref{l: lemma one} to generate a family of metrics $g'(t)$ on $M'$, which satisfies the Ricci flow equation in a staircase region of $M'\times[0,\infty)$. In future proofs in this section, the flow $(M',g'(t))$ will be used as the domain of harmonic map heat flows. First, let $X$ be a smooth vector field such that $X=\nabla f$ on $\Gamma_{\ge A+200}$, $X=0$ on $M'\setminus\ga$, and $|\nabla^mX|\le C_0$, $m=0,1,...,100$. Second, let $\psi_{t}$ be the family of diffeomorphisms generated by $X$ with $\psi_{0}=\textnormal{id}$. Let \begin{equation*} g'(t)=\psi_{t}^*g',\quad t\ge0, \end{equation*} be a smooth family of metrics on $M'$. 
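On the region where $X=\nabla f$ and $g'=g$, the family $g'(t)=\psi_t^*g'$ moves by Ricci flow for the usual reason that a steady soliton generates a self-similar solution. Writing the steady soliton equation as $\Ric(g)+\nabla^2f=0$ (one of the two common sign conventions, and the one consistent with pulling back by the forward flow of $\nabla f$), the verification is one line:

```latex
\begin{equation*}
\partial_t\,\psi_t^*g
  =\psi_t^*\left(\LL_{\nabla f}\,g\right)
  =\psi_t^*\left(2\nabla^2 f\right)
  =-2\,\psi_t^*\Ric(g)
  =-2\Ric\!\left(\psi_t^*g\right),
\end{equation*}
```

using $\LL_{\nabla f}g=2\nabla^2 f$ and the diffeomorphism invariance of the Ricci tensor.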
Then $(M',g'(t))$ is covered by $\epsilon$-cylindrical planes everywhere for all $t\ge0$. For a subset $U\subset M'$ and $t\ge0$, let \begin{equation*} U^t=\psi_t(U)\subset M'. \end{equation*} After replacing $A$ by $A+200$, we have $X=\nabla f$, $\psi_t=\phi_t$, and $g'=g$ on $\Gamma_{>A}$. Noting also that $\phi_t(\Gamma_{>A})\subset\Gamma_{>A}$, we have for any $B\ge A$, \begin{equation*}\begin{split} \Gamma^t_{>B}&=\phi_t(\Gamma_{>B})=\{x\in \Gamma_{>A}: d_{g(t)}(x,\Gamma)>B\}, \end{split}\end{equation*} where in the second equality we identified $\Gamma_{>A}\subset M'$ with $\Gamma_{>A}\subset M$, and used the fact that $g(t)=\phi_t^*g$. Since $g(t)$ satisfies the Ricci flow equation, it follows that $g'(t)$ satisfies the Ricci flow equation on the open subset \begin{equation*} \bigcup_{t\ge0}\,(\Gamma^t_{> A}\times\{t\})\subset M'\times[0,\infty). \end{equation*} We call $(M',g')$ the $\epsilon$-grafted soliton, and $(M',g'(t))$ the $\epsilon$-grafted soliton flow. \subsection{Surgery on SO(2)-symmetric metrics} \label{ss: Surgery on the background SO(2)-symmetric metrics part 1} The next lemma allows us to do a surgery on an $SO(2)$-symmetric metric $\widehat{g}$ defined on an open subset containing $\Gamma_{\ge B}$ for some large $B>0$. This surgery extends the incomplete $SO(2)$-symmetric metric $\widehat{g}$ to a complete $SO(2)$-symmetric metric. Moreover, if $\widehat{g}$ is close to the soliton metric $g$, then the resulting complete metric is close to the grafted soliton metric $g'$. In future proofs in this section, we will run harmonic map heat flows from the grafted soliton flow $(M',g'(t))$ to Ricci flows starting from suitable $SO(2)$-symmetric metrics obtained by the surgery. \begin{lem}[A global $SO(2)$-symmetric metric]\label{l: lemma two} There is a constant $C_0>0$ such that the following holds: For any $\epsilon>0$, let $A>0$ and $(M',g')$ be the $\epsilon$-grafted soliton from Lemma \ref{l: lemma one}. 
Then for any $B>A$, suppose $\widehat{g}$ is an $SO(2)$-symmetric metric on an open subset $U\supset\Gamma_{\ge B}$ with the $SO(2)$-isometry $\psi_{\theta}$ and the killing field $X$, such that \begin{equation}\label{e: banana} |\widehat{g}-g'|_{C^{100}(U)}\le \epsilon,\quad |X-\partial_{\theta}|_{C^0(U)}<\frac{1}{1000}, \end{equation} where $\partial_{\theta}$ is the killing field of an $\epsilon$-cylindrical plane. Then there is an $SO(2)$-symmetric metric $\widetilde{g}$ on $M'$ with the $SO(2)$-isometry $\widetilde{\psi}_{\theta}$ and the killing field $\widetilde{X}$, such that \begin{equation*} |\widetilde{g}-g'|_{C^{98}(M')}\le C_0\,\epsilon\quad\textit{and}\quad|\widetilde{X}-\partial_{\theta}|_{C^{98}(M')}\le C_0\,\epsilon. \end{equation*} Moreover, we have $\widetilde{g}=\widehat{g}$ and $\widetilde{\psi}_{\theta}=\psi_{\theta}$ on $\Gamma_{\ge B+100}$. \end{lem} \begin{proof}[Proof of Lemma \ref{l: lemma two}] Let $\psi_{\theta}$, $\theta\in[0,2\pi)$, be the $SO(2)$-isometry of $\widehat{g}$. It is easy to find a covering of $M'$ by a sequence of $\epsilon$-cylindrical planes $\{U_i\}_{i=1}^{\infty}$ together with $U_0=U$ such that the number of them intersecting at any point is bounded by a universal constant. Then we can find a partition of unity $\{h_i\}_{i=0}^{\infty}$ subordinate to $\{U_i\}_{i=0}^{\infty}$ such that the function $h_0$ satisfies $h_0(\psi_{\theta}(x))=1$ for all $x\in\Gamma_{\ge B+100}$ and $\theta\in[0,2\pi)$. Now the assertions follow immediately from applying Lemma \ref{l: glue cylindrical plane}. \end{proof} \subsection{An approximating metric away from the edge}\label{ss: the desired exponential decay part 2} In this subsection, we construct an $SO(2)$-symmetric approximating metric away from the edge $\Gamma$, such that the error decays at the rate $O(e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)})$. 
In the proof of Theorem \ref{t: precise approximation}, we will need to choose some constants that are sufficiently large or small such that certain requirements are satisfied. In order to show that the dependence between these constants is not circular, we introduce the following parameter order, \begin{equation*} \delta_0,\,C_0,\,T,\,\underline{D},\,\epsilon,\,A,\,D, \end{equation*} such that each parameter is chosen depending only on the preceding parameters. \begin{theorem}[Approximation with a good exponential decay]\label{t: precise approximation} Let $(M,g,f,p)$ be a 3D steady gradient soliton that is not a Bryant soliton satisfying \eqref{e: dinner}. Then there exist constants $\epsilon_1,A_1>0$ and an $SO(2)$-symmetric metric $\widehat{g}$ on an open subset containing $\Gamma_{\ge A_1}$ such that $\Gamma_{\ge A_1}$ is covered by $\epsilon$-cylindrical planes, and \begin{equation*} |\nabla^m(g-\widehat{g})|\le e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}\quad\textit{on}\quad\Gamma_{\ge A_1},\quad m=0,...,100. \end{equation*} Moreover, let $X$ be the killing field of the $SO(2)$-isometry of $\widehat{g}$ and $\partial_{\theta}$ be the $SO(2)$-killing field of an $\epsilon$-cylindrical plane; then we have $|X-\partial_{\theta}|\le\frac{1}{1000}$. \end{theorem} \begin{proof} We choose the following constants, which satisfy the above parameter order, and whose values may be further adjusted later: \begin{enumerate} \item Let $\delta_0>0$ be from Theorem \ref{t: symmetry improvement}. \item Let $C_0>0$ be the maximum of $1$ and the constants $C_0>0$ from Lemmas \ref{l: two epsilon cylindrical planes are close}, \ref{l: glue cylindrical plane}, and \ref{l: lemma two}. \item Let $T>T_0$, where $T_0$ is from Theorem \ref{t: symmetry improvement}. Assume also that $2C_0^2\cdot e^{400}<e^{\frac{\delta_0}{2}T}$ and $T^{1/2}>\frac{160}{\delta_0}$. Let $\underline{D}(T)>0$ be determined by Theorem \ref{t: symmetry improvement}. 
\item\label{i: 3} Let $0<\epsilon<\min\{\frac{1}{1000C_0},\frac{\overline{\delta}(T)}{C_0},\frac{\overline{\epsilon}(T)}{e^{400}C_0^2}\}$, where $\overline{\delta}(T),\overline{\epsilon}(T)>0$ are constants determined by Theorem \ref{t: symmetry improvement}. \item Let $A>0$ be sufficiently large so that \begin{enumerate} \item $g'=g$ on $\Gamma_{\ge A}\subset M'$, where $(M',g')$ is the $\frac{\epsilon}{C_0}$-grafted soliton from Lemma \ref{l: lemma one} and $\Gamma_{\ge A}$ is covered by $\frac{\epsilon}{C_0}$-cylindrical planes. Let $(M',g'(t))$ be the $\frac{\epsilon}{C_0}$-grafted soliton flow, which satisfies the Ricci flow equation on \begin{equation*} \bigcup_{t\ge0}\Gamma^t_{> A}\subset M'\times[0,\infty). \end{equation*} \item By Theorem \ref{l: wing-like} and the assumption \eqref{e: dinner} we may assume that for any point $x\in\Gamma_{\ge A}\subset M$ and $t\ge0$, we have \begin{equation*} 2-\frac{\delta_0}{16}\le \frac{d}{dt}d_g(\phi_t(x),\Gamma)\le2+\frac{\delta_0}{16}. \end{equation*} \end{enumerate} \item Let $D=\max\{A,\ln C_0,\ln\epsilon^{-1},10\underline{D},100T^{1/2},e^{400}\}$, enlarged if necessary so that $\frac{\ln C_0}{D}<0.01$. \end{enumerate} We will impose two inductive assumptions. The first one produces a finite sequence of metrics $\widehat{g}_n$ on $\Gamma_{\ge D_n}$ until $n$ is sufficiently large so that $\widehat{g}_n$ satisfies the assertion of the theorem. For each fixed $n$ for which the first inductive assumption holds, the second inductive assumption produces an infinite sequence of metrics $\widehat{g}_{n,i}$, $i=0,1,2,...$, on $\Gamma_{\ge D_n}$, where $\widehat{g}_{n,0}=\widehat{g}_{n}$. We will then take a limit of these metrics $\widehat{g}_{n,i}$ as $i\rii$ and obtain a metric $\widehat{g}_{n+1}$ that satisfies the first inductive assumption for $n+1$. We will see that all metrics in the proof are $\frac{1}{2000C_0}$-close to $(\RR\times S^1,g_{stan})$.
So all the following derivatives and norms at a point $x$ are taken and measured with respect to $(\phi_x^{-1})^*g_{stan}$, where $\phi_x$ is a $\frac{1}{2000C_0}$-isometry at $x$. Note that for a different choice of $\phi_x$, the estimates differ at most by a factor of $1.1$. \textbf{Inductive assumption one:} For any $n\in\mathbb{N}$ such that $\frac{n\ln C_0}{D}\le2.02$, there are an increasing sequence of constants $D_n>0$ and $SO(2)$-symmetric metrics $\widehat{g}_{n}$ on $\Gamma_{\ge D_n}$ such that, with $\alpha_n:=\frac{n\ln C_0}{D}$, we have \begin{equation}\label{e: induction on n} |\nabla^m(\widehat{g}_{n}-g)|\le \epsilon\cdot e^{-\alpha_n(d_g(\cdot,\Gamma)-D_n)} \quad \textit{on}\quad \Gamma_{\ge D_n},\quad m=0,...,100. \end{equation} Moreover, let $X_n$ be the killing field of the $SO(2)$-isometry of $\widehat{g}_n$ and $\partial_{\theta}$ be the $SO(2)$-killing field of an $\epsilon$-cylindrical plane; then we have \begin{equation}\label{e: indone-vector} |X_n-\partial_{\theta}|\le\frac{1}{1000}. \end{equation} Suppose for the moment that inductive assumption one is true. Since $\frac{\ln C_0}{D}<0.01$, we can find an integer $n$ such that $2<\frac{n\ln C_0}{D}\le2.02$. Then the metric $\widehat{g}_n$ on $\Gamma_{\ge D_n}$ satisfies the assertion of the theorem, with $\epsilon_1=\frac{1}{2}(\frac{n\ln C_0}{D}-2)$ and $A_1=D_n$. So the theorem follows immediately once inductive assumption one is established. First, for $n=0$, since $(M',g')$ is covered everywhere by $\frac{\epsilon}{C_0}$-cylindrical planes and $g'=g$ on $\Gamma_{\ge A}$, by applying Lemma \ref{l: glue cylindrical plane} we obtain an $SO(2)$-symmetric metric on $M'$ which is covered by $C_0\epsilon$-cylindrical planes, and whose restriction to $\Gamma_{\ge D_0}$ satisfies the inductive assumptions for some $D_0\ge A+D$. Now suppose inductive assumption one holds for some $n\ge 0$; in the rest of the proof we show that it also holds for $n+1$.
Without loss of generality, we may assume $\frac{(n+1)\ln C_0}{D}\le2.02$, because otherwise we are done. Now we impose a second inductive assumption. \textbf{Inductive assumption two:} Let $n\ge0$ be fixed. Then for any $k\in\mathbb{N}$ there exists an $SO(2)$-symmetric metric $\widehat{g}_{n,k}$ on an open subset in $M$ containing $\Gamma_{\ge D_n}$, which satisfies \begin{equation}\begin{split}\label{e: induction on i} |\nabla^m(\widehat{g}_{n,k}-g)|\le \epsilon\cdot C_0^{-i}\cdot e^{-\alpha_n( d_g(\cdot,\Gamma)-D_n)}\quad \textit{on}\quad\Gamma_{\ge D_n+iD}\quad\\ \textit{for}\quad i=0,...,k,\quad m=0,...,100. \end{split}\end{equation} Moreover, let $X_{n,k}$ be the killing field of the $SO(2)$-isometry of $\widehat{g}_{n,k}$ and $\partial_{\theta}$ be the $SO(2)$-killing field of an $\epsilon$-cylindrical plane; then we have \begin{equation}\label{e: vector n,k} |X_{n,k}-\partial_{\theta}|\le\frac{1}{1000}. \end{equation} For $k=0$, inductive assumption two clearly holds for $\widehat{g}_{n,0}=\widehat{g}_n$. Now assume it is true for an integer $k\ge0$; we will show it also holds for $k+1$. First, since \begin{equation*} |\nabla^m(\widehat{g}_{n,k}-g)|\le \epsilon\quad \textit{on}\quad \Gamma_{\ge D_n},\quad m=0,...,100, \end{equation*} by applying Lemma \ref{l: lemma two} we obtain an $SO(2)$-symmetric metric $\widetilde{g}_{n,k}$ on $M'$ with the $SO(2)$-isometry $\psi_{n,k,\theta}$ and the killing field $\widetilde{X}_{n,k}$, such that $\widetilde{g}_{n,k}=\widehat{g}_{n,k}$ on $\Gamma_{\ge D_n+100}$ and \begin{equation}\label{e: global epsilon} |\nabla^m(\widetilde{g}_{n,k}-g')|\le C_0\epsilon,\quad\textit{and}\quad|\widetilde{X}_{n,k}-\partial_{\theta}|\le C_0\epsilon\quad\textit{on}\quad M',\quad m=0,...,98.
\end{equation} Moreover, we claim that the following holds: \begin{equation}\label{e: induction on i1} \begin{split} |\nabla^m(\widetilde{g}_{n,k}-g)|\le C_0 \cdot e^{400}\cdot \epsilon\cdot C_0^{-i}\cdot e^{-\alpha_n( d_g(\cdot,\Gamma)-D_n)}\quad \textit{on}\quad\Gamma_{\ge D_n+iD}\quad\\ \textit{for}\quad i=-1,0,...,k,\quad m=0,...,98. \end{split} \end{equation} To show this, note that for all $i\ge 1$, since $\widetilde{g}_{n,k}=\widehat{g}_{n,k}$ on $\Gamma_{\ge D_n+100}$ and $D\ge 100$, the claim clearly holds by \eqref{e: induction on i}. For $i=-1$, the claim follows directly from \eqref{e: global epsilon}. For $i=0$, to show the claim holds on $\Gamma_{\ge D_n}$, on the one hand we note that on $\Gamma_{\ge D_n+100}$ we have $\widetilde{g}_{n,k}=\widehat{g}_{n,k}$, and thus the claim holds by \eqref{e: induction on i}; on the other hand, on $\Gamma_{[D_n, D_n+100]}$ we have $d_g(\cdot,\Gamma)-D_n\le 100$, so the claim \eqref{e: induction on i1} follows from \eqref{e: global epsilon} and $\alpha_n<4$. Let $(M',\widetilde{g}_{n,k}(t))$ be the Ricci flow that starts from $\widetilde{g}_{n,k}(0)=\widetilde{g}_{n,k}$. We may take $\epsilon$ sufficiently small so that $(M',\widetilde{g}_{n,k}(t))$ exists up to time $T$ with $|\Rm|_{\widetilde{g}(t)}\le\frac{1}{T}$ for all $t\in[0,T]$, that there is a smooth harmonic map heat flow $\{\chi_{n,k,t}\}:(M',g'(t))\ri(M',\widetilde{g}_{n,k}(t))$, $t\in[0,T]$, with $\chi_{n,k,0}=\mathrm{id}$, and that the perturbation $h_{n,k}(t):=(\chi_{n,k,t}^{-1})^*g'(t)-\widetilde{g}_{n,k}(t)$ satisfies $|h_{n,k}(t)|\le\frac{1}{1000}$. See \cite[Lemma A.24]{bamler2022uniqueness} for the existence of harmonic map heat flows and estimates of perturbations. For the fixed $n$ and $k$, we will omit the subscripts $n,k$ in $\chi_{n,k,t},\widetilde{g}_{n,k}(t),h_{n,k}(t)$ for a moment. For a fixed $i=0,1,...,k+1$, let \begin{equation*} x\in\Gamma^T_{\ge D_n+iD}\quad\textit{and}\quad x'=\chi_T(x).
\end{equation*} We will apply Theorem \ref{t: symmetry improvement} (symmetry improvement) at $(x',0)$ with suitable constants that will be determined later. In the following we will verify all assumptions of Theorem \ref{t: symmetry improvement} and determine the constants. We first prove the following claim. \begin{claim}\label{claim: carro} For any $L>10 T^{1/2}$, we have $B_{\widetilde{g}(0)}(x',L)\subset \chi_t(B_{g'(t)}(x,10\,L))$ for all $t\in[0,T]$. \end{claim} \begin{proof} First, we observe that by $|h|\le\frac{1}{1000}$ we have \begin{equation}\label{e: beef0} B_{\widetilde{g}(t)}(\chi_t(x),5L)\subset \chi_t(B_{g'(t)}(x,10\,L)). \end{equation} Now let $y\in B_{\widetilde{g}(0)}(x',L)$; by the triangle inequality we have \begin{equation}\label{e: beef} \begin{split} d_{\widetilde{g}(t)}(y,\chi_t(x))&\le d_{\widetilde{g}(t)}(x',\chi_t(x))+d_{\widetilde{g}(t)}(x',y).\\ \end{split} \end{equation} On the one hand, by the local drift estimate of harmonic map heat flows \cite[Lemma A.18]{bamler2022uniqueness}, and the curvature bound $|\Rm|<\frac{1}{T}$, we have \begin{equation}\label{e: beef1} d_{\widetilde{g}(t)}(x',\chi_t(x))=d_{\widetilde{g}(t)}(\chi_t(x),\chi_T(x))\le 10\,(T-t)^{1/2}\le 10\,T^{1/2}<L. \end{equation} On the other hand, by the distance distortion estimate on the Ricci flow $(M',\widetilde{g}(t))$ under the curvature bound $|\Rm|<\frac{1}{T}$, we have \begin{equation}\label{e: beef2} d_{\widetilde{g}(t)}(x',y)\le 2\,e^{\sup|\Rm|\cdot T}\,d_{\widetilde{g}(0)}(x',y) \le4\,d_{\widetilde{g}(0)}(x',y)\le 4\,L. \end{equation} Combining \eqref{e: beef}, \eqref{e: beef1}, and \eqref{e: beef2} we obtain $d_{\widetilde{g}(t)}(y,\chi_t(x))\le 5\,L$, which by \eqref{e: beef0} implies the claim. \end{proof} First, we verify assumption \eqref{e: alpha} in Theorem \ref{t: symmetry improvement}. Let \begin{equation*} \mathcal{H}(x')=C_0\cdot e^{400}\cdot\epsilon\cdot C_0^{-(i-1)}\cdot e^{-\alpha_n( d_g(x',\Gamma)-D_n)}.
\end{equation*} Let $y\in\Gamma_{\ge D_n+(i-1)D}$; then by the triangle inequality we have \begin{equation*} d_g(y,\Gamma)\ge d_g(x',\Gamma)-d_g(y,x'), \end{equation*} and thus by \eqref{e: induction on i1} we obtain \begin{equation}\label{e: star} \begin{split} |\nabla^m h|(y,0)&\le \mathcal{H}(x')\cdot e^{\alpha_n\, d_g(y,x')}\quad\textit{on}\quad \Gamma_{\ge D_n+(i-1)D},\quad m=0,...,98. \end{split} \end{equation} Since $x\in\Gamma^T_{\ge D_n+iD}$, by the definition of the flow $g'(t)$ we see that $x\in\Gamma^t_{\ge D_n+iD}$ for all $t\in[0,T]$. In particular, we have $x\in \Gamma^0_{\ge D_n+iD}=\Gamma_{\ge D_n+iD}$. So taking $L=D/10$ in Claim \ref{claim: carro} we see that \begin{equation*} B_{\widetilde{g}(0)}(x',\underline{D})\subset B_{\widetilde{g}(0)}(x',D/10)\subset B_g(x,D)\subset\Gamma_{\ge D_n+(i-1)D}. \end{equation*} By \eqref{e: star} this verifies the assumption \eqref{e: alpha} in Theorem \ref{t: symmetry improvement} at $(x',0)$, with the constant $\alpha$ there equal to $\alpha_n$, and the constant $\epsilon$ there equal to $\mathcal{H}(x')$. Second, we verify the assumption of Theorem \ref{t: symmetry improvement} that the perturbation $h$ restricted to $B_{\widetilde{g}(0)}(x',D_{\#})\times[0,T]$ is a Ricci-DeTurck flow perturbation for \begin{equation*} D_{\#}=\frac{1}{10}(d_{g(T)}(x,\Gamma)-D_n+D)>\underline{D}. \end{equation*} This follows from Claim \ref{claim: carro} because by taking $L=D_{\#}$ we have \begin{equation*}\begin{split} B_{\widetilde{g}(0)}(x',D_{\#})&\subset\chi_t(B_{g'(t)}(x,d_{g(T)}(x,\Gamma)-D_n+D))\\ &\subset\chi_t(B_{g'(t)}(x,d_{g(t)}(x,\Gamma)-D_n+D))\subset\chi_t(\Gamma^t_{\ge D_n-D})\subset\chi_t(\Gamma^t_{\ge A}), \end{split} \end{equation*} for all $t\in[0,T]$. Moreover, by taking $\epsilon$ sufficiently small we may assume $|\nabla^kh|\le\frac{1}{1000}$, $k=0,1$, on $B_{\widetilde{g}(0)}(x',D_{\#})\times[0,T]$. See \cite[Lemma A.14]{bamler2022uniqueness} for the local derivative estimate of Ricci flow perturbations.
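Here is the elementary check behind the inequality $D_{\#}\ge\underline{D}$ used above (a verification sketch using only the constants chosen at the beginning of the proof). Since $x\in\Gamma^T_{\ge D_n+iD}$, we have $d_{g(T)}(x,\Gamma)\ge D_n+iD$, and therefore
\begin{equation*}
D_{\#}=\frac{1}{10}\bigl(d_{g(T)}(x,\Gamma)-D_n+D\bigr)\ge\frac{1}{10}\bigl((D_n+iD)-D_n+D\bigr)=\frac{(i+1)D}{10}\ge\frac{D}{10}\ge\underline{D},
\end{equation*}
where the last inequality uses $D\ge10\,\underline{D}$ from the choice of $D$.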
Lastly, we will verify assumption \eqref{e: double} of Theorem \ref{t: symmetry improvement}. Recall that assumption \eqref{e: double} consists of two estimates of $|h|$ on the parabolic boundary $(\partial B_{\widetilde{g}(0)}(x',D_{\#})\times[0,T])\cup (B_{\widetilde{g}(0)}(x',D_{\#})\times\{0\})$ of $B_{\widetilde{g}(0)}(x',D_{\#})\times[0,T]$. We first verify the assumption on $\partial B_{\widetilde{g}(0)}(x',D_{\#})\times[0,T]$, which follows immediately from the following claim and $|h|\le\frac{1}{1000}$. \begin{claim} $e^{100 D_{\#}}\cdot \mathcal{H}(x')\ge\frac{1}{1000}$. \end{claim} \begin{proof} Note that by \eqref{e: beef1} we have $d_g(x,x')\le 10T^{1/2}$, and that $d_g(x,\Gamma)\le d_{g(T)}(x,\Gamma)+2.1T$; hence \begin{equation*} d_g(x',\Gamma)\le d_g(x,\Gamma)+d_g(x,x') \le d_{g(T)}(x,\Gamma)+2.1T+10T^{1/2} =10D_{\#}+2.1T+10T^{1/2}+D_n. \end{equation*} Substituting this into $\mathcal{H}(x')$ and seeing that $e^{-4(2.1T+10T^{1/2})}\ge e^{-10T}\ge e^{-D}\ge e^{-10 D_{\#}}$, and $\epsilon\ge e^{-D}\ge e^{-10D_{\#}}$, the claim follows. \end{proof} Since $B_{\widetilde{g}(0)}(x',D_{\#})\subset\Gamma_{\ge D_n-D}$, the assumption \eqref{e: double} on $B_{\widetilde{g}(0)}(x',D_{\#})\times\{0\}$ follows immediately from the following claim. \begin{claim}\label{c: check condition 2} $|\nabla^mh|(y,0)\le \mathcal{H}(x')\cdot e^{4\,d_g(x',y)}$ for all $y\in \Gamma_{\ge D_n-D}$, $m=0,...,98$. \end{claim} \begin{proof} If $y\in\Gamma_{\ge D_n+(i-1)D}$, then the claim holds by \eqref{e: star}. Now assume $y\in \Gamma_{[D_n+(j-1)D, D_n+jD)}$ for some $0\le j\le i-1$.
Then we have \begin{equation}\label{e: dij} d_g(x',y)\ge (i-j)D, \end{equation} which combined with \eqref{e: induction on i1} implies \begin{equation}\begin{split}\label{e: hd} |\nabla^mh|(y,0)& \le \mathcal{H}(x')\cdot C_0^{i-j}\cdot e^{\alpha_n\,(d_g(x',\Gamma)-d_g(y,\Gamma))}. \end{split}\end{equation} Note that \eqref{e: dij} implies $C_0^{i-j}\le e^{\frac{\ln C_0}{D}\cdot d_g(x',y)}\le e^{0.01\,d_g(x',y)}$; using also $d_g(x',y)\ge d_g(x',\Gamma)-d_g(y,\Gamma)$ and \eqref{e: hd}, we obtain the claim. \end{proof} Therefore, applying Theorem \ref{t: symmetry improvement} (symmetry improvement) at $(x',0)$ we obtain \begin{equation}\begin{split}\label{e: first h} |\nabla^mh_-|(x',T)&\le\mathcal{H}(x')\cdot e^{2\alpha_n T}\cdot e^{-\delta_0T}\\ &\le C_0\cdot e^{400}\cdot\epsilon\cdot C_0^{-(i-1)}\cdot e^{-\alpha_n( d_g(x',\Gamma)-D_n)}\cdot e^{2\alpha_n T}\cdot e^{-\delta_0T}, \end{split}\end{equation} for $m=0,...,100$. Since $T^{1/2}>\frac{160}{\delta_0}$, using the triangle inequality $d_g(x',\Gamma)-d_g(x,\Gamma)\ge -d_g(x,x')$ and \begin{equation*} d_g(x',x)=d_g(\chi_{T}(x),x)\le10\,T^{1/2}, \end{equation*} it is easy to see that $e^{-\frac{\delta_0}{4}T}\cdot e^{-\alpha_n(d_g(x',\Gamma)-D_n)}\le e^{-\alpha_n(d_g(x,\Gamma)-D_n)}$, which combined with \eqref{e: first h} implies \begin{equation}\begin{split}\label{e: hen} |\nabla^mh_-|(x',T) &\le C_0\cdot e^{400}\cdot\epsilon\cdot C_0^{-(i-1)}\cdot e^{-\alpha_n( d_g(x,\Gamma)-D_n)}\cdot e^{2\alpha_n T}\cdot e^{-\frac{3\delta_0}{4}T}, \end{split}\end{equation} for $m=0,...,100$.
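For completeness, here is the routine computation behind the inequality $e^{-\frac{\delta_0}{4}T}\cdot e^{-\alpha_n(d_g(x',\Gamma)-D_n)}\le e^{-\alpha_n(d_g(x,\Gamma)-D_n)}$ used above (a verification sketch with the constants already fixed). Since $\alpha_n\le2.02<4$, we have
\begin{equation*}
\alpha_n\bigl(d_g(x,\Gamma)-d_g(x',\Gamma)\bigr)\le\alpha_n\,d_g(x,x')\le 4\cdot10\,T^{1/2}=\frac{40\,T}{T^{1/2}}<\frac{40\,\delta_0}{160}\,T=\frac{\delta_0}{4}T,
\end{equation*}
where the last inequality uses $T^{1/2}>\frac{160}{\delta_0}$; exponentiating gives the stated inequality.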
Moreover, by the choice of $A$ we have \begin{equation*} d_{g(T)}(x,\Gamma)\le d_g(x,\Gamma)-\left(2-\frac{\delta_0}{16}\right)T\le d_g(x,\Gamma)-\left(2-\frac{\delta_0}{4\alpha_n}\right)T, \end{equation*} and hence $e^{-\frac{\delta_0}{4}T}\cdot e^{-\alpha_n(d_g(x,\Gamma)-D_n)}\cdot e^{2\alpha_nT}\le e^{-\alpha_n(d_{g(T)}(x,\Gamma)-D_n)}$, which together with \eqref{e: hen} implies \begin{equation}\begin{split}\label{e: ce} |\nabla^mh_-|(x',T) &\le C_0\cdot e^{400}\cdot\epsilon\cdot C_0^{-(i-1)}\cdot e^{-\alpha_n( d_{g(T)}(x,\Gamma)-D_n)}\cdot e^{-\frac{\delta_0}{2}T}, \end{split}\end{equation} for $m=0,...,100$. So by the assumption $2C_0^2\cdot e^{400}<e^{\frac{\delta_0}{2}T}$ and \eqref{e: ce} we obtain \begin{equation*} |\nabla^m h_-|(x',T)\le\frac{1}{2}\epsilon\cdot C_0^{-i}\cdot e^{-\alpha_n( d_{g(T)}(x,\Gamma)-D_n)}, \end{equation*} and hence \begin{equation}\label{e: pullback h} |\nabla^m\chi_T^*h_-|(x,T)\le\epsilon\cdot C_0^{-i}\cdot e^{-\alpha_n( d_{g(T)}(x,\Gamma)-D_n)}. \end{equation} Now we restore the subscripts $n,k$. Since $g'(T)=\chi_{n,k,T}^*(h_{n,k}(T)+\widetilde{g}_{n,k}(T))$ and $h_{n,k}(T)=h_{n,k,+}(T)+h_{n,k,-}(T)$, by letting \begin{equation*} \widehat{g}_{n,k+1}=\chi_{n,k,T}^*(h_{n,k,+}(T)+\widetilde{g}_{n,k}(T)), \end{equation*} we see that $\widehat{g}_{n,k+1}$ is an $SO(2)$-symmetric metric on $M'$ and \eqref{e: pullback h} implies \begin{equation}\begin{split}\label{e: like2} |\nabla^m(g'(T)-\widehat{g}_{n,k+1})|\le\epsilon\cdot C_0^{-i}\cdot e^{-\alpha_n( d_{g(T)}(\cdot,\Gamma)-D_n)},\quad\textit{on}\quad\Gamma^T_{\ge D_n+iD}, \end{split}\end{equation} where $m=0,...,100$. This is true for all $i=0,1,...,k+1$.
Since $g'(T)=\phi_{-T}^*g$ and $\phi_{-T}:(\Gamma^T_{ \ge A},g'(T))\ri(\Gamma_{\ge A},g)$ is an isometry, after replacing $\widehat{g}_{n,k+1}$ by $\phi_T^*(\widehat{g}_{n,k+1})$, by \eqref{e: like2} we obtain an $SO(2)$-symmetric metric $\widehat{g}_{n,k+1}$ on $M'$ such that \begin{equation}\label{e: firstind} |\nabla^m(g-\widehat{g}_{n,k+1})|\le\epsilon\cdot C_0^{-i}\cdot e^{-\alpha_n( d_{g}(\cdot,\Gamma)-D_n)},\quad\textit{on}\quad\Gamma_{\ge D_n+iD},\quad m=0,...,98. \end{equation} This verifies \eqref{e: induction on i} in inductive assumption two for $k+1$. It remains to verify \eqref{e: vector n,k} in inductive assumption two for $k+1$. To do this, let $\psi_x:(\RR\times S^1,g_{stan})\ri (\RR\times S^1,\widetilde{g},x)$ be a $C_0\epsilon$-isometry at $x\in \RR\times S^1$; we may assume that $\epsilon$ is sufficiently small so that $\psi_x:(\RR\times S^1,g_{stan})\ri (\RR\times S^1,\widetilde{g}(t),x)$ is a $\frac{1}{2000C_0}$-isometry for all $t\in[0,T]$. By \eqref{e: global epsilon} we see that \begin{equation}\label{e: so2killing field} |(\chi_{n,k,T}^{-1})_*\widetilde{X}_{n,k}-(\chi_{n,k,T}^{-1})_*\partial_{\theta}|\le 4C_0\epsilon. \end{equation} Note that by the uniqueness of Ricci flow, see e.g. \cite{ChenBL}, the $SO(2)$-isometry $\psi_{n,k,\theta}$ of $\widetilde{g}_{n,k}(0)$ is an $SO(2)$-isometry of $\widetilde{g}_{n,k}(t)$ for all $t\in[0,T]$. In particular, $\widetilde{X}_{n,k}$ is the killing field of $\widetilde{g}_{n,k}(T)$ and hence $(\chi_{n,k,T}^{-1})_*\widetilde{X}_{n,k}$ is the killing field of $\widehat{g}_{n,k+1}$. So the vector field $(\chi_{n,k,T}^{-1})_*\partial_{\theta}$ is the $SO(2)$-killing field of a $\frac{1}{2000C_0}$-cylindrical plane.
By Lemma \ref{l: two epsilon cylindrical planes are close}, for the $SO(2)$-killing field $\partial_{\theta'}$ of an $\epsilon$-cylindrical plane, we have $|\partial_{\theta'}-(\chi_{n,k,T}^{-1})_*\partial_{\theta}|\le\frac{1}{2000}$, which combined with \eqref{e: so2killing field} implies \begin{equation*} |(\chi_{n,k,T}^{-1})_*\widetilde{X}_{n,k}-\partial_{\theta'}|\le\frac{1}{1000}, \end{equation*} which confirms \eqref{e: vector n,k} in inductive assumption two for $k+1$. Combining this with \eqref{e: firstind}, we have verified inductive assumption two for $k+1$. Therefore, inductive assumption two holds for all $k\ge0$. Now let $p\in \Gamma_{> D_n}$ be some fixed point; then we may assume, after passing to a subsequence and letting $k\rii$, that the pointed $SO(2)$-symmetric manifolds $(M',\widehat{g}_{n,k},p)$ converge in the $C^{98}$-norm to an $SO(2)$-symmetric manifold $(M',\widehat{g}_{n+1},p)$, which satisfies \begin{equation}\label{e: readyh} |\nabla^m(g-\widehat{g}_{n+1})|\le \epsilon\cdot C_0^{-i}\cdot e^{-\alpha_n(d_g(\cdot,\Gamma)-D_n)} \quad \textit{on}\quad \Gamma_{\ge D_n+iD}, \end{equation} for $m=0,...,98$. Moreover, let $X_{n+1}$ be the killing field of the $SO(2)$-isometry of $\widehat{g}_{n+1}$ and $\partial_{\theta}$ be the $SO(2)$-killing field of an $\epsilon$-cylindrical plane; then $|X_{n+1}-\partial_{\theta}|\le\frac{1}{1000}$, which verifies \eqref{e: indone-vector} for $n+1$. For any $i\ge0$ and $y\in\Gamma_{[D_n+iD,D_n+(i+1)D)}$, we have $d_g(y,\Gamma)\le D_n+(i+1)D$, which together with \eqref{e: readyh} implies \begin{equation}\label{e: readyh2} |\nabla^m(g-\widehat{g}_{n+1})|(y)\le \epsilon\cdot e^{-\alpha_{n+1}(d_g(y,\Gamma)-D_{n+1})}, \end{equation} where $D_{n+1}=\frac{(D+D_n)\ln C_0+\alpha_n D_nD}{\ln C_0+\alpha_nD}>D_n$ and $\alpha_{n+1}=\frac{(n+1)\ln C_0}{D}$.
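For the reader's convenience, we record the elementary computation behind \eqref{e: readyh2} (a verification sketch using only the quantities defined above). For $y\in\Gamma_{[D_n+iD,D_n+(i+1)D)}$ we have $i\ge\frac{d_g(y,\Gamma)-D_n}{D}-1$, and hence, since $C_0\ge1$,
\begin{equation*}
C_0^{-i}\,e^{-\alpha_n(d_g(y,\Gamma)-D_n)}\le C_0\,e^{-\left(\alpha_n+\frac{\ln C_0}{D}\right)(d_g(y,\Gamma)-D_n)}=C_0\,e^{-\alpha_{n+1}(d_g(y,\Gamma)-D_n)}.
\end{equation*}
Moreover, the stated formula for $D_{n+1}$ is equivalent to $D_{n+1}-D_n=\frac{D\ln C_0}{\ln C_0+\alpha_nD}=\frac{\ln C_0}{\alpha_{n+1}}$, so that $C_0=e^{\alpha_{n+1}(D_{n+1}-D_n)}$, and therefore
\begin{equation*}
C_0\,e^{-\alpha_{n+1}(d_g(y,\Gamma)-D_n)}=e^{-\alpha_{n+1}(d_g(y,\Gamma)-D_{n+1})},
\end{equation*}
which gives \eqref{e: readyh2}.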
Therefore, we have \begin{equation*} |\nabla^m(g-\widehat{g}_{n+1})|\le \epsilon\cdot e^{-\alpha_{n+1}(d_g(\cdot,\Gamma)-D_{n+1})} \quad \textit{on}\quad \Gamma_{\ge D_{n+1}}, \end{equation*} for $m=0,...,100$, which verifies inductive assumption one for $n+1$ and thus proves the theorem. \end{proof} \subsection{Extending the approximating metric near the edges} \label{ss: near the edge part 3} The approximating $SO(2)$-symmetric metric in Theorem \ref{t: precise approximation} is constructed on an open subset away from $\Gamma$. Next, we want to extend it to an $SO(2)$-symmetric metric which is also defined on a neighborhood of $\Gamma$. Since the soliton dimension reduces to $\R\times\cigar$ along $\Gamma$, we can find a sequence of $SO(2)$-symmetric metrics in balls centered at $\Gamma$ whose radii go to infinity, such that these metrics are $e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}$-close to the soliton metric. Then by using Lemma \ref{l: glue cigar} we can glue these metrics near $\Gamma$ with the approximating metric from Theorem \ref{t: precise approximation} at a suitable distance to $\Gamma$, and obtain an $SO(2)$-symmetric metric defined everywhere outside of a compact subset of $M$. This new metric inherits the closeness to $g$ both near $\Gamma$ and away from $\Gamma$. To achieve this goal, we need the following lemma, which compares an almost killing field $Y$ in $(\R\times\cigar,g_{\Sigma})$ with the actual killing field. \begin{lem}\label{l: glue cigar} For any $C_1,\epsilon_1>0$, there exists $C(C_1,\epsilon_1)>0$ such that the following holds: Write the metric of $(\R\times\cigar,g_{\Sigma})$ as $g_{\Sigma}=ds^2+dr^2+\varphi^2(r)d\theta^2$ in the coordinates $(s,r,\theta)$, where $r$ is the distance to the line $\R\times\{x_{tip}\}$. Suppose $R((0,x_{tip}))=4$. Suppose $Y$ is a vector field defined on $A\le r\le B$ for some $0<A<B$, such that \begin{equation}\label{e: L_Yg} |\nabla^k(\LL_{Y}g_{\Sigma})|\le C_1\,e^{-2(1+\epsilon_1)r},\quad k=0,...,98.
\end{equation} Suppose also that \begin{equation}\label{e: assump limit} |\nabla^k(Y-\partial_{\theta})|(\cdot,B,\cdot)\le C_1\,e^{-2(1+\epsilon_1)A},\quad k=0,...,98. \end{equation} Then we have \begin{equation*} |\nabla^k(Y-\partial_{\theta})|(\cdot,A,\cdot)\le C\,e^{-2(1+\epsilon_1)A},\quad k=0,...,96. \end{equation*} \end{lem} \begin{proof} Write $Y=Y^s\partial_s+Y^r\partial_r+Y^{\theta}\partial_{\theta}$ in the coordinates $(s,r,\theta)$; then by the formula for the Lie derivative of a symmetric 2-tensor, we have \begin{equation}\label{e: lie} \begin{split} \LL_Y g_{\Sigma}(\partial_{r},\partial_{\theta})&=\partial_{\theta} Y^r+\pr Y^{\theta}\varphi^2;\\ \LL_Y g_{\Sigma}(\pr,\ps)&=\pr Y^s+\ps Y^r;\\ \LL_Y g_{\Sigma}(\pr,\pr)&=2\,\pr Y^r.\\ \end{split} \end{equation} Moreover, by assumption \eqref{e: assump limit} we have \begin{equation*} (|Y^s|_{C^k}+|Y^r|_{C^k}+|Y^{\theta}-1|_{C^k})(\cdot,B,\cdot)\le C\,e^{-2(1+\epsilon_1)A}. \end{equation*} By the third equation in \eqref{e: lie} and \eqref{e: L_Yg}, integrating from $r=A$ to $r=B$, we see that $|Y^r|_{C^{k-1}}(\cdot,A,\cdot)\le C\,e^{-2(1+\epsilon_1)A}$. Substituting this into the first two equations in \eqref{e: lie} and integrating from $r=A$ to $r=B$, we obtain $(|Y^{\theta}-1|_{C^{k-2}}+|Y^s|_{C^{k-2}})(\cdot,A,\cdot)\le C\,e^{-2(1+\epsilon_1)A}$, which proves the lemma. \end{proof} We now prove the main result of this section. \begin{theorem}\label{c: best approximation} Let $(M,g,f,p)$ be a 3D steady gradient soliton that is not a Bryant soliton. Assume $\lim_{s\rii}R(\Gamma_1(s))=\lim_{s\rii}R(\Gamma_2(s))= 4$. Then there exist constants $C,\epsilon_1>0$ and an $SO(2)$-symmetric metric $\overline{g}$ defined outside of a compact subset of $M$ such that for any $k=0,...,98$, the following hold: \begin{enumerate} \item $|\nabla^k(g-\overline{g})|\rightarrow 0$ as $x\rightarrow\infty$. \item $|\nabla^k(g-\overline{g})|\le C\, e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}$.
\end{enumerate} \end{theorem} \begin{proof} We will use $\epsilon(D)$ to denote any function which goes to zero as $D\rii$, and let $C>0$ denote constants that depend only on the soliton. On the one hand, since the manifold dimension reduces along $\Gamma$ to $\R\times\cigar$, for each $i=1,2$, by a standard gluing-up argument we can find an $SO(2)$-symmetric metric $\widetilde{g}_i$ on an open subset $U_i$ containing the balls $B_g(\Gamma_i(s),D_1(s))$ for all large $s$, such that $\lim_{s\rii}D_1(s)=\infty$ and the following hold: \begin{enumerate} \item Let $\epsilon_1>0$ be from Theorem \ref{t: precise approximation}. Then for each $i=1,2$, we have \begin{equation}\label{e: Li} |\nabla^k(\widetilde{g}_i-g)|\le\min\{\epsilon(d_g(\cdot,p)),\, C\,e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}\},\quad\textit{on}\quad U_i. \end{equation} \item $U_1\cap U_2=\emptyset$. \item Let $\psi_{i,\theta}$, $\theta\in[0,2\pi)$, be the $SO(2)$-isometries for $(U_i,\widetilde{g}_i)$, $i=1,2$. We can also assume that there is an embedded surface $N_i$ in $U_i\cap\Gamma_{\ge 1000}$ which is diffeomorphic to $\RR^2$, intersects each $S^1$-orbit of $\psi_{i,\theta}$ exactly once, and whose tangent space $T_xN_i$ at $x\in N_i$ is $\frac{1}{100}$-close to the orthogonal space of the $S^1$-orbit passing through $x$. \item For each large $s$, there is a smooth map $\psi_{i,s}$ from $B_g(\Gamma_i(s),D_1(s))$ into $(\R\times\cigar,g_{\Sigma})$ which is a diffeomorphism onto its image, such that \begin{equation}\label{e: liying} |\nabla^k(\widetilde{g}_i-\psi_{i,s}^*g_{\Sigma})|\le C\,e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}. \end{equation} Let $X_i$ be the killing field of the $SO(2)$-isometry of $\widetilde{g}_i$; then \begin{equation}\label{e: X} |\nabla^k(X_i-(\psi_{i,s})^{-1}_*(\partial_{\theta}))|\le C\,e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}.
\end{equation} \end{enumerate} On the other hand, by Theorem \ref{t: precise approximation} we have an $SO(2)$-symmetric metric $\widehat{g}$ defined on an open subset $U\supset\Gamma_{\ge A}$ for some $A>0$ such that \begin{equation}\label{e: ying} |\nabla^k(g-\widehat{g})|\le C\, e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}. \end{equation} Let $\psi_{\theta}$, $\theta\in[0,2\pi)$, be the $SO(2)$-isometries for $(U,\widehat{g})$, and $Y$ be the killing field of $\psi_{\theta}$. Next, we will compare the two vector fields $X_i$ and $Y$ on $\Gamma_{> A}\cap U_i$. First, by \eqref{e: Li}, \eqref{e: liying}, and \eqref{e: ying} we have \begin{equation}\label{e: stem} |\nabla^k(\LL_Y\psi^*_{i,s}g_{\Sigma})|\le C\, e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}. \end{equation} Second, we also have \begin{equation*} |\nabla^k(\widehat{g}-\psi^*_{i,s}g_{\Sigma})|\le|\nabla^k(\widehat{g}-g)|+|\nabla^k(g-\widetilde{g}_i)|+|\nabla^k(\widetilde{g}_i-\psi^*_{i,s}g_{\Sigma})|\le\epsilon(d_g(\cdot,\Gamma)). \end{equation*} Moreover, the manifold is an $\epsilon(d_g(\cdot,\Gamma))$-cylindrical plane at all points where the metrics $\widehat{g}$ and $\psi^*_{i,s}g_{\Sigma}$ are defined. So we can apply Lemma \ref{l: killing fields are close} and deduce \begin{equation}\label{e: shanzhuxiao} |\nabla^k(Y-(\psi_{i,s})^{-1}_*(\partial_{\theta}))|\le\epsilon(d_g(\cdot,\Gamma)). \end{equation} In particular, for any $A>0$, there exists $B>A$ such that $\epsilon(B)<e^{-2(1+\epsilon_1)A}$. Therefore, for each $i=1,2$, by \eqref{e: stem} and \eqref{e: shanzhuxiao} we can apply Lemma \ref{l: glue cigar} and deduce that there is a function $D_2:[s_0,\infty)\ri\R_+$ for a sufficiently large $s_0$ such that $B_g(\Gamma_i(s),D_2(s))\subset U_i$ and $D_2(s)\rii$ as $s\rii$, and the following holds on $B_g(\Gamma_i(s),D_2(s))$: \begin{equation*} |\nabla^k(Y-(\psi_{i,s})^{-1}_*(\partial_{\theta}))|\le C\, e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}.
\end{equation*} Combining this with \eqref{e: X} we see that the following holds on $V_i=\cup_{s> s_0}B_g(\Gamma_i(s),D_2(s))$: \begin{equation}\label{e: paufu} |\nabla^k(Y-X_i)|\le C\, e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}, \end{equation} which implies \begin{equation*} |\nabla^k(\psi-\psi_i)|\le C\, e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}. \end{equation*} Let $h: V_1\cup V_2\cup U\ri[0,1]$ be a smooth cut-off function satisfying $h\equiv1$ on a $\psi_{i,\theta}$-invariant open subset $W_i\subset V_i$ which contains $B_g(\Gamma_i(s),D_3(s))$ with $D_3(s)\rii$ as $s\rii$ for each $i=1,2$, and $h\equiv0$ on a $\psi_{\theta}$-invariant open subset containing $U\setminus (V_1\cup V_2)$. Let $V_{i,0}$ be a $\psi_{i,\theta}$-invariant open subset such that \begin{equation*} W_i\cap\Gamma_{\ge 100}\subset V_{i,0}\subset V_i\cap\Gamma_{\ge 100}. \end{equation*} Next, we define a smooth map $F_i:N_i\times S^1\ri M$ by letting \begin{equation*} F_i(x,\theta)=\Sigma_2(1-h,h,\psi_{\theta}(x),\psi_{i,\theta}(x)), \end{equation*} where $x\in N_i$, $\theta\in[0,2\pi)$. Then in the same way as in Lemma \ref{l: glue up local diffeomorphisms from N x S1 to M} we can show that $F_i$ is a diffeomorphism onto its image, and \begin{equation*} |\nabla^k(F_{i*}(\partial_{\theta})-X_i)|\le C\,e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}. \end{equation*} Let $\Psi_{i,\theta}$ be the $2\pi$-periodic one-parameter group of diffeomorphisms generated by $F_{i*}(\partial_{\theta})$, $i=1,2$. Since $V_1\cap V_2=\emptyset$, we have $\Psi_{1,\theta}=\Psi_{2,\theta}$ on $U\setminus(V_1\cup V_2)$. Therefore, we obtain a $2\pi$-periodic one-parameter group of diffeomorphisms $\Psi_{\theta}$ on $V_1\cup V_2\cup U$, which satisfies \begin{enumerate} \item $\Psi_{\theta}=\psi_{i,\theta}$ on $W_i$. \item $\Psi_{\theta}=\psi_{\theta}$ on $U\setminus(V_1\cup V_2)$. \item $|\Psi-\psi|_{C^k(U\times S^1)}+|\Psi-\psi_i|_{C^k(V_i\times S^1)}\le C\,e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}$.
\end{enumerate} So by letting \begin{equation*} \overline{g}=\frac{1}{2\pi}\int_0^{2\pi}\Psi^*_{\theta}(h\cdot \widetilde{g}_i+(1-h)\cdot\widehat{g})\,d\theta, \end{equation*} we obtain an $SO(2)$-symmetric metric on $V_1\cup V_2\cup U$, a set which contains the complement of a compact subset of $M$. Moreover, $\overline{g}$ satisfies the following properties: \begin{enumerate} \item $\overline{g}=\widetilde{g}_i$ on $W_i$, for each $i=1,2$. \item $\overline{g}=\widehat{g}$ on $U\setminus (V_1\cup V_2)$. \item For some $D_0>0$, we have \begin{equation}\label{e: happy} \begin{cases} |\overline{g}-g|\le C\,e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}\quad\textit{on}\quad \Gamma_{\ge D_0}\\ |\overline{g}-g|\le\epsilon(d_g(\cdot,p))\quad\textit{on}\quad W_1\cup W_2. \end{cases} \end{equation} \end{enumerate} The first inequality implies that $\overline{g}$ satisfies the assertion of exponential decay away from $\Gamma$. Moreover, since $d_g(x,\Gamma)\rii$ as $d_g(x,p)\rii$ for $x\in M\setminus (W_1\cup W_2)$, the first inequality implies that $|\overline{g}-g|\le\epsilon(d_g(\cdot,p))$ also holds on $M\setminus (W_1\cup W_2)$. So $\overline{g}$ satisfies the assertion of decaying to zero at infinity. \end{proof} \section{The evolution of the Lie derivative}\label{s: lie} In this section, $(M,g)$ is a 3D steady gradient soliton that is not a Bryant soliton, and $(M,g(t))$ is the Ricci flow of the soliton. Let $h(t)$ be a linearized Ricci-DeTurck flow with background metric $g(t)$. The main result is Proposition \ref{p: h decays to zero}, which shows that $h(t)$ tends to zero as $t$ goes to infinity, if the initial value $h(0)$ satisfies the condition $\frac{h(x,0)}{R(x)}\ri0$ as $x\rii$. In particular, let $\overline{g}$ be the approximating $SO(2)$-symmetric metric obtained from Theorem \ref{c: best approximation}, and $\partial_{\theta}$, $\theta\in[0,2\pi)$, be the killing field of the $SO(2)$-symmetry.
We show that the Lie derivative $\LL_{\partial_{\theta}}g$ satisfies this initial condition, and hence decays to zero as $t\rii$ under the linearized Ricci-DeTurck equation. \subsection{The vanishing of a heat kernel at infinity} In this subsection we prove a vanishing theorem at time infinity for the heat kernel of a certain heat-type equation. We will see that for a linearized Ricci-DeTurck flow $h(t)$, the norm $|h|(\cdot,t)$ is controlled by the convolution of this heat kernel with $|h|(\cdot,0)$. Let $G$ be the heat kernel of the heat-type equation \begin{equation}\label{e: special} \pt H=\Delta H+\frac{2|\Ric|^2}{R}H. \end{equation} That is, for any $t>s$, $x,y\in M$, \begin{equation*}\begin{split} \pt G(x,t;y,s)&=\Delta_{x,t} G(x,t;y,s)+\frac{2|\Ric|^2(x,t)}{R(x,t)}G(x,t;y,s),\\ \lim_{t\searrow s}G(\cdot,t;y,s)&=\delta_{y}. \end{split}\end{equation*} \begin{lem}[Vanishing of heat kernel at time infinity]\label{l: vanishing of heat kernel} Let $p$ be the point where $R$ attains its maximum. For any fixed $D>0$, let \begin{equation*} u_D(x,t)=\int_{B_0(p,D)}G(x,t;y,0)\,d_0y; \end{equation*} then $\sup_{B_{t}(p,D)}u_D(\cdot,t)\ri0$ as $t\rii$. \end{lem} \begin{proof} Note that $u_D$ satisfies the equation \eqref{e: special}. First, we show that there exists $C_1>0$ such that $u_D(x,t)\le C_1$ for all $(x,t)\in M\times[0,\infty)$. Since the scalar curvature satisfies the equation $\partial_t R=\Delta R+\frac{2|\Ric|^2}{R}\cdot R$, by using the reproduction formula we have \begin{equation*} R(x,t)=\int_M G(x,t;y,0)R(y,0)\,d_0y. \end{equation*} By compactness we have for some $c>0$ that $R(y,0)\ge c$ for all $y\in B_0(p,D)$, so it follows that $u_D(x,t)\le c^{-1}R(x,t)\le c^{-1}R(p)$. So we may take $C_1=c^{-1}R(p)$. Now suppose the assertion of the lemma does not hold; then there exist $\epsilon>0$ and a sequence of $t_i\rii$ and $x_i\in B_{t_i}(p,D)$ such that $u_D(x_i,t_i)\ge\epsilon>0$. Without loss of generality we may assume that $t_{i+1}\ge t_i+1$.
Since $u_D\le C_1$, by a standard parabolic estimate we see that $|\partial_t u_D|(x,t)+|\nabla u_D|(x,t)\le C_2$ for some $C_2>0$. Therefore, there exists $\delta_1\in(0,1)$ such that \begin{equation}\label{e: this} \int_{B_t(p,D)}\int_{B_0(p,D)}G(x,t;y,0)\,d_0y\,d_tx=\int_{B_t(p,D)}u_D(x,t)\,d_tx\ge\delta_1, \end{equation} for all $t\in [t_i,t_i+\delta_1]$. Since $\Rm>0$, we can choose an orthonormal basis $\{e_1,e_2,e_3\}$ such that $\Rm(e_1\wedge e_2)=-\lambda_3\,e_1\wedge e_2$, $\Rm(e_1\wedge e_3)=-\lambda_2\,e_1\wedge e_3$, $\Rm(e_2\wedge e_3)=-\lambda_1\,e_2\wedge e_3$ for some $\lambda_1,\lambda_2,\lambda_3>0$. So it is easy to see that $2|\Ric|^2=2(\lambda_2+\lambda_3)^2+2(\lambda_1+\lambda_3)^2+2(\lambda_1+\lambda_2)^2$ and $R^2=4(\lambda_1+\lambda_2+\lambda_3)^2$, which implies \begin{equation*} 2|\Ric|^2-R^2<0. \end{equation*} So by compactness there is $\delta_2>0$ such that \begin{equation}\label{e: onehand} \sup_{B_t(p,D)}\frac{2|\Ric|^2-R^2}{R}\le-\delta_2. \end{equation} Let $F_D(t)=\int_{B_0(p,D)}\int_M G(x,t;y,0)\,d_tx\,d_0y$. Then we compute \begin{equation}\label{e: integratey} \begin{split} \pt&\int_M G(x,t;y,0)\,d_tx =\int_M\pt G(x,t;y,0)-R(x,t)G(x,t;y,0)\,d_tx\\ =&\int_M\Delta_{x,t} G(x,t;y,0)+\frac{2|\Ric|^2(x,t)}{R(x,t)}G(x,t;y,0)-R(x,t)G(x,t;y,0)\,d_tx\\ =&\int_M \frac{G(x,t;y,0)}{R(x,t)}(2|\Ric|^2(x,t)-R^2(x,t))\,d_tx, \end{split} \end{equation} where we used the fact that the heat kernel $G$ satisfies a Gaussian upper bound so that $\int_{M}\Delta_{x,t} G(x,t;y,0)\,d_tx$ vanishes by the divergence theorem. Hence we obtain \begin{equation}\label{e: sum} \pt F_D(t)=\int_{B_0(p,D)}\int_M \frac{G(x,t;y,0)}{R(x,t)}(2|\Ric|^2-R^2)(x,t)\,d_tx\,d_0y<0, \end{equation} and by \eqref{e: this} and \eqref{e: onehand} we see that $\pt F_D(t)\le-\delta_1\delta_2$ for all $t\in[t_i,t_i+\delta_1]$. 
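For the reader's convenience, we note that the strict inequality $2|\Ric|^2-R^2<0$ used above can be checked by a direct expansion of the two expressions in the eigenvalues $\lambda_1,\lambda_2,\lambda_3$:
\begin{equation*}
2|\Ric|^2-R^2=\Big(4\sum_{i}\lambda_i^2+4\sum_{i<j}\lambda_i\lambda_j\Big)-\Big(4\sum_i\lambda_i^2+8\sum_{i<j}\lambda_i\lambda_j\Big)=-4(\lambda_1\lambda_2+\lambda_1\lambda_3+\lambda_2\lambda_3)<0,
\end{equation*}
since $\lambda_1,\lambda_2,\lambda_3>0$.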
Since $G$ is everywhere positive, we have $F_D(t)>0$ for all $t$. This implies \begin{equation*} \begin{split} -F_D(0)&\le\lim_{t\rii}F_D(t)-F_D(0)\le -\sum_{i=1}^{\infty}\int_{t_i}^{t_i+\delta_1}\delta_1\cdot\delta_2\,dt=-\sum_{i=1}^{\infty}\delta_1^2\cdot \delta_2=-\infty, \end{split} \end{equation*} which is a contradiction, since $F_D(0)$ is finite. This proves the lemma. \end{proof} \subsection{The vanishing of the Lie derivative at infinity} We prove the main result in this subsection by applying the heat kernel estimates. First, we prove a lemma using the Anderson-Chow pinching estimate. \begin{lem}[c.f.\cite{AC}]\label{l: AC} Let $(M,g(t))$, $t\in[0,T]$, be a 3D Ricci flow with positive sectional curvature. Consider a solution $h$ to the linearized Ricci-DeTurck flow on $(M,g(t))$, and a positive solution $H$ to the following equation \begin{equation}\label{e: H evolve} \partial_t H=\Delta H+\frac{2|\Ric|^2(x,t)}{R(x,t)}H. \end{equation} Then the following holds: \begin{equation*} \partial_t\left(\frac{|h|^2}{H^2}\right)\le\Delta\left(\frac{|h|^2}{H^2}\right)+2\nabla H\cdot\nabla\left(\frac{|h|^2}{H^2}\right). \end{equation*} \end{lem} \begin{proof} By a direct computation using $\partial_th=\Delta_{L,g(t)}h$ and \eqref{e: H evolve} we have, see \cite{AC}, \begin{equation*}\begin{split} \partial_t\left(\frac{|h|^2}{H^2}\right)=\Delta\left(\frac{|h|^2}{H^2}\right)+2\nabla H\cdot\nabla\left(\frac{|h|^2}{H^2}\right)+\frac{4}{H^2}\left(R_{ijkl}h_{il}h_{jk}-\frac{|\Ric|^2}{R}|h|^2\right)\\ -2\frac{|H\nabla_ih_{jk}-(\nabla_iH)h_{jk}|^2}{H^4}. \end{split}\end{equation*} Then the lemma follows immediately from the pinching estimate of \cite{AC}: for any non-zero symmetric 2-tensor $h$, \begin{equation*} \frac{\Rm(h,h)}{|h|^2}\le\frac{|\Ric|^2}{R}. \end{equation*} \end{proof} We now prove the main result of this section. 
Since $|h|(\cdot,t)$ is controlled by the convolution of the heat kernel of \eqref{e: special} with $|h|(\cdot,0)$, we split the integral into two parts. On a compact region, the integral tends to zero as a consequence of our vanishing theorem. On the non-compact part, we use the assumption $\frac{|h|(x,0)}{R(x)}\ri0$ as $x\rii$ and the reproduction formula of the scalar curvature to deduce that the integral is bounded above by arbitrarily small multiples of the scalar curvature. \begin{prop}\label{p: h decays to zero} Let $(M,g(t))$ be the Ricci flow of a 3D steady gradient Ricci soliton that is not a Bryant soliton. Consider a solution $h(t)$, $t\in[0,\infty)$, to the linearized Ricci-DeTurck flow on $M$, i.e. \begin{equation*} \partial_t h(t)=\Delta_{L,g(t)}h(t). \end{equation*} Suppose $h$ satisfies the following initial condition: \begin{equation*} \frac{|h|(x,0)}{R(x)}\ri0\quad \textit{as}\quad x\rii. \end{equation*} Then $|h|(x,t)$ converges to $0$ smoothly and uniformly on any compact subset $B_t(p,D)$ as $t\rii$. \end{prop} \begin{proof} Let $H(x,t)=\int_M G(x,t;y,0)|h|(y,0)\,d_0y$. Then $H$ solves the equation \begin{equation*} \partial_t H=\Delta H+\frac{2|\Ric|^2(x,t)}{R(x,t)}H. \end{equation*} Therefore, by Lemma \ref{l: AC} and applying the weak maximum principle to $\frac{|h|^2}{H^2}$, we see that $|h|\le H$, that is, \begin{equation}\label{e: compare} |h(x,t)|\le\int_M G(x,t;y,0)|h|(y,0)\,d_0y. \end{equation} For any $\epsilon>0$, by the assumption on $|h|(\cdot,0)$, we can find some $D>0$ such that $|h(y,0)|\le\epsilon R(y,0)$ for all $y\in M\setminus B_0(p,D)$. We may assume $D>\frac{1}{\epsilon}$. 
So by \eqref{e: compare} and using Lemma \ref{l: vanishing of heat kernel} (vanishing of heat kernel) we have, for all sufficiently large $t$ and $x\in B_{t}(p,D)$, \begin{equation*} \begin{split} |h(x,t)|&\le\int_{B_0(p,D)}G(x,t;y,0)|h|(y,0)\,d_0y+\int_{M\setminus B_0(p,D)}G(x,t;y,0)|h|(y,0)\,d_0y\\ &\le\epsilon+\epsilon\int_MG(x,t;y,0)R(y,0)\,d_0y\\ &=\epsilon+\epsilon R(x,t)\le\epsilon(1+R(p)). \end{split} \end{equation*} This implies $\sup_{B_t(p,D)}|h|(\cdot,t)\le\epsilon(1+R(p))$ for all large $t$, and the assertion follows by letting $\epsilon\ri0$. \end{proof} As a direct application, we prove the following corollary. \begin{cor}\label{c: Lie derivative tends to zero} Let $\overline{g}$ be the $SO(2)$-symmetric metric from Theorem \ref{c: best approximation}, and let $X$ be a vector field on $M$ with bounded norm which is a smooth extension of the Killing field of the $SO(2)$-isometry of $\overline{g}$. Let $h(t)$ be the solution to the following initial value problem for the linearized Ricci-DeTurck flow \begin{equation*}\begin{cases} \pt h(t)=\Delta_{L,g(t)}h(t),\\ h(0)=\LL_Xg. \end{cases}\end{equation*} Then $|h|(x,t)$ converges to $0$ smoothly and uniformly on any compact subset $B_t(p,D)$ as $t\rii$. \end{cor} \begin{proof} By Proposition \ref{p: h decays to zero} it suffices to show that $\frac{|h|(x,0)}{R(x)}\ri0$ as $x\rii$. We claim this is true. On the one hand, by Theorem \ref{c: best approximation}, there exist $\epsilon_1,C_1>0$ such that the following holds: \begin{equation}\label{e: sit} \begin{cases} |h(0)|\le C_1\cdot e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}\quad\textit{on}\quad M,\\ |h(x,0)|\ri0\quad\textit{as}\quad x\rii. \end{cases}\end{equation} On the other hand, by Theorem \ref{l: R>e^{-2r}} (scalar curvature exponential lower bound) we can find a constant $C_2>0$ depending only on $\epsilon_1$ such that \begin{equation}\label{e: straight} R\ge C_2^{-1}e^{-2(1+\frac{\epsilon_1}{2})d_g(\cdot,\Gamma)}. 
\end{equation} For any $\epsilon>0$, by the second condition in \eqref{e: sit} we can find $D(\epsilon)>0$ such that $|h|(x,0)\le\epsilon$ for all $x\in M\setminus B_g(p,D(\epsilon))$. First, if $x\in \Gamma_{\le L(\epsilon)}\setminus B_g(p,D(\epsilon))$, where $L(\epsilon)=\frac{\ln\frac{C}{\sqrt{\epsilon}}}{2(1+\epsilon_1)}$, we have \begin{equation*} |h|(x,0)\le\epsilon=\sqrt{\epsilon}\cdot C\cdot e^{-2(1+\epsilon_1)L(\epsilon)}\le\sqrt{\epsilon}\cdot R(x). \end{equation*} Second, if $x\in \Gamma_{\ge L(\epsilon)}$, then by the first condition in \eqref{e: sit} and \eqref{e: straight} we obtain \begin{equation*} |h|(x,0)\le C_1\cdot e^{-\epsilon_1\,d_g(x,\Gamma)}\cdot e^{-2(1+\frac{\epsilon_1}{2})\,d_g(x,\Gamma)}\le C_1C_2\cdot e^{-\epsilon_1\,L(\epsilon)}\cdot R(x). \end{equation*} Since $L(\epsilon)\rii$ as $\epsilon\ri0$, the claim follows immediately. \end{proof} \section{Construction of a Killing field}\label{s: killing field} Let $(M,g)$ be a 3D steady gradient soliton that is not a Bryant soliton, and $(M,g(t))$, $t\in(-\infty,\infty)$, be the Ricci flow of the soliton. In this section, we study the evolution of a vector field $X(t)$ under the equation \begin{equation}\label{e: X(t)} \partial_t X(t)=\Delta_{g(t)} X(t)+\Ric_{g(t)}(X(t)). \end{equation} In particular, we will choose $X(t)$ such that $X(0)$ is the Killing field of the $SO(2)$-isometry of the approximating metric obtained from Theorem \ref{c: best approximation}, and show that $X(t_i)$ converges to a non-zero Killing field of the soliton $(M,g)$ for a sequence $t_i\rii$. Throughout this section, we assume \begin{equation*} \lim_{s\rii}R(\Gamma_1(s))=\lim_{s\rii}R(\Gamma_2(s))= 4, \end{equation*} and that $\overline{g}$ is the $SO(2)$-symmetric metric from Theorem \ref{c: best approximation} which satisfies \begin{equation}\label{e: quote} |\nabla^k(g-\overline{g})|\le C\, e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)},\quad k=0,...,98. 
\end{equation} Let $A>0$ be sufficiently large so that $\Gamma_{\ge A}$ is covered by $\epsilon$-cylindrical planes. In particular, the $SO(2)$-isometry $\psi_{\theta}$ of $\overline{g}$ acts freely on an open subset $U\supset\Gamma_{\ge A}$. So we can find a 2D manifold $(N,g_N)$ and a Riemannian submersion $\pi:(U,g)\ri(N,g_N)$ which maps an $S^1$-orbit to a point in $N$. Let $\rho:(-1,1)^2\ri B\subset N$ be a local coordinate at $p\in N$ such that $\rho(0,0)=p$, and $s:B\ri U$ be a section of the Riemannian submersion $\pi$. Then the map $\Phi:(-1,1)^2\times[0,2\pi)\ri U$ defined by $\Phi(x,y,\theta)=\psi_{\theta}(s(\rho(x,y)))$ gives a local coordinate at $s(p)\in U$ under which $\overline{g}$ can be written as follows, \begin{equation*} \overline{g}=\sum_{\alpha,\beta=x,y}g_{\alpha\beta}\,d{\alpha}\,d{\beta}+G(d\theta+A_xdx+A_ydy)^2, \end{equation*} where $G, A_x,A_y, g_{\alpha\beta}$ are functions that are independent of $\theta$. In these coordinates the Killing field is $\partial_{\theta}$. Note that a change of section changes the connection form $A=A_xdx+A_ydy$ by an exact form, and leaves invariant the curvature form \begin{equation*} dA=(\partial_{x}A_y-\partial_{y}A_x)dx\wedge dy=F_{xy}\,dx\wedge dy. \end{equation*} \begin{lem}\label{l: dA and nabla dA} For $k=0,...,96$, there are $C_k>0$ such that for all $q\in U$ we have \begin{equation}\label{e: nabla dA0} |\widetilde{\nabla}^{\ell} ( dA)|(\pi(q))\le\frac{C_k}{d_g^k(q,\Gamma)},\quad \ell=0,1, \end{equation} and \begin{equation}\label{e: nablavarphi} |\widetilde{\nabla}^{\ell}G^{1/2}|(\pi(q))\le\frac{C_k}{d_g^k(q,\Gamma)},\quad\ell=1,2, \end{equation} where $\widetilde{\nabla}$ denotes the covariant derivative on $(N,g_N)$. 
\end{lem} \begin{proof} We adopt the notation that for a tensor $\tau=\tau^{i_1...i_r}_{j_1...j_s}\,dx^{j_1}\otimes\cdots \otimes dx^{j_s}\otimes \partial_{x_{i_1}}\otimes \cdots\otimes\partial_{x_{i_r}}$ on $B$, we have $\tau^{i_1...i_r}_{j_1...j_s,k}=\partial_{x_k}(\tau^{i_1...i_r}_{j_1...j_s})$, $\tau^{i_1...i_r}_{j_1...j_s,k\ell}=\partial^2_{x_k,x_{\ell}}(\tau^{i_1...i_r}_{j_1...j_s})$, and \begin{equation*}\begin{split} \widetilde{\nabla}_{\partial_{x_k}}\tau&=\tau^{i_1...i_r}_{j_1...j_s\,;\,k}\;dx^k\otimes dx^{j_1}\otimes\cdots \otimes dx^{j_s}\otimes \partial_{x_{i_1}}\otimes \cdots\otimes\partial_{x_{i_r}}\\ \widetilde{\nabla}^2_{\partial_{x_{k},x_{\ell}}}\tau&=\tau^{i_1...i_r}_{j_1...j_s\,;\,k\ell}\;dx^k\otimes dx^{\ell}\otimes dx^{j_1}\otimes\cdots \otimes dx^{j_s}\otimes \partial_{x_{i_1}}\otimes \cdots\otimes\partial_{x_{i_r}}. \end{split}\end{equation*} For a point $p$ in the base manifold parametrized by $x,y$, it is convenient to choose the section so that $A(p)=0$. Then the non-zero components of the curvature tensor $\overline{R}_{IJKL}$ of $\overline{g}$ are given in terms of the components of the curvature tensor $R_{\alpha\beta\gamma\delta}$ of $(B,g_N)$, the components $F_{xy}$, and the function $G$ by the following, see \cite[Section 4.2]{Lott2007DimensionalRA}: \begin{equation}\label{e: DR} \begin{split} \overline{R}_{\theta\alpha\theta\beta}&=-\frac{1}{2}G_{;\alpha\beta}+\frac{1}{4}G^{-1}G_{,\beta}G_{,\alpha}+\frac{1}{4}g^{\alpha\beta}G^2F_{xy}^2,\\ \overline{R}_{\theta\alpha\beta\alpha}&=-\frac{1}{2}GF_{xy;\alpha}-\frac{3}{4}G_{,\alpha}F_{xy},\\ \overline{R}_{\alpha\beta\alpha\beta}&=R_{\alpha\beta\alpha\beta}-\frac{3}{4}GF_{xy}^2. \end{split} \end{equation} Let $R_{IJKL}$ be the components of the curvature tensor of the soliton metric $g$. Then by Theorem \ref{t: R upper bd} we have $|R_{IJKL}|\le\frac{C_k}{d^k_g(\cdot,\Gamma)}$ for any $k\in\mathbb{N}$ and some $C_k>0$. 
So by \eqref{e: quote} we obtain $|\overline{R}_{IJKL}|\le\frac{C_k}{d^k_g(\cdot,\Gamma)}$ after replacing $C_k$ by a possibly larger number. In particular, by the second equation in \eqref{e: DR} we obtain \begin{equation}\label{e: nabla dA} |\widetilde{\nabla} (G^{\frac{3}{2}} dA)|(\pi(q))\le\frac{C_k}{d_g^k(q,\Gamma)}, \end{equation} for all $q\in U$. By Kato's inequality this implies \begin{equation}\label{e: im} |\widetilde{\nabla} |G^{\frac{3}{2}} dA||(\pi(q))\le|\widetilde{\nabla} (G^{\frac{3}{2}} dA)|(\pi(q))\le\frac{C_k}{d_g^k(q,\Gamma)}. \end{equation} By Theorem \ref{l: wing-like} there exists $C>0$ such that \begin{equation*} d_g(\phi_t(q),\Gamma)\ge d_g(q,\Gamma)+C^{-1}\,t,\quad t\ge0, \end{equation*} for any point $q\in\Gamma_{\ge A}$ where $A>0$ is sufficiently large. Since $(M,g,\phi_t(q))$ converges to $\RR\times S^1$ as $t\rii$, we have $\lim_{t\rii}|G^{\frac{3}{2}}dA|(\pi(\phi_t(q)))=0$. By \eqref{e: im} we have \begin{equation} \left|\frac{d}{dt}\,|G^{\frac{3}{2}}dA|(\pi(\phi_t(q)))\right|=\left|\langle\widetilde{\nabla}|G^{\frac{3}{2}}dA|,\pi_*(\nabla f(\phi_t(q)))\rangle\right|\le \frac{C_k}{d_g^k(\phi_t(q),\Gamma)}. \end{equation} Integrating this from zero to infinity and using $d_g(\phi_t(q),\Gamma)\ge d_g(q,\Gamma)+C^{-1}t$, we see that there is some $C_{k-1}>0$ such that \begin{equation*} |G^{\frac{3}{2}}dA|(\pi(q))\le\int_0^{\infty}\frac{C_k}{(d_g(q,\Gamma)+C^{-1}t)^k}\,dt\le\frac{C_{k-1}}{d_g^{k-1}(q,\Gamma)}. \end{equation*} This together with \eqref{e: nabla dA} implies \eqref{e: nabla dA0}. It also implies $|\frac{1}{4}g^{\alpha\beta}G^2F_{xy}^2|(\pi(q))\le\frac{C_k}{d_g^k(q,\Gamma)}$ in the first equation in \eqref{e: DR}, and hence implies $|\widetilde{\nabla}^2G^{1/2}|(\pi(q))\le\frac{C_k}{d_g^k(q,\Gamma)}$. As before, we obtain $|\widetilde{\nabla}G^{1/2}|(\pi(q))\le\frac{C_k}{d_g^k(q,\Gamma)}$ by integration. \end{proof} Let $\partial_{\theta}$ be the Killing field of the $SO(2)$-isometry of $\overline{g}$ outside of a compact subset of $M$. We can extend it to a smooth vector field $Y$ on $M$ such that $|Y|\le 10$. 
Let $Y(t)=\phi_{t*}Y$ for all $t\ge0$, and \begin{equation*} Q(t)=-\partial_t(Y(t))+\Delta_{g(t)}Y(t)+\Ric_{g(t)}(Y(t)). \end{equation*} We will often abbreviate it as $Q(t)=-\partial_tY+\Delta Y+\Ric(Y)$ when there is no confusion. Next, we show that $Q(t)$ has a polynomial decay away from $\Gamma$. \begin{lem}\label{l: Q(t)} For $k=0,...,94$, there are $C_k>0$ such that \begin{equation*} |Q(x,t)|\le\frac{C_k}{d^k_t(x,\Gamma)}, \end{equation*} for all $t\ge0$. \end{lem} \begin{proof} Since $g(t)=\phi^*_{-t}g$, the map $\phi_t:(M,g)\ri(M,g(t))$ is an isometry, and it follows that \begin{equation*} \partial_t Y(t)=\partial_t(\phi_{t*}Y)=\phi_{t*}(\LL_{\nabla f}Y),\quad \Delta_{g(t)}Y(t)=\phi_{t*}(\Delta Y),\quad \Ric_{g(t)}(Y(t))=\phi_{t*}(\Ric(Y)). \end{equation*} So the lemma reduces to showing the following: \begin{equation*} |-\LL_{\nabla f}Y+\Delta Y+\Ric(Y)|\le\frac{C_k}{d^k_g(\cdot,\Gamma)}. \end{equation*} Let $h=\LL_Yg$, then by a direct computation we have the identity, see e.g. \cite{brendlesteady3d}, \begin{equation}\label{e: oxtail} div(h)-\frac{1}{2}tr\nabla h=\Delta Y+\Ric(Y). \end{equation} Since by \eqref{e: quote} we have \begin{equation*} |\nabla^{k-1}h|=|\nabla^{k-1}(\LL_Yg)|=|\nabla^{k-1}(\LL_Y(g-\overline{g}))|\le C\cdot e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}, \end{equation*} it then follows that \begin{equation*} |div(h)|+|tr\nabla h|\le C\cdot e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}. \end{equation*} Therefore, by \eqref{e: oxtail} and Theorem \ref{t: R upper bd} (scalar curvature polynomial upper bounds) we obtain \begin{equation}\label{water} |\Delta Y+\Ric(Y)|\le C\cdot e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}\le\frac{C_k}{d^k_g(\cdot,\Gamma)}. \end{equation} So the lemma reduces to estimating $|\LL_{\nabla f}Y|$. To do this, we write $\nabla f=F^{\theta}\partial_{\theta}+F^{\alpha}\partial_{\alpha}$, $\alpha=x,y$, where $F^{\theta}=\partial_{\theta} f\cdot G^{-1}$. 
Then we can compute that \begin{equation}\label{e: Y derivative} \LL_{\nabla f}Y=\LL_{\nabla f}\partial_{\theta}=\partial_{\theta}F^{\theta}\cdot\partial_{\theta}+\partial_{\theta}F^{\alpha}\cdot\partial_{\alpha}. \end{equation} We will see in the following equations that the two components $\partial_{\theta}F^{\theta}$ and $\partial_{\theta}F^{\alpha}$ also appear in the components of $\LL_{\nabla f}\overline{g}$, with which we will compare them. Replace the coordinate function $\theta$ by $\theta+\int_0^{y}\int_0^{x}\partial_{y}A_x(x',y')\,dx'\,dy'\mod 2\pi$. Then by \eqref{e: nabla dA0} we have \begin{equation*} \overline{g}=\sum_{\alpha,\beta=x,y}g_{\alpha\beta}\,d{\alpha}\,d{\beta}+Gd\theta^2+\overline{h}, \end{equation*} where $\overline{h}$ is a 2-tensor satisfying $|\nabla^{\ell}\overline{h}|\le \frac{C_k}{d_g^k(\cdot,\Gamma)}$, $\ell=0,1$. In particular, this implies \begin{equation*} |\overline{g}_{\alpha\theta}|+|\partial_{\beta}\overline{g}_{\alpha\theta}|\le\frac{C_k}{d_g^k(\cdot,\Gamma)},\quad \alpha,\beta=x,y. \end{equation*} So by a direct computation we obtain \begin{equation}\label{e: g derivative} \begin{split} (\LL_{\nabla f}\overline{g})_{\theta \beta}& =2\,\partial_{\theta}F^{\alpha}\cdot\overline{g}_{\alpha\beta}-F^{\theta}\partial_{\beta}G+O(d_g^{-k}(\cdot,\Gamma)),\\ (\LL_{\nabla f}\overline{g})_{\theta\theta}&=F^{\alpha}\partial_{\alpha}G+2(\partial_{\theta}F^{\theta})\cdot G+O(d_g^{-k}(\cdot,\Gamma)), \end{split} \end{equation} where $O(d_g^{-k}(\cdot,\Gamma))$ denotes functions that are bounded by $\frac{C_k}{d_g^{k}(\cdot,\Gamma)}$ in absolute value. Since $\frac{1}{2}\LL_{\nabla f}g=\nabla^2 f=\Ric$, we have \begin{equation*}\begin{split} |\LL_{\nabla f}\overline{g}|&\le|\LL_{\nabla f}(\overline{g}-g)|+|\LL_{\nabla f}g|\le C\cdot e^{-2(1+\epsilon_1)d_g(\cdot,\Gamma)}+2|\Ric_g| \le \frac{C_k}{d_g^{k}(\cdot,\Gamma)}. 
\end{split}\end{equation*} Therefore, by comparing \eqref{e: Y derivative} and \eqref{e: g derivative} we can deduce \begin{equation*} |\LL_{\nabla f}Y|\le \frac{C_k}{d_g^{k}(\cdot,\Gamma)}+C\cdot|\widetilde{\nabla} G|, \end{equation*} which combined with \eqref{e: nablavarphi} implies \begin{equation*} |\LL_{\nabla f}Y|\le\frac{C_k}{d_g^k(\cdot,\Gamma)}. \end{equation*} This proves the lemma. \end{proof} Let $Z(t)$ be a vector field which solves the following equation: \begin{equation}\label{e: Z equation} \begin{cases} -\partial_t Z+\Delta Z+\Ric(Z)=Q(t),\\ Z(0)=0. \end{cases} \end{equation} In the next lemma, we show that $Z(t)$ has a polynomial decay away from $\Gamma$. \begin{lem}\label{l: Z equation} For $k=0,...,92$, there are $C_k>0$ such that $|Z(t)|\le\frac{C_k}{d^k_t(\cdot, \Gamma)}$. \end{lem} \begin{proof} We can compute that \begin{equation*}\begin{split} \Delta|Z|^2&=2\langle\Delta Z, Z\rangle+2\langle\nabla Z, \nabla Z\rangle\\ \partial_t|Z|^2&=2\langle\partial_t Z, Z\rangle-2\,\Ric(Z,Z), \end{split}\end{equation*} and combining these with \eqref{e: Z equation} we obtain \begin{equation*} \partial_t|Z(t)|^2=\Delta_{g(t)}|Z(t)|^2-2|\nabla_{g(t)}Z(t)|_{g(t)}^2-2\langle Q(t),Z(t)\rangle_{g(t)}, \end{equation*} which we will often abbreviate as $\partial_t|Z|^2=\Delta|Z|^2-2|\nabla Z|^2-2\langle Q,Z\rangle$. Let $X(t)=Y(t)-Z(t)$, which by the definition of $Q(t)$ and \eqref{e: Z equation} solves $\partial_tX=\Delta X+\Ric(X)$. Similarly, we can show $\partial_t|X|^2=\Delta|X|^2-2|\nabla X|^2\le\Delta|X|^2$. So by the maximum principle we get $|X(t)|\le|X(0)|\le C$, and hence \begin{equation*} |Z(t)|\le|Y(t)|+|X(t)|\le C. \end{equation*} So by Lemma \ref{l: Q(t)} we have $\partial_t|Z|^2\le\Delta|Z|^2+C\cdot|Q|\le\Delta|Z|^2+\frac{C_k}{d^k_t(\cdot, \Gamma)}$. The lemma now follows immediately from the following lemma. \end{proof} \begin{lem} Let $(M,g(t))$ be the Ricci flow of a 3D steady gradient soliton that is not a Bryant soliton. 
Let $u: M\times[0,T]\ri[0,\infty)$ be a smooth non-negative function which satisfies $u(\cdot,0)=0$ and \begin{equation*} \partial_tu\le\Delta u+\frac{C_0}{d_t^k(\cdot,\Gamma)}, \end{equation*} for some $k\ge 2$. Then there exists $C=C(C_0,k)>0$ such that $u(\cdot,t)\le\frac{C}{d_t^{k-1}(\cdot,\Gamma)}$. \end{lem} \begin{proof} Let $C>0$ denote all constants depending on $C_0,k$. Denote $d_t(x,\Gamma)$ by $r(x,t)$, which satisfies the distance distortion estimates \eqref{e: distance change}. By the maximum principle and the integral formula for solutions of heat type equations, we obtain \begin{equation*} u(x,t)\le\int_0^t\int_{M}G(x,t;y,s)\frac{C_0}{r^k(y,s)}\,d_sy\,ds, \end{equation*} where $G(x,t;y,s)$ is the heat kernel of the heat equation under $g(t)$, see \eqref{e: standard heat kernel}. For a fixed $s\in[0,t]$, we split the integral $\int_{M}G(x,t;y,s)\frac{C_0}{r^k(y,s)}\,d_sy$ into two integrals on $B_s(x,\frac{r(x,s)}{1000})$ and $M\setminus B_s(x,\frac{r(x,s)}{1000})$, and denote them respectively by $I(s)$ and $II(s)$. We will estimate them similarly to the proof of Theorem \ref{t: R upper bd}. For $II(s)$, note that $\frac{d_s^2(y,x)}{t-s}\ge\frac{r(x,s)}{C}$ for all $y\in M\setminus B_s(x,\frac{r(x,s)}{1000})$; then by the heat kernel estimates Lemma \ref{l: L-geodesic} and Lemma \ref{l: heat kernel lower bound implies upper bound} we obtain \begin{equation*} II(s)\le C\int_{M\setminus B_s(x,\frac{r(x,s)}{1000})}e^{-\frac{d_s^2(y,x)}{C(t-s)}}\,d_sy \le C\cdot e^{-\frac{r(x,s)}{C}}\le\frac{C}{r^k(x,s)}. \end{equation*} For $I(s)$, we have $r(y,s)\ge\frac{r(x,s)}{2}$ for all $y\in B_{s}(x,\frac{r(x,s)}{1000})$, and thus \begin{equation*} I(s)\le C\sup_{y\in B_s(x,\frac{r(x,s)}{1000})}r^{-k}(y,s)\le\frac{C}{r^{k}(x,s)}. 
\end{equation*} Therefore, by \eqref{e: distance change} and the estimates of $I(s)$ and $II(s)$, we obtain \begin{equation*} u(x,t)\le\int_0^t\frac{C}{r^k(x,s)}\,ds \le C\left(\frac{1}{r^{k-1}(x,t)}-\frac{1}{(r(x,t)+1.9\,t)^{k-1}}\right)\le\frac{C}{r^{k-1}(x,t)}. \end{equation*} \end{proof} Now we prove the main result of this section, which produces a non-trivial Killing field of $(M,g)$ as time goes to infinity. \begin{prop}\label{l: non-vanishing} There exists a vector field $X_{\infty}$ which is not identically zero and has bounded norm such that $\LL_{X_{\infty}}g=0$. \end{prop} \begin{proof} Let $X(t)=Y(t)-Z(t)$ and $\widetilde{X}(t)=\phi_{-t*}X(t)$. We will show that there exists a sequence $t_i\rii$ such that the vector fields $\widetilde{X}(t_i)$ on $M$ smoothly converge to a non-zero Killing field $X_{\infty}$. By the definitions of $Z(t)$ and $X(t)$, it is easy to see \begin{equation*} \begin{cases} \partial_t X(t)=\Delta X(t)+\Ric(X(t)),\\ X(0)=Y(0). \end{cases} \end{equation*} Let $h(t)=\LL_{X(t)}g(t)$, then a direct computation shows that $h(t)$ satisfies the linearized Ricci-DeTurck equation \begin{equation*} \begin{cases} \partial_t h(t)=\Delta_L h(t),\\ h(0)=\LL_Xg=\LL_Yg. \end{cases} \end{equation*} Note we have the isometry \begin{equation*} (M,g(t),p,X(t),h(t))\xrightarrow[]{\phi_{-t}} (M,g,p,\widetilde{X}(t),\widetilde{h}(t)). \end{equation*} So $\widetilde{h}(t):=\phi_t^*h(t)=\LL_{\widetilde{X}(t)}g$. By Theorem \ref{l: wing-like}, for any $\epsilon>0$, the manifold is covered by $\epsilon$-cylindrical planes at scale $1$ on $\Gamma_{\ge A}$ for sufficiently large $A$. So we may pick a point $q\in M$ such that $||Y|(q,0)-1|\le\epsilon$ and $r(q,0)>2C_1+1$, where $C_1>0$ is the constant from Lemma \ref{l: Z equation}. Then by the definition of $Y$ we have $|Y|(\phi_t(q),t)=|Y|(q,0)\ge1-\epsilon$, and by Lemma \ref{l: Z equation} we have $|Z|(\phi_t(q),t)\le\frac{C_1}{d_t(\phi_t(q),\Gamma)}=\frac{C_1}{d_g(q,\Gamma)}\le\frac{1}{2}$. 
Therefore, \begin{equation*} |\widetilde{X}|_g(q,t)=|X|_{g(t)}(\phi_t(q),t)\ge |Y|_{g(t)}(\phi_t(q),t)-|Z|_{g(t)}(\phi_t(q),t)\ge\frac{1}{2}-\epsilon. \end{equation*} Next, by the $C^0$-upper bound $|X|(t)\le C$ and the standard interior estimates for linear parabolic equations, see e.g. \cite[Theorem 7.22]{lieberman1996second}, we have uniform $C^k$-upper bounds on $|\widetilde{X}|(t)$. Therefore, by the Arzel\`a-Ascoli theorem, there exist $t_i\rii$ such that $\widetilde{X}(t_i)$ smoothly uniformly converges to a vector field $X_{\infty}$, and correspondingly $\widetilde{h}(t_i)$ converges to a smooth symmetric 2-tensor $\LL_{X_{\infty}}g$. First, we have $X_{\infty}\neq0$, because $|X_{\infty}|(q)=\lim_{i\rii}|\widetilde{X}|(q,t_i)\ge\frac{1}{2}-\epsilon$. Moreover, by Corollary \ref{c: Lie derivative tends to zero} we see that $\widetilde{h}(t_i)$ converges to $0$ smoothly and uniformly on any compact subset of $M$, which implies $\LL_{X_{\infty}}g=0$. \end{proof} \section{Proof of the O(2)-symmetry}\label{s: O(2)-symmetry} In this section we prove the $O(2)$-symmetry for all 3D steady gradient solitons that are not the Bryant soliton. By Proposition \ref{l: non-vanishing} we find a non-zero smooth vector field $X$ such that $\LL_Xg=0$. We will show that $X$ induces an isometric $O(2)$-action $\{\chi_{\theta}\}_{\theta\in[0,2\pi)}$. Throughout this section we assume $(M,g,f,p)$ is a 3D steady gradient soliton with positive curvature that is not the Bryant soliton, where $p$ is the critical point of $f$, and $f(p)=0$. First, letting $\{\chi_{\theta}\}_{\theta\in\mathbb{R}}$ be the one-parameter group of isometries generated by $X$, we show that $X$ and $\nabla f$ commute, and hence the diffeomorphisms they generate commute. \begin{lem}\label{l: comm} $\left[X,\nabla f\right]=0$, and $\chi_{\theta}\circ\phi_t=\phi_t\circ\chi_{\theta}$, $t\in\R,\theta\in\R$. \end{lem} \begin{proof} We first show that the potential function is invariant under $\chi_{\theta}$. 
Let $p$ be the critical point of $f$. Since $p$ is the unique maximum point of $R$, we have $\chi_{\theta}(p)=p$ for all $\theta$, and hence \begin{equation*} f\circ\chi_{\theta}(p)=f(p)=0,\quad \nabla (f\circ\chi_{\theta})(p)=\nabla f(p)=0,\quad \nabla^2(f\circ\chi_{\theta})=\Ric=\nabla^2 f. \end{equation*} For any $x\in M$, let $\sigma:[0,1]\ri M$ be a minimizing geodesic from $p$ to $x$, then \begin{equation*}\begin{split} f(\chi_{\theta}(x))&=f(\chi_{\theta}(p))+\int_0^1\int_0^r\nabla^2(f\circ\chi_{\theta})(\sigma'(s),\sigma'(s))\,ds\,dr\\ &=f(p)+\int_0^1\int_0^r\nabla^2f(\sigma'(s),\sigma'(s))\,ds\,dr =f(x). \end{split}\end{equation*} So $f\circ\chi_{\theta}\equiv f$. Now since $\chi^*_{\theta}(f)=f$ and $\chi^*_{\theta}g=g$, it is easy to see $\chi^*_{\theta}\left(\nabla f\right)=\nabla f$. So $\left[X,\nabla f\right]=0$ and hence $\chi_{\theta}\circ\phi_t=\phi_t\circ\chi_{\theta}$. \end{proof} Second, we show that $\chi_{\theta}$ is an $SO(2)$-isometry. \begin{lem} There exists $\lambda>0$ such that after replacing $\{\chi_{\theta}\}$ by $\{\chi_{\lambda\theta}\}$, we have that $\{\chi_{\theta}\}$ is an $SO(2)$-isometry on $M$. \end{lem} \begin{proof} Since $f$ is invariant under $\chi_{\theta}$, it follows that the level sets of $f$ are invariant under $\chi_{\theta}$. So $\chi_{\theta}$ induces an isometry on each level set of $f$. Since the level sets $f^{-1}(a)$, $a>0$, are compact and diffeomorphic to $S^2$, it is easy to see that $X|_{f^{-1}(a)}$ vanishes at exactly two points, and $\chi_{\theta}|_{f^{-1}(a)}$ acts by rotations with two fixed points. Therefore, after replacing $X$ by $\lambda X$ for some $\lambda>0$ we may assume that $\chi_{\theta}|_{f^{-1}(a)}=id$ if and only if $\theta=2k\pi$, for $k\in\mathbb{Z}$. In particular, for a point $y\in f^{-1}(a)$, we have $\chi_{2\pi}(y)=y$, and $(\chi_{2\pi}|_{f^{-1}(a)})_{*y}$ is the identity transformation of the tangent space $T_yf^{-1}(a)$. 
Since $\chi_{\theta}$ is a smooth family of diffeomorphisms, and $\chi_0=id$, it follows that $\chi_{2\pi}$ preserves the orientation. So $(\chi_{2\pi})_{*y}$ is the identity transformation of $T_yM$, and hence $\chi_{2\pi}=id$. Therefore, $\chi_{\theta}$, $\theta\in[0,2\pi)$, is an $SO(2)$-isometry. \end{proof} Next, we show that the fixed point set $\Gamma'=\{x\in M: X(x)=0\}=\{x\in M: \chi_{\theta}(x)=x,\theta\in\R\}$ of the $SO(2)$-isometry $\chi_{\theta}$ coincides with $\Gamma=\Gamma_1(-\infty,\infty)\cup\Gamma_2(-\infty,\infty)\cup\{p\}$, where $\Gamma_1,\Gamma_2$ are two integral curves of $\nabla f$ from Corollary \ref{l: new Gamma}. \begin{lem} $\Gamma=\Gamma'$. \end{lem} \begin{proof} Note that $\Gamma\setminus\{p\},\Gamma'\setminus\{p\}$ are both unions of two integral curves of $\nabla f$. Let $\Gamma'_1,\Gamma'_2$ be the two components of $\Gamma'\setminus\{p\}$. It suffices to show that for each $j=1,2$, the integral curves $\Gamma_j,\Gamma'_j$ intersect at some point, after possibly switching the order of $\Gamma_1$ and $\Gamma_{2}$. To see this, note that on the one hand, by Corollary \ref{l: new Gamma} we have that the manifolds $(M,r^{-2}(x)g,x)$ converge to $(\R\times\cigar,r^{-2}(x_{tip})g_c,x_{tip})$ for any sequence $x\rii$ along $\Gamma$. On the other hand, since the points on $\Gamma'$ are fixed points of the $SO(2)$-isometry, it is easy to see that the manifolds $(M,r^{-2}(x)g,x)$ converge to $(\R\times\cigar,r^{-2}(x_{tip})g_c,x_{tip})$ for any sequence $x\rii$ along $\Gamma'$. Therefore, for any $i\in\mathbb{N}$, after possibly switching the order of $\Gamma_1$ and $\Gamma_{2}$, we may assume that there are two points $x_i\in\Gamma_1\cap (M\setminus B_g(p,2))$ and $y_i\in\Gamma'_1\cap(M\setminus B_g(p,2))$ such that $d_g(x_i,y_i)<i^{-1}$. Let $t_i>0$ be a constant such that $\phi_{-t_i}(x_i)\in B_g(p,2)\setminus B_g(p,1)$. Then \begin{equation*} d_g(\phi_{-t_i}(x_i),\phi_{-t_i}(y_i))=d_{g(t_i)}(x_i,y_i)\le d_g(x_i,y_i)\le i^{-1}\ri0. 
\end{equation*} So after passing to a subsequence we may assume $\phi_{-t_i}(x_i), \phi_{-t_i}(y_i)\ri q\neq p$, and hence $q\in\Gamma_1\cap\Gamma'_1$ and $\Gamma_1=\Gamma'_1$. Similarly we can show $\Gamma_2=\Gamma'_2$. \end{proof} Lastly, we prove the $O(2)$-symmetry, that is, there exist a totally geodesic surface $N\subset M$ and a diffeomorphism $\Phi: N\times S^1\ri M\setminus\Gamma$ such that the pull-back metric $\Phi^*g$ is a warped-product metric $\Phi^*g=g_N+\varphi^2d\theta^2$, $\theta\in[0,2\pi)$, where $g_N$ is the induced metric on $N$ and $\varphi$ is a function on $N$. \begin{lem} The $SO(2)$-isometry $\chi_{\theta}$ is an $O(2)$-isometry. \end{lem} \begin{proof} Let $\Sigma=f^{-1}(a)$ for some fixed $a>0$, and $\sigma:[0,1]\ri \Sigma$ be a minimizing geodesic in $\Sigma$ connecting the two fixed points $\{x_a,\overline{x}_a\}$. Let $\Phi:(0,1)\times(-\infty,\infty)\times [0,2\pi)\ri M\setminus\Gamma$ be a diffeomorphism defined as \begin{equation*} \Phi(r,t,\theta)=\phi_t(\chi_{\theta}(\sigma(r)))=\chi_{\theta}(\phi_t(\sigma(r))). \end{equation*} Then we can write the metric under this coordinate as \begin{equation*} g=\sum_{\alpha,\beta=r,t}g_{\alpha\beta}dx_{\alpha}dx_{\beta}+G(d\theta+A)^2. \end{equation*} Since the vectors $\partial_{r},\partial_t=\nabla f,\partial_{\theta}=X$ are orthogonal at all points in $\Sigma\setminus\{x_a,\overline{x}_a\}=\Phi(\{(r,0,\theta): r\in(0,1),\theta\in[0,2\pi)\})$, the connection form $A=A_r\,dr+A_t\,dt$ vanishes at these points. Moreover, we have $A_t=0$ everywhere because $\langle\partial_t,\partial_{\theta}\rangle=\langle\nabla f,X\rangle=0$. On the one hand, by the curvature formula \eqref{e: DR} for an $SO(2)$-symmetric metric we have \begin{equation}\label{e: Ricextra} \Ric_{t\theta}=R_{\theta r t r}=-\frac{1}{2}GF_{rt;r}-\frac{3}{4}G_{,r}F_{rt}. 
\end{equation} On the other hand, by the soliton equation $\Ric=\nabla^2 f$ we have \begin{equation}\label{e: Ricci by the soliton} \Ric_{t\theta}=\nabla^2_{t\theta} f=-\left\langle\nabla_{\nabla f}\nabla f,\partial_{\theta}\right\rangle=-\left\langle\nabla_{\partial_t}\partial_t,\partial_{\theta}\right\rangle =\frac{1}{2}\partial_{\theta}\left\langle\partial_t,\partial_t\right\rangle=0, \end{equation} where we used $\left\langle\partial_t,\partial_{\theta}\right\rangle=\langle X,-\nabla f\rangle=0$ and $\partial_{\theta}\left\langle\partial_t,\partial_t\right\rangle=X(|\nabla f|^2)=0$. So by \eqref{e: Ricextra} and \eqref{e: Ricci by the soliton} we have \begin{equation}\label{e: der of F is zero} \widetilde{\nabla}_{\partial_{r}}(G^{3/2}dA)=G^{\frac{1}{2}}(GF_{rt;r}+\frac{3}{2}G_{,r}F_{rt})\,dr\wedge dt=0, \end{equation} at points in $\Sigma\setminus\{x_a,\overline{x}_a\}$. \begin{claim} $dA=0$ holds on $M\setminus\Gamma$. \end{claim} \begin{proof}[Proof of the claim] Consider the rescaled manifolds $(M,r_i^{-2}g,x_a)$, where $r_i>0$ is an arbitrary sequence going to zero. Then it is easy to see that $(M,r_i^{-2}g,x_a)$ smoothly converges to the Euclidean space $\R^3$, with $\Gamma$ converging to a straight line, which we may assume to be the $z$-axis after a change of coordinates. So the $SO(2)$-isometries on $(M,r_i^{-2}g,x_a)$ converge to the rotation around the $z$-axis. Since $|G\,dA|$ is scaling invariant, this convergence implies $|G\,dA|(r_i,0,\theta)\ri0$ as $i\rii$, which proves $\lim_{r\ri0}|G\,dA|(r,0,\theta)=0$. So $\lim_{r\ri0}|G^{3/2}dA|(r,0,\theta)=0$. Then by \eqref{e: der of F is zero}, we get $G^{3/2}dA(r,0,\theta)=0$ and $dA(r,0,\theta)=0$. So $dA=0$ on $\Sigma\setminus\{x_a,\overline{x}_a\}$. Since we may choose $\Sigma$ to be $f^{-1}(a)$ for any $a>0$, the same argument implies $dA=0$ everywhere on $M\setminus\Gamma$, which proves the claim. 
\end{proof} Now since $dA=0$ and $A_t=0$, we have $\frac{\partial A_r}{\partial t}=\frac{\partial A_t}{\partial r}=0$. Since $A_r(r,0,\theta)=0$, this implies $A_r(r,t,\theta)=0$ for all $t\in\R$. So $A=0$, and hence the metric can be written in the following warped-product form in the coordinates $(r,t,\theta)$: \begin{equation*} g=\sum_{\alpha,\beta=r,t}g_{\alpha\beta}dx_{\alpha}dx_{\beta}+Gd\theta^2. \end{equation*} \end{proof} Using the $O(2)$-symmetry and the $\mathbb{Z}_2$-symmetry at infinity, we can follow the same line of argument as in \cite[Theorem 1.5]{Lai2020_flying_wing} to prove Theorem \ref{t': quantitative relation}. \bibliography{bib} \bibliographystyle{abbrv} \end{document}
2205.01065v2
http://arxiv.org/abs/2205.01065v2
On the topology of Gaussian random zero sets
\documentclass[12pt]{amsart} \usepackage{epsfig,color} \usepackage{blindtext} \usepackage{hyperref} \usepackage{graphicx} \usepackage{enumitem} \usepackage{url} \usepackage{amssymb} \usepackage{graphicx,import} \usepackage{comment} \usepackage{ulem} \usepackage{outlines} \usepackage{esint} \usepackage{verbatim} \usepackage{mathrsfs} \usepackage{mathtools} \usepackage{amsmath} \usepackage{dsfont} \setcounter{tocdepth}{3} \makeatletter \def\l@subsection{\@tocline{2}{0pt}{2.8pc}{5pc}{}} \def\l@subsubsection{\@tocline{2}{0pt}{5.6pc}{5pc}{}} \headheight=6.15pt \textheight=8in \textwidth=6.5in \oddsidemargin=0in \evensidemargin=0in \topmargin=0in \setcounter{section}{0} \theoremstyle{definition} \newtheorem{theorem}{Theorem}[section] \newtheorem{definition}[theorem]{Definition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{example}[theorem]{Example} \newtheorem{question}[theorem]{Question} \newtheorem{fact}[theorem]{Fact} \newtheorem*{acknowledgements}{Acknowledgements} \newtheorem{idea}[theorem]{Idea} \newtheorem{gap}[theorem]{Gap} \newtheorem{property}[theorem]{Property} \newtheorem*{remark*}{Remark} \numberwithin{equation}{section} \newcommand{\mf}{\mathbf} \newcommand{\mc}{\mathcal} \newcommand{\mb}{\mathbb} \newcommand{\mr}{\mathrm} \newcommand{\mfk}{\mathfrak} \newcommand{\spt}{\mathrm{spt}} \newcommand{\dist}{\mathrm{dist}} \newcommand{\bH}{\mathbf{H}} \newcommand{\pa}{\partial} \DeclareMathOperator{\vol}{Vol} \DeclareMathOperator{\II}{II} \title[On the topology of Gaussian random zero sets]{On the topology of Gaussian random zero sets} \date{\today} \author{Zhengjiang Lin} \address{Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY10012, USA} \email{malin at nyu.edu} \begin{document} \begin{abstract} We study the asymptotic laws for the number, 
Betti numbers, and isotopy classes of connected components of zero sets of real Gaussian random fields, where the random zero sets almost surely consist of submanifolds of codimension greater than or equal to one. Our results include `random knots' as a special case. Our work is closely related to a series of questions posed by Berry in~\cite{berry2001knotted,berry2000phase}; in particular, our results apply to the ensembles of random knots that appear in the complex arithmetic random waves (Example~\ref{Example: complex random wave}), the Bargmann-Fock model (Example~\ref{Example: Bargmann-Fock}), Black-Body radiation (Example~\ref{Example: Black-Body radiation}), and Berry's monochromatic random waves. Our proofs combine techniques introduced for level sets of random scalar-valued functions with methods from differential geometry and differential topology. \end{abstract} \maketitle \tableofcontents \pagebreak \section{Introduction} \subsection{Background} We consider a Gaussian random field $F: \mathbb{R}^n \times (\Omega, \mfk{S},\mc{P}) \to \mathbb{R}^m$ with $n > m$, where $(\Omega, \mfk{S},\mc{P})$ is a probability space. That is, for every $k \in \mb{Z}_+$ and every given $x_1,x_2,\dots,x_k \in \mb{R}^n$, $(F(x_1, \cdot), F(x_2, \cdot), \dots, F(x_k, \cdot) )$ is a joint Gaussian random vector; and for every fixed $\omega \in \Omega$, $F(\cdot, \omega)$ is a map (vector field) from $\mb{R}^n$ to $\mb{R}^m$. See Example~\ref{Example: Bargmann-Fock} and Example~\ref{Example: Black-Body radiation} for Gaussian fields defined on $\mb{R}^n$, and also Example~\ref{Example: complex random wave} for Gaussian random fields defined on a manifold. For this map $F$, one can define its random zero set as $Z(F) \equiv \{x : F(x) = 0 \in \mathbb{R}^m \}$. After some regularity assumptions on $F$, $Z(F)$ a.s.~consists of many submanifolds, either closed or open, of codimension $m$, as proved later in Lemma~\ref{L: Quantitative Bulinskaya}. 
For example, when $n=3,m=2$, $Z(F)$ possibly contains many knots and links, objects studied in a deep branch of topology. On the other hand, `random knots' have appeared in many applied sciences, such as quantum physics~\cite{berry2001knotted,berry2000phase}, cosmic strings~\cite{vilenkin1994cosmic}, classical fluid flows~\cite{kleckner2013creation}, and superfluid flows~\cite{kleckner2016superfluid}. In particular, in~\cite{taylor2016vortex}, numerical experiments showed that random knots are prevalent in many physical models, where those knots are realized by zeros of random complex scalar wavefunctions. Random knots are also of intrinsic mathematical interest. The random zero set $Z(F)$ actually gives a probability distribution on knots. One may ask: how many trefoil knots are in $Z(F) \cap B_1 \subset \mathbb{R}^3$? Because $Z(F)$ is a random set, the number of trefoil knots is also a random variable. Is this number always a nontrivial (nonzero) random variable? What is the expectation, and what are the higher moments, of such a random variable? We can ask similar questions about other types of knots and obtain a random empirical distribution on all types of knots. Among all types of knots, which are the most typical? We study similar questions for more general $n,m$ and for $F$ defined on more general topological spaces like manifolds. The zero sets of random scalar-valued functions have been considered by many authors; a selective list of mathematical work on this topic includes~\cite{NS09, NS16, SW19, S16, W21}. In considering the zero sets of vector-valued functions, we must deal with the fact that the geometry and topology of submanifolds of codimension greater than one are much richer and more complex than in the codimension-one setting.
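Before the main results, the following short sketch (ours, purely illustrative; the deterministic stand-in $F(x,y,z) = (x^2 + y^2 - 1,\, z)$ is not one of the random models above) exhibits the simplest instance of such a zero set: $Z(F)$ is the unit circle, an unknot of codimension $2$ in $\mathbb{R}^3$, and $\nabla F$ has full rank along it.

```python
import numpy as np

# Illustrative field F: R^3 -> R^2 with Z(F) = unit circle in the z = 0 plane.
def F(p):
    x, y, z = p
    return np.array([x**2 + y**2 - 1.0, z])

def jacobian(p):
    x, y, z = p
    # Rows are the gradients of the two components f_1, f_2.
    return np.array([[2 * x, 2 * y, 0.0],
                     [0.0, 0.0, 1.0]])

# Sample points on the circle and verify F = 0 and rank(grad F) = m = 2,
# i.e., the zero set is a regular codimension-2 submanifold (an unknot).
for t in np.linspace(0.0, 2 * np.pi, 50):
    p = np.array([np.cos(t), np.sin(t), 0.0])
    assert np.allclose(F(p), 0.0)
    assert np.linalg.matrix_rank(jacobian(p)) == 2
```

A random field $F$ of the kind studied below plays the same role, except that the components of $Z(F)$, and in particular their knot types, are random.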
\subsection{Main Results}\label{Subsection: main results} \subsubsection{Stationary Gaussian Fields on Euclidean Spaces} We consider a $C^{3-}$-smooth Gaussian field $F: \mathbb{R}^n \times (\Omega, \mfk{S},\mc{P}) \to \mathbb{R}^{m}$, $m \in(0,n)$. Here, $C^{3-} \equiv \cap_{0 < \beta < 1} C^{2+ \beta}$. Write $F=(f_1, f_2,\dots, f_m)$. If the two-point covariance matrix $K(x,y)$ with elements $ K_{ij}(x,y) \equiv \mc{E}(f_i(x)f_j(y))$ is $C^3$-smooth, then $F$ is a.s.~$C^{3-}$-smooth. Here, we use $\mathcal{E}(\cdot)$ to denote expectations. On the other hand, Kolmogorov's theorem says that this $F$ is uniquely determined by $K$ up to an equivalence. We give two examples first, and readers can see~\cite{dalmao20193} for more. Although the fields $F$ in these examples have i.i.d.~components $\{f_i\}$, we do not need such an assumption in any of our theorems. \begin{example}[Bargmann-Fock model]\label{Example: Bargmann-Fock} Let ${\{f_i\}}_{i=1} ^m$ be i.i.d.~scalar-valued Gaussian random functions defined on $\mb{R}^n$ with the same two-point function $ k(x,y) \equiv \mc{E}(f_1(x)f_1(y)) = e^{-\frac{{||x-y||}^2}{2}}$. Here, $||\cdot||$ is the standard Euclidean norm. Notice that $k(x,y)$ is a function of $x-y$, which we can write as $k(x-y)$. The Fourier transform of $k(z)$ is called the \textit{spectral measure} of $f_1$, which is ${(2\pi)}^{-n/2} e^{-{||z||}^2/2}$. \end{example} \begin{example}[Black-Body radiation]\label{Example: Black-Body radiation} Let $n=3$ and $f_1,f_2$ be i.i.d.~scalar-valued Gaussian random functions defined on $\mb{R}^3$ with the same two-point function $ k(x,y) \equiv \mc{E}(f_1(x)f_1(y)) = \frac{c_1}{{||x-y||}^2} - \frac{c_2 ||x-y|| \cosh{(||x-y||)}}{{\sinh{(||x-y||)}}^2}$. Here $c_1,c_2$ are two fixed constants. Again, $k(x,y) = k(x-y)$. The Fourier transform of $k(z)$ is $\frac{c ||z||}{e^{||z||}-1}$ with another constant $c$.
\end{example} In Example~\ref{Example: Bargmann-Fock} and Example~\ref{Example: Black-Body radiation}, the $k(x,y)$'s are functions of $x-y$. We say that a random field $F$ is \textit{stationary} if its covariance matrix $K(x,y) = K(x-y)$. Recall that $Z(F)$ a.s.~consists of many submanifolds, either closed or open. We use $N(R;F)$ to denote the number of connected components of $Z(F)$ fully contained in the open cube $C_R \equiv {(-R,R)}^n $ of $\mathbb{R}^n$. Similarly, for each $l = 0, 1, \dots, n-m$, we define $\beta_l(R;F)$ as the sum of $l$-th Betti numbers over $\mathbb{R}$ for connected components of $Z(F)$ fully contained in $C_R$, i.e., \begin{equation} \beta_l(R;F) \equiv \sum_{\gamma \subset Z(F) \cap C_R} \beta_l(\gamma), \end{equation} where the summation counts closed components $\gamma$ fully contained in $C_R$ and $\beta_l(\gamma)$ is the $l$-th Betti number over $\mathbb{R}$ of $\gamma$. We introduce $\mc{C}(n-m)$ as the set of $C^1$-isotopy classes of closed $(n-m)$-dimensional manifolds that can be embedded into $\mathbb{R}^n$ so that the embeddings have trivial normal bundles. That is, for any two $(n-m)$-dimensional manifolds $M_1,M_2$ embedded in $\mathbb{R}^n$ with trivial normal bundles, if $M_1$ is $C^1$-isotopic to $M_2$, then they are in the same class $c \in \mc{C}(n-m)$. This trivial normal bundle assumption is necessary in this paper. It is not always true that for any closed $(n-m)$-dimensional submanifold $M \subset \mathbb{R}^n$, there is a $G \in C^1(\mathbb{R}^n, \mathbb{R}^m)$ such that $M$ is a non-degenerate connected component of $Z(G)$, which means that $\nabla G$ is of full rank on $M$. But if the normal bundle of $M$ in $\mathbb{R}^n$ is trivial, then it is possible to find such a $G$. We also notice that $\mc{C}(n-m)$ is a countable set, because any $G$ can be approximated on a compact set by polynomials. Embeddings of $\mb{S}^1$ always have trivial normal bundles in any $\mb{R}^n$.
So, when $n=3,m=2$, $\mc{C}(3-2)$ consists of the classes of all knots. One can also replace $\mc{C}(3-2)$ with all possible links in the following theorems, but for simplicity, we keep this definition. In the following theorems, we let $N(R;F,c)$ be the number of connected components of $Z(F)$ of class $c \in \mc{C}(n-m)$ which are fully contained in $C_R$. Example~\ref{Example: Bargmann-Fock}, Example~\ref{Example: Black-Body radiation}, and Berry's monochromatic random waves model all satisfy the assumptions in the following theorems. \begin{theorem}\label{Main Results: Local} \textit{ Assume that $F:\mathbb{R}^n \to \mathbb{R}^m$, $m \in (0,n)$, is a $C^{3-}$-smooth centered stationary Gaussian random field such that the joint distribution of $(F(0),\nabla F(0))$ is non-degenerate and mean-zero. \begin{itemize} \item[(1)] For each $c \in \mc{C}(n-m)$, there exists a number $\nu_{F,c} \geq 0$ so that \begin{equation} \mathcal{E}(N(R;F,c)) = \nu_{F,c} \cdot |C_R| + o(R^n), \end{equation} as $R \to +\infty$. Here $|C_R|$ is the volume of $C_R$. \item[(2)] If the translation action of $\mathbb{R}^n$ is ergodic, then \begin{equation} \lim_{R \to \infty}\frac{N(R;F,c)}{|C_R|} = \nu_{F,c} \ \text{almost surely} \quad \text{and} \quad \lim_{R \to \infty}\mathcal{E} \bigg| \frac{N(R;F,c)}{|C_R|} - \nu_{F,c} \bigg| = 0 . \end{equation} \end{itemize} (1) and (2) also hold true if one replaces $N(R;F,c)$ with $N(R;F)$ and replaces $\nu_{F,c}$ with $\nu_F \geq 0$. \begin{itemize} \item[(3)] \begin{equation} \nu_F = \sum_{c \in \mc{C}(n-m)} \nu_{F,c} . \end{equation} \end{itemize} } \end{theorem} See more details in Section~\ref{Section: Stationary Random Field} about the translation action of $\mathbb{R}^n$ and the definition of ergodicity. A natural question is when $\nu_F$ and $\nu_{F,c}$'s are positive. 
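For intuition on the component count $N(R;F)$ in Theorem~\ref{Main Results: Local}, the following numerical sketch (ours; it replaces the Gaussian field by a deterministic function with a known zero set, and uses a grid threshold as a crude proxy for the zero set) counts connected components with a flood fill. The function below vanishes on two disjoint circles, so the count is $2$.

```python
import numpy as np
from collections import deque

# Illustrative function vanishing on two disjoint unit circles,
# centered at (0, 0) and (4, 0); the analogue of N(R; F) here is 2.
def g(x, y):
    return (x**2 + y**2 - 1.0) * ((x - 4.0)**2 + y**2 - 1.0)

h, eps = 0.04, 4.0  # grid spacing and threshold (tuned so tubes stay separate)
xs = np.arange(-1.6, 5.6 + h, h)
ys = np.arange(-1.6, 1.6 + h, h)
X, Y = np.meshgrid(xs, ys, indexing="ij")
near_zero = np.abs(g(X, Y)) < eps   # thin tube around each circle

# Flood fill (4-neighbor BFS) over marked cells counts connected components.
seen = np.zeros_like(near_zero, dtype=bool)
components = 0
for i in range(near_zero.shape[0]):
    for j in range(near_zero.shape[1]):
        if near_zero[i, j] and not seen[i, j]:
            components += 1
            queue = deque([(i, j)])
            seen[i, j] = True
            while queue:
                a, b = queue.popleft()
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if (0 <= na < near_zero.shape[0]
                            and 0 <= nb < near_zero.shape[1]
                            and near_zero[na, nb] and not seen[na, nb]):
                        seen[na, nb] = True
                        queue.append((na, nb))
print("number of components:", components)
```

For a Gaussian sample of $F$, the same counting scheme applied over larger and larger cubes $C_R$ is what the densities $\nu_F$ and $\nu_{F,c}$ normalize.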
In the special case when $F$ consists of independent ${\{f_i\}}_{i=1} ^m$, if for each $i = 1, \dots, m$, the spectral measure $\rho_i$ associated with $f_i$ is the surface measure on the unit sphere $\mb{S}^{n-1}$ (Berry's monochromatic random waves), or if its support has a nonempty interior, then one can get that $\nu_F >0$ and $\nu_{F,c}>0$ for all $c \in \mc{C}(n-m)$. See also~\cite{NS16,SW19, W21} for the scalar-valued ($m=1$) case. We will also discuss some weaker assumptions in Appendix~\ref{APP: Positivity}. For the Betti numbers $\beta_l(R;F)$, we have the following counterpart. \begin{theorem}\label{Main Results: Local Betti} \textit{ Assume that $F:\mathbb{R}^n \to \mathbb{R}^m$, $m \in (0,n)$, is a $C^{3-}$-smooth centered stationary Gaussian random field such that the joint distribution of $(F(0),\nabla F(0))$ is non-degenerate and mean-zero. Then, for each $l = 0,1, \dots ,n-m$, we have the following results. \begin{itemize} \item[(1)] There exists a number $\nu_{l;F} \geq 0$ so that \begin{equation} \mathcal{E}(\beta_l(R;F)) = \nu_{l;F} \cdot |C_R| + o (R^n), \end{equation} as $R \to +\infty$. \item[(2)] If the translation action of $\mathbb{R}^n$ is ergodic, then \begin{equation} \lim_{R \to \infty}\mathcal{E} \bigg| \frac{\beta_l(R;F)}{|C_R|} - \nu_{l;F} \bigg| = 0 . \end{equation} \end{itemize} } \end{theorem} Under the assumptions we discussed after Theorem~\ref{Main Results: Local} (or under weaker assumptions discussed in Appendix~\ref{APP: Positivity}), these $\nu_{l;F}$ are positive numbers. \subsubsection{Gaussian Fields Ensembles on Manifolds} We turn to some families of Gaussian random fields $\{F_L\}$ defined on $X^n$, an $n$-dimensional closed manifold, where each $F_L$ takes values in $\mb{R}^m$ with $n>m$. The results that we discuss here are closely related to questions raised by Berry at the end of his paper~\cite{berry2001knotted} and in Section 7 of~\cite{berry2000phase}.
We first give an example of a family of Gaussian random fields, called complex arithmetic random waves; see for example~\cite{dalmao2019phase}. This model also appeared in many numerical experiments on random knots; see for example~\cite{taylor2016vortex}. \begin{example}\label{Example: complex random wave} We choose the $n$-dimensional torus $X^n = \mb{T}^n = \mb{R}^n / \mb{Z}^n$. Let $\mc{H}_L$ be the subspace of $L^2(X)$ that consists of trigonometric polynomials of frequency $\lambda \in \mb{Z}^n$ with $ |\lambda|_2 = L$. Here $L$ takes values in the set $\mc{L} \equiv \{L = {(\sum_{i=1} ^n \lambda_i ^2)}^{1/2} : \lambda_1,\lambda_2, \dots,\lambda_n \in \mb{Z} \} $. We can define a complex Gaussian distribution on $\mc{H}_L$ by \begin{equation} F_L(x) = \sum_{\lambda \in \mb{Z}^n, |\lambda|_2 = L} (\xi_{\lambda} + \eta_{\lambda} \cdot i ) e^{(2\pi i \langle \lambda , x \rangle)}, \end{equation} where $\xi_\lambda,\eta_\lambda$ are i.i.d.~standard real Gaussian random variables and $\langle \lambda , x \rangle$ is the standard inner product in $\mb{R}^n$. We call the sequence of complex random fields $\{F_L\}$ the ensemble of complex arithmetic random waves. It is not hard to see that $\mr{Re}( F_L(x))$ and $\mr{Im}( F_L(x))$ are two independent real Gaussian random functions with the same distribution. Hence, $Z(F_L) \equiv \{ x \in X : F_L(x) = 0 \}$ a.s.~consists of submanifolds of codimension $2$. In particular, when $n=3$, it is the random knot model studied in~\cite{taylor2016vortex} and is closely related to Berry's work~\cite{berry2001knotted, berry2000phase}. \end{example} In Section~\ref{Section: Local Double Limit and Global Limit}, we will give another example of $\{F_L\}$ defined on $\mb{S}^{n}$, Example~\ref{Example}, called Kostlan's ensemble, to illustrate more technical details.
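A minimal numerical sketch of Example~\ref{Example: complex random wave} (ours; the concrete choices $n = 3$, $L = 1$, and the fixed seed are assumptions made only for illustration) samples one realization of $F_L$ and verifies that it is well defined on the torus:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

# Assumed concrete choice: n = 3 and L = 1, so the lattice points
# lambda with |lambda|_2 = 1 are the six signed unit vectors.
lams = [np.array(v) for v in
        [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
coeffs = rng.standard_normal(len(lams)) + 1j * rng.standard_normal(len(lams))

def F_L(x):
    # F_L(x) = sum_lambda (xi_lambda + i eta_lambda) e^{2 pi i <lambda, x>}
    return sum(c * np.exp(2j * np.pi * (lam @ x)) for c, lam in zip(coeffs, lams))

x = np.array([0.12, 0.34, 0.56])
# F_L descends to the torus T^3 = R^3 / Z^3: it is Z^3-periodic.
for e in np.eye(3):
    assert np.isclose(F_L(x + e), F_L(x))
```

The zero set of such a realization is where the real and imaginary parts vanish simultaneously, hence (generically) a union of closed curves in $\mb{T}^3$.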
For those $\{F_L\}$ satisfying the technical but natural assumptions in Section~\ref{Section: Local Double Limit and Global Limit}, which include the complex arithmetic random waves and Kostlan's ensemble, we say that $\{F_L\}$ is \textit{tame}. We first state the results for $\{F_L\}$ defined on an open set of $\mb{R}^n$ and then state them on general closed Riemannian manifolds. \begin{theorem}\label{Main Results: Global 1} \textit{ If ${\{F_L\}}_{L \in \mathcal{L}}$ is tame on an open set $U \subset \mathbb{R}^n$, then we have the following: \begin{itemize} \item [(1)] There is a measurable locally bounded function $x \mapsto \bar{\nu}(x)$ on $U$, such that for every sequence of connected component counting measures $n_L$ of $F_L$ and for every $\varphi \in C_{c}(U)$, we have that \begin{equation} \lim_{L \to \infty} \mathcal{E} \bigg[ \bigg | \frac{1}{L^n} \int_U \varphi(x) dn_L(x) - \int_U \varphi(x) \bar{\nu}(x) dx \bigg | \bigg] = 0. \end{equation} \item[(2)] In particular, for a bounded smooth domain $D \Subset U$, \begin{equation} \lim_{L \to \infty} \mathcal{E} \bigg[ \bigg | \frac{N(D;F_L)}{L^n} - \int_D \bar{\nu}(x) dx \bigg | \bigg] = 0, \end{equation} where $N(D;F_L)$ is the number of connected components of $Z(F_L)$ fully contained in $D$. \end{itemize} } \end{theorem} Here, we say that a Borel measure $n_L$ is a connected component counting measure of $F_L$ if $\spt (n_L) \subset Z(F_L)$ and the $n_L$-mass of each component of $Z(F_L)$ is $1$. These $n_L$'s are random. Now, we state a manifold version of Theorem~\ref{Main Results: Global 1}. Suppose that $X$ is an $n$-dimensional $C^3$-manifold without boundary.
\begin{theorem}\label{Main Results: Global 2} \textit{ Assume that ${\{F_L\}}_{L \in \mathcal{L}}$ is tame on $X$. Then we have the following: \begin{itemize} \item [(1)] There is a locally finite Borel non-negative measure $n_\infty$ on $X$ such that for every choice of connected component counting measures $n_L$ of $F_L$ and every $\varphi \in C_{c}(X)$, we have that \begin{equation} \lim_{L \to \infty} \mathcal{E} \bigg[ \bigg | \frac{1}{L^n} \int_X \varphi(x) dn_L(x) - \int_X \varphi(x) dn_\infty(x) \bigg | \bigg] = 0. \end{equation} \item [(2)] In particular, when $X$ is a closed manifold, \begin{equation} \lim_{L \to \infty} \mathcal{E} \bigg[ \bigg | \frac{N(X;F_L)}{L^n} - n_{\infty}(X) \bigg | \bigg] = 0, \end{equation} where $N(X;F_L)$ is the number of connected components of $Z(F_L)$. \end{itemize} } \end{theorem} \begin{remark} $n_{\infty}(\cdot)$ in Theorem~\ref{Main Results: Global 2}, $\bar{\nu}(\cdot)$ in Theorem~\ref{Main Results: Global 1}, and $\bar{\nu}_l(\cdot)$'s in Theorem~\ref{Main Results: Global Betti} later, are strictly positive under assumptions similar to those we discussed for Theorem~\ref{Main Results: Local} and Theorem~\ref{Main Results: Local Betti}. In particular, for the complex arithmetic random waves and Kostlan's ensemble, the corresponding $n_{\infty}(\cdot)$ and $\bar{\nu}_l(\cdot)$'s are strictly positive. \end{remark} \begin{remark}\label{Rmk: Global Limiting Measure} For any $c \in \mc{C}(n-m)$, results like Theorem~\ref{Main Results: Global 1} and Theorem~\ref{Main Results: Global 2} also hold true if one replaces `connected component counting' with `connected component with type $c$ counting'. In particular, for a closed manifold $X$, one can construct an empirical isotopy class counting measure on $\mc{C}(n-m)$ as in~\cite{SW19}.
More precisely, given $N(X;F_L)$ and $N(X;F_L, c)$ (number of connected components of type $c$) as in Theorem~\ref{Main Results: Global 2} and its analogies, we define \begin{equation} \mu_{L} \equiv \frac{1}{N(X; F_L)} \cdot \sum_{c \in \mc{C}(n-m)} N(X; F_L,c) \cdot \delta_c, \end{equation} which is a random probability measure on $\mc{C}(n-m)$. Then, we can also obtain a higher codimension generalization of Theorem 1.1 of~\cite{SW19}. That is, when $n_{\infty}(X) >0$, there is a limiting probability measure (in discrepancy sense) on $\mc{C}(n-m)$ defined by \begin{equation} \mu_{\infty} \equiv \frac{1}{n_{\infty}(X)} \cdot \sum_{c \in \mc{C}(n-m)} n_{\infty,c}(X) \cdot \delta_c, \end{equation} where $n_{\infty,c}$ is the limiting measure associated with the type-$c$ counting measures, such that for any $\epsilon >0$, \begin{equation} \lim_{L \to \infty} \mathcal{P}\big( \sup_{A \subset \mc{C}(n-m)} |\mu_L(A) - \mu_\infty(A)| > \epsilon\big) = 0. \end{equation} We omit the proof of this last result from this paper. The proof is not a direct corollary of~\cite{SW19}, but it is fairly straightforward once one has the tools developed in this paper. \end{remark} Our final result is on asymptotic laws of Betti numbers. For each $l = 0, \dots,n-m$, we use $\beta_l(F_L)$ to denote the sum of the $l$-th Betti numbers of all connected components of $Z(F_L)$ in the given closed $n$-dimensional manifold $X$. \begin{theorem}\label{Main Results: Global Betti} \textit{ Assume that ${\{F_L\}}_{L \in \mathcal{L}}$ is tame on $X$ and $X$ is a closed manifold. Then for each $l = 0, \dots, n-m$, there is a nonnegative measurable and bounded function $\bar{\nu}_l(x)$ on $X$, and a positive constant $C_l$ depending on ${\{F_L{\}}}_{L \in \mathcal{L}}$, such that \begin{equation} \int_{X} \bar{\nu}_l(x) \ d x \leq \liminf_{L \to \infty}\frac{\mathcal{E}(\beta_l(F_L))}{L^n} \leq \limsup_{L \to \infty}\frac{\mathcal{E}(\beta_l(F_L))}{L^n} \leq C_l \cdot |X|, \end{equation} where $|X|$ is the volume of $X$.
} \end{theorem} The constants $C_l$ in Theorem~\ref{Main Results: Global Betti} depend on the parameters in our Definition~\ref{D: Axiom 2}. For Theorem~\ref{Main Results: Global Betti}, we do not know whether an actual limit exists for each $l = 1, \dots, n-m-1$. This is because of the existence of giant connected components of $Z(F_L)$, which can appear in many Gaussian ensemble examples. One may see relevant discussion on page 6 of~\cite{W21} and references therein. We also believe that some of these results can be extended to general level sets, i.e., $F^{-1}(z)$ with $z \in \mathbb{R}^m$. But for simplicity, we focus on zero sets in this paper. This introduction has thus far summarized our main results. Concerning our methods: very roughly speaking, they combine techniques from the literature on zero sets of random scalar-valued functions with tools from differential geometry. In the statistics of random zero sets, a widely-used tool is the Kac-Rice theorem, see our Theorem~\ref{Thm: Kac-Rice} for example. The Kac-Rice theorem computes the moments of `integral random variables' of the form \begin{equation} \int_{Z(F) \cap A} h(x, W(x)) \ d \mc{H}^{n-m}(x), \end{equation} where $A \subset \mb{R}^n$ is a Borel set, both $F: \mb{R}^n \to \mb{R}^m$ and $W$ are random fields, $h$ is a function with enough regularity, and $\mc{H}^{n-m}$ is the $(n-m)$-dimensional Hausdorff measure (the volume measure on the embedded submanifolds). On the other hand, statistics like the number, Betti numbers, or isotopy classes, are related to some geometric integrals on embedded submanifolds (see our Theorem~\ref{Thm: Fenchel} and Theorem~\ref{Thm: Chern}, for example). Now the idea is to choose suitable $h$'s with enough regularity which are compatible with those geometric integrals. However, the $h$'s involved in the geometric integrals always have highly singular parts. 
For example, in the simplest case when $m=1$ so that $F$ is a scalar-valued function, Theorem~\ref{Thm: Fenchel} suggests choosing $h$ as the mean curvature $H$ on $Z(F)$ and $W = (\nabla F , \nabla ^2 F)$, where $H$ has the following formula, \begin{equation} H = \frac{\Delta F}{||\nabla F||} - \frac{\nabla^2 F(\nabla F, \nabla F)}{{||\nabla F||}^3}. \end{equation} Here, $\nabla F$ is the standard gradient in $\mb{R}^n$, $\nabla^2 F$ is the standard Hessian, and $\Delta F$ is the standard Laplacian (see Example 1.1.3 in~\cite{mantegazza2011lecture}). When $m >1$, the curvature computations involved in Theorem~\ref{Thm: Fenchel} and Theorem~\ref{Thm: Chern} become more complicated, as shown in our Appendix~\ref{APP: Computation}. We will explain in detail how we estimate these curvature-related geometric integrals in Section~\ref{Section: L^1}. Now, we can close this introduction with a summary of the paper's organization. In Section~\ref{Section: L^1}, we will show $L^1$-estimates for the number and Betti numbers of connected components fully contained in $C_R$, i.e., upper bounds for $\mathcal{E}(N(R;F)) / R^n$ and $\mathcal{E}(\beta_l(R;F)) / R^n$ independent of $R$, which are our Theorem~\ref{Thm: L^1 Bound on Number} and Theorem~\ref{Thm: L^1 Bound on Betti}. We also prove a quantitative version of Bulinskaya's Lemma~\ref{L: Quantitative Bulinskaya}. This part contains core ideas of this paper, so we suggest reading it before the other sections. In Section~\ref{Section: Stationary Random Field}, we will restate Theorem~\ref{Main Results: Local} as Theorem~\ref{Thm: Euclidean Random Field} and prove it. The proof for Theorem~\ref{Main Results: Local Betti} is similar, and we will mention necessary modifications at the end of Section~\ref{Section: Stationary Random Field}.
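As a quick sanity check of the $m=1$ mean curvature formula displayed earlier in this introduction (a sketch we add for illustration; the choice $F(x) = ||x||^2 - 1$ is ours), one can verify that it returns the mean curvature $2$ of the unit sphere in $\mathbb{R}^3$:

```python
import numpy as np

# Illustrative check: for F(x) = |x|^2 - 1 on R^3, Z(F) is the unit sphere,
# whose mean curvature (sum of the two principal curvatures) equals 2.
def grad_F(x):   # gradient of F
    return 2.0 * x

def hess_F(x):   # Hessian of F
    return 2.0 * np.eye(3)

p = np.array([1.0, 0.0, 0.0])            # a point on Z(F)
gp = grad_F(p)
# H = (Delta F)/|grad F| - (Hess F)(grad F, grad F)/|grad F|^3
H = np.trace(hess_F(p)) / np.linalg.norm(gp) \
    - (gp @ hess_F(p) @ gp) / np.linalg.norm(gp)**3
assert np.isclose(H, 2.0)
```

Note the singular factor $||\nabla F||^{-3}$: on a Gaussian sample, $\nabla F$ can be small near $Z(F)$, which is exactly the singular behavior the estimates of Section~\ref{Section: L^1} must control.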
The ergodicity issue of translation actions and the positivity issue of $\nu_{F,c}$ related to Theorem~\ref{Main Results: Local} and Theorem~\ref{Thm: Euclidean Random Field} will be included in Appendix~\ref{APP: Ergodicity} and Appendix~\ref{APP: Positivity}. In Section~\ref{Section: Local Double Limit and Global Limit}, we restate Theorem~\ref{Main Results: Global 1} as Theorem~\ref{Thm: Global Limit} and prove it, which also needs two technical lemmas. One is our quantitative Bulinskaya's Lemma~\ref{L: Quantitative Bulinskaya} for random fields. Another is Lemma~\ref{L: Continuous Stability}, which is a stability lemma for connected components under small perturbations. The proof of Theorem~\ref{Thm: Global Limit} also needs an upper bound for $\mathcal{E}({(N(R;F))}^q) / R^{nq}$ independent of $R$ with some $q = q(n,m) > 1$, which is our Theorem~\ref{Thm: L^q Bound on Number}. The proof of Theorem~\ref{Main Results: Global 2} then follows from Theorem 3 in~\cite{NS16} once Theorem~\ref{Main Results: Global 1} is completed. Finally, we restate Theorem~\ref{Main Results: Global Betti} as Theorem~\ref{Thm: Global Betti} and prove it. In all of these proofs, we will use $|\cdot|$ or $||\cdot||$ to denote norms of vectors or volumes of a set when there is no ambiguity. We only use $||\cdot||_{C^{1+\beta}}$ or $||\cdot||_{C^{2}}$, i.e., $||\cdot||_{\cdot}$ with subindices, to denote norms related to smoothness. We will also use the notations $v=(v_i)$ (or $A=(A_{ij})$) to denote vectors (or matrices) with elements $v_i$ (or $A_{ij}$). When we do computations, we will use, for example, $C(n,m)$, to denote constants depending on $n,m$, but they are not necessarily the same from line to line. \section{Random Fields on Cubes}\label{Section: L^1} We denote $C_R \equiv {(-R,R)}^n $ as open cubes with side lengths $2R$ for $R>0$ in $\mb{R}^n$. 
Let $F : C_{R+1} \to \mb{R}^{m}$, $m\in(0,n)$, be a continuous Gaussian random field, and write $F(x) = (f_1(x), \dots, f_{m}(x))$. Then, the $(x,y)$-covariance kernel (two-point covariance kernel), \begin{equation} K_{i_1 i_2}(x,y) = \mc{E}\big(f_{i_1}(x)f_{i_2}(y)\big), \ i_1,i_2 = 1,2,\dots, m, \end{equation} is an $m \times m$ matrix. \begin{definition}\label{D: Axiom 1} We say that $F:C_{R+1} \to \mb{R}^m$ satisfies \textit{ $(R; M,k_1)$-assumptions} if on $C_{R+1}$, \begin{itemize} \item[$(A1)$] $F$ is centered, i.e., $\mc{E}(f_i(x)) = 0$ for $i = 1, \dots, m$, and the joint Gaussian vector of $F(x)$ and its Jacobian $\nabla F (x)$, i.e., $(f_1(x), \dots, f_m(x), \pa_1 f_1(x) , \pa_2 f_1(x), \dots, \pa_n f_m(x) )$, is non-degenerate with a uniform lower bound for all $x \in C_{R+1}$, i.e., \begin{equation} \mc{E}\bigg(\bigg| \sum_{i = 1}^m\eta_i \cdot f_i (x) + \sum_{i=1} ^m \sum_{j=1} ^n \xi_{ij} \cdot \pa_j f_i (x)\bigg|^2 \bigg) \geq k_1 >0, \end{equation} for all $x \in C_{R+1}$, and for any $\eta = (\eta_i) \in \mb{R}^m$, $\xi = (\xi_{ij}) \in \mb{R}^{nm}$, with $||\eta||^2 + ||\xi||^2 = 1$. \item[$(A2)$] $F$ is almost surely $C^{3-} \equiv \cap_{0 < \beta < 1} C^{2+ \beta}$ in $C_{R+1}$: \begin{equation} \max_{0\leq |\alpha| \leq 3} \sup_{x \in C_{R+1}} \Big| D_x ^{\alpha}D_y ^{\alpha}K_{i_1 i_2}(x,y) \big|_{y=x} \Big| \leq M, \ i_1,i_2 = 1,2,\dots, m. \end{equation} \noindent Furthermore, we say that $F$ satisfies \textit{ $(R; M,k_1, k_2)$-assumptions} if on $C_{R+1}$, $F$ also satisfies the following two-point nondegeneracy assumption $(A3)$ for some $k_2 >0$.
\item[$(A3)$] For every pair of distinct points $x,y \in C_{R}$, the Gaussian vector $(F(x),F(y))$ has a nondegenerate distribution with the following restrictions: \begin{equation}\label{E: Non-Degenerate Joint Distribution} p_{(F(x),F(y))}(0,0) \leq \begin{cases} \frac{k_2}{||x-y||^m} ,& \ \text{if } ||x-y|| \leq 1, \\ & \\ k_2 ,& \ \text{if } ||x-y|| >1, \end{cases} \end{equation} where $p_{(F(x),F(y))}(0,0)$ is the density of the joint distribution of $(F(x), F(y))$ at $(0,0)$. \end{itemize} \end{definition} \begin{remark} The condition $(A3)$, and the later condition $(B4)$ in Definition~\ref{D: Axiom 3}, will only be used when we prove Theorem~\ref{Thm: L^q Bound on Number}, Theorem~\ref{Thm: Global Limit}, and Theorem~\ref{Thm: Global Betti} in Section~\ref{Section: Local Double Limit and Global Limit}. The upper bound $\frac{k_2}{||x-y||^m}$ is natural when $x$ and $y$ are close to each other, and it actually follows from condition $(A1)$. In our Appendix~\ref{APP: 2 Point}, we will explain this and show that $(A1)$ and $(A2)$ would imply (\ref{E: Non-Degenerate Joint Distribution}) locally. \end{remark} Our first main result in Section~\ref{Section: L^1} is the following theorem. \begin{theorem}\label{Thm: L^1 Bound on Number} \textit{ Assume that $F : C_{R+1} \to \mb{R}^{m}$ satisfies \textit{ $(R; M,k_1)$-assumptions} on $C_{R+1}$. Then, there is a constant $D_1 = D_1(n,m, M,k_1) >0$, such that \begin{equation} \mc{E}(N(R;F)) \leq D_1 \cdot |C_R|. \end{equation} } \end{theorem} Here, $|C_R|$ is the volume of $C_R$. Notice that Theorem~\ref{Thm: L^1 Bound on Number} also holds true for any $r \in (0,R)$ with the same $D_1$. We need two theorems before we prove Theorem~\ref{Thm: L^1 Bound on Number}. The first is the following Kac-Rice theorem; see for example Remark 10 in Section 2 of~\cite{CH20} and Theorem 6.10 in Chapter 6 of~\cite{AW09}.
\begin{theorem}[Kac-Rice Theorem with Integrand]\label{Thm: Kac-Rice} \textit{ Let $W: C_{R+1} \to \mb{R}^{n_1}$ be a continuous Gaussian random field such that $(F,W)$ is also a Gaussian field on $C_{R+1}$. Assume that $h = h(x,w): \mb{R}^n \times \mb{R}^{n_1} \to \mb{R}$ is a positive continuous bounded function. Under some mild assumptions on $F$, the $s$-th moment of the integral random variable \begin{equation} \int_{ \{F = 0\} \cap B} h(u,W(u)) d\mc{H}^{n-m}(u) \end{equation} is given by the following formula: \begin{equation}\label{E: Kac-Rice} \begin{split} & \quad \mc{E} \bigg[{\bigg(\int_{ \{F = 0\} \cap B} h(u,W(u)) d\mc{H}^{n-m}(u) \bigg)}^s \bigg] \\ &= \int_{B^s} J_{s,F}(h;u_1 , \dots , u_s) p_{(F(u_1), \dots, F(u_s))} (0) \ du_1 \dots du_{s}, \end{split} \end{equation} for every Borel measurable set $B \subset C_{R+1}$, where \begin{equation} \begin{split} & \quad J_{s,F}(h;u_1, \dots , u_s) \\ &= \mc{E} \bigg[ \prod_{t=1} ^{s} h(u_t, W(u_t)) \sqrt{\det[{(\nabla F (u_t))}^T (\nabla F(u_t) )]} \bigg \vert F(u_t) = 0, t = 1, \dots, s \bigg], \end{split} \end{equation} and $p_{(F(u_1), \dots ,F(u_s))}(0)$ is the density of ${(F(u_t))}_{t=1}^s$ at $(0, \dots,0)$, ${(\nabla F )}^T (\nabla F)$ is the $m \times m$ matrix ${(\nabla f_{i_1} \cdot \nabla f_{i_2})}_{m \times m}$, and $\mc{H}^{n-m}$ is the $(n-m)$-dimensional Hausdorff measure, i.e., surface measure for submanifolds of $\mb{R}^n$. } \end{theorem} In Theorem~\ref{Thm: Kac-Rice}, for $s = 1$, we only need to assume that $F$ satisfies the \textit{ $(R; M,k_1)$-assumptions}, while for $s = 2$, we need to assume that $F$ satisfies the \textit{ $(R; M,k_1,k_2)$-assumptions}. Higher moments can also be computed using the same formula as (\ref{E: Kac-Rice}) with some natural added assumptions. See~\cite{AW09,CH20} again. Our second theorem links the number of connected components to a curvature integral over $Z(F)$. We cite Theorem 3 of~\cite{C71}; readers can see more references therein.
We will give more intuition for these geometric inequalities when we state Theorem~\ref{Thm: L^1 Bound on Betti} and Theorem~\ref{Thm: Chern} later. \begin{theorem}[Fenchel-Borsuk-Willmore-Chern-Lashof Inequality]\label{Thm: Fenchel} \textit{ Let $M$ be a connected compact $(n-m)$-dimensional submanifold of $\mb{R}^{n}$ without boundary. Then, \begin{equation}\label{E: Willmore} |\mb{S}^{n-m}| \leq \int_{M}\bigg|\bigg|\frac{\bH}{n-m}\bigg|\bigg|^{n-m} \ d\mc{H}^{n-m}, \end{equation} where $|\mb{S}^{n-m}|$ is the volume of the $(n-m)$-dimensional unit sphere and $\bH$ is the mean curvature vector of $M \subset \mb{R}^n$. Equality in (\ref{E: Willmore}) holds if and only if: when $n-m >1$, $M$ is embedded as a hypersphere in an $(n-m + 1)$-dimensional linear subspace of $\mb{R}^n$; when $n-m=1$, $M$ is embedded as a simple convex closed curve in a $2$-dimensional linear subspace of $\mb{R}^n$. } \end{theorem} If we already know that all components of $Z(F)$ contained in $C_R$ are regular submanifolds, we can apply Theorem~\ref{Thm: Fenchel} and see that there is a positive constant $C = C(m,n)$ such that \begin{equation}\label{E: Bound Components Number} \begin{split} N(R;F) &\leq C \cdot \sum_{M \subset F^{-1}(0) \cap C_R, \text{disjoint} } \int_{M} ||\bH||^{n-m} \ d\mc{H}^{n-m} \\ &\leq C \cdot \int_{F^{-1}(0) \cap C_R} ||\bH||^{n-m} \ d\mc{H}^{n-m} . \end{split} \end{equation} Then, we can use Theorem~\ref{Thm: Kac-Rice} to compute the expectation of the mean curvature integral. Indeed, the regularity follows from Bulinskaya's lemma; see, for example, Proposition 6.12 of~\cite{AW09}. We will also prove a quantitative version of Bulinskaya's lemma in our Lemma~\ref{L: Quantitative Bulinskaya}, which was proved for $m=1$ as Lemma 7 in~\cite{NS16}. Now, we can use (\ref{E: Bound Components Number}) to compute our expectations and prove Theorem~\ref{Thm: L^1 Bound on Number}.
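Before the proof, we record the simplest instance of Theorem~\ref{Thm: Fenchel} as a sanity check (it is not used below): when $n-m=1$, (\ref{E: Willmore}) is Fenchel's theorem that a closed curve $M \subset \mb{R}^n$ has total curvature at least $|\mb{S}^1| = 2\pi$, and a round circle of radius $\rho$ in a plane, for which $||\bH|| = 1/\rho$, attains equality:
\begin{equation}
\int_{M} ||\bH|| \ d\mc{H}^{1} = 2\pi \rho \cdot \frac{1}{\rho} = 2\pi = |\mb{S}^1|.
\end{equation}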
\begin{proof}[Proof of Theorem~\ref{Thm: L^1 Bound on Number}] The general computations for $||\bH||$ are included in Appendix~\ref{APP: Computation} and Theorem~\ref{Thm: APP Curvature}. Let us recall the results and rewrite them to adapt them to our current setting. We can express $||\bH||$ in terms of the derivatives of $F$. If we set $\varphi: M \to \mb{R}^n$ as an embedding of a regular connected component $M$ of $Z(F)$, then we define $ (g_{ij} ) \equiv (\pa_i \varphi \cdot \pa_j \varphi)$, $(g^{ij}) \equiv {(g_{ij} )}^{-1}$, $A = (A_{\alpha \beta}) \equiv (\nabla f_\alpha \cdot \nabla f_\beta) = \big({(\nabla F)}^T (\nabla F)\big)$, and $A^{\alpha \beta} \equiv {(A^{-1})}_{\alpha \beta}$, where $v_1 \cdot v_2$ is the standard inner product for two vectors $v_1, v_2 \in \mb{R}^n$. Then, we have the following inequalities ($||\cdot||$ denotes vector norms or matrix norms), \begin{equation}\label{E: Bound Mean Curvature} \begin{split} ||\bH||^2 &= \sum_{1 \leq \alpha ,\beta \leq m} \sum_{1 \leq i,j,k,l \leq n-m} \ (\nabla^2 f_\alpha) (\pa_i \varphi , \pa_j \varphi) g^{ij} A^{\alpha \beta} (\nabla^2 f_\beta) (\pa_k \varphi , \pa_l \varphi) g^{kl} \\ &\leq C(n,m)\cdot||A^{-1}|| \cdot ||\nabla^2 F || ^2 \\ & = C(n,m)\cdot |\det A |^{-1} \cdot ||\mr{adj} A|| \cdot ||\nabla^2 F ||^2 \\ &\leq C(n,m)\cdot |\det A |^{-1} \cdot ||\nabla F||^{2(m-1)} \cdot ||\nabla^2 F ||^2 , \end{split} \end{equation} where $\mr{adj} A$ is the adjugate matrix associated with $A$. Here, $C(n,m)$ denotes different positive constants that only depend on $n,m$. According to Theorem~\ref{Thm: Kac-Rice}, we let \begin{equation} W = (\nabla F, \nabla^2 F), \end{equation} which satisfies that $(F(u),W(u))$ is Gaussian for every $u \in C_{R+1}$.
According to (\ref{E: Bound Components Number}) and (\ref{E: Bound Mean Curvature}), we set \begin{equation} h(x,W) \equiv h(W) \equiv {(C(n,m) )}^{\frac{(n-m)}{2}}\cdot \big|\det A \big|^{-\frac{(n-m)}{2}} \cdot ||\nabla F||^{(n-m)(m-1)} \cdot ||\nabla^2 F||^{(n-m)}. \end{equation} Here, we regard $W$ as a vector with elements $(\nabla F, \nabla^2 F)$ and we regard $h$ as a function of this vector $W$ and independent of $x$. To be consistent with the boundedness assumption in our Theorem~\ref{Thm: Kac-Rice} (see also Remark 10 in Section 2 of~\cite{CH20} and Theorem 6.10 in~\cite{AW09}), we need to modify $h$ to make it bounded. We denote the closed singular set of $h$ as $\mc{S}$, which is of Hausdorff codimension $1$ in the vector space of $W$ because $\det A$ is a polynomial in the entries of $\nabla F$. So, we can construct a monotone family of nonnegative continuous cut-off functions $\varphi_Q(W)$, such that $\varphi_Q \to 1$ as $Q \to \infty$, and $\varphi_{Q_1}(W) \leq \varphi_{Q_2}(W)$ when $Q_1 < Q_2$, and $\varphi_Q(W) = 0$ when $\dist(W , \mc{S}) < 1/Q$ or $||W|| > Q$. We then set \begin{equation} h_Q(x, W) \equiv h_Q(W) \equiv \varphi_Q(W) h(W), \quad Q>0, \end{equation} which is a monotone family of nonnegative continuous bounded functions.
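For concreteness, one admissible choice of such a cut-off family (a sketch; the construction is not unique, and any family with the stated properties works) is
\begin{equation}
\varphi_Q(W) = \min \big\{1, \max \{0,\ Q \cdot \dist(W, \mc{S}) - 1\} \big\} \cdot \min \big\{1, \max \{0,\ Q - ||W||\} \big\},
\end{equation}
which is continuous, takes values in $[0,1]$, vanishes when $\dist(W,\mc{S}) \leq 1/Q$ or $||W|| \geq Q$, is non-decreasing in $Q$, and tends to $1$ pointwise off $\mc{S}$ as $Q \to \infty$; non-strict monotonicity in $Q$ is all that the monotone convergence argument requires.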
We can then see that, by the monotone convergence theorem, \begin{equation}\label{E: Kac-Rice in Curvature} \begin{split} &\mc{E} \bigg(\int_{F^{-1}(0) \cap C_R} (||\bH (u) ||^{(n-m)}) \ d\mc{H}^{n-m}(u) \bigg ) \leq \mc{E} \bigg( \int_{F^{-1}(0) \cap C_R} h(u,W(u))\ d\mc{H}^{n-m}(u) \bigg) \\ &= \mc{E} \bigg( \lim_{Q \to \infty} \int_{F^{-1}(0) \cap C_R} h_Q(u,W(u))\ d\mc{H}^{n-m}(u) \bigg) \\ &= \lim_{Q \to \infty} \mc{E} \bigg( \int_{F^{-1}(0) \cap C_R} h_Q(u,W(u))\ d\mc{H}^{n-m}(u) \bigg) \\ &= \lim_{Q \to \infty} \int_{C_R} J_{1,F}(h_Q;u ) \cdot p_{F(u)} (0) \ du \\ &= \lim_{Q \to \infty} \int_{C_R} \mc{E} \big[h_Q(u,W(u)) \cdot | \det A |^{1/2} \ \big| \ F(u) = 0\big] \cdot p_{F(u)} (0) \ du \\ &= \int_{C_R} \mc{E} \big[h(u,W(u)) \cdot | \det A |^{1/2} \ \big| \ F(u) = 0\big] \cdot p_{F(u)} (0) \ du . \end{split} \end{equation} Since $F$ satisfies $(A1)$ in Definition~\ref{D: Axiom 1}, $p_{F(u)}(0) \leq C(n,m,k_1)$ for any $u \in C_R$. To estimate the conditional expectation, we first notice that there are terms including $\nabla F$ and $\nabla^2 F$, which we are going to separate using Young's inequality $ab \leq \frac{k}{k+1} a^{\frac{k+1}{k}} + \frac{1}{k+1} b^{k+1} \leq a^{\frac{k+1}{k}} + b^{k+1}$ for some positive integer $k = k(m,n)$ to be determined later. The conditional expectation term in (\ref{E: Kac-Rice in Curvature}) then becomes \begin{equation} \begin{split} & \ \quad \mc{E}\big[| \det A (u)|^{\frac{1-(n-m) }{2}} \cdot ||\nabla F(u) ||^{(n-m)(m-1)} \cdot ||\nabla^2 F(u) ||^{(n-m)} \ \big| \ F(u) = 0\big] \\ &\leq \mc{E}\bigg[ {\bigg(| \det A (u)|^{\frac{1-(n-m) }{2}} \cdot ||\nabla F(u) ||^{(n-m)(m-1)} \bigg)}^{\frac{k+1}{k}} + ||\nabla^2 F(u) ||^{(n-m)(k+1)} \ \bigg| \ F(u) = 0\bigg ].
\end{split} \end{equation} We now need to estimate \begin{equation}\label{E: Determinant Integral in Curvature} \mc{E}\bigg[ {\bigg(| \det A (u)|^{\frac{1-(n-m)}{2}} \cdot ||\nabla F(u) ||^{(n-m)(m-1)} \bigg)}^{\frac{k+1}{k}} \ \bigg| \ F(u) = 0 \bigg ] \end{equation} and \begin{equation}\label{E: Hessian Term in Curvature} \mc{E}\bigg[ ||\nabla^2 F(u) ||^{(n-m)(k+1)} \ \bigg| \ F(u) = 0\bigg ] \end{equation} separately. We estimate (\ref{E: Hessian Term in Curvature}) first. For each $u \in C_{R}$ and each $\pa^2 _{j_1j_2} f_i (u)$, one needs to find $a^{t}_{j_1 j_2 , i}(u)$ such that for each $s = 1, \dots, m$, \begin{equation}\label{E: Independence for Conditional Expectation} \mc{E}\big[ \big(\pa^2 _{j_1j_2}f_i(u) + \sum_{t=1}^{m}a^t _{j_1j_2,i} f_t(u) \big) \cdot f_s(u)\big] = 0. \end{equation} Since $F$ satisfies $(A1)$ in Definition~\ref{D: Axiom 1}, the matrix ${(\mc{E}(f_t(u) \cdot f_s(u)))}_{m \times m}$ is always invertible with determinant uniformly bounded from below for all $u \in C_R$. Hence, those $a^t _{j_1j_2,i}$ are solvable and there is a constant $C = C(n,m,M,k_1)$ such that \begin{equation} |a^t _{j_1j_2,i}| \leq C. \end{equation} So, there is another constant $C =C(n,m,M,k_1, k) = C(n,m,M,k_1)$ (since $k$ only depends on $n,m$ and will be determined later) such that \begin{equation} \begin{split} & \quad \mc{E}\big[ |\pa^2 _{j_1j_2} f_i(u) |^{(n-m)(k+1)} \ \big| \ F(u) = 0\big ] \\ & = \mc{E}\big[ |\pa^2 _{j_1j_2} f_i(u) + \sum_{t=1}^{m}a^t _{j_1j_2,i} f_t(u)|^{(n-m)(k+1)} \big ] \leq C. \end{split} \end{equation} The estimate for (\ref{E: Determinant Integral in Curvature}) requires more ingredients. First, notice that $(F,\nabla F)$ satisfies $(A1)$ in Definition~\ref{D: Axiom 1}. Then, for each $u \in C_{R}$ and each $\pa_j f_i(u)$, we need to find $a^t _{j,i}(u)$ such that for each $s = 1, \dots,m$, \begin{equation} \mc{E}\big[ \big(\pa _{j}f_i(u) + \sum_{t=1}^{m}a^t _{j,i}(u) f_t(u) \big) \cdot f_s(u)\big] = 0.
\end{equation} By the same reasoning as for the $a^t _{j_1 j_2,i}$ in (\ref{E: Independence for Conditional Expectation}), the $a^t _{j,i}(u)$ are also solvable and there is a constant $C = C(n,m,M,k_1)$ such that \begin{equation} |a^t _{j,i}| \leq C. \end{equation} Next, we replace each $\pa_j f_i(u)$ in (\ref{E: Determinant Integral in Curvature}) with ${(\pa_j f_i)}^\# \equiv \pa _{j}f_i(u) + \sum_{t=1}^{m}a^t _{j,i}(u) f_t(u)$, and define ${(\nabla F (u))}^\#$ by replacing the elements $\pa_j f_i$ of $\nabla F$ with ${(\pa_j f_i)}^\#$. Since the joint distribution of $(F,\nabla F)$ is non-degenerate, the Gaussian vector ${(\nabla F (u))}^\#$ is also non-degenerate. We can divide the non-degenerate $nm$-dimensional Gaussian vector ${(\nabla F (u))}^\#$ into $m$ $n$-dimensional vectors: \begin{equation} {(\nabla F (u))}^\# = (v_1(u) , v_2(u) ,\dots, v_m(u)), \end{equation} where for each $t = 1, \dots ,m$, \begin{equation} v_t(u) ={( {(\nabla f_t (u))}^\# )}^T= ({(\pa_1 f_t(u))}^\#, {(\pa_2 f_t(u))}^\# , \dots , {(\pa_n f_t(u))}^\# ). \end{equation} With these notations, we also set ${(A(u))}^\# = {(v_{t_1} \cdot v_{t_2})}_{m \times m}$. Since each ${(\pa_j f_i)}^\#(u)$ is uncorrelated with, and hence independent of, the Gaussian vector $F(u)$, and since $\pa_j f_i(u) = {(\pa_j f_i)}^\#(u)$ on the event $\{F(u) = 0\}$, we see that (\ref{E: Determinant Integral in Curvature}) equals \begin{equation}\label{E: Determinant Integral in Curvature, No Condition} \mc{E}\bigg[ {\bigg({| \det {(A (u))}^\#|}^{\frac{1-(n-m)}{2}} \cdot {||{(\nabla F(u))}^\# ||}^{(n-m)(m-1)} \bigg)}^{\frac{k+1}{k}} \bigg ].
\end{equation} Notice that the determinant of the covariance kernel of ${(\nabla F(u))}^\#$ is bounded from below by a constant $C = C(n,m,M,k_1)$, by $(A1)$ and $(A2)$ in Definition~\ref{D: Axiom 1}; hence (\ref{E: Determinant Integral in Curvature, No Condition}) reduces to estimating the following integral: \begin{equation}\label{E: Determinant Integral Origin} \begin{split} C(n,m,M, k_1) \bigg(\int_{{(\mb{R}^n)}^{\otimes m}} & {|\det(v_{t_1} \cdot v_{t_2})|}^{\alpha} \cdot {|{|v_1|} ^2 + \cdots + {|v_m|}^2|}^\beta \\ &\cdot e^{-c(n,m,M,k_1)(|v_1|^2 + \cdots + |v_m|^2)}\ dv_1 \cdots d v_m \bigg). \end{split} \end{equation} Here, $\alpha = \frac{k+1}{k} \cdot \frac{1- (n-m) }{2}$, which is very close to but less than $- \frac{n-m-1}{2}$ since $k$ will be chosen to be large enough, and $\beta = \frac{k+1}{2k} \cdot (n-m)(m-1) $, and we use $|\cdot|$ to denote vector norms temporarily. Also, $c(n,m,M,k_1)$ is a positive constant related to the covariance kernel for ${(\nabla F(u))}^\#$. We can simplify the integral in (\ref{E: Determinant Integral Origin}) further and then get \begin{equation}\label{E: Determinant Integral Spherical Decomposition} \begin{split} & \quad \bigg(\int_{{(\mb{S}^{n-1})}^{\otimes m}} |\det(w_{t_1} \cdot w_{t_2})| ^\alpha \ d\sigma(w_1) \cdots d\sigma(w_m) \bigg) \\ &\cdot \bigg(\int_{{(\mb{R}_+)}^{\otimes m}} |r_1|^{2\alpha+n-1} \cdots |r_m|^{2\alpha+n-1}||r_1| ^2 + \cdots + |r_m|^2|^\beta \\ & \quad \cdot e^{-c(n,m,M,k_1)(|r_1|^2 + \cdots + |r_m|^2)}\ dr_1 \cdots d r_m \bigg) , \end{split} \end{equation} where we use spherical coordinates, and $\sigma(\cdot)$ is the surface measure on the unit sphere $\mb{S}^{n-1}$. Notice that $0 < 2\alpha + n-1 < n-1$ since $m \geq 1$ and $\alpha = - \frac{n-m-1}{2} - \epsilon$ for some small $\epsilon = \epsilon(m,n)>0$ to be determined later. Hence, the second factor is bounded by a constant $C = C(n,m,M,k_1)$.
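To see the boundedness of the radial factor explicitly, one can first bound ${||r_1|^2 + \cdots + |r_m|^2|}^\beta \leq C(m,\beta) \cdot \sum_{t=1}^m |r_t|^{2\beta}$, after which the integral factors into one-dimensional Gaussian-weighted integrals of the standard form
\begin{equation}
\int_0^\infty r^{\gamma} e^{-c r^2} \ dr = \frac{1}{2}\, c^{-\frac{\gamma+1}{2}}\, \Gamma \Big( \frac{\gamma+1}{2} \Big), \qquad \gamma > -1, \ c > 0,
\end{equation}
with $\gamma = 2\alpha + n - 1$ or $\gamma = 2\alpha + n - 1 + 2\beta$; both exponents exceed $-1$ since $2\alpha + n - 1 > 0$.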
The rest is to consider the singular integral \begin{equation}\label{E: Determinant Integral 1} \int_{{(\mb{S}^{n-1})}^{\otimes m}} |\det(w_{t_1} \cdot w_{t_2})| ^\alpha \ d\sigma(w_1) \cdots d\sigma(w_m) \end{equation} for some $\alpha <0$. Currently, our $\alpha$ is close to $-\frac{n-m-1}{2}$ but not less than $-\frac{n-m}{2}$. Indeed, we can prove a stronger statement: the sharp threshold for $\alpha $ is actually $-\frac{n-m+1}{2}$. Hence, one can just choose any $\epsilon = \epsilon(m,n) \in (0,1/2)$, so that $k$ is well-chosen in the previous settings and $k$ only depends on $m,n$. This sharp value of $\alpha$ will be used when we prove the quantitative Bulinskaya lemma, our Lemma~\ref{L: Quantitative Bulinskaya}. \begin{lemma}\label{L: Singular Integral} \textit{ Let $n > m >0$. For any $s_0 \in (0,1)$ and $\alpha \in ( - \frac{n-m+1}{2} s_0 , 0 )$, there is a positive constant $C = C(s_0,n,m) $ such that \begin{equation}\label{E: Determinant Integral in Lemma} \int_{{(\mb{S}^{n-1})}^{\otimes m}} |\det(w_{t_1} \cdot w_{t_2})| ^\alpha \ d\sigma(w_1) \cdots d\sigma(w_m) \leq C. \end{equation} } \end{lemma} \begin{proof}[Proof of Lemma~\ref{L: Singular Integral}] If $m = 1$, then there is nothing to prove. For $m \geq 2$, we fix $\{e_1 , \dots, e_n\}$ as the standard orthonormal frame of $\mb{R}^n$. Notice that $| \det ( w_{t_1} \cdot w_{t_2}) | $ is invariant under actions of $O(n)$. Hence, the left hand side of (\ref{E: Determinant Integral in Lemma}) equals \begin{equation}\label{E: Determinant Integral 2} |\mb{S}^{n-1}| \cdot \int_{{(\mb{S}^{n-1})}^{\otimes (m-1)}} |\det(A(e_1))| ^\alpha \ d\sigma(w_2) \cdots d\sigma(w_m), \end{equation} where $A(e_1)$ has the form \begin{equation} A(e_1)= \begin{bmatrix} 1 & e_1 \cdot w_2 & \cdots & e_1 \cdot w_m \\ w_2 \cdot e_1 & 1 &\cdots & w_2 \cdot w_m \\ \vdots & \vdots & \cdots & \vdots \\ w_m \cdot e_1 & w_m \cdot w_2 & \cdots & 1 \end{bmatrix} .
\end{equation} By elementary row operations, we see that $A(e_1)$ has the same determinant value as the matrix \begin{equation} A'(e_1)= \begin{bmatrix} 1 & e_1 \cdot w_2 & \cdots & e_1 \cdot w_m \\ 0 & & & \\ \vdots & &{\bigg[w_{t_1} \cdot w_{t_2} - (e_1 \cdot w_{t_1})(e_1 \cdot w_{t_2})\bigg]} _{ ( m-1) \times (m-1)}& \\ 0 & & & \end{bmatrix} . \end{equation} Then, we can parametrize $\mb{S}^{n-1}$ by $B^{n-1}$, the unit disc of dimension $n-1$. That is, for each $t = 2, \dots,m$, almost all $w_t \in \mb{S}^{n-1}$ can be written in coordinates of the form $(\sqrt{1- |w_t '|^2}, \ w_t ')$ or $(- \sqrt{1- |w_t'|^2}, \ w_t')$ for $w_t ' = (w_t(2), \dots, w_t(n) ) \in B^{n-1}$. So, $w_{t_1} \cdot w_{t_2} - (e_1 \cdot w_{t_1})(e_1 \cdot w_{t_2}) = \langle w_{t_1} ' , w_{t_2} ' \rangle_{\mb{R}^{n-1} }$, where $\langle \cdot , \cdot \rangle_{\mb{R}^{n-1} }$ is the standard inner product in $\mb{R}^{n-1}$. By the area formula, we can rewrite (\ref{E: Determinant Integral 2}) as \begin{equation}\label{E: Induction on Singular Integral} \begin{split} & \quad |\mb{S}^{n-1}| \cdot 2^{m-1 } \cdot \int_{{(B^{n-1})}^{\otimes (m-1)}} |\det(\langle w_{t_1} ' , w_{t_2} ' \rangle_{\mb{R}^{n-1}})| ^\alpha \cdot \prod_{t=2} ^m \frac{1}{\sqrt{1- |w_t '|^2}} \ dw_2 ' \cdots dw_m ' \\ &= |\mb{S}^{n-1}| \cdot 2^{m-1 } \cdot \int_{{(\mb{S}^{n-2})}^{\otimes (m-1)}} |\det(\langle \tilde{w}_{t_1} , \tilde{w}_{t_2} \rangle_{\mb{R}^{n-1}})| ^\alpha \ d\tilde{\sigma}{(\tilde{w}_2)} \cdots d\tilde{\sigma}{(\tilde{w}_m)} \\ &\qquad \cdot \int_{{(0,1)}^{\otimes (m-1)}} \prod_{t=2} ^m \frac{r_t ^{2 \alpha + n -2}}{\sqrt{1- |r_t |^2}} \ dr_2 \cdots dr_m . \end{split} \end{equation} Here $\tilde{\sigma}(\cdot)$ is the surface measure of $\mb{S}^{n-2}$. Notice that since $n > m \geq 2$, we have that $-(n-m+1) s_0 + n-2 \geq -(n-1)s_0 + n-2 = (n-1)(1-s_0) - 1 \geq 2(1-s_0) -1 = 1- 2s_0$. 
Hence, \begin{equation} \int_0 ^1 r^{2\alpha + n-2} \ dr \leq \int_0 ^1 r^{ -(n-m+1) s_0 + n-2} \ dr \leq \frac{1}{2(1-s_0)} \ . \end{equation} If $m=2$, we are done because the determinant in the integral in (\ref{E: Induction on Singular Integral}) is just $1$. If $m >2$, then we run the same process for the pair $n-2, m-1$ with the condition that $ n-2 \geq m -1 \geq 2$. Continuing this process, we finish the proof of Lemma~\ref{L: Singular Integral}. \end{proof} With this Lemma~\ref{L: Singular Integral}, we can bound (\ref{E: Determinant Integral in Curvature}) by a constant $C = C(n,m,M,k_1)>0 $. Combining this constant with the constant $C = C(n,m,M,k_1)>0$ bounding (\ref{E: Hessian Term in Curvature}), we can use (\ref{E: Bound Components Number}) and finish the proof of Theorem~\ref{Thm: L^1 Bound on Number}. \end{proof} \begin{remark}\label{Rmk: Fary-Milnor} The sharp constant in Theorem~\ref{Thm: Fenchel} does not influence our proof of Theorem~\ref{Thm: L^1 Bound on Number}, as we only need an upper bound. On the other hand, one should not expect that we can calculate an accurate expectation for the number of connected components in such a way. For example, when $n=3$ and $m = 2$, $Z(F)$ consists of many closed curves, which we call knots. The F\'{a}ry-Milnor theorem says that if a closed curve is not an unknot, then the integral on the right hand side of (\ref{E: Willmore}) is strictly greater than $4\pi$, while in this case, $|\mb{S}^1| = 2\pi$. By our later Theorem~\ref{Thm: Euclidean Random Field} (also stated as Theorem~\ref{Main Results: Local} previously), all types of knots will appear with positive probabilities in limiting cases under some assumptions, which include the complex arithmetic random waves and Kostlan's ensemble.
So, even an accurate calculation of curvature integrals will not give us the exact value of the limiting expectation, $\nu_F$ in Theorem~\ref{Thm: Euclidean Random Field}, of the number of connected components. \end{remark} Next, for Betti numbers, we can build up upper bounds similar to those in Theorem~\ref{Thm: L^1 Bound on Number}. \begin{theorem}\label{Thm: L^1 Bound on Betti} \textit{ Assume that $F : C_{R+1} \to \mb{R}^{m}$ satisfies the \textit{ $(R; M,k_1)$-assumptions} on $C_{R+1}$. Then, there is a constant $\widetilde{D}_1 = \widetilde{D}_1(n,m, M,k_1) >0$ such that \begin{equation} \sum_{l=0}^{n-m} \mc{E}(\beta_l(R;F)) \leq \widetilde{D}_1 \cdot |C_R|. \end{equation} } \end{theorem} The proof is the same as that of Theorem~\ref{Thm: L^1 Bound on Number} once we replace Theorem~\ref{Thm: Fenchel} with the Chern-Lashof inequalities in~\cite{CL57,CL58}, which connect the topology of a closed manifold to its extrinsic curvatures. On the other hand, there are some other works and techniques in differential geometry that lead to similar topological controls from the study of well-known vanishing theorems. See, for example,~\cite{B88} and references therein, where a De Giorgi-Nash-Moser iteration scheme also leads to a control on Betti numbers. But the forms shown in~\cite{CL57, CL58} are cleaner and well adapted to our applications here. The author also proved them independently in earlier research in differential geometry, but then learned that these results had been thoroughly studied by great geometers many years ago. The following inequality is Theorem 1 in~\cite{CL58}, but we rewrite it slightly to adapt it to our notation.
\begin{theorem}[Chern-Lashof Inequality]\label{Thm: Chern} \textit{ If $M$ is a connected closed $(n-m)$-dimensional submanifold of $\mb{R}^{n}$ without boundary, then \begin{equation}\label{E: Chern-Lashof} \sum_{l=0}^{n-m} \beta_l(M) \leq \frac{1}{|\mb{S}^{n-1}|} \cdot \int_{M} \int_{T_x ^\perp M \cap \{|y| = 1\}} \big| \det \big(\langle \II(x) , y \rangle_{\mb{R}^n}\big) \big| d\sigma^{m-1}(y)\ d\mc{H}^{n-m}(x). \end{equation} } \end{theorem} In (\ref{E: Chern-Lashof}), $|\mb{S}^{n-1}|$ is the volume of the $(n-1)$-dimensional unit sphere, $\beta_l(M) $ is the $l$-th Betti number of $M$ over $\mb{R}$, $\II(x)$ is the second fundamental form of $M \subset \mb{R}^n$ at $x$, and $y$ ranges over unit vectors in $\mb{R}^n$ that are on the fiber $T_x ^\perp M $ of the normal bundle $T ^\perp M $, $\langle \cdot , \cdot \rangle_{\mb{R}^n}$ is the Euclidean inner product in $\mb{R}^n$, and $\sigma^{m-1}$ is the standard surface measure on $\mb{S}^{m-1} \subset T_x ^\perp M \subset \mb{R}^n$. We can see that $\big| \det \big(\langle \II(x) , y \rangle\big) \big| \leq C(n,m) \cdot ||\II(x)||^{n-m}$. To estimate the norm of $\II(x)$, we can also get from Theorem~\ref{Thm: APP Curvature} that, \begin{equation}\label{E: Bound Second Fundamental Form} \begin{split} ||\II||^2 &\leq C(n,m)\cdot||A^{-1}|| \cdot ||\nabla^2 F || ^2 \\ & = C(n,m)\cdot |\det A |^{-1} \cdot ||\mr{adj} A|| \cdot ||\nabla^2 F ||^2 \\ &\leq C(n,m)\cdot |\det A |^{-1} \cdot ||\nabla F||^{2(m-1)} \cdot ||\nabla^2 F ||^2 , \end{split} \end{equation} for some positive constants $C(n,m)$. We can use the same $h$ and $h_Q$ as shown in the proof of Theorem~\ref{Thm: L^1 Bound on Number} and then obtain the proof of Theorem~\ref{Thm: L^1 Bound on Betti}. Before finishing this section, we prove a quantitative Bulinskaya lemma for random fields, whose proof will use the optimal exponent $\alpha$ appearing in Lemma~\ref{L: Singular Integral}.
We will prove it for Gaussian random fields for simplicity, although neither the result nor the proof essentially depends on Gaussianity. This lemma is one of the key factors that will be used when we prove theorems in Section~\ref{Section: Local Double Limit and Global Limit}. Assume that our Gaussian random field $F : C_{R+1} \to \mb{R}^{m}$ satisfies the \textit{ $(R; M,k_1)$-assumptions} on $C_{R+1}$. In fact, $C^{2-}$-smoothness of $F$ is already enough for our proof here. First, let $||\cdot||$ denote the norm of a vector, and define \begin{equation}\label{E: Definition of Smallest Eigenvalue} \lambda(F(x)) \equiv \min_{w \in \mb{S}^{m-1}} ||\nabla (F \cdot w) ||(x). \end{equation} So, ${\lambda(F(x))} ^2$ is the smallest eigenvalue of the matrix $A = {(\nabla f_{i_1} \cdot \nabla f_{i_2})}_{m \times m}$ at $x$. This matrix $A$ appeared in the proof of Theorem~\ref{Thm: L^1 Bound on Number}. Then, we can obtain the following lemma. \begin{lemma}\label{L: Quantitative Bulinskaya} \textit{ Given $\delta >0$, there exists a $\tau = \tau(\delta, R,n,m,M,k_1) \in (0,1/2)$, such that \begin{equation} \mc{P} \big( \min_{x \in \bar{C}_R} \max\{||F(x)||,\lambda(F(x)) \} < \tau \big) < \delta. \end{equation} } \end{lemma} \begin{proof} Denote by $\Omega_\tau$ the event \begin{equation} \{ \exists\ y \in \bar{C}_R, \text{ s.t. } ||F(y)|| < \tau, \ \lambda(F(y)) < \tau \}. \end{equation} We put $W = 1 + ||F||_{C^{1+ \beta}(C_{R+1/2})}$ with $\beta \in (0,1)$ to be specified later. Here $||F||_{C^{1+ \beta}(C_{R+1/2})}$ is the $C^{1 + \beta}$-norm for $F$ on the cube $C_{R+1/2}$. Then, if $\Omega_\tau$ occurs for some $y \in \bar{C}_R$, we fix a corresponding $w_y \in \mb{S}^{m-1}$ with $||\nabla( F \cdot w_y) || (y) < \tau$.
Then, for $x$ in the ball $B(y,\tau)$, \begin{equation} ||F(x)|| \leq \tau + \tau \cdot ||F||_{C^{1+ \beta}(C_{R+1/2})} = W \tau, \end{equation} and \begin{equation} \lambda(F(x)) \leq ||\nabla (F \cdot w_y) ||(x) \leq \tau + \tau^{\beta} \cdot ||F||_{C^{1+ \beta}(C_{R+1/2})} \leq W \tau^{\beta}. \end{equation} Since each element of the matrix ${(\nabla F (x))}^T (\nabla F(x)) = {(\nabla f_{i_1} \cdot \nabla f_{i_2})}_{m \times m}$ has an upper bound $W^2$, the largest eigenvalue of it is also bounded by $C(n,m) \cdot W^2$. We can further get that \begin{equation} \big|\det \big({(\nabla F (x))}^T (\nabla F(x))\big) \big| \leq C(n,m) \cdot {(\lambda(F(x)))}^2 \cdot W^{2(m-1)} \leq C(n,m) \cdot W^{2m} \tau^{2\beta} . \end{equation} Let $\alpha_1 = m + (n-m+1) m$, $\alpha_2 = m + (n-m+1)\beta$. Then, for some $\eta \in (0,1)$ to be chosen, we define a function $\Phi_\eta(x)$ for $x \in C_{R+1}$ and get that \begin{equation} \Phi_{\eta}(x) \equiv ||F(x)||^{-m \eta } \cdot \big|\det \big({(\nabla F (x))}^T (\nabla F(x))\big) \big| ^{-\frac{n-m+1}{2} \eta} \geq C(n,m) \cdot W^{-\alpha_1 \eta } \cdot \tau^{-\alpha_2 \eta}, \end{equation} where the inequality is for $x \in B(y,\tau)$. So, whenever $\Omega_\tau$ happens, we have that \begin{equation} \int_{C_{R+1}} \Phi_{\eta}(x) \ dx \geq \int_{B(y,\tau)} \Phi_{\eta}(x) \ dx \geq C(n,m) \cdot W^{-\alpha_1 \eta } \cdot \tau^{n-\alpha_2 \eta} . \end{equation} Notice that $n- \alpha_2 \eta = n(1- \eta \beta) - m \eta (1-\beta) - \beta \eta$. Hence, we can choose $\beta = \beta(n,m) \in (0,1)$, $\eta = \eta(n,m) \in (0,1)$, which are close to $1$, such that $n - \alpha_2 \eta < - \epsilon$ for some small positive $\epsilon = \epsilon(n,m) \in (0,1/2)$.
Hence, \begin{equation} \begin{split} \mc{P}(\Omega_\tau) &\leq C(n,m) \cdot \tau^{\alpha_2 \eta - n} \cdot \mc{E} \bigg[W^{\alpha_1 \eta} \cdot \int_{C_{R+1}} \Phi_{\eta}(x) \ dx \bigg] \\ &\leq C(n,m) \cdot \tau^{\epsilon} \cdot {\bigg(\mc{E} \big(W^{p' \alpha_1 \eta} \big) \bigg)}^{\frac{1}{p'}} \cdot {\bigg( \mc{E} {\bigg(\int_{C_{R+1}} \Phi_{\eta}(x) \ dx \bigg)}^p\bigg)}^{\frac{1}{p}} \\ &\leq C(n,m) \cdot \tau^{\epsilon} \cdot |C_{R+1}|^{1-\frac{1}{p}} \cdot {\bigg(\mc{E} \big(W^{p' \alpha_1 \eta} \big) \bigg)}^{\frac{1}{p'}} \cdot {\bigg( \int_{C_{R+1}} \mc{E}(\Phi_{\eta}^p(x)) \ dx \bigg)}^{\frac{1}{p}} \end{split} \end{equation} for some $p' = p'(n,m)> 1$, $p = p(n,m) >1$, with $\frac{1}{p'} + \frac{1}{p} = 1$ and $p \cdot \eta <1$. For the term $\mc{E} \big(W^{p' \alpha_1 \eta} \big)$, an upper bound depending on $n,m,M,k_1$ follows from Kolmogorov's theorem; see, for example, Appendix A.9 and Appendix A.11 in~\cite{NS16}. Next, since for each $x \in C_{R+1}$, the joint distribution of $(F(x),\nabla F(x))$ satisfies $(A1)$ in Definition~\ref{D: Axiom 1}, then, similarly to (\ref{E: Determinant Integral Origin}), we can write \begin{equation} \begin{split} \mc{E}(\Phi_{\eta}^p(x)) &= \mc{E} \bigg(||F(x)||^{-m \eta p} \cdot {\big|\det {\big({(\nabla F (x))}^T (\nabla F(x))\big)} \big|} ^{-\frac{n-m+1}{2} \eta p} \bigg) \\ &\leq C(n,m,M, k_1) \cdot \int_{\mb{R}^m} ||v_0||^{-m\eta p} \cdot e^{-c(n,m,M,k_1)(||v_0||^2)} \ dv_0 \\ & \cdot \int_{{(\mb{R}^n)}^{\otimes m}} {|\det(v_{t_1} \cdot v_{t_2})|}^{-\frac{n-m+1}{2}\eta p} \cdot e^{-c(n,m,M,k_1)({||v_1||}^2 + \cdots + {||v_m||}^2)}\ dv_1 \cdots d v_m, \end{split} \end{equation} where we use $v_0 $ to denote the components of $F(x) = (f_1(x) ,\dots, f_m(x))$, and $(v_1, \dots, v_m)$ to denote components of $\nabla F(x) = ({(\nabla f_1 (x))}^T, \dots , {(\nabla f_m (x))}^T)$, and $(v_{t_1} \cdot v_{t_2})$ to denote the matrix with elements $v_{t_1} \cdot v_{t_2}$.
Since $\eta p < 1$ by our choice, we can bound the above two terms by some constants $C = C(n,m,M,k_1) > 0$ following the proof of Theorem~\ref{Thm: L^1 Bound on Number}. See the process proving (\ref{E: Determinant Integral in Curvature}), (\ref{E: Determinant Integral Spherical Decomposition}) and Lemma~\ref{L: Singular Integral}. Since $\epsilon = \epsilon(n,m) >0$, we can choose $\tau$ small enough that the right hand side above is less than $\delta$, which finishes the proof of Lemma~\ref{L: Quantitative Bulinskaya}. \end{proof} \section{Proof of Theorem~\ref{Main Results: Local} and Theorem~\ref{Main Results: Local Betti}}\label{Section: Stationary Random Field} With the theorems established in Section~\ref{Section: L^1}, we are now ready to prove Theorem~\ref{Main Results: Local} and Theorem~\ref{Main Results: Local Betti}. In this section, we assume that $F:\mb{R}^n \to \mb{R}^{m}$, $m \in(0,n)$, is a centered stationary Gaussian random field satisfying $(A1)$ and $(A2)$ in Definition~\ref{D: Axiom 1} at the origin $x=0$, which is equivalent to saying that $F$ satisfies the \textit{ $(R; M,k_1)$-assumptions} at the origin $x = 0$. Also, since $F$ is stationary, these assumptions hold true at any point $x \in \mb{R}^n$. We will first focus on Theorem~\ref{Main Results: Local} and explain the necessary modifications for proving Theorem~\ref{Main Results: Local Betti} after the proof of Theorem~\ref{Main Results: Local}. We first explain some definitions in Theorem~\ref{Main Results: Local} and give some basic settings similar to the $m=1$ cases in~\cite{NS16,SW19,W21}. We denote $\mfk{B}(C^1 (\mb{R}^n ,\mb{R}^{m}))$ as the Borel $\sigma$-algebra generated by open sets in $C^1 (\mb{R}^n ,\mb{R}^{m})$, and denote $\gamma_F = F_* (\mc{P})$ as the pushforward probability measure of $\mc{P}$, where $\mc{P}$ is the probability measure on the background probability space $(\Omega, \mfk{S}, \mc{P})$.
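Concretely, $\gamma_F$ is simply the law of the random function $F$: for a Borel set $\mc{A} \in \mfk{B}(C^1 (\mb{R}^n ,\mb{R}^{m}))$,
\begin{equation}
\gamma_F(\mc{A}) = \mc{P} \big( \{ \omega \in \Omega \ : \ F(\cdot\,;\omega) \in \mc{A} \} \big).
\end{equation}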
By Bulinskaya's lemma (see either Proposition 6.12 of~\cite{AW09} or our Lemma~\ref{L: Quantitative Bulinskaya}), we see that we only need to consider a Borel subset of $C^1(\mb{R}^n,\mb{R}^m)$: \begin{equation} C^1_* (\mb{R}^n ,\mb{R}^{m}) \equiv \{G \in C^1(\mb{R}^n,\mb{R}^m) \ | \ \nabla G \text{ is of full rank on } Z(G)\}, \end{equation} since $\gamma_F(C^1 (\mb{R}^n ,\mb{R}^{m}) \backslash C^1_* (\mb{R}^n ,\mb{R}^{m}) ) = 0$. The action of $\mb{R}^n$ on $(C^1_* (\mb{R}^n ,\mb{R}^{m}), \mfk{B}(C^1_* (\mb{R}^n ,\mb{R}^{m})), \gamma_F )$ is by shifts $\tau_v: G(x) \mapsto G(x + v)$, which is measure preserving since $F$ is stationary. We say that the action of $\mb{R}^n$ is \textit{ ergodic} if every set $A \in \mfk{B}(C^1_* (\mb{R}^n ,\mb{R}^{m}))$ that satisfies $\gamma_F(\tau_v A \,\Delta\, A) =0$ for all $v \in \mb{R}^n$ has either $\gamma_F(A) = 0$ or $\gamma_F(A ) = 1$. Here, $\Delta$ is the symmetric difference. We restate Theorem~\ref{Main Results: Local} in the following way. \begin{theorem}\label{Thm: Euclidean Random Field} \textit{ Assume that $F:\mb{R}^n \to \mb{R}^{m}$, $m \in(0,n)$, is a centered stationary Gaussian random field satisfying the \textit{ $(R; M,k_1)$-assumptions} at the origin $x=0$. \begin{itemize} \item[(1)] There exists a number $\nu = \nu_F \geq 0$ so that \begin{equation}\label{E: Growth Degree Constant} \mc{E}(N(R;F)) = \nu \cdot |C_R| + o_{R \to \infty} (R^n). \end{equation} \item[(2)] If the translation action of $\mb{R}^n$ on $(C^1_* (\mb{R}^n ,\mb{R}^{m}), \mfk{B}(C^1_* (\mb{R}^n ,\mb{R}^{m})), \gamma_F )$ is ergodic, then \begin{equation}\label{E: Ergodicity Convergence} \lim_{R \to \infty}\frac{N(R;F)}{|C_R|} = \nu \ \text{almost surely} \quad \text{and} \quad \lim_{R \to \infty}\mc{E} \bigg| \frac{N(R;F)}{|C_R|} - \nu \bigg| = 0 .
\end{equation} \end{itemize} Moreover, for any $c \in \mc{C}(n-m)$, (1) and (2) hold true if one replaces $N(R;F)$ with $N(R;F,c)$ and replaces the number $\nu_F$ with a $\nu_{F,c} \geq 0$. \begin{itemize} \item[(3)] \begin{equation} \nu_F = \sum_{c \in \mc{C}(n-m)} \nu_{F,c} . \end{equation} \end{itemize} } \end{theorem} \begin{remark} When $m=1$, there are two assumptions in~\cite{NS16,SW19, W21} about when the shifting action of $\mb{R}^n$ is ergodic and when $\nu_F>0$. For general $m>1$, one may need more ingredients. In particular, when the random field $F$ consists of independent random functions, i.e., $F = (f_1, \dots,f_m)$ and $f_{i_1}$ is independent of $f_{i_2}$ when $i_1 \neq i_2$, we can denote the spectral measure of $f_i$ as $\rho_i$, and assume that $\rho_i$ has no atoms and its compact support has nonempty interior. Under these two assumptions, we can obtain ergodicity and positivity. One can also obtain the ergodicity and positivity of $\nu_F$ for Berry's monochromatic random waves model as shown in Appendix~\ref{APP: Ergodicity} and Appendix~\ref{APP: Positivity}, where we will discuss more on the ergodicity and the positivity without the independence assumption. \end{remark} To prove Theorem~\ref{Thm: Euclidean Random Field}, we define $N(x,R; F)$ as the number of all connected components of $Z(F)$ fully contained in the open cube $C_R(x) = C_R +x$, shifted from $C_R$ by $x$, of $\mb{R}^n$, and define $N^*(x,R;F)$ as the number of all connected components of $Z(F)$ that intersect the closed cube $\overline{C_R(x)}$. For any $c \in \mc{C}(n-m)$, we can also define $N(x,R; F , c)$ and $N^* (x,R; F , c)$ in a similar way. 
So, for any $G \in C^1_* (\mb{R}^n ,\mb{R}^{m})$, $N^*(R;G) - N(R;G) \leq \mfk{N}_\#(R;G)$, where \begin{equation}\label{E: Definition of Residue Parts} \mfk{N}_\# (R;G) = \begin{cases} \sum_{k=i} ^{n-1} \mfk{N}_k(R;G) ,& \ \text{if } Z(G) \text{ transversally intersects, and only intersects, the } \\ \ & \ \text{$i$-th through $(n-1)$-th skeletons of $\pa C_R$ } \\ \ & \ \text{for some $i \in \{ m, \dots, n-1\}$}, \\ \ & \ \\ + \infty,& \ \text{otherwise; we denote this exceptional set as Degen$(R)$}. \end{cases} \end{equation} Here, $\mfk{N}_k(R;G)$ is the number of connected components of $Z(G)$ which intersect skeletons of dimension $k$ but do not intersect skeletons of dimension $< k$. Notice that the measurability of $G \mapsto N(R;G)$ for fixed $R$ follows from its lower semicontinuity on $C^1_* (\mb{R}^n ,\mb{R}^{m})$. See for example Lemma~\ref{L: Continuous Stability}. For the measurability of $\mfk{N}_\# (R;G)$, we notice that transversal intersection is stable under small $C^1$-perturbations, and hence $C^1_* (\mb{R}^n ,\mb{R}^{m}) \backslash \text{Degen}(R)$ is an open subset of $C^1_* (\mb{R}^n ,\mb{R}^{m})$. By applying Bulinskaya's lemma to each skeleton of $\pa C_R$, we see that $\gamma_F(\text{Degen}(R)) = 0$. The measurability of $\mfk{N}_\#(R;G)$ then follows again from the lower semicontinuity on $C^1_* (\mb{R}^n ,\mb{R}^{m}) \backslash \text{Degen}(R)$. To estimate $\mc{E}(\mfk{N}_k(R;F))$, we can restrict the Gaussian random field $F$ to each $k$-th skeleton of $\pa C_R$, which is a $k$-dimensional open cube in the corresponding $k$-dimensional linear subspace of $\mb{R}^n$. Such a restriction still keeps the conditions $(A1)$ and $(A2)$ in Definition~\ref{D: Axiom 1}. Hence, by Theorem~\ref{Thm: L^1 Bound on Number}, there is a constant $C = C(k,m,M,k_1)>0$ such that $\mc{E}(\mfk{N}_k(R;F)) \leq C \cdot R^k$. So, there is a constant $C = C(n,m,M,k_1)>0$ such that $\mc{E}(\mfk{N}_\# (R;F)) \leq C \cdot R^{n-1}$.
This observation is crucial in proving Theorem~\ref{Thm: Euclidean Random Field}. We also need the following lemma. \begin{lemma}\label{L: Integral Sandwich} \textit{ For every $r \in (0,R)$, we have that \begin{equation} \frac{1}{|C_r|} \int_{C_{R-r}} N(x,r;F) \ dx \leq N(R;F) \leq \frac{1}{|C_r|} \int_{C_{R+r}} N^* (x,r;F) \ dx. \end{equation} } \end{lemma} The proof of this lemma follows from Lemma 1 in~\cite{NS16}. For any $c \in \mc{C}(n-m)$, similar inequalities hold true if one replaces $N(x,r;F)$ (resp., $N(R;F)$, $N^*(x,r;F)$) with $N(x,r;F,c)$ (resp., $N(R;F,c)$, $N^*(x,r;F,c)$). See, for example, Lemma 3.7 in~\cite{SW19}. In~\cite{W21}, there is also a similar lemma for Betti numbers. We omit the proof of Lemma~\ref{L: Integral Sandwich}. \begin{proof}[Proof of (1) in Theorem~\ref{Thm: Euclidean Random Field}] We only prove (1) for $N(R;F)$ and the proof for $N(R;F,c)$ is the same. Let \begin{equation} \nu = \limsup_{R \to \infty} \frac{\mc{E}(N(R;F))}{R^n}, \end{equation} which is bounded by a constant $D_1 = D_1(n,m,M,k_1)>0$ by Theorem~\ref{Thm: L^1 Bound on Number}. So, for each $\epsilon >0$, we can choose an $r > 0$ such that \begin{equation} \frac{\mc{E}(N(r;F))}{r^n} \geq \nu -\epsilon. \end{equation} By Lemma~\ref{L: Integral Sandwich}, we see that for all $R > r$, \begin{equation} \mc{E}(N(R;F)) \geq \frac{1}{|C_r|} \int_{C_{R-r}} \mc{E}( N(x,r;F) ) \ dx = \frac{|C_{R-r}|}{|C_r|} \cdot \mc{E}(N(r;F)), \end{equation} since $F$ is stationary. Hence, \begin{equation} \liminf_{R \to \infty} \frac{\mc{E}(N(R;F))}{R^n} \geq \frac{\mc{E}(N(r;F))}{r^n} \geq \nu - \epsilon. \end{equation} Letting $\epsilon \to 0$, we then get (\ref{E: Growth Degree Constant}). \end{proof} \begin{proof}[Proof of (2) in Theorem~\ref{Thm: Euclidean Random Field}] We only prove (2) for $N(R;F)$ and the proof for $N(R;F,c)$ is the same. We will use Wiener's ergodic theorem in~\cite{NS16}, which has also been used in~\cite{SW19,W21}.
We define two random variables: \begin{equation} \Phi(R,r) \equiv\frac{1}{|C_r|} \int_{C_{R-r}} N(x,r;F) \ dx = \frac{1}{|C_r|} \int_{C_{R-r}} N(r;\tau_x (F)) \ dx, \end{equation} and \begin{equation} \Psi(R,r) \equiv\frac{1}{|C_r|} \int_{C_{R+r}} \mfk{N}_\# (x,r;F) \ dx = \frac{1}{|C_r|} \int_{C_{R+r}} \mfk{N}_\# (r;\tau_x (F)) \ dx. \end{equation} By Lemma~\ref{L: Integral Sandwich}, we see that $\Phi(R,r) \leq N(R;F) \leq \Phi(R+2r,r) + \Psi(R,r) $. Also, Wiener's ergodic theorem gives that, for fixed $r$, \begin{equation} \lim_{R \to \infty} \frac{\Phi(R,r)}{|C_{R-r}|} = \frac{\mc{E}(N(r;F))}{|C_r|} , \ \text{and } \lim_{R \to \infty} \frac{\Psi(R,r)}{|C_{R+r}|} = \frac{\mc{E}(\mfk{N}_\# (r;F))}{|C_r|}, \end{equation} a.s.~and in $L^1(\mc{P})$. Equivalently, \begin{equation} \lim_{R \to \infty} \frac{\Phi(R,r)}{|C_{R}|} = \frac{\mc{E}(N(r;F))}{|C_r|} , \ \text{and } \lim_{R \to \infty} \frac{\Psi(R,r)}{|C_{R}|} = \frac{\mc{E}(\mfk{N}_\# (r;F))}{|C_r|}, \end{equation} a.s.~and in $L^1(\mc{P})$. Notice that \begin{equation}\label{E: Split Sandwich into Parts} \begin{split} \bigg| \frac{N(R;F)}{|C_R|} - \nu \bigg| &\leq \bigg| \frac{N(R;F)}{|C_R|} - \frac{\Phi(R,r)}{|C_R|} \bigg| + \bigg| \frac{\Phi(R,r)}{|C_R|} - \frac{\mc{E}(N(r;F))}{|C_r|} \bigg| + \bigg| \frac{\mc{E}(N(r;F))}{|C_r|} - \nu \bigg| \\ &= \frac{N(R;F)}{|C_R|} - \frac{\Phi(R,r)}{|C_R|} + \bigg| \frac{\Phi(R,r)}{|C_R|} - \frac{\mc{E}(N(r;F))}{|C_r|} \bigg| + \bigg| \frac{\mc{E}(N(r;F))}{|C_r|} - \nu \bigg|. \end{split} \end{equation} Hence, for the convergence in mean part, after taking expectations on both sides and letting $R \to \infty$, by (\ref{E: Growth Degree Constant}), \begin{equation} \begin{split} 0 &\leq \limsup_{R\to \infty} \mc{E} \bigg[\bigg| \frac{N(R;F)}{|C_R|} - \nu \bigg|\bigg] \\ &\leq \nu- \frac{\mc{E}(N(r;F))}{|C_r|} + \bigg| \frac{\mc{E}(N(r;F))}{|C_r|} - \nu \bigg|. 
\end{split} \end{equation} Then, letting $r \to \infty$ and using (\ref{E: Growth Degree Constant}) again, we get the convergence in mean part of (\ref{E: Ergodicity Convergence}). For the almost sure convergence part, we can further refine (\ref{E: Split Sandwich into Parts}) and get \begin{equation} \begin{split} \bigg| \frac{N(R;F)}{|C_R|} - \nu \bigg| &\leq \frac{\Phi(R+2r,r) + \Psi(R,r)}{|C_R|} - \frac{\Phi(R,r)}{|C_R|} + \bigg| \frac{\Phi(R,r)}{|C_R|} - \frac{\mc{E}(N(r;F))}{|C_r|} \bigg| \\ & \ + \bigg| \frac{\mc{E}(N(r;F))}{|C_r|} - \nu \bigg|. \end{split} \end{equation} So, a.s., \begin{equation} 0 \leq \limsup_{R \to \infty} \bigg| \frac{N(R;F)}{|C_R|} - \nu \bigg| \leq \frac{\mc{E}(\mfk{N}_\# (r;F))}{|C_r|} + \bigg| \frac{\mc{E}(N(r;F))}{|C_r|} - \nu \bigg|. \end{equation} By the discussions before Lemma~\ref{L: Integral Sandwich}, we see that $\mc{E}(\mfk{N}_\# (r;F)) \leq C \cdot r^{n-1}$ for a $C = C(n,m,M,k_1) >0$. Then, letting $r \to \infty$ and using (\ref{E: Growth Degree Constant}) again, we get the almost sure convergence part of (\ref{E: Ergodicity Convergence}). \end{proof} \begin{proof}[Proof of (3) in Theorem~\ref{Thm: Euclidean Random Field}] By definition, for any $F \in C^1_* (\mb{R}^n ,\mb{R}^{m})$ and any $R > 1$, \begin{equation} N(R;F) = \sum_{c \in \mc{C}(n-m)} N(R;F,c). \end{equation} For a subset $A \subset \mc{C}(n-m)$, we define $N(R; F,A)$ as the number of connected components of $Z(F)$ lying entirely in $C_R$ with their isotopy classes lying in $A$. Hence, for any finite subset $A \subset \mc{C}(n-m)$, \begin{equation} N(R;F) \geq \sum_{c \in A} N(R;F,c). \end{equation} We take expectations on both sides, divide both sides by $|C_R|$, and let $R \to \infty$. By (1) of Theorem~\ref{Thm: Euclidean Random Field}, we see that $\nu_F \geq \sum_{c \in A} \nu_{F,c}$. Since $A$ is arbitrary, we see that \begin{equation} \nu_F \geq \sum_{c \in \mc{C}(n-m)} \nu_{F,c}.
\end{equation} On the other hand, for any $c \in \mc{C}(n-m)$, by Lemma~\ref{L: Integral Sandwich} applied to $N(R;F,c)$, since $F$ is stationary, we see that \begin{equation} \frac{|C_{R-r}|}{|C_r|} \cdot \mc{E}(N(r;F,c))=\frac{1}{|C_r|} \int_{C_{R-r}} \mc{E}(N(x,r;F,c)) \ dx \leq \mc{E}(N(R;F,c)), \end{equation} when $0<r < R$. Dividing both sides by $|C_R|$ and letting $R \to \infty$, we see that for any $r >0$, \begin{equation} \frac{\mc{E}(N(r;F,c))}{|C_r|} \leq \nu_{F,c}. \end{equation} Hence, by the monotone convergence theorem, \begin{equation} \frac{\mc{E}(N(r;F))}{|C_r|} = \mc{E} \bigg(\sum_{c \in \mc{C}(n-m)} \frac{N(r;F,c)}{|C_r|} \bigg) = \sum_{c \in \mc{C}(n-m)} \frac{\mc{E}(N(r;F,c))}{|C_r|} \leq \sum_{c \in \mc{C}(n-m)}\nu_{F,c}. \end{equation} Letting $r \to \infty$, the left-hand side becomes $\nu_F$, which finishes the proof. \end{proof} \begin{remark} One can also prove part (3) of Theorem~\ref{Thm: Euclidean Random Field} by analyzing Cheeger's finiteness theorem quantitatively, as in the proof of Theorem 4.2 in~\cite{SW19}; this approach gives readers a better understanding of the geometry of random zero sets. \end{remark} For Theorem~\ref{Main Results: Local Betti}, the proof needs the following analogue of Lemma~\ref{L: Integral Sandwich}, which was proved as Lemma 3.1 in~\cite{W21}. \begin{lemma}\label{L: Integral Sandwich Betti} \textit{ For each $l = 0, 1, \dots, n-m$ and every $r \in (0,R)$, we have that \begin{equation} \frac{1}{|C_r|} \int_{C_{R-r}} \beta_l (x,r;F) \ dx \leq \beta_l (R;F), \end{equation} where $\beta_l (x,r;F)$ is the sum of $l$-th Betti numbers over $\mb{R}$ of all connected components of $Z(F)$ fully contained in the open cube $C_r(x)$. } \end{lemma} Combining Lemma~\ref{L: Integral Sandwich Betti} and Theorem~\ref{Thm: L^1 Bound on Betti}, we can conclude Theorem~\ref{Main Results: Local Betti} in a similar way as we proved Theorem~\ref{Thm: Euclidean Random Field}.
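As a toy illustration of the integral-geometric sandwich in Lemma~\ref{L: Integral Sandwich} (an illustration only, not part of any proof), one can take $n=m=1$ and the deterministic function $\sin(u)$, whose zero set components are the single points $k\pi$, and check both inequalities by a midpoint-rule discretization of the averaged counts; all helper names below are ours.

```python
import math

def count_open(x, r):
    # zeros of sin (the integer multiples of pi) strictly inside (x - r, x + r)
    lo, hi = (x - r) / math.pi, (x + r) / math.pi
    return max(0, math.ceil(hi) - math.floor(lo) - 1)

def count_closed(x, r):
    # zeros of sin inside the closed interval [x - r, x + r]
    lo, hi = (x - r) / math.pi, (x + r) / math.pi
    return max(0, math.floor(hi) - math.ceil(lo) + 1)

def averaged(counter, half_width, r, step=1e-3):
    # midpoint-rule approximation of (1/|C_r|) * integral over C_{half_width}
    total, x = 0.0, -half_width + step / 2
    while x < half_width:
        total += counter(x, r) * step
        x += step
    return total / (2 * r)  # |C_r| = 2r when n = 1

R, r = 10.0, 2.0
lower = averaged(count_open, R - r, r)    # approx. 5.15
exact = count_open(0.0, R)                # N(R) = 7 zeros in (-10, 10)
upper = averaged(count_closed, R + r, r)  # approx. 7.72
assert lower <= exact <= upper
```

Both inequalities hold with visible slack in this example; in the proofs above, only the behavior of the two averages as $R \to \infty$ matters.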
We do not have almost sure convergence for Betti numbers because we cannot similarly define an $\mfk{N}_\# (r;F)$ for Betti numbers and get an estimate like $\mc{E}(\mfk{N}_\# (r;F)) \leq C \cdot r^{n-1}$. \section{Proof of Theorem~\ref{Main Results: Global 1} and Theorem~\ref{Main Results: Global Betti} }\label{Section: Local Double Limit and Global Limit} In this section, we are going to prove Theorem~\ref{Main Results: Global 1} and Theorem~\ref{Main Results: Global Betti} for parametric ensembles of vector-valued Gaussian fields. When $m=1$, those ensembles are constructed in a way similar to the models in~\cite{CH20, GW15, NS16, SW19,W21}. Let us first give some assumptions similar to Definition~\ref{D: Axiom 1}. Let ${\{F_L \}}_{L \in \mc{L}}$ be a family of Gaussian fields defined on an open subset $U \subset \mb{R}^n$, where the index $L$ ranges over a discrete set $\mc{L} \subset \mb{R}$. We usually call this family a Gaussian ensemble. We let the covariance matrix associated with $F_L = (f_{L,1}, \dots, f_{L,m})$ be \begin{equation} {(K_L)}_{i_1 i_2}(x,y) = \mc{E}\big(f_{L, i_1}(x) f_{L,i_2}(y)\big), \ i_1,i_2 = 1,2,\dots, m, \end{equation} and let the scaled version for $F_{x,L}(u) \equiv F_L(x + L^{-1} u )$ be \begin{equation} {(K_{x,L})}_{i_1 i_2}(u,v) = \mc{E}({(F_{x,L})}_{i_1}(u){(F_{x,L})}_{i_2}(v)) = {(K_L)}_{i_1 i_2}(x + L^{-1}u, x + L^{-1}v).
\end{equation} \begin{definition}\label{D: Axiom 2} We say that a family of Gaussian random fields ${\{F_L\}}_{L \in \mc{L}}$ is locally uniformly controllable, if for every compact subset $Q \subset U$, there are positive constants $M = M(Q),k_1 = k_1(Q)$ such that \begin{itemize} \item [$(B1)$] For each $x \in Q$, $F_L(x)$ is centered and the joint distribution of $(F_L(x),\nabla F_L (x) )$ is a non-degenerate Gaussian vector with a rescaled uniformly lower bound on the covariance kernel, i.e., \begin{equation} \liminf_{L \to \infty} \inf_{x \in Q} \inf_{(\eta,\xi) \in \mb{S}^{m+mn-1}}\mc{E}\bigg(\bigg| \sum_{i = 1}^m\eta_i \cdot f_{L,i} (x) + L^{-1} \cdot \sum_{i=1} ^m \sum_{j=1} ^n \xi_{ij} \cdot \pa_j f_{L,i} (x)\bigg|^2 \bigg) \geq k_1 >0, \end{equation} where $\eta = (\eta_i) \in \mb{R}^m$, $\xi = (\xi_{ij}) \in \mb{R}^{nm}$ with $||\eta||^2 + ||\xi||^2 = 1$. \item [$(B2)$] We have the local uniform $C^{3-}$-smoothness: \begin{equation} \limsup_{L \to \infty} \max_{0\leq { |\alpha| }\leq 3} \sup_{x \in Q} L^{-2|\alpha|} {| {{D_x}^{\alpha} {D_y }^{\alpha}{(K_L)}_{i_1 i_2}(x,y) |}_{y=x}|} \leq M, \ i_1,i_2 = 1,2,\dots, m. \end{equation} \end{itemize} \end{definition} For an $x \in U$, if there exists a continuous stationary Gaussian field $F_x : \mb{R}^n \to \mb{R}^m$, such that for every finite point set $\mc{U} \subset \mb{R}^n$, the Gaussian vectors $(F_{x,L}) |_{\mc{U}}$ converge to $(F_x)|_{\mc{U}}$ in distribution, then we call $F_x$ the translation invariant local limit of ${\{F_L\}}_{L \in \mc{L}}$ at $x$. And hence, \begin{equation} \lim_{L\to \infty} K_{x,L}(u,v) = K_x(u-v) \quad \text{for any } (u,v) \in \mb{R}^n \times \mb{R}^n, \end{equation} where $K_x$ is the covariance kernel of $F_x$. On the other hand, since we have the uniform smoothness assumption $(B2)$, the existence of such an $F_x$ is equivalent to the existence of the limiting covariance kernel $K_x$. See more discussions in Appendix A.12 of~\cite{NS16}. 
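For instance, when the local limit has the Bargmann-Fock covariance kernel $e^{-||u-v||^2/2}$ (the kernel of Example~\ref{Example: Bargmann-Fock}), the non-degeneracy in $(B1)$ can be read off from the kernel via the standard identities $\mc{E}(f(0)\,\pa_j f(0)) = -\pa_j K(0)$ and $\mc{E}(\pa_j f(0)\,\pa_k f(0)) = -\pa_j \pa_k K(0)$ for a stationary kernel $K$. The following finite-difference sketch (a numerical illustration only; the helper names are ours) recovers the joint covariance of $(f(0), \nabla f(0))$ as the identity matrix, so the infimum in $(B1)$ equals $1$ for this kernel:

```python
import math

n, h = 3, 1e-4

def K(u):
    # stationary Bargmann-Fock kernel K(u) = exp(-|u|^2 / 2)
    return math.exp(-sum(t * t for t in u) / 2)

def basis(j, s):
    v = [0.0] * n
    v[j] = s
    return v

def shift(j, a, k, b):
    v = [0.0] * n
    v[j] += a
    v[k] += b
    return v

# Cov(f(0), d_j f(0)) = -dK/du_j(0), by central differences (exactly 0 here)
grad = [-(K(basis(j, h)) - K(basis(j, -h))) / (2 * h) for j in range(n)]

# Cov(d_j f(0), d_k f(0)) = -d^2 K / du_j du_k at 0
def hess(j, k):
    return (K(shift(j, h, k, h)) - K(shift(j, h, k, -h))
            - K(shift(j, -h, k, h)) + K(shift(j, -h, k, -h))) / (4 * h * h)

# joint covariance matrix of (f(0), grad f(0)): numerically the identity
Sigma = [[K([0.0] * n)] + [-g for g in grad]]
for j in range(n):
    Sigma.append([-grad[j]] + [-hess(j, k) for k in range(n)])
assert all(abs(Sigma[i][j] - (1.0 if i == j else 0.0)) < 1e-5
           for i in range(n + 1) for j in range(n + 1))
```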
\begin{definition}\label{D: Axiom 3} For a family of Gaussian fields ${\{F_L\}}_{L \in \mc{L}}$ defined on $U \subset \mb{R}^n$ that are also locally uniformly controllable, we say that ${\{F_L\}}_{L \in \mc{L}}$ is tame if there exists a Borel subset $U' \subset U$ of full Lebesgue measure such that, for all $x \in U'$, \begin{itemize} \item [$(B3)$] ${\{F_L\}}_{L \in \mc{L}}$ has a translation invariant local limit $F_x$ at $x$, which satisfies that the action of $\mb{R}^n$ on $(C^1_* (\mb{R}^n ,\mb{R}^{m}), \mfk{B}(C^1_* (\mb{R}^n ,\mb{R}^{m})), \gamma_{F_x} )$ is ergodic. \item [$(B4)$] There is a $k_2 = k_2(x)$, and for every $R > 0$, there is an $L_0 = L_0 (R,x)>0$, such that when $L > L_0$, $F_{x,L}(u)$ satisfies $(A3)$ in Definition~\ref{D: Axiom 1}, i.e., if $x \in Q \cap U'$ for some compact set $Q \subset U$, then when $L > L_0$, $F_{x,L}(u)$ satisfies \textit{ $(R;M(Q),k_1(Q),k_2(x))$-assumptions} on $C_{R+1}$. \end{itemize} \end{definition} \begin{remark} For an $n$-dimensional $C^3$-manifold $X$ without boundary, we say that a parametric Gaussian ensemble ${\{F_L\}}_{L \in \mc{L}}$ is tame if for every $C^3$-chart $(U,\pi)$, $\pi: U \subset \mb{R}^n \to X$, ${\{F_L \circ \pi \}}_{L \in \mc{L}}$ is tame on $U$. This definition does not depend on the choice of charts. See Section 9 of~\cite{NS16} when $m=1$. \end{remark} We then give an example, Kostlan's ensemble, which is a tame ensemble defined on $\mb{S}^n \subset \mb{R}^{n+1}$ when $m = 1$. When $m>1$, we can let $F_L = (f_{L,1}, \dots , f_{L,m})$ consist of independent ${\{f_{L,i}\}}_{L}$'s so that each ${\{f_{L,i}\}}_{L}$ is a Kostlan ensemble (or another general tame Gaussian function ensemble) defined on the same manifold. See more examples in~\cite{NS16, S16}. Apart from these concrete examples, in~\cite{CH20,SW19} and references therein, one can find a large family of Gaussian function ensembles satisfying our definitions of tame ensembles; these definitions are in fact quite generic.
\begin{example}[Kostlan's ensemble]\label{Example} Consider the linear space of real homogeneous polynomials of degree $d$ with coordinates $(x_0, \dots, x_n) \in \mb{R}^{n+1}$, which is endowed with the inner product \begin{equation} \langle P_d, Q_d \rangle = \sum_{|J| = d } \binom{d}{J} ^{-1} p_J q_J, \end{equation} where \begin{equation} J = (j_0 , \dots, j_n) , \ |J| = \sum_{s= 0 } ^n j_s , \ \binom{d}{J} = \frac{d !}{j_0 ! \cdot \cdots \cdot j_n !} , \end{equation} and \begin{equation} P_d(x) = \sum_{|J| = d} p_J x^J, \ Q_d(x) = \sum_{|J| = d} q_J x^J , \ x^J = x_0 ^{j_0} \cdot \cdots \cdot x_n ^{j_n}. \end{equation} This inner product, up to a positive constant $C = C(n,d)$, equals the inner product on the Bargmann-Fock space~\cite{B61}, i.e., the subspace of analytic functions on $\mb{C}^{n+1}$ where the inner product \begin{equation} \langle f , g \rangle_{BK} \equiv C(n,d)\int_{\mb{C}^{n+1}} f(z) \overline{g(z)} e^{-||z||^2} \ d \vol(z) \end{equation} is well-defined. One has that $\langle P_d , Q_d \rangle = \langle P_d , Q_d \rangle_{BK}$ after extending $P_d$ and $Q_d$ to $\mb{C}^{n+1}$. Hence, we get an orthonormal basis \begin{equation} {\bigg \{ \sqrt{ \binom{d}{J} } x^J \bigg \}}_{|J| = d} \end{equation} on the linear space of real homogeneous polynomials of degree $d$, with respect to the inner product we defined above. We then obtain a random homogeneous polynomial $R_d(x)$ of degree $d$, i.e., \begin{equation} R_d(x) = \sum_{|J| = d} \sqrt{ \binom{d}{J} } a_{J} x^J, \end{equation} where ${\{a_J\}}_{|J| = d}$ are i.i.d.~standard Gaussian random variables. The zero sets of $R_d(x)$ make sense as hypersurfaces on either $\mb{S}^{n}$ or $\mb{R}P^n$, the real projective space of dimension $n$. Hence, we can view ${\{R_d(x)\}}_{d \in \mb{N}}$ as a family of Gaussian random functions on $\mb{S}^n$.
The two-point covariance kernel is \begin{equation} \mc{E}(R_d(x)R_d(y)) = {(\langle x , y \rangle_{\mb{R}^{n+1}})}^d = {(\cos(\theta(x,y)))}^d, \end{equation} where $\langle x , y \rangle_{\mb{R}^{n+1}}$ is the standard inner product and $\theta(x,y)$ is the angle between $x,y$ as vectors in $\mb{R}^{n+1}$. Fix an $x \in \mb{S}^n$ and let $\exp_x: \mb{R}^n = T_x \mb{S}^n \to \mb{S}^n$ be the exponential map at $x$. We consider \begin{equation} R_{x,d}(u) \equiv R_d \big( \exp_x(d^{-1/2} \cdot u ) \big) \end{equation} and then \begin{equation} K_{x,d}(u,v) \equiv \mc{E}(R_{x,d}(u) R_{x,d}(v) ) = {(\langle \exp_x(d^{-1/2} \cdot u ) , \exp_x(d^{-1/2} \cdot v ) \rangle_{\mb{R}^{n+1}})}^d. \end{equation} As $d \to \infty$, $K_{x,d}(u,v)$, together with its partial derivatives of any finite order, locally uniformly converges to a covariance kernel $K_{R_x}(u,v)$ of a stationary Gaussian function $R_x(u)$ defined on $\mb{R}^n$. This $K_{R_x}$ is actually independent of $x$, and one can see that \begin{equation} K_{R_x}(u,v) = K_{R}(u-v) \equiv e^{-||u-v||^2/2}. \end{equation} We have already seen this kernel in Example~\ref{Example: Bargmann-Fock}. Defining the parameter set $\mc{L} \equiv {\{\sqrt{d} \}}_{d\in \mb{N}}$, one can then verify that this Kostlan's ensemble satisfies all of our definitions of tame ensembles. To see this, we first notice that $K_R$ will satisfy all definitions for some universal constants only depending on $n$, and one uses the uniform convergence of $K_{x,d}$ to $K_R$ to show that these $K_{x,d}$ satisfy our definitions uniformly. \end{example} Now, we focus on ${\{F_L\}}_{L \in \mc{L}}$ defined on an open subset $U \subset \mb{R}^n$ again. Assume that ${\{F_L\}}_{{L \in \mc{L}}}$ is tame and $U'$ is the full measure set in Definition~\ref{D: Axiom 3}. For each $x \in U'$, we define $\bar{\nu}(x ) \equiv \nu_{F_x}$, where we obtained each $\nu_{F_x}$ from Theorem~\ref{Thm: Euclidean Random Field}.
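The convergence of the scaled Kostlan kernel $K_{x,d}$ to $e^{-||u-v||^2/2}$ in Example~\ref{Example} can also be observed numerically. In the following sketch (the helper names are ours and this is only an illustration), the exponential map of $\mb{S}^n$ is written out via $\langle \exp_x(a), \exp_x(b) \rangle = \cos|a|\cos|b| + \sin|a|\sin|b| \, \langle a,b\rangle /(|a||b|)$, and the error decays roughly like $1/d$:

```python
import math

def sphere_inner(a, b):
    # <exp_x(a), exp_x(b)> for the exponential map of S^n at a point x,
    # with tangent vectors a, b in R^n = T_x S^n
    na = math.sqrt(sum(t * t for t in a))
    nb = math.sqrt(sum(t * t for t in b))
    if na == 0.0 or nb == 0.0:
        return math.cos(na + nb)
    dot = sum(s * t for s, t in zip(a, b))
    return math.cos(na) * math.cos(nb) + math.sin(na) * math.sin(nb) * dot / (na * nb)

def K_xd(u, v, d):
    # scaled Kostlan kernel (<exp_x(u / sqrt(d)), exp_x(v / sqrt(d))>)^d
    s = 1.0 / math.sqrt(d)
    return sphere_inner([s * t for t in u], [s * t for t in v]) ** d

def K_R(u, v):
    # limiting stationary kernel exp(-|u - v|^2 / 2)
    return math.exp(-sum((s - t) ** 2 for s, t in zip(u, v)) / 2)

u, v = [0.3, -0.7], [1.1, 0.2]
errs = [abs(K_xd(u, v, d) - K_R(u, v)) for d in (10, 100, 10000)]
assert errs[0] > errs[1] > errs[2] and errs[2] < 1e-3
```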
We say that a Borel measure $n_L$ is a connected component counting measure of $F_L$ if $\spt (n_L) \subset Z(F_L)$ and the $n_L$-mass of each component is $1$. Now, we can restate Theorem~\ref{Main Results: Global 1} as the following. \begin{theorem}\label{Thm: Global Limit} \textit{ Assume that ${\{F_L\}}_{L \in \mc{L}}$ is tame on an open set $U \subset \mb{R}^n$. Then, \begin{itemize} \item [(1)] The function $x \mapsto \bar{\nu}(x )$ is measurable and locally bounded in $U$. \item [(2)] For every sequence of connected component counting measures $n_L$ of $F_L$ and for every $\varphi \in C_{c}(U)$, we have that \begin{equation} \lim_{L \to \infty} \mc{E} \bigg[ \bigg | \frac{1}{L^n} \int_U \varphi(x) dn_L(x) - \int_U \varphi(x) \bar{\nu}(x) dx \bigg | \bigg] = 0. \end{equation} \end{itemize} } \end{theorem} Consider an $x \in U' $ with the translation invariant limit $F_x : \mb{R}^n \to \mb{R}^m$, which is centered and stationary. We see, for example, by Appendix A.12 of~\cite{NS16}, that $F_x(u)$ satisfies $(A1)$, $(A2)$, and $(A3)$ in Definition~\ref{D: Axiom 1} at the origin $u=0$ with $M(Q)$ and $k_1(Q)$ if $x \in Q$ for a compact set $Q \subset U$, and with $k_2(x)$ in $(B4)$ of Definition~\ref{D: Axiom 3}. We then have a local double limit lemma since $F_x$ also satisfies the ergodicity assumption $(B3)$ in Definition~\ref{D: Axiom 3}. \begin{lemma}\label{L: Double Limit in Probability} \textit{ Assume that ${\{F_L\}}_{L \in \mc{L}}$ is locally uniformly controllable and $F_x$ is its translation invariant local limit at a fixed $x \in U$. We also assume that for this $F_x$, the action of $\mb{R}^n$ on $(C^1_* (\mb{R}^n ,\mb{R}^{m}), \mfk{B}(C^1_* (\mb{R}^n ,\mb{R}^{m})), \gamma_{F_x} )$ is ergodic. Then, for any $\epsilon >0$, \begin{equation} \lim_{R \to \infty} \limsup_{L \to \infty} \mc{P} \bigg(\bigg| \frac{N(R; F_{x,L})}{|C_R|} - \nu_{F_x}\bigg| > \epsilon \bigg) = 0 . 
\end{equation} } \end{lemma} \begin{remark}\label{Rmk: Smooth Double Limit} For each $l = 0, 1, \dots, n-m$, Lemma~\ref{L: Double Limit in Probability} holds true if we replace $N(R; F_{x,L})$ with $\beta_l(R; F_{x,L})$ and replace $\nu_{F_x}$ with $\nu_{l;F_x}$. For any $c \in \mc{C}(n-m)$, Lemma~\ref{L: Double Limit in Probability} also holds true if one replaces $N(R; F_{x,L})$ with $N(R; F_{x,L},c)$ and replaces $\nu_{F_x}$ with $\nu_{F_x,c}$. \end{remark} When $m=1$, this lemma was proved in Theorem 5 of~\cite{S16}, Proposition 6.2 of~\cite{SW19}, and Theorem 1.5 of~\cite{W21}. In order to prove Lemma~\ref{L: Double Limit in Probability}, one needs to replace some lemmas for random functions with those for random fields, which are our Lemma~\ref{L: Quantitative Bulinskaya} and Lemma~\ref{L: Continuous Stability}. So, we only sketch the proof idea. On the other hand, this result can be improved to $L^1$-convergence, i.e., \begin{equation} \lim_{R \to \infty} \limsup_{L \to \infty} \mc{E} \bigg(\bigg| \frac{N(R; F_{x,L})}{|C_R|} - \nu_{F_x}\bigg| \bigg) = 0 , \end{equation} once our Theorem~\ref{Thm: L^q Bound on Number} is established. See (\ref{E: Double Limit in L^1}) in the proof of part (1) of Theorem~\ref{Thm: Global Limit}. \begin{proof}[Sketch of the proof idea for Lemma~\ref{L: Double Limit in Probability}] Recall that \begin{equation} \lim_{R \to \infty} \mc{P} \bigg(\bigg| \frac{N(R; F_{x})}{|C_R|} - \nu_{F_x}\bigg| > \frac{\epsilon}{2} \bigg) = 0, \end{equation} by Theorem~\ref{Thm: Euclidean Random Field}. Fix a large $R > 0$. One can also show that \begin{equation} \lim_{A \to \infty} \mc{P} \big( ||F_x||_{C^{2}(C_{2R})} > A \big) = 0, \end{equation} and by our quantitative Bulinskaya's lemma, Lemma~\ref{L: Quantitative Bulinskaya}, \begin{equation} \lim_{\tau \to 0} \mc{P} \big( \min_{y \in \bar{C}_{2R}} \max\{||F_x(y)||,\lambda(F_x(y)) \} < \tau \big) = 0 .
\end{equation} Also, one can show that for any $\delta >0$, \begin{equation} \limsup_{L \to \infty} \mc{P} \big( ||F_{x,L} - F_x||_{C^1(C_{2R})} > \delta \big) = 0. \end{equation} The above limits show that, outside an event $\Sigma$ with small probability, $Z(F_x) \cap C_R$ is a regular submanifold, and $Z(F_{x,L}) \cap C_R$ is actually a tiny perturbation of $Z(F_{x}) \cap C_R$. Hence, the components of $Z(F_{x,L})$ fully contained in $C_R$ are $C^1$-isotopic to components of $Z(F_{x})$ fully contained in $C_{R+1}$. In particular, $N(R-1;F_{x}) \leq N(R;F_{x,L}) \leq N(R+1;F_{x})$. Hence, \begin{equation} \begin{split} \mc{P} \bigg(\bigg| \frac{N(R; F_{x,L})}{|C_R|} - \nu_{F_x}\bigg| > \epsilon \bigg) &\leq \mc{P} \bigg( \frac{N(R+1; F_{x})}{|C_R|} - \nu_{F_x} > \epsilon \bigg) \\ & \ + \mc{P} \bigg( \frac{N(R-1; F_{x})}{|C_R|} - \nu_{F_x} < - \epsilon \bigg) + \mc{P}(\Sigma). \end{split} \end{equation} Let us formulate the $C^1$-isotopic result we used above. \begin{lemma}\label{L: Continuous Stability} \textit{ Fix a small $\tau >0$, a large $A>0$, and a $\beta \in (0,1)$. Assume that $G(x) \in C^{1+\beta}(C_{R+2}, \mb{R}^m)$ for some integer $m \in (0,n)$ and $||G||_{C^{1+\beta}(C_{R+2})} < A$. If for each $x \in C_{R+1}$, either $||G(x)|| > \tau$ or $\lambda(G(x)) > \tau$ (see the definition for $\lambda(G(x))$ in Lemma~\ref{L: Quantitative Bulinskaya}), then there is a positive constant $\delta = \delta(n , m, \tau, A, \beta) > 0$, such that for any $C^1$-vector field $\tilde{G}(x) \in C^1(C_{R+2}, \mb{R}^m)$ with $|| \tilde{G}(x) - G(x) ||_{C^1(C_{R+1})} < \delta$, one can show that each connected component of $Z(G)$ fully contained in $C_{R-1}$ is $C^1$-isotopic to a connected component of $Z(\tilde{G})$ fully contained in $C_{R}$, and this correspondence is injective. In particular, $N(R; \tilde{G}) \geq N(R-1; G)$.
} \end{lemma} For this stability lemma, one can see, for example, Lemma 4.3 of~\cite{W21}, Proposition 6.8 of~\cite{SW19}, and Thom's isotopy Theorem. The general idea is to consider $G_t(x) = G(x) + t \cdot (\tilde{G}(x)-G(x))$, $t \in [0,1]$. Since now $|| \tilde{G}(x) - G(x) ||_{C^1(C_{R+1})} $ is small, for each $t$, we know that $Z(G_t)$ is regular. For each connected component of $Z(G)$ fully contained in $C_{R-1}$, say $\gamma$, the normal vector bundle ${(T\gamma)}^\perp$ is trivial and one can use $G(x)$ as a local trivialization of this normal vector bundle. At each $x \in \gamma$, one can consider an $m$-dimensional disc $D^m(x,r) \subset {(T_x \gamma)}^{\perp} \subset \mb{R}^n$ for some $r = r(n,m,\tau,A,\beta)>0$ small, and then apply the contraction mapping theorem to get a unique $y_t$ in each $D(x,r)$ such that $G_t(y_t) = 0$. This process actually builds up the $C^1$-isotopy. If one only needs the inequality $N(R; \tilde{G}) \geq N(R-1; G)$, one can just use $C^0$-perturbations, i.e., just assume that $|| \tilde{G} - G ||_{C^0(C_{R+1})}$ is small. The proof follows by replacing the contraction mapping theorem with Brouwer's fixed-point theorem. Similar inequalities under $C^0$-perturbations also hold true for Betti numbers; see, for example, Theorem 2 of~\cite{LS19}. This finishes the proof sketch for Lemma~\ref{L: Double Limit in Probability}. \end{proof} In order to prove Theorem~\ref{Thm: Global Limit}, we need an estimate stronger than Theorem~\ref{Thm: L^1 Bound on Number} with the assumption $(A3)$ in Definition~\ref{D: Axiom 1}. \begin{theorem}\label{Thm: L^q Bound on Number} \textit{ Assume that $F : C_{R+1} \to \mb{R}^{m}$ satisfies \textit{ $(R; M,k_1,k_2)$-assumptions} on $C_{R+1}$. Then, there is a constant $D_2 = D_2(n,m, M,k_1,k_2) >0$ and a constant $q = q(n,m) > 1$, such that for $R >1$, \begin{equation} \mc{E}({(N(R;F))}^q) \leq D_2 \cdot |C_R|^q.
\end{equation} } \end{theorem} Before we give the proof, let us remark that a similar result also holds true for Betti numbers. \begin{theorem}\label{Thm: L^q Bound on Betti} \textit{ Assume that $F : C_{R+1} \to \mb{R}^{m}$ satisfies \textit{ $(R; M,k_1,k_2)$-assumptions} on $C_{R+1}$. Then there is a constant $\widetilde{D}_2 = \widetilde{D}_2(n,m, M,k_1,k_2) >0$ and a constant $q = q(n,m) > 1$, such that for $R >1$, \begin{equation} \sum_{l=0}^{n-m} \mc{E}({(\beta_l(R;F))}^q) \leq \widetilde{D}_2 \cdot |C_R|^q. \end{equation} } \end{theorem} The necessary modification is to again use the Chern-Lashof inequality in Theorem~\ref{Thm: Chern}. Hence, we only give the proof for Theorem~\ref{Thm: L^q Bound on Number}. \begin{proof}[Proof of Theorem~\ref{Thm: L^q Bound on Number}] For a $q = q(n,m)>1$ but close to $1$ and a positive integer $k$ to be determined later, by (\ref{E: Bound Components Number}), we have that for some positive constant $C = C(n,m)$, \begin{equation}\label{E: Bound Components Number 2} \begin{split} &{(N(R;F))}^q \leq C(n,m) \cdot {\bigg( \int_{F^{-1}(0) \cap C_R} ||\bH||^{n-m} \ d\mc{H}^{n-m} \bigg)}^q \\ &\leq C \cdot \bigg( \int_{F^{-1}(0) \cap C_R} ||\bH||^{(n-m)q} \ d\mc{H}^{n-m} \bigg) \cdot {\big(\mc{H}^{n-m}(F^{-1}(0) \cap C_R)\big)}^{q-1} \\ &= C \cdot \bigg( \int_{F^{-1}(0) \cap C_R} ||\bH||^{(n-m)q} \ d\mc{H}^{n-m} \bigg) \cdot {\bigg(\frac{\mc{H}^{n-m}(F^{-1}(0) \cap C_R)}{|C_R|}\bigg)}^{q-1} \cdot |C_R|^{q-1} \\ &= C \cdot |C_R|^{q-1} \bigg[ \int_{F^{-1}(0) \cap C_R} {(||\bH||^{\frac{(n-m)q(k+1)}{k}})}^{\frac{k}{k+1}} \\ & \qquad \qquad \qquad \quad \cdot {\bigg( {\bigg(\frac{\mc{H}^{n-m}(F^{-1}(0) \cap C_R)}{|C_R|}\bigg)}^{(k+1)(q-1)} \bigg)}^{\frac{1}{k+1}} \ d\mc{H}^{n-m} \bigg] \\ &\leq \frac{C}{k+1} |C_R|^{q-1} \bigg[ \int_{F^{-1}(0) \cap C_R} k (||\bH||^{\frac{(n-m)q(k+1)}{k}}) \\ & \qquad \qquad \qquad \quad + \bigg( {\bigg(\frac{\mc{H}^{n-m}(F^{-1}(0) \cap C_R)}{|C_R|}\bigg)}^{(k+1)(q-1)} \bigg) \ d\mc{H}^{n-m} \bigg] \\
&\leq C \cdot |C_R|^{q-1} \int_{F^{-1}(0) \cap C_R} (||\bH||^{\frac{(n-m)q(k+1)}{k}}) \ d\mc{H}^{n-m} \\ & \quad +C \cdot |C_R|^{-k(q-1)}{\big(\mc{H}^{n-m}(F^{-1}(0) \cap C_R)\big)}^{1+(k+1)(q-1)}. \end{split} \end{equation} We set $k = \frac{2-q}{q-1}$ and we will determine $q = q(n,m)$ later. Then, we can see that \begin{equation}\label{E: Bound Components Number 3} \begin{split} {(N(R;F))}^q &\leq C \cdot |C_R|^{q-1} \int_{F^{-1}(0) \cap C_R} (||\bH||^{\frac{(n-m)q}{2-q}}) \ d\mc{H}^{n-m} \\ & \quad + C\cdot |C_R|^{q-2}{\big(\mc{H}^{n-m}(F^{-1}(0) \cap C_R)\big)}^{2}. \end{split} \end{equation} Notice that when $q$ is greater than $1$ and close to $1$, $q/(2-q)$ is also greater than $1$ and close to $1$. To simplify the notation, we let $q/(2-q) = \theta >1$. In order to estimate $\mc{E}({(N(R;F))}^q)$, by (\ref{E: Bound Components Number 3}), we need to estimate \begin{equation}\label{E: Bound Term 1} \mc{E} \bigg(\int_{F^{-1}(0) \cap C_R} (||\bH||^{(n-m)\theta}) \ d\mc{H}^{n-m} \bigg ) \end{equation} and \begin{equation}\label{E: Bound Term 2} \mc{E}\bigg({\big(\mc{H}^{n-m}(F^{-1}(0) \cap C_R)\big)}^{2} \bigg). \end{equation} Since $\theta$ is close to $1$, the estimate for (\ref{E: Bound Term 1}) is the same as in Theorem~\ref{Thm: L^1 Bound on Number} and, in particular, the same as estimating (\ref{E: Determinant Integral in Curvature}). The exponent $\theta$ is then allowed to be greater than $1$ but close to $1$ because of the sharp constant $\alpha$ shown in Lemma~\ref{L: Singular Integral}. So, we can choose a $\theta = \theta(n,m)>1$ and choose a $q = q(n,m)>1$ such that $k+1 = \frac{1}{q-1}$ is a positive integer. We also get that (\ref{E: Bound Term 1}) is bounded by $|C_R|$ times a constant $C = C(n,m,M,k_1) >0$.
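For the reader's convenience, the exponent bookkeeping behind the choice $k = \frac{2-q}{q-1}$ is the elementary computation
\begin{equation*}
k(q-1) = 2-q, \qquad (k+1)(q-1) = 1, \qquad \frac{q(k+1)}{k} = \frac{q}{2-q} = \theta,
\end{equation*}
so that in (\ref{E: Bound Components Number 2}) one has $|C_R|^{-k(q-1)} = |C_R|^{q-2}$ and $1 + (k+1)(q-1) = 2$, which is exactly how (\ref{E: Bound Components Number 3}) follows.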
For (\ref{E: Bound Term 2}), by the Kac-Rice Theorem~\ref{Thm: Kac-Rice}, we have that \begin{equation} \mc{E}\bigg({\big(\mc{H}^{n-m}(F^{-1}(0) \cap C_R)\big)}^{2} \bigg) = \int_{C_R \times C_R} J_{2,F}(1;u_1,u_2) p_{(F(u_1), F(u_2))}(0) \ du_1 du_2, \end{equation} where \begin{equation*} J_{2,F}(1;u_1, u_2) = \mc{E}\bigg[ \sqrt{\det[{(\nabla F (u_1))}^T (\nabla F(u_1) )]} \sqrt{\det[{(\nabla F (u_2))}^T (\nabla F(u_2) )]} \ \bigg| \ F(u_1) = F(u_2) = 0 \bigg]. \end{equation*} We have an estimate for $p_{(F(u_1), F(u_2))}(0)$ from the $(A3)$ assumption in Definition~\ref{D: Axiom 1}. For this $J_{2,F}$ term, we will prove that it is bounded above by a constant $C = C(n,m,M,k_1,k_2)< \infty$. By the inequality $2ab \leq a^2 + b^2$ and symmetry, we only need to estimate \begin{equation}\label{E: Jacobian in L^2 Coarea} \mc{E}\big[ \det[{(\nabla F (u_1))}^T (\nabla F(u_1) )] \ \big| \ F(u_1) = F(u_2) = 0 \big], \end{equation} for $u_1,u_2 \in C_R$ and $u_1 \neq u_2$. By tricks similar to those in (\ref{E: Independence for Conditional Expectation}), for fixed $i \in \{1, \dots , m\}$ and $j \in \{1 ,\dots , n \}$, we first need to find $a_{j,i} ^t$ and $b_{j,i} ^t$ such that for each $s = 1, \dots, m$, \begin{equation}\label{E: Orthogonal Condition 1} \mc{E}\big[ (\pa_j f_i (u_1) + \sum_{t=1} ^m a_{j,i} ^t f_t(u_1)+ b_{j,i} ^t f_t(u_2)) \cdot f_s(u_1)\big] = 0, \end{equation} and \begin{equation}\label{E: Orthogonal Condition 2} \mc{E}\big[ (\pa_j f_i (u_1) + \sum_{t=1} ^m a_{j,i} ^t f_t(u_1)+ b_{j,i} ^t f_t(u_2)) \cdot f_s(u_2)\big] = 0. \end{equation} To simplify the notation, we let $v = \pa_j f_i (u_1)$, $A = (a_{j,i}^1, \dots, a_{j,i}^m)$, $B = (b_{j,i}^1, \dots, b_{j,i}^m)$, $\mc{E}(F(u_1)v) = (\mc{E}(f_1(u_1)v) ,\dots, \mc{E}(f_m(u_1)v))$, $\mc{E}(F(u_2)v) = (\mc{E}(f_1(u_2)v) ,\dots, \mc{E}(f_m(u_2)v))$.
Set $\mr{Cov}(x,y)$ as the $m\times m $ matrix with elements ${(\mr{Cov}(x,y))}_{i_1 i_2} = \mc{E}(f_{i_1}(x) f_{i_2}(y))$, and set $\Lambda(x,y)$ as the covariance kernel of the joint distribution of $(F(x),F(y))$, i.e., \begin{equation} \Lambda(x,y)= \begin{bmatrix} \mr{Cov}(x,x) & \mr{Cov}(x,y)\\ \mr{Cov}(y,x) & \mr{Cov}(y,y) \end{bmatrix} . \end{equation} $\Lambda(x,y)$ is invertible when $x \neq y$ by the $(A3)$ assumption. Hence, the equations (\ref{E: Orthogonal Condition 1}) and (\ref{E: Orthogonal Condition 2}) are solvable, and we get \begin{equation}\label{E: Solved Orthogonal Condition} (A,B) = - (\mc{E}(F(u_1)v), \mc{E}(F(u_2)v)) \cdot {(\Lambda (u_1,u_2))}^{-1}. \end{equation} Since \begin{equation} \det[{(\nabla F (u_1))}^T (\nabla F(u_1) )] \leq C(n,m) \cdot {||\nabla F ||}^{2m} , \end{equation} we see that the upper bound for (\ref{E: Jacobian in L^2 Coarea}) reduces to the upper bound for \begin{equation} \mc{E}{(v + A\cdot F(u_1) + B \cdot F(u_2))}^{2m}. \end{equation} Since it is the $2m$-th moment of a centered Gaussian variable, we only need to estimate its variance. By (\ref{E: Solved Orthogonal Condition}), we have that \begin{equation} \begin{split} & \quad \mc{E}{( A\cdot F(u_1) + B \cdot F(u_2))}^{2} \\ &= (\mc{E}(F(u_1)v), \mc{E}(F(u_2)v)) \cdot {(\Lambda (u_1,u_2))}^{-1} \cdot {(\mc{E}(F(u_1)v), \mc{E}(F(u_2)v))}^T. \end{split} \end{equation} We will use Theorem~\ref{Thm: APP C 1} and consider the case $|u_1 - u_2| \leq \delta$ and the case $|u_1 - u_2 | > \delta$ separately, where $\delta$ is chosen in Theorem~\ref{Thm: APP C 1}.
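The orthogonalization in (\ref{E: Orthogonal Condition 1})--(\ref{E: Solved Orthogonal Condition}) is ordinary Gaussian regression and can be sanity-checked on a concrete one-dimensional stationary kernel. The following numerical sketch (with $m=n=1$, $K(t)=e^{-t^2/2}$, $v = \pa f(u_1)$, and helper names of our own; an illustration only) verifies that the residual $v + A F(u_1) + B F(u_2)$ is uncorrelated with $F(u_1)$ and $F(u_2)$ and that its variance does not exceed $\mc{E}(v^2)$:

```python
import math

def K(t):  return math.exp(-t * t / 2)   # stationary kernel K(t)
def dK(t): return -t * K(t)              # K'(t)

t = -1.3                                 # u_1 - u_2
Lam = [[1.0, K(t)], [K(t), 1.0]]         # Lambda(u_1, u_2) for m = 1
c = [0.0, dK(t)]                         # (Cov(f(u1), v), Cov(f(u2), v)) for v = f'(u1)
var_v = 1.0                              # Var(v) = -K''(0)

# (A, B) = -c . Lambda^{-1}, solved with the explicit 2x2 inverse
det = Lam[0][0] * Lam[1][1] - Lam[0][1] * Lam[1][0]
inv = [[Lam[1][1] / det, -Lam[0][1] / det],
       [-Lam[1][0] / det, Lam[0][0] / det]]
A = -(c[0] * inv[0][0] + c[1] * inv[1][0])
B = -(c[0] * inv[0][1] + c[1] * inv[1][1])

# the residual w = v + A f(u1) + B f(u2) is uncorrelated with f(u1) and f(u2)
cov_w_f1 = c[0] + A * Lam[0][0] + B * Lam[1][0]
cov_w_f2 = c[1] + A * Lam[0][1] + B * Lam[1][1]
assert abs(cov_w_f1) < 1e-12 and abs(cov_w_f2) < 1e-12

# conditioning only decreases the variance: 0 < Var(w) <= Var(v)
var_w = (var_v + 2 * (A * c[0] + B * c[1])
         + A * A * Lam[0][0] + 2 * A * B * Lam[0][1] + B * B * Lam[1][1])
assert 0.0 < var_w <= var_v
```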
When $|u_1 - u_2| \leq \delta$, by elementary row operations, \begin{equation} \begin{split} \Lambda (u_1,u_2) & = \begin{bmatrix} \mr{Id}_m & 0\\ \mr{Cov}(u_2,u_1) {\mr{Cov}(u_1,u_1)}^{-1} & {\mr{Id}}_m \end{bmatrix} \\ \quad &\cdot \begin{bmatrix} \mr{Cov}(u_1,u_1) & 0\\ 0 & \mr{Cov}(u_2,u_2) - \mr{Cov}(u_2,u_1) {\mr{Cov}(u_1,u_1)}^{-1}\mr{Cov}(u_1,u_2) \end{bmatrix} \\ \quad &\cdot \begin{bmatrix} \mr{Id}_m & {\mr{Cov}(u_1,u_1)}^{-1}\mr{Cov}(u_1,u_2)\\ 0 & \mr{Id}_m \end{bmatrix} . \end{split} \end{equation} Hence, \begin{equation}\label{E: Bound Variance of Two Points} \begin{split} &\quad \mc{E}{( A F(u_1) + B F(u_2))}^{2} \\ &= (\mc{E}(F(u_1)v), \mc{E}(F(u_2)v) - \mc{E}(F(u_1)v) {\mr{Cov}(u_1,u_1)}^{-1} \mr{Cov}(u_1,u_2)) \\ &\quad \cdot \begin{bmatrix} \mr{Cov}(u_1,u_1) & 0\\ 0 & \mr{Cov}(u_2,u_2) - \mr{Cov}(u_2,u_1) {\mr{Cov}(u_1,u_1)}^{-1}\mr{Cov}(u_1,u_2) \end{bmatrix} ^{-1} \\ &\quad \cdot {(\mc{E}(F(u_1)v), \mc{E}(F(u_2)v) - \mc{E}(F(u_1)v) {\mr{Cov}(u_1,u_1)}^{-1} \mr{Cov}(u_1,u_2))}^T . \end{split} \end{equation} For the middle matrix in (\ref{E: Bound Variance of Two Points}), the $\mr{Cov}(u_1,u_1)$ block causes no difficulty: by $(A1)$ in Definition~\ref{D: Axiom 1}, $\mr{Cov}(u_1,u_1)$ is quantitatively non-degenerate. For the second $m \times m$ block, by Theorem~\ref{Thm: APP C 1}, the smallest eigenvalue of $\mr{Cov}(u_2,u_2) - \mr{Cov}(u_2,u_1) {\mr{Cov}(u_1,u_1)}^{-1}\mr{Cov}(u_1,u_2)$ is bounded below by $(k_1/2) \cdot |u_1-u_2 |^2$. On the other hand, by the mean value theorem, \begin{equation} \big| \mc{E}(F(u_2)v) - \mc{E}(F(u_1)v) {\mr{Cov}(u_1,u_1)}^{-1} \mr{Cov}(u_1,u_2) \big|^2 \leq C(n,m,M) \cdot |u_1 - u_2|^2 . \end{equation} Hence, when $|u_1 - u_2| \leq \delta$, \begin{equation} \mc{E}{( A F(u_1) + B F(u_2))}^{2} \leq C(n,m,M,k_1).
\end{equation} When $|u_1 - u_2| > \delta$, by $(A3)$ in Definition~\ref{D: Axiom 1}, we see that there is a positive constant $c = c(n,m,M,k_1,k_2)$ such that \begin{equation} \det(\Lambda(u_1,u_2)) \geq c. \end{equation} Since the norm of $(\mc{E}(F(u_1)v), \mc{E}(F(u_2)v))$ is bounded by a constant $C = C(n,m,M)>0$, we get that \begin{equation} \mc{E}{( A F(u_1) + B F(u_2))}^{2} \leq C(n,m,M,k_1,k_2). \end{equation} Combining this with the fact that $\mc{E}(v^2) \leq C(n,m,M)$, we can then prove that \begin{equation} \mc{E}{(v + A\cdot F(u_1) + B \cdot F(u_2))}^{2}\leq C(n,m,M,k_1,k_2). \end{equation} Hence, by the previous arguments, \begin{equation} J_{2,F} \leq C(n,m,M,k_1,k_2). \end{equation} Combining this with $(A4)$ in Definition~\ref{D: Axiom 1}, i.e., \begin{equation} p_{(F(u_1), F(u_2))} (0) \leq \begin{cases} \frac{k_2}{|u_1-u_2|^m} ,& \ \text{if } |u_1-u_2| \leq 1, \\ & \\ k_2 ,& \ \text{if } |u_1-u_2| >1, \end{cases} \end{equation} we get that, for $R > 1$, \begin{equation} \begin{split} & \quad \mc{E}\bigg({\big(\mc{H}^{n-m}(F^{-1}(0) \cap C_R)\big)}^{2} \bigg) \\ &\leq C(n,m, M,k_1,k_2) \cdot R^{n} \cdot (\int_0 ^1 r^{n-1-m} \ dr+ \int_1 ^{2R} r^{n-1 } \ dr ) \\ &\leq C(n,m, M,k_1,k_2) \cdot R^{2n }. \end{split} \end{equation} Now, combining the estimates for (\ref{E: Bound Term 1}) and (\ref{E: Bound Term 2}), we can estimate $\mc{E}({(N(R;F))}^q)$ with some $q = q(n,m)>1$ but close to 1 by (\ref{E: Bound Components Number 3}), and finally obtain that for all $R >1$, \begin{equation} \mc{E}({(N(R;F))}^q) \leq C(n,m, M, k_1,k_2) \cdot R^q. \end{equation} \end{proof} In Theorem~\ref{Thm: L^q Bound on Number}, we need to add an additional condition $(A3)$ to ensure the nondegeneracy of $F$ at two different points, so that we can get $L^q$-bounds for the number and Betti numbers of connected components.
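We remark that some two-point non-degeneracy condition of this kind cannot be dropped; the following standard example is included only for illustration. For a stationary field whose covariance kernel $k$ is periodic with some period $\tau \neq 0$, i.e., $k(x + \tau) = k(x)$ for all $x$, one has
\begin{equation}
\mc{E}\big( {(f_{i}(u+\tau) - f_{i}(u))}^2 \big) = 2\big(k_{ii}(0) - k_{ii}(\tau)\big) = 0 ,
\end{equation}
so $F(u + \tau) = F(u)$ almost surely, and the joint covariance $\Lambda(u, u+\tau)$ is singular even though $u \neq u + \tau$; equation (\ref{E: Solved Orthogonal Condition}) is then no longer solvable at such pairs of points.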
In order to get a uniform control on the family ${\{F_L\}}_{L \in \mc{L}}$, we need to add the tame conditions in Definition~\ref{D: Axiom 3}, with which we can obtain the proof of our Theorem~\ref{Thm: Global Limit}. Notice that, if ${\{F_L\}}_{L \in \mc{L}}$ satisfies $(B4)$ for some $k_2(x)$ at $x \in U$ in Definition~\ref{D: Axiom 3}, then its translation invariant limit $F_x$ also satisfies $(A3)$ in Definition~\ref{D: Axiom 1} for this $k_2(x)$ and for any $R > 1$. \begin{proof}[Proof of (1) in Theorem~\ref{Thm: Global Limit}] Assume that $x \in Q \cap U'$ for some compact subset $Q \subset U$; then $F_x$ satisfies \textit{ $(R;M(Q),k_1(Q),k_2(x))$-assumptions} on $C_{R+1}$ for each $R >0$. By Theorem~\ref{Thm: L^1 Bound on Number}, \begin{equation} \bar{\nu}(x) =\lim_{R \to \infty} \frac{\mc{E}(N(R;F_x))}{|C_R|} \leq D_1(n,m,M(Q),k_1(Q)). \end{equation} The measurability follows directly once we consider the function \begin{equation} \nu_{R,L}(x,w) \equiv \frac{N(R;F_{x,L})}{|C_R|}, \end{equation} defined on $U_{-(R+1)/L} \times \Omega'$ for $U_{-r} \equiv \{z \in U \ | \ \dist(z,\pa U) > r\}$ and $\Omega' = \{w \in \Omega \ | \ F_L \in C_* ^1 (\mb{R}^n,\mb{R}^m) \text{ for all } L \}$. Notice that $\mc{P}(\Omega \backslash \Omega') = 0$ by Bulinskaya's lemma. The measurability of $\nu_{R,L}(x,w)$ follows from the measurability of compositions of lower semicontinuous and measurable maps. Then, we claim that for any fixed $x \in Q \cap U'$, \begin{equation}\label{E: Double Limit in L^1} \lim_{R \to \infty} \limsup_{L \to \infty} \mc{E}(|\nu_{R,L}(x, \cdot) - \bar{\nu}(x)|) = 0, \end{equation} which is a stronger statement than Lemma~\ref{L: Double Limit in Probability}. The measurability of $\bar{\nu}(x)$ follows from the Fubini-Tonelli theorem.
First, we notice that for each $x \in Q \cap U'$ and each $R >0$, when $L > L_0(R,x)$, Theorem~\ref{Thm: L^q Bound on Number} gives that \begin{equation} \mc{E}({(\nu_{R,L}(x, \cdot))}^q) \leq D_2 \end{equation} for some $q = q(n,m)>1$ and $D_2 = D_2(n,m,M(Q),k_1(Q), k_2(x))< \infty$. On the other hand, given any $\epsilon > 0$, Lemma~\ref{L: Double Limit in Probability} tells us that \begin{equation} \lim_{R \to \infty} \limsup_{L \to \infty} \mc{P}(\Omega_\epsilon) = 0, \end{equation} where $\Omega_\epsilon \equiv \{ w \in \Omega' \ | \ |\nu_{R,L}(x,w) - \bar{\nu}(x) | > \epsilon\}$. Hence, when $L > L_0(R,x)$, splitting the expectation over $\Omega_\epsilon$ and its complement and applying the H\"{o}lder inequality on $\Omega_\epsilon$, we get \begin{equation} 0 \leq \mc{E}(|\nu_{R,L}(x, \cdot) - \bar{\nu}(x)|) \leq \epsilon + {(\mc{P}(\Omega_\epsilon))}^{1- \frac{1}{q}} \cdot C(n,m,D_1,D_2). \end{equation} Since $D_1,D_2$ are independent of $R,L$, we can let $L \to \infty$ first, then let $R \to \infty$, and finally let $\epsilon \to 0$. This finishes the proof of the claim. \end{proof} \begin{proof}[Proof of (2) in Theorem~\ref{Thm: Global Limit}] The proof follows strategies similar to those in~\cite{NS16,SW19} for $m=1$, but we still need to modify it using our earlier theorems and lemmas for $m>1$. We include the proof, adapted to our setting, here. We can assume that $\varphi \geq 0$ and set $Q = \spt(\varphi)$. Fix an arbitrary $\delta \in (0,1)$ such that $Q_{+4\delta} \subset U$, and denote $Q_1 = Q_{+ \delta}$, $Q_2 = Q_{ + 2\delta}$, where $Q_{+\delta} \equiv \{z \in \mb{R}^n \ | \ \dist(z,Q) \leq \delta \}$. For any $x \in Q_1$, let $\varphi_{-}(x) \equiv \inf_{C_\delta(x)} \varphi$ and $\varphi_{+}(x) \equiv \sup_{C_\delta(x)} \varphi$, where we recall that $C_\delta(x)$ is the open cube $C_\delta(0)$ shifted so that its center is $x$. Then, for parameters $T, R ,L$, with $1 < T < R < \delta L$, we have that $\varphi_{-}(x) \leq \varphi(y) \leq \varphi_{+}(x)$ for any $y \in C_{R/L}(x)$.
Then, we obtain that \begin{equation}\label{E: Global Limit Sandwich 1} \begin{split} \int_{Q_1} \varphi_{-}(x) \nu_{R,L}(x,w) \ dx &\leq \int_{Q_1} \varphi_{-}(x) \frac{n_L(C_{R/L}(x))}{|C_R|} \ dx \\ &\leq \int_{Q_1} \int_{C_{R/L}(x)} \frac{\varphi(y)}{|C_R|} \ dn_L(y) \ dx=\frac{1}{L^n} \int_U \varphi(y) \ dn_L(y) \\ &\leq \int_{Q_1} \varphi_{+}(x) \frac{n_L(C_{R/L}(x))}{|C_R|} \ dx. \end{split} \end{equation} Then, we cover $Q_2$ with $C(n) \cdot |Q_2| \cdot {(L/T)}^n$ open cubes with side lengths $(T/L)$, and we denote these cubes by $\{C_j\}$. For example, one can consider an approximation of $Q_2$ by dyadic cubes. Each component of $Z(F_L) \cap C_{R/L}(x)$ that does not intersect the boundary of any $C_j$ is fully contained in one $C_j$; since such a component also meets $C_{R/L}(x)$, it is fully contained in $C_{(R+T)/L}(x)$. The number of those components of $Z(F_L)$ that intersect the boundary of at least one $C_j$ is bounded by $\sum_j \mfk{N}_\# ( C_j; F_L)$. Recall that, by Bulinskaya's lemma and the discussions after the definition of $\mfk{N}_\# (R;G)$ in (\ref{E: Definition of Residue Parts}), for each $j$, with probability $1$, $\mfk{N}_\# ( C_j; F_L)$ is finite, and $\mc{E}(\mfk{N}_\# ( C_j; F_L)) \leq C(n,m,M(Q),k_1(Q)) \cdot {(T)}^{n-1}$ when $L$ is large. Hence, \begin{equation}\label{E: Global Limit Sandwich 2} \begin{split} \int_{Q_1} \varphi_{+}(x) \frac{n_L(C_{R/L}(x))}{|C_R|} \ dx &\leq {\bigg(\frac{R+T}{R}\bigg)}^n \cdot \int_{Q_1} \varphi_{+}(x) \nu_{R+T,L}(x,w) \ dx \\ & \quad + (\sup_U \varphi) \cdot L^{-n} \cdot (\sum_j \mfk{N}_\# ( C_j; F_L)).
\end{split} \end{equation} For the $\int_U \varphi(x) \bar{\nu}(x) dx$ term, letting $\omega_\varphi(\cdot)$ be the modulus of continuity of $\varphi$, we have \begin{equation} \begin{split} \int_U \varphi(x) \bar{\nu}(x) dx &\geq {\bigg(\frac{R + T}{R} \bigg)}^n \cdot \int_{Q_1} \varphi_{+}(x) \bar{\nu}(x) \ dx \\ & \quad - C(n) \cdot (T/R) \cdot (\sup_U \varphi) \cdot ||\bar{\nu}||_{L^\infty(Q_1)} \cdot |Q_1| - \omega_\varphi(\delta) \cdot ||\bar{\nu}||_{L^\infty(Q_1)} \cdot |Q_1|. \end{split} \end{equation} Combining this with (\ref{E: Global Limit Sandwich 1}) and (\ref{E: Global Limit Sandwich 2}), we see that \begin{equation} \begin{split} & \quad \frac{1}{L^n} \int_U \varphi(y) \ dn_L(y) - \int_U \varphi(x) \bar{\nu}(x) dx \\ &\leq {\bigg(\frac{R+T}{R}\bigg)}^n \cdot (\sup_U \varphi) \cdot \int_{Q_1} \big|\nu_{R+T,L}(x,w) - \bar{\nu}(x) \big|\ dx \\ & \quad + C(n) \cdot (T/R) \cdot (\sup_U \varphi) \cdot ||\bar{\nu}||_{L^\infty(Q_1)} \cdot |Q_1| + \omega_\varphi(\delta) \cdot ||\bar{\nu}||_{L^\infty(Q_1)} \cdot |Q_1| \\ & \quad + (\sup_U \varphi) \cdot L^{-n} \cdot (\sum_j \mfk{N}_\# ( C_j; F_L)), \end{split} \end{equation} and \begin{equation} \begin{split} & \quad \frac{1}{L^n} \int_U \varphi(y) \ dn_L(y) - \int_U \varphi(x) \bar{\nu}(x) dx \\ &\geq -(\sup_U \varphi) \cdot \int_{Q_1} \big|\nu_{R,L}(x,w) - \bar{\nu}(x) \big|\ dx - \omega_\varphi (\delta) \cdot ||\bar{\nu}||_{L^\infty(Q_1)} \cdot |Q_1| .
\end{split} \end{equation} Hence, \begin{equation}\label{E: Global Limit Sandwich 3} \begin{split} & \quad \mc{E}^* \bigg[ \bigg | \frac{1}{L^n} \int_U \varphi(x) dn_L(x) - \int_U \varphi(x) \bar{\nu}(x) dx \bigg | \bigg] \\ &\leq 2^n \cdot (\sup_U \varphi) \cdot \int_{Q_1} \mc{E}\big(\big|\nu_{R+T,L}(x,\cdot) - \bar{\nu}(x) \big| \big)\ dx \\ & \quad + (\sup_U \varphi) \cdot \int_{Q_1} \mc{E} \big(\big|\nu_{R,L}(x,w) - \bar{\nu}(x) \big| \big)\ dx + (\sup_U \varphi) \cdot L^{-n} \cdot (\sum_j \mc{E}\big( \mfk{N}_\# ( C_j; F_L) \big) ) \\ & \quad + C(n) \cdot (T/R) \cdot (\sup_U \varphi) \cdot ||\bar{\nu}||_{L^\infty(Q_1)} \cdot |Q_1| + \omega_\varphi(\delta) \cdot ||\bar{\nu}||_{L^\infty(Q_1)} \cdot |Q_1| . \end{split} \end{equation} Notice that, by $(B1)$ and $(B2)$ in Definition~\ref{D: Axiom 2}, for fixed $R>T>1$, there is a uniform $L_1 = L_1(R)>0$, such that when $L> L_1$, \begin{equation} \mc{E} \big(\big|\nu_{R+T,L}(x,w) - \bar{\nu}(x) \big| \big) \leq D_1 (n,m,M(Q),k_1(Q)), \end{equation} for every $x \in Q_1$ by Theorem~\ref{Thm: L^1 Bound on Number}. Hence, the function \begin{equation} \eta_R(x) \equiv \limsup_{L \to \infty} \mc{E} \big(\big|\nu_{R,L}(x,w) - \bar{\nu}(x) \big| \big) \end{equation} has the same upper bound $D_1$. Recall that the claim (\ref{E: Double Limit in L^1}) in proving part (1) of Theorem~\ref{Thm: Global Limit} shows that $\lim_{R \to \infty} \eta_R(x) = 0$. Hence, applying $\limsup_{L \to \infty}$ first and then $\lim_{R \to \infty}$, the first two terms in (\ref{E: Global Limit Sandwich 3}) go to $0$ by the dominated convergence theorem. The fourth term also goes to $0$, since $T$ remains fixed at this step while $R \to \infty$. As previously mentioned, since ${\{F_L\}}_{L \in \mc{L}}$ satisfies $(B1)$ and $(B2)$ in Definition~\ref{D: Axiom 2}, when $L >L_1(R)$, for each $j$, $ \mc{E}(\mfk{N}_\# ( C_j; F_L)) \leq C(n,m,M(Q),k_1(Q)) \cdot {(T)}^{n-1}$.
So, the third term is bounded by $(\sup_U \varphi ) \cdot C(n,m,M(Q) , k_1(Q) ) \cdot T^{-1}$ for any fixed $T < R < \delta L$. We now let $T \to \infty$ and finally let $\delta \to 0$; we can then see that the right hand side of (\ref{E: Global Limit Sandwich 3}) goes to $0$. This finishes the proof. \end{proof} The proof of Theorem~\ref{Main Results: Global 2} is the same as the proof of Theorem 3 in~\cite{NS16} once we have established Theorem~\ref{Thm: Global Limit} as above. Therefore, we omit the details. Instead, let us explore the general Betti numbers of tame parametric ensembles ${\{F_L\}}_{L \in \mc{L}}$ defined on an $n$-dimensional closed $C^3$-manifold $X$. Recall that we say that a parametric Gaussian ensemble ${\{F_L\}}_{L \in \mc{L}}$ is tame on $X$ if for every $C^3$-chart $\pi: U \subset \mb{R}^n \to X$, ${\{F_L \circ \pi \}}_{L \in \mc{L}}$ is tame on $U$ as in Definition~\ref{D: Axiom 3}. For each $l = 0, \dots,n-m$, we will use $\beta_l(F_L)$ to denote the sum of the $l$-th Betti numbers over $\mb{R}$ of all connected components of $Z(F_L)$ in $X$. For each $x \in X$, let $F_x$ be the translation invariant local limit of $\{F_L\}$ at $x$. By Theorem~\ref{Main Results: Local Betti}, we get a limiting constant $\nu_{l;F_x}$. We then define a function $\bar{\nu}_l(x) = \nu_{l;F_x}$ on $X$. The following theorem also holds true if $X$ is noncompact and without boundary, but one needs to modify the definition of $\beta_l(F_L)$ to be the sum of the Betti numbers of those connected components that are fully contained in a geodesic ball with a finite radius in $X$. \begin{theorem}\label{Thm: Global Betti} \textit{ Assume that ${\{F_L\}}_{L \in \mc{L}}$ is tame on $X$ and $X$ is closed.
Then, for each $l = 0, \dots, n-m$, the function $x \mapsto \bar{\nu}_l(x)$ is measurable and bounded on $X$, and there is a positive constant $C_l = C_l(n,m,M,k_1)$, such that \begin{equation} \int_{X} \bar{\nu}_l(x) \ d \vol( x) \leq \liminf_{L \to \infty}\frac{\mc{E}(\beta_l(F_L))}{L^n} \leq \limsup_{L \to \infty}\frac{\mc{E}(\beta_l(F_L))}{L^n} \leq C_l \cdot |X|, \end{equation} where $|X|$ is the volume of $X$, and the two positive constants $M = M(X)$ and $k_1 = k_1(X)$ are chosen in Definition~\ref{D: Axiom 2}. } \end{theorem} \begin{proof}[Proof of the lower bound in Theorem~\ref{Thm: Global Betti}] The proof of the measurability and local boundedness of $\bar{\nu}_l(x)$ is the same as for part (1) of Theorem~\ref{Thm: Global Limit}, because one can verify these properties on local charts. Since $X$ is closed, $\bar{\nu}_l$ is actually bounded on $X$. Similarly, we can obtain the following $L^1$-convergence. Choose a local chart $\pi : U \to X$ with a bounded open set $U \subset \mb{R}^n$. For $x \in \pi(U)$ and $y = \pi^{-1}(x)$, we define a random function \begin{equation} \nu_{l;R,L}(y,w) \equiv \frac{\beta_l(R;{(F\circ \pi)}_{y,L})}{|C_R|} , \end{equation} for $w \in \Omega$. Here ${(F\circ \pi)}_{y,L}$ is the pull-back of $F_L$ from $\pi(U)$ to $U$ and is rescaled, i.e., \begin{equation} {(F\circ \pi)}_{y,L}(u) \equiv F_L(\pi(y + L^{-1}u)), \end{equation} for $u \in \mb{R}^n$ with $y + L^{-1}u \in U$. Hence, $\beta_l(R;{(F\circ \pi)}_{y,L}) = \beta_l(C_{R/L}(y); F_L \circ \pi)$. Then, using Theorem~\ref{Thm: L^q Bound on Betti}, we can similarly get, as in the claim (\ref{E: Double Limit in L^1}) in proving part (1) of Theorem~\ref{Thm: Global Limit}, that \begin{equation}\label{E: Double Limit in L^1 for Betti} \lim_{R \to \infty} \limsup_{L \to \infty} \mc{E}(|\nu_{l;R,L}(y, \cdot) - \bar{\nu}_l (x) \sqrt{\det{g_y}}|) = 0.
\end{equation} Here $g_y$ is the metric tensor of $X$ at $x = \pi(y)$; the determinant factor accounts for the change of coordinates, because we use $|C_R|$, the standard $\mb{R}^n$ volume, to define $\nu_{l;R,L}(y,w)$. For more discussions on change of coordinates, see Section 9 of~\cite{NS16}. Recall that $\beta_l(R;{(F\circ \pi)}_{y,L})$ only counts those connected components that are fully contained in the cube $C_R$. We set parameters $R,L$ with $1 <R < \delta L$ for some $\delta \in (0,1)$, and let $Q \subset U$ be a compact subset such that $Q_{+ \delta} \subset U$ but $Q_{+2 \delta} \nsubseteq U$. We have that \begin{equation}\label{E: Betti Global Lower Bound} \begin{split} &\quad \int_Q \nu_{l;R,L}(y,w) \ dy =\int_Q \frac{\beta_l(C_{R/L}(y);F_L \circ \pi)}{|C_R|} \ dy \\ & = \frac{1}{|C_R|} \int_Q \sum_{\gamma \subset Z(F_L \circ \pi)} \chi_{\gamma \subset C_{R/L}(y)} \cdot \beta_l(\gamma) \ dy \\ & \leq \frac{1}{|C_R|} \sum_{\gamma \subset Z(F_L \circ \pi) \cap U}\ \beta_l(\gamma) \int_Q \chi_{\gamma \subset C_{R/L}(y)} \ dy \\ & \leq \frac{|C_{R/L}|}{|C_R|} \sum_{\gamma \subset Z(F_L \circ \pi) \cap U}\ \beta_l(\gamma) = \frac{\beta_l(U;F_L \circ \pi)}{L^n} = \frac{\beta_l(\pi(U);F_L) }{L^n}. \end{split} \end{equation} Here, $\chi$ is the indicator function, and we use $\beta_l(\pi(U);F_L)$ to denote the sum of the $l$-th Betti numbers of all connected components of $Z(F_L)$ fully contained in the open set $\pi(U)$. The last inequality is because \begin{equation} \int_Q \chi_{\gamma \subset C_{R/L}(y)} \ dy= |\{y \in Q \ \vert \ y \in \cap_{z \in \gamma} C_{R/L}(z) \}| \leq |C_{R/L}|. \end{equation} Then, \begin{equation} \begin{split} \int_{Q} \bar{\nu}_l(\pi(y)) \sqrt{\det{g_y}} \ dy &\leq \int_Q \mc{E}( \nu_{l;R,L}(y,\cdot) )\ dy + \int_Q \mc{E}(|\nu_{l;R,L}(y, \cdot) - \bar{\nu}_l (x) \sqrt{\det{g_y}}|) \ dy
\\ &\leq \frac{\mc{E}(\beta_l(\pi(U);F_L))}{L^n} + \int_Q \mc{E}(|\nu_{l;R,L}(y, \cdot) - \bar{\nu}_l (x) \sqrt{\det{g_y}}|) \ dy. \end{split} \end{equation} Now use (\ref{E: Double Limit in L^1 for Betti}) and take the limits $\lim_{R \to \infty} \liminf_{L \to \infty}$ on both sides. Notice that it is $\limsup_{L \to \infty}$ in (\ref{E: Double Limit in L^1 for Betti}). Hence, we can apply the dominated convergence theorem to the second term of the right hand side, which is dominated by the constant in Theorem~\ref{Thm: L^1 Bound on Betti}. We then get that \begin{equation} \int_{\pi(Q)} \bar{\nu}_l(x) \ d \vol (x) = \int_{Q} \bar{\nu}_l(\pi(y)) \sqrt{\det{g_y}} \ dy \leq \liminf_{L \to \infty} \frac{\mc{E}(\beta_l(\pi(U);F_L))}{L^n}. \end{equation} Since $\delta$ and $Q$ were chosen arbitrarily, we see that \begin{equation} \int_{\pi(U)} \bar{\nu}_l(x) \ dx \leq \liminf_{L \to \infty} \frac{\mc{E}(\beta_l(\pi(U);F_L))}{L^n}. \end{equation} Finally, consider a triangulation of $X$ giving a finite partition ${\{\pi_j(U_j)\}}_{j=1} ^{J}$ such that each $(U_j , \pi_j)$ is a local chart on $X$. Then, \begin{equation} \int_{X} \bar{\nu}_l(x) \ dx = \sum_{j=1}^J \int_{\pi_j(U_j)} \bar{\nu}_l(x) \ dx \leq \liminf_{L \to \infty} \sum_{j=1}^J \frac{\mc{E}(\beta_l(\pi_j(U_j);F_L))}{L^n} \leq \liminf_{L \to \infty} \frac{\mc{E}(\beta_l(X;F_L))}{L^n}. \end{equation} \end{proof} For the upper bound in Theorem~\ref{Thm: Global Betti}, the methods we used for Theorem~\ref{Thm: Global Limit} do not work, because one can no longer discard those giant components that intersect the boundaries of the small cubes $\{C_j\}$. So, we do not know whether the limits exist in the Betti number case. In fact, those giant components exist in many known concrete examples. Readers can see more discussions on page 6 of~\cite{W21} and references therein. Here, we use the Chern-Lashof inequality in Theorem~\ref{Thm: Chern} to calculate a global upper bound directly.
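Before the proof, let us briefly recall the shape of the Chern-Lashof-type bound used below (see Theorem~\ref{Thm: Chern}): for a closed $d$-dimensional submanifold $M \subset \mb{R}^N$, the total Betti number is controlled by the total curvature,
\begin{equation}
\sum_{l=0}^{d} \beta_l(M) \leq C \cdot \int_M ||\II^M(x)||^{d} \ d\mc{H}^{d}(x),
\end{equation}
where $C$ depends only on the dimensions. Applied with $d = n-m$ to each connected component of $Z(F_L)$ and summed over the components, this reduces the desired upper bound to the expectation of a curvature integral, which the Kac-Rice formula then converts into a pointwise Gaussian computation.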
\begin{proof}[Proof of the upper bound in Theorem~\ref{Thm: Global Betti}] We remark that this upper bound result actually holds true for uniformly controllable Gaussian ensembles ${\{F_L\}}_{L \in \mc{L}}$, i.e., for ${\{F_L\}}_{L \in \mc{L} }$ only satisfying Definition~\ref{D: Axiom 2}. Since $X$ is a $C^3$-manifold, let $\Phi:X \to \mb{R}^{n+q}$ for some $q > 0$ be an isometric embedding, whose existence follows from Nash's embedding theorem. Then, we can regard $X$ as a submanifold of $\mb{R}^{n+q}$ and also regard $Z(F_L) = F_L ^{-1}(0)$ as a union of finitely many closed submanifolds of $\mb{R}^{n+q}$. According to Theorem~\ref{Thm: Chern}, similar to the proofs of Theorem~\ref{Thm: L^1 Bound on Number} and Theorem~\ref{Thm: L^1 Bound on Betti} (see (\ref{E: Bound Components Number})), one can get that \begin{equation} \beta_l(X;F_L) \leq C(n,m) \cdot \int_{Z(F_L)} || \II^{Z(F_L)} (x)||^{n-m} \ d \mc{H}^{n-m}(x). \end{equation} Here, the second fundamental form is the one for $Z(F_L)$ as submanifolds of $\mb{R}^{n+q}$. Then, by Theorem~\ref{Thm: APP Curvature}, \begin{equation} \beta_l(X;F_L) \leq C(n,m) \int_{Z(F_L)} C(n,m,X) + ||A_L ^{-1}||^{\frac{(n-m)}{2}} \cdot || {(\nabla^X)}^2 F_L ||^{(n-m)} \ d \mc{H}^{n-m}(x), \end{equation} where $C(n,m,X)$ is a constant depending on $X$ and the embedding map $\Phi$, and $A_L$ is an $m \times m$ matrix, whose elements are ${(A_L)}_{\alpha \beta} = \langle (\nabla^X f_{L,\alpha}) , (\nabla^X f_{L,\beta}) \rangle_X$. We also use $\nabla^X$ to denote gradients on $X$, use ${(\nabla^X)}^2$ to denote Hessians on $X$, and use $\langle \cdot , \cdot \rangle_X$ to denote the Riemannian metric tensor on $X$. One can use a trick similar to that in (\ref{E: Bound Mean Curvature}) to obtain an upper bound on $||A_L ^{-1}||$ by $|\det A_L | ^{-1} \cdot ||\nabla^X F_L||^{2(m-1)}$.
So, again, we can let the Gaussian field be \begin{equation} W_L = (\nabla^X F_L , {(\nabla^X)}^2 F_L ) \end{equation} in the Kac-Rice formula, Theorem~\ref{Thm: Kac-Rice}, and also set \begin{equation} \begin{split} & \quad h(x,W_L(x)) \\ &= C(n,m,X) + \big|\det A_L (x) \big|^{-\frac{(n-m)}{2}} ||\nabla^X F_L(x)||^{(n-m)(m-1)} {||{(\nabla^X)}^2 F_L(x) ||}^{(n-m)}. \end{split} \end{equation} Hence, \begin{equation} \begin{split} \mc{E}(\beta_l(X;F_L)) &\leq \mc{E} \bigg( \int_{Z(F_L)} h(x,W_L(x))\ d \mc{H}^{n-m}(x) \bigg) \\ &= \int_{X} \mc{E} \big[ h(x,W_L(x)) \cdot |\det A_L (x) |^{1/2} \ \big| \ F_L(x) = 0\big] \cdot p_{F_L(x)}(0)\ d \vol (x). \end{split} \end{equation} The term $p_{F_L(x)}(0)$ is bounded by a constant $C = C(n,m,k_1,M)>0$ according to $(B1)$ and $(B2)$ in Definition~\ref{D: Axiom 2} with positive $M = M(X)$ and $k_1 = k_1(X)$. So, let us estimate the conditional expectation term at each $x \in X$. For a local chart $(U,\pi)$ around $x \in \pi(U)$, after choosing local normal coordinates $(u_1, \dots, u_n)$ around $x = 0$, the $\nabla^X $ and ${(\nabla^X)}^2$ are just the usual gradient and the usual Hessian at $x=0$. Then, we rewrite them with the rescaled Gaussian fields we defined at the beginning of Section~\ref{Section: Local Double Limit and Global Limit}, i.e., $F_{x,L}(u) = F_L \circ \pi (x + L^{-1} u)$, $\nabla^X F_L(x) = L \cdot \nabla F_{x,L}(0)$, and ${(\nabla^X)}^2F_L(x) = L^2 \cdot (\nabla ^2 F_{x,L}(0))$. Also, the elements of $A_L$ then become ${(A_L)}_{\alpha \beta} = L^2 \cdot \langle (\nabla {(F_{x,L})}_\alpha (0) ) , (\nabla {(F_{x,L})}_\beta (0) ) \rangle_{\mb{R}^n} \equiv L^2 \cdot {(A_{x,L})}_{\alpha \beta}$, where $\langle \cdot , \cdot \rangle_{\mb{R}^n}$ is the standard inner product in $\mb{R}^n$ formed by coordinates $(u_1, \dots,u_n)$. 
Then, \begin{equation} \begin{split} & \quad \big|\det A_L (x) \big|^{-\frac{(n-m)}{2}} ||\nabla^X F_L(x)||^{(n-m)(m-1)} ||{(\nabla^X)}^2 F_L(x) ||^{(n-m)} \\ &= L^{-m(n-m)}\big|\det A_{x,L} (0) \big|^{-\frac{(n-m)}{2}} \cdot L^{(n-m)(m-1)}||\nabla F_{x,L}(0)||^{(n-m)(m-1)} \\ & \quad \cdot L^{2(n-m)} {||{(\nabla)}^2 F_{x,L}(0) ||}^{(n-m)} \\ &= L^{(n-m)}\big|\det A_{x,L} (0) \big|^{-\frac{(n-m)}{2}} ||\nabla F_{x,L}(0)||^{(n-m)(m-1)} ||{(\nabla)}^2 F_{x,L}(0) ||^{(n-m)} \\ &\equiv L^{(n-m)} \cdot P(F_{x,L})(0), \end{split} \end{equation} where $P(F_{x,L})(0)$ denotes the long expression above, introduced to simplify the notation in this proof. Hence, the conditional expectation term at $x$ becomes \begin{equation} \begin{split} & \ \quad \mc{E} \big[ h(x,W_L(x)) \cdot |\det A_L (x) |^{1/2} \ \big| \ F_L(x) = 0\big] \\ &= L^m \cdot C(n,m,X) \cdot \mc{E} \big[ |\det A_{x,L}(0) |^{1/2} \ \big| \ F_{x,L}(0) = 0\big] \\ & + L^n \cdot \mc{E}\big[ P(F_{x,L})(0) \cdot |\det A_{x,L}(0) |^{1/2} \ \big| \ F_{x,L}(0) = 0\big] . \end{split} \end{equation} Recall that $F_L \circ \pi$ satisfies $(B1)$ and $(B2)$ in Definition~\ref{D: Axiom 2} with $M = M(X), k_1 = k_1(X)$, when $L$ is large. We then see, by the same proof as in Theorem~\ref{Thm: L^1 Bound on Number}, that \begin{equation} \mc{E} \big[ |\det A_{x,L}(0) |^{1/2} \ \big| \ F_{x,L}(0) = 0\big] \leq C(n,m,M,k_1), \end{equation} and \begin{equation} \mc{E}\big[ P(F_{x,L})(0) \cdot |\det A_{x,L}(0) |^{1/2} \ \big| \ F_{x,L}(0) = 0\big] \leq C(n,m,M,k_1), \end{equation} for some positive constants $C(n,m,M,k_1)$. Since $n>m$, the second term, with the factor $L^n$, dominates.
Hence, \begin{equation} \begin{split} & \quad \limsup_{L \to \infty}\frac{\mc{E}(\beta_l(F_L))}{L^n} \\ & \leq \limsup_{L \to \infty} \frac{C(n,m,M,k_1)}{L^n} \int_{X} (L^m \cdot C(n,m,X) + L^n) \ d \mc{H}^n(x) = C_l \cdot |X|, \end{split} \end{equation} where $C_l = C_l(n,m,M,k_1)$ is the constant $C(n,m,M,k_1)$ above, which is independent of the constant $C(n,m,X)$ and of the arbitrarily chosen embedding $\Phi$. \end{proof} \begin{appendix} \section{Curvature Computations}\label{APP: Computation} In this appendix, we give some preliminary computations for the curvatures involved in the two geometric inequalities, Theorem~\ref{Thm: Fenchel} and Theorem~\ref{Thm: Chern}. More precisely, we will compute second fundamental forms and mean curvatures for submanifolds obtained from zero sets. Let $X \subset \mb{R}^{n+q}$ be an $n$-dimensional $C^3$-manifold embedded in $\mb{R}^{n+q}$ for some $q \geq 0$. A special case is when $q=0$ and $X = \mb{R}^n$. We let $M$ be an $(n-m)$-dimensional manifold embedded in $X$. Hence, $M$ is also a submanifold of $\mb{R}^{n+q}$. We assume that $M$ is realized as a regular part of the zero set of a non-degenerate $m$-dimensional $C^2$-vector field $F$ on $X$, i.e., for $F = (f_1, \dots, f_m)$, $M$ is a connected component of $F^{-1}(0) = Z(F)$, and on $M$, $\nabla^X F$, the gradient of $F$ with respect to the Riemannian connection on $X$, is of full rank. Choose a point $p_0 \in M$, and assume that $p = (t_1, \dots, t_{n-m})$ are local coordinates on $M$ around $p_0 = (0, \dots,0)$. Denote the embedding from $M$ into $X$ as $\varphi$, and choose $x = (x_1, \dots, x_n)$ as local coordinates on $X$ around $x_0 = \varphi(p_0)$. So, one may write $\varphi = (\varphi^1, \dots, \varphi^n)$. We denote the embedding from $X$ into $\mb{R}^{n+q}$ by $\Phi$. If we view all these objects as subsets of $\mb{R}^{n+q}$, then $p_0$, $x_0 = \varphi(p_0)$, and $y_0 = \Phi(x_0) = \Phi(\varphi(p_0))$ are actually the same point.
Our aim in this section is to compute the second fundamental form and mean curvature of $M$ as a submanifold of $\mb{R}^{n+q}$ at the point $y_0 = \Phi(\varphi(p_0))$. Since we only consider tensors at a fixed point, for simplicity, let us assume that $(t_1 , \dots, t_{n-m})$ and $(x_1, \dots,x_n)$ are local normal coordinates on $M$ and $X$ at the points $p_0, x_0 = \varphi(p_0)$ respectively. That is, at $x_0$, \begin{equation}\label{E: APP 0 orthonormal 1} \langle \partial_{x_l} \Phi , \partial_{x_k} \Phi \rangle_{\mb{R}^{n+q}} = \delta_{lk}, \end{equation} for $\langle \cdot , \cdot \rangle_{\mb{R}^{n+q}}$ the standard inner product in $\mb{R}^{n+q}$, together with \begin{equation}\label{E: APP 0 normal coordinates 1} \partial^2 _{x_l x_k} \Phi \in T_{y_0} ^{\perp} X, \quad \forall \ l,k = 1, \dots, n, \end{equation} where $T_{y_0} ^{\perp} X$ is the normal vector space of $X$ at $y_0 = \Phi(x_0)$ in $\mb{R}^{n+q}$. At $p_0$, \begin{equation}\label{E: APP 0 orthonormal 2} \sum_{l} (\partial_{t_i} \varphi^l)(\partial_{t_j} \varphi^l) = \delta_{ij}, \end{equation} together with \begin{equation} \sum_l (\partial^2 _{t_i t_j}\varphi^l) (\partial_{t_s}\varphi^l) = 0, \quad \forall \ i,j,s = 1, \dots , n-m. \end{equation} We first consider the second fundamental form $\II = \II^M$ of $M$ as a submanifold of $\mb{R}^{n+q}$ at $y_0 = \Phi(\varphi(p_0))$, which is defined by \begin{equation} \langle \II_{ij} , \mf{n} \rangle \equiv \bigg\langle \frac{\partial^2(\Phi \circ \varphi)}{\partial t_i \partial t_j }(p_0) , \mf{n} \bigg\rangle_{\mb{R}^{n+q}}, \end{equation} where $\mf{n} \in T_{y_0} ^{\perp} M$ is a unit normal vector of $M$ at $y_0 $ in $\mb{R}^{n+q}$. Let us compute these derivatives.
\begin{equation} \partial_{t_j}(\Phi \circ \varphi) = \sum_{l} (\partial_{x_l} \Phi) (\partial_{t_j} \varphi^{l}), \end{equation} and \begin{equation} \partial^2_{t_i t_j} (\Phi \circ \varphi) = \sum_{l,k} (\partial^2 _{x_l x_k}\Phi) (\partial_{t_j}\varphi^l) (\partial_{t_i}\varphi^k) + \sum_l (\partial_{x_l}\Phi) (\partial^2 _{t_i t_j} \varphi^l). \end{equation} If $\mf{n} \in T_{y_0} ^{\perp} X$, then $\mf{n} \perp (\partial_{x_l}\Phi)$. Hence, \begin{equation} |\langle \II_{ij} , \mf{n} \rangle| = \bigg| \sum_{l,k} \big\langle (\partial^2 _{x_l x_k}\Phi) , \mf{n} \big\rangle (\partial_{t_j}\varphi^l) (\partial_{t_i}\varphi^k) \bigg | \leq ||\II^X(y_0) ||. \end{equation} The last inequality holds because of (\ref{E: APP 0 orthonormal 2}), which says that $\partial_{t_i} \varphi$ and $\partial_{t_j} \varphi$ are unit vectors; so the value is bounded by the norm of the matrix, which is $||\II^X(y_0)||$, the norm of the second fundamental form of $X$ as a submanifold of $\mb{R}^{n+q}$ at $y_0$. This is a constant only depending on $X$ and the embedding $\Phi$, so we denote an upper bound of this constant by $C(X)>0$. If $\mf{n} \in T_{y_0} X \cap T_{y_0} ^{\perp} M$, i.e., $\mf{n}$ is tangent to $X$ but normal to $M$ at $y_0$ in $\mb{R}^{n+q}$, we have that \begin{equation}\label{E: APP 0 Estimates 1} |\langle \II_{ij} , \mf{n} \rangle| = \bigg| \sum_{l} \big\langle (\partial _{x_l}\Phi) , \mf{n} \big\rangle (\partial^2 _{t_i t_j}\varphi^l) \bigg |, \end{equation} which follows from (\ref{E: APP 0 normal coordinates 1}), since $\partial^2 _{x_l x_k} \Phi \perp \mf{n}$. Now, let us use the fact that $M$ is realized as a part of $Z(F)$ to carry out further calculations and estimate (\ref{E: APP 0 Estimates 1}).
Since $f_\alpha(\varphi) = 0$ for $\alpha = 1, \dots , m$, we differentiate these equations and get that \begin{equation}\label{E: APP 0 Function Derivatives 1} \sum_l ( \partial_{x_l} f_\alpha) (\partial_{t_j} \varphi^l) = 0, \end{equation} and \begin{equation}\label{E: APP 0 Function Derivatives 2} \sum_{l,k} (\partial^2 _{x_l x_k} f_\alpha)(\partial_{t_j} \varphi^l) (\partial_{t_i} \varphi^k) + \sum_{l} (\partial_{x_l} f_\alpha) (\partial^2 _{t_i t_j} \varphi^l) = 0. \end{equation} Since $\nabla^X F$ is of full rank on $M$, we see that the vectors $\nabla^X f_\alpha \equiv \sum_l (\partial_{x_l} f_\alpha) (\partial_{x_l} \Phi)$ actually form a basis of $T_{y_0} X \cap T_{y_0} ^{\perp} M$. Hence, we take an $m \times m$ symmetric matrix $T = (T^{\alpha \beta})$ such that the vectors \begin{equation} \mf{n}_\alpha \equiv \sum_{\beta} T^{\alpha \beta} (\nabla^X f_\beta) =\sum_{\beta} T^{\alpha \beta} \sum_l (\partial_{x_l} f_\beta) (\partial_{x_l} \Phi) \end{equation} are orthonormal, i.e., $\langle \mf{n}_{\alpha_1} , \mf{n}_{\alpha_2}\rangle = \delta_{\alpha_1 \alpha_2}$. On the other hand, \begin{equation} \langle \mf{n}_{\alpha_1} , \mf{n}_{\alpha_2}\rangle = \sum_{\beta_1, \beta_2} T^{\alpha_1 \beta_1} T^{\alpha_2 \beta_2} \sum_l (\partial_{x_l} f_{\beta_1}) (\partial_{x_l} f_{\beta_2}), \end{equation} where, in matrix notation, the right-hand side is $T A T^T$, with $T^T$ the transpose of $T$, and $A$ the matrix consisting of the inner products of the $\nabla^X f_\beta$, i.e., $A = (A_{\beta_1 \beta_2}) = \big(\sum_l (\partial_{x_l} f_{\beta_1}) (\partial_{x_l} f_{\beta_2})\big) $. Hence, since $T A T^T = \mr{Id}_m$ implies $A = T^{-1} {(T^T)}^{-1}$, \begin{equation} T^T T = A^{-1}.
\end{equation} Then, according to (\ref{E: APP 0 orthonormal 1}), (\ref{E: APP 0 Estimates 1}), and (\ref{E: APP 0 Function Derivatives 2}), we can get that \begin{equation} |\langle \II_{ij} , \mf{n}_\alpha \rangle| = \bigg|\sum_{\beta} T^{\alpha \beta} \sum_{l} (\partial_{x_l} f_\beta) (\partial^2 _{t_i t_j}\varphi^l) \bigg | = \bigg|\sum_{\beta} T^{\alpha \beta} \sum_{l,k} (\partial^2 _{x_l x_k} f_\beta) (\partial_{ t_j}\varphi^l) (\partial_{ t_i}\varphi^k) \bigg |. \end{equation} So, \begin{equation} \begin{split} &\sum_\alpha |\langle \II_{ij} , \mf{n}_\alpha \rangle|^2 = \sum_{\alpha} \bigg|\sum_{\beta} T^{\alpha \beta} \sum_{l,k} (\partial^2 _{x_l x_k} f_\beta) (\partial_{ t_j}\varphi^l) (\partial_{ t_i}\varphi^k) \bigg |^2 \\ &= \sum_{\alpha} \sum_{\beta_1 , \beta_2} T^{\alpha \beta_1} T^{\alpha \beta_2} \bigg[\sum_{l_1,k_1} (\partial^2 _{x_{l_1} x_{k_1}} f_{\beta_1}) (\partial_{ t_j}\varphi^{l_1}) (\partial_{ t_i}\varphi^{k_1}) \bigg] \bigg[\sum_{l_2,k_2} (\partial^2 _{x_{l_2} x_{k_2}} f_{\beta_2}) (\partial_{ t_j}\varphi^{l_2}) (\partial_{ t_i}\varphi^{k_2}) \bigg] \\ &= \sum_{\beta_1 , \beta_2} {(A^{-1})}_{\beta_1 \beta_2} \bigg[\sum_{l_1,k_1} (\partial^2 _{x_{l_1} x_{k_1}} f_{\beta_1}) (\partial_{ t_j}\varphi^{l_1}) (\partial_{ t_i}\varphi^{k_1}) \bigg] \bigg[\sum_{l_2,k_2} (\partial^2 _{x_{l_2} x_{k_2}} f_{\beta_2}) (\partial_{ t_j}\varphi^{l_2}) (\partial_{ t_i}\varphi^{k_2}) \bigg]. \end{split} \end{equation} For an upper bound, by (\ref{E: APP 0 orthonormal 2}) again, we recall that the $\partial_{t_j} \varphi$ are unit vectors. Hence, \begin{equation} \sum_\alpha {|\langle \II_{ij} , \mf{n}_\alpha \rangle|}^2 \leq C(n,m) \cdot ||A^{-1}||\cdot ||{(\nabla^X )}^2 F ||^2, \end{equation} for some positive constant $C(n,m)$, where ${(\nabla^X )}^2 F$ is the Hessian of $F$ and the $||\cdot||$ are matrix norms.
Let $\{\mf{n}_\mu\}$ be an orthonormal basis of $T_{y_0} ^{\perp} X$; then the norm squared of the second fundamental form $\II^M$ at $y_0$ is defined by \begin{equation} ||\II^M(y_0)||^2 \equiv \sum_{i,j} \bigg(\sum_\alpha |\langle \II_{ij} , \mf{n}_\alpha \rangle|^2 + \sum_{\mu} |\langle \II_{ij} , \mf{n}_{\mu} \rangle|^2 \bigg). \end{equation} We can now summarize our results in the following theorem. \begin{theorem}\label{Thm: APP Curvature} \textit{ The second fundamental form $\II^M$ of $M$ as a submanifold of $\mb{R}^{n+q}$ satisfies that \begin{equation} ||\II^{M}||^2\leq C(n,m,X) + C(n,m) \cdot ||A^{-1}||\cdot ||{(\nabla^X )}^2 F ||^2, \end{equation} where the matrix $A$ consists of the inner products of $\nabla^X f_\alpha$ for $\alpha = 1, \dots , m$, and ${(\nabla^X )}^2 F $ is the Hessian. The positive constant $C(n,m,X)$ depends on $X$ and the embedding $\Phi : X \to \mb{R}^{n+q}$. In particular, when $X = \mb{R}^n$, one can let $C(n,m,X) = 0$. } \end{theorem} We remark that the mean curvature $\bH$ at $y_0$ is defined by \begin{equation} \langle \bH(y_0) , \mf{n} \rangle \equiv \sum_{i} \langle \II_{ii} , \mf{n} \rangle, \end{equation} for $\mf{n} \in T_{y_0} ^{\perp} M$. Hence, \begin{equation} ||\bH(y_0)||^2 \equiv \sum_\alpha \bigg|\sum_i \langle \II_{ii} , \mf{n}_\alpha \rangle \bigg|^2 + \sum_{\mu} \bigg|\sum_i\langle \II_{ii} , \mf{n}_{\mu} \rangle \bigg|^2 \leq C(n,m) \cdot || \II^M(y_0) ||^2 , \end{equation} for some positive constant $C(n,m)$; indeed, by the Cauchy--Schwarz inequality, one can take $C(n,m) = \dim M = n-m$. \section{Ergodicity in Theorem~\ref{Main Results: Local} and Theorem~\ref{Thm: Euclidean Random Field}}\label{APP: Ergodicity} In this appendix, we give a sufficient condition to guarantee ergodicity of translations on $(C^1_* (\mb{R}^n ,\mb{R}^{m}), \mfk{B}(C^1_* (\mb{R}^n ,\mb{R}^{m})), \gamma_F )$. Here $\mfk{B}(C^1 _* (\mb{R}^n ,\mb{R}^{m}))$ is the Borel $\sigma$-algebra generated by all open sets in $C^1 _* (\mb{R}^n ,\mb{R}^{m})$.
Recall that $F: \mb{R}^n \to \mb{R}^m$ is a centered stationary Gaussian random field satisfying $(A1)$ and $(A2)$ in Definition~\ref{D: Axiom 1} at the origin $x=0$. We write $F(x) = (f_1(x), \dots,f_m(x))$, and let \begin{equation} k_{i_1 i_2}(x-y) \equiv K_{i_1 i_2}(x,y) = \mc{E}\big(f_{i_1}(x)f_{i_2}(y)\big), \ i_1,i_2 = 1,2,\dots, m, \end{equation} be an $m \times m$ matrix. Notice that, since $F$ is stationary, $\mc{E}\big(f_{i_1}(x)f_{i_2}(y)\big) = \mc{E}\big(f_{i_1}(x-y)f_{i_2}(0)\big)$ and $k_{i_2 i_1}(x-y) = \mc{E}\big(f_{i_2}(x)f_{i_1}(y)\big) = \mc{E}\big(f_{i_2}(0)f_{i_1}(y-x)\big) = k_{i_1 i_2}(y-x)$. Hence, $k_{i_2 i_1}(x) = k_{i_1 i_2}(- x)$ for all $x \in \mb{R}^n$. \begin{theorem} \textit{ If \begin{equation}\label{E: APP A Assumption} \lim_{R \to \infty} \sum_{1 \leq i_1,i_2 \leq m} \frac{1}{|B_R|} \int_{B_R} {(k_{i_1 i_2}(x))}^2 dx = 0, \end{equation} then the translation action of $\mb{R}^n$ on $(C^1_* (\mb{R}^n ,\mb{R}^{m}), \mfk{B}(C^1_* (\mb{R}^n ,\mb{R}^{m})), \gamma_F )$ is ergodic. } \end{theorem} This is the Fomin-Grenander-Maruyama theorem. The proof is inspired by Appendix B of~\cite{NS16} for random functions. Here, we prove it for random fields. \begin{proof} For an $A \in \mfk{B}(C^1_* (\mb{R}^n ,\mb{R}^{m}))$ satisfying $\gamma_F ( \tau_v (A) \Delta A) = 0$ for every $v \in \mb{R}^n$, we need to show that $\gamma_F(A)$ is either $0$ or $1$. Here $(\tau_v G)(x) \equiv G(x+v)$ for any $v \in \mb{R}^n$ and $G \in C^1_* (\mb{R}^n ,\mb{R}^{m})$. Notice that $\mfk{B}(C^1_* (\mb{R}^n ,\mb{R}^{m}))$ is generated by evaluation sets $I(x; a ,b)$ for $x \in \mb{R}^n$ and $a \prec b \in \mb{R}^m$ ($a \prec b$ means that $a_i < b_i$ for all $i = 1, \dots, m$): \begin{equation} I(x; a ,b) = \{G \in C^1_* (\mb{R}^n ,\mb{R}^{m}) \ | \ a_i \leq G_i(x) < b_i \text{ for all } i = 1, \dots, m \}, \end{equation} where $G_i(x)$ is the $i$-th component of $G(x)$.
Hence, for any $\epsilon > 0$, we can take finitely many points $x_1 , \dots ,x_k \in \mb{R}^n$ and a Borel set $ B \subset \mb{R}^{mk}$ so that $\gamma_F(A \Delta P) < \epsilon$ for \begin{equation} P = P(x_1 ,\dots ,x_k ; B ) \equiv \{ G \in C^1_* (\mb{R}^n ,\mb{R}^{m}) \ | \ (G(x_1) , \dots, G(x_k)) \in B\}. \end{equation} We can also assume that the distribution of the Gaussian vector $(F(x_1), \dots, F(x_k))$ is non-degenerate (otherwise, one can pass to a maximal linearly independent subcollection of its components). So, we get that \begin{equation} \gamma_F(P) = {(2\pi)}^{-\frac{mk}{2}} {(\det \Lambda)}^{-\frac{1}{2}} \int_B e^{-\frac{1}{2} \langle \Lambda^{-1}t, t \rangle} \ dt, \end{equation} where $\Lambda_{j_1 i_1, j_2 i_2} = \mc{E}(f_{i_1}(x_{j_1}) f_{i_2}(x_{j_2})) = k_{i_1 i _2}(x_{j_1} - x_{j_2})$ is the covariance matrix for the Gaussian vector $(F(x_1), \dots, F(x_k))$ and it is an $mk \times mk$ matrix, and $\langle \cdot , \cdot \rangle$ is the inner product for vectors in $\mb{R}^{mk}$. Since $\tau_v P = P(x_1 - v , \dots , x_k - v; B)$, we have that \begin{equation} P \cap \tau_v P = P(x_1, \dots , x_k , x_1 - v , \dots , x_k - v; B \times B). \end{equation} If we know that $(F(x_1), \dots, F(x_k) , F(x_1 - v) , \dots, F(x_k - v))$ is also non-degenerate, we can write \begin{equation}\label{E: APP A 1} \gamma_F(P\cap \tau_v P) = {(2\pi)}^{-mk} {(\det \tilde{\Lambda})}^{-\frac{1}{2}} \int_{B \times B} e^{-\frac{1}{2} \langle \tilde{\Lambda}^{-1}t , t \rangle } \ dt, \end{equation} where \begin{equation}\label{E: APP A 2} \tilde{\Lambda}= \begin{bmatrix} \Lambda & \Theta(v )\\ \Theta^T (v) & \Lambda \end{bmatrix} \end{equation} with $ \Theta_{j_1 i_1, j_2 i_2} = \mc{E}(f_{i_1}(x_{j_1}) f_{i_2}(x_{j_2} - v)) = k_{i_1 i _2}(x_{j_1} - x_{j_2} +v)$. We will prove that we can choose a sequence $\{v_l\} \subset \mb{R}^n$ so that $||\Theta(v_l)|| \to 0$ as $l \to \infty$.
Hence, we can not only get the nondegeneracy of (\ref{E: APP A 2}) and then get (\ref{E: APP A 1}), but also conclude that \begin{equation} \lim_{l \to \infty} \gamma_F(P\cap \tau_{v_l} P) = {(\gamma_F(P))}^2. \end{equation} Then \begin{equation} \gamma_F(A) = \limsup_{l \to \infty }\gamma_F(A \cap \tau_{v_l} A) \leq \limsup_{l \to \infty } \gamma_F(P\cap \tau_{v_l} P) + 2 \epsilon \leq {(\gamma_F(A) + \epsilon)}^2 + 2\epsilon. \end{equation} Letting $\epsilon \to 0$, we see that $\gamma_F(A)$ is either $0$ or $1$ since $\gamma_F$ is a probability measure. The existence of such a sequence $\{v_l\}$ follows from our assumption. Notice that if we denote $T = \max_{j_1,j_2} | x_{j_1} - x_{j_2}|$, then for any $R >0$, we have that \begin{equation} \begin{split} \frac{1}{|B_R|} \int_{B_R} ||\Theta(v)||^2 \ dv &\leq \frac{C(m,k)}{|B_R|} \sum_{i_1,i_2,j_1,j_2} \int_{B_R} {(k_{i_1 i _2}(x_{j_1} - x_{j_2} +v))}^2 \ dv \\ &\leq \frac{C(m,k)}{|B_R|} \sum_{1 \leq i_1,i_2 \leq m}\int_{B_{R+T}} {(k_{i_1 i_2})}^2(v) \ dv, \end{split} \end{equation} which goes to $0$ as $R \to \infty$ by our assumption. We then choose a sequence $\{v_l\}$ using the mean value theorem. \end{proof} \begin{remark} When $m=1$, by Wiener's lemma, our assumption is equivalent to the spectral measure, i.e., the Fourier transform of the covariance function $k(x-y) = \mc{E}(f(x)f(y))$, having no atoms. This is mentioned in Appendix B of~\cite{NS16} in proving this ergodicity theorem when $m=1$. When $m>1$, if we assume that $F$ consists of independent centered stationary Gaussian random functions, i.e., each $f_{i_1}(x)$ is independent of $f_{i_2}(y)$ for $i_1\neq i_2$ and any $x,y \in \mb{R}^n$, and that for each $f_{i_1}$ the corresponding spectral measure $\rho_{i_1}$ has no atoms, then we can also get (\ref{E: APP A Assumption}) by Wiener's lemma and hence get our ergodicity theorem.
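For a concrete illustration, the assumption (\ref{E: APP A Assumption}) holds whenever every $k_{i_1 i_2} \in L^2(\mb{R}^n)$, since then
\begin{equation}
\frac{1}{|B_R|} \int_{B_R} {(k_{i_1 i_2}(x))}^2 \ dx \leq \frac{||k_{i_1 i_2}||^2 _{L^2(\mb{R}^n)}}{|B_R|} \to 0, \quad \text{as } R \to \infty;
\end{equation}
a simple example is $k_{i_1 i_2}(x) = \delta_{i_1 i_2} \cdot e^{-|x|^2/2}$, whose spectral measures are Gaussian densities and hence have no atoms.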
Also, when $m=1$, it is mentioned in~\cite{NS16} that ergodicity is actually equivalent to the spectral measure having no atoms, which is the full version of the Fomin-Grenander-Maruyama theorem. \end{remark} \section{Positive Limiting Constants in Theorem~\ref{Main Results: Local} and Theorem~\ref{Thm: Euclidean Random Field}}\label{APP: Positivity} In this appendix, similar to Appendix~\ref{APP: Ergodicity}, we let $F: \mb{R}^n \to \mb{R}^m$ be a centered stationary Gaussian random field satisfying $(A1)$ and $(A2)$ in Definition~\ref{D: Axiom 1} at the origin $x=0$. Our main goal in this appendix is to build up an approximation result under some further assumptions on $F$. For any bounded open set $B \subset \mb{R}^n$, any $G \in C^1(\mb{R}^n, \mb{R}^m)$, and any $\epsilon >0$, we want to show that \begin{equation} \mc{P}(||F - G||_{C^1(B)} < \epsilon ) >0. \end{equation} For this purpose, we need to recall some basic facts about Gaussian fields first. One can see Chapter 4 of~\cite{GS04} together with Appendix A of~\cite{NS16} as references for the following discussion. The general strategies were also mentioned in~\cite{CS19,SW19}. We write $F(x) = (f_1(x), \dots,f_m(x))$; then \begin{equation} k_{i_1 i_2}(x-y) \equiv K_{i_1 i_2}(x,y) = \mc{E}\big(f_{i_1}(x)f_{i_2}(y)\big), \ i_1,i_2 = 1,2,\dots, m, \end{equation} is an $m \times m$ matrix. Since $F$ is stationary, $k_{i_2 i_1}(x) = k_{i_1 i_2}(- x)$ for all $x \in \mb{R}^n$. Also, $k$ has a positivity property in the following sense. Let $x_1, \dots, x_r \in \mb{R}^n$ and $a_1, \dots, a_r \in \mb{C}$; then \begin{equation} \sum_{1 \leq t,s \leq r} k(x_t - x_s)a_t \overline{a_s} \quad \text{is positive semi-definite in } \mr{End}_{\mb{C}}(\mb{R}^m).
\end{equation} This is because for any $\eta = {(\eta_1, \dots, \eta_m)}^T \in \mb{C}^m $, \begin{equation} \begin{split} \eta^T \cdot \bigg(\sum_{1 \leq t,s \leq r} k(x_t - x_s)a_t \overline{a_s} \bigg)\cdot \overline{\eta} &= \sum_{1\leq i_1,i_2 \leq m}\sum_{1 \leq t,s \leq r}\mc{E}\bigg( a_t\eta_{i_1}f_{i_1}(x_t) \cdot \overline{a_s\eta_{i_2}f_{i_2}(x_s)} \bigg) \\ &= \mc{E} \bigg| \sum_{i_1=1}^m \sum_{t = 1}^r a_t\eta_{i_1}f_{i_1}(x_t) \bigg|^2 \geq 0. \end{split} \end{equation} Then, by the Bochner-Herglotz theorem, the Fourier transform of $k$, which we denote as $\rho = (\rho_{i_1 i_2})$, is a finite positive semi-definite $\mr{End}_{\mb{C}}(\mb{R}^m)$-valued measure on $\mb{R}^n$, and satisfies that \begin{equation} \mc{E}\big(f_{i_1}(x)f_{i_2}(y)\big) = k_{i_1 i_2}(x-y) = \int_{\mb{R}^n} e^{i \langle x-y, \xi \rangle} \ d\rho_{i_1 i_2}(\xi). \end{equation} Since $k(x) = {(k(-x))}^T$, we see that $\rho = \rho^* \equiv {(\overline{\rho})}^T$ and $\rho(\xi) = {(\rho(-\xi))}^T$. Indeed, $F$ has the following expression. \begin{equation}\label{E: APP D Spectral Decomposition} F(x) = \int_{\mb{R}^n} e^{i \langle x, \xi \rangle} \ dZ(\xi), \end{equation} where $Z(\cdot)$ is a complex vector-valued orthogonal random measure, which satisfies that for any Borel sets $U_1, U_2 \subset \mb{R}^n$ with $\rho(U_1)$ and $\rho(U_2)$ finite, $\mc{E}(Z(U_1)) =\mc{E}(Z(U_2)) = 0$ and $\mc{E}(Z_{i_1}(U_1) \overline{Z_{i_2}(U_2)}) = \rho_{i_1 i_2}(U_1 \cap U_2)$ for $i_1,i_2 = 1, \dots, m$. Notice that this also implies that $|\rho_{i_1 i_2}(U_1 \cap U_2)|^2 \leq |\rho_{i_1 i_1}(U_1)| \cdot |\rho_{i_2 i_2}(U_2)|$ by the Cauchy-Schwarz inequality. Hence, for any Borel set $U \subset \mb{R}^n$, if $\rho_{i_1 i_2}(U) \neq 0$, then $\rho_{i_1 i_1}(U) \neq 0$ and $\rho_{i_2 i_2}(U) \neq 0$. 
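For instance, when $m = 1$, one may take $\rho$ to be the surface measure $\mc{H}^{n-1}$ on the unit sphere $\mb{S}^{n-1} \subset \mb{R}^n$, which gives the covariance
\begin{equation}
k(x-y) = \int_{\mb{S}^{n-1}} e^{i \langle x-y , \xi \rangle } \ d\mc{H}^{n-1}(\xi),
\end{equation}
i.e., the covariance of a frequency $1$ random wave, whose samples solve $\Delta f + f = 0$; this shows that $\rho$ may well be singular with respect to the Lebesgue measure.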
We define the space of square-summable Hermitian fields as \begin{equation} L^2 _H (\rho) \equiv \{G:\mb{R}^n \to \mb{C}^m \ \vert \ G \in L^2(\rho), \text{ and } G(x) = \overline{G(-x)} \text{ for all }x \in \mb{R}^n \}, \end{equation} where for $G = {(g_1, \dots,g_m)}^T\in L^2(\rho)$, \begin{equation} ||G||^2 _{L^2(\rho)} \equiv \sum_{1 \leq i_1, i_2 \leq m} \int_{\mb{R}^n} g_{i_1}(\xi) \overline{g_{i_2}(\xi)} \ d\rho_{i_1 i_2}(\xi) < \infty. \end{equation} We also define the reproducing kernel Hilbert spaces associated with $\rho$ (or equivalently, associated with $k$), which are \begin{equation} \mc{H}(\rho) \equiv \mc{F}_\rho (L^2 (\rho)), \quad \mc{H}_0(\rho) \equiv \mc{F}_\rho (L^2 _H (\rho)), \end{equation} where for $G= {(g_1, \dots,g_m)}^T \in L^2 (\rho)$ and for each $i_2 = 1, \dots,m$, \begin{equation}\label{E: APP D Fourier} \hat{g}_{i_2}(x) \equiv {(\mc{F}_\rho(G)(x))}_{i_2} = \sum_{i_1 = 1}^m \int_{\mb{R}^n} e^{-i \langle x, \xi \rangle} g_{i_1}(\xi) \ d\rho_{i_1 i_2}(\xi). \end{equation} We denote $\hat{G} = \mc{F}_{\rho}(G) ={( \hat{g}_1, \dots,\hat{g}_m)}^T$. The inner product of $\mc{H}(\rho)$ is defined by \begin{equation} \langle \widehat{G_1} , \widehat{G_2}\rangle_{\mc{H}(\rho)} \equiv \langle G_1 , G_2 \rangle_{L^2(\rho)} . \end{equation} Define a map $\Phi: L^2(\Omega, \mfk{S}, \mc{P}) \to C^1(\mb{R}^n , \mb{R}^m)$, $h \mapsto \Phi[h](x)$, where for each $t = 1, \dots, m$, the $t$-th component of $\Phi[h]$, ${(\Phi[h])}_t$, is given by $\mc{E}( h \overline{f_t(x)})$, the covariance of $h$ and $f_t(x)$. Note that although our random variables in $L^2(\Omega, \mfk{S}, \mc{P}) $ and $F$ are real-valued, we still write $\overline{(\cdot)}$ in order to be consistent.
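For instance, taking $h = f_s(y)$ for a fixed $y \in \mb{R}^n$ and $s = 1, \dots, m$, we get
\begin{equation}
{(\Phi[f_s(y)])}_t(x) = \mc{E}\big( f_s(y) \overline{f_t(x)} \big) = k_{st}(y - x), \quad t = 1, \dots, m,
\end{equation}
so the translates of the rows of the covariance kernel $k$ lie in the image of $\Phi$; this is the reproducing kernel property.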
Then, the image of $\Phi$, $\mc{H}(F) \equiv \Phi(L^2(\Omega, \mfk{S}, \mc{P}))$, is a Hilbert space with the induced inner product \begin{equation} \langle \Phi[h_1], \Phi[h_2] \rangle_{\mc{H}(F)} \equiv \mc{E}(h_1 \overline{h_2}) = \langle h_1, h_2 \rangle_{L^2(\Omega , \mfk{S}, \mc{P})}, \end{equation} for any $h_1, h_2 \in H(F)$, where $H(F) \subset L^2(\Omega, \mfk{S}, \mc{P})$ is the closed $\mb{R}$-linear span of all $(F(x) \cdot w)$ for $x \in \mb{R}^n$ and $w \in \mb{R}^m$, which is exactly the orthogonal complement of $\mr{Ker}(\Phi)$. Letting $\{h_j\}$ be a countable orthonormal basis of $H(F)$ and setting $e_j = \Phi[h_j] \in C^1(\mb{R}^n , \mb{R}^m)$, we see that for any fixed $w \in \mb{R}^m$, and fixed $x \in \mb{R}^n$, \begin{equation} F(x) \cdot w = \sum_{j} \mc{E}\big((F(x) \cdot w) h_j\big)h_j = \sum_{j} \big( \Phi[h_j](x) \cdot w\big) h_j, \end{equation} and equivalently, \begin{equation}\label{E: APP D Expansion of F} F(x) = \sum_{j} e_j(x) h_j. \end{equation} For any $\tilde{G} \in \mc{H}(F)$, we can write a convergent series in $\mc{H}(F)$, \begin{equation}\label{E: APP D Expansion of H(F)} \tilde{G}(x) = \sum_{j} e_j(x) \langle \tilde{G}, e_j \rangle_{\mc{H}(F)} . \end{equation} Since we have the assumption $(A2)$ in Definition~\ref{D: Axiom 1} that $F$ is a $C^{3-}$-smooth Gaussian field, the convergence (\ref{E: APP D Expansion of H(F)}) is, at least, in the local $C^1$-sense. Then, we can get that for any $\epsilon > 0$ and any bounded open set $B \subset \mb{R}^n$, \begin{equation}\label{E: APP D Positive Approximation} \mc{P}(||F-\tilde{G}||_{C^1(B)}<\epsilon)>0. \end{equation} Hence, we need to show that $\mc{H}(F)$ is $C^1$-dense in $C^1(\mb{R}^n , \mb{R}^m)$ under some assumptions.
On the other hand, there is a way to link the inner products $\langle \cdot , \cdot \rangle_{\mc{H}(\rho)}$ and $\langle \cdot , \cdot \rangle_{\mc{H}(F)}$ by the following isometry map, which will show that these two spaces, $\mc{H}_0(\rho)$ and $\mc{H}(F)$, and their inner products are actually the same. For any $G = {(g_1 , \dots, g_m)}^T \in L^2 _H(\rho)$, let \begin{equation}\label{E: APP D Isometry Map} I(G) \equiv \int_{\mb{R}^n} G(\xi) \cdot \ dZ(\xi) = \sum_{i = 1}^m \int_{\mb{R}^n} g_i(\xi) \ dZ_i(\xi), \end{equation} then \begin{equation} \begin{split} ||I(G)||^2 _{L^2(\Omega , \mfk{S}, \mc{P})} &= \mc{E} \bigg( \int_{\mb{R}^n} G(\xi_1) \cdot \ dZ(\xi_1) \int_{\mb{R}^n} \overline{G(\xi_2)} \cdot \ d\overline{Z(\xi_2)} \bigg) \\ &= \sum_{1 \leq i_1, i_2 \leq m} \int_{\mb{R}^n} g_{i_1}(\xi) \overline{g_{i_2}(\xi)} \ d\rho_{i_1 i_2}(\xi) = ||G||^2 _{L^2(\rho)}. \end{split} \end{equation} Notice that $H(F) \subset I(L^2 _H(\rho))$. Actually, one can even show that $H(F) = I(L^2 _H (\rho))$. This is because the family ${\{e^{i \langle x , \xi \rangle}\}}_{x \in \mb{R}^n}$, viewed as functions of $\xi$, spans a dense subspace of $L^2_{\rho_{tt}}(\mb{R}^n)$ for each $t = 1, \dots,m$; that is, linear combinations of functions in this family can approximate smooth functions with compact support. One then compares (\ref{E: APP D Spectral Decomposition}) and (\ref{E: APP D Isometry Map}).
Also, one can check that \begin{equation} \begin{split} {(\Phi[I(G)](x))}_{i_2} &= \mc{E}\big(I(G) \overline{f_{i_2}(x)} \big) = \mc{E} \bigg( \int_{\mb{R}^n} G(y) \cdot \ dZ(y) \int_{\mb{R}^n} e^{-i \langle x , \xi \rangle}\ d\overline{Z_{i_2}(\xi)} \bigg) \\ &= \sum_{i_1 = 1}^m \int_{\mb{R}^n} e^{-i \langle x, \xi \rangle} g_{i_1}(\xi) \ d\rho_{i_1 i_2}(\xi) = {(\mc{F}_\rho(G)(x))}_{i_2}, \end{split} \end{equation} and then \begin{equation} \begin{split} \langle \widehat{G_1} , \widehat{G_2}\rangle_{\mc{H}(\rho)} &= \langle G_1,G_2 \rangle_{L^2(\rho)} = \langle I(G_1), I(G_2) \rangle_{L^2(\Omega , \mfk{S}, \mc{P})} \\ & = \langle \Phi[I(G_1)], \Phi[I(G_2)] \rangle_{\mc{H}(F)} = \langle \widehat{G_1} , \widehat{G_2}\rangle_{\mc{H}(F)}. \end{split} \end{equation} For similar reasons, one can show that $\mc{H}(F) = \mc{H}_0(\rho) $. We define a subset $\Sigma_\rho$ of $C^1(\mb{R}^n, \mb{R}^m)$ as follows. For each element $ H(x) \in \Sigma_\rho$, write $H(x)$ as ${(h_1(x), \dots, h_m(x))}^T $, and for each $t = 1, \dots,m$, $h_t(x)$ has the form \begin{equation}\label{E: APP D Discrete Sum Set Sigma} h_t(x) = \sum_{p = 1} ^{q} \sum_{s=1} ^m \bigg( e^{-i \langle x, \xi_p \rangle} g_s(\xi_p) \rho_{st}(V_p ) + e^{i \langle x, \xi_p \rangle} g_s(-\xi_p) \rho_{st}(-V_p) \bigg), \end{equation} where $V_p $ are Borel sets, $\xi_p \in V_p $, and $G = {(g_1, \dots,g_m)}^T \in L^2 _H (\rho)$. We emphasize that these $V_p, \xi_p$ are the same for different $t = 1, \dots,m$. This $H(x)$ is actually a Riemann sum of $\mc{F}_\rho (G)$ in (\ref{E: APP D Fourier}) if $G$ is also continuous. Also, notice that $h_t(x)$ is real, because $g_s(-\xi_p ) = \overline{g_s(\xi_p )}$ and $\rho_{st}(-V_p ) = \overline{\rho_{st}(V_p ) }$. Our aim is to show that, under some assumptions, $\Sigma_\rho$, and hence $\mc{H}_0(\rho)$, is $C^1$-dense in $C^1(B, \mb{R}^m)$ for any bounded open set $B \subset \mb{R}^n$.
Then, we can use (\ref{E: APP D Positive Approximation}) to get an approximation by $F(x)$ with positive probability. This strategy was sketched in Lemma 5.5 and Proposition 5.2 in~\cite{SW19}. We say that the $\mb{C}^m$-valued orthogonal random measure $Z(\cdot) = {(Z_1(\cdot), \dots , Z_m(\cdot))}^T$ in (\ref{E: APP D Spectral Decomposition}) is \textit{ $m$-balls absolute non-degenerate} if there are $m$ open balls, ${\{ B_{r_t}(y_t) \}}_{t=1} ^m$, such that for each $t = 1, \dots , m$, and for any nonempty open subset $U \subset B_{r_t}(y_t)$, \begin{equation} Z_t(U) \notin \mr{Span}_{\mb{C}} \{ Z_1(U), \dots, Z_{t-1}(U), Z_{t+1}(U), \dots , Z_m(U) \}, \end{equation} which is equivalent to saying that there are complex constants $c_{t1} (U), \dots, c_{tm} (U)$, such that \begin{equation}\label{E: APP D m nondegenerate constants} \mc{E}\bigg( (\sum_{i=1}^m c_{ti} (U) Z_i(U)) \ \overline{Z_s(U)} \bigg) = \sum_{i=1}^m c_{ti} (U) \rho_{is}(U)= \delta_{ts}. \end{equation} This also implies that $B_{r_t}(y_t)$ is in the interior of the compact support of $\rho_{tt}$. Moreover, we require that there are absolute positive constants $C_1(Z)$ and $C_2(Z)$ such that for any open subset $V \subset \cup_i B_{r_i}(y_i)$, $|\rho_{st}|(V) \leq C_1(Z) \cdot |\rho_{st}(V)|$ for any $s,t$; and for any $s,t$ and $U \subset B_{r_t}(y_t)$, $ \sum_{i=1}^m |c_{ti} (U) \rho_{is}(U)| \leq C_2(Z)$. \begin{example}\label{Example: APP D 1} If the random field $F$ has independent components, i.e., $Z_s(\cdot)$ is independent of $Z_t(\cdot)$ when $s \neq t$, this $m$-balls absolute non-degenerate condition is equivalent to saying that for each $t$, the interior of the compact support of $\rho_{tt}$ is nonempty. In this case, each $\rho_{tt}$ is also a nonnegative measure. One can choose $B_{r_t}(y_t)$ contained in the interior of the compact support of $\rho_{tt}$, and let $c_{ti}(U) = \delta_{ti} \cdot {\rho_{tt}(U)}^{-1}$ for all $t$ and $i$. Then, choose $C_1(Z)=1$ and $C_2(Z)=1$.
\end{example} In general, the constant $C_2(Z)$ is quite tricky to obtain. One may use the following weaker but more explicit assumption. \begin{example}\label{Example: APP D 2} Assume that there is an open ball $B \subset \mb{R}^n$, and a positive constant $\lambda$, such that for any open subset $U \subset B$, \begin{equation} \lambda^{-1} \cdot |U| \cdot \mr{Id}_m \leq \rho (U) \leq \lambda \cdot |U| \cdot \mr{Id}_m, \end{equation} where $\rho(U)$ is the $m \times m $ matrix with elements $\rho_{i_1 i_2}(U)$. By the notation $A \geq B$ we mean that the smallest eigenvalue of $A-B$ is nonnegative, and $|U|$ is the volume of $U$ in $\mb{R}^n$. We can also replace $|\cdot|$ with any other measure that is mutually absolutely continuous with respect to $|\cdot|$. With this nondegeneracy assumption, we can choose $c(U)$ as the inverse of $\rho(U)$ and set a $C_2(Z)$ depending only on $\lambda$. And if we also have a $C_1(Z)$ satisfying the definition, then this random measure $Z(\cdot)$ satisfies this \textit{ $m$-balls absolute non-degenerate} assumption. \end{example} According to the above definitions and examples, assuming that we can obtain two universal positive constants $C_1(Z)$ and $C_2(Z)$ for the \textit{ $m$-balls absolute non-degenerate} condition, we have the following theorem. \begin{theorem}\label{Thm: APP D} \textit{ If $Z(\cdot)$ is \textit{ $m$-balls absolute non-degenerate}, then $\mc{H}_0(\rho)$ is $C^1$-dense in $C^1(B, \mb{R}^m)$ for any bounded open set $B \subset \mb{R}^n$. } \end{theorem} \begin{proof} By the assumption, we denote $B_{r_t}(y_t)$ by $V^t$. We assume that ${\{V^t\}}_{t=1} ^m$ are disjoint, otherwise we choose smaller open balls inside them. Recall that $\rho(x) = \rho^*(x) = {(\rho(-x))}^T$. We can also assume that $V^t \cap ( -V^t) = \emptyset$, otherwise we choose smaller balls again. The following strategies have also been sketched in Lemma 5.5 and Proposition 5.2 in~\cite{SW19} when $m=1$.
Fix any $p(x) = {(p_1(x), \dots, p_m(x))}^T$ such that for each $t = 1, \dots , m$, $p_t(x)$ is a real polynomial in $x_1, \dots, x_n$. Then, for any $\epsilon >0$ and any bounded open set $B$, we need to find a $G \in L^2 _H (\rho)$ such that \begin{equation} ||{(\mc{F}_\rho(G)(x))}_t - p_t(x) ||_{C^1(B)} <\epsilon. \end{equation} We fix such $\epsilon$ and $B$ and then proceed. Let $\phi(\xi)$ be a smooth, nonnegative function supported in $B_1(0)$ such that $\int_{\mb{R}^n} \phi(\xi) \ d\xi = 1$ and $\phi(\xi_1) = \phi(\xi_2)$ if $||\xi_1|| = ||\xi_2||$. For any $\delta_t \in (0,1)$ and any multi-index $\beta_t$, we see that the function \begin{equation} \phi_{\delta_t,\beta_t}(\xi) \equiv \frac{\partial^{\beta_t} }{\partial \xi^{\beta_t} } \bigg( \frac{1}{{(r_t \delta_t)}^n} \phi\bigg(\frac{\xi - y_t}{r_t \delta_t}\bigg)\bigg) \end{equation} is supported in $V^t = B_{r_t}(y_t)$. Now, we do the Fourier transform on ${(-i)}^{\beta_t} \phi_{\delta_t,\beta_t}(\xi)$, and then get a function \begin{equation}\label{E: APP D Approximate Polynomials} \begin{split} \tilde{g}_{\delta_t, \beta_t}(x) &\equiv {(-i)}^{\beta_t} \cdot \int_{\mb{R}^n} (\phi_{\delta_t,\beta_t}(\xi) ) e^{-i\langle x, \xi \rangle} \ d\xi \\ &= x^{\beta_t} \cdot \int_{\mb{R}^n } \frac{1}{{(r_t \delta_t)}^n} \phi\bigg(\frac{\xi - y_t}{r_t \delta_t}\bigg) e^{-i \langle x , \xi \rangle } \ d\xi \\ & \to x^{\beta_t} \cdot e^{-i \langle x , y_t \rangle }, \end{split} \end{equation} as $\delta_t \to 0^+$. Notice that, for any bounded open set $B \subset \mb{R}^n$, the convergence of (\ref{E: APP D Approximate Polynomials}) is in $C^1(B , \mb{C})$ as functions in $x$. Here, for $g \in C^1(B , \mb{C})$, we mean that $g = u + iv$ for $u,v \in C^1(B,\mb{R})$, and the $C^1$-norm is induced by $C^1$-norms on $u,v$ components. We first consider a polynomial with complex coefficients, which we denote as $q_\epsilon(x)$, such that $|| q_\epsilon(x) - e^{i \langle x,y_t \rangle}||_{C^1(B)} \leq \epsilon$. 
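For instance, one may take $q_\epsilon$ to be a Taylor polynomial of the exponential: since $B$ is bounded, the partial sums
\begin{equation}
q_\epsilon(x) = \sum_{j=0}^{N} \frac{{(i \langle x , y_t \rangle)}^j}{j!}
\end{equation}
converge to $e^{i \langle x , y_t \rangle}$ in $C^1(B)$ as $N \to \infty$, by the uniform convergence of the exponential series and its derivatives on bounded sets, so $|| q_\epsilon(x) - e^{i \langle x,y_t \rangle}||_{C^1(B)} \leq \epsilon$ once $N$ is large enough.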
Then, by (\ref{E: APP D Approximate Polynomials}), one can find a smooth complex-valued function $\phi_t(\xi)$ supported in $V^t$ such that the Fourier transform of $\phi_t$, i.e., $\mc{F}(\phi_t)(x) \equiv \int_{\mb{R}^n} \phi_t(\xi) e^{-i\langle x, \xi \rangle} \ d\xi$, satisfies that \begin{equation} ||\mc{F}(\phi_t)(x) - p_t(x) q_\epsilon(x) e^{-i \langle x , y_t \rangle }||_{C^1(B)} \leq \epsilon. \end{equation} Hence, \begin{equation}\label{E: APP D Close to Polynomials} ||\mc{F}(\phi_t)(x) - p_t(x)||_{C^1(B)} \leq C(||p_t||_{C^1(B)}, ||y_t|| ) \cdot \epsilon, \end{equation} where $C(||p_t||_{C^1(B)}, ||y_t|| )$ is a positive constant depending on the $C^1$-norm of $p_t$ on $B$ and the length of $y_t$. We then choose $\varphi_t(\xi) = (\phi_t(\xi) + \overline{\phi_t(-\xi)})/2$, whose Fourier transform is the real part of $\mc{F}(\phi_t)(x)$ and (\ref{E: APP D Close to Polynomials}) still holds for $\mc{F}(\varphi_t)(x)$. Notice that $\varphi_t(-\xi) = \overline{\varphi_t(\xi)}$ for any $\xi \in \mb{R}^n$ and $\varphi_t(\xi)$ is supported in $V^t \cup (-V^t)$. Now, we approximate $\mc{F}(\varphi_t)(x)$ in $C^1(B)$ by a Riemann sum. That is, for the chosen $\epsilon>0$, we have another function \begin{equation} R(\varphi_t)(x) = \frac{1}{2}\sum_{p=1} ^{q_t} \bigg( e^{-i \langle x, \xi_p ^t \rangle} \phi_t(\xi_p ^t) |V_p ^t| + e^{i \langle x, \xi_p ^t \rangle} \overline{\phi_t(\xi_p ^t)} |V_p ^t| \bigg), \end{equation} such that $||\mc{F}(\varphi_t)(x) - R(\varphi_t)(x)||_{C^1(B)} \leq \epsilon$, where ${\{V_p ^t\}}_{p=1} ^{q_t}$ is a disjoint open subdivision of $V^t$ and each $\xi_p ^t$ is a point in $V_p ^t$. 
Also, this subdivision is narrow enough that for any $\xi \in V_p ^t$ and any $x \in B$, \begin{equation}\label{E: APP D Small Division 1} |e^{-i \langle x , \xi_p ^t \rangle} - e^{-i \langle x , \xi \rangle}| \leq \epsilon \cdot {\bigg( \sum_{s=1}^m ||\phi_s||_{L^1(\mb{R}^n)} \bigg)}^{-1}, \end{equation} \begin{equation}\label{E: APP D Small Division 2} \ |\xi_p ^t e^{-i \langle x , \xi_p ^t \rangle} - \xi e^{-i \langle x , \xi \rangle}| \leq \epsilon \cdot {\bigg( \sum_{s=1}^m ||\phi_s||_{L^1(\mb{R}^n)} \bigg)}^{-1}, \end{equation} and \begin{equation} \sum_{l=1}^m \sum_{V_p ^l} | \phi_l(\xi_p ^l) | \cdot | V_p ^l| \leq 2 \cdot \bigg( \sum_{l=1}^m ||\phi_l||_{L^1(\mb{R}^n)} \bigg). \end{equation} We then collect all $\{V_p ^t\}$ for all $t = 1, \dots ,m$, and get a family of disjoint open sets $\{V_p\}$ and their corresponding $\{\xi_p\}$. Notice that $\phi_t(\xi_p ^s) = 0$ for $s \neq t$. We denote $q = \sum_{t=1} ^m q_t$. Comparing $R(\varphi_t)(x)$ with functions in $\Sigma_\rho$ of the form (\ref{E: APP D Discrete Sum Set Sigma}), we consider a function $G = {(g_1, \dots, g_m)}^T \in L^2 _H (\rho)$ such that $G(x) = 0$ if $x$ is not contained in any $V_p$ or $-V_p$. For each $g_s$ and each $V_p ^t$, let $g_s$ be the constant $c_{ts}(V_p ^t) (\phi_t(\xi_p ^t) |V_p ^t| )$ on $V_p ^t$ and $\overline{c_{ts} (V_p ^t) (\phi_t(\xi_p ^t) |V_p ^t| )}$ on $(-V_p ^t)$, where the constants $c_{ts} (V_p ^t)$ are from (\ref{E: APP D m nondegenerate constants}). This $G$ is well-defined and in $ L^2 _H (\rho)$.
By the transform (\ref{E: APP D Discrete Sum Set Sigma}), we see that for each $t = 1, \dots, m$, \begin{equation} \begin{split} &{(H(G)(x))}_t \equiv \sum_{p = 1} ^{q} \sum_{s=1} ^m \bigg( e^{-i \langle x, \xi_p \rangle} g_s(\xi_p) \rho_{st}(V_p ) + e^{i \langle x, \xi_p \rangle} g_s(-\xi_p) \rho_{st}(-V_p) \bigg) \\ & = 2 \mr{Re} \sum_{l=1}^m \sum_{V_p ^l} \sum_{s=1} ^m e^{-i \langle x, \xi_p ^l \rangle} g_s(\xi_p ^l) \rho_{st}(V_p ^l) \\ & = 2 \mr{Re} \sum_{l=1}^m \sum_{V_p ^l} \sum_{s=1} ^m e^{-i \langle x, \xi_p ^l \rangle} c_{ls}(V_p ^l) \rho_{st}(V_p ^l) (\phi_l(\xi_p ^l) |V_p ^l| ) \\ & = 2 \mr{Re} \sum_{l=1}^m \sum_{V_p ^l} e^{-i \langle x, \xi_p ^l \rangle} \delta_{lt} (\phi_l(\xi_p ^l) |V_p ^l| ) \\ & = 2 \mr{Re} \sum_{V_p ^t} e^{-i \langle x, \xi_p ^t \rangle} (\phi_t(\xi_p ^t) |V_p ^t| ) = 2 R(\varphi_t)(x), \end{split} \end{equation} where $\mr{Re}(\cdot)$ means the real part of a complex number. Hence, we see that $R(\varphi_t)(x)$ is actually expressed by $G/2$ in the transform (\ref{E: APP D Discrete Sum Set Sigma}). Then, we compare $H(G)$ and the $\rho$-Fourier transform of $G$. Recall that the subdivision $\{V_p\}$ satisfies that for any $\xi \in V_p$, both $|e^{-i \langle x , \xi_p \rangle} - e^{-i \langle x , \xi \rangle}|$ and $|\xi_p e^{-i \langle x , \xi_p \rangle} - \xi e^{-i \langle x , \xi \rangle}|$ are bounded by $\epsilon \cdot {\big( \sum_{s=1}^m ||\phi_s||_{L^1(\mb{R}^n)} \big)}^{-1}$, as in (\ref{E: APP D Small Division 1}) and (\ref{E: APP D Small Division 2}). The absolute non-degeneracy assumption on $Z(\cdot)$ gives that $|\rho_{st}|(U) \leq C_1(Z)\cdot |\rho_{st}(U)|$ for any $s,t$.
Hence, by this choice of the subdivision, and noticing that $G$ is a constant on each $V_p$, we have that \begin{equation} \begin{split} & \qquad \frac{1}{2}\big| {(\mc{F}_\rho(G)(x))}_{t} - {(H(G)(x) )}_t \big| =\bigg| \mr{Re}\sum_{p=1}^q \sum_{s = 1}^m \int_{V_p} \big( e^{-i \langle x, \xi \rangle} g_{s}(\xi) - e^{-i \langle x, \xi_p \rangle} g_{s}(\xi_p) \big)\ d\rho_{st}(\xi) \bigg| \\ &= \bigg| \mr{Re}\sum_{p=1}^q \sum_{s = 1}^m g_s(\xi_p) \int_{V_p} \big( e^{-i \langle x, \xi \rangle} - e^{-i \langle x, \xi_p \rangle} \big) \ d\rho_{st}(\xi) \bigg| \\ &\leq \epsilon \cdot {\bigg( \sum_{s=1}^m ||\phi_s||_{L^1(\mb{R}^n)} \bigg)}^{-1} \cdot\sum_{p=1}^q \sum_{s = 1}^m |g_s(\xi_p)| |\rho_{st}|(V_p) \\ &\leq \epsilon \cdot {\bigg( \sum_{s=1}^m ||\phi_s||_{L^1(\mb{R}^n)} \bigg)}^{-1}\cdot C_1(Z) \sum_{p=1}^q \sum_{s = 1}^m |g_s(\xi_p) \rho_{st}(V_p) | \\ &= \epsilon \cdot {\bigg( \sum_{s=1}^m ||\phi_s||_{L^1(\mb{R}^n)} \bigg)}^{-1}\cdot C_1(Z) \sum_{l=1}^m \sum_{V_p ^l} \sum_{s = 1}^m |c_{ls}(V_p ^l) \rho_{st}(V_p ^l) | \cdot | \phi_l(\xi_p ^l) | \cdot | V_p ^l| \\ &\leq \epsilon \cdot {\bigg( \sum_{s=1}^m ||\phi_s||_{L^1(\mb{R}^n)} \bigg)}^{-1} \cdot C_1(Z)C_2(Z) \sum_{l=1}^m \sum_{V_p ^l} | \phi_l(\xi_p ^l) | \cdot | V_p ^l| \\ &\leq 2\epsilon \cdot C_1(Z)C_2(Z). \end{split} \end{equation} One can do similar estimates on $\big|\nabla[{(\mc{F}_\rho(G)(x))}_{t} - {(H(G)(x) )}_t] \big|$. Hence, there is a positive constant $C(C_1(Z),C_2(Z))$, such that \begin{equation} ||{(\mc{F}_\rho(G)(x))}_t - {(H(G)(x) )}_t ||_{C^1(B)} \leq C\big( C_1(Z),C_2(Z) \big) \cdot \epsilon. \end{equation} Then, \begin{equation} ||{(\mc{F}_\rho(G)(x))}_t - p_t(x) ||_{C^1(B)} \leq C\big( ||p_t||_{C^1(B)} , ||y_t||,C_1(Z),C_2(Z) \big) \cdot \epsilon. \end{equation} Since any map in $C^1(B,\mb{R}^m)$ can be $C^1$-approximated by polynomial vectors $p(x)$, we finish the proof. \end{proof} Now, we can prove the positivity of the limiting constant $\nu_F$ in Theorem~\ref{Thm: Euclidean Random Field}.
Notice that by Lemma~\ref{L: Integral Sandwich}, we have that for all $R >0$, \begin{equation} \frac{\mc{E}(N(R;F))}{|C_R|} \leq \nu_F. \end{equation} Hence, we only need to show that there is an $R = R_0$ such that $\mc{P}(N(R_0;F)>0)>0$. According to (\ref{E: APP D Positive Approximation}) (also see Proposition 5.2 of~\cite{SW19}), together with Theorem~\ref{Thm: APP D} and the fact that $\mc{H}_0(\rho) = \mc{H}(F)$, we see that for any $\epsilon > 0 $ and any $G \in C^1(\mb{R}^n, \mb{R}^m)$, \begin{equation} \mc{P}(||F - G||_{C^1(B_{10}(0))} < \epsilon ) >0, \end{equation} where $B_{10}(0)$ is the ball with center $0$ and radius $10$ in $\mb{R}^n$. We consider \begin{equation} G = (g_1 , \dots,g_m) \equiv (x_1, \dots,x_{m-1},1- |x_m|^2 -|x_{m+1}|^2 - \dots - |x_n|^2), \end{equation} so that $Z(G)$ is a unit sphere $\mb{S}^{n-m}$ contained in the linear subspace of coordinates $x_m, \dots,x_n$. Hence, there is an $\epsilon = \epsilon(n,m)$ chosen in Lemma~\ref{L: Continuous Stability} such that \begin{equation} \mc{P}(N(10;F) \geq 1) \geq \mc{P}(||F - G||_{C^1(B_{10}(0))} < \epsilon )>0. \end{equation} This shows the positivity of $\nu_F$. Moreover, under our assumptions, by Lemma~\ref{L: Continuous Stability}, there is a component $\gamma$ of $Z(F)$ in $B_{10}(0)$ that is $C^1$-isotopic to the component of $Z(G)$, which is $\mb{S}^{n-m}$ for this $G$. But actually, one can replace $G$ here with any $G \in C^1(B_{10}(0), \mb{R}^m)$ such that $Z(G) \cap B_1(0) \neq \emptyset$ and $0$ is a regular value for $G$. For similar reasons, one can show that for any $C^1$-isotopic type $c$ which can be realized as a regular connected component of some $Z(G)$ in $B_1(0)$, we always have that \begin{equation} \mc{P}(N(10;F, c) \geq 1) \geq \mc{P}(||F - G||_{C^1(B_{10}(0))} < \epsilon )>0.
\end{equation} \begin{remark} We can also show that $\nu_F>0$ and $\nu_{F,c}>0$ when $F$ consists of independent $f_i$, $i =1 ,\dots,m$, and each $f_i$ is a frequency $1$ random wave on $\mb{R}^n$, i.e., \begin{equation} \mc{E}(f_{i_1}(u) f_{i_2}(v)) = \delta_{i_1 i_2} \cdot \int_{\mb{S}^{n-1}} e^{i \langle u-v , \xi \rangle } \ d\mc{H}^{n-1}(\xi), \end{equation} and solves \begin{equation} \Delta f_i + f_i = 0. \end{equation} In this case, one can adapt the proof of Theorem 1 in~\cite{CS19}, given in their Section 3. The rough idea is as follows. We consider an arbitrary $(n-m)$-dimensional connected closed submanifold $\gamma$ of $\mb{R}^{n}$ fully contained in $B_1(0)$, and this $\gamma$ is also realized as a regular connected component of the zero set of some $C^1$-smooth function $G$, i.e., $\gamma \subset Z(G) \cap B_1(0)$, and $\nabla G$ is of full rank on $\gamma$. Our goal is to modify this $\gamma$ such that it is $C^1$-isotopic to another $\tilde{\gamma}$ which can be expressed as the transversal intersection of $m$ closed analytic hypersurfaces, and these $m$ closed hypersurfaces also enclose $m$ bounded domains with the same first Dirichlet eigenvalue. Write $G = (g_1 , \dots, g_m)$ and assume that $0$ is a regular value of each $g_i$; otherwise one just replaces each $g_i$ with some $g_i+t_i$ with $t_i$ small, and gets a new component $C^1$-isotopic to the original $\gamma$. We do not know yet whether each $Z(g_i)$ is a closed hypersurface. But since each $Z(g_i)$ is a regular hypersurface, one can replace each $g_i$ with $\tilde{g}_i \equiv {g_i(x)}^2 + \epsilon_i |x|^2 - \delta_i ^2$ for very small $\epsilon_i$ and very small $\delta_i$, such that $0$ is still a regular value of $\tilde{g}_i$ by Sard's theorem. Notice that $Z(\tilde{g}_i) \subset B_{\delta_i/\epsilon_i}(0)$. Hence, we get a new component $\tilde{\gamma}$ that can be realized as a part of the intersection of those $Z(\tilde{g}_i)$ with each $Z(\tilde{g}_i)$ bounded.
This $\tilde{\gamma}$ is also $C^1$-isotopic to $\gamma$ since $\epsilon_i$ and $\delta_i$ are small. We can also assume that each $\tilde{g}_i$ is $C^\infty$-smooth. Assume that $\tilde{\gamma} = \cap_i c_i$, where each $c_i$ is a bounded connected component of $Z(\tilde{g}_i)$ and is also a closed hypersurface. Let $A_i$ be the bounded component of $\mb{R}^n \backslash c_i$. Now, for each $A_i$, one can make a smooth connected sum with long and thin necks to some large balls $B_i$ and avoid touching $\tilde{\gamma}$. We do these connected sums and get new bounded sets $\mc{A}_i$, such that all these ${\{\mc{A}_i\}}_{i=1} ^m$ have the same first Dirichlet eigenvalue $\lambda$. This common first Dirichlet eigenvalue property is achievable, because the first Dirichlet eigenvalue of a given domain is always larger than the first eigenvalues of domains which contain it, and we can adjust the size of each ball $B_i$ to let all $\mc{A}_i$ have the same first eigenvalue. Now, $\tilde{\gamma}$ becomes a part of the transversal intersection of those $\pa \mc{A}_i$. For each $\mc{A}_i$, we assume that $\partial \mc{A}_i$ is realized as a component of the regular zero set of another smooth function $\bar{g}_i$. Approximating each $\bar{g}_i$ by an analytic function $\mathbf{g}_i$ defined in a neighborhood of $\partial \mc{A}_i$, we see that the set $\{\mathbf{g}_i = 0\}$ has a bounded component $\widetilde{c_i}$ which is very close to $\partial \mc{A}_i$, and these $\widetilde{c_i}$ form another transversal intersection $\tilde{\gamma}_1$, which is $C^1$-isotopic to $\tilde{\gamma}$. The existence of such an analytic $\mathbf{g}_i$ follows from the same arguments as in the proof of Theorem 1 on page 7 of~\cite{CS19}, where the Whitney approximation theorem is used. The new problem here is that the sets $\widetilde{\mc{A}_i}$, the bounded components of $\mb{R}^n \backslash \widetilde{c_i}$, may not all have the same first Dirichlet eigenvalue.
To fix this, since $\mc{A}_i$ have the same first Dirichlet eigenvalue, and each $\mc{A}_i$ is very close to $\widetilde{\mc{A}_i}$, we may scale $\widetilde{\mc{A}_i}$ a little, or equivalently, replace $\mathbf{g}_i(x)$ with $\mathbf{g}_i(\eta_i x)$ for some $\eta_i$ close to $1$, and this small scaling will not affect the transversal intersection $\tilde{\gamma}_1$ up to another $C^1$-isotopy. To summarize, we get a $\tilde{\gamma}_1$ in the same $C^1$-isotopy class as the original $\gamma$, which can also be expressed as the transversal intersection of analytic hypersurfaces $\widetilde{c_i} = \partial \widetilde{\mc{A}_i}$, where all these $\widetilde{\mc{A}_i}$ are bounded and have the same first Dirichlet eigenvalue. Then, the remaining arguments follow easily from the proof on page 8 of~\cite{CS19}. \end{remark} \section{Assumptions (A3) in Definition~\ref{D: Axiom 1} and (B4) in Definition~\ref{D: Axiom 3}}\label{APP: 2 Point} As mentioned previously, we can use $(A1)$ and $(A2)$ to deduce $(A3)$ in Definition~\ref{D: Axiom 1}. Let us explain it more. The two-point $m \times m$ covariance matrix $\mr{Cov}(x,y)$ is defined by \begin{equation} {(\mr{Cov}(x,y))}_{i_1 i_2} \equiv \ \mc{E}(f_{i_1}(x)f_{i_2}(y)), \ i_1,i_2 = 1, \dots,m. \end{equation} \begin{theorem}\label{Thm: APP C 1} \textit{ Assume that $F : C_{R+1} \to \mb{R}^m$ satisfies \textit{ $(R; M,k_1)$-assumptions} in Definition~\ref{D: Axiom 1}. Then, there is a $\delta = \delta(n,m,M,k_1) \in (0,1)$ such that when $x \in C_R$ and $||x-y|| \leq \delta$, we have that for any $\eta \in \mb{S}^{m-1} \subset \mb{R}^m$, \begin{equation}\label{E: APP C Smallest Eigenvalue} \eta^T \cdot \big(\mr{Cov}(y,y) - \mr{Cov}(y,x) {\mr{Cov}(x,x)}^{-1} \mr{Cov}(x,y) \big) \cdot \eta \geq \frac{k_1}{2} \cdot ||y-x||^2 . 
\end{equation} In particular, there is a $k_2 = k_2(n,m,M,k_1)>0$ such that when $x\in C_R$ and $||x-y|| <\delta$, then \begin{equation}\label{E: APP C Local (A3) Condition} p_{(F(x),F(y))}(0,0) \leq \frac{k_2}{||x-y||^m} , \end{equation} which also implies that the joint distribution of $(F(x),F(y))$ is non-degenerate if $||x-y|| \leq \delta$. } \end{theorem} \begin{proof} Notice that (\ref{E: APP C Local (A3) Condition}) follows from (\ref{E: APP C Smallest Eigenvalue}) by elementary row operations in calculating the determinant of the covariance kernel for the joint distribution of $(F(x), F(y))$. To prove (\ref{E: APP C Smallest Eigenvalue}), we fix an arbitrary $x\in C_R$. For any $w = {(w_1, \dots, w_n)}^T \in \mb{S}^{n-1}$, we define an $m \times m$ symmetric matrix-valued function on $[-1,1] \subset \mb{R}$: \begin{equation} k_w(t) \equiv \mr{Cov}(x +t \cdot w,x + t \cdot w) - \mr{Cov}(x+ t \cdot w,x) {\mr{Cov}(x,x)}^{-1} \mr{Cov}(x,x+t \cdot w) . \end{equation} Since $F$ satisfies $(A1)$ in Definition~\ref{D: Axiom 1}, $\mr{Cov}(x,x)$ is invertible and hence $k_w(t)$ is well defined on $[-1,1]$. It is easy to see that $k_w(0) = 0$ and $k_w '(0) = 0$. Then, \begin{equation}\label{E: APP C Second Derivative} \begin{split} & \quad {(k_w ''(t) )}_{i_1 i_2} \\ &= \mc{E}((\nabla^2 _{w,w}f_{i_1})(x+tw) f_{i_2}(x+tw)) + \mc{E}(f_{i_1}(x+tw) (\nabla^2 _{w,w}f_{i_2})(x+tw)) \\ & \quad + 2 \mc{E}((\nabla_w f_{i_1})(x+tw) (\nabla_w f_{i_2})(x+tw)) \\ & \quad -\sum_{p,q} \mc{E}((\nabla^2 _{w,w}f_{i_1})(x+tw)f_{p}(x) ) {({\mr{Cov}(x,x)}^{-1})}_{pq} {(\mr{Cov}(x,x+t \cdot w) )}_{q i_2} \\ & \quad -\sum_{p,q} {(\mr{Cov}(x+ t \cdot w,x) )}_{i_1 p} {({\mr{Cov}(x,x)}^{-1})}_{pq} \mc{E}(f_{q}(x)(\nabla^2 _{w,w}f_{i_2})(x+tw) ) \\ & \quad - 2\sum_{p,q} \mc{E}((\nabla_w f_{i_1})(x+tw)f_{p}(x) ) \cdot {( {\mr{Cov}(x,x)}^{-1} )}_{pq} \cdot \mc{E}(f_{q}(x)(\nabla _{w}f_{i_2})(x+tw) ). 
\end{split} \end{equation} By Cauchy's mean value theorem, we have that for any $i_1, i_2 = 1, \dots , m$, there is a $\xi_{i_1 i_2}$ between $0$ and $t$, such that \begin{equation} {(k_w(t))}_{i_1 i_2} = \frac{t^2}{2} \cdot {(k_w ''(\xi_{i_1 i_2}))}_{i_1 i_2} . \end{equation} Furthermore, since $F$ is $C^{3-}$-smooth, by assumption $(A2)$ we have that \begin{equation} | {(k_w ''(\xi_{i_1 i_2}))}_{i_1 i_2} - {(k_w '' (0) )}_{i_1 i_2}| \leq \sqrt{\xi_{i_1i_2}} \cdot C_1 \leq \sqrt{t} \cdot C_1 \end{equation} for some positive constant $C_1 = C_1(n,m,M)$. We collect the remaining terms into the symmetric matrix $R$ with entries $ R_{i_1i_2}\equiv t^{-1/2} \cdot [{(k_w ''(\xi_{i_1 i_2}))}_{i_1i_2} - {( k_w '' (0))}_{i_1 i_2}]$, so that \begin{equation}\label{E: APP C Expansion} k_w(t) = \frac{t^2}{2} \big(k_w ''(0) + \sqrt{t} R \big) . \end{equation} To obtain the expression of $k_w '' (0)$, we set $t = 0$ in (\ref{E: APP C Second Derivative}) and get that \begin{equation} \begin{split} {(k_w ''(0))}_{i_1 i_2} &= 2 \big[ \mc{E}((\nabla_w f_{i_1})(x) (\nabla_w f_{i_2})(x)) \\ &\quad - \sum_{p,q} \mc{E}((\nabla_w f_{i_1})(x)f_{p}(x) ) \cdot {( {\mr{Cov}(x,x)}^{-1} )}_{pq} \cdot \mc{E}(f_{q}(x)(\nabla _{w}f_{i_2})(x) ) \big]. \end{split} \end{equation} For any $\eta = {(\eta_1, \dots ,\eta_m)}^T\in \mb{S}^{m-1}$, we let \begin{equation} v_\eta = \sum_{i=1}^m \eta_i ((\nabla_w f_i)(x)) = \sum_{i=1}^m \sum_{j=1} ^n \eta_i w_j ((\pa_j f_i)(x)). \end{equation} Notice that $\mr{Cov}(x,x)$ is a positive definite matrix, so it can be written as the square of another positive definite matrix, i.e., $\mr{Cov}(x,x) = B^2$ with $B$ positive definite. Set the Gaussian vector $G = (g_1 ,\dots ,g_m) \equiv {F(x)}^T \cdot B^{-1}$ so that $\mc{E}(g_{i_1} g_{i_2}) = \delta_{i_1i_2}$.
Then, \begin{equation} \begin{split} \frac{1}{2}\eta^T \cdot k_w ''(0) \cdot \eta &= \mc{E}(v_\eta^2) - \sum_p \mc{E}(v_\eta g_p) \cdot \mc{E}(g_p v_\eta) \\ &= \mc{E}\big[{\big(v_\eta - \sum_p g_p \cdot \mc{E}(v_\eta g_p)\big)}^2 \big]. \end{split} \end{equation} Notice that $v_\eta$ is a linear combination of $\nabla F(x)$ terms while $\sum_p g_p \cdot \mc{E}(v_\eta g_p)$ is a linear combination of $F(x)$ terms. Since the joint distribution of $(F(x),\nabla F(x))$ is non-degenerate by $(A1)$ of Definition~\ref{D: Axiom 1}, we see that \begin{equation} \mc{E}\big[{\big(v_\eta - \sum_p g_p \cdot \mc{E}(v_\eta g_p)\big)}^2 \big] \geq k_1 \cdot \bigg|\sum_{i =1 }^m \sum_{j=1}^n {(\eta_i w_j)}^2 \bigg| = k_1, \end{equation} where for the inequality, we only collect the coefficients of the $\nabla F(x)$ terms. Hence, by (\ref{E: APP C Expansion}), there is a $\delta = \delta(n,m,M,k_1) \in (0,1)$ such that when $|t| \leq \delta$, we have that \begin{equation} \eta^T \cdot k_w (t) \cdot \eta = \frac{t^2}{2}\eta^T \cdot (k_w ''(0) + \sqrt{t}R) \cdot \eta \geq \frac{t^2}{2} \cdot k_1. \end{equation} \end{proof} \begin{remark} Theorem~\ref{Thm: APP C 1} gives an upper bound for the density $p_{(F(x),F(y))}(0,0)$ when $0<||x-y|| < \delta$. If we know that $(F(x),F(y))$ is always non-degenerate when $x\neq y$ and $x,y \in C_{R+1}$, then condition $(A3)$ in Definition~\ref{D: Axiom 1} will hold true automatically. In that case, $k_2$ depends on the covariance kernel of the joint distribution of $(F(x),F(y))$ when $||x-y|| > \delta$. For condition $(B4)$ in Definition~\ref{D: Axiom 3}, in most applications and concrete examples, like Kostlan's ensemble or complex arithmetic random waves, we will first know that the kernels ${\{K_{x,L}\}}_{L \in \mc{L}}$ converge in $C^k$, i.e., for all $R > 0$, \begin{equation} \lim_{L \to \infty} ||K_{x,L}(u,v) - K_x(u-v)||_{C^k(C_{R+1} \times C_{R+1})} = 0.
\end{equation} In this case, conditions $(B1)$, $(B2)$ and $(B4)$ will hold true if for every compact set $Q \subset U$ and all $x \in Q$, $K_x(u-v)$ satisfies uniform \textit{ $(R; M(Q),k_1(Q),k_2(x))$-assumptions}. In particular, for $(B4)$, if we know that for all $x \in U$ and $R > 0$, \begin{equation} \lim_{L \to \infty} \sup_{u,v \in C_{R+1}}||K_{x,L}(u,v) - K_x(u-v)|| = 0, \end{equation} and $K_x$, or equivalently $F_x$, satisfies $(A3)$ in Definition~\ref{D: Axiom 1} for some $k_2 = k_2(x)$, then when $||u-v|| > \delta$, the quantitative two-point nondegeneracy of $K_{x,L}$, or equivalently $F_{x,L}$, follows from this uniform convergence. When $||u-v|| \leq \delta$, one can just apply Theorem~\ref{Thm: APP C 1}, because if $K_{x,L}$ converges to $K_x$ in $C^k$, then $K_{x,L}$ satisfies \textit{ $(R; 2M(Q),k_1(Q)/2)$-assumptions} since $K_x$ satisfies \textit{ $(R; M(Q),k_1(Q))$-assumptions}. \end{remark} \end{appendix} \section*{Declarations} \bigskip {\bf Acknowledgements} The author would like to thank his advisor, Professor Fang-Hua Lin, for his continuous support and encouragement. The author would also like to thank Professor Paul Bourgade and Professor Robert V.~Kohn for very helpful discussions and suggestions in the finalizing stage. Professor Yuri Bakhtin brought~\cite{GS04} and Professor Ao Sun brought~\cite{CL58} to the author's attention, and the author wants to thank them. The author also wants to thank Professor Ao Sun and Professor Chao Li for their encouragement in 2021. This work was completed while the author was a PhD student at NYU, partially supported by the NSF grants DMS2247773, DMS2055686, and DMS2009746. \bibliographystyle{acm} \bibliography{RandomZeroSetsNewIntroduction3.bib} \end{document}
2205.01013v3
http://arxiv.org/abs/2205.01013v3
Crossing numbers and rotation numbers of cycles in a plane immersed graph
\documentclass[a4paper]{amsart} \usepackage{amsmath,amssymb} \usepackage{graphicx} \usepackage[all]{xy} \usepackage{multirow} \newtheorem{Theorem}{Theorem}[section] \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Definition}[Theorem]{Definition} \newtheorem{Example}[Theorem]{Example} \newtheorem{Problem}[Theorem]{Problem} \newtheorem{Question}[Theorem]{Question} \newtheorem{Remark}[Theorem]{Remark} \def\labelenumi{\rm ({\makebox[0.8em][c]{\theenumi}})} \def\theenumi{\arabic{enumi}} \def\labelenumii{\rm {\makebox[0.8em][c]{(\theenumii)}}} \def\theenumii{\roman{enumii}} \renewcommand{\thefigure} {\arabic{section}.\arabic{figure}} \makeatletter \@addtoreset{figure}{section}\def\@thmcountersep{-} \makeatother ll}\hbox{\rule[-2pt]{3pt}{6pt}}} \newcommand{\bsquare}{\hbox{\rule{6pt}{6pt}}} \numberwithin{equation}{section} \newcommand{\abs}[1]{\lvert#1\rvert} \newcommand{\blankbox}[2]{ \parbox{\columnwidth}{\centering \setlength{\fboxsep}{0pt} \fbox{\raisebox{0pt}[#2]{\hspace{#1}}} }} \begin{document} \title[Crossing numbers and rotation numbers of cycles in a graph]{Crossing numbers and rotation numbers of cycles in a plane immersed graph} \author{Ayumu Inoue} \address{Department of Mathematics, Tsuda University, 2-1-1 Tsuda-machi, Kodaira-shi, Tokyo 187-8577, Japan} \email{[email protected]} \author{Naoki Kimura} \address{Department of Mathematics, Graduate School of Fundamental Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan} \email{[email protected]} \author{Ryo Nikkuni} \address{Department of Mathematics, School of Arts and Sciences, Tokyo Woman's Christian University, 2-6-1 Zempukuji, Suginami-ku, Tokyo 167-8585, Japan} \email{[email protected]} \author{Kouki Taniyama} \address{Department of Mathematics, School of Education, Waseda University, Nishi-Waseda 1-6-1, Shinjuku-ku, Tokyo, 169-8050, Japan} \email{[email protected]} \thanks{The 
first author was partially supported by Grant-in-Aid for Scientific Research (c) (No. 19K03476), Japan Society for the Promotion of Science. The third author was partially supported by Grant-in-Aid for Scientific Research (c) (No. 19K03500), Japan Society for the Promotion of Science. The fourth author was partially supported by Grant-in-Aid for Scientific Research (c) (No. 21K03260) and Grant-in-Aid for Scientific Research (A) (No. 21H04428), Japan Society for the Promotion of Science.} \subjclass[2020]{Primary 05C10; Secondary 57K10.} \date{} \dedicatory{} \keywords{crossing number, rotation number, plane immersed graph, Petersen graph, Heawood graph, Reduced Wu and generalized Simon invariant, Legendrian knot, Legendrian spatial graph, Thurston-Bennequin number} \begin{abstract} For any generic immersion of a Petersen graph into a plane, the number of crossing points between two edges of distance one is odd. The sum of the crossing numbers of all $5$-cycles is odd. The sum of the rotation numbers of all $5$-cycles is even. We show analogous results for $6$-cycles, $8$-cycles and $9$-cycles. For any Legendrian spatial embedding of a Petersen graph, there exists a $5$-cycle that is not an unknot with maximal Thurston-Bennequin number, and the sum of all Thurston-Bennequin numbers of the cycles is $7$ times the sum of all Thurston-Bennequin numbers of the $5$-cycles. We show analogous results for a Heawood graph. We also show some other results for some graphs. We characterize the abstract graphs that have a generic immersion into a plane all of whose cycles have rotation number $0$. \end{abstract} \maketitle \section{Introduction}\label{introduction} Let $G$ be a finite graph. We denote the set of all vertices of $G$ by $V(G)$ and the set of all edges by $E(G)$. We consider $G$ as a topological space in the usual way. Then a vertex of $G$ is a point of $G$ and an edge of $G$ is a subspace of $G$.
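The cycle counts for the Petersen and Heawood graphs quoted later in this paper (for instance $|\Gamma_{5}(PG)|=12$ and $|\Gamma_{6}(HG)|=28$) can be reproduced by brute-force enumeration. The following Python sketch is ours, not part of the paper; the vertex labelling of $PG$ mirrors the edge list $u_{i}u_{i+1}$, $u_{i}v_{i}$, $v_{i}v_{i+2}$ used below, and $HG$ is built from its LCF description $[5,-5]^{7}$.

```python
from collections import defaultdict

def cycle_census(edges):
    """Count simple cycles by length: depth-first search from each start
    vertex over paths whose other vertices all exceed the start, so every
    cycle is found exactly twice (once per orientation)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    counts = defaultdict(int)

    def extend(start, v, on_path):
        for w in adj[v]:
            if w == start and len(on_path) >= 3:
                counts[len(on_path)] += 1
            elif w > start and w not in on_path:
                on_path.add(w)
                extend(start, w, on_path)
                on_path.remove(w)

    for s in list(adj):
        extend(s, s, {s})
    return {k: n // 2 for k, n in sorted(counts.items())}

# Petersen graph: u_i -> i, v_i -> 5 + i (indices mod 5)
petersen = ([(i, (i + 1) % 5) for i in range(5)]             # u_i u_{i+1}
            + [(i, 5 + i) for i in range(5)]                 # u_i v_i
            + [(5 + i, 5 + (i + 2) % 5) for i in range(5)])  # v_i v_{i+2}
# Heawood graph via its LCF notation [5, -5]^7: a 14-cycle plus 7 chords
heawood = ([(i, (i + 1) % 14) for i in range(14)]
           + [(i, (i + 5) % 14) for i in range(0, 14, 2)])
```

For example, `cycle_census(petersen)` returns `{5: 12, 6: 10, 8: 15, 9: 20}`, matching the counts $|\Gamma_{5}(PG)|,\dots,|\Gamma_{9}(PG)|$ stated before the corollaries in this introduction.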
A graph $G$ is said to be {\it simple} if it has no loops and no multiple edges. Suppose that $G$ has no multiple edges. Let $u$ and $v$ be mutually adjacent vertices of $G$. Then the edge of $G$ incident to both $u$ and $v$ is denoted by $uv$. Then $uv=vu$ as an unoriented edge. The orientation of $uv$ is given so that $u$ is the initial vertex and $v$ is the terminal vertex. Therefore $uv\neq vu$ as oriented edges. A {\it cycle} of $G$ is a subgraph of $G$ that is homeomorphic to a circle ${\mathbb S}^{1}$. A cycle with $k$ edges is said to be a {\it $k$-cycle}. The set of all cycles of $G$ is denoted by $\Gamma(G)$ and the set of all $k$-cycles of $G$ is denoted by $\Gamma_{k}(G)$. A {\it spatial embedding} of $G$ is an embedding of $G$ into the $3$-space ${\mathbb R}^{3}$. The image of a spatial embedding is said to be a {\it spatial graph}. The set of all spatial embeddings of $G$ is denoted by $SE(G)$. A {\it plane generic immersion} of $G$ is an immersion of $G$ into the plane ${\mathbb R}^{2}$ whose multiple points are only finitely many transversal double points between edges. Such a double point is said to be a {\it crossing point} or a {\it crossing}. The image of a plane generic immersion, together with the information distinguishing the crossing points from the images of the degree $4$ vertices, is said to be a {\it plane immersed graph}. Let $f:G\to{\mathbb R}^{2}$ be a plane generic immersion of $G$. Let $H$ be a subgraph of $G$. We denote the number of crossings of the restriction map $f|_{H}:H\to{\mathbb R}^{2}$ by $c(f(H))$. The set of all plane generic immersions of $G$ is denoted by $PGI(G)$. Let $K_{n}$ be the complete graph on $n$ vertices and $K_{m,n}$ the complete bipartite graph on $m+n$ vertices. It is shown in \cite{C-G} and \cite{Sachs} that for any spatial embedding $f:K_{6}\to {\mathbb R}^{3}$, the sum of all linking numbers of the links in $f(K_{6})$ is an odd number. Let $a_{2}(J)$ be the second coefficient of the Conway polynomial of a knot $J$.
It is also shown in \cite{C-G} that for any spatial embedding $f:K_{7}\to {\mathbb R}^{3}$, the sum $\displaystyle{\sum_{\gamma\in\Gamma_{7}(K_{7})}a_{2}(f(\gamma))}$ is an odd number. See also \cite{Nikkuni} for refinements of these results, and \cite{S-S}\cite{S-S-S}\cite{Taniyama3} etc. for higher dimensional analogues. An analogous phenomenon appears in plane immersed graphs. A {\it self crossing} is a crossing of an edge with itself. An {\it adjacent crossing} is a crossing between two mutually adjacent edges. A {\it disjoint crossing} is a crossing between two mutually disjoint edges. It is known that for $G=K_{5}$ or $G=K_{3,3}$, the number of all disjoint crossings of a plane generic immersion of $G$ is always odd. See for example \cite[Proposition 2.1]{S-T} or \cite[Lemma 1.4.3]{Skopenkov}. Some theorems on plane immersed graphs are also stated in \cite{Skopenkov}. See also \cite{D-F-V} for related results. As an analogous phenomenon, we show the following results. Let $G$ be a finite graph with at least one cycle. The {\it girth} $g(G)$ of $G$ is the minimal length of the cycles of $G$. Namely, every cycle of $G$ contains at least $g(G)$ edges and there is a $g(G)$-cycle of $G$. Let $G$ be a finite graph and $H,K$ connected subgraphs of $G$. The distance $d(H,K)$ of $H$ and $K$ in $G$ is defined to be the minimum number of edges of a path of $G$ joining $H$ and $K$. Then $d(H,K)=0$ if and only if $H\cup K$ is connected. Let $d$ and $e$ be mutually distinct edges of $G$. We note that $d(d,e)=0$ if and only if $d$ and $e$ are adjacent. Then $d(d,e)=1$ if and only if $d$ and $e$ are disjoint and there exists an edge $x$ of $G$ adjacent to both of them. If $g(G)\geq5$ then such an $x$ is unique. Similarly, $d(d,e)=2$ if and only if $d$ and $e$ are disjoint, no edge of $G$ is adjacent to both of them, and there exist mutually adjacent edges $x$ and $y$ of $G$ such that $x$ is adjacent to $d$ and $y$ is adjacent to $e$. Let $k$ be a natural number.
Let $D_{k}(G)$ be the set of all unordered pairs $(d,e)$ of edges of $G$ with $d(d,e)=k$. \begin{Theorem}\label{theorem-K4} Let $f:K_{4}\to {\mathbb R}^{2}$ be a plane generic immersion. Then \[ \sum_{\gamma\in\Gamma(K_{4})}c(f(\gamma))\equiv 0\pmod 2. \] \end{Theorem} \begin{Theorem}\label{theorem-K33} Let $f:K_{3,3}\to {\mathbb R}^{2}$ be a plane generic immersion. Then \[ \sum_{\gamma\in\Gamma_{4}(K_{3,3})}c(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{6}(K_{3,3})}c(f(\gamma))\equiv 1\pmod 2. \] \end{Theorem} We denote a Petersen graph by $PG$. A Petersen graph $PG$ and a plane generic immersion $g:PG\to {\mathbb R}^{2}$ of $PG$ are illustrated in Figure \ref{PG}. \begin{figure}[htbp] \begin{center} \scalebox{0.6}{\includegraphics*{PG.eps}} \end{center} \caption{A plane generic immersion of $PG$} \label{PG} \end{figure} \begin{Theorem}\label{theorem-Petersen0} Let $f:PG\to {\mathbb R}^{2}$ be a plane generic immersion. Then \[ \sum_{(d,e)\in D_{1}(PG)}|f(d)\cap f(e)|\equiv 1\pmod 2. \] \end{Theorem} Note that, for an edge $e$ of $PG$, the edges of $PG$ at distance $1$ from $e$ form an $8$-cycle of $PG$; see Figure \ref{edge-distance-PG}. \begin{figure}[htbp] \begin{center} \scalebox{0.6}{\includegraphics*{edge-distance-PG.eps}} \end{center} \caption{Distance $1$ edges form an $8$-cycle} \label{edge-distance-PG} \end{figure} The modular equality in Theorem \ref{theorem-Petersen0} has an integral lift to an invariant of spatial embeddings of $PG$ as stated in Theorem \ref{theorem-Petersen1}. This is a kind of spatial graph invariant called a {\it Reduced Wu and generalized Simon invariant}. See \cite{Taniyama1} \cite{Taniyama2} \cite{F-F-N}. We prepare some notions in order to state Theorem \ref{theorem-Petersen1}. Let $G$ be a finite graph and $f:G\to {\mathbb R}^{2}$ a plane generic immersion of $G$. Let $\pi:{\mathbb R}^{3}\to{\mathbb R}^{2}$ be the natural projection defined by $\pi(x,y,z)=(x,y)$.
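The observation that the edges at distance $1$ from a fixed edge of $PG$ form an $8$-cycle (and the analogous distance-$2$ statement for $HG$ in Theorem \ref{theorem-Heawood0} below) can be checked mechanically. The sketch below is ours, not part of the paper; the edge distance $d(d,e)$ is computed as breadth-first-search distance in the line graph, shifted so that adjacent edges have distance $0$ as in the text.

```python
from collections import defaultdict, deque

def edge_distances_from(e, edges):
    """Distances from edge e in the paper's convention: adjacent edges are
    at distance 0, i.e. line-graph BFS distance minus one."""
    steps = {e: 0}
    q = deque([e])
    while q:
        x = q.popleft()
        for y in edges:
            if y not in steps and x & y:  # edges sharing an endpoint
                steps[y] = steps[x] + 1
                q.append(y)
    return {f: k - 1 for f, k in steps.items() if f != e}

def forms_single_cycle(edge_set):
    """True iff the union of the edges is one cycle: every vertex has
    exactly two neighbours and the union is connected."""
    adj = defaultdict(set)
    for e in edge_set:
        a, b = tuple(e)
        adj[a].add(b)
        adj[b].add(a)
    if any(len(nbrs) != 2 for nbrs in adj.values()):
        return False
    seen, q = set(), deque([next(iter(adj))])
    while q:
        v = q.popleft()
        if v not in seen:
            seen.add(v)
            q.extend(adj[v])
    return len(seen) == len(adj)

# Petersen graph with u_i -> i and v_i -> 5 + i (indices mod 5)
pg = ([frozenset({i, (i + 1) % 5}) for i in range(5)]
      + [frozenset({i, 5 + i}) for i in range(5)]
      + [frozenset({5 + i, 5 + (i + 2) % 5}) for i in range(5)])
# Heawood graph from its LCF notation [5, -5]^7
hg = ([frozenset({i, (i + 1) % 14}) for i in range(14)]
      + [frozenset({i, (i + 5) % 14}) for i in range(0, 14, 2)])
```

Running the check over all $15$ edges of `pg` confirms that the eight distance-$1$ edges always form a single $8$-cycle; both graphs are edge-transitive, so one edge would in fact suffice.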
A spatial embedding $\varphi:G\to {\mathbb R}^{3}$ of $G$ is said to be a {\it lift} of $f$ if $f=\pi\circ\varphi$. The subset $f(G)$ of ${\mathbb R}^{2}$ together with the vertex information $f|_{V(G)}$ and over/under crossing information of $\varphi$ at each crossing of $f$ is said to be a {\it diagram} of $\varphi$ based on $f(G)$. Suppose $G$ is simple. Then a diagram of $\varphi$ determines $\varphi$ up to ambient isotopy of ${\mathbb R}^{3}$. Let $D$ be a diagram of $\varphi$. Suppose that each edge of $G$ is oriented. A crossing of $D$ is said to be a {\it positive crossing} or a {\it negative crossing} if it is as illustrated in Figure \ref{positive-negative-crossing}. Let $d$ and $e$ be mutually distinct edges of $G$. Let $c_{D}^{+}(d,e)$ be the number of positive crossings of $f(d)\cap f(e)$ and $c_{D}^{-}(d,e)$ the number of negative crossings of $f(d)\cap f(e)$. We set $\ell_{D}(d,e)=c_{D}^{+}(d,e)-c_{D}^{-}(d,e)$. \begin{figure}[htbp] \begin{center} \scalebox{0.5}{\includegraphics*{positive-negative-crossing.eps}} \end{center} \caption{Positive crossing and negative crossing} \label{positive-negative-crossing} \end{figure} Let $u_{i}$ and $v_{i}$ be the vertices of a Petersen graph as illustrated in Figure \ref{PG} for $i=1,2,3,4,5$. We consider the suffixes modulo $5$. Namely $u_{-4}=u_{1}=u_{6}$, $u_{-3}=u_{2}=u_{7}$, $v_{-4}=v_{1}=v_{6}$ and so on. Then $E(PG)$ consists of $15$ edges $u_{i}u_{i+1}$, $u_{i}v_{i}$ and $v_{i}v_{i+2}$ for $i=1,2,3,4,5$. We consider that these edges are oriented.
We define a map $\varepsilon:D_{1}(PG)\to{\mathbb Z}$ by $\varepsilon(u_{i}u_{i+1},u_{i+2}u_{i+3})=1$, $\varepsilon(u_{i}u_{i+1},u_{i-1}v_{i-1})=1$, $\varepsilon(u_{i}u_{i+1},u_{i+2}v_{i+2})=-1$, $\varepsilon(u_{i}u_{i+1},v_{j}v_{j+2})=1$,\\ $\varepsilon(u_{i}v_{i},u_{i\pm1}v_{i\pm1})=-1$, $\varepsilon(u_{i}v_{i},u_{i\pm2}v_{i\pm2})=1$, $\varepsilon(u_{i}v_{i},v_{i+1}v_{i+3})=-1$,\\ $\varepsilon(u_{i}v_{i},v_{i+2}v_{i+4})=1$ and $\varepsilon(v_{i}v_{i+2},v_{i+1}v_{i+3})=-1$ for $i,j\in\{1,2,3,4,5\}$. Let $\varphi:PG\to{\mathbb R}^{3}$ be a spatial embedding that is a lift of a plane generic immersion $f:PG\to{\mathbb R}^{2}$. Let $D$ be a diagram of $\varphi$ based on $f(PG)$. We set \[ {\mathcal L}(\varphi)=\sum_{(d,e)\in D_{1}(PG)}\varepsilon(d,e)\ell_{D}(d,e). \] \begin{Theorem}\label{theorem-Petersen1} Let $\varphi:PG\to{\mathbb R}^{3}$ be a spatial embedding of a Petersen graph $PG$ that is a lift of a plane generic immersion $f:PG\to{\mathbb R}^{2}$ of $PG$. Let $D$ be a diagram of $\varphi$ based on $f(PG)$. Then ${\mathcal L}(\varphi)$ is a well-defined ambient isotopy invariant of $\varphi$ and we have \[ {\mathcal L}(\varphi)\equiv\sum_{(d,e)\in D_{1}(PG)}|f(d)\cap f(e)|\equiv 1\pmod 2. \] \end{Theorem} \begin{Remark}\label{remark-Petersen1} {\rm Let $G$ be a finite graph and $k$ a natural number. Let $S_{k}(G)$ be a subcomplex of a $2$-dimensional complex $G\times G$ defined by \[ S_{k}(G)=\bigcup_{(d,e)\in D_{k}(G)}d\times e\cup e\times d. \] It is known that $S_{1}(K_{5})$ is homeomorphic to a closed orientable surface of genus $6$ and $S_{1}(K_{3,3})$ is homeomorphic to a closed orientable surface of genus $4$ \cite{Sarkaria}. Let $\varphi:G\to{\mathbb R}^{3}$ be a spatial embedding of $G$. Let $\tau_{\varphi}:S_{k}(G)\to{\mathbb S}^{2}$ be a Gauss map defined by \[ \tau_{\varphi}(x,y)=\frac{\varphi(x)-\varphi(y)}{\|\varphi(x)-\varphi(y)\|}. 
\] It is known that the mapping degree of $\tau_{\varphi}:S_{1}(G)\to{\mathbb S}^{2}$ for $G=K_{5}$ or $K_{3,3}$ is equal to the {\it Simon invariant} of $\varphi$ up to sign \cite{Taniyama2}. By a straightforward consideration we see that $S_{1}(PG)$ is homeomorphic to a closed orientable surface of genus $16$, and for a spatial embedding $\varphi:PG\to{\mathbb R}^{3}$ we see that ${\mathcal L}(\varphi)$ is equal to the mapping degree of $\tau_{\varphi}:S_{1}(PG)\to{\mathbb S}^{2}$ up to sign. } \end{Remark} \begin{Theorem}\label{theorem-Petersen} Let $f:PG\to {\mathbb R}^{2}$ be a plane generic immersion. Then \[ \sum_{\gamma\in\Gamma_{5}(PG)}c(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{6}(PG)}c(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{9}(PG)}c(f(\gamma))\equiv 1\pmod 2 \] and \[ \sum_{\gamma\in\Gamma_{8}(PG)}c(f(\gamma))\equiv 0\pmod 4. \] \end{Theorem} We denote a Heawood graph by $HG$. A Heawood graph $HG$ and a plane generic immersion $g:HG\to {\mathbb R}^{2}$ of $HG$ are illustrated in Figure \ref{HG}. \begin{figure}[htbp] \begin{center} \scalebox{0.6}{\includegraphics*{HG.eps}} \end{center} \caption{A plane generic immersion of $HG$} \label{HG} \end{figure} \begin{Theorem}\label{theorem-Heawood0} Let $f:HG\to {\mathbb R}^{2}$ be a plane generic immersion. Then \[ \sum_{(d,e)\in D_{2}(HG)}|f(d)\cap f(e)|\equiv 1\pmod 2. \] \end{Theorem} Note that, for an edge $e$ of $HG$, the edges of $HG$ at distance $2$ from $e$ form an $8$-cycle of $HG$; see Figure \ref{edge-distance-HG}. \begin{figure}[htbp] \begin{center} \scalebox{0.6}{\includegraphics*{edge-distance-HG.eps}} \end{center} \caption{Distance $2$ edges form an $8$-cycle} \label{edge-distance-HG} \end{figure} We note that the modular equality in Theorem \ref{theorem-Heawood0} also has an integral lift to an invariant of spatial embeddings of $HG$. That is an invariant defined in \cite[Theorem 3.15]{F-F-N}. It is always an odd number \cite[Lemma 3.16]{F-F-N}. The invariant is defined as follows.
Let $u_{1},u_{2},u_{3},u_{4},u_{5},u_{6},u_{7}$ and $v_{1},v_{2},v_{3},v_{4},v_{5},v_{6},v_{7}$ be the vertices of a Heawood graph as illustrated in Figure \ref{HG}. We consider the suffixes modulo $7$. Then $E(HG)$ consists of $21$ edges $u_{i}v_{i}$, $u_{i}v_{i-1}$ and $v_{i}u_{i-2}$ for $i=1,2,3,4,5,6,7$. We consider that these edges are oriented. We define a map $\varepsilon:D_{1}(HG)\cup D_{2}(HG)\to{\mathbb Z}$ by $\varepsilon(u_{i}v_{i},u_{i\pm1}v_{i\pm1})=2$, $\varepsilon(u_{i}v_{i},u_{i\pm2}v_{i\pm2})=-2$, $\varepsilon(u_{i}v_{i},u_{i\pm3}v_{i\pm3})=-3$, $\varepsilon(u_{i}v_{i},u_{i+2}v_{i+1})=\varepsilon(u_{i}v_{i},u_{i-1}v_{i-2})=1$, \\ $\varepsilon(u_{i}v_{i},u_{i+3}v_{i+2})=\varepsilon(u_{i}v_{i},u_{i-2}v_{i-3})=2$, $\varepsilon(u_{i}v_{i},u_{i+4}v_{i+3})=5$, \\ $\varepsilon(u_{i}v_{i},v_{i+1}u_{i-1})=3$, $\varepsilon(u_{i}v_{i},v_{i+3}u_{i+1})=\varepsilon(u_{i}v_{i},v_{i-1}u_{i+4})=2$, \\ $\varepsilon(u_{i}v_{i},v_{i+4}u_{i+2})=\varepsilon(u_{i}v_{i},v_{i+5}u_{i+3})=-1$, \\ $\varepsilon(u_{i}v_{i-1},u_{i+1}v_{i})=\varepsilon(u_{i}v_{i-1},u_{i-1}v_{i-2})=2$, \\ $\varepsilon(u_{i}v_{i-1},u_{i+2}v_{i+1})=\varepsilon(u_{i}v_{i-1},u_{i-2}v_{i-3})=1$, \\ $\varepsilon(u_{i}v_{i-1},u_{i+3}v_{i+2})=\varepsilon(u_{i}v_{i-1},u_{i-3}v_{i-4})=-2$, \\ $\varepsilon(u_{i}v_{i-1},v_{i}u_{i-2})=\varepsilon(u_{i}v_{i-1},v_{i+1}u_{i-1})=2$, \\ $\varepsilon(u_{i}v_{i-1},v_{i+3}u_{i+1})=\varepsilon(u_{i}v_{i-1},v_{i+5}u_{i+3})=3$, \\ $\varepsilon(u_{i}v_{i-1},v_{i+4}u_{i+2})=3$, $\varepsilon(v_{i}u_{i-2},v_{i+1}u_{i-1})=\varepsilon(v_{i}u_{i-2},v_{i-1}u_{i-3})=5$, \\ $\varepsilon(v_{i}u_{i-2},v_{i+2}u_{i})=\varepsilon(v_{i}u_{i-2},v_{i-2}u_{i-4})=2$, and \\ $\varepsilon(v_{i}u_{i-2},v_{i+3}u_{i+1})=\varepsilon(v_{i}u_{i-2},v_{i-3}u_{i-5})=2$ for $i=1,2,3,4,5,6,7$. Let $\varphi:HG\to{\mathbb R}^{3}$ be a spatial embedding that is a lift of a plane generic immersion $f:HG\to{\mathbb R}^{2}$. Let $D$ be a diagram of $\varphi$ based on $f(HG)$.
We set \[ {\mathcal L}(\varphi)=\sum_{(d,e)\in D_{1}(HG)\cup D_{2}(HG)}\varepsilon(d,e)\ell_{D}(d,e). \] \begin{Theorem}\label{theorem-Heawood1} Let $\varphi:HG\to{\mathbb R}^{3}$ be a spatial embedding of a Heawood graph $HG$ that is a lift of a plane generic immersion $f:HG\to{\mathbb R}^{2}$ of $HG$. Let $D$ be a diagram of $\varphi$ based on $f(HG)$. Then we have \[ {\mathcal L}(\varphi)\equiv\sum_{(d,e)\in D_{2}(HG)}|f(d)\cap f(e)|\equiv 1\pmod 2. \] \end{Theorem} \begin{Remark}\label{remark-Heawood1} {\rm The integral lift above involves disjoint edges of $HG$ with distance $1$. It is straightforward to check that there exists no integral lift involving only disjoint edges of $HG$ with distance $2$. As a related fact we see by a straightforward consideration that $S_{2}(HG)$ is homeomorphic to a closed non-orientable surface of non-orientable genus $30$, and for any spatial embedding $\varphi:HG\to{\mathbb R}^{3}$, the ${\mathbb Z}/2{\mathbb Z}$-mapping degree of $\tau_{\varphi}:S_{2}(HG)\to{\mathbb S}^{2}$ is equal to $1$. } \end{Remark} \begin{Theorem}\label{theorem-Heawood} Let $f:HG\to {\mathbb R}^{2}$ be a plane generic immersion. Then \[ \sum_{\gamma\in\Gamma_{6}(HG)}c(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{8}(HG)}c(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{10}(HG)}c(f(\gamma))\equiv 1\pmod 2, \] \[ \sum_{\gamma\in\Gamma_{12}(HG)}c(f(\gamma))\equiv 0\pmod 4, \] and \[ \sum_{\gamma\in\Gamma_{14}(HG)}c(f(\gamma))\equiv 0\pmod 2. \] \end{Theorem} Let $f:G\to {\mathbb R}^{2}$ be a plane generic immersion of a finite graph $G$. Let $\gamma$ be a cycle of $G$. Suppose that $\gamma$ is given an orientation. Then $f(\gamma)$ is an oriented plane closed curve. We denote the rotation number of $f(\gamma)$ by ${\rm rot}(f(\gamma))$. We note that the parity of ${\rm rot}(f(\gamma))$ is independent of the choice of orientation of $\gamma$. It is easy to see that \[ {\rm rot}(f(\gamma))-c(f(\gamma))\equiv 1\pmod 2. 
\] We denote the number of elements of a finite set $X$ by $|X|$. The number of $k$-cycles of $G$ for $G=K_{4},K_{3,3}$ and $PG$ listed below is known and also easy to enumerate. The number of $k$-cycles of $HG$ listed below is also known. See for example \cite[4.2]{Sivaraman}. Since $|\Gamma(K_{4})|=7\equiv 1\pmod 2$, $|\Gamma_{4}(K_{3,3})|=9\equiv 1\pmod 2$, $|\Gamma_{6}(K_{3,3})|=6\equiv 0\pmod 2$, $|\Gamma_{5}(PG)|=12\equiv 0\pmod 2$, $|\Gamma_{6}(PG)|=10\equiv 0\pmod 2$, $|\Gamma_{8}(PG)|=15\equiv 1\pmod 2$, $|\Gamma_{9}(PG)|=20\equiv 0\pmod 2$, $|\Gamma_{6}(HG)|=28\equiv 0\pmod 2$, $|\Gamma_{8}(HG)|=21\equiv 1\pmod 2$, $|\Gamma_{10}(HG)|=84\equiv 0\pmod 2$, $|\Gamma_{12}(HG)|=56\equiv 0\pmod 2$ and $|\Gamma_{14}(HG)|=24\equiv 0\pmod 2$, we have the following immediate corollaries. \begin{Corollary}\label{corollary-K4} Let $f:K_{4}\to {\mathbb R}^{2}$ be a plane generic immersion. Then \[ \sum_{\gamma\in\Gamma(K_{4})}{\rm rot}(f(\gamma))\equiv 1\pmod 2. \] \end{Corollary} \begin{Corollary}\label{corollary-K33} Let $f:K_{3,3}\to {\mathbb R}^{2}$ be a plane generic immersion. Then \[ \sum_{\gamma\in\Gamma_{4}(K_{3,3})}{\rm rot}(f(\gamma))\equiv 0\pmod 2 \] and \[ \sum_{\gamma\in\Gamma_{6}(K_{3,3})}{\rm rot}(f(\gamma))\equiv 1\pmod 2. \] \end{Corollary} \begin{Corollary}\label{corollary-Petersen} Let $f:PG\to {\mathbb R}^{2}$ be a plane generic immersion. Then \begin{align*} &\sum_{\gamma\in\Gamma_{5}(PG)}{\rm rot}(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{6}(PG)}{\rm rot}(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{8}(PG)}{\rm rot}(f(\gamma))\\ \equiv &\sum_{\gamma\in\Gamma_{9}(PG)}{\rm rot}(f(\gamma))\equiv 1\pmod 2. \end{align*} \end{Corollary} \begin{Corollary}\label{corollary-Heawood} Let $f:HG\to {\mathbb R}^{2}$ be a plane generic immersion. 
Then \[ \sum_{\gamma\in\Gamma_{6}(HG)}{\rm rot}(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{10}(HG)}{\rm rot}(f(\gamma))\equiv 1\pmod 2 \] and \[ \sum_{\gamma\in\Gamma_{8}(HG)}{\rm rot}(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{12}(HG)}{\rm rot}(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{14}(HG)}{\rm rot}(f(\gamma))\equiv 0\pmod 2. \] \end{Corollary} Furthermore we have the following theorem. \begin{Theorem}\label{theorem-zero-rotation} Let $G$ be a finite graph. Then the following conditions are equivalent. \begin{enumerate} \item[{\rm (1)}] There is a plane generic immersion $f:G\to{\mathbb R}^{2}$ such that ${\rm rot}(f(\gamma))=0$ for every cycle $\gamma$ of $G$. \item[{\rm (2)}] The graph $G$ does not have $K_{4}$ as a minor. \end{enumerate} \end{Theorem} Let $({\mathbb R}^{3},\xi_{\rm std})$ be the $3$-space with the standard contact structure. A {\it Legendrian knot} is a smooth knot in ${\mathbb R}^{3}$ that is tangent to the contact plane at each point. We consider Legendrian knots up to Legendrian isotopy. The Thurston-Bennequin number ${\rm tb}(K)$ and the rotation number ${\rm rot}(K)$ of a Legendrian knot $K$ are Legendrian isotopy invariants. A Legendrian knot $K$ is said to be a {\it trivial unknot} \cite{O-P} if it is a trivial knot as a classical knot and ${\rm tb}(K)=-1$. It is shown in \cite{O-P} that a finite graph $G$ has a Legendrian embedding with all cycles trivial unknots if and only if $G$ does not have $K_{4}$ as a minor. See also \cite{Tanaka} for related results. It is known that a trivial unknot has rotation number $0$. The rotation number of a Legendrian knot $K$ coincides with the rotation number of the plane immersed circle that is the image of $K$ under Lagrangian projection. Therefore the only if part is an immediate consequence of Corollary \ref{corollary-K4}. Namely we have the following corollary. \begin{Corollary}[\cite{O-P}]\label{corollary-K4-2} Let $f:K_{4}\to {\mathbb R}^{3}$ be a Legendrian embedding. 
Then there is a cycle $\gamma$ of $K_{4}$ such that $f(\gamma)$ is not a trivial unknot. \end{Corollary} Let $G$ be a finite graph and $f:G\to{\mathbb R}^{3}$ a Legendrian embedding. Then $f$ is said to be a {\it minimal embedding} \cite{O-P2} if $f(\gamma)$ is a trivial unknot for every $g(G)$-cycle $\gamma$ of $G$. Then we immediately have the following corollaries. \begin{Corollary}\label{corollary-Petersen-Legendrian} The Petersen graph $PG$ has no minimal Legendrian embedding. \end{Corollary} \begin{Corollary}\label{corollary-Heawood-Legendrian} The Heawood graph $HG$ has no minimal Legendrian embedding. \end{Corollary} Let $G$ be a finite graph and $f:G\to{\mathbb R}^{3}$ a Legendrian embedding. The {\it total Thurston-Bennequin number} of $f$ is defined in \cite{O-P2} to be \[ TB(f)=\sum_{\gamma\in\Gamma(G)}tb(f(\gamma)). \] The following is also defined for a natural number $k$ in \cite{O-P2}. \[ TB_{k}(f)=\sum_{\gamma\in\Gamma_{k}(G)}tb(f(\gamma)). \] It is shown in \cite{O-P2} that $TB(f)$ is determined by $TB_{3}(f)$ when $G$ is a complete graph and by $TB_{4}(f)$ when $G$ is a complete bipartite graph. In this paper we extend these results to a Petersen graph and a Heawood graph. \begin{Theorem}\label{theorem-Petersen-tb} Let $f:PG\to{\mathbb R}^{3}$ be a Legendrian embedding. Then \[ TB_{6}(f)=TB_{5}(f), \] \[ TB_{8}(f)=2TB_{5}(f), \] and \[ TB_{9}(f)=3TB_{5}(f). \] Therefore \[ TB(f)=7TB_{5}(f). \] \end{Theorem} \begin{Theorem}\label{theorem-Heawood-tb} Let $f:HG\to{\mathbb R}^{3}$ be a Legendrian embedding. Then \[ TB_{8}(f)=TB_{6}(f), \] \[ TB_{10}(f)=5TB_{6}(f), \] \[ TB_{12}(f)=4TB_{6}(f), \] and \[ TB_{14}(f)=2TB_{6}(f). \] Therefore \[ TB(f)=13TB_{6}(f). \] \end{Theorem} \section{Crossing numbers of cycles in a plane immersed graph}\label{crossing-numbers} \begin{Proposition}\label{proposition-crossing1} Let $G$ be a finite graph and $\Lambda$ a set of subgraphs of $G$. Let $m$ be a positive integer. 
Suppose that the following {\rm (1)} and {\rm (2)} hold. \begin{enumerate} \item[{\rm (1)}] For every edge $e$ of $G$, the number of elements of $\Lambda$ containing $e$ is a multiple of $m$. \item[{\rm (2)}] For every pair of edges $d$ and $e$ of $G$, the number of elements of $\Lambda$ containing both $d$ and $e$ is a multiple of $m$. \end{enumerate} \noindent Then for any plane generic immersion $f$ of $G$ \[ \sum_{\lambda\in\Lambda}c(f(\lambda))\equiv 0\pmod m. \] \end{Proposition} \vskip 5mm \noindent{\bf Proof.} Let $x$ be a self crossing of $f(G)$. Then by the condition (1) $x$ is counted a multiple of $m$ times in the sum. Let $x$ be an adjacent crossing or a disjoint crossing of $f(G)$. Then by the condition (2) $x$ is counted a multiple of $m$ times in the sum. Therefore the sum is a multiple of $m$. $\Box$ \vskip 5mm \noindent{\bf Proof of Theorem \ref{theorem-K4}.} We note that $\Gamma(K_{4})=\Gamma_{3}(K_{4})\cup\Gamma_{4}(K_{4})$, $|\Gamma_{3}(K_{4})|=4$ and $|\Gamma_{4}(K_{4})|=3$. Every edge of $K_{4}$ is contained in $2$ $3$-cycles and $2$ $4$-cycles. The total $4$ is a multiple of $2$. Every pair of mutually adjacent edges of $K_{4}$ is contained in a $3$-cycle and a $4$-cycle. The total $2$ is a multiple of $2$. Every pair of disjoint edges of $K_{4}$ is contained in no $3$-cycles and $2$ $4$-cycles. The total $2$ is a multiple of $2$. Then by Proposition \ref{proposition-crossing1} we have the result. $\Box$ \vskip 5mm We note that the phenomenon described in Theorem \ref{theorem-K4} appears widely on graphs with certain symmetries. We show two of them below. The proofs are entirely analogous and we omit them. \begin{Theorem}\label{theorem-K5} Let $f:K_{5}\to {\mathbb R}^{2}$ be a plane generic immersion. Then \[ \sum_{\gamma\in\Gamma_{4}(K_{5})}c(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{5}(K_{5})}c(f(\gamma))\equiv 0\pmod 2. \] \end{Theorem} Let $m$ be a natural number. 
Let $T(m)$ be the graph with $3$ vertices and $3m$ edges such that each pair of vertices is joined by exactly $m$ multiple edges. \begin{Theorem}\label{theorem-multiple-triangle} Let $m$ be a natural number. Let $f:T(m)\to{\mathbb R}^{2}$ be a plane generic immersion. Then \[ \sum_{\gamma\in\Gamma_{3}(T(m))}c(f(\gamma))\equiv 0\pmod m. \] \end{Theorem} \vskip 5mm It is known that any two plane generic immersions of a finite graph $G$ are transformed into each other, up to self-homeomorphisms of $G$ and ${\mathbb R}^2$, by a finite sequence of the local moves illustrated in Figure \ref{Reidemeister}. These moves are called {\it Reidemeister moves}. \begin{figure}[htbp] \begin{center} \scalebox{0.7}{\includegraphics*{Reidemeister.eps}} \end{center} \caption{Reidemeister moves} \label{Reidemeister} \end{figure} \begin{Proposition}\label{proposition-crossing2} Let $G$ be a finite graph and $\Lambda$ a set of subgraphs of $G$. Let $m$ be a positive integer. The following conditions {\rm (A)} and {\rm (B)} are mutually equivalent. \begin{enumerate} \item[{\rm (A)}] For any plane generic immersions $f$ and $g$ of $G$, \[ \sum_{\lambda\in\Lambda}c(f(\lambda))\equiv \sum_{\lambda\in\Lambda}c(g(\lambda))\pmod m. \] \item[{\rm (B)}] All of the following conditions {\rm (1)}, {\rm (2)}, {\rm (3)} and {\rm (4)} hold. \begin{enumerate} \item[{\rm (1)}] For every edge $e$ of $G$, the number of elements of $\Lambda$ containing $e$ is a multiple of $m$. \item[{\rm (2)}] For every pair of edges $d$ and $e$ of $G$, twice the number of elements of $\Lambda$ containing both $d$ and $e$ is a multiple of $m$. \item[{\rm (3)}] For every pair of a vertex $v$ and an edge $e$ of $G$, \[ \sum_{i=1}^{k}|\{\lambda\in\Lambda\mid\lambda\supset e\cup e_{i}\}|\equiv 0\pmod m, \] where $e_{1},\cdots,e_{k}$ are the edges of $G$ incident to $v$. \item[{\rm (4)}] For every pair of mutually adjacent edges $d$ and $e$ of $G$, the number of elements of $\Lambda$ containing both $d$ and $e$ is a multiple of $m$. 
\end{enumerate} \end{enumerate} \end{Proposition} \begin{Remark}\label{remark-crossing} {\rm If $\Lambda\subset\Gamma(G)$, then \[ \sum_{i=1}^{k}|\{\lambda\in\Lambda\mid\lambda\supset e\cup e_{i}\}| =2\sum_{1\leq i<j\leq k}|\{\lambda\in\Lambda\mid\lambda\supset e\cup e_{i}\cup e_{j}\}| \equiv 0\pmod 2. \] Therefore the condition (3) of (B) for $m=2$ automatically holds. } \end{Remark} \vskip 5mm \noindent{\bf Proof of Proposition \ref{proposition-crossing2}.} We set $\displaystyle{\tau_{\Lambda}(h)=\sum_{\lambda\in\Lambda}c(h(\lambda))}$ for a plane generic immersion $h$ of $G$. We note that a Reidemeister move ${\rm R4}$ in Figure \ref{Reidemeister} is realized by a Reidemeister move ${\rm R4'}$ in Figure \ref{Reidemeister2} and Reidemeister moves ${\rm R2}$ in Figure \ref{Reidemeister}. Therefore $f$ and $g$ are transformed into each other by ${\rm R1}$, ${\rm R2}$, ${\rm R3}$, ${\rm R4'}$ and ${\rm R5}$. A Reidemeister move ${\rm R1}$ creates or annihilates a self crossing. Then we see that $\tau_{\Lambda}\pmod m$ is invariant under ${\rm R1}$ if and only if the condition (1) holds. A Reidemeister move ${\rm R2}$ creates or annihilates two crossings. If they are self crossings, then $\tau_{\Lambda}\pmod m$ is invariant if the condition (1) holds. If they are both adjacent crossings or both disjoint crossings, then $\tau_{\Lambda}\pmod m$ is invariant if the condition (2) holds. If the condition (2) does not hold, then we can find $f$ and $g$ that differ by a Reidemeister move ${\rm R2}$ such that $\displaystyle{ \sum_{\lambda\in\Lambda}c(f(\lambda))-\sum_{\lambda\in\Lambda}c(g(\lambda)) }$ is not a multiple of $m$. A Reidemeister move ${\rm R3}$ does not change the number of self crossings, adjacent crossings and disjoint crossings for every edge, pair of mutually adjacent edges and pair of disjoint edges respectively. Therefore $\tau_{\Lambda}$ is always invariant under ${\rm R3}$. 
A Reidemeister move ${\rm R4'}$ creates or annihilates crossings between an edge $e$ of $G$ and the edges of $G$ incident to a vertex $v$. Then we see that $\tau_{\Lambda}\pmod m$ is invariant under ${\rm R4'}$ if and only if the condition (3) holds. A Reidemeister move ${\rm R5}$ creates or annihilates an adjacent crossing. Then we see that $\tau_{\Lambda}\pmod m$ is invariant under ${\rm R5}$ if and only if the condition (4) holds. This completes the proof. $\Box$ \begin{figure}[htbp] \begin{center} \scalebox{0.4}{\includegraphics*{Reidemeister2.eps}} \end{center} \caption{${\rm R4'}$} \label{Reidemeister2} \end{figure} \vskip 5mm \noindent{\bf Proof of Theorem \ref{theorem-K33}.} Let $g:K_{3,3}\to{\mathbb R}^{2}$ be a plane generic immersion illustrated in Figure \ref{K33}. We see that exactly one $4$-cycle of $K_{3,3}$ has a crossing under $g$ and exactly three $6$-cycles of $K_{3,3}$ have a crossing under $g$. Therefore \[ \sum_{\gamma\in\Gamma_{4}(K_{3,3})}c(g(\gamma))\equiv\sum_{\gamma\in\Gamma_{6}(K_{3,3})}c(g(\gamma))\equiv 1\pmod 2. \] Next we will check that the conditions (1), (2), (3) and (4) of Proposition \ref{proposition-crossing2} for $G=K_{3,3}$, $\Lambda=\Gamma_{4}(K_{3,3})$ or $\Lambda=\Gamma_{6}(K_{3,3})$, and $m=2$ hold. Each edge of $K_{3,3}$ is contained in exactly $4$ $4$-cycles and $4$ $6$-cycles of $K_{3,3}$. Therefore (1) holds. Since $m=2$, (2) automatically holds. Since $\Lambda\subset\Gamma(K_{3,3})$ and $m=2$, (3) holds by Remark \ref{remark-crossing}. Let $d$ and $e$ be mutually adjacent edges of $K_{3,3}$. Then we see that there exist exactly $2$ $4$-cycles and exactly $2$ $6$-cycles containing both of them. Therefore (4) holds. Then by Proposition \ref{proposition-crossing2} we have \[ \sum_{\gamma\in\Gamma_{4}(K_{3,3})}c(f(\gamma))\equiv \sum_{\gamma\in\Gamma_{4}(K_{3,3})}c(g(\gamma))\pmod 2 \] and \[ \sum_{\gamma\in\Gamma_{6}(K_{3,3})}c(f(\gamma))\equiv \sum_{\gamma\in\Gamma_{6}(K_{3,3})}c(g(\gamma))\pmod 2. 
\] Therefore we have \[ \sum_{\gamma\in\Gamma_{4}(K_{3,3})}c(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{6}(K_{3,3})}c(f(\gamma))\equiv 1\pmod 2. \] $\Box$ \begin{figure}[htbp] \begin{center} \scalebox{0.8}{\includegraphics*{K33.eps}} \end{center} \caption{A plane generic immersion of $K_{3,3}$} \label{K33} \end{figure} \vskip 5mm \noindent{\bf Proof of Theorem \ref{theorem-Petersen0}.} We set \[ \kappa(f)=\sum_{(d,e)\in D_{1}(PG)}|f(d)\cap f(e)| \] for a plane generic immersion $f:PG\to{\mathbb R}^{2}$ of $PG$. Let $g:PG\to{\mathbb R}^{2}$ be a plane generic immersion illustrated in Figure \ref{PG}. We note that each crossing of $g(PG)$ is a disjoint crossing between two edges of $PG$ with distance $1$. Therefore we have \[ \kappa(g)=\sum_{(d,e)\in D_{1}(PG)}|g(d)\cap g(e)|=5\equiv 1\pmod 2. \] Next we will show that $\kappa(f) \pmod 2$ is invariant under Reidemeister moves. Since $\kappa$ does not count self crossings, it is invariant under ${\rm R1}$. The change of $\kappa$ under ${\rm R2}$ is $\pm2$ or $0$ and therefore $\kappa(f) \pmod 2$ is invariant under ${\rm R2}$. The third Reidemeister move ${\rm R3}$ does not change $\kappa$ itself. Let $e$ be an edge of $PG$ that is involved in a fourth Reidemeister move ${\rm R4}$. As we saw before, the edges of $PG$ at distance $1$ from $e$ form an $8$-cycle of $PG$. Therefore we see that the change of $\kappa$ under ${\rm R4}$ is $\pm2$ or $0$ and therefore $\kappa(f) \pmod 2$ is invariant under ${\rm R4}$. Since $\kappa$ does not count adjacent crossings, it is invariant under ${\rm R5}$. Since $f$ and $g$ are transformed into each other by Reidemeister moves, we have \[ \kappa(f)\equiv\kappa(g)\equiv 1\pmod 2. 
\] $\Box$ \vskip 5mm \noindent{\bf Proof of Theorem \ref{theorem-Petersen1}.} The proof that ${\mathcal L}(\varphi)$ is a well-defined ambient isotopy invariant taking odd values is entirely analogous to that of Simon invariants \cite{Taniyama1} and reduced Wu and generalized Simon invariants \cite{F-F-N}. The modular equality immediately follows from the definitions. $\Box$ \vskip 5mm We prepare the following lemma for the proof of Theorem \ref{theorem-Petersen}. \begin{Lemma}\label{lemma-PG}\ \begin{enumerate} \item[{\rm (1)}] For any vertices $x$ and $y$ of $PG$, there exists an isomorphism of $PG$ that maps $x$ to $y$. \item[{\rm (2)}] For any edges $x$ and $y$ of $PG$, there exists an isomorphism of $PG$ that maps $x$ to $y$. \item[{\rm (3)}] For any pairs of mutually adjacent edges $x,y$ and $z,w$ of $PG$, there exists an isomorphism of $PG$ that maps $x\cup y$ to $z\cup w$. \item[{\rm (4)}] Let $x,y,z$ and $w$ be edges of $PG$ with $d(x,y)=d(z,w)=1$. Then there exists an isomorphism of $PG$ that maps $x\cup y$ to $z\cup w$. \item[{\rm (5)}] Let $x,y,z$ and $w$ be edges of $PG$ with $d(x,y)=d(z,w)=2$. Then there exists an isomorphism of $PG$ that maps $x\cup y$ to $z\cup w$. \end{enumerate} \end{Lemma} \vskip 5mm \noindent{\bf Proof.} Let $p:PG\to PG$ be an isomorphism defined by $p(u_{i})=u_{i+1}$ and $p(v_{i})=v_{i+1}$ for $i=1,2,3,4,5$. Let $q:PG\to PG$ be an isomorphism defined by $q(u_{i})=u_{5-i}$ and $q(v_{i})=v_{5-i}$ for $i=1,2,3,4,5$. Let $r:PG\to PG$ be an isomorphism defined by $r(u_{i})=v_{2i}$ and $r(v_{i})=u_{2i}$ for $i=1,2,3,4,5$. Let $s:PG\to PG$ be an isomorphism defined by $s(u_{1})=u_{1}$, $s(u_{2})=u_{2}$, $s(u_{5})=u_{5}$, $s(v_{1})=v_{1}$, $s(u_{3})=v_{2}$, $s(u_{4})=v_{5}$, $s(v_{2})=u_{3}$, $s(v_{5})=u_{4}$, $s(v_{3})=v_{4}$ and $s(v_{4})=v_{3}$. Then we see that all isomorphisms requested in (1) are generated by $p$ and $r$. 
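That the four maps $p$, $q$, $r$ and $s$ just defined are in fact isomorphisms of $PG$ amounts to checking that each one maps the $15$ edges of $PG$ onto themselves; a Python sketch of this check (the encoding of the standard labelling, with outer cycle $u_{1}\cdots u_{5}$, spokes $u_{i}v_{i}$ and inner edges $v_{i}v_{i+2}$, is our own):

```python
def u(i): return ('u', (i - 1) % 5)   # u_1,...,u_5, subscripts mod 5
def v(i): return ('v', (i - 1) % 5)   # v_1,...,v_5

# Petersen graph: outer 5-cycle, spokes, inner pentagram (assumed labelling).
E = {frozenset(e) for i in range(1, 6)
     for e in [(u(i), u(i + 1)), (u(i), v(i)), (v(i), v(i + 2))]}

def is_automorphism(sigma):
    """A vertex bijection is an automorphism iff it maps the edge set onto itself."""
    return {frozenset((sigma[a], sigma[b])) for a, b in map(tuple, E)} == E

p = {**{u(i): u(i + 1) for i in range(1, 6)}, **{v(i): v(i + 1) for i in range(1, 6)}}
q = {**{u(i): u(5 - i) for i in range(1, 6)}, **{v(i): v(5 - i) for i in range(1, 6)}}
r = {**{u(i): v(2 * i) for i in range(1, 6)}, **{v(i): u(2 * i) for i in range(1, 6)}}
s = {u(1): u(1), u(2): u(2), u(5): u(5), v(1): v(1),
     u(3): v(2), u(4): v(5), v(2): u(3), v(5): u(4),
     v(3): v(4), v(4): v(3)}

assert all(is_automorphism(t) for t in (p, q, r, s))
print("p, q, r, s preserve E(PG)")
```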
We note that $q$ exchanges an edge $u_{5}u_{1}$ for an edge $u_{5}u_{4}$ and $s$ exchanges an edge $u_{2}v_{2}$ for an edge $u_{2}u_{3}$. Then by combining $p$ and $r$ we have all isomorphisms requested in (2) and (3). Suppose $d(x,y)=d(z,w)=1$. Then there is an edge $e$ of $PG$ adjacent to both $x$ and $y$. By (2) we map $e$ to an edge $u_{1}u_{2}$. We note that $q$ maps an edge $u_{2}u_{3}$ to $u_{3}u_{2}$. Then by combining other isomorphisms we have an isomorphism that maps $u_{1}u_{2}$ to $u_{2}u_{1}$. Then combining $s$ and these isomorphisms if necessary, $x\cup y$ is mapped to $u_{2}u_{3}\cup u_{5}u_{1}$. Similarly $z\cup w$ is mapped to $u_{2}u_{3}\cup u_{5}u_{1}$. This implies that $x\cup y$ is mapped to $z\cup w$ and (4) holds. Suppose $d(x,y)=d(z,w)=2$. Then there are mutually adjacent edges $d$ and $e$ of $PG$ such that $d$ is adjacent to $x$ and $e$ is adjacent to $y$. By (3) we map $d\cup e$ to $u_{1}u_{2}\cup u_{5}u_{1}$. Then we see that $x\cup y$ is mapped to $u_{2}v_{2}\cup u_{5}u_{4}$ or $u_{2}u_{3}\cup u_{5}v_{5}$. Similarly $z\cup w$ is mapped to $u_{2}v_{2}\cup u_{5}u_{4}$ or $u_{2}u_{3}\cup u_{5}v_{5}$. Since $s$ exchanges $u_{2}v_{2}\cup u_{5}u_{4}$ for $u_{2}u_{3}\cup u_{5}v_{5}$, we see that (5) holds. $\Box$ \vskip 5mm Let $G$ be a finite graph and $k$ a natural number. Let $e$ be an edge of $G$. We set $\alpha_{k}(e,G)=|\{\gamma\in\Gamma_{k}(G)\mid\gamma\supset e\}|$. Let $d$ be another edge of $G$. We set $\alpha_{k}(d\cup e,G)=|\{\gamma\in\Gamma_{k}(G)\mid\gamma\supset d\cup e\}|$. Suppose that $d$ and $e$ are mutually disjoint and oriented. Let $\gamma$ be a cycle of $G$ containing $d\cup e$. The cycle $\gamma$ is said to be {\it coherent} with respect to $d$ and $e$ if the orientations of $d$ and $e$ are coherent in $\gamma$. Otherwise $\gamma$ is said to be {\it incoherent} with respect to $d$ and $e$. Let $C_{k}(d\cup e,G)$ be the set of all cycles of $\Gamma_{k}(G)$ containing $d\cup e$ coherent with respect to $d$ and $e$. 
Let $I_{k}(d\cup e,G)$ be the set of all cycles of $\Gamma_{k}(G)$ containing $d\cup e$ incoherent with respect to $d$ and $e$. Then $\alpha_{k}(d\cup e,G)=|C_{k}(d\cup e,G)|+|I_{k}(d\cup e,G)|$. We set $\beta_{k}(d\cup e,G)=|C_{k}(d\cup e,G)|-|I_{k}(d\cup e,G)|$. \begin{Lemma}\label{lemma-PG2}\ \begin{enumerate} \item[{\rm (1)}] Let $e$ be an edge of $PG$. Then $\alpha_{5}(e,PG)=4$, $\alpha_{6}(e,PG)=4$, $\alpha_{8}(e,PG)=8$ and $\alpha_{9}(e,PG)=12$. \item[{\rm (2)}] Let $d$ and $e$ be mutually adjacent edges of $PG$. Then $\alpha_{5}(d\cup e,PG)=2$, $\alpha_{6}(d\cup e,PG)=2$, $\alpha_{8}(d\cup e,PG)=4$ and $\alpha_{9}(d\cup e,PG)=6$. \item[{\rm (3)}] Let $d$ and $e$ be mutually disjoint oriented edges of $PG$ with $d(d,e)=1$. Then $\alpha_{5}(d\cup e,PG)=1$, $\alpha_{6}(d\cup e,PG)=1$, $\alpha_{8}(d\cup e,PG)=4$ and $\alpha_{9}(d\cup e,PG)=7$. Suppose that the $5$-cycle of $PG$ containing $d\cup e$ is coherent with respect to $d$ and $e$. Then $\beta_{5}(d\cup e,PG)=1$, $\beta_{6}(d\cup e,PG)=1$, $\beta_{8}(d\cup e,PG)=2$ and $\beta_{9}(d\cup e,PG)=3$. \item[{\rm (4)}] Let $d$ and $e$ be mutually disjoint oriented edges of $PG$ with $d(d,e)=2$. Then $\alpha_{5}(d\cup e,PG)=0$, $\alpha_{6}(d\cup e,PG)=2$, $\alpha_{8}(d\cup e,PG)=4$, $\alpha_{9}(d\cup e,PG)=8$ and $\beta_{5}(d\cup e,PG)=\beta_{6}(d\cup e,PG)=\beta_{8}(d\cup e,PG)=\beta_{9}(d\cup e,PG)=0$. \end{enumerate} \end{Lemma} \vskip 5mm \noindent{\bf Proof.} There are $6$ pairs of mutually disjoint $5$-cycles of $PG$. They are \\ $(u_{1}u_{2}u_{3}u_{4}u_{5}u_{1}, v_{1}v_{3}v_{5}v_{2}v_{4}v_{1})$ and $(u_{i}u_{i+1}v_{i+1}v_{i+4}u_{i+4}u_{i}, v_{i}v_{i+2}u_{i+2}u_{i+3}v_{i+3}v_{i})$ \\ for $i=1,2,3,4,5$. Then $\Gamma_{5}(PG)$ consists of these $12$ $5$-cycles. The $10$ $6$-cycles $u_{i}u_{i+1}u_{i+2}u_{i+3}v_{i+3}v_{i}u_{i}$ and $u_{i}u_{i+1}v_{i+1}v_{i+4}v_{i+2}v_{i}u_{i}$\\ for $i=1,2,3,4,5$ are the elements of $\Gamma_{6}(PG)$. 
The $15$ $8$-cycles $u_{i}u_{i+1}u_{i+2}v_{i+2}v_{i}v_{i+3}u_{i+3}u_{i+4}u_{i}$, \\ $u_{i}u_{i+1}v_{i+1}v_{i+3}v_{i}v_{i+2}v_{i+4}u_{i+4}u_{i}$ and $u_{i}u_{i+1}v_{i+1}v_{i+3}u_{i+3}u_{i+2}v_{i+2}v_{i}u_{i}$\\ for $i=1,2,3,4,5$ are the elements of $\Gamma_{8}(PG)$. The $20$ $9$-cycles \\ $u_{i}u_{i+1}v_{i+1}v_{i+3}u_{i+3}u_{i+2}v_{i+2}v_{i+4}u_{i+4}u_{i}$, $u_{i}u_{i+1}v_{i+1}v_{i+4}v_{i+2}u_{i+2}u_{i+3}v_{i+3}v_{i}u_{i}$, \\ $u_{i}u_{i+1}u_{i+2}v_{i+2}v_{i+4}v_{i+1}v_{i+3}u_{i+3}u_{i+4}u_{i}$ and $u_{i}u_{i+1}u_{i+2}u_{i+3}v_{i+3}v_{i+1}v_{i+4}v_{i+2}v_{i}u_{i}$ \\ for $i=1,2,3,4,5$ are the elements of $\Gamma_{9}(PG)$. Then by Lemma \ref{lemma-PG} (2) we see that counting cycles for a particular edge $e$ of $PG$ will show (1). Similarly we have (2), (3) and (4) by Lemma \ref{lemma-PG} (3), (4) and (5) respectively. $\Box$ \vskip 5mm Summarizing the statements in Lemma \ref{lemma-PG2} we have Table \ref{table-Petersen-cycles}. \begin{table}[htb] \centering \caption{number of $k$-cycles of $PG$} \scalebox{0.75} { \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{$k$} & \multirow{2}{*}{$|\Gamma_{k}(PG)|$} & \multirow{2}{*}{$|\Gamma_{k}(PG)|\cdot k$} & \multirow{2}{*}{$\alpha_{k}(e,PG)$} & \multicolumn{3}{c}{$\alpha_{k}(d\cup e,PG)$} & \multicolumn{2}{|c|}{$\beta_{k}(d\cup e,PG)$} \\ \cline{5-9} & & & & $d(d,e)=0$ & $d(d,e)=1$ & $d(d,e)=2$ & $d(d,e)=1$ & $d(d,e)=2$ \\ \hline $5$ & $12$ & $60$ & $4$ & $2$ & $1$ & $0$ & $1$ & $0$ \\ \hline $6$ & $10$ & $60$ & $4$ & $2$ & $1$ & $2$ & $1$ & $0$ \\ \hline $8$ & $15$ & $120$ & $8$ & $4$ & $4$ & $4$ & $2$ & $0$ \\ \hline $9$ & $20$ & $180$ & $12$ & $6$ & $7$ & $8$ & $3$ & $0$ \\ \hline \end{tabular} } \label{table-Petersen-cycles} \end{table} \vskip 5mm \noindent{\bf Proof of Theorem \ref{theorem-Petersen}.} Let $g:PG\to{\mathbb R}^{2}$ be a plane generic immersion illustrated in Figure \ref{PG}. 
Since each crossing of $g(PG)$ is a crossing between two edges of $PG$ with distance $1$ and $\alpha_{5}(d\cup e,PG)=1$ for edges $d$ and $e$ of $PG$ with $d(d,e)=1$, we have \[ \sum_{\gamma\in\Gamma_{5}(PG)}c(g(\gamma))=5\equiv 1\pmod 2. \] Since $\alpha_{6}(d\cup e,PG)=1$ and $\alpha_{9}(d\cup e,PG)=7$ for edges $d$ and $e$ of $PG$ with $d(d,e)=1$, we have \[ \sum_{\gamma\in\Gamma_{6}(PG)}c(g(\gamma))=5\equiv 1\pmod 2 \] and \[ \sum_{\gamma\in\Gamma_{9}(PG)}c(g(\gamma))=35\equiv 1\pmod 2. \] Next we will check that the conditions (1), (2), (3) and (4) of Proposition \ref{proposition-crossing2} for $G=PG$, $\Lambda=\Gamma_{5}(PG)$, $\Gamma_{6}(PG)$ or $\Gamma_{9}(PG)$ and $m=2$ hold. Since $\alpha_{5}(e,PG)=\alpha_{6}(e,PG)=4$ and $\alpha_{9}(e,PG)=12$ for every edge $e$ of $PG$, we see that (1) holds. For $m=2$, (2) automatically holds. Since $\Lambda\subset\Gamma(PG)$ (3) holds for $m=2$ by Remark \ref{remark-crossing}. Let $d$ and $e$ be mutually adjacent edges of $PG$. Then $\alpha_{5}(d\cup e,PG)=\alpha_{6}(d\cup e,PG)=2$ and $\alpha_{9}(d\cup e,PG)=6$. Therefore (4) holds. Then by Proposition \ref{proposition-crossing2} we have \[ \sum_{\gamma\in\Gamma_{5}(PG)}c(f(\gamma))\equiv \sum_{\gamma\in\Gamma_{5}(PG)}c(g(\gamma))\pmod 2, \] \[ \sum_{\gamma\in\Gamma_{6}(PG)}c(f(\gamma))\equiv \sum_{\gamma\in\Gamma_{6}(PG)}c(g(\gamma))\pmod 2 \] and \[ \sum_{\gamma\in\Gamma_{9}(PG)}c(f(\gamma))\equiv \sum_{\gamma\in\Gamma_{9}(PG)}c(g(\gamma))\pmod 2. \] Therefore we have \[ \sum_{\gamma\in\Gamma_{5}(PG)}c(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{6}(PG)}c(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{9}(PG)}c(f(\gamma))\equiv 1\pmod 2. \] We note that $\alpha_{8}(e,PG)=8$ for every edge $e$ of $PG$ and $\alpha_{8}(d\cup e,PG)=4$ for any distinct edges $d$ and $e$. Then by Proposition \ref{proposition-crossing1} we have \[ \sum_{\gamma\in\Gamma_{8}(PG)}c(f(\gamma))\equiv 0\pmod 4. 
\] $\Box$ \vskip 5mm \begin{Example}\label{example-PG} {\rm Another plane generic immersion $f:PG\to {\mathbb R}^{2}$ of $PG$ is illustrated in Figure \ref{PG2}. The crossing number $c(f(PG))=2$ is known to be minimal among all plane generic immersions of $PG$. Note that the upper crossing of $f(PG)$ in Figure \ref{PG2} is a crossing between distance $2$ edges of $PG$ and the lower crossing is a crossing between distance $1$ edges of $PG$. Then we see that $f$ satisfies all modular equalities in Theorem \ref{theorem-Petersen0} and Theorem \ref{theorem-Petersen}. } \end{Example} \begin{figure}[htbp] \begin{center} \scalebox{0.6}{\includegraphics*{PG2.eps}} \end{center} \caption{Another plane generic immersion of $PG$} \label{PG2} \end{figure} \vskip 5mm \noindent{\bf Proof of Theorem \ref{theorem-Heawood0}.} We set \[ \kappa(f)=\sum_{(d,e)\in D_{2}(HG)}|f(d)\cap f(e)| \] for a plane generic immersion $f:HG\to{\mathbb R}^{2}$ of $HG$. Let $g:HG\to{\mathbb R}^{2}$ be a plane generic immersion illustrated in Figure \ref{HG}. We note that $g(HG)$ has $7$ crossings between distance $1$ edges and $7$ crossings between distance $2$ edges. Therefore we have \[ \kappa(g)=\sum_{(d,e)\in D_{2}(HG)}|g(d)\cap g(e)|=7\equiv 1\pmod 2. \] Next we will show that $\kappa(f) \pmod 2$ is invariant under Reidemeister moves. Since $\kappa$ does not count self crossings, it is invariant under ${\rm R1}$. The change of $\kappa$ under ${\rm R2}$ is $\pm2$ or $0$ and therefore $\kappa(f) \pmod 2$ is invariant under ${\rm R2}$. The third Reidemeister move ${\rm R3}$ does not change $\kappa$ itself. Let $e$ be an edge of $HG$ that is involved in a fourth Reidemeister move ${\rm R4}$. As we saw before, the edges of $HG$ at distance $2$ from $e$ form an $8$-cycle of $HG$. Therefore we see that the change of $\kappa$ under ${\rm R4}$ is $\pm2$ or $0$ and therefore $\kappa(f) \pmod 2$ is invariant under ${\rm R4}$. Since $\kappa$ does not count adjacent crossings, it is invariant under ${\rm R5}$. 
Since $f$ and $g$ are transformed into each other by Reidemeister moves, we have \[ \kappa(f)\equiv\kappa(g)\equiv 1\pmod 2. \] $\Box$ \vskip 5mm \noindent{\bf Proof of Theorem \ref{theorem-Heawood1}.} We note that \[ \ell_{D}(d,e)=c_{D}^{+}(d,e)-c_{D}^{-}(d,e)\equiv c_{D}^{+}(d,e)+c_{D}^{-}(d,e)\equiv|f(d)\cap f(e)|\pmod 2. \] We also note that $\varepsilon(d,e)$ is an even number if $d(d,e)=1$ and it is an odd number if $d(d,e)=2$. Therefore we have the modular equality. $\Box$ \begin{Lemma}\label{lemma-HG}\ \begin{enumerate} \item[{\rm (1)}] For any vertices $x$ and $y$ of $HG$, there exists an isomorphism of $HG$ that maps $x$ to $y$. \item[{\rm (2)}] For any edges $x$ and $y$ of $HG$, there exists an isomorphism of $HG$ that maps $x$ to $y$. \item[{\rm (3)}] For any pairs of mutually adjacent edges $x,y$ and $z,w$ of $HG$, there exists an isomorphism of $HG$ that maps $x\cup y$ to $z\cup w$. \item[{\rm (4)}] Let $x,y,z$ and $w$ be edges of $HG$ with $d(x,y)=d(z,w)=1$. Then there exists an isomorphism of $HG$ that maps $x\cup y$ to $z\cup w$. \item[{\rm (5)}] Let $x,y,z$ and $w$ be edges of $HG$ with $d(x,y)=d(z,w)=2$. Then there exists an isomorphism of $HG$ that maps $x\cup y$ to $z\cup w$. \end{enumerate} \end{Lemma} \vskip 5mm \noindent{\bf Proof.} A Heawood graph has the following symmetries. It is $7$-periodic with respect to the isomorphism $\alpha:HG\to HG$ defined by $\alpha(u_{i})=u_{i+1}$ and $\alpha(v_{i})=v_{i+1}$. It is $2$-periodic with respect to the isomorphism $\beta:HG\to HG$ defined by $\beta(u_{i})=v_{8-i}$ and $\beta(v_{i})=u_{8-i}$. 
It is $3$-periodic with $2$ fixed vertices with respect to the isomorphism $\gamma:HG\to HG$ defined by $\gamma(u_{1})=u_{1}$, $\gamma(v_{6})=v_{6}$, $\gamma(v_{1})=v_{7}$, $\gamma(v_{7})=v_{3}$, $\gamma(v_{3})=v_{1}$, $\gamma(u_{2})=u_{5}$, $\gamma(u_{5})=u_{3}$, $\gamma(u_{3})=u_{2}$, $\gamma(v_{2})=v_{4}$, $\gamma(v_{4})=v_{5}$, $\gamma(v_{5})=v_{2}$ and $\gamma(u_{4})=u_{6}$, $\gamma(u_{6})=u_{7}$, $\gamma(u_{7})=u_{4}$. Then $\alpha$ and $\beta$ generate all isomorphisms requested in (1). We note that $\gamma$ cyclically maps the edges incident to $u_{1}$. Then we see that these isomorphisms generate all isomorphisms requested in (2) and (3). Let $\varphi:HG\to HG$ be an isomorphism defined by $\varphi(u_{1})=u_{1}$, $\varphi(v_{7})=v_{7}$, $\varphi(u_{7})=u_{7}$, $\varphi(v_{4})=v_{4}$, $\varphi(u_{5})=u_{5}$, $\varphi(v_{5})=v_{5}$, $\varphi(v_{1})=v_{3}$, $\varphi(v_{3})=v_{1}$, $\varphi(u_{2})=u_{4}$, $\varphi(u_{4})=u_{2}$, $\varphi(v_{2})=v_{6}$, $\varphi(v_{6})=v_{2}$ and $\varphi(u_{3})=u_{6}$, $\varphi(u_{6})=u_{3}$. We note that $\varphi$ fixes the edges $u_{1}v_{7}$ and $u_{7}v_{7}$ and exchanges $u_{1}v_{1}$ for $u_{1}v_{3}$. Suppose $d(x,y)=d(z,w)=1$. Let $e$ be an edge of $HG$ adjacent to both of $x$ and $y$. By (2) we map $e$ to $u_{1}v_{7}$. Then by $\varphi$ and $\beta$ we map $x\cup y$ to $u_{1}v_{1}\cup u_{7}v_{7}$. Similarly $z\cup w$ is mapped to $u_{1}v_{1}\cup u_{7}v_{7}$. Thus we have an isomorphism requested in (4). Let $\psi:HG\to HG$ be an isomorphism defined by $\psi(v_{1})=v_{1}$, $\psi(u_{1})=u_{1}$, $\psi(v_{7})=v_{7}$, $\psi(u_{7})=u_{7}$, $\psi(v_{3})=v_{3}$, $\psi(u_{5})=u_{5}$, $\psi(u_{2})=u_{6}$, $\psi(u_{6})=u_{2}$, $\psi(v_{2})=v_{6}$, $\psi(v_{6})=v_{2}$, $\psi(u_{3})=u_{4}$, $\psi(u_{4})=u_{3}$, and $\psi(v_{4})=v_{5}$, $\psi(v_{5})=v_{4}$. We note that $\psi$ fixes the path $v_{1}u_{1}v_{7}u_{7}$ and exchanges $v_{1}u_{2}$ for $v_{1}u_{6}$. Suppose $d(x,y)=d(z,w)=2$. 
Let $d$ and $e$ be mutually adjacent edges of $HG$ such that $x\cup d\cup e\cup y$ is a path. We map $d\cup e$ to $u_{1}v_{7}\cup u_{7}v_{7}$ by (3). Then by $\varphi$ we map $x$ or $y$ to $u_{1}v_{1}$. Then by $\beta$ and $\psi$ the path $x\cup d\cup e\cup y$ is mapped to $u_{2}v_{1}u_{1}v_{7}u_{7}$. Thus $x\cup y$ is mapped to $u_{2}v_{1}\cup v_{7}u_{7}$. Similarly $z\cup w$ is mapped to $u_{2}v_{1}\cup v_{7}u_{7}$. Thus we have an isomorphism requested in (5). $\Box$ \begin{Lemma}\label{lemma-HG2}\ \begin{enumerate} \item[{\rm (1)}] Let $e$ be an edge of $HG$. Then $\alpha_{6}(e,HG)=8$, $\alpha_{8}(e,HG)=8$, $\alpha_{10}(e,HG)=40$, $\alpha_{12}(e,HG)=32$ and $\alpha_{14}(e,HG)=16$. \item[{\rm (2)}] Let $d$ and $e$ be mutually adjacent edges of $HG$. Then $\alpha_{6}(d\cup e,HG)=4$, $\alpha_{8}(d\cup e,HG)=4$, $\alpha_{10}(d\cup e,HG)=20$, $\alpha_{12}(d\cup e,HG)=16$ and $\alpha_{14}(d\cup e,HG)=8$. \item[{\rm (3)}] Let $d$ and $e$ be mutually disjoint oriented edges of $HG$ with $d(d,e)=1$. Then $\alpha_{6}(d\cup e,HG)=2$, $\alpha_{8}(d\cup e,HG)=2$, $\alpha_{10}(d\cup e,HG)=18$, $\alpha_{12}(d\cup e,HG)=16$ and $\alpha_{14}(d\cup e,HG)=12$. Suppose that the two $6$-cycles of $HG$ containing $d\cup e$ are coherent with respect to $d$ and $e$. Then $\beta_{6}(d\cup e,HG)=2$, $\beta_{8}(d\cup e,HG)=2$, $\beta_{10}(d\cup e,HG)=10$, $\beta_{12}(d\cup e,HG)=8$ and $\beta_{14}(d\cup e,HG)=4$. \item[{\rm (4)}] Let $d$ and $e$ be mutually disjoint oriented edges of $HG$ with $d(d,e)=2$. Then $\alpha_{6}(d\cup e,HG)=1$, $\alpha_{8}(d\cup e,HG)=3$, $\alpha_{10}(d\cup e,HG)=17$, $\alpha_{12}(d\cup e,HG)=20$ and $\alpha_{14}(d\cup e,HG)=10$. Suppose that the $6$-cycle of $HG$ containing $d\cup e$ is coherent with respect to $d$ and $e$. Then $\beta_{6}(d\cup e,HG)=1$, $\beta_{8}(d\cup e,HG)=1$, $\beta_{10}(d\cup e,HG)=5$, $\beta_{12}(d\cup e,HG)=4$ and $\beta_{14}(d\cup e,HG)=2$. 
\end{enumerate} \end{Lemma} \vskip 5mm \noindent{\bf Proof.} We note that $HG$ is a bipartite graph on $14$ vertices and $g(HG)=6$. Therefore \[ \Gamma(HG)=\bigcup_{k=3}^{7}\Gamma_{2k}(HG). \] There are $28$ $6$-cycles of $HG$. They are \\ $u_{i}v_{i}u_{i+1}v_{i+1}u_{i+2}v_{i+2}u_{i}$, $u_{i}v_{i}u_{i+1}v_{i+3}u_{i+3}v_{i+2}u_{i}$, $u_{i}v_{i+2}u_{i+3}v_{i+3}u_{i+4}v_{i+6}u_{i}$ \\ and $u_{i}v_{i+2}u_{i+2}v_{i+4}u_{i+4}v_{i+6}u_{i}$ for $i=1,2,3,4,5,6,7$. There are $21$ $8$-cycles of $HG$. They are \\ $u_{i+6}v_{i+6}u_{i}v_{i}u_{i+5}v_{i+4}u_{i+2}v_{i+1}u_{i+6}$, $v_{i+5}u_{i+6}v_{i+6}u_{i}v_{i}u_{i+1}v_{i+3}u_{i+3}v_{i+5}$ and\\ $u_{i+1}v_{i+1}u_{i+2}v_{i+4}u_{i+5}v_{i+5}u_{i+3}v_{i+3}u_{i+1}$ for $i=1,2,3,4,5,6,7$. There are $84$ $10$-cycles of $HG$. They are \\ $v_{i+4}u_{i+5}v_{i+5}u_{i+6}v_{i+6}u_{i}v_{i}u_{i+1}v_{i+1}u_{i+2}v_{i+4}$, \\ $u_{i+1}v_{i+1}u_{i+2}v_{i+2}u_{i+3}v_{i+5}u_{i+5}v_{i+4}u_{i+4}v_{i+3}u_{i+1}$, \\ $u_{i}v_{i+2}u_{i+2}v_{i+1}u_{i+6}v_{i+5}u_{i+5}v_{i+4}u_{i+4}v_{i+6}u_{i}$, \\ $u_{i}v_{i+2}u_{i+2}v_{i+1}u_{i+1}v_{i}u_{i+5}v_{i+4}u_{i+4}v_{i+6}u_{i}$, \\ $u_{i}v_{i}u_{i+1}v_{i+1}u_{i+6}v_{i+6}u_{i+4}v_{i+3}u_{i+3}v_{i+2}u_{i}$, \\ $u_{i}v_{i}u_{i+5}v_{i+5}u_{i+6}v_{i+6}u_{i+4}v_{i+3}u_{i+3}v_{i+2}u_{i}$, \\ $u_{i}v_{i}u_{i+1}v_{i+3}u_{i+4}v_{i+6}u_{i+6}v_{i+5}u_{i+3}v_{i+2}u_{i}$, \\ $u_{i+6}v_{i+6}u_{i}v_{i}u_{i+5}v_{i+5}u_{i+3}v_{i+3}u_{i+1}v_{i+1}u_{i+6}$, \\ $u_{i}v_{i+2}u_{i+3}v_{i+3}u_{i+1}v_{i+1}u_{i+2}v_{i+4}u_{i+4}v_{i+6}u_{i}$, \\ $u_{i}v_{i+2}u_{i+2}v_{i+4}u_{i+5}v_{i+5}u_{i+3}v_{i+3}u_{i+4}v_{i+6}u_{i}$, \\ $u_{i}v_{i}u_{i+5}v_{i+4}u_{i+4}v_{i+6}u_{i+6}v_{i+1}u_{i+2}v_{i+2}u_{i}$ and\\ $v_{i}u_{i+1}v_{i+3}u_{i+3}v_{i+5}u_{i+6}v_{i+1}u_{i+2}v_{i+4}u_{i+5}v_{i}$ for $i=1,2,3,4,5,6,7$. There are $56$ $12$-cycles of $HG$. 
They are \\ $u_{i+6}v_{i+6}u_{i}v_{i}u_{i+5}v_{i+4}u_{i+4}v_{i+3}u_{i+3}v_{i+2}u_{i+2}v_{i+1}u_{i+6}$, \\ $v_{i+5}u_{i+6}v_{i+6}u_{i}v_{i}u_{i+1}v_{i+3}u_{i+4}v_{i+4}u_{i+2}v_{i+2}u_{i+3}v_{i+5}$, \\ $u_{i}v_{i+2}u_{i+2}v_{i+1}u_{i+1}v_{i+3}u_{i+3}v_{i+5}u_{i+5}v_{i+4}u_{i+4}v_{i+6}u_{i}$, \\ $u_{i}v_{i}u_{i+1}v_{i+1}u_{i+2}v_{i+4}u_{i+4}v_{i+6}u_{i+6}v_{i+5}u_{i+3}v_{i+2}u_{i}$, \\ $u_{i}v_{i}u_{i+1}v_{i+3}u_{i+4}v_{i+6}u_{i+6}v_{i+5}u_{i+5}v_{i+4}u_{i+2}v_{i+2}u_{i}$, \\ $u_{i}v_{i}u_{i+1}v_{i+3}u_{i+3}v_{i+5}u_{i+6}v_{i+6}u_{i+4}v_{i+4}u_{i+2}v_{i+2}u_{i}$, \\ $u_{i}v_{i}u_{i+5}v_{i+4}u_{i+2}v_{i+1}u_{i+6}v_{i+6}u_{i+4}v_{i+3}u_{i+3}v_{i+2}u_{i}$ and\\ $u_{i}v_{i+2}u_{i+3}v_{i+5}u_{i+5}v_{i+4}u_{i+2}v_{i+1}u_{i+1}v_{i+3}u_{i+4}v_{i+6}u_{i}$ \\ for $i=1,2,3,4,5,6,7$. There are $24$ $14$-cycles of $HG$. \\ They are $u_{1}v_{1}u_{2}v_{2}u_{3}v_{3}u_{4}v_{4}u_{5}v_{5}u_{6}v_{6}u_{7}v_{7}u_{1}$, $u_{1}v_{1}u_{6}v_{6}u_{4}v_{4}u_{2}v_{2}u_{7}v_{7}u_{5}v_{5}u_{3}v_{3}u_{1}$, \\ $v_{7}u_{1}v_{3}u_{4}v_{6}u_{7}v_{2}u_{3}v_{5}u_{6}v_{1}u_{2}v_{4}u_{5}v_{7}$ \\ and $u_{i}v_{i}u_{i+1}v_{i+1}u_{i+2}v_{i+4}u_{i+5}v_{i+5}u_{i+6}v_{i+6}u_{i+4}v_{i+3}u_{i+3}v_{i+2}u_{i}$, \\ $u_{i+6}v_{i+6}u_{i}v_{i}u_{i+5}v_{i+5}u_{i+3}v_{i+2}u_{i+2}v_{i+4}u_{i+4}v_{i+3}u_{i+1}v_{i+1}u_{i+6}$ and\\ $v_{i+6}u_{i}v_{i+2}u_{i+2}v_{i+1}u_{i+6}v_{i+5}u_{i+3}v_{i+3}u_{i+1}v_{i}u_{i+5}v_{i+4}u_{i+4}v_{i+6}$ \\ for $i=1,2,3,4,5,6,7$. Then by Lemma \ref{lemma-HG} (2) we see that counting cycles for a particular edge $e$ of $HG$ will show (1). Similarly we have (2), (3) and (4) by Lemma \ref{lemma-HG} (3), (4) and (5) respectively. $\Box$ \vskip 5mm Summarizing the statements in Lemma \ref{lemma-HG2} we have Table \ref{table-Heawood-cycles}. 
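The cycle census just enumerated, together with the values $\alpha_{k}(e,HG)$ for a fixed edge $e$, can be double-checked by brute force. The following Python sketch (our own verification, not part of the proof) builds the Heawood graph with the edge rule $u_{i}v_{i}$, $u_{i+1}v_{i}$, $u_{i}v_{i+2}$, indices mod $7$, as read off from the $6$-cycles listed above (the $0$-based labels are our convention), enumerates all simple cycles, and tallies them by length:

```python
from collections import Counter, defaultdict

def heawood_adj():
    # Heawood graph on u_0..u_6, v_0..v_6 (0-based labels) with edge rule
    # u_i v_i, u_{i+1} v_i and u_i v_{i+2}, indices mod 7
    adj = defaultdict(set)
    for i in range(7):
        for a, b in ((('u', i), ('v', i)),
                     (('u', (i + 1) % 7), ('v', i)),
                     (('u', i), ('v', (i + 2) % 7))):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def all_cycles(adj):
    # enumerate each simple cycle exactly once, stored as its set of edges;
    # every cycle is discovered from its minimal vertex in both directions,
    # and the frozenset-of-edges representation deduplicates the two runs
    cycles = set()
    def dfs(start, path, visited):
        for nxt in adj[path[-1]]:
            if nxt == start and len(path) >= 3:
                cycles.add(frozenset(frozenset(e)
                                     for e in zip(path, path[1:] + [start])))
            elif nxt not in visited and nxt > start:
                dfs(start, path + [nxt], visited | {nxt})
    for s in sorted(adj):
        dfs(s, [s], {s})
    return cycles

cycles = all_cycles(heawood_adj())
census = Counter(len(c) for c in cycles)
alpha = Counter(len(c) for c in cycles
                if frozenset({('u', 0), ('v', 0)}) in c)
print(sorted(census.items()))  # [(6, 28), (8, 21), (10, 84), (12, 56), (14, 24)]
print(sorted(alpha.items()))   # [(6, 8), (8, 8), (10, 40), (12, 32), (14, 16)]
```

Since $HG$ is edge-transitive, the tally $\alpha_{k}$ does not depend on the chosen edge, so checking the single edge $u_{0}v_{0}$ suffices; the counts also satisfy the double-counting identity $k\cdot|\Gamma_{k}(HG)|=21\cdot\alpha_{k}(e,HG)$ visible in Table \ref{table-Heawood-cycles}.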
\begin{table}[htb] \centering \caption{number of $k$-cycles of $HG$} \scalebox{0.73} { \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{$k$} & \multirow{2}{*}{$|\Gamma_{k}(HG)|$} & \multirow{2}{*}{$|\Gamma_{k}(HG)|\cdot k$} & \multirow{2}{*}{$\alpha_{k}(e,HG)$} & \multicolumn{3}{c}{$\alpha_{k}(d\cup e,HG)$} & \multicolumn{2}{|c|}{$\beta_{k}(d\cup e,HG)$} \\ \cline{5-9} & & & & $d(d,e)=0$ & $d(d,e)=1$ & $d(d,e)=2$ & $d(d,e)=1$ & $d(d,e)=2$ \\ \hline $6$ & $28$ & $168$ & $8$ & $4$ & $2$ & $1$ & $2$ & $1$ \\ \hline $8$ & $21$ & $168$ & $8$ & $4$ & $2$ & $3$ & $2$ & $1$ \\ \hline $10$ & $84$ & $840$ & $40$ & $20$ & $18$ & $17$ & $10$ & $5$ \\ \hline $12$ & $56$ & $672$ & $32$ & $16$ & $16$ & $20$ & $8$ & $4$ \\ \hline $14$ & $24$ & $336$ & $16$ & $8$ & $12$ & $10$ & $4$ & $2$ \\ \hline \end{tabular} } \label{table-Heawood-cycles} \end{table} \vskip 5mm \noindent{\bf Proof of Theorem \ref{theorem-Heawood}.} Let $g:HG\to{\mathbb R}^{2}$ be a plane generic immersion illustrated in Figure \ref{HG}. We see that $g(HG)$ contains $7$ disjoint crossings between distance $1$ edges and $7$ disjoint crossings between distance $2$ edges. Since $\alpha_{6}(d\cup e,HG)=2$ for edges $d$ and $e$ of $HG$ with $d(d,e)=1$ and $\alpha_{6}(d\cup e,HG)=1$ for edges $d$ and $e$ of $HG$ with $d(d,e)=2$, we have \[ \sum_{\gamma\in\Gamma_{6}(HG)}c(g(\gamma))=2\cdot7+7=21\equiv 1\pmod 2. \] Similarly we have \[ \sum_{\gamma\in\Gamma_{8}(HG)}c(g(\gamma))=2\cdot7+3\cdot7=35\equiv 1\pmod 2 \] and \[ \sum_{\gamma\in\Gamma_{10}(HG)}c(g(\gamma))=18\cdot7+17\cdot7=245\equiv 1\pmod 2. \] Next we will check that the conditions (1), (2), (3) and (4) of Proposition \ref{proposition-crossing2} for $G=HG$, $\Lambda=\Gamma_{6}(HG)$, $\Gamma_{8}(HG)$ or $\Gamma_{10}(HG)$ and $m=2$ hold. Since $\alpha_{6}(e,HG)=\alpha_{8}(e,HG)=8$ and $\alpha_{10}(e,HG)=40$ for every edge $e$ of $HG$, we see that (1) holds. For $m=2$, (2) automatically holds. 
Since $\Lambda\subset\Gamma(HG)$, (3) holds for $m=2$ by Remark \ref{remark-crossing}. Let $d$ and $e$ be mutually adjacent edges of $HG$. Then $\alpha_{6}(d\cup e,HG)=\alpha_{8}(d\cup e,HG)=4$ and $\alpha_{10}(d\cup e,HG)=20$. Therefore (4) holds. Then by Proposition \ref{proposition-crossing2} we have \[ \sum_{\gamma\in\Gamma_{6}(HG)}c(f(\gamma))\equiv \sum_{\gamma\in\Gamma_{6}(HG)}c(g(\gamma))\pmod 2, \] \[ \sum_{\gamma\in\Gamma_{8}(HG)}c(f(\gamma))\equiv \sum_{\gamma\in\Gamma_{8}(HG)}c(g(\gamma))\pmod 2 \] and \[ \sum_{\gamma\in\Gamma_{10}(HG)}c(f(\gamma))\equiv \sum_{\gamma\in\Gamma_{10}(HG)}c(g(\gamma))\pmod 2. \] Therefore we have \[ \sum_{\gamma\in\Gamma_{6}(HG)}c(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{8}(HG)}c(f(\gamma))\equiv\sum_{\gamma\in\Gamma_{10}(HG)}c(f(\gamma))\equiv 1\pmod 2. \] We note that $\alpha_{12}(e,HG)=32$ for every edge $e$ of $HG$, $\alpha_{12}(d\cup e,HG)=16$ for any adjacent edges $d$ and $e$, $\alpha_{12}(d\cup e,HG)=16$ for any edges $d$ and $e$ with $d(d,e)=1$ and $\alpha_{12}(d\cup e,HG)=20$ for any edges $d$ and $e$ with $d(d,e)=2$. Then by Proposition \ref{proposition-crossing1} we have \[ \sum_{\gamma\in\Gamma_{12}(HG)}c(f(\gamma))\equiv 0\pmod 4. \] We note that $\alpha_{14}(e,HG)=16$ for every edge $e$ of $HG$, $\alpha_{14}(d\cup e,HG)=8$ for any adjacent edges $d$ and $e$, $\alpha_{14}(d\cup e,HG)=12$ for any edges $d$ and $e$ with $d(d,e)=1$ and $\alpha_{14}(d\cup e,HG)=10$ for any edges $d$ and $e$ with $d(d,e)=2$. Then by Proposition \ref{proposition-crossing1} we have \[ \sum_{\gamma\in\Gamma_{14}(HG)}c(f(\gamma))\equiv 0\pmod 2. \] $\Box$ \section{Rotation numbers of cycles in a plane immersed graph}\label{rotation-numbers} In this section we give a proof of Theorem \ref{theorem-zero-rotation}. \begin{Lemma}\label{lemma-zero-rotation} Let $G$ be a $2$-connected loopless graph.
Let $u$ and $v$ be distinct vertices of $G$ such that the graph obtained from $G$ by adding an edge joining $u$ and $v$ does not have $K_{4}$ as a minor. Then there exists a plane generic immersion $f:G\to{\mathbb R}^{2}$ with the following properties. \begin{enumerate} \item[{\rm (1)}] It holds that ${\rm rot}(f(\gamma))=0$ for every cycle $\gamma$ of $G$. \item[{\rm (2)}] Let $h:{\mathbb R}^{2}\to{\mathbb R}$ be a height function defined by $h(x,y)=y$. Then $h(f(G))=[h(f(v)),h(f(u))]$. If $P$ is a path of $G$ joining $u$ and $v$, then $h\circ f$ maps $P$ homeomorphically onto the closed interval $[h(f(v)),h(f(u))]$. \end{enumerate} \end{Lemma} \noindent{\bf Proof.} The following is a proof by induction on the number of vertices of $G$. First suppose $|V(G)|=2$. Then $G$ is a $\theta_{n}$-curve graph and a plane generic immersion described in Figure \ref{theta-n}, where the case $n=5$ is illustrated, satisfies (1) and (2). Let $k$ be a natural number greater than or equal to $2$. Suppose that the claim is true if $|V(G)|\leq k$. Suppose $|V(G)|=k+1$. Let $X_{1},\cdots,X_{n}$ be the connected components of the topological space $G\setminus\{u,v\}$ and $H_{1},\cdots,H_{n}$ the closures of them in $G$. Namely $H_{i}=X_{i}\cup\{u,v\}$ and $H_{i}$ is regarded as a subgraph of $G$ for $i=1,\cdots,n$. Let $H_{i}'$ be a graph obtained from $H_{i}$ by adding an edge $e_{i}$ joining $u$ and $v$. By the assumption on $G$ we see that $H_{i}'$ is a $2$-connected loopless graph that does not have $K_{4}$ as a minor. We will show that $n$ is greater than $1$. Suppose to the contrary that $n=1$. Then $G=H_{1}$. Let $P$ be a path of $H_{1}$ joining $u$ and $v$. Suppose that there exists another path $Q$ of $H_{1}$ joining $u$ and $v$ such that $P\cap Q=\{u,v\}$. Since $X_{1}=H_{1}\setminus\{u,v\}$ is connected there is a path $R$ of $H_{1}$ away from $u$ and $v$ joining a vertex of $P$ and a vertex of $Q$. Then we see that $H_{1}'$ has $K_{4}$ as a minor. 
Therefore there is no such path in $H_{1}$. Since $H_{1}=G$ is $2$-connected, $H_{1}$ has no cut vertices. Therefore ${\rm deg}(u)\geq2$. Then there exists a path $Q$ of $H_{1}$ joining $u$ and a vertex $w$ of $P$ such that $P\cap Q=\{u,w\}$. We may suppose that $w$ is closest to $v$ on $P$ among all choices of such a path $Q$. Then either $w$ is a cut vertex of $H_{1}$ or $H_{1}'$ has $K_{4}$ as a minor. See Figure \ref{path} for the latter case. We note that the second vertex from the bottom in the right graph of Figure \ref{path} may be $v$. Both cases contradict the assumption. Thus we have $n\geq2$. Since $|V(G)|\geq3$, at least one of $H_{1},\cdots,H_{n}$, say $H_{1}$, contains at least $3$ vertices. Suppose that every other $H_{i}$ contains exactly two vertices. Then $H_{i}$ consists of exactly one edge joining $u$ and $v$ for $i=2,\cdots,n$. Then by an argument similar to the previous one we see that $H_{1}$ has a cut vertex $w$. Let $H_{1,1}$ and $H_{1,2}$ be connected subgraphs of $H_{1}$ such that $H_{1}=H_{1,1}\cup H_{1,2}$ and $H_{1,1}\cap H_{1,2}=\{w\}$. We may suppose without loss of generality that $H_{1,1}$ contains $u$ and $H_{1,2}$ contains $v$. Let $H_{1,1}'$ be a graph obtained from $H_{1,1}$ by adding an edge $e_{1,1}$ joining $u$ and $w$. Let $H_{1,2}'$ be a graph obtained from $H_{1,2}$ by adding an edge $e_{1,2}$ joining $w$ and $v$. Then $H_{1,1}'$ with $u$ and $w$ satisfies the induction hypothesis, and $H_{1,2}'$ with $w$ and $v$ also satisfies the induction hypothesis. Let $f_{1}:H_{1,1}'\to{\mathbb R}^{2}$ and $f_{2}:H_{1,2}'\to{\mathbb R}^{2}$ be plane generic immersions that satisfy (1) and (2). By a translation we may assume $f_{1}(w)=f_{2}(w)$. Let $f_{0}:H_{1}\to{\mathbb R}^{2}$ be a plane generic immersion defined by $f_{0}|_{H_{1,1}}=f_{1}|_{H_{1,1}}$ and $f_{0}|_{H_{1,2}}=f_{2}|_{H_{1,2}}$.
Then by a construction similar to that illustrated in Figure \ref{theta-n} we have a plane generic immersion $f:G\to{\mathbb R}^{2}$ with $f|_{H_{1}}=f_{0}$ that satisfies (1) and (2). See for example Figure \ref{zero-immersion} where the case $n=3$ is illustrated. Next suppose that another $H_{i}$, say $H_{2}$, contains at least three vertices. Then by a construction similar to that illustrated in Figure \ref{zero-immersion}, we have a plane generic immersion of $G$ that satisfies (1) and (2). See for example Figure \ref{zero-immersion2}. This completes the proof. $\Box$ \begin{figure}[htbp] \begin{center} \scalebox{0.6}{\includegraphics*{theta-n.eps}} \end{center} \caption{All cycles have rotation number $0$} \label{theta-n} \end{figure} \begin{figure}[htbp] \begin{center} \scalebox{0.6}{\includegraphics*{path.eps}} \end{center} \caption{$n$ is greater than $1$} \label{path} \end{figure} \begin{figure}[htbp] \begin{center} \scalebox{0.6}{\includegraphics*{zero-immersion.eps}} \end{center} \caption{A construction} \label{zero-immersion} \end{figure} \begin{figure}[htbp] \begin{center} \scalebox{0.6}{\includegraphics*{zero-immersion2.eps}} \end{center} \caption{An example} \label{zero-immersion2} \end{figure} \vskip 5mm \noindent{\bf Proof of Theorem \ref{theorem-zero-rotation}.} Suppose that $K_{4}$ is a minor of $G$. Since $K_{4}$ is $3$-regular, $G$ contains a subgraph that is homeomorphic to $K_{4}$. Then by Corollary \ref{corollary-K4} we see that every plane generic immersion of $G$ contains a cycle with non-zero rotation number. Therefore (1) implies (2). Suppose that $G$ does not have $K_{4}$ as a minor. We will show that there exists a plane generic immersion $f:G\to{\mathbb R}^{2}$ such that ${\rm rot}(f(\gamma))=0$ for every cycle $\gamma$ of $G$. By considering the block decomposition, it is sufficient to show the case that $G$ is $2$-connected and loopless. Then by Lemma \ref{lemma-zero-rotation} we have such a plane generic immersion. 
This completes the proof. $\Box$ \section{The total Thurston-Bennequin number of a Legendrian embedding of a finite graph}\label{total-Thurston-Bennequin-number} \begin{Lemma}\label{lemma-total-tb} Let $G$ be a finite graph and $f:G\to {\mathbb R}^{3}$ a Legendrian embedding of $G$. Let $j$ and $k$ be natural numbers. Suppose that there exists a rational number $q$ such that the following holds. \begin{enumerate} \item[{\rm (1)}] For any edge $e$ of $G$, $\alpha_{k}(e,G)=q\cdot\alpha_{j}(e,G)$, \item[{\rm (2)}] For any pair of mutually adjacent edges $d$ and $e$ of $G$, $\alpha_{k}(d\cup e,G)=q\cdot\alpha_{j}(d\cup e,G)$, \item[{\rm (3)}] For any pair of mutually disjoint oriented edges $d$ and $e$ of $G$, $\beta_{k}(d\cup e,G)=q\cdot\beta_{j}(d\cup e,G)$. \end{enumerate} Then \[ TB_{k}(f)=q\cdot TB_{j}(f). \] \end{Lemma} \noindent{\bf Proof.} Let $\gamma$ be a cycle of $G$. Let $D=D(f(\gamma))$ be a Lagrangian projection of the Legendrian knot $f(\gamma)$. Let $w(D)$ be the writhe of $D$. It is known that $\displaystyle{tb(f(\gamma))=w(D)}$. We note that each crossing of $D$ is a self crossing, an adjacent crossing or a disjoint crossing. A self crossing of an edge $e$ of $G$ contributes $\pm\alpha_{j}(e,G)$ to $TB_{j}(f)$ and $\pm\alpha_{k}(e,G)$ to $TB_{k}(f)$. Similarly an adjacent crossing of edges $d$ and $e$ contributes $\pm\alpha_{j}(d\cup e,G)$ to $TB_{j}(f)$ and $\pm\alpha_{k}(d\cup e,G)$ to $TB_{k}(f)$. A disjoint crossing of edges $d$ and $e$ contributes $\pm\beta_{j}(d\cup e,G)$ to $TB_{j}(f)$ and $\pm\beta_{k}(d\cup e,G)$ to $TB_{k}(f)$. By taking the sum of these numbers we have the result. $\Box$ \vskip 5mm \noindent{\bf Proof of Theorem \ref{theorem-Petersen-tb}.} By Lemma \ref{lemma-total-tb} and Lemma \ref{lemma-PG2} we have the result. $\Box$ \vskip 5mm \noindent{\bf Proof of Theorem \ref{theorem-Heawood-tb}.} By Lemma \ref{lemma-total-tb} and Lemma \ref{lemma-HG2} we have the result.
$\Box$ \vskip 3mm \section*{Acknowledgments} The authors are grateful to the referee for his/her helpful comments. \vskip 3mm {\normalsize \begin{thebibliography}{99} \bibitem{C-G}{J. H. Conway and C. McA. Gordon}, {Knots and links in spatial graphs}, {\it J. Graph Theory}, {\bf 7} (1983), 445-453. \bibitem{D-F-V}{A. DeCelles, J. Foisy and C. Versace}, {On graphs for which every planar immersion lifts to a knotted spatial embedding}, {\it Involve}, {\bf 1} (2008), 145-158. \bibitem{F-F-N}{E. Flapan, W. Fletcher and R. Nikkuni}, {Reduced Wu and generalized Simon invariants for spatial graphs}, {\it Math. Proc. Cambridge Philos. Soc.}, {\bf 156} (2014), 521-544. \bibitem{Nikkuni}{R. Nikkuni}, {A refinement of the Conway-Gordon theorems}, {\it Topology Appl.}, {\bf 156} (2009), 2782-2794. \bibitem{O-P}{D. O'Donnol and E. Pavelescu}, {On Legendrian graphs}, {\it Algebr. Geom. Topol.}, {\bf 12} (2012), 1273-1299. \bibitem{O-P2}{D. O'Donnol and E. Pavelescu}, {The total Thurston-Bennequin number of complete and complete bipartite Legendrian graphs}, {\it Advances in the mathematical sciences}, 117-137, {Assoc. Women Math. Ser.}, {\bf 6}, Springer, 2016. \bibitem{Sachs}{H. Sachs}, {On spatial representations of finite graphs}, {Finite and infinite sets}, (Eger, 1981), vol. 2, 649-662, Colloq. Math. Soc. J\'{a}nos Bolyai 37, North-Holland, Amsterdam, 1984. \bibitem{S-S-S}{J Segal, A. Skopenkov and S. Spiez}, {Embeddings of polyhedra in ${\mathbb R}^{m}$ and the deleted product obstruction}, {\it Topol. Appl.}, {\bf 85} (1998), 335-344. \bibitem{S-S}{J. Segal and S. Spiez}, {Quasi-embeddings and embedding of polyhedra in ${\mathbb R}^{m}$}, {\it Topol. Appl.}, {\bf 45} (1992), 275-282. \bibitem{S-T}{M. Sakamoto and K. Taniyama}, {Plane curves in an immersed graph in ${\mathbb R}^{2}$}, {\it J. Knot Theory Ramifications}, {\bf 22} (2013), 1350003, 10 pp. \bibitem{Sarkaria}{K. S. 
Sarkaria}, {A one-dimensional Whitney trick and Kuratowski's graph planarity criterion}, {\it Israel J. Math.}, {\bf 73} (1991), 79-89. \bibitem{Sivaraman}{V. Sivaraman}, {Some topics concerning graphs, signed graphs and matroids}, Ph.D. Dissertation, The Ohio State University, 2012. \bibitem{Skopenkov}{A. Skopenkov}, {Invariants of graph drawings in the plane}, {\it Arnold Math. J.}, {\bf 6} (2020), 21-55. \bibitem{Tanaka}{T. Tanaka}, {On the maximal Thurston-Bennequin number of knots and links in spatial graphs}, {\it Topology Appl.}, {\bf 180} (2015), 132-141. \bibitem{Taniyama1}{K. Taniyama}, {Cobordism, homotopy and homology of graphs in $R^{3}$}, {\it Topology}, {\bf 33} (1994), 509-523. \bibitem{Taniyama2}{K. Taniyama}, {Homology classification of spatial embeddings of a graph}, {\it Topology Appl.}, {\bf 65} (1995), 205-228. \bibitem{Taniyama3}{K. Taniyama}, {Higher dimensional links in a simplicial complex embedded in a sphere}, {\it Pacific J. Math.}, {\bf 194} (2000), 465-467. \end{thebibliography} } \end{document}
2205.00866v2
http://arxiv.org/abs/2205.00866v2
The Laplacians, Kirchhoff index and complexity of linear Möbius and cylinder octagonal-quadrilateral networks
\documentclass[10pt]{article} \usepackage{amsmath} \usepackage{amssymb} \usepackage{mathrsfs} \usepackage{amsfonts} \usepackage{mathrsfs,amscd,amssymb,amsthm,amsmath,bm,graphicx} \usepackage{graphicx} \usepackage{cite} \usepackage{color} \usepackage[shortcuts]{extdash} \setlength{\evensidemargin}{-4cm} \setlength{\oddsidemargin}{1mm} \setlength{\textwidth}{16cm} \setlength{\textheight}{22cm} \setlength{\headsep}{1.4mm} \makeatletter \renewcommand{\@seccntformat}[1]{{\csname the#1\endcsname}{\normalsize.}\hspace{.5em}} \makeatother \renewcommand{\thesection}{\normalsize \arabic{section}} \renewcommand{\refname}{\normalsize \bf{References}} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \def\theequation{\thesection.\arabic{equation}} \def \[{\begin{equation}} \def \]{\end{equation}} \def \CL{\rm CL} \def\O{\mathcal{O}(M)} \def\TN{\rm TN} \def\AO{A\mathcal{O}(M)} \def\Mod{{\rm Mod}} \def\Z{{\Bbb Z}} \newtheorem{thm}{Theorem}[section] \newtheorem{defi}{Definition} \newtheorem{lem}[thm]{Lemma} \newtheorem{exa}{Example} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{prob}[thm]{Problem} \newtheorem{Con}{Conjecture} \newtheorem{Fac}{Factor} \newtheorem{Cla}{Claim} \newcommand{\orcidauthorA}{0000-0002-9620-7692} \newcommand{\nz}{\rm nz} \newenvironment{kst} {\setlength{\leftmargini}{1.3\parindent} \begin{itemize} \setlength{\itemsep}{-1.1mm}} {\end{itemize}} \newenvironment{lst} {\setlength{\leftmargini}{0.75\parindent} \begin{itemize} \setlength{\itemsep}{-1.1mm}} {\end{itemize}} \newenvironment{wst} {\setlength{\leftmargini}{1.5\parindent} \begin{itemize} \setlength{\itemsep}{-1.1mm}} {\end{itemize}} \begin{document} \setlength{\baselineskip}{13pt} \begin{center}{\Large \bf The Laplacians, Kirchhoff index and complexity of linear M\"{o}bius and cylinder octagonal-quadrilateral networks } \vspace{4mm} {\large Jia-Bao Liu $^{1,*}$, Lu-Lu Fang $^{1,*}$, Qian Zheng $^{1}$, Xin-Bei Peng $^{1}$}\vspace{2mm} {\small $^{1}$School of Mathematics 
and Physics, Anhui Jianzhu University, Hefei 230601, P.R. China\\} \vspace{2mm} \end{center} \footnotetext{E-mail address: [email protected], [email protected], [email protected], [email protected].} \footnotetext{* Corresponding author.} {\noindent{\bf Abstract.}\ \ Spectral graph theory not only comprehensively reflects the topological structure and dynamic characteristics of networks, but also offers significant and noteworthy applications in theoretical chemistry, network science and other fields. Let $L_{n}^{8,4}$ denote the linear octagonal-quadrilateral network, consisting of $n$ eight-member rings and $n$ four-member rings. The M\"{o}bius graph $Q_{n}(8,4)$ is constructed by identifying the opposite edges in reverse order, whereas the cylinder graph $Q'_{n}(8,4)$ identifies the opposite edges in order. In this paper, explicit formulas for the Kirchhoff indices and the complexity of $Q_{n}(8,4)$ and $Q'_{n}(8,4)$ are derived from the Laplacian characteristic polynomials by means of the decomposition theorem and Vieta's theorem. Surprisingly, the Kirchhoff index of $Q_{n}(8,4)$\big($Q'_{n}(8,4)$\big) is approximately one third of its Wiener index as $n\to\infty$.\\ \noindent{\bf Keywords}: Kirchhoff index; Complexity; M\"{o}bius graph; Cylinder graph.\vspace{2mm} \section{Introduction}\label{sct1} \ \ \ \ \ The graphs in this paper are simple, undirected and connected. We first recall some definitions frequently used in graph theory. Let $G$ be a simple undirected graph with $\big|V_G\big|=n$ and $\big|E_G\big|=m$. The adjacency matrix $A(G)=[a_{i,j}]_{n\times n}$ of $G$ is the symmetric matrix with $a_{i,j}=1$ when the vertex $i$ is adjacent to $j$, and $a_{i,j}=0$ otherwise. For a vertex $i\in V_G$, $d_{i}$ is the number of edges incident to $i$, and the diagonal matrix $D(G)=diag\{d_{1},d_{2},\cdots,d_{n}\}$ is the degree matrix.
Subsequently, the Laplacian matrix is defined as $L(G)=D(G)-A(G)$, whose spectrum can be written as $0=\mu_{1}<\mu_{2}\leq\cdots\leq\mu_{n}$. For more notation, one may refer to \cite{F.R}. Alternatively, the $(m,n)$-entry of the Laplacian matrix can be written as \begin{eqnarray} \big(L(G)\big)_{mn}= \begin{cases} d_{m}, & m=n;\\ -1, & m\neq{n}~and~ v_{m}\backsim v_{n};\\ 0, & otherwise. \end{cases} \end{eqnarray} The distance $d_{G}(v_{i},v_{j})=d_{ij}$ between two vertices $v_{i}$ and $v_{j}$ is the length of a shortest path connecting them. Among the many parameters used to describe the structure of a graph in chemical and mathematical research \cite{B,C.,D}, the most commonly used is the distance-based Wiener index \cite{Wiener,A.D}, defined as \begin{eqnarray*} W(G)=\sum_{i<j}d_{ij}. \end{eqnarray*} Suppose that each edge of a connected graph $G$ is regarded as a unit resistor; the resistance distance \cite{D.J} between any two vertices $i$ and $j$ is denoted by $r_{ij}$. In analogy with the Wiener index, the Kirchhoff index \cite{D.,D.J.} is defined in terms of resistance distances as \begin{eqnarray*} Kf(G)=\sum_{i<j}r_{ij}. \end{eqnarray*} Scholars in various fields have a strong interest in the study of the Kirchhoff index, which has prompted researchers to develop computational methods for the Kirchhoff index of a given graph and to obtain closed formulas. As early as 1985, Y.L. Yang et al. \cite{Y.L} gave a decomposition theorem for the matrices of graphs with a linear structure. Up to now, many scholars have used this method to determine the Kirchhoff indices of a series of linear hydrocarbon chains. For instance, in 2007, Yang et al. \cite{Y.} calculated the Kirchhoff index of the linear hexagonal chain by using the decomposition of the Laplacian matrix.
In 2019, Geng et al. \cite{G.W} used this method to calculate the Kirchhoff indices of the M\"{o}bius chain and the cylinder chain of phenylenes. Shi and Liu et al. \cite{Z.L} computed the Kirchhoff index of the linear octagonal-quadrilateral network in 2020. For more information, see \cite{H.B,E.,F.,G.,H.,M.,K.}. Motivated by these excellent works, this paper computes the Kirchhoff indices, the Wiener indices and the complexity of the M\"{o}bius graph and the cylinder graph of the linear octagonal-quadrilateral network.\\ Let $L_{n}^{8,4}$ be the linear octagonal-quadrilateral network as illustrated in Figure 1, in which octagons and quadrilaterals are connected by common edges. The corresponding M\"{o}bius graph $Q_{n}\big(8,4\big)$ of the octagonal-quadrilateral network is obtained from $L_{n}^{8,4}$ by identifying its opposite edges in reverse order, and the cylinder graph $Q'_{n}\big(8,4\big)$ is obtained by identifying the opposite edges of $L_{n}^{8,4}$ in order. An illustration is given in Figure 2. Obviously, we obtain $\big|V_{Q_{n}(8,4)}\big|=8n$,~$\big|E_{Q_{n}(8,4)}\big|=10n$ and $\big|V_{Q'_{n}(8,4)}\big|=8n$,~ $\big|E_{Q'_{n}(8,4)}\big|=10n.$ \begin{figure}[htbp] \centering\includegraphics[width=16.5cm,height=4cm]{1} \caption{Linear octagonal-quadrilateral networks. } \end{figure} \begin{figure}[htbp] \centering\includegraphics[width=13cm,height=4cm]{2} \caption{A special class of octagonal-quadrilateral networks for $n=3$.} \end{figure} The rest of this paper is organized as follows: In Section 2, we introduce some basic notation and related lemmas. In Section 3, the Kirchhoff index, the Wiener index and the complexity of $Q_{n}\big(8,4\big)$ and $Q'_{n}\big(8,4\big)$ are computed. In Section 4, we determine the ratio of the Wiener index to the Kirchhoff index of $Q_{n}\big(8,4\big)$ and $Q'_{n}\big(8,4\big)$. Finally, we provide insights into subsequent extensions in Section 5.
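Since the arguments below repeatedly reduce Kirchhoff-index computations to Laplacian eigenvalues, it is convenient to sanity-check the cycle case numerically: the Laplacian spectrum of $C_{n}$ is $\big\{2-2\cos(2\pi k/n)\ \big|\ 0\leq k\leq n-1\big\}$ and $Kf(C_{n})=(n^{3}-n)/12$ (these facts are restated as a lemma in Section 2). A minimal Python sketch:

```python
import math

def kf_cycle_spectral(n):
    # Kf(G) = n * sum of reciprocals of the nonzero Laplacian eigenvalues;
    # for the cycle C_n these eigenvalues are 2 - 2*cos(2*pi*k/n), k = 1..n-1
    return n * sum(1.0 / (2 - 2 * math.cos(2 * math.pi * k / n))
                   for k in range(1, n))

def kf_cycle_closed(n):
    # closed form (n^3 - n)/12 for the cycle C_n
    return (n ** 3 - n) / 12

# the spectral sum and the closed form agree for several cycle lengths
for n in (3, 4, 12, 40):
    assert math.isclose(kf_cycle_spectral(n), kf_cycle_closed(n), rel_tol=1e-9)
print(kf_cycle_closed(12))  # 143.0
```

The same eigenvalue-reciprocal formula, applied to the factors $L_{A}$ and $L_{S}$ of the decomposition theorem, is what drives the computation for $Q_{n}(8,4)$ in Section 3.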
\section{Preliminary}\label{sct2} \ \ \ \ \ We introduce some notation and related calculation methods that will be used in the proofs in the rest of the article. The characteristic polynomial of a matrix $A$ of order $n$ is defined as $P_{A}(x)=det\big(xI-A\big)$. Let $\varpi$ be an automorphism of $G$; we write it as a product of disjoint $1$-cycles and transpositions, namely \begin{equation*} \mathscr{\varpi}=(\bar{1})(\bar{2})\cdots(\bar{m})(1,1')(2,2')\cdots(k,k'). \end{equation*} Then $\big|V(G)\big|=m+2k$. Let $V_{0}=\big\{\bar{1},\bar{2},\cdots,\bar{m}\big\}$, $V_{1}=\big\{1,2,\cdots,k\big\}$ and $V_{2}=\big\{1',2',\cdots,k'\big\}$. Thus the Laplacian matrix can be written in block form as \begin{equation} L(G)=\left( \begin{array}{ccc} L_{V_{0}V_{0}}& L_{V_{0}V_{1}}& L_{V_{0}V_{2}}\\ L_{V_{1}V_{0}}& L_{V_{1}V_{1}}& L_{V_{1}V_{2}}\\ L_{V_{2}V_{0}}& L_{V_{2}V_{1}}& L_{V_{2}V_{2}}\\ \end{array} \right), \end{equation} where \begin{equation} L_{V_{0}V_{1}}=L_{V_{0}V_{2}},~ L_{V_{1}V_{2}}=L_{V_{2}V_{1}},~and~ L_{V_{1}V_{1}}=L_{V_{2}V_{2}}. \end{equation} Let \begin{equation} P=\left( \begin{array}{ccc} I_{m}& 0& 0\\ 0& \frac{1}{\sqrt{2}}I_{k}& \frac{1}{\sqrt{2}}I_{k}\\ 0& \frac{1}{\sqrt{2}}I_{k}& -\frac{1}{\sqrt{2}}I_{k} \end{array} \right), \end{equation} where $I_{m}$ is the identity matrix of order $m$. Note that $P'$ is the transpose of $P$. Combining Eqs.(2.2)-(2.4), we obtain \begin{equation*} P'L(G)P=\left( \begin{array}{cc} L_{A}& 0\\ 0& L_{S} \end{array} \right), \end{equation*} where \begin{eqnarray*} L_{A}=\left( \begin{array}{cc} L_{V_{0}V_{0}}& \sqrt{2}L_{V_{0}V_{1}}\\ \sqrt{2}L_{V_{1}V_{0}}& L_{V_{1}V_{1}}+L_{V_{1}V_{2}}\\ \end{array} \right),~ L_{S}=L_{V_{1}V_{1}}-L_{V_{1}V_{2}}. \end{eqnarray*} \begin{lem}\textup{\cite{Y.L}} With $L_{A}$ and $L_{S}$ obtained above, the characteristic polynomial of $L(G)$ factors as \begin{eqnarray*} P_{L(G)}\big(x\big)=P_{L_{A}}\big(x\big)P_{L_{S}}\big(x\big).
\end{eqnarray*} \end{lem} \begin{lem}\textup{\cite{Gut}} Gutman and Mohar established the following formula: \begin{eqnarray*} Kf(G)=n\sum_{k=2}^{n}\frac{1}{\mu_{k}} , \end{eqnarray*} where $0=\mu_{1}<\mu_{2}\leq\cdots\leq\mu_{n}(n\geq2)$ are the eigenvalues of $L(G)$. \end{lem} \begin{lem}\textup{\cite{H.Y.}} The number of spanning trees of $G$, also known as the complexity of $G$, is given by \begin{eqnarray*} \mathscr{L}(G)=\frac{1}{n}\prod_{i=2}^{n}\mu_{i}. \end{eqnarray*} \end{lem} \begin{lem}\textup{\cite{Y.X.}} Let $C_{n}$ be the cycle with $n$ vertices. The Laplacian spectrum of $C_{n}$ is \begin{eqnarray*} S_{p}\big(L(C_{n})\big)=\Big\{2-2\cos\frac{2\pi i}{n}\Big|1\leq i\leq n\Big\},~ \end{eqnarray*} and the Kirchhoff index of $C_{n}$ is given by \begin{eqnarray*} Kf\big(C_{n}\big)=\frac{n^{3}-n}{12}. \end{eqnarray*} \end{lem} \section{Main results}\label{sct3} \ \ \ \ \ In this section, we first derive a closed formula for the Kirchhoff index of $Q_{n}\big(8,4\big)$. By Lemma 2.1, we use the eigenvalues of the Laplacian matrix to compute $Kf\big(Q_{n}(8,4)\big)$. Besides, we determine the complexity of $Q_{n}\big(8,4\big)$, which is obtained from the product of the nonzero Laplacian eigenvalues. The scenario is similar for $Q'_{n}\big(8,4\big)$. Given an $n\times n$ matrix $R$, let $R\big[\{i_{1},i_{2},\cdots ,i_{k}\}\big]$ denote the matrix obtained by deleting the $i_{1}$th, $i_{2}$th, $\cdots$, $i_{k}$th rows and columns of $R$. The vertices of $Q_{n}\big(8,4\big)$ are labelled as in Figure 2. Evidently, $\pi=(1,1')(2,2')\cdots (4n,(4n)')$ is an automorphism of $Q_{n}\big(8,4\big)$ with $V_{0}=\emptyset,~V_{1}=\big\{1,2,3,\cdots, 4n\big\},~V_{2}=\big\{1',2',3',\cdots,(4n)'\big\}.$ Meanwhile, we abbreviate $L_{A}\big(Q_{n}(8,4)\big)$ and $L_{S}\big(Q_{n}(8,4)\big)$ to $L_{A}$ and $L_{S}$.
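Before specializing to $Q_{n}(8,4)$, the factorization of Lemma 2.1 can be illustrated on a toy case of our own choosing (not from the paper): the $4$-cycle on vertices $1,2,1',2'$ with the involution $(1,1')(2,2')$ and $V_{0}=\emptyset$, so that $L_{A}=L_{V_{1}V_{1}}+L_{V_{1}V_{2}}$ and $L_{S}=L_{V_{1}V_{1}}-L_{V_{1}V_{2}}$. The Python sketch below verifies $P_{L}(x)=P_{L_{A}}(x)P_{L_{S}}(x)$ exactly at enough integer sample points to pin down the degree-$4$ polynomial identity:

```python
from fractions import Fraction

def det(M):
    # exact determinant by Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i]), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            sign = -sign
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return sign * d

def charpoly_at(M, x):
    # evaluates det(xI - M) at the integer x
    n = len(M)
    return det([[x * (i == j) - M[i][j] for j in range(n)] for i in range(n)])

# C_4 with edges 1-2, 2-1', 1'-2', 2'-1, vertex order (1, 2, 1', 2')
LV1V1 = [[2, -1], [-1, 2]]
LV1V2 = [[0, -1], [-1, 0]]
LA = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(LV1V1, LV1V2)]
LS = [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(LV1V1, LV1V2)]
L = [[2, -1, 0, -1], [-1, 2, -1, 0], [0, -1, 2, -1], [-1, 0, -1, 2]]

# both sides have degree 4, so agreement at 9 points proves the identity
for x in range(-3, 6):
    assert charpoly_at(L, x) == charpoly_at(LA, x) * charpoly_at(LS, x)
```

Here $L_{A}=\begin{pmatrix}2&-2\\-2&2\end{pmatrix}$ carries the eigenvalues $0,4$ and $L_{S}=2I$ the eigenvalues $2,2$, recovering the full Laplacian spectrum $0,2,2,4$ of $C_{4}$.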
Then one can get \begin{eqnarray} L_{A}=L_{V_{1}V_{1}}+L_{V_{1}V_{2}},~L_{S}=L_{V_{1}V_{1}}-L_{V_{1}V_{2}}. \end{eqnarray} By Equation (3.5), we have \begin{eqnarray*} L_{V_1 V_1}&=& \left( \begin{array}{ccccccccccc} 3 & -1 & & & & & & & & &\\ -1 & 2 & -1 & & & & & & & &\\ & -1 & 2 & -1 & & & & & & &\\ & & -1 & 3 & -1 & & & & & &\\ & & & -1 & 3 & -1 & & & & &\\ & & & & -1 & 2 & -1 & & & &\\ & & & & & & \ddots & & & &\\ & & & & & & -1 & 3 & -1 & &\\ & & & & & & & -1 & 2 & -1 &\\ & & & & & & & & -1 & 2 & -1\\ & & & & & & & & & -1 & 3\\ \end{array} \right)_{(4n)\times(4n)}, \end{eqnarray*} \begin{eqnarray*} L_{V_1 V_2}&=& \left( \begin{array}{ccccccccccc} -1 & & & & & & & & & & -1\\ & 0 & & & & & & & & &\\ & & 0 & & & & & & & &\\ & & & -1 & & & & & & &\\ & & & & 0 & & & & & &\\ & & & & & 0 & & & & &\\ & & & & & & \ddots & & & &\\ & & & & & & & -1 & & & \\ & & & & & & & & 0 & &\\ & & & & & & & & & 0 &\\ -1 & & & & & & & & & & -1\\ \end{array} \right)_{(4n)\times(4n)}. \end{eqnarray*} Hence, \begin{eqnarray*} L_A&=& \left( \begin{array}{ccccccccccc} 2 & -1 & & & & & & & & & -1\\ -1 & 2 & -1 & & & & & & & &\\ & -1 & 2 & -1 & & & & & & &\\ & & -1 & 2 & -1 & & & & & &\\ & & & -1 & 2 & -1 & & & & &\\ & & & & -1 & 2 & -1 & & & &\\ & & & & & & \ddots & & & &\\ & & & & & & -1 & 2 & -1 & &\\ & & & & & & & -1 & 2 & -1 &\\ & & & & & & & & -1 & 2 & -1\\ -1 & & & & & & & & & -1 & 2\\ \end{array} \right)_{(4n)\times(4n)}, \end{eqnarray*} and \begin{eqnarray*} L_S&=& \left( \begin{array}{ccccccccccc} 4 & -1 & & & & & & & & & 1\\ -1 & 2 & -1 & & & & & & & &\\ & -1 & 2 & -1 & & & & & & &\\ & & -1 & 4 & -1 & & & & & &\\ & & & -1 & 4 & -1 & & & & &\\ & & & & -1 & 2 & -1 & & & &\\ & & & & & & \ddots & & & &\\ & & & & & & -1 & 4 & -1 & &\\ & & & & & & & -1 & 2 & -1 &\\ & & & & & & & & -1 & 2 & -1\\ 1 & & & & & & & & & -1 & 4\\ \end{array} \right)_{(4n)\times(4n)}. 
\end{eqnarray*} Assume that $0=\alpha_{1}<\alpha_{2}\leq\alpha_{3}\leq\cdots\leq\alpha_{4n}$ are the roots of $P_{L_{A}}\big(x\big)=0$, and that $0<\beta_{1}\leq\beta_{2}\leq\beta_{3}\leq\cdots\leq\beta_{4n}$ are the roots of $P_{L_{S}}\big(x\big)=0$. By Lemma 2.2, we immediately have \begin{eqnarray} Kf\big(Q_{n}(8,4)\big)=8n\Bigg(\sum_{i=2}^{4n}\frac{1}{\alpha_{i}}+\sum_{j=1}^{4n}\frac{1}{\beta_{j}}\Bigg). \end{eqnarray} Our next aim is to calculate $\sum\limits_{i=2}^{4n}\frac{1}{\alpha_{i}}$ and $\sum\limits_{j=1}^{4n}\frac{1}{\beta_{j}}$.\\ Combining Lemma 2.1 and Lemma 2.4, one can get \begin{eqnarray} Kf\big(Q_{n}(8,4)\big)=8n\Bigg(\frac{16n^{2}-1}{12}+\sum_{j=1}^{4n}\frac{1}{\beta_{j}}\Bigg). \end{eqnarray} \begin{lem} Let $\beta_{j}$ $(1\leq j\leq 4n)$ be the eigenvalues of $L_{S}$, and set $\mu=15+4\sqrt{14}$, $\nu=15-4\sqrt{14}$. Then \begin{eqnarray} \sum_{j=1}^{4n}\frac{1}{\beta_{j}}=\frac{(-1)^{4n-1}b_{4n-1}}{detL_{S}}. \end{eqnarray} \end{lem} \noindent{\bf Proof.} $$P_{L_{S}}(x)=det(xI-L_{S})=x^{4n}+b_{1}x^{4n-1}+\cdots+b_{4n-1}x+b_{4n},~b_{4n}\neq0.$$ The key to obtaining Eq.(3.8) is to determine $(-1)^{4n-1}b_{4n-1}$ and $detL_{S}$. Let $M_{i}$ and $M'_{i}$ denote the $i$th order leading principal submatrices formed by the first $i$ rows and columns of the matrices $L^{1}_{S}$ and $L^{2}_{S}$ below, respectively.
For $i=1,2,\ldots,4n-1$, these matrices are \begin{eqnarray*} L_S^{1}&=& \left( \begin{array}{ccccccccccc} 4 & -1 & & & & & & & & &\\ -1 & 2 & -1 & & & & & & & &\\ & -1 & 2 & -1 & & & & & & &\\ & & -1 & 4 & -1 & & & & & &\\ & & & -1 & 4 & -1 & & & & &\\ & & & & -1 & 2 & -1 & & & &\\ & & & & & & \ddots & & & &\\ & & & & & & -1 & 4 & -1 & &\\ & & & & & & & -1 & 2 & -1 &\\ & & & & & & & & -1 & 2 & -1\\ & & & & & & & & & -1 & 4\\ \end{array} \right)_{(4n)\times(4n)}, \end{eqnarray*} and \begin{eqnarray*} L_S^{2}&=& \left( \begin{array}{ccccccccccc} 2 & -1 & & & & & & & & &\\ -1 & 2 & -1 & & & & & & & &\\ & -1 & 4 & -1 & & & & & & &\\ & & -1 & 4 & -1 & & & & & &\\ & & & -1 & 2 & -1 & & & & &\\ & & & & -1 & 2 & -1 & & & &\\ & & & & & & \ddots & & & &\\ & & & & & & -1 & 2 & -1 & &\\ & & & & & & & -1 & 2 & -1 &\\ & & & & & & & & -1 & 4 & -1\\ & & & & & & & & & -1 & 4\\ \end{array} \right)_{(4n)\times(4n)}. \end{eqnarray*} Based on these matrices, we establish the following two claims.\\ \noindent{\bf Claim 1.} For $1\leq j\leq 4n$, \begin{eqnarray*} q_{j}= \begin{cases} \Big(\frac{1}{2}+\frac{9\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j}{4}}+\Big(\frac{1}{2}-\frac{9\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j}{4}},& if~j\equiv 0\ (mod\ 4);\\ \Big(2+\frac{31\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j-1}{4}}+\Big(2-\frac{31\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j-1}{4}},& if~j\equiv 1\ (mod\ 4);\\ \Big(\frac{7}{2}+\frac{53\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j-2}{4}}+\Big(\frac{7}{2}-\frac{53\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j-2}{4}},& if~j\equiv 2\ (mod\ 4);\\ \Big(5+\frac{75\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j-3}{4}}+\Big(5-\frac{75\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j-3}{4}},& if~j\equiv 3\ (mod\ 4). \end{cases} \end{eqnarray*} \noindent{\bf Proof of Claim 1.} Put $q_{j}:=\det M_{j}$, with $q_{0}=1$, and $q'_{j}:=\det M'_{j}$, with $q'_{0}=1$.
Then by direct computation, we get $q_{1}=4,~q_{2}=7,~q_{3}=10,~q_{4}=33,~q_{5}=122,~q_{6}=211,~q_{7}=300,~q_{8}=989.$ For $4\leq j\leq 4n$, we have \begin{eqnarray*} q_{j}= \begin{cases} 4q_{j-1}-q_{j-2},& if~j\equiv 0 \ (mod\ 4);\\ 4q_{j-1}-q_{j-2},& if~j\equiv 1 \ (mod\ 4);\\ 2q_{j-1}-q_{j-2},& if~j\equiv 2 \ (mod\ 4);\\ 2q_{j-1}-q_{j-2},& if~j\equiv 3 \ (mod\ 4). \end{cases} \end{eqnarray*} For $1\leq j\leq n,$ let $a_{j}=q_{4j}$; for $0\leq j\leq n-1,$ let $b_{j}=q_{4j+1},~c_{j}=q_{4j+2},~d_{j}=q_{4j+3}.$ Then $a_{1}=33,~b_{0}=4,~c_{0}=7,~d_{0}=10,~b_{1}=122,~c_{1}=211,~d_{1}=300,$ and for $j\geq 2$, we have \begin{eqnarray} \begin{cases} a_{j}=4d_{j-1}-c_{j-1};\\ b_{j}=4a_{j}-d_{j-1};\\ c_{j}=2b_{j}-a_{j};\\ d_{j}=2c_{j}-b_{j}. \end{cases} \end{eqnarray} From the first three equations in (3.9), one can get $a_{j}=\frac{2}{13}c_{j}+\frac{1}{13}c_{j-1}$. Next, substituting $a_{j}$ into the third equation, one has $b_{j}=\frac{15}{26}c_{j}+\frac{1}{26}c_{j-1}$. Then substituting $b_{j}$ into the fourth equation, we have $d_{j}=\frac{37}{26}c_{j}-\frac{1}{26}c_{j-1}.$ Finally, substituting $a_{j}$ and $d_{j}$ into the first equation, one has $c_{j}-30c_{j-1}+c_{j-2}=0.$ Since $\mu$ and $\nu$ are the roots of $x^{2}-30x+1=0$, \begin{eqnarray*} c_{j}=k_{1}\cdot\mu^{j}+k_{2}\cdot\nu^{j}. \end{eqnarray*} According to $c_{0}=7,~c_{1}=211,$ we have \begin{eqnarray*} \begin{cases} k_{1}+k_{2}=7;\\ k_{1}(15+4\sqrt{14})+k_{2}(15-4\sqrt{14})=211, \end{cases} \end{eqnarray*} and therefore \begin{eqnarray*} \begin{cases} k_{1}=\frac{7}{2}+\frac{53\sqrt{14}}{56};\\ k_{2}=\frac{7}{2}-\frac{53\sqrt{14}}{56}.
\end{cases} \end{eqnarray*} Thus \begin{eqnarray*} \begin{cases} a_{j}=\Big(\frac{1}{2}+\frac{9\sqrt{14}}{56}\Big)\cdot\mu^{j}+\Big(\frac{1}{2}-\frac{9\sqrt{14}}{56}\Big)\cdot\nu^{j};\\ b_{j}=\Big(2+\frac{31\sqrt{14}}{56}\Big)\cdot\mu^{j}+\Big(2-\frac{31\sqrt{14}}{56}\Big)\cdot\nu^{j};\\ c_{j}=\Big(\frac{7}{2}+\frac{53\sqrt{14}}{56}\Big)\cdot\mu^{j}+\Big(\frac{7}{2}-\frac{53\sqrt{14}}{56}\Big)\cdot\nu^{j};\\ d_{j}=\Big(5+\frac{75\sqrt{14}}{56}\Big)\cdot\mu^{j}+\Big(5-\frac{75\sqrt{14}}{56}\Big)\cdot\nu^{j}, \end{cases} \end{eqnarray*} as desired.\hfill\rule{1ex}{1ex}\\ Claim 2 can be proved in the same way as Claim 1.\\ \noindent{\bf Claim 2.} For $1\leq j\leq 4n$, \begin{eqnarray*} q'_{j}= \begin{cases} \Big(\frac{1}{2}+\frac{11\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j}{4}}+\Big(\frac{1}{2}-\frac{11\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j}{4}},& if~j\equiv 0\ (mod\ 4);\\ \Big(1+\frac{17\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j-1}{4}}+\Big(1-\frac{17\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j-1}{4}},& if~j\equiv 1\ (mod\ 4);\\ \Big(\frac{3}{2}+\frac{23\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j-2}{4}}+\Big(\frac{3}{2}-\frac{23\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j-2}{4}},& if~j\equiv 2\ (mod\ 4);\\ \Big(5+\frac{75\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j-3}{4}}+\Big(5-\frac{75\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j-3}{4}},& if~j\equiv 3\ (mod\ 4).
\end{cases} \end{eqnarray*} Using the linearity of the determinant in its last column, it is evident that \begin{eqnarray*} \det L_S&=& \left| \begin{array}{ccccccc} 4 & -1 & 0 & \cdots & 0 & 0 & 1\\ -1 & 2 & -1 & \cdots & 0 & 0 & 0\\ 0 & -1 & 2 & \cdots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 2 & -1 & 0\\ 0 & 0 & 0 & \cdots & 0 & 2 & -1\\ 1 & 0 & 0 & \cdots & 0 & -1 & 4 \\ \end{array} \right|_{4n}\\ &=&\left| \begin{array}{ccccccc} 4 & -1 & 0 & \cdots & 0 & 0 & 0\\ -1 & 2 & -1 & \cdots & 0 & 0 & 0\\ 0 & -1 & 2 & \cdots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 2 & -1 & 0\\ 0 & 0 & 0 & \cdots & 0 & 2 & -1\\ 1 & 0 & 0 & \cdots & 0 & -1 & 4 \\ \end{array} \right|_{4n}+\left| \begin{array}{ccccccc} 4 & -1 & 0 & \cdots & 0 & 0 & 1\\ -1 & 2 & -1 & \cdots & 0 & 0 & 0\\ 0 & -1 & 2 & \cdots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 2 & -1 & 0\\ 0 & 0 & 0 & \cdots & 0 & 2 & 0\\ 1 & 0 & 0 & \cdots & 0 & -1 & 0\\ \end{array} \right|_{4n}\\ &=&q_{4n}+(-1)^{4n+1}\cdot(-1)^{4n-1}+(-1)^{4n+1}\cdot\Big[(-1)^{4n-1}+(-1)^{4n}q'_{4n-2}\Big]\\ &=&q_{4n}-q'_{4n-2}+2.\\ \end{eqnarray*} Combining this with Claims 1 and 2, we arrive at the following claim.\\ \noindent{\bf Claim 3.} $\det L_{S}=\mu^{n}+\nu^{n}+2.$\\ So far, we have determined the denominator $\det L_{S}$ in Eq.(3.8).
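Claim 3 can also be checked numerically for small $n$. The following Python sketch (our own verification aid, not part of the proof; all helper names are ours) rebuilds $L_{S}$ from its display above and compares its exact integer determinant with $\mu^{n}+\nu^{n}+2$, computed through the integer recurrence satisfied by $t_{n}=\mu^{n}+\nu^{n}$ (since $\mu+\nu=30$ and $\mu\nu=1$).

```python
# Numerical sanity check of Claim 3: det(L_S) = mu^n + nu^n + 2, where L_S is
# the 4n x 4n matrix displayed earlier: diagonal pattern 4,2,2,4 repeated,
# -1 on the sub- and super-diagonals, and +1 in the (1,4n) and (4n,1) corners.

def det_bareiss(rows):
    """Exact determinant of an integer matrix (fraction-free Bareiss elimination)."""
    m = [row[:] for row in rows]
    n, sign, prev = len(m), 1, 1
    for k in range(n - 1):
        if m[k][k] == 0:                      # pivot if the diagonal entry vanishes
            for r in range(k + 1, n):
                if m[r][k] != 0:
                    m[k], m[r] = m[r], m[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev
            m[i][k] = 0
        prev = m[k][k]
    return sign * m[-1][-1]

def L_S(n):
    """L_S of the Mobius graph Q_n(8,4), read off from its display above."""
    N = 4 * n
    M = [[0] * N for _ in range(N)]
    for i in range(N):
        M[i][i] = 4 if i % 4 in (0, 3) else 2   # diagonal 4,2,2,4,4,2,2,4,...
    for i in range(N - 1):
        M[i][i + 1] = M[i + 1][i] = -1
    M[0][N - 1] = M[N - 1][0] = 1               # Mobius corner entries are +1
    return M

def t(n):
    """t_n = mu^n + nu^n via t_0 = 2, t_1 = 30, t_j = 30*t_{j-1} - t_{j-2}."""
    a, b = 2, 30
    for _ in range(n):
        a, b = b, 30 * b - a
    return a

for n in range(1, 5):
    assert det_bareiss(L_S(n)) == t(n) + 2      # Claim 3 for n = 1,...,4
```

Because $t_{n}$ satisfies an integer recurrence, the comparison is exact rather than floating point; for $n=1$, for instance, $\det L_{S}=32=\mu+\nu+2$.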
It remains to calculate $(-1)^{4n-1}b_{4n-1}$.\\ \noindent{\bf Claim 4.} $(-1)^{4n-1}b_{4n-1}=\frac{9n\sqrt{14}\big(\mu^{n}-\nu^{n}\big)}{14}.$\\ \noindent{\bf Proof of Claim 4.} Since $(-1)^{4n-1}b_{4n-1}$ equals the sum of all principal minors of order $4n-1$ of the matrix $L_{S}$, we obtain \begin{eqnarray*} (-1)^{4n-1}b_{4n-1}&=&\sum_{j=1}^{4n}\det L_{S}[j]\\ &=&\sum_{j=4,j\equiv0(mod\ 4)}^{4n}\det L_{S}[j]+\sum_{j=1,j\equiv1(mod\ 4)}^{4n-3}\det L_{S}[j]\\ &&+\sum_{j=2,j\equiv2(mod\ 4)}^{4n-2}\det L_{S}[j]+\sum_{j=3,j\equiv3(mod\ 4)}^{4n-1}\det L_{S}[j]. \end{eqnarray*} Let $\left( \begin{array}{cc} O & P\\ S & T\\ \end{array} \right)$ be the block form of $L_{S}[j]$ and let $N=\left( \begin{array}{cc} 0 & -I_{j-1}\\ I_{4n-j} & 0\\ \end{array} \right)$. Since $N$ is orthogonal, $\det L_{S}[j]=\det\big(N'L_{S}[j]N\big)$, and it is easy to get \begin{eqnarray} N'L_{S}[j]N=\left( \begin{array}{cc} 0 & -I_{j-1}\\ I_{4n-j} & 0\\ \end{array} \right)' \left( \begin{array}{cc} O & P\\ S & T\\ \end{array} \right) \left( \begin{array}{cc} 0 & -I_{j-1}\\ I_{4n-j} & 0\\ \end{array} \right)= \left( \begin{array}{cc} T & -S\\ -P & O\\ \end{array} \right). \end{eqnarray} Using Eq.(3.10), we distinguish the following four cases according to the residue of $j$ modulo $4$.\\ {\bf Case 1.} For $ j\equiv0(mod\ 4)$~ and ~$4\leq j\leq 4n$. \begin{eqnarray*} \ N'L_S[j]N&=& \left( \begin{array}{ccccccc} 4 & -1 & 0 & \cdots & 0 & 0 & 0\\ -1 & 2 & -1 & \cdots & 0 & 0 & 0\\ 0 & -1 & 2 & \cdots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 4 & -1 & 0\\ 0 & 0 & 0 & \cdots & -1 & 2 & -1\\ 0 & 0 & 0 & \cdots & 0 & -1 & 2 \\ \end{array} \right)_{(4n-1)\times(4n-1)}, \end{eqnarray*} whose determinant is $q_{4n-1}$, so that \begin{eqnarray*} \sum_{j=4,j\equiv0(mod\ 4)}^{4n}\det L_{S}[j]=nq_{4n-1}=\frac{5n\sqrt{14}\big(\mu^{n}-\nu^{n}\big)}{56}. \end{eqnarray*} {\bf Case 2.} For $ j\equiv1(mod\ 4)$~ and ~$1\leq j\leq 4n-3$.
\begin{eqnarray*} \ N'L_S[j]N&=& \left( \begin{array}{ccccccc} 2 & -1 & 0 & \cdots & 0 & 0 & 0\\ -1 & 2 & -1 & \cdots & 0 & 0 & 0\\ 0 & -1 & 4 & \cdots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 2 & -1 & 0\\ 0 & 0 & 0 & \cdots & -1 & 2 & -1\\ 0 & 0 & 0 & \cdots & 0 & -1 & 4 \\ \end{array} \right)_{(4n-1)\times(4n-1)}, \end{eqnarray*} whose determinant is $q'_{4n-1}$, so that \begin{eqnarray*} \sum_{j=1,j\equiv1(mod\ 4)}^{4n-3}\det L_{S}[j]=nq'_{4n-1}=\frac{5n\sqrt{14}\big(\mu^{n}-\nu^{n}\big)}{56}. \end{eqnarray*} {\bf Case 3.} For $ j\equiv2(mod\ 4)$~ and ~$2\leq j\leq 4n-2$. \begin{eqnarray*} \ N'L_S[j]N&=& \left( \begin{array}{ccccccc} 2 & -1 & 0 & \cdots & 0 & 0 & 0\\ -1 & 4 & -1 & \cdots & 0 & 0 & 0\\ 0 & -1 & 4 & \cdots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 2 & -1 & 0\\ 0 & 0 & 0 & \cdots & -1 & 4 & -1\\ 0 & 0 & 0 & \cdots & 0 & -1 & 4 \\ \end{array} \right)_{(4n-1)\times(4n-1)}, \end{eqnarray*} and hence \begin{eqnarray*} \sum_{j=2,j\equiv2(mod\ 4)}^{4n-2}\det L_{S}[j]=n\Big[2\big(4q_{4n-3}-q'_{4n-4}\big)-q_{4n-3}\Big]=\frac{13n\sqrt{14}\big(\mu^{n}-\nu^{n}\big)}{56}. \end{eqnarray*} {\bf Case 4.} For $ j\equiv3(mod\ 4)$~ and ~$3\leq j\leq 4n-1$. \begin{eqnarray*} \ N'L_S[j]N&=& \left( \begin{array}{ccccccc} 4 & -1 & 0 & \cdots & 0 & 0 & 0\\ -1 & 4 & -1 & \cdots & 0 & 0 & 0\\ 0 & -1 & 2 & \cdots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 4 & -1 & 0\\ 0 & 0 & 0 & \cdots & -1 & 4 & -1\\ 0 & 0 & 0 & \cdots & 0 & -1 & 2 \\ \end{array} \right)_{(4n-1)\times(4n-1)}, \end{eqnarray*} and hence \begin{eqnarray*} \sum_{j=3,j\equiv3(mod\ 4)}^{4n-1}\det L_{S}[j]=n(4q_{4n-2}-q'_{4n-3})=\frac{13n\sqrt{14}\big(\mu^{n}-\nu^{n}\big)}{56}.
\end{eqnarray*} Summing the four cases, one obtains \begin{eqnarray*} (-1)^{4n-1}b_{4n-1}=\frac{9n\sqrt{14}\big(\mu^{n}-\nu^{n}\big)}{14}, \end{eqnarray*} as desired.\hfill\rule{1ex}{1ex}\\ Combining Claims 3 and 4, we immediately get \begin{eqnarray} \sum_{j=1}^{4n}\frac{1}{\beta_{j}}=\frac{9n\sqrt{14}}{14}\Bigg(\frac{\mu^{n}-\nu^{n}}{\mu^{n}+\nu^{n}+2}\Bigg). \end{eqnarray} \hfill\rule{1ex}{1ex}\\ \begin{thm} Let $Q_{n}\big(8,4\big)$ be the M\"{o}bius graph constructed from $n$ octagons and $n$ quadrilaterals. Then \begin{eqnarray*} Kf\big(Q_{n}(8,4)\big)=\frac{32n^{3}-2n}{3}+\frac{36n^{2}\sqrt{14}}{7}\Bigg(\frac{\big(\mu^{n}-\nu^{n}\big)}{\big(\mu^{n}+\nu^{n}\big)+2}\Bigg). \end{eqnarray*} \end{thm} \noindent{\bf Proof.} Substituting Eqs.(3.7) and (3.11) into (3.6), the Kirchhoff index of $Q_{n}(8,4)$ can be expressed as \begin{eqnarray*} Kf\big(Q_{n}(8,4)\big)&=&8n\Bigg(\sum_{i=2}^{4n}\frac{1}{\alpha_{i}}+\sum_{j=1}^{4n}\frac{1}{\beta_{j}}\Bigg)\\ &=&8n\Bigg(\frac{16n^{2}-1}{12}+\frac{9n\sqrt{14}}{14}\cdot\frac{\big(\mu^{n}-\nu^{n}\big)}{\big(\mu^{n}+\nu^{n}\big)+2}\Bigg)\\ &=&\frac{32n^{3}-2n}{3}+\frac{36n^{2}\sqrt{14}}{7}\Bigg(\frac{\big(\mu^{n}-\nu^{n}\big)}{\big(\mu^{n}+\nu^{n}\big)+2}\Bigg), \end{eqnarray*} which is the desired result.\hfill\rule{1ex}{1ex}\\ Next, we aim to compute the Kirchhoff index of $Q'_{n}\big(8,4\big)$. By Figure 2, it is apparent that $\pi=(1,1')(2,2') \cdots(4n,(4n)')$ is an automorphism of $Q'_{n}\big(8,4\big)$; in other words, $v_{0}=\emptyset,~v_{1}=\big\{1,2,3,\cdots, 4n\big\}$ and $v_{2}=\big\{1',2',3',\cdots,(4n)'\big\}.$ Likewise, we write $L_{A}\big(Q'_{n}(8,4)\big)$ and $L_{S}\big(Q'_{n}(8,4)\big)$ as $L'_{A}$ and $L'_{S}$.
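Before carrying out the analogous computation for the cylinder graph, the closed formula above can be cross-checked numerically. Since $\sum_{j}1/\beta_{j}=\sum_{j}\det L_{S}[j]/\det L_{S}$, the Kirchhoff index of $Q_{n}(8,4)$ can be computed from determinants alone via Eqs.(3.6)-(3.8) and compared with the closed formula. The sketch below is our own check (helper names are ours; $L_{S}$ is rebuilt from its display earlier in this section):

```python
# Cross-check of the Kirchhoff index formula: exact rational evaluation of
# Kf(Q_n(8,4)) = 8n[(16n^2-1)/12 + sum_j det L_S[j] / det L_S]
# versus the closed formula (32n^3-2n)/3 + (36 n^2 sqrt(14)/7)(mu^n-nu^n)/(mu^n+nu^n+2).
from fractions import Fraction
from math import sqrt

def det_bareiss(rows):
    """Exact determinant of an integer matrix (fraction-free Bareiss elimination)."""
    m = [row[:] for row in rows]
    n, sign, prev = len(m), 1, 1
    for k in range(n - 1):
        if m[k][k] == 0:
            for r in range(k + 1, n):
                if m[r][k] != 0:
                    m[k], m[r] = m[r], m[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev
            m[i][k] = 0
        prev = m[k][k]
    return sign * m[-1][-1]

def L_S(n):
    """L_S of the Mobius graph Q_n(8,4), read off from its display in this section."""
    N = 4 * n
    M = [[0] * N for _ in range(N)]
    for i in range(N):
        M[i][i] = 4 if i % 4 in (0, 3) else 2
    for i in range(N - 1):
        M[i][i + 1] = M[i + 1][i] = -1
    M[0][N - 1] = M[N - 1][0] = 1
    return M

def kf_from_minors(n):
    """Kf via principal minors: sum_j 1/beta_j = sum_j det L_S[j] / det L_S."""
    M, N = L_S(n), 4 * n
    minors = sum(det_bareiss([row[:j] + row[j + 1:]
                              for i, row in enumerate(M) if i != j])
                 for j in range(N))
    return 8 * n * (Fraction(16 * n * n - 1, 12) + Fraction(minors, det_bareiss(M)))

def kf_closed(n):
    """Closed formula of the theorem above (floating point)."""
    mu, nu = 15 + 4 * sqrt(14), 15 - 4 * sqrt(14)
    return ((32 * n ** 3 - 2 * n) / 3
            + 36 * n * n * sqrt(14) / 7 * (mu ** n - nu ** n) / (mu ** n + nu ** n + 2))

assert kf_from_minors(1) == 28          # agrees with the first entry of Table 1
for n in range(1, 4):
    assert abs(float(kf_from_minors(n)) - kf_closed(n)) < 1e-6
```

For $n=1$ the minor computation gives $8\big(\frac{15}{12}+\frac{72}{32}\big)=28$ exactly, matching both the closed formula and Table 1 below.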
Thus one can get $L'_{A}=L_{A}$ and \begin{eqnarray*} L'_S&=& \left( \begin{array}{ccccccccccc} 4 & -1 & & & & & & & & & -1\\ -1 & 2 & -1 & & & & & & & &\\ & -1 & 2 & -1 & & & & & & &\\ & & -1 & 4 & -1 & & & & & &\\ & & & -1 & 4 & -1 & & & & &\\ & & & & -1 & 2 & -1 & & & &\\ & & & & & & \ddots & & & &\\ & & & & & & -1 & 4 & -1 & &\\ & & & & & & & -1 & 2 & -1 &\\ & & & & & & & & -1 & 2 & -1\\ -1 & & & & & & & & & -1 & 4\\ \end{array} \right)_{(4n)\times(4n)}. \end{eqnarray*} \begin{thm} Let $Q'_{n}\big(8,4\big)$ be the cylinder graph of the octagonal-quadrilateral network. Then \begin{eqnarray*} Kf\big(Q'_{n}(8,4)\big)=\frac{32n^{3}-2n}{3}+\frac{36n^{2}\sqrt{14}}{7}\Bigg(\frac{\big(\mu^{n}-\nu^{n}\big)}{\big(\mu^{n}+\nu^{n}\big)-2}\Bigg). \end{eqnarray*} \end{thm} \noindent{\bf Proof.} Assume that $0=\xi_{1}<\xi_{2}\leq\xi_{3}\leq\cdots\leq\xi_{4n}$ are the roots of $P_{L'_{A}}\big(x\big)=0$, and $0<\lambda_{1}\leq\lambda_{2}\leq\lambda_{3}\leq\cdots\leq\lambda_{4n}$ are the roots of $P_{L'_{S}}\big(x\big)=0$. By Lemma 2.1, we have $S_{p}\big(Q'_{n}(8,4)\big)=\big\{\xi_{1},\xi_{2},\cdots,\xi_{4n},\lambda_{1},\lambda_{2},\cdots,\lambda_{4n}\big\}$. Then we straightforwardly get \begin{eqnarray*} Kf\big(Q'_{n}(8,4)\big)&=&8n\Bigg(\sum_{i=2}^{4n}\frac{1}{\xi_{i}}+\sum_{j=1}^{4n}\frac{1}{\lambda_{j}}\Bigg)\\ &=&2Kf\big(C_{4n}\big)+8n\cdot\frac{ (-1)^{4n-1}b'_{4n-1}}{\det L'_{S}}. \end{eqnarray*} Proceeding exactly as before, one can directly verify that $\det L'_{S}[j]=\det L_{S}[j]$ and $b'_{4n-1}=b_{4n-1}$. Note that \begin{eqnarray*} \det L'_S&=& \left| \begin{array}{ccccccc} 4 & -1 & 0 & \cdots & 0 & 0 & -1\\ -1 & 2 & -1 & \cdots & 0 & 0 & 0\\ 0 & -1 & 2 & \cdots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 2 & -1 & 0\\ 0 & 0 & 0 & \cdots & 0 & 2 & -1\\ -1 & 0 & 0 & \cdots & 0 & -1 & 4\\ \end{array} \right|_{4n}=q_{4n}-q'_{4n-2}-2.
\end{eqnarray*} Hence, \begin{eqnarray*} Kf\big(Q'_{n}(8,4)\big)&=&8n\Bigg(\sum_{i=2}^{4n}\frac{1}{\xi_{i}}+\sum_{j=1}^{4n}\frac{1}{\lambda_{j}}\Bigg)\\ &=&2Kf\big(C_{4n}\big)+8n\frac{ (-1)^{4n-1}b'_{4n-1}}{\det L'_{S}}\\ &=&\frac{32n^{3}-2n}{3}+\frac{36n^{2}\sqrt{14}}{7}\Bigg(\frac{\big(\mu^{n}-\nu^{n}\big)}{\big(\mu^{n}+\nu^{n}\big)-2}\Bigg). \end{eqnarray*} This completes the proof.\hfill\rule{1ex}{1ex}\\ The Kirchhoff indices of $Q_{n}\big(8,4\big)$ and $Q'_{n}\big(8,4\big)$ for $n=1,\ldots,15$ are tabulated in Table 1.\\ \begin{table}[htbp] \setlength{\abovecaptionskip}{0.15cm} \centering\vspace{.3cm} \setlength{\tabcolsep}{25pt} \caption{ The Kirchhoff indices of $Q_{1}(8,4),Q_{2}(8,4),...,Q_{15}(8,4)$ and $Q'_{1}(8,4),Q'_{2}(8,4),...,Q'_{15}(8,4).$} \begin{tabular}{c|c|c|c} \hline $G$ & $Kf(G)$ & $G$ & $Kf(G)$ \\ \hline $Q_{1}(8,4)$ & $28.00$ & $Q'_{1}(8,4)$ & $30.57$ \\ $Q_{2}(8,4)$ & $160.80$ & $Q'_{2}(8,4)$ & $161.14$ \\ $Q_{3}(8,4)$ & $459.17$ & $Q'_{3}(8,4)$ & $459.20$ \\ $Q_{4}(8,4)$ & $987.88$ & $Q'_{4}(8,4)$ & $987.89$ \\ $Q_{5}(8,4)$ & $1811.07$ & $Q'_{5}(8,4)$ & $1811.07$ \\ $Q_{6}(8,4)$ & $2992.74$ & $Q'_{6}(8,4)$ & $2992.74$ \\ $Q_{7}(8,4)$ & $4596.90$ & $Q'_{7}(8,4)$ & $4596.90$ \\ $Q_{8}(8,4)$ & $6687.54$ & $Q'_{8}(8,4)$ & $6687.54$ \\ $Q_{9}(8,4)$ & $9328.67$ & $Q'_{9}(8,4)$ & $9328.67$ \\ $Q_{10}(8,4)$ & $12584.28$ & $Q'_{10}(8,4)$ & $12584.28$ \\ $Q_{11}(8,4)$ & $16518.38$ & $Q'_{11}(8,4)$ & $16518.38$ \\ $Q_{12}(8,4)$ & $21194.96$ & $Q'_{12}(8,4)$ & $21194.96$ \\ $Q_{13}(8,4)$ & $26678.03$ & $Q'_{13}(8,4)$ & $26678.03$ \\ $Q_{14}(8,4)$ & $33031.59$ & $Q'_{14}(8,4)$ & $33031.59$ \\ $Q_{15}(8,4)$ & $40319.63$ & $Q'_{15}(8,4)$ & $40319.63$ \\ \hline \end{tabular} \end{table} In the sequel, we concentrate on computing the complexity of $Q_{n}\big(8,4\big)$ and $Q'_{n}\big(8,4\big)$.
\begin{thm} Let $Q_{n}\big(8,4\big)$ and $Q'_{n}\big(8,4\big)$ be the M\"{o}bius graph and the cylinder graph of the linear octagonal-quadrilateral network, respectively. Then \begin{eqnarray*} \mathscr{L}\big(Q_{n}(8,4)\big)=2n\big(\mu^{n}+\nu^{n}+2\big), \end{eqnarray*} and \begin{eqnarray*} \mathscr{L}\big(Q'_{n}(8,4)\big)=2n\big(\mu^{n}+\nu^{n}-2\big). \end{eqnarray*} \end{thm} \noindent{\bf Proof.} Note that $L_{A}=L'_{A}=L(C_{4n})$ and $\tau(C_{4n})=4n.$ Based on Lemma 2.3 and Lemma 2.4, one has \begin{eqnarray*} \prod_{i=2}^{4n}\alpha_{i}=\prod_{i=2}^{4n}\xi_{i}=4n\tau(C_{4n})=16n^{2}. \end{eqnarray*} Note that \begin{eqnarray*} \prod_{j=1}^{4n}\beta_{j}=\det L_{S}=\mu^{n}+\nu^{n}+2 \end{eqnarray*} and \begin{eqnarray*} \prod_{j=1}^{4n}\lambda_{j}=\det L'_{S}=\mu^{n}+\nu^{n}-2. \end{eqnarray*} Hence, \begin{eqnarray*} \mathscr{L}\big(Q_{n}(8,4)\big)=\frac{1}{8n}\prod_{i=2}^{4n}\alpha_{i}\cdot \prod_{j=1}^{4n}\beta_{j}=2n\big(\mu^{n}+\nu^{n}+2\big), \end{eqnarray*} and \begin{eqnarray*} \mathscr{L}\big(Q'_{n}(8,4)\big)=\frac{1}{8n}\prod_{i=2}^{4n}\xi_{i}\cdot \prod_{j=1}^{4n}\lambda_{j}=2n\big(\mu^{n}+\nu^{n}-2\big).\\ \end{eqnarray*} \hfill\rule{1ex}{1ex}\\ The complexities of $Q_{n}(8,4)$ and $Q'_{n}(8,4)$ computed in this way are tabulated in Table 2.
\begin{table}[htbp] \setlength{\abovecaptionskip}{0.15cm} \centering \vspace{.2cm} \setlength{\tabcolsep}{10pt} \caption{The complexity of $Q_{1},Q_{2},...,Q_{8}$ and $Q'_{1},Q'_{2},...,Q'_{8}$.} \begin{tabular}{c|c|c|c} \hline ~~~~~~$G$ ~~~~~~&~~~~~~$\mathscr{L}(G)$ ~~~~~~&~~~~~~$G$ ~~~~~~&~~~~~~$\mathscr{L}(G)$~~~~~~ \\ \hline $Q_{1}$ & $64$ & $Q'_{1}$ & $56$ \\ $Q_{2}$ & $3600$ & $Q'_{2}$ & $3584$ \\ $Q_{3}$ & $161472$ & $Q'_{3}$ & $161448$ \\ $Q_{4}$ & $6451232$ & $Q'_{4}$ & $6451200$ \\ $Q_{5}$ & $241651520$ & $Q'_{5}$ & $241651480$\\ $Q_{6}$ & $8689777200$ & $Q'_{6}$ & $8689777152$\\ $Q_{7}$ & $303803889088$ & $Q'_{7}$ & $303803889032$\\ $Q_{8}$ & $10404546969664$ & $Q'_{8}$ & $10404546969600$\\ \hline \end{tabular} \end{table} \section{Relation between Kirchhoff index and Wiener index of $Q_{n}(8,4)$ and $Q'_{n}(8,4)$}\label{sct4} The first part of this section derives the Wiener indices of $Q_{n}\big(8,4\big)$ and $Q'_{n}\big(8,4\big)$; the rest compares the Wiener index with the Kirchhoff index.\\ \begin{thm}For the network $Q_{n}\big(8,4\big)$, \begin{eqnarray*} W\big(Q_{n}(8,4)\big)=32n^3+16n^2+4n. \end{eqnarray*} \end{thm} \noindent{\bf Proof.} For the M\"{o}bius graph $Q_{n}\big(8,4\big)$, two types of vertices can be considered $(n\geq2)$:\\ (a) vertices of degree two,\\ (b) vertices of degree three.\\ For the vertices of type (a), we obtain \begin{eqnarray} \omega_{1}(k)=4n\Bigg[\Big(\sum_{k=1}^{2n}k+\sum_{k=1}^{2n}k+\sum_{k=2}^{2n}k+\sum_{k=3}^{2n}k\Big)+3+4\Bigg], \end{eqnarray} and for those of type (b), we have \begin{eqnarray} \omega_{2}(k)=4n\Bigg(\sum_{k=1}^{2n}k+\sum_{k=1}^{2n}k+\sum_{k=1}^{2n}k+\sum_{k=2}^{2n}k\Bigg).
\end{eqnarray} Summing Eqs.(4.12) and (4.13), we get Eq.(4.14).\\ \begin{equation} \begin{split} W\big(Q_{n}(8,4)\big)&=\frac{\omega_{1}(k)+\omega_{2}(k)}{2}\\ &=\frac{4n\Bigg(\sum\limits_{k=1}^{2n}k+\sum\limits_{k=1}^{2n}k+\sum\limits_{k=1}^{2n}k+\sum\limits_{k=2}^{2n}k+\sum\limits_{k=1}^{2n}k+\sum\limits_{k=1}^{2n}k+\sum\limits_{k=2}^{2n}k+\sum\limits_{k=3}^{2n}k+7\Bigg)}{2}\\ &=32n^3+16n^2+4n. \end{split} \end{equation} \hfill\rule{1ex}{1ex}\\ \begin{thm} For the cylinder graph $Q'_{n}(8,4)$, \begin{eqnarray*} W\big(Q'_{n}(8,4)\big)=32n^3+16n^2-8n. \end{eqnarray*} \end{thm} \noindent{\bf Proof.} Arguing as in the proof of $W\big(Q_{n}(8,4)\big)$, a similar approach yields Eqs.(4.15) and (4.16): \begin{eqnarray} \omega_{1}(k)=4n\Big[\Big(\sum_{k=1}^{2n-1}k+\sum_{k=1}^{2n}k+\sum_{k=2}^{2n}k+\sum_{k=3}^{2n+1}k\Big)+3+4\Big], \end{eqnarray} \begin{eqnarray} \omega_{2}(k)=4n\Bigg[1+\Big(\sum_{k=1}^{2n-1}k+\sum_{k=1}^{2n}k+\sum_{k=2}^{2n}k+\sum_{k=2}^{2n+1}k\Big)\Bigg]. \end{eqnarray} Combining Eqs.(4.15) and (4.16), we derive Eq.(4.17).\\ \begin{equation} \begin{split} W\big(Q'_{n}(8,4)\big)&=\frac{\omega_{1}(k)+\omega_{2}(k)}{2}\\ &=\frac{4n\Bigg(\sum\limits_{k=1}^{2n-1}k+\sum\limits_{k=1}^{2n}k+\sum\limits_{k=2}^{2n}k+\sum\limits_{k=2}^{2n+1}k+\sum\limits_{k=1}^{2n-1}k+\sum\limits_{k=1}^{2n}k+\sum\limits_{k=2}^{2n}k+\sum\limits_{k=3}^{2n+1}k+8\Bigg)}{2}\\ &=32n^3+16n^2-8n.
\end{split} \end{equation} \hfill\rule{1ex}{1ex}\\ Whether $n$ is even or odd, the Wiener indices of the M\"{o}bius graph and the cylinder graph of the linear octagonal-quadrilateral network are cubic polynomials in $n$ that agree in their leading terms and differ only by $12n$.\\ \noindent{\bf Corollary 4.3.} As $n$ grows, the ratio of the Wiener index to the Kirchhoff index of both $Q_{n}(8,4)$ and $Q'_{n}(8,4)$ approaches $3$; namely, \begin{equation*} \lim_{n\to\infty}\frac{W\big(Q_{n}(8,4)\big)}{Kf\big(Q_{n}(8,4)\big)}=3, \end{equation*} and \begin{equation*} \lim_{n\to\infty}\frac{W\big(Q'_{n}(8,4)\big)}{Kf\big(Q'_{n}(8,4)\big)}=3. \end{equation*} \section{Conclusion}\label{sct5} \ \ \ \ \ A central result of this paper is a systematic expression for the Kirchhoff indices and the complexity of the linear M\"{o}bius and cylinder octagonal-quadrilateral networks in terms of the reciprocals of the eigenvalues of the Laplacian matrix. By comparing the Wiener index with the Kirchhoff index, we have uncovered the underlying relationship between these two invariants: their ratio consistently tends to three. Finally, we expect the results developed here to become a useful part of the research on other polygon chains and their variants. \section*{Funding} \ \ \ \ \ This work was supported in part by the Anhui Provincial Natural Science Foundation under Grant 2008085J01, by the Natural Science Fund of the Education Department of Anhui Province under Grant KJ2020A0478, and by the National Natural Science Foundation of China under Grants 11601006 and 12001008. \begin{thebibliography}{99} \small \setlength{\itemsep}{-.8mm} \bibitem{F.R} F.R.K. Chung, Spectral graph theory, American Mathematical Society, Providence, RI, 1997. \bibitem{B} A. Dobrynin, I. Gutman, S. Klav\v{z}ar, P. \v{Z}igert, Wiener index of hexagonal systems, Acta Applicandae Mathematicae 72 (2002) 247-294. \bibitem{C.} I. Gutman, S. C. Li, W.
Wei, Cacti with $n$ vertices and $t$ cycles having extremal Wiener index, Discrete Applied Mathematics 232 (2017) 189-200. \bibitem{D} I. Gutman, S. Klav\v{z}ar, B. Mohar, Match Communications in Mathematical and in Computer Chemistry 35 (1997) 1-259. \bibitem{Wiener} H. Wiener, Structural determination of paraffin boiling points, Journal of the American Chemical Society 69 (1947) 17-20. \bibitem{A.D} A. Dobrynin, Branchings in trees and the calculation of the Wiener index of a tree, Match Communications in Mathematical and in Computer Chemistry 41 (2000) 119-134. \bibitem{D.J} D.J. Klein, M. Randi\'c, Resistance distances, Journal of Mathematical Chemistry 12 (1993) 81-95. \bibitem{D.} D.J. Klein, Resistance-distance sum rules, Croatica Chemica Acta 75 (2002) 633-649. \bibitem{D.J.} D.J. Klein, O. Ivanciuc, Graph cyclicity, excess conductance, and resistance deficit, Journal of Mathematical Chemistry 30 (2001) 271-287. \bibitem{Y.L} Y.L. Yang, T.Y. Yu, Graph theory of viscoelasticities for polymers with starshaped, multiple-ring and cyclic multiple-ring molecules, Macromolecular Chemistry and Physics 186 (1985) 609-631. \bibitem{Y.} Y.J. Yang, H.P. Zhang, Kirchhoff index of linear hexagonal chains, International Journal of Quantum Chemistry 108(3) (2008) 503-512. \bibitem{G.W} X.Y. Geng, P. W, On the Kirchhoff indices and the number of spanning trees of M\"{o}bius phenylenes chain and cylinder phenylenes chain, Polycyclic Aromatic Compounds 41 (2019) 1681-1693. \bibitem{Z.L} J.B. Liu, Z.Y. Shi, Y.H. Pan, Computing the Laplacian spectrum of linear octagonal-quadrilateral networks and its applications, Polycyclic Aromatic Compounds 42 (2020) 659-670. \bibitem{H.B} X.L. M, H. B, The normalized Laplacians, degree-Kirchhoff index and the spanning trees of hexagonal M\"{o}bius graphs, Applied Mathematics and Computation 355 (2019) 33-46. \bibitem{E.} J. Huang, S.C. Li, L.
Sun, The normalized Laplacians, degree-Kirchhoff index and the spanning trees of linear hexagonal chains, Discrete Applied Mathematics 207 (2016) 67-79. \bibitem{F.} Y.J. Peng, S.C. Li, On the Kirchhoff index and the number of spanning trees of linear phenylenes, Match Communications in Mathematical and in Computer Chemistry 77 (2017) 765-780. \bibitem{G.} J.B. Liu, J. Zhao, Z.X. Zhu, On the number of spanning trees and normalized Laplacian of linear octagonal-quadrilateral networks, International Journal of Quantum Chemistry 119 (17) (2019) e25971. \bibitem{H.} J. Huang, S.C. Li, X. Li, The normalized Laplacian, degree-Kirchhoff index and spanning trees of the linear polyomino chains, Applied Mathematics and Computation 289 (2016) 324-334. \bibitem{M.} H.P. Zhang, Y.J. Yang, Resistance distance and Kirchhoff index in circulant graphs, International Journal of Quantum Chemistry 107 (2007) 330-339. \bibitem{K.} S. Li, W. Sun, S. Wang, Multiplicative degree-Kirchhoff index and number of spanning trees of a zigzag polyhex nanotube TUHC[2n,2], International Journal of Quantum Chemistry 119 (17) (2019) e25969. \bibitem{Gut} I. Gutman, B. Mohar, The quasi-Wiener and the Kirchhoff indices coincide, Journal of Chemical Information and Modeling 36 (1996) 982-985. \bibitem{H.Y.} H.Y. Chen, F.J. Zhang, Resistance distance and the normalized Laplacian spectrum, Discrete Applied Mathematics 155 (2007) 654-661. \bibitem{Y.X.} Y.J. Yang, X.Y. Jiang, Unicyclic graphs with extremal Kirchhoff index, Match Communications in Mathematical and in Computer Chemistry 60 (2008) 20-107.
\end{thebibliography} \end{document} \documentclass[10pt]{article} \usepackage{amsmath} \usepackage{amssymb} \usepackage{mathrsfs} \usepackage{amsfonts} \usepackage{mathrsfs,amscd,amssymb,amsthm,amsmath,bm,graphicx} \usepackage{graphicx} \usepackage{cite} \usepackage{color} \usepackage[shortcuts]{extdash} \setlength{\evensidemargin}{-4cm} \setlength{\oddsidemargin}{1mm} \setlength{\textwidth}{16cm} \setlength{\textheight}{22cm} \setlength{\headsep}{1.4mm} \makeatletter \renewcommand{\@seccntformat}[1]{{\csname the#1\endcsname}{\normalsize.}\hspace{.5em}} \makeatother \renewcommand{\thesection}{\normalsize \arabic{section}} \renewcommand{\refname}{\normalsize \bf{References}} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \def\theequation{\thesection.\arabic{equation}} \def \[{\begin{equation}} \def \]{\end{equation}} \def \CL{\rm CL} \def\O{\mathcal{O}(M)} \def\TN{\rm TN} \def\AO{A\mathcal{O}(M)} \def\Mod{{\rm Mod}} \def\Z{{\Bbb Z}} \newtheorem{thm}{Theorem}[section] \newtheorem{defi}{Definition} \newtheorem{lem}[thm]{Lemma} \newtheorem{exa}{Example} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{prob}[thm]{Problem} \newtheorem{Con}{Conjecture} \newtheorem{Fac}{Factor} \newtheorem{Cla}{Claim} \newcommand{\orcidauthorA}{0000-0002-9620-7692} \newcommand{\nz}{\rm nz} \newenvironment{kst} {\setlength{\leftmargini}{1.3\parindent} \begin{itemize} \setlength{\itemsep}{-1.1mm}} {\end{itemize}} \newenvironment{lst} {\setlength{\leftmargini}{0.75\parindent} \begin{itemize} \setlength{\itemsep}{-1.1mm}} {\end{itemize}} \newenvironment{wst} {\setlength{\leftmargini}{1.5\parindent} \begin{itemize} \setlength{\itemsep}{-1.1mm}} {\end{itemize}} \begin{document} \setlength{\baselineskip}{13pt} \begin{center}{\Large \bf The Laplacians, Kirchhoff index and complexity of linear M\"{o}bius and cylinder octagonal-quadrilateral networks } \vspace{4mm} {\large Jia-Bao Liu $^{1,*}$, Lu-Lu Fang $^{1,*}$, Qian Zheng $^{1}$, Xin-Bei Peng 
$^{1}$}\vspace{2mm} {\small $^{1}$School of Mathematics and Physics, Anhui Jianzhu University, Hefei 230601, P.R. China\\} \vspace{2mm} \end{center} \footnotetext{E-mail address: [email protected], [email protected],[email protected], [email protected].} \footnotetext{* Corresponding author.} {\noindent{\bf Abstract.}\ \ Spectral graph theory not only comprehensively reflects the topological structure and dynamic characteristics of networks, but also offers significant and noteworthy applications in theoretical chemistry, network science and other fields. Let $L_{n}^{8,4}$ denote a linear octagonal-quadrilateral network, consisting of $n$ eight-member rings and $n$ four-member rings. The M\"{o}bius graph $Q_{n}(8,4)$ is constructed by identifying the opposite edges of $L_{n}^{8,4}$ in reverse order, whereas the cylinder graph $Q'_{n}(8,4)$ identifies the opposite edges in order. In this paper, explicit formulas for the Kirchhoff indices and the complexity of $Q_{n}(8,4)$ and $Q'_{n}(8,4)$ are derived from the Laplacian characteristic polynomial by means of the decomposition theorem and Vieta's theorem. Surprisingly, the Kirchhoff index of $Q_{n}(8,4)$\big($Q'_{n}(8,4)$\big) is approximately one third of its Wiener index as $n\to\infty$.\\ \noindent{\bf Keywords}: Kirchhoff index; Complexity; M\"{o}bius graph; Cylinder graph.\vspace{2mm} \section{Introduction}\label{sct1} \ \ \ \ \ The graphs considered in this paper are simple, undirected and connected. We first recall some definitions commonly used in graph theory. Let $G$ be a simple undirected graph with $\big|V_G\big|=n$ and $\big|E_G\big|=m$. The adjacency matrix $A(G)=[a_{i,j}]_{n\times n}$ of $G$ is the symmetric matrix with $a_{i,j}=1$ when vertex $i$ is adjacent to vertex $j$ and $a_{i,j}=0$ otherwise. For a vertex $i\in V_G$, $d_{i}$ denotes the number of edges incident with $i$, and the diagonal matrix $D(G)=diag\{d_{1},d_{2},\cdots,d_{n}\}$ is the degree matrix.
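As a toy illustration of these definitions (our own example, not part of the paper), the adjacency matrix, the degree matrix, and the Laplacian $L(G)=D(G)-A(G)$ recalled below can be assembled for the $4$-cycle in a few lines of Python:

```python
# Build A, D and L = D - A for the 4-cycle C_4 on vertices 0,1,2,3.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1                    # adjacency matrix is symmetric
D = [[sum(A[i]) if i == j else 0 for j in range(n)] for i in range(n)]
L = [[D[i][j] - A[i][j] for j in range(n)] for i in range(n)]

assert all(sum(row) == 0 for row in L)       # every row of a Laplacian sums to 0
assert all(L[i][i] == 2 for i in range(n))   # every vertex of C_4 has degree 2
assert all(L[i][j] == L[j][i] for i in range(n) for j in range(n))  # symmetry
```

The zero row sums reflect the fact that $0$ is always a Laplacian eigenvalue, with the all-ones vector as eigenvector.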
Subsequently, the Laplacian matrix is defined as $L(G)=D(G)-A(G)$, whose spectrum can be written as $0=\mu_{1}<\mu_{2}\leq\cdots\leq\mu_{n}$. For more notation, the reader is referred to \cite{F.R}. Explicitly, the $(m,n)$-entry of the Laplacian matrix is given by \begin{eqnarray} \big(L(G)\big)_{mn}= \begin{cases} d_{m}, & m=n;\\ -1, & m\neq{n}~and~ v_{m}\backsim v_{n};\\ 0, & otherwise. \end{cases} \end{eqnarray} The distance $d_{G}(v_i,v_j)=d_{ij}$ between two vertices $v_{i}$ and $v_{j}$ is the length of a shortest path connecting them in $G$. Among the many parameters used to describe the structure of a graph in chemical and mathematical research \cite{B,C.,D}, the most commonly used is the distance-based Wiener index \cite{Wiener,A.D}, defined as \begin{eqnarray*} W(G)=\sum_{i<j}d_{ij}. \end{eqnarray*} Suppose that each edge of a connected graph $G$ is regarded as a unit resistor, and let the resistance distance \cite{D.J} between two vertices $i$ and $j$ be denoted by $r_{ij}$. In analogy with the Wiener index, the Kirchhoff index \cite{D.,D.J.} is defined in terms of resistance distances as \begin{eqnarray*} Kf(G)=\sum_{i<j}r_{ij}. \end{eqnarray*} Scholars in various fields have shown a strong interest in the study of the Kirchhoff index, which has prompted researchers to develop calculation methods that yield the Kirchhoff index of a given graph in closed form. As early as 1985, Y.L. Yang et al.\cite{Y.L} gave a decomposition theorem for the matrices associated with linear structures. Since then, many scholars have used this method to describe the Kirchhoff index of a series of linear hydrocarbon chains. For instance, in 2007, Yang et al.\cite{Y.} calculated the Kirchhoff index of the linear hexagonal chain by using the decomposition of the Laplacian matrix.
In 2019, Geng et al.\cite{G.W} used this method to calculate the Kirchhoff indices of the M\"{o}bius chain and the cylinder chain of phenylenes. Shi and Liu et al.\cite{Z.L} computed the Kirchhoff index of the linear octagonal-quadrilateral network in 2020. For more information, see \cite{H.B,E.,F.,G.,H.,M.,K.}. Motivated by these excellent works, this paper computes the Kirchhoff indices, the Wiener indices and the complexity of the M\"{o}bius graph and the cylinder graph of the linear octagonal-quadrilateral network.\\ Let $L_{n}^{8,4}$ be the linear octagonal-quadrilateral network illustrated in Figure 1, in which octagons and quadrilaterals are connected by common edges. The corresponding M\"{o}bius graph $Q_{n}\big(8,4\big)$ of the octagonal-quadrilateral network is obtained from $L_{n}^{8,4}$ by identifying its opposite edges in reverse order, and the cylinder graph $Q'_{n}\big(8,4\big)$ is obtained by identifying the opposite edges of $L_{n}^{8,4}$ in order. An illustration for $n=3$ is given in Figure 2. Obviously, we have $\big|V_{Q_{n}(8,4)}\big|=8n$,~$\big|E_{Q_{n}(8,4)}\big|=10n$ and $\big|V_{Q'_{n}(8,4)}\big|=8n$,~ $\big|E_{Q'_{n}(8,4)}\big|=10n.$ \begin{figure}[htbp] \centering\includegraphics[width=16.5cm,height=4cm]{1} \caption{Linear octagonal-quadrilateral networks. } \end{figure} \begin{figure}[htbp] \centering\includegraphics[width=13cm,height=4cm]{2} \caption{A special class of octagonal-quadrilateral networks for $n=3$.} \end{figure} The rest of this paper is organized as follows: In Section 2, we introduce some basic notation and related lemmas. In Section 3, the Kirchhoff index and the complexity of $Q_{n}\big(8,4\big)$ and $Q'_{n}\big(8,4\big)$ are computed. In Section 4, we compute the Wiener index and determine the ratio of the Wiener index to the Kirchhoff index of $Q_{n}\big(8,4\big)$ and $Q'_{n}\big(8,4\big)$. Finally, we provide insights into possible extensions in Section 5.
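The vertex and edge counts stated above can be double-checked by the handshake lemma: each of the $n$ quadrilaterals contributes four degree-three vertices, and the remaining $4n$ vertices have degree two. A small sanity-check sketch (ours):

```python
# Handshake-lemma check (our own sanity check) of |V| = 8n and |E| = 10n:
# Q_n(8,4) has 4n vertices of degree 3 (those lying on a quadrilateral)
# and 4n vertices of degree 2.
for n in range(1, 7):
    deg3, deg2 = 4 * n, 4 * n
    V = deg3 + deg2
    E = (3 * deg3 + 2 * deg2) // 2           # handshake lemma: sum of degrees = 2|E|
    assert (V, E) == (8 * n, 10 * n)
```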
\section{Preliminary}\label{sct2}
\ \ \ \ \ We introduce the notation and auxiliary results that will be used in the proofs in the rest of the article. The characteristic polynomial of a matrix $A$ of order $n$ is defined as $P_{A}(x)=\det\big(xI-A\big)$. Let $\varpi$ be an automorphism of $G$, written as a product of disjoint 1-cycles and transpositions, namely
\begin{equation*}
\mathscr{\varpi}=(\bar{1})(\bar{2})\cdots(\bar{m})(1,1')(2,2')\cdots(k,k').
\end{equation*}
Then $\big|V(G)\big|=m+2k$. Let $V_{0}=\big\{\bar{1},\bar{2},\cdots \bar{m}\big\},~ V_{1}=\big\{1,2\cdots k\big\},~V_{2}=\big\{1',2'\cdots k'\big\}.$ The Laplacian matrix can then be written in the block form
\begin{equation}
L(G)=\left(
\begin{array}{ccc}
L_{V_{0}V_{0}}& L_{V_{0}V_{1}}& L_{V_{0}V_{2}}\\
L_{V_{1}V_{0}}& L_{V_{1}V_{1}}& L_{V_{1}V_{2}}\\
L_{V_{2}V_{0}}& L_{V_{2}V_{1}}& L_{V_{2}V_{2}}\\
\end{array}
\right),
\end{equation}
where
\begin{equation}
L_{V_{0}V_{1}}=L_{V_{0}V_{2}},~ L_{V_{1}V_{2}}=L_{V_{2}V_{1}},~\text{and}~ L_{V_{1}V_{1}}=L_{V_{2}V_{2}}.
\end{equation}
Let
\begin{equation}
P=\left(
\begin{array}{ccc}
I_{m}& 0& 0\\
0& \frac{1}{\sqrt{2}}I_{k}& \frac{1}{\sqrt{2}}I_{k}\\
0& \frac{1}{\sqrt{2}}I_{k}& -\frac{1}{\sqrt{2}}I_{k}
\end{array}
\right),
\end{equation}
where $I_{m}$ is the identity matrix of order $m$, and let $P'$ denote the transpose of $P$. Combining Eqs.(2.2)-(2.4), we obtain
\begin{equation*}
P'L(G)P=\left(
\begin{array}{cc}
L_{A}& 0\\
0& L_{S}
\end{array}
\right),
\end{equation*}
where
\begin{eqnarray*}
L_{A}=\left(
\begin{array}{cc}
L_{V_{0}V_{0}}& \sqrt{2}L_{V_{0}V_{1}}\\
\sqrt{2}L_{V_{1}V_{0}}& L_{V_{1}V_{1}}+L_{V_{1}V_{2}}\\
\end{array}
\right),~
L_{S}=L_{V_{1}V_{1}}-L_{V_{1}V_{2}}.
\end{eqnarray*}
\begin{lem}\textup{\cite{Y.L}}
With $L_{A}$ and $L_{S}$ as above, the characteristic polynomial of $L(G)$ factors as
\begin{eqnarray*}
P_{L(G)}\big(x\big)=P_{L_{A}}\big(x\big)P_{L_{S}}\big(x\big).
\end{eqnarray*}
\end{lem}
\begin{lem}\textup{\cite{Gut}}
Gutman and Mohar established the following formula:
\begin{eqnarray*}
Kf(G)=n\sum_{k=2}^{n}\frac{1}{\mu_{k}} ,
\end{eqnarray*}
where $0=\mu_{1}<\mu_{2}\leq\cdots\leq\mu_{n}~(n\geq2)$ are the eigenvalues of $L(G)$.
\end{lem}
\begin{lem}\textup{\cite{H.Y.}}
The number of spanning trees of $G$, also called the complexity of $G$, is given by
\begin{eqnarray*}
\mathscr{L}(G)=\frac{1}{n}\prod_{i=2}^{n}\mu_{i}.
\end{eqnarray*}
\end{lem}
\begin{lem}\textup{\cite{Y.X.}}
Let $C_{n}$ denote the cycle with $n$ vertices. The Laplacian spectrum of $C_{n}$ is
\begin{eqnarray*}
S_{p}\big(L(C_{n})\big)=\Big\{2-2\cos\frac{(2\pi i)}{n}\Big|1\leq i\leq n\Big\},~
\end{eqnarray*}
and the Kirchhoff index of $C_{n}$ is given by
\begin{eqnarray*}
Kf\big(C_{n}\big)=\frac{n^{3}-n}{12}.
\end{eqnarray*}
\end{lem}
\section{Main results}\label{sct3}
\ \ \ \ \ In this section, we first derive a closed formula for the Kirchhoff index of $Q_{n}\big(8,4\big)$. By Lemma 2.1, we use the eigenvalues of the Laplacian matrix to compute $Kf\big(Q_{n}(8,4)\big)$. Besides, we determine the complexity of $Q_{n}\big(8,4\big)$. The treatment of $Q'_{n}\big(8,4\big)$ is similar. Given an $n\times n$ matrix $R$, we write $R\big[\{i_{1},i_{2},\cdots ,~i_{k}\}\big]$ for the submatrix obtained by deleting the $i_{1}$th,~ $i_{2}$th,~$\cdots ,~i_{k}$th rows and columns of $R$. The vertices of $Q_{n}\big(8,4\big)$ are labelled as in Figure 2. Evidently, $\pi=(1,1')(2,2')\cdots (4n,(4n)')$ is an automorphism of $Q_{n}\big(8,4\big)$, with $V_{0}=\emptyset,~V_{1}=\big\{1,2,3,\cdots, 4n\big\},~V_{2}=\big\{1',2',3',\cdots,(4n)'\big\}.$ Meanwhile, we abbreviate $L_{A}\big(Q_{n}(8,4)\big)$ and $L_{S}\big(Q_{n}(8,4)\big)$ to $L_{A}$ and $L_{S}$.
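The splitting of Lemma 2.1 can be illustrated numerically before it is put to work. The sketch below (Python with NumPy, our own illustration; the blocks are read off from the $n=1$ case of the matrices displayed in this section) assembles the Laplacian of the 8-vertex M\"{o}bius graph $Q_{1}(8,4)$ in the block form (2.2) with $V_{0}=\emptyset$ and checks that its spectrum is the union of the spectra of $L_{A}=L_{V_{1}V_{1}}+L_{V_{1}V_{2}}$ and $L_{S}=L_{V_{1}V_{1}}-L_{V_{1}V_{2}}$:

```python
import numpy as np

# Blocks of L(Q_1(8,4)): V1 = {1,2,3,4}, V2 = {1',2',3',4'}
L11 = np.array([[3, -1, 0, 0],
                [-1, 2, -1, 0],
                [0, -1, 2, -1],
                [0, 0, -1, 3]], dtype=float)
L12 = np.array([[-1, 0, 0, -1],
                [0, 0, 0, 0],
                [0, 0, 0, 0],
                [-1, 0, 0, -1]], dtype=float)

# Full Laplacian in block form, using L_{V2V2} = L_{V1V1} and L_{V2V1} = L_{V1V2}'
L = np.block([[L11, L12], [L12.T, L11]])

LA = L11 + L12  # L_A = L_{V1V1} + L_{V1V2}
LS = L11 - L12  # L_S = L_{V1V1} - L_{V1V2}

# Lemma 2.1: the Laplacian spectrum splits into the spectra of L_A and L_S
full = np.sort(np.linalg.eigvalsh(L))
split = np.sort(np.concatenate([np.linalg.eigvalsh(LA), np.linalg.eigvalsh(LS)]))
max_gap = float(np.max(np.abs(full - split)))  # should vanish up to round-off
```

Here `max_gap` is zero up to floating-point round-off, and the smallest eigenvalue of `L` is the expected $\mu_{1}=0$.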
Then one obtains
\begin{eqnarray}
L_{A}=L_{V_{1}V_{1}}+L_{V_{1}V_{2}},~L_{S}=L_{V_{1}V_{1}}-L_{V_{1}V_{2}}.
\end{eqnarray}
By Equation (3.5), we have
\begin{eqnarray*}
L_{V_1 V_1}&=&
\left(
\begin{array}{ccccccccccc}
3 & -1 & & & & & & & & &\\
-1 & 2 & -1 & & & & & & & &\\
 & -1 & 2 & -1 & & & & & & &\\
 & & -1 & 3 & -1 & & & & & &\\
 & & & -1 & 3 & -1 & & & & &\\
 & & & & -1 & 2 & -1 & & & &\\
 & & & & & & \ddots & & & &\\
 & & & & & & -1 & 3 & -1 & &\\
 & & & & & & & -1 & 2 & -1 &\\
 & & & & & & & & -1 & 2 & -1\\
 & & & & & & & & & -1 & 3\\
\end{array}
\right)_{(4n)\times(4n)},
\end{eqnarray*}
and
\begin{eqnarray*}
L_{V_1 V_2}&=&
\left(
\begin{array}{ccccccccccc}
-1 & & & & & & & & & & -1\\
 & 0 & & & & & & & & &\\
 & & 0 & & & & & & & &\\
 & & & -1 & & & & & & &\\
 & & & & -1 & & & & & &\\
 & & & & & 0 & & & & &\\
 & & & & & & \ddots & & & &\\
 & & & & & & & -1 & & & \\
 & & & & & & & & 0 & &\\
 & & & & & & & & & 0 &\\
-1 & & & & & & & & & & -1\\
\end{array}
\right)_{(4n)\times(4n)},
\end{eqnarray*}
where the diagonal entry $-1$ occurs exactly at the positions $j\equiv0,1~(mod\ 4)$, that is, at the degree-three vertices carrying the quadrilateral edges $j\sim j'$. Hence,
\begin{eqnarray*}
L_A&=&
\left(
\begin{array}{ccccccccccc}
2 & -1 & & & & & & & & & -1\\
-1 & 2 & -1 & & & & & & & &\\
 & -1 & 2 & -1 & & & & & & &\\
 & & -1 & 2 & -1 & & & & & &\\
 & & & -1 & 2 & -1 & & & & &\\
 & & & & -1 & 2 & -1 & & & &\\
 & & & & & & \ddots & & & &\\
 & & & & & & -1 & 2 & -1 & &\\
 & & & & & & & -1 & 2 & -1 &\\
 & & & & & & & & -1 & 2 & -1\\
-1 & & & & & & & & & -1 & 2\\
\end{array}
\right)_{(4n)\times(4n)},
\end{eqnarray*}
and
\begin{eqnarray*}
L_S&=&
\left(
\begin{array}{ccccccccccc}
4 & -1 & & & & & & & & & 1\\
-1 & 2 & -1 & & & & & & & &\\
 & -1 & 2 & -1 & & & & & & &\\
 & & -1 & 4 & -1 & & & & & &\\
 & & & -1 & 4 & -1 & & & & &\\
 & & & & -1 & 2 & -1 & & & &\\
 & & & & & & \ddots & & & &\\
 & & & & & & -1 & 4 & -1 & &\\
 & & & & & & & -1 & 2 & -1 &\\
 & & & & & & & & -1 & 2 & -1\\
1 & & & & & & & & & -1 & 4\\
\end{array}
\right)_{(4n)\times(4n)}.
\end{eqnarray*}
Assume that $0=\alpha_{1}<\alpha_{2}\leq\alpha_{3}\leq\cdots\leq\alpha_{4n}$ are the roots of $P_{L_{A}}\big(x\big)=0$, and that $0<\beta_{1}\leq\beta_{2}\leq\beta_{3}\leq\cdots\leq\beta_{4n}$ are the roots of $P_{L_{S}}\big(x\big)=0$. By Lemma 2.2, we immediately have
\begin{eqnarray}
Kf\big(Q_{n}(8,4)\big)=8n\Bigg(\sum_{i=2}^{4n}\frac{1}{\alpha_{i}}+\sum_{j=1}^{4n}\frac{1}{\beta_{j}}\Bigg).
\end{eqnarray}
Our next aim is to compute $\sum\limits_{i=2}^{4n}\frac{1}{\alpha_{i}}$ and $\sum\limits_{j=1}^{4n}\frac{1}{\beta_{j}}$.\\
Combining Lemma 2.1 and Lemma 2.4, one gets
\begin{eqnarray}
Kf\big(Q_{n}(8,4)\big)=8n\Bigg(\frac{16n^{2}-1}{12}+\sum_{j=1}^{4n}\frac{1}{\beta_{j}}\Bigg).
\end{eqnarray}
\begin{lem}
Let $\beta_{j}~(1\leq j\leq 4n)$ be the eigenvalues of $L_{S}$, and set $\mu=15+4\sqrt{14}$, $\nu=15-4\sqrt{14}$. Then
\begin{eqnarray}
\sum_{j=1}^{4n}\frac{1}{\beta_{j}}=\frac{(-1)^{4n-1}b_{4n-1}}{\det L_{S}}.
\end{eqnarray}
\end{lem}
\noindent{\bf Proof.}
$$P_{L_{S}}(x)=\det(xI-L_{S})=x^{4n}+b_{1}x^{4n-1}+\cdots+b_{4n-1}x+b_{4n},~b_{4n}\neq0.$$
The key to Eq.(3.8) is to determine $(-1)^{4n-1}b_{4n-1}$ and $\det L_{S}$. Let $M_{i}$ and $M'_{i}$ denote the $i$th order principal submatrices formed by the first $i$ rows and columns of the matrices $L^{1}_{S}$ and $L^{2}_{S}$ below, respectively.
For $i=1,2,...,4n-1$, we have
\begin{eqnarray*}
L_S^{1}&=&
\left(
\begin{array}{ccccccccccc}
4 & -1 & & & & & & & & &\\
-1 & 2 & -1 & & & & & & & &\\
 & -1 & 2 & -1 & & & & & & &\\
 & & -1 & 4 & -1 & & & & & &\\
 & & & -1 & 4 & -1 & & & & &\\
 & & & & -1 & 2 & -1 & & & &\\
 & & & & & & \ddots & & & &\\
 & & & & & & -1 & 4 & -1 & &\\
 & & & & & & & -1 & 2 & -1 &\\
 & & & & & & & & -1 & 2 & -1\\
 & & & & & & & & & -1 & 4\\
\end{array}
\right)_{(4n)\times(4n)},
\end{eqnarray*}
and
\begin{eqnarray*}
L_S^{2}&=&
\left(
\begin{array}{ccccccccccc}
2 & -1 & & & & & & & & &\\
-1 & 2 & -1 & & & & & & & &\\
 & -1 & 4 & -1 & & & & & & &\\
 & & -1 & 4 & -1 & & & & & &\\
 & & & -1 & 2 & -1 & & & & &\\
 & & & & -1 & 2 & -1 & & & &\\
 & & & & & & \ddots & & & &\\
 & & & & & & -1 & 2 & -1 & &\\
 & & & & & & & -1 & 2 & -1 &\\
 & & & & & & & & -1 & 4 & -1\\
 & & & & & & & & & -1 & 4\\
\end{array}
\right)_{(4n)\times(4n)}.
\end{eqnarray*}
Applying the results above, we establish the following two claims.\\
\noindent{\bf Claim 1.} For $1\leq j\leq 4n$,
\begin{eqnarray*}
q_{j}=
\begin{cases}
\Big(\frac{1}{2}+\frac{9\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j}{4}}+\Big(\frac{1}{2}-\frac{9\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j}{4}},& if~j\equiv 0\ (mod\ 4);\\
\Big(2+\frac{31\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j-1}{4}}+\Big(2-\frac{31\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j-1}{4}},& if~j\equiv 1\ (mod\ 4);\\
\Big(\frac{7}{2}+\frac{53\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j-2}{4}}+\Big(\frac{7}{2}-\frac{53\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j-2}{4}},& if~j\equiv 2\ (mod\ 4);\\
\Big(5+\frac{75\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j-3}{4}}+\Big(5-\frac{75\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j-3}{4}},& if~j\equiv 3\ (mod\ 4).
\end{cases}
\end{eqnarray*}
\noindent{\bf Proof of Claim 1.} Put $q_{j}:=\det M_{j}$, with $q_{0}=1$, and $q'_{j}:=\det M'_{j}$, with $q'_{0}=1$.
Then a direct computation gives $q_{1}=4,~q_{2}=7,~q_{3}=10,~q_{4}=33,~q_{5}=122,~q_{6}=211,~q_{7}=300,~q_{8}=989.$ For $4\leq j\leq 4n-1$, we have
\begin{eqnarray*}
q_{j}=
\begin{cases}
4q_{j-1}-q_{j-2},& if~j\equiv 0 \ (mod\ 4);\\
4q_{j-1}-q_{j-2},& if~j\equiv 1 \ (mod\ 4);\\
2q_{j-1}-q_{j-2},& if~j\equiv 2 \ (mod\ 4);\\
2q_{j-1}-q_{j-2},& if~j\equiv 3 \ (mod\ 4).
\end{cases}
\end{eqnarray*}
For $1\leq j\leq n-1$, let $a_{j}=q_{4j}$; and for $0\leq j\leq n-1$, let $b_{j}=q_{4j+1},~c_{j}=q_{4j+2},~d_{j}=q_{4j+3}.$ Then $a_{1}=33,~b_{0}=4,~c_{0}=7,~d_{0}=10,~b_{1}=122,~c_{1}=211,~d_{1}=300,$ and for $j\geq 1$ we have
\begin{eqnarray}
\begin{cases}
a_{j}=4d_{j-1}-c_{j-1};\\
b_{j}=4a_{j}-d_{j-1};\\
c_{j}=2b_{j}-a_{j};\\
d_{j}=2c_{j}-b_{j}.
\end{cases}
\end{eqnarray}
From the first three equations in (3.9), one gets $a_{j}=\frac{2}{13}c_{j}+\frac{1}{13}c_{j-1}$. Substituting $a_{j}$ into the third equation yields $b_{j}=\frac{15}{26}c_{j}+\frac{1}{26}c_{j-1}$. Substituting $b_{j}$ into the fourth equation yields $d_{j}=\frac{37}{26}c_{j}-\frac{1}{26}c_{j-1}.$ Finally, substituting $a_{j}$ and $d_{j}$ into the first equation gives $c_{j}-30c_{j-1}+c_{j-2}=0.$ Thus
\begin{eqnarray*}
c_{j}=k_{1}\cdot\mu^{j}+k_{2}\cdot\nu^{j}.
\end{eqnarray*}
According to $c_{0}=7$ and $c_{1}=211,$ we have
\begin{eqnarray*}
\begin{cases}
k_{1}+k_{2}=7;\\
k_{1}(15+4\sqrt{14})+k_{2}(15-4\sqrt{14})=211,
\end{cases}
\end{eqnarray*}
so that
\begin{eqnarray*}
\begin{cases}
k_{1}=\frac{7}{2}+\frac{53\sqrt{14}}{56};\\
k_{2}=\frac{7}{2}-\frac{53\sqrt{14}}{56}.
\end{cases} \end{eqnarray*} Thus \begin{eqnarray*} \begin{cases} a_{j}=\Big(\frac{1}{2}+\frac{9\sqrt{14}}{56}\Big)\cdot\mu^{j}+\Big(\frac{1}{2}-\frac{9\sqrt{14}}{56}\Big)\cdot\nu^{j};\\ b_{j}=\Big(2+\frac{31\sqrt{14}}{56}\Big)\cdot\mu^{j}+\Big(2-\frac{31\sqrt{14}}{56}\Big)\cdot\nu^{j};\\ c_{j}=\Big(\frac{7}{2}+\frac{53\sqrt{14}}{56}\Big)\cdot\mu^{j}+\Big(\frac{7}{2}-\frac{53\sqrt{14}}{56}\Big)\cdot\nu^{j};\\ d_{j}=\Big(5+\frac{75\sqrt{14}}{56}\Big)\cdot\mu^{j}+\Big(5-\frac{75\sqrt{14}}{56}\Big)\cdot\nu^{j}, \end{cases} \end{eqnarray*} as desired.\hfill\rule{1ex}{1ex}\\ Using a similar method of Claim 1, we can prove Claim 2.\\ \noindent{\bf Claim 2.} For $1\leq j\leq 4n$, \begin{eqnarray*} q^{'}_{j}= \begin{cases} \Big(\frac{1}{2}+\frac{11\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j}{4}}+\Big(\frac{1}{2}-\frac{11\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j}{4}},& if~j\equiv 0\ (mod\ 4);\\ \Big(1+\frac{17\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j-1}{4}}+\Big(1-\frac{17\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j-1}{4}},& if~j\equiv 1\ (mod\ 4);\\ \Big(\frac{3}{2}+\frac{23\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j-2}{4}}+\Big(\frac{3}{2}-\frac{23\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j-2}{4}},& if~j\equiv 2\ (mod\ 4);\\ \Big(5+\frac{75\sqrt{14}}{56}\Big)\cdot\mu^{\frac{j-3}{4}}+\Big(5-\frac{75\sqrt{14}}{56}\Big)\cdot\nu^{\frac{j-3}{4}},& if~j\equiv 3\ (mod\ 4). 
\end{cases}
\end{eqnarray*}
By standard properties of determinants, it is evident that
\begin{eqnarray*}
\det L_S&=&
\left|
\begin{array}{ccccccc}
4 & -1 & 0 & \cdots & 0 & 0 & 1\\
-1 & 2 & -1 & \cdots & 0 & 0 & 0\\
0 & -1 & 2 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 2 & -1 & 0\\
0 & 0 & 0 & \cdots & 0 & 2 & -1\\
1 & 0 & 0 & \cdots & 0 & -1 & 4 \\
\end{array}
\right|_{4n}\\
&=&\left|
\begin{array}{ccccccc}
4 & -1 & 0 & \cdots & 0 & 0 & 0\\
-1 & 2 & -1 & \cdots & 0 & 0 & 0\\
0 & -1 & 2 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 2 & -1 & 0\\
0 & 0 & 0 & \cdots & 0 & 2 & -1\\
1 & 0 & 0 & \cdots & 0 & -1 & 4 \\
\end{array}
\right|_{4n}+\left|
\begin{array}{ccccccc}
4 & -1 & 0 & \cdots & 0 & 0 & 1\\
-1 & 2 & -1 & \cdots & 0 & 0 & 0\\
0 & -1 & 2 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 2 & -1 & 0\\
0 & 0 & 0 & \cdots & 0 & 2 & 0\\
1 & 0 & 0 & \cdots & 0 & -1 & 0\\
\end{array}
\right|_{4n}\\
&=&q_{4n}+(-1)^{4n+1}\cdot(-1)^{4n-1}+(-1)^{4n+1}\cdot\Big[(-1)^{4n-1}+(-1)^{4n}q^{'}_{4n-2}\Big]\\
&=&q_{4n}-q^{'}_{4n-2}+2.\\
\end{eqnarray*}
Combining Claims 1 and 2 then yields the following claim.\\
\noindent{\bf Claim 3.} $\det L_{S}=\mu^{n}+\nu^{n}+2.$\\
This determines the denominator $\det L_{S}$ in Eq.(3.8).
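Claim 3 can be cross-checked numerically for small $n$. The sketch below (Python with NumPy, our own illustration) builds $L_{S}$ directly from its pattern (diagonal entries $4$ at positions $j\equiv0,1~(mod\ 4)$ and $2$ elsewhere, off-diagonal entries $-1$, corner entries $+1$) and compares $\det L_{S}$ with $\mu^{n}+\nu^{n}+2$:

```python
import numpy as np

def LS_matrix(n):
    # L_S of Q_n(8,4): tridiagonal with diagonal 4,2,2,4,4,2,2,4,...
    # (4 at positions j = 1, 0 mod 4), off-diagonal -1, corner entries +1
    m = 4 * n
    M = np.diag([4.0 if j % 4 in (1, 0) else 2.0 for j in range(1, m + 1)])
    for i in range(m - 1):
        M[i, i + 1] = M[i + 1, i] = -1.0
    M[0, m - 1] = M[m - 1, 0] = 1.0
    return M

mu, nu = 15 + 4 * 14 ** 0.5, 15 - 4 * 14 ** 0.5
dets = [np.linalg.det(LS_matrix(n)) for n in (1, 2, 3)]
claimed = [mu ** n + nu ** n + 2 for n in (1, 2, 3)]
max_rel_err = max(abs(d - c) / c for d, c in zip(dets, claimed))
```

For $n=1$ this gives $\det L_{S}=32=\mu+\nu+2$, in agreement with the expansion above.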
It remains to compute $(-1)^{4n-1}b_{4n-1}$.\\
\noindent{\bf Claim 4.} $(-1)^{4n-1}b_{4n-1}=\frac{9n\sqrt{14}\big(\mu^{n}-\nu^{n}\big)}{14}.$\\
\noindent{\bf Proof of Claim 4.} Note that $(-1)^{4n-1}b_{4n-1}$ is the sum of all principal minors of order $4n-1$ of the matrix $L_{S}$, so that
\begin{eqnarray*}
(-1)^{4n-1}b_{4n-1}&=&\sum_{j=1}^{4n}\det L_{S}[j]\\
&=&\sum_{j=4,j\equiv0(mod\ 4)}^{4n}\det L_{S}[j]+\sum_{j=1,j\equiv1(mod\ 4)}^{4n-3}\det L_{S}[j]\\
&&+\sum_{j=2,j\equiv2(mod\ 4)}^{4n-2}\det L_{S}[j]+\sum_{j=3,j\equiv3(mod\ 4)}^{4n-1}\det L_{S}[j].
\end{eqnarray*}
Let $\left(
\begin{array}{cc}
O & P\\
S & T\\
\end{array}
\right)$ be a block form of $L_{S}[j]$ and let N=$\left(
\begin{array}{cc}
0 & -I_{j-1}\\
I_{4n-j} & 0\\
\end{array}
\right)$. It is easy to see that
\begin{eqnarray}
N'L_{S}[j]N=\left(
\begin{array}{cc}
0 & -I_{j-1}\\
I_{4n-j} & 0\\
\end{array}
\right)'
\left(
\begin{array}{cc}
O & P\\
S & T\\
\end{array}
\right)
\left(
\begin{array}{cc}
0 & -I_{j-1}\\
I_{4n-j} & 0\\
\end{array}
\right)=
\left(
\begin{array}{cc}
T & -S\\
-P & O\\
\end{array}
\right).
\end{eqnarray}
Combining with Eq.(3.10) and distinguishing the residue of $j$ modulo 4, we consider the following four cases.\\
{\bf Case 1.} For $ j\equiv0(mod\ 4)$~ and ~$4\leq j\leq 4n-4$.
\begin{eqnarray*}
\ N'L_S[j]N&=&
\left(
\begin{array}{ccccccc}
4 & -1 & 0 & \cdots & 0 & 0 & 0\\
-1 & 2 & -1 & \cdots & 0 & 0 & 0\\
0 & -1 & 2 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 4 & -1 & 0\\
0 & 0 & 0 & \cdots & -1 & 2 & -1\\
0 & 0 & 0 & \cdots & 0 & -1 & 2 \\
\end{array}
\right)_{(4n-1)\times(4n-1)},
\end{eqnarray*}
so that $\det L_{S}[j]=q_{4n-1}$, and
\begin{eqnarray*}
\sum_{j=4,j\equiv0(mod\ 4)}^{4n}\det L_{S}[j]=nq_{4n-1}=\frac{5n\sqrt{14}\big(\mu^{n}-\nu^{n}\big)}{56}.
\end{eqnarray*}
{\bf Case 2.} For $ j\equiv1(mod\ 4)$~ and ~$1\leq j\leq 4n-3$.
\begin{eqnarray*}
\ N'L_S[j]N&=&
\left(
\begin{array}{ccccccc}
2 & -1 & 0 & \cdots & 0 & 0 & 0\\
-1 & 2 & -1 & \cdots & 0 & 0 & 0\\
0 & -1 & 4 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 2 & -1 & 0\\
0 & 0 & 0 & \cdots & -1 & 2 & -1\\
0 & 0 & 0 & \cdots & 0 & -1 & 4 \\
\end{array}
\right)_{(4n-1)\times(4n-1)},
\end{eqnarray*}
so that $\det L_{S}[j]=q'_{4n-1}$, and
\begin{eqnarray*}
\sum_{j=1,j\equiv1(mod\ 4)}^{4n-3}\det L_{S}[j]=nq'_{4n-1}=\frac{5n\sqrt{14}\big(\mu^{n}-\nu^{n}\big)}{56}.
\end{eqnarray*}
{\bf Case 3.} For $ j\equiv2(mod\ 4)$~ and ~$2\leq j\leq 4n-2$.
\begin{eqnarray*}
\ N'L_S[j]N&=&
\left(
\begin{array}{ccccccc}
2 & -1 & 0 & \cdots & 0 & 0 & 0\\
-1 & 4 & -1 & \cdots & 0 & 0 & 0\\
0 & -1 & 4 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 2 & -1 & 0\\
0 & 0 & 0 & \cdots & -1 & 4 & -1\\
0 & 0 & 0 & \cdots & 0 & -1 & 4 \\
\end{array}
\right)_{(4n-1)\times(4n-1)},
\end{eqnarray*}
and
\begin{eqnarray*}
\sum_{j=2,j\equiv2(mod\ 4)}^{4n-2}\det L_{S}[j]=n\Big[2\big(4q_{4n-3}-q'_{4n-4}\big)-q_{4n-3}\Big]=\frac{13n\sqrt{14}\big(\mu^{n}-\nu^{n}\big)}{56}.
\end{eqnarray*}
{\bf Case 4.} For $ j\equiv3(mod\ 4)$~ and ~$3\leq j\leq 4n-1$.
\begin{eqnarray*}
\ N'L_S[j]N&=&
\left(
\begin{array}{ccccccc}
4 & -1 & 0 & \cdots & 0 & 0 & 0\\
-1 & 4 & -1 & \cdots & 0 & 0 & 0\\
0 & -1 & 2 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 4 & -1 & 0\\
0 & 0 & 0 & \cdots & -1 & 4 & -1\\
0 & 0 & 0 & \cdots & 0 & -1 & 2 \\
\end{array}
\right)_{(4n-1)\times(4n-1)},
\end{eqnarray*}
and
\begin{eqnarray*}
\sum_{j=3,j\equiv3(mod\ 4)}^{4n-1}\det L_{S}[j]=n(4q_{4n-2}-q'_{4n-3})=\frac{13n\sqrt{14}\big(\mu^{n}-\nu^{n}\big)}{56}.
\end{eqnarray*}
Adding the four cases, one has
\begin{eqnarray*}
(-1)^{4n-1}b_{4n-1}=\frac{9n\sqrt{14}\big(\mu^{n}-\nu^{n}\big)}{14},
\end{eqnarray*}
as desired.\hfill\rule{1ex}{1ex}\\
Combining Claims 3 and 4, we immediately get
\begin{eqnarray}
\sum_{j=1}^{4n}\frac{1}{\beta_{j}}=\frac{9n\sqrt{14}}{14}\cdot\frac{\mu^{n}-\nu^{n}}{\mu^{n}+\nu^{n}+2}.
\end{eqnarray}
\hfill\rule{1ex}{1ex}\\
\begin{thm}
Let $Q_{n}\big(8,4\big)$ be the M\"{o}bius graph constructed from $n$ octagons and $n$ quadrilaterals. Then
\begin{eqnarray*}
Kf\big(Q_{n}(8,4)\big)=\frac{32n^{3}-2n}{3}+\frac{36n^{2}\sqrt{14}}{7}\Bigg(\frac{\big(\mu^{n}-\nu^{n}\big)}{\big(\mu^{n}+\nu^{n}\big)+2}\Bigg).
\end{eqnarray*}
\end{thm}
\noindent{\bf Proof.} Substituting Eqs.(3.7) and (3.11) into (3.6), the Kirchhoff index of $Q_{n}(8,4)$ can be expressed as
\begin{eqnarray*}
Kf\big(Q_{n}(8,4)\big)&=&8n\Bigg(\sum_{i=2}^{4n}\frac{1}{\alpha_{i}}+\sum_{j=1}^{4n}\frac{1}{\beta_{j}}\Bigg)\\
&=&8n\Bigg(\frac{16n^{2}-1}{12}+\frac{9n\sqrt{14}}{14}\cdot\frac{\big(\mu^{n}-\nu^{n}\big)}{\big(\mu^{n}+\nu^{n}\big)+2}\Bigg)\\
&=&\frac{32n^{3}-2n}{3}+\frac{36n^{2}\sqrt{14}}{7}\Bigg(\frac{\big(\mu^{n}-\nu^{n}\big)}{\big(\mu^{n}+\nu^{n}\big)+2}\Bigg),
\end{eqnarray*}
as desired.\hfill\rule{1ex}{1ex}\\
Next, we compute the Kirchhoff index of $Q'_{n}\big(8,4\big)$. By Figure 2, it is apparent that $\pi=(1,1')(2,2') \cdots(4n,(4n)')$ is an automorphism of $Q'_{n}\big(8,4\big)$, with $V_{0}=\emptyset,~V_{1}=\big\{1,2,3,\cdots, 4n\big\}~\text{and}~V_{2}=\big\{1',2',3',\cdots,(4n)'\big\}.$ Likewise, we abbreviate $L_{A}\big(Q'_{n}(8,4)\big)$ and $L_{S}\big(Q'_{n}(8,4)\big)$ to $L'_{A}$ and $L'_{S}$.
Thus one can get $L'_{A}=L_{A}$ and
\begin{eqnarray*}
L'_S&=&
\left(
\begin{array}{ccccccccccc}
4 & -1 & & & & & & & & & -1\\
-1 & 2 & -1 & & & & & & & &\\
 & -1 & 2 & -1 & & & & & & &\\
 & & -1 & 4 & -1 & & & & & &\\
 & & & -1 & 4 & -1 & & & & &\\
 & & & & -1 & 2 & -1 & & & &\\
 & & & & & & \ddots & & & &\\
 & & & & & & -1 & 4 & -1 & &\\
 & & & & & & & -1 & 2 & -1 &\\
 & & & & & & & & -1 & 2 & -1\\
-1 & & & & & & & & & -1 & 4\\
\end{array}
\right)_{(4n)\times(4n)}.
\end{eqnarray*}
\begin{thm}
Let $Q'_{n}\big(8,4\big)$ be the cylinder graph of the octagonal-quadrilateral network. Then
\begin{eqnarray*}
Kf\big(Q'_{n}(8,4)\big)=\frac{32n^{3}-2n}{3}+\frac{36n^{2}\sqrt{14}}{7}\Bigg(\frac{\big(\mu^{n}-\nu^{n}\big)}{\big(\mu^{n}+\nu^{n}\big)-2}\Bigg).
\end{eqnarray*}
\end{thm}
\noindent{\bf Proof.} Assume that $0=\xi_{1}<\xi_{2}\leq\xi_{3}\leq\cdots\leq\xi_{4n}$ are the roots of $P_{L'_{A}}\big(x\big)=0$, and that $0<\lambda_{1}\leq\lambda_{2}\leq\lambda_{3}\leq\cdots\leq\lambda_{4n}$ are the roots of $P_{L'_{S}}\big(x\big)=0$. By Lemma 2.1, we have $S_{p}\big(Q'_{n}(8,4)\big)=\big\{\xi_{1},\xi_{2},\cdots,\xi_{4n},\lambda_{1},\lambda_{2},\cdots,\lambda_{4n}\big\}$. It follows that
\begin{eqnarray*}
Kf\big(Q'_{n}(8,4)\big)&=&8n\Bigg(\sum_{i=2}^{4n}\frac{1}{\xi_{i}}+\sum_{j=1}^{4n}\frac{1}{\lambda_{j}}\Bigg)\\
&=&2Kf\big(C_{4n}\big)+8n\cdot\frac{ (-1)^{4n-1}b'_{4n-1}}{\det L'_{S}}.
\end{eqnarray*}
By an argument similar to the one above, one can verify that $\det L'_{S}[j]=\det L_{S}[j]$ and $b'_{4n-1}=b_{4n-1}$. Note that
\begin{eqnarray*}
\det L'_S&=&
\left|
\begin{array}{ccccccc}
4 & -1 & 0 & \cdots & 0 & 0 & -1\\
-1 & 2 & -1 & \cdots & 0 & 0 & 0\\
0 & -1 & 2 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 2 & -1 & 0\\
0 & 0 & 0 & \cdots & 0 & 2 & -1\\
-1 & 0 & 0 & \cdots & 0 & -1 & 4\\
\end{array}
\right|_{4n}=q_{4n}-q'_{4n-2}-2.
\end{eqnarray*}
Hence,
\begin{eqnarray*}
Kf\big(Q'_{n}(8,4)\big)&=&8n\Bigg(\sum_{i=2}^{4n}\frac{1}{\xi_{i}}+\sum_{j=1}^{4n}\frac{1}{\lambda_{j}}\Bigg)\\
&=&2Kf\big(C_{4n}\big)+8n\cdot\frac{ (-1)^{4n-1}b'_{4n-1}}{\det L'_{S}}\\
&=&\frac{32n^{3}-2n}{3}+\frac{36n^{2}\sqrt{14}}{7}\Bigg(\frac{\big(\mu^{n}-\nu^{n}\big)}{\big(\mu^{n}+\nu^{n}\big)-2}\Bigg).
\end{eqnarray*}
This completes the proof.\hfill\rule{1ex}{1ex}\\
The Kirchhoff indices of $Q_{n}\big(8,4\big)$ and $Q'_{n}\big(8,4\big)$ for $n=1,\cdot\cdot\cdot,15$ are tabulated in Table 1.\\
\begin{table}[htbp]
\setlength{\abovecaptionskip}{0.15cm}
\centering\vspace{.3cm}
\setlength{\tabcolsep}{25pt}
\caption{ The Kirchhoff indices of $Q_{1}(8,4),Q_{2}(8,4),...,Q_{15}(8,4)$ and $Q'_{1}(8,4),Q'_{2}(8,4),...,Q'_{15}(8,4).$}
\begin{tabular}{c|c|c|c}
\hline
$G$ & $Kf(G)$ & $G$ & $Kf(G)$ \\
\hline
$Q_{1}(8,4)$ & $28.00$ & $Q'_{1}(8,4)$ & $30.57$ \\
$Q_{2}(8,4)$ & $160.80$ & $Q'_{2}(8,4)$ & $161.14$ \\
$Q_{3}(8,4)$ & $459.17$ & $Q'_{3}(8,4)$ & $459.20$ \\
$Q_{4}(8,4)$ & $987.88$ & $Q'_{4}(8,4)$ & $987.89$ \\
$Q_{5}(8,4)$ & $1811.07$ & $Q'_{5}(8,4)$ & $1811.07$ \\
$Q_{6}(8,4)$ & $2992.74$ & $Q'_{6}(8,4)$ & $2992.74$ \\
$Q_{7}(8,4)$ & $4596.90$ & $Q'_{7}(8,4)$ & $4596.90$ \\
$Q_{8}(8,4)$ & $6687.54$ & $Q'_{8}(8,4)$ & $6687.54$ \\
$Q_{9}(8,4)$ & $9328.67$ & $Q'_{9}(8,4)$ & $9328.67$ \\
$Q_{10}(8,4)$ & $12584.28$ & $Q'_{10}(8,4)$ & $12584.28$ \\
$Q_{11}(8,4)$ & $16518.38$ & $Q'_{11}(8,4)$ & $16518.38$ \\
$Q_{12}(8,4)$ & $21194.96$ & $Q'_{12}(8,4)$ & $21194.96$ \\
$Q_{13}(8,4)$ & $26678.03$ & $Q'_{13}(8,4)$ & $26678.03$ \\
$Q_{14}(8,4)$ & $33031.59$ & $Q'_{14}(8,4)$ & $33031.59$ \\
$Q_{15}(8,4)$ & $40319.63$ & $Q'_{15}(8,4)$ & $40319.63$ \\
\hline
\end{tabular}
\end{table}
We now turn to the complexity of $Q_{n}\big(8,4\big)$ and $Q'_{n}\big(8,4\big)$.
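Before doing so, we note that the closed formulas for the Kirchhoff indices can be cross-checked numerically. The sketch below (Python with NumPy; the explicit construction of the two graphs from their blocks is our own reconstruction) computes $Kf$ from the Laplacian spectrum as in Lemma 2.2 and compares it with the closed formulas for small $n$:

```python
import numpy as np

SQRT14 = 14 ** 0.5
MU, NU = 15 + 4 * SQRT14, 15 - 4 * SQRT14

def q84_laplacian(n, mobius=True):
    # Laplacian of Q_n(8,4) (mobius=True) or Q'_n(8,4), assembled from the blocks:
    # vertices j = 1, 0 (mod 4) have degree 3 and carry the quadrilateral edge j ~ j'
    m = 4 * n
    L11, L12 = np.zeros((m, m)), np.zeros((m, m))
    for j in range(1, m + 1):
        if j % 4 in (1, 0):
            L11[j - 1, j - 1], L12[j - 1, j - 1] = 3.0, -1.0
        else:
            L11[j - 1, j - 1] = 2.0
    for i in range(m - 1):
        L11[i, i + 1] = L11[i + 1, i] = -1.0
    if mobius:
        L12[0, m - 1] = L12[m - 1, 0] = -1.0  # twisted identification 1 ~ (4n)', 4n ~ 1'
    else:
        L11[0, m - 1] = L11[m - 1, 0] = -1.0  # straight identification 1 ~ 4n, 1' ~ (4n)'
    return np.block([[L11, L12], [L12, L11]])

def kf_spectral(L):
    # Lemma 2.2: Kf = N * sum of reciprocals of the nonzero Laplacian eigenvalues
    N = L.shape[0]
    mu = np.sort(np.linalg.eigvalsh(L))[1:]
    return N * np.sum(1.0 / mu)

def kf_closed(n, mobius=True):
    s = 2 if mobius else -2
    return (32 * n ** 3 - 2 * n) / 3 + (36 * n ** 2 * SQRT14 / 7) * (
        (MU ** n - NU ** n) / (MU ** n + NU ** n + s))

max_err = max(abs(kf_spectral(q84_laplacian(n, mb)) - kf_closed(n, mb))
              for n in (1, 2, 3) for mb in (True, False))
```

For $n=1$ the closed formulas give $28.00$ and $30.57$, matching Table 1.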
\begin{thm}
Let $Q_{n}\big(8,4\big)$ and $Q'_{n}\big(8,4\big)$ be the M\"{o}bius graph and the cylinder graph of the linear octagonal-quadrilateral network, respectively. Then
\begin{eqnarray*}
\mathscr{L}\big(Q_{n}(8,4)\big)=2n\big(\mu^{n}+\nu^{n}+2\big),
\end{eqnarray*}
and
\begin{eqnarray*}
\mathscr{L}\big(Q'_{n}(8,4)\big)=2n\big(\mu^{n}+\nu^{n}-2\big).
\end{eqnarray*}
\end{thm}
\noindent{\bf Proof.} Recall that $L_{A}=L'_{A}=L(C_{4n})$ and $\tau(C_{4n})=4n.$ By Lemma 2.3 and Lemma 2.4, one has
\begin{eqnarray*}
\prod_{i=2}^{4n}\alpha_{i}=\prod_{i=2}^{4n}\xi_{i}=4n\tau(C_{4n})=16n^{2}.
\end{eqnarray*}
Note that
\begin{eqnarray*}
\prod_{j=1}^{4n}\beta_{j}=\det L_{S}=\mu^{n}+\nu^{n}+2,
\end{eqnarray*}
and
\begin{eqnarray*}
\prod_{j=1}^{4n}\lambda_{j}=\det L'_{S}=\mu^{n}+\nu^{n}-2.
\end{eqnarray*}
Hence,
\begin{eqnarray*}
\mathscr{L}\big(Q_{n}(8,4)\big)=\frac{1}{8n}\prod_{i=2}^{4n}\alpha_{i}\cdot \prod_{j=1}^{4n}\beta_{j}=2n\big(\mu^{n}+\nu^{n}+2\big),
\end{eqnarray*}
and
\begin{eqnarray*}
\mathscr{L}\big(Q'_{n}(8,4)\big)=\frac{1}{8n}\prod_{i=2}^{4n}\xi_{i}\cdot \prod_{j=1}^{4n}\lambda_{j}=2n\big(\mu^{n}+\nu^{n}-2\big).\\
\end{eqnarray*}
\hfill\rule{1ex}{1ex}\\
The complexities of $Q_{n}(8,4)$ and $Q'_{n}(8,4)$ computed in this way are tabulated in Table 2.
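The entries of Table 2 can be reproduced independently with the matrix-tree theorem, which states that the number of spanning trees equals any cofactor of the Laplacian matrix. The sketch below (Python with NumPy; the edge-list construction of the two graphs is our own) performs this check for small $n$:

```python
import numpy as np

def q84_edges(n, mobius=True):
    # Edge list of Q_n(8,4)/Q'_n(8,4): two 4n-vertex paths (0..4n-1 and their primed
    # copies 4n..8n-1), quadrilateral rungs at positions j = 1, 0 (mod 4), and the
    # closing edges, twisted for the Mobius graph and straight for the cylinder
    m = 4 * n
    E = [(i, i + 1) for i in range(m - 1)] + [(m + i, m + i + 1) for i in range(m - 1)]
    E += [(j, m + j) for j in range(m) if (j + 1) % 4 in (1, 0)]  # rungs j ~ j'
    if mobius:
        E += [(0, 2 * m - 1), (m - 1, m)]  # 1 ~ (4n)' and 4n ~ 1'
    else:
        E += [(0, m - 1), (m, 2 * m - 1)]  # 1 ~ 4n and 1' ~ (4n)'
    return E

def spanning_trees(n_vertices, edges):
    # Matrix-tree theorem: tau(G) is any cofactor of the Laplacian L = D - A
    L = np.zeros((n_vertices, n_vertices))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:]))

tau_mobius = [spanning_trees(8 * n, q84_edges(n, True)) for n in (1, 2, 3)]
tau_cylinder = [spanning_trees(8 * n, q84_edges(n, False)) for n in (1, 2, 3)]
```

The resulting counts agree with the first three rows of Table 2.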
\begin{table}[htbp]
\setlength{\abovecaptionskip}{0.15cm}
\centering
\vspace{.2cm}
\setlength{\tabcolsep}{10pt}
\caption{The complexity of $Q_{1},Q_{2},...,Q_{8}$ and $Q'_{1},Q'_{2},...,Q'_{8}$.}
\begin{tabular}{c|c|c|c}
\hline
~~~~~~$G$ ~~~~~~&~~~~~~$\mathscr{L}(G)$ ~~~~~~&~~~~~~$G$ ~~~~~~&~~~~~~$\mathscr{L}(G)$~~~~~~ \\
\hline
$Q_{1}$ & $64$ & $Q'_{1}$ & $56$ \\
$Q_{2}$ & $3600$ & $Q'_{2}$ & $3584$ \\
$Q_{3}$ & $161472$ & $Q'_{3}$ & $161448$ \\
$Q_{4}$ & $6451232$ & $Q'_{4}$ & $6451200$ \\
$Q_{5}$ & $241651520$ & $Q'_{5}$ & $241651480$\\
$Q_{6}$ & $8689777200$ & $Q'_{6}$ & $8689777152$\\
$Q_{7}$ & $303803889088$ & $Q'_{7}$ & $303803889032$\\
$Q_{8}$ & $10404546969664$ & $Q'_{8}$ & $10404546969600$\\
\hline
\end{tabular}
\end{table}
\section{Relation between Kirchhoff index and Wiener index of $Q_{n}(8,4)$ and $Q'_{n}(8,4)$}\label{sct4}
The first part of this section derives the Wiener indices of $Q_{n}\big(8,4\big)$ and $Q'_{n}\big(8,4\big)$; the remainder compares the Wiener index with the Kirchhoff index.\\
\begin{thm}For the network $Q_{n}\big(8,4\big)$,
\begin{eqnarray*}
W\big(Q_{n}(8,4)\big)=32n^3+16n^2+4n.
\end{eqnarray*}
\end{thm}
\noindent{\bf Proof.} For the M\"{o}bius graph $Q_{n}\big(8,4\big)$, two types of vertices can be distinguished $(n\geq2)$:\\
(a) vertices of degree two,\\
(b) vertices of degree three.\\
Hence, for type (a) we obtain
\begin{eqnarray}
\omega_{1}(k)=4n\Bigg[\Big(\sum_{k=1}^{2n}k+\sum_{k=1}^{2n}k+\sum_{k=2}^{2n}k+\sum_{k=3}^{2n}k\Big)+3+4\Bigg],
\end{eqnarray}
and for type (b) we have
\begin{eqnarray}
\omega_{2}(k)=4n\Bigg(\sum_{k=1}^{2n}k+\sum_{k=1}^{2n}k+\sum_{k=1}^{2n}k+\sum_{k=2}^{2n}k\Bigg).
\end{eqnarray}
Summing Eqs.(4.12) and (4.13), we get Eq.(4.14):\\
\begin{equation}
\begin{split}
W\big(Q_{n}(8,4)\big)&=\frac{\omega_{1}(k)+\omega_{2}(k)}{2}\\
&=\frac{4n\Bigg(\sum\limits_{k=1}^{2n}k+\sum\limits_{k=1}^{2n}k+\sum\limits_{k=1}^{2n}k+\sum\limits_{k=2}^{2n}k+\sum\limits_{k=1}^{2n}k+\sum\limits_{k=1}^{2n}k+\sum\limits_{k=2}^{2n}k+\sum\limits_{k=3}^{2n}k+7\Bigg)}{2}\\
&=32n^3+16n^2+4n.
\end{split}
\end{equation}
\hfill\rule{1ex}{1ex}\\
\begin{thm}
For the cylinder graph $Q'_{n}(8,4)$,
\begin{eqnarray*}
W\big(Q'_{n}(8,4)\big)=32n^3+16n^2+8n.
\end{eqnarray*}
\end{thm}
\noindent{\bf Proof.} Following the proof of $W\big(Q_{n}(8,4)\big)$, a similar approach yields Eqs.(4.15) and (4.16):
\begin{eqnarray}
\omega_{1}(k)=4n\Big[\Big(\sum_{k=1}^{2n-1}k+\sum_{k=1}^{2n}k+\sum_{k=2}^{2n}k+\sum_{k=3}^{2n+1}k\Big)+3+4\Big].
\end{eqnarray}
\begin{eqnarray}
\omega_{2}(k)=4n\Bigg[1+\Big(\sum_{k=1}^{2n-1}k+\sum_{k=1}^{2n}k+\sum_{k=2}^{2n}k+\sum_{k=2}^{2n+1}k\Big)\Bigg].
\end{eqnarray}
Combining Eqs.(4.15) and (4.16), we derive Eq.(4.17):\\
\begin{equation}
\begin{split}
W\big(Q'_{n}(8,4)\big)&=\frac{\omega_{1}(k)+\omega_{2}(k)}{2}\\
&=\frac{4n\Bigg(\sum\limits_{k=1}^{2n-1}k+\sum\limits_{k=1}^{2n}k+\sum\limits_{k=2}^{2n}k+\sum\limits_{k=2}^{2n+1}k+\sum\limits_{k=1}^{2n-1}k+\sum\limits_{k=1}^{2n}k+\sum\limits_{k=2}^{2n}k+\sum\limits_{k=3}^{2n+1}k+8\Bigg)}{2}\\
&=32n^3+16n^2+8n.
\end{split}
\end{equation}
\hfill\rule{1ex}{1ex}\\
Whether $n$ is even or odd, the Wiener indices of the M\"{o}bius and cylinder graphs of the linear octagonal-quadrilateral network agree in their cubic and quadratic terms and differ only in the linear term.\\
\noindent{\bf Corollary 4.3.} The ratio of the Wiener index to the Kirchhoff index tends to $3$ for both $Q_{n}(8,4)$ and $Q'_{n}(8,4)$:
\begin{equation*}
\lim_{n\to\infty}\frac{W\big(Q_{n}(8,4)\big)}{Kf\big(Q_{n}(8,4)\big)}=3,
\end{equation*}
and
\begin{equation*}
\lim_{n\to\infty}\frac{W\big(Q'_{n}(8,4)\big)}{Kf\big(Q'_{n}(8,4)\big)}=3.
\end{equation*}
\section{Conclusion}\label{sct5}
\ \ \ \ \ A central result of this paper is a closed formula for the Kirchhoff indices and the complexity of the M\"{o}bius and cylinder graphs of the linear octagonal-quadrilateral network, expressed in terms of the reciprocals of the eigenvalues of the Laplacian matrix. Comparing the Wiener index with the Kirchhoff index, we found that their ratio tends to three as $n$ grows. We expect the results developed here to be useful in the study of other polygon chains and their variants.
\section*{Funding}
\ \ \ \ \ This work was supported in part by the Anhui Provincial Natural Science Foundation under Grant 2008085J01, by the Natural Science Fund of the Education Department of Anhui Province under Grant KJ2020A0478, and by National Natural Science Foundation of China Grants 11601006 and 12001008.
\begin{thebibliography}{99}
\small
\setlength{\itemsep}{-.8mm}
\bibitem{F.R} F.R.K. Chung, Spectral graph theory, American Mathematical Society, Providence, RI, 1997.
\bibitem{B} A. Dobrynin, I. Gutman, S. Klav\v{z}ar, and P. \v{Z}igert, Wiener index of hexagonal systems, Acta Applicandae Mathematicae 72 (2002) 247-294.
\bibitem{C.} I. Gutman, S. C. Li, W.
Wei, Cacti with $n$ vertices and $t$ cycles having extremal Wiener index, Discrete Applied Mathematics 232 (2017) 189-200.
\bibitem{D} I. Gutman, S. Klav\v{z}ar, B. Mohar, Match Communications in Mathematical and in Computer Chemistry 35 (1997) 1-259.
\bibitem{Wiener} H. Wiener, Structural determination of paraffin boiling points, Journal of the American Chemical Society 69 (1947) 17-20.
\bibitem{A.D} A. Dobrynin, Branchings in trees and the calculation of the Wiener index of a tree, Match Communications in Mathematical and in Computer Chemistry 41 (2000) 119-134.
\bibitem{D.J} D.J. Klein, M. Randi\'c, Resistance distances, Journal of Mathematical Chemistry 12 (1993) 81-95.
\bibitem{D.} D.J. Klein, Resistance-distance sum rules, Croatica Chemica Acta 75 (2002) 633-649.
\bibitem{D.J.} D.J. Klein, O. Ivanciuc, Graph cyclicity, excess conductance, and resistance deficit, Journal of Mathematical Chemistry 30 (2001) 271-287.
\bibitem{Y.L} Y.L. Yang, T.Y. Yu, Graph theory of viscoelasticities for polymers with starshaped, multiple-ring and cyclic multiple-ring molecules, Macromolecular Chemistry and Physics 186 (1985) 609-631.
\bibitem{Y.} Y.J. Yang, H.P. Zhang, Kirchhoff index of linear hexagonal chains, International Journal of Quantum Chemistry 108(3) (2008) 503-512.
\bibitem{G.W} X.Y. Geng, P. W, On the Kirchhoff indices and the number of spanning trees of M\"{o}bius phenylenes chain and cylinder phenylenes chain, Polycyclic Aromatic Compounds 41 (2019) 1681-1693.
\bibitem{Z.L} J.B. Liu, Z.Y. Shi, Y.H. Pan, Computing the Laplacian spectrum of linear octagonal-quadrilateral networks and its applications, Polycyclic Aromatic Compounds 42 (2020) 659-670.
\bibitem{H.B} X.L. M, H. B, The normalized Laplacians, degree-Kirchhoff index and the spanning trees of hexagonal M\"{o}bius graphs, Applied Mathematics and Computation 355 (2019) 33-46.
\bibitem{E.} J. Huang, S.C. Li, L.
Sun, The normalized Laplacians, degree-Kirchhoff index and the spanning trees of linear hexagonal chains, Discrete Applied Mathematics 207 (2016) 67-79.
\bibitem{F.} Y.J. Peng, S.C. Li, On the Kirchhoff index and the number of spanning trees of linear phenylenes, Match Communications in Mathematical and in Computer Chemistry 77 (2017) 765-780.
\bibitem{G.} J.B. Liu, J. Zhao, Z.X. Zhu, On the number of spanning trees and normalized Laplacian of linear octagonal-quadrilateral networks, International Journal of Quantum Chemistry 119 (17) (2019) e25971.
\bibitem{H.} J. Huang, S.C. Li, X. Li, The normalized Laplacian, degree-Kirchhoff index and spanning trees of the linear polyomino chains, Applied Mathematics and Computation 289 (2016) 324-334.
\bibitem{M.} H.P. Zhang, Y.J. Yang, Resistance distance and Kirchhoff index in circulant graphs, International Journal of Quantum Chemistry 107 (2007) 330-339.
\bibitem{K.} S. Li, W. Sun, S. Wang, Multiplicative degree-Kirchhoff index and number of spanning trees of a zigzag polyhex nanotube TUHC[2n,2], International Journal of Quantum Chemistry 119 (17) (2019) e25969.
\bibitem{Gut} I. Gutman, B. Mohar, The quasi-Wiener and the Kirchhoff indices coincide, Journal of Chemical Information and Modeling 36 (1996) 982-985.
\bibitem{H.Y.} H.Y. Chen, F.J. Zhang, Resistance distance and the normalized Laplacian spectrum, Discrete Applied Mathematics 155 (2007) 654-661.
\bibitem{Y.X.} Y.J. Yang, X.Y. Jiang, Unicyclic graphs with extremal Kirchhoff index, Match Communications in Mathematical and in Computer Chemistry 60 (2008) 107-120.
\end{thebibliography}
\end{document}
2205.00832v2
http://arxiv.org/abs/2205.00832v2
Gradient Descent, Stochastic Optimization, and Other Tales
\documentclass[twoside,11pt]{book} \usepackage{jmlr2e} \usepackage{bm} \usepackage[english]{babel} \usepackage{imakeidx} \makeindex[columns=2, title=Alphabetical Index] \usepackage{stackengine} \usepackage{enumitem} \setenumerate[1]{itemsep=0pt,partopsep=0pt,parsep=\parskip,topsep=5pt} \setitemize[1]{itemsep=0pt,partopsep=0pt,parsep=\parskip,topsep=5pt} \setdescription{itemsep=0pt,partopsep=0pt,parsep=\parskip,topsep=5pt} \usepackage{dsfont} \newcommand{\indicator}{\mathds{1}} \newcommand{\prob}{\mathbb{P}} \newcommand{\Var}{\mathrm{Var}} \usepackage[framemethod=tikz]{mdframed} \newcommand{\mdframecolor}{gray!10} \newcommand{\mdframehideline}{true} \newcommand{\mdframecolorNote}{gray!10} \newcommand{\mdframehidelineNote}{true} \newcommand{\mdframeHighlight}{red!10} \newcommand{\mdframecolorSkip}{yellow!10} \pagestyle{empty} \usepackage{amsmath} \usepackage{arydshln} \numberwithin{equation}{section} \usepackage{amsmath} \usepackage{blkarray} \newcommand{\xy}[2]{\ensuremath{\frac{x_{#1}}{y_{#2}}}} \usepackage[final]{pdfpages} \usepackage[noframe]{showframe} \usepackage{framed} \usepackage{lipsum} \definecolor{color0} {RGB}{174,225,254} \definecolor{color1} {RGB}{220,227,248} \definecolor{color2} {RGB}{28,130,185} \definecolor{color3} {RGB}{255,253,250} \definecolor{colormiddleright} {RGB}{245,253,250} \definecolor{colorbottomleft} {RGB}{255,243,250} \definecolor{coloruppermiddle} {RGB}{255,253,230} \definecolor{colormiddleleft} {RGB}{255,253,250} \definecolor{colorcr} {RGB}{249,253,232} \definecolor{colorreduction} {RGB}{255,235,254} \definecolor{colorqr} {RGB}{254,221,199} \definecolor{colorbiconjugate} {RGB}{251,149,161} \definecolor{colorsvd} {RGB}{215,247,235} \definecolor{colorupperright} {RGB}{239,246,251} \definecolor{colorspectral} {RGB}{206,226,243} \definecolor{colorbottomright} {RGB}{220,224,236} \definecolor{coloreigenvalue} {RGB}{197,203,224} \definecolor{colorupperleft} {RGB}{235,243,240} \definecolor{colorsemidefinite} {RGB}{217,232,226} 
\definecolor{colormiddle} {RGB}{235, 240,255} \definecolor{colorlu} {RGB}{220,227,255} \definecolor{colorals} {RGB}{240,230,255} \definecolor{coloralsbkg} {RGB}{248,243,255} \definecolor{canaryyellow}{rgb}{1.0, 0.75, 0.0} \definecolor{bluepigment}{rgb}{0.0, 0.0, 1.0} \definecolor{canarypurple}{RGB}{208, 13, 241} \newenvironment{svgraybox}{ \def\FrameCommand{\fboxsep=\FrameSep \colorbox{color1}} \MakeFramed{\advance\hsize-\width \FrameRestore\FrameRestore}}{\endMakeFramed} \definecolor{shadecolor}{gray}{0.75} \usepackage{xcolor} \newcommand{\cleft}[2][.]{ \begingroup\colorlet{savedleftcolor}{.} \color{#1}\left#2\color{savedleftcolor}} \newcommand{\cright}[2][.]{ \color{#1}\right#2\endgroup } \usepackage{tcolorbox} \usepackage{graphicx} \usepackage[hang]{subfigure} \usepackage{algorithm} \usepackage{algpseudocode} \usepackage{amsmath} \usepackage{graphics} \usepackage{epsfig} \usepackage{blkarray} \usepackage{listings} \newenvironment{sbmatrix}[1]{\def\mysubscript{#1}\mathop\bgroup\begin{bmatrix}}{\end{bmatrix}\egroup_{\textstyle\mathstrut\mysubscript}} \usepackage{tikz} \usetikzlibrary{mindmap,trees,backgrounds} \usetikzlibrary{arrows.meta} \usetikzlibrary{decorations.text} \usetikzlibrary{calc,backgrounds} \newcommand*\circled[1]{\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {#1};}} \usepackage{changepage} \newlength{\offsetpage} \setlength{\offsetpage}{1.0cm} \newenvironment{widepage}{\begin{adjustwidth}{-\offsetpage/2}{-\offsetpage/2} \addtolength{\textwidth}{\offsetpage}} {\end{adjustwidth}} \usepackage{hhline} \newcommand*{\bmboxtimes}{\multicolumn{1}{|c|}{\bm{\boxtimes}}} \newcommand*{\colorbmboxtimes}{\multicolumn{1}{|c|}{\textcolor{mylightbluetext}{\bm{\boxtimes}}}} \usetikzlibrary{positioning,shapes,shadows,arrows} \tikzstyle{condition}=[rectangle, draw=black, rounded corners, fill=colorqr, drop shadow, text centered, anchor=north, text=black, text width=3cm] \tikzstyle{abstract}=[rectangle, draw=black, rounded corners, 
fill=blue!30, drop shadow, text centered, anchor=north, text=black, text width=3cm] \tikzstyle{comment}=[rectangle, draw=black, rounded corners, fill=color1, drop shadow, text centered, anchor=north, text=black, text width=3cm] \tikzstyle{myarrow}=[->, >=open triangle 90, thick] \tikzstyle{line}=[-, thick] \usepackage{sidecap} \sidecaptionvpos{figure}{t} \usepackage{verbatimbox} \usepackage[labelfont=bf]{caption} \newcommand\myhrulefill[1]{\leavevmode\leaders\hrule height#1\hfill\kern0pt} \DeclareCaptionFormat{myformat}{{\color[RGB]{0,0,0}\myhrulefill{0.08em}}\\#1#2#3} \captionsetup[figure]{format=myformat} \usepackage{graphicx} \usepackage{pythonhighlight} \usepackage{mathtools} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \makeatletter \renewcommand*\cleardoublepage{ \clearpage \if@twoside \ifodd\c@page \hbox{}\newpage \if@twocolumn\hbox{} \newpage \fi\fi\fi } \makeatother \let\originalpart=\part \def\part#1{\cleardoublepage\clearpage \pagecolor{\partcolor} \originalpart{#1}\nopagecolor } \usepackage{yfonts,color,lettrine} \definecolor{caligraphcolor}{HTML}{7F0000} \usepackage{setspace} \renewcommand{\LettrineFont}{\initfamily\color{red}} \setcounter{DefaultLines}{3} \AtBeginDocument{\setlength{\DefaultFindent}{0.5em}} \setlength{\DefaultNindent}{0pt} \renewcommand{\DefaultLraise}{-0.1} \usepackage[Bjornstrup]{fncychap} \newcommand{\colortitlechap}{\color[rgb]{0.3,0.3,0.3}} \newcommand{\colornumberchap}{\color[rgb]{0.6,0.6,0.6}} \newcommand{\colorbackchap}{\colorbox[RGB]{187,201,242}} \usepackage{fancyhdr, blindtext} \newcommand{\changefontt}{ \fontsize{11}{13}\selectfont } \newcommand{\changefonts}{ \fontsize{9}{11}\selectfont } \pagestyle{fancy} \fancyhf{} \fancyhead[RO]{\color{winestain}\changefontt \leftmark} \fancyhead[LO]{\color{winestain}\changefontt JUN LU} \fancyhead[RE]{\color{winestain}\changefonts \rightmark} \fancyfoot[CE,CO]{\color{winestain}\thepage} \fancypagestyle{plain}{ \renewcommand{\headrulewidth}{0pt}
\fancyhf{} \fancyfoot[C]{\color{winestain} \thepage}} \renewcommand{\headrule}{\color{winestain}\hrule height 1.2pt \vspace{0.5mm}} \usepackage[colorlinks]{hyperref} \usepackage{color} \definecolor{winestain}{rgb}{0.5,0,0} \definecolor{mydarkblue}{rgb}{0,0.08,0.45} \usepackage{hyperref} \hypersetup{ colorlinks=true, linktoc=all, linkcolor=winestain, anchorcolor=blue, citecolor=winestain, } \usepackage[hyperpageref]{backref} \renewcommand\thefootnote{\textcolor{cyan}{\arabic{footnote}}} \usepackage{booktabs} \newcommand{\clearchapter}{ \chapter } \fi} \newtheorem{definitionT}{Definition}[chapter] \newmdenv[skipabove=7pt, skipbelow=7pt, rightline=false, leftline=true, topline=false, bottomline=false, linecolor=mydefinitionred, innerleftmargin=5pt, innerrightmargin=5pt, innertopmargin=0pt, leftmargin=2cm, rightmargin=0cm, linewidth=4pt, innerbottommargin=0pt]{dBox} \newenvironment{definition}{\begin{dBox}\begin{definitionT}}{\end{definitionT}\end{dBox}} \newtheorem{exerciseC}{Exercise}[chapter] \newmdenv[skipabove=7pt, skipbelow=7pt, rightline=false, leftline=true, topline=false, bottomline=false, linecolor=mydarkgreen, innerleftmargin=5pt, innerrightmargin=5pt, innertopmargin=0pt, leftmargin=2cm, rightmargin=0cm, linewidth=4pt, innerbottommargin=0pt]{eBox} \newenvironment{exercise}{\begin{eBox}\begin{exerciseC}}{\end{exerciseC}\end{eBox}} \usepackage{adforn} \newcommand{\xchaptertitle}{Chapter~\thechapter~} \newcommand{\problemname}{Problems} \newenvironment{problemset}[1][\xchaptertitle~\problemname]{ \vspace*{10pt} \begin{center} \phantomsection\addcontentsline{toc}{section}{\texorpdfstring{\xchaptertitle~\problemname}{\problemname}} \markright{#1}
\textcolor{structurecolor}{\Large\bfseries\adftripleflourishleft~#1~\adftripleflourishright} \end{center} \begin{enumerate}}{ \end{enumerate}} \makeatletter \renewcommand{\l@section}{\@dottedtocline{1}{1.5em}{2.25em}} \renewcommand{\l@subsection}{\@dottedtocline{2}{3.7em}{3.1em}} \renewcommand{\l@subsubsection}{\@dottedtocline{3}{4.5em}{3.4em}} \makeatother \numberwithin{equation}{section} \renewcommand\thesection{\thechapter.\arabic{section}} \renewcommand\thesubsection{\thesection.\arabic{subsection}} \usepackage{minitoc} \setcounter{minitocdepth}{3} \let\cleardoublepage\clearpage \usepackage[titletoc]{appendix} \input{symbols_dl} \input{symbols.tex} \jmlrheading{1}{2022}{1-48}{4/00}{10/00}{Jun Lu} \firstpageno{1} \begin{document} \newcommand{\mytitle}{Gradient Descent, Stochastic Optimization, and Other Tales} \title{\mytitle} \frontmatter \input{front.tex} \clearpage \author{ \begin{center} \name Jun Lu \\ \email [email protected] \end{center} } \clearpage \maketitle \chapter*{\centering \begin{normalsize}Preface\end{normalsize}} The goal of this book is to debunk and dispel the magic behind the black-box optimizers and stochastic optimizers. It aims to build a solid foundation on how and why these techniques work. This manuscript crystallizes this knowledge by deriving, from simple intuitions, the mathematics behind the strategies. This book does not shy away from addressing both the formal and informal aspects of gradient descent and stochastic optimization methods. By doing so, it hopes to provide readers with a deeper understanding of these techniques, as well as the when, the how, and the why of applying these algorithms. Gradient descent stands out as one of the most popular algorithms for performing optimization and is by far the most common way to optimize machine learning tasks. Its stochastic version has received increasing attention in recent years, particularly for optimizing deep neural networks.
In deep neural networks, the gradient computed from a single sample or a batch of samples is employed to save computational resources and to escape from saddle points. In 1951, Robbins and Monro published \textit{A stochastic approximation method}, one of the first modern treatments of stochastic optimization, which estimates local gradients with a new batch of samples. Stochastic optimization has since become a core technology in machine learning, largely owing to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this book is to give a self-contained introduction to the concepts and mathematical tools used in gradient descent and stochastic optimization. We clearly realize, however, our inability to cover all the useful and interesting results concerning optimization methods within this limited scope, e.g., a separate analysis of trust region methods, convex optimization, and so on; we refer the reader to the literature on numerical optimization for a more detailed introduction to these related fields. The book primarily summarizes the purpose and significance of important concepts in optimization methods, e.g., vanilla gradient descent, gradient descent with momentum, conjugate descent, and conjugate gradient, as well as the origin and rate of convergence of these methods, which shed light on their applications. The mathematical prerequisite is a first course in linear algebra and calculus. Other than this modest background, the development is self-contained, with rigorous proofs provided throughout. \paragraph{Keywords: } Gradient descent, Stochastic gradient descent, Steepest descent, Conjugate descent and conjugate gradient, Learning rate annealing, Adaptive learning rate, Second-order methods.
\newpage \begingroup \hypersetup{ linkcolor=winestain, linktoc=page, } \dominitoc \tableofcontents \listoffigures \endgroup \newpage \mainmatter \input{chapter-intro} \input{chapter-GD} \input{chapter-lineSearch} \input{chapter-lrate} \input{chapter-stocopt} \input{chapter-second} \input{chapter-append} \newpage \vskip 0.2in \bibliography{bib} \clearpage \printindex \end{document} \newcommand{\figleft}{{\em (Left)}} \newcommand{\figcenter}{{\em (Center)}} \newcommand{\figright}{{\em (Right)}} \newcommand{\figtop}{{\em (Top)}} \newcommand{\figbottom}{{\em (Bottom)}} \newcommand{\captiona}{{\em (a)}} \newcommand{\captionb}{{\em (b)}} \newcommand{\captionc}{{\em (c)}} \newcommand{\captiond}{{\em (d)}} \newcommand{\newterm}[1]{{\bf #1}} \def\figref#1{figure~\ref{#1}} \def\ceil#1{\lceil #1 \rceil} \def\floor#1{\lfloor #1 \rfloor} \def\1{\bm{1}} \newcommand{\train}{\mathcal{D}} \newcommand{\valid}{\mathcal{D_{\mathrm{valid}}}} \newcommand{\test}{\mathcal{D_{\mathrm{test}}}} \def\eps{{\epsilon}} \def\reta{{\textnormal{$\eta$}}} \def\ra{{\textnormal{a}}} \def\rb{{\textnormal{b}}} \def\rc{{\textnormal{c}}} \def\rd{{\textnormal{d}}} \def\re{{\textnormal{e}}} \def\rf{{\textnormal{f}}} \def\rg{{\textnormal{g}}} \def\rh{{\textnormal{h}}} \def\ri{{\textnormal{i}}} \def\rj{{\textnormal{j}}} \def\rk{{\textnormal{k}}} \def\rl{{\textnormal{l}}} \def\rn{{\textnormal{n}}} \def\ro{{\textnormal{o}}} \def\rp{{\textnormal{p}}} \def\rq{{\textnormal{q}}} \def\rr{{\textnormal{r}}} \def\rs{{\textnormal{s}}} \def\rt{{\textnormal{t}}} \def\ru{{\textnormal{u}}} \def\rv{{\textnormal{v}}} \def\rw{{\textnormal{w}}} \def\rx{{\textnormal{x}}} \def\ry{{\textnormal{y}}} \def\rz{{\textnormal{z}}} \def\rX{{\textnormal{X}}} \def\rY{{\textnormal{Y}}} \def\rZ{{\textnormal{Z}}} \def\rvepsilon{{\mathbf{\epsilon}}} \def\rvtheta{{\mathbf{\theta}}} \def\rva{{\mathbf{a}}} \def\rvb{{\mathbf{b}}} \def\rvc{{\mathbf{c}}} \def\rvd{{\mathbf{d}}} \def\rve{{\mathbf{e}}} \def\rvf{{\mathbf{f}}} \def\rvg{{\mathbf{g}}} \def\rvh{{\mathbf{h}}} \def\rvi{{\mathbf{i}}} \def\rvj{{\mathbf{j}}}
\def\rvk{{\mathbf{k}}} \def\rvl{{\mathbf{l}}} \def\rvm{{\mathbf{m}}} \def\rvn{{\mathbf{n}}} \def\rvo{{\mathbf{o}}} \def\rvp{{\mathbf{p}}} \def\rvq{{\mathbf{q}}} \def\rvr{{\mathbf{r}}} \def\rvs{{\mathbf{s}}} \def\rvt{{\mathbf{t}}} \def\rvu{{\mathbf{u}}} \def\rvv{{\mathbf{v}}} \def\rvw{{\mathbf{w}}} \def\rvx{{\mathbf{x}}} \def\rvy{{\mathbf{y}}} \def\rvz{{\mathbf{z}}} \def\erva{{\textnormal{a}}} \def\ervb{{\textnormal{b}}} \def\ervc{{\textnormal{c}}} \def\ervd{{\textnormal{d}}} \def\erve{{\textnormal{e}}} \def\ervf{{\textnormal{f}}} \def\ervg{{\textnormal{g}}} \def\ervh{{\textnormal{h}}} \def\ervi{{\textnormal{i}}} \def\ervj{{\textnormal{j}}} \def\ervk{{\textnormal{k}}} \def\ervl{{\textnormal{l}}} \def\ervm{{\textnormal{m}}} \def\ervn{{\textnormal{n}}} \def\ervo{{\textnormal{o}}} \def\ervp{{\textnormal{p}}} \def\ervq{{\textnormal{q}}} \def\ervr{{\textnormal{r}}} \def\ervs{{\textnormal{s}}} \def\ervt{{\textnormal{t}}} \def\ervu{{\textnormal{u}}} \def\ervv{{\textnormal{v}}} \def\ervw{{\textnormal{w}}} \def\ervx{{\textnormal{x}}} \def\ervy{{\textnormal{y}}} \def\ervz{{\textnormal{z}}} \def\rmA{{\mathbf{A}}} \def\rmB{{\mathbf{B}}} \def\rmC{{\mathbf{C}}} \def\rmD{{\mathbf{D}}} \def\rmE{{\mathbf{E}}} \def\rmF{{\mathbf{F}}} \def\rmG{{\mathbf{G}}} \def\rmH{{\mathbf{H}}} \def\rmI{{\mathbf{I}}} \def\rmJ{{\mathbf{J}}} \def\rmK{{\mathbf{K}}} \def\rmL{{\mathbf{L}}} \def\rmM{{\mathbf{M}}} \def\rmN{{\mathbf{N}}} \def\rmO{{\mathbf{O}}} \def\rmP{{\mathbf{P}}} \def\rmQ{{\mathbf{Q}}} \def\rmR{{\mathbf{R}}} \def\rmS{{\mathbf{S}}} \def\rmT{{\mathbf{T}}} \def\rmU{{\mathbf{U}}} \def\rmV{{\mathbf{V}}} \def\rmW{{\mathbf{W}}} \def\rmX{{\mathbf{X}}} \def\rmY{{\mathbf{Y}}} \def\rmZ{{\mathbf{Z}}} \def\ermA{{\textnormal{A}}} \def\ermB{{\textnormal{B}}} \def\ermC{{\textnormal{C}}} \def\ermD{{\textnormal{D}}} \def\ermE{{\textnormal{E}}} \def\ermF{{\textnormal{F}}} \def\ermG{{\textnormal{G}}} \def\ermH{{\textnormal{H}}} \def\ermI{{\textnormal{I}}} \def\ermJ{{\textnormal{J}}} 
\def\ermK{{\textnormal{K}}} \def\ermL{{\textnormal{L}}} \def\ermM{{\textnormal{M}}} \def\ermN{{\textnormal{N}}} \def\ermO{{\textnormal{O}}} \def\ermP{{\textnormal{P}}} \def\ermQ{{\textnormal{Q}}} \def\ermR{{\textnormal{R}}} \def\ermS{{\textnormal{S}}} \def\ermT{{\textnormal{T}}} \def\ermU{{\textnormal{U}}} \def\ermV{{\textnormal{V}}} \def\ermW{{\textnormal{W}}} \def\ermX{{\textnormal{X}}} \def\ermY{{\textnormal{Y}}} \def\ermZ{{\textnormal{Z}}} \def\evalpha{{\alpha}} \def\evbeta{{\beta}} \def\evepsilon{{\epsilon}} \def\evlambda{{\lambda}} \def\evomega{{\omega}} \def\evmu{{\mu}} \def\evpsi{{\psi}} \def\evsigma{{\sigma}} \def\evtheta{{\theta}} \def\eva{{a}} \def\evb{{b}} \def\evc{{c}} \def\evd{{d}} \def\eve{{e}} \def\evf{{f}} \def\evg{{g}} \def\evh{{h}} \def\evi{{i}} \def\evj{{j}} \def\evk{{k}} \def\evl{{l}} \def\evm{{m}} \def\evn{{n}} \def\evo{{o}} \def\evp{{p}} \def\evq{{q}} \def\evr{{r}} \def\evs{{s}} \def\evt{{t}} \def\evu{{u}} \def\evv{{v}} \def\evw{{w}} \def\evx{{x}} \def\evy{{y}} \def\evz{{z}} \def\gA{{\mathcal{A}}} \def\gB{{\mathcal{B}}} \def\gC{{\mathcal{C}}} \def\gD{{\mathcal{D}}} \def\gE{{\mathcal{E}}} \def\gF{{\mathcal{F}}} \def\gG{{\mathcal{G}}} \def\gH{{\mathcal{H}}} \def\gI{{\mathcal{I}}} \def\gJ{{\mathcal{J}}} \def\gK{{\mathcal{K}}} \def\gL{{\mathcal{L}}} \def\gM{{\mathcal{M}}} \def\gN{{\mathcal{N}}} \def\gO{{\mathcal{O}}} \def\gP{{\mathcal{P}}} \def\gQ{{\mathcal{Q}}} \def\gR{{\mathcal{R}}} \def\gS{{\mathcal{S}}} \def\gT{{\mathcal{T}}} \def\gU{{\mathcal{U}}} \def\gV{{\mathcal{V}}} \def\gW{{\mathcal{W}}} \def\gX{{\mathcal{X}}} \def\gY{{\mathcal{Y}}} \def\gZ{{\mathcal{Z}}} \def\emLambda{{\Lambda}} \def\emA{{A}} \def\emB{{B}} \def\emC{{C}} \def\emD{{D}} \def\emE{{E}} \def\emF{{F}} \def\emG{{G}} \def\emH{{H}} \def\emI{{I}} \def\emJ{{J}} \def\emK{{K}} \def\emL{{L}} \def\emM{{M}} \def\emN{{N}} \def\emO{{O}} \def\emP{{P}} \def\emQ{{Q}} \def\emR{{R}} \def\emS{{S}} \def\emT{{T}} \def\emU{{U}} \def\emV{{V}} \def\emW{{W}} \def\emX{{X}} \def\emY{{Y}} \def\emZ{{Z}} 
\def\emSigma{{\Sigma}} \newcommand{\etens}[1]{\mathsfit{#1}} \def\etLambda{{\etens{\Lambda}}} \def\etA{{\etens{A}}} \def\etB{{\etens{B}}} \def\etC{{\etens{C}}} \def\etD{{\etens{D}}} \def\etE{{\etens{E}}} \def\etF{{\etens{F}}} \def\etG{{\etens{G}}} \def\etH{{\etens{H}}} \def\etI{{\etens{I}}} \def\etJ{{\etens{J}}} \def\etK{{\etens{K}}} \def\etL{{\etens{L}}} \def\etM{{\etens{M}}} \def\etN{{\etens{N}}} \def\etO{{\etens{O}}} \def\etP{{\etens{P}}} \def\etQ{{\etens{Q}}} \def\etR{{\etens{R}}} \def\etS{{\etens{S}}} \def\etT{{\etens{T}}} \def\etU{{\etens{U}}} \def\etV{{\etens{V}}} \def\etW{{\etens{W}}} \def\etX{{\etens{X}}} \def\etY{{\etens{Y}}} \def\etZ{{\etens{Z}}} \newcommand{\pdata}{p_{\rm{data}}} \newcommand{\ptrain}{\hat{p}_{\rm{data}}} \newcommand{\Ptrain}{\hat{P}_{\rm{data}}} \newcommand{\pmodel}{p_{\rm{model}}} \newcommand{\Pmodel}{P_{\rm{model}}} \newcommand{\ptildemodel}{\tilde{p}_{\rm{model}}} \newcommand{\pencode}{p_{\rm{encoder}}} \newcommand{\pdecode}{p_{\rm{decoder}}} \newcommand{\precons}{p_{\rm{reconstruct}}} \newcommand{\Ls}{\mathcal{L}} \newcommand{\R}{\mathbb{R}} \newcommand{\emp}{\tilde{p}} \newcommand{\lr}{\alpha} \newcommand{\reg}{\lambda} \newcommand{\rect}{\mathrm{rectifier}} \newcommand{\softmax}{\mathrm{softmax}} \newcommand{\sigmoid}{\sigma} \newcommand{\softplus}{\zeta} \newcommand{\KL}{D_{\mathrm{KL}}} \newcommand{\standarderror}{\mathrm{SE}} \newcommand{\normlzero}{L^0} \newcommand{\normlone}{L^1} \newcommand{\normltwo}{L^2} \newcommand{\normlp}{L^p} \newcommand{\normmax}{L^\infty} \newcommand{\parents}{Pa} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\Tr}{Tr} \let\ab\allowbreak \newcommand{\bone}{\mathbf{1}} \newcommand{\complex}{\mathbb{C}} \newcommand{\nilp}{\mathrm{nilp}} \newcommand{\rank}{\mathrm{rank}} \newcommand{\trace}{\mathrm{tr}} \newcommand{\rms}{\text{RMS}} \newcommand{\dataset}{{\cal D}} \newcommand{\fracpartial}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\real}{\mathbb{R}} \newcommand{\integer}{\mathbb{Z}} 
\newcommand{\cspace}{\mathcal{C}} \newcommand{\nspace}{\mathcal{N}} \newcommand{\leadto}{\qquad\underrightarrow{ \text{leads to} }\qquad} \newcommand{\leadtosmall}{\,\,\underrightarrow{ \text{leads to} }\,\,} \newcommand{\Cov}{\mathrm{Cov}} \newcommand{\Exp}{\mathrm{E}} \newcommand{\normal}{\mathcal{N}} \newcommand{\gap}{\,\,\,\,\,\,\,\,} \definecolor{mylightbluetitle}{RGB}{60,113,183} \definecolor{mylightbluetext}{RGB}{27,54,189} \definecolor{structurecolorblue}{RGB}{60,113,183} \definecolor{structurecolorgreen}{RGB}{63,145,182} \colorlet{structurecolor}{structurecolorblue} \definecolor{structurecolorelegant}{RGB}{60,113,183} \definecolor{structurecolorlt}{RGB}{31,119,185} \definecolor{mydarkblue}{rgb}{0,0.08,0.45} \definecolor{mydarkred}{rgb}{0.70,0.00,0.00} \definecolor{mydarkgreen}{rgb}{0.00,0.30,0.00} \definecolor{mydarkyellow}{RGB}{197,151,13} \definecolor{mydarkpurple}{RGB}{149,18,192} \definecolor{mydarkgray}{RGB}{64,64,64} \definecolor{mydefinitionred}{rgb}{0.30,0.00,0.00} \newcommand{\exampbar}{\hfill $\square$\par} \newcommand{\bOmega}{\boldsymbol\Omega} \newcommand{\bomega}{\boldsymbol\omega} \newcommand{\balpha}{\boldsymbol\alpha} \newcommand{\bbeta}{\boldsymbol\beta} \newcommand{\bdelta}{\boldsymbol\delta} \newcommand{\btheta}{\boldsymbol\theta} \newcommand{\bTheta}{\boldsymbol\Theta} \newcommand{\bPhi}{\boldsymbol\Phi} \newcommand{\bPsi}{\boldsymbol\Psi} \newcommand{\bpsi}{\boldsymbol\psi} \newcommand{\bgamma}{\boldsymbol\gamma} \newcommand{\bepsilon}{\boldsymbol\epsilon} \newcommand{\bLambda}{\boldsymbol\Lambda} \newcommand{\boldeta}{\boldsymbol\eta} \newcommand{\blambda}{\boldsymbol\lambda} \newcommand{\bphi}{\boldsymbol\phi} \newcommand{\bphiast}{\bphi_\ast} \newcommand{\bmu}{\boldsymbol\mu} \newcommand{\bsigma}{\boldsymbol\sigma} \newcommand{\bSigma}{\boldsymbol\Sigma} \newcommand\norm[1]{\left\lVert#1\right\rVert} \newcommand{\diag}{\mathrm{diag}} \newcommand{\dotL}{\dot{\bm{L}}} \newcommand{\dotU}{\dot{\bm{U}}} 
\newcommand{\ddotL}{\ddot{\bm{L}}} \newcommand{\widebarbA}{\overline{\bm{A}}} \newcommand{\widebarbB}{\overline{\bm{B}}} \newcommand{\widebarbC}{\overline{\bm{C}}} \newcommand{\widebarbD}{\overline{\bm{D}}} \newcommand{\widebarbE}{\overline{\bm{E}}} \newcommand{\widebarbF}{\overline{\bm{F}}} \newcommand{\widebarbG}{\overline{\bm{G}}} \newcommand{\widebarbH}{\overline{\bm{H}}} \newcommand{\widebarbI}{\overline{\bm{I}}} \newcommand{\widebarbJ}{\overline{\bm{J}}} \newcommand{\widebarbK}{\overline{\bm{K}}} \newcommand{\widebarbL}{\overline{\bm{L}}} \newcommand{\widebarbM}{\overline{\bm{M}}} \newcommand{\widebarbN}{\overline{\bm{N}}} \newcommand{\widebarbO}{\overline{\bm{O}}} \newcommand{\widebarbP}{\overline{\bm{P}}} \newcommand{\widebarbQ}{\overline{\bm{Q}}} \newcommand{\widebarbR}{\overline{\bm{R}}} \newcommand{\widebarbS}{\overline{\bm{S}}} \newcommand{\widebarbT}{\overline{\bm{T}}} \newcommand{\widebarbU}{\overline{\bm{U}}} \newcommand{\widebarbV}{\overline{\bm{V}}} \newcommand{\widebarbW}{\overline{\bm{W}}} \newcommand{\widebarbX}{\overline{\bm{X}}} \newcommand{\widebarbY}{\overline{\bm{Y}}} \newcommand{\widebarbZ}{\overline{\bm{Z}}} \newcommand{\widebarba}{\overline{\bm{a}}} \newcommand{\widebarbb}{\overline{\bm{b}}} \newcommand{\widebarbc}{\overline{\bm{c}}} \newcommand{\widebarbd}{\overline{\bm{d}}} \newcommand{\widebarbe}{\overline{\bm{e}}} \newcommand{\widebarbf}{\overline{\bm{f}}} \newcommand{\widebarbg}{\overline{\bm{g}}} \newcommand{\widebarbh}{\overline{\bm{h}}} \newcommand{\widebarbi}{\overline{\bm{i}}} \newcommand{\widebarbj}{\overline{\bm{j}}} \newcommand{\widebarbk}{\overline{\bm{k}}} \newcommand{\widebarbl}{\overline{\bm{l}}} \newcommand{\widebarbm}{\overline{\bm{m}}} \newcommand{\widebarbn}{\overline{\bm{n}}} \newcommand{\widebarbo}{\overline{\bm{o}}} \newcommand{\widebarbp}{\overline{\bm{p}}} \newcommand{\widebarbq}{\overline{\bm{q}}} \newcommand{\widebarbr}{\overline{\bm{r}}} \newcommand{\widebarbs}{\overline{\bm{s}}} 
\newcommand{\widebarbt}{\overline{\bm{t}}} \newcommand{\widebarbu}{\overline{\bm{u}}} \newcommand{\widebarbv}{\overline{\bm{v}}} \newcommand{\widebarbw}{\overline{\bm{w}}} \newcommand{\widebarbx}{\overline{\bm{x}}} \newcommand{\widebarby}{\overline{\bm{y}}} \newcommand{\widebarbz}{\overline{\bm{z}}} \newcommand{\widetildebA}{\widetilde{\bm{A}}} \newcommand{\widetildebB}{\widetilde{\bm{B}}} \newcommand{\widetildebC}{\widetilde{\bm{C}}} \newcommand{\widetildebD}{\widetilde{\bm{D}}} \newcommand{\widetildebE}{\widetilde{\bm{E}}} \newcommand{\widetildebF}{\widetilde{\bm{F}}} \newcommand{\widetildebG}{\widetilde{\bm{G}}} \newcommand{\widetildebH}{\widetilde{\bm{H}}} \newcommand{\widetildebI}{\widetilde{\bm{I}}} \newcommand{\widetildebJ}{\widetilde{\bm{J}}} \newcommand{\widetildebK}{\widetilde{\bm{K}}} \newcommand{\widetildebL}{\widetilde{\bm{L}}} \newcommand{\widetildebM}{\widetilde{\bm{M}}} \newcommand{\widetildebN}{\widetilde{\bm{N}}} \newcommand{\widetildebO}{\widetilde{\bm{O}}} \newcommand{\widetildebP}{\widetilde{\bm{P}}} \newcommand{\widetildebQ}{\widetilde{\bm{Q}}} \newcommand{\widetildebR}{\widetilde{\bm{R}}} \newcommand{\widetildebS}{\widetilde{\bm{S}}} \newcommand{\widetildebT}{\widetilde{\bm{T}}} \newcommand{\widetildebU}{\widetilde{\bm{U}}} \newcommand{\widetildebV}{\widetilde{\bm{V}}} \newcommand{\widetildebW}{\widetilde{\bm{W}}} \newcommand{\widetildebX}{\widetilde{\bm{X}}} \newcommand{\widetildebY}{\widetilde{\bm{Y}}} \newcommand{\widetildebZ}{\widetilde{\bm{Z}}} \newcommand{\widetildeba}{\widetilde{\bm{a}}} \newcommand{\widetildebb}{\widetilde{\bm{b}}} \newcommand{\widetildebc}{\widetilde{\bm{c}}} \newcommand{\widetildebd}{\widetilde{\bm{d}}} \newcommand{\widetildebe}{\widetilde{\bm{e}}} \newcommand{\widetildebf}{\widetilde{\bm{f}}} \newcommand{\widetildebg}{\widetilde{\bm{g}}} \newcommand{\widetildebh}{\widetilde{\bm{h}}} \newcommand{\widetildebi}{\widetilde{\bm{i}}} \newcommand{\widetildebj}{\widetilde{\bm{j}}} 
\newcommand{\widetildebk}{\widetilde{\bm{k}}} \newcommand{\widetildebl}{\widetilde{\bm{l}}} \newcommand{\widetildebm}{\widetilde{\bm{m}}} \newcommand{\widetildebn}{\widetilde{\bm{n}}} \newcommand{\widetildebo}{\widetilde{\bm{o}}} \newcommand{\widetildebp}{\widetilde{\bm{p}}} \newcommand{\widetildebq}{\widetilde{\bm{q}}} \newcommand{\widetildebr}{\widetilde{\bm{r}}} \newcommand{\widetildebs}{\widetilde{\bm{s}}} \newcommand{\widetildebt}{\widetilde{\bm{t}}} \newcommand{\widetildebu}{\widetilde{\bm{u}}} \newcommand{\widetildebv}{\widetilde{\bm{v}}} \newcommand{\widetildebw}{\widetilde{\bm{w}}} \newcommand{\widetildebx}{\widetilde{\bm{x}}} \newcommand{\widetildeby}{\widetilde{\bm{y}}} \newcommand{\widetildebz}{\widetilde{\bm{z}}} \newcommand{\off}{\text{off}} \newcommand{\ba}{\bm{a}} \newcommand{\bA}{\bm{A}} \newcommand{\bb}{\bm{b}} \newcommand{\bB}{\bm{B}} \newcommand{\bc}{\bm{c}} \newcommand{\bC}{\bm{C}} \newcommand{\bd}{\bm{d}} \newcommand{\bD}{\bm{D}} \newcommand{\be}{\bm{e}} \newcommand{\bE}{\bm{E}} \newcommand{\bff}{\bm{f}} \newcommand{\bF}{\bm{F}} \newcommand{\bg}{\bm{g}} \newcommand{\bG}{\bm{G}} \newcommand{\bh}{\bm{h}} \newcommand{\bH}{\bm{H}} \newcommand{\bi}{\bm{i}} \newcommand{\bI}{\bm{I}} \newcommand{\bj}{\bm{j}} \newcommand{\bJ}{\bm{J}} \newcommand{\bk}{\bm{k}} \newcommand{\bK}{\bm{K}} \newcommand{\bl}{\bm{l}} \newcommand{\bL}{\bm{L}} \newcommand{\bmm}{\bm{m}} \newcommand{\bM}{\bm{M}} \newcommand{\bn}{\bm{n}} \newcommand{\bN}{\bm{N}} \newcommand{\bo}{\bm{o}} \newcommand{\bO}{\bm{O}} \newcommand{\bp}{\bm{p}} \newcommand{\bP}{\bm{P}} \newcommand{\bq}{\bm{q}} \newcommand{\bQ}{\bm{Q}} \newcommand{\br}{\bm{r}} \newcommand{\bR}{\bm{R}} \newcommand{\bs}{\bm{s}} \newcommand{\bS}{\bm{S}} \newcommand{\bt}{\bm{t}} \newcommand{\bT}{\bm{T}} \newcommand{\bu}{\bm{u}} \newcommand{\bU}{\bm{U}} \newcommand{\bv}{\bm{v}} \newcommand{\bV}{\bm{V}} \newcommand{\bw}{\bm{w}} \newcommand{\bW}{\bm{W}} \newcommand{\bx}{\bm{x}} \newcommand{\bX}{\bm{X}} \newcommand{\by}{\bm{y}} 
\newcommand{\bY}{\bm{Y}} \newcommand{\bz}{\bm{z}} \newcommand{\bZ}{\bm{Z}} \newcommand{\whbx}{\widehat{\bx}} \newcommand{\whbd}{\widehat{\bd}} \newcommand{\whbg}{\widehat{\bg}} \newcommand{\whL}{\widehat{L}} \newcommand{\abovebX}[1]{\overset{\mathit{(#1)}}{\bX}} \newcommand{\eG}{\bm{\mathscr{G}}} \newcommand{\eA}{\bm{\mathscr{A}}} \newcommand{\eB}{\bm{\mathscr{B}}} \newcommand{\eY}{\bm{\mathscr{Y}}} \newcommand{\eX}{\bm{\mathscr{X}}} \newcommand{\eL}{\bm{\mathscr{L}}} \newcommand{\weX}{\widehat{\bm{\mathscr{X}}}} \newcommand{\weG}{\widehat{\bm{\mathscr{G}}}} \newcommand{\widehatbA}{\widehat{\bm{A}}} \newcommand{\mathcalV}{\mathcal{V}} \newcommand{\dist}{\mathrm{dist}} \newcommand{\bzero}{\mathbf{0}} \newcommand{\kernel}{\mathbf{ker}} \newcommand{\mathE}{\mathbb{E}} \newcommand{\mat}[1]{\mathbf{#1}} \renewcommand\vec{\mathbf} \newcommand\inv[1]{#1\raisebox{1.15ex}{$\scriptscriptstyle-\!1$}} \newcommand{\trans}[1]{\ensuremath{#1^{ \top}}} \newcommand{\brx}{\bm{R}_\gamma^{(x)}} \newcommand{\bry}{\bm{R}_\gamma^{(y)}} \newcommand{\brxt}{\bm{R}_\gamma^{(x)\top}} \newcommand{\bryt}{\bm{R}_\gamma^{(y)\top}} \newcommand{\bxgamma}{\bX_\gamma} \newcommand{\bygamma}{\bY_\gamma} \newcommand{\bugamma}{\bU_\gamma} \newcommand{\bvgamma}{\bV_\gamma} \newcommand{\bomegagamma}{\bOmega_\gamma} \newcommand{\svd}{\mathrm{svd}} \def\sA{{\mathbb{A}}} \def\sB{{\mathbb{B}}} \def\sC{{\mathbb{C}}} \def\sD{{\mathbb{D}}} \def\sF{{\mathbb{F}}} \def\sG{{\mathbb{G}}} \def\sH{{\mathbb{H}}} \def\sI{{\mathbb{I}}} \def\sJ{{\mathbb{J}}} \def\sK{{\mathbb{K}}} \def\sL{{\mathbb{L}}} \def\sM{{\mathbb{M}}} \def\sN{{\mathbb{N}}} \def\sO{{\mathbb{O}}} \def\sP{{\mathbb{P}}} \def\sQ{{\mathbb{Q}}} \def\sR{{\mathbb{R}}} \def\sS{{\mathbb{S}}} \def\sT{{\mathbb{T}}} \def\sU{{\mathbb{U}}} \def\sV{{\mathbb{V}}} \def\sW{{\mathbb{W}}} \def\sX{{\mathbb{X}}} \def\sY{{\mathbb{Y}}} \def\sZ{{\mathbb{Z}}} \begingroup \clearpage \pagestyle{empty} \begin{center} \bfseries \vspace{1cm} \Huge {\Huge {\mytitle}} \vspace{1cm} 
\normalsize \vspace{1cm} \vspace{1cm} \vspace{1cm} \normalsize \vspace{1cm} \end{center} \clearpage \newpage\null\thispagestyle{empty}\newpage \clearpage \pagestyle{empty} \begin{center} \bfseries \vspace{1cm} \Huge {\Huge \mytitle} \vspace{1cm} \normalsize \vspace{1cm} \small BY\\ \Large Jun Lu\\[0.5em] \vspace{1cm} \vspace{1cm} \normalsize \vspace{10cm} \large \vspace{1cm} \end{center} \clearpage \pagestyle{empty} \clearpage \endgroup \newpage \clearchapter{Background} \newpage In this chapter, we offer a concise overview of fundamental concepts in linear algebra and calculus. Additional significant concepts will be introduced and elaborated upon as needed for clarity. Note that this chapter does not aim to provide an exhaustive treatment of these subjects. Interested readers wishing to delve deeper into these topics are advised to consult advanced texts on linear algebra and calculus. For the sake of simplicity, we restrict our consideration to real matrices throughout this text. Unless explicitly stated otherwise, the eigenvalues of the matrices under discussion are assumed to be real as well. \paragraph{The vector space $\real^n$ and matrix space $\real^{m\times n}$.} The vector space $\real^n$ is the set of $n$-dimensional column vectors with real components. Throughout the book, our primary focus will be on problems within the $\real^n$ vector space. However, in a few instances, we will also explore other vector spaces, e.g., the nonnegative vector space. Correspondingly, the matrix space $\real^{m\times n}$ represents the set of all real-valued matrices with dimensions $m\times n$. In all cases, scalars will be denoted in a non-bold font, possibly with subscripts (e.g., $a$, $\alpha$, $\alpha_i$). We will use \textbf{boldface} lowercase letters, possibly with subscripts, to denote vectors (e.g., $\bmu$, $\bx$, $\bx_n$, $\bz$), and \textbf{boldface} uppercase letters, possibly with subscripts, to denote matrices (e.g., $\bA$, $\bL_j$). 
The $i$-th element of a vector $\bz$ will be denoted by $z_i$ in the non-bold font. Subarrays are formed when fixing a subset of indices. The entry at the $i$-th row and $j$-th column of matrix $\bA$ (referred to as entry $(i,j)$ of $\bA$) will be denoted by $a_{ij}$. Furthermore, it will be helpful to utilize the \textbf{Matlab-style notation}: the submatrix of matrix $\bA$ spanning from the $i$-th row to the $j$-th row and the $k$-th column to the $m$-th column will be denoted by $\bA_{i:j,k:m}$. A colon is used to indicate all elements of a dimension, e.g., $\bA_{:,k:m}$ denotes the $k$-th through $m$-th columns of matrix $\bA$, and $\bA_{:,k}$ denotes the $k$-th column of $\bA$. Alternatively, the $k$-th column of matrix $\bA$ may be denoted more compactly as $\ba_k$. \index{Matlab-style notation} When the indices are non-contiguous, given ordered subindex sets $I$ and $J$, $\bA[I, J]$ denotes the submatrix of $\bA$ obtained by extracting the rows and columns of $\bA$ indexed by $I$ and $J$, respectively; and $\bA[:, J]$ denotes the submatrix of $\bA$ obtained by extracting all rows of $\bA$ and only the columns indexed by $J$, where again the colon operator implies all indices along that dimension. \begin{definition}[Matlab Notation]\label{definition:matlabnotation} Suppose $\bA\in \real^{m\times n}$, and $I=[i_1, i_2, \ldots, i_k]$ and $J=[j_1, j_2, \ldots, j_l]$ are two index vectors; then $\bA[I,J]$ denotes the $k\times l$ submatrix $$ \bA[I,J]= \begin{bmatrix} a_{i_1,j_1} & a_{i_1,j_2} &\ldots & a_{i_1,j_l}\\ a_{i_2,j_1} & a_{i_2,j_2} &\ldots & a_{i_2,j_l}\\ \vdots & \vdots&\ddots & \vdots\\ a_{i_k,j_1} & a_{i_k,j_2} &\ldots & a_{i_k,j_l}\\ \end{bmatrix}. $$ Likewise, $\bA[I,:]$ denotes a $k\times n$ submatrix, and $\bA[:,J]$ denotes an $m\times l$ submatrix. We note that it does not matter whether the index vectors $I$ and $J$ are row vectors or column vectors.
What matters is which axis they index (either rows or columns of $\bA$). Note that the ranges of the indices are given as follows: $$ \left\{ \begin{aligned} 1&\leq \min(I) \leq \max(I)\leq m;\\ 1&\leq \min(J) \leq \max(J)\leq n. \end{aligned} \right. $$ \end{definition} In all instances, vectors are presented in column form rather than as rows. A row vector will be denoted by the transpose of a column vector such as $\ba^\top$. A column vector with specific values is separated by the semicolon symbol $``;"$, for instance, $\bx=[1;2;3]$ is a column vector in $\real^3$. Similarly, a row vector with specific values is separated by the comma symbol $``,"$, e.g., $\by=[1,2,3]$ is a row vector with 3 values. Additionally, a column vector can also be denoted by the transpose of a row vector, e.g., $\by=[1,2,3]^\top$ is also a column vector. The transpose of a matrix $\bA$ will be denoted as $\bA^\top$, and its inverse will be denoted by $\bA^{-1}$. We will denote the $p \times p$ identity matrix by $\bI_p$ (or simply by $\bI$ when the size is clear from context). A vector or matrix of all zeros will be denoted by a \textbf{boldface} zero, $\bzero$, with the size clear from context, or we denote $\bzero_p$ to be the vector of all zeros with $p$ entries. Similarly, a vector or matrix of all ones will be denoted by a \textbf{boldface} one $\bone$ whose size is clear from context, or we denote $\bone_p$ to be the vector of all ones with $p$ entries. We will frequently omit the subscripts of these matrices when the dimensions are clear from context. We will use $\be_1, \be_2, \ldots, \be_n$ to represent the standard basis of $\real^n$, where $\be_i$ is the vector whose $i$-th component is one while all the others are zero.
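The Matlab-style indexing above can be mirrored in NumPy; the following sketch is purely illustrative (note that NumPy uses 0-based indices, whereas the definition above is 1-based):

```python
import numpy as np

# A 3x4 matrix of the integers 0..11 (0-based indexing, unlike the
# 1-based convention used in the definition above).
A = np.arange(12).reshape(3, 4)

I = [0, 2]  # selected row indices
J = [1, 3]  # selected column indices

# A[I, J] in the sense of the definition: np.ix_ forms the cross
# product of the two index sets, yielding a k x l submatrix.
sub = A[np.ix_(I, J)]   # 2x2 submatrix
rows = A[I, :]          # A[I, :] -> 2x4 submatrix
cols = A[:, J]          # A[:, J] -> 3x2 submatrix
col_k = A[:, 2]         # the k-th column, i.e., a_k
```

Note that plain `A[I, J]` in NumPy pairs the indices elementwise rather than forming the cross product, which is why `np.ix_` is needed here.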
\index{Eigenvalue} \begin{definition}[Eigenvalue] Given any vector space $E$ and any linear map $\bA: E \rightarrow E$ (or simply a real square matrix $\bA\in\real^{n\times n}$), a scalar $\lambda \in \real$ is called an eigenvalue, or proper value, or characteristic value of $\bA$, if there exists some nonzero vector $\bu \in E$ such that \begin{equation*} \bA \bu = \lambda \bu. \end{equation*} \end{definition} In fact, real-valued matrices can have complex eigenvalues. However, all the eigenvalues of symmetric matrices are real (see Theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}). \index{Spectrum} \index{Spectral radius} \begin{definition}[Spectrum and Spectral Radius]\label{definition:spectrum} The set of all eigenvalues of $\bA$ is called the spectrum of $\bA$ and is denoted by $\Lambda(\bA)$. The largest magnitude of the eigenvalues is known as the spectral radius $\rho(\bA)$: $$ \rho(\bA) = \mathop{\max}_{\lambda\in \Lambda(\bA)} |\lambda|. $$ \end{definition} \index{Eigenvector} \begin{definition}[Eigenvector] A vector $\bu \in E$ is called an eigenvector, or proper vector, or characteristic vector of $\bA$, if $\bu \neq \bzero$ and if there exists some $\lambda \in \real$ such that \begin{equation*} \bA \bu = \lambda \bu, \end{equation*} where the scalar $\lambda$ is then an eigenvalue. And we say that $\bu$ is an eigenvector associated with $\lambda$. \end{definition} Moreover, the tuple $(\lambda, \bu)$ mentioned above is termed an \textbf{eigenpair}. Intuitively, these definitions imply that multiplying matrix $\bA$ by the vector $\bu$ results in a new vector in the same direction as $\bu$, with its magnitude scaled by the factor $\lambda$. For any eigenvector $\bu$ and any nonzero scalar $s$, the vector $s\bu$ remains an eigenvector of $\bA$; for this reason we speak of an eigenvector of $\bA$ associated with the eigenvalue $\lambda$.
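As a quick numerical illustration (a sketch with an arbitrarily chosen matrix, not part of the formal development), the eigenpair relation $\bA\bu = \lambda\bu$ and the spectral radius can be checked with NumPy:

```python
import numpy as np

# A symmetric matrix, so all of its eigenvalues are guaranteed to be real.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Spectral radius: the largest eigenvalue magnitude.
rho = max(abs(lam) for lam in eigvals)

# Verify the eigenpair relation A u = lambda u for each pair
# (eigenvectors are returned as the columns of eigvecs).
for lam, u in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ u, lam * u)
```

For this matrix the spectrum is $\Lambda(\bA)=\{1, 3\}$, so $\rho(\bA)=3$.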
To avoid ambiguity, it is customary to assume that the eigenvector is normalized to have unit length and that its first nonzero entry is positive (or negative), since both $\bu$ and $-\bu$ are eigenvectors. In linear algebra, every vector space has a basis, and any vector within the space can be expressed as a linear combination of the basis vectors. We then define the span and dimension of a subspace via the basis. \index{Subspace} \begin{definition}[Subspace] A nonempty subset $\mathcal{V}$ of $\real^n$ is called a subspace if $x\ba+y\bb\in \mathcal{V}$ for every $\ba,\bb\in \mathcal{V}$ and every $x,y\in \real$. \end{definition} \index{Span} \begin{definition}[Span] If every vector $\bv$ in subspace $\mathcal{V}$ can be expressed as a linear combination of $\{\ba_1, \ba_2, \ldots,$ $\ba_m\}$, then $\{\ba_1, \ba_2, \ldots, \ba_m\}$ is said to span $\mathcal{V}$. \end{definition} \index{Linearly independent} In this context, we will often use the idea of the linear independence of a set of vectors. Two equivalent definitions are given as follows. \begin{definition}[Linearly Independent] A set of vectors $\{\ba_1, \ba_2, \ldots, \ba_m\}$ is called linearly independent if the only combination satisfying $x_1\ba_1+x_2\ba_2+\ldots+x_m\ba_m=\bzero$ is the one in which all $x_i$'s are equal to zero. An equivalent definition is that $\ba_1\neq \bzero$, and for every $k>1$, the vector $\ba_k$ does not belong to the span of $\{\ba_1, \ba_2, \ldots, \ba_{k-1}\}$. \end{definition} \index{Basis} \index{Dimension} \begin{definition}[Basis and Dimension] A set of vectors $\{\ba_1, \ba_2, \ldots, \ba_m\}$ is called a basis of $\mathcal{V}$ if the vectors are linearly independent and span $\mathcal{V}$. Every basis of a given subspace has the same number of vectors, and this number is called the dimension of the subspace $\mathcal{V}$. By convention, the subspace $\{\bzero\}$ is said to have a dimension of zero.
Furthermore, every subspace of nonzero dimension has a basis that is orthogonal, i.e., the basis of a subspace can be chosen orthogonal. \end{definition} \index{Column space} \begin{definition}[Column Space (Range)] If $\bA$ is an $m \times n$ real matrix, we define the column space (or range) of $\bA$ as the set spanned by its columns: \begin{equation*} \mathcal{C} (\bA) = \{ \by\in \mathbb{R}^m: \exists\, \bx \in \mathbb{R}^n, \, \by = \bA \bx \}. \end{equation*} And the row space of $\bA$ is the set spanned by its rows, which is equivalent to the column space of $\bA^\top$: \begin{equation*} \mathcal{C} (\bA^\top) = \{ \bx\in \mathbb{R}^n: \exists\, \by \in \mathbb{R}^m, \, \bx = \bA^\top \by \}. \end{equation*} \end{definition} \index{Null space} \begin{definition}[Null Space (Nullspace, Kernel)] If $\bA$ is an $m \times n$ real matrix, we define the null space (or kernel, or nullspace) of $\bA$ as the set: \begin{equation*} \nspace (\bA) = \{\by \in \mathbb{R}^n: \, \bA \by = \bzero \}. \end{equation*} And the null space of $\bA^\top$ is defined as \begin{equation*} \nspace (\bA^\top) = \{\bx \in \mathbb{R}^m: \, \bA^\top \bx = \bzero \}. \end{equation*} \end{definition} Both the column space of $\bA$ and the null space of $\bA^\top$ are subspaces of $\real^m$. In fact, every vector in $\nspace(\bA^\top)$ is orthogonal to vectors in $\cspace(\bA)$, and vice versa.\footnote{Every vector in $\nspace(\bA)$ is also perpendicular to vectors in $\cspace(\bA^\top)$, and vice versa.} \index{Rank} \begin{definition}[Rank] The rank of a matrix $\bA\in \real^{m\times n}$ is the dimension of its column space. That is, the rank of $\bA$ is equal to the maximum number of linearly independent columns of $\bA$, and is also the maximum number of linearly independent rows of $\bA$. The matrix $\bA$ and its transpose $\bA^\top$ have the same rank. A matrix $\bA$ is considered to have full rank if its rank equals $\min\{m,n\}$.
Specifically, given a nonzero vector $\bu \in \real^m$ and a nonzero vector $\bv \in \real^n$, the $m\times n$ matrix $\bu\bv^\top$ obtained as the outer product of the two vectors is of rank 1. In short, the rank of a matrix is equal to: \begin{itemize} \item the number of linearly independent columns; \item the number of linearly independent rows. \end{itemize} And remarkably, these two quantities are always equal (see \citet{lu2022matrix}). \end{definition} \index{Orthogonal complement} \begin{definition}[Orthogonal Complement in General] The orthogonal complement $\mathcalV^\perp\subseteq \real^m$ of a subspace $\mathcalV\subseteq\real^m$ contains every vector that is perpendicular to $\mathcalV$. That is, $$ \mathcalV^\perp = \{\bv\in \real^m : \bv^\top\bu=0, \,\,\, \forall \bu\in \mathcalV \}. $$ These two subspaces intersect only at the zero vector, yet together they span the entire space. The dimensions of $\mathcalV$ and $\mathcalV^\perp$ sum up to the dimension of the entire space. Furthermore, it holds that $(\mathcalV^\perp)^\perp=\mathcalV$. \end{definition} \index{Orthogonal complement} \begin{definition}[Orthogonal Complement of Column Space] If $\bA$ is an $m \times n$ real matrix, the orthogonal complement of $\mathcal{C}(\bA)$, $\mathcal{C}^{\bot}(\bA)$, is the subspace defined as: \begin{equation*} \begin{aligned} \mathcal{C}^{\bot}(\bA) &= \{\by\in \mathbb{R}^m: \, \by^\top \bA \bx=0, \, \forall \bx \in \mathbb{R}^n \} \\ &=\{\by\in \mathbb{R}^m: \, \by^\top \bv = 0, \, \forall \bv \in \mathcal{C}(\bA) \}.
\end{aligned} \end{equation*} \end{definition} We can then identify the four fundamental spaces associated with any matrix $\bA\in \real^{m\times n}$ of rank $r$: \begin{itemize} \item $\cspace(\bA)$: Column space of $\bA$, i.e., linear combinations of columns, with dimension $r$; \item $\nspace(\bA)$: Null space of $\bA$, i.e., all $\bx$ with $\bA\bx=\bzero$, with dimension $n-r$; \item $\cspace(\bA^\top)$: Row space of $\bA$, i.e., linear combinations of rows, with dimension $r$; \item $\nspace(\bA^\top)$: Left null space of $\bA$, i.e., all $\by$ with $\bA^\top \by=\bzero$, with dimension $m-r$, \end{itemize} where $r$ represents the rank of the matrix. Furthermore, $\nspace(\bA)$ is the orthogonal complement of $\cspace(\bA^\top)$, and $\cspace(\bA)$ is the orthogonal complement of $\nspace(\bA^\top)$. The proof can be found in \citet{lu2022matrix}. \index{Fundamental subspace} \index{Orthogonal matrix} \begin{definition}[Orthogonal Matrix, Semi-Orthogonal Matrix] A real square matrix $\bQ\in\real^{n\times n}$ is an orthogonal matrix if the inverse of $\bQ$ is equal to its transpose, that is, $\bQ^{-1}=\bQ^\top$ and $\bQ\bQ^\top = \bQ^\top\bQ = \bI$. In other words, suppose $\bQ=[\bq_1, \bq_2, \ldots, \bq_n]$, where $\bq_i \in \real^n$ for all $i \in \{1, 2, \ldots, n\}$, then $\bq_i^\top \bq_j = \delta(i,j)$, where $\delta(i,j)$ is the Kronecker delta function. For any vector $\bx$, the orthogonal matrix preserves the length: $\norm{\bQ\bx}= \norm{\bx}$. If $\bQ$ contains only $\gamma$ of these columns with $\gamma<n$, then $\bQ^\top\bQ = \bI_\gamma$ still holds, where $\bI_\gamma$ is the $\gamma\times \gamma$ identity matrix, but $\bQ\bQ^\top=\bI$ will not hold. In this case, $\bQ$ is called semi-orthogonal. \end{definition} \index{Normal matrix} \index{Hermitian matrix} \index{Orthogonal matrix} \index{Unitary matrix} From an introductory course on linear algebra, we have the following remark regarding the equivalent claims of nonsingular matrices.
\begin{remark}[List of Equivalence of Nonsingularity for a Matrix] For a square matrix $\bA\in \real^{n\times n}$, the following claims are equivalent: \begin{itemize} \item $\bA$ is nonsingular;~\footnote{The name originates from the singular value decomposition (SVD).} \item $\bA$ is invertible, i.e., $\bA^{-1}$ exists; \item $\bA\bx=\bb$ has a unique solution $\bx = \bA^{-1}\bb$; \item $\bA\bx = \bzero$ has a unique, trivial solution: $\bx=\bzero$; \item Columns of $\bA$ are linearly independent; \item Rows of $\bA$ are linearly independent; \item $\det(\bA) \neq 0$; \item $\dim(\nspace(\bA))=0$; \item $\nspace(\bA) = \{\bzero\}$, i.e., the null space is trivial; \item $\cspace(\bA)=\cspace(\bA^\top) = \real^n$, i.e., the column space and row space span the whole of $\real^n$; \item $\bA$ has full rank $r=n$; \item The reduced row echelon form is $\bR=\bI$; \item $\bA^\top\bA$ is symmetric positive definite; \item $\bA$ has $n$ nonzero (positive) singular values; \item All eigenvalues are nonzero. \end{itemize} \end{remark} It is important to keep these equivalences in mind. On the other hand, the following remark shows the equivalent claims for singular matrices.
\begin{remark}[List of Equivalence of Singularity for a Matrix] For a square matrix $\bA\in \real^{n\times n}$ with eigenpair $(\lambda, \bu)$, the following claims are equivalent: \begin{itemize} \item $(\bA-\lambda\bI)$ is singular; \item $(\bA-\lambda\bI)$ is not invertible; \item $(\bA-\lambda\bI)\bx = \bzero$ has nonzero solutions $\bx\neq \bzero$, and $\bx=\bu$ is one such solution; \item $(\bA-\lambda\bI)$ has linearly dependent columns; \item $\det(\bA-\lambda\bI) = 0$; \item $\dim(\nspace(\bA-\lambda\bI))>0$; \item Null space of $(\bA-\lambda\bI)$ is nontrivial; \item Columns of $(\bA-\lambda\bI)$ are linearly dependent; \item Rows of $(\bA-\lambda\bI)$ are linearly dependent; \item $(\bA-\lambda\bI)$ has rank $r<n$; \item Dimension of column space = dimension of row space = $r<n$; \item $(\bA-\lambda\bI)^\top(\bA-\lambda\bI)$ is symmetric positive semidefinite; \item $(\bA-\lambda\bI)$ has $r<n$ nonzero (positive) singular values; \item Zero is an eigenvalue of $(\bA-\lambda\bI)$. \end{itemize} \end{remark} \index{Vector norm} \index{Matrix norm} \begin{definition}[Vector $\ell_2$-Norm]\label{definition:vec_l2_norm} For a vector $\bx\in\real^n$, the \textbf{$\ell_2$ vector norm} is defined as $\norm{\bx}_2 = \sqrt{x_1^2+x_2^2+\ldots+x_n^2}$. \end{definition} For a matrix $\bA\in\real^{m\times n}$, we define the (matrix) Frobenius norm as follows. \begin{definition}[Matrix Frobenius Norm\index{Frobenius norm}]\label{definition:frobernius-in-svd} The \textbf{Frobenius norm} of a matrix $\bA\in \real^{m\times n}$ is defined as $$ \norm{\bA}_F = \sqrt{\sum_{i=1,j=1}^{m,n} (a_{ij})^2}=\sqrt{\trace(\bA\bA^\top)}=\sqrt{\trace(\bA^\top\bA)} = \sqrt{\sigma_1^2+\sigma_2^2+\ldots+\sigma_r^2}, $$ where $\sigma_1, \sigma_2, \ldots, \sigma_r$ are the nonzero singular values of $\bA$. \end{definition} The spectral norm is defined as follows.
\begin{definition}[Matrix Spectral Norm]\label{definition:spectral_norm} The \textbf{spectral norm} of a matrix $\bA\in \real^{m\times n}$ is defined as $$ \norm{\bA}_2 = \mathop{\max}_{\bx\neq\bzero} \frac{\norm{\bA\bx}_2}{\norm{\bx}_2} =\mathop{\max}_{\bx\in \real^n: \norm{\bx}_2=1} \norm{\bA\bx}_2 , $$ which is also the maximal singular value of $\bA$, i.e., $\norm{\bA}_2 = \sigma_1(\bA)$. \end{definition} We note that the Frobenius norm serves as the matrix counterpart of the vector $\ell_2$-norm. For simplicity, we do not give the full subscript of the norm for the vector $\ell_2$-norm or Frobenius norm when it is clear from the context which one we are referring to: $\norm{\bA}=\norm{\bA}_F$ and $\norm{\bx}=\norm{\bx}_2$. However, for the spectral norm, the subscript $\norm{\bA}_2$ should \textbf{not} be omitted. \subsection*{Differentiability and Differential Calculus} \begin{definition}[Directional Derivative, Partial Derivative]\label{definition:partial_deri} Let $f$ be a function defined over a set $\sS\subseteq \real^n$, and let $\bd\in\real^n$ be a nonzero vector. The \textbf{directional derivative} of $f$ at $\bx$ w.r.t. the direction $\bd$ is given by, if the limit exists, $$ \mathop{\lim}_{t\rightarrow 0^+} \frac{f(\bx+t\bd) - f(\bx)}{t}. $$ It is denoted by $f^\prime(\bx; \bd)$ or $D_{\bd}f(\bx)$. The directional derivative is sometimes called the \textbf{G\^ateaux derivative}. For any $i\in\{1,2,\ldots,n\}$, the directional derivative at $\bx$ w.r.t. the direction of the $i$-th standard basis vector $\be_i$ is called the $i$-th \textbf{partial derivative} and is denoted by $\frac{\partial f}{\partial x_i} (\bx)$, $D_{\be_i}f(\bx)$, or $\partial_i f(\bx)$.
\end{definition} If all the partial derivatives of a function $f$ exist at a point $\bx\in\real^n$, then the \textit{gradient} of $f$ at $\bx$ is defined as the column vector containing all the partial derivatives: $$ \nabla f(\bx)= \begin{bmatrix} \frac{\partial f}{\partial x_1} (\bx)\\ \frac{\partial f}{\partial x_2} (\bx)\\ \vdots \\ \frac{\partial f}{\partial x_n} (\bx) \end{bmatrix} \in \real^n. $$ A function $f$ defined over an open set $\sS\subseteq \real^n$ is called \textit{continuously differentiable} over $\sS$ if all the partial derivatives exist and are continuous on $\sS$. In the setting of continuous differentiability, the directional derivative and gradient have the following relationship: \begin{equation} f^\prime(\bx; \bd) = \nabla f(\bx)^\top \bd, \gap \text{for all }\bx\in\sS \text{ and }\bd\in\real^n. \end{equation} And in the setting of continuous differentiability, we also have \begin{equation} \mathop{\lim}_{\bd\rightarrow \bzero} \frac{f(\bx+\bd) - f(\bx) - \nabla f(\bx)^\top \bd}{\norm{\bd}} = 0\gap \text{for all }\bx\in\sS, \end{equation} or \begin{equation} f(\by) = f(\bx)+\nabla f(\bx)^\top (\by-\bx) + o(\norm{\by-\bx}), \end{equation} where $o(\cdot): \real_+\rightarrow \real$ is a one-dimensional function satisfying $\frac{o(t)}{t}\rightarrow 0$ as $t\rightarrow 0^+$. The partial derivative $\frac{\partial f}{\partial x_i} (\bx)$ is also a real-valued function of $\bx\in\sS$ that can be partially differentiated. The $j$-th partial derivative of $\frac{\partial f}{\partial x_i} (\bx)$ is defined as $$ \frac{\partial^2 f}{\partial x_j\partial x_i} (\bx)= \frac{\partial}{\partial x_j} \left(\frac{\partial f}{\partial x_i}\right) (\bx). $$ This is called the ($j,i$)-th \textit{second-order partial derivative} of function $f$. A function $f$ defined over an open set $\sS\subseteq\real^n$ is called \textit{twice continuously differentiable} over $\sS$ if all the second-order partial derivatives exist and are continuous over $\sS$.
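The relationship $f^\prime(\bx;\bd) = \nabla f(\bx)^\top \bd$ can be checked numerically with the one-sided difference quotient from the definition of the directional derivative. The following sketch uses an arbitrarily chosen smooth function (not one appearing elsewhere in the text):

```python
import numpy as np

def f(x):
    # f(x) = x1^2 + 3*x1*x2, a smooth function on R^2.
    return x[0]**2 + 3.0 * x[0] * x[1]

def grad_f(x):
    # Analytic gradient: [2*x1 + 3*x2, 3*x1].
    return np.array([2.0 * x[0] + 3.0 * x[1], 3.0 * x[0]])

def directional_derivative(f, x, d, t=1e-6):
    # One-sided difference quotient (f(x + t d) - f(x)) / t.
    return (f(x + t * d) - f(x)) / t

x = np.array([1.0, 2.0])
d = np.array([0.6, 0.8])        # a unit-length direction

fd = directional_derivative(f, x, d)
exact = grad_f(x) @ d           # f'(x; d) = grad f(x)^T d
```

Here $\nabla f(1,2) = (8, 3)$, so both `fd` and `exact` are close to $8\cdot 0.6 + 3\cdot 0.8 = 7.2$.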
When $f$ is twice continuously differentiable, the second-order partial derivatives are symmetric: $$ \frac{\partial^2 f}{\partial x_j\partial x_i} (\bx)= \frac{\partial^2 f}{\partial x_i\partial x_j} (\bx). $$ The \textit{Hessian} of the function $f$ at a point $\bx\in\sS$ is defined as the symmetric $n\times n$ matrix $$ \nabla^2f(\bx)= \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} (\bx) & \frac{\partial^2 f}{\partial x_1\partial x_2} (\bx) & \ldots & \frac{\partial^2 f}{\partial x_1\partial x_n} (\bx)\\ \frac{\partial^2 f}{\partial x_2\partial x_1} (\bx) & \frac{\partial^2 f}{\partial x_2\partial x_2} (\bx) & \ldots & \frac{\partial^2 f}{\partial x_2\partial x_n} (\bx)\\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial^2 f}{\partial x_n\partial x_1} (\bx) & \frac{\partial^2 f}{\partial x_n\partial x_2} (\bx) & \ldots & \frac{\partial^2 f}{\partial x_n^2} (\bx) \end{bmatrix}. $$ We provide a simple proof of Taylor's expansion in Appendix~\ref{appendix:taylor-expansion} (p.~\pageref{appendix:taylor-expansion}) for one-dimensional functions. In the case of high-dimensional functions, we have the following two approximation results. \begin{theorem}[Linear Approximation Theorem]\label{theorem:linear_approx} Let $f(\bx):\sS\rightarrow \real$ be a twice continuously differentiable function over an open set $\sS\subseteq\real^n$, and let $\bx, \by$ be two points with $[\bx,\by]\subseteq\sS$. Then there exists $\bx^\star\in[\bx,\by]$ such that $$ f(\by) = f(\bx)+ \nabla f(\bx)^\top (\by-\bx) + \frac{1}{2} (\by-\bx)^\top \nabla^2 f(\bx^\star) (\by-\bx). $$ \end{theorem} \begin{theorem}[Quadratic Approximation Theorem] Let $f(\bx):\sS\rightarrow \real$ be a twice continuously differentiable function over an open set $\sS\subseteq\real^n$, and let $\bx, \by\in\sS$. Then it follows that $$ f(\by) = f(\bx)+ \nabla f(\bx)^\top (\by-\bx) + \frac{1}{2} (\by-\bx)^\top \nabla^2 f(\bx) (\by-\bx) + o(\norm{\by-\bx}^2).
$$ \end{theorem} \newpage \clearchapter{Gradient Descent}\label{chapter:gradient-descent} \begingroup \hypersetup{linkcolor=winestain, linktoc=page, } \minitoc \newpage \endgroup \index{Gradient descent} \section{Gradient Descent}\label{section:gradient-descent-all} \lettrine{\color{caligraphcolor}G} The \textit{gradient descent} (GD) method is employed to find the minimum of a differentiable, convex or non-convex function, commonly referred to as the ``cost" or ``loss" function (also known as the ``objective" function). It stands out as one of the most popular optimization algorithms and is by far the most common way to optimize machine learning and deep learning models; this is particularly true for training neural networks. In the context of machine learning, the cost function measures the difference between the predicted output of a model and the actual output. Neural networks, and machine learning models in general, seek a set of parameters $\bx\in \real^d$ (also known as weights) that optimizes an objective function $L(\bx)$. Gradient descent aims to find a sequence of parameters \begin{equation} \bx_1, \bx_2, \ldots, \bx_T, \end{equation} such that as $T\rightarrow \infty$, the objective function $L(\bx_T)$ attains its (local) minimum value. At each iteration $t$, a step $\Delta \bx_t$ is applied to update the parameters. Denoting the parameters at the $t$-th iteration by $\bx_t$, the update rule is \begin{equation} \bx_{t+1} = \bx_t + \Delta \bx_t.
\end{equation} \index{Learning rate} \index{Strict SGD} \index{Mini-batch SGD} \index{Local minima} The most straightforward form of gradient descent is the \textit{vanilla update}: the parameters move in the opposite direction of the gradient, which is the steepest descent direction, since gradients are orthogonal to level curves (also known as level surfaces; see Lemma~\ref{lemm:direction-gradients}): \begin{equation}\label{equation:gd-equaa-gene} \Delta \bx_{t} = -\eta \bg_t= -\eta \frac{\partial L(\bx_t)}{\partial \bx_t} = -\eta \nabla L(\bx_t), \end{equation} where the positive value $\eta$ denotes the \textit{learning rate} and depends on the specific problem, and $\bg_t=\frac{\partial L(\bx_t)}{\partial \bx_t} \in \real^d$ represents the gradient of the parameters. The learning rate $\eta$ controls how large a step to take in the direction of the negative gradient so that we can reach a (local) minimum. If instead we follow the negative gradient computed from a single sample or a batch of samples at each iteration, we obtain a local estimate of the descent direction; this procedure is known as \textit{stochastic gradient descent} (SGD) \citep{robbins1951stochastic}. The SGD can be categorized into two types: \begin{itemize} \item \textbf{The strict SGD:} Computes the gradient for only one randomly chosen data point in each iteration. \item \textbf{The mini-batch SGD:} Represents a compromise between GD and strict SGD, using a subset (mini-batch) of the dataset to compute the gradient. \end{itemize} The SGD method is particularly useful when the number of training entries (i.e., the data used for updating the model; the data used for final evaluation is called the test entries or test data) is substantial, in which case the gradients from different input samples may cancel out, leaving only a small final update. In the SGD framework, the objective function is stochastic, composed of a sum of subfunctions evaluated at different subsamples of the data.
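As a minimal sketch (on a synthetic, noiseless least-squares problem of our own choosing), the same update rule implements both full-batch GD and mini-batch SGD, depending only on which rows enter the gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: L(x) = ||A x - b||^2 / (2n),
# built so that x_true is the exact minimizer (noiseless data).
n, d = 100, 3
A = rng.normal(size=(n, d))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true

def grad(x, idx):
    # Gradient of the loss restricted to the rows in idx:
    # idx = all rows -> full-batch GD; a random subset -> mini-batch SGD.
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

eta = 0.1            # learning rate
x = np.zeros(d)
for t in range(500):
    batch = rng.choice(n, size=10, replace=False)  # mini-batch SGD step
    x = x - eta * grad(x, batch)
```

With `batch = np.arange(n)` the same loop performs full-batch gradient descent; the mini-batch variant touches only 10 rows per iteration yet still recovers `x_true` on this noiseless problem.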
However, the drawback of the vanilla update (both GD and SGD) lies in its susceptibility to getting trapped in local minima \citep{rutishauser1959theory}. For a small step size, gradient descent makes a monotonic improvement at every iteration, ensuring convergence, albeit only to a local minimum. However, with a small step size the vanilla gradient descent method is generally slow, and it can converge extremely slowly under poor curvature conditions; choosing a learning rate larger than the stable threshold, on the other hand, may lead to divergence of the objective function. Determining an optimal learning rate (whether global or per-dimension) becomes more of an art than a science for many problems. Previous work has been done to alleviate the need for selecting a global learning rate \citep{zeiler2012adadelta}, though such methods remain sensitive to other hyper-parameters. In the following sections, we embark on a multifaceted exploration of the gradient descent method, unraveling its intricacies and adaptations through distinct lenses. This comprehensive journey is designed to foster a nuanced comprehension of the algorithm, elucidating its various formulations, challenges, and contextual applications. \section{Gradient Descent by Calculus} An intuitive analogy for gradient descent is to envision a river's course flowing from a mountaintop. The objective of gradient descent mirrors that of the river: to descend from the mountain's summit to the lowest point at the foothill. To restate the problem, the objective function is $L(\bx)$, where $\bx$ is a $d$-dimensional input variable; our goal is to use an algorithm to find the (local) minimum of $L(\bx)$. To make this precise, let's think about what happens when we move the current point a small amount $\Delta x_1$ in the $x_1$ direction, a small amount $\Delta x_2$ in the $x_2$ direction, \ldots, and a small amount $\Delta x_d$ in the $x_d$ direction.
Calculus informs us of the variation in the objective function $L(\bx)$ as follows: $$ \Delta L(\bx) \approx \frac{\partial L}{\partial x_1}\Delta x_1 + \frac{\partial L}{\partial x_2}\Delta x_2 + \ldots + \frac{\partial L}{\partial x_d}\Delta x_d. $$ The challenge is to choose $\Delta x_1$, \ldots, $\Delta x_d$ so that $\Delta L(\bx)$ is negative, i.e., so that the objective function decreases, aiming for minimization. Let $\Delta \bx=[\Delta x_1,\Delta x_2, \ldots , \Delta x_d]^\top$ represent the vector of changes in $\bx$, and $\nabla L(\bx) = \frac{\partial L(\bx)}{\partial \bx}=[\frac{\partial L}{\partial x_1},\frac{\partial L}{\partial x_2}, \ldots , \frac{\partial L}{\partial x_d}]^\top$ denote the gradient vector of $L(\bx)$.\footnote{Note the difference between $\Delta L(\bx)$ and $\nabla L(\bx)$.} Then it follows that $$ \Delta L(\bx) \approx \frac{\partial L}{\partial x_1}\Delta x_1 + \frac{\partial L}{\partial x_2}\Delta x_2 +\ldots + \frac{\partial L}{\partial x_d}\Delta x_d = \nabla L(\bx)^\top \Delta \bx. $$ In the context of descent, our objective is to ensure that $\Delta L(\bx)$ is negative. This condition ensures that a step $\bx_{t+1} = \bx_t+\Delta \bx_t$ (from the $t$-th iteration to the $(t+1)$-th iteration) results in a decrease of the loss function $L(\bx_{t+1}) = L(\bx_t) + \Delta L(\bx_t)$, given that $\Delta L(\bx_t) \leq 0$. It can be demonstrated that if the update step is defined as $\Delta \bx_t=-\eta \nabla L(\bx_t)$, where $\eta$ is the learning rate, the following relationship holds: $$ \Delta L(\bx_t) \approx -\eta \nabla L(\bx_t)^\top\nabla L(\bx_t) = -\eta\norm{\nabla L}_2^2 \leq 0. $$ To be precise, $\Delta L(\bx_t) < 0$ in the above equation; otherwise, we would have reached a stationary point with zero gradient. This analysis confirms the validity of gradient descent. The update rule for the next parameter $\bx_{t+1}$ is given by: $$ \bx_{t+1} = \bx_t - \eta \nabla L(\bx_t).
$$ This update rule steadily drives the objective function toward the minimum in a convex setting, or toward a local minimum in a non-convex setting. \index{Descent condition} \index{Descent direction} \index{Search direction} \index{Taylor's formula} \begin{remark}[Descent Condition]\label{remark:descent_condition} In the above construction, we define $\Delta \bx_t = -\eta \nabla L(\bx_t)$, where $-\nabla L(\bx_t)$ is the \textit{descent direction} such that $\Delta L \approx -\eta \nabla L(\bx_t)^\top \nabla L(\bx_t) < 0$ (assuming $\nabla L(\bx_t) \neq \bzero $). More generally, any \textit{search direction} $\bd_t \in \real^d{\setminus}\{\bzero\}$ that satisfies the \textit{descent condition} can be chosen as the descent direction: $$ \frac{d L(\bx_t + \eta \bd_t)}{d \eta}\bigg|_{\eta=0} = \nabla L(\bx_t)^\top \bd_t <0. $$ In other words, according to Taylor's formula (Appendix~\ref{appendix:taylor-expansion}, p.~\pageref{appendix:taylor-expansion}), $$ L(\bx_t+\eta\bd_t) \approx L(\bx_t) + \eta \nabla L(\bx_t)^\top\bd_t $$ implies $L(\bx_t+\eta\bd_t) < L(\bx_t)$ when $\eta$ is sufficiently small. When $\bd_t = -\nabla L(\bx_t)$, the descent direction is known as the \textit{steepest descent direction}. When the learning rate $\eta$ is not fixed but determined by exact line search, the method is called the \textit{steepest descent method} (see Section~\ref{section:quadratic-in-steepestdescent}, p.~\pageref{section:quadratic-in-steepestdescent}). \end{remark} \index{Convex functions} \index{Jensen's inequality} \subsection*{Gradient Descent in Convex Problems} We further consider the application of gradient descent in convex problems. The notion of convexity for a function is defined as follows. \begin{definition}[Convex Functions] A function $f: \sS \rightarrow \real$ defined on a convex set $\sS \subseteq \real^n$ is called convex if $$ f(\lambda \bx +(1-\lambda)\by) \leq \lambda f(\bx) +(1-\lambda) f(\by), $$ where $\bx,\by\in \sS$, and $\lambda\in[0,1]$.
And the function $f$ is called strictly convex if $$ f(\lambda \bx +(1-\lambda)\by) < \lambda f(\bx) +(1-\lambda) f(\by), $$ where $\bx\neq \by\in \sS$, and $\lambda\in(0,1)$. \end{definition} There are several inequalities associated with convex functions. \begin{lemma}[Inequalities in Convex Functions] A convex function satisfies the following inequalities \citep{beck2017first}. \paragraph{Jensen's inequality.} Let $f: \sS \rightarrow \real$ be a convex function defined on a convex set $\sS\subseteq \real^n$. Then, given any $\bx_1, \bx_2, \ldots, \bx_k\in\sS$ and $\blambda\in\Delta_k$ (the unit simplex in $\real^k$), it follows that $$ f\left(\sum_{i=1}^{k} \lambda_i\bx_i\right) \leq \sum_{i=1}^{k}\lambda_if(\bx_i). $$ \paragraph{Gradient inequality.} Suppose further that $f$ is continuously differentiable. Then, given any $\bx,\by\in\sS$, $f$ is convex over $\sS$ if and only if $$ f(\bx) -f(\by) \leq \nabla f(\bx)^\top (\bx-\by). $$ Given any $\bx\neq \by\in\sS$, $f$ is strictly convex over $\sS$ if and only if $$ f(\bx) -f(\by) < \nabla f(\bx)^\top (\bx-\by). $$ \paragraph{Monotonicity of the gradient.} Suppose again that $f$ is continuously differentiable. Then, given any $\bx,\by\in\sS$, $f$ is convex over $\sS$ if and only if $$ (\nabla f(\bx) - \nabla f(\by))^\top (\bx-\by)\geq 0. $$ \end{lemma} If the objective function $L(\bx)$ is (continuously differentiable) convex, then the relationship $\nabla L(\bx_t)^\top(\bx_{t+1}-\bx_{t})\geq 0$ implies $L(\bx_{t+1}) \geq L(\bx_t)$. This can be derived from the gradient inequality of a continuously differentiable convex function, i.e., $L(\bx_{t+1})- L(\bx_{t})\geq \nabla L(\bx_t)^\top(\bx_{t+1}-\bx_t)$. In this sense, to ensure a reduction in the objective function, it is imperative to ensure $\nabla L(\bx_t)^\top (\bx_{t+1}-\bx_{t})\leq 0$. In the context of gradient descent, the choice of $\Delta \bx_t = \bx_{t+1}-\bx_t$ aligns with the negative gradient $-\nabla L(\bx_t)$.
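The descent behavior on a convex function can be illustrated numerically. The following sketch (with an arbitrarily chosen positive definite quadratic, not one used in the text) checks both the descent condition $\nabla L(\bx_t)^\top \Delta\bx_t \leq 0$ and the resulting monotonic decrease of the loss:

```python
import numpy as np

# Convex quadratic L(x) = (1/2) x^T Q x with Q positive definite.
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])

def L(x):
    return 0.5 * x @ Q @ x

def grad_L(x):
    return Q @ x

eta = 0.1                        # small enough for monotone descent
x = np.array([4.0, -3.0])
losses = [L(x)]
for t in range(50):
    step = -eta * grad_L(x)
    # Descent condition: grad^T * step <= 0 for the negative-gradient step.
    assert grad_L(x) @ step <= 0.0
    x = x + step
    losses.append(L(x))

# The loss decreases monotonically for a sufficiently small eta.
assert all(l1 >= l2 for l1, l2 in zip(losses, losses[1:]))
```

Here $\eta = 0.1$ is below the stability threshold $2/\lambda_{\max}(\bQ)$, so every iteration strictly reduces the loss.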
However, there are many other descent methods, such as \textit{steepest descent}, \textit{normalized steepest descent}, and the \textit{Newton step}. The main idea of these methods is to ensure $\nabla L(\bx_t)^\top(\bx_{t+1}-\bx_{t})= \nabla L(\bx_t)^\top \Delta \bx_t \leq 0$ when the objective function is convex. \index{Greedy search} \section{Gradient Descent by Greedy Search}\label{section:als-gradie-descent-taylor} We now consider a greedy search in which $\bx_{t+1}$ is chosen to (approximately) minimize $L$ in a small neighborhood of $\bx_t$. Suppose we approximate $\bx_{t+1}$ by a linear update on $\bx_t$ of the following form: $$ \bx_{t+1} = \bx_t + \eta \bv. $$ The problem now revolves around finding a $\bv$ that minimizes the expression: $$ \bv=\mathop{\arg \min}_{\bv} L(\bx_{t} + \eta \bv) . $$ By Taylor's formula (Appendix~\ref{appendix:taylor-expansion}, p.~\pageref{appendix:taylor-expansion}), $L(\bx_t + \eta \bv)$ can be approximated by $$ L(\bx_t + \eta \bv) \approx L(\bx_t ) + \eta \bv^\top \nabla L(\bx_t ), $$ when $\eta$ is sufficiently small. Imposing the constraint $\norm{\bv}=1$ with $\eta>0$, we formulate the descent search as: $$ \bv=\mathop{\arg \min}_{\norm{\bv}=1} L(\bx_{t} + \eta \bv) \approx\mathop{\arg \min}_{\norm{\bv}=1} \left\{L(\bx_{t} ) + \eta \bv^\top \nabla L(\bx_{t} )\right\}. $$ This is known as the \textit{greedy search}. By the Cauchy--Schwarz inequality, the optimal $\bv$ is determined by $$ \bv = -\frac{\nabla L(\bx_{t} )}{\norm{\nabla L(\bx_{t} )}}, $$ i.e., $\bv$ lies in the direction opposite to $\nabla L(\bx_{t} )$. Consequently, the update for $\bx_{t+1}$ is reasonably expressed as: $$ \bx_{t+1} =\bx_{t} + \eta \bv = \bx_{t} - \eta \frac{\nabla L(\bx_{t})}{\norm{\nabla L(\bx_{t} )}}, $$ which is usually called the \textit{gradient descent}, as aforementioned.
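The optimality of the normalized negative gradient over all unit vectors follows from the Cauchy--Schwarz inequality, and can be sanity-checked numerically (a sketch with an arbitrary stand-in gradient vector):

```python
import numpy as np

rng = np.random.default_rng(1)

g = np.array([3.0, -4.0])           # stand-in for a gradient; ||g|| = 5

# The greedy-search direction: opposite to the gradient, unit length.
v_star = -g / np.linalg.norm(g)

# By Cauchy-Schwarz, v^T g >= -||g|| for every unit vector v,
# with equality exactly at v = -g/||g||.
for _ in range(100):
    v = rng.normal(size=2)
    v /= np.linalg.norm(v)
    assert v @ g >= v_star @ g - 1e-12
```

No random unit vector achieves a smaller inner product with the gradient than the normalized negative gradient itself, which attains the minimum value $-\norm{\nabla L(\bx_t)}$.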
If we further absorb the denominator into the step size $\eta$, the gradient descent update simplifies to the familiar form: $$ \bx_{t+1} = \bx_{t} - \eta {\nabla L(\bx_{t})}. $$ \section{Geometrical Interpretation of Gradient Descent} \begin{lemma}[Direction of Gradients]\label{lemm:direction-gradients} An important fact is that gradients are orthogonal to level curves (also known as level surfaces). \end{lemma} \begin{proof}[of Lemma~\ref{lemm:direction-gradients}] This is equivalent to proving that the gradient is orthogonal to the tangent of the level curve. For simplicity, let's first look at the two-dimensional case. Suppose the level curve has the form $f(x,y)=c$. This implicitly establishes a relationship between $x$ and $y$ such that $y=y(x)$, where $y$ can be thought of as a function of $x$. Therefore, the level curve can be written as $$ f(x, y(x)) = c. $$ The chain rule indicates $$ \frac{\partial f}{\partial x} \underbrace{\frac{dx}{dx}}_{=1} + \frac{\partial f}{\partial y} \frac{dy}{dx}=0. $$ Therefore, the gradient is perpendicular to the tangent: $$ \left\langle \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right\rangle \cdot \left\langle \frac{dx}{dx}, \frac{dy}{dx}\right\rangle=0. $$ To treat the problem in full generality, consider the level curve of a function of $\bx\in \real^n$: $f(\bx) = f(x_1, x_2, \ldots, x_n)=c$. Each variable $x_i$ can be regarded as a function of a parameter $t$ on the level curve $f(\bx)=c$: $f(x_1(t), x_2(t), \ldots, x_n(t))=c$. Differentiating the equation with respect to $t$ by the chain rule yields $$ \frac{\partial f}{\partial x_1} \frac{dx_1}{dt} + \frac{\partial f}{\partial x_2} \frac{dx_2}{dt} +\ldots + \frac{\partial f}{\partial x_n} \frac{dx_n}{dt} =0.
$$ Therefore, the gradient is perpendicular to the tangent in the $n$-dimensional case: $$ \left\langle \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n}\right\rangle \cdot \left\langle \frac{dx_1}{dt}, \frac{dx_2}{dt}, \ldots, \frac{dx_n}{dt}\right\rangle=0. $$ This completes the proof. \end{proof} The lemma above offers a profound geometrical interpretation of gradient descent. In the pursuit of minimizing a convex function $L(\bx)$, gradient descent strategically moves in the direction opposite to the gradient, which decreases the loss. Figure~\ref{fig:alsgd-geometrical} depicts a two-dimensional scenario, where $-\nabla L(\bx)$ pushes the loss to decrease for the convex function $L(\bx)$. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[A two-dimensional convex function $L(\bx)$.]{\label{fig:alsgd1} \includegraphics[width=0.47\linewidth]{./imgs/momentum_surface.pdf}} \subfigure[$L(\bx)=c$ is a constant.]{\label{fig:alsgd2} \includegraphics[width=0.44\linewidth]{./imgs/alsgd2.pdf}} \caption{Figure~\ref{fig:alsgd1} shows a convex function surface plot and its contour plot (\textcolor{mylightbluetext}{blue}=low, \textcolor{mydarkyellow}{yellow}=high), where the upper graph is the surface plot, and the lower one is the projection of it (i.e., contour). Figure~\ref{fig:alsgd2}: $-\nabla L(\bx)$ pushes the loss to decrease for the convex function $L(\bx)$.} \label{fig:alsgd-geometrical} \end{figure} \index{Regularization} \section{Regularization: A Geometrical Interpretation} \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{./imgs/alsgd3.pdf} \caption{Constrained gradient descent with $\bx^\top\bx\leq C$. The \textcolor{mydarkgreen}{green} vector $\bw$ is the projection of $\bv_1$ into $\bx^\top\bx\leq C$ where $\bv_1$ is the component of $-\nabla l(\bx)$ perpendicular to $\bx_1$.
The right picture is the next step after the update in the left picture. $\bx_\star$ denotes the optimal solution of \{$\min l(\bx)$\}.} \label{fig:alsgd3} \end{figure} The gradient descent also unveils the geometrical significance of regularization. To avoid confusion, we denote the loss function without regularization by $l(\bx)$ and the loss with regularization by $L(\bx) = l(\bx)+\lambda_x \norm{\bx}^2$, where $l(\bx): \real^d \rightarrow \real$ (this notation is exclusive to this section). When minimizing $l(\bx)$, the descent method will search in $\real^d$ for a solution. However, in machine learning, an exhaustive search across the entire space may lead to overfitting. A partial remedy involves searching within a subset of the vector space, such as searching in $\bx^\top\bx \leq C$ for some constant $C$. That is, $$ \mathop{\arg\min}_{\bx} \,\, l(\bx), \gap \text{s.t.,} \gap \bx^\top\bx\leq C. $$ We will see that this constrained search helps prevent overfitting by introducing regularization through the addition of a penalty term in the optimization process. In the previous discussion, a trivial gradient descent approach proceeds in the direction of $-\nabla l(\bx)$, updating $\bx$ by $\bx\leftarrow \bx-\eta \nabla l(\bx)$ for a small step size $\eta$. When the level curve is $l(\bx)=c_1$ and the descent approach is situated at $\bx=\bx_1$, where $\bx_1$ is the intersection of $\bx^\top\bx=C$ and $l(\bx)=c_1$, the descent direction $-\nabla l(\bx_1)$ will be perpendicular to the level curve $l(\bx)=c_1$, as shown in the left picture of Figure~\ref{fig:alsgd3}. However, if we further restrict that the optimal value can only be in the subspace $\bx^\top\bx\leq C$, the trivial descent direction $-\nabla l(\bx_1)$ will lead $\bx_2=\bx_1-\eta \nabla l(\bx_1)$ outside of the set $\bx^\top\bx\leq C$.
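The constrained search can be carried out by projecting each gradient step back onto the feasible ball. The following is a minimal Python sketch of this idea (projected gradient descent); the loss $l(\bx)=(x_1-3)^2+(x_2-4)^2$, whose unconstrained minimizer lies outside the ball, and the constant $C=1$ are hypothetical choices, and the `project` helper is an illustrative implementation that scales an infeasible point back to the boundary:

```python
import math

def grad_l(x):
    # Gradient of the hypothetical loss l(x) = (x1 - 3)^2 + (x2 - 4)^2,
    # whose unconstrained minimum [3, 4] lies outside the constraint set.
    return [2.0 * (x[0] - 3.0), 2.0 * (x[1] - 4.0)]

def project(x, C):
    # Project x onto the ball x^T x <= C (scale back to the boundary if outside).
    norm2 = sum(xi * xi for xi in x)
    if norm2 <= C:
        return x
    scale = math.sqrt(C / norm2)
    return [xi * scale for xi in x]

C, eta = 1.0, 0.1
x = [0.0, 0.0]
for _ in range(100):
    x = [xi - eta * gi for xi, gi in zip(x, grad_l(x))]
    x = project(x, C)  # keep the iterate feasible: x^T x <= C
```

For this loss, the iterate settles on the boundary point in the direction of the unconstrained minimizer, i.e., $[3/5, 4/5]$.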
To address this, the step $-\nabla l(\bx_1)$ is decomposed into $$ -\nabla l(\bx_1) = a\bx_1 + \bv_1, $$ where $a\bx_1$ is the component perpendicular to the curve of $\bx^\top\bx=C$, and $\bv_1$ is the component parallel to the curve of $\bx^\top\bx=C$. Keeping only the component $\bv_1$, the update $$ \bx_2 = \text{project}(\bx_1+\eta \bv_1) = \text{project}\left(\bx_1 + \eta \underbrace{(-\nabla l(\bx_1) -a\bx_1)}_{\bv_1}\right)\footnote{Here, project($\bx$) projects the vector $\bx$ onto the closest point inside $\bx^\top\bx\leq C$. Notice that the direct update $\bx_2 = \bx_1+\eta \bv_1$ can still take $\bx_2$ outside the set $\bx^\top\bx\leq C$.} $$ will lead to a smaller loss from $l(\bx_1)$ to $l(\bx_2)$ while still satisfying the prerequisite of $\bx^\top\bx\leq C$. This technique is known as \textit{projected gradient descent}. It is not hard to see that the update $\bx_2 = \text{project}(\bx_1+\eta \bv_1)$ is equivalent to finding a vector $\bw$ (depicted as the \textcolor{mydarkgreen}{green} vector in the left panel of Figure~\ref{fig:alsgd3}) such that $\bx_2=\bx_1+\bw$ lies inside the curve of $\bx^\top\bx\leq C$. Mathematically, $\bw$ can be obtained as $-\nabla l(\bx_1) -2\lambda \bx_1$ for some $\lambda$, as shown in the middle panel of Figure~\ref{fig:alsgd3}. This aligns with the negative gradient of $L(\bx)=l(\bx)+\lambda\norm{\bx}^2$ such that $$ -\nabla L(\bx) = -\nabla l(\bx) - 2\lambda \bx, $$ and $$ \bw = -\nabla L(\bx_1) \leadto \bx_2 = \bx_1+ \bw =\bx_1 - \nabla L(\bx_1). $$ And in practice, a small step size $\eta$ can be applied to prevent crossing the curve boundary of $\bx^\top\bx\leq C$: $$ \bx_2 =\bx_1 - \eta\nabla L(\bx_1).
$$ \index{Quadratic form} \index{Fisher information matrix} \index{Positive definite} \index{Positive semidefinite} \index{Symmetry} \section{Quadratic Form in Gradient Descent}\label{section:quadratic_vanilla_GD} We delve deeper into (vanilla) gradient descent applied to the simplest model, the convex quadratic function, \begin{equation}\label{equation:quadratic-form-general-form} L(\bx) = \frac{1}{2} \bx^\top \bA \bx - \bb^\top \bx + c, \gap \bx\in \real^d, \end{equation} where $\bA\in \real^{d\times d}$, $\bb \in \real^d$, and $c$ is a scalar constant. Though the quadratic form in Eq.~\eqref{equation:quadratic-form-general-form} is an extremely simple model, it is rich enough to approximate many other functions, e.g., via the Fisher information matrix \citep{amari1998natural}, and to capture key features of pathological curvature. The gradient of $L(\bx)$ at point $\bx$ is given by \begin{equation}\label{equation:unsymmetric_gd_gradient} \nabla L(\bx) = \frac{1}{2} (\bA^\top +\bA) \bx - \bb. \end{equation} The unique minimum of the function is the solution of the linear system $\frac{1}{2} (\bA^\top +\bA) \bx= \bb $: \begin{equation}\label{equation:gd_solution_unsymmetric} \bx_\star = 2(\bA^\top +\bA)^{-1}\bb. \end{equation} If $\bA$ is symmetric (for most of our discussions, we will restrict to symmetric $\bA$ or even \textit{positive definite}, see definition below), the gradient reduces to \begin{equation}\label{equation:symmetric_gd_gradient} \nabla L(\bx) = \bA \bx - \bb. \end{equation} Then the unique minimum of the function is the solution of the linear system $\bA\bx=\bb$~\footnote{This represents the \textit{first-order optimality condition} for local optima points. Note that the proof of this condition for multivariate functions heavily relies on the first-order optimality condition for one-dimensional functions, which is also known as \textit{Fermat's theorem}.
Refer to Exercise~\ref{problem:fist_opt}.}, where $\bA$ and $\bb$ are known, and $\bx$ is an unknown vector; the optimal point is thus given by $$ \bx_\star = \bA^{-1}\bb $$ if $\bA$ is nonsingular. \index{Fermat's theorem} \begin{exercise}\label{problem:fist_opt} \textbf{First-order optimality condition for local optima points.} Consider \textit{Fermat's theorem}: for a one-dimensional function $g(\cdot)$ defined and differentiable over an interval ($a, b$), if a point $x^\star\in(a,b)$ is a local maximum or minimum, then $g^\prime(x^\star)=0$. Prove the first-order optimality condition for multivariate functions based on this Fermat's theorem for one-dimensional functions. That is, consider a function $f: \sS\rightarrow \real$ defined on a set $\sS\subseteq \real^n$. Suppose that $\bx^\star\in\text{int}(\sS)$, i.e., an interior point of the set, is a local optimum point and that all the partial derivatives (Definition~\ref{definition:partial_deri}, p.~\pageref{definition:partial_deri}) of $f$ exist at $\bx^\star$. Then $\nabla f(\bx^\star)=\bzero$, i.e., the gradient vanishes at all local optimum points. (Note that this optimality condition is only a necessary condition; there can be points with vanishing gradient that are not local maxima or minima, e.g., saddle points.) \end{exercise} \paragraph{Symmetric matrices.} A symmetric matrix can be further categorized into positive definite, positive semidefinite, negative definite, negative semidefinite, and indefinite types as follows. \begin{definition}[Positive Definite and Positive Semidefinite\index{Positive definite}\index{Positive semidefinite}]\label{definition:psd-pd-defini} A matrix $\bA\in \real^{n\times n}$ is considered positive definite (PD) if $\bx^\top\bA\bx>0$ for all nonzero $\bx\in \real^n$. And a matrix $\bA\in \real^{n\times n}$ is called positive semidefinite (PSD) if $\bx^\top\bA\bx \geq 0$ for all $\bx\in \real^n$.
\footnote{ In this book, a positive definite or a positive semidefinite matrix is always assumed to be symmetric, i.e., the notion of a positive definite or semidefinite matrix is only interesting for symmetric matrices. } \footnote{Similarly, a complex matrix $\bA$ is said to be \textit{Hermitian positive definite} (HPD) if $\bA$ is Hermitian and $\bz^\ast\bA\bz>0$ for all $\bz\in\complex^n$ with $\bz\neq \bzero$.} \footnote{A symmetric matrix $\bA\in\real^{n\times n}$ is called \textit{negative definite} (ND) if $\bx^\top\bA\bx<0$ for all nonzero $\bx\in\real^n$; a symmetric matrix $\bA\in\real^{n\times n}$ is called \textit{negative semidefinite} (NSD) if $\bx^\top\bA\bx\leq 0$ for all $\bx\in\real^n$; and a symmetric matrix $\bA\in\real^{n\times n}$ is called \textit{indefinite} (ID) if there exist $\bx$ and $\by\in\real^n$ such that $\bx^\top\bA\bx<0$ and $\by^\top\bA\by>0$. } \end{definition} \begin{lemma}[Positive Definite Properties] If $\bA$ is a negative definite matrix, then $-\bA$ is a positive definite matrix; if $\bA$ is negative semidefinite, then $-\bA$ is positive semidefinite. Positive definite, positive semidefinite, and indefinite matrices admit the following properties. \paragraph{Eigenvalue.} A matrix is positive definite if and only if it has exclusively \textit{positive eigenvalues}. Similarly, a matrix is positive semidefinite if and only if it exhibits solely \textit{nonnegative eigenvalues}. And a matrix is indefinite if and only if it possesses at least one positive eigenvalue and at least one negative eigenvalue. \paragraph{Diagonals.} The diagonal elements of a positive definite matrix are all \textit{positive}. And similarly, the diagonal elements of a positive semidefinite matrix are all \textit{nonnegative}. Conversely, a matrix containing at least one positive diagonal element and at least one negative diagonal element is necessarily indefinite.
\end{lemma} The proof of the lemma is trivial and can be found in \citet{lu2022matrix}. \begin{figure}[htp] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Positive definite matrix: $\bA = \begin{bmatrix} 200 & 0 \\ 0 & 200 \end{bmatrix}$.]{\label{fig:quadratic_PD} \includegraphics[width=0.485\linewidth]{imgs/quadratic_PD.pdf}} \subfigure[Negative definite matrix: $\bA = \begin{bmatrix} -200 & 0 \\ 0 & -200 \end{bmatrix}$.]{\label{fig:quadratic_ND} \includegraphics[width=0.485\linewidth]{imgs/quadratic_ND.pdf}} \subfigure[Semidefinite matrix: $\bA = \begin{bmatrix} 200 & 0 \\ 0 & 0 \end{bmatrix}$. The line running through the bottom of the valley is the set of solutions.]{\label{fig:quadratic_singular} \includegraphics[width=0.485\linewidth]{imgs/quadratic_singular.pdf}} \subfigure[Indefinite matrix: $\bA = \begin{bmatrix} 200 & 0 \\ 0 & -200 \end{bmatrix}$.]{\label{fig:quadratic_saddle} \includegraphics[width=0.485\linewidth]{imgs/quadratic_saddle.pdf}} \caption{Loss surface for different quadratic forms.} \label{fig:different_quadratics} \end{figure} For different types of matrix $\bA$, the loss surface of $L(\bx)$ will be different, as illustrated in Figure~\ref{fig:different_quadratics}. When $\bA$ is positive definite, the surface forms a convex bowl; when $\bA$ is negative definite, on the contrary, the surface becomes a concave bowl. $\bA$ could also be singular, in which case $\bA\bx=\bb$ has either no solution or infinitely many solutions; when solutions exist, the solution set is a line (in the two-dimensional case) or a hyperplane (in the high-dimensional case). This situation is similar to the case of a semidefinite quadratic form, as shown in Figure~\ref{fig:quadratic_singular}. If $\bA$ is indefinite, a saddle point emerges (see Figure~\ref{fig:quadratic_saddle}), posing a challenge for gradient descent.
In such cases, alternative methods, e.g., perturbed GD \citep{jin2017escape, du2017gradient}, can be employed to navigate away from saddle points. \begin{figure}[htp] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Contour and the descent direction. The red dot is the optimal point.]{\label{fig:quadratic_vanillegd_contour} \includegraphics[width=0.31\linewidth]{imgs/quadratic_steepest_contour_tilt.pdf}} \subfigure[Vanilla GD, $\eta=0.02$.]{\label{fig:quadratic_vanillegd_contour2} \includegraphics[width=0.31\linewidth]{./imgs/steepest_gd_mom-0_lrate-2.pdf}} \subfigure[Vanilla GD, $\eta=0.08$.]{\label{fig:quadratic_vanillegd_contour8} \includegraphics[width=0.31\linewidth]{./imgs/steepest_gd_mom-0_lrate-8.pdf}} \caption{Illustration of vanilla GD for the quadratic form with $\bA=\begin{bmatrix} 20 & 7 \\ 5 & 5 \end{bmatrix}$, $\bb=\bzero$, and $c=0$. The procedure is at $\bx_t=[-3,3.5]^\top$ for the $t$-th iteration.} \label{fig:quadratic_vanillegd} \end{figure} \index{Quadratic form} \index{Saddle point} Note that in this context, gradient descent is not necessary; we can directly proceed to the minimum (if the matrix $\bA$ is nonsingular and we have an algorithm to compute its inverse). However, we are concerned with the iterative updates of the convex quadratic function. Suppose we pick a starting point $\bx_1\in \real^d$ \footnote{In some texts, the starting point is denoted as $\bx_0$; however, we will take it as $\bx_1$ in this article.}. The simplest update at time step $t$ fixes the learning rate $\eta$ and chooses a descent direction $\bd_t$; the gradient descent update is then: $$ \bx_{t+1} = \bx_t + \eta \bd_t. $$ For a suitably chosen learning rate, this results in a monotonically decreasing sequence $\{L(\bx_t)\}$.
Specifically, when the descent direction is chosen to be the negative gradient $\bd_t=-(\bA\bx_t-\bb)$ (with $\bA$ symmetric), the update becomes \begin{equation}\label{equation:vanilla-gd-update} \text{Vanilla GD: \gap } \bx_{t+1} = \bx_t - \eta (\bA\bx_t-\bb). \end{equation} A concrete example is given in Figure~\ref{fig:quadratic_vanillegd}, where $\bA=\begin{bmatrix} 20 & 7 \\ 5 & 5 \end{bmatrix}$, $\bb=\bzero$, and $c=0$. Suppose at the $t$-th iteration, $\bx_t=[-3,3.5]^\top$. Figure~\ref{fig:quadratic_vanillegd_contour} shows the descent direction given by the negative gradient at the point $\bx_t$; Figure~\ref{fig:quadratic_vanillegd_contour2} and Figure~\ref{fig:quadratic_vanillegd_contour8} present 10 iterations afterwards with $\eta=0.02$ and $\eta=0.08$, respectively. \index{Spectral decomposition} \paragraph{Closed form for vanilla GD.} When $\bA$ is symmetric, it admits spectral decomposition (Theorem 13.1 in \citet{lu2022matrix} or Appendix~\ref{appendix:spectraldecomp}, p.~\pageref{appendix:spectraldecomp}): $$ \bA=\bQ\bLambda\bQ^\top \in \real^{d\times d} \leadto \bA^{-1} = \bQ\bLambda^{-1}\bQ^\top, $$ where $\bQ = [\bq_1, \bq_2, \ldots , \bq_d]$ comprises mutually orthonormal eigenvectors of $\bA$, and $\bLambda = \diag(\lambda_1, \lambda_2, \ldots , \lambda_d)$ contains the corresponding real eigenvalues of $\bA$. If we further assume $\bA$ is positive definite, then the eigenvalues are all positive. By convention, we order the eigenvalues such that $\lambda_1\geq \lambda_2\geq \ldots \geq \lambda_d$. Define the following iterate vector at iteration $t$ as \begin{equation}\label{equation:vanilla-yt} \by_t = \bQ^\top(\bx_t - \bx_\star), \end{equation} where $\bx_\star = \bA^{-1}\bb$ if we further assume $\bA$ is nonsingular, as aforementioned.
It then follows that $$ \begin{aligned} \by_{t+1} &= \bQ^\top(\bx_{t+1} - \bx_\star) = \bQ^\top(\bx_{t} - \eta(\bA\bx_{t}-\bb) - \bx_\star) \gap &\text{($\bx_{t+1} = \bx_{t}-\eta\nabla L(\bx_{t})$)}\\ &=\bQ^\top(\bx_{t} - \bx_\star) - \eta \bQ^\top (\bA\bx_{t}-\bb) \\ &= \by_{t} - \eta \bQ^\top (\bQ\bLambda\bQ^\top\bx_{t}-\bb) \gap &\text{($\bA=\bQ\bLambda\bQ^\top$)}\\ &= \by_{t} - \eta (\bLambda\bQ^\top\bx_{t}-\bQ^\top\bb) \\ &= \by_{t} - \eta \bLambda\bQ^\top (\bx_{t}-\bx_\star) = \by_{t} - \eta \bLambda \by_{t} \\ &= (\bI - \eta \bLambda)\by_{t} = (\bI - \eta \bLambda)^t\by_{1}, \\ \end{aligned} $$ where the second equality is from Eq.~\eqref{equation:vanilla-gd-update}. This reveals the error term at each iteration: \begin{equation}\label{equation:vanilla-gd-closedform} \norm{\bx_{t+1} - \bx_\star}^2 = \norm{\bQ\by_{t+1}}^2 = \norm{\bQ(\bI - \eta \bLambda)^t\by_{1}}^2 = \bigg|\bigg|\sum_{i=1}^{d} y_{1,i} \cdot (1-\eta \lambda_i)^t \bq_i\bigg|\bigg|^2, \end{equation} where $\by_1$ depends on the initial parameter $\bx_1$, and $y_{1,i}$ is the $i$-th element of $\by_1$. An intuitive interpretation of $\by_{t+1}$ is as the error in the $\bQ$-basis at iteration $t+1$. By Eq.~\eqref{equation:vanilla-gd-closedform}, we realize that the learning rate should be chosen such that \begin{equation}\label{equation:vanillagd-quandr-rate-chgoices} |1-\eta\lambda_i| < 1, \gap \forall \,\, i\in \{1,2,\dots, d\}. \end{equation} And the error is a sum of $d$ terms, each of which has its own dynamics and depends on the rate $1-\eta\lambda_i$; the closer the rate is to 1, the slower it converges in that dimension \citep{shewchuk1994introduction, o2015adaptive, goh2017momentum}. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{imgs/rate_convergen_steepest.pdf} \caption{Rate of convergence (per iteration) in the vanilla GD method.
The $y$-axis is $\frac{\kappa-1}{\kappa+1}$.} \label{fig:rate_convergen_vanillaGD} \end{figure} \index{Rate of convergence} To ensure convergence, the learning rate must satisfy $|1-\eta\lambda_i| < 1$, which implies $0<\eta\lambda_i <2$ for all $i\in \{1,2,\ldots, d\}$. Therefore, the overall rate of convergence is determined by the slowest component: $$ \text{rate}(\eta) = \max \{|1-\eta\lambda_1|, |1-\eta\lambda_d|\}, $$ since $\lambda_1\geq \lambda_2\geq \ldots \geq \lambda_d$. The optimal learning rate occurs when the first and the last eigenvectors converge at the same rate, i.e., $\eta\lambda_1-1 =1- \eta\lambda_d$: \begin{equation}\label{equation:eta-vanilla-gd} \text{optimal } \eta = \underset{\eta}{\arg\min} \text{ rate}(\eta) = \frac{2}{\lambda_1+\lambda_d}, \end{equation} and \begin{equation}\label{equation:vanialla-gd-rate} \text{optimal rate} = \underset{\eta}{\min} \text{ rate}(\eta) = \frac{\lambda_1/\lambda_d - 1}{\lambda_1/\lambda_d + 1} =\frac{\kappa - 1}{\kappa + 1}, \end{equation} where $\kappa = \frac{\lambda_1}{\lambda_d}$ is known as the \textit{condition number} (see \citet{lu2021numerical} for more information). When $\kappa=1$, convergence is achieved in just one step; as the condition number increases, the gradient descent becomes slower. The rate of convergence (per iteration) is plotted in Figure~\ref{fig:rate_convergen_vanillaGD}. The more \textit{ill-conditioned} the matrix, i.e., the larger its condition number, the slower the convergence of vanilla GD.
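The closed-form analysis above is easy to verify numerically. The following is a minimal Python sketch; the diagonal positive definite matrix with eigenvalues $\lambda_1=3$ and $\lambda_d=1$ (so that $\bQ=\bI$), together with the starting point, is a hypothetical example:

```python
# Vanilla GD x_{t+1} = x_t - eta*(A x_t - b) on a hypothetical symmetric
# positive definite quadratic with eigenvalues lambda_1 = 3, lambda_d = 1.
lam1, lamd = 3.0, 1.0
A = [[lam1, 0.0], [0.0, lamd]]   # already diagonal, so Q = I
b = [0.0, 0.0]                   # minimizer x_star = A^{-1} b = 0

eta = 2.0 / (lam1 + lamd)                        # optimal eta = 2/(lambda_1+lambda_d)
rate = (lam1 / lamd - 1) / (lam1 / lamd + 1)     # optimal rate = (kappa-1)/(kappa+1)

x = [1.0, 1.0]
for _ in range(10):
    g = [A[0][0] * x[0] + A[0][1] * x[1] - b[0],
         A[1][0] * x[0] + A[1][1] * x[1] - b[1]]
    x = [x[0] - eta * g[0], x[1] - eta * g[1]]

# Each error coordinate contracts by the factor (1 - eta*lambda_i) per step:
# |1 - 0.5*3| = 0.5 and |1 - 0.5*1| = 0.5, matching the predicted rate of 0.5.
```

After 10 iterations, each coordinate of the error shrinks to $0.5^{10}$ of its initial value, exactly as the closed form predicts.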
\newpage \clearchapter{Line Search} \begingroup \hypersetup{linkcolor=winestain, linktoc=page, } \minitoc \newpage \endgroup \index{Line search} \index{Steepest descent} \section{Line Search}\label{section:line-search} \lettrine{\color{caligraphcolor}I} In the last section, we derived gradient descent, where the update step at step $t$ is $ -\eta\bg_t:=-\eta \nabla L(\bx_t)$, and the learning rate $\eta$ controls how large a step to take in the direction of the negative gradient. Line search is a method that directly determines the optimal learning rate in order to provide the most significant improvement in the gradient movement. Formally, the line search solves the following problem at the $t$-th step of gradient descent: $$ \eta_t = \underset{\eta}{\arg\min}\,\, L(\bx_t - \eta \bg_t). $$ After performing the gradient update $\bx_{t+1} = \bx_t - \eta_t \bg_t$, the gradient is computed at $\bx_{t+1}$ for the next step $t+1$. More generally, let $\bd_t$ be the descent direction; then, the gradient descent with line search (to differentiate, we call it \textit{steepest descent} when $\bd_t=-\bg_t$ in this article, while the fixed learning rate GD is known as the \textit{vanilla GD}) can be described by: $$ \eta_t = \underset{\eta}{\arg\min}\,\, L(\bx_t + \eta \bd_t). $$ \begin{lemma}[Orthogonality in Line Search]\label{lemm:linear-search-orghonal} The gradient at the optimal point $\bx_{t+1}=\bx_t + \eta_t \bd_t $ of a line search is orthogonal to the current update direction $\bd_t$: $$ \nabla L(\bx_{t+1})^\top \bd_t = 0. $$ \end{lemma} \begin{proof}[of Lemma~\ref{lemm:linear-search-orghonal}] Suppose, for contradiction, that $\nabla L(\bx_{t+1})^\top \bd_t \neq 0$. Then, for a sufficiently small $\delta>0$, it follows by Taylor's formula (Appendix~\ref{appendix:taylor-expansion}, p.~\pageref{appendix:taylor-expansion}) that \begin{equation}\label{equation:orthogonal-line-search} L(\bx_t +\eta_t \bd_t \pm \delta \bd_t) \approx L(\bx_t +\eta_t \bd_t)\pm \delta \bd_t^\top \nabla L(\bx_t +\eta_t \bd_t).
\end{equation} Since $\bx_t +\eta_t \bd_t$ is the optimal move along the direction $\bd_t$, we have $ L(\bx_t +\eta_t \bd_t) \leq L(\bx_t +\eta_t \bd_t \pm \delta \bd_t)$ for any $\delta \neq 0$; choosing the sign in $\pm\delta$ to make $\pm \delta\, \bd_t^\top \nabla L(\bx_t +\eta_t \bd_t)$ negative would contradict this inequality unless $$ \bd_t^\top \nabla L(\bx_t +\eta_t \bd_t)=0.$$ We complete the proof. \end{proof} In line search methods, the loss function at iteration $t$ can be expressed in terms of $\eta$ as follows: $$ J(\eta) = L(\bx_t + \eta \bd_t). $$ Consequently, the problem can be formulated as finding $$ \eta_t = \underset{\eta}{\arg\min} \,\, L(\bx_t + \eta \bd_t)=\underset{\eta}{\arg\min} \,\, J(\eta). $$ This indicates that the (local) minimizer $\eta$ can be obtained by finding the solution of $J^\prime(\eta)=0$ if $J(\eta)$ is differentiable, according to Fermat's theorem (see Exercise~\ref{problem:fist_opt}, p.~\pageref{problem:fist_opt}). The solution then follows that \begin{equation}\label{equation:j_eta_ajmijo} J^\prime(\eta) =\bd_t^\top \nabla L(\bx_t+\eta\bd_t)=0, \end{equation} which reaffirms Lemma~\ref{lemm:linear-search-orghonal}. When $\eta=0$, we have (by Remark~\ref{remark:descent_condition}, p.~\pageref{remark:descent_condition}) \begin{equation}\label{equation:linesearc-eta0} J^\prime (0) = \bd_t^\top \bg_t\leq 0. \end{equation} A crucial property in typical line search settings is that the loss function $J(\eta)$, when expressed in terms of $\eta$, is often a unimodal function. If a value $\eta_{\max}$ is identified such that $J^\prime(\eta_{\max}) > 0$, the optimal learning rate then lies in the range $[0, \eta_{\max}]$. Line search methods then find the optimal $\eta$ within this range, satisfying the optimality condition $J^\prime(\eta)=0$. Now, we introduce some prominent line search approaches: bisection line search, golden-section line search, and the Armijo rule.
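Lemma~\ref{lemm:linear-search-orghonal} can be checked directly for a quadratic loss. The following is a minimal Python sketch; the symmetric matrix, the point, and the direction are hypothetical choices, and the exact minimizing step $\eta=-\bd^\top\bg/(\bd^\top\bA\bd)$ for a quadratic (derived later for the quadratic form) is used:

```python
def matvec(A, v):
    # Matrix-vector product for a list-of-lists matrix.
    return [sum(a * vi for a, vi in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Hypothetical quadratic L(x) = 0.5 x^T A x - b^T x with symmetric A.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = [2.0, -1.0]

g = [gi - bi for gi, bi in zip(matvec(A, x), b)]   # gradient A x - b
d = [-gi for gi in g]                              # descent direction -g

eta = -dot(d, g) / dot(d, matvec(A, d))            # exact line-search step
x_next = [xi + eta * di for xi, di in zip(x, d)]

g_next = [gi - bi for gi, bi in zip(matvec(A, x_next), b)]
assert abs(dot(g_next, d)) < 1e-12                 # new gradient is orthogonal to d
```

The final assertion is exactly the statement of the lemma: after an exact line search, the new gradient is orthogonal to the search direction.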
\index{Bisection line search} \section{Bisection Line Search} In the \textit{bisection line search} method, we start by setting the interval $[a,b]$ as $[\eta_{\min}, \eta_{\max}]$, where $\eta_{\min}$ and $ \eta_{\max}$ serve as the lower and upper bounds, respectively, for the learning rate $\eta$ ($\eta_{\min}$ can be set to 0 as specified by Eq.~\eqref{equation:linesearc-eta0}). The bisection line search involves evaluating the derivative $J^\prime(\eta)$ at the midpoint $\frac{a+b}{2}$. Given the information that $J^\prime(a)<0$ and $J^\prime(b)>0$, the bisection line search follows that $$ \left\{ \begin{aligned} \text{set } a &:= \frac{a+b}{2} \text{, \gap if $J^\prime\left(\frac{a+b}{2}\right)<0$}; \\ \text{set } b &:= \frac{a+b}{2} \text{, \gap if $J^\prime\left(\frac{a+b}{2} \right)>0$}. \end{aligned} \right. $$ The procedure is repeated until the interval between $a$ and $b$ becomes sufficiently small. The bisection line search is also known as the \textit{binary line search}. In some cases, the derivative of $J(\eta)$ cannot be easily obtained; the interval is then narrowed by evaluating the objective function at two closely spaced points around $\frac{a+b}{2} $. To be more concrete, assuming $J(\eta)$ is convex (since we are in the descent setting), we evaluate the loss function at $\frac{a+b}{2}$ and $\frac{a+b}{2}+\epsilon$, where $\epsilon$ is a numerically small value, e.g., $\epsilon=10^{-8}$. This allows us to determine whether the function is increasing or decreasing at $\frac{a+b}{2}$ by checking which of the two evaluations is larger. If the function is increasing at $\frac{a+b}{2}$, the interval is narrowed to $[a,\frac{a+b}{2}+\epsilon]$; otherwise, it is narrowed to $[\frac{a+b}{2}, b]$. $$ \left\{ \begin{aligned} \text{set } b &= \frac{a+b}{2}+\epsilon, &\text{ \gap if increasing at $\frac{a+b}{2}$};\\ \text{set } a &= \frac{a+b}{2}, &\text{ \gap otherwise}. \\ \end{aligned} \right.
$$ This iterative process continues until the interval is sufficiently small or the required level of accuracy is achieved. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[$\eta=a$ yields the minimum.]{\label{fig:golden_1} \includegraphics[width=0.23\linewidth]{./imgs/golden_1.pdf}} \subfigure[$\eta=c_1$ yields the minimum.]{\label{fig:golden_2} \includegraphics[width=0.23\linewidth]{./imgs/golden_2.pdf}} \subfigure[$\eta=c_2$ yields the minimum.]{\label{fig:golden_3} \includegraphics[width=0.23\linewidth]{./imgs/golden_3.pdf}} \subfigure[$\eta=b$ yields the minimum.]{\label{fig:golden_4} \includegraphics[width=0.23\linewidth]{./imgs/golden_4.pdf}} \caption{Demonstration of the four different update cases in golden-section line search.} \label{fig:conjguatecy-golden_1234} \end{figure} \index{Golden-section line search} \section{Golden-Section Line Search} Similar to the bisection line search, the \textit{golden-section line search} also identifies the best learning rate $\eta$ for a unimodal function $J(\eta)$. Again, it starts with the interval $[a,b]$ as $[0, \eta_{\max}]$. However, instead of selecting a midpoint, the golden-section search designates a pair of points $c_1, c_2$ satisfying $a<c_1<c_2<b$. The procedure is as follows: if $\eta=a$ results in the minimum value of $J(\eta)$ (among the four values $J(a), J(c_1), J(c_2)$, and $J(b)$), we can exclude the interval $(c_1, b]$; if $\eta=c_1$ yields the minimum value, we can exclude the interval $(c_2, b]$; if $\eta=c_2$ yields the minimum value, we can exclude the interval $[a, c_1)$; and if $\eta=b$ yields the minimum value, we can exclude the interval $[a, c_2)$. The four situations are shown in Figure~\ref{fig:conjguatecy-golden_1234}. In other words, at least one of the intervals $[a,c_1]$ and $[c_2, b]$ can be discarded in the golden-section search method.
In summary, we have $$ \begin{aligned} &\text{when $J(a)$ is the minimum, exclude $(c_1, b]$};\\ &\text{when $J(c_1)$ is the minimum, exclude $(c_2, b]$};\\ &\text{when $J(c_2)$ is the minimum, exclude $[a, c_1)$};\\ &\text{when $J(b)$ is the minimum, exclude $[a, c_2)$}.\\ \end{aligned} $$ By excluding one of the four intervals, the new bounds $[a, b]$ are adjusted accordingly, and the process iterates until the range is sufficiently small. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{imgs/armijo_convex.pdf} \caption{Demonstration of Armijo rule in a convex setting.} \label{fig:armijo_convex} \end{figure} \index{Armijo rule} \section{Armijo Rule} Similar to Eq.~\eqref{equation:orthogonal-line-search}, using Taylor's formula once again, we have $$ J(\eta)=L(\bx_t +\eta \bd_t ) \approx L(\bx_t )+ \eta \bd_t^\top \nabla L(\bx_t ). $$ Since $\bd_t^\top \nabla L(\bx_t ) \leq 0$ (by Remark~\ref{remark:descent_condition}, p.~\pageref{remark:descent_condition}), it follows that, for sufficiently small $\eta$, \begin{equation}\label{equation:armijo_step_approx} L(\bx_t + \eta \bd_t) \leq L(\bx_t) + \alpha \eta \cdot \bd_t^\top \nabla L(\bx_t), \gap \alpha \in (0,1). \end{equation} Let $\widetilde{J}(\eta) = J(0)+ J^\prime(0) \cdot \eta$ \footnote{The tangent of $J(\eta)$ at $\eta=0$.} and $\widehat{J}(\eta) = J(0)+ \alpha J^\prime(0) \cdot \eta$; the relationship between the two functions is depicted in Figure~\ref{fig:armijo_convex} for the case where $J(\eta)$ is a convex function, and we note that $\widehat{J}(\eta) > \widetilde{J}(\eta)$ when $\eta>0$. The Armijo rule states that an acceptable $\eta$ should satisfy $J(\widehat{\eta}) \leq \widehat{J}(\widehat{\eta})$ to ensure sufficient decrease and $J( \widehat{\eta}/\beta) > \widehat{J}(\widehat{\eta}/\beta)$ to prevent the step size from being too small, where $\beta\in (0,1)$. This ensures that the (local) optimal learning rate lies in the range $[\widehat{\eta}, \widehat{\eta}/\beta)$.
By Eq.~\eqref{equation:armijo_step_approx}, the two criteria above can also be described by: $$ \left\{ \begin{aligned} J(\widehat{\eta}) &\leq \widehat{J}(\widehat{\eta}); \\ J( \widehat{\eta}/\beta) &> \widehat{J}(\widehat{\eta}/\beta), \end{aligned} \right. \Longrightarrow \gap \left\{ \begin{aligned} L(\bx_t + \widehat{\eta} \bd_t) - L(\bx_t)&\leq \alpha \widehat{\eta} \cdot \bd_t^\top\nabla L(\bx_t); \\ L(\bx_t + (\widehat{\eta}/\beta) \bd_t) - L(\bx_t)&> \alpha \widehat{\eta}/\beta \cdot \bd_t^\top\nabla L(\bx_t) . \end{aligned} \right. $$ The complete algorithm for calculating the learning rate at the $t$-th iteration is outlined in Algorithm~\ref{alg:als_armijo}. In practice, the parameters are typically set as $\beta \in [0.2, 0.5]$ and $\alpha\in [10^{-5}, 0.5]$. Additionally, it is worth noting that the Armijo rule is inexact, and it works even when $J(\eta)$ is not unimodal. After developing the Armijo algorithm, the underlying concept of the Armijo rule becomes apparent: the rate of improvement along the descent direction, $J^\prime(0)$ at the starting point $\eta=0$, often deteriorates as we move further along this direction; however, securing a fraction $\alpha\in [10^{-5}, 0.5]$ of this initial rate of improvement is acceptable. By Eq.~\eqref{equation:armijo_step_approx}, the loss after the descent update, $L(\bx_t + \eta \bd_t)$, is at least $\alpha \eta \cdot |\bd_t^\top \nabla L(\bx_t)|$ smaller than the loss at the $t$-th iteration.
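The backtracking procedure of Algorithm~\ref{alg:als_armijo} can be sketched in a few lines of code. The following is a minimal Python sketch; the quadratic loss and the parameter values $s=1$, $\alpha=0.3$, $\beta=0.5$ are hypothetical choices:

```python
def armijo_step(L, grad_L, x, d, s=1.0, alpha=0.3, beta=0.5):
    # Backtrack eta = s, beta*s, beta^2*s, ... until the sufficient-decrease
    # condition L(x + eta d) - L(x) <= alpha * eta * d^T grad_L(x) holds.
    g = grad_L(x)
    slope = sum(di * gi for di, gi in zip(d, g))   # d^T grad, <= 0 for descent d
    eta = s
    while L([xi + eta * di for xi, di in zip(x, d)]) - L(x) > alpha * eta * slope:
        eta *= beta
    return eta

# Hypothetical quadratic loss and a steepest-descent direction.
L = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: [2.0 * x[0], 20.0 * x[1]]
x0 = [1.0, 1.0]
d0 = [-gi for gi in grad(x0)]
eta0 = armijo_step(L, grad, x0, d0)
```

Because the sufficient-decrease condition holds for all sufficiently small $\eta$ whenever $\bd_t$ is a descent direction, the loop is guaranteed to terminate.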
\begin{algorithm}[h] \caption{Armijo Rule at the $t$-th Iteration} \label{alg:als_armijo} \begin{algorithmic}[1] \Require Start with $\eta_t=s$, $0<\beta<1$, and $0<\alpha<1$; \State isStop = False; \While{isStop is False} \If{$L(\bx_t + \eta_t \bd_t) - L(\bx_t) \leq \alpha \eta_t \cdot \bd_t^\top \nabla L(\bx_t)$} \State isStop = True; \Else \State $\eta_t= \beta \eta_t$; \EndIf \EndWhile \State Output $\eta_t$; \end{algorithmic} \end{algorithm} \index{Steepest descent} \index{Quadratic form} \section{Quadratic Form in Steepest Descent}\label{section:quadratic-in-steepestdescent} Following the discussion of the quadratic form in the gradient descent section (Section~\ref{section:quadratic_vanilla_GD}, p.~\pageref{section:quadratic_vanilla_GD}), we now discuss the quadratic form in gradient descent with line search. By definition, we express $J(\eta)$ as follows: $$ \begin{aligned} J(\eta) = L(\bx_t+\eta \bd_t) &= \frac{1}{2} (\bx_t+\eta \bd_t)^\top \bA (\bx_t+\eta \bd_t) - \bb^\top (\bx_t+\eta \bd_t) +c\\ &= L(\bx_t) + \eta \bd_t^\top \underbrace{\left(\frac{1}{2} (\bA+\bA^\top )\bx_t - \bb \right)}_{=\nabla L(\bx_t)=\bg_t} +\frac{1}{2} \eta^2 \bd_t^\top \bA\bd_t, \end{aligned} $$ which is a quadratic function with respect to $\eta$ for which a closed form for the line search exists: \begin{equation}\label{equation:eta-gd-steepest} \eta_t = - \frac{\bd_t^\top \bg_t}{ \bd_t^\top \bA\bd_t }. \end{equation} Note that $\bd_t^\top \bA\bd_t = \frac{1}{2}\bd_t^\top (\bA+\bA^\top)\bd_t >0 $ for $\bd_t \neq \bzero $ whenever the symmetric part of $\bA$ is positive definite, so $\eta_t$ is indeed the minimizer of $J(\eta)$. As previously mentioned, when the search direction is the negative gradient $\bd_t=-\bg_t$, the method is known as the \textit{steepest descent}.
Consequently, the descent update becomes \begin{equation}\label{equation:steepest-quadratic} \begin{aligned} \text{Steepest Descent: \gap }\bx_{t+1} &= \bx_t + \eta_t \bd_t \\ &= \bx_t - \frac{\bd_t^\top \bg_t}{ \bd_t^\top \bA\bd_t } \bd_t \\ & =\bx_t - \frac{\bg_t^\top \bg_t}{ \bg_t^\top \bA\bg_t } \bg_t, \end{aligned} \end{equation} where $\bd_t = -\bg_t$ for the gradient descent case. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Contour and the descent direction. The red dot is the optimal point.]{\label{fig:quadratic_steepest_contour_tilt} \includegraphics[width=0.485\linewidth]{imgs/quadratic_steepest_contour_tilt.pdf}} \subfigure[Intersection of the loss surface and vertical plane through the descent direction.]{\label{fig:quadratic_steepest_surface_tilt} \includegraphics[width=0.485\linewidth]{imgs/quadratic_steepest_surface_tilt.pdf}} \subfigure[Intersection of the loss surface and vertical plane through the descent direction in two-dimensional space.]{\label{fig:quadratic_steepest_tilt_intersection} \includegraphics[width=0.485\linewidth]{imgs/quadratic_steepest_tilt_intersection.pdf}} \subfigure[Various gradients on the line through the descent direction, where the gradient at the bottommost point is orthogonal to the gradient of the previous step.]{\label{fig:quadratic_steepest_tilt_gradient_direction} \includegraphics[width=0.485\linewidth]{imgs/quadratic_steepest_tilt_gradient_direction.pdf}} \caption{Illustration for the line search of quadratic form with $\bA=\begin{bmatrix} 20 & 7 \\ 5 & 5 \end{bmatrix}$, $\bb=\bzero$, and $c=0$. The procedure is at $\bx_t=[-3,3.5]^\top$ for $t$-th iteration. 
} \label{fig:quadratic_steepest_tilt} \end{figure} \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Vanilla GD, $\eta=0.02$.]{\label{fig:momentum_gd_conjugate2} \includegraphics[width=0.31\linewidth]{./imgs/steepest_gd_mom-0_lrate-2.pdf}} \subfigure[Vanilla GD, $\eta=0.08$.]{\label{fig:momentum_gd_conjugate8} \includegraphics[width=0.31\linewidth]{./imgs/steepest_gd_mom-0_lrate-8.pdf}} \subfigure[Steepest descent.]{\label{fig:conjguatecy_zigzag2} \includegraphics[width=0.31\linewidth]{./imgs/steepest_gd_bisection.pdf}} \caption{Illustration for the line search of quadratic form with $\bA=\begin{bmatrix} 20 & 7 \\ 5 & 5 \end{bmatrix}$, $\bb=\bzero$, and $c=0$. The procedure is at $\bx_t=[-3,3.5]^\top$ for $t$-th iteration. The example is executed for 10 iterations by vanilla GD with $\eta=0.02$, vanilla GD with $\eta=0.08$, and steepest descent, respectively. We notice the tedious choices in vanilla GD; and the zigzag path in the steepest GD with line search due to the orthogonality between each gradient and the previous gradient (Lemma~\ref{lemm:linear-search-orghonal}). } \label{fig:quadratic_steepest_tilt22} \end{figure} A concrete example is presented in Figure~\ref{fig:quadratic_steepest_tilt}, where $\bA=\begin{bmatrix} 20 & 7 \\ 5 & 5 \end{bmatrix}$, $\bb=\bzero$, and $c=0$. Suppose at the $t$-th iteration, the parameter is positioned at $\bx_t=[-3,3.5]^\top$. Additionally, Figure~\ref{fig:quadratic_steepest_contour_tilt} presents the descent direction via the negative gradient. The line search involves selecting the learning rate $\eta_t$ by minimizing $J(\eta) = L(\bx_t + \eta\bd_t)$, which is equivalent to determining the point on the intersection of the vertical plane through the descent direction and the paraboloid defined by the loss function $L(\bx)$, as shown in Figure~\ref{fig:quadratic_steepest_surface_tilt}. 
Figure~\ref{fig:quadratic_steepest_tilt_intersection} further shows the parabola defined by the intersection of the two surfaces. In Figure~\ref{fig:quadratic_steepest_tilt_gradient_direction}, various gradients on the line through the descent direction are displayed, where the gradient at the bottommost point is orthogonal to the search direction of the previous step, i.e., $\nabla L(\bx_{t+1})^\top \bd_t = 0$, as proved in Lemma~\ref{lemm:linear-search-orghonal}; the black arrows are the gradients, and the blue arrows are the projections of these gradients along $\bd_t = -\nabla L(\bx_t)$. An intuitive explanation for this orthogonality at the minimum is as follows: the slope of the parabola (Figure~\ref{fig:quadratic_steepest_tilt_intersection}) at any point is equal to the magnitude of the projection of the gradient onto the search direction (Figure~\ref{fig:quadratic_steepest_tilt_gradient_direction}) \citep{shewchuk1994introduction}. These projections represent the rate of increase of the loss function $L(\bx)$ as the point traverses the search line; the minimum of $L(\bx)$ along the line occurs where the projection is zero, corresponding to the point where the gradient is orthogonal to the search line. The example is executed for 10 iterations in Figure~\ref{fig:momentum_gd_conjugate2}, Figure~\ref{fig:momentum_gd_conjugate8}, and Figure~\ref{fig:conjguatecy_zigzag2} using vanilla GD with $\eta=0.02$, vanilla GD with $\eta=0.08$, and steepest descent, respectively. It is evident that vanilla GD involves cumbersome tuning of the learning rate, while the steepest GD with line search exhibits a zigzag trajectory resulting from the orthogonality between each gradient and the previous gradient (Lemma~\ref{lemm:linear-search-orghonal}). However, this limitation will be partly addressed by conjugate descent (Section~\ref{section:conjugate-descent}, p.~\pageref{section:conjugate-descent}).
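The steepest descent iteration with the closed-form step size can be sketched numerically. The following minimal Python snippet uses the matrix and starting point of the running example (the 50-iteration budget is illustrative; the text's figures use 10 iterations), and also checks the orthogonality of each new gradient to the previous search direction:

```python
import numpy as np

# Quadratic from the running example: L(x) = 0.5 x^T A x - b^T x + c.
A = np.array([[20.0, 7.0], [5.0, 5.0]])
b = np.zeros(2)

def grad(x):
    # For a (possibly nonsymmetric) A, the gradient is 0.5*(A + A^T)x - b.
    return 0.5 * (A + A.T) @ x - b

x = np.array([-3.0, 3.5])      # starting point x_t of the example
for _ in range(50):
    g = grad(x)
    d = -g                                # steepest descent direction
    eta = -(d @ g) / (d @ A @ d)          # closed-form exact line search
    x_new = x + eta * d
    # Consecutive gradients are orthogonal under exact line search.
    assert abs(grad(x_new) @ d) < 1e-6
    x = x_new

assert np.linalg.norm(x) < 1e-3           # approaches the minimizer x* = 0
```

The orthogonality assertion inside the loop is exactly the source of the zigzag path: each step must turn at a right angle to the previous one.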
\index{Spectral decomposition} \index{Quadratic form} \subsubsection{Special Case: Symmetric Quadratic Form} To delve into the convergence results of steepest descent, we explore specific scenarios. Following \citet{shewchuk1994introduction}, we first introduce some key definitions: the \textit{error vector} between the parameter $\bx_t$ at the $t$-th iteration and the optimal parameter point $\bx_\star$, and the \textit{residual vector} between the target $\bb$ and the prediction $\bA\bx_t$ at the $t$-th iteration. \begin{definition}[Error and Residual Vector]\label{definition:error-gd-} At iteration $t$, the \textit{error vector} is defined as $\be_t = \bx_t - \bx_\star$, a vector that indicates how far the iterate is from the solution, where $\bx_\star =\bA^{-1}\bb$ when $\bA$ is symmetric and nonsingular. Substituting into Eq.~\eqref{equation:steepest-quadratic}, the update for the error vector is \begin{equation}\label{equation:steepest-quadratic-error} \be_{t+1} = \be_t - \frac{\bg_t^\top \bg_t}{ \bg_t^\top \bA\bg_t } \bg_t. \end{equation} Furthermore, the \textit{residual} $\br_t = \bb - \bA\bx_t$ indicates how far the iterate is from the correct value of $\bb$. Note that in this case, the residual is equal to both the negative gradient and the descent direction, i.e., $\br_t = \bd_t = -\bg_t$, when $\bA$ is symmetric (we may use $-\bg_t$ and $\br_t$ interchangeably when $\bA$ is symmetric). \end{definition} We first consider the case where the error vector $\be_t$ at iteration $t$ is an eigenvector of $\bA$ corresponding to eigenvalue $\lambda_t$, i.e., $\bA\be_t = \lambda_t\be_t $. Then the gradient vector (for symmetric $\bA$ by Eq.~\eqref{equation:symmetric_gd_gradient}, p.~\pageref{equation:symmetric_gd_gradient}) $$ \bg_t = \bA\bx_t-\bb = \bA\left(\bx_t-\underbrace{\bA^{-1}\bb}_{\bx_\star}\right) =\bA\be_t = \lambda_t\be_t $$ is also an eigenvector of $\bA$ corresponding to eigenvalue $\lambda_t$, i.e., $\bA\bg_t = \lambda_t \bg_t$.
By Eq.~\eqref{equation:steepest-quadratic-error}, the update for $(t+1)$-th iteration is $$ \begin{aligned} \be_{t+1} &= \be_t - \frac{\bg_t^\top \bg_t}{ \bg_t^\top \bA\bg_t } \bg_t\\ &=\be_t - \frac{\bg_t^\top \bg_t}{ \lambda_t \bg_t^\top \bg_t } (\lambda_t\be_t) = \bzero . \end{aligned} $$ Therefore, convergence to the solution is achieved in just one additional step when $\be_t$ is an eigenvector of $\bA$. A concrete example is shown in Figure~\ref{fig:steepest_gd_bisection_eigenvector}, where $\bA=\begin{bmatrix} 20 & 5 \\ 5 & 5 \end{bmatrix}$, $\bb=\bzero$, and $c=0$. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Steepest GD, $\bA\be_t=\lambda_t\be_t$.]{\label{fig:steepest_gd_bisection_eigenvector} \includegraphics[width=0.481\linewidth]{./imgs/steepest_gd_bisection_eigenvector.pdf}} \subfigure[Steepest GD, $\lambda_1=\lambda_2$.]{\label{fig:steepest_gd_bisection_eigenvector_sameeigenvalue} \includegraphics[width=0.481\linewidth]{./imgs/steepest_gd_bisection_sameeigenvalue.pdf}} \caption{Illustration of special cases for GD with line search of quadratic form. $\bA=\begin{bmatrix} 20 & 5 \\ 5 & 5 \end{bmatrix}$, $\bb=\bzero$, $c=0$, and starting point to descent is $\bx_t=[-1.3, 4.3]^\top$ for Fig~\ref{fig:steepest_gd_bisection_eigenvector}. 
$\bA=\begin{bmatrix} 20 & 0 \\ 0 & 20 \end{bmatrix}$, $\bb=\bzero$, $c=0$, and starting point to descent is $\bx_t=[-1, 2]^\top$ for Fig~\ref{fig:steepest_gd_bisection_eigenvector_sameeigenvalue}.} \label{fig:steepest_specialcases} \end{figure} \subsubsection{Special Case: Symmetric with Orthogonal Eigenvectors} When $\bA$ is symmetric, it admits spectral decomposition (Theorem 13.1 in \citet{lu2022matrix} or Appendix~\ref{appendix:spectraldecomp}, p.~\pageref{appendix:spectraldecomp}): $$ \bA=\bQ\bLambda\bQ^\top \in \real^{d\times d} \leadto \bA^{-1} = \bQ\bLambda^{-1}\bQ^\top, $$ where the columns of $\bQ = [\bq_1, \bq_2, \ldots , \bq_d]$ are eigenvectors of $\bA$ and are mutually orthonormal, and the entries of $\bLambda = \diag(\lambda_1, \lambda_2, \ldots , \lambda_d)$ with $\lambda_1\geq \lambda_2\geq \ldots \geq \lambda_d$ are the corresponding eigenvalues of $\bA$, which are real. Since the eigenvectors are chosen to be mutually orthonormal: $$ \bq_i^\top \bq_j = \left\{ \begin{aligned} &1, \gap i=j;\\ &0, \gap i\neq j, \end{aligned} \right. $$ the eigenvectors also span the entire space $\real^d$ such that every error vector $\be_t \in \real^d$ can be expressed as a linear combination of the eigenvectors: \begin{equation}\label{equation:steepest-et-eigen-decom} \be_t = \sum_{i=1}^{d} \alpha_i \bq_i, \end{equation} where $\alpha_i$ indicates the component of $\be_t$ in the direction of $\bq_i$. Then the gradient vector (for symmetric $\bA$ by Eq.~\eqref{equation:symmetric_gd_gradient}, p.~\pageref{equation:symmetric_gd_gradient}) can be obtained by \begin{equation}\label{equation:steepest-eigen-decom-part} \begin{aligned} \bg_t &= \bA\bx_t-\bb =\bA\be_t = \bA \sum_{i=1}^{d} \alpha_i \bq_i\\ &= \sum_{i=1}^{d} \alpha_i \lambda_i\bq_i, \end{aligned} \end{equation} i.e., a linear combination of eigenvectors with length at the $i$-th dimension being $\alpha_i\lambda_i$. 
Again by Eq.~\eqref{equation:steepest-quadratic-error}, the update for the $(t+1)$-th iteration is $$ \begin{aligned} \be_{t+1} &= \be_t - \frac{\bg_t^\top \bg_t}{ \bg_t^\top \bA\bg_t } \bg_t =\be_t - \frac{\sum_{i=1}^{d}\alpha_i^2\lambda_i^2}{ \sum_{i=1}^{d}\alpha_i^2\lambda_i^3 } \sum_{i=1}^{d} \alpha_i \lambda_i\bq_i . \end{aligned} $$ The above equation indicates that when only one component of the $\alpha_i$'s is nonzero, say $\alpha_j$, the fraction reduces to $1/\lambda_j$ and $\be_{t+1} = \be_t - \alpha_j \bq_j = \bzero$; that is, convergence is achieved in only one step, as illustrated in Figure~\ref{fig:steepest_gd_bisection_eigenvector}. More specifically, when $\lambda_1=\lambda_2=\ldots =\lambda_d=\lambda$, i.e., all the eigenvalues are the same, it then follows that $$ \begin{aligned} \be_{t+1} &= \be_t - \frac{\bg_t^\top \bg_t}{ \bg_t^\top \bA\bg_t } \bg_t\\ &=\be_t - \frac{\sum_{i=1}^{d}\alpha_i^2}{ \sum_{i=1}^{d}\alpha_i^2 } \be_t \\ &= \bzero . \end{aligned} $$ Therefore, it takes only one further step to converge to the solution for any arbitrary $\be_t$. A specific example is shown in Figure~\ref{fig:steepest_gd_bisection_eigenvector_sameeigenvalue}, where $\bA=\begin{bmatrix} 20 & 0 \\ 0 & 20 \end{bmatrix}$, $\bb=\bzero$, and $c=0$. \subsubsection{General Convergence Analysis for Symmetric PD Quadratic}\label{section:general-converg-steepest} To delve into the general convergence results, we further define the \textit{energy norm} for the error vector, denoted by $\norm{\be}_{\bA} = (\be^\top\bA\be)^{1/2}$. It can be shown that minimizing $\norm{\be_t}_{\bA}$ is equivalent to minimizing $L(\bx_t)$ due to the relation: \begin{equation}\label{equation:energy-norm-equivalent} \norm{\be_t}_{\bA}^2 = 2L(\bx_t) \underbrace{- 2L(\bx_\star)}_{\text{constant}}.
\end{equation} With the definition of the energy norm, Eq.~\eqref{equation:steepest-quadratic-error}, and the symmetric positive definiteness of $\bA$, the update on the energy norm sequence is expressed as follows: \begin{equation}\label{equation:energy-norm-leq} \begin{aligned} \norm{\be_{t+1}}_{\bA}^2 &= \be_{t+1}^\top \bA\be_{t+1} \\ &= \left(\be_t - \frac{\bg_t^\top \bg_t}{ \bg_t^\top \bA\bg_t } \bg_t\right)^\top \bA \left(\be_t - \frac{\bg_t^\top \bg_t}{ \bg_t^\top \bA\bg_t } \bg_t\right)\\ &=\norm{\be_t}_{\bA}^2 + \left(\frac{\bg_t^\top \bg_t}{ \bg_t^\top \bA\bg_t }\right)^2 \bg_t^\top \bA\bg_t - 2 \frac{\bg_t^\top \bg_t}{ \bg_t^\top \bA\bg_t } \bg_t^\top \bA \be_t \\ &=\norm{\be_t}_{\bA}^2 - \frac{(\bg_t^\top \bg_t)^2}{ \bg_t^\top \bA\bg_t } \gap\gap&(\bA\be_t = \bg_t)\\ &=\norm{\be_t}_{\bA}^2 \cdot\left(1- \frac{(\bg_t^\top \bg_t)^2}{ \bg_t^\top \bA\bg_t \cdot \be_t^\top \bA\be_t}\right)\\ &=\norm{\be_t}_{\bA}^2 \cdot\left(1- \frac{(\sum_{i=1}^{d}\alpha_i^2\lambda_i^2)^2}{(\sum_{i=1}^{d}\alpha_i^2\lambda_i^3) \cdot (\sum_{i=1}^{d}\alpha_i^2\lambda_i)}\right) \gap\gap&(\text{by Eq.~\eqref{equation:steepest-et-eigen-decom}, Eq.~\eqref{equation:steepest-eigen-decom-part}} )\\ &=\norm{\be_t}_{\bA}^2 \cdot r^2, \end{aligned} \end{equation} where $r^2 =\left(1- \frac{(\sum_{i=1}^{d}\alpha_i^2\lambda_i^2)^2}{(\sum_{i=1}^{d}\alpha_i^2\lambda_i^3) \cdot (\sum_{i=1}^{d}\alpha_i^2\lambda_i)}\right)$ determines the rate of convergence. As per convention, we assume $\lambda_1 \geq \lambda_2\geq \ldots \geq \lambda_d>0$, i.e., the eigenvalues are ordered in magnitude and positive since $\bA$ is positive definite. Then the condition number is defined as $\kappa = \frac{\lambda_1}{\lambda_d}$. Additionally, let $\kappa_i = \lambda_i/\lambda_d$ and $\sigma_i = \alpha_i/\alpha_1$. 
It follows that $$ r^2 = \left(1- \frac{ (\kappa^2+ \sum_{i=\textcolor{mylightbluetext}{2}}^{d}\sigma_i^2\kappa_i^2)^2} { (\kappa^3 + \sum_{i=\textcolor{mylightbluetext}{2}}^{d}\sigma_i^2\kappa_i^3) \cdot ( \kappa +\sum_{i=\textcolor{mylightbluetext}{2}}^{d}\sigma_i^2\kappa_i)} \right). $$ Therefore, the rate of convergence is further controlled by $\kappa$, the $\sigma_i$'s, and the $\kappa_i$'s, where $\kappa_i\geq 1$ for $i\in \{2,3,\ldots,d\}$. \paragraph{Two-dimensional case.} Specifically, when $d=2$, we have \begin{equation}\label{equation:2d-rate-steepest} r^2= 1- \frac{ (\kappa^2+ \sigma_2^2)^2} { (\kappa^3 + \sigma_2^2) \cdot ( \kappa +\sigma_2^2)} . \end{equation} Figure~\ref{fig:converge_contour_steepest} depicts the value $r^2$ as a function of $\kappa$ and $\sigma_2$. When $d=2$, from Eq.~\eqref{equation:steepest-et-eigen-decom}, we have \begin{equation}\label{equation:steepest-et-eigen-decom-2d} \be_t = \alpha_1 \bq_1+\alpha_2\bq_2. \end{equation} This confirms the two special examples shown in Figure~\ref{fig:steepest_specialcases}: when $\be_t$ is an eigenvector of $\bA$, it follows that: $$ \begin{aligned} \text{case 1: } &\alpha_2=0\leadto \sigma_2 = \alpha_2/\alpha_1 \rightarrow 0;\\ \text{case 2: } &\alpha_1=0\leadto \sigma_2 = \alpha_2/\alpha_1 \rightarrow \infty, \end{aligned} $$ i.e., when $\sigma_2$ is either zero or infinite, the rate of convergence is zero and the method converges in just one step (example in Figure~\ref{fig:steepest_gd_bisection_eigenvector}). Similarly, if the eigenvalues are identical, then $\kappa=1$ and, once again, the rate of convergence is zero (example in Figure~\ref{fig:steepest_gd_bisection_eigenvector_sameeigenvalue}). \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{imgs/converge_contour_steepest.pdf} \caption{Demonstration of the rate of convergence in the steepest descent method with a two-dimensional parameter.
When $\sigma_2=0, \infty$, or $\kappa=1$, the rate of convergence is 0, so that the update converges in a single step. The cases correspond to $\be_t$ being an eigenvector of $\bA$ and the eigenvalues being identical, respectively; see the examples in Figure~\ref{fig:steepest_specialcases}.} \label{fig:converge_contour_steepest} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{imgs/rate_convergen_steepest.pdf} \caption{Upper bound on the rate of convergence (per iteration) in the steepest descent method with a two-dimensional parameter. The $y$-axis is $\frac{\kappa-1}{\kappa+1}$.} \label{fig:rate_convergen_steepest} \end{figure} \index{Rate of convergence} \paragraph{Worst case.} We recall that $\sigma_2 = \alpha_2/\alpha_1$ determines the error vector $\be_t$ (Eq.~\eqref{equation:steepest-et-eigen-decom} or Eq.~\eqref{equation:steepest-et-eigen-decom-2d}), which, in turn, decides the point $\bx_t$ in the two-dimensional case. It is then interesting to identify the worst point from which to descend. Holding $\kappa$ fixed (i.e., $\bA$ and the loss function $L(\bx)$ are fixed), and supposing further that $t=1$, we seek the worst starting point $\bx_1$ for the descent. It can be shown that the rate of convergence in Eq.~\eqref{equation:2d-rate-steepest} is maximized when $\sigma_2=\pm \kappa$: $$ \begin{aligned} r^2 &\leq 1-\frac{4 \kappa^4}{\kappa^5+2\kappa^4+\kappa^3}= \frac{(\kappa-1)^2}{(\kappa+1)^2}. \end{aligned} $$ Substituting into Eq.~\eqref{equation:energy-norm-leq}, we have $$ \begin{aligned} \norm{\be_{t+1}}_{\bA}^2 &\leq \norm{\be_t}_{\bA}^2 \cdot \frac{(\kappa-1)^2}{(\kappa+1)^2},\\ \leadto &\norm{\be_{t+1}}_{\bA} \leq \norm{\be_1}_{\bA} \cdot\left(\frac{\kappa-1}{\kappa+1}\right)^t. \end{aligned} $$ The upper bound of the rate of convergence (per iteration) is plotted in Figure~\ref{fig:rate_convergen_steepest}. Once again, the more \textit{ill-conditioned} the matrix, the slower the convergence of steepest descent.
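The contraction identity in Eq.~\eqref{equation:energy-norm-leq} and the worst-case bound above can be checked numerically. The following sketch (the random symmetric positive definite matrix, its size, and the seed are illustrative) compares the observed energy-norm ratio at each step with the eigenvalue formula for $r^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)          # random symmetric positive definite A
b = rng.standard_normal(4)
x_star = np.linalg.solve(A, b)

lam, Q = np.linalg.eigh(A)           # eigenvalues in ascending order
kappa = lam[-1] / lam[0]             # condition number

x = rng.standard_normal(4)
for _ in range(5):
    e = x - x_star
    g = A @ x - b
    energy = e @ A @ e                       # ||e_t||_A^2
    x = x - (g @ g) / (g @ A @ g) * g        # steepest descent step
    e_new = x - x_star
    ratio = (e_new @ A @ e_new) / energy     # observed contraction r^2
    # Predicted r^2 from the eigen-expansion e_t = sum_i alpha_i q_i.
    alpha = Q.T @ e
    num = (alpha**2 * lam**2).sum() ** 2
    den = (alpha**2 * lam**3).sum() * (alpha**2 * lam).sum()
    r2 = 1 - num / den
    assert abs(ratio - r2) < 1e-8
    # Worst-case bound: r^2 <= ((kappa-1)/(kappa+1))^2.
    assert ratio <= ((kappa - 1) / (kappa + 1)) ** 2 + 1e-12
```

The second assertion holds in any dimension (not only $d=2$), which is consistent with the per-iteration bound used in the display above.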
We may notice that the (upper bound of the) rate of convergence is the same as that of the vanilla GD in Eq.~\eqref{equation:vanialla-gd-rate} (p.~\pageref{equation:vanialla-gd-rate}). However, the two are different in that the rate of the vanilla GD is described in terms of the $\by_t$ vector in Eq.~\eqref{equation:vanilla-yt} (p.~\pageref{equation:vanilla-yt}), while the rate of steepest descent is presented in terms of the energy norm. Moreover, the rate of vanilla GD in Eq.~\eqref{equation:vanialla-gd-rate} (p.~\pageref{equation:vanialla-gd-rate}) is obtained by selecting a specific learning rate, as shown in Eq.~\eqref{equation:eta-vanilla-gd} (p.~\pageref{equation:eta-vanilla-gd}), which is not practical in vanilla GD since the learning rate is fixed throughout all iterations. This makes the rate of vanilla GD more of a tight theoretical bound. In practice, vanilla GD converges more slowly than steepest descent, as evident in the examples shown in Figure~\ref{fig:momentum_gd_conjugate2}, Figure~\ref{fig:momentum_gd_conjugate8}, and Figure~\ref{fig:conjguatecy_zigzag2}. \newpage \clearchapter{Learning Rate Annealing and Warmup} \begingroup \hypersetup{linkcolor=winestain, linktoc=page, } \minitoc \newpage \endgroup \index{Learning rate annealing} \index{Learning rate warmup} \section{Learning Rate Annealing and Warmup}\label{section:learning-rate-annealing} \lettrine{\color{caligraphcolor}W}e have discussed in Eq.~\eqref{equation:gd-equaa-gene} (p.~\pageref{equation:gd-equaa-gene}) that the learning rate $\eta$ controls how large of a step to take in the direction of the negative gradient so that we can reach a (local) minimum. In a wide range of applications, a fixed learning rate works well in practice. However, there are alternative learning rate schedules that change the learning rate during training, most often between epochs.
We shall see in the sequel that per-dimension optimizers can change the learning rate in each dimension adaptively, e.g., AdaGrad, AdaDelta, RMSProp, and AdaSmooth \citep{duchi2011adaptive, hinton2012neural, zeiler2012adadelta, lu2022adasmooth}. In this section, however, we focus on strategies for decaying or annealing the global learning rate, i.e., the value of $\eta$ in Eq.~\eqref{equation:gd-equaa-gene} (p.~\pageref{equation:gd-equaa-gene}). A constant learning rate often poses a dilemma to the analyst: a small learning rate will cause the algorithm to take too long to reach anywhere close to an optimal solution, while a large initial learning rate allows the algorithm to come reasonably close to a good (local) minimum in the cost surface at first, but the algorithm will then oscillate back and forth around that point for a very long time. One method to address this challenge is to slow down the parameter updates by decreasing the learning rate. This can be done manually when the validation accuracy appears to plateau. On the other hand, decaying the learning rate over time, based on how many passes through the data have been completed, can naturally address these issues. Common decay functions include \textit{step decay}, \textit{inverse decay}, and \textit{exponential decay}. The subsequent section delves into the mathematical formulations of various learning rate annealing schemes. \index{Learning rate annealing} \section{Learning Rate Annealing}\label{section:learning-rate-anneal} \paragraph{Step decay.} The \textit{step decay} scheduler drops the learning rate by a factor every epoch or every few epochs. For a given iteration $t$, number of iterations to drop $n$, initial learning rate $\eta_0$, and decay factor $d<1$, the form of step decay is given by $$ \eta_t = \eta_0 \cdot d^{\floor{\frac{t}{n}} }=\eta_0 \cdot d^s, $$ where $s={\floor{\frac{t}{n}} }$ is called the \textit{step stage} to decay.
Therefore, the step decay policy decays the learning rate every $n$ iterations. \paragraph{Multi-step decay.} The \textit{multi-step decay} scheduler is a slightly different version of the step decay, wherein the step stage is the index where the iteration $t$ falls in the milestone vector $\bmm=[m_1,m_2,\ldots, m_k]^\top$ with $0\leq m_1\leq m_2\leq\ldots\leq m_k\leq T$ and $T$ being the total number of iterations (or epochs) \footnote{When $T$ is the total number of iterations, it can be obtained by the product of the number of epochs and the number of steps per epoch.}. To be more concrete, the step stage $s$ at iteration $t$ is obtained by $$ s = \left\{ \begin{aligned} &0, \gap &t<m_1;\\ &1, \gap &m_1\leq t < m_2; \\ &\ldots\\ &k, \gap &m_k\leq t \leq T. \end{aligned} \right. $$ As a result, given the iteration $t$, initial learning rate $\eta_0$, and decay factor $d<1$, the learning rate at iteration $t$ is calculated as $$ \eta_t = \eta_0 \cdot d^s. $$ \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{imgs/lr_step_decay.pdf} \caption{Demonstration of step decay, multi-step decay, annealing polynomial, inverse decay, inverse square root, and exponential decay schedulers. One may find that among the six, exponential decay exhibits the smoothest behavior, while multi-step decay is characterized by the least smoothness.} \label{fig:lr_step_decay} \end{figure} \paragraph{Exponential decay.} Given the iteration $t$, the initial learning rate $\eta_0$, and the exponential decay factor $k$, the form of the \textit{exponential decay} is given by $$ \eta_t = \eta_0 \cdot \exp(-k \cdot t), $$ where the parameter $k$ controls the rate of the decay. \paragraph{Inverse decay.} The \textit{inverse decay} scheduler is a variant of exponential decay in that the decaying effect is applied by the inverse function. 
Given the iteration number $t$, the initial learning rate $\eta_0$, and the decay factor $k$, the form of the inverse decay is obtained by $$ \eta_t = \frac{\eta_0}{1+ k\cdot t}, $$ where, again, the parameter $k$ controls the rate of the decay. \paragraph{Inverse square root.} The \textit{inverse square root} scheduler is a learning rate schedule $$ \eta_t = \eta_0 \cdot \sqrt{w} \cdot \frac{1}{\sqrt{\max(t, w)}}, $$ where $t$ represents the current training iteration, $w$ is the number of warm-up steps, and $\eta_0$ is the initial learning rate. This scheduler maintains a constant learning rate for the initial steps, then decays the learning rate proportionally to the inverse square root of the iteration number until training concludes. \paragraph{Annealing polynomial decay.} Given the iteration $t$, the \textit{max decay iteration} $M$, the power factor $p$, the initial learning rate $\eta_0$, and the final learning rate $\eta_T$, the \textit{annealing polynomial decay} at iteration $t$ can be obtained by \begin{equation}\label{equation:annealing_polynomial} \begin{aligned} & decay\_batch = \min(t, M) ;\\ & \eta_t = (\eta_0-\eta_T)\cdot \left(1-\frac{decay\_batch}{M}\right)^{p}+\eta_T. \end{aligned} \end{equation} In practice, the default values for the parameters are: initial rate $\eta_0=0.001$, end rate $\eta_T=10^{-10}$, the max decay iteration $M=T/2$ where $T$ is the maximal iteration number, and power rate $p=2$. Figure~\ref{fig:lr_step_decay} compares step decay, multi-step decay, annealing polynomial decay, inverse decay, inverse square root, and exponential decay with a specified set of parameters. The smoothness varies among these methods, with exponential decay exhibiting the smoothest behavior and multi-step decay being the least smooth.
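The annealing schedules above can be collected into a few short Python functions; this is a minimal sketch in which all default parameter values are illustrative, and the polynomial decay follows the TensorFlow-style form with the clipped iteration $\min(t, M)$:

```python
import math

def step_decay(t, eta0=0.1, d=0.5, n=30):
    """Drop the rate by a factor d every n iterations."""
    return eta0 * d ** (t // n)

def multi_step_decay(t, milestones=(30, 60, 90), eta0=0.1, d=0.5):
    """Drop by d at every milestone that iteration t has passed."""
    s = sum(t >= m for m in milestones)
    return eta0 * d ** s

def exponential_decay(t, eta0=0.1, k=0.05):
    return eta0 * math.exp(-k * t)

def inverse_decay(t, eta0=0.1, k=0.05):
    return eta0 / (1 + k * t)

def inverse_sqrt(t, eta0=0.1, w=10):
    """Constant for the first w steps, then proportional to 1/sqrt(t)."""
    return eta0 * math.sqrt(w) / math.sqrt(max(t, w))

def polynomial_decay(t, M=100, p=2, eta0=1e-3, etaT=1e-10):
    decay_batch = min(t, M)              # clip the iteration at M
    return (eta0 - etaT) * (1 - decay_batch / M) ** p + etaT

assert step_decay(0) == 0.1 and step_decay(30) == 0.1 * 0.5
assert multi_step_decay(65) == 0.1 * 0.5 ** 2    # passed milestones 30 and 60
assert inverse_sqrt(5) == inverse_sqrt(10)       # flat during warmup
assert abs(polynomial_decay(100) - 1e-10) < 1e-12  # ends at eta_T
```

Each function maps an iteration counter to a learning rate, so any of them can be queried once per iteration (or once per epoch) inside a training loop.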
In annealing polynomial decay, the max decay iteration parameter $M$ determines the decay rate: \begin{itemize} \item When $M$ is small, the decay gets closer to that of the exponential scheduler or the step decay; however, the exponential decay has a longer tail. That is, the exponential scheduler decays slightly faster in the beginning iterations but slows down in the last few iterations. \item When $M$ is large, the decay gets closer to that of the multi-step decay; however, the multi-step scheduler exhibits a more aggressive behavior. \end{itemize} \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Training loss.]{\label{fig:mnist_scheduler_1} \includegraphics[width=0.47\linewidth]{imgs/mnist_scheduler_loss_Train.pdf}} \subfigure[Training accuracy.]{\label{fig:mnist_scheduler_2} \includegraphics[width=0.47\linewidth]{imgs/mnist_scheduler_acc_Train.pdf}} \subfigure[Test loss.]{\label{fig:mnist_scheduler_3} \includegraphics[width=0.47\linewidth]{imgs/mnist_scheduler_loss_Test.pdf}} \subfigure[Test accuracy.]{\label{fig:mnist_scheduler_4} \includegraphics[width=0.47\linewidth]{imgs/mnist_scheduler_acc_Test.pdf}} \caption{Training and test performance with different learning rate schemes.} \label{fig:mnist_scheduler_1234} \end{figure} \paragraph{Toy example.} To assess the impact of different schedulers, we utilize a toy example involving the training of a multi-layer perceptron (MLP) on the MNIST digit classification set \citep{lecun1998mnist} \footnote{It has a training set of 60,000 examples, and a test set of 10,000 examples.}. Figure~\ref{fig:mnist_scheduler_1234} presents the training and test performance in terms of \textit{negative log-likelihood loss}. The parameters for various schedulers are detailed in Figure~\ref{fig:lr_step_decay} (for 100 epochs). 
We observe that the stochastic gradient descent method with a fixed learning rate may lead to a continued reduction in test loss; however, its test accuracy may get stuck at a certain point. The toy example shows that learning rate annealing schemes, in general, can enhance optimization methods by guiding them towards better local minima with improved performance. \index{Learning rate warmup} \section{Learning Rate Warmup} The concept of warmup in training neural networks has received increasing attention in recent years \citep{he2016deep, goyal2017accurate, smith2019super}. Further insights into the efficacy of the warmup scheduler in neural machine translation (NMT) can be found in the comprehensive discussion by \citet{popel2018training}. The learning rate annealing schedulers can be utilized on both an epoch and a step basis. However, the learning rate warmup schemes are typically applied in the step context, where the total number of steps equals the product of the number of epochs and the number of steps per epoch, as aforementioned \citep{vaswani2017attention, howard2018universal}. Note that with this scheduler, early stopping should typically be avoided. In the rest of this section, we delve into two commonly used warmup policies, namely, the slanted triangular learning rates (STLR) and the Noam methods. \paragraph{Slanted Triangular Learning Rates (STLR).} STLR is a learning rate schedule that first linearly increases the learning rate over some number of epochs and then linearly decays it over the remaining epochs.
The rate at iteration $t$ is computed as follows: $$ \begin{aligned} cut &= \ceil{T\cdot frac} ;\\ p &= \left\{ \begin{aligned} &t/cut , \gap &\text{if }t<cut;\\ & 1 - \frac{t-cut }{cut\cdot (1/frac-1)}, \gap &\text{otherwise}; \end{aligned} \right.\\ \eta_t &= \eta_{\text{max}} \cdot \frac{1+p\cdot (ratio - 1)}{ratio },\\ \end{aligned} $$ where $T$ is the number of training iterations (the product of the number of epochs and the number of updates per epoch), $frac$ is the fraction of iterations during which we increase the learning rate, $cut$ is the iteration at which we switch from increasing to decreasing the learning rate, $p$ is the fraction of the increasing or decreasing phase completed so far, and $ratio$ specifies how much smaller the lowest learning rate is than the maximum learning rate $\eta_{\text{max}}$. In practice, the default values are $frac=0.1$, $ratio=32$, and $\eta_{\text{max}}=0.01$ \citep{howard2018universal}. \paragraph{Noam.} The Noam scheduler was originally proposed for neural machine translation (NMT) tasks in \citet{vaswani2017attention}. It corresponds to increasing the learning rate linearly for the first ``warmup\_steps" training steps and decreasing it thereafter proportionally to the inverse square root of the step number, scaled by the inverse square root of the dimensionality of the model (linear warmup for a given number of steps followed by inverse square root decay). Given the warmup steps $w$ and the model size $d_{\text{model}}$ (representing the hidden size parameter, which dominates the number of parameters in the model), the learning rate $\eta_t$ at step $t$ can be calculated by $$ \eta_t = \alpha \cdot \frac{1}{\sqrt{d_{\text{model}}}}\cdot \min \left(\frac{1}{\sqrt{t}} , \frac{t}{w^{3/2}}\right), $$ where $\alpha$ is a smoothing factor. In the original paper, the warmup step is set to $w=4000$. In practice, however, $w=25000$ can be a good choice.
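Both warmup schedules above are easy to state in code; the following sketch uses the default values quoted in the text, while the remaining arguments ($T$, $d_{\text{model}}$, $\alpha$) are illustrative choices:

```python
import math

def noam(t, d_model=512, w=4000, alpha=1.0):
    """Linear warmup for w steps, then decay proportional to 1/sqrt(t)."""
    t = max(t, 1)                        # avoid division by zero at t = 0
    return alpha * d_model ** -0.5 * min(t ** -0.5, t * w ** -1.5)

def stlr(t, T=1000, frac=0.1, ratio=32, eta_max=0.01):
    """Slanted triangular rate: linear increase up to the cut, linear decay after."""
    cut = math.ceil(T * frac)
    if t < cut:
        p = t / cut
    else:
        p = 1 - (t - cut) / (cut * (1 / frac - 1))
    return eta_max * (1 + p * (ratio - 1)) / ratio

# The Noam rate peaks at t = w and decays thereafter.
assert noam(4000) > noam(2000) and noam(4000) > noam(16000)
# STLR peaks at the cut point t = 100 for these parameters.
assert abs(stlr(100) - 0.01) < 1e-12 and stlr(50) < stlr(100)
```

The two assertions locate the peak of each schedule, matching the descriptions above: Noam switches from linear increase to inverse-square-root decay at the warmup boundary, while STLR switches at the cut iteration.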
\begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{imgs/lr_noam.pdf} \caption{Comparison of Noam and STLR schedulers.} \label{fig:lr_noam} \end{figure} Moreover, in rare cases, the model size is set to be the same as the number of warmup steps, resulting in what is known as the \textit{warmup Noam scheduler}: $$ \eta_t = \alpha \cdot \frac{1}{\sqrt{w}} \cdot \min \left(\frac{1}{\sqrt{t}} , \frac{t}{w^{3/2}}\right). $$ \noindent Figure~\ref{fig:lr_noam} compares STLR and Noam schedulers with various parameters. We may observe that, in general, the Noam scheduler decays more slowly after the warmup phase finishes compared to the STLR. \section{Cyclical Learning Rate (CLR) Policy}\label{section:cyclical-lr} The cyclical learning rate is a generalization of warmup and decay policies (the Noam scheme and the STLR policy typically involve only one cycle). The essence of this learning rate policy comes from the observation that a temporary increase in the learning rate might have a short-term negative effect and yet achieve a long-term beneficial effect. This observation leads to the idea of letting the learning rate vary within a range of values rather than adopting a stepwise fixed or exponentially decreasing value, where minimum and maximum boundaries are set to make the learning rate vary between them. The simplest function to adopt this idea is the triangular window function that linearly increases and then linearly decreases \citep{smith2017cyclical}. \citet{dauphin2014identifying, dauphin2015equilibrated} argue that the difficulty in minimizing the loss arises from saddle points (toy example in Figure~\ref{fig:quadratic_saddle}, p.~\pageref{fig:quadratic_saddle}) rather than poor local minima. Saddle points have small gradients that slow the pace of the learning process. However, increasing the learning rate enables more rapid traversal of saddle point plateaus.
In this scenario, a cyclical learning rate policy with periodical increasing and decreasing of the learning rate between minimum and maximum boundaries is reasonable. The minimum and maximum boundaries are problem-specific. Typically, the model is run for several epochs with different learning rates ranging from low to high values, known as the \textit{learning rate range test}. In this case, plotting the accuracy versus the learning rate helps identify suitable minimum and maximum boundaries: the rate at which the accuracy starts to increase and the rate at which the accuracy slows, becomes ragged, or starts to fall constitute good choices for the minimum and maximum boundaries, respectively. The cyclical learning rate policies fall into two categories: the one based on iteration, and the one based on epoch. The former implements the annealing and warmup at each iteration, while the latter does so on an epoch basis\footnote{The total number of iterations equals the product of the number of epochs and the number of updates per epoch.}. However, there is no significant distinction between the two; any policy can be applied in either of the two fashions. In the following paragraphs, we will discuss the update policies based on their original proposals. \paragraph{Triangular, Triangular2, and Exp Range.} The \textit{triangular} policy involves a linear increase and decrease of the learning rate. Given the initial learning rate $\eta_0$ (the lower boundary in the cycle), the maximum learning rate $\eta_{\max}$, and the step size $s$ (number of training iterations per half cycle), the learning rate $\eta_t$ at iteration $t$ can be obtained by: $$ triangular: \gap \left\{ \begin{aligned} {cycle}&= \floor{1+\frac{t}{2s}};\\ x &= \text{abs}\left(\frac{t}{s} - 2\times {cycle}+1\right);\\ \eta_t &=\eta_0 + (\eta_{\max}-\eta_0) \cdot \max(0, 1-x),\\ \end{aligned} \right. $$ where the calculated \textit{cycle} indicates which cycle iteration $t$ is in.
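As a sanity check on the formula above, a minimal Python implementation of the \textit{triangular} policy might look as follows (the function name is ours):

```python
import math

def triangular(t, s, eta0, eta_max):
    """Triangular cyclical learning rate at iteration t with half-cycle length s."""
    cycle = math.floor(1 + t / (2 * s))     # which cycle iteration t falls into
    x = abs(t / s - 2 * cycle + 1)          # position within the cycle, in [0, 1]
    return eta0 + (eta_max - eta0) * max(0.0, 1 - x)
```

The rate climbs from $\eta_0$ at $t=0$ to $\eta_{\max}$ at $t=s$, then returns to $\eta_0$ at $t=2s$, and the pattern repeats.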
Identical to the \textit{triangular} policy except that the amplitude is cut in half at the end of each cycle, the \textit{triangular2} policy is given by: $$ triangular2:\gap \eta_t =\eta_0 + (\eta_{\max}-\eta_0) \cdot \max(0, 1-x) \cdot \frac{1}{2^{\text{cycle}-1}}. $$ Less aggressive than the \textit{triangular2} policy, the amplitude of a cycle in the \textit{exp\_range} policy is scaled exponentially based on $\gamma^t$, where $\gamma<1$ is the scaling constant: $$ exp\_range:\gap \eta_t =\eta_0 + (\eta_{\max}-\eta_0) \cdot \max(0, 1-x) \cdot \gamma^{t}. $$ A comparison of these three policies is presented in Figure~\ref{fig:lr_triangular}. In practice, the step size $s$ is typically set to $2\sim 10$ times the number of iterations in an epoch \citep{smith2017cyclical}. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{imgs/lr_triangular.pdf} \caption{Demonstration of \textit{triangular}, \textit{triangular2}, and \textit{exp\_range} schedulers.} \label{fig:lr_triangular} \end{figure} \paragraph{Cyclical cosine.} The \textit{cyclical cosine} is a type of learning rate scheduler that initiates with a high learning rate, rapidly decreases it to a minimum value, and then quickly increases it again. The resetting of the learning rate acts as a simulated restart of the learning process, and the re-use of good weights as the starting point of the restart is referred to as a ``warm restart," in contrast to a ``cold restart," where a new set of small random numbers may be used as a starting point \citep{loshchilov2016sgdr, huang2017snapshot}. The learning rate $\eta_t$ at iteration $t$ is calculated as follows: $$ \eta_t = \frac{\eta_0}{2} \left( \cos \left( \frac{\pi \,\, \text{mod}(t-1,\ceil{T/M} ) }{\ceil{T/M}} \right)+1\right), $$ where $T$ is the total number of training iterations (note the original paper takes the iterations as epochs in this sense \citep{loshchilov2016sgdr}), $M$ is the number of cycles, and $\eta_0$ is the initial learning rate.
The scheduler anneals the learning rate from its initial value $\eta_0$ to a small learning rate approaching 0 over the course of a cycle. That is, we split the training process into $M$ cycles as shown in Figure~\ref{fig:lr_cosine}, each of which starts with a large learning rate $\eta_0$ and then gets annealed to a small learning rate. The provided equation facilitates a rapid decrease in the learning rate, encouraging the model to converge towards its first local minimum after a few epochs. The optimization then continues at a larger learning rate that can perturb the model and dislodge it from the minimum \footnote{The goal of the procedure is similar to the perturbed SGD that can help escape from saddle points \citep{jin2017escape, du2017gradient}.}. The iterative procedure is then repeated several times to achieve multiple convergences. In practice, the iteration $t$ usually refers to the $t$-th epoch. More generally, any learning rate with general function $f$ in the following form can have a similar effect: $$ \eta_t = f(\text{mod}(t-1, \ceil{T/M})). $$ Moreover, the learning rate can be set for each batch rather than before each epoch to introduce more nuance to the updates \citep{huang2017snapshot}. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Cyclical Cosine.]{\label{fig:lr_cosine} \includegraphics[width=0.48\linewidth]{imgs/lr_cosine.pdf}} \subfigure[Cyclical step.]{\label{fig:lr_cyclical_step} \includegraphics[width=0.48\linewidth]{imgs/lr_cyclical_step.pdf}} \caption{Cyclical cosine and cyclical step learning rate policies.} \label{fig:lr_cosine_cyclical_step} \end{figure} \paragraph{Cyclical step.} Similar to the cyclical cosine scheme, the \textit{cyclical step learning rate policy} combines a linear learning rate decay with warm restarts \citep{mehta2019espnetv2}: $$ \eta_t =\eta_{\text{max}} - (t \,\, \text{mod}\,\, M) \cdot \eta_{\text{min}}. 
$$ where in the original paper, $t$ refers to the epoch count, $\eta_{\text{min}}$ and $\eta_{\text{max}}$ are the ranges for the learning rate, and $M$ is the cycle length after which the learning rate will restart. This learning rate scheme can be seen as a variant of the cosine learning policy discussed above, and a comparison between the two policies is shown in Figure~\ref{fig:lr_cosine_cyclical_step}. In practice, $\eta_{\text{min}}=0.1$, $\eta_{\text{max}}=0.5$, and $M=5$ are set as default values in the original paper. \paragraph{Cyclical polynomial.} The \textit{cyclical polynomial} is a variant of the \textit{annealing polynomial decay} (Eq.~\eqref{equation:annealing_polynomial}) scheme, where the difference is that the cyclical polynomial scheme employs a cyclical warmup similar to the $exp\_range$ policy. Given the iteration number $t$, the initial learning rate $\eta_0$, the final learning rate $\eta_T$, and the maximal decay number $M<T$, the rate can be calculated by: $$ \begin{aligned} & decay\_batch = M\cdot \ceil{\frac{t}{M}} \\ & \eta_t= (\eta_0-\eta_T)\cdot \left(1-\frac{t}{decay\_batch+\epsilon}\right)^{p}+\eta_T, \end{aligned} $$ where $\epsilon=10^{-10}$ is applied for better conditioning when $t=0$. Figure~\ref{fig:lr_cyclic_polynomial_decay} presents the cyclical polynomial scheme with various parameters.
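The cyclical cosine and cyclical polynomial rules above can be sketched in a few lines of Python; the function names are ours, and $p$ denotes the polynomial power:

```python
import math

def cyclical_cosine(t, T, M, eta0):
    """Cyclical cosine rate at iteration t, with M cycles over T total iterations."""
    c = math.ceil(T / M)                       # iterations per cycle
    return eta0 / 2 * (math.cos(math.pi * ((t - 1) % c) / c) + 1)

def cyclical_polynomial(t, eta0, etaT, M, p=2, eps=1e-10):
    """Cyclical polynomial rate at iteration t with maximal decay number M."""
    decay_batch = M * math.ceil(t / M)
    return (eta0 - etaT) * (1 - t / (decay_batch + eps)) ** p + etaT
```

Each cosine cycle restarts at $\eta_0$ and anneals towards 0; each polynomial cycle decays from $\eta_0$ towards $\eta_T$ and warms up again at the next multiple of $M$.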
\begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{imgs/lr_cyclic_polynomial_decay.pdf} \caption{Demonstration of cyclical polynomial scheduler with various parameters.} \label{fig:lr_cyclic_polynomial_decay} \end{figure} \newpage \clearchapter{Stochastic Optimizer} \begingroup \hypersetup{linkcolor=winestain, linktoc=page, } \minitoc \newpage \endgroup \index{Stochastic optimizer} \section{Stochastic Optimizer} \lettrine{\color{caligraphcolor}O}{ver} the years, stochastic gradient-based optimization has emerged as a fundamental method in various fields of science and engineering, including computer vision and automatic speech recognition processing \citep{krizhevsky2012imagenet, hinton2012deep, graves2013speech}. Stochastic gradient descent (SGD) plays a core role in training deep neural networks (DNNs) on stochastic objective functions. When a new deep neural network is developed for a given task, some hyper-parameters related to the training of the network must be chosen heuristically. For each possible combination of structural hyper-parameters, a new network is typically trained from scratch and evaluated over and over again. While much progress has been made on hardware (e.g., graphical processing units) and software (e.g., cuDNN) to speed up the training time of a single structure of a DNN, the exploration of a large set of possible structures remains very slow, creating the need for a stochastic optimizer that is insensitive to hyper-parameters. Efficient stochastic optimizers, therefore, play a crucial role in training deep neural networks.
\begin{table}[h] \begin{tabular}{llll|lll} \hline Method & Year & Papers \gap \gap & & Method & Year & Papers \\ \hline Adam & 2014 & 7532 & & AdamW & 2017 & 45 \\ SGD & 1951 & 1212 & & Local SGD & 2018 & 41 \\ RMSProp & 2013 & 293 & & Gravity & 2021 & 37 \\ Adafactor & 2018 & 177 & & AMSGrad & 2019 & 35 \\ Momentum & 1999 & 130 & & LARS & 2017 & 31 \\ LAMB & 2019 & 126 & & MAS & 2020 & 26 \\ AdaGrad & 2011 & 103 & & DFA & 2016 & 23 \\ Deep Ensembles & 2016 & 69 & & Nesterov momentum & 1983 & 22 \\ FA & 2014 & 46 & & Gradient Sparsification & 2017 & 20 \\ \hline \end{tabular} \caption{Data retrieved on April 27th, 2022 via https://paperswithcode.com/.} \label{table:stochastic-optimizers} \end{table} There are several variants of SGD that use heuristics for estimating a good learning rate at each iteration of the progress. These methods either attempt to accelerate learning when suitable or to slow down learning near a local minimum. In this section, we introduce a few stochastic optimizers falling into these two categories. Table~\ref{table:stochastic-optimizers} provides an overview of the number of papers utilizing these optimizers for specific tasks and their publication dates. For additional comprehensive reviews, one can also check \citet{zeiler2012adadelta}, \citet{ruder2016overview}, \citet{goodfellow2016deep}, and many others. \section{Momentum } If the cost surface is not spherical, learning can be quite slow because the learning rate must be kept small to prevent divergence along the steep curvature directions \citep{polyak1964some, rumelhart1986learning, qian1999momentum, sutskever2013importance}. SGD with momentum (which can be applied to full batch or mini-batch learning) attempts to use the previous step to speed up learning when suitable such that it enjoys better convergence rates in deep networks.
The main idea behind the momentum method is to speed up the learning along dimensions where the gradient consistently points in the same direction, and to slow the pace along dimensions in which the sign of the gradient continues to change. Figure~\ref{fig:momentum_gd} shows a set of updates for vanilla GD, where we can see that the update along dimension $x_1$ is consistent, while the move along dimension $x_2$ continues to change in a zigzag pattern. The GD with momentum keeps track of past parameter updates with an exponential decay, and the update method has the following form from iteration $t-1$ to iteration $t$: \begin{equation} \begin{aligned} \Delta \bx_t &= \rho \Delta \bx_{t-1} - \eta \frac{\partial L(\bx_t)}{\partial \bx_t},\\ \end{aligned} \end{equation} where the algorithm remembers the latest update and adds it to the present update by multiplying a parameter $\rho$, called the \textit{momentum parameter}, blending the present update with the past update. That is, the amount we change the parameter is proportional to the negative gradient plus the previous weight change; the added \textit{momentum term} acts as both a smoother and an accelerator. The momentum parameter $\rho$ works as a \textit{decay constant}: $\Delta\bx_1$ may have an effect on $\Delta \bx_{100}$; however, its effect is decayed by this constant. In practice, the momentum parameter $\rho$ is usually set to 0.9 by default. Momentum simulates the concept of inertia in physics. This means that in each iteration, the update mechanism is not only related to the gradient descent, which refers to the \textit{dynamic term}, but also maintains a component related to the direction of the last update iteration, which refers to the momentum. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[ A two-dimensional surface plot for a quadratic convex function.
]{\label{fig:alsgd12} \includegraphics[width=0.47\linewidth]{./imgs/momentum_surface.pdf}} \subfigure[The contour plot of $L(\bx)$. The red dot is the optimal point.]{\label{fig:alsgd22} \includegraphics[width=0.44\linewidth]{./imgs/momentum_contour.pdf}} \caption{Figure~\ref{fig:alsgd12} shows a function surface and a contour plot (\textcolor{mylightbluetext}{blue}=low, \textcolor{mydarkyellow}{yellow}=high), where the upper graph is the surface, and the lower one is the projection of it (i.e., contour). The quadratic function is from parameters $\bA=\begin{bmatrix} 4 & 0\\ 0 & 40 \end{bmatrix}$, $\bb=[12,80]^\top $, and $c=103$. Or equivalently, $L(\bx)=2(x_1-3)^2 + 20(x_2-2)^2+5$ and $\frac{\partial L(\bx)}{\partial \bx}=[4x_1-12, 40x_2-80]^\top$.} \label{fig:momentum-contour} \end{figure} Momentum exhibits superior performance, particularly in the presence of a ravine-shaped loss curve. A ravine refers to an area where the surface curves are significantly steeper in one dimension than in another (see the surface and contour curve in Figure~\ref{fig:momentum-contour}, i.e., a long, narrow valley). Ravines are common near local minima in deep neural networks, and vanilla GD or SGD has trouble navigating them. As shown by the toy example in Figure~\ref{fig:momentum_gd}, GD tends to oscillate across the narrow ravine since the negative gradient will point down one of the steep sides rather than along the ravine towards the optimum. Momentum helps accelerate gradients in the correct direction and dampens oscillations, as evident in the example shown in Figure~\ref{fig:momentum_mum}. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Optimization without momentum.
A higher learning rate may result in larger parameter updates in the dimension across the valley (direction of $x_2$), which could lead to oscillations back and forth across the valley.]{\label{fig:momentum_gd} \includegraphics[width=0.47\linewidth]{./imgs/mom_surface_lrate40_gd-0_xy-2-5.pdf}} \subfigure[Optimization with momentum. Though the gradients along the valley (direction of $x_1$) are much smaller than the gradients across the valley (direction of $x_2$), they are typically in the same direction. Thus, the momentum term accumulates to speed up movement, dampens oscillations, and causes us to barrel through narrow valleys, small humps and (local) minima.]{\label{fig:momentum_mum} \includegraphics[width=0.47\linewidth]{./imgs/mom_surface_lrate40_gd-1_xy-2-5_mom-2.pdf}} \caption{The loss function is shown in Figure~\ref{fig:momentum-contour}. The starting point is $[-2, 5]^\top$. After 5 iterations, the squared loss from vanilla GD is 42.72, and the loss from GD with momentum is 35.41 in this simple case. The learning rates $\eta$ are set to be 0.04 in both cases.} \label{fig:momentum_gd_compare} \end{figure} As mentioned earlier, the momentum method is achieved by incorporating a fraction $\rho$ of the update vector from the previous time step into the current update vector. When $\Delta \bx_t$ and $\Delta \bx_{t-1}$ are in the same direction, the momentum accelerates the update step (e.g., the \textcolor{mylightbluetext}{blue} arrow regions in Figure~\ref{fig:momentum_mum}). Conversely, if they are in opposite directions, the algorithm tends to update in the former direction if $\bx$ has been updated in this direction for many iterations. To be more concrete, in Figure~\ref{fig:momentum_gd}, consider the \textcolor{mylightbluetext}{blue} starting point and the \textcolor{cyan}{cyan} point we reach after one step of the update without momentum: the two points have gradients that are pretty much equal and opposite.
As a result, the gradient across the ravine has been canceled out. But the gradient along the ravine has not canceled out. Therefore, along the ravine, we're going to keep building up speed, and so, after the momentum method has settled down, it'll tend to go along the bottom of the ravine. This figure illustrates the problem with vanilla GD: the gradient is big in the direction in which we only want to travel a small distance, and the gradient is small in the direction in which we want to travel a large distance. However, one can easily find that the momentum term helps average out the oscillation along the short axis while at the same time adding up contributions along the long axis. In other words, although it starts off by following the gradient, once it has velocity, it no longer performs steepest descent. We call this \textit{momentum}, which makes it keep going in the previous direction. \index{Spectral decomposition} \index{Eigenvalue decomposition} \index{Quadratic form} \subsubsection{Quadratic Form in Momentum}\label{section:quadratic-in-momentum} Following the discussion of the quadratic form in GD (Section~\ref{section:quadratic_vanilla_GD}, p.~\pageref{section:quadratic_vanilla_GD}) and steepest descent (Section~\ref{section:quadratic-in-steepestdescent}, p.~\pageref{section:quadratic-in-steepestdescent}), we now discuss the quadratic form in GD with momentum. The update is: $$ \begin{aligned} \Delta \bx_t &= \rho\Delta \bx_{t-1} - \eta\nabla L(\bx_t); \\ \bx_{t+1} &= \bx_t + \Delta \bx_t, \end{aligned} $$ where $\nabla L(\bx_t) = \bA\bx_t - \bb$ if $\bA$ is symmetric for the quadratic form. The update becomes $$ \begin{aligned} \Delta \bx_t &= \rho\Delta \bx_{t-1} - \eta(\bA\bx_t - \bb); \\ \bx_{t+1} &= \bx_t + \Delta \bx_t. \end{aligned} $$ Again, define the iterate vectors as follows: $$ \left\{ \begin{aligned} \by_t &= \bQ^\top(\bx_t - \bx_\star);\\ \bz_t &= \bQ^\top \Delta \bx_t, \end{aligned} \right.
$$ where $\bx_\star = \bA^{-1}\bb$ under the assumption that $\bA$ is nonsingular and PD as aforementioned, and $\bA=\bQ\bLambda\bQ^\top$ is the spectral decomposition of matrix $\bA$. This construction leads to the following update rule: $$ \begin{aligned} \bz_t &= \rho \bz_{t-1} - \eta\bLambda \by_t; \\ \by_{t+1} &= \by_t + \bz_t, \end{aligned} $$ or, after rearrangement: $$ \begin{bmatrix} \bz_t \\ \by_{t+1} \end{bmatrix} = \begin{bmatrix} \rho \bI & -\eta \bLambda \\ \rho \bI & -\eta \bLambda + \bI \end{bmatrix} \begin{bmatrix} \bz_{t-1} \\ \by_{t} \end{bmatrix}. $$ And this leads to the per-dimension update: \begin{equation}\label{equation:momentum-quadra-generalformula} \begin{bmatrix} z_{t,i} \\ y_{t+1,i} \end{bmatrix} = \begin{bmatrix} \rho & -\eta\lambda_i \\ \rho & 1-\eta \lambda_i \end{bmatrix}^t \begin{bmatrix} z_{0,i} \\ y_{1,i} \end{bmatrix}= \bB^t \begin{bmatrix} z_{0,i} \\ y_{1,i} \end{bmatrix}, \end{equation} where $z_{t,i}$ and $y_{t,i}$ are $i$-th element of $\bz_t$ and $\by_t$, respectively, and $\bB=\begin{bmatrix} \rho & -\eta\lambda_i \\ \rho & 1-\eta \lambda_i \end{bmatrix}$. Note here, $\bz_0$ is initialized as a zero vector, and $\by_1$ is initialized as $\bQ^\top (\bx_1-\bx_\star)$, where $\bx_1$ represents the initial parameter. Suppose the eigenvalue decomposition (Theorem 11.1 in \citet{lu2022matrix} or Appendix~\ref{appendix:eigendecomp}, p.~\pageref{appendix:eigendecomp}) of $\bB$ admits $$ \bB = \bC\bD \bC^{-1}, $$ where the columns of $\bC$ contain eigenvectors of $\bB$, and $\bD=\diag(\alpha,\beta)$ is a diagonal matrix containing the eigenvalues of $\bB$. Then $\bB^t = \bC\bD^t\bC^{-1}$. Alternatively, the eigenvalues of $\bB$ can be calculated by solving $\det(\bB-\alpha\bI)=0$: $$ \alpha, \beta = \frac{(\rho+1-\eta\lambda_i) \pm \sqrt{(\rho+1-\eta\lambda_i)^2 -4\rho}}{2}. 
$$ We then have by \citet{williams1992n} that $$ \bB^t= \left\{ \begin{aligned} &\alpha^t \frac{\bB-\beta\bI}{\alpha-\beta} - \beta^t \frac{\bB-\alpha\bI}{\alpha-\beta}, \gap &\text{if $\alpha\neq \beta$};\\ &\alpha^{t-1}(t\bB - (t-1)\alpha\bI), \gap &\text{if $\alpha=\beta$}. \end{aligned} \right. $$ Substituting into Eq.~\eqref{equation:momentum-quadra-generalformula} yields the following expression: $$ \begin{bmatrix} z_{t,i} \\ y_{t+1,i} \end{bmatrix} = \bB^{t} \begin{bmatrix} z_{0,i} \\ y_{1,i} \end{bmatrix}, $$ where the rate of convergence is controlled by the larger of the two eigenvalue magnitudes, $\max\{|\alpha|, |\beta|\}$; when $\max\{|\alpha|, |\beta|\}<1$, the GD with momentum is guaranteed to converge. In the case of $\rho=0$, the momentum reduces to the vanilla GD, and the condition for convergence becomes $$ \max\{|\alpha|, |\beta|\} = |1-\eta\lambda_i | <1, \gap \forall \,\, i\in \{1,2,\ldots, d\}, $$ which aligns with that in Eq.~\eqref{equation:vanillagd-quandr-rate-chgoices} (p.~\pageref{equation:vanillagd-quandr-rate-chgoices}). Following the same example illustrated in Figure~\ref{fig:momentum-contour} and Figure~\ref{fig:momentum_gd_compare}, where $\bA=\begin{bmatrix} 4 & 0\\ 0 & 40 \end{bmatrix}$ with eigenvalues $\lambda_1=4$ and $\lambda_2=40$, the matrix $\bB$ in Eq.~\eqref{equation:momentum-quadra-generalformula} for each of the two dimensions is given by $$ \bB_1 = \begin{bmatrix} \rho & -4 \eta\\ \rho & 1-4\eta \end{bmatrix} \gap \text{and}\gap \bB_2 = \begin{bmatrix} \rho & -40\eta \\ \rho & 1-40\eta \end{bmatrix}, $$ respectively. Then it can be shown that when $\eta=0.04, \rho=0.8$, the rate of convergence is approximately $0.89$; Figure~\ref{fig:momentum_rho8} displays the updates for 20 iterations; though the motion follows a zigzag pattern, it still converges. However, when $\eta=0.04, \rho=1$, the rate of convergence is equal to 1; Figure~\ref{fig:momentum_rho10} shows the updates for 20 iterations; the movement diverges, even as it traverses the optimal point.
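These convergence rates can be verified numerically. The sketch below (function name is ours) builds the per-dimension matrix $\bB$ from Eq.~\eqref{equation:momentum-quadra-generalformula} for both eigenvalues $\lambda_1=4$ and $\lambda_2=40$ and returns the largest eigenvalue magnitude:

```python
import numpy as np

def momentum_rate(rho, eta, lambdas=(4.0, 40.0)):
    """Largest eigenvalue magnitude of B over all per-dimension matrices."""
    rate = 0.0
    for lam in lambdas:
        B = np.array([[rho, -eta * lam],
                      [rho, 1.0 - eta * lam]])
        # np.abs returns the modulus for complex eigenvalues as well
        rate = max(rate, np.abs(np.linalg.eigvals(B)).max())
    return rate
```

With $\eta=0.04$, `momentum_rate(0.8, 0.04)` gives $\sqrt{0.8}\approx 0.89$ and `momentum_rate(1.0, 0.04)` gives exactly 1, matching the convergent and divergent behaviors described above.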
\begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Momentum $\rho=0.2$, convergence rate $\approx 0.79$.]{\label{fig:momentum_rho2} \includegraphics[width=0.31\linewidth]{./imgs/mom_surface_lrate40_gd-1_xy-2-5_mom-2.pdf}} \subfigure[Momentum $\rho=0.8$, convergence rate $\approx 0.89$.]{\label{fig:momentum_rho8} \includegraphics[width=0.31\linewidth]{./imgs/mom_surface_lrate40_gd-1_xy-2-5_mom-8.pdf}} \subfigure[Momentum $\rho=1$, convergence rate $=1$.]{\label{fig:momentum_rho10} \includegraphics[width=0.31\linewidth]{./imgs/mom_surface_lrate40_gd-1_xy-2-5_mom-10.pdf}} \caption{Momentum creates its own oscillations. The learning rates $\eta$ are set to be 0.04 for all scenarios.} \label{fig:momentum_own_oscilation} \end{figure} \section{Nesterov Momentum} Nesterov momentum, also known as Nesterov accelerated gradient (NAG), represents a slightly different version of the momentum update and has recently been gaining popularity. The core idea behind Nesterov momentum is as follows: when the current parameter vector is at some position $\bx_t$, we can examine the momentum update discussed in the previous section. We know that the momentum term alone (i.e., ignoring the second term with the gradient) is about to nudge the parameter vector by $\rho \Delta \bx_{t-1}$. Therefore, if we are about to compute the gradient, we can treat the future approximate position $\bx_{t} + \rho \Delta \bx_{t-1}$ as a lookahead--this is a point in the vicinity of where we are soon going to end up. Hence, it makes sense to compute the gradient at $\bx_{t} + \rho \Delta \bx_{t-1}$ instead of at the old position $\bx_{t}$. Finally, the step takes on the following form: $$ \begin{aligned} \Delta \bx_t &= \rho \Delta \bx_{t-1} - \eta \frac{\partial L(\bx_{t} + \rho \Delta \bx_{t-1})}{\partial \bx}.
\end{aligned} $$ \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Momentum: evaluate the gradient at the current position $\bx_t$, and the momentum is about to carry us to the tip of the green arrow.]{\label{fig:nesterov1} \includegraphics[width=0.47\linewidth]{./imgs/momentum.pdf}} \subfigure[Nesterov momentum: evaluate the gradient at this ``looked-ahead" position.]{\label{fig:nesterov2} \includegraphics[width=0.47\linewidth]{./imgs/momentum-nesterov.pdf}} \caption{Comparison of momentum and Nesterov momentum.} \label{fig:momentum_nesterov_comp} \end{figure} Figure~\ref{fig:momentum_nesterov_comp} shows the difference between momentum and Nesterov momentum approaches. This important difference is thought to counterbalance too high velocities by “peeking ahead” actual objective values in the candidate search direction. In other words, one first makes a big jump in the direction of the previously accumulated gradient, then measures the gradient where one ends up and makes a \textbf{correction}. But in standard momentum, one first jumps by the current gradient, then makes a big jump in the direction of the previously accumulated gradient. To draw a metaphor, it turns out, if you're going to gamble, it's much better to gamble and then make a correction, than to make a correction and then gamble \citep{hinton2012neural2}. \citet{sutskever2013importance} show that Nesterov momentum has a provably better bound than gradient descent in convex, non-stochastic objective settings. \section{AdaGrad} The learning rate annealing procedure modifies a single global learning rate that is applied to all dimensions of the parameters (Section~\ref{section:learning-rate-anneal}, p.~\pageref{section:learning-rate-anneal}). \citet{duchi2011adaptive} proposed a method called AdaGrad where the learning rate is updated on a per-dimension basis.
The learning rate for each parameter depends on the history of gradient updates of that parameter in a way such that parameters with a scarce history of updates can be updated faster by using a larger learning rate. In other words, parameters that have not been updated much in the past are more likely to have higher learning rates now. Denoting the element-wise vector multiplication between $\ba$ and $\bb$ by $\ba\odot\bb$, formally, AdaGrad has the following update step: \begin{equation} \begin{aligned} \Delta \bx_t &= - \frac{ \eta}{ \sqrt{\sum_{\tau=1}^t \bg_{\tau}^2 +\epsilon} }\odot \bg_{t} , \end{aligned} \end{equation} where $\epsilon$ is a smoothing term to better condition the division, $\eta$ is a global learning rate shared by all dimensions, $\bg_\tau^2$ indicates the element-wise square $\bg_\tau\odot \bg_\tau$, and the denominator computes the $\ell_2$ norm of a sum of all previous squared gradients in a per-dimension fashion. Though the global learning rate $\eta$ is shared by all dimensions, each dimension has its own dynamic learning rate controlled by the $\ell_2$ norm of accumulated gradient magnitudes. Since this dynamic learning rate grows with the inverse of the accumulated gradient magnitudes, larger gradient magnitudes have smaller learning rates, and smaller absolute values of gradients have larger learning rates. Therefore, the aggregated squared magnitude of the partial derivative with respect to each parameter over the course of the algorithm in the denominator has the same effect as the learning rate annealing. One advantage of AdaGrad is that it is very easy to implement; the following code snippet gives a Python implementation: \begin{python}
import numpy as np

# Assume dx holds the gradient and x the parameter vector;
# cache accumulates per-dimension squared gradients (initialized to zeros).
cache += dx**2
x += - learning_rate * dx / np.sqrt(cache + 1e-8)
\end{python} On the other hand, AdaGrad partly eliminates the need to tune the learning rate controlled by the accumulated gradient magnitude.
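The annealing effect of the accumulated denominator can be seen directly in a toy run (illustrative values only): when the gradient is held constant, the effective per-dimension step decays like $1/\sqrt{t}$.

```python
import numpy as np

learning_rate, eps = 0.1, 1e-8
x = np.array([1.0])
cache = np.zeros_like(x)
steps = []
for _ in range(5):
    dx = np.array([1.0])                 # constant gradient, for illustration only
    cache += dx ** 2                     # unbounded accumulation of squared gradients
    step = learning_rate * dx / np.sqrt(cache + eps)
    x -= step
    steps.append(float(step[0]))
# steps[0] ≈ 0.1, steps[1] ≈ 0.1/sqrt(2), ..., steps[4] ≈ 0.1/sqrt(5)
```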
However, AdaGrad faces a significant drawback related to the unbounded accumulation of squared gradients in the denominator. Since every added term is positive, the accumulated sum keeps growing throughout training. This in turn causes the per-dimension learning rate to shrink continually over the course of training, eventually becoming infinitesimally small and halting training altogether. Moreover, since the magnitudes of gradients are factored out in AdaGrad, this method can be sensitive to the initialization of the parameters and the corresponding gradients. If the initial magnitudes of the gradients are extremely large, the per-dimension learning rates will be low for the remainder of training. This can be partly combated by increasing the global learning rate, making the AdaGrad method sensitive to the choice of learning rate. Further, AdaGrad assumes the parameter with fewer updates should favor a larger learning rate, and one with more movement should employ a smaller learning rate. This makes it consider only the information from squared gradients, i.e., the absolute value of the gradients. Thus, AdaGrad does not include information from the total move (i.e., the sum of updates, in contrast to the sum of absolute updates). To be more succinct, AdaGrad exhibits the following primary drawbacks: 1) the continual decay of learning rates throughout training; 2) the need for a manually selected global learning rate; 3) considering only the absolute value of gradients. \section{RMSProp} RMSProp is an extension of AdaGrad designed to overcome the main weakness of AdaGrad \citep{hinton2012neural, zeiler2012adadelta}. The original idea of RMSProp is simple: it restricts the window of accumulated past gradients to some fixed size $w$ rather than $t$ (i.e., the current time step).
However, since storing $w$ previous squared gradients is inefficient, the RMSProp introduced in \citet{hinton2012neural, zeiler2012adadelta} implements this accumulation as an exponentially decaying average of the squared gradients. This is very similar to the idea of momentum term (or decay constant). We first discuss the specific formulation of RMSProp. Assume at time $t$, the running average, denoted by $E[\bg^2]_t$, is computed as follows: \begin{equation}\label{equation:adagradwin} E[\bg^2]_t = \rho E[\bg^2]_{t-1} + (1 - \rho) \bg_{t}^2, \end{equation} where $\rho$ is a decay constant similar to that used in the momentum method, and $\bg_t^2$ indicates the element-wise square $\bg_t\odot \bg_t$. In other words, the estimate is achieved by multiplying the current squared aggregate (i.e., the running estimate) by the decay constant $\rho$ and then adding $(1-\rho)$ times the current squared partial derivative. This running estimate is initialized to $\mathbf{0}$, which may introduce some bias in early iterations; while the bias disappears over the long term. We notice that the old gradients decay exponentially over the course of the algorithm. As Eq.~\eqref{equation:adagradwin} is just the root mean squared (RMS) error criterion of the gradients, we can replace it with the criterion short-hand. Let $\rms[\bg]_t = \sqrt{E[\bg^2]_t + \epsilon}$, where again a constant $\epsilon$ is added to better condition the denominator. Then the resulting step size can be obtained as follows: \begin{equation}\label{equation:rmsprop_update} \Delta \bx_t=- \frac{\eta}{\rms[\bg]_t} \odot \bg_{t}, \end{equation} where again $\odot$ is the element-wise vector multiplication. As aforementioned, the form in Eq.~\eqref{equation:adagradwin} is originally from the exponential moving average (EMA). 
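Before turning to the EMA constant, one step of the update in Eqs.~\eqref{equation:adagradwin} and \eqref{equation:rmsprop_update} can be sketched in NumPy as follows (function name and defaults are ours):

```python
import numpy as np

def rmsprop_step(x, grad, Eg2, eta=0.001, rho=0.9, eps=1e-8):
    """One RMSProp update; Eg2 carries the running average of squared gradients."""
    Eg2 = rho * Eg2 + (1.0 - rho) * grad ** 2    # exponentially decaying average
    x = x - eta / np.sqrt(Eg2 + eps) * grad      # per-dimension scaled step
    return x, Eg2
```

The caller threads `Eg2` (initialized to zeros) through successive calls, mirroring the running estimate $E[\bg^2]_t$.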
In the original form of EMA, $1-\rho$ is also known as the smoothing constant (SC), where the SC can be expressed as $\frac{2}{N+1}$ and the period $N$ can be thought of as the number of past values used in the moving average calculation \citep{lu2022exploring}: \begin{equation}\label{equation:ema_smooting_constant} \text{SC}=1-\rho \approx \frac{2}{N+1}. \end{equation} The above Eq.~\eqref{equation:ema_smooting_constant} establishes a relationship among the different variables: the decay constant $\rho$, the smoothing constant (SC), and the period $N$. For instance, if $\rho=0.9$, then $N=19$. That is, roughly speaking, $E[\bg^2]_t $ at iteration $t$ is approximately equal to the moving average of the past 19 squared gradients and the current one (i.e., the moving average of a total of 20 squared gradients). Though the relationship in Eq.~\eqref{equation:ema_smooting_constant} is not discussed in the original paper of \citet{zeiler2012adadelta}, it is important for deciding the lower bound of the decay constant $\rho$. Typically, a time period of $N=3$ or $N=7$ is regarded as a relatively small frame, yielding a lower bound on the decay constant of $\rho=0.5$ or $\rho=0.75$, respectively; as $N\rightarrow \infty$, the decay constant $\rho$ approaches $1$. AdaGrad is designed to converge rapidly when applied to a convex function, while RMSProp performs better in nonconvex settings. When applied to a nonconvex function to train a neural network, the learning trajectory can pass through many different structures and eventually arrive at a region that is a locally convex bowl. AdaGrad shrinks the learning rate according to the entire history of the squared partial derivatives, which can render the learning rate infinitesimally small before such a convex structure is reached; RMSProp addresses this problem by discarding ancient squared gradients. However, RMSProp still considers only the absolute values of the gradients, and its fixed effective window of past squared gradients is not flexible.
This limitation can cause a small learning rate near (local) minima, as we will discuss in the sequel. RMSProp was developed independently by Geoff Hinton \citep{hinton2012neural} and by Matthew Zeiler \citep{zeiler2012adadelta}, both stemming from the need to resolve AdaGrad's radically diminishing per-dimension learning rates. \citet{hinton2012neural} suggests setting $\rho$ to 0.9 and defaulting the global learning rate to $\eta=0.001$. RMSProp can further be combined with the Nesterov momentum method \citep{goodfellow2016deep}; a comparison between the two is presented in Algorithm~\ref{alg:rmsprop} and Algorithm~\ref{alg:rmsprop_nesterov}. \begin{algorithm}[h] \caption{RMSProp} \label{alg:rmsprop} \begin{algorithmic}[1] \State {\bfseries Input:} Initial parameter $\bx_1$, constant $\epsilon$; \State {\bfseries Input:} Global learning rate $\eta$, by default $\eta=0.001$; \State {\bfseries Input:} Decay constant $\rho$; \State {\bfseries Input:} Initial accumulated squared gradients $E[\bg^2]_{0} = \bzero $; \For{$t=1:T$ } \State Compute gradient $\bg_t = \nabla L(\bx_{t})$; \State Compute running estimate $ E[\bg^2]_t = \rho E[\bg^2]_{t-1} + (1 - \rho) \bg_{t}^2;$ \State Compute step $\Delta \bx_t =- \frac{\eta}{\sqrt{E[\bg^2]_t+\epsilon }} \odot \bg_{t}$; \State Apply update $\bx_{t+1} = \bx_{t} + \Delta \bx_t$; \EndFor \State {\bfseries Return:} resulting parameters $\bx_t$, and the loss $L(\bx_t)$.
\end{algorithmic} \end{algorithm} \begin{algorithm}[h] \caption{RMSProp with Nesterov Momentum} \label{alg:rmsprop_nesterov} \begin{algorithmic}[1] \State {\bfseries Input:} Initial parameter $\bx_1$, constant $\epsilon$; \State {\bfseries Input:} Global learning rate $\eta$, by default $\eta=0.001$; \State {\bfseries Input:} Decay constant $\rho$, \textcolor{mylightbluetext}{momentum constant $\alpha$}; \State {\bfseries Input:} Initial accumulated squared gradients $E[\bg^2]_{0} = \bzero $, and update step $\Delta \bx_{0}=\bzero$; \For{$t=1:T$ } \State Compute interim update $\widetilde{\bx}_{t} = \bx_{t} + \alpha \Delta \bx_{t-1}$; \State Compute interim gradient $\bg_t = \nabla L(\textcolor{mylightbluetext}{\widetilde{\bx}_{t}})$; \State Compute running estimate $ E[\bg^2]_t = \rho E[\bg^2]_{t-1} + (1 - \rho) \bg_{t}^2;$ \State Compute step $\Delta \bx_t =\textcolor{mylightbluetext}{\alpha \Delta \bx_{t-1}}- \frac{\eta}{\sqrt{E[\bg^2]_t +\epsilon }} \odot \bg_{t}$; \State Apply update $\bx_{t+1} = \bx_{t} + \Delta \bx_t$; \EndFor \State {\bfseries Return:} resulting parameters $\bx_t$, and the loss $L(\bx_t)$. \end{algorithmic} \end{algorithm} \section{AdaDelta}\label{section:adadelta} \citet{zeiler2012adadelta} further points out an inconsistency in the units of the step size in RMSProp (as well as in vanilla SGD, momentum, and AdaGrad). To overcome this weakness, and drawing on the correct units of second-order methods (further discussed in Section~\ref{section:seconr-methods}, p.~\pageref{section:seconr-methods}), the author considers rearranging Newton's method to determine the quantities involved. It is well known that, though calculating the Hessian matrix or an approximation to it is a tedious and computationally expensive task, the curvature information it provides proves valuable for optimization, and the units in Newton's method are well matched.
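To make the units argument concrete, assume the loss $L$ is unitless; then a first-order step carries the inverse units of the parameters, whereas Newton's step carries the correct units of $\bx$ \citep{zeiler2012adadelta}:
$$
\Delta \bx \propto \bg \propto \frac{\partial L}{\partial \bx} \propto \frac{1}{\text{units of } \bx},
\qquad
\Delta \bx \propto \bH^{-1} \bg \propto \frac{\partial L/\partial \bx}{\partial^2 L/\partial \bx^2} \propto \text{units of } \bx.
$$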
Given the Hessian matrix $\bH$, the update step in Newton's method can be described as follows \citep{becker1988improving, dauphin2014identifying}: \begin{equation} \Delta\bx_t \propto - \bH^{-1} \bg_t \propto \frac{\frac{\partial L(\bx_t)}{\partial \bx_t}}{\frac{\partial^2 L(\bx_t)}{\partial \bx_t^2}}. \end{equation} This implies \begin{equation} \frac{1}{\frac{\partial^2 L(\bx_t)}{\partial \bx_t^2}} = \frac{\Delta \bx_t}{\frac{\partial L(\bx_t)}{\partial \bx_t}}, \end{equation} i.e., the units of the inverse Hessian can be approximated by the right-hand side of the above equation. Since the RMSProp update in Eq.~\eqref{equation:rmsprop_update} already involves $\rms[\bg]_t$ in the denominator, i.e., the units of the gradients, introducing an additional unit of the order of $\Delta \bx_t$ in the numerator matches the same order as that of Newton's method. To do this, define another exponentially decaying average of the update steps: \begin{equation} \begin{aligned} \rms[\Delta \bx]_t &= \sqrt{E[\Delta \bx^2]_t } \\ &= \sqrt{\rho E[\Delta \bx^2]_{t-1} + (1 - \rho) \Delta \bx_{t}^2 }. \end{aligned} \end{equation} Since the value of $\Delta \bx_t$ for the current iteration is unknown and the curvature can be assumed to be locally smooth, it is suitable to approximate $\rms[\Delta \bx]_t$ by $\rms[\Delta \bx]_{t-1}$. We can thus use an estimate of $\frac{1}{\frac{\partial^2 L(\bx_t)}{\partial \bx_t^2}} $ to substitute for the computationally expensive $\bH^{-1}$: \begin{equation} \frac{\Delta \bx_t}{\frac{\partial L(\bx_t)}{\partial \bx_t}} \sim \frac{\rms[\Delta \bx]_{t-1}}{\rms[\bg]_t}. \end{equation} This presents an approximation to the diagonal Hessian, using only RMS measures of $\bg$ and $\Delta \bx$, and results in an update step whose units are matched: \begin{equation} \Delta \bx_t = -\frac{\rms[\Delta \bx]_{t-1}}{\rms[\bg]_t} \odot \bg_t.
\end{equation} The idea of AdaDelta, derived from second-order methods, alleviates the tedious task of selecting a global learning rate. Meanwhile, a web demo developed by Andrej Karpathy can be explored to compare the convergence rates of SGD, SGD with momentum, AdaGrad, and AdaDelta \footnote{see https://cs.stanford.edu/people/karpathy/convnetjs/demo/trainers.html.}. A crucial consideration when employing the RMSProp or AdaDelta method concerns checkpointing: though the accumulated squared gradients in the denominator can compensate the per-dimension learning rates, if we save a checkpoint of the neural network at the end of some epoch and later re-tune the parameters by loading the weights from the checkpoint, the first few batches of the re-tuning can perform poorly, since there are not yet enough squared gradients to smooth the denominator. A particular example is shown in Figure~\ref{fig:er-rmsprop_epochstart}, where we save the weights and reload them after each epoch; loss deterioration is observed after each epoch. While this does not significantly impact the overall training progress, as the loss can still decrease from that point, a more effective choice is to save the running estimate $E[\bg^2]_t$ along with the weights of the neural network. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{imgs/rmsprop_epochstart.pdf} \caption{Demonstration of re-tuning parameters after each epoch by loading the saved weights. We save the weights and load them after each epoch, such that there are step-points when re-training after each epoch; that is, loss deterioration is observed after each epoch.} \label{fig:er-rmsprop_epochstart} \end{figure} \index{Loss deterioration} \index{Effective ratio} \index{Exponential moving average} \section{AdaSmooth}\label{section:adaer} In this section, we discuss the effective ratio, derived from previous updates in the stochastic optimization process.
We will elucidate its application in achieving adaptive learning rates per dimension via a flexible smoothing constant, hence the name AdaSmooth. The idea presented in this section is derived from the RMSProp method and addresses two main drawbacks of that method: 1) considering only the absolute values of the gradients rather than the total movement in each dimension; and 2) the need for manual selection of hyper-parameters. \paragraph{Effective Ratio (ER).} \citet{kaufman2013trading, kaufman1995smarter} suggested replacing the smoothing constant in the EMA formula with a constant based on the \textit{efficiency ratio} (ER). The ER has been shown to provide promising results for financial forecasting via classic quantitative strategies, wherein the ER of the closing price is calculated to decide the trend of the underlying asset \citep{lu2022exploring}. This indicator is designed to measure the \textit{strength of a trend}, defined within a range from $-1.0$ to $+1.0$, with a larger magnitude indicating a larger upward or downward trend. Recently, \citet{lu2022reducing} shows that the ER can be utilized to reduce overestimation and underestimation in time series forecasting. Given the window size $M$ and a series $\{h_1, h_2, \ldots, h_T\}$, it is calculated with a simple formula: \noindent \begin{equation} \begin{aligned} e_t &= \frac{s_t}{n_t}= \frac{h_{t} - h_{t-M}}{\sum_{i=0}^{M-1} |h_{t-i} - h_{t-1-i}|}= \frac{\text{Total move for a period}}{\text{Sum of absolute move for each bar}}, \end{aligned} \end{equation} where $e_t$ is the ER of the series at time $t$. In a strong trend (i.e., when the input series is moving in a certain direction, either up or down), the ER approaches 1 in absolute value; if there is no directed movement, it is only slightly more than 0. Instead of calculating the ER based on the closing price of the underlying asset, we want to calculate the ER of the moving direction in the update methods for each parameter.
In descent methods, we care more about how far each parameter moves away from its starting point in each period, whether the movement is positive or negative; so here we consider only the absolute value of the ER. To be specific, the ER for the parameters in the method is calculated as follows: \begin{equation}\label{eqution:signoiase-er-delta} \begin{aligned} \be_t = \frac{\bs_t}{\bn_t}&= \frac{| \bx_t - \bx_{t-M}|}{\sum_{i=0}^{M-1} | \bx_{t-i} - \bx_{t-1-i}|}= \frac{| \sum_{i=0}^{M-1} \Delta \bx_{t-1-i}|}{\sum_{i=0}^{M-1} | \Delta \bx_{t-1-i}|}, \end{aligned} \end{equation} where $\be_t \in \real^d$, and its $i$-th element $e_{t,i}$ lies in the range $[0, 1]$ for all $i \in \{1,2,\ldots, d\}$. A large value of $e_{t,i}$ indicates that the descent method in the $i$-th dimension is moving consistently in a certain direction, while a small value approaching 0 means the parameter in the $i$-th dimension is moving in a zigzag pattern, alternating between positive and negative movements. In practice, and across all our experiments, $M$ is selected to be the batch index within each epoch. That is, $M=1$ if the training is in the first batch of an epoch, and $M=M_{\text{max}}$ if the training is in the last batch of the epoch, where $M_{\text{max}}$ is the maximal number of batches per epoch. In other words, $M$ ranges over $[1, M_{\text{max}}]$ within each epoch; therefore, the value of $e_{t,i}$ indicates the movement of the $i$-th parameter in the most recent epoch. Even more aggressively, the window can range from 0 to the total number of batches seen during the entire training progress. The adoption of the adaptive window size $M$, rather than a fixed one, has the benefit that we do not need to keep the past $M+1$ steps $\{ \bx_{t-M}, \bx_{t-M+1}, \ldots, \bx_t\}$ to calculate the signal and noise vectors $\{\bs_t,\bn_t\}$ in Eq.~\eqref{eqution:signoiase-er-delta}, since they can be obtained in an accumulated fashion.
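The per-dimension ER of Eq.~\eqref{eqution:signoiase-er-delta} is straightforward to compute; the following minimal NumPy sketch (names are ours; the window of steps is passed explicitly for clarity, though in practice the sums are maintained in an accumulated fashion) illustrates the behavior:

```python
import numpy as np

def effective_ratio(steps, eps=1e-12):
    """Per-dimension effective ratio over a window of update steps.

    steps: list of per-step updates (Delta x) within the window.
    Returns |sum of steps| / (sum of |steps|), element-wise in [0, 1]."""
    steps = np.stack(steps)
    signal = np.abs(steps.sum(axis=0))        # total move per dimension
    noise = np.abs(steps).sum(axis=0) + eps   # sum of absolute moves
    return signal / noise

# Dimension 0 moves consistently (ER near 1); dimension 1 zigzags (ER well below 1).
window = [np.array([0.1, 0.1]), np.array([0.1, -0.1]), np.array([0.1, 0.1])]
er = effective_ratio(window)
```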
\paragraph{AdaSmooth.}\label{section:adaer-after-er} If the magnitude of the ER for a parameter is small (approaching 0), the movement in this dimension is zigzag, and the AdaSmooth method tends to use a long-period average as the scaling constant to slow down the movement in that dimension. When the absolute ER per dimension is large (tending to 1), the path in that dimension is moving consistently in a certain direction (not zigzag): the learning is actually happening and the descent is moving in a correct direction, so the learning rate should be assigned a relatively large value for that dimension. Thus, AdaSmooth tends to choose a small period, which leads to a small compensation in the denominator, since the gradients in the recent periods are small in magnitude when the iterate is near a (local) minimum. A particular example is shown in Figure~\ref{fig:er-explain}, where the descent is moving consistently in a certain direction and the gradients in the recent periods are small in magnitude; if we chose a larger period to compensate the denominator, the descent would be slower due to the larger denominator. In short, we want a smaller period for calculating the exponential average of the squared gradients in Eq.~\eqref{equation:adagradwin} if the update is moving in a certain direction without a zigzag pattern, while the period for the exponential average should be larger when the parameter is updated in a zigzag fashion \citep{lu2022adasmooth}. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{imgs/alsgd_specialcase.pdf} \caption{Demonstration of how the effective ratio works. Stochastic optimization tends to take large steps when it is far from the (local) minima, and relatively small steps when it is close to the (local) minima.} \label{fig:er-explain} \end{figure} The obtained value of the ER is incorporated into the exponential smoothing formula.
To enhance our approach, we dynamically adjust the time period $N$ discussed in Eq.~\eqref{equation:ema_smooting_constant} to a smaller value when the ER tends to 1 in absolute value, and to a larger value when the ER moves towards 0. When $N$ is small, $\text{SC}$ is known as a ``\textit{fast SC}''; otherwise, $\text{SC}$ is known as a ``\textit{slow SC}''. For example, let the small time period be $N_1=3$ and the large time period be $N_2=199$. The smoothing constant for fast movement must align with that of an EMA with period $N_1$ (``fast SC'' $= \frac{2}{N_1+1} = 0.5$), and for periods with no trend the EMA period must equal $N_2$ (``slow SC'' $= \frac{2}{N_2+1} = 0.01$). Thus a new, changing smoothing constant is introduced, called the ``\textit{scaled smoothing constant}'' (SSC), denoted by a vector $\bc_t\in \real^d$: $$ \bc_t = ( \text{fast SC} - \text{slow SC}) \times \be_t + \text{slow SC}. $$ By Eq.~\eqref{equation:ema_smooting_constant}, we can define the \textit{fast decay constant} $\rho_1=1-\frac{2}{N_1+1}$ and the \textit{slow decay constant} $\rho_2 = 1-\frac{2}{N_2+1}$. Then the scaled smoothing constant vector can be obtained by: $$ \bc_t = ( \rho_2- \rho_1) \times \be_t + (1-\rho_2), $$ where the smaller the $\be_t$, the smaller the $\bc_t$. For a more efficient influence of the obtained smoothing constant on the averaging period, Kaufman recommended squaring it. The final calculation formula then follows: \begin{equation}\label{equation:squared-ssc} E[\bg^2]_t = \bc_t^2 \odot \bg_{t}^2 + \left(1-\bc_t^2 \right)\odot E[\bg^2]_{t-1}, \end{equation} or, after rearrangement: $$ E[\bg^2]_t = E[\bg^2]_{t-1}+ \bc_t^2 \odot (\bg_{t}^2 - E[\bg^2]_{t-1}). $$ We notice that $N_1=3$ is a small period for calculating the average (i.e., $\rho_1=1-\frac{2}{N_1+1}=0.5$), and the EMA sequence will be noisy if $N_1$ is less than 3. Therefore, the minimum value of $\rho_1$ in practice is set to be at least 0.5 by default.
Meanwhile, $N_2=199$ is a large period for computing the average (i.e., $\rho_2=1-\frac{2}{N_2+1}=0.99$), for which the EMA sequence depends almost entirely on the previous value; this leads to a default value of $\rho_2$ no larger than 0.99. The experimental study in the sequel will reveal that the AdaSmooth update is insensitive to these hyper-parameters. We also carefully note that when $\rho_1=\rho_2$, the AdaSmooth algorithm reduces to the RMSProp algorithm with decay constant $\rho=1-(1-\rho_2)^2$, since we square the smoothing constant in Eq.~\eqref{equation:squared-ssc}. After developing the AdaSmooth method, we realize that the main idea behind it is similar to that of SGD with momentum: to speed up the learning (compensate less in the denominator) along dimensions where the gradient consistently points in the same direction, and to slow the pace (compensate more in the denominator) along dimensions in which the sign of the gradient continues to change. \index{Saddle point} As discussed in the cyclical learning rate section (Section~\ref{section:cyclical-lr}, p.~\pageref{section:cyclical-lr}), \citet{dauphin2014identifying, dauphin2015equilibrated} argue that the difficulty in minimizing the loss arises from saddle points rather than poor local minima. Saddle points, characterized by small gradients, impede the learning process. However, an adaptive smoothing procedure for the per-dimension learning rates can naturally find these saddle points and compensate less in the denominator, i.e., ``increase'' the learning rates when the optimization is in these areas, allowing more rapid traversal of saddle point plateaus. When applied to a nonconvex function to train a neural network, the learning trajectory may pass through many different structures and eventually arrive at a region that is a locally convex bowl. AdaGrad shrinks the learning rate according to the entire history of the squared partial derivatives and may have made the learning rate too small before arriving at such a convex structure.
RMSProp partly solves this drawback, since it uses an exponentially decaying average to discard ancient squared gradients, making it more robust in nonconvex settings than the AdaGrad method. AdaSmooth goes further on two points: 1) when the iterate is close to a saddle point, a small compensation in the denominator can help it escape the saddle point; 2) when the iterate is close to a locally convex bowl, the small compensation further makes it converge faster. Empirical evidence shows that the ER used with a simple moving average (SMA) of fixed window size $w$ can also reflect the trend of the series/movement in quantitative strategies \citep{lu2022exploring}. However, this again requires storing $w$ previous squared gradients in the AdaSmooth case, making it inefficient; hence, we shall not adopt this extension. \paragraph{AdaSmoothDelta.}\label{section:adasmoothdelta} We observe that the ER can also be applied in the AdaDelta setting: \begin{equation}\label{equation:adasmoothdelta} \Delta \bx_t = -\frac{\sqrt{E[\Delta \bx^2]_t}}{\sqrt{E[\bg^2]_t+\epsilon}} \odot \bg_t, \end{equation} where \begin{equation}\label{equation:adasmoothdelta111} E[\bg^2]_t = \bc_t^2 \odot \bg_{t}^2 + \left(1-\bc_t^2 \right)\odot E[\bg^2]_{t-1} , \end{equation} and \begin{equation}\label{equation:adasmoothdelta222} E[\Delta \bx^2]_t = (1-\bc_t^2) \odot \Delta \bx^2_t+ \bc_t^2 \odot E[\Delta \bx^2]_{t-1}, \end{equation} in which case the difference in $E[\Delta \bx^2]_t$ is to choose a larger period when the ER is small. This is reasonable in the sense that $E[\Delta \bx^2]_t$ appears in the numerator while $E[\bg^2]_t$ appears in the denominator of Eq.~\eqref{equation:adasmoothdelta}, making their compensations work in different directions.
Alternatively, a fixed decay constant can be applied for $E[\Delta \bx^2]_t$: $$ E[\Delta \bx^2]_t = (1-\rho_2) \Delta \bx^2_t+ \rho_2 E[\Delta \bx^2]_{t-1}. $$ The AdaSmoothDelta optimizer introduced above further alleviates the need for a hand-specified global learning rate, which is conventionally set to $\eta=1$ in the Hessian context. However, due to the adaptive smoothing constants in Eqs.~\eqref{equation:adasmoothdelta111} and \eqref{equation:adasmoothdelta222}, $E[\bg^2]_t $ and $E[\Delta \bx^2]_t$ are less locally smooth, making AdaSmoothDelta more sensitive to the global learning rate than the AdaDelta method. Therefore, a smaller global learning rate, e.g., $\eta=0.5$, is favored in AdaSmoothDelta. The full procedure for computing AdaSmooth is then formulated in Algorithm~\ref{algo:adasmooth}. \begin{algorithm}[tb] \caption{Computing AdaSmooth. All operations on vectors are element-wise. Good default settings for the tested tasks are $\rho_1=0.5, \rho_2=0.99, \epsilon=1e-6, \eta=0.001$; see Section~\ref{section:adaer-after-er} or Eq.~\eqref{equation:ema_smooting_constant} for a detailed discussion on the explanation of the decay constants' default values. The AdaSmoothDelta iteration can be calculated in a similar way.
} \label{alg:computer-adaer} \begin{algorithmic}[1] \State {\bfseries Input:} Initial parameter $\bx_1$, constant $\epsilon$; \State {\bfseries Input:} Global learning rate $\eta$, by default $\eta=0.001$; \State {\bfseries Input:} Fast decay constant $\rho_1$, slow decay constant $\rho_2$; \State {\bfseries Input:} Assert $\rho_2>\rho_1$, by default $\rho_1=0.5$, $\rho_2=0.99$; \For{$t=1:T$ } \State Compute gradient $\bg_t = \nabla L(\bx_t)$; \State Compute ER $\be_t=\frac{| \bx_t - \bx_{t-M}|}{\sum_{i=0}^{M-1} | \Delta \bx_{t-1-i}|}$ ; \State Compute scaled smoothing vector $\bc_t = ( \rho_2- \rho_1) \times \be_t + (1-\rho_2)$; \State Compute normalization term $E[\bg^2]_t = \bc_t^2 \odot \bg_{t}^2 + \left(1-\bc_t^2 \right)\odot E[\bg^2]_{t-1} ;$ \State Compute step $\Delta \bx_t =- \frac{\eta}{\sqrt{E[\bg^2]_t+\epsilon}} \odot \bg_{t}$; \State Apply update $\bx_{t+1} = \bx_{t} + \Delta \bx_t$; \EndFor \State {\bfseries Return:} resulting parameters $\bx_t$, and the loss $L(\bx_t)$. \end{algorithmic}\label{algo:adasmooth} \end{algorithm} We have discussed the step-point problem that arises when reloading weights from checkpoints with the RMSProp or AdaDelta methods. This issue is less severe in the AdaSmooth setting, as shown by the typical example in Figure~\ref{fig:er-rmsprop_epochstart22}, where a smaller loss deterioration is observed in the AdaSmooth example than in the RMSProp case. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{imgs/rmsprop_epochstart22.pdf} \caption{Demonstration of re-tuning parameters after each epoch by loading the saved weights. We save the weights and load them after each epoch, such that there are step-points when re-training after each epoch. This issue is less severe in the AdaSmooth case than in the RMSProp method.
A smaller loss deterioration is observed in the AdaSmooth example than in the RMSProp case.} \label{fig:er-rmsprop_epochstart22} \end{figure} \index{Loss deterioration} \begin{figure}[!h] \centering \subfigure[MNIST training Loss]{\includegraphics[width=0.47\textwidth, ]{imgs/MNIST_MLP_loss_Train.pdf} \label{fig:mnist_loss_train_mlp}} \subfigure[Census Income training loss]{\includegraphics[width=0.47\textwidth]{imgs/Census_MLP_loss_Train.pdf} \label{fig:census_loss_train_mlp}} \caption{\textbf{MLP:} Comparison of descent methods on MNIST digit and Census Income data sets for 60 and 200 epochs with MLP.} \label{fig:mnist_census_loss_MLP} \end{figure} \paragraph{Example: multi-layer perceptron.} To see the differences between the algorithms discussed so far, we conduct experiments with different machine learning models on different data sets: the real handwritten digit classification task, MNIST \citep{lecun1998mnist}\footnote{It has a training set of 60,000 examples and a test set of 10,000 examples.}, and the Census Income data set\footnote{The Census Income data set contains 48,842 samples, 70\% of which are used as the training set in our case: https://archive.ics.uci.edu/ml/datasets/Census+Income.}. In all scenarios, the same parameter initialization is adopted when training with the different stochastic optimization algorithms. We compare the results in terms of convergence speed and generalization. Multi-layer perceptrons (MLPs, a.k.a. multi-layer neural networks) are powerful tools for solving machine learning tasks, finding internal linear and nonlinear features behind the model inputs and outputs. We adopt the simplest MLP structure: an input layer, a hidden layer, and an output layer. We notice that the rectified linear unit (ReLU) outperforms Tanh, Sigmoid, and other nonlinear units in practice, making it the default nonlinear function in our structures.
Since dropout has become a core tool in training neural networks \citep{srivastava2014dropout}, we apply 50\% dropout noise to the network during training to prevent overfitting. To be more concrete, the detailed architecture of each fully connected layer is described by F$(\langle \textit{num outputs} \rangle:\langle \textit{activation function} \rangle)$, and a dropout layer is described by DP$(\langle \textit{rate} \rangle)$. The network structure we use can then be described as follows: \begin{equation} \text{F(128:Relu)DP(0.5)F(\text{num of classes}:Softmax)}. \end{equation} All methods are trained on mini-batches of 64 images per batch for 60 or 200 epochs through the training set. The hyper-parameter is set to $\epsilon=1e-6$. Unless otherwise stated, the global learning rate is set to $\eta=0.001$ in all scenarios, while a relatively large learning rate ($\eta=0.01$) is used for the AdaGrad method due to its accumulated decaying effect; the learning rate for the AdaDelta method is set to 1, as suggested by \citet{zeiler2012adadelta}, and that for the AdaSmoothDelta method is set to 0.5, as discussed in Section~\ref{section:adasmoothdelta}. In Figures~\ref{fig:mnist_loss_train_mlp} and \ref{fig:census_loss_train_mlp}, we compare SGD with momentum, AdaGrad, RMSProp, AdaDelta, AdaSmooth, and AdaSmoothDelta in optimizing the training set losses for the MNIST and Census Income data sets, respectively. The SGD with momentum method does the worst in this case. AdaSmooth performs slightly better than AdaGrad and RMSProp in the MNIST case and much better than the latter two in the Census Income case. AdaSmooth shows fast convergence from the initial epochs while continuing to reduce the training losses in both experiments. We here show two settings of the slow decay constant for AdaSmooth, i.e., $\rho_2=0.9$ and $\rho_2=0.95$.
Since we square the scaled smoothing constant in Eq.~\eqref{equation:squared-ssc}, when $\rho_1=\rho_2=0.9$, AdaSmooth reduces to RMSProp with $\rho=0.99$ (and similarly for the AdaSmoothDelta and AdaDelta case). AdaSmooth performs better in all cases, while there is almost no difference among the results of AdaSmooth with the various hyper-parameters in the MLP model. Table~\ref{fig:mlp_table_perform} shows the best training set accuracy for the different algorithms. Since the best test set accuracies for the various algorithms are very close, we present only the best ones within the first 5 epochs in Table~\ref{fig:mlp_table_perform-test}. In all scenarios, the AdaSmooth method converges slightly faster than the other optimization methods in terms of the test accuracy for this toy example. \begin{table}[!h] \centering \begin{tabular}{lll} \hline Method &\gap MNIST &\gap Census \\ \hline SGD with Momentum ($\rho=0.9$) &\gap 98.64\% &\gap 85.65\%\\ AdaGrad ($\eta$=0.01) &\gap 98.55\%&\gap 86.02\%\\ RMSProp ($\rho=0.99$) &\gap 99.15\%&\gap 85.90\%\\ AdaDelta ($\rho=0.99$) &\gap 99.15\%&\gap 86.89\%\\ AdaSmooth ($\rho_1=0.5, \rho_2=0.9$) &\gap \textbf{99.34}\%&\gap \textbf{86.94}\%\\ AdaSmooth ($\rho_1=0.5, \rho_2=0.95$) &\gap \textbf{99.45}\%&\gap \textbf{87.10}\%\\ AdaSmoothDelta ($\rho_1=0.5, \rho_2=0.9$) &\gap \textbf{99.60}\%&\gap {86.86}\%\\ \hline \end{tabular} \caption{\textbf{MLP}: Best in-sample evaluation in training accuracy (\%).} \label{fig:mlp_table_perform} \end{table} \begin{table}[!h] \centering \begin{tabular}{lll} \hline Method &\gap MNIST &\gap Census \\ \hline SGD with Momentum ($\rho=0.9$) &\gap 94.38\%&\gap 83.13\%\\ AdaGrad ($\eta$=0.01) &\gap 96.21\%& \gap84.40\%\\ RMSProp ($\rho=0.99$) &\gap 97.14\%& \gap84.43\%\\ AdaDelta ($\rho=0.99$) &\gap 97.06\%&\gap84.41\%\\ AdaSmooth ($\rho_1=0.5, \rho_2=0.9$) &\gap 97.26\%&\gap 84.46\%\\ AdaSmooth ($\rho_1=0.5, \rho_2=0.95$) &\gap 97.34\%&\gap 84.48\%\\ AdaSmoothDelta ($\rho_1=0.5, \rho_2=0.9$) &\gap
97.24\% &\gap \textbf{84.51}\%\\ \hline \end{tabular} \caption{\textbf{MLP}: Best out-of-sample evaluation in test accuracy for the first 5 epochs. } \label{fig:mlp_table_perform-test} \end{table} \section{Adam} Adaptive moment estimation (Adam) is yet another adaptive learning rate optimization algorithm \citep{kingma2014adam}. The Adam algorithm uses a similar normalization by second-order information, i.e., the running estimate of the squared gradients; however, it also incorporates first-order information into the update. In addition to storing an exponential moving average of the past squared gradients (the second moment) like RMSProp, AdaDelta, and AdaSmooth, Adam also keeps an exponentially decaying average of the past gradients: \begin{equation}\label{equation:adam-updates} \begin{aligned} \bmm_t &= \rho_1 \bmm_{t-1} + (1-\rho_1)\bg_t; \\ \bv_t &= \rho_2 \bv_{t-1} +(1-\rho_2)\bg_t^2, \end{aligned} \end{equation} where $\bmm_t$ and $\bv_t$ are running estimates of the first moment (the mean) and the second moment (the uncentered variance) of the gradients, respectively. A drawback of RMSProp is that the running estimate $E[\bg^2]$ of the second moment is biased in the initial time steps, since it is initialized to $\bzero$; this is especially pronounced when the decay constant is large (i.e., when $\rho$ is close to 1 in RMSProp). Observing the same biases towards zero in Eq.~\eqref{equation:adam-updates}, Adam counteracts them by computing the bias-corrected moment estimates: $$ \begin{aligned} \widehat{\bmm}_t &= \frac{\bmm_t}{1-\rho_1^t}; \\ \widehat{\bv}_t &= \frac{\bv_t}{1-\rho_2^t}. \end{aligned} $$ The first and second moment estimates are then incorporated into the update step: $$ \Delta \bx_t = - \frac{\eta}{\sqrt{\widehat{\bv}_t}+\epsilon} \odot \widehat{\bmm}_t. $$ And therefore the update becomes $$ \bx_{t+1} =\bx_t - \frac{\eta}{\sqrt{\widehat{\bv}_t}+\epsilon} \odot \widehat{\bmm}_t.
$$ In practice, \citet{kingma2014adam} suggest using $\rho_1=0.9$, $\rho_2=0.999$, and $\epsilon=1e-8$ as the default parameters. \section{AdaMax} Going further from Adam, \citet{kingma2014adam} notice that the higher-order moment $$ \bv_t = \rho_2^p \bv_{t-1} + (1-\rho_2^p) |\bg_t|^p $$ is numerically unstable for large $p$ values, making the $\ell_1$ and $\ell_2$ norms the common choices for updates. However, as $p\rightarrow \infty$, the $\ell_{\infty}$ norm also exhibits stable behavior. Therefore, AdaMax admits the following moment update: $$ \bu_t = \rho_2^\infty \bu_{t-1} + (1-\rho_2^\infty) |\bg_t|^\infty = \max (\rho_2\bu_{t-1}, |\bg_t|), $$ where we do not need to correct for the initialization bias in this case; this yields the update step $$ \Delta \bx_t = - \frac{\eta}{\bu_t} \odot \widehat{\bmm}_t. $$ In practice, \citet{kingma2014adam} suggest using $\eta=0.002$, $\rho_1=0.9$, and $\rho_2=0.999$ as the default parameters. \section{Nadam} Nadam (Nesterov-accelerated Adam) combines the ideas of Adam and Nesterov momentum \citep{dozat2016incorporating}. We recall the momentum and NAG updates as follows: $$ \boxed{ \begin{aligned} &\text{Momentum:}\\ &\bg_t = \nabla L(\bx_t);\\ &\Delta \bx_t = \rho\Delta \bx_{t-1} - \eta \bg_t; \\ &\bx_{t+1} = \bx_t +\Delta \bx_t, \end{aligned} } \gap \boxed{ \begin{aligned} &\text{NAG:}\\ &\bg_t = \nabla L(\bx_{t} + \rho \Delta \bx_{t-1});\\ &\Delta \bx_t =\rho\Delta \bx_{t-1} - \eta\bg_t; \\ &\bx_{t+1} = \bx_t +\Delta \bx_t, \end{aligned} } $$ \citet{dozat2016incorporating} first proposes a modification of NAG that uses the current momentum vector to look ahead, which we call NAG$^\prime$ here; that is, the momentum update is applied twice for each update: $$ \boxed{ \begin{aligned} &\text{NAG}^\prime:\\ &\bg_t = \nabla L(\bx_t);\\ &\Delta \bx_t = \rho\Delta \bx_{t-1} - \eta \bg_t; \\ &\Delta \bx_t^\prime = \rho\Delta \bx_{t} - \eta \bg_t; \\ &\bx_{t+1} = \bx_t +\Delta \bx_t^\prime .
\end{aligned} } $$ Rewriting Adam in the following form, a similar modification according to NAG$^\prime$ leads to the Nadam update: $$ \boxed{ \begin{aligned} &\text{Adam}:\\ &\bmm_t = \rho_1 \bmm_{t-1} + (1-\rho_1)\bg_t;\\ &\widehat{\bmm}_t = \rho_1 \frac{\bmm_{t-1} }{1-\rho_1^t} +\frac{1-\rho_1}{1-\rho_1^t} \bg_t; \\ &\Delta \bx_t = - \frac{\eta}{\sqrt{\widehat{\bv}_t}+\epsilon} \odot \widehat{\bmm}_t;\\ &\bx_{t+1} = \bx_t +\Delta \bx_t, \end{aligned} } \leadto \boxed{ \begin{aligned} &\text{Nadam}:\\ &\bmm_t = \rho_1 \bmm_{t-1} + (1-\rho_1)\bg_t;\\ &\widehat{\bmm}_t = \rho_1 \frac{\bmm_{\textcolor{mylightbluetext}{t}} }{1-\rho_1^{\textcolor{mylightbluetext}{t+1}}} +\frac{1-\rho_1}{1-\rho_1^t} \bg_t; \\ &\Delta \bx_t = - \frac{\eta}{\sqrt{\widehat{\bv}_t}+\epsilon} \odot \widehat{\bmm}_t;\\ &\bx_{t+1} = \bx_t +\Delta \bx_t. \end{aligned} } $$ However, the term $\rho_1 \frac{\bmm_{t-1} }{1-\rho_1^t}$ in $\widehat{\bmm}_t$ of the Adam method can also be replaced in a momentum fashion; applying the same modification as in NAG$^\prime$ yields the second version of Nadam (though it is not originally presented in \citet{dozat2016incorporating}): $$ \boxed{ \begin{aligned} &\text{Adam}^\prime:\\ &\bmm_t = \rho_1 \bmm_{t-1} + (1-\rho_1)\bg_t;\\ &\widehat{\bmm}_t = \rho_1 \textcolor{mylightbluetext}{\widehat{\bmm}_{t-1}} +\frac{1-\rho_1}{1-\rho_1^t} \bg_t; \\ &\Delta \bx_t = - \frac{\eta}{\sqrt{\widehat{\bv}_t}+\epsilon} \odot \widehat{\bmm}_t;\\ &\bx_{t+1} = \bx_t +\Delta \bx_t, \end{aligned} } \leadto \boxed{ \begin{aligned} &\text{Nadam}^\prime:\\ &\widehat{\bmm}_t = \rho_1 \frac{\bmm_{t-1} }{1-\rho_1^t} +\frac{1-\rho_1}{1-\rho_1^t} \bg_t; \\ &\widehat{\bmm}_t^\prime = \rho_1 \textcolor{mylightbluetext}{\widehat{\bmm}_{t}} +\frac{1-\rho_1}{1-\rho_1^t} \bg_t; \\ &\Delta \bx_t = - \frac{\eta}{\sqrt{\widehat{\bv}_t}+\epsilon} \odot \widehat{\bmm}_t^\prime;\\ &\bx_{t+1} = \bx_t +\Delta \bx_t.
\end{aligned} } $$ \index{Saddle point} \section{Problems in SGD}\label{section:c-problem} The optimizers introduced for stochastic gradient descent are widely used optimization algorithms, especially for training machine learning models. However, they have their own challenges and potential issues. Here are some common problems associated with SGD: \paragraph{Saddle points.} When the Hessian of the loss function is positive definite, the point $\bx_\star$ with vanishing gradient must be a local minimum. Similarly, when the Hessian is negative definite, the point is a local maximum; when the Hessian has both positive and negative eigenvalues, the point is a saddle point (see the later discussion in Eq.~\eqref{equation:reparametrization-newton}, p.~\pageref{equation:reparametrization-newton}). The stochastic optimizers discussed above are, in practice, first-order optimization algorithms: they only look at the gradient information and never explicitly compute the Hessian. Such algorithms may get stuck at saddle points (toy example in Figure~\ref{fig:quadratic_saddle}, p.~\pageref{fig:quadratic_saddle}). In the algorithms presented earlier, including the vanilla update, AdaGrad, AdaDelta, RMSProp, and others, this issue may arise. AdaSmooth may have a chance to escape saddle points, as argued in Section~\ref{section:adaer}. On the other hand, the mechanisms of momentum and Nesterov momentum can help the point $\bx$ move past a local minimum or saddle point because they include a term involving the previous step size (in general), but they also make the model more difficult to converge, especially when the momentum parameter $\rho$ is large.
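As a toy illustration, the following sketch (assuming NumPy; the quadratic saddle $f(x, y) = x^2 - y^2$ is our own illustrative choice, not an example from the text) runs the bias-corrected Adam update of the previous section near a saddle point: started exactly on the stable axis, the iterates remain near the saddle at the origin, while a tiny perturbation lets the negative-curvature direction take over.

```python
import numpy as np

def grad(x):
    # f(x, y) = x^2 - y^2 has a saddle at the origin:
    # the Hessian eigenvalues are +2 and -2.
    return np.array([2.0 * x[0], -2.0 * x[1]])

def adam(x0, eta=0.01, rho1=0.9, rho2=0.999, eps=1e-8, steps=1000):
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)   # first-moment estimate
    v = np.zeros_like(x)   # second-moment estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = rho1 * m + (1 - rho1) * g
        v = rho2 * v + (1 - rho2) * g**2
        m_hat = m / (1 - rho1**t)   # bias correction
        v_hat = v / (1 - rho2**t)
        x -= eta * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Started on the x-axis, the y-gradient is identically zero, so the
# iterates never leave the axis and hover near the saddle (0, 0).
x_stuck = adam([1.0, 0.0])
# A small perturbation off the axis is amplified along the
# negative-curvature direction, and the iterates escape the saddle.
x_escaped = adam([1.0, 1e-3])
```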
\paragraph{Low speed in SGD.} Although Rong Ge's post \footnote{http://www.offconvex.org/2016/03/22/saddlepoints/} claims that it is possible to converge to saddle points with a high error rate, \citet{dauphin2014identifying} and Benjamin Recht's post \footnote{http://www.offconvex.org/2016/03/24/saddles-again/} point out that it is in fact very hard to converge to a saddle point if one picks a random initial point and runs SGD. This is because the typical problem for both local minima and saddle points is that they are often surrounded by plateaus of small curvature in the error surface. In the SGD algorithms discussed above, the iterates are repelled away from a saddle point towards lower error by following the directions of negative curvature. In other words, there is no so-called saddle point problem in SGD algorithms; however, this repulsion can occur slowly due to the plateaus of small curvature. Second-order methods, e.g., Newton's method, on the other hand, do not treat saddle points appropriately. This is partly because Newton's method is designed to rapidly descend plateaus surrounding local minima by rescaling gradient steps by the inverse eigenvalues of the Hessian matrix (we will see this shortly in the sequel). As argued in \citet{dauphin2014identifying}, random Gaussian error functions over $d$ dimensions are increasingly likely to have saddle points rather than local minima as $d$ increases, and the ratio of the number of saddle points to local minima increases exponentially with the dimensionality $d$. The authors also argue that it is saddle points rather than local minima that provide a fundamental impediment to rapid high-dimensional non-convex optimization. In this sense, local minima with high error are exponentially rare in the dimensionality of the problem. Hence, SGD algorithms can be slow to escape from plateaus of small curvature.
\paragraph{First-order method to escape from saddle points.} The post \footnote{http://www.offconvex.org/2016/03/22/saddlepoints/} by Rong Ge introduces a first-order method to escape from saddle points. He claims that saddle points are very unstable: if we put a ball on a saddle point and then slightly perturb it, the ball is likely to fall towards a local minimum, especially when the second-order term $\frac{1}{2} \Delta \bx^\top \bH \Delta \bx$ (see the later discussion in Eq.~\eqref{equation:newton-derive}, p.~\pageref{equation:newton-derive}) is significantly smaller than 0 (i.e., there is a steep direction in which the function value decreases, and we assume we are looking for a local minimum), which is called a \textit{strict saddle function} in Rong Ge's post. In this case, we can use \textit{noisy gradient descent}: $$ \bx_{t+1} = \bx_t + \Delta\bx + \bepsilon, $$ where $\bepsilon$ is a noise vector with zero mean $\bzero$. In fact, this is the basic idea of SGD, which uses the gradient of a mini-batch rather than the true gradient. However, the drawback of stochastic gradient descent lies not in the direction, but in the size of the step along each eigenvector. The step along any eigen-direction $\bq_i$ is given by $-\lambda_i \Delta {v_i}$ (see the later discussion in Section~\ref{section:newtonsmethod}, p.~\pageref{section:newtonsmethod}; feel free to skip this paragraph on a first reading): when steps are taken in directions whose eigenvalues have small absolute value, the step is small. To be more concrete, consider an example where the curvature of the error surface is not the same in all directions: if there is a long and narrow valley in the error surface, the component of the gradient in the direction that points along the base of the valley is very small, while the component perpendicular to the valley walls is quite large, even though we have to move a long distance along the base and only a small distance perpendicular to the walls.
This phenomenon can be seen in Figure~\ref{fig:momentum_gd} (though it is partly addressed by SGD with momentum). We normally move by making a step that is some constant times the negative gradient rather than a step of constant length in the direction of the negative gradient. This means that in steep regions (where we have to be careful not to make our steps too large), we move quickly; and in shallow regions (where we need to move in big steps), we move slowly. This phenomenon again contributes to the slower convergence of SGD methods compared to second-order methods. \newpage \clearchapter{Second-Order Methods} \begingroup \hypersetup{linkcolor=winestain, linktoc=page, } \minitoc \newpage \endgroup \section{Second-Order Methods}\label{section:seconr-methods} \lettrine{\color{caligraphcolor}W}{e} previously addressed the derivation of AdaDelta based on the consistency of units in second-order methods (Section~\ref{section:adadelta}, p.~\pageref{section:adadelta}). In this section, we provide a brief overview of Newton's method and its typical variations, including damped Newton's method and Levenberg gradient descent. Furthermore, we derive the conjugate gradient method from scratch; it utilizes second-order information to capture the curvature shape of the loss surface in order to favor faster convergence. \section{Newton's Method}\label{section:newtonsmethod} Newton's method is an optimization policy that employs Taylor's expansion to approximate the loss function with a quadratic form, providing an estimate of the minimum location based on the approximated quadratic equation.
By Taylor's formula (Appendix~\ref{appendix:taylor-expansion}, p.~\pageref{appendix:taylor-expansion}) and disregarding derivatives of higher order, the loss function $L(\bx+\Delta \bx)$ can be approximated by \begin{equation}\label{equation:newton-derive} L(\bx+\Delta \bx) \approx L(\bx) +\Delta \bx^\top \nabla L(\bx) + \frac{1}{2} \Delta \bx^\top \bH \Delta \bx, \end{equation} where $\bH$ is the Hessian of the loss function $L(\bx)$ with respect to $\bx$. The optimal point (minimum point) of Eq.~\eqref{equation:newton-derive} is then obtained at $$ \bx_\star = \bx - \bH^{-1} \nabla L(\bx). $$ That is, the update step is rescaled by the inverse of the Hessian. Newton's update can be intuitively understood as utilizing curvature information from the Hessian: when the curvature is steep, the inverse Hessian scales the step down, making it smaller; when the curvature is flat, the inverse Hessian scales the step less, resulting in a larger update. However, for nonlinear $L(\bx)$, achieving the minimum in a single step is not feasible. Similar to the stochastic optimization methods introduced in previous sections, Newton's method is applied iteratively as formulated in Algorithm~\ref{alg:newton_method} \citep{roweis1996levenberg, goodfellow2016deep}. \begin{algorithm}[H] \caption{Newton's Method} \label{alg:newton_method} \begin{algorithmic}[1] \State {\bfseries Input:} Initial parameter $\bx_1$; \For{$t=1:T$ } \State Compute gradient $\bg_t = \nabla L(\bx_{t})$; \State Compute Hessian $\bH_t = \nabla^2 L(\bx_{t});$ \State Compute inverse Hessian $\bH_t^{-1}$; \State Compute update step $\Delta \bx_t = -\bH_t^{-1}\bg_t$; \State Apply update $\bx_{t+1} = \bx_{t} + \Delta \bx_t$; \EndFor \State {\bfseries Return:} resulting parameters $\bx_t$, and the loss $L(\bx_t)$. \end{algorithmic} \end{algorithm} The computational complexity of Newton's method arises from the computation of the inverse of the Hessian at each training iteration.
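A minimal sketch of Algorithm~\ref{alg:newton_method} (assuming NumPy; we solve the linear system $\bH_t \Delta\bx_t = -\bg_t$ rather than forming the explicit inverse, and the quadratic test problem is our own choice):

```python
import numpy as np

def newton(grad, hess, x1, T=20):
    """Newton's method: each step rescales the gradient by the
    inverse Hessian, x <- x - H^{-1} g."""
    x = np.array(x1, dtype=float)
    for _ in range(T):
        g = grad(x)
        H = hess(x)
        # Solving H @ dx = -g is cheaper and numerically more stable
        # than computing the explicit inverse H^{-1}.
        x = x + np.linalg.solve(H, -g)
    return x

# For the quadratic loss L(x) = 1/2 x^T A x - b^T x, the quadratic
# approximation is exact, so a single Newton step reaches the minimizer.
A = np.array([[20.0, 5.0], [5.0, 5.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x_min = newton(lambda x: A @ x - b, lambda x: A, [-3.0, 3.5], T=1)
```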
The number of entries in the Hessian matrix is squared in the number of parameters ($\bx\in \real^d, \bH\in \real^{d\times d}$), making the complexity of the inverse of order $\mathcal{O}(d^3)$ \citep{trefethen1997numerical, boyd2018introduction, lu2021numerical}. As a consequence, only networks with a small number of parameters, e.g., shallow neural networks or multi-layer perceptrons, can be trained by Newton's method in practice. \index{Spectral decomposition} \index{Reparametrization} \paragraph{Reparametrization of the space around a critical point.} A \textit{critical point or stationary point} is a point $\bx$ where the gradient of $L(\bx)$ vanishes (assuming $L(\bx)$ is differentiable over its domain). A useful reparametrization of the loss function $L$ \textbf{around critical points} is derived from Taylor's expansion. Since the gradient vanishes, Eq.~\eqref{equation:newton-derive} can be written as \begin{equation} L(\bx+\Delta \bx) \approx L(\bx) + \frac{1}{2} \Delta \bx^\top \bH \Delta \bx. \end{equation} Since the Hessian $\bH$ is symmetric, it admits the spectral decomposition (Theorem 13.1 in \citet{lu2022matrix} or Appendix~\ref{appendix:spectraldecomp}, p.~\pageref{appendix:spectraldecomp}): $$ \bH=\bQ\bLambda\bQ^\top \in \real^{d\times d}, $$ where the columns of $\bQ = [\bq_1, \bq_2, \ldots , \bq_d]$ are eigenvectors of $\bH$ and are mutually orthonormal, and the entries of $\bLambda = \diag(\lambda_1, \lambda_2, \ldots , \lambda_d)$ are the corresponding real eigenvalues of $\bH$. We define the following vector $\Delta\bv$: $$ \Delta \bv= \begin{bmatrix} \bq_1^\top \\ \vdots \\ \bq_d^\top \end{bmatrix} \Delta\bx= \bQ^\top \Delta\bx.
$$ Then the reparametrization follows \begin{equation}\label{equation:reparametrization-newton} \begin{aligned} L(\bx+\Delta \bx) &\approx L(\bx) + \frac{1}{2} \Delta \bx^\top \bH \Delta \bx\\ &= L(\bx) + \frac{1}{2} \Delta \bv^\top \bLambda \Delta\bv = L(\bx) + \frac{1}{2} \sum_{i=1}^{d} \lambda_i (\Delta v_i)^2, \end{aligned} \end{equation} where $\Delta v_i$ is the $i$-th element of $\Delta\bv$. A conclusion on the type of the critical point follows immediately from the reparametrization: \begin{itemize} \item If all eigenvalues are nonzero and positive, then the critical point is a local minimum; \item If all eigenvalues are nonzero and negative, then the critical point is a local maximum; \item If the eigenvalues are nonzero, and both positive and negative eigenvalues exist, then the critical point is a saddle point. \end{itemize} In vanilla GD, if an eigenvalue $\lambda_i$ is positive (negative), then the step moves towards (away from) $\bx$ along $\Delta \bv$, guiding the GD towards the optimal $\bx_\star$. The step along any direction $\bq_i$ is given by $-\lambda_i\Delta v_i$. In contrast, in Newton's method, the step is rescaled by the inverse Hessian, so that the step along direction $\bq_i$ is scaled into $-\Delta v_i$. This may cause problems when the eigenvalue is negative, since the step then moves in the \textit{opposite} direction compared to vanilla GD \citep{dauphin2014identifying}. The reparametrization shows that rescaling the gradient along the direction of each eigenvector can result in the wrong direction when the eigenvalue $\lambda_i$ is negative. This suggests rescaling by the magnitude instead, i.e., scaling by $1/|\lambda_i|$ rather than $1/\lambda_i$, preserving the sign of the gradient and addressing the slowness issue of GD at the same time. From Eq.~\eqref{equation:reparametrization-newton}, when both positive and negative eigenvalues exist, both vanilla GD and Newton's method may get stuck at saddle points, leading to suboptimal performance.
However, scaling by $1/|\lambda_i|$ can partly solve this problem, since the movement around the saddle point can either increase or decrease the loss under this rescaling, rather than staying where it is \citep{nocedal1999numerical, dauphin2014identifying}. \section{Damped Newton's Method} Newton's method addresses the slowness problem by rescaling the gradients in each direction with the inverse of the corresponding eigenvalue, yielding the step $\Delta \bx_t = -\bH_t^{-1}\bg_t$ at iteration $t$. However, this approach can result in moving in an undesired direction when an eigenvalue is negative, causing Newton's step to proceed along the eigenvector in a direction opposite to the gradient descent step and increasing the error. To mitigate this issue, damping the Hessian is proposed, where negative curvature is eliminated by adding a constant $\alpha$ to its diagonal, yielding the step $\Delta \bx_t= - (\mathbf{H}+\alpha \mathbf{I})^{-1} \bg_t$. We can view $\alpha$ as a tradeoff between Newton's method and vanilla GD: when $\alpha$ is small, it is closer to Newton's method; when $\alpha$ is large, it is closer to vanilla GD. In this case, the step along the direction $\bq_i$ becomes $-\frac{\lambda_i}{\lambda_i + \alpha}\Delta v_i$. However, the drawback of the damped Newton's method is evident: it may result in a small step size across many eigen-directions due to the influence of the large damping factor $\alpha$. \section{Levenberg (-Marquardt) Gradient Descent} The quadratic rule is not universally better, since it relies on a quadratic approximation of $L(\bx)$ that is only valid in proximity to a minimum. The \textit{Levenberg gradient descent} goes further by combining the ideas of the damped Newton's method and vanilla GD. Initially, we can apply a steepest descent type method until we approach a minimum, prompting a switch to the quadratic rule. The distance from a minimum is assessed by evaluating the loss \citep{levenberg1944method}.
If the loss is increasing, the quadratic approximation is not working well and we are likely not near a minimum, yielding a larger $\alpha$ in the damped Newton's method; while if the loss is decreasing, the quadratic approximation is working well and we are approaching a minimum, yielding a smaller $\alpha$ in the damped Newton's method. Marquardt improved this method by incorporating the local curvature information. In this modification, one replaces the identity matrix in Levenberg's method by the diagonal of the Hessian, resulting in the \textit{Levenberg-Marquardt gradient descent} \citep{marquardt1963algorithm}: $$ \Delta \bx_t= - \bigg(\mathbf{H}+\alpha \cdot \diag(\bH)\bigg)^{-1} \bg_t. $$ The Levenberg-Marquardt gradient descent method is nothing more than a heuristic, since it is not optimal for any well-defined criterion of speed or error measurement; it is merely a well-thought-out optimization procedure. However, it is an optimization method that works extremely well in practice, especially for medium-sized nonlinear models. \index{Conjugate gradient} \index{Hessian-orthogonal} \section{Conjugate Gradient}\label{section:conjugate-descent} We have shown that vanilla GD (employing the negative gradient as the descent direction) can move back and forth in a zigzag pattern when applied in a quadratic bowl (a ravine-shaped loss curve; see the example in Figure~\ref{fig:quadratic_vanillegd_contour8}, p.~\pageref{fig:quadratic_vanillegd_contour8}). This zigzag behavior becomes more pronounced if the learning rate is obtained by line search (Section~\ref{section:line-search}, p.~\pageref{section:line-search}), since the gradient is orthogonal to the previous update step (Lemma~\ref{lemm:linear-search-orghonal}, p.~\pageref{lemm:linear-search-orghonal}, and the example in Figure~\ref{fig:conjguatecy_zigzag2}, p.~\pageref{fig:conjguatecy_zigzag2}).
The choice of orthogonal descent directions fails to preserve the minimum along the previous search directions, and the line search will undermine the progress already achieved in the direction of the previous line search, resulting in the zigzag movement pattern. Instead of favoring a descent direction that is orthogonal to the previous search direction (i.e., $\bd_{t+1}^\top \bd_t=0$), the \textit{conjugate descent} selects a search direction that is \textit{Hessian-orthogonal} (i.e., $\bd_{t+1}^\top\bH\bd_t=0$, or conjugate with respect to $\bH$). This choice ensures that the movement is compensated by the curvature information of the loss function. In Figure~\ref{fig:conjugate_tile_A_orthogonals}, we show examples of Hessian-orthogonal pairs when the eigenvalues of the Hessian matrix $\bH$ are different or identical. When the eigenvalues of the Hessian matrix are the same, Hessian-orthogonality reduces to the trivial orthogonal case (this can be shown by the spectral decomposition of the Hessian matrix, where the orthogonal transformation does not alter the orthogonality \citep{lu2021numerical}). \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Surface plot: Hessian with different eigenvalues.]{\label{fig:conjugate_tile_A_orthogonal_diffeigenv} \includegraphics[width=0.481\linewidth]{./imgs/conjugate_tile_A_orthogonal_diffeigenv.pdf}} \subfigure[Surface plot: Hessian with same eigenvalues.]{\label{fig:conjugate_tile_A_orthogonal_sameeigenv} \includegraphics[width=0.481\linewidth]{./imgs/conjugate_tile_A_orthogonal_sameeigenv.pdf}} \caption{Illustration of $\bH$-orthogonal pairs for different Hessian matrices in the two-dimensional case: $\bH=\begin{bmatrix} 40 & 5 \\ 7 & 10 \end{bmatrix}$ for Fig~\ref{fig:conjugate_tile_A_orthogonal_diffeigenv} and $\bH=\begin{bmatrix} 40 & 0 \\ 0 & 40 \end{bmatrix}$ for Fig~\ref{fig:conjugate_tile_A_orthogonal_sameeigenv}.
The $\bH$-orthogonal pairs are orthogonal when $\bH$ has identical eigenvalues.} \label{fig:conjugate_tile_A_orthogonals} \end{figure} We now provide the formal definition of conjugacy as follows: \begin{definition}[Conjugacy]\label{definition:conjugacy} Given a positive definite matrix $\bA\in \real^{d\times d}$, the vectors $\bu, \bv\in \real^d$ are \textit{conjugate} with respect to $\bA$ if $\bu,\bv\neq \bzero$ and $\bu^\top\bA\bv = 0$. \end{definition} In the method of \textit{conjugate gradient} (CG), we determine a descent direction that is conjugate to the previous search direction with respect to the Hessian matrix $\bH$, ensuring that the new update step will not undo the progress made along the previous directions: $$ \bd_{t} = -\nabla L(\bx_t) + \beta_t \bd_{t-1}, $$ where $\beta_t$ is a coefficient controlling how much of the previous direction is added back to the current search direction. Three commonly used methods to compute the coefficient are as follows: $$ \begin{aligned} \text{Fletcher-Reeves:\gap } &\beta_t^F = \frac{\nabla L(\bx_t)^\top \nabla L(\bx_t)}{\nabla L(\bx_{t-1})^\top \nabla L(\bx_{t-1})},\\ \text{Polak-Ribi\`ere:\gap } &\beta_t^P = \frac{\bigg(\nabla L(\bx_t) - \nabla L(\bx_{t-1})\bigg)^\top \nabla L(\bx_t)}{\nabla L(\bx_{t-1})^\top \nabla L(\bx_{t-1})},\\ \text{Hestenes–Stiefel:\gap } &\beta_t^H = \frac{\bigg(\nabla L(\bx_t) - \nabla L(\bx_{t-1})\bigg)^\top \nabla L(\bx_t)}{\bigg(\nabla L(\bx_t) - \nabla L(\bx_{t-1})\bigg)^\top \bd_{t-1}}. \end{aligned} $$ In the case of a quadratic loss function, the conjugate gradient ensures that the gradient along the previous direction does not increase in magnitude \citep{shewchuk1994introduction, nocedal1999numerical, iserles2009first, goodfellow2016deep}.
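These three rules translate directly into one-line helpers (a sketch assuming NumPy; the example vectors are our own, chosen orthogonal, as successive gradients are under exact line search on a quadratic):

```python
import numpy as np

def beta_fletcher_reeves(g_new, g_old):
    # beta^F_t = g_t^T g_t / (g_{t-1}^T g_{t-1})
    return (g_new @ g_new) / (g_old @ g_old)

def beta_polak_ribiere(g_new, g_old):
    # beta^P_t = (g_t - g_{t-1})^T g_t / (g_{t-1}^T g_{t-1})
    return ((g_new - g_old) @ g_new) / (g_old @ g_old)

def beta_hestenes_stiefel(g_new, g_old, d_old):
    # beta^H_t = (g_t - g_{t-1})^T g_t / ((g_t - g_{t-1})^T d_{t-1})
    y = g_new - g_old
    return (y @ g_new) / (y @ d_old)

# With exact line search on a quadratic, successive gradients are
# orthogonal, and the three rules agree.
g_old = np.array([1.0, 0.0])
g_new = np.array([0.0, 2.0])   # orthogonal to g_old
d_old = -g_old                 # the first search direction is -g_1
```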
The full procedure of the conjugate gradient method is formulated in Algorithm~\ref{alg:conjugate_descent}, where it is observed that the first step of conjugate gradient is identical to a step of steepest descent when the learning rate is calculated by exact line search since $\beta_1=0$. \begin{algorithm}[H] \caption{Fletcher-Reeves Conjugate Gradient} \label{alg:conjugate_descent} \begin{algorithmic}[1] \State {\bfseries Input:} Initial parameter $\bx_1$; \State {\bfseries Input:} Initialize $\bd_0 =\bzero $ and $\bg_0 = \bd_0+\epsilon$; \For{$t=1:T$ } \State Compute gradient $\bg_t = \nabla L(\bx_{t})$; \State Compute coefficient $\beta_{t} = \frac{ \bg_t^\top \bg_t}{\bg_{t-1}^\top \bg_{t-1}}$ (\text{Fletcher-Reeves }); \State Compute descent direction $\bd_t = -\bg_{t} +\beta_{t} \bd_{t-1}$; \State Fixed learning rate $\eta_t=\eta$ or find it by line search: $\eta_t = \arg\min L(\bx_t + \eta \bd_t)$; \State Compute update step $\Delta \bx_t = \eta_t\bd_t$; \State Apply update $\bx_{t+1} = \bx_{t} + \Delta \bx_t$; \EndFor \State {\bfseries Return:} resulting parameters $\bx_t$, and the loss $L(\bx_t)$. 
\end{algorithmic} \end{algorithm} \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Vanilla GD, fixed $\eta=0.08$.]{\label{fig:cgm_conjugate8} \includegraphics[width=0.22\linewidth]{./imgs/steepest_gd_mom-0_lrate-8.pdf}} \subfigure[Steepest descent.]{\label{fig:cgm_zigzag2} \includegraphics[width=0.22\linewidth]{./imgs/steepest_gd_bisection.pdf}} \subfigure[Conjugate descent, fixed $\eta=0.06$.]{\label{fig:cgm_conjugate2} \includegraphics[width=0.22\linewidth]{./imgs/conjugate_gd_bisection_fix.pdf}} \subfigure[Conjugate descent, exact line search.]{\label{fig:cgm_conjugate3} \includegraphics[width=0.22\linewidth]{./imgs/conjugate_gd_bisection.pdf}} \caption{Illustration for the vanilla GD, steepest descent, and CG of quadratic form with $\bA=\begin{bmatrix} 20 & 7 \\ 5 & 5 \end{bmatrix}$, $\bb=\bzero$, and $c=0$. The starting point to descent is $\bx_1=[-3,3.5]^\top$.} \label{fig:cgm-zigzzag} \end{figure} In the subsequent sections, we base our derivation of the conjugate gradient on the assumption of a symmetric positive definite $\bA$; however, it can be readily adapted to asymmetric matrices. A comparison among vanilla GD, steepest descent, and conjugate gradient is shown in Figure~\ref{fig:cgm-zigzzag}, where we observe that the CG updates exhibit less zigzagging than those of vanilla GD and steepest descent. \index{Quadratic form} \subsubsection{Quadratic Form in Conjugate Direction (CD) Method} Following the discussion of the quadratic form in GD (Section~\ref{section:quadratic_vanilla_GD}, p.~\pageref{section:quadratic_vanilla_GD}), the quadratic form in steepest descent (Section~\ref{section:quadratic-in-steepestdescent}, p.~\pageref{section:quadratic-in-steepestdescent}), and the quadratic form in momentum (Section~\ref{section:quadratic-in-momentum}, p.~\pageref{section:quadratic-in-momentum}), we turn our attention to the quadratic form in CG.
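Before the analysis, the procedure of Algorithm~\ref{alg:conjugate_descent} can be sketched for the quadratic case (assuming NumPy, a symmetric positive definite $\bA$ as stated above, and exact line search; the test instance is our own choice):

```python
import numpy as np

def fletcher_reeves_cg(A, b, x1):
    """Fletcher-Reeves CG with exact line search for the quadratic
    L(x) = 1/2 x^T A x - b^T x + c, with A symmetric positive definite.
    Converges in at most d = len(b) steps."""
    x = np.asarray(x1, dtype=float)
    d = np.zeros_like(x)
    g_prev_sq = 1.0                       # placeholder; unused at t = 1
    for t in range(1, len(b) + 1):
        g = A @ x - b                     # gradient of the quadratic
        if not g.any():                   # already at the minimizer
            break
        beta = 0.0 if t == 1 else (g @ g) / g_prev_sq
        d = -g + beta * d                 # conjugate search direction
        eta = -(d @ g) / (d @ A @ d)      # exact line search
        x = x + eta * d
        g_prev_sq = g @ g
    return x

A = np.array([[20.0, 5.0], [5.0, 5.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x_star = fletcher_reeves_cg(A, b, [-3.0, 3.5])
```

Note that the first step coincides with steepest descent, since $\beta_1 = 0$.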
To introduce the \textit{conjugate gradient (CG)} method, we proceed by an exploration of the \textit{conjugate direction (CD)} method, where the distinction between them will become evident in the subsequent discussions. According to the definition of conjugacy (Definition~\ref{definition:conjugacy}), it is easy to show that any set of vectors $\{\bd_1, \bd_2, \ldots, \bd_d\}\subset \real^d$ satisfying this property with respect to the symmetric positive definite Hessian matrix $\bH=\frac{1}{2}(\bA^\top+\bA)$: \begin{equation}\label{equation:conjguate-definis} \bd_i^\top \bH\bd_j = 0, \gap \forall i\neq j, \end{equation} is also linearly independent. That is, the set spans the entire $\real^d$ space: $$ \text{span}\{\bd_1, \bd_2, \ldots, \bd_d\} = \real^d. $$ Given the initial parameter $\bx_1$ and a set of \textit{conjugate directions} $\{\bd_1, \bd_2, \ldots, \bd_d\}$ (defined in Eq.~\eqref{equation:conjguate-definis}), the update at time $t$ is given by \begin{equation}\label{equation:conjugate_direction-update} \bx_{t+1} = \bx_t +\eta_t \bd_t, \end{equation} where $\eta_t$ is the learning rate at time $t$, obtained by minimizing the one-dimensional quadratic function $J(\eta)=L(\bx_t+\eta\bd_t)$, as presented in Eq.~\eqref{equation:eta-gd-steepest} (p.~\pageref{equation:eta-gd-steepest}): \begin{equation}\label{equation:conjugate_direction-update2} \eta_t = - \frac{\bd_t^\top \bg_t}{ \bd_t^\top \bA\bd_t } \gap \text{with}\gap \bg_t = \nabla L(\bx_t). \end{equation} Then, we can establish the following theorem, which states that the updates along the conjugate directions converge in at most $d$ steps (the dimension of the parameter) when $\bA$ is symmetric positive definite.
\begin{theorem}[Converge in $d$ Steps]\label{theorem:conjudate_direc_d-steps} For any initial parameter $\bx_1$, the sequence $\{\bx_t\}$ generated by the conjugate direction algorithm Eq.~\eqref{equation:conjugate_direction-update} converges to the solution $\bx_\star$ in at most $d$ steps {when $\bA$ is symmetric positive definite}. \end{theorem} \begin{proof}[of Theorem~\ref{theorem:conjudate_direc_d-steps}] Since the conjugate directions $\{\bd_1, \bd_2, \ldots, \bd_d\}$ span the entire $\real^d$ space, the initial error vector $\be_1 = \bx_1 - \bx_\star$ (Definition~\ref{definition:error-gd-}, p.~\pageref{definition:error-gd-}) can be expressed as a linear combination of the conjugate directions: \begin{equation}\label{equation:dsteps-symmetric} \be_1 = \bx_1-\bx_\star = \gamma_1\bd_1 + \gamma_2\bd_2+\ldots+\gamma_d\bd_d. \end{equation} The $\gamma_i$'s can be obtained by the following equation: $$ \begin{aligned} \bd_t^\top \bH\be_1 &= \sum_{i=1}^{d} \gamma_i \bd_t^\top \bH\bd_i=\gamma_t \bd_t^\top \bH\bd_t &\text{(by conjugacy, Eq.~\eqref{equation:conjguate-definis})}\\ \leadto \gamma_t &= \frac{\bd_t^\top \bH\be_1}{\bd_t^\top \bH\bd_t}= \frac{\bd_t^\top \bH(\be_1+ \sum_{i=1}^{t-1}\eta_i \bd_i )}{\bd_t^\top \bH\bd_t} &\text{(by conjugacy, Eq.~\eqref{equation:conjguate-definis})}\\ &=\frac{\bd_t^\top \bH\be_t}{\bd_t^\top \bH\bd_t}. \end{aligned} $$ When $\bA$ is symmetric and nonsingular, we have $\be_t = \bx_t-\bx_\star = \bx_t-\bA^{-1}\bb$ and $\bH=\bA$. It can then be shown that $\gamma_t$ is equal to $\frac{\bd_t^\top \bg_t}{\bd_t^\top \bA\bd_t}$, which is exactly the same form (in magnitude) as the learning rate at time $t$ in steepest descent: $\gamma_t = -\eta_t$ (see Eq.~\eqref{equation:eta-gd-steepest}, p.~\pageref{equation:eta-gd-steepest}). Substituting into Eq.~\eqref{equation:dsteps-symmetric}, it follows that $$ \bx_\star = \bx_1 + \eta_1\bd_1+\eta_2\bd_2+\ldots +\eta_d\bd_d.
$$ Moreover, we have updates by Eq.~\eqref{equation:conjugate_direction-update} that $$ \begin{aligned} \bx_{d+1} &= \bx_d + \eta_d\bd_d \\ &=\bx_{d-1} +\eta_{d-1}\bd_{d-1} + \eta_d\bd_d \\ &= \ldots \\ &= \bx_1 + \eta_1\bd_1 + \eta_2\bd_2+\ldots+\eta_d\bd_d = \bx_\star, \end{aligned} $$ which completes the proof. \end{proof} The above theorem states that the conjugate direction method given by Eq.~\eqref{equation:conjugate_direction-update} converges in $d$ steps, i.e., $\bx_{d+1}$ minimizes the quadratic function $L(\bx)=\frac{1}{2}\bx^\top\bA\bx-\bb^\top\bx+c$ over the entire space $\real^d$. Furthermore, we can prove that at each iteration $t\leq d$, the update $\bx_{t+1}$ minimizes the quadratic function over a subspace of $\real^d$. \begin{theorem}[Expanding Subspace Minimization]\label{theorem:expanding_subspace_minimization} For any initial parameter $\bx_1$, let the sequence $\{\bx_t\}$ be generated by the conjugate direction algorithm Eq.~\eqref{equation:conjugate_direction-update}. Then it follows that \begin{equation}\label{equation:expanding_subspace_minimization_zero} \bg_{t+1}^\top \bd_i=0, \gap \forall i=1,2,\ldots, t, \text{ and } t\in \{1,2,\ldots, d\}, \end{equation} where $\bg_t = \bA\bx_t - \bb$ (i.e., the gradient when $\bA$ is symmetric), and $\bx_{t+1}$ is the minimizer of $L(\bx)=\frac{1}{2}\bx^\top\bA\bx-\bb^\top\bx+c$ with symmetric positive definite $\bA$ over the subspace \begin{equation}\label{equation:space_d_t} \mathbb{D}_t=\left\{\bx | \bx=\bx_1 + \text{span}\{\bd_1, \bd_2, \ldots, \bd_t\}\right\}. \end{equation} \end{theorem} \begin{proof}[of Theorem~\ref{theorem:expanding_subspace_minimization}] We first prove $\bg_{t+1}^\top\bd_i=0$ by induction. When $t=1$, since $\eta_1$ is obtained to minimize $J(\eta)=L(\bx_1+\eta\bd_1)$, by Lemma~\ref{lemm:linear-search-orghonal} (p.~\pageref{lemm:linear-search-orghonal}), the gradient $\bg_2 = \nabla L(\bx_2)$ is orthogonal to $\bd_1$.
Suppose now that for $t-1$ the induction hypothesis is satisfied, i.e., $\bg_{t}^\top\bd_i=0$ for $i=1,2,\ldots, t-1$. The gradient has the following update \begin{equation}\label{equation:conjguate-redsidual-update} \begin{aligned} \bg_{t+1} &= \bA\bx_{t+1}-\bb = \bA (\bx_t+\eta_t\bd_t) -\bb \\ &= \bg_t + \eta_t\bA\bd_t. \end{aligned} \end{equation} By conjugacy and the induction hypothesis, we have $\bg_{t+1}^\top \bd_i=0$ for $i=1,2,\ldots,t-1$. If we further prove this is also true for $\bg_{t+1}^\top\bd_t$, we complete the proof. This follows again from Lemma~\ref{lemm:linear-search-orghonal} (p.~\pageref{lemm:linear-search-orghonal}): the current gradient is orthogonal to the previous search direction $\bd_t$. For the second part, we define $f(\bm{\eta}) = L(\bx_1+\eta_1\bd_1+\eta_2\bd_2+\ldots +\eta_t\bd_t)$, which is a strictly convex quadratic function over $\bm{\eta}=[\eta_1, \eta_2, \ldots, \eta_t]^\top$ such that $$ \frac{\partial f(\bm{\eta})}{\partial \eta_i} = 0, \gap \forall i=1,2,\ldots, t. $$ This implies $$ \nabla L(\underbrace{\bx_1 +\eta_1\bd_1+\eta_2\bd_2+\ldots +\eta_t\bd_t}_{\bx_{t+1}})^\top \bd_i = 0, \gap \forall i=1,2,\ldots, t. $$ That is, $\bx_{t+1}\in \{\bx | \bx=\bx_1 + \text{span}\{\bd_1, \bd_2, \ldots, \bd_t\}\}$ is the minimizer of $L(\bx)$. \end{proof} \index{Quadratic form} \subsubsection{Quadratic Form in Conjugate Gradient (CG) Method} We have mentioned that the conjugate gradient (CG) method differs from the conjugate direction (CD) method. The distinction lies in the fact that the CG method computes a new vector $\bd_{t+1}$ using only the previous vector $\bd_t$ rather than the entire sequence $\{\bd_1, \bd_2, \ldots, \bd_t\}$, and the resulting $\bd_{t+1}$ is automatically conjugate to the entire sequence.
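Before deriving this formally, the claim is easy to check numerically. The sketch below (assuming NumPy; the random symmetric positive definite instance is our own) runs the recursion with the Fletcher-Reeves coefficient of Algorithm~\ref{alg:conjugate_descent} and measures the worst-case pairwise inner products:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)   # random symmetric positive definite, d = 4
b = rng.standard_normal(4)

x = rng.standard_normal(4)
d = np.zeros(4)
g_prev_sq = 1.0                  # placeholder; unused at t = 1
dirs, grads = [], []
for t in range(1, 5):            # at most d = 4 steps
    g = A @ x - b                            # gradient of the quadratic
    beta = 0.0 if t == 1 else (g @ g) / g_prev_sq
    d = -g + beta * d                        # uses only the previous d
    x = x + (-(d @ g) / (d @ A @ d)) * d     # exact line search
    grads.append(g)
    dirs.append(d)
    g_prev_sq = g @ g

# Up to round-off, the gradients are mutually orthogonal, the search
# directions are A-conjugate, and the final iterate solves A x = b.
max_conj = max(abs(dirs[j] @ A @ dirs[i]) for j in range(4) for i in range(j))
max_orth = max(abs(grads[j] @ grads[i]) for j in range(4) for i in range(j))
```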
In the CG method, each search direction $\bd_t$ is chosen to be a linear combination of the negative gradient $-\bg_t$ (the search direction in steepest descent) and the previous direction $\bd_{t-1}$: \begin{equation}\label{equation:cd_gradient_direction} \bd_{t} = -\bg_t + \beta_t \bd_{t-1}. \end{equation} Premultiplying by $\bd_{t-1}^\top\bA$ and imposing the conjugacy condition $\bd_{t-1}^\top\bA\bd_t=0$ yields $$ \beta_t = \frac{\bg_t^\top\bA\bd_{t-1}}{\bd_{t-1}^\top \bA\bd_{t-1}} . $$ This choice of $\beta_t$ and $\bd_t$ actually results in the conjugate sequence $\{\bd_1, \bd_2, \ldots, \bd_t\}$. To see this, we first provide the definition of the \textit{Krylov subspace of degree $t$ for vector $\bv$ with respect to matrix $\bA$}: $$ \mathcal{K}(\bv; t) = \text{span}\{\bv, \bA\bv, \ldots, \bA^{t-1}\bv\}. $$ \begin{theorem}[Convergence in $d$ Steps]\label{theorem:conjudate_CD_d-steps} For any initial parameter $\bx_1$, the sequence $\{\bx_t\}$ generated by the conjugate gradient algorithm, with search directions generated by Eq.~\eqref{equation:cd_gradient_direction}, converges to the solution $\bx_\star$ in at most $d$ steps {when $\bA$ is symmetric positive definite}. The result follows from the following claims: \begin{align} \bg_t^\top \bg_i &= 0, \gap \text{for $i\in\{1,2,\ldots, t-1\}$};\label{equation:conjudate_CD_d1}\\ \text{span}\{\bg_1, \bg_2, \ldots, \bg_t\} &= \text{span}\{\bg_1, \bA\bg_1, \ldots, \bA^{t-1}\bg_1\}=\mathcal{K}(\bg_1; t);\label{equation:conjudate_CD_d2}\\ \text{span}\{\bd_1, \bd_2, \ldots, \bd_t\} &= \text{span}\{\bg_1, \bA\bg_1, \ldots, \bA^{t-1}\bg_1\}=\mathcal{K}(\bg_1; t);\label{equation:conjudate_CD_d3}\\ \bd_t^\top\bA\bd_i &= 0, \gap \text{for $i\in\{1,2,\ldots, t-1\}$},\label{equation:conjudate_CD_d4} \end{align} where Eq.~\eqref{equation:conjudate_CD_d4} indicates the sequence $\{\bd_t\}$ is conjugate. \end{theorem} \begin{proof}[of Theorem~\ref{theorem:conjudate_CD_d-steps}] The proof proceeds through induction. Eq.~\eqref{equation:conjudate_CD_d2} and Eq.~\eqref{equation:conjudate_CD_d3} are trivial when $t=1$.
Assume that Eq.~\eqref{equation:conjudate_CD_d2}, Eq.~\eqref{equation:conjudate_CD_d3}, and Eq.~\eqref{equation:conjudate_CD_d4} hold for $t$; if we can show the equations still hold for $t+1$, then the proof is complete. By the induction hypothesis, we have $$ \bg_t \in \text{span}\{\bg_1, \bA\bg_1, \ldots, \bA^{t-1}\bg_1\}, \gap \bd_t \in \text{span}\{\bg_1, \bA\bg_1, \ldots, \bA^{t-1}\bg_1\}. $$ Left-multiplying by $\bA$, it follows that \begin{equation}\label{equation:dstesps_induction00} \bA\bd_t \in \text{span}\{\bA\bg_1, \bA^2\bg_1, \ldots, \bA^{t}\bg_1\}. \end{equation} Since \begin{equation}\label{equation:dstesps_induction11} \begin{aligned} \bg_{t+1} &= \bA\bx_{t+1}-\bb = \bA(\bx_t+\Delta \bx_t)-\bb\\ &=\bA(\bx_t+\eta_t \bd_t) -\bb = \bg_t + \eta_t\bA\bd_t, \end{aligned} \end{equation} then we have \begin{equation}\label{equation:dstesps_induction2} \bg_{t+1}\in \text{span}\{\bg_1,\bA\bg_1, \bA^2\bg_1, \ldots, \bA^{t}\bg_1\}. \end{equation} Combining Eq.~\eqref{equation:dstesps_induction2} and Eq.~\eqref{equation:conjudate_CD_d2}, we have $$ \text{span}\{\bg_1, \bg_2, \ldots, \bg_{t}, \bg_{t+1}\} \subset \text{span}\{\bg_1, \bA\bg_1, \ldots, \bA^{t-1}\bg_1, \bA^t\bg_1\}. $$ To see the reverse inclusion, by Eq.~\eqref{equation:conjudate_CD_d3}, it follows that $$ \bA^t \bg_1 = \bA(\bA^{t-1}\bg_1) \in \text{span}\{\bA\bd_1, \bA\bd_2, \ldots, \bA\bd_t\}. $$ Again, by Eq.~\eqref{equation:dstesps_induction11}, we have $\bA\bd_t = (\bg_{t+1}-\bg_t)/\eta_t$. Therefore, $$ \bA^t \bg_1 \in \text{span}\{\bg_1, \bg_2, \ldots, \bg_t, \bg_{t+1}\} . $$ Combining with Eq.~\eqref{equation:conjudate_CD_d2}, we have $$ \text{span}\{\bg_1, \bA\bg_1, \ldots, \bA^{t-1}\bg_1, \bA^{t}\bg_1\} \subset \text{span}\{\bg_1, \bg_2, \ldots, \bg_t, \bg_{t+1}\}. $$ Therefore, Eq.~\eqref{equation:conjudate_CD_d2} holds for $t+1$. Eq.~\eqref{equation:conjudate_CD_d3} follows similarly and also holds for $t+1$.
To see how Eq.~\eqref{equation:conjudate_CD_d4} holds for $t+1$, we have $$ \bd_{t+1}^\top\bA\bd_i = (-\bg_{t+1} + \beta_{t+1}\bd_{t})^\top\bA\bd_i. $$ By Theorem~\ref{theorem:expanding_subspace_minimization}, we have \begin{equation}\label{equation:dstesps_induction_argue41} \bg_{t+1}^\top\bd_i=0 \text{ for } i\in \{1,2,\ldots, t\}. \end{equation} Furthermore, by Eq.~\eqref{equation:dstesps_induction00} and Eq.~\eqref{equation:conjudate_CD_d3}, we have \begin{equation}\label{equation:dstesps_induction_argue42} \bA\bd_i \in \text{span}\{\bA\bg_1, \bA^2\bg_1, \ldots, \bA^{i}\bg_1\}\subset \text{span}\{\bd_1, \bd_2, \ldots, \bd_i, \bd_{i+1}\}. \end{equation} Combining Eq.~\eqref{equation:dstesps_induction_argue41} and Eq.~\eqref{equation:dstesps_induction_argue42} (together with the induction hypothesis on Eq.~\eqref{equation:conjudate_CD_d4}), it then follows that $$ \bd_{t+1}^\top\bA\bd_i=0,\gap \text{ for }i\in \{1,2,\ldots,t-1\}. $$ We need to further demonstrate $\bd_{t+1}^\top\bA\bd_t=0$, which holds by construction: the coefficient $\beta_{t+1}$ is chosen precisely so that this condition is satisfied. To establish the validity of Eq.~\eqref{equation:conjudate_CD_d1}, we have $\bd_i = -\bg_i+\beta_i\bd_{i-1}$, and therefore $\bg_i \in \text{span}\{\bd_i,\bd_{i-1}\}$. Employing Eq.~\eqref{equation:dstesps_induction_argue41}, we conclude $\bg_{t+1}^\top\bg_i=0$ for $i\in\{1,2,\ldots, t\}$. \end{proof} Therefore, the CG method developed by Eq.~\eqref{equation:cd_gradient_direction}, which creates conjugate directions $\bd_{t+1}^\top\bA\bd_t=0$, indeed finds a conjugate set with $\bd_{t+1}^\top\bA\bd_i=0$ for $i\in\{1,2,\ldots,t\}$. By Theorem~\ref{theorem:conjudate_direc_d-steps}, the CG method thus converges in at most $d$ steps (when $\bA$ is symmetric PD). The complete procedure is then formulated in Algorithm~\ref{alg:vanilla_conjugate_descent}.
\begin{algorithm}[H] \caption{Vanilla Conjugate Gradient Method for Quadratic Function} \label{alg:vanilla_conjugate_descent} \begin{algorithmic}[1] \State {\bfseries Require:} Symmetric positive definite $\bA\in \real^{d\times d}$; \State {\bfseries Input:} Initial parameter $\bx_1$; \State {\bfseries Input:} Initialize $\bd_0 =\bzero $ and $\bg_0 = \bd_0+\epsilon$; \For{$t=1:d$ } \State Compute gradient $\bg_t = \nabla L(\bx_{t})$; \State Compute coefficient $\beta_{t} = \frac{ \bg_t^\top\bA \bd_{t-1}}{\bd_{t-1}^\top \bA\bd_{t-1}}$; \Comment{set $\beta_1=0$ by convention} \State Compute descent direction $\bd_t = -\bg_{t} +\beta_{t} \bd_{t-1}$; \State Learning rate $\eta_t = - \frac{\bd_t^\top \bg_t}{ \bd_t^\top \bA\bd_t }$; \State Compute update step $\Delta \bx_t = \eta_t\bd_t$; \State Apply update $\bx_{t+1} = \bx_{t} + \Delta \bx_t$; \EndFor \State {\bfseries Return:} resulting parameters $\bx_t$, and the loss $L(\bx_t)$. \end{algorithmic} \end{algorithm} \index{Complexity} \index{Flops} To further reduce the complexity of the CG algorithm, we introduce the notion of floating-point operation (flop) counts. We follow the classical route and count the number of flops that the algorithm requires. Each addition, subtraction, multiplication, division, and square root is considered one flop. Note that we adopt the convention that an assignment operation is not counted as a flop. The calculation of the complexity relies extensively on the complexity of the multiplication of two matrices, so we formulate the finding in the following lemmas. \begin{lemma}[Vector Inner Product Complexity] Given two vectors $\bv,\bw\in \real^{n}$, the inner product of the two vectors $\bv^\top\bw$ is given by $\bv^\top\bw=v_1w_1+v_2w_2+\ldots+v_nw_n$, involving $n$ scalar multiplications and $n-1$ scalar additions. Therefore, the complexity for the inner product is $2n-1$ flops. \end{lemma} The matrix multiplication complexity thus relies on the complexity of the inner product.
\begin{lemma}[Matrix Multiplication Complexity]\label{lemma:matrix-multi-complexity} For matrices $\bA\in\real^{m\times n}$ and $\bB\in \real^{n\times k}$, the complexity of the multiplication $\bC=\bA\bB$ is $mk(2n-1)$ flops. \end{lemma} \begin{proof}[of Lemma~\ref{lemma:matrix-multi-complexity}] We notice that each entry of $\bC$ involves a vector inner product that requires $n$ multiplications and $n-1$ additions, and there are $mk$ such entries, which leads to the conclusion. \end{proof} By Theorem~\ref{theorem:conjudate_CD_d-steps}, we can replace the formula for calculating the learning rate by $$ \eta_t = - \frac{\bd_t^\top \bg_t}{ \bd_t^\top \bA\bd_t } \leadto \eta_t = \frac{\textcolor{mylightbluetext}{\bg_t}^\top \bg_t}{ \bd_t^\top \bA\bd_t }, $$ since $\bd_t^\top\bg_t = (-\bg_t+\beta_t\bd_{t-1})^\top\bg_t = -\bg_t^\top\bg_t$ by Eq.~\eqref{equation:expanding_subspace_minimization_zero}. According to Eq.~\eqref{equation:conjguate-redsidual-update}, it follows that $\eta_t\bA\bd_t = \bg_{t+1}-\bg_t$. Combining with Eq.~\eqref{equation:expanding_subspace_minimization_zero} and Eq.~\eqref{equation:conjudate_CD_d1}, $\beta_t$ can also be expressed as $$ \beta_t = -\frac{\bg_t^\top\bg_t}{\bd_{t-1}^\top \bg_{t-1}} =\frac{\bg_t^\top\bg_t}{\bg_{t-1}^\top \bg_{t-1}} . $$ This reduces the cost of computing these coefficients from approximately $4d^2$ flops (two matrix-vector products) to approximately $4d$ flops (two inner products) per iteration. This practical CG method is then outlined in Algorithm~\ref{alg:practical_conjugate_descent}.
\begin{algorithm}[H] \caption{Practical Conjugate Gradient Method for Quadratic Function} \label{alg:practical_conjugate_descent} \begin{algorithmic}[1] \State {\bfseries Require:} Symmetric positive definite $\bA\in \real^{d\times d}$; \State {\bfseries Input:} Initial parameter $\bx_1$; \State {\bfseries Input:} Initialize $\bd_0 =\bzero $ and $\bg_0 = \bd_0+\epsilon$; \For{$t=1:d$ } \State Compute gradient $\bg_t = \nabla L(\bx_{t})$; \State Compute coefficient $\beta_{t} = \frac{\bg_t^\top\bg_t}{\bg_{t-1}^\top \bg_{t-1}}$; \Comment{set $\beta_1=0$ by convention} \State Compute descent direction $\bd_t = -\bg_{t} +\beta_{t} \bd_{t-1}$; \State Learning rate $\eta_t = \frac{{\bg_t}^\top \bg_t}{ \bd_t^\top \bA\bd_t }$; \State Compute update step $\Delta \bx_t = \eta_t\bd_t$; \State Apply update $\bx_{t+1} = \bx_{t} + \Delta \bx_t$; \EndFor \State {\bfseries Return:} resulting parameters $\bx_t$, and the loss $L(\bx_t)$. \end{algorithmic} \end{algorithm} \index{Spectral decomposition} \index{Quadratic form} \index{Symmetry} \index{Positive definite} \subsubsection{Convergence Analysis for Symmetric Positive Definite Quadratic} We further discuss the convergence results of the CG method. According to Eq.~\eqref{equation:conjudate_CD_d3}, there exists a set of coefficients $\{\sigma_1,\sigma_2,\ldots,\sigma_t\}$ such that \begin{equation}\label{equation:cg-convergence-xt1} \begin{aligned} \bx_{t+1} &=\bx_1 + \eta_1\bd_1+\eta_2\bd_2+\ldots+\eta_t\bd_t \\ &= \bx_1 + \sigma_1\bg_1+\sigma_2\bA \bg_1+\ldots+\sigma_t\bA^{t-1}\bg_1\\ &= \bx_1 + P^{\textcolor{mylightbluetext}{\star}}_{t-1}(\bA)\bg_1, \end{aligned} \end{equation} where $P^{\textcolor{mylightbluetext}{\star}}_{t-1}(\bA) = \sigma_1\bI+\sigma_2\bA +\ldots+\sigma_t\bA^{t-1}$ is a polynomial of degree $t-1$ with coefficients $\{\sigma_1, \sigma_2, \ldots, \sigma_t\}$.
This polynomial is a special case of a polynomial of degree $t-1$ with arbitrary coefficients $\{\omega_1, \omega_2, \ldots, \omega_t\}$, denoted by $P_{t-1}(\bA) = \omega_1\bI+\omega_2\bA +\ldots+\omega_t\bA^{t-1}$. (Note that $P_{t-1}$ can take either a scalar or a matrix as its argument.) Suppose the symmetric positive definite $\bA$ admits the spectral decomposition (Theorem 13.1 in \citet{lu2022matrix} or Appendix~\ref{appendix:spectraldecomp}, p.~\pageref{appendix:spectraldecomp}): $$ \bA=\bQ\bLambda\bQ^\top \in \real^{d\times d} \leadto \bA^{-1} = \bQ\bLambda^{-1}\bQ^\top, $$ where the columns of $\bQ = [\bq_1, \bq_2, \ldots , \bq_d]$ are eigenvectors of $\bA$ and are mutually orthonormal, and the entries of $\bLambda = \diag(\lambda_1, \lambda_2, \ldots , \lambda_d)$ with $ \lambda_1\geq \lambda_2\geq \ldots\geq \lambda_d>0$ are the corresponding eigenvalues of $\bA$, which are real and ordered decreasingly (the eigenvalues are positive due to the positive definiteness assumption on $\bA$). It then follows that any eigenvector of $\bA$ is also an eigenvector of $P_{t-1}(\bA)$: $$ P_{t-1}(\bA) \bq_i = P_{t-1}(\lambda_i) \bq_i, \gap \forall i\in \{1,2,\ldots, d\}. $$ Moreover, since the eigenvectors span the entire space $\real^d$, there exists a set of coefficients $\{\nu_1,\nu_2,\ldots, \nu_d\}$ such that the initial error vector $\be_1$ can be expressed as \begin{equation}\label{equation:cg-convergence-xt2} \be_1=\bx_1 - \bx_\star = \sum_{i=1}^{d} \nu_i \bq_i, \end{equation} where $\bx_1$ is the initial parameter.
Combining Eq.~\eqref{equation:cg-convergence-xt1} and Eq.~\eqref{equation:cg-convergence-xt2}, this yields the update of the error vector: \begin{equation}\label{equation:cg-convergence-xt3} \begin{aligned} \be_{t+1}&=\bx_{t+1} - \bx_\star \\ &=\bx_1 + P^{\textcolor{mylightbluetext}{\star}}_{t-1}(\bA)\bg_1-\bx_\star\\ &=\bx_1 + P^{\textcolor{mylightbluetext}{\star}}_{t-1}(\bA)(\bA\bx_1 - \bA\underbrace{\bA^{-1} \bb}_{\bx_\star})-\bx_\star\\ &=\bx_1 + P^{\textcolor{mylightbluetext}{\star}}_{t-1}(\bA)\bA(\bx_1 - \bx_\star)-\bx_\star\\ &=\bigg(\bI+P^{\textcolor{mylightbluetext}{\star}}_{t-1}(\bA)\bA\bigg) (\bx_1 - \bx_\star)\\ &=\bigg(\bI+P^{\textcolor{mylightbluetext}{\star}}_{t-1}(\bA)\bA\bigg) \sum_{i=1}^{d} \nu_i \bq_i= \sum_{i=1}^{d}\bigg(1+ \lambda_i P^{\textcolor{mylightbluetext}{\star}}_{t-1}(\lambda_i)\bigg) \nu_i\bq_i. \end{aligned} \end{equation} To further discuss the convergence results, we need to use the notion of \textit{energy norm} for the error vector, $\norm{\be}_{\bA} = (\be^\top\bA\be)^{1/2}$, as defined in Section~\ref{section:general-converg-steepest} (p.~\pageref{section:general-converg-steepest}). It can be shown that minimizing $\norm{\be_t}_{\bA}$ is equivalent to minimizing $L(\bx_t)$ by Eq.~\eqref{equation:energy-norm-equivalent} (p.~\pageref{equation:energy-norm-equivalent}). \begin{remark}[Polynomial Minimization] Since we proved in Theorem~\ref{theorem:expanding_subspace_minimization} that $\bx_{t+1}$ minimizes $L(\bx)$ over the subspace $\mathbb{D}_t$ defined in Eq.~\eqref{equation:space_d_t}, it also minimizes the energy norm $\norm{\be}_{\bA}$ over the subspace $\mathbb{D}_t$ at iteration $t$. It then follows that $P^{\textcolor{mylightbluetext}{\star}}_{t-1}$ minimizes the energy norm over the space of all possible polynomials of degree $t-1$: $$ P^{\textcolor{mylightbluetext}{\star}}_{t-1}(\bA) =\mathop{\arg\min}_{P_{t-1}(\bA)} \norm{\bx_1 + P_{t-1}(\bA)\bg_1-\bx_\star}_{\bA}.
$$ \end{remark} Then the update of the squared energy norm can be obtained by $$ \begin{aligned} \norm{\be_{t+1}}_{\bA}^2 &= \be_{t+1}^\top \bA\be_{t+1} =\be_{t+1}^\top \left(\sum_{i=1}^{d}\lambda_i \bq_i\bq_i^\top\right) \be_{t+1} \\ & = \sum_{i=1}^{d} \lambda_i (\be_{t+1}^\top\bq_i)^2 \\ &=\sum_{i=1}^{d} \lambda_i \left(\bq_i^\top \bigg(\sum_{j=1}^{d}\bigg(1+ \lambda_j P^{\textcolor{mylightbluetext}{\star}}_{t-1}(\lambda_j)\bigg) \nu_j\bq_j\bigg)\right)^2 &\text{(by Eq.~\eqref{equation:cg-convergence-xt3})}\\ &=\sum_{i=1}^{d} \bigg(1+ \lambda_i P^{\textcolor{mylightbluetext}{\star}}_{t-1}(\lambda_i)\bigg)^2 \lambda_i\nu_i^2 &\text{($\bq_i^\top\bq_j=0$ if $i\neq j$)} \\ &=\mathop{\min}_{P_{t-1}} \sum_{i=1}^{d} \bigg(1+ \lambda_i P_{t-1}(\lambda_i)\bigg)^2 \lambda_i\nu_i^2 \\ &\leq m_t \sum_{i=1}^{d} \lambda_i\nu_i^2 &\text{($m_t = \mathop{\min}_{P_{t-1}}\mathop{\max}_{1\leq j\leq d} (1+ \lambda_j P_{t-1}(\lambda_j))^2 $)} \\ &= m_t \cdot \norm{\be_{1}}_{\bA}^2. \end{aligned} $$ Therefore, the rate of convergence for the CG method is controlled by \begin{equation}\label{equation:cg-convergence-xt5} m_t = \mathop{\min}_{P_{t-1}}\mathop{\max}_{1\leq j\leq d}(1+ \lambda_j P_{t-1}(\lambda_j))^2. \end{equation} \subsection*{Special Case: $\bA$ Has Only $r$ Distinct Eigenvalues} We then consider some special cases. Firstly, we want to show that the CG method terminates in at most $r$ iterations if the symmetric positive definite $\bA$ has only $r$ distinct eigenvalues. To establish this, suppose $\bA$ has distinct eigenvalues $\mu_1<\mu_2<\ldots<\mu_r$, and define a polynomial $Q_r(\lambda)$ by $$ Q_r(\lambda) =\frac{(-1)^r}{\mu_1\mu_2\ldots\mu_r} (\lambda-\mu_1)(\lambda-\mu_2)\ldots (\lambda-\mu_r), $$ such that $Q_r(\lambda_i)=0$ for every eigenvalue $\lambda_i$, $i\in\{1,2,\ldots, d\}$ (each $\lambda_i$ equals one of the $\mu_j$'s), and $Q_r(0)=1$. Therefore, it follows that $$ R_{r-1}(\lambda) = \frac{Q_r(\lambda)-1}{\lambda} $$ is a well-defined polynomial of degree $r-1$, since the numerator $Q_r(\lambda)-1$ has a root at $\lambda=0$.
Setting $t-1=r-1$ in Eq.~\eqref{equation:cg-convergence-xt5}, we have $$ \begin{aligned} 0&\leq m_{r}=\mathop{\min}_{P_{r-1}}\mathop{\max}_{1\leq j\leq d}(1+ \lambda_j P_{r-1}(\lambda_j))^2\\ &\leq\mathop{\max}_{1\leq j\leq d}(1+ \lambda_j R_{r-1}(\lambda_j))^2 = \mathop{\max}_{1\leq j\leq d} Q_r^2(\lambda_j) = 0. \end{aligned} $$ As a result, $m_{r}=0$ and $\norm{\be_{r+1}}_{\bA}=0$, implying $\bx_{r+1} = \bx_\star$: the algorithm terminates at iteration $r$. A specific example is shown in Figure~\ref{fig:conjugate_specialcases}, where Figure~\ref{fig:conjugate_specialcases_2eigenvalue} terminates in two steps since it has two distinct eigenvalues, and Figure~\ref{fig:conjugate_specialcases_1eigenvalue} terminates in just one step as it has one distinct eigenvalue. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[CG, 2 distinct eigenvalues. Finish in 2 steps.]{\label{fig:conjugate_specialcases_2eigenvalue} \includegraphics[width=0.481\linewidth]{./imgs/conjugate_gd_bisection_2eigenvector.pdf}} \subfigure[CG, 1 distinct eigenvalue. Finish in 1 step.]{\label{fig:conjugate_specialcases_1eigenvalue} \includegraphics[width=0.481\linewidth]{./imgs/conjugate_gd_bisection_1eigenvector.pdf}} \caption{Illustration of special cases for CG with exact line search of quadratic forms. $\bA=\begin{bmatrix} 20 & 5 \\ 5 & 5 \end{bmatrix}$, $\bb=\bzero$, $c=0$, and the starting point for descent is $\bx_1=[-2, 2]^\top$ for Fig~\ref{fig:conjugate_specialcases_2eigenvalue}.
$\bA=\begin{bmatrix} 20 & 0 \\ 0 & 20 \end{bmatrix}$, $\bb=\bzero$, $c=0$, and the starting point for descent is $\bx_1=[-2, 2]^\top$ for Fig~\ref{fig:conjugate_specialcases_1eigenvalue}.} \label{fig:conjugate_specialcases} \end{figure} \subsection*{Closed Form by Chebyshev Polynomials} It can be shown that Eq.~\eqref{equation:cg-convergence-xt5} is minimized by a Chebyshev polynomial, given by $$ 1+ \lambda P_{t-1}(\lambda) = \frac{T_{t}\left( \frac{\lambda_{\max} + \lambda_{\min} - 2\lambda}{\lambda_{\max}-\lambda_{\min}} \right) } {T_{t}\left( \frac{\lambda_{\max} + \lambda_{\min} }{\lambda_{\max}-\lambda_{\min}} \right) }, $$ where $T_t(w) = \frac{1}{2} \left[ (w+\sqrt{w^2-1})^t + (w-\sqrt{w^2-1})^t\right]$ represents the Chebyshev polynomial of degree $t$. \begin{proof} To see this, we can express $m_t$ in Eq.~\eqref{equation:cg-convergence-xt5} as \begin{equation}\label{equation:cg-convergence-xt5_rewrite} m_t = \mathop{\min}_{P_{t-1}}\mathop{\max}_{1\leq j\leq d}(1+ \lambda_j P_{t-1}(\lambda_j))^2 = \mathop{\min}_{P_{t-1}}\mathop{\max}_{1\leq j\leq d} (\widetilde{P}_{t}(\lambda_j))^2, \end{equation} where $\widetilde{P}_{t}(\lambda) = 1+ \lambda P_{t-1}(\lambda)=1+w_1\lambda + \ldots+w_t\lambda^t$ is a special polynomial of degree $t$ with $\widetilde{P}_{t}(0)=1$. We note that the Chebyshev polynomial can be expressed on the interval $w\in [-1,1]$ as $$ T_t(w) =\cos(t \cos^{-1} w), \gap w\in [-1,1] \leadto |T_t(w)| \leq 1,\gap \text{if } w\in [-1,1]. $$ Observe that the Chebyshev-based choice of $\widetilde{P}_{t}(\lambda)$ above oscillates within the range $\pm {T_{t}\left( \frac{\lambda_{\max} + \lambda_{\min} }{\lambda_{\max}-\lambda_{\min}} \right) }^{-1}$ over the domain $[\lambda_{\min}, \lambda_{\max}]$. Suppose there exists a polynomial $S_t(\lambda)$ of degree $t$ such that $S_t(0)=1$ and $S_t$ has a smaller maximum magnitude than $\widetilde{P}_t$ on the domain $[\lambda_{\min}, \lambda_{\max}]$.
It then follows that $S_t-\widetilde{P}_t$ has a zero at $\lambda=0$ and another $t$ zeros on the interval $[\lambda_{\min}, \lambda_{\max}]$, giving it $t+1$ zeros in total; since a nonzero polynomial of degree $t$ has at most $t$ zeros, this leads to a contradiction. Therefore, $\widetilde{P}_t$ is the optimal polynomial of degree $t$. This completes the proof. \end{proof} Therefore, it follows that $$ \begin{aligned} \norm{\be_{t+1}}_{\bA} &\leq T_t\left( \frac{\lambda_{\max} + \lambda_{\min}}{\lambda_{\max} - \lambda_{\min}} \right)^{-1} \cdot\norm{\be_1}_{\bA} \\ &=T_t\left( \frac{\kappa+1}{\kappa-1} \right)^{-1} \cdot \norm{\be_1}_{\bA}\\ &= 2\left[ \left(\frac{\sqrt{\kappa}+1}{\sqrt{\kappa}-1} \right)^t + \left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1} \right)^t \right]^{-1} \cdot\norm{\be_1}_{\bA}, \end{aligned} $$ where $\kappa = \frac{\lambda_{\max}}{\lambda_{\min}}$ is the condition number, and $\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1} \right)^t \rightarrow 0$ as the iteration number $t$ grows. A weaker inequality can be obtained: $$ \norm{\be_{t+1}}_{\bA} \leq 2 \left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1} \right)^t \cdot\norm{\be_1}_{\bA}. $$ Figure~\ref{fig:rate_convergen_conjugae_comparison} compares the rate of convergence of steepest descent and CG per iteration. It is observed that CG exhibits significantly faster convergence compared to steepest descent. \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Rate of convergence for steepest descent per iteration (same as Figure~\ref{fig:rate_convergen_steepest}, p.~\pageref{fig:rate_convergen_steepest}). The $y$-axis is $\frac{\kappa-1}{\kappa+1}$.]{\label{fig:rate_convergen_steepest1} \includegraphics[width=0.481\linewidth]{./imgs/rate_convergen_steepest.pdf}} \subfigure[Rate of convergence for CG per iteration.
The $y$-axis is $\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}$.]{\label{fig:rate_convergen_conjugate} \includegraphics[width=0.481\linewidth]{./imgs/rate_convergen_conjugae.pdf}} \caption{Illustration of the rate of convergence for CG and steepest descent.} \label{fig:rate_convergen_conjugae_comparison} \end{figure} \index{Rate of convergence} \subsection*{Preconditioning} The smaller the condition number $\kappa$, the faster the convergence (Figure~\ref{fig:rate_convergen_conjugate}). We can therefore accelerate the convergence of CG by transforming the linear system to improve the eigenvalue distribution of $\bA$; this procedure is known as \textit{preconditioning}. The variable $\bx$ is transformed to $\widehat{\bx}$ via a nonsingular matrix $\bP$, satisfying $$ \begin{aligned} \whbx &= \bP\bx;\\ \whL(\whbx) &=\frac{1}{2}\whbx^\top (\bP^{-\top} \bA\bP^{-1})\whbx - (\bP^{-\top}\bb)^\top \whbx +c. \end{aligned} $$ When $\bA$ is symmetric, the minimization of $\whL(\whbx)$ is equivalent to the solution of the linear equation $$ \begin{aligned} (\bP^{-\top} \bA\bP^{-1})\whbx &= \bP^{-\top}\bb \\ &\leadto \bP^{-\top}\bA\bx=\bP^{-\top}\bb \\ &\leadto \bA\bx=\bb. \end{aligned} $$ That is, we can solve $\bA\bx=\bb$ indirectly by solving $\bP^{-\top}\bA\bx=\bP^{-\top}\bb$. Therefore, the rate of convergence of the quadratic form $\whL(\whbx)$ depends on the condition number of $\bP^{-\top} \bA\bP^{-1}$, which can be controlled by the nonsingular matrix $\bP$. Intuitively, preconditioning is a procedure to stretch the quadratic form to make it more spherical so that the eigenvalues are clustered in a smaller range. A specific example is given in Figure~\ref{fig:conjugate_specialcases}: we want to transform the elliptical contour in Figure~\ref{fig:conjugate_specialcases_2eigenvalue} into the spherical contour in Figure~\ref{fig:conjugate_specialcases_1eigenvalue}.
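The change of variables above can be checked numerically. The following sketch (assuming NumPy; the diagonal $\bP$ and the nonzero $\bb$ are illustrative choices, not taken from the text) verifies that untransforming the solution of the preconditioned system recovers the solution of $\bA\bx=\bb$, and that $\bP^{-\top}\bA\bP^{-1}$ is better conditioned than $\bA$:

```python
import numpy as np

# Symmetric positive definite A from the running 2x2 example; b is illustrative.
A = np.array([[20.0, 5.0],
              [5.0, 5.0]])
b = np.array([1.0, 2.0])

# Any nonsingular P defines a preconditioner; a diagonal P is the simplest choice.
P = np.diag([np.sqrt(20.0), np.sqrt(5.0)])
P_inv = np.linalg.inv(P)

# Transformed system: (P^{-T} A P^{-1}) xhat = P^{-T} b, with xhat = P x.
A_hat = P_inv.T @ A @ P_inv
b_hat = P_inv.T @ b
x_hat = np.linalg.solve(A_hat, b_hat)

# Untransforming recovers the solution of the original system A x = b.
x = P_inv @ x_hat
x_direct = np.linalg.solve(A, b)
assert np.allclose(x, x_direct)

# The transformed matrix is better conditioned than A.
print(np.linalg.cond(A), np.linalg.cond(A_hat))
```

Here the diagonal $\bP$ already clusters the eigenvalues into a smaller range; the perfect choice $\bM=\bA$ drives the condition number all the way to 1.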
Based on Algorithm~\ref{alg:practical_conjugate_descent}, the preconditioned CG method is formulated in Algorithm~\ref{alg:predicition_CG}. \begin{algorithm}[h] \caption{Transformed-Preconditioned CG for Quadratic Functions} \label{alg:predicition_CG} \begin{algorithmic}[1] \State {\bfseries Require:} Symmetric positive definite $\bA\in \real^{d\times d}$; \State {\bfseries Input:} Initial parameter $\whbx_1$; \State {\bfseries Input:} Initialize $\whbd_0 =\bzero $ and $\whbg_0 = \whbd_0+\epsilon$; \For{$t=1:d$ } \State Compute gradient $\whbg_t = \nabla \whL(\whbx_{t}) = (\bP^{-\top} \bA\bP^{-1})\whbx_t- \bP^{-\top}\bb $; \Comment{$=\textcolor{mylightbluetext}{\bP^{-\top}}\bg_t$} \State Compute coefficient $\widehat{\beta}_{t} = \frac{\whbg_t^\top\whbg_t}{\whbg_{t-1}^\top \whbg_{t-1}} $; \Comment{$= \frac{\bg_t^\top \textcolor{mylightbluetext}{(\bP^\top\bP)^{-1}}\bg_t} {\bg_{t-1}^\top\textcolor{mylightbluetext}{(\bP^\top\bP)^{-1}} \bg_{t-1}}$} \State Compute descent direction $\whbd_t = -\whbg_{t} +\widehat{\beta}_{t} \whbd_{t-1}$; \Comment{$=-\textcolor{mylightbluetext}{\bP^{-\top}}\bg_{t} +\widehat{\beta}_{t} \whbd_{t-1}$} \State Learning rate $\widehat{\eta}_t = \frac{{\whbg_t}^\top \whbg_t}{ \whbd_t^\top (\bP^{-\top} \bA\bP^{-1})\whbd_t } $; \Comment{$= \frac{{\bg_t}^\top \textcolor{mylightbluetext}{(\bP^\top\bP)^{-1}}\bg_t}{ \whbd_t^\top (\bP^{-\top} \bA\bP^{-1})\whbd_t }$} \State Compute update step $\Delta \whbx_t = \widehat{\eta}_t\whbd_t$; \State Apply update $\whbx_{t+1} = \whbx_{t} + \Delta \whbx_t$; \EndFor \State {\bfseries Return:} resulting parameters $\textcolor{mylightbluetext}{\bx_t=\bP^{-1}\whbx_t}$, and the loss $L(\bx_t)$. \end{algorithmic} \end{algorithm} However, the procedure in Algorithm~\ref{alg:predicition_CG} is not desirable since we need to transform $\bx$ into $\whbx=\bP\bx$ and transform back via $\bx=\bP^{-1}\whbx$, as highlighted in the blue text of Algorithm~\ref{alg:predicition_CG}.
This introduces additional computational overhead. Letting $\bM=\bP^\top\bP$, Algorithm~\ref{alg:untransformed_predicition_CG} formulates the untransformed-preconditioned CG, which proves to be more efficient than Algorithm~\ref{alg:predicition_CG}. \begin{algorithm}[h] \caption{Untransformed-Preconditioned CG for Quadratic Functions} \label{alg:untransformed_predicition_CG} \begin{algorithmic}[1] \State {\bfseries Require:} Symmetric positive definite $\bA\in \real^{d\times d}$; \State {\bfseries Input:} Initial parameter $\bx_1$; \State {\bfseries Input:} Initialize $\bd_0 =\bzero $ and $\bg_0 = \bd_0+\epsilon$; \For{$t=1:d$ } \State Compute gradient $\bg_t = \nabla L(\bx_{t})$; \Comment{Same as that of Algorithm~\ref{alg:practical_conjugate_descent}} \State Compute coefficient $\widehat{\beta}_{t} = \frac{\bg_t^\top\textcolor{mylightbluetext}{\bM^{-1}}\bg_t}{\bg_{t-1}^\top\textcolor{mylightbluetext}{\bM^{-1}} \bg_{t-1}}$; \Comment{Same as that of Algorithm~\ref{alg:predicition_CG}} \State Compute descent direction $\widetilde{\bd}_t = -\textcolor{mylightbluetext}{\bM^{-1}}\bg_{t} +\widehat{\beta}_{t} \widetilde{\bd}_{t-1}$; \Comment{$=\textcolor{mylightbluetext}{\bP^{-1}} \whbd_{t}$ in Algorithm~\ref{alg:predicition_CG}} \State Learning rate $\widehat{\eta}_t = {({\bg_t}^\top\textcolor{mylightbluetext}{\bM^{-1}} \bg_t)}/{ (\widetilde{\bd}_t^\top \bA\widetilde{\bd}_t )}$; \Comment{Same as that of Algorithm~\ref{alg:predicition_CG}} \State Compute update step ${\Delta \bx}_t = \widehat{\eta}_t\widetilde{\bd}_t$; \Comment{$=\textcolor{mylightbluetext}{\bP^{-1}} \Delta\whbx_{t}$ in Algorithm~\ref{alg:predicition_CG}} \State Apply update $\bx_{t+1} = \bx_{t} + {\Delta \bx}_t$; \EndFor \State {\bfseries Return:} resulting parameters $\bx_t$, and the loss $L(\bx_t)$.
\end{algorithmic} \end{algorithm} \begin{figure}[h] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Contour plot of quadratic function with $\bA$.]{\label{fig:conjugate_precondition1} \includegraphics[width=0.481\linewidth]{./imgs/conjugate_precondition1.pdf}} \subfigure[Contour plot of quadratic function with $\bP^{-\top} \bA\bP^{-1}$.]{\label{fig:conjugate_precondition2} \includegraphics[width=0.481\linewidth]{./imgs/conjugate_precondition2.pdf}} \caption{Illustration of preconditioning for $\bA=\begin{bmatrix} 20&5 \\5&5 \end{bmatrix}$. $\bP$ is obtained by the Cholesky decomposition such that $\bM=\bA=\bP^\top\bP$.} \label{fig:conjugate_precondition13} \end{figure} \index{Cholesky decomposition} \paragraph{Second perspective of preconditioning.} The matrices $\bM^{-1}\bA$ and $\bP^{-\top} \bA\bP^{-1}$ have the same eigenvalues. To see this, suppose $(\lambda, \bv)$ is an eigenpair of $\bM^{-1}\bA$, i.e., $(\bM^{-1} \bA) \bv =\lambda \bv$; it follows that $$ (\bP^{-\top} \bA\bP^{-1}) (\bP\bv) = \bP^{-\top} \bA\bv = \bP\bP^{-1}\bP^{-\top} \bA\bv =\bP\bM^{-1}\bA\bv=\lambda (\bP\bv). $$ Therefore, preconditioning can be understood from two perspectives: the first transforms the variable via $\whbx=\bP\bx$ as above, while the second is to solve $\bM^{-1}\bA\bx = \bM^{-1}\bb$, where the condition number is decided by the matrix $\bM^{-1}\bA$. The simplest preconditioner is a diagonal matrix $\bM$ whose diagonal entries are identical to those of $\bA$, known as \textit{diagonal preconditioning}, in which case we scale the quadratic form along the coordinate axes. In contrast, the \textit{perfect preconditioner} is $\bM=\bA$ such that $\bM^{-1}\bA=\bI$, whose condition number is 1, in which case the quadratic form is scaled along its eigenvector directions. In this sense, $\bP$ can be obtained by the (pseudo) Cholesky decomposition (Theorem 2.1 in \citet{lu2022matrix} or Appendix~\ref{appendix:choleskydecomp}) such that $\bM=\bA=\bP^\top\bP$.
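A quick numerical check of this paragraph (a sketch assuming NumPy; the particular nonsingular $\bP$ is an arbitrary illustrative choice) confirms that $\bM^{-1}\bA$ and $\bP^{-\top}\bA\bP^{-1}$ share the same spectrum, and that the perfect preconditioner $\bM=\bA$ with $\bP$ from the Cholesky decomposition yields condition number 1:

```python
import numpy as np

A = np.array([[20.0, 5.0],
              [5.0, 5.0]])

# An arbitrary nonsingular P (illustrative choice), with M = P^T P.
P = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P_inv = np.linalg.inv(P)
M = P.T @ P

# M^{-1} A and P^{-T} A P^{-1} share the same eigenvalues.
eigs_similar = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(M, A))))
eigs_transformed = np.sort(np.linalg.eigvalsh(P_inv.T @ A @ P_inv))
assert np.allclose(eigs_similar, eigs_transformed)

# Perfect preconditioner: M = A with P the Cholesky factor (A = P^T P),
# so that P^{-T} A P^{-1} = I and its condition number is 1.
L = np.linalg.cholesky(A)      # lower triangular, A = L L^T
P_perfect = L.T                # upper triangular, A = P^T P
Pp_inv = np.linalg.inv(P_perfect)
A_hat = Pp_inv.T @ A @ Pp_inv  # equals the identity up to rounding
print(np.linalg.cond(A_hat))
```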
Figure~\ref{fig:conjugate_precondition13} shows the perfect preconditioning on $\bM=\bA=\begin{bmatrix} 20&5 \\5&5\\ \end{bmatrix}$ such that the eigenvalues of $\bP^{-\top}\bA\bP^{-1}$ are identical and the condition number is thus equal to 1. \index{Cholesky decomposition} \subsubsection{General Conjugate Gradient Method} We now revisit the general CG method as introduced in \citet{fletcher1964function}. The method has been previously formulated in Algorithm~\ref{alg:conjugate_descent}; we may notice that the \textit{Fletcher-Reeves Conjugate Gradient} method (Algorithm~\ref{alg:conjugate_descent}) is just the same as the \textit{Practical Conjugate Gradient} method (Algorithm~\ref{alg:practical_conjugate_descent}) under the conditions of a strongly convex quadratic loss function and the use of an exact line search for the learning rate $\eta_t$. To see why the Fletcher-Reeves Conjugate Gradient algorithm (Algorithm~\ref{alg:conjugate_descent}) works, note that the search direction $\bd_t$ must satisfy the descent condition (Remark~\ref{remark:descent_condition}, p.~\pageref{remark:descent_condition}) such that $ \bg_t^\top \bd_t<0$. The descent condition is satisfied when the learning rate is calculated by exact line search, in which case the gradient $\nabla L(\bx_t) = \bg_t$ is orthogonal to the search direction $\bd_{t-1}$ (Lemma~\ref{lemm:linear-search-orghonal}, p.~\pageref{lemm:linear-search-orghonal}): $\bg_t^\top\bd_{t-1}=0$. Therefore, $$ \bg_t^\top \bd_t = \bg_t^\top (-\bg_t +\beta_t\bd_{t-1} ) = -\norm{\bg_t}^2 + \beta_t \bg_t^\top \bd_{t-1}<0 $$ when $\eta_t$ is determined by exact line search. However, when $\eta_t$ is fixed or calculated by inexact line search, the descent condition $\bg_t^\top\bd_t<0$ may not be satisfied. This problem, however, can be attacked by the \textit{strong Wolfe conditions} \citep{nocedal1999numerical}; we will not go into the details.
\paragraph{Polak-Ribi\`ere conjugate gradient.} We have mentioned previously that $\beta_t$ can also be computed by the Polak-Ribi\`ere coefficient: $$ \text{Polak-Ribi\`ere:\gap } \beta_t^P = \frac{\bigg(\nabla L(\bx_t) - \nabla L(\bx_{t-1})\bigg)^\top \nabla L(\bx_t)}{\nabla L(\bx_{t-1})^\top \nabla L(\bx_{t-1})} = \frac{(\bg_t - \bg_{t-1})^\top \bg_t}{ \bg_{t-1}^\top \bg_{t-1}} . $$ When the loss function is strongly convex quadratic and the learning rate is chosen by exact line search, the Polak-Ribi\`ere coefficient $\beta_t^P$ is identical to the Fletcher-Reeves coefficient $\beta_t^F$ since $\bg_{t-1}^\top\bg_t=0$ by Theorem~\ref{theorem:conjudate_CD_d-steps}. \paragraph{Hestenes–Stiefel conjugate gradient.} The Hestenes–Stiefel coefficient is yet another variant of the Polak-Ribi\`ere coefficient: $$ \text{Hestenes–Stiefel:\gap } \beta_t^H = \frac{\bigg(\nabla L(\bx_t) - \nabla L(\bx_{t-1})\bigg)^\top \nabla L(\bx_t)}{\bigg(\nabla L(\bx_t) - \nabla L(\bx_{t-1})\bigg)^\top \bd_{t-1}} = \frac{(\bg_t - \bg_{t-1})^\top \bg_t}{ (\bg_t - \bg_{t-1})^\top \bd_{t-1}}. $$ When the loss function is strongly convex quadratic and the learning rate is chosen by exact line search, the Hestenes–Stiefel coefficient $\beta_t^H$ is identical to the Fletcher-Reeves coefficient $\beta_t^F$ since $\bg_{t-1}^\top\bg_t=0$ by Theorem~\ref{theorem:conjudate_CD_d-steps}, and $\bg_t^\top\bd_{t-1}=\bg_{t-1}^\top\bd_{t-2}=0$ by Theorem~\ref{theorem:expanding_subspace_minimization}, so that the denominator reduces to $-\bg_{t-1}^\top\bd_{t-1}=\bg_{t-1}^\top\bg_{t-1}$. Moreover, numerical experiments show that the Polak-Ribi\`ere and Hestenes–Stiefel coefficients are more robust than the Fletcher-Reeves coefficient in nonconvex settings \citep{nocedal1999numerical}.
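On a strongly convex quadratic with exact line search, the equivalence of the three coefficients can be observed numerically. The following sketch (assuming NumPy; the random test problem is an illustrative choice) runs the practical CG recursion and checks that $\beta_t^F$, $\beta_t^P$, and $\beta_t^H$ coincide at every iteration, and that the iterate reaches $\bx_\star=\bA^{-1}\bb$ in at most $d$ steps:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
# Random symmetric positive definite A and right-hand side b (illustrative).
Q = rng.standard_normal((d, d))
A = Q.T @ Q + d * np.eye(d)
b = rng.standard_normal(d)

x = rng.standard_normal(d)
g = A @ x - b        # gradient g_t = A x_t - b
dvec = -g            # first search direction: steepest descent
for t in range(1, d):
    # Exact line search on the quadratic: eta_t = (g^T g) / (d^T A d).
    eta = (g @ g) / (dvec @ A @ dvec)
    x = x + eta * dvec
    g_new = A @ x - b
    # Fletcher-Reeves, Polak-Ribiere, and Hestenes-Stiefel coefficients.
    beta_F = (g_new @ g_new) / (g @ g)
    beta_P = ((g_new - g) @ g_new) / (g @ g)
    beta_H = ((g_new - g) @ g_new) / ((g_new - g) @ dvec)
    # On a strongly convex quadratic with exact line search they coincide.
    assert np.allclose([beta_P, beta_H], beta_F)
    dvec = -g_new + beta_F * dvec
    g = g_new

# The final (d-th) update reaches the exact minimizer x_* = A^{-1} b.
eta = (g @ g) / (dvec @ A @ dvec)
x = x + eta * dvec
assert np.allclose(x, np.linalg.solve(A, b))
```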
\newpage \appendix \newpage \clearchapter{Taylor’s Expansion} \begingroup \hypersetup{linkcolor=winestain, linktoc=page, } \minitoc \newpage \endgroup \section{Taylor’s Expansion}\label{appendix:taylor-expansion} \index{Taylor's formula} \begin{theorem}[Taylor’s Expansion with Lagrange Remainder] Let $f(x): \real\rightarrow \real$ be $k$-times continuously differentiable on the closed interval $I$ with endpoints $x$ and $y$, for some $k\geq 0$. If $f^{(k+1)}$ exists on the interval $I$, then there exists an $x^\star \in (x,y)$ such that $$ \begin{aligned} &\gap f(x) \\ &= f(y) + f^\prime(y)(x-y) + \frac{f^{\prime\prime}(y)}{2!}(x-y)^2+ \ldots + \frac{f^{(k)}(y)}{k!}(x-y)^k + \frac{f^{(k+1)}(x^\star)}{(k+1)!}(x-y)^{k+1}\\ &=\sum_{i=0}^{k} \frac{f^{(i)}(y)}{i!} (x-y)^i + \frac{f^{(k+1)}(x^\star)}{(k+1)!}(x-y)^{k+1}. \end{aligned} $$ Taylor's expansion can be extended to a function of a vector $f(\bx):\real^n\rightarrow \real$ or a function of a matrix $f(\bX): \real^{m\times n}\rightarrow \real$. \end{theorem} Taylor's expansion, also known as the \textit{Taylor series}, approximates the function $f(x)$ around the value of $y$ by a polynomial in a single indeterminate $x$. To see where this series comes from, we recall from elementary calculus that the approximation of $\cos (\theta)$ around $\theta=0$ is given by $$ \cos (\theta) \approx 1-\frac{\theta^2}{2}. $$ That is, $\cos (\theta)$ is approximated by a polynomial of degree 2. Suppose we want to approximate $\cos (\theta)$ by a general polynomial of degree 2, $ f(\theta) = c_1+c_2 \theta+ c_3 \theta^2$. An intuitive idea is to match the derivatives at $\theta=0$. That is, $$\left\{ \begin{aligned} \cos(0) &= f(0); \\ \cos^\prime(0) &= f^\prime(0);\\ \cos^{\prime\prime}(0) &= f^{\prime\prime}(0);\\ \end{aligned} \right. \leadto \left\{ \begin{aligned} 1 &= c_1; \\ -\sin(0) &=0= c_2;\\ -\cos(0) &=-1= 2c_3.\\ \end{aligned} \right.
$$ This makes $f(\theta) = c_1+c_2 \theta+ c_3 \theta^2 = 1-\frac{\theta^2}{2}$, which agrees with our claim that $\cos (\theta) \approx 1-\frac{\theta^2}{2}$ around $\theta=0$. We shall not give the details of the proof. \newpage \clearchapter{Matrix Decomposition} \begingroup \hypersetup{linkcolor=winestain, linktoc=page, } \minitoc \newpage \endgroup \section{Cholesky Decomposition}\label{appendix:choleskydecomp} \lettrine{\color{caligraphcolor}P}{ositive} definiteness or positive semidefiniteness is one of the highest accolades to which a matrix can aspire. In this section, we introduce decompositional approaches for positive definite matrices and illustrate the most famous of them, the Cholesky decomposition, as follows. \index{Cholesky decomposition} \begin{theoremHigh}[Cholesky Decomposition]\label{theorem:cholesky-factor-exist} Every positive definite (PD) matrix $\bA\in \real^{n\times n}$ can be factored as $$ \bA = \bR^\top\bR, $$ where $\bR \in \real^{n\times n}$ is an upper triangular matrix \textbf{with positive diagonal elements}. This decomposition is known as the \textit{Cholesky decomposition} of $\bA$, and $\bR$ is known as the \textit{Cholesky factor} or \textit{Cholesky triangle} of $\bA$. Alternatively, $\bA$ can be factored as $\bA=\bL\bL^\top$, where $\bL=\bR^\top$ is a lower triangular matrix \textit{with positive diagonals}. Moreover, the Cholesky decomposition is unique (Corollary~\ref{corollary:unique-cholesky-main}). \end{theoremHigh} The Cholesky decomposition is named after a French military officer and mathematician, Andr\'{e}-Louis Cholesky (1875{\textendash}1918), who developed it in his surveying work. The Cholesky decomposition is used primarily to solve positive definite linear systems. Here we establish only the existence of the Cholesky decomposition, through an inductive approach, although alternative proofs exist, e.g., as a consequence of the LU decomposition \citep{lu2022matrix}.
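The inductive argument used in the existence proof below is constructive: it builds the Cholesky factor one row and column at a time. A minimal Python sketch of this construction (the test matrix is the illustrative PD matrix used earlier in the text):

```python
import numpy as np

def cholesky_recursive(A):
    """Upper-triangular R with A = R^T R, following the inductive proof:
    factor the leading k x k block R_k, then append the column
    r = R_k^{-T} b and the diagonal entry s = sqrt(d - r^T r)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    R = np.zeros_like(A)
    R[0, 0] = np.sqrt(A[0, 0])              # the trivial 1 x 1 case
    for k in range(1, n):
        Rk = R[:k, :k]                      # Cholesky factor of A[:k, :k]
        bvec = A[:k, k]
        r = np.linalg.solve(Rk.T, bvec)     # r = R_k^{-T} b
        s2 = A[k, k] - r @ r                # s^2 = d - b^T A_k^{-1} b > 0 for PD A
        R[:k, k] = r
        R[k, k] = np.sqrt(s2)
    return R

A = np.array([[20.0, 5.0], [5.0, 5.0]])     # PD matrix used in the text
R = cholesky_recursive(A)
print(np.allclose(R.T @ R, A))                   # True
print(np.allclose(R, np.linalg.cholesky(A).T))   # True: unique Cholesky factor
```

The second check reflects the uniqueness result: NumPy's `cholesky` returns the lower factor $\bL=\bR^\top$, and both routines must produce the same positive-diagonal factor.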
\begin{proof}[of Theorem~\ref{theorem:cholesky-factor-exist}] We will prove by induction that every $n\times n$ positive definite matrix $\bA$ has a decomposition $\bA=\bR^\top\bR$. The $1\times 1$ case is trivial: setting $R=\sqrt{A}$ gives $A=R^2$. Suppose any $k\times k$ PD matrix $\bA_k$ has a Cholesky decomposition. If we prove that any $(k+1)\times(k+1)$ PD matrix $\bA_{k+1}$ can also be factored as this Cholesky decomposition, then we complete the proof. For any $(k+1)\times(k+1)$ PD matrix $\bA_{k+1}$, write out $\bA_{k+1}$ as $$ \bA_{k+1} = \begin{bmatrix} \bA_k & \bb \\ \bb^\top & d \end{bmatrix}. $$ We note that $\bA_k$ is PD. From the inductive hypothesis, it admits a Cholesky decomposition given by $\bA_k = \bR_k^\top\bR_k$. We can construct the upper triangular matrix $$ \bR_{k+1}=\begin{bmatrix} \bR_k & \br\\ \bzero^\top & s \end{bmatrix}, $$ so that $$ \bR_{k+1}^\top\bR_{k+1} = \begin{bmatrix} \bR_k^\top\bR_k & \bR_k^\top \br\\ \br^\top \bR_k & \br^\top\br+s^2 \end{bmatrix}. $$ Therefore, if we can prove $\bR_{k+1}^\top \bR_{k+1} = \bA_{k+1}$ is the Cholesky decomposition of $\bA_{k+1}$ (which requires the value $s$ to be positive), then we complete the proof. That is, we need to prove $$ \begin{aligned} \bb &= \bR_k^\top \br, \\ d &= \br^\top\br+s^2. \end{aligned} $$ Since $\bR_k$ is nonsingular, we have a unique solution for $\br$ and $s$: $$ \begin{aligned} \br &= \bR_k^{-\top}\bb, \\ s &= \sqrt{d - \br^\top\br} = \sqrt{d - \bb^\top\bA_k^{-1}\bb}, \end{aligned} $$ where we take the nonnegative square root. However, we need to further prove that $s$ is not only nonnegative, but also positive.
Since $\bA_k$ is PD, from Sylvester's criterion (see \citet{lu2022matrix}), and the fact that if matrix $\bM$ has a block formulation $\bM=\begin{bmatrix} \bA & \bB \\ \bC & \bD \end{bmatrix}$ with $\bA$ nonsingular, then $\det(\bM) = \det(\bA)\det(\bD-\bC\bA^{-1}\bB)$, we have $$ \det(\bA_{k+1}) = \det(\bA_k)\det(d- \bb^\top\bA_k^{-1}\bb) = \det(\bA_k)(d- \bb^\top\bA_k^{-1}\bb)>0. $$ Because $ \det(\bA_k)>0$, we obtain $(d- \bb^\top\bA_k^{-1}\bb)>0$, which implies $s>0$. This completes the proof. \end{proof} \index{Uniqueness} \begin{corollary}[Uniqueness of Cholesky Decomposition\index{Uniqueness}]\label{corollary:unique-cholesky-main} The Cholesky decomposition $\bA=\bR^\top\bR$ of any positive definite matrix $\bA\in \real^{n\times n}$ is unique. \end{corollary} \begin{proof}[of Corollary~\ref{corollary:unique-cholesky-main}] Suppose, for contradiction, that the Cholesky decomposition is not unique; then we can find two decompositions such that $\bA=\bR_1^\top\bR_1 = \bR_2^\top\bR_2$, which implies $$ \bR_1\bR_2^{-1}= \bR_1^{-\top} \bR_2^\top. $$ From the fact that the inverse of an upper triangular matrix is also an upper triangular matrix, and the product of two upper triangular matrices is also an upper triangular matrix, \footnote{The same holds for lower triangular matrices: the inverse of a lower triangular matrix is also a lower triangular matrix, and the product of two lower triangular matrices is also a lower triangular matrix.} we realize that the left-hand side of the above equation is an upper triangular matrix, while the right-hand side is a lower triangular matrix. This implies $\bR_1\bR_2^{-1}= \bR_1^{-\top} \bR_2^\top$ is a diagonal matrix, and $\bR_1^{-\top} \bR_2^\top= (\bR_1^{-\top} \bR_2^\top)^\top = \bR_2\bR_1^{-1}$. Let $\bLambda = \bR_1\bR_2^{-1}= \bR_2\bR_1^{-1}$ be this diagonal matrix. We notice that each diagonal value of $\bLambda$ is the product of the corresponding diagonal values of $\bR_1$ and $\bR_2^{-1}$ (or $\bR_2$ and $\bR_1^{-1}$).
That is, for $$ \bR_1=\begin{bmatrix} r_{11} & r_{12} & \ldots & r_{1n} \\ 0 & r_{22} & \ldots & r_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \ldots & r_{nn} \end{bmatrix}, \qquad \bR_2= \begin{bmatrix} s_{11} & s_{12} & \ldots & s_{1n} \\ 0 & s_{22} & \ldots & s_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \ldots & s_{nn} \end{bmatrix}, $$ we have $$ \begin{aligned} \bR_1\bR_2^{-1}= \begin{bmatrix} \frac{r_{11}}{s_{11}} & 0 & \ldots & 0 \\ 0 & \frac{r_{22}}{s_{22}} & \ldots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \ldots & \frac{r_{nn}}{s_{nn}} \end{bmatrix} = \begin{bmatrix} \frac{s_{11}}{r_{11}} & 0 & \ldots & 0 \\ 0 & \frac{s_{22}}{r_{22}} & \ldots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \ldots & \frac{s_{nn}}{r_{nn}} \end{bmatrix} =\bR_2\bR_1^{-1}. \end{aligned} $$ The equality above gives $\frac{r_{ii}}{s_{ii}}=\frac{s_{ii}}{r_{ii}}$, i.e., $r_{ii}^2=s_{ii}^2$ for all $i$; since both $\bR_1$ and $\bR_2$ have positive diagonals, this implies $r_{11}=s_{11}, r_{22}=s_{22}, \ldots, r_{nn}=s_{nn}$, and hence $\bLambda = \bR_1\bR_2^{-1}= \bR_2\bR_1^{-1} =\bI$. That is, $\bR_1=\bR_2$, which leads to a contradiction. The Cholesky decomposition is thus unique. \end{proof} \section{Eigenvalue Decomposition}\label{appendix:eigendecomp} \index{Eigenvalue decomposition} \begin{theoremHigh}[Eigenvalue Decomposition]\label{theorem:eigenvalue-decomposition} Any square matrix $\bA\in \real^{n\times n}$ with linearly independent eigenvectors can be factored as $$ \bA = \bX\bLambda\bX^{-1}, $$ where $\bX\in\real^{n\times n}$ contains the eigenvectors of $\bA$ as its columns, and $\bLambda\in\real^{n\times n}$ is a diagonal matrix $\diag(\lambda_1, $ $\lambda_2, \ldots, \lambda_n)$ with $\lambda_1, \lambda_2, \ldots, \lambda_n$ denoting the eigenvalues of $\bA$. \end{theoremHigh} Eigenvalue decomposition is also known as diagonalization of the matrix $\bA$. When no eigenvalues of $\bA$ are repeated, the corresponding eigenvectors are guaranteed to be linearly independent, and $\bA$ can then be diagonalized.
It is essential to emphasize that without a set of $n$ linearly independent eigenvectors, diagonalization is not feasible. \begin{proof}[of Theorem~\ref{theorem:eigenvalue-decomposition}] Let $\bX=[\bx_1, \bx_2, \ldots, \bx_n]$ collect the linearly independent eigenvectors of $\bA$ as its columns. Clearly, we have $$ \bA\bx_1=\lambda_1\bx_1,\qquad \bA\bx_2=\lambda_2\bx_2, \qquad \ldots, \qquad\bA\bx_n=\lambda_n\bx_n. $$ In matrix form, we can express this as: $$ \bA\bX = [\bA\bx_1, \bA\bx_2, \ldots, \bA\bx_n] = [\lambda_1\bx_1, \lambda_2\bx_2, \ldots, \lambda_n\bx_n] = \bX\bLambda. $$ Since the eigenvectors are assumed to be linearly independent, $\bX$ has full rank and is invertible. We obtain $$ \bA = \bX\bLambda \bX^{-1}. $$ This completes the proof. \end{proof} We will discuss some similar forms of eigenvalue decomposition in the spectral decomposition section, where the matrix $\bA$ is required to be symmetric, and the factor $\bX$ is not only nonsingular but also orthogonal. Alternatively, the matrix $\bA$ is required to be a \textit{simple matrix}, that is, the algebraic multiplicity and geometric multiplicity are the same for $\bA$, and $\bX$ will be a nonsingular matrix that may not contain the eigenvectors of $\bA$. A matrix decomposition in the form $\bA =\bX\bLambda\bX^{-1}$ has the nice property that the $m$-th power can be computed efficiently. \index{$m$-th Power} \begin{remark}[$m$-th Power]\label{remark:power-eigenvalue-decom} The $m$-th power of $\bA$ is $\bA^m = \bX\bLambda^m\bX^{-1}$ if the matrix $\bA$ can be factored as $\bA=\bX\bLambda\bX^{-1}$. \end{remark} We observe that the prerequisite for the existence of the eigenvalue decomposition is the linear independence of the eigenvectors of $\bA$. This condition is inherently fulfilled under specific circumstances.
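Remark~\ref{remark:power-eigenvalue-decom} can be checked numerically; the matrix below is an illustrative choice with distinct (hence real, for this example) eigenvalues 5 and 2.

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])    # distinct eigenvalues -> diagonalizable
lam, X = np.linalg.eig(A)                 # columns of X are eigenvectors: A X = X diag(lam)
m = 5
# A^m = X Lambda^m X^{-1}: only the n diagonal entries are raised to the power m.
Am = X @ np.diag(lam ** m) @ np.linalg.inv(X)
print(np.allclose(Am, np.linalg.matrix_power(A, m)))  # True
```

The saving is that $\bLambda^m$ costs $O(n)$ scalar powers instead of $m-1$ full matrix multiplications.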
\begin{lemma}[Different Eigenvalues]\label{lemma:diff-eigenvec-decompo} Suppose the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of $\bA\in \real^{n\times n}$ are all distinct. Then the associated eigenvectors are linearly independent; put differently, any square matrix with distinct eigenvalues can be diagonalized. \end{lemma} \begin{proof}[of Lemma~\ref{lemma:diff-eigenvec-decompo}] Suppose the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ are all distinct, and suppose, for contradiction, that the eigenvectors $\bx_1,\bx_2, \ldots, \bx_n$ are linearly dependent. By considering a minimal linearly dependent subset and relabeling if necessary, we may assume that $\bx_1, \ldots, \bx_{n-1}$ are linearly independent and that there exists a nonzero vector $\bc = [c_1,c_2,\ldots,c_{n-1}]^\top$ satisfying $$ \bx_n = \sum_{i=1}^{n-1} c_i\bx_{i}. $$ Then we have $$ \begin{aligned} \bA \bx_n &= \bA (\sum_{i=1}^{n-1} c_i\bx_{i}) \\ &=c_1\lambda_1 \bx_1 + c_2\lambda_2 \bx_2 + \ldots + c_{n-1}\lambda_{n-1}\bx_{n-1}, \end{aligned} $$ and $$ \begin{aligned} \bA \bx_n &= \lambda_n\bx_n\\ &=\lambda_n (c_1\bx_1 +c_2\bx_2+\ldots +c_{n-1} \bx_{n-1}). \end{aligned} $$ Combining the two equations above, we have $$ \sum_{i=1}^{n-1} (\lambda_n - \lambda_i)c_i \bx_i = \bzero . $$ Since $\lambda_n \neq \lambda_i$ for all $i\in \{1,2,\ldots,n-1\}$ and at least one $c_i$ is nonzero, this contradicts the linear independence of $\bx_1, \ldots, \bx_{n-1}$, from which the result follows. \end{proof} \section{Spectral Decomposition}\label{appendix:spectraldecomp}\label{section:existence-of-spectral} \index{Spectral decomposition} \begin{theoremHigh}[Spectral Decomposition]\label{theorem:spectral_theorem} A real matrix $\bA \in \real^{n\times n}$ is symmetric if and only if there exists an orthogonal matrix $\bQ$ and a diagonal matrix $\bLambda$ such that \begin{equation*} \bA = \bQ \bLambda \bQ^\top, \end{equation*} where the columns of $\bQ = [\bq_1, \bq_2, \ldots, \bq_n]$ are eigenvectors of $\bA$ and are mutually orthonormal, and the entries of $\bLambda=\diag(\lambda_1, \lambda_2, \ldots, \lambda_n)$ are the corresponding eigenvalues of $\bA$, which are real. Moreover, the rank of $\bA$ is equal to the number of nonzero eigenvalues.
This is known as the \textit{spectral decomposition} or \textit{spectral theorem} for real symmetric matrix $\bA$. Specifically, we have the following properties: \begin{enumerate} \item A symmetric matrix has only \textbf{real eigenvalues}; \item The eigenvectors are orthogonal such that they can be chosen \textbf{orthonormal} by normalization; \item The rank of $\bA$ is equal to the number of nonzero eigenvalues; \item If the eigenvalues are distinct, the eigenvectors are linearly independent. \end{enumerate} \end{theoremHigh} We prove the theorem in several steps. \begin{tcolorbox}[title={Symmetric Matrix Property 1 of 4},colback=\mdframecolorTheorem] \begin{lemma}[Real Eigenvalues]\label{lemma:real-eigenvalues-spectral} The eigenvalues of any symmetric matrix are all real. \end{lemma} \end{tcolorbox} \begin{proof}[of Lemma~\ref{lemma:real-eigenvalues-spectral}] Suppose eigenvalue $\lambda$ of a symmetric matrix $\bA$ is a complex number $\lambda=a+ib$, where $a,b$ are real. Its complex conjugate is $\bar{\lambda}=a-ib$. The same applies to the corresponding complex eigenvector $\bx = \bc+i\bd$ and its complex conjugate $\bar{\bx}=\bc-i\bd$, where $\bc, \bd$ are real vectors. We then have the following chain of implications (using that $\bA$ is real and symmetric): $$ \bA \bx = \lambda \bx\qquad \underrightarrow{\text{ leads to }}\qquad \bA \bar{\bx} = \bar{\lambda} \bar{\bx}\qquad \underrightarrow{\text{ transpose to }}\qquad \bar{\bx}^\top \bA =\bar{\lambda} \bar{\bx}^\top. $$ We take the dot product of the first equation with $\bar{\bx}$ and the last equation with $\bx$: $$ \bar{\bx}^\top \bA \bx = \lambda \bar{\bx}^\top \bx, \qquad \text{and } \qquad \bar{\bx}^\top \bA \bx = \bar{\lambda}\bar{\bx}^\top \bx. $$ Then we have the equality $\lambda\bar{\bx}^\top \bx = \bar{\lambda} \bar{\bx}^\top\bx$.
Since $\bar{\bx}^\top\bx = (\bc-i\bd)^\top(\bc+i\bd) = \bc^\top\bc+\bd^\top\bd$ is real and positive (as $\bx\neq\bzero$), it follows that $\lambda=\bar{\lambda}$; therefore, the imaginary part of $\lambda$ is zero and $\lambda$ is real. \end{proof} \begin{tcolorbox}[title={Symmetric Matrix Property 2 of 4},colback=\mdframecolorTheorem] \begin{lemma}[Orthogonal Eigenvectors]\label{lemma:orthogonal-eigenvectors} The eigenvectors corresponding to distinct eigenvalues of any symmetric matrix are orthogonal. Therefore, we can normalize eigenvectors to make them orthonormal since $\bA\bx = \lambda \bx \underrightarrow{\text{ leads to } } \bA\frac{\bx}{\norm{\bx}} = \lambda \frac{\bx}{\norm{\bx}}$, which corresponds to the same eigenvalue. \end{lemma} \end{tcolorbox} \begin{proof}[of Lemma~\ref{lemma:orthogonal-eigenvectors}] Suppose eigenvalues $\lambda_1, \lambda_2$ correspond to eigenvectors $\bx_1, \bx_2$, satisfying $\bA\bx_1=\lambda_1 \bx_1$ and $\bA\bx_2 = \lambda_2\bx_2$. We have the following equality (using the symmetry of $\bA$): $$ \bA\bx_1=\lambda_1 \bx_1 \leadto \bx_1^\top \bA =\lambda_1 \bx_1^\top \leadto \bx_1^\top \bA \bx_2 =\lambda_1 \bx_1^\top\bx_2, $$ and $$ \bA\bx_2 = \lambda_2\bx_2 \leadto \bx_1^\top\bA\bx_2 = \lambda_2\bx_1^\top\bx_2, $$ which implies $\lambda_1 \bx_1^\top\bx_2=\lambda_2\bx_1^\top\bx_2$. Since $\lambda_1\neq \lambda_2$, we must have $\bx_1^\top\bx_2=0$; that is, the eigenvectors are orthogonal. \end{proof} In Lemma~\ref{lemma:orthogonal-eigenvectors} above, we establish the orthogonality of eigenvectors associated with distinct eigenvalues of symmetric matrices. More generally, we prove the important theorem that eigenvectors corresponding to distinct eigenvalues of any matrix are linearly independent. \begin{theorem}[Independent Eigenvector Theorem]\label{theorem:independent-eigenvector-theorem} If a matrix $\bA\in \real^{n\times n}$ has $k$ distinct eigenvalues, then any set of $k$ corresponding (nonzero) eigenvectors is linearly independent. \end{theorem} \begin{proof}[of Theorem~\ref{theorem:independent-eigenvector-theorem}] We will prove by induction.
Firstly, we will prove that any two eigenvectors corresponding to distinct eigenvalues are linearly independent. Suppose $\bv_1$ and $\bv_2$ correspond to distinct eigenvalues $\lambda_1$ and $\lambda_2$, respectively. Suppose further there exists a nonzero vector $\bx=[x_1,x_2] \neq \bzero $ satisfying \begin{equation}\label{equation:independent-eigenvector-eq1} x_1\bv_1+x_2\bv_2=\bzero. \end{equation} That is, $\bv_1,\bv_2$ are linearly dependent. Multiplying Eq.~\eqref{equation:independent-eigenvector-eq1} on the left by $\bA$, we get \begin{equation}\label{equation:independent-eigenvector-eq2} x_1 \lambda_1\bv_1 + x_2\lambda_2\bv_2 = \bzero. \end{equation} Multiplying Eq.~\eqref{equation:independent-eigenvector-eq1} on the left by $\lambda_2$, we get \begin{equation}\label{equation:independent-eigenvector-eq3} x_1\lambda_2\bv_1 + x_2\lambda_2\bv_2 = \bzero. \end{equation} Subtracting Eq.~\eqref{equation:independent-eigenvector-eq2} from Eq.~\eqref{equation:independent-eigenvector-eq3}, we find $$ x_1(\lambda_2-\lambda_1)\bv_1 = \bzero. $$ Since $\lambda_2\neq \lambda_1$ and $\bv_1\neq \bzero$, we must have $x_1=0$. From Eq.~\eqref{equation:independent-eigenvector-eq1} and $\bv_2\neq \bzero$, we must also have $x_2=0$, which contradicts $\bx\neq\bzero$. Thus, $\bv_1$ and $\bv_2$ are linearly independent. Now suppose any $j<k$ eigenvectors corresponding to distinct eigenvalues are linearly independent; if we can prove that any $j+1$ such eigenvectors are also linearly independent, the proof is complete. Suppose, for contradiction, that $\bv_1, \bv_2, \ldots, \bv_j$ are linearly independent while $\bv_{j+1}$ depends on the first $j$ eigenvectors. That is, there exists a nonzero vector $\bx=[x_1,x_2,\ldots, x_{j}]\neq \bzero$ such that \begin{equation}\label{equation:independent-eigenvector-zero} \bv_{j+1}= x_1\bv_1+x_2\bv_2+\ldots+x_j\bv_j . \end{equation} Suppose the $j+1$ eigenvectors correspond to distinct eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_j,\lambda_{j+1}$.
Multiplying Eq.~\eqref{equation:independent-eigenvector-zero} on the left by $\bA$, we get \begin{equation}\label{equation:independent-eigenvector-zero2} \lambda_{j+1} \bv_{j+1} = x_1\lambda_1\bv_1+x_2\lambda_2\bv_2+\ldots+x_j \lambda_j\bv_j . \end{equation} Multiplying Eq.~\eqref{equation:independent-eigenvector-zero} on the left by $\lambda_{j+1}$, we get \begin{equation}\label{equation:independent-eigenvector-zero3} \lambda_{j+1} \bv_{j+1} = x_1\lambda_{j+1}\bv_1+x_2\lambda_{j+1}\bv_2+\ldots+x_j \lambda_{j+1}\bv_j . \end{equation} Subtracting Eq.~\eqref{equation:independent-eigenvector-zero3} from Eq.~\eqref{equation:independent-eigenvector-zero2}, we find $$ x_1(\lambda_{j+1}-\lambda_1)\bv_1+x_2(\lambda_{j+1}-\lambda_2)\bv_2+\ldots+x_j (\lambda_{j+1}-\lambda_j)\bv_j = \bzero. $$ By assumption, $\lambda_{j+1} \neq \lambda_i$ for all $i\in \{1,2,\ldots,j\}$, and $\bv_1, \ldots, \bv_j$ are linearly independent by the induction hypothesis; we must therefore have $x_1=x_2=\ldots=x_j=0$, which contradicts $\bx\neq\bzero$. Then $\bv_1,\bv_2,\ldots,\bv_j,\bv_{j+1}$ are linearly independent. This completes the proof. \end{proof} A direct consequence of the above theorem is as follows: \begin{corollary}[Independent Eigenvector Theorem, CNT.]\label{theorem:independent-eigenvector-theorem-basis} If a matrix $\bA\in \real^{n\times n}$ has $n$ distinct eigenvalues, then any set of $n$ corresponding eigenvectors forms a basis for $\real^n$. \end{corollary} \begin{tcolorbox}[title={Symmetric Matrix Property 3 of 4},colback=\mdframecolorTheorem] \begin{lemma}[Orthonormal Eigenvectors for Duplicate Eigenvalue]\label{lemma:eigen-multiplicity} If $\bA$ has a duplicate eigenvalue $\lambda_i$ with multiplicity $k\geq 2$, then there exist $k$ orthonormal eigenvectors corresponding to $\lambda_i$. \end{lemma} \end{tcolorbox} \begin{proof}[of Lemma~\ref{lemma:eigen-multiplicity}] We note that there is at least one eigenvector $\bx_{i1}$ corresponding to $\lambda_i$.
For such an eigenvector $\bx_{i1}$, we can always find $n-1$ additional orthonormal vectors $\by_2, \by_3, \ldots, \by_n$ so that $\{\bx_{i1}, \by_2, \by_3, \ldots, \by_n\}$ forms an orthonormal basis of $\real^n$. Put the $\by_2, \by_3, \ldots, \by_n$ into matrix $\bY_1$ and $\{\bx_{i1}, \by_2, \by_3, \ldots, \by_n\}$ into matrix $\bP_1$: $$ \bY_1=[\by_2, \by_3, \ldots, \by_n] \qquad \text{and} \qquad \bP_1=[\bx_{i1}, \bY_1]. $$ We then have $$ \bP_1^\top\bA\bP_1 = \begin{bmatrix} \lambda_i &\bzero \\ \bzero & \bY_1^\top \bA\bY_1 \end{bmatrix}. $$ As a result, $\bA$ and $\bP_1^\top\bA\bP_1$ are similar matrices such that they have the same eigenvalues since $\bP_1$ is nonsingular (even orthogonal here). We obtain\footnote{By the fact that if matrix $\bM$ has a block formulation: $\bM=\begin{bmatrix} \bA & \bB \\ \bC & \bD \end{bmatrix}$, then $\det(\bM) = \det(\bA)\det(\bD-\bC\bA^{-1}\bB)$. } $$ \det(\bP_1^\top\bA\bP_1 - \lambda\bI_n) = (\lambda_i - \lambda )\det(\bY_1^\top \bA\bY_1 - \lambda\bI_{n-1}). $$ If $\lambda_i$ has multiplicity $k\geq 2$, then the term $(\lambda_i-\lambda)$ occurs $k$ times in the polynomial from the determinant $\det(\bP_1^\top\bA\bP_1 - \lambda\bI_n)$, i.e., the term occurs $k-1$ times in the polynomial from $\det(\bY_1^\top \bA\bY_1 - \lambda\bI_{n-1})$. In other words, $\det(\bY_1^\top \bA\bY_1 - \lambda_i\bI_{n-1})=0$ and $\lambda_i$ is an eigenvalue of $\bY_1^\top \bA\bY_1$. Let $\bB=\bY_1^\top \bA\bY_1$. Since $\det(\bB-\lambda_i\bI_{n-1})=0$, the null space of $\bB-\lambda_i\bI_{n-1}$ is nontrivial. Let $\bn\neq\bzero$ satisfy $(\bB-\lambda_i\bI_{n-1})\bn = \bzero$, i.e., $\bB\bn=\lambda_i\bn$, so that $\bn$ is an eigenvector of $\bB$. From $ \bP_1^\top\bA\bP_1 = \begin{bmatrix} \lambda_i &\bzero \\ \bzero & \bB \end{bmatrix}, $ we have $ \bA\bP_1 \begin{bmatrix} z \\ \bn \end{bmatrix} = \bP_1 \begin{bmatrix} \lambda_i &\bzero \\ \bzero & \bB \end{bmatrix} \begin{bmatrix} z \\ \bn \end{bmatrix}$, where $z$ is any scalar.
From the left side of this equation, we have \begin{equation}\label{equation:spectral-pro4-right} \begin{aligned} \bA\bP_1 \begin{bmatrix} z \\ \bn \end{bmatrix} &= \begin{bmatrix} \lambda_i\bx_{i1}, \bA\bY_1 \end{bmatrix} \begin{bmatrix} z \\ \bn \end{bmatrix} \\ &=\lambda_iz\bx_{i1} + \bA\bY_1\bn. \end{aligned} \end{equation} And from the right side of the equation, we have \begin{equation}\label{equation:spectral-pro4-left} \begin{aligned} \bP_1 \begin{bmatrix} \lambda_i &\bzero \\ \bzero & \bB \end{bmatrix} \begin{bmatrix} z \\ \bn \end{bmatrix} &= \begin{bmatrix} \bx_{i1} & \bY_1 \end{bmatrix} \begin{bmatrix} \lambda_i &\bzero \\ \bzero & \bB \end{bmatrix} \begin{bmatrix} z \\ \bn \end{bmatrix}\\ &= \begin{bmatrix} \lambda_i\bx_{i1} & \bY_1\bB \end{bmatrix} \begin{bmatrix} z \\ \bn \end{bmatrix}\\ &= \lambda_i z \bx_{i1} + \bY_1\bB \bn \\ &=\lambda_i z \bx_{i1} + \lambda_i \bY_1 \bn. \qquad (\text{Since $\bB \bn=\lambda_i\bn$})\\ \end{aligned} \end{equation} Combining Eq.~\eqref{equation:spectral-pro4-left} and Eq.~\eqref{equation:spectral-pro4-right}, we obtain $$ \bA\bY_1\bn = \lambda_i\bY_1 \bn, $$ which means $\bY_1\bn$ is an eigenvector of $\bA$ corresponding to the eigenvalue $\lambda_i$ (the same eigenvalue as for $\bx_{i1}$). Since $\bY_1\bn$ is a linear combination of $\by_2, \by_3, \ldots, \by_n$, all of which are orthogonal to $\bx_{i1}$, the vector $\bY_1\bn$ is orthogonal to $\bx_{i1}$ and can be normalized to be orthonormal to it. To conclude, if we have one eigenvector $\bx_{i1}$ corresponding to $\lambda_i$ whose multiplicity is $k\geq 2$, we can construct a second eigenvector by choosing one vector from the null space of $(\bB-\lambda_i\bI_{n-1})$, as constructed above. Suppose now that we have constructed the second eigenvector $\bx_{i2}$, which is orthonormal to $\bx_{i1}$.
For such eigenvectors $\bx_{i1}$ and $\bx_{i2}$, we can always find $n-2$ additional orthonormal vectors $\by_3, \by_4, \ldots, \by_n$ so that $\{\bx_{i1},\bx_{i2}, \by_3, \by_4, \ldots, \by_n\}$ forms an orthonormal basis of $\real^n$. Put the $\by_3, \by_4, \ldots, \by_n$ into matrix $\bY_2$ and $\{\bx_{i1},\bx_{i2}, \by_3, \by_4, \ldots, \by_n\}$ into matrix $\bP_2$: $$ \bY_2=[\by_3, \by_4, \ldots, \by_n] \qquad \text{and} \qquad \bP_2=[\bx_{i1}, \bx_{i2},\bY_2]. $$ We then have $$ \bP_2^\top\bA\bP_2 = \begin{bmatrix} \lambda_i & 0 &\bzero \\ 0& \lambda_i &\bzero \\ \bzero & \bzero & \bY_2^\top \bA\bY_2 \end{bmatrix} = \begin{bmatrix} \lambda_i & 0 &\bzero \\ 0& \lambda_i &\bzero \\ \bzero & \bzero & \bC \end{bmatrix}, $$ where $\bC=\bY_2^\top \bA\bY_2$ such that $\det(\bP_2^\top\bA\bP_2 - \lambda\bI_n) = (\lambda_i-\lambda)^2 \det(\bC - \lambda\bI_{n-2})$. If the multiplicity of $\lambda_i$ is $k\geq 3$, then $\det(\bC - \lambda_i\bI_{n-2})=0$ and the null space of $\bC - \lambda_i\bI_{n-2}$ is nontrivial, so that we can still find a nonzero vector $\bn$ from the null space of $\bC - \lambda_i\bI_{n-2}$ with $\bC\bn = \lambda_i \bn$. Now we can construct a vector $\begin{bmatrix} z_1 \\ z_2\\ \bn \end{bmatrix}\in \real^n $, where $z_1, z_2$ are any scalar values, such that $$ \bA\bP_2\begin{bmatrix} z_1 \\ z_2\\ \bn \end{bmatrix} = \bP_2 \begin{bmatrix} \lambda_i & 0 &\bzero \\ 0& \lambda_i &\bzero \\ \bzero & \bzero & \bC \end{bmatrix} \begin{bmatrix} z_1 \\ z_2\\ \bn \end{bmatrix}. $$ Similarly, from the left side of the above equation, we get $\lambda_iz_1\bx_{i1} +\lambda_iz_2\bx_{i2}+\bA\bY_2\bn$; from the right side, we get $\lambda_iz_1\bx_{i1} +\lambda_i z_2\bx_{i2}+\lambda_i\bY_2\bn$. As a result, $$ \bA\bY_2\bn = \lambda_i\bY_2\bn, $$ where $\bY_2\bn$ is an eigenvector of $\bA$ orthogonal to $\bx_{i1}$ and $\bx_{i2}$, and it can be normalized to be orthonormal to the first two.
The process can go on, and finally, we will find $k$ orthonormal eigenvectors corresponding to $\lambda_i$. In fact, the dimension of the null space of $\bP_1^\top\bA\bP_1 -\lambda_i\bI_n$ is equal to the multiplicity $k$. It also follows that if the multiplicity of $\lambda_i$ is $k$, there cannot be more than $k$ orthogonal eigenvectors corresponding to $\lambda_i$; otherwise, we could find more than $n$ mutually orthogonal vectors in $\real^n$, which is a contradiction. \end{proof} The proof of the existence of the spectral decomposition then follows directly from the lemmas above. For any matrix product, the rank of the result is no larger than the ranks of the factors. \begin{lemma}[Rank of $\bA\bB$]\label{lemma:rankAB} For any matrices $\bA\in \real^{m\times n}$ and $\bB\in \real^{n\times k}$, the product $\bA\bB\in \real^{m\times k}$ satisfies $\rank(\bA\bB)\leq\min(\rank(\bA), \rank(\bB))$. \end{lemma} \begin{proof}[of Lemma~\ref{lemma:rankAB}] For the matrix product $\bA\bB$, we have \begin{itemize} \item All rows of $\bA\bB$ are combinations of the rows of $\bB$, so the row space of $\bA\bB$ is a subspace of the row space of $\bB$. Thus, $\rank(\bA\bB)\leq\rank(\bB)$. \item All columns of $\bA\bB$ are combinations of the columns of $\bA$, so the column space of $\bA\bB$ is a subspace of the column space of $\bA$. Thus, $\rank(\bA\bB)\leq\rank(\bA)$. \end{itemize} Therefore, $\rank(\bA\bB)\leq\min(\rank(\bA), \rank(\bB))$. \end{proof} \begin{tcolorbox}[title={Symmetric Matrix Property 4 of 4},colback=\mdframecolorTheorem] \begin{lemma}[Rank of Symmetric Matrices]\label{lemma:rank-of-symmetric} If $\bA$ is an $n\times n$ real symmetric matrix, then rank($\bA$) = the total number of nonzero eigenvalues of $\bA$. In particular, $\bA$ has full rank if and only if $\bA$ is nonsingular.
Further, $\cspace(\bA)$ is the linear space spanned by the eigenvectors of $\bA$ that correspond to nonzero eigenvalues. \end{lemma} \end{tcolorbox} \begin{proof}[of Lemma~\ref{lemma:rank-of-symmetric}] For any symmetric matrix $\bA$, we have $\bA$, in spectral form, as $\bA = \bQ \bLambda\bQ^\top$ and also $\bLambda = \bQ^\top\bA\bQ$. We have shown in Lemma~\ref{lemma:rankAB} that $\rank(\bA\bB)\leq\min(\rank(\bA), \rank(\bB))$. $\bullet$ From $\bA = \bQ \bLambda\bQ^\top$, we have $\rank(\bA) \leq \rank(\bQ \bLambda) \leq \rank(\bLambda)$; $\bullet$ From $\bLambda = \bQ^\top\bA\bQ$, we have $\rank(\bLambda) \leq \rank(\bQ^\top\bA) \leq \rank(\bA)$. The two inequalities above together imply $\rank(\bA) = \rank(\bLambda)$, which is the total number of nonzero eigenvalues. Since $\bA$ is nonsingular if and only if all of its eigenvalues are nonzero, $\bA$ has full rank if and only if $\bA$ is nonsingular. \end{proof} \index{$m$-th Power} Similar to the eigenvalue decomposition, we can compute the $m$-th power of matrix $\bA$ more efficiently via the spectral decomposition. \begin{remark}[$m$-th Power]\label{remark:power-spectral} The $m$-th power of $\bA$ is $\bA^m = \bQ\bLambda^m\bQ^\top$ if the matrix $\bA$ can be factored as the spectral decomposition $\bA=\bQ\bLambda\bQ^\top$. \end{remark}
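Remark~\ref{remark:power-spectral} and Lemma~\ref{lemma:rank-of-symmetric} can be checked numerically with NumPy's symmetric eigensolver \texttt{eigh}; the symmetric rank-2 matrix below is an illustrative choice.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])        # symmetric, eigenvalues {3, 1, 0}, rank 2
lam, Q = np.linalg.eigh(A)             # spectral decomposition A = Q diag(lam) Q^T
print(np.allclose(Q @ np.diag(lam) @ Q.T, A))    # True: reconstruction
print(np.allclose(Q.T @ Q, np.eye(3)))           # True: orthonormal eigenvectors

nonzero = int(np.sum(np.abs(lam) > 1e-10))
print(nonzero == np.linalg.matrix_rank(A))       # True: rank = # nonzero eigenvalues

m = 4
print(np.allclose(Q @ np.diag(lam ** m) @ Q.T,
                  np.linalg.matrix_power(A, m))) # True: A^m = Q Lambda^m Q^T
```

Unlike the general eigendecomposition, $\bQ^{-1}=\bQ^\top$ here, so no matrix inversion is needed when forming $\bA^m$.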
\documentclass{article} \usepackage{PRIMEarxiv} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{hyperref} \usepackage{url} \usepackage{booktabs} \usepackage{amsfonts} \usepackage{nicefrac} \usepackage{microtype} \usepackage{fancyhdr} \usepackage{graphicx} \usepackage{mathtools} \usepackage{amsmath} \usepackage{subcaption} \usepackage[table]{xcolor} \usepackage{latexsym} \usepackage{enumitem} \newtheorem{theorem}{\bf Theorem}[section] \newtheorem{lemma}{\bf Lemma}[section] \newtheorem{condition}{\bf Condition}[section] \newtheorem{corollary}{\bf Corollary}[section] \newtheorem{definition}{\bf Definition}[section] \newtheorem{conjecture}{\bf Conjecture}[section] \newtheorem{assumption}{\bf Assumption}[section] \numberwithin{equation}{section} \def\qed{\hfill $\Box$} \mathtoolsset{showonlyrefs=false} \hypersetup{ colorlinks=true, linkcolor=blue, citecolor=blue } \usepackage[]{enumitem} \setlist[enumerate]{ labelsep=8pt, labelindent=0.5\parindent, itemindent=0pt, leftmargin=*, before=\setlength{\listparindent}{-\leftmargin}, } \pagestyle{fancy} \thispagestyle{empty} \rhead{ \textit{}} \fancyhead[LO]{\textit{Optimal preference satisfaction for conflict-free joint decisions}} \title{Optimal preference satisfaction for conflict-free joint decisions} \author{ Hiroaki Shinkawa$^{1, \dagger}$, Nicolas Chauvet$^1$, Guillaume Bachelier$^2$,\\ André Röhm$^1$, Ryoichi Horisaki$^1$, and Makoto Naruse$^1$\\ $^{1}$Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan\\ $^{2}$ Université Grenoble Alpes, CNRS, Inst. Neel, Grenoble, France } \begin{document} \maketitle \begin{abstract} We all have preferences when multiple choices are available. If we insist on satisfying our preferences only, we may suffer a loss due to conflicts with other people's identical selections. Such a case applies when the choice cannot be divided into multiple pieces due to the intrinsic nature of the resources.
Former studies, such as the top trading cycle, examined how to conduct fair joint decision-making while avoiding decision conflicts from the perspective of game theory when multiple players have their own deterministic preference profiles. However, in reality, probabilistic preferences can naturally appear in relation to the stochastic decision-making of humans. Here, we theoretically derive conflict-free joint decision-making that can satisfy the probabilistic preferences of all individual players. More specifically, we mathematically prove the conditions wherein the deviation of the resultant chance of obtaining each choice from the individual preference profile, which we call the loss, becomes zero, meaning that all players' satisfaction is perfectly appreciated while avoiding decision conflicts. Furthermore, even in situations where zero-loss conflict-free joint decision-making is unachievable, we show how to derive joint decision-making that accomplishes the theoretical minimum loss while ensuring conflict-free choices. Numerical demonstrations are also shown with several benchmarks. \end{abstract} \keywords{joint decision-making \and resource allocation \and game theory \and multi-armed bandit \and optimization} \section{Introduction} Since the industrial revolution, we have witnessed important technological advances, but we are now faced with the finiteness of resources, which leads to the necessity for strategies of competition or sharing for their allocation \cite{hira2017emergence, george2018management}. We need to consider other people's, or any entities', choices along with those of ours. One of the critical issues we need to be concerned about is decision conflicts. For example, suppose a lot of people or devices try to connect to the same mobile network simultaneously. 
In that case, the wireless resources available per person or device will be significantly smaller, resulting in poor communication bandwidth or even zero connectivity due to congestion \cite{liu2007resource}. The example above considers a case where an individual suffers due to a choice conflict with others. Here, the division of resources among entities is allowed. However, in other cases, separable allocation to multiple entities is not permitted. Think of a draft in a professional football league. Each of the clubs has specific players in mind to pick in the draft; however, in principle, only a single club can sign with one player. That is, decision conflicts must be strictly resolved. These examples highlight the importance of accomplishing individual satisfaction while avoiding decision conflicts. Indeed, Shapley and Scarf proposed the Top Trading Cycle (hereinafter called TTC) allocation algorithm \cite{shapley1974cores} to address this problem. A typical example is known as the house allocation problem. Each of the $n$ students ranks the houses they want to live in from a list of $n$ houses. In this situation, TTC allocates one house to each student. The solution given by TTC is known to be game-theoretic core-stable; that is, arbitrary numbers of students can swap houses with each other and still not get a better allocation than the current one. There are many other mechanisms which originate in TTC, including those that allow indifference in preference profiles \cite{alcalde2011exchange, aziz2012housing, saban2013house} and those where agents can be allocated fractions of houses \cite{athanassoglou2011house}. While TTC works nicely when players have deterministic rankings, humans or artificial machines can also have probabilistic preferences. That is, unlike in the house allocation problem, an agent with probabilistic preferences would be unsatisfied by always receiving its top preference.
Rather, over repeated allocations, the distribution of outcomes and their probabilistic preferences should become similar. Such probabilistic preferences can naturally appear in relation to the stochastic decision-making of humans. The multi-armed bandit (MAB) problem \cite{robbins1952some, sutton2018reinforcement}, for example, typically encompasses probabilistic preferences. In the MAB problem, one player repeatedly selects and draws one of several slot machines based on a certain policy. The reward for each slot machine is generated stochastically based on a probability distribution, which the player does not know a priori. The purpose of the MAB problem is to maximize the cumulative reward obtained by the player under such circumstances. First, the player draws all the machines, including those that did not generate many rewards, to accurately estimate the probability distribution of the reward that each machine follows. This aspect is called exploration. On the other hand, after the player is confident in the estimation, the extensive drawing of the highest reward probability machine increases the cumulative reward. This is called exploitation. In order to successfully solve the MAB problem, it is necessary to strike a balance between exploration and exploitation especially in a dynamically changing environment, which is called the exploration-exploitation dilemma \cite{march1991exploration}. We can see an example of probabilistic preferences in the context of the MAB problem. The softmax algorithm, one of the well-known stochastic decision-making methods, is an efficient way to solve this problem \cite{vermorel2005multi}. Specifically, if the empirical reward for each machine $i$ at a certain time $t$ is $\mu_i(t)$, the probability that a player selects each machine $i$ is expressed as \begin{equation} s_i(t) = \frac{e^{\mu_i(t)/\tau}}{\sum\limits_k e^{\mu_k(t)/\tau}}. 
\end{equation} Here, $\tau$ is called the \textit{temperature}, which controls the balance between exploration and exploitation. This means that the player has a probabilistic preference at each time step as to which machine to select and with what probability. Furthermore, we can extend the MAB problem to the case involving multiple players, which is called the competitive multi-armed bandit problem (competitive MAB problem) \cite{lai2010cognitive, kim2016harnessing}. In the competitive MAB problem, when multiple players simultaneously select the same machine and that machine generates a reward, the reward is distributed among the players. The aim is to maximize the collective rewards of the players while ensuring equality among them \cite{deneubourg1989collective}. Decision conflicts inhibit the players from maximizing the total rewards in such situations. Indeed, wireless communications, for example, suffer from such difficulties wherein the simultaneous usage of the same band by multiple devices results in performance degradation for individual devices. In solving competitive MAB problems, Chauvet \textit{et al.} theoretically and experimentally demonstrated the usefulness of quantum entanglement to avoid decision conflicts without direct communication \cite{chauvet2019entangled, chauvet2020entangled}. A powerful aspect of the usage of entanglement is that the optimization of the reward by one player leads to the optimization of the total rewards for the team. This is not easily achievable because normally, if everyone tries to draw the best machine, it leads to decision conflicts and thus diminishes the total rewards in the competitive MAB problem. Moreover, Amakasu \textit{et al.} proposed to utilize quantum interference so that the number of choices can be extended to an arbitrary number while perfectly avoiding decision conflicts \cite{amakasu2021conflict}. A more general example is multi-agent reinforcement learning \cite{tan1993multi, busoniu2008comprehensive}.
In this case, multiple agents learn individually and make probabilistic choices while balancing exploration and exploitation at each time step. Successfully integrating these agents and making cooperative decisions without selection conflicts can help accelerate the learning \cite{mnih2016asynchronous}. With such motivations and backgrounds, what should be clarified is how the probabilistic preferences of individual players can be accommodated while avoiding choice conflicts. More fundamentally, the question is whether it is really possible to satisfy all players' preferences while eliminating decision conflicts. If yes, what is the condition to realize such requirements? This study theoretically clarifies the condition under which joint decision-making probabilities exist such that all players' preference profiles are perfectly satisfied. In other words, we explicitly formulate the condition under which the loss, defined as the deviation of each player's selection preference from the resulting chance of obtaining each choice determined via the joint decision probabilities, becomes zero. Furthermore, we derive the joint decision-making probabilities that minimize the loss even when such a condition is not satisfied. In the present work, we purely focus on the satisfaction of the players' preferences, and the discussion which links to external environments such as rewards is left for future studies. \section{Theory}\label{sec:construction method} \subsection{Problem formulation}\label{subsec:problem} This section introduces the formulation of the problem under study. To begin with, we will examine the cases where the number of players or agents is two; they are called player A and player B. There are $N$ choices for each of them, which are called arms in the literature on the MAB problem, where $N$ is a natural number greater than or equal to 2. Each of players A and B selects a choice probabilistically depending on his or her preference probability.
Here, the preference of player A is represented by \begin{equation} \boldsymbol{A} = \begin{pmatrix}A_1&A_2&\cdots&A_N\end{pmatrix}. \end{equation} Similarly, player B's preference is given by \begin{equation} \boldsymbol{B} = \begin{pmatrix}B_1&B_2&\cdots&B_N\end{pmatrix}. \end{equation} These preferences have the typical properties of probabilities: \begin{equation} A_1+A_2+\cdots+A_N = B_1+B_2+\cdots+B_N = 1, \quad A_i \geq 0, \quad B_i \geq 0\quad (i=1, 2, \ldots, N). \end{equation} The upper side of Figure \ref{fig:problem_settings} schematically illustrates such a situation, where players A and B each have their own preferences in a four-arm case. \begin{figure}[t] \centering \includegraphics[width=12.0cm]{problem_settings.pdf} \caption{Problem settings. The sum of each row should be close to the corresponding preference of player A, and the sum of each column should be close to the corresponding preference of player B.} \label{fig:problem_settings} \end{figure} Now, we introduce the joint selection probability matrix, which is given in the form of \begin{equation} \boldsymbol{P} = \begin{pmatrix} 0&p_{1,2}&\cdots&p_{1,N}\\ p_{2,1}&0&\cdots&p_{2,N}\\ \vdots&\vdots&\ddots&\vdots\\ p_{N,1}&p_{N,2}&\cdots&0 \end{pmatrix}. \end{equation} The non-diagonal element $p_{i,j}$, with $i\neq j$, denotes the probability that player A selects arm $i$ and player B chooses arm $j$. Since we consider non-conflict choices, the diagonal terms are all zero. The summation of $p_{i,j}$ over all $i$s and $j$s is unity, and each $p_{i,j}$ is non-negative: \begin{equation} \sum_{i,j}p_{i,j} = 1, \quad p_{i,j} \geq 0. \end{equation} Our interest is which $\boldsymbol{P}$ meets the preferences of both players A and B. To incorporate such a perspective, we formulate the degree of satisfaction of the players in the following way.
By summing up the $p_{i,j}$s row by row of the joint selection probability matrix, a preference profile given by \begin{equation} \pi_A(i) = \sum_j p_{i,j} \end{equation} is obtained. $\pi_A(i)$ represents the probability of player A selecting arm $i$ as a result of the joint selection probability matrix $\boldsymbol{P}$. We call $\pi_A(i)$ ``the satisfied preference'' (the link with the preference $A_i$ will be discussed hereafter). \begin{equation} \sum_i \pi_A(i) = 1 \end{equation} holds based on the definition of the joint selection probability matrix. Such a structure is illustrated by the red shaded elements in the second row of the joint selection probability matrix in Figure \ref{fig:problem_settings}. Similarly, the satisfied preference along the columns of $\boldsymbol{P}$ \begin{equation} \pi_B(j) = \sum_i p_{i,j} \end{equation} represents the probability of player B selecting arm $j$ as a result of the joint selection probability matrix $\boldsymbol{P}$. Our aim is to find the optimal $p_{i,j}$s wherein \begin{equation} \pi_A(i) \approx A_i,\quad \pi_B(j) \approx B_j \end{equation} hold for all $i=1, 2, \ldots, N$ and $j=1, 2, \ldots, N$; that is, the players' preferences $(A_i, B_j)$ are the same as or close to the satisfied preferences $(\pi_A(i), \pi_B(j))$. To quantify this, we define the loss $L$, akin to a squared $L_2$-norm, as \begin{equation} L = \sum_i \left(\pi_A(i)-A_i\right)^2 + \sum_j \left(\pi_B(j)-B_j\right)^2, \end{equation} which comprises the sum of squares of the gaps between the preferences and the satisfied preferences for the two players. In the following sections, we prove that the loss defined above can be \textit{zero}; in other words, perfect satisfaction for the players can indeed be realized under certain conditions. Furthermore, even in cases when zero loss is not achievable, we can systematically derive the joint selection probability matrix that ensures the minimum loss.
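As an illustration of the definitions above, the satisfied preferences are simply row and column sums of $\boldsymbol{P}$, and the loss $L$ follows directly. The sketch below is plain Python; the helper names (`satisfied_preferences`, `loss`) and the two-arm numbers are ours, for illustration only, not code from this work.

```python
# Sketch: computing the satisfied preferences pi_A, pi_B and the loss L
# from a candidate joint selection probability matrix P (illustration of
# the definitions in the text, not the authors' code).

def satisfied_preferences(P):
    """Row sums give pi_A, column sums give pi_B."""
    N = len(P)
    pi_A = [sum(P[i][j] for j in range(N)) for i in range(N)]
    pi_B = [sum(P[i][j] for i in range(N)) for j in range(N)]
    return pi_A, pi_B

def loss(P, A, B):
    """L = sum_i (pi_A(i)-A_i)^2 + sum_j (pi_B(j)-B_j)^2."""
    pi_A, pi_B = satisfied_preferences(P)
    return (sum((pa - a) ** 2 for pa, a in zip(pi_A, A))
            + sum((pb - b) ** 2 for pb, b in zip(pi_B, B)))

# Illustrative two-arm profile with B = (1 - A_1, A_1): the off-diagonal
# matrix below satisfies both players exactly while avoiding conflicts.
A = [0.3, 0.7]
B = [0.7, 0.3]
P = [[0.0, 0.3],   # A picks arm 1, B picks arm 2
     [0.7, 0.0]]   # A picks arm 2, B picks arm 1
print(loss(P, A, B))  # -> 0.0 (perfect satisfaction, zero conflicts)
```

The zero diagonal encodes the conflict-free constraint; the loss vanishes exactly when the row and column sums reproduce the two preference profiles.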
Here, we define the popularity $S_i$ for each arm and propose three theorems and one conjecture based on the values of $S_i$. \begin{definition} \normalfont The popularity $S_i$ is defined as the sum of the preferences of player A and player B for arm $i$. \begin{equation} S_i \coloneqq A_i + B_i \quad (i=1, 2, \ldots, N). \end{equation} Since the preferences $A_i$ and $B_i$ are probabilities, it holds that \begin{equation} \sum_i S_i = \sum_i A_i + \sum_i B_i = 2. \end{equation} \end{definition} Hereinafter, the minimum loss is denoted by $L_{\text{min}}$. \begin{equation} L_{\text{min}} = \min_{\boldsymbol{P}}\{L\}. \end{equation} Note that the loss function could also be defined in other forms, such as the Kullback-Leibler (KL) divergence, also known as relative entropy \cite{kullback1951information}. Extended discussion of such metrics is left for future studies, but we expect that the fundamental structure of the problem under study will not change significantly. \subsection{Theorem 1}\label{sbsec:theorem1} \subsubsection{Statement} \begin{theorem}\label{thm:0loss_case} Assume that all the popularities $S_i$ are smaller than or equal to 1. Then, it is possible to construct a joint selection probability matrix which makes the loss $L$ equal to zero. \begin{equation} \forall i; S_i \leq 1 \Rightarrow L_{\normalfont\text{min}} = 0. \end{equation} \end{theorem} \subsubsection{Auxiliary lemma} To prove Theorem \ref{thm:0loss_case}, we first prove a lemma in which the problem settings are slightly modified. In the original problem, we treat each player's preferences as probabilities. Here, however, the preferences do not have to be probabilities as long as they are non-negative and the sum of the preferences for each player is given by a constant $T$, which we refer to as the total preference. The preferences are called $\hat{\boldsymbol{A}}$ and $\hat{\boldsymbol{B}}$, respectively.
We will put a hat over each notation to avoid confusion about which definition we are referring to between the original problem and the modified problem. \begin{gather} \hat{\boldsymbol{A}} = \begin{pmatrix}\hat{A}_1&\hat{A}_2&\cdots&\hat{A}_N\end{pmatrix}.\\ \hat{\boldsymbol{B}} = \begin{pmatrix}\hat{B}_1&\hat{B}_2&\cdots&\hat{B}_N\end{pmatrix}.\\ \hat{A}_1+\hat{A}_2+\cdots+\hat{A}_N = \hat{B}_1+\hat{B}_2+\cdots+\hat{B}_N = T, \quad \hat{A}_i \geq 0, \quad \hat{B}_i \geq 0 \quad (i=1, 2, \ldots, N). \label{definition:pref_hat} \end{gather} We still refer to \begin{equation} \hat{\boldsymbol{P}} = \begin{pmatrix} 0&\hat{p}_{1,2}&\cdots&\hat{p}_{1,N}\\ \hat{p}_{2,1}&0&\cdots&\hat{p}_{2,N}\\ \vdots&\vdots&\ddots&\vdots\\ \hat{p}_{N,1}&\hat{p}_{N,2}&\cdots&0 \end{pmatrix} \end{equation} as a joint selection probability matrix even though, technically, each $\hat{p}_{i,j}$ is no longer a probability but the rate at which each joint selection happens, and the sum of all entries is $T$ instead of 1. The other notations are defined similarly to the original problem. \begin{gather} \text{The satisfied preference:}\qquad\hat{\pi}_A(i) = \sum_j \hat{p}_{i,j}, \quad \hat{\pi}_B(j) = \sum_i \hat{p}_{i,j},\\ \text{The loss:}\qquad\hat{L} = \sum_i (\hat{\pi}_A(i)-\hat{A}_i)^2 + \sum_j (\hat{\pi}_B(j)-\hat{B}_j)^2, \quad \hat{L}_{\text{min}} = \min_{\hat{\boldsymbol{P}}}\{\hat{L}\},\\ \text{The popularity:}\qquad\hat{S}_i=\hat{A}_i+\hat{B}_i. \end{gather} \begin{lemma}\label{lemma:0loss_case} Assume that all the popularities $\hat{S}_i$ are smaller than or equal to the total preference $T$. Then, it is possible to construct a joint selection probability matrix which makes the loss $\hat{L}$ equal to 0. \begin{equation} \forall i; \hat{S}_i \leq T \Rightarrow \hat{L}_{\normalfont\text{min}} = 0. \end{equation} \end{lemma} We see that Theorem \ref{thm:0loss_case} is a special case of Lemma \ref{lemma:0loss_case} when $T=1$.
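The hypothesis shared by the lemma and by Theorem 1 (the case $T=1$) is a one-line check on the popularities. The sketch below is ours and purely illustrative; the helper name `zero_loss_guaranteed` and the sample profiles are assumptions, not part of the original text.

```python
# Sketch: checking the hypothesis of the auxiliary lemma / Theorem 1,
# i.e. whether every popularity S_i = A_i + B_i stays at or below the
# total preference T (T = 1 in the original problem).  Illustration only.

def zero_loss_guaranteed(A, B, T=1.0):
    """True iff hat(S)_i = A_i + B_i <= T for every arm i."""
    return all(a + b <= T for a, b in zip(A, B))

# Mildly overlapping preferences: every S_i <= 1, so L_min = 0.
print(zero_loss_guaranteed([0.4, 0.3, 0.3], [0.5, 0.25, 0.25]))  # True
# Both players strongly prefer arm 1: S_1 = 1.6 > 1, so L_min > 0.
print(zero_loss_guaranteed([0.8, 0.1, 0.1], [0.8, 0.1, 0.1]))    # False
```

The second profile falls under Theorem 2 discussed later, where the minimum loss is strictly positive.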
\subsubsection{Outline of the proof} We prove Lemma \ref{lemma:0loss_case} by mathematical induction. In section \ref{sbsbsec:small_arms}, we show that Lemma \ref{lemma:0loss_case} holds for $N=2$ and $N=3$. In sections \ref{sbsbsec:general_arms} and \ref{sbsbsec:construction}, we suppose that Lemma \ref{lemma:0loss_case} has been proven for $N$ arms, and then show that Lemma \ref{lemma:0loss_case} also holds for $N+1$ arms. Specifically, in section \ref{sbsbsec:general_arms}, we assume the existence of $\hat{p}_{i,j}$s which satisfy certain conditions and prove that perfect satisfaction for the players is achievable using these $\hat{p}_{i,j}$s. Then, in section \ref{sbsbsec:construction}, we verify their existence. \subsubsection{When \texorpdfstring{$N=2$}{Lg} and when \texorpdfstring{$N=3$}{Lg}}\label{sbsbsec:small_arms} When $N=2$, \begin{equation} \hat{S}_1 = \hat{S}_2 = T \end{equation} is the only case where the assumption of Lemma \ref{lemma:0loss_case}, that is, $\forall i; \hat{S}_i \leq T$, is fulfilled. Due to the constraint \eqref{definition:pref_hat}, \begin{equation} \hat{A}_2 = T - \hat{A}_1, \quad \hat{B}_1 = T - \hat{A}_1, \quad \hat{B}_2 = \hat{A}_1. \end{equation} In this case, \begin{equation} \hat{\boldsymbol{P}} = \begin{pmatrix}0&\hat{A}_1\\ \hat{A}_2 & 0\end{pmatrix} \end{equation} makes the loss equal to zero. When $N=3$, by setting the values in the joint selection probability matrix as follows, the loss becomes zero. \begin{equation}\label{opt_n_3} \hat{\boldsymbol{P}} = \begin{pmatrix}0&\hat{p}_{1,2}&-\hat{p}_{1,2}+\hat{A}_1\\T-\hat{p}_{1,2}-\hat{A}_3-\hat{B}_3&0&\hat{p}_{1,2}-\hat{A}_1+\hat{B}_3\\\hat{p}_{1,2}+\hat{A}_3-\hat{B}_2&-\hat{p}_{1,2}+\hat{B}_2&0\end{pmatrix}. \end{equation} The satisfied preferences for players A and B via $\hat{\boldsymbol{P}}$ result in $(\hat{A}_1, \hat{A}_2, \hat{A}_3)$ and $(\hat{B}_1, \hat{B}_2, \hat{B}_3)$, which perfectly match the preferences of players A and B, respectively.
However, it should be noted that all the elements in $\hat{\boldsymbol{P}}$ must be non-negative. \begin{equation} \begin{gathered} \hat{p}_{1,2} \geq 0, \quad -\hat{p}_{1,2}+\hat{A}_1 \geq 0, \quad T-\hat{p}_{1,2}-\hat{A}_3-\hat{B}_3 \geq 0,\\ \hat{p}_{1,2}-\hat{A}_1+\hat{B}_3 \geq 0, \quad \hat{p}_{1,2}+\hat{A}_3-\hat{B}_2\geq 0, \quad -\hat{p}_{1,2}+\hat{B}_2 \geq 0. \end{gathered} \end{equation} Summarizing these inequalities, the following inequality should hold: \begin{equation}\label{main_ineq} \max{\{0,\hat{A}_1-\hat{B}_3,-\hat{A}_3+\hat{B}_2\}}\leq \hat{p}_{1,2} \leq \min{\{\hat{A}_1,\hat{B}_2, T-\hat{A}_3-\hat{B}_3\}}. \end{equation} If a non-negative $\hat{p}_{1,2}$ which satisfies \eqref{main_ineq} exists, $\hat{\boldsymbol{P}}$ given in \eqref{opt_n_3} is a valid joint selection probability matrix, which makes the loss zero. In other words, if we can prove that a $\hat{p}_{1,2}$ in the range of \eqref{main_ineq} exists, whatever $\hat{\boldsymbol{A}}$ and $\hat{\boldsymbol{B}}$ are, Lemma \ref{lemma:0loss_case} is proven to hold for $N=3$. To prove that a non-negative $\hat{p}_{1,2}$ which satisfies \eqref{main_ineq} always exists, we consider the following three cases depending on the value of the left-hand side of \eqref{main_ineq}. In each case, we show that all three candidates for the right-hand side of \eqref{main_ineq} are greater than or equal to the left-hand side. \noindent\textbf{[Case 1]} When $\max{\{0,\hat{A}_1-\hat{B}_3,-\hat{A}_3+\hat{B}_2\}} = 0$.\\ $0 \leq \hat{A}_1, \quad 0 \leq \hat{B}_2$ are evident from the definition \eqref{definition:pref_hat}. Also, $0 \leq T-\hat{A}_3-\hat{B}_3$ holds because of the assumption of the lemma $\hat{S}_3 \leq T$.
Thus, \begin{equation}\label{max_case1} 0 \leq \min{\{\hat{A}_1, \hat{B}_2, T-\hat{A}_3-\hat{B}_3\}}.\end{equation} \noindent\textbf{[Case 2]} When $\max{\{0, \hat{A}_1-\hat{B}_3, -\hat{A}_3+\hat{B}_2\}} = \hat{A}_1-\hat{B}_3$.\\ First, the definition \eqref{definition:pref_hat} guarantees the following: \begin{equation}\hat{A}_1-\hat{B}_3\leq \hat{A}_1.\end{equation} Next, using the assumption $\hat{S}_1 \leq T$, \begin{equation}\hat{A}_1-\hat{B}_3 \leq (T-\hat{B}_1)-\hat{B}_3 = \hat{B}_2.\end{equation} Finally, \begin{equation}\hat{A}_1-\hat{B}_3 = (T-\hat{A}_2-\hat{A}_3)-\hat{B}_3 \leq T-\hat{A}_3-\hat{B}_3.\end{equation} Therefore, \begin{equation}\label{max_case2} \hat{A}_1-\hat{B}_3 \leq \min{\{\hat{A}_1,\hat{B}_2, T-\hat{A}_3-\hat{B}_3\}}.\end{equation} \noindent\textbf{[Case 3]} When $\max{\{0, \hat{A}_1-\hat{B}_3, -\hat{A}_3+\hat{B}_2\}} = -\hat{A}_3+\hat{B}_2$.\\ $\hat{S}_2 \leq T$ implies the following. \begin{equation}-\hat{A}_3+\hat{B}_2 \leq -\hat{A}_3+(T-\hat{A}_2) = \hat{A}_1.\end{equation} Also, the definition \eqref{definition:pref_hat} guarantees $-\hat{A}_3+\hat{B}_2 \leq \hat{B}_2$. Finally, \begin{equation}-\hat{A}_3+\hat{B}_2 = -\hat{A}_3+(T-\hat{B}_1-\hat{B}_3) \leq T-\hat{A}_3-\hat{B}_3\end{equation} holds because $\hat{B}_1 \geq 0$. Hence, \begin{equation}\label{max_case3} -\hat{A}_3+\hat{B}_2 \leq \min{\{\hat{A}_1, \hat{B}_2, T-\hat{A}_3-\hat{B}_3\}}.\end{equation} From \eqref{max_case1}, \eqref{max_case2} and \eqref{max_case3}, for any $\hat{\boldsymbol{A}}$ and $\hat{\boldsymbol{B}}$, \begin{equation} \max{\{0,\hat{A}_1-\hat{B}_3,-\hat{A}_3+\hat{B}_2\}}\leq \min{\{\hat{A}_1,\hat{B}_2, T-\hat{A}_3-\hat{B}_3\}}. \end{equation} Therefore, a $\hat{p}_{1,2}$ that satisfies \eqref{main_ineq} exists. Hence, $\hat{\boldsymbol{P}}$ given in \eqref{opt_n_3} is a valid joint selection probability matrix, which makes the loss $\hat{L}$ zero.
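The $N=3$ construction above is fully explicit: pick any $\hat{p}_{1,2}$ inside the interval just derived and fill in the closed-form matrix. The following sketch (plain Python; the helper name `zero_loss_matrix_n3` and the sample profile are ours, for illustration) mirrors that recipe and checks that the row and column sums reproduce the preferences.

```python
# Sketch of the N = 3 construction: choose p12 inside the admissible
# interval, fill the matrix with the closed form from the text, and the
# marginals reproduce the preferences exactly.  Illustrative numbers;
# any profile with S_i <= T behaves the same way.

def zero_loss_matrix_n3(A, B, T=1.0):
    lo = max(0.0, A[0] - B[2], B[1] - A[2])
    hi = min(A[0], B[1], T - A[2] - B[2])
    assert lo <= hi, "requires S_i <= T for every arm"
    p12 = (lo + hi) / 2  # any value in [lo, hi] gives zero loss
    return [[0.0,                   p12,          A[0] - p12],
            [T - p12 - A[2] - B[2], 0.0,          p12 - A[0] + B[2]],
            [p12 + A[2] - B[1],     B[1] - p12,   0.0]]

A = [0.5, 0.3, 0.2]
B = [0.2, 0.3, 0.5]
P = zero_loss_matrix_n3(A, B)
rows = [sum(r) for r in P]                                  # pi_A
cols = [sum(P[i][j] for i in range(3)) for j in range(3)]   # pi_B
print([round(v, 10) for v in rows])  # -> [0.5, 0.3, 0.2]
print([round(v, 10) for v in cols])  # -> [0.2, 0.3, 0.5]
```

All entries of `P` are non-negative because `p12` lies in the interval whose existence is proven by the three cases above.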
\subsubsection{Induction}\label{sbsbsec:general_arms} In this section and the following section, we suppose that Lemma \ref{lemma:0loss_case} has been proven to hold when there are $N$ arms $(N \geq 3)$. Here, we show that Lemma \ref{lemma:0loss_case} also holds in the case of $N+1$ arms. In the following argument, the arm with the lowest $\hat{S}_i$ is denoted by arm $K$. \begin{equation} \min{\{\hat{S}_i\}} = \hat{S}_K. \end{equation} Now, we make the following assumption, which focuses only on the values in the $K$th row or the $K$th column. \begin{assumption}\label{assumption:Kth_arm} There exist $\hat{p}_{K,1}, \hat{p}_{K,2}, \ldots, \hat{p}_{K,N+1}, \hat{p}_{1,K}, \hat{p}_{2,K}, \ldots, \hat{p}_{N+1,K}$ which satisfy all the following conditions \eqref{eq_a}--\eqref{ineq_ab}.\\ The sum of the $K$th row is equal to $\hat{A}_K$. \begin{equation}\label{eq_a}\sum_{j=1}^{N+1} \hat{p}_{K,j} = \hat{A}_K.\end{equation} The sum of the $K$th column is equal to $\hat{B}_K$. \begin{equation}\label{eq_b}\sum_{i=1}^{N+1} \hat{p}_{i,K} = \hat{B}_K.\end{equation} The sum of the $j$th column without the $K$th row is non-negative. \begin{equation}\label{ineq_a}\sum_{i \neq K} \hat{p}_{i,j} \geq 0 \Leftrightarrow \hat{p}_{K,j} \leq \hat{B}_j \quad (j=1, 2, \ldots, K-1, K+1, \ldots, N+1).\end{equation} The sum of the $i$th row without the $K$th column is non-negative. \begin{equation}\label{ineq_b}\sum_{j \neq K} \hat{p}_{i,j} \geq 0 \Leftrightarrow \hat{p}_{i,K} \leq \hat{A}_i \quad (i=1, 2, \ldots, K-1, K+1, \ldots, N+1).\end{equation} In the gray-shaded area of the joint selection probability matrix below, all the remaining popularities are smaller than or equal to the remaining total preference. Note that $\hat{S}_K = \min\{\hat{S}_i\}$.
\begin{equation}\label{ineq_ab}(\hat{A}_i-\hat{p}_{i,K})+(\hat{B}_i-\hat{p}_{K,i}) \leq T-\hat{S}_K \quad (i=1, 2, \ldots, K-1, K+1, \ldots, N+1).\end{equation} \end{assumption} We first suppose that Assumption \ref{assumption:Kth_arm} is valid and show the way to construct a joint selection probability matrix which makes the loss $\hat{L}$ zero using these $\hat{p}_{i,j}$s. Later, we will prove that Assumption \ref{assumption:Kth_arm} is indeed correct. Here, we show that, using the $\hat{p}_{i,j}$s which are assumed to exist in Assumption \ref{assumption:Kth_arm}, we can make the loss $\hat{L}$ equal to zero. From conditions \eqref{eq_a} and \eqref{eq_b}, the contribution of arm $K$ to $\hat{L}$, namely $(\hat{\pi}_A(K)-\hat{A}_K)^2+(\hat{\pi}_B(K)-\hat{B}_K)^2$, is zero. \begin{equation}\label{loss_Kth_arm} (\hat{\pi}_A(K)-\hat{A}_K)^2+(\hat{\pi}_B(K)-\hat{B}_K)^2 = 0. \end{equation} Next, we consider the loss for the remaining part of the joint selection probability matrix in the gray-shaded region described in \eqref{remaining_part}, which is defined by \begin{equation} \hat{L}_\text{rem} = \hat{L} - \{(\hat{\pi}_A(K)-\hat{A}_K)^2+(\hat{\pi}_B(K)-\hat{B}_K)^2\}. \end{equation} \begin{equation}\label{remaining_part} \left(\begin{array}{ccccccc} \fillb 0&\fillb *&\fillb\cdots&\hat{p}_{1,K}&\fillb\cdots&\fillb *&\fillb *\\ \fillb *&\fillb 0&\fillb\cdots&\hat{p}_{2,K}&\fillb\cdots&\fillb *&\fillb *\\ \fillb\vdots&\fillb\vdots&\fillb\ddots&\vdots&\fillb\cdots&\fillb\vdots&\fillb\vdots\\ \hat{p}_{K,1}&\hat{p}_{K,2}&\cdots&0&\cdots&\hat{p}_{K,N}&\hat{p}_{K,N+1}\\ \fillb\vdots&\fillb\vdots&\fillb\cdots&\vdots&\fillb\ddots&\fillb\vdots&\fillb\vdots\\ \fillb *&\fillb *&\fillb\cdots&\hat{p}_{N,K}&\fillb\cdots&\fillb 0&\fillb *\\ \fillb *&\fillb *&\fillb\cdots&\hat{p}_{N+1,K}&\fillb\cdots&\fillb *&\fillb 0\\ \end{array}\right) \end{equation} \begin{table}[t] \centering \caption{The preference setting which the remaining part of the joint selection probability matrix in the gray-shaded region follows.
$\hat{A}_i^* = \hat{A}_i - \hat{p}_{i,K}, \quad \hat{B}_j^* = \hat{B}_j - \hat{p}_{K,j}$.}\label{tb:problem_settings_for_proof} \scalebox{0.9}{ \begin{tabular}{|c||c|c|c|c|c|c|c||c|} \hline Player&Arm 1&Arm 2&$\cdots$&Arm $(K-1)$&Arm $(K+1)$&$\cdots$&Arm $(N+1)$&Sum\\ \hline A&$\hat{A}_1^*$&$\hat{A}_2^*$&$\cdots$&$\hat{A}_{K-1}^*$&$\hat{A}_{K+1}^*$&$\cdots$&$\hat{A}_{N+1}^*$&$T-\hat{S}_K$\\ B&$\hat{B}_1^*$&$\hat{B}_2^*$&$\cdots$&$\hat{B}_{K-1}^*$&$\hat{B}_{K+1}^*$&$\cdots$&$\hat{B}_{N+1}^*$&$T-\hat{S}_K$\\ \hline \end{tabular} } \end{table} $\hat{L}_\text{rem}$ is the loss which corresponds to a problem whose preference setting is described in Table \ref{tb:problem_settings_for_proof}. $\hat{A}_i^*$ and $\hat{B}_j^*$ are defined by \begin{equation}\hat{A}_i^* = \hat{A}_i - \hat{p}_{i,K}, \quad \hat{B}_j^* = \hat{B}_j - \hat{p}_{K,j}.\end{equation} Now, we prove that the optimal $\hat{L}_\text{rem}$ is zero. $\hat{A}_i^*$ and $\hat{B}_j^*$ fulfill the requirements for preferences: they are non-negative because the $\hat{p}_{i,K}$s and $\hat{p}_{K,j}$s follow \eqref{ineq_a} and \eqref{ineq_b}, and the total preferences for both players are the same, namely $T-\hat{S}_K$. In this section, we have supposed that Lemma \ref{lemma:0loss_case} holds when there are $N$ arms, and because of \eqref{ineq_ab}, all the remaining popularities $\hat{S}_i^* \coloneqq \hat{A}_i^*+\hat{B}_i^* = (\hat{A}_i-\hat{p}_{i,K})+(\hat{B}_i-\hat{p}_{K,i})$ are smaller than or equal to the remaining total preference $T-\hat{S}_K$. Thus, Lemma \ref{lemma:0loss_case} in the case of $N$ arms verifies \begin{equation}\label{loss_other_arms} \min\{\hat{L}_\text{rem}\} = 0.
\end{equation} From \eqref{loss_Kth_arm} and \eqref{loss_other_arms}, using $\hat{p}_{K,1}, \hat{p}_{K,2}, \ldots, \hat{p}_{K,N+1}, \hat{p}_{1,K}, \hat{p}_{2,K}, \ldots, \hat{p}_{N+1,K}$ and the optimal $\hat{p}_{i,j}$s for the gray-shaded region, \begin{equation} \hat{L}_{\text{min}} = \min\{\hat{L}_\text{rem}\} + \{(\hat{\pi}_A(K)-\hat{A}_K)^2+(\hat{\pi}_B(K)-\hat{B}_K)^2\} = 0. \end{equation} Therefore, if Assumption \ref{assumption:Kth_arm} is correct, Lemma \ref{lemma:0loss_case} is proven to hold for $N+1$ arms. \subsubsection{Verification of Assumption \ref{assumption:Kth_arm}}\label{sbsbsec:construction} In this section, we prove that Assumption \ref{assumption:Kth_arm} is indeed correct. Let us call the arm with the highest $\hat{S}_i$ arm $V$ ($V \neq K$). \begin{equation} \max \{\hat{S}_i\} = \hat{S}_V \end{equation} First, we prove by contradiction that there is at most one arm which violates \begin{equation}\label{already_ineq_ab} \hat{S}_i \leq T - \hat{S}_K, \end{equation} and if there is such an arm, arm $V$ is the one that breaks \eqref{already_ineq_ab}. Note that all the other arms, which satisfy \eqref{already_ineq_ab}, follow \eqref{ineq_ab}. We assume that there are two arms which do not satisfy \eqref{already_ineq_ab} (we call them arm $V_1$ and arm $V_2$), and we will prove that this assumption leads to a contradiction. \begin{equation} \hat{S}_{V_1} > T - \hat{S}_K, \quad \hat{S}_{V_2} > T - \hat{S}_K. \end{equation} Adding each side, \begin{equation}\label{ineq_s} \hat{S}_{V_1} + \hat{S}_{V_2} > 2T - 2\hat{S}_K \Leftrightarrow 2T < \hat{S}_{V_1}+\hat{S}_{V_2}+2\hat{S}_K. \end{equation} Since $\min{\{\hat{S}_i\}}=\hat{S}_K$ and $N\geq 3$, \begin{equation} \hat{S}_{V_1}+\hat{S}_{V_2}+2\hat{S}_K \leq \sum_{i=1}^{N+1} \hat{S}_i = 2T. \end{equation} Together with \eqref{ineq_s}, we obtain a contradiction \begin{equation} 2T < 2T.
\end{equation} Therefore, it is proven, by contradiction, that there is at most one arm which violates \eqref{already_ineq_ab}. Such an arm follows \begin{equation} \hat{S}_i > T - \hat{S}_K, \end{equation} thus it is the one that has the highest $\hat{S}_{i}$, which we call arm $V$. Now, we can systematically determine the $\hat{p}_{i,j}$s in the $K$th row or the $K$th column to show that Assumption \ref{assumption:Kth_arm} is correct; that is, these $\hat{p}_{i,j}$s fulfill the conditions \eqref{eq_a}--\eqref{ineq_ab}. We cannot arbitrarily fill in the values in the $K$th row or the $K$th column so that their sums are $\hat{A}_K$ and $\hat{B}_K$, respectively, but have to take the other arms' preferences into consideration. \eqref{ineq_a}, \eqref{ineq_b} and \eqref{ineq_ab} give us bounds within which the $\hat{p}_{K,j}$s and $\hat{p}_{i,K}$s must lie. Not only must we ensure that $\hat{p}_{K,j}$ and $\hat{p}_{i,K}$ do not exceed these bounds, but we must also ensure that sufficient probability is assigned to the most popular arm $V$, because otherwise the left-hand side of \eqref{ineq_ab} can sometimes be greater than the right-hand side for arm $V$. We consider the following three cases depending on the size relations among $\hat{A}_K, \hat{B}_V, \hat{B}_K, \hat{A}_V$, and for each case, it is possible to construct the optimal $\hat{p}_{i,j}$s that satisfy all the conditions \eqref{eq_a}--\eqref{ineq_ab}. Note that these three cases are collectively exhaustive. \begin{enumerate}[label=\textbf{[Case \arabic*]}] \item $\hat{A}_K \leq \hat{B}_V$ and $\hat{B}_K \leq \hat{A}_V$ \item $\hat{A}_K > \hat{B}_V$ \item $\hat{B}_K > \hat{A}_V$ \end{enumerate} As a reminder, arm $K$ is the arm with the lowest popularity and arm $V$ is the one with the highest popularity. For detailed construction procedures, see Appendix A.
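The setup of this verification step can be illustrated numerically: locate arm $K$ (minimum popularity) and arm $V$ (maximum popularity), check that at most one arm violates the bound, and classify which of the three cases applies. The helper `analyse` and the four-arm numbers below are ours, for illustration only; the actual row/column construction is in Appendix A.

```python
# Sketch illustrating the setup above: arm K has the smallest popularity,
# arm V the largest, at most one arm can violate S_i <= T - S_K, and the
# case split determines how the K-th row/column is filled (Appendix A).

def analyse(A, B, T=1.0):
    S = [a + b for a, b in zip(A, B)]
    K = min(range(len(S)), key=lambda i: S[i])
    V = max(range(len(S)), key=lambda i: S[i])
    violators = [i for i in range(len(S)) if S[i] > T - S[K]]
    if A[K] <= B[V] and B[K] <= A[V]:
        case = 1
    elif A[K] > B[V]:
        case = 2
    else:            # B[K] > A[V]; cases 2 and 3 cannot hold together
        case = 3
    return K, V, violators, case

A = [0.1, 0.2, 0.3, 0.4]
B = [0.1, 0.3, 0.3, 0.3]
K, V, violators, case = analyse(A, B)
print(K, V, violators, case)
```

For this profile the popularities are $(0.2, 0.5, 0.6, 0.7)$, so arm 1 is $K$, arm 4 is $V$, no arm violates the bound, and Case 1 applies.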
With the above three cases, it is proven that $\hat{p}_{K,1}, \hat{p}_{K,2}, \ldots, \hat{p}_{K,N+1}, \hat{p}_{1,K}, \hat{p}_{2,K}, \ldots, \hat{p}_{N+1,K}$ which satisfy \eqref{eq_a}--\eqref{ineq_ab} always exist. In other words, Assumption \ref{assumption:Kth_arm} is indeed correct. Therefore, if we assume that Lemma \ref{lemma:0loss_case} has been proven to hold for $N$ arms, then Lemma \ref{lemma:0loss_case} holds when there are $N+1$ arms. Hence, by mathematical induction, Lemma \ref{lemma:0loss_case} holds for any number of arms. \qed \subsection{Theorem 2}\label{sbsec:theorem2} \subsubsection{Statement} Here, we come back to the original problem where the players' preferences are probabilities. Note that the popularity $S_i$ can still be greater than 1 because it is the sum of the preferences of the two players for each arm. \begin{theorem}\label{thm:non0loss_case} If any value of $S_i$ is greater than 1, it is not possible to make the loss $L$ equal to zero.\\ In the case when the $N$th arm is the most popular, that is, $\max{\{S_i\}}=S_N>1$, the minimum loss is \begin{equation} L_{\text{min}} = \frac{N}{2(N-1)}\cdot (S_N-1)^2. \end{equation} The following joint selection probability matrix is one of the matrices which minimize the loss. \begin{equation} \tilde{\boldsymbol{P}} = \begin{pmatrix} 0&0&\cdots&0&A_1+\epsilon\\ 0&0&\cdots&0&A_2+\epsilon\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&0&A_{N-1}+\epsilon\\ B_1+\epsilon&B_2+\epsilon&\cdots&B_{N-1}+\epsilon&0 \end{pmatrix}, \quad \epsilon=\frac{S_N-1}{2(N-1)}. \end{equation} \end{theorem} \subsubsection{Outline of the proof} We can prove that the loss function $L$ is convex by showing that its Hessian is positive semidefinite. Details of the proof can be found in Appendix B.
In section \ref{sbsbsec:KKT}, we rewrite the problem into an optimization problem with an equality constraint and inequality constraints, and derive $\tilde{p}_{i,j}$s which satisfy the Karush-Kuhn-Tucker conditions \cite{karush1939minima, kuhn1951nonlinear} (hereinafter called KKT conditions). In a convex optimization problem, where the objective function and all the constraints are convex, points which satisfy the KKT conditions give us the global optima \cite{boyd2004convex}. Therefore, the optimal joint selection probability matrix consists of the above-mentioned $\tilde{p}_{i,j}$s. \subsubsection{KKT conditions}\label{sbsbsec:KKT} Here, we derive the optimal joint selection probability matrix and calculate the minimum loss. The flattened vector of a joint selection probability matrix is defined as \begin{equation} \boldsymbol{p} = \begin{pmatrix}p_{1,2}&p_{1,3}&\cdots&p_{1,N}&p_{2,1}&p_{2,3}&\cdots&p_{2,N}&\cdots&p_{N,N-1}\end{pmatrix}^T \end{equation} If functions $h$ and $g_{i,j}$s are defined as follows: \begin{equation} h(\boldsymbol{p}) = \sum_{i,j} p_{i,j}-1, \quad g_{i,j}(\boldsymbol{p}) = -p_{i,j}\quad (i\neq j), \end{equation} the problem of minimizing the loss while satisfying the constraints can be written as \begin{equation} \begin{aligned} \min_{\boldsymbol{p}} \quad & L(\boldsymbol{{p}})\\ \textrm{s.t.} \quad & h(\boldsymbol{p}) = 0, \quad g_{i,j}(\boldsymbol{p})\leq 0. \end{aligned} \end{equation} Here, $L(\boldsymbol{p})$ is the loss which corresponds to $\boldsymbol{p}$. Since the objective function and the constraints are all convex, $\tilde{\boldsymbol{p}}$ which satisfies the KKT conditions below gives the global minimum. 
\begin{equation}\label{KKT} \begin{gathered} \nabla L(\tilde{\boldsymbol{p}}) + \sum_{i,j} \lambda_{i,j} \nabla g_{i,j}(\tilde{\boldsymbol{p}}) + \mu \nabla h(\tilde{\boldsymbol{p}}) = \boldsymbol{0},\\ \lambda_{i,j} g_{i,j}(\tilde{\boldsymbol{p}}) = 0,\\ \lambda_{i,j} \geq 0,\\ g_{i,j}(\tilde{\boldsymbol{p}}) \leq 0, \quad h(\tilde{\boldsymbol{p}})=0. \end{gathered} \end{equation} The following parameters satisfy all the conditions described in \eqref{KKT}. \begin{equation} \begin{gathered} \epsilon = \frac{S_N-1}{2(N-1)},\\ \mu = 2(N-2)\epsilon ,\\ \lambda_{i,j} = \left\{ \begin{array}{cl} 2N\epsilon & (i\neq N \land j\neq N) \\ 0 & (otherwise) \end{array} \right.,\\ \tilde{p}_{i,j} = \left\{ \begin{array}{cl} 0 & (i\neq N \land j\neq N) \\ A_i + \epsilon & (j=N)\\ B_j + \epsilon & (i=N) \end{array} \right. \end{gathered} \end{equation} See Appendix C for the verification of each condition. Hence, \begin{equation} \tilde{\boldsymbol{P}} = \begin{pmatrix} 0&0&\cdots&0&A_1+\epsilon\\ 0&0&\cdots&0&A_2+\epsilon\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&0&A_{N-1}+\epsilon\\ B_1+\epsilon&B_2+\epsilon&\cdots&B_{N-1}+\epsilon&0 \end{pmatrix}, \quad \epsilon=\frac{S_N-1}{2(N-1)} \end{equation} gives the global optimum. The minimum loss is \begin{align} \begin{split} L_{\text{min}} &= \sum_i (\pi_A(i)-A_i)^2 + \sum_j (\pi_B(j)-B_j)^2\\ &= \sum_{i=1}^{N-1}(\pi_A(i)-A_i)^2 + \sum_{j=1}^{N-1}(\pi_B(j)-B_j)^2 + (\pi_A(N)-A_N)^2 + (\pi_B(N)-B_N)^2\\ &= \sum_{i=1}^{N-1}\epsilon^2 + \sum_{j=1}^{N-1}\epsilon^2 + \{-(N-1)\epsilon\}^2 + \{-(N-1)\epsilon\}^2\\ &= (N-1)\epsilon^2 + (N-1)\epsilon^2 + (N-1)^2\epsilon^2 + (N-1)^2\epsilon^2\\ &= \frac{N}{2(N-1)}\cdot (S_N-1)^2. \end{split} \end{align} Note that $\tilde{\boldsymbol{P}}$ is one example of the global optima and there could be other matrices which give us the same minimum loss. Therefore, Theorem \ref{thm:non0loss_case} is proven.
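Because the problem is convex, no feasible matrix should attain a loss below $L_{\text{min}}$. The following randomized sanity check (our own script, not part of the proof) samples feasible matrices with non-negative off-diagonal entries summing to 1 and confirms that none beats the bound:

```python
import random

def loss(P, A, B):
    # sum of squared gaps between row/column marginals and the preferences
    N = len(A)
    pi_A = [sum(row) for row in P]
    pi_B = [sum(P[i][j] for i in range(N)) for j in range(N)]
    return (sum((pi_A[i] - A[i]) ** 2 for i in range(N))
            + sum((pi_B[j] - B[j]) ** 2 for j in range(N)))

random.seed(0)
A = B = [0.1, 0.2, 0.7]                                 # S_3 = 1.4 > 1
N = 3
L_min = N / (2 * (N - 1)) * (A[-1] + B[-1] - 1) ** 2    # = 0.12
for _ in range(10_000):
    # random feasible matrix: zero diagonal, renormalized to sum 1
    w = [[0.0 if i == j else random.random() for j in range(N)]
         for i in range(N)]
    t = sum(map(sum, w))
    P = [[x / t for x in row] for row in w]
    assert loss(P, A, B) >= L_min - 1e-9
```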
\qed \subsection{Theorem 3} So far, we have presented two theorems concerning two players and $N$ arms. In this section, we consider the case where $M$ players exist, where $M$ is greater than two. The players are called player A, player B, player C and the like. Moreover, the preference of player $X$ for arm $i$ is denoted as $X^\dagger_i$. Similar to the original problem, \begin{equation} \sum_i X^\dagger_i = 1, \quad X^\dagger_i \geq 0 \end{equation} holds. The sum of the preferences for each arm is denoted by the popularity \begin{equation} S^\dagger_i = A^\dagger_i + B^\dagger_i + C^\dagger_i + \cdots. \end{equation} We define $d_x$ as the arm index which the $x$th player selects. Then, $p^\dagger_{d_1, d_2, \ldots, d_M}$ represents the joint probability that, for every $x$, the $x$th player selects arm $d_x$. The collection of the joint selection probabilities is called the joint selection probability tensor. The satisfied preference, or the resultant selection probability, for the $x$th player consists of the sum of all the probabilities of cases where the $x$th player selects arm $i$: \begin{equation} \pi^\dagger_x(i) = \underbrace{\sum_{\substack{d_1 \notin \{i\}}}\sum_{\substack{d_2 \notin \{i, d_1\}}}\cdots \sum_{\substack{d_M \notin \{i, d_1, d_2, \ldots , d_{M-1}\}}}}_{\text{Summation without the } x \text{th player}} p^\dagger_{d_1, d_2, \ldots, d_x=i , \ldots , d_M}. \end{equation} The loss $L^\dagger$ is defined as the sum of squares of the gaps between the preferences and the satisfied preferences. \begin{equation} L^\dagger = \sum_x \sum_i \left(\pi^\dagger_x(i)-X^\dagger_i\right)^2. \end{equation} Here, the $x$th player is called player $X$. \subsubsection{Statement} \begin{theorem}\label{thm:general_non0loss} Suppose there are $M$ players and $N$ arms. If any of $S^\dagger_i$ is greater than 1, it is not possible to make the loss $L^\dagger$ equal to zero.
\end{theorem} \subsubsection{Proof by contradiction} Suppose, without loss of generality, that $S^\dagger_1 > 1$. We first assume it is possible to make the loss $L^\dagger$ equal to zero, and then we will prove that it leads to a contradiction. If the loss $L^\dagger$ were to be zero, the following equalities would be required. \begin{equation}\label{sumeq_a} A^\dagger_1 = \sum_{\substack{d_2\notin \{1\}}}\sum_{\substack{d_3\notin \{1, d_2\}}} \cdots \sum_{\substack{d_M\notin \{1, d_2, d_3, \ldots , d_{M-1}\}}}p^\dagger_{1, d_2, d_3, \ldots , d_M}. \end{equation} \begin{equation}\label{sumeq_b} B^\dagger_1 = \sum_{\substack{d_1\notin \{1\}}}\sum_{\substack{d_3\notin \{d_1,1\}}} \cdots \sum_{\substack{d_M\notin \{d_1, 1, d_3, \ldots , d_{M-1}\}}} p^\dagger_{d_1, 1, d_3, \ldots , d_M}. \end{equation} \begin{equation}\label{sumeq_c} C^\dagger_1 = \sum_{\substack{d_1\notin \{1\}}}\sum_{\substack{d_2\notin \{d_1,1\}}} \cdots \sum_{\substack{d_M\notin \{d_1, d_2, 1, \ldots , d_{M-1}\}}} p^\dagger_{d_1, d_2, 1, \ldots , d_M}. \end{equation} \begin{equation*} \vdots \end{equation*} The terms which appear on the right-hand side of \eqref{sumeq_a} are not identical to the terms on the right-hand side of \eqref{sumeq_b} because $d_1 \neq 1$ in \eqref{sumeq_b}. Similarly, any term which appears on the right-hand side of \eqref{sumeq_a} is not identical to a term which appears on the right-hand side of any of the equations \eqref{sumeq_b}, \eqref{sumeq_c}, $\ldots$. Therefore, if we take the sum of the right-hand sides of \eqref{sumeq_a}, \eqref{sumeq_b}, \eqref{sumeq_c}, $\ldots$, it should be smaller than or equal to the sum of all the elements in the joint selection probability tensor, which is 1. On the other hand, if we take the sum of the left-hand sides of \eqref{sumeq_a}, \eqref{sumeq_b}, \eqref{sumeq_c}, $\cdots$, we get $A^\dagger_1+B^\dagger_1+C^\dagger_1+\cdots = S^\dagger_1$. Thus, comparing both sides, we get $S^\dagger_1\leq 1$.
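The inequality $S^\dagger_1 \leq 1$ just derived can be illustrated numerically: every conflict-free assignment contains arm 1 at most once, so summing the players' marginals for arm 1 can never exceed 1. A small randomized check (our own sketch; arm 1 corresponds to index 0):

```python
import itertools
import random

random.seed(0)
M, N = 3, 4                                    # three players, four arms
# support of a conflict-free tensor: all players select distinct arms
support = [d for d in itertools.product(range(N), repeat=M)
           if len(set(d)) == M]
for _ in range(1_000):
    w = [random.random() for _ in support]
    t = sum(w)
    probs = dict(zip(support, (x / t for x in w)))
    # sum over players x of P(the x-th player selects arm 1);
    # each tuple in the support contains index 0 at most once
    s1 = sum(p for d, p in probs.items() if 0 in d)
    assert s1 <= 1 + 1e-12
```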
This consequence contradicts the assumption that $S^\dagger_1 > 1$. Hence, by contradiction, Theorem \ref{thm:general_non0loss} holds for any number of players and arms. \qed \subsection{Conjecture} The construction method introduced in Theorem \ref{thm:0loss_case} is expected to be applicable to general $M$ players. Here, we propose a conjecture: \begin{conjecture}\label{thm:general_0loss} Suppose there are $M$ players and $N$ arms. If all values of $S^\dagger_i$ are less than or equal to 1, it is possible to make the loss $L^\dagger$ equal to zero. \end{conjecture} This conjecture appears to be true, but so far, the proof has not been completed in full generality and is left for future studies. \section{Numerical Demonstrations} In this section, we introduce several baseline models that output a joint selection probability matrix for given probabilistic preferences of two players. Then, we show how much the loss can be improved by using the construction method of the optimal joint selection probability matrix introduced in Theorems \ref{thm:0loss_case} and \ref{thm:non0loss_case} (henceforth referred to as ``the optimal satisfaction matrix''). The definitions of the notations, such as the preference and the loss, follow section \ref{subsec:problem}. \subsection{Baselines} \subsubsection{Uniform random} In what we call the ``uniform random'' method, the resulting joint selection probability matrix is such that all elements are equal except for the diagonals, which are filled with zeros. That is to say, decision conflicts never happen, but the selection is determined completely randomly by the two players. If we consider the preference setting shown in Table \ref{tb:pref_example}, the output of this method is \begin{equation} \begin{pmatrix} 0&\frac{1}{6}&\frac{1}{6}\\ \frac{1}{6}&0&\frac{1}{6}\\ \frac{1}{6}&\frac{1}{6}&0 \end{pmatrix}.
\end{equation} \begin{table}[t] \centering \caption{An Example of a preference setting}\label{tb:pref_example} \begin{tabular}{cccc} \hline Player&Arm 1&Arm 2&Arm 3\\ \hline A&0.3&0.25&0.45\\ B&0.5&0.2&0.3\\ \hline \end{tabular} \end{table} \subsubsection{Simultaneous renormalization} In this method, the product of the two players' preferences is considered first. Then, the diagonals, where decision conflicts happen, are modified to zero, and finally, the whole joint selection probability matrix is renormalized so that the sum is 1. The formula is as follows: \begin{equation} \begin{gathered} r_{i,j} = \left\{ \begin{array}{cc} A_i \cdot B_j & (i\neq j) \\ 0 & (i=j) \end{array} \right. ,\\ p_{i,j} = \frac{r_{i,j}}{\sum\limits_{i,j} r_{i,j}}. \end{gathered} \end{equation} The joint selection probability matrix generated by this simultaneous renormalization method for the case in Table \ref{tb:pref_example} is \begin{equation} \begin{pmatrix} 0&0.06&0.09\\ 0.125&0&0.075\\ 0.225&0.09&0 \end{pmatrix}/0.665 = \begin{pmatrix} 0&0.0902&0.1353\\ 0.1880&0&0.1128\\ 0.3383&0.1353&0 \end{pmatrix}. \end{equation} \subsubsection{Random order} In what we call the ``random order'' method, the players first randomly determine in which order they will draw the arms. This is inspired by the random priority mechanism proposed by Abdulkadiro{\u{g}}lu {\it{et al.}} in the literature where preferences are deterministic \cite{abdulkadirouglu1998random}. Here instead, we consider probabilistic preference profiles. Each player selects an arm according to the pre-determined order, but the arms already drawn by the previous players cannot be selected again; thus, the selection probabilities for those arms are set to zero. Under the setting in Table \ref{tb:pref_example}, if we want to calculate $p_{1,2}$, two possible orders are considered. The first case is where player A draws first. In this case, the joint selection probability is \begin{equation} 0.3 \cdot \frac{0.2}{1-0.5} = 0.12.
\end{equation} The other case is where player B draws first, and the probability is \begin{equation} 0.2 \cdot \frac{0.3}{1-0.25} = 0.08. \end{equation} Therefore, by taking the average, \begin{equation} p_{1,2} = \frac{0.12 + 0.08}{2} = 0.1. \end{equation} Similarly, we can calculate all the joint selection probabilities and the resulting matrix is \begin{equation} \begin{pmatrix} 0&0.1&0.1718\\ 0.1674&0&0.1151\\ 0.3214&0.1243&0 \end{pmatrix}. \end{equation} \subsection{Performance comparison} We compare the loss $L$ of uniform random, simultaneous renormalization, random order and the optimal satisfaction matrix. The four preference settings shown below are examined to evaluate the performance of each method. Namely, the degree of satisfaction of the players' preferences is investigated through the comparison of the loss $L$. $c_i$ is used as a normalization term to ensure that the sum of the preferences is 1. \begin{enumerate} \item Arithmetic progression and same preference.\\ $\displaystyle A_1: A_2:\cdots : A_N = B_1:B_2:\cdots :B_N = (1:2:\cdots : N) / c_1, \quad c_1 = \frac{(N+1)N}{2}$. \item Modified geometric progression with common ratio 2 and same preference.\\ $\displaystyle A_1: A_2:\cdots : A_N = B_1:B_2:\cdots :B_N = (1:1:2:\cdots : 2^{N-2}) / c_2, \quad c_2 = 2^{N-1}$. \item Modified geometric progression with common ratio 2 and reversed preference.\\ $\displaystyle A_1: A_2:\cdots : A_N = B_N:B_{N-1}:\cdots :B_1 = (1:1:2:\cdots : 2^{N-2}) / c_3, \quad c_3 = 2^{N-1}$. \item Geometric progression with common ratio 3 and same preference.\\ $\displaystyle A_1: A_2:\cdots : A_N = B_1:B_2:\cdots :B_N = (1:3:\cdots : 3^{N-1}) / c_4, \quad c_4 = \frac{3^N-1}{2}$.
\end{enumerate} Note that in cases (i)--(iii), the optimal satisfaction matrix achieves $L=0$ since $S_i \leq 1$ for all $i$, whereas in case (iv), it is not possible to achieve $L=0$ since \begin{equation} S_N = 2\cdot \frac{3^{N-1}}{\sum_{i=0}^{N-1}3^i} = \frac{4\cdot 3^{N-1}}{3^N-1}>\frac{3^N}{3^N-1}>1. \end{equation} As for $N$, the following numbers are used. \begin{equation} N = 3, 4, 5, \ldots, 50. \end{equation} Figure \ref{fig:loss_comparison} summarizes the loss $L$ achieved by each method as a function of the number of arms $N$. In all cases, the optimal satisfaction matrix performs the best, followed by random order, simultaneous renormalization and uniform random. The result in case (i) shows that the loss decreases as the number of arms rises for all the methods. This trend is due to our choice of the loss being similar to an $L_2$-norm. The absolute value of each preference $A_i, B_i$ becomes minor as the number of arms increases. In the real world, preference ratios such as those in case (ii) are likely to appear more frequently than those in case (i). For uniform random, simultaneous renormalization, and random order, when two players have the same preference, the loss increases with the number of arms. In contrast, the result shows that the optimal satisfaction matrix consistently achieves 0-loss, which underlines the importance of the construction method of the optimal joint selection probability matrix in the real world, where there is a vast number of choices. The result in case (iii) shows that, together with the optimal satisfaction matrix, simultaneous renormalization and random order also have quite good accuracy. This result is due to the fact that the probability of decision conflicts happening is significantly smaller in case (iii), where the players have strong, reversed preferences.
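The three baselines above can be reproduced with a short script (a sketch with our own helper names), checked here against the preference setting of Table \ref{tb:pref_example}:

```python
def uniform_random(A, B):
    # equal off-diagonal entries, zero diagonal
    N = len(A)
    return [[0.0 if i == j else 1 / (N * (N - 1)) for j in range(N)]
            for i in range(N)]

def simultaneous_renormalization(A, B):
    # product of preferences, zeroed diagonal, renormalized to sum 1
    N = len(A)
    r = [[0.0 if i == j else A[i] * B[j] for j in range(N)] for i in range(N)]
    t = sum(map(sum, r))
    return [[x / t for x in row] for row in r]

def random_order(A, B):
    # average over the two drawing orders; the second player renormalizes
    # after removing the arm the first player drew
    N = len(A)
    P = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            if i != j:
                a_first = A[i] * B[j] / (1 - B[i])
                b_first = B[j] * A[i] / (1 - A[j])
                P[i][j] = (a_first + b_first) / 2
    return P

A, B = [0.3, 0.25, 0.45], [0.5, 0.2, 0.3]   # preferences of players A and B
assert abs(random_order(A, B)[0][1] - 0.1) < 1e-9               # p_{1,2}
assert abs(simultaneous_renormalization(A, B)[0][1] - 0.06 / 0.665) < 1e-9
```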
The diagonal terms are renormalized in simultaneous renormalization, which gives perturbations to the other terms in the joint selection probability matrix. In case (iii), these diagonal terms are smaller than in other cases, so the perturbations to the other terms are reduced. In random order, the second player sets his/her preference for the arm drawn by the first player to zero, but in the setup of case (iii), this preference tends to be small because the first player is more likely to select an arm with a higher preference, which is an unfavoured arm for the second player. This means that the second player can select an arm based on a preference that is almost equal to his/her original preference. In case (iv), $S_N$ is greater than 1, so the optimal satisfaction matrix does not achieve 0-loss either. However, around $N=50$, the loss for random order is about 1.2 times smaller than that for simultaneous renormalization, while the optimal loss is almost half the loss for random order. \begin{figure}[t] \centering \includegraphics[width=13.0cm]{result.pdf} \caption{Loss comparison. The $y$ axes are on a log scale for cases (i), (ii) and (iii). Lines for the optimal satisfaction matrix overlap with the $x$ axes for these cases, showing that the loss is zero. (i)~arithmetic progression + same preference, (ii)~modified geometric progression with common ratio 2 + same preference, (iii)~modified geometric progression with common ratio 2 + reversed preference, (iv)~geometric progression with common ratio 3 + same preference. For cases (i)--(iii), the minimum loss is zero, while for case (iv), it is greater than zero.} \label{fig:loss_comparison} \end{figure} \section{Conclusion} In this study, we theoretically examined how to maximize each player's satisfaction by properly designing joint selection probabilities while avoiding decision conflicts when there are multiple players with probabilistic preferences.
In other words, the present study demonstrated how to accomplish conflict-free stochastic decision-making among multiple players wherein each player's preference is highly appreciated. In particular, we clarified the condition under which the optimal joint selection probabilities perfectly eliminate the deviation between the resulting choice selection probabilities and the players' probabilistic preferences in two-player, $N$-choice situations, which leads to what we call zero-loss realizations. Furthermore, even under circumstances wherein zero-loss is unachievable, we showed how to construct what we call the optimal satisfaction matrix, whose joint selection probabilities minimize the loss. Moreover, we generalized the theory to $M$-player situations ($M \geq 3$), and proved the conditions wherein a zero-loss joint selection is impossible. In addition, we numerically demonstrated the impact of the optimal satisfaction matrix by comparing several approaches that are able to provide conflict-free joint decision-making. There are still many interesting future topics, including the mathematical proof of the conjecture shown in this study, which discusses the condition of zero-loss conflict-free joint stochastic selections involving an arbitrary number of players. Detailed analysis of situations where we replace the loss with other metrics such as the KL divergence also remains to be explored. Furthermore, extending the discussions to external environments, not just players' satisfaction, will be important in view of practical applications. This study paves the way toward multi-agent conflict-free stochastic decision-making. \section*{Acknowledgments} This work was supported in part by the CREST project (JPMJCR17N2) funded by the Japan Science and Technology Agency and Grants-in-Aid for Scientific Research (JP20H00233) funded by the Japan Society for the Promotion of Science.
\newpage \bibliographystyle{RS} \bibliography{reference} \newpage \renewcommand{\theequation}{S.\arabic{equation}} \section*{Appendix A: Construction procedures for Assumption 2.1} In the induction step of section \ref{sbsec:theorem1} of the main text, we assumed the existence of $p_{i,j}$s which satisfy certain conditions and proved that the perfect satisfaction is achievable using these $p_{i,j}$s. Here, we prove this assumption; namely, Assumption 2.1 below is indeed true. \setcounter{section}{2} \begin{assumption}\label{assumption:Kth_arm} There exist $\hat{p}_{K,1}, \hat{p}_{K,2}, \ldots, \hat{p}_{K,N+1}, \hat{p}_{1,K}, \hat{p}_{2,K}, \ldots, \hat{p}_{N+1,K}$ which satisfy all the following conditions \eqref{eq_a}--\eqref{ineq_ab}.\\ The sum of the $K$th row is equal to $\hat{A}_K$. \begin{equation}\label{eq_a}\sum_{j=1}^{N+1} \hat{p}_{K,j} = \hat{A}_K.\end{equation} The sum of the $K$th column is equal to $\hat{B}_K$. \begin{equation}\label{eq_b}\sum_{i=1}^{N+1} \hat{p}_{i,K} = \hat{B}_K.\end{equation} The sum of the $j$th column without the $K$th row is non-negative. \begin{equation}\label{ineq_a}\sum_{i \neq K} \hat{p}_{i,j} \geq 0 \Leftrightarrow \hat{p}_{K,j} \leq \hat{B}_j \quad (j \neq K).\end{equation} The sum of the $i$th row without the $K$th column is non-negative. \begin{equation}\label{ineq_b}\sum_{j \neq K} \hat{p}_{i,j} \geq 0 \Leftrightarrow \hat{p}_{i,K} \leq \hat{A}_i \quad (i \neq K).\end{equation} In the gray-shaded area of the joint selection probability matrix below, all the remaining popularities are smaller than or equal to the remaining total preference. Note that $\hat{S}_K = \min\{\hat{S}_i\}$. 
\begin{equation}\label{ineq_ab}(\hat{A}_i-\hat{p}_{i,K})+(\hat{B}_i-\hat{p}_{K,i}) \leq T-\hat{S}_K \quad (i \neq K).\end{equation} \end{assumption} Now, we can systematically determine $\hat{p}_{i,j}$s in the $K$th row or the $K$th column of the joint selection probability matrix to show Assumption \ref{assumption:Kth_arm} is correct; that is, these $\hat{p}_{i,j}$s fulfill the conditions \eqref{eq_a}--\eqref{ineq_ab}. We cannot arbitrarily fill in the values in the $K$th row or the $K$th column so that their sum is $\hat{A}_K$ and $\hat{B}_K$, respectively, but have to take the other arms' preferences into consideration. \eqref{ineq_a}, \eqref{ineq_b} and \eqref{ineq_ab} give us boundaries for $\hat{p}_{K,j}$s and $\hat{p}_{i,K}$s to exist. Not only must we ensure that $\hat{p}_{K,j}$ and $\hat{p}_{i,K}$ do not exceed these boundaries, but we must also ensure that sufficient probability is assigned to the most popular arm $V$, because otherwise the left-hand side of \eqref{ineq_ab} can sometimes be greater than the right-hand side for arm $V$. We consider the following three cases depending on the size relations among $\hat{A}_K, \hat{B}_V, \hat{B}_K, \hat{A}_V$, and for each case, it is possible to construct $\hat{p}_{i,j}$s that satisfy all the conditions \eqref{eq_a}--\eqref{ineq_ab}. Note that these three cases are collectively exhaustive. \begin{enumerate}[label=\textbf{[Case \arabic*]}] \item $\hat{A}_K \leq \hat{B}_V$ and $\hat{B}_K \leq \hat{A}_V$ \item $\hat{A}_K > \hat{B}_V$ \item $\hat{B}_K > \hat{A}_V$ \end{enumerate} As a reminder, arm $K$ is the arm with the lowest popularity and arm $V$ is the one with the highest popularity. All the arms except for arm $V$ satisfy the following condition. \begin{equation}\label{already_ineq_ab} \hat{S}_i \leq T - \hat{S}_K.
\end{equation} \noindent\textbf{[Case 1]} $\hat{A}_K \leq \hat{B}_V$ and $\hat{B}_K \leq \hat{A}_V$ The following $\hat{p}_{i,j}$s on the $K$th row or $K$th column evidently satisfy the conditions \eqref{eq_a} and \eqref{eq_b}: \begin{equation}\label{solution_situation1} \hat{p}_{K,V} = \hat{A}_K, \quad \hat{p}_{V,K}=\hat{B}_K, \quad \hat{p}_{K,j} =0 \quad (j\neq V), \quad \hat{p}_{i,K}=0 \quad (i\neq V). \end{equation} In addition, \eqref{ineq_a} and \eqref{ineq_b} are fulfilled because $\hat{A}_K \leq \hat{B}_V$ and $\hat{B}_K \leq \hat{A}_V$ in this particular case. Moreover, \eqref{ineq_ab} is satisfied since for $i=V$, \begin{equation} \hat{p}_{K,V}+\hat{p}_{V,K}=\hat{A}_K+\hat{B}_K=\hat{S}_K \geq \hat{S}_K-(T-\hat{S}_V) = \hat{S}_V+\hat{S}_K-T, \end{equation} which leads to \begin{equation} (\hat{A}_V-\hat{p}_{V,K}) + (\hat{B}_V-\hat{p}_{K,V}) \leq T-\hat{S}_K. \end{equation} For the other $i$s, \eqref{ineq_ab} follows because they satisfy \eqref{already_ineq_ab} and \eqref{already_ineq_ab} is equivalent to \eqref{ineq_ab} when $\hat{p}_{K,i}=\hat{p}_{i,K}=0$. Hence, $\hat{p}_{i,j}$s described in \eqref{solution_situation1} fulfill all the conditions \eqref{eq_a}--\eqref{ineq_ab} in \textbf{[Case 1]}. \begin{table}[t] \centering \caption{An Example of [Case 1]}\label{tb:case1_example} \begin{tabular}{|c||c|c|c|c||c|} \hline Player&Arm 1&Arm 2&Arm 3&Arm 4&Total preference $T$\\ \hline Player A&0.1&0.2&0.3&0.4&1.0\\ Player B&0.2&0.2&0.1&0.5&1.0\\ \hline\hline Popularity $S$&0.3&0.4&0.4&0.9&2.0\\ \hline \end{tabular} \end{table} Table \ref{tb:case1_example} illustrates an example of such cases where $\hat{A}_K \leq \hat{B}_V$ and $\hat{B}_K \leq \hat{A}_V$. Here, the least popular arm $K$ is arm 1 and the most popular arm $V$ is arm 4.
In this case, the first row and the first column of the joint selection probability matrix should be filled in as follows: \begin{equation} \left(\begin{array}{cccc} 0&0&0&0.1\\ 0&*&*&*\\ 0&*&*&*\\ 0.2&*&*&* \end{array}\right) \end{equation} Then, the remaining gray-shaded region (the entries marked with $*$) should satisfy the preference setting described in Table \ref{tb:case1_remain}. Note that each popularity is less than or equal to the total preference in the remaining part; that is, the assumption of Lemma 2.1 holds for this part. \begin{table}[t] \centering \caption{Preference profile for the remaining part in [Case 1]}\label{tb:case1_remain} \begin{tabular}{|c||c|c|c||c|} \hline Player&Arm 2&Arm 3&Arm 4&Total preference $T$\\ \hline Player A&0.2&0.3&0.2&0.7\\ Player B&0.2&0.1&0.4&0.7\\ \hline\hline Popularity $S$&0.4&0.4&0.6&1.4\\ \hline \end{tabular} \end{table} \noindent\textbf{[Case 2]} $\hat{A}_K > \hat{B}_V$ In this case, from the fact that $\hat{S}_K \leq \hat{S}_V$, it follows that \begin{equation}\label{cond_bk} \hat{B}_K < \hat{A}_V. \end{equation} When $\hat{B}_V+\hat{B}_1<\hat{A}_K$, let $m$ be the arm index which satisfies the following inequality. \begin{equation} \hat{B}_V+\underbrace{\hat{B}_1+\hat{B}_2+\cdots+\hat{B}_{m-1}}_{\text{does not contain } \hat{B}_V \text{ or } \hat{B}_K}< \hat{A}_K \leq \hat{B}_V+\underbrace{\hat{B}_1+\hat{B}_2+\cdots+\hat{B}_{m}}_{\text{does not contain } \hat{B}_V \text{ or } \hat{B}_K}. \end{equation} When $\hat{B}_V+\hat{B}_1 \geq \hat{A}_K$, we define $m=1$. $m$ always exists because \begin{equation} \hat{B}_V+\underbrace{\hat{B}_1+\hat{B}_2+\cdots+\hat{B}_{N+1}}_{\text{does not contain } \hat{B}_V \text{ or } \hat{B}_K} = T-\hat{B}_K \geq \hat{A}_K.
\end{equation} Then, \begin{equation}\label{solution_situation2} \begin{cases} [\text{When }m=1]\\ \hat{p}_{K,V}=\hat{B}_V, \quad \hat{p}_{K,1}=\hat{A}_K-\hat{B}_V, \quad \hat{p}_{K,j}=0 \quad (j \notin \{1, V\}),\\ \hat{p}_{V,K}=\hat{B}_K, \quad \hat{p}_{i,K}=0 \quad (i \neq V)\\ [\text{When }m \geq 2]\\ \hat{p}_{K,K}=0, \quad \hat{p}_{K,V} = \hat{B}_V, \quad \hat{p}_{K,j}=\hat{B}_j \quad (j < m \text{ and } j \notin \{K, V\}),\\ \hat{p}_{K,m}=\hat{A}_K-(\hat{B}_V+\underbrace{\hat{B}_1+\hat{B}_2+\cdots+\hat{B}_{m-1}}_{\text{does not contain } \hat{B}_V \text{ or } \hat{B}_K}), \quad \hat{p}_{K,j}=0 \quad (m < j \text{ and } j \notin \{K, V\}),\\ \hat{p}_{V,K}=\hat{B}_K, \quad \hat{p}_{i,K}=0 \quad (i \neq V) \end{cases} \end{equation} evidently satisfy the conditions \eqref{eq_a} and \eqref{eq_b}. Moreover, $\hat{p}_{i,j}$s in \eqref{solution_situation2} fulfill \eqref{ineq_a} because for $j=V, 1, 2, \ldots, m-1$, \begin{equation} \hat{p}_{K,j} = \hat{B}_j \leq \hat{B}_j, \end{equation} and for $j=m$, the definition of $m$ has the condition $\hat{A}_K \leq \hat{B}_V+\underbrace{\hat{B}_1+\hat{B}_2+\cdots+\hat{B}_{m}}_{\text{does not contain } \hat{B}_V \text{ or } \hat{B}_K}$, which implies \begin{equation} \hat{A}_K-(\hat{B}_V+\underbrace{\hat{B}_1+\hat{B}_2+\cdots+\hat{B}_{m-1}}_{\text{does not contain } \hat{B}_V \text{ or } \hat{B}_K}) \leq \hat{B}_m. \end{equation} For $j=m+1, m+2, \ldots, N+1$, \begin{equation} \hat{p}_{K,j} = 0 \leq \hat{B}_j. \end{equation} Furthermore, with $\hat{p}_{i,j}$s described in \eqref{solution_situation2}, the condition \eqref{ineq_b} is also satisfied since for $i=V$, \eqref{cond_bk} verifies \begin{equation} \hat{p}_{V,K} = \hat{B}_K \leq \hat{A}_K, \end{equation} and for the other $i$s, \begin{equation} \hat{p}_{i, K} = 0 \leq \hat{A}_i. 
\end{equation} Finally, \eqref{ineq_ab} is satisfied because for $i=V$, \begin{align}\label{ineq_ab_situation2} \begin{split} \hat{p}_{K,V}+\hat{p}_{V,K} = \hat{B}_V + \hat{B}_K & \geq \hat{B}_V + \hat{B}_K - (T-\hat{A}_V-\hat{A}_K)\\ &= \hat{S}_V+\hat{S}_K-T. \end{split} \end{align} Note that $T-\hat{A}_V-\hat{A}_K$ is always non-negative because $T \geq \hat{A}_V+\hat{A}_K$. \eqref{ineq_ab_situation2} is equivalent to \begin{equation} (\hat{A}_V-\hat{p}_{V,K})+(\hat{B}_V-\hat{p}_{K,V}) \leq T-\hat{S}_K. \end{equation} For the other $i$s, \eqref{already_ineq_ab} holds, and since $\hat{p}_{i,K} \geq 0$ and $\hat{p}_{K,i} \geq 0$, \eqref{ineq_ab} follows. Therefore, $\hat{p}_{i,j}$s given in \eqref{solution_situation2} satisfy all the conditions \eqref{eq_a}--\eqref{ineq_ab} in \textbf{[Case 2]}. \begin{table}[t] \centering \caption{An Example of [Case 2]}\label{tb:case2_example} \begin{tabular}{|c||c|c|c|c||c|} \hline Player&Arm 1&Arm 2&Arm 3&Arm 4&Total preference $T$\\ \hline Player A&0.25&0.1&0.15&0.5&1.0\\ Player B&0.1&0.35&0.35&0.2&1.0\\ \hline\hline Popularity $S$&0.35&0.45&0.5&0.7&2.0\\ \hline \end{tabular} \end{table} Table \ref{tb:case2_example} shows an example of [Case 2]. Here, the least popular arm $K$ is arm 1 and the most popular arm $V$ is arm 4. In this case, $m=2$ because \begin{equation} \hat{B}_V(=0.2) < \hat{A}_K(=0.25) \leq \hat{B}_V(=0.2) + \hat{B}_2(=0.35). \end{equation} Therefore, the first row and the first column of the joint selection probability matrix should be filled in as follows: \begin{equation} \left(\begin{array}{cccc} 0&0.05&0&0.2\\ 0&*&*&*\\ 0&*&*&*\\ 0.1&*&*&* \end{array}\right) \end{equation} Then, the remaining gray-shaded region (the entries marked with $*$) should satisfy the preference setting described in Table \ref{tb:case2_remain}. Note that each popularity is less than or equal to the total preference in the remaining part; that is, the assumption of Lemma 2.1 holds for this part.
\begin{table}[t] \centering \caption{Preference profile for the remaining part in [Case 2]}\label{tb:case2_remain} \begin{tabular}{|c||c|c|c||c|} \hline Player&Arm 2&Arm 3&Arm 4&Total preference $T$\\ \hline Player A&0.1&0.15&0.4&0.65\\ Player B&0.3&0.35&0.0&0.65\\ \hline\hline Popularity $S$&0.4&0.5&0.4&1.3\\ \hline \end{tabular} \end{table} \noindent\textbf{[Case 3]} $\hat{B}_K > \hat{A}_V$ This case is quite similar to \textbf{[Case 2]}. If we swap $\hat{A}_i$ and $\hat{B}_i$ in the discussion in \textbf{[Case~2]}, we obtain $\hat{p}_{i,j}$s which satisfy all the conditions \eqref{eq_a}--\eqref{ineq_ab}. \section*{Appendix B: Convexity of the loss $L$} When we derived the optimal joint selection probability matrix in section \ref{sbsec:theorem2} of the main text, we used the fact that the loss function $L$ is convex. Here, we prove that the loss function $L$ is indeed convex. \begin{equation} L = \sum_i (\pi_A(i)-A_i)^2 + \sum_j (\pi_B(j)-B_j)^2 \end{equation} We will prove that each $(\pi_A(i)-A_i)^2$ and $(\pi_B(j)-B_j)^2$ is convex; then $L$ is convex because the sum of convex functions is also convex. Now, the first term of $L$ is defined as \begin{equation} L_1 := (\pi_A(1)-A_1)^2 = (p_{1,2}+p_{1,3}+\cdots+p_{1,N}-A_1)^2. \end{equation} The Hessian matrix for $L_1$ in terms of all $p_{i,j}$s $(i\neq j,\ i=1,2,\cdots,N,\ j=1,2,\cdots,N)$ is \begin{equation} H_1 = \begin{pmatrix}D&O\\ O&D_O\end{pmatrix} \end{equation} where \begin{equation} D = \overbrace{\begin{pmatrix}2&2&\cdots&2\\ 2&2&\cdots&2\\ \vdots&\vdots&\ddots&\vdots\\ 2&2&\cdots&2\end{pmatrix}}^{N-1},\quad D_O = \overbrace{\begin{pmatrix}0&0&\cdots&0\\ 0&0&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&0\end{pmatrix}}^{(N-1)^2}. \end{equation} Since this is a block diagonal matrix, the eigenvalues of $H_1$ are the union of the eigenvalues of $D$ and $D_O$.
The eigenvalues of $D_O$ are obviously $\overbrace{0,0,\cdots ,0}^{(N-1)^2}$ and those of $D$ are $2(N-1), \overbrace{0,0,\cdots,0}^{N-2}$. Therefore, all the eigenvalues of $H_1$ are non-negative, which means that $H_1$ is positive semidefinite, and this implies $L_1$ is convex. Similarly, each $(\pi_A(i)-A_i)^2$ and $(\pi_B(j)-B_j)^2$ is proven to be convex because of the symmetry of the loss function, given that $i=1$ is not special among all $i$s. From the above argument, we now know that the loss function $L$ is convex. \section*{Appendix C: Verification of the KKT conditions} In section \ref{sbsbsec:KKT} of the main text, where we derived the point which satisfies all the KKT conditions, we defined new notations and functions. The flattened vector of a joint selection probability matrix is defined as \begin{equation} \boldsymbol{p} = \begin{pmatrix}p_{1,2}&p_{1,3}&\cdots&p_{1,N}&p_{2,1}&p_{2,3}&\cdots&p_{2,N}&\cdots&p_{N,N-1}\end{pmatrix}^T. \end{equation} Functions $h$ and $g_{i,j}$s are defined as follows: \begin{equation} h(\boldsymbol{p}) = \sum_{i,j} p_{i,j}-1, \quad g_{i,j}(\boldsymbol{p}) = -p_{i,j}\quad (i\neq j). \end{equation} Now, the problem can be written as \begin{equation} \begin{aligned} \min_{\boldsymbol{p}} \quad & L(\boldsymbol{{p}})\\ \textrm{s.t.} \quad & h(\boldsymbol{p}) = 0, \quad g_{i,j}(\boldsymbol{p})\leq 0. \end{aligned} \end{equation} $L(\boldsymbol{p})$ is the loss which corresponds to $\boldsymbol{p}$. Since the objective function and the constraints are all convex, $\tilde{\boldsymbol{p}}$ which satisfies the KKT conditions below gives the global minimum.
\begin{gather} \nabla L(\tilde{\boldsymbol{p}}) + \sum_{i,j} \lambda_{i,j} \nabla g_{i,j}(\tilde{\boldsymbol{p}}) + \mu \nabla h(\tilde{\boldsymbol{p}}) = \boldsymbol{0},\label{stationarity}\\ \lambda_{i,j} g_{i,j}(\tilde{\boldsymbol{p}}) = 0,\label{slackness}\\ \lambda_{i,j} \geq 0,\label{dual}\\ g_{i,j}(\tilde{\boldsymbol{p}}) \leq 0, \quad h(\tilde{\boldsymbol{p}})=0.\label{primal} \end{gather} Here, we will show that \begin{gather} \epsilon = \frac{S_N-1}{2(N-1)},\label{opt_eps}\\ \mu = 2(N-2)\epsilon ,\label{opt_mu}\\ \lambda_{i,j} = \left\{ \begin{array}{cl} 2N\epsilon & (i\neq N \text{ and } j\neq N) \\ 0 & (\text{otherwise}) \end{array} \right.,\label{opt_lambda}\\ \tilde{p}_{i,j} = \left\{ \begin{array}{cl} 0 & (i\neq N \text{ and } j\neq N) \\ A_i + \epsilon & (j=N)\\ B_j + \epsilon & (i=N) \end{array} \right.\label{opt_p} \end{gather} satisfy \eqref{stationarity}--\eqref{primal}. \noindent\textbf{[Condition 1] Stationarity \eqref{stationarity}.} In the following argument, we call the component of a vector lying in the same coordinate as $p_{i,j}$ ``the $(i,j)$th element'' of that vector. From the definition of $L$, the $(i,j)$th element of $\nabla L(\boldsymbol{p})$, denoted by $\nabla L(\boldsymbol{p})_{[i,j]}$, is \begin{equation} \nabla L(\boldsymbol{p})_{[i,j]} = 2(\pi_A(i) - A_i) + 2(\pi_B(j) - B_j). \end{equation} With the $\tilde{p}_{i,j}$s described in \eqref{opt_p}, \begin{equation} \left(\nabla L(\tilde{\boldsymbol{p}})\right)_{[i,j]} = \left\{ \begin{array}{cl} 2\epsilon + 2\epsilon = 4\epsilon & (i\neq N \text{ and } j\neq N) \\ -2(N-1)\epsilon + 2\epsilon = -2(N-2)\epsilon & (\text{otherwise}) \end{array} \right. .
\end{equation} Also, with the $\tilde{p}_{i,j}$s and $\lambda_{i,j}$s described in \eqref{opt_lambda} and \eqref{opt_p}, the $(i,j)$th element of $\displaystyle \sum\limits_{i,j} \lambda_{i,j} \nabla g_{i,j}(\tilde{\boldsymbol{p}})$ is \begin{equation} \left(\displaystyle \sum\limits_{i,j} \lambda_{i,j} \nabla g_{i,j}(\tilde{\boldsymbol{p}})\right)_{[i,j]} = \left\{ \begin{array}{cl} -2N\epsilon & (i\neq N \text{ and } j\neq N) \\ 0 & (\text{otherwise}) \end{array} \right. . \end{equation} Moreover, the $(i,j)$th element of $\mu \nabla h(\tilde{\boldsymbol{p}})$ is \begin{equation} \left(\mu \nabla h(\tilde{\boldsymbol{p}})\right)_{[i,j]} = 2(N-2)\epsilon. \end{equation} Indeed, when $i\neq N$ and $j\neq N$ the three contributions sum to $4\epsilon - 2N\epsilon + 2(N-2)\epsilon = 0$, and otherwise they sum to $-2(N-2)\epsilon + 0 + 2(N-2)\epsilon = 0$. Therefore, stationarity is successfully achieved with the $\tilde{p}_{i,j}$s, $\lambda_{i,j}$s and $\mu$ given in \eqref{opt_eps}--\eqref{opt_p}: \begin{equation} \nabla L(\tilde{\boldsymbol{p}}) + \sum_{i,j} \lambda_{i,j} \nabla g_{i,j}(\tilde{\boldsymbol{p}}) + \mu \nabla h(\tilde{\boldsymbol{p}}) = \boldsymbol{0}. \end{equation} \noindent\textbf{[Condition 2] Complementary slackness \eqref{slackness}.} From \eqref{opt_lambda} and \eqref{opt_p}, in both cases (whether or not $i\neq N$ and $j\neq N$), either $\lambda_{i,j} = 0$ or $\tilde{p}_{i,j} = 0$, so \begin{equation} \lambda_{i,j}g_{i,j}(\tilde{\boldsymbol{p}}) = -\lambda_{i,j}\tilde{p}_{i,j} = 0. \end{equation} \noindent\textbf{[Condition 3] Dual feasibility \eqref{dual}.} Since $S_N-1 > 0$, it is evident that $\epsilon$ is positive and so is $2N\epsilon$. Thus, \begin{equation} \lambda_{i,j} \geq 0. \end{equation} \noindent\textbf{[Condition 4] Primal feasibility \eqref{primal}.} Since $\epsilon > 0$, it follows that $A_i+\epsilon$ and $B_j+\epsilon$ are non-negative. Hence, \begin{equation} g_{i,j}(\tilde{\boldsymbol{p}})\leq 0. \end{equation} Also, \begin{align} \begin{split} h(\tilde{\boldsymbol{p}}) &= \sum_{i,j} \tilde{p}_{i,j}-1 \\ &= \sum_{i=1}^{N-1}(A_i+\epsilon) + \sum_{j=1}^{N-1}(B_j+\epsilon)-1\\ &= \sum_{i=1}^{N-1}(A_i+B_i) + 2(N-1)\epsilon -1\\ &= 2-S_N + (S_N-1)-1 = 0.
\end{split} \end{align} Therefore, \begin{equation} h(\tilde{\boldsymbol{p}}) = 0 \end{equation} holds. \end{document}
2205.00749v2
http://arxiv.org/abs/2205.00749v2
Topologizing interpretable groups in $p$-adically closed fields
\documentclass[12pt]{article} \title{Topologizing interpretable groups in $p$-adically closed fields} \author{Will Johnson} \usepackage{amsmath, amssymb, amsthm} \usepackage{fullpage} \usepackage{amscd} \usepackage{hyperref} \usepackage[all]{xy} \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage{centernot} \usepackage{enumitem} \DeclareMathOperator*{\ind}{\raise0.2ex\hbox{\ooalign{\hidewidth$\vert$\hidewidth\cr\raise-0.9ex\hbox{$\smile$}}}} \newcommand{\forkindef}{\forkindep^{\mathrm{def}}} \newcommand{\dimind}{\ind^{\dim}} \newcommand{\pCF}{p\mathrm{CF}} \newcommand{\Ab}{\operatorname{Ab}} \newcommand{\Def}{\operatorname{Def}} \newcommand{\DefSub}{\operatorname{DefSub}} \newcommand{\Abs}{\operatorname{Abs}} \newcommand{\End}{\operatorname{End}} \newcommand{\bdn}{\operatorname{bdn}} \newcommand{\diag}{\operatorname{diag}} \newcommand{\qftp}{\operatorname{qftp}} \newcommand{\rk}{\operatorname{rk}} \newcommand{\Don}{\Xi} \newcommand{\codim}{\operatorname{codim}} \newcommand{\Gal}{\operatorname{Gal}} \newcommand{\Num}{\operatorname{Num}} \newcommand{\Cl}{\operatorname{Cl}} \newcommand{\Div}{\operatorname{Div}} \newcommand{\Ann}{\operatorname{Ann}} \newcommand{\Frac}{\operatorname{Frac}} \newcommand{\lcm}{\operatorname{lcm}} \newcommand{\height}{\operatorname{ht}} \newcommand{\Der}{\operatorname{Der}} \newcommand{\Pic}{\operatorname{Pic}} \newcommand{\Sym}{\operatorname{Sym}} \newcommand{\Proj}{\operatorname{Proj}} \newcommand{\characteristic}{\operatorname{char}} \newcommand{\Spec}{\operatorname{Spec}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\Tor}{\operatorname{Tor}} \newcommand{\Tr}{\operatorname{Tr}} \newcommand{\res}{\operatorname{res}} \newcommand{\Aut}{\operatorname{Aut}} \newcommand{\length}{\operatorname{length}} \newcommand{\Log}{\operatorname{Log}} \newcommand{\Set}{\operatorname{Set}} \newcommand{\Fun}{\operatorname{Fun}} \newcommand{\id}{\operatorname{id}} \newcommand{\Gp}{\operatorname{Gp}} \newcommand{\Gr}{\operatorname{Gr}} 
\newcommand{\Ring}{\operatorname{Ring}} \newcommand{\Mod}{\operatorname{Mod}} \newcommand{\Vect}{\operatorname{Vect}} \newcommand{\Mor}{\operatorname{Mor}} \newcommand{\SheafHom}{\mathcal{H}om} \newcommand{\pre}{\operatorname{pre}} \newcommand{\coker}{\operatorname{coker}} \newcommand{\acl}{\operatorname{acl}} \newcommand{\dcl}{\operatorname{dcl}} \newcommand{\tp}{\operatorname{tp}} \newcommand{\stp}{\operatorname{stp}} \newcommand{\dom}{\operatorname{dom}} \newcommand{\val}{\operatorname{val}} \newcommand{\Int}{\operatorname{Int}} \newcommand{\Ext}{\operatorname{Ext}} \newcommand{\Stab}{\operatorname{Stab}} \newcommand{\trdeg}{\operatorname{tr.deg}} \newcommand{\bd}{\operatorname{bd}} \newcommand{\img}{\operatorname{im}} \newcommand{\coim}{\operatorname{coim}} \newcommand{\dpr}{\operatorname{dp-rk}} \newcommand{\redrk}{\operatorname{rk}_0} \newcommand{\botrk}{\operatorname{rk}_\bot} \newcommand{\mininf}{-_\infty} \newcommand{\cl}{\operatorname{cl}} \newcommand{\SubMod}{\operatorname{SubMod}} \newcommand{\Pow}{\mathcal{P}ow} \newcommand{\Sub}{\operatorname{Sub}} \newcommand{\soc}{\operatorname{soc}} \newcommand{\qsoc}{\operatorname{qsoc}} \newcommand{\Dir}{\operatorname{Dir}} \newcommand{\Pro}{\operatorname{Pro}} \newcommand{\Ind}{\operatorname{Ind}} \newcommand{\Lex}{\operatorname{Lex}} \newtheorem{theorem}{Theorem}[section] \newtheorem{theoremq}[theorem]{Likely Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{lemmadefinition}[theorem]{Lemma-Definition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{fact}[theorem]{Fact} \newtheorem{factq}[theorem]{Likely Fact} \newtheorem{maybe}[theorem]{Fact (Hopefully?)} \newtheorem{falsehood}[theorem]{False Conjecture} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{dream}{Dream}[section] \newtheorem{construction}[theorem]{Construction} \newtheorem{question}[theorem]{Question} \newtheorem{proposition}[theorem]{Proposition} 
\newtheorem{proposition-eh}[theorem]{Proposition(?)} \newtheorem*{theorem-star}{Theorem} \newtheorem*{conjecture-star}{Conjecture} \newtheorem*{lemma-star}{Lemma} \newtheorem{claim}[theorem]{Claim} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{exercise}[theorem]{Exercise} \newtheorem*{axioms}{Axioms} \newtheorem{remark}[theorem]{Remark} \newtheorem*{warning}{Warning} \theoremstyle{remark} \newtheorem*{acknowledgment}{Acknowledgments} \newcommand{\Aa}{\mathbb{A}} \newcommand{\Qq}{\mathbb{Q}} \newcommand{\eq}{\mathrm{eq}} \newcommand{\Ee}{\mathcal{E}} \newcommand{\Rr}{\mathbb{R}} \newcommand{\Rp}{\mathbb{RP}} \newcommand{\Zz}{\mathbb{Z}} \newcommand{\Kk}{\mathbb{K}} \newcommand{\Nn}{\mathbb{N}} \newcommand{\Cc}{\mathbb{C}} \newcommand{\Dd}{\mathcal{D}} \newcommand{\Ii}{\mathbb{I}} \newcommand{\Gg}{\mathbb{G}} \newcommand{\Hh}{\mathcal{H}} \newcommand{\Uu}{\mathbb{U}} \newcommand{\Mm}{\mathbb{M}} \newcommand{\Ff}{\mathbb{F}} \newcommand{\Pp}{\mathbb{P}} \newcommand{\Ll}{\mathbb{L}} \newcommand{\Oo}{\mathcal{O}} \newcommand{\mm}{\mathfrak{m}} \newcommand{\pp}{\mathfrak{p}} \newenvironment{claimproof}[1][\proofname] { \proof[#1] \renewcommand*\qedsymbol{$\square$\rlap{\textsubscript{Claim}}} } { \endproof } \begin{document} \maketitle \begin{abstract} We consider interpretable topological spaces and topological groups in a $p$-adically closed field $K$. We identify a special class of ``admissible topologies'' with topological tameness properties like generic continuity, similar to the topology on definable subsets of $K^n$. We show every interpretable set has at least one admissible topology, and every interpretable group has a unique admissible group topology. We then consider definable compactness (in the sense of Fornasiero) on interpretable groups. 
We show that an interpretable group is definably compact if and only if it has finitely satisfiable generics (\textit{fsg}), generalizing an earlier result on definable groups. As a consequence, we see that \textit{fsg} is a definable property in definable families of interpretable groups, and that any \textit{fsg} interpretable group defined over $\Qq_p$ is definably isomorphic to a definable group. \end{abstract} \section{Introduction} The theory of $p$-adically closed fields, denoted $\pCF$, has a cell decomposition theorem \cite[\S4, Theorem~$1.1'$]{vdDS} analogous to the cell decomposition in o-minimal theories. This in turn yields a dimension theory \cite[\S3]{vdDS} and topological tameness results. Here are three representative results from the topological tameness of $\pCF$: \begin{itemize} \item If $f : X \to Y$ is a definable function, then $f$ is continuous on a dense open subset of $X$ \cite[\S4, Theorem~$1.1'$]{vdDS}. \item If $X$ is definable and non-empty, then the frontier $\partial X$ has dimension strictly less than the dimension of $X$ \cite[Theorem~3.5]{p-minimal-cells}. \item If $X$ is definable of dimension $n$, then there is an open definable subset $X' \subseteq X$ such that $X'$ is an $n$-dimensional definable manifold and $\dim(X \setminus X') < n$ (see Remark~\ref{definable-to-manifold}). \end{itemize} These results in turn allow one to give any definable group $G$ the structure of a definable manifold in a canonical way \cite{Pillay-G-in-p}. Unlike a typical o-minimal theory, $\pCF$ does \emph{not} have elimination of imaginaries. Consequently, interpretable sets and groups no longer come with obvious topologies. This is a shame, as there are some important interpretable sets in $\pCF$, like the value group and the set of balls. Moreover, interpretable groups arise naturally in the study of definable groups when forming quotient groups. 
One can define a general class of \emph{interpretable topological spaces}, that is, definable topological spaces in $p\mathrm{CF}^\eq$. However, topological tameness results no longer hold on this general class. For example, if $X$ and $Y$ are the home sort $K$ with the standard topology and discrete topology, respectively, then the identity map $X \to Y$ is not generically continuous. We need to exclude things like the discrete topology on the home sort. In this paper, we define a special class of \emph{admissible} interpretable topological spaces (Definition~\ref{adm-def}). The class of admissible topologies excludes cases like the discrete topology on the home sort. The class of admissible topologies has many nice properties. First, there are ``enough'' admissible topologies: \begin{enumerate} \item Every interpretable set has at least one admissible topology (Theorem~\ref{construction-1}). Every interpretable group has a unique admissible group topology (Theorem~\ref{adm-group-thm}). \item A definable set $D \subseteq K^n$ is admissible, as a topological subspace of $K^n$. \item A definable manifold is admissible (Example~\ref{upsilon}). \item An interpretable subspace of an admissible topological space is admissible (Proposition~\ref{subspaces}). \item A product or disjoint union of two admissible topological spaces is admissible (Proposition~\ref{products-etc}). \item Admissibility is preserved under interpretable homeomorphism. \end{enumerate} Second, admissible topologies are ``tame'' or nice: \begin{enumerate}[resume] \item Admissible topological spaces are Hausdorff. \item If $X$ is admissible and $p \in X$, then some neighborhood of $p$ is interpretably homeomorphic to a definable subset of $K^n$. \item If $X$ is admissible and $D \subseteq X$ is interpretable, then the frontier $\partial D$ has lower dimension than $D$ (Proposition~\ref{frontier-dimension}). See Section~\ref{sec-dim} for a review of dimension theory on interpretable sets. 
\item If $f : X \to Y$ is interpretable, then there is an interpretable closed subset $D \subseteq X$ of lower dimension than $X$, such that $f$ is continuous on $X \setminus D$ (Proposition~\ref{gen-con}). \item If $X$ is admissible, then there is an interpretable closed subset $D \subseteq X$ of lower dimension than $X$, such that $X \setminus D$ is everywhere locally homeomorphic to $K^n$, for $n = \dim(X)$ (Corollary~\ref{generically-n-manifold}). \end{enumerate} \begin{remark} One interesting corollary of the final point is that if $S$ is interpretable of dimension $n$, then there is an interpretable injection from an $n$-dimensional ball into $S$. (This can also be proved directly.) \end{remark} Admissible interpretable topologies were introduced in \cite[Section~4]{wj-o-minimal} in the esoteric context of o-minimal theories without elimination of imaginaries. Luckily, the arguments of \cite{wj-o-minimal} carry over to the $p$-adic context with almost no changes. Nevertheless, we take the opportunity to clean up some of the proofs and results. We also slightly strengthen the definition of ``admissible,'' in order to get cleaner theorems. We apply the theory of admissibility to interpretable groups in $\pCF$. Classically, Pillay constructed a definable manifold structure on any definable group in $\pCF$ \cite{Pillay-G-in-p}. Admissibility allows us to run the same arguments on interpretable groups, yielding the following: \begin{theorem}[{= Theorem~\ref{adm-group-thm}}] \label{group-theorem} If $G$ is an interpretable group, then there is a unique admissible group topology on $G$. \end{theorem} When $G$ is definable, the admissible group topology is Pillay's definable manifold structure on $G$. Theorem~\ref{group-theorem} feels counterintuitive in the case of the value group $(\Gamma,+)$, which doesn't admit any obvious topology. 
In fact, we get the discrete topology: \begin{proposition}[{= Remark~\ref{local-dim-remark}}] If $G$ is interpretable, the admissible group topology on $G$ is discrete if and only if $\dim(G) = 0$. In particular, the admissible group topology on $\Gamma$ is the discrete topology. \end{proposition} Thus, nothing very interesting happens on 0-dimensional groups. Nevertheless, it is convenient to have a canonical topology which works uniformly across both definable groups and 0-dimensional interpretable groups. The admissible group topology is well-behaved in several ways: \begin{theorem} Let $G$ be an interpretable group with its admissible group topology, and let $H$ be an interpretable subgroup. \begin{enumerate} \item $H$ is always closed (Proposition~\ref{closed-subgroup}(\ref{cs1})). $H$ is clopen if and only if $\dim(H) = \dim(G)$ (Proposition~\ref{open-subgroup}). \item The admissible group topology on $H$ is the subspace topology (Proposition~\ref{closed-subgroup}(\ref{cs2})). \item If $H$ is normal, then the admissible group topology on $G/H$ is the quotient topology (Proposition~\ref{quotient-prop}(\ref{qp3})). \end{enumerate} \end{theorem} \begin{theorem} Let $f : G \to H$ be an interpretable homomorphism. \begin{enumerate} \item $f$ is continuous with respect to the admissible group topologies on $G$ and $H$ (Proposition~\ref{hom-cts}). \item If $f$ is injective, then $f$ is a closed embedding (Corollary~\ref{ce-cor}). \item If $f$ is surjective, then $f$ is an open map (Corollary~\ref{o-cor}). \end{enumerate} \end{theorem} \subsection{Application to \textit{fsg} groups} \label{intro-fsg} In future work with Yao \cite{jy-abelian}, Theorem~\ref{group-theorem} will be used to generalize some of the results of \cite{johnson-yao} to interpretable groups. In the present paper, we apply Theorem~\ref{group-theorem} to the study of interpretable groups with \textit{fsg}. 
Recall that an interpretable group has \emph{finitely satisfiable generics} (\textit{fsg}) if there is a small model $M_0$ and a global type $p \in S_G(\Mm)$ such that every left translate $g \cdot p$ is finitely satisfiable in $M_0$. This notion is due to Hrushovski, Peterzil, and Pillay \cite{goo}, who show that generic sets behave well in \textit{fsg} groups. An interpretable subset $X \subseteq G$ is said to be \emph{left generic} or \emph{right generic} if $G$ can be covered by finitely many left translates or right translates of $X$, respectively. \begin{fact}[{\cite[Proposition~4.2]{goo}}]\label{hpp-fact} Suppose $G$ has \textit{fsg}, witnessed by $p$ and $M_0$. \begin{enumerate} \item A definable set $X \subseteq G$ is left generic iff it is right generic. \item Non-generic sets form an ideal: if $X \cup Y$ is generic, then $X$ is generic or $Y$ is generic. \item A definable set $X$ is generic if and only if every left translate of $X$ intersects $G(M_0)$. \end{enumerate} \end{fact} The significance of \textit{fsg} is that it corresponds to ``definable compactness'' in several settings. For example, if $G$ is a group definable in a nice o-minimal structure, then $G$ has \textit{fsg} if and only if $G$ is definably compact \cite[Remark~5.3]{udi-anand}. In $\pCF$, a definable group $G$ has \textit{fsg} if and only if it is definably compact \cite{O-P,johnson-fsg}. With admissible group topologies in hand, the same arguments generalize to interpretable groups: \begin{theorem}[{= Theorem~\ref{fsg-char}}] \label{t1.6} An interpretable group $G$ has \textit{fsg} if and only if it is definably compact with respect to the admissible group topology. \end{theorem} Here, definable compactness is in the sense of Fornasiero \cite{fornasiero}; see Definition~\ref{fornasiero-definition}. 
Using this, we obtain some consequences which have nothing to do with topology: \begin{theorem} \begin{enumerate} \item The \textit{fsg} property is definable in families: if $\{G_a\}_{a \in X}$ is an interpretable family of interpretable groups, and $X_{fsg}$ is the set of $a \in X$ such that $G_a$ has \textit{fsg}, then $X_{fsg}$ is interpretable (Corollary~\ref{fsg-def}). \item If $G$ is interpretable over $\Qq_p$ and $G$ has \textit{fsg}, then $G$ is isomorphic to a definable group (Corollary~\ref{fsg-int-qp}). \end{enumerate} \end{theorem} Theorem~\ref{t1.6} says something more concrete for 0-dimensional interpretable groups, such as groups interpretable in the value group $\Gamma$. To explain, we need a few preliminary remarks. The structure $\Qq_p^\eq$ eliminates $\exists^\infty$ (even though the theory $p\mathrm{CF}^\eq$ \emph{does not}). Consequently, one can define a class of ``pseudofinite'' interpretable sets, characterized by the two properties: \begin{itemize} \item Over $\Qq_p$, pseudofiniteness agrees with finiteness. \item Pseudofiniteness is definable in families. \end{itemize} See Proposition~\ref{psf-prop} for a precise formulation. Pseudofiniteness can also be defined explicitly; see Definition~\ref{psf-def} and Proposition~\ref{other-psf}. For example, $S$ is pseudofinite if and only if $S$ is definably compact with respect to the discrete topology. With pseudofiniteness in hand, Theorem~\ref{t1.6} yields the following for 0-dimensional interpretable groups: \begin{theorem}[{= Proposition~\ref{p8.8}}] Let $G$ be a 0-dimensional interpretable group. Then $G$ has \textit{fsg} if and only if $G$ is pseudofinite. \end{theorem} \subsection{Relation to prior work} This paper leans heavily on \cite{wj-o-minimal}, \cite{Pillay-G-in-p}, and \cite{johnson-fsg}. 
On some level, this paper is merely the following three observations: \begin{enumerate} \item The theory of ``admissible topologies'' from Sections~3 and 4 of \cite{wj-o-minimal} can be transferred from the o-minimal setting to $\pCF$. \item Pillay's construction of definable manifold structures on definable groups \cite{Pillay-G-in-p} can then be applied to interpretable groups, giving a unique admissible group topology on any interpretable group. \item The proof in \cite{johnson-fsg} that ``\textit{fsg} = definable compactness'' then generalizes to interpretable groups. \end{enumerate} In the o-minimal context, a similar idea of constructing a topology on interpretable groups appears in the work of Eleftheriou, Peterzil, and Ramakrishnan \cite[Theorem~8.7]{interpretable-groups}. Using their topology, the authors show that interpretable groups are definable \cite[Theorem~8.22]{interpretable-groups}, which implies in hindsight that the topology is Pillay's topology on definable groups. (This contrasts with $\pCF$, where there are non-definable interpretable groups such as the value group and residue field.) It is known that $\pCF$ eliminates imaginaries after adding the so-called \emph{geometric sorts} to the language \cite[Theorem~1.1]{pcf-ei}. The $n$th geometric sort is a quotient of $GL_n(K)$ and can be understood as a space of lattices in $K^n$. It is probably possible to circumvent much of this paper, especially Section~\ref{adm-sec-1}, through the explicit description of imaginaries, as explained in Section~\ref{geometric-ei} below. However, the advantage of the present approach is that it is much more likely to generalize to other settings, such as $P$-minimal theories, where an explicit description of imaginaries is unknown.
\subsection{Outline} In Section~\ref{sec-dim} we review the dimension theory for imaginaries in geometric structures following \cite{gagelman}.\footnote{This is necessary because the use of \th-independence and $U^{\text{\th}}$-rank in \cite{wj-o-minimal} no longer works in the non-rosy theory $\pCF$. Note that the use of \th-independence and $U^{\text{\th}}$-rank in \cite{wj-o-minimal} was overkill; the dimension theory on imaginaries in geometric structures from \cite{gagelman} would have sufficed. In fact, in o-minimal theories, the dimension on imaginaries agrees with $U^{\text{\th}}$-rank.} In Section~\ref{random-review} we review the notion of definable and interpretable topological spaces and Fornasiero's definition of definable compactness \cite{fornasiero}. In Section~\ref{adm-sec-1} we develop the theory of admissible topologies in $\pCF$, showing that they satisfy topological tameness properties akin to definable sets (\S\ref{ss-tame}), that there are enough of them (\S\ref{ss-construct}), and that natural operations on topological spaces preserve admissibility (\S\ref{closure-props}). In Section~\ref{adm-sec-2} we turn to admissible group topologies, showing that each interpretable group has a unique admissible group topology, and checking the topological properties of homomorphisms. In Section~\ref{def-com-sec} we make a few remarks about definable compactness in admissible interpretable topological spaces. In Section~\ref{fsg-sec} we apply this to the study of \textit{fsg} interpretable groups, and in Section~\ref{0-dim} we analyze what happens for 0-dimensional groups. Finally, in Section~\ref{s-fd} we consider future research directions, including possible generalizations and open problems. \subsection{Conventions} \emph{Open maps are assumed to be continuous}. If $X$ is a subset of a topological space, then $\partial X$ denotes the frontier of $X$ and $\bd(X)$ denotes the boundary. 
If $E$ is an equivalence relation on a set $X$ and $X'$ is a subset, then $E \restriction X'$ denotes the restriction of $E$ to $X'$. If $E$ is an equivalence relation on a topological space $X$, then $X/E$ denotes the quotient topological space. When $X' \subseteq X$, we sometimes abbreviate $X'/(E \restriction X')$ as $X'/E$. Following \cite{panorama}, we define a \emph{$p$-adically closed field} to be a field elementarily equivalent to $\Qq_p$, and we denote the theory of $p$-adically closed fields by $\pCF$. The term ``$p$-adically closed field'' is often used in a more general sense to refer to any field elementarily equivalent to a finite extension of $\Qq_p$. For simplicity, we will not consider this more general context. However, \emph{all the results in this paper generalize to $p$-adically closed fields in the broad sense}, replacing $\Qq_p$ with its finite extensions in certain places. Symbols like $x,y,z,a,b,c,\ldots$ can denote singletons or tuples. Tuples are finite by default. Letters $A,B,C,\ldots$ are usually reserved for small sets of parameters, and letters $M, N$ are usually reserved for small models. We denote the monster model by $\Mm$. We maintain the distinction between definable and interpretable sets, as well as the distinction between reals (in $\Mm$) and imaginaries (in $\Mm^\eq$). ``Definable'' means ``definable with parameters,'' and ``0-definable'' means ``definable without parameters.'' If $D$ is definable, then $\ulcorner D \urcorner$ denotes ``the'' code of $D$, which is well-defined up to interdefinability, but is usually imaginary. We write the $\acl$-dimension in geometric theories as $\dim(a/B)$ for complete types and $\dim(X)$ for definable sets. \section{Dimension theory in geometric structures} \label{sec-dim} Let $\Mm$ be a monster model of a complete one-sorted theory $T$. Recall the following definitions from \cite{geostructs, gagelman}. 
\begin{definition} $T$ is a \emph{pregeometric theory} if $\acl(-)$ satisfies the Steinitz exchange property. $T$ is a \emph{geometric theory} if it is pregeometric and $\exists^\infty$ is eliminated. \end{definition} There is a well-known dimension theory on pregeometric structures. This dimension theory assigns a dimension $\dim(a/B)$ to each complete type $\tp(a/B)$ and a dimension $\dim(X)$ to each definable set $X$. By work of Gagelman \cite{gagelman}, the dimension theory extends to $T^{\eq}$. We review this theory below. \textbf{For the rest of this section, we assume $T$ is pregeometric.} \begin{definition} \label{dim-def} Let $A \subseteq \Mm$ be a small set of parameters and $b = (b_1,\ldots,b_n)$ be a tuple in $\Mm$. The tuple $b$ is \emph{$\acl$-independent (over $A$)} if $b_i \notin \acl(A \cup \{b_j : j \ne i\})$ for $1 \le i \le n$. The \emph{dimension} of $b$ over $A$, written $\dim(b/A)$, is the length of a maximal subtuple $c$ of $b$ such that $c$ is $\acl$-independent over $A$. This is independent of the choice of $c$, assuming the exchange property. \end{definition} The following properties of dimension are well-known: \begin{enumerate} \item Automorphism invariance: if $\sigma \in \Aut(\Mm)$, then $\dim(\sigma(a)/\sigma(B)) = \dim(a/B)$. \item Extension: given $a$ and $B \subseteq C$, there is $a' \equiv_B a$ with $\dim(a'/C) = \dim(a'/B) = \dim(a/B)$. \item Additivity: $\dim(a,b/C) = \dim(a/Cb) + \dim(b/C)$. \item Base monotonicity: if $B \subseteq C$, then $\dim(a/B) \ge \dim(a/C)$. \item Finite character: Given $a, B$ there is a finite subset $B' \subseteq B$ with $\dim(a/B') = \dim(a/B)$. \item Anti-reflexivity: $\dim(a/B) = 0 \iff a \in \acl(B)$. \end{enumerate} The exchange property is preserved when imaginaries are named as parameters: \begin{fact}[{\cite[Lemma 3.1]{gagelman}}] \label{gagelman-exchange} Suppose $A \subseteq \Mm^{\eq}$ is small and $b, c \in \Mm$.
Then \begin{equation} b \in \acl(Ac) \setminus \acl(A) \implies c \in \acl(Ab). \end{equation} \end{fact} Therefore Definition~\ref{dim-def} can be generalized to define $\dim(b/A)$ for real $b \in \Mm^n$ and imaginary $A \subseteq \Mm^\eq$. The six properties listed above continue to hold. Finally, we consider the case where $b$ is imaginary: \begin{definition} \label{dim-def-2} Let $b$ be a tuple of imaginaries and $A$ be a set of imaginaries. Let $c$ be a real tuple such that $b \in \acl^\eq(Ac)$. Define \begin{equation*} \dim(b/A) := \dim(c/A) - \dim(c/Ab). \end{equation*} \end{definition} \begin{fact} \phantomsection \label{dim-basics} \begin{enumerate} \item In Definition~\ref{dim-def-2}, $\dim(b/A)$ is well-defined, independent of the choice of $c$. \item When $b$ is a real tuple, Definition~\ref{dim-def-2} agrees with Definition~\ref{dim-def}. \item $\dim(-/-)$ satisfies automorphism invariance, extension, additivity, base monotonicity, and finite character. \item $\dim(-/-)$ satisfies half of anti-reflexivity: \begin{equation*} b \in \acl^\eq(A) \implies \dim(b/A) = 0. \end{equation*} \end{enumerate} \end{fact} Fact~\ref{dim-basics} is proved in \cite[Lemma~3.3, Proposition~3.4]{gagelman}. Gagelman omits the proof of Base Monotonicity, which is mildly subtle, so we review the proof for completeness: \begin{proof}[Proof (of base monotonicity)] Suppose $a \in \Mm^\eq$ and $B \subseteq C \subseteq \Mm^\eq$. Take a real tuple $d \in \Mm^n$ such that $a \in \acl^\eq(Bd)$. By the extension property, we may move $d$ by an automorphism and arrange $\dim(d/Ba) = \dim(d/Ca)$. Then \begin{equation*} \dim(a/B) = \dim(d/B) - \dim(d/Ba) \ge \dim(d/C) - \dim(d/Ca) = \dim(a/C) \end{equation*} by base monotonicity for real tuples. \end{proof} \begin{definition} \label{set-def} Let $A$ be a small subset of $\Mm^\eq$ and $X$ be an $A$-interpretable set. Then $\dim(X) := \max_{c \in X} \dim(c/A)$. When $X = \varnothing$, we define $\dim(X) = -\infty$.
\end{definition} \begin{fact}[{\cite[p.\@ 320--321]{gagelman}}] \label{fact2.7} In Definition~\ref{set-def}, $\dim(X)$ does not depend on $A$. \end{fact} Fact~\ref{fact2.7} follows formally from base monotonicity and extension. Then Propositions~\ref{dim-of-sets} and \ref{fibers} below follow formally from Fact~\ref{dim-basics} via the usual proofs. \begin{proposition} \label{dim-of-sets} Let $X, Y$ be interpretable sets. \begin{enumerate} \item $\dim(X) \ge 0$ iff $X \ne \varnothing$. \item \label{dos2} If $X$ is finite, then $\dim(X) \le 0$. \item If $X$ is definable, then $\dim(X) \le 0$ if and only if $X$ is finite. \item If $X, Y$ are in the same sort, then $\dim(X \cup Y) = \max(\dim(X),\dim(Y))$. \item $\dim(X \times Y) = \dim(X) + \dim(Y)$. \end{enumerate} \end{proposition} \begin{proposition} \label{fibers} Let $f : X \to Y$ be an interpretable function between two interpretable sets. \begin{enumerate} \item If every fiber has dimension at most $k$, then $\dim(X) \le k + \dim(Y)$. \item If every fiber has dimension at least $k$, then $\dim(X) \ge k + \dim(Y)$. \item If every fiber has dimension exactly $k$, then $\dim(X) = k + \dim(Y)$. \item If $f$ is surjective, then $\dim(X) \ge \dim(Y)$. \item If $f$ is injective, or more generally if $f$ has finite fibers, then $\dim(X) \le \dim(Y)$. \item If $f$ is a bijection, then $\dim(X) = \dim(Y)$. \end{enumerate} \end{proposition} Recall that the pregeometric theory $T$ is \emph{geometric} if it eliminates $\exists^\infty$. \begin{fact}[{\cite[Proposition 3.7]{gagelman}}] \phantomsection \label{continuity-01} \begin{enumerate} \item \label{uno} If $a \in \Mm^n$ and $B \subseteq \Mm^\eq$, then $\dim(a/B)$ is the minimum of $\dim(X)$ as $X$ ranges over $B$-definable sets containing $a$. \item \label{dos} Suppose $T$ is geometric. If $a \in \Mm^\eq$ and $B \subseteq \Mm^\eq$, then $\dim(a/B)$ is the minimum of $\dim(X)$ as $X$ ranges over $B$-interpretable sets containing $a$. 
\end{enumerate}
\end{fact}
The assumption that $T$ is geometric is necessary in (\ref{dos}), as shown by the following example. Suppose $\Mm$ is a monster model of the theory of an equivalence relation with infinitely many equivalence classes of size $n$ for each positive integer $n$. By saturation, there are also infinite equivalence classes. The following facts are easy to verify:
\begin{enumerate}
\item For $A \subseteq \Mm$, the algebraic closure $\acl(A) \subseteq \Mm$ is the union of $A$ and all finite equivalence classes intersecting $A$.
\item $\acl$ satisfies exchange (on $\Mm$).
\item If $C$ is an equivalence class and $b = \ulcorner C \urcorner \in \Mm^\eq$ is its code, then
\begin{equation*}
\dim(b/\varnothing) = \begin{cases} 1 & \text{ if $C$ is finite} \\ 0 & \text{ if $C$ is infinite.} \end{cases}
\end{equation*}
\item If $b$ is the code of an infinite equivalence class and $X$ is an $\varnothing$-interpretable set containing $b$, then $\dim(X) = 1 > \dim(b/\varnothing) = 0$.
\end{enumerate}
Another special property of geometric theories is that dimension is definable in families:
\begin{fact}[{\cite[Fact~2.4]{gagelman}}] \label{def-defable} Suppose $T$ is geometric. Let $\{X_a\}_{a \in Y}$ be a definable family of definable sets. Then for any $k$, the set $\{a \in Y : \dim(X_a) = k\}$ is definable. \end{fact}
This holds for interpretable families as well:
\begin{proposition} \label{dim-definable-omega} Suppose $T$ is geometric. Let $\{X_a\}_{a \in Y}$ be an interpretable family of interpretable sets. Then for any $k$, the set $\{a \in Y : \dim(X_a) = k\}$ is interpretable. \end{proposition}
\begin{proof}[Proof sketch]
Let $X$ be an interpretable set, defined as $D/E$ for some definable set $D \subseteq \Mm^n$ and definable equivalence relation $E$ on $D$. Let $[a]_E$ denote the $E$-equivalence class of $a \in D$. Let $D_j = \{a \in D : \dim([a]_E) = j\}$. Let $X_j$ be the quotient $D_j/E$. Each set $D_j$ is definable by Fact~\ref{def-defable}.
Using Propositions~\ref{dim-of-sets} and \ref{fibers}, one sees that \begin{equation*} \dim(X) = \dim\left(\bigcup_{j = 0}^n X_j\right) = \max_{0 \le j \le n} \dim(X_j) = \max_{0 \le j \le n} \left(\dim(D_j) - j\right). \end{equation*} This calculates $\dim(X)$ in a definable way. \end{proof} \subsection{Dimensional independence} Continue to assume $T$ is pregeometric, but not necessarily geometric. \begin{lemma} \label{acl-alt} $\dim(a/B) = \dim(a/\acl^\eq(B))$. \end{lemma} \begin{proof} Clear from the definitions. \end{proof} \begin{definition}\label{first-ind-def} Suppose $a, b \in \Mm$ and $C \subseteq \Mm^\eq$. Then $a \dimind_C b$ means that $\dim(a,b/C) = \dim(a/C) + \dim(b/C)$. \end{definition} \begin{lemma} \label{ind-1} Suppose $a, b \in \Mm$ and $C \subseteq \Mm^\eq$. \begin{enumerate} \item \label{eins} $a \dimind_C b \iff b \dimind_C a$. \item \label{drei} $a \dimind_C b$ if and only if $\dim(a/C) = \dim(a/Cb)$. \item \label{vier} If $a' \in \acl^\eq(Ca)$ and $b' \in \acl^\eq(Cb)$, then $a \dimind_C b \implies a' \dimind_C b'$. \end{enumerate} \end{lemma} \begin{proof} (\ref{eins}) is trivial. (\ref{drei}) holds because \begin{equation*} \dim(a/C) + \dim(b/C) - \dim(a,b/C) = \dim(a/C) - \dim(a/Cb) \ge 0 \end{equation*} by additivity and base monotonicity. For part (\ref{vier}), we may assume $a' = a$ by symmetry. Then $a \dimind_C b \implies \dim(a/C) = \dim(a/Cb)$. By Lemma~\ref{acl-alt}, this means \begin{equation*} \dim(a/\acl^\eq(C)) = \dim(a/\acl^\eq(Cb)). \end{equation*} But $\acl^\eq(C) \subseteq \acl^\eq(Cb') \subseteq \acl^\eq(Cb)$, so base monotonicity gives \begin{equation*} \dim(a/\acl^\eq(C)) = \dim(a/\acl^\eq(Cb')) = \dim(a/\acl^\eq(Cb)). \end{equation*} By Lemma~\ref{acl-alt} again, $\dim(a/C) = \dim(a/Cb')$, and so $a \dimind_C b'$. \end{proof} By Lemma~\ref{ind-1}(\ref{vier}), $a \dimind_C b$ depends only on $a$ and $b$ as sets. 
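The following concrete instance may help fix ideas; it is a routine computation using only additivity and anti-reflexivity from Fact~\ref{dim-basics}. Let $a \in \Mm$ be non-algebraic over $\varnothing$, and let $b \in \acl(a) \setminus \acl(\varnothing)$ (for instance $b = a^2$ when $T = \pCF$). Then $\dim(a/\varnothing) = \dim(b/\varnothing) = 1$, while additivity gives
\begin{equation*}
\dim(a,b/\varnothing) = \dim(b/a) + \dim(a/\varnothing) = 0 + 1 = 1 < 2 = \dim(a/\varnothing) + \dim(b/\varnothing),
\end{equation*}
so $a \dimind_\varnothing b$ fails. Equivalently, in the form of Lemma~\ref{ind-1}(\ref{drei}): $\dim(a/b) = \dim(a,b/\varnothing) - \dim(b/\varnothing) = 0 \ne 1 = \dim(a/\varnothing)$.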
\begin{definition} If $A, B, C \subseteq \Mm^\eq$, then $A \dimind_C B$ means that $A_0 \dimind_C B_0$ for all finite subsets $A_0 \subseteq A$ and $B_0 \subseteq B$. \end{definition} This extends Definition~\ref{first-ind-def} by Lemma~\ref{ind-1}(\ref{vier}). \begin{proposition} \phantomsection \label{ind-2} \begin{enumerate} \item $A \dimind_C B \iff B \dimind_C A$. \item \label{silly-mon} If $A' \subseteq \acl^\eq(CA)$ and $B' \subseteq \acl^\eq(CB)$, then $A \dimind_C B \implies A' \dimind_C B'$. \item \label{sami} $a \dimind_C B$ holds iff $\dim(a/C) = \dim(a/CB)$. \item \label{otkhi} If $B_1 \subseteq B_2 \subseteq B_3$, then \begin{equation*} A \dimind_{B_1} B_3 \iff (A \dimind_{B_1} B_2 \text{ and } A \dimind_{B_2} B_3). \end{equation*} \end{enumerate} \end{proposition} \begin{proof} The first two points are clear from Lemma~\ref{ind-1}. For part (\ref{sami}), use base monotonicity and finite character to reduce to the case where $B$ is finite, which is Lemma~\ref{ind-1}(\ref{drei}). In part (\ref{otkhi}), we may assume $A$ is a finite tuple $a$, in which case (\ref{otkhi}) says \begin{equation*} \dim(a/B_1) = \dim(a/B_3) \iff (\dim(a/B_1) = \dim(a/B_2) \text{ and } \dim(a/B_2) = \dim(a/B_3)). \end{equation*} This holds because $\dim(a/B_1) \ge \dim(a/B_2) \ge \dim(a/B_3)$ by base monotonicity. \end{proof} \begin{lemma} \label{type-ish} Let $a$ be a real tuple, possibly infinite. Let $B \subseteq C$ be sets of imaginaries. Then there is a consistent partial $\ast$-type $\Sigma_{a,B,C}(x)$ whose realizations are the tuples $a' \equiv_B a$ such that $a' \dimind_B C$. \end{lemma} \begin{proof} Let $I$ be a set indexing the tuple $a = (a_i : i \in I)$. For each finite $J \subseteq I$, let $\pi_J$ be the projection map from $I$-tuples to $J$-tuples. (For example, $\pi_J(a)$ is the finite subtuple of $a$ determined by $J$.) If $a' \in \Mm^I$, note that \begin{itemize} \item $a' \equiv_B a$ holds iff $\pi_J(a') \equiv_B \pi_J(a)$ for all finite $J \subseteq I$. 
\item $a' \dimind_B C$ holds iff $\pi_J(a') \dimind_B C$ for all finite $J \subseteq I$, because of the definition of $\dimind$. \end{itemize} Therefore we may reduce to the case where $I$ is finite. Let $n = \dim(a/B)$. If $a' \equiv_B a$, then $\dim(a'/B) = n$ and so \begin{equation*} a' \dimind_B C \iff \dim(a'/B) \le \dim(a'/C) \iff n \le \dim(a'/C) \qquad \qquad \text{(for $a' \equiv_B a$)}. \end{equation*} Therefore \begin{equation*} \{a' \in \Mm^I : a' \equiv_B a, ~ a' \dimind_B C\} = \{a' \in \Mm^I : a' \equiv_B a, ~ \dim(a'/C) \ge n\}. \end{equation*} The condition $a' \equiv_B a$ is defined by the type $\tp(a/B)$. By Fact~\ref{continuity-01}(\ref{uno}), the condition $\dim(a'/C) \ge n$ is defined by the type \begin{equation*} \{\neg \varphi(x,c) : \varphi \in L, ~ c \in C, ~ \dim(\varphi(\Mm,c)) < n\}. \end{equation*} Therefore $\{a' \in \Mm^I : a' \equiv_B a, ~ a' \dimind_B C\}$ is type-definable. Finally, the set is non-empty by the extension property of $\dim(-/-)$. \end{proof} \begin{proposition} \label{extension-2} Let $A, B, C$ be small subsets of $\Mm^\eq$. Then there is $\sigma \in \Aut(\Mm/C)$ such that $\sigma(A) \dimind_C B$. \end{proposition} \begin{proof} Take an infinite real tuple $a$ such that $A \subseteq \dcl^\eq(a)$. By Lemma~\ref{type-ish}, there is some $a' \equiv_C a$ such that $a' \dimind_C B$. Equivalently, there is $\sigma \in \Aut(\Mm/C)$ such that $\sigma(a) \dimind_C B$. Now $\sigma(A) \subseteq \dcl^\eq(\sigma(a))$, so $\sigma(A) \dimind_C B$ holds by Proposition~\ref{ind-2}(\ref{silly-mon}). \end{proof} \section{Interpretable topological spaces and definable compactness} \label{random-review} Let $M$ be any structure. \begin{definition} A topology on an interpretable set $X$ is an \emph{interpretable topology} if there is an interpretable basis of opens, i.e., an interpretable family $\{S_a\}_{a \in Y}$ such that $\{S_a : a \in Y\}$ is a basis for the topology. 
An \emph{interpretable topological space} is an interpretable set with an interpretable topology.
\end{definition}
For example, if $M \models p\mathrm{CF}$ then $M^n$ with the standard topology is an interpretable topological space.

A family of sets $\mathcal{F}$ is \emph{downwards-directed} if for any $X, Y \in \mathcal{F}$, there is $Z \in \mathcal{F}$ with $Z \subseteq X \cap Y$. In topology, compactness can be defined as follows: a topological space $X$ is compact if any downwards-directed family of non-empty closed subsets of $X$ has non-empty intersection.
\begin{definition} \label{fornasiero-definition} An interpretable topological space $X$ is \emph{definably compact} (in the sense of Fornasiero) if every downwards-directed interpretable family of non-empty closed subsets of $X$ has non-empty intersection. An interpretable subset $D \subseteq X$ is \emph{definably compact} if it is definably compact with respect to the subspace topology. \end{definition}
Definable compactness in this sense was studied independently by Fornasiero \cite{fornasiero} and the author \cite{wj-o-minimal}. Many of the expected properties hold:
\begin{fact} \phantomsection \label{dc-fact}
\begin{enumerate}
\item \label{df1} If $X$ is an interpretable topological space and $X$ is compact, then $X$ is definably compact (clear).
\item If $X, Y$ are definably compact, then the disjoint union $X \sqcup Y$ and the product $X \times Y$ are definably compact \cite[Lemma~3.5(2), Proposition~3.7]{wj-o-minimal}.
\item If $X$ is a Hausdorff interpretable topological space and $D \subseteq X$ is a definably compact interpretable subset, then $D$ is closed \cite[Lemma~3.8]{wj-o-minimal}.
\item Finite sets are definably compact (clear).
\item If $X$ is an interpretable topological space and $D_1, D_2$ are definably compact interpretable subsets, then the union $D_1 \cup D_2$ is definably compact \cite[Lemma~3.5(2)]{wj-o-minimal}.
\item If $f : X \to Y$ is an interpretable continuous map and $X$ is definably compact, then the image $f(X)$ is definably compact as a subset of $Y$ \cite[Lemma~3.4]{wj-o-minimal}. \item Definable compactness and non-compactness are preserved in elementary extensions (clear). \end{enumerate} \end{fact} In $p$-adically closed fields definable compactness has additional nice properties: \begin{fact} \phantomsection \label{dc-fact-pcf} \begin{enumerate} \item \label{dfp1} If $M$ is a $p$-adically closed field, a definable set $X \subseteq M^n$ is definably compact iff $X$ is closed and bounded \cite[Lemmas~2.4,2.5]{johnson-yao}. \item Consequently, if $M = \Qq_p$, then a definable set $X \subseteq M^n$ is definably compact iff it is compact. \end{enumerate} \end{fact} \section{Admissible topologies in $p$-adically closed fields} \label{adm-sec-1} Work in a monster model $\Mm$ of $\pCF$. We give $\Mm$ the standard valuation topology and $\Mm^n$ the product topology. If $X \subseteq \Mm^n$ is definable, give $X$ the subspace topology. This allows us to regard any definable set as a definable topological space. \begin{definition} Let $X$ be a Hausdorff interpretable topological space. \begin{enumerate} \item \label{ab1} $X$ is a \emph{definable manifold} if $X$ is covered by finitely many open subsets $U_1, \ldots, U_n$, such that $U_i$ is interpretably homeomorphic to an open definable subset of $\Mm^{k_i}$ for some $k_i$. \item \label{ab2} $X$ is \emph{locally Euclidean} if for every point $p \in X$, there is a neighborhood $U \ni p$ and an interpretable homeomorphism from $U$ to an open subset of $\Mm^n$ for some $n$ depending on $p$. \item $X$ is \emph{locally definable} if for every point $p \in X$, there is a neighborhood $U \ni p$ and an interpretable homeomorphism from $U$ to a definable subspace of $\Mm^n$ for some $n$ depending on $p$. 
\end{enumerate}
\end{definition}
\begin{remark}
\begin{enumerate}
\item In (\ref{ab1}) and (\ref{ab2}), we allow the dimension to vary from point to point(!)
\item Definable manifolds and locally Euclidean spaces are similar. The difference is that in a definable manifold the atlas is finite, whereas in a locally Euclidean space the atlas can be infinite.
\item Because we are working in a monster model, the atlas in local Euclideanity must be uniformly definable, i.e., the neighborhood $U \ni p$ and open embedding $U \to \Mm^n$ must have bounded complexity. The same holds for local definability.
\item $\Mm$ with the discrete topology is a locally Euclidean definable topological space that is not a definable manifold.
\item If $X$ is a definable manifold, then $X$ is in interpretable bijection with a definable set. So definable manifolds are essentially definable objects.
\end{enumerate}
\end{remark}
Recall our convention that ``open map'' means ``continuous open map.''
\begin{definition} Let $X$ be a Hausdorff interpretable topological space.
\begin{enumerate}
\item $X$ is \emph{definably dominated} if there is an interpretable surjective open map $Y \to X$, for some definable set $Y \subseteq \Mm^n$ with the subspace topology.
\item $X$ is \emph{manifold dominated} if there is an interpretable surjective open map $Y \to X$, for some definable manifold $Y$.
\end{enumerate}
\end{definition}
\begin{lemma}\label{dom-trans} Let $X \to Y$ be an interpretable surjective open map between two Hausdorff interpretable topological spaces $X$ and $Y$.
\begin{enumerate}
\item \label{dt1} If $X$ is definably dominated, then $Y$ is definably dominated.
\item \label{dt2} If $X$ is manifold dominated, then $Y$ is manifold dominated.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove (\ref{dt1}); (\ref{dt2}) is similar. Take a definable set $W \subseteq \Mm^n$ and an interpretable surjective open map $W \to X$ witnessing the fact that $X$ is definably dominated.
Then the composition $W \to X \to Y$ witnesses that $Y$ is definably dominated. \end{proof} \begin{lemma} If $X$ is manifold dominated, then $X$ is definably dominated. \end{lemma} \begin{proof} By Lemma~\ref{dom-trans}, it suffices to show that definable manifolds are definably dominated. Let $X$ be a definable manifold. By definition there are definable open sets $U_i \subseteq \Mm^{n_i}$ for $1 \le i \le m$ and open embeddings $f_i : U_i \to X$ which are jointly surjective. The disjoint union $U_1 \sqcup \cdots \sqcup U_m$ is homeomorphic to a definable set $D \subseteq \Mm^N$ for sufficiently large $N$. The natural map $U_1 \sqcup \cdots \sqcup U_m \to X$ is an interpretable surjective open map. Therefore $X$ is definably dominated. \end{proof} \begin{definition} \label{adm-def} Let $X$ be a Hausdorff interpretable topological space. \begin{enumerate} \item $X$ is \emph{admissible} if $X$ is definably dominated and locally definable. \item $X$ is \emph{strongly admissible} if $X$ is manifold dominated and locally Euclidean. \end{enumerate} We say that an interpretable topology $\tau$ on an interpretable set $X$ is \emph{admissible} or \emph{strongly admissible} if the space $(X,\tau)$ is admissible or strongly admissible, respectively. \end{definition} In \cite[Section~4]{wj-o-minimal}, ``admissibility'' was used to refer to the weaker condition of definable domination, without local definability. However, the frontier dimension inequality fails without local definability (Proposition~\ref{frontier-dimension}, Remark~\ref{why-definably-dominated}). \begin{example} \label{upsilon} If $X$ is a definable manifold, then $X$ is strongly admissible. (The identity map $\id_X$ witnesses that $X$ is manifold dominated.) Similarly, if $X \subseteq \Mm^n$ is a definable set with the subspace topology, then $X$ is admissible. \end{example} \begin{example} \label{gamma} The discrete topology on the value group $\Gamma$ is strongly admissible. 
The valuation map $\Mm^\times \to \Gamma$ witnesses manifold-domination. In contrast, the discrete topology on $\Mm$ is \emph{not} admissible, by Proposition~\ref{local-dim} below. \end{example} \begin{example} \label{half-compactification} Let $\Gamma_\infty$ be the extended value group $\Gamma \cup \{+\infty\}$. Consider the topology on $\Gamma_\infty$ with basis \begin{equation*} \{\{\gamma\} : \gamma \in \Gamma\} \cup \{[\gamma,+\infty] : \gamma \in \Gamma\}. \end{equation*} The valuation map $\Mm \to \Gamma_\infty$ shows that $\Gamma_\infty$ is manifold dominated. However, $\Gamma_\infty$ is not locally definable. The point $+\infty \in \Gamma_\infty$ has the property that every neighborhood is infinite and 0-dimensional. This cannot happen in a definable set, where being 0-dimensional is equivalent to being finite. \end{example} \begin{example} \label{impl-1} Let $D \subseteq \Mm^2$ be the definable set $\{(x,y) \in \Mm^2 : x = 0 ~ \vee ~ y \ne 0\}$ with the subspace topology. Then $D$ is admissible. We will see below that $D$ is not manifold dominated (Example~\ref{impl-2}). \end{example} \subsection{Closure properties} \label{closure-props} \begin{remark} \label{finite-strong} Let $X$ be a finite interpretable set with the discrete topology. Then $X$ is strongly admissible. In fact, $X$ is a definable manifold. \end{remark} In the category of topological spaces, the class of open maps is closed under composition and base change. Consequently, if $f_i : Y_i \to X_i$ is an open map for $i = 1, 2$, then the product map $Y_1 \times Y_2 \to X_1 \times X_2$ is also an open map. \begin{proposition} \label{products-etc} The following classes of interpretable topological spaces are closed under finite disjoint unions and finite products. \begin{enumerate} \item \label{pe2} The class of locally definable spaces. \item \label{pe1} The class of locally Euclidean spaces. \item \label{pe3} The class of definably dominated spaces. 
\item \label{pe4} The class of manifold dominated spaces. \item \label{pe5} The class of admissible spaces. \item \label{pe6} The class of strongly admissible spaces. \end{enumerate} \end{proposition} \begin{proof} We focus on binary disjoint unions and binary products. (The 0-ary cases are handled by Remark~\ref{finite-strong}.) Cases (\ref{pe1}) and (\ref{pe2}) are straightforward. For (\ref{pe3}), let $X_1, X_2$ be two definably dominated spaces. Let $f_i : Y_i \to X_i$ be a map witnessing definable domination for $i = 1, 2$. Up to definable homeomorphism, $Y_1 \sqcup Y_2$ is a definable set, and so $Y_1 \sqcup Y_2 \to X_1 \sqcup X_2$ witnesses that $X_1 \sqcup X_2$ is definably dominated. Similarly, $Y_1 \times Y_2 \to X_1 \times X_2$ witnesses that $X_1 \times X_2$ is definably dominated. The proof for (\ref{pe4}) is similar. Then (\ref{pe2}) and (\ref{pe3}) give (\ref{pe5}), while (\ref{pe1}) and (\ref{pe4}) give (\ref{pe6}). \end{proof} Say that a class $\mathcal{C}$ of interpretable topological spaces is ``closed under subspaces'' if $\mathcal{C}$ contains every interpretable subspace of every $X \in \mathcal{C}$. \begin{proposition}\label{subspaces} The following classes of interpretable topological spaces are closed under subspaces: \begin{enumerate} \item \label{ss1} The class of locally definable spaces. \item \label{ss2} The class of definably dominated spaces. \item \label{ss3} The class of admissible spaces. \end{enumerate} \end{proposition} \begin{proof} (\ref{ss1}) is straightforward, and (\ref{ss1}) and (\ref{ss2}) give (\ref{ss3}), so it suffices to prove (\ref{ss2}). Suppose $X$ is definably dominated, witnessed by an interpretable surjective open map $Y \to X$. Let $X'$ be an interpretable subspace of $X$. 
Let $Y'$ be the pullback $X' \times_X Y$:
\begin{equation*}
\xymatrix{ Y' \ar[d] \ar[r] & Y \ar[d] \\ X' \ar[r] & X.}
\end{equation*}
Then $Y'$ is an interpretable subspace of the definable set $Y \subseteq \Mm^n$, so $Y'$ is also a definable subset of $\Mm^n$. The map $Y' \to X'$ is open and surjective because it is the base change of the open and surjective map $Y \to X$ along the inclusion $X' \to X$. Then $Y' \to X'$ witnesses that $X'$ is definably dominated.
\end{proof}
The class of strongly admissible spaces is \emph{not} closed under subspaces; local Euclideanity is not preserved.
\begin{remark} \label{open-obvious} If $X$ is strongly admissible and $U$ is an interpretable open subset, then $U$ (with the subspace topology) is strongly admissible. Indeed, $U$ is Hausdorff and locally Euclidean, and if $Y$ is a definable manifold with an interpretable surjective open map $f : Y \to X$, then $f^{-1}(U)$ is a definable manifold and $f^{-1}(U) \to U$ shows that $U$ is manifold dominated. \end{remark}
\subsection{Construction of strongly admissible topologies} \label{ss-construct}
In this section, we show that every interpretable set admits at least one strongly admissible topology. The proofs closely follow those in the o-minimal case \cite[Section~3]{wj-o-minimal}, and we omit them when no changes are needed.
\begin{definition} \label{def-oq} Let $X$ be a topological space and $E$ be an equivalence relation on $X$. Consider $X/E$ with the quotient topology.
\begin{enumerate}
\item $E$ is an \emph{OQ equivalence relation} if $X \to X/E$ is an open map.
\item If $D \subseteq X$, then the \emph{$E$-saturation of $D$}, written $D^E$, is the union of the $E$-equivalence classes intersecting $D$.
\end{enumerate}
\end{definition}
``OQ'' stands for ``open quotient.'' In \cite{wj-o-minimal}, OQ equivalence relations were called ``open equivalence relations,'' and the $E$-saturation was called the ``$E$-closure.''
\begin{remark} \label{criterion-for-oer} Let $X$ be a topological space with basis $\mathcal{B}$, and let $E$ be an equivalence relation on $X$. The following are equivalent:
\begin{enumerate}
\item $E$ is an OQ equivalence relation on $X$.
\item For any open set $U \subseteq X$, the $E$-saturation $U^E$ is open.
\item For any basic open set $B \in \mathcal{B}$, the $E$-saturation $B^E$ is open.
\item \label{cfo4} If $p,p' \in X$ and $pEp'$ and $B \in \mathcal{B}$ is a basic neighborhood of $p$, then there is $B' \in \mathcal{B}$ a basic neighborhood of $p'$, such that $\forall x \in B' ~ \exists y \in B ~ xEy$. (That is, $B' \subseteq B^E$.)
\end{enumerate}
\end{remark}
\begin{fact}[{\cite[Lemma~3.12]{wj-o-minimal}}] \label{why-open} Let $X$ be an interpretable topological space and $E$ be an interpretable OQ equivalence relation on $X$. Then the quotient topology on $X/E$ is interpretable. \end{fact}
When $E$ is not an OQ equivalence relation, the quotient topology on $X/E$ can fail to be interpretable. For example, this happens when $X = \Mm^2$ and $E$ is the equivalence relation collapsing the $x$-axis to a point.
\begin{fact}[{\cite[Lemma~3.13]{wj-o-minimal}}] \label{open-to-subs} Let $X$ be a topological space and $E$ be an OQ equivalence relation on $X$. Let $X'$ be an open subset of $X$ and let $E'$ be the restriction $E \restriction X'$.
\begin{enumerate}
\item \label{ots1} $E'$ is an OQ equivalence relation on $X'$.
\item $X'/E' \to X/E$ is an open embedding.
\end{enumerate}
\end{fact}
\begin{definition} Let $X$ be a non-empty interpretable set. An interpretable subset $Y \subseteq X$ is a \emph{large subset} if $\dim(X \setminus Y) < \dim(X)$.
\end{definition} This terminology follows \cite[Definition~1.11]{Pillay-G-in-o} and \cite[Definition~8.3]{interpretable-groups}. In \cite{wj-o-minimal}, large sets were called ``full sets.'' \begin{remark} \phantomsection \label{large-remark} \begin{enumerate} \item If $X$ is a large subset of $Y$, then $\dim(X) = \dim(Y)$. \item \label{lr2} If $X$ is a large subset of $Y$, and $Y$ is a large subset of $Z$, then $X$ is a large subset of $Z$. \item If $\dim(X) = 0$, the only large subset is $X$ itself. \end{enumerate} \end{remark} \begin{remark} \label{definable-to-manifold} If $X \subseteq \Mm^n$ is definable, then there is a large open subset $X' \subseteq X$ such that $X'$ is a definable manifold. If $X$ is $M$-definable for a small model $M \preceq \Mm$, then we can take $X'$ to be $M$-definable. If $k = \dim(X)$, then we can take $X'$ to be a $k$-dimensional manifold in the strong sense that $X'$ is covered by finitely many open sets definably homeomorphic to open subsets of $\Mm^k$. This is obvious from cell decomposition results and dimension inequalities, but we give a proof for completeness. \end{remark} \begin{proof} Say that a non-empty set $C \subseteq \Mm^n$ is a \emph{t-cell} \cite[Definition~2.6]{p-minimal-cells} if there is a coordinate projection $\pi : \Mm^n \to \Mm^k$ such that $\pi(C)$ is open in $\Mm^k$, and $C \to \pi(C)$ is a homeomorphism. By \cite[\S4, Theorem~$1.1'$]{vdDS}, we can write $X$ as a finite disjoint union of t-cells $\bigcup_{i = 1}^m C_i$. If $X$ is $M$-definable, we can take the $C_i$ to be $M$-definable. Without loss of generality, $C_1,\ldots,C_j$ have dimension $k$ and $C_{j+1},\ldots,C_m$ have dimension $< k$. Let $D_i$ be the union of the cells other than $C_i$. For $i \le j$, let \begin{equation*} U_i = X \setminus \overline{D_i} = C_i \setminus \overline{D_i} = C_i \setminus \partial D_i \end{equation*} and take $X' = \bigcup_{i = 1}^j U_i$. Each set $U_i$ is an open subset of $X$, so $X'$ is an open subset. 
For each $i \le j$, $U_i$ is an open subset of $C_i$, which is definably homeomorphic to an open subset of $\Mm^k$, and therefore $U_i$ is also definably homeomorphic to an open subset of $\Mm^k$. Therefore $X'$ is a $k$-dimensional manifold in the strong sense. To see that $X'$ is a large subset of $X$, note that
\begin{equation*}
X \setminus X' = \bigcup_{i = 1}^j (C_i \cap \partial D_i) \cup \bigcup_{i = {j+1}}^m C_i.
\end{equation*}
For $i > j$, $\dim(C_i) < k$ by assumption. The frontier dimension inequality \cite[Theorem~3.5]{p-minimal-cells} shows that $\dim(C_i \cap \partial D_i) \le \dim(\partial D_i) < \dim(D_i) \le k$. Therefore $\dim(X \setminus X') < k = \dim(X')$.
\end{proof}
\begin{definition} Let $D$ be an $A$-interpretable set. An element $b \in D$ is \emph{generic in $D$ (over $A$)} if $\dim(b/A) = \dim(D)$. \end{definition}
\begin{warning} If $D$ is 0-dimensional, then every element of $D$ is generic. \end{warning}
\begin{lemma} \label{generic-eucl} If $X \subseteq \Mm^n$ is non-empty and $A$-definable, and $p$ is generic in $X$ (over $A$), then $X$ is locally Euclidean on an open neighborhood of $p$. \end{lemma}
\begin{proof}
By Remark~\ref{definable-to-manifold} there is a large open subset $X' \subseteq X$ that is a definable manifold. Let $B \supseteq A$ be a set of parameters defining $X'$. Moving $B$ and $X'$ by an automorphism over $A$, we may assume $p \dimind_A B$. Then $\dim(p/B) = \dim(p/A) = \dim(X) > \dim(X \setminus X')$, and so $p$ is not in the $B$-definable set $X \setminus X'$. Then $p \in X'$, and $X'$ is the desired open neighborhood of $p$.
\end{proof}
The following lemma, which parallels \cite[Lemma~3.15]{wj-o-minimal}, collects some useful tools.
\begin{lemma} \label{tricks} Let $X \subseteq \Mm^n$ be $A$-definable, endowed with the subspace topology.
\begin{enumerate}
\item \label{tr1} Let $D$ be a definable subset of $X$. Let $\partial D$ be the frontier of $D$, within the topological space $X$. Then $\dim(\partial D) < \dim(D)$.
\item \label{tr2} Suppose $P \subseteq X$ is definable or $\vee$-definable over $A$, and suppose $P$ contains every element that is generic in $X$. Then there is an $A$-definable large open subset $X' \subseteq X$ such that $X' \subseteq P$. \item \label{tr3} Suppose $b \in X$ and $U$ is a neighborhood of $b$ in the topological space $X$. Let $S \subseteq \Mm^\eq$ be a small set of parameters. Then there is a smaller open neighborhood $b \in U' \subseteq U$ such that $\ulcorner U' \urcorner \dimind_A bS$. \end{enumerate} \end{lemma} \begin{proof} Part (\ref{tr1}) is \cite[Theorem~3.5]{p-minimal-cells}. Part (\ref{tr2}) is proved analogously to \cite[Lemma~3.15(2)]{wj-o-minimal}, making use of Fact~\ref{continuity-01}(\ref{dos}). Part (\ref{tr3}) requires a new argument. Take $U' = X \cap B$, where $B$ is a sufficiently small ball in $\Mm^n$ around $b$. A simple calculation shows that the set of balls in $\Mm^n$ is 0-dimensional. Therefore\[\dim(\ulcorner B \urcorner/\varnothing) = \dim(\ulcorner B \urcorner / A) = \dim(\ulcorner B \urcorner / AbS) = 0,\] implying $\ulcorner B \urcorner \dimind_A bS$. But $\ulcorner U' \urcorner \in \dcl^\eq(A \ulcorner B \urcorner)$, and so $\ulcorner U' \urcorner \dimind_A bS$. \end{proof} The next three lemmas show that given a definable set $X$ and definable equivalence relation $E$, we can improve the nature of $X/E$ by repeatedly replacing $X$ with a large open definable subset. \begin{lemma} \label{step-1} Let $X \subseteq \Mm^n$ be $A$-definable and non-empty and let $E$ be an $A$-definable equivalence relation on $X$. Then there is an $A$-definable large open subset $X' \subseteq X$ such that the restriction $E \restriction X'$ is an OQ equivalence relation on $X'$. \end{lemma} \begin{proof} The proof of \cite[Proposition~3.16]{wj-o-minimal} works verbatim, replacing $\ind^{\mathrm{\th}}$ with $\dimind$. 
\end{proof}
\begin{lemma} \label{step-2} Let $X \subseteq \Mm^n$ be $A$-definable and non-empty and let $E$ be an $A$-definable OQ equivalence relation on $X$. Then there is an $A$-definable large open subset $X' \subseteq X$ such that $X'/E := X'/(E \restriction X')$ is Hausdorff. \end{lemma}
\begin{proof}
The proof of \cite[Proposition~3.18]{wj-o-minimal} works verbatim, replacing $\ind^{\mathrm{\th}}$ with $\dimind$.
\end{proof}
\begin{lemma} \label{step-3} Let $X \subseteq \Mm^n$ be $A$-definable and non-empty and let $E$ be an $A$-definable OQ equivalence relation on $X$ such that $X/E$ is Hausdorff. Then there is an $A$-definable large open subset $X' \subseteq X$ such that $X'/E$ is locally Euclidean. \end{lemma}
\begin{proof}
The proof of \cite[Proposition~3.20]{wj-o-minimal} works verbatim, replacing $\ind^{\mathrm{\th}}$ with $\dimind$. Definable compactness continues to work properly thanks to Fact~\ref{dc-fact-pcf}. The use of the $\dcl(-)$ pregeometry on the home sort does not present any problems because of Fact~\ref{gagelman-exchange} and the fact that $\acl^\eq(A) \cap \Mm = \dcl^\eq(A) \cap \Mm$ for $A \subseteq \Mm^\eq$ in $\pCF$.
\end{proof}
\begin{theorem} \label{thm-pi} Let $X \subseteq \Mm^n$ be a non-empty definable set, and $E$ be a definable equivalence relation on $X$. There is a large open subset $X' \subseteq X$ such that
\begin{itemize}
\item $X'$ is a definable manifold
\item $E \restriction X'$ is an OQ equivalence relation on $X'$
\item $X'/E$ is Hausdorff
\item $X'/E$ is locally Euclidean.
\end{itemize}
If $X$ and $E$ are $M$-definable for some small model $M \preceq \Mm$, then we can take $X'$ to be $M$-definable.
\end{theorem}
\begin{proof}
The proof is similar to \cite[Theorem~3.14]{wj-o-minimal}. Take a small model $M$ defining $X$ and $E$. First apply Remark~\ref{definable-to-manifold} to obtain an $M$-definable large open subset $X_1 \subseteq X$ such that $X_1$ is a definable manifold. Let $E_1 = E \restriction X_1$.
Then apply Lemma~\ref{step-1} to get an $M$-definable large open subset $X_2 \subseteq X_1$ such that if $E_2 = E \restriction X_2$, then $E_2$ is an OQ equivalence relation on $X_2$. Then apply Lemma~\ref{step-2} to get an $M$-definable large open subset $X_3 \subseteq X_2$ such that if $E_3 = E \restriction X_3$, then $X_3/E_3$ is Hausdorff. (The relation $E_3$ is an OQ equivalence relation on $X_3$ by Fact~\ref{open-to-subs}(\ref{ots1}).) Then apply Lemma~\ref{step-3} to get an $M$-definable large open subset $X_4 \subseteq X_3$ such that if $E_4 = E \restriction X_4$, then $X_4/E_4$ is locally Euclidean. By Fact~\ref{open-to-subs}, $E_4$ is an OQ equivalence relation on $X_4$, and the quotient space $X_4/E_4$ is an open subspace of $X_3/E_3$. In particular, $X_4/E_4$ is Hausdorff. By Remark~\ref{large-remark}(\ref{lr2}), $X_4$ is a large open subset of $X$. Thus $X_4$ is a definable manifold. Take $X' = X_4$. \end{proof} \begin{theorem} \label{construction-1} Let $Y$ be an interpretable set. Then $Y$ admits at least one strongly admissible topology. If $Y$ is $M$-interpretable for a small model $M \preceq \Mm$, then $Y$ admits an $M$-interpretable strongly admissible topology. \end{theorem} \begin{proof} The proof is similar to \cite[Proposition~4.2]{wj-o-minimal}. Write $Y$ as $X/E$ for some $M$-definable set $X \subseteq \Mm^n$ and $M$-definable equivalence relation $E$. Proceed by induction on $\dim(X)$. If $\dim(X) = -\infty$, then $X$ and $Y$ are empty, and the unique topology on $Y$ is strongly admissible. Suppose $\dim(X) \ge 0$, so $X$ and $Y$ are non-empty. By Theorem~\ref{thm-pi}, there is an $M$-definable large open subset $X' \subseteq X$ such that $X'$ is a definable manifold, and if $E' = E \restriction X'$, then $E'$ is an OQ equivalence relation on $X'$, and the quotient $X'/E'$ is Hausdorff and locally Euclidean. 
Then the quotient topology on $X'/E'$ is interpretable by Fact~\ref{why-open}, and the map $X' \to X'/E'$ is a surjective interpretable open map, because $E'$ is an OQ equivalence relation. Therefore $X' \to X'/E'$ witnesses that $X'/E'$ (with the quotient topology) is strongly admissible. Let $Y' = X'/E'$, and let $Y'' = Y \setminus Y'$. Then $Y$ is a disjoint union of $Y'$ and $Y''$. Let $X''$ be the preimage of $Y''$ in $X$, and let $E'' = E \restriction X''$. Then $Y'' = X''/E''$. Moreover, $X'' \cap X' = \varnothing$, so $X'' \subseteq X \setminus X'$ and then $\dim(X'') < \dim(X)$ as $X'$ is large. By induction, $Y''$ admits a strongly admissible topology. So $Y'$ and $Y''$ both admit strongly admissible topologies. The disjoint union topology on $Y$ is strongly admissible by Proposition~\ref{products-etc}. \end{proof} \subsection{Tameness in admissible topologies} \label{ss-tame} In this section, we verify that the topological tameness theorems for definable sets extend to admissible topological spaces. \begin{lemma} \label{side-shrink} Let $X$ be a definably dominated interpretable topological space, interpretable over a small set of parameters $A$. Let $S$ be a small set of parameters. Let $p \in X$ be a point. Let $U \ni p$ be a neighborhood of $p$ in $X$. Then there is a smaller neighborhood $p \in U' \subseteq U$ such that $\ulcorner U' \urcorner \dimind_A pS$. \end{lemma} This is similar to but slightly stronger than \cite[Lemma~4.5]{wj-o-minimal}, and nearly the same proof works: \begin{proof} Let $f : Y \to X$ be a map witnessing definable domination. In particular, $Y$ is a definable set, and $f$ is a surjective open map. Let $A' \supseteq A$ be a small set over which $X, Y$, and $f$ are interpretable. Moving $f, Y, A'$ by an automorphism over $A$, we may assume $A' \dimind_A pS$ by Proposition~\ref{extension-2}. Take $\tilde{p} \in Y$ with $f(\tilde{p}) = p$. Then $f^{-1}(U)$ is a neighborhood of $\tilde{p}$.
By Lemma~\ref{tricks}(\ref{tr3}), there is a smaller neighborhood $U''$ of $\tilde{p}$ in $Y$ such that $\ulcorner U'' \urcorner \dimind_{A'} \tilde{p}S$. Take $U' = f(U'')$; this is a neighborhood of $p$ because $f$ is an open map. Then $\ulcorner U' \urcorner \dimind_{A'} pS$. As $A' \dimind_A pS$, left transitivity gives $\ulcorner U' \urcorner \dimind_A pS$. \end{proof} \begin{definition} Let $X$ be an interpretable topological space and $p \in X$ be a point. The \emph{local dimension} of $X$ at $p$, written $\dim_p(X)$, is the minimum of $\dim(U)$ as $U$ ranges over neighborhoods of $p$ in $X$. More generally, if $D$ is an interpretable subset of $X$, then $\dim_p(D)$ is the minimum of $\dim(U \cap D)$ as $U$ ranges over neighborhoods of $p$ in $X$. \end{definition} \begin{proposition} \label{local-dim} Let $X$ be a definably dominated interpretable topological space. Then $\dim(X) = \max_{p \in X} \dim_p(X)$. \end{proposition} \begin{proof} The proof of \cite[Proposition~4.6]{wj-o-minimal} goes through verbatim, replacing $\ind^{\mathrm{\th}}$ with $\dimind$, using Lemma~\ref{side-shrink}. \end{proof} \begin{corollary} \label{container} Let $X$ be a non-empty interpretable set with $n = \dim(X)$. Then there is a non-empty open definable set $D \subseteq \Mm^n$ and an interpretable injection $D \hookrightarrow X$. \end{corollary} \begin{proof} By Theorem~\ref{construction-1}, there is a strongly admissible topology on $X$. By Proposition~\ref{local-dim} there is a point $p \in X$ with $\dim_p(X) = n$. By local Euclideanity, $p$ has a neighborhood that is interpretably homeomorphic to a ball in $\Mm^m$ for some $m$. Clearly $m = \dim_p(X) = n$, so the inverse of this homeomorphism is the desired interpretable injection from an open ball $D \subseteq \Mm^n$ into $X$. \end{proof} \begin{proposition} \label{frontier-dimension} Let $X$ be an admissible interpretable topological space and $D$ be an interpretable subset. Then $\dim \partial D < \dim D$, $\dim \bd(D) < \dim(X)$, and $\dim \overline{D} = \dim D$.
\end{proposition} \begin{proof} The proof of \cite[Proposition~4.7]{wj-o-minimal} goes through almost verbatim, using Proposition~\ref{local-dim}. \end{proof} \begin{remark} \label{why-definably-dominated} Proposition~\ref{frontier-dimension} fails if we replace ``admissible'' with the weaker condition ``definably dominated.'' If $\Gamma_\infty$ is as in Example~\ref{half-compactification}, then the subset $\Gamma \subseteq \Gamma_\infty$ has the same dimension (zero) as its frontier $\{+\infty\}$. \end{remark} \begin{proposition} \label{mostly-euclidean} Let $X$ be an admissible interpretable topological space. Then there is a large open subset $X' \subseteq X$ such that $X'$ (with the subspace topology) is locally Euclidean. \end{proposition} \begin{proof} The proof is similar to \cite[Proposition~4.7(3) and Lemma~4.9]{wj-o-minimal}. Let $A$ be a small set of parameters over which $X$ is interpretable. Let $X_{\mathrm{Eu}}$ be the locally Euclidean locus of $X$, the set of points $p \in X$ such that some neighborhood of $p$ is interpretably homeomorphic to an open definable subset of $\Mm^n$ for some $n$. It is easy to see that $X_{\mathrm{Eu}}$ is $\vee$-definable, a small union of $A$-interpretable subsets of $X$. Let $Y$ be the union of the $A$-interpretable subsets $D \subseteq X$ with $\dim(D) < \dim(X)$. Like $X_{\mathrm{Eu}}$, the set $Y$ is $\vee$-definable over $A$. \begin{description} \item[Case 1:] $X_{\mathrm{Eu}} \cup Y \subsetneq X$. Take $b \in X \setminus (X_{\mathrm{Eu}} \cup Y)$. Then $\dim(b/A) = \dim(X)$ by Proposition~\ref{continuity-01}(\ref{dos}) and the fact that $b \notin Y$. By local definability and Lemma~\ref{side-shrink}, there is a neighborhood $U$ of $b$ such that $U$ is homeomorphic to a definable subset of $\Mm^n$ and $\ulcorner U \urcorner \dimind_A b$. Let $f : U \to \Mm^n$ be an interpretable topological embedding. Let $B \supseteq A$ be a small set of parameters defining $U$ and $f$.
Moving $B, f$ by an automorphism in $\Aut(\Mm/A \ulcorner U \urcorner)$, we may assume $B \dimind_{A \ulcorner U \urcorner} b$. By left transitivity, $B \dimind_A b$. Then \begin{equation*} \dim(X) = \dim(b/A) = \dim(b/B) = \dim(f(b)/B) \le \dim(f(U)) = \dim(U) \le \dim(X). \end{equation*} Therefore $f(b)$ is generic in the $B$-definable set $f(U)$. By Lemma~\ref{generic-eucl}, $f(U)$ is locally Euclidean on an open neighborhood of $f(b)$. Then $U$ is locally Euclidean on an open neighborhood of $b$, contradicting the fact that $b \notin X_{\mathrm{Eu}}$. \item[Case 2:] $X = X_{\mathrm{Eu}} \cup Y$. Then saturation gives $X = X_{\mathrm{Eu}} \cup Y_0$ where $Y_0$ is a finite union of $A$-interpretable subsets of $X$ of lower dimension. In particular, $\dim(Y_0) < \dim(X)$. By Proposition~\ref{frontier-dimension}, the closure $\overline{Y_0}$ has lower dimension than $X$. Take $X' = X \setminus \overline{Y_0}$. Then $X'$ is a large open subset of $X$ contained in $X_{\mathrm{Eu}}$, hence locally Euclidean. \qedhere \end{description} \end{proof} \begin{corollary} \label{generically-n-manifold} If $X$ is an admissible interpretable topological space of dimension $n$, then there is a large open subset $X' \subseteq X$ such that $X'$ is everywhere locally homeomorphic to $\Mm^n$. \end{corollary} \begin{proof} By Proposition~\ref{mostly-euclidean}, we may pass to a large open subset and assume $X$ is locally Euclidean. Then partition $X$ as $\bigcup_{i = 0}^n X_i$, where $X_i = \{p \in X : \dim_p(X) = i\}$. Local Euclideanity ensures that $p \mapsto \dim_p(X)$ is locally constant, so each $X_i$ is open. By Proposition~\ref{local-dim}, $\dim(X_i) = i$ for non-empty $X_i$. Therefore $X' := X_n$ is non-empty, and its complement has dimension $< n$. \end{proof} A slightly different argument from Proposition~\ref{mostly-euclidean} gives the following: \begin{lemma} \label{some-open} Let $X$ be a non-empty definably dominated interpretable topological space. Then $X$ has a non-empty open subset $X' \subseteq X$ that is strongly admissible.
\end{lemma} \begin{proof} Take $Y \subseteq \Mm^n$ definable and $f : Y \to X$ an interpretable surjective open map. Let $E$ be the kernel equivalence relation $\{(x,y) \in Y^2 : f(x) = f(y)\}$. Then we can identify $X$ with $Y/E$ on the level of sets. Moreover, $X$ is homeomorphic to $Y/E$ with the quotient topology: \begin{itemize} \item If $U \subseteq X$ is open, then $f^{-1}(U)$ is open because $f$ is continuous. \item If $f^{-1}(U)$ is open then $U = f(f^{-1}(U))$ is open because $f$ is a surjective open map. \end{itemize} Thus we may identify $X$ with the quotient space $Y/E$. The fact that $Y \to Y/E$ is open means that $E$ is an OQ equivalence relation. By Theorem~\ref{thm-pi}, there is a large (hence non-empty) open subset $Y' \subseteq Y$ such that $Y'$ is a definable manifold, and $Y'/E$ is locally Euclidean. By Fact~\ref{open-to-subs}, $Y'/E$ is an open subspace of $Y/E = X$, and $Y' \to Y'/E$ is an open map (i.e., $E \restriction Y'$ is an OQ equivalence relation). Then $X' := Y'/E$ is strongly admissible. \end{proof} \begin{proposition} \label{gen-con} Let $X, Y$ be admissible topological spaces. Let $f : X \to Y$ be interpretable. \begin{enumerate} \item There is a large open subset $X' \subseteq X$ on which $f$ is continuous. \item We can write $X$ as a finite disjoint union of locally closed interpretable subsets on which $f$ is continuous. \end{enumerate} \end{proposition} \begin{proof} The proof of \cite[Proposition~4.12]{wj-o-minimal} works, essentially verbatim, using generic continuity of definable functions in $\pCF$. \end{proof} \begin{corollary} \label{gen-con-2} Let $X, Y$ be $A$-interpretable admissible topological spaces, and $f : X \to Y$ be $A$-interpretable. If $b \in X$ is generic over $A$, then $f$ is continuous at $b$.
\end{corollary} \section{Admissible groups} \label{adm-sec-2} \subsection{Uniqueness} \begin{lemma} \label{to-admissible} Let $G$ be an interpretable group, and let $\tau$ be an interpretable topology that is invariant under left translations. If $\tau$ is definably dominated, then $\tau$ is admissible and locally Euclidean. \end{lemma} \begin{proof} Lemma~\ref{some-open} gives a non-empty open subset of $(G,\tau)$ that is locally Euclidean. By translation invariance, $(G,\tau)$ is locally Euclidean everywhere. Then $(G,\tau)$ is definably dominated and locally definable, i.e., admissible. \end{proof} \begin{lemma} \label{uniqueness-0} Let $G$ be an interpretable group. Then $G$ admits at most one admissible topology $\tau$ that is invariant under left-translations. \end{lemma} \begin{proof} Suppose $\tau_1$ and $\tau_2$ are left-invariant admissible topologies on $G$. By Proposition~\ref{gen-con}, the map $\id_G : (G,\tau_1) \to (G,\tau_2)$ is continuous at at least one point $a \in G$. If $\delta \in G$ and $f(x) = \delta \cdot x$, then the composition \begin{equation*} (G,\tau_1) \stackrel{f}{\to} (G,\tau_1) \stackrel{\id_G}{\to} (G,\tau_2) \stackrel{f^{-1}}{\to} (G,\tau_2) \end{equation*} is continuous at $\delta^{-1} a$. That is, $\id_G : (G,\tau_1) \to (G,\tau_2)$ is continuous at $\delta^{-1} a$, for arbitrary $\delta$. Then $\id_G : (G,\tau_1) \to (G,\tau_2)$ is continuous (everywhere). Similarly, $\id_G : (G,\tau_2) \to (G,\tau_1)$ is continuous, implying $\tau_1 = \tau_2$. \end{proof} Lemma~\ref{uniqueness-0} shows that an interpretable group admits at most one admissible group topology. In the following section, we will see that every interpretable group admits \emph{at least one} admissible group topology. For now, we draw a useful consequence of the above lemmas. \begin{lemma} \label{upgrade} Let $G$ be an interpretable group. Let $\tau$ be a definably dominated interpretable topology on $G$ that is invariant under left translations.
Then $\tau$ is an admissible group topology on $G$. \end{lemma} \begin{proof} First, $\tau$ is admissible by Lemma~\ref{to-admissible}. We claim $\tau$ is invariant under right translations. Take $g \in G$, and let $\tau'$ be the image of $\tau$ under the right translation $x \mapsto g \cdot x$. Then $\tau'$ is invariant under left translations (because left and right translations commute). By Lemma~\ref{uniqueness-0}, $\tau' = \tau$. Thus $\tau$ is right-invariant. In particular, right translations are continuous. Let $\tau^{-1}$ be the image of $\tau$ under the inverse map. The fact that $\tau$ is right-invariant implies that $\tau^{-1}$ is left invariant. Then $\tau^{-1} = \tau$, implying that the inverse map is continuous. Finally we show that the group operation $m : G \times G \to G$ is continuous. By Proposition~\ref{gen-con} (and Proposition~\ref{products-etc}), the group operation $m$ is continuous at at least one point $(a,b)$. For any $\delta, \epsilon \in G$, we have \begin{equation*} m(x,y) = \epsilon^{-1} \cdot m(\epsilon \cdot x, y \cdot \delta) \cdot \delta^{-1}. \end{equation*} The right hand side is continuous at $(x,y) = (\epsilon^{-1} \cdot a, b \cdot \delta^{-1})$, and therefore so is the left hand side. As $\epsilon, \delta$ are arbitrary, $m$ is continuous everywhere. \end{proof} \subsection{Existence} \label{sec:ex} In this section we show that each interpretable group admits at least one strongly admissible group topology. \begin{lemma} \label{genericky-tool} Let $G$ be an interpretable group and let $U \subseteq G$ be a large subset. \begin{enumerate} \item \label{gt1} Finitely many left-translates of $U$ cover $G$. \item \label{gt2} For any $x, y \in G$, there is a left-translate $g \cdot U$ containing both $x$ and $y$. \end{enumerate} \end{lemma} \begin{proof} Take a small set of parameters $A$ defining $G$ and $U$. \begin{enumerate} \item Take $(g_1,\ldots,g_{n+1})$ generic in $G^{n+1}$ over $A$. 
Note that $\dim(g_i/A,g_1,\ldots,g_{i-1}) = \dim(g_i/A) = \dim(G)$ for each $i$, and so $g_i \dimind_A g_1,\ldots,g_{i-1}$. We claim $G \subseteq \bigcup_{i = 1}^{n+1} g_i \cdot U$. Fix any $h \in G$. For $0 \le i \le n+1$ let $d_i = \dim(h/A,g_1,\ldots,g_i)$. Then \begin{equation*} 0 \le d_{n+1} \le d_n \le \cdots \le d_1 \le d_0 = \dim(h/A) \le \dim(G) = n. \end{equation*} It is impossible that $0 \le d_{n+1} < d_n < \cdots < d_0 \le n$, so there is some $1 \le i \le n+1$ such that $d_{i-1} = d_i$, i.e., $\dim(h/A,g_1,\ldots,g_{i-1}) = \dim(h/A,g_1,\ldots,g_i)$. Then \begin{equation*} h \dimind_{A, g_1,\ldots,g_{i-1}} g_i \text{ and } g_1,\ldots,g_{i-1} \dimind_A g_i, \end{equation*} so $h \dimind_A g_i$ by left transitivity. Then $\dim(g_i/Ah) = \dim(g_i/A) = \dim(G)$, and $g_i$ is generic in $G$ over $Ah$. Then $\dim(g_i^{-1} \cdot h / Ah) = \dim(g_i / Ah) = \dim(G)$, and so $g_i^{-1} \cdot h$ must be in the $Ah$-interpretable large subset $U$, implying $h \in g_i \cdot U$. \item Take $g \in G$ generic over $Axy$. Then $\dim(g^{-1} \cdot x / Axy) = \dim(g/Axy) = \dim(G)$, and so $g^{-1} \cdot x$ is in the $Axy$-interpretable large subset $U$. Then $x \in g \cdot U$, and similarly $y \in g \cdot U$. \qedhere \end{enumerate} \end{proof} \begin{lemma} \label{finite-cover} Let $X$ be a Hausdorff interpretable topological space. Let $U_1, \ldots, U_n$ be finitely many interpretable open subsets covering $X$. If each $U_i$ is strongly admissible, then $X$ is strongly admissible. \end{lemma} \begin{proof} The space $X$ is locally Euclidean because it is covered by locally Euclidean spaces. The disjoint union $U_1 \sqcup \cdots \sqcup U_n$ is manifold dominated by Proposition~\ref{products-etc}. The map $U_1 \sqcup \cdots \sqcup U_n \to U_1 \cup \cdots \cup U_n = X$ is an interpretable surjective open map, and so $X$ is manifold dominated by Lemma~\ref{dom-trans}. Then $X$ is Hausdorff, locally Euclidean, and manifold dominated.
\end{proof} \begin{proposition} \label{construction-2} Let $G$ be an interpretable group. Then there is a strongly admissible group topology $\tau$ on $G$. \end{proposition} \begin{proof} By Theorem~\ref{construction-1} there is a strongly admissible topology $\tau_0$ on $G$. Let $C$ be a small set of parameters over which everything is defined. If $a, b \in G$, let $a \preceq b$ mean that the map $x \mapsto b \cdot a^{-1} \cdot x$ is $\tau_0$-continuous at $a$. Then $\preceq$ is a $C$-interpretable preorder on $G$. Let $\approx$ be the associated $C$-interpretable equivalence relation. \begin{claim} If $(a,b) \in G^2$ is generic over $C$, then $a \approx b$. \end{claim} \begin{claimproof} By symmetry it suffices to show $a \preceq b$. Let $\delta = b \cdot a^{-1}$. Then $\dim(a,\delta/C) = \dim(a,b/C) = 2 \dim(G)$, which implies that $a$ is generic in $G$ over $C \delta$. By Corollary~\ref{gen-con-2}, the map $x \mapsto \delta \cdot x$ is $\tau_0$-continuous at $a$. \end{claimproof} Fix $a_0 \in G$ generic over $C$. If $b \in G$ is generic over $Ca_0$, then $b \approx a_0$. Therefore the equivalence class of $a_0$ is a large subset of $G$. Let $U$ be the $\tau_0$-interior of this equivalence class. Then $U$ is a large subset of $G$ by Proposition~\ref{frontier-dimension}. We have constructed a large $\tau_0$-open subset $U \subseteq G$ such that if $a, b \in U$, then the map $x \mapsto b \cdot a^{-1} \cdot x$ is $\tau_0$-continuous at $x = a$. Consider $G \times U$, where $G$ has the discrete topology(!), $U$ has the topology $\tau_0 \restriction U$, and $G \times U$ has the product topology. \begin{claim} Let $E$ be the equivalence relation on $G \times U$ given by \begin{equation*} (x,y)E(x',y') \iff x \cdot y = x' \cdot y'. \end{equation*} Then $E$ is an OQ equivalence relation on $G \times U$ (Definition~\ref{def-oq}). \end{claim} \begin{claimproof} Fix an interpretable basis $\mathcal{B}$ for $U$. 
Then $\{\{a\} \times B : a \in G, ~ B \in \mathcal{B}\}$ is a basis for $G \times U$. We verify that $E$ satisfies criterion (\ref{cfo4}) of Remark~\ref{criterion-for-oer}. Fix $p = (a,b)$ and $p' = (c,d)$ in $G \times U$, with $a \cdot b = c \cdot d$. Fix a basic neighborhood $\{a\} \times B$ of $(a,b)$. We must find a basic neighborhood $\{c\} \times B'$ of $(c,d)$ such that every element of $\{c\} \times B'$ is equivalent to an element of $\{a\} \times B$. Note that $b \cdot d^{-1} = a^{-1} \cdot c$. As $b, d \in U$, the map \begin{equation*} f(x) = b \cdot d^{-1} \cdot x = a^{-1} \cdot c \cdot x \end{equation*} is continuous at $x = d$, and maps $d$ to $b$. As $B$ is a neighborhood of $b$, there is a neighborhood $B'$ of $d$ such that $f(B') \subseteq B$. If $(c,x) \in \{c\} \times B'$ then $(c,x)E(a,f(x))$ and $(a,f(x)) \in \{a\} \times B$. So every element of $\{c\} \times B'$ is equivalent to an element of $\{a\} \times B$, as required. \end{claimproof} Let $\tau$ be the quotient topology on $(G \times U)/E = G$. By Fact~\ref{why-open}, $\tau$ is an interpretable topology on $G$ (but not necessarily Hausdorff). For each $a \in G$, the composition \begin{gather*} U \hookrightarrow G \times U \to G \\ x \mapsto (a,x) \mapsto a \cdot x \end{gather*} is an injective open map, i.e., an open embedding. So we have proven the following: \begin{claim} \label{property-claim} There is an interpretable topology $\tau$ on $G$ such that for any $a \in G$, the map \begin{align*} i_a : (U,\tau_0 \restriction U) & \to (G,\tau) \\ i_a(x) &= a \cdot x \end{align*} is an open embedding. \end{claim} The family of open embeddings $\{i_a\}_{a \in G}$ is jointly surjective, and so the property in Claim~\ref{property-claim} uniquely determines $\tau$. The property is invariant under left translation, and therefore $\tau$ is invariant under left translations. We claim $\tau$ is Hausdorff.
Given distinct $x, y \in G$, there is some $a \in G$ such that $\{x,y\} \subseteq a \cdot U$, by Lemma~\ref{genericky-tool}(\ref{gt2}). Then $a \cdot U$ is a Hausdorff open subset of $(G,\tau)$ containing $x$ and $y$, so $x$ and $y$ are separated by neighborhoods. By Remark~\ref{open-obvious}, $(U,\tau_0 \restriction U)$ is strongly admissible. By Lemma~\ref{genericky-tool}(\ref{gt1}), finitely many left translates of $U$ cover $G$. Each of these translates is homeomorphic to $(U,\tau_0 \restriction U)$, so by Lemma~\ref{finite-cover}, $(G,\tau)$ is strongly admissible. In summary, $\tau$ is an interpretable topology, $\tau$ is invariant under left translation, $\tau$ is Hausdorff, and $\tau$ is strongly admissible. By Lemma~\ref{upgrade}, $\tau$ is a group topology on $G$. \end{proof} \begin{theorem} \label{adm-group-thm} Let $G$ be an interpretable group. \begin{enumerate} \item If $\tau$ is an interpretable group topology on $G$, then the following properties are equivalent: \begin{enumerate} \item $\tau$ is strongly admissible. \item $\tau$ is admissible. \item $\tau$ is manifold dominated. \item $\tau$ is definably dominated. \end{enumerate} \item There is a unique interpretable group topology $\tau$ satisfying these equivalent conditions. \end{enumerate} \end{theorem} \begin{proof} The weakest of the four conditions is definable domination, and the strongest is strong admissibility. Proposition~\ref{construction-2} gives a strongly admissible group topology $\tau_0$. Suppose $\tau$ is definably dominated. Then $\tau$ is admissible by Lemma~\ref{to-admissible}, and so $\tau = \tau_0$ by Lemma~\ref{uniqueness-0}. \end{proof} \subsection{Examples} \begin{definition} An \emph{admissible group} is an interpretable group with an admissible group topology. \end{definition} By Theorem~\ref{adm-group-thm}, every interpretable group becomes an admissible group in a unique way. 
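As a minimal concrete illustration (a sketch; the ball notation $B_\gamma(a)$ and the valuation symbol $v$ are used only here, not elsewhere in the paper): consider the additive group $(\Mm,+)$. The valuation topology, with basic open sets the balls
\begin{equation*}
B_\gamma(a) = \{x \in \Mm : v(x - a) \ge \gamma\}, \qquad a \in \Mm, ~ \gamma \in \Gamma,
\end{equation*}
is an interpretable group topology on the definable manifold $\Mm$, hence definably dominated, and so by Theorem~\ref{adm-group-thm} it is the unique admissible group topology on $(\Mm,+)$.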
\begin{example} If $G$ is a definable group, then Pillay shows that $G$ admits a unique definable manifold topology \cite{Pillay-G-in-p}. Definable manifolds are admissible, so Pillay's topology agrees with the unique admissible group topology on $G$. \end{example} \begin{example} The discrete topology on the value group $\Gamma$ is admissible (Example~\ref{gamma}). This is necessarily the unique admissible group topology on $\Gamma$. \end{example} \begin{remark} \label{local-dim-remark} Let $G$ be an interpretable group, topologized with its unique admissible topology. Let $n = \dim(G)$. By Proposition~\ref{local-dim}, there is a point $a \in G$ at which the local dimension is $n$. By translation symmetry, the local dimension is $n$ at every point. As $G$ is locally Euclidean, we see that $G$ is locally homeomorphic to an open subset of $\Mm^n$ at any point. As a consequence, the admissible group topology on $G$ is discrete if and only if $\dim(G) = 0$. \end{remark} \subsection{Subgroups, quotients, and homomorphisms} In this section, we analyze the topological properties of interpretable homomorphisms. \begin{proposition} \label{hom-cts} Let $f : G \to H$ be an interpretable homomorphism between two admissible groups. Then $f$ is continuous. \end{proposition} \begin{proof} Similar to the proof of Lemma~\ref{uniqueness-0}. \end{proof} \begin{proposition} \label{closed-subgroup} Let $G$ be an interpretable group and let $H$ be an interpretable subgroup. Let $\tau_G$ and $\tau_H$ be the admissible topologies on $G$ and $H$. \begin{enumerate} \item \label{cs1} $H$ is $\tau_G$-closed. \item \label{cs2} $\tau_H$ is the restriction of $\tau_G$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item Because $\tau_G$ is invariant under translations, the $\tau_G$-frontier $\partial H$ is a union of cosets of $H$.
Therefore, one of two things happens: $\partial H = \varnothing$ (meaning that $H$ is $\tau_G$-closed), or $\dim(\partial H) \ge \dim(H)$ (contradicting Proposition~\ref{frontier-dimension}). \item The restriction $\tau_G \restriction H$ is admissible by Proposition~\ref{subspaces}. The group operation $H \times H \to H$ is continuous with respect to $\tau_G \restriction H$, and so $(H,\tau_G \restriction H)$ is an admissible group. By the uniqueness of the topology, $\tau_G \restriction H = \tau_H$. \qedhere \end{enumerate} \end{proof} \begin{corollary} \label{ce-cor} If $f : G \to H$ is an injective homomorphism of admissible groups, then $f$ is a closed embedding. \end{corollary} \begin{proposition} \label{open-subgroup} Let $G$ be an admissible group and $H$ be an interpretable subgroup. Then $H$ is open in the admissible topology on $G$ if and only if $\dim(H) = \dim(G)$. \end{proposition} \begin{proof} $H$ is open if and only if $H$ has non-empty interior (as a subset of $G$). If $\dim(H) = \dim(G)$, then $H$ has non-empty interior by Proposition~\ref{frontier-dimension}. Conversely, suppose $H$ has non-empty interior. By Remark~\ref{local-dim-remark}, $\dim_x(G) = \dim(G)$ for all $x \in G$. Taking $x$ in the interior of $H$, we see $\dim(H) = \dim(G)$. \end{proof} \begin{proposition}\label{quotient-prop} Let $G$ be an admissible group and let $H$ be an interpretable normal subgroup. \begin{enumerate} \item \label{qp3} The quotient topology on $G/H$ agrees with the unique admissible group topology on $G/H$. \item The map $G \to G/H$ is an open map. \end{enumerate} \end{proposition} \begin{proof} Let $E$ be the equivalence relation $xEy \iff xH = yH$. \begin{claim} $E$ is an OQ equivalence relation on $G$ (Definition~\ref{def-oq}). \end{claim} \begin{claimproof} We verify criterion (\ref{cfo4}) of Remark~\ref{criterion-for-oer}. Suppose $g_1, g_2 \in G$ are $E$-equivalent, and $B$ is a neighborhood of $g_1$. Take $h \in H$ such that $g_1 \cdot h = g_2$. 
Then $B \cdot h$ is a neighborhood of $g_2$, and every element of $B \cdot h$ is $E$-equivalent to an element of $B$. \end{claimproof} Consider $G/H$ with the quotient topology $\tau$. By the claim, $G \to G/H$ is an open map. By Fact~\ref{why-open}, $\tau$ is interpretable. It remains to show that $\tau$ is the admissible group topology on $G/H$. \begin{claim} $\tau$ is a group topology. \end{claim} \begin{claimproof} We claim $(x,y) \mapsto x \cdot y^{-1}$ is continuous with respect to $\tau$. Fix $a,b \in G/H$. Take lifts $\tilde{a}, \tilde{b} \in G$. Let $U$ be a neighborhood of $a \cdot b^{-1}$ in $G/H$. Let $\pi : G \to G/H$ be the quotient map. Then $\pi^{-1}(U)$ is an open neighborhood of $\tilde{a} \cdot \tilde{b}^{-1}$ in $G$. By continuity of the group operations on $G$, there are open neighborhoods $W \ni \tilde{a}$ and $V \ni \tilde{b}$ in $G$ such that $W \cdot V^{-1} \subseteq \pi^{-1}(U)$, in the sense that \begin{equation*} x \in W, ~ y \in V \implies x \cdot y^{-1} \in \pi^{-1}(U). \end{equation*} As $\pi$ is an open map, $\pi(W)$ and $\pi(V)$ are open neighborhoods of $a$ and $b$. Then $\pi(W) \cdot \pi(V)^{-1} = \pi(W \cdot V^{-1}) \subseteq \pi(\pi^{-1}(U)) = U$. This shows continuity at $(a,b)$. \end{claimproof} By Proposition~\ref{closed-subgroup}(\ref{cs1}), $H$ is closed, which implies that the singleton $\{1\} \subseteq G/H$ is closed in $\tau$ by definition of the quotient topology. As $\tau$ is a group topology, it follows that $\tau$ is Hausdorff. Then the open map $G \to G/H$ shows that $\tau$ is definably dominated, by Lemma~\ref{dom-trans}(\ref{dt1}). Finally, $\tau$ is the admissible group topology on $G/H$ by Theorem~\ref{adm-group-thm}. \end{proof} \begin{corollary} \label{o-cor} Let $f : G \to H$ be a surjective interpretable homomorphism between two admissible groups. Then $f$ is an open map.
\end{corollary} \section{Definable compactness in strongly admissible spaces} \label{def-com-sec} Recall definable compactness from Definition~\ref{fornasiero-definition}. Let $X$ be an interpretable topological space in a $p$-adically closed field $M$ with value group $\Gamma$. \begin{definition}[{\cite[Definition~2.6]{johnson-yao}}] A \emph{$\Gamma$-exhaustion} of $X$ is an interpretable family $\{X_\gamma\}_{\gamma \in \Gamma}$, such that \begin{itemize} \item Each $X_\gamma$ is a definably compact, clopen subset of $X$. \item If $\gamma \le \gamma'$, then $X_\gamma \subseteq X_{\gamma'}$. \item $X = \bigcup_{\gamma \in \Gamma} X_\gamma$. \end{itemize} \end{definition} \begin{proposition} \label{gx-purpose} Let $\{X_\gamma\}_{\gamma \in \Gamma}$ be a $\Gamma$-exhaustion of an interpretable topological space $X$. Then $X$ is definably compact if and only if $X = X_\gamma$ for some $\gamma \in \Gamma$. \end{proposition} \begin{proof} Each $X_\gamma$ is definably compact, so if $X = X_\gamma$ then $X$ is definably compact. Conversely, suppose $X$ is definably compact. Then the family $\{X \setminus X_\gamma\}_{\gamma \in \Gamma}$ is a downward-directed family of closed sets with empty intersection. By definable compactness, some $X \setminus X_\gamma$ must be empty. \end{proof} \begin{lemma} \phantomsection \label{zhang-yao} \begin{enumerate} \item Let $f : X \to Y$ be an interpretable surjective open map between Hausdorff interpretable topological spaces. Suppose $\{X_\gamma\}_{\gamma \in \Gamma}$ is a $\Gamma$-exhaustion of $X$. Let $Y_\gamma = f(X_\gamma)$ for each $\gamma$. Then $\{Y_\gamma\}_{\gamma \in \Gamma}$ is a $\Gamma$-exhaustion of $Y$. \item \label{zy2} If $X$ is manifold dominated, then $X$ has a $\Gamma$-exhaustion. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Each $Y_\gamma$ is open because $f$ is an open map. Each $Y_\gamma$ is definably compact as the image of a definably compact set under a continuous definable map.
Each $Y_\gamma$ is closed because $Y$ is Hausdorff. We have $\bigcup_\gamma Y_\gamma = \bigcup_\gamma f(X_\gamma) = f(X) = Y$, because $f$ is surjective. It is clear that if $\gamma \le \gamma'$, then $Y_{\gamma} = f(X_{\gamma}) \subseteq f(X_{\gamma'}) = Y_{\gamma'}$. \item Take a definable manifold $U$ and an interpretable surjective open map $f : U \to X$. By \cite[Proposition~2.8]{johnson-yao}, $U$ admits a $\Gamma$-exhaustion. By the first point, we can push this forward to a $\Gamma$-exhaustion on $X$. \qedhere \end{enumerate} \end{proof} The idea of Lemma~\ref{zhang-yao} came out of discussions with Ningyuan Yao and Zhentao Zhang. \begin{remark} Say that an interpretable topological space $X$ is ``locally definably compact'' if for every $p \in X$, there is a definably compact subspace $Y \subseteq X$ such that $p$ is in the interior of $Y$. Lemma~\ref{zhang-yao} shows that any manifold dominated space $X$ is locally definably compact. Indeed, take a $\Gamma$-exhaustion $\{X_\gamma\}_{\gamma \in \Gamma}$. If $p \in X$, then there is $\gamma \in \Gamma$ such that $p \in X_\gamma$, and we can take $Y = X_\gamma$. \end{remark} \begin{example} \label{impl-2} Let $D \subseteq \Mm^2$ be the definable set $\{(x,y) \in \Mm^2 : x = 0 ~ \vee ~ y \ne 0\}$ from Example~\ref{impl-1}. Using Fact~\ref{dc-fact-pcf}(\ref{dfp1}), one can see that if $N$ is a neighborhood of $(0,0)$ in $D$, and $\overline{N}$ is the closure of $N$ in $D$, then $\overline{N}$ is not definably compact. In other words, local definable compactness fails at $(0,0) \in D$. Therefore $D$ is not manifold dominated, and not strongly admissible (though it is admissible, trivially). \end{example} \begin{theorem} \label{definability-annoy} Definable compactness is a definable property, on families of strongly admissible spaces, and families of interpretable groups.
More precisely: \begin{enumerate} \item \label{da1} If $\{(X_i,\tau_i)\}_{i \in I}$ is an interpretable family of strongly admissible spaces, then the set $\{i \in I : (X_i,\tau_i) \text{ is definably compact}\}$ is an interpretable subset of $I$. \item \label{da2} If $\{G_i\}_{i \in I}$ is an interpretable family of groups, then the set \[\{i \in I : G_i \text{ is definably compact with respect to the admissible group topology}\}\] is an interpretable subset of $I$. \end{enumerate} \end{theorem} This follows formally from Proposition~\ref{gx-purpose} and Lemma~\ref{zhang-yao}, using the method of \cite[Proposition~4.1]{johnson-fsg}. However, we can give a cleaner proof, using the following lemma. \begin{lemma} \label{trickery} Let $X$ be strongly admissible and definably compact. Then there is a closed and bounded definable set $D \subseteq \Mm^n$ and a continuous interpretable surjection $f : D \to X$. \end{lemma} \begin{proof} Take $Y$ a definable manifold and $g : Y \to X$ an interpretable surjective open map. Take $U_1, \ldots, U_m \subseteq Y$ an open cover and interpretable open embeddings $h_i : U_i \to \Mm^{n_i}$. Let $\{U_{i,\gamma}\}_{\gamma \in \Gamma}$ be a $\Gamma$-exhaustion of $U_i$ for each $i$. Then $\left\{\bigcup_{i = 1}^m U_{i,\gamma} \right\}_{\gamma \in \Gamma}$ is a $\Gamma$-exhaustion of $Y$, and $\left\{ g\left( \bigcup_{i = 1}^m U_{i,\gamma} \right) \right\}_{\gamma \in \Gamma}$ is a $\Gamma$-exhaustion of $X$ by Lemma~\ref{zhang-yao}. As $X$ is definably compact, there is some $\gamma_0 \in \Gamma$ such that $g \left( \bigcup_{i = 1}^m U_{i,\gamma_0} \right) = X$ by Proposition~\ref{gx-purpose}. Each set $U_{i,\gamma_0}$ is homeomorphic to a definable subset of $\Mm^{n_i}$, via the embeddings $h_i$. Then the disjoint union $U_{1,\gamma_0} \sqcup \cdots \sqcup U_{m,\gamma_0}$ is homeomorphic to a definable set $D \subseteq \Mm^n$ for sufficiently large $n$.
The natural map \begin{equation*} U_{1,\gamma_0} \sqcup \cdots \sqcup U_{m,\gamma_0} \to \bigcup_{i = 1}^m U_{i,\gamma_0} \hookrightarrow Y \to X \end{equation*} is surjective by choice of $\gamma_0$. So there is a continuous surjection $D \to X$. Finally, $D$ is homeomorphic to the disjoint union $U_{1,\gamma_0} \sqcup \cdots \sqcup U_{m,\gamma_0}$, which is definably compact, and so $D$ is closed and bounded by Fact~\ref{dc-fact-pcf}(\ref{dfp1}). \end{proof} Using this we can prove Theorem~\ref{definability-annoy}. \begin{proof}[Proof (of Theorem~\ref{definability-annoy})] \begin{enumerate} \item Let $S$ be the set of $i \in I$ such that $(X_i,\tau_i)$ is definably compact. It suffices to show that $S$ and $I \setminus S$ are $\vee$-definable, i.e., small unions of $M_0$-definable sets. By Lemma~\ref{trickery}, $i \in S$ if and only if there is a definable set $D$ and a surjective continuous interpretable map $D \to X_i$. It is easy to see that the set of such $i$ is $\vee$-definable. Meanwhile, by our definition of definable compactness, $i \notin S$ if and only if there is a downward-directed interpretable family $\{F_j\}_{j \in J}$ of non-empty closed subsets of $X_i$ such that $\bigcap_{j \in J} F_j = \varnothing$. Again, the set of such $i$ is $\vee$-definable. \item Let $S$ be the set of $i \in I$ such that $G_i$ is definably compact, with respect to the unique admissible group topology on $G_i$. Then $i \in S$ (resp. $i \notin S$) if and only if there is an interpretable topology $\tau$ on $G_i$, a definable set $D \subseteq \Mm^n$, and an interpretable function $f : D \to G_i$ such that \begin{itemize} \item $\tau$ is a Hausdorff group topology on $G_i$ \item $f$ is a surjective open map. \item $(G_i,\tau)$ is definably compact (resp. not definably compact). \end{itemize} Again, these conditions are easily seen to be $\vee$-definable, using part (\ref{da1}) for the third point. Thus $S$ and its complement are both $\vee$-definable.
\qedhere \end{enumerate} \end{proof} Using different methods, Pablo And\'ujar Guerrero and the author have shown that definable compactness is a definable property in \emph{any} interpretable family of topological spaces \cite[Theorem~8.16]{andujar-johnson}. In other words, Theorem~\ref{definability-annoy}(\ref{da1}) holds without the assumption of strong admissibility. \subsection{Definable compactness in $\Qq_p$} Fix a copy of $\Qq_p$ embedded into the monster $\Mm \models p\mathrm{CF}$. If $X$ is a $\Qq_p$-interpretable topological space, then $X(\Qq_p)$ is naturally a topological space. \begin{warning} $X(\Qq_p)$ is usually not a subspace of $X(\Mm)$. This is analogous to how if $M$ is a highly saturated elementary extension of $\Rr$, then $\Rr$ with the order topology is not a subspace of $M$ with the order topology. In fact, if we start with $(M,\le)$ and take the induced subspace topology on $\Rr$, we get the discrete topology. \end{warning} \begin{proposition} \label{actually-compact} Let $X$ be a strongly admissible topological space in $\Qq_p$. Then $X$ is definably compact if and only if $X(\Qq_p)$ is compact. \end{proposition} \begin{proof} If $X(\Qq_p)$ is compact, then $X$ is definably compact by Fact~\ref{dc-fact}(\ref{df1}). Conversely, suppose $X$ is definably compact. By Lemma~\ref{trickery}, there is a closed bounded definable set $D \subseteq \Mm^n$ and an interpretable continuous surjection $f : D \to X$. As $\Qq_p \preceq \Mm$ we can take $D$ and $f$ to be defined over $\Qq_p$. Then $f : D(\Qq_p) \to X(\Qq_p)$ is a continuous surjection and $D(\Qq_p)$ is compact, so $X(\Qq_p)$ is compact. \end{proof} \begin{proposition} \label{eliminator} Let $X$ be a $\Qq_p$-interpretable strongly admissible topological space. If $X$ is definably compact, then $X$ is a definable manifold. In particular, there is a $\Qq_p$-interpretable set-theoretic bijection between $X$ and a definable set.
\end{proposition} \begin{proof} For each point $a \in X(\Qq_p)$, there is an open neighborhood $U_a$ and an open embedding $f_a : U_a \to \Mm^{n_a}$ where $n_a$ is the local dimension of $X$ at $a$. Because $\Qq_p \preceq \Mm$, we can take $f_a$ and $U_a$ to be $\Qq_p$-interpretable. Then $U_a(\Qq_p)$ is an open subset of $X(\Qq_p)$ containing $a$. By Proposition~\ref{actually-compact}, $X(\Qq_p)$ is compact, and so there are finitely many points $a_1, \ldots, a_n$ such that $X(\Qq_p) = \bigcup_{i = 1}^n U_{a_i}(\Qq_p)$. Then $X = \bigcup_{i = 1}^n U_{a_i}$ because $\Qq_p \preceq \Mm$. The sets $U_{a_i}$ and maps $f_{a_i} : U_{a_i} \to \Mm^{n_{a_i}}$ witness that $X$ is a definable manifold. \end{proof} \section{Interpretable groups and \textit{fsg}} \label{fsg-sec} Recall the definition of \textit{fsg} (finitely satisfiable generics) from Section~\ref{intro-fsg}. \begin{theorem} \label{fsg-char} Let $G$ be an interpretable group in a $p$-adically closed field. Then $G$ has finitely satisfiable generics (\textit{fsg}) if and only if $G$ is definably compact with respect to the admissible group topology. \end{theorem} \begin{proof} The arguments from \cite[Sections 3, 5--6]{johnson-fsg} work almost verbatim, given everything we have proved so far. The existence of $\Gamma$-exhaustions, used in \cite[Section 3]{johnson-fsg}, is handled by Lemma~\ref{zhang-yao}(\ref{zy2}). We don't need to redo the arguments of \cite[Section 5]{johnson-fsg}---if $G$ is a definably compact $\Qq_p$-interpretable group, then $G$ \emph{is already definable} by Proposition~\ref{eliminator}, so we can directly apply \cite[Proposition~5.1]{johnson-fsg}.\footnote{In particular, we don't need to worry about Haar measurability of interpretable subsets of $G$ because $G$ is definable.} The arguments of \cite[Section 6]{johnson-fsg} go through, changing ``definable'' to ``interpretable'' everywhere. 
\end{proof} \begin{corollary}\label{fsg-def} If $\{G_a\}_{a \in Y}$ is an interpretable family of interpretable groups, then the set \begin{equation*} \{a \in Y : G_a \text{ has \textit{fsg}}\} \end{equation*} is interpretable. \end{corollary} \begin{proof} Theorems~\ref{fsg-char} and \ref{definability-annoy}. \end{proof} \begin{corollary} \label{fsg-int-qp} Let $G$ be an interpretable group in $\Qq_p$. If $G$ has \textit{fsg}, then $G$ is interpretably isomorphic to a definable group. \end{corollary} \begin{proof} Proposition~\ref{eliminator}. \end{proof} \section{Zero-dimensional groups and sets} \label{0-dim} In Remark~\ref{local-dim-remark}, we saw that an admissible group $G$ is discrete iff it is zero-dimensional. This holds more generally for admissible spaces, by the following two propositions: \begin{proposition} If $X$ is an admissible topological space and $X$ is discrete, then $\dim(X) \le 0$. \end{proposition} \begin{proof} Immediate by considering local dimension (Proposition~\ref{local-dim}). \end{proof} Conversely, the discrete topology is admissible on zero-dimensional sets: \begin{proposition} \label{funny-converse} Let $X$ be a zero-dimensional interpretable set. The discrete topology is strongly admissible, and is the only admissible topology on $X$. \end{proposition} \begin{proof} We claim any admissible topology $\tau$ on $X$ is discrete. Indeed, Proposition~\ref{mostly-euclidean} gives a large open subspace $X' \subseteq X$ that is locally Euclidean. Then $\dim(X \setminus X') < \dim(X) = 0$, so $X \setminus X' = \varnothing$ and $X' = X$. Zero-dimensional locally Euclidean spaces are discrete. Meanwhile, Theorem~\ref{construction-1} shows that there is \emph{some} strongly admissible topology $\tau$ on $X$. By the previous paragraph, $\tau$ must be the discrete topology. \end{proof} \begin{definition} \label{psf-def} An interpretable set $S$ is \emph{pseudofinite} if $\dim(S) = 0$, and $S$ with the discrete topology is definably compact. 
\end{definition} (In Proposition~\ref{other-psf}, we will see that the requirement $\dim(S) = 0$ is redundant.) \begin{proposition} \label{stupid} If an interpretable set $S$ is finite, then $S$ is pseudofinite. \end{proposition} \begin{proof} Suppose $S$ is finite. Then $\dim(S) = 0$ by Proposition~\ref{dim-of-sets}(\ref{dos2}). The discrete topology on $S$ is compact, hence definably compact by Fact~\ref{dc-fact}(\ref{df1}). \end{proof} \begin{proposition} \phantomsection \label{psf-prop} \begin{enumerate} \item \label{psp1} Let $S$ be a $\Qq_p$-interpretable set. Then $S$ is pseudofinite iff $S$ is finite. \item \label{psp2} If $\{S_a\}_{a \in I}$ is an interpretable family, then $\{a \in I : S_a \text{ is pseudofinite}\}$ is interpretable. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item If $S$ is finite, then $S$ is pseudofinite by Proposition~\ref{stupid}. Conversely, suppose $S$ is pseudofinite, so $\dim(S) = 0$ and the discrete topology on $S$ is definably compact. By Proposition~\ref{funny-converse}, the discrete topology is strongly admissible. By Proposition~\ref{actually-compact}, the discrete topology is compact, so $S$ is finite. \item Dimension 0 is definable by Proposition~\ref{dim-definable-omega}. Assuming dimension 0, the discrete topology is strongly admissible by Proposition~\ref{funny-converse}, and then definable compactness is definable by Theorem~\ref{definability-annoy}(\ref{da1}). \qedhere \end{enumerate} \end{proof} \begin{remark} Proposition~\ref{psf-prop} characterizes pseudofiniteness uniquely---it is the unique definable property which agrees with finiteness over $\Qq_p$. \end{remark} \begin{corollary} \label{exists-infty} $\Qq_p^{\eq}$ eliminates $\exists^\infty$: for any $L^\eq$-formula $\phi(x,y)$, there is a formula $\psi(y)$ such that $\phi(\Qq_p,b)$ is infinite if and only if $b \in \psi(\Qq_p)$. (But $\Mm^\eq$ does \emph{not} eliminate $\exists^\infty$, of course.)
\end{corollary} This could probably also be seen by the explicit characterization of imaginaries in \cite[Theorem~1.1]{pcf-ei}. For 0-dimensional groups, pseudofiniteness is equivalent to \textit{fsg}: \begin{proposition} \label{p8.8} Let $G$ be a 0-dimensional interpretable group. Then $G$ has \textit{fsg} if and only if $G$ is pseudofinite. \end{proposition} \begin{proof} By Theorem~\ref{fsg-char}, $G$ has \textit{fsg} if and only if the admissible group topology on $G$ is definably compact. By Remark~\ref{local-dim-remark} or Proposition~\ref{funny-converse}, the admissible group topology on $G$ is discrete. \end{proof} For example, the value group $\Gamma$ does not have \textit{fsg}, but the circle group $[0,\gamma) \subseteq \Gamma$ (with addition ``modulo $\gamma$'') \emph{does} have \textit{fsg}. We close by giving some equivalent characterizations of pseudofiniteness. \begin{proposition} \label{other-psf} Let $S$ be an interpretable set. The following are equivalent: \begin{enumerate} \item \label{op1} $S$ is pseudofinite. \item \label{op2} $S$ with the discrete topology is definably compact. \item \label{op3} If $\mathcal{D} = \{D_a\}_{a \in I}$ is an interpretable family of subsets of $S$, and $\mathcal{D}$ is linearly ordered under inclusion, then $\mathcal{D}$ has a minimal element. \end{enumerate} \end{proposition} \begin{proof} Consider two additional criteria: \begin{enumerate} \setcounter{enumi}{3} \item \label{op4} If $\mathcal{D} = \{D_a\}_{a \in I}$ is a downwards-directed interpretable family of non-empty subsets of $S$, then $\bigcap_{a \in I} D_a \ne \varnothing$. \item \label{op5} If $\mathcal{D} = \{D_a\}_{a \in I}$ is a linearly ordered interpretable family of non-empty subsets of $S$, then $\bigcap_{a \in I} D_a \ne \varnothing$. \end{enumerate} We claim that (\ref{op1})$\implies$(\ref{op2})$\iff$(\ref{op4})$\implies$(\ref{op5}), (\ref{op1})$\implies$(\ref{op3})$\implies$(\ref{op5}), and (\ref{op5})$\implies$(\ref{op1}). 
First of all, (\ref{op1})$\implies$(\ref{op2}) by our definition of ``pseudofinite,'' and (\ref{op2})$\iff$(\ref{op4}) by definition of definable compactness. The implications (\ref{op4})$\implies$(\ref{op5}) and (\ref{op3})$\implies$(\ref{op5}) are trivial. If (\ref{op1})$\centernot \implies$(\ref{op3}), then there is a pseudofinite interpretable set $S$ and an interpretable chain $\mathcal{D}$ of subsets of $S$ with no minimum. Because $\Qq_p \preceq \Mm$ and pseudofiniteness is a definable property (Proposition~\ref{psf-prop}(\ref{psp2})), we can assume $S$ and $\mathcal{D}$ are interpretable over $\Qq_p$. Then $S$ is actually finite (Proposition~\ref{psf-prop}(\ref{psp1})) and so $\mathcal{D}$ must have a minimum, a contradiction. It remains to show (\ref{op5})$\implies$(\ref{op1}). We prove $\neg(\ref{op5})$ assuming $\neg(\ref{op1})$. There are two cases, depending on why (\ref{op1}) fails: \begin{description} \item[$\dim(S) > 0$:] Then Corollary~\ref{container} shows that there is a ball $B$ in $\Mm^n$ for $n = \dim(S)$, and an interpretable injection $B \hookrightarrow S$. Without loss of generality, $B$ is $\Oo^n$, where $\Oo$ is the valuation ring on $\Mm$. Then $B$ contains a linearly ordered definable family of non-empty subsets with empty intersection, namely the family of sets $B(\gamma)^n \setminus \{\bar{0}\}$, where $B(\gamma) \subseteq \Mm$ is the ball of radius $\gamma$. Using the embedding $B \hookrightarrow S$, we get a similar family in $S$, contradicting (\ref{op5}). \item[$\dim(S) = 0$] but the discrete topology is not definably compact: Then $S$ with the discrete topology is strongly admissible by Proposition~\ref{funny-converse}. By Lemma~\ref{zhang-yao} there is a $\Gamma$-exhaustion $\{S_\gamma\}_{\gamma \in \Gamma}$ of $S$. As $S$ is not definably compact, $S \setminus S_\gamma \ne \varnothing$ for each $\gamma$, by Proposition~\ref{gx-purpose}. Then the family $\{S \setminus S_\gamma\}_{\gamma \in \Gamma}$ contradicts (\ref{op5}). 
\qedhere \end{description} \end{proof} \subsection{Geometric elimination of imaginaries} \label{geometric-ei} The $n$th \emph{geometric sort} is the quotient $S_n := GL_n(\Mm)/GL_n(\Oo)$. The theory $\pCF$ has elimination of imaginaries after adding the geometric sorts to the language \cite[Theorem~1.1]{pcf-ei}. Consequently, we may assume any interpretable set $X$ is a definable subset of some product $\Mm^n \times \prod_{i = 1}^k S_{m_i}$. The geometric sorts are 0-dimensional, so by Proposition~\ref{funny-converse} the discrete topology on $S_n$ is an admissible topology. Thus, if we take the standard topology on $\Mm$, the discrete topology on each $S_{m_i}$, the product topology on $\Mm^n \times \prod_{i = 1}^k S_{m_i}$, and the subspace topology on $X \subseteq \Mm^n \times \prod_{i = 1}^k S_{m_i}$, we get an admissible topology on $X$, by Section~\ref{closure-props}. This suggests the following alternative approach to tame topology on interpretable sets in $\pCF$. Work in the language $L_G$ with the geometric sorts. Then all interpretable sets are definable, up to isomorphism. Endow the home sort $\Mm$ with the usual valuation topology, and endow each geometric sort $S_n$ with the discrete topology. Endow any $L_G$-definable set $D \subseteq \Mm^n \times \prod_{i = 1}^k S_{m_i}$ with the subspace topology (of the product topology). By analogy with the o-minimal ``definable spaces'' in \cite{PS}, say that a definable topological space $X$ is a \emph{geometric definable space} if it is covered by finitely many open sets $U_1, \ldots, U_n$, each of which is definably isomorphic to an ($L_G$-)definable set. Note that geometric definable spaces are admissible.
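To make the geometric sorts more concrete (a standard identification, recorded here as an aside), consider the simplest case $n = 1$:
\begin{equation*}
S_1 = GL_1(\Mm)/GL_1(\Oo) = \Mm^{\times}/\Oo^{\times} \cong \Gamma,
\end{equation*}
where the isomorphism is induced by the valuation $v : \Mm^{\times} \to \Gamma$, whose kernel is precisely the unit group $\Oo^{\times}$. Thus the first geometric sort is essentially the value group.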
Geometric definable spaces should be an adequate framework for tame topology on interpretable sets and interpretable groups in $\pCF$: \begin{enumerate} \item Every interpretable set is $L_G$-definable by elimination of imaginaries, and therefore admits the structure of a geometric definable space in a trivial way. \item The topological tameness results of Section~\ref{ss-tame} hold for geometric definable spaces. We can see this from admissibility, but there are probably direct proofs. \item The methods of Section~\ref{sec:ex} or \cite{Pillay-G-in-p} presumably show that any interpretable group can be given the structure of a geometric definable space. \end{enumerate} Geometric definable spaces might provide a simpler alternative to the admissible spaces of the present paper. On the other hand, such an approach is unlikely to generalize beyond $\pCF$. \section{Further directions} \label{s-fd} There are several directions for further research. \subsection{Extensions to $P$-minimal and visceral theories} Many of the results of this paper may generalize from $\pCF$ to other $P$-minimal theories---expansions of $\pCF$ in which every unary definable set is definable in the pure field sort \cite{p-min}. The topological tameness results of $\pCF$ like the cell decomposition and dimension theory are known to generalize to $P$-minimal theories \cite{p-min,p-minimal-cells}. More generally, some of the results of this paper may generalize to \emph{visceral theories} with the exchange property. Visceral theories were introduced by Dolich and Goodrick \cite{viscerality}. Recall the notion of uniformities and uniform spaces from topology. A theory is \emph{visceral} if there is a definable uniformity on the home sort $\Mm^1$ such that a unary definable set $D \subseteq \Mm^1$ is infinite iff it has non-empty interior. 
The theory $\pCF$ is visceral, as are $P$-minimal theories, o-minimal expansions of DOAG, C-minimal expansions of ACVF, unstable dp-minimal theories of fields, and many theories of valued fields. Dolich and Goodrick prove a number of topological tameness results for visceral theories, analogous to those that hold in o-minimal and $P$-minimal theories.\footnote{Some of Dolich and Goodrick's results depend on the technical assumption ``definable finite choice'' (DFC). In forthcoming work, I will show that the assumption DFC can generally be removed from all of Dolich and Goodrick's results, as one would expect \cite{own-visceral}.} It seems likely that the results of Sections~\ref{adm-sec-1} and \ref{adm-sec-2} generalize to visceral theories with the exchange property. (There are some subtleties around the proof of Lemma~\ref{step-3}, but these problems are not insurmountable.) On the other hand, local definable compactness fails to hold in the visceral setting, so Sections~\ref{def-com-sec}--\ref{0-dim} probably do not generalize. \subsection{Zero-dimensional \textit{dfg} groups} An interpretable group $G$ is said to have \textit{dfg} (definable $f$-generics) if there is a global definable type $p$ on $G$ with boundedly many left translates. In distal theories like $\Qq_p$, the two properties \textit{fsg} and \textit{dfg} are polar opposites, in some sense. For example, if $G$ is an infinite interpretable group, then $G$ can have at most one of the two properties \textit{fsg} and \textit{dfg} (essentially by \cite[Proposition~2.27]{pierre-distal}). If $G$ has dp-rank 1 and is definably amenable, then $G$ satisfies exactly one of the two properties (essentially by \cite[Theorem~2.8]{surprise} and \cite[Proposition~8.21]{NIPguide}). By Proposition~\ref{eliminator}, the \textit{fsg} interpretable groups over $\Qq_p$ are definable.
This vaguely suggests the following conjecture: \begin{conjecture} If $G$ is a $\Qq_p$-interpretable 0-dimensional, definably amenable group, then $G$ has \textit{dfg}. \end{conjecture} The intuition is that ``zero-dimensional'' is the opposite of ``definable,'' for infinite groups. For example, the value group $\Gamma$ has \textit{dfg}. \subsection{The adjoint action} Suppose $G$ is interpretable. By Theorem~\ref{adm-group-thm}, the unique admissible group topology on $G$ is locally Euclidean. In particular, $G$ is a manifold in a weak sense. Because of generic differentiability, we can probably endow $G$ with a $C^1$-manifold structure, and then look at how $G$ acts on the tangent space at $1_G$ by conjugation. That is, we can look at the adjoint action of $G$. Say that a group is ``locally abelian'' if there is an open neighborhood $U \ni 1_G$ on which the group operation is commutative. If $G$ is locally abelian, then the adjoint action is trivial. The converse should hold by reducing to the case where $G$ is defined over $\Qq_p$ and using properties of $p$-adic Lie groups. If $G$ is locally abelian, witnessed by $U \ni 1_G$, then the center of the centralizer of $U$ is an abelian open subgroup. Thus, locally abelian groups have open abelian interpretable subgroups. For a general interpretable group $G$, the adjoint action gives a homomorphism $G \to GL_n(\Mm)$, where $n = \dim(G)$. The kernel should be a locally abelian group, and the image is definable. Thus, every interpretable group should be an extension of a definable group by a locally abelian group. This suggests the question: which groups are locally abelian? Can we classify them? Zero-dimensional interpretable groups are locally abelian, and so are abelian interpretable groups. Are all locally abelian interpretable groups built out of 0-dimensional groups and abelian groups? If $G$ is locally abelian, is there a \emph{normal} abelian open subgroup?
\begin{acknowledgment} The author was supported by the National Natural Science Foundation of China (Grant No.\@ 12101131). The idea of writing this paper arose from discussions with Ningyuan Yao and Zhentao Zhang, who convinced me that interpretable groups in $\pCF$ could be meaningfully topologized. Alf Onshuus provided some helpful references. Anand Pillay asked what these results imply for groups interpretable in the value group, which inspired Section~\ref{0-dim}. \end{acknowledgment} \bibliographystyle{alpha} \bibliography{minibib}{} \end{document}
2205.00729v1
http://arxiv.org/abs/2205.00729v1
On the Analysis of a Generalised Rough Ait-Sahalia Interest Rate Model
\documentclass[12pt]{article} \bibliographystyle{abbrv} \usepackage{lineno,hyperref} \usepackage{bbm} \usepackage{float} \usepackage{mathrsfs} \usepackage{graphicx} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{amstext} \usepackage{amsfonts} \usepackage{amsthm} \usepackage{amssymb} \usepackage[bf,SL,BF]{subfigure} \usepackage{caption} \newcommand{\triple}{{\vert\kern-0.25ex\vert\kern-0.25ex\vert}} \theoremstyle{plain} \newtheorem{rem}{Remark} \makeatletter \newcommand*\bigcdot{\mathpalette\bigcdot@{.45}} \newcommand*\bigcdot@[2]{\mathbin{\vcenter{\hbox{\scalebox{#2}{$\m@th#1\bullet$}}}}} \makeatother \newtheorem{definition}{Definition}[section] \newtheorem{theorem}[definition]{Theorem} \newtheorem{lemma}[definition]{Lemma} \newtheorem{corollary}[definition]{Corollary} \newtheorem{prop}[definition]{Proposition} \newtheorem{assumption}[definition]{Assumption} \theoremstyle{definition} \newtheorem{remark}[definition]{Remark} \newtheorem{remarks}[definition]{Remarks} \newtheorem{example}[definition]{Example} \renewcommand\labelenumi{(\roman{enumi})} \renewcommand\theenumi\labelenumi \newcommand{\eproof}{\hfill $\Box$} \begin{document} \title{\bf On the Analysis of a Generalised Rough Ait-Sahalia Interest Rate Model } \author {{\bf Emmanuel Coffie,\ \bf Xuerong Mao,\ \bf Frank Proske}} \date{} \maketitle \begin{abstract} Fractional Brownian motion with the Hurst parameter $H<\frac{1}{2}$ is used widely, for instance, to describe a 'rough' stochastic volatility process in finance. In this paper, we examine an Ait-Sahalia-type interest rate model driven by a fractional Brownian motion with $H<\frac{1}{2}$ and establish theoretical properties such as an existence-and-uniqueness theorem, regularity in the sense of Malliavin differentiability and higher moments of the strong solutions.
\medskip \noindent \\ {\small\bf Keywords}: Stochastic interest rate, rough volatility, fractional Brownian motion, strong solution, higher moments \end{abstract} \section{Introduction}\label{sec1} Over the years, SDEs driven by noise with $\alpha^-$-H\"older continuous random paths for $\alpha\in[\frac{1}{2},1)$ have been applied to model the dynamical behaviour of asset prices in finance. See e.g. \cite{Karatzas,Lamberton} and the references therein. However, in recent years, empirical evidence (see e.g. \cite{Gatheral}) has shown that volatility paths of asset prices are more irregular in the sense of $\alpha^-$-H\"older continuity for $\alpha\in(0,\frac{1}{2})$ in many instances. This inadequacy motivated models based on SDEs driven by noise of low $\alpha^-$-H\"older regularity with $\alpha\in(0,\frac{1}{2})$, which researchers and practitioners have since used to describe the volatility dynamics of asset prices. These models are driven by rough signals that can capture well the 'roughness' in the volatility process of asset prices. Such rough signals arise e.g. from paths of the fractional Brownian motion (fBm). The fractional Brownian motion is a generalisation of the ordinary Brownian motion. It is a centred self-similar Gaussian process with stationary increments which depends on the Hurst parameter $H$. The Hurst parameter lies in $(0,1)$ and controls the regularity of the sample paths in the sense of a.e. (local) $H^-$-H\"older continuity. The smaller the Hurst parameter, the rougher the sample paths and vice versa. For instance, the authors in \cite{amine} employ the fractional Brownian motion with $H<\frac{1}{2}$ to model the 'rough' volatility process of asset prices and derive a representation of the sensitivity parameter delta for option prices.
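To illustrate this roughness numerically (our own sketch, not part of the paper's analysis), one can sample fBm paths exactly from the covariance function $\mathbb{E}[B^H_sB^H_t]=\frac{1}{2}(s^{2H}+t^{2H}-|t-s|^{2H})$ via a Cholesky factorisation; the grid size, the random seed and the value $H=0.3$ below are arbitrary choices.

```python
# Illustration (not from the paper): exact sampling of fractional Brownian
# motion on a uniform grid from its covariance
#   E[B^H_s B^H_t] = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2,
# using a Cholesky factorisation of the covariance matrix.
import numpy as np

def fbm_path(n_steps, hurst, rng):
    """Sample one fBm path on [0, 1]; returns (times, values) with B^H_0 = 0."""
    t = np.linspace(0.0, 1.0, n_steps + 1)[1:]           # strictly positive grid
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    L = np.linalg.cholesky(cov)                          # cov is positive definite
    values = L @ rng.standard_normal(n_steps)
    return np.concatenate(([0.0], t)), np.concatenate(([0.0], values))

rng = np.random.default_rng(0)
times, path = fbm_path(256, 0.3, rng)                    # H = 0.3 < 1/2: rough path
# Self-similarity check: Var(B^H_1) = 1^{2H} = 1, estimated over many samples.
endpoints = np.array([fbm_path(16, 0.3, rng)[1][-1] for _ in range(4000)])
print(endpoints.var())  # should be close to 1
```

The Cholesky method is exact but costs $O(n^3)$; for long paths one would switch to a circulant-embedding (FFT) sampler.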
Similarly, the authors in \cite{Coffie} also consider an asset price model in connection with the sensitivity analysis of option prices whose correlated 'rough' volatility dynamics is described by means of an SDE driven by a fractional Brownian motion with $H<\frac{1}{2}$. The reader may consult \cite{Nualart, Biagini} for the coverage of properties and financial applications of the fractional Brownian motion with $H<\frac{1}{2}$. See also the Appendix. \par In the context of interest rate modelling, Ait-Sahalia \cite{ait} proposed a new class of highly nonlinear stochastic models for the evolution of interest rates through time, after rejecting existing univariate linear-drift stochastic models on the basis of empirical studies. In this model, (short term) interest rates $x_t$ have the SDE dynamics \begin{equation}\label{intro:eq:1} dx(t)=(\alpha_{-1}x(t)^{-1}-\alpha_{0}+\alpha_{1}x(t)-\alpha_{2}x(t)^{2})dt+\sigma x(t)^{\theta}dB_t \end{equation} for $t\ge 0$ with initial value $x(0)=x_0$, where $\alpha_{-1},\alpha_{0},\alpha_{1}, \alpha_{2}>0$, $\sigma>0$, $\theta >1$ and $B_t$ is a scalar Brownian motion. This type of interest rate model has been studied by many authors (see e.g. \cite{Szpruch, Emma}). \par In order to capture ``rough'' (short term) interest rates, e.g. in turbulent bond markets, one may replace the driving noise $B_{\bigcdot}$ in \eqref{intro:eq:1} by a fractional Brownian motion $B^H_{\bigcdot}$ and consider an interest rate model based on the SDE \begin{equation}\label{intro:eq:2} dx(t)=(\alpha_{-1}x(t)^{-1}-\alpha_{0}+\alpha_{1}x(t)-\alpha_{2}t^{2H-1}x(t)^{\rho})dt+\sigma x(t)^{\theta}dB_t^H \end{equation} for $t\in(0,1]$ with initial value $x(0)=x_0$, where $H\in(0,\frac{1}{2})$ and $\rho>1$. The stochastic integral for the fractional Brownian motion in \eqref{intro:eq:2} is defined via an integral concept in \cite{Nualart} and related to a Wick-It\^{o}-Skorohod type of integral. See also Section \ref{sec6}.
\par On the other hand, an alternative model to \eqref{intro:eq:2} for the description of rough interest rate dynamics could be the following SDE, which ``preserves'' the classical drift structure of the Ait-Sahalia model in \eqref{intro:eq:1}: \begin{equation}\label{intro:eq:3} dx(t)=(\alpha_{-1}x(t)^{-1}-\alpha_{0}+\alpha_{1}x(t)-\alpha_{2}x(t)^{\rho})dt+\sigma x(t)^{\theta}d^{\circ}B_t^H \end{equation} for $t\ge 0$ and $H\in(0,\frac{1}{2})$, where $\sigma x(t)^{\theta}d^{\circ}B_t^H$ stands for a stochastic integral in the sense of Russo and Vallois. See Section \ref{sec6}. \par Although we also obtain an existence and uniqueness result for solutions to \eqref{intro:eq:3} (see Theorem \ref{final}), we mainly focus in this paper on the study of SDE \eqref{intro:eq:2}. \par The remainder of the paper is organised as follows: In Section \ref{sec3}, we introduce the fractional Ait-Sahalia interest rate model. We establish an existence and uniqueness result for solutions to SDE \eqref{intro:eq:2} in Section \ref{sec6} by studying the properties of solutions to an associated SDE driven by an additive fractional noise (see Sections \ref{sec4} and \ref{sec5}). In addition, we also discuss the alternative model \eqref{intro:eq:3} in Section \ref{sec6}. \section{The fractional Ait-Sahalia model}\label{sec3} Throughout this paper unless specified otherwise, we employ the following notation. Let $(\Omega, \mathcal{F},\mathbb{P})$ be a complete probability space with filtration $\{ \mathcal{F}_t\}_{t\geq 0}$ satisfying the usual conditions (i.e., it is increasing and right continuous while $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets). Denote by $\mathbb{E}$ the expectation corresponding to $\mathbb{P}$. Suppose that $B^H_t$, $0\le t\le 1$, is a scalar fractional Brownian motion (fBm) with Hurst parameter $H\in (0,\frac{1}{2})$ and $B_t$, $0\le t\le 1$, is a scalar Brownian motion defined on this probability space.
\par In what follows, we are interested in studying the SDE \begin{equation}\label{chap1:eq:1} x_t=x_0+\int_0^t\big(\alpha_{-1}x_s^{-1}-\alpha_0+\alpha_1x_s-\alpha_2s^{2H-1}x_s^{\rho}\big)ds+\int_0^t\sigma x^{\theta}_sdB_s^H, \end{equation} $x_0\in(0,\infty)$, $0\le t\le 1$, where $H\in(\frac{1}{3},\frac{1}{2})$, $\tilde{\theta}>0$, $\rho>1+\frac{1}{H\tilde{\theta}}$, $\theta:=\frac{\tilde{\theta}+1}{\tilde{\theta}}$, $\sigma>0$ and $\alpha_i>0$, $i=-1,\ldots,2$. Here the stochastic integral term with respect to $B_{\bigcdot}^H$ in \eqref{chap1:eq:1} is defined by means of an integral concept introduced by F. Russo and P. Vallois in \cite{Errami}. See Section \ref{sec6}. \par As already mentioned in the introduction, solutions to the SDE \eqref{chap1:eq:1} can be used as a model (fractional Ait-Sahalia model) for the description of the dynamics of (short term) interest rates in finance. In fact, in this paper, we aim at establishing the existence and uniqueness of strong solutions $x_t>0$ to SDE \eqref{chap1:eq:1}. In doing so, we show that such solutions can be obtained as transformations of solutions to the SDE \begin{equation}\label{sec2:eq1} y_t=x+\int_0^t \tilde{f}(s,y_s)ds-\tilde{\sigma} B_t^H, \quad 0\le t\le 1, \quad H\in(0,\frac{1}{2}), \end{equation} where \begin{align}\label{sec2:eq2} \tilde{f}(s,y)&=\alpha_{-1}(-\tilde{\theta} y^{2\tilde{\theta}+1})+\alpha_0y^{\tilde{\theta}+1}-\alpha_1\frac{y}{\tilde{\theta}}+\alpha_2 s^{2H-1}\frac{1}{\tilde{\theta}^{\rho}} y^{-\tilde{\theta}\rho+\tilde{\theta}+1}-\tilde{\sigma}Hs^{2H-1}y^{-1}(\tilde{\theta}+1), \end{align} where $\tilde{\sigma}>0$, $0< s\le 1$, $0<y<\infty$. See Section \ref{sec6}.
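For orientation, the qualitative behaviour of the drift \eqref{sec2:eq2} can be checked numerically (our own sketch; the values $H=0.4$, $\tilde{\theta}=2$, $\rho=3$, $\alpha_i=\tilde{\sigma}=1$ are arbitrary choices satisfying $\rho>1+\frac{1}{H\tilde{\theta}}$). Near $y=0$ the singular term with exponent $\tilde{\theta}+1-\tilde{\theta}\rho<1-\frac{1}{H}<-1$ dominates and makes $\tilde{f}$ large and positive, repelling the solution from the singularity, while for large $y$ the term $-\alpha_{-1}\tilde{\theta}y^{2\tilde{\theta}+1}$ dominates and makes $\tilde{f}$ negative.

```python
# Numerical sanity check of the drift f~(s, y) of the transformed SDE.
# Parameter choices are arbitrary but admissible: rho > 1 + 1/(H*th).
H, th, rho = 0.4, 2.0, 3.0
a_m1 = a_0 = a_1 = a_2 = sig = 1.0

def f_tilde(s, y):
    # term-by-term transcription of the displayed formula for f~
    return (-a_m1 * th * y**(2 * th + 1)
            + a_0 * y**(th + 1)
            - a_1 * y / th
            + a_2 * s**(2 * H - 1) * th**(-rho) * y**(th + 1 - th * rho)
            - sig * H * s**(2 * H - 1) * (th + 1) / y)

print(f_tilde(0.5, 1e-3))   # large positive: solution repelled from 0
print(f_tilde(0.5, 100.0))  # large negative: solution pushed back down
```

This sign pattern is exactly what conditions (A2) and (A3) of the next section formalise, with $\alpha=\tilde{\theta}\rho-\tilde{\theta}-1>\frac{1}{H}-1$.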
\par In the sequel, we want to prove the following new properties for solutions to SDE \eqref{sec2:eq1}: \begin{itemize} \item Existence and uniqueness of positive strong solutions (Theorem \ref{thrm1}) \item Regularity of solutions in the sense of Malliavin differentiability (Theorem \ref{Thrm 4.2}) \item Existence of higher moments (Theorem \ref{highermoment}). \end{itemize} \section{Existence and uniqueness of solutions to singular SDEs with additive fractional noise for $H< \frac{1}{2}$}\label{sec4} In this section, we wish to analyse the following generalisation of the SDE \eqref{sec2:eq1}: \begin{equation}\label{sec2:eq3} x_t=x_0+\int^t_0b(s,x_s)ds+\sigma B^H_t,\quad 0\le t\le 1,\quad H\in(0,\frac{1}{2}),\quad \sigma>0. \end{equation} We require the following conditions: \begin{description} \item (A1) $b\in C\big((0,1)\times (0,\infty)\big)$ and has a continuous spatial derivative $b^{\prime}:=\frac{\partial }{\partial x}b$ such that \begin{equation*} b'(t,x)\le K_t, \quad 0< t< 1, \quad x\in(0,\infty), \end{equation*} where $K_t:=t^{2H-1}K$ for some $K\ge 0$. \item (A2) There exist $x_1>0$, $\alpha>\frac{1}{H}-1$ and $h_1>0$ such that $b(t,x)\ge h_1t^{2H-1}x^{-\alpha}$, $t\in(0,1]$, $x\le x_1$. \item (A3) There are $x_2>0$ and $h_2>0$ such that $b(t,x)\le h_2t^{2H-1}(x+1)$, $t\in(0,1]$, $x\ge x_2$. \end{description} \begin{theorem}\label{thrm1} Suppose that \textup{(A1)--(A3)} hold. Then for all $x_0>0$ the \textup{SDE} \eqref{sec2:eq3} has a unique strong positive solution $x_t$, $0\le t\le 1$. \end{theorem} \begin{proof} Without loss of generality, let $\sigma=1$. We are required to establish the following analytical properties. \begin{enumerate} \item Uniqueness: Suppose $x_{\bigcdot}$ and $y_{\bigcdot}$ are two solutions to \eqref{sec2:eq3}. Then \begin{align*} x_t-y_t=\int_0^t\big(b(s,x_s)-b(s,y_s)\big)ds.
\end{align*} So, using the product rule, the mean value theorem and (A1), we get \begin{align*} (x_t-y_t)^2&=2\int_0^t\big(b(s,x_s)-b(s,y_s)\big)(x_s-y_s)ds\\ &\le 2\int_0^tK_s(x_s-y_s)^2ds. \end{align*} Hence, Gronwall's Lemma implies that \begin{align*} x_t-y_t=0,\quad 0\le t\le 1. \end{align*} \item Existence: Let $x_0>0$. Because of the regularity assumptions imposed on $b$, we know that the equation \eqref{sec2:eq3} has (path-by-path) local solutions. Define the stopping times \begin{equation*} \tau_0:=\inf\{t\in[0,1]:x_t=0\} \text{ and } \tau_n:=\inf\{t\in[0,1]:x_t\ge n\}, \end{equation*} where $\inf\emptyset:=1^+$. Just as in \cite{Zhang}, we want to prove that $\tau_0=1^+$ and $\lim_{n\rightarrow \infty}\tau_n=1^+$. Here $1^+$ stands for an artificially added element larger than 1. Suppose that $\tau_0\le 1$. Then there is a $\hat{\tau}_0\in (0,\tau_0]$ such that $x_t\le x_1$ for all $t\in(\hat{\tau}_0,\tau_0]$. By (A2), we know that $b(t,x)>0$ for $x\in(0,x_1)$ and $t>0$. Hence, \begin{equation} 0=x_{\tau_0}=x_t+\int_t^{\tau_0}b(s,x_s)ds+B^H_{\tau_0}-B^H_t,\quad t\in (\hat{\tau}_0,\tau_0]. \end{equation} This implies \begin{equation} x_t\le \vert B^H_{\tau_0}-B^H_t\vert\le \vert\vert B_{\bigcdot}^H\vert\vert_{\beta}(\tau_0-t)^{\beta},\ \text{$t\in (\hat{\tau}_0,\tau_0]$ for $\beta\in (0,H)$}. \end{equation} Here $\vert\vert \cdot\vert\vert_{\beta}$ denotes the H\"older seminorm given by \begin{equation*} \vert\vert f\vert\vert_{\beta}=\sup_{0\le s<t\le 1}\frac{\vert f(s)-f(t)\vert}{(t-s)^{\beta}} \end{equation*} for $\beta$-H\"older continuous functions $f$.
So, we also obtain that \begin{align*} \vert\vert B_{\bigcdot}^H\vert\vert_{\beta}(\tau_0-t)^{\beta}&\ge \vert B^H_{\tau_0}-B^H_t\vert \ge \int^{\tau_0}_tb(s,x_s)ds\\ &\ge h_1\int^{\tau_0}_t s^{2H-1}x_s^{-\alpha}ds\ge \frac{h_1}{\vert\vert B_{\bigcdot}^H\vert\vert_{\beta}^{\alpha}}\int^{\tau_0}_t s^{2H-1}\frac{1}{(\tau_0-s)^{\alpha\beta}}ds\\ &\ge \frac{h_1}{\vert\vert B_{\bigcdot}^H\vert\vert_{\beta}^{\alpha}}\tau_0^{2H-1}\int^{\tau_0}_t \frac{1}{(\tau_0-s)^{\alpha\beta}}ds. \end{align*} \end{enumerate} If $\alpha \beta\ge 1$, we get a contradiction, since the last integral diverges, while the left hand side is finite. For $\alpha \beta< 1$, we find that \begin{align*} \vert\vert B_{\bigcdot}^H\vert\vert_{\beta}(\tau_0-t)^{\beta}&\ge \frac{h_1}{\vert\vert B_{\bigcdot}^H\vert\vert_{\beta}^{\alpha}}\tau_0^{2H-1} \frac{(\tau_0-t)^{1-\alpha\beta}}{1-\alpha\beta},\quad t\in (\hat{\tau}_0,\tau_0]. \end{align*} Choosing $\beta\in(0,H)$ with $\beta(1+\alpha)>1$, which is possible since $\alpha>\frac{1}{H}-1$, we hence obtain \begin{align*} 0=\lim_{t\rightarrow \tau_0}\vert\vert B_{\bigcdot}^H\vert\vert_{\beta}(\tau_0-t)^{\beta+\alpha\beta-1}\ge \frac{h_1\tau_0^{2H-1}}{\vert\vert B_{\bigcdot}^H\vert\vert_{\beta}^{\alpha}(1-\alpha\beta)}>0, \end{align*} which is a contradiction. So $\tau_0=1^+$. Assume now that \begin{align*} \tau_{\infty}:=\lim_{n\rightarrow \infty}\tau_n\le 1. \end{align*} Then we can show as in \cite{Zhang} by using (A3) that \begin{align*} x_t\le x_2+x_0+\vert\vert B_{\bigcdot}^H\vert\vert_{\beta}\tau^{\beta}_{\infty}+h_2\tau^{2H}_{\infty}(2H)^{-1}+h_2\int_{\hat{\tau}_1}^t s^{2H-1}x_sds. \end{align*} So, by letting \begin{align*} a= x_2+x_0+\vert\vert B_{\bigcdot}^H\vert\vert_{\beta}\tau^{\beta}_{\infty}+h_2\tau^{2H}_{\infty}(2H)^{-1}, \end{align*} it follows from Gronwall's Lemma that \begin{align*} x_t&\le a+\int_{\hat{\tau}_1}^t a h_2s^{2H-1}\exp\Big(\int_s^th_2u^{2H-1}du\Big)ds\\ &\le \gamma+\int_0^1 \gamma h_2s^{2H-1}\exp\Big(\int_s^1 h_2u^{2H-1}du\Big)ds, \end{align*} where $\gamma:=x_2+x_0+\vert\vert B_{\bigcdot}^H\vert\vert_{\beta}+\frac{h_2}{2H}\ge a$, since $\tau_{\infty}\le 1$. The latter estimate yields a bound on $x_t$ which is uniform in $t$ and thus contradicts the assumption $\tau_{\infty}\le 1$.
\end{proof} As a consequence of Theorem \ref{thrm1}, we obtain the following result: \begin{corollary}\label{corona} Suppose that $x\in (0,\infty)$ and $\rho>\frac{1}{H\tilde{\theta}}+1$. Then there exists a unique strong solution $y_t>0$ to \textup{SDE} \eqref{sec2:eq1}. \end{corollary} \begin{proof} Let $\epsilon=\frac{H}{2}$. Then \begin{equation*} \tilde{f}(s,y)=\tilde{g}_1(s,y)+\tilde{g}_2(s,y), \end{equation*} where \begin{equation*} \tilde{g}_1(s,y):=\alpha_{-1}(-\tilde{\theta} y^{2\tilde{\theta}+1})+\alpha_0y^{\tilde{\theta}+1}-\alpha_1\frac{y}{\tilde{\theta}}+\epsilon\tilde{\sigma}s^{2H-1}y^{-1}(\tilde{\theta}+1), \end{equation*} and \begin{equation*} \tilde{g}_2(s,y):=\alpha_2 s^{2H-1}\frac{1}{\tilde{\theta}^{\rho}} y^{-\tilde{\theta}\rho+\tilde{\theta}+1}-(H+\epsilon)\tilde{\sigma}s^{2H-1}y^{-1}(\tilde{\theta}+1). \end{equation*} We see that \begin{align*} \tilde{g}_1(s,y)&\ge \alpha_{-1}(-\tilde{\theta} y^{2\tilde{\theta}+1})+\alpha_0y^{\tilde{\theta}+1}-\alpha_1\frac{y}{\tilde{\theta}}+\epsilon\tilde{\sigma}y^{-1}(\tilde{\theta}+1)\\ &\ge 0 \end{align*} for all $s\in (0,1]$ and $y\in(0,y_0)$ for some $y_0>0$. Since $$-\tilde{\theta}\rho+\tilde{\theta}+1<-\frac{1}{H}+1<-1,$$ we also find some $y_1>0$ such that \begin{align*} \tilde{g}_2(s,y)&=s^{2H-1} y^{-\tilde{\theta}\rho+\tilde{\theta}+1}\Big(\alpha_2\frac{1}{\tilde{\theta}^{\rho}}-(H+\epsilon)\tilde{\sigma}(\tilde{\theta}+1)y^{\tilde{\theta}\rho-\tilde{\theta}-2}\Big)\\ &\ge h_1s^{2H-1}y^{-\alpha} \end{align*} for all $s\in (0,1]$ and $y\in(0,y_1]$, where $h_1>0$ and $\alpha:=\tilde{\theta}\rho-\tilde{\theta}-1$. Note that $\alpha>\frac{1}{H}-1$, since $\rho>\frac{1}{H\tilde{\theta}}+1$. So, \begin{align*} \tilde{f}(s,y)\ge h_1s^{2H-1}y^{-\alpha} \end{align*} for all $s\in (0,1]$ and $y\in(0,\min(y_0,y_1))$, which shows that $\tilde{f}$ satisfies (A2).
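The boundary behaviour underlying (A2) and (A3) can also be checked numerically. The sketch below evaluates $\tilde f$ from \eqref{sec2:eq2} for illustrative parameter values of our own choosing ($\tilde\theta=1$, $\rho=4>1+\frac{1}{H\tilde\theta}$, $H=0.4$, $\alpha_i=\tilde\sigma=1$) and confirms that the drift is strongly repelling near $y=0$ and mean reverting for large $y$:

```python
import numpy as np

# Illustrative parameters (our choice; note rho > 1 + 1/(H*tilde_theta) = 3.5)
H, tt, rho = 0.4, 1.0, 4.0
a_m1 = a0 = a1 = a2 = 1.0          # alpha_{-1}, alpha_0, alpha_1, alpha_2
sigma_t = 1.0                      # tilde_sigma

def f_tilde(s, y):
    """Drift of the transformed SDE (cf. the definition of tilde f above)."""
    return (a_m1 * (-tt * y**(2 * tt + 1))
            + a0 * y**(tt + 1)
            - a1 * y / tt
            + a2 * s**(2 * H - 1) * tt**(-rho) * y**(-tt * rho + tt + 1)
            - sigma_t * H * s**(2 * H - 1) * y**(-1) * (tt + 1))

s = 0.5
assert f_tilde(s, 1e-2) > 0   # repulsion near y = 0: the y^{-2} term dominates
assert f_tilde(s, 1e2) < 0    # mean reversion for large y: the -y^3 term dominates
```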
As for (A3), we see that there exists some $y_2\ge 1$ such that \begin{align*} \tilde{f}(s,y)&\le s^{2H-1}\Big(\alpha_2\frac{1}{\tilde{\theta}^{\rho}}y^{-\tilde{\theta}\rho+\tilde{\theta}+1}-H\tilde{\sigma}y^{-1}(\tilde{\theta}+1)\Big)\le h_2s^{2H-1}(1+y) \end{align*} for all $s\in (0,1]$, $y\in(y_2,\infty)$ and some $h_2>0$. We have that \begin{align*} \tilde{f}^{\prime}(s,y)=f_1(s,y)+f_2(s,y), \end{align*} where \begin{align*} f_1(s,y):=-\alpha_{-1}\tilde{\theta}(2\tilde{\theta}+1)y^{2\tilde{\theta}}+\alpha_0(\tilde{\theta}+1)y^{\tilde{\theta}}-\frac{\alpha_1}{\tilde{\theta}} \end{align*} and \begin{align*} f_2(s,y):= s^{2H-1}\Big(\alpha_2\frac{1}{\tilde{\theta}^{\rho}}(-\tilde{\theta}\rho+\tilde{\theta}+1)y^{-\tilde{\theta}\rho+\tilde{\theta}}+H\tilde{\sigma}(\tilde{\theta}+1)y^{-2}\Big). \end{align*} So, there exist $y_1,y_2>0$ (possibly different from the ones above) such that \begin{align*} \tilde{f}'(s,y)\le f_1(s,y)\le K\le s^{2H-1}K=K_s \end{align*} for all $s\in (0,1]$, $y\in(0,y_1)$ as well as \begin{align*} \tilde{f}'(s,y)\le f_2(s,y)\le s^{2H-1}K=K_s \end{align*} for all $s\in(0,1]$, $y\in(y_2,\infty)$ and some $K>0$. On the other hand, we see that \begin{align*} \tilde{f}'(s,y)\le K_1+s^{2H-1}K_2\le s^{2H-1}K=K_s \end{align*} for all $s\in (0,1]$, $y\in[y_1,y_2]$ for some $K_1,K_2,K>0$. Altogether, we see that $\tilde{f}$ also satisfies (A1). Since $-B^H_{\bigcdot}$ is a fractional Brownian motion, the proof follows. \end{proof} \section{Malliavin differentiability and existence of higher moments of solutions}\label{sec5} In this section, we want to show that the solution $x$ to the SDE \begin{equation}\label{10} x_t=x+\int_0^t \tilde{f}(s,x_s)ds-\tilde{\sigma}B_t^H, \quad 0\le t\le 1,\quad x>0, \end{equation} is Malliavin differentiable in the direction of $B_{\bigcdot}^H$ for $H\in(0,\frac{1}{2})$. Furthermore, we verify that solutions $x_t$ to \eqref{10} belong to $L^q(\Omega)$ for all $q\ge 1$.
For this purpose, let $\tilde{f}_n:(0,1]\times \mathbb{R}\rightarrow \mathbb{R}$, $n\ge 1$ be a sequence of bounded, globally Lipschitz continuous (and smooth) functions such that \begin{enumerate} \item $\tilde{f}_n\mid_{(0,1]\times[\frac{1}{n},n]}=\tilde{f}\mid_{(0,1]\times [\frac{1}{n},n]}$ for all $n\ge 1$,\\ \item $\tilde{f}'_n(s,x)\le K_s$ for all $(s,x)\in (0,1]\times \mathbb{R}$, $n\ge 1$, where $K_s$ is defined in (A1). \end{enumerate} So we see that \begin{equation*} \tilde{f}_n(s,x)\underset{n\rightarrow \infty}{\longrightarrow}\tilde{f}(s,x) \end{equation*} for all $(s,x)\in (0,1]\times (0,\infty)$. Denote by $D^H_{\bigcdot}$ and $D_{\bigcdot}$ the Malliavin derivatives in the direction of $B^H_{\bigcdot}$ and $W_{\bigcdot}$, respectively. Here $W_{\bigcdot}$ is the Wiener process with respect to the representation \begin{equation}\label{new0} B^H_t=\int^t_0K_H(t,s)dW_s,\quad t\ge 0. \end{equation} See the Appendix. Since $-B^H_{\bigcdot}$ is a fractional Brownian motion, let us without loss of generality assume in \eqref{10} that $\tilde{\sigma}=-1$. Because of the regularity of the functions $\tilde{f}_n$, $n\ge 1$, we find that the solutions $x^n_{\bigcdot}$ to \begin{equation*} x^n_t=x+\int_0^t \tilde{f}_n(s,x^n_s)ds+B_t^H, \quad x>0, \quad 0\le t\le 1 \end{equation*} are Malliavin differentiable with Malliavin derivative $D_u^Hx^n_t$ satisfying the equation \begin{equation*} D_u^Hx_t^n=\int^t_u \tilde{f}^{\prime}_n(s,x^n_s)D_u^Hx_s^nds+\chi_{[0,t]}(u). \end{equation*} Hence, \begin{equation*} D_u^Hx^n_t=\chi_{[0,t]}(u)\exp\Big(\int_u^t\tilde{f}_n^{\prime}(s,x_s^n )ds \Big)\quad \lambda\times \text{P-a.e.} \end{equation*} for all $0\le t\le 1$ ($\lambda$ denotes the Lebesgue measure). See \cite{Nualart}.
Further, using the transfer principle between $D^H_{\bigcdot}$ and $D_{\bigcdot}$, we have that \begin{equation}\label{new1} K_H^*D^H_{\bigcdot}x_t=D_{\bigcdot}x_t, \end{equation} where $K_H^*:\mathcal{H}\rightarrow L^2([0,T])$ is given by \begin{equation} (K_H^*y)(s)=K_H(T,s)y(s)+\int_s^T(y(t)-y(s))\frac{\partial}{\partial t}K_H(t,s)dt \end{equation} for \begin{equation} \frac{\partial}{\partial t}K_H(t,s)=c_H\Big(H-\frac{1}{2}\Big)\Big(\frac{t}{s}\Big)^{H-\frac{1}{2}}\Big(t-s\Big)^{H-\frac{3}{2}}. \end{equation} Here $\mathcal{H}=I_{T^-}^{\frac{1}{2}-H}(L^2)$. See the Appendix. On the other hand, using \eqref{new1}, we also see that \begin{equation} D_ux^n_t=\int_u^t \tilde{f}'_n(s,x_s^n)D_ux_s^nds+K_H(t,u) \end{equation} in $L^2([0,t]\times \Omega)$ for all $0\le t\le 1$. Set \begin{equation*} Y_t^n(u)=D_ux_t^n-K_H(t,u). \end{equation*} Then, \begin{equation*} Y_t^n(u)=\int_u^t\Big\{\tilde{f}'_n(s,x_s^n)Y^n_s(u)+\tilde{f}'_n(s,x_s^n)K_H(s,u) \Big\}ds. \end{equation*} Using the fundamental solution $\Phi$ of the equation \begin{equation*} \dot{\Phi}(t)=\tilde{f}'_n(t,x_t^n)\cdot\Phi(t),\quad \Phi(u)=1, \end{equation*} we then obtain that \begin{equation*} Y_t^n(u)=\int_u^t\exp\Big(\int_s^t\tilde{f}'_n(r,x_r^n)dr\Big)\tilde{f}'_n(s,x_s^n)K_H(s,u)ds. \end{equation*} Hence, \begin{align*} D_ux_t^n&=\int^t_u \exp\Big(\int_s^t\tilde{f}'_n(r,x_r^n)dr\Big)\tilde{f}'_n(s,x_s^n)K_H(s,u)ds+K_H(t,u)\\ &= J_1^n(t,u)+J_2^n(t,u)+K_H(t,u), \quad u<t,\quad \lambda\times\text{P-a.e.}, \end{align*} where \begin{align*} J_1^n(t,u)&:=\int_u^t\exp\Big(\int_s^t \tilde{f}'_n(r,x_r^n)dr\Big)\Big( \tilde{f}'_n(s,x_s^n)-K_s\Big)K_H(s,u)ds \end{align*} and \begin{align*} J_2^n(t,u)&:=\int_u^t\exp\Big(\int_s^t \tilde{f}'_n(r,x_r^n)dr\Big)K_s\cdot K_H(s,u)ds. \end{align*} Without loss of generality, let $T=t=1$. Then \begin{equation} \int_0^1(D_ux_1^n)^2du\le C\Big\{\int_0^1(J^n_1(1,u))^2du+\int_0^1(J^n_2(1,u))^2du+\int_0^1(K_H(1,u))^2du \Big\}.
\end{equation} Using Fubini's theorem, we get that \begin{align*} \int_0^1(J^n_1(1,u))^2du&=\int_0^1\Big(\int_0^1\chi_{_{[u,1]}}(s)\exp\Big(\int_s^1 \tilde{f}'_n(r,x_r^n)dr\Big)\Big( \tilde{f}'_n(s,x_s^n)-K_s\Big)K_H(s,u)ds\Big)^2du\\ &=\int_0^1\int_0^1 \Big\{\exp\Big(\int_{s_1}^1\tilde{f}'_n(r,x_r^n)dr\Big)\Big( \tilde{f}'_n(s_1,x_{s_1}^n)-K_{s_1}\Big)\\ &\times \exp\Big(\int_{s_2}^1\tilde{f}'_n(r,x_r^n)dr\Big)\Big( \tilde{f}'_n(s_2,x_{s_2}^n)-K_{s_2}\Big)\int_0^{s_1\wedge s_2} K_H(s_1,u)K_H(s_2,u)du\Big\}ds_1ds_2. \end{align*} From \eqref{new0}, we see for the covariance function \begin{align*} R_H(s_1,s_2)=\mathbb{E}[B_{s_1}^H\cdot B_{s_2}^H] \end{align*} that \begin{align*} R_H(s_1,s_2)=\int_0^{s_1\wedge s_2}K_H(s_1,u)K_H(s_2,u)du. \end{align*} Since \begin{align*} 0\le R_H(s_1,s_2)=\frac{1}{2}(s_1^{2H}+s_2^{2H}-\vert s_1-s_2\vert^{2H})\le 1,\quad H< \frac{1}{2}, \end{align*} and \begin{align*} \Big( \tilde{f}'_n(s_1,x_{s_1}^n)-K_{s_1}\Big)\cdot \Big( \tilde{f}'_n(s_2,x_{s_2}^n)-K_{s_2}\Big)\ge 0 \end{align*} for $0< s_1,s_2\le 1$, we find that \begin{align*} \int_0^1(J^n_1(1,u))^2du&\le \Big(\int_0^1\exp\Big(\int_s^1 \tilde{f}'_n(r,x_r^n)dr\Big)\Big( \tilde{f}'_n(s,x_s^n)-K_s\Big)ds\Big)^2\\ &=\Big\{-\exp\Big(\int_s^1\tilde{f}'_n(r,x_r^n)dr\Big)\Big\vert_{s=0}^{s=1}-\int_0^1K_s\exp\Big(\int_s^1\tilde{f}'_n(r,x_r^n)dr\Big)ds\Big\}^2\\ &\le \Big(\exp\Big(\int_0^1K_rdr\Big)+\int_0^1K_sds\cdot\exp\Big(\int_0^1K_rdr\Big)\Big)^2. \end{align*} Similarly, we also obtain that \begin{align*} \int_0^1(J^n_2(1,u))^2du&\le C(K,H) \end{align*} for a constant $C(K,H)<\infty$. We also have that \begin{align*} \int_0^1(K_H(1,u))^2du=\mathbb{E}[(B_1^H)^2]=1. \end{align*} Altogether, we get that \begin{equation}\label{boom} \mathbb{E}\Big[\int_0^1(D_ux_1^n)^2du\Big]\le C(K,H) \end{equation} for all $n\ge 1$ for a constant $C(K,H)<\infty$.
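The estimate for $J^n_1$ above rests on the elementary bounds $0\le R_H(s_1,s_2)\le 1$ on $(0,1]^2$ for $H<\frac{1}{2}$; a quick numerical confirmation on a grid (the value of $H$ is an illustrative choice):

```python
import numpy as np

H = 0.4                                     # any H in (0, 1/2)
s = np.linspace(1e-3, 1.0, 400)
S1, S2 = np.meshgrid(s, s, indexing="ij")

# R_H(s1, s2) = (s1^{2H} + s2^{2H} - |s1 - s2|^{2H}) / 2
R = 0.5 * (S1**(2 * H) + S2**(2 * H) - np.abs(S1 - S2)**(2 * H))

assert R.min() >= 0.0                       # nonnegativity (uses 2H < 1)
assert R.max() <= 1.0                       # bounded by 1 on (0, 1]^2
assert np.allclose(np.diag(R), s**(2 * H))  # variance on the diagonal
```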
Define now the stopping times $\tau_n$ by \begin{align*} \tau_n=\inf\Big\{0\le t\le 1; x_t\notin [\frac{1}{n},n]\Big\}\quad (\inf\emptyset:=\infty). \end{align*} Then we know from the proof of the existence of solutions in the previous section that $\tau_n\nearrow \infty$ for $n\rightarrow\infty$. So, \begin{align*} x^n_{t\wedge \tau_n}-x_{t\wedge \tau_n}&=\int_0^{t\wedge \tau_n}\Big\{\tilde{f}_n(s,x_s^n)-\tilde{f}(s,x_s)\Big\}ds\\ &=\int_0^t \chi_{_{[0,\tau_n)}}(s)\Big\{ \tilde{f}_n(s,x_{s\wedge\tau_n}^n)-\tilde{f}_n(s,x_{s\wedge\tau_n})\Big\}ds. \end{align*} Hence, \begin{align*} \vert x^n_{t\wedge \tau_n}-x_{t\wedge \tau_n}\vert &\le K_n\int^t_0\vert x^n_{s\wedge \tau_n}-x_{s\wedge \tau_n}\vert ds \end{align*} for a Lipschitz constant $K_n$. Then Gronwall's Lemma implies that \begin{equation*} x^n_{t\wedge \tau_n}=x_{t\wedge \tau_n} \end{equation*} for all $t$ and $n$, P-a.e. Since $\tau_n\nearrow \infty$ for $n\rightarrow\infty$ a.e., we have that \begin{equation}\label{boom1} x_t^n\underset{n\rightarrow \infty}{\longrightarrow} x_t \end{equation} for all $t$, P-a.e. Using the Clark-Ocone formula (see \cite{Nualart}), we get that \begin{align*} x^n_1=\mathbb{E}[x^n_1]+\int_0^1\mathbb{E}[D_sx^n_1\vert \mathcal{F}_s]dW_s, \end{align*} where $\{\mathcal{F}_t\}_{0\le t\le 1}$ is the filtration generated by $W_{\bigcdot}$. It follows that \begin{align*} \mathbb{E}[(x_1^n-\mathbb{E}[x^n_1])^2] &=\mathbb{E}\Big[\int_0^1\Big(\mathbb{E}[D_sx^n_1\vert \mathcal{F}_s]\Big)^2ds\Big]\\ &\le\mathbb{E}\Big[\int_0^1\mathbb{E}[(D_sx^n_1)^2\vert \mathcal{F}_s]ds\Big]=\int_0^1\mathbb{E}[(D_sx^n_1)^2]ds. \end{align*} So, we see from \eqref{boom} that \begin{align*} \mathbb{E}[(x_1^n-\mathbb{E}[x^n_1])^2]\le C(K,H)< \infty \end{align*} for all $n\ge 1$. We also have that \begin{align*} \big\vert\vert x_1^n-\mathbb{E}[x^n_1]\vert-\vert x_1-\mathbb{E}[x^n_1]\vert \big\vert\le \vert x^n_1-x_1\vert\underset{n\rightarrow \infty}{\longrightarrow} 0 \end{align*} because of \eqref{boom1}.
So, \begin{align*} \underset{n\rightarrow \infty}{\varliminf}\vert x_1^n-\mathbb{E}[x^n_1]\vert=\underset{n\rightarrow \infty}{\varliminf}\vert x_1-\mathbb{E}[x^n_1]\vert. \end{align*} Suppose that $\mathbb{E}[x_1^n]$, $n\ge 1$ is unbounded. Then there exists a subsequence $n_k$, $k\ge 1$ such that \begin{align*} \vert\mathbb{E}[x^{n_k}_1]\vert\underset{k\rightarrow \infty}{\longrightarrow} \infty. \end{align*} It follows from Fatou's Lemma and the positivity of $x_t$ that \begin{align*} \infty&=\mathbb{E}\Big[\underset{k\rightarrow \infty}{\varliminf}\Big(\big\vert x_1-\vert\mathbb{E}[x^{n_k}_1]\vert\big\vert\Big)^2\Big]\\ &\le \mathbb{E}\Big[\underset{k\rightarrow \infty}{\varliminf}\Big(\big\vert x_1-\mathbb{E}[x^{n_k}_1]\big\vert\Big)^2\Big]\\ &=\mathbb{E}\Big[\underset{k\rightarrow \infty}{\varliminf}\Big(\big\vert x^{n_k}_1-\mathbb{E}[x^{n_k}_1]\big\vert\Big)^2\Big]\\ &\le \underset{k\rightarrow \infty}{\varliminf}\mathbb{E}\Big[\big\vert x^{n_k}_1-\mathbb{E}[x^{n_k}_1]\big\vert^2\Big]\le C<\infty, \end{align*} which is a contradiction. Hence, \begin{align*} \sup_{n\ge 1}\vert \mathbb{E}[x_1^n]\vert<\infty. \end{align*} Further, we also obtain from the Burkholder-Davis-Gundy inequality and \eqref{boom} that \begin{align}\label{estim} \mathbb{E}[\vert x_1^n\vert^{2p}]&\le C_p\Big(\vert\mathbb{E}[x_1^n]\vert^{2p}+\mathbb{E}\Big[\big(\int_0^1 \mathbb{E}[D_sx_1^n\vert \mathcal{F}_s]dW_s\big)^{2p}\Big] \Big)\nonumber\\ &\le C_p\Big(\vert\mathbb{E}[x_1^n]\vert^{2p}+\mathbb{E}\Big[\big(\sup_{0\le u\le 1}\big\vert\int_0^u \mathbb{E}[D_sx_1^n\vert \mathcal{F}_s]dW_s\big\vert\big)^{2p}\Big]\Big)\nonumber\\ &\le C_p\Big(\vert\mathbb{E}[x_1^n]\vert^{2p}+m_p\mathbb{E}\Big[\Big(\int_0^1 \big(\mathbb{E}[D_sx_1^n\vert \mathcal{F}_s]\big)^2ds\Big)^{p}\Big]\Big)\nonumber\\ &\le C(p,K,H) \end{align} for $n\ge 1$.
So it follows from \eqref{boom1} and Fatou's Lemma that \begin{align*} \mathbb{E}[\vert x_1\vert^{2p}]\le \underset{n\rightarrow \infty}{\varliminf}\mathbb{E}[\vert x_1^n\vert^{2p}]\le C(p,K,H)<\infty \end{align*} for all $p\ge 1$. So we obtain the following result: \begin{theorem}\label{highermoment} Let $x_t$, $0\le t\le 1$, be the solution to \eqref{10}. Then $x_t\in L^q(\Omega)$ for all $q\ge 1$ and $0\le t\le 1$. \end{theorem} In addition, we obtain from Lemma 1.2.3 in \cite{Nualart} in connection with the estimate \eqref{estim} that $x_1$ is Malliavin differentiable in the direction of $W_{\bigcdot}$. The latter, in combination with \eqref{new1}, also entails the Malliavin differentiability of $x_1$ with respect to $B^H_{\bigcdot}.$ Thus we have also shown the following result: \begin{theorem}\label{Thrm 4.2} The positive unique strong solution $x_t$ to \eqref{10} is Malliavin differentiable in the direction of $B^H_{\bigcdot}$ and $W_{\bigcdot}$ for all $0\le t\le 1$. \end{theorem} \section{Application}\label{sec6} In this section, we aim at applying the results of the previous section to obtain a unique strong solution $x_t$ to the SDE \begin{equation}\label{app1} x_t=x_0+\int_0^t\big(\alpha_{-1}x^{-1}_s-\alpha_0+\alpha_1x_s-\alpha_2s^{2H-1}x_s^{\rho}\big)ds+\int_0^t\sigma x_s^{\theta}dB_s^H, \end{equation} $0\le t\le 1$, for $H\in(\frac{1}{3},\frac{1}{2})$, $\tilde{\theta}>0$, $\rho>1+\frac{1}{H\tilde{\theta}}$, $\sigma>0$ and $\theta:=\frac{\tilde{\theta}+1}{\tilde{\theta}}$. Here, the stochastic integral with respect to $B_{\bigcdot}^H$ is defined by \begin{equation}\label{app2} \int_0^tg(x_s)dB^H_s=\int_0^t-Hs^{2H-1}g^{\prime}(x_s)ds+\int_0^tg(x_s)d^{\circ}B_s^H \end{equation} for functions $g\in \mathcal{C}^3$. See also the second Remark \ref{appremark} below. The stochastic integral on the right hand side of \eqref{app2} is the symmetric integral with respect to $B^H_{\bigcdot}$ introduced by F. Russo and P. Vallois. See e.g.
\cite{Errami} and the references therein. Such an integral, denoted by \begin{equation} \int_0^tY_sd^{\circ}X_s, \quad t\in [0,1], \end{equation} for continuous processes $X_{\bigcdot}$, $Y_{\bigcdot}$, is defined as \begin{equation*} \lim_{\epsilon\searrow 0}\frac{1}{2\epsilon}\int_0^t\big(Y_{s+\epsilon}+Y_s\big)\big(X_{s+\epsilon}-X_s\big)ds, \end{equation*} provided this limit exists in the ucp-topology. In order to construct a solution to \eqref{app1}, we need a version of the It\^{o} formula for processes $Y_{\bigcdot}$, which have a finite cubic variation. A continuous process $Y_{\bigcdot}$ is said to have a finite strong cubic variation (or 3-variation), denoted by $[Y,Y,Y]$, if \begin{equation*} [Y,Y,Y]_t:=\lim_{\epsilon\searrow 0}\frac{1}{\epsilon}\int_0^t(Y_{s+\epsilon}-Y_s)^3ds \end{equation*} exists in ucp as well as \begin{equation*} \sup_{0< \epsilon\le 1}\frac{1}{\epsilon}\int_0^1\vert Y_{s+\epsilon}-Y_s\vert^3ds< \infty\quad \text{a.e.} \end{equation*} See \cite{Errami}. Using the concept of finite strong cubic variation, one can show the following It\^{o} formula (see \cite{Errami}). \begin{theorem}\label{apptheorem} Assume that $Y_{\bigcdot}$ is a real-valued process with finite strong cubic variation and $g\in \mathcal{C}^3$. Then \begin{equation*} g(Y_t)=g(Y_0)+\int_0^tg^{\prime}(Y_s)d^{\circ}Y_s-\frac{1}{12}\int_0^tg^{\prime\prime\prime}(Y_s)d[Y,Y,Y]_s, \quad 0\le t\le 1. \end{equation*} \end{theorem} \begin{remark} The last term on the right hand side of the equation is a Lebesgue-Stieltjes integral with respect to the bounded variation process $[Y,Y,Y]$. \end{remark} \begin{remark}\label{appremark}\leavevmode \begin{itemize} \item We mention that for $Y_{\bigcdot}=B_{\bigcdot}^H$, $H\in(\frac{1}{3},\frac{1}{2})$, $[B_{\bigcdot}^H,B_{\bigcdot}^H,B_{\bigcdot}^H]$ is zero a.e. \item If $X_{\bigcdot}=B_{\bigcdot}^H$ in \eqref{app2}, then it follows from Theorem 6.3.1 in \cite{Biagini} that our stochastic integral in \eqref{app2} equals the Wick-It\^{o}-Skorohod integral.
The latter also gives a justification for the definition of the stochastic integral in \eqref{app2} in the general case. \end{itemize} \end{remark} \begin{theorem}\label{end} Suppose that $H\in(\frac{1}{3},\frac{1}{2})$, $\tilde{\theta}>0$, $\sigma>0$ and $\rho>1+\frac{1}{H\tilde{\theta}}$. Let $\theta=\frac{\tilde{\theta}+1}{\tilde{\theta}}$. Then there exists a unique strong and positive solution to the \textup{SDE} \eqref{app1}. \end{theorem} \begin{proof} Let $y_{\bigcdot}$ be the unique strong and positive solution to \begin{equation*} y_t=x+\int_0^t\tilde{f}(s,y_s)ds-\tilde{\sigma}B_t^H, \quad 0\le t\le 1,\quad x>0, \end{equation*} where $\tilde{f}$ is defined as in \eqref{sec2:eq2}. Define $g\in \mathcal{C}^3$ by $g(y)=\frac{y^{-\tilde{\theta}}}{\tilde{\theta}}$. Then Theorem \ref{apptheorem} entails that \begin{equation*} x_t:=g(y_t)=\frac{x^{-\tilde{\theta}}}{\tilde{\theta}}+\int_0^t(-1)y_s^{-(\tilde{\theta}+1)}d^{\circ}y_s-\frac{1}{12}\int_0^tg^{\prime\prime\prime}(y_s)d[y,y,y]_s. \end{equation*} Since $[B_{\bigcdot}^H,B_{\bigcdot}^H,B_{\bigcdot}^H]$ is zero a.e. (see Remark \ref{appremark}), we observe that $[y,y,y]$ is zero a.e. So \begin{align*} x_t&=\frac{x^{-\tilde{\theta}}}{\tilde{\theta}}+\int_0^t(-1)y_s^{-(\tilde{\theta}+1)}d^{\circ}y_s\\ &=\frac{x^{-\tilde{\theta}}}{\tilde{\theta}}+\int_0^t(-1)y_s^{-(\tilde{\theta}+1)}\tilde{f}(s,y_s)ds+\int_0^t\tilde{\sigma} y_s^{-(\tilde{\theta}+1)}d^{\circ} B_s^H\\ &=\frac{x^{-\tilde{\theta}}}{\tilde{\theta}}+\int_0^t\Big\{(-1)y_s^{-(\tilde{\theta}+1)}\tilde{f}(s,y_s)-H\tilde{\sigma} s^{2H-1}y_s^{-(\tilde{\theta}+2)}(\tilde{\theta}+1)\Big\}ds+\int_0^t\tilde{\sigma} y_s^{-(\tilde{\theta}+1)}dB_s^H.
\end{align*} Since we can write $(y_s)^{-(\tilde{\theta}+1)}=\tilde{\theta}^{\theta}\Big(\frac{y_s^{-\tilde{\theta}}}{\tilde{\theta}}\Big)^{\theta}$, we now have \begin{align*} x_t&=\frac{x^{-\tilde{\theta}}}{\tilde{\theta}}+\int_0^tf(s,\frac{y_s^{-\tilde{\theta}}}{\tilde{\theta}})ds+\int_0^t\tilde{\sigma} y_s^{-(\tilde{\theta}+1)}dB_s^H\\ &=\frac{x^{-\tilde{\theta}}}{\tilde{\theta}}+\int_0^tf(s,\frac{y_s^{-\tilde{\theta}}}{\tilde{\theta}})ds+\int_0^t\tilde{\sigma} \tilde{\theta}^{\theta}(x_s)^{\theta}dB_s^H, \end{align*} where $f(s,y):=\alpha_{-1}y^{-1}-\alpha_0+\alpha_1y-\alpha_2s^{2H-1}y^{\rho}$, $s\in (0,1]$, $y\in(0,\infty)$. So $x_{\bigcdot}$ satisfies the SDE \eqref{app1} with initial value $x_0=\frac{x^{-\tilde{\theta}}}{\tilde{\theta}}$, if we choose $\tilde{\sigma}=\tilde{\theta}^{-\theta}\sigma$ for $\sigma>0$. In order to show the uniqueness of solutions to SDE \eqref{app1}, one can apply the It\^{o} formula in Theorem \ref{apptheorem} to the inverse function $g^{-1}$ given by $g^{-1}(y)=\big(\tilde{\theta}\big)^{-\frac{1}{\tilde{\theta}}} y^{-\frac{1}{\tilde{\theta}}}$, using the fact that $[B_{\bigcdot}^H,B_{\bigcdot}^H,B_{\bigcdot}^H]=0$ a.e. for $H\in(\frac{1}{3},\frac{1}{2})$. \end{proof} Finally, using the same arguments as in the proof of Theorem \ref{end}, we also get the following result for the alternative Ait-Sahalia model \eqref{intro:eq:3}: \begin{theorem}\label{final} Retain the conditions of Theorem \ref{end} with respect to $H,\tilde{\theta}, \theta$ and $\rho$. Then there exists a unique strong solution $x_t>0$ to \textup{SDE} \eqref{intro:eq:3}. \end{theorem} \begin{proof} Just as in the proof of Theorem \ref{end}, we can consider the SDE \eqref{sec2:eq1}, where the vector field $\tilde{f}$ now is given by \begin{align}\label{end:eq1} \tilde{f}(s,y)&=\alpha_{-1}(-\tilde{\theta} y^{2\tilde{\theta}+1})+\alpha_0y^{\tilde{\theta}+1}-\alpha_1\frac{y}{\tilde{\theta}}+\alpha_2 \frac{1}{\tilde{\theta}^{\rho}} y^{-\tilde{\theta}\rho+\tilde{\theta}+1} \end{align} for $0<y < \infty$.
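The algebraic identities behind the transformation, namely $g^{-1}(y)=\tilde{\theta}^{-\frac{1}{\tilde{\theta}}}y^{-\frac{1}{\tilde{\theta}}}$ and $y^{-(\tilde{\theta}+1)}=\tilde{\theta}^{\theta}\big(\frac{y^{-\tilde{\theta}}}{\tilde{\theta}}\big)^{\theta}$, are easily verified numerically (the value of $\tilde{\theta}$ below is an illustrative choice):

```python
import numpy as np

tt = 1.7                              # tilde_theta > 0, illustrative
theta = (tt + 1.0) / tt               # theta = (tilde_theta + 1)/tilde_theta

g = lambda y: y**(-tt) / tt           # g(y) = y^{-tilde_theta}/tilde_theta
g_inv = lambda x: tt**(-1.0 / tt) * x**(-1.0 / tt)

y = np.linspace(0.1, 5.0, 50)

# g_inv really inverts g on (0, infinity)
assert np.allclose(g_inv(g(y)), y)

# identity used in the proof:
# y^{-(tilde_theta+1)} = tilde_theta^theta * (y^{-tilde_theta}/tilde_theta)^theta
assert np.allclose(y**(-(tt + 1.0)), tt**theta * g(y)**theta)
```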
Then as in the proof of Corollary \ref{corona} one immediately verifies that $\tilde{f}$ satisfies the assumptions of Theorem \ref{thrm1}, which yields a unique strong solution $y_t>0$ to \eqref{sec2:eq1} in this case. In exactly the same way, we also obtain the results of Theorem \ref{highermoment} and Theorem \ref{Thrm 4.2} with respect to $\tilde{f}$ in \eqref{end:eq1}. Finally, we can apply the It\^{o} formula as in the proof of Theorem \ref{end} and construct a unique strong solution $x_t>0$ to \eqref{intro:eq:3} based on $y_{\bigcdot}$. \end{proof} \section{Appendix} \bigskip For some of the proofs in this article we need to recall some basic concepts from fractional calculus (see \cite{Lizorkin} and \cite{Samko}). Let $a,b\in \mathbb{R}$ with $a<b$. Let $f\in L^{p}([a,b])$ with $p\geq 1$ and $\alpha >0$. Then the \emph{left-} and \emph{right-sided Riemann-Liouville fractional integrals} are defined as \begin{equation*} I_{a^{+}}^{\alpha }f(x)=\frac{1}{\Gamma (\alpha )}\int_{a}^{x}(x-y)^{\alpha -1}f(y)dy \end{equation*}and \begin{equation*} I_{b^{-}}^{\alpha }f(x)=\frac{1}{\Gamma (\alpha )}\int_{x}^{b}(y-x)^{\alpha -1}f(y)dy \end{equation*}for almost all $x\in \lbrack a,b]$. Here $\Gamma $ is the Gamma function. Let $p\geq 1$ and let $I_{a^{+}}^{\alpha }(L^{p})$ (resp. $I_{b^{-}}^{\alpha }(L^{p})$) be the image of $L^{p}([a,b])$ under the operator $I_{a^{+}}^{\alpha }$ (resp. $I_{b^{-}}^{\alpha }$). If $f\in I_{a^{+}}^{\alpha }(L^{p})$ (resp. $f\in I_{b^{-}}^{\alpha }(L^{p})$) and $0<\alpha <1$, then we can define the \emph{left-} and \emph{right-sided Riemann-Liouville fractional derivatives} by \begin{equation*} D_{a^{+}}^{\alpha }f(x)=\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}\int_{a}^{x}\frac{f(y)}{(x-y)^{\alpha }}dy \end{equation*}and \begin{equation*} D_{b^{-}}^{\alpha }f(x)=-\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}\int_{x}^{b}\frac{f(y)}{(y-x)^{\alpha }}dy.
\end{equation*} The left- and right-sided derivatives of $f$ can be represented as \begin{equation*} D_{a^{+}}^{\alpha }f(x)=\frac{1}{\Gamma (1-\alpha )}\left( \frac{f(x)}{(x-a)^{\alpha }}+\alpha \int_{a}^{x}\frac{f(x)-f(y)}{(x-y)^{\alpha +1}}dy\right) \end{equation*}and \begin{equation*} D_{b^{-}}^{\alpha }f(x)=\frac{1}{\Gamma (1-\alpha )}\left( \frac{f(x)}{(b-x)^{\alpha }}+\alpha \int_{x}^{b}\frac{f(x)-f(y)}{(y-x)^{\alpha +1}}dy\right) . \end{equation*} The above definitions imply that \begin{equation*} I_{a^{+}}^{\alpha }(D_{a^{+}}^{\alpha }f)=f \end{equation*}for all $f\in I_{a^{+}}^{\alpha }(L^{p})$ and \begin{equation*} D_{a^{+}}^{\alpha }(I_{a^{+}}^{\alpha }f)=f \end{equation*}for all $f\in L^{p}([a,b])$ and similarly for $I_{b^{-}}^{\alpha }$ and $D_{b^{-}}^{\alpha }$. \bigskip Denote by $B^{H}=\{B_{t}^{H},t\in \lbrack 0,T]\}$ a $d$-dimensional \emph{fractional Brownian motion} with Hurst parameter $H\in (0,\frac{1}{2})$. The latter means that $B_{\cdot }^{H}$ is a centered Gaussian process with a covariance function given by \begin{equation*} (R_{H}(t,s))_{i,j}:=E[B_{t}^{H,(i)}B_{s}^{H,(j)}]=\delta _{ij}\frac{1}{2}\left( t^{2H}+s^{2H}-|t-s|^{2H}\right) ,\quad i,j=1,\dots ,d, \end{equation*}where $\delta _{ij}$ equals one if $i=j$ and zero otherwise. In the sequel, we also briefly recall the construction of the fractional Brownian motion, which can be found in \cite{Nualart}. For convenience, we restrict ourselves to the case $d=1$. Denote by $\mathcal{E}$ the class of step functions on $[0,T]$ and let $\mathcal{H}$ be the Hilbert space which one gets through the completion of $\mathcal{E}$ with respect to the inner product \begin{equation*} \langle 1_{[0,t]},1_{[0,s]}\rangle _{\mathcal{H}}=R_{H}(t,s). \end{equation*}The latter provides an extension of the mapping $1_{[0,t]}\mapsto B_{t}^{H}$ to an isometry between $\mathcal{H}$ and a Gaussian subspace of $L^{2}(\Omega )$ with respect to $B^{H}$. Let $\varphi \mapsto B^{H}(\varphi )$ be this isometry.
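As a numerical sanity check of the left-sided fractional integral, one can compare $I_{0^{+}}^{\alpha}$ applied to a power function against the classical formula $I_{0^{+}}^{\alpha}y^{\mu}=\frac{\Gamma(\mu+1)}{\Gamma(\mu+1+\alpha)}x^{\mu+\alpha}$ (the values $\alpha=\frac{1}{2}$, $\mu=1$, $x=0.8$ are illustrative; the substitution below removes the endpoint singularity and is specific to $\alpha=\frac{1}{2}$):

```python
import numpy as np
from math import gamma, sqrt

alpha, mu, x = 0.5, 1.0, 0.8   # illustrative values, 0 < alpha < 1

# I_{0+}^alpha f(x) = 1/Gamma(alpha) * int_0^x (x - y)^(alpha - 1) f(y) dy
# with f(y) = y^mu. For alpha = 1/2 the substitution u = sqrt(x - y) turns
# the singular integrand (x - y)^(-1/2) y^mu dy into the smooth
# 2 * (x - u^2)^mu du on [0, sqrt(x)].
u = np.linspace(0.0, sqrt(x), 100001)
g = 2.0 * (x - u**2)**mu
I_num = np.sum((g[1:] + g[:-1]) * np.diff(u)) / 2.0 / gamma(alpha)

# Classical formula: I_{0+}^alpha y^mu = Gamma(mu+1)/Gamma(mu+1+alpha) * x^(mu+alpha)
I_exact = gamma(mu + 1.0) / gamma(mu + 1.0 + alpha) * x**(mu + alpha)
assert abs(I_num - I_exact) < 1e-8
```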
If $H<\frac{1}{2}$, one finds that the covariance function $R_{H}(t,s)$ can be represented as \bigskip\ \begin{equation} R_{H}(t,s)=\int_{0}^{t\wedge s}K_{H}(t,u)K_{H}(s,u)du, \label{2.2} \end{equation}where \begin{equation} K_{H}(t,s)=c_{H}\left[ \left( \frac{t}{s}\right) ^{H-\frac{1}{2}}(t-s)^{H-\frac{1}{2}}+\left( \frac{1}{2}-H\right) s^{\frac{1}{2}-H}\int_{s}^{t}u^{H-\frac{3}{2}}(u-s)^{H-\frac{1}{2}}du\right] . \label{KH} \end{equation}Here $c_{H}=\sqrt{\frac{2H}{(1-2H)\beta (1-2H,H+\frac{1}{2})}}$ and $\beta $ is the Beta function. See \cite[Proposition 5.1.3]{Nualart}. Using the kernel $K_{H}$, one can obtain via (\ref{2.2}) an isometry $% K_{H}^{\ast }$ between $\mathcal{E}$ and $L^{2}([0,T])$ such that $(K_{H}^{\ast }1_{[0,t]})(s)=K_{H}(t,s)1_{[0,t]}(s).$ This isometry allows for an extension to the Hilbert space $\mathcal{H}$, which has the following representations in terms of fractional derivatives \begin{equation*} (K_{H}^{\ast }\varphi )(s)=c_{H}\Gamma \left( H+\frac{1}{2}\right) s^{\frac{1}{2}-H}\left( D_{T^{-}}^{\frac{1}{2}-H}u^{H-\frac{1}{2}}\varphi (u)\right) (s) \end{equation*}and \begin{align*} (K_{H}^{\ast }\varphi )(s)=& \,c_{H}\Gamma \left( H+\frac{1}{2}\right) \left( D_{T^{-}}^{\frac{1}{2}-H}\varphi (s)\right) (s) \\ & +c_{H}\left( \frac{1}{2}-H\right) \int_{s}^{T}\varphi (t)(t-s)^{H-\frac{3}{2}}\left( 1-\left( \frac{t}{s}\right) ^{H-\frac{1}{2}}\right) dt. \end{align*}for $\varphi \in \mathcal{H}$. One can also prove that $\mathcal{H}% =I_{T^{-}}^{\frac{1}{2}-H}(L^{2})$. See \cite{DU} and \cite[Proposition 6]% {alos}. We know that $K_{H}^{\ast }$ is an isometry from $\mathcal{H}$ into $% L^{2}([0,T])$. Thus, the $d$-dimensional process $W=\{W_{t},t\in \lbrack 0,T]\}$ defined by \begin{equation} W_{t}:=B^{H}((K_{H}^{\ast })^{-1}(1_{[0,t]})) \label{WBH} \end{equation}is a Wiener process and the process $B^{H}$ has the representation \begin{equation} B_{t}^{H}=\int_{0}^{t}K_{H}(t,s)dW_{s}. 
\label{BHW} \end{equation} \begin{thebibliography}{99} \bibitem{Karatzas} Karatzas, I., Shreve, S.E., 1998. Methods of Mathematical Finance. Springer-Verlag. \bibitem{Lamberton} Lamberton, D. and Lapeyre, B., 2011. Introduction to Stochastic Calculus Applied to Finance. CRC press. \bibitem{Gatheral} Gatheral, J., Jaisson, T. and Rosenbaum, M., 2018. Volatility is rough. Quantitative Finance, 18(6), pp.933-949. \bibitem{Nualart} Nualart, D., 2006. The Malliavin Calculus and Related Topics (Vol. 1995, p. 317). Berlin: Springer. \bibitem{amine} Amine, O., Coffie, E., Harang, F. and Proske, F., 2020. A Bismut–Elworthy–Li formula for singular SDEs driven by a fractional Brownian motion and applications to rough volatility modelling. Communications in Mathematical Sciences, 18(7), pp.1863-1890. \bibitem{Coffie} Coffie, E., Duedahl, S. and Proske, F., 2021. Sensitivity Analysis with respect to a stochastic stock price model with rough volatility via a Bismut-Elworthy-Li Formula for Singular SDEs. arXiv preprint arXiv:2107.06022. \bibitem{alos} Alòs, E., Mazet, O. and Nualart, D., 2000. Stochastic calculus with respect to fractional Brownian motion with Hurst parameter lesser than 1/2. Stochastic processes and their applications, 86(1), pp.121-139. \bibitem{ait} Ait-Sahalia, Y., 1996. Testing continuous-time models of the spot interest rate. The review of financial studies, 9(2), pp.385-426. \bibitem{Szpruch} Szpruch, L., Mao, X., Higham, D.J. and Pan, J., 2011. Numerical simulation of a strongly nonlinear Ait-Sahalia-type interest rate model. BIT Numerical Mathematics, 51(2), pp.405-425. \bibitem{dung} Dung, N.T., 2016. Tail probabilities of solutions to a generalized Ait-Sahalia interest rate model. Statistics and Probability Letters, 112, pp.98-104. \bibitem{Emma} Coffie, E. and Mao, X., 2021. Truncated EM numerical method for generalised Ait-Sahalia-type interest rate model with delay. Journal of Computational and Applied Mathematics, 383, pp.113-1372. 
\bibitem{DU} Decreusefond, L. and Ustunel, A.S., 1998. Stochastic analysis of the fractional Brownian motion. Potential Analysis 10, pp.177-214. \bibitem{Zhang} Zhang, S.Q. and Yuan, C., 2021. Stochastic differential equations driven by fractional Brownian motion with locally Lipschitz drift and their implicit Euler approximation. Proceedings of the Royal Society of Edinburgh Section A: Mathematics, 151(4), pp.1278-1304. \bibitem{Errami} Errami, M. and Russo, F., 2003. n-covariation, generalized Dirichlet processes and calculus with respect to finite cubic variation processes. Stochastic Processes and their Applications, 104(2), pp.259-299. \bibitem{Biagini} Biagini, F., Hu, Y., Øksendal, B. and Zhang, T., 2008. Stochastic Calculus for Fractional Brownian motion and Applications. Springer Science \& Business Media. \bibitem{Lizorkin} Lizorkin, P.I., 2001. Fractional integration and differentiation. Encyclopedia of Mathematics, Springer. \bibitem{Samko} Samko, S.G., Kilbas, A.A. and Marichev, O.I., 1993. Fractional integrals and derivatives. Yverdon-les-Bains, Switzerland: Gordon and breach science publishers, Yverdon. \end{thebibliography} \end{document}
2205.00702v3
http://arxiv.org/abs/2205.00702v3
Foliations on Shimura varieties in positive characteristic
\documentclass[oneside,english,american]{amsart} \usepackage[T1]{fontenc} \usepackage[latin9]{inputenc} \usepackage{textcomp} \usepackage{mathrsfs} \usepackage{amstext} \usepackage{amsthm} \usepackage{amssymb} \usepackage{graphicx} \makeatletter \numberwithin{equation}{section} \numberwithin{figure}{section} \theoremstyle{plain} \newtheorem{thm}{\protect\theoremname}[subsection] \theoremstyle{plain} \newtheorem{lem}[thm]{\protect\lemmaname} \theoremstyle{plain} \newtheorem{prop}[thm]{\protect\propositionname} \theoremstyle{plain} \newtheorem{cor}[thm]{\protect\corollaryname} \theoremstyle{remark} \newtheorem*{rem*}{\protect\remarkname} \theoremstyle{plain} \newtheorem*{prop*}{\protect\propositionname} \theoremstyle{definition} \newtheorem{defn}[thm]{\protect\definitionname} \theoremstyle{plain} \newtheorem*{thm*}{\protect\theoremname} \theoremstyle{remark} \newtheorem{rem}[thm]{\protect\remarkname} \usepackage[all]{xy} \makeatother \usepackage{babel} \addto\captionsamerican{\renewcommand{\corollaryname}{Corollary}} \addto\captionsamerican{\renewcommand{\definitionname}{Definition}} \addto\captionsamerican{\renewcommand{\lemmaname}{Lemma}} \addto\captionsamerican{\renewcommand{\propositionname}{Proposition}} \addto\captionsamerican{\renewcommand{\remarkname}{Remark}} \addto\captionsamerican{\renewcommand{\theoremname}{Theorem}} \addto\captionsenglish{\renewcommand{\corollaryname}{Corollary}} \addto\captionsenglish{\renewcommand{\definitionname}{Definition}} \addto\captionsenglish{\renewcommand{\lemmaname}{Lemma}} \addto\captionsenglish{\renewcommand{\propositionname}{Proposition}} \addto\captionsenglish{\renewcommand{\remarkname}{Remark}} \addto\captionsenglish{\renewcommand{\theoremname}{Theorem}} \providecommand{\corollaryname}{Corollary} \providecommand{\definitionname}{Definition} \providecommand{\lemmaname}{Lemma} \providecommand{\propositionname}{Proposition} \providecommand{\remarkname}{Remark} \providecommand{\theoremname}{Theorem} \usepackage[table]{xcolor} 
\newcommand{\red}{\textcolor{red}} \usepackage{MnSymbol} \begin{document} \title{Foliations on Shimura varieties in positive characteristic} \author{Eyal Z. Goren and Ehud de Shalit} \keywords{\selectlanguage{english}Shimura variety, foliation} \subjclass[2000]{\selectlanguage{english}11G18, 14G35} \address{\selectlanguage{english}Eyal Z. Goren, McGill University, Montr\'eal, Canada} \address{\selectlanguage{english}[email protected]} \address{\selectlanguage{english}Ehud de Shalit, Hebrew University of Jerusalem, Israel} \address{\selectlanguage{english}[email protected]} \selectlanguage{american}\begin{abstract} This paper is a continuation of \cite{G-dS1}. We study foliations of two types on Shimura varieties $S$ in characteristic $p$. The first, which we call ``tautological foliations'', are defined on Hilbert modular varieties, and lift to characteristic $0$. The second, the ``$V$-foliations'', are defined on unitary Shimura varieties in characteristic $p$ only, and generalize the foliations studied by us before, when the CM field in question was quadratic imaginary. We determine when these foliations are $p$-closed, and the locus where they are smooth. Where not smooth, we construct a ``successive blow up'' of our Shimura variety to which they extend as smooth foliations. We discuss some integral varieties of the foliations. We relate the quotient of $S$ by the foliation to a purely inseparable map from a certain component of another Shimura variety of the same type, with parahoric level structure at $p$, to $S.$ \tableofcontents{} \end{abstract} \maketitle \vspace{-1cm} \section{Introduction} Let $S$ be a non-singular variety over a field $k$, and $\mathcal{T}$ its tangent bundle. A (smooth) foliation on $S$ is a vector sub-bundle of $\mathcal{T}$ that is closed under the Lie bracket.
If $\text{\rm char}(k)=p>0$, a $p$-foliation is a foliation that is, in addition, closed under the operation $\xi\mapsto\xi^{p}.$ As explained below, such $p$-foliations play an important role in studying purely inseparable morphisms $S\to S'.$ Foliations have been studied in many contexts, and our purpose here is to explore certain $p$-foliations on Shimura varieties of PEL type, which bear a relation to the underlying moduli problem. The connection between the two topics can go either way. It can be seen as using the rich geometry of Shimura varieties to produce interesting examples of foliations, or, in the other direction, as harnessing a new tool to shed light on some geometrical aspects of these Shimura varieties, especially in characteristic $p$. We study two types of foliations, which could well turn out to be particular cases of a more general theory. The first lie on Hilbert modular varieties. Let $S$ be a Hilbert modular variety associated with a totally real field $L,$ $[L:\mathbb{Q}]=g$ (see the text for details). Let $\Sigma$ be any proper non-empty subset of $\mathscr{I}=\mathrm{Hom}(L,\mathbb{R}).$ Fixing an embedding of $\bar{\mathbb{Q}}$ in $\bar{\mathbb{Q}}_{p}$ we view $\mathscr{I}$ also as the set of embeddings of $L$ in $\bar{\mathbb{Q}}_{p}.$ If $p$ is unramified in $L$ these embeddings end up in $\mathbb{Q}_{p}^\text{\rm nr}$ and the Frobenius automorphism $\phi\in\mathrm{Gal}(\mathbb{Q}_{p}^\text{\rm nr}/\mathbb{Q}_{p})$ permutes them.
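To make this permutation concrete, here is a small combinatorial sketch (our illustration, with ad hoc names; not part of the paper): if $p\mathcal{O}_{L}=\mathfrak{p}_{1}\cdots\mathfrak{p}_{r}$ with inertia degrees $f_{1},\dots,f_{r}$, the $g=f_{1}+\cdots+f_{r}$ embeddings of $L$ into $\mathbb{Q}_{p}^\text{\rm nr}$ can be indexed by pairs $(i,j)$ with $j\in\mathbb{Z}/f_{i}$, and $\phi$ acts by shifting $j$, i.e. cyclically within the block of embeddings inducing $\mathfrak{p}_{i}$.

```python
# Toy model (ours): embeddings of L into Q_p^nr, grouped by the primes above p.
# An embedding is a pair (i, j): i indexes the prime p_i, j its phi-orbit slot.

def embeddings(inertia_degrees):
    """All embeddings, as pairs (prime index, position in the phi-orbit)."""
    return [(i, j) for i, f in enumerate(inertia_degrees) for j in range(f)]

def frobenius(emb, inertia_degrees):
    i, j = emb
    return (i, (j + 1) % inertia_degrees[i])

def orbit(emb, inertia_degrees):
    """The phi-orbit of an embedding: the block of its prime."""
    seen, cur = [], emb
    while cur not in seen:
        seen.append(cur)
        cur = frobenius(cur, inertia_degrees)
    return set(seen)

def is_frobenius_invariant(sigma, inertia_degrees):
    return {frobenius(e, inertia_degrees) for e in sigma} == set(sigma)

# Example: g = 5, with p*O_L = p_1 p_2, f(p_1/p) = 2, f(p_2/p) = 3.
fs = [2, 3]
B = embeddings(fs)
assert len(B) == 5
assert orbit((0, 0), fs) == {(0, 0), (0, 1)}          # the block of p_1
assert is_frobenius_invariant({(0, 0), (0, 1)}, fs)   # a whole block is stable
assert not is_frobenius_invariant({(0, 0)}, fs)       # a singleton with f > 1 moves
```

The $\phi$-stable subsets $\Sigma$ are exactly the unions of such orbits; this is the combinatorial condition that reappears below when asking which foliations $\mathscr{F}_{\Sigma}$ are $p$-closed.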
Consider the uniformization $\Gamma\setminus\mathfrak{H}^{\mathscr{I}}\simeq S(\mathbb{C}),$ describing the complex points of $S$ as a quotient of the product of $g$ copies of the upper half plane (indexed by $\mathscr{I}$) by an arithmetic subgroup of $\text{\rm SL}_{2}(L).$ The foliation $\mathscr{F}_{\Sigma}$, labelled by the subset $\Sigma$, is defined most easily complex analytically: at any point $x\in S(\mathbb{C})$ its fiber is spanned by $\partial/\partial z_{i}$ for $i\in\Sigma\subset\mathscr{I}.$ Our main results about it are the following: \begin{itemize} \item Using the Kodaira-Spencer isomorphism, $\mathscr{F}_{\Sigma}$ can be defined algebraically, hence also in the characteristic $p$ fiber of $S$, for any good unramified rational prime $p.$ Its ``reduction modulo $p$'' is a smooth $p$-foliation if and only if the subset $\Sigma$ is invariant under Frobenius. \item If the singleton $\{\sigma\}$ is not Frobenius-invariant (i.e. the corresponding prime of $L$ is not of absolute degree 1), the obstruction to $\mathscr{F}_{\{\sigma\}}$ being $p$-closed can be identified with the square of a partial Hasse invariant. \item Let $\mathfrak{p}$ be a prime of $L$ dividing $p$. Let $S_0(\mathfrak{p})$ be the special fiber of the integral model of the Hilbert modular variety with $\Gamma_0(\mathfrak{p})$-level structure studied in \cite{Pa}. When $\Sigma$ consists of all the embeddings \textit{not} inducing $\mathfrak{p}$, the (purely inseparable) quotient of $S$ by the foliation $\mathscr{F}_{\Sigma}$ can be identified with a certain irreducible component of $S_0(\mathfrak{p})$. \item It is easy to see that $\mathscr{F}_{\Sigma}$ does not have any integral varieties in characteristic~$0$. In contrast, as we show below, any $p$-foliation in characteristic $p$ admits a plentiful supply of integral varieties. In our case, we show that certain Goren-Oort strata in the reduction modulo $p$ of $S$ are integral varieties of the foliation $\mathscr{F}_{\Sigma}$. 
\end{itemize} The second class of foliations studied in this paper lives on unitary Shimura varieties $M$ of arbitrary signature, associated with a CM field $K$. They generalize the foliations studied in \cite{G-dS1} when $K$ was quadratic imaginary, and are again labelled by subsets $\Sigma$ of $\mathscr{I}^{+}=\mathrm{Hom}(L,\mathbb{R})$ where $L$ is the totally real subfield of~$K$. Unlike the foliations of the first type, they are particular to the characteristic $p$ fibers, where~$p$ is again a good unramified prime, and are of different genesis. They are defined using the Verschiebung isogeny of the universal abelian scheme over the $\mu$-ordinary locus $M^\text{\rm ord}$ of $M$. Their study relies to a great extent on the work of Wedhorn and Moonen cited in the bibliography. We refer to the text for the precise definition of the foliation denoted by $\mathscr{F}_{\Sigma}$, as it is a bit technical. A rough description is this: In the $p$-kernel of the universal abelian scheme over $M^\text{\rm ord}$ lives an important subgroup scheme, which played a special role in the work of Oort and his school, namely the maximal subgroup scheme of $\alpha_{p}$-type. Its cotangent space serves to define the $\mathscr{F}_{\Sigma}$ via the Kodaira-Spencer isomorphism. The main results concerning $\mathscr{F}_{\Sigma}$ are the following: \begin{itemize} \item $\mathscr{F}_{\Sigma}$ is a smooth $p$-foliation on $M^\text{\rm ord},$ regardless of what $\Sigma$ is. Involutivity follows from the flatness of the Gauss-Manin connection, but being $p$-closed is more delicate, and is a consequence of a theorem of Cartier on the $p$-curvature of that connection. \item Although in general more complicated than the relation found in \cite{G-dS1}, one can work out explicitly the relation between the foliation $\mathscr{F}_{\Sigma}$ and the ``cascade structure'' defined by Moonen \cite{Mo} on the formal completion $\widehat{M}_{x}$ of $M$ at a $\mu$-ordinary point $x$. 
While the cascade structure does not globalize, its ``trace'' on the tangent space, constructed from the foliations $\mathscr{F}_{\Sigma},$ does globalize neatly. \item There is a maximal open subset $M_{\Sigma}\subset M$ to which $\mathscr{F}_{\Sigma}$ extends as a smooth $p$-foliation with the same definition used to define it on $M^\text{\rm ord}.$ This $M_{\Sigma}$ is a union of Ekedahl-Oort (EO) strata, and in fact consists of all the strata containing in their closure a smallest one, denoted $M_{\Sigma}^\text{\rm fol}.$ The description of which EO strata participate in $M_{\Sigma}$ is given combinatorially in terms of ``shuffles'' in the Weyl group. \item Outside $M_{\Sigma}$ the foliation $\mathscr{F}_{\Sigma}$ acquires singularities, but we construct a ``successive blow up'' $\beta:M^{\Sigma}\to M$, which is an isomorphism over $M_{\Sigma}$, to which the lifting of $\mathscr{F}_{\Sigma}$ extends as a smooth $p$-foliation. This $M^{\Sigma}$ is an interesting (characteristic $p$) moduli problem in its own right. It is non-singular, and the extension of $\mathscr{F}_{\Sigma}$ to it is transversal to the fibers of $\beta$. \item When $K$ is quadratic imaginary, the EO stratum $M_{\Sigma}^\text{\rm fol}$ was proved to be an integral variety (in the sense of foliations) of $\mathscr{F}_{\Sigma}$. A similar result is expected here when $\Sigma=\mathscr{I}^{+}.$ This can probably be proved via elaborate Dieudonn\'e module computations, as in \cite{G-dS1}, but in this paper we content ourselves with checking that the dimensions match. \item A natural and interesting question is to identify the purely inseparable quotient of $M^{\Sigma}$ by (the extended) $\mathscr{F}_{\Sigma}$ with a certain irreducible component of the special fiber of the Rapoport-Zink model of a unitary Shimura variety with parahoric level structure at $p$.
This was done in our earlier paper when~$K$ was quadratic imaginary, and was used to obtain some new results on the geometry of that particular irreducible component. In the general case treated in this paper, we know of a natural candidate with which we would like to identify the quotient of $M^{\Sigma}$ by $\mathscr{F}_{\Sigma}.$ However, following the path set in \cite{G-dS1} for a general CM field $K$ would require a significant amount of work, and we leave this question for a future paper. \end{itemize} \ Section 2 is a brief review of general results on foliations, especially in characteristic~$p$. The main two sections of the paper, Sections 3 and 4, are devoted to the two types of foliations, respectively. \subsubsection{Notation} \begin{itemize} \item For any commutative $\mathbb{F}_{p}$-algebra $R$ we let $\phi:R\to R$ be the homomorphism $\phi(x)=x^{p}.$ \item If $S$ is a scheme over $\mathbb{F}_{p},$ $\Phi_{S}$ denotes its absolute Frobenius morphism. It is given by the identity on the underlying topological space of $S$, and by the map $\phi$ on its structure sheaf. If $\mathcal{H}$ is an $\mathcal{O}_{S}$-module then we write $\mathcal{H}^{(p)}$ (or $\mathcal{H}^{(p)/S}$) for $\Phi_{S}^{*}\mathcal{H}=\mathcal{O}_{S}\otimes_{\phi,\mathcal{O}_{S}}\mathcal{H}$. 
\item If $T\to S$ is a morphism of schemes over $\mathbb{F}_{p},$ $T^{(p)}$ (or $T^{(p)/S}$) is $S\times_{\Phi_{S},S}T$ and $\text{\rm Fr}_{T/S}:T\to T^{(p)}$ is the relative Frobenius morphism, characterized by the relation $pr_{2}\circ \text{\rm Fr}_{T/S}=\Phi_{T}.$ \item If $A\to S$ is an abelian scheme and $A^{t}\to S$ is its dual, then $\text{\rm Fr}_{A/S}$ is an isogeny and Verschiebung $\text{\rm Ver}_{A/S}:A^{(p)}\to A$ is the dual isogeny of $\text{\rm Fr}_{A^{t}/S}.$ \item If $\mathcal{H}$ is an $\mathcal{O}_{S}$-module with $\mathcal{O}$ action (for some ring $\mathcal{O}$), and $\tau:\mathcal{O}\to\mathcal{O}_{S}$ is a homomorphism, then \[ \mathcal{H}[\tau]=\{\alpha\in\mathcal{H}|\,\forall a\in\mathcal{O}\,\,a.\alpha=\tau(a)\alpha\}. \] If $T:\mathcal{H}\to\mathcal{G}$ is a homomorphism of sheaves of modules, we denote $\ker T=\mathcal{H}[T].$ \item If $x\in S$, the fiber of $\mathcal{H}$ at $x$ is denoted $\mathcal{H}_{x}.$ This is a vector space over the residue field $k(x).$ The same notation is used for the fiber $\mathcal{H}_{x}=x^{*}\mathcal{H}$ at a geometric point $x:\text{\rm Spec}(k)\to S$. \item If $\mathcal{H}^{\vee}$ is the dual of a locally free $\mathcal{O}_{S}$-module $\mathcal{H}$ we denote the pairing $\mathcal{H}^{\vee}\times\mathcal{H}\to\mathcal{O}_{S}$ by $\left\langle ,\right\rangle .$ \item By the Dieudonn\'e module of a $p$-divisible group over a perfect field $k$ in characteristic $p$, or of a finite commutative $p$-torsion group scheme over $k,$ we understand its \emph{contravariant} Dieudonn\'e module. \end{itemize} \subsubsection{Acknowledgements} We would like to thank N. Shepherd-Barron and R. Taylor for sharing their unpublished manuscript \cite{E-SB-T} with us. This research was supported by ISF grant 276.17 and NSERC grant 223148 and the hospitality of McGill University and the Hebrew University.
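A quick sanity check on the first item of the notation list above (an illustration in code, not part of the paper): $\phi(x)=x^{p}$ is additive, hence a ring homomorphism on any commutative $\mathbb{F}_{p}$-algebra, because $p$ divides $\binom{p}{k}$ for $0<k<p$.

```python
# "Freshman's dream" check (our illustration): (a+b)^p = a^p + b^p mod p,
# because the middle binomial coefficients C(p, k), 0 < k < p, vanish mod p.
from math import comb

for p in (2, 3, 5, 7):
    assert all(comb(p, k) % p == 0 for k in range(1, p))
    # hence x -> x^p is additive on Z/p (multiplicativity is clear)
    assert all(pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p
               for a in range(p) for b in range(p))
```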
\section{Generalities on Foliations} \subsection{Smooth foliations} Let $k$ be a perfect field and $S$ a $d$-dimensional smooth $k$-variety, $d>0$. Let $\mathcal{T}$ denote the tangent bundle of $S$. If $U\subset S$ is Zariski open, then sections $\xi\in\mathcal{T}(U)$ are \emph{vector fields} on $U$ and act on $\mathcal{O}_{S}(U)$ as derivations. The space $\mathcal{T}(U)$ has a structure of a Lie algebra (of infinite dimension) over $k,$ when we define \[ [\xi,\eta](f)=\xi(\eta(f))-\eta(\xi(f)). \] A \emph{foliation $\mathcal{F}$} on $S$ is a saturated subsheaf $\mathcal{F}\subset\mathcal{T}$ closed under the Lie bracket. For every Zariski open set $U,$ the vector fields on $U$ along the foliation form a saturated $\mathcal{O}_{S}(U)$-submodule $\mathcal{F}(U)\subset\mathcal{T}(U)$ closed under the Lie bracket, i.e. if $0\ne f\in\mathcal{O}_{S}(U),$ $\xi\in\mathcal{T}(U)$ and $f\xi\in\mathcal{F}(U)$ then $\xi\in\mathcal{F}(U),$ and if $\xi,\eta\in\mathcal{F}(U)$ then $[\xi,\eta]\in\mathcal{F}(U).$ The foliation $\mathcal{F}$ is called \emph{smooth} if it is a vector sub-bundle of $\mathcal{T},$ namely if both~$\mathcal{F}$ and $\mathcal{T}/\mathcal{F}$ are locally free sheaves. Since $\mathcal{F}$ is assumed to be saturated, and since a torsion-free finite module over a discrete valuation ring is free, the locus $Sing(\mathcal{F})$ where $\mathcal{F}$ is \emph{not} smooth is a closed subset of $S$ of codimension $\ge 2$. As an example, the vector field $x\partial/\partial x+y\partial/\partial y$ generates a rank-1 foliation on $\mathbb{A}^{2},$ whose singular set is the origin. If $k=\mathbb{C}$ then by a well-known theorem of Frobenius every $x\in S-Sing(\mathcal{F})$ has a \emph{classical} open neighborhood $V\subset S(\mathbb{C})$ which can be decomposed into a disjoint union of parallel leaves $L$ of the foliation.
Each leaf $L$ is a smooth complex submanifold of $V$ and if $y\in L$ then the tangent space $T_{y}L\subset T_{y}S=\mathcal{T}_{y}$ is just the fiber of $\mathcal{F}$ at $y$. One says that $L$ is an \emph{integral subvariety} of the foliation. Whether a smooth foliation has an \emph{algebraic} integral subvariety passing through a given point $x$, and the classification of all such integral subvarieties, is in general a hard problem, see \cite{Bo}. There is a rich literature on foliations on algebraic varieties. See, for example, the book \cite{Br}. \subsection{Foliations in positive characteristic} \subsubsection{$p$-foliations and purely inseparable morphisms of height 1} If $\text{\rm char}(k)=p$ is positive then $\mathcal{T}$ is a $p$-Lie algebra, namely if $\xi\in\mathcal{T}(U)$ then $\xi^{p}=\xi\circ\cdots\circ\xi$ (composition $p$ times) is also a derivation, hence lies in $\mathcal{T}(U).$ A foliation $\mathcal{F}$ is called a $p$\emph{-foliation} if it is $p$-\emph{closed}: whenever $\xi\in\mathcal{F}(U)$, then $\xi^{p}\in\mathcal{F}(U)$ as well. The interest in $p$-foliations in characteristic $p$ stems from their relation to purely inseparable morphisms of height 1. The following theorem has its origin in Jacobson's \emph{inseparable Galois theory} for field extensions (\cite[\S8]{Jac}). We denote by $\phi:k\to k$ the $p$-power map, by $S^{(p)}=k\times_{\phi,k}S$ the $p$-transform of $S$, and by \[ \text{\rm Fr}_{S/k}:S\to S^{(p)} \] the relative Frobenius morphism. We denote by $\Phi_{S}:S\to S$ the absolute Frobenius of $S.$ Thus, \[ \Phi_{S}=pr_{2}\circ \text{\rm Fr}_{S/k}, \] where $pr_{2}:S^{(p)}=k\times_{\phi,k}S\to S$ is the projection onto the second factor. \begin{thm} \label{thm:Quotient by foliation}\cite{Ek} Let $k$ be a perfect field, $\text{\rm char}(k)=p.$ Let $S$ be a smooth $k$-variety and denote by $\mathcal{T}$ its tangent bundle.
There exists a one-to-one correspondence between smooth $p$-foliations $\mathcal{F}\subset\mathcal{T}$ and factorizations of the relative Frobenius morphism $\text{\rm Fr}_{S/k}=g\circ f,$ \[ S\overset{f}{\to}T\overset{g}{\to}S^{(p)}, \] where $T$ is a smooth $k$-variety (equivalently, where $f$ and $g$ are finite and flat). We call $T$ the quotient of $S$ by the foliation $\mathcal{F}$. Given $\mathcal{F}$, if (locally) $S=\text{\rm Spec}(A)$, then $T=\text{\rm Spec}(B)$ where $B=A^{\mathcal{F}=0}$ is the subring annihilated by $\mathcal{F}$, and $f$ is induced by the inclusion $B\subset A.$ Conversely, given a factorization as above, then $\mathcal{F}=\ker(df)$ where $df$ is the map induced by~$f$ on the tangent bundle. Furthermore, if $r=\mathrm{rk}(\mathcal{F})$ then $\deg(f)=p^{r}.$ \end{thm} As mentioned above, the birational version of this theorem is due to Jacobson. From this version one deduces rather easily a correspondence as in the theorem, when $T$ is only assumed to be \emph{normal}, and $\mathcal{F}$ is saturated, closed under the Lie bracket and $p$-closed, but not assumed to be smooth. The main difficulty is in showing that $T$ is smooth if and only if $\mathcal{F}$ is smooth, i.e. locally a direct summand everywhere. The reference \cite{Ek} only cites the work of Yuan \cite{Yuan} and of Kimura and Niitsuma \cite{Ki-Ni}, but does not give the details. The proof in the book \cite{Mi-Pe} seems to be wrong. A full account may be found in \cite{Li}. \subsubsection{The obstruction to being $p$-closed\label{subsec:obstruction p closed}} \label{subsubsec: obstruction} If $\mathcal{F}\subset\mathcal{T}$ is a smooth foliation, the map $\xi\mapsto\xi^{p}\mod\mathcal{F}$ induces an $\mathcal{O}_{S}$-linear map of vector bundles \[ \kappa_{\mathcal{F}}:\Phi_{S}^{*}\mathcal{F}\to\mathcal{T}/\mathcal{F} \] which is identically zero if and only if $\mathcal{F}$ is $p$-closed. See \cite{Ek}, Lemma 4.2(ii).
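To see the map $\xi\mapsto\xi^{p}\bmod\mathcal{F}$ in action, here is a toy computation (our illustration, with ad hoc code names; not taken from \cite{Ek}): on $\mathbb{A}^{2}=\text{\rm Spec}(\mathbb{F}_{p}[x,y])$ let $\mathcal{F}$ be spanned by $\xi=\partial/\partial x+x\,\partial/\partial y$. A derivation of $\mathbb{F}_{p}[x,y]$ is determined by its values on $x$ and $y$, so $p$-closedness can be tested by applying $\xi$ $p$ times to the coordinates and reducing mod $p$.

```python
# Polynomials in x, y are stored as {(i, j): coeff} for coeff * x^i * y^j.

def d_dx(f):
    return {(i - 1, j): i * c for (i, j), c in f.items() if i > 0}

def d_dy(f):
    return {(i, j - 1): j * c for (i, j), c in f.items() if j > 0}

def mul_x(f):
    return {(i + 1, j): c for (i, j), c in f.items()}

def add(f, g):
    h = dict(f)
    for m, c in g.items():
        h[m] = h.get(m, 0) + c
    return h

def xi(f):                      # xi = d/dx + x d/dy
    return add(d_dx(f), mul_x(d_dy(f)))

def xi_power(f, n, p):
    """Apply xi n times, then reduce coefficients mod p."""
    for _ in range(n):
        f = xi(f)
    return {m: c % p for m, c in f.items() if c % p}

x, y = {(1, 0): 1}, {(0, 1): 1}
# p = 2: xi^2(x) = 0 and xi^2(y) = 1, so xi^2 = d/dy as a derivation; d/dy is
# not a section of F = <xi>, so F is NOT 2-closed: the obstruction is nonzero.
assert xi_power(x, 2, 2) == {} and xi_power(y, 2, 2) == {(0, 0): 1}
# p >= 3: xi^p kills both x and y, so xi^p = 0 lies in F and F IS p-closed.
for p in (3, 5, 7):
    assert xi_power(x, p, p) == {} and xi_power(y, p, p) == {}
```

So for this $\xi$ the obstruction is nonzero exactly when $p=2$; for $p\ge3$ the same line bundle is a $p$-foliation.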
We call the map $\kappa_{\mathcal{F}}$ the \emph{obstruction to being} $p$-\emph{closed}. \subsection{Integral varieties in positive characteristic} \label{sec:general integral} In contrast to the situation in characteristic 0, integral varieties of $p$-foliations in characteristic $p$ abound, and are easily described. The goal of this section is to clarify their construction.\footnote{This construction shows that the conjecture ``of Andr\'e-Oort type'', suggested in 5.3 of \cite{G-dS1}, is far from being true.} As in the previous section we let $S$ be a smooth $d$-dimensional quasi-projective variety over a perfect field $k$ of characteristic $p,$ $\mathcal{T}=TS$ its tangent bundle, and $\mathcal{F}$ a smooth $p$-foliation of rank $r$. We denote by $S\overset{f}{\to}T$ the quotient of $S$ by $\mathcal{F}$, as in Theorem~\ref{thm:Quotient by foliation}. Let $\mathcal{G}=\mathcal{T}/\mathcal{F}=\mathrm{Im}(df).$ This is a smooth $p$-foliation of rank $d-r$ on $T$ and the quotient of $T$ by $\mathcal{G}$ is $T\overset{g}{\to}S^{(p)}.$ The factorizations of the relevant Frobenii are $\text{\rm Fr}_{S/k}=g\circ f$ and $\text{\rm Fr}_{T/k}=f^{(p)}\circ g.$ \begin{defn} Let $\iota:W\hookrightarrow S$ be a closed subvariety of $S$ and $W^{\mathrm{sm}}$ the (open dense) smooth locus in $W$. We say that $W$ is an \emph{integral variety} of $\mathcal{F}$ if at every $x\in W^{\mathrm{sm}}$ we have $T_{x}W=\iota^{*}\mathcal{F}_{x}$ (as subspaces of $\iota^{*}T_{x}S$). In this case $\dim W=r.$ We say that $W$ is \emph{transversal} to $\mathcal{F}$ at $x\in W^{\mathrm{sm}}$ if $T_{x}W\cap\iota^{*}\mathcal{F}_{x}=0,$ and that it is \emph{generically transversal to $\mathcal{F}$} if the set of points where it is transversal is a dense open set of $W.$ \end{defn} \begin{rem} Unlike the case of characteristic 0, an integral subvariety of a smooth $p$-foliation need not be smooth. Consider, for example, the foliation generated by $\partial/\partial v$ on $\mathbb{A}^{2}=Spec(k[u,v])$.
The irreducible curve \[ u(u+v^{p})+v^{2p}=0 \] is an integral curve of the foliation, but is singular at $x=(0,0)$. The curve $u-v^{2}=0$ is generically transversal to the same foliation, although it is not transversal to it at $x$. If $S=Spec(A)$ and $W=Spec(A/I)$ for a prime ideal $I$, then regarding $\mathcal{F}$ as a submodule of the module of derivations of $A$ over $k,$ $W$ is an integral variety of $\mathcal{F}$ if and only if $\mathcal{F}(I)\subset I.$ \end{rem} \begin{prop} Let $\iota:W\hookrightarrow S$ be a closed $r$-dimensional subvariety of $S$ and $Z=f(W)\hookrightarrow T$ the corresponding subvariety of $T$ (also $r$-dimensional). Then the following are equivalent: \end{prop} \begin{enumerate} \item $f_{W}:W\to Z$ is purely inseparable of degree $p^{r}$; \item $W$ is an integral variety of $\mathcal{F}$; \item $g_{Z}:Z\to W^{(p)}$ is a birational isomorphism; \item $Z$ is generically transversal to $\mathcal{G}.$ \end{enumerate} \begin{proof} Remark first that since $g_{Z}\circ f_{W}=\text{\rm Fr}_{W/k}$ induces a purely inseparable field extension $k(W^{(p)})\subset k(W)$ of degree $p^{r},$ (1) and (3) are equivalent, and in fact are equivalent to $g_{Z}$ being separable (generically \'etale). Let $x\in W^{\mathrm{sm}}$ be such that $y=f(x)\in Z^{\mathrm{sm}}.$ The commutative diagram \[\xymatrix@M=0.3cm{T_{x}W \ar[d]_{df_{W,x}} \ar@{^{(}->}[r] & T_{x}S\ar[d]^{df_x}\\ T_{y}Z \ar@{^{(}->}[r] & T_{y}T } \] and the fact that $\ker(df)=\mathcal{F}$ tell us that \[ \ker(df_{W})=TW^{\mathrm{sm}}\cap\iota^{*}\mathcal{F}. \] It follows that $f_{W}$ is purely inseparable of degree $p^{r}$ ($df_{W}=0$) if and only if $TW^{\mathrm{sm}}$ and $\iota^{*}\mathcal{F}$, both rank-$r$ vector sub-bundles of $\iota^{*}TS,$ coincide along $W^{\mathrm{sm}}.$ This shows the equivalence of $(1)$ and $(2)$.
It also follows that $f_{W}$ is separable (generically \'etale) if and only if for generic $x$ we have $T_{x}W^{\mathrm{sm}}\cap\iota^{*}\mathcal{F}_{x}=0.$ When applied to $g$ and~$Z$, instead of $f$ and $W$, this gives the equivalence of $(3)$ and $(4)$. \end{proof} \begin{thm} With notation as above, any two points $x_{1},x_{2}$ of $S$ lie on an integral variety of $\mathcal{F}.$ \end{thm} \begin{proof} One may assume that $0< {\rm rank}(\mathcal{F}) < d = {\rm dim}(S)$. We have the diagram $\xymatrix@C=0.8cm{S \ar[r]^f & T \ar[r]^g& S^{(p)}}$, where the first arrow is dividing by the foliation $\mathcal{F}$ and the second arrow by the foliation $\mathcal{G}$. Let $y_{i}=f(x_{i})\in T.$ A standard application of Bertini's theorem \cite[Theorem II 8.18]{H} shows that there is a subvariety $Z\subset T$ of dimension $r$ passing through the points~$y_{i}$ which is generically transversal to $\mathcal{G}.$ Choosing $W$ so that $g(Z)=W^{(p)},$ and hence $f(W)=Z,$ we conclude from the previous Proposition that $W$ is an integral variety of $\mathcal{F}$ passing through $x_{1}$ and~$x_{2}.$ \end{proof} Thus integral varieties in characteristic $p$ abound, and are easy to classify. Nevertheless, given a particular subvariety $W$, it is still interesting to decide whether it is an integral variety of $\mathcal{F}$ or not. \section{Tautological foliations on Hilbert modular varieties} \subsection{Hilbert modular schemes} Let $L$ be a totally real field, $[L:\mathbb{Q}]=g\ge2$, $N\ge4$ an integer, and $\mathfrak{c}$ a fractional ideal of $L$, relatively prime to $N$, called the \emph{polarization module}. We denote by $\mathfrak{d}$ the different ideal of $L/\mathbb{Q}$. Let $D=\mathrm{disc}_{L/\mathbb{Q}}$ be the absolute discriminant of $L$.
Consider the moduli problem over $\mathbb{Z}[(ND)^{-1}],$ attaching to a scheme $S$ over $\mathbb{Z}[(ND)^{-1}]$ the set $\mathscr{M}(S)$ of isomorphism classes of four-tuples \[ \underline{A}=(A,\iota,\lambda,\eta), \] where \begin{itemize} \item $A$ is an abelian scheme over $S$ of relative dimension $g$. \item $\iota:\mathcal{O}_{L}\hookrightarrow\mathrm{End}(A/S)$ is an injective ring homomorphism making the tangent bundle $TA$ a locally free sheaf of rank $1$ over $\mathcal{O}_{L}\otimes\mathcal{O}_{S}$. We denote by $A^{t}$ the dual abelian scheme, and by $\iota(a)^{t}$ the dual endomorphism induced by $\iota(a).$ \item $\lambda:\mathfrak{c}\otimes_{\mathcal{O}_{L}}A\simeq A^{t}$ is a $\mathfrak{c}$-polarization of $A$ in the sense of \cite{Ka} (1.0.7). This means that $\lambda$ is an isomorphism of abelian schemes compatible with the $\mathcal{O}_{L}$ action, where $a\in\mathcal{O}_{L}$ acts on the left hand side via $a\otimes1=1\otimes\iota(a)$ and on the right hand side via $\iota(a)^{t}.$ Furthermore, under the identification \[ \mathrm{Hom}_{\mathcal{O}_{L}}(A,A^{t})\simeq\mathrm{Hom}_{\mathcal{O}_{L}}(A,\mathfrak{c}\otimes_{\mathcal{O}_{L}}A) \] induced by $\lambda,$ the symmetric elements on the left hand side (the elements $\alpha$ satisfying $\alpha^{t}=\alpha$ after we canonically identify $A^{tt}$ with $A$) correspond precisely to the elements of $\mathfrak{c}$, and those arising from an ample line bundle~$\mathcal{L}$ on $A$ via $\alpha_{\mathcal{L}}(x)=[\tau_{x}^{*}\mathcal{L}\otimes\mathcal{L}^{-1}]$ correspond to the totally positive cone in~$\mathfrak{c}.$ Note that the Rosati involution induced by $\lambda$ on $\mathcal{O}_{L}$ is the identity. \item $\eta$ is a $\Gamma_{00}(N)$-level structure on $A$ in the sense of \cite{Ka} (1.0.8), i.e. a closed immersion of $S$-group schemes $\eta:\mathfrak{d}^{-1}\otimes\mu_{N}\hookrightarrow A[N]$ compatible with $\iota$. 
\end{itemize} The moduli problem $\mathscr{M}$ is representable by a smooth scheme over $\mathbb{Z}[(ND)^{-1}],$ of relative dimension $g$, which we denote by the same letter $\mathscr{M}$, and call the \emph{Hilbert modular scheme} (see \cite{Ra}). If we want to remember the dependence on $\mathfrak{c}$ we use the notation $\mathscr{M}^{\mathfrak{c}}$ for $\mathscr{M}.$ The Hilbert moduli scheme admits smooth toroidal compactifications, depending on some extra data. See \cite{Lan, Ra}. The complex points $\mathscr{M}(\mathbb{C})$ of $\mathscr{M}$ may be identified, as a complex manifold, with $\Gamma\backslash\mathfrak{H}^{g}$, where $\mathfrak{H}$ is the upper half plane, and $\Gamma\subset \text{\rm SL}(\mathcal{O}_{L}\oplus\mathfrak{d}^{-1}\mathfrak{c}^{-1})$ is some congruence subgroup. If $\mathfrak{c}=\gamma\mathfrak{c}'$, where $\gamma\gg0$, i.e. $\gamma$ is a totally positive element of $L$, then $\lambda\circ(\gamma\otimes1):\mathfrak{c}'\otimes_{\mathcal{O}_{L}}A\simeq A^{t}$ is a $\mathfrak{c}'$-polarization of~$A$. Thus only the strict ideal class of $\mathfrak{c}$ in $Cl^{+}(L)$ matters: the moduli schemes $\mathscr{M}^{\mathfrak{c}}$ and $\mathscr{M}^{\mathfrak{c}'}$ are isomorphic, via an isomorphism depending on the choice of $\gamma.$ Fix a prime number $p$ which is unramified in $L$ and relatively prime to $N$. By the last remark, we may (and do) assume that $p$ is also relatively prime to $\mathfrak{c}.$ Write \[ p\mathcal{O}_{L}=\mathfrak{p}_{1}\dots\mathfrak{p}_{r},\,\,\,f_{i}=f(\mathfrak{p}_{i}/p), \] where $f_{i}$ is the inertia degree of $\mathfrak{p}_{i}$, and let $\kappa$ be a finite field into which all the $\kappa(\mathfrak{p}_{i})=\mathcal{O}_{L}/\mathfrak{p}_{i}$ embed, i.e. $\kappa=\mathbb{F}_{p^{n}}$ where $n$ is divisible by $\mathrm{lcm}\{f_{1},\dots,f_{r}\}.$ Let $W(\kappa)$ be the Witt vectors of $\kappa$.
We have a decomposition \[ \mathbb{B}=\mathrm{Hom}(L,W(\kappa)[p^{-1}])=\coprod_{\mathfrak{p}|p}\mathbb{B}_{\mathfrak{p}} \] indexed by the $r$ primes of $\mathcal{O}_{L}$ dividing $p,$ where $\sigma:L\hookrightarrow W(\kappa)[1/p]$ belongs to $\mathbb{B}_{\mathfrak{p}}$ if $\sigma^{-1}(pW(\kappa))\cap\mathcal{O}_{L}=\mathfrak{p}$. The Frobenius automorphism of $W(\kappa)$, denoted $\phi$, operates on $\mathbb{B}$ from the left via $\sigma\mapsto\phi\circ\sigma$, and permutes each $\mathbb{B}_{\mathfrak{p}}$ cyclically. We denote by \[ M=\mathscr{M}\otimes_{\mathbb{Z}[(ND)^{-1}]}\mathbb{F}_{p} \] the special fiber of $\mathscr{M}$ at $p$. Note that $M$ is smooth over $\mathbb{F}_{p}$. \subsection{The tautological foliations} \subsubsection{The definition} Complex analytically, the complex manifold $\Gamma\backslash\mathfrak{H}^{g}$ admits $g$ tautological rank 1 smooth foliations, generated by the vector fields $\partial/\partial z_{i}$, where $z_{i}$ ($1\le i\le g$) are the coordinate functions. As the derivations $\partial/\partial z_{i}$ commute with each other, any $r$ of them generate a rank $r$ analytic foliation on $\mathscr{M}(\mathbb{C})$. We shall now show that these foliations are in fact algebraic, and address the question of which of them descend modulo $p$ to smooth $p$-foliations on $M$. We shall then relate the quotients of $M$ by these foliations to Hilbert modular varieties with Iwahori level structure at $p$. \medskip{} \textbf{Convention}. From now on we denote by $\mathscr{M}$ the base change of the Hilbert modular scheme from $\mathbb{Z}[(ND)^{-1}]$ to $W(\kappa)$ and by $M$ its special fiber, a smooth variety over $\kappa$.
Recall that $\kappa$ is assumed to be large enough so that there are $g$ distinct embeddings $\sigma:L\hookrightarrow W(\kappa)[1/p].$ \medskip{} Let $\pi:A^\text{\rm univ}\to\mathscr{M}$ denote the universal abelian variety over $\mathscr{M}$ and \[ \underline{\omega}=\pi_{*}\Omega_{A^\text{\rm univ}/\mathscr{M}}^{1} \] its \emph{Hodge bundle}. Since $p\nmid\mathrm{disc}_{L/\mathbb{Q}}$, it decomposes under the action of $\mathcal{O}_{L}$ as a direct sum of $g$ line bundles \[ \underline{\omega}=\oplus_{\sigma\in\mathbb{B}}\mathscr{L}_{\sigma} \] where $\mathscr{L}_{\sigma}=\{\alpha\in\underline{\omega}|\,\iota(a)^{*}(\alpha)=\sigma(a)\alpha\,\,\forall a\in\mathcal{O}_{L}\}.$ Let $\underline{\text{\rm Lie}}=\underline{\text{\rm Lie}}(A^\text{\rm univ}/\mathscr{M})=\underline{\omega}^{\vee}$ be the relative tangent space of the universal abelian variety. The Kodaira-Spencer isomorphism is an isomorphism of $\mathcal{O}_{\mathscr{M}}$-modules (\cite{Ka}, (1.0.19)-(1.0.20)) \begin{multline} \text{\rm KS}:\mathcal{T}_{\mathscr{M}/W(\kappa)}\simeq\mathrm{Hom}_{\mathcal{O}_{L}\otimes\mathcal{O}_{\mathscr{M}}}(\underline{\omega},\underline{\text{\rm Lie}}((A^\text{\rm univ})^{t}/\mathscr{M}))\label{eq:KS} \\ \simeq\mathrm{Hom}_{\mathcal{O}_{L}\otimes\mathcal{O}_{\mathscr{M}}}(\underline{\omega},\underline{\text{\rm Lie}}\otimes_{\mathcal{O}_{L}}\mathfrak{c})=\underline{\text{\rm Lie}}^{\otimes2}\otimes_{\mathcal{O}_{L}}\mathfrak{dc}, \end{multline} where the second isomorphism results from the polarization $\lambda^\text{\rm univ}:\mathfrak{c}\otimes_{\mathcal{O}_{L}}A^\text{\rm univ}\simeq(A^\text{\rm univ})^{t}$, and the $\otimes^2$ is the tensor product of $\mathcal{O}_L \otimes \mathcal{O}_\mathscr{M}$-modules. Since $(p,\mathfrak{dc})=1$ we have \[ \underline{\text{\rm Lie}}\otimes_{\mathcal{O}_{L}}\mathfrak{dc}=\underline{\text{\rm Lie}}\simeq\oplus_{\sigma\in\mathbb{B}}\mathscr{L}_{\sigma}^{\vee}. 
\] We therefore get from $\text{\rm KS}$ a canonical decomposition of the tangent space of $\mathscr{M}$ into a direct sum of $g$ line bundles \[ \mathcal{T}_{\mathscr{M}/W(\kappa)}\simeq\oplus_{\sigma\in\mathbb{B}}\mathscr{L}_{\sigma}^{-2}. \] We denote by $\mathscr{F}_{\sigma}$ the line sub-bundle of $\mathcal{T}_{\mathscr{M}/W(\kappa)}$ corresponding to $\mathscr{L}_{\sigma}^{-2}$ under this isomorphism. \begin{lem} Let $\Sigma\subset\mathbb{B}$ be any set of embeddings of $L$ into $W(\kappa)[1/p].$ Then $\mathscr{F}_{\Sigma}=\oplus_{\sigma\in\Sigma}\mathscr{F}_{\sigma}$ is involutive: if $\xi,\eta$ are sections of $\mathscr{F}_{\Sigma}$, so is $[\xi,\eta].$ \end{lem} \begin{proof} Recall that the Kodaira-Spencer isomorphism is derived from the Gauss-Manin connection \[ \nabla:H_{dR}^{1}(A^\text{\rm univ}/\mathscr{M})\to H_{dR}^{1}(A^\text{\rm univ}/\mathscr{M})\otimes_{\mathcal{O}_{\mathscr{M}}}\Omega_{\mathscr{M}/W(\kappa)}^{1}. \] If $\xi$ is a section of $\mathcal{T}_{\mathscr{M}/W(\kappa)}$ we denote by $\nabla_{\xi}:H_{dR}^{1}(A^\text{\rm univ}/\mathscr{M})\to H_{dR}^{1}(A^\text{\rm univ}/\mathscr{M})$ the map obtained by contraction with $\xi$. The Gauss-Manin connection is well-known to be flat, namely \[ \nabla_{[\xi,\eta]}=\nabla_{\xi}\circ\nabla_{\eta}-\nabla_{\eta}\circ\nabla_{\xi}. \] Now, $\nabla_{\xi}$ commutes with $\iota(a)^{*}$ for $a\in\mathcal{O}_{L}$, and therefore preserves the $\sigma$-isotypic component $H_{dR}^{1}(A^\text{\rm univ}/\mathscr{M})[\sigma]$ for each $\sigma\in\mathbb{B}$. \emph{By} \emph{definition,} $\xi\in\mathscr{F}_{\sigma}$ if for every $\tau\ne\sigma$ the operator $\nabla_{\xi}$ maps the subspace \[ \mathscr{L}_{\tau}=\underline{\omega}[\tau]\subset H_{dR}^{1}(A^\text{\rm univ}/\mathscr{M})[\tau] \] to itself. 
Similarly, $\xi\in\mathscr{F}_{\Sigma}$ if the same holds for every $\tau\notin\Sigma.$ It follows at once from the flatness of $\nabla$ that if this condition holds for $\xi$ and $\eta$, it holds for $[\xi,\eta].$ \end{proof} We conclude that $\mathscr{F}_{\Sigma}$ is a smooth foliation. We call these foliations \emph{tautological.} \subsubsection{The main theorem} We consider now the foliations $\mathscr{F}_{\Sigma}$ in the special fiber $M=\mathscr{M}\times_{W(\kappa)}\kappa$ only. The following theorem summarizes the main results in the Hilbert modular case. As we learned from \cite{E-SB-T}, point (i) was also observed there some years ago. \begin{thm} \label{Main Theorem HMV}(i) The smooth foliation $\mathscr{F}_{\Sigma}$ is $p$-closed if and only if $\Sigma$ is invariant under the action of Frobenius, namely $\phi\circ\Sigma=\Sigma.$ In particular, $\mathscr{F}_{\sigma}$ is $p$-closed if and only if $f(\mathfrak{p}_{\sigma}/p)=1$ where $\mathfrak{p}_{\sigma}$ is the prime induced by $\sigma.$ (ii) Suppose that $f(\mathfrak{p}_{\sigma}/p)\ne1.$ Then, up to a unit, the obstruction $\kappa_{\mathscr{F}_{\sigma}}$ to $\mathscr{F}_{\sigma}$ being $p$-closed (\ref{subsec:obstruction p closed}) is equal to the square of the $\phi\circ\sigma$-partial Hasse invariant $h_{\phi\circ\sigma}$ \cite{Go}. (iii) Let $\mathfrak{p}$ be a prime of $L$ above $p$ and $\mathscr{F}_{\mathfrak{p}}=\mathscr{F}_{\mathbb{B}_{\mathfrak{p}}}=\oplus_{\sigma\in\mathbb{B}_{\mathfrak{p}}}\mathscr{F}_{\sigma}$ the corresponding $p$-foliation. The quotient of $M$ by the $p$-foliation $\oplus_{\mathfrak{q}\ne\mathfrak{p}}\mathscr{F}_{\mathfrak{q}}$ may be identified with the \'etale component of the $\Gamma_{0}(\mathfrak{p})$-moduli scheme $M_{0}(\mathfrak{p})$ (see details in the proof). 
\end{thm} \subsection{Preliminaries} \subsubsection{Tate objects\label{subsec:Tate-objects}} We begin our proof of Theorem \ref{Main Theorem HMV} by recalling a result of Katz \cite{Ka}, who computed the effect of the Kodaira-Spencer isomorphism on $q$-expansions. As in \cite{Ka} (1.1.4), let $S$ be a set of $g$ linearly independent $\mathbb{Q}$-linear forms $l_{i}:L\to\mathbb{Q}$ preserving (total) positivity. Let $\mathfrak{a}$ and $\mathfrak{b}$ be fractional ideals of $L$, relatively prime to $N,$ such that $\mathfrak{c}=\mathfrak{ab}^{-1}$. Let $\mathcal{R}=W(\kappa)\otimes\mathbb{Z}((\mathfrak{ab},S))$ be the ring defined in \cite{Ka} (1.1.7), after base change to $W(\kappa).$ Let \[ \mathrm{Tate}_{\mathfrak{a,b}}(q)=\mathbb{G}_{m}\otimes\mathfrak{d}^{-1}\mathfrak{a}^{-1}/q(\mathfrak{b}) \] be the abelian scheme over $\mathcal{R}$ constructed in \cite{Ka} (1.1.13). For fixed $\mathfrak{a}$ and $\mathfrak{b}$ it is essentially independent of $S$. Since $\mathfrak{a}$ is relatively prime to $N$, $\mathrm{Tate}_{\mathfrak{a,b}}(q)$ admits a \emph{canonical} $\Gamma_{00}(N)$-level structure $\eta_{can}$ (denoted in \cite{Ka} (1.1.16) by $i_{can}$). It also admits a canonical $\mathfrak{c}$-polarization $\lambda_{can}$ and a canonical action $\iota_{can}$ of $\mathcal{O}_{L}.$ We thus obtain an object \[ \underline{\mathrm{Tate}}_{\mathfrak{a,b}}(q)=(\mathrm{Tate}_{\mathfrak{a,b}}(q),\iota_{can},\lambda_{can},\eta_{can}) \] over the ring $\mathcal{R}$ (for any choice of $S$). For the definition of $q$-expansions at the cusp labeled by the pair $(\mathfrak{a},\mathfrak{b})$, recalled below, we assume, in addition, that $\mathfrak{a}$ is relatively prime to $p.$ This can always be achieved, since only the classes of $\mathfrak{a}$ and $\mathfrak{b}$ in the strict ideal class group $Cl^{+}(L)$ matter. 
The Lie algebra of $\mathrm{Tate}_{\mathfrak{a,b}}(q)$ is given by a canonical isomorphism (\cite{Ka} (1.1.17)) \[ \omega_{\mathfrak{a}}:\underline{\text{\rm Lie}}=\text{\rm Lie}(\mathrm{Tate}_{\mathfrak{a,b}}(q)/\mathcal{R})\simeq \text{\rm Lie}(\mathbb{G}_{m}\otimes\mathfrak{d}^{-1}\mathfrak{a}^{-1}/\mathcal{R})=\mathfrak{d}^{-1}\mathfrak{a}^{-1}\otimes\mathcal{R}, \] hence the Kodaira-Spencer map $\text{\rm KS}$ (\ref{eq:KS}) induces a map \begin{equation} \text{\rm KS}:\mathcal{T}_{\text{\rm Spec}(\mathcal{R})/W(\kappa)}\to\underline{\text{\rm Lie}}^{\otimes2}\otimes_{\mathcal{O}_{L}}\mathfrak{dc}\simeq\mathfrak{d}^{-1}\mathfrak{a}^{-1}\mathfrak{b}^{-1}\otimes\mathcal{R}.\label{eq:KS-q} \end{equation} The tangent bundle $\mathcal{T}_{\text{\rm Spec}(\mathcal{R})/W(\kappa)}$ is the module of $W(\kappa)$-derivations of $\mathcal{R}$. For $\gamma\in\mathfrak{d}^{-1}\mathfrak{a}^{-1}\mathfrak{b}^{-1}$ we may consider the derivation $D(\gamma)$ of $\mathcal{R}$ (analogue of $q\frac{d}{dq}$) given by \[ D(\gamma)(\sum_{\alpha\in\mathfrak{ab},\,\,l_{i}(\alpha)\ge-n}a_{\alpha}q^{\alpha})=\sum_{\alpha\in\mathfrak{ab},\,\,l_{i}(\alpha)\ge-n}\text{\rm Tr}_{L/\mathbb{Q}}(\alpha\gamma)a_{\alpha}q^{\alpha}. \] We then have the following elegant result. \begin{lem} \label{lem:Katz' formula}(\cite{Ka} (1.1.20)) The image of $D(\gamma)$ under the Kodaira-Spencer map $\text{\rm KS}$ in $(\ref{eq:KS-q})$ is $\gamma\otimes1.$ \end{lem} \subsubsection{Hilbert modular forms and partial Hasse invariants} In this subsection we set up notation for Hilbert modular forms, and recall some results due to the first author and to Diamond and Kassaei on the $q$-expansions of such forms over $\kappa.$ By our assumption on $\kappa,$ for any $W(\kappa)$-algebra $R$ we have \[ \mathcal{O}_{L}\otimes R\simeq\oplus_{\sigma\in\mathbb{B}}R_{\sigma} \] where $R_{\sigma}$ is the ring $R$ equipped with the action of $\mathcal{O}_{L}$ via $\sigma:\mathcal{O}_{L}\hookrightarrow W(\kappa)$. 
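The decomposition $\mathcal{O}_{L}\otimes R\simeq\oplus_{\sigma}R_{\sigma}$ can be made concrete in a toy case. The following Python sketch (purely illustrative, not taken from the text; the choices $L=\mathbb{Q}(\sqrt{5})$ and $p=11$, which splits in $L$, are ours) realizes the two embeddings $\sigma:\mathcal{O}_{L}\to\mathbb{F}_{11}$ via the two roots of $x^{2}-x-1$ modulo $11$ and checks that the resulting map is multiplicative:

```python
# Model O_L = Z[phi] with phi^2 = phi + 1 (L = Q(sqrt(5))), and R = F_11.
# p = 11 splits: x^2 - x - 1 has the roots 4 and 8 mod 11, giving the two
# embeddings sigma: O_L -> F_11.  Elementwise, the decomposition
# O_L (tensor) R = R_{sigma_1} (+) R_{sigma_2} is a + b*phi |-> (a+4b, a+8b).
p = 11
ROOTS = (4, 8)  # roots of x^2 - x - 1 mod 11

def embed(a, b):
    """Image of a + b*phi under the two embeddings into F_11."""
    return tuple((a + r * b) % p for r in ROOTS)

def mul(u, v):
    """(a+b*phi)(c+d*phi) = ac+bd + (ad+bc+bd)*phi, using phi^2 = phi+1."""
    (a, b), (c, d) = u, v
    return (a * c + b * d, a * d + b * c + b * d)

# embed is a ring homomorphism: check multiplicativity on a sample grid.
for u in [(1, 0), (0, 1), (2, 3), (5, 7)]:
    for v in [(1, 1), (3, 2), (4, 9)]:
        lhs = embed(*mul(u, v))
        rhs = tuple(x * y % p for x, y in zip(embed(*u), embed(*v)))
        assert lhs == rhs
```

In the split case the two factors $R_{\sigma}$ are simply copies of $R$ indexed by the roots; when $p$ is inert one would instead land in a degree-two extension, with $\phi$ permuting the embeddings.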
A \emph{weight }is a tuple $k=(k_{\sigma})_{\sigma\in\mathbb{B}}$ with $k_{\sigma}\in\mathbb{Z}.$ We shall often write $k$ also as the element \[ \sum_{\sigma\in\mathbb{B}}k_{\sigma}[\sigma]\in\mathbb{Z}[\mathbb{B}]. \] The weight $k$ defines a homomorphism of algebraic groups over $W(\kappa),$ $\chi=\chi_{k}:\text{\rm Res}_{W(\kappa)}^{\mathcal{O}_{L}\otimes W(\kappa)}(\mathbb{G}_{m})\to\mathbb{G}_{m}$, given on $R$-points by \[ \chi:(\mathcal{O}_{L}\otimes R)^{\times}\to R^{\times},\,\,\,\chi(a\otimes x)=\prod_{\sigma\in\mathbb{B}}(\sigma(a)x)^{k_{\sigma}}. \] A weight $k$, level $N$ Hilbert modular form (HMF) $f$ over $W(\kappa)$ ``\`a la Katz'' is a rule associating to any $W(\kappa)$-algebra $R$, any four-tuple $\underline{A}$ over $R$ as above, and any $\mathcal{O}_{L}\otimes R$-basis $\omega$ of $\underline{\omega}_{A/R}$ (such a basis always exists locally on $R$) an element $f(\underline{A},\omega)\in R$, which depends only on the isomorphism type of the pair $(\underline{A},\omega)$, is compatible with base change $R\to R'$ (over $W(\kappa)$), and satisfies \[ f(\underline{A},\alpha\omega)=\chi(\alpha)^{-1}f(\underline{A},\omega) \] for $\alpha\in(\mathcal{O}_{L}\otimes R)^{\times}.$ The $(\mathfrak{a,b})$-$q$-expansion of $f$ is the element \[ f(\underline{\mathrm{Tate}}_{\mathfrak{a,b}}(q),(\text{\rm Tr}_{L/\mathbb{Q}}\otimes1)\circ\omega_{\mathfrak{a}})\in\mathcal{R}, \] where $\mathcal{R},$ $\mathfrak{a}$ and $\mathfrak{b}$ are as above. Note that by our assumption that $\mathfrak{a}$ is relatively prime to $p$, $\mathfrak{d}^{-1}\mathfrak{a}^{-1}\otimes\mathcal{R}=\mathfrak{d}^{-1}\otimes\mathcal{R}$. 
Since $\text{\rm Tr}_{L/\mathbb{Q}}:\mathfrak{d}^{-1}\to\mathbb{Z}$ is an $\mathcal{O}_{L}$-basis of $\mathrm{Hom}(\mathfrak{d}^{-1},\mathbb{Z}),$ \[ (\text{\rm Tr}_{L/\mathbb{Q}}\otimes1)\circ\omega_{\mathfrak{a}}:\underline{\text{\rm Lie}}\to\mathcal{R} \] is indeed an $\mathcal{O}_{L}\otimes\mathcal{R}$-basis of $\underline{\omega}_{A/\mathcal{R}}$ for $A=\mathrm{Tate}_{\mathfrak{a,b}}(q).$ For a weight $k$ denote by $\mathscr{L}_{\chi}$ the line bundle \[ \mathscr{L}_{\chi}=\bigotimes_{\sigma\in\mathbb{B}}\mathscr{L}_{\sigma}^{\otimes k_{\sigma}} \] on $\mathscr{M}.$ Let $R$ be a $W(\kappa)$-algebra and $\underline{A}$ a four-tuple over $R$ as above, corresponding to a morphism $h:\text{\rm Spec}(R)\to\mathscr{M}$ over $W(\kappa)$. An $\mathcal{O}_{L}\otimes R$-basis $\omega$ of $\omega_{A/R}$ yields $R$-bases $\omega_{\sigma}$ of the line bundles $\omega_{A/R}[\sigma]=h^{*}\mathscr{L}_{\sigma}$ for every $\sigma\in\mathbb{B},$ hence a basis $\omega_{\chi}$ of $\bigotimes_{\sigma\in\mathbb{B}}\omega[\sigma]^{\otimes k_{\sigma}}=h^{*}\mathscr{L}_{\chi}$. If $f$ is a weight $k$ HMF then $f(\underline{A},\omega)\cdot\omega_{\chi}$ is independent of $\omega$. We may therefore regard $f$ as a global section of $\mathscr{L}_{\chi},$ and vice versa, any global section of $\mathscr{L}_{\chi}$ over $\mathscr{M}$ is a HMF of weight $k$ and level $N.$ Since we assume that $g\ge2$, by the Koecher principle any HMF $f$ is automatically holomorphic at the cusps, and the $q$-expansions of $f$ lie in \[ \mathcal{R}\cap\{a_{0}+\sum_{\alpha\gg0}a_{\alpha}q^{\alpha}|\,a_{0},a_{\alpha}\in W(\kappa)\}. 
\] The same analysis holds if we restrict to $\kappa$-algebras $R$ rather than $W(\kappa)$-algebras, and yields a definition of HMF's of weight $k$ and level $N$ over $\kappa$, as well as an interpretation of such modular forms as global sections of $\mathscr{L}_{\chi}$ over $M,$ the special fiber of $\mathscr{M}.$ We also get the mod-$p$ $(\mathfrak{a,b})$-$q$-expansion of a modular form over $\kappa$ as an element of $\mathcal{R}/p\mathcal{R}$ by the same recipe. However, in general, not every HMF over~$\kappa$ lifts to a HMF over $W(\kappa).$ The exact sequence \[ 0\to\mathscr{L}_{\chi}\overset{\times p}{\to}\mathscr{L}_{\chi}\to\mathscr{L}_{\chi}/p\mathscr{L}_{\chi}\to0 \] shows that the obstruction to lifting a HMF over $\kappa$ lies in $H^{1}(\mathscr{M},\mathscr{L}_{\chi}).$ Let $M_{k}(N,W(\kappa))=H^{0}(\mathscr{M},\mathscr{L}_{\chi_{k}})$ denote the space of weight $k,$ level $N$ HMF's over $W(\kappa)$ and similarly $M_{k}(N,\kappa)=H^{0}(M,\mathscr{L}_{\chi_{k}})$ the space of weight $k$, level $N$ HMF's over $\kappa.$ The $q$-\emph{expansion principle }says that a modular form over $W(\kappa)$ (or over $\kappa$), all of whose $q$-expansions, for all choices of $(\mathfrak{a},\mathfrak{b})$ (corresponding to the various cusps of the Hilbert modular scheme), vanish, is 0. The space \[ M_{*}(N,W(\kappa))=\bigoplus_{k\in\mathbb{Z}[\mathbb{B}]}M_{k}(N,W(\kappa)) \] carries a natural ring structure, and is called the \emph{ring of modular forms of level} $N$ over $W(\kappa).$ Similar terminology applies to the ring $M_{*}(N,\kappa)$ of modular forms of level $N$ over $\kappa$. The $q$-expansion homomorphisms extend naturally to ring homomorphisms from these rings to the rings $\mathcal{R}$ or $\mathcal{R}/p\mathcal{R}$ (depending on the choice of $\mathfrak{a}$ and $\mathfrak{b}$). However, different HMF's over $\kappa$ (of different weights) may now have the same $q$-expansions. 
An important role in the study of $q$-expansions in characteristic $p$ is played by the $g$ \emph{partial Hasse invariants} $h_{\sigma}$ ($\sigma\in\mathbb{B}).$ These are modular forms over $\kappa,$ of weights \[ k_{\sigma}=p[\phi^{-1}\circ\sigma]-[\sigma], \] whose $q$-expansions at every unramified cusp are $1$. The reader is referred to Theorem~2.1 of \cite{Go} for details, but briefly the situation is as follows: Let $(\underline{A}, \omega)$ be an object as above, over a $\kappa$-algebra $R$, where $\underline{{\rm Lie}} \,A$ is a free $\mathcal{O}_L\otimes R$-module of rank $1$ and $\omega$ is a basis for the relative differentials of $A$ over $R$. Under the decomposition $\mathcal{O}_L\otimes R = \oplus_{\sigma} R$, where $\sigma$ runs over the homomorphisms $\mathcal{O}_L \rightarrow \kappa$, we have a corresponding decomposition $\omega = \oplus_\sigma \omega_\sigma$. Let $\eta = \oplus_\sigma \eta_\sigma$ be the decomposition of the dual basis $\eta$ for $R^1\pi_\ast \mathcal{O}_A$ under the polarization. Then, for every $\sigma$, $h_\sigma(\underline{A}, \omega)$ is defined by the identity $h_\sigma(\underline{A}, \omega)\cdot\eta_\sigma = {\rm Fr}(\eta_{\phi^{-1}\circ \sigma})$. \medskip Following \cite{DK} we define the following cones $C^\text{\rm min}\subset C^\text{\rm std}\subset C^\text{\rm hasse}$ in $\mathbb{R}[\mathbb{B}]$: \begin{itemize} \item $C^\text{\rm min}=\{\sum a_{\sigma}[\sigma]|\,\,pa_{\sigma}\ge a_{\phi^{-1}\circ\sigma}\,\,\forall\sigma\}$ (the \emph{minimal cone}) \item $C^\text{\rm std}=\{\sum a_{\sigma}[\sigma]|\,\,a_{\sigma}\ge0\,\,\forall\sigma\}$ (the \emph{standard cone}) \item $C^\text{\rm hasse}=\{\sum_{\sigma}a_{\sigma}(p[\phi^{-1}\circ\sigma]-[\sigma])|\,\,a_{\sigma}\ge0\}$ (the \emph{Hasse cone}). \end{itemize} For example, when $g=2$ and $p$ is split in $L$ all three cones coincide. 
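The split-case coincidence can be checked mechanically. The following Python sketch (illustrative only; the prime $p=5$ and the test grid are arbitrary choices of ours) encodes the three membership conditions for $g=2$ when $\phi$ acts trivially on $\mathbb{B}$, as happens when $p$ splits, and confirms that they cut out the same set:

```python
p = 5  # any split prime; phi acts trivially on B = {0, 1} in the split case

def in_min(a0, a1):
    # p*a_sigma >= a_{phi^{-1} o sigma}; phi = identity, so this is
    # (p-1)*a_sigma >= 0, i.e. a_sigma >= 0.
    return p * a0 >= a0 and p * a1 >= a1

def in_std(a0, a1):
    return a0 >= 0 and a1 >= 0

def in_hasse(a0, a1):
    # a0[0] + a1[1] = b0*(p[0]-[0]) + b1*(p[1]-[1]) with b_i = a_i/(p-1),
    # so membership again reduces to a0, a1 >= 0.
    return a0 >= 0 and a1 >= 0

grid = range(-6, 7)
assert all(in_min(x, y) == in_std(x, y) == in_hasse(x, y)
           for x in grid for y in grid)
```

When $p$ is inert, $\phi$ swaps the two embeddings and the three conditions genuinely differ, which is the situation depicted in the figure below.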
In contrast, when $p$ is inert in $L$ they look as follows: \bigskip{} \begin{center} \includegraphics[scale=0.35]{Cones1} \end{center} \bigskip{} Let $f\in M_{k}(N,\kappa).$ According to \cite{A-G} Proposition 8.9, any other $g\in M_{k'}(N,\kappa)$ having the same $q$-expansions as $f$ is a product of $f$ and partial Hasse invariants raised to integral (possibly negative) powers. Furthermore, there exists a weight $\Phi(f),$ called the \emph{filtration} of $f$, and $g\in M_{\Phi(f)}(N,\kappa)$ having the same $q$-expansions as $f,$ such that any other $g_{1}\in M_{k'}(N,\kappa)$ with the same $q$-expansions is of the form \[ g_{1}=g\prod_{\sigma\in\mathbb{B}}h_{\sigma}^{n_{\sigma}} \] for some integers $n_{\sigma}\ge0.$ These results generalize well-known results of Serre for elliptic modular forms in characteristic $p$. \begin{thm} \label{thm:Diamond Kassaei on filtration of HMF}(\cite{DK}, Corollary 1.2) The filtration $\Phi(f)\in C^\text{\rm min}.$ \end{thm} \subsubsection{Hilbert modular varieties with Iwahori level structure at $p$} The last piece of input needed for the proof of Theorem \ref{Main Theorem HMV} concerns $\Gamma_{0}(\mathfrak{p})$-moduli problems, for the primes $\mathfrak{p}$ of $L$ dividing $p.$ References for the results quoted below are \cite{G-K,Pa,St}. Fix a prime $\mathfrak{p}$ of $L$ dividing $p$. Let $\mathscr{M}_{0}(\mathfrak{p})$ be the moduli problem over $W(\kappa)$ classifying isomorphism types of tuples $(\underline{A},H)$ where $\underline{A}\in\mathscr{M}$ and $H$ is a finite flat $\mathcal{O}_{L}$-invariant isotropic subgroup scheme of $A[\mathfrak{p}]$ of rank $p^{f}$, where $f=f(\mathfrak{p}/p).$ The meaning of ``isotropic'' is the following. Since we assumed that $\mathfrak{c}$ is relatively prime to $p$ there is a canonical isomorphism of group schemes $\mathfrak{c}\otimes_{\mathcal{O}_{L}}A[p]\simeq A[p]$ (sending $\alpha\otimes u$ to $\iota(\alpha)u$). 
The canonical $e_{p}$-pairing $A[p]\otimes A^{t}[p]\to\mu_{p}$ therefore induces, via $\lambda,$ a perfect Weil pairing \[ \left\langle .,.\right\rangle _{\lambda}:A[p]\otimes A[p]\to\mu_{p}. \] It restricts to a perfect pairing on $A[\mathfrak{p}]$ (since the Rosati involution on $\mathcal{O}_{L}$ is the identity), and $H$ is a maximal isotropic subgroup scheme of $A[\mathfrak{p}].$ If $\mathfrak{c}$ is not relatively prime to $p,$ we may change it, in the definition of $\mathscr{M},$ to $\mathfrak{c}'=\gamma\mathfrak{c}$, where $\gamma\gg0$, so that $\mathfrak{c'}$ is now relatively prime to $p$, and get an isomorphic moduli problem (the isomorphism depending on $\gamma$). By ``isotropic'' we then mean that $H$ is isotropic with respect to the Weil pairing induced by the isomorphism $\mathfrak{c}'\otimes_{\mathcal{O}_{L}}A[p]\simeq A[p]$ as above. As before, the moduli problem $\mathscr{M}_{0}(\mathfrak{p})$ is represented by a scheme, flat over $W(\kappa)$, which we denote by the same letter, and the forgetful morphism is a proper morphism $\mathscr{M}_{0}(\mathfrak{p})\to\mathscr{M}.$ The scheme $\mathscr{M}_{0}(\mathfrak{p})$ is normal and Cohen-Macaulay. We let \[ M_{0}(\mathfrak{p})\to M \] be the characteristic $p$ fiber of this morphism, over the field $\kappa$. This morphism has been studied in detail in \cite{G-K}. Away from the ordinary locus, it is neither finite nor flat. Let $M^\text{\rm ord}$ be the ordinary locus of $M$, the open dense subset where none of the partial Hasse invariants $h_{\sigma}$ vanishes. If $k$ is an algebraically closed field containing~$\kappa$ and $x:\text{\rm Spec}(k)\to M$ a $k$-valued point of $M,$ then $x$ lies on $M^\text{\rm ord}$ if and only if the corresponding abelian variety $A_{x}=x^{*}(A^\text{\rm univ})$ is ordinary. 
Let $M_{0}(\mathfrak{p})^\text{\rm ord}$ be the open subset of $M_{0}(\mathfrak{p})$ which lies over $M^\text{\rm ord}.$ Then $M_{0}(\mathfrak{p})^\text{\rm ord}$ is the disjoint union of two smooth varieties. The component $M_{0}(\mathfrak{p})^\text{\rm ord,m}$ (the \emph{multiplicative} component) classifies tuples $(\underline{A},H)$ where $H$ is multiplicative (its Cartier dual is \'etale). The forgetful morphism is an isomorphism $M_{0}(\mathfrak{p})^\text{\rm ord,m}\simeq M^\text{\rm ord},$ its inverse given by the section $\underline{A}\mapsto(\underline{A},A[\text{\rm Fr}]\cap A[\mathfrak{p}]).$ Here $\text{\rm Fr}:A\to A^{(p)}$ is the relative Frobenius morphism (everything over a base scheme $S$ lying over $\kappa$). The second component $M_{0}(\mathfrak{p})^\text{\rm ord,et}$ (the \emph{\'etale} component) classifies pairs $(\underline{A},H)$ where $H$ is \'etale. The forgetful morphism $M_{0}(\mathfrak{p})^\text{\rm ord,et}\to M^\text{\rm ord}$ is finite flat, purely inseparable of height 1 and degree $p^{f}.$ To discuss the Atkin-Lehner map $w$ we have to bring the polarization module~$\mathfrak{c}$ back into the picture, because $w$ will in general change it. We therefore recall that all our constructions depended on an auxiliary ideal~$\mathfrak{c}$ (at least on its narrow ideal class in $Cl^{+}(L)$), and use the notation $M^{\mathfrak{c}},$ $M_{0}^{\mathfrak{c}}(\mathfrak{p})$ etc. to emphasize this dependency. We now define \[ w:M_{0}^{\mathfrak{c}}(\mathfrak{p})\to M_{0}^{\mathfrak{cp}}(\mathfrak{p}),\,\,\,w(\underline{A},H)=(\underline{A}/H,A[\mathfrak{p}]/H). \] Here $\underline{A}/H$ is the tuple $(A/H,\iota',\lambda',\eta')$ where $\iota'$ and $\eta'$ are induced by $\iota$ and $\eta$. The $\mathfrak{cp}$-polarization $\lambda'$ of $A/H$ is obtained as follows. 
Giving $\lambda$ is equivalent to giving a homomorphism \[ \psi_{1}:\mathfrak{c}\to\mathrm{Hom}_{\mathcal{O}_{L}}(A,A^{t})_{sym}=\mathcal{P}_{A} \] such that $\psi_{1}(\alpha)$ is in the cone of $\mathcal{O}_{L}$-polarizations $\mathcal{P}_{A}^{+}$ if and only if $\alpha\gg0,$ and such that $\psi_{1}$ induces an isomorphism \[ \mathfrak{c}\otimes_{\mathcal{O}_{L}}A\simeq A^{t}. \] Denoting $A/H$ by $B$ and letting $f:A\to B$ be the canonical homomorphism, the polarization $\lambda'$ is determined by a similar homomorphism \[ \psi_{2}:\mathfrak{cp}\to\mathrm{Hom}_{\mathcal{O}_{L}}(B,B^{t})_{sym}=\mathcal{P}_{B}, \] defined by the relation \[ f^{t}\circ\psi_{2}(\alpha)\circ f=\psi_{1}(\alpha) \] for all $\alpha\in\mathfrak{pc}\subset\mathfrak{c}.$ The \emph{existence }of $\psi_{2}(\alpha)$ stems from the fact that $H$ is isotropic for the pairing $A[p]\times A[p]\to\mu_{p}$ induced by $\psi_{1}(\alpha).$ Its \emph{uniqueness} is obvious, and because of this uniqueness $\psi_{2}(\cdot)$ is a homomorphism. See \cite{Pa}, 2.2. The subgroup scheme $A[\mathfrak{p}]/H\subset(A/H)[\mathfrak{p}]=\mathfrak{p}^{-1}H/H$ is finite and flat, $\mathcal{O}_{L}$-invariant, and isotropic with respect to $\lambda'$, of rank $p^{f}.$ The tuple $(\underline{A}/H,A[\mathfrak{p}]/H)$ therefore lies on $M_{0}^{\mathfrak{cp}}(\mathfrak{p})$, and $w$ is well defined. In the definition of $w$ we may replace $\mathfrak{cp}$ by a polarization module $\mathfrak{c}'$ which is relatively prime to $p,$ as was assumed for $\mathfrak{c},$ by multiplying by an appropriate $\gamma\gg0$, keeping the strict ideal class of $\mathfrak{cp}$ unchanged. The Atkin-Lehner map is not an involution if the class of $\mathfrak{p}$ in $Cl^{+}(L)$ is not trivial. Indeed, \[ w^{2}(\underline{A},H)=(\underline{A}/A[\mathfrak{p}],\mathfrak{p}^{-1}H/A[\mathfrak{p}])\in M_{0}^{\mathfrak{cp}^{2}}(\mathfrak{p}). 
\] Nevertheless, as in the case of modular curves, it preserves the ordinary locus and exchanges the ordinary \'etale and ordinary multiplicative components: \[ w:M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm ord,m}\simeq M_{0}^{\mathfrak{cp}}(\mathfrak{p})^\text{\rm ord,et},\,\,\,w:M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm ord,et}\simeq M_{0}^{\mathfrak{cp}}(\mathfrak{p})^\text{\rm ord,m}. \] We now define $M_{0}^{\mathfrak{c}}(\mathfrak{p})^{\rm m}$ to be the \emph{Zariski closure} of $M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm ord,m}$ in $M_{0}^{\mathfrak{c}}(\mathfrak{p})$, and $M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$ the Zariski closure of $M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm ord,et}.$ The following proposition summarizes the situation. \begin{prop} \label{prop:etale and multiplicative components}Both $M_{0}^{\mathfrak{c}}(\mathfrak{p})^{\rm m}$ and $M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$ are finite flat over $M^{\mathfrak{c}}.$ The forgetful morphism is an isomorphism $M_{0}^{\mathfrak{c}}(\mathfrak{p})^{\rm m}\simeq M^{\mathfrak{c}},$ and $M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$ is purely inseparable of height 1 and degree $p^{f}$ over $M^{\mathfrak{c}}.$ The map $w$ induces isomorphisms \[ w:M_{0}^{\mathfrak{c}}(\mathfrak{p})^{\rm m}\simeq M_{0}^{\mathfrak{cp}}(\mathfrak{p})^\text{\rm et},\,\,\,w:M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}\simeq M_{0}^{\mathfrak{cp}}(\mathfrak{p})^{\rm m}. \] Both $M_{0}^{\mathfrak{c}}(\mathfrak{p})^{\rm m}$ and $M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$ are therefore smooth over $\kappa$. \end{prop} \begin{proof} Taking Zariski closures, it is clear that $w$ maps $M_{0}^{\mathfrak{c}}(\mathfrak{p})^{\rm m}$ to $M_{0}^{\mathfrak{cp}}(\mathfrak{p})^\text{\rm et}$ and $M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$ to $M_{0}^{\mathfrak{cp}}(\mathfrak{p})^{\rm m}$. 
Let $h$ be the order of $[\mathfrak{p}]$ in $Cl^{+}(L)$ and let $\gamma$ be a totally positive element of $L$ such that $(\gamma)=\mathfrak{p}^{h}.$ Applying $w$ successively $2h$ times we get the map \[ M_{0}^{\mathfrak{c}}(\mathfrak{p})\to M_{0}^{\gamma^{2}\mathfrak{c}}(\mathfrak{p}),\,\,\,(\underline{A},H)\mapsto(\underline{A}/A[\gamma],\gamma^{-1}H/A[\gamma]). \] Denote by $\gamma_{*}:M_{0}^{\gamma^{2}\mathfrak{c}}(\mathfrak{p})\to M_{0}^{\mathfrak{c}}(\mathfrak{p})$ the isomorphism \[ (A',\iota',\lambda',\eta';H')\mapsto(A',\iota',\lambda'\circ(\gamma^{2}\otimes1),\eta'\circ(\gamma\otimes1);H'). \] Identifying $A/A[\gamma]$ with $A$ under $\gamma$ (carrying $\gamma^{-1}H/A[\gamma]$ to $H$) it is easy to check that $\gamma_{*}$ maps the tuple $(\underline{A}/A[\gamma],\gamma^{-1}H/A[\gamma])\in M_{0}^{\gamma^{2}\mathfrak{c}}(\mathfrak{p})$ back to $(\underline{A},H)\in M_{0}^{\mathfrak{c}}(\mathfrak{p}).$ We see that \[ \gamma_{*}\circ w^{2h} \] is the identity, hence $w:M_{0}^{\mathfrak{c}}(\mathfrak{p})\to M_{0}^{\mathfrak{cp}}(\mathfrak{p})$ is an isomorphism. The multiplicative component maps isomorphically to $M^{\mathfrak{c}}$ since the section $\underline{A}\mapsto(\underline{A},A[\text{\rm Fr}]\cap A[\mathfrak{p}])$ extends from the ordinary part to all of $M^{\mathfrak{c}}$ and must map it to $M_{0}^{\mathfrak{c}}(\mathfrak{p})^{\rm m}$ by continuity. It follows that $M_{0}^{\mathfrak{c}}(\mathfrak{p})^{\rm m}$ is smooth. Since $M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$ is isomorphic to $M_{0}^{\mathfrak{cp}}(\mathfrak{p})^{\rm m}$, it is also smooth. It remains to prove that the morphism $\pi:M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}\to M^{\mathfrak{c}}$ is finite flat, purely inseparable of height 1 and degree $p^{f}$. 
Consider $(M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et})^{(p)}=\kappa\times_{\phi,\kappa}M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$, which we canonically identify with $M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$, since it is actually defined over $\mathbb{F}_{p},$ and the relative Frobenius morphism (we use $Y$ as a shorthand for $M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$) \[ \text{\rm Fr}_{Y/\kappa}:M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}\to M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}. \] As $M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$ is smooth, $\text{\rm Fr}_{Y/\kappa}$ is finite and flat of degree $p^{g}$. We claim that there is a morphism $\theta:M^{\mathfrak{c}}\to M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$ such that $\text{\rm Fr}_{Y/\kappa}=\theta\circ\pi$. This will force both $\theta$ and $\pi$ to be finite flat and purely inseparable of height 1, as finite morphisms between regular schemes are flat (``miracle flatness'', \cite{Stacks}, Lemma 10.128.1). As the degree of $\pi$ over the ordinary locus is $p^{f},$ this is its degree everywhere, and as a by-product we get that the degree of $\theta$ is $p^{g-f}.$ We define $\theta$ in the language of moduli problems. Let $\sigma:M^{\mathfrak{c}}\simeq M_{0}^{\mathfrak{c}}(\mathfrak{p})^{\rm m}$ be the section $\underline{A}\mapsto(\underline{A},A[\text{\rm Fr}]\cap A[\mathfrak{p}])$ described before. Then $w\circ\sigma:M^{\mathfrak{c}}\simeq M_{0}^{\mathfrak{cp}}(\mathfrak{p})^\text{\rm et}$ is an isomorphism. Let $\mathfrak{p}'=\prod_{\mathfrak{q}\ne\mathfrak{p}}\mathfrak{q}$ be the product of the primes of $L$ dividing $p$ that are different from $\mathfrak{p}.$ Let \[ \theta':M_{0}^{\mathfrak{cp}}(\mathfrak{p})\to M_{0}^{\mathfrak{cpp'}}(\mathfrak{p})=M_{0}^{\mathfrak{c}p}(\mathfrak{p}) \] be the map \[ \theta':(\underline{A},H)\mapsto(\underline{A}/A[\text{\rm Fr}]\cap A[\mathfrak{p}'],H\mod A[\text{\rm Fr}]\cap A[\mathfrak{p}']). 
\] As $H\subset A[\mathfrak{p}]$, and $\mathfrak{p}$ and $\mathfrak{p}'$ are relatively prime, this map is well-defined and in fact sends $M_{0}^{\mathfrak{cp}}(\mathfrak{p})^\text{\rm et}$ to $M_{0}^{\mathfrak{c}p}(\mathfrak{p})^\text{\rm et}$ (it is enough to check this on the ordinary locus). We let \[ \theta=\theta'\circ w\circ\sigma:M^{\mathfrak{c}}\to M_{0}^{\mathfrak{c}p}(\mathfrak{p})^\text{\rm et}\simeq M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}. \] In the last step, we have used the fact that $\mathfrak{pp'}=(p)$ is principal to identify $M_{0}^{\mathfrak{c}p}(\mathfrak{p})^\text{\rm et}\simeq M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$ sending $(A,\iota,\lambda,\eta;H)$ to $(A,\iota,\tilde{\lambda},\eta;H)$ where if $\lambda:\mathfrak{c}p\otimes_{\mathcal{O}_{L}}A\simeq A^{t}$ is a $\mathfrak{c}p$-polarization, $\tilde{\lambda}:\mathfrak{c}\otimes_{\mathcal{O}_{L}}A\simeq A^{t}$ is the $\mathfrak{c}$-polarization given by $\tilde{\lambda}=\lambda\circ(p\otimes1).$ It follows that \begin{equation}\label{equation for theta} \theta(\underline{A},H)=(\underline{A}/A[\text{\rm Fr}],A[\mathfrak{p}]\mod A[\text{\rm Fr}]). \end{equation} To conclude the proof we have to show that for $(\underline{A},H)\in M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$ \[ (\underline{A}/A[\text{\rm Fr}],A[\mathfrak{p}]\mod A[\text{\rm Fr}])=\text{\rm Fr}_{Y/\kappa}(\underline{A},H). \] It is enough to verify this for $(\underline{A},H)\in M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm ord,et},$ as two morphisms that coincide on a dense open set are equal. Assume that $(\underline{A},H)\in M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm ord,et}(k)$ is a $k$-valued point, for a $\kappa$-algebra $k.$ Then $\text{\rm Fr}=\text{\rm Fr}_{A/k}$ is the relative Frobenius of $A$ over $k$ (not to be confused with $\text{\rm Fr}_{Y/\kappa}$ on the right hand side!). 
But $\underline{A}/A[\text{\rm Fr}]\simeq\underline{A}^{(p)/k}$ (base change with respect to the absolute Frobenius of $k$), and since $H$ is the unique \'etale subgroup of $A[\mathfrak{p}]$ of order $p^{f},$ and $A[\mathfrak{p}]\mod A[\text{\rm Fr}]$ is the unique \'etale subgroup of $A^{(p)/k}[\mathfrak{p}]$ of order $p^{f}$, we must also have $A[\mathfrak{p}]\mod A[\text{\rm Fr}]\simeq H^{(p)/k}.$ Finally, let us explain the equality $(\underline{A}^{(p)/k},H^{(p)/k})=\text{\rm Fr}_{Y/\kappa}(\underline{A},H)$. Intuitively, ``the moduli of the object obtained by Frobenius base change is the Frobenius base change of the original moduli''. However, as there are two \emph{different }relative Frobenii involved, care must be taken. Let $(\underline{A},H)\in M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm ord,et}(k)$ correspond to the point $x:\text{\rm Spec}(k)\to M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm ord,et}$ (over $\kappa$). By the functoriality of Frobenius, we have the following commutative diagram, where we have substituted $Y=M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm ord,et}$: \[\xymatrix@C=1.5cm{\text{\rm Spec}(k)\ar[r]^x\ar[d]_{\text{\rm Fr}_{k\!/\! \kappa}} & Y\ar[d]^{\text{\rm Fr}_{Y\!/\! \kappa}}\\ \text{\rm Spec}(k^{(p)\!/\! \kappa}) \ar[r]^{x^{(p)\!/\! \kappa}}\ar[d] & Y^{(p)\!/\! \kappa}\ar[d]\\ \text{\rm Spec}(k) \ar[r]^x & Y }\] Here the vertical unmarked arrows are the base change maps with respect to the absolute Frobenius of $\kappa.$ The composition of the two vertical arrows on the right is the absolute Frobenius $\Phi_{Y}$ of $Y$, and the composition of the two arrows on the left is the absolute Frobenius $\Phi_{k}$ of $\text{\rm Spec}(k)$. We therefore have \[ (\underline{A}^{(p)/k},H^{(p)/k})=\Phi_{k}^{*}(x^{*}(\underline{A}^\text{\rm univ},H^\text{\rm univ}))=(\Phi_{Y}\circ x)^{*}(\underline{A}^\text{\rm univ},H^\text{\rm univ})=\text{\rm Fr}_{Y/\kappa}(\underline{A},H). 
\] (In the last step, we identified $Y^{(p)/\kappa}$ with $Y$, as it is defined over $\mathbb{F}_{p},$ hence we may identify $\text{\rm Fr}_{Y/\kappa}(\underline{A},H)$ with $\Phi_{Y}(\underline{A},H).$) \end{proof} \begin{cor} The smooth $\kappa$-variety $M_{0}^{\mathfrak{c}}(\mathfrak{p})^\text{\rm et}$ is the quotient of $M^{\mathfrak{c}}$ by a smooth $p$-foliation of rank $g-f$. \end{cor} \begin{proof} The finite flat morphism $\theta$ defined in the proof of the previous proposition is purely inseparable of height 1 and degree $p^{g-f}.$ The corollary now follows easily from Theorem \ref{thm:Quotient by foliation}. \end{proof} \subsection{Proof of Theorem \ref{Main Theorem HMV}} \subsubsection{Proof of parts (i) and (ii)} Let $\Sigma\subset\mathbb{B}.$ We have seen that the foliation $\mathscr{F}_{\Sigma}$ is involutive. The obstruction $\kappa_{\mathscr{F}_{\Sigma}}$ to it being $p$-closed (\S\ref{subsec:obstruction p closed}) lies in \[ \mathrm{Hom}(\Phi_{M}^{*}\mathscr{F}_{\Sigma},\mathcal{T}_{M/\kappa}/\mathscr{F}_{\Sigma})\simeq\bigoplus_{\sigma\in\Sigma}\bigoplus_{\tau\notin\Sigma}\mathrm{Hom}(\mathscr{L}_{\sigma}^{-2p},\mathscr{L}_{\tau}^{-2}) \] \[ =\bigoplus_{\sigma\in\Sigma}\bigoplus_{\tau\notin\Sigma}\Gamma(M,\mathscr{L}_{\sigma}^{2p}\otimes\mathscr{L}_{\tau}^{-2}). \] Here we use the fact that under the absolute Frobenius $\Phi_{M}\colon M\to M$ the pullback of a \emph{line} bundle $\mathscr{L}$ is isomorphic to $\mathscr{L}^{p}.$ Indeed, in characteristic $p$ the map $f\otimes s\mapsto fs^{\otimes p}$ is an isomorphism $\mathcal{O}_{M}\otimes_{\phi,\mathcal{O}_{M}}\mathscr{L}\simeq\mathscr{L}^{p}.$ \begin{lem} \label{lem:Nonexistence of HMF of certain weights}Let $\sigma\ne\tau$. We have \[ \Gamma(M,\mathscr{L}_{\sigma}^{2p}\otimes\mathscr{L}_{\tau}^{-2})=\begin{cases} \begin{array}{c} 0\\ \kappa\cdot h_{\tau}^{2} \end{array} & \begin{array}{c} \tau\ne\phi\circ\sigma;\\ \tau=\phi\circ\sigma. 
\end{array}\end{cases} \] \end{lem} \begin{proof} Let $h$ be a non-zero Hilbert modular form on $M$ of weight $2p[\sigma]-2[\tau].$ According to Theorem \ref{thm:Diamond Kassaei on filtration of HMF} there exist integers $a_{\beta}\ge0$ such that \[ \Phi(h)=2p[\sigma]-2[\tau]-\sum_{\beta\in\mathbb{B}}a_{\beta}(p[\beta]-[\phi\circ\beta])\in C^\text{\rm min}\subset C^\text{\rm std} \] (we end up using only the weaker result that the left hand side lies in $C^\text{\rm std}$). If $\tau$ is not in the $\phi$-orbit of $\sigma$ and $\tau\in\mathbb{B}_{\mathfrak{p}}$ we get, upon summing the coefficients of $\beta\in\mathbb{B}_{\mathfrak{p}}$ in $\Phi(h)$, that \[ -2-(p-1)\sum_{\beta\in\mathbb{B}_{\mathfrak{p}}}a_{\beta}\ge0, \] a clear contradiction. It follows that $\tau=\phi^{i}\sigma$ for some $1\le i\le f-1$ where $f\ge2$ is the length of the $\phi$-orbit of $\sigma$ (if $\sigma\in\mathbb{B}_{\mathfrak{p}},$ then $f=f(\mathfrak{p}/p)$). Labelling the $\beta\in\mathbb{B}_{\mathfrak{p}}$ by $0,1,\dots,f-1$ so that $\phi\circ[i]=[i+1\mod f]$, and assuming, without loss of generality, that $[\sigma]=[0]$ and $[\tau]=[i]$ for some $1\le i\le f-1$, we have \[ 2p[0]-2[i]-\sum_{j=0}^{f-1}a_{j}(p[j]-[j+1\mod f])=\sum_{j=0}^{f-1}k_{j}[j], \] where $a_{j}\ge0$ and $k_{j}\ge0.$ Summing over the coefficients we get $2p-2-(p-1)\sum a_{j}=\sum k_{j}\ge0,$ hence $\sum a_{j}=0,1$ or 2. We cannot have $a_{j}=0$ for all $j,$ since $\Phi(h)\in C^\text{\rm std}$. If $a_{m}=1$ and all the other $a_{j}=0$ we again reach a contradiction since the coefficient of $[i]$ comes out negative, no matter what $m$ is. There remains the case where $\sum a_{j}=2.$ In this case all the $k_{j}=0$. Looking at the coefficient of $[0]$ we must have $2p-pa_{0}+a_{f-1}=0.$ This forces $a_{0}=2$ hence all the other $a_{j}=0$ and $i=1$. This means that $\tau=\phi\circ\sigma$ and $hh_{\tau}^{-2}$ has weight 0, i.e. is a constant in $\kappa,$ proving the lemma.
\end{proof} The lemma implies that if $\Sigma$ is invariant under $\phi$ (i.e. is a union of certain $\mathbb{B}_{\mathfrak{p}}$'s for the primes $\mathfrak{p}$ above $p$) then $\kappa_{\mathscr{F}_{\Sigma}}$ vanishes, and $\mathscr{F}_{\Sigma}$ is therefore $p$-closed. To prove the converse, completing the proof of part (i) of Theorem \ref{Main Theorem HMV}, it is enough to show that when $\Sigma=\{\sigma\}$ the obstruction $\kappa_{\mathscr{F}_{\sigma}}$ is a \emph{non-zero} multiple of $h_{\phi\circ\sigma}^{2}.$ This will establish, at the same time, claim (ii) of the Theorem. To this end we use $q$-expansions. Let $\mathcal{R}$ be one of the rings associated with the cusps as in \S\ref{subsec:Tate-objects}, and $R=\mathcal{R}/p\mathcal{R}.$ The pullback of the tangent bundle of $M$ to $\text{\rm Spec}(R)$ is identified with the Lie algebra $\text{\rm Der}(R/\kappa)$ and the Kodaira-Spencer isomorphism yields the isomorphism (\ref{eq:KS-q}) \[ \text{\rm KS}:\mathcal{T}_{\text{\rm Spec}(R)/\kappa}=\text{\rm Der}(R/\kappa)\simeq\mathcal{O}_{L}\otimes R=\bigoplus_{\sigma\in\mathbb{B}}R_{\sigma}. \] Let $\{e_{\sigma}\}$ be the idempotents of $\mathcal{O}_{L}\otimes R$ corresponding to this decomposition. Then $\text{\rm KS}:\mathscr{L}_{\sigma}^{-2}|_{\text{\rm Spec}(R)}\simeq Re_{\sigma}=R_{\sigma}$. The ring $\mathcal{O}_{L}\otimes R$ has an endomorphism \[ \varphi(\alpha\otimes r)=\alpha\otimes r^{p} \] and $\varphi(e_{\sigma})=e_{\phi\circ\sigma}.$ Indeed, \[ \alpha\otimes1\cdot\varphi(e_{\sigma})=\varphi(\alpha\otimes1\cdot e_{\sigma})=\varphi(1\otimes\sigma(\alpha)\cdot e_{\sigma})=1\otimes\sigma(\alpha)^{p}\cdot\varphi(e_{\sigma})=1\otimes(\phi\circ\sigma)(\alpha)\cdot\varphi(e_{\sigma}) \] so $\varphi(e_{\sigma}),$ being an idempotent in $R_{\phi\circ\sigma},$ must equal $e_{\phi\circ\sigma}$. Let $\xi_{\sigma}\in\mathscr{L}_{\sigma}^{-2}$ be the derivation mapping to $e_{\sigma}$ under $\text{\rm KS}$.
If $e_{\sigma}=\sum_{j}\gamma_{j}\otimes r_{j}$ ($\gamma_{j}\in\mathcal{O}_{L},$~$r_{j}\in R$) then by Katz' formula (Lemma \ref{lem:Katz' formula}) \[ \xi_{\sigma}(\sum_{\alpha}a_{\alpha}q^{\alpha})=\sum_{\alpha}a_{\alpha}(\sum_{j}r_{j}\text{\rm Tr}_{L/\mathbb{Q}}(\alpha\gamma_{j}))q^{\alpha}. \] It follows that when we iterate $\xi_{\sigma}$ $p$ times we get the derivation \[ \xi_{\sigma}^{p}(\sum_{\alpha}a_{\alpha}q^{\alpha})=\sum_{\alpha}a_{\alpha}(\sum_{j}r_{j}^{p}\text{\rm Tr}_{L/\mathbb{Q}}(\alpha\gamma_{j}))q^{\alpha} \] (in characteristic $p$ we have $(\sum_{j}r_{j}\text{\rm Tr}_{L/\mathbb{Q}}(\alpha\gamma_{j}))^{p}=\sum_{j}r_{j}^{p}\text{\rm Tr}_{L/\mathbb{Q}}(\alpha\gamma_{j})^{p}$, and each $\text{\rm Tr}_{L/\mathbb{Q}}(\alpha\gamma_{j})$ is a rational integer, hence congruent to its $p$-th power modulo $p$), i.e. the derivation corresponding to $\varphi(e_{\sigma})=e_{\phi\circ\sigma}.$ We conclude that $\xi_{\sigma}^{p}=\xi_{\phi\circ\sigma}\ne0$, hence $\kappa_{\mathscr{F}_{\sigma}}$ is a non-zero section of $\mathscr{L}_{\phi\circ\sigma}^{-2}\otimes\mathscr{L}_{\sigma}^{2p}.$ \begin{rem*} The reader might have noticed that the same $q$-expansion computation can be used to give an alternative proof of all of (i) and (ii) in Theorem \ref{Main Theorem HMV}. However, we found Lemma \ref{lem:Nonexistence of HMF of certain weights} and the relation to the main result of \cite{DK} of independent interest, especially considering the extension of our results to other PEL Shimura varieties. \end{rem*} \subsubsection{Proof of (iii)} We now turn to the last part of the theorem, identifying the quotient of $M$ by the smooth $p$-foliation \[ \mathscr{G}=\bigoplus_{\mathfrak{q}\ne\mathfrak{p}}\mathscr{F}_{\mathfrak{q}} \] with the purely inseparable, finite flat map $\theta:M\to M_{0}(\mathfrak{p})^\text{\rm et}$ constructed in Proposition \ref{prop:etale and multiplicative components}.
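For orientation, the rank count behind this identification can be spelled out as follows (recall that $p$ is unramified in $L$, so $\mathbb{B}=\coprod_{\mathfrak{q}|p}\mathbb{B}_{\mathfrak{q}}$ with $\#\mathbb{B}_{\mathfrak{q}}=f(\mathfrak{q}/p)$ and $\sum_{\mathfrak{q}|p}f(\mathfrak{q}/p)=g$). Since each $\mathscr{F}_{\sigma}\simeq\mathscr{L}_{\sigma}^{-2}$ is a line bundle,
\[
\mathrm{rank}\,\mathscr{G}=\sum_{\mathfrak{q}|p,\,\mathfrak{q}\ne\mathfrak{p}}\#\mathbb{B}_{\mathfrak{q}}=g-f(\mathfrak{p}/p)=g-f.
\]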
Note that the rank of $\mathscr{G}$ is $g-f,$ while $\text{\rm deg}(\theta)=p^{g-f}.$ Since $M_{0}(\mathfrak{p})^\text{\rm et}$ is the quotient of $M$ by the smooth $p$-foliation $\ker(d\theta)$, and the latter is also of rank $g-f,$ it is enough to prove that $d\theta$ annihilates $\mathscr{G}$ to conclude that \[ \ker(d\theta)=\mathscr{G}, \] thus proving part (iii) of Theorem \ref{Main Theorem HMV}. To simplify the notation write $N=M_{0}(\mathfrak{p})^\text{\rm et},$ let $k$ be an algebraically closed field containing $\kappa,$ $x\in M(k)$ and $y=\theta(x)\in N(k).$ Let $\underline{A}$ be the tuple parametrized by~$x.$ Then, by (\ref{equation for theta}), $y$ parametrizes the tuple $(\underline{A}^{(p)},\text{\rm Fr}(A[\mathfrak{p}]))$ where $\text{\rm Fr}=\text{\rm Fr}_{A/k}.$ Write $TM$ for the tangent bundle $\mathcal{T}_{M/\kappa}$ and $T_{x}M$ for its fiber at $x,$ the tangent space to $M$ at $x$. Similar meanings are attached to the symbols $TN$ and $T_{y}N.$ Let $k[\epsilon]$ be the ring of dual numbers, $\epsilon^{2}=0.$ In terms of the moduli problem, \[ T_{x}M=\{\underline{\widetilde{A}}\in M(k[\epsilon])|\,\underline{\widetilde{A}}\mod\epsilon=\underline{A}\}/\simeq \] and its origin is the ``constant'' tuple $\text{\rm Spec}(k[\epsilon])\times_{\text{\rm Spec}(k)}\underline{A}.$ Similarly, \[ T_{y}N=\{(\underline{\widetilde{B}},\widetilde{H})\in N(k[\epsilon])|\,(\underline{\widetilde{B}},\widetilde{H})\mod\epsilon=(\underline{A}^{(p)},\text{\rm Fr}(A[\mathfrak{p}]))\}/\simeq \] and its origin is the ``constant'' tuple $\text{\rm Spec}(k[\epsilon])\times_{\text{\rm Spec}(k)}(\underline{A}^{(p)},\text{\rm Fr}(A[\mathfrak{p}])).$ Let $\widetilde{x}=\underline{\widetilde{A}}\in T_{x}M$ be a tangent vector at $x$. 
In terms of moduli problems \[ d\theta(\widetilde{x})=\theta(\underline{\widetilde{A}})=(\underline{\widetilde{A}}^{(p)},\text{\rm Fr}(\widetilde{A}[\mathfrak{p}]))\in N(k[\epsilon]) \] where now $\widetilde{A}^{(p)}=\widetilde{A}^{(p)/k[\epsilon]}$ is the base change of $\widetilde{A}$ with respect to the raising-to-the-power-$p$ homomorphism $\phi_{k[\epsilon]}\colon k[\epsilon]\to k[\epsilon],$ the PEL structure $\iota,\lambda,\eta$ accompanying $\widetilde{A}$ in the definition of $\underline{\widetilde{A}}$ undergoes the same base change, and $\text{\rm Fr}=\text{\rm Fr}_{\widetilde{A}/k[\epsilon]}:\widetilde{A}\to\widetilde{A}^{(p)}$ is the relative Frobenius of $\widetilde{A}$ over $k[\epsilon]$. We have to show that if $\widetilde{x}\in\mathscr{G}_{x}\subset T_{x}M$ then $d\theta(\widetilde{x})=0$, namely that the tuple $(\underline{\widetilde{A}}^{(p)},\text{\rm Fr}(\widetilde{A}[\mathfrak{p}]))$ is \emph{constant} along $\text{\rm Spec}(k[\epsilon]).$ That $\underline{\widetilde{A}}^{(p)}$ is constant along $\text{\rm Spec}(k[\epsilon])$ is always true, regardless of whether $\widetilde{x}\in\mathscr{G}_{x}$ or not, simply because $\phi_{k[\epsilon]}$ factors as the projection modulo $\epsilon$, $k[\epsilon]\twoheadrightarrow k,$ followed by $\phi_{k}$, and then by the inclusion $k\hookrightarrow k[\epsilon]:$ \[ \phi_{k[\epsilon]}:k[\epsilon]\twoheadrightarrow k\overset{\phi_{k}}{\to}k\hookrightarrow k[\epsilon]. \] It all boils down to the identity $(a+b\epsilon)^{p}=a^{p}.$ Suppose $\widetilde{x}\in\mathscr{G}_{x}$ and let us show that $\text{\rm Fr}(\widetilde{A}[\mathfrak{p}])$ is also constant. Recall the local models $M^\text{\rm loc}$, $N^\text{\rm loc}$ of $M$ and $N$ constructed in \cite{Pa}, 3.3. Let $R$ be any $\kappa$-algebra.
Let $W=(\mathcal{O}_{L}\otimes R)^{2}$ with the induced $\mathcal{O}_{L}$-action and the perfect alternating pairing ($e_{1},e_{2}$ is the standard basis) \[ \left\langle a\otimes t\cdot e_{1},b\otimes s\cdot e_{2}\right\rangle =-\left\langle b\otimes s\cdot e_{2},a\otimes t\cdot e_{1}\right\rangle =\text{\rm Tr}_{L/\mathbb{Q}}(ab)ts\in R, \] \[ \left\langle a\otimes t\cdot e_{i},b\otimes s\cdot e_{i}\right\rangle =0\,\quad(i=1,2). \] Then $M^\text{\rm loc}(R)$ is the set of rank-1 $\mathcal{O}_{L}\otimes R$ local direct summands $\omega\subset W$ which are totally isotropic (equal to their own annihilator) under $\left\langle ,\right\rangle $. Similarly, the $R$-points of $N^\text{\rm loc}$ are given by the following data. Fix $a\in\mathcal{O}_{L}$ such that $\mathfrak{p}=(a,p)$ but $a\equiv1\mod\mathfrak{q}$ for every prime $\mathfrak{q}\ne\mathfrak{p}$ above $p$. Let $u:W\to W$ be the $\mathcal{O}_{L}\otimes R$-linear map sending $e_{1}$ to $e_{1}$ and $e_{2}$ to $a\otimes1\cdot e_{2}$. Equivalently, if we decompose $W=\oplus_{\mathfrak{q}|p}W[\mathfrak{q}],$ $w=\sum_{\mathfrak{q}|p}w(\mathfrak{q})$, then $u$ sends $e_{2}(\mathfrak{p})$ to $0$, but every $e_{2}(\mathfrak{q})$ for $\mathfrak{q}\ne\mathfrak{p}$ to itself. Then $N^\text{\rm loc}(R)$ is the set of pairs $(\omega,\omega')$ of rank-1 $\mathcal{O}_{L}\otimes R$ local direct summands $\omega,\omega'\subset W$ which are totally isotropic such that $u(\omega)\subset\omega'$. The scheme $N^\text{\rm loc}$ is a closed subscheme of a product of two Grassmannians, and its projection to the first factor is $M^\text{\rm loc}$.
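To illustrate these definitions in the simplest possible case, suppose for a moment that $L=\mathbb{Q}$, so $g=1$ (a toy example only, outside the Hilbert modular setting proper). Then $W=R^{2}$ with $\left\langle e_{1},e_{2}\right\rangle =1$, and since the pairing is alternating, every rank-1 local direct summand $\omega=R\cdot(\alpha e_{1}+\beta e_{2})$ is automatically totally isotropic. Hence
\[
M^\text{\rm loc}(R)=\left\{ \text{rank-1 local direct summands }\omega\subset R^{2}\right\} =\mathbb{P}^{1}(R),
\]
and $N^\text{\rm loc}$ is the closed subscheme of $\mathbb{P}^{1}\times\mathbb{P}^{1}$ cut out by the incidence condition $u(\omega)\subset\omega'$.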
We shall use the $R$-points of the local models with $R=k$ or $k[\epsilon]$ to study the map $d\theta$ between $T_{x}M$ and $T_{y}N.$ Let $W=(\mathcal{O}_{L}\otimes k)^{2}$, $\widetilde{W}=(\mathcal{O}_{L}\otimes k[\epsilon])^{2},$ and suppose $x=\underline{A}$ corresponds to $\xi=(\omega\subset W)\in M^\text{\rm loc}(k).$ We may fix an identification $W=H_{dR}^{1}(A/k)$ so that $\omega=\omega_{A}=H^{0}(A,\Omega_{A/k}^{1}).$ Let $\widetilde{x}=\underline{\widetilde{A}}\in M(k[\epsilon])$ map to $x$ modulo $\epsilon$. Let $\widetilde{\xi}=(\widetilde{\omega}\subset\widetilde{W})\in M^\text{\rm loc}(k[\epsilon])$ correspond to $\widetilde{x}$ under the isomorphism $T_{x}M\simeq T_{\xi}M^\text{\rm loc},$ i.e. $\widetilde{\omega}\mod\epsilon=\omega.$ Fix $\alpha\in\omega$ and suppose that $\widetilde{\alpha}=\alpha+\epsilon\beta$ ($\beta\in W)$ is an element of $\widetilde{\omega}$ mapping to $\alpha$ modulo $\epsilon$. Since $\widetilde{\alpha}$ is uniquely determined modulo $\epsilon\widetilde{\omega}=\epsilon\omega$, the image $\overline{\beta}$ of $\beta$ in \[ W/\omega=H^{1}(A,\mathcal{O})=\text{\rm Lie}(A^{t})=\omega_{A^{t}}^{\vee} \] is well-defined. We have thus associated to $\widetilde{x}$ a map from $\omega=\omega_{A}$ to $\omega_{A^{t}}^{\vee}.$ Using the polarization $\lambda$ we view this as a map $\text{\rm KS}(\widetilde{x})$ from $\omega_{A}$ to $\omega_{A}^{\vee}.$ It is straightforward to prove that $\widetilde{\omega}$ is totally isotropic if and only if $\text{\rm KS}(\widetilde{x})$ is symmetric, i.e. 
$\text{\rm KS}(\widetilde{x})^{\vee}=\text{\rm KS}(\widetilde{x}).$ The following is a reformulation of the Kodaira-Spencer isomorphism: \begin{prop*} The map $\text{\rm KS}$ is an isomorphism from $T_{x}M$ onto the space of symmetric $\mathcal{O}_{L}\otimes k$-homomorphisms from $\omega_{A}$ to $\omega_{A}^{\vee}.$ The fiber $\mathscr{G}_{x}$ of the foliation $\mathscr{G}$ consists of those tangent vectors $\widetilde{x}$ for which $\text{\rm KS}(\widetilde{x})$ annihilates $\omega_{A}[\mathfrak{p}].$ \end{prop*} In the decomposition $\mathcal{O}_{L}\otimes k=\bigoplus_{\sigma\in\mathbb{B}}k_{\sigma}$, $\mathfrak{p}\otimes k$ is the ideal of all $(\alpha_{\sigma})$ for which $\alpha_{\sigma}=0$ whenever $\sigma\in\mathbb{B}_{\mathfrak{p}}.$ We therefore have \[ \omega[\mathfrak{p}]=\bigoplus_{\sigma\in\mathbb{B}_{\mathfrak{p}}}\omega[\sigma] \] ($\omega[\mathfrak{p}]$ denotes the kernel of $\mathfrak{p}$, $\omega[\sigma]$ denotes the $\sigma$-isotypical component of $\omega$), and recall that each $\omega[\sigma]$ is one-dimensional over $k$. We conclude that if $\widetilde{x}\in\mathscr{G}_{x}$, and $\widetilde{\alpha}=\alpha+\epsilon\beta\in\widetilde{\omega}$ maps to $\alpha\in\omega$ modulo $\epsilon,$ then for any $\sigma\in\mathbb{B}_{\mathfrak{p}}$ the $\sigma$-component of $\beta$ is proportional to the $\sigma$-component of $\alpha.$ In other words, $\widetilde{\omega}[\mathfrak{p}]=k[\epsilon]\otimes_{k}\omega[\mathfrak{p}].$ By the crystalline deformation theory of Grothendieck (equivalently, by the local model), the group scheme $\widetilde{A}[p]$ over $k[\epsilon]$ is completely determined by the lifting~$\widetilde{\omega}$ of $\omega$ to $k[\epsilon]$.
In the notation used above $\widetilde{\omega}$ determines the point $\widetilde{\xi}=(\widetilde{\omega}\subset\widetilde{W})\in M^\text{\rm loc}(k[\epsilon]),$ hence the deformation $\widetilde{x}=\underline{\widetilde{A}}\in M(k[\epsilon]),$ and therefore $\widetilde{A}[p].$ Furthermore, the subgroup scheme $\widetilde{A}[\mathfrak{p}]$ is constant along $k[\epsilon]$ if and only if $\widetilde{\omega}[\mathfrak{p}]=k[\epsilon]\otimes_{k}\omega[\mathfrak{p}].$ We have therefore proved that if $\widetilde{x}\in\mathscr{G}_{x},$ $\widetilde{A}[\mathfrak{p}]$ is constant along $k[\epsilon].$ With it, so are $\widetilde{A}[\mathfrak{p}][\text{\rm Fr}]$ and the quotient $\text{\rm Fr}(\widetilde{A}[\mathfrak{p}])$. \subsection{Integral varieties} \label{subsec integral varieties} Our goal in this section is to prove the following theorem. \begin{thm} \label{thm:HBMV integral varieties}(i) Assume that $\emptyset\subsetneqq\Sigma\subsetneqq\mathbb{B}$. Then the foliation $\mathscr{F}_{\Sigma}$ does not have any (algebraic) integral variety in the generic fiber $\mathscr{M}_{\mathbb{Q}}$ of the Hilbert modular variety. (ii) Assume that $\Sigma$ is invariant under $\phi$. Let $\Sigma^{c}$ be its complement. Then the Goren-Oort stratum $M_{\Sigma}$ (see definition (\ref{eq:GO stratum}) below) is an integral variety of $\mathscr{F}_{\Sigma^{c}}$ in the characteristic $p$ fiber $M$ of the Hilbert modular variety. \end{thm} \begin{proof} The proof of part (i) is transcendental. Had there been an integral variety of $\mathscr{F}_{\Sigma}$ in characteristic 0, it would provide an algebraic integral variety over $\mathbb{C}$. But over the universal covering $\mathfrak{H}^{g}$ the \emph{analytic} leaves of the foliation are easily determined. If $z_{0}=(z_{0,\sigma})_{\sigma\in\mathbb{B}}$ is a point of $\mathfrak{H}^{g}$, the leaf through it is the coordinate ``plane'' \[ H_{\Sigma}(z_{0})=\{z\in\mathfrak{H}^{g}|\,z_{\tau}=z_{0,\tau}\,\,\forall\tau\notin\Sigma\}.
\] Unless $\Sigma$ is empty or the whole of $\mathbb{B},$ these coordinate ``planes'' do not descend to algebraic varieties in $\Gamma\setminus\mathfrak{H}^{g}$ because the map $H_{\Sigma}(z_{0})\to\Gamma\setminus\mathfrak{H}^{g}$ has a dense image. In fact, \cite{RT} Proposition 3.4 shows that the analytic leaves of these foliations do not even contain any algebraic curves. (ii) Let $\mathcal{H}=H_{dR}^{1}(A^\text{\rm univ}/M)$ be the relative de Rham cohomology of the universal abelian variety over $M$ and $\nabla:\mathcal{H}\to\mathcal{H}\otimes_{\mathcal{O}_{M}}\Omega_{M/\kappa}^{1}$ the Gauss-Manin connection. For a vector field $\xi\in\mathcal{T}_{M/\kappa}(U)$ over a Zariski open set $U\subset M$ we denote by \[ \nabla_{\xi}:\mathcal{H}(U)\to\mathcal{H}(U) \] the $\xi$-derivation obtained by contracting $\Omega_{M/\kappa}^{1}$ with $\xi$. It satisfies ($a\in\mathcal{O}_{M}(U)$) \[ \nabla_{\xi}(at)=\xi(a)t+a\nabla_{\xi}(t). \] If $\sigma,\tau\in\mathbb{B}$ and $\tau$ is not in the $\phi$-orbit of $\sigma,$ and if $\xi\in\mathscr{F}_{\tau}(U)\subset\mathcal{T}_{M/\kappa}(U)$ then, since $\text{\rm KS}(\xi)$ annihilates \[ \mathscr{L}_{\sigma}=\underline{\omega}[\sigma]\subset\underline{\omega}\subset\mathcal{H}, \] $\nabla_{\xi}$ induces an $\mathcal{O}_{M}$-derivation of the line bundle $\mathscr{L}_{\sigma}$ over $U$. The same holds with $\mathscr{L}_{\phi\circ\sigma}$ since $\tau\ne\phi\circ\sigma$. By the usual rules of derivations, we obtain a derivation $\nabla_{\xi}$ of the line bundle $\mathrm{Hom}(\mathscr{L}_{\phi\circ\sigma},\mathscr{L}_{\sigma}^{p}),$ of which the partial Hasse invariant $h_{\phi\circ\sigma}$ is a global section. Let us elaborate on the last statement. 
If $t$ is a section of $\mathscr{L}_{\sigma}$ then the induced derivation of \[ \mathscr{L}_{\sigma}^{p}=\mathscr{L}_{\sigma}^{\otimes p}\simeq\mathscr{L}_{\sigma}^{(p)} \] (the first is the $p$-th tensor product, the second the base-change by the absolute Frobenius of $M$) is given by \[ \nabla_{\xi}^{(p)}(at^{p})=\xi(a)t^{p} \] (equivalently, on Frobenius base-change, $\nabla_{\xi}^{(p)}(a\otimes t)=\xi(a)\otimes t$; note that this is a \emph{canonical} derivation, independent of the original $\nabla_{\xi}$). If $\mathcal{H}$ is the relative de Rham cohomology of $A^\text{\rm univ},$ and $\mathcal{H}^{(p)}$ the relative de Rham cohomology of $A^{{\rm univ}(p)},$ then the same formula applied to the Gauss-Manin connection of $A^\text{\rm univ}$ gives the Gauss-Manin connection of $A^{{\rm univ}(p)}.$ Once again, the latter is the \emph{canonical} connection which exists on the Frobenius base change of any vector bundle. \emph{Any} section of the form $1\otimes t$ is flat, just as any function which is a $p$-th power is annihilated by all the derivations. Finally, if $h:\mathscr{L}_{\phi\circ\sigma}\to\mathscr{L}_{\sigma}^{p}$ is a homomorphism of line bundles, then $\nabla_{\xi}h$ is the homomorphism \[ (\nabla_{\xi}h)(t)=\nabla_{\xi}^{(p)}(h(t))-h(\nabla_{\xi}(t)). \] \begin{lem} Let $\sigma, \tau \in \mathbb{B}$, not in the same $\phi$-orbit. The partial Hasse invariant $h_{\phi\circ\sigma}$ is horizontal for $\xi\in\mathscr{F}_{\tau},$ i.e. $\nabla_{\xi}(h_{\phi\circ\sigma})=0.$ \end{lem} \begin{proof} Identify $\mathscr{L}_{\sigma}^{p}$ with $\mathscr{L}_{\sigma}^{(p)}=\Phi_{M}^{*}\mathscr{L}_{\sigma}=\Phi_{M}^{*}(\underline{\omega}[\sigma])=(\Phi_{M}^{*}\underline{\omega})[\phi\circ\sigma]$.
By definition, $h_{\phi\circ\sigma}$ is the $\phi\circ\sigma$ component of the $\mathcal{O}_{L}$-homomorphism \[ V:\underline{\omega}\to\underline{\omega}^{(p)}=\Phi_{M}^{*}\underline{\omega}, \] induced by the Verschiebung isogeny $\text{\rm Ver}_{A^\text{\rm univ}/M}:A^{{\rm univ}(p)}\to A^\text{\rm univ}.$ It is horizontal since the Gauss-Manin connection commutes, in general, with any map on $\mathcal{H}=H_{dR}^{1}(A^\text{\rm univ}/M)$ induced by an isogeny, and in particular \[ \nabla_{\xi}(h_{\phi\circ\sigma})=\nabla_{\xi}^{(p)}\circ h_{\phi\circ\sigma}-h_{\phi\circ\sigma}\circ\nabla_{\xi}=0. \] \end{proof} Let $H_{\phi\circ\sigma}$ be the hypersurface defined by the vanishing of $h_{\phi\circ\sigma}$ in $M.$ By the results of \cite{G-O}, it is smooth, and $h_{\phi\circ\sigma}$ vanishes on it to first order. Furthermore, for different $\sigma$'s these hypersurfaces intersect transversally. Let $x\in H_{\phi\circ\sigma}$ and choose a Zariski open neighborhood $U$ of $x$ on which $\mathrm{Hom}(\mathscr{L}_{\phi\circ\sigma},\mathscr{L}_{\sigma}^{p})$ is a trivial invertible sheaf. Let~$e$ be a basis of $\mathrm{Hom}(\mathscr{L}_{\phi\circ\sigma},\mathscr{L}_{\sigma}^{p})$ over $U$ and write $h_{\phi\circ\sigma}=he$ for some $h\in\mathcal{O}_{M}(U)$. Then $H_{\phi\circ\sigma}\cap U$ is given by the equation $h=0$ and $h$ vanishes on it to first order. Furthermore, if $\xi\in\mathscr{F}_{\tau}(U),$ by the Lemma we have \[ 0=\nabla_{\xi}(h_{\phi\circ\sigma})=\xi(h)\cdot e+h\nabla_{\xi}(e), \] so along $H_{\phi\circ\sigma}=\{h=0\}$ we also have $\xi(h)=0.$ This proves that $\xi$ is parallel to $H_{\phi\circ\sigma},$ i.e. $\xi_{x}\in T_{x}H_{\phi\circ\sigma}\subset T_{x}M.$ Let $\Sigma$ be a $\phi$-invariant subset of $\mathbb{B}$. 
Since the same analysis holds for every $\sigma\in\Sigma$ and every $\tau\notin\Sigma$ we get that at every point $x$ of \begin{equation} M_{\Sigma}:=\{x|\,h_{\sigma}(x)=0\,\,\forall\sigma\in\Sigma\}\label{eq:GO stratum} \end{equation} the $p$-foliation \[ \mathscr{F}_{\Sigma^{c}}=\oplus_{\tau\notin\Sigma}\mathscr{F}_{\tau} \] is contained in $T_{x}M_{\Sigma}\subset T_{x}M.$ As both $\mathscr{F}_{\Sigma^{c}}$ and $TM_{\Sigma}$ are vector bundles of the same rank $g-\#(\Sigma)$, and both are local direct summands of $TM$, we have shown that the Goren-Oort stratum $M_{\Sigma}$ is an integral variety of $\mathscr{F}_{\Sigma^{c}}.$ \end{proof} \section{$V$-foliations on unitary Shimura varieties} \subsection{Notation and preliminary results on unitary Shimura varieties} \subsubsection{The moduli scheme} We now turn to the second type of foliations considered in this paper, on unitary Shimura varieties in characteristic $p$. Let $K$ be a CM field, $[K:\mathbb{Q}]=2g$ and $L=K^{+}$ its totally real subfield. Let $\rho\in \mathrm{Gal}(K/L)$ denote complex conjugation. Let $E\subset\mathbb{C}$, \emph{the field of definition,} be a number field containing all the conjugates\footnote{We do not insist on the field of definition being the minimal possible one, i.e. the reflex field of the CM type.} of $K$. For \[ \tau\in\mathscr{I}:=\mathrm{Hom}(K,E)=\mathrm{Hom}(K,\mathbb{C}) \] we write $\bar{\tau}=\tau\circ\rho$. We let $\mathscr{I}^{+} =\mathrm{Hom}(L,E)=\mathscr{I}/\left\langle \rho\right\rangle$ be the set of orbits of $\mathscr{I}$ under the action of $\rho$, and write its elements as unordered pairs $\{\tau,\bar{\tau}\}.$ Let $d\ge1$ and fix a \emph{PEL-type $\mathcal{O}_{K}$-lattice $(\Lambda,\left\langle ,\right\rangle ,h)$} of rank $d$ over $\mathcal{O}_{K}$ (\cite{Lan}, 1.2.1.3).
Thus $\Lambda$ is a projective $\mathcal{O}_{K}$-module of rank $d$ (regarded, if we forget the $\mathcal{O}_{K}$-action, as a lattice of rank $2gd$), $\left\langle ,\right\rangle $ is a non-degenerate alternating bilinear form $\Lambda\times\Lambda\to2\pi i\mathbb{Z},$ satisfying $\left\langle ax,y\right\rangle =\left\langle x,\bar{a}y\right\rangle $ for $a\in\mathcal{O}_{K}$, and \[ h:\mathbb{C}\to {\rm End}_{\mathcal{O}_{K}}(\Lambda\otimes\mathbb{R}) \] is an $\mathbb{R}$-linear ring homomorphism satisfying (i) $\left\langle h(z)x,y\right\rangle =\left\langle x,h(\bar{z})y\right\rangle $ (ii) $(x,y)=(2\pi i)^{-1}\left\langle x,h(i)y\right\rangle $ is an inner product (symmetric and positive definite) on the real vector space $\Lambda\otimes\mathbb{R}$. The $2gd$-dimensional complex vector space $V=\Lambda\otimes\mathbb{C}$ breaks up as a direct sum \[ V=V_{0}\oplus V_{0}^{c} \] of two $\left\langle ,\right\rangle $-isotropic subspaces, where $V_{0}=\{v\in V|\,h(z)v=1\otimes z\cdot v\}$ and $V_{0}^{c}=\{v\in V|\,h(z)v=1\otimes\bar{z}\cdot v\}$. The inclusion $\Lambda\otimes\mathbb{R}\subset\Lambda\otimes\mathbb{C}=V$ allows us to identify $V_{0}$ with the real vector space $\Lambda\otimes\mathbb{R}$, and then its complex structure is given by $J=h(i).$ As representations of $\mathcal{O}_{K}$ \[ V_{0}\simeq\sum_{\tau\in\mathscr{I}}r_{\tau}\tau,\,\,\,V_{0}^{c}\simeq\sum_{\tau\in\mathscr{I}}r_{\tau}\bar{\tau}, \] where the $r_{\tau}$ are non-negative integers satisfying $r_{\tau}+r_{\bar{\tau}}=d$ for each $\tau$. We call the collection $\{r_{\tau}\}$ (or the formal sum $\sum_{\tau\in\mathscr{I}}r_{\tau}\tau$) the \emph{signature} of $(\Lambda,\left\langle ,\right\rangle ,h)$ (\cite{Lan} 1.2.5.2), or the \emph{CM} \emph{type}. 
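To fix ideas, consider the simplest example: $K$ imaginary quadratic, so $g=1$ and $\mathscr{I}=\{\tau,\bar{\tau}\}$. The signature is then the familiar pair $(r_{\tau},r_{\bar{\tau}})$ with $r_{\tau}+r_{\bar{\tau}}=d$, i.e.
\[
V_{0}\simeq r_{\tau}\,\tau+r_{\bar{\tau}}\,\bar{\tau}.
\]
For instance, $d=3$ with signature $(2,1)$ gives rise to Picard modular surfaces, of dimension $r_{\tau}r_{\bar{\tau}}=2$.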
Let $N\ge3$ (the \emph{tame level}) be an integer which is relatively prime to the discriminant of the lattice $(\Lambda,\left\langle ,\right\rangle ).$ Let $S$ be the set of \emph{bad} \emph{primes}, defined to be the rational primes that ramify in $K$, divide $N$, or divide the discriminant of $\Lambda$. The primes $p\notin S$ are called \emph{good}, and we fix once and for all such a prime $p$. Consider the following moduli problem $\mathscr{M}$ over $\mathcal{O}_{E}[1/S]$. For an $\mathcal{O}_{E}[1/S]$-algebra $R$, the set $\mathscr{M}(R)$ is the set of isomorphism classes of tuples $\underline{A}=(A,\iota,\lambda,\eta)$ where: \begin{itemize} \item $A$ is an abelian scheme of relative dimension $gd$ over $R$. \item $\iota:\mathcal{O}_{K}\hookrightarrow {\rm End}(A/R)$ is an embedding of rings, rendering $\text{\rm Lie}(A/R)$ an $\mathcal{O}_{K}$-module of type $\sum_{\tau\in\mathscr{I}}r_{\tau}\tau.$ \item $\lambda:A\to A^{t}$ is a $\mathbb{Z}_{(p)}^{\times}$-polarization whose Rosati involution preserves $\iota(\mathcal{O}_{K})$ and induces on it complex conjugation. \item $\eta$ is a full level-$N$ structure compatible via $\lambda$ with $(\Lambda\otimes\widehat{\mathbb{Z}}^{(p)},\left\langle ,\right\rangle ).$ \end{itemize} See \cite{Lan}, 1.4.1.2 for more details, in particular pertaining to the level-$N$ structure. The moduli problem $\mathscr{M}$ is representable by a smooth scheme over $\mathcal{O}_{E}[1/S]$, which we denote by the same letter. 
Its complex points form a finite disjoint union of Shimura varieties associated with the unitary group of signature $\{r_{\tau}\}.$ Denote by \[ \underline{A}^\text{\rm univ}=(A^\text{\rm univ},\iota^\text{\rm univ},\lambda^\text{\rm univ},\eta^\text{\rm univ}) \] the universal tuple over $\mathscr{M}.$ \bigskip{} We let $\kappa$ be a finite field, large enough to contain all the residue fields of the primes of $E$ above $p.$ Fix, once and for all, an embedding $E\hookrightarrow W(\kappa)[1/p]$, and consider \[ M=\kappa\times_{\mathcal{O}_{E}[1/S]}\mathscr{M}, \] the special fiber at the chosen prime of $\mathcal{O}_{E}[1/S]$, base-changed to $\kappa.$ It is a smooth variety over $\kappa$ of dimension $\sum_{\{\tau,\bar{\tau}\}\in\mathscr{I}^{+}}r_{\tau}r_{\bar{\tau}}.$ We let $\mathcal{T}$ denote its tangent bundle. Via the fixed embedding of $\mathcal{O}_{E}$ in $W(\kappa)$ we regard $\mathscr{I}$ also as the set of homomorphisms of $\mathcal{O}_{K}$ to $\kappa.$ For a prime $\mathfrak{P}$ of $\mathcal{O}_{K}$ above $p$ we let $\mathscr{I}_{\mathfrak{P}}$ be those homomorphisms that factor through $\kappa(\mathfrak{P})=\mathcal{O}_{K}/\mathfrak{P},$ \[ \mathscr{I}=\coprod_{\mathfrak{P}|p}\mathscr{I}_{\mathfrak{P}},\,\,\,\mathscr{I}_{\mathfrak{P}}=\mathrm{Hom}(\kappa(\mathfrak{P}),\kappa)=\mathrm{Hom}(\mathcal{O}_{K,\mathfrak{P}},W(\kappa)). \] The Frobenius $\phi(x)=x^{p}$ acts on $\mathscr{I}$ on the left via $\tau\mapsto\phi\circ\tau$ and the $\mathscr{I}_{\mathfrak{P}}$ are its orbits, each of them permuted cyclically by $\phi$. Following Moonen's convention \cite{Mo}, when we use $\mathscr{I}_{\mathfrak{P}}$ as an indexing set, we shall also write $i$ for $\tau$ and $i+1$ for $\phi\circ\tau$. 
This will be done without further notice to avoid the heavy notation $\tau_{i+1}=\phi\circ\tau_{i}.$ \subsubsection{The Kodaira-Spencer isomorphism} Let $\pi:A^\text{\rm univ}\to\mathscr{M}$ be the structure morphism of the universal abelian variety, and \[ \underline{\omega}=\pi_{*}(\Omega_{A^\text{\rm univ}/\mathscr{M}}^{1})\subset\mathcal{H}=\mathbb{R}^{1}\pi_{*}(\Omega_{A^\text{\rm univ}/\mathscr{M}}^{\bullet}) \] its relative de Rham cohomology $\mathcal{H}$ and Hodge bundle $\underline{\omega}$. These are vector bundles on $\mathscr{M}$ of ranks $2gd$ and $gd$ respectively. The Hodge bundle $\underline{\omega}$ is the dual bundle to the relative Lie algebra $\underline{\text{\rm Lie}}=\text{\rm Lie}(A^\text{\rm univ}/\mathscr{M})$. Since $E$ contains all the conjugates of~$K,$ $S$ contains all the ramified primes in $K$, and $\mathcal{O}_{\mathscr{M}}$ is an $\mathcal{O}_{E}[1/S]$-algebra, these vector bundles decompose under the action of $\mathcal{O}_{K}$ into isotypical parts \[ \underline{\omega}=\bigoplus_{\tau\in\mathscr{I}}\underline{\omega}[\tau]\subset\bigoplus_{\tau\in\mathscr{I}}\mathcal{H}[\tau]=\mathcal{H}. \] For each $\tau$ the rank of $\underline{\omega}[\tau]$ is $r_{\tau}$, and we have a short exact sequence (the Hodge filtration) \[ 0\to\underline{\omega}[\tau]\to\mathcal{H}[\tau]\to R^{1}\pi_{*}(\mathcal{O}_{A^\text{\rm univ}})[\tau]\to0. \] Since $\lambda^\text{\rm univ}$ is a prime-to-$p$ quasi-isogeny, and the Rosati involution on $\iota^\text{\rm univ}(\mathcal{O}_{K})$ is complex conjugation, \emph{after} we base-change from $\mathcal{O}_{E}[1/S]$ to $W(\kappa)$ the polarization induces an isomorphism \[ R^{1}\pi_{*}(\mathcal{O}_{A^\text{\rm univ}})[\tau]\simeq R^{1}\pi_{*}(\mathcal{O}_{(A^\text{\rm univ})^{t}})[\bar{\tau}]=\underline{\text{\rm Lie}}[\bar{\tau}]=\underline{\omega}[\bar{\tau}]^{\vee}.
\] Since $\mathrm{rk}(\underline{\omega}[\tau])+\mathrm{rk}(\underline{\omega}[\bar{\tau}])=d,$ each $\mathcal{H}[\tau]$ is of rank $d$. We introduce the shorthand notation \[ \mathcal{P}_{\tau}=\underline{\omega}[\tau]. \] The Hodge filtration exact sequence can be written therefore, over $W(\kappa)$, as \begin{equation} 0\to\mathcal{P}_{\tau}\to\mathcal{H}[\tau]\to\mathcal{P}_{\bar{\tau}}^{\vee}\to0.\label{eq:Hodge Filtration Exact Sequence} \end{equation} For any abelian scheme $A/R$, there is a canonical perfect pairing \[ \{,\}_{dR}:H_{dR}^{1}(A/R)\times H_{dR}^{1}(A^{t}/R)\to R. \] In our case, using the prime-to-$p$ quasi-isogeny $\lambda$ to identify $H_{dR}^{1}(A/R)[\bar{\tau}]$ with $H_{dR}^{1}(A^{t}/R)[\tau]$, we get a pairing \[ \{,\}_{dR}:\mathcal{H}[\tau]\times\mathcal{H}[\bar{\tau}]\to\mathcal{O}_{\mathscr{M}}. \] Under this pairing $\mathcal{P}_{\tau}$ and $\mathcal{P}_{\bar{\tau}}$ are exact annihilators of each other, and the induced pairing between $\mathcal{P}_{\tau}$ and $\mathcal{P}_{\tau}^{\vee}$ is the natural one. The Gauss-Manin connection is a flat connection \[ \nabla:\mathcal{H}\to\mathcal{H}\otimes\Omega_{\mathscr{M}}^{1}. \] If $\xi\in\mathcal{T}$ is a vector field (on an open set in $\mathscr{M}$, omitted from the notation), as in \S\ref{subsec integral varieties}, we denote by $\nabla_{\xi}:\mathcal{H}\to\mathcal{H}$ the $\xi$-derivation of $\mathcal{H}$ obtained by contracting $\Omega_{\mathscr{M}}^{1}$ with~$\xi$. Since $\nabla$ commutes with the endomorphisms, $\nabla_{\xi}$ preserves the $\tau$-isotypical parts $\mathcal{H}[\tau]$ for every $\tau\in\mathscr{I}.$ When $\nabla_{\xi}$ is applied to $\mathcal{P}_{\tau}$ and the result is projected to $\mathcal{P}_{\bar{\tau}}^{\vee}$, we get an $\mathcal{O}_{\mathscr{M}}$\emph{-linear} homomorphism \[ \text{\rm KS}^{\vee}(\xi)_{\tau}\in {\rm Hom}(\mathcal{P}_{\tau},\mathcal{P}_{\bar{\tau}}^{\vee})\simeq\mathcal{P}_{\tau}^{\vee}\otimes\mathcal{P}_{\bar{\tau}}^{\vee}. 
\] Using the formalism of the Gauss-Manin connection and the symmetry of the polarization, it is easy to check that when we identify $\mathcal{P}_{\tau}^{\vee}\otimes\mathcal{P}_{\bar{\tau}}^{\vee}$ with $\mathcal{P}_{\bar{\tau}}^{\vee}\otimes\mathcal{P}_{\tau}^{\vee}$, \[ \text{\rm KS}^{\vee}(\xi)_{\tau}=\text{\rm KS}^{\vee}(\xi)_{\bar{\tau}}. \] Thus $\text{\rm KS}^{\vee}(\xi)_{\tau}$ depends only on the pair $\{\tau,\bar{\tau}\}\in\mathscr{I}^{+}$, i.e. on $\tau|_{L}.$ When we combine these maps, we get an $\mathcal{O}_{\mathscr{M}}$\emph{-linear} homomorphism \[ \text{\rm KS}^{\vee}(\xi)\in\bigoplus_{\{\tau,\bar{\tau}\}\in\mathscr{I}^{+}}\mathcal{P}_{\tau}^{\vee}\otimes\mathcal{P}_{\bar{\tau}}^{\vee}. \] \begin{prop}[The Kodaira-Spencer isomorphism] The map \[\xymatrix@C=0.6cm{ \text{\rm KS}^{\vee}\colon \mathcal{T}\ar[r]^>>>>>\sim&\underset{{\{\tau,\bar{\tau}\}\in\mathscr{I}^{+}}}{\bigoplus}\mathcal{P}_{\tau}^{\vee}\otimes\mathcal{P}_{\bar{\tau}}^{\vee} }\] sending $\xi$ to $\text{\rm KS}^{\vee}(\xi)$ is an isomorphism. \end{prop} We let $\text{\rm KS}$ be the isomorphism dual to $\text{\rm KS}^{\vee},$ namely \begin{equation}\label{eq:KS-1}\xymatrix@C=0.6cm{ \text{\rm KS}\colon\underset{{\{\tau,\bar{\tau}\}\in\mathscr{I}^{+}}}{\bigoplus}\mathcal{P}_{\tau}\otimes\mathcal{P}_{\bar{\tau}}\ar[r]^>>>>>\sim&\Omega_{\mathscr{M}}^{1}.} \end{equation} \subsubsection{\label{subsec:The-mu-ordinary-locus}The $\mu$-ordinary locus of $M$} For a general signature, the abelian varieties parametrized by the mod $p$ fiber $M$ of our moduli space are never ordinary. There is, however, a dense open set $M^\text{\rm ord}\subset M$ at whose geometric points $A^\text{\rm univ}$ is ``as ordinary as possible''. To make this precise we introduce, following \cite{Mo} 1.2.3, certain standard Dieudonné modules and their associated $p$-divisible groups.
Let $k$ be an algebraically closed field containing $\kappa.$ If $A$ is an abelian variety over the field $k$, with endomorphisms by $\mathcal{O}_{K}$, its $p$-divisible group $A[p^{\infty}]$ breaks up as a product \[ A[p^{\infty}]=\prod_{\mathfrak{P}|p}A[\mathfrak{P}^{\infty}] \] over the primes of $\mathcal{O}_{K}$ above $p$, and $A[\mathfrak{P}^{\infty}]$ becomes a $p$-divisible group with $\mathcal{O}_{K,\mathfrak{P}}$-action. Fix a prime $\mathfrak{P}|p$ and write $\mathcal{O}=\mathcal{O}_{K,\mathfrak{P}}$. Let \[ \mathfrak{f}:\mathscr{I}_{\mathfrak{P}}=\mathrm{Hom}(\mathcal{O},W(k))\to[0,d] \] be an integer-valued function. Let $M(d,\mathfrak{f})$ be the following Dieudonn\'e module with $\mathcal{O}$-action over $W(k).$ First, \[ M(d,\mathfrak{f})=\oplus_{i\in\mathscr{I}_{\mathfrak{P}}}M_{i} \] where $M_{i}=\oplus_{j=1}^{d}W(k)e_{i,j}$ is a free $W(k)$-module of rank $d$. We let $\mathcal{O}$ act on $M_{i}$ via the homomorphism $i:\mathcal{O}\to W(k).$ We let $F$ (resp. $V)$ be the $\phi$-semilinear (resp. $\phi^{-1}$-semilinear) endomorphism of $M(d,\mathfrak{f})$ satisfying (recall the convention that if $i$ refers to the embedding $\tau$ then $i+1$ refers to $\phi\circ\tau$) \[ F(e_{i,j})=\begin{cases} \begin{array}{c} e_{i+1,j}\\ pe_{i+1,j} \end{array} & \begin{array}{c} 1\le j\le d-\mathfrak{f}(i)\\ d-\mathfrak{f}(i)<j\le d \end{array}\end{cases} \] and \[ V(e_{i+1,j})=\begin{cases} \begin{array}{c} pe_{i,j}\\ e_{i,j} \end{array} & \begin{array}{c} 1\le j\le d-\mathfrak{f}(i)\\ d-\mathfrak{f}(i)<j\le d \end{array}\end{cases}.
\] Then $M(d,\mathfrak{f})$ is a Dieudonn\'e module with $\mathcal{O}$-action, of rank $[\mathcal{O}:\mathbb{Z}_{p}]d$ over $W(k).$ We let $X(d,\mathfrak{f})$ be the unique $p$-divisible group with $\mathcal{O}$-action over $k$ whose contravariant Dieudonn\'e module is $M(d,\mathfrak{f}).$ Let $N(d,\mathfrak{f})=M(d,\mathfrak{f})/pM(d,\mathfrak{f}).$ This is the Dieudonn\'e module of the finite group scheme $Y(d,\mathfrak{f})=X(d,\mathfrak{f})[p].$ The cotangent space of $X(d,\mathfrak{f})$ is canonically isomorphic to $N(d,\mathfrak{f})[F]=\bigoplus_{i\in\mathscr{I}_{\mathfrak{P}}}\bigoplus_{j=d-\mathfrak{f}(i)+1}^{d}ke_{i,j}.$ It inherits a $\kappa(\mathfrak{P})$-action and its $i$-isotypic subspace $\bigoplus_{j=d-\mathfrak{f}(i)+1}^{d}ke_{i,j}$ is $\mathfrak{f}(i)$-dimensional. Let $X$ be a $p$-divisible group with $\mathcal{O}$-action over $k.$ In \cite{Mo}, Theorem 1.3.7, it is proved that if either $X$ is isogenous to $X(d,\mathfrak{f})$, or $X[p]$ is isomorphic to $Y(d,\mathfrak{f})$, then $X$ is already isomorphic to $X(d,\mathfrak{f})$. \begin{defn} Let $A$ be an abelian variety over $k$ with $\mathcal{O}_{K}$-action, of dimension $2gd.$ Then $A$ is called $\mu$\emph{-ordinary} if every $A[\mathfrak{P}^{\infty}]$ with its $\mathcal{O}_{K,\mathfrak{P}}$-action is isomorphic to some $X(d,\mathfrak{f}).$ \end{defn} Let $A$ be a $\mu$-ordinary abelian variety over $k$ with $\mathcal{O}_{K}$-action and CM type $\{r_{\tau}\}$ as before. The cotangent space of $A$ may be identified with that of $A[p^{\infty}]$. From the relation between the cotangent space of $A[p^{\infty}]$ and its Dieudonn\'e module, it follows that if $\tau=i\in\mathscr{I}_{\mathfrak{P}}$, $\mathfrak{f}(i)=r_{\tau}.$ It follows that the function $\mathfrak{f}$ is determined by the signature, hence all the $\mu$-ordinary $A/k$ parametrized by geometric points of $M$ have isomorphic $p$-divisible groups. Wedhorn proved the following fundamental theorem.
\begin{thm*} \cite{Wed,Mo} There is a dense open set $M^\text{\rm ord}\subset M$ such that for any geometric point $x\in M(k)$ the abelian variety $A_{x}^\text{\rm univ}$ is $\mu$-ordinary if and only if $x\in M^\text{\rm ord}(k).$ \end{thm*} Using the slope decomposition explained below, it is possible to attach a Newton polygon to a $p$-divisible group with $\mathcal{O}_{K}$-action, and the points of $M^\text{\rm ord}$ are characterized also as those whose Newton polygon lies \emph{below} every other Newton polygon (Newton polygons \emph{go up under specialization}). \subsubsection{\label{subsec:Slope-decomposition}Slope decomposition over $M^\text{\rm ord}$} We next review the slope decomposition of the $\mathfrak{P}$-divisible group of a $\mu$-ordinary abelian variety $A$ with $\mathcal{O}_{K}$-action over an algebraically closed field $k$ containing $\kappa.$ Recall that $A[\mathfrak{P}^{\infty}]\simeq X(d,\mathfrak{f}).$ For each $1\le j\le d$ the submodule \[ M(d,\mathfrak{f})^{j}=\bigoplus_{i\in\mathscr{I}_{\mathfrak{P}}}W(k)e_{i,j} \] of $M(d,\mathfrak{f})$ is a sub-Dieudonn\'e module of $\mathcal{O}$-height 1. We define its \emph{slope} as the rational number \[ \frac{|\{i|\,j>d-\mathfrak{f}(i)\}|}{|\mathscr{I}_{\mathfrak{P}}|}. \] (This is the same as the slope in the classification of $p$-divisible groups and Dieudonn\'e modules.) Note that the slope of $M(d,\mathfrak{f})^{j+1}$ is greater than or equal to the slope of $M(d,\mathfrak{f})^{j}$, and if they are equal, $M(d,\mathfrak{f})^{j+1}\simeq M(d,\mathfrak{f})^{j}$.
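As a sanity check on the slope formula, the following sketch (our own; the function name and the toy signature are hypothetical, not from the paper) computes the slope of each $M(d,\mathfrak{f})^{j}$ from a signature function $\mathfrak{f}$, given as a list indexed by $\mathscr{I}_{\mathfrak{P}}$, and verifies that the slopes are nondecreasing in $j$.

```python
from fractions import Fraction

def slopes(d, f):
    """Slopes of the height-one submodules M(d,f)^j, j = 1..d.

    f is the signature function, a list f[i] in [0, d] indexed by the
    embeddings i in I_P (so |I_P| = len(f)).  The slope of M(d,f)^j is
    |{i : j > d - f(i)}| / |I_P|.
    """
    m = len(f)
    return [Fraction(sum(1 for i in range(m) if j > d - f[i]), m)
            for j in range(1, d + 1)]

# Toy example: d = 3, two embeddings, f = (1, 2).
d, f = 3, [1, 2]
lam = slopes(d, f)
assert all(0 <= s <= 1 for s in lam)                     # slopes lie in [0, 1]
assert all(lam[j] <= lam[j + 1] for j in range(d - 1))   # nondecreasing in j
```

Grouping the $j$'s with a common slope then yields the multiplicities $d^{\nu}$ of the slope decomposition recalled next.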
Let $0\le\lambda_{1}<\lambda_{2}<\cdots<\lambda_{r}\le1$ be the distinct slopes obtained in this way, and for $1\le\nu\le r$ let $d^{\nu}$ be the number of $j$'s with $\mathrm{slope}(M(d,\mathfrak{f})^{j})=\lambda_{\nu}.$ Define a function $\mathfrak{f}^{\nu}:\mathscr{I}_{\mathfrak{P}}\to\{0,d^{\nu}\}$ by \[ \mathfrak{f}^{\nu}(i)=\begin{cases} \begin{array}{c} 0\\ d^{\nu} \end{array} & \begin{array}{c} \mathrm{if\,\,\,}\sum_{\ell=1}^{\nu-1}d^{\ell}<d-\mathfrak{f}(i)\\ \mathrm{if\,\,\,}\sum_{\ell=1}^{\nu-1}d^{\ell}\ge d-\mathfrak{f}(i). \end{array}\end{cases} \] Then grouping together the $M(d,\mathfrak{f})^{j}$ of slope $\lambda_{\nu}$ we get an isoclinic Dieudonn\'e module isomorphic to $M(d^{\nu},\mathfrak{f}^{\nu}).$ We arrive at the \emph{slope decomposition} \[ M(d,\mathfrak{f})=\bigoplus_{\nu=1}^{r}M(d^{\nu},\mathfrak{f}^{\nu}), \] and similarly for the $p$-divisible group \[ X(d,\mathfrak{f})=\prod_{\nu=1}^{r}X(d^{\nu},\mathfrak{f}^{\nu}). \] This description is valid for $\mu$-ordinary $p$-divisible groups (with $\mathcal{O}$-action) over algebraically closed fields only. Its significance stems from the fact that when we study deformations, the isoclinic $p$-divisible groups with $\mathcal{O}$-action deform uniquely (are rigid), and the deformations arise only from non-trivial extensions of one isoclinic subquotient by another one, of a higher slope. Over an artinian ring with residue field $k$ the slope decomposition is replaced by a \emph{slope filtration}. The study of the universal deformation space via these extensions led Moonen to introduce his \emph{cascade} structures, which are the main topic of \cite{Mo}. Finally, we remind the reader that by the Serre-Tate theorem, deformations of a tuple $\underline{A}\in M(k)$ correspond to deformations of $A[p^{\infty}]$ with its $\mathcal{O}_{K}$-structure and polarization.
Moonen's theory of cascades supplies therefore ``coordinates'' at a $\mu$-ordinary point $x\in M^\text{\rm ord}(k),$ reminiscent of the Serre-Tate coordinates at an ordinary point of the usual modular curve. \subsubsection{Duality} Finally, let us examine duality. Quite generally, the Cartier dual of $A[p^{\infty}]$ is $A^{t}[p^{\infty}].$ A $\mathbb{Z}_{(p)}^{\times}$-polarization $\lambda$ therefore makes $A[p^{\infty}]$ self-dual. In the presence of an $\mathcal{O}_{K}$-action as above, the duality induced by $\lambda$ sets $A[\mathfrak{P}^{\infty}]$ in duality with $A[\bar{\mathfrak{P}}^{\infty}].$ Let $\mathfrak{p}$ be the prime of $L$ underlying $\mathfrak{P}.$ We distinguish two cases. (a) If $\mathfrak{p}\mathcal{O}_{K}=\mathfrak{P\bar{P}}$ is split, there are no further restrictions on $A[\mathfrak{P}^{\infty}],$ but $A[\bar{\mathfrak{P}}^{\infty}]$ is completely determined by $A[\mathfrak{P}^{\infty}]$, being its dual group. (b) If $\mathfrak{p}\mathcal{O}_{K}=\mathfrak{P}$ is inert, $A[\mathfrak{P}^{\infty}]$ is self-dual. In this case let $m=[\kappa(\mathfrak{p}):\mathbb{F}_{p}]$ be the inertia degree of $\mathfrak{p},$ so that $[\kappa(\mathfrak{P}):\mathbb{F}_{p}]=2m$. Complex conjugation $\rho\in\mathrm{Gal}(K/L)$ fixes $\mathfrak{P},$ so induces an automorphism of $\kappa(\mathfrak{P})$, and for $\tau\in\mathscr{I}_{\mathfrak{P}}$ we have $\tau\circ\rho=\phi^{m}\circ\tau.$ Recall that we denoted $\tau$ by $i$ and $\phi^{m}\circ\tau$ by $i+m$. If $A$ is $\mu$-ordinary, the self-duality of $A[\mathfrak{P}^{\infty}]$ is manifested (\cite{Mo} 3.1.1, Moonen's $\varepsilon=+1$ in our case) in a \emph{perfect symmetric} $W(k)$-linear pairing \[ \varphi:M(d,\mathfrak{f})\times M(d,\mathfrak{f})\to W(k) \] such that \[ \varphi(Fx,y)=\varphi(x,Vy)^{\phi} \] \[ \varphi(ax,y)=\varphi(x,\bar{a}y) \] ($a\in\mathcal{O}_{K}$). This (or the relation $r_{\tau}+r_{\bar{\tau}}=d$) implies that $\mathfrak{f}(i)+\mathfrak{f}(i+m)=d$.
In fact, $M_{i}$ is orthogonal to $M_{i'}$ unless $i'=i+m$ and we can choose the basis $\{e_{i,j}\}$ in such a way that \[ \varphi(e_{i,j},e_{i+m,j'})=c_{i,j}\delta_{j',d+1-j} \] with some $c_{i,j}\in W(k)^{\times}$ ($\delta_{a,b}$ is Kronecker's delta). This means that the Dieudonn\'e modules $M(d,\mathfrak{f})^{j}$ and $M(d,\mathfrak{f})^{d+1-j}$ are dual under this pairing. See \cite{Mo} 3.2.3, case (AU). \subsection{The $V$-foliations on the $\mu$-ordinary locus} \subsubsection{Construction} Consider the universal abelian variety $A^\text{\rm univ}$ over $M,$ which we now denote for brevity $A,$ and its Verschiebung isogeny \[ \text{\rm Ver}:A^{(p)}=M\times_{\Phi_{M},M}A\to A. \] The relative de Rham cohomology of $A^{(p)},$ denoted $\mathcal{H}^{(p)}$, may be identified with $\Phi_{M}^{*}\mathcal{H}$, and its Hodge bundle $\underline{\omega}^{(p)}$ with $\Phi_{M}^{*}\underline{\omega}.$ Letting $a\in\mathcal{O}_{K}$ act on $A^{(p)}$ as $\iota^{(p)}(a)=1\times\iota(a)$ we get an induced action $\iota^{(p)}$ of $\mathcal{O}_{K}$ on $\mathcal{H}^{(p)}$ and on $\underline{\omega}^{(p)}.$ However, for $\tau\in\mathscr{I}$ \[ \mathcal{H}[\tau]^{(p)}:=\Phi_{M}^{*}(\mathcal{H}[\tau])=\mathcal{H}^{(p)}[\phi\circ\tau], \] because if $x\in\mathcal{H}[\tau]$ and $1\otimes x\in\mathcal{O}_{M}\otimes_{\phi,\mathcal{O}_{M}}\mathcal{H}[\tau]=\Phi_{M}^{*}(\mathcal{H}[\tau]),$ then \[ \iota^{(p)}(a)(1\otimes x)=1\otimes\tau(a)x=\tau(a)^{p}\otimes x=\phi\circ\tau(a)\cdot(1\otimes x). \] The isogeny $\text{\rm Ver}$ commutes with the endomorphisms, \[ \text{\rm Ver}\circ\iota^{(p)}(a)=\iota(a)\circ \text{\rm Ver}, \] and therefore induces a homomorphism of vector bundles \[ V:\mathcal{H}[\tau]\to\mathcal{H}^{(p)}[\tau]=\mathcal{H}[\phi^{-1}\circ\tau]^{(p)}, \] and similarly on $\underline{\omega}[\tau]=\mathcal{P}_{\tau}$ \[ V:\mathcal{P}_{\tau}\to(\mathcal{P}^{(p)})_{\tau}=(\mathcal{P}_{\phi^{-1}\circ\tau})^{(p)}.
\] We shall use the notation $\mathcal{P}_{\phi^{-1}\circ\tau}^{(p)}$ for the right hand side. At a $\mu$-ordinary geometric point $x\in M^\text{\rm ord}(k)$ we may identify the fiber $\mathcal{H}_{x}=H_{dR}^{1}(A_{x}/k)$ with the (contravariant) Dieudonn\'e module of $A_{x}[p],$ and the linear map $V:\mathcal{H}_{x}\to\mathcal{H}_{x}^{(p)}$ with the $\phi^{-1}$-semilinear endomorphism $V$ of the Dieudonn\'e module. Let $\tau=i\in\mathscr{I}_{\mathfrak{P}}.$ Recalling the shape of the Dieudonn\'e module $M(d,\mathfrak{f})$ of $A_{x}[\mathfrak{P}^{\infty}]$ we conclude that $\mathcal{P}_{\tau,x}[V]=\underline{\omega}_{x}[\tau][V]=0$ if $\mathfrak{f}(i-1)\ge\mathfrak{f}(i)$, and \[ \mathcal{P}_{\tau,x}[V]=\sum_{j=d-\mathfrak{f}(i)+1}^{d-\mathfrak{f}(i-1)}ke_{i,j} \] if $\mathfrak{f}(i-1)<\mathfrak{f}(i)$. We recall that $\mathfrak{f}(i)=r_{\tau}$ and $\mathfrak{f}(i-1)=r_{\phi^{-1}\circ\tau}.$ The following is the main definition of the second part of our paper. \begin{defn} Let $\Sigma\subset\mathscr{I}^{+}.$ Let\footnote{The reader might have noticed a twist in our notation. While the foliations denoted $\mathscr{F}_{\Sigma}$ on Hilbert modular varieties, in the first part of our paper, grow with $\Sigma$, our current $\mathscr{F}_{\Sigma}$ become smaller when $\Sigma$ grows. This could be solved by labelling our $\mathscr{F}_{\Sigma}$ by the \emph{complement} of $\Sigma$, but as the two types of foliations are distinct and of a different nature, we did not find it necessary to reconcile the two conventions.} $\mathscr{F}_{\Sigma}\subset\mathcal{T}$ be the subsheaf on $M^\text{\rm ord}$ which is the \emph{annihilator}, under the pairing between $\mathcal{T}$ and $\Omega_{M}^{1},$ of \[ \text{\rm KS}\left(\sum_{\{\tau,\bar{\tau}\}\in\Sigma}(\mathcal{P}_{\tau}\otimes\mathcal{P}_{\bar{\tau}})[V\otimes V]\right). \] \end{defn} Our first goal is to prove that $\mathscr{F}_{\Sigma}$ is a $p$-foliation.
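The fiberwise computation of $\ker V$ on $\underline{\omega}[\tau]$ above is purely combinatorial: on the reduced standard module $N(d,\mathfrak{f})$, $V$ sends each basis vector either to another basis vector or to zero, so its kernel on the $i$-isotypic part of the Hodge bundle is spanned by the basis vectors it kills. The following sketch (our own; the names are hypothetical, and indices in $\mathscr{I}_{\mathfrak{P}}$ are taken cyclically) checks the closed form $\max\{0,\mathfrak{f}(i)-\mathfrak{f}(i-1)\}$ on a toy example.

```python
def ker_V_dim_on_omega(d, f, i):
    """dim of ker(V) on the i-isotypic part of the Hodge bundle omega.

    In the reduced standard module N(d,f), V sends e_{i,j} to e_{i-1,j}
    when j > d - f(i-1) and to 0 otherwise; omega[i] is spanned by the
    e_{i,j} with j > d - f(i).  Since V is 'monomial' on this basis, its
    kernel on omega[i] is spanned by the basis vectors it kills.
    """
    m = len(f)
    fi, fi1 = f[i], f[(i - 1) % m]          # f(i) and f(i-1), cyclic index
    omega_i = [j for j in range(1, d + 1) if j > d - fi]
    killed = [j for j in omega_i if j <= d - fi1]
    return len(killed)

# Agreement with the closed form max(0, f(i) - f(i-1)):
d, f = 3, [1, 3, 0]
for i in range(len(f)):
    assert ker_V_dim_on_omega(d, f, i) == max(0, f[i] - f[(i - 1) % len(f)])
```

In particular the kernel vanishes exactly when $\mathfrak{f}(i-1)\ge\mathfrak{f}(i)$, i.e. $r_{\phi^{-1}\circ\tau}\ge r_{\tau}$, as stated above.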
Note that at every $x\in M(k)$ \begin{equation} \mathcal{P}_{\tau,x}\otimes\mathcal{P}_{\bar{\tau},x}[V\otimes V]=\mathcal{P}_{\tau,x}[V]\otimes\mathcal{P}_{\bar{\tau},x}+\mathcal{P}_{\tau,x}\otimes\mathcal{P}_{\bar{\tau},x}[V].\label{eq:ker_V_times_V} \end{equation} By the discussion above, if $x\in M^\text{\rm ord}(k),$ the first term is a subspace whose dimension is \[ \max\{0,r_{\tau}-r_{\phi^{-1}\circ\tau}\}\cdot(d-r_{\tau}) \] and the second is of dimension $r_{\tau}\cdot\max\{0,r_{\phi^{-1}\circ\tau}-r_{\tau}\}.$ Here we used the relations $\overline{\phi^{-1}\circ\tau}=\phi^{-1}\circ\tau\circ\rho=\phi^{-1}\circ\bar{\tau}$ and $r_{\bar{\tau}}=d-r_{\tau}.$ At most one of the terms is non-zero, and both are zero if and only if either $r_{\tau}=r_{\phi^{-1}\circ\tau}$, $r_{\tau}=0$, or $r_{\tau}=d.$ In particular, \[ \dim_{k}\mathcal{P}_{\tau,x}\otimes\mathcal{P}_{\bar{\tau},x}[V\otimes V] \] is the same for all $x\in M^\text{\rm ord}(k).$ \begin{lem} \label{lem:Foliation is smooth}For $\{\tau,\bar{\tau}\}\in\mathscr{I}^{+}$ let \[ r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}=\max\{0,r_{\tau}-r_{\phi^{-1}\circ\tau}\}\cdot(d-r_{\tau})+r_{\tau}\cdot\max\{0,r_{\phi^{-1}\circ\tau}-r_{\tau}\} \] (this quantity is symmetric in $\tau$ and $\bar{\tau}).$ Over $M^\text{\rm ord}$, the subsheaf $\mathscr{F}_{\Sigma}$ is a vector sub-bundle of $\mathcal{T}$ of corank $r_{V}(\Sigma)=\sum_{\{\tau,\bar{\tau}\}\in\Sigma}r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}.$ Its rank is given by the formula \[ \mathrm{rk}(\mathscr{F}_{\Sigma})=\sum_{\{\tau,\bar{\tau}\}\notin\Sigma}r_{\tau}r_{\bar{\tau}}+\sum_{\{\tau,\bar{\tau}\}\in\Sigma}\min\{r_{\tau},r_{\phi^{-1}\circ\tau}\}\cdot\min\{r_{\bar{\tau}},r_{\phi^{-1}\circ\bar{\tau}}\}.
\] \end{lem} \begin{proof} The sheaf $\mathscr{G}_{\tau}=(\mathcal{P}_{\tau}\otimes\mathcal{P}_{\bar{\tau}})[V\otimes V]$ is a subsheaf of $\mathcal{P}_{\tau}\otimes\mathcal{P}_{\bar{\tau}}$, and $\mathscr{F}_{\Sigma},$ the annihilator of $\text{\rm KS}(\sum_{\{\tau,\bar{\tau}\}\in\Sigma}\mathscr{G}_{\tau})$ in $\mathcal{T}$, is \emph{saturated}. This is because $\mathcal{O}_{M}$ has no zero divisors: if $f\in\mathcal{O}_{M}$, $\xi\in\mathcal{T}$ and $f\xi\in\mathscr{F}_{\Sigma}$, then for every $\omega\in \text{\rm KS}(\sum_{\{\tau,\bar{\tau}\}\in\Sigma}\mathscr{G}_{\tau})$ we have $f\left\langle \omega,\xi\right\rangle =\left\langle \omega,f\xi\right\rangle =0$, so $\left\langle \omega,\xi\right\rangle =0.$ Quite generally, if $M$ is a variety over a field $k$, $\mathcal{F}$ and $\mathcal{G}$ are locally free sheaves of rank $n$, and $T\in\mathrm{Hom}_{\mathcal{O}_{M}}(\mathcal{F},\mathcal{G})$ is such that the fiber rank $\mathrm{rk}(T_{x})=m$ is constant on $M$, then $\ker(T)$ is locally a direct summand (i.e. a vector sub-bundle) of rank $n-m$. Compare \cite{Mu}, II.5 Lemma 1, p.51. It is in fact enough to verify the constancy of $\mathrm{rk}(T_{x})$ at closed points, because every other point $y\in M$ contains closed points in its closure, the fiber rank can only drop under specialization, and the locus where it exceeds $m$ is open, hence would contain a closed point if it were non-empty. Applying this with $\mathcal{F}=\mathcal{P}_{\tau}\otimes\mathcal{P}_{\bar{\tau}}$, $\mathcal{G}=\mathcal{F}^{(p)}$ and $T=V\otimes V$ we deduce that $\mathscr{G}_{\tau}$, hence also $\mathscr{F}_{\Sigma}$, are vector sub-bundles. The rank of $\mathscr{F}_{\Sigma}$ is \[ \sum_{\{\tau,\bar{\tau}\}\in\mathscr{I}^{+}}r_{\tau}r_{\bar{\tau}}-\sum_{\{\tau,\bar{\tau}\}\in\Sigma}r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}.
\] For $\{\tau,\bar{\tau}\}\in\Sigma$ we first assume that $r_{\phi^{-1}\circ\tau}\le r_{\tau}.$ We then have \[ r_{\tau}r_{\bar{\tau}}-r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}=r_{\tau}r_{\bar{\tau}}-(r_{\tau}-r_{\phi^{-1}\circ\tau})r_{\bar{\tau}}=r_{\phi^{-1}\circ\tau}r_{\bar{\tau}}. \] But by our assumption $r_{\phi^{-1}\circ\tau}=\min\{r_{\tau},r_{\phi^{-1}\circ\tau}\}$ and $r_{\bar{\tau}}=\min\{r_{\bar{\tau}},r_{\phi^{-1}\circ\bar{\tau}}\}.$ The case $r_{\phi^{-1}\circ\tau}\ge r_{\tau}$ is treated similarly. \end{proof} \subsubsection{Closure under Lie brackets and $p$-power} \begin{lem} The vector bundle $\mathscr{F}_{\Sigma}$ is involutive: if $\xi,\eta$ are sections of $\mathscr{F}_{\Sigma},$ so is $[\xi,\eta].$ \end{lem} \begin{proof} The proof is essentially the same as the proof of Proposition 3 in \cite{G-dS1}. For $\alpha\in\mathcal{P}_{\tau}$ and $\beta\in\mathcal{P}_{\bar{\tau}}$ we have the formula \[ \left\langle \text{\rm KS}(\alpha\otimes\beta),\xi\right\rangle =\{\nabla_{\xi}(\alpha),\beta\}_{dR}\in\mathcal{O}_{M} \] (loc. cit. Lemma 4). Thus, $\xi\in\mathscr{F}_{\Sigma}$ if and only if for every $\tau$ such that $\{\tau,\bar{\tau}\}\in\Sigma$ we have \[ \nabla_{\xi}(\mathcal{P}_{\tau}[V])\perp\mathcal{P}_{\bar{\tau}} \] under the pairing $\{,\}_{dR}:\mathcal{H}[\tau]\times\mathcal{H}[\bar{\tau}]\to\mathcal{O}_{M}.$ But the left annihilator of $\mathcal{P}_{\bar{\tau}}$ is $\mathcal{P}_{\tau}.$ The Gauss-Manin connection commutes with isogenies, so in particular carries $\mathcal{H}[V]$ to itself. It follows that $\xi\in\mathscr{F}_{\Sigma}$ if and only if \begin{equation} \nabla_{\xi}(\mathcal{P}_{\tau}[V])\subset\mathcal{P}_{\tau}[V]\label{eq:criterion} \end{equation} for any $\tau$ such that $\{\tau,\bar{\tau}\}\in\Sigma$. The Gauss-Manin connection is well-known to be flat, i.e. 
\[ \nabla_{[\xi,\eta]}=\nabla_{\xi}\circ\nabla_{\eta}-\nabla_{\eta}\circ\nabla_{\xi}, \] so if both $\xi$ and $\eta$ lie in $\mathscr{F}_{\Sigma},$ $(\ref{eq:criterion})$ implies that so does $[\xi,\eta].$ \end{proof} \begin{lem} The vector bundle $\mathscr{F}_{\Sigma}$ is $p$-closed: if $\xi$ is a section of $\mathscr{F}_{\Sigma}$, so is $\xi^{p}$. \end{lem} \begin{proof} Again we follow the proof of Proposition 3 in \cite{G-dS1}. The $p$-curvature \[ \psi(\xi)=\nabla_{\xi}^{p}-\nabla_{\xi^{p}} \] does not vanish identically, but is only a nilpotent endomorphism of $\mathcal{H}$ (\cite{Ka-Tur}, 5). However, denoting by $\mathcal{H}^{(p)}=\Phi_{M}^{*}\mathcal{H}=H_{dR}^{1}(A^{(p)}/M)$ $(A=A^\text{\rm univ})$ the relative de Rham cohomology of $A^{(p)},$ and by $F$ the map induced by the relative Frobenius $\text{\rm Fr}:A\to A^{(p)}$ on cohomology, we have \[ \mathcal{H}[V]=F^{*}\mathcal{H}^{(p)}. \] Furthermore, since the Gauss-Manin connection commutes with any isogeny, the following diagram commutes \begin{equation} \label{diagram1} \xymatrix@=1cm{ \mathcal{H}^{(p)} \ar[r]^{F^\ast} \ar[d]_{\nabla^{(p)} = \nabla_{\text{\rm can}}} &\mathcal{H}\ar[d]^{\nabla} \\ \mathcal{H}^{(p)} \otimes \Omega_M \ar[r]^{F^{*}\otimes1} & \mathcal{H} \otimes \Omega_M. } \end{equation} Here $\nabla^{(p)},$ the Gauss-Manin connection of $A^{(p)}$, turns out to be the canonical connection $\nabla_{can}$ that exists on \emph{any} vector bundle of the form $\Phi_{M}^{*}\mathcal{H},$ namely if $f\otimes\alpha\in\mathcal{O}_{M}\otimes_{\phi,\mathcal{O}_{M}}\mathcal{H},$ \[ \nabla_{can}(f\otimes\alpha)=df\otimes\alpha \] (that this is well-defined follows from the rule $d(g^{p}f)=g^{p}df$). The commutativity of $(\ref{diagram1})$ implies that the restriction of $\nabla$, the Gauss-Manin connection of $A$, to $F^{*}\mathcal{H}^{(p)}=\mathcal{H}[V]$ is the connection denoted $\nabla_{can}$ in \cite{Ka-Tur}. It follows from Cartier's theorem (loc. cit. 
Theorem 5.1) that $\psi(\xi)$ vanishes when restricted to $\mathcal{H}[V]$. We conclude the proof as in the previous lemma, using the criterion $(\ref{eq:criterion})$ for $\xi$ to lie in $\mathscr{F}_{\Sigma}$. \end{proof} Altogether we proved the following. \begin{thm} The sheaf $\mathscr{F}_{\Sigma}$ is a smooth $p$-foliation on $M^\text{\rm ord}$. \end{thm} We stress the difference between the tautological foliations on Hilbert modular varieties, which were $p$-closed only if $\Sigma$ was invariant under the action of $\phi,$ and the $V$-foliations on unitary Shimura varieties that are always $p$-closed. The reason lies in the last Lemma, in the delicate relation between the $p$-curvature of the Gauss-Manin connection and the kernel of Verschiebung. \subsection{Relation with Moonen's cascade structure} \subsubsection{Moonen's cascade structure} In \cite{Mo} Moonen generalized the notion of Serre-Tate coordinates to any Shimura variety of PEL type in characteristic $p$. Here we recall his results in the case of our unitary Shimura variety $M$. Fix a $\mu$-ordinary geometric point $x:\text{\rm Spec}(k)\to M^\text{\rm ord}$. Let $W=W(k)$ be the ring of Witt vectors over $k$, and consider the category $\mathbf{C}_{W}$ of local artinian $W$-algebras with residue field $k$, morphisms being local homomorphisms inducing the identity on $k$. Let $\mathbf{FS}_{W}$ be the category of affine formal schemes $\mathfrak{X}$ over $W$ with the property that $\Gamma(\mathfrak{X},\mathcal{O}_{\mathfrak{X}})$ is a profinite $W$-algebra (the last condition regarded as a ``smallness'' condition). By a theorem of Grothendieck, associating to $\mathfrak{X}\in\mathbf{FS}_{W}$ the functor of points $R\mapsto\mathfrak{X}(R)$ ($R\in\mathbf{C}_{W})$ identifies $\mathbf{FS}_{W}$ with the category of left-exact functors from $\mathbf{C}_{W}$ to sets. Equip $\mathbf{FS}_{W}$ with the flat topology and let $\mathfrak{T}=\mathbf{\widehat{FS}}_{W}$ be the topos of sheaves of sets on it. 
Since the flat topology is coarser than the canonical topology, $\mathbf{FS}_{W}$ embeds in $\mathbf{\widehat{FS}}_{W}$ (Yoneda's lemma). In particular, we may consider ${\rm Spf}(\widehat{\mathcal{O}}_{\mathscr{M},x})$ as a sheaf of sets on $\mathbf{FS}_{W}$. Let $\mathbb{D}=\mathrm{Def}(\underline{X}')$ be the universal deformation space of the pair \[ \underline{X}'=(A_{x}^\text{\rm univ}[p^{\infty}],\iota_{x}^\text{\rm univ}). \] For every $\mathfrak{X}\in\mathbf{FS}_{W}$ the deformations of $\underline{X}'$ over $\mathfrak{X}$ make up the set $\mathbb{D}(\mathfrak{X})$, and $\mathbb{D}\in\mathbf{\widehat{FS}}_{W}.$ In fact, it is representable by a formal scheme in $\mathbf{FS}_{W}$. If $\underline{X}'^{,D}$ is the Cartier dual of $\underline{X}'$, with the $\mathcal{O}_{K}$-action \[ \iota^{D}(a):=\iota(\bar{a})^{D}, \] then $\mathrm{Def}(\underline{X}'^{,D})=\mathbb{D}$ as well, since any deformation of $\underline{X}'$ yields a deformation of $\underline{X}'^{,D}$ and vice versa. (The cascade structures defined below will be dual, though.) The polarization $\lambda_{x}^\text{\rm univ}:X\simeq X^{D}$ (intertwining the actions $\iota$ and $\iota^{D}$) induces an automorphism \[ \gamma:\mathbb{D}=\mathrm{Def}(\underline{X}')\simeq\mathrm{Def}(\underline{X}'^{,D})=\mathbb{D}, \] and the subsheaf $\mathbb{D}^{\lambda}=\{x\in\mathbb{D}|\,\gamma(x)=x\}$ of symmetric elements in $\mathbb{D}$ is the universal deformation space of \[ \underline{X}=(A_{x}^\text{\rm univ}[p^{\infty}],\iota_{x}^\text{\rm univ},\lambda_{x}^\text{\rm univ}). \] See \cite{Mo}, 3.3.1. Like $\mathbb{D}$, its sub-functor $\mathbb{D}^{\lambda}$ belongs to $\mathbf{\widehat{FS}}_{W}$ and is representable. Indeed, $\mathbb{D}^{\lambda}$ is represented by the formal scheme ${\rm Spf}(\widehat{\mathcal{O}}_{\mathscr{M},x})$, and its characteristic $p$ fiber by ${\rm Spf}(\widehat{\mathcal{O}}_{M,x})$. \bigskip{} We refer to \cite{Mo}, 2.2.1, for the precise definition of an $r$\emph{-cascade} in a topos $\mathfrak{T}$.
A $1$-cascade is a point, a $2$-cascade is a sheaf of commutative groups, and a $3$-cascade is a biextension. The general structure of an $r$-cascade in $\mathfrak{T}$ generalizes these notions (see below). We have a decomposition $\underline{X} = \bigtimes_{\mathfrak{p}} \underline{X}[\mathfrak{p}^\infty]$, where $\mathfrak{p}$ runs over primes of $L$ dividing~$p$, and a corresponding decomposition $\mathbb{D} = \bigtimes_\mathfrak{p} \mathbb{D}_\mathfrak{p}$. The products are understood as fibered products over ${\rm Spec}(W)$. In \cite{Mo}, 2.3.6, Moonen defines a structure of an $r$-cascade on each $\mathbb{D}_\mathfrak{p}$, where $r = r(\mathfrak{p})$ is the number of slopes of $X[\mathfrak{p}^\infty]$ (see \ref{subsec:Slope-decomposition}). One might refer to $\mathbb{D}$ as a multi-cascade. Thus ${\rm Spf}(\widehat{\mathcal{O}}_{\mathscr{M},x})$ (or rather, the sheaf that it represents) is endowed with the structure of symmetric elements in the self-dual multi-cascade $\mathbb{D}.$ \subsubsection{Previous results} When $K$ is quadratic imaginary and $p$ is inert we showed in \cite{G-dS1}, Theorem 13, that the foliation we have constructed (denoted there $\mathcal{T}S^{+}$, and here $\mathscr{F}_{\Sigma}$) is compatible with Moonen's cascade structure on $M^\text{\rm ord}$ in the following sense. Let $(n,m)$ be the signature, $n> m$, so that $\mathrm{rk}(\mathscr{F}_{\Sigma})=m^{2}.$ Let $x\in M^\text{\rm ord}(k).$ The number of slopes of $A_{x}^\text{\rm univ}[p^{\infty}]$ turns out to be three, and ${\rm Spf}(\widehat{\mathcal{O}}_{M,x})$ acquires from the cascade structure a structure of a $\widehat{\mathbb{G}}_{m}^{m^{2}}$-torsor. The formal torus $\widehat{\mathbb{G}}_{m}^{m^{2}}$ gives rise to an $m^{2}$-dimensional subspace of the $mn$-dimensional tangent space at~$x$, and in loc. cit. we proved that this subspace coincided with the foliation. 
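The count $\mathrm{rk}(\mathscr{F}_{\Sigma})=m^{2}$ can be checked against the general rank formula of Lemma \ref{lem:Foliation is smooth}. The sketch below (our own; function and variable names are hypothetical) encodes that formula and specializes it to a single pair $\{\tau,\bar{\tau}\}$ with $r_{\tau}=n$, $r_{\bar{\tau}}=m$ and $r_{\phi^{-1}\circ\tau}=m$, as in the inert quadratic imaginary case where $\phi$ interchanges $\tau$ and $\bar{\tau}$.

```python
def rank_F_sigma(pairs, in_sigma):
    """Rank of F_Sigma from the formula in the lemma on smoothness.

    pairs: list of tuples (r_tau, r_taubar, r_prev_tau, r_prev_taubar),
    one per pair {tau, taubar} in I^+, where r_prev_* = r_{phi^{-1} o *}.
    in_sigma: parallel list of booleans saying whether the pair lies in
    Sigma.  Pairs in Sigma contribute min(r_tau, r_prev_tau) *
    min(r_taubar, r_prev_taubar); pairs outside contribute r_tau * r_taubar.
    """
    total = 0
    for (rt, rb, rpt, rpb), s in zip(pairs, in_sigma):
        total += min(rt, rpt) * min(rb, rpb) if s else rt * rb
    return total

# K quadratic imaginary, p inert, signature (n, m) with n > m: one pair,
# r_tau = n, r_taubar = m, and phi^{-1} swaps tau and taubar, so
# r_{phi^{-1} tau} = m.  The formula gives m^2, as recalled above.
n, m = 3, 2
assert rank_F_sigma([(n, m, m, n)], [True]) == m * m
# With Sigma empty the rank is that of the full tangent bundle, n*m:
assert rank_F_sigma([(n, m, m, n)], [False]) == n * m
```

The corank is then $r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}=(n-m)m$, consistent with the tangent space having dimension $mn$.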
We find that while the cascade structure ``lives'' only on the formal neighborhood of $x$, and does not globalize, its ``trace'' on the tangent space globalizes to the foliation that we constructed in the tangent bundle of $M^\text{\rm ord}$. \subsubsection{The general case: subspaces of the tangent bundle defined by the cascade} We shall now describe how this result generalizes to the setting of our paper. For simplicity let us assume that~$p$ is inert in $L,$ $p\mathcal{O}_{L}=\mathfrak{p}$, $\mathbb{D} = \mathbb{D}_\mathfrak{p}$. Both the foliation and the cascade structure break up as products of corresponding structures over the primes~$\mathfrak{p}$ of $L$ dividing $p,$ so with a little more book-keeping the general case follows the same pattern as the inert case. We shall also assume that $\mathfrak{p}$ splits in $K,$ and write as before $\mathfrak{p}\mathcal{O}_{K}=\mathfrak{P}\bar{\mathfrak{P}}.$ The other case ($\mathfrak{p}$ inert in $K$) is a little more complicated, but can be handled in a similar way. The assumption that $\mathfrak{p}$ splits in $K$ allows us to concentrate on the deformation space of $\underline{X}=A_{x}^\text{\rm univ}[\mathfrak{P}^{\infty}],$ as a $p$-divisible group with $\mathcal{O}_{K}$-action, and ignore the polarization. 
The resulting deformation space is just an $r$-cascade $\mathbb{D}_{\mathfrak{P}}.$ Then, $\mathbb{D}=\mathbb{D}_{\mathfrak{P}}\times\mathbb{D}_{\mathfrak{P}}^{\vee},$ and the polarization induces an isomorphism $\mathbb{D}_{\mathfrak{P}}\simeq\mathbb{D}_{\mathfrak{P}}^{\vee}=\mathbb{D}_{\bar{\mathfrak{P}}}$ that we denote $x\mapsto\gamma(x)$; $\mathbb{D}^{\lambda}$ is the set of pairs $(x,\gamma(x))\in\mathbb{D}$, and is therefore isomorphic to $\mathbb{D}_{\mathfrak{P}}.$ Because of the relation $\mathscr{F}_{\Sigma_{1}\cup\Sigma_{2}}=\mathscr{F}_{\Sigma_{1}}\cap\mathscr{F}_{\Sigma_{2}}$ it is enough to determine the relation of $\mathscr{F}_{\{\tau,\bar{\tau}\}}$ to the cascade structure, and the general case will follow from it. As we have seen in $(\ref{eq:KS-1}),$ the Kodaira-Spencer map yields an isomorphism \[\xymatrix@C=0.6cm{ \text{\rm KS}^{\vee}\colon \mathcal{T}\ar[r]^>>>>>\sim&\underset{\{\sigma,\bar{\sigma}\}\in\mathscr{I}^{+}}{\bigoplus}\mathcal{P}_{\sigma}^{\vee}\otimes\mathcal{P}_{\bar{\sigma}}^{\vee},} \] and we may write \[ \mathcal{T}_{\{\sigma,\bar{\sigma}\}}=(\text{\rm KS}^{\vee})^{-1}(\mathcal{P}_{\sigma}^{\vee}\otimes\mathcal{P}_{\bar{\sigma}}^{\vee}). \] We then have \begin{equation} \mathscr{F}_{\{\tau,\bar{\tau}\}}=\bigoplus_{\{\sigma,\bar{\sigma}\}\ne\{\tau,\bar{\tau}\}}\mathcal{T}_{\{\sigma,\bar{\sigma}\}}\oplus\mathscr{E}_{\{\tau,\bar{\tau}\}}\label{eq:F_=00007B=00005Ctau,=00005Ctaubar=00007D} \end{equation} where $\mathscr{E}_{\{\tau,\bar{\tau}\}}\subset\mathcal{T}_{\{\tau,\bar{\tau}\}}$ is the annihilator of $\text{\rm KS}(\mathcal{P}_{\tau}\otimes\mathcal{P}_{\bar{\tau}}[V\otimes V])$. If we choose the labelling so that $r_{\phi^{-1}\circ\tau}\le r_{\tau}$ then over $M^\text{\rm ord}$ we have $\mathcal{P}_{\tau}\otimes\mathcal{P}_{\bar{\tau}}[V\otimes V]=\mathcal{P}_{\tau}[V]\otimes\mathcal{P}_{\bar{\tau}}$ and \[ \mathrm{rk}\;\mathscr{E}_{\{\tau,\bar{\tau}\}}=r_{\phi^{-1}\tau}\cdot(d-r_{\tau}).
\] Although our $p$-foliation is $\mathscr{F}_{\{\tau,\bar{\tau}\}}$ and not $\mathscr{E}_{\{\tau,\bar{\tau}\}}$ (the latter is an involutive sub-bundle but need not be $p$-closed!), for our purpose it will be enough to relate $\mathscr{E}_{\{\tau,\bar{\tau}\}}$ to the cascade structure. Choose the notation $\mathfrak{p}\mathcal{O}_{K}=\mathfrak{P}\bar{\mathfrak{P}}$ so that $\tau\in\mathscr{I}_{\mathfrak{P}}$ and $\bar{\tau}\in\mathscr{I}_{\bar{\mathfrak{P}}}$. As in \ref{subsec:The-mu-ordinary-locus} let $X(d,\mathfrak{f})$ (now for $\mathfrak{f}:\mathscr{I}_{\mathfrak{P}}\to[0,d]$) be the standard $p$-divisible group with $\mathcal{O}_{K}$-structure over $k$ whose Dieudonn\'e module is $M(d,\mathfrak{f})$. As before, $\mathfrak{f}(i)=r_{\sigma}$ if $i=\sigma\in\mathscr{I}_{\mathfrak{P}}.$ Let $r$ be the number of distinct slopes of $X(d,\mathfrak{f})$ (not to be confused with the CM type $\{r_{\sigma}\}$). Fix $x\in M^\text{\rm ord}(k)$. We recall some notation related to the cascade structure at $x.$ There is a canonical $r$-cascade structure \[ \mathscr{C}=\{\Gamma^{(a,b)},G^{(a,b)}|\,1\le a<b\le r\} \] on the deformation space \[ {\rm Spf}(\widehat{\mathcal{O}}_{M,x})=\mathbb{D}_{\mathfrak{P}}. \] The $\Gamma^{(a,b)}$ are formal schemes supported at $x,$ and $\Gamma^{(1,r)}=\mathbb{D}_{\mathfrak{P}}.$ They are equipped with (left and right) morphisms \[ \lambda:\Gamma^{(a,b)}\to\Gamma^{(a,b-1)},\,\,\,\rho:\Gamma^{(a,b)}\to\Gamma^{(a+1,b)}, \] satisfying $\lambda\circ\rho=\rho\circ\lambda$ (where applicable). Each $\Gamma^{(a,b)}$ is endowed with a structure of a relative bi-extension of \[ \Gamma^{(a,b-1)}\times_{\Gamma^{(a+1,b-1)}}\Gamma^{(a+1,b)} \] by a formal group that we denote by $G^{(a,b)}$ (in the category of formal schemes \emph{over} $\Gamma^{(a+1,b-1)}$). See the following diagram. \[ \xymatrix@C=1.7cm{& \Gamma^{(a, b)}\ar[ddl]_\lambda\ar[ddr]^\rho\ar@{..>}[d]_{G^{(a, b)}} & \\ & { \Gamma^{(a, b-1)}\!\!\!\!\!
\!\!\!\underset{{\Gamma^{(a+1, b-1)}}}{\times}\!\!\!\! \!\!\!\!\Gamma^{(a+1, b)}}\ar@{..>}[dr]\ar@{..>}[dl]& \\ \Gamma^{(a, b-1)}\ar[dr]^\rho & &\Gamma^{(a+1, b)}\ar[dl]_\lambda\\ & \Gamma^{(a+1, b-1)}& }\] In fact, $G^{(a,b)}={\rm Ext}(X^{(a)},X^{(b)})$ where $X^{(\nu)}$ is the $\nu$-th isoclinic component of $A_{x}^\text{\rm univ}[\mathfrak{P}^{\infty}]\simeq X(d,\mathfrak{f})$. This identification should be interpreted as an identity between fppf sheaves of $\mathcal{O}_{K}$-modules; each $X^{(\nu)},$ with its $\mathcal{O}_{K}$-structure, is rigid, so admits a unique canonical lifting $X_{R}^{(\nu)}$ to any local artinian ring $R$ with residue field $k$, and \[ G^{(a,b)}(R)={\rm Ext}_{R}(X_{R}^{(a)},X_{R}^{(b)}). \] Let \[ \mathcal{T}_{x}^{(a,b)}=\ker d(\lambda,\rho) \] where $d(\lambda,\rho)$ is the differential of the map \[ (\lambda,\rho):\Gamma^{(a,b)}\to\Gamma^{(a,b-1)}\times_{\Gamma^{(a+1,b-1)}}\Gamma^{(a+1,b)}. \] This is the subspace of the tangent space of $\Gamma^{(a,b)}$ ``in the direction'' of $G^{(a,b)}.$ Thus, \[ \mathcal{T}_{x}^{(a,b)}={\rm Ext}_{k[\varepsilon]}(X_{k[\varepsilon]}^{(a)},X_{k[\varepsilon]}^{(b)}) \] is the $k$-vector space of all the extensions, over the ring of dual numbers $k[\varepsilon]$, of the formal $\mathcal{O}_{K}$-module $X^{(a)}$ by the formal $\mathcal{O}_{K}$-module $X^{(b)}.$ Using these spaces we define subspaces \[ \mathcal{T}_{x}^{[a,b]}\subset\mathcal{T}_{x} \] by a descending induction on $b-a$ for $1\le a<b\le r$. First, $\mathcal{T}_{x}^{[1,r]}=\mathcal{T}_{x}^{(1,r)}.$ Suppose $\mathcal{T}_{x}^{[a,b]}$ have been defined when $b-a>s$, let $b-a=s$ and consider $U=\mathcal{T}_{x}/\sum_{[a,b]\subsetneq[a',b']}\mathcal{T}_{x}^{[a',b']}.$ Then $\mathcal{T}_{x}^{(a,b)}$ is a subspace of $U$ and we let $\mathcal{T}_{x}^{[a,b]}$ be its preimage in $\mathcal{T}_{x}$, so that \[ \mathcal{T}_{x}^{(a,b)}=\mathcal{T}_{x}^{[a,b]}/\sum_{[a,b]\subsetneq[a',b']}\mathcal{T}_{x}^{[a',b']}. 
\] For example, if $r=3$ then ${\rm Spf}(\widehat{\mathcal{O}}_{M,x})=\Gamma^{(1,3)}$ has the structure of a bi-extension of $\Gamma^{(1,2)}\times\Gamma^{(2,3)}$ by $G^{(1,3)},$ $\mathcal{T}_{x}^{[1,3]}$ is the tangent space to $G^{(1,3)},$ $\mathcal{T}_{x}^{[1,2]}/\mathcal{T}_{x}^{[1,3]}=\mathcal{T}_{x}^{(1,2)}$ is the tangent space to $\Gamma^{(1,2)}$ and, likewise, $\mathcal{T}_{x}^{[2,3]}/\mathcal{T}_{x}^{[1,3]}=\mathcal{T}_{x}^{(2,3)}$ is the tangent space to $\Gamma^{(2,3)}$ (all tangent spaces are at the origin). In general, we have defined a collection of subspaces of $\mathcal{T}_{x}$ indexed by closed intervals $[a,b]$ with $1\le a<b\le r,$ so that $\mathcal{T}_{x}^{I}\supset\mathcal{T}_{x}^{J}$ whenever $I\subset J.$ The filtration of $\mathcal{T}_{x}$ by the $\mathcal{T}_{x}^{[a,b]}$ is not linearly ordered, but its graded pieces are the $\mathcal{T}_{x}^{(a,b)},$ the tangent spaces to the $G^{(a,b)}.$ \subsubsection{The relation between the cascade and $\mathscr{E}_{\{\tau,\bar{\tau}\}}$} Depending on our $\tau$, we define two integers $0\le p_{\tau}\le q_{\tau}\le r.$ Recall that the $\nu$-th isoclinic group $X^{(\nu)}$ is of the form \[ X^{(\nu)}=X(d^{\nu},\mathfrak{f}^{(\nu)})=X(1,\mathfrak{g}^{(\nu)})^{d^{\nu}} \] with $\mathfrak{g}^{(\nu)}:\mathscr{I}_{\mathfrak{P}}\to\{0,1\},$ $\mathfrak{f}^{(\nu)}=d^{\nu}\mathfrak{g}^{(\nu)},$ and $\mathfrak{g}^{(\nu+1)}\ge\mathfrak{g}^{(\nu)}.$ If $\tau=i\in\mathscr{I}_{\mathfrak{P}}$ we let \[ p_{\tau}=\#\{\nu|\,\mathfrak{g}^{(\nu)}(i)=0\},\,\,\,q_{\tau}=p_{\phi^{-1}\circ\tau}=\#\{\nu|\,\mathfrak{g}^{(\nu)}(i-1)=0\}, \] so that $\mathfrak{f}(i)=\sum_{\nu=p_{\tau}+1}^{r}d^{\nu}$ and similarly $\mathfrak{f}(i-1)=\sum_{\nu=q_{\tau}+1}^{r}d^{\nu}$. \begin{prop} The fiber of $\mathscr{E}_{\{\tau,\bar{\tau}\}}$ at $x$ coincides with the subspace $\mathcal{T}_{x}^{[p_{\tau},q_{\tau}+1]}\cap\mathcal{T}_{\{\tau,\bar{\tau}\},x}$. If $p_{\tau}=0$ or $q_{\tau}=r$ this is 0. 
\end{prop} We can summarize the situation as follows. The Kodaira-Spencer isomorphism induces a ``vertical'' decomposition of the tangent bundle into the direct sum of the $\mathcal{T}_{\{\sigma,\bar{\sigma}\}}$. The foliation $\mathscr{F}_{\{\tau,\bar{\tau}\}}$ is ``vertical'' in the sense that it combines the subspace $\mathscr{E}_{\{\tau,\bar{\tau}\}}\subset\mathcal{T}_{\{\tau,\bar{\tau}\}}$ with the full $\mathcal{T}_{\{\sigma,\bar{\sigma}\}}$ for all $\{\sigma,\bar{\sigma}\}\ne\{\tau,\bar{\tau}\}.$ The cascade structure, on the other hand, results from the slope decomposition of $X(d,\mathfrak{f}),$ and induces a ``horizontal'' filtration by the $\mathcal{T}^{[a,b]}$ on the tangent bundle. The proposition describes the way this vertical decomposition and this horizontal filtration interact. \begin{proof} We first compute the dimension of $\mathcal{T}_{x}^{[p_{\tau},q_{\tau}+1]}\cap\mathcal{T}_{\{\tau,\bar{\tau}\},x}$. The graded pieces of $\mathcal{T}_{x}^{[p_{\tau},q_{\tau}+1]}$ are the $\mathcal{T}_{x}^{(a,b)}$ for $1\le a\le p_{\tau}$ and $q_{\tau}+1\le b\le r.$ The dimension of $\mathcal{T}_{x}^{(a,b)}$ is the dimension of the formal group \[ G^{(a,b)}={\rm Ext}(X^{(a)},X^{(b)})={\rm Ext}(X(1,\mathfrak{g}^{(a)}),X(1,\mathfrak{g}^{(b)}))^{d^{a}d^{b}}, \] which is $d^{a}d^{b}\sum_{i\in\mathscr{I}_{\mathfrak{P}}}(\mathfrak{g}^{(b)}(i)-\mathfrak{g}^{(a)}(i)).$ The dimension of $\mathcal{T}_{x}^{[p_{\tau},q_{\tau}+1]}$ is therefore \[ \sum_{1\le a\le p_{\tau}}\sum_{q_{\tau}+1\le b\le r}d^{a}d^{b}\sum_{i\in\mathscr{I}_{\mathfrak{P}}}(\mathfrak{g}^{(b)}(i)-\mathfrak{g}^{(a)}(i)), \] and the dimension of $\mathcal{T}_{x}^{[p_{\tau},q_{\tau}+1]}\cap\mathcal{T}_{\{\tau,\bar{\tau}\},x}$ is the contribution of $i=\tau$ to this sum.
But from the way $p_{\tau}$ and $q_{\tau}$ were defined it follows that for the particular index~$i$ corresponding to $\tau$ we have, for all $a$ and $b$ in the above range, $\mathfrak{g}^{(b)}(i)=1$ and $\mathfrak{g}^{(a)}(i)=0.$ It follows that \[ \dim\mathcal{T}_{x}^{[p_{\tau},q_{\tau}+1]}\cap\mathcal{T}_{\{\tau,\bar{\tau}\},x}=\sum_{1\le a\le p_{\tau}}\sum_{q_{\tau}+1\le b\le r}d^{a}d^{b}=r_{\phi^{-1}\circ\tau}\cdot(d-r_{\tau})=\mathrm{rk}\;\mathscr{E}_{\{\tau,\bar{\tau}\}}. \] To conclude the proof of the Proposition it is therefore enough to show the inclusion \[ W_{x}:=\mathcal{T}_{x}^{[p_{\tau},q_{\tau}+1]}\cap\mathcal{T}_{\{\tau,\bar{\tau}\},x}\subset\mathscr{E}_{\{\tau,\bar{\tau}\}}, \] i.e. that $W_{x}$ annihilates $\text{\rm KS}(\mathcal{P}_{\tau}[V]\otimes\mathcal{P}_{\bar{\tau}}).$ Let \[ i:\mathfrak{S}\hookrightarrow {\rm Spf}(\widehat{\mathcal{O}}_{M,x}) \] be the infinitesimal neighborhood of $x$ in the direction of $W_{x}.$ More precisely, \[ \mathfrak{S}={\rm Spf}((\mathcal{O}_{M,x}/\mathfrak{m}_{x}^{2})/((\mathfrak{m}_{x}/\mathfrak{m}_{x}^{2})[W_{x}])) \] where $(\mathfrak{m}_{x}/\mathfrak{m}_{x}^{2})[W_{x}]$ is the subspace of the cotangent space at $x$ annihilated by $W_{x}\subset\mathcal{T}_{x}.$ To conform with \cite{G-dS1} we introduce the following notation. Let $\mathcal{A}=i^{*}A^\text{\rm univ}$, \[ \mathcal{H}=H_{dR}^{1}(\mathcal{A}/\mathfrak{S})=i^{*}H_{dR}^{1}(A^\text{\rm univ}/\widehat{\mathcal{O}}_{M,x}),\,\,\,\mathcal{P}=i^{*}\mathcal{P}_{\tau},\,\,\,\mathcal{P}_{0}=\mathcal{P}[V],\,\,\,\mathcal{Q}=i^{*}\mathcal{P}_{\bar{\tau}}. 
\] Over $M^\text{\rm ord}$, the $p$-divisible group $A^\text{\rm univ}[\mathfrak{P}^{\infty}]$ has an $\mathcal{O}_{K}$-stable slope filtration by $p$-divisible groups \[ \mathscr{X}^{(r)}\subset\mathscr{X}^{(r-1,r)}\subset\cdots\subset\mathscr{X}^{(1,r)}=A^\text{\rm univ}[\mathfrak{P}^{\infty}], \] characterized by the fact that at each geometric point $x\in M^\text{\rm ord}$ \[ \mathscr{X}_{x}^{(a,r)}=X^{(a)}\times\cdots\times X^{(r)}, \] where $X^{(\nu)}$ is the $\nu$-th isoclinic factor of $A_{x}^\text{\rm univ}[\mathfrak{P}^{\infty}]$ (the slopes increasing with $\nu$). Let \[ 0\subset {\rm Fil}^{2}=i^{*}(\mathscr{X}^{(q_{\tau}+1,r)})\subset {\rm Fil}^{1}=i^{*}(\mathscr{X}^{(p_{\tau}+1,r)})\subset {\rm Fil}^{0}=\mathcal{A}[\mathfrak{P}^{\infty}]. \] It follows from the construction of the cascade $\mathscr{C}=\{\Gamma^{(a,b)}\}$ that while the full $\mathcal{A}[\mathfrak{P}^{\infty}]$ does deform over $\mathfrak{S}$, its subquotient $p$-divisible groups ${\rm Fil}^{1}$ and ${\rm Fil}^{0}/{\rm Fil}^{2}$ are constant there: they are obtained (with their $\mathcal{O}_{K}$-structure) by base change from the fiber at $x$. Indeed, as the only non-zero graded pieces of $\mathcal{T}_{x}^{[p_{\tau},q_{\tau}+1]}$ are the $\mathcal{T}_{x}^{(a,b)}$ for $a\le p_{\tau}$ and $q_{\tau}+1\le b$, only extensions of $X^{(a)}$ by $X^{(b)}$ for $a$ and $b$ in these ranges contribute to the deformation of $\mathcal{A}[\mathfrak{P}^{\infty}]$ over $\mathfrak{S}$. But such an $X^{(a)}$ does not participate in ${\rm Fil}^{1},$ and $X^{(b)}$ does not participate in ${\rm Fil}^{0}/{\rm Fil}^{2}$.
It also follows from our choice of $p_{\tau}$ and $q_{\tau}$ that $\mathcal{P}_{0}$ pairs trivially with the tangent space to ${\rm Fil}^{2}$, so can be considered a subspace of the cotangent space of the $p$-divisible group $G={\rm Fil}^{0}/{\rm Fil}^{2}.$ Let $D(G)=\mathbb{D}(G)_{\mathfrak{S}}$ be the evaluation of the Dieudonn\'{e} crystal associated to $G$ on $\mathfrak{S}.$ This is an $\mathcal{O}_{\mathfrak{S}}$-module with $\mathcal{O}_{K}$-action, and it is equipped with an integrable connection $\nabla$. We have $D(G)\subset D(\mathcal{A}[p^{\infty}])=H_{dR}^{1}(\mathcal{A}/\mathfrak{S})=\mathcal{H}$ and \[ \nabla:D(G)\to D(G)\otimes\Omega_{\mathfrak{S}}^{1} \] is induced by the Gauss-Manin connection on $\mathcal{H}.$ Therefore, the diagram \[ \xymatrix@C=1cm@M=0.3cm{\mathcal{P} = \omega_{\mathcal{A}/\mathfrak{S}}[\tau] \ar@{^{(}->}[r]\ar[d]_{{\rm KS}_{\mathcal{A}/\mathfrak{S}}} & \mathcal{H}[\tau]\ar[d]^\nabla\\ \mathcal{Q}^\vee\otimes \Omega^1_{\mathfrak{S}} \underset{\lambda}{\stackrel{\sim}{\longrightarrow}} \omega^\vee_{\mathcal{A}^t/\mathfrak{S}}[\tau]\otimes \Omega^1_\mathfrak{S}& \mathcal{H}[\tau] \otimes \Omega^1_\mathfrak{S} \ar@{->>}[l] }\] used to compute $\text{\rm KS}_{\mathcal{A}/\mathfrak{S}},$ can be replaced, when we restrict to $\mathcal{P}_{0}=\mathcal{P}[V]$, by the diagram \[ \xymatrix@C=1cm@M=0.3cm{\mathcal{P}_0 \ar@{^{(}->}[r]\ar[d]_{{\rm KS}_{\mathcal{A}/\mathfrak{S}}} & D(G)[\tau]\ar[d]^\nabla\\ \mathcal{Q}^\vee\otimes \Omega^1_{\mathfrak{S}} & D(G)[\tau] \otimes \Omega^1_\mathfrak{S}. \ar[l] }\] However, by the constancy of $G={\rm Fil}^{0}/{\rm Fil}^{2}$ over $\mathfrak{S}$, the left arrow vanishes.
We must therefore have $\text{\rm KS}_{\mathcal{A}/\mathfrak{S}}(\mathcal{P}_{0}\otimes\mathcal{Q})=0.$ By the functoriality of Kodaira-Spencer homomorphisms with respect to base change we conclude that $W_{x},$ the tangent space to $\mathfrak{S},$ annihilates $\text{\rm KS}_{A^\text{\rm univ}/M}(\mathcal{P}_{\tau}[V]\otimes\mathcal{P}_{\bar{\tau}})\subset\Omega_{M/k}^{1},$ i.e. is contained in $\mathscr{E}_{\{\tau,\bar{\tau}\}},$ and this completes the proof. \end{proof} \subsection{The extension of the foliations to inner Ekedahl-Oort strata} \subsubsection{The problem} This section is largely combinatorial. The variety $M$ has a stratification by locally closed subsets $M_{w}$, called the Ekedahl-Oort (EO) strata of $M$, of which $M^\text{\rm ord}$ is the largest. They are labelled by certain Weyl group cosets $w$ (or by their distinguished representatives of shortest length), recalled below. They are characterized by the fact that the isomorphism class of $A_{x}^\text{\rm univ}[p]$, with its $\mathcal{O}_{K}$-structure and polarization, is the same for all geometric points $x$ in a given stratum, and the strata are maximal with respect to this property. See \cite{Mo2,Wed2}. Consider a (not necessarily closed) point $x\in M.$ Let $k(x)$ be its residue field.
By $(\ref{eq:ker_V_times_V})$ and the fact that \[ \mathcal{P}_{\tau,x}[V]\otimes\mathcal{P}_{\bar{\tau},x}\cap\mathcal{P}_{\tau,x}\otimes\mathcal{P}_{\bar{\tau},x}[V]=\mathcal{P}_{\tau,x}[V]\otimes\mathcal{P}_{\bar{\tau},x}[V], \] the dimension of $\ker(V\otimes V:(\mathcal{P}_{\tau}\otimes\mathcal{P}_{\bar{\tau}})_{x}\to(\mathcal{P}_{\phi^{-1}\circ\tau}^{(p)}\otimes\mathcal{P}_{\phi^{-1}\circ\bar{\tau}}^{(p)})_{x})$ is given by \begin{equation} r_{V}\{\tau,\bar{\tau}\}(x)=\dim\mathcal{P}_{\tau,x}[V]\cdot r_{\bar{\tau}}+r_{\tau}\cdot\dim\mathcal{P}_{\bar{\tau},x}[V]-\dim\mathcal{P}_{\tau,x}[V]\cdot\dim\mathcal{P}_{\bar{\tau},x}[V].\label{eq:r_v(x) recall} \end{equation} This quantity depends only on $A_{x}^\text{\rm univ}[p]$, and is therefore \emph{constant along each EO stratum}. At $x\in M^\text{\rm ord}$ it was expressed in terms of the CM type by the formula \[ r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}=\max\{0,r_{\tau}-r_{\phi^{-1}\circ\tau}\}\cdot(d-r_{\tau})+r_{\tau}\cdot\max\{0,r_{\phi^{-1}\circ\tau}-r_{\tau}\}. \] (We did this at geometric points, but the same works at every schematic point of $M^\text{\rm ord},$ not necessarily closed.) Since $r_{V}\{\tau,\bar{\tau}\}(x)$ can only increase under specialization, $r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}$ is the minimal value attained by it, and \begin{equation} M_{\Sigma}=\{x\in M|\,r_{V}\{\tau,\bar{\tau}\}(x)=r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}\,\,\forall\{\tau,\bar{\tau}\}\in\Sigma\}\label{eq:M_=00005CSigma} \end{equation} is a Zariski open set, which is a union of EO strata. Clearly, \[ M_{\Sigma_{1}\cup\Sigma_{2}}=M_{\Sigma_{1}}\cap M_{\Sigma_{2}}.
\] In our earlier work \cite{G-dS1}, where there was only one $\Sigma$ to consider, this set was denoted by $M_{\sharp}.$ The proof of Lemma \ref{lem:Foliation is smooth} shows that over $M_{\Sigma}$ the sheaf \begin{equation} \mathscr{F}_{\Sigma}=\text{\rm KS}\left(\sum_{\{\tau,\bar{\tau}\}\in\Sigma}(\mathcal{P}_{\tau}\otimes\mathcal{P}_{\bar{\tau}})[V\otimes V]\right)^{\perp}\label{eq:the foliation} \end{equation} remains a vector sub-bundle of $\mathcal{T}$ of corank $r_{V}(\Sigma)=\sum_{\{\tau,\bar{\tau}\}\in\Sigma}r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}.$ The properties of being involutive and $p$-closed extend by continuity from the open dense $\mu$-ordinary stratum. We conclude: \begin{thm} The vector bundle $\mathscr{F}_{\Sigma}$ extends as a \emph{smooth} $p$-foliation to the Zariski open set $M_{\Sigma}$. \end{thm} Our task is therefore to calculate $r_{V}\{\tau,\bar{\tau}\}(x)$ at $x\in M_{w}$ for the different EO strata $M_{w}$, and see for which of them it equals $r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}.$ This will give us an explicit description of $M_{\Sigma}$. \subsubsection{Previous results} When $L=\mathbb{Q}$ and $K$ is quadratic imaginary, and when $p\mathcal{O}_{K}=\mathfrak{P}$ is inert (the split case being trivial for quadratic imaginary $K$), we showed in \cite{G-dS1} that there exists a smallest EO stratum $M^\text{\rm fol}$ in $M_{\sharp}$. Any other stratum lies in $M_{\sharp}$ if and only if it contains $M^\text{\rm fol}$ in its closure. In this case $\Sigma=\{\tau,\bar{\tau}\}$, and writing $(r_{\tau},r_{\bar{\tau}})=(n,m)$ we may assume, without loss of generality, that $n\ge m.$ We find that $r_{V}(\Sigma)=(n-m)m,$ hence \[ \mathrm{rk}\;\mathscr{F}_{\Sigma}=m^{2}. \] The dimension of $M^\text{\rm fol}$ is also $m^{2},$ while the dimension of $M$ itself is $nm$. 
For example, for Shimura varieties attached to the group $U(n,1),$ the foliation extends everywhere except to the lowest, $0$-dimensional, EO stratum, consisting of the so-called superspecial points. In case $K$ is quadratic imaginary, the labelling of the EO strata of $M$ is by $(n,m)$\emph{-shuffles}. We review it to motivate the type of combinatorics that will show up in the general case. A permutation $w$ of $\{1,\dots,n+m\}$ is called an $(n,m)$-shuffle if \[ w^{-1}(1)<\cdots<w^{-1}(n),\,\,\,w^{-1}(n+1)<\cdots<w^{-1}(n+m), \] i.e. $w$ interlaces the intervals $[1,n]$ and $[n+1,n+m]$ but keeps the order within each interval. If $w$ is an $(n,m)$-shuffle and $M_{w}$ is the corresponding EO stratum, then its dimension is equal to the length of $w$ \[ \dim(M_{w})=\ell(w)=\sum_{i=1}^{n}(w^{-1}(i)-i). \] The unique $w$ for which this gets the value $nm$ is \begin{equation} w^\text{\rm ord}=\left(\begin{array}{cccccc} 1 & \cdots & m & m+1 & \cdots & n+m\\ n+1 & \cdots & n+m & 1 & \cdots & n \end{array}\right)\label{eq:ordinary w} \end{equation} and the corresponding $M_{w}$ is $M^\text{\rm ord}.$ The formula for $r_{V}\{\tau,\bar{\tau}\}(x)$ at $x\in M_{w}(k)$ reads \[ r_{V}\{\tau,\bar{\tau}\}(x)=a(w)\cdot m=|\{1\le i\le n|\,1\le w^{-1}(i)\le n\}|\cdot m \] (compare \cite{G-dS1} (2.3)). For example, for $w^\text{\rm ord}$ this is $(n-m)m.$ It is readily seen that there is a unique $(n,m)$-shuffle $w^\text{\rm fol}$ for which $a(w)$ is still $n-m,$ but for which $\dim(M_{w})=m^{2}$ is the smallest possible. This is the permutation \[ w^\text{\rm fol}=\left(\begin{array}{ccccccccc} 1 & \cdots & n-m & n-m+1 & \cdots & n & n+1 & \cdots & n+m\\ 1 & \cdots & n-m & n+1 & \cdots & n+m & n-m+1 & \cdots & n \end{array}\right), \] whose corresponding EO stratum is $M^\text{\rm fol}.$ \subsubsection{Weyl group cosets and EO strata} We return to a general CM field $K$. 
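The shuffle combinatorics just reviewed is elementary to verify by brute force. The following Python sketch (the helper names are ours, purely illustrative) enumerates all $(n,m)$-shuffles through their inverses, computes the length $\ell(w)=\sum_{i=1}^{n}(w^{-1}(i)-i)$ and the invariant $a(w)=|\{1\le i\le n\,:\,w^{-1}(i)\le n\}|$, and checks, for $(n,m)=(4,2)$, that there is a unique shuffle of maximal length $nm$ (this is $w^\text{\rm ord}$), and that among the shuffles with $a(w)=n-m$ the minimal length $m^{2}$ is attained exactly once (by $w^\text{\rm fol}$).

```python
from itertools import combinations

def shuffles(n, m):
    """Yield the inverse w^{-1} of each (n, m)-shuffle of {1, ..., n+m},
    as a dict value -> position.  A shuffle is determined by the set of
    positions occupied (in increasing order) by the values 1, ..., n."""
    d = n + m
    for pos in combinations(range(1, d + 1), n):
        rest = [p for p in range(1, d + 1) if p not in pos]
        winv = {i: p for i, p in enumerate(pos, start=1)}
        winv.update({i: p for i, p in enumerate(rest, start=n + 1)})
        yield winv

def length(winv, n):
    # l(w) = sum_{i=1}^{n} (w^{-1}(i) - i)
    return sum(winv[i] - i for i in range(1, n + 1))

def a(winv, n):
    # a(w) = #{1 <= i <= n : w^{-1}(i) <= n}
    return sum(1 for i in range(1, n + 1) if winv[i] <= n)

n, m = 4, 2
stats = [(length(w, n), a(w, n)) for w in shuffles(n, m)]
top = [l for l, _ in stats if l == n * m]            # candidates for w^ord
foliation = [l for l, av in stats if av == n - m]    # shuffles with a(w) = n-m
print(len(top), min(foliation), foliation.count(min(foliation)))  # -> 1 4 1
```

The same enumeration applies verbatim to the shuffles $w_{\tau}\in\Pi_{r_{\tau},d-r_{\tau}}$ used in the general case below.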
Denote by $\Pi_{e,d-e}$ the set of $(e,d-e)$-shuffles in the symmetric group $\mathfrak{S}_{d}.$ They serve as representatives (of minimal length) for \[ \mathfrak{S}_{e}\times\mathfrak{S}_{d-e}\setminus\mathfrak{S}_{d}. \] For an $(e,d-e)$-shuffle $\pi$ let \[ \check{\pi}=w_{0}\circ\pi\circ w_{0} \] where $w_{0}(\nu)=d+1-\nu$ is the element of maximal length in $\mathfrak{S}_{d}.$ This $\check{\pi}$ is a $(d-e,e)$-shuffle, and $\pi\mapsto\check{\pi}$ is a bijection between $\Pi_{e,d-e}$ and $\Pi_{d-e,e}.$ Explicitly, \[ \check{\pi}^{-1}(d+1-\nu)=d+1-\pi^{-1}(\nu). \] Let $w=(w_{\tau})_{\tau\in\mathscr{I}}$ where $w_{\tau}\in\Pi_{r_{\tau},d-r_{\tau}}$ and $w_{\bar{\tau}}=\check{w}_{\tau}.$ Note that $w_{\tau}$ and $w_{\bar{\tau}}$, being conjugate by $w_{0}$, have the same length. Let $k$ be, as usual, an algebraically closed field containing $\kappa.$ Consider the following Dieudonn\'{e} module with $\mathcal{O}_{K}$-structure $N_{w}$ attached to $w$: \begin{itemize} \item $N_{w}=\bigoplus_{i\in\mathscr{I}}\bigoplus_{j=1}^{d}ke_{i,j},$ $\mathcal{O}_{K}$ acting on $\bigoplus_{j=1}^{d}ke_{i,j}$ via $i:\mathcal{O}_{K}\to k.$ \vspace{0.2cm} \item $F(e_{i,j})=\begin{cases} \begin{array}{c} 0\\ e_{i+1,m} \end{array} & \begin{array}{c} \mathrm{if}\,\,w_{i}(j)\le\mathfrak{f}(i)\\ \mathrm{if}\,\,w_{i}(j)=\mathfrak{f}(i)+m. \end{array}\end{cases}$ \vspace{0.2cm} \item $V(e_{i+1,j})=\begin{cases} \begin{array}{c} 0\\ e_{i,n} \end{array} & \begin{array}{c} \mathrm{if}\,\,j\le d-\mathfrak{f}(i)\\ \mathrm{if}\,\,j=d-\mathfrak{f}(i)+w_{i}(n). \end{array}\end{cases}$ \end{itemize} This $N_{w}$ is endowed with a pairing, setting its $\tau$ and $\bar{\tau}$-components in duality, but we suppress it from the notation, as it is irrelevant to the computation that we have to make. \begin{prop} (\cite{Mo2}, Theorem 6.7) The EO strata $M_{w}$ of $M$ are in one-to-one correspondence with the $w$'s as above.
If $x\in M_{w}(k)$ is a geometric point, the Dieudonn\'{e} module of $A_{x}^\text{\rm univ}[p]$ is isomorphic, with its $\mathcal{O}_{K}$-structure, to $N_{w}.$ The dimension of $M_{w}$ is given by \begin{equation} \dim(M_{w})=\ell(w):=\sum_{\{\tau,\bar{\tau}\}\in\mathscr{I}^{+}}\ell(w_{\tau}).\label{eq:EO dimension formula} \end{equation} \end{prop} As an example, the $\mu$-ordinary stratum $M^\text{\rm ord}$ corresponds to $w=w^\text{\rm ord}=(w_{i}^\text{\rm ord})_{i\in\mathscr{I}}$ where $w_{i}^\text{\rm ord}$ is given by $(\ref{eq:ordinary w})$ with $(n,m)=(\mathfrak{f}(i),d-\mathfrak{f}(i)).$ We now consider $r_{V}\{\tau,\bar{\tau}\}(x)$, as in (\ref{eq:r_v(x) recall}), for $x\in M_{w}(k).$ As usual we write $\tau=i$ and $r_{\tau}=\mathfrak{f}(i)$. We have \begin{equation} \begin{split} \dim\mathcal{P}_{\tau,x}[V]& =|\{j|\,j\le r_{\phi^{-1}\circ\bar{\tau}},\,w_{\tau}(j)\le r_{\tau}\}|\label{eq:P_tau_V} \\ & =|\{j|\,j\le d-\mathfrak{f}(i-1),\,w_{i}(j)\le\mathfrak{f}(i)\}| \\ &\ge\max\{0,\mathfrak{f}(i)-\mathfrak{f}(i-1)\}, \end{split} \end{equation} and \begin{equation} \begin{split} \dim\mathcal{P}_{\bar{\tau},x}[V]&=|\{j|\,j\le r_{\phi^{-1}\circ\tau},\,w_{\bar{\tau}}(j)\le r_{\bar{\tau}}\}| \\& =|\{\ell|\,\ell\le r_{\bar{\tau}},\,w_{\bar{\tau}}^{-1}(\ell)\le r_{\phi^{-1}\circ\tau}\}| \\ &=|\{\ell|\,\ell\le r_{\bar{\tau}},\,\check{w}_{\tau}^{-1}(\ell)\le r_{\phi^{-1}\circ\tau}\}| \\ &=|\{m|\,d+1-m\le r_{\bar{\tau}},\,d+1-w_{\tau}^{-1}(m)\le r_{\phi^{-1}\circ\tau}\}| \\ &=|\{j|\,r_{\phi^{-1}\circ\bar{\tau}}+1\le j,\,r_{\tau}+1\le w_{\tau}(j)\}| \\ &=|\{j|\,d-\mathfrak{f}(i-1)+1\le j,\,\mathfrak{f}(i)+1\le w_{i}(j)\}| \\ &\ge\max\{0,\mathfrak{f}(i-1)-\mathfrak{f}(i)\}.
\end{split} \end{equation} \begin{lem} We have $r_{V}\{\tau,\bar{\tau}\}(x)=r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}$ precisely when the following conditions are satisfied: \end{lem} \begin{itemize} \item If $\mathfrak{f}(i-1)\le\mathfrak{f}(i)$, \begin{equation} |\{j|\,j\le d-\mathfrak{f}(i-1),\,w_{i}(j)\le\mathfrak{f}(i)\}|=\mathfrak{f}(i)-\mathfrak{f}(i-1),\label{eq:Cond_1-1} \end{equation} \item If $\mathfrak{f}(i)\le\mathfrak{f}(i-1)$, \begin{equation} |\{j|\,d-\mathfrak{f}(i-1)+1\le j,\,\mathfrak{f}(i)+1\le w_{i}(j)\}|=\mathfrak{f}(i-1)-\mathfrak{f}(i).\label{eq:Cond_2-1} \end{equation} \end{itemize} \begin{proof} We begin by noting that when $\mathfrak{f}(i-1)=\mathfrak{f}(i)$ the two conditions agree with each other, since each of them is equivalent, in this case, to $w_{i}=w_{i}^\text{\rm ord}$, so the Lemma is consistent. Let us dispose first of the cases where $\mathfrak{f}(i)\in\{0,d\}.$ In these extreme cases $\mathcal{P}_{\tau}$ or $\mathcal{P}_{\bar{\tau}}$ are zero, so \[ r_{V}\{\tau,\bar{\tau}\}(x)=r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}=0 \] for all $x.$ The conditions of the Lemma also hold then, trivially, everywhere. We therefore assume from now on that $0<\mathfrak{f}(i)<d.$ Suppose $\mathfrak{f}(i-1)\le\mathfrak{f}(i).$ If $(\ref{eq:Cond_1-1})$ holds then \[ \dim\mathcal{P}_{\tau,x}[V]\otimes\mathcal{P}_{\bar{\tau},x}=(\mathfrak{f}(i)-\mathfrak{f}(i-1))\cdot(d-\mathfrak{f}(i))=r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}. \] We claim that $|\{j|\,d-\mathfrak{f}(i-1)+1\le j,\,\mathfrak{f}(i)+1\le w_{i}(j)\}|=0$, and therefore $\mathcal{P}_{\bar{\tau},x}[V]=0$ and $r_{V}\{\tau,\bar{\tau}\}(x)=r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}$ as desired. Suppose, to the contrary, that for some $j$ we had $d-\mathfrak{f}(i-1)<j$ and $\mathfrak{f}(i)<w_{i}(j)$. 
Then there would be \emph{fewer} than $\mathfrak{f}(i-1)$ values of $j$ such that $d-\mathfrak{f}(i-1)<j$ and $w_{i}(j)\le\mathfrak{f}(i),$ hence \emph{more than} $\mathfrak{f}(i)-\mathfrak{f}(i-1)$ values of $j$ such that $j\le d-\mathfrak{f}(i-1)$ and $w_{i}(j)\le\mathfrak{f}(i).$ This would violate condition $(\ref{eq:Cond_1-1})$. Conversely, still under $\mathfrak{f}(i-1)\le\mathfrak{f}(i)$, assume that $r_{V}\{\tau,\bar{\tau}\}(x)=r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}=(\mathfrak{f}(i)-\mathfrak{f}(i-1))\cdot(d-\mathfrak{f}(i)).$ Since we assumed that $d-\mathfrak{f}(i)\ne0$ we cannot have $\dim\mathcal{P}_{\tau,x}[V]>\mathfrak{f}(i)-\mathfrak{f}(i-1)$ and condition $(\ref{eq:Cond_1-1})$ must hold. The arguments when $\mathfrak{f}(i)\le\mathfrak{f}(i-1)$ are analogous. \end{proof} It is easy to see that if $\mathfrak{f}(i-1)\le\mathfrak{f}(i)$ the $(\mathfrak{f}(i),d-\mathfrak{f}(i))$-shuffle $w_{i}^\text{\rm fol}$ given by {\scriptsize{} \[ \left(\begin{array}{ccccccccc} 1 & \cdots & \mathfrak{f}(i)-\mathfrak{f}(i\!-\!1) & \mathfrak{f}(i)-\mathfrak{f}(i\!-\!1)+1 & \cdots & d-\mathfrak{f}(i\!-\!1) & d-\mathfrak{f}(i\!-\!1)+1 & \cdots & d\\ 1 & \cdots & \mathfrak{f}(i)-\mathfrak{f}(i\!-\!1) & \mathfrak{f}(i)+1 & \cdots & d & \mathfrak{f}(i)-\mathfrak{f}(i\!-\!1)+1 & \cdots & \mathfrak{f}(i) \end{array}\right) \] }satisfies $(\ref{eq:Cond_1-1})$, and this is the $(\mathfrak{f}(i),d-\mathfrak{f}(i))$-shuffle of smallest length satisfying it. Its length is then \[ \ell(w_{i}^\text{\rm fol})=(d-\mathfrak{f}(i))\mathfrak{f}(i-1).
\] Similarly, if $\mathfrak{f}(i)\le\mathfrak{f}(i-1)$, letting $\mathfrak{g}(i)=d-\mathfrak{f}(i)$, the same holds with $w_{i}^\text{\rm fol}$ given by {\scriptsize{} \[ \left(\begin{array}{ccccccccc} 1 & \cdots & \mathfrak{g}(i\!-\!1) & \mathfrak{g}(i\!-\!1)+1 & \cdots & \mathfrak{g}(i\!-\!1)+\mathfrak{f}(i) & \mathfrak{g}(i\!-\!1)+\mathfrak{f}(i)+1 & \cdots & d\\ \mathfrak{f}(i)+1 & \cdots & \mathfrak{g}(i\!-\!1)+\mathfrak{f}(i) & 1 & \cdots & \mathfrak{f}(i) & \mathfrak{g}(i\!-\!1)+\mathfrak{f}(i)+1 & \cdots & d \end{array}\right). \] }In this case the length is \[ \ell(w_{i}^\text{\rm fol})=\mathfrak{f}(i)(d-\mathfrak{f}(i-1)). \] We arrive at the following result. \begin{thm} The EO stratum $M_{w}$ is contained in $M_{\Sigma}$ if and only if for every $\{\tau,\bar{\tau}\}\in\Sigma$, writing $\tau=i,$ $\phi^{-1}\circ\tau=i-1$ as usual, $(\ref{eq:Cond_1-1})$ or $(\ref{eq:Cond_2-1})$ holds. There exists a unique EO stratum $M_{w}\subset M_{\Sigma}$ of smallest dimension. It is given by the following recipe: $w_{i}=w_{i}^\text{\rm fol}$ if $\{\tau,\bar{\tau}\}\in\Sigma$ and $w_{i}=\mathrm{id}$ otherwise. Denote this $M_{w}$ by $M_{\Sigma}^\text{\rm fol}.$ Its dimension is given by \[ \dim M_{\Sigma}^\text{\rm fol}=\sum_{\{\tau,\bar{\tau}\}\in\Sigma}\min(r_{\tau},r_{\phi^{-1}\circ\tau})\cdot\min(r_{\bar{\tau}},r_{\phi^{-1}\circ\bar{\tau}}). \] Any other EO stratum $M_{w}$ lies in $M_{\Sigma}$ if and only if $M_{\Sigma}^\text{\rm fol}$ lies in its closure. \end{thm} \begin{proof} By the computations above, the unique $M_{w}$ of smallest dimension which is still contained in $M_{\Sigma}$, i.e. for which $r_{V}\{\tau,\bar{\tau}\}(x)=r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}$ for all $\{\tau,\bar{\tau}\}\in\Sigma$, is obtained when $w_{i}=w_{i}^\text{\rm fol}$ whenever $\{\tau,\bar{\tau}\}\in\Sigma$ (this condition is symmetric in $\tau$ and $\bar{\tau}$), and $w_{i}=\mathrm{id}$ otherwise.
Its dimension follows from $(\ref{eq:EO dimension formula})$ and the computation of the lengths of the $w_{i}^\text{\rm fol}.$ If $M_{\Sigma}^\text{\rm fol}$ lies in the closure of $M_{w}$ then for $x\in M_{\Sigma}^\text{\rm fol}$ and $y\in M_{w}$ and $\{\tau,\bar{\tau}\}\in\Sigma$ we have \[ r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\}\le r_{V}\{\tau,\bar{\tau}\}(y)\le r_{V}\{\tau,\bar{\tau}\}(x)=r_{V}^\text{\rm ord}\{\tau,\bar{\tau}\} \] so equality holds and $M_{w}\subset M_{\Sigma}$. Conversely, suppose that $M_{w}\subset M_{\Sigma}$. Assume, without loss of generality, that $r_{\phi^{-1}\circ\tau}\le r_{\tau}.$ Then condition $(\ref{eq:Cond_1-1})$ holds, so writing $\tau=i$, we must have that $w_{i}$ is the permutation {\scriptsize{} \[ \left(\begin{array}{ccccccccc} 1 & \cdots & \mathfrak{f}(i)-\mathfrak{f}(i\!-\!1) & \mathfrak{f}(i)-\mathfrak{f}(i\!-\!1)+1 & \cdots & d-\mathfrak{f}(i\!-\!1) & d-\mathfrak{f}(i\!-\!1)+1 & \cdots & d\\ * & \cdots & * & * & \cdots & * & \mathfrak{f}(i)-\mathfrak{f}(i\!-\!1)+1 & \cdots & \mathfrak{f}(i) \end{array}\right). \] }Since the blocks $[1,\mathfrak{f}(i)-\mathfrak{f}(i-1)]$ and $[\mathfrak{f}(i)+1,d]$ must appear in the bottom row in increasing order (but interlaced), it is easy to check that the permutation $w_{i}^\text{\rm fol}$ is smaller than or equal to $w_{i}$ in the Bruhat order on the Weyl group of $GL_{d}.$ This is enough (although in general, not equivalent) for $M_{\Sigma}^\text{\rm fol}$ to lie in the closure of $M_{w}.$ (For the closure relation between EO strata, see \cite{V-W}.) \end{proof} \subsubsection{Integral varieties} The results of \S \ref{sec:general integral} imply that integral varieties of the foliation $\mathscr{F}_\Sigma$ abound. Nonetheless, it is interesting to identify specific examples.
By Lemma \ref{lem:Foliation is smooth}, when $\Sigma=\mathscr{I}^{+},$ $\dim M_{\Sigma}^\text{\rm fol}=\mathrm{rk}\;\mathscr{F}_{\Sigma}.$ We expect $M_{\Sigma}^\text{\rm fol}$ to be an integral variety of $\mathscr{F}_{\Sigma}$ in this case. This has been proved when $K$ is quadratic imaginary in \cite{G-dS1}, Theorem 25, and would be analogous to Theorem \ref{thm:HBMV integral varieties} above. The proof of Theorem 25 in \cite{G-dS1} was not conceptual, and involved tedious computations with Dieudonn\'{e} modules. \subsection{The moduli problem $M^{\Sigma}$ and the extension of the foliation to it} \subsubsection{The moduli scheme $M^{\Sigma}$} As we have seen in the previous section, $\mathscr{F}_{\Sigma}$, defined by $(\ref{eq:the foliation}),$ is a $p$-foliation on all of $M$, but is smooth only on the Zariski open set $M_{\Sigma}.$ Our goal is to define a ``successive blow-up'' $\beta:M^{\Sigma}\to M,$ which is an isomorphism over $M_{\Sigma},$ and a natural extension of $\mathscr{F}_{\Sigma}$ to a \emph{smooth} $p$-foliation on $M^{\Sigma}$. See \cite{G-dS1} 4.1 for $K$ quadratic imaginary, where $M^{\Sigma}$ was denoted $M^{\sharp}.$ Fix $\{\tau,\bar{\tau}\}\in\Sigma$, and assume that $r_{\phi^{-1}\circ\tau}\le r_{\tau}$ (otherwise switch notation between $\tau$ and $\bar{\tau}$). Consider the moduli problem $M^{\tau,\bar{\tau}}$ on $\kappa$-algebras $R$ given by \[ M^{\tau,\bar{\tau}}(R)=\{(\underline{A},\mathcal{N})|\,\underline{A}\in M(R),\,\mathcal{N}\subset\mathcal{P}_{\tau}[V]\,\mathrm{a\,subbundle\,of\,rank}\,r_{\tau}-r_{\phi^{-1}\circ\tau}\}/\simeq. \] (Here, as before, by a \emph{subbundle} we mean a subsheaf that is locally a direct summand.) The forgetful map $\beta:M^{\tau,\bar{\tau}}\to M$ is bijective above $M_{\Sigma}$, since \[ \mathrm{rk}(\mathcal{P}_{\tau}[V])=r_{\tau}-r_{\phi^{-1}\circ\tau} \] holds there.
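As a brute-force sanity check on this construction, one can verify the lower bound $\dim\mathcal{P}_{\tau,x}[V]\ge r_{\tau}-r_{\phi^{-1}\circ\tau}$ from $(\ref{eq:P_tau_V})$, which guarantees that the moduli problem is non-empty over every EO stratum. The following Python sketch (helper names are ours; we write $n=\mathfrak{f}(i)$ and $m=\mathfrak{f}(i-1)$ with $m\le n$) enumerates, for $d=6$, all $(n,d-n)$-shuffles $w$ and confirms that the count $|\{j\le d-m:\,w(j)\le n\}|$ is never smaller than $n-m$, and that the bound $n-m$ is attained by some shuffle for every pair $(n,m)$.

```python
from itertools import combinations

def shuffles(n, d):
    """All (n, d-n)-shuffles w of {1, ..., d}, as dicts j -> w(j):
    the values 1..n sit (in increasing order) at the positions in `pos`."""
    for pos in combinations(range(1, d + 1), n):
        rest = [p for p in range(1, d + 1) if p not in pos]
        w = {}
        for value, p in enumerate(pos, start=1):
            w[p] = value
        for value, p in enumerate(rest, start=n + 1):
            w[p] = value
        yield w

d = 6
results = []
for n in range(d + 1):            # n = f(i)
    for m in range(n + 1):        # m = f(i-1) <= n
        counts = [sum(1 for j in range(1, d - m + 1) if w[j] <= n)
                  for w in shuffles(n, d)]
        # the bound n - m is never violated and is attained by some shuffle
        results.append(min(counts) == n - m)
print(all(results))  # -> True
```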
Let $(n,m)=(r_{\tau},r_{\phi^{-1}\circ\tau})$ and consider the relative Grassmannian $Gr(n-m,\mathcal{P}_{\tau})$ over $M$, classifying sub-bundles $\mathcal{N}$ of rank $n-m$ in the rank $n$ bundle $\mathcal{P}_{\tau}.$ This is a smooth scheme over $M,$ of relative dimension $(n-m)m.$ As the condition $V(\mathcal{N})=0$ is closed, the moduli problem $M^{\tau,\bar{\tau}}$ is representable by a closed subscheme of $Gr(n-m,\mathcal{P}_{\tau}).$ The fiber $M_{x}^{\tau,\bar{\tau}}=\beta^{-1}(x)$ is the Grassmannian of $(n-m)$-dimensional subspaces in $\mathcal{P}_{\tau,x}[V].$ Suppose $x\in M_{w}$ where $w=(w_{\sigma})_{\sigma\in\mathscr{I}},$ $w_{\sigma}\in\Pi_{r_{\sigma},d-r_{\sigma}}$ and $w_{\bar{\sigma}}=\check{w}_{\sigma}$. As we have computed in $(\ref{eq:P_tau_V})$, \[ \dim\mathcal{P}_{\tau,x}[V]=a_{\tau}(w):=|\{j| j\le d-r_{\phi^{-1}\circ\tau}, w_{\tau}(j)\le r_{\tau}\}|\ge n-m, \] and consequently \[ \dim M_{x}^{\tau,\bar{\tau}}=(n-m)(a_{\tau}(w)-n+m). \] Denote $\beta^{-1}(M_{w})$ by $M_{w}^{\tau,\bar{\tau}}.$ We have shown that it is smooth of relative dimension $(n-m)(a_{\tau}(w)-n+m)$ over $M_{w}.$ When the EO strata undergo specialization, this dimension jumps. The main result concerning $M^{\tau,\bar{\tau}}$ is the following. \begin{thm} The scheme $M^{\tau,\bar{\tau}}$ is a non-singular variety, and $\beta$ induces a bijection between its irreducible (= connected) components and those of $M$. In particular, $M^{\tau,\bar{\tau},{\rm ord}}$ is dense in $M^{\tau,\bar{\tau}}.$ \end{thm} \begin{proof} This is, mutatis mutandis, the proof of Theorem 15 in \cite{G-dS1} 4.1.3, and we refer to our earlier paper for details. \end{proof} \begin{cor} The moduli problem which is the fiber product of the $M^{\tau,\bar{\tau}}$ over~$M$, for all $\{\tau,\bar{\tau}\}\in\Sigma$, is represented by a smooth $\kappa$-variety $M^{\Sigma}$.
The map $\beta:M^{\Sigma}\to M$ induces a bijection on irreducible components, and is an isomorphism over $M_{\Sigma}.$ In particular, $M^{\Sigma,{\rm ord}}$ is dense in $M^{\Sigma}.$ Over an EO stratum $M_{w}$ that is not contained in $M_{\Sigma}$ the map $\beta$ is no longer an isomorphism, but it is smooth of relative dimension \[ \sum_{\{\tau,\bar{\tau}\}\in\Sigma}(r_{\tau}-r_{\phi^{-1}\circ\tau})(a_{\tau}(w)-r_{\tau}+r_{\phi^{-1}\circ\tau}). \] Here for each pair $\{\tau,\bar{\tau}\}$ we choose $\tau$ so that $r_{\tau}\ge r_{\phi^{-1}\circ\tau}$, and \[ a_{\tau}(w)=|\{j| j\le d-r_{\phi^{-1}\circ\tau}, w_{\tau}(j)\le r_{\tau}\}|. \] \end{cor} We can also draw the following corollary. \begin{cor} The open set $M_{\Sigma}\subset M$ is the largest open set in $M$ to which the foliation $\mathscr{F}_{\Sigma}$ extends as a smooth foliation. \end{cor} \begin{proof} We have already noted that $\mathscr{F}_{\Sigma}$ is a saturated foliation everywhere on $M.$ Outside $M_{\Sigma}$ the dimension of its fibers is strictly larger than their dimension over $M^\text{\rm ord},$ so $\mathscr{F}_{\Sigma}$ cannot be a vector sub-bundle on any open set larger than $M_{\Sigma}.$ \end{proof} \begin{rem} Since $\beta:M^{\Sigma}\to M$ is a birational projective morphism between non-singular varieties, it follows from the general theory (\cite{Stacks} 29.43 and \cite{H} Theorem 7.17 and exercise 7.11(c)) that $\beta$ is a blow-up at an ideal sheaf supported on $M-M_{\Sigma}.$ \end{rem} \subsubsection{Extending the foliation to $M^{\Sigma}$} Above $M_{\Sigma}$, the map $\beta$ is an isomorphism \[ M^{\Sigma}\supset\beta^{-1}(M_{\Sigma})\simeq M_{\Sigma}, \] so the foliation $\mathscr{F}_{\Sigma}$ induces a foliation on $\beta^{-1}(M_{\Sigma}).$ We now explain how to extend it to a smooth foliation on all of $M^{\Sigma}.$ Let $k$ be, as usual, an algebraically closed field containing $\kappa$, and $y\in M^{\Sigma}(k)$ a geometric point, with image $x=\beta(y)\in M(k).$ The point $y$
``is'' a pair $(\underline{A}_{x}^\text{\rm univ},\mathcal{N}_{y})$ where $\mathcal{N}_{y}=(\mathcal{N}_{y,\tau})$. Here \begin{itemize} \item $\tau$ ranges over representatives of the pairs $\{\tau,\bar{\tau}\}\in\Sigma$, chosen in such a way that $r_{\phi^{-1}\circ\tau}\le r_{\tau},$ \item $\mathcal{N}_{y,\tau}\subset\mathcal{P}_{x,\tau}[V]$ is an $(r_{\tau}-r_{\phi^{-1}\circ\tau})$-dimensional subspace. \end{itemize} Let $\mathcal{H}_{x}=H_{dR}^{1}(A_{x}^\text{\rm univ}/k)$ and $\underline{\omega}_{x}=H^{0}(A_{x}^\text{\rm univ},\Omega^{1}).$ The polarization $\lambda_{x}$ on $A_{x}^\text{\rm univ}$ induces a perfect pairing \[ \{,\}_{\lambda}:\underline{\omega}_{x}\times\mathcal{H}_{x}/\underline{\omega}_{x}\to k \] satisfying $\{\iota(a)u,v\}_{\lambda}=\{u,\iota(\bar{a})v\}_{\lambda}$ for $a\in\mathcal{O}_{K}$. Adapting the proof of Theorem 15 in \cite{G-dS1} 4.1.3 to our situation, we see that the tangent space $\mathcal{T}M_{y}^{\Sigma}$ at the point~$y$ can be described as the set of pairs $(\varphi,\psi)$ where \begin{itemize} \item $\varphi\in\mathrm{Hom}_{\mathcal{O}_{K}}(\underline{\omega}_{x},\mathcal{H}_{x}/\underline{\omega}_{x})^{sym}$ is an $\mathcal{O}_{K}$-linear homomorphism from $\underline{\omega}_{x}$ to $\mathcal{H}_{x}/\underline{\omega}_{x}$ which is symmetric with respect to $\{,\}_{\lambda},$ i.e. satisfies $\{u,\varphi(v)\}_{\lambda}=\{v,\varphi(u)\}_{\lambda}$ for $u,v\in\underline{\omega}_{x}$. By Kodaira-Spencer, such a $\varphi$ represents a tangent vector to $M$ at $x,$ and the projection $(\varphi,\psi)\mapsto\varphi$ corresponds to the map $d\beta:\mathcal{T}M_{y}^{\Sigma}\to\mathcal{T}M_{x}.$ We can write $\varphi$ as a tuple $(\varphi_{\tau})$ where $\tau\in\mathscr{I}$ and $\varphi_{\tau}\in\mathrm{Hom}(\mathcal{P}_{x,\tau},\mathcal{H}_{x,\tau}/\mathcal{P}_{x,\tau})$.
The symmetry condition and the fact that $\{,\}_{\lambda}$ induces a perfect pairing between $\mathcal{P}_{x,\bar{\tau}}$ and $\mathcal{H}_{x,\tau}/\mathcal{P}_{x,\tau}$ for every $\tau\in\mathscr{I},$ imply that $\varphi_{\bar{\tau}}$ is determined by $\varphi_{\tau}$, and that for a given choice of representatives~$\tau$ for the pairs $\{\tau,\bar{\tau}\}\in\mathscr{I}^{+}$ the $\varphi_{\tau}$ may be chosen arbitrarily. \item $\psi\in\mathrm{Hom}_{\mathcal{O}_{K}}(\mathcal{N}_{y},\mathcal{H}_{x}[V]/\mathcal{N}_{y})$ satisfies $\varphi|_{\mathcal{N}_{y}}=\psi\mod\underline{\omega}_{x}.$ \end{itemize} The second condition means that $\psi=(\psi_{\tau})$ where the $\tau$ range over the same set of representatives for $\{\tau,\bar{\tau}\}\in\Sigma$ as above, $\psi_{\tau}\in\mathrm{Hom}(\mathcal{N}_{y,\tau},\mathcal{H}_{x,\tau}[V]/\mathcal{N}_{y,\tau})$ and $\varphi_{\tau}|_{\mathcal{N}_{y,\tau}}=\psi_{\tau}\mod\mathcal{P}_{x,\tau}$. The tangent space to the fiber $\beta^{-1}(x)$ is the space of $(\varphi,\psi)$ with $\varphi=0,$ or, alternatively, the space \[ \{(\psi_{\tau})|\,\psi_{\tau}\in\mathrm{Hom}(\mathcal{N}_{y,\tau},\mathcal{P}_{x,\tau}[V]/\mathcal{N}_{y,\tau})\}. \] \begin{thm} There exists a unique smooth $p$-foliation $\mathscr{F}^{\Sigma}$ on $M^{\Sigma}$, characterized by the fact that at any geometric point $y\in M^{\Sigma}(k)$ as above, $\mathscr{F}_{y}^{\Sigma}$ is the subspace \[ \mathscr{F}_{y}^{\Sigma}=\{(\varphi,\psi)|\,\psi=0\}. \] The foliation $\mathscr{F}^{\Sigma}$ agrees with $\mathscr{F}_{\Sigma}$ on $\beta^{-1}(M_{\Sigma})\simeq M_{\Sigma}$ and, in general, is transversal to the fibers of $\beta$. \end{thm} \begin{proof} A straightforward adaptation of the proof of Proposition 20 in \cite{G-dS1} 4.3. \end{proof} Intuitively, $\mathscr{F}_{y}^{\Sigma}$ consists of the directions in $\mathcal{T}M_{y}^{\Sigma}$ in which $\mathcal{N}_{y}$ does not undergo any infinitesimal deformation.
By the transversality just mentioned, its projection to $\mathcal{T}M_{x}$ is injective, and identifies $\mathscr{F}_{y}^{\Sigma}$ with the subspace of $\varphi=(\varphi_{\tau})_{\tau\in\mathscr{I}}$ such that for every $\{\tau,\bar{\tau}\}\in\Sigma$ (and $\tau$ a representative as above) $\varphi_{\tau}(\mathcal{N}_{y,\tau})=0.$ This subspace, however, varies with $y\in\beta^{-1}(x).$ \begin{thebibliography}{E-SB-T} \bibitem[A-G]{A-G}Andreatta, F., Goren, E.Z.: \emph{Hilbert modular forms: mod p and p-adic aspects}, Mem. Amer. Math. Soc. \textbf{173} (2005), no. 819. \bibitem[Bo]{Bo}Bost, J.-B.: \emph{Algebraic leaves of algebraic foliations over number fields}, Publ. Math. IHÉS \textbf{93} (2001), 161-221. \bibitem[Br]{Br}Brunella, M.: \emph{Birational geometry of foliations}, IMPA Monographs vol. \textbf{1}, Springer, 2015. \bibitem[DK]{DK}Diamond, F., Kassaei, P.L.: \emph{Minimal weights of Hilbert modular forms in characteristic $p$}, Compos. Math. \textbf{153} (2017), 1769-1778. \bibitem[Ek]{Ek}Ekedahl, T.: \emph{Foliations and inseparable morphisms}, in: \emph{Algebraic Geometry}, Bowdoin 1985, Proc. Symp. Pure Math. \textbf{46 (2)}, 139-149 (1987). \bibitem[E-SB-T]{E-SB-T}Ekedahl, T., Shepherd-Barron, N.I., Taylor, R.L.: \emph{A conjecture on the existence of compact leaves of algebraic foliations}, preprint. \bibitem[Go]{Go}Goren, E.Z.: \emph{Hasse invariants for Hilbert modular varieties}, Israel J. Math. \textbf{122} (2001), 157-174. \bibitem[G-dS1]{G-dS1}Goren, E.Z., de Shalit, E.: \emph{Foliations on unitary Shimura varieties in positive characteristic}, Compos. Math. \textbf{154} (2018), 2267-2304. \bibitem[G-K]{G-K}Goren, E.Z., Kassaei, P.L.: \emph{Canonical subgroups over Hilbert modular varieties}, J. Reine Angew. Math. \textbf{670} (2012), 1-63. \bibitem[G-O]{G-O}Goren, E.Z., Oort, F.: \emph{Stratifications of Hilbert modular varieties}, J. Algebraic Geom. \textbf{9} (2000), 111-154.
\bibitem[H]{H}Hartshorne, R.: \emph{Algebraic Geometry}, Graduate Texts in Mathematics, No. 52, Springer-Verlag, New York-Heidelberg, 1977. \bibitem[Jac]{Jac}Jacobson, N.: \emph{Basic Algebra II}, Dover Books in Mathematics, 2009. \bibitem[Ka]{Ka}Katz, N.: \emph{$p$-adic $L$-functions for CM fields}, Invent. Math. \textbf{49} (1978), 199-297. \bibitem[Ka-Tur]{Ka-Tur}Katz, N.: \emph{Nilpotent connections and the monodromy theorem: Applications of a result of Turrittin}, Publ. Math. IHÉS \textbf{39} (1970), 175-232. \bibitem[Ki-Ni]{Ki-Ni}Kimura, T., Niitsuma, H.: \emph{On Kunz's conjecture}, J. Math. Soc. Japan \textbf{34} (1982), 371-378. \bibitem[Lan]{Lan}Lan, K.-W.: \emph{Arithmetic compactifications of PEL-type Shimura varieties}, London Mathematical Society Monographs \textbf{36}, Princeton, 2013. \bibitem[Li]{Li}Livneh, G.: \emph{Purely inseparable Galois morphisms and smooth foliations}, M.Sc. thesis, the Hebrew University, 2021. \bibitem[Mi-Pe]{Mi-Pe}Miyaoka, Y., Peternell, T.: \emph{Geometry of higher dimensional algebraic varieties}, Birkhäuser, 1997. \bibitem[Mo]{Mo}Moonen, B.: \emph{Serre-Tate theory for moduli spaces of PEL type}, Ann. Sci. Éc. Norm. Supér. \textbf{37} (2004), 223-269. \bibitem[Mo2]{Mo2}Moonen, B.: \emph{Group schemes with additional structures and Weyl group elements}, in: \emph{Moduli of abelian varieties}, C. Faber, G. van der Geer, F. Oort, eds., Progress in Mathematics \textbf{195}, Birkhäuser, 2001, 255-298. \bibitem[Mu]{Mu}Mumford, D.: \emph{Abelian Varieties}, Tata Institute of Fundamental Research, Oxford Univ. Press, 1970. \bibitem[Pa]{Pa}Pappas, G.: \emph{Arithmetic models for Hilbert modular varieties}, Compos. Math. \textbf{98} (1995), 43-76. \bibitem[Ra]{Ra}Rapoport, M.: \emph{Compactifications de l'espace de modules de Hilbert-Blumenthal}, Compos. Math. \textbf{36} (1978), 255-335.
\bibitem[R-T]{RT}Rousseau, E., Touzet, F.: \emph{Curves in Hilbert modular varieties}, Asian J. Math. \textbf{22} (2018), no. 4, 673-689. \bibitem[St]{St}Stamm, H.: \emph{On the reduction of Hilbert-Blumenthal moduli schemes with $\Gamma_{0}(p)$-level structure}, Forum Math. \textbf{9} (1997), 405-455. \bibitem[Stacks]{Stacks}\emph{The Stacks project}, https://stacks.math.columbia.edu \bibitem[V-W]{V-W}Viehmann, E., Wedhorn, T.: \emph{Ekedahl-Oort and Newton strata for Shimura varieties of PEL type}, Math. Ann. \textbf{356} (2013), 1493-1550. \bibitem[Wed]{Wed}Wedhorn, T.: \emph{Ordinariness in good reductions of Shimura varieties of PEL-type}, Ann. Sci. Éc. Norm. Supér. \textbf{32} (1999), 575-618. \bibitem[Wed2]{Wed2}Wedhorn, T.: \emph{The dimension of Oort strata of Shimura varieties of PEL-type}, in: \emph{Moduli of abelian varieties}, C. Faber, G. van der Geer, F. Oort, eds., Progress in Mathematics \textbf{195}, Birkhäuser, 2001, 441-471. \bibitem[Yu]{Yuan}Yuan, S.: \emph{Inseparable Galois theory of exponent one}, Trans. Amer. Math. Soc. \textbf{149} (1970), 163-170. \end{thebibliography} \end{document}
2205.00674v1
http://arxiv.org/abs/2205.00674v1
Theorem of resonance of Small Volume High Contrast multilayered materials
\documentclass[a4paper,12pt,sort&compress]{article} \usepackage{graphicx} \usepackage{subcaption} \usepackage[english]{babel} \usepackage[square,numbers,comma]{natbib} \usepackage{caption} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{bm} \usepackage{color} \usepackage{authblk} \usepackage{mathrsfs} \addtolength{\topmargin}{-2cm} \providecommand{\keywords}[1] { \small \textbf{\textit{Keywords---}} #1 } \title{Theorem of resonance of Small Volume High Contrast multilayered materials} \date{} \newtheorem{lem:convergence}{Lemma}[section] \newtheorem{thm:eigenvalueAppr}{Theorem}[section] \newtheorem{thm:eigenvalueAppr1}{Theorem}[section] \newtheorem{thm:eigenvalueAppr01}{Theorem}[section] \newtheorem{thm:eigenvalueAppr02}{Theorem}[section] \newtheorem{thm:eigenvalueAppr2}{Theorem}[section] \newtheorem{cor:spcase}{Corollary}[section] \newtheorem{cor:shari}{Corollary}[section] \newtheorem{cor:nano}{Corollary}[section] \author[1]{Taoufik Meklachi, [email protected]} \affil[1]{School of Science, Engineering, and Technology, Penn State Harrisburg} \begin{document} \maketitle \keywords{Spectroscopy, multilayered materials, nonlinear eigenvalue problems, resonance formula, high contrast material, compact operators, novel materials.} \begin{abstract} The need to mathematically formulate relations between a composite material's properties and its resonance response is growing, owing to the fast technological advancement in micro-material manufacturing, present in chips for instance. In this paper two theorems are presented, providing formulas for the scattering resonance of double-layered and multilayered small volumes in terms of the susceptibility coefficient, assumed to be high, and the geometric characteristics. Spectroscopy measurements of the composite medium can exploit the formula to detect its dimension and susceptibility index.
\end{abstract} \section{Introduction} Non-linear spectral analysis of scattering resonances has been a growing field attracting increasing interest due to vital applications in imaging (spectroscopy, for instance) and material science. In particular, efficient material design at the microscopic level and a refined choice of the material's components unarguably enhance the quality and function of the composite material. In this paper, I present a first-order approximation of the scattering resonance of a small multilayered volume with high contrast. The layers are concentric and of arbitrary number. Furthermore, the shape of the small volume is also arbitrary, which gives the formula a wide scope of usefulness. The results in this work are a smooth transition from the asymptotic formula for a single small volume with high contrast, elaborated in ref. \cite{Meklachi2018}, to a high contrast small volume composed of multiple concentric media. The scaling technique used in this paper is the same as that employed in ref. \cite{Meklachi2016}. Data mining and artificial intelligence, on the other hand, were introduced to advance materials manufacturing via an extensive library of materials' property data. This approach is especially useful when the mathematical-physics model is too complex to solve or the resulting novel material manifests anomalous phenomena violating physical laws, as in the unnatural bending of incident electromagnetic waves in metamaterials, which violates Snell's law~\cite{Meklachi2016,Milton2006,Bouchitt2010,Bruno2007,Cai2007,Greenleaf2009,Vasquez2009,Kohn2010,Kohn2008,Lai2009,Liu2009,McPhedran2009}. For instance, in ref. \cite{Liu2015} machine learning is adopted to study the elastic localization linkages in high-contrast composite materials. A substantial body of literature is referenced therein.
Most recent contributions to the study of high contrast media can be found in \cite{Meklachi2022}\cite{Ammari2021}\cite{Challa2019}. In section \ref{asymsec} the scattering resonance for two concentric layers is derived, which serves as a basis to generalize the formula to any number of concentric layers in section \ref{asymsec1}. \section{The double layered case}\label{asymsec} Consider a small 3D volume $B_h$ in vacuum, containing the origin, with arbitrary shape and a dielectric susceptibility $\eta(x)$ such that $$\eta(x)=\chi_{hB}(x)\frac{\eta_0(x)}{h^2} $$ Suppose $B_h$ is optically inhomogeneous with two concentric high contrast media $B_1$, the inner medium, and $B_2$, the outer one, such that \[hB=hB_1\cup hB_2, \] \[\eta(x)=\chi_{hB_1}(x)\eta^1 +\chi_{hB_2}(x){\eta^2}=\chi_{hB_1}(x)\frac{\eta_0^1}{h^2} +\chi_{hB_2}(x)\frac{\eta_0^2}{h^2},\] and \[\eta_0(x)=\chi_{B_1}(x)\eta_0^1 +\chi_{B_2}(x){\eta_0^2}\] for constants $\eta^1$, $\eta^2$, $\eta_0^1$ and $\eta_0^2$. First, I derive an asymptotic formula for the scattering resonances $\lambda_h$ of the scaled-down volume $hB$ with two concentric media. This formula will later be generalized to an arbitrary number of concentric layers. The field $u$ satisfies the Helmholtz equation: \begin{equation}\label{helm} \Delta u+k^2(1+\eta)u=0 \end{equation} where $k$ is the wave number and $\lambda=k^2$ is the spectral parameter.
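The first-order correction in $h$ derived below hinges on how the scattering kernel $\exp(i\sqrt{\lambda}h|x-y|)/|x-y|$ deviates from its static limit $1/|x-y|$. As a quick numerical sanity check (an illustrative sketch, not part of the paper's argument), the following snippet confirms that this deviation equals $-i\sqrt{\lambda}h$ up to an $\mathcal{O}(h^2)$ remainder, for sample values of the spectral parameter and the distance $r=|x-y|$:

```python
import cmath

def kernel_gap(lam, h, r):
    """Difference (1 - exp(i*sqrt(lam)*h*r))/r between the limiting kernel
    1/r and the h-dependent kernel exp(i*sqrt(lam)*h*r)/r (prefactors omitted)."""
    return (1 - cmath.exp(1j * cmath.sqrt(lam) * h * r)) / r

lam, r = 2.0, 0.7          # sample spectral parameter and distance |x - y|
for h in (1e-1, 1e-2, 1e-3):
    remainder = kernel_gap(lam, h, r) - (-1j * cmath.sqrt(lam) * h)
    # |remainder| / h^2 stays bounded (about lam*r/2), confirming O(h^2)
    print(h, abs(remainder) / h**2)
```

The ratio printed in the last column stabilizes as $h\to 0$, which is exactly the second-order remainder behavior used in the proofs below.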
The scaling technique of \cite{Meklachi2018}, applied to the Lippmann-Schwinger integral form of the solution of \eqref{helm}, results in the non-linear eigenvalue problem \begin{equation}\label{eq:nonlinL} \lambda_h T_h(\lambda_h)u_h=u_h \end{equation} where \begin{align}\label{eq:Th1} T_h(\lambda)(u)({x})&=\frac{\eta_0^1}{4\pi}\int_{B_1}\frac{\exp(i\sqrt{\lambda}h|x-y|)}{|x-y|}u(y)dy\\ &+\frac{\eta_0^2}{4\pi}\int_{B_2}\frac{\exp(i\sqrt{\lambda}h|x-y|)}{|x-y|}u(y)dy \end{align} The limiting form of equation \eqref{eq:nonlinL} \begin{equation}\label{eig1} \lambda_0 T_0(u_0)=u_0 \end{equation} as $h \rightarrow 0$ is a linear eigenvalue problem where \begin{equation}\label{eq:T00} T_0(u)({x})=\frac{\eta_0^1}{4\pi}\int_{B_1}\frac{1}{|x-y|}u(y)dy+\frac{\eta_0^2}{4\pi}\int_{B_2}\frac{1}{|x-y|}u(y)dy \end{equation} Let \[U_1=\int_{B_1}u_0(x)dx \quad \text{and }\quad U_2=\int_{B_2}u_0(x)dx\] The following theorem provides a first-order approximation to $\lambda_h$ for a small volume with two concentric high contrast media. \begin{thm:eigenvalueAppr01}\label{thm5} Let $U$ be a domain bounded away from the negative real axis in $\mathbb{C}$. Let $T_0$ and $T_h(\lambda)$ be two linear compact operators from $L^2(B)$ to $L^2(B)$ defined by \eqref{eq:T00} and \eqref{eq:Th1}, respectively. Let $\lambda_0\neq0$ in $U$ be a simple eigenvalue of $T_0$, and let $u_0$ be the normalized eigenfunction. Then for $h$ small enough, there exists a nonlinear eigenvalue $\lambda_h$ of $T_h$ satisfying the formula: \begin{equation}\label{asymptotic2lay} \lambda_h=\lambda_0-i\frac{{\lambda_0}^\frac{5}{2}}{4\pi}\left[{\eta_0^1U_1^2+\eta_0^2U_2^2+(\eta_0^1+\eta_0^2)U_1U_2}\right]h+\mathcal{O}(h^2). \end{equation} \end{thm:eigenvalueAppr01} \begin{proof} This proof uses Lemma 2.1 and Theorem 2.1 in ref. \cite{Meklachi2018}. First, $T_h(\lambda)$ converges uniformly in the $L^2(B)$ norm to $T_0$ as $h\rightarrow0$.
This can be shown by using Lemma 2.1 to first establish uniform convergence on $L^2(B_1)$ and $L^2(B_2)$; the triangle inequality for the $L^2$-norm then establishes uniform convergence on $L^2(B)$. In fact, there exist positive $C_1$ and $C_2$ such that \begin{align*} ||T_h(\lambda)-T_0||&\leq||(T_h(\lambda)-T_0)|_{B_1}+(T_h(\lambda)-T_0)|_{B_2}||\\ &\leq||T_h(\lambda)-T_0||_{L^2(B_1)}+||T_h(\lambda)-T_0||_{L^2(B_2)}\\ &\leq C_1h+C_2h \quad \text{by Lemma 2.1}\\ &=(C_1+C_2)h \end{align*} Theorem 2.1 in \cite{Meklachi2018} provides a first-order approximation of $\lambda_h$ given by \begin{equation}\label{th2.1form} \lambda_h=\lambda_0+{\lambda_0}^2\large\left\langle(T_0-T_h(\lambda_0))u_0,u_0\large\right\rangle+\mathcal{O}(h^2) \end{equation} In particular we have \begin{align}\label{display} (T_0-T_h(\lambda))(u_0)(x)&=\frac{\eta_0^1}{4\pi}\int_{B_1}\big(1-\exp(i\sqrt{\lambda}h|x-y|)\big) \frac{u_0(y)}{|x-y|}dy\\ &+\frac{\eta_0^2}{4\pi}\int_{B_2}\big(1-\exp(i\sqrt{\lambda}h|x-y|)\big) \frac{u_0(y)}{|x-y|}dy\label{display2} \end{align} Taylor expansion of the function $h\mapsto \exp(i\sqrt{\lambda}h|x-y|)$ to first order gives \[\big(1-\exp(i\sqrt{\lambda}h|x-y|)\big) \frac{1}{|x-y|}=-i\sqrt{\lambda}h+\mathcal{O}(h^2)\] Substituting in \eqref{display} and \eqref{display2}, with $\lambda=\lambda_0$, we obtain \begin{equation*} (T_0-T_h(\lambda_0))(u_0)(x)=-i\frac{{\lambda_0}^\frac{1}{2}}{4\pi}(\eta_0^1U_1+\eta_0^2U_2)h+\mathcal{O}(h^2) \end{equation*} Finally, plugging the last expression into \eqref{th2.1form} yields formula \eqref{asymptotic2lay}. \end{proof} A useful formulation in spectroscopy applications when computing the volume of $B_h$ could be \begin{equation}\label{asymptotic2lay1} \lambda_h=\lambda_0-i\frac{{\lambda_0}^\frac{5}{2}}{4\pi}\left[{(\eta_0^1U_1+\eta_0^2U_2)U_0}\right]h+\mathcal{O}(h^2).
\end{equation} where \[U_0=\int_{B}u_0(x)dx.\] The special case when $\eta_0^1=\eta_0^2$, hence $\eta^1=\eta^2$, recovers exactly the asymptotic formula for the resonance derived for a single small-volume, high contrast scatterer in ref. \cite{Meklachi2018}, which reads \begin{equation}\label{asymptotic} \lambda_h=\lambda_0-i\frac{\eta_0}{4\pi}{\lambda_0}^\frac{5}{2}{U_0}^2h+\mathcal{O}(h^2). \end{equation} \section{The multilayered resonance Theorem}\label{asymsec1} Let $B_h$ be a small 3D volume that contains the origin with arbitrary shape and dielectric susceptibility $\eta(x)$ such that $$\eta(x)=\chi_{hB}(x)\frac{\eta_0(x)}{h^2} $$ Suppose $B_h$ is composite and optically inhomogeneous with $n$ layers of concentric high contrast media $\{B_i\}_{1\leq i\leq n}$, such that \[hB=\cup_{i=1}^{i=n} hB_i, \] \[\eta(x)=\sum_{i=1}^{i=n}\chi_{hB_i}(x)\eta^i =\sum_{i=1}^{i=n}\chi_{hB_i}(x)\frac{\eta_0^i}{h^2},\] and \[\eta_0(x)=\sum_{i=1}^{i=n}\chi_{B_i}(x)\eta_0^i\] for constants $\eta^i$, $\eta_0^i$. The field $u$ satisfies the Helmholtz equation \eqref{helm} and, similarly to the double-layer case, the problem can be formulated as an integral non-linear eigenvalue problem \eqref{eq:nonlinL} where \begin{equation} T_h(\lambda)(u)({x})=\sum_{k=1}^{k=n}\frac{\eta_0^k}{4\pi}\int_{B_k}\frac{\exp(i\sqrt{\lambda}h|x-y|)}{|x-y|}u(y)dy \end{equation} having the limiting linear eigenvalue problem \eqref{eig1} of the operator \begin{equation} T_0(u)({x})=\sum_{k=1}^{k=n}\frac{\eta_0^k}{4\pi}\int_{B_k}\frac{1}{|x-y|}u(y)dy \end{equation} as $h\rightarrow 0$. \begin{thm:eigenvalueAppr02} Assume that the hypotheses in Theorem \ref{thm5} hold.
The scattering resonance satisfying \eqref{eq:nonlinL} is given by \begin{equation}\label{asymptotic2lay2} \lambda_h=\lambda_0-i\frac{{\lambda_0}^\frac{5}{2}}{4\pi}\left[{U_0\sum_{k=1}^{k=n}\eta_0^kU_k}\right]h+\mathcal{O}(h^2) \end{equation} where \[U_0=\int_{B}u_0(x)dx,\] and $$U_k=\int_{B_k}u_0(x)dx, \quad 1\leq k\leq n.$$ \end{thm:eigenvalueAppr02} \begin{proof} Uniform convergence of $T_h(\lambda)$ to $T_0$ as $h \rightarrow 0$ can be shown in the same way as in the proof of Theorem \ref{thm5}. Furthermore, and similarly to the two-layer case, we have \begin{align*} (T_0-T_h(\lambda))(u_0)(x)&=\sum_{k=1}^{k=n}\frac{\eta_0^k}{4\pi}\int_{B_k}\big(1-\exp(i\sqrt{\lambda}h|x-y|)\big) \frac{u_0(y)}{|x-y|}dy\\ &=-i\frac{{\lambda_0}^\frac{1}{2}}{4\pi}(\sum_{k=1}^{k=n}\eta_0^kU_k)h+\mathcal{O}(h^2) \end{align*} which, substituted into formula \eqref{th2.1form}, gives the asymptotic expression \eqref{asymptotic2lay2}. \end{proof} \newpage \bibliographystyle{siam} \bibliography{Spectral2} \end{document}
2205.00648v1
http://arxiv.org/abs/2205.00648v1
Edge Resolvability of Crystal Cubic Carbon Structure
\documentclass{article} \usepackage{latexsym, amsmath, amssymb} \usepackage{graphicx} \usepackage{epstopdf} \usepackage{amsmath, amsthm, amscd, amsfonts} \usepackage{amsmath} \usepackage{amssymb} \usepackage{array} \usepackage{multirow} \usepackage{float} \usepackage{enumerate} \usepackage[a4paper, total={6in, 10in}]{geometry} \usepackage{placeins} \newtheorem{thm}{Theorem}[section] \newtheorem{exm}{Example}[section] \newtheorem{prop}{Proposition}[section] \newtheorem{rem}{Remark}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}{Lemma} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \theoremstyle{question} \newtheorem{ques}[thm]{Question} \theoremstyle{remark} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\Real}{\mathbb R} \newcommand{\eps}{\varepsilon} \newcommand{\To}{\longrightarrow} \newcommand{\BX}{\mathbf{B}(X)} \newcommand{\A}{\mathcal{A}} \newcommand{\RR}{{\mathbb R}} \newcommand{\DD}{{\mathbb D}} \newcommand{\BB}{{\mathbb B}} \newcommand{\CC}{{\mathbb C}} \newcommand{\ZZ}{{\mathbb Z}} \newcommand{\QQ}{{\mathbb Q}} \newcommand{\NN}{{\mathbb N}} \newcommand{\bsk}{{\bigskip}} \newcommand{\gap}{\vspace {5pt}} \newcommand{\midskip}{\vspace {10pt}} \newcommand{\biggap}{\vspace {15pt}} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \renewcommand{\thethm}{\arabic{thm}} \makeatletter \newcommand*{\rom}[1]{\expandafter\@slowromancap\romannumeral #1@} \makeatother \def\smskip {\par \vskip 5pt} \def\QED {\hfill $\Box$\smskip} \def\a{\alpha} \def\vp{\varphi} \def\msk{\medskip} \def\ol{\overline} \def\bege{\begin{equation}} \def\ende{\end{equation}} \def\bsk{\bigskip} \def\cent{\centerline} \def\a{\alpha} \def\om{\omega} \def\b{\beta} \def\g{\Gamma}\def\ol{\overline} \def\ve{\varepsilon} \def\d{\Gamma} \def\pt{\partial} \def\bin{\atopwithdelims} \def\begr{\begin{eqnarray}} \def\endr{\end{eqnarray}} \def\qand{\quad\mbox{ and }\quad} 
\def\qfor{\quad\mbox{ for }\quad} \def\qie{\quad\mbox{ i.e. }\quad} \def\bege{\begin{equation}} \def\ende{\end{equation}} \def\begr{\begin{eqnarray}} \def\endr{\end{eqnarray}} \def\bnum{\begin{enumerate}} \def\enum{\end{enumerate}} \begin{document} \begin{center} \textbf{ Edge Resolvability of Crystal Cubic Carbon Structure} \end{center} \begin{center} Sahil Sharma$^{1}$, Vijay Kumar Bhat$^{2,\ast}$, Sohan Lal$^{3}$ \end{center} \begin{center} School of Mathematics,\end{center} \begin{center}Shri Mata Vaishno Devi University,\end{center}\begin{center}Katra-$182320$, Jammu and Kashmir, India. \end{center} \begin{center} 1. [email protected] 2. [email protected] 3. [email protected] \end{center} \hspace{-5.0mm}\textbf{Abstract} Chemical graph theory is commonly used to analyse and comprehend chemical structures and networks, as well as their features. The resolvability parameters of a graph $G=(V,E)$ form a relatively new and advanced field in which the complete structure is built so that each vertex (atom) or edge (bond) represents a distinct position. In this article, we study a resolvability parameter, namely the edge resolvability, of the chemical graph of the crystal structure of cubic carbon $CCS(n)$.\\\\ \hspace{-5.0mm}\textbf{Keywords}: Edge metric dimension, metric dimension, crystal cubic carbon, cubes\\\\ \textbf{MSC(2020):} 13A99, 05C12\\\\ \hspace{-5.0mm}\textbf{$1$. Introduction }\\ Mathematical applications have become increasingly popular in recent years. Graph theory has rapidly grown both in theoretical conclusions and in applicability to real-life problems, as a handy tool for dealing with relations between objects. Distance is a concept that pervades all of graph theory, and it is utilized in isomorphism tests, graph operations, maximal and minimal connectivity problems, and diameter problems. Several characteristics linked to graph distances have piqued the interest of several scholars.
One of them is the metric dimension.\\ The notion of \textit{metric dimension} (MD) was introduced independently by Slater \cite{li} and by Melter and Harary \cite{ki}. They termed an ordered subset $K =\{\alpha_{1},\alpha_{2},\alpha_{3},\ldots,\alpha_{m}\}$ of vertices of a graph $G = (V,E)$ a resolving set if the representation $r(u|K) = (d(u,\alpha_1),d(u,\alpha_2),\ldots,d(u,\alpha_m))$ is unique for each $u\in V$, where $d(u,\alpha_{m})$ is the length of a shortest path between $u$ and $\alpha_{m}$. The cardinality of a minimal resolving set is called the MD (locating number) of $G$, and is denoted by $dim(G)$. Chartrand et al. \cite{ti} studied the MD of path graphs $(P_n)$ and complete graphs $(K_n)$. Later on, Caceres et al. \cite{pi} studied the MD of the Cartesian product of graphs, hypercubes, and cycle graphs. The MD of regular graphs, Jahangir graphs, prism graphs, and circulant graphs has been examined by several other authors. In chemistry, MD is used to give a series of compounds a unique representation, which aids drug discovery. Other applications of MD include navigation, combinatorial optimization, and games such as coin weighing \cite{ti,VK}.\\ In connection with the study of a new variant of metric generators in graphs, Kelenc et al. \cite{len} introduced the \textit{edge metric dimension of a graph} (EMD). This is based on the fact that an ordered subset $H$ of vertices of $G =(V,E)$ is called an edge resolving set (edge metric generator) if, for $e_1 \neq e_2 \in E$, there is a vertex $h \in H$ such that $d(e_1,h) \neq d(e_2,h)$, where $d(e,h) = \min\{d(a,h),d(b,h)\}$ for $ab = e \in E$ and $h \in H$. The cardinality of the smallest edge metric generator (edge basis) is called the EMD and is denoted by $edim(G)$. After that, Zhang and Gao \cite{fd} discussed the EMD of some convex polytopes. The EMD of path graphs, complete graphs, bipartite graphs, wheel graphs, and many others has been studied. Recently, Knor et al. \cite{mar} studied graphs with $edim(G) < dim(G)$.
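To make the two definitions concrete, the following brute-force computation (an illustrative sketch, not taken from the cited works) determines both $dim(G)$ and $edim(G)$ for the graph of the cube $Q_3$, which is the graph of $CCS(1)$: subsets of vertices are tested in increasing size until one yields distinct distance vectors for all vertices (MD) or for all edges (EMD).

```python
from itertools import combinations
from collections import deque

def distances(adj):
    """All-pairs shortest-path distances via BFS from every vertex."""
    D = {}
    for s in adj:
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        D[s] = d
    return D

def min_resolving(adj, items, code):
    """Smallest k such that some k-subset S of vertices assigns
    pairwise-distinct codes code(S, item) to all items."""
    V = sorted(adj)
    for k in range(1, len(V) + 1):
        for S in combinations(V, k):
            if len({code(S, it) for it in items}) == len(items):
                return k

# Cube graph Q3 = CCS(1): vertices are 3-bit integers, edges flip one bit.
adj = {v: [v ^ (1 << b) for b in range(3)] for v in range(8)}
D = distances(adj)
edges = sorted({tuple(sorted((u, w))) for u in adj for w in adj[u]})

dim_ = min_resolving(adj, sorted(adj),
                     lambda S, v: tuple(D[s][v] for s in S))
edim = min_resolving(adj, edges,
                     lambda S, e: tuple(min(D[s][e[0]], D[s][e[1]]) for s in S))
print(dim_, edim)  # both equal 3 for the cube graph
```

The output matches the known value $dim(CCS(1))=3$ and the value $edim(CCS(1))=3$ established in Section 3.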
After the introduction of EMD by Kelenc et al. \cite{len}, several other authors \cite{den,se,Kg,SS,sk} have studied it thoroughly.\\ The structure of a chemical molecule is commonly described by a group of functional groups placed on a substructure. The structure is a graph-theoretic labelled graph, with vertex and edge labels representing atom and bond types, respectively. A collection of compounds is primarily defined by the substructure common to them, which is characterised by modifying the set of functional groups and/or permuting their locations. Traditionally, these “positions” simply reflect uniquely defined atoms (vertices) of the substructure (common subgraph). These positions seldom form a minimum set $K$ for which every two distinct vertices have distinct ordered $k$-tuples, forming a minimum-dimensional representation of the positions definable on the common subgraph. In this context, $K$ is referred to as a resolving set relative to $V(G)$.\\ We can assess whether any two compounds in the collection share the same functional group at a given position using the standard pattern. This comparison is important when analyzing whether a compound's characteristics are responsible for its pharmacological activity \cite{pi,ti}. Sharma et al. \cite{CP} studied the mixed metric resolvability of chemical compound polycyclic aromatic hydrocarbon networks. The problem of computing the edge metric dimension of $CCS(n)$ and certain generalisations of these graphs is discussed in this work. One of the forms of carbon, namely diamond, is anticipated to convert into the $C_8$ structure, which is a cubic body-centered structure having 8 atoms in its unit cell, at pressures exceeding 1000 GPa (gigapascal). Its composition is comparable to that of cubane, and it is found in one of silicon's metastable phases. Carbon sodalite was postulated as the structure of this phase in 2012.
This cubic carbon phase could be useful in astronomy.\\ Several authors have studied different topological properties of $CCS(n)$. Yang et al. \cite{Yan} studied the augmented Zagreb index, forgotten index, Balaban index, and redefined Zagreb indices of $CCS(n)$. Baig et al. \cite{BA} computed degree-based additive topological indices, mainly the atom bond connectivity index, geometric index, first and second Zagreb indices, etc. Further, Imran et al. \cite{IMA} studied the eccentricity-based bond connectivity index and the eccentricity-based geometric arithmetic index of $CCS(n)$. For further studies concerning $CCS(n)$ one can refer to \cite{ZHA,ZAH} and the references therein.\\ Zhang and Naeem \cite{Nem} studied the metric dimension of $CCS(n)$. They showed that for $n = 1$, $dim(CCS(n))$ is $3$, and for $n\geq 2$, $dim(CCS(n))$ is $ 7^{n-2}\times 16$. To the best of our knowledge, no work has been reported regarding the edge metric basis and edge metric dimension (EMD) of $CCS(n)$. For this, we consider the baseline paper of Zhang and Naeem \cite{Nem} and further extend the result with comparisons. In this study we compute the EMD of $CCS(n)$ and also show that the MD and EMD of $CCS(n)$ are the same.\\ \hspace{-5.0mm}\textbf{2. Crystal Cubic Carbon Structure $CCS(n)$}\\ The valency of carbon allows it to create a variety of allotropes. Diamond and graphite are well-known forms of carbon, while crystal cubic carbon, also known as pcb, is one of its potential allotropes. The chemical structure of $CCS(n)$ is constructed as follows: \begin{enumerate} \item $CCS(n)$ starts from one unit cube, also called $CCS(1)$ or the central cube, as shown in Fig. 1. \item At the second level all eight vertices of $CCS(1)$ are attached to different cubes through edges (bridge edges), which results in the construction of $CCS(2)$ as shown in Fig. 2.
\item Again, at the next level all the $7\times 8$ vertices of degree $3$ of $CCS(2)$ are attached to different cubes through edges, which results in the construction of $CCS(3)$. \item Similarly, we construct $CCS(n)$ by attaching new cubes to the vertices of degree 3 at the preceding level, as shown in Fig. 3.\\ \end{enumerate} At each level, we obtain a new set of cubes attached to the vertices of degree 3 of the cubes of the previous level. These cubes are referred to as the outermost cubes. In $CCS(2)$, we can also see that there are eight outermost cubes and $ 7\times 8$ vertices of degree 3. So at the third level, $7\times 8$ cubes are attached. Similarly, there are $7^{n-2}\times8$ vertices of degree 3 at each subsequent level. So, this process is repeated again to get the next level.\\ \hspace{-5.0mm}\textbf{2.1 Origin of Crystal Cubic Carbon Structure $CCS(n)$}\\ Carbon, as one of the elements that have been known and used on the planet since ancient times, continues to draw attention due to its wide range of industrial and commercial applications, ranging from cutting tools to electrical and optoelectronic materials, and so on. Carbon exhibits the adaptability of $sp$, $sp^2$, and $sp^3$ hybridization states, resulting in a variety of rich allotropic carbon forms (e.g. hexagonal diamond, amorphous carbon, carbyne, fullerenes, carbon nanotubes, crystal cubic carbon and graphene). Graphite and diamond are two prevalent allotropes of carbon, with the former being the most thermodynamically stable of all known carbon allotropes at ambient temperatures.\\ Diamond is projected to convert into a body-centered cubic structure at ultra-high pressures above 1000 GPa. This phase is significant in astrophysics and the study of the deep cores of planets such as Uranus and Neptune.
In 1979, a super-dense and super-hard material resembling this phase, with the Im3m space group and eight atoms per primitive unit cell, was synthesised and reported \cite{Mat}. It was claimed that the so-called $C_8$ structure, with eight-carbon cubes comparable to cubane in the Im3m space group, had been synthesised, with eight atoms per primitive unit cell or 16 atoms per standard unit cell. The super cubane structure, the BC8 structure, a structure with clusters of four carbon atoms in tetrahedra in space group I43m with four atoms per primitive unit cell (eight per conventional unit cell), and a structure named ``carbon sodalite'' were all discussed in a report published in 2012 \cite{pok}. In 2017, Baig et al. \cite{BA} modified and extended the structure of carbon sodalite and named it crystal cubic carbon $CCS(n)$. $CCS(n)$, as a carbon allotrope, is assumed to have a wide range of industrial applications. For more studies on carbon allotropes one can refer to \cite{ALL,CAL}. \begin{center} \begin{figure}[h!] \centering \includegraphics[width=2.3in]{cu1} \caption{The chemical structure of $CCS(1)$.}\label{cu1} \end{figure} \end{center} \begin{center} \begin{figure}[h!] \centering \includegraphics[width=3.5in]{cu06} \caption{The chemical structure of $CCS(2)$.}\label{cu06} \end{figure} \end{center} \begin{center} \begin{figure}[h!] \centering \includegraphics[width=5.0in]{cube77} \caption{The chemical structure of $CCS(n)$.}\label{cube77} \end{figure} \end{center} \FloatBarrier \hspace{-5.0mm}\textbf{3. Edge metric dimension of $CCS(n)$}\\ The EMD of various graphs, such as the Mobius network, wheel graphs, convex polytopes, and windmill graphs, has been studied. We will discuss the EMD of $CCS(n)$, $n\geq 1$, in this section. \begin{thm} For $n=1$, the edge metric dimension of $CCS(n) = 3$. \end{thm} \begin{proof} First, we claim that $edim(CCS(1)) \leq 3$.
We have labelled the 8 vertices of the cube as $r_i$, $1\leq i \leq 8$, and the edges as $e_i$, $1\leq i \leq 12$, as shown in Fig. 1.\\ Let $R_E = \{r_1,r_2,r_3\}$ be a set of vertices of the graph $CCS(1)$. We show that $R_E$ is an edge metric generator of $CCS(1)$.\\ The representation of each edge $e_i$, $1\leq i \leq 12$, with respect to $R_E$ is given as \begin{align*} r\left[e_1|R_E\right] = (0,0,1) \hspace{3cm}& r\left[e_7|R_E\right] = (0,1,2)\\ r\left[e_2|R_E\right] = (1,0,0) \hspace{3cm}& r\left[e_8|R_E\right] = (1,1,2)\\ r\left[e_3|R_E\right] = (1,1,0) \hspace{3cm}& r\left[e_9|R_E\right] = (2,2,1)\\ r\left[e_4|R_E\right] = (0,1,1) \hspace{3cm}& r\left[e_{10}|R_E\right] = (1,0,1)\\ r\left[e_5|R_E\right] = (1,2,1) \hspace{3cm}& r\left[e_{11}|R_E\right] = (2,1,0)\\ r\left[e_6|R_E\right] = (1,2,2) \hspace{3cm}& r\left[e_{12}|R_E\right] = (2,1,1)\\ \end{align*} The representation of each edge is unique. Therefore $edim(CCS(1)) \leq 3$.\\ \hspace{-5.0mm}Next, we claim that $edim(CCS(1)) \geq 3$.\\ For any graph $G$, $edim(G) = 1$ if and only if $G$ is a path graph; hence $edim(CCS(1)) \neq 1$. Now, if $edim(CCS(1)) = 2$, then an edge metric generator $R_E$ of $CCS(1)$ consists of two vertices (say $R_E= \{r_i,r_j\}$, $i\neq j$). Due to the symmetry of $CCS(1)$, we have two choices for the vertices of $R_E$. \begin{enumerate} \item Both vertices of $R_E$ are on the same face of the cube.\\ Further, we have two choices: \subitem (i) $r_i$ and $r_j$ are on a face diagonal of the cube. \subitem (ii) $r_i$ and $r_j$ are on the same edge of a face of the cube. \item Both vertices of $R_E$ are on a primary diagonal of the cube. \end{enumerate} \textbf{Case 1}: Firstly, when both vertices are on the same face and along a face diagonal, we consider $R_E = \{r_1,r_3\}$. We have $r\left[e_1|R_E\right] = (0,1) = r\left[e_4|R_E\right]$.\\ Secondly, when both vertices are on the same face and along the same edge, we consider $R_E = \{r_2,r_3\}$.
We have $r\left[e_4|R_E\right] = (1,1) = r\left[e_{12}|R_E\right]$.\\ \textbf{Case 2}: When both vertices are on a primary diagonal of the cube, we consider $R_E = \{r_3,r_7\}$. We have $r\left[e_4|R_E\right] = (1,1) = r\left[e_5|R_E\right]$.\\ In each case, we reach a contradiction. Therefore $edim(CCS(1)) \geq 3$. Hence $edim(CCS(1)) =3$.\\ \end{proof} \begin{thm} For $n\geq 2$, the edge metric dimension of $CCS(n)$ is $7^{n-2}\times 16$. \end{thm} \begin{proof} Let $R_E = \{r_1,r_2,r_3,\dots,r_k\}$ be the set of vertices of type $a_1$ and $a_2$ of the outermost cubes $U_n$, as shown in Fig. 4.\\ The cube $U_n$ is an outermost cube and is attached to the chain of cubes through vertex $c$. The degree of vertex $c$ is $4$, and all other vertices are of degree 3. We claim that $R_E$ is an edge metric generator with $k =7^{n-2}\times 16$ (since there are $7^{n-2}\times 8$ cubes in the outermost layer of $CCS(n)$). The representations of any two distinct arbitrary edges of $CCS(n)$ can be compared in the following ways.\\ \begin{center} \begin{figure}[h!] \centering \includegraphics[width=3.0in]{cu2} \caption{Cube $U_n$ (arbitrary) from the outermost cubes of $CCS(n)$.}\label{cu2} \end{figure} \end{center} \FloatBarrier (i) When both the arbitrarily selected edges are on $U_n$ of $CCS(n)$, as shown in Fig. 4.\\ We assume that $r_1 = a_1$ and $r_2 = a_2$, so that $R_E = \{a_1,a_2,r_3,r_4,\dots,r_k\}$.
The representation of each edge $e_i$, $1\leq i \leq 12$, of cube $U_n$ with respect to $R_E$ is given as\\ \hspace{-6.0mm} $r\left[e_1|R_E\right] = \big(0,1,d(c,r_3),d(c,r_4),\dots,d(c,r_k)\big)$\\ $r\left[e_2|R_E\right] = \big(0,2,d(c,r_3)+1,d(c,r_4)+1,\dots,d(c,r_k)+1\big)$\\ $r\left[e_3|R_E\right] = \big(1,2,d(c,r_3)+2,d(c,r_4)+2,\dots,d(c,r_k)+2\big)$\\ $r\left[e_4|R_E\right] = \big(1,1,d(c,r_3)+2,d(c,r_4)+2,\dots,d(c,r_k)+2\big)$\\ $r\left[e_5|R_E\right] = \big(1,0,d(c,r_3)+1,d(c,r_4)+1,\dots,d(c,r_k)+1\big)$\\ $r\left[e_6|R_E\right] = \big(2,1,d(c,r_3)+2,d(c,r_4)+2,\dots,d(c,r_k)+2\big)$\\ $r\left[e_7|R_E\right] = \big(1,2,d(c,r_3)+1,d(c,r_4)+1,\dots,d(c,r_k)+1\big)$\\ $r\left[e_8|R_E\right] = \big(2,1,d(c,r_3)+1,d(c,r_4)+1,\dots,d(c,r_k)+1\big)$\\ $r\left[e_9|R_E\right] = \big(1,1,d(c,r_3),d(c,r_4),\dots,d(c,r_k)\big)$\\ $r\left[e_{10}|R_E\right] = \big(1,0,d(c,r_3),d(c,r_4),\dots,d(c,r_k)\big)$\\ $r\left[e_{11}|R_E\right] = \big(2,0,d(c,r_3)+1,d(c,r_4)+1,\dots,d(c,r_k)+1\big)$\\ $r\left[e_{12}|R_E\right] = \big(0,1,d(c,r_3)+1,d(c,r_4)+1,\dots,d(c,r_k)+1\big)$\\ We see that all these representations are distinct.\\ (ii) When both the arbitrarily selected edges are on distinct cubes of a chain $CH_{c}$, one end of which is a cube of the outermost level, as shown in Fig. 5.\\ Assume that the two selected edges $e_L$ and $e_M$ lie on two distinct cubes, namely the $L$-cube and the $M$-cube, of the chain of cubes. Assume that the chain of cubes $CH_{c}$ in $CCS(n)$ has the central cube at one end and, at the other end, the outermost cube containing a pair of edge-resolving vertices (say $r_1$ and $r_2$). Since $e_L$ is an edge of the $L$-cube and $e_M$ is an edge of the $M$-cube, it is clear that $d(e_L,R_E) < d(e_M,R_E)$. Hence the representations of the two edges with respect to $R_E$ are distinct.\\\\ (iii) When both the arbitrarily selected edges are bridge edges on the chain of cubes $CH_{c}$, one end of which is a cube of the outermost level, as shown in Fig.
5.\\ Assume that the two selected bridge edges $e_{b_i}$ and $e_{b_j}$ lie on the chain of cubes. Assume that the chain of cubes $CH_{c}$ in $CCS(n)$ has the central cube at one end and, at the other end, the outermost cube containing a pair of edge-resolving vertices (say $r_1$ and $r_2$). Then it is clear that $d(e_{b_i},R_E) < d(e_{b_j},R_E)$. Hence the representations of the two edges with respect to $R_E$ are distinct.\\\\ \begin{center} \begin{figure}[H] \centering \includegraphics[width=4.0in]{cu333} \caption{Chain of cubes ($CH_{c}$) in $CCS(n)$.}\label{cu333} \end{figure} \end{center} \FloatBarrier (iv) When both the arbitrarily selected edges are on distinct chains of cubes (say $CH_{c_1}$ and $CH_{c_2}$), and these two chains are connected to a common cube called the branching cube.\\\\ Assume that the two selected edges $e_M$ and $e_N$ lie on two distinct cubes, namely the $M$-cube and the $N$-cube, respectively. Further, these cubes are on distinct chains of cubes (say $CH_{c_1}$ and $CH_{c_2}$) connected at the branching cube $B$, as shown in Fig. 6.\\ Assume that the chains of cubes $CH_{c_1}$ and $CH_{c_2}$ in $CCS(n)$ have the branching cube at one end and outermost cubes at the other, containing pairs of edge-resolving vertices $r_i = r_1, r_{i+1}$ and $r_j = r_3, r_{j+1}$. Moreover, the length of the shortest path between vertex $r_1$ and edge $e_M$ on the $M$-cube is greater than the length of the shortest path between vertex $r_1$ and edge $e_N$ on the $N$-cube. This implies $d(e_N,r_1)\neq d(e_M,r_1)$, and hence $r\left[e_N|R_E\right] \neq r\left[e_M|R_E\right]$.
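All of these comparisons reduce to the edge-to-vertex distance $d(e,r)=\min\{d(u,r),d(v,r)\}$ for an edge $e=\{u,v\}$. For the single cube, the claim $edim(CCS(1))=3$ established earlier can also be checked exhaustively; the sketch below (our illustration, not part of the proof) labels the cube's vertices by $3$-bit integers rather than the $r_i$ of Fig. 1:

```python
from itertools import combinations

# Cube graph CCS(1) = Q3: vertices are 3-bit integers 0..7; two vertices
# are adjacent exactly when they differ in one bit.
V = range(8)
E = [(u, v) for u in V for v in V if u < v and bin(u ^ v).count("1") == 1]

def dist(u, v):
    # Graph distance on the hypercube equals the Hamming distance.
    return bin(u ^ v).count("1")

def edge_dist(e, r):
    # d(e, r) = min{d(u, r), d(v, r)} for an edge e = {u, v}.
    return min(dist(e[0], r), dist(e[1], r))

def is_edge_generator(S):
    reps = {tuple(edge_dist(e, r) for r in S) for e in E}
    return len(reps) == len(E)  # are all 12 edge representations distinct?

# Smallest k for which some k-subset of vertices resolves every edge.
edim = min(k for k in range(1, 9)
           if any(is_edge_generator(S) for S in combinations(V, k)))
# edim == 3, matching the single-cube case.
```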
\begin{center} \begin{figure}[h] \centering \includegraphics[width=3.5in]{cu444} \caption{The distinct chains of cubes through the branching cube.}\label{cu444} \end{figure} \end{center} \FloatBarrier (v) When both the arbitrarily selected bridge edges are on distinct chains of cubes (say $CH_{c_1}$ and $CH_{c_2}$), and these two chains are connected to a common cube called the branching cube.\\\\ Assume that the two selected edges $e_{b_i}$ and $e_{b_j}$ lie on two distinct chains of cubes, namely $CH_{c_1}$ and $CH_{c_2}$, respectively. Further, these chains are connected at the branching cube $B$, as shown in Fig. 6.\\ Assume that the chains of cubes $CH_{c_1}$ and $CH_{c_2}$ in $CCS(n)$ have the branching cube at one end and outermost cubes at the other, containing pairs of edge-resolving vertices $r_i = r_1, r_{i+1}$ and $r_j = r_3, r_{j+1}$. Moreover, the length of the shortest path between vertex $r_1$ and edge $e_{b_j}$ is greater than the length of the shortest path between vertex $r_1$ and edge $e_{b_i}$. This implies $d(e_{b_i},r_1)\neq d(e_{b_j},r_1)$, and hence $r\left[e_{b_i}|R_E\right] \neq r\left[e_{b_j}|R_E\right]$.\\\\ (vi) When both the arbitrarily selected edges are on the central cube, as shown in Fig. 7. \begin{center} \begin{figure}[h!] \centering \includegraphics[width=3.0in]{cu5} \caption{Central cube of $CCS(n)$; $r_i$ and $r_{i+1}$ are vertices on the outermost level of cubes.}\label{cu5} \end{figure} \end{center} \FloatBarrier Assume that the two arbitrarily selected edges are on the central cube. We label the 8 vertices as $a_1,a_2,\dots,a_8$ and the edges as $e_i$, $i = 1,2,\dots,12$. Without loss of generality, we assume that the vertices $r_{2i-1}, r_{2i}$, $i = 1,2,\dots,8$, are on the outermost-layer cubes of $CCS(n)$, and that the outermost cubes containing $r_{2i-1}, r_{2i}$ are connected to the central cube at vertex $a_i$, $1\leq i\leq 8$, through a chain of cubes.
Each vertex of the central cube is incident to three of its edges (e.g. $a_1$ is incident to the edges $e_1,e_4,e_8$), so the shortest distances of these three edges from the vertices $r_1$ and $r_2$ are the same. However, edge $e_1$ is also incident to vertex $a_2$, while edges $e_4$ and $e_8$ are not. This implies that the shortest path between edge $e_1$ and vertex $r_3$ differs from those of edges $e_4$ and $e_8$, and similarly for the other edges. Hence the representation of each edge with respect to $R_E$ is distinct.\\\\ (vii) When both the arbitrarily selected edges are on a middle cube, which is neither an outermost cube nor the central cube. \begin{center} \begin{figure}[h!] \centering \includegraphics[width=4.5in]{cu999} \caption{An arbitrary middle cube $C_m$ which is neither an outermost cube nor the central cube of $CCS(n)$.}\label{cu999} \end{figure} \end{center} Assume that the middle cube contains the two arbitrarily selected edges, as shown in Fig. 8. We label the 8 vertices as $c,a_1,a_2,\dots,a_7$ and the edges as $e_i$, $i = 1,2,\dots,12$. Without loss of generality, we may assume that the vertices $r_1$ and $r_2$ lie on a cube (say $C_j$) of the outermost layer of $CCS(n)$ and that $C_j$ is joined to the middle cube $C_m$ at vertex $a_1$ by a chain of cubes. Similarly, we may suppose that $r_{2i-1}, r_{2i}$ lie on cubes of the outermost layer of $CCS(n)$, and that these cubes are connected to the middle cube $C_m$ at the vertices $a_i$, $2\leq i \leq 7$, through chains of cubes. As in case (vi), each vertex is incident to three edges of cube $C_m$, and by the same argument the representation of each edge with respect to $R_E$ is distinct.\\\\ (viii) When one of the arbitrarily selected edges is on one of the cubes, and the other is a bridge edge between cubes.\\ By the symmetry of the structure, the representation of each edge with respect to $R_E$ is distinct.\\ All these cases prove that $R_E = \{r_1,r_2,r_3,\dots,r_k\}$ is an edge metric generator.
Hence $edim(CCS(n))\leq 7^{n-2}\times 16$.\\\\ Next, we claim that $edim\big(CCS(n)\big) \geq 7^{n-2} \times 16$. Let $C_n$ be an arbitrary cube on the outermost layer of $CCS(n)$, as shown in Fig. 9. We label the vertices of $C_n$ as $a_i$, $i = 1,2,\dots,8$, and the edges as $e_i$, $i = 1,2,\dots,12$. The degree of each vertex is 3, except for the vertex $a_1$, which is attached to the chain of cubes through an edge. Let $R_E = \{r_1,r_2,\dots,r_k\}$ be an edge metric generator of $CCS(n)$. To complete the proof, we show that $R_E$ contains at least two vertices from $C_n$. First, suppose that no vertex of $R_E$ belongs to $C_n$. Then $r\left[e_1|R_E\right] = \big(d(a_1,r_1),d(a_1,r_2),d(a_1,r_3),\dots,d(a_1,r_k)\big) = r\left[e_{11}|R_E\right]$, which is a contradiction.\\ \begin{center} \begin{figure}[h!] \centering \includegraphics[width=2.5in]{cu888} \caption{Outermost cube ($C_n$) of $CCS(n)$.}\label{cu888} \end{figure} \end{center} \FloatBarrier Secondly, suppose that only one vertex of $R_E$ belongs to $C_n$.
Without loss of generality, we assume that this vertex is $r_1$.\\ If $r_1 = a_1$, then\\ $r\left[e_1|R_E\right] = \big(0,d(a_1,r_2),d(a_1,r_3),\dots,d(a_1,r_k)\big) = r\left[e_{11}|R_E\right]$, a contradiction.\\ If $r_1 = a_2$, then\\ $r\left[e_2|R_E\right] = \big(0,d(a_1,r_2)+1,d(a_1,r_3)+1,\dots,d(a_1,r_k)+1\big) = r\left[e_{10}|R_E\right]$, a contradiction.\\ If $r_1 = a_3$, then\\ $r\left[e_2|R_E\right] = \big(0,d(a_1,r_2)+1,d(a_1,r_3)+1,\dots,d(a_1,r_k)+1\big) = r\left[e_{6}|R_E\right]$, a contradiction.\\ If $r_1 = a_4$, then\\ $r\left[e_3|R_E\right] = \big(0,d(a_1,r_2)+2,d(a_1,r_3)+2,\dots,d(a_1,r_k)+2\big) = r\left[e_{9}|R_E\right]$, a contradiction.\\ If $r_1 = a_5$, then\\ $r\left[e_5|R_E\right] = \big(0,d(a_1,r_2)+1,d(a_1,r_3)+1,\dots,d(a_1,r_k)+1\big) = r\left[e_{7}|R_E\right]$, a contradiction.\\ If $r_1 = a_6$, then\\ $r\left[e_5|R_E\right] = \big(0,d(a_1,r_2)+1,d(a_1,r_3)+1,\dots,d(a_1,r_k)+1\big) = r\left[e_{6}|R_E\right]$, a contradiction.\\ If $r_1 = a_7$, then\\ $r\left[e_7|R_E\right] = \big(0,d(a_1,r_2)+1,d(a_1,r_3)+1,\dots,d(a_1,r_k)+1\big) = r\left[e_{8}|R_E\right]$, a contradiction.\\ If $r_1 = a_8$, then\\ $r\left[e_8|R_E\right] = \big(0,d(a_1,r_2)+1,d(a_1,r_3)+1,\dots,d(a_1,r_k)+1\big) = r\left[e_{10}|R_E\right]$, a contradiction.\\ The contradiction obtained in each of these cases establishes our claim. Consequently, any edge metric generator $R_E$ of $CCS(n)$ contains at least two vertices from each outermost cube $C_n$. From the construction of $CCS(n)$, the number of outermost cubes at each level is $7$ times the number of cubes in the outermost layer of the preceding level, so the outermost layer of $CCS(n)$ contains $7^{n-2}\times 8$ cubes. This implies that $edim\big(CCS(n)\big) \geq 7^{n-2} \times 16$. Hence $edim\big(CCS(n)\big) = 7^{n-2} \times 16$. \end{proof} \hspace{-5.0mm}\textbf{4. Conclusion}\\ In this article, the edge metric dimension of $CCS(n)$ has been studied. In particular, we have shown that $edim(CCS(n)) = 3$ for $n = 1$ and $edim(CCS(n)) = 7^{n-2}\times 16$ for $n\geq2$, which coincides with the metric dimension of $CCS(n)$. These results are useful for studying the structural properties of the chemical compound $CCS(n)$.
In the future, we will extend our approach to find the fault-tolerant metric dimension and the mixed metric dimension of $CCS(n)$.\\\\ \hspace{-5.0mm}\textbf{5. Declarations}\\\\ \hspace{-5.0mm}\textbf{Data availability statement}\\ This article does not qualify for data sharing since no data sets were generated or analysed during this research.\\\\ \hspace{-5.0mm}\textbf{Novelty statement}\\ The edge resolvability of various well-known graphs, such as convex polytopes, the web graph, circulant graphs, and certain chemical structures, has recently been computed. In chemistry, one of the most important problems is to represent a series of chemical compounds mathematically so that each compound has a unique representation. Graph invariants play an important role in analyzing the abstract structures of chemical graphs. However, there are still chemical graphs for which the vertex and edge resolvability have not been determined; one such compound is the crystal cubic carbon structure $CCS(n)$. Therefore, in this article, we compute the edge resolvability of $CCS(n)$ for $n\geq 1$ and show that $edim(CCS(n)) = dim(CCS(n))$.\\ \hspace{-5.0mm}\textbf{Conflict of interest}\\ The authors disclose that they have no competing interests. \begin{thebibliography}{137} \bibitem{BA} A. Q. Baig, M. Imran, W. Khalid, M. Naeem, Molecular description of carbon graphite and crystal cubic carbon structures, Can. J. Chem., 95(6) (2017) 674–686. \bibitem{pi} J. Caceres, C. Hernando, M. Mora, I. Pelayo, M. Puertas, C. Seara, D. Wood, On the metric dimension of cartesian products of graphs, SIAM J. Discrete Math, 21(2) (2007) 423–441. \bibitem{ti} G. Chartrand, L. Eroh, M. A. Johnson, O. R. Oellermann, Resolvability in graphs and metric dimension of a graph, Discrete Appl. Math, 105(1-3) (2000) 99–113. \bibitem{den} B. Deng, M. F. Nadeem, M. Azeem, On the Edge Metric Dimension of Different Families of Möbius Networks, Math. Probl. Eng., (2021) 6623208, doi:10.1155/2021/6623208.
\bibitem{ki} F. Harary, R. A. Melter, On the metric dimension of a graph, Ars Comb. 2(1) (1976) 191–195. \bibitem{ALL} M. Hatala, P. Gemeiner, L. Lorencová et al., Screen-printed conductive carbon layers for dye-sensitized solar cells and electrochemical detection of dopamine. Chem. Pap. 75 (2021) 3817–3829. \bibitem{IMA} M. Imran, M. Naeem, A. Q. Baig, M. K. Siddiqui, M. A. Zahid, W. Gao, Modified eccentric descriptors of crystal cubic carbon, J. Discret. Math. Sci. Cryptogr., 22(7) (2019) 1215–1228. \bibitem{len} A. Kelenc, N. Tratnik, I.G. Yero, Uniquely identifying the edges of a graph: the edge metric dimension, Discrete Appl. Math, 251 (2018) 204-220. \bibitem{CAL} O. V. Kharissova, J. Rodríguez, B. I. Kharisov, Non-standard ROS-generating combination “theraphthal–ascorbic acid” in low-temperature transformations of carbon allotropes. Chem. Pap., 73 (2019) 239–248. \bibitem{mar} M. Knor, S. Majstorović, A. T. M. Toshi, R. Škrekovski, I. G. Yero, Graphs with the edge metric dimension smaller than the metric dimension, Appl. Math. Comput., 401 (2021) 126076. \bibitem{Mat} N. N. Matyushenko, V. E. Strel'Nitskiǐ, V. A. Gusev, 1979. A dense new version of crystalline carbon $C_8$, ZhETF Pisma Redaktsiiu, 30 (1729) 218. \bibitem{pok} A. Pokropivny, S. Volz, ‘$C_8$ phase’: Supercubane, tetrahedral, BC‐8 or carbon sodalite?, Phys. Status Solidi B, 249(9) (2012) 1704-1708. \bibitem{se} J. Sedlar, R. Škrekovski, Bounds on metric dimensions of graphs with edge disjoint cycles, Appl. Math. Comput., 396 (2021) 125908. \bibitem{Kg} S. Sharma, V. K. Bhat, Fault-tolerant metric dimension of zero-divisor graphs of commutative rings. AKCE Int. J. Graphs Comb., (2021) 1-7, doi:10.1080/09728600.2021.2009746. \bibitem{VK} S. K. Sharma, V. K. Bhat, Metric dimension of heptagonal circular ladder, Discrete Math. Algorithms Appl., 13(1) (2021) 20500095. \bibitem{CP} S. K. Sharma, V. K. Bhat, H. Raza, S. Sharma, On mixed metric dimension of polycyclic aromatic hydrocarbon networks. 
Chem. Pap, (2022) 1-14, doi:10.1007/s11696-022-02151-x. \bibitem{SS} P. Singh, S. Sharma, S. K. Sharma, V. K. Bhat, Metric dimension and edge metric dimension of windmill graphs, AIMS Math, 6(9) (2021) 9138-9153. \bibitem{li} P. J. Slater, Leaves of trees, Congress. Numer, 14 (1975) 549–559. \bibitem{sk} C. Wei, M. Salman, S. Shahzaib, M. U. Rehman, J. Fang, Classes of Planar Graphs with Constant Edge Metric Dimension, Complexity, (2021), doi:10.1155/2021/5599274. \bibitem{ZHA} H. Yang, M. Naeem, A. Q. Baig, H. Shaker, M. K. Siddiqui, Vertex Szeged index of crystal cubic carbon structure, J. Discret. Math. Sci. Cryptogr, 22(7) (2019) 1177–1187. \bibitem{Yan} H. Yang, M. Naeem, M. K. Siddiqui, Molecular properties of carbon crystal cubic structures, Open Chem., 18(1) (2020) 339–346. \bibitem{ZAH} M. A. Zahid, M. Naeem, A. Q. Baig, W. Gao, General fifth M-zagreb indices and fifth M-zagreb polynomials of crystal cubic carbon, Util. Math., 109 (2018) 263–270. \bibitem{fd} Y. Zhang, S. Gao, On the edge metric dimension of convex polytopes and its related graphs, J. Comb. Optim, 39(2) (2020) 334–350. \bibitem{Nem} X. Zhang, M. Naeem, Metric dimension of crystal cubic carbon structure, J. Math., (2021), doi:10.1155/2021/3438611. \end{thebibliography} \end{document}
2205.00647v2
http://arxiv.org/abs/2205.00647v2
Maximal Dissent: a State-Dependent Way to Agree in Distributed Convex Optimization
\pdfminorversion=4 \documentclass[12pt]{article} \usepackage[margin=1in]{geometry} \usepackage{lipsum} \usepackage{bbm} \usepackage[usenames]{color} \usepackage{enumerate} \usepackage{times} \usepackage{textcomp} \usepackage{cite} \usepackage{url} \usepackage{subcaption} \usepackage{amsfonts,mathrsfs} \usepackage{amssymb,amsmath} \usepackage{amsthm} \usepackage{verbatim} \usepackage{acronym} \usepackage{mathtools} \DeclarePairedDelimiter{\ceil}{\lceil}{\rceil} \usepackage{graphicx} \usepackage{todonotes} \newcommand{\todoin}[1]{\todo[inline]{#1}} \usepackage{enumerate} \usepackage{subfiles} \usepackage[capitalize]{cleveref} \newcommand{\remove}[1]{} \newcommand{\Rm}{\mathbb{R}^m} \def\Tr{\mathrm{tr}} \def\E{\mathbb{E}} \def \N {\mathbb{N}} \def \Pdist {\mathbb{P}} \def \R{\mathbb{R}} \def \Z {\mathbb{Z}} \def \M {M} \def \seq {\rightrightarrows} \def \Ecal {\mathcal{E}} \def \Fcal {\mathcal{F}} \def \Gcal {\mathcal{G}} \def \Lcal {\mathcal{L}} \def \Ncal {\mathcal{N}} \def \Scal {\mathcal{S}} \def \Vcal{[n]} \def \a {\alpha} \def \b {\beta} \def \d {\delta} \def \e {E} \def \uh {\hat{u}} \def \vh {\hat{v}} \def \xdot {\dot{x}} \def \Bbar {\Bar{B}} \def \diam {\mathrm{diam}({\mathcal{G}})} \def \one {\mathbf{1}} \def \lt {\Tilde{\lambda}} \def \lrg {\lambda_{2}} \def \rootl {\sqrt{\lambda}} \def \node {r} \def \Vone {\tilde{V}} \def \quadraticprop {Contraction Property} \def \elmg {e_{\textrm{lmg}}} \def \emax {e_{\max}} \def \erg {e_{\textrm{rg}}} \def \Almg {A_{\textrm{lmg}}} \def \Amg {A_{\textrm{max}}} \def \Arg {A_{\textrm{rg}}} \def \Alb {A_{\textrm{lb}}} \def \Agossip {B} \def \Mat {\mathcal{A}} \def \Pmat {\pi} \def \subgrad {\tilde \nabla} \newtheorem{theorem}{Theorem} \newtheorem{acknowledgement}[theorem]{Acknowledgement} \newtheorem{assumption}{Assumption} \newtheorem{axiom}[theorem]{Axiom} \newtheorem{case}[theorem]{Case} \newtheorem{claim}[theorem]{Claim} \newtheorem{conclusion}[theorem]{Conclusion} \newtheorem{condition}[theorem]{Condition} 
\newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}{Corollary} \newtheorem{criterion}[theorem]{Criterion} \newtheorem{definition}{Definition} \newtheorem{example}{Example} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{lemma}{Lemma} \newtheorem{notation}[theorem]{Notation} \newtheorem{problem}[theorem]{Problem} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}{Remark} \newtheorem{solution}[theorem]{Solution} \newtheorem{summary}[theorem]{Summary} \usepackage{enumerate} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \renewcommand{\vec}[1]{\mathbf{#1}} \newcommand{\sets}[1]{\mathcal{#1}} \newcommand\inp[2]{\langle #1, #2 \rangle} \usepackage[normalem]{ulem} \def \vecw {\vec{w}} \def \vecx {\vec{x}} \def \bfx {\vec{x}} \def\Gt{G} \def\Cgph{\mathcal C(G_{ph})} \def\At{A} \def\Dt{D} \def\Lt{L} \def\Nt{N} \def\Gph{G_{ph}} \def\Gn{\mathcal{G}_n} \def\spann{\text{span}} \def\argmax{\text{argmax}} \def\Eph{E_{ph}} \newcommand{\beh}[1]{{\color{red} #1}} \newcommand{\um}{\textcolor{blue}} \newcommand{\mmv}{\textcolor{magenta}} \definecolor{OliveGreen}{rgb}{0,0.5,0} \newcommand{\av}{\textcolor{black}} \renewcommand{\baselinestretch}{1} \author{Ashwin Verma, Marcos M. Vasconcelos, Urbashi Mitra, and Behrouz Touri\footnote{ A.\ Verma and B.\ Touri are with the Department of Electrical and Computer Engineering, University of California San Diego (email: [email protected], [email protected]), M.\ M.\ Vasconcelos is with the Commonwealth Cyber Initiative and the Department of Electrical and Computer Engineering, Virginia Tech (email: [email protected]), and U. Mitra is with the Ming Hsieh Department of Electrical Engineering, University of Southern California (email: [email protected]). U. Mitra was supported in part by the following agencies: ONR under grant N00014- 15-1-2550, NSF under grants CNS-1213128, CCF-1718560, CCF-1410009, CPS-1446901 and AFOSR under grant FA9550-12-1-0215. M. M. 
Vasconcelos was supported by funding from the Commonwealth Cyber Initiative (CCI). }} \title{Maximal Dissent: a {State-Dependent} Way to Agree in Distributed Convex Optimization} \date{} \begin{document} \maketitle \begin{abstract} Consider a set of agents collaboratively solving a distributed convex optimization problem, asynchronously, under stringent communication constraints. In such situations, when an agent is activated and is allowed to communicate with only one of its neighbors, we would like to pick the one holding the most informative local estimate. We propose new algorithms where the agents with maximal dissent average their estimates, leading to an information mixing mechanism that often displays faster convergence to an optimal solution compared to randomized gossip. The core idea is that averaging the states of the two neighboring agents whose local estimates are farthest apart among all neighboring pairs in the network leads to the largest possible immediate reduction of the quadratic Lyapunov function used to establish convergence to the set of optimal solutions. As a broader contribution, we prove the convergence of max-dissent subgradient methods using a unified framework that can be used for other state-dependent distributed optimization algorithms. Our proof technique bypasses the need to establish information flow between any two agents within a time interval of uniform length by intelligently studying the convergence properties of the Lyapunov function used in our analysis. \end{abstract} \section{Introduction} In distributed convex optimization, a collection of agents collaborate to minimize the sum of local objective functions by exchanging information over a communication network. The primary goal is to design algorithms that converge to an optimal solution via local interactions dictated by the underlying communication network.
A standard strategy for solving distributed optimization problems consists of each agent first combining the local estimates shared by its neighbors, followed by a first-order subgradient step on its local objective function \cite{Tsitsiklis:1986,nedic2009distributed,nedic2010constrained}. Of particular relevance herein are the so-called {\em gossip} algorithms \cite{shah2009gossip}, where the information mixing step consists of averaging the states of two agents connected by one of the edges selected from the network graph. Two benefits of gossip algorithms are their simple asynchronous implementation and a reduction in communication costs. One common gossip algorithm is \textit{randomized}, in which an agent is randomly activated and chooses one of its neighbors randomly to average its state with \cite{boyd2006randomized,ram2009asynchronous, Aysal:2009}. The randomization mechanism used in this gossip scheme is usually \textit{state-independent}. We consider a different approach to gossip in which the agent chooses one of its neighbors based on its state. At one extreme, we may think of agents who prefer to gossip with neighbors holding similar ``opinions''. As in an \textit{echo chamber}, where agents only talk to others who reinforce their own opinions, this does not lead to an effective information mixing mechanism. At the opposite extreme, we consider agents who prefer to gossip with neighbors with maximal disagreement or dissent. In this paper, we focus on the concept of \textit{max-dissent} gossip as a state-dependent information mixing mechanism for distributed optimization. We establish the convergence of the resulting distributed subgradient method under minimal assumptions on the underlying communication graph and the local functions. The idea of enabling a consensus protocol to use state-dependent matrices dates back to the Hegselmann and Krause~\cite{hegselmann2002opinion} model for opinion dynamics.
However, the literature on state-dependent averaging in distributed optimization is scarce and mostly motivated by applications in which the state represents the physical location of mobile agents (e.g. robots, autonomous vehicles, drones, etc.). In such settings, the state-dependency arises from the fact that agents that are physically closer have a higher probability of successfully communicating with each other \cite{lobel2011distributed,alaviani2021distributed, Alaviani:2021b}. Existing results assume that the local interactions between agents lead to strong connectivity over time. Unlike previous work, our model does not assume that the state of an agent represents its position in space. Moreover, we do not impose strong assumptions on the network's connectivity over time such as in \cite{nedic2010constrained} and \cite{lobel2011distributed}. Our work is closely related to the state-dependent \textit{averaging} schemes known as \textit{Load-Balancing} \cite{cybenko1989dynamic} and \textit{Greedy Gossip with Eavesdropping} \cite{ustebay2010greedy}. The main idea in these methods is to accelerate averaging by utilizing the information from the most \textit{informative} neighbor, i.e., the neighbor whose state differs maximally, with respect to some norm, from the agent's own. We refer to this as the \textit{maximal dissent} heuristic. The challenges of convergence analysis for maximal dissent averaging are highlighted in~\cite{cybenko1989dynamic,nedic2009distributedaveraging,ustebay2010greedy}. However, concepts akin to max-dissent have only been explored for the specific problem of averaging \cite{ustebay2010greedy}. Our work, on the other hand, focuses on distributed convex optimization, whose convergence is not guaranteed by the convergence of the averaging scheme alone.
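To make the maximal dissent heuristic concrete, here is a small sketch (our illustration; the graph, states, and function names are hypothetical) of greedy pairwise averaging, where at every step the pair of neighbors whose scalar states disagree the most averages:

```python
import random

def max_dissent_pair(x, edges):
    """Return the edge whose endpoint states disagree the most."""
    return max(edges, key=lambda e: abs(x[e[0]] - x[e[1]]))

def gossip_average(x, edges, steps, greedy=True, seed=0):
    rng = random.Random(seed)
    x = list(x)
    for _ in range(steps):
        # Greedy (max-dissent) edge selection vs. uniform randomized gossip.
        i, j = max_dissent_pair(x, edges) if greedy else rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2  # pairwise averaging step
    return x

# Path graph on 4 agents holding scalar states.
edges = [(0, 1), (1, 2), (2, 3)]
x0 = [0.0, 0.0, 0.0, 4.0]
x_greedy = gossip_average(x0, edges, steps=6)
spread = max(x_greedy) - min(x_greedy)  # drops from 4.0 to below 1.0
```

Each pairwise average preserves the network-wide mean, so both selection rules drive the states toward the same consensus value; they differ only in how quickly the disagreement shrinks.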
As a broader impact of the results herein, we show that schemes that incorporate mixing of information between max-dissent agents converge to a global optimizer of the underlying distributed optimization problem almost surely. Our result enables us to propose and extend the use of load-balancing and max-dissent gossip to distributed optimization. The key property of max-dissent averaging is that it leads to a contraction of the Lyapunov function used to establish convergence. While recent work has considered similar contraction results (e.g. \cite{koloskova2019decentralized, koloskova2020unified}), those results are not applicable to state-dependent schemes and do not establish almost sure convergence, but only convergence in expectation. \subsection{Contributions} A preliminary version of some of the ideas herein has previously appeared in \cite{Verma:2021}, which addressed only one of the schemes, namely \textit{Global Max-Gossip}, for distributed optimization of univariate functions. The results reported here are much more general than those in \cite{Verma:2021}, addressing $d$-dimensional optimization and covering multiple state-dependent schemes (e.g. Load-Balancing, Global and Local Max-Gossip) in which the max-dissent agents communicate with non-zero probability at each time. The main contributions of this paper are: \begin{itemize} \item We present state-dependent distributed optimization schemes that do not rely on or imply explicit strong connectivity conditions (such as $B$-connectivity). \item We establish a general result highlighting the importance of max-dissent agents on a graph for distributed optimization, significantly simplifying the task of establishing contraction results for a large class of consensus-based subgradient methods.
\item We prove the convergence of state-dependent algorithms to a global optimizer for distributed optimization problems using a technique involving the aforementioned contraction property of a quadratic Lyapunov function. \item We present numerical experiments suggesting that the proposed state-dependent subgradient methods improve the convergence rate for distributed estimation problems relative to conventional (state-independent) gossip algorithms. \end{itemize} \subsection{Organization} The rest of the paper is organized as follows. First, we formulate distributed optimization problems and outline a generic state-dependent distributed subgradient method in Section~\ref{sec:problem}. In Section~\ref{sec:consensus}, we introduce Local and Global Max-Gossip, and review the Randomized Gossip and Load Balancing distributed averaging schemes. We discuss the role of maximal dissent agents and their selection in averaging algorithms in Section~\ref{sec:maxedge}. In Section~\ref{sec:conv}, we present our main results on the convergence of maximal dissent state-dependent distributed subgradient methods. We provide a numerical example that shows the benefit of using algorithms based on maximal dissent averaging in Section~\ref{sec:numerical}. We conclude the paper in Section~\ref{sec:conclude}, where we outline future research directions. \subsection{Notation} For a positive integer $n$, we define $[n]=\{1,2,\dots,n\}$. We denote the $d$-dimensional Euclidean space by $\R^d$. We use boldface letters such as $\mathbf{x}$ to represent vectors, and lower-case letters such as $x$ to represent scalars. Upper-case letters such as $A$ represent matrices. We use $A^T$ to denote the transpose of a matrix $A$. For $i\in [n]$, we denote by $\vec{b}_i$ the $i$-th standard basis vector of $\R^n$. We denote by $\one$ the vector with all components equal to one, whose dimension will be clear from the context.
For a vector $\vec{v}$, we denote the $\ell_2$-norm by $\|\vec{v}\|$, and the average of its entries by $\bar{v}$. We say that an $n\times n$ matrix $A$ is stochastic if it is non-negative and the elements in each of its rows add up to one. We say that $A$ is doubly stochastic if both $A$ and $A^T$ are stochastic. For two vectors $\vec{a}, \vec{b} \in \mathbb{R}^n$, we define $\inp{\vec{a}}{\vec{b}} = \vec{a}^T\vec{b}$. The trace of a square matrix $A$ is defined to be the sum of the entries on the main diagonal of $A$ and is denoted by $\Tr(A)$. For matrices $A,B \in \R^{n \times m}$, we define the inner product $\inp{A}{B} \triangleq \Tr(A^TB)$ and denote the Frobenius norm of $A$ by $\|A\|_F$. \section{Problem Formulation}\label{sec:problem} Consider a distributed system of $n$ agents with an underlying communication network defined by a graph $\Gcal = (\Vcal, \Ecal)$. Each agent $i \in \Vcal$ has access to a {\em local} convex function $f_i:\R^d \to \R$. The agents can communicate only with their one-hop neighbors as dictated by the network graph $\Gcal$. Our goal is to design a distributed algorithm to solve the following unconstrained optimization problem \begin{align}\label{eq:def_DistOpt_Problem} F^* = \min_{\vecw\in\R^d} F(\vecw), \;\; \mbox{where} \;\; F(\vecw) \triangleq \sum_{i=1}^n f_i(\vecw). \end{align} We assume that the local objective function $f_i$ is known only to node $i$ and that the nodes can only communicate by exchanging information about their local estimates of the optimal solution. The solution set of the problem is defined as \begin{equation} \sets{W}^* \triangleq \arg \min_{\vecw \in \R^d} F(\vecw). \nonumber \end{equation} Throughout the paper, we make extensive use of the notion of the \textit{subgradient} of a function.
\begin{definition}[Subgradient]\label{def:subgradient} A subgradient of a convex function $f:\R^d \to \R$ at a point $\vecw_0\in \R^d$ is a vector $\vec{g} \in \R^d$ such that \begin{align} f(\vecw_0) + \inp{\vec{g}}{\vecw-\vecw_0} \leq f(\vecw)\nonumber \end{align} for all $\vecw\in \R^d$. We denote the set of all subgradients of $f$ at $\vecw_0$ by $\partial f(\vecw_0)$, which is called the \textit{subdifferential} of $f$ at $\vecw_0$. \end{definition} We make the following assumptions on the structure of the optimization problem in \cref{eq:def_DistOpt_Problem}. \begin{assumption}[Non-empty solution set]\label{asmpn:Nonempty_solution} The optimal solution set is non-empty, i.e., $ \sets{W}^* \neq \emptyset. $ \end{assumption} \begin{assumption}[Bounded Subgradients]\label{asmpn:bounded_subgrad} The subgradients of each local objective function $f_i$ are uniformly bounded. In other words, for each $i \in [n]$, there exists a finite constant $L_i$ such that for all $\vecw\in \R^d$ and all $\vec{g} \in \partial f_i(\vecw)$, we have $\|\vec{g}\| \leq L_i$. \end{assumption} There exist many algorithms to solve the problem in \cref{eq:def_DistOpt_Problem}. Nedic and Ozdaglar~\cite{nedic2009distributed} introduced one of the pioneering schemes, in which each agent keeps an estimate of the optimal solution and, at each time step, the agents share their local estimates with their neighbors. Then, each agent updates its estimate using a time-varying, \textit{state-independent} convex combination of the information received from its neighbors and its own local estimate. For $t\geq 0$, let $a_{ij}(t)$ denote the coefficients of the aforementioned convex combination at time $t$ such that $a_{ij}(t)\geq 0$, for all $i,j \in [n]$, $a_{ij}(t)=0$ if $\{i,j\} \notin \mathcal{E}$, and $\sum_{j=1}^n a_{ij}(t)=1$, for all $i\in [n]$. Let $\mathbf{x}_i(t)$ denote the $i$-th agent's estimate of the optimal solution at time $t$.
The convex combination is followed by taking a step in the direction opposite to a subgradient in the subdifferential at the local estimate, i.e., \begin{align}\label{eq:oldupdateSubgrad} \vecx_i(t+1) &= \sum_{j=1}^n a_{ij}(t)\vecx_j(t) - \alpha(t)\vec{g}_i(t), \end{align} where $\vec{g}_i(t)\in \partial f_i\big(\vecx_i(t)\big)$, and $\alpha(t)$ is a step-size sequence. Herein, we generalize the algorithm in~\cite{nedic2009distributed} by allowing the weights in the convex combination to be \textit{state-dependent} in addition to being time-varying. Let each agent $i \in \Vcal$ initialize its estimate at an arbitrary point ${\vecx_i(0)\in \R^d}$, which is updated at discrete-time iterations $t\geq 0$ based on its own subgradient and the estimates received from neighboring agents as follows \begin{align} \vecw_i(t+1) &= \sum_{j=1}^n a_{ij}\big(t,\vecx_1(t),\vecx_2(t),\ldots,\vecx_n(t)\big)\vecx_j(t), \nonumber\\ \vecx_i(t+1) &= \vecw_i(t+1) - \alpha(t+1)\vec{g}_i(t+1), \nonumber \end{align} where $a_{ij}\big(t,\vecx_1(t),\vecx_2(t),\dots,\vecx_n(t)\big)$ are non-negative weights, $\alpha(t)$ is a step-size sequence, and $\vec{g}_i(t)\in \partial f_i\big(\vecw_i(t)\big)$ for all $t \geq 0$. We can express this update rule compactly in matrix form as \begin{align} W(t+1) &= A\big(t,X(t)\big)X(t), \label{eq:update_mat_1} \\\nonumber X(t+1) &= W(t+1) - \alpha(t+1)G(t+1), \end{align} where \begin{equation} A\big(t,X(t)\big) \triangleq \Big[a_{ij}\big(t,\vecx_1(t),\dots,\vecx_n(t)\big)\Big]_{i,j\in [n]}\nonumber \end{equation} and \begin{equation} X(t) \triangleq \begin{bmatrix} \vecx^T_1(t) \\ \vdots \\ \vecx^T_n(t) \end{bmatrix}, \ W(t) \triangleq \begin{bmatrix} \vecw^T_1(t) \\ \vdots \\ \vecw^T_n(t) \end{bmatrix}, \ G(t) \triangleq \begin{bmatrix} \vec{g}^T_1(t)\\ \vdots \\ \vec{g}^T_n(t) \end{bmatrix}. \nonumber \end{equation} Note that another difference between Eqs.
\eqref{eq:update_mat_1} and \eqref{eq:oldupdateSubgrad} is that agent $i$ computes the subgradient for the local function $f_i$ at the computed average $\vecw_i(t+1)$ instead of $\vecx_i(t)$, $t\geq 0$. \begin{assumption}[Diminishing step-size]\label{asmpn:Step-Size} The step-sizes $\alpha(t)>0$ form a non-increasing sequence that satisfies \begin{equation} \sum_{t=1}^\infty\alpha(t) = \infty \ \ \text{and} \ \ \sum_{t=1}^\infty\alpha^2(t) < \infty. \nonumber \end{equation} \end{assumption} For a step-size sequence that satisfies \cref{asmpn:Step-Size}, if the sequence of matrices $\{A(t)\}$, where $A(t)=[a_{ij}(t)]_{i,j\in [n]}$, is doubly stochastic and sufficiently mixing, and the objective functions satisfy the regularity conditions in Assumptions \ref{asmpn:Nonempty_solution} and \ref{asmpn:bounded_subgrad}, then the iterates in Eq.~\eqref{eq:oldupdateSubgrad} converge to an optimal solution irrespective of the initial conditions $\vecx_i(0)\in \R^d$, i.e., $ \lim_{t\to\infty}\vecx_i(t)=\vecx^*, \ i\in [n], $ where $\vecx^*\in \sets{W}^*$ \cite[Propositions 4 and 5]{nedic2010constrained}. Our goal for the remainder of the paper is to establish a similar result for state-dependent maximal dissent distributed subgradient methods. \section{State-dependent average-consensus}\label{sec:consensus} In this section, we discuss three state-dependent average-consensus schemes that can potentially accelerate existing distributed optimization methods; in so doing, we endeavor to unify the state-dependent average-consensus methodology. The first scheme, Local Max-Gossip, was studied in~\cite{ustebay2010greedy} exclusively for the average consensus problem. We then introduce two novel averaging schemes, Max-Gossip and Load-Balancing, that provide faster convergence.
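To make the iteration in Eq.~\eqref{eq:update_mat_1} concrete, the following minimal Python sketch runs the state-dependent subgradient method on a toy problem; the complete-graph averaging rule, quadratic local costs, and step sizes here are illustrative assumptions for this example only, not the schemes analyzed below.

```python
import numpy as np

def subgradient_step(X, averaging_rule, subgrad, alpha, t):
    """One iteration of Eq. (update_mat_1): W = A(t, X) X (consensus step),
    then X <- W - alpha(t+1) G, with the subgradient of f_i taken at w_i."""
    A = averaging_rule(t, X)                         # doubly stochastic, may depend on X
    W = A @ X
    G = np.array([subgrad(i, W[i]) for i in range(X.shape[0])])
    return W - alpha(t + 1) * G

# Toy instance (all choices below are illustrative assumptions):
# n agents minimize F(w) = sum_i (w - c_i)^2 over a complete graph
n = 5
c = np.arange(float(n))                              # c = [0, 1, 2, 3, 4]
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 1))
uniform_avg = lambda t, X: np.full((n, n), 1.0 / n)  # complete-graph averaging
grad = lambda i, w: 2.0 * (w - c[i])                 # gradient of f_i(w) = (w - c_i)^2
alpha = lambda t: 1.0 / t                            # satisfies the step-size assumption
for t in range(2000):
    X = subgradient_step(X, uniform_avg, grad, alpha, t)
# the agents approach the common minimizer mean(c) = 2
```

The consensus schemes discussed next correspond to replacing the uniform averaging rule above by state-dependent averaging matrices.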
The dynamics of these algorithms can be understood as instances of Eq.~\eqref{eq:update_mat_1} with constant local cost functions $f_i(\bfx)\equiv c$, $i\in [n]$, i.e., \begin{align} X(t+1)=A\big(t,X(t)\big)X(t).\nonumber \end{align} We will consider three (two asynchronous and one synchronous) algorithms. The first two algorithms are related to the well-known randomized gossip algorithm \cite{boyd2006randomized,shah2009gossip}. First, we present a brief description of Randomized Gossip. \subsection{Randomized Gossip}\label{sec:RG} Consider a network $\Gcal=([n],\Ecal)$ of $n$ agents, where each agent has an initial estimate $\bfx_i(0)$. At each iteration $t\geq 0$, a node $i$ is chosen uniformly from $[n]$, independently of earlier realizations. Then, node $i$ chooses one of its neighbors $j\in \Ncal_{i}$, where $\mathcal{N}_{i} \triangleq \{j\in [n]: \{i,j\} \in \Ecal\}$, with probability $P_{ij}>0$. The two nodes exchange their current states $\bfx_{i}(t)$ and $\bfx_{j}(t)$, and update their states according to \begin{equation}\label{eq:update} \bfx_{i}(t+1)=\bfx_{j}(t+1)=\frac{1}{2}\big(\bfx_{i}(t)+\bfx_{j}(t)\big). \end{equation} The states of the remaining agents are unchanged. The update rule in \cref{eq:update} admits a more compact matrix representation as \begin{equation}\label{eqn:gossip} X(t+1)=\Agossip(e)X(t), \end{equation} where $e=\{i,j\}$, and \begin{equation} \Agossip(e)\triangleq I - \frac{1}{2}(\vec{b}_{i}-\vec{b}_{j})(\vec{b}_{i}-\vec{b}_{j})^T. \label{eq:matrix_form_1} \end{equation} It is necessary that $\sum_{\ell=1}^nP_{i\ell}=1$ for all $i$, where $P_{i\ell}=0$ if and only if $\{i,\ell\}\not \in\Ecal$. The dynamical system described in~\cref{eqn:gossip} and its convergence rate are studied in~\cite{boyd2006randomized}. \input{global-local} \subsection{Load-Balancing} Another state-dependent algorithm known as \textit{Load-Balancing} can also be used to speed up the convergence of average-consensus~\cite{nedic2009distributedquantization}.
However, in contrast to the previous two cases, where only two nodes update at a given time, Load-Balancing is a synchronous averaging algorithm where all the agents operate simultaneously. In the traditional Load-Balancing algorithm, the state at each agent is a scalar, which induces a total ordering amongst the agents, i.e., the neighbors of an agent are classified as having greater or smaller state values than the agent's current state. When the states at the agents are multi-dimensional vectors, a total ordering is not available and must be defined. We introduce a variant of Load-Balancing based on the Euclidean distance between the states of any two agents as follows. At time $t$, each agent $i \in [n]$ carries out the following steps: \begin{enumerate}[1.] \item Agent $i$ sends its state to its neighbors. \item Agent $i$ computes the distance between its state and that of each of its neighbors. Let $\mathcal{S}_i$ denote the subset of neighbors of agent $i$ whose states are at maximal Euclidean distance from its own, i.e., \begin{equation}\label{eq:max_dissent_set} \mathcal{S}_i \triangleq \arg \max_{j \in \mathcal{N}_i} \ \|\vecx_i -\vecx_j\|. \end{equation} Agent $i$ sends an averaging request to the agents in $\mathcal{S}_i.$ \label{step:sendRequest} \item Agent $i$ receives averaging requests from its neighbors. If it receives a request from a single agent $j\in \mathcal{S}_i$, then it sends an acknowledgement to that agent. In the event that agent $i$ receives multiple requests, it sends an acknowledgement to one of the requesting agents, chosen uniformly at random.
\item If agent $i$ sends and receives an acknowledgement from agent $j$, then it updates its state as $\vecx_i \gets (\vecx_i + \vecx_{j})/2.$ \end{enumerate} \av{The conditions for interaction between two nodes in Load-Balancing are characterized in the following proposition.} \begin{proposition}\label{prop:LBNecessity} Consider a connected graph $\Gcal$ and a stochastic process $\big\{X(t),A\big(t,X(t)\big)\big\}$, where $A\big(t,X(t)\big)$ is the characterization of averaging according to the Load-Balancing algorithm, i.e., $A\big(t,X(t)\big)X(t)$ is the output of the Load-Balancing algorithm for a network with state matrix $X(t)$, $t \geq 0$. The following statements hold: \begin{enumerate} \item Two agents $i,j$ such that $\{i,j\} \in \Ecal$ average their states only if \begin{multline}\label{eq:LBNecessity} \| \vecx_i(t) - \vecx_j(t) \| \geq \max\Big\{\max_{r\in \Ncal_i\setminus\{j\}}\|\vecx_i(t) - \vecx_r(t)\|, \max_{r\in \Ncal_j\setminus\{i\}}\|\vecx_j(t) - \vecx_r(t)\|\Big\}. \end{multline} \item Let $\{i,j\} \in \Ecal$. If \cref{eq:LBNecessity} holds with strict inequality, then $i,j$ average their states. \end{enumerate} \end{proposition} \av{\cref{prop:LBNecessity} is proven in Appendix~\ref{appendix:LBNecessity}.} \section{On the selection of Max-edges}\label{sec:maxedge} Consider the stochastic process $\big\{X(t),A\big(t,X(t)\big)\big\}$, where $X(t)$ is the network state matrix, and $A\big(t,X(t)\big)$ a state-dependent averaging matrix. Let $\{\Fcal_t\}_{t=0}^\infty$ be a filtration such that $\Fcal_t$ is the $\sigma$-algebra generated by \begin{equation} \big\{\big(X(k),A\big(k,X(k)\big)\big)\mid k\leq t\big\}\setminus\big\{A\big(t,X(t)\big)\big\}. \nonumber \end{equation} We establish that, for the averaging schemes discussed in \cref{sec:consensus}, the pair of agents constituting a max-edge update their states with non-zero probability.
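Operationally, selecting the max-edge amounts to scanning the edge set for the pair of neighboring agents with the largest state disagreement; a minimal sketch (the function name and data layout are illustrative, with states stored as rows of $X$):

```python
import numpy as np

def max_edge(X, edges):
    """Return the edge {i, j} with the largest state disagreement ||x_i - x_j||
    (the max-dissent pair driving Max-Gossip and related schemes)."""
    return max(edges, key=lambda e: np.linalg.norm(X[e[0]] - X[e[1]]))

# Path graph 0-1-2-3 with scalar states; the largest gap sits on edge (1, 2)
X = np.array([[0.0], [0.2], [1.0], [1.1]])
edges = [(0, 1), (1, 2), (2, 3)]
assert max_edge(X, edges) == (1, 2)    # |0.2 - 1.0| = 0.8 dominates
```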
\begin{proposition}\label{prop:LMLBContracting} Let $\big\{X(t),A\big(t,X(t)\big)\big\}_{t=0}^{\infty}$ be the random process generated by the Randomized Gossip, Local Max-Gossip, Max-Gossip, or Load-Balancing consensus scheme. Then, for the random indices $i^*,j^* \in \Vcal$ defined through the max-edge in \cref{eq:emax} as $\emax\big(X(t)\big)=\{i^*,j^*\}$, we have \begin{equation}\label{eqn:maxEdgeDelta} \E \Big[A\big(t,X(t)\big)^T \! \! A\big(t,X(t)\big) \mid \Fcal_t \Big]_{i^*j^*}\geq \delta \ \ \text{a.s.,} \end{equation} where $\delta = \min_{\{i,j\}\in \Ecal} P_{ij}/n$ for Randomized Gossip, with $P_{ij}$ the probability that node $i$ chooses node $j \in \mathcal{N}_i$; $\delta = 1/n$ for Local Max-Gossip; $\delta = 1/2$ for Global Max-Gossip; and $\delta=1/\big(2(n-1)^2\big)$ for Load-Balancing. \end{proposition} \av{\cref{prop:LMLBContracting} establishes that, given the knowledge up to time $t$, the agents comprising the max-edge of the network state matrix $X(t)$ exchange their values, in expectation, with a positive weight bounded away from zero. Qualitatively, for gossip-based algorithms, this implies that there is a positive probability bounded away from zero that the agents comprising the max-edge exchange information with each other. We use \cref{prop:LMLBContracting} along with \cref{thm:maxEdgeContraction} to establish that the averaging matrices characterizing the algorithms discussed in \cref{sec:consensus} are contracting. Therefore, the subgradient methods based on these averaging algorithms converge almost surely to a common optimal solution, as stated in \cref{cor:ConvergenceExamples} of \cref{thm:main1}. In other words, as long as the averaging step involves gossip over the max-edge with positive probability (bounded away from zero), we will have a contraction in the Lyapunov function capturing the sample variance, which is a key step in proving the convergence of our averaging-based subgradient methods.
\cref{prop:LMLBContracting} is proven in Appendix~\ref{appendix:LMLBContracting}.} \section{Convergence of state-dependent Distributed Optimization }\label{sec:conv} In the previous section, we have set the stage for studying the convergence of state-dependent averaging-based distributed optimization algorithms. Our proofs rely on two properties: double stochasticity and the \textit{contraction property} (\cref{thm:maxEdgeContraction}). To state the contraction property, we define the Lyapunov function $V:\R^{n \times d} \rightarrow \mathbb{R}$ as \begin{equation} \label{eq:Lyapunov} V(X) \triangleq \sum_{i=1}^n\|\vecx_i - \bar{\vecx}\|^2, \end{equation} where $X = [\vecx_1,\cdots,\vecx_n]^T$ and $\bar{\vecx}= \frac{1}{n}\sum_{i=1}^n\vecx_i$. \begin{theorem}[Contraction property]\label{thm:maxEdgeContraction} Consider a connected graph $\Gcal$ and the stochastic process $\big\{X(t),A\big(t,X(t)\big)\big\}_{t=0}^{\infty}$ with a natural filtration $\{\Fcal_t\}_{t\geq 0}$ for the dynamics given by Eq.~\eqref{eq:update_mat_1}. If ${A\big(t,X(t)\big) \in \Fcal_{t+1}}$ is doubly stochastic for all $t \geq 0$, and for the random variables $i^*,j^* \in \Vcal$ defined through the max-edge in \cref{eq:emax} as $\emax\big(X(t)\big)=\{i^*,j^*\}$, \begin{equation}\label{ineq:deltaCondition} \E\Big[A\big(t,X(t)\big)^T\! \!A\big(t,X(t)\big)\mid \Fcal_t\Big]_{i^*j^*} \geq \delta, \ \ \text{a.s.,} \end{equation} where $\delta>0$, holds for all $t\geq 0$ and $X(0) \in \R^{n\times d},$ then \begin{equation}\label{eq:thmContraction} \E\Big[V\big(A\big(t,X(t)\big)X(t)\big) \mid \Fcal_t\Big] \leq \lambda V\big(X(t)\big) \, \, \text{a.s.}, \end{equation} where $\lambda = 1-2\delta/\big((n-1)\diam^2\big).$ \end{theorem} Theorem~\ref{thm:maxEdgeContraction} is proven in Appendix~\ref{appendix:edgecontraction} and provides our key new ingredient: proving a contraction result for doubly stochastic averaging matrices containing the maximally dissenting edge. 
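As a sanity check, the contraction in \cref{eq:thmContraction} can be verified numerically for the deterministic Max-Gossip matrix on a path graph, where the diameter is $n-1$ and $\delta = 1/2$, giving $\lambda = 1 - 1/(n-1)^3$; the sketch below is illustrative only (graph, seed, and dimensions are assumptions):

```python
import numpy as np

def lyapunov(X):
    """V(X) = sum_i ||x_i - xbar||^2, as in Eq. (eq:Lyapunov)."""
    return float(np.sum((X - X.mean(axis=0)) ** 2))

def max_gossip_matrix(X, edges):
    """A = I - (1/2)(b_i - b_j)(b_i - b_j)^T for the max-dissent edge {i, j}."""
    n = X.shape[0]
    i, j = max(edges, key=lambda e: np.linalg.norm(X[e[0]] - X[e[1]]))
    d = np.zeros(n)
    d[i], d[j] = 1.0, -1.0
    return np.eye(n) - 0.5 * np.outer(d, d)

# Path graph: delta = 1/2 and diam = n - 1, so lambda = 1 - 1/(n-1)^3
rng = np.random.default_rng(1)
n = 6
edges = [(k, k + 1) for k in range(n - 1)]
X = rng.standard_normal((n, 2))
lam = 1.0 - 1.0 / (n - 1) ** 3
v0 = lyapunov(X)
for _ in range(50):
    A = max_gossip_matrix(X, edges)
    assert lyapunov(A @ X) <= lam * lyapunov(X) + 1e-12  # one-step contraction
    X = A @ X
```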
The proof of Theorem~\ref{thm:maxEdgeContraction} makes use of the double stochasticity of the matrices to characterize the exact one-step decrease in the Lyapunov function and then uses a clever trick to characterize its fractional decrease based on the fact that the underlying communication graph is connected. \begin{remark} Theorem~\ref{thm:maxEdgeContraction} also holds for time-varying graphs provided they remain connected at each time $t$. More precisely, if for a sequence of connected graphs $\{\Gcal_t\}$ and every time $t\geq 0$, the inequality in \cref{ineq:deltaCondition} holds for $i^*,j^*$ defined through $\emax(\Gcal_t,X(t))$, then the inequality in \cref{eq:thmContraction} holds with the scaling at time $t$ being \begin{equation} \lambda_t = 1-\frac{2\delta}{(n-1)\mathrm{diam}(\Gcal_t)^2} \leq 1 - \frac{2\delta}{(n-1)^3}. \nonumber \end{equation} Therefore, the contraction property for connected time-varying graphs holds with a factor of at most $\underline{\lambda} \triangleq 1 - 2\delta/(n-1)^3.$ \end{remark} For a connected graph $\Gcal$ and the stochastic process $\big\{X(t),A\big(t,X(t)\big)\big\}_{t= 0}^{\infty}$ with the filtration $\{\Fcal_t\}_{t= 0}^{\infty}$ generated according to the dynamics in \cref{eq:update_mat_1}, we define a contracting averaging matrix as follows. \begin{definition}[Contracting averaging matrix]\label{asmpn:doublystochastic} A state-dependent averaging matrix $A\big(t,X(t)\big)$ is contracting with respect to the Lyapunov function $V(\cdot)$ in \cref{eq:Lyapunov} if there exists a $\lambda \in (0,1)$ such that \begin{equation}\label{eq:defContractingMatrix} \E\Big[V\Big(A\big(t,X(t)\big)X(t)\Big) \mid \Fcal_t\Big] \leq \lambda V\big(X(t)\big) \end{equation} holds a.s. for all $t\geq 0$.
\end{definition} The main result of this work establishes convergence guarantees for these dynamics, as stated below. \begin{theorem}[Almost sure convergence of state-dependent subgradient methods]\label{thm:main1} Consider the distributed optimization problem in \cref{eq:def_DistOpt_Problem} and let Assumptions \ref{asmpn:Nonempty_solution} and \ref{asmpn:bounded_subgrad} hold. Assume a connected communication graph $\Gcal$ and the subgradient method in \cref{eq:update_mat_1}. If the random matrices $A\big(t,X(t)\big)$ in Eq.~\eqref{eq:update_mat_1} are doubly stochastic and contracting, and the step-sizes $\{\alpha(t)\}$ follow \cref{asmpn:Step-Size}, then for all initial conditions $X(0)\in \R^{n\times d}$, \begin{equation} \lim_{t \to \infty} \vecw_i(t) = \vecw^*, \ \forall i \in [n], \ \mbox{ a.s.}, \nonumber \end{equation} where $\vecw^* \in \sets{W}^*$. \end{theorem} \cref{thm:main1} establishes the almost-sure convergence of the state variables to an optimal solution of \cref{eq:def_DistOpt_Problem} for consensus-based subgradient methods whose averaging matrices are doubly stochastic and contracting. \cref{thm:maxEdgeContraction} provides a simplified condition, the presence of averaging over the `max-edge', which, when satisfied, implies that the averaging matrix is contracting. Note that, as shown in \cref{prop:LMLBContracting}, this simplified condition holds for Local Max-Gossip, Max-Gossip, and Load-Balancing averaging. Thus, the following corollary is immediate from \cref{prop:LMLBContracting}, \cref{thm:maxEdgeContraction}, and \cref{thm:main1}. \begin{corollary}\label{cor:ConvergenceExamples} Consider the distributed optimization problem in \cref{eq:def_DistOpt_Problem} and let Assumptions \ref{asmpn:Nonempty_solution} and \ref{asmpn:bounded_subgrad} hold.
Assume a connected communication graph $\Gcal$ and the subgradient method in Eq.~\eqref{eq:update_mat_1}, where the averaging matrices $A\big(t,X(t)\big)$ are based solely on either Local Max-Gossip, Max-Gossip, or Load-Balancing averaging, and the step-sizes $\{\alpha(t)\}$ follow \cref{asmpn:Step-Size}. Then \begin{equation} \lim_{t \to \infty} \vecw_i(t) = \vecw^*, \,\, \forall i \in [n], \ \ \text{a.s.}, \nonumber \end{equation} for all initial conditions $X(0)\in \R^{n\times d}$, and some $\vecw^* \in \sets{W}^*$. \end{corollary} For the remainder of this section, we provide the key steps and results that are needed to prove \cref{thm:main1}. We defer the proofs of these technical results to the Appendix. The proof strategy for Theorem~\ref{thm:main1} can be broken down into two main steps: (i) showing that the average state variable $\{\bar{\vecx}(t)\}$ converges to a solution of the optimization problem in \eqref{eq:def_DistOpt_Problem}, and (ii) showing that every node $i \in \Vcal$ tracks the dynamics of this average state variable such that the tracking error goes to zero. The first step requires the following result, which establishes a bound on the accumulation of the tracking error for every agent. \begin{lemma}\label{lemma:dist_from_mean} Let $\Gcal$ be a connected graph and consider sequences $\{W(t)\}$ and $\{X(t)\}$ generated by the subgradient method in Eq.~\eqref{eq:update_mat_1} using state-dependent, doubly stochastic, and contracting averaging matrices $A\big(t,X(t)\big)$. If Assumptions~\ref{asmpn:bounded_subgrad} and \ref{asmpn:Step-Size} hold, then for any initial estimates $X(0)\in \mathbb{R}^{n\times d}$, the following hold a.s.\ for all $i\in [n]$ \begin{align} \lim_{t \to \infty} \left\|\vecw_i(t+1) - \bar{\vecx}(t)\right\| = 0,\qquad\text{and}\cr \sum_{t=0}^{\infty}\alpha(t+1)\E\left[ \|\vecw_i(t+1) - \bar{\vecx}(t) \| \mid \Fcal_t\right]<\infty.
\nonumber \end{align} \end{lemma} Lemma~\ref{lemma:dist_from_mean}, which is proven in Appendix~\ref{appendix:distfrommean}, establishes guarantees on the consensus error for the local estimates $\vecw_i(t)$. Lemma~\ref{lemma:iterate_decomp} will be used to bound the distance of the average state $\bar{\vecx}(t)$ to an optimal point. \begin{lemma}[Lemma 8,~\cite{Nedic_2013}]\label{lemma:iterate_decomp} Suppose that Assumption \ref{asmpn:bounded_subgrad} holds. Then, for any connected graph $\Gcal$, initial condition $X(0)\in \R^{n\times d}$, $\vec{v}\in \R^d$, and $t \geq 0$, for the dynamics $\{X(t),A(t,X(t))\}$ of the subgradient method in Eq.~\eqref{eq:update_mat_1} where $A(t,X(t))$ are doubly stochastic, we have \begin{multline} \E\big[\|\bar{\vecx}(t+1) - \vec{v}\|^2 \mid \Fcal_t \big] \leq \|\bar{\vecx}(t) - \vec{v}\|^2 - \alpha(t+1)\frac{2}{n}\Big(F\big(\bar{\vecx}(t)\big) - F(\vec{v})\Big) \\ +\alpha(t+1)\frac{4}{n}\sum_{i=1}^n L_i \E[\|\vecw_i(t+1) - \bar{\vecx}(t)\|\mid \Fcal_t] +\alpha^2(t+1)\frac{L^2}{n^2}, \ \ \text{a.s.} \nonumber \end{multline} \end{lemma} We note that Lemma 8 in~\cite{Nedic_2013} was originally intended for state-independent dynamics. However, its proof relies only on the double stochasticity of the averaging matrices, the convexity of the local functions, and the boundedness of the subgradients, not on whether the averaging is state-dependent. Finally, combining the above two results implies that the distance of each agent's local estimate $\vecx_i(t)$ to the optimal set $\sets{W}^*$ will be \textit{approximately} decreasing. The following result will then be used to show that this \textit{approximate} decrease results in convergence to $\sets{W}^*$. \begin{lemma} \label{lemma:convex} Consider a minimization problem $\min_{\vecx \in \R^d} f(\vecx)$, where $f:\R^d \to \R$ is a continuous function. Assume that the solution set $\sets{X}^*$ of the problem is nonempty.
Let $\{\vecx_t\}$ be a stochastic process such that for all $\vecx \in \sets{X}^*$ and for all $t \geq 0$, \begin{equation} \E[\|\vecx_{t+1} - \vecx\|^2 \mid \Fcal_t] \leq (1+b_t)\|\vecx_t - \vecx\|^2 - a_t\big(f(\vecx_t) - f(\vecx)\big) + c_t \qquad\textrm{a.s.}, \nonumber \end{equation} where $b_t\geq 0$, $a_t\geq 0$, and $c_t \geq 0$ for all $t \geq 0$ and ${\sum_{t=0}^\infty b_t <\infty} $, $\sum_{t=0}^\infty a_t = \infty $, and $\sum_{t=0}^\infty c_t < \infty$ a.s. Then the sequence $\{\vecx_t\}$ converges to a solution $\vecx^* \in \sets{X}^*$ a.s. \end{lemma} This result has been proven as part of~\cite[Theorem~1]{aghajan2020distributed}; due to its stand-alone significance, we state it as a lemma above, and its proof is provided in Appendix~\ref{appendix:lemmaConvex}. Now, we are ready to formally prove Theorem~\ref{thm:main1} by combining the aforementioned results.\\ \begin{proof}[{Proof of Theorem~\ref{thm:main1}}] From Lemma~\ref{lemma:iterate_decomp}, for ${\vec{v} = \vecw^*\in \sets{W}^*}$, we have \begin{multline} \E[\|\bar{\vecx}(t+1) - \vecw^*\|^2 \mid \Fcal_t]\leq \|\bar{\vecx}(t) - \vecw^*\|^2 - \frac{2\alpha(t+1)}{n}\Big(F\big(\bar{\vecx}(t)\big) - F^*\Big) + \alpha^2(t+1)\frac{L^2}{n^2} \\ +4\frac{\alpha(t+1)}{n}\sum_{i=1}^n L_i \E[\|\vecw_i(t+1) - \bar{\vecx}(t)\|\mid \Fcal_t],\nonumber \end{multline} for all $t \geq 0$. From Lemma~\ref{lemma:dist_from_mean}, we know that \begin{align} &\sum_{t=0}^{\infty} 4\frac{\alpha(t+1)}{n}\sum_{i=1}^n L_i \E[\|\vecw_i(t+1) - \bar{\vecx}(t)\|\mid \Fcal_t] = \nonumber \\\nonumber &\sum_{i=1}^n \frac{4L_i}{n} \sum_{t=0}^{\infty}\alpha(t+1)\E[\|\vecw_i(t+1) - \bar{\vecx}(t)\| \mid \Fcal_t]< \infty \ \ \text{a.s.} \end{align} Furthermore, $\alpha(t)$ is not summable and ${\sum_{t=0}^\infty \alpha^2(t) < \infty}$.
Therefore, all the conditions for Lemma~\ref{lemma:convex} hold with $a_t = 2\alpha(t+1)/n$, $b_t = 0$, and \begin{equation} c_t = \alpha(t+1)\frac{4}{n}\sum_{i=1}^n L_i\E[ \|\vecw_i(t+1) - \bar{\vecx}(t)\| \mid \Fcal_t ] + \alpha^2(t+1)\frac{L^2}{n^2}. \nonumber \end{equation} Therefore, from Lemma~\ref{lemma:convex}, the sequence $\{\bar{\vecx}(t)\}$ converges to a solution $\hat{\vecw} \in \sets{W}^*$ almost surely. Finally, Lemma~\ref{lemma:dist_from_mean} implies that $\lim_{t \to \infty} \|\vecw_i(t+1)- \bar{\vecx}(t)\| = 0$ for all $i \in [n]$ almost surely. Therefore, the sequences $\{\vecw_i(t+1)\}$ converge to the same solution $\hat{\vecw}\in \sets{W}^*$ for all $i \in [n]$ almost surely. \end{proof} \input{convergence_rate} \section{Numerical Examples}\label{sec:numerical} To illustrate our analytical results, we present a simulation of a distributed optimization problem where the local functions' subgradients are not restricted to be uniformly bounded. In particular, we look at the standard distributed estimation problem in a sensor network setting with $n=180$ agents. Here, each agent $i \in \Vcal$ wants to estimate an unknown parameter $\theta_0$. Each node has access to a noisy measurement of the parameter $c_i = \theta_0 + n_i$, where the $n_i$'s are independent, zero-mean Gaussian random variables with variance $\sigma^2_i>0$. In this setting, the Maximum Likelihood (ML) estimator \cite[Theorem~5.3]{van2007parameter} is the minimizer of the separable cost function $F(w) = \sum_{i=1}^n (w - c_i)^2/\sigma_i^2$. Note that this problem is a distributed optimization problem with the local cost function $f_i(w)=(w - c_i)^2/\sigma_i^2$. For the variances, we picked $1/\sigma_i^2$ independently and uniformly over $(0,1)$. For each node $i \in [n]$, the initial local estimates $x_i(0)$ are drawn independently from a standard Gaussian distribution.
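For this quadratic problem, the minimizer is the inverse-variance weighted average of the measurements; the following small sketch of the simulated setup (seed and constants are illustrative, not the paper's exact data) checks the closed form against the first-order optimality condition:

```python
import numpy as np

# Illustrative instance of the sensor-network estimation problem
rng = np.random.default_rng(42)
n, theta0 = 180, 3.0
inv_var = rng.uniform(0.0, 1.0, n)                       # 1/sigma_i^2 ~ Uniform(0, 1)
c = theta0 + rng.standard_normal(n) / np.sqrt(inv_var)   # c_i = theta0 + n_i

# Minimizer of F(w) = sum_i (w - c_i)^2 / sigma_i^2:
# the inverse-variance weighted average of the measurements
w_star = np.sum(c * inv_var) / np.sum(inv_var)

# first-order optimality: F'(w_star) = 0
assert abs(2.0 * np.sum((w_star - c) * inv_var)) < 1e-8
```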
We consider the performance for different topologies of the underlying communication graph $\mathcal{G}$, ranging from dense graphs (Complete and Barbell) and moderately dense graphs (Erd\H{o}s-R\'enyi) to sparse graphs (Line and Star). We chose a connected graph with edge probability $p=0.4$ for the Erd\H{o}s-R\'enyi graph. For the Barbell graph, we chose an equal number of nodes for the three components -- two Complete graphs and the connecting Line graph. We ran the averaging-based subgradient optimizer with four different averaging update rules: Randomized Gossip \cite{boyd2006randomized}, Local Max-Gossip, Max-Gossip, and Load-Balancing. For Randomized Gossip, at each time a node in $[n]$ wakes up uniformly at random and chooses one of its neighbors uniformly at random for communication. To account for the stochastic nature of the Randomized Gossip and Local Max-Gossip algorithms, we average the error values over $10$ runs, keeping the initial conditions and samples at the nodes the same. The resulting plots in \cref{fig:experiment180} show the decay of the error $\|\vecw(t) - w_*\one\|$ as a function of $t$, where $w_*=\sum_{i=1}^n \frac{c_i}{\sigma_i^2}/\sum_{i=1}^n \frac{1}{\sigma_i^2}$ is the optimal solution for $F(w)$. \av{For the Erd\H{o}s-R\'enyi communication graph, we also plot the decay of the error with the number of bits exchanged between the nodes in \cref{fig:er_plots} for Randomized Gossip, Local Max-Gossip, and Load-Balancing.} \av{In the simulation, $32$ bits are used for the exchange of each estimate and $1$ bit is used for the exchange of each acknowledgement. Therefore, the number of bits exchanged per step for Randomized Gossip is $64$. For Local Max-Gossip, at time $t$ with $s(t)\in [n]$ being the randomly chosen node, $|\Ncal_{s(t)}| + 32 |\Ncal_{s(t)}| + 32 $ bits are exchanged for waking up the neighboring nodes, obtaining their values, and sending the neighbor with the maximum disagreement its own value.
Finally, for Load-Balancing, $ 32\sum_{i=1}^n |\Ncal_i| + n + {\text{ACK}}(t)$ bits are exchanged for sharing the values with the neighbors, sending a request to the neighbor with the maximum disagreement, and sending the acknowledgement, where ${\text{ACK}}(t)$ is the total number of bits exchanged for sending the acknowledgement bits at time $t$.\footnote{In the numerical simulation, there are no cases with multiple neighbors with maximum disagreement.}} \subsection{Comparison of Asynchronous Methods} From \cref{fig:experiment180}, the subgradient methods using state-dependent averaging show an improvement in convergence rate. The convergence rates increase as we go from Randomized Gossip, Local Max-Gossip, and Max-Gossip to Load-Balancing averaging-based optimizers. In the following discussion, we refer to the subgradient methods by their underlying averaging algorithm. In general, the performance of Max-Gossip is superior to that of Local Max-Gossip. Clearly, Local Max-Gossip converges faster than Randomized Gossip. However, the convergence rate also depends on the graph topology: Local Max-Gossip applied on a Star graph has essentially the same rate as Randomized Gossip, since the peripheral nodes can only gossip with the central node, and the first node selected for gossiping is $n-1$ times more likely to be a peripheral node than the central node. Overall, we notice that the performance gain of Max-Gossip and Local Max-Gossip over Randomized Gossip increases with connectivity. \av{Moreover, from \cref{fig:er_plots} we note the significantly better performance of Local Max-Gossip, with respect to the number of exchanges between the nodes, as opposed to that of synchronous Load-Balancing. } \subsection{Max-Gossip vs.
Load-Balancing} When comparing different state-dependent averaging schemes, it should be noted that, unlike Randomized Gossip, Max-Gossip, and Local Max-Gossip, Load-Balancing is a synchronous scheme in which, in addition to the max-edge, other local max-edges are often incorporated in the averaging simultaneously. Therefore, it is only natural that the convergence rate of Load-Balancing is superior to that of Max-Gossip, since it averages not only the two nodes defined by the max-edge but also other nodes connected by edges with large disagreement at the same time. By a similar logic, for the Complete graph, the performance of Load-Balancing and Max-Gossip is the same: since all the nodes hold scalar estimates, the ordering between the estimates implies that all the nodes send their requests for averaging to either the node with the maximum or the minimum estimate, resulting in only the max-edge performing the update. We observe that the gap in performance between Load-Balancing and Max-Gossip, the latter having the best performance amongst the discussed asynchronous methods, increases with the diameter of the graph. Characterizing the analytical dependence of the convergence rate as a function of graph topology metrics is of interest for future work.
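The qualitative gap between max-dissent and random pairing can be reproduced in a consensus-only toy experiment; the sketch below uses uniform edge selection for Randomized Gossip, a simplifying assumption relative to the node-then-neighbor protocol of \cref{sec:RG}, and a path graph chosen for illustration:

```python
import numpy as np

def gossip_step(X, i, j):
    """Average the states of agents i and j, as in Eq. (eq:update)."""
    X[i] = X[j] = 0.5 * (X[i] + X[j])
    return X

def variance(X):
    """V(X) = sum_i ||x_i - xbar||^2."""
    return float(np.sum((X - X.mean(axis=0)) ** 2))

rng = np.random.default_rng(0)
n, T = 20, 400
edges = [(k, k + 1) for k in range(n - 1)]       # path graph
X0 = rng.standard_normal((n, 1))

# Max-Gossip: deterministically average across the max-dissent edge
Xm = X0.copy()
for _ in range(T):
    i, j = max(edges, key=lambda e: abs(Xm[e[0], 0] - Xm[e[1], 0]))
    gossip_step(Xm, i, j)

# Randomized Gossip (uniform edge variant): average across a random edge
Xr = X0.copy()
for _ in range(T):
    i, j = edges[rng.integers(len(edges))]
    gossip_step(Xr, i, j)

# pairwise averaging never increases V, and Max-Gossip additionally obeys
# the per-step contraction of Theorem (thm:maxEdgeContraction) with delta = 1/2
lam = 1.0 - 1.0 / ((n - 1) ** 3)
assert variance(Xm) <= lam ** T * variance(X0) + 1e-12
assert variance(Xr) <= variance(X0)
```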
\begin{figure}[t] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Plots/OptDist/Complete.pdf} \subcaption{Complete Graph} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Plots/OptDist/Barbell.pdf} \subcaption{Barbell Graph} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Plots/OptDist/Line.pdf} \subcaption{Line Graph} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Plots/OptDist/Star.pdf} \subcaption{Star Graph} \end{subfigure} \caption{Error decay for different graphs with $180$ nodes} \label{fig:experiment180} \end{figure} \begin{figure} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Plots/OptDist/ER_large_t.pdf} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Plots/OptDist/ER_bits.pdf} \end{subfigure} \caption{Error decay for Erd\H{o}s--R\'enyi Graph with $180$ nodes} \label{fig:er_plots} \end{figure} \color{black} \subsection{Logistic Regression} In order to illustrate the applicability of the results to a more general high-dimensional convex problem, we consider regularized logistic regression for classification over the MNIST dataset, using $56000$ samples. In the experiment we train a model with the loss function defined as \begin{align*} J(\vec{w},b) &= \frac{1}{m}\sum_{j=1}^m \left(-y_j\log\frac{1}{1+\exp(-(\vec{x}_j^T\vec{w}+b))} -(1-y_j)\log\frac{\exp(-(\vec{x}_j^T\vec{w}+b))}{1+\exp(-(\vec{x}_j^T\vec{w}+b))}\right) \cr &\qquad + \frac{1}{2m}\|\vec{w}\|^2+ \frac{1}{2m}|b|^2, \end{align*} where $\{(\vec{x}_j,y_j)\}_{j=1}^{56000}$ are the samples used for training. The samples are used to classify the digits in the MNIST dataset into two classes based on whether the digits are greater than or equal to $5$ or not.
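In code, the loss above can be evaluated directly. The following minimal sketch (our own, in pure Python, not the training code used for the experiment) computes $J(\vec{w},b)$ for a toy sample list, with labels $y_j \in \{0,1\}$:

```python
import math

def logistic_loss(w, b, samples):
    """Regularized logistic loss J(w, b) as in the experiment:
    cross-entropy of sigmoid(w.x + b) plus (1/2m)(||w||^2 + b^2)."""
    m = len(samples)
    total = 0.0
    for x, y in samples:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))       # predicted P(y = 1)
        total += -y * math.log(p) - (1 - y) * math.log(1 - p)
    reg = (sum(wi * wi for wi in w) + b * b) / (2 * m)
    return total / m + reg
```

At $\vec{w}=\vec{0}$, $b=0$ every sample gets predicted probability $1/2$, so the unregularized loss equals $\log 2$, a convenient sanity check for the initialization used in the experiment.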
The experiment is run over a graph with $20$ nodes with each node containing the same number of samples from the dataset. We initialize the nodes with all zero vectors. The communication graph representing the underlying connection between the nodes is a ladder graph. We consider the performance for the averaging-based subgradient optimizer with Randomized Gossip, Local Max-Gossip, Max-Gossip, and Load Balancing. For Randomized Gossip, as in the previous experiment, at each time a node in $[n]$ wakes uniformly at random and chooses one of its neighbors uniformly at random. We average the performance for Randomized Gossip and Local Max-Gossip over 3 runs. In \cref{fig:network_variance} we plot the network variance, $\|W(t) - \frac{\one\one^T}{n}W(t)\|_F^2$, for step-size $\alpha(t)=\frac{1}{t}$ for all $t\geq 1$. We observe that the loss decays similarly for all the consensus-based subgradient methods. However, the network variance, defined as the sum of the squared deviations of the state estimates from their mean, decays fastest over time for Load Balancing, followed by Max-Gossip, Local Max-Gossip, and finally Randomized Gossip. \begin{figure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{Plots/Network_Variance_log.png} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{Plots/Loss.png} \end{subfigure} \caption{Network Variance for Ladder Graph with $20$ nodes} \label{fig:network_variance} \end{figure} \color{black} \section{Conclusions}\label{sec:conclude} We proposed, studied, and analyzed the role of maximal dissent nodes in distributed optimization schemes, leading to many exciting state-dependent consensus-based subgradient methods. The proof of our result relies on a certain contraction property of these schemes.
Our result opens up avenues for synthesizing or extending the use of state-dependent averaging schemes for distributed optimization, including the Max-Gossip, Local Max-Gossip, and Load-Balancing algorithms. Finally, we compared simulation results of a distributed estimation problem for gossip-based subgradient methods and the proposed state-dependent algorithms. Our numerical experiments show the faster convergence speed of schemes that use maximal dissent between nodes compared with state-independent gossip schemes. These simulations strongly support the intuition behind our main result, i.e., mixing of information between the maximal dissent nodes is critically important for the operation (and enhancement) of the consensus-based subgradient methods. Although we have shown the convergence of such state-dependent algorithms, establishing their rate of convergence, and especially relating it to graph quantities such as the diameter and edge density of the graph, remains an open problem for future research. {\color{black} The introduction of a state-dependent element into other classes of algorithms, specifically those that provide linear convergence rates, such as distributed gradient tracking methods \cite{qu2017harnessing,pu2021distributed}, and its convergence analysis are also directions for future work.} \appendix \section{Proof of \cref{prop:LBNecessity}} \label{appendix:LBNecessity} \begin{proof}[Proof of \cref{prop:LBNecessity}] For any $\omega\in \Omega$, consider $X(t;\omega) \in \R^{n\times d}$. If nodes $i$ and $j$ update their values to their average, that is $\big(\vecx_i(t;\omega) + \vecx_j(t;\omega)\big)/2$, then we know that during the round of the Load-Balancing algorithm starting at value $X(t;\omega)$ in step 2, node $i$ and node $j$ have sent their averaging requests to each other.
Therefore, we have $j \in \arg\max_{r \in \Ncal_i}\|\vecx_i(t;\omega) - \vecx_r(t;\omega)\|$ and $i \in \arg\max_{r \in \Ncal_j}\|\vecx_j(t;\omega) - \vecx_r(t;\omega)\|$. Hence, for any $\omega \in \Omega$, \begin{equation}\label{eq:proofLBnecessity}\| \vecx_i(t;\omega) - \vecx_j(t;\omega) \| \geq \max\Big\{\max_{r\in \Ncal_i\setminus\{j\}}\|\vecx_i(t;\omega) - \vecx_r(t;\omega)\|, \max_{r\in \Ncal_j\setminus\{i\}}\|\vecx_j(t;\omega) - \vecx_r(t;\omega)\|\Big\}. \end{equation} On the other hand, if \cref{eq:proofLBnecessity} holds with strict inequality, then node $i$ and node $j$ send averaging requests only to each other in step 2, respond to each other in step 3, and carry out their averaging according to step 4. \end{proof} \section{Proof of \cref{prop:LMLBContracting}}\label{appendix:LMLBContracting} \begin{proof} We first discuss the result for Randomized Gossip, Local Max-Gossip, and Max-Gossip averaging. The averaging matrices for the gossip algorithms in which two agents update their states to their average take the form of \cref{eq:matrix_form_1}. Therefore, for these gossip algorithms we have \begin{equation} A\big(t,X(t)\big)^T \! \! A\big(t,X(t)\big) = A\big(t,X(t)\big) \nonumber \end{equation} and \begin{equation} \E\Big[A\big(t,X(t)\big)^T A\big(t,X(t)\big) \mid \Fcal_t \Big] = \E\Big[A\big(t,X(t)\big) \mid \Fcal_t \Big]. \nonumber \end{equation} Consider two nodes $ i,j \in \Vcal $ such that $\{i,j\}\in \Ecal$. For Randomized Gossip, $\E\big[A\big(t,X(t)\big)_{ij} \mid \Fcal_t\big] = (P_{ij} +P_{ji})/(2n).$ Moreover, since $\{i,j\} \in \Ecal$, we have $P_{ij},P_{ji}>0$. Let $P_* = \min_{\{i,j\}\in \Ecal} P_{ij}$. For the max-edge $\{i^*,j^*\},$ \cref{eqn:maxEdgeDelta} holds with ${\delta = P_*/n >0}$. Next, consider Local Max-Gossip. For a node $i\in \Vcal$ and state estimate matrix $X(t)$, let $r_i\big(X(t)\big)$ be determined according to \cref{eqn:rsxt}, and consider the max-edge $\emax\big(X(t)\big)=\{i^*,j^*\}$.
Then, $r_{i^*}\big(X(t)\big) = j^*$ and $r_{j^*}\big(X(t)\big)=i^*$. Thus, \begin{align} \E\Big[A\big(t,X(t)\big)_{i^*j^*} \mid \Fcal_t\Big] =\frac{1}{n} \nonumber \end{align} and Local Max-Gossip averaging satisfies inequality \cref{eqn:maxEdgeDelta} with $\delta = 1/n.$ Similarly, for the Max-Gossip averaging with state estimate $X(t)$ at time $t$, for the max-edge $\emax\big(X(t)\big)=\{i^*,j^*\}$, we have \begin{equation} \E\left[A\big(t,X(t)\big)_{i^*j^*} \mid \Fcal_t \right] = \frac{1}{2}, \nonumber \end{equation} and \cref{eqn:maxEdgeDelta} holds with $\delta = 1/2$. Let us now discuss the presence of the max-edge in the Load-Balancing averaging scheme. Consider the state estimate matrix $X(t)$ and let $\emax\big(X(t)\big) = \{i^*,j^*\}$ be the max-edge with respect to $X(t)$. By the definition of a max-edge we know that nodes $i^*, j^*$ satisfy inequality \cref{eq:LBNecessity}. Consider first the case when nodes $i^*,j^*$ satisfy \cref{eq:LBNecessity} with strict inequality. From \cref{prop:LBNecessity}, we know that $A\big(t,X(t)\big)_{i^*j^*}, A\big(t,X(t)\big)_{j^*i^*}, A\big(t,X(t)\big)_{i^*i^*}, A\big(t,X(t)\big)_{j^*j^*}$ are all equal to $1/2,$ which implies that $A\big(t,X(t)\big)_{i^*\ell}=A\big(t,X(t)\big)_{\ell j^*}=0$ for all $\ell \not \in \{i^*,j^*\}$. Therefore, \begin{equation} \E\Big[A\big(t,X(t)\big)^T \!\! A\big(t,X(t)\big) \mid \Fcal_t\Big]_{i^*j^*} = 1/2, \nonumber \end{equation} and the inequality in \cref{eqn:maxEdgeDelta} holds with $\delta = 1/2$. Finally, consider the case when there are multiple neighbors of nodes $i^*, \,j^*$ at distance equal to $\|\vecx_{i^*}(t)-\vecx_{j^*}(t)\|$. Let $|\Scal_{i^*}|\geq 1$ and $|\Scal_{j^*}|\geq 1$, where $\Scal_i$ is given by \cref{eq:max_dissent_set}. Then, according to the Load-Balancing algorithm, nodes $i^*, \, j^*$ update their states to their average with probability $1/(|\Scal_{i^*}| \cdot |\Scal_{j^*}|)$.
Since $|\Scal_{i^*}|\leq n-1$ and $|\Scal_{j^*}|\leq n-1$, we have \begin{equation} \E \Big[\Big(A\big(t,X(t)\big)^T \! \! A\big(t,X(t)\big)\Big)_{i^*j^*} \mid \Fcal_t \Big] \geq \frac{1}{2(n-1)^2}, \nonumber \end{equation} and \cref{eqn:maxEdgeDelta} holds with $\delta = 1/\big(2(n-1)^2\big).$ \end{proof} \section{Proof of Theorem~\ref{thm:maxEdgeContraction}}\label{appendix:edgecontraction} To prove \cref{thm:maxEdgeContraction} we must first define a few quantities related to the distances between the node estimates on the graph and the relationships between them. \begin{definition}\label{def:distances} Consider a connected graph $\Gcal$ and a matrix $X = [\vecx_1,\dots,\vecx_n]^T \in \R^{n\times d}$ such that $\vecx_i \in \R^d$ is the estimate at node $i$ in the graph $\Gcal$. Let $d(X)$ denote the \underline{maximal distance} between the estimates of \textit{any} two nodes in the graph \begin{equation}\label{eq:dist_any} d(X) \triangleq \max_{i,j \in \{1,2,\dots,n\}} \|\vecx_i - \vecx_j\|. \end{equation} Let $d_{\Gcal}(X)$ denote the maximal distance between the estimates among any two \textit{connected} nodes in the graph \begin{equation}\label{eq:dist_edge} d_{\Gcal}(X) \triangleq \max_{\{i,j\} \in \Ecal} \|\vecx_i - \vecx_j\|. \end{equation} Finally, let $\diam$ denote the diameter of $\Gcal$, i.e., the \textit{longest shortest path} between any two nodes of the graph. \end{definition} \begin{proposition}\label{prop:rel_d} Given a connected graph $\Gcal$ and a matrix $X = [\vecx_1,\dots,\vecx_n]^T \in \R^{n\times d}$, such that $\vecx_i \in \R^d$ is the solution estimate at node $i$ in the graph $\Gcal$, we have \begin{equation} \frac{d(X)}{\diam} \leq d_{\Gcal}(X) \leq d(X). \nonumber \end{equation} \end{proposition} \vspace{5pt} \begin{proof} The upper bound on $d_{\Gcal}(X)$ follows from \cref{eq:dist_any,eq:dist_edge} in \cref{def:distances}.
To prove the lower bound on $d_{\Gcal}(X)$, we assume, without loss of generality, that the rows of the matrix $X \in \R^{n\times d}$ are such that $d(X) = \|\vecx_1 - \vecx_n\|$. Since $\mathcal{G}$ is connected, its diameter is finite and there is a path of length $k\leq \diam$, denoted by $\{v_0,v_1\},\{v_1,v_2\},\dots,\{v_{k-1},v_k\}$, where $v_0=1$ and $v_k=n$, with $v_i\in \Vcal$ for $i = 0,1,\ldots,k$. The distance $d(X)$ is bounded as \begin{equation}\label{eq:triangle_ineq} \|\vecx_1 - \vecx_n\| \leq \sum_{i=0}^{k-1} \|\vecx_{v_i} - \vecx_{v_{i+1}} \|, \end{equation} where Eq.~\eqref{eq:triangle_ineq} follows from the triangle inequality. Finally, each term in the sum in Eq.~\eqref{eq:triangle_ineq} is bounded above by $d_{\Gcal}(X)$. Hence, \begin{equation} d(X) \leq kd_{\Gcal}(X)\leq \diam d_{\Gcal}(X). \nonumber \end{equation} \end{proof} Next, we state a result quantifying the decrease in the Lyapunov function defined in \cref{eq:Lyapunov}, which is the vector form of~\cite[Lemma~1]{nedic2009distributedquantization}. \begin{lemma}\label{lemma:DecreaseLyapunov} Given a doubly stochastic matrix $A \in \R^{n\times n}$, let $c_{ij}$ denote the $(i,j)$-th entry of the matrix $A^TA$. Then for all $X=[\vecx_1,\dots,\vecx_n]^T \in \R^{n\times d}$, we have \begin{equation} V(AX) = V(X) - \sum_{i<j} c_{ij} \|\vecx_i - \vecx_j\|^2. \nonumber \end{equation} \end{lemma} \begin{proof} By definition, the Lyapunov function in \cref{eq:Lyapunov} can be written as \begin{equation} V(X) = \Tr\big[(X-\bar{X})^T(X-\bar{X})\big],\nonumber \end{equation} where $\bar{X} = \frac{\one\one^T}{n}X$. The double stochasticity of $A$ implies $\overline{AX}= \frac{\one \one^T}{n} AX = \frac{\one\one^T}{n}X = \frac{A\one\one^T}{n}X = A\bar{X}$. Therefore, \begin{align} V(AX) &= \Tr[(AX- A\bar{X})^T(AX- A\bar{X})]. \nonumber \end{align} Finally, \begin{equation} V(X)-V(AX) = \Tr[(X- \bar{X})^T (I- A^TA)(X- \bar{X})].
\nonumber \end{equation} Since $A^TA$ is a symmetric and stochastic matrix, we have $c_{ij}= c_{ji}$ and $c_{ii} =1 - \sum_{j\neq i}c_{ij}$. Thus, \begin{equation} A^T A = I - \sum_{i<j}c_{ij}(\vec{b}_i - \vec{b}_j)(\vec{b}_i - \vec{b}_j)^T,\nonumber \end{equation} where $\vec{b}_i\in \mathbb{R}^n$ is the standard basis vector for all $i \in \Vcal$. Since \begin{equation} \Tr[(X- \bar{X})^T(\vec{b}_i - \vec{b}_j)(\vec{b}_i - \vec{b}_j)^T(X-\bar{X})] = \|\vecx_i - \vecx_j\|^2, \nonumber \end{equation} we have \begin{align} V(X)- V(AX) = \sum_{i<j}c_{ij}\|\vecx_i - \vecx_j\|^2. \nonumber \end{align} \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:maxEdgeContraction}] At time $t\geq 0$ consider the state estimate $X(t) = [\vecx_1(t), \dots, \vecx_n(t) ]^T \in \R^{n\times d}$, the corresponding max-edge $\emax(X(t))=\{i^*,j^*\}$ and a doubly stochastic averaging matrix $A(t,X(t))$ such that \begin{equation} \E\big[\big(A(t,X(t))^T\!\! A(t,X(t))\big)_{i^*j^*} \mid \Fcal_t\big] \geq \delta >0 \ \ \text{a.s.} \nonumber \end{equation} Define $$\Omega_{\delta}(t) = \big\{\omega: \E\big[\big(A(t,X(t))^T\!\! A(t,X(t))\big)_{i^*j^*} \mid \Fcal_t\big] \geq \delta\big\}.$$ For legibility, we drop the time index in the variables for the rest of this proof and use $X$, $\Fcal$, $\Omega_{\delta}$, and $A(X)$ instead of $X(t)$, $\Fcal_t$, $\Omega_{\delta}(t)$, and $A(t,X(t))$. Following arguments similar to the ones from~\cite[Lemma~9]{nedic2009distributedaveraging}, consider $X \in \Fcal$ and a doubly stochastic matrix $A(X)$ such that \begin{equation} \mathbb{E}\big[\big(A(X)^T \! \! A(X)\big)_{i^*j^*} \mid \Fcal\big]\geq \delta>0 \ \ \text{a.s.}, \label{eqn:AssumptionDecreaseInMaxEdge} \end{equation} where $\Fcal$ is a $\sigma$-field and ${\emax(X) = \{i^*,j^*\}}$. We will show that ${\mathbb{E}\big[V(A(X)X)\mid \Fcal\big] \leq \lambda V(X)}$ a.s. for some $\lambda \in (0,1)$.
From \cref{lemma:DecreaseLyapunov}, the difference in the quadratic Lyapunov function $V$ evaluated at $X$ and $A(X)X$ is given by \begin{equation} V(X) - V(A(X)X) = \sum_{i<j} c_{ij}(X)\|\vecx_i - \vecx_j\|^2, \nonumber \end{equation} where $c_{ij}(X)$ is the $(i,j)$-th entry of $ A(X)^TA(X) $, {i.e.}, $c_{ij}(X) = (A(X)^T \! \! A(X))_{ij}$. Taking the conditional expectation with respect to the filtration $\mathcal{F}$, we obtain \begin{align} V(X) &- \E\big[V(A(X)X) \mid \Fcal\big] \nonumber\\ & = \sum_{i<j}\big(\E[\big(A(X)^T \!\! A(X)\big)_{ij} \mid \Fcal]\big)\|\vecx_i - \vecx_j\|^2 \nonumber\\ & \geq \E\big[c_{i^*j^*}(X) \mid \Fcal\big] \|\vecx_{i^*} - \vecx_{j^*}\|^2 \geq \delta \|\vecx_{i^*} - \vecx_{j^*}\|^2 \ \text{a.s.}, \nonumber \end{align} where $\emax(X)= \{i^*,j^*\}$, the first inequality follows from the non-negativity of the squared terms, and the second inequality follows from \cref{eqn:AssumptionDecreaseInMaxEdge}. Recall that the constant $\delta$ depends on the averaging scheme. If $V(X) = 0$, more precisely, for the sample paths characterized by $\omega\in \Omega_{\delta}(t)$ such that $V(X(t;\omega))=0$\footnote{We omit the dependency on $\omega$ and $t$ for legibility.}, then $X =\one \vec{c}^T$ for some $\vec{c} \in \R^{d}$. Therefore, $A(X)X = A(X)\one\vec{c}^T=\one\vec{c}^T$ since $A(X)$ is doubly stochastic, and hence $V(A(X)X) = 0$. Thus, the inequality $\mathbb{E}\big[V(A(X)X)\mid \Fcal\big] \leq \lambda V(X)$ is satisfied. Let ${\Lcal \triangleq \{ \one \vec{p}^T \mid \vec{p}\in \R^d\}}$.
For $X= [\vecx_1,\cdots,\vecx_n]^T \not\in \Lcal $, more precisely, for the sample paths characterized by $\omega\in \Omega_{\delta}(t)$ such that $X(t;\omega)\not\in\mathcal{L}$, the conditional expected fractional decrease in the Lyapunov function is \begin{equation} \frac{V(X)-\E[V(A(X)X) \mid \Fcal]}{V(X)} \geq \delta \frac{\|\vecx_{i^*} - \vecx_{j^*}\|^2}{\sum_{i=1}^n \|\vecx_i - \bar{\vecx}\|^2}, \nonumber \end{equation} where $\bar{\vecx} = \frac{1}{n}\sum_{i=1}^n \vecx_i$. Using the definition of $d_{\Gcal}(X)$ and Proposition~\ref{prop:rel_d}, we obtain the following bound \begin{equation} \frac{V(X)-\E[V(A(X)X)\mid \Fcal]}{V(X)} \geq \frac{\delta}{\diam^2}\frac{d^2(X)}{\sum_{i=1}^n \|\vecx_i - \bar{\vecx}\|^2}. \label{eq:Vt_ratio_1} \end{equation} For $X\not\in \mathcal{L}$, let \begin{equation} g(X)\triangleq \frac{d^2(X)}{\sum_{i=1}^n \|\vecx_i - \bar{\vecx}\|^2}.\nonumber \end{equation} Note that $g(X)$ satisfies the following invariance relations \begin{equation} g(X + \one\vec{p}^T)=g(X), \ \ \vec{p} \in \R^d, \nonumber \end{equation} and \begin{equation} g(cX)=g(X), \ \ c \in \R\setminus\{0\}. \nonumber \end{equation} Therefore, for $X\not\in \Lcal$ the following inequality and identity hold \begin{align} g(X)& \geq \min_{Z \in \mathbb{R}^{n\times d} : \sum_i \vec{z}_i = \vec{0}}\frac{d^2(Z)}{\sum_{i=1}^n \|\vec{z}_i\|^2} \nonumber \\ &= \min_{Z \in \mathbb{R}^{n\times d} : \sum_i \vec{z}_i = \vec{0}; \sum_{i} \|\vec{z}_i\|^2 =1}{d^2(Z)}. \nonumber \end{align} Note that if $\sum_{i=1}^n \vec{z}_i=\vec{0}$ and $\sum_{i=1}^n \|\vec{z}_i\|^2=1$, then we have \begin{equation} \sum_{1\leq i <j \leq n} \inp{\vec{z}_i}{\vec{z}_j} = - \frac{1}{2}\sum_{i=1}^n \|\vec{z}_i\|^2 = -\frac{1}{2}. \label{eq:inpValue} \end{equation} By definition, $d(Z) \geq \|\vec{z}_i - \vec{z}_j \|$ for all $i,j \in [n]$.
Using the fact that the maximum of a set of values is at least its average, applied to the set $\{\|\vec{z}_{i} -\vec{z}_j\|^2\}_{1\leq i < j \leq n}$, we get \begin{align} d^2(Z) &\geq \frac{2}{n(n-1)} \sum_{1 \leq i < j \leq n} \|\vec{z}_i - \vec{z}_j\|^2 = \frac{2}{n-1}, \nonumber \end{align} where the last step follows from \cref{eq:inpValue} and the fact that $\sum_{i=1}^n \|\vec{z}_i\|^2 = 1$. Finally, using \cref{eq:Vt_ratio_1}, we get \begin{equation} \frac{V(X)-\E\big[V(A(X)X) \mid \Fcal\big]}{V(X)} \geq \frac{2\delta}{(n-1) \diam^2}. \nonumber \end{equation} Since $\mathbb{E}\big[V(A(X)X)\mid \Fcal\big] \leq \lambda V(X)$ holds both for $X \in \Lcal$ and for $X \not\in \Lcal$, we have $\mathbb{E}\big[V(A(X)X)\mid \Fcal\big] \leq \lambda V(X)$ a.s. Thus, \begin{equation} \E\big[V(A(t,X(t))X(t)) \mid \Fcal_t\big] \leq \lambda V\big(X(t)\big) \ \ \text{a.s.}, \nonumber \end{equation} where $\lambda =1- 2\delta/\big((n-1)\diam^2\big).$ \end{proof} \section{Limiting properties of the Lyapunov function $V(\cdot)$}\label{appendix:distfrommean} To prove \cref{lemma:dist_from_mean} we will make use of the following result. \begin{theorem}[{Robbins-Siegmund Theorem}]\label{thm:R-S} Let $(\Omega,\Fcal,\mathcal{P})$ be a probability space and ${\Fcal_0\subseteq \Fcal_1 \subseteq \cdots}$ be a sequence of sub $\sigma$-fields of $\Fcal$. Let $\{u_t\},\{v_t\},\{q_t\}$, and $\{w_t\}$ be $\Fcal_t$-measurable random variables, where $\{u_t\}$ is uniformly bounded from below, and $\{v_t\}$, $\{q_t\}$, and $\{w_t\}$ are non-negative. Assume that $\sum_{t=0}^{\infty} w_t<\infty$, $\sum_{t=0}^{\infty} q_t <\infty$, and \begin{align} \E [u_{t+1} \mid \Fcal_t] \leq (1+q_t)u_t -v_t +w_t, \ \ \text{a.s.,} \nonumber \end{align} for all $t\geq 0$. Then, the sequence $\{u_t\}$ converges and $\sum_{t=0}^{\infty} {v_t} < \infty$ a.s.
\end{theorem} \begin{proof}[Proof of Lemma~\ref{lemma:dist_from_mean}] To study the convergence of $V\big(W(t)\big)$, we first derive a super-martingale-like inequality for the stochastic process $\big\{V\big(W(t)\big)\big\}$. For $X(t)\in \Fcal_t$, using the contracting averaging property of $A(t,X(t))$ in \cref{eq:defContractingMatrix}, we get \begin{align} \E\big[V\big(W(t+1)\big)\mid \Fcal_t\big] &= \E\big[V\big(A(t,X(t))X(t)\big) \mid \Fcal_t\big] \cr &\leq \lambda V\big(X(t)\big), \ \ \text{a.s.}, \label{eqn:proofContraction} \end{align} where $\lambda \in (0,1)$. We know that $X(t) = W(t) + \e(t)$, so from the triangle inequality applied to $\|W(t)-\bar{W}(t)+E(t)-\bar{E}(t)\|_F$ we have \begin{equation}\label{eq:triangle_ineq_VX} V\big(X(t)\big) \leq V\big(W(t)\big) + V\big(\e(t)\big) + 2\sqrt{V\big(W(t)\big)}\sqrt{V\big(\e(t)\big)}. \end{equation} Using the inequality above in \cref{eqn:proofContraction}, for all $t\geq 0$ we get \begin{equation} \E\big[V\big(W(t+1)\big)\mid \Fcal_t \big] \leq \lambda \bigg(V\big(W(t)\big) + V\big(\e(t)\big) + 2\sqrt{V\big(W(t)\big)}\sqrt{V\big(\e(t)\big)}\bigg) \ \ \text{a.s.} \nonumber \end{equation} Since $V\big(\e(t)\big) = \|\e(t)-\bar{\e}(t)\|_F^2 \leq \|\e(t)\|_F^2 \leq L^2\alpha^2(t)$, we get \begin{align} \E[V\big(W(t+1)\big)\mid \Fcal_t] &\leq \lambda \left(\sqrt{V\big(W(t)\big)} + L\alpha(t)\right)^2 \ \ \text{a.s.}\nonumber\end{align} From Jensen's inequality, we have \begin{equation} \E\Big[\sqrt{V\big(W(t+1)\big)} \mid \Fcal_t \Big] \leq \sqrt{\E\big[V\big(W(t+1)\big) \mid \Fcal_t\big]} \leq \sqrt{\lambda}\Big(\sqrt{V\big(W(t)\big)} + L\alpha(t)\Big) \ \ \text{a.s.} \nonumber \end{equation} Taking the expectation, multiplying by $\alpha(t+1)$, and using the fact that $\{\alpha(t)\}$ is non-increasing, we get \begin{equation} \alpha(t+1) \E \big[\sqrt{V\big(W(t+1)\big)}\big] \leq\alpha(t)\E\sqrt{V\big(W(t)\big)} - (1-\sqrt{\lambda})\alpha(t)\E\sqrt{V\big(W(t)\big)} + \sqrt{\lambda}L\alpha^2(t). \nonumber \end{equation}
Since the diminishing step sequence $\{\alpha(t)\}$ satisfies $\sum_{t=1}^{\infty}\alpha^2(t)<\infty$, \cref{thm:R-S} results in \begin{equation} \sum_{t=1}^{\infty}\alpha(t)\E\sqrt{V\big(W(t)\big)}<\infty, \ \ \nonumber\end{equation} and by the Monotone Convergence Theorem, we have \begin{equation}\label{eq:MCTapplied} \E\left[\sum_{t=1}^{\infty}\alpha(t)\ \sqrt{V\big(W(t)\big)}\right]<\infty, \end{equation} which implies that \begin{equation} \sum_{t=1}^{\infty}\alpha(t)\ \sqrt{V\big(W(t)\big)}<\infty, \ \text{a.s.} \nonumber \end{equation} Since $V(W(t)) = \sum_{i=1}^n \|\vecw_i(t)-\bar{\vecw}(t)\|^2$, we know that \begin{align} \sum_{t=1}^{\infty} \alpha(t)\|\vecw_i(t) - \bar{\vecw}(t)\| \leq \sum_{t=1}^{\infty}\alpha(t)\sqrt{V\big(W(t)\big)}<\infty, \nonumber \end{align} for all $i\in [n]$, a.s. Since $\sum_{t=1}^{\infty} \alpha(t)\|\vecw_i(t)-\bar{\vecw}(t)\| <\infty$ and $\sum_{t=1}^{\infty}\alpha(t)=\infty $, we have \begin{equation}\label{eq:liminfw} \liminf_{t\to\infty} \|\vecw_i(t)-\bar{\vecw}(t)\| = 0, \ \forall i \in [n], \ \ \text{ a.s.} \end{equation} Further, since we have \begin{equation} \sum_{t=1}^{\infty}\alpha(t)\E\sqrt{V\big(W(t)\big)} = \E\left[\sum_{t=1}^{\infty}\alpha(t)\E\left[\sqrt{V\big(W(t)\big)} \mid \Fcal_t\right]\right], \nonumber \end{equation} using the Monotone Convergence Theorem as in \cref{eq:MCTapplied} we obtain \begin{equation} \sum_{t=1}^{\infty}\alpha(t)\E\left[\sqrt{V\big(W(t)\big)} \mid \Fcal_t\right]<\infty \ \ \text{a.s.}, \nonumber \end{equation} and therefore, since $\E\left[ \|\vecw_i(t) - \bar{\vecw}(t) \| \mid \Fcal_t\right] \leq \E\left[\sqrt{V\big(W(t)\big)} \mid \Fcal_t\right]$ for all $i\in[n]$, \begin{equation}\label{eq:final1} \sum_{t=1}^{\infty}\alpha(t)\E\left[ \|\vecw_i(t) - \bar{\vecw}(t) \| \mid \Fcal_t\right]<\infty, \ \forall i \in [n], \ \text{a.s.} \end{equation} Further, for all $t\geq 0$, we know \begin{equation} \E\big[V\big(W(t+1)\big) \mid \Fcal_t\big] \leq
\lambda \left(V\big(W(t)\big) + 2L\alpha(t)\sqrt{V\big(W(t)\big)} +L^2\alpha^2(t)\right) \ \text{a.s.} \nonumber \end{equation} Since we have \begin{equation} \sum_{t=1}^{\infty} \Big(2\lambda L\alpha(t)\sqrt{V\big(W(t)\big)} + \lambda L^2\alpha^2(t)\Big)<\infty \quad \textrm{a.s.}, \nonumber \end{equation} \Cref{thm:R-S} implies that $ \{V(W(t))\} \ \text{converges a.s.}$ Moreover, since $\sum_{t=1}^{\infty}\alpha(t)\sqrt{V\big(W(t)\big)}<\infty$ and $\sum_{t=1}^{\infty}\alpha(t)=\infty$, arguing as for \cref{eq:liminfw} we have $\liminf_{t\to\infty} V\big(W(t)\big) = 0$ a.s. Combined with the convergence of $\{V(W(t))\}$, this yields $\lim_{t\to\infty}V\big(W(t)\big)=0$ a.s., and therefore \begin{equation}\label{eq:final2} \lim_{t\to\infty} \|\vecw_i(t+1) - \bar{\vecw}(t+1)\| = 0, \ \forall i\in[n],\ \ \text{a.s.} \end{equation} Finally, since $\bar{\vecw}(t+1)^T = \frac{\one^TW(t+1)}{n} = \frac{\one^TA(t,X(t))X(t)}{n}$, from the double stochasticity of $A(t,X(t))$ we have \begin{equation} \bar{\vecw}(t+1)^T=\frac{\one^TX(t)}{n} = \bar{\vecx}(t)^T, \nonumber \end{equation} which from \cref{eq:final1,eq:final2} implies \cref{lemma:dist_from_mean}. \end{proof} \section{Proof of Lemma~\ref{lemma:convex}} \label{appendix:lemmaConvex} To prove Lemma~\ref{lemma:convex}, we follow the proof of~\cite[Theorem~1]{aghajan2020distributed}.
\begin{proof} For all $\vecx \in \mathcal{X}^*$ and $t \geq 0$, we have \begin{equation} \E\big[\|\vecx_{t+1} - \vecx \|^2 \mid \Fcal_t \big] \leq (1+b_t)\|{\vecx}_t - \vecx\|^2 - a_t\big(f(\vecx_t) - f(\vecx)\big) +c_t \ \text{a.s.} \nonumber\end{equation} For any $\vecx \in \mathcal{X}^*$, \Cref{thm:R-S} implies that ${\{\|\vecx_t-\vecx\|\} \ \text{converges}}$ and \begin{equation} \sum_{t=0}^{\infty} a_t\big(f({\vecx}_t)-f(\vecx)\big)<\infty \ \ \text{a.s.} \nonumber \end{equation} Since for any $\vecx\in \mathcal{X}^*$ we have $f(\vecx) = f^*$, the event \begin{equation} \Omega_\vecx = \bigg\{ \omega : \lim_{t\to \infty} \|\vecx_t(\omega) - \vecx\| \textrm{ exists, and } \sum_{t=0}^{\infty} a_{t}(f(\vecx_t(\omega))-f^*)<\infty \bigg\} \nonumber \end{equation} is such that $\mathbb{P}(\Omega_\vecx) = 1$. Note that here we denote by $\{{\vecx}_t(\omega)\}_{t\geq 0}$ the sample path for the corresponding $\omega$. Let $\mathcal{X}_d^* \subseteq \mathcal{X}^*$ be a countable dense subset of $\mathcal{X}^*$ and ${\Omega_d = \bigcap_{\vecx\in \mathcal{X}_d^*} \Omega_\vecx.}$ We have $\mathbb{P}(\Omega_d) = 1$ since $\mathcal{X}_d^*$ is countable. For any $\omega \in \Omega_d$, since $\sum_{t=0}^{\infty} a_t = \infty$ and $\sum_{t=0}^{\infty} a_t\big(f({\vecx}_t(\omega))-f^*\big)<\infty $, we have \begin{equation}\label{eqn:fliminf} \liminf_{t\to \infty} f\big({\vecx}_t(\omega)\big) = f^*. \end{equation} From \cref{eqn:fliminf} and the continuity of $f$, for all $\omega \in \Omega_d$, we have \begin{align} \liminf_{t \to \infty} \|\vecx_t(\omega) - \vec{x}^*(\omega)\|=0, \nonumber\end{align} for some $\vec{x}^*(\omega) \in \mathcal{X}^*$\footnote{$\vec{x}^*(\omega)$ may not be in $\mathcal{X}^*_d$.}. Consider a subsequence $\{\vecx_{t_k}(\omega)\}_{k\geq0}$ of $\{\vecx_t(\omega)\}_{t\geq 0}$ such that \begin{equation} \lim_{k \to \infty} f\big(\vecx_{t_k}(\omega)\big) = f^*.
\nonumber \end{equation} For any $\omega \in \Omega_d$, $\lim_{t \to \infty} \|\vecx_t(\omega)-\hat{\vecx}\|$ exists for every $\hat{\vecx} \in \mathcal{X}_d^*$. Therefore, the sequence $\{\vecx_t(\omega)\}_{t\geq0}$ is bounded. Hence, $\{\vecx_{t_k}(\omega)\}_{k\geq 0}$ is also bounded, has a limit point $\vec{x}^*(\omega) \in \mathcal{X}^*$, and, without loss of generality, \begin{equation} \lim_{k \to \infty} \vecx_{t_k}(\omega) = \vecx^*(\omega). \nonumber \end{equation} Since $\mathcal{X}_d^*$ is dense, there is a sequence $\{\vec{q}_s(\omega)\}_{s\geq0}$ in $\mathcal{X}_d^*$ such that \begin{equation} \lim_{s\to \infty} \|\vec{q}_s(\omega)-\vec{x}^*(\omega)\| = 0. \nonumber \end{equation} For $\omega \in \Omega_d$, $\lim_{t\to \infty} \|\vecx_t(\omega) - \vec{q}_s(\omega)\|$ exists for all $s\geq 0$, and equals $\|\vecx^*(\omega) - \vec{q}_s(\omega)\|$. Indeed, \begin{equation} \lim_{t\to\infty} \|\vecx_t(\omega) - \vec{q}_s(\omega)\| \leq \liminf_{t\to\infty} \|\vecx_t(\omega) - \vecx^*(\omega)\| + \| \vecx^*(\omega)- \vec{q}_s(\omega)\| \leq \| \vecx^*(\omega)- \vec{q}_s(\omega)\|, \nonumber \end{equation} which implies that \begin{equation} \lim_{s\to \infty}\lim_{t\to \infty} \|\vecx_t(\omega) - \vec{q}_s(\omega)\| = 0. \nonumber \end{equation} Finally, \begin{equation} \limsup_{t \to \infty} \|\vecx_t(\omega) - \vecx^*(\omega)\| \leq \lim_{s \to \infty} \Big(\limsup_{t \to \infty} \|\vecx_t(\omega)- \vec{q}_s(\omega)\| + \|\vec{q}_s(\omega)- \vecx^*(\omega)\|\Big) = 0. \nonumber \end{equation} Therefore, for any $\omega \in \Omega_d$, we have $ \lim_{t\to \infty} \vecx_t(\omega) = \vecx^*(\omega), \ \ $ where $\vecx^*(\omega) \in \mathcal{X}^*.$ Hence, ${\lim_{t\to \infty} \vecx_t= \vecx^*}$ a.s.
\end{proof} \bibliographystyle{ieeetr} \bibliography{bib} \end{document} \subsection{Global Max-Gossip} The standard gossiping algorithm described above is state-independent in the sense that the selection of the \textit{gossiping edge} $e$ does not depend on the states of the agents at any time. Herein, we propose \textit{Global Max-Gossip}, where we select the edge connecting the agents with the largest possible \textit{dissent} (disagreement) among all edges in the graph $\Gcal = (\Vcal, \Ecal)$, i.e., \begin{equation} \emax(\Gcal, X) = \arg \max_{\{i,j\}\in\Ecal}\|\vecx_i-\vecx_j\|. \label{eq:emax} \end{equation} In case there are multiple solutions to \cref{eq:emax}, we select the smallest pair of indices $(i^*,j^*)$ based on the lexicographical order, without loss of optimality. For brevity, we use $\emax(X)$ to denote the \textit{max-edge}. \textit{Global Max-Gossip} serves as a benchmark as to what is achievable via state-dependent averaging schemes. Global Max-Gossip requires an oracle to provide the edge resulting in the largest possible Lyapunov function reduction across all network edges. Obtaining a decentralized algorithm to determine the max-dissent edge is a challenging open problem beyond the scope of this paper. Given an initial state matrix $X(0)$, the Max-Gossip averaging scheme admits state-dependent dynamics of the form \begin{equation} A\big(t,X(t)\big) = B\Big(\emax\big(X(t)\big)\Big), \nonumber \end{equation} where the gossiping matrix is given by \cref{eq:matrix_form_1} and the max-edge is selected according to \cref{eq:emax}.
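The max-edge selection in \cref{eq:emax}, together with a pairwise gossip averaging matrix, can be sketched as follows (our own illustration: edges are assumed to be given as index pairs $(i,j)$ with $i<j$, states are scalar for simplicity, and the matrix is the standard pairwise-averaging construction rather than a verbatim transcription of \cref{eq:matrix_form_1}):

```python
def max_edge(x, edges):
    """Max-dissent edge; ties broken by the lexicographically
    smallest index pair (i*, j*), matching the convention in the text."""
    best = max(abs(x[i] - x[j]) for i, j in edges)
    return min((i, j) for i, j in edges if abs(x[i] - x[j]) == best)

def gossip_matrix(n, e):
    """Doubly stochastic pairwise-averaging matrix B(e): rows i and j
    are replaced by the average of coordinates i and j, identity elsewhere."""
    i, j = e
    B = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    B[i][i] = B[i][j] = B[j][i] = B[j][j] = 0.5
    return B
```

For states $x = (0, 3, 3, 7)$ on a cycle, the max-edge is $\{0,3\}$, and applying $B(\{0,3\})$ replaces both endpoint values with $3.5$ while leaving the others unchanged.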
\subsection{Local Max-Gossip}\label{sec:LMG} In \textit{Local Max-Gossip}, introduced in~\cite{ustebay2010greedy} under the moniker of \textit{Greedy Gossip with Eavesdropping}, a randomly selected node $i$ gossips with the neighbor $j\in \mathcal{N}_{i}$ that has the largest\footnote{In case there are multiple solutions to \cref{eq:max}, we may select the agent with the smallest index, without loss of optimality.} possible state discrepancy with $i$, i.e., \begin{equation}\label{eq:max} j = \arg \max_{j\in \mathcal{N}_{i}} \|\vecx_j(t) - \vecx_{i}(t) \|. \end{equation} Convergence is accelerated by gossiping with the neighbor with the largest disagreement, as this leads to the largest possible immediate reduction in the Lyapunov function used to capture the variance of the states in the network. Since the edge over which the gossiping occurs depends on the current states of the neighbors, the resulting averaging matrix is a state-dependent, random matrix. For an independent and uniformly distributed index sequence $\{s(t)\}$, the Local Max-Gossip dynamics can be written as a state-dependent averaging scheme as follows \begin{equation} A\big(t,X(t)\big)=\Agossip\Big(\big\{s(t),r_{s(t)}\big(X(t)\big)\big\}\Big),\nonumber \end{equation} where \begin{equation}\label{eqn:rsxt} r_s(X) = \arg \max_{r\in \Ncal_s}\|\vecx_s-\vecx_r\|. \end{equation} \color{black} \section{Convergence Rate} In this section we discuss the convergence rate of the time-averaged version of the discussed state-dependent consensus-based subgradient methods when the step size at time $t$ is set as $1/\sqrt{t}$ for $t\geq 1$. The convergence rates of the different algorithms differ through the contraction factor $\lambda$ defined for the contracting averaging matrix in \eqref{eq:defContractingMatrix}. Let $\lambda_t$ be the contraction factor defined through the contracting property of the matrices at time $t$.
More precisely, for all $t \geq 0$ \begin{align*} \E [ V(A(t,X(t))X(t)) ] \leq \lambda_t V(X(t)), \end{align*} where $\lambda_t = \lambda \phi_t$ with $\phi_t \in (0,1]$. Here, $\lambda$ is the uniform bound on the contraction factor and $\lambda_t = \lambda_t(X(t))$ is a state-dependent (and possibly time-dependent) contraction factor. We refer to the tighter contraction bound in order to point out the improvement in the convergence rate of state-dependent consensus-based subgradient methods. The proofs of the convergence rates closely follow the proofs provided in \cite{Nedic_2013}. In the following lemma, we establish a convergence rate for the accumulated deviation of each agent's estimate from the mean of the estimates over all agents. \begin{lemma}\label{lemma:convergence_rate_x} Under the assumptions of \Cref{thm:main1} with $\alpha(t) = 1/\sqrt{t}$, we have \begin{align} \frac{1}{\sqrt{n}}\sum_{k=0}^t \alpha(k+1) \sum_{i=1}^n\E[\|\vecw_i(k+1)- \bar{\vecw}_i(k+1)\|] \leq K_1\E[\|X(0)-\bar{X}(0)\|_F] + LK_2(1+\ln t) \end{align} and \begin{align}\label{ineq:lemma_recursive2} \frac{1}{\sum_{k=0}^t \alpha(k+1)}&\sum_{k=0}^t \alpha(k+1) \sum_{i=1}^n \frac{1}{\sqrt{n}}\E[\|\vecw_i(k+1)- \bar{\vecw}_i(k+1)\|] \cr &\leq \frac{1}{\sqrt{t+1}}(K_1\E[\|X(0)-\bar{X}(0)\|_F] + LK_2(1+\ln t)), \end{align} where $K_1 = K_2 = \frac{\rootl}{1-\rootl}.$ \end{lemma} \begin{proof} By the triangle inequality, similarly to \eqref{eq:triangle_ineq_VX}, we have for all $t \geq 1$ \begin{align} \E[\|W(t+1)-\bar{W}(t+1)\|_F] &\leq \sqrt{\lambda_t} \E[\|W(t)-\bar{W}(t)\|_F] + \sqrt{\lambda_t} \E[\|E(t)-\bar{E}(t)\|_F].
\nonumber \end{align} Repeatedly applying the above inequality, and using that the perturbation is bounded as $V(E(t)) \leq \frac{L^2}{t}$ for all $t \geq 1$, we get \begin{align} \E[\|W(t+1)-\bar{W}(t+1)\|_F] &\leq \prod_{s=1}^{t}\sqrt{\lambda_s} \E[\|W(1)-\bar{W}(1)\|_F] + \sum_{s=1}^t \prod_{k=s}^{t}\sqrt{\lambda_k} \E[\|E(s)-\bar{E}(s)\|_F] \cr &\leq \prod_{s=0}^{t}\sqrt{\lambda_s} \E[\|W(0)-\bar{W}(0)\|_F] + \sum_{s=1}^t \prod_{k=s}^{t}\sqrt{\lambda_k} \frac{L}{\sqrt{s}}.\nonumber \end{align} For brevity, define $\phi(t:s)= \prod_{k=s}^{t}\phi_k$ and rewrite the above inequality as \begin{align} \E[\|W(t+1)-\bar{W}(t+1)\|_F] &\leq \rootl^{t+1}\phi(t:0) \E[\|W(0)-\bar{W}(0)\|_F] + \sum_{s=1}^t \sqrt{\lambda}^{t-s+1}\phi(t:s) \frac{L}{\sqrt{s}}. \label{ineq:recursive_expansion_vwt} \end{align} To bound the accumulation of the errors, using \eqref{ineq:recursive_expansion_vwt} we get \begin{align} &\frac{1}{\sqrt{n}}\sum_{k=0}^{t} \alpha(k+1)\sum_{i=1}^n \E[\|\vecw_i(k+1)-\bar{\vec{w}}_i(k+1)\|] \leq \sum_{k=0}^{t} \alpha(k+1)\E[\|W(k+1)-\bar{W}(k+1)\|_F] \cr &\leq \sum_{k=0}^{t} \frac{1}{\sqrt{k+1}}\rootl^{k+1} \phi(k:0)\E[\|X(0)-\bar{X}(0)\|_F] +L \sum_{k=1}^{t} \frac{1}{\sqrt{k+1}} \sum_{s=1}^k \frac{\rootl^{k+1-s} \phi(k:s)}{\sqrt{s}} \cr &= c_{1}(t)\E[\|X(0)-\bar{X}(0)\|_F] +L c_2(t), \nonumber \end{align} where $c_1(t), c_2(t)$ are given by \begin{align}\label{eq:c1_c2_def} c_1(t):= \sum_{k=0}^{t} \frac{\rootl^{k+1}}{\sqrt{k+1}} \phi(k:0), \qquad c_2(t):=\sum_{k=1}^{t} \frac{1}{\sqrt{k+1}} \sum_{s=1}^k \frac{\rootl^{k+1-s} \phi(k:s)}{\sqrt{s}}. \end{align} Using the decreasing property of $\alpha(t)$, the fact that $\phi_t\leq 1$ for all $t\geq 0$, and the formula for the sum of a geometric series, we can uniformly bound $c_1(t)$ by $\frac{\sqrt{\lambda}}{1-\sqrt{\lambda}}$.
For $c_2(t)$, note that \begin{align} c_2(t) &\leq \sum_{k=1}^{t} \sum_{s=1}^k \frac{\rootl^{k+1-s} \phi(k:s)}{s} \leq \sum_{k=1}^{t} \sum_{s=1}^k \frac{\rootl^{k+1-s} }{s} \cr &= \sum_{s=1}^{t} \frac{1}{s}\sum_{k=s}^t \rootl^{k+1-s} \leq \frac{\rootl}{1-\rootl}\sum_{s=1}^{t} \frac{1}{s} \leq \frac{\rootl}{1-\rootl}(1+\ln t), \label{ineq:bound_c2} \end{align} where the last inequality in \eqref{ineq:bound_c2} follows from \[ \sum_{s=1}^t \frac{1}{s} = 1+ \sum_{s=2}^t \frac{1}{s} \leq 1+\int_{1}^t \frac{du}{u}= 1+ \ln t.\] Define $K_1 := \frac{\rootl}{1-\rootl}$ and $K_2:= \frac{\rootl}{1-\rootl}.$ Therefore, we have \begin{align} \frac{1}{\sqrt{n}}\sum_{k=0}^{t} \alpha(k+1)\sum_{i=1}^n\E[\|\vecw_i(k+1)-\bar{\vec{w}}_i(k+1)\|] \leq K_1 \E[\|X(0)-\bar{X}(0)\|_F] + L K_2 (1+\ln t). \nonumber \end{align} Finally, using the fact that $\sum_{k=0}^{t} \alpha(k+1) = \sum_{k=1}^{t+1} \frac{1}{\sqrt{k}} \geq \sqrt{t+1}$ (which follows by a simple induction), we get inequality \eqref{ineq:lemma_recursive2}. \end{proof} Using this bound on the accumulated deviation of the state estimates, we establish in the following lemma an upper bound on the expected gap between the global function evaluated at the time-averaged mean state estimate and the optimal value.
\begin{lemma}\label{lemma:converge_f_xbar} Under the assumptions of \Cref{thm:main1} with $\alpha(t) = 1/\sqrt{t}$ for all $t\geq 1$ and for any $\vecw^* \in \mathcal{W}^*$ we have \begin{align} \E\left[F\left( \frac{ \sum_{k=0}^t \alpha(k+1)\bar{\vecx}(k) }{ \sum_{k=0}^t \alpha(k+1)}\right) - F(\vecw^*)\right] &\leq \frac{n}{2} \frac{ \|\bar{\vecx}(0)-\vecw^*\|^2 }{\sqrt{t+1}} + \frac{ L^2 (1+\ln(t+1)) }{2n \sqrt{t+1}}\cr &+ \frac{2L\sqrt{n}K_1 }{\sqrt{t+1}} \E[\|X(0)-\bar{X}(0)\|_F] + 2L^2K_2\sqrt{n}\frac{1+\ln t}{\sqrt{t+1}}, \nonumber \end{align} where $K_1 = K_2 = \frac{\rootl}{1-\rootl}.$ \end{lemma} \begin{proof} Taking expectations on both sides of the inequality in \cref{lemma:iterate_decomp}, for any $\vec{v} \in \R^d$ and $t \geq 0$ we have \begin{align} \sum_{k=0}^t \frac{2\alpha(k+1)}{n} \E[F(\bar{\vecx}(k))- F(\vec{v})] \leq \|\bar{\vecx}(0) - \vec{v} \|^2 &+ \sum_{k=0}^t \frac{4\alpha(k+1)}{n}\sum_{i=1}^n L_i \E[\|\vecw_i(k+1) - \bar{\vecw}(k+1) \|] \cr &+ \sum_{k=0}^t \alpha^2(k+1)\frac{L^2}{n^2}, \nonumber \end{align} since $\bar{\vecw}(t+1) = \bar{\vecx}(t)$ for all $t \geq 0$. Define $S(t+1) = \sum_{k=0}^t \alpha(k+1)$. Dividing the inequality above by $\frac{2 S(t+1)}{n} $ we get \begin{align} \sum_{k=0}^t \frac{\alpha(k+1)}{S(t+1)} \E[F(\bar{\vecx}(k))- F(\vec{v})] \leq \frac{n}{2}\frac{\|\bar{\vecx}(0) - \vec{v} \|^2}{S(t+1)} &+ \sum_{k=0}^t \frac{2\alpha(k+1)}{S(t+1)}\sum_{i=1}^n L_i \E[\|\vecw_i(k+1) - \bar{\vecw}(k+1) \|] \cr &+ \frac{1}{S(t+1)}\sum_{k=0}^t \alpha^2(k+1)\frac{L^2}{2n}. \nonumber \end{align} From \cref{lemma:convergence_rate_x}, and since $L_i \leq L$, we have \begin{align} &\sum_{k=0}^t \frac{\alpha(k+1)}{S(t+1)}\sum_{i=1}^n L_i \E[\|\vecw_i(k+1) - \bar{\vecw}(k+1) \|] \cr &\leq \frac{LK_1\sqrt{n}}{\sqrt{t+1}}\E[\|X(0)-\bar{X}(0)\|_F] + L^2K_2\sqrt{n}\frac{(1+\ln t)}{\sqrt{t+1}}.
\end{align} Furthermore, as $\sum_{k=0}^t \alpha^2(k+1) = \sum_{k=0}^t \frac{1}{k+1} \leq 1 + \ln (t+1)$ and $S(t+1) \geq \sqrt{t+1}$, for $\vec{v} = \vecw^*$ we have \begin{align} \sum_{k=0}^t \frac{\alpha(k+1)}{S(t+1)} \E[F(\bar{\vecx}(k))- F(\vec{w}^*)] &\leq \frac{n}{2}\frac{\|\bar{\vecx}(0) - \vec{w}^* \|^2}{\sqrt{t+1}}\cr &+ \frac{2LK_1\sqrt{n}}{ \sqrt{t+1}}\E[\|X(0)-\bar{X}(0)\|_F] + 2L^2K_2\sqrt{n}\frac{1+\ln t}{\sqrt{t+1}}\cr &+ \frac{1+\ln(t+1)}{\sqrt{t+1}}\frac{L^2}{2n},\nonumber \end{align} which upon rearrangement gives us the result. \end{proof} Finally, we provide a bound on the expected deviation of the global function computed at the time-averaged version of the state estimates of any agent from the optimal value in the following theorem. \begin{theorem}\label{thm:convergence_rate_f} Consider the assumptions of \Cref{thm:main1} with $\alpha(t) = 1/\sqrt{t}$ for all $t\geq 1$ and $\vecw^* \in \mathcal{W}^*$. For $\tilde \vecw_i(t+1) = \frac{\sum_{k=0}^t \alpha(k+1)\vecw_i(k+1)}{\sum_{k=0}^t\alpha(k+1)}$, we have \begin{align} \E[F(\tilde \vecw_i(t+1)) - F(\vecw^*)]&\leq \frac{n}{2} \frac{ \|\bar{\vecx}(0)-\vecw^*\|^2 }{\sqrt{t+1}} + \frac{ L^2 (1+\ln(t+1)) }{2n \sqrt{t+1}}\cr &+ \frac{L(2\sqrt{n}+1) K_1}{\sqrt{t+1}} \E[\|X(0)-\bar{X}(0)\|_F] + L^2 K_2(2\sqrt{n}+1)\frac{1+\ln t}{\sqrt{t+1}},\nonumber \end{align} where $K_1 = K_2 = \frac{\rootl}{1-\rootl}.$ \end{theorem} \begin{proof} By the boundedness assumption on the subgradients we have \begin{align} \E\left[F(\tilde{\vecw}_i(t+1)) - F\left(\frac{\sum_{k=0}^t \alpha(k+1) \bar{\vecx}(k)}{\sum_{k=0}^t \alpha(k+1)}\right)\right] &\leq \frac{L}{\sum_{k=0}^t \alpha(k+1)} \sum_{k=0}^t \alpha(k+1) \E[\|\vecw_i(k+1) - \bar{\vecx}(k)\|] \cr &\leq \frac{L}{\sqrt{t+1}} (K_1\E[\|X(0)-\bar{X}(0)\|_F] + LK_2(1+\ln t)). \nonumber \end{align} Using the above inequality and \Cref{lemma:converge_f_xbar} we get \begin{align} \E[F(\tilde \vecw_i(t+1)) - F(\vecw^*)] &\leq \frac{L}{\sqrt{t+1}} (K_1\E[\|X(0)-\bar{X}(0)\|_F] +
LK_2(1+\ln t)) \cr & +\frac{n}{2} \frac{ \|\bar{\vecx}(0)-\vecw^*\|^2 }{\sqrt{t+1}} + \frac{ L^2 (1+\ln(t+1)) }{2n \sqrt{t+1}}\cr &+ \frac{2LK_1\sqrt{n}}{ \sqrt{t+1}} \E[\|X(0)-\bar{X}(0)\|_F] + 2L^2K_2\sqrt{n}\frac{1+\ln t}{\sqrt{t+1}} \cr &= \frac{n}{2} \frac{ \|\bar{\vecx}(0)-\vecw^*\|^2 }{\sqrt{t+1}} + \frac{ L^2 (1+\ln(t+1)) }{2n \sqrt{t+1}}\cr &+ \frac{LK_1(2\sqrt{n}+1) }{\sqrt{t+1}} \E[\|X(0)-\bar{X}(0)\|_F] + {L^2K_2(2\sqrt{n}+1)}\frac{1+\ln t}{\sqrt{t+1}}. \nonumber \end{align} \end{proof} \subsection{Discussion} The subgradient method converges to the optimal value at the rate of $O(\frac{\ln t}{\sqrt{t}})$. For randomized gossip, the convergence rate obtained from \cref{thm:convergence_rate_f} is comparable to the one in \cite[Theorem 2]{Nedic_2013}. However, the approach in \cite{Nedic_2013} cannot be directly used to state the result in \cref{thm:convergence_rate_f}, since its proof involves establishing an inequality for every coordinate of the vector estimates and summing up the resulting inequalities. Such an approach cannot be extended to the state-dependent averaging algorithms discussed in this work, since the averaging step depends on the $\ell_2$ norm of the difference between the nodes' estimates and cannot be decoupled to establish results on individual coordinates. Another reason behind using the contraction factor approach is the lack of a B-connectivity result for the interaction between the agents when using state-dependent averaging. The hidden constant terms of the convergence rate, $O(\frac{\ln t}{\sqrt{t}})$, are influenced by the consensus algorithm used with the subgradient descent. In \Cref{thm:convergence_rate_f} the consensus step of the algorithm influences the convergence rate through the constants $K_1, K_2$, and the convergence becomes faster as these constants decrease. Note that $K_1, K_2$ are upper bounds for $c_1(t), c_2(t)$ defined through \eqref{eq:c1_c2_def}.
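As a numerical sanity check (our own, not part of the paper), one can evaluate $c_1(t)$ and $c_2(t)$ from \eqref{eq:c1_c2_def} in the worst case $\phi_t = 1$ and confirm that they indeed stay below $K = \frac{\rootl}{1-\rootl}$ and $K(1+\ln t)$, respectively:

```python
import math

def c1_c2(lam, t):
    """Evaluate c_1(t) and c_2(t) in the worst case phi(k) = 1 for all k."""
    r = math.sqrt(lam)                      # sqrt(lambda)
    c1 = sum(r ** (k + 1) / math.sqrt(k + 1) for k in range(t + 1))
    c2 = sum(r ** (k + 1 - s) / (math.sqrt(k + 1) * math.sqrt(s))
             for k in range(1, t + 1) for s in range(1, k + 1))
    return c1, c2

# Example parameters (illustrative choices, not from the paper)
lam, t = 0.9, 500
K = math.sqrt(lam) / (1 - math.sqrt(lam))   # uniform bound K_1 = K_2
c1, c2 = c1_c2(lam, t)
assert c1 <= K and c2 <= K * (1 + math.log(t))
```

The check mirrors the two steps of the proof: $c_1$ is dominated by a geometric series, and $c_2$ by the geometric series times the harmonic sum.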
Based on \cref{thm:maxEdgeContraction}, the contraction factor $\lambda =1- \frac{2\delta}{(n-1)\diam^2}$ is used in the following corollary, where the values of $\delta$ for Randomized Gossip, Local Max-Gossip, Max-Gossip, and Load Balancing are provided by \cref{prop:LMLBContracting}. \begin{corollary}\label{cor:constantsRate} In \cref{thm:convergence_rate_f} the constants $K_1, K_2$ are given by $\frac{\rootl}{1-\rootl}$, which is bounded above by $n^2(n-1)\diam^2$ for Randomized Gossip, $n(n-1)\diam^2$ for Local Max-Gossip, $2(n-1)\diam^2 $ for Max-Gossip, and $(n-1)^3\diam^2 $ for Load Balancing being used as the averaging scheme with the subgradient method. \end{corollary} \begin{proof} For Randomized Gossip, $1-\sqrt{\lambda} \geq \frac{1}{2}(1-\lambda) \geq \frac{1}{n^2(n-1)\diam^2}$. Therefore $K_1,K_2$ are bounded as $\frac{\rootl}{1-\rootl}\leq n^2(n-1)\diam^2.$ Similarly, for Local Max-Gossip we have $1-\rootl \geq \frac{1}{n(n-1)\diam^2}$ leading to $\frac{\rootl}{1-\rootl}\leq n(n-1)\diam^2$, for Max-Gossip we have $1-\rootl \geq \frac{1}{2(n-1)\diam^2}$ leading to $\frac{\rootl}{1-\rootl}\leq 2(n-1)\diam^2$, and for Load Balancing $1-\rootl \geq \frac{1}{(n-1)^3\diam^2}$ resulting in $\frac{\rootl}{1-\rootl}\leq (n-1)^3\diam^2$. \end{proof} \begin{remark} We remark that the above result uses a conservative bound on the contraction factor $\lambda>0$. The values mentioned in \cref{cor:constantsRate} are upper bounds on the constants in the convergence rate. However, tighter bounds on the constants $K_1, K_2$ are possible. For Randomized Gossip, the contraction factor can be improved to the square of the second largest eigenvalue of the expected averaging matrix $\E[A(t,X(t))]$. In principle, in the proof of Theorem~5, for each of the state-dependent algorithms, such a contraction factor would depend on the sample path (past trajectory) of the dynamics.
For example, when the consensus scheme used is Load Balancing, we know that for most practical purposes, when the nodes do not have multiple neighbors with maximal disagreement, the constant $\delta$ in \cref{prop:LMLBContracting} is even greater than $\frac{1}{2}$; more precisely, it is $\frac{C_e(X)}{2}$, where $C_e(X)$ is the number of edges over which the exchange takes place in the averaging step with the state estimate $X\in \R^{n\times d}$. With the improved $\delta$, the bound on the constants $K_1,K_2$ can be improved to $\frac{(n-1)\diam^2}{2C_e(X(t))}\leq \frac{(n-1)\diam^2}{2}$. Similarly, the bounds on the convergence rate for Local Max-Gossip can be improved by using a tighter contraction factor for the averaging matrices. However, as seen from \cite[Theorem 2]{ustebay2010greedy}, the contraction factor may take a cumbersome form which cannot be readily used to establish better bounds on $c_1(t),c_2(t)$. The problem of finding useful convergence rates for state-dependent averaging is a non-trivial open problem. \end{remark}
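The three edge-selection rules discussed above can be compared on a toy example. The following is an illustrative simulation of our own (not code from the paper): scalar states on a path graph, with pairwise averaging under each scheme; all function and variable names are ours. Each averaging step of two unequal neighbors strictly decreases the variance-type Lyapunov function.

```python
import random

def variance(x):
    """Lyapunov function V(x) = sum_i (x_i - mean)^2 for scalar states."""
    xbar = sum(x) / len(x)
    return sum((xi - xbar) ** 2 for xi in x)

def gossip(x0, scheme, edges, neighbors, steps, seed=0):
    """Run `steps` pairwise-averaging steps under the given edge-selection scheme."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(steps):
        if scheme == "max":            # Max-Gossip: oracle picks the max-dissent edge
            i, j = max(edges, key=lambda e: abs(x[e[0]] - x[e[1]]))
        elif scheme == "local":        # Local Max-Gossip: random node, max-dissent neighbor
            i = rng.randrange(len(x))
            j = max(neighbors[i], key=lambda r: abs(x[i] - x[r]))
        else:                          # standard randomized gossip: uniform random edge
            i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2
    return x

n = 8
edges = [(i, i + 1) for i in range(n - 1)]                  # path graph
neighbors = {i: [j for a, b in edges for j in (a, b)
                 if i in (a, b) and j != i] for i in range(n)}
x0 = [float(i) for i in range(n)]
```

Starting from $x(0) = (0,1,\ldots,7)$, every scheme strictly decreases $V$; on this path graph the oracle-based scheme typically decreases it the fastest, consistent with the discussion above.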
2205.00596v1
http://arxiv.org/abs/2205.00596v1
Generalization of the basis theorem for the B-type Coxeter groups
\documentclass[8pt,a4paper]{article} \usepackage[utf8]{inputenc} \usepackage{amsfonts} \usepackage{amsthm} \usepackage{amsmath} \usepackage{amssymb} \usepackage{enumitem,kantlipsum} \DeclareMathOperator{\sgn}{sgn} \newtheorem{theorem}{Theorem}[subsection] \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conclusion}[theorem]{Conclusion} \newtheorem{claim}[theorem]{Claim} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \def\e{\bold{E}} \def\b{\bold{B}} \def\x{\bold{x}} \def\n{\bold{n}} \def\v{\bold{v}} \def\u{\bold{u}} \def\h{\bold{H}} \def\c{\bold{C}} \def\cu{\bold{curl}} \def\gr{\bold{grad}} \def\l{\bold{L}^2(\Omega)} \def\f{\bold{F}} \def\s{\bold{S}} \def\x{\bold{X}(\Omega)} \def\y{\bold{Y}(\Omega)} \def\X{\bold{X}_k(\omega)} \def\Y{\bold{Y}_k(\omega)} \def\L{\bold{L}^2_1(\omega)} \def\fe{\bold{f}_\e} \def\ge{g_\e} \def\gb{g_\b} \def\fbb{\bold{f}_\b} \def\dbt{\frac{\partial\b}{\partial t}} \def\det{\frac{\partial\e}{\partial t}} \begin{document} \author{Sawsan Khaskeia\\Department of Mathematics\\Ariel University, Israel\\[email protected]\and Robert Shwartz\\Department of Mathematics\\Ariel University, Israel\\[email protected]} \date{} \title{Generalization of the basis theorem for the $B$-type Coxeter groups} \maketitle \begin{abstract} The $OGS$ for non-abelian groups is an interesting generalization of the basis of finite abelian groups. The definition of $OGS$ states that every element of a group has a unique presentation as a product of some powers of specific generators of the group, in a specific given order. In the case of the symmetric group $S_{n}$, a paper of R. Shwartz demonstrates a strong connection between the $OGS$ and the standard Coxeter presentation of the symmetric group; this $OGS$ is called the standard $OGS$ of $S_n$.
In this paper we generalize the standard $OGS$ of $S_n$ to the finite classical Coxeter group $B_n$. We describe the exchange laws for the generalized standard $OGS$ of $B_n$, and we connect it to the Coxeter length and the descent set of $B_n$. \end{abstract} \section{Introduction} The fundamental theorem of finitely generated abelian groups states the following: Let $A$ be a finitely generated abelian group; then there exist generators $a_{1}, a_{2}, \ldots, a_{n}$ such that every element $a$ in $A$ has a unique presentation of the form $$a=a_{1}^{i_{1}}\cdot a_{2}^{i_{2}}\cdots a_{n}^{i_{n}},$$ where $i_{1}, i_{2}, \ldots, i_{n}$ are $n$ integers such that for $1\leq k\leq n$, $0\leq i_{k}<|a_{k}|$ in case $a_{k}$ has finite order $|a_{k}|$ in $A$, and $i_{k}\in \mathbb{Z}$ in case $a_{k}$ has infinite order in $A$. The meaning of the theorem is that every finitely generated abelian group $A$ is a direct sum of finitely many cyclic subgroups $A_{i}$ (where $1\leq i\leq k$), for some $k\in \mathbb{N}$. \begin{definition}\label{ogs} Let $G$ be a non-abelian group. The ordered sequence of $n$ elements $\langle g_{1}, g_{2}, \ldots, g_{n}\rangle$ is called an $Ordered ~~ Generating ~~ System$ of the group $G$ or, by shortened notation, $OGS(G)$, if every element $g\in G$ has a unique presentation in the form $$g=g_{1}^{i_{1}}\cdot g_{2}^{i_{2}}\cdots g_{n}^{i_{n}},$$ where $i_{1}, i_{2}, \ldots, i_{n}$ are $n$ integers such that for $1\leq k\leq n$, $0\leq i_{k}<r_{k}$, where $r_{k}$ divides $|g_{k}|$ in case the order of $g_{k}$ is finite in $G$, or $i_{k}\in \mathbb{Z}$ in case $g_{k}$ has infinite order in $G$. The mentioned canonical form is called the $OGS$ canonical form. For every $q>p$, $1\leq x_{q}<r_{q}$, and $1\leq x_{p}<r_{p}$, the relation $$g_{q}^{x_{q}}\cdot g_{p}^{x_{p}} = g_{1}^{i_{1}}\cdot g_{2}^{i_{2}}\cdots g_{n}^{i_{n}}$$ is called an exchange law.
\end{definition} In contrast to finitely generated abelian groups, the existence of an $OGS$ is generally not true for every finitely generated non-abelian group. Even in the case of two-generated infinite non-abelian groups it is not difficult to find counterexamples. For example, the Baumslag-Solitar groups $BS(m,n)$ \cite{BS}, where $m\neq \pm1$ or $n\neq \pm1$, or most of the cases of the one-relator free product of a finite cyclic group generated by $a$ with a finite two-generated group generated by $b, c$ with the relation $a^{2}\cdot b\cdot a\cdot c=1$ \cite{S}, do not have an $OGS$. Even the question of the existence of an $OGS$ for a general finite non-abelian group is still open. Moreover, contrary to the abelian case where the exchange law is just $g_{q}\cdot g_{p}=g_{p}\cdot g_{q}$, in most of the cases of non-abelian groups admitting an $OGS$ the exchange laws are very complicated. There are, however, some specific non-abelian groups where the exchange laws are very convenient and have very interesting properties. A very good example of this is the symmetric group $S_{n}$. In 2001, Adin and Roichman \cite{AR} introduced a presentation of an $OGS$ canonical form for the symmetric group $S_n$, for the hyperoctahedral group $B_n$, and for the wreath product $\mathbb{Z}_{m}\wr S_{n}$. Adin and Roichman proved that for every element of $S_n$ presented in the standard $OGS$ canonical form, the sum of the exponents of the $OGS$ equals the major-index of the permutation. Moreover, by using an $OGS$ canonical form, Adin and Roichman generalized the theorem of MacMahon \cite{Mac} to the $B$-type Coxeter group, and to the wreath product $\mathbb{Z}_{m}\wr S_{n}$. A few years later, this $OGS$ canonical form was generalized to complex reflection groups by Shwartz, Adin and Roichman \cite{SAR}.
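The property recalled above, that the sum of the exponents in the standard $OGS$ canonical form of $S_n$ equals the major-index, can be verified computationally. The following small check is our own (it anticipates the conventions fixed later in the paper: left-to-right multiplication, and $t_m=\prod_{j=1}^{m-1}s_j$); it enumerates all canonical forms in $S_5$:

```python
from itertools import product as cartesian

def compose(p, q):
    """Left-to-right product p . q on 0-indexed tuples: (p.q)(i) = q(p(i))."""
    return tuple(q[p[i]] for i in range(len(p)))

def t(m, n):
    """The generator t_m = s_1 s_2 ... s_{m-1} = [m; 1; ...; m-1; m+1; ...; n]."""
    return tuple([m - 1] + list(range(m - 1)) + list(range(m, n)))

def maj(p):
    """Major index: sum of the (1-based) descent positions."""
    return sum(i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1])

n = 5
identity = tuple(range(n))
# Enumerate all exponent tuples (i_2, ..., i_n) with 0 <= i_k < k.
for exps in cartesian(*(range(k) for k in range(2, n + 1))):
    pi = identity
    for k, e in zip(range(2, n + 1), exps):
        for _ in range(e):
            pi = compose(pi, t(k, n))
    assert sum(exps) == maj(pi)     # sum of OGS exponents = major index
```

Every element of $S_5$ is produced exactly once by the enumeration, so the loop checks the claim on the whole group.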
Recently, Shwartz \cite{S1} significantly extended the results of \cite{AR} and \cite{SAR}, where the $OGS$ of $S_{n}$ is strongly connected to the Coxeter length and to the descent set of the elements. Moreover, \cite{S1} describes the exchange laws for the $OGS$ canonical forms of the symmetric group $S_n$, which have very interesting and surprising properties. In this paper we generalize the results of \cite{S1} to the finite classical Coxeter group $B_{n}$. Similarly to the symmetric group $S_n$, the group $B_n$ can be considered as a permutation group as well. Therefore, we recall the notations of permutations, the $OGS$ of $S_{n}$, and the corresponding exchange laws from \cite{S1}.\\ \begin{definition}\label{sn} Let $S_n$ be the symmetric group on $n$ elements; then: \begin{itemize} \item The symmetric group $S_n$ is an $n-1$ generated simply-laced finite Coxeter group of order $n!$, which has the presentation $$\langle s_1, s_2, \ldots, s_{n-1} | s_i^{2}=1, ~~ (s_i\cdot s_{i+1})^{3}=1, ~~(s_i\cdot s_j)^2=1 ~~for ~~|i-j|\geq 2\rangle;$$ \item The group $S_n$ can be considered as the permutation group on $n$ elements.
A permutation $\pi\in S_n$ is denoted by $$ \pi=[\pi(1); ~\pi(2); \ldots; ~\pi(n)] $$ (i.e., $\pi= [2; ~4; ~1; ~3]$ is a permutation in $S_{4}$ which satisfies $\pi(1)=2$, $\pi(2)=4$, $\pi(3)=1$, and $\pi(4)=3$); \item Every permutation $\pi\in S_n$ can be presented in a cyclic notation, as a product of disjoint cycles of the form $(i_1, ~i_2, ~\ldots, ~i_m)$, which means $\pi(i_{k})=i_{k+1}$, for $1\leq k\leq m-1$, and $\pi(i_{m})=i_{1}$ (i.e., the cyclic notation of $\pi= [3; ~4; ~1; ~5; ~2]$ in $S_5$ is $(1, ~3)(2, ~4, ~5)$); \item The Coxeter generator $s_i$ can be considered as the permutation which exchanges the element $i$ with the element $i+1$, i.e., the transposition $(i, i+1)$; \item We consider multiplication of permutations in left-to-right order; i.e., for every $\pi_1, \pi_2\in S_n$, $\pi_1\cdot \pi_2 (i)=\pi_2(j)$, where $\pi_1(i)=j$ (contrary to the notation in \cite{AR}, where Adin, Roichman, and others consider right-to-left multiplication of permutations); \item For every permutation $\pi\in S_n$, the Coxeter length $\ell(\pi)$ is the number of inversions in $\pi$, i.e., the number of different pairs $i, j$ such that $i<j$ and $\pi(i)>\pi(j)$; \item For every permutation $\pi\in S_n$, the set of the locations of the descents is defined to be $$des\left(\pi\right)=\{1\leq i\leq n-1 | \pi(i)>\pi(i+1)\},$$ and $$i\in des\left(\pi\right) ~~if ~and ~only ~if ~~\ell(s_i\cdot \pi)<\ell(\pi)$$ (i.e., $i$ is a descent of $\pi$ if and only if multiplying $\pi$ by $s_i$ on the left shortens the Coxeter length of the element); \item For every permutation $\pi\in S_n$, the major-index is defined to be $$maj\left(\pi\right)=\sum_{\pi(i)>\pi(i+1)}i$$ (i.e., the major-index is the sum of the locations of the descents of $\pi$).
\item By \cite{BB}, Chapter 3.4, every element $\pi$ of $S_n$ can be presented uniquely in the following normal reduced form, which we denote by $norm(\pi)$: $$norm(\pi)=\prod_{u=1}^{n-1}\prod_{r=0}^{y_{u}-1}s_{u-r},$$ such that $y_u$ is a non-negative integer with $0\leq y_u\leq u$ for every $1\leq u\leq n-1$. With this notation, the Coxeter length of an element $\pi$ is given by $$\ell(\pi)=\sum_{u=1}^{n-1}y_{u}.$$ For example, let $n=9$, $y_2=2$, $y_4=3$, $y_5=1$, $y_8=4$, and $y_1=y_3=y_6=y_7=0$; then $$norm(\pi)=(s_2\cdot s_1)\cdot (s_4\cdot s_3\cdot s_2)\cdot s_5\cdot (s_8\cdot s_7\cdot s_6\cdot s_5),$$ $$\ell(\pi)=2+3+1+4=10.$$ \end{itemize} \end{definition} \begin{theorem}\label{canonical-sn} Let $S_n$ be the symmetric group on $n$ elements. For every $2\leq m\leq n$, define $t_{m}$ to be the product $\prod_{j=1}^{m-1}s_{j}$. The element $t_{m}$ is the permutation $$t_m= [m; ~1; ~2; \ldots; ~m-1; ~m+1;\ldots; ~n]$$ which is the $m$-cycle $(m, ~m-1, ~\ldots, ~1)$ in the cyclic notation of the permutation. Then, the elements $t_{n}, t_{n-1}, \ldots, t_{2}$ generate $S_n$, and every element of $S_n$ has a unique presentation in the following $OGS$ canonical form: $$t_{2}^{i_{2}}\cdot t_{3}^{i_{3}}\cdots t_{n}^{i_{n}},~~~ where ~~~0\leq i_{k}<k ~~~for ~~~2\leq k\leq n.$$ \end{theorem} \begin{proposition}\label{exchange} The following holds: \\ In order to transform the element $t_{q}^{i_{q}}\cdot t_{p}^{i_{p}}$ ($p<q$) into the $OGS$ canonical form\\ $t_{2}^{i_{2}}\cdot t_{3}^{i_{3}}\cdots t_{n}^{i_{n}}$, i.e., according to the standard $OGS$, one needs to use the following exchange laws: \[ t_{q}^{i_{q}}\cdot t_{p}^{i_{p}}=\begin{cases} t_{i_{q}+i_{p}}^{i_q}\cdot t_{p+i_{q}}^{i_{p}}\cdot t_{q}^{i_{q}} & q-i_{q}\geq p \\ \\ t_{i_{q}}^{p+i_{q}-q}\cdot t_{i_{q}+i_{p}}^{q-p}\cdot t_{q}^{i_{q}+i_{p}} & i_{p}\leq q-i_{q}\leq p \\ \\ t_{p+i_{q}-q}^{i_{q}+i_{p}-q}\cdot t_{i_{q}}^{p-i_{p}}\cdot t_{q}^{i_{q}+i_{p}-p} & q-i_{q}\leq i_{p}.
\end{cases} \] \end{proposition} \begin{remark}\label{exchange-2} The standard $OGS$ canonical form of $t_{q}^{i_{q}}\cdot t_{p}^{i_{p}}$ is a product of non-zero powers of two different canonical generators if and only if $q-i_{q}=p$ or $q-i_{q}=i_{p}$, as follows: \begin{itemize} \item If $q-i_q=p$ then by considering $q-i_q\geq p$: $$t_{i_{q}+i_{p}}^{i_q}\cdot t_{p+i_{q}}^{i_{p}}\cdot t_{q}^{i_{q}}=t_{i_{q}+i_{p}}^{i_q}\cdot t_{q}^{i_p}\cdot t_{q}^{i_q}$$ and by considering $q-i_q\leq p$: $$t_{i_{q}}^{p+i_{q}-q}\cdot t_{i_{q}+i_{p}}^{q-p}\cdot t_{q}^{i_{q}+i_{p}}=t_{i_q}^{0}\cdot t_{i_q+i_p}^{i_q}\cdot t_{q}^{i_{q}+i_{p}};$$ \item If $q-i_q=i_p$ then by considering $q-i_q\geq i_p$: $$t_{i_q}^{p+i_q-q}\cdot t_{i_{q}+i_{p}}^{q-p}\cdot t_{q}^{i_{q}+i_{p}}=t_{i_q}^{p-i_p}\cdot t_q^{q-p}\cdot t_q^q=t_{i_q}^{p-i_p}\cdot t_q^{q-p}$$ and by considering $q-i_q\leq i_p$: $$t_{p+i_{q}-q}^{i_{q}+i_{p}-q}\cdot t_{i_{q}}^{p-i_{p}}\cdot t_{q}^{i_{q}+i_{p}-p}=t_{p+i_q-q}^0\cdot t_{i_{q}}^{p-i_{p}}\cdot t_{q}^{q-p}.$$ \end{itemize} Hence we have \[ t_{q}^{i_{q}}\cdot t_{p}^{i_{p}}=\begin{cases} t_{i_{q}+i_{p}}^{i_q}\cdot t_{q}^{i_{q}+i_{p}} & q-i_{q} = p \\ \\ t_{i_{q}}^{p-i_{p}}\cdot t_{q}^{q-p} & q-i_{q} = i_{p}. \end{cases} \] \end{remark} Moreover, the most significant achievement of the paper \cite{S1} is the definition of the standard $OGS$ elementary factorization. By using the standard $OGS$ elementary factorization, it is possible to give a very interesting formula for the Coxeter length and a complete classification of the descent set of an arbitrary element of $S_n$. In this paper we generalize the standard $OGS$ elementary factorization to the $B$-type Coxeter groups, in order to find similar properties (the Coxeter length and the descent set) for the elements of the group.
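The exchange laws of the proposition above lend themselves to brute-force verification. The sketch below is our own (not code from the paper): it realizes each $t_m$ as a permutation, multiplies left to right as in the paper's convention, and checks all three cases in $S_6$, with $t_1$ taken as the identity (empty product) and the overlapping boundary cases assigned arbitrarily, since the remark shows they agree.

```python
def compose(p, q):
    """Left-to-right product on 0-indexed tuples: (p.q)(i) = q(p(i))."""
    return tuple(q[p[i]] for i in range(len(p)))

def t(m, n):
    """t_m = s_1 ... s_{m-1}; t_1 is the identity."""
    if m <= 1:
        return tuple(range(n))
    return tuple([m - 1] + list(range(m - 1)) + list(range(m, n)))

def power(p, e, n):
    r = tuple(range(n))
    for _ in range(e):
        r = compose(r, p)
    return r

def word(factors, n):
    """Product of t_k^e over the (k, e) pairs, taken left to right."""
    r = tuple(range(n))
    for k, e in factors:
        r = compose(r, power(t(k, n), e, n))
    return r

n = 6
for q in range(3, n + 1):
    for p in range(2, q):
        for iq in range(1, q):
            for ip in range(1, p):
                lhs = word([(q, iq), (p, ip)], n)
                if q - iq >= p:
                    rhs = [(iq + ip, iq), (p + iq, ip), (q, iq)]
                elif q - iq >= ip:
                    rhs = [(iq, p + iq - q), (iq + ip, q - p), (q, iq + ip)]
                else:
                    rhs = [(p + iq - q, iq + ip - q), (iq, p - ip), (q, iq + ip - p)]
                assert lhs == word(rhs, n)
```

Exponents that reach the order of a generator (e.g. $t_q^q$) are handled automatically, since `power` simply multiplies out the word.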
Hence, we recall the definition of the standard $OGS$ elementary element and of the standard $OGS$ elementary factorization for the symmetric group $S_n$ as they are defined in \cite{S1}, together with the theorems concerning the Coxeter length and the descent set of elements of $S_n$, as they are stated and proved in \cite{S1}. \begin{definition}\label{elementary} Let $\pi\in S_n$, where $\pi=\prod_{j=1}^{m}t_{k_{j}}^{i_{k_{j}}}$ is presented in the standard $OGS$ canonical form, with $i_{k_{j}}>0$ for every $1\leq j\leq m$. Then, $\pi$ is called a standard $OGS$ elementary element of $S_n$ if $$\sum_{j=1}^{m}i_{k_{j}}\leq k_{1}.$$ \end{definition} \begin{theorem}\label{theorem-elementary}\cite{S1} Let $\pi=\prod_{j=1}^{m}t_{k_{j}}^{i_{k_{j}}}$ be a standard $OGS$ elementary element of $S_n$, presented in the standard $OGS$ canonical form, with $i_{k_{j}}>0$ for every $1\leq j\leq m$. Then, the following are satisfied: \begin{itemize} \item $$\ell(\pi)=\sum_{j=1}^{m}k_{j}\cdot i_{k_{j}}-(i_{k_{1}}+i_{k_{2}}+\cdots +i_{k_{m}})^{2}=\sum_{j=1}^{m}k_{j}\cdot i_{k_{j}}-\left(maj\left(\pi\right)\right)^{2};$$ \item Every subword of $\pi$ is a standard $OGS$ elementary element too. In particular, for every two subwords $\pi_{1}$ and $\pi_{2}$ of $\pi$, such that $\pi=\pi_{1}\cdot \pi_{2}$, we have $$\ell(\pi)=\ell(\pi_{1}\cdot \pi_{2})<\ell(\pi_{1})+\ell(\pi_{2});$$ \item $$\ell(s_r\cdot \pi)=\begin{cases} \ell(\pi)-1 & r=\sum_{j=1}^{m}i_{k_{j}} \\ \ell(\pi)+1 & r\neq \sum_{j=1}^{m}i_{k_{j}} \end{cases}.$$ i.e., $des\left(\pi\right)$ contains just one element, which means $des\left(\pi\right)=\{maj\left(\pi\right)\}$. \end{itemize} \end{theorem} \begin{definition}\label{canonical-factorization-def} Let $\pi\in S_n$.
Let $z(\pi)$ be the minimal number, such that $\pi$ can be presented as a product of standard $OGS$ elementary elements, with the following conditions: \begin{itemize} \item $$\pi=\prod_{v=1}^{z(\pi)}\pi^{(v)}, ~~~~ where ~~~~\pi^{(v)}=\prod_{j=1}^{m^{(v)}}t_{h^{(v)}_{j}}^{\imath_{j}^{(v)}},$$ by the presentation in the standard $OGS$ canonical form for every $1\leq v\leq z(\pi)$ and $1\leq j\leq m^{(v)}$ such that: \begin{itemize} \item $\imath_{j}^{(v)}>0;$ \\ \item $\sum_{j=1}^{m^{(1)}}\imath_{j}^{(1)}\leq h^{(1)}_{1}$ i.e., $maj\left(\pi^{(1)}\right)\leq h^{(1)}_{1}$; \\ \item $h^{(v-1)}_{m^{(v-1)}}\leq\sum_{j=1}^{m^{(v)}}\imath_{j}^{(v)}\leq h^{(v)}_{1}$ for $2\leq v\leq z$ \\ \\ i.e., $h^{(v-1)}_{m^{(v-1)}}\leq maj\left(\pi^{(v)}\right)\leq h^{(v)}_{1} ~~ for ~~ 2\leq v\leq z$. \end{itemize} \end{itemize} Then, the mentioned presentation is called \textbf{Standard $OGS$ elementary factorization} of $\pi$. Since the factors $\pi^{(v)}$ are standard $OGS$ elementary elements, they are called standard $OGS$ elementary factors of $\pi$. \end{definition} \begin{theorem}\label{theorem-factorization}\cite{S1} Let $\pi=\prod_{j=1}^{m}t_{k_{j}}^{i_{k_{j}}}$ be an element of $S_n$ presented in the standard $OGS$ canonical form, with $i_{k_{j}}>0$ for every $1\leq j\leq m$. Consider the standard $OGS$ elementary factorization of $\pi$ with all the notations used in Definition \ref{canonical-factorization-def}. 
Then, the following properties hold: \begin{itemize} \item The standard $OGS$ elementary factorization of $\pi$ is unique, i.e., the parameters $z(\pi)$, $m^{(v)}$ for $1\leq v\leq z(\pi)$, $h^{(v)}_{j}$, and $\imath_{j}^{(v)}$ for $1\leq j\leq m^{(v)}$, are uniquely determined by the standard $OGS$ canonical form of $\pi$, such that: \begin{itemize} \item For every $h^{(v)}_{j}$ there exists exactly one $k_{j'}$ (where $1\leq j'\leq m$), such that $h^{(v)}_{j}=k_{j'}$; \item If $h^{(v)}_{j}=k_{j'}$, for some $1\leq v\leq z(\pi)$, ~$1<j<m^{(v)}$, and $1\leq j'\leq m$, then $\imath_{j}^{(v)}=i_{k_{j'}}$; \item If $h^{(v_{1})}_{j_{1}}=h^{(v_{2})}_{j_{2}}$, where $1\leq v_{1}<v_{2}\leq z(\pi)$, ~$1\leq j_{1}\leq m^{(v_{1})}$, and \\ $1\leq j_{2}\leq m^{(v_{2})}$, then necessarily $v_{1}=v_{2}-1$, ~$j_{1}=m^{(v_{1})}$, ~$j_{2}=1$, and $$h^{(v_{2}-1)}_{m^{(v_{2}-1)}}=h^{(v_{2})}_{1}=maj\left(\pi^{(v_{2})}\right)=k_{j'},$$ for some $j'$, such that $\imath_{m^{(v_{2}-1)}}^{(v_{2}-1)}+\imath_{1}^{(v_{2})}=i_{k_{j'}}$; \end{itemize} \item $$\ell(s_r\cdot \pi) = \begin{cases} \ell(\pi)-1 & r=\sum_{j=1}^{m^{(v)}}\imath_{j}^{(v)} ~~for ~~ 1\leq v\leq z(\pi) \\ \ell(\pi)+1 & otherwise \end{cases}.$$ i.e., $$des\left(\pi\right)=\bigcup_{v=1}^{z(\pi)}des\left(\pi^{(v)}\right)=\{maj\left(\pi^{(v)}\right)~|~1\leq v\leq z(\pi)\};$$ \item \begin{align*} \ell(\pi) &= \sum_{v=1}^{z(\pi)}\ell(\pi^{(v)}) = \sum_{v=1}^{z(\pi)}\sum_{j=1}^{m^{(v)}}h^{(v)}_{j}\cdot \imath_{j}^{(v)}-\sum_{v=1}^{z(\pi)}\left(maj\left(\pi^{(v)}\right)\right)^{2} \\ &= \sum_{x=1}^{m}k_{x}\cdot i_{k_{x}}-\sum_{v=1}^{z(\pi)}\left(maj\left(\pi^{(v)}\right)\right)^{2} \\ &= \sum_{x=1}^{m}k_{x}\cdot i_{k_{x}}-\sum_{v=1}^{z(\pi)}{{\left(c^{(v)}\right)}}^{2}, ~~ where ~~ c^{(v)}\in des\left(\pi\right).
\end{align*} \end{itemize} \end{theorem} The paper is organized as follows: In section \ref{gen-OGS_B-n}, we recall some important definitions and basic properties concerning the group $B_{n}$. Then we generalize the definition of the standard $OGS$ and show the arising exchange laws. In section \ref{parabolic-subgroup}, we focus on the parabolic subgroup of $B_{n}$ which does not contain $s_{0}$ in its reduced Coxeter presentation. It is easy to show that this subgroup is isomorphic to the symmetric group $S_n$; we denote the subgroup by $\dot{S}_n$. We show the presentation of $\dot{S}_n$ by the generalized standard $OGS$, and then we show the properties of the standard $OGS$ elementary elements and the standard $OGS$ elementary factorization of the elements (as described in \cite{S1}) in $\dot{S}_n$. In section \ref{gen-stan-factor}, we generalize the definition of the standard $OGS$ elementary factorization for the group $B_{n}$. In section \ref{cox-length}, we introduce a formula for the length function of $B_{n}$ by using the generalized standard $OGS$ elementary factorization which is defined in the previous section. In section \ref{descent-bn}, we characterize the descent set of the elements in the Coxeter group $B_{n}$, by using the generalized standard $OGS$ presentation for the elements of $B_{n}$.
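Before proceeding, the length and descent formulas of Theorem \ref{theorem-elementary} lend themselves to a quick computational check. The following Python sketch is illustrative only and is not part of the proofs; it assumes the standard facts that $t_k$ acts on $\{1,\ldots,n\}$ as the $k$-cycle $1\mapsto k$, $j\mapsto j-1$ for $2\leq j\leq k$, that products of permutations are composed left to right, and that the Coxeter length of an element of $S_n$ equals its number of inversions. Under these assumptions it verifies $\ell(\pi)=\sum_{j}k_{j}\cdot i_{k_{j}}-\left(maj(\pi)\right)^{2}$ and $des(\pi)=\{maj(\pi)\}$ for all two-factor standard $OGS$ elementary elements of $S_6$.

```python
# Check of the elementary length and descent formulas for
# pi = t_{k1}^{i1} * t_{k2}^{i2} with i1 + i2 <= k1 (elementary condition).
# Assumptions (not proved here): t_k is the k-cycle 1 -> k, j -> j-1;
# products compose left to right; Coxeter length = inversion number.

def t_power(k, e, n):
    """t_k^e as a one-line permutation (pi(1), ..., pi(n))."""
    # t_k cyclically shifts 1..k downward, so t_k^e sends j to j-e mod k
    return tuple((j - 1 - e) % k + 1 if j <= k else j for j in range(1, n + 1))

def mult(u, v):
    """Product u*v, applying u first (left-to-right composition)."""
    return tuple(v[u[j] - 1] for j in range(len(u)))

def inversions(w):
    """Number of inversions, i.e. the Coxeter length in S_n."""
    n = len(w)
    return sum(1 for a in range(n) for b in range(a + 1, n) if w[a] > w[b])

def descents(w):
    """Positions i with w(i) > w(i+1)."""
    return {i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1]}

n = 6
for k1 in range(2, n):
    for k2 in range(k1 + 1, n + 1):
        for i1 in range(1, k1):
            for i2 in range(1, k1 - i1 + 1):   # elementary: i1 + i2 <= k1
                pi = mult(t_power(k1, i1, n), t_power(k2, i2, n))
                maj = i1 + i2
                assert inversions(pi) == k1 * i1 + k2 * i2 - maj ** 2
                assert descents(pi) == {maj}
```

The same brute-force pattern extends to more factors, at the cost of longer loops.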
\section{Generalization of the standard OGS for the Coxeter group $B_{n}$}\label{gen-OGS_B-n} First, we recall some important definitions and basic properties concerning the group $B_{n}$; see for example \cite{BB}. \begin{definition}\label{def-bn} Let $B_{n}$ be the Coxeter group with $n$ generators, with the following presentation: $$\begin{array}{r} \left\langle s_{0}, s_{1}, \ldots, s_{n-1}\right| s_{i}^{2}=1, \left(s_{0} \cdot s_{1}\right)^{4}=1, \left(s_{i} \cdot s_{i+1}\right)^{3}=1 \text { for } 1 \leq i \leq n-2, \left.\left(s_{i} \cdot s_{j}\right)^{2}=1 \text { for }|i-j| \geq 2\right\rangle \end{array}$$ \end{definition} \textbf{Basic properties of $B_n$} \\ \begin{itemize} \item The group $B_{n}$ can be presented as a permutation group of the set $[\pm n]$, where $$[\pm n]=\{i \in \mathbb{Z} \mid 1 \leqslant i \leqslant n \quad \text{or} \quad -n \leqslant i \leqslant -1 \},$$ with the following property:\\ $$ \pi(-i)=-\pi(i) \quad \text { for every } \quad i \in[\pm n] . $$ \item $B_{n}$ can be considered as a signed permutation group, where $\pi$ is uniquely determined by $\pi(i)$ for $1\leq i\leq n$. \item A signed permutation $\pi\in B_n$ is denoted by $$ [\pi(1); ~\pi(2); \ldots; ~\pi(n)] $$ (e.g., $\pi= [2; ~-4; ~1; ~3]$ is a permutation in $B_{4}$ which satisfies $\pi(1)=2$, $\pi(2)=-4$, $\pi(3)=1$, $\pi(4)=3$, and $\pi(-1)=-2$, $\pi(-2)=4$, $\pi(-3)=-1$, $\pi(-4)=-3$). \item $[|\pi(1)|; ~|\pi(2)|; \ldots; ~|\pi(n)|]$ is a permutation of $S_n$. \\ \item Similarly to $S_n$, for $1\leq i\leq n-1$, the Coxeter generator $s_i$ can be considered as the permutation which exchanges the element $i$ with the element $i+1$ and additionally exchanges $-i$ with $-(i+1)$, and $s_0$ is the permutation which exchanges the element $1$ with the element $-1$. \item $B_n\simeq\mathbb{Z}_2^{n}\rtimes S_n$. \item $|B_n|=2^n\cdot n!$. \end{itemize} \begin{definition} We define $des(\pi)$ to be the left descent set of $\pi$:
$$i \in des(\pi) \quad if \quad \ell(s_{i} \cdot \pi) < \ell(\pi).$$ Similarly to $S_{n}$, the following property holds in $B_{n}$: $$ des(\pi)=\{0 \leqslant i \leqslant n-1 \mid \pi(i) > \pi(i+1)\}, \quad (\pi(0) \text{ is defined to be } 0)$$ \end{definition} Now, we recall the definition of the normal reduced form for $B_{n}$, as it is defined in \cite{BB} Chapter 3.4. \begin{definition}\label{normal-form-bn} $$norm(\pi) = \prod_{i=0}^{n-1} \prod_{j=0}^{y_{i}-1} s_{|i-j|}$$ where $y_i$ is a non-negative integer such that $0\leq y_i\leq 2i+1$, and the Coxeter length of $\pi$ is given by $$\ell(\pi)=\sum_{i=0}^{n-1}y_i.$$ \end{definition} \begin{example} Consider the normal form of the following $\pi\in B_6$ $$norm(\pi)=s_0\cdot (s_1\cdot s_0\cdot s_1)\cdot (s_3\cdot s_2\cdot s_1\cdot s_0)\cdot (s_4\cdot s_3\cdot s_2\cdot s_1\cdot s_0\cdot s_1\cdot s_2)\cdot (s_5\cdot s_4\cdot s_3)$$ Then, $y_0=1, \ y_1=3, \ y_2=0, \ y_3=4, \ y_4=7, \ y_5=3$, and then $$\ell(\pi)=1+3+0+4+7+3=18.$$ \end{example} \subsection{The generalized standard OGS of $B_{n}$} In this subsection we generalize the definition of the standard OGS (defined in \cite{S1}) for the group $B_n$, and we show the arising exchange laws.
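To make the signed-permutation model concrete, the following Python sketch (illustrative only) represents elements of $B_n$ in the window notation $[\pi(1); \ldots; \pi(n)]$ and checks the defining relations of Definition \ref{def-bn}. The length function used here is the standard type-$B$ formula, inversions plus $\sum_{\pi(j)<0}|\pi(j)|$ (see \cite{BB}); its agreement with $\sum y_i$ from the normal form is assumed, not derived.

```python
# Elements of B_n in window notation (pi(1), ..., pi(n)); products are
# composed left to right, matching the permutation chains used in the text.
# Length = inversions + sum of |pi(j)| over negative entries -- the
# standard type-B formula (an assumption here, see Bjorner-Brenti).

def s(i, n):
    """The Coxeter generator s_i of B_n as a signed permutation."""
    w = list(range(1, n + 1))
    if i == 0:
        w[0] = -1
    else:
        w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

def image(w, j):
    """Image of j under w, extended to negative j by w(-j) = -w(j)."""
    return w[j - 1] if j > 0 else -w[-j - 1]

def mult(u, v):
    """Product u*v, applying u first (left-to-right composition)."""
    return tuple(image(v, image(u, j)) for j in range(1, len(u) + 1))

def word(indices, n):
    """Evaluate a word in the generators, e.g. word([0, 1], n) = s_0*s_1."""
    w = tuple(range(1, n + 1))
    for i in indices:
        w = mult(w, s(i, n))
    return w

def length(w):
    """Coxeter length: inversions plus sum of |w(j)| over negative w(j)."""
    n = len(w)
    inv = sum(1 for a in range(n) for b in range(a + 1, n) if w[a] > w[b])
    return inv + sum(-x for x in w if x < 0)

n, e = 4, tuple(range(1, 5))
assert word([0, 0], n) == e               # s_0^2 = 1
assert word([0, 1] * 4, n) == e           # (s_0 s_1)^4 = 1
assert word([1, 2] * 3, n) == e           # (s_1 s_2)^3 = 1
assert word([0, 2] * 2, n) == e           # (s_0 s_2)^2 = 1
assert length(word([0, 1], n)) == 2       # s_0 s_1 = [-2; 1; 3; 4]
```

In particular, the element $\tau_{k}$ of Definition \ref{tau} below is `word(list(range(k)), n)` in this encoding.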
Now, similar to the definition of $t_k$ in $S_n$ \cite{S1}, we define $\tau_k$ in $B_n$ for $k=1,2, \ldots, n$ as follows: \begin{definition}\label{tau} For $k=1,2,\ldots,n$, let $$\tau_{k}= \prod_{j=0}^{k-1} s_{j}.$$ \end{definition} \begin{remark} For every $1\leqslant k \leqslant n $, $\tau_{k}$ satisfies the following properties: $$\begin{array}{l} \tau_{k}(1)=-k \\ \tau_{k}(j)=j-1, \text { for } 2 \leq j \leq k \\ \tau_{k}(j)=j, \text { for } k+1 \leq j \leq n \end{array}$$ \end{remark} \begin{remark} $$\tau_{k} = s_{0} \cdot t_{k} .$$ \end{remark} \begin{remark}\label{tau-power} For $1\leq k\leq n$, let $\tau_{k}$ be the element of $B_n$ defined in Definition \ref{tau}. Then the following holds: \begin{itemize} \item For $i_{k}=-k$ $$ \tau_{k}^{i_{k}}(j) =\left\{\begin{array}{ll} -j & \text { for } 1 \leq j \leq k \\ \\ j & \text { for } k+1 \leq j \leq n \end{array}\right. $$ \item For $0<i_k<k$ $$ \tau_{k}^{i_{k}}(j) =\left\{\begin{array}{ll} -(j-i_k+k) & \text { for } 1 \leq j \leq i_k \\ \\ j-i_k & \text { for } i_k+1 \leq j \leq k \\ \\ j & \text { for } k+1 \leq j \leq n \end{array}\right. $$ \item For $-k<i_k<0$ $$ \tau_{k}^{i_{k}}(j) =\left\{\begin{array}{ll} j-i_{k} & \text { for } 1 \leq j \leq i_k +k \\ \\ -(j-i_k-k)& \text { for } i_k+k+1 \leq j \leq k \\ \\ j & \text { for } k+1 \leq j \leq n \end{array}\right. $$ \end{itemize} \end{remark} Now, we define the generalized standard $OGS$ for the group $B_n$ as follows. \begin{theorem}\label{ogs-bn} For $k=1, 2, \ldots, n$, let $\tau_{k}$ be the elements of $B_n$ defined in Definition \ref{tau}. Then the following holds:\\ Every element $g\in B_n$ has a unique presentation in the following form: $$\tau_{1}^{i_1}\cdot \tau_{2}^{i_2}\cdots \tau_{n}^{i_n}$$ such that $-k\leq i_{k}<k$. \end{theorem} \begin{proof} The proof is by induction on $n$. For $n=1$, it is easy to see that $B_1$ is generated by $\tau_1$, since $B_1$ is a cyclic group of order $2$.
Now, assume by induction that the theorem holds for every $k$ such that $k\leq n-1$. Denote by $\dot{B}_{n-1}$ the parabolic subgroup of $B_n$ generated by $s_0, s_1, \ldots, s_{n-2}$. It is easy to see that $\dot{B}_{n-1}$ is isomorphic to $B_{n-1}$ (which satisfies the theorem by the induction hypothesis). Notice that in the permutation presentation of $B_n$, every element $x\in \dot{B}_{n-1}$ satisfies $x(n)=n$. Now, consider the right cosets of $\dot{B}_{n-1}$ in $B_n$. There are $2n$ different right cosets, where every two elements $x$ and $y$ in the same right coset of $\dot{B}_{n-1}$ in $B_n$ satisfy $x(n)=y(n)$. Now, notice that the powers of $\tau_{n}$ satisfy the following properties: \begin{itemize} \item For $0\leq i_n<n$, $\tau_{n}^{i_n}(n)=n-i_n$; \item For $-n\leq i_n<0$, $\tau_{n}^{i_n}(n)=i_n$. \end{itemize} Hence, for $-n\leq i_n\leq n-1$, the elements $\tau_{n}^{i_n}$ give the $2n$ different images of $n$ in the permutation presentation. Hence, for $-n\leq i_n\leq n-1$, we have the following $2n$ different right cosets of $\dot{B}_{n-1}$ in $B_n$: $$\dot{B}_{n-1}\tau_{n}^{i_n}.$$ Then the result of the theorem holds for $n$. \end{proof} \begin{example} Consider the element $\pi= [-2; ~-1; ~-4; ~-3]$ of $B_4$. Now, we construct the $OGS$ presentation of the element, as it is described in Theorem \ref{ogs-bn}. First, notice that $\pi(4)=-3$. Hence we need $\tau_4^{i_4}(4)=-3$, which implies that $i_4=-3$. Thus we conclude: $$\pi\in \dot{B}_3 ~\tau_4^{-3}.$$ Now, consider $\pi(3)=-4$. Notice that $\tau_4^{-3}(-1)=-4$. Hence, we consider $i_3$, such that $\tau_3^{i_3}(3)=-1$, which implies $i_3=-1$. Thus we conclude: $$\pi\in \dot{B}_2 ~\tau_3^{-1}\cdot \tau_4^{-3}.$$ Continuing by the same process, consider $\pi(2)=-1$. Notice that \\ $[\tau_3^{-1}\cdot \tau_4^{-3}](1)=-1$. Hence, we consider $i_2$, such that $\tau_2^{i_2}(2)=1$, which implies $i_2=2-1=1$.
Thus we conclude: $$\pi \in \dot{B}_1 ~\tau_2\cdot\tau_3^{-1}\cdot \tau_4^{-3}.$$ Finally, consider $\pi(1)=-2$. Notice that \\ $[\tau_2\cdot \tau_3^{-1}\cdot \tau_4^{-3}](-1)=-2$. Hence, we consider $i_1$, such that $\tau_1^{i_1}(1)=-1$, which implies $i_1=-1$. Thus we conclude: $$\pi = \tau_1^{-1}\cdot\tau_2\cdot\tau_3^{-1}\cdot \tau_4^{-3}.$$ \end{example} \begin{remark} We call the presentation of elements of $B_n$ which has been shown in Theorem \ref{ogs-bn} the generalized standard $OGS$ presentation of $B_n$. \end{remark} Now, we show some important properties of the generalized standard $OGS$ of $B_{n}$.\\ We start with the exchange laws. \begin{proposition} In order to transform the element $\tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}}$ $(p<q)$ into the generalized standard $OGS$ presentation of the form $\tau_{1}^{i_{1}}\cdot \tau_{2}^{i_{2}} \cdots \tau_{n}^{i_{n}}$, one needs to use the following exchange laws: \begin{itemize} \item The case $0<r_{p}<p$: $$ \tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}}=\left\{\begin{array}{ll} \tau_{r_{q}}^{-r_{q}} \cdot \tau_{r_{q}+r_{p}}^{r_{q}} \cdot \tau_{p+r_{q}}^{r_{p}} \cdot \tau_{q}^{r_{q}} & q-r_{q} \geq p \\ \\ \tau_{r_{q}}^{p-q} \cdot \tau_{r_{q}+r_{p}}^{q-p} \cdot \tau_{q}^{r_{q}+r_{p}} & r_{p} \leq q-r_{q} \leq p \\ \\ \tau_{p+r_{q}-q}^{r_{q}+r_{p}-q} \cdot \tau_{r_{q}}^{p-r_{p}-r_{q}} \cdot \tau_{q}^{r_{q}+r_{p}-p-q} & q-r_{q} \leq r_{p} \end{array}\right. $$ \item The case $r_{p}=-p$: $$ \tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}}=\left\{\begin{array}{ll} \tau_{r_{q}}^{-r_{q}} \cdot \tau_{p+r_{q}}^{-p-r_{q}} \cdot \tau_{q}^{r_{q}} & q-r_{q} \geq p \\ \\ \tau_{p+r_{q}-q}^{-p-r_{q}+q} \cdot \tau_{r_{q}}^{-r_{q}} \cdot \tau_{q}^{r_{q}-q} & q-r_{q}<p \end{array}\right.
$$ \item The case $-p < r_{p} < 0$: $$ \tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}}=\left\{\begin{array}{ll} \tau_{r_{q}+r_{p}+p}^{r_{q}} \cdot \tau_{p+r_{q}}^{r_{p}-r_{q}} \cdot \tau_{q}^{r_{q}} & q-r_{q} \geq p \\ \\ \tau_{p+r_{q}-q}^{-p-r_{q}+q} \cdot \tau_{r_{q}}^{p+r_{q}-q} \cdot \tau_{r_{q}+r_{p}+p}^{q-p} \cdot \tau_{q}^{p+r_{p}-q+r_{q}} & r_{p}+p \leq q-r_{q} \leq p \\ \\ \tau_{p+r_{q}-q}^{r_{p}} \cdot \tau_{r_{q}}^{-r_{p}} \cdot \tau_{q}^{r_{q}+r_{p}} & q-r_{q} \leq r_{p}+p \end{array}\right. $$ \end{itemize} \end{proposition} \begin{proof} By the remark following Definition \ref{tau}, we have $$\tau_{i} = s_{0} \cdot t_{i}.$$ Then, $$\tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}}= (s_{0} \cdot t_{q})^{r_{q}} \cdot (s_{0} \cdot t_{p})^{r_{p}} $$ Similar to the exchange laws for the standard $OGS$ of $S_n$ as it is presented in Proposition \ref{exchange}, we consider several cases, depending on the values of $p, q, r_p, r_q$. We have the following three main cases: \begin{enumerate} \item $0<r_p<p$; \item $r_p=-p $; \item $-p<r_p<0$. \end{enumerate} We start with the first case. \begin{enumerate} \item The case $0 < r_{p} < p$. \\ Similar to the three cases of exchange laws which arise from the standard $OGS$ presentation of $S_n$ (as it is presented in Proposition \ref{exchange}), we have the following three subcases of exchange laws, depending on how the value of $q-r_q$ compares to the values of $p$ and $r_p$: \begin{itemize} \item $q-r_q\geq p$; \item $r_p\leq q-r_q\leq p$; \item $q-r_q\leq r_p$.
\end{itemize} Now, we start with the first subcase: $\bullet$ $q-r_{q}\geq p$:\\ $$\tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}}= (s_{0} \cdot t_{q})^{r_{q}} \cdot (s_{0} \cdot t_{p})^{r_{p}} $$ Consider the permutation presentation of $(s_{0} \cdot t_{q})^{r_{q}} \cdot (s_{0} \cdot t_{p})^{r_{p}}$:\\ $$1 \rightarrow -(q-r_{q}+1) \rightarrow -(q-r_{q}+1).$$ $$r_{q} \rightarrow -q \rightarrow -q.$$ $$r_{q}+1 \rightarrow 1 \rightarrow -(p-r_{p}+1).$$ $$r_{q}+r_{p} \rightarrow r_{p} \rightarrow -(p).$$ $$r_{q}+r_{p}+1 \rightarrow r_{p}+1 \rightarrow 1.$$ $$q \rightarrow q-r_{q} \rightarrow q-r_{q}\quad \text{or}\quad p-r_{p}.$$ By the exchange laws of $S_{n}$, as it is presented in Proposition \ref{exchange}: $$t_{q}^{r_{q}} \cdot t_{p}^{r_{p}} = t_{r_{q}+r_{p}}^{r_{q}} \cdot t_{p+r_{q}}^{r_{p}} \cdot t_{q}^{r_{q}}$$ Now, we consider the permutation presentation of $$ (s_{0} \cdot t_{r_{q}+r_{p}})^{r_{q}} \cdot (s_{0} \cdot t_{p+r_{q}})^{r_{p}} \cdot (s_{0} \cdot t_{q})^{r_{q}}$$ $$1 \rightarrow -(r_{p}+1) \rightarrow -1 \rightarrow q-r_{q}+1.$$ $$r_{q} \rightarrow -(r_{p}+r_{q}) \rightarrow -r_{q} \rightarrow q-r_{q}+r_{q} = q.$$ $$r_{q}+1 \rightarrow 1 \rightarrow -(p+r_{q}-r_{p}+1) \rightarrow -(p-r_{p}+1). $$ $$r_{q}+r_{p} \rightarrow r_{p} \rightarrow -(p+r_{q}) \rightarrow -p.$$ $$r_{q}+r_{p}+1 \rightarrow r_{q}+r_{p}+1 \rightarrow r_{q}+1 \rightarrow 1. $$ $$q \rightarrow q-r_{q} \rightarrow q-r_{q}.$$ Then the permutation presentations of $\tau_{q}^{r_q}\cdot \tau_{p}^{r_p}$ and $\tau_{r_{q}+r_{p}}^{r_{q}} \cdot \tau_{p+r_{q}}^{r_{p}} \cdot \tau_{q}^{r_{q}}$ satisfy the following properties: \begin{itemize} \item For $1\leq j\leq r_q$: $\tau_{q}^{r_q}\cdot \tau_{p}^{r_p}(j)=-\tau_{r_{q}+r_{p}}^{r_q} \cdot \tau_{p+r_{q}}^{r_{p}} \cdot \tau_{q}^{r_{q}}(j)$; \item For $r_q+1\leq j\leq n$: $\tau_{q}^{r_q}\cdot \tau_{p}^{r_p}(j)=\tau_{r_{q}+r_{p}}^{r_q} \cdot \tau_{p+r_{q}}^{r_{p}} \cdot \tau_{q}^{r_{q}}(j)$.
\end{itemize} Then, we get the following result: $$\tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}}= \tau_{r_{q}}^{-r_{q}} \cdot \tau_{r_{q}+r_{p}}^{r_{q}} \cdot \tau_{p+r_{q}}^{r_{p}} \cdot \tau_{q}^{r_{q}},$$ in case $q-r_q\geq p$ and $0<r_p<p$. \\ Now, we turn to the second subcase of the case $0<r_p<p$. \\ $\bullet$ $r_{p} \leqslant q-r_{q} \leqslant p$: $$\tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}}= (s_{0} \cdot t_{q})^{r_{q}} \cdot (s_{0} \cdot t_{p})^{r_{p}}. $$ In the same way, we can see that the permutation presentations of $\tau_{q}^{r_q}\cdot \tau_{p}^{r_p}$ and $\tau_{r_{q}}^{p-q+r_q} \cdot \tau_{r_{q}+r_{p}}^{q-p} \cdot \tau_{q}^{r_{q}+r_{p}}$ satisfy the following properties: \begin{itemize} \item For $1\leq j\leq r_q$: $\tau_{q}^{r_q}\cdot \tau_{p}^{r_p}(j)=-\tau_{r_{q}}^{p-q+r_q} \cdot \tau_{r_{q}+r_{p}}^{q-p} \cdot \tau_{q}^{r_{q}+r_{p}}(j)$; \item For $r_q+1\leq j\leq n$: $\tau_{q}^{r_q}\cdot \tau_{p}^{r_p}(j)=\tau_{r_{q}}^{p-q+r_q} \cdot \tau_{r_{q}+r_{p}}^{q-p} \cdot \tau_{q}^{r_{q}+r_{p}}(j)$. \end{itemize} Then, we get the following result: $$\tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}}= \tau_{r_{q}}^{p-q} \cdot \tau_{r_{q}+r_{p}}^{q-p} \cdot \tau_{q}^{r_{q}+r_{p}},$$ in case $r_p\leqslant q-r_q\leqslant p$ and $0<r_p<p$. \\ Now, we turn to the third subcase of the case $0<r_p<p$.
\\ $\bullet$ $q-r_{q} \leqslant r_{p}$: $$\tau_{q}^{r_{q}}\cdot \tau_{p}^{r_{p}}= (s_{0} \cdot t_{q})^{r_{q}}\cdot (s_{0} \cdot t_{p})^{r_{p}} .$$ In the same way, we can see that the permutation presentations of $\tau_{q}^{r_q}\cdot \tau_{p}^{r_p}$ and $\tau_{p+r_{q}-q}^{r_q+r_p-q} \cdot \tau_{r_q}^{p-r_p} \cdot \tau_{q}^{r_{q}+r_{p}-q}$ satisfy the following properties: \begin{itemize} \item For $1\leq j\leq r_q$: $\tau_{q}^{r_q}\cdot \tau_{p}^{r_p}(j)=\tau_{p+r_{q}-q}^{r_q+r_p-q} \cdot \tau_{r_q}^{p-r_p} \cdot \tau_{q}^{r_{q}+r_{p}-q}(j)$; \item For $r_q+1\leq j\leq n$: $\tau_{q}^{r_q}\cdot \tau_{p}^{r_p}(j)=-\tau_{p+r_{q}-q}^{r_q+r_p-q} \cdot \tau_{r_q}^{p-r_p} \cdot \tau_{q}^{r_{q}+r_{p}-q}(j)$. \end{itemize} Then, we get the result: $$\tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}} = \tau_{p+r_{q}-q}^{r_{q}+r_{p}-q} \cdot \tau_{r_{q}}^{p-r_{p}-r_{q}} \cdot \tau_{q}^{r_{q}+r_{p}-p-q},$$ in case $q-r_q\leqslant r_p$ and $0<r_p<p$. \\ \item The case $r_{p}=-p$.\\ The case is divided into the following two subcases, depending on how the value of $q-r_q$ compares to the value of $p$: \begin{itemize} \item $q-r_q\geq p$; \item $q-r_q < p$. \end{itemize} Now, we start with the first subcase: $\bullet$ $q-r_{q}\geq p.$ \\ Consider the permutation presentation of $\tau_{q}^{r_{q}} \cdot \tau_{p}^{-p}$: $$ \tau_{q}^{r_q}= [-(q-r_{q}+1); ~-(q-r_{q}+2); \ldots; ~-q; ~1; ~2;\ldots; ~q-r_q] $$ $$ \tau_{p}^{-p}= [-1; ~-2; \ldots; ~-p; ~p+1; ~p+2; \ldots; ~q]. $$ Therefore the permutation presentation of $\tau_{q}^{r_q}\cdot \tau_{p}^{-p}$: $$1 \rightarrow -(q-r_{q}+1) \rightarrow -(q-r_{q}+1).$$ $$r_{q} \rightarrow -q \rightarrow -q.$$ $$r_{q}+1 \rightarrow 1 \rightarrow -1.$$ $$r_{q}+p \rightarrow p \rightarrow -p.$$ Now, consider the permutation presentation of $\tau_{r_q}^{-r_q}\cdot \tau_{r_{q}+p}^{-(r_q+p)}$: $$ \tau_{r_q}^{-r_q}\cdot \tau_{r_q+p}^{-(r_q+p)}=[1; ~2; \ldots; ~r_q; ~-(r_q+1); \ldots -(r_q+p); ~r_q+p+1; \ldots; ~q].
$$ Hence we get $$\tau_{q}^{r_{q}} \cdot \tau_{p}^{-p}=\tau_{r_q}^{-r_q}\cdot \tau_{r_{q}+p}^{-(r_q+p)}\cdot \tau_{q}^{r_{q}},$$ in case $q-r_q\geq p$ and $r_p=-p$. \\ $\bullet$ Now we turn to the subcase $q-r_{q}<p.$\\ Since $q-r_{q}<p$, the permutation presentation of $\tau_{q}^{r_{q}} \cdot \tau_{p}^{-p}$ is as follows: $$1 \rightarrow -(q-r_{q}+1) \rightarrow q-r_{q}+1.$$ $$p+r_{q}-q \rightarrow -p \rightarrow p.$$ $$p+r_{q}-q+1 \rightarrow -(p+1) \rightarrow -(p+1).$$ $$r_{q} \rightarrow -q \rightarrow -q.$$ $$r_{q}+1 \rightarrow 1 \rightarrow -1.$$ $$q \rightarrow q-r_{q} \rightarrow -(q-r_{q}).$$ Now, consider the permutation presentation of $\tau_{r_{q}+p-q}^{-(r_q+p-q)}\cdot \tau_{r_q}^{-r_q}\cdot \tau_{q}^{-q}$: $$ \tau_{r_q+p-q}^{-(r_q+p-q)}\cdot \tau_{r_q}^{-r_q}\cdot \tau_{q}^{-q}=[-1; ~-2; \ldots; ~-(r_q+p-q); ~r_q+p-q+1; \ldots ~r_q; ~-(r_q+1); \ldots; ~-q]. $$ Hence we get $$\tau_{q}^{r_{q}} \cdot \tau_{p}^{-p}=\tau_{r_{q}+p-q}^{-(r_q+p-q)}\cdot \tau_{r_q}^{-r_q}\cdot \tau_{q}^{r_{q}-q},$$ in case $q-r_q<p$ and $r_p=-p$. \\ \item The case $-p < r_{p} < 0$.\\ In this case, we have the following three subcases, depending on how the value of $q-r_q$ compares to the values of $p$ and $p+r_p$: \begin{itemize} \item $q-r_q\geq p$; \item $r_p+p\leq q-r_q\leq p$; \item $q-r_q\leq r_p+p$. \end{itemize} Now, we start with the first subcase: $\bullet$ $q-r_{q} \geq p.$ $$\tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}} = \tau_{q}^{r_{q}} \cdot \tau_{p}^{p+r_{p}} \cdot \tau_{p}^{-p}$$ Consider the exchange law for $\tau_{q}^{r_{q}} \cdot \tau_{p}^{p+r_{p}}$ where $0<p+r_{p}<p$ and $q-r_{q} \geq p$.
$$\tau_{q}^{r_{q}} \cdot \tau_{p}^{p+r_{p}}= \tau_{r_{q}}^{-r_{q}} \cdot \tau_{r_{q}+r_{p}+p}^{r_{q}} \cdot \tau_{p+r_{q}}^{p+r_{p}} \cdot \tau_{q}^{r_{q}}.$$ Then we get: $$ \tau_{q}^{r_{q}} \cdot \tau_{p}^{p+r_{p}} \cdot \tau_{p}^{-p}= \tau_{r_{q}}^{-r_{q}} \cdot \tau_{r_{q}+r_{p}+p}^{r_{q}} \cdot \tau_{p+r_{q}}^{p+r_{p}} \cdot \tau_{q}^{r_{q}} \cdot \tau_{p}^{-p}$$ Then, by using the exchange law for $\tau_{q}^{r_{q}} \cdot \tau_{p}^{-p}$ where $q-r_{q} \geq p$ we get: $$\tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}} = \tau_{r_{q}}^{-r_{q}} \cdot \tau_{r_{q}+r_{p}+p}^{r_{q}} \cdot \tau_{p+r_{q}}^{p+r_{p}} \cdot \tau_{r_{q}}^{-r_{q}} \cdot \tau_{p+r_{q}}^{-p-r_{q}} \cdot \tau_{q}^{r_{q}}.$$ $$ = \tau_{r_{q}}^{-r_{q}} \cdot \tau_{r_{q}+r_{p}+p}^{r_{q}} \cdot \tau_{p+r_{p}}^{-(p+r_{p})} \cdot \tau_{p+r_{q}+r_{p}}^{-(p+r_{q}+r_{p})} \cdot \tau_{p+r_{q}}^{p+r_{p}} \cdot \tau_{p+r_{q}}^{-(p+r_{q})} \cdot \tau_{q}^{r_{q}}.$$ $$ = \tau_{r_{q}}^{-r_{q}} \cdot \tau_{r_{q}}^{-r_{q}} \cdot \tau_{p+r_{p}+r_{q}}^{-(p+r_{p}+r_{q})} \cdot \tau_{p+r_{p}+r_{q}}^{r_{q}} \cdot \tau_{p+r_{q}+r_{p}}^{-(p+r_{q}+r_{p})} \cdot \tau_{p+r_{q}}^{p+r_{p}} \cdot \tau_{p+r_{q}}^{-(p+r_{q})} \cdot \tau_{q}^{r_{q}}.$$ Then, we get the result: $$\tau_{q}^{r_{q}} \cdot \tau_{p}^{r_{p}} = \tau_{r_{q}+r_{p}+p}^{r_{q}} \cdot \tau_{p+r_{q}}^{r_{p}-r_{q}} \cdot \tau_{q}^{r_{q}},$$ in case $q-r_q\geq p$ and $-p<r_p<0$. \\ Now, we turn to the second subcase of the case $-p<r_p<0$. \\ $\bullet$ $r_{p}+p \leqslant q-r_{q} \leqslant p.$ $$\tau_{q}^{r_{q}}\cdot \tau_{p}^{r_{p}}= \tau_{q}^{r_{q}}\cdot \tau_{p}^{p+r_{p}} \cdot \tau_{p}^{-p}.$$ $$\tau_{q}^{r_{q}}\cdot \tau_{p}^{p+r_{p}} \cdot \tau_{p}^{-p} = \tau_{r_{q}}^{p-q} \cdot \tau_{r_{q}+p+r_{p}}^{q-p}\cdot \tau_{q}^{r_{q}+p+r_{p}}\cdot \tau_{p}^{-p}.$$ Consider the exchange law for $\tau_{q}^{r_{q}+p+r_{p}}\cdot \tau_{p}^{-p}$, where $q-r_{q}<p$.
$$=\tau_{r_{q}}^{p-q} \cdot \tau_{r_{q}+p+r_{p}}^{q-p}\cdot \tau_{p+r_{q}+p+r_{p}-q}^{-(p+r_{q}+p+r_{p}-q)} \cdot \tau_{r_{q}+p+r_{p}}^{-(r_{q}+p+r_{p})} \cdot \tau_{q}^{r_{q}+p+r_{p}-q}$$ Consider the exchange law for $\tau_{r_{q}+p+r_{p}}^{q-p}\cdot \tau_{p+r_{q}+p+r_{p}-q}^{-(p+r_{q}+p+r_{p}-q)}$ where $q-r_{q}=p$. $$=\tau_{r_{q}}^{p-q} \cdot \tau_{q-p}^{-(q-p)}\cdot \tau_{r_{q}+p+r_{p}-q}^{-(r_{q}+p+r_{p})} \cdot \tau_{r_{q}+p+r_{p}}^{q-p} \cdot \tau_{r_{q}+p+r_{p}}^{-(r_{q}+p+r_{p})} \cdot \tau_{q}^{r_{q}+p+r_{p}-q}.$$ $$\tau_{r_{q}}^{p-q} \cdot \tau_{q-p}^{-(q-p)}= \tau_{r_{q}}^{r_{q}-(q-p)}\cdot \tau_{r_{q}}^{-r_{q}} \cdot \tau_{q-p}^{-(q-p)}$$ Consider the exchange law for $\tau_{r_{q}}^{r_{q}-(q-p)}\cdot \tau_{q-p}^{-(q-p)} $, where $q-r_q=p$. $$= \tau_{r_{q}-(q-p)}^{-(r_{q}-(q-p))} \cdot \tau_{r_{q}}^{-r_{q}} \cdot \tau_{r_{q}}^{r_{q}-(q-p)} \cdot \tau_{r_{q}}^{-r_{q}} \cdot \tau_{r_{q}+p+r_{p}}^{q-p}\cdot \tau_{q}^{r_{q}+p+r_{p}-q}.$$ Then, we get the result: $$\tau_{q}^{r_{q}}\cdot \tau_{p}^{r_{p}} = \tau_{r_{q}-q+p}^{-r_{q}+q-p}\cdot \tau_{r_{q}}^{r_{q}+p-q}\cdot \tau_{r_{q}+p+r_{p}}^{q-p}\cdot \tau_{q}^{r_{q}+p+r_{p}-q},$$ in case $r_p+p\leqslant q-r_q\leqslant p$ and $-p<r_p<0$. \\ Now, we turn to the third subcase of the case $-p<r_p<0$. \\ $\bullet$ $q-r_{q} \leqslant r_{p}+p.$ $$\tau_{q}^{r_{q}}\cdot \tau_{p}^{r_{p}} = \tau_{q}^{r_{q}}\cdot \tau_{p}^{p+r_{p}}\cdot \tau_{p}^{-p}$$ By the exchange law of the case $0 < r_{p} < p$ where $q-r_{q}\leqslant p+r_{p}$ we get: $$\tau_{q}^{r_{q}}\cdot \tau_{p}^{p+r_{p}}\cdot \tau_{p}^{-p}= \tau_{p+r_{q}-q}^{r_{q}+p+r_{p}-q} \cdot \tau_{r_{q}}^{p-(p+r_{p})-r_{q}} \cdot \tau_{q}^{r_{q}+p+r_{p}-p-q} \cdot \tau_{p}^{-p}.$$ $$=\tau_{p+r_{q}-q}^{r_{q}+p+r_{p}-q} \cdot \tau_{r_{q}}^{-r_{p}-r_{q}} \cdot \tau_{q}^{r_{q}+r_{p}-q}\cdot \tau_{p}^{-p}. $$ $$\tau_{q}^{r_{q}+r_{p}-q} \cdot \tau_{p}^{-p}= \tau_{q}^{r_{q}+r_{p}} \cdot \tau_{q}^{-q} \cdot \tau_{p}^{-p}.
$$ Consider the exchange law for $\tau_{q}^{r_{q}+r_{p}} \cdot \tau_{p}^{-p}$, where $q-r_{q}<p$: $$=\tau_{p+r_{q}-q}^{r_{q}+p+r_{p}-q} \cdot \tau_{r_{q}}^{-r_{p}-r_{q}} \cdot \tau_{p+r_{q}+r_{p}-q}^{-(p+r_{q}+r_{p}-q)}\cdot \tau_{r_{q}+r_{p}}^{-(r_{q}+r_{p})} \cdot \tau_{q}^{r_{q}+r_{p}} \cdot \tau_{q}^{-q}\cdot \tau_{q}^{-q}$$ $$\tau_{r_{q}}^{-r_{p}-r_{q}} \cdot \tau_{p+r_{q}+r_{p}-q}^{-(p+r_{q}+r_{p}-q)}= \tau_{r_{q}}^{-r_{p}} \cdot \tau_{r_{q}}^{-r_{q}}\cdot \tau_{p+r_{q}+r_{p}-q}^{-(p+r_{q}+r_{p}-q)}. $$ Consider the exchange law for $\tau_{r_{q}}^{-r_{p}} \cdot \tau_{p+r_{q}+r_{p}-q}^{-(p+r_{q}+r_{p}-q)}$, where $q-r_{q}>p$: $$=\tau_{p+r_{q}-q}^{r_{q}+p+r_{p}-q} \cdot \tau_{-r_{p}}^{r_{p}} \cdot \tau_{p+r_{q}+r_{p}-q-r_{p}}^{-(p+r_{q}-q)} \cdot \tau_{r_{q}}^{-r_{p}} \cdot \tau_{r_{q}}^{-r_{q}} \cdot \tau_{r_{q}+r_{p}}^{-(r_{q}+r_{p})} \cdot \tau_{q}^{r_{q}+r_{p}}.$$ $$=\tau_{p+r_{q}-q}^{r_{q}+p+r_{p}-q} \cdot \tau_{-r_{p}}^{r_{p}} \cdot \tau_{p+r_{q}-q}^{-(p+r_{q}-q)} \cdot \tau_{r_{q}}^{-r_{p}} \cdot \tau_{r_{q}}^{-r_{q}} \cdot \tau_{r_{q}+r_{p}}^{-(r_{q}+r_{p})} \cdot \tau_{q}^{r_{q}+r_{p}}.$$ Consider the exchange law for $\tau_{r_{q}}^{-r_{p}} \cdot \tau_{r_{q}+r_{p}}^{-(r_{q}+r_{p})}$, where $q-r_{q}=p$: $$=\tau_{p+r_{q}-q}^{r_{q}+p+r_{p}-q} \cdot \tau_{-r_{p}}^{r_{p}} \cdot \tau_{p+r_{q}-q}^{-(p+r_{q}-q)} \cdot \tau_{-r_{p}}^{r_{p}} \cdot \tau_{r_{q}+r_{p}-r_{p}}^{-r_{q}} \cdot \tau_{r_{q}}^{-r_{p}} \cdot \tau_{r_{q}}^{-r_{q}}\cdot \tau_{q}^{r_{q}+r_{p}}.$$ Consider the exchange law for $\tau_{p+r_{q}-q}^{r_q+p+r_p-q} \cdot \tau_{-r_{p}}^{r_{p}}$, where $q-r_{q}=p$: $$=\tau_{r_{q}+p+r_{p}-q}^{-(r_{q}+p+r_{p}-q)} \cdot \tau_{-r_{p}+r_{q}+p+r_{p}-q}^{-(r_{q}+p-q)} \cdot \tau_{p+r_{q}-q}^{r_{q}+p+r_{p}-q} \cdot \tau_{p+r_{q}-q}^{-(p+r_{q}-q)} \cdot \tau_{-r_{p}}^{r_{p}} \cdot \tau_{r_{q}}^{-r_{q}} \cdot \tau_{r_{q}}^{-r_{p}}\cdot \tau_{r_{q}}^{-r_{q}}\cdot \tau_{q}^{r_{q}+r_{p}}.$$ $$=\tau_{r_{q}+p+r_{p}-q}^{-(r_{q}+p+r_{p}-q)} \cdot \tau_{p+r_{q}-q}^{r_{q}+p+r_{p}-q}
\cdot \tau_{-r_{p}}^{r_{p}} \cdot \tau_{r_{q}}^{-r_{p}}\cdot \tau_{q}^{r_{q}+r_{p}}.$$ Consider the exchange law for $\tau_{p+r_{q}-q}^{r_{q}+p+r_{p}-q} \cdot \tau_{-r_{p}}^{r_{p}}$, where $q-r_{q}=p$: $$=\tau_{r_{q}+p+r_{p}-q}^{-(r_{q}+p+r_{p}-q)} \cdot \tau_{r_{q}+p+r_{p}-q}^{-(r_{q}+p+r_{p}-q)} \cdot \tau_{-r_{p}+r_{q}+r_{p}+p-q}^{-(r_{q}+p-q)}\cdot \tau_{p+r_{q}-q}^{r_{q}+p+r_{p}-q} \cdot \tau_{r_{q}}^{-r_{p}} \cdot \tau_{q}^{r_{q}+r_{p}}.$$ $$=\tau_{r_{q}+p-q}^{-(r_{q}+p-q)} \cdot \tau_{p+r_{q}-q}^{r_{q}+p+r_{p}-q} \cdot \tau_{r_{q}}^{-r_{p}} \cdot \tau_{q}^{r_{q}+r_{p}}.$$ Then, we get the result: $$\tau_{q}^{r_q} \cdot \tau_{p}^{r_p}=\tau_{r_{q}+p-q}^{r_{p}} \cdot \tau_{r_{q}}^{-r_{p}} \cdot \tau_{q}^{r_{q}+r_{p}},$$ in case $q-r_q\leqslant r_p+p$ and $-p<r_p<0$. \end{enumerate} \end{proof} \section{The presentation of the parabolic subgroup $\dot{S}_{n}$ of $B_{n}$ by the generalized standard OGS}\label{parabolic-subgroup} This section deals with the properties of the parabolic subgroup of $B_{n}$ which does not contain $s_{0}$ in its reduced Coxeter presentation.\\ \begin{definition} \label{SnBn} Denote by $\dot{S}_n$ the parabolic subgroup of $B_n$ which is generated by $s_1, s_2, \ldots, s_{n-1}$ (i.e., the elements of $B_n$ which can be written without any occurrence of $s_0$). \end{definition} \begin{remark} By the Coxeter diagram of $B_n$ one can easily see that the subgroup $\dot{S}_n$ which is defined in Definition \ref{SnBn} is isomorphic to the symmetric group $S_n$.
\end{remark} Now, we show the generalized standard $OGS$ decomposition of the elements of the subgroup $\dot{S}_{n}$ of $B_n$ by using the following two lemmas:\\ \begin{lemma}\label{tau-1+1} Assume the elements of $B_{n}$ are presented by the generalized standard $OGS$ presentation, as described in Theorem \ref{ogs-bn}. Then the following holds:\\ $\bullet$ $\tau_{k}^{-1} \cdot \tau_{k+r}$ is the element $\prod_{j=k}^{k+r-1} s_{j}$ of $\dot{S}_{n}$ for every $1\leqslant r \leqslant n-k$.\\ \end{lemma} \begin{proof} We start with the case of $r=1$: $$ \tau_{k} = s_{0}\cdot s_{1} \cdots s_{k-1}$$ Then, $$\tau_{k}^{-1}\cdot \tau_{k+1} = s_{k-1}\cdots s_{1}\cdot s_{0} \cdot s_{0} \cdot s_{1} \cdots s_{k-1} \cdot s_{k}$$ $$\tau_{k}^{-1}\cdot \tau_{k+1} = s_{k}$$ Now, we turn to the case of $\tau_{k}^{-1}\cdot \tau_{k+r}$.\\ The element $\tau_{k}^{-1}\cdot \tau_{k+r}$ can be written as follows: $$\tau_{k}^{-1}\cdot \tau_{k+r} = (\tau_{k}^{-1}\cdot \tau_{k+1}) \cdot (\tau_{k+1}^{-1}\cdot \tau_{k+2}) \cdots (\tau_{k+r-1}^{-1}\cdot \tau_{k+r})$$ Then we conclude: $$ \tau_{k}^{-1}\cdot \tau_{k+r} = s_{k} \cdot s_{k+1} \cdots s_{k+r-1}$$ \end{proof} \begin{lemma}\label{tau-i+i} Assume the elements of $B_{n}$ are presented by the generalized standard $OGS$ presentation, as described in Theorem \ref{ogs-bn}. Then the following holds:\\ $\bullet$ $\tau_{k_{1}}^{-i} \cdot \tau_{k_{2}}^{i}$ is the element $\prod_{j_{1}=k_{1}}^{k_{2}-1} \prod_{j_{2}=0}^{i-1} s_{j_{1}-j_{2}}$ of $\dot{S}_{n}$ for every $1 \leqslant i \leqslant k_{1}$.\\ \end{lemma} \begin{proof} The proof is by induction on the value of $i$.\\ The case of $i=1$ has been proved in Lemma \ref{tau-1+1}.\\ Assume by induction that the lemma holds for $i=i_0$, i.e., $$\tau_{k_{1}}^{-i_0} \cdot \tau_{k_{2}}^{i_0}=\prod_{j_{1}=k_{1}}^{k_{2}-1} \prod_{j_{2}=0}^{i_0-1} s_{j_{1}-j_{2}}.$$ Now we prove the lemma for $i=i_0+1$.
\\ First, notice: $$\tau_{k}^{-(i_0+1)} \cdot \tau_{k+1}^{i_0+1} = \tau_{k}^{-1} \cdot (\tau_{k}^{-i_0} \cdot \tau_{k+1}^{i_0}) \cdot \tau_{k+1}$$ Then by using the induction assumption and using the property $s_p\cdot s_q=s_q\cdot s_p$ for $|p-q|\geq 2$ we have, $$\tau_{k}^{-1} \cdot (\tau_{k}^{-i_0} \cdot \tau_{k+1}^{i_0}) \cdot \tau_{k+1} = (s_{k-1} \cdots s_{1}\cdot s_{0}) \cdot (s_{k} \cdot s_{k-1} \cdots s_{k-(i_0-1)}) \cdot (s_{0}\cdot s_{1} \cdots s_{k-1} \cdot s_{k}).$$ $$=(s_{k-1} \cdots s_{k-(i_0-2)}\cdot s_{k-(i_0-1)} \cdot s_{k-i_0})\cdot ( s_{k-(i_0+1)}\cdots s_0)\cdot (s_{k} \cdot s_{k-1} \cdots$$ $$s_{k-(i_0-1)})\cdot (s_{0}\cdot s_{1} \cdots s_{k-(i_0+1)})\cdot(s_{k-i_0}\cdot s_{k-(i_0-1)}\cdots s_k).$$ $$= (s_{k-1} \cdots s_{k-(i_0-2)}\cdot s_{k-(i_0-1)} \cdot s_{k-i_0})\cdot (s_{k}\cdot s_{k-1}\cdots s_{k-(i_0-1)}\cdot s_{k-i_0} \cdot s_{k-(i_0-1)} \cdot s_{k-(i_0-1)+1} \cdots s_{k}).$$ Since $$s_{k}\cdot s_{k-1}\cdots s_{k-(i_0-1)}\cdot s_{k-i_0} \cdot s_{k-(i_0-1)} \cdots s_{k}=s_{k-i_0}\cdot s_{k-(i_0-1)}\cdots s_k\cdot s_{k-1}\cdots s_{k-(i_0-1)}\cdot s_{k-i_0}$$ We get: $$\tau_{k}^{-1} \cdot (\tau_{k}^{-i_0} \cdot \tau_{k+1}^{i_0}) \cdot \tau_{k+1} = s_{k-1} \cdot s_{k-2} \cdots s_{k-i_0} \cdot s_{k-i_0} \cdot s_{k-(i_0-1)} \cdots s_{k-1} \cdot s_{k} \cdot s_{k-1} \cdots s_{k-i_0}.$$ $$=s_{k} \cdot s_{k-1} \cdots s_{k-i_0}.$$ Hence, the lemma holds for $i=i_0+1$, and therefore it holds for every value of $i$. \end{proof} \begin{theorem}\label{main-ogs-sn} The presentation of every element $\pi\in \dot{S}_n$ by the generalized standard $OGS$ (as it is described in Theorem \ref{ogs-bn}) has the following form: \begin{enumerate} \item $\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m}}^{i_{k_{m}}}$ where $-k_{j} \leqslant i_{k_{j}} \leqslant k_{j}-1$, for every $1 \leqslant j \leqslant m$, $i_{k_{1}} < 0$. \item $\sum_{j=1}^{m} i_{k_{j}} = 0 $. 
\item $0 \leqslant \sum_{j=r}^{m} i_{k_{j}} \leqslant k_{r-1}$ for $2 \leqslant r \leqslant m$,\\ where condition 3 is equivalent to\\ $-k_{r} \leqslant \sum_{j=1}^{r} i_{k_{j}} \leqslant 0$ \quad for \quad $1 \leqslant r \leqslant m-1$. \end{enumerate} \end{theorem} \begin{proof} Consider $\pi \in \dot{S}_{n}$. Then the presentation of $\pi$ by the normal form, as it is defined in \cite{BB}, Chapter 3.4, is as follows: $$\pi = (s_{r_{1}} \cdot s_{r_{1}-1} \cdots s_{r_{1}-v_{1}} )\cdot (s_{r_{2}} \cdot s_{r_{2}-1} \cdots s_{r_{2}-v_{2}}) \cdots (s_{r_{z}} \cdot s_{r_{z}-1} \cdots s_{r_{z}-v_{z}}),$$ where $z$ is a positive integer, $r_{1} < r_{2} < \cdots < r_{z}$, and $0 \leqslant v_{j} \leqslant r_{j}-1$, for every $1 \leqslant j\leqslant z$. By Lemma \ref{tau-i+i} we have $$s_{r_{j}} \cdot s_{r_{j}-1}\cdots s_{r_{j}-v_{j}} = \tau_{r_{j}}^{-(v_{j}+1)}\cdot \tau_{r_{j}+1}^{v_{j}+1}$$ for every $1\leqslant j \leqslant z$. Therefore, \begin{equation}\label{tau-tau+1} \pi = \tau_{r_{1}}^{-(v_{1}+1)} \cdot \tau_{r_{1}+1}^{v_{1}+1} \cdot \tau_{r_{2}}^{-(v_{2}+1)} \cdot \tau_{r_{2}+1}^{v_{2}+1} \cdots \tau_{r_{z}}^{-(v_{z}+1)}\cdot \tau_{r_{z}+1}^{v_{z}+1}. \end{equation} Now, consider the case where $r_{j}+1=r_{j+1}$ and $v_{j}=v_{j+1}$ for some $1\leq j\leq z-1$. Then, \begin{equation}\label{tau-j=tau-j+1} \tau_{r_j+1}^{v_j+1}\cdot \tau_{r_{j+1}}^{-(v_{j+1}+1)}=\tau_{r_j+1}^{v_j+1}\cdot \tau_{r_j+1}^{-(v_j+1)}=1. \end{equation} Hence, we get that Equation \ref{tau-tau+1} is equivalent to \begin{equation}\label{tau-tau+1-reduce-j} \pi = \tau_{r_{1}}^{-(v_{1}+1)} \cdots \tau_{r_{j}}^{-(v_{j}+1)}\cdot \tau_{r_{j+1}+1}^{v_{j+1}+1}\cdots \tau_{r_{z}+1}^{v_{z}+1}.
\end{equation} Hence, by substituting the identity which arises from Equation \ref{tau-j=tau-j+1} in Equation \ref{tau-tau+1}, for every $1\leq j\leq z-1$ such that $r_{j}+1=r_{j+1}$ and $v_{j}=v_{j+1}$, Equation \ref{tau-tau+1} becomes of the following form: \begin{equation}\label{tau-tau+1-reduced} \pi = \tau_{r_{1}}^{-(v_{1}+1)} \cdot \tau_{r_{q_1}+1}^{v_{1}+1} \cdot \tau_{r_{q_1+1}}^{-(v_{q_1}+1)} \cdot \tau_{r_{q_2}+1}^{v_{q_1}+1} \cdots \tau_{r_{q_{z'}+1}}^{-(v_{q_{z'}}+1)}\cdot \tau_{r_{z}+1}^{v_{q_{z'}}+1}, \end{equation} such that the following holds: \begin{itemize} \item $1\leq q_1$; \item $q_j+1\leq q_{j+1}$ for every $1\leq j\leq z'-1$; \item $q_{z'}=z$. \end{itemize} Now, consider the generalized standard $OGS$ presentation of $\pi$ as it is described in Theorem \ref{ogs-bn}: \begin{equation}\label{ogs-formula} \pi=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m}}^{i_{k_{m}}}, \end{equation} such that $-k_{j} \leqslant i_{k_{j}} \leqslant k_{j}-1$, for every $1 \leqslant j \leqslant m$. Then, by the uniqueness of the presentation of an element $\pi\in B_n$ by the generalized standard $OGS$, and using the presentation of $\pi$ as it is presented in Equation \ref{tau-tau+1-reduced}, the following properties are satisfied. \begin{itemize} \item \begin{equation}\label{equiv0} \sum_{j=1}^m i_{k_j}= -(v_1+1)+(v_1+1)+\sum_{j=1}^{z'}(-(v_{q_j}+1)+(v_{q_j}+1))=0. \end{equation} Hence, part 2 of the theorem holds.\\ Now, we show part 3 of the theorem, where we use all notations which were used from the beginning of the proof up to this point. We consider the generalized standard $OGS$ presentation of $\pi$ with the notations of Equation \ref{ogs-formula}. Then, consider the following equation: \begin{equation}\label{part3} -k_{r} \leqslant \sum_{j=1}^{r} i_{k_{j}} \leqslant 0 \quad \text{for} \quad 1 \leqslant r \leqslant m. \end{equation} We prove Equation \ref{part3} by induction on $r$.
Then, we show that Equation \ref{part3} is equivalent to part 3 of the theorem. \item Since $k_{1}= r_{1}$, we have $-k_1\leq i_{k_{1}}= -(v_{1}+1)< 0$. Hence, Equation \ref{part3} holds for $r=1$. Now, we show Equation \ref{part3} for $r=2$, and then we prove it by induction for every $r\leq m$. \item Now, we consider $k_2$ and $i_{k_2}$. Notice, $k_{2}=r_{q_1}+1$, and by Equation \ref{tau-tau+1-reduced}, either $$r_{q_1}+1 < r_{{q_1}+1}$$ or $$r_{q_1}+1 = r_{{q_1}+1}.$$ First, assume $$r_{q_1}+1 < r_{{q_1}+1}.$$ Then, $i_{k_2}=v_1+1$, and then \begin{equation}\label{nequal} i_{k_{1}}+ i_{k_{2}} = -(v_1+1)+(v_1+1)=0. \end{equation} Now, assume $$r_{q_1}+1 = r_{{q_1}+1}.$$ Then, $$\tau_{r_{q_1}+1}^{v_{1}+1} \cdot \tau_{r_{q_1+1}}^{-(v_{q_1}+1)}=\tau_{r_{q_1}+1}^{v_{1}-v_{q_1}}.$$ Hence, we get $$i_{k_2}={v_{1}-v_{q_1}}.$$ Therefore, $$i_{k_{1}}+i_{k_{2}}= -(v_{1}+1)+(v_{1}-v_{q_1})=-(v_{q_1}+1)<0,$$ and since $$-(v_{q_1}+1)\geq -r_{q_1+1}=-(r_{q_1}+1)=-k_2,$$ we get \begin{equation}\label{equal} -k_2\leqslant i_{k_{1}} + i_{k_{2}}<0. \end{equation} Hence, by considering both Equations \ref{nequal} and \ref{equal}, we get $$-k_{2} \leqslant i_{k_{1}} + i_{k_{2}}\leqslant 0$$ for every $\pi\in \dot{S}_n$. \item Now, assume by induction on $p$ that for every $1\leq p\leq p_0$, $$-k_{p} \leqslant \sum_{j=1}^{p} i_{k_{j}} \leqslant 0. $$ Consider $k_{{p_0}+1}$ and $i_{k_{{p_0}+1}}$. Notice, either $k_{{p_0}+1}=r_{q_y}+1$ or $k_{{p_0}+1}=r_{q_{y}+1}$ for some $1\leq y\leq z'$. Assume $k_{{p_0}+1}=r_{q_y}+1$. Then, by Equation \ref{tau-tau+1-reduced}, either $$r_{q_y}+1 < r_{{q_y}+1}$$ or $$r_{q_y}+1 = r_{{q_{y}}+1},$$ for some $1\leq y\leq z'$. First, assume $$r_{q_y}+1 < r_{{q_y}+1}.$$ Then, $i_{k_{p_0+1}}=v_{q_{y-1}}+1$, and then, by using Equation \ref{tau-tau+1-reduced}, \begin{equation}\label{equiv-p} \sum_{j=1}^{p_0+1}i_{k_j}= -(v_1+1)+(v_1+1)+\sum_{j=1}^{y-1}(-(v_{q_j}+1)+(v_{q_j}+1))=0.
\end{equation} Now, assume $$r_{q_y}+1 = r_{{q_{y}}+1}.$$ Then, \begin{equation}\label{i-k-p-0+1} \tau_{r_{q_y}+1}^{v_{q_{y-1}}+1} \cdot \tau_{r_{q_y+1}}^{-(v_{q_y}+1)}=\tau_{r_{q_y}+1}^{v_{q_{y-1}}-v_{q_y}},\quad \text{which implies}\quad i_{k_{p_0+1}}=v_{q_{y-1}}-v_{q_y}. \end{equation} By the induction assumption, \begin{equation}\label{induction-p0} -k_{p_0}\leq \sum_{j=1}^{p_0}i_{k_j}\leq 0. \end{equation} By the uniqueness of the presentation by the generalized standard $OGS$ (as it is described in Theorem \ref{ogs-bn}), and by using Equation \ref{tau-tau+1-reduced}, the following holds: \begin{equation}\label{equiv-p-1} \sum_{j=1}^{p_0}i_{k_j}= -(v_1+1)+(v_1+1)+\sum_{j=1}^{y-2}(-(v_{q_j}+1)+(v_{q_j}+1))-(v_{q_{y-1}}+1)=-(v_{q_{y-1}}+1). \end{equation} If $k_{{p_0}+1}=r_{q_y}+1$, then $k_{p_0}=r_{q_{y-1}+1}$. Then, by Equation \ref{tau-tau+1-reduced}, \\ $v_{q_{y-1}}+1\leq r_{q_{y-1}+1}$. Hence, $$v_{q_{y-1}}+1\leq k_{p_0}.$$ Therefore, $$-k_{p_0+1} =-r_{{q_{y}}+1}\leq -( v_{q_y}+1)=-(v_{q_{y-1}}+1)+ (v_{q_{y-1}}-v_{q_y}).$$ By Equation \ref{i-k-p-0+1}, $i_{k_{p_0+1}}=v_{q_{y-1}}-v_{q_y}$, and then, by using Equations \ref{equiv-p-1} and \ref{i-k-p-0+1}, $$\sum_{j=1}^{p_0+1}i_{k_j}=\sum_{j=1}^{p_0}i_{k_j}+i_{k_{p_0+1}}=-(v_{q_{y-1}}+1)+(v_{q_{y-1}}-v_{q_y})=-(v_{q_y}+1)\leq 0.$$ Hence, $-k_{p_0+1}\leq \sum_{j=1}^{p_0+1}i_{k_j}\leq 0$ in this case. Now, assume $k_{{p_0}+1}=r_{q_{y}+1}$. Then, by Equation \ref{tau-tau+1-reduced}, either $$r_{q_y}+1 < r_{{q_y}+1}\quad \text{or}\quad r_{q_y}+1 = r_{{q_{y}}+1},$$ for some $1\leq y\leq z'$.
In case $r_{q_y}+1 = r_{{q_{y}}+1}$, we have already proved $-k_{p_0+1}\leq \sum_{j=1}^{p_0+1}i_{k_j}\leq 0.$ Hence, assume $$r_{q_y}+1 < r_{{q_y}+1}.$$ Then, $i_{k_{p_0+1}}=-(v_{q_{y}}+1)$, and then, by using Equation \ref{tau-tau+1-reduced}, $$\sum_{j=1}^{p_0+1}i_{k_j}= -(v_1+1)+(v_1+1)+\sum_{j=1}^{y-1}(-(v_{q_j}+1)+(v_{q_j}+1))-(v_{q_{y}}+1)= -(v_{q_{y}}+1)=i_{k_{p_0+1}}.$$ Therefore, $$-k_{p_0+1}\leq i_{k_{p_0+1}}=\sum_{j=1}^{p_0+1}i_{k_j}=-(v_{q_{y}}+1)<0.$$ Hence, we get \begin{equation}\label{geq-k-p} -k_{p} \leqslant \sum_{j=1}^{p} i_{k_{j}} \leqslant 0 \quad \text{for} \quad 1 \leqslant p \leqslant m-1. \end{equation} \item By Equation \ref{geq-k-p}, $-k_{p-1} \leqslant \sum_{j=1}^{p-1} i_{k_{j}} \leqslant 0$, and by Equation \ref{equiv0}, $\sum_{j=1}^{m} i_{k_{j}}=0$.\\ Therefore, $$0 \leqslant \sum_{j=p}^{m} i_{k_{j}}=\sum_{j=1}^{m} i_{k_{j}}- \sum_{j=1}^{p-1} i_{k_{j}}\leqslant k_{p-1}.$$ \end{itemize} Hence, part 3 of the theorem holds. \end{proof} \begin{example} Let $$\pi = \tau_{9}^{-8} \cdot \tau_{10} \cdot \tau_{11}^{-3} \cdot \tau_{13}^{10} = (\tau_{9}^{-8} \cdot \tau_{10}^{8}) \cdot (\tau_{10}^{-7} \cdot \tau_{11}^{7}) \cdot (\tau_{11}^{-10} \cdot \tau_{13}^{10}).$$ The permutation presentation of $\pi$ is $$ [9; ~-1; ~-2; ~-3; ~-4; ~-5; ~-6; ~-7; ~-8; ~10; ~11; ~12; ~13] $$ $$\cdot [-10; ~1; ~2; ~3; ~4; ~5; ~6; ~7; ~8; ~ 9; ~11; ~12; ~13] $$ $$\cdot [4; ~5; ~6; ~7; ~8; ~9; ~10; ~11; ~-1; ~-2; ~-3; ~12; ~13] $$ $$\cdot [-4; ~-5; ~-6; ~-7; ~-8; ~-9; ~-10; ~-11; ~-12; ~-13; ~1; ~2; ~3] $$ $$= [1; ~ 5; ~7; ~8; ~9; ~10; ~11; ~12; ~13; ~4; ~6; ~2; ~3]. $$ The following holds: \begin{itemize} \item $k_1=9, \quad k_2=10, \quad k_3=11, \quad k_4=13$; \item $i_{k_{1}} =-8, \quad i_{k_2}=1, \quad i_{k_3}=-3, \quad i_{k_4}=10;$ \item $i_{k_4}=10>0, \quad i_{k_3}+i_{k_4}=7\geq 0,\quad i_{k_2}+i_{k_3}+i_{k_4}=8\geq 0;$ \item $i_{k_1}+i_{k_2}+i_{k_3}+i_{k_4}=0$. \end{itemize} \end{example} \begin{lemma}\label{tau-t} Let $\pi$ be an element of $\dot{S}_n$.
By Theorem \ref{main-ogs-sn}, the generalized standard $OGS$ presentation of $\pi$ as an element of $B_n$ has the following form $$\pi=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m}}^{i_{k_{m}}}$$ such that \begin{itemize} \item $ -k_{j} \leqslant i_{k_{j}} \leqslant k_{j}-1$, for every $1 \leqslant j \leqslant m$. \item $\sum_{j=1}^{m} i_{k_{j}}=0$. \item $0\leqslant \sum_{j=r}^{m}i_{k_{j}}\leqslant k_{r-1}$, for every $2\leq r\leq m$. \end{itemize} Then, by considering $\pi$ as an element of $S_n$, the standard $OGS$ presentation of $\pi$ as it is presented in Theorem \ref{canonical-sn}, has the following form: $$t_{k_{1}}^{i_{k_{1}}^{\prime}} \cdot t_{k_{2}}^{i_{k_{2}}^{\prime}} \cdots t_{k_{m}}^{i_{k_{m}}^{\prime}}$$where, \begin{itemize} \item $i_{k_{j}}^{\prime}=i_{k_{j}} \quad \text{if} \quad 0\leqslant i_{k_{j}} <k_{j}.$ \item $i_{k_{j}}^{\prime} = k_{j}+i_{k_{j}} \quad \text{if} \quad -k_{j}\leqslant i_{k_{j}} < 0.$ \end{itemize} \end{lemma} \begin{proof} Consider the generalized standard $OGS$ presentation of the following element $\pi\in \dot{S}_n$, as it is described in Theorem \ref{main-ogs-sn}: $$\pi=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m}}^{i_{k_{m}}}$$ such that \begin{itemize} \item $ -k_{j} \leqslant i_{k_{j}} \leqslant k_{j}-1$, for every $1 \leqslant j \leqslant m$. \item $\sum_{j=1}^{m} i_{k_{j}}=0$. \item $0\leqslant \sum_{j=r}^{m}i_{k_{j}}\leqslant k_{r-1}$, for every $2\leq r\leq m$. 
\end{itemize} Then $$\pi= (\tau_{k_{1}}^{-(\sum_{j=2}^{m} i_{k_j})} \cdot \tau_{k_{2}}^{(\sum_{j=2}^{m} i_{k_j})}) \cdots (\tau_{k_{m-2}}^{-(i_{k_{m-1}}+i_{k_{m}})}\cdot \tau_{k_{m-1}}^{(i_{k_{m-1}}+i_{k_{m}})})\cdot (\tau_{k_{m-1}}^{-i_{k_{m}}}\cdot \tau_{k_{m}}^{i_{k_{m}}}).$$ Then, we conclude the following identity, where the left-hand side of Equation \ref{tau_t} is obtained by Lemma \ref{tau-i+i}, and the right-hand side by the standard $OGS$ presentation of $\pi$ as it is described in Theorem \ref{canonical-sn}, considering $\pi$ as an element of $S_n$: \begin{equation}\label{tau_t} \tau_{k_{1}}^{-j} \cdot \tau_{k_{2}}^{j} = t_{k_{1}}^{k_{1}-j} \cdot t_{k_{2}}^{j}. \end{equation} Hence, $$\pi=(t_{k_{1}}^{k_{1}-(\sum_{j=2}^{m} i_{k_j})} \cdot t_{k_{2}}^{\sum_{j=2}^{m} i_{k_j}})\cdots (t_{k_{m-2}}^{k_{m-2}-(\sum_{j=m-1}^{m} i_{k_j})}\cdot t_{k_{m-1}}^{\sum_{j=m-1}^{m} i_{k_j}})\cdot (t_{k_{m-1}}^{k_{m-1}-i_{k_{m}}}\cdot t_{k_{m}}^{i_{k_{m}}})=$$ $$=t_{k_{1}}^{k_{1}-(\sum_{j=2}^{m} i_{k_j})} \cdot t_{k_2}^{k_2+i_{k_2}}\cdots t_{k_m}^{k_m+i_{k_m}}.$$ Since $t_{k_j}^{k_j}=1$ and $k_1\geq |i_{k_1}|=\sum_{j=2}^{m}i_{k_j}$, we get the following: $$t_{k_{1}}^{k_{1}-(\sum_{j=2}^{m} i_{k_j})} \cdot t_{k_2}^{k_2+i_{k_2}}\cdots t_{k_m}^{k_m+i_{k_m}}=t_{k_1}^{i'_{k_1}}\cdots t_{k_m}^{i'_{k_m}}.$$ \end{proof} Theorem \ref{standard-bn-sn} shows the characterization of the standard $OGS$ elementary elements of $S_n$ (defined in Definition \ref{elementary}) by the generalized standard $OGS$ of $\dot{S}_n$ according to Theorem \ref{main-ogs-sn}. \begin{theorem}\label{standard-bn-sn} Let $\pi$ be an element of $\dot{S}_{n}$ in $B_{n}$.
Then, by considering $\pi$ as an element of $S_n$, $\pi$ is a standard $OGS$ elementary element as it is defined in Definition \ref{elementary} if and only if the generalized standard $OGS$ presentation of $\pi$, as it is described in Theorem \ref{ogs-bn}, has the following form: $$\pi=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m}}^{i_{k_{m}}}, $$ where $$-k_1\leqslant i_{k_{1}} < 0, \quad 0<i_{k_{j}} <k_j \quad \text{for} \quad j\geq 2, \quad \sum_{j=1}^{m} i_{k_{j}} =0.$$ \end{theorem} \begin{proof} Consider $$\pi=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}}\cdots \tau_{k_{m}}^{i_{k_{m}}} $$ and assume $$-k_1\leqslant i_{k_{1}} < 0, \quad 0< i_{k_{j}} <k_j \quad \text{for} \quad j\geq 2, \quad \sum_{j=1}^{m} i_{k_{j}} =0.$$ First, we prove that $\pi\in \dot{S}_n$, and then, by considering $\pi$ as an element of $S_n$, we prove that $\pi$ is a standard $OGS$ elementary element (as it is defined in Definition \ref{elementary}). \\ Consider $\sum_{j=r}^{m} i_{k_{j}}$, for $2\leq r\leq m$. Since $i_{k_r}>0$ for every $r\geq 2$, we have $\sum_{j=r}^{m} i_{k_{j}}\leq\sum_{j=2}^{m} i_{k_{j}}$. Since $\sum_{j=1}^{m} i_{k_{j}}=0$, we have $\sum_{j=2}^{m} i_{k_{j}}=-i_{k_1}$. Therefore, $-k_1\leq i_{k_1}<0$ implies that $0<\sum_{j=2}^{m} i_{k_{j}}\leq k_1$. Hence, for every $2\leq r\leq m$: $$0<\sum_{j=r}^{m} i_{k_{j}}\leq \sum_{j=2}^{m} i_{k_{j}}\leq k_1<k_{r}.$$ Hence, by Theorem \ref{main-ogs-sn}, $\pi\in \dot{S}_n$. Now, consider the presentation of $\pi$ as an element of $S_n$ according to the standard $OGS$ presentation, by using Lemma \ref{tau-t}: $$\pi=t_{k_1}^{i'_{k_1}}\cdot t_{k_2}^{i'_{k_2}}\cdots t_{k_m}^{i'_{k_m}},$$ such that the following holds: \begin{itemize} \item $i'_{k_r}=i_{k_r}$, since $i_{k_r}>0$ for $2\leq r\leq m$. \item $i'_{k_1}=k_1+i_{k_1}$, since $i_{k_1}<0$. \item $\sum_{j=1}^m i'_{k_j}=k_1+\sum_{j=1}^m i_{k_j}=k_1$, since $\sum_{j=1}^m i_{k_j}=0$.
\end{itemize} Hence, concerning the value of $i_{k_1}$, $\pi$ has one of the two following presentations as an element of $S_n$: \begin{itemize} \item In the case $-k_1<i_{k_1}<0$, we have $0<i'_{k_1}=k_1+i_{k_1}<k_1$, and then the standard $OGS$ presentation (as it is presented in Theorem \ref{canonical-sn}) of $\pi$ as an element of $S_n$ is as follows: $$\pi=t_{k_1}^{i'_{k_1}}\cdot t_{k_2}^{i'_{k_2}}\cdots t_{k_m}^{i'_{k_m}}, $$ such that $1\leq i'_{k_j}<k_j$ for every $1\leq j\leq m$, and $\sum_{j=1}^m i'_{k_j}=k_1$. Hence, by Definition \ref{elementary}, the element $t_{k_1}^{k_1+i_{k_1}}\cdot t_{k_2}^{i_{k_2}}\cdots t_{k_m}^{i_{k_m}}$ is a standard $OGS$ elementary element; \item In the case $i_{k_1}=-k_1$, we have $i'_{k_1}=k_1+i_{k_1}=0$, and then the standard $OGS$ presentation (as it is presented in Theorem \ref{canonical-sn}) of $\pi$ as an element of $S_n$ is as follows: $$\pi=t_{k_1}^{k_1+i_{k_1}}\cdot t_{k_2}^{i_{k_2}}\cdots t_{k_m}^{i_{k_m}}=t_{k_2}^{i_{k_2}}\cdots t_{k_m}^{i_{k_m}},$$ such that $\sum_{j=2}^m i_{k_j}=k_1+i_{k_1}+\sum_{j=2}^m i_{k_j}=k_1<k_2$. Hence, by Definition \ref{elementary}, the element $$\pi=t_{k_2}^{i_{k_2}}\cdots t_{k_m}^{i_{k_m}}$$ is a standard $OGS$ elementary element. \end{itemize} Now, we turn to the proof of the second direction of the theorem.\\ Assume $\pi\in \dot{S}_n$, and by considering $\pi$ as an element of $S_n$, $\pi$ is a standard $OGS$ elementary element. Then, by Definition \ref{elementary}, $$\pi=t_{k_1}^{i_{k_1}}\cdot t_{k_2}^{i_{k_2}}\cdots t_{k_m}^{i_{k_m}},$$ where $1\leq i_{k_j}<k_j$ and $\sum_{j=1}^m i_{k_j}\leq k_1$. Then, by \cite{S1}, Theorem 28, $$norm(\pi)=\prod_{u=\rho_1}^{k_1-1}\prod_{r=0}^{\rho_1-1}s_{u-r}\cdot \prod_{u=k_1}^{k_2-1}\prod_{r=0}^{\rho_2-1}s_{u-r}\cdot \prod_{u=k_2}^{k_3-1}\prod_{r=0}^{\rho_3-1}s_{u-r}\cdots \prod_{u=k_{m-1}}^{k_m-1}\prod_{r=0}^{\rho_m-1}s_{u-r},$$ where $\rho_{j}=\sum_{x=j}^{m}i_{k_{x}}$ for $1\leq j\leq m$.
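The word prescribed by this formula can be evaluated mechanically on a small instance. The following Python sketch is only a hedged sanity check, under assumptions inferred from the examples in this paper rather than quoted from \cite{S1}: $s_i$ is modeled as the adjacent transposition of $i$ and $i+1$, $t_k$ as the rotation $1\mapsto k$, $i\mapsto i-1$ for $2\leq i\leq k$, and words act from left to right.

```python
# Hedged sanity check of the normal-form word; the models of s_i and t_k and the
# left-to-right action are assumptions inferred from this paper's examples.

def transposition(i, n):
    # s_i: swaps i and i+1 in one-line notation, identity elsewhere
    p = list(range(1, n + 1))
    p[i - 1], p[i] = p[i], p[i - 1]
    return p

def t(k, n):
    # assumed model of t_k: 1 -> k, i -> i-1 for 2 <= i <= k, fixes k+1..n
    return [k] + list(range(1, k)) + list(range(k + 1, n + 1))

def compose(p, q):
    # left-to-right action: (p * q)(i) = q(p(i))
    return [q[p[i] - 1] for i in range(len(p))]

def word(perms, n):
    # evaluate a product of permutations, leftmost factor acting first
    out = list(range(1, n + 1))
    for p in perms:
        out = compose(out, p)
    return out

# Element pi = t_4 * t_5^2 in S_5: k = (4, 5), exponents (1, 2),
# suffix sums rho_1 = 3, rho_2 = 2, so maj(pi) = 3 < k_1 = 4.
n = 5
pi = word([t(4, n), t(5, n), t(5, n)], n)

# Letters of norm(pi):
# prod_{u=rho_1}^{k_1-1} prod_{r=0}^{rho_1-1} s_{u-r}
#   * prod_{u=k_1}^{k_2-1} prod_{r=0}^{rho_2-1} s_{u-r}
letters = []
for u in range(3, 4):           # u = rho_1, ..., k_1 - 1
    for r in range(3):          # r = 0, ..., rho_1 - 1
        letters.append(transposition(u - r, n))
for u in range(4, 5):           # u = k_1, ..., k_2 - 1
    for r in range(2):          # r = 0, ..., rho_2 - 1
        letters.append(transposition(u - r, n))

assert word(letters, n) == pi   # the word s_3 s_2 s_1 s_4 s_3 reproduces pi
```

Here the formula yields the five-letter word $s_3 s_2 s_1\cdot s_4 s_3$, which evaluates to the same permutation as $t_4\cdot t_5^2$.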
Now, consider $\pi$ presented in terms of the generators $\tau_{k_j}$ (as they are defined in Definition \ref{tau}), for $1\leq j\leq m$. By Lemma \ref{tau-i+i}, $$\pi=(\tau_{maj(\pi)}^{-\rho_1}\cdot \tau_{k_1}^{\rho_1})\cdot (\tau_{k_1}^{-\rho_2}\cdot \tau_{k_2}^{\rho_2})\cdot (\tau_{k_2}^{-\rho_3}\cdot \tau_{k_3}^{\rho_3})\cdots (\tau_{k_{m-1}}^{-\rho_m}\cdot \tau_{k_m}^{\rho_m})$$ $$=\tau_{maj(\pi)}^{-\rho_1}\cdot \tau_{k_1}^{\rho_1-\rho_2}\cdot\tau_{k_2}^{\rho_2-\rho_3}\cdots \tau_{k_{m-1}}^{\rho_{m-1}-\rho_m}\cdot \tau_{k_m}^{\rho_m}.$$ Since $\rho_j=\sum_{x=j}^{m}i_{k_x}$, the following holds: \begin{itemize} \item $\rho_1=maj(\pi)$; \item $\rho_j-\rho_{j+1}=i_{k_j}$ for $1\leq j\leq m-1$; \item $\rho_m=i_{k_m}$. \end{itemize} Now, notice that $\pi$ is a standard $OGS$ elementary element; therefore, by Definition \ref{elementary}, $maj(\pi)\leq k_1$, i.e., either $maj(\pi)<k_1$ or $maj(\pi)=k_1$. Hence, in case $maj(\pi)<k_1$, the following holds: $$\pi= \tau_{maj(\pi)}^{-\rho_1}\cdot \tau_{k_1}^{\rho_1-\rho_2}\cdot\tau_{k_2}^{\rho_2-\rho_3}\cdots \tau_{k_{m-1}}^{\rho_{m-1}-\rho_m}\cdot \tau_{k_m}^{\rho_m}$$ $$=\tau_{maj(\pi)}^{-maj(\pi)}\cdot \tau_{k_1}^{i_{k_1}}\cdot\tau_{k_2}^{i_{k_2}}\cdots \tau_{k_{m-1}}^{i_{k_{m-1}}}\cdot \tau_{k_m}^{i_{k_m}},$$ where $0<i_{k_j}<k_j$ for $1\leq j\leq m$, and $-maj(\pi)<0$. Hence, the theorem holds in case $maj(\pi)<k_1$.
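The identity just obtained for the case $maj(\pi)<k_1$ can be illustrated numerically. Below is a minimal Python sketch, under the assumption (inferred from the permutation presentations in this paper's examples, not stated here) that $\tau_k$ acts as the signed rotation $1\mapsto -k$, $i\mapsto i-1$, that $t_k$ is the corresponding unsigned rotation of $S_n$, and that products act from left to right. For $\pi=t_4\cdot t_5^2\in S_5$ we have $maj(\pi)=3<k_1=4$, so the identity predicts $\pi=\tau_3^{-3}\cdot\tau_4\cdot\tau_5^2$ in $B_5$.

```python
# Hedged numeric illustration of the identity for the case maj(pi) < k_1.
# The models of tau_k, t_k and the left-to-right action are assumptions
# inferred from the permutation presentations in this paper's examples.

def compose(p, q):
    # (p * q)(i) = q(p(i)), signed one-line notation, with q(-i) = -q(i)
    def act(perm, i):
        v = perm[abs(i) - 1]
        return v if i > 0 else -v
    return [act(q, act(p, i)) for i in range(1, len(p) + 1)]

def inverse(p):
    # inverse of a signed permutation: if p(i) = j then p^{-1}(|j|) = sign(j)*i
    inv = [0] * len(p)
    for i, j in enumerate(p, 1):
        inv[abs(j) - 1] = i if j > 0 else -i
    return inv

def power(p, e):
    q = p if e >= 0 else inverse(p)
    out = list(range(1, len(p) + 1))
    for _ in range(abs(e)):
        out = compose(out, q)
    return out

def tau(k, n):
    # signed rotation: 1 -> -k, i -> i-1 for 2 <= i <= k, fixes k+1..n
    return [-k] + list(range(1, k)) + list(range(k + 1, n + 1))

def t(k, n):
    # unsigned rotation of S_n: 1 -> k, i -> i-1 for 2 <= i <= k
    return [k] + list(range(1, k)) + list(range(k + 1, n + 1))

# pi = t_4 * t_5^2 in S_5: i_{k_1} = 1, i_{k_2} = 2, maj(pi) = 3 < k_1 = 4,
# so the identity predicts pi = tau_3^{-3} * tau_4 * tau_5^2 in B_5.
n = 5
lhs = compose(compose(power(tau(3, n), -3), tau(4, n)), power(tau(5, n), 2))
rhs = compose(t(4, n), power(t(5, n), 2))
assert lhs == rhs == [2, 4, 5, 1, 3]
```

Both sides produce the same (all-positive) one-line presentation, as the identity requires.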
Now, assume $maj(\pi)=k_1$. Then $$\rho_2=\sum_{j=2}^m i_{k_j}=\sum_{j=1}^m i_{k_j}-i_{k_1}=maj(\pi)-i_{k_1}=k_1-i_{k_1},$$ and then $$\tau_{maj(\pi)}^{-\rho_1}\cdot \tau_{k_1}^{\rho_1-\rho_2}=\tau_{k_1}^{-\rho_2}=\tau_{k_1}^{i_{k_1}-k_1}.$$ Hence, the following holds: $$\pi= \tau_{maj(\pi)}^{-\rho_1}\cdot \tau_{k_1}^{\rho_1-\rho_2}\cdot\tau_{k_2}^{\rho_2-\rho_3}\cdots \tau_{k_{m-1}}^{\rho_{m-1}-\rho_m}\cdot \tau_{k_m}^{\rho_m}$$ $$=\tau_{k_1}^{i_{k_1}-k_1}\cdot \tau_{k_2}^{i_{k_2}}\cdots \tau_{k_{m-1}}^{i_{k_{m-1}}}\cdot \tau_{k_m}^{i_{k_m}},$$ where $0<i_{k_j}<k_j$ for $2\leq j\leq m$, and $i_{k_1}-k_1<0$. Hence, the theorem holds in case $maj(\pi)=k_1$ as well. \end{proof} \begin{example} Let $$\pi = \tau_{6}^{-5} \cdot \tau_{8}^{2} \cdot \tau_{9}^{2} \cdot \tau_{10}.$$ Notice, $$i_{k_1}=-5<0, \quad i_{k_2}=2>0, \quad i_{k_3}=2>0,\quad i_{k_4}=1>0,$$ $$i_{k_1}+i_{k_2}+i_{k_3}+i_{k_4}=-5+2+2+1=0.$$ Hence, the generalized standard $OGS$ presentation of $\pi$ satisfies the conditions of Theorem \ref{standard-bn-sn}. By Lemma \ref{tau-t}, the presentation of $\pi$ in the standard $OGS$ presentation, by considering $\pi$ as an element of $S_n$, is as follows: $$\pi=t_{6}^{6-5}\cdot t_8^2\cdot t_9^2\cdot t_{10}=t_6\cdot t_8^2\cdot t_9^2\cdot t_{10}. $$ Notice, $maj(\pi)=1+2+2+1=6=k_1$. Hence, by Definition \ref{elementary}, $\pi$ is a standard $OGS$ elementary element by considering $\pi$ as an element of $S_n$.\\ Now, consider the permutation presentation of $\pi$: $$\pi= [6; ~-1; ~-2; ~-3; ~-4; ~-5; ~7; ~8; ~9; ~10]\cdot [-7; ~-8; ~1; ~2; ~3; ~4; ~5; ~6; ~9; ~10] $$ $$\cdot [-8; ~-9; ~1; ~2; ~3; ~4; ~5; ~6; ~7; ~10] \cdot [-10; ~1; ~2; ~3; ~4; ~5; ~6; ~7; ~8; ~9] $$ $$= [1; ~4; ~5; ~7; ~8; ~10; ~2; ~3; ~6; ~9]. $$ Notice that the following holds: $$\pi(1)<\pi(2)<\pi(3)<\pi(4)<\pi(5)<\pi(6), \quad \pi(6)>\pi(7),$$ $$\pi(7)<\pi(8)<\pi(9)<\pi(10).$$ Therefore, $des(\pi)=\{6\}$.
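The product computed in this example can also be replayed mechanically. A hedged Python sketch follows (signed one-line notation with $\tau_k\colon 1\mapsto -k$, $i\mapsto i-1$, and left-to-right composition, both inferred from the displayed presentations; a sanity check only, not part of the formal development):

```python
# Replays the signed-permutation product of this example; the model of tau_k
# and the left-to-right action are assumptions inferred from the displayed
# one-line presentations.

def compose(p, q):
    # (p * q)(i) = q(p(i)), signed one-line notation, with q(-i) = -q(i)
    def act(perm, i):
        v = perm[abs(i) - 1]
        return v if i > 0 else -v
    return [act(q, act(p, i)) for i in range(1, len(p) + 1)]

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p, 1):
        inv[abs(j) - 1] = i if j > 0 else -i
    return inv

def power(p, e):
    q = p if e >= 0 else inverse(p)
    out = list(range(1, len(p) + 1))
    for _ in range(abs(e)):
        out = compose(out, q)
    return out

def tau(k, n):
    # signed rotation: 1 -> -k, i -> i-1 for 2 <= i <= k, fixes k+1..n
    return [-k] + list(range(1, k)) + list(range(k + 1, n + 1))

n = 10
factors = [power(tau(6, n), -5), power(tau(8, n), 2),
           power(tau(9, n), 2), tau(10, n)]
# the four one-line presentations displayed above
assert factors[0] == [6, -1, -2, -3, -4, -5, 7, 8, 9, 10]
assert factors[1] == [-7, -8, 1, 2, 3, 4, 5, 6, 9, 10]
assert factors[2] == [-8, -9, 1, 2, 3, 4, 5, 6, 7, 10]
assert factors[3] == [-10, 1, 2, 3, 4, 5, 6, 7, 8, 9]

pi = list(range(1, n + 1))
for f in factors:
    pi = compose(pi, f)
assert pi == [1, 4, 5, 7, 8, 10, 2, 3, 6, 9]
# single descent at position 6 (pi(6) = 10 > 2 = pi(7))
assert [i for i in range(1, n) if pi[i - 1] > pi[i]] == [6]
```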
By \cite{S1}, Theorem 28, $\pi\in S_n$ is a standard $OGS$ elementary element if and only if $des(\pi)$ contains a single element. Hence, $\pi$ is a standard $OGS$ elementary element by considering $\pi$ as an element of $S_n$. \end{example} Theorem \ref{factorization-} shows a connection between the generalized standard $OGS$ of an element $\pi\in \dot{S}_n$, as it is presented in Theorem \ref{main-ogs-sn}, and the standard $OGS$ factorization of $\pi$, as it is defined in Definition \ref{canonical-factorization-def}, by considering $\pi$ as an element of $S_n$. \begin{theorem}\label{factorization-} Let $\pi$ be an element of $\dot{S}_n$, whose presentation by the generalized standard $OGS$, as it is presented in Theorem \ref{main-ogs-sn}, is as follows: $$\pi=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m}}^{i_{k_{m}}},$$ such that \begin{itemize} \item $ -k_{j} \leqslant i_{k_{j}} \leqslant k_{j}-1$, for every $1 \leqslant j \leqslant m$. \item $\sum_{j=1}^{m} i_{k_{j}}=0$. \item $0\leqslant \sum_{j=r}^{m}i_{k_{j}}\leqslant k_{r-1}$, for every $2\leq r\leq m$. \end{itemize} Then, consider the standard $OGS$ elementary factorization of $\pi$, as it is defined in Definition \ref{canonical-factorization-def}, where we consider $\pi$ as an element in $S_n$: $$\pi= \prod_{v=1}^{z(\pi)} \pi^{(v)}.$$ The value of $z(\pi)$ satisfies the following property: $$z(\pi) =| \{1 \leqslant j \leqslant m \quad |\quad i_{k_{j}} < 0\}|.$$ \end{theorem} \begin{proof} Let $\pi\in \dot{S}_n$ be presented by the generalized standard $OGS$ presentation as follows: $$\pi=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}}\cdots \tau_{k_{m}}^{i_{k_{m}}}.$$ Denote by $q$ the number of $j$ such that $1\leq j\leq m$ and $i_{k_j}<0$, and by $r_1, r_2,\ldots, r_q$ all the indices such that $r_1<r_2<\cdots<r_q$ and $i_{k_{r_j}}<0$ for $1\leq j\leq q$. Since $\pi\in \dot{S}_n$, by Theorem \ref{main-ogs-sn}, $i_{k_1}<0$. Hence, $r_1=1$.
Now, for every $1\leq \alpha\leq q$, we define $i^{\prime}_{\alpha}$ by the following recursive algorithm: \begin{itemize} \item Define $i^{\prime}_{q}$ to be $$i^{\prime}_{q}=i_{k_m};$$ \item For $\alpha=q-1$, define $i^{\prime}_{q-1}$ to be $$i^{\prime}_{q-1}=i_{k_{r_q}}+\sum_{j=r_{q}+1}^{m-1} i_{k_{j}}+i^{\prime}_{q};$$ \item Then, for every $1\leq \alpha\leq q-2$, define $i^{\prime}_{\alpha}$ to be $$i^{\prime}_{\alpha}=i_{k_{r_{\alpha+1}}}+\sum_{j=r_{\alpha+1}+1}^{r_{\alpha+2}-1} i_{k_{j}}+i^{\prime}_{\alpha+1}.$$ \end{itemize} Then, consider the following presentation of $\pi$: $$\pi = \tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}}\cdots \tau_{k_{m}}^{i_{k_{m}}}$$ $$ = (\tau_{k_{r_1}}^{i_{k_{r_1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}}\cdots \tau_{k_{r_{2}-1}}^{i_{k_{r_{2}-1}}}\cdot \tau_{k_{r_{2}}}^{i^{\prime}_{1}}) \cdot (\tau_{k_{r_{2}}}^{-\sum_{j=r_{2}+1}^{r_3-1} i_{k_{j}}-i^{\prime}_{2}} \cdot \tau_{k_{r_{2}+1}}^{i_{k_{r_{2}+1}}} \cdots \tau_{k_{r_{3}-1}}^{i_{k_{r_{3}-1}}}\cdot \tau_{k_{r_{3}}}^{i^{\prime}_{2}})\cdots $$ $$ \cdots (\tau_{k_{r_{q}}}^{-\sum_{j=r_{q}+1}^{m-1} i_{k_{j}}-i^{\prime}_{q}} \cdot \tau_{k_{r_{q}+1}}^{i_{k_{r_{q}+1}}} \cdots \tau_{k_{m-1}}^{i_{k_{m-1}}}\cdot \tau_{k_{m}}^{i^{\prime}_{q}}).$$ Since $\pi\in \dot{S}_n$, by Theorem \ref{main-ogs-sn}, $\sum_{j=1}^m i_{k_j}=0$.
Therefore, $$i_{k_1}=i_{k_{r_1}}=-\sum_{j=2}^{r_2-1} i_{k_{j}}-i^{\prime}_{1}.$$ Hence, $\pi$ can be presented as follows: $$\pi = (\tau_{k_{r_1}}^{-\sum_{j=2}^{r_2-1} i_{k_{j}}-i^{\prime}_{1}} \cdot \tau_{k_{2}}^{i_{k_{2}}}\cdots \tau_{k_{r_{2}-1}}^{i_{k_{r_{2}-1}}}\cdot\tau_{k_{r_{2}}}^{i^{\prime}_{1}}) \cdot (\tau_{k_{r_{2}}}^{-\sum_{j=r_{2}+1}^{r_3-1} i_{k_{j}}-i^{\prime}_{2}} \cdot \tau_{k_{r_{2}+1}}^{i_{k_{r_{2}+1}}} \cdots \tau_{k_{r_{3}-1}}^{i_{k_{r_{3}-1}}}\cdot \tau_{k_{r_{3}}}^{i^{\prime}_{2}})\cdots$$ $$\cdots (\tau_{k_{r_{q}}}^{-\sum_{j=r_{q}+1}^{m-1} i_{k_{j}}-i^{\prime}_{q}} \cdot \tau_{k_{r_{q}+1}}^{i_{k_{r_{q}+1}}} \cdots \tau_{k_{m-1}}^{i_{k_{m-1}}} \tau_{k_{m}}^{i^{\prime}_{q}}).$$ Now, notice that the following properties hold: \begin{itemize} \item $i_{k_j}>0$ for every $1\leq j\leq m$ such that $j\neq r_p$ for $1\leq p\leq q$. \item Notice, $i^{\prime}_{q}=i_{k_m}$. Hence, by Theorem \ref{main-ogs-sn}, $i^{\prime}_{q}>0$. \item Since $\sum_{j=r_{q}+1}^{m-1} i_{k_{j}}+i^{\prime}_{q}= \sum_{j=r_{q}+1}^{m} i_{k_{j}}$, by Theorem \ref{main-ogs-sn}, $\sum_{j=r_{q}+1}^{m-1} i_{k_{j}}+i^{\prime}_{q}\leq k_{r_q}$. Hence, $-k_{r_q}\leq -\sum_{j=r_{q}+1}^{m-1} i_{k_{j}}-i^{\prime}_{q}<0 $. \item Notice, $i^{\prime}_{q-1}=i_{k_{r_q}}+\sum_{j=r_{q}+1}^{m-1} i_{k_{j}}+i^{\prime}_{q}=\sum_{j=r_q}^m i_{k_j}$. By Theorem \ref{main-ogs-sn}, $\sum_{j=r_q}^{m} i_{k_j}\geq 0$. Hence, $i^{\prime}_{q-1}\geq 0$. \item Now, by considering $\sum_{j=r_{\alpha+1}+1}^{r_{\alpha+2}-1} i_{k_{j}}+i^{\prime}_{\alpha+1}$ and $i^{\prime}_{\alpha}$ for $1\leq \alpha\leq q-2$, the following properties hold (which can be proved by the same argument as for $\alpha=q-1$): \begin{itemize} \item $-k_{r_{\alpha+1}}\leq -\sum_{j=r_{\alpha+1}+1}^{r_{\alpha+2}-1} i_{k_{j}}-i^{\prime}_{\alpha+1}<0$; \item $i^{\prime}_{\alpha}\geq 0$.
\end{itemize} \end{itemize} Hence, by Theorem \ref{standard-bn-sn}, every component $\pi_{\alpha}$ of $\pi$ (where $1\leq \alpha\leq q$) of the following form $$\pi_{\alpha}=\tau_{k_{r_{\alpha}}}^{-\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1} i_{k_{j}}-i^{\prime}_{\alpha}} \cdot \tau_{k_{r_{\alpha}+1}}^{i_{k_{r_{\alpha}+1}}} \cdots \tau_{k_{r_{\alpha+1}-1}}^ {i_{k_{r_{\alpha+1}-1}}}\cdot \tau_{k_{r_{\alpha+1}}}^{i^{\prime}_{\alpha}}$$ is a standard $OGS$ elementary element by considering $\pi_{\alpha}$ as an element of $S_n$.\\ Notice the following observations: \begin{itemize} \item Since $i^{\prime}_{\alpha}\geq 0$ for $1\leq \alpha\leq q-1$, we have that either $i^{\prime}_{\alpha}>0$ or $i^{\prime}_{\alpha}=0$ for $1\leq \alpha\leq q-1$. ( $i^{\prime}_{q}=i_{k_m}>0$ for every $\pi\in \dot{S}_n$). \item Since $\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}+i^{\prime}_{\alpha}\leq k_{r_{\alpha}}$, we have that either \\ $k_{r_{\alpha}}-\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}-i^{\prime}_{\alpha}>0$ or $k_{r_{\alpha}}-\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}-i^{\prime}_{\alpha}=0$ for $1\leq \alpha\leq q$. 
\end{itemize} Then, by Lemma \ref{tau-t}, the presentation of the component $\pi_{\alpha}$ of $\pi$ by the standard $OGS$ of $S_n$ is one of the following four forms: \begin{itemize} \item $$\pi_{\alpha}=t_{k_{r_{\alpha}}}^{k_{r_{\alpha}}-\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}-i^{\prime}_{\alpha}} \cdot t_{k_{r_{\alpha}+1}}^{i_{k_{r_{\alpha}+1}}} \cdots t_{k_{r_{\alpha+1}-1}}^{i_{k_{r_{\alpha+1}-1}}}\cdot t_{k_{r_{\alpha+1}}}^{i^{\prime}_{\alpha}},$$ in case $i^{\prime}_{\alpha}>0$ and $\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}+i^{\prime}_{\alpha}< k_{r_{\alpha}}$ (i.e., $k_{r_{\alpha}}-\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}-i^{\prime}_{\alpha}>0$); \item $$\pi_{\alpha}=t_{k_{r_{\alpha}}}^{k_{r_{\alpha}}-\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}-i^{\prime}_{\alpha}} \cdot t_{k_{r_{\alpha}+1}}^{i_{k_{r_{\alpha}+1}}} \cdots t_{k_{r_{\alpha+1}-1}}^{i_{k_{r_{\alpha+1}-1}}},$$ in case $i^{\prime}_{\alpha}=0$ and $\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}+i^{\prime}_{\alpha}< k_{r_{\alpha}}$ (i.e., $k_{r_{\alpha}}-\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}-i^{\prime}_{\alpha}>0$); \item $$\pi_{\alpha}=t_{k_{r_{\alpha}+1}}^{i_{k_{r_{\alpha}+1}}} \cdots t_{k_{r_{\alpha+1}-1}}^{i_{k_{r_{\alpha+1}-1}}}\cdot t_{k_{r_{\alpha+1}}}^{i^{\prime}_{\alpha}},$$ in case $i^{\prime}_{\alpha}>0$ and $\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}+i^{\prime}_{\alpha}= k_{r_{\alpha}}$ (i.e., $k_{r_{\alpha}}-\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}-i^{\prime}_{\alpha}=0$); \item $$\pi_{\alpha}=t_{k_{r_{\alpha}+1}}^{i_{k_{r_{\alpha}+1}}} \cdots t_{k_{r_{\alpha+1}-1}}^{i_{k_{r_{\alpha+1}-1}}},$$ in case $i^{\prime}_{\alpha}=0$ and $\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}+i^{\prime}_{\alpha}= k_{r_{\alpha}}$ (i.e., $k_{r_{\alpha}}-\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}-i^{\prime}_{\alpha}=0$). \end{itemize} Now, by considering the standard $OGS$ presentation of $\pi_{\alpha}$ for $1\leq \alpha\leq q$ as an element in $S_n$,
the following holds: \begin{itemize} \item In all the possible cases of $\pi_{\alpha}$, $$maj(\pi_{\alpha})=k_{r_{\alpha}}-\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}-i^{\prime}_{\alpha}+\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1}i_{k_{j}}+i^{\prime}_{\alpha}=k_{r_{\alpha}}.$$ \item The smallest $p$ such that a non-zero power of $t_p$ is a subword of the standard $OGS$ presentation of $\pi_{\alpha}$ is $p=k_{r_{\alpha}}=maj(\pi_{\alpha})$ or \\ $p=k_{r_{\alpha}+1}>maj(\pi_{\alpha})$. \item The greatest $p$ such that a non-zero power of $t_p$ is a subword of the standard $OGS$ presentation of $\pi_{\alpha-1}$ (for $\alpha\geq 2$) is $p=k_{r_{\alpha}}=maj(\pi_{\alpha})$ or $p=k_{r_{\alpha}-1}<maj(\pi_{\alpha})$. \end{itemize} Hence, by Definition \ref{canonical-factorization-def}, $$\pi=\pi_1\cdot \pi_2\cdots \pi_q$$ is a standard $OGS$ elementary factorization of $\pi$, such that every element $\pi_{\alpha}$ for $1\leq \alpha\leq q$ is the elementary factor $\pi^{(\alpha)}$ of $\pi$, where $z(\pi)=q$. \end{proof} \begin{example} Let $\pi=\tau_5^{-3}\cdot \tau_7^2\cdot \tau_8^{-4}\cdot \tau_9^4\cdot \tau_{11}^{-3}\cdot \tau_{12}^4$. \\ Then, by Theorem \ref{main-ogs-sn}, $\pi\in \dot{S}_{12}$. \\ Moreover, \begin{itemize} \item $i_{k_1}=-3<0, \quad i_{k_3}=-4<0, \quad i_{k_5}=-3<0$; \item $i_{k_2}=2>0, \quad i_{k_4}=4>0, \quad i_{k_6}=4>0$.
\end{itemize} Hence, by Theorem \ref{factorization-}, $z(\pi)=3$, and the standard $OGS$ elementary factorization of $\pi$ is as follows: $$\pi^{(1)}=\tau_5^{-3}\cdot \tau_7^2\cdot \tau_8,\quad \pi^{(2)}=\tau_8^{-5}\cdot \tau_9^4\cdot \tau_{11}, \quad \pi^{(3)}=\tau_{11}^{-4}\cdot \tau_{12}^4.$$ Indeed, $$\pi^{(1)}\cdot \pi^{(2)}\cdot \pi^{(3)}=(\tau_5^{-3}\cdot \tau_7^2\cdot \tau_8)\cdot (\tau_8^{-5}\cdot \tau_9^4\cdot \tau_{11})\cdot (\tau_{11}^{-4}\cdot \tau_{12}^4)$$ $$=\tau_5^{-3}\cdot \tau_7^2\cdot \tau_8^{-4}\cdot \tau_9^4\cdot \tau_{11}^{-3}\cdot \tau_{12}^4=\pi.$$ \end{example} \section{Generalization of the standard OGS factorization for $B_{n}$}\label{gen-stan-factor} In \cite{S1}, a standard $OGS$ elementary factorization is defined for the elements of the symmetric group $S_n$. This definition eases the calculation of the Coxeter length and the descent sets of the elements of $S_n$. In this section, we generalize the definition of the standard $OGS$ elementary factorization to the group $B_n$. \begin{theorem}\label{uvuv} Let $\pi \in B_{n}$. Then $\pi$ has a unique presentation of the following form: $u_{1} \cdot v_{1} \cdot u_{2} \cdot v_{2} \cdots u_{r}$ for some $r$, such that the following holds: \begin{itemize} \item $u_{j} \in \dot{S}_{n}$, for every $1\leqslant j\leqslant r$. \item $v_{j} = \tau_{p_{j}}^{p_{j}}$, where for $j>j^{\prime}$, $p_{j} > p_{j^{\prime}}$. \item For every $1 \leqslant j \leqslant r$, either $u_{j} = 1$ or $u_{j} = \tau_{p_{j_{1}}}^{i_{p_{j_{1}}}} \cdots \tau_{p_{j_{z_{j}}}}^{i_{p_{j_{z_{j}}}}}$ by the generalized standard $OGS$ presentation (as it is described in Theorem \ref{ogs-bn}), where $p_{j_{1}} \geq p_{j-1}$, for every $2 \leqslant j \leqslant r$, and $p_{j_{z_{j}}} \leqslant p_{j}$, for every $1 \leqslant j \leqslant r-1$.
\end{itemize} \end{theorem} \begin{proof} Let $\pi=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m}}^{i_{k_{m}}} $ be expressed by the generalized standard $OGS$ presentation.\\ - First, we find $u_{r}$ and $v_{r-1}$, where we consider two cases, depending on the possible values of $i_{k_{m}}$, as follows: \begin{enumerate} \item $1\leq i_{k_m}<k_m$. \item $-k_m\leq i_{k_m}<0$. \end{enumerate} - Consider case 1 (i.e., $1\leq i_{k_m}<k_m$). \begin{itemize} \item By Theorem \ref{main-ogs-sn}, $\tau_{k_{m}}^{i_{k_{m}}}$ can be considered as a subword of an element in $\dot{S}_n$. Hence, we consider $\tau_{k_{m}}^{i_{k_{m}}}$ as a subword of $u_{r}$ and $p_{r_{z_{r}}}=k_{m}$.\\ - Now, we consider ${k_{m-1}}$ and $i_{k_{m-1}}$ in case 1 ($1\leq i_{k_m}<k_m$), where we divide the proof into the following two subcases: \begin{itemize} \item Subcase 1: $k_{m-1} < i_{k_{m}}$. \item Subcase 2: $k_{m-1} \geq i_{k_{m}}$. \end{itemize} \item Subcase 1: If $k_{m-1} < i_{k_{m}}$, then put $u_r$ to be $u_{r}= \tau_{i_{k_{m}}}^{-i_{k_{m}}}\cdot \tau_{k_{m}}^{i_{k_{m}}}$, and put $v_{r-1}$ to be $v_{r-1} = \tau_{i_{k_{m}}}^{i_{k_{m}}} $.\\ Then, \\ $$\pi=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m-1}}^{i_{k_{m-1}}} \cdot \tau_{i_{k_{m}}}^{i_{k_{m}}}\cdot \tau_{i_{k_{m}}}^{-i_{k_{m}}}\cdot \tau_{k_{m}}^{i_{k_{m}}} =\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m-1}}^{i_{k_{m-1}}} \cdot v_{r-1} \cdot u_{r}.$$ Then, we look at the subword $$\tilde{\pi}=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m-1}}^{i_{k_{m-1}}}$$ of $\pi$, and we consider $u_{r-1}$ and $v_{r-2}$ by applying the same algorithm as we have done for finding $u_r$ and $v_{r-1}$, by considering $\tilde{\pi}$ instead of $\pi$. \item Subcase 2: If $k_{m-1} \geq i_{k_{m}}$, then there are the following two possibilities: \begin{itemize} \item Possibility 1: $i_{k_{m}}+ i_{k_{m-1}}\geq k_{m-1}$.
\item Possibility 2: $i_{k_{m}}+ i_{k_{m-1}}< k_{m-1}$. \end{itemize} \item Assume possibility 1, i.e., $i_{k_{m}}+ i_{k_{m-1}}\geq k_{m-1}$. Then we put $u_r$ to be $u_{r}=\tau_{k_{m-1}}^{-i_{k_{m}}}\cdot \tau_{k_{m}}^{i_{k_{m}}}$, and we put $v_{r-1}$ to be $v_{r-1}= \tau_{k_{m-1}}^{k_{m-1}}$. Then, $$\pi = \tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m-2}}^{i_{k_{m-2}}}\cdot \tau_{k_{m-1}}^{{i_{k_{m-1}}+i_{k_m}-k_{m-1}}}\cdot \tau_{k_{m-1}}^{k_{m-1}}\cdot (\tau_{k_{m-1}}^{-i_{k_m}}\cdot \tau_{k_m}^{i_{k_m}})$$ $$= \tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m-2}}^{i_{k_{m-2}}}\cdot \tau_{k_{m-1}}^{{i_{k_{m-1}}+i_{k_m}-k_{m-1}}} \cdot v_{r-1} \cdot u_{r}.$$ Then, we look at the subword $$\tilde{\pi}=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m-2}}^{i_{k_{m-2}}}\cdot \tau_{k_{m-1}}^{{i_{k_{m-1}}+i_{k_m}-k_{m-1}}}$$ of $\pi$, and we consider $u_{r-1}$ and $v_{r-2}$ by applying the same algorithm as we have done for finding $u_r$ and $v_{r-1}$, by considering $\tilde{\pi}$ instead of $\pi$. \item Assume possibility 2, i.e., $i_{k_{m}}+ i_{k_{m-1}}< k_{m-1}$. Then $\tau_{k_{m-1}}^{i_{k_{m-1}}}\cdot \tau_{k_{m}}^{i_{k_{m}}}$ is a subword of $u_{r}$ and $p_{r_{z_{r}-1}}= k_{m-1}$. Then, we consider $\tau_{k_{m-2}}$ and $i_{k_{m-2}}$, and we analyze $\tau_{k_{m-2}}$ and $i_{k_{m-2}}$ by the same argument as we have analysed $\tau_{k_{m-1}}$ and $i_{k_{m-1}}$.\\ First, we divide the proof into two main subcases: \begin{itemize} \item Subcase 1: $k_{m-2}< i_{k_{m}}+ i_{k_{m-1}}$. \item Subcase 2: $k_{m-2}\geq i_{k_{m}}+ i_{k_{m-1}}$. \end{itemize} \item We analyze subcase 1 by the same algorithm as we have done in the subcase $k_{m-1}<i_{k_m}$,
That is, we put $u_r=\tau_{i_{k_{m}}+ i_{k_{m-1}}}^{-i_{k_{m}}-i_{k_{m-1}}}\cdot \tau_{k_{m-1}}^{i_{k_{m-1}}}\cdot \tau_{k_{m}}^{i_{k_{m}}}$ and $v_{r-1}= \tau_{i_{k_{m}}+ i_{k_{m-1}}}^{i_{k_{m}}+i_{k_{m-1}}}.$ \item We analyze subcase 2 by the same algorithm as we have used in the subcase $k_{m-1}\geq i_{k_m}$, i.e., we consider the following two possibilities: \begin{itemize} \item Possibility 1: $i_{k_{m}}+ i_{k_{m-1}}+i_{k_{m-2}}\geq k_{m-2}$. \item Possibility 2: $i_{k_{m}}+ i_{k_{m-1}}+i_{k_{m-2}}< k_{m-2}$. \end{itemize} \item In case of possibility 1, we define $u_r$ and $v_{r-1}$ in the same way as we have done for the possibility $i_{k_{m}}+ i_{k_{m-1}}\geq k_{m-1}$. \item In case of possibility 2 ($i_{k_{m}}+i_{k_{m-1}}+i_{k_{m-2}} < k_{m-2}$), we conclude that $\tau_{k_{m-2}}^{i_{k_{m-2}}} \cdot \tau_{k_{m-1}}^{i_{k_{m-1}}}\cdot \tau_{k_{m}}^{i_{k_{m}}}$ is a subword of $u_{r}$ and $p_{r_{z_{r-2}}} = k_{m-2}$. Then, we consider $\tau_{k_{m-3}}$ and $i_{k_{m-3}}$ by continuing the same process as we have done in the possibility $i_{k_{m}}+i_{k_{m-1}} < k_{m-1}$, where we have considered $\tau_{k_{m-2}}$ and $i_{k_{m-2}}$. \item Then we continue the algorithm by the same method. \item Finally, we get the following conclusion from the described algorithm concerning $u_r$ and $v_{r-1}$ in case 1 ($i_{k_m}>0$). For every $1\leq x\leq m$, define $\rho_x=\sum_{j=x}^{m} i_{k_{j}}$. Then, we consider the largest $x$, with $1\leq x\leq m$, such that one of the following two conditions holds: \begin{itemize} \item $\rho_{x+1}\leq k_{x}\leq \rho_{x}$. \item $k_{x-1}<\rho_{x}<k_{x}$. \end{itemize} Then $u_r$ and $v_{r-1}$ have the following presentation by the generalized standard $OGS$.
\begin{itemize} \item If $\rho_{x+1}\leq k_{x}\leq \rho_{x}$, then $$u_{r}=\tau_{k_{x}}^{-\rho_{x+1}}\cdot \tau_{k_{x+1}}^{i_{k_{x+1}}}\cdot\tau_{k_{x+2}}^{i_{k_{x+2}}} \cdots \tau_{k_m}^{i_{k_m}}\quad v_{r-1}=\tau_{k_{x}}^{k_{x}}.$$ $$\pi=\tau_{k_1}^{i_{k_1}}\cdot\tau_{k_2}^{i_{k_2}}\cdots \tau_{k_{x-1}}^{i_{k_{x-1}}}\cdot \tau_{k_x}^{\rho_x-k_x} \cdot v_{r-1}\cdot u_r.$$ Hence, we find $u_{r-1}$ and $v_{r-2}$ by considering $$\tilde{\pi}=\tau_{k_1}^{i_{k_1}}\cdot\tau_{k_2}^{i_{k_2}}\cdots \tau_{k_{x-1}}^{i_{k_{x-1}}}\cdot\tau_{k_x}^{\rho_x-k_x},$$ where we use the same method which is used to find $u_r$ and $v_{r-1}$. \item If $k_{x-1}<\rho_{x}<k_{x}$, then $$u_{r}=\tau_{\rho_{x}}^{- \rho_{x}}\cdot \tau_{k_{x}}^{i_{k_{x}}}\cdot \tau_{k_{x+1}}^{i_{k_{x+1}}}\cdots \tau_{k_m}^{i_{k_m}}\quad v_{r-1}=\tau_{\rho_{x}}^{\rho_{x}}$$ $$\pi=\tau_{k_1}^{i_{k_1}}\cdot\tau_{k_2}^{i_{k_2}}\cdots \tau_{k_{x-1}}^{i_{k_{x-1}}} \cdot v_{r-1}\cdot u_r.$$ Hence, we find $u_{r-1}$ and $v_{r-2}$ by considering $$\tilde{\pi}=\tau_{k_1}^{i_{k_1}}\cdot\tau_{k_2}^{i_{k_2}}\cdots \tau_{k_{x-1}}^{i_{k_{x-1}}},$$ where we use the same method which is used to find $u_r$ and $v_{r-1}$. \end{itemize} \end{itemize} - Now, we turn to case 2 (i.e., $-k_m\leq i_{k_{m}}<0$).\\ If $i_{k_{m}}<0$, then $u_{r}$ is defined to be $1$, and $v_{r-1}$ is defined to be $\tau_{k_{m}}^{-k_{m}}=\tau_{k_m}^{k_m}$. Then, $$\tau_{k_m}^{i_{k_m}}=\tau_{k_m}^{-k_m+(i_{k_m}+k_m)}=\tau_{k_m}^{i_{k_m}+k_m}\cdot \tau_{k_m}^{-k_m}.$$ Since $-k_m\leq i_{k_m}<0$, we get that $0\leq k_m+i_{k_m}<k_m$. Then, $$\pi = \tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m}}^{i_{k_{m}}+k_{m}} \cdot v_{r-1} \cdot u_{r},$$ where $0 \leqslant i_{k_{m}}+k_{m} < k_{m}$. Now, we consider the following two subcases: \begin{itemize} \item Subcase 1: $i_{k_m}+k_m>0$. \item Subcase 2: $i_{k_m}+k_m=0$. \end{itemize} Consider subcase 1.
Then, $i_{k_m}+k_m>0$, and we find $u_{r-1}$ by the same algorithm as we have used for finding $u_r$ in case 1 (the case of $1\leq i_{k_m}<k_m$).\\ Now, consider subcase 2. Then, $i_{k_m}+k_m=0$. Hence, $$\pi = \tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m-1}}^{i_{k_{m-1}}}\cdot \tau_{k_{m}}^{i_{k_{m}}+k_{m}} \cdot v_{r-1} \cdot u_{r}=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m-1}}^{i_{k_{m-1}}}\cdot v_{r-1} \cdot u_{r}.$$ Then, we look at the subword $$\tilde{\pi}=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m-1}}^{i_{k_{m-1}}}$$ of $\pi$, and we find $u_{r-1}$ and $v_{r-2}$ by applying the same algorithm as we have used for finding $u_r$ and $v_{r-1}$, considering $\tilde{\pi}$ instead of $\pi$. \end{proof} \begin{example} Let $\pi=\tau_3^2\cdot \tau_4^3\cdot \tau_5^{-2}\cdot \tau_7^4\cdot \tau_8^2\cdot \tau_9^4$. \\ We apply the algorithm as it is described in Theorem \ref{uvuv}. First, notice that $$i_{k_6}=4>0.$$ Hence $$\tau_{k_6}^{i_{k_6}}=\tau_9^4$$ is a subword of $u_r$.
Now notice that $$i_{k_6}+i_{k_5}=6,\quad i_{k_6}+i_{k_5}+i_{k_4}=10, \quad k_4=7.$$ Hence, we have: $$i_{k_6}+i_{k_5}<k_{4}\leq i_{k_6}+i_{k_5}+i_{k_4}.$$ Therefore, we put $u_r$ and $v_{r-1}$ as follows: $$u_r=\tau_{k_4}^{-i_{k_5}-i_{k_6}}\cdot \tau_{k_5}^{i_{k_5}}\cdot \tau_{k_6}^{i_{k_6}}=\tau_7^{-6}\cdot \tau_8^2\cdot \tau_9^4,\quad v_{r-1}=\tau_{k_4}^{k_4}=\tau_7^7.$$ Hence, we get $$\pi=\tau_{k_1}^{i_{k_1}}\cdot \tau_{k_2}^{i_{k_2}}\cdot \tau_{k_3}^{i_{k_3}}\cdot \tau_{k_4}^{i_{k_4}+i_{k_5}+i_{k_6}-k_4}\cdot \tau_{k_4}^{k_4}\cdot (\tau_{k_4}^{-i_{k_5}-i_{k_6}}\cdot \tau_{k_5}^{i_{k_5}}\cdot \tau_{k_6}^{i_{k_6}})=$$ $$\tau_3^2\cdot \tau_4^3\cdot \tau_5^{-2}\cdot \tau_7^{3}\cdot \tau_7^{7}\cdot (\tau_7^{-6}\cdot \tau_8^2\cdot \tau_9^4).$$ Now, look at the following subword of $\pi$: $$\tau_{k_1}^{i_{k_1}}\cdot \tau_{k_2}^{i_{k_2}}\cdot\tau_{k_3}^{i_{k_3}}\cdot\tau_{k_4}^{i'_{k_4}}=\tau_3^2\cdot \tau_4^3\cdot \tau_5^{-2}\cdot \tau_7^3,$$ where $$i'_{k_4}=i_{k_4}+i_{k_5}+i_{k_6}-k_4.$$ Since $$i'_{k_4}=3>0,$$ $$\tau_{k_4}^{i'_{k_4}}=\tau_7^3$$ is a subword of $u_{r-1}$. Now, the following is satisfied: $$i'_{k_4}+i_{k_3}=3-2=1, \quad k_2=4, \quad i'_{k_4}+i_{k_3}+i_{k_2}=4.$$ Hence, $$i'_{k_4}+i_{k_3}<k_2\leq i'_{k_4}+i_{k_3}+i_{k_2}.$$ Therefore, we put $u_{r-1}$ and $v_{r-2}$ as follows: $$u_{r-1}=\tau_{k_2}^{-i_{k_3}-i'_{k_4}}\cdot \tau_{k_3}^{i_{k_3}}\cdot \tau_{k_4}^{i'_{k_4}}=\tau_4^{-1}\cdot \tau_5^{-2}\cdot \tau_7^3, \quad v_{r-2}=\tau_{k_2}^{k_2}=\tau_4^4.$$ Then we get: $$\pi=\tau_{k_1}^{i_{k_1}}\cdot \tau_{k_2}^{k_2}\cdot (\tau_{k_2}^{-i_{k_3}-i'_{k_4}}\cdot \tau_{k_3}^{i_{k_3}}\cdot \tau_{k_4}^{i'_{k_4}})\cdot \tau_{k_4}^{k_4}\cdot (\tau_{k_4}^{-i_{k_5}-i_{k_6}}\cdot \tau_{k_5}^{i_{k_5}}\cdot \tau_{k_6}^{i_{k_6}})=$$ $$\tau_3^2\cdot \tau_4^4\cdot (\tau_4^{-1}\cdot \tau_5^{-2}\cdot \tau_7^3)\cdot \tau_7^{7}\cdot(\tau_7^{-6}\cdot \tau_8^2\cdot \tau_9^4).
$$ Now, consider the following subword of $\pi$: $$\tau_{k_1}^{i_{k_1}}=\tau_3^2.$$ Since $$i_{k_1}=2>0,$$ $$\tau_{k_1}^{i_{k_1}}=\tau_3^2$$ is a subword of $u_{r-2}$. Now, notice $$i_{k_1}=2, \quad k_1=3.$$ Hence, $$i_{k_1}<k_1.$$ Therefore, we get $r-3=1$, and we put $u_{r-2}=u_2$, $v_{r-3}=v_1$ and $u_{r-3}=u_1$ as follows: $$u_{r-2}=u_2=\tau_{i_{k_1}}^{-i_{k_1}}\cdot \tau_{k_1}^{i_{k_1}}=\tau_2^{-2}\cdot \tau_3^2, \quad v_1=\tau_{i_{k_1}}^{i_{k_1}}=\tau_2^2, \quad u_1=1.$$ Hence, $$\pi=u_1\cdot v_1\cdot u_2\cdot v_2\cdot u_3\cdot v_3\cdot u_4=\tau_2^2\cdot(\tau_2^{-2}\cdot \tau_3^2)\cdot \tau_4^4\cdot(\tau_4^{-1}\cdot \tau_5^{-2}\cdot \tau_7^3)\cdot \tau_7^{7}\cdot(\tau_7^{-6}\cdot \tau_8^2\cdot \tau_9^4),$$ where $$u_1=1\quad\quad v_1=\tau_2^2=\tau_2^{-2}\quad\quad u_2=\tau_2^{-2}\cdot \tau_3^2\quad\quad v_2=\tau_4^4=\tau_4^{-4}$$ $$u_3=\tau_4^{-1}\cdot \tau_5^{-2}\cdot \tau_7^3\quad\quad v_3=\tau_7^7=\tau_7^{-7}\quad\quad u_4=\tau_7^{-6}\cdot \tau_8^2\cdot \tau_9^4.$$ \end{example} \section{The Coxeter length function in $B_{n}$ by applying the generalized standard $OGS$}\label{cox-length} In this section we give an explicit formula for $\ell(\pi)$ considering the generalized standard $OGS$ presentation, as it is described in Theorem \ref{ogs-bn}, and the presentation that is described in Theorem \ref{uvuv}. We start with a lemma about the length of elements of the form $\tau_{k}^{-k}$. Then, we find the length of elements in the subgroup $\dot{S}_n$. Finally, we give a formula for the length of a general element of $B_n$. \begin{lemma}\label{tau-k-k} For $1\leq k\leq n$, let $\tau_{k}$ be the element of $B_n$ as it is defined in Definition \ref{tau}.
Then, the following holds: $$\ell(\tau_{k}^{-k}) = \sum_{i=0}^{k-1} (2i+1) = k^{2}.$$\\ \end{lemma} \begin{proof} By considering the normal form of $\tau_k^{-k}\in B_{n}$ as it is defined in Definition \ref{Normal form of $B_{n}$}: $$\tau_{k}^{-k} = \prod_{i=0}^{k-1} \prod_{j=0}^{2i} s_{|i-j|} = s_{0} \cdot (s_{1}\cdot s_{0} \cdot s_{1}) \cdot (s_{2}\cdot s_{1} \cdot s_{0} \cdot s_{1} \cdot s_{2}) \cdots (s_{k-1}\cdots s_{1}\cdot s_{0} \cdot s_{1} \cdots s_{k-1}). $$ Hence, $$\ell(\tau_{k}^{-k}) = \sum_{i=0}^{k-1} (2i+1) = k^{2}.$$ \end{proof} \begin{theorem}\label{ell-sn} Let $\pi$ be an element in the subgroup $\dot{S}_{n}$ of $B_{n}$, where by the generalized standard $OGS$ presentation of $\pi$ (as it is described in Theorem \ref{main-ogs-sn}): $$\pi=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m}}^{i_{k_{m}}}$$ such that \begin{itemize} \item $ -k_{j} \leqslant i_{k_{j}} \leqslant k_{j}-1$, for every $1 \leqslant j \leqslant m$. \item $\sum_{j=1}^{m} i_{k_{j}}=0$. \item $0\leqslant \sum_{j=r}^{m}i_{k_{j}}\leqslant k_{r-1}$, for every $2\leq r\leq m$. \end{itemize} Then, $$\ell(\pi) = \sum_{j=1}^{m} k_{j} \cdot i_{k_{j}}.$$ \end{theorem} \begin{proof} Let $\pi$ be an element of $\dot{S}_{n}$, presented by the generalized standard $OGS$ (as it is described in Theorem \ref{main-ogs-sn}) as follows: $$\pi=\tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m}}^{i_{k_{m}}}$$ such that \begin{itemize} \item $ -k_{j} \leqslant i_{k_{j}} \leqslant k_{j}-1$, for every $1 \leqslant j \leqslant m$. \item $\sum_{j=1}^{m} i_{k_{j}}=0$. \item $0\leqslant \sum_{j=r}^{m}i_{k_{j}}\leqslant k_{r-1}$, for every $2\leq r\leq m$.
\end{itemize} The presentation of $\pi$ by the normal form (\cite{BB} Chapter 3.4) is as follows: $$\pi = (s_{r_{1}} \cdot s_{r_{1}-1} \cdots s_{r_{1}-\mu_{1}} )\cdot (s_{r_{2}} \cdot s_{r_{2}-1} \cdots s_{r_{2}-\mu_{2}}) \cdots (s_{r_{z}} \cdot s_{r_{z}-1} \cdots s_{r_{z}-\mu_{z}}),$$ where $r_j<r_{j+1}$ for every $1\leq j\leq z-1$. Therefore, \quad $\ell(\pi) = \sum_{j=1}^{z} \mu_{j} + z $.\\ Then, by Lemma \ref{tau-i+i}, $$ \pi = \tau_{r_{1}}^{-(\mu_{1}+1)} \cdot \tau_{r_{1}+1}^{\mu_{1}+1} \cdot \tau_{r_{2}}^{-(\mu_{2}+1)} \cdot \tau_{r_{2}+1}^{\mu_{2}+1} \cdots \tau_{r_{z}}^{-(\mu_{z}+1)}\cdot \tau_{r_{z}+1}^{\mu_{z}+1}.$$ Thus, $$\ell(\pi) = \sum_{j=1}^{z} (\mu_{j}+1) = \sum_{j=1}^{z} \left(-(\mu_{j}+1)\cdot r_{j} + (\mu_{j}+1) \cdot (r_{j}+1)\right).$$ Since $r_j<r_{j+1}$, we conclude $r_{j}+1\leq r_{j+1}$; hence, for each $1\leq j\leq z-1$, either $r_j+1=r_{j+1}$ or $r_j+1<r_{j+1}$. If $r_j+1=r_{j+1}$ for some $1\leq j\leq z-1$, then the two adjacent factors with the same base merge: $$ \pi = \tau_{r_{1}}^{-(\mu_{1}+1)} \cdot \tau_{r_{1}+1}^{\mu_{1}+1} \cdot \tau_{r_{2}}^{-(\mu_{2}+1)} \cdot \tau_{r_{2}+1}^{\mu_{2}+1} \cdots \tau_{r_{z}}^{-(\mu_{z}+1)}\cdot \tau_{r_{z}+1}^{\mu_{z}+1}=$$ $$=\tau_{r_{1}}^{-(\mu_{1}+1)} \cdots \tau_{r_{j}+1}^{(\mu_{j}+1)-(\mu_{j+1}+1)} \cdots \tau_{r_{z}+1}^{\mu_{z}+1}=$$ $$=\tau_{k_1}^{i_{k_1}}\cdots \tau_{k_m}^{i_{k_m}}.$$ Then, by the uniqueness of the presentation by the generalized standard $OGS$ (Theorem \ref{ogs-bn}), we conclude: $$\ell(\pi)= \sum_{j=1}^{z} \left(-(\mu_{j}+1)\cdot r_{j} + (\mu_{j}+1) \cdot (r_{j}+1)\right)=\sum_{j=1}^m k_j\cdot i_{k_j}.$$ \end{proof} The next theorem demonstrates the connection of the generalized standard $OGS$ presentation of an element $\pi$ of $B_{n}$ (as it is described in Theorem \ref{ogs-bn}) to the Coxeter length of $\pi$.
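As an illustration (not part of the proofs), the formula of Theorem \ref{ell-sn} can be checked numerically on signed permutations. In the Python sketch below, the window of $\tau_k$ is our own reconstruction from the permutation presentations that appear in this paper's examples, and the Coxeter length is computed by the standard $B_n$ formula $\ell(w)=\mathrm{inv}(w)+\sum_{w(i)<0}|w(i)|$ from \cite{BB}; all function names are ours.

```python
# Illustrative check of ell(pi) = sum k_j * i_{k_j} for elements of the
# subgroup of signed permutations; windows are lists [w(1),...,w(n)]
# extended by w(-j) = -w(j).

def tau(k, n):
    # Window of tau_k, reconstructed from the paper's examples:
    # tau_k sends 1 -> -k and j -> j-1 for 2 <= j <= k; it fixes j > k.
    w = list(range(1, n + 1))
    w[0] = -k
    for j in range(2, k + 1):
        w[j - 1] = j - 1
    return w

def mult(a, b):
    # Product convention matching the paper's examples: (a.b)(j) = b(a(j)).
    def app(p, v):
        return p[v - 1] if v > 0 else -p[-v - 1]
    return [app(b, a[j]) for j in range(len(a))]

def tau_pow(k, i, n):
    # tau_k^i, using the fact that tau_k has order 2k.
    w = list(range(1, n + 1))
    t = tau(k, n)
    for _ in range(i % (2 * k)):
        w = mult(w, t)
    return w

def coxeter_length(w):
    # l(w) = number of inversions plus the sum of |w(i)| over negative entries.
    inv = sum(1 for i in range(len(w))
                for j in range(i + 1, len(w)) if w[i] > w[j])
    return inv + sum(-x for x in w if x < 0)
```

For example, for $\pi=\tau_3^{-2}\cdot\tau_4^{-1}\cdot\tau_5^3\in\dot{S}_5$ this gives $\ell(\pi)=3\cdot(-2)+4\cdot(-1)+5\cdot 3=5$, and $\ell(\tau_4^{-4})=16=4^2$, in agreement with Lemma \ref{tau-k-k}.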
\begin{theorem} Let $\pi$ be an element of $B_{n}$, and consider the presentation of $\pi$ in the form $u_{1} \cdot v_{1} \cdot u_{2} \cdot v_{2} \cdots u_{r}$ for some $r$, as it is described in Theorem \ref{uvuv}.\\ Then, $$\ell(\pi) = \ell(u_{r}) + \ell(v_{r-1}) - \ell(u_{r-1}) - \ell(v_{r-2}) + \ell(u_{r-2}) + \ell(v_{r-3}) - \cdots + (-1)^{r-1} \cdot \ell(u_{1}).$$ \end{theorem} \begin{proof} First, consider the case $$\pi = u\cdot v= \prod_{j=1}^{m} \tau_{k_{j}}^{i_{k_j}}\cdot \tau_{p}^{p},$$ where $p\geq k_m$. For $1\leq j<n$, denote by $\dot{B}_j$ the parabolic subgroup of $B_n$ which is generated by $s_0, s_1, \ldots s_{j-1}$. (Notice that $\dot{B}_j$ is isomorphic to $B_j$.) By Theorem \ref{ogs-bn}, $u$ is in the parabolic subgroup $\dot{B}_{k_m}$ of $B_n$, and by \cite{BB} Chapter 3.4, the normal form of $v=\tau_p^p$ is as follows: $$v=\tau_p^p = s_{0}\cdot (s_{1}\cdot s_{0}\cdot s_{1}) \cdots (s_{p-1}\cdot s_{p-2} \cdots s_{0} \cdot s_1\cdots s_{p-1}). $$ Since $p\geq k_m$, both elements $u$ and $v$ can be considered as elements of $\dot{B}_p$. By \cite{BB} Chapter 3.4, $v=\tau_p^p$ is the longest element in the parabolic subgroup $\dot{B}_{p}$ of $B_n$. Therefore, by \cite{BB} Chapter 2.3, $$\ell(\pi) = \ell(v) - \ell(u).$$ Now, assume the following by induction on $m$: If $$ \pi = u_{1} \cdot v_{1} \cdots u_{m-1} \cdot v_{m-1} \cdot u_{m} \cdot v_{m},$$ then $$\ell(\pi) = \sum_{i=1}^{m} (-1)^{m-i} \cdot (\ell(v_{i}) - \ell(u_{i})).$$ Now, consider $$\pi_{1} = \pi \cdot u_{m+1},$$ where $\pi=u_{1} \cdot v_{1} \cdots u_{m-1} \cdot v_{m-1} \cdot u_{m} \cdot v_{m}$. Then, $v_m=\tau_q^{q}$ for some $q<n$, and by Theorem \ref{uvuv}, $\pi$ is in the parabolic subgroup $\dot{B}_q$ of $B_n$.
Notice that the generalized standard $OGS$ presentation of $u_{m+1}$ is $u_{m+1}=\prod_{j=1}^m\tau_{k_j}^{i_{k_j}}$, where by Theorem \ref{uvuv}, $k_1\geq q$, and then by \cite{BB} Chapter 3.4, the normal form of $u_{m+1}$ is as follows: $$u_{m+1} = (s_{r_{1}} \cdot s_{r_{1}-1} \cdots s_{r_{1}-\mu_{1}} )\cdot (s_{r_{2}} \cdot s_{r_{2}-1} \cdots s_{r_{2}-\mu_{2}}) \cdots (s_{r_{z}} \cdot s_{r_{z}-1} \cdots s_{r_{z}-\mu_{z}}),$$ where $r_j<r_{j+1}$ for every $1\leq j\leq z-1$. By the proof of Theorem \ref{ell-sn}, $r_1=k_1$, and since $k_1\geq q$, we have that $r_1\geq q$. Since $\pi\in \dot{B}_q$, the normal form of $\pi\cdot u_{m+1}$, by \cite{BB} Chapter 3.4, is as follows: $$norm(\pi\cdot u_{m+1})=$$ $$=norm(\pi)\cdot (s_{r_{1}} \cdot s_{r_{1}-1} \cdots s_{r_{1}-\mu_{1}} )\cdot (s_{r_{2}} \cdot s_{r_{2}-1} \cdots s_{r_{2}-\mu_{2}}) \cdots (s_{r_{z}} \cdot s_{r_{z}-1} \cdots s_{r_{z}-\mu_{z}})=$$ $$=norm(\pi)\cdot norm(u_{m+1}).$$ Hence, $$\ell(\pi\cdot u_{m+1})=\ell(\pi)+\ell(u_{m+1}).$$ Now, consider: $$\pi_{2} = \pi_{1}\cdot v_{m+1}=\pi \cdot (u_{m+1} \cdot v_{m+1}).$$ By the same argument as in the calculation of the length of $u\cdot v$, we get that $v_{m+1}$ is the longest element in a parabolic subgroup of $B_n$, which contains the elements $u_1, v_1, \ldots, u_m, v_m, u_{m+1}$.
Hence, by \cite{BB} Chapter 2.3, $$\ell(\pi_{2}) = \ell(v_{m+1})-\ell(\pi_1) = \ell(v_{m+1})-[\ell(u_{m+1})+\ell(\pi)] =\sum_{i=1}^{m+1} (-1)^{m+1-i}\cdot (\ell(v_{i}) - \ell(u_{i})),$$ which completes the induction. \end{proof} \begin{example} Let $$\pi = \tau_{2} \cdot \tau_{3} \cdot \tau_{4}^{3} \cdot \tau_{5}^{2}.$$ Then, $$\pi = \tau_{1} \cdot (\tau_{1}^{-1} \cdot \tau_{2} ) \cdot \tau_{2}^{2} \cdot (\tau_{2}^{-2} \cdot \tau_{3} \cdot \tau_{4}) \cdot \tau_{4}^{4} \cdot (\tau_{4}^{-2} \cdot \tau_{5}^{2}).$$ Hence, $$u_1=1\quad\quad v_1=\tau_{1}=\tau_{1}^{-1}\quad\quad u_2=\tau_{1}^{-1} \cdot \tau_{2}\quad \quad v_2=\tau_{2}^2=\tau_{2}^{-2}$$ $$u_3=\tau_{2}^{-2} \cdot \tau_{3} \cdot \tau_{4}\quad\quad v_3=\tau_{4}^4=\tau_{4}^{-4}\quad\quad u_4=\tau_{4}^{-2} \cdot \tau_{5}^{2}.$$ Then, $$\ell(\pi) = \ell(\tau_{4}^{-2} \cdot \tau_{5}^{2}) + \ell(\tau_{4}^{-4}) - \ell(\tau_{2}^{-2} \cdot \tau_{3} \cdot \tau_{4}) -\ell(\tau_{2}^{-2})+ \ell(\tau_{1}^{-1} \cdot \tau_{2})+\ell(\tau_{1}^{-1})$$ $$= 4\cdot (-2) + 5\cdot 2 + 4\cdot 4 - 2\cdot (-2) - 3 - 4 - 2\cdot 2 + 1 \cdot (-1) + 2 + 1 = 13.$$ Indeed, the presentation of $\pi$ in the normal form contains $13$ Coxeter generators, as follows: $$\text{norm}(\pi)=s_0\cdot s_1\cdot (s_2\cdot s_1\cdot s_0)\cdot (s_3\cdot s_2\cdot s_1\cdot s_0\cdot s_1\cdot s_2)\cdot (s_4\cdot s_3).$$ \end{example} \section{The descent set of elements of $B_{n}$.}\label{descent-bn} \quad In this section, we characterize the descent set of the elements in the Coxeter group $B_{n}$ by using the generalized standard $OGS$ presentation for the elements of $B_n$. \begin{proposition}\label{des-bn-sn} Let $\pi$ be an element of $\dot{S}_{n}$ in $B_{n}$ which is expressed by the generalized standard OGS presentation (i.e., $\pi = \tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m}}^{i_{k_{m}}}$, where by Theorem \ref{main-ogs-sn}, $\sum_{j=1}^{m} i_{k_j}=0$, and
$-k_{r}\leqslant \sum_{j=1}^{r} i_{k_{j}} \leqslant 0$ for every $r < m$.)\\ Then $des(\pi)$, the set of the locations of the descents of $\pi$, satisfies the following property: $$s_{k_{j}} \in des(\pi) \Longleftrightarrow i_{k_{j}} < 0.$$ \end{proposition} \begin{proof} Consider $$\pi = \tau_{k_{1}}^{i_{k_{1}}} \cdot \tau_{k_{2}}^{i_{k_{2}}} \cdots \tau_{k_{m}}^{i_{k_{m}}} , \quad \text{where} \quad \sum_{j=1}^{m} i_{k_{j}}=0.$$ Denote by $r_1, \ldots, r_{q}$ all the indices $j$ such that $i_{k_{j}}<0$. Since, by Theorem \ref{main-ogs-sn}, $i_{k_1}<0$ for every element of $\dot{S}_n$, we have $r_1=1$.\\ Now, for every $1\leq \alpha\leq q$, we define $i^{\prime}_{\alpha}$ as it was defined in Theorem \ref{factorization-}, by the following recursive method: \begin{itemize} \item Define $i^{\prime}_{q}$ to be $$i^{\prime}_{q}=i_{k_m};$$ \item starting with $\alpha=q-1$, define $i^{\prime}_{q-1}$ to be $$i^{\prime}_{q-1}=i_{k_{r_q}}+\sum_{j=r_{q}+1}^{m-1} i_{k_{j}}+i^{\prime}_{q};$$ \item then, for every $1\leq \alpha\leq q-2$, define $i^{\prime}_{\alpha}$ to be $$i^{\prime}_{\alpha}=i_{k_{r_{\alpha+1}}}+\sum_{j=r_{\alpha+1}+1}^{r_{\alpha+2}-1} i_{k_{j}}+i^{\prime}_{\alpha+1}.$$ \end{itemize} Then by Theorem \ref{factorization-}, $$\pi = (\tau_{k_{r_1}}^{-\sum_{j=2}^{r_2-1} i_{k_{j}}-i^{\prime}_{1}} \cdot \tau_{k_{2}}^{i_{k_{2}}}\cdots \tau_{k_{r_{2}-1}}^{i_{k_{r_{2}-1}}}\cdot\tau_{k_{r_{2}}}^{i^{\prime}_{1}}) \cdot (\tau_{k_{r_{2}}}^{-\sum_{j=r_{2}+1}^{r_3-1} i_{k_{j}}-i^{\prime}_{2}} \cdot \tau_{k_{r_{2}+1}}^{i_{k_{r_{2}+1}}} \cdots \tau_{k_{r_{3}-1}}^{i_{k_{r_{3}-1}}}\cdot \tau_{k_{r_{3}}}^{i^{\prime}_{2}})\cdots$$ $$\cdots (\tau_{k_{r_{q}}}^{-\sum_{j=r_{q}+1}^{m-1} i_{k_{j}}-i^{\prime}_{q}} \cdot \tau_{k_{r_{q}+1}}^{i_{k_{r_{q}+1}}} \cdots \tau_{k_{m-1}}^{i_{k_{m-1}}}\cdot \tau_{k_{m}}^{i^{\prime}_{q}}).$$ Then, by Theorem \ref{factorization-}, the following properties hold: \begin{itemize} \item By considering $\pi$ as an element of $S_n$, the subword
$$\pi_{\alpha}=\tau_{k_{r_{\alpha}}}^{-\sum_{j=r_{\alpha}+1}^{r_{\alpha+1}-1} i_{k_{j}}-i^{\prime}_{\alpha}} \cdot \tau_{k_{r_{\alpha}+1}}^{i_{k_{r_{\alpha}+1}}} \cdots \tau_{k_{r_{\alpha+1}-1}}^{i_{k_{r_{\alpha+1}-1}}}\cdot \tau_{k_{r_{\alpha+1}}}^{i^{\prime}_{\alpha}}$$ is the standard $OGS$ elementary factor $\pi^{(\alpha)}$ of $\pi$ for every $\alpha$ such that $1\leq \alpha\leq q$, where $maj(\pi^{(\alpha)})=k_{r_{\alpha}}$, and $z(\pi)=q$; \item $i_{k_j}>0$ for every $2\leq j\leq m$ such that $j\neq r_{\alpha}$ for $1\leq \alpha\leq q$. \end{itemize} Hence, by Theorem \ref{theorem-factorization}, $$des\left(\pi\right)=\bigcup_{\alpha=1}^{z(\pi)}des\left(\pi^{(\alpha)}\right)=\{maj\left(\pi^{(\alpha)}\right)~|~1\leq \alpha\leq z(\pi)\}.$$ Hence, for $1\leq j\leq k_m$, ~$s_j\in des(\pi)$ if and only if $j=k_{r_{\alpha}}$ for some $1\leq \alpha\leq q$. Since, by the definition of $r_{\alpha}$, ~$j=k_{r_{\alpha}}$ if and only if $i_j<0$, we get that $s_j\in des(\pi)$ if and only if $i_j<0$. \end{proof} \begin{example} Let $\pi=\tau_{3}^{-2} \cdot \tau_{4}^{-1} \cdot \tau_{5}^{3}$.\\ Then, the following holds: \begin{itemize} \item $k_1=3, \quad k_2=4, \quad k_3=5$; \item $i_{k_{1}} =-2, \quad i_{k_2}=-1, \quad i_{k_3}=3$; \item $i_{k_1}=-2<0,\quad i_{k_1}+i_{k_2}=-3\leq 0,\quad i_{k_1}+i_{k_2}+i_{k_3}=0$. \end{itemize} Hence, by Theorem \ref{main-ogs-sn}, $\pi\in \dot{S}_5$.
\\ The permutation presentation of $\pi$ is as follows: $$\pi=\tau_{3}^{-2} \cdot \tau_{4}^{-1} \cdot \tau_{5}^{3}$$ $$= [3; ~-1; ~-2; ~4; ~5] \cdot [2; ~3; ~4; ~-1; ~5] \cdot [-3; ~-4; ~-5; ~1; ~2] $$ $$= [1; ~4; ~5; ~3; ~2]. $$ Then we have $$\pi(1)<\pi(2)<\pi(3),\quad\quad \pi(3)>\pi(4)>\pi(5).$$ Hence, $$\text{des}(\pi)=\{3, 4\}.$$ \end{example} \begin{proposition} Let $\pi$ be the element $\tau_{k}^{-k}$. Then: $$des(\tau_{k}^{-k}) = \{0,1,2,\cdots ,k-1\}.$$ \end{proposition} \begin{proof} Consider the permutation presentation of $\pi$: $$\pi = [-1; ~-2; \ldots; ~-k; ~k+1; ~k+2; \ldots; ~n]. $$ Then, by Definition 2.0.1, $$des(\tau_{k}^{-k}) = \{0,1,2,\cdots ,k-1\}.$$ \end{proof} \begin{theorem} Let $\pi$ be the element $v\cdot u = \tau_{i_{1}}^{-i_{1}} \cdot (\tau_{i_{2}}^{k_{2}} \cdots \tau_{i_{m}}^{k_{m}})$, where $v$ is the longest element of the parabolic subgroup which is generated by the Coxeter generators $\{s_{0},s_{1} ,\cdots , s_{i_{1}-1}\}$, and $u$ is an element of $\dot{S}_{n}$, presented by the generalized standard $OGS$, where $i_2\geq i_1$. Then the descent set of $\pi$ satisfies the following: \begin{itemize} \item For $0 \leqslant j < i_{1}$: $j \in des(v\cdot u) \Longleftrightarrow j \notin des(u)$. \item For $j > i_{1}$: $j \in des(v\cdot u) \Longleftrightarrow j \in des(u)$. \item $i_1\notin des(v\cdot u)$.
\end{itemize} \end{theorem} \begin{proof} Consider $$\pi = \tau_{i_{1}}^{-i_{1}} \cdot (\tau_{i_{2}}^{k_{2}} \cdots \tau_{i_{m}}^{k_{m}}),$$ $$ v = \tau_{i_{1}}^{-i_{1}},$$ $$u = \tau_{i_{2}}^{k_{2}} \cdots \tau_{i_{m}}^{k_{m}}, \quad u \in \dot{S}_{i_{m}}.$$ Then, by considering the permutation presentation of the element $v\cdot u$: \begin{itemize} \item $[v\cdot u](j) = -u(j) \quad 1\leqslant j \leqslant i_{1}$; \item $[v\cdot u](j) = u(j) \quad i_{1}+1\leqslant j \leqslant i_{m}.$ \end{itemize} Hence the following holds: \begin{itemize} \item For $1\leq j\leq i_{1}-1$: $$j\in des(v\cdot u)\Longleftrightarrow [v\cdot u](j)>[v\cdot u](j+1)\Longleftrightarrow u(j)<u(j+1) \quad (i.e.,\quad j\notin des(u)).$$ \item For $i_1+1\leq j\leq n-1$: $$j\in des(v\cdot u)\Longleftrightarrow [v\cdot u](j)>[v\cdot u](j+1)\Longleftrightarrow u(j)>u(j+1) \quad (i.e., \quad j\in des(u)).$$ \item For $j=i_1$: $$[v\cdot u](i_1)=-u(i_1), \quad [v\cdot u](i_1+1)=u(i_1+1) \Longrightarrow [v\cdot u](i_1)<[v\cdot u](i_1+1) \Longrightarrow i_1\notin des(v\cdot u).$$ \end{itemize} \end{proof} \begin{theorem} Let $\pi$ be the element $u\cdot v =(\tau_{i_{1}}^{k_{1}} \cdot \tau_{i_{2}}^{k_{2}} \cdots \tau_{i_{m}}^{k_{m}}) \cdot \tau_{i_{m+1}}^{i_{m+1}}$, where $v$ is the longest element of the parabolic subgroup generated by the Coxeter generators $\{s_{0},s_{1} ,\cdots , s_{i_{m+1}-1}\}$, and $u$ is an element of $\dot{S}_{n}$ presented by the generalized standard $OGS$, with $i_m\leq i_{m+1}$. Then the descent set of $\pi$ satisfies the following properties: \begin{itemize} \item For $0 \leqslant j <i_{m}$: $j \in des(u\cdot v) \Longleftrightarrow j \notin des(u)$. \item $j \in des(u\cdot v)$ for every $i_{m}\leqslant j <i_{m+1}$. \end{itemize} \end{theorem} \begin{proof} Consider $$\pi = (\tau_{i_{1}}^{k_{1}} \cdot \tau_{i_{2}}^{k_{2}} \cdots \tau_{i_{m}}^{k_{m}}) \cdot \tau_{i_{m+1}}^{i_{m+1}} \quad, i_{m}\leq i_{m+1}.$$ $$u=\tau_{i_{1}}^{k_{1}} \cdot \tau_{i_{2}}^{k_{2}} \cdots \tau_{i_{m}}^{k_{m}} \quad u \in
\dot{S}_{i_{m}}.$$ By Proposition \ref{des-bn-sn}, $$des(u) = \{i_{j} ~|~ k_{j} < 0\}.$$ $$v=\tau_{i_{m+1}}^{i_{m+1}}.$$ $$des(v) = \{0,1,\cdots , i_{m+1}-1\}.$$ Then, by looking at the permutation presentation of the element $u\cdot v$: $$[u\cdot v](j) = -u(j) \quad 1\leqslant j \leqslant i_m, \quad\quad [u\cdot v](j) = -j \quad i_{m}< j \leqslant i_{m+1}.$$ Hence the following holds: \begin{itemize} \item For $1\leq j\leq i_m$: $$j\in des(u\cdot v)\Longleftrightarrow [u\cdot v](j)>[u\cdot v](j+1)\Longleftrightarrow u(j)<u(j+1) \quad (i.e.,\quad j\notin des(u)).$$ \item For $i_m \leq j < i_{m+1}$: $$[u\cdot v](j)>[u\cdot v](j+1),\quad \text{hence}\quad j\in des(u\cdot v).$$ \end{itemize} \end{proof} \begin{example} The case $\pi=v \cdot u$:\\ $\bullet$ Consider $$\pi=\tau_{9}^{4}\cdot \tau_{10}^{4}= \tau_{8}^{-8} \cdot (\tau_{8}^{-8} \cdot \tau_{9}^{4}\cdot \tau_{10}^{4}).$$ The permutation presentations of $v$ and $u$ are as follows: $$v= \tau_{8}^{-8}= [-1; ~-2; ~-3; ~-4; ~-5; ~-6; ~-7; ~-8; ~9; ~10] $$ $$u=\tau_{8}^{-8} \cdot \tau_{9}^{4}\cdot \tau_{10}^{4}$$ $$= [-1; ~-2; ~-3; ~-4; ~-5; ~-6; ~-7; ~-8; ~9; ~10] \cdot [-6; ~-7; ~-8; ~-9; ~1; ~2; ~3; ~4; ~5; ~10] \cdot $$ $$ [~-7; ~-8; ~-9; ~-10; ~1; ~2; ~3; ~4; ~5; ~6] $$ $$= [2; ~3; ~4; ~5; ~7; ~8; ~9; ~10; ~1; ~6].
$$ Hence, $$\pi=v\cdot u=\tau_{8}^{-8} \cdot (\tau_{8}^{-8} \cdot \tau_{9}^{4}\cdot \tau_{10}^{4})=\tau_{9}^{4}\cdot \tau_{10}^{4}$$ $$= [-1; ~-2; ~-3; ~-4; ~-5; ~-6; ~-7; ~-8; ~9; ~10]\cdot [2; ~3; ~4; ~5; ~7; ~8; ~9; ~10; ~1; ~6] $$ $$= [-2; ~-3; ~-4; ~-5; ~-7; ~-8; ~-9; ~-10; ~1; ~6]. $$ Then: $$des(v)=des(\tau_{8}^{-8}) = \{0,1,2,\ldots,7\}, \quad des(u)=des(\tau_{8}^{-8} \cdot \tau_{9}^{4}\cdot \tau_{10}^{4} ) = \{8\}.$$ Hence, we conclude that in the permutation presentation of $u$ we have $$u(1)<u(2)<\cdots<u(8),\quad \quad u(8)>u(9),\quad\quad u(9) < u(10),$$ and in the permutation presentation of $v$, $$v(i)= -i \quad \text{for} \quad 1\leq i\leq 8.$$ Hence, in the permutation presentation of $v\cdot u$: \begin{itemize} \item For every $1 \leq i \leq 8$, $[v\cdot u](i) = -u(i)$. \item For $i=9,10$, $[v\cdot u](i)=u(i)$. \item Therefore, $[v\cdot u](8)=-u(8)<0$ and $[v\cdot u](9)=u(9)>0$, which implies $[v\cdot u](8) < [v\cdot u](9)$. \item Furthermore, the following holds: $$0 > [v\cdot u](1) > [v\cdot u](2) > [v\cdot u](3) > \cdots > [v\cdot u](8),\quad [v\cdot u](9) < [v\cdot u](10).$$ \end{itemize} Hence, the descent set of $\pi$ is as follows: $$des(\pi)=\{0,1,2,3,4,5,6,7\}. $$ \end{example} \begin{example} The case $\pi=u\cdot v$:\\ $\bullet$ Let $\pi$ be $$(\tau_{3}^{-1}\cdot \tau_{4}^{-2} \cdot \tau_{5}^{3})\cdot \tau_5^{-5},$$ where $u= \tau_{3}^{-1}\cdot \tau_{4}^{-2} \cdot \tau_{5}^{3}\in \dot{S}_{5}$ and $v=\tau_5^{-5}$. \\ The permutation presentations of $u$, $v$, and $\pi$ are as follows: $$u=\tau_{3}^{-1}\cdot \tau_{4}^{-2} \cdot \tau_{5}^{3}$$ $$= [2; ~3; ~-1; ~4; ~5] \cdot [3; ~4; ~-1; ~-2; ~5] \cdot [-3; ~-4; ~-5; ~1; ~2]$$ $$= [1; ~3; ~5; ~4; ~2]. $$ $$v=\tau_5^{-5}= [-1; ~-2; ~-3; ~-4; ~-5]. $$ Hence, $$\pi=u\cdot v=(\tau_{3}^{-1}\cdot \tau_{4}^{-2} \cdot \tau_{5}^{3})\cdot \tau_5^{-5}$$ $$ = [-1; ~-3; ~-5; ~-4; ~-2].
$$ Then, by Proposition \ref{des-bn-sn}, $$des(u)=\{3,4\}.$$ Hence we conclude: $$u(1) < u(2) < u(3),\quad\quad u(3) > u(4) > u(5).$$ Now, we turn to the element $v$. We know that $$v(i) = -i \quad \text{for every} \quad 1\leq i\leq 5.$$ Then, $$ des(v) = \{0,1,2,3,4\}.$$ Therefore, in the permutation presentation of $u\cdot v$ we have $$[u\cdot v](i) = -u(i) \quad \text{for every} \quad 1\leq i\leq 5.$$ Hence, we have in the permutation presentation of $u\cdot v$, $$(0 > [u\cdot v](1) >[u\cdot v](2) > [u\cdot v](3)), \quad ([u\cdot v](3) < [u\cdot v](4) < [u\cdot v](5) ).$$ Then the descent set of $\pi=u\cdot v$ is as follows: $$des(u\cdot v)= \{0,1,2\},\quad \text{where}\quad des(u\cdot v) = \{0,1,2,3,4\} \setminus des(u).$$ \end{example} \section{Conclusions and future plans} This paper is a natural continuation of \cite{S1}, which introduced a quite interesting generalization of the fundamental theorem for abelian groups to two important and very elementary families of non-abelian Coxeter groups: the $I$-type (dihedral groups) and the $A$-type (symmetric groups). That paper introduced canonical forms with very interesting exchange laws and quite interesting properties concerning the Coxeter lengths of the elements. In this paper we generalized the results of \cite{S1} to the $B$-type Coxeter groups, namely $B_n$. The results of the paper motivate further generalizations of the $OGS$ and the properties arising from it to more families of Coxeter and generalized Coxeter groups, which have an importance in the classification of Lie algebras and the Lie-type simple groups, and in other fields of mathematics, such as algebraic geometry, for the classification of fundamental groups of Galois covers of surfaces \cite{alst}. The next natural step of generalization is defining a generalized standard $OGS$ canonical form for the $D$-type Coxeter groups, which, similarly to the $B$-type Coxeter groups, have presentations as signed permutations.
Furthermore, it is interesting to find generalizations of the $OGS$ for the affine classical families $\tilde{A}_n$, $\tilde{B}_n$, $\tilde{C}_n$, and $\tilde{D}_n$, and also for other generalizations of the mentioned Coxeter groups, such as the complex reflection groups $G(r, p, n)$ \cite{ST} or the generalized affine classical groups, the definition of which is described in \cite{rtv}, \cite{ast}. \begin{thebibliography}{99} \bibitem{AR} R. M. Adin, Y. Roichman, The Flag Major Index and Group Actions on Polynomial Rings, \emph{Europ. J. Combinatorics} \textbf{22}, (2001), 431-446. \bibitem{alst} M. Amram, R. Lehman, R. Shwartz, M. Teicher, Classification of fundamental groups of Galois covers of surfaces of small degree degenerating to nice plane arrangements, \emph{Topology of algebraic varieties and singularities} \textbf{538}, (2010), 63–92. \bibitem{ast} M. Amram, R. Shwartz, M. Teicher, Coxeter covers of the classical Coxeter groups, \emph{International Journal of Algebra and Computation} \textbf{20}, (2010), 1041–1062. \bibitem{BS} G. Baumslag, D. Solitar, Some two-generator one-relator non-Hopfian groups, \emph{Bulletin of the American Mathematical Society}, \textbf{68} (1962), 199-201. \bibitem{BB} A. Bj\"orner, F. Brenti, Combinatorics of Coxeter groups, in: GTM, vol. 231, Springer, (2004). \bibitem{Mac} P. A. MacMahon, Combinatory Analysis I-II, \emph{Cambridge University Press, London/New-York} (1916) (Reprinted by Chelsea, New-York 1960). \bibitem{rtv} L. Rowen, M. Teicher, U. Vishne, Coxeter covers of the symmetric groups, \emph{Journal of Group Theory} \textbf{8} (2005), 139–169. \bibitem{ST} G. C. Shephard, J. A. Todd, Finite unitary reflection groups, \emph{Canadian Journal of Mathematics} \textbf{6}, (1954), 274–304. \bibitem{S1} R. Shwartz, Canonical forms for dihedral and symmetric groups, \emph{The Electronic Journal of Combinatorics} \textbf{26} (2019) P4.46. \bibitem{S} R.
Shwartz, On the Freiheitssatz in certain one-relator free products I, \emph{International Journal of Algebra and Computation} \textbf{11}, (2001), 673-706. \bibitem{SAR} R. Shwartz, R. M. Adin, and Y. Roichman, Major Indices and Perfect Bases for Complex Reflection Groups, \emph{The Electronic Journal of Combinatorics} \textbf{15} (2008) Research paper 61. \end{thebibliography} \end{document}
\documentclass[draft]{amsart} \usepackage{color} \usepackage{hyperref} \hypersetup{colorlinks=true, linkcolor = blue} \usepackage{pgf,tikz,graphicx} \usepackage{amssymb} \usepackage{enumerate} \usepackage{stmaryrd} \renewcommand{\subjclassname}{ \textup{2013} Mathematics Subject Classification} \setlength{\textheight}{220mm} \setlength{\textwidth}{155mm} \setlength{\oddsidemargin}{1.25mm} \setlength{\evensidemargin}{1.25mm} \setlength{\topmargin}{0mm} \renewcommand{\P}{\mathcal{P}} \newcommand{\D}{\mathcal{D}} \renewcommand{\O}{\mathcal{O}} \newcommand{\E}{\mathcal{E}} \newcommand{\wTilde}{\widetilde} \numberwithin{equation}{section} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \begin{document} \makeatletter \def\imod#1{\allowbreak\mkern10mu({\operator@font mod}\,\,#1)} \makeatother \author{Alexander Berkovich} \address{[A.B.] Department of Mathematics, University of Florida, 358 Little Hall, Gainesville FL 32611, USA} \email{[email protected]} \author{Ali Kemal Uncu} \address{[A.K.U.] University of Bath, Faculty of Science, Department of Computer Science, Bath, BA2\,7AY, UK} \email{[email protected]} \address{[A.K.U.] Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Science, Altenbergerstraße 69, A-4040 Linz, Austria} \email{[email protected]} \title[\scalebox{.9}{On Finite Analogs of Schmidt's Problem and Its Variants}]{On Finite Analogs of Schmidt's Problem and Its Variants} \begin{abstract} We refine Schmidt's problem and a partition identity related to 2-color partitions which we will refer to as the Uncu--Andrews--Paule theorem. We will approach the problem using Boulet--Stanley weights and a formula on Rogers--Szeg\H{o} polynomials by Berkovich--Warnaar, and present various Schmidt type theorems and their refinements.
Our new Schmidt type results include the use of even-indexed parts' sums, alternating sums of parts, and hook lengths, as well as the odd-indexed parts' sum which appears in the original Schmidt's problem. We also translate some of our Schmidt type relations to weighted partition counts with multiplicative weights in relation to Rogers--Ramanujan partitions. \end{abstract} \keywords{Partitions, q-series, Schmidt type theorems, Boulet--Stanley weights, Rogers--Szeg\H{o} polynomials, Sylvester's bijection} \thanks{Research of the second author is partly supported by EPSRC grant number EP/T015713/1 and partly by FWF grant P-34501N} \subjclass[2010]{05A15, 05A17, 05A19, 11B34, 11B75, 11P81} \date{\today} \maketitle \section{Introduction}\label{section1} A \textit{partition} $\pi$ is a non-increasing finite sequence $\pi = (\lambda_1,\lambda_2,\dots)$ of positive integers. The elements $\lambda_i$ that appear in the sequence $\pi$ are called \textit{parts} of $\pi$. The \textit{total number of parts} is denoted by $\#(\pi)$. For positive integers $i$, we call the $\lambda_{2i-1}$ odd-indexed parts, and the $\lambda_{2i}$ even-indexed parts of $\pi$. The sum of all the parts of a partition $\pi$ is called the \textit{size} of this partition and is denoted by $|\pi|$. We say $\pi$ is a partition of $n$ if its size is $n$. The empty sequence $\emptyset$ is considered the unique partition of zero. Let $\D$ be the set of partitions into distinct parts, and let $\P$ be the set of all the (ordinary) partitions. The bivariate generating functions for these sets, where the exponent of $x$ counts the number of parts and the exponent of $q$ keeps track of the size of the partitions, are known to have infinite product representations.
Explicitly, \[\sum_{\pi\in\D} x^{\#(\pi)} q^{|\pi|} = (-xq;q)_\infty\quad\text{and}\quad\sum_{\pi\in\P} x^{\#(\pi)} q^{|\pi|} = \frac{1}{(xq;q)_\infty},\] where \[(a;q)_l := \prod_{i=0}^{l-1}(1-a q^i),\quad \text{for}\ \ l\in\mathbb{Z}_{\geq0}\cup \{\infty\},\] is the standard $q$-Pochhammer symbol \cite{theoryofpartitions}. In 1999, although the original problem was submitted in 1997, F. Schmidt \cite{Schmidt} shared his curious observations on a connection between partitions into distinct parts and ordinary partitions. For a given partition $\pi = (\lambda_1,\lambda_2,\dots)$, let \[\O(\pi):= \lambda_1+\lambda_3+\lambda_5+\dots\] be the \textit{sum of odd-indexed parts}. Then, Schmidt observed that for any integer $n$ \[|\{\pi\in\D: \O(\pi)=n\}| = |\{\pi\in\P: |\pi|=n\}|.\] This result has been proven and re-discovered by many over the years. For example, P. Mork \cite{Mork} gave a solution to Schmidt's problem, the second author gave an independent proof in 2018 \cite{Uncu} without the knowledge of Schmidt's problem, and recently Andrews--Paule \cite{AndrewsPaule} gave another proof. The Andrews--Paule paper is particularly important because it drew many new researchers to this area, and many new proofs of Schmidt's theorem and its variants started appearing in a short period of time. Some examples that give novel proofs of Schmidt's problem are Alladi \cite{AlladiSchmidt}, Li--Yee \cite{LiYee}, and Bridges joint with the second author \cite{BridgesUncu}. The original way the second author discovered and proved Schmidt's problem in 2016 uses Boulet--Stanley weights \cite{Boulet}. While working on her doctorate under the supervision of R. Stanley, Boulet wrote a 4-parameter generalization of the generating functions for the number of partitions where, given a Ferrers diagram, she decorated the odd-indexed parts with alternating variables $a$ and $b$ (starting with $a$) and decorated the even-indexed parts with alternating $c$ and $d$ (starting with $c$).
In this setting, instead of $q$ keeping track of the size, we have a four variable function $\omega_\pi(a,b,c,d)$ where each variable counts the number of respective variables in the decorated Ferrers diagram. For example, the 4-decorated Ferrers diagram of the $\pi=(12,10,7,5,2)$ and the respective weight function are given next. \begin{center} \begin{tikzpicture}[line cap=round,line join=round,x=0.45cm,y=0.45cm] \clip(0.5,0) rectangle (24.5,6.5); \draw [line width=.25pt] (2,1)-- (4,1); \draw [line width=.25pt] (2,2)-- (7,2); \draw [line width=.25pt] (2,3)-- (9,3); \draw [line width=.25pt] (2,4)-- (12,4); \draw [line width=.25pt] (14,5)-- (2,5); \draw [line width=.25pt] (2,6)-- (14,6); \draw [line width=.25pt] (14,6)-- (14,5); \draw [line width=.25pt] (13,6)-- (13,5); \draw [line width=.25pt] (12,6)-- (12,4); \draw [line width=.25pt] (11,6)-- (11,4); \draw [line width=.25pt] (10,6)-- (10,4); \draw [line width=.25pt] (9,6)-- (9,3); \draw [line width=.25pt] (8,6)-- (8,3); \draw [line width=.25pt] (7,2)-- (7,6); \draw [line width=.25pt] (6,6)-- (6,2); \draw [line width=.25pt] (5,6)-- (5,2); \draw [line width=.25pt] (4,1)-- (4,6); \draw [line width=.25pt] (3,6)-- (3,1); \draw [line width=.25pt] (2,6)-- (2,1); \draw (2.5,5.5) node[anchor=center] {$a$}; \draw (3.5,5.5) node[anchor=center] {$b$}; \draw (4.5,5.5) node[anchor=center] {$a$}; \draw (5.5,5.5) node[anchor=center] {$b$}; \draw (6.5,5.5) node[anchor=center] {$a$}; \draw (7.5,5.5) node[anchor=center] {$b$}; \draw (8.5,5.5) node[anchor=center] {$a$}; \draw (9.5,5.5) node[anchor=center] {$b$}; \draw (10.5,5.5) node[anchor=center] {$a$}; \draw (11.5,5.5) node[anchor=center] {$b$}; \draw (12.5,5.5) node[anchor=center] {$a$}; \draw (13.5,5.5) node[anchor=center] {$b$}; \draw (2.5,4.5) node[anchor=center] {$c$}; \draw (3.5,4.5) node[anchor=center] {$d$}; \draw (4.5,4.5) node[anchor=center] {$c$}; \draw (5.5,4.5) node[anchor=center] {$d$}; \draw (6.5,4.5) node[anchor=center] {$c$}; \draw (7.5,4.5) node[anchor=center] 
{$d$}; \draw (8.5,4.5) node[anchor=center] {$c$}; \draw (9.5,4.5) node[anchor=center] {$d$}; \draw (10.5,4.5) node[anchor=center] {$c$}; \draw (11.5,4.5) node[anchor=center] {$d$}; \draw (2.5,3.5) node[anchor=center] {$a$}; \draw (3.5,3.5) node[anchor=center] {$b$}; \draw (4.5,3.5) node[anchor=center] {$a$}; \draw (5.5,3.5) node[anchor=center] {$b$}; \draw (6.5,3.5) node[anchor=center] {$a$}; \draw (7.5,3.5) node[anchor=center] {$b$}; \draw (8.5,3.5) node[anchor=center] {$a$}; \draw (2.5,2.5) node[anchor=center] {$c$}; \draw (3.5,2.5) node[anchor=center] {$d$}; \draw (4.5,2.5) node[anchor=center] {$c$}; \draw (5.5,2.5) node[anchor=center] {$d$}; \draw (6.5,2.5) node[anchor=center] {$c$}; \draw (2.5,1.5) node[anchor=center] {$a$}; \draw (3.5,1.5) node[anchor=center] {$b$}; \draw (19,3.3) node[anchor=center] {,$\ \ \omega_\pi(a,b,c,d) = a^{11}b^{10}c^{8}d^7$.}; \end{tikzpicture} \end{center} Boulet showed that the 4-parameter decorated generating functions for the ordinary partitions and for partitions into distinct parts also have product representations. \begin{theorem}[Boulet] \label{thm:boulet}For variables $a$, $b$, $c$, and $d$ and $Q:=abcd$, we have\begin{align} \label{eq:PSI}\Psi(a,b,c,d) &:= \sum_{\pi\in \D} \omega_\pi(a,b,c,d) = \frac{(-a,-abc;Q)_\infty}{(ab;Q)_\infty},\\ \label{eq:PHI}\Phi(a,b,c,d) &:= \sum_{\pi\in \P} \omega_\pi(a,b,c,d) = \frac{(-a,-abc;Q)_\infty}{(ab,ac,Q;Q)_\infty}. \end{align}\end{theorem} It is easy to check that, with the trivial choice $a=b=c=d=q$, $\omega_\pi(q,q,q,q)=q^{|\pi|}$, the 4-parameter generating functions become the generating functions for the number of partitions into distinct parts and the number of partitions, respectively.
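As a quick sanity check, the exponents appearing in $\omega_\pi(a,b,c,d)$ can be read off a partition directly. The following Python sketch (the function name is ours, not from the literature) recovers the weight $a^{11}b^{10}c^{8}d^{7}$ of the example partition $\pi=(12,10,7,5,2)$ above.

```python
# Exponents (e_a, e_b, e_c, e_d) of the Boulet weight omega_pi(a,b,c,d),
# read off the decorated Ferrers diagram: odd-indexed rows alternate
# a,b,a,b,... and even-indexed rows alternate c,d,c,d,...

def boulet_exponents(partition):
    ea = eb = ec = ed = 0
    for i, part in enumerate(partition):  # i = 0 is the first (odd-indexed) row
        if i % 2 == 0:                    # odd-indexed part: a,b,a,b,...
            ea += (part + 1) // 2
            eb += part // 2
        else:                             # even-indexed part: c,d,c,d,...
            ec += (part + 1) // 2
            ed += part // 2
    return ea, eb, ec, ed

# The example partition from the text:
print(boulet_exponents((12, 10, 7, 5, 2)))  # -> (11, 10, 8, 7)
```

Setting $a=b=c=d=q$ corresponds to summing the four exponents, which returns $|\pi|$, matching the remark after Theorem~\ref{thm:boulet}.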
The choice $(a,b,c,d)=(q,q,1,1)$ proves Schmidt's problem and yields a similar theorem for ordinary partitions: \begin{theorem}[Theorems 1.3 and 6.3 of \cite{Uncu}] \begin{align}\label{Schmidt}\Psi(q,q,1,1) &= \sum_{\pi\in\D} q^{\O(\pi)} = \frac{1}{(q;q)_\infty},\\\label{UAP}\Phi(q,q,1,1) &= \sum_{\pi\in\P} q^{\O(\pi)} = \frac{1}{(q;q)^2_\infty}.\end{align} \end{theorem} This was the approach of the second author \cite{Uncu}. In the same paper, he also showed that Alladi's weighted theorem on Rogers--Ramanujan type partitions \cite{AlladiWeighted} is equivalent to Schmidt's problem. Another independent proof of the rightmost equality of \eqref{UAP} recently appeared in the paper of Andrews--Paule \cite[Theorem 2]{AndrewsPaule}, where they also proved Schmidt's problem. This also created an influx of research interest in that particular identity \eqref{UAP}. Some examples of these new works proving, dissecting and refining \eqref{UAP} are \cite{AlladiSchmidt, AndrewsKeith, BridgesUncu, ChernYee, Ji, LiYee}. In some of these texts, \eqref{UAP} is referred to as the Andrews--Paule theorem; however, considering the second author's earlier discovery and proof of this result \cite[Theorem 6.3]{Uncu}, we will refer to this result as the Uncu--Andrews--Paule theorem. In this paper, we present two refinements of the Uncu--Andrews--Paule theorem. The doubly refined version of this result is as follows. Let $\P_{\leq N}$ be the set of partitions with parts $\leq N$, and let \[\gamma(\pi) :=\lambda_1-\lambda_2+\lambda_3-\lambda_4+\dots\] be the \textit{alternating sum of parts} of a given partition $\pi=(\lambda_1,\lambda_2,\dots)$. \begin{theorem}\label{thm:RefinedAlterPhi}Let $U_{N,j}(n)$ be the number of partitions $\pi\in\P_{\leq N}$ such that $\O(\pi)=n$ and $\gamma(\pi)= j$, and let $T_{N,j}(n)$ be the number of 2-color partitions (red and green) of $n$ such that exactly $j$ parts are red and the largest green part does not exceed $N-j$.
Then, \[U_{N,j}(n) = T_{N,j}(n).\] \end{theorem} We will also give various results involving the \textit{even-indexed parts' sum} statistic \[\E(\pi):= \lambda_2+\lambda_4+\lambda_6+\dots,\] where $\pi=(\lambda_1,\lambda_2,\lambda_3,\dots)$. One such relation is given below. \begin{theorem}\label{cor:Intro} The number of partitions $\pi$ into distinct parts where $\E(\pi)=n$ and $\gamma(\pi)=j$ is equal to the number of partitions of $n$ into parts $\leq j$. \end{theorem} An example of this theorem with $n=5$ and $j=4$ is given in the next table. \[\begin{array}{c|c} \E(\pi)=5\ \& \ \gamma(\pi)=4 & |\pi|=5 \ \& \ \text{parts}\leq 4\\ \hline (9,5),\ (8,5,1),\ (7,5,2) & (4,1),\ (3,2),\ (3,1,1),\\ (7,4,2,1),\ (6,5,3),\ (6,4,3,1). & (2,2,1),\ (2,1,1,1),\ (1,1,1,1,1). \end{array}\] \vspace{1mm} The organization of this paper is as follows. In Section~\ref{Sec2}, we will recall some refinements of the generating functions proven by Boulet and compare them with a result by Berkovich and Warnaar on Rogers--Szeg\H{o} polynomials to get some weighted partition identities involving the alternating sum of parts and the sum of odd-indexed parts statistics. In Section~\ref{Sec3}, we will give finite analogues of Schmidt's problem and the Uncu--Andrews--Paule theorem. Section~\ref{Sec4} is reserved for other Schmidt type implications of the results in Section~\ref{Sec2}, which involve various alternating sums that are governed by the even-indexed parts. Section~\ref{Sec5} contains the Schmidt type results that focus on adding the even-indexed parts rather than the odd-indexed parts. Finally, Section~\ref{Sec6} presents the connections between the Schmidt type statistics and weighted partition counts with multiplicative weights that were earlier studied by Alladi \cite{AlladiWeighted} and by the second author \cite{Uncu}.
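Both Schmidt's observation \eqref{Schmidt} and the Uncu--Andrews--Paule theorem \eqref{UAP} are easy to confirm by brute force for small $n$; the minimal sketch below (helper names are ours) uses the fact that $\O(\pi)=n$ forces every part to be $\leq n$, with at most $n$ parts in the distinct case and at most $2n$ parts in the ordinary case.

```python
# Brute-force check of Schmidt's problem and the Uncu--Andrews--Paule
# theorem for small n (helper names are ours, not the paper's).

def box_partitions(max_part, max_len):
    """All partitions with parts <= max_part and at most max_len parts."""
    if max_len > 0:
        for first in range(max_part, 0, -1):
            for rest in box_partitions(first, max_len - 1):
                yield (first,) + rest
    yield ()

def O(pi):  # sum of odd-indexed parts: lambda_1 + lambda_3 + ...
    return sum(pi[0::2])

def p(n):   # number of ordinary partitions of n
    return sum(1 for pi in box_partitions(n, n) if sum(pi) == n)

def check(n):
    # O(pi) = n forces parts <= n; distinct parts give at most n parts,
    # ordinary partitions give at most 2n parts.
    distinct = sum(1 for pi in box_partitions(n, n)
                   if len(set(pi)) == len(pi) and O(pi) == n)
    ordinary = sum(1 for pi in box_partitions(n, 2 * n) if O(pi) == n)
    two_color = sum(p(k) * p(n - k) for k in range(n + 1))  # [q^n] 1/(q;q)^2
    return distinct == p(n) and ordinary == two_color

print(all(check(n) for n in range(1, 7)))  # -> True
```

For instance, at $n=2$ the five ordinary partitions with $\O(\pi)=2$, namely $(2)$, $(2,1)$, $(2,2)$, $(1,1,1)$, $(1,1,1,1)$, match the five 2-color partitions of $2$.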
\section{Refinements for Boulet's generating functions and Rogers--Szeg\H{o} polynomials}\label{Sec2} In \cite{BU2}, the authors made an extensive study of Boulet's generating functions by imposing bounds on the largest part and the number of parts of the partitions. It should be noted that Ishikawa and Zeng \cite{Masao} were the first to present four-variable generating functions for distinct and ordinary partitions by imposing a single bound on the largest part of partitions. In \cite{BU2}, the authors gave a different representation of the singly bounded generating functions and also gave two doubly bounded generating functions. Later, Fu and Zeng \cite{Fu} worked on the questions and techniques discussed in \cite{BU1, BU2}, and presented doubly bounded generating functions with uniform bounds for the 4-decorated Ferrers diagrams of ordinary partitions and of partitions into distinct parts. Let $\P_{\leq N}$ and $\D_{\leq N}$ be the sets of partitions from the sets $\P$ and $\D$, respectively, with the extra bound $N$ on the largest part. Define the generating functions \begin{align} \label{PSI_N}\Psi_N(a,b,c,d) &:= \sum_{\pi\in \D_{\leq N}} \omega_\pi(a,b,c,d),\\ \label{PHI_N}\Phi_N(a,b,c,d) &:= \sum_{\pi\in \P_{\leq N}} \omega_\pi(a,b,c,d), \end{align} which are finite analogues of Boulet's generating functions for the weighted count of 4-variable decorated Ferrers diagrams. In \cite{Masao}, Ishikawa and Zeng wrote explicit formulas for \eqref{PSI_N} and \eqref{PHI_N} using Pfaffians. \begin{theorem}[Ishikawa--Zeng]\label{MasaoThm} For a non-negative integer $N$ and variables $a$, $b$, $c$, and $d$, we have \begin{align} \label{PSI2Nnu}\Psi_{2N+\nu}(a,b,c,d) &= \sum_{i=0}^N \genfrac{[}{]}{0pt}{}{N}{i}_{Q} (-a;Q)_{N-i+\nu}(-c;Q)_{i} (ab)^{i},\\ \label{PHI2Nnu}\Phi_{2N+\nu}(a,b,c,d) &=\frac{1}{(ac;Q)_{N+\nu}(Q;Q)_{N}}\sum_{i=0}^N \genfrac{[}{]}{0pt}{}{N}{i}_{Q} (-a;Q)_{N-i+\nu}(-c;Q)_{i} (ab)^{i}, \end{align} where $\nu\in\{0,1\}$ and $Q=abcd$.
\end{theorem} In Theorem~\ref{MasaoThm} and throughout the rest of the paper we use the standard definition \cite{theoryofpartitions} of the \textit{$q$-binomial coefficients}: \[{n+m\brack n}_q := \left\{\begin{array}{cl}\displaystyle \frac{(q;q)_{n+m}}{(q;q)_n(q;q)_m}, & \text{if}\ \ n,m\geq 0,\\[-1.5ex]\\ 0, & \text{otherwise}. \end{array}\right.\] In \cite{BU1}, we gave a companion to Theorem~\ref{MasaoThm}. \begin{theorem}\label{finiteBoulet} For a non-negative integer $N$, variables $a$, $b$, $c$, $d$, and $Q=abcd$, we have \begin{align} \label{finiteBouletPSI}\Psi_{2N+\nu}(a,b,c,d) &=\sum_{i=0}^N \genfrac{[}{]}{0pt}{}{N}{i}_{Q} (-a;Q)_{i+\nu}(-abc;Q)_{i} \frac{(ac;Q)_{N+\nu}}{(ac;Q)_{i+\nu}}(ab)^{N-i},\\ \label{finiteBouletPHI}\Phi_{2N+\nu}(a,b,c,d) &=\frac{1}{(Q;Q)_{N}}\sum_{i=0}^N \genfrac{[}{]}{0pt}{}{N}{i}_{Q} \frac{(-a;Q)_{i+\nu}(-abc;Q)_{i} }{(ac;Q)_{i+\nu}}(ab)^{N-i}, \end{align} where $\nu\in\{0,1\}$. \end{theorem} It should be noted that our derivation of Theorem~\ref{finiteBoulet}, unlike that of Theorem~\ref{MasaoThm}, is completely combinatorial. Another useful connection is the theorem of the first author and Warnaar \cite{WarnaarBerkovich} on the Rogers--Szeg\H{o} polynomials \[H_N(z,q):=\sum_{k=0}^N \genfrac{[}{]}{0pt}{}{N}{k}_{q}z^k.\] \begin{theorem} [Berkovich--Warnaar]\label{HermiteBerkovichTHM} Let $N$ be a non-negative integer; then the Rogers--Szeg\H{o} polynomials can be expressed as \begin{equation}\label{HermiteBerkovich} H_{2N+\nu}(zq,q^2) = \sum_{k=0}^{N}\genfrac{[}{]}{0pt}{}{N}{k}_{q^4} (-zq;q^4)_{N-k+\nu}(-q/z;q^4)_k (zq)^{2k}, \end{equation} where $\nu=0$ or $1$.
\end{theorem} We see that the right-hand sides of \eqref{PSI2Nnu} and \eqref{HermiteBerkovich} coincide for a particular choice of $(a,b,c,d)$: \begin{equation}\label{RogersSzegoToPSI}H_{N}(zq,q^2) = \Psi_{N}(zq,zq,q/z,q/z).\end{equation} In \cite{BU2}, the authors showed that \begin{equation}\label{eq:511}\Psi_{N}(zq,zq,q/z,q/z) = \sum_{\pi\in\D_{\leq N}} q^{|\pi|} z^{\gamma(\pi)},\end{equation} where $\gamma(\pi):=\lambda_1-\lambda_2+\lambda_3-\lambda_4+\dots$ is the \textit{alternating sum of the parts} of $\pi=(\lambda_1,\lambda_2,\lambda_3,\lambda_4,\dots)$. Moreover, in the same paper the authors, using \eqref{RogersSzegoToPSI}, also showed that the coefficient of the term $z^k$ in \eqref{eq:511} is $q^k{N\brack k}_{q^2}$. Next, first shifting $z$ to $zq$ and then mapping $q^2\mapsto q$ yields the following theorem. \begin{theorem}\label{thm:SchmidtTypeAlternating} \begin{align} \label{eq:Psi_alternating}\Psi_N(qz,qz,1/z,1/z) &= \sum_{k = 0}^N q^kz^k {N\brack k}_q ,\\ \label{eq:Phi_alternating}\Phi_N(qz,qz,1/z,1/z) &= \frac{\Psi_N(qz,qz,1/z,1/z)}{(q;q)_N} = \sum_{k = 0}^N \frac{q^k z^k}{(q;q)_k(q;q)_{N-k}}. \end{align} \end{theorem} One can rewrite the above theorem as follows. \begin{theorem}\label{thm:SchmidtTypeAlternatingComb} \begin{align} \label{eq:Psi_alternatingComb}\sum_{\pi\in\D_{\leq N}} q^{\O(\pi)} z^{\gamma(\pi)} &= \sum_{k = 0}^N q^k z^k{N\brack k}_q,\\ \label{eq:Phi_alternatingComb}\sum_{\pi\in\P_{\leq N}} q^{\O(\pi)} z^{\gamma(\pi)}&= \sum_{k = 0}^N \frac{q^k z^k}{(q;q)_k(q;q)_{N-k}}. \end{align} \end{theorem} \section{Another look at the refined Schmidt problem and related results}\label{Sec3} Theorem~\ref{MasaoThm} (and Theorem~\ref{finiteBoulet}) gives a sum representation for the bounded analogues of Schmidt's problem \eqref{Schmidt} and the Uncu--Andrews--Paule theorem \eqref{UAP}.
\begin{theorem} Let $N$ be a non-negative integer and $\nu =0$ or $1$. Then \begin{align} \label{eq:Old_PsiN_rep}\Psi_{2N+\nu}(q,q,1,1) &=\sum_{\pi\in\D_{\leq 2N+\nu}} q^{\O(\pi)}= \sum_{i=0}^N \genfrac{[}{]}{0pt}{}{N}{i}_{q^2} (-q;q^2)_{N-i+\nu}(-1;q^2)_{i} q^{2i},\\ \label{eq:Old_PhiN_rep}\Phi_{2N+\nu}(q,q,1,1) &=\sum_{\pi\in\P_{\leq 2N+\nu}}q^{\O(\pi)}=\frac{1}{(q;q)_{2N+\nu}}\sum_{i=0}^N \genfrac{[}{]}{0pt}{}{N}{i}_{q^2} (-q;q^2)_{N-i+\nu}(-1;q^2)_{i} q^{2i}. \end{align} \end{theorem} One can also prove that these generating functions have alternate representations. \begin{theorem}\label{thm:colors} Let $N$ be a non-negative integer. Then \begin{align} \label{eq:New_PsiN_rep}\sum_{\pi\in\D_{\leq N}} q^{\O(\pi)}&= \sum_{i=0}^{N} \genfrac{[}{]}{0pt}{}{N}{i}_{q} q^{i},\\ \label{eq:New_PhiN_rep}\sum_{\pi\in\P_{\leq N}}q^{\O(\pi)}&=\sum_{i=0}^{N} \frac{q^i}{(q;q)_i(q;q)_{N-i}}. \end{align} \end{theorem} Picking $z=1$ in Theorem~\ref{thm:SchmidtTypeAlternatingComb} gives Theorem~\ref{thm:colors}. We can also give new partition interpretations for the sums in \eqref{eq:New_PsiN_rep} and \eqref{eq:New_PhiN_rep}. This directly leads to new weighted partition identities once we compare them with the previously known interpretations of $\Psi_N(q,q,1,1)$ and $\Phi_N(q,q,1,1)$. We start with the new weighted partition identity related to Schmidt's problem. \begin{theorem}\label{thm:FinWeightedSchmidt} Let $S_N(n)$ be the number of partitions into distinct parts $\pi = (\lambda_1,\lambda_2,\dots)$ with $\O(\pi)=n$ and $\lambda_1\leq N$. Let $\Gamma_N(n)$ be the number of partitions of $n$ where the largest hook length is $\leq N$. Then, \[S_N(n) = \Gamma_N(n).\] \end{theorem} Recall that, on a Ferrers diagram, the \textit{hook length} of a box is defined as one plus the number of boxes directly to the right of and directly below the chosen box. It is then easy to see that the largest hook length is the hook length of the top-left-most box in a Ferrers diagram.
Hence, we can also define $\Gamma_N(n)$ as the number of partitions of $n$ where the quantity ``the number of parts plus the largest part minus 1" is less than or equal to $N$. We exemplify Theorem~\ref{thm:FinWeightedSchmidt} with $n=N=4$ and list the partitions counted by $S_4(4)$ and $\Gamma_4(4)$ below. \[\begin{array}{c|c} S_4(4)=5 & \Gamma_4(4)=5\\ \hline (4,3) & (4) \\ (4,2) & (3,1) \\ (4,1) & (2,2) \\ (4) & (2,1,1) \\ (3,2,1) & (1,1,1,1) \\ \end{array} \] Notice that, whereas most partitions in this example have their largest hook length equal to the upper limit 4, the partition $(2,2)$ has largest hook length $3<4$. The proof of Theorem~\ref{thm:FinWeightedSchmidt} only requires us to interpret \eqref{eq:New_PsiN_rep} as the generating function for the $\Gamma_N(n)$ numbers. To that end, we focus on the summand of the right-hand side in \eqref{eq:New_PsiN_rep}. Recall that the $q$-binomial coefficient ${N\brack i}_q$ is the generating function of partitions that fit in a box with $i$ rows and $N-i$ columns. We fit a column of height $i$ in front of that box; this column accounts for the factor $q^i$ in the summand. There are exactly $i$ boxes in the leftmost column of this Ferrers diagram and up to $N-i$ boxes in the topmost row. Hence, the largest hook length of this partition is $\leq N$. By summing over all possible $i$, we cover all possible column heights. This shows that the right-hand side of \eqref{eq:New_PsiN_rep} is the generating function of $\Gamma_N(n)$, where $q$ is keeping track of the size of the partitions. A similar weighted theorem comes from comparing the known interpretation of $\Phi_N(q,q,1,1)$ and the new interpretation of the same object coming from \eqref{eq:New_PhiN_rep}. \begin{theorem} Let $U_N(n)$ be the number of partitions $\pi = (\lambda_1,\lambda_2,\dots)$ with $\O(\pi)=n$ and $\lambda_1\leq N$. Let $T_N(n)$ be the number of 2-color partitions (red and green) of $n$ such that the number of parts in red plus the size of the largest green part does not exceed $N$.
Then, \[U_N(n) = T_N(n).\] \end{theorem} We give an example of this theorem with $n=4$ and $N=3$. We use subscripts $r$ and $g$ to indicate the colors of the parts while listing the partitions related to $T_3(4)$. \[ \begin{array}{c|c} U_3(4) = 15 & T_3(4) = 15\\ \hline \begin{array}{cc} (3,3,1,1), & (3,3,1),\\ (3,2,1,1), & (3,2,1),\\ (3,1,1,1), & (3,1,1),\\ (2,2,2,2), & (2,2,2),\\ (2,2,1,1,1,1),& (2,2,1,1,1),\\ (2,2,2,1) & (2,1,1,1,1,1),\\ (2,1,1,1,1),& (1,1,1,1,1,1,1),\\ \end{array} & \begin{array}{cc} (4_{\color{red} r} ) & (3_{\color{red} r} ,1_{\color{red} r} ) \\ (3_{\color{red} r} ,1_{\color{olive} g} ) & (3_{\color{olive} g} ,1_{\color{olive} g} ) \\ (2_{\color{red} r} ,2_{\color{olive} g} ),& (2_{\color{olive} g} ,2_{\color{olive} g} ), \\ (2_{\color{red} r} ,1_{\color{red} r} ,1_{\color{red} r} ), &(2_{\color{red} r} ,1_{\color{red} r} ,1_{\color{olive} g} ), \\ (2_{\color{red} r} ,1_{\color{olive} g} ,1_{\color{olive} g} ), &(2_{\color{olive} g} ,1_{\color{red} r} ,1_{\color{olive} g} ), \\ (2_{\color{olive} g} ,1_{\color{olive} g} ,1_{\color{olive} g} ), &(1_{\color{red} r} ,1_{\color{red} r} ,1_{\color{red} r} ,1_{\color{olive} g} )\\ (1_{\color{red} r} ,1_{\color{red} r} ,1_{\color{olive} g} ,1_{\color{olive} g} ),& (1_{\color{red} r} ,1_{\color{olive} g} ,1_{\color{olive} g} ,1_{\color{olive} g} ),\\ \end{array}\\ (1,1,1,1,1,1,1,1). & (1_{\color{olive} g} ,1_{\color{olive} g} ,1_{\color{olive} g} ,1_{\color{olive} g} ). \end{array} \] Note that the count of $T_N(n)$ is not symmetric in the colors red and green. In our example, partitions such as $(4_g)$, $(2_g,1_r,1_r)$ and $(1_r,1_r,1_r,1_r)$ are not counted. The equation \eqref{eq:Psi_alternatingComb} can also be presented as a refined Schmidt type result.
\begin{theorem}\label{thm:RefinedAlterPsi}Let $S_{N,j}(n)$ be the number of partitions $\pi\in\D_{\leq N}$ such that $\O(\pi)=n$ and $\gamma(\pi) = j$, and let $\Gamma_{N,j}(n)$ be the number of partitions into exactly $j$ parts where the largest hook length is $\leq N$. Then, \[S_{N,j}(n) = \Gamma_{N,j}(n).\] \end{theorem} The proof of this result comes from comparing and interpreting the coefficients of $z^j$ in \eqref{eq:Psi_alternatingComb}. Similarly, we can get an analogous result for ordinary partitions by comparing the $z^j$ terms in \eqref{eq:Phi_alternatingComb}, which yields Theorem~\ref{thm:RefinedAlterPhi}. Moreover, letting $N$ tend to infinity, we arrive at \cite[Corollary 2]{AndrewsKeith}. This corollary was implicit in Mork's original solution \cite{Mork} of Schmidt's problem. We give examples of Theorems \ref{thm:RefinedAlterPsi} and \ref{thm:RefinedAlterPhi} in the next table. \[ \begin{array}{cc|cc} S_{3,2}(4) = 2 & \Gamma_{3,2}(4) = 2 & U_{3,1}(4) = 6 & T_{3,1}(4) = 6\\ \hline (4,2) & (3,1) & (3,3,1) & (4_{\color{red} r}) \\ (3,2,1) & (2,2) & (3,2,1,1) & (3_{\color{red} r},1_{\color{olive} g}) \\ & & (2,2,1,1,1) & (2_{\color{red} r},1_{\color{olive} g},1_{\color{olive} g}) \\ & & (2,2,2,1) & (2_{\color{red} r},2_{\color{olive} g}) \\ & & (2,1,1,1,1,1) & (1_{\color{red} r},1_{\color{olive} g},1_{\color{olive} g},1_{\color{olive} g}) \\ & & (1,1,1,1,1,1,1) & (2_{\color{olive} g},1_{\color{red} r},1_{\color{olive} g}) \\ \end{array} \] \section{Schmidt type results}\label{Sec4} Choices of variables in Boulet's theorem, and subsequently in its bounded analogues, can give us great insight into Schmidt type partition theorems. For example, by the choice $(a,b,c,d) = (q,q,-1,-1)$ in \eqref{eq:PSI}, we directly get the following theorem.
\begin{theorem}\label{thm:distinct1mod2} \begin{align} \label{eq:distinct1mod2} \sum_{\pi\in\D} (-1)^{\E(\pi)} q^{\O(\pi)} &= (-q;q^2)_\infty,\\ \label{eq:ordinary0mod2} \sum_{\pi\in\P} (-1)^{\E(\pi)} q^{\O(\pi)} &= \frac{1}{(q^2;q^2)_\infty}. \end{align} \end{theorem} Theorem~\ref{thm:distinct1mod2} translates into the following two weighted combinatorial results. \begin{theorem} The number of partitions $\pi$ into distinct parts counted with weight $(-1)^{\E(\pi)}$ such that $\O(\pi)=n$ is equal to the number of partitions of $n$ into distinct odd parts. \end{theorem} \begin{theorem} The number of partitions $\pi$ counted with weight $(-1)^{\E(\pi)}$ such that $\O(\pi)=n$ is equal to the number of partitions of $n$ into even parts.\end{theorem} The finite analogues of Boulet's theorem can directly be used to get refinements of these results. \begin{theorem}\label{thm:refined1} Let $\D_{\leq N}$ be the set of partitions into distinct parts $\leq N$. Then \begin{align} \label{eq:distinct1mod2_N} \sum_{\pi\in\D_{\leq N}} (-1)^{\E(\pi)} q^{\O(\pi)} &= (-q;q^2)_{\lceil N/2\rceil},\\ \label{eq:ordinary0mod2_N} \sum_{\pi\in\P_{\leq N}} (-1)^{\E(\pi)} q^{\O(\pi)} &= \frac{1}{(q^2;q^2)_{\lfloor N/2\rfloor}}, \end{align} where $\lceil\cdot\rceil$ and $\lfloor\cdot\rfloor$ are the standard ceiling and floor functions. \end{theorem} With the choice $c=-1$, the sums in Theorem~\ref{MasaoThm} collapse to the $i=0$ term only, and this yields a direct proof of Theorem~\ref{thm:refined1}.
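Since \eqref{eq:distinct1mod2_N} is a polynomial identity for each fixed $N$, it can be verified symbol-free by machine. The sketch below (helper names are ours) compares coefficient dictionaries of both sides for small $N$.

```python
# Check  sum over D_{<=N} of (-1)^{E(pi)} q^{O(pi)}  =  (-q; q^2)_{ceil(N/2)}
# as polynomials in q, for small N (helper names are ours).

def distinct_partitions(max_part, bound=None):
    """All partitions into distinct parts <= max_part."""
    top = max_part if bound is None else bound - 1
    for first in range(top, 0, -1):
        for rest in distinct_partitions(max_part, first):
            yield (first,) + rest
    yield ()

def lhs(N):  # coefficient dict {exponent: coefficient}, zeros dropped
    c = {}
    for pi in distinct_partitions(N):
        o, e = sum(pi[0::2]), sum(pi[1::2])
        c[o] = c.get(o, 0) + (-1) ** e
    return {k: v for k, v in c.items() if v}

def rhs(N):  # expand (1+q)(1+q^3)...(1+q^(2*ceil(N/2)-1))
    c = {0: 1}
    for i in range((N + 1) // 2):
        step = 2 * i + 1
        new = dict(c)
        for k, v in c.items():
            new[k + step] = new.get(k + step, 0) + v
        c = new
    return c

print(all(lhs(N) == rhs(N) for N in range(9)))  # -> True
```

At $N=4$ both sides produce $1+q+q^3+q^4$, agreeing with the worked example that follows.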
For example, taking $N=4$, the explicit weighted count of the partitions related to \eqref{eq:distinct1mod2_N} is as follows: \[ \begin{array}{c} \begin{array}{cc|cc|cc} \pi\in\D_{\leq4} & (-1)^{\E(\pi)} q^{\O(\pi)} & \pi\in\D_{\leq4} & (-1)^{\E(\pi)} q^{\O(\pi)} & \pi\in\D_{\leq4} & (-1)^{\E(\pi)} q^{\O(\pi)}\\ \hline (4) & q^4 & (4,2,1) & q^5 & (2) & q^2 \\ (4,3) & -q^4 & (4,3,2,1) & q^6 & (2,1) & -q^2 \\ (4,2) & q^4 & (3) & q^3 & (1) & q \\ (4,1) & -q^4 & (3,2) & q^3 & \emptyset & 1 \\ (4,3,2) & -q^6 & (3,1) & -q^3 & & \\ (4,3,1) & -q^5 & (3,2,1) & q^4 & & \end{array}\\[-1.5ex]\\ \displaystyle \Psi_{4}(q,q,-1,-1) =\sum_{\pi\in\D_{\leq 4}} (-1)^{\E(\pi)} q^{\O(\pi)}= q^4+q^3+q+1 = (1+q)(1+q^3) = (-q;q^2)_2. \end{array} \] We can also choose $(a,b,c,d)= (q,q,-1,1)$ and get another theorem. \begin{theorem}\label{thm:distinct1mod2Neg} \begin{align} \label{eq:distinctNeg1mod2} \sum_{\pi\in\D} (-1)^{\lceil\E\rceil(\pi)} q^{\O(\pi)} &= (-q;-q^2)_\infty = (-q,q^3;q^4)_\infty,\\ \label{eq:ordinaryNeg0mod2} \sum_{\pi\in\P} (-1)^{\lceil\E\rceil(\pi)} q^{\O(\pi)} &= \frac{1}{(-q^2;-q^2)_\infty} = \frac{1}{(-q^2,q^4;q^4)_\infty}, \end{align} where $\lceil\E\rceil (\lambda_1,\lambda_2,\lambda_3,\dots) = \lceil\lambda_2/2\rceil+\lceil\lambda_4/2\rceil+\lceil\lambda_6/2\rceil+\dots$. \end{theorem} A refinement of Theorem~\ref{thm:distinct1mod2Neg} can also be found in a similar manner. \begin{theorem}\label{thm:refined2} \begin{align} \label{eq:distinctNeg1mod2_N} \sum_{\pi\in\D_{\leq N}} (-1)^{\lceil\E\rceil(\pi)} q^{\O(\pi)} &= (-q;-q^2)_{\lceil N/2\rceil},\\ \label{eq:ordinaryNeg0mod2_N} \sum_{\pi\in\P_{\leq N}} (-1)^{\lceil\E\rceil(\pi)} q^{\O(\pi)} &= \frac{1}{(-q^2;-q^2)_{\lfloor N/2\rfloor}}.
\end{align} \end{theorem} For $N=4$, \eqref{eq:distinctNeg1mod2_N} reads as follows: \[ \begin{array}{c} \begin{array}{cc|cc|cc} \pi\in\D_{\leq4} & (-1)^{\lceil\E\rceil(\pi)} q^{\O(\pi)} & \pi\in\D_{\leq4} & (-1)^{\lceil\E\rceil(\pi)} q^{\O(\pi)} & \pi\in\D_{\leq4} & (-1)^{\lceil\E\rceil(\pi)} q^{\O(\pi)}\\ \hline (4) & q^4 & (4,2,1) & -q^5 & (2) & q^2 \\ (4,3) & q^4 & (4,3,2,1) & -q^6 & (2,1) & -q^2 \\ (4,2) & -q^4 & (3) & q^3 & (1) & q \\ (4,1) & -q^4 & (3,2) & -q^3 & \emptyset & 1 \\ (4,3,2) & q^6 & (3,1) & -q^3 & & \\ (4,3,1) & q^5 & (3,2,1) & -q^4 & & \end{array}\\[-1.5ex]\\ \displaystyle \Psi_{4}(q,q,-1,1) =\sum_{\pi\in\D_{\leq 4}} (-1)^{\lceil\E\rceil(\pi)} q^{\O(\pi)}= -q^4-q^3+q+1 = (1+q)(1-q^3) = (-q;-q^2)_2. \end{array} \] A modulo 3 related group of Schmidt type results comes from the choice $(a,b,c,d)= (q,q,-1,-q)$. \begin{theorem}\label{thm:distinct1mod2NegFloor} \begin{align} \label{eq:distinctNeg1mod2Floor} \sum_{\pi\in\D} (-1)^{\E(\pi)} q^{\O(\pi)+\lfloor \E \rfloor (\pi)} &=(-q;q^3)_\infty,\\ \label{eq:ordinaryNeg0mod2Floor} \sum_{\pi\in\P} (-1)^{\E(\pi)} q^{\O(\pi)+\lfloor \E \rfloor (\pi)} &=\frac{1}{(q^3;q^3)_\infty}, \end{align} where $\lfloor\E\rfloor (\lambda_1,\lambda_2,\lambda_3,\dots) = \lfloor\lambda_2/2\rfloor+\lfloor\lambda_4/2\rfloor+\lfloor\lambda_6/2\rfloor+\dots$. \end{theorem} Once again, a refinement of Theorem~\ref{thm:distinct1mod2NegFloor} similar to the refinements above can easily be found. \begin{theorem}\label{thm:refined3} \begin{align} \label{eq:distinctNeg1mod2_fN} \sum_{\pi\in\D_{\leq N}} (-1)^{\E(\pi)} q^{\O(\pi)+\lfloor \E \rfloor (\pi)} &= (-q;q^3)_{\lceil N/2\rceil},\\ \label{eq:ordinaryNeg0mod2_fN} \sum_{\pi\in\P_{\leq N}} (-1)^{\E(\pi)} q^{\O(\pi)+\lfloor \E \rfloor (\pi)} &= \frac{1}{(q^3;q^3)_{\lfloor N/2\rfloor}}.
\end{align} \end{theorem} We add one explicit example for \eqref{eq:distinctNeg1mod2_fN} with $N=4$: \[ \begin{array}{c} \begin{array}{cc|cc|cc} \pi\in\D_{\leq4} & (-1)^{\E(\pi)} q^{\O(\pi)+\lfloor \E \rfloor (\pi)} & \pi\in\D_{\leq4} & (-1)^{\E(\pi)} q^{\O(\pi)+\lfloor \E \rfloor (\pi)} & \pi\in\D_{\leq4} & (-1)^{\E(\pi)} q^{\O(\pi)+\lfloor \E \rfloor (\pi)}\\ \hline (4) & q^4 & (4,2,1) & q^6 & (2) & q^2 \\ (4,3) & -q^5 & (4,3,2,1) & q^7 & (2,1) & -q^2 \\ (4,2) & q^5 & (3) & q^3 & (1) & q \\ (4,1) & -q^4 & (3,2) & q^4 & \emptyset & 1 \\ (4,3,2) & -q^7 & (3,1) & -q^3 & & \\ (4,3,1) & -q^6 & (3,2,1) & q^5 & & \end{array}\\[-1.5ex]\\ \displaystyle \Psi_{4}(q,q,-1,-q) =\sum_{\pi\in\D_{\leq 4}} (-1)^{\E(\pi)} q^{\O(\pi)+\lfloor \E \rfloor (\pi)}= q^5+q^4+q+1 = (1+q)(1+q^4) = (-q;q^3)_2. \end{array} \] In general, the choice $c=-1$ in Theorem~\ref{thm:boulet} always leads to cancellation of terms in the products \eqref{eq:PSI} and \eqref{eq:PHI}. \begin{theorem}\label{thm:MainSec4} Let $(a,b,c,d) = (q^r,q^t,-1,\varepsilon q^s)$ for some non-negative integers $r,\, t,\, s$ (with $r+t>0$) and $\varepsilon=\pm 1$, so that $Q=abcd=-\varepsilon q^{r+t+s}$. Then we have \begin{align} \label{eq:distinctGeneral} \sum_{\pi\in\D} (-1)^{\lceil\E\rceil(\pi)} \varepsilon^{\lfloor\E\rfloor(\pi)} q^{r\lceil\O\rceil(\pi)+t\lfloor \O \rfloor (\pi)+ s \lfloor \E \rfloor (\pi)} &=(-q^r;-\varepsilon q^{r+t+s})_\infty,\\ \label{eq:ordinaryGeneral} \sum_{\pi\in\P} (-1)^{\lceil\E\rceil(\pi)} \varepsilon^{\lfloor\E\rfloor(\pi)} q^{r\lceil\O\rceil(\pi)+t\lfloor \O \rfloor (\pi)+ s \lfloor \E \rfloor (\pi)} &=\frac{1}{(-\varepsilon q^{r+t+s};-\varepsilon q^{r+t+s})_\infty}, \end{align} where $\lceil\O\rceil(\pi)$ and $\lfloor \O \rfloor(\pi) $ are defined in a similar fashion to $\lceil\E\rceil(\pi)$ and $\lfloor \E \rfloor(\pi)$. \end{theorem} Similarly, the refinement of Theorem~\ref{thm:MainSec4} is as follows. \begin{theorem}\label{thm:MainSec4N} Let $(a,b,c,d) = (q^r,q^t,-1,\varepsilon q^s)$ for some non-negative integers $r,\, t,\, s$ (with $r+t>0$) and $\varepsilon=\pm 1$.
Then we have \begin{align} \label{eq:distinctGeneralN} \sum_{\pi\in\D_{\leq N}} (-1)^{\lceil\E\rceil(\pi)} \varepsilon^{\lfloor\E\rfloor(\pi)} q^{r\lceil\O\rceil(\pi)+t\lfloor \O \rfloor (\pi)+ s \lfloor \E \rfloor (\pi)} &=(-q^r;-\varepsilon q^{r+t+s})_{\lceil N/2\rceil},\\ \label{eq:ordinaryGeneralN} \sum_{\pi\in\P_{\leq N}} (-1)^{\lceil\E\rceil(\pi)} \varepsilon^{\lfloor\E\rfloor(\pi)} q^{r\lceil\O\rceil(\pi)+t\lfloor \O \rfloor (\pi)+ s \lfloor \E \rfloor (\pi)} &=\frac{1}{(-\varepsilon q^{r+t+s};-\varepsilon q^{r+t+s})_{\lfloor N/2\rfloor}}. \end{align} \end{theorem} Theorems~\ref{thm:distinct1mod2}, \ref{thm:distinct1mod2Neg} and \ref{thm:distinct1mod2NegFloor} are special cases of Theorem~\ref{thm:MainSec4}. Similarly, Theorems~\ref{thm:refined1}, \ref{thm:refined2} and \ref{thm:refined3} are special cases of Theorem~\ref{thm:MainSec4N}. \section{Partition identities for uncounted odd-indexed parts}\label{Sec5} In Sections~\ref{Sec3} and \ref{Sec4}, we made sure that our substitutions into the Boulet weights were never $a=b=1$ or $a=b=-1$. These choices clearly introduce a pole in the product representations of the generating functions \eqref{eq:PSI} and \eqref{eq:PHI}. However, these substitutions can be entertained in the finite analogues of these generating functions. For example, substituting $a=b=-1$ and $c=d=q$ in \eqref{PSI2Nnu} and \eqref{PHI2Nnu} yields the two equations presented in the following theorem, respectively. \begin{theorem} For a non-negative integer $N$, \begin{align} \label{eq:OddsMinus1Psi}\sum_{\pi\in\D_{\leq N}}(-1)^{\O(\pi)} q^{\E(\pi)} &= \left\{ \begin{array}{cl} (-q;q^2)_{N/2} &\text{if }N\text{ is even},\\[-1.5ex]\\ 0 &\text{if }N\text{ is odd}. \end{array}\right.\\ \label{eq:OddsMinus1Phi}\sum_{\pi\in\P_{\leq N}} (-1)^{\O(\pi)} q^{\E(\pi)} &= \left\{ \begin{array}{cl}\displaystyle \frac{1}{(q^2;q^2)_{N/2}} &\text{if }N\text{ is even},\\ 0 &\text{if }N\text{ is odd}. \end{array}\right.
\end{align} \end{theorem} We list partitions and their respective weights below to demonstrate \eqref{eq:OddsMinus1Psi} with $N=4$ and $N=3$. \[ \begin{array}{c} \begin{array}{cc|cc|cc} \pi\in\D_{\leq4} & (-1)^{\O(\pi)} q^{\E(\pi)} & \pi\in\D_{\leq4} & (-1)^{\O(\pi)} q^{\E(\pi)} & \pi\in\D_{\leq4} & (-1)^{\O(\pi)} q^{\E(\pi)}\\ \hline (4) & 1 & (4,2,1) & -q^2 & (2) & 1 \\ (4,3) & q^3 & (4,3,2,1) & q^4 & (2,1) & q \\ (4,2) & q^2 & (3) & -1 & (1) & -1 \\ (4,1) & q & (3,2) & -q^2 & \emptyset & 1 \\ (4,3,2) & q^3 & (3,1) & -q & & \\ (4,3,1) & -q^3 & (3,2,1) & q^2 & & \end{array}\\[-1.5ex]\\ \displaystyle \Psi_{4}(-1,-1,q,q) =\sum_{\pi\in\D_{\leq 4}} (-1)^{\O(\pi)} q^{\E(\pi)}= q^4+q^3+q+1 = (1+q)(1+q^3) = (-q;q^2)_2.\\[-1.5ex]\\ \Psi_{3}(-1,-1,q,q) =\sum_{\pi\in\D_{\leq 3}} (-1)^{\O(\pi)} q^{\E(\pi)}=0. \end{array} \] We let $z=1/q$ and then map $q^2\mapsto q$ in \eqref{RogersSzegoToPSI} to get the next theorem. \begin{theorem}\label{thm:Odds1} \begin{align} \label{eq:Odds1Psi}\sum_{\pi\in\D_{\leq N}} q^{\E(\pi)} &= \sum_{i=0}^N {N\brack i }_q,\\ \label{eq:Odds1Phi}\sum_{\pi\in\P_{\leq N}} q^{\E(\pi)} &= \sum_{i=0}^N \frac{1}{(q;q)_i(q;q)_{N-i}}. \end{align} \end{theorem} We demonstrate this theorem with $N=4$ below. \[ \begin{array}{cc|cc|cc} \pi\in\D_{\leq4} & q^{\E(\pi)} & \pi\in\D_{\leq4} & q^{\E(\pi)} & \pi\in\D_{\leq4} & q^{\E(\pi)}\\ \hline (4) & 1 & (4,2,1) & q^2 & (2) & 1 \\ (4,3) & q^3 & (4,3,2,1) & q^4 & (2,1) & q \\ (4,2) & q^2 & (3) & 1 & (1) & 1 \\ (4,1) & q & (3,2) & q^2 & \emptyset & 1 \\ (4,3,2) & q^3 & (3,1) & q & & \\ (4,3,1) & q^3 & (3,2,1) & q^2 & & \end{array} \] \begin{align*} \Psi_{4}(1,1,q,q) &=\sum_{\pi\in\D_{\leq 4}} q^{\E(\pi)}= 5+3q+4q^2+3q^3+q^4\\ &= 1+ (1+q+q^2+q^3) + (1+q+2q^2+q^3+q^4) + (1+q+q^2+q^3) +1 \\ &= {4\brack 0}_q + {4\brack 1}_q + {4\brack 2}_q+ {4\brack 3}_q +{4\brack 4}_q. \end{align*} This result can be refined if we keep track of the variable $z$ in \eqref{RogersSzegoToPSI}. 
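The identity \eqref{eq:Odds1Psi} is also easy to check mechanically for small $N$. The Python sketch below (ours, purely illustrative; the function names are our own) enumerates partitions into distinct parts $\leq N$, records the polynomial $\sum q^{\E(\pi)}$ as a dictionary of coefficients, and compares it with the sum of Gaussian binomial coefficients, computed via the $q$-Pascal recurrence ${n\brack k}_q = {n-1\brack k-1}_q + q^k\,{n-1\brack k}_q$.

```python
from itertools import combinations

def even_indexed_sum(parts):
    # E(pi): sum of the even-indexed parts lambda_2 + lambda_4 + ...
    # (parts listed in decreasing order, 1-based indexing)
    return sum(parts[i] for i in range(1, len(parts), 2))

def lhs(N):
    # coefficients of sum_{pi in D_{<=N}} q^{E(pi)} as {exponent: coefficient}
    coeffs = {}
    for r in range(N + 1):
        for parts in combinations(range(N, 0, -1), r):
            e = even_indexed_sum(parts)
            coeffs[e] = coeffs.get(e, 0) + 1
    return coeffs

def q_binom(n, k):
    # Gaussian binomial [n choose k]_q via the q-Pascal recurrence
    if k < 0 or k > n:
        return {}
    if k == 0:
        return {0: 1}
    out = dict(q_binom(n - 1, k - 1))
    for p, c in q_binom(n - 1, k).items():  # add q^k * [n-1 choose k]_q
        out[p + k] = out.get(p + k, 0) + c
    return out

def rhs(N):
    # coefficients of sum_{i=0}^{N} [N choose i]_q
    coeffs = {}
    for i in range(N + 1):
        for p, c in q_binom(N, i).items():
            coeffs[p] = coeffs.get(p, 0) + c
    return coeffs
```

For $N=4$, both `lhs(4)` and `rhs(4)` return `{0: 5, 1: 3, 2: 4, 3: 3, 4: 1}`, i.e., $5+3q+4q^2+3q^3+q^4$, matching the computation displayed above.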
Let $z\mapsto z/q$ and $q^2\mapsto q$ in this order in \eqref{RogersSzegoToPSI} to see two-variable generalizations of \eqref{eq:Odds1Psi} and \eqref{eq:Odds1Phi}. Then, by extracting the coefficient of $z^j$, we get the following theorem. \begin{theorem}\label{thm:RefinedOdds1} For a partition $\pi = (\lambda_1,\lambda_2,\dots)$, let $\gamma(\pi) = \lambda_1-\lambda_2+\lambda_3-\lambda_4+\dots$ be the \textit{alternating sum of the parts of }$\pi$. Then, \begin{align} \label{eq:RefOdds1Psi}\sum_{\substack{\pi\in\D_{\leq N}\\\gamma(\pi) = j}} q^{\E(\pi)} &= {N\brack j }_q,\\ \label{eq:RefOdds1Phi}\sum_{\substack{\pi\in\P_{\leq N}\\\gamma(\pi) = j}}q^{\E(\pi)} &= \frac{1}{(q;q)_j(q;q)_{N-j}}. \end{align} \end{theorem} We also note a direct combinatorial proof of \eqref{eq:RefOdds1Psi} using Sylvester's bijection \cite{Bressoud}. We know that ${N\brack j}_q$ is the generating function for partitions into at most $j$ parts, each part being $\leq N-j$. Let $\pi$ be such a partition into $\leq j$ parts with largest part $\leq N-j$. Next, we construct a partition $\pi_o$ into $j$ odd parts by first putting $j$ boxes in a column, then placing the Ferrers diagram of the partition $\pi$ on the right side of the $j$-box column, and its reflection \reflectbox{$\pi$} on the left of the column. When it is read line-by-line, this is a partition into odd parts, and $|\pi_o| = 2|\pi| +j$. This construction from $\pi$ to $\pi_o$ is bijective and can be reversed easily. Now we apply Sylvester's bijection to $\pi_o$. This takes $\pi_o$ to a partition into distinct parts $\pi_d$. Clearly, the composition of the bijections $\pi\mapsto\pi_o\mapsto\pi_d$ is a bijection. The largest part of $\pi_d$ is the sum of the main column length (exactly $j$) and the size of the largest part of $\pi$ (which is $\leq N-j$), hence $\leq N$. It is clear that $\gamma(\pi_d) = j$.
Finally, $\E(\pi_d)$ is equal to the number of boxes in the Ferrers diagram of the reflected copy in the Ferrers diagram, which is exactly $|\pi|$. One example of this construction is given in Figure~\ref{figure_ferrers}. \definecolor{qqwuqq}{rgb}{0,0.39215686274509803,0} \definecolor{ccqqqq}{rgb}{0.8,0,0} \begin{center} \begin{figure}[h] \begin{tikzpicture}[line cap=round,line join=round,x=0.6cm,y=0.6cm] \clip(0,0.8) rectangle (15.5,10); \draw [line width=2pt] (1,9)-- (6,9); \draw [line width=2pt] (1,8)-- (6,8); \draw [line width=2pt] (1,7)-- (6,7); \draw [line width=2pt] (3,6)-- (6,6); \draw [line width=2pt] (4,5)-- (6,5); \draw [line width=2pt] (4,4)-- (6,4); \draw [line width=2pt] (5,3)-- (6,3); \draw [line width=2pt] (9,3)-- (10,3); \draw [line width=2pt] (9,9)-- (14,9); \draw [line width=2pt] (9,8)-- (14,8); \draw [line width=2pt] (14,7)-- (9,7); \draw [line width=2pt] (9,6)-- (12,6); \draw [line width=2pt] (12,6)-- (12,9); \draw [line width=2pt] (9,5)-- (11,5); \draw [line width=2pt] (11,4)-- (9,4); \draw [line width=2pt] (8,1)-- (8,9); \draw [line width=2pt] (7,9)-- (8,9); \draw [line width=2pt] (7,9)-- (7,1); \draw [line width=2pt] (7,1)-- (8,1); \draw [line width=2pt] (6,3)-- (6,9); \draw [line width=2pt] (5,9)-- (5,3); \draw [line width=2pt] (4,4)-- (4,9); \draw [line width=2pt] (3,9)-- (3,6); \draw [line width=2pt] (2,7)-- (2,9); \draw [line width=2pt] (1,9)-- (1,7); \draw [line width=2pt] (9,9)-- (9,3); \draw [line width=2pt] (7,2)-- (8,2); \draw [line width=2pt] (8,3)-- (7,3); \draw [line width=2pt] (7,4)-- (8,4); \draw [line width=2pt] (8,5)-- (7,5); \draw [line width=2pt] (7,6)-- (8,6); \draw [line width=2pt] (8,7)-- (7,7); \draw [line width=2pt] (7,8)-- (8,8); \draw [line width=2pt] (10,9)-- (10,3); \draw [line width=2pt] (11,4)-- (11,9); \draw [line width=2pt] (13,7)-- (13,9); \draw [line width=2pt] (14,9)-- (14,7); \draw [line width=2pt,color=ccqqqq] (7.5,1.5)-- (7.5,8.5); \draw [line width=2pt,dash pattern=on 2pt off 4pt,color=ccqqqq] 
(7.5,8.5)-- (9.5,8.5); \draw [line width=2pt,color=ccqqqq] (9.5,8.5)-- (13.5,8.5); \draw [line width=2pt,color=ccqqqq] (9.5,3.5)-- (9.5,7.5); \draw [line width=2pt,color=ccqqqq] (9.5,7.5)-- (13.5,7.5); \draw [line width=2pt,color=ccqqqq] (10.5,4.5)-- (10.5,6.5); \draw [line width=2pt,color=ccqqqq] (10.5,6.5)-- (11.5,6.5); \draw [line width=2pt,color=qqwuqq] (5.5,3.5)-- (5.5,8.5); \draw [line width=2pt,color=qqwuqq] (5.5,8.5)-- (1.5,8.5); \draw [line width=2pt,color=qqwuqq] (4.5,4.5)-- (4.5,7.5); \draw [line width=2pt,color=qqwuqq] (4.5,7.5)-- (1.5,7.5); \begin{scriptsize} \draw [fill=ccqqqq] (7.5,1.5) circle (2.5pt); \draw [fill=ccqqqq] (7.5,8.5) circle (2.5pt); \draw [fill=ccqqqq] (13.5,8.5) circle (2.5pt); \draw [fill=ccqqqq] (9.5,8.5) circle (2.5pt); \draw [fill=ccqqqq] (9.5,3.5) circle (2.5pt); \draw [fill=ccqqqq] (9.5,7.5) circle (2.5pt); \draw [fill=ccqqqq] (13.5,7.5) circle (2.5pt); \draw [fill=ccqqqq] (10.5,4.5) circle (2.5pt); \draw [fill=ccqqqq] (10.5,6.5) circle (2.5pt); \draw [fill=ccqqqq] (11.5,6.5) circle (2.5pt); \draw [fill=qqwuqq] (5.5,3.5) circle (2.5pt); \draw [fill=qqwuqq] (5.5,8.5) circle (2.5pt); \draw [fill=qqwuqq] (1.5,8.5) circle (2.5pt); \draw [fill=qqwuqq] (1.5,7.5) circle (2.5pt); \draw [fill=qqwuqq] (4.5,7.5) circle (2.5pt); \draw [fill=qqwuqq] (4.5,4.5) circle (2.5pt); \draw [fill=qqwuqq] (3.5,6.5) circle (2.5pt); \draw (3.5,9.5) node[anchor=center] {\Large \reflectbox{$\pi$}}; \draw (11.5,9.5) node[anchor=center] {\Large $\pi$}; \end{scriptsize} \end{tikzpicture} \caption{Example of the combinatorial proof of \eqref{eq:RefOdds1Psi} when $N=14$, $j=8$, and $\pi=(5,5,3,2,2,1)$. Then the partition $\pi_o$ is $(11,11,7,5,5,3,1,1)$ (read row by row). The partition we get after Sylvester's bijection is $\pi_d = (13,10,9,7,4,1)$. The red lines (right-bending lines) are the odd-indexed parts of $\pi_d$ and the green lines (left-bending lines) are the even-indexed parts of $\pi_d$.
Only the green lines are counted under the statistic $\E(\pi_d)$.}\label{figure_ferrers} \end{figure} \end{center} Not only is Theorem~\ref{thm:RefinedOdds1} elegant in itself, it also has elementary combinatorial corollaries as $N$ tends to infinity. \begin{corollary}\label{cor:Limit} \begin{align} \label{eq:RefOdds1PsiCor}\sum_{\substack{\pi\in\D\\\gamma(\pi) = j}} q^{\E(\pi)} &= \frac{1}{(q;q)_j},\\ \label{eq:RefOdds1PhiCor}\sum_{\substack{\pi\in\P\\\gamma(\pi) = j}}q^{\E(\pi)} &= \frac{1}{(q;q)_j(q;q)_\infty}. \end{align} \end{corollary} A combinatorial interpretation of \eqref{eq:RefOdds1PsiCor} is Theorem~\ref{cor:Intro}, and in the same spirit a combinatorial interpretation of \eqref{eq:RefOdds1PhiCor} is the theorem below. \begin{theorem} The number of partitions $\pi$ where $\E(\pi)=n$ and $\gamma(\pi)=j$ is equal to the number of partitions of $n$ where parts come in two colors, red and green, such that red parts are $\leq j$. \end{theorem} \section{Identities for weighted Rogers--Ramanujan partitions}\label{Sec6} In \cite{Uncu}, the second author demonstrated how some of these theorems can be turned into weighted identities using multiplicative weights as well. In that paper it was shown that Schmidt's problem is equivalent to Alladi's weighted Rogers--Ramanujan count (see \cite[(5.2)]{Uncu}). \begin{theorem}\label{Uncu1} Let $\#(\pi)$ be the number of parts in $\pi$ and let $\mathcal{R}\mathcal{R}$ be the set of partitions with gaps between parts $\geq 2$. Then \begin{equation} \sum_{\pi\in\D}q^{\O(\pi)} = \sum_{\pi\in \mathcal{R}\mathcal{R}} \omega(\pi) q^{|\pi|}, \end{equation} where \begin{equation}\label{eq:omega}\omega(\pi) := \lambda_{\#(\pi)}\cdot\prod_{i=1}^{\#(\pi)-1} (\lambda_{i}-\lambda_{i+1}-1).\end{equation} \end{theorem} Recently, the same observation was made by Alladi \cite{AlladiSchmidt}. It is clear that \eqref{eq:New_PsiN_rep} translates to the refinement of Theorem~\ref{Uncu1} with an extra bound on the largest part of partitions.
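This bounded refinement can be checked by direct enumeration. The Python sketch below (ours, illustrative only; we use the standard convention $\omega(\emptyset)=1$) compares $\sum_{\pi\in\D_{\leq N}} q^{\O(\pi)}$ against the $\omega$-weighted sum over partitions with parts $\leq N$ and gaps between parts $\geq 2$.

```python
from itertools import combinations

def odd_indexed_sum(parts):
    # O(pi): sum of the odd-indexed parts lambda_1 + lambda_3 + ...
    # (parts listed in decreasing order, 1-based indexing)
    return sum(parts[i] for i in range(0, len(parts), 2))

def distinct_side(N):
    # coefficients of sum_{pi in D_{<=N}} q^{O(pi)} as {exponent: coefficient}
    coeffs = {}
    for r in range(N + 1):
        for parts in combinations(range(N, 0, -1), r):
            o = odd_indexed_sum(parts)
            coeffs[o] = coeffs.get(o, 0) + 1
    return coeffs

def rr_partitions(max_part):
    # partitions with parts <= max_part and gaps between parts >= 2
    yield ()
    for first in range(1, max_part + 1):
        for rest in rr_partitions(first - 2):
            yield (first,) + rest

def omega(parts):
    # the weight of the theorem above; omega of the empty partition taken as 1
    if not parts:
        return 1
    w = parts[-1]  # lambda_{#(pi)}
    for i in range(len(parts) - 1):
        w *= parts[i] - parts[i + 1] - 1
    return w

def weighted_side(N):
    # coefficients of sum_{pi in RR_{<=N}} omega(pi) q^{|pi|}
    coeffs = {}
    for parts in rr_partitions(N):
        n = sum(parts)
        coeffs[n] = coeffs.get(n, 0) + omega(parts)
    return coeffs
```

For instance, `distinct_side(4) == weighted_side(4)` holds, both sides giving the coefficients of $1+q+2q^2+3q^3+5q^4+2q^5+2q^6$.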
\begin{theorem} \label{New_weighted_sum} Let $\mathcal{R}\mathcal{R}_{\leq N}$ be the set of partitions into parts $\leq N$ with gaps between parts $\geq 2$. Then \begin{equation}\sum_{\pi\in\D_{\leq N}}q^{\O(\pi)} = \sum_{\pi\in \mathcal{R}\mathcal{R}_{\leq N}} \omega(\pi) q^{|\pi|}.\end{equation} \end{theorem} Similarly, we can also interpret \eqref{eq:Odds1Psi} with a slight modification of the weight \eqref{eq:omega}. \begin{theorem}\label{Uncu2} \begin{equation} \sum_{\pi\in\D_{\leq N}}q^{\E(\pi)} = \sum_{\pi\in \mathcal{R}\mathcal{R}_{\leq N-1}}\hat{\omega}(\pi) q^{|\pi|},\end{equation} where \begin{equation}\label{eq:omegaHat}\hat{\omega}(\pi) := (N-\lambda_1)\cdot\lambda_{\#(\pi)}\cdot\prod_{i=1}^{\#(\pi)-1} (\lambda_{i}-\lambda_{i+1}-1)\ \ \text{and}\ \ \hat{\omega}(\emptyset)=N+1.\end{equation} \end{theorem} There are $N+1$ partitions $\pi$ in $\D_{\leq N}$ ($\emptyset$, and $(k)$ for $1\leq k\leq N$) that yield $\E(\pi)=0$. This is the reasoning behind the special definition of $\hat{\omega}(\emptyset)$ in \eqref{eq:omegaHat}. An example of the above theorem with $N=4$ is given below. The sum of all the second columns in the first three blocks is equal to the sum of the weights in the last column. \[ \begin{array}{cc|cc|cc||cc} \pi\in\D_{\leq4} & q^{\E(\pi)} & \pi\in\D_{\leq4} & q^{\E(\pi)} & \pi\in\D_{\leq4} & q^{\E(\pi)} &\pi\in\mathcal{R}\mathcal{R}_{\leq 3} & \hat{\omega}(\pi)q^{|\pi|}\\ \hline (4) & 1 & (4,2,1) & q^2 & (2) & 1 & (3,1) & q^4 \\ (4,3) & q^3 & (4,3,2,1) & q^4 & (2,1) & q & (3) & 3q^3\\ (4,2) & q^2 & (3) & 1 & (1) & 1 & (2) & 4q^2 \\ (4,1) & q & (3,2) & q^2 & \emptyset & 1 & (1) & 3q \\ (4,3,2) & q^3 & (3,1) & q & & & \emptyset & 5 \\ (4,3,1) & q^3 & (3,2,1) & q^2 & & & \end{array} \] \section{Acknowledgment} We would like to thank George E. Andrews for his kind interest and encouragement. The second author thanks EPSRC and FWF for partially supporting his research through grants EP/T015713/1 and P-34501N, respectively.
\begin{thebibliography}{99} \bibitem{AlladiWeighted} K. Alladi, \textit{Partition identities involving gaps and weights}, Trans. Amer. Math. Soc. \textbf{349} (1997), no. 12, 5001-5019. \bibitem{AlladiSchmidt} K. Alladi, \textit{Schmidt-type theorems via weighted partition identities}, preprint. \bibitem{theoryofpartitions}G. E. Andrews, \textit{The theory of partitions}, Cambridge Mathematical Library, Cambridge University Press, Cambridge, 1998. Reprint of the 1976 original. MR1634067 (99c:11126). \bibitem{AndrewsKeith} G. E. Andrews, and W. J. Keith, \textit{Schmidt-type theorems for partitions with uncounted parts}, preprint \href{https://arxiv.org/abs/2203.05202}{arXiv:2203.05202}. \bibitem{AndrewsPaule} G. E. Andrews, and P. Paule, \textit{MacMahon’s partition analysis XIII: Schmidt type partitions and modular forms}, J. Number Theory, (2021), \href{https://doi.org/10.1016/j.jnt.2021.09.008}{https://doi.org/10.1016/j.jnt.2021.09.008}. \bibitem{BU1} A. Berkovich, and A. K. Uncu, \textit{A new Companion to Capparelli's Identities}, Adv. in Appl. Math. \textbf{71} (2015), 125-137. \bibitem{BU2} A. Berkovich, and A. K. Uncu, \textit{On partitions with fixed number of even-indexed and odd-indexed odd parts}, J. Number Theory \textbf{167} (2016), 7-30. \bibitem{WarnaarBerkovich} A. Berkovich, and S. O. Warnaar, \textit{Positivity preserving transformations for q-binomial coefficients}, Trans. Amer. Math. Soc. \textbf{357} (2005), no. 6, 2291-2351. \bibitem{Bressoud} D. M. Bressoud, \textit{Proofs and Confirmations: The Story of the Alternating-Sign Matrix Conjecture}. Cambridge University Press, 1999. \bibitem{BridgesUncu} W. Bridges, and A. K. Uncu, \textit{Weighted cylindric partitions}, preprint \href{https://arxiv.org/abs/2201.03047}{arXiv:2201.03047}. \bibitem{Boulet} C. E. Boulet, \textit{A four-parameter partition identity}, Ramanujan J. \textbf{12} (2006), no. 3, 315-320. \bibitem{ChernYee} S. Chern and A. J. 
Yee, \textit{Diagonal Hooks and a Schmidt-Type Partition Identity}, Electron. J. Combin. \textbf{29} (2022), no. 2, $\#$P2.10. \bibitem{Fu} S. Fu, and J. Zeng, \textit{A unifying combinatorial approach to refined little Göllnitz and Capparelli's companion identities}, Adv. in Appl. Math. \textbf{98} (2018), 127-154. \bibitem{Masao} M. Ishikawa, and J. Zeng, \textit{The Andrews-Stanley partition function and Al-Salam-Chihara polynomials}, Discrete Math. \textbf{309} (2009), no. 1, 151-175. \bibitem{Ji} K. Q. Ji, \textit{A Combinatorial Proof of a Schmidt Type Theorem of Andrews and Paule}, Electron. J. Combin. \textbf{29} (2022), no. 1, $\#$P1.24. \bibitem{LiYee} R. Li, and A. J. Yee, \textit{Schmidt type partitions}, preprint \href{https://arxiv.org/abs/2204.02535}{arXiv:2204.02535}. \bibitem{Mork} P. Mork, \textit{Interrupted Partitions - Solution to Problem 10629}, Amer. Math. Monthly \textbf{107} (2000), no. 1, p. 87. \bibitem{Schmidt} F. Schmidt, \textit{Interrupted Partitions - Problem 10629}, Amer. Math. Monthly \textbf{104} (1999), pp. 87-88. \bibitem{Uncu} A. K. Uncu, \textit{Weighted Rogers-Ramanujan Partitions and Dyson Crank}, Ramanujan J. \textbf{46} (2018), no. 2, 579-591. \end{thebibliography} \end{document}
2205.00493v1
http://arxiv.org/abs/2205.00493v1
Gleason's theorem for composite systems
\documentclass[aps,pra,preprint,superscriptaddress,longbibliography,nofootinbib]{revtex4-1} \usepackage[utf8]{inputenc} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{mathtools} \usepackage{enumerate} \usepackage[shortlabels]{enumitem} \usepackage{array} \usepackage{bm} \usepackage{graphicx} \usepackage{nicefrac} \usepackage{booktabs} \usepackage[all]{xy} \usepackage{hyperref} \usepackage{subcaption} \usepackage{bbm} \usepackage{tikz} \usetikzlibrary{arrows} \usetikzlibrary{positioning} \usepackage{cancel} \usepackage{xcolor} \usepackage[font=small,labelfont=bf]{caption} \usepackage{MnSymbol} \usepackage{multirow} \renewcommand\thesection{\Roman{section}} \newcommand{\mf}[1]{\textcolor{cyan}{#1}} \newcommand\mc[1]{\mathcal{#1}} \newcommand\cH{\mc{H}} \newcommand\cK{\mc{K}} \newcommand\BH{\mc{B}(\cH)} \newcommand\VH{\mc{V}(\cH)} \newcommand\PH{\mc{P}(\cH)} \newcommand\PV{\mc{P}(V)} \newcommand\cA{\mc{A}} \newcommand\cB{\mc{B}} \newcommand\cN{\mc{N}} \newcommand\cM{\mc{M}} \newcommand\cJ{\mc{J}} \newcommand\SN{\mc{S}(\cN)} \newcommand\PN{\mc{P}(\cN)} \newcommand\PM{\mc{P}(\cM)} \newcommand\VN{\mc{V}(\mc{N})} \newcommand\VM{\mc{V}(\mc{M})} \newcommand\JN{\mc{J}(\mc{N})} \newcommand\JNsa{\mc{J}(\mc{N})_\mathrm{sa}} \newcommand\JM{\mc{J}(\mc{M})} \newcommand\Sig{\underline{\Sigma}} \newcommand\PPi{\underline{\Pi}} \newcommand\ra{\rightarrow} \newcommand\mt{\mapsto} \newcommand\lra{\longrightarrow} \newcommand\lmt{\longmapsto} \newcommand\Lra{\Leftrightarrow} \newcommand\hra{\hookrightarrow} \newcommand\N{\mathbb{N}} \newcommand\R{\mathbb{R}} \newcommand\C{\mathbb{C}} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{definition}{Definition} \begin{document} \title{Gleason's theorem for composite systems} \author{Markus Frembs} \email{[email protected]} \affiliation{Centre for Quantum Dynamics, Griffith University,\\ Yugambeh Country, Gold Coast, QLD 4222, Australia} \vspace{-0.5cm} \author{Andreas 
D\"oring} \email{[email protected]} \begin{abstract} Gleason's theorem \cite{Gleason1975} is an important result in the foundations of quantum mechanics, where it justifies the Born rule as a mathematical consequence of the quantum formalism. Formally, it presents a key insight into the projective geometry of Hilbert spaces, showing that finitely additive measures on the projection lattice $\PH$ extend to positive linear functionals on the algebra of bounded operators $\BH$. Over many years, and by the effort of various authors, the theorem has been broadened in its scope from type I to arbitrary von Neumann algebras (without type $\text{I}_2$ factors). Here, we prove a generalisation of Gleason's theorem to composite systems. To this end, we strengthen the original result in two ways: first, we extend its scope to dilations in the sense of Naimark and Stinespring \cite{Naimark1943,Stinespring1955} and second, we require consistency with respect to dynamical correspondences on the respective (local) algebras in the composition \cite{AlfsenShultz1998}. We show that neither of these conditions changes the result in the single system case, yet both are necessary to obtain a generalisation to bipartite systems. \end{abstract} \maketitle \section{Gleason's theorem} Gleason's theorem is a landmark result in quantum theory. It justifies the Born rule, which originally had the status of an axiom, as a mathematical fact of the projective geometry of Hilbert spaces. \begin{theorem}\label{thm: original Gleason} \textbf{\emph{(Gleason \cite{Gleason1975})}} Let $\cH$ be a separable Hilbert space, $\mathrm{dim}(\cH) \geq 3$. Then every countably additive probability measure $\mu: \mc{P}(\mc{H}) \rightarrow [0,1]$ over the projections on $\mc{H}$ is of the form $\mu(p) = \mathrm{tr}[\rho_\mu p]$ for all $p \in \mc{P}(\mc{H})$, with $\rho_\mu: \mc{H} \rightarrow \mc{H}$, $\rho_\mu \geq 0$, $\mathrm{tr}[\rho_\mu] = 1$ a density operator. 
\end{theorem} Here, a \emph{countably additive probability measure} is a map $\mu: \mc{P}(\mc{H}) \rightarrow [0,1]$ with $\mu(1) = 1$ and such that $\mu(\sum_{i=1}^\infty p_i) = \sum_{i=1}^\infty \mu(p_i)$ for all $p_i \in \PH$ such that $p_ip_j=0$ whenever $i\neq j$. Note that when $\mc{N} = M_2(\mathbb{C})$, there exist measures that fail to extend to linear functionals. While the original argument was for type I factors only, Thm.~\ref{thm: original Gleason} was later extended to type II and type III von Neumann algebras in \cite{Christensen1982,Yeadon1983,Yeadon1984} (see also \cite{Maeda1989}). In this setting, $\mu$ is further called \emph{completely additive} if $\mu(\sum_{i \in I} p_i) = \sum_{i \in I} \mu(p_i)$ for every family of pairwise orthogonal projections $(p_i)_{i \in I}$, $p_i \in \PN$.\footnote{For $\cN = \BH$ with $\cH$ separable, complete additivity reduces to countable additivity.} Recall that a state $\sigma \in \SN$ is a positive, normalised linear functional; it is \emph{normal} if $\sigma(\sum_{i \in I} p_i) = \sum_{i \in I} \sigma(p_i)$ for all families of pairwise orthogonal projections $(p_i)_{i \in I}$, $p_i \in \PN$ (Thm. 7.1.12 in \cite{KadisonRingroseII}). Clearly, a (normal) state $\sigma$ defines a finitely (completely) additive probability measure on $\mc{P}(\cN)$; conversely: \begin{theorem}\label{thm: general Gleason} \textbf{\emph{(Gleason-Christensen-Yeadon \cite{Christensen1982,Yeadon1983,Yeadon1984})}} Let $\cN$ be a von Neumann algebra with no summand of type $I_2$ and let $\mu: \mc{P}(\cN) \rightarrow \mathbb{R}$ be a finitely additive probability measure on the projections of $\cN$. There exists a unique state $\sigma_\mu \in \mc{S}(\cN)$ such that $\mu(p) = \sigma_\mu(p)$ for all $p \in \mc{P}(\cN)$. If $\mu$ is completely additive then $\sigma_\mu$ is normal and of the form $\mu(p) = \sigma_\mu(p) = \mathrm{tr}[\rho_\mu p]$ for all $p \in \mc{P}(\cN)$ with $\rho_\mu$ a positive trace-class operator.
\end{theorem} \section{Gleason's theorem in context} \subsection{Partial order of contexts and probabilistic presheaf.} Note that finite (complete) additivity only assumes that $\mu: \PN \ra [0,1]$ is \emph{quasi-linear}, i.e., linear in commuting von Neumann subalgebras. Remarkably, Thm.~\ref{thm: original Gleason} and its generalisation Thm.~\ref{thm: general Gleason} show that this already implies linearity on all of $\cN$. We emphasise the passage from quasi-linearity to linearity as follows. Note that $\mu$ restricts to a probability measure $\mu_V$ in every commutative von Neumann subalgebra $V \subset \cN$, i.e., $\mu_V = \mu|_V$. $\mu$ thus defines a collection of probability measures $(\mu_V)_{V \subset \cN}$, one for every commutative von Neumann subalgebra, and such that whenever $\tilde{V} \subset V$ is a von Neumann subalgebra, the respective probability measures are related by restriction, $\mu_{\tilde{V}} =\mu_V|_{\tilde{V}}$ (viz. marginalisation). From this perspective, the constraints on measures $\mu: \PN \ra [0,1]$ in Gleason's theorem arise via the inclusion relations between commutative von Neumann subalgebras. This motivates the definition of the following partially ordered set (poset). \begin{definition}\label{def: context category} Let $\cN$ be a von Neumann algebra. The poset of commutative von Neumann subalgebras of $\cN$ is called the \emph{context category of $\cN$} and is denoted by $\VN$. \end{definition} The name `context category' is motivated as follows. First, note that every poset can be regarded as a category, whose objects are the elements of the poset and whose arrows are defined by $a\ra b \Leftrightarrow a\leq b$. 
Second, in quantum physics contextuality refers to the fact that not all observables, represented by the self-adjoint operators in $\cN$, can be measured simultaneously in an arbitrary state.\footnote{This is in contrast to classical physics, where observables are represented by elements in a (commutative) algebra of functions on a (locally) compact Hausdorff space. The Kochen-Specker theorem shows that in the noncommutative case such a description is impossible \cite{KochenSpecker1967}.} Only commuting operators can be measured simultaneously, i.e., those that are contained in a commutative von Neumann subalgebra---a \emph{context} \cite{DoeringFrembs2019a}. The constraints in Gleason's theorem are therefore \emph{noncontextuality constraints}: a projection $p \in \PN$ is assigned a probability $\mu(p)$ independent of the context that $p$ lies in: $\mu_V(p) = \mu_{\tilde V}(p) = \mu(p)$ whenever $p\in \tilde{V},V$. Next, we formalise the idea that a measure $\mu: \PN \ra [0,1]$ corresponds to a collection of probability measures over $\VN$. \begin{definition}\label{defn: probabilistic presheaf} Let $\cN$ be a von Neumann algebra with context category $\VN$. The \emph{(normal) probabilistic presheaf\footnote{A presheaf $\underline{P}$ over the category $\mc{C}$ is a functor $\underline{P}: \mc{C}^\mathrm{op} \ra \mathbf{Set}$. We denote presheaves with an underscore.} $\PPi$ of $\cN$ over $\VN$} is the presheaf given \begin{itemize} \item [(i)] on objects: for all $V\in\VN$, let \begin{equation*} \PPi_V:=\{\mu_V:\PV\ra [0,1] \mid \mu_V\text{ is a finitely (completely) additive probability measure}\}\; , \end{equation*} \item [(ii)] on arrows: for all $\tilde{V},V\in\VN$ with $\tilde{V} \subset V$, let $\PPi(i_{\tilde{V}V}): \PPi_V \ra \PPi_{\tilde{V}}$ with $\mu_V \mapsto \mu_V|_{\tilde{V}}$, where $i_{\tilde{V} V}:\tilde{V}\hra V$ denotes the inclusion map between contexts $\tilde{V} \subset V$. 
\end{itemize} \end{definition} We remark that the study of presheaves over the partial order of contexts is at the heart of the topos approach to quantum theory \cite{IshamButterfieldI,IshamButterfieldII,IshamButterfieldIII,IshamButterfieldIV,IshamDoeringI,IshamDoeringII,IshamDoeringIII,IshamDoeringIV,HLS2009,HLS2010}. For an introduction, see, e.g., \cite{DoeIsh11}. For the intimate relationship between contextuality (in the sense of Def.~\ref{def: context category}) and various key theorems in quantum theory, see \cite{DoeringFrembs2019a}. \subsection{Gleason's theorem in presheaf form.} We have seen that a finitely (completely) additive probability measure over the projections of a von Neumann algebra $\cN$ can be regarded as a collection of finitely (completely) additive probability measures over contexts $(\mu_V)_{V \in \VN}$, which satisfy the constraints $\mu_{\tilde{V}} = \mu_V|_{\tilde{V}}$ whenever $\tilde{V},V \in \VN$ with $\tilde{V} \subset V$. In terms of Def.~\ref{defn: probabilistic presheaf}, $\mu$ thus becomes a \emph{global section} of the (normal) probabilistic presheaf $\PPi$. We denote the set of global sections of $\PPi$ by $\Gamma[\PPi].$\footnote{Here, `global' refers to the fact that $\mu$ satisfies the restriction constraints in $\PPi$ over all of $\VN$. In contrast, a \emph{local section} satisfies the constraints only on a sub-poset of $\VN$.} In this terminology, we obtain the following reformulation of Gleason's theorem \cite{Doering2004,DeGroote2007,Doering2012}. \begin{theorem}\label{thm: Gleason in context form} \textbf{\emph{(Gleason in presheaf form (I))}} Let $\cN$ be a von Neumann algebra with no summand of type $\text{I}_2$. There is a bijective correspondence between (normal) states on $\cN$ and global sections of the (normal) probabilistic presheaf $\PPi$ of $\cN$ over $\VN$.
\end{theorem} \begin{proof} Note that every (normal) state $\sigma: \cN \ra \C$ defines a finitely (completely) additive probability measure over the projections of $\cN$ (cf. Thm. 7.1.12 in \cite{KadisonRingroseII}), which corresponds to a global section of $\PPi$ by the preceding discussion. By the same correspondence, each global section of the (normal) probabilistic presheaf $\gamma \in \Gamma[\PPi]$ extends to a (normal) state $\sigma \in \SN$ as a consequence of Thm.~\ref{thm: general Gleason}. \end{proof} Next, we turn to a generalisation of Gleason's theorem in presheaf form to composite systems. \subsection{Linearity without positivity.} The canonical product on partial orders, denoted $\mc{V}_1 \times \mc{V}_2$, is the Cartesian product with elements $(V_1,V_2)$ for $V_1 \in \mc{V}_1$, $V_2 \in \mc{V}_2$ and order relations such that for all $\tilde{V}_1,V_1 \in \mc{V}_1$, $\tilde{V}_2,V_2 \in \mc{V}_2$: \begin{equation*}\label{eq: product context category} (\tilde{V}_1,\tilde{V}_2) \subseteq (V_1,V_2) :\Longleftrightarrow \tilde{V}_1 \subseteq_1 V_1 \mathrm{\ and \ } \tilde{V}_2 \subseteq_2 V_2\; . \end{equation*} It is interesting to ask whether the state space of the composite system can be recovered from product contexts $V \in \mc{V}(\cN_1) \times \mc{V}(\cN_2) \subsetneq \mc{V}(\cN_1 \otimes \cN_2)$ only. For this to be possible, states on the subsystem algebras need to define unique states on the composite algebra and vice versa. For this reason, we will consider the spatial tensor product between von Neumann algebras $\bar{\otimes}$ (for details, see Section 11.2 in \cite{KadisonRingroseII}). Recall that given normal states $\sigma_1 \in (\cN_1)_*$ and $\sigma_2 \in (\cN_2)_*$, where $\cN_*$ denotes the predual of $\cN$, there exists a unique normal product state $\sigma = \sigma_1 \bar{\otimes} \sigma_2 \in (\cN_1 \bar{\otimes} \cN_2)_*$ (Prop. 11.2.7, \cite{KadisonRingroseII}).
Conversely, every normal linear functional $\sigma \in \cN_*$ on the spatial tensor product $\cN = \cN_1 \bar{\otimes} \cN_2$ is the norm limit of linear combinations of normal product states $\sigma_1 \bar{\otimes} \sigma_2$ (Prop. 11.2.8, \cite{KadisonRingroseII}). It follows that we can identify the product context $V = (V_1,V_2) \in \mc{V}(\cN_1) \times \mc{V}(\cN_2)$ with the commutative von Neumann subalgebra $V = V_1 \bar{\otimes} V_2 \subset \cN_1 \bar{\otimes} \cN_2$ for every $V_1 \subset \cN_1$ and $V_2 \subset \cN_2$. \begin{theorem}\label{thm: linearity vs positivity} Let $\cN_1$, $\cN_2$ be von Neumann algebras with no summand of type $\text{I}_2$ and let $\cN = \cN_1 \bar{\otimes} \cN_2$. There is a bijective correspondence between global sections of the normal probabilistic presheaf $\PPi$ of $\cN$ over $\mc{V}(\cN_1) \times \mc{V}(\cN_2)$ and normal linear functionals $\sigma: \cN \ra \C$ such that $\sigma(1) = 1$ and $\sigma(a \otimes b) \geq 0$ for all $a \in (\cN_1)_+$ and $b \in (\cN_2)_+$. \end{theorem} \begin{proof} \input{Proof_Linearity} \end{proof} We emphasise that the reduction to products of commutative von Neumann algebras in $\mc{V}(\cN_1) \times \mc{V}(\cN_2)$ is already sufficient to lift quasi-linear measures to linear functionals. In finite dimensions this has been observed before \cite{KlayRandallFoulis1987,Wallach2002}. Yet, unlike in Thm.~\ref{thm: Gleason in context form}, the linear functionals thus obtained are not necessarily positive; hence, Thm.~\ref{thm: linearity vs positivity} does not provide a classification of states on $\cN = \cN_1 \bar{\otimes} \cN_2$. From this perspective, it seems unjustified to call Thm.~\ref{thm: linearity vs positivity} a generalisation of Gleason's theorem (Thm.~\ref{thm: general Gleason}) to composite systems.
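A standard finite-dimensional example separates the two notions in Thm.~\ref{thm: linearity vs positivity}: the normalised swap functional $\sigma(X) = \frac{1}{2}\mathrm{tr}[\mathrm{SWAP}\,X]$ on $M_2(\C) \otimes M_2(\C)$ satisfies $\sigma(1)=1$ and $\sigma(a \otimes b) = \frac{1}{2}\mathrm{tr}[ab] \geq 0$ for all positive $a,b$, yet it is negative on the singlet projection and hence not a state. The numerical sketch below (ours, not taken from the paper) verifies this.

```python
import numpy as np

rng = np.random.default_rng(0)

# SWAP on C^2 (x) C^2: SWAP (x tensor y) = y tensor x
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
W = SWAP / 2  # normalised so that sigma(1) = tr[W] = 1

def sigma(X):
    # the linear functional sigma(X) = tr[W X]
    return np.trace(W @ X).real

def random_psd():
    # a random 2x2 positive semidefinite matrix m m^*
    m = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    return m @ m.conj().T

# sigma(a tensor b) = tr[a b]/2 >= 0 for all positive a, b ...
products_ok = all(sigma(np.kron(random_psd(), random_psd())) >= -1e-12
                  for _ in range(100))

# ... yet sigma is negative on the singlet projection, so it is not positive
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
singlet_value = sigma(np.outer(psi, psi.conj()))  # = -1/2
```

The swap example is well known; it witnesses exactly the gap between positivity on product operators and positivity on the full algebra that the theorem leaves open.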
\section{A composite Gleason theorem} Thm.~\ref{thm: linearity vs positivity} makes it clear that in order to obtain a generalisation of Gleason's theorem, which singles out the state spaces of composite systems, one needs to strengthen Def.~\ref{defn: probabilistic presheaf}. In particular, one needs to characterise positivity of the linear functional $\sigma: \cN_1 \bar{\otimes} \cN_2 \ra \C$ in Thm.~\ref{thm: linearity vs positivity}. We will derive positivity of $\sigma$ from complete positivity of an associated map $\phi: \cN_1 \ra \cN^*_2$ (where $\cN_2^*$ denotes the algebra under the adjoint, see Sec.~\ref{sec: time orientation} below). Recall that in the proof of Thm.~\ref{thm: linearity vs positivity} we constructed a map $\widetilde{\phi}: \cN_1 \ra (\cN_2)_*$, into normal states on $\cN_2$ (up to normalisation). We may therefore identify the normal states $\widetilde{\phi}(p) \in (\cN_2)_*$ for every $p \in \mc{P}(\cN_1)$ with trace-class operators in $\cN_2$. More precisely, consider a faithful representation of the von Neumann algebra $\cN_2$ as bounded operators on a Hilbert space $\cH_2$.\footnote{We may take $\cH_2$ to be the unique Hilbert space defined by the standard form of $\cN_2$ \cite{Haagerup1975}.} With respect to the latter, every normal state $\sigma \in (\cN_2)_*$ is of the form $\sigma(b) = \mathrm{tr}_{\cH_2}[\rho^*_\sigma b] = \mathrm{tr}_{\cH_2}[\rho_\sigma b]$ for all $b \in \cN_2$ (Thm.~7.1.9 in \cite{KadisonRingroseII}); in physics parlance, $\rho_\sigma$ is also called a \emph{density matrix}. Throughout, we will indicate this identification in maps by writing $\phi: \cN_1 \ra \cN^*_2$ as opposed to $\widetilde{\phi}: \cN_1 \ra (\cN_2)_*$, and similarly $\varrho: \mc{P}(\cN_1) \ra \cN^*_2$ as opposed to $\widetilde{\varrho}: \mc{P}(\cN_1) \ra (\cN_2)_*$. Under this identification we seek conditions that ensure that $\phi$ is not only positive but also completely positive. 
We will achieve this in two steps: first, we extend the constraints in Gleason's theorem to dilations in Sec.~\ref{sec: dilation}; second, we enforce a consistency condition with respect to dynamical correspondences in Sec.~\ref{sec: time orientation}. Finally, in Sec.~\ref{sec: main result} we prove our main result: a generalisation of Gleason's theorem to composite systems. Along the way we show that neither of these additional constraints jeopardises applicability in the single system case. Indeed, we prove sharper versions of Gleason's theorem, thereby addressing a number of subtleties that only arise for composite systems. \subsection{Dilations.}\label{sec: dilation} First, we require that measures $\mu_V$ admit dilations in every commutative von Neumann subalgebra $V \in \VN$. More precisely, by Gelfand duality we may identify every context $V \in \VN$ with a compact Hausdorff space $X$ such that $\mu_V$ becomes a measure on $X$. In particular, we identify the projections $\mc{P}(V)$ with the clopen subsets of the Gelfand spectrum of $V$. In this way, we can interpret $\mu_V$ as a $\mc{B}(\cH)$-valued measure for $\cH=\C$ (thus also $\mc{B}(\cH) \cong \mathbb{C}$). By Naimark's theorem \cite{Naimark1943,Stinespring1955}, the latter admits a spectral dilation of the form $\mu_V = v^*\varphi_V v$, where $\varphi_V: \mc{P}(V) \hookrightarrow \mc{P}(\cK)$ is an embedding for some Hilbert space $\cK$ and $v: \C \ra \cK$ is a linear map (equivalently, a vector $v \in \mc{K}$). By choosing $\cK$ sufficiently large, we may take $\cK$ to be independent of the context $V \in \VN$. Moreover, we take $v$ to be independent of contexts $V \in \VN$, since we can absorb any context dependence of $v$ into $\varphi_V$.
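In finite dimensions the dilation $\mu_V = v^*\varphi_V v$ is elementary. The following sketch (illustrative only, with an ad hoc four-point Gelfand spectrum) dilates a probability measure using the diagonal projection-valued measure as $\varphi_V$ and the vector of square roots of the weights as $v$:

```python
import numpy as np

# Naimark-type spectral dilation of a probability measure mu on a finite
# set: mu(S) = v* phi(S) v, with phi(S) the diagonal projection onto the
# coordinates in S (an additive embedding of subsets into projections on
# K = C^4) and v the unit vector of square roots of the weights.
mu = np.array([0.1, 0.2, 0.3, 0.4])       # a probability measure
v = np.sqrt(mu)                           # v: C -> K, a unit vector

def phi(S):
    """Projection-valued measure: diagonal projection onto the set S."""
    p = np.zeros((4, 4))
    for i in S:
        p[i, i] = 1.0
    return p

S = [0, 2]
assert np.isclose(v @ phi(S) @ v, mu[S].sum())       # mu(S) = v* phi(S) v
# phi is additive on disjoint sets and maps complements to complements
assert np.allclose(phi([0, 2]) + phi([1, 3]), np.eye(4))
```

Choosing a larger $\cK$ (here, a direct sum over contexts) is what allows a single $v$ to serve all contexts at once, as described above.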
More specifically, by a finitely (completely) additive embedding we mean $\varphi_V(0) = 0$, $\varphi_V(1-p) = 1 - \varphi_V(p)$, and $\varphi_V(p_1 + p_2) = \varphi_V(p_1) + \varphi_V(p_2)$ for all $p_1,p_2 \in \mc{P}(V)$, $p_1p_2=0$ ($\varphi_V(\sum_{i\in I} p_i) = \sum_{i\in I} \varphi_V(p_i)$ for every family of pairwise orthogonal projections $(p_i)_{i \in I}$, $p_i \in \PN$). We further require that embeddings preserve spatial tensor products, i.e., whenever $V = V_1 \bar{\otimes} V_2$, then $\varphi_V= \varphi_{V_1} \bar{\otimes} \varphi_{V_2}$, where $\varphi_{V_1}: \mc{P}(V_1) \ra \mc{P}(\cK_1)$ and $\varphi_{V_2}: \mc{P}(V_2) \ra \mc{P}(\cK_2)$ are embeddings and $\cK = \cK_1 \otimes \cK_2$. Taken together, this suggests strengthening Def.~\ref{defn: probabilistic presheaf} as follows. \begin{definition}\label{def: dilated probabilistic presheaf} Let $\cN$ be a von Neumann algebra with context category $\VN$. The \emph{(normal) dilated probabilistic presheaf} $\PPi_D$ of $\cN$ over $\VN$ is the presheaf given \begin{itemize} \item [(i)] on objects: for all $V\in\VN$, let \begin{align*} &\PPi_D(V):=\{\mu_V:\PV\ra [0,1] \mid \mu_V \text{ a probability measure of the form } \mu_V = v^* \varphi_V v\text{ with}\\ &\ v: \C \ra \cK \text{ linear}, \text{ and } \varphi_V: \mc{P}(V) \hookrightarrow \mc{P}(\mc{K}) \text{ a finitely (completely) additive embedding}\}\; , \end{align*} \item [(ii)] on arrows: for all $V,\tilde{V}\in\VN$, if $\tilde{V}\subseteq V$, let \begin{equation*} \PPi_D(i_{\tilde{V}V}): \PPi_D(V) \ra \PPi_D(\tilde{V}) \text{ with } \mu_V \mapsto \mu_V|_{\tilde{V}}\; . \end{equation*} \end{itemize} \end{definition} As in Def.~\ref{defn: probabilistic presheaf}, the condition that $\varphi_V$ preserves orthogonality is required in commutative von Neumann subalgebras only. Consequently, Def.~\ref{def: dilated probabilistic presheaf} only requires global sections of $\PPi_D$ to be quasi-linear.
The key difference to the probabilistic presheaf $\PPi$ in Def.~\ref{defn: probabilistic presheaf} is that we require the constraints to hold also with respect to dilations. If Def.~\ref{def: dilated probabilistic presheaf} is to generalise the single system case, it should be consistent with it. We therefore start by analysing global sections of the dilated probabilistic presheaf in the original setting. We have the following refined version of Thm.~\ref{thm: Gleason in context form}. \begin{theorem}\label{thm: Gleason in context form II} \textbf{\emph{(Gleason in contextual form (II))}} Let $\cN$ be a von Neumann algebra with no summand of type $\text{I}_2$. There is a bijective correspondence between (normal) states on $\cN$ and global sections of the (normal) dilated probabilistic presheaf $\PPi_D$ of $\cN$ over $\VN$. \end{theorem} \begin{proof} Let $\sigma \in \mc{S}(\cN)$ be a (normal) state and consider the corresponding representation $\pi: \cN \ra \mc{B}(\cK)$ arising via the GNS construction from the inner product $(a,b) := \sigma(a^*b)$. In this representation, $\sigma$ is a vector state $\sigma(a) = (\pi(a)v,v)$ with $v \in \cK$ for all $a \in \cN$. Interpreting $v: \mathbb{C} \ra \cK$ as a linear map, this reads $\sigma(a) = v^* \pi(a) v$ for all $a \in \cN$, and since $\pi$ is a representation, $\varphi := \pi|_{\PN}$ is an orthomorphism. It follows that $(\sigma|_{\mc{P}(V)})_{V\in\VN}$ defines a global section of the (normal) dilated probabilistic presheaf $\PPi_D(\VN)$. Conversely, recall that $\gamma = (v^* \varphi_V v)_{V \in \VN} \in \Gamma[\PPi(\VN)]$ defines a (normal) state $\sigma_\gamma \in \SN$ by Thm.~\ref{thm: general Gleason}.
In fact, the embeddings $(\varphi_V)_{V \in \VN}$ define an orthomorphism $\varphi_\gamma: \PN \ra \mc{P}(\cK)$, which by Cor.~2 in \cite{BunceWright1993} extends to a (normal) Jordan $*$-homomorphism $\Phi_\gamma: \cN \rightarrow \mc{B}(\cK)$ such that $\sigma_\gamma = v^*\Phi_\gamma v$ and $\varphi_\gamma = \Phi_\gamma|_{\PN}$. This will become important in Lm.~\ref{lm: global sections to Jordan homos} below. \end{proof} Despite the fact that, by Thm.~\ref{thm: Gleason in context form} and Thm.~\ref{thm: Gleason in context form II}, global sections of the probabilistic and the dilated probabilistic presheaf correspond with states on $\cN$,\footnote{Since two global sections $\gamma,\gamma' \in \Gamma[\PPi_D(\VN)]$ are identified if and only if $\mu^\gamma_V = \mu^{\gamma'}_V$ for all $V \in \VN$.} the dilated probabilistic presheaf encodes more constraints than the probabilistic presheaf. To see this, note that we deduced linearity in Thm.~\ref{thm: linearity vs positivity} from a theorem due to Bunce and Wright in \cite{BunceWright1992}. Under the conditions of Def.~\ref{def: dilated probabilistic presheaf} a stronger version applies \cite{BunceWright1993}, leading to the following refinement of Thm.~\ref{thm: linearity vs positivity}. \begin{lemma}\label{lm: global sections to Jordan homos} Let $\mc{N}_1$, $\mc{N}_2$ be von Neumann algebras with no summand of type $\text{I}_2$, $\cN = \cN_1 \bar{\otimes} \cN_2$, and let $\PPi_D(\mc{V}(\cN_1) \times \mc{V}(\cN_2))$ be the normal dilated probabilistic presheaf over the composite context category $\mc{V}(\cN_1) \times \mc{V}(\cN_2)$.
Then every global section $\gamma \in \Gamma[\PPi_D(\mc{V}(\cN_1) \times \mc{V}(\cN_2))]$ uniquely extends to a normal linear functional $\sigma_\gamma \in \cN_*$, given for all $a \in \cN_1$ and $b \in \cN_2$ by \begin{equation}\label{eq: decomposable linear functional} \sigma_\gamma(a \otimes b) = \widetilde{\phi}_\gamma(a)(b)\; , \end{equation} where $\phi_\gamma: \cN_1 \ra \cN^*_2$ is of the form $\phi_\gamma = w^*\Phi_\gamma w$ with $\Phi_\gamma: \cN_1 \ra \mc{B}(\cK)$ a normal Jordan $*$-homomorphism and $w: \cH_2 \ra \cK$ a bounded linear map for some Hilbert space $\cK$ and $\cN_2 \subset \mc{B}(\cH_2)$. \end{lemma} \begin{proof} \input{Proof_Decomposability} \end{proof} Recall that a map $\phi: \cN \ra \BH$ is called \emph{decomposable} if it is of the form $\phi = v^* \Phi v$ for $v: \cH \ra \cK$ a bounded linear map and $\Phi: \cN \ra \mc{B}(\cK)$ a Jordan $*$-homomorphism. Such maps satisfy a weaker notion of positivity than complete positivity \cite{Stormer1982}, owing to their close resemblance with completely positive maps $\phi = v^* \Phi v$, for which $\Phi$ is a $C^*$-homomorphism by the classification due to Stinespring \cite{Stinespring1955}. Importantly, by a generalisation of Choi's theorem \cite{Choi1975} (see Lm.~\ref{lm: positivity vs complete positivity} below), completely positive maps $\phi_\gamma$ correspond with positive linear functionals under Eq.~(\ref{eq: decomposable linear functional}). In contrast, linear functionals $\sigma_\gamma$ corresponding to decomposable maps under Eq.~(\ref{eq: decomposable linear functional}) are generally not positive. We infer that for $\sigma_\gamma$ to be a state, $\phi_\gamma$ in Lm.~\ref{lm: global sections to Jordan homos} further needs to lift to a completely positive map, equivalently the Jordan $*$-homomorphism in Lm.~\ref{lm: global sections to Jordan homos} needs to lift to a $C^*$-homomorphism.
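The distinction between decomposable and completely positive maps is already visible on $M_2(\C)$: the transpose map is positive and decomposable, but the finite-dimensional Choi criterion behind the generalisation of Choi's theorem invoked above rules it out as completely positive. A numerical sketch (illustrative only):

```python
import numpy as np

# Choi matrix C(phi) = sum_{ij} E_ij (x) phi(E_ij); phi is completely
# positive iff C(phi) >= 0 (Choi's theorem in finite dimensions).
def choi(phi, n=2):
    C = np.zeros((n * n, n * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n)); E[i, j] = 1.0
            C += np.kron(E, phi(E))
    return C

# The transpose map is positive (hence decomposable) but not CP: its Choi
# matrix is the swap operator, which has eigenvalue -1.
transpose = lambda a: a.T
assert np.linalg.eigvalsh(choi(transpose)).min() < -0.5

# A genuinely CP map (conjugation by a Kraus operator) has a positive
# semidefinite Choi matrix.
K = np.array([[1.0, 0.5], [0.0, 1.0]])
cp = lambda a: K @ a @ K.conj().T
assert np.linalg.eigvalsh(choi(cp)).min() >= -1e-9
```

Under Eq.~(\ref{eq: decomposable linear functional}) the Choi matrix plays the role of the operator representing $\sigma_\gamma$, which is why only completely positive $\phi_\gamma$ yield positive functionals.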
\subsection{Dynamical correspondences.}\label{sec: time orientation} We recall some basic facts about Jordan algebras. A \emph{Jordan algebra} $\cJ$ is a commutative algebra that satisfies the characteristic equation $(a^2 \circ b) \circ a = a^2 \circ (b \circ a)$ for all $a,b \in \cJ$. $\cJ$ is called a \emph{JB algebra} if it is also a Banach space such that $||a \circ b|| \leq ||a|| \cdot ||b||$ and $||a^2|| = ||a||^2$ for all $a,b \in \cJ$. Finally, $\cJ$ is called a \emph{JBW algebra} if it is a JB algebra with a (unique) predual. A Jordan homomorphism $\Phi: \cJ_1 \rightarrow \cJ_2$ is a linear map such that $\Phi(a \circ b) = \Phi(a) \circ \Phi(b)$ for all $a,b \in \cJ_1$. For more details on Jordan algebras, see \cite{McCrimmon_ATasteOfJordanAlgebras}. Note that the self-adjoint part of a von Neumann algebra $\cN$ naturally gives rise to a real JBW-algebra under the symmetrised product $a \circ b = \frac{1}{2}\{a,b\} = \frac{1}{2}(ab + ba)$. We denote this algebra by $\JNsa = (\cN_\mathrm{sa},\circ)$ and by $\JN$ its complexification.\footnote{A JBW algebra isomorphic to a subalgebra of $\JNsa$ is also called a JW algebra.} In this case we speak of Jordan $*$-homomorphisms if $\Phi$ is a Jordan homomorphism and $\Phi(a^*) = \Phi(a)^*$. In general, $\JN$ does not determine $\cN$ completely, as it lacks compatibility with the antisymmetric part or \textit{commutator} of the associative product in $\cN$, $[a,b] = ab - ba$. In particular, for the Jordan $*$-homomorphism $\Phi$ in Lm.~\ref{lm: global sections to Jordan homos} to lift to a $C^*$-homomorphism it needs to preserve commutators.
On the level of JBW algebras, this can be expressed in terms of one-parameter groups of Jordan automorphisms $\R \ni t \mapsto e^{t\delta(a)} \in \mathrm{Aut}(\cJ)$, where $\delta \in \mc{OD}_s(\cJ)$ is called a \emph{skew order derivation}, i.e., a bounded linear map $\delta: \cJ_+ \ra \cJ_+$ acting on the positive cone of $\cJ$ such that $\delta(1) = 0$ (for details, see \cite{AlfsenShultz1998a}). For $\cJ = \JNsa$, these are of the form $\delta_{ia}(b) = \frac{i}{2}[a,b]$ for all $a,b \in \JNsa$. This yields a canonical map $\psi_\cN: \JNsa \ra \mc{OD}_s(\JNsa)$, $\psi_\cN: a \ra \delta_{ia}$, which expresses the double role of self-adjoint operators as observables and generators of symmetries \cite{AlfsenShultz1998,Baez2020}. In particular, $e^{t\psi_\cN}$ expresses the unitary evolution of the system described by $\JNsa$. We also define the map $\psi^*_{\cN} := * \circ \psi_\cN: a \ra \delta_{-ia}$ for all $a \in \JNsa$ (cf. Prop.~15 in \cite{AlfsenShultz1998a}). Note that $\psi_\cN(a)(a) = 0$ and $[\delta_a,\delta_b] = - [\psi_\cN(a),\psi_\cN(b)]$ for all $a,b \in \JNsa$, where $\delta_a(b) := a \circ b$. More generally, Alfsen and Shultz define a \emph{dynamical correspondence} to be any map $\psi: \cJ \ra \mc{OD}_s(\cJ)$ satisfying these properties and prove that a JBW algebra $\cJ$ is a JW algebra if and only if $\cJ$ admits a dynamical correspondence.\footnote{We add that, unlike a JBW algebra, a von Neumann algebra is generally not anti-isomorphic to itself \cite{Connes1975}.} In this case, associative products bijectively correspond with dynamical correspondences (Thm.~23 in \cite{AlfsenShultz1998}). We conclude that a von Neumann algebra $\cN$ is determined as the pair $(\JN,\psi_\cN)$ with the canonical dynamical correspondence $\psi_\cN: a \ra \delta_{ia} = \frac{i}{2}[a,\cdot]$ for all $a \in \JNsa$.
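The defining identities above are straightforward to verify numerically for $\cJ(M_3(\C))_{\mathrm{sa}}$. The following sketch (an illustration, not part of the argument) checks the Jordan identity, the skew order derivation properties of $\delta_{ia} = \frac{i}{2}[a,\cdot]$, and the compatibility relation $[\delta_a,\delta_b] = -[\psi_\cN(a),\psi_\cN(b)]$:

```python
import numpy as np

rng = np.random.default_rng(1)

def herm(n=3):
    """Random self-adjoint matrix, an element of J(M_n)_sa."""
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (g + g.conj().T) / 2

jordan = lambda x, y: (x @ y + y @ x) / 2           # x o y
psi = lambda x: (lambda y: 0.5j * (x @ y - y @ x))  # psi(a) = delta_{ia}

a, b, c = herm(), herm(), herm()

# Jordan identity: (a^2 o b) o a = a^2 o (b o a)
a2 = jordan(a, a)
assert np.allclose(jordan(jordan(a2, b), a), jordan(a2, jordan(b, a)))

# delta_{ia} is a skew order derivation: delta_{ia}(1) = 0, psi(a)(a) = 0,
# and it maps self-adjoint elements to self-adjoint elements
assert np.allclose(psi(a)(np.eye(3)), 0)
assert np.allclose(psi(a)(a), 0)
d = psi(a)(b)
assert np.allclose(d, d.conj().T)

# compatibility with the Jordan multiplications delta_x(y) = x o y:
# [delta_a, delta_b] = -[psi(a), psi(b)]
lhs = jordan(a, jordan(b, c)) - jordan(b, jordan(a, c))
rhs = -(psi(a)(psi(b)(c)) - psi(b)(psi(a)(c)))
assert np.allclose(lhs, rhs)
```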
Moreover, we set $\cN^* := (\JN,\psi^*_\cN)$ for the associative algebra whose order of composition is reversed (under the adjoint map) with respect to the order of composition in $\cN \cong (\JN,\psi_\cN)$. It follows that a Jordan $*$-homomorphism $\Phi: \cJ(\cN_1) \ra \cJ(\cN_2)$ lifts to a homomorphism between the respective von Neumann algebras $\cN_1$ and $\cN_2$ precisely when it preserves the respective dynamical correspondences $\psi_{\cN_1}$ and $\psi_{\cN_2}$. Comparing with the Jordan $*$-homomorphism $\Phi_\gamma$ in Lm.~\ref{lm: global sections to Jordan homos}, we need to lift the dynamical correspondence $\psi_{\cN_2}$ to (a subalgebra of) $\mc{B}(\cK)$. To do so, note first that the argument reduces to factors in $\mc{N}_2$ \cite{AlfsenShultz1998}, and further that we may choose $v$ in Def.~\ref{def: dilated probabilistic presheaf} so that it preserves the factor decomposition. With this choice, let $\cN^v_2 \subseteq \mc{B}(\cK)$ be the largest von Neumann algebra which restricts to $\cN_2$ under $v$, i.e., $\cN_2 = v^* \cN_2^v v$. Clearly, $\Phi(\mc{N}_1) \subseteq \cN^v_2 \subseteq \mc{B}(\mc{K})$, and it is easy to see that in this case $\psi_{\cN_2}$ defines a unique dynamical correspondence $\psi'_{\cN_2}$ on $\cN^v_2$ via the linear embedding $\mc{N}_2 \hookrightarrow \cN^v_2$ induced by $v$. This entitles us to the following key definition (cf. \cite{FrembsDoering2022a}). \begin{definition}\label{def: time-oriented global sections} Let $\cN_1 = (\cJ(\cN_1),\psi_{\cN_1})$, $\cN_2 = (\cJ(\cN_2),\psi_{\cN_2})$ be von Neumann algebras.
A global section of the dilated probabilistic presheaf $\gamma \in \Gamma[\PPi_D(\mc{V}(\cN_1) \times \mc{V}(\cN_2))]$ is called \emph{time-oriented} if the Jordan $*$-homomorphism $\Phi_\gamma$ in Lm.~\ref{lm: global sections to Jordan homos} preserves dynamical correspondences, \begin{equation}\label{eq: Jordan to vN condition} \Phi_\gamma \circ \psi^*_{\cN_1} = \psi'_{\cN_2} \circ \Phi_\gamma\; , \end{equation} where $\psi'_{\cN_2}$ denotes the dynamical correspondence on $\cN_2^v \subset \mc{B}(\cK)$ uniquely defined by $\psi_{\cN_2}$. \end{definition} The terminology reflects the fact that the one-parameter groups $\mathbb{R} \ni t \mapsto e^{t\psi_\cN}$ express unitary dynamics in quantum theory. Consequently, they give physical meaning to $t$ as a time parameter. In particular, $\psi_\cN: a \ra \delta_{ia}$ fixes the (canonical) forward time direction of the system described by $\cN$, whereas $\psi^*_{\cN} := * \circ \psi_\cN: a \ra \delta_{-ia}$ describes the system under time reversal \cite{Doering2014} (see also \cite{FrembsDoering2022a,Frembs2022a}). We emphasise that the appearance of $\psi^*_{\cN_1}$ and $\psi_{\cN_2}$ in Def.~\ref{def: time-oriented global sections} is a consequence of interpreting the normal linear functional $\sigma_\gamma: \cN_1 \bar{\otimes} \cN_2 \ra \C$ in Lm.~\ref{lm: global sections to Jordan homos} as a linear map $\phi_\gamma: \cN_1 \ra \cN^*_2$ (under the identification between trace-class operators and normal linear functionals), equivalently $\phi_\gamma: \cN^*_1 \ra \cN_2$, where the order of composition in $\cN^*_1$ is naturally reversed (under the adjoint map) with respect to the order of composition in $\cN_1$. Once again, we show that Def.~\ref{def: time-oriented global sections} poses no additional constraint in the single system case.
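A concrete way to see why Eq.~(\ref{eq: Jordan to vN condition}) pairs $\psi^*_{\cN_1}$ with $\psi_{\cN_2}$: the transpose map on $M_3(\C)$ is a Jordan $*$-homomorphism, but it reverses commutators and hence intertwines $\psi^*$ with $\psi$ rather than preserving $\psi$ itself. A numerical sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)

def herm(n=3):
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (g + g.conj().T) / 2

a, b = herm(), herm()
T = lambda x: x.T                              # transpose map on M_3
jordan = lambda x, y: (x @ y + y @ x) / 2
psi  = lambda x, y: 0.5j * (x @ y - y @ x)     # psi(a)(b)  = (i/2)[a,b]
psis = lambda x, y: -0.5j * (x @ y - y @ x)    # psi*(a)(b) = -(i/2)[a,b]

# transpose is a Jordan *-homomorphism: it preserves the symmetrised
# product and commutes with the adjoint ...
assert np.allclose(T(jordan(a, b)), jordan(T(a), T(b)))
assert np.allclose(T(a.conj().T), T(a).conj().T)
# ... but it intertwines psi* with psi instead of preserving psi:
assert np.allclose(T(psis(a, b)), psi(T(a), T(b)))
assert not np.allclose(T(psi(a, b)), psi(T(a), T(b)))
```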
\begin{theorem}\label{thm: Gleason in context form III} \textbf{\emph{(Gleason in contextual form (III))}} Let $\cN = (\cJ(\cN),\psi_\cN)$ be a von Neumann algebra with no summand of type $\text{I}_2$. There is a bijective correspondence between (normal) states on $\cN$ and time-oriented global sections of the (normal) dilated probabilistic presheaf $\PPi_D$ of $\cN$ over $\VN$. \end{theorem} \begin{proof} By Thm.~\ref{thm: Gleason in context form II}, states on $\cN$ correspond with global sections of $\PPi_D(\VN)$. It is thus sufficient to show that every state is already time-oriented. This follows from the fact that every positive linear functional is automatically completely positive \cite{Stinespring1955}. More precisely, recall that the GNS construction for $\sigma \in \SN$ trivially yields a dilation of the form in Lm.~\ref{lm: global sections to Jordan homos} (for $\cN_2 = \C$), namely $\sigma(a) = v^*\pi(a)v = \mathrm{tr}_{\cK}[vv^*\pi(a)] = \mathrm{tr}_{\cK}[(vv^*)^*\pi(a)]$ for all $a \in \cN$, where $v: \C \ra \mc{K}$ and $\pi: \cN \ra \mc{B}(\cK)$ is a representation of $\cN$.\footnote{Note that this construction works for all states $\sigma \in \mc{S}(\cN)$, not just for normal states.} In particular, $\sigma$ is time-oriented with respect to the algebras $\cN_1 = \cN$ and $\cN_2 = \C$. \end{proof} \subsection{Gleason's theorem for composite systems.}\label{sec: main result} The refinements of Thm.~\ref{thm: general Gleason} in Thm.~\ref{thm: Gleason in context form II} and Thm.~\ref{thm: Gleason in context form III} both reveal extra structure in the classification of states in the single system case. In turn, having identified this additional structure in Def.~\ref{def: dilated probabilistic presheaf} and Def.~\ref{def: time-oriented global sections}, we may impose it in order to generalise Gleason's theorem to composite systems. 
With the constraints of Def.~\ref{def: dilated probabilistic presheaf} and Def.~\ref{def: time-oriented global sections} in place, every $\gamma \in \Gamma[\PPi_D(\mc{V}(\cN_1) \times \mc{V}(\cN_2))]$ yields a map $\phi_\gamma(a) = v^*\Phi_\gamma(a)v$ with $\Phi_\gamma$ a $C^*$-homomorphism, which implies that $\phi_\gamma$ is completely positive by Stinespring's theorem. In the final step, we deduce from this the existence of a positive linear functional $\sigma_\gamma: \cN \ra \C$ with $\cN = \cN_1 \bar{\otimes} \cN_2$. In finite dimensions, this follows from Choi's theorem \cite{Choi1975}. However, the latter hinges on the existence of a finite trace. For the general case, we use the following generalisation based on a result by Belavkin and Staszewski \cite{Belavkin1986}.\footnote{If $\phi: \cN \ra \cN$, where $\cN$ has a cyclic and separating vector, the result also follows from \cite{Stormer2014}.} \begin{lemma}\label{lm: positivity vs complete positivity} The normal linear functional $\sigma_\gamma(a \otimes b) = \widetilde{\phi}_\gamma(a)(b)$ in Lm.~\ref{lm: global sections to Jordan homos} is positive if and only if the linear map $\phi_\gamma: \cN^*_1 = (\mc{J}(\cN_1),\psi^*_{\cN_1}) \ra \cN_2 = (\mc{J}(\cN_2),\psi_{\cN_2})$ is completely positive. \end{lemma} \begin{proof} \input{Proof_PositivityVsCompletePositivity} \end{proof} We finally arrive at a generalisation of Gleason's theorem to composite systems. \begin{theorem}\label{thm: composite Gleason} Let $\cN_1 = (\cJ(\cN_1),\psi_{\cN_1})$, $\cN_2 = (\cJ(\cN_2),\psi_{\cN_2})$ be von Neumann algebras with no summands of type $\text{I}_2$ and let $\cN = \cN_1 \bar{\otimes} \cN_2$. Then there is a bijective correspondence between the set of normal states on $\cN$ and the set of time-oriented global sections of the normal dilated probabilistic presheaf $\PPi_D$ over $\mc{V}(\cN_1) \times \mc{V}(\cN_2)$. \end{theorem} \begin{proof} \input{Proof_KeyTheorem} \end{proof} \vspace{-0.2cm} We finish with a few remarks. Note that the key steps in the argument leading to Thm.~\ref{thm: composite Gleason} apply to $C^*$-algebras.
The crucial exception is lifting quasi-linearity to linearity. As such, the restriction to von Neumann algebras might not be optimal. For instance, our result may be strengthened to $AW^*$-algebras, for which Gleason's theorem holds by \cite{Hamhalter2015}. Comparing with $C^*$-algebras, we further remark that we restricted to normal states, since non-normal product states generally do not have a unique extension in the spatial tensor product. On the other hand, in the spatial tensor product of $C^*$-algebras every product state has a unique extension (Prop.~11.1.1 in \cite{KadisonRingroseII}). This suggests that for $AW^*$-algebras the bijective correspondence in Thm.~\ref{thm: composite Gleason} further extends beyond normal states. Moreover, we observe that Thm.~\ref{thm: composite Gleason} can be formulated in an intrinsically Jordan-algebraic setting. Note that Bunce and Wright prove a generalisation of Gleason's theorem for JBW-algebras in \cite{BunceWright1985}. The case considered here is the one where JBW algebras arise as self-adjoint parts of von Neumann algebras, hence are JW algebras. Alfsen and Shultz show that this is the case if and only if the JBW algebra admits a dynamical correspondence \cite{AlfsenShultz_StateSpaceOfOperatorAlgebras}. Thm.~\ref{thm: composite Gleason} applies for any choice of dynamical correspondences on the factor JW algebras. In turn, in the context of general JBW algebras, the concept of state requires a weaker notion of positivity, reflecting the generalisation from completely positive to decomposable maps \cite{Stormer1982}. Clearly, the relation with (preservation of) dynamical correspondences in Def.~\ref{def: time-oriented global sections} is lost in this case. We point out that this is closely analogous to the limited applicability of Tomita-Takesaki theory in JBW algebras \cite{HaagerupHanche-Olsen1984}. \section{Conclusion} We proved a generalisation of Gleason's theorem to composite systems.
An ad hoc attempt establishes a correspondence with normal linear functionals on the composite algebra that are positive on product operators, but fails to single out those that are positive (Thm.~\ref{thm: linearity vs positivity}). We remedy this by strengthening the additivity constraints on measures in commutative von Neumann subalgebras to dilated systems, together with a consistency condition between dynamical correspondences on the respective subalgebras (Thm.~\ref{thm: composite Gleason}). Neither of these conditions changes the result in the single system case (Thm.~\ref{thm: Gleason in context form II} and Thm.~\ref{thm: Gleason in context form III}), since positive linear functionals are completely positive as a consequence of Stinespring's theorem \cite{Stinespring1955}. Apart from its mathematical value, our result also carries physical significance. Gleason's theorem (Thm.~\ref{thm: general Gleason}) plays a crucial role in the foundations of quantum theory, where it justifies Born's rule. It is all the more interesting that a similar result extends to composite systems. For instance, we note that within the framework of algebraic quantum field theory the observable algebra is naturally composed of local algebras \cite{HaagKastler1964}. Thm.~\ref{thm: composite Gleason} was foreshadowed, for finite type I factors and with a focus on distinguishing quantum from non-signalling correlations, in \cite{FrembsDoering2022a}. Its close connection with entanglement classification will be discussed elsewhere \cite{Frembs2022a}.
For a broader perspective on the significance of contextuality in the foundations of quantum mechanics we refer to \cite{DoeringFrembs2019a}.\\ \paragraph*{Acknowledgements.} This work is supported through a studentship in the Centre for Doctoral Training on Controlled Quantum Dynamics at Imperial College funded by the EPSRC, by grant number FQXi-RFP-1807 from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation, and ARC Future Fellowship FT180100317. \clearpage \bibliographystyle{siam} \bibliography{bibliography} \end{document} Clearly, every normal product state $\sigma = \sigma_1 \bar{\otimes} \sigma_2 \in \cN_*$ yields a product global section $\gamma_\sigma = \gamma_{\sigma_1} \times \gamma_{\sigma_2} \in \Gamma[\PPi(\mc{V}(\cN_1) \times \mc{V}(\cN_2))]$, as a consequence of Thm.~\ref{thm: Gleason in context form}, applied to $\cN_1$ and $\cN_2$ individually. In particular, $\gamma(1) = \sigma(1) = 1$ by normalisation, and $\sigma(a \otimes b) \geq 0$ for all $a \in (\cN_1)_+$ and $b \in (\cN_2)_+$ by positivity of the probability measures $\mu^\gamma_V$ for all $V\in\VN$. Since every normal state can be approximated by finite linear combinations of normal product states (Prop.~11.2.8 in \cite{KadisonRingroseII}), the correspondence extends to all normal states $\sigma \in \cN_*$. Conversely, we show that every global section $\gamma \in \Gamma[\PPi(\mc{V}(\cN_1) \times \mc{V}(\cN_2))]$ defines a unique normal linear functional $\sigma_\gamma$ on $\cN = \cN_1 \bar{\otimes} \cN_2$, which restricts to $\gamma$ on $\mc{V}(\cN_1) \times \mc{V}(\cN_2)$, i.e., $\gamma = (\sigma|_{\mc{P}(V)})_{V \in \mc{V}(\cN_1) \times \mc{V}(\cN_2)}$.
Fix a context $V_1 \in \mc{V}(\mc{N}_1)$ and consider the corresponding partial order of contexts under inclusion, inherited from $\mc{V}_{1\&2} := \mc{V}(\mc{N}_1) \times \mc{V}(\mc{N}_2)$ by restriction, \begin{equation*} \mc{V}_{1\&2}(V_1) := \{V_1 \times V_2 \mid V_2 \in \mc{V}(\mc{N}_2)\}\; . \end{equation*} In every context $V = V_1 \bar{\otimes} V_2 \in \mc{V}_{1\&2}$, the probability measure $\mu^\gamma_V \in \PPi(\mc{V}_{1\&2})_V$ corresponding to the global section $\gamma$ can be written (in terms of conditional probabilities) as follows: \begin{equation}\label{eq: local Gleason} \forall p \in \mc{P}(V_1), q \in \mc{P}(V_2):\ \mu^\gamma_V(p,q) = \mu_{V_1}^\gamma(p) \mu_{V_2}^\gamma(q \mid p) = \mu_{V_1}^\gamma(p) \gamma_2^p(q) = \mu_{V_1}^\gamma(p) \sigma_2^p(q)\; . \end{equation} Here, $(\mu_{V_2}^{\gamma} (\cdot \mid p))_{V_2 \in \mc{V}(\mc{N}_2)} =: \gamma_2^p \in \Gamma[\PPi(\mc{V}_{1\&2}(V_1))]$ is a global section of the probabilistic presheaf $\PPi(\mc{V}_{1\&2}(V_1))$, which also depends on $p \in \mc{P}(V_1)$. Since $\mc{V}_{1\&2}(V_1) \cong \mc{V}(\mc{N}_2)$, such global sections correspond with normal states on $\cN_2$ by Thm.~\ref{thm: Gleason in context form}, hence, $\sigma_2^p \in (\cN_2)_*$ for all $p \in \mc{P}(V_1)$. Moreover, since $V_1 \in \mc{V}(\mc{N}_1)$ was arbitrary, Eq.~(\ref{eq: local Gleason}) holds for all $p \in \mc{P}(\cN_1)$. Define a map $\widetilde{\varrho}_\gamma: \mc{P}(\cN_1) \ra (\cN_2)_*$ by $\widetilde{\varrho}_\gamma(p) := \mu_{V_1}^\gamma(p) \sigma_2^p$. Clearly, $\widetilde{\varrho}_\gamma$ is bounded and $(\cN_2)_*$ is a Banach space (as a closed subspace of the continuous dual of $\mc{N}_2$).
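Eq.~(\ref{eq: local Gleason}) is the operator-algebraic version of conditioning. For a normal state on $M_2(\C) \bar{\otimes} M_2(\C)$ it reads as follows (a numerical sketch, illustrative only, with the state given by a density matrix):

```python
import numpy as np

# Conditional decomposition mu(p,q) = mu_1(p) * sigma_2^p(q) for a normal
# state on M_2 (x) M_2: sigma_2^p is the normalised conditional state
# b -> sigma(p (x) b) / sigma(p (x) 1).
rng = np.random.default_rng(2)
g = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = g @ g.conj().T
rho /= np.trace(rho).real                       # density matrix on C^2 (x) C^2

sigma = lambda x: np.trace(rho @ x).real        # the normal state
p = np.diag([1.0, 0.0])                         # projection in M_2 (first factor)
q = np.diag([0.0, 1.0])                         # projection in M_2 (second factor)

mu1_p = sigma(np.kron(p, np.eye(2)))            # marginal mu_1(p)
sigma2_p = lambda b: sigma(np.kron(p, b)) / mu1_p

assert np.isclose(sigma(np.kron(p, q)), mu1_p * sigma2_p(q))
assert np.isclose(sigma2_p(np.eye(2)), 1.0)     # conditional state is normalised
```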
Moreover, for $p = p_1 + p_2$ with $p_1,p_2 \in \mc{P}(\mc{N}_1)$ orthogonal, i.e., $p_1p_2 = 0$, \begin{equation*}\label{eq: finite additive operator measure} \widetilde{\varrho}_\gamma(p) = \mu_{V_1}^\gamma(p) \sigma_2^p = \mu_{V_1}^\gamma(p_1)\sigma_2^{p_1} + \mu_{V_1}^\gamma(p_2)\sigma_2^{p_2} = \widetilde{\varrho}_\gamma(p_1) + \widetilde{\varrho}_\gamma(p_2)\; , \end{equation*} by additivity of $\gamma$. Similarly, $\widetilde{\varrho}_\gamma$ inherits complete additivity from $\gamma$. By Thm.~A in \cite{BunceWright1992}, $\widetilde{\varrho}_\gamma$ uniquely extends to a normal linear map $\widetilde{\phi}_\gamma: \cN_1 \ra (\cN_2)_*$. Hence, $\sigma_\gamma(p \otimes q) := \widetilde{\phi}_\gamma(p)(q)$ is a normal linear functional such that $\sigma_\gamma(1) = \gamma(1) = 1$ and $\sigma_\gamma(a \otimes b) \geq 0$ for all $a \in (\cN_1)_+$, $b \in (\cN_2)_+$ by positivity of the (product) measures $\mu^\gamma_V$ for all $V\in\mc{V}(\cN_1)\times\mc{V}(\cN_2)$. Recall that in the proof of Thm.~\ref{thm: linearity vs positivity} we lifted the map $\widetilde{\varrho}_\gamma: \mc{P}(\cN_1) \ra (\cN_2)_*$ to a positive bounded linear map $\widetilde{\phi}_\gamma: \cN_1 \ra (\cN_2)_*$ (cf. \cite{BunceWright1992}). Similarly, for $\gamma \in \Gamma[\PPi_D(\mc{V}(\cN_1) \times \mc{V}(\cN_2))]$ we construct a completely additive measure $\widetilde{\varrho}_\gamma: \mc{P}(\cN_1) \ra (\cN_2)_*$, which by the correspondence between normal states and positive trace-class operators (Thm.~7.1.9 in \cite{KadisonRingroseII}) yields a map $\varrho_\gamma: \mc{P}(\cN_1) \ra (\cN_2)_+$ into positive trace-class operators (viz. density matrices). 
Furthermore, note that by assumption there exists a Hilbert space $\cK = \cK_1 \otimes \cK_2$, a map $v: \C \ra \cK$ and orthomorphisms $\varphi_1: \mc{P}(\cN_1) \ra \mc{P}(\cK_1)$ and $\varphi_2: \mc{P}(\cN_2) \ra \mc{P}(\cK_2)$ (uniquely defined by $\varphi_1|_{V_1} = \varphi_{V_1}$ and $\varphi_2|_{V_2} = \varphi_{V_2}$ for all $V_1 \in \mc{V}(\cN_1)$, $V_2 \in \mc{V}(\cN_2)$, respectively) such that \begin{align*} \sigma_\gamma(p\otimes q) = v^*(\varphi_1(p) \otimes \varphi_2(q))v \end{align*} for all $p \in \mc{P}(\cN_1)$ and $q \in \mc{P}(\cN_2)$. Without loss of generality, we may identify $\cK_2 = \cH_2$ (for $\cN_2 \subset \mc{B}(\cH_2)$) and take $\varphi_2$ to be the identity, $\varphi_2(q) = q$. Then let $v = \sum_{ij} c_{ij} v_{1i} \otimes v_{2j} \in \cK$, where $c_{ij} \in \C$ and $\{v_{1i}\}_i$, $\{v_{2j}\}_j$ are orthonormal bases in $\cK_1$ and $\cK_2$, respectively. Since $\sigma_\gamma(p \otimes \cdot)$ corresponds to a normal state on $\cN_2$ for all $p \in \mc{P}(\cN_1)$ (by Thm.~\ref{thm: general Gleason}), we write \begin{align*} \sigma_\gamma(p\otimes q) &= (\sum_{ij} c^*_{ij} v^*_{1i} \otimes v^*_{2j})\ (\varphi_1(p) \otimes q)\ (\sum_{kl} c_{kl} v_{1k} \otimes v_{2l})\\ &= \sum_{ijkl} c^*_{ij}c_{kl}\ (v^*_{1i}\varphi_1(p)v_{1k})\ (v^*_{2j}qv_{2l})\\ &= \mathrm{tr}_{\cH_2}\bigg[\bigg(\sum_{ijkl} c_{ij}c^*_{kl}\ (v^*_{1k}\varphi_1(p)v_{1i})\ v_{2j}v^*_{2l}\bigg)^*q\bigg]\; , \end{align*} where we used that $\varphi^*_1(p) = \varphi_1(p)$ for all $p \in \mc{P}(\cN_1)$ in the last step. Note that under the Hermitian adjoint in the last line the inner product on $\cN_1$ is changed to its complex conjugate.
To reflect this, let $\mc{B}(\cK_1)^*$ denote the algebra $\mc{B}(\cK_1)$ under the Hermitian adjoint, such that for all pairs $v_{1i},v_{1k} \in \cK_1$, $\tilde{v}_{1i},\tilde{v}_{1k} \in \widetilde{\cK}_1 \cong \cK^*_1$ (the dual of $\cK_1$) and $a \in \mc{B}(\cK_1)$ we have \begin{equation}\label{eq: inner product N_1} (\tilde{v}^*_{1k} a^* \tilde{v}_{1i})_{\mc{B}(\cK_1)^*} := (v^*_{1k} a v_{1i})^*_{\mc{B}(\cK_1)}\; . \end{equation} Define a bounded linear map $w: \cK_2 \ra \widetilde{\cK}_1 \otimes \cK_2$ by $w = \sum_{ij} c^*_{ij} \tilde{v}_{1i} \otimes v_{20}v^*_{2j}$, and let $\varphi' = \varphi_1 \otimes 1_{\mc{B}(\cK_2)}$. This yields an orthomorphism $\varphi': \mc{P}(\cN_1) \ra \mc{P}(\cK)$ such that \begin{align*} w^*\varphi'(p)w &= (\sum_{ij} c_{ij} \tilde{v}^*_{1i} \otimes v_{2j}v^*_{20})\ \varphi'(p)\ (\sum_{kl} c^*_{kl} \tilde{v}_{1k} \otimes v_{20}v^*_{2l})\\ &= \sum_{ijkl} c_{ij}c^*_{kl}\ (\tilde{v}^*_{1i} \varphi_1(p)\tilde{v}_{1k})_{\mc{B}(\cK_1)^*}\ v_{2j}v^*_{2l}\\ &= \sum_{ijkl} c_{kl}c^*_{ij}\ (\tilde{v}^*_{1k} \varphi_1(p)\tilde{v}_{1i})_{\mc{B}(\cK_1)^*}\ v_{2l}v^*_{2j} \\ &= \left(\sum_{ijkl} c_{ij}c^*_{kl}\ (v^*_{1k}\varphi_1(p)v_{1i})_{\mc{B}(\cK_1)}\ v_{2j}v^*_{2l}\right)^*\; , \end{align*} where we changed labels $i \leftrightarrow k$, $j \leftrightarrow l$ in the third line, and used Eq.~(\ref{eq: inner product N_1}) together with $\varphi^*_1(p) = \varphi_1(p)$ for all $p \in \mc{P}(\cN_1)$ in the last line. Consequently, \begin{equation*} \widetilde{\varrho}_\gamma(p)(q) = \mathrm{tr}_{\cH_2}[w^*\varphi'(p)wq]\; . \end{equation*} Finally, by Cor.~1 in \cite{BunceWright1993} the completely additive orthomorphism $\varphi': \mc{P}(\cN_1) \ra \mc{P}(\cK)$ extends to a normal Jordan $*$-homomorphism $\Phi_\gamma: \mc{J}(\cN_1) \ra \mc{B}(\cK)$. By Lm.~\ref{lm: global sections to Jordan homos}, $\phi_\gamma: \cN^*_1 \ra \cN_2$ is a decomposable map (between Jordan algebras $\mc{J}(\cN_1)$ and $\mc{J}(\cN_2)$) \cite{Stormer1982}. 
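At its core, the rearrangement above is the finite-dimensional identity $v^*(\varphi_1(p) \otimes q)v = \mathrm{tr}[\varrho_p\, q]$ with $\varrho_p$ a partial trace over the first factor. A simplified numerical sketch (illustrative only, suppressing the conjugation bookkeeping handled via $\widetilde{\cK}_1$ in the text):

```python
import numpy as np

# v*(P (x) q)v = tr[rho_P q] with rho_P = Tr_{K1}[(P (x) 1) v v*], i.e.,
# the vector v and the projection P determine a (trace-class) operator
# rho_P representing the functional q -> v*(P (x) q)v on the second factor.
rng = np.random.default_rng(3)
d1, d2 = 3, 2
v = rng.normal(size=d1 * d2) + 1j * rng.normal(size=d1 * d2)
v /= np.linalg.norm(v)
P = np.diag([1.0, 1.0, 0.0])                     # projection on K1
q = rng.normal(size=(d2, d2)); q = q + q.T       # self-adjoint test operator

def ptrace1(X):
    """Partial trace over the first tensor factor K1."""
    return X.reshape(d1, d2, d1, d2).trace(axis1=0, axis2=2)

rho_P = ptrace1(np.kron(P, np.eye(d2)) @ np.outer(v, v.conj()))
lhs = (v.conj() @ np.kron(P, q) @ v).real
rhs = np.trace(rho_P @ q).real
assert np.isclose(lhs, rhs)
```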
Assume that $\phi_\gamma$ is moreover completely positive. Fix faithful, normal, semi-finite weights $\omega_1: \mc{N}_1 \rightarrow \mathbb{C}$ and $\omega_2: \mc{N}_2 \rightarrow \mathbb{C}$ with respective faithful GNS representations $\pi_{\omega_1}: \cN_1 \ra \mc{B}(\cH_1)$ and $\pi_{\omega_2}: \cN_2 \ra \mc{B}(\cH_2)$. In the product representation $\pi_\omega: \cN_1 \bar{\otimes} \cN_2 \ra \mc{B}(\cH_1\otimes\cH_2)$ induced by the weight $\omega = \omega_1 \bar{\otimes} \omega_2$ we have $\sigma_\gamma(a\otimes b) = \omega(\rho^*_\gamma(a\otimes b))$ for all $a \in \cN_1$, $b \in \cN_2$ and some trace-class operator $\rho_\gamma \in \cN_1 \bar{\otimes} \cN_2 \subset \mc{B}(\cH_1\otimes\cH_2)$.\footnote{In other words, we again identify $\cN_1 \subset \mc{B}(\cH_1)$ and $\cN_2 \subset \mc{B}(\cH_2)$ with their standard forms \cite{Haagerup1975}.} We show that $\rho_\gamma$ is positive and self-adjoint. Define a normal, completely positive reference map $\phi_0: \cN^*_1 \ra \cN_2 \subset \mc{B}(\cH_2)$ by $\phi_0 = \omega^*_1 1_{\mc{B}(\cH_2)} = v_0^* \pi_0 v_0$, where $v_0 = v_1 \otimes 1_{\mc{B}(\cH_2)}: \cH_2 \ra \cH_1 \otimes \cH_2$ for $v_1: \C \ra \cH_1$ in $\omega^*_1 = (v^*_1\pi_{\omega_1}v_1)^* = v^*_1 \pi^*_{\omega_1} v_1 = v^*_1 (\pi_{\omega_1} \circ *) v_1$ (cf. \cite{Stinespring1955}) and $\pi_0 = (\pi_{\omega_1} \circ *) \otimes 1_{\mc{B}(\cH_2)}$. Recall that complete positivity of $\phi_0$ means $\sum_{i,j=1}^n (\phi_0(a_{ij})\eta_i,\eta_j) \geq 0$ for all $\eta_i,\eta_j \in \mc{H}_2$, $n \in \mathbb{N}$, whenever $(a_{ij}) \in M_n(\cN^*_1)_+$. Let $((a_{ij})_m)_{m \in \mathbb{N}}$ be sequences in $\mc{N}^*_1$ such that \begin{equation*} \mathrm{lim}_{m \rightarrow \infty} \sum_{i,j=1}^n (\phi_0((a_{ij})_m)\eta_i,\eta_j) = \sum_{i,j=1}^n (\eta_i,\eta_j)\ \mathrm{lim}_{m \rightarrow \infty} \omega_1((a_{ij})_m) = 0\; .
\end{equation*} Since $\eta_i,\eta_j$ are arbitrary and $\omega_1$ is a faithful, normal weight, we conclude $\mathrm{lim}_{m \rightarrow \infty} (a_{ij})_m = 0$ in the ultraweak topology. Since $\phi_\gamma$ is also a completely positive and normal map, we have $\mathrm{lim}_{m \rightarrow \infty} \sum_{i,j=1}^n (\phi_\gamma((a_{ij})_m)\eta_i,\eta_j) = 0$ for all $\eta_i, \eta_j \in \mc{H}_2$ (cf. Prop.~III.2.2.2 in \cite{Blackadar_OperatorAlgebras}). In the terminology of \cite{Belavkin1986}, $\phi_\gamma$ is thus strongly completely absolutely continuous with respect to $\phi_0$. Consequently, Thm.~2 in \cite{Belavkin1986} asserts the existence of a positive, self-adjoint operator $\rho_\gamma$ such that $\phi_\gamma = v_0^* \rho_\gamma \pi_0 v_0$. It follows that $\sigma_\gamma(a\otimes b) = \omega(\rho_\gamma(a\otimes b))$ for all $a \in \cN_1$, $b \in \cN_2$ defines a positive and thus normal state on $\cN = \cN_1 \bar{\otimes} \cN_2$. Conversely, if $\sigma_\gamma$ is positive, there exists a positive, self-adjoint operator $\rho_\gamma$ of trace-class such that $\sigma_\gamma(a \otimes b) = \omega(\rho_\gamma(a \otimes b))$ for all $a \in \cN_1$, $b \in \cN_2$, hence, $\phi_\gamma = v_0^*\rho_\gamma\pi_0 v_0$. In particular, $\phi_\gamma: \cN^*_1 \ra \cN_2$ is completely positive by complete positivity of $\phi_0$. By Thm.~\ref{thm: Gleason in context form II}, every normal state $\sigma \in (\cN_1 \bar{\otimes} \cN_2)_*$ restricts to a global section of the normal dilated probabilistic presheaf, $\gamma_\sigma = (\sigma|_{\mc{P}(V)})_{V \in \mc{V}(\cN_1) \times \mc{V}(\cN_2)} \in \Gamma[\PPi_D(\mc{V}(\cN_1) \times \mc{V}(\cN_2))]$. It is thus sufficient to show that every normal state $\sigma$ can be written in the form $\sigma(a \otimes b) = \widetilde{\phi}(a)(b)$ for all $a \in \cN_1$, $b \in \cN_2$ and $\phi: \cN^*_1 \ra \cN_2$ completely positive, which follows from Lm.~\ref{lm: positivity vs complete positivity}.
Conversely, let $\gamma \in \Gamma[\PPi_D(\mc{V}(\cN_1) \times \mc{V}(\cN_2))]$. By Lm.~\ref{lm: global sections to Jordan homos}, $\gamma$ corresponds to a normal linear functional of the form $\sigma_\gamma(a \otimes b) = \widetilde{\phi}_\gamma(a)(b)$ for all $a \in \cN_1$, $b \in \cN_2$ and $\phi_\gamma$ a decomposable map (between the respective Jordan algebras $\mc{J}(\cN_1)$ and $\mc{J}(\cN_2)$), i.e., $\phi_\gamma(a) = w^*\Phi_\gamma(a)w$ for $w: \cH_2 \ra \cK$ with $\cN_2 \subset \mc{B}(\cH_2)$ and $\Phi_\gamma: \mc{J}(\cN_1) \ra \mc{J}(\mc{B}(\cK))$ a Jordan $*$-homomorphism. Finally, since $\gamma$ is time-oriented with respect to the dynamical correspondences $\psi^*_{\cN_1} = * \circ \psi_{\cN_1}$ and $\psi_{\cN_2}$, $\Phi_\gamma: \cN^*_1 \ra \cN^v_2 \subset \mc{B}(\cK)$ is also a $C^*$-homomorphism, and $\phi_\gamma$ thus completely positive by Stinespring's theorem \cite{Stinespring1955} (cf. Thm.~III.2.2.4 in \cite{Belavkin1986}). Lm.~\ref{lm: positivity vs complete positivity} asserts that $\sigma_\gamma$ is positive and thus a normal state on $\cN = \cN_1 \bar{\otimes} \cN_2$.
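As an illustrative aside (not part of the proof above), the gap between positivity and complete positivity that Lm.~\ref{lm: positivity vs complete positivity} trades on has a standard finite-dimensional witness: the transpose map on $M_2$ is positive but not completely positive, since its Choi matrix, the swap operator, fails to be positive semidefinite. A minimal numerical sketch:

```python
import numpy as np

# Choi matrix of the transpose map T(a) = a^T on M_2:
# C = sum_{ij} E_ij (x) T(E_ij) = sum_{ij} E_ij (x) E_ji, the swap operator.
d = 2
C = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        E = np.zeros((d, d))
        E[i, j] = 1.0
        C += np.kron(E, E.T)

# T maps positive matrices to positive matrices, yet its Choi matrix has a
# negative eigenvalue, so T is not completely positive (Choi's criterion).
eigs = np.linalg.eigvalsh(C)
assert eigs.min() < 0
```

The swap operator has eigenvalue $+1$ on the symmetric subspace and $-1$ on the antisymmetric one, which is what the eigenvalue check detects.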
2205.00464v3
http://arxiv.org/abs/2205.00464v3
Quadrature formulas for Bessel polynomials
\documentclass[11pt]{amsart} \usepackage{amsmath, amssymb, array} \usepackage[top=30truemm,bottom=30truemm,left=30truemm,right=30truemm]{geometry} \usepackage{mathtools} \usepackage[all]{xy} \usepackage{ascmac} \usepackage[dvips]{graphicx} \usepackage{color} \usepackage{amscd} \usepackage{fancyhdr} \usepackage[dvipdfmx]{hyperref} \usepackage{amsrefs} \usepackage{cleveref} \usepackage{chngcntr} \usepackage{apptools} \usepackage{comment} \newcommand{\ol}{\overline} \newcommand{\tc}{\textcolor} \newcommand{\Sel}{\mathrm{Sel}^{(2)}} \newcommand{\res}{\mathrm{res}} \newcommand{\val}{\mathrm{val}} \newcommand{\disc}{\mathrm{disc}} \newcommand{\tr}{\mathrm{tr}} \newcommand{\sgn}{\mathrm{sgn}} \renewcommand{\1}{\mathbf{1}} \newcommand{\fn}{\footnote} \newcommand{\fnm}{\footnotemark} \newcommand{\fnt}{\footnotetext} \newcommand{\setmid}{\mathrel{}\middle|\mathrel{}} \newcommand{\op}[1]{\mathop{\mathrm{#1}}} \newcommand{\bm}[1]{{\mbox{\boldmath $#1$}}} \newcommand{\A}{\mathbb{A}} \newcommand{\B}{\mathbb{B}} \newcommand{\C}{\mathbb{C}} \newcommand{\D}{\mathbb{D}} \newcommand{\E}{\mathbb{E}} \newcommand{\F}{\mathbb{F}} \newcommand{\G}{\mathbb{G}} \newcommand{\HH}{\mathbb{H}} \newcommand{\K}{\mathbb{K}} \newcommand{\N}{\mathbb{N}} \newcommand{\PP}{\mathbb{P}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\V}{\mathbb{V}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Het}[1][]{\mathop{H_{\text{\'et}}^{#1}}} \newcommand{\glo}{\mathrm{glo}} \newcommand{\loc}{\mathrm{loc}} \newcommand{\rat}{\mathrm{rat}} \newcommand{\sol}{\mathrm{sol}} \newcommand{\nr}{\mathrm{nr}} \renewcommand{\Re}{\mathrm{Re}} \newcommand{\bb}[1]{\boldsymbol{#1}} \newcommand{\ba}{\boldsymbol{a}} \DeclareMathOperator{\ab}{ab} \DeclareMathOperator{\Ann}{Ann} \DeclareMathOperator{\Art}{Art} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Br}{Br} \DeclareMathOperator{\Char}{char} \DeclareMathOperator{\Cl}{Cl} \DeclareMathOperator{\Coker}{Coker} \DeclareMathOperator{\Coim}{Coim} 
\DeclareMathOperator{\Disc}{Disc} \DeclareMathOperator{\divzero}{div_{0}} \DeclareMathOperator{\divinf}{div_{0}} \DeclareMathOperator{\Div}{Div} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\Frob}{Frob} \DeclareMathOperator{\ev}{ev} \DeclareMathOperator{\Gal}{Gal} \DeclareMathOperator{\Gr}{Gr} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\im}{Im} \DeclareMathOperator{\inv}{inv} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\Mu}{\boldsymbol{\mu}} \DeclareMathOperator{\NS}{NS} \DeclareMathOperator{\ord}{ord} \DeclareMathOperator{\Pic}{Pic} \DeclareMathOperator{\QCoh}{QCoh} \DeclareMathOperator{\rad}{rad} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator{\Res}{Res} \DeclareMathOperator{\Reg}{Reg} \DeclareMathOperator{\T}{T} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\tors}{tors} \DeclareMathOperator{\tot}{tot} \DeclareMathOperator{\wt}{\boldsymbol{w}} \title{Quadrature formulas for Bessel polynomials} \author{Hideki Matsumura$^{*}$} \email{[email protected]} \address{${}^*$Department of Mathematics, Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi , Kouhoku-ku, Yokohama, Kanagawa, Japan} \thanks{This research is supported by KAKENHI 18H05233.} \subjclass[2010]{primary 33C45; secondary 65D32; tertiary 14G05} \keywords{quadrature formula, Bessel polynomial, Riesz--Shohat theorem, Newton polygon, Christoffel--Darboux kernel, rational points on elliptic curves} \date{\today} \pagestyle{plain} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \crefname{theorem}{Theorem}{Theorems} \newtheorem{proposition}[theorem]{Proposition} \crefname{proposition}{Proposition}{Propositions} \newtheorem{lemma}[theorem]{Lemma} \crefname{lemma}{Lemma}{Lemmas} \newtheorem{corollary}[theorem]{Corollary} \crefname{corollary}{Corollary}{Corollaries} \newtheorem{conjecture}[theorem]{Conjecture} \crefname{conjecture}{Conjecture}{Conjectures} \newtheorem{question}[theorem]{Question} \crefname{question}{Question}{Questions} 
\newtheorem{problem}[theorem]{Problem} \crefname{problem}{Problem}{Problems} \newtheorem{notation}[theorem]{Notation} \crefname{notation}{Notation}{Notations} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \crefname{definition}{Definition}{Definitions} \newtheorem{example}[theorem]{Example} \crefname{example}{Example}{Examples} \newtheorem{remark}[theorem]{Remark} \crefname{remark}{Remark}{Remarks} \newtheorem{claim}[theorem]{Claim} \crefname{claim}{Claim}{Claims} \begin{document} \maketitle \tableofcontents \begin{abstract} A quadrature formula is a formula that computes a definite integral by evaluating the integrand at finitely many points. The existence of certain quadrature formulas for orthogonal polynomials is related to interesting problems such as Waring's problem in number theory and spherical designs in algebraic combinatorics. Sawa and Uchida proved the existence and the non-existence of certain rational quadrature formulas for the weight functions of certain classical orthogonal polynomials. Classical orthogonal polynomials belong to the Askey-scheme, which is a hierarchy of hypergeometric orthogonal polynomials. Thus, it is natural to extend the work of Sawa and Uchida to other polynomials in the Askey-scheme. In this article, we extend the work of Sawa and Uchida to the weight function of the Bessel polynomials. In the proofs, we use the Riesz--Shohat theorem and Newton polygons. It is also of number theoretic interest that the proofs of some results reduce to determining the sets of rational points on elliptic curves. \end{abstract} \section{Introduction} Let $\gamma \subset \mathbb{C}$ be a smooth path and $x_1,\ldots, x_m$, $y_1,\ldots, y_m \in \mathbb{C}$. Suppose that $w(z)$ is a function such that $\int_{\gamma} f(z) w(z)dz$ exists for any $f(z) \in \mathbb{C}[z]$.
An integration formula of the form \[\sum_{i=1}^{m} x_i f(y_i)=\int_{\gamma} f(z) w(z)dz\] is called a quadrature formula of degree $n$ for a weight function $w(z)$ if it holds for all $f(z) \in \mathbb{C}[z]$ such that $\deg(f) \leq n$. The points $y_i$ are called the nodes of the quadrature formula. For example, for a real interval $\gamma=[a,b]$, $w(t)=1$ and $n=1$, the trapezoidal rule states that \[\frac{b-a}{2}(f(a)+f(b))=\int_a^b f(t) dt \] is a quadrature formula of degree $1$ for $w(t)=1$ with $2$ nodes. For $\gamma=[a,b]$, $w(t)=1$ and $n=3$, Simpson's rule states that \[\frac{b-a}{6} \left(f(a)+4 f\left(\frac{a+b}{2} \right)+f(b) \right)=\int_a^b f(t) dt\] is a quadrature formula of degree $3$ for $w(t)=1$ with $3$ nodes. A quadrature formula is called rational if all nodes are rational numbers. The existence of certain rational quadrature formulas for orthogonal polynomials is related to interesting problems such as Waring's problem in number theory (cf.\ \cite{Hausdorff,Nesterenko}) and spherical designs in algebraic combinatorics (cf.\ \cite{BB2005,DGS1977}). See \cite{Hausdorff,SU20} for applications. Quadrature formulas are determined by the weight function $w(t)$, the degree $n$ and the number $m$ of nodes. The following problem is fundamental: \begin{problem} For a fixed weight function $w(t)$ and a natural number $m$, determine the range of $n$ such that there exists a rational quadrature formula of degree $n$ for $w(t)$ with $m$ nodes. \end{problem} By the Stroud-type bound (cf.\ \cite[Proposition 4.3]{SU19}, \cite[Proposition 2.7]{SU20}), if there exists a quadrature formula of degree $n$ for $w(t)$ with $m$ nodes, then $n \leq 2m-1$. The case $n=2m-1$ is called the tight case.
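As an illustrative aside (not part of the original text), the stated exactness degrees of the trapezoidal and Simpson rules can be verified in exact rational arithmetic; the interval $[0,1]$ is an arbitrary choice:

```python
from fractions import Fraction

# A quadrature formula of degree n must reproduce the monomial integrals
# \int_0^1 t^k dt = 1/(k+1) for every k <= n.
a, b = Fraction(0), Fraction(1)

def trapezoid(f):
    return (b - a) / 2 * (f(a) + f(b))

def simpson(f):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

for k in range(2):        # trapezoidal rule: exact up to degree 1
    assert trapezoid(lambda t: t**k) == Fraction(1, k + 1)
assert trapezoid(lambda t: t**2) != Fraction(1, 3)

for k in range(4):        # Simpson's rule: exact up to degree 3
    assert simpson(lambda t: t**k) == Fraction(1, k + 1)
assert simpson(lambda t: t**4) != Fraction(1, 5)
```

Both checks are consistent with the Stroud-type bound $n \leq 2m-1$: the trapezoidal rule has $m=2$, $n=1$ and Simpson's rule attains the tight case $m=3$, $n=3$ only up to the non-tight value $n=3 < 5$.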
Sawa and Uchida \cite{SU19,SU20} considered the ``almost tight case" and proved that there exists no quadrature formula of degree $2r$ for certain classical orthogonal polynomials (Hermite polynomials $H_n(t)$, Legendre polynomials $P_n(t)$ and Laguerre polynomials $L_n^{(0)}(t)$) with $r+1$ nodes on $\mathbb{Q}$ (\cite[Theorem 1.4]{SU20}). They also proved that there exist quadrature formulas of degree $r+1$ for the above classical orthogonal polynomials with $r+1$ nodes on $\mathbb{Q}$ (\cite[Theorem 5.5]{SU20}). They took algebro-geometric approaches involving the Riesz--Shohat theorem, Christoffel--Darboux kernels and Newton polygons. Classical orthogonal polynomials belong to the Askey-scheme \cite{KLS}, which is a hierarchy of hypergeometric orthogonal polynomials. Thus, it is natural to extend the work of \cite{SU19,SU20} to more general polynomials in the Askey-scheme. In this article, we extend the work of Sawa and Uchida \cite{SU19,SU20} to the weight function of the Bessel polynomials \cite[p.\ 101]{KF} \begin{align*} y_n(x):=\sum_{k=0}^n \frac{(n+k)!}{(n-k)!k!} \left(\frac{x}{2} \right)^k. \end{align*} The Bessel polynomials belong to the Askey-scheme and are orthogonal with respect to the weight function \[w(z)=-\frac{1}{4 \pi \sqrt{-1}} e^{-\frac{2}{z}},\] where the path of integration is the unit circle $S^1 \subset \mathbb{C}$, i.e., \[\int_{S^1} y_m(z)y_n(z)w(z)dz=0\] for $m \neq n$ (cf.\ \cite[p.\ 104]{KF}). Orthogonal polynomials are defined via inner products expressed as integrals. The inner products for the classical orthogonal polynomials in \cite{SU19,SU20} are expressed as real integrations. In our case, on the other hand, the inner product is expressed as a complex integration. As a generalization of ``quadrature formulas with nodes on $\mathbb{Q}$" (\cite{SU19,SU20}), we consider ``quadrature formulas with the real parts and the imaginary parts of nodes on $\mathbb{Q}$."
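As a numerical aside (not part of the original text), the orthogonality relation above can be cross-checked in exact arithmetic, using the moment values $\int_{S^1} z^j w(z)\,dz = (-2)^j/(j+1)!$ obtained from the residue theorem later in the paper:

```python
from fractions import Fraction
from math import factorial

# j-th moment of the Bessel weight over S^1 (via the residue theorem):
# -(1/(4*pi*i)) \oint z^j e^(-2/z) dz = (-2)^j / (j+1)!
def moment(j):
    return Fraction((-2)**j, factorial(j + 1))

def bessel(n):
    # coefficients of y_n(x), lowest degree first: (n+k)!/((n-k)! k! 2^k)
    return [Fraction(factorial(n + k), factorial(n - k) * factorial(k) * 2**k)
            for k in range(n + 1)]

def inner(p, q):
    # \oint p(z) q(z) w(z) dz, expanded through the moments
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b
    return sum(c * moment(k) for k, c in enumerate(prod))

assert inner(bessel(2), bessel(3)) == 0      # orthogonality of y_2 and y_3
assert inner(bessel(2), bessel(2)) != 0      # h_2 is nonzero
assert moment(0) == 1                        # normalization of w(z)
```

The `bessel` coefficients also make the integrality $y_n(x) \in \mathbb{Z}[x]$ visible (each `Fraction` has denominator $1$).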
Since the path of integration is $S^1$, we consider the following problems: \begin{problem} \label{problem} \begin{enumerate} \item Does there exist a quadrature formula of degree $2r$ for Bessel polynomials with $r+1$ nodes on $\mathbb{Q}(\sqrt{-1}) \cap S^1$? \item Does there exist a quadrature formula of degree $r+1$ for Bessel polynomials with $r+1$ nodes on $\mathbb{Q}(\sqrt{-1}) \cap S^1$? \end{enumerate} \end{problem} There are some previous works concerning spherical designs and quadrature formulas on sufficiently large extensions of $\mathbb{Q}$. For example, for fixed $t$ and $d \in \mathbb{Z}_{>0}$, Cui, Xia and Xiang \cite[Theorem 1.3]{CXX} proved the existence of spherical $t$-designs on $S^{d-1}$ with sufficiently many nodes on the algebraic extension $\mathbb{Q}(\sqrt{q} \mid q: \mbox{prime})$ over $\mathbb{Q}$. For fixed $p \in \mathbb{Z}_{\geq 2}$ and $n \in \mathbb{Z}_{>0}$, Kuperberg \cite[Theorem 1, p.\ 855]{Kuperberg} proved the existence of certain Chebyshev-type quadrature formulas (quadrature formulas with equal weights) of degree $2n+1$ with $p^n$ nodes on certain number fields (see also \cite[Lemma 2]{Kuperberg}). As far as the author knows, there are few works considering quadrature formulas whose nodes lie in a quadratic extension of $\mathbb{Q}$. Note that \cite{SU19,SU20} do not consider quadrature formulas with nodes on $S^1$. In this article, we give negative answers to \cref{problem} (1) for all $r \in \mathbb{Z}_{\geq 1}$ and to \cref{problem} (2) for $r=2$. \begin{theorem} \label{MT1} \begin{enumerate} \item There exists no quadrature formula of degree $2$ for Bessel polynomials with $2$ nodes on $\mathbb{Q}(\sqrt{-1}) \cap S^1$. \item There exists no quadrature formula of degree $4$ for Bessel polynomials with $3$ nodes on $\mathbb{Q}(\sqrt{-1}) \cap S^1$. \item There exists no quadrature formula of degree $2r$ for Bessel polynomials with $r+1$ nodes on $\mathbb{Q}(\sqrt{-1})$ for all $r \in \mathbb{Z}_{\geq 3}$.
\end{enumerate} \end{theorem} The proof of (1) is a direct computation, and part (2) follows from \cref{MT2}. In the proof of (3), we use the Riesz--Shohat theorem (\cref{RS}) and a fact on Newton polygons (\cref{NP}). An advantage of Bessel polynomials is that the computation (or estimation) of $p$-adic valuations is relatively easy. On the other hand, to prove (2), we cannot use \cref{NP} since we can construct infinitely many quadrature formulas of degree $4$ for Bessel polynomials with $3$ nodes on $\mathbb{Q}(\sqrt{-1})$ by using the Christoffel--Darboux kernel (cf.\ \cref{4-3}). By contrast, there exists no quadrature formula of degree $4$ for Hermite polynomials with $3$ nodes on $\mathbb{Q}$ by \cite[Theorem 1.4]{SU20}. \begin{theorem} \label{MT2} There exists no quadrature formula of degree $3$ for Bessel polynomials with $3$ nodes on $\mathbb{Q}(\sqrt{-1}) \cap S^1$. \end{theorem} Note that it is an immediate consequence of the Riesz--Shohat theorem that there exists a quadrature formula of degree $r$ for Bessel polynomials with $r+1$ nodes on $\mathbb{Q}(\sqrt{-1}) \cap S^1$ (\cref{RSrem}). Thus, the problem is to increase the degree. In the proof of \cref{MT2}, we use the Riesz--Shohat theorem (\cref{RS}) to reduce the problem to determining the set of rational points on an elliptic curve. It seems hard to extend the proof of \cref{MT2} to $r \geq 3$. On the other hand, by the Riesz--Shohat theorem (\cref{RS}), we can prove that there exist quadrature formulas of degree $r+1$ with $r+1$ nodes on $\mathbb{Q}$ for all $r \in \mathbb{Z}_{\geq 0}$. \begin{theorem} \label{MT3} There exist quadrature formulas of degree $r+1$ for Bessel polynomials with $r+1$ nodes on $\mathbb{Q}$ for all $r \in \mathbb{Z}_{\geq 0}$. \end{theorem} This article is organized as follows: In \S2, we formulate a quadrature formula on $\mathbb{C}$ and introduce three key tools: the Riesz--Shohat theorem (\cref{RS}), Newton polygons and Christoffel--Darboux kernels.
In \S3, we prove \cref{MT1} by using the Riesz--Shohat theorem and Newton polygons. We also construct an example of quadrature formulas of degree $4$ for Bessel polynomials with $3$ nodes on $\mathbb{Q}(\sqrt{-1})$ by using the Christoffel--Darboux kernel (\cref{4-3}). In \S4, we prove \cref{MT2} by using the Riesz--Shohat theorem. It is also of number theoretic interest that this proof is reduced to determining the set of rational points on an elliptic curve. Then, in \S5, we prove \cref{MT3} by using the Riesz--Shohat theorem. \section{Quadrature formulas for Bessel polynomials} We consider the ``quadrature formulas" for Bessel polynomials. Since \[\frac{1}{2^k} \frac{(n+k)!}{(n-k)!k!}=\frac{1}{2^k} \frac{(n+k)!}{(n-k)!(2k)!} \frac{(2k)!}{k!}=\binom{n+k}{2k}(2k-1)!!,\] note that $y_n(x) \in \mathbb{Z}[x]$. Unlike the classical orthogonal polynomials in \cite{SU20}, the path of integration is $S^1 \subset \mathbb{C}$. Let $\gamma \subset \mathbb{C}$ be a smooth path. \fn{Later, we consider $\gamma=S^1$. For the formulation over $\mathbb{R}$, see \cite{SU20}. } Let $w(z)$ be a complex function such that $\int_\gamma f(z) w(z) dz$ exists for any $f(z) \in \mathbb{C}[z]$. Suppose that $\int_\gamma \phi_l(z)^2 w(z) dz \neq 0$ for all $l$, where $\{\phi_l\}$ is the system of monic orthogonal polynomials with respect to $w(z)$. We consider ``quadrature formulas" in the following sense: \begin{definition} \label{CQF} An integration formula of the form \begin{align} \label{QF} \sum_{i=1}^{m} x_i f(z_i)=\int_{\gamma} f(z) w(z)dz \end{align} with $x_1,\ldots, x_m, z_1, \ldots, z_m \in \mathbb{C}$ is called a quadrature formula of degree $n$ for a weight function $w(z)$ with nodes $z_i$ and weights $x_i$ if it holds for all $f(z) \in \mathbb{C}[z]$ such that $\deg(f) \leq n$. A quadrature formula on a subset $S \subset \mathbb{C}$ is a quadrature formula such that $z_i \in S$ for all $i$. 
\end{definition} For Bessel polynomials, we take \[w(z)=-\frac{1}{4 \pi \sqrt{-1}} e^{-\frac{2}{z}} \] so that \[ \int_{S^1} w(z) dz=1. \] We introduce key tools: the Riesz--Shohat theorem \cite{Shohat} and a lemma on Newton polygons. They are key tools in the proofs of \cite[Theorem 1.4, Theorem 5.5]{SU20}, and they also work over $\mathbb{C}$. In the proofs of \cref{MT1,MT2,MT3}, we use the Riesz--Shohat theorem. Its proof is entirely analogous to the real case (\cite{Shohat}). \begin{theorem} [{Riesz--Shohat theorem, cf.\ \cite[Proposition 2.4]{SU20}, \cite[Theorem I]{Shohat}}] \label{RS} Suppose that $1 \leq k \leq r+2$, $z_1, \ldots, z_{r+1} \in \mathbb{C}$ are distinct and let \[\theta_{r+1}(z) :=\prod_{i=1}^{r+1}(z-z_i).\] Then, the following are equivalent: \begin{enumerate} \item There exist $x_1, \ldots, x_{r+1} \in \mathbb{C}$ such that \[\sum_{i=1}^{r+1} x_i f(z_i)=\int_{\gamma} f(z)w(z)dz\] is a quadrature formula of degree $2(r+1)-k$. \item $\theta_{r+1}$ is a quasi-orthogonal polynomial of degree $r+1$ and of order $k-1$, i.e., there exist $b_1, \ldots, b_{k-1} \in \mathbb{C}$ such that \[\theta_{r+1}(z)=\phi_{r+1}(z)+b_1 \phi_r(z)+ \cdots +b_{k-1} \phi_{r+2-k}(z).\] Here, $\{ \phi_l \}$ is the system of monic orthogonal polynomials with respect to $w(z)$. \end{enumerate} Furthermore, if the above equivalent conditions hold, then we have \[x_i =\int_{\gamma} \frac{\theta_{r+1}(z)}{(z-z_i)\theta'_{r+1}(z_i)}w(z)dz.\] \end{theorem} In what follows, $\{\phi_l \}$ denotes the system of monic orthogonal polynomials with respect to a weight function $w(t)$ and $\{\Phi_l \}$ denotes a system of (not necessarily monic) orthogonal polynomials with respect to $w(t)$. \begin{remark} \label{RSrem} Let $w(z)$ be a weight function and $A$ be a subset of $\mathbb{C}$ with at least $r+1$ elements. Then, there exists a quadrature formula of degree $r$ for $w(z)$ with $r+1$ nodes on $A$. Indeed, this is an immediate consequence of \cref{RS}.
Note that this is the case $2r+2-k=r$, i.e., $k=r+2$ in \cref{RS}. Let $\{\phi_l\}$ be the system of monic orthogonal polynomials with respect to $w(z)$. Then, since $\{\phi_0, \ldots, \phi_{r+1}\}$ is a basis of the space of polynomials of degree $\leq r+1$ and $\theta_{r+1}(z)$ is monic, there exist $b_1, \ldots, b_{r+1} \in \mathbb{C}$ such that \[\theta_{r+1}(z)=\phi_{r+1}(z)+b_1 \phi_r(z)+ \cdots +b_{r+1} \phi_0(z).\] Therefore, by \cref{RS}, for all $z_1, \ldots, z_{r+1} \in A$, there exist $x_1, \ldots, x_{r+1} \in \mathbb{C}$ such that \[\sum_{i=1}^{r+1} x_i f(z_i)=\int_{\gamma} f(z)w(z)dz\] is a quadrature formula of degree $r$. In particular, there exists a quadrature formula of degree $r$ for Bessel polynomials with $r+1$ nodes on $\mathbb{Q}(\sqrt{-1}) \cap S^1$. \fn{We can take any $r+1$ points on $\mathbb{Q}(\sqrt{-1}) \cap S^1$ as nodes.} \end{remark} In the proof of \cref{MT1} (3), we use a fact on Newton polygons. \begin{lemma} [{Cf.\ \cite[Lemma 4.2]{SU20}}] \label{NP} Let $K$ be a discrete valuation field, $\mathcal{O}_K$ be its ring of integers and $f(x) \in \mathcal{O}_K[x]$. If all zeros of $f(x)$ are $K$-rational, then all edges of the Newton polygon of $f(x)$ have integral slopes. \end{lemma} We also use the following version of Bertrand's postulate: \begin{theorem} [{\cite[p.\ 505]{Breusch}}] \label{BPcong} For $n \geq 4$, there exists a prime number $p$ such that $n<p \leq 2n$ and $p \equiv 3 \pmod{4}$. \end{theorem} As in the proof of \cite[Lemma 4.5]{SU20}, we obtain the following lemma: \begin{lemma} \label{BPC} For all $l \in \mathbb{Z}_{\geq 7}$, there exists a prime number $p$ such that $(l+1)/2<p \leq l$ and $p \equiv 3 \pmod{4}$. \end{lemma} In \cref{4-3}, we use the Christoffel--Darboux kernel. \begin{definition} [{\cite[p.\ 1243]{SU20}}] Let $\{\Phi_l \}$ be a system of orthogonal polynomials with respect to a weight function $w(z)$ over $\gamma \subset \mathbb{C}$.
The Christoffel--Darboux kernel for polynomials of degree at most $l$ is defined by \[K_l(x,y):=\sum_{k=0}^l h_lh_k^{-1} \Phi_k(x) \Phi_k(y).\] Here, $h_k:=\int_{\gamma} \Phi_k(z)^2 w(z) dz$. \end{definition} The Christoffel--Darboux kernel is computed by the following formula: \begin{proposition} [{Christoffel--Darboux formula, cf.\ \cite[Proposition 2.1]{SU20}, \cite[Theorem 3.2.2]{Szego}}] \label{CDF} For $l \in \mathbb{Z}_{\geq 0}$, let $k_l$ be the leading coefficient of $\Phi_l(x)$. Then, \[K_l(x,y)=\frac{k_l}{k_{l+1}} \frac{\Phi_{l+1}(x) \Phi_l(y)-\Phi_l(x) \Phi_{l+1}(y)}{x-y}. \] \end{proposition} \begin{definition} \label{fr} For $l \in \mathbb{Z}_{\geq 0}$, set \[f_l(x,y):=\frac{\Phi_{l+1}(x) \Phi_l(y)-\Phi_l(x) \Phi_{l+1}(y)}{x-y}.\] \end{definition} By \cref{RS} and \cref{CDF}, we obtain the following lemma: \begin{lemma} [{Cf.\ \cite[Lemma 2.5]{SU20}}] \label{CDK} Suppose that $x_1, \ldots, x_{r+1}$, $z_1, \ldots , z_{r+1} \in \mathbb{C}$ satisfy \cref{QF} for $(m,n)=(r+1,2r)$. Assume that $z_1, \ldots , z_{r+1}$ are distinct. Then $f_r(z_i,z_j) = 0$ for every distinct $i$, $j$. \end{lemma} \section{Proof of \cref{MT1}} In this section, we prove \cref{MT1}. We also construct an example of quadrature formulas of degree $4$ for Bessel polynomials with $3$ nodes on $\mathbb{Q}(\sqrt{-1})$. \subsection{Proof of \cref{MT1} (1), (2)} In this subsection, we prove \cref{MT1} (1) and (2). \begin{proof} [{Proof of \cref{MT1} (1), (2)}] \begin{enumerate} \item Suppose that there exist $x_1$, $x_2 \in \mathbb{C}$, $z_1$, $z_2 \in \mathbb{Q}(\sqrt{-1})$ such that $|z_1|=|z_2|=1$ and \begin{align*} x_1+x_2 &=1, \\ x_1z_1+x_2z_2 &=-1, \\ x_1z_1^2+x_2z_2^2 &=\frac{2}{3}. \end{align*} By the first and the second equalities, we obtain \[(z_1-z_2)x_1= -z_2-1.\] If $z_1=z_2$, then \[z_1=z_2=-1.\] Thus, by the third equality, \[x_1+x_2=\frac{2}{3},\] which contradicts the first equality.
Since $z_1 \neq z_2$, \[x_1=\frac{-z_2-1}{z_1-z_2}.\] We substitute it and the first equality into the third equality to obtain \[ -z_1-z_1z_2-z_2=\frac{2}{3}. \] If $z_1=-1$, then \[1+z_2-z_2=\frac{2}{3}, \] which is a contradiction. Thus, $z_1 \neq -1$. Therefore, \[z_2=\frac{-z_1-\frac{2}{3}}{z_1+1}. \] Since $|z_1|=|z_2|=1$, \[ \Re(z_1)=-\frac{5}{6}.\] Thus, we have \[\im(z_1)= \pm \frac{\sqrt{11}}{6} \not\in \mathbb{Q}, \] which is a contradiction. \item This is an immediate consequence of \cref{MT2}. \end{enumerate} \end{proof} \begin{remark} \label{UQ} There exists a unique quadrature formula of degree $2$ for Bessel polynomials with $2$ nodes on $S^1 \subset \mathbb{C}$. Indeed, if there exists such a quadrature formula on $S^1$, then we have \begin{align*} z_1 &=\frac{-5+\sqrt{-11}}{6},\\ z_2 &=\frac{-5-\sqrt{-11}}{6} \end{align*} by the proof of \cref{MT1} (1). Then, by \cref{RS} and the residue theorem, \begin{align*} x_1 &=\int_{S^1} \frac{z-z_2}{z_1-z_2} w(z) dz=\frac{11+\sqrt{-11}}{22}, \\ x_2 &=\int_{S^1} \frac{z-z_1}{z_2-z_1} w(z) dz=\frac{11-\sqrt{-11}}{22}. \end{align*} Since \[-\frac{1}{4 \pi \sqrt{-1}}\int_{S^1} z^j e^{-\frac{2}{z}} dz=\frac{(-2)^j}{(j+1)!}\] for $j \in \mathbb{Z}_{\geq 0}$ by the residue theorem, we can check that \[x_1f(z_1)+x_2f(z_2)=-\frac{1}{4 \pi \sqrt{-1}}\int_{S^1} f(z) e^{-\frac{2}{z}} dz\] is a quadrature formula of degree $2$ for Bessel polynomials. \end{remark} \subsection{Proof of \cref{MT1} (3)} In this subsection, we prove \cref{MT1} (3). We also construct an example of quadrature formulas of degree $4$ for Bessel polynomials with $3$ nodes on $\mathbb{Q}(\sqrt{-1})$. \begin{proof} [{Proof of \cref{MT1} (3)}] Assume that there exist $x_1, \ldots, x_{r+1} \in \mathbb{C}, z_1, \ldots, z_{r+1} \in \mathbb{Q}(\sqrt{-1})$ such that \[\sum_{i=1}^{r+1} x_iz_i^j= -\frac{1}{4 \pi \sqrt{-1}} \int_{S^1} z^j e^{-\frac{2}{z}} dz \quad (j=0,1, \ldots, 2r). 
\] By \cref{RS}, there exists $c \in \mathbb{Q}(\sqrt{-1})$ such that the zeros of the quasi-Bessel polynomial $y_{r+1;c}(x)$ are $z_1, \ldots, z_{r+1}$. Write $c=s/t$ with $s, t \in \mathbb{Z}[\sqrt{-1}]$ and $\gcd (s,t)=1$. Let \[f(x):=ty_{r+1;c}(x)= ty_{r+1}(x)+sy_{r}(x) \in \mathbb{Z}[\sqrt{-1}][x].\] Write \[f(x)=\sum_{k=0}^{r+1} a_k x^{r+1-k}. \] We have \begin{align} \label{(25)'} a_k=\begin{cases} t(2r+1)!! & (k=0),\\ (t(2r+1) +s)(2r-1)!! & (k=1), \\ \left(t \binom{2(r+1)-k}{2(r+1-k)}+s \binom{2r+1-k}{2(r+1-k)} \right) (2(r+1-k)-1)!! & (2 \leq k \leq r+1). \end{cases} \end{align} First, assume that $r \geq 7$ or $r \in \{3, 4\}$. For $r \geq 7$, by \cref{BPC}, we can take a prime number $p$ such that $(r+1)/2<p \leq r$ and $p \equiv 3 \pmod{4}$ (i.e., $p$ is inert in $\mathbb{Q}(\sqrt{-1})$). For $r=3$, $4$, let $p=3$. We consider the Newton polygon of $f(x)$ with respect to $p$. Let $P_k:=(k,v_p(a_k)) \in \mathbb{R}^2$. Let \[j:=\begin{cases} r & (t+s \equiv 0 \pmod{p}),\\ r+1 & (t+s \not\equiv 0 \pmod{p}). \end{cases}\] Then, $v_p(a_j)=0$ by \cref{(25)'}. Indeed, if $t+s \equiv 0 \pmod{p}$, then $v_p(s)=v_p(t)=0$ by $\gcd (s,t)=1$. Since $p<r+1<2p$ and $p$ is odd, \[v_p(a_r)=v_p \left(\frac{(r+1)((t+s)r+2t)}{2} \right)=0.\] If $t+s \not\equiv 0 \pmod{p}$, then $v_p(a_{r+1})=v_p(t+s)=0$. Therefore, there exists $j_0:=\min \{j \mid v_p(a_j)=0 \}$. If $v_p(t) =0$, then \[v_p(a_0)=v_p((2r+1)!!)= \begin{cases} 1 & (2r+1<3p),\\ 2 & (2r+1 \geq 3p, \; p>3),\\ 3 & (2r+1 \geq 3p, \; p=3) \end{cases}\] by \cref{(25)'} and $p<2r+1<4p$. If $v_p(t) \geq 1$, then $v_p(s)=0$ by $\gcd (s,t)=1$. Thus, \fn{If $p=3$, then $r<5$ by $(r+1)/2<p$. Thus, $2r-1<9=3p$.} \[v_p(a_1)=v_p((2r-1)!!)= \begin{cases} 1 & (2r-1<3p), \\ 2 & (2r-1 \geq 3p) \end{cases}\] by \cref{(25)'} and $p<2r-1<4p$. \begin{enumerate} \item Suppose that $v_p(t)=0$. \begin{enumerate} \item If $2r+1<3p$, \fn{This includes the case $r=3$.} then $v_p(a_0)=1$. Let $j_0:=\min \{j \mid v_p(a_j)=0 \}$.
In this case, the Newton polygon of $f(x)$ has the edge $P_0P_{j_0}$, whose slope is $-1/j_0 \not\in \mathbb{Z}$ since $j_0 > r-(p-1)/2 \geq p-(p-1)/2 =(p+1)/2 \geq 2$ by $p \geq 3$. For the first inequality, note that if $p$ is an odd prime and $p \leq 2(r+1-k)-1$, i.e., $k \leq r-(p-1)/2$, then $v_p(a_k) \geq 1$. \item If $r \geq 7$ and $2r+1 \geq 3p$, then $v_p(a_0)= 2$ and $v_p(a_1) \geq 1$. \begin{enumerate} \item If $v_p(a_1)=1$, then the Newton polygon of $f(x)$ has the edge $P_1P_{j_0}$, whose slope is $-1/(j_0-1) \not\in \mathbb{Z}$ since $j_0 > 2$. \item If $v_p(a_1) \geq 2$ and $v_p(a_i) \neq 1$ for all $i$, then the Newton polygon of $f(x)$ has the edge $P_0P_{j_0}$, whose slope is $-2/j_0 \not\in \mathbb{Z}$ since $j_0 > 2$. \item If $v_p(a_1) \geq 2$ and $v_p(a_i)=1$ for some $i \geq 2$, then let $i_0:=\min \{i \mid v_p(a_i)=1 \}$. The Newton polygon of $f(x)$ has the edge $P_0P_{i_0}$, whose slope is $-1/i_0 \not\in \mathbb{Z}$ since $i_0 >1$. \end{enumerate} \item If $r=4$, then \begin{align*} a_0 &=945t, \\ a_1 &=3(315t+35s), \\ a_2 &= 3(140t+35s), \\ a_3 &=3(35t+15s), \\ a_4 &=15t+10s, \\ a_5 &=t+s. \end{align*} \begin{enumerate} \item If $v_3(s)>0$, then $v_3(a_0)=3$, $v_3(a_1) \geq 2$, $v_3(a_2)=1$, $v_3(a_3)$, $v_3(a_4) \geq 1$ and $v_3(a_5)=0$. Thus, the Newton polygon of $f(x)$ has the edge $P_2P_5$, whose slope is $-1/3 \not\in \mathbb{Z}$. \item If $v_3(s)=0$, then $v_3(a_0) = 3$, $v_3(a_1)=1$, $v_3(a_2)$, $v_3(a_3) \geq 1$ and $v_3(a_4)=0$. Thus, the Newton polygon of $f(x)$ has the edge $P_1P_4$, whose slope is $-1/3 \not\in \mathbb{Z}$. \end{enumerate} \end{enumerate} \item Suppose that $v_p(t)>0$. Since $\gcd (s,t)=1$, $v_p(s)=v_p(t+s)=0$. We have $v_p(a_0) \geq 2$. \begin{enumerate} \item If $2r-1<3p$, then $v_p(a_1)=1$ and the Newton polygon of $f(x)$ has the edge $P_1P_{j_0}$, whose slope is $-1/(j_0-1) \not\in \mathbb{Z}$ since $j_0 > 2$. \item If $2r-1 \geq 3p$, then $v_p(a_0) \geq 3$ and $v_p(a_1)=2$.
We have $v_p(a_2) \geq 1$ by $j_0>(p+1)/2 \geq 3$. \begin{enumerate} \item If $v_p(a_2)=1$, then the Newton polygon of $f(x)$ has the edge $P_2P_{j_0}$, whose slope is $-1/(j_0-2) \not\in \mathbb{Z}$ since $j_0 > r-(p-1)/2 \geq (3p+1)/2-(p-1)/2 \geq p+1 \geq 4$ by $2r-1 \geq 3p$. The first inequality follows from the argument in Case (1-a). \item If $v_p(a_2) \geq 2$ and $v_p(a_i) \neq 1$ for all $i$, then the Newton polygon of $f(x)$ has the edge $P_1P_{j_0}$, whose slope is $-2/(j_0-1) \not\in \mathbb{Z}$ since $j_0 > 4$. \item If $v_p(a_2) \geq 2$ and $v_p(a_i) = 1$ for some $i \geq 3$, then the Newton polygon of $f(x)$ has the edge $P_1P_{i_0}$, whose slope is $-1/(i_0-1) \not\in \mathbb{Z}$ since $i_0 >2$. \end{enumerate} \end{enumerate} \end{enumerate} For $r=5$, $6$, we consider the $(2+\sqrt{-1})$-adic valuation and prove the theorem similarly. \fn{Note that for $n \in \mathbb{Z}$, $n$ is divisible by $2+\sqrt{-1}$ in $\mathbb{Z}[\sqrt{-1}]$ if and only if $n$ is divisible by $5$ in $\mathbb{Z}$.} \end{proof} Next, we construct an example of quadrature formulas of degree $4$ for Bessel polynomials with $3$ nodes on $\mathbb{Q}(\sqrt{-1})$. Since $y_2(z)=3z^2+3z+1$ and $y_3(z)=15z^3+15z^2+6z+1$, we have \begin{align*} f_2(x,y) = 3((15y^2+15y+5)x^2+(15y^2+14y+4)x+(5y^2+4y+1)) \end{align*} by \cref{fr}. Multiplying both sides of $f_2(x,y)=0$ by $4(15y^2+15y+5)/3$ and completing the square, we have \[(2(15y^2+15y+5)x+(15y^2+14y+4))^2+75y^4+120y^3+84y^2+28y+4=0. \] Let $w:=2(15y^2+15y+5)x+(15y^2+14y+4)$. Then we have a curve \[C: w^2=-75y^4-120y^3-84y^2-28y-4. \] The problem is reduced to determining the set $C(\mathbb{Q}(\sqrt{-1}))$ of $\mathbb{Q}(\sqrt{-1})$-rational points on $C$. Let $J$ be the Jacobian variety of $C$. Then, by MAGMA \cite{Bosma-Cannon-Playoust}, we can check that $\rank J(\mathbb{Q}(\sqrt{-1}))=1$.
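The closed form of $f_2(x,y)$ and the completing-the-square step above can be double-checked symbolically (an illustrative aside, not part of the original text; the square identity holds after dividing $f_2$ by its overall factor $3$):

```python
import sympy as sp

x, y = sp.symbols('x y')
y2 = 3*x**2 + 3*x + 1               # Bessel polynomial y_2
y3 = 15*x**3 + 15*x**2 + 6*x + 1    # Bessel polynomial y_3

# f_2(x, y) = (y_3(x) y_2(y) - y_2(x) y_3(y)) / (x - y); the numerator
# vanishes at x = y, so cancel() returns a polynomial.
f2 = sp.cancel((y3 * y2.subs(x, y) - y2 * y3.subs(x, y)) / (x - y))
given = 3*((15*y**2 + 15*y + 5)*x**2 + (15*y**2 + 14*y + 4)*x
           + (5*y**2 + 4*y + 1))
assert sp.expand(f2 - given) == 0

# completing the square: 4(15y^2+15y+5) f_2 = 3 (w^2 + quartic in y)
w = 2*(15*y**2 + 15*y + 5)*x + (15*y**2 + 14*y + 4)
quartic = 75*y**4 + 120*y**3 + 84*y**2 + 28*y + 4
assert sp.expand(4*(15*y**2 + 15*y + 5)*f2 - 3*(w**2 + quartic)) == 0
```

Since $f_2(x,y)=0$ is an equation, the factor of $3$ is immaterial for the locus, and the second assertion recovers the curve $C: w^2 = -75y^4-120y^3-84y^2-28y-4$.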
\begin{example} \label{4-3} From the rational points on the above elliptic curve $C$, we can construct an example of quadrature formulas of degree $4$ for Bessel polynomials with $3$ nodes on $\mathbb{Q}(\sqrt{-1})$. For example, by MAGMA \cite{Bosma-Cannon-Playoust}, \[P:=(-264/743, \pm 377866 \sqrt{-1}/552049) \in C(\mathbb{Q}(\sqrt{-1})).\] Then, by Mathematica \cite{Wolfram}, for any node $z_1$, the weights $x_1$, $x_2$, $x_3$ and the other nodes $z_2$, $z_3$ are expressed as rational functions of $z_1$ and $\sqrt{-75z_1^4-120z_1^3-84z_1^2-28z_1-4}$. Substituting $z_1=-264/743$, we can check that \begin{align*} & \frac{304758098401}{73863713379} f \left(\frac{-264}{743} \right)\\ &+\left(-\frac{115447192511}{73863713379}+\frac{28870417761487643\sqrt{-1}}{27910585919669214} \right) f \left(\frac{-253754+188933 \sqrt{-1}}{863405} \right)\\ & +\left(-\frac{115447192511}{73863713379}-\frac{28870417761487643\sqrt{-1}}{27910585919669214} \right) f \left(\frac{-253754-188933 \sqrt{-1}}{863405} \right)\\ = & -\frac{1}{4 \pi \sqrt{-1}} \int_{S^1} f(z) e^{-\frac{2}{z}} dz \end{align*} is a quadrature formula of degree $4$ for Bessel polynomials with nodes on $\mathbb{Q}(\sqrt{-1})$ corresponding to $P$. \end{example} \begin{remark} \label{Rem} \begin{enumerate} \item By the same argument as in \cref{MT1}, we can prove that there exists no quadrature formula of degree $2r$ for Bessel polynomials with $r+1$ nodes on $\mathbb{Q}(\sqrt{-11})$ for all $r \in \mathbb{Z}_{\geq 3}$. \item We cannot prove \cref{MT1} (2) by using \cref{NP} since there exist quadrature formulas of degree $4$ for Bessel polynomials with $3$ nodes on $\mathbb{Q}(\sqrt{-1})$ by \cref{4-3}. \end{enumerate} \end{remark} \section{Proof of \cref{MT2}} In this section, we prove \cref{MT2}. \begin{proof} [{Proof of \cref{MT2}}] Note that this is the case $2r+2-k=r+1$, i.e., $k=r+1=3$ in the Riesz--Shohat theorem, \cref{RS}.
In this case, we prove the following by contradiction: There exist no $z_1$, $z_2$, $z_3 \in \mathbb{Q}(\sqrt{-1}) \cap S^1$, $b_1$, $b_2 \in \mathbb{C}$ such that \[(z-z_1)(z-z_2)(z-z_3)=\phi_3(z)+b_1 \phi_2(z)+ b_2 \phi_1(z).\] Here, \[\phi_n(z):=\frac{y_n(z)}{\frac{(2n)!}{2^n n!}}.\] Let \[A:=\begin{pmatrix} 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0\\ \frac{2}{5} & 1 & 1 & 0 \\ \frac{1}{15} & \frac{1}{3} & 1 & 1 \end{pmatrix}.\] Then the above equality is equivalent to \[A\begin{pmatrix} 1\\ b_1 \\ b_2\\ 0 \end{pmatrix} =\begin{pmatrix} 1\\ -z_1-z_2-z_3 \\ z_1z_2+z_2z_3+z_3z_1\\ -z_1z_2z_3 \end{pmatrix}.\] Comparing the second to the fourth rows, \begin{align*} b_1 & =-1-z_1-z_2-z_3,\\ b_2 &=\frac{3}{5}+(z_1+z_2+z_3)+(z_1z_2+z_2z_3+z_3z_1),\\ \left(z_1+z_2+\frac{2}{3}+z_1z_2 \right)z_3 &=-\frac{1}{3}-\frac{2}{3} (z_1+z_2)-z_1z_2. \end{align*} By the third equality, if \[z_1+z_2+\frac{2}{3}+z_1z_2=0,\] then \[z_1z_2=-\frac{2}{3}(z_1+z_2)-\frac{1}{3}.\] By the above two equalities, \[z_1+z_2+1=0.\] Write $z_1=s+t \sqrt{-1}$, $z_2=u+v \sqrt{-1}$ with $s, t, u, v \in \mathbb{Q}$ and $s^2+t^2=u^2+v^2=1$. Then, $s+u+1=0$ and $t+v=0$. Thus, $u=-s-1$ and $v=-t$. Since $s^2+t^2=u^2+v^2=1$, \[1=(-s-1)^2+t^2=2s+2.\] Thus, $s=-1/2$ and $t=\pm \sqrt{3}/2 \not\in \mathbb{Q}$, which is a contradiction. Therefore, \[z_1+z_2+\frac{2}{3}+z_1z_2 \neq 0\] and we have \begin{align*} z_3 =&\frac{-\frac{1}{3}-\frac{2}{3} (z_1+z_2)-z_1z_2}{z_1+z_2+\frac{2}{3}+z_1z_2}. \end{align*} We solve $|z_1|^2=|z_2|^2=|z_3|^2=1$ by Mathematica \cite{Wolfram}.
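Before examining the Mathematica outputs, the elimination above admits an exact-arithmetic spot check (an illustration only, not part of the proof): for a sample rational choice of $z_1$, $z_2$, the values $z_3$, $b_1$, $b_2$ given by the displayed formulas reproduce the coefficients of $(z-z_1)(z-z_2)(z-z_3)$ when expanded against $\phi_3+b_1\phi_2+b_2\phi_1$.

```python
from fractions import Fraction as Fr

# phi_3 = z^3 + z^2 + (2/5) z + 1/15, phi_2 = z^2 + z + 1/3, phi_1 = z + 1,
# as read off from the columns of the matrix A above.

def z3_from(z1, z2):
    # the displayed expression for z_3; assumes the denominator is nonzero
    den = z1 + z2 + Fr(2, 3) + z1*z2
    return (Fr(-1, 3) - Fr(2, 3)*(z1 + z2) - z1*z2) / den

z1, z2 = Fr(1, 2), Fr(1, 3)  # an arbitrary sample choice
z3 = z3_from(z1, z2)
b1 = -1 - (z1 + z2 + z3)
b2 = Fr(3, 5) + (z1 + z2 + z3) + (z1*z2 + z2*z3 + z3*z1)

# coefficients of (z - z1)(z - z2)(z - z3)
lhs = [1, -(z1 + z2 + z3), z1*z2 + z2*z3 + z3*z1, -z1*z2*z3]
# coefficients of phi_3 + b1*phi_2 + b2*phi_1
rhs = [1, 1 + b1, Fr(2, 5) + b1 + b2, Fr(1, 15) + b1*Fr(1, 3) + b2]
print(lhs == rhs)  # True
```

For this sample one gets $z_3=-19/30$, $b_1=-6/5$, $b_2=79/180$, and the two coefficient lists agree exactly.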
Here are the outputs: \fn{$v$ in the first two lines can be expressed as a rational function of $s$, $\sqrt{43+84s+s^2-84s^3-44s^4}$ and $\sqrt{1-s^2}$.} \begin{align*} (t,u) &=\left(\sqrt{1-s^2},\frac{-91-202s-112s^2 \pm 2\sqrt{43+84s+s^2-84s^3-44s^4} }{106+224s+120s^2} \right),\\ (t,u) &=\left(-\sqrt{1-s^2},\frac{-91-202s-112s^2 \pm 2\sqrt{43+84s+s^2-84s^3-44s^4} }{106+224s+120s^2} \right), \\ (s,t,u,v) &= \left(-1,0, -\frac{1}{2}, \pm \frac{\sqrt{3}}{2} \right),\\ (s,t,u,v) &= \left(1,0, -\frac{9}{10}, \pm \frac{\sqrt{19}}{10} \right),\\ (s,t,u,v) &= \left(\frac{-28-\sqrt{-11}}{30}, -\frac{\sqrt{127-56\sqrt{-11}}}{30}, \frac{-28+\sqrt{-11}}{30}, \frac{\sqrt{127+56\sqrt{-11}}}{30} \right), \end{align*} \begin{align*} (s,t,u,v) &= \left(\frac{-28-\sqrt{-11}}{30}, \frac{\sqrt{127-56\sqrt{-11}}}{30}, \frac{-28+\sqrt{-11}}{30},-\frac{\sqrt{127+56\sqrt{-11}}}{30} \right),\\ (s,t,u,v) &= \left(\frac{-28+\sqrt{-11}}{30}, -\frac{\sqrt{127+56\sqrt{-11}}}{30}, \frac{-28-\sqrt{-11}}{30}, \frac{\sqrt{127-56\sqrt{-11}}}{30} \right),\\ (s,t,u,v) &= \left(\frac{-28+\sqrt{-11}}{30}, \frac{\sqrt{127+56\sqrt{-11}}}{30}, \frac{-28-\sqrt{-11}}{30}, -\frac{\sqrt{127-56\sqrt{-11}}}{30} \right). \end{align*} For the third to the eighth solutions, $s \not\in \mathbb{Q}$ or $v \not\in \mathbb{Q}$, which is impossible. For the first and the second solutions, we claim that \[43+84s+s^2-84s^3-44s^4\] is a rational square if and only if $s=\pm 1$, $-1/2$ or $-9/10$, and each of these values contradicts $t \in \mathbb{Q}$ or $v \in \mathbb{Q}$. Indeed, let $C$ be the elliptic curve over $\mathbb{Q}$ defined by \[y^2=-44x^4-84x^3+x^2+84x+43.\] By MAGMA \cite{Bosma-Cannon-Playoust}, $\rank C(\mathbb{Q})=0$. We also know that $(\pm 1,0), \left(-\frac{1}{2}, \pm 3 \right), \left(-\frac{9}{10},\pm \frac{19}{25} \right) \in C(\mathbb{Q})$ and the torsion subgroup is isomorphic to $\mathbb{Z}/6 \mathbb{Z}$.
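As a quick sanity check (not part of the proof), exact rational arithmetic confirms that the six listed points indeed lie on this quartic model of $C$:

```python
from fractions import Fraction as Fr

def rhs(x):
    # the quartic model of C from the proof
    return -44*x**4 - 84*x**3 + x**2 + 84*x + 43

# the six rational points listed above
points = [(Fr(1), Fr(0)), (Fr(-1), Fr(0)),
          (Fr(-1, 2), Fr(3)), (Fr(-1, 2), Fr(-3)),
          (Fr(-9, 10), Fr(19, 25)), (Fr(-9, 10), Fr(-19, 25))]
print(all(y**2 == rhs(x) for x, y in points))  # True
```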
Therefore, we have \[C(\mathbb{Q})=\left\{(\pm 1,0), \left(-\frac{1}{2}, \pm 3 \right), \left(-\frac{9}{10},\pm \frac{19}{25} \right) \right\}. \] \end{proof} \begin{remark} \begin{enumerate} \item \cref{UQ} implies that there exists a unique quadrature formula of degree $2$ for Bessel polynomials with $2$ nodes on $\mathbb{Q}(\sqrt{-11}) \cap S^1$. On the other hand, there exists no quadrature formula of degree $3$ for Bessel polynomials with $3$ nodes on $\mathbb{Q}(\sqrt{-11}) \cap S^1$. The proof is entirely similar to that of \cref{MT2}. \item There exist quadrature formulas of degree $3$ for Bessel polynomials with $3$ nodes on $\mathbb{Q}(\sqrt{-3}) \cap S^1$. For example, \begin{align*} & \frac{2}{3}f(-1)+\frac{3+\sqrt{-3}}{18} f \left(\frac{-1+\sqrt{-3}}{2} \right)+\frac{3-\sqrt{-3}}{18} f \left(\frac{-1-\sqrt{-3}}{2} \right)\\ = &-\frac{1}{4 \pi \sqrt{-1}} \int_{S^1} f(z) e^{-\frac{2}{z}} dz \end{align*} is a quadrature formula of degree $3$ for Bessel polynomials with $3$ nodes on $\mathbb{Q}(\sqrt{-3}) \cap S^1$. \end{enumerate} \end{remark} \section{Proof of \cref{MT3}} In this section, we prove \cref{MT3}. \begin{proof} [{Proof of \cref{MT3}}] By \cref{RS}, it is enough to show that there exist $z_1, \ldots, z_{r+1} \in \mathbb{Q}$, $b_1, \ldots, b_r \in \mathbb{C}$ such that \begin{align} \label{*} \theta_{r+1}(z):=(z-z_1) \cdots (z-z_{r+1})=\phi_{r+1}(z)+b_1 \phi_r(z) + \cdots +b_r \phi_1(z)+0 \cdot \phi_0(z). \end{align} Here, \[\phi_n(z):=\frac{y_n(z)}{\frac{(2n)!}{2^n n!}}.\] Write \begin{align*} (z-z_1) \cdots (z-z_{r+1}) &= 1 \cdot z^{r+1}+Z_1(z_1,\ldots,z_{r+1}) z^r+\cdots +Z_{r+1}(z_1,\ldots,z_{r+1}),\\ \phi_{r+1}(z)+b_1 \phi_r(z) + \cdots +b_r \phi_1(z)&= 1 \cdot z^{r+1}+B_1(b_1,\ldots,b_r) z^r+\cdots +B_{r+1}(b_1,\ldots, b_r).
\end{align*} Let \[A:=\begin{pmatrix} 1 & 0 & 0 & 0 & \hdots & 0 & 0 &0\\ a_{(r+1) 1} & 1 & 0 & 0 & \hdots & 0 & 0 &0\\ a_{(r+1) 2} & a_{r1} & 1 & 0 & \hdots & 0 & 0 &0\\ a_{(r+1) 3} & a_{r2} & a_{(r-1)1} & 1 & \hdots & 0 & 0 &0\\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots & \vdots & \vdots\\ a_{(r+1) (r-1)} & a_{r(r-2)} & a_{(r-1) (r-3)} & a_{(r-2) (r-4)} & \hdots & 1 & 0 &0\\ a_{(r+1) r} & a_{r(r-1)} & a_{(r-1) (r-2)} & a_{(r-2) (r-3)} & \hdots & a_{21} & 1 &0\\ a_{(r+1) (r+1)} & a_{rr} & a_{(r-1) (r-1)} & a_{(r-2) (r-2)} & \hdots & a_{22} & a_{11} &1 \end{pmatrix},\] where we write \[\phi_k(z):=\sum_{i=0}^k a_{ki} z^{k-i} \quad (a_{k0}=1).\] Then, by comparing the coefficients of $z^k$ in the two expansions above, \cref{*} is equivalent to \[A\begin{pmatrix} 1\\ b_1 \\ \vdots\\ b_r\\ 0 \end{pmatrix} = \begin{pmatrix} 1\\ B_1(b_1,\ldots,b_r) \\ \vdots\\ B_{r+1}(b_1,\ldots,b_r) \end{pmatrix} =\begin{pmatrix} 1\\ Z_1(z_1,\ldots,z_{r+1}) \\ \vdots\\ Z_{r+1}(z_1,\ldots,z_{r+1}) \end{pmatrix}.\] Since $\det(A) \neq 0$, this is equivalent to \begin{align} \label{**} \begin{pmatrix} 1\\ b_1 \\ \vdots\\ b_r\\ 0 \end{pmatrix} = A^{-1}\begin{pmatrix} 1\\ Z_1(z_1,\ldots,z_{r+1}) \\ \vdots\\ Z_{r+1}(z_1,\ldots,z_{r+1}) \end{pmatrix}. \end{align} The numbers $z_1, \ldots, z_{r+1}$, $b_1, \ldots, b_r$ in \cref{*} can be taken as follows: First, take $z_1, \ldots, z_r \in \mathbb{Q}$ arbitrarily. By comparing the last row of \cref{**}, we have a relation among $z_1, \ldots, z_{r+1}$. For each $i$, note that $Z_i$ is, up to sign, an elementary symmetric polynomial in $z_1, \ldots, z_{r+1}$ and is linear in $z_{r+1}$. Since the coefficients of Bessel polynomials are rational, the off-diagonal entries in the lower triangular matrix $A$ are rational numbers. Thus, the entries of the inverse matrix $A^{-1}$ are rational numbers.
Moreover, since the coefficients $Z_1, \ldots, Z_{r+1}$ are linear functions of $z_{r+1}$ with rational coefficients, the last row of the equation \cref{**} is a linear equation in $z_{r+1}$ with rational coefficients. Therefore, $z_{r+1}$ can be expressed as a rational function of $z_1, \ldots, z_r$; in particular, $z_{r+1} \in \mathbb{Q}$. Then, by \cref{**}, $b_1, \ldots, b_r$ can be expressed in terms of $z_1, \ldots, z_{r+1}$.\fn{Thus, $b_1, \ldots, b_r \in \mathbb{Q}$.} \end{proof} \begin{example} The formula \begin{align*} \frac{172}{441} f \left(\frac{1}{2} \right)-\frac{1625}{1611} f \left(\frac{1}{5} \right) + \frac{42592}{26313} f \left(-\frac{27}{44} \right) = -\frac{1}{4 \pi \sqrt{-1}} \int_{S^1} f(z) e^{-\frac{2}{z}} dz \end{align*} is a quadrature formula of degree $3$ with $3$ nodes on $\mathbb{Q}$. \end{example} \noindent {\bf Acknowledgements.} The author thanks the anonymous referees for their constructive comments, which made this article clearer. The author thanks his advisor Professor Ken-ichi Bannai for reading the draft and giving helpful comments, and for warm and constant encouragement. The author gratefully thanks Professor Yukihiro Uchida for helpful comments and discussions. The author also thanks Professors Masato Kurihara, Taka-aki Tanaka, Shuji Yamamoto, Yoshinosuke Hirakawa, Kazuki Yamada, Hohto Bekki, Naoto Dainobu and Yoshinori Kanamura for helpful comments and discussions. \begin{bibdiv} \begin{biblist} \bibselect{quadrature} \end{biblist} \end{bibdiv} \end{document}
\UseRawInputEncoding \documentclass[10pt]{article} \usepackage[dvips]{color} \usepackage{epsfig} \usepackage{float,amsthm,amssymb,amsfonts} \usepackage{ amssymb,amsmath,graphicx, amsfonts, latexsym} \def\GR{{\cal R}} \def\GL{{\cal L}} \def\GH{{\cal H}} \def\GD{{\cal D}} \def\GJ{{\cal J}} \def\set#1{\{ #1\} } \def\z{\set{0}} \def\Sing{{\rm Sing}_n} \def\nullset{\mbox{\O}} \parindent=16pt \setlength{\textwidth}{6.5in} \setlength{\oddsidemargin}{.1in} \setlength{\evensidemargin}{.1in} \setlength{\topmargin}{-.1in} \setlength{\textheight}{8.4in} \begin{document} \theoremstyle{plain} \newtheorem{theorem}{Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \theoremstyle{definition} \newtheorem{defn}{Definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{conjecture}[theorem]{Conjecture} \title{On the combinatorial and rank properties of certain subsemigroups of full contractions of a finite chain} \author{\bf M. M. Zubairu\footnote{Corresponding author. Email: [email protected]}, A. Umar and M. J. Aliyu \\[3mm] \it\small Department of Mathematical Sciences, Bayero University Kano, P. M. B. 3011, Kano, Nigeria\\ \it\small \texttt{[email protected]}\\[3mm] \it\small Khalifa University, P. O. Box 127788, Sas al Nakhl, Abu Dhabi, UAE\\ \it\small \texttt{[email protected]}\\[3mm] \it\small Department of Mathematics, and Computer Sciences, Sule Lamido University, Kafin Hausa\\ \it\small \texttt{[email protected]} } \maketitle \begin{abstract} Let $[n]=\{1,2,\ldots,n\}$ be a finite chain and let $\mathcal{CT}_{n}$ be the semigroup of full contractions on $[n]$. Let $\mathcal{ORCT}_{n}$ and $\mathcal{OCT}_{n}$ denote the subsemigroup of order-preserving or order-reversing full contractions and the subsemigroup of order-preserving full contractions, respectively.
It was shown in \cite{am} that the collection of all regular elements (denoted by Reg$(\mathcal{ORCT}_{n})$ and Reg$(\mathcal{OCT}_{n})$, respectively) and the collection of all idempotent elements (denoted by E$(\mathcal{ORCT}_{n})$ and E$(\mathcal{OCT}_{n})$, respectively) of the subsemigroups $\mathcal{ORCT}_{n}$ and $\mathcal{OCT}_{n}$ are themselves subsemigroups. In this paper, we study some combinatorial and rank properties of these subsemigroups. \end{abstract} \emph{2010 Mathematics Subject Classification. 20M20.}\\ \textbf{Keywords:} Full contraction maps on a chain, regular elements, idempotents, rank properties. \section{Introduction} Let $[n]=\{1,2,\ldots,n\}$ be a finite chain and let $\mathcal{T}_{n}$ denote the semigroup of full transformations of $[n]$. A transformation $\alpha\in \mathcal{T}_{n}$ is said to be \emph{order preserving} (resp., \emph{order reversing}) if (for all $x,y \in [n]$) $x\leq y$ implies $x\alpha\leq y\alpha$ (resp., $x\alpha\geq y\alpha$); \emph{order decreasing} if (for all $x\in [n]$) $x\alpha\leq x$; an \emph{isometry} (i.e., \emph{distance preserving}) if (for all $x,y \in [n]$) $|x\alpha-y\alpha|=|x-y|$; and a \emph{contraction} if (for all $x,y \in [n]$) $|x\alpha-y\alpha|\leq |x-y|$. Let $\mathcal{CT}_{n}=\{\alpha\in \mathcal{T}_{n}: (\textnormal{for all }x,y\in [n])~\left|x\alpha-y\alpha\right|\leq\left|x-y\right|\}$ denote the semigroup of full contractions on $[n]$; thus $\mathcal{CT}_{n}$ is a subsemigroup of $\mathcal{T}_{n}$. Certain algebraic and combinatorial properties of this semigroup and some of its subsemigroups have been studied; see for example \cite{adu, leyla, garbac,kt, af, am, mzz, a1, a33}.
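For small $n$ these semigroups are easy to enumerate directly from the definitions. The following brute-force sketch (our own illustration, not taken from the literature; maps are encoded as image tuples) counts $\mathcal{CT}_3$, $\mathcal{OCT}_3$ and $\mathcal{ORCT}_3$:

```python
from itertools import product

def is_contraction(m):
    # m[i-1] is the image of i; contraction: |x a - y a| <= |x - y|
    n = len(m)
    return all(abs(m[i] - m[j]) <= j - i
               for i in range(n) for j in range(i + 1, n))

def preserving(m):
    return all(m[i] <= m[i + 1] for i in range(len(m) - 1))

def reversing(m):
    return all(m[i] >= m[i + 1] for i in range(len(m) - 1))

n = 3
CT = [m for m in product(range(1, n + 1), repeat=n) if is_contraction(m)]
OCT = [m for m in CT if preserving(m)]
ORCT = [m for m in CT if preserving(m) or reversing(m)]
print(len(CT), len(OCT), len(ORCT))  # 17 8 13
```

For $n=3$ this gives $|\mathcal{CT}_3|=17$, $|\mathcal{OCT}_3|=8$ and $|\mathcal{ORCT}_3|=13$; such exhaustive enumeration is feasible only for very small $n$, which is consistent with the remark below that the order of $\mathcal{CT}_{n}$ in general is still under investigation.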
Let \noindent \begin{equation}\label{ctn}\mathcal{OCT}_{n}=\{\alpha\in \mathcal{CT}_{n}: (\textnormal{for all}~x,y\in [n])~x\leq y \textnormal{ implies } x\alpha\leq y\alpha\},\end{equation} \noindent and \begin{equation}\label{orctn}\mathcal{ORCT}_{n}= \mathcal{OCT}_{n}\cup \{\alpha\in \mathcal{CT}_{n}: (\textnormal{for all}~x,y\in [n])~x\leq y ~ \textnormal{implies } x\alpha\geq y\alpha\}\end{equation} \noindent be the subsemigroups of \emph{order preserving full contractions} and of \emph{order preserving or reversing full contractions} on $[n]$, respectively. These subsemigroups are both known to be non-regular left abundant semigroups \cite{am}, and their Green's relations were characterized in \cite{mmz}. The ranks of $\mathcal{ORCT}_{n}$ and $\mathcal{OCT}_{n}$ were computed in \cite{kt}, while the ranks of the two-sided ideals of $\mathcal{ORCT}_{n}$ and $\mathcal{OCT}_{n}$ were computed by Leyla \cite{leyla}. In 2021, Umar and Zubairu \cite{am} showed that the collection of all regular elements (denoted by $\textnormal{Reg}(\mathcal{ORCT}_{n})$) of $\mathcal{ORCT}_{n}$ and the collection of all idempotent elements (denoted by $\textnormal{E}(\mathcal{ORCT}_{n})$) of $\mathcal{ORCT}_{n}$ are both subsemigroups of $\mathcal{ORCT}_{n}$. The two subsemigroups are both regular; in fact, $\textnormal{Reg}(\mathcal{ORCT}_{n})$ has been shown to be an $\mathcal{L}$-\emph{unipotent} semigroup (i.e., each $\mathcal{L}$-class contains a unique idempotent). It was also shown in \cite{am} that the collection of all regular elements (denoted by $\textnormal{Reg}(\mathcal{OCT}_{n})$) in $\mathcal{OCT}_{n}$ is a subsemigroup. However, the combinatorial and rank properties of these semigroups are yet to be investigated; we study these properties in this paper, which is thus a natural sequel to Umar and Zubairu \cite{am}. For basic concepts in semigroup theory, we refer the reader to \cite{maz, ph, howi}.
Let $S$ be a semigroup and let $U$ be a subset of $S$. Then $|U|$ is said to be the \emph{rank} of $S$ (denoted by $\textnormal{Rank}(S)$) if $$|U|=\min\{|A|: A\subseteq S \textnormal{ and } \langle A \rangle=S\}. $$ The notation $\langle U \rangle=S$ means that $U$ generates the semigroup $S$. The ranks of several transformation semigroups have been investigated; see for example \cite{aj,ak2, gu, gu2, gu3, gm, mp}. However, there are several subsemigroups of full contractions whose ranks are yet to be known. In fact, the order and the rank of the semigroup $\mathcal{CT}_{n}$ are still under investigation. Let us briefly outline the presentation of the paper. In Section 1, we give a brief introduction and fix the notation needed in the remaining sections. In Section 2, we discuss combinatorial properties of the semigroups $\textnormal{Reg}(\mathcal{ORCT}_n)$ and $\textnormal{E}(\mathcal{ORCT}_n)$; in particular, we give their orders. In Section 3, we prove that the ranks of the semigroups $\textnormal{Reg}(\mathcal{ORCT}_n)$ and $\textnormal{E}(\mathcal{ORCT}_n)$ are 4 and 3, respectively, via minimal generating sets for their Rees quotient semigroups. \section{Combinatorial Properties of $\textnormal{Reg}(\mathcal{ORCT}_n)$ and $\textnormal{E}(\mathcal{ORCT}_n)$ } In this section, we investigate some combinatorial properties of the semigroups $\textnormal{Reg}(\mathcal{ORCT}_n)$ and $\textnormal{E}(\mathcal{ORCT}_n)$; in particular, we compute their cardinalities.
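The cardinalities computed in this section can be cross-checked by brute force for small $n$ (an independent sanity check, not used in the proofs). In the sketch below, maps are encoded as image tuples, and regularity of $\alpha$ in $\mathcal{OCT}_n$ is tested directly as the existence of $\beta\in\mathcal{OCT}_n$ with $\alpha\beta\alpha=\alpha$:

```python
from itertools import product

def compose(a, b):
    # right action: x(ab) = (xa)b, with maps encoded as image tuples on [n]
    return tuple(b[a[x] - 1] for x in range(len(a)))

def contractions(n):
    return [m for m in product(range(1, n + 1), repeat=n)
            if all(abs(m[i] - m[j]) <= j - i
                   for i in range(n) for j in range(i + 1, n))]

def oct_n(n):   # order-preserving full contractions
    return [m for m in contractions(n)
            if all(m[i] <= m[i + 1] for i in range(n - 1))]

def orct_n(n):  # order-preserving or order-reversing full contractions
    return [m for m in contractions(n)
            if all(m[i] <= m[i + 1] for i in range(n - 1))
            or all(m[i] >= m[i + 1] for i in range(n - 1))]

def reg_count(n):
    S = oct_n(n)
    return sum(1 for a in S
               if any(compose(compose(a, b), a) == a for b in S))

def idem_count(n):
    return sum(1 for a in orct_n(n) if compose(a, a) == a)

for n in (3, 4, 5):
    print(n, reg_count(n), (n*(n - 1)*(2*n - 1) + 6*n)//6,
          idem_count(n), n*(n + 1)//2)
# 3 8 8 6 6
# 4 18 18 10 10
# 5 35 35 15 15
```

For $n=3,4,5$ the brute-force counts of regular elements of $\mathcal{OCT}_n$ and of idempotents of $\mathcal{ORCT}_n$ agree with the closed formulas $\frac{n(n-1)(2n-1)+6n}{6}$ and $\frac{n(n+1)}{2}$ established below.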
Let \begin{equation}\label{1} \alpha=\left( \begin{array}{cccc} A_{1} & A_{2} & \ldots & A_{p} \\ x_{1} & x_{2} & \ldots & x_{p} \end{array} \right)\in \mathcal{T}_{n} ~~ (1\leq p\leq n), \end{equation} then the \emph{rank} of $\alpha$ is defined and denoted by $\textnormal{rank}(\alpha)=|\textnormal{Im }\alpha|=p$; moreover, the sets $x_{i}\alpha^{-1}=A_{i}$ ($1\leq i\leq p$) are the equivalence classes under the relation $\textnormal{ker }\alpha=\{(x,y)\in [n]\times [n]: x\alpha=y\alpha\}$. Further, we denote the partition $(A_{1},\ldots, A_{p})$ by $\textnormal{\textbf{Ker} }\alpha$, and we write fix$(\alpha)=|\{x\in[n]: x\alpha=x\}|$. A subset $T_{\alpha}$ of $[n]$ is said to be a \emph{transversal} of the partition $\textnormal{\textbf{Ker} }\alpha$ if $|T_{\alpha}|=p$ and $|A_{i}\cap T_{\alpha}|=1$ ($1\leq i\leq p$). A transversal $T_{\alpha}$ is said to be \emph{convex} if for all $x,y\in T_{\alpha}$ with $x\leq y$ and all $z\in [n]$ with $x\leq z\leq y$, we have $z\in T_{\alpha}$. Before we proceed, let us describe some Green's relations on the semigroups $\textnormal{Reg}(\mathcal{ORCT}_n)$ and $\textnormal{E}(\mathcal{ORCT}_n)$. It is worth noting that the two semigroups $\textnormal{Reg}(\mathcal{ORCT}_n)$ and $\textnormal{E}(\mathcal{ORCT}_n)$ are both regular subsemigroups of the full transformation semigroup $\mathcal{T}_n$; therefore, by [\cite{howi}, Prop. 2.4.2], they automatically inherit the Green's $\mathcal{L}$ and $\mathcal{R}$ relations of the semigroup $\mathcal{T}_n$, but not necessarily the $\mathcal{D}$ relation. As such, we have the following lemma. \begin{lemma} Let $\alpha,\beta \in S\in \{\textnormal{Reg}(\mathcal{ORCT}_n), \ \textnormal{E}(\mathcal{ORCT}_n)\}$, then \begin{itemize} \item[(i)] $\alpha \mathcal{R} \beta$ if and only if $\textnormal{Im }\alpha=\textnormal{Im }\beta$; \item[(ii)] $\alpha \mathcal{L} \beta$ if and only if $\textnormal{ker }\alpha=\textnormal{ker }\beta$.
\end{itemize} \end{lemma} \subsection{The Semigroup $\textnormal{Reg}(\mathcal{ORCT}_n)$} Before discussing the semigroup $\textnormal{Reg}(\mathcal{ORCT}_n)$, let us first consider the semigroup $\textnormal{Reg}(\mathcal{OCT}_n)$, consisting of order-preserving elements only. Let $\alpha$ be in $\textnormal{Reg}(\mathcal{OCT}_n)$; by [\cite{am}, Lem. 12], $\alpha$ is of the form $$\alpha=\left(\begin{array}{ccccc} \{1,\ldots,a+1\} & a+2 & \ldots & a+p-1 & \{a+p,\ldots,n\} \\ x+1 & x+2 & \ldots & x+p-1 & x+ p \end{array} \right).$$\noindent Let \begin{equation}\label{j} K_p=\{\alpha \in \textnormal{Reg}(\mathcal{OCT}_n) : |\textnormal{Im }\alpha|=p\} \quad (1\leq p\leq n), \end{equation} and suppose that $\alpha\in K_p$. By [\cite{az}, Lem. 12], $\textnormal{\textbf{Ker} }\alpha= \{\{1,\ldots,a+1\},\{a+2\}, \ldots, \{a+p-1\}, \{a+p,\ldots,n\} \}$ has an \emph{admissible} transversal (a transversal $T_{\alpha}$ is said to be {admissible} if and only if the map $A_{i}\mapsto t_{i}$ ($t_{i}\in T_{\alpha},\, i\in\{1,2,\ldots,p\}$) is a contraction, see \cite{mmz}) $T_\alpha= \{a+i\, : 1\leq i\leq p\}$ such that the mapping $a+i\mapsto x+i$ is an isometry. Therefore, translating the set $\{x+i :\, 1\leq i\leq p\}$ by an integer $k$, say to $\{x+i\pm k:\, 1\leq i\leq p\}$, will also produce an image set for $\textnormal{\textbf{Ker} }\alpha$ as long as $x+1-k\geq 1$ and $x+p +k \leq n$. For example, if we define $\alpha$ as: \begin{equation}\label{alf} \alpha= \left( \begin{array}{cccccc} \{1,\ldots,a+1\} & a+2& a+3 & \ldots & a+{p-1} & \{a+p,\ldots,n\} \\ 1 & 2 & 3& \ldots &p-1& p \end{array} \right),\end{equation} then we will have $n-p$ other mappings in $K_p$ with the same domain as $\alpha$.
In a similar manner, suppose we fix the image set $\{x+i :\, 1\leq i\leq p\}$ and consider $\textnormal{\textbf{Ker} }\alpha$. Then we can shift the partition $\{\{1,\ldots,a+1\}, \{a+2\}, \ldots, \{a+{p-1}\}, \{a+p,\ldots,n\} \}$ by an integer $i$ ($1\leq i\leq p$) to, say, $\{\{1,\ldots,a+i\}, \{a+i+1\}, \ldots, \{a+{p-i}\}, \{a+p-i+1,\ldots,n\} \}$, which also has an admissible convex transversal. For illustration, if for some integer $j$ we have $\{\{1,\ldots,a+1\}, \{a+2\}, \ldots, \{a+{p-1}\}, \{a+p,\ldots,n\} \}=\,\{\{1,2,\ldots j\}, \{j+1\}, \{j+2\}, \ldots, \{n\} \}$, then the translate $\{\{1,2,\ldots j-1\}, \{j\}, \{j+1\}, \ldots, \{n-1,n\} \}$ will also serve as a domain for the image set of $\alpha$. Thus, for $p\neq 1$, we will have $n-p+1$ different mappings in $K_p$ with the same image set. To illustrate, consider the table below. For $n\geq 4$, $2\leq p\leq n$ and $j=n-p+1$, the set $K_p$ can be presented as follows: \begin{equation}\label{tabl}\resizebox{1\textwidth}{!}{$ \begin{array}{cccc} \left( \begin{array}{ccccc} \{1,\ldots j\}&j+1& \cdots &n-1& n \\ 1 & 2 & \ldots &p-1& p \end{array} \right) & \cdots & \left( \begin{array}{ccccc} \{1,2\}&3& \cdots& \{p-1,\ldots n\} \\ 1 & 2& \ldots & p \end{array} \right)& \left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,\ldots n\} \\ 1 & 2& \cdots&p-1 & p \end{array} \right) \\ \left( \begin{array}{ccccc} \{1,\ldots j\}&j+1& \cdots &n-1& n \\ 2 & 3 & \ldots &p& p+1 \end{array} \right) & \cdots & \left( \begin{array}{ccccc} \{1,2\}&3& \cdots& \{p-1,\ldots n\} \\ 2 & 3& \cdots & p+1 \end{array} \right)& \left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,\ldots n\} \\ 2 & 3& \cdots&p & p+1 \end{array} \right) \\ \vdots &\vdots& \vdots& \vdots \\ \left( \begin{array}{ccccc} \{1,\ldots j\}&j+1& \cdots &n-1& n \\ j-1 & j & \ldots &n-2& n-1 \end{array} \right) & \cdots & \left( \begin{array}{ccccc} \{1,2\}&3& \cdots& \{p-1,\ldots n\} \\ j-1 & j & \ldots &n-2& n-1 \end{array}
\right)& \left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,\ldots n\} \\ j-1 & j & \ldots &n-2& n-1 \end{array} \right) \\ \left( \begin{array}{ccccc} \{1,\ldots j\}&j+1& \cdots &n-1& n \\ j & j+1 & \ldots &n-1& n \end{array} \right) & \cdots & \left( \begin{array}{ccccc} \{1,2\}&3& \cdots& \{p-1,\ldots n\} \\ j & j+1 & \ldots &n-1& n \end{array} \right)& \left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,\ldots n\} \\ j & j+1 & \ldots &n-1& n \end{array} \right) \end{array}$}\end{equation} From the table above, we can see that for $p=1$, $|K_p|=n-p+1=n$, while for $2\leq p\leq n,\,$ $|K_p|=(n-p+1)^2$. The next theorem gives us the cardinality of the semigroup $\textnormal{Reg}(\mathcal{OCT}_n)$. \begin{theorem}\label{cadreg} Let $\mathcal{OCT}_n$ be as defined in equation \eqref{ctn}, then $|\textnormal{Reg}(\mathcal{OCT}_n)|=\frac{n(n-1)(2n-1)+6n}{6}$. \end{theorem} \begin{proof} It is clear that $\textnormal{Reg}(\mathcal{OCT}_n)=K_1 \cup K_2 \cup \ldots \cup K_n$. Since this union is disjoint, we have that \begin{equation*}\begin{array}{c} |\textnormal{Reg}(\mathcal{OCT}_n)|=\sum_{p=1}^n|K_p|=|K_1|+\sum_{p=2}^n|K_p| = n+ \sum_{p=2}^n (n-p+1)^2 \\ = n+(n-1)^2+(n-2)^2+ \cdots +2^2 +1^2 \\= \frac{n(n-1)(2n-1)+6n}{6}, \end{array}\end{equation*}\noindent as required. \end{proof} \begin{corollary}\label{cadreg2} Let $\mathcal{ORCT}_n$ be as defined in equation \eqref{orctn}. Then $|\textnormal{Reg}(\mathcal{ORCT}_n)|=\frac{n(n-1)(2n-1)+6n}{3}-n$. \end{corollary} \begin{proof} It follows from Theorem~\ref{cadreg} and the fact that $|\textnormal{Reg}(\mathcal{ORCT}_n)|=2|\textnormal{Reg}(\mathcal{OCT}_n)|-n$, the $n$ constant maps being both order preserving and order reversing and hence counted twice. \end{proof} \subsection{The Semigroup $\textnormal{E}(\mathcal{ORCT}_n)$} Let $\alpha$ be in $\textnormal{E}(\mathcal{ORCT}_n)$. Then it follows from [\cite{am}, Lem.
13] that $\alpha$ is of the form \begin{equation}\label{alf} \alpha= \left( \begin{array}{cccccc} \{1,\ldots,i\} & i+1& i+2 & \ldots & i+j-1 & \{i+j, \ldots, n\} \\ i & i+1 & i+2& \ldots &i+j-1& i+j \end{array} \right).\end{equation} \noindent Since fix$(\alpha)=j+1$, for each given domain set there is only one corresponding image set. Let \begin{equation} E_p=\{\alpha \in \textnormal{E}(\mathcal{ORCT}_n) : |\textnormal{Im }\alpha|=p\} \quad (1\leq p\leq n). \end{equation} To choose $\alpha\in E_p$, we only need to select the image set of $\alpha$, which is a set of $p$ consecutive (convex) numbers from $[n]$. Thus $|E_p|=n-p+1$. Consequently, we have the cardinality of the semigroup $\textnormal{E}(\mathcal{ORCT}_n)$. \begin{theorem}\label{cidemp} Let $\mathcal{ORCT}_n$ be as defined in equation \eqref{orctn}. Then $|\textnormal{E}(\mathcal{ORCT}_n)|=\frac{n(n+1)}{2}$. \end{theorem} \begin{proof} Following the argument of the proof of Theorem \ref{cadreg}, we have \begin{equation*}\begin{array}{c} |\textnormal{E}(\mathcal{ORCT}_n)|=\sum_{p=1}^n|E_p|= \sum_{p=1}^n (n-p+1) \\ = n+(n-1)+(n-2)+ \cdots +2 +1 \\= \frac{n(n+1)}{2}. \end{array}\end{equation*} \end{proof} \begin{remark} Notice that idempotents in $\mathcal{ORCT}_n$ are necessarily order preserving; as such, $|\textnormal{E}(\mathcal{OCT}_n)|=|\textnormal{E}(\mathcal{ORCT}_n)|= \frac{n(n+1)}{2}$. \end{remark} \section{Rank Properties} In this section, we discuss some rank properties of the semigroups $\textnormal{Reg}(\mathcal{ORCT}_n)$ and $\textnormal{E}(\mathcal{ORCT}_n)$. \subsection{Rank of $\textnormal{Reg}(\mathcal{OCT}_n)$} Just as in Section 2 above, let us first consider the semigroup $\textnormal{Reg}(\mathcal{OCT}_n)$, the semigroup consisting of the regular elements of order-preserving full contractions. Now, let $K_p$ be defined as in equation \eqref{j}. We have seen what elements of $K_p$ look like in Table \ref{tabl} above.
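The notion of rank recalled in the introduction can also be explored by exhaustive search in very small cases (a toy computation, independent of the arguments that follow). For instance, a direct search over $\mathcal{OCT}_3$, all of whose $8$ elements are regular, returns rank $3$:

```python
from itertools import product, combinations

def compose(a, b):
    # right action: x(ab) = (xa)b, with maps encoded as image tuples
    return tuple(b[a[x] - 1] for x in range(len(a)))

def generated(gens):
    # closure of gens under composition
    S = set(gens)
    changed = True
    while changed:
        changed = False
        for a in list(S):
            for b in list(S):
                c = compose(a, b)
                if c not in S:
                    S.add(c)
                    changed = True
    return S

def rank(S):
    # smallest size of a generating subset (brute force; tiny S only)
    target = set(S)
    for k in range(1, len(S) + 1):
        for A in combinations(S, k):
            if generated(A) == target:
                return k

n = 3
# order-preserving contractions = non-decreasing maps with steps 0 or 1
OCT = [m for m in product(range(1, n + 1), repeat=n)
       if all(0 <= m[i + 1] - m[i] <= 1 for i in range(n - 1))]
print(len(OCT), rank(OCT))  # 8 3
```

No two-element subset generates $\mathcal{OCT}_3$ (the identity can never be written as a product of maps of smaller rank, and the identity together with one other element generates at most three elements), while three elements suffice.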
Suppose we define: \begin{equation}\label{eta} \eta := \left( \begin{array}{ccccc} \{1,\ldots j\}&j+1& \cdots &n-1& n \\ 1 & 2 & \ldots &p-1& p \end{array} \right), \end{equation} \begin{equation}\label{delta} \delta := \left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,\ldots n\} \\ 1 & 2& \cdots&p-1 & p \end{array} \right) \end{equation} and \begin{equation}\label{tau} \tau:= \left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,\ldots n\} \\ j & j+1 & \ldots &n-1& n \end{array} \right); \end{equation} that is, $\eta$ is the top left-corner element, $\delta$ the top right-corner element, and $\tau$ the bottom right-corner element in Table \ref{tabl}. Let $\textnormal{R}_\eta$ and $\textnormal{L}_\delta$ be the respective $\mathcal{R}$- and $\mathcal{L}$-equivalence classes of $\eta$ and $\delta$. Then for each $\alpha$ in $K_p$ there exist two elements, say $\eta'$ and $\delta'$, in $\textnormal{R}_\eta$ and $\textnormal{L}_\delta$, respectively, such that $\alpha$ is $\mathcal{L}$-related to $\eta'$, $\mathcal{R}$-related to $\delta'$, and $\alpha=\eta'\delta'$. For illustration, consider \begin{equation*} \alpha = \left( \begin{array}{ccccc} \{1,\ldots j-1\}&j&j+1& \cdots &\{n-1, n\} \\ 2 & 3&4 & \ldots &p+1 \end{array} \right); \end{equation*} then the elements \begin{equation*} \left( \begin{array}{ccccc} \{1,\ldots j-1\}&j&j+1& \cdots &\{n-1, n\} \\ 1 & 2 &3 & \ldots & p \end{array} \right)\end{equation*} and \begin{equation*} \left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,\ldots n\} \\ 2 & 3& \cdots&p & p+1 \end{array} \right)\end{equation*} are respectively elements of $\textnormal{R}_\eta$ and $\textnormal{L}_\delta$, and \begin{equation*}\alpha = \left( \begin{array}{ccccc} \{1,\ldots j-1\}&j&j+1& \cdots &\{n-1, n\} \\ 1 & 2 &3 & \ldots & p \end{array} \right) \left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,\ldots n\} \\ 2 & 3& \cdots&p & p+1 \end{array} \right). \end{equation*} Consequently, we have the following lemma.
\begin{lemma}\label{jp} Let $\eta$ and $\delta$ be as defined in equations \eqref{eta} and \eqref{delta}, respectively. Then $\langle \textnormal{R}_\eta \cup \textnormal{L}_\delta \rangle = K_p$. \end{lemma} \begin{remark}\label{rtabl}The following are observed from Table \ref{tabl}: \begin{itemize} \item[(i)] The element $\delta$ belongs to both $\textnormal{R}_\eta$ and $\textnormal{L}_\delta$; \item[(ii)] $\tau\eta=\delta$; \item[(iii)] For all $\alpha\in \textnormal{R}_\eta$, $\alpha\delta=\alpha$, while $\delta\alpha$ has rank less than $p$; \item[(iv)] For all $\alpha\in \textnormal{L}_\delta$, $\delta\alpha=\alpha$, while $\alpha\delta$ has rank less than $p$; \item[(v)] For all $\alpha,\beta\in \textnormal{R}_\eta\backslash \delta$ (or $\textnormal{L}_\delta\backslash \delta$), rank($\alpha\beta)<p$. \end{itemize} \end{remark} To investigate the rank of $\textnormal{Reg}(\mathcal{OCT}_n)$, let \begin{equation}\label{lnp} L(n,p)=\{\alpha \in \textnormal{Reg}(\mathcal{OCT}_n) : |\textnormal{Im }\alpha|\leq p\} \quad (1\leq p\leq n), \end{equation}\noindent and let \begin{equation} Q_p=L(n,p)\backslash L(n,p-1). \end{equation} Then $Q_p$, with a zero element adjoined, is of the form $K_p \cup \{0\}$, where $K_p$ is the set of all elements of $\textnormal{Reg}(\mathcal{OCT}_n)$ whose rank is exactly $p$. The product of any two elements $\alpha$ and $\beta$ in $Q_p$ is given by \begin{equation*}\alpha\ast \beta = \left\{ \begin{array}{ll} \alpha\beta, & \hbox{if rank$(\alpha\beta)=p$;} \\ 0, & \hbox{if rank$(\alpha\beta)<p$.} \end{array} \right. \end{equation*} The semigroup $Q_p$ is called the Rees quotient semigroup of $L(n,p)$ by $L(n,p-1)$. Next, we have the following lemma, which follows from Lemma \ref{jp} and Remark \ref{rtabl}. \begin{lemma}\label{lrees} $(\textnormal{R}_\eta \cup \textnormal{L}_\delta)\backslash \delta$ is the minimal generating set for the Rees quotient semigroup $Q_p$.
\end{lemma} To find the generating set for $L(n,p)$, we need the following proposition: \begin{proposition}\label{prees} For $n\geq4,\,$ $ \langle K_p \rangle\,\subseteq \,\langle K_{p+1}\rangle$ for all $1\leq p\leq n-2$. \end{proposition} \begin{proof} Let $A$ be a generating set for $K_p$. To prove $\langle K_p \rangle\,\subseteq \,\langle K_{p+1}\rangle$, it suffices to show that $A\subseteq \langle K_{p+1}\rangle$. By Lemma \ref{lrees}, we may take $A= (\textnormal{R}_{\eta} \cup \textnormal{L}_{\delta} )\backslash {\delta}$. Now, let $\alpha$ be in $A$. CASE I: If $\alpha=\eta$, then $\alpha$ can be written as $\alpha=$ \begin{equation*}\resizebox{1\textwidth}{!}{$ \left( \begin{array}{cccccc} \{1,\ldots j-1\}&j&j+1& \cdots &n-1& n \\ j-2 & j-1&j & \cdots&n-2 &n-1 \end{array} \right) \left( \begin{array}{cccccc} \{1,\ldots j-1\}&j&j+1& \cdots &n-1& n\\ 1 & 2&3 & \cdots&p &p+1 \end{array} \right),$} \end{equation*} a product of two elements of $K_{p+1}$. CASE II: If $\alpha\in \textnormal{R}_{\eta}\backslash \eta$, then $\alpha$ is of the form \begin{equation*}\left( \begin{array}{ccccc} \{1,\ldots j-k\}&j-k+1& \cdots &n-2 &\{n-k,\ldots, n\} \\ 1 & 2 & \cdots&p-1 &p \end{array} \right), \, (k=1,2,\dots,j-2).\end{equation*} Then $\alpha $ can be written as: \begin{equation*}\resizebox{1\textwidth}{!}{$ \left( \begin{array}{cccc} \{1,\ldots, j-k-1\}&j-k & \cdots &\{n-k,\ldots, n\} \\ j-k-1 & j-k & \cdots &n-k \end{array} \right) \left( \begin{array}{ccccc} \{1,\ldots j-k\}&j-k+1& \cdots &n-k& \{n-k+1,\ldots,n\}\\ 1 & 2 & \cdots&p &p+1 \end{array} \right),$} \end{equation*} a product of two elements of $K_{p+1}$.
CASE III: If $\alpha\in \textnormal{L}_{\delta}\backslash \delta$, then $\alpha$ is of the form \begin{equation*}\left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,p+1,\ldots n\} \\ r & r+1& \cdots& p+r-2 & p+r-1 \end{array} \right),\, (r=2,3,\ldots, n-p+1)\end{equation*} and it can be written as: \begin{equation*}\resizebox{1\textwidth}{!}{$ \left( \begin{array}{ccccc} 1&2& \cdots&p& \{p+1,\ldots n\} \\ 2 & 3& \cdots&p+1 &p+2 \end{array} \right) \left( \begin{array}{ccccc} 1&2& \cdots&p& \{p+1,\ldots n\} \\ r-1 & r& \cdots&p+r-2 & p+r-1 \end{array} \right),$} \end{equation*} a product of two elements of $K_{p+1}$. This completes the proof. \end{proof} \begin{remark}\label{rrank} Notice that by the proposition above, the generating set for $Q_p$ ($1\leq p\leq n-1$) generates the whole of $L(n, p)$. \end{remark} The next theorem gives us the rank of the subsemigroup $L(n,p)$ for $1<p\leq n-1$. \begin{theorem}\label{trank} Let $L(n,p)$ be as defined in equation \eqref{lnp}. Then for $n\geq 4$ and $1<p\leq n-1$, the rank of $L(n,p)$ is $2(n-p)$. \end{theorem} \begin{proof} It follows from Lemma \ref{lrees} and Remark \ref{rrank} above, since the generating set $(\textnormal{R}_\eta \cup \textnormal{L}_\delta)\backslash \delta$ has $2(n-p+1)-2=2(n-p)$ elements. \end{proof} Now, as a consequence, we readily have the following corollaries. \begin{corollary}\label{cr1} Let $L(n,p)$ be as defined in equation \eqref{lnp}. Then the rank of $L(n,n-1)$ is 2. \end{corollary} \begin{corollary}\label{cr2} Let $\mathcal{OCT}_n$ be as defined in equation \eqref{ctn}. Then the rank of $\textnormal{Reg}(\mathcal{OCT}_n)$ is 3. \end{corollary} \begin{proof} The proof follows from Corollary \ref{cr1} coupled with the fact that $\textnormal{Reg}(\mathcal{OCT}_n)= L(n,n-1)\cup \{id_{[n]}\}$, where $id_{[n]}$ is the identity map on $[n]$. \end{proof} \subsection{Rank of $\textnormal{Reg}(\mathcal{ORCT}_n)$} To discuss the rank of $\textnormal{Reg}(\mathcal{ORCT}_n)$, consider Table \ref{tabl} above.
Suppose we reverse the order of the image set of the elements in that table; then we obtain the set of order-reversing elements of $\textnormal{Reg}(\mathcal{ORCT}_n)$. For $1\leq p\leq n$, let \begin{equation}J_p=\{\alpha \in \textnormal{Reg}(\mathcal{ORCT}_n) : |\textnormal{Im }\alpha|= p\} \end{equation} and let \begin{equation}K_p^*=\{\alpha \in J_p : \alpha \textrm{ is order-reversing} \}. \end{equation} Observe that $J_p= K_p \cup K_p^*$. Now define: \begin{equation}\label{eta2} \eta^* = \left( \begin{array}{ccccc} \{1,\ldots j\}&j+1& \cdots &n-1& n \\ p & p-1 & \ldots & 2 & 1 \end{array} \right), \end{equation} \begin{equation}\label{delta2} \delta^* = \left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,\ldots n\} \\ p & p-1 & \cdots& 2 & 1 \end{array} \right) \end{equation} and \begin{equation}\label{tau2} \tau^* = \left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,\ldots n\} \\ n & n-1 & \ldots & j+1 & j \end{array} \right), \end{equation} i.e., $\eta^*, \delta^*$ and $\tau^*$ are, respectively, $\eta, \delta$ and $\tau$ with image order-reversed. \begin{remark} Throughout this section, we will write $\alpha^*$ to mean a mapping in $K_p^*$ which has a corresponding mapping $\alpha$ in $K_p$ with order-preserving image. \end{remark} Let $\textnormal{R}_{\eta^*}$ and $\textnormal{L}_{\delta^*}$ be the respective $\mathcal{R}$- and $\mathcal{L}$-equivalence classes of $\eta^*$ and $\delta^*$. Then we have the following lemmas, which are analogues of Lemma \ref{jp}. \begin{lemma}\label{jp2} Let $\eta$ and $\delta^*$ be as defined in equations \eqref{eta} and \eqref{delta2}, respectively. Then $\langle \textnormal{R}_\eta \cup \textnormal{L}_{\delta^*} \rangle = K_p^*$.
\end{lemma} \begin{proof} Let $ \alpha^*= \left( \begin{array}{ccccc} \{1,\ldots,a+1\} & a+2 & \ldots & a+{p-1} & \{a+p,\ldots,n\} \\ x+p & x+{p-1} & \ldots &x+2& x+1 \end{array} \right)$ be in $K_p^*$. Then there exists $\alpha\in K_p$ such that, by Lemma \ref{jp}, $\alpha$ can be expressed as the following product: \begin{equation*} \left( \begin{array}{ccccc} \{1,\ldots,a+1\} & a+2& \ldots & a+{p-1} & \{a+p,\ldots,n\} \\ y+1 & y+2 & \ldots &y+{p-1}& y+p \end{array} \right) \left( \begin{array}{ccccc} \{1,\ldots,b+1\} & b+2 & \ldots & b+{p-1} & \{b+p,\ldots,n\} \\ x+1 & x+2 & \ldots &x+{p-1}& x+p \end{array} \right)\end{equation*} a product of elements of $\textnormal{R}_\eta$ and $\textnormal{L}_\delta$, respectively. Therefore, $\alpha^*$ can be expressed as the following product: \begin{equation*} \left( \begin{array}{ccccc} \{1,\ldots,a+1\} & a+2& \ldots & a+{p-1} & \{a+p,\ldots,n\} \\ y+1 & y+2 & \ldots &y+{p-1}& y+p \end{array} \right) \left( \begin{array}{ccccc} \{1,\ldots,b+1\} & b+2 & \ldots & b+{p-1} & \{b+p,\ldots,n\} \\ x+p & x+{p-1} & \ldots &x+2& x+1 \end{array} \right)\end{equation*} a product of elements of $\textnormal{R}_\eta$ and $\textnormal{L}_{\delta^*}$, respectively. \end{proof} \begin{lemma}\label{jp3} Let $J_p=\{\alpha \in \textnormal{Reg}(\mathcal{ORCT}_n) : |\textnormal{Im }\alpha|= p\}$. Then, $\langle R_\eta \cup L_{\delta^*} \rangle = J_p$. \end{lemma} \begin{proof} Since $J_p= K_p \cup K_p^*$, to prove $\langle R_\eta \cup L_{\delta^*} \rangle = J_p$, it suffices by Lemma \ref{jp2} to show that $K_p \subseteq\langle K_p^* \rangle$.
Now, let $$\alpha= \left( \begin{array}{ccccc} \{1,\ldots,a+1\} & a+2& \ldots & a+{p-1} & \{a+p,\ldots,n\} \\ b+1 & b+2 & \ldots &b+{p-1}& b+p \end{array} \right)$$ \noindent be in $K_p$. If $\alpha$ is an idempotent, then there exists $\alpha^* \in K_p^*$ such that $(\alpha^*)^2=\alpha.$ Suppose $\alpha$ is not an idempotent, and define $$\epsilon= \left( \begin{array}{cccccc} \{1,\ldots,b+1\} & b+2& b+3 & \ldots & b+{p-1} & \{b+p,\ldots,n\} \\ b+1 & b+2 & b+3& \ldots &b+{p-1}& b+p \end{array} \right),$$ \noindent which is an idempotent in $K_p$; then $\alpha$ can be written as $\alpha=\alpha^*\epsilon^*$. \end{proof} Before stating the main theorem of this section, let \begin{equation}\label{mp} M(n,p)=\{\alpha \in \textnormal{Reg}(\mathcal{ORCT}_n) : |\textnormal{Im }\alpha|\leq p\} \quad (1\leq p\leq n), \end{equation} and let \begin{equation} W_p=M(n,p)\backslash M(n,p-1) \end{equation} be the Rees quotient semigroup on $M(n,p)$. From Lemma \ref{jp3} and Remark \ref{rtabl} we have: \begin{lemma}\label{lrees2} $(\textnormal{R}_\eta \cup \textnormal{L}_{\delta^*})\backslash \{\delta\}$ is the minimal generating set for the Rees quotient semigroup $W_p$. \end{lemma} The next proposition is also analogous to Proposition \ref{prees}, and plays an important role in finding the generating set for the subsemigroup $M(n,p)$. \begin{proposition}\label{prees2} For $n\geq4,\; \langle J_p \rangle\,\subseteq \,\langle J_{p+1}\rangle$ for all $1\leq p\leq n-2$. \end{proposition} \begin{proof} The proof follows the same pattern as the proof of Proposition \ref{prees}. We want to show that $(\textnormal{R}_\eta \cup \textnormal{L}_{\delta^*} )\subseteq \,\langle J_{p+1}\rangle$, and by Proposition \ref{prees} we only need to show that $\textnormal{L}_{\delta^*} \subseteq \,\langle J_{p+1}\rangle$.
Now let $\alpha$ be in $\textnormal{L}_{\delta^*}$. Case I: If $\alpha\in \textnormal{L}_{\delta^*}\backslash \{\tau^*\}$, then $\alpha$ is of the form \begin{equation*}\left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,p+1,\ldots n\} \\ p+r-1 & p+r-2& \cdots& r+1& r \end{array} \right)\; (r=1,2,\ldots, n-p),\end{equation*} and it can be written as \begin{equation*}\resizebox{1\textwidth}{!}{$\alpha= \left( \begin{array}{ccccc} 1&2& \cdots&p& \{p+1,\ldots n\} \\ 2 & 3& \cdots&p+1 &p+2 \end{array} \right) \left( \begin{array}{ccccc} 1&2& \cdots&p& \{p+1,\ldots n\} \\ p+r & p+r-1& \cdots& r+1& r \end{array} \right),$} \end{equation*} a product of two elements of $J_{p+1}$. Case II: If $\alpha=\tau^*$, then $\alpha$ can be written as \begin{equation*}\alpha= \left( \begin{array}{ccccc} 1&2& \cdots&p-1& \{p,\ldots n\} \\ 1 & 2& \cdots&p-1 &p \end{array} \right) \left( \begin{array}{ccccc} 1&2& \cdots&p& \{p+1,\ldots n\} \\ n & n-1& \cdots& j& j-1 \end{array} \right). \end{equation*} The first element in the product above is $\delta \in J_p$; however, it was shown in Remark~\ref{rtabl} that $\delta$ can be written as $\tau\eta$, and both $\tau$ and $\eta$ were shown in the proof of Proposition \ref{prees} to be expressible as products of elements of $J_{p+1}$. This completes the proof. \end{proof} \begin{remark} Notice also that, by Proposition \ref{prees2} above, for $2\leq p\leq n-1$ the generating set for $W_p$ generates the whole of $M(n, p)$. \end{remark} The next theorem gives the rank of the subsemigroup $M(n,p)$ for $2\leq p\leq n-1$. \begin{theorem}\label{trank2} Let $M(n, p)$ be as defined in equation \eqref{mp}. Then for $n\geq 4$ and $2<p\leq n-1$, the rank of $M(n,p)$ is $2(n-p)+1.$ \end{theorem} \begin{proof} To prove this, we need only compute the cardinality of the set $(R_\eta \cup L_{\delta^*})\backslash \{\delta\}$, which from Table~\ref{tabl} is easily seen to be $(n-p)+(n-p)+1=2(n-p)+1$. \end{proof} As a consequence, we have the following corollaries.
\begin{corollary}\label{cr3} Let $M(n, p)$ be as defined in equation \eqref{mp}. Then the rank of $M(n,n-1)$ is 3. \end{corollary} \begin{corollary}\label{cr4} Let $\mathcal{ORCT}_n$ be as defined in equation \eqref{orctn}. Then the rank of $\textnormal{Reg}(\mathcal{ORCT}_n)$ is 4. \end{corollary} \begin{proof} The only elements of $\textnormal{Reg}(\mathcal{ORCT}_n)$ that are not in $M(n,n-1)$ are the elements $\{id_{[n]}, id_{[n]}^*\}$. It is clear that $(id_{[n]}^*)^2=id_{[n]}$. \end{proof}

\bigskip

{\footnotesize {\bf First Author}\;\\ Department of Mathematics, Bayero University Kano, P.O.~Box 3011, Kano, Nigeria.\\ {\tt Email: mmzubairu.mth@buk.edu.ng}}\\
{\footnotesize {\bf Second Author}\;\\ Khalifa University, P.O.~Box 127788, Sas al Nakhl, Abu Dhabi, UAE.\\ {\tt Email: [email protected]}}\\
{\footnotesize {\bf Third Author}\;\\ Department of Mathematics and Computer Sciences, Sule Lamido University, Kafin Hausa, Nigeria.\\ {\tt Email: [email protected]}}

\end{document}
2206.14950v3
http://arxiv.org/abs/2206.14950v3
Dip-ramp-plateau for Dyson Brownian motion from the identity on $U(N)$
\documentclass[11p,reqno]{amsart} \topmargin=0cm\textheight=22cm\textwidth=15cm \oddsidemargin=0.5cm\evensidemargin=0.5cm \setlength{\marginparwidth}{2cm} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage{amssymb,amsthm,amsmath,mathrsfs,bm,braket,marginnote} \usepackage{enumerate} \usepackage{appendix} \usepackage[colorlinks=true, pdfstartview=FitV, linkcolor=blue, citecolor=blue, urlcolor=blue]{hyperref} \usepackage{multirow} \usepackage{pgf} \usepackage{pgfplots} \usepackage{tikz} \usetikzlibrary{arrows,calc} \usepackage{verbatim} \usetikzlibrary{decorations.pathreplacing,decorations.pathmorphing} \usepackage[numbers,sort&compress]{natbib} \usepackage{dsfont} \numberwithin{equation}{section} \linespread{1.2} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \newtheorem{coro}[theorem]{Corollary} \newtheorem{example}[theorem]{Example} \newtheorem{ass}[theorem]{Assumption} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newcommand{\todo}[1]{\fbox{\textbf{TO DO}} \textbf{#1}} \newcommand{\suggest}[1]{\marginpar{\scriptsize\raggedright #1}} \newcommand{\jsuggest}[1]{\marginpar{{\color{red}\scriptsize\raggedright#1}}} \reversemarginpar \newcommand{\pf}{\text{pf}} \newcommand{\Pf}{\text{Pf}} \newcommand{\dq}{d_qx} \newcommand{\al}{{\alpha}} \newcommand{\ts}{\tilde{\sigma}} \newcommand{\be}{{\beta}} \newcommand{\ab}{{(\alpha,\beta)}} \newcommand{\z}{\mathbb{Z}} \newcommand{\dx}{\mathcal{D}_x} \newcommand{\dy}{\mathcal{D}_y} \newcommand{\ey}{\epsilon_y} \newcommand{\ex}{\epsilon_x} \newcommand{\tih}{\tilde{h}} \newcommand{\lp}{e^{s x}} \newcommand{\ma}{\mathcal{A}} \newcommand{\mb}{\mathcal{B}} \newcommand{\md}{\mathcal{D}} \newcommand{\mt}{\mathcal{T}} \newcommand{\mf}{\mathcal{F}} \newcommand{\ml}{\mathcal{L}} \newcommand{\mk}{\mathcal{K}} \newcommand{\mg}{\mathfrak{g}} \newcommand{\floor}[1]{\lfloor #1 \rfloor} \begin{document} 
\title[]{Dip-ramp-plateau for Dyson Brownian motion from the identity on $U(N)$ } \subjclass[2020]{15B52} \date{} \author{Peter J. Forrester} \address{School of Mathematics and Statistics, The University of Melbourne, Victoria 3010, Australia} \email{[email protected]} \author{Mario Kieburg} \address{School of Mathematics and Statistics, The University of Melbourne, Victoria 3010, Australia} \email{[email protected]} \author{Shi-Hao Li} \address{Department of Mathematics, Sichuan University, Chengdu, 610064, China} \email{[email protected]} \author{Jiyuan Zhang} \address{School of Mathematics and Statistics, The University of Melbourne, Victoria 3010, Australia} \email{[email protected]} \dedicatory{} \keywords{Dyson Brownian motion on $U(N)$; cyclic P\'olya ensembles; spectral form factor; hypergeometric polynomials; Jacobi polynomial asymptotics} \begin{abstract} In a recent work the present authors have shown that the eigenvalue probability density function for Dyson Brownian motion from the identity on $U(N)$ is an example of a newly identified class of random unitary matrices called cyclic P\'olya ensembles. In general the latter exhibit a structured form of the correlation kernel. Specialising to the case of Dyson Brownian motion from the identity on $U(N)$ allows the moments of the spectral density, and the spectral form factor $S_N(k;t)$, to be evaluated explicitly in terms of a certain hypergeometric polynomial. Upon transformation, this can be identified in terms of a Jacobi polynomial with parameters $(N(\mu - 1),1)$, where $\mu = k/N$ and $k$ is the integer labelling the Fourier coefficients. From existing results in the literature for the asymptotics of the latter, the asymptotic forms of the moments of the spectral density can be specified, as can $\lim_{N \to \infty} {1 \over N} S_N(k;t) |_{\mu = k/N}$.
These in turn allow us to give a quantitative description of the large $N$ behaviour of the average $ \langle | \sum_{l=1}^N e^{ i k x_l} |^2 \rangle$. The latter exhibits a dip-ramp-plateau effect, which is attracting recent interest from the viewpoints of many body quantum chaos, and the scrambling of information in black holes. \end{abstract} \maketitle \section{Introduction} \subsection{Focus of the paper}\label{S1.1} Dyson Brownian motion on the group $U(N)$ of $N \times N$ complex unitary matrices is of long standing interest in random matrix theory. This process is contained in, and gets its name from, Dyson's 1962 paper \cite{Dy62b}. Actually Dyson's paper is better known for the construction of a Brownian motion model for the eigenvalues of particular spaces of Hermitian Gaussian random matrices; see the monographs \cite{Ka16,EY17} for developments in relation to this topic. With $U(N)$ being a Lie group, the notion of a corresponding Brownian motion and its theoretical development can be found in earlier works of It\^{o} \cite{It50}, Yoshida \cite{Yo52} and Hunt \cite{Hu56}; we owe our knowledge of these references to the Introduction given in \cite{AKMV10}. While Dyson's objective was a Brownian motion theory of the eigenvalues, the objective of the earlier works related to the group elements $U_t$ say; recent works which relate to this latter aspect include \cite{Bi97, Ra97, Ha01, Ke17}. The matrix evolution can be approximated numerically by the specification \begin{equation}\label{1.0} U_{t + \delta t}(N) = U_t e^{ i \sqrt{\delta t} M} , \qquad 0 < \delta t \ll 1, \end{equation} where $M$ is an element of the Gaussian unitary ensemble of $N \times N$ complex Hermitian matrices; see e.g.~\cite{PS91} or \cite[Eq.~(11.26)]{Fo10}. Wolfram Mathematica, with the support of random matrix theory added in version 11 or later, can easily implement (\ref{1.0}) in a few lines of code \cite{Wo11}. 
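The update (\ref{1.0}) is equally a few lines in Python. A hedged sketch of our own follows (not the Mathematica implementation referred to above; the normalisation of the GUE matrix $M$ used here is an assumption, since (\ref{1.0}) fixes $M$ only up to the convention adopted for the Gaussian unitary ensemble, and scipy.linalg.expm supplies the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

def gue(N, rng):
    # Hermitian matrix with Gaussian entries; the overall scale is an
    # assumption, as (1.0) fixes M only up to the GUE convention used
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return (A + A.conj().T) / 2

def dyson_brownian_motion(N, steps, sqrt_dt, rng):
    # Iterate U_{t + dt} = U_t exp(i sqrt(dt) M) from the identity,
    # recording the sorted eigenvalue angles in (-pi, pi] at each step
    U = np.eye(N, dtype=complex)
    path = []
    for _ in range(steps):
        U = U @ expm(1j * sqrt_dt * gue(N, rng))
        path.append(np.sort(np.angle(np.linalg.eigvals(U))))
    return path

rng = np.random.default_rng(0)
path = dyson_brownian_motion(N=30, steps=300, sqrt_dt=0.02, rng=rng)
```

Plotting the recorded angles against the step index produces a picture qualitatively like Figure \ref{Dfig1}.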
This gives a matrix Brownian motion path on $U(N)$ sampled at discrete time intervals $\delta t$. This path is simple to display in terms of the corresponding eigenvalues; see Figure \ref{Dfig1} for an example. In this paper our aim is to begin with Dyson's specification of the Brownian motion theory of the eigenvalues on $U(N)$, to specialise to the case that the initial condition is the identity, and to quantify specific statistical properties of the corresponding dynamical state, most notably the dip-ramp-plateau effect; see Section \ref{S1.3} in relation to this. Particular attention will be paid to the Fourier components of the eigenvalue density at a specific time, and to the spectral form factor (structure function). This line of study is enabled by the recent identification \cite{KLZF20} of a cyclic P\'olya ensemble structure in relation to the eigenvalue probability density function (PDF) for this model, supplemented by transformation and asymptotic theory associated with certain hypergeometric polynomials. Insights from other sources have seen progress on many fronts in the study of the eigenvalues for Dyson Brownian motion --- interpreted broadly --- in recent years; references include \cite{De10,GS15,We16,As19,HL19,LSY19, BCL21}. \begin{figure*} \centering \includegraphics[width=0.75\textwidth]{DUn1.pdf} \caption{Plot of the eigenvalues of a sequence of $m = 300$ unitary matrices with $N=30$ exhibiting Dyson Brownian motion, generated according to the approximation (\ref{1.0}) with $\sqrt{\delta t} = 0.02$. This corresponds to a scaled time $t$, defined below (\ref{3.0c2}), of $t=m N(\delta t) = 3.6$. Theory predicts the eigenvalue support to approach $(-\pi, \pi)$ as $t \to 4^-$.} \label{Dfig1} \end{figure*} \subsection{Definition of key statistical quantities}\label{S1.2} Unitary matrices have eigenvalues on the unit circle and so can be parametrised as $\{e^{ i x_j(t)} \}_{j=1}^N$, with angles $- \pi < x_j(t) \le \pi$.
Let $\langle \cdot \rangle$ denote an ensemble average with respect to the joint PDF for these angles. The eigenvalue density $\rho_{(1),N}(x;t)$ and the two-point correlation function $\rho_{(2),N}(x,y;t)$ are specified as such ensemble averages of sums of Dirac delta functions \begin{equation}\label{A.1a} \rho_{(1),N}(x;t) = \Big \langle \sum_{j=1}^N \delta (x - x_j(t)) \Big \rangle, \qquad \rho_{(2),N}(x,y;t) = \Big \langle \sum_{p,q=1 \atop p \ne q}^N \delta (x - x_p(t)) \delta (y - x_q(t)) \Big \rangle. \end{equation} The Fourier components of the spectral density are, up to proportionality, given by \begin{equation}\label{A.2a} \int_{-\pi}^{\pi} e^{- i k x} \rho_{(1),N}(x;t) \, \mathrm dx = \Big \langle \sum_{l=1}^N e^{- i k x_l(t)} \Big \rangle. \end{equation} The spectral form factor is specified by \begin{align}\label{S.4} S_N(k;t) := {\rm Cov} \, \Big ( \sum_{l=1}^N e^{ i k x_l(t) }, \sum_{l=1}^N e^{- i k x_l(t)} \Big ) := \Big \langle \sum_{l,l'=1}^N e^{ i k (x_l(t) - x_{l'}(t))} \Big \rangle - \Big \langle \sum_{l=1}^N e^{i k x_l(t) } \Big \rangle \Big \langle \sum_{l=1}^N e^{- i k x_l(t)} \Big \rangle. \end{align} Denote the truncated two-point correlation (also referred to as the two-point cluster function \cite{Me04}) by \begin{equation}\label{S.1} \rho_{(2),N}^T(x_1,x_2;t) := \rho_{(2),N}(x_1,x_2;t) - \rho_{(1),N}(x_1;t) \rho_{(1),N}(x_2;t). 
\end{equation} A straightforward calculation (see e.g.~\cite[proof of Prop.~2.1]{Fo22c}) shows that the spectral form factor can be written in terms of the truncated two-point correlation and the density according to \begin{align}\label{S.3} S_N(k;t) & = \int_{- \pi }^{ \pi } \mathrm dx \int_{- \pi }^{ \pi } \mathrm dy \, e^{ i k (x - y)} \Big ( \rho_{(2),N}^T(x,y;t) + \rho_{(1),N}(x;t) \delta(x - y) \Big ) \nonumber \\ & = N + \int_{- \pi}^{\pi} \mathrm dx \int_{-\pi}^{\pi} \mathrm dy \, e^{ i k (x - y)} \rho_{(2),N}^T(x,y;t), \end{align} where underlying the second line is the requirement that the density integrated over its support equals $N$. The Fourier components of the spectral density, and the spectral form factor, show themselves in the consideration of the ensemble average of the quantity \begin{equation}\label{S.5m} \Big | \sum_{l=1}^N e^{ i k x_l(t) } \Big |^2 , \qquad k \in \mathbb Z. \end{equation} Thus from (\ref{S.4}) one has the decomposition \begin{equation}\label{S.5} \bigg \langle \Big | \sum_{l=1}^N e^{ i k x_l(t) } \Big |^2 \bigg \rangle = S_N(k;t) + \bigg | \bigg \langle \sum_{l=1}^N e^{ i k x_l(t)} \bigg \rangle \bigg |^2. \end{equation} For $\{ x_l \}$ viewed as scaled energy levels, both the spectral form factor and the average (\ref{S.5}) were put forward as probes of quantum chaos in the early literature on the subject \cite{Be85,LLJP86}. In particular, in relation to the average (\ref{S.5m}), the work \cite{LLJP86} drew attention to an effect referred to as a correlation hole in the corresponding graphical shape as a function of $k$, as distinguished from its absence for integrable spectra. 
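Both terms on the right hand side of (\ref{S.5}) can be estimated directly from samples. A minimal Monte Carlo sketch of our own follows; for simplicity the ensemble is taken to be Haar-distributed $U(N)$ (a deliberate stand-in for Dyson Brownian motion at a fixed time), since for Haar matrices the known answers $S_N(k) = \min(k,N)$ for $k \ge 1$, and a vanishing mean term, give an immediate check:

```python
import numpy as np

def haar_unitary(N, rng):
    # Haar-distributed U(N) via QR of a complex Ginibre matrix,
    # with the standard phase fix from the diagonal of R
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))

def form_factor_estimate(N, k, samples, rng):
    # Monte Carlo estimate of the two terms in the decomposition (S.5)
    z = np.empty(samples, dtype=complex)
    for s in range(samples):
        x = np.angle(np.linalg.eigvals(haar_unitary(N, rng)))
        z[s] = np.sum(np.exp(1j * k * x))       # sum_l e^{i k x_l}
    mean_sq = abs(z.mean())**2                  # |< sum_l e^{i k x_l} >|^2
    second_moment = np.mean(np.abs(z)**2)       # < |sum_l e^{i k x_l}|^2 >
    return second_moment - mean_sq, mean_sq     # (S_N(k) estimate, mean term)

rng = np.random.default_rng(1)
S, m2 = form_factor_estimate(N=8, k=3, samples=4000, rng=rng)
# For Haar U(8) at k = 3 one expects S near 3 and m2 near 0
```

Replacing the Haar sampler by a sampler for the present model (for instance the iteration (\ref{1.0})) gives the corresponding estimates of $S_N(k;t)$.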
The essential point is that for a chaotic system the spectral form factor goes to zero for $k$ small enough and then increases before saturating (measured on some $N$ dependent scale), while the second term on the RHS of (\ref{S.5}) continues to decrease on this same $N$ dependent scale, with its initial decay as a function of $N$ and $k$ characterised by a distinct $N$ dependent scale. Under the name dip-ramp-plateau, or more accurately slope-dip-ramp-plateau \cite{Sh16,Ya20,CES21}, this effect has received renewed attention as a probe of many body quantum chaos \cite{CHLY17,TGS18,CL18,CMC19,CH19,LPC21}, and also in relation to the scrambling of information in black holes \cite{C+17,CMS17}. This in turn has prompted a revival of interest in the consideration of analytic properties of model systems for which (\ref{S.5m}) exhibits a dip-ramp-plateau shape \cite{BH97,Ok19,Fo21a,Fo21b,MH21,CES21,VG21,VG22}. In the present work, we identify Dyson Brownian motion on $U(N)$ from the identity as one of the rare known solvable cases of the dip-ramp-plateau effect, joining the Gaussian unitary ensemble (GUE) \cite{BH97} and Laguerre unitary ensemble (LUE) \cite{Fo22b}. Moreover, our exact solution exhibits analytic properties not seen in any of the previous solvable cases. To appreciate this point, general features of this effect, albeit based on non-rigorous analysis and illustrated to the extent possible by the GUE and LUE, should first be reviewed. \subsection{Heuristics of (slope)-dip-ramp-plateau}\label{S1.3} The dip-ramp-plateau effect occurs for large $N$ and relates to several regimes as $k$ varies. These are as $k$ varies from order unity up to order $N$ (this relates to the slope, dip and ramp), and as $k$ varies proportionally to $N$ (this relates to the ramp and plateau), giving rise to a well defined limiting quantity in $\mu$. Specifically, in relation to the latter we write $\mu = k/N$, and assume $k > 0$.
The significance of this scaling is that we have $k x_j(t) = \mu X_j(t)$, where $X_j(t) = N x_j(t)$. Since the spacing between the angles $\{ x_j(t) \}$ in the bulk region is ${\rm O}(1/N)$, the spacing between $\{X_j(t)\}$ in the bulk is $O(1)$. Hence in the variable $\mu$ one would expect a well defined $N \to \infty$ limit of the spectral form factor, corresponding to a deformation of the Fourier transform of the bulk scaled limiting form of the term in the big brackets of the integrand of the first line of (\ref{S.3}). The deformation is due to the variation of the spacing between $\{X_j(t)\}$ away from their bulk scaled value across all of the spectrum (in particular near the edges). In the ensuing discussion the fact that $k$ is also an integer plays no role, so we can take $\mu$ as continuous. For large $N$ and an appropriate scaling of time, the eigenvalue density for Dyson Brownian motion on $U(N)$ from the identity has the large $N$ form \cite{Bi97} \begin{equation}\label{Mam} \rho_{(1),N}(x;t) \sim {N \over 2 \pi } \rho_{(1),\infty}(x;t), \end{equation} where $\rho_{(1),\infty}(x;t)$ is independent of $N$. It follows that the second term in (\ref{S.5}) can be approximately expressed for large $N$ by \begin{equation}\label{Ma1} \Big ( {N \over 2 \pi} \Big )^2 \bigg | \int_{- \pi }^{ \pi } \rho_{(1),\infty}(x;t) \, e^{ i \mu N x } \, \mathrm dx \bigg |^2. \end{equation} For $0 < t < 4$ it is known \cite{Bi97} (see also \S \ref{S3.1}) that the support of $\rho_{(1),\infty}(x;t)$ is $(-L_0(t),L_0(t))$ for some $0 < L_0(t) < \pi $ and that $\rho_{(1),\infty}(x;t)$ goes to zero as a square root at the end points $\pm L_0(t)$ with some amplitude $A(t)$. According to Fourier transform theory \cite{Li58}, the integral in (\ref{Ma1}) then displays a leading order decay \begin{equation}\label{dy} \sqrt{ \pi } A(t) {\cos ( \mu N L_0(t) - 3 \pi/4) \over (\mu N)^{3/2}}. 
\end{equation} Hence (\ref{Ma1}) itself has the decay for $N \mu$ large \begin{equation}\label{Ma2} { (A(t))^2 \over 4 \pi} {\cos^2 ( \mu N L_0(t) -3 \pi/ 4 )\over N \mu^3}. \end{equation} On the other hand, for $t > 4$ the support of $\rho_{(1),\infty}(x;t)$ is all of $(-\pi,\pi]$ and it is a periodic analytic function. As such the integral in (\ref{Ma1}) decays exponentially fast in $N$, and so in this regime (\ref{Ma2}) is replaced by \begin{equation}\label{Ma2b} N^2 e^{-c(t) \mu N} \end{equation} for some $c(t)>0$. Since (\ref{Ma1}) takes on the value $N^2$ for $\mu = 0$ and then decreases according to (\ref{Ma2}) or (\ref{Ma2b}), its decay as $\mu$ increases is responsible for the initial slope in the terminology slope-dip-ramp-plateau. A significant qualitative feature is that the functional form of the slope changes from being algebraic for $ t < 4$ to decaying exponentially for $t > 4$. The circumstance of a square root singularity at the boundary of the eigenvalue support is a feature of both the GUE and LUE. In both cases the second term in (\ref{S.5}) permits a special function evaluation from which the analogue of (\ref{dy}) is readily verified. Considering the GUE for definiteness, one has \cite{Ul85} \begin{equation}\label{X.1} \Big \langle \sum_{j=1}^N e^{i k \lambda_j} \Big \rangle_{\rm GUE} = e^{-k^2/4} L_{N-1}^{(1)}(k^2/2), \end{equation} where $L_n^{(a)}(x)$ denotes the Laguerre polynomial. To proceed further the analogue of the scaled wavenumber $\mu$ is required. This is the variable $\tau_b$ specified by $k = 2 \sqrt{2N} \tau_b$. Analogous to the discussion at the beginning of this subsection, the justification is then that $k \lambda_j = \tau_b X_j$, where $X_j = 2 \sqrt{2N} \lambda_j$ has the property that in the bulk the eigenvalue spacings are of order unity; see e.g.~\cite[\S 3.3]{Fo21a}.
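The identity (\ref{X.1}) can be confirmed numerically from the finite-$N$ GUE density written as a sum of squared Hermite functions (the weight convention $e^{-\lambda^2}$ is assumed here, consistent with (\ref{X.1}) at $N=1$); a sketch:

```python
import numpy as np
from scipy.special import eval_hermite, eval_genlaguerre, factorial

def gue_density(N, x):
    # Finite-N GUE eigenvalue density sum_{j<N} phi_j(x)^2 for weight e^{-x^2},
    # with phi_j the orthonormalised Hermite functions
    return sum(eval_hermite(j, x)**2 * np.exp(-x**2)
               / (2.0**j * factorial(j) * np.sqrt(np.pi)) for j in range(N))

def fourier_moment(N, k, x):
    # Left hand side of (X.1): int e^{ikx} rho_N(x) dx by quadrature
    dx = x[1] - x[0]
    return np.sum(np.exp(1j * k * x) * gue_density(N, x)) * dx

x = np.linspace(-12.0, 12.0, 240001)
for N in (1, 2, 5):
    for k in (0.7, 1.3, 2.9):
        lhs = fourier_moment(N, k, x)
        rhs = np.exp(-k**2 / 4) * eval_genlaguerre(N - 1, 1, k**2 / 2)
        assert abs(lhs - rhs) < 1e-8
```

At $N=2$, for instance, the quadrature reproduces $e^{-k^2/4}(2 - k^2/2)$, in agreement with $L_1^{(1)}(k^2/2) = 2 - k^2/2$.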
Plancherel-Rotach asymptotics of the Laguerre polynomial can be used to exhibit that for $N \tau_b$ large but $\tau_b$ small \cite[Eqns.~(3.41) and (3.39)]{Fo21a} (see also \cite[working at end of \S 2]{GM95}), \begin{equation}\label{dyG} \Big \langle \sum_{j=1}^N e^{i 2 \sqrt{2N} \tau_b \lambda_j} \Big \rangle_{\rm GUE} \sim {1 \over 2 \sqrt{2 \pi N} \tau_b^{3/2}} \cos (4 N \tau_b - 3 \pi / 4). \end{equation} A heuristic understanding of the large $N$ form of the spectral form factor for $\mu$ fixed is based on Dyson Brownian motion on $U(N)$ being a member of the class of random matrix models which have a unitary symmetry. Thus the matrix distribution at a given time is unchanged by conjugation with a fixed $U \in U(N)$. A universal feature of such matrix ensembles is the functional form of the bulk truncated two-point correlation function (see e.g.~\cite[rewrite of (7.2)]{Fo10}) \begin{equation}\label{Ma3} \rho_{(2)}^{T \, \rm bulk}(x,y) = - {\sin^2 ( \pi \rho (x - y) ) \over ( \pi (x - y))^2}. \end{equation} The quantity $\rho$ is the local eigenvalue density, the precise value of which depends on the choice of units underlying the bulk scaling, which in broad terms requires a centring of the coordinates away from the boundary of support, and a rescaling so that the eigenvalue density is of order unity. According to (\ref{S.3}) the spectral form factor is determined by $\rho_{(2)}^{T}$. However the average must be taken over the entire spectrum simultaneously with the scaling of $k$, or equivalently the fixing of $\rho$ in (\ref{Ma3}) (on this last point recall the discussion of the first paragraph of this subsection).
This average was first computed in a form suitable for taking the large $N$ scaled limit of the corresponding spectral form factor $S_N^{(\rm G)}(k)$ for the GUE by Br\'ezin and Hikami \cite{BH97}, with the result \begin{equation}\label{Ma3.1} \lim_{N \to \infty} {1 \over N} S_N^{(\rm G)}( 2 \sqrt{2N} \tau_b ) = \left \{ \begin{array}{ll} {2 \over \pi} (\tau_b \sqrt{(1 - \tau_b^2)} + {\rm Arcsin} \, \tau_b ), & 0 < \tau_b < 1, \\ 1, & \tau_b > 1. \end{array} \right. \end{equation} Moreover it was shown in \cite{BH97} that a heuristic analysis based on (\ref{Ma3}) could reproduce the exact result (\ref{Ma3.1}). Later this same argument was shown to also reproduce a newly obtained exact evaluation of the analogous limit of the spectral form factor $S_N^{(\rm L)}(k)$ for the LUE with Laguerre parameter $a$ fixed \cite[Eq.~(1.26)]{Fo22b}, \begin{equation}\label{Ma3.2} \lim_{N \to \infty} {1 \over N} S_N^{(\rm L)}(k) = {\rm Arctan} \, k \end{equation} (here no simultaneous scaling of $k$ is required since a feature of the LUE is that for large $N$ the average spacing between eigenvalues in the bulk is of order unity). The basic idea applied in the present context is to replace the bulk density $\rho$ in (\ref{Ma3}) by the asymptotic global density $N \rho_{(1), \infty}((x+y)/2;t)/(2 \pi)$; recall (\ref{Mam}). Substituting in the second expression of (\ref{S.3}) and changing variables $w=N (x-y)$, $u=(x+y)/2$ then gives the large $N$ prediction \begin{equation}\label{Ma4} {1 \over N} S_N(\mu N;t) \sim 1 - {1 \over \pi^2} \int_{-\infty}^\infty \mathrm dw \int_{-\pi }^{\pi } \mathrm du \, {\sin^2( \rho_{(1),\infty}(u;t )w / 2) \over w^2} e^{ i w \mu}. \end{equation} Changing the order of integration, the integral over $w$ can be computed explicitly, showing that for $\mu > 0$ \begin{equation}\label{Ma5} {1 \over N} S_N(\mu N;t) \sim 1 - {1 \over \pi} \int_0^{u^*} (\rho_{(1),\infty}(u;t) - \mu ) \, \mathrm du.
\end{equation} Here use has been made of the fact that $\rho_{(1),\infty}(u;t)$ is even in $u$, and $u^* = u^*(\mu)$ is such that \begin{equation}\label{Ma6} \rho_{(1),\infty}(u^*;t) = \mu. \end{equation} If (\ref{Ma6}) has no solution with the LHS always larger than the RHS, (\ref{Ma4}) implies that $u^*$ in (\ref{Ma5}) is to be replaced by $\pi$. Then the first integral in (\ref{Ma5}) evaluates to $\pi$, so implying \begin{equation}\label{Ma7} {1 \over N} S_N(\mu N;t) \sim \mu. \end{equation} If there is no solution with the LHS always smaller than the RHS, then (\ref{Ma4}) implies that the integral in (\ref{Ma5}) is absent, and so \begin{equation}\label{Ma8} {1 \over N} S_N(\mu N;t) \sim 1. \end{equation} The behaviour (\ref{Ma7}) is referred to as a ramp, and (\ref{Ma8}) as a plateau, in the terminology dip-ramp-plateau. Equating the RHS of (\ref{Ma7}) multiplied by $N$ with (\ref{Ma2}) gives $\mu \asymp N^{-1/2}$ or equivalently $k \asymp N^{1/2}$. The latter is referred to as the dip-``time''. It quantifies the order in $N$ of the minimum of the graph of (\ref{S.5m}). We put time within quotation marks here as the terminology is confusing in the present setting of Dyson Brownian motion on $U(N)$, with the term time already being used in the context of the evolution. Instead we will refer to this as the dip-wavenumber. Its calculated value, which is valid for $0 < t < 4$, is the same as for the GUE; see e.g.~\cite[second last paragraph of \S 1.1]{CES21}. This follows by equating (\ref{dyG}) with (\ref{Ma3.1}) turned into a large $N$ statement by replacing the equals by asymptotically equals, and multiplying by $N$. On the other hand, as is relevant for $t > 4$, equating the RHS of (\ref{Ma7}) multiplied by $N$ with (\ref{Ma2b}) gives $N e^{-c(t) \mu N} = \mu$. This equation relates to the Lambert $W$-function and implies $k \asymp \log N$ for the dip-wavenumber.
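The passage from (\ref{Ma5}) to (\ref{Ma7}) and (\ref{Ma8}) is simple to explore numerically for a toy density. Taking, as an illustrative profile of our own choosing (not the actual limiting density of the present model), the strictly positive even density $\rho(u) = 1 + a \cos u$ on $(-\pi,\pi]$, normalised so that its integral is $2\pi$, the heuristic gives an exact ramp for $\mu < 1-a$, an exact plateau for $\mu > 1+a$, and the closed form $1 - ((1-\mu)u^* + a \sin u^*)/\pi$ with $u^* = \arccos((\mu-1)/a)$ in between:

```python
import numpy as np

def sff_heuristic(mu, rho, u):
    # Evaluate (Ma5)-(Ma8) for an even density rho(u) decreasing on [0, pi]
    r = rho(u)
    if mu <= r.min():                      # rho > mu everywhere: ramp (Ma7)
        return mu
    if mu >= r.max():                      # rho < mu everywhere: plateau (Ma8)
        return 1.0
    ustar_idx = np.searchsorted(-r, -mu)   # first u with rho(u) <= mu, cf. (Ma6)
    du = u[1] - u[0]
    integral = np.sum(r[:ustar_idx] - mu) * du
    return 1.0 - integral / np.pi

a = 0.5
rho = lambda u: 1.0 + a * np.cos(u)        # toy density, integral over (-pi, pi] is 2*pi
u = np.linspace(0.0, np.pi, 400001)

assert abs(sff_heuristic(0.3, rho, u) - 0.3) < 1e-12   # pure ramp below 1 - a
assert abs(sff_heuristic(2.0, rho, u) - 1.0) < 1e-12   # pure plateau above 1 + a
# deformed regime: closed form 1 - ((1 - mu) u* + a sin u*) / pi
mu = 1.2
ustar = np.arccos((mu - 1.0) / a)
exact = 1.0 - ((1.0 - mu) * ustar + a * np.sin(ustar)) / np.pi
assert abs(sff_heuristic(mu, rho, u) - exact) < 1e-4
```

Consistent with the discussion above, the ramp is unaltered for $\mu$ below the minimum of the density, and the plateau for $\mu$ above its maximum.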
In quantitative terms, it follows from the predictions two paragraphs above that there will be a deviation from the ramp and plateau functional forms whenever the global density is not a constant. However, if the global density is strictly positive, then the prediction is that the ramp will be unaltered in the range $0 < \mu < \mu_r$ for some $\mu_r$. If the global density is bounded from above, this argument gives instead that the plateau will be unaltered in the range $\mu_p < \mu < \infty$ for some $\mu_p$. Aspects of these predictions have previously been confirmed from the exact calculation of the (scaled) limit of $S_N(k)$ for the GUE, as seen by (\ref{Ma3.1}), and for the LUE, as seen by (\ref{Ma3.2}). For the GUE the global density is given by the Wigner semi-circle law, which vanishes at the endpoints. Hence we can anticipate that the ramp will always be deformed. On the other hand, the Wigner semi-circle is bounded, so it is predicted that the plateau is unaltered beyond a critical value $\mu_p$. Both these quantitative features are indeed seen in the exact functional form (\ref{Ma3.1}). In the case of the LUE with Laguerre parameter $a$ fixed, the global density follows a particular Mar\v{c}enko-Pastur functional form proportional to $x^{-1/2}(1-x)^{1/2} \chi_{0 < x < 1}$, which goes to infinity at the origin, before decreasing monotonically to zero at the right hand endpoint of the support. In this setting the heuristic working predicts that the ramp and plateau are both always deformed, which is indeed a feature of the exact solution (\ref{Ma3.2}). As commented, Dyson Brownian motion on $U(N)$ from the identity has the feature that for $t > 4$, the global density is strictly positive. This is in distinction to the global density for both the GUE and LUE. The significance of this feature in relation to dip-ramp-plateau is the prediction noted above that the ramp will be unaltered in the range $0 < \mu < \mu_r$ for some $\mu_r$.
Indeed we will find that this is a feature of our exact solution for the global scaling limit of $S_N(k; t)$ in the range $t>4$. \begin{remark} $ $ \\ 1.~ Adding the square of (\ref{X.1}) with $k=2 \sqrt{2N} \tau_b $ to the asymptotic form of $S_N^{(\rm G)}(2 \sqrt{2N} \tau_b )$ as implied by (\ref{Ma3.1}) gives a graphically accurate approximation to \begin{equation}\label{Ma3.2a} \Big \langle \Big | \sum_{l=1}^N e^{i 2 \sqrt{2N} \tau_b x_l} \Big |^2 \Big \rangle_{\rm GUE} \end{equation} as defined by the GUE analogue of (\ref{S.5}). A numerical plot --- see Figure \ref{F0} --- using a log-log scale of each axis exhibits the dip-ramp-plateau effect. \\ 2.~In general, Dyson Brownian motion on $U(N)$ depends on the initial condition, with the particular choice of the identity matrix, and thus all eigenvalues having angle $x_l(0)=0$, being the subject of the present work. From the above discussion, relevant questions in relation to dip-ramp-plateau for a more general choice (say with an eigenvalue density supported on an interval strictly within $(-\pi,\pi)$) are the functional form of the singularity at the boundary of the support for $t > 0$, and the existence of a time such that the eigenvalue density is strictly positive for all angles in $(-\pi,\pi]$. \\ 3.~There is some interest in the functional form of the amplitude $A(t)$ in (\ref{Ma2}) and exponent $c(t)$ in (\ref{Ma2b}) from the viewpoint of the dip-wavenumber discussed in the paragraph below (\ref{Ma8}). First, for $t < 4$ we see by equating the RHS of (\ref{Ma7}) multiplied by $N$ with (\ref{Ma2}) that a refinement of the asymptotic bound $k \asymp N^{1/2}$ for the dip-wavenumber is to include the dependence on $t$ by way of the amplitude $A(t)$, to obtain $k \asymp (A(t) N)^{1/2}$. The explicit functional form of $A(t)$ given in (\ref{E.1}) below shows that $A(t) \asymp (1-t/4)^{-1/4}$ for $t \to 4^-$. This divergence indicates a breakdown in the asymptotic dependence on $N$ for $t \ge 4$.
In fact at $t=4$, the dip-wavenumber relates to $N$ through the asymptotic relation $k \asymp N^{6/11}$ --- see Remark \ref{R4.8} below. In the case $t > 4$, equating the RHS of (\ref{Ma7}) multiplied by $N$ with (\ref{Ma2b}), we see that by including the exponent $c(t)$ in the former, the dip-wavenumber dependence on both $t$ and $N$ reads $k \asymp {1 \over c(t)} \log N$. The exponent $c(t)$ is given explicitly as $t \gamma(4/t)$ in (\ref{4.0d+}) below. We calculate from (\ref{gg+}) that for $t \to 4^+$, $c(t) \sim {8 \over 3}(1-4/t)^{3/2}$. Hence its reciprocal diverges in this limit, again in keeping with the distinct dip-wavenumber asymptotic relation for $t=4$. In the limit $t \to \infty$ we read off from (\ref{gg+}) that $c(t) \sim t$ and thus the dip-wavenumber asymptotic relation $k \asymp {1 \over t} \log N$. The factor $1/t$ acts as a damping of the dip-wavenumber for large $t$. This is in keeping with there being no dip effect in the $t \to \infty$ state of Dyson Brownian motion --- referred to in random matrix theory as the circular unitary ensemble (CUE); see e.g.~\cite[Ch.~2]{Fo10} --- due to the eigenvalue density then being rotationally invariant. \end{remark} \begin{figure*} \centering \includegraphics[width=0.75\textwidth]{Dip-ramp-plateau.pdf} \caption{Log-log plot of (\ref{Ma3.2a}) with $N = 80$ as a function of $\tau_b$.} \label{F0} \end{figure*} \subsection{Layout of the paper and main results} Section \ref{S2} presents a self-contained derivation of the joint eigenvalue PDF for Dyson Brownian motion on $U(N)$ from the identity. As noted in the recent work \cite{KLZF20}, the functional form can be identified as an example of a cyclic P\'olya ensemble. The significance of this is that the correlation kernel determining the general $k$-point correlation then admits a special structured form --- the corresponding theory is revised in Section \ref{S3}. Section \ref{S4} relates to the density $\rho_{(1),N}(x;t)$ and its moments.
Since the former is a periodic function of $x$, period $2 \pi$, and even in $x$, it admits the Fourier expansion \begin{align}\label{4.0a} {2 \pi \over N} \rho_{(1), N}(x;t) = \sum_{k=- \infty}^\infty m_{k}^{(N)}(t) e^{ i k x} = 1 + 2 \sum_{k=1}^\infty m_{k}^{(N)}(t) \cos k x, \end{align} where the $\{ m_{k}^{(N)}(t) \}$ are real and specified by $$ m_{k}^{(N)}(t) := {1 \over N} \int_{-\pi}^\pi \rho_{(1), N}(x;t) \, e^{- i k x} \, \mathrm dx. $$ In this context, the Fourier coefficients $\{ m_{k}^{(N)}(t) \}$ are referred to as the moments of the eigenvalue density. Our first new result is to detail the derivation of an exact evaluation of the moments, stated in the early literature without derivation. \begin{proposition} (Onofri \cite{On81} and Andrews and Onofri \cite{AO84}, both after correction) We have \begin{equation}\label{4.0c} m_{k}^{(N)}(t) = q^{k (N + k + 1)} \, {}_2 F_1 ( 1-N, 1 - k; 2; 1 - q^{-2k} ), \end{equation} where ${}_2F_1(a,b;c;z)$ denotes the Gauss hypergeometric function and $q = e^{-t/2N}$. \end{proposition} In fact two different derivations of (\ref{4.0c}) are given. The first involves Schur function averages, and is the one stated in \cite{On81, AO84} as giving rise to (\ref{4.0c}), but with the details omitted. A novel aspect of our presentation is the use of the cyclic P\'olya ensemble structure to compute the Schur function average. The second is to make use of the cyclic P\'olya ensemble structure for the density itself. This is both quicker and more straightforward than using Schur function averages. The exact formula (\ref{4.0c}) allows for the large $N$ asymptotic form of the $m_{k}^{(N)}(t)$ to be determined in the regime that $k/N = \mu > 0$ is fixed. \begin{coro}\label{C1a} Fix $k/N = \mu > 0$. Define \begin{equation}\label{c.1} t^* = t^*(\mu) := {2 \over \mu} \log \Big |{1 + \mu \over 1 - \mu} \Big |.
\end{equation} For $0 < t < t^*$ we have \begin{multline}\label{c.2} m_{k}^{(N)}(t) = {(-1)^N \over N^{3/2}} \sqrt{2 \over \pi} e^{- \mu t /2} {((1 - e^{-\mu t}) \mu)^{-1/2} \over ((1 - e^{-\mu t}) ( (\mu+1)^2 e^{- \mu t} - (\mu - 1)^2))^{1/4}}\\ \times \cos(N \tilde{h}(t,\mu) +\pi/4) + {\rm O} \Big ( {1 \over N^{5/2}} \Big ), \end{multline} where $ \tilde{h}(t,\mu) = h(\lambda,\mu) |_{\lambda = e^{- t \mu/2}}$ with $ h(\lambda,\mu)$ given by (\ref{h0}) below. On the other hand, for $ t > t^*$ we have that $ m_{k}^{(N)}(t) $ decays exponentially fast in $N$. \end{coro} The proof is given in Section \ref{S4.3}. The relevance of Corollary \ref{C1a} to the slope regime comes about by its validity in an extended regime $k,N \to \infty$ with $k \ll N$, which is also proved in Section \ref{S4.3}. \begin{coro}\label{C1b} Suppose $t < 4$, and let $A(t)$ be the amplitude of the square root singularity at the endpoints of the support of $\rho_{(1),N}(x;t)$; recall the text below (\ref{Ma1}). For $k, N \to \infty$ and $k \ll N$ we have \begin{equation}\label{h0+2} m_k^{(N)}(t) \sim {\sqrt{\pi} \over (N \mu)^{3/2}} A(t) \cos ( k L_0(t) - 3 \pi/4 + {\rm O}(k^2/N)), \end{equation} which is in agreement with (\ref{dy}). \end{coro} The topic of Section \ref{S5} is the calculation of the spectral form factor $S_N(k;t)$, and its $N \to \infty$ asymptotic limits, first for $k$ fixed, and then for $k$ proportional to $N$. In Proposition \ref{P5.2}, Eq.~(\ref{S.15}), $S_N(k;t)$ is expressed in terms of an integral over a variable $s$, where the key factor in the integrand can be identified with $(m_k^{(N)}(s+t))^2$, known explicitly according to (\ref{4.0c}). From this, the $N \to \infty$ limit is almost immediate. Further simplification then leads to our first limit formula in relation to the structure function.
\begin{theorem}\label{P5.6m} We have \begin{equation}\label{S.18x} \lim_{N \to \infty} S_N(k;t) = k - e^{-kt} \sum_{s=0}^{k-1} (k-s) \Big ( L_{s}^{(-1)}(kt) \Big )^2, \end{equation} where $L_s^{(a)}(z)$ denotes the Laguerre polynomial. \end{theorem} Next considered is the regime that $k/N =: \mu$ is fixed for $N \to \infty$, as is relevant for dip-ramp-plateau. The same strategy as used to deduce Corollary \ref{C1a}, namely rewriting the ${}_2 F_1$ function in (\ref{4.0c}) in terms of a Jacobi polynomial with parameters $(N(\mu - 1), 1)$ and then making use of known asymptotics of the latter reported in the literature \cite{SZ22}, suffices to deduce the corresponding limit theorem. \begin{figure*} \centering \includegraphics[width=0.75\textwidth]{DUn2.pdf} \caption{[Colour online] Log-log plot of (\ref{S.5}), converted to a continuous function of $\mu = k/N$ for large $N$ according to the procedure of the text, with $N = 20$ and various values of $t$. In order of the maximum height of the oscillations, or equivalently sharpest dip, these are $t=6$ (green), $t=4$ (yellow) and $t=2$ (blue). The (continuous) dip-ramp-plateau effect is evident; cf.~Figure \ref{F0}.} \label{Dfig2} \end{figure*} \begin{theorem}\label{P5.6} Define $t^*=t^*(\mu)$ by (\ref{c.1}) and require that $\mu > 0$. For $0 < t < t^*$ we have \begin{multline}\label{S.20a} \tilde{S}_\infty(\mu;t):= \lim_{N \to \infty} {1 \over N} S_N(k;t) \Big |_{\mu = k/N} \\ = \min (\mu, 1) - {\mu^3 \over \pi (\mu+1)} e^{- \mu t} \int_0^{(t^* - t)_+} { s e^{- \mu s} \over (1 - e^{-\mu (s+t)})^{3/2}} {1 \over \sqrt{ e^{- \mu (s+t)} - e^{- \mu t^*} }} \, \mathrm ds, \end{multline} where $ (t^* - t)_+ = t^* - t$ for $ t^* - t > 0$, and $ (t^* - t)_+ = 0$ otherwise. \end{theorem} As must be, $\tilde{S}_\infty(\mu;t)$ is well defined for continuous $\mu > 0$.
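As a numerical illustration, the RHS of the fixed-$k$ evaluation (\ref{S.18x}) is readily computed. A pure-Python sketch (helper names ours), using the three-term recurrence for the Laguerre polynomials together with the standard contiguous relation $L_s^{(-1)}(x) = L_s(x) - L_{s-1}(x)$, $L_{-1} := 0$:

```python
import math

def laguerre_list(n, x):
    # [L_0(x), ..., L_n(x)] via (m+1) L_{m+1} = (2m+1-x) L_m - m L_{m-1}
    L = [1.0, 1.0 - x]
    for m in range(1, n):
        L.append(((2 * m + 1 - x) * L[m] - m * L[m - 1]) / (m + 1))
    return L[:n + 1]

def sff_limit(k, t):
    # RHS of the fixed-k limit formula:
    #   k - e^{-kt} * sum_{s=0}^{k-1} (k - s) * (L_s^{(-1)}(kt))^2,
    # with L_s^{(-1)}(x) = L_s(x) - L_{s-1}(x), L_{-1} := 0
    x = k * t
    L = laguerre_list(k, x)
    total = 0.0
    for s in range(k):
        Lm1 = L[s] - (L[s - 1] if s >= 1 else 0.0)
        total += (k - s) * Lm1 * Lm1
    return k - math.exp(-x) * total

print([round(sff_limit(3, t), 6) for t in (0.0, 0.5, 2.0, 10.0)])
```

For fixed $k$ the value is $0$ at $t=0$ and increases to $k$ as $t \to \infty$, the latter being the CUE evaluation.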
Less immediate, but similarly true, is that this is a feature of the asymptotic formula (\ref{c.2}) for $m_k^{(N)}(t)$, even though for finite $N$ the Fourier coefficient label $k$ must be discrete. Consequently a continuous version of dip-ramp-plateau can be defined, which is the (log-log) plotting of (\ref{S.5}) as a function of $\mu$, with the first term on the RHS replaced by $N$ times the limit formula of Theorem \ref{P5.6}, and the second term replaced by $N^2$ times the large $N$ asymptotics of $( m_k^{(N)}(t) )^2$. From a practical viewpoint of performing the plot, since the latter has not been made explicit for $t \ge 4$, we replace this term by the finite $N$ Jacobi polynomial form (\ref{A.8b}) below with $k=\mu N$, as this prescription leads to the same asymptotic formulas; see Figure \ref{Dfig2} for some examples. \section{The joint eigenvalue PDF}\label{S2} Dyson showed that for Brownian motion associated with the group $U(N)$, the eigenvalue PDF, $p_\tau = p_\tau(e^{i x_1}, \dots, e^{i x_N})$ --- this quantity is a function too of the initial conditions --- evolves according to the Fokker-Planck equation (referred to by Dyson as the Smoluchowski equation) \begin{equation}\label{2.0a} \gamma {\partial p_\tau \over \partial \tau} = {\mathcal L} p_\tau, \qquad \mathcal L = \sum_{j=1}^N {\partial \over \partial x_j} \Big ( {\partial W \over \partial x_j} + {1 \over \beta} {\partial \over \partial x_j} \Big ), \end{equation} where $\gamma$ is a scale for the time-like parameter $\tau$, $\beta = 2$, and \begin{equation}\label{2.0b} W = - \sum_{1 \le j < k \le N} \log | e^{i x_k} - e^{i x_j} |. \end{equation} As pointed out in \cite{Dy62b}, this equation permits the interpretation as an $N$ particle system on the unit circle, with particles interacting pairwise via the potential $- \log | e^{i x} - e^{i y} |$, and executing overdamped Brownian motion in a fictitious viscous fluid with friction coefficient $\gamma$ at inverse temperature $\beta$.
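In the Langevin form of this dynamics, (\ref{2.0a}) corresponds to the coupled stochastic differential equations $\mathrm dx_j = -\gamma^{-1} (\partial W/\partial x_j)\, \mathrm d\tau + \sqrt{2/(\beta\gamma)}\, \mathrm dB_j$, with drift $-\partial W/\partial x_j = \frac12 \sum_{k\ne j} \cot((x_j - x_k)/2)$. A naive Euler--Maruyama discretisation illustrates the particle picture (a sketch only: the step size, seed and the small initial spread standing in for the singular identity initial condition are our choices, and this is not the numerical construction used elsewhere in the paper):

```python
import math, random

def drift(x, gamma=1.0):
    # -(1/gamma) dW/dx_j for W = -sum_{j<k} log|e^{i x_k} - e^{i x_j}|:
    # pairwise repulsion (1/(2 gamma)) cot((x_j - x_k)/2)
    N = len(x)
    d = [0.0] * N
    for j in range(N):
        for k in range(N):
            if k != j:
                d[j] += 0.5 / (gamma * math.tan((x[j] - x[k]) / 2))
    return d

def euler_maruyama_step(x, dt, beta=2.0, gamma=1.0, rng=random):
    d = drift(x, gamma)
    sig = math.sqrt(2.0 * dt / (beta * gamma))
    return [xj + dj * dt + sig * rng.gauss(0.0, 1.0)
            for xj, dj in zip(x, d)]

rng = random.Random(1)
N, dt = 8, 1e-4
# small spread about the identity (all angles zero); the exact
# delta-function initial condition is singular for this naive scheme
x = [0.05 * (j - (N - 1) / 2) for j in range(N)]
for _ in range(1000):
    x = euler_maruyama_step(x, dt, rng=rng)
print(sorted(round(v, 3) for v in x))
```

The drift components sum to zero (the pairwise repulsion is antisymmetric), and the particles spread out from the near-identity configuration as $\tau$ grows.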
The statistical properties of the eigenvalues at a particular time-like parameter $\tau$ are completely determined by $p_\tau$, which in turn requires solving (\ref{2.0a}) subject to a prescribed initial condition. This is an essential point of Dyson's original work \cite{Dy62b}. With the Fokker-Planck equation written in its equivalent form as coupled stochastic differential equations, this relation was further studied in \cite{CL01}; see too \cite[\S 2.2]{BF22}. A fundamental result of Sutherland \cite{Su71a} identifies a similarity transformation that maps the Fokker-Planck operator $\mathcal L$ to a Schr\"odinger operator $H$. Thus we have \begin{equation}\label{2.1a} - e^{ \beta W/2} \mathcal L e^{- \beta W/2} = {2 \over \beta} ( H - E_0) \end{equation} with \begin{equation}\label{2.1b} E_0 = \Big ( { \beta \over 2} \Big )^2 {N (N^2 - 1) \over 24} \end{equation} and \begin{equation}\label{2.1c} H = - {1 \over 2} \sum_{j=1}^N {\partial^2 \over \partial x_j^2} + (\beta/2) (\beta/2 - 1) \sum_{1 \le j < k \le N} {1 \over (2 \sin ( (x_k - x_j)/2))^2}. \end{equation} Notice that for $\beta = 2$, as required for Dyson Brownian motion on $U(N)$, the interaction term in (\ref{2.1c}) vanishes. In light of (\ref{2.1a}) and (\ref{2.1c}), knowledge of the free fermion Green function solution of the imaginary time Schr\"odinger equation on a circle allows for the computation of $p_\tau$ in the case of the initial condition \begin{equation}\label{3.0c1} p_\tau(\mathbf x) \Big |_{\tau = 0} = \prod_{l=1}^N \delta (x_l - x_l^{(0)}), \qquad (- \pi < x_1^{(0)} < \cdots < x_N^{(0)} \le \pi). \end{equation} Let us write $p_\tau(\mathbf x; \mathbf x^{(0)})$ to indicate this initial condition. We will compute the functional form of $p_\tau(\mathbf x; \mathbf 0)$ by first calculating $p_\tau(\mathbf x; \mathbf x^{(0)})$ in terms of a determinant and then taking the limit $ \mathbf x^{(0)} \to \mathbf 0$.
In preparation, introduce the Jacobi theta functions \begin{equation}\label{3.0c2} \theta_2(z;q) := \sum_{n=-\infty}^\infty q^{(n - 1/2)^2} e^{2 i z (n - 1/2)}, \qquad \theta_3(z;q) := \sum_{n=-\infty}^\infty q^{ n^2} e^{2 i z n}. \end{equation} Scale $\tau$ by setting $ \tau / \gamma = t/N$, and relate $q$ in (\ref{3.0c2}) to $t/N$ by setting \begin{equation}\label{3.0c3} q = e^{-t/2N}. \end{equation} \begin{proposition} (Liechty and Wang \cite{LW16}, Kieburg et al. \cite{KLZF20}) With $\kappa = 2$ for $N$ even and $\kappa = 3$ for $N$ odd and $q$ as in (\ref{3.0c3}) we have \begin{equation}\label{3.0c5} p_t(\mathbf x; \mathbf 0) = { q^{-N (N^2 - 1)/12} \over (2 \pi)^N \prod_{l=1}^{N } l!} \Big ( \prod_{1 \le j < k \le N} \sin (x_k - x_j)/2\Big ) \det \Big [ \Big (- {2} {\partial \over \partial x_j }\Big )^{k-1} \theta_\kappa ( x_j/2; q) \Big ]_{j,k=1,\dots,N}. \end{equation} \end{proposition} \begin{proof} The solution of the single particle imaginary time Schr\"odinger equation $$ \gamma {\partial \over \partial \tau} g_\tau = {1 \over 2} {\partial^2 \over \partial x^2} g_\tau, \qquad - \pi < x < \pi $$ with initial condition $ g_\tau(x; x^{(0)}) \to \delta(x - x^{(0)})$ as $\tau \to 0^+$ is $$ g_\tau(x; x^{(0)}) = \left \{ \begin{array}{ll} {1 \over 2 \pi} \theta_3 ( (x - x^{(0)})/2; q), & \text{periodic b.c.} \\ {1 \over 2 \pi} \theta_2( (x - x^{(0)})/2; q), & \text{anti-periodic b.c.} \end{array} \right., $$ where $q=e^{-\tau/2 \gamma}$. This can be verified directly, with the initial condition following from the functional form of the theta functions written in conjugate modulus form; see \cite{WW65}.
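The conjugate modulus form referred to here can be made explicit via Poisson summation: with $q = e^{-s}$ one has $\theta_3(x/2; q) = \sqrt{\pi/s}\, \sum_{m=-\infty}^\infty e^{-(x - 2\pi m)^2/(4s)}$, a $2\pi$-periodised sum of heat kernels, which makes the $\delta$-function initial condition evident as $s \to 0^+$. A quick numerical check of the agreement of the two forms (a sketch in pure Python; function names are ours):

```python
import math

def theta3_series(x, s, nmax=100):
    # theta_3(x/2; q) at q = exp(-s): sum_n q^{n^2} e^{i n x}, real and even
    return 1.0 + 2.0 * sum(math.exp(-s * n * n) * math.cos(n * x)
                           for n in range(1, nmax + 1))

def theta3_wrapped_gaussians(x, s, mmax=100):
    # conjugate modulus (Poisson summation) form: a 2*pi-periodised heat kernel
    return math.sqrt(math.pi / s) * sum(
        math.exp(-(x - 2 * math.pi * m) ** 2 / (4 * s))
        for m in range(-mmax, mmax + 1))

# the two forms agree for all x and s > 0; as s -> 0+ the second
# form exhibits the concentration at x = 0 (mod 2*pi)
for s in (0.05, 0.5, 2.0):
    print(s, theta3_series(0.7, s), theta3_wrapped_gaussians(0.7, s))
```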
Forming a Slater determinant gives that the Green function free fermion solution of the $N$-particle imaginary time free Schr\"odinger equation $$ \gamma {\partial \over \partial \tau} g_\tau = {1 \over 2} \sum_{j=1}^N {\partial^2 \over \partial x_j^2} g_\tau, \qquad - \pi < x_j < \pi $$ is \begin{equation}\label{3.0c4m} g_\tau(\mathbf x; \mathbf x^{(0)}) = \left \{ \begin{array}{ll} \displaystyle \det \Big [ {1 \over 2 \pi } \theta_3((x_j - x_k^{(0)})/2; q) \Big ]_{j,k=1}^N, & \text{periodic b.c.} \\[3mm] \displaystyle \det \Big [ {1 \over 2 \pi} \theta_2((x_j - x_k^{(0)})/2; q) \Big ]_{j,k=1}^N, & \text{anti-periodic b.c.} \end{array} \right.. \end{equation} The significance of knowledge of $g_\tau(\mathbf x; \mathbf x^{(0)})$ is that it follows from (\ref{2.1a}) with $\beta = 2$ that $$ p_\tau( \mathbf x;\mathbf x^{(0)}) = e^{E_0 |_{\beta = 2}\tau/ \gamma} \bigg ( {\prod_{1 \le j < k \le N} \sin ((x_k - x_j)/2) \over \prod_{1 \le j < k \le N} \sin ((x_k^{(0)} - x_j^{(0)})/2) } \bigg ) g_\tau( \mathbf x; \mathbf x^{(0)}). $$ Consequently, requiring that $ p_\tau( \mathbf x;\mathbf x^{(0)}) $ exhibits periodic boundary conditions with respect to the translation $x_j \mapsto x_j + 2 \pi $ and setting $\tau/\gamma = t/N$, we have \cite{PS91,Fo90b,Fo96} \begin{equation}\label{3.0c4} p_t(\mathbf x; \mathbf x^{(0)}) = q^{-N (N^2 - 1)/12} \bigg ( {\prod_{1 \le j < k \le N} \sin ((x_k - x_j)/2) \over \prod_{1 \le j < k \le N} \sin ((x_k^{(0)} - x_j^{(0)})/2) } \bigg ) \det \Big [ {1 \over 2 \pi } \theta_\kappa( ( x_j - x_k^{(0)})/2; q ) \Big ]_{j,k=1,\dots,N}, \end{equation} where $\kappa = 2$ for $N$ even and $\kappa = 3$ for $N$ odd and now $q$ is given by (\ref{3.0c3}). Here the different functional forms depending on the parity of $N$ can be traced back to the product over pairs in the numerator of the RHS of (\ref{3.0c4}) being multiplied by the sign $(-1)^{N - 1}$ upon the mapping $x_j \mapsto x_j + 2 \pi $.
With $\theta_2$ for $N$ even, and $\theta_3$ for $N$ odd, the determinant factor on the RHS of (\ref{3.0c4}) has the same property as noted in (\ref{3.0c4m}), implying that $p_\tau$ itself is always periodic. Also, in keeping with the ordering in (\ref{3.0c1}) and thus the implied normalisation, the normalisation associated with (\ref{3.0c4}) is so that $ \int_R p_\tau(\mathbf x) \, \mathrm d \mathbf x = 1$, where the region $R$ is specified by $- \pi < x_1 < \cdots < x_N < \pi$. Application of L'H\^{o}pital's rule in (\ref{3.0c4}) to take $x_j^{(0)} \to 0$ for $j=1,2,\dots,N$ in succession gives (\ref{3.0c5}). A further factor of $1/N!$ relative to (\ref{3.0c4}) has been included to allow the normalisation condition to become $ \int_{{[-\pi,\pi]}^N} p_t(\mathbf x; \mathbf 0) \, \mathrm d \mathbf x = 1$. \end{proof} \section{Cyclic P\'olya ensemble structure}\label{S3} \subsection{Definition of a cyclic P\'olya ensemble} The eigenvalue PDF (\ref{3.0c5}) has the structural property of being proportional to \begin{equation}\label{5.1a} \Big ( \prod_{1 \le j < k \le N} \sin (x_k - x_j)/2 \Big ) \det \Big [ {\partial^{k-1} \over \partial x_j^{k-1}} \hat{w}(x_j) \Big ]_{j,k=1,\dots,N}, \end{equation} for a particular $ \hat{w}(x)$. Such a form has been isolated in the recent work \cite{KLZF20}. This was in the context of a study of the implications of the theory of matrix spherical transforms on $U(N)$, as applied to multiplicative convolutions conserving a determinant structure, first presented in \cite{ZKF21}. By setting $z_j = e^{ i x_j}$ it is easy to see that (\ref{5.1a}) has the equivalent complex form \begin{equation}\label{5.1b} \frac{1}{Z_N}\Delta_N(\mathbf z) \det \Big [ \Big ( - z_j {\partial \over \partial z_j} \Big )^{k-1} w(z_j) \Big ]_{j,k=1,\dots,N}. 
\end{equation} Here $w(z) = z^{-(N-1)/2} \hat{w}(z)$, $Z_N$ is the normalisation constant which can be determined for general $N$ (see~\eqref{normalisation} below), and $\Delta_N(\mathbf z)$, which is referred to as the Vandermonde product, is specified by \begin{equation}\label{5.1c} \Delta_N(\mathbf z) := \prod_{1 \le j < k \le N}(z_k - z_j). \end{equation} The range of $\mathbf z$ is \begin{equation} \mathbf z\in\mathbb S_1^N:=\{z\in\mathbb C^N: \forall j= 1,\ldots,N, |z_j|=1\}, \end{equation} where the permutation invariance relaxes the ordering of eigenvalues. By requiring $w(z)$ to be a $(2M+\chi-1)$-differentiable cyclic P\'olya frequency function of order $N$ ($\chi=1$ for $N$ odd, $\chi=0$ for $N$ even), multiplied by $z^{-M-\chi+1}$~\cite[Sec. 2.5]{ZKF21}, one can ensure the positivity of~\eqref{5.1b}, and hence specify an ensemble with its eigenvalue PDF of this general form; \eqref{5.1b} is called a \textit{cyclic P\'olya ensemble}. For technical purposes, we further restrict the weight function $w$ to be in the set \begin{equation} \widetilde{L}^1(\mathbb S_1):=\{w\in L^1(\mathbb S_1):w(z)^*=z^{N-1}w(z)\text{ and }\partial_z^jw(z)\in L^1(\mathbb S_1)\text{ for }j=0,\ldots,N-1\}; \end{equation} see \cite{KLZF20} for further details. This allows us to construct bi-orthogonal systems of functions corresponding to each cyclic P\'olya ensemble. However, there are further aspects of the theory that need to be revised before doing this. \subsection{Spherical transform and closure under multiplicative convolution}\label{S2.2} One distinctive feature of cyclic P\'olya ensembles is that they are closed under multiplicative convolution. Thus the product of two independent random matrices drawn from this class of ensembles is again a random matrix drawn from this class of ensembles. To revisit the underlying theory \cite{KLZF20}, we first introduce the spherical transform of a $ U(N)$ random matrix with eigenvalue PDF $f$.
This is defined as \begin{equation}\label{2.6} \mathcal S f(s):=\int_{\mathbb S_1^N}\left(\prod_{j=1}^N\frac{\mathrm d z_j}{2\pi iz_j}\right)f(z)\Phi(z,s),\quad s_j\ne s_k\text{ for any }j\ne k, \: s_j \in \mathbb Z, \end{equation} where $\frac{\mathrm d z_j}{2\pi iz_j}$ is the uniform measure on the $j$-th unit circle $\mathbb S_1$, and $\Phi$ is the normalised character of irreducible representations of $U(N)$. The latter has the explicit form \begin{equation} \Phi(z,s):=\left(\prod_{j=0}^{N-1}j!\right)\frac{\det[z_j^{s_k}]_{j,k=1}^N}{\Delta_N(\mathbf z)\Delta_N( \mathbf s)}. \end{equation} There are two important properties of the spherical transform. First, it possesses an inversion formula~\cite[(2.2.12)]{KLZF20}, and therefore every $\mathcal S f$ is uniquely associated with the eigenvalue PDF $f$. Second, it also admits a convolution formula. Thus, for independent random matrices $U_1,U_2$ with eigenvalue PDFs $f_{U_1}$ and $f_{U_2}$, the spherical transform of the eigenvalue PDF $f_{U_1U_2}$ of the product $U_1U_2$ has the factorisation \begin{equation}\label{convolution_formula} \mathcal S f_{U_1U_2}(s)=\mathcal S f_{U_1}(s)\mathcal S f_{U_2}(s). \end{equation} These two properties allow us to obtain the eigenvalue PDF of the product $U_1U_2$ by inverting the right side of~\eqref{convolution_formula}. We are now in a position to address the question of closure with respect to multiplicative convolution. First of all, integrating~\eqref{5.1b} shows that the normalisation constant is given by \begin{equation}\label{normalisation} Z_N= N! \prod_{j=0}^{N-1}j!\mathcal S w(j). \end{equation} Here $\mathcal S w$ is the one-dimensional spherical transform of the weight function $w$, which corresponds to the coefficients of the Fourier series expansion of $w$.
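The scalar mechanism underlying the convolution formula is that the one-dimensional spherical transform (i.e.~the Fourier coefficients) turns the univariate multiplicative convolution, defined in (\ref{2.11}) below, into a pointwise product. The following pure-Python sketch verifies this on toy trigonometric-polynomial weights (the weights and helper names are ours, not the Dyson Brownian motion weight):

```python
import cmath, math

def w_eval(coeffs, z):
    # Laurent (trigonometric) polynomial w(z) = sum_s c_s z^{-s}
    return sum(c * z ** (-s) for s, c in coeffs.items())

def mult_conv(c1, c2, z, M=64):
    # (w1 * w2)(z) = int_{S_1} w1(z y^{-1}) w2(y) dy/(2 pi i y),
    # computed by averaging over M-th roots of unity (exact once M
    # exceeds the bandwidth of the two weights)
    total = 0j
    for k in range(M):
        y = cmath.exp(2j * math.pi * k / M)
        total += w_eval(c1, z / y) * w_eval(c2, y)
    return total / M

def fourier_coeff(f, s, M=64):
    # coefficient of z^{-s}, again by exact quadrature on roots of unity
    total = 0j
    for k in range(M):
        z = cmath.exp(2j * math.pi * k / M)
        total += f(z) * z ** s
    return total / M

w1 = {-2: 0.3, 0: 1.0, 1: 0.5}   # toy weights, not the DBM weight
w2 = {-1: 0.7, 0: 1.0, 2: 0.2}
for s in range(-3, 4):
    lhs = fourier_coeff(lambda z: mult_conv(w1, w2, z), s)
    rhs = fourier_coeff(lambda z: w_eval(w1, z), s) * \
          fourier_coeff(lambda z: w_eval(w2, z), s)
    print(s, abs(lhs - rhs))
```

The printed differences are at machine-precision level, illustrating that the Fourier coefficients of the convolution factorise coefficient by coefficient.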
To show closure under multiplicative convolution, an explicit calculation tells us that the spherical transform of a cyclic P\'olya ensemble of the form~\eqref{5.1b} is given by \begin{equation}\label{sf_cpe} \mathcal Sf(s)=\prod_{j=1}^{N}\frac{\mathcal Sw(s_j)}{\mathcal Sw(j-1)}, \end{equation} (see~\cite[Prop. 4]{KLZF20}). From the convolution formula, it is now not difficult to see that \begin{equation}\label{sf_cpe_product} \mathcal Sf_{U_1U_2}(s)=\prod_{j=1}^{N}\frac{\mathcal Sw_{U_1}(s_j)\mathcal Sw_{U_2}(s_j)}{\mathcal Sw_{U_1}(j-1)\mathcal Sw_{U_2}(j-1)}=\prod_{j=1}^{N}\frac{\mathcal S[w_{U_1}\ast w_{U_2}](s_j)}{\mathcal S[w_{U_1}\ast w_{U_2}](j-1)}, \end{equation} where $w_{U_1}\ast w_{U_2}$ denotes the univariate multiplicative convolution of $w_{U_1}$ and $w_{U_2}$. Explicitly, \begin{equation}\label{2.11} w_{U_1}\ast w_{U_2}(z):=\int_{\mathbb S_1}w_{U_1}(zy^{-1})w_{U_2}(y) {\mathrm d y \over 2 \pi i y}. \end{equation} As~\eqref{sf_cpe_product} has the same structure as~\eqref{sf_cpe}, one concludes that the product $U_1U_2$ is also a cyclic P\'olya ensemble with a new weight function $w_{U_1}\ast w_{U_2}$. \subsection{Biorthogonal system} Another important feature of the cyclic P\'olya ensemble is that it corresponds to a determinantal point process, which in turn can be characterised by a bi-orthogonal system. With the formula for the ratio of gamma functions \begin{equation} \frac{\Gamma(j-l)}{\Gamma(-l)}=(-1)^{j}\frac{\Gamma(l+1)}{\Gamma(l-j+1)}, \qquad l \in \mathbb Z, \end{equation} we introduce a set of pairs of functions $\{(P_j,Q_j)\}_{j=0,\ldots,N-1}$, where \begin{equation}\label{biorth-cyc.Pol} \begin{split} P_j(z_1):=&\sum_{k=0}^j\frac{1}{(j-k)!k!}\frac{(-z_1)^k}{\mathcal{S}w(k)},\\ Q_j(z_2):=&z_2\partial_{z_2}^j{z_2}^{j-1}w(z_2)=\lim_{\epsilon\to0^+}\sum_{l\in\mathbb Z\backslash\{0,1,\ldots,j-1\}}\frac{\Gamma(j-l)}{\Gamma(-l)}\mathcal{S}w(l)z_2^{-l}e^{-\epsilon (l+1-N)l} \end{split} \end{equation} for $j=0,\ldots,N-1$.
In the formula for $Q_j$ the factor $e^{-\epsilon (l+1-N)l}$ is a regularisation. This is required to ensure convergence of the infinite sum in the case of general weights $w(z)$. It is shown in~\cite{KLZF20} that both $\{P_j(z_1)\}_{j=0,\ldots,N-1}$ and $\{z_1^{j-1}\}_{j=1,\ldots,N}$ span the same vector space, and so do both $\{Q_j(z_2)\}_{j=0,\ldots,N-1}$ and $\{(z_2\partial_{z_2})^{j-1}w(z_2)\}_{j=1,\ldots,N}$. Moreover $\{P_j\}$ and $\{Q_j\}$ have been constructed to form a bi-orthogonal set with respect to the Haar measure on $\mathbb S_1$. Thus \begin{equation}\label{2.15} \int_{\mathbb S_1}\frac{\mathrm d z}{2\pi i z}P_a(z)Q_b(z)=\delta_{a,b}, \end{equation} where $\delta_{a,b}$ denotes the Kronecker delta. For this reason, we say that $\{(P_j,Q_j)\}_{j=0,\ldots,N-1}$ is the corresponding bi-orthogonal system for the cyclic P\'olya ensemble. From the linearity of determinants in~\eqref{5.1b}, the span properties of the polynomials $\{P_j(z_1)\}_{j=0,\ldots,N-1}$ and the functions $\{Q_j(z_2)\}_{j=0,\ldots,N-1}$ enable us to rewrite the PDF as \begin{equation}\label{2.14} {1 \over N!} \det[P_{j-1}(z_k)]_{j,k=1}^N\det[Q_{j-1}(z_k)]_{j,k=1}^N. \end{equation} We can check that (\ref{2.14}) is correctly normalised. For this we recall Andr\'eief's identity (see e.g.~\cite{Fo18}), which gives \begin{multline*} {1 \over N!} \int_{\mathbb S_1^N}\left(\prod_{j=1}^N\frac{\mathrm d z_j}{2\pi iz_j}\right) \det[P_{j-1}(z_k)]_{j,k=1}^N\det[Q_{j-1}(z_k)]_{j,k=1}^N \\ = \det \bigg [ \int_{\mathbb S_1} \frac{\mathrm d z}{2\pi i z} P_{j-1}(z) Q_{k-1}(z) \,\bigg ]_{j,k=1,\dots,N}. \end{multline*} Application of (\ref{2.15}) shows that the matrix on the RHS is in fact the identity, and thus the determinant is equal to unity, as required. \subsection{Correlation kernel} Associated with any bi-orthogonal system for a determinantal point process is its correlation kernel $K_N(z,z')$.
By definition, the corresponding $m$-point correlation function can be expressed as an $m$-dimensional determinant of this kernel, \begin{equation}\label{2.16} \rho_{(m),N}(z_1,\ldots,z_m)=\det[K_N(z_j,z_k)]_{j,k=1}^m. \end{equation} Starting from the expression (\ref{2.14}), it is a standard fact that \cite{Bo98} \begin{equation}\label{2.16a} K_N(z_1,z_2) = \sum_{j=1}^N P_{j-1}(z_1) Q_{j-1}(z_2). \end{equation} It is shown in~\cite{KLZF20} that with the substitution (\ref{biorth-cyc.Pol}), (\ref{2.16a}) permits the rewrite \begin{multline}\label{kernel-cyc.Pol} K_N(z_1,z_2)=\sum_{k=0}^{N-1}(z_1z_2^{-1})^k \\ +\lim_{\epsilon\to0^+}\sum_{k=0}^{N-1}\sum_{l\in\mathbb Z\backslash\{0,\ldots,N-1\}}\frac{\Gamma(N-l)}{\Gamma(-l)\Gamma(N-k)\Gamma(k+1)}\frac{\mathcal{S}w(l)}{\mathcal{S}w(k)}\frac{(-z_1)^{k}z_2^{-l}}{k-l}e^{-\epsilon (l+1-N)l}, \end{multline} where as in the second expression of (\ref{biorth-cyc.Pol}) the factor involving $\epsilon$ is included to ensure convergence of the infinite sum for general weights $w(z)$. \begin{remark} Using the identity $$ \prod_{1 \le j < k \le N} \sin (x_k - x_j)/2 \propto \prod_{l=1}^N z_l^{-(N-1)/2} \Delta_N(\mathbf z), $$ as implicit in the passage from (\ref{5.1a}) to (\ref{5.1b}), shows that for general initial conditions $\mathbf z^{(0)}$ the PDF (\ref{3.0c4}) has the form (\ref{2.14}). From the general theory of biorthogonal ensembles in random matrix theory \cite{Bo98} the correlation functions are again of the form (\ref{2.16}), albeit with $K_N$ now dependent on $\mathbf z^{(0)}$; see \cite{VP09} for results in this direction. \end{remark} \subsection{Application to the PDF \eqref{3.0c4}} We conclude this section by presenting the cyclic P\'olya ensemble structure and the correlation kernel for~\eqref{3.0c4}, which is a direct corollary of the theory presented in~\cite{KLZF20}, as revised above. 
\begin{coro}[{\cite[special case of Prop.~19]{KLZF20}}] With the change of variable $z_j = e^{i x_j}$, the PDF~\eqref{3.0c5} becomes a cyclic P\'olya ensemble with weight function \begin{equation}\label{2.18} w(z)=\sum_{s=-\infty}^\infty q^{(s-(N-1)/2)^2}z^{-s}, \qquad q=e^{-t/2N}. \end{equation} The correlation kernel corresponding to~\eqref{3.0c5} is \begin{equation}\label{kernel} \begin{split} K_N(x,y)=& \frac{1}{2 \pi } \bigg(\frac{e^{ iN(x-y)}-1}{e^{ i(x-y)}-1}\\&+\sum_{k=0}^{N-1}\sum_{l\in\mathbb Z\backslash\{0,\ldots,N-1\}} \frac{\Gamma(N-l)q^{(l-(N-1)/2)^2-(k-(N-1)/2)^2}}{\Gamma(-l)\Gamma(N-k)\Gamma(k+1)}\frac{(-1)^{k}e^{ i(xk-yl)}}{k-l}\bigg). \end{split} \end{equation} Compared to~\eqref{kernel-cyc.Pol}, here the regularisation is removed. \end{coro} \begin{proof} The spherical transform of $w$ is read off from the coefficients of its Fourier series form (\ref{2.18}). Thus \begin{equation}\label{sf_cpe+} \mathcal S w(s)=q^{(s-(N-1)/2)^2}. \end{equation} With $z_j = e^{ix_j}$, upon recalling (\ref{normalisation}), we see that~\eqref{5.1b} can be rewritten as \begin{equation} \frac{1}{q^{N(N^2-1)/12}\prod_{j=1}^{N}j!}\prod_{j<k}\left(e^{ix_k}-e^{ix_j}\right)\det\left[\left(i\frac{\partial}{\partial x_j}\right)^{k-1}w(e^{ix_j})\right]_{j,k=1,\ldots,N}. \end{equation} The Haar measure $\frac{\mathrm d z_j}{2\pi i z_j}$ is replaced accordingly by $\frac{\mathrm dx_j}{2\pi}$, which is responsible for the factor of $\frac{1}{2\pi}$ in (\ref{kernel}) since there we take $\mathrm dx_j$ as the reference measure. Now one notices that the Vandermonde product can be replaced by \begin{equation} \prod_{j<k}\left(e^{ix_k}-e^{ix_j}\right)= e^{(N-1)i \sum_{j=1}^N x_j/2 }\prod_{j<k}2i\sin\left(\frac{x_k-x_j}{2}\right). 
\end{equation} Also, it can be checked that the Jacobi theta functions~\eqref{3.0c2} have the unified form \begin{equation} \theta_{\kappa}\left(\frac{y}{2};q\right)=\exp\left(iy\frac{N-1}{2}\right)w(e^{iy}), \end{equation} with $\kappa=2$ for $N$ even and $\kappa=3$ for $N$ odd. Therefore, taking the derivatives of the theta functions and applying a row reduction to reduce every entry to the highest derivative of $w$, one has \begin{equation} \det\left[\left(i\frac{\partial}{\partial x_j}\right)^{k-1}\theta_\kappa\left(\frac{x_j}{2};q\right)\right]_{j,k=1,\ldots,N}= e^{(N-1)i \sum_{j=1}^N x_j/2 } \det\left[\left(i\frac{\partial}{\partial x_j}\right)^{k-1}w(e^{ix_j})\right]_{j,k=1,\ldots,N}. \end{equation} Combining these formulas gives~\eqref{5.1b}. To obtain the correlation kernel, one substitutes $z_j = e^{i x_j}$ into~\eqref{kernel-cyc.Pol}. It remains to show that the $l$-sum is absolutely convergent, so that we can drop the limit $\epsilon\to0^+$ in (\ref{kernel}). After decomposing the $l$-sum into two sums, one over positive $l$ and the other over negative $l$, we need to verify that both series of positive terms \begin{equation} S_1:=\sum_{l=N}^\infty\frac{\Gamma(l+1)}{\Gamma(l-N+1)}\frac{q^{(l-\frac{N-1}{2})^2}}{l-k},\quad S_2:=\sum_{\tilde l=1}^\infty\frac{\Gamma(N+\tilde l)}{\Gamma(\tilde l)}\frac{q^{(\tilde l+\frac{N-1}{2})^2}}{k+\tilde l} \end{equation} are convergent for $k\in\{0,\ldots,N-1\}$ and $q<1$. A ratio test suffices to establish this claim. \end{proof} \begin{remark}\label{R3.2} $ $\\ 1.~In relation to multiplicative convolution of two matrices $U_1, U_2$ drawn from the PDF~\eqref{3.0c5}, one with $q_1 = e^{-t_1/2N}$, and the other with $q_2 = e^{-t_2/2N}$, the theory in the final paragraph of Section \ref{S2.2} is applicable.
Thus the PDF corresponding to $U_1 U_2$ is again a cyclic P\'olya ensemble with weight function computed from (\ref{2.11}), in keeping with the laws of unitary Brownian motion forming a semigroup under the multiplicative convolution $$ f_N^{(U_1)} * f_N^{(U_2)} = \int_{U(N)} f_N^{(U_1) }(U') f_N^{(U_2) }(U {U'}^{-1}) \, d \mu(U'), $$ where $f_N^{(U)}$ denotes the PDF. This gives back the same weight (\ref{2.18}), but with parameter $q = q_1 q_2$, or equivalently with time parameter $t = t_1 + t_2$. We remark that this in turn is consistent with the fact that in the numerical construction (\ref{1.0}) the Brownian motions have Gaussian increments with variance depending only on the length of the time interval.\\ 2.~Substituting (\ref{sf_cpe+}) in the first equation of (\ref{biorth-cyc.Pol}) shows $$ P_j(z) = q^{-(N-1)^2/4} \sum_{k=0}^j {1 \over (j - k)! k!} q^{k (N - 1 - k)} (-z)^k. $$ One sees in particular that \begin{equation}\label{3.28} z^{-n/2} P_n(z) \Big |_{N = n + 1} \propto \sum_{k=0}^n (-1)^{n-k} \binom{n}{k} q^{k (n - k)} z^{(k - n/2)} =: \tilde{H}_n(z;q). \end{equation} The RHS defines the so-called unitary Hermite polynomials (terminology from \cite{Ka22}). They first appeared, in a random matrix context at least, in the thesis of Mirabelli \cite{Mi21} on finite free probability \cite{Ma21}. They are extensively studied in the recent work of Kabluchko \cite{Ka22} from various viewpoints; see Remark \ref{R4.1a} below for more on this. We can use (\ref{3.28}) to deduce a formula for $\tilde{H}_n(z;q)$ expressed as an average with respect to the PDF (\ref{3.0c5}). First, in relation to the polynomials $\{ P_j(z) \}$ generally, we observe the representation as an average $P_l(z) = \langle \prod_{s=1}^l (z - z_s) \rangle$. Here $\langle \cdot \rangle$ is with respect to the PDF (\ref{5.1b}), but where the size of the Vandermonde product and the determinant are reduced from $N$ to $l$, although with $w(z)$ unchanged and thus still dependent on $N$.
Replace this latter $N$ by $n+1$, then set $l=n$. As in verifying the equality between (\ref{5.1a}) and (\ref{5.1b}), one can then check that the LHS of (\ref{3.28}), and thus up to proportionality $ \tilde{H}_n(e^{ix};q)$, admits the form $\langle \prod_{k=1}^n \sin ( ( x - x_k)/2) \rangle$, where the average is with respect to the PDF (\ref{3.0c5}) with $N = n$. \end{remark} \section{The density and its moments}\label{S4} \subsection{Known results}\label{S3.1} The eigenvalue density, $\rho_{(1), N}(x;t)$ say, corresponding to (\ref{3.0c5}) is specified in terms of $p_t(\mathbf x; \mathbf 0)$ by \begin{equation}\label{4.0} \rho_{(1), N}(x;t) = N \int_{[-\pi,\pi]^{N-1}} p_t(\mathbf x; \mathbf 0) \, \mathrm dx_2 \cdots \mathrm dx_N. \end{equation} Within random matrix theory the corresponding moments (\ref{4.0a}) were first considered by Biane \cite{Bi97} using methods of free probability theory, and independently by Rains \cite{Ra97}. Specifically, the exact evaluation for $N \to \infty$ \begin{equation}\label{4.0b} m_{k}^{(\infty)}(t) := \lim_{N \to \infty} m_{k}^{(N)}(t) = {e^{-kt/2} \over k} L_{k-1}^{(1)}(kt) = e^{-kt/2} \, {}_1 F_1 ( 1 - k; 2; kt) , \qquad k \ge 1, \end{equation} was obtained. Here $L_n^{(\mu)}(x)$ denotes the Laguerre polynomial of degree $n$ with parameter $\mu$, and ${}_1 F_1(a;b;x)$ denotes the confluent hypergeometric function with parameters $a,b$ and argument $x$, which like the Laguerre polynomial can be defined by its power series expansion. It turns out that the same result (\ref{4.0b}) can already be found in the literature on exactly solvable low-dimensional field theories from the early 1980s \cite{Ro81} (see also \cite{Ka81} for the cases $n=2,3$). The point here is that these field theories can be reduced to Dyson Brownian motion on $U(N)$ (also referred to as the heat kernel measure on $U(N)$), with $\langle {\rm Tr} \, U^k \rangle$ an observable relating to Wilson loops.
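The equality of the Laguerre and confluent hypergeometric expressions in (\ref{4.0b}) is elementary to confirm numerically. The following sketch (Python, illustrative only and not part of the text's derivations) implements both polynomials directly from their terminating series and checks agreement for the first few $k$.

```python
from math import comb, exp, factorial

def laguerre(n, alpha, x):
    # Laguerre polynomial L_n^{(alpha)}(x) from its terminating series
    return sum(comb(n + alpha, n - j) * (-x) ** j / factorial(j) for j in range(n + 1))

def hyp1f1_poly(k, x):
    # terminating 1F1(1-k; 2; x); the series stops after k terms since (1-k)_j = 0 for j >= k
    total, term = 0.0, 1.0
    for j in range(k):
        total += term
        term *= (1 - k + j) / (2 + j) * x / (j + 1)
    return total

def m_infty(k, t):
    # the two expressions for the limiting moment in (4.0b)
    return (exp(-k * t / 2) / k * laguerre(k - 1, 1, k * t),
            exp(-k * t / 2) * hyp1f1_poly(k, k * t))
```

For instance, the two entries returned by `m_infty(3, 1.3)` agree to machine precision, as expected from the identity $L_{k-1}^{(1)}(x) = k \, {}_1F_1(1-k;2;x)$.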
Moreover, soon after, the exact formula (\ref{4.0c}) for the finite $N$ case was found \cite{On81,AO84}, although the result was stated without derivation and contains misprints; we reference \cite[Eq.~(3.14) after correction]{GT14} for a more recent work relating to this result, also in a field theory context. From the series form of the Gauss hypergeometric function ${}_2 F_1(a,b;c;z)$ in (\ref{4.0c}), which with $b = 1 - k$ terminates to give a polynomial in $z$ of degree $k-1$, the second expression in (\ref{4.0b}) is reclaimed in the limit $N \to \infty$ upon recalling from (\ref{3.0c3}) that $q = e^{-t/2N}$. Defining \begin{equation}\label{4.0d} \rho_{(1), \infty}(x;t) := \lim_{N \to \infty} {2 \pi \over N} \rho_{(1), N}(x;t) = 1 + 2 \sum_{k=1}^\infty m_k^{(\infty)}(t) \cos k x , \end{equation} substituting (\ref{4.0b}) does not lead to a closed expression by way of evaluation of the summation over $k$. Instead, one source of analytic information comes from consideration of the large $k$ form of $m_k^{(\infty)}$. Thus it follows from (\ref{4.0b}) that for $t < 4$ the rate of decay is $O(k^{-3/2})$ and is consistent with (\ref{Ma1}) \cite{GM95}. This is no longer true for $t \ge 4$. Explicitly, for $t > 4$ the asymptotic analysis of the Laguerre polynomial given in \cite{GM95} implies \begin{equation}\label{4.0d+} m^{(\infty)}_k(t) \mathop{\sim}\limits_{k \to \infty} {t \over \sqrt{2 \pi k}} \Big ( 1 - {4 \over t} \Big )^{-1/4} e^{-k t \gamma(4/t)/2}, \end{equation} where \begin{equation}\label{gg+} \gamma(x) := \sqrt{1 - x} - {x \over 2} \log {1 + \sqrt{1 - x} \over 1 - \sqrt{1 - x}}, \end{equation} giving that the rate of decay is exponentially fast \cite{GM95}. At $t=4$ the decay is $O(k^{-4/3})$ \cite{Le08}.
Moreover, the support of the density for $t < 4$ is $[-L_0(t), L_0(t)]$ where \cite{DO81,Bi97} \begin{equation}\label{4.0e} L_0(t) = {1 \over 2} \sqrt{t (4 - t)} + {\rm Arccos} \, (1 - t/2), \end{equation} while for $t > 4$ the support is the full interval $[-\pi,\pi]$ with $ \rho_{(1), \infty}(x;t) \to 1$ as $t \to \infty$. Next we summarise results from \cite{Bi97}, following the clear presentation in \cite{Ha18}. Set $$ z = e^{i x}, \quad \rho_{(1),\infty}(x;t) \mapsto \rho_{(1),\infty}(z;t), $$ and introduce the Herglotz transform \begin{equation}\label{a.0} H_t(w) = \int_{\mathcal C} {z + w \over z - w} \rho_{(1),\infty}(z;t) \,{ \mathrm dz \over 2 \pi i z} \end{equation} where the contour $\mathcal C$ is the unit circle in the complex $z$-plane, traversed anti-clockwise. With \begin{equation}\label{a.1} \psi_t(w) = \int_{\mathcal C} { w \over z - w} \rho_{(1),\infty}(z;t) \, {\mathrm dz \over 2 \pi i z} = \sum_{p=1}^\infty w^p m_p^{(\infty)}(t) \end{equation} denoting the moment generating function, one sees \begin{equation}\label{a.2} H_t(w) = 1 + 2 \psi_t(w) \end{equation} and thus \begin{equation}\label{a.3} \rho_{(1),\infty}(w;t) = {\rm Re} \, H_t(w). \end{equation} From knowledge of $\{ m_p^{(\infty)}(t) \}$ from (\ref{4.0b}) substituted in (\ref{a.1}), with the result then substituted in (\ref{a.2}), we can check that $H_t$ satisfies the partial differential equation \begin{equation}\label{a.4} \Big ( {\partial \over \partial t} + w H_{2t}(w) {\partial \over \partial w} \Big ) H_{2t}(w) = 0, \end{equation} subject to the initial condition $H_0(w) = (1 + w)/(1-w)$. The equation (\ref{a.4}) can be identified with the complex Burgers equation in the inviscid limit, and can also be derived by other considerations \cite{PS91,FG16}. It can be checked from (\ref{a.4}) that $H_t$ satisfies the functional equation \begin{equation}\label{a.5} {H_t(w) - 1 \over H_t(w) + 1} e^{(t/2) H_t(w)} = w.
\end{equation} Using (\ref{a.4}), it can be established \cite{BN08}, \cite[Appendix A]{AL22} that for $t < 4$, \begin{equation}\label{E.1} \rho_{(1), \infty}(L_0(t) -x;t) \mathop{\sim}\limits_{x \to 0^+} A(t) \sqrt{x}, \qquad A(t) = {1 \over \pi} \sqrt{2 \over t^{3/2} (4 - t)^{1/2}}, \end{equation} and that for $t=4$, $ \rho_{(1), \infty}( \pi -x;t) \asymp |x|^{1/3}$ as $x \to 0$. The former is well known in random matrix theory as a characteristic of a soft edge known from the boundaries of the Wigner semi-circle law --- see e.g.~\cite[\S 1.4]{Fo10} --- while the cusp $|x|^{1/3}$ characterises the Pearcey singularity \cite{BH98,HHN16,EKS20}. It is also known that for small $t$ the density is well approximated by a Wigner semi-circle functional form, which becomes exact in a scaling limit \cite{Ka22}. \begin{remark}\label{R4.1a} One of the results obtained in \cite{Ka22} in relation to the unitary Hermite polynomials (\ref{3.28}) concerns their zeros. In particular, it is shown that all the zeros of (\ref{3.28}) are on the unit circle in the complex $z$-plane, and with $z=e^{ix}$, $q=e^{-t/2n}$, their density for $n \to\infty$ is given by $ \rho_{(1), \infty}(x;t) $. Our expression for $\tilde{H}_n(z;q)$ as an average with respect to the PDF (\ref{3.0c5}) obtained in Remark \ref{R3.2}.2 gives a different viewpoint on the result for the density. Thus for the average one has the limit formula (see e.g.~\cite[Lemma 3.3]{FW17} for justification) \begin{equation}\label{3.29} \lim_{n \to \infty} {1 \over n} \log \Big \langle \prod_{k=1}^n \sin ( ( x - x_k)/2) \Big \rangle \Big |_{q=e^{-t/2n}} = {1 \over 2 \pi} \int_I \log [\sin ((x-y)/2)] \rho_{(1), \infty}(y;t) \, \mathrm dy, \quad x \notin I, \end{equation} where $I \subset [-\pi,\pi]$ is the interval of support of $ \rho_{(1), \infty}$, which interpreted in terms of $\tilde{H}_n(z;q)$ implies the limiting density of zeros result from \cite{Ka22}.
\end{remark} \subsection{Schur function average} Let $z_j = e^{ i x_j }$, and let $\langle \cdot \rangle$ denote an average with respect to the PDF (\ref{3.0c5}), which is dependent on $q = e^{-t/2N}$. The moments of the spectral density then correspond to computing \begin{equation}\label{A.1} m_k^{(N)}(t) = {1 \over N} \Big \langle \sum_{j=1}^N z_j^k \Big \rangle. \end{equation} In the theory of symmetric polynomials, $\sum_{j=1}^N z_j^k $ is referred to as the power sum, which for $N$ large enough can be used to form a basis \cite{Ma95}. Another prominent choice of basis for symmetric polynomials is the Schur polynomials $\{S_\kappa \}$, labelled by a partition $\kappa = (\kappa_1, \dots, \kappa_N)$, where $\kappa_1 \ge \kappa_2 \ge \cdots \ge \kappa_N \ge 0$. They can be defined in terms of a determinant according to \begin{equation}\label{A.2} S_\kappa(z_1,\dots,z_N) = { \det [ z_k^{N - j + \kappa_j} ]_{j,k=1}^N \over \Delta_N(- \mathbf z)}, \end{equation} with $\Delta_N( \mathbf z)$ specified according to (\ref{5.1c}). In fact one has the identity (see e.g.~\cite{Ma95}) \begin{equation}\label{A.3} \sum_{j=1}^N z_j^k = \sum_{r=0}^{{\rm min} \, (k-1,N-1)} (-1)^r S_{(k-r,1^r)} (z_1,\dots, z_N), \end{equation} where $(k-r,1^r)$ denotes the partition with largest part $\kappa_1 = k - r$, $r$ parts $(r \le N - 1)$ equal to 1 and the remaining parts equal to 0. Thus \begin{equation}\label{A.4} \Big \langle \sum_{j=1}^N z_j^k \Big \rangle = \sum_{r=0}^{{\rm min} \, (k-1,N-1)} (-1)^r \langle S_{(k-r,1^r)} (z_1,\dots, z_N) \rangle . \end{equation} Evaluating the RHS of (\ref{A.4}) is the method sketched in \cite{On81,AO84} to obtain (\ref{4.0c}) --- details of the calculation were not given. Here we provide the details, showing too that the crucial step of evaluating the Schur polynomial average can be carried out within the general formalism based on cyclic P\'olya ensembles.
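The identity (\ref{A.3}) is readily confirmed numerically from the determinant formula (\ref{A.2}). In the following sketch (Python, illustrative only) the Schur polynomial is computed as the bialternant ratio $\det [ z_k^{N-j+\kappa_j}] / \det [ z_k^{N-j}]$, the denominator being the Vandermonde determinant.

```python
from itertools import permutations

def det(M):
    # determinant via the Leibniz permutation expansion (adequate for small N)
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if p[a] > p[b]:
                    sign = -sign
        prod = sign
        for i in range(n):
            prod = prod * M[i][p[i]]
        total += prod
    return total

def schur(kappa, z):
    # bialternant formula: S_kappa = det[z_k^{N-j+kappa_j}] / det[z_k^{N-j}]
    N = len(z)
    kap = list(kappa) + [0] * (N - len(kappa))
    num = [[z[k] ** (N - 1 - j + kap[j]) for k in range(N)] for j in range(N)]
    den = [[z[k] ** (N - 1 - j) for k in range(N)] for j in range(N)]
    return det(num) / det(den)
```

For example, with three generic complex numbers $z_1,z_2,z_3$ one verifies $\sum_j z_j^4 = S_{(4)} - S_{(3,1)} + S_{(2,1,1)}$, in keeping with (\ref{A.3}).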
On this, define the complex form of $p_t(\mathbf x;\mathbf 0)$ as implied by (\ref{5.1b}), and denote this by $f_U$. Let $s_k = N - k + \kappa_k$ $(k=1,\dots, N)$. Recalling the definition of the spherical transform (\ref{2.6}), we see that \begin{equation}\label{A.5} \mathcal S f_U = {\prod_{j=0}^{N-1} j! \over \Delta_N(- \mathbf s) |_{s_k = N - k + \kappa_k}} \langle S_\kappa(z_1,\dots,z_N) \rangle. \end{equation} Hence knowledge of $\mathcal S f_U$, given for general cyclic P\'olya ensembles by (\ref{sf_cpe}), suffices for the calculation of the Schur polynomial average. \begin{proposition} In relation to the complex form of $p_t(\mathbf x;\mathbf 0)$ as implied by (\ref{5.1b}), we have \begin{equation}\label{A.6} \langle S_\kappa(z_1,\dots,z_N) \rangle = \prod_{1 \le j < k\le N} { k - j + \kappa_j - \kappa_k \over k - j } \prod_{j=1}^N q^{\kappa_j^2 + (N -2j + 1) \kappa_j}. \end{equation} In particular \begin{equation}\label{A.7} \langle S_{(k-r,1^r)} (z_1,\dots, z_N) \rangle = q^{k^2 + k (N - 1) -2kr} {(N-1+k)! \over (N-1)! k!} {(-1)^r \over r!} {(-k+1)_r (-N+1)_r \over (-(k-1+N))_r}, \end{equation} where $(u)_r := u(u+1) \cdots (u+r-1)$ denotes the increasing Pochhammer symbol. \end{proposition} \begin{proof} The result (\ref{A.6}) follows immediately from (\ref{A.5}), upon noting $$ \Delta_N(- \mathbf s) |_{s_k = N - k + \kappa_k} = \prod_{1 \le j < k\le N} (k - j + \kappa_j - \kappa_k) $$ and noting from (\ref{sf_cpe}) and (\ref{sf_cpe+}) that $$ \prod_{j=1}^N {\mathcal S w(N - j + \kappa_j) \over \mathcal S w(N-j)} = \prod_{j=1}^N q^{\kappa_j^2 + (N -2j + 1) \kappa_j}.
$$ With $\kappa = (k-r,1^r)$, a direct calculation shows $$ \prod_{j=1}^N q^{\kappa_j^2 + (N -2j + 1) \kappa_j} = q^{k^2 + k (N - 1) -2kr}, $$ and furthermore $$ \prod_{1 \le j < k\le N} { k - j + \kappa_j - \kappa_k \over k - j } = A_1 A_2 A_3, $$ where \begin{align*} A_1 & := \prod_{s=2}^{r+1} {s - 2 + k - r \over s - 1} = {(-1)^r \over r!} (-k+1)_r, \\ A_2 & := \prod_{p=2}^{r+1} \prod_{s=r+2}^N {s-p+1 \over s - p} = {(-1)^r \over r!} (-N+1)_r, \\ A_3 & := \prod_{s=r+2}^N {s - 1 + k - r \over s - 1} = (-1)^r {r! \over (N-1)!} {(N-1+k)! \over k!} {1 \over (-(k-1+N))_r}. \end{align*} The result (\ref{A.7}) now follows. \end{proof} \begin{coro} With $q = e^{-t/2N}$ we have \begin{align}\label{A.8} m_k^{(N)}(t) & = q^{k^2 + k (N - 1) } {(N-1+k)! \over k! N! } \, {}_2 F_1(1-N,1-k ;-(k-1+N); q^{-2k}) \nonumber \\ & = q^{k^2 + k (N - 1) } \, {}_2 F_1(1-N,1-k ; 2 ; 1 - q^{-2k}). \end{align} \end{coro} \begin{proof} The series expansion of the Gauss hypergeometric function is \begin{equation}\label{A.9} \, {}_2 F_1(a,b;c;z) = \sum_{n=0}^\infty {(a)_n (b)_n \over n! (c)_n} z^n. \end{equation} With this noted, the first equality follows by substituting (\ref{A.7}) in (\ref{A.4}) then substituting the result in (\ref{A.1}). To obtain the second equality, we make use of the polynomial identity \begin{equation}\label{A.10} \, {}_2 F_1(-a,-b;c;z) = {(c+a+b-1) \cdots (c + b) \over (a+c-1) \cdots c} \, {}_2 F_1(-a,-b;-a-b+1-c;1-z), \end{equation} valid for $a \in \mathbb Z_{\ge 0}$. This can be checked by verifying that both sides satisfy the Gauss hypergeometric differential equation, and agree at $z=0$. In fact (\ref{A.10}) is a special case of the connection formula expressing a solution analytic about $z=1$ as a linear combination of the two linearly independent solutions about $z=0$; see e.g.~\cite[Th.~2.3.2]{AAR99}.
\end{proof} \begin{remark} $ $\\ 1.~Applying one of the standard Pfaff transformations to the second of the hypergeometric formulas in (\ref{A.8}) gives a third form \begin{equation}\label{A.8a} m_k^{(N)}(t) = q^{k^2 - k (N - 1) } \, {}_2 F_1(1-N,1+k ; 2 ; 1 - q^{2k}). \end{equation} All three forms can be equivalently expressed in terms of Jacobi polynomials $P_{N-1}^{(a,b)}(z)$ for certain parameters $(a,b)$ and argument $z$. For future reference, we make explicit note of the Jacobi polynomial form equivalent to (\ref{A.8a}), \begin{align}\label{A.8b} m_k^{(N)}(t) = {1 \over N} q^{k^2 - k (N - 1) }P_{N-1}^{(1,k-N)}(2q^{2k}-1) = {(-1)^{N-1} \over N} q^{k^2 - k (N - 1) } P_{N-1}^{(k-N,1)}(1-2q^{2k}) . \end{align} 2.~As remarked in Section \ref{S1.3}, the distribution of the ensemble of matrices obtained from Dyson Brownian motion on $U(N)$ at a given time is unchanged by conjugation with a fixed $U \in U(N)$, telling us that this ensemble exhibits unitary symmetry. The classical Gaussian, Laguerre, Jacobi and Cauchy unitary ensembles all share the property seen in (\ref{A.8}) that their moments can be expressed in terms of hypergeometric polynomials \cite{WF14,CMOS19, ABGS21,FR21}. However, these classical ensembles have a property not shared by (\ref{A.8}), whereby the sequence of moments satisfies a three-term recurrence. Hypergeometric polynomials and their basic $q$ extensions have also been shown to provide the closed form expression for the moments of certain discrete and $q$ generalisations of ensembles of random matrices with unitary symmetry \cite{CCO20,Fo22,Co21,FLSY21}.
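The mutual agreement of the hypergeometric forms (\ref{A.8}), (\ref{A.8a}) and the Jacobi polynomial forms (\ref{A.8b}) can be confirmed numerically. A sketch (Python, illustrative only; the terminating ${}_2F_1$ series and the Jacobi polynomial are implemented directly, with math.comb returning zero when its lower index exceeds its upper one, which accommodates the negative integer parameter $k-N$):

```python
from math import comb, factorial

def hyp2f1_poly(a, b, c, z, nmax):
    # terminating 2F1 series, summed through the z^nmax term
    total, term = 0.0, 1.0
    for n in range(nmax + 1):
        total += term
        term *= (a + n) * (b + n) / (c + n) * z / (n + 1)
    return total

def jacobi(n, a, b, x):
    # P_n^{(a,b)}(x) as a sum over binomials; comb(m, r) = 0 for r > m, which
    # handles the negative integer Jacobi parameter k - N appearing in (A.8b)
    return sum(comb(n + a, n - s) * comb(n + b, s)
               * ((x - 1) / 2) ** s * ((x + 1) / 2) ** (n - s) for s in range(n + 1))

def moment_forms(N, k, q):
    # the two lines of (A.8), the Pfaff form (A.8a), and the two Jacobi forms (A.8b)
    pre, pre2 = q ** (k * k + k * (N - 1)), q ** (k * k - k * (N - 1))
    m1 = pre * factorial(N - 1 + k) / (factorial(k) * factorial(N)) \
             * hyp2f1_poly(1 - N, 1 - k, -(k - 1 + N), q ** (-2 * k), min(N, k) - 1)
    m2 = pre * hyp2f1_poly(1 - N, 1 - k, 2, 1 - q ** (-2 * k), min(N, k) - 1)
    m3 = pre2 * hyp2f1_poly(1 - N, 1 + k, 2, 1 - q ** (2 * k), N - 1)
    m4 = pre2 * jacobi(N - 1, 1, k - N, 2 * q ** (2 * k) - 1) / N
    m5 = pre2 * jacobi(N - 1, k - N, 1, 1 - 2 * q ** (2 * k)) * (-1) ** (N - 1) / N
    return m1, m2, m3, m4, m5
```

All five returned values coincide, for instance for $(N,k) = (4,3)$ with $q = 0.9$.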
\end{remark} \subsection{Moment evaluation from the cyclic P\'olya density formula} According to (\ref{2.16}) in the case $m=1$ and (\ref{kernel}), the density $\rho_{(1),N}$ for the PDF (\ref{3.0c5}) has the explicit functional form \begin{equation}\label{kernelA} \rho_{(1),N}(x;t) = \frac{1}{2\pi} \bigg( N +\sum_{k=0}^{N-1}\sum_{l\in\mathbb Z\backslash\{0,\ldots,N-1\}} \frac{\Gamma(N-l)q^{(l-(N-1)/2)^2-(k-(N-1)/2)^2}}{\Gamma(-l)\Gamma(N-k)\Gamma(k+1)}\frac{(-1)^{k}e^{ ix(k-l)}}{k-l}\bigg). \end{equation} This provides an alternative starting point to deduce the first equality in (\ref{A.8}) for $m_k^{(N)}(t)$. \begin{proposition} For $k \ge 1$ we have \begin{equation}\label{m1} m_k^{(N)}(t) = {1 \over k N} q^{k^2 + (N - 1)k} \sum_{p=0}^{N-1} (-1)^{p} {\Gamma(N + k - p) \over \Gamma(k-p) \Gamma(N-p) \Gamma(p+1)} q^{-2kp}. \end{equation} This sum can be identified with the first hypergeometric form in (\ref{A.8}). \end{proposition} \begin{proof} We replace the summation label $k$ in (\ref{kernelA}) by the symbol $p$. Then we write $l=p-k$, where $k$ is an independent positive integer. Since $m_k^{(N)}(t)$ is the coefficient of $e^{ i x k }$ in $\rho_{(1),N}(x;t)/N$, the formula (\ref{m1}) results. Using the formula $$ {\Gamma(u) \over \Gamma(u-p)} = (-1)^p (1-u)_p $$ to rewrite the gamma functions in the summand gives the series form of the hypergeometric polynomial in the first line of (\ref{A.8}). \end{proof} \begin{remark} The expression on the RHS of (\ref{m1}) was first obtained in a field theory context in \cite[Eq.~(17)]{BGV99}. It was related to the result (\ref{4.0c}) of \cite{On81,AO84} in \cite{GT14}. \end{remark} \subsection{Asymptotics}\label{S4.3} According to the form (\ref{A.8b}) we see that knowledge of the large $k, N$ asymptotic form of $m_k^{(N)}(t)$ is reliant on knowledge of the large $k,N$ asymptotic form of the Jacobi polynomial $P_{N-1}^{(k-N,1)}( 1 - 2 \lambda^2)$, with $0<\lambda <1$. 
This was investigated in the literature some time ago \cite{CI91}, but the formulas obtained therein contain some inaccuracies \cite{FFN10}. Fortunately an accurate and user-friendly statement is available in the recent work \cite{SZ22}. The statement of the result requires some notation. Introduce the complex number $z_+ = z_+(\mu,\lambda)$ by \begin{equation}\label{S.20} z_+ = u(\mu,\lambda) + i \sqrt{1 - u(\mu,\lambda)^2}, \qquad u(\mu,\lambda) = {(\mu - 1) (1 + \lambda^2) + 2 \lambda^2 \over 2 \lambda \mu}, \end{equation} where it is assumed the parameters $\mu, \lambda$ are such that $|u(\mu,\lambda) | \le 1$ so that $z_+$ has unit modulus and is in the upper half plane. Now define $\phi_+ = \phi_+(\mu,\lambda) \in [0,\pi]$ as the argument of $z_+$ so that $e^{i \phi_+} = z_+$. In terms of $\phi_+$ define \begin{equation}\label{h0} h(\mu,\lambda) = {\rm arg} \Big ( {z^{\mu } (1 - \lambda z ) \over z - \lambda} \Big ) \Big |_{z = e^{i \phi_+} }= \mu \phi_+ + {\rm arg} \Big ( {1 - \lambda e^{i \phi_+} \over e^{i \phi_+} - \lambda} \Big ), \end{equation} where here we are free to take the argument function as multivalued. \begin{proposition}\label{P5.5} (\cite[Th.~1]{SZ22}) Set $\mu = k/N$, and require that for $k,N \to \infty$ we have $0<\mu < 1$. For \begin{equation}\label{S.18a} 1 > \lambda > { 1-\mu \over 1+\mu}, \end{equation} the large $N$ expansion \begin{multline}\label{S.19} \lambda^{k-N} P_{N-1}^{(k-N,1)}( 1 - 2 \lambda^2) = {\sqrt{2 \over N \pi}} {((1 - \lambda^2) \mu)^{-1/2} \over \big((1 - \lambda^2)((\mu+1)^2\lambda^2 - (\mu - 1)^2)\big)^{1/4}} \\ \times \cos (N h(\mu,\lambda) + \pi/4) + {\rm O} \Big ({1 \over N^{3/2}} \Big ) \end{multline} holds true, and moreover holds uniformly in $\lambda$ for $\lambda$ in a compact interval of $((1 - \mu)/(1+\mu),1)$. (On this latter point see \cite[\S 4.1]{Co05}.) In contrast, for $0 \le \lambda < (1-\mu)/(1+\mu)$, the LHS of (\ref{S.19}) decays exponentially fast in $N$. Suppose instead that $\mu > 1$.
For \begin{equation}\label{S.23} 0 < \lambda < {\mu - 1 \over \mu + 1} \end{equation} the asymptotic formula (\ref{S.19}) again remains valid, while for \begin{equation}\label{S.24} {\mu - 1 \over \mu + 1} < \lambda < 1 \end{equation} the LHS of (\ref{S.19}) decays exponentially fast in $N$. \end{proposition} We are now in a position to establish the sought asymptotic forms of $m_k^{(N)}(t)$. \subsection*{Proof of Corollary \ref{C1a}} With $\lambda = e^{- \mu t/2}$, the inequalities (\ref{S.18a}) and (\ref{S.23}) can be recast in terms of $t$ and $\mu$ to become equivalent to the requirement that $0 < t < t^*$ where $t^*$ is given by (\ref{c.1}). In these cases we substitute (\ref{S.19}) for the Jacobi polynomial in the second expression of (\ref{A.8b}), to obtain (\ref{c.2}). Outside of this interval, Proposition \ref{P5.5} tells us that the Jacobi polynomial, and thus the moments, decay exponentially fast. \hfill $\square$ \subsection*{Proof of Corollary \ref{C1b}} Inspection of the working of \cite{SZ22} for the derivation of (\ref{S.19}) shows that the leading term, appropriately expanded, remains valid in an extended regime with $k,N \to \infty$ and $k \ll N$, and is thus uniform for $\mu \to 0^+$ provided $N \mu \to \infty$. The key points are the validity of the inequality (\ref{S.18a}), and the fact that $u(\mu,\lambda)$ in (\ref{S.20}) remains well defined. On the former point, substitute $\lambda = e^{- t \mu/2}$ and expand for small $\mu$. We see that (\ref{S.18a}) is valid provided $t < 4$. On the latter point, doing the same in (\ref{S.20}) shows \begin{equation}\label{uu} u(\mu,\lambda) \to 1 - t/2, \end{equation} and thus $z_+ = (1 - t/2) + i \sqrt{t(1 - t/4)}$.
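The limit (\ref{uu}), together with the small $\mu$ expansion of $h(\mu,\lambda)|_{\lambda = e^{-\mu t/2}}$ about $\pi + \mu L_0(t)$, can be checked numerically. A sketch (Python, illustrative only; the branch of the argument in (\ref{h0}) is fixed by adding $2\pi$ when cmath.phase returns a value near $-\pi$, consistent with taking the argument function as multivalued):

```python
from math import exp, sqrt, acos, pi
import cmath

def u_and_h(mu, t):
    # u and z_+ from (S.20), and h from (h0), evaluated with lambda = e^{-mu t/2}
    lam = exp(-mu * t / 2)
    u = ((mu - 1) * (1 + lam ** 2) + 2 * lam ** 2) / (2 * lam * mu)
    phi = acos(u)                      # z_+ = e^{i phi_+} lies on the unit circle
    z = cmath.exp(1j * phi)
    arg = cmath.phase((1 - lam * z) / (z - lam))
    if arg < 0:                        # select the branch with arg -> pi as mu -> 0+
        arg += 2 * pi
    return u, mu * phi + arg

t, mu = 3.0, 1e-5
u, h = u_and_h(mu, t)
L0 = 0.5 * sqrt(t * (4 - t)) + acos(1 - t / 2)   # L_0(t) from (4.0e)
```

Here $u$ is close to $1 - t/2 = -1/2$, and $(h - \pi)/\mu$ agrees with $L_0(3) \approx 2.9604$ up to a correction of order $\mu$.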
The significance of $z_+$ in the working of \cite{SZ22} is that it is the saddle point in the method of stationary phase. Again setting $e^{i \phi_+} = z_+$, we thus have $\cos \phi_+ = 1 - t/2$, $\sin \phi_+ = \sqrt{t(1 - t/4)}$, which in turn allows us to compute from (\ref{h0}) the small $\mu$ expansion \begin{equation}\label{h0+} h(\mu,\lambda) |_{\lambda = e^{-\mu t/2}} \sim \pi +\mu L_0(t) + {\rm O}(\mu^2), \end{equation} where $L_0(t)$ is specified by (\ref{4.0e}). Also, for small $\mu$, \begin{equation}\label{h0+1} {((1 - \lambda^2) \mu)^{-1/2} \over \big((1 - \lambda^2)((\mu+1)^2\lambda^2 - (\mu - 1)^2)\big)^{1/4}} \Big |_{\lambda = e^{-\mu t/2}} \sim {1 \over \mu^{3/2}} \sqrt{1 \over t^{3/2} (4 - t)^{1/2}}. \end{equation} Recalling (\ref{A.8b}) allows us to conclude from (\ref{S.19}) with $\lambda = e^{-\mu t/2}$, and expanded for small $\mu$, the leading $k,N \to \infty$ and $k \ll N$ expansion of $ m_k^{(N)}(t)$ as specified by (\ref{h0+2}). \hfill $\square$ \begin{remark}\label{R4.8} The asymptotics of the Jacobi polynomial in Proposition \ref{P5.5} are also known in the cases $ \lambda = | (\mu - 1)/ (\mu + 1)|$, with the leading order $N$ dependence then proportional to $N^{-1/3}$ \cite{SZ22}. Hence for $t=t^*$ the moments $ m_k^{(N)}(t)$ decay as $N^{-4/3}$. Note that the exponent $-4/3$ is the same as that for the asymptotic decay of $m_k^{(\infty)}(t)$ at $t=4$, as noted above (\ref{4.0e}) from \cite{Le08}, as we would expect. Furthermore, repeating the reasoning of the first sentence in the paragraph below (\ref{Ma8}) gives $k \asymp N^{6/11}$. \end{remark} \section{The spectral form factor}\label{S5} \subsection{Evaluation of $S_N(k;t)$} According to (\ref{S.3}) and (\ref{2.16a}) we have \begin{align}\label{S.6} S_N(k;t) = N - \int_{-\pi}^{\pi} \mathrm dx \int_{-\pi}^{\pi} \mathrm dy \, e^{ i k ( x - y)} K_N(x,y) K_N(y,x). \end{align} The exact expression (\ref{kernel}) allows this double integral to be evaluated in terms of a sum.
\begin{proposition} For $j,l$ integers, define \begin{equation}\label{S.7} a_j = {(-1)^j q^{-(j-(N-1)/2)^2} \over \Gamma(N-j) \Gamma(j+1)}, \quad b_l = {\Gamma(N - l) q^{(l-(N-1)/2)^2} \over \Gamma(-l)}. \end{equation} For these quantities to be nonzero we require $0 \le j \le N-1$ and $ l \notin \{0,\dots,N-1\}$ respectively. For $k$ a non-negative integer, we have \begin{equation}\label{S.7a} S_N(k;t) = \min (k,N) + \sum_{j=0}^{\min (k-1,N)} \sum_{l=\max (k,N)}^{N+k-1} {a_j b_{j-k} a_{l-k} b_l \over (j-l)^2}. \end{equation} \end{proposition} \begin{proof} In terms of the notation (\ref{S.7}), the correlation kernel (\ref{kernel}) can be written as \begin{equation}\label{S.8} 2 \pi K_N(x,y)= \sum_{l=0}^{N-1} e^{ il(x-y)}+ \sum_{j=0}^{N-1}\sum_{l\in\mathbb Z\backslash\{0,\ldots,N-1\}} {a_j b_l \over j - l} e^{ i(xj-yl)}. \end{equation} For $s$ a non-negative integer and $f(x,y)$ a $2 \pi$-periodic function in both $x$ and $y$, introduce the notation $[e^{ i s(x-y)}]\, f(x,y)$ to denote the coefficient of $e^{ i s(x-y)}$ in the corresponding double Fourier series. It follows from (\ref{S.8}) that \begin{equation}\label{S.9} (2 \pi)^2 [e^{ i s(x-y)}]\, K_N(x,y) K_N(y,x) = \max (N-s,0) - \sum_{j=0}^{\min (s-1,N)} \sum_{l=\max (s,N)}^{N+s-1} {a_j b_{j-s} a_{l-s} b_l \over (j-l)^2}. \end{equation} Using this result in (\ref{S.6}), and noting too from this that $S_N(k;t) = S_N(-k;t)$, (\ref{S.7a}) results. \end{proof} For $k \le N$ in (\ref{S.7a}) we have the rewrite \begin{equation}\label{S.10} S_N(k;t) = k + \sum_{j=0}^{k-1} \sum_{l'=0}^{k-1} {a_j b_{j-k} a_{N-1-l'} b_{N+k-1-l'} \over (j+l'-N-k+1)^2}. \end{equation} Substituting according to (\ref{S.7}) and simplifying then gives \begin{multline}\label{S.11} S_N(k;t) = k - q^{2k^2 + 2k (N - 1)} \sum_{j=0}^{k-1} \sum_{l'=0}^{k-1} q^{-2k(j +l')} (-1)^{j+l'} \\ \times {\Gamma(N+k-j) \Gamma(N+k-l') \over \Gamma(N-j) \Gamma(j+1) \Gamma(l'+1) \Gamma(N-l') \Gamma(k-j) \Gamma(k-l')} {1 \over (j+l'-N-k+1)^2}.
\end{multline} In the case $k > N$, this expression again holds with the modifications that the first $k$ on the RHS is to be replaced by $N$, as is each $k$ in the upper terminals of the sums. The fact that at $t=0$ the initial matrix is the identity, and so all eigenvalues are unity, implies from the definition (\ref{S.4}) that \begin{equation}\label{S.11a} S_N(k;t) \Big |_{t=0} = 0; \end{equation} since $q=1$ for $t=0$, note that according to (\ref{S.11}) this implies an identity for the double sum. For $k=1$ the double sum in (\ref{S.11}) consists of a single term, which after simplification and substituting for $q$ according to (\ref{3.0c3}) reads \begin{equation}\label{S.12} S_N(k;t) \Big |_{k=1} = 1 - e^{-t}. \end{equation} The next simplest case is $k=2$, when (\ref{S.11}) consists of four terms. However the summand is symmetric in $j,l'$ so this can immediately be reduced to three terms, \begin{equation}\label{S.13} S_N(k;t) \Big |_{k=2} =2 - e^{-2t} \Big ( N^2 e^{-2t/N} - 2 (N^2 - 1) + N^2 e^{2t/N} \Big ). \end{equation} Expanding for large $N$ gives \begin{equation}\label{S.14} S_N(k;t) \Big |_{k=2} =2 - 2e^{-2t} \Big ( (1 + 2 t^2) + {\rm O} \Big ( {1 \over N^2} \Big ) \Big ). \end{equation} \begin{remark} For Dyson Brownian motion from the identity in the case of $SU(N)$ as distinct from $U(N)$ (in $SU(N)$ the determinant is constrained to be unity) a formula similar in appearance to (\ref{S.11}), derived using a Schur polynomial/ heat kernel expansion of the corresponding eigenvalue PDF, can be found in \cite[Th.10.2]{LM10}. \end{remark} \subsection{Large $N$ limit of $S_N(k;t)$ for fixed $k$} For general $k$ and fixed $q$, the summand in (\ref{S.11}) appears to be a polynomial in $N$ of degree $2(k-1)$. Yet we expect the large $N$ limit of $S_N(k;t)$ to be well defined. A similar feature is already present in the first form of $m_k^{(N)}(t)$ from (\ref{A.8}), or alternatively in (\ref{m1}), in which the summand is a polynomial in $N$ of degree $k$. 
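The closed forms (\ref{S.12}) and (\ref{S.13}) provide convenient numerical checks on the double sum (\ref{S.11}). A sketch (Python, illustrative only; math.factorial supplies $\Gamma(n) = (n-1)!$, and the implementation follows (\ref{S.11}) verbatim for $k \le N$):

```python
from math import exp, factorial

def S_N(N, k, t):
    # the double sum formula (S.11), valid for k <= N, with q = e^{-t/2N}
    q = exp(-t / (2 * N))
    total = 0.0
    for j in range(k):
        for lp in range(k):
            total += (q ** (-2 * k * (j + lp)) * (-1) ** (j + lp)
                      * factorial(N + k - j - 1) * factorial(N + k - lp - 1)
                      / (factorial(N - j - 1) * factorial(j) * factorial(lp)
                         * factorial(N - lp - 1)
                         * factorial(k - j - 1) * factorial(k - lp - 1))
                      / (j + lp - N - k + 1) ** 2)
    return k - q ** (2 * k * k + 2 * k * (N - 1)) * total
```

For $k=1$ this returns $1 - e^{-t}$ in agreement with (\ref{S.12}), and for $k=2$ it reproduces (\ref{S.13}).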
It is only after application of the identity (\ref{A.10}) that the second form in (\ref{A.8}) is obtained, which allows for the computation of the large $N$ limit (\ref{4.0b}). In fact (\ref{S.11}) can be written in terms of an integral of two ${}_2 F_1$ polynomials of the same type as appearing in the first line of (\ref{A.8}). After applying the transformation (\ref{A.10}) an explicit expression for the $N \to \infty$ limit can be obtained. \begin{proposition}\label{P5.2} The double sum formula (\ref{S.11}) for $S_N(k;t)$ permits the integral forms \begin{align}\label{S.15} S_N(k;t) & = \min (k,N) - q^{2k^2 + 2k (N - 1)} \bigg ( {(N-1+k)! \over (k-1)! (N-1)!} \bigg )^2 \nonumber \\ & \qquad \times \int_0^\infty s e^{-s(N+k-1)} \Big ( {}_2 F_1(1 - N, 1 - k; -(k-1+N); q^{-2k} e^s) \Big )^2 \,\mathrm ds \nonumber \\ & = \min (k,N) - q^{2k^2 + 2k (N - 1)} (kN)^2 \int_0^\infty s e^{-s(N+k-1)} \Big ( {}_2 F_1(1 - N, 1 - k; 2; 1 - q^{-2k} e^s) \Big )^2 \, \mathrm ds. \end{align} Consequently, for $k$ fixed with respect to $N$, \begin{align}\label{S.16} S_\infty(k;t) : = \lim_{N \to \infty} S_N(k;t) & = k - e^{-tk} k^2 \int_0^\infty s e^{-s} \Big ( {}_1 F_1(1 - k; 2; kt + s) \Big )^2 \, \mathrm ds \nonumber \\ & = k - e^{-tk} \int_0^\infty s e^{-s} \Big ( L_{k-1}^{(1)}(kt+s) \Big )^2 \, \mathrm ds. \end{align} \end{proposition} \begin{proof} With $u = N+k-1-j-l'$ we rewrite the factor of $1/u^2$ in (\ref{S.11}) according to the simple integral $$ {1 \over u^2} = \int_0^\infty s e^{-us} \, \mathrm ds. $$ Taking the integral outside of the double summation, the latter factorises into the product of two single summations, both of which have an identical structure to that in (\ref{m1}), which we know can be identified with the first line of (\ref{A.8}). This gives the first expression in (\ref{S.15}). The second follows by applying the transformation (\ref{A.10}), as is analogous with the second line of (\ref{A.8}). 
The limit (\ref{S.16}) now follows by changing variables $s \mapsto s/N$ in this latter expression, and proceeding as in deducing (\ref{4.0b}) from (\ref{4.0c}). An additional technical issue is the need to interchange the order of the limit with the integral. Here one first notes that the hypergeometric polynomial in the final line of (\ref{S.15}) is, for fixed $k$ and $N$ large enough, of degree $2(k-1)$ in $(1 - q^{-2k} e^s)$. It follows that the task of justifying the interchange of the order can be reduced to justifying this for \begin{multline*} \lim_{N \to \infty} \int_0^\infty s e^{- s (1 + (k-1)/N)} (-N)^p (1 - q^{-2k} e^{s/N})^p \, ds \\ = \lim_{N \to \infty} \int_0^\infty s e^{- s (1 + (k-1-p)/N)} e^{pkt/N} N^p (1 - e^{-(s+kt)/N})^p \, ds, \end{multline*} with $p \le 2 (k-1)$, $p \in \mathbb Z^+$; here the equality follows from the factorisation $1 - q^{-2k} e^{s/N} = - e^{(s+kt)/N} (1 - e^{-(s+kt)/N})$, valid since $q^{-2k} = e^{kt/N}$. For $N$ large enough, $e^{pkt/N} e^{-s(k-1-p)/N} \le 2 e^{s/2}$. Also, the elementary inequality $1 - e^{-x} \le x$ shows that $N^p (1 - e^{-(s+kt)/N})^p$ is bounded by $(s+kt)^p$. Hence for $N$ large enough the integrand is bounded pointwise by $2 s e^{-s/2} (s+kt)^p$, which is integrable on $s \in \mathbb R^+$. The justification of the interchange of the order now follows by dominated convergence. \end{proof} \begin{remark} The Laguerre polynomials permit the addition formula \begin{equation}\label{S.17} L_p^{(\alpha + \beta + 1)}(x+y) = \sum_{s=0}^p L_s^{(\alpha)}(x) L_{p-s}^{(\beta)}(y). \end{equation} Applying this relation to each $ L_{k-1}^{(1)}(kt+s)$ in (\ref{S.16}) with $p=k-1, x = s, y = kt, \alpha = 1, \beta = -1$, and using the orthogonality $$ \int_0^\infty s e^{-s} L_p^{(1)}(s) L_q^{(1)}(s) \, \mathrm ds = (p+1) \delta_{p,q} $$ shows that the integral in (\ref{S.16}) permits an evaluation as the finite sum (\ref{S.18x}). \end{remark} \subsection{Large $N$ limit of $S_N(k;t)$ for $k$ proportional to $N$ --- proof of Theorem \ref{P5.6}} We first consider the case $k \le N$.
Changing variables $s \mapsto sk/N$ in (\ref{S.15}) and using the equality between (\ref{A.8a}) and (\ref{A.8b}) shows \begin{equation}\label{S.18} S_N(k;t) = k - {k^4 \over N^2} \int_0^\infty s e^{-(s+t) k ( k - N + 1)/N} \Big ( P_{N-1}^{(k-N,1)}(1 - 2 e^{- (s+t)k/N}) \Big )^2 \, \mathrm ds. \end{equation} We see that the asymptotic formulas for $P_{N-1}^{(k-N,1)}( 1 - 2 \lambda^2)$, with $0<\lambda <1$, given in Proposition \ref{P5.5} are again relevant. It is application of these formulas which allows the sought scaling limit of (\ref{S.18}), as specified in Theorem \ref{P5.6}, to be computed. Suppose first that $0 < t < t^*$. In relation to Proposition \ref{P5.5}, set $\lambda = e^{- \mu (s+t)/2}$. Taking into consideration the range of $\lambda$ for which (\ref{S.19}) holds uniformly, we see that for $0 < s < t^* - t - \epsilon$, $0 < \epsilon \ll 1$, the factor of the integrand \begin{equation}\label{S.22} ( \lambda^{k-N} P_{N-1}^{(k-N,1)}( 1 - 2 \lambda^2))^2 \end{equation} can be replaced by the square of the leading term on the RHS of (\ref{S.19}). Making use too of (\ref{c.1}) shows \begin{multline}\label{S.22R} {k^4 \over N^2} \int_0^{ t^* - t - \epsilon} s e^{-(s+t) k ( k - N + 1)/N} \Big ( P_{N-1}^{(k-N,1)}(1 - 2 e^{- (s+t)k/N}) \Big )^2 \, \mathrm ds \\ \mathop{\sim}\limits_{N \to \infty} {2 \over \pi} {\mu^3 \over (1 + \mu) } \int_0^{ t^* - t - \epsilon} s f(\mu;\lambda) \cos^2(N \tilde{h}(t+s,\mu) + \pi/4) \Big |_{\lambda = e^{- \mu (s+t)/2}} \, \mathrm ds, \end{multline} where $$ f(\mu;\lambda) = {\lambda^2 \over (1 - \lambda^2)^{3/2}} {1 \over (\lambda^2 - e^{- \mu t^*} )^{1/2}}. $$ Next we make use of the identity $\cos^2(N \tilde{h}(t+s,\mu) + \pi/4) = {1 \over 2} ( 1 + \cos 2 (N \tilde{h}(t+s,\mu) + \pi/4 ))$. It shows that to leading order the square of the cosine can be replaced by ${1 \over 2}$. 
Thus the term involving $ \cos 2 (N \tilde{h}(t+s,\mu) + \pi/4 )$, being multiplied by an absolutely integrable function of $s$ --- specifically $s f(\mu;\lambda) $ --- must decay in $N$ in accordance with the Riemann-Lebesgue lemma for Fourier integrals and its generalisations \cite{Er55}, and hence vanish in the limit. We therefore conclude that the $N \to \infty$ limit of (\ref{S.22R}) is equal to \begin{equation}\label{S.22Rs} {1 \over \pi} {\mu^3 \over (1 + \mu ) } \int_0^{ t^* - t - \epsilon} s f(\mu;\lambda) \Big |_{\lambda = e^{- \mu (s+t)/2}} \, \mathrm ds. \end{equation} Taking $\epsilon \to 0$ gives the term involving the integral in (\ref{S.20a}). The range $s > t^* - t + \epsilon$ does not contribute due to the corresponding exponential decay in $N$ of (\ref{S.22}), as known from the second statement of Proposition \ref{P5.5}. Thus \begin{multline}\label{S.22R+} {k^4 \over N^2} \int_{ t^* - t + \epsilon}^\infty s e^{-(s+t) k ( k - N + 1)/N} \Big ( P_{N-1}^{(k-N,1)}(1 - 2 e^{- (s+t)k/N}) \Big )^2 \, \mathrm ds \\ \le \mu^4 N^2 e^{-N u(\mu,s)} \int_{ t^* - t + \epsilon}^\infty s e^{- \mu (s+t)} \, \mathrm ds \end{multline} for some $ u(\mu,s) > 0$. Since the RHS tends to zero as $N \to \infty$, so must the LHS. The remaining task then is to check that there is no separate contribution from the singularity of (\ref{S.19}) at $s=t^* - t$. Setting $t=0$ to begin, one approach is to verify that the RHS of (\ref{S.20a}) is already identically zero, as required by the sum rule (\ref{S.11a}). This follows from the integration formula \begin{equation}\label{S.22a} \int_0^T {s e^{-s} \over (1 - e^{-s})^{3/2}} {1 \over \sqrt{e^{-s} - e^{- T}}} \, \mathrm ds = \pi \Big ( 1 + \tanh (T/4) \Big ), \end{equation} as can be validated by computer algebra, used in conjunction with (\ref{c.1}).
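The integration formula (\ref{S.22a}) can also be spot-checked by direct numerical quadrature. A Python sketch (our own function names; the substitutions $s = u^2$ and $s = T - u^2$ remove the inverse-square-root endpoint singularities, with the $u=0$ limiting values inserted analytically):

```python
import math

def lhs_integral(T, steps=20000):
    # int_0^T s e^{-s} (1-e^{-s})^{-3/2} (e^{-s}-e^{-T})^{-1/2} ds, computed
    # by splitting at T/2 and regularizing each endpoint singularity.
    def f(s):
        return (s * math.exp(-s) / (1.0 - math.exp(-s)) ** 1.5
                / math.sqrt(math.exp(-s) - math.exp(-T)))

    def simpson(g, a, b, n):
        h = (b - a) / n
        acc = g(a) + g(b)
        for i in range(1, n):
            acc += (4 if i % 2 else 2) * g(a + i * h)
        return acc * h / 3.0

    def g_left(u):   # s = u^2; the integrand behaves like s^{-1/2} near s = 0
        if u == 0.0:
            return 2.0 / math.sqrt(1.0 - math.exp(-T))
        return 2.0 * u * f(u * u)

    def g_right(u):  # s = T - u^2; the integrand behaves like (T-s)^{-1/2} near s = T
        if u == 0.0:
            return 2.0 * T * math.exp(-T / 2.0) / (1.0 - math.exp(-T)) ** 1.5
        return 2.0 * u * f(T - u * u)

    half = math.sqrt(T / 2.0)
    return simpson(g_left, 0.0, half, steps) + simpson(g_right, 0.0, half, steps)

def rhs_closed_form(T):
    return math.pi * (1.0 + math.tanh(T / 4.0))
```

The two sides agree to quadrature accuracy, for instance at $T=2$ and $T=6$.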
As a consequence we conclude that with $t=0$ in the integral of (\ref{S.18}), there is no contribution to its $N \to \infty$ form for $s > t^* - \epsilon$ in the limit $\epsilon \to 0^+$, and thus \begin{equation}\label{R1} \lim_{\epsilon \to 0^+} \lim_{N \to \infty} \int_{t^* - \epsilon}^\infty s e^{-s \mu ( k - N + 1)} \Big ( P_{N-1}^{(k-N,1)}(1 - 2 e^{- s \mu }) \Big )^2 \, \mathrm ds = 0. \end{equation} We claim now that knowledge of (\ref{R1}) is sufficient to establish that for general $0 < t < t^*$ fixed in (\ref{S.18}), there is no contribution to its $N \to \infty$ form for $s > t^* - t - \epsilon$ in the limit $\epsilon \to 0^+$. Note that this is a stronger statement than establishing that there is no separate contribution from the neighbourhood of $s=t^* - t$, and in particular supersedes the need for (\ref{S.22R+}). The reason for its validity stems from the fact that the more general case differs from the case $t=0$ only by the replacement of the first factor of $s$ in the integrand by $(s-t)$, as implied by the explicit functional form in (\ref{S.18}), with the $s$ dependence of all the other terms left unaltered and the terminals of integration equal to $(t^*-\epsilon, \infty)$. Moreover, since we are assuming $t < t^*$ and we have $s \in (t^*- \epsilon , \infty)$, with $\epsilon$ small enough it follows that $0 < s - t \le s$. Hence the limit formula (\ref{R1}) remains valid with $s$ in the integrand replaced by $(s-t)$, as required for general $0 < t < t^*$, and the claim is established. The above working involving (\ref{S.22R}) assumed $0 < t < t^*$. For $t > t^*$, the factor of the integrand (\ref{S.22}) decays exponentially fast in $N$ for all $s > 0$. The inequality (\ref{S.22R+}) with the lower terminal of integration replaced by 0 on both sides implies the integral in (\ref{S.18}) does not contribute to the $N \to \infty$ limit, which implies (\ref{S.20a}) for this range of $t$. The case $k > N$ remains to be considered.
According to (\ref{S.15}) and (\ref{A.8b}) the formula (\ref{S.18}) remains valid but with the first $k$ on the RHS replaced by $N$. The quantity $k-N$ in $P_{N-1}^{(k-N,1)}$ is now positive, or equivalently $\mu > 1$, which is also covered by Proposition \ref{P5.5}. The two cases $0 < t < t^*$ and $ t > t^*$ have to again be considered separately. Doing so, and following the strategy of the working for $\mu < 1$ as detailed above, we arrive at (\ref{S.20}) in both cases. \hfill $\square$ \begin{figure*} \centering \includegraphics[width=0.75\textwidth]{DUn3.pdf} \caption{Plot of $\tilde{S}_\infty(\mu;t)$ with respect to both $\mu$ and $t$, as specified by Theorem \ref{P5.6}. The ramp and plateau features are evident in the direction of fixed $t$ and varying $\mu$, with the transition being smooth in keeping with Remark \ref{R5.7}.4. } \label{Dfig3} \end{figure*} \begin{remark}\label{R5.7} $ $\\ 1.~As $\mu$ varies from $0$ to $1$, $t^*$ monotonically increases from the value $4$ to $\infty$. In particular, this means that for all $t < 4$ there is always a deformation of the limiting CUE result as given by the first term in (\ref{S.20a}). On the other hand, for $ t > 4$ there is always a range of $\mu$ values, $0 \le \mu \le \mu_r$, for which the limiting CUE result is valid. Here $\mu_r$ is determined as the solution of (\ref{c.1}) with $t^*=t$. We recall from the discussion of \S \ref{S3.1} that $t=4$ has the significance of being the value of $t$ for which the support of the density becomes the whole unit circle. \\ 2.~As $\mu$ varies from $1$ to $\infty$, $t^*$ monotonically decreases from the value $\infty$ to $0$. This means that for all $t > 0$ there is always a range of $\mu$ values, $\mu_p < \mu < \infty$, for which the limiting CUE result given by the first term of (\ref{S.20a}) is valid. This is in contrast to the circumstance for $0 < \mu < 1$ as described above. Both features are in agreement with the heuristic predictions of Section \ref{S1.3}.
\\ 3.~ For a given value of $t$, the critical values of $\mu$ introduced above can be read off from (\ref{c.1}) to be given by the implicit equations \begin{equation}\label{S.22b} t = {2 \over \mu_r} \log {1 + \mu_r \over 1 - \mu_r} \: \: (t > 4), \qquad t = {2 \over \mu_p} \log \Big | {1 + \mu_p \over 1 - \mu_p} \Big | \: \: (t >0). \end{equation} In keeping with the heuristic discussion of Section \ref{S1.3}, we expect that these equations can be rewritten as \begin{equation}\label{S.22c} \mu_r = \rho_{(1), \infty}(x;t) \Big |_{x=\pi} \: \: (t > 4), \qquad \mu_p = \rho_{(1), \infty}(x;t) \Big |_{x=0} \: \: (t >0), \end{equation} with $x=\pi$ ($x=0$) being the points of least (greatest) value of the density. We see from (\ref{a.5}) and (\ref{a.3}) with $w = -1$ (for $x=\pi$) and $w = 1$ (for $x= 0 $) that indeed the equations in (\ref{S.22b}) and (\ref{S.22c}) are equivalent. \\ 4.~Expanding the integral in (\ref{S.20a}) to leading order in $t^* - t$ shows that it is proportional to $(t^* - t)^{3/2}$. The exponent $3/2$ is the same as that known for the transition from ramp to plateau in the cases of the Gaussian unitary ensemble \cite{BH97}, and the Laguerre unitary ensemble with Laguerre parameter scaled with $N$ \cite{Fo21b}. In particular, it implies the function values and first derivatives of $\tilde{S}_\infty(\mu;t)$ agree across the transition. \end{remark} To use Theorem \ref{P5.6} for the computation of $\tilde{S}_\infty(\mu;t)$, with $t$ fixed and as a function of $\mu$, for each $\mu$ we must compute $t^*$ as specified by (\ref{c.1}). If $0 < t < t^*$ we compute $\tilde{S}_\infty(\mu;t)$ according to the functional form (\ref{S.20}) which involves a contribution from the integral, whereas for $t \ge t^*$ the term involving the integral is to be set to 0. A plot obtained by the implementation of this procedure, iterated also over $t$, is given in Figure \ref{Dfig3}. 
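For the reverse direction --- finding the critical coupling at a given $t$ --- the first implicit equation in (\ref{S.22b}) can be solved for $\mu_r$ by bisection, since its RHS is increasing in $\mu_r$ on $(0,1)$ with limiting value $4$ as $\mu_r \to 0^+$. A Python sketch (the function name \texttt{mu\_ramp} is ours, for illustration):

```python
import math

def mu_ramp(t, tol=1e-12):
    # Solve t = (2/mu) log((1+mu)/(1-mu)) for mu in (0,1) by bisection.
    # The RHS increases from its limit value 4 (as mu -> 0+) to infinity,
    # so a solution exists precisely when t > 4.
    if t <= 4.0:
        raise ValueError("mu_r is only defined for t > 4")
    def g(mu):
        return (2.0 / mu) * math.log((1.0 + mu) / (1.0 - mu)) - t
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For example, $\mu_r = 1/2$ corresponds to $t = 4\log 3 \approx 4.39$; the second equation of (\ref{S.22b}), with the absolute value and the branch $\mu_p > 1$, can be treated in the same way.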
\subsection*{Acknowledgements} This research is part of the program of study supported by the Australian Research Council Discovery Project grant DP210102887. S.-H.~Li acknowledges the support of the National Natural Science Foundation of China (grant No.~12175155). A referee is thanked for helpful feedback, as is the handling editor. The support of the chief editor has also played an important role. \nopagebreak \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \begin{thebibliography}{10} \bibitem{AL22} A.~Adhikari and B.~Landon, \emph{Local law and rigidity for unitary Brownian motion}, arXiv:2202.06714. \bibitem{AAR99} G.E. Andrews, R.~Askey, and R.~Roy, \emph{Special functions}, Cambridge University Press, New York, 1999. \bibitem{AO84} G. Andrews and E. Onofri, \emph{Lattice gauge theory, orthogonal polynomials and $q$-hypergeometric functions}, in Special Functions: Group Theoretical Aspects and Applications, R.A. Askey, ed. (1984), 163--188. \bibitem{AKMV10} P.~Aniello, A.~Kossakowski, G.~Marmo and F.~Ventriglia, \emph{Brownian motion on Lie groups and open quantum systems}, J. Phys. A \textbf{43} (2010), 265301. \bibitem{As19} T. Assiotis, \emph{Intertwinings for general $\beta$-Laguerre and $\beta$-Jacobi processes}, J. Theor. Probab. \textbf{32} (2019), 1880--1891. \bibitem{ABGS21} T. Assiotis, B. Bedert, M. Gunes and A. Soor, \emph{Moments of generalized Cauchy random matrices and continuous-Hahn polynomials}, Nonlinearity \textbf{34} (2021), 4923. \bibitem{AGS22} T.~Assiotis, M.A.~Gunes and A.~Soor, \emph{Convergence and an explicit formula for the joint moments of the circular Jacobi $\beta$-ensemble characteristic polynomial}, Math. Phys., Analysis and Geometry \textbf{25} (2022), 15. \bibitem{BGV99} A. Bassetto, L. Griguolo and F.
Vian, \emph{Instanton contributions to Wilson loops with general winding number in two-dimensions and the spectral density}, Nucl. Phys. B \textbf{559} (1999), 563--590. \bibitem{Be85} M.V.~Berry, \emph{Semiclassical theory of spectral rigidity}, Proc.~R.~Soc.~Lond. A \textbf{400} (1985), 229--251. \bibitem{Bi97} P. Biane, \emph{Free Brownian motion, free stochastic calculus and random matrices}, Fields Institute Communications \textbf{12}, Amer. Math. Soc. Providence, RI, 1997, 1--19. \bibitem{BN08} J.-P. Blaizot and M.A. Nowak, \emph{Large-$N_c$ confinement and turbulence}, Phys. Rev. Lett. \textbf{101} (2008), 102001. \bibitem{Bo98} A. Borodin, \emph{Biorthogonal ensembles}, Nucl. Phys. B \textbf{536} (1998), 704--732. \bibitem{BF22} P. Bourgade and H. Falconet, \emph{Liouville quantum gravity from random matrix dynamics}, arXiv:2206.03029. \bibitem{BCL21} J.~Boursier, D.~Chafa\"i and C.~Labb\'e, \emph{Universal cutoff for Dyson Ornstein Uhlenbeck process}, arXiv:2107.14452. \bibitem{BH97} E.~Br\'ezin and S. Hikami, \emph{Spectral form factor in a random matrix theory}, Phys. Rev. E \textbf{55} (1997), 4067--4083. \bibitem{BH98} E.~Br\'ezin and S.~Hikami, \emph{Level spacing of random matrices in an external source}, Phys. Rev. E \textbf{58} (1998), 7176--7185. \bibitem{BMS11} A.~Brini, M.~Mari\~{n}o, and S.~Stevan, \emph{The uses of the refined matrix model recursion}, J. Math. Phys. \textbf{52} (2011), 35--51. \bibitem{CMS17} A. del Campo, J. Molina-Vilaplana and J. Sonner, \emph{Scrambling the spectral form factor: unitarity constraints and exact results}, Phys. Rev. D \textbf{95} (2017), 126008. \bibitem{CL01} E. C\'epa and D. L\'epingle, \emph{Brownian particles with electrostatic repulsion on the circle: Dyson's model for unitary random matrices revisited}, ESAIM Probab. Statist. \textbf{5} (2001), 203--224. \bibitem{CI91} L.-C. Chen and M.E.H. Ismail, \emph{On asymptotics of Jacobi polynomials}, SIAM J. Math. Anal. \textbf{22} (1991), 1442--1449.
\bibitem{CL18} X.~Chen and A.W.W.~Ludwig, \emph{Universal spectral correlations in the chaotic wave function, and the development of quantum chaos}, Phys.~Rev.~B \textbf{98} (2018), 064309. \bibitem{CMC19} A.~Chenu, J.~Molina-Vilaplana and A.~del Campo, \emph{Work statistics, Loschmidt echo and information scrambling in chaotic quantum systems}, Quantum \textbf{3} (2019), 127. \bibitem{CES21} G. Cipolloni, L. Erd\"os and D. Schr\"oder, \emph{On the spectral form factor for random matrices}, arXiv:2109.06712. \bibitem{Co21} P. Cohen, \emph{Moments of discrete classical orthogonal polynomial ensembles}, arXiv:2112.02064. \bibitem{CCO20} P. Cohen, F.D. Cunden and N. O'Connell, \emph{Moments of discrete orthogonal polynomial ensembles}, Electron. J. Probab. \textbf{25} (2020), 1--19. \bibitem{Co05} B.~Collins, \emph{Product of random projections, {J}acobi ensembles and universality problems arising from free probability}, Prob. Theory Rel. Fields \textbf{133} (2005), 315--344. \bibitem{C+17} J.S. Cotler, G. Gur-Ari, M. Hanada, J. Polchinski, P. Saad, S.H. Shenker, D. Stanford, A. Streicher and M. Tezuka, \emph{Black Holes and Random Matrices}, JHEP \textbf{1705} (2017), 118; Erratum: [JHEP \textbf{1809} (2018), 002]. \bibitem{CH19} J.S. Cotler and N.~Hunter-Jones, \emph{Spectral decoupling in many-body quantum chaos}, JHEP \textbf{12} (2020), 205. \bibitem{CHLY17} J.S. Cotler, N. Hunter-Jones, J. Liu and B. Yoshida, \emph{Chaos, Complexity, and Random Matrices}, JHEP \textbf{1711} (2017), 048. \bibitem{CMOS19} F. Cunden, F. Mezzadri, N. O'Connell and N. Simm, \emph{Moments of random matrices and hypergeometric orthogonal polynomials}, Comm. Math. Phys. \textbf{369} (2019), 1091--1145. \bibitem{De10} N.~Demni, \emph{$\beta$-Jacobi processes}, Adv. Pure Appl. Math. \textbf{1} (2010), 325--344. \bibitem{DE05} I.~Dumitriu and A.~Edelman, \emph{Global spectrum fluctuations for the $\beta$-{H}ermite and $\beta$-{L}aguerre ensembles via matrix models}, J. Math. Phys.
\textbf{47} (2006), 063302. \bibitem{DP12} I.~Dumitriu and E.~Paquette, \emph{Global fluctuations for linear statistics of $\beta$ {J}acobi ensembles}, Random Matrices: Theory Appl. \textbf{01} (2012), 1250013. \bibitem{DO81} B. Durhuus and P. Olesen, \emph{The spectral density for two-dimensional continuum QCD}, Nucl. Phys. B \textbf{184} (1981), 461--475. \bibitem{Dy62b} F.J. Dyson, \emph{A {B}rownian motion model for the eigenvalues of a random matrix}, J. Math. Phys. \textbf{3} (1962), 1191--1198. \bibitem{Er55} A. Erd\'elyi, \emph{Asymptotic representations of Fourier integrals and the method of stationary phase}, J. Soc. Indust. Appl. Math. \textbf{3} (1955), 17--27. \bibitem{EKS20} L. Erd\"os, T. Kr\"uger, and D. Schr\"oder, \emph{Cusp universality for random matrices I: Local law and the complex Hermitian case}, Comm. Math. Phys. \textbf{378} (2020), 1203--1278. \bibitem{EY17} L. Erd\"os and H.-T. Yau, \emph{A dynamical approach to random matrix theory}, Courant Lecture Notes in Mathematics, vol. 28, Amer. Math. Soc. Providence, 2017. \bibitem{FFN10} B.~Fleming, P.J.~Forrester and E.~Nordenstam, \emph{A finitization of the bead process}, Probab. Th. Related Fields \textbf{152} (2012), 321--356. \bibitem{Fo90b} P.J.~Forrester, \emph{Exact solution of the lockstep model of vicious walkers}, J. Phys. A \textbf{23} (1990), 1259--1273. \bibitem{Fo96} P.J. Forrester, \emph{Some exact correlations in the Dyson Brownian motion model for transitions to the CUE}, Physica A \textbf{223} (1996), 365--390. \bibitem{Fo10} P.J. Forrester, \emph{Log-gases and random matrices}, Princeton University Press, Princeton, NJ, 2010. \bibitem{Fo18} P.J.~Forrester, \emph{Meet Andr\'eief, Bordeaux 1886, and Andreev, Kharkov 1882--83}, Random Matrices Theory Appl. \textbf{8} (2019), 1930001. \bibitem{Fo21a} P.J. Forrester, \emph{Differential identities for the structure function of some random matrix ensembles}, J. Stat. Phys. \textbf{183} (2021), 28. \bibitem{Fo21b} P. J.
Forrester, \emph{Quantifying dip-ramp-plateau for the Laguerre unitary ensemble structure function}, Commun. Math. Phys. \textbf{387} (2021), 215--235. \bibitem{Fo22} P.J.~Forrester, \emph{Global and local scaling limits for the $\beta = 2$ Stieltjes--Wigert random matrix ensemble}, Random Matrices Theory Appl. \textbf{11} (2022), 2250020. \bibitem{Fo22a} P.J.~Forrester, \emph{Joint moments of a characteristic polynomial and its derivative for the circular $\beta$--ensemble}, Probab. Math. Phys. \textbf{3} (2022), 145--170. \bibitem{Fo22b} P.J.~Forrester, \emph{High--low temperature dualities for the classical $\beta$-ensembles}, Random Matrices Theory Appl. (2022), https://doi.org/10.1142/S2010326322500356. \bibitem{Fo22c} P.J.~Forrester, \emph{A review of exact results for fluctuation formulas in random matrix theory}, arXiv:2204.03303. \bibitem{FG16} P.J. Forrester and J. Grela, \emph{Hydrodynamical spectral evolution for random matrices}, J. Phys. A \textbf{49} (2016), 085203. \bibitem{FK22} P.J. Forrester and S.~Kumar, \emph{Differential recurrences for the distribution of the trace of the $\beta$-Jacobi ensemble}, Physica D \textbf{434} (2022), 133220. \bibitem{FLSY21} P.J. Forrester, Shi-Hao Li, Bo-Jian Shen and Guo-Fu Yu, \emph{$q$-Pearson pair and moments in $q$-deformed ensembles}, arXiv:2110.13420. \bibitem{FR21} P. Forrester and A. Rahman, \emph{Relations between moments for the Jacobi and Cauchy random matrix ensembles}, J. Math. Phys. \textbf{62} (2021), 073302. \bibitem{FRW17} P.J. Forrester, A.A. Rahman, and N.S. Witte, \emph{Large $N$ expansions for the Laguerre and Jacobi $\beta$ ensembles from the loop equations}, J. Math. Phys. \textbf{58} (2017), 113303. \bibitem{FW17} P.J. Forrester and D. Wang, \emph{Muttalib-Borodin ensembles in random matrix theory --- realisations and correlation functions}, Elec. J. Probab. \textbf{22} (2017), 54.
\bibitem{FLD16} Y.V.~Fyodorov and P.~Le~Doussal, \emph{Moments of the position of the maximum for GUE characteristic polynomials and for log-correlated Gaussian processes}, J. Stat. Phys. \textbf{164} (2016), 190--240. \bibitem{GT14} G. Giasemidis and M. Tierz, \emph{Torus knot polynomials and SUSY Wilson loops}, Lett. Math. Phys. \textbf{104} (2014), 1535--1556. \bibitem{GS15} V. Gorin and M. Shkolnikov, \emph{Multilevel Dyson Brownian motions via Jack polynomials}, Probab. Th. Related Fields \textbf{163} (2015), 413--463. \bibitem{GM95} D.J. Gross and A. Matytsin, \emph{Some properties of large N two-dimensional Yang-Mills theory}, Nucl. Phys. B \textbf{437} (1995), 541--584. \bibitem{HHN16} W.~Hachem, A.~Hardy and J.~Najim, \emph{Large complex correlated Wishart matrices: the Pearcey kernel and expansion at the hard edge}, Elec. J. Probab. \textbf{21} (2016), 1--36. \bibitem{Ha01} B.C.~Hall, \emph{Harmonic analysis with respect to heat kernel measure}, Bull. Amer. Math. Soc. \textbf{38} (2001), 43--78. \bibitem{Ha18} T.~Hamdi, \emph{Spectral distribution of the free Jacobi process revisited}, Anal. PDE \textbf{11} (2018), 2137--2148. \bibitem{HL19} J. Huang and B. Landon, \emph{Rigidity and a mesoscopic central limit theorem for Dyson Brownian motion for general $\beta$ and potentials}, Probab. Th. Related Fields \textbf{175} (2019), 209--253. \bibitem{Hu56} G.A. Hunt, \emph{Semigroups of measures on Lie groups}, Trans. Am. Math. Soc. \textbf{81} (1956), 264--293. \bibitem{It50} K. It\^{o}, \emph{Brownian motions in a Lie group}, Proc. Jap. Acad. \textbf{26} (1950), 4--10. \bibitem{Ka22} Z.~Kabluchko, \emph{Lee-Yang zeros of the Curie-Weiss ferromagnet, unitary Hermite polynomials, and the backward heat flow}, arXiv:2203.05533. \bibitem{Ka16} M.~Katori, \emph{Bessel processes, {S}chramm--{L}oewner evolution, and the {D}yson model}, Springer briefs in mathematical physics, vol.~11, Springer, Berlin, 2016. \bibitem{Ka81} V. A.
Kazakov, \emph{Wilson loop average for an arbitrary contour in two-dimensional U(N) gauge theory}, Nucl. Phys. B \textbf{179} (1981), 283--292. \bibitem{Ke17} T. Kemp, \emph{Heat kernel empirical laws on U(N) and GL(N)}, J. Theoret. Probab. \textbf{30} (2017), 397--451. \bibitem{KLZF20} M. Kieburg, S.-H. Li, J. Zhang, and P. J. Forrester, \emph{Cyclic P\'olya ensembles on the unitary matrices and their spectral statistics}, arXiv:2012.11993. \bibitem{LSY19} B. Landon, P. Sosoe, and H.-T. Yau, \emph{Fixed energy universality of Dyson Brownian motion}, Adv. Math. \textbf{346} (2019), 1137--1332. \bibitem{LLJP86} L. Leviandier, M. Lombardi, R. Jost and J. P. Pique, \emph{Fourier transform: a tool to measure statistical level properties in very complex spectra}, Phys. Rev. Lett. \textbf{56} (1986), 2449. \bibitem{Le08} T.~L\'evy, \emph{Schur-Weyl duality and the heat kernel measure on the unitary group}, Adv. Math. \textbf{218} (2008), 537--575. \bibitem{LM10} T. L\'evy and M. Ma\"ida, \emph{Central limit theorem for the heat kernel measure on the unitary group}, J. Funct. Anal. \textbf{259} (2010), 3163--3204. \bibitem{LPC21} J. Li, T. Prosen, and A. Chan, \emph{Spectral statistics of non-Hermitian matrices and dissipative quantum chaos}, Phys. Rev. Lett. \textbf{127} (2021), 170602. \bibitem{LW16} K. Liechty and D. Wang, \emph{Nonintersecting Brownian motion on the unit circle}, Ann. Prob. \textbf{44} (2016), 1134--1211. \bibitem{Li58} J. Lighthill, \emph{Introduction to Fourier analysis and generalized functions}, Cambridge University Press, 1958. \bibitem{Ma95} I.G. Macdonald, \emph{Symmetric functions and Hall polynomials}, 2nd ed., Oxford University Press, Oxford, 1995. \bibitem{Ma21} A. W. Marcus, \emph{Polynomial convolutions and (finite) free probability}, arXiv:2108.07054. \bibitem{MRW15} F.~Mezzadri, A.K.
Reynolds and B.~Winn, \emph{Moments of the eigenvalue densities and of the secular coefficients of $\beta$-ensembles}, Nonlinearity \textbf{30} (2017), 1034. \bibitem{Me04} M.L. Mehta, \emph{Random matrices}, 3rd ed., Elsevier, San Diego, 2004. \bibitem{Mi21} B. Mirabelli, \emph{Hermitian, non-Hermitian and multivariate finite free probability}, PhD Thesis, Princeton University, 2021. \bibitem{MMPS12} A.D. Mironov, A.Yu. Morozov, A.V. Popolitov, and Sh.R. Shakirov, \emph{Resolvents and Seiberg-Witten representation for a Gaussian $\beta$-ensemble}, Theor. Math. Phys. \textbf{171} (2012), 505--522. \bibitem{MH21} A.~Mukherjee and S.~Hikami, \emph{Spectral form factor for time-dependent matrix model}, JHEP \textbf{2021} (2021), 071. \bibitem{Ok19} K. Okuyama, \emph{Spectral form factor and semi-circle law in the time direction}, JHEP \textbf{2019} (2019), 161. \bibitem{On81} E. Onofri, \emph{SU(N) Lattice gauge theory with Villain's action}, Nuovo Cim. A \textbf{66} (1981), 293--318. \bibitem{PS91} A. Pandey and P. Shukla, \emph{Eigenvalue correlations in the circular ensembles}, J. Phys. A \textbf{24} (1991), 3907--3926. \bibitem{Ra97} E.M.~Rains, \emph{Combinatorial properties of Brownian motion on the compact classical groups}, J. Theoret. Probab. \textbf{10} (1997), 659--679. \bibitem{Ro81} P. Rossi, \emph{Continuum QCD${}_2$ from a fixed point lattice action}, Ann. Phys. \textbf{132} (1981), 463--481. \bibitem{Sh16} S.~Shenker, \emph{Black holes and random matrices}, [web resource from the conference Natifest, 2016]. \bibitem{Su71a} B.~Sutherland, \emph{Exact results for a quantum many body problem in one dimension}, Phys. Rev. A \textbf{4} (1971), 2019--2021. \bibitem{SZ22} O. Szehr and R. Zarouf, \emph{On the asymptotic behavior of Jacobi polynomials with first varying parameter}, J. Approx. Th. \textbf{277} (2022), 105702.
\bibitem{TGS18} E.J.~Torres-Herrera, A.M.~Garc\'ia-Garc\'ia, and L.F.~Santos, \emph{Generic dynamical features of quenched interacting quantum systems: Survival probability, density imbalance, and out-of-time-ordered correlator}, Phys. Rev. B \textbf{97} (2018), 060303. \bibitem{Ul85} N.~Ullah, \emph{Probability density function of the single eigenvalue outside the semicircle using the exact Fourier transform}, J.~Math.~Phys. \textbf{26} (1985), 2350--2351. \bibitem{VP09} Vinayak and A.~Pandey, \emph{Transition from Poisson to circular unitary ensemble}, Pramana -- J. Phys. \textbf{73} (2009), 505--519. \bibitem{VG21} W.L. Vleeshouwers and V.~Gritsev, \emph{Topological field theory approach to intermediate statistics}, SciPost Phys. \textbf{10} (2021), 146. \bibitem{VG22} W.L. Vleeshouwers and V.~Gritsev, \emph{The spectral form factor in the 't Hooft limit --- intermediacy versus universality}, SciPost Phys. Core \textbf{5} (2022), 051. \bibitem{We16} C. Webb, \emph{Linear statistics of the circular $\beta$-ensemble, Stein's method, and circular Dyson Brownian motion}, Electron. J. Probab. \textbf{21} (2016), 25. \bibitem{WW65} E.T.~Whittaker and G.N.~Watson, \emph{A course of modern analysis}, 4th ed., Cambridge University Press, Cambridge, 1927. \bibitem{WF14} N.S. Witte and P.J. Forrester, \emph{Moments of the {G}aussian $\beta$ ensembles and the large {$N$} expansion of the densities}, J. Math. Phys. \textbf{55} (2014), 083302. \bibitem{WF15} N.S. Witte and P.J. Forrester, \emph{Loop equation analysis of the circular ensembles}, JHEP \textbf{2015} (2015), 173. \bibitem{Wo11} Wolfram Mathematica, online support, www.wolfram.com/language/11/random-matrices/dyson-coulomb-gas.html?product=mathematica \bibitem{Ya20} C.~Yan, \emph{Spectral form factor}, [web resource dated June 28, 2020]. \bibitem{Yo52} K. Yosida, \emph{Brownian motion in a homogeneous Riemannian space}, Pac. J. Math. \textbf{2} (1952), 263--270.
\bibitem{ZKF21} J.~Zhang, M.~Kieburg and P.J.~Forrester, \emph{Harmonic analysis for rank-1 randomised Horn problems}, Lett. Math. Phys. \textbf{111} (2021), 98. \end{thebibliography} \end{document}
http://arxiv.org/abs/2206.14893v1
Breaking indecision in multi-agent, multi-option dynamics
\documentclass[onefignum,onetabnum]{siamart190516} \input{macros_decision} \headers{Decision from Indecision}{A.~Franci, M.~Golubitsky, I.~Stewart, A.~Bizyaeva, N.E.~Leonard} \title{Breaking indecision \\ in multi-agent multi-option dynamics\thanks{Submitted \today. \funding{This research has been supported in part by NSF grant CMMI-1635056, ONR grant N00014-19-1-2556, ARO grant W911NF-18-1-0325, DGAPA-UNAM PAPIIT grant IN102420, and Conacyt grant A1-S-10610. This material is also based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. (NSF grant number).}}} \author{Alessio Franci\thanks{Department of Mathematics, National Autonomous University of Mexico, 04510 Mexico City, Mexico. (\email{[email protected]}, \url{https://sites.google.com/site/francialessioac/})} \and Martin Golubitsky\thanks{Department of Mathematics, Ohio State University, Columbus, OH, 43210-1174 USA. (\email{[email protected]}).} \and Ian Stewart\thanks{Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK.\\ (\email{[email protected]}).} \and Anastasia~Bizyaeva\thanks{Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ, 08544 USA. (\email{[email protected]}).} \and Naomi Ehrich Leonard\thanks{Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ, 08544 USA. (\email{[email protected]}).} } \date{\today} \begin{document} \maketitle \begin{abstract} How does a group of agents break indecision when deciding about options with qualities that are hard to distinguish? Biological and artificial multi-agent systems, from honeybees and bird flocks to bacteria, robots, and humans, often need to overcome indecision when choosing among options in situations in which the performance or even the survival of the group is at stake.
Breaking indecision is also important because in a fully indecisive state agents are not biased toward any specific option and therefore the agent group is maximally sensitive and prone to adapt to inputs and changes in its environment. Here, we develop a mathematical theory to study how decisions arise from the breaking of indecision. Our approach is grounded in both equivariant and network bifurcation theory. We model decision from indecision as synchrony-breaking in influence networks in which each node is the value assigned by an agent to an option. First, we show that three universal decision behaviors, namely, deadlock, consensus, and dissensus, are the generic outcomes of synchrony-breaking bifurcations from a fully synchronous state of indecision in influence networks. Second, we show that all deadlock and consensus value patterns and some dissensus value patterns are predicted by the symmetry of the influence networks. Third, we show that there are also many `exotic' dissensus value patterns. These patterns are predicted by network architecture, but not by network symmetries, through a new {\it synchrony-breaking branching lemma}. This is the first example of exotic solutions in an application. Numerical simulations of a novel influence network model illustrate our theoretical results. \end{abstract} \begin{keywords} decision making, symmetry-breaking, synchrony-breaking, opinion dynamics \end{keywords} \begin{AMS} 91D30, 37G40, 37Nxx \end{AMS} \renewcommand{\Na}{m} \renewcommand{\No}{n} \section{Motivation} Many multi-agent biological and artificial systems routinely make collective decisions. That is, they make decisions about a set of possible options through group interactions and without a central ruler or a predetermined hierarchy between the agents. Often they do so without clear evidence about which options are better. In other words, they make decisions from indecision. 
Failure to do so can have detrimental consequences for the group performance, fitness, or even survival. Honeybee swarms recurrently look for a new home and make a selection when a quorum is reached~\cite[Seeley]{S97},\cite[Visscher]{K07}. Scouting bees evaluate the quality of the candidate nest site and recruit other bees at the swarm through the `waggle dance'~\cite[Seeley {\it et al.}]{SCS91}. It was shown in~\cite[Seeley {\it et al.}]{SKSHFM12} that honeybees use a cross-inhibitory signal and this allows them to break indecision when a pair of nest sites have near-equal quality (see also \cite[Gray {\it et al.}]{GFSL18}, \cite[Pais {\it et al.}]{PHSFLM13}). Animal groups on the move, like fish schools~\cite[Couzin {\it et al.}]{CIDGTHCLL11} or bird flocks ~\cite[Biro {\it et al.}]{BSMG06}, face similar conundrums when making navigational decisions, for instance, when there are navigational options with similar qualities or when the group is divided in their information about the options. However, when the navigational options appear sufficiently well separated in space, the group breaks indecision through social influence and makes a consensus navigational decision \cite[Couzin {\it et al.}]{CKFL05}, \cite[Leonard {\it et al.}]{LSNSCL12}, \cite[Sridhar {\it et al.}]{SLGNSBSGC21},\cite[Nabet {\it et al.}]{NLCL09}. In the two biological examples above, the multi-agent system needs to make a {\it consensus} decision to adapt to the environment. But there are situations in which different members of the same biological swarm choose different options to increase the group fitness; that is, the group makes a {\it dissensus} decision. This is the case of phenotypic differentiation in communities of isogenic social unicellular organisms, like bacteria \cite[Miller and Bassler]{MB01}, \cite[Waters and Bassler]{WB05}. 
In response to environmental cues, like starvation, temperature changes, or the presence of molecules in the environment, bacterial communities are able to differentiate into multiple cell types through molecular quorum sensing mechanisms mediated by autoinducers, as in {\it Bacillus subtilis} \cite[Shank and Kolter]{SK11} and {\it Myxococcus xanthus} \cite[Dworkin and Kaiser]{DK85} communities. In all those cases, the indecision about which cells should differentiate into which functional phenotype is broken through molecular social influence. In some models of sympatric speciation, which can be considered as special cases of the networks studied here, coarse-grained sets of organisms differentiate phenotypically by evolving different strategies \cite[Cohen and Stewart]{CS00}, \cite[Stewart {\it et al.}]{SEC03}. See Section~\ref{S:SCB}. Artificial multi-agent systems, such as robot swarms, must make the same type of consensus and dissensus decisions as those faced by their biological counterparts. Two fundamental collective behaviors of a robot swarm are indeed collective decision making, in the form of either consensus achievement or task allocation, and coordinated navigation \cite[Brambilla {\it et al.}]{BFBD13}. Task allocation in a robot swarm is a form of dissensus decision making \cite[Franci {\it et al.}]{FBPL21} akin to phenotypic differentiation in bacterial communities. The idea of taking inspiration from biology to design robot swarm behaviors has a long history \cite[Kriger {\it et al.}]{KBK00}, \cite[Labella {\it et al.}]{LDD06}. Human groups are also often faced with decision making among near equal-quality alternatives and need to avoid indecision, from deciding what to eat to deciding on policies that affect climate change~\cite{UN21}. 
The mathematical modeling of opinion formation through social influence in non-hierarchical groups dates back to the De\-Groot linear averaging model \cite[DeGroot]{D74} and has since evolved into a myriad of different models that reproduce different types of opinion formation behaviors (see for example \cite[Friedkin and Johnsen]{FJ99}, \cite[Deffuant {\it et al.}]{DNAW00}, and \cite[Hegselmann and Krause]{HK02}), including polarization (see for example \cite[Cisneros {\it et al.}]{CCB19}, \cite[Dandekar {\it et al.}]{DGL13}) and the formation of beliefs on logically interdependent topics \cite[Friedkin {\it et al.}]{FPTP16}, \cite[Parsegov {\it et al.}]{PPTF16}, \cite[Ye {\it et al.}]{YTLAA20}. Inspired by the model-independent theory in~\cite[Franci {\it et al.}]{FGBL20}, we recently introduced a general model of nonlinear opinion dynamics to understand the emergence of, and the flexible transitions between, different types of agreement and disagreement opinion formation behaviors \cite[Bizyaeva {\it et al.}]{BFL22}. In this paper we ask: Is there a general mathematical framework, underlying many specific models, for analyzing decision from indecision? We propose one answer based on differential equations with network structure, and apply methods from symmetric dynamics, network dynamics, and bifurcation theory to deduce some general principles and facilitate the analysis of specific models. The state space for such models is a rectangular array of agent-assigned option values, a `decision state', introduced in Section~\ref{S:influ_net}. Three important types of decision states are introduced: consensus, deadlock, and dissensus. Section~\ref{S:SBB} describes the bifurcations that can lead to the three types of decision states through indecision (synchrony) breaking from fully synchronous equilibria.
The decision-theoretic notion of an influence network, and of value dynamics over an influence network, together with the associated admissible maps defining differential equations that respect the network structure, are described in Section~\ref{S:network}, as are their symmetry groups and irreducible representations of those groups. Section~\ref{S:PVD} briefly describes the interpretation of a distinguished parameter in the value dynamics equations from a decision-theoretic perspective. The trivial fully synchronous solution and the linearized admissible maps are discussed in Section~\ref{S:Deadlock_linearization}. It is noteworthy that the four distinct eigenvalues of the linearization are determined by symmetry. This leads to the three different kinds of synchrony-breaking steady-state bifurcation in the influence network. Consensus and deadlock synchrony-breaking bifurcations are studied in Section~\ref{S:DLB}. Both are $\ES_N$ bifurcations that have been well studied in the symmetry-breaking context and are identical to the corresponding synchrony-breaking bifurcations. Section~\ref{S:axial} proves the Synchrony-Breaking Branching Lemma (Theorem~\ref{T:simp_eigen_reg}), showing the generic existence of solution branches for all `axial' balanced colorings, a key concept in homogeneous networks that is defined later. This theorem is used in Section~\ref{S:diss_bif} to study dissensus synchrony-breaking bifurcations and prove the generic existence of `exotic' solutions that can be found using synchrony-breaking techniques but not using symmetry-breaking techniques. Section~\ref{S:SBSB} provides further discussion of exotic states. Simulations showing that axial solutions can be stable (though not necessarily near bifurcation) are presented in Sections~\ref{S:simulation} and \ref{S:simulation_dissensus}.
Additional discussion of stability and instability of states is given in Sections~\ref{SS:stability_dissensus} and \ref{SS:stability_balanced}. \section{Decision making through influence networks and value pattern formation} \label{S:influ_net} We consider a set of $\Na\geq 2$ identical agents who form valued preferences about a set of $\No\geq 2$ identical options. By `identical' we mean that all agents process in the same way the valuations made by other agents, and all options are treated in the same manner by all agents. These conditions are formalized in Section~\ref{S:network} in terms of the network structure and symmetries of model equations. When the option values are {\it a priori} unclear, ambiguous, or simply unknown, agents must compare and evaluate the various alternatives, both on their own and jointly with the other agents. They do so through an {\em influence network}, whose connections describe mutual influences between valuations made by all agents. We assume that the value assigned by a given agent to a given option evolves over time, depending on how all agents value all options. This dynamic is modeled as a system of ordinary differential equations (ODE) whose formulas reflect the network structure. Technically, this ODE is `admissible' \cite[Stewart {\it et al.}]{SGP03} for the network. See \eqref{EQ: generic value evol} and \eqref{EQ: admissible vf}. In this paper we analyze the structure (and stability, when possible) of model ODE steady states. Although agents and options are identical, options need not be valued equally by all agents at steady state. The uniform synchronous state, where all options are equally valued, can lose stability. This leads to spontaneous synchrony-breaking and the creation of patterns of values. Our aim is to exhibit a class of patterns, which we call `axial,' that can be proved to occur via bifurcation from a fully synchronous state for typical models. 
\subsection{Synchrony-breaking versus symmetry-breaking} \label{S:SBSB} Synchrony-breaking is an approach to classifying the bifurcating steady-state solutions of dynamical systems on homogeneous networks. It is analogous to the successful approach of spontaneous symmetry-break\-ing \cite[Golubitsky and Stewart]{GS02}, \cite[Golubitsky {\it et al.}]{GSS88}. In its simplest form, symmetry-breaking addresses the following question. Given a symmetry group $\Gamma$ acting on $\R^n$ and a branch of stable $\Gamma$-symmetric equilibria $x_0(\lambda)$ depending on a parameter $\lambda$, what are the possible symmetries of the steady states that bifurcate from $x_0(\lambda)$ when it loses stability at $\lambda=\lambda_0$? A technique that partially answers this question is the Equivariant Branching Lemma (see~\cite[XIII, Theorem~3.3]{GSS88}). This result was first observed by \cite[Vanderbauwhede]{V82} and \cite[Cicogna]{Ci81}. The Equivariant Branching Lemma proves the existence of a branch of equilibria for each axial subgroup $\Sigma\subseteq\Gamma$. It applies whenever an axial subgroup exists. Here we prove an analogous theorem for synchrony-breaking in the network context. Given a homogeneous network $\mathcal N$ with a branch of fully synchronous stable equilibria $x_0(\lambda)$, we ask what kinds of patterns of synchrony bifurcate from $x_0(\lambda)$ when the equilibrium loses stability at $\lambda_0$. A technique that partially answers this question is the Synchrony-Breaking Branching Lemma \cite[Golubitsky and Stewart]{GS22} (reproved here in Section~\ref{S:axial}). This theorem proves the existence of a branch of equilibria for each `axial' subspace $\Delta_{\bowtie}$. It applies whenever an axial subspace exists. The decision theory models considered in this paper are both homogeneous (all nodes are of the same kind) and symmetric (nodes are interchangeable).
Moreover, the maximally symmetric states (states fixed by all symmetries) are the same as the fully synchronous states, and it can be shown that for influence networks the same critical eigenspaces (where bifurcations occur) are generic in either context; namely, the absolutely irreducible representations. However, the generic branching behavior on the critical eigenspaces can be different for network-admissible ODEs compared to equivariant ODEs. This occurs because network structure imposes extra constraints compared to symmetry alone, and therefore network-admissible ODEs are in general a subset of equivariant ODEs. Here it turns out that this difference occurs for dissensus bifurcations but not for deadlock or consensus bifurcations. Every axial subgroup $\Sigma$ leads to an axial subspace $\Delta_{\bowtie}$, but the converse need not be true. We call axial states that can be obtained by symmetry-breaking {\em orbital states}, and axial states that can be obtained by the Synchrony-Breaking Branching Lemma, but not by the Equivariant Branching Lemma, {\em exotic states}. We show that dissensus axial exotic states often exist when $m,n\geq 4$. These states are the first examples of this phenomenon known to appear in an application. \subsection{Terminology} The {\em value} that agent $i$ assigns to option $j$ is represented by a real number $z_{ij}$, which may be positive, negative, or zero. A {\em state} of the influence network is a rectangular array of values $Z=(z_{ij})$ where $1 \leq i \leq \Na$ and $1 \leq j \leq \No$. The assumption that agents/options are identical, as explained at the start of this section, implies that any pair of entries $z_{ij}$ and $z_{kl}$ can be compared and that this comparison is meaningful. {\bf Decision states}: A state where each row consists of equal values is a {\em deadlock} state (Fig.~\ref{fig:consensus illust}, right). Agent $i$ is {\em deadlocked} if row $i$ consists of equal values.
A state where each column consists of equal values is a {\em consensus} state (Fig.~\ref{fig:consensus illust}, left). There is {\em consensus about option} $j$ if column $j$ consists of equal values. A state that is neither deadlock nor consensus is {\em dissensus} (Fig.~\ref{fig:dissensus illust ab}). A state that is both deadlock and consensus is a state of {\it indecision}. An indecision state is {\it fully synchronous}; that is, in such a state all values are equal. {\bf Value patterns}: We discuss value patterns that are likely to form by synchrony-breaking bifurcation from a fully synchronous steady state. Each bifurcating state forms a pattern of value assignments. It is convenient to visualize this pattern by assigning a color to each numerical value that occurs. Nodes $z_{ij}$ and $z_{kl}$ of the array $Z$ have the same color if and only if $z_{ij} = z_{kl}$. Nodes with the same value, hence the same color, are {\em synchronous}. Examples of value patterns are shown in Figures~\ref{fig:consensus illust}--\ref{fig:dissensus illust c}. Agent and option numbering are omitted when not expressly needed. \subsection{Sample bifurcation results} Because agents and options are independently interchangeable, the influence network has symmetry group $\Gamma = \ES_m\times\ES_n$ and admissible systems of ODEs are $\Gamma$-equivariant. Equivariant bifurcation theory shows that admissible systems can exhibit three kinds of symmetry-breaking or synchrony-breaking bifurcation from a fully synchronous equilibrium; see Section~\ref{SS:3symm_breaking}. Moreover, axial patterns associated with these bifurcation types lead respectively to consensus (\S\ref{case1}, Theorem~\ref{T:consensus_bifur}), deadlock (\S\ref{case2}, Theorem~\ref{T:divided_deadlock_bifur}), or dissensus (\S\ref{case3}, Theorem~\ref{T:diss}) states. Representative examples (up to renumbering agents and options) are shown in Figures~\ref{fig:consensus illust} and \ref{fig:dissensus illust ab}.
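As an illustrative aside, not part of the formal development, the decision-state terminology above is simple to check mechanically: a pattern is deadlock when every row is constant, consensus when every column is constant, indecision when both hold, and dissensus otherwise. The following Python sketch (a helper of our own devising, with a numerical tolerance in place of exact equality) encodes this test.

```python
import numpy as np

def classify(Z, tol=1e-9):
    """Classify a value pattern Z (rows = agents, columns = options)
    using the decision-state terminology of this section."""
    Z = np.asarray(Z, dtype=float)
    deadlock = bool(np.all(np.ptp(Z, axis=1) < tol))   # every row constant
    consensus = bool(np.all(np.ptp(Z, axis=0) < tol))  # every column constant
    if deadlock and consensus:
        return "indecision"  # fully synchronous: all values equal
    if deadlock:
        return "deadlock"
    if consensus:
        return "consensus"
    return "dissensus"
```

For example, `classify([[2, 0], [0, 2]])` returns `"dissensus"`, while the fully synchronous pattern `[[1, 1], [1, 1]]` returns `"indecision"`.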
\begin{figure}[htb] \centering \includegraphics[width=0.95\textwidth]{./FIGURES/consensus_illust} \caption{(Left) Consensus value pattern. (Right) Deadlock value pattern.}\label{fig:consensus illust} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.9\textwidth]{./FIGURES/dissensus_illust_ab} \caption{Dissensus value patterns corresponding to \S\ref{case3a} (top) and \S\ref{case3b} (bottom).}\label{fig:dissensus illust ab} \end{figure} The different types of pattern are characterized by the following properties: zero row-sums, zero column-sums, equality of rows, and equality of columns. A bifurcation branch has {\em zero row-sum} (ZRS) if along the branch each row sum is constant to linear order in the bifurcation parameter. A bifurcation branch has {\em zero column-sum} (ZCS) if along the branch each column sum is constant to linear order in the bifurcation parameter. Thus, along a ZRS branch, an agent places higher values on some options and lower values on other options, but in such a way that its average value remains constant to linear order. In particular, along a ZRS branch, an agent forms preferences for the various options, e.g., favoring some and disfavoring others. Similarly, along a ZCS branch, an option is valued higher by some agents and lower by others, but in such a way that its average value remains constant to linear order. In particular, along a ZCS branch, an option is favored by some agents and disfavored by others. Consensus bifurcation branches are ZRS with all rows equal, so all agents favor/disfavor the same options. Deadlock bifurcation branches are ZCS with all columns equal, so each agent equally favors or disfavors every option. Dissensus bifurcation branches are ZRS and ZCS with some unequal rows and some unequal columns, so there is no consensus on favored/disfavored options. ZRS and ZCS conditions are discussed in more detail in Section~\ref{SSEC: irreps} using the irreducible representations of $\Gamma$.
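The ZRS and ZCS conditions constrain the linear-order perturbation away from full synchrony: its row sums (respectively column sums) must vanish. A minimal numerical sketch, with hypothetical perturbation arrays of our own choosing:

```python
import numpy as np

def is_zrs(dZ, tol=1e-9):
    """Zero row-sum: each agent's total valuation unchanged (to linear order)."""
    return bool(np.all(np.abs(np.asarray(dZ, dtype=float).sum(axis=1)) < tol))

def is_zcs(dZ, tol=1e-9):
    """Zero column-sum: each option's total valuation unchanged (to linear order)."""
    return bool(np.all(np.abs(np.asarray(dZ, dtype=float).sum(axis=0)) < tol))

# Hypothetical linear-order perturbations of the fully synchronous state:
consensus_dZ = np.array([[1, -1, 0],    # all rows equal, each row sums to 0
                         [1, -1, 0]])
dissensus_dZ = np.array([[1, -1],       # all row sums and column sums are 0
                         [-1, 1]])
```

Here `consensus_dZ` is ZRS but not ZCS (its column sums are $2$, $-2$, $0$), while `dissensus_dZ` satisfies both conditions.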
\subsection{Value-assignment clusters} To describe these patterns informally in decision-theoretic language, we first define: \begin{definition} \rm Two agents are in the same {\em agent-cluster} if their rows are equal. Two options are in the same {\em option-cluster} if their columns are equal. \end{definition} In other words, two agents in the same agent-cluster assign the same value to every option, and two options in the same option-cluster are valued equally by every agent. An example is given in Figure~\ref{F:color_example_rect}. \begin{figure}[htb] \centerline{ \includegraphics[width=.27\textwidth]{FIGURES/a-o-clusters.pdf}} \caption{The agent-clusters in this value pattern are $\{1,2\}$, $\{3\}$ and the option-clusters are $\{1\}, \{2,3\}, \{4\}, \{5\}$.} \label{F:color_example_rect} \end{figure} \subsection{Color-isomorphism and color-complementarity} The following definitions help characterize value patterns. Definition~\ref{D: color isom} (color-isomorphism) relates two agents by an option permutation and two options by an agent permutation, thus preserving the number of nodes of a given color in the associated rows/columns. Definition~\ref{D: color comp} (color-complementarity) relates two agents/options by a color permutation, therefore not necessarily preserving the number of nodes of a given color in the associated rows/columns. \begin{definition}\label{D: color isom} \rm Two agents (options) are {\em color-isomorphic} if the row (column) pattern of one can be transformed into the row (column) pattern of the other by an option (agent) permutation. \end{definition} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{./FIGURES/cluster_vs_isom_vs_compl} \caption{ (Left) \{1, 2\}, \{3\} are agent-clusters; agents 1, 2, 3 are color-isomorphic; none of the agents are color-complementary.
(Right) \{1\}, \{2\}, \{3\} are agent-clusters; agents 1 and 3 are color-isomorphic; agents 1 and 2 are color-complementary.} \label{fig:cluster vs isom vs compl} \end{figure} \begin{definition}\label{D: color comp}\rm Two agents (options) are {\em color-complementary} if the row (column) pattern of one can be transformed into the row (column) pattern of the other by a color permutation. \end{definition} Examples of agent-clusters, color-isomorphism, and color-complementarity are given in Figure~\ref{fig:cluster vs isom vs compl}. Non-trivial color-isomorphism and color-complementarity characterize {\it disagreement} value patterns. Two color-isomorphic agents not belonging to the same agent-cluster disagree on the value of the permuted options. Two color-isomorphic options not belonging to the same option-cluster are such that the permuted agents disagree on their value. Similarly, two color-complementary agents not belonging to the same agent-cluster disagree on the options whose colors are permuted. Two color-complementary options not belonging to the same option-cluster are such that the agents whose colors were permuted disagree on those option values. \section{Indecision-breaking as synchrony breaking from full synchrony} \label{S:SBB} We study patterns of values in the context of {\em bifurcations}, in which the solutions of a parametrized family of ODEs change qualitatively as a parameter $\lambda$ varies. Specifically, we consider {\em synchrony-breaking} bifurcations, in which a branch of fully synchronous steady states ($z_{ij} = z_{kl}$ for all $i,j,k,l$) becomes dynamically unstable, leading to a branch (or branches) of states with a more complex value pattern. The fully synchronous state is one in which the group of agents expresses no preferences about the options. In such a state it is impossible for any single agent, or for the group as a whole, to coherently decide which options are best.
The fully synchronous state therefore corresponds to indecision; an agent group in such a state is called {\it undecided}. Characterizing indecision is important because any possible value pattern configuration can be realized as an infinitesimal perturbation of the fully synchronous state. In other words, any decision can be made from indecision. Here, we show how exchanging opinions about option values through an influence network leads to generic paths from indecision to decision through synchrony-breaking bifurcations. The condition for steady-state bifurcation is that the linearization $J$ of the ODE at the bifurcation point has at least one eigenvalue equal to $0$. The corresponding eigenspace is called the {\em critical eigenspace}. In~\eqref{Irrep_decomp} we show that in the present context there are three possible critical eigenspaces for generic synchrony-breaking. The bifurcating steady states are described in Theorems~\ref{T:consensus_bifur}, \ref{T:divided_deadlock_bifur}, \ref{T:diss}. For each of these cases, bifurcation theory for networks leads to a list of `axial' patterns -- for each we give a decision-theoretic interpretation in Sections~\ref{case1}--\ref{case3}. These informal descriptions are made precise by the mathematical statements of the main results in Sections~\ref{S:DLB} and \ref{S:diss_bif}. In the value patterns in Figures~\ref{fig:consensus illust},~\ref{fig:dissensus illust ab},~\ref{fig:dissensus illust c}, we use the convention that the value of red nodes is low, the value of blue nodes is high, and the value of yellow nodes is between that of red and blue nodes. Blue nodes correspond to favored options, red nodes to disfavored options, and yellow nodes to options about which an agent remains neutral. That is, to linear order yellow nodes assign to that option the same value as before synchrony is broken. Color saturation of red and blue encodes deviation from neutrality, where appropriate.
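Before turning to the individual bifurcation types, we note that the cluster and color notions of the preceding subsections are directly computable from a value array. The Python sketch below uses hypothetical numerical values of our own choosing, selected to realize the clusters listed in the caption of Figure~\ref{F:color_example_rect}; the brute-force permutation search implements Definition~\ref{D: color isom} literally and is only practical for small patterns.

```python
import numpy as np
from itertools import permutations

def agent_clusters(Z):
    """Group agents (1-indexed) whose rows are equal."""
    clusters = {}
    for i, row in enumerate(np.asarray(Z)):
        clusters.setdefault(tuple(row), []).append(i + 1)
    return sorted(clusters.values())

def option_clusters(Z):
    # Options cluster exactly as the agents of the transposed array do.
    return agent_clusters(np.asarray(Z).T)

def color_isomorphic_agents(Z, i, k):
    """Brute force: does some option permutation map row i onto row k?"""
    Z = np.asarray(Z)
    return any(tuple(Z[i - 1][list(p)]) == tuple(Z[k - 1])
               for p in permutations(range(Z.shape[1])))

# Hypothetical values realizing the clusters of Figure F:color_example_rect:
Z = np.array([[0, 1, 1, 2, 3],
              [0, 1, 1, 2, 3],
              [1, 0, 0, 3, 2]])
```

With this array, `agent_clusters(Z)` returns `[[1, 2], [3]]` and `option_clusters(Z)` returns `[[1], [2, 3], [4], [5]]`, matching the figure caption.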
\subsection{Consensus bifurcation} \label{case1} All agents assign the same value to any given option. There is one agent-cluster and two option-clusters. All values for a given option-cluster are equal and different from the values of the other cluster. Options from different option-clusters are not color-isomorphic but are color-complementary. One option-cluster consists of favored options and the other of disfavored options. In other words, there is a group consensus about which options are favored. See Figure~\ref{fig:consensus illust}, left, for a typical consensus value pattern. The sum of the $z_{ij}$ along each row is zero to leading order in $\lambda$. \subsection{Deadlock bifurcation} \label{case2} All options are assigned the same value by any given agent. There is one option-cluster and two agent-clusters. All values for a given agent-cluster are equal and different from the values of the other cluster. Agents from different agent-clusters are not color-isomorphic but are color-complementary. One agent-cluster favors all the options and the other agent-cluster disfavors all options. Each agent is deadlocked about the options and the group is divided about the overall value of the options. See Figure~\ref{fig:consensus illust}, right, for a typical deadlock value pattern. The sum of the $z_{ij}$ along each column is zero to leading order in $\lambda$. \subsection{Dissensus bifurcation} \label{case3} This case is more complex, and splits into three subcases. \subsubsection{Dissensus with color-isomorphic agents} \label{case3a} Each agent assigns the same high value to a subset of favored options, the same low value to a subset of disfavored options, and, possibly, remains neutral about a third subset of options. All agents are color-isomorphic but there are at least two distinct agent-clusters. Therefore, there is no consensus about which options are (equally) favored. There are at least two distinct option-clusters with color-isomorphic elements.
Each agent expresses preferences about the elements of each option-cluster by assigning the same high value to some of them and the same low value to others. There might be a further option-cluster made of options about which all agents remain neutral. There might be pairs of color-complementary agent- and option-clusters. See Figure~\ref{fig:dissensus illust ab}, top, for typical dissensus value patterns of this kind with (left) and without (right) color-complementary pairs. \subsubsection{Dissensus with color-isomorphic options} \label{case3b} This situation is similar to \S\ref{case3a}, but now all options are color-isomorphic and there are no options about which all agents remain neutral. There might be an agent-cluster of neutral agents, who assign the same mid-range value to all options. See Figure~\ref{fig:dissensus illust ab}, bottom, for typical dissensus value patterns of this kind with (left) and without (right) color-complementary pairs. \subsubsection{Polarized dissensus} \label{case3c} There are two option-clusters and two agent-clus\-ters. For each combination of one option-cluster and one agent-cluster, all values are identical. Members of distinct agent-clusters are color-isomorphic and color-comple\-ment\-ary if and only if members of distinct option-clusters are color-isomorphic and color-complementary; equivalently, if and only if the two agent-clusters have the same number of elements and the two option-clusters have the same number of elements. In other words, agents are polarized into two clusters. Each agent-cluster has complete internal consensus on the option values, placing the same value on all options within a given option-cluster, but the two agent-clusters disagree on all values. See Figure~\ref{fig:dissensus illust c} for typical polarized dissensus value patterns without (left) and with (right) color-isomorphism and color-complementarity.
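A hypothetical $4\times 4$ polarized dissensus pattern of the kind just described can be written down explicitly and its defining properties verified directly; the Python checks below (an illustrative sketch, not drawn from the formal results) confirm that it is neither deadlock nor consensus, has exactly two agent-clusters and two option-clusters, and has zero row- and column-sums.

```python
import numpy as np

# Hypothetical polarized dissensus pattern: agent-clusters {1,2} and {3,4},
# option-clusters {1,2} and {3,4}; the second agent-cluster disagrees with
# the first on every value.
Z = np.array([[ 1,  1, -1, -1],
              [ 1,  1, -1, -1],
              [-1, -1,  1,  1],
              [-1, -1,  1,  1]])

rows_constant = bool(np.all(np.ptp(Z, axis=1) == 0))  # would mean deadlock
cols_constant = bool(np.all(np.ptp(Z, axis=0) == 0))  # would mean consensus
is_dissensus = not rows_constant and not cols_constant
n_agent_clusters = len({tuple(r) for r in Z})
n_option_clusters = len({tuple(c) for c in Z.T})
zrs = bool(np.all(Z.sum(axis=1) == 0))  # zero row-sums
zcs = bool(np.all(Z.sum(axis=0) == 0))  # zero column-sums
```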
\begin{figure} \centering \includegraphics[width=0.9\textwidth]{./FIGURES/dissensus_illust_c} \caption{Examples of polarized dissensus value patterns (\S\ref{case3c}).}\label{fig:dissensus illust c} \end{figure} \section{Influence Networks} \label{S:network} We now describe the mathematical context in more detail and make the previous informal descriptions precise. The main modeling assumption is that the value $z_{ij}\in\R$ assigned by agent $i$ to option $j$ evolves continuously in time. We focus on dynamics that evolve towards equilibrium, and describe generic equilibrium synchrony patterns. Each variable $z_{ij}$ is associated to a node $(i,j)$ in an {\it influence network}. The set of nodes of the influence network is the rectangular array \[ \CC=\{(i,j):1\leq i \leq \Na,1\leq j \leq \No\}. \] We assume that all nodes are identical; they are represented by the same symbol in Figure~\ref{FIG: influence network} (left). Arrows between the nodes indicate influence; that is, an arrow from node $(k,l)$ to node $(i,j)$ means that the value assigned by agent $i$ to option $j$ is influenced by the value assigned by agent $k$ to option $l$. \subsection{Arrow types} We assume that there are three different types of interaction between nodes, determined by different types of arrow in the network and represented graphically by different styles of arrow in Figure~\ref{FIG: influence network} (left). The arrow types are distinguished by their tails: \begin{itemize} \item (Row arrows) The {\it intra-agent, inter-option influence neighbors} of node $(i,j)$ are the $\No - 1$ nodes in the subset \[ {\sf A}_{ij} = \{(i,1),\ldots, (i,\No)\} \setminus \{(i,j)\}. \] Arrows whose head is $(i,j)$ and whose tail is in ${\sf A}_{ij}$ have the same arrow type, represented by a gray dashed arrow. In Figure~\ref{FIG: influence network} (right) these arrows connect all distinct nodes in the same row in an all-to-all manner.
\item (Column arrows) The {\it inter-agent, intra-option influence} neighbors of node $(i,j)$ are the $\Na - 1$ nodes in the subset \[ {\sf O}_{ij} = \{(1,j),\ldots, (\Na, j)\} \setminus \{(i,j)\}. \] Arrows whose head is $(i,j)$ and whose tail is in ${\sf O}_{ij}$ have the same arrow type, represented by a solid black arrow. In Figure~\ref{FIG: influence network} (right) these arrows connect all distinct nodes in the same column in an all-to-all manner. \item (Diagonal arrows) The {\it inter-agent, inter-option influence} neighbors of node $(i,j)$ are the $\Na\No - \Na -\No + 1$ nodes in the subset \[ {\sf E}_{ij} = \{(k,l): k\neq i, l\neq j \}. \] Arrows whose head is $(i,j)$ and whose tail is in ${\sf E}_{ij}$ have the same arrow type, represented by a black dashed arrow. In Figure~\ref{FIG: influence network} (right) these arrows connect all distinct nodes that do not lie in the same row or in the same column in an all-to-all manner. \end{itemize} We denote an $m \times n$ influence network by $\mathcal{N}_{mn}$. The network $\mathcal{N}_{mn}$ is {\em homogeneous} (all nodes have isomorphic sets of input arrows) and {\em all-to-all coupled} or {\em all-to-all connected} (any two distinct nodes appear as the head and tail, respectively, of some arrow). Other assumptions on arrow or node types are possible, to reflect different modeling assumptions, but are not discussed in this paper. \begin{figure} \centering \includegraphics[width=0.65\textwidth]{FIGURES/Network_new.pdf} \caption{An $m \times n$ influence network $\mathcal{N}_{mn}$ has three distinct arrow types: gray dashed row arrow, solid black column arrow, and black dashed diagonal arrow. (Left) Inputs to node $(i,j)$. (Right) Arrows inputting to node $(1,1)$ (arrowheads omitted).
There is a similar set of arrows inputting to every other node.} \label{FIG: influence network} \end{figure} \subsection{Symmetries of the influence network} \label{S:SIN} The influence network $\mathcal{N}_{mn}$ is symmetric: its {\em automorphism group}, the set of permutations of the set of nodes $\CC$ that preserve node and arrow types and incidence relations between nodes and arrows, is non-trivial. In particular, swapping any two rows or any two columns in Figure~\ref{FIG: influence network} (right) leaves the network structure unchanged. It is straightforward to prove that $\mathcal{N}_{mn}$ has symmetry group \[ \Gamma = \ES_\Na\times\ES_\No \] where $\ES_\Na$ swaps rows (agents) and $\ES_\No$ swaps columns (options). More precisely, $\sigma\in\ES_{\Na}$ and $\tau\in\ES_{\No}$ act on $\CC$ by \[ (\sigma,\tau) (i,j) = (\sigma(i), \tau(j)). \] The symmetries of the influence network can be interpreted as follows. In the decision making process, agents and options are {\it a priori} indistinguishable. In other words, all the agents count the same and have the same influence on the decision making process, and all the options are initially equally valuable. This is the type of decision-making situation we wish to model and whose dynamics we wish to understand. \subsection{Admissible maps} A point in the {\em state space} (or {\em phase space}) $\Q$ is an $\Na\times\No$ rectangular array of real numbers $z = (z_{ij})$. The action of $\Gamma$ on points $z = (z_{ij})\in\Q$ is defined by \[ (\sigma,\tau)z = ( z_{ \sigma^{-1}(i)\tau^{-1}(j)}). \] Let \[ \Gg:\Q\to\Q \] be a smooth map $\Gg =(G_{ij})$ on $\Q$, so that each component $G_{ij}$ is smooth and real-valued. We assume that the value $z_{ij}$ assigned by agent $i$ to option $j$ evolves according to the {\it value dynamics} \begin{equation}\label{EQ: generic value evol} \dot z_{ij} = G_{ij}(z). 
\end{equation} In other words, we model the evolution of the values assigned by the agents to the different options as a system of ordinary differential equations (ODE) on the state space $\Q$. The map $\Gg$ for the influence network is assumed to be {\it admissible}~\cite[Definition~4.1]{SGP03}. Roughly speaking, admissibility means that the functional dependence of the map $\Gg$ on the network variable $z$ respects the network structure. For instance, if two nodes $(k_1,l_1),(k_2,l_2)$ input a third node $(i,j)$ through the same arrow type, then $\Gg_{ij}(z)$ depends identically on $z_{k_1l_1}$ and $z_{k_2l_2}$, in the sense that the value of $\Gg_{ij}(z)$ does not change if $z_{k_1l_1}$ and $z_{k_2l_2}$ are swapped in the vector $z$. Our assumptions about the influence network select a family of admissible maps that can be analyzed from a network perspective to predict generic model-independent decision-making behaviors. Because the influence network has symmetry $\ES_m\times\ES_n$ and there are three arrow types, by~\cite[Remark~4.2]{SGP03} the components of the admissible map $\Gg$ in~\eqref{EQ: generic value evol} satisfy \begin{equation}\label{EQ: admissible vf} G_{ij}(z)=G(z_{ij}, \overline{z_{\sf{A_{ij}}}}, \overline{z_{\sf{O_{ij}}}}, \overline{z_{\sf{E_{ij}}}}) \end{equation} where the function $G$ is independent of $i,j$ and the notation $\overline{z_{\sf{A_{ij}}}}$, $\overline{z_{\sf{O_{ij}}}}$, and $\overline{z_{\sf{E_{ij}}}}$ means that $G$ is invariant under all permutations of the arguments appearing under each overbar. That is, each arrow type leads to identical interactions. It is well known~\cite{AS07,GS22} that the symmetry of $\mathcal{N}_{mn}$ implies that any admissible map $\Gg$ is $\Gamma$-equivariant, that is, \[ \gamma\Gg(z)=\Gg(\gamma z) \] for all $\gamma\in\Gamma$. It is straightforward to verify directly that equations \eqref{EQ: admissible vf} are $\Gamma$-equivariant.
Given $\gamma=(\sigma,\tau)\in\Gamma$, with $\sigma\in\ES_\Na$ and $\tau\in\ES_\No$, \[ \begin{array}{rcl} ((\sigma,\tau)\Gg)_{ij}(z)& = & \!G_{\sigma^{-1}(i)\, \tau^{-1}\!(j)}(z) \\ & & \\ & = & G(z_{\sigma^{-1}(i)\, \tau^{-1}(j)}, \overline{z_{\sf{A_{\sigma^{-1}(i)\,\tau^{-1}\!(j)}}}}, \overline{z_{\sf{O_{\sigma^{-1}(i)\,\tau^{-1}(j)}}}}, \overline{z_{\sf{E_{\sigma^{-1}(i)\,\tau^{-1}(j)}}}})\\ & & \\&= & G_{ij}((\sigma,\tau)z). \end{array} \] \subsection{Solutions arise as group orbits} \label{S:sol_orbits} An important consequence of group equivariance is that solutions of any $\Gamma$-equivariant ODE arise in group orbits. That is, if $z(t)$ is a solution and $\gamma \in \Gamma$ then $\gamma z(t)$ is also a solution. Moreover, the type of solution (steady, periodic, chaotic) is the same for both, and they have the same stabilities. Such solutions are often said to be {\em conjugate} under $\Gamma$. In an influence network, conjugacy has a simple interpretation. If some synchrony pattern can occur as an equilibrium, then so can all related patterns in which agents and options are permuted in any manner. Effectively, we can permute the numbers $1, \ldots, m$ of agents and $1, \ldots, n$ of options without affecting the possible dynamics. This is reasonable because the system's behavior ought not to depend on the choice of numbering. Moreover, if one of these states is stable, so are all the others. This phenomenon is a direct consequence of the initial assumption that all agents are identical and all options are identical. Which state among the set of conjugate ones occurs, in any specific case, depends on the initial conditions for the equation. In particular, whenever we assert the existence of a pattern as an equilibrium, we are also asserting the existence of all conjugate patterns as equilibria. 
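For readers who prefer a computational sanity check: an admissible map built from symmetric functions of each arrow type's inputs, as in \eqref{EQ: admissible vf}, is automatically $\Gamma$-equivariant. The Python sketch below uses an arbitrarily chosen nonlinear $G$ (the specific formula is ours, for illustration only; any function of the node value and the three neighbor sums would do) and verifies the equivariance identity numerically.

```python
import numpy as np

def admissible_G(Z):
    """One admissible map for N_{mn} (illustrative choice of G): each
    component depends on the node's own value and on symmetric sums over
    its three input-arrow types, so it is invariant under permutations of
    the arguments within each arrow type."""
    Z = np.asarray(Z, dtype=float)
    total = Z.sum()
    row = Z.sum(axis=1, keepdims=True)   # row sums, shape (m, 1)
    col = Z.sum(axis=0, keepdims=True)   # column sums, shape (1, n)
    sumA = row - Z                       # sum over row neighbors A_ij
    sumO = col - Z                       # sum over column neighbors O_ij
    sumE = total - row - col + Z         # sum over diagonal neighbors E_ij
    return -Z + np.tanh(sumA) + 0.5 * sumO - 0.25 * sumE

def act(sigma, tau, Z):
    """Relabel agents by sigma and options by tau (S_m x S_n action)."""
    return np.asarray(Z)[np.ix_(sigma, tau)]
```

For any state `Z`, row permutation `sigma`, and column permutation `tau`, `admissible_G(act(sigma, tau, Z))` agrees with `act(sigma, tau, admissible_G(Z))`, which is the equivariance condition $\gamma\Gg(z)=\Gg(\gamma z)$ in coordinates.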
\subsection{Equivariance versus admissibility} If a network has symmetry group $\Gamma$ then every admissible map is $\Gamma$-equivariant, but the converse is generally false. In other words, admissible maps for the influence network are more constrained than $\Gamma$-equivariant maps. As a consequence, bifurcation phenomena that are not generic in equivariant maps (because the equivariant class is not sufficiently constrained) become generic in admissible maps (because the extra constraints provide extra structure).\footnote{As an elementary analogue of this phenomenon, consider the class of real-valued maps $f(x,\lambda)$ and the class of odd-symmetric real-valued maps $g(x,\lambda)=-g(-x,\lambda)$. The first class clearly contains the second, which is more constrained (by symmetry). Whereas the pitchfork bifurcation is non-generic in the class of real-valued maps, it becomes generic in the odd-symmetric class.} In practice, this means that equivariant bifurcation theory applied to influence networks might miss some of the bifurcating solutions, namely those that are generic in admissible maps but not in equivariant maps. Section~\ref{S:EP} illustrates one way in which this can happen. Other differences between bifurcations in equivariant and admissible maps are discussed in detail in~\cite[Golubitsky and Stewart]{GS22}. For influence networks, we have the following results: \begin{theorem} \label{T:equiv_not_admiss} The equivariant maps for $\ES_m\times\ES_n$ are the same as the admissible maps for ${\mathcal N}_{mn}$ if and only if $(m,n) = (m,1), (1,n), (2,2), (2,3), (3,2)$. \end{theorem} \begin{proof} See~\cite[Theorem 3]{S21}. \end{proof} In contrast, the linear case is much better behaved: \begin{theorem} \label{T:ln_equiv_admiss} The linear equivariant maps for $\ES_m\times\ES_n$ are the same as the linear admissible maps for ${\mathcal N}_{mn}$ for all $m,n$. \end{theorem} \begin{proof} See~\eqref{distinct_irreps} and Remark~\ref{r:admiss=equi}.
\end{proof} \subsection{Irreducible representations of $\Gamma$ acting on $\Q$} \label{SSEC: irreps} In equivariant bifurcation theory, a key role is played by the irreducible representations of the symmetry group $\Gamma$ acting on $\Q$. Such a representation is {\em absolutely irreducible} if all commuting linear maps are scalar multiples of the identity. See~\cite[Chapter XII Sections 2--3]{GSS88}. The state space $\Q = \R^{\Na\No}$ admits a direct sum decomposition into four $\Gamma$-invariant subspaces with a clear decision-making interpretation for each. These subspaces of $\Q$ are \begin{equation} \label{distinct_irreps} \begin{array}{ccl} \Wd & = & \mbox{all entries equal} \quad (\mbox{{\it fully synchronous subspace}}) \\ \Vc & = & \mbox{all rows identical with sum $0$} \quad (\mbox{{\it consensus subspace}}) \\ \Wdd & = & \mbox{all columns identical with sum $0$} \quad (\mbox{{\it deadlock subspace}}) \\ \Vd &=& \mbox{all rows and all columns have sum 0} \quad (\mbox{{\it dissensus subspace}})\\ \end{array} \end{equation} whose dimensions are, respectively, $1$, $\No-1$, $\Na-1$, and $(\Na-1)(\No-1)$. Each of the subspaces $\Wd$, $\Vc$, $\Wdd$, $\Vd$ is an absolutely irreducible representation of $\Gamma$ acting on $\Q$. The kernels of these representations are $\Gamma, \ES_\Na \times {\bf 1}, {\bf 1} \times\ES_\No, {\bf 1}$, respectively. Since the kernels are unequal the representations are non-isomorphic. A dimension count shows that \begin{equation} \label{Irrep_decomp} \R^{\Na\No} = \Wd \oplus \Vc \oplus \Wdd \oplus \Vd. \end{equation} Two related group-theoretic decompositions are also used later. See Theorems~\ref{T:consensus_bifur} and \ref{T:divided_deadlock_bifur}.
\begin{equation} \label{fix_decomp} \begin{array}{rcl} \Fix(\ES_\Na \times {\bf 1}) & = & \Vc\oplus\Wd \\ \Fix( {\bf 1} \times \ES_\No ) & = & \Wdd\oplus\Wd, \end{array} \end{equation} where the {\em fixed-point subspace} $\Fix(H)$ of a subgroup $H \subseteq \Gamma$ is the set of all $z \in \Q$ such that $\alpha z = z$ for all $\alpha \in H$. \subsection{Value-assignment interpretation of irreducible representations} $ $ The subspaces in decomposition~\eqref{Irrep_decomp} admit the following interpretations: \begin{itemize} \item[$\Wd$] is the {\it fully synchronous} subspace, where all agents assign the same value to every option. This is the subspace of {\em undecided} states that we will assume lose stability in synchrony-breaking bifurcations. \item[$\Vc$] is the {\it consensus} subspace, where all agents express the same preferences about the options. This subspace is the critical subspace associated to synchrony-breaking bifurcations leading to consensus value patterns described in \S\ref{case1}. \item[$\Wdd$] is the {\it deadlock} subspace, where each agent is deadlocked and agents are divided about the option values. This subspace is the critical subspace associated to synchrony-breaking bifurcations leading to deadlock value patterns described in \S\ref{case2}. \item[$\Vd$] is the {\it dissensus} subspace, where agents express different preferences about the options. This subspace is the critical subspace associated to synchrony-breaking bifurcations leading to dissensus value patterns described in \S\ref{case3a}, \S\ref{case3b}, \S\ref{case3c}. 
\end{itemize} \subsection{Value assignment decompositions} \label{SSEC: preliminary decom} We also define two additional important value-assignment subspaces: \begin{equation} \label{preference_deadlock} \begin{array}{cclcl} \Vp & = & \Vc \oplus \Vd \quad (\mbox{\em preference subspace}) \\ \Vdl & = & \Wd \oplus \Wdd \quad (\mbox{\em indecision subspace}) \end{array} \end{equation} By~\eqref{Irrep_decomp} and \eqref{preference_deadlock}: \[ \R^{\Na\No} = \Vp \oplus \Vdl \] and each $V_*$ is $\Gamma$-invariant. Each introduced subspace and its decomposition admits a value-assignment interpretation. By \eqref{distinct_irreps} and \eqref{preference_deadlock}, points in $\Vp$ have row sums equal to $0$ and, generically, the entries in a row are not all equal for at least one agent. Hence, generically, for at least one agent some row entries attain the maximum value over that row and the corresponding options are favored by the given agent. That is, the subspace $\Vp$ consists of points where at least some agents express preferences among the various options. In contrast, points in $\Vdl$ have all entries in each row equal, so all agents are deadlocked and the group is in a state of indecision. This decomposition distinguishes rows and columns, reflecting the asymmetry between agents and options from the point of view of value assignment. \section{Parametrized value dynamics} \label{S:PVD} To understand how a network of indistinguishable agents values and forms preferences among a set of {\it a priori} equally valuable options, we model the transition between different value states of the influence network --- for example from fully synchronous to consensus --- as a bifurcation.
To do so, we introduce a {\it bifurcation parameter} $\lambda\in\R$ into model~\eqref{EQ: generic value evol},\eqref{EQ: admissible vf}, which leads to the parametrized dynamics \begin{subequations}\label{EQ: general param decision dynamics} \begin{align} \dot\Zz&=\Gg(\Zz,\lambda)\\ G_{ij}(\Zz,\lambda)&=G(z_{ij}, \overline{z_{\sf{A_{ij}}}}, \overline{z_{\sf{O_{ij}}}}, \overline{z_{\sf{E_{ij}}}},\lambda)\,. \end{align} \end{subequations} We assume that the parametrized ODE~\eqref{EQ: general param decision dynamics} is also admissible and hence $\Gamma$-equivari\-ant. That is, \[ \gamma\Gg(\Zz,\lambda)=\Gg(\gamma\Zz,\lambda)\,. \] Depending on context, the bifurcation parameter $\lambda$ can model a variety of environmental or endogenous parameters affecting the valuation process. In honeybee nest site selection, $\lambda$ is related to the rate of stop-signaling, a cross-inhibitory signal that enables value-sensitive decision making~\cite[Seeley {\it et al.}]{SKSHFM12}, \cite[Pais {\it et al.}]{PHSFLM13}. In animal groups on the move, $\lambda$ is related to the geometric separation between the navigational options~\cite[Biro {\it et al.}]{BSMG06}, \cite[Couzin {\it et al.}]{CIDGTHCLL11}, \cite[Leonard {\it et al.}]{LSNSCL12}, \cite[Nabet {\it et al.}]{NLCL09}. In phenotypic differentiation, $\lambda$ is related to the concentration of quorum-sensing autoinducer molecules~\cite[Miller and Bassler]{MB01}, \cite[Waters and Bassler]{WB05}. In speciation models, $\lambda$ can be an environmental parameter such as food availability, \cite[Cohen and Stewart]{CS00}, \cite[Stewart {\it et al.}]{SEC03}. In robot swarms, $\lambda$ is related to the communication strength between the robots \cite[Franci {\it et al.}]{FBPL21}. In human groups, $\lambda$ is related to the attention paid to others~\cite[Bizyaeva {\it et al.}]{BFL22}. 
\section{The undecided solution and its linearization} \label{S:Deadlock_linearization} It is well known that equivariant maps leave fixed-point subspaces invariant \cite[Golubitsky {\it et al.}]{GSS88}. Since admissible maps $\Gg$ are $\Gamma$-equivariant, it follows that the $1$-dimensional undecided (fully synchronous) subspace $\Wd=\Fix(\Gamma)$ is flow-invariant; that is, invariant under the flow of {\em any} admissible ODE. Let $v\in \Wd$ be nonzero. Then for $x\in\R$ we can write \[ \Gg(xv,\lambda) = g(x,\lambda)v \] where $g:\R\times\R\to\R$. Undecided solutions are found by solving $g(x,\lambda) = 0$. Suppose that $g(x_0,\lambda_0) = 0$. We analyze synchrony-breaking bifurcations from a path of solutions to $g(x,\lambda) = 0$ bifurcating at $(x_0,\lambda_0)$. \begin{remark} \label{r:admiss=equi} In general, any linear admissible map is linear equivariant. In influence networks $\mathcal{N}_{mn}$, all linear equivariant maps are admissible. More precisely, the space of linear admissible maps is $4$-dimensional (there are three arrow types and one node type). The space of linear equivariants is also $4$-dimensional (there are four distinct absolutely irreducible representations \eqref{Irrep_decomp}). Therefore linear equivariant maps are the same as linear admissible maps. This justifies applying equivariant methods in the network context to find the generic critical eigenspaces. \end{remark} We now consider syn\-chro\-ny-breaking bifurcations. Since the four irreducible subspaces in \eqref{Irrep_decomp} are non-isomorphic and absolutely irreducible, Remark~\ref{r:admiss=equi} implies that the Jacobian $J$ of \eqref{EQ: general param decision dynamics} at a fully synchronous equilibrium $(x_0v,\lambda_0)$ has up to four distinct real eigenvalues. Moreover, it is possible to find admissible ODEs so that any of these eigenvalues is $0$ while the others are negative. 
In short, there are four types of steady-state bifurcation from a fully synchronous equilibrium and each can be the first bifurcation. When the critical eigenvector is in $\Wd$, the generic bifurcation is a saddle-node of fully synchronous states. The other three bifurcations correspond to the other three irreducible representations in \eqref{Irrep_decomp} and are synchrony-breaking. \subsection{The four eigenvalues and their value-assignment interpretation} \label{SS:3symm_breaking} A simple calculation reveals that \begin{equation} \begin{array}{ccl} J|_{\Vd} & = & \cd I_{(\Na-1)(\No-1)}\\ J|_{\Vc} & = & \cc I_{\No-1}\\ J|_{\Wdd} & = & \cdd I_{\Na-1}\\ J|_{\Wd} & = & \cdl \\ \end{array} \end{equation} where \begin{equation}\label{EQ: deadlocked jac eigvals} \begin{array}{ccl} \cd & = & \alpha-\beta-\gamma+\delta\\ \cc & = & \alpha-\beta+(\Na-1)(\gamma-\delta)\\ \cdd & = & \alpha-\gamma+(\No-1)(\beta-\delta)\\ \cdl & = & \alpha+(\No-1)\beta+(\Na-1)\gamma+(\Na-1)(\No-1)\delta \end{array} \end{equation} and \[ \alpha=\frac{\partial G_{ij}}{\partial z_{ij}},\quad \beta=\frac{\partial G_{ij}}{\partial z_{il}},\quad \gamma=\frac{\partial G_{ij}}{\partial z_{kj}},\quad \delta=\frac{\partial G_{ij}}{\partial z_{kl}}. \] Here $1\leq i,k \leq\Na$ (with $k\neq i$); $1 \leq j,l \leq \No$ (with $l\neq j$); and all partial derivatives are evaluated at the fully synchronous equilibrium $(x_0v,\lambda_0)$. The parameters $\beta,\gamma,\delta$ are the linearized weights of row, column, and diagonal arrows, respectively, and $\alpha$ is the linearized internal dynamics. We can write \eqref{EQ: deadlocked jac eigvals} in matrix form as \[ \Matrix{ \cd \\ \cc \\ \cdd \\ \cdl} = \Matrix{ 1 & -1 & -1 & 1 \\ 1 & -1 & \Na - 1 & 1 - \Na \\ 1 & \No - 1 & - 1 & 1 - \No \\ 1 & \No - 1 & \Na - 1 & (\Na - 1)( \No - 1)} \Matrix{ \alpha \\ \beta \\ \gamma \\ \delta} \equiv L\Matrix{ \alpha \\ \beta \\ \gamma \\ \delta} \] Since $\det(L) = -\Na^2\No^2 \neq 0$ the matrix $L$ is invertible. 
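As a numerical sanity check, the scalar action of $J$ on each irreducible subspace, the multiplicities, and the determinant formula for $L$ can all be confirmed by assembling the Jacobian explicitly from $\alpha,\beta,\gamma,\delta$. A sketch in Python with NumPy (the values of $\alpha,\beta,\gamma,\delta$ and of $m,n$ are arbitrary illustrative choices):

```python
import numpy as np

m, n = 3, 4                         # agents (Na), options (No)
a, b, g, d = 0.3, -0.7, 0.5, 0.2    # alpha, beta, gamma, delta (arbitrary)

Im, In = np.eye(m), np.eye(n)
Jm, Jn = np.ones((m, m)), np.ones((n, n))

# Jacobian at a fully synchronous point: internal term plus row, column,
# and diagonal couplings, acting on R^{mn} with index order (i, j)
J = (a * np.kron(Im, In) + b * np.kron(Im, Jn - In)
     + g * np.kron(Jm - Im, In) + d * np.kron(Jm - Im, Jn - In))

cd  = a - b - g + d                             # dissensus eigenvalue
cc  = a - b + (m - 1) * (g - d)                 # consensus eigenvalue
cdd = a - g + (n - 1) * (b - d)                 # deadlock eigenvalue
cdl = a + (n - 1) * b + (m - 1) * g + (m - 1) * (n - 1) * d  # synchronous

# Multiplicities: dim Vd = (m-1)(n-1), dim Vc = n-1, dim Wdd = m-1, dim Wd = 1
expected = sorted([cd] * ((m - 1) * (n - 1)) + [cc] * (n - 1)
                  + [cdd] * (m - 1) + [cdl])
assert np.allclose(np.sort(np.linalg.eigvals(J).real), expected)

L = np.array([[1, -1, -1, 1],
              [1, -1, m - 1, 1 - m],
              [1, n - 1, -1, 1 - n],
              [1, n - 1, m - 1, (m - 1) * (n - 1)]], float)
assert np.isclose(np.linalg.det(L), -m**2 * n**2)
```

The same check passes for any $m,n\geq 2$ and any choice of the four partial derivatives.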
Therefore any of the three synchrony-breaking bifurcations can occur as the first bifurcation by making an appropriate choice of the partial derivatives $\alpha,\beta,\gamma,\delta$ of $\Gg$. For example, a synchrony-breaking dissensus bifurcation ($\cd = 0$) can occur from a stable fully synchronous (undecided) equilibrium ($\cc,\cdd,\cdl < 0$) if we choose $\alpha,\beta,\gamma,\delta$ by \[ \Matrix{ \alpha \\ \beta \\ \gamma \\ \delta} = L^{-1} \Matrixr{ 0 \\ -1 \\ -1 \\ -1} \] \section{$\ES_n$ consensus and $\ES_m$ deadlock bifurcations} \label{S:DLB} We summarized synchrony-breaking consensus and deadlock bifurcations in Sections~\ref{case1} and~\ref{case2}. We now revisit these bifurcations in more mathematical detail. Both types of bifurcation reduce mathematically to equivariant bifurcation for the natural permutation action of $\ES_m$ or $\ES_n$. This is not immediately clear because of potential network constraints, but in these cases it is easy to prove~\cite{AS07} that the admissible maps (nonlinear as well as linear) are the same as the equivariant maps. In Section~\ref{S:diss_bif} we see that the same comment does not apply to dissensus bifurcations. The Equivariant Branching Lemma~\cite[XIII, Theorem~3.3]{GSS88} states that generically, for every axial subgroup $\Sigma$, there exists a branch of steady-state solutions with symmetry $\Sigma$. The branching occurs from a trivial (group invariant) equilibrium. More precisely, let $\Gamma$ be a group acting on $\R^n$ and let \begin{equation} \label{bifurcation_family} \dot{x} = f(x,\lambda) \end{equation} where $f$ is $\Gamma$-equivariant, $f(x_0,\lambda_0) = 0$ (so $x_0$ is a trivial solution; that is, $\Gamma$ fixes $x_0$), and $x_0$ is a point of steady-state bifurcation. That is, if $J$ is the Jacobian of $f$ at $x_0$, then $K=\ker(J)$ is nonzero. Generically, $K$ is an absolutely irreducible representation of $\Gamma$.
A subgroup $\Sigma\subset\Gamma$ is {\em axial relative to} a subspace $K\subset\Q$ if $\Sigma$ is an isotropy subgroup of $\Gamma$ acting on $K$ such that \[ \dim\Fix_K(\Sigma) = 1, \] where the fixed-point subspace $\Fix_K(H)$ of a subgroup $H\subset\Gamma$ {\em relative to} $K$ is the set of all $z\in K$ such that $\alpha z=z$ for all $\alpha\in H$. Suppose that a $\Gamma$-equivariant map has a $\Gamma$-invariant equilibrium at $x_0$ and that the kernel of the Jacobian at $x_0$ is an absolutely irreducible subspace $V$. Then generically, for each axial subgroup $\Sigma$ of $\Gamma$ acting on $V$, a branch of equilibria with symmetry $\Sigma$ bifurcates from $x_0$. Therefore all conjugate branches also occur, as discussed in Section~\ref{S:sol_orbits}. In principle there could be other branches of equilibria~\cite[Field and Richardson]{FR89} and other interesting dynamics~\cite[Guckenheimer and Holmes]{GH88}. For example, \cite[Dias and Stewart]{DS03} consider secondary bifurcations to $\ONE \times(\ES_p\times\ES_q\times\ES_{n-p-q})$, which are not axial. We focus on the equilibria given by the Equivariant Branching Lemma because these are known to exist generically. \subsection{Balanced colorings, quotient networks, and ODE-equivalence} The ideas of `balanced' coloring and `quotient network' were introduced in \cite[Golubitsky {\it et al.}]{GST05}, \cite[Stewart {\it et al.}]{SGP03}. See also \cite[Golubitsky and Stewart]{GS06,GS22}. Let $z$ be a network node and let $I(z)$ be the {\em input} set of $z$ consisting of all arrows whose head node is $z$. Suppose that the nodes are colored. The coloring is {\em balanced} if any two nodes $z_1, z_2$ with the same color have {\em color isomorphic} input sets. That is, there is a bijection $\sigma:I(z_1)\to I(z_2)$ such that the tail nodes of $a\in I(z_1)$ and $\sigma(a)\in I(z_2)$ have the same color. 
It is shown in the above references that when the coloring is balanced the subspace $Q$, where two node values are equal whenever the nodes have the same color, is a flow-invariant subspace for every admissible vector field. The subspaces $Q$ are the network analogs of fixed-point subspaces of equivariant vector fields. Finally, it is also shown that $Q$ is the phase space for a network whose nodes are the colors in the balanced coloring. This network is the {\em quotient network} (Figure~\ref{FIG: influence network_N23}, right). Through identification of same-color nodes in Figure~\ref{FIG: influence network_N23}, center, the quotient network exhibits self-coupling and two different arrows (dashed gray and dashed black) between pairs of nodes in such a way that the number of arrows between colors is preserved. For example, each red node of the original network receives a solid arrow from a red node, a dashed gray arrow and a dashed black arrow from a white node, and a dashed gray arrow and a dashed black arrow from a blue node, both in Figure~\ref{FIG: influence network_N23}, center, and in Figure~\ref{FIG: influence network_N23}, right. Similarly, for the other colors. Two networks with the same number of nodes are {\em ODE-equivalent} if they have the same spaces of admissible vector fields. \cite[Dias and Stewart]{DS05} show that two networks are ODE-equivalent if and only if they have the same spaces of linear admissible maps. It is straightforward to check that the networks in Figures~\ref{FIG: influence network_N23} (right) and \ref{FIG: influence network_N23_quot} have the same spaces of linear admissible maps, hence are ODE-equivalent. Therefore the bifurcation types from fully synchronous equilibria are identical in these networks. These two ODE-equivalent networks, based on the influence network $\mathcal N_{23}$, can help to illustrate the bifurcation result in Theorem~\ref{T:consensus_bifur}. 
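The balance condition can be checked mechanically. A sketch (Python; the coloring below assigns all agents of a given option the same color, three colors in total, as in the middle panel of Figure~\ref{FIG: influence network_N23}) verifies that same-colored nodes of $\mathcal{N}_{23}$ have color-isomorphic input sets, one multiset of tail colors per arrow type:

```python
from collections import Counter

m, n = 2, 3
color = {(i, j): j for i in range(m) for j in range(n)}  # color = option

def input_colors(i, j):
    # Multisets of tail colors for node (i, j), one per arrow type:
    # row arrows (same agent), column arrows (same option), diagonal arrows.
    row = Counter(color[i, l] for l in range(n) if l != j)
    col = Counter(color[k, j] for k in range(m) if k != i)
    diag = Counter(color[k, l] for k in range(m) for l in range(n)
                   if k != i and l != j)
    return row, col, diag

# Balanced: nodes of the same color have color-isomorphic input sets
by_color = {}
for node, c in color.items():
    by_color.setdefault(c, []).append(input_colors(*node))
assert all(all(sig == sigs[0] for sig in sigs) for sigs in by_color.values())
```

An unbalanced coloring (for instance, coloring a single node differently from all others of its option) fails the final assertion.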
\begin{figure} \centerline{ \includegraphics[width=0.22\textwidth]{FIGURES/N23a.pdf}\qquad \includegraphics[width=0.22\textwidth]{FIGURES/N23a_bal.pdf}\qquad \includegraphics[width=0.22\textwidth]{FIGURES/N23a_quot.pdf}} \caption{(Left) The influence network $\mathcal{N}_{23}$ has three distinct arrow types: gray dashed row arrows, solid black column arrows, and black dashed diagonal arrows. (Middle) Balanced three-coloring given by $\Fix(\ES_2\times 1)$. (Right) Three-node quotient network $G_{\rm c}$ of (middle). } \label{FIG: influence network_N23} \end{figure} \begin{figure} \centerline{ \includegraphics[width=0.22\textwidth]{FIGURES/N23a_quot_ODE.pdf}} \caption{Three-node quotient network with $\ES_3$-symmetry that is ODE-equivalent to the network $G_{\rm c}$ illustrated in Figure~\ref{FIG: influence network_N23} (right).} \label{FIG: influence network_N23_quot} \end{figure} \subsection{Consensus bifurcation ($\cc = 0$)} \label{SS:consensus} Branches of equilibria stemming from consensus bifurcation can be proved to exist using the Equivariant Branching Lemma applied to a suitable quotient network. The branches can be summarized as follows: \begin{theorem} \label{T:consensus_bifur} Generically, there is a branch of equilibria corresponding to the axial subgroup ${\bf 1} \times (\ES_k\times \ES_{\No - k})$ for all $1 \leq k\leq \No-1$. These solution branches are tangent to $\Vc$, lie in the subspace $\Fix(\ES_{\Na}\times\ONE)=\Vc\oplus\Wd$, and are consensus solutions. \end{theorem} \begin{proof} We ask: what are the steady-state branches that bifurcate from the fully synchronous state when $\cc=0$? Using network theory we show in four steps that the answer reduces to $\ES_{\No}\cong {\bf 1} \times \ES_\No$ equivariant theory. Figures~\ref{FIG: influence network_N23} and \ref{FIG: influence network_N23_quot} illustrate the method when $(m,n) = (2,3)$.
\begin{enumerate} \item Let $G_{\rm c}$ be the quotient network determined by the balanced coloring where all agents for a given option have the same color and different options are assigned different colors. The quotient $G_{\rm c}$ is a homogeneous all-to-all coupled $\No$-node network with three different arrow types and multiarrows between some nodes. The $\Vc$ bifurcation occurs in this quotient network. \item The network $G_{\rm c}$ is ODE-equivalent to the standard all-to-all coupled $n$-node simplex network $\GG_n$ with no multi-arrows (Figure~\ref{FIG: influence network_N23_quot}). Hence the steady-state bifurcation theories for the networks $G_{\rm c}$ and $\GG_n$ are identical. \item The admissible maps for the standard $n$-node simplex network $\GG_n$ are identical to the $\ES_n$-equivariant maps acting on $\R^{n-1}$. Using the Equivariant Branching Lemma it is known that, generically, branches of equilibria bifurcate from the trivial (synchronous) equilibrium with isotropy subgroup $\ES_k\times \ES_{n - k}$, where $1 \leq k \leq \No-1$; see \cite[Cohen and Stewart]{CS00}. Consequently, generically, consensus synchrony-breaking bifurcations lead to steady-state branches of solutions with ${\bf 1}\times (\ES_k\times \ES_{\No - k})$ symmetry. \item The bifurcating branches of equilibria are tangent to the critical eigenspace $\Vc$. Additionally, \eqref{fix_decomp} implies that the admissible map leaves the subspace $\Fix(\ES_\Na \times {\bf 1}) = \Vc \oplus\Wd$ invariant. Hence, the solution branches lie in the subspace $\Vc\oplus\Wd$ and consist of arrays with all rows identical. See \eqref{distinct_irreps}. \end{enumerate} \end{proof} \begin{remark} \rm As a function of $\lambda$ each bifurcating consensus branch is tangent to $\Vc$. Hence, consensus bifurcation branches are ZRS and each agent values $k$ options higher and the remaining options lower in such a way that the average value remains constant to linear order in $\lambda$ along the branch.
Since the bifurcating solution branch is in $\Vc\oplus\Wd$, the bifurcating states have all rows equal; that is, all agents value the options in the same way. In particular, all agents favor the same $k$ options, disfavor the remaining $n-k$ options, and the agents are in a consensus state. Symmetry therefore predicts that in breaking indecision toward consensus, the influence network transitions from an undecided one option-cluster state to a two option-cluster consensus state made of favored and disfavored options. Intuitively, this happens because the symmetry of the fully synchronous undecided state is lost gradually, in the sense that the bifurcating solutions still have a large (axial) symmetry group. Secondary bifurcations can subsequently lead to states with smaller and smaller symmetry in which fewer and fewer options are favored. \end{remark} \subsection{Deadlock bifurcation ($\cdd = 0$)} \label{SS:deadlock} The existence of branches of equilibria stemming from deadlock bifurcation can be summarized as follows: \begin{theorem} \label{T:divided_deadlock_bifur} Generically, there is a branch of equilibria corresponding to the axial subgroup $(\ES_k\times \ES_{\Na - k}) \times {\bf 1}$ for $1 \leq k\leq \Na-1$. These solution branches are tangent to $\Wdd$, lie in the subspace $\Fix(\ONE\times\ES_n)=\Wdd\oplus\Wd$, and are deadlock solutions. \end{theorem} The proof is analogous to that of consensus bifurcation in Theorem~\ref{T:consensus_bifur}. \begin{remark} \rm As a function of $\lambda$ each bifurcating solution branch is tangent to $\Wdd$. Hence, the column sums vanish to first order in $\lambda$. It follows that generically at first order in $\lambda$ each column has some unequal entries and therefore agents assign two different values to any given option. This means that the agents are divided about the option values.
Since the bifurcating solution branch is in $\Wdd\oplus\Wd$, the bifurcating states have all columns equal; that is, each agent assigns the same value to all the options, so each agent is deadlocked. In particular, $k$ agents favor all options and $m-k$ agents disfavor all options. Symmetry therefore predicts that in breaking indecision toward deadlock, the influence network transitions from an undecided one agent-cluster state with full symmetry to a two agent-cluster deadlock state with axial symmetry, made of agents that favor all options and agents that disfavor all options. Secondary bifurcations can subsequently lead to states with smaller and smaller symmetry in which fewer and fewer agents favor all options. \end{remark} \subsection[Stability of consensus and deadlock bifurcation branches]{Stability of consensus and deadlock bifurcation branches} \label{S:SCB} In consensus bifurcation, all rows of the influence matrix $(z_{ij})$ are identical and have sum zero. In deadlock bifurcation, all columns of the influence matrix $(z_{ij})$ are identical and have sum zero. As shown in Sections~\ref{SS:consensus} and \ref{SS:deadlock}, these problems abstractly reduce to $\ES_N$-equivariant bifurcation on the nontrivial irreducible subspace \[ \{x \in \R^N : x_1 + \cdots + x_N = 0 \} \] where $N = m$ for deadlock bifurcation and $N = n$ for consensus bifurcation. This bifurcation problem has been studied extensively as a model for sympatric speciation in evolutionary biology~\cite[Cohen and Stewart]{CS00}, \cite[Stewart {\it et al.}]{SEC03}. Indeed, this model can be viewed as a decision problem in which the agents are coarse-grained tokens for organisms, initially of the same species (such as birds), which assign preference values to some phenotypic variable (such as beak length) in response to an environmental parameter $\lambda$ such as food availability. It is therefore a special case of the models considered here, with $m$ agents and one option.
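The axial subgroups of this reduced $\ES_N$ problem are the subgroups $\ES_k\times\ES_{N-k}$, and their defining property, a one-dimensional fixed-point subspace inside the sum-zero space, can be confirmed computationally. A sketch (Python with NumPy; the helper \texttt{fix\_dim} is hypothetical and only illustrates the dimension count):

```python
import numpy as np

def fix_dim(N, k):
    # dim Fix_V(S_k x S_{N-k}) for the permutation action of S_N
    # restricted to the sum-zero subspace V of R^N.
    # Generators: adjacent transpositions inside each of the two blocks.
    gens = [(i, i + 1) for i in range(k - 1)] \
         + [(i, i + 1) for i in range(k, N - 1)]
    rows = [np.ones((1, N))]           # restrict to the sum-zero space
    for (i, j) in gens:
        P = np.eye(N)
        P[[i, j]] = P[[j, i]]          # transposition as a matrix
        rows.append(P - np.eye(N))     # fixed vectors satisfy (P - I)x = 0
    A = np.vstack(rows)
    return N - np.linalg.matrix_rank(A)

# S_k x S_{N-k} is axial on the sum-zero space for every 1 <= k <= N-1
assert all(fix_dim(6, k) == 1 for k in range(1, 6))
```

The fixed vectors are constant on each block with the two constants balancing to total sum zero, which is the familiar $k$-versus-$N-k$ cluster pattern.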
The primary branches of bifurcating states correspond to the axial subgroups, which are (conjugates of) $\ES_p \times \ES_q$ where $p+q = N$. Ihrig's Theorem \cite[Theorem 4.2]{IG84} or \cite[Chapter XIII Theorem 4.4]{GSS88} shows that in many cases transcritical branches of solutions that are obtained using the Equivariant Branching Lemma are unstable at bifurcation. See Figure~\ref{fig:Ihrig}. Indeed, this is the case for the axial branches of the $\ES_N$ bifurcations, $N>2$, and at first sight this theorem would seem to undermine their relevance. However, simulations show that equilibria with these synchrony patterns can exist and be stable. They arise by jump bifurcation, and the axial branch to which they jump has regained stability by a combination of two mechanisms: (a) the branch `turns round' at a saddle-node; (b) the stability changes when the axial branch meets a secondary branch. Secondary branches correspond to isotropy subgroups of the form $\ES_a \times \ES_b \times \ES_c$ where $a+b+c = N$. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{./FIGURES/Ihrig} \caption{Sketch illustrating solution stability near, but not at, bifurcation. Such bifurcations lead to jump transitions rather than smooth transitions. The vertical $z$ coordinate is multidimensional.} \label{fig:Ihrig} \end{figure} The fact that these axial solutions can both exist and be stable in model equations is shown in~\cite[Elmhirst]{E02} and \cite[Stewart {\it et al.}]{SEC03}. Simulations of consensus and deadlock bifurcations are discussed in Section~\ref{S:simulation} and simulations of dissensus bifurcations are discussed in Section~\ref{S:simulation_dissensus}. The main prediction is that despite Ihrig's Theorem, axial states can and do occur stably. However, they do so through a jump bifurcation to an axial branch that has regained stability, not by local movement along an axial branch.
This stability issue appears again in dissensus bifurcation and simulations in Section~\ref{SS:stability_dissensus} show that axial solutions can be stable even though they are unstable at bifurcation. \section{Axial Balanced Colorings for Homogeneous Networks} \label{S:axial} The analysis of branches of bifurcating solutions in the dissensus subspace $\Vd$ requires a natural network analog of the Equivariant Branching Lemma. This new version applies to exotic colorings (not determined by a subgroup $\Sigma$) as well as orbit colorings (determined by a subgroup $\Sigma$). See Section~\ref{S:SBSB}. We deal with the generalization here, and apply it to $\Vd$ in Section~\ref{S:diss_bif}. For simplicity, assume that each node space of $\GG$ is $1$-dimensional, and that $f$ in \eqref{bifurcation_family} is a $1$-parameter family of admissible maps. Let $\Delta$ be the diagonal space of fully synchronous states. By admissibility, $f:\Delta\times\R\to\Delta$. Hence we can assume generically that $f$ has a trivial steady-state bifurcation point at $x_0\in\Delta$. Given a coloring $\bowtie$, its {\em synchrony subspace} $\Delta_{\bowtie}$ is the subspace where all nodes with the same color are synchronous. \begin{definition} \em \label{D:axial_netwk_reg} Let $K$ be the kernel (critical eigenspace for steady-state bifurcation) of the Jacobian $J = \DD f|_{x_0,\lambda_0}$. Then a balanced coloring\index{balanced!coloring} $\bowtie$ with synchrony subspace $\Delta_{\bowtie}$ is {\em axial relative to $K$} if \begin{equation} \label{D:axial_coloring} \begin{array}{rcl} K \cap \Delta & = & \{0\} \\ \dim(K \cap \Delta_{\bowtie}) & = & 1 \end{array} \end{equation} \end{definition} \subsection{The synchrony-breaking branching lemma} We now state and prove the key bifurcation theorem for this paper. The proof uses the method of Liapunov--Schmidt reduction and various standard properties \cite[Chapter VII Section 3]{GSS88}. 
\begin{theorem} \label{T:simp_eigen_reg} With the above assumptions and notation, let $\bowtie$ be an axial balanced coloring. Then generically a unique branch of equilibria with synchrony pattern\index{synchrony!pattern} $\bowtie$ bifurcates from $x_0$ at $\lambda_0$. \end{theorem} \begin{proof} The first condition in \eqref{D:axial_coloring} implies that the restriction $f:\Delta\times\R\to\Delta$ is nonsingular at $(x_0,\lambda_0)$. Therefore by the Implicit Function Theorem there is a branch of trivial equilibria $X(\lambda)$ in $\Delta$ for $\lambda$ near $\lambda_0$, so $f(X(\lambda),\lambda) \equiv 0$ where $x_0 = X(\lambda_0)$. We can translate the bifurcation point to the origin so that without loss of generality we may assume $f(0,\lambda) = 0$ for all $\lambda$ near $0$. The second condition in \eqref{D:axial_coloring} implies that $0$ is a simple eigenvalue of $J' = J|_{\Delta_{\bowtie}}$. Therefore Liapunov--Schmidt reduction of the restriction $f:\Delta_{\bowtie}\times\R\to\Delta_{\bowtie}$ leads to a reduced map $g:\R\{v\}\times\R \to \R\{v^*\}$ where $\Delta_{\bowtie} \cap K = \R\{v\}$ and $v^*$ is the left eigenvector of $J'$ for the eigenvalue $0$. The zeros of $g$ near the origin are in 1:1 correspondence with the zeros of $f|_{\Delta_{\bowtie}\times\R}$ near $x_0 = 0$. We can write $g(sv,\lambda) = h(s,\lambda)v^*$. By~\cite[Stewart and Golubitsky]{SG11}, Liapunov--Schmidt reduction can be chosen to preserve the existence of a trivial solution, so we can assume $h(0,\lambda) = 0$. General properties of Liapunov--Schmidt reduction \cite[Golubitsky and Stewart]{GS85} and the fact that $x\mapsto \lambda x$ is admissible for each $\lambda$, imply that $h_{s\lambda}(0,0)$ is generically nonzero. Writing $h(s,\lambda) = s\,k(s,\lambda)$, so that $k(0,0) = h_s(0,0) = 0$ and $k_\lambda(0,0) = h_{s\lambda}(0,0) \neq 0$, the Implicit Function Theorem\index{Implicit Function Theorem} now implies the existence of a unique branch of nontrivial solutions $h(s,\Lambda(s)) \equiv 0$ in $\Delta_{\bowtie}$; that is, with $\bowtie$ synchrony.
Since $K \cap \Delta = \{0\}$ this branch is not a continuation of the trivial branch. The uniqueness statement in the theorem shows that the synchrony pattern on the branch concerned is precisely $\bowtie$ and not some coarser pattern. \end{proof} Let the network symmetry group be $\Gamma$ and let $\Sigma$ be a subgroup of $\Gamma$. Since $K$ is $\Gamma$-invariant, $\Fix_K(\Sigma) = \Fix_{\Q}(\Sigma)\cap K$. Therefore, if $\Fix_{\Q}(\Sigma)\cap K$ is $1$-dimensional then $\Sigma$ is axial on $K$ and Theorem~\ref{T:simp_eigen_reg} reduces to the Equivariant Branching Lemma. \begin{remark} \label{r:subtlety} Figure~\ref{F:Qpattern} shows two balanced patterns on a $2 \times 4$ array. On the whole space $\R^2\otimes\R^4$ these are distinct, corresponding to different synchrony subspaces $\Delta_1, \Delta_2$, which are flow-invariant. When intersected with $\Vd$, both patterns give the same 1-dimensional space, spanned by \[ \Matrixr{1 & -1 & 0 & 0 \\ -1 & 1 & 0 & 0} \] By Theorem~\ref{T:simp_eigen_reg} applied to $\Vd$, there is generically a branch that is {\em tangent} to the kernel $\Vd$ and lies in $\Delta_1$, and a branch that lies in $\Delta_2$. However, $\Delta_1 \subseteq \Delta_2$. Since the bifurcating branch is locally unique, it must lie in the intersection of those spaces; that is, it corresponds to the synchrony pattern with the fewest colors, Figure~\ref{F:Qpattern} (left).
\begin{figure}[htb] \centerline{ \includegraphics[width=.4\textwidth]{FIGURES/Qpattern.pdf}} \caption{Two axial patterns (for $\Vd$) with the same linearization on $\Vd$.} \label{F:Qpattern} \end{figure} \end{remark} \section{Dissensus bifurcations ($\cd = 0$)} \label{S:diss_bif} Our results for axial dissensus bifurcations, with critical eigenspace $\Vd$, have been summarized and interpreted in~\S\ref{case3}. We now state the results in more mathematical language and use Theorem~\ref{T:simp_eigen_reg} to determine solutions that are generically spawned at a deadlocked dissensus bifurcation: see Theorem~\ref{T:diss}. In this section we verify that Cases \S\ref{case3a}, \S\ref{case3b}, \S\ref{case3c} are all the possible dissensus axial patterns. The analysis leads to a combinatorial structure that arises naturally in this problem. It is a generalization of the concept of a Latin square, and is a consequence of the balance conditions for colorings. \begin{definition}\em \label{D:LatinRec} A {\em Latin rectangle} is an $a \times b$ array of colored nodes, such that: (a) Each color appears the same number of times in every row. (b) Each color appears the same number of times in every column. \end{definition} Condition (a) is equivalent to the row couplings being balanced. Condition (b) is equivalent to the column couplings being balanced. Definition~\ref{D:LatinRec} is not the usual definition of `Latin rectangle', which does not permit repeated entries in a row or column and imposes other conditions~\cite[wikipedia]{wikib}. Henceforth we abbreviate colors by $R$ (red), $B$ (blue), $G$ (green), and $Y$ (yellow). The conditions in Definition~\ref{D:LatinRec} are independent. Figure~\ref{F:LatinRec} (left) shows a $3 \times 6$ Latin rectangle with $(R,B,G)$ columns and $(R,R, B,B,G,G)$ rows. Counting colors shows that Figure~\ref{F:LatinRec} (right) satisfies (b) but not (a).
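Both conditions of Definition~\ref{D:LatinRec} are mechanical to check. The following sketch (Python; the explicit $3 \times 6$ and $3 \times 4$ arrays are our own reconstructions, chosen to match the row and column contents described for Figure~\ref{F:LatinRec} rather than read off the figure files) tests (a) and (b) separately:

```python
from collections import Counter

def latin_conditions(array):
    """Return (a, b): a = same color counts in every row,
    b = same color counts in every column."""
    rows = [Counter(r) for r in array]
    cols = [Counter(c) for c in zip(*array)]
    return (all(r == rows[0] for r in rows),
            all(c == cols[0] for c in cols))

# 3 x 6 Latin rectangle: every column is a permutation of (R,B,G),
# every row is a permutation of (R,R,B,B,G,G).
left = ["RRBGBG",
        "BGRRGB",
        "GBGBRR"]

# 3 x 4 array whose columns are balanced but whose rows are not.
right = ["RBGG",
         "BGBR",
         "GRRB"]

print(latin_conditions(left))    # (True, True)
print(latin_conditions(right))   # (False, True)
```

The same helper is reused below when constructing larger patterns.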
In terms of balance: in Figure~\ref{F:LatinRec} (left), each $R$ node has one $R$, two $B$, and two $G$ row-arrow inputs; one $B$ and one $G$ column-arrow input; and four $R$, three $B$, and three $G$ diagonal-arrow inputs. Similar remarks apply to the other colors. In contrast, in Figure~\ref{F:LatinRec} (right), the $R$ node in the first row has one $B$ and two $G$ row-arrow inputs, whereas the $R$ node in the second row has two $B$ row-arrow inputs and one $G$ row-arrow input. Therefore the right-hand pattern is not balanced for row-arrows, which in particular implies that it is not balanced. \begin{figure}[htb] \centerline{ \includegraphics[width = .25\textwidth]{FIGURES/LatinRec.pdf} \qquad\qquad \includegraphics[width = .19\textwidth]{FIGURES/3x4_color_nonlatin.pdf}} \caption{(Left) $3\times 6$ Latin rectangle with 3 colors. (Right) $3\times 4$ rectangle that satisfies (b) but not (a).} \label{F:LatinRec} \end{figure} The classification of axial colorings on influence networks is: \begin{theorem} \label{T:diss} The axial colorings relative to the dissensus space $\Vd$, up to reordering rows and columns, are \begin{itemize} \item[\rm (a)] $\bowtie\ =\Matrix{B_0 & B_1}$ where $B_0$ is a rectangle with one color ($Y$) and $B_1$ is a Latin rectangle with two colors ($R,B$). The fraction $0<\rho<1$ of red nodes in every row of $B_1$ is the same as the fraction of red nodes in every column of $B_1$. Similarly for blue nodes. If in $\Delta_{\bowtie}\cap\Vd$ the value of yellow nodes is $z_Y$, the value of red nodes is $z_R$, and the value of blue nodes is $z_B$, then $z_Y=0$ and \begin{equation} \label{E: case a zero sum} z_B=-\frac{\rho}{1-\rho}z_R. \end{equation} Possibly $B_0$ is empty. \item[\rm (b)] $\bowtie\ =\Matrix{B_{0} \\ B_1}$ where $B_0$ is a rectangle with one color ($Y$) and $B_1$ is a Latin rectangle with two colors ($R,B$). The fraction $0<\rho<1$ of red nodes in every row of $B_1$ is the same as the fraction of red nodes in every column of $B_1$.
Similarly for blue nodes. If in $\Delta_{\bowtie}\cap\Vd$ the value of yellow nodes is $z_Y$, the value of red nodes is $z_R$, and the value of blue nodes is $z_B$, then $z_Y=0$ and \begin{equation} \label{E: case b zero sum} z_B=-\frac{\rho}{1-\rho}z_R. \end{equation} Possibly $B_0$ is empty; if so, this pattern is the same as {\rm (a)} with empty $B_0$. \item[\rm (c)] $\bowtie\ =\Matrix{B_{11} & B_{12} \\ B_{21} & B_{22}}$ where the $B_{ij}$ are non-empty rectangles with only one color. Let $z_{ij}$ be the value associated to the color of $B_{ij}$ in $\Delta_{\bowtie}\cap\Vd$ and let $B_{11}$ be $r\times s$, $B_{12}$ be $r\times (n-s)$, $B_{21}$ be $(m-r)\times s$ and $B_{22}$ be $(m-r)\times (n-s)$. Then \begin{subequations} \label{E:case c zero sum} \begin{align} z_{12}&=-\frac{s}{n-s} z_{11}\\ z_{21}&=-\frac{r}{m-r} z_{11}\\ z_{22}&=\frac{s}{n-s}\frac{r}{m-r}z_{11} \end{align} \end{subequations} \end{itemize} \end{theorem} \begin{remark} The theorem implies that axial patterns of cases (a) and (b) either have two colors, one corresponding to a negative value of $z_{ij}$ and the other to a positive value; or, when the block $B_0$ occurs, there can also be a third value that is zero to leading order in $\lambda$. Axial patterns of case (c) have four colors, two corresponding to positive values and two corresponding to negative values, when $r\neq\frac{m}{2}$ or $s\neq\frac{n}{2}$, or two colors, one corresponding to a positive value and the other to a negative value, when $r=\frac{m}{2}$ and $s=\frac{n}{2}$. \end{remark} \subsection{Proof of Theorem~\ref{T:diss}} \label{S:PSTdiss} We need the following theorem, whose proof follows directly from imposing the balanced coloring condition and can be found in~\cite{GS22}. \begin{theorem} \label{T:main_rect} A coloring of $\mathcal{N}_{mn}$ is balanced if and only if it is conjugate under $\ES_m \times \ES_n$ to a tiling by rectangles, meeting along edges, such that {\rm (a)} Each rectangle is a Latin rectangle. 
{\rm (b)} Distinct rectangles have disjoint sets of colors. \end{theorem} See Figure~\ref{F:RectDecomp} for an illustration. \begin{figure}[htb] \centerline{ \includegraphics[width = .45\textwidth]{FIGURES/RectDecomp.pdf}} \caption{Rectangular decomposition. Each sub-array must be a Latin rectangle, and distinct sub-arrays have disjoint color-sets.} \label{F:RectDecomp} \end{figure} \begin{proof}[Proof of Theorem~\ref{T:diss}] $ $ Using Theorem~\ref{T:main_rect}, decompose the coloring array $\bowtie$ into its component Latin rectangles $B_{ij}$. Here $1 \leq i \leq s$ and $1 \leq j \leq t$. Suppose that $B_{ij}$ contains $n_{ij}$ distinct colors. Let $z \in \Delta_{\bowtie} \cap \Vd$. This happens if and only if $z \in \Delta_{\bowtie}$ and all row- and column-sums of $z$ are zero. We count equations and unknowns to find the dimension of $\Delta_{\bowtie} \cap \Vd$. There are $s$ distinct row-sum equations and $t$ distinct column-sum equations. These are independent except that the sum of entries over all rows is the same as the sum over all columns, giving one linear relation. Therefore the number of independent equations for $z$ is $s+t-1$. The number of independent variables is $\sum_{ij} n_{ij}$. Therefore \[ \dim (\Delta_{\bowtie} \cap \Vd) = \sum_{ij} n_{ij} - s - t + 1 \] and coloring $\bowtie$ is axial if and only if \begin{equation} \label{E:axial_condition} \sum_{ij} n_{ij} = s + t. \end{equation} Since $n_{ij} \geq 1$ we must have $st \leq s+t$. It is easy to prove that this condition holds if and only if \[ (s,t) = (1,t) \qquad (s,t) = (s,1) \qquad (s,t) = (2,2). \] We now show that these correspond respectively to cases (a), (b), (c) of the theorem. If $(s,t) = (2,2)$ then \eqref{E:axial_condition} implies that each $n_{ij} = 1$. This is case (c). If $(s,t) = (1,t)$ then $s+t-st = t+1-t = 1$. Now \eqref{E:axial_condition} requires exactly one $n_{1j}$ to equal 2 and the rest to equal 1. Now Remark~\ref{r:subtlety} comes into play. 
The columns with $n_{1j}= 1$ satisfy the column-sum condition, hence those $z_{1j} = 0$, and we can amalgamate those columns into a single zero block $B_0$. This is case (a). Case (b) is dual to case (a) and a similar proof applies. Finally, the ZRS and ZCS conditions give equations~\eqref{E: case a zero sum}, \eqref{E: case b zero sum}, and \eqref{E:case c zero sum}. \end{proof} \subsection{Orbit axial versus exotic axial} Axial matrices in Theorem~\ref{T:diss}(c) are orbital. Let $B_{ij}$ be a $p_i\times q_j$ matrix. Let $\Sigma$ be the group generated by elements that permute rows $\{1,\ldots, p_1\}$, permute rows $\{p_1+1,\ldots,p_1+p_2\}$, permute columns $\{1,\ldots, q_1\}$, and permute columns $\{q_1+1,\ldots, q_1+q_2\}$. Then the axial matrices in Theorem~\ref{T:diss}(c) are those in $\Fix(\Sigma)$. However, individual rectangles being orbital need not imply that the entire pattern is orbital. Axial matrices in Theorem~\ref{T:diss}(a) are orbital if and only if the $p\times q$ Latin rectangle $B_1$ is orbital. Suppose $B_1$ is orbital, being fixed by the subgroup $T$ of $\ES_p\times\ES_q$. Then $[B_0 \; B_1]$ is the fixed-point space of the subgroup $\Sigma$ generated by $T$ and the permutations of the columns in $B_0$. Hence $[B_0 \; B_1]$ is orbital. Conversely, if $[B_0 \; B_1]$ is orbital, then its isotropy subgroup $\tilde{T}$ fixes $[B_0 \; B_1]$. There is a subgroup $T$ of $\tilde{T}$ that fixes the columns of $B_0$, and $\Fix(T)$ consists of multiples of $B_1$. Thus $B_1$ is orbital. \subsection{Two or three agents} \label{S:2or3a} We specialize to $2 \times n$ and $3 \times n$ arrays. Here it turns out that all axial patterns are orbital. Exotic patterns appear when $m \geq 4$.
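Before specializing, we note that the dimension count in the proof of Theorem~\ref{T:diss} is easy to reproduce numerically for any candidate coloring. A minimal sketch (Python with NumPy; the helper name and the sample case~(c) tiling are our own choices): assign one unknown per color, impose all zero row-sum and zero column-sum conditions, and compute the corank.

```python
import numpy as np

def axial_dimension(color):
    """dim(Delta_bowtie intersect V_d) for an m x n coloring with
    integer labels 0..K-1: one unknown per color, subject to all
    row sums and column sums of the pattern being zero."""
    color = np.asarray(color)
    K = color.max() + 1
    eqs = [np.bincount(row, minlength=K) for row in color]      # ZRS
    eqs += [np.bincount(col, minlength=K) for col in color.T]   # ZCS
    return K - np.linalg.matrix_rank(np.array(eqs))

# Case (c): 2 x 2 tiling of one-color blocks with r = 1, s = 2 on a
# 3 x 5 array; four colors 0,1,2,3 as in Theorem (T:diss)(c).
r, s, m, n = 1, 2, 3, 5
color = np.zeros((m, n), dtype=int)
color[:r, s:] = 1
color[r:, :s] = 2
color[r:, s:] = 3
print(axial_dimension(color))   # 1, so the coloring is axial
```

For this tiling the one-dimensional kernel is spanned by $(z_{11},z_{12},z_{21},z_{22}) = (1,-\nicefrac{2}{3},-\nicefrac{1}{2},\nicefrac{1}{3})$, in agreement with \eqref{E:case c zero sum}.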
\subsubsection{$2 \times n$ array} The classification can be read off directly from Theorem~\ref{T:diss}, observing that every 2-color $2 \times k$ Latin rectangle must be conjugate to one of the form \begin{equation} \label{E:2xk_pattern} \Matrix{R & R & \cdots & R & B & B & \cdots & B \\ B & B & \cdots & B & R & R & \cdots & R} \end{equation} with $k$ even and $k/2$ nodes of each color in each row. This gives the form of $B_1$ in Theorem~\ref{T:diss} (a). Here $\rho = 1/2$ and $k$ must be even. There can also be a zero block $B_0$; it is empty precisely when $k = n$, which requires $n$ even. Theorem~\ref{T:diss} (b) does not occur. For Theorem~\ref{T:diss} (c), the $B_{i1}$ must be $1 \times k$, and the $B_{i2}$ must be $1 \times (n-k)$. Both types are easily seen to be orbit colorings. \subsubsection{$3 \times n$ array} The classification can be read off directly from Theorem~\ref{T:diss}, observing that every 2-color $3 \times k$ Latin rectangle must (permuting colors if necessary) be conjugate to one of the form \begin{equation} \label{E:3xk_pattern} \Matrix{R & R & \cdots & R & B & B & \cdots & B & B & B & \cdots & B \\ B & B & \cdots & B & R & R & \cdots & R & B & B & \cdots & B \\ B & B & \cdots & B & B & B & \cdots & B & R & R & \cdots & R } \end{equation} with $k = 3l$ and $k/3$ $R$ nodes and $2k/3$ $B$ nodes in each row. Here $\rho = 1/3$ and $k$ must be divisible by $3$. Theorem~\ref{T:diss} (a): Border this with a zero block $B_0$ when $n \neq 3l$. Theorem~\ref{T:diss} (b): $B_0$ must be $1 \times n$. Then $B_1$ must be a $2 \times n$ Latin rectangle with 2 colors, already classified. This requires $n = 2k$ with $k$ $R$ nodes and $k$ $B$ nodes in each row. (No zero block next to this can occur.) Theorem~\ref{T:diss} (c): Without loss of generality, $B_{11}$ is $2 \times k$ and $B_{12}$ is $2 \times (n-k)$, while $B_{21}$ is $1 \times k$ and $B_{22}$ is $1 \times (n-k)$. All three types are easily seen to be orbit colorings.
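As a sanity check on the $3 \times n$ classification, the sketch below (Python with NumPy; the explicit layout is one conjugate of \eqref{E:3xk_pattern}, chosen by us) confirms that the pattern is a Latin rectangle and that the values $z_R$ and $z_B = -\frac{\rho}{1-\rho}z_R$ with $\rho = \nicefrac13$ place it in $\Vd$:

```python
import numpy as np
from collections import Counter

l = 2
k = 3 * l
# One conjugate of the 3 x k pattern: row i is R on columns
# i*l .. (i+1)*l - 1 and B elsewhere.
pattern = np.full((3, k), 'B')
for i in range(3):
    pattern[i, i*l:(i+1)*l] = 'R'

# Latin rectangle: equal color counts in every row and every column.
row_counts = [Counter(r) for r in map(tuple, pattern)]
col_counts = [Counter(c) for c in map(tuple, pattern.T)]
assert all(c == row_counts[0] for c in row_counts)
assert all(c == col_counts[0] for c in col_counts)

# rho = 1/3, so z_B = -(rho/(1-rho)) z_R = -z_R/2.
z_R = 1.0
z = np.where(pattern == 'R', z_R, -z_R / 2)
print(z.sum(axis=1), z.sum(axis=0))   # all row and column sums vanish
```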
\subsection{Two or three options} When the number of options is $2$ or $3$ the axial patterns are the transposes of those discussed in \S\ref{S:2or3a}. The interpretations of these patterns are different, because the roles of agents and options are interchanged. \subsection{Exotic patterns} \label{S:EP} For large arrays it is difficult to determine which $2$-color Latin rectangles are orbital and which are exotic, because the combinatorial possibilities for Latin rectangles explode and the possible subgroups of $\ES_\Na \times \ES_\No$ also grow rapidly. Here we show that exotic patterns exist. This has implications for bifurcation analysis: using only the Equivariant Branching Lemma omits all exotic branches. These are just as important as the orbital ones. Figure~\ref{F:4x6_exotic} shows an exotic $4 \times 6$ pattern $\bowtie$. To prove that this pattern is exotic we find its isotropy subgroup $H$ in $\ES_4 \times \ES_6$ and show that $\Fix(H)$ is strictly larger than $\Delta_{\bowtie}$. Theorem~\ref{T:exotic_suff} below shows that there are many exotic $4 \times n$ patterns for larger $n$, and provides an alternative proof that this pattern is exotic. Regarding stability, we note that Ihrig's Theorem~\cite{IG84} applies only to orbital patterns, and it is not clear whether a network analog is valid. Currently, we cannot prove that an exotic pattern can be stable near bifurcation, though in principle such solutions can be stable in phase space. See Section~\ref{SS:stability_balanced}. However, simulations show that both orbital axial branches and exotic axial branches can regain stability. See the simulations in Section~\ref{SS:stability_dissensus}. \begin{figure}[htb] \centerline{ \includegraphics[width=.25\textwidth]{FIGURES/4x6_exotic.pdf} \qquad\qquad \includegraphics[width=.25\textwidth]{FIGURES/4x6_exotic_fix2.pdf} } \caption{(Left) Exotic $4 \times 6$ pattern.
(Right) Pattern for $\Fix(H)$.} \label{F:4x6_exotic} \end{figure} The general element of $\ES_4 \times \ES_6$ permutes both rows and columns. We claim: \begin{lemma} Let $\sigma_{ij}$ interchange columns $i$ and $j$ in Figure~{\rm \ref{F:4x6_exotic}} and let $\rho_{kl}$ interchange rows $k$ and $l$. Then the isotropy subgroup $H$ of $\bowtie$ is generated by the elements \[ \rho_{13}\rho_{24}\sigma_{12}, \ \ \rho_{12}\rho_{34}\sigma_{36}\sigma_{45},\ \ \sigma_{34}, \ \ \sigma_{56} . \] Also $\Fix(H)$ is strictly larger than $\Delta_{\bowtie}$, so $\bowtie$ is exotic. \end{lemma} \begin{proof} The proof divides into four parts. 1. The network divides into column components where two columns in a column component are either identical or can be made identical after color swapping. In this case there are two column components, one consisting of columns \{1,2\} and the other of columns \{3,4,5,6\}. It is easy to see that elements in any subgroup $H \subseteq \ES_4\times\ES_6$ map column components to column components; thus $H$ preserves column components because the two components contain different numbers of columns. Therefore $H$ is generated by elements of the form \[ (\rho,\alpha) \in \ES_4^{(1,2,3,4)} \times \ES_2^{(1,2)} \quad \mbox{or} \quad (\rho,\beta) \in \ES_4^{(1,2,3,4)} \times \ES_4^{(3,4,5,6)} \] where $\rho\in \ES_4^{(1,2,3,4)}$ permutes rows $\{1,2,3,4\}$, $\alpha \in\ES_2^{(1,2)}$ permutes columns $\{1, 2\}$, and $\beta \in\ES_4^{(3,4,5,6)}$ permutes columns $\{3,4,5,6\}$. 2. The only element of $H$ that contains the column swap $\sigma_{12}$ is $\rho_{13}\rho_{24}\sigma_{12}$, since $\sigma_{12}$ swaps colors in the first column component. This is the first generator in $H$. 3. Elements in $H$ that fix colors in the second column component are generated by $\sigma_{34}, \; \sigma_{56}$. The elements that swap colors are generated by $ \rho_{12}\rho_{34}\sigma_{36}\sigma_{45}$. 4.
It is straightforward to verify that any element in $H$ fixes the pattern in Figure~\ref{F:4x6_exotic} (right). \end{proof} \subsection{$4\times k$ exotic dissensus value patterns} In this section we show that the example of an exotic value pattern in Section~\ref{S:EP} is not an isolated case. Exotic $4\times k$ dissensus value patterns are indeed abundant. \subsubsection{Classification of $4\times k$ 2-color Latin rectangles} Let $B$ be a $4\times k$ Latin rectangle. With colors $R,B$ there are ten possible columns: four with one $R$ and six with two $R$s. \begin{figure}[htb] \centering \includegraphics[width=.5\textwidth]{./FIGURES/4xn_cols.pdf} \caption{The ten $2$-color $4$-node columns divided into two sets.} \label{F:4xn_cols} \end{figure} Column balance and row balance imply that the columns are color-isomorphic, so either the columns are in the first set \hbox{$a_1$ -- $a_4$} or the second set $b_1$ -- $b_6$. For the first set, row-balance implies that all four columns must occur in equal numbers. Hence this type of Latin rectangle can only occur if $k$ is a multiple of $4$. For $k=12$, the pattern looks like Figure~\ref{F:4xn_1R}. This pattern is always orbital. \begin{figure}[htb] \centering \includegraphics[width=.55\textwidth]{./FIGURES/4xn_1R.pdf} \caption{Typical pattern from first set of columns $a_1$ -- $a_4$ in Figure~\ref{F:4xn_cols}.} \label{F:4xn_1R} \end{figure} The second set is more interesting. We have grouped the column patterns in color-complementary pairs: $(b_1,b_2)$, $(b_3,b_4)$, and $(b_5,b_6)$. \begin{lemma} \label{L:pair_lemma} Suppose a $4\times n$ Latin rectangle has $\mu_i$ columns of type $b_i$. Then \begin{equation} \label{E:equal_pairs} \mu_1 = \mu_2=\nu_1 \qquad \mu_3 = \mu_4=\nu_2 \qquad \mu_5 = \mu_6 =\nu_3. \end{equation} \end{lemma} \begin{proof} Suppose the Latin rectangle has $\mu_i$ columns of type $b_i$. The row-balance condition is that the total number of $R$ nodes in each row is the same.
Therefore the sums \begin{align*} &\mu_1 + \mu_3 + \mu_5 \\ &\mu_1 + \mu_4 + \mu_6 \\ &\mu_2 + \mu_4 + \mu_5 \\ &\mu_2 + \mu_3 + \mu_6 \end{align*} are all equal. This yields three independent equations in six unknowns. It is not hard to see that this happens if and only if~\eqref{E:equal_pairs} holds. \end{proof} Lemma~\ref{L:pair_lemma} shows that in $4\times n$ Latin rectangles, color-complementary pairs of columns occur in equal numbers. For each color-complementary pair, we call the number $\nu_i$ its {\it multiplicity}. For purposes of illustration, we can permute columns so that within the pattern identical columns occur in blocks. A typical pattern is Figure~\ref{F:4xn_2R}. Observe that this type of Latin rectangle exists only when $k$ is even. \begin{figure}[htb] \centering \includegraphics[width=.55\textwidth]{./FIGURES/4xn_2R.pdf} \caption{Typical pattern from second set of columns $b_1$ -- $b_6$ in Figure~\ref{F:4xn_cols}. The multiplicity of the color-complementary pair $(b_1,b_2)$ is 3. The multiplicity of the color-complementary pair $(b_3,b_4)$ is 1. The multiplicity of the color-complementary pair $(b_5,b_6)$ is 2.} \label{F:4xn_2R} \end{figure} \subsubsection{Sufficiency for a $\boldsymbol{4\times k}$ Latin rectangle with 2 colors to be exotic} We now prove a simple sufficient condition for a $2$-coloring of a $4\times n$ Latin rectangle to be exotic. In particular this gives another proof for the $4\times 6$ example. \begin{theorem} \label{T:exotic_suff} Let $\bowtie$ be a $2$-coloring of a $4\times n$ Latin rectangle in which the colors $R$ and $B$ occur in equal proportions. Suppose that two color-complementary pairs have different multiplicities. Then $\bowtie$ is exotic. \end{theorem} \begin{proof} Each column contains two $R$ nodes and two $B$ nodes. By Lemma~\ref{L:pair_lemma} color-complementary pairs of columns occur in equal numbers, so the multiplicity of such a pair is defined.
Suppose, for a contradiction, that $\bowtie$ is orbital. Then the isotropy subgroup $H$ of $\bowtie$ acts transitively on the set of nodes of any given color, say $R$. Let $C_1$ and $C_2$ be two color-complementary pairs of columns with different multiplicities. The action of $\ONE \times \ES_n \subseteq \ES_m \times \ES_n$ permutes the set of columns. It preserves the pattern in each column, so it preserves complementary pairs. The action of $\ES_m \times \ONE \subseteq \ES_m \times \ES_n$ leaves each column fixed setwise. It permutes the pattern in each column simultaneously, so it maps complementary pairs of columns to complementary pairs. Therefore any element of $\ES_m \times \ES_n$, and in particular of $H$, preserves the multiplicities of complementary pairs. Thus $H$ cannot map any column of $C_1$ to a column of $C_2$, so $H$ cannot map any node in $C_1$ to a node in $C_2$. But both $C_1$ and $C_2$ contain an $R$ node by the Latin rectangle property, which contradicts $H$ being transitive on $R$ nodes. \end{proof} To construct exotic $4 \times n$ axial $2$-color patterns for all even $n \geq 6$, we concatenate blocks of complementary pairs of columns that do not all have the same multiplicity. The $4 \times 6$ pattern of Figure~\ref{F:4x6_exotic} (left) is an example. This proof relies on special properties of columns of length $4$, notably Lemma~\ref{L:pair_lemma}. Columns of odd length cannot occur as complementary pairs in Latin rectangles. However, at least one exotic pattern exists for a $5\times5$ influence network~\cite[Stewart]{S21}, and it seems likely that such patterns are common for larger $m,n$. More general sufficiency conditions than that in Theorem~\ref{T:exotic_suff} can probably be stated.
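The concatenation construction is easy to automate. A sketch (Python with NumPy; our encoding of the pairs $b_i$ as subsets of rows colored $R$, which need not match the labeling fixed by Figure~\ref{F:4xn_cols}):

```python
import numpy as np
from collections import Counter

# Color-complementary pairs of 2-R columns of length 4, each column
# encoded by the set of rows colored R (our labeling of the pairs).
PAIRS = [({0, 1}, {2, 3}), ({0, 2}, {1, 3}), ({0, 3}, {1, 2})]

def build(multiplicities):
    """Concatenate nu_i copies of each complementary pair of columns."""
    cols = []
    for (p, q), nu in zip(PAIRS, multiplicities):
        for red in [p] * nu + [q] * nu:
            cols.append(['R' if i in red else 'B' for i in range(4)])
    return np.array(cols).T                  # 4 x n array of colors

def is_latin(arr):
    rows = [Counter(r) for r in map(tuple, arr)]
    cols = [Counter(c) for c in map(tuple, arr.T)]
    return all(r == rows[0] for r in rows) and all(c == cols[0] for c in cols)

# Multiplicities (3, 1, 2), as in Figure (F:4xn_2R): a 4 x 12 Latin
# rectangle whose multiplicities are not all equal, hence exotic by
# Theorem (T:exotic_suff).
P = build((3, 1, 2))
print(P.shape, is_latin(P))    # (4, 12) True
```

Any choice of multiplicities that are not all equal yields an exotic pattern in this way.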
\section{An $\mathcal N_{mn}$-admissible value-formation ODE} We propose the following $\mathcal N_{mn}$-admissible value-formation dynamics: \begin{equation}\label{eq: admissible value formation} \dot z_{ij} = -z_{ij} + \lambda \left( S_1 \left({\talpha z_{ij} + \sum_{k\not= i} \tgamma z_{kj}}\right) + \sum_{l\not= j} S_2 \left( \tbeta z_{il} + \sum_{k\not = i} \tdelta z_{kl} \right) \right) \end{equation} Here $S_1$ and $S_2$ are sigmoidal functions, $1 \leq i \leq m$, and $1\leq j \leq n$. Model~\eqref{eq: admissible value formation} is intimately related to and inspired by the model of opinion dynamics introduced in~\cite[Bizyaeva {\it et al.}]{BFL22}. Model~\eqref{eq: admissible value formation} has two main components: a linear degradation term (modeling resistance to changing assigned values) and a saturated network interaction term whose strength is controlled by the bifurcation parameter $\lambda$. The network interaction term is in turn composed of four terms, modeling the four arrow types of $\mathcal N_{mn}$ networks. The sigmoids $S_1$ and $S_2$ satisfy \begin{equation}\label{eq:sigmoid conditions} S_i(0)=0,\quad S_i'(0)=1,\quad S_i^{(k)}(0)\neq 0,\ k\geq 2. \end{equation} With this choice, the trivial equilibrium for this model is $z_{ij}=0$ for all $i,j$. In simulations, we use \begin{equation}\label{eq: sigmoid definition} S_i(x)=\frac{\tanh(x-s_i)+\tanh(s_i)}{1-\tanh(s_i)^2}, \end{equation} which satisfies~\eqref{eq:sigmoid conditions} whenever $s_i\neq 0$. \subsection{Parameter interpretation} Parameter $\talpha$ tunes the weight of the node self-arrow, that is, the node's internal dynamics. When $\talpha<0$, resistance to changing assigned values is increased through nonlinear negative self-feedback on option values. When $\talpha>0$, resistance to changing assigned values is decreased through nonlinear positive self-feedback on option values. Parameter $\tbeta$ tunes the weight of row arrows, that is, intra-agent, inter-option interactions.
When $\tbeta<0$, an increase in one agent's valuation of any given option tends to decrease its valuations of other options. When $\tbeta>0$, an increase in one agent's valuation of any given option tends to increase its valuations of other options. Parameter $\tgamma$ tunes the weight of column arrows, that is, inter-agent, intra-option interactions. When $\tgamma<0$, an increase in one agent's valuation of any given option tends to decrease other agents' valuations of the same option. When $\tgamma>0$, an increase in one agent's valuation of any given option tends to increase other agents' valuations of the same option. Parameter $\tdelta$ tunes the weight of diagonal arrows, that is, inter-agent, inter-option interactions. When $\tdelta<0$, an increase in one agent's valuation of any given option tends to decrease other agents' valuations of all other options. When $\tdelta>0$, an increase in one agent's valuation of any given option tends to increase other agents' valuations of all other options. \subsection{Bifurcation conditions} Conditions for the various types of synchrony-breaking bifurcation can be computed by noticing that in model~\eqref{eq: admissible value formation} \begin{equation}\label{EQ: model deadlocked jac eigvals} \cd = -1+\lambda\tcd \quad \cc = -1 + \lambda\tcc\quad \cdd = -1 +\lambda\tcdd \quad \cdl = -1 + \lambda\tcdl \end{equation} where \begin{equation*} \begin{aligned} \tcd & = \talpha-\tbeta-\tgamma+\tdelta \\ \tcc & = \talpha-\tbeta+(m-1)\left(\tgamma-\tdelta\right)\\ \tcdd & = \talpha-\tgamma+(n-1)\left(\tbeta-\tdelta\right)\\ \tcdl & = \talpha+(n-1)\tbeta+(m-1)\tgamma+(m-1)(n-1)\tdelta \end{aligned} \end{equation*} For instance, if $\tcd>0$ and $\tcd>\tcc,\tcdd,\tcdl$, then a dissensus synchrony-breaking bifurcation happens at $\lambda=\frac{1}{\tcd}$.
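One way to cross-check \eqref{EQ: model deadlocked jac eigvals} is to assemble the linearization of \eqref{eq: admissible value formation} at $z=0$ explicitly, using its Kronecker-product structure, and compare eigenvalues. A minimal numerical sketch (Python with NumPy; the function name and test values are our own):

```python
import numpy as np

def linearization(m, n, a, b, g, d, lam):
    """Linearization of the model at z = 0: -I + lam*M, where M has
    weight a on self-arrows, b on row arrows (same agent), g on
    column arrows (same option), d on diagonal arrows."""
    I_m, I_n = np.eye(m), np.eye(n)
    J_m, J_n = np.ones((m, m)), np.ones((n, n))
    M = (a * np.kron(I_m, I_n) + b * np.kron(I_m, J_n - I_n)
         + g * np.kron(J_m - I_m, I_n) + d * np.kron(J_m - I_m, J_n - I_n))
    return -np.eye(m * n) + lam * M

m, n, lam = 4, 6, 0.7
a, b, g, d = 0.3, -0.2, 0.5, 0.1
eigs = np.sort(np.linalg.eigvals(linearization(m, n, a, b, g, d, lam)).real)

ct_d = a - b - g + d                              # on V_d, mult (m-1)(n-1)
ct_c = a - b + (m - 1) * (g - d)                  # consensus, mult n-1
ct_o = a - g + (n - 1) * (b - d)                  # deadlock, mult m-1
ct_D = a + (n-1)*b + (m-1)*g + (m-1)*(n-1)*d      # fully synchronous, simple
pred = np.sort([-1 + lam * c for c in
                [ct_d] * ((m-1)*(n-1)) + [ct_c] * (n-1)
                + [ct_o] * (m-1) + [ct_D]])
print(np.allclose(eigs, pred))                    # True
```

The multiplicities $(m-1)(n-1)$, $n-1$, $m-1$, $1$ correspond to the decomposition of phase space into the dissensus, consensus, deadlock, and fully synchronous subspaces.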
\section{Stable solutions} \label{S:stable_solutions} This section divides into four parts: \begin{itemize} \item[\eqref{S:simulation}] Simulation of consensus and deadlock synchrony-breaking. \item[{\eqref{S:simulation_dissensus}}] Simulation of dissensus synchrony-breaking. \item[\eqref{SS:stability_dissensus}] Discussion of the stability of dissensus bifurcation branches. \item[\eqref{SS:stability_balanced}] Stable equilibria can exist in balanced colorings. \end{itemize} \subsection{Simulation of consensus and deadlock synchrony-breaking} \label{S:simulation} To simulate consensus and deadlock synchrony-breaking we use $s_1=0.5$ and $s_2=0.3$ in~\eqref{eq: admissible value formation} and \eqref{eq: sigmoid definition}. For consensus synchrony-breaking we set \begin{equation} \label{e:consensus_bifurcation} \tcd = \tcdd = \tcdl = -1.0,\quad \tcc = 1.0, \quad \lambda = 1 +\varepsilon; \end{equation} and for deadlock synchrony-breaking we set \begin{equation} \label{e:deadlock_bifurcation} \tcd = \tcc = \tcdl = -1.0, \quad\tcdd = 1.0, \quad\lambda = 1+\varepsilon, \end{equation} where $0 \leq \varepsilon\ll1$. In other words, both for consensus and deadlock synchrony-breaking we let the bifurcation parameter $\lambda$ be slightly above the critical value at which bifurcation occurs. In simulations, we use $\varepsilon=10^{-2}$. Initial conditions are chosen randomly in a small neighborhood of the unstable fully synchronous equilibrium. \begin{figure} \centering \begin{subfigure}[b]{0.8\textwidth} \centering \includegraphics[width=\textwidth]{./FIGURES/4-6-cons-time} \caption{Evolution of valuations by 4 agents on 6 options at consensus symmetry-breaking. Simulation of \eqref{eq: admissible value formation} with parameters~\eqref{e:consensus_bifurcation}.
} \label{fig:4 6 cons time} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{./FIGURES/4-6-cons-patt} \caption{Final simulation pattern of valuations of 4 agents on 6 options at consensus symmetry-breaking. Option 6 is chosen.} \label{fig:4 6 cons patt} \end{subfigure} \caption{Possible consensus symmetry-breaking between 4 agents and 6 options. Transient value-assignment is shown through time series in (a). The final value pattern is shown in (b) and is of consensus type.} \label{fig:4 6 cons} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.8\textwidth} \centering \includegraphics[width=\textwidth]{./FIGURES/4-6-deadl-time} \caption{Evolution of valuations by 4 agents on 6 options at deadlock symmetry-breaking. Simulation of \eqref{eq: admissible value formation} with parameters~\eqref{e:deadlock_bifurcation}.} \label{fig:4 6 deadl time} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{./FIGURES/4-6-deadl-patt} \caption{Final simulated value pattern by 4 agents on 6 options at deadlock symmetry-breaking. Agent 2 values all the options positively. Other agents value all the options negatively.} \label{fig:4 6 deadl patt} \end{subfigure} \caption{Possible deadlock symmetry-breaking pattern obtained by simulation of 4 agents on 6 options. Transient value-assignment is shown through time series in (a). The final value pattern is shown in (b) and is of deadlock type.} \label{fig:4 6 deadl} \end{figure} After exponential divergence from the neutral point (Figures~\ref{fig:4 6 cons time} and~\ref{fig:4 6 deadl time}), trajectories converge to a consensus (Figure~\ref{fig:4 6 cons patt}) or deadlock (Figure~\ref{fig:4 6 deadl patt}) value pattern, depending on the chosen bifurcation type. Observe that the value patterns to which trajectories converge are `far' from the neutral point.
In other words, the network `jumps' away from indecision to a value pattern with distinctly different value assignments compared to those before bifurcation. This is a consequence of model-independent stability properties of consensus and deadlock branches, as we discuss next. A qualitatively similar behavior would have been observed for $\varepsilon=0$, i.e., for $\lambda$ exactly at bifurcation, with the difference that divergence from the fully synchronous equilibrium would have been sub-exponential because of zero (instead of positive) eigenvalues of the linearization. Also, for $\varepsilon=0$, trajectories could have converged to different consensus or deadlock value patterns as compared to $\varepsilon=10^{-2}$ because of possible secondary bifurcations that are known to happen close to $\ES_n$-equivariant bifurcations and that lead to primary-branch switching~\cite[Cohen and Stewart]{CS00}, \cite[Stewart {\it et al.}]{SEC03}, \cite[Elmhirst]{E02}. \subsection{Simulation of dissensus synchrony-breaking} \label{S:simulation_dissensus} We simulate dissensus synchrony-breaking for two sets of parameters: \begin{eqnarray} \label{param1} &&\qquad \tcd = 1.0, \tcc = -1.0, \tcdd = -0.5, \tcdl = -0.5, s_1 = 0.5, s_2 = 0.3, \lambda = \frac{1}{\tcd}+\varepsilon \\ \label{param2} &&\qquad \tcd = 1.0, \tcc = -1.0, \tcdd = -1.0, \tcdl = -1.0, s_1 = -0.1, s_2 = -0.3, \lambda = \frac{1}{\tcd}+\varepsilon \end{eqnarray} Initial conditions are chosen randomly in a small neighborhood of the unstable neutral point, and in simulations $\varepsilon=10^{-2}$. The resulting temporal behaviors and final value patterns are shown in Figures~\ref{fig:4 6 orb diss} and \ref{fig:4 6 exo diss}.
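A minimal sketch of how such a simulation can be set up follows (Python with NumPy; forward Euler integration and all variable names are our assumptions, not the code used to produce the figures). The coupling weights $(\talpha,\tbeta,\tgamma,\tdelta)$ are recovered from the prescribed $(\tcd,\tcc,\tcdd,\tcdl)$ of \eqref{param1} by inverting the linear relations of the previous section:

```python
import numpy as np

m, n, s1, s2, eps = 4, 6, 0.5, 0.3, 1e-2

def S(x, s):            # sigmoid of eq. (sigmoid definition)
    return (np.tanh(x - s) + np.tanh(s)) / (1 - np.tanh(s)**2)

# Prescribed (c_d, c_c, c_deadlock, c_Delta) of parameter set (param1);
# recover (alpha, beta, gamma, delta) from the linear relations.
c = np.array([1.0, -1.0, -0.5, -0.5])
A = np.array([[1, -1, -1, 1],
              [1, -1, m - 1, -(m - 1)],
              [1, n - 1, -1, -(n - 1)],
              [1, n - 1, m - 1, (m - 1) * (n - 1)]])
alpha, beta, gamma, delta = np.linalg.solve(A, c)
lam = 1.0 / c[0] + eps

def f(z):               # right-hand side of the admissible model
    col = z.sum(axis=0, keepdims=True)          # sums over agents k
    W = S(beta * z + delta * (col - z), s2)     # one S2 term per option l
    return -z + lam * (S(alpha * z + gamma * (col - z), s1)
                       + W.sum(axis=1, keepdims=True) - W)

rng = np.random.default_rng(1)
z = 1e-3 * rng.standard_normal((m, n))          # near the neutral point
for _ in range(20000):                          # forward Euler
    z = z + 1e-2 * f(z)
```

The final array $z$ can then be inspected for its synchrony pattern (signs and coincidences of entries), in the spirit of Figures~\ref{fig:4 6 orb diss patt} and~\ref{fig:4 6 exo diss patt}.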
\begin{figure} \centering \begin{subfigure}[b]{0.8\textwidth} \centering \includegraphics[width=\textwidth]{./FIGURES/4-6-orb-diss-time} \caption{Evolution of 4 agents' valuations about 6 options at dissensus symmetry-breaking with parameter set \eqref{param1}.} \label{fig:4 6 orb diss time} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{./FIGURES/4-6-orb-diss-patt} \caption{Final value pattern of 4 agents' valuations about 6 options at dissensus symmetry-breaking with parameter set \eqref{param1}. All agents are neutral about Options~1 and~3. Agent~1 favors Option~5. Agent~2 favors Option~2. Agent~3 favors Option~6. Agent~4 favors Option~4.} \label{fig:4 6 orb diss patt} \end{subfigure} \caption{Dissensus symmetry-breaking between 4 agents and 6 options with parameter set \eqref{param1}. Transient value-assignment is shown through time series in (a). The final value pattern is shown in (b) and is of orbital dissensus type.} \label{fig:4 6 orb diss} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.8\textwidth} \centering \includegraphics[width=\textwidth]{./FIGURES/4-6-exo-diss-time} \caption{Evolution of 4 agents' valuations about 6 options at dissensus symmetry-breaking with parameter set \eqref{param2}.} \label{fig:4 6 exo diss time} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{./FIGURES/4-6-exo-diss-patt} \caption{Final value pattern of 4 agents' valuations about 6 options at dissensus symmetry-breaking with parameter set \eqref{param2}. Agent~1 favors Options~1,2,5. Agent~2 favors Options~3,4,6. Agent~3 favors Options~1,5,6. Agent~4 favors Options~2,3,4.} \label{fig:4 6 exo diss patt} \end{subfigure} \caption{Dissensus symmetry-breaking between 4 agents and 6 options with parameter set \eqref{param2}. Transient value-assignment is shown through time series in (a). 
The final value pattern is shown in (b) and is of exotic dissensus type.} \label{fig:4 6 exo diss} \end{figure} For both sets of parameters, trajectories jump toward a dissensus value pattern. For the first set of parameters (Figure~\ref{fig:4 6 orb diss}), the value pattern has a block of zeros (first and third columns) corresponding to options about which the agents remain neutral. The pattern in Figure~\ref{fig:4 6 orb diss patt} is orbital, with symmetry group $\Z_4$. For the second set of parameters (Figure~\ref{fig:4 6 exo diss}), each agent favors half the options and dislikes the other half, but there is disagreement about the favored options. The precise value pattern follows the rules of a Latin rectangle with two red nodes and two blue nodes per column and three red nodes and three blue nodes per row. The pattern in Figure~\ref{fig:4 6 exo diss patt} is exotic, and is a permutation of the pattern in Figure~\ref{F:4x6_exotic} (left). With parameter set \eqref{param2}, model~\eqref{eq: admissible value formation} has stable equilibria with synchrony patterns given by both exotic and orbital dissensus value patterns. We stress that, for the same set of parameters but different initial conditions, trajectories converge to value patterns corresponding to different Latin rectangles with the same column and row proportions of red and blue nodes, both exotic and orbital ones. Therefore, multiple (conjugacy classes of) stable states can coexist, even when the proportions of colors are specified. \subsection{Stability of dissensus bifurcation branches} \label{SS:stability_dissensus} As discussed in Section~\ref{S:SCB} for consensus and deadlock bifurcations, solutions given by the Equivariant Branching Lemma are often unstable near bifurcation, but can regain stability away from bifurcation.
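The Latin-rectangle rules of the exotic pattern are easy to check mechanically. The sketch below encodes the final pattern described in the caption of Figure~\ref{fig:4 6 exo diss patt} as a $\pm 1$ matrix (rows = agents, columns = options, $+1$ = favored `red', $-1$ = disliked `blue') and verifies the color balance.

```python
import numpy as np

# Final pattern from the caption of Figure "4 6 exo diss patt":
# rows = agents, columns = options; +1 = favored ("red"), -1 = disliked ("blue").
P = np.array([
    [+1, +1, -1, -1, +1, -1],   # Agent 1 favors Options 1, 2, 5
    [-1, -1, +1, +1, -1, +1],   # Agent 2 favors Options 3, 4, 6
    [+1, -1, -1, -1, +1, +1],   # Agent 3 favors Options 1, 5, 6
    [-1, +1, +1, +1, -1, -1],   # Agent 4 favors Options 2, 3, 4
])

# Latin-rectangle balance: two red and two blue per column (4 agents) and
# three red and three blue per row (6 options), i.e. every row/column sum is 0.
assert (P.sum(axis=0) == 0).all() and (P.sum(axis=1) == 0).all()
print("pattern satisfies the Latin-rectangle color balance")
```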
The way this happens in dissensus bifurcations is likely to be similar to the way $\ES_n$ axial branches regain stability, that is, through saddle-node `turning-around' and secondary bifurcations. This observation is verified by simulation for both orbital and exotic axial solutions; see Section~\ref{S:simulation_dissensus}. The analytic determination of stability or instability for dissensus axial solutions involves a large number of parameters and is beyond the scope of this paper. We know, by the existence of a non-zero quadratic equivariant~\cite[Section~6.1]{FGBL20}, that Ihrig's Theorem might apply to orbital dissensus bifurcation branches. We do not know if a similar result applies to exotic dissensus bifurcation branches, but our numerical simulations suggest that this is the case. Our simulations also show that both types of pattern can be stable in our model for values of the bifurcation parameter close to the bifurcation point. \subsection{Stable equilibria can exist for any balanced coloring} \label{SS:stability_balanced} We now show that whenever $\bowtie$ is a balanced coloring of a network, there exists an admissible ODE having a linearly stable equilibrium with synchrony pattern $\bowtie$. \begin{theorem} \label{T:BCequi_exist} Let $\bowtie$ be a balanced coloring of a network $\GG$. Then for any choice of node spaces there exists a $\GG$-admissible map $f$ such that the ODE $\dot x = f(x)$ has a linearly stable equilibrium with synchrony pattern $\bowtie$. \end{theorem} \begin{proof} Let $y$ be a generic point for $\bowtie$; that is, $y_c = y_d$ if and only if $c \bowtie d$. Balanced colorings refine input equivalence. Therefore each input equivalence class $\KK$ of nodes is a disjoint union of $\bowtie$-equivalence classes: $\KK = \KK_1 \dot\cup \cdots \dot\cup \KK_s$. If $c,d \in \KK$ then $y_c = y_d$ if and only if $c,d$ belong to the same $\KK_i$.
Writing variables in standard order (that is, in successive blocks according to arrow-type) we may assume that $P_c = P_d$ whenever $c \sim_I d$. Next, we define a map $f^\KK: P_\KK \to P_\KK$ such that \beqn f^\KK(y_d) &=& 0\quad \forall d \in \KK \\ \mathrm{D}f^\KK(y_d) &=& -\id_d\quad \forall d \in \KK \eeqn where $\id_d$ is the identity map on $P_d$. This can be achieved with a polynomial map by polynomial interpolation. Now define $g:P\to P$ by \[ g_c(x) = f^\KK(x_c) \quad \mbox{when}\ c \in \KK \] The map $g$ is admissible since it depends only on the node variable and its components on any input equivalence class are identical. Now $g_c(y) = f^\KK(y_c) = 0$ for every node $c$, so $y$ is an equilibrium of $g$. Moreover, $\mathrm{D}g|_y$ is a block-diagonal matrix with blocks $-\id_c$ for each $c \in \CC$; that is, $\mathrm{D}g|_y = - \id_P$, where $\id_P$ is the identity map on $P$. The eigenvalues of $\mathrm{D}g|_y$ are therefore all equal to $-1$, so the equilibrium $y$ is linearly stable. \end{proof} \begin{thebibliography}{100} \bibitem{wikib} Latin rectangle, {\em Wikipedia}, (2021). \bibitem{AS07} F. Antoneli and I. Stewart, {\em Symmetry and synchrony in coupled cell networks 2: group networks}, Internat. J. Bif. Chaos, 17 (2007), pp. 935--951. \bibitem{BSMG06} D.~Biro, D. J. T.~Sumpter, J.~Meade, and T.~Guilford, {\em From compromise to leadership in pigeon homing}, Current Biol., 16 (2006), pp. 2123--2128. \bibitem{BFL22} A.~Bizyaeva, A.~Franci, and N. E.~Leonard, {\em Nonlinear opinion dynamics with tunable sensitivity}, IEEE Trans. Automatic Control, (2022). \bibitem{BFBD13} M.~Brambilla, E.~Ferrante, M.~Birattari, and M.~Dorigo, {\em Swarm robotics: a review from the swarm engineering perspective}, Swarm Intelligence, 7 (2013), pp. 1--41. \bibitem{Ci81} G. Cicogna, {\em Symmetry breakdown from bifurcations}, Lett. Nuovo Cimento, 31 (1981), pp. 600--602. \bibitem{CCB19} P.~Cisneros-Velarde, K.
S.~Chan, and F.~Bullo, {\em Polarization and fluctuations in signed social networks}, preprint, arXiv:1902.00658, (2019). \bibitem{CS00} J. Cohen and I. Stewart, {\em Polymorphism viewed as phenotypic symmetry-breaking}, in Nonlinear Phenomena in Biological and Physical Sciences, S.K.Malik, M.K.Chandrasekharan, and N. Pradhan, eds., Indian National Science Academy, New Delhi, 2000, pp. 1--63. \bibitem{CIDGTHCLL11} I. D. Couzin, C. C. Ioannou, G.~Demirel, T.~Gross, C. J. Torney, A.~Hartnett, L.~Conradt, S. A. Levin, and N. E. Leonard, {\em Uninformed individuals promote democratic consensus in animal groups}, Science, 334 (2011), pp. 1578--1580. \bibitem{CKFL05} I. D. Couzin, J. Krause, N. R. Franks, and S. A. Levin, {\em Effective leadership and decision-making in animal groups on the move}, Nature, 433 (2005), pp. 513--516. \bibitem{DGL13} P.~Dandekar, A.~Goel, and D. T.~Lee, {\em Biased assimilation, homophily, and the dynamics of polarization}, Proc. Natl. Acad. Sci. USA, 110 (2013), pp. 5791--5796. \bibitem{DNAW00} G.~Deffuant, D.~Neau, F.~Amblard, and G.~Weisbuch, {\em Mixing beliefs among interacting agents}, Advances in Complex Systems, 3 (2000), pp. 87--98. \bibitem{D74} M. H. DeGroot, {\em Reaching a consensus}, J. Amer. Statist. Assoc., 69 (1974), pp. 121--132. \bibitem{DS03} A. P. S.~Dias and I.~Stewart, {\em Secondary bifurcations in systems with all-to-all coupling}, Proc. Roy. Soc. London A, 459 (2003), pp. 1969--1986. \bibitem{DS05} A. P. S.~Dias and I.~Stewart, {\em Linear equivalence and ODE-equivalence for coupled cell networks}, Nonlinearity, 18 (2005), pp. 1003--1020. \bibitem{DK85} M.~Dworkin and D.~Kaiser, {\em Cell interactions in myxobacterial growth and development}, Science, 230 (1985), pp. 18--24. \bibitem {E02} T.~Elmhirst, {\em Symmetry and Emergence in Polymorphism and Sympatric Speciation}, PhD thesis, University of Warwick 2002. \bibitem{FR89} M. Field and R. 
Richardson, {\em Symmetry breaking and the maximal isotropy subgroup conjecture for reflection groups}, Arch. Rational Mech. Anal., 105 (1989), pp. 61--94. \bibitem{FBPL21} A.~Franci, A.~Bizyaeva, S.~Park, and N. E.~Leonard, {\em Analysis and control of agreement and disagreement opinion cascades}, Swarm Intelligence, 15 (2021), pp. 47--82. \bibitem{FGBL20} A.~Franci, M.~Golubitsky, A.~Bizyaeva, and N. E.~Leonard, {\em A model-independent theory of consensus and dissensus decision making}, preprint, arXiv:1909.05765 (2020). \bibitem{FPTP16} N.E.~Friedkin, A.V.~Proskurnikov, R.~Tempo, and S.E.~Parsegov, {\em Network science on belief system dynamics under logic constraints}, Science, 354 (2016), pp. 321--326. \bibitem{FJ99} N. E. Friedkin and E. C. Johnsen, {\em Social influence networks and opinion change}, in Advances in Group Processes 16, S.R.~Thye, E.J.~Lawler, M.W.~Macy, and H.A.~Walker, eds., Emerald Group, Bingley, 1999, pp. 1--29. \bibitem{GS85} M.~Golubitsky and D.G. Schaeffer, {\em Singularities and Groups in Bifurcation Theory}, volume~1, Springer, New York, 1985. \bibitem{GS02} M. Golubitsky and I. Stewart, {\em The Symmetry Perspective: from equilibria to chaos in phase space and physical space}, Progress in Mathematics 200, Birkh\"auser, Basel, 2002. \bibitem{GS06} M. Golubitsky and I. Stewart, {\em Nonlinear dynamics of networks: the groupoid formalism}, Bull. Amer. Math. Soc., 43 (2006), pp. 305--364. \bibitem{GS22} M. Golubitsky and I. Stewart, {\em Dynamics and Bifurcation in Networks}, SIAM, to appear. \bibitem{GSS88} M.~Golubitsky, I.~Stewart, and D. G. Schaeffer, {\em Singularities and Groups in Bifurcation Theory}, volume~2. Springer, New York, 1988. \bibitem{GST05} M. Golubitsky, I. Stewart, and A. T\"or\"ok, {\em Patterns of synchrony in coupled cell networks with multiple arrows}, SIAM J. Appl. Dynam. Sys., 4 (2005), pp. 78--100. \bibitem{GFSL18} R.~Gray, A.~Franci, V.~Srivastava, and N.
E.~Leonard, {\em Multi-agent decision-making dynamics inspired by honeybees}, IEEE Trans. Control of Networked Systems, 5 (2018), pp. 793--806. \bibitem{GLS88} M.~Gr{\"o}tschel, L.~Lov{\'a}sz, and A.~Schrijver, {\em Geometric Algorithms and Combinatorial Optimization}, volume~2. Springer, Berlin, 2012. \bibitem{GH88} J. Guckenheimer and P. Holmes, {\em Structurally stable heteroclinic cycles}, Math. Proc. Cambridge Philos. Soc., 103 (1988), pp. 189--192. \bibitem{HK02} R.~Hegselmann and U.~Krause, {\em Opinion dynamics and bounded confidence models, analysis, and simulations}, J. Artificial Societies and Social Simulation, 5 (2002), pp. 121--132. \bibitem{KBK00} M. J. B. Krieger, J-B. Billeter, and L.~Keller, {\em Ant-like task allocation and recruitment in cooperative robots}, Nature, 406 (2000), pp. 992--995. \bibitem{LDD06} T. H.~Labella, M.~Dorigo, and J-L~Deneubourg, {\em Division of labor in a group of robots inspired by ants' foraging behavior}, ACM Trans. Autonomous and Adaptive Systems, 1 (2006), pp. 4--25. \bibitem{LSNSCL12} N. E.~Leonard, T.~Shen, B.~Nabet, L.~Scardovi, I. D.~Couzin, and S. A.~Levin, {\em Decision versus compromise for animal groups in motion}, Proc. Natl. Acad. Sci. USA, 109 (2012), pp. 227--232. \bibitem{IG84} E.~Ihrig and M.~Golubitsky, {\em Pattern selection with $\mathbf{O}(3)$ symmetry}, Physica D, 12 (1984), pp. 1--33. \bibitem{MB01} M. B.~Miller and B. L.~Bassler, {\em Quorum sensing in bacteria}, Ann. Rev. Microbiol., 55 (2001), pp. 165--199. \bibitem{NLCL09} B.~Nabet, N.E.~Leonard, I.D.~Couzin, and S.A.~Levin, {\em Dynamics of decision making in animal group motion}, J. Nonlinear Sci., 19 (2009), pp. 399--435. \bibitem{PHSFLM13} D.~Pais, P. M.~Hogan, T.~Schlegel, N. R.~Franks, N. E.~Leonard, and J. A. R.~Marshall, {\em A mechanism for value-sensitive decision-making}, PloS ONE, 8 (2013), pp. e73216.
\bibitem{PPTF16} S.E.~Parsegov, A.V.~Proskurnikov, R.~Tempo, and N.E.~Friedkin, {\em Novel multidimensional models of opinion dynamics in social networks}, IEEE Trans. Automatic Control, 62 (2016), pp. 2270--2285. \bibitem{SCS91} T. D. Seeley, S. Camazine, and J. Sneyd, {\em Collective decision-making in honey bees: how colonies choose among nectar sources}, Behav. Ecol. Sociobiol., 28 (1991), pp. 277--290. \bibitem{S97} T. D. Seeley, {\em Honey bee colonies are group-level adaptive units}, Am. Nat., 150 (1997), pp. S22--S41. \bibitem{SKSHFM12} T. D. Seeley, P. K. Visscher, T.~Schlegel, P. M. Hogan, N. R. Franks, and J. A. R. Marshall, {\em Stop signals provide cross inhibition in collective decision-making by honeybee swarms}, Science, 335 (2012), pp. 108--111. \bibitem{SK11} E. A.~Shank and R.~Kolter, {\em Extracellular signaling and multicellularity in Bacillus subtilis}, Current Opinion in Microbiol., 14 (2011), pp. 741--747. \bibitem{SLGNSBSGC21} V. H.~Sridhar, L.~Li, D.~Gorbonos, M.~Nagy, B. R.~Schell, R.~Bianca, T.~Sorochkin, N. S.~Gov, and I. D.~Couzin, {\em The geometry of decision-making in individuals and collectives}, Proc. Natl. Acad. Sci. USA, 118 (2021), pp. e2102157118. \bibitem{S21} I. Stewart, {\em Balanced colorings and bifurcations in rivalry and opinion networks}, Internat. J. Bif. Chaos, 31 (2021), pp. 2130019. \bibitem{SEC03} I. Stewart, T. Elmhirst, and J. Cohen, {\em Symmetry-breaking as an origin of species}, in Bifurcations, Symmetry, and Patterns, J. Buescu, S. Castro, A.P.S. Dias, and I. Labouriau, eds., Birkh\"auser, Basel, 2003, pp. 3--54. \bibitem{SG11} I.~Stewart and M.~Golubitsky, {\em Synchrony-breaking bifurcation at a simple real eigenvalue for regular networks 1: $1$-dimensional cells}, SIAM J. Appl. Dynam. Sys., 10 (2011), pp. 1404--1442. \bibitem{SGP03} I.~Stewart, M.~Golubitsky, and M.~Pivato, {\em Symmetry groupoids and patterns of synchrony in coupled cell networks}, SIAM J. Appl. Dynam. Sys., 2 (2003), pp. 609--646.
\bibitem{UN21} {\em High-Level Thematic Debate on Delivering Climate Action: For People, Planet and Prosperity}, UN General Assembly, (2021), available at {\tt\tiny https://www.un.org/pga/76/wp-content/uploads/sites/101/2022/01/Delivering-Climate-Action-Summary.pdf} \bibitem{V82} A.~Vanderbauwhede. {\em Local Bifurcation and Symmetry}, Res. Notes in Math. 75, Pitman, Boston, 1982. \bibitem{K07} P. K.~Visscher, {\em Group decision making in nest-site selection among social insects}, Ann. Rev. Entomol., 52 (2007), pp. 255--275. \bibitem{YTLAA20} M.~Ye, M.H.~Trinh, Y.-H.~Lim, B.D.O.~Anderson, and H.-S.~Ahn, {\em Continuous-time opinion dynamics on multiple interdependent topics}, Automatica, 115 (2020), pp. 108884. \bibitem{WB05} C. M.~Waters and B. L.~Bassler, {\em Quorum sensing: cell-to-cell communication in bacteria}, Ann. Rev. Cell Dev. Biol., 21 (2005), pp. 319--346. \end{thebibliography} \end{document} \usepackage{amsfonts} \usepackage{graphicx} \usepackage{epstopdf} \usepackage{algorithmic} \usepackage{pdflscape} \usepackage{afterpage} \usepackage{xr} \usepackage{enumitem} \usepackage{tikz} \usepackage{caption} \usepackage{subcaption} \usepackage{todonotes} \usepackage{enumitem} \setitemize[1]{itemsep=10pt,topsep=10pt} \ifpdf \DeclareGraphicsExtensions{.eps,.pdf,.png,.jpg} \else \DeclareGraphicsExtensions{.eps} \newcommand{\RED}[1]{{\color{red}{#1}}} \newcommand{\BLUE}[1]{{\color{blue}{#1}}} \newcommand{\CYAN}[1]{{\color{cyan}{#1}}} \newcommand{\GREEN}[1]{{\color{green}{#1}}} \newcommand{\AFcomm}[1]{\todo[color = green]{{#1}}} \newcommand{\creflastconjunction}{, and~} \newcommand{\ignore}[1]{} \newcommand{\overbar}[1]{\mkern 2.0mu\overline{\mkern-2.0mu#1\mkern-2.0mu}\mkern 2.0mu} \newcommand\mscriptsize[1]{\mbox{\scriptsize\ensuremath{#1}}} \newcommand{\bs}[1]{\boldsymbol{#1}} \newcommand{\R}{{\mathbb R}} \newcommand{\N}{{\mathbb N}} \newcommand{\C}{{\mathbb C}} \newcommand{\D}{{\mathbf D}} \newcommand{\Q}{{\mathbb P}} \newcommand{\Z}{{\mathbf Z}} 
\newcommand{\ES}{{\mathbf S}} \newcommand{\VV}{{\mathcal V}} \newcommand{\I}{{\mathcal I}} \newcommand{\CC}{\mathcal C} \newcommand{\WW}{{\mathcal W}} \newcommand{\Fix}{\mathrm{Fix}} \newcommand{\NP}{\bs{O}} \newcommand{\NPfull}{{\bs{O}}} \newcommand{\ONE}{{\mathbf 1}} \newcommand{\Na}{{N_{\rm a}}} \newcommand{\No}{{N_{\rm o}}} \newcommand{\Xx}{\bs{X}} \newcommand{\Oo}{\bs{O}} \newcommand{\Zz}{\bs{Z}} \newcommand{\Ww}{\bs{W}} \newcommand{\Vv}{\bs{V}} \newcommand{\Gg}{\bs{G}} \newcommand{\Aa}{a^{\rm a}} \newcommand{\Ao}{a^{\rm o}} \newcommand{\Vvc}{\Vv_{\rm c}} \newcommand{\Vvd}{\Vv_{\rm d}} \newcommand{\Vc}{V_{\rm c}} \newcommand{\Vd}{V_{\rm d}} \newcommand{\Wdd}{V_{\rm dl}} \newcommand{\Wd}{V_{\rm s}} \newcommand{\cc}{c_{\rm c}} \newcommand{\cd}{c_{\rm d}} \newcommand{\cdd}{c_{\rm dl}} \newcommand{\cdl}{c_{\rm s}} \newcommand{\Vdl}{V_{\rm i}} \newcommand{\Vp}{V_{\rm p}} \newcommand{\talpha}{\tilde\alpha} \newcommand{\tgamma}{\tilde\gamma} \newcommand{\tbeta}{\tilde\beta} \newcommand{\tdelta}{\tilde\delta} \newcommand{\tcc}{\tilde c_{\rm c}} \newcommand{\tcd}{\tilde c_{\rm d}} \newcommand{\tcdd}{\tilde c_{\rm dl}} \newcommand{\tcdl}{\tilde c_{\rm s}} \newcommand{\sign}{{\rm sign}} \newcommand{\Am}[4]{A_{#1#2}^{#3#4}} \newcommand\Tstrut{\rule{0pt}{2.6ex}} \newcommand\Bstrut{\rule[-0.9ex]{0pt}{0pt}} \newcommand{\beqn}{\begin{eqnarray*}} \newcommand{\eeqn}{\end{eqnarray*}} \newsiamremark{remark}{Remark} \newsiamremark{hypothesis}{Hypothesis} \crefname{hypothesis}{Hypothesis}{Hypotheses} \newsiamthm{claim}{Claim} \newsiamremark{warning}{Warning} \newsiamremark{remarks}{Remarks} \newsiamremark{example}{Example} \newcommand{\Matrix}[1]{\ensuremath{\left[\begin{array}{ccccccccccccccccccccccccr} #1 \end{array}\right]}} \newcommand{\Matrixr}[1]{\ensuremath{\left[\begin{array}{rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr} #1 \end{array}\right]}} \usepackage{amsopn} \DeclareMathOperator{\diag}{diag} \newcommand{\GG}{{\mathcal G}} \newcommand{\END}{\hfill\mbox{\raggedright$\Diamond$}} 
\newcommand{\DD}{\mathrm{D}} \newcommand{\RR}{{\mathcal R}} \newcommand{\KK}{{\mathcal K}} \newcommand{\id}{\mathrm{id}\,} \newcommand{\shf}{\mbox{\footnotesize $\frac{1}{2}$}} \newcommand{\PP}{{\mathcal P}} \newcommand{\LL}{{\mathcal L}} \usepackage{latexsym} \usepackage{amsmath} \usepackage{amssymb}
2206.14864v1
http://arxiv.org/abs/2206.14864v1
Local vector measures
\documentclass[10pt, reqno]{amsart} \usepackage[english]{babel} \usepackage[colorlinks,citecolor=green,linkcolor=red]{hyperref} \usepackage{amsthm} \usepackage{color} \usepackage{mathrsfs} \usepackage{amsmath} \usepackage{mathtools} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{bm} \usepackage{physics} \usepackage{enumitem} \usepackage{tikz-cd} \usepackage{a4wide} \usepackage{cleveref} \usepackage{esint} \usepackage{nicefrac} \usepackage{dirtytalk} \numberwithin{equation}{section} \newtheorem{thm}{Theorem}[section] \newtheorem{thma}{Theorem} \renewcommand{\thethma}{\Alph{thma}} \newtheorem*{thm*}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{cora}{Corollary} \renewcommand{\thecora}{\Alph{cora}} \newtheorem{defn}[thm]{Definition} \newtheorem*{conve}{Convention} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \newtheorem{ex}[thm]{Example} \DeclareMathOperator*{\esssup}{ess\,sup} \DeclareMathOperator*{\essinf}{ess\,inf} \DeclareMathOperator*{\essint}{ess\,int} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\aplimsup}{ap\,\limsup} \DeclareMathOperator*{\apliminf}{ap\,\liminf} \newcommand{\Ch}{{\mathrm {Ch}}} \newcommand{\lip}{{\mathrm {lip}}} \newcommand{\Lip}{{\mathrm {Lip}}} \newcommand{\diff}{{\mathrm{d}}} \newcommand{\DIFF}{{\mathrm{D}}} \newcommand{\nablatilde}{\bar{\nabla}} \newcommand{\Prbar}{\bar{\Pr}} \newcommand{\pibar}{\bar{\pi}} \newcommand{\qcr}{{\mathrm{QCR}}} \renewcommand{\tr}{{\mathrm{tr}}} \newcommand{\trbar}{{{\bar{\tr}}}} \newcommand{\qcrbar}{{\mathrm{\bar{QCR}}}} \newcommand{\dive}{{\mathrm{div}}} \newcommand{\symcov}{{\nabla_\mathrm{sym}}} \newcommand{\hess}{{\mathrm{Hess}}} \newcommand{\heat}{{\mathrm {h}}} \newcommand{\sign}{{\mathrm {sign}}} \newcommand{\conv}{{\mathrm {conv}}} \newcommand{\supp}{{\mathrm {supp\,}}} \newcommand{\diam}{{\mathrm {diam\,}}} \newcommand{\capa}{{\mathrm {Cap}}} \newcommand{\per}{{\mathrm {Per}}} 
\newcommand{\Geod}{{\mathrm {Geod}}} \newcommand{\OptGeod}{{\mathrm {OptGeod}}} \newcommand{\Wass}{{\mathrm {W}}} \newcommand{\Prob}{{\mathscr{P}}} \newcommand{\Ptwo}{{\mathscr{P}_2 }} \newcommand{\mres}{\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex}} \newcommand{\mressmall}{\mathbin{\vrule height 1.2ex depth 0pt width 0.11ex\vrule height 0.10ex depth 0pt width 1.0ex}} \newcommand{\dist}{{\mathsf{d}}} \newcommand{\mass}{{\mathsf{m}}} \newcommand{\nass}{{\mathsf{n}}} \newcommand{\XX}{{\mathsf{X}}} \newcommand{\YY}{{\mathsf{Y}}} \newcommand{\ZZ}{{\mathcal{Z}}} \newcommand{\EE}{{\mathcal{E}}} \newcommand{\FF}{{\mathcal{F}}} \newcommand{\TT}{\mathcal{T}} \newcommand{\VV}{{\mathcal{V}}} \newcommand{\DD}{{\mathcal{D}}} \newcommand{\Rr}{{\mathcal{R}}} \newcommand{\WW}{{\mathcal{W}}} \newcommand{\mvect}{{\mathsf{M}}} \newcommand{\nvect}{{\mathsf{N}}} \newcommand{\Tan}{\mathrm {Tan}} \newcommand{\len}{{\mathrm{L}}} \newcommand{\defeq}{\mathrel{\mathop:}=} \newcommand{\GG}{\mathcal{G}} \newcommand{\AK}{\mathrm{AK}} \newcommand{\MM}{\mathcal{M}} \newcommand{\Mvmeas}{\mathcal{M}} \newcommand{\HH}{\mathcal{H}} \newcommand{\LL}{\mathcal{L}} \newcommand{\RR}{\mathbb{R}} \newcommand{\NN}{\mathbb{N}} \newcommand{\Cqcvf}{\mathcal{QC}(T\XX)} \newcommand{\Cqcvfinf}{\mathcal{QC}^\infty(T\XX)} \newcommand{\Cqcvfinfn}{\mathcal{QC}^\infty(T^n\XX)} \renewcommand{\S}{{\mathrm {S}^2}} \newcommand{\Ss}{{\mathrm {S}^2}(\XX)} \newcommand{\Lp}{{\mathrm {L}}} \newcommand{\Lploc}{\mathrm {L}_\mathrm{loc}} \newcommand{\Lpo}{\Lp^0(\mass)} \newcommand{\Lpc}{\Lp^0(\capa)} \newcommand{\Lpcinf}{\Lp^\infty(\capa)} \newcommand{\Lpu}{\Lp^1(\mass)} \newcommand{\Lpf}{\Lp^4(\XX,\mass)} \newcommand{\Lpuloc}{\Lp^1_{\mathrm{loc}}(\mass)} \newcommand{\Lpp}{\Lp^p(\XX)} \newcommand{\Lppprime}{\Lp^{p^*}(\XX,\mass)} \newcommand{\Lpploc}{\Lp^p_{\mathrm{loc}}(\XX,\mass)} \newcommand{\Lpt}{\Lp^2(\mass)} \newcommand{\Lptloc}{\Lp^2_{\mathrm{loc}}(\mass)} 
\newcommand{\Lpi}{\Lp^\infty(\mass)} \newcommand{\HS}{\mathrm {H^{1,2}}} \newcommand{\HSloc}{\mathrm {H^{1,2}_{loc}}} \newcommand{\WSCs}{\mathrm {W^{1,2}_C}(T\XX)} \newcommand{\WS}{\mathrm {W^{2,2}}} \newcommand{\HSs}{{\mathrm {H^{1,2}(\XX)}}} \newcommand{\WSs}{{\mathrm {W^{2,2}(\XX,\dist,\mass)}}} \newcommand{\WHCSs}{{\mathrm {H^{1,2}_C}(T\XX)}} \newcommand{\WSHs}{{\mathrm {W^{1,2}_H}(T\XX)}} \newcommand{\WHHSs}{{\mathrm {H^{1,2}_H}(T\XX)}} \newcommand{\WHHSsn}{{\mathrm {H^{1,2}_H}(T\XX)^n}} \newcommand{\HSsloc}{{\mathrm {H^{1,2}_{loc}(\XX)}}} \newcommand{\WSsloc}{{\mathrm {W^{2,2}_{loc}(\XX,\dist,\mass)}}} \newcommand{\LIP}{{\mathrm {LIP}}} \newcommand{\BV}{{\mathrm {BV}}} \newcommand{\BVv}{{\mathrm {BV}}(\XX)} \newcommand{\LIPbs}{{\mathrm {LIP_{bs}}}} \newcommand{\Cc}{{C_{\mathrm{c}}}} \newcommand{\Cci}{{C^\infty_{\mathrm{c}}}} \newcommand{\Cb}{{C_{\mathrm{b}}}} \newcommand{\Cbs}{{C_{\mathrm{bs}}}} \newcommand{\LIPc}{{\mathrm {LIP_{c}}}} \newcommand{\LIPloc}{{\mathrm {LIP_{loc}}}} \newcommand{\LIPb}{{\mathrm {LIP_{b}}}} \newcommand{\AC}{{\mathrm {AC}}} \newcommand{\TestF}{{\mathrm {Test}}} \newcommand{\TestFc}{{\mathrm {Test_c}}} \newcommand{\TestFbs}{{\mathrm {Test_{bs}}}} \newcommand{\TestV}{{\mathrm {TestV}}} \newcommand{\TestVc}{{\mathrm {TestV_c}}} \newcommand{\TestVbs}{{\mathrm {TestV_{bs}}}} \newcommand{\TestVbar}{{\mathrm {Test\bar{V}}}} \newcommand{\cotX}{\Lp^2(T^*\XX)} \newcommand{\cotcotX}{\Lp^2((T^*)^{\otimes 2}\XX)} \newcommand{\tanX}{\Lp^2(T\XX)} \newcommand{\tanXp}{\Lp^p(T\XX)} \newcommand{\tanXn}{\Lp^2(T^n\XX)} \newcommand{\tanY}{\Lp^2(T\YY)} \newcommand{\tanXzero}{\Lp^0(T\XX)} \newcommand{\cotanXzero}{\Lp^0(T^*\XX)} \newcommand{\tanXinf}{\Lp^\infty(T\XX)} \newcommand{\tanXinfn}{\Lp^\infty(T^n\XX)} \newcommand{\cotanXp}{\Lp^p(T^*\XX)} \newcommand{\tanEX}{\Lp^2_E(T\XX)} \newcommand{\tanbvX}[1]{\Lp^2_{#1}(T\XX)} \newcommand{\tanbvXp}[2]{\Lp^{#1}_{#2}(T\XX)} \newcommand{\tanbvXpn}[2]{\Lp^{#1}_{#2}(T^n\XX)} 
\newcommand{\tanbvXn}[1]{\Lp^2_{#1}(T^n\XX)} \newcommand{\tanbvXloc}[2]{\Lp^2_{#1}(T\XX_{|{#2}})} \newcommand{\tanbvXploc}[3]{\Lp^{#1}_{#2}(T\XX_{|{#3}})} \newcommand{\tanbvXplocn}[3]{\Lp^{#1}_{#2}(T^n\XX_{|{#3}})} \newcommand{\tanbvXzero}[1]{\Lp^0_{#1}(T\XX)} \newcommand{\tantanX}{\Lp^2(T^{\otimes 2}\XX)} \newcommand{\tanXcap}{\Lp^0_\capa(T\XX)} \newcommand{\tanXcapn}{\Lp^0_\capa(T^n\XX)} \newcommand{\tanXcapinf}{\Lp^\infty_\capa(T\XX)} \newcommand{\tanXcapinfn}{\Lp^\infty_\capa(T^n\XX)} \newcommand{\tanXcapinfm}{\Lp^\infty_\capa(T^m\XX)} \newcommand{\tanXcapinfk}{\Lp^\infty_\capa(T^k\XX)} \newcommand{\RCD}{{\mathrm {RCD}}} \newcommand{\CD}{{\mathrm {CD}}} \newcommand{\PI}{{\mathrm {PI}}} \newcommand{\Meas}{\mathrm{Meas}} \newcommand{\ric}{\mathbf{Ric}} \newcommand{\tv}{{\sf TV}} \newcommand{\fr}{\penalty-20\null\hfill$\blacksquare$} \newcommand{\leb}{\mathrm{Leb}} \newcommand{\borrep}{\mathrm{BorRep}} \let\epsilon\varepsilon \newcommand{\Id}{{\mathrm {Id}}} \def\Xint#1{\mathchoice {\XXint\displaystyle\textstyle{#1}} {\XXint\textstyle\scriptstyle{#1}} {\XXint\scriptstyle\scriptscriptstyle{#1}} {\XXint\scriptscriptstyle\scriptscriptstyle{#1}} \!\int} \def\XXint#1#2#3{{\setbox0=\hbox{$#1{#2#3}{\int}$} \vcenter{\hbox{$#2#3$}}\kern-.5\wd0}} \def\ddashint{\Xint=} \def\dashint{\Xint-} \title{}\begin{document} \title[Local vector measures]{Local vector measures} \author[C. Brena]{Camillo Brena} \address{C.~Brena: Scuola Normale Superiore, Piazza dei Cavalieri 7, 56126 Pisa} \email{\tt [email protected]} \author[N. Gigli]{Nicola Gigli} \address{N.~Gigli: SISSA, Via Bonomea 265, 34136 Trieste} \email{\tt [email protected]} \begin{abstract} Consider a BV function on a Riemannian manifold. What is its differential? And what about the Hessian of a convex function? These questions have clear answers in terms of (co)vector/matrix valued measures if the manifold is the Euclidean space. In more general curved contexts, the same objects can be perfectly understood via charts. 
However, charts are often unavailable in the less regular setting of metric geometry, where the questions still make sense. In this paper we propose a way to deal with problems of this sort and, more generally, to give a meaning to a concept of `measure acting in duality with sections of a given bundle', loosely speaking. Despite the generality, several classical results in measure theory like Riesz's and Alexandrov's theorems have a natural counterpart in this setting. Moreover, as we are going to discuss, the notions introduced here provide a unified framework for several key concepts in nonsmooth analysis that were introduced more than two decades ago, such as: Ambrosio-Kirchheim's metric currents, Cheeger's Sobolev functions and Miranda's BV functions. Not surprisingly, the understanding of the structure of these objects improves with the regularity of the underlying space. We are particularly interested in the case of $\RCD$ spaces where, as we will argue, the regularity of several key measures of the type we study nicely matches the known regularity theory for vector fields, resulting in a very effective theory. We expect that the notions developed here will help create stronger links between differential calculus in Alexandrov spaces (based on Perelman's DC charts) and in $\RCD$ ones (based on intrinsic tensor calculus). \end{abstract} \maketitle \setcounter{tocdepth}{3} \tableofcontents \section*{Introduction} This paper is about further developing differential calculus in the nonsmooth setting of metric (measure) spaces. The starting point is the paper \cite{Gigli14} of the second author, where the concept of $\Lp^p(\mass)$-normed $\Lp^\infty(\mass)$-module was introduced as a means to interpret what $\mass$-a.e.\ defined tensor fields should be on a given metric measure space $(\XX,\dist,\mass)$.
A typical object that is well defined in such a framework is the differential $\dd f$ of a Sobolev function $f\in {\rm W}^{1,p}(\XX)$: put shortly, this is possible because such a differential is, even in the smooth world, an a.e.\ defined 1-form. Still, when taking distributional derivatives it might very well be that one ends up with objects more singular than these. Typical instances where this occurs are: \begin{itemize} \item[A)] In dealing with the differential of a BV function. \item[B)] In dealing with the Hessian of a convex function. \item[C)] In dealing with the Ricci curvature tensor on $\RCD$ spaces: this is a sort of measure defined in duality with smooth vector fields whose properties are not yet well understood (see \cite[Section 3.6]{Gigli14}). \end{itemize} In all these cases, the relevant concept should be something possibly concentrated on a negligible set: as such, the notion of $\Lp^p(\mass)$-normed $\Lp^\infty(\mass)$-module does not really suffice. It is our goal here to propose a more general theory capable of dealing, in particular, with the cases above. As it will turn out, the framework we develop is fully compatible even with objects conceptually far from the examples mentioned, like metric currents as developed by Ambrosio-Kirchheim in \cite{AmbrosioKirchheim00}. The theory, as well as the manuscript, is divided into two parts: in the first we develop the concept of `local vector measure' in the general setting of Polish spaces, while in the second we discuss how the general framework fits in the more regular environment of $\RCD$ spaces. \subsection*{Polish theory} Measures are defined as functionals over other objects: either sets or functions or both. As such, if we want to give an abstract notion of `measure' capable of giving a meaning to the examples above, we should at the same time be ready to specify the space of objects on which it acts.
Example $\rm A)$ above is particularly illuminating: consider a BV function on a Riemannian manifold. Then its differential, whatever it is, should be something acting in duality with continuous vector fields; as such, one should have such vector fields at one's disposal before defining the differential. With this in mind, we build our theory by modelling it on the duality \[ \{\text{functions in }\Cb(\XX)\}\qquad\qquad\text{ in duality with }\qquad\qquad\{\text{finite Radon measures on $\XX$}\} \] (for comparison, notice that the duality theory in \cite{Gigli14} was modelled upon the duality between $\Lp^p$-functions and $\Lp^q$-functions, $\frac1p+\frac1q=1$). We thus begin our presentation by introducing the concept of `normed $\Cb(\XX)$-module', which aims to be an abstract version of the space of continuous (or, more generally, bounded) sections of a normed bundle. By definition, a normed $\Cb(\XX)$-module is a normed space $(\VV,\|\cdot\|)$ that is also a module over $\Cb(\XX)$ in the algebraic sense and for which the inequality \begin{equation} \label{eq:compintro} \|f_1v_1+\cdots+f_nv_n\|\leq\max_i\|f_i\|_{\infty}\|v_i\| \end{equation} holds for any $n\in\NN$ and choice of $v_i\in\VV$ and $f_i\in\Cb(\XX)$ with pairwise disjoint supports. The structure of $\Cb(\XX)$-module gives the possibility of inspecting the `local behaviour' of elements of $\VV$. For instance, for $v\in\VV$ and $A\subseteq\XX$ open we can define the seminorm $\|v\|_{|A}$ of $v$ in $A$ as $\sup\|fv\|$, the sup being taken among all $f\in\Cb(\XX)$ with support in $A$ and norm bounded by 1. Then for $C\subseteq\XX$ closed we say that the support of $v$ is contained in $C$ provided $\|v\|_{|\XX\setminus C}=0$.
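To make the localized seminorm and the support condition concrete, here is a hypothetical finite toy model (an illustration, not part of the paper's construction): take $\XX$ a finite discrete space, so that every subset is open and every function is bounded and continuous, and $\VV$ the real-valued functions on $\XX$ with the sup norm.

```python
# Toy discrete model of a normed Cb(X)-module (hypothetical illustration):
# X finite, V = functions X -> R with the sup norm, action (f.v)(x) = f(x)v(x).
X = [0, 1, 2, 3, 4]

def localized_norm(v, A):
    """||v||_{|A} = sup ||f v|| over f in Cb(X) supported in A, ||f||_inf <= 1.
    In the discrete model the sup is attained by the indicator function of A,
    so it equals max over A of |v(x)|."""
    return max((abs(v[x]) for x in A), default=0.0)

def supported_in(v, C):
    """supp v is contained in the closed set C iff ||v||_{|X \\ C} = 0."""
    return localized_norm(v, [x for x in X if x not in C]) == 0.0

v = {0: 0.0, 1: 2.0, 2: 0.0, 3: -1.0, 4: 0.0}
print(localized_norm(v, [1, 3]))    # 2.0
print(supported_in(v, {1, 3}))      # True
print(supported_in(v, {1}))         # False: v does not vanish at 3
```

In this model the compatibility inequality for functions with pairwise disjoint supports is immediate, since multiplying by such functions acts on disjoint parts of $\XX$.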
With these definitions it is easy to see that the compatibility condition \eqref{eq:compintro} can be read as a way of saying that the norm of $\VV$ is a sort of `sup' norm, as it implies that \[ \|v_1+\cdots+v_n\|=\max_i\|v_i\|\qquad\text{ provided the $v_i$'s have disjoint support} \] (here having disjoint supports means that there are disjoint closed sets $C_i$ with $\supp v_i\subseteq C_i$). For example, even though for every metric measure space $(\XX,\dist,\mass)$ and $p\in[1,\infty]$ the space $\Lp^p(\XX)$ is a module over $\Cb(\XX)$ in the algebraic sense, condition \eqref{eq:compintro} only holds in the case $p=\infty$. Now recall that a vector measure with values in the topological dual $\VV'$ of the Banach space $\VV$ is a $\sigma$-additive map sending Borel subsets of $\XX$ to functionals in $\VV'$. Then a \emph{local} vector measure defined on $\VV$ (i.e.\ acting in duality with elements of $\VV$) is a vector measure $\nvect$ on $\VV'$ that is compatible with the module structure in the sense that \begin{equation} \label{eq:wlintro} \nvect(A)(v)=0\qquad\forall A\subseteq\XX \text{ open and $v\in\VV$ such that $\|v\|_{|A}=0$}. \end{equation} We refer to property \eqref{eq:wlintro} as `locality' (or `weak locality', to distinguish it from the stronger notion discussed below). The basic example here is $\VV:=\Cb(\XX)$, with $\mu$ a finite (possibly signed) Radon measure on $\XX$ and $\nvect$ given by the formula \[ \nvect(B)(f):=\int_B f\,\dd\mu\qquad\text{$\forall B\subseteq\XX$ Borel and $f\in\Cb(\XX)$}. \] Section \ref{sectinit} is devoted to the study of measures of this sort. Among other results, we obtain a rather abstract version of Riesz's representation theorem; see Theorem \ref{weakder}. In the case where $\XX$ is compact, it can be stated as follows: for any $F\in \VV'$ there is a unique local vector measure $\nvect$ on $\VV$ such that \begin{equation} \label{eq:rieszintro} \nvect(\XX)(v)=F(v)\qquad\forall v\in\VV.
\end{equation} Notice that in the case $\VV=C(\XX)$ this easily reduces to the standard Riesz theorem. In the non-compact case we prove that for $F\in\VV'$ there exists $\nvect$ as in \eqref{eq:rieszintro} if and only if \begin{equation} \label{eq:tightintro} F(f_nv)\to0\qquad\forall v\in\VV\text{ and $\{f_n\}_n\subseteq\Cb(\XX)$ with $f_n(x)\searrow 0$ for any $x\in\XX$.} \end{equation} This is reminiscent of the analogous condition appearing in the study of Daniell's integral. Another relevant property of local vector measures concerns their total variation, which for a generic vector measure $\nvect$ with values in $\VV'$ is defined as $|\nvect|(E):=\sup\sum_i\|\nvect(E_i)\|'$, the sup being taken among at most countable partitions of the Borel set $E$. As it turns out, for local vector measures the total variation is always finite and the identity \[ |\nvect|(E)=\|\nvect(E)\|'\qquad\text{$\forall E\subseteq \XX$ Borel} \] holds. In particular, $|\nvect|$ is always a Radon measure and it is therefore natural to ask whether some sort of polar decomposition is in place. It turns out that this is always the case, meaning that for any local vector measure $\nvect$ there is a unique $L:\XX\to\VV'$ such that \begin{equation} \notag \nvect=L|\nvect|\quad\text{ in the sense that }\quad\nvect(B)(v)=\int_BL(x)(v)\,\dd|\nvect|(x)\qquad\text{$\forall B\subseteq\XX$ Borel and $v\in\VV$,} \end{equation} and it holds $\|L(x)\|'=1$ for $|\nvect|$-a.e.\ $x$ (see Proposition \ref{allhaspolarabs} for the precise formulation). The locality of $\nvect$ implies the following locality property of $L$: \begin{equation} \label{eq:locpol} L(x)(v)=0,\quad|\nvect|\text{-a.e.\ on}\ A\qquad\text{$\forall A\subseteq\XX$ open such that $\|v\|_{|A}=0$}, \end{equation} which we can also interpret by saying that $L(x)(v)$ depends only on the germ of $v$ at $x$.
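In the basic example $\VV=\Cb(\XX)$, $\nvect(B)(f)=\int_Bf\,\dd\mu$ described above, this polar decomposition is simply the classical one; we sketch the computation for orientation purposes. One can check that $|\nvect|=|\mu|$, and writing $\mu=\sigma|\mu|$ with $\sigma:=\frac{\dd\mu}{\dd|\mu|}$ satisfying $|\sigma|=1$ $|\mu|$-a.e., we get \[ \nvect(B)(f)=\int_B\sigma f\,\dd|\mu|,\qquad\text{i.e.}\qquad L(x)(f)=\sigma(x)f(x), \] so that indeed $\|L(x)\|'=\sup_{\|f\|_\infty\le1}|\sigma(x)f(x)|=|\sigma(x)|=1$ for $|\mu|$-a.e.\ $x$.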
At this level of generality we cannot say much more than this, but still these concepts turn out to be flexible enough to be naturally compatible with various pre-existing notions in metric geometry. Given the conceptual proximity of the definitions of $L^\infty$-module and $\Cb$-module, it is not surprising that the notion of differential of a Sobolev function as given in \cite{Gigli14} can be reinterpreted in this framework (see Section \ref{se:diffsob}). On the other hand, one thing that we gain from the studies conducted here, and that in fact motivates them, is the possibility of giving a meaning to the differential $\DIFF f$ of a BV function $f$ on arbitrary metric measure spaces. This notion of differential comes with some non-trivial calculus rules; for instance the Leibniz formula \[ \DIFF (fg)=g\DIFF f+f\DIFF g \] holds for any couple $f,g$ of bounded \emph{continuous} functions of bounded variation (see Section \ref{sectBV}). As mentioned above, the concept of metric current also fits naturally in this framework; let us briefly mention how. By the definition and the results in \cite{AmbrosioKirchheim00}, an $n$-current acts on the space \[ \DD^n(\XX)\defeq \Cb(\XX)\otimes \bigwedge^n \LIP(\XX) \] whose elements are sums of objects formally written as $f\dd\varphi_1\wedge\cdots\wedge\dd\varphi_n$. Thus, also in light of the Euclidean case, it is natural to equip $\DD^n(\XX)$ with the seminorm \begin{equation}\notag \Vert v\Vert\defeq\sup_T T(v)\quad\text{for every }v\in\DD^n(\XX) \end{equation} where the supremum is taken among all currents $T$ with $\Vert T\Vert_{\AK}(\XX)\le 1$ (here $\|T\|_{\AK}$ is the mass of the current as defined in \cite{AmbrosioKirchheim00}).
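A concrete example to keep in mind (we recall it here, up to notation, from \cite{AmbrosioKirchheim00}): on $\XX=\RR^n$ every $g\in \Lp^1(\RR^n)$ induces an $n$-current via \[ T_g(f\dd\varphi_1\wedge\cdots\wedge\dd\varphi_n)\defeq\int_{\RR^n} g\,f\det(\nabla\varphi)\,\dd x, \] where $\nabla\varphi$ denotes the (a.e.\ defined, thanks to Rademacher's theorem) Jacobian matrix of the Lipschitz map $(\varphi_1,\dots,\varphi_n)$; the mass of $T_g$ is then $\Vert T_g\Vert_{\AK}=|g|\mathcal L^n$.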
Then, tautologically, an $n$-current $T$ can be seen as an element of $\DD^n(\XX)'$ and the theories in \cite{AmbrosioKirchheim00} and in here match quite nicely, meaning that: \begin{itemize} \item[-] The results in \cite{AmbrosioKirchheim00} show that $\DD^n(\XX)'$ can naturally be equipped with the structure of a normed $\Cb(\XX)$-module. \item[-] An $n$-current $T$ always satisfies \eqref{eq:tightintro} when seen as an element of $\DD^n(\XX)'$ and thus it induces a unique local vector measure $\nvect_T$ on $\DD^n(\XX)$. Also, the total variation $|\nvect_T|$ of $\nvect_T$ as defined above coincides with the mass $\|T\|_{\AK}$ of $T$ as introduced in \cite{AmbrosioKirchheim00}. \item[-] The map sending $(f\dd\varphi_1\wedge \cdots\wedge\dd\varphi_n,g\dd\psi_1\wedge \cdots\wedge\dd\psi_m)$ to $fg\dd\varphi_1\wedge \cdots\wedge\dd\varphi_n\wedge\dd \psi_1\wedge \cdots\wedge\dd\psi_m$ induces a unique bilinear and continuous map from $\DD^n(\XX)\times \DD^m(\XX)$ to $\DD^{n+m}(\XX)$ of norm $\leq 1$. \item[-] In particular, the collection of normed $\Cb(\XX)$-modules $\{\DD^n(\XX)\}_{n\in\NN}$ possesses a natural algebra structure and - reading some of the results in \cite{AmbrosioKirchheim00} in this language - we see that the `differentiation' map $\LIP(\XX)\ni f\mapsto 1\dd f\in \DD^1(\XX)$ satisfies Leibniz and chain rules and is closed in a natural sense. \end{itemize} See Section \ref{se:curr}. \medskip The concept of locality as expressed in \eqref{eq:wlintro} and in \eqref{eq:locpol} is the most we can expect for local vector measures defined on arbitrary $\Cb(\XX)$-modules. This is because, in some sense, for an element of such a module we are capable of saying whether it is 0 on an open set, but we cannot give a reasonable meaning to it being 0 on a Borel set.
Still, in many practical situations a relevant normed $\Cb(\XX)$-module $\VV$ is given as a suitable space of bounded elements of a larger $\Lp^p(\mass)$-normed $\Lp^\infty(\mass)$-module $\mathscr M$ equipped with the norm \[ \|v\|:=\||v|\|_{\Lp^\infty(\mass)}\qquad\forall v\in\VV. \] If this is the case, it is important to understand how the structure of $\Lp^\infty(\mass)$-module and that of local vector measures on $\VV$ interact. We are going to see in Section \ref{normedmodules} that a local vector measure $\nvect$ on such a module $\VV$ with $|\nvect|\ll\mass$ satisfies the stronger locality property \[ \nvect(B)(v)=0\qquad\forall B\subseteq\XX \text{ Borel and $v\in\VV$ such that $|v|=0$ $\mass$-a.e.\ on $B$} \] if and only if there is an element $M_\nvect\in \mathscr M^*$ (the dual in the sense of modules) with $|M_\nvect|=1$ $|\nvect|$-a.e.\ such that \[ L_\nvect(v)=M_\nvect(v)\quad|\nvect|\text{-a.e.}\qquad\forall v\in\VV. \] We shall call measures of this kind \emph{strongly local measures}. Notice that if $\mathscr M$ is a Hilbert module, then the representation above (together with Riesz's theorem for Hilbert modules) means that we can represent a strongly local measure $\nvect$ as ${\sf v}|\nvect|$ for some ${\sf v}\in\mathscr M$ with $|{\sf v}|=1$ $|\nvect|$-a.e., i.e.\ that \[ \nvect(B)(v)=\int_Bv\cdot{\sf v}\,\dd|\nvect|. \] This closely corresponds to the usual intuition according to which a `vector valued measure' should be writable, via polar decomposition, as its mass times a vector field of norm 1. \subsection*{$\RCD$ theory} In this part of the paper we focus our attention on $\RCD$ spaces, whose study actually motivated this manuscript: here the starting observation is about a `coincidence' occurring when handling certain non-smooth objects.
It is indeed the case that: \begin{itemize} \item[-] In all of the three examples mentioned at the beginning of the introduction, the measure does not see sets of null 2-capacity: this is easily seen for Examples A and B, at least if the underlying space is the Euclidean one; for the Ricci curvature tensor on $\RCD$ spaces we refer to \cite{Gigli14}. To be more concrete: the polar decomposition of the differential $\DIFF f$ of a real valued BV function on the Euclidean space takes the form ${\sf v}|\DIFF f|$ for some Borel vector field ${\sf v}$ of norm 1 $|\DIFF f|$-a.e.\ and some non-negative Radon measure $|\DIFF f|$ giving 0 mass to sets of null 2-capacity. Notice that in fact the total variation measure of a function of bounded variation is well defined on arbitrary metric measure spaces \cite{MIRANDA2003} and does not see sets of null 2-capacity \cite{BGBV}. \item[-] On $\RCD$ spaces vector fields can be defined up to sets of null 2-capacity (unlike general metric measure spaces, where they are typically only defined up to sets of measure zero - at least in the axiomatization proposed in \cite{Gigli14}). More precisely, the analysis carried out in \cite{debin2019quasicontinuous} shows that Sobolev vector fields have a unique `quasi-continuous representative' defined up to sets of null 2-capacity: the analogy here is with the well known case of Sobolev functions, but in this case all the concepts are built upon functional analytic foundations, as no topology on the tangent module is given, so one cannot speak of continuity of vector fields in the classical sense. \end{itemize} The coincidence is in the fact that the best known regularity for vector fields matches the known regularity for the relevant (mass of the) measure, suggesting the existence of a good duality theory.
Of course, we do not think this is a coincidence at all, but rather an instance of the not infrequent phenomenon whereby geometrically relevant objects rest on solid analytic foundations. This second part of the work is devoted to exploring the generalities of the theory in this setting and to tailoring it to the study of the differential of BV functions. In particular we shall mostly rely on the concept of strongly local measure discussed in Section \ref{normedmodules}. We also cover the case of vector valued BV functions, which presents additional difficulties and seems intractable over general metric measure spaces. The outcome of the analysis is that the distributional differential $\DIFF f$ can always be represented as ${\sf v}|\DIFF f|$, where ${\sf v}$ is a suitable quasi-continuous vector field of norm 1 $|\DIFF f|$-a.e.\ and $|\DIFF f|$ is the classical total variation measure of $f$ as introduced in \cite{MIRANDA2003}, at least in the scalar case. This description, finer than the one available on arbitrary spaces, is consistent with all the recent fine calculus tools and integration by parts formulas developed on finite dimensional $\RCD$ spaces. For instance, the Gauss-Green formula established in \cite{bru2019rectifiability} can now be interpreted as the representation, in the sense above, of the distributional differential of the characteristic function of a set of finite perimeter.
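To orient the reader, let us spell out what this representation amounts to in the classical Euclidean setting (a well known picture, recalled here only for comparison): for a set of finite perimeter $E\subseteq\RR^n$, De Giorgi's theory gives \[ \DIFF\chi_E=\nu_E|\DIFF\chi_E|\qquad\text{ with }\qquad|\DIFF\chi_E|(B)=\mathcal H^{n-1}(B\cap\partial^*E), \] where $\partial^*E$ is the reduced boundary of $E$ and $\nu_E$ its measure theoretic (unit) normal, up to sign conventions. The representation ${\sf v}|\DIFF f|$ above is the counterpart of this picture on $\RCD$ spaces, with quasi-continuity of ${\sf v}$ replacing the mere Borel regularity of $\nu_E$.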
In this sense, also the Leibniz rule for the product of two bounded (but not necessarily continuous) BV functions recently obtained in \cite{BGBV} fits naturally in this framework, meaning that we have \[ \DIFF (fg)=\bar f\DIFF g+\bar g\DIFF f,\qquad\forall f,g\in {\mathrm{BV}}(\XX)\cap \Lp^\infty(\XX), \] where $\bar f:=\frac{f^\vee+f^\wedge}2$ is the precise representative of $f$ (here $f^\vee(x)\defeq\aplimsup_{y\rightarrow x} f(y)$ and $f^\wedge(x)\defeq\apliminf_{y\rightarrow x} f(y)$, see \eqref{veeandwedge}), similarly for $g$, and the differentials are defined as strongly local vector measures in the sense discussed here. We recall this fact in Proposition \ref{leibnizrcd}. \bigskip In this manuscript we are not going to study the Hessian of convex functions and the Ricci curvature tensor: this will be the main goal of an upcoming paper. \subsection*{Acknowledgments} The first author acknowledges the support of the PRIN 2017 project `Gradient flows, Optimal Transport and Metric Measure Structures'. \section{The theory for Polish spaces} \subsection{Definitions and results}\label{sectinit} In the first part of this note, our framework is a Polish space $(\XX,\tau)$, which means that there exists a distance $\dist:\XX\times\XX\rightarrow\RR$ inducing the topology $\tau$ and such that $(\XX,\dist)$ is a complete and separable metric space. For simplicity of exposition, we fix such a distance $\dist$ and hence we will often consider the Polish space $(\XX,\tau)$ as the complete and separable metric space $(\XX,\dist)$. We point out that for what concerns this part of the work, the choice of the distance is immaterial and what matters is the topology $\tau$.
However, having a fixed distance is natural if one reads the first part of the work in perspective of the second part, where we will work with metric measure spaces, i.e.\ triplets $(\XX,\dist,\mass)$ where $\dist$ is a complete and separable distance on $\XX$ and $\mass$ is a Borel measure, finite on balls. We denote the Borel $\sigma$-algebra of $\XX$ by $\mathcal{B}(\XX)$ and we adopt the standard notation for the various function spaces. \bigskip In the following definition, we introduce the concept of normed $\Cb(\XX)$-module. Recall that from the algebraic perspective there is a well defined concept of module over the commutative ring with unity $\Cb(\XX)$, it being a commutative group $\VV$ equipped with an operation $\cdot:\Cb(\XX)\times\VV\to\VV$ satisfying \[ \begin{split} f\cdot(v+w)&=f\cdot v+f\cdot w,\\ (f+g)\cdot v&=f\cdot v+g\cdot v,\\ (fg)\cdot v&=f\cdot(g\cdot v),\\ {\bf 1}\cdot v&=v, \end{split} \] for any $f,g\in \Cb(\XX)$ and $v,w\in\VV$. Here ${\bf 1}$ is the multiplicative identity of $\Cb(\XX)$, i.e.\ the function identically equal to 1. In what follows we shall omit the `dot' and write the product of $f$ and $v$ simply as $fv$. The space $\Cb(\XX)$ is also a Banach space when equipped with its natural `$\sup$' norm $\Vert \,\cdot\,\Vert_\infty$. We shall then consider normed spaces that are also modules over $\Cb(\XX)$ in the algebraic sense and for which a certain compatibility between the normed and algebraic structures is present: \begin{defn}[Normed $\Cb(\XX)$-modules]\label{defnmodules} Let $(\VV,\Vert\,\cdot\,\Vert)$ be a normed space that is also a module over the ring $\Cb(\XX)$ in the algebraic sense.
We say that $(\VV,\Vert\,\cdot\,\Vert)$ is a normed $\Cb(\XX)$-module if \begin{equation}\label{compeq} \Vert f_1 v_1+\dots+f_n v_n\Vert\le \max_{i} \Vert f_i\Vert_{\infty} \max_{i}\Vert v_i\Vert, \end{equation} whenever $\{f_i\}_{i=1,\dots,n}\subseteq\Cb(\XX)$ have pairwise disjoint supports and $\{v_i\}_{i=1,\dots,n}\subseteq\VV$. \end{defn} Even though the expression `normed $\Cb(\XX)$-module' has an algebraic meaning (i.e.\ a normed vector space that is also a module over $\Cb(\XX)$), in what follows when we write `normed $\Cb(\XX)$-module', \textbf{we always refer to the notion introduced in the definition above.} Simple examples of normed $\Cb(\XX)$-modules are: \begin{itemize} \item[-] The space $(\Cb(\XX),\Vert \,\cdot\,\Vert_\infty)$ itself. \item[-] Let $(\XX,\dist,\mass)$ be a metric measure space and $\mathscr M$ a $\Lpo$-normed $\Lpo$-module. Then any subspace $\VV$ of $\mathscr M$ closed under multiplication by $\Cb(\XX)$ and made of elements with pointwise norm in $\Lpi$ is a normed $\Cb(\XX)$-module once endowed with the norm $\Vert\abs{\,\cdot\,}\Vert_{\Lpi}$. \end{itemize} Here are two simple consequences of our standing assumptions on $\Cb(\XX)$-modules that we are going to use without further reference. \begin{rem} The following properties hold: \begin{enumerate} \item if $f(x)=\lambda\in\RR$ for every $x\in\XX$, then $f v=\lambda v$ for every $v\in\VV$. Indeed, this holds for $\lambda=1$ by the algebraic definition of module, then extends to $\lambda\in\mathbb{Q}$ by algebra and then to $\lambda\in\RR$ by the continuity granted by \eqref{compeq}; \item if $\{f_i\}_{i=1,\dots,n}\subseteq\Cb(\XX)$ have pairwise disjoint supports and $\{v_i\}_{i=1,\dots,n}\subseteq\VV$, then \[ \Vert f_1 v_1+\dots+f_n v_n\Vert\le \max_{i} \left(\Vert f_i\Vert_{\infty} \Vert v_i\Vert\right).
\] To show this, just divide each non-zero $f_i$ by $\Vert f_i\Vert_{\infty}$ and multiply each $v_i$ by the same quantity.\fr \end{enumerate} \end{rem} Occasionally we shall consider modules over appropriate subrings of $\Cb(\XX)$: \begin{defn}[Normed $\mathcal R$-modules]\label{def:Rap} Let $\mathcal R\subseteq\Cb(\XX)$ be a subring (hence, it contains the unity). We say that $\mathcal R$ \textbf{approximates open sets} if $\mathcal R$ is a lattice closed under multiplication by constant functions and such that for every open subset $A\subseteq \XX$, there exists a sequence $\{f_k\}_k\subseteq\mathcal R$ such that $f_k(x)\nearrow \chi_A(x)$ for every $x\in\XX$. A normed $\mathcal R$-module is a normed vector space $(\VV,\Vert\,\cdot\,\Vert)$ that is also a module over $\mathcal R$ in the algebraic sense such that property \eqref{compeq} holds for any $\{v_i\}_{i=1,\dots,n}\subseteq\VV$ and $\{f_i\}_{i=1,\dots,n}\subseteq\mathcal R$ with pairwise disjoint supports. When speaking about normed $\mathcal R$-modules we shall always refer to this sort of structure and always assume that $\mathcal R$ approximates open sets. \end{defn} \begin{ex} $\Cb(\XX)$ itself and the subring generated by $\LIPbs(\XX)$ (i.e.\ $\dist$-Lipschitz functions with bounded support) and the constant functions approximate open sets.\fr \end{ex} In the following remark we state some properties of subrings that approximate open sets which we are going to exploit in the sequel. \begin{rem}\label{rem:Rpropr} Let $\mathcal R\subseteq\Cb(\XX)$ be a subring that approximates open sets and let $A\subseteq \XX$ be open. Then: \begin{enumerate} \item by the lattice property of $\mathcal R$ we can find $\{f_k\}_k\subseteq\mathcal R$ such that $f_k(x)\nearrow \chi_A(x)$ for every $x\in\XX$ and also $f_k(x)\in [0,1]$ for every $x\in\XX$ and $k\in\NN$.
Also, we can assume that $\supp f_k\subseteq A$ for every $k$, using again the properties of $\mathcal R$; \item if $K\subseteq A$ is compact, we can modify $\{f_k\}_k$, still remaining in $\mathcal R$, in such a way that, in addition to the properties stated in $(1)$, it holds that for every $k\in\NN$, $f_k=1$ on a neighbourhood of $K$. This is due to Dini's monotone convergence theorem and the lattice property of $\mathcal R$. In particular we have \begin{equation} \label{eq:separK} \begin{split} &\text{Let $K\subseteq \XX$ compact and $A\subseteq\XX$ open with $K\subseteq A$. Then there exists a function}\\ &\text{ $\varphi\in \mathcal R$ valued in $[0,1]$ such that $\varphi=1$ on a neighbourhood of $K$ and $\supp\varphi\subseteq A$.} \end{split} \end{equation} \item For every $f\in\Cb(\XX)$ there exists a sequence $\{f_n\}_n\subseteq\Rr$ with $f_n\nearrow f$. Notice that by Dini's monotone convergence theorem, this approximation is uniform on compact sets. Indeed, take $f\in\Cb(\XX)$, say $f(x)\in[0,1]$ for every $x\in\XX$. Take $k\in\NN$. For every $j=0,\dots,k-1$, consider a sequence $\{\chi_{j,k}^n\}_n\subseteq\Rr$ taking values in $[0,1]$ and such that $\chi_{j,k}^n\nearrow\chi_{\{f>\frac{{j+1}}k\}}$, then let $$g_k^n\defeq\tfrac1k\sum_{j=0}^{k-1} \chi_{j,k}^n\in\Rr.$$ Notice that $g^n_k\leq f$ for every $k,n$, then let $(n_i,k_i)_{i\in\NN}$ be an enumeration of $\NN^2$ and, finally, set $f_0\defeq 0$ and for $i\ge 1$, $f_i\defeq g_{k_i}^{n_i}\vee f_{i-1}$. It is easy to verify that $\{f_i\}_i\subseteq\Rr$ provides a suitable approximating sequence. \fr \end{enumerate} \end{rem} We recall that functions in $\Cb(\XX)$ have the following separation property, stronger than \eqref{eq:separK}: \begin{equation} \label{eq:separ} \begin{split} &\text{Let $C\subseteq \XX$ closed and $A\subseteq\XX$ open with $C\subseteq A$.
Then there exists a function}\\ &\text{ $\varphi\in \Cb(\XX)$ valued in $[0,1]$ such that $\varphi=1$ on a neighbourhood of $C$ and $\supp\varphi\subseteq A$.} \end{split} \end{equation} This is an instance of Urysohn's lemma in the normal space $(\XX,\tau)$. More concretely, using the distance $\dist$ we can post-compose the function $\frac{\dist(x,\XX\setminus A)}{\dist(x,C)+\dist(x,\XX\setminus A)}$ with a continuous function $\psi:\RR\to[0,1]$ identically 1 on a neighbourhood of 1 and identically 0 on a neighbourhood of 0. \bigskip The presence of both a norm and a product with functions allows us to localize the concept of norm and to give some notion of `support' as follows: \begin{defn}[Local seminorms] Let $\VV$ be a normed $\mathcal R$-module. Then for $A\subseteq\XX$ open and $v\in\VV$, we define \[ \Vert v\Vert_{| A}\defeq\sup\left\{\Vert f v\Vert: f\in\mathcal R,\ \supp f\subseteq A\text{ and }\Vert f\Vert_\infty \le 1 \right\}. \] Then define the germ seminorm of $v$, $\abs{v}_g:\XX\rightarrow\RR$, by \[ \abs{v}_g(x)\defeq\inf_A\Vert v\Vert_{| A}, \] where the infimum is taken among all open neighbourhoods of $x$. \end{defn} \begin{defn}[`Supports']\label{def:supp} Let $\VV$ be a normed $\mathcal R$-module. Then for $C\subseteq\XX$ closed and $v\in\VV$ we say that $\supp v \subseteq C$ provided $\Vert v\Vert_{|\XX\setminus C}=0$. More generally, for $B\subseteq\XX$ Borel we say that $\supp v\subseteq B$ provided $\supp v\subseteq C$ for some $C\subseteq B$ closed. \end{defn} Let us collect a few simple properties of these definitions: \begin{enumerate} \item The concepts of local seminorm, germ seminorm and support all depend also on the ring $\mathcal R$, so that if $\VV$ is both a normed $\mathcal R$-module and a normed $\mathcal R'$-module, then the associated notions of seminorm and support may depend on which ring we are using in the definitions above. Nevertheless, this will not make much difference when considering local vector measures, see Remark \ref{nonimportaR}.
\item For every $A\subseteq\XX$ open and $x\in\XX$ both $\Vert\,\cdot\,\Vert_{| A}$ and $|{\,\cdot\,}|_g(x)$ are seminorms on $\VV$. \item If $\{x_n\}_n$ converges to $x$, then $x_n$ eventually belongs to any given open neighbourhood $A$ of $x$, so that $\abs{v}_g(x_n)\le\Vert v\Vert_{|A}$ for $n$ large enough; hence \[ \abs{v}_g(x)=\inf_A\Vert v\Vert_{| A}\geq \limsup_{n}\abs{v}_g(x_n). \] Thus $\abs{v}_g$ is upper semicontinuous, hence Borel measurable. \item For every $\varphi\in\mathcal R$, $v\in\VV$ we have \begin{equation} \label{eq:locnorm} \abs{\varphi v}_g=|\varphi|\abs{v}_g\qquad\text{ on }\XX. \end{equation} To see this, let $\varepsilon>0$, $x\in\XX$ and $A\subseteq\XX$ be an open neighbourhood of $x$ so small that $\Vert v\Vert_{| A}\leq \abs{v}_g(x)+\varepsilon$ and $|\varphi-\varphi(x)|\leq\varepsilon$ on $A$. Then letting $f$ vary in the set of functions in $\mathcal R$ with support in $A$ and $\Vert f\Vert_\infty\le1$ we get \[ \begin{split} \Vert \varphi v\Vert_{| A}=\sup_f\Vert f\varphi v\Vert\leq \sup_f\Big(\Vert (\varphi-\varphi(x))fv\Vert +|\varphi|(x)\Vert fv\Vert \Big)\leq \varepsilon\Vert v\Vert +|\varphi|(x)\Vert v\Vert_{| A}, \end{split} \] where in the last inequality we used the fact that $\|(\varphi-\varphi(x))f\|_\infty\leq \varepsilon$ and the compatibility condition \eqref{compeq}. This proves $\leq$ in \eqref{eq:locnorm}. The opposite inequality follows along the same lines starting from $ \Vert f\varphi v\Vert\geq -\Vert (\varphi-\varphi(x))fv\Vert +|\varphi|(x)\Vert fv\Vert $. \item A direct consequence of Definition \ref{def:supp} is that \begin{equation} \label{eq:suppprod} \supp v\subseteq B\qquad\Rightarrow\qquad \supp(\varphi v)\subseteq\supp \varphi \cap B. \end{equation} Indeed, the inclusion $\supp(\varphi v)\subseteq B$ is obvious from the definition. On the other hand, for any $f\in \mathcal R$ with $\supp f\subseteq\XX\setminus\supp \varphi$ we have $f(\varphi v)=0$, which is the same as to say that $\Vert\varphi v\Vert_{|\XX\setminus\supp\varphi}=0$.
\item The inclusion \eqref{eq:suppprod} gives \begin{equation} \label{eq:c3} f\in\mathcal R\ \text{ and }\ \supp v\subseteq\text{\{interior of $\{f=1\}$\}}\qquad\Rightarrow\qquad fv=v. \end{equation} Indeed, the conclusion is equivalent to $(1-f)v=0$, i.e.\ to $\supp((1-f)v)=\emptyset$. Now let $C\subseteq\{\text{interior of $\{f=1\}$}\}$ be closed and such that $\supp v\subseteq C$, notice that $C\cap \supp(1-f)=\emptyset$ and conclude by \eqref{eq:suppprod}. \item If $\VV$ is a normed $\Cb(\XX)$-module (i.e.\ if $\mathcal R=\Cb(\XX)$) we have \begin{equation} \label{eq:disjsupp} \{v_i\}_{i=1,\dots,n}\subseteq \VV\text{ with disjoint supports }\qquad\Rightarrow\qquad \Vert v_1+\dots+v_n\Vert= \max_i \Vert v_i\Vert, \end{equation} where having disjoint supports means that there are pairwise disjoint closed sets $C_i$ such that $\supp v_i\subseteq C_i$ for every $i$. Indeed, we use the metric $\dist$ to see that $(\XX,\tau)$ is normal, and hence find (iteratively) $A_1,\dots,A_n$ open and pairwise disjoint such that $C_i\subseteq A_i$ for every $i$. Then, take $f_i$ for $C_i\subseteq A_i$ as in \eqref{eq:separ} and use \eqref{compeq} to conclude that $\leq$ holds in \eqref{eq:disjsupp}. On the other hand, with this choice of functions $f_i$ we see that \eqref{eq:suppprod} and \eqref{eq:c3} imply $f_j\sum_iv_i=v_j$ and therefore $\|v_j\|\leq \|f_j\|_\infty\|\sum_iv_i\|$ by \eqref{compeq} (applied with $n=1$). \item Definition \ref{def:supp} does not identify the support of an element of $\VV$ as a subset of $\XX$, but rather defines when an element of $\VV$ has support contained in a set. It is tempting to set \begin{equation}\label{strangesupp} \supp v\defeq\overline{\{\abs{v}_g>0 \}}.\end{equation} The problem with this definition is that it might be that \begin{equation} \label{eq:strano} \Vert v\Vert_{|\XX\setminus\supp v}>0, \end{equation} which violates the intuitive idea behind the concept of support and of locality of the norm, see Example \ref{localityvv}.
Nevertheless, we point out, on the one hand, that in all the practical examples we have in mind the definition of support as in \eqref{strangesupp} works as well as the one in Definition \ref{def:supp} and, on the other, that in order to produce an example like the one below one needs $\XX$ to be non-compact and to use some version of the Axiom of Choice strictly stronger than the Axiom of Countable Dependent Choice. \end{enumerate} \begin{ex}\label{localityvv} Let $\XX\defeq\NN$ be endowed with the complete and separable distance $\dist(n,m)\defeq 1-\delta_{n}^m$, notice that $\Cb(\XX)=\ell^\infty$ and let $\VV:=\Cb(\XX)'=(\ell^\infty)'$. It is readily verified that the natural product operation defined as $f\cdot L(g):=L(fg)$ for any $L\in \VV$, $f,g\in\ell^\infty=\Cb(\XX)$ endows $\VV$ with the structure of a normed $\Cb(\XX)$-module. Let $W\subseteq\ell^\infty$ be the subspace of sequences admitting a limit and let $L\in\VV$ be an element of the dual of $\Cb(\XX)$ obtained by extending - via Hahn-Banach - the functional that associates to $f\in W$ its limit. We claim that \begin{equation} \label{eq:claimL} |L|_g(x)=0\qquad\forall x\in \XX. \end{equation} Indeed, since the singleton $\{x\}$ is open, it is sufficient to prove that $f\cdot L=0$ for every $f\in \ell^\infty$ that is identically 0 outside $\{x\}$. But this is trivial by the definition of the product, because for any such $f$ and any $g\in\ell^\infty$ we have $fg\in W$ with limit 0, hence $f\cdot L(g)=0$, proving our claim \eqref{eq:claimL}. It follows that the support as defined in \eqref{strangesupp} is empty and thus, since clearly $L\neq 0$, that \eqref{eq:strano} holds. \fr \end{ex} Let now $(\VV',\Vert\,\cdot\,\Vert')$ denote the dual space of $(\VV,\Vert\,\cdot\,\Vert)$ (as a normed vector space). It is well known that $(\VV',\Vert\,\cdot\,\Vert')$ is a Banach space. We now recall the definition of ($\sigma$-additive) vector valued measure, see e.g.\ \cite[Chapters 1 and 2]{DiestelUhl77}.
\begin{defn} A $\VV'$-valued measure is a map $$ \nvect:\mathcal{B}(\XX)\rightarrow \VV' $$ that is \emph{$\sigma$-additive}, in the sense that if $\{A_k\}_{k\in\NN}$ is a sequence of pairwise disjoint sets in $\mathcal{B}(\XX)$, then $$\nvect\left(\bigcup_{k} A_k\right)=\sum_{k} \nvect(A_k),$$ where the convergence of the series has to be understood as convergence in norm in $\VV'$. \end{defn} Notice that in particular the convergence of the series in the above equation is unconditional, i.e.\ it is independent of the ordering of the terms. \bigskip In this paper we are concerned with a particular type of such measures, where the measure and the module structure interact in the following way: \begin{defn}[Local vector measures]\label{mvmdef} Let $\VV$ be a normed $\mathcal R$-module. A local vector measure defined on $\VV$ is a $\VV'$-valued measure $\nvect:\mathcal{B}(\XX)\rightarrow \VV'$ that is \emph{weakly local}, in the sense that $$\nvect(A)(v)=0\quad\text{for every $A\subseteq\XX$ open and }v\in\VV\text{ such that }\Vert v\Vert_{|A}=0. $$ We denote the set of such local vector measures by $\MM_\VV$. \end{defn} In what follows, the space $\VV$ on which local vector measures are defined is considered fixed, unless otherwise specified. Notice that one may always assume that $\VV$ is complete, as the dual of the completion coincides with $\VV'$, the completion naturally carries the structure of a normed $\Cb(\XX)$-module and such structure remains compatible with the weak locality imposed above.
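As a simple check of this definition, consider again the basic scalar example: $\VV=\Cb(\XX)$ (so $\mathcal R=\Cb(\XX)$), $\mu$ a finite signed Radon measure and $\nvect(B)(f)\defeq\int_Bf\,\dd\mu$. If $\Vert f\Vert_{|A}=0$ for some open $A\subseteq\XX$, then evaluating $gf$ at points of $A$ for Urysohn functions $g$ as in \eqref{eq:separ} shows that $f=0$ on $A$, whence \[ \nvect(A)(f)=\int_Af\,\dd\mu=0, \] i.e.\ $\nvect$ is weakly local, hence a local vector measure in the sense of Definition \ref{mvmdef}.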
If $\nvect$ is a local vector measure and $v\in\VV$, we will often consider the finite signed Borel measure $$v\,\cdot\,\nvect\defeq \nvect(\,\cdot\,)(v).$$ By the regularity of $v\,\cdot\,\nvect$ it trivially follows that \begin{equation} \label{eq:c1} A\subseteq \XX\text{ open},\ B\subseteq A\text{ Borel},\ \Vert v\Vert_{|A}=0 \qquad\Rightarrow\qquad \nvect(B)(v)=0 \end{equation} and this further implies that \begin{equation} \label{eq:c4} \supp v\subseteq B\subseteq B'\text{ with $B,B'\subseteq\XX$ Borel}\qquad\Rightarrow\qquad \nvect(B)(v)=\nvect(B')(v). \end{equation} Indeed, for $C\subseteq B$ closed such that $\supp v\subseteq C$ we have $\Vert v\Vert_{|\XX\setminus C}=0$. Since $\XX\setminus C$ is open, the weak locality of $\nvect$ gives $\nvect(\XX\setminus C)(v)=0$ and thus \eqref{eq:c1} and the trivial inclusion $B'\setminus B\subseteq \XX\setminus C$ imply \eqref{eq:c4}. We also notice that for $B\subseteq \XX$ Borel, $w\in\VV$ and $f\in \mathcal R$ equal to 1 on an open neighbourhood $A$ of $B$, we have $\|(1-f)w\|_{|A}=0$ and thus $\nvect(A)(fw)=\nvect(A)(w)$ by weak locality. Hence \eqref{eq:c1} gives \begin{equation} \label{eq:c2} \nvect(B)(w)=\nvect(B)(f w). \end{equation} \bigskip We recall the definition of total variation of a vector valued measure, which we are going to use specialized to the case of local vector measures. \begin{defn} Let $\VV$ be a normed $\mathcal R$-module and $\nvect$ be a $\VV'$-valued measure. Its total variation is the (countably additive) extended real valued Borel measure defined by $$\abs{\nvect}(E)\defeq\sup_{\pi}\sum_{A\in\pi}\Vert \nvect(A)\Vert'$$ where the supremum is taken over all finite Borel partitions $\pi$ of $E$. If $\abs{\nvect}(\XX)<\infty$, we say that $\nvect$ has bounded variation. \end{defn} We will see with the following Proposition \ref{localisfiniteabs} that the local vector measures we have just defined automatically have bounded variation.
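For the basic scalar example $\VV=\Cb(\XX)$, $\nvect(B)(f)=\int_Bf\,\dd\mu$, the total variation just defined reduces, as one expects, to the classical total variation of $\mu$; we include the computation for the reader's convenience. For every Borel set $E$ we have \[ \Vert\nvect(E)\Vert'=\sup_{\Vert f\Vert_\infty\le1}\Big|\int_Ef\,\dd\mu\Big|=|\mu|(E), \] where the inequality $\le$ is trivial and $\ge$ follows by approximating the density $\frac{\dd\mu}{\dd|\mu|}$ with functions in $\Cb(\XX)$ of norm $\le1$ (e.g.\ via Lusin's theorem); hence $\abs{\nvect}(E)=\sup_\pi\sum_{A\in\pi}|\mu|(A)=|\mu|(E)$ by the additivity of $|\mu|$.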
From the definition of total variation, it immediately follows that \begin{equation} \label{eq:trivialvar} \abs{\nvect(E)(v)}\le\abs{\nvect}(E)\Vert v\Vert\qquad\text{ for every $E$ Borel and $v\in\VV$}. \end{equation} \begin{rem} We remark that there is no difficulty in taking the (measure theoretic) completion of $\nvect$, defining it to be $0$ on all the subsets of $\abs{\nvect}$-negligible sets. Therefore, if we write $$\mathcal{N}\defeq\{Z\subseteq\XX:\text{there exists $Z'\subseteq\XX$ Borel such that $\abs{\nvect}(Z')=0$ and $Z\subseteq Z'$}\},$$ $\nvect$ is well defined on the $\sigma$-algebra generated by the union of $\mathcal{B}(\XX)$ and $\mathcal{N}$. \fr \end{rem} \begin{lem} Let $\VV$ be a normed $\mathcal R$-module and $\nvect$ be a local vector measure on it. Then for every $A\subseteq\XX$ open we have \begin{equation}\label{nearliplintmp} \abs{\nvect(A)(v)}\le \abs{\nvect}(A)\Vert {v}\Vert_{|A}\quad{\text{for every }v\in\VV}, \end{equation} where the right hand side is intended as 0 in the case $\infty\cdot 0$. \end{lem} \begin{proof} If $\Vert {v}\Vert_{|A}=0$ the conclusion follows by weak locality. Thus we can assume $\Vert {v}\Vert_{|A}>0$ and then $\abs{\nvect}(A)<\infty$ (as otherwise the conclusion is obvious). Then the restriction of $|\nvect|$ to $A$ is inner regular and thus for any fixed $\varepsilon>0$ we can find a compact set $K\subseteq A$ such that $\abs{\nvect}(A\setminus K)\le\varepsilon$. Take $\varphi\in\mathcal R$ as in \eqref{eq:separK} for $K\subseteq A$. Then, $$\abs{\nvect(A)(v)}\le \abs{\nvect(K)(v)}+\varepsilon \Vert {v}\Vert\stackrel{\eqref{eq:c2}}=\abs{\nvect(K)(\varphi v)}+\varepsilon \Vert {v}\Vert\stackrel{\eqref{eq:trivialvar}}\le \abs{\nvect}(K)\Vert \varphi v\Vert+\varepsilon \Vert {v}\Vert$$ and, as $\varepsilon>0$ is arbitrary, the claim is proved recalling that $\Vert \varphi v\Vert\le\Vert{v}\Vert_{|A} $.
\end{proof} We then have the following general result: \begin{prop}\label{localisfiniteabs} Let $\VV$ be a normed $\mathcal R$-module and $\nvect$ be a local vector measure on it. Then $\nvect$ has bounded variation. More precisely, \begin{equation}\label{conicidence} \abs{\nvect}(A)=\Vert\nvect(A)\Vert'\quad\text{for any $A\subseteq\XX$ Borel}. \end{equation} \end{prop} \begin{proof} We divide the proof into several steps.\\ \textsc{Step 1}. Let $A\subseteq \XX$ be Borel with $\abs{\nvect}(A)=\infty$ and let $m\in\RR$. Taking into account the very definition of total variation, the definition of the dual norm and the inner regularity of the finite signed measure $v\,\cdot\,\nvect$ for $v\in\VV$, we can take $\{v_l\}_{l=1,\dots,L}\subseteq\VV$ with $\Vert v_l\Vert\le 1$ and $\{K_l\}_{l=1,\dots,L}$ pairwise disjoint compact subsets of $A$ such that $$m< \sum_{l=1}^L \nvect(K_l)(v_l).$$ Then by \eqref{eq:separK} we can take $\{\psi_l\}_{l=1,\dots,L}\subseteq\mathcal R$ with values in $[0,1]$ such that $\psi_l=1$ on a neighbourhood of $K_l$ and $\{\supp\psi_l\}_{l}$ are pairwise disjoint. We then set $v\defeq\sum_{l=1}^L \psi_l v_l\in\VV$ and we notice that by weak locality $\nvect(K_l)(v_l)=\nvect(K_l)(v)$. Hence, if we set $B\defeq\bigcup_{l=1}^L K_l$, $$m< \sum_{l=1}^L \nvect(K_l)(v_l)= \sum_{l=1}^L \nvect(K_l)(v)= \nvect(B)(v)\le \Vert\nvect(B)\Vert', $$ where the last inequality follows as $\Vert v\Vert\le 1$ by \eqref{compeq}. To sum up, we have proved in this step that given $A\subseteq\XX$ Borel with $\abs{\nvect}(A)=\infty$ and $m\in\RR$, we can find a (compact) set $B\subseteq A$ such that $\Vert \nvect(B)\Vert'\ge m$. \\\textsc{Step 2}. Assume by contradiction that $\abs{\nvect}(\XX)=\infty$. We construct two sequences $\{A_k\}_k$ and $\{B_k\}_k$ of Borel subsets of $\XX$ iteratively, starting from $A_0\defeq\XX$ and $B_0\defeq\XX$. Precisely, given $k\in\NN$, $ k\ge 1$, assume that we have defined $A_0,\dots,A_k$ and $B_0,\dots, B_{k}$.
Notice that, by construction, $\abs{\nvect}(A_k)=\infty$ and $B_k\subseteq A_k$. Then, by additivity, either $\abs{\nvect}(B_k)=\infty$ or $\abs{\nvect}(A_k\setminus B_k)=\infty$. In the former case we set $A_{k+1}\defeq B_k$, in the latter $A_{k+1}\defeq A_k\setminus B_k$. In either case we then take $B_{k+1}\subseteq A_{k+1}$ such that $\Vert\nvect(B_{k+1})\Vert'\ge {k+1}$, using Step 1. Now notice that for every $k$, $\Vert\nvect(B_{k+1})\Vert'\ge {k+1}$ and either $B_{k+1}\subseteq B_k$ or $B_k\cap B_{l}=\emptyset$ for every $l\in\NN$, $l\ge k+1$ (which of the two possibilities occurs may depend on $k$). If the second possibility occurs infinitely often, we can find a subsequence of pairwise disjoint subsets of $\XX$, $\{B_{k_l}\}_l$, such that $\Vert\nvect(B_{k_l})\Vert'\rightarrow\infty$. Otherwise, we can find a decreasing sequence of Borel subsets of $\XX$, $\{B_{k_l}\}_l$, such that $\Vert\nvect(B_{k_l})\Vert'\rightarrow\infty$. In both cases this leads to a contradiction with the countable additivity of $\nvect$ (which involves convergence in norm). \\\textsc{Step 3}. We show \eqref{conicidence}. By regularity, there is no loss of generality in assuming $A$ open. Let $\varepsilon>0$. With the same notation as in Step 1, we can find $$\abs{\nvect}(A)-\varepsilon\le \sum_{l=1}^L \nvect(K_l)(v_l).$$ Being $\abs{\nvect}$ a finite measure, we can find pairwise disjoint open sets $\{A_l\}_l$ such that $K_l\subseteq A_l\subseteq A$ and $\abs{\nvect}(A_l\setminus K_l)<\varepsilon/L$ for every $l$. Take then $\{\psi_l\}_l\subseteq\mathcal R$ as in Step 1, but with the additional constraint that $\supp\psi_l\subseteq A_l$ for every $l$. Notice that, by \eqref{eq:c2}, $\nvect(K_l)(v_l)=\nvect(K_l)(\psi_l v_l)$ for every $l$. Then, by weak locality, $$ \sum_{l=1}^L \nvect(K_l)(\psi_l v_l)\le\sum_{l=1}^L \nvect(A_l)(\psi_l v_l)+\varepsilon=\sum_{l=1}^L \nvect(A)(\psi_l v_l)+\varepsilon=\nvect(A)(v)+\varepsilon$$ where $v\defeq\sum_{l=1}^L \psi_l v_l\in\VV$. Being $\varepsilon>0$ arbitrary, we can conclude the proof.
\end{proof} From the finiteness of the total variation we easily get the following: \begin{prop}\label{vectvalbanach} Let $\VV$ be a normed $\mathcal R$-module. Then the space $(\MM_\VV,\abs{\,\cdot\,}(\XX))$ is a Banach space.\end{prop} \begin{proof} It is trivial to verify that $\abs{\,\cdot\,}(\XX)$ is indeed a norm. Let now $\{\nvect_n\}_n$ be a Cauchy sequence. Then, if $B$ is Borel and $v\in\VV$, also $\{\nvect_n(B)(v)\}_n\subseteq\RR$ is a Cauchy sequence, so that we can define (notice that in this way we get immediately weak locality) \begin{equation}\label{deflimit} \nvect(B)(v)\defeq\lim_n \nvect_n(B)(v). \end{equation} We have $$\abs{\nvect(B)(v)}=\lim_n\abs{\nvect_n(B)(v)}\le\liminf_n \abs{\nvect_n}(B)\Vert v\Vert$$ for every $B$ Borel. In particular, $\nvect(B)\in\VV'$ and $$\Vert\nvect(B)\Vert'\le\liminf_n \abs{\nvect_n}(B).$$ Clearly, $\nvect$ is finitely additive, but the equation above also implies $\sigma$-additivity: indeed, if $\{B_k\}_k$ is a sequence of pairwise disjoint Borel sets, setting $B^k\defeq\bigcup_{j=1}^k B_j$ and $B^\infty\defeq\bigcup_{j=1}^\infty B_j$ we can compute, for any $m$, $$ \Vert\nvect(B^\infty)-\nvect(B^k)\Vert'\le\liminf_n\abs{\nvect_n}(B^\infty\setminus B^k)\le\liminf_n \abs{\nvect_n-\nvect_m}(\XX)+ \abs{\nvect_m}(B^\infty\setminus B^k)$$ and notice that taking first $m$ big enough and then $k$ big enough, the right hand side of the above inequality converges to $0$. Therefore $\nvect$ is a local vector measure. Now if $B$ is Borel and $v\in\VV$ with $\Vert v\Vert\le 1$, computations similar to the ones above show that $$\Vert (\nvect-\nvect_n)(B)\Vert'\le \liminf_m\abs{\nvect_m-\nvect_n}(B)$$ and then, thanks to the definition of total variation and the super additivity of the $\liminf$, $$\abs{\nvect-\nvect_n}(\XX)\le\liminf_m\abs{\nvect_m-\nvect_n}(\XX).$$ This implies convergence in norm of $\nvect_n$ to $\nvect$. \end{proof} Justified by this proposition, here and below, when we write $\MM_\VV$, we mean the Banach space $(\MM_\VV,\abs{\,\cdot\,}(\XX))$.
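Before proceeding, we illustrate the norm just introduced on a model case; the computation below assumes that $\Cb(\XX)$ is viewed as a normed $\Cb(\XX)$-module and uses only the definition of total variation together with the regularity of finite Borel measures.

\begin{Example}
For finite signed Borel measures $\mu_1,\mu_2$ on $\XX$, consider the local vector measures on $\VV=\Cb(\XX)$ given by $\nvect_i(B)(v)\defeq\int_B v\dd{\mu_i}$, $i=1,2$. Since $(\nvect_1-\nvect_2)(B)(v)=\int_B v\dd{(\mu_1-\mu_2)}$, the same regularity argument identifying the total variation of measures of this form gives $$\abs{\nvect_1-\nvect_2}(\XX)=\abs{\mu_1-\mu_2}(\XX).$$ In particular, for such local vector measures, convergence in $\MM_\VV$ is precisely convergence in total variation of the underlying signed measures.\fr
\end{Example}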
In view of the following definition, we briefly recall the definition of the Bartle integral, which we take from \cite[p.\ 5]{DiestelUhl77} (see also the original article \cite{Bar}). If $\nvect$ is a vector valued measure and $f=\sum_{i=1}^n c_i\chi_{A_i}$ is a simple function, where $\{c_i\}_{i}\subseteq\RR$ and $\{A_i\}_i$ are pairwise disjoint Borel subsets, then we consider the map \begin{equation} \label{eq:defbart} A\mapsto \int_A f\dd{\nvect}\defeq\sum_{i=1}^n c_i \nvect(A\cap A_i)\in\VV'. \end{equation} It is clear that for any such $f$ and any at most countable family $\{B_j\}_j$ of pairwise disjoint Borel subsets of $\XX$ we have \[ \Big\|\sum_j\int_{B_j}f\dd{\nvect}\Big\|'\leq \|f\|_{\Lp^\infty(|\nvect|)} \sum_j|\nvect|(B_j)= \|f\|_{\Lp^\infty(|\nvect|)}|\nvect|(\cup_jB_j), \] having used also \eqref{conicidence}. This inequality shows that \eqref{eq:defbart} defines a linear and continuous map from the space of simple functions (endowed with the supremum norm) to the space of $\VV'$-valued measures, which can therefore be extended to a linear and continuous map from $\Lp^\infty(|\nvect|)$ to the space of $\VV'$-valued measures (see also Bartle's bounded convergence theorem \cite[Theorem 2.4.1]{DiestelUhl77}). Also, recalling \eqref{eq:c1} for the case of simple $f$'s and then arguing by approximation, we see that the measures in the image are weakly local, i.e.\ are local vector measures defined on $\VV$. We summarize all this in the following definition: \begin{defn}\label{bartdef} Let $\VV$ be a normed $\mathcal R$-module, $\nvect$ be a local vector measure defined on it and let $f:\XX\rightarrow\RR$ be a bounded Borel function. We define $f\nvect$ as the local vector measure given by $$f \nvect(A)\defeq \int_A f\dd{\nvect}$$ where the integral has to be understood as a Bartle integral, i.e.\ in the sense described above.
\end{defn} We point out that Bartle's bounded convergence theorem (see e.g.\ \cite[Theorem 2.4.1]{DiestelUhl77}) ensures that \begin{equation} \label{eq:bbct} \left.\begin{array}{l} \sup_n\|f_n\|_{\Lp^\infty(|\nvect|)}<\infty,\\ f_n\to f\quad |\nvect|\text{-a.e.} \end{array}\right\} \qquad\Rightarrow\qquad f_n\nvect(A)\to f\nvect(A)\quad \text{in }\VV'\qquad\text{for every }A\subseteq\XX\text{ Borel}. \end{equation} We notice the following general fact: \begin{prop}\label{absfoutside} Let $\VV$ be a normed $\mathcal R$-module, $\nvect$ be a local vector measure defined on it and let $f:\XX\rightarrow\RR$ be a bounded Borel function. Then it holds, as measures, \begin{equation}\label{letotvar} \abs{f\nvect}=\abs{f}{\abs{\nvect}}. \end{equation} \end{prop} \begin{proof} We show first that $ \abs{f\nvect}\le\abs{f}{\abs{\nvect}}$. Clearly it suffices to show that if $B$ is Borel, then $\Vert f \nvect(B)\Vert'\le \int_B\abs{f}\dd{\abs{\nvect}}$. This follows from the triangle inequality if $f$ is a simple function and then the claim is proved by approximation. Therefore, we conclude if we show that $$\int_\XX\abs{ f}\dd{\abs{\nvect}}\le \abs{f\nvect}(\XX). $$ As Proposition \ref{localisfiniteabs} shows that $\abs{\nvect}(\XX)<\infty$, an approximation argument justified by the inequality in \eqref{letotvar} that we have just proved yields that we can assume that $f$ is simple, say $f=\sum_j c_j\chi_{B_j}$, where $c_j\in\RR$ for every $j$ and $\{B_j\}_j$ is a finite Borel partition of $\XX$. Now we can compute \begin{equation*}\notag \int_\XX \abs{f}\dd{\abs{\nvect}}=\sum_j \abs{c_j}\abs{\nvect}(B_j)=\sum_j \abs{c_j\nvect}(B_j)\stackrel{\eqref{eq:defbart}}=\sum_j\abs{f\nvect}(B_j)=\abs{ f\nvect}(\XX).\qedhere \end{equation*} \end{proof} \begin{lem}\label{foutside} Let $\VV$ be a normed $\mathcal R$-module, and let $\nvect$ be a local vector measure.
Then, for every $f\in\mathcal R $ and $v\in\VV$, we have \begin{equation}\label{claimfoutside} (f v)\,\cdot\,\nvect=f(v\,\cdot\,\nvect)=v\,\cdot\,(f \nvect). \end{equation} \end{lem} \begin{proof} The second equality in \eqref{claimfoutside} follows directly from the definition of $f\nvect$ (and an approximation argument), so it is enough to prove the equality of the first and last term. By regularity of the signed measures $(f v)\,\cdot\,\nvect$ and $ v\,\cdot\,(f \nvect)$ it suffices to show that $$\nvect(A)(f v)=f\nvect(A)(v) $$ when $A$ is open. Recall that Proposition \ref{localisfiniteabs} implies that $\nvect$ and $f\nvect$ have bounded variation. As $f$ is bounded, we can assume that $\abs{f(x)}\le 1$ for every $x$. Fix now $\varepsilon>0$ and let $$-1-\varepsilon=t_0<t_1<\dots<t_n=1+\varepsilon$$ be a collection of real numbers such that $t_i-t_{i-1}\le \varepsilon$ for every $i=1,\dots,n$ and \begin{equation}\label{liplintmp} \abs{\nvect }(\{f=t_i\})=0 \quad\text{and} \quad\abs{f \nvect }(\{f=t_i\})=0\qquad\text{for every }i=0,\dots,n. \end{equation} Consider now the open sets $\{A_i\defeq A\cap\{f\in(t_{i-1},t_i)\}\}_{i=1,\dots,n}$ and notice that, for every $i=1,\dots,n$, $$f\nvect(A_i)(v)=t_i\nvect(A_i)(v)+(f-t_i)\nvect(A_i)(v)$$ and also $$\nvect(A_i)(f v)=t_i\nvect(A_i)(v)+\nvect(A_i)(f v-t_i v).$$ Subtracting these two equalities term by term and summing over $i$, recalling \eqref{liplintmp}, \begin{equation}\label{nnearliplintmp} \abs{\nvect(A)(f v)-f\nvect(A)(v )}\le \sum_{i=1}^n \left(\abs{\nvect(A_i) (f v - t_i v)}+\abs{(f-t_i)\nvect(A_i)(v)}\right). \end{equation} We then use \eqref{letotvar} and \eqref{nearliplintmp} to deduce from \eqref{nnearliplintmp} that $$ \abs{\nvect(A)(f v)-f\nvect(A)(v )}\le \sum_{i=1}^n 2\varepsilon\abs{\nvect }(A_i)\Vert v\Vert\le 2\varepsilon \abs{\nvect}(\XX)\Vert v\Vert.$$ As $\varepsilon>0$ is arbitrary, this concludes the proof.
\end{proof} Having a notion of `total variation' naturally leads to the following definition: \begin{defn}[Polar decomposition] Let $\VV$ be a normed $\mathcal R$-module, and let $\nvect$ be a local vector measure. We say that $\nvect$ admits a polar decomposition if there exists a weakly$^*$ $\abs{\nvect}$-measurable map $L_\nvect:\XX\rightarrow\VV'$ such that $\nvect=L_\nvect \abs{\nvect}$, in the sense that for every $v\in\VV$, we have $L_\nvect(v)\in\Lp^1(\abs{\nvect})$ and \begin{equation}\label{reprpolarabs} \nvect(A)(v)= \int_A L_\nvect(v)\dd{\abs{\nvect}}\quad\text{for every $A\subseteq\XX$ Borel}, \end{equation} where here and in what follows we write $L_\nvect(v)$ for the map $x\mapsto L_\nvect(x)(v)$. We require moreover that for every $x\in\XX$ it holds that \begin{equation}\label{inequalitygerm} \abs{L_\nvect(x)(v)}\le \abs{v}_g (x)\quad\text{for every }v\in\VV. \end{equation} \end{defn} It easily follows from the above definition that the polar decomposition is `weakly unique', in the sense that if $\nvect=L_\nvect\abs{\nvect}=L'_\nvect\abs{\nvect}$, then for every $v\in\VV$, $$L_\nvect(v)=L'_\nvect(v)\quad\abs{\nvect}\text{-a.e.}$$ Also, if $f:\XX\rightarrow\RR$ is a bounded Borel function, then $$ f\nvect(A)(v)=\int_A f L_\nvect(v)\dd{\abs{\nvect}}. $$ {Finally, we remark that if $L_\nvect$ satisfies \eqref{inequalitygerm}, then for every $x\in\XX$, $L_\nvect(x)$ is tight.} In view of the following proposition, it is useful to recall the definition of essential supremum (see e.g.\ \cite[Lemma 3.2.1]{GP19} for the well known proof of existence and uniqueness and \cite[Section 2.4]{MNP91} for much more on the topic). \begin{lem} Let $(\XX,\mu)$ be a measure space with $\mu$ $\sigma$-finite. If $\{f_\alpha\}_{\alpha\in A}$ is a collection of extended real valued $\mu$-measurable functions, then there exists a unique (up to equality $\mu$-a.e.) 
$\mu$-measurable function $g:\XX\rightarrow\RR\cup\{\pm\infty\}$, called the essential supremum of $\{f_\alpha\}_{\alpha\in A}$ and denoted by $\esssup_{\alpha\in A} f_\alpha$ (or $\mu-\esssup_{\alpha\in A} f_\alpha$ when we want to stress the dependence on the measure), such that \begin{enumerate}[label=\roman*)] \item $f_\alpha\le g\ \mu$-a.e.\ for every $\alpha\in A$, \item if $f_\alpha\le h\ \mu$-a.e.\ for every $\alpha\in A$, then $h\ge g\ \mu$-a.e. \end{enumerate} \end{lem} \begin{prop}\label{allhaspolarabs} Let $\VV$ be a normed $\mathcal R$-module, and let $\nvect$ be a local vector measure. Then $\nvect$ admits the (`weakly unique') polar decomposition $L_\nvect{\abs{\nvect}}$, where $L_\nvect$ satisfies the `weak' identity \begin{equation}\notag L_\nvect(v)=\dv{(v\,\cdot\,\nvect)}{\abs{\nvect}}\quad\abs{\nvect}\text{-a.e.\ for every }v\in\VV. \end{equation} Moreover, it holds that \begin{equation}\label{essusp} \abs{\nvect}-\esssup_{v\in\VV, \Vert v\Vert\le 1} L_\nvect(v)=1. \end{equation} \end{prop} \begin{proof} For every $v\in\VV$, \eqref{nearliplintmp} grants that $v\,\cdot\,\nvect\ll\abs{\nvect}$, so that we can define $$\rho_v\defeq \dv{(v\,\cdot\,\nvect)}{\abs{\nvect}}\in\Lp^1(\abs{\nvect})$$ and notice that \begin{equation}\label{pretrivbound} \nvect(A)(v)=\int_A\rho_v\dd{\abs{\nvect}} \end{equation} whenever $A\subseteq\XX $ is Borel. To define the map $L_\nvect$ we follow the argument given in \cite[Lemma 3.2]{GigliHan13}. Let the maps $\leb:\Lploc^1(\abs{\nvect})\rightarrow\mathcal{B}(\XX)$ and $\borrep:\Lploc^1(\abs{\nvect})\rightarrow\{f:\XX\rightarrow\RR \text{ Borel}\}$ be given by Corollary \ref{thmapp} in the Appendix.
For every $x\in\XX$, define $$\VV_x\defeq\left\{v\in\VV: x\in\leb(\rho_v)\right\}.$$ Property $iii)$ in Corollary \ref{thmapp} and the linearity of the map $v\mapsto\rho_v$ grant that for every $x\in\XX$, $\VV_x$ is a vector subspace of $\VV$ and that $$L_\nvect(x)(v)\defeq \borrep(\rho_v)(x)\quad\text{for }v\in \VV_x$$ is linear in $v$. Now, \eqref{nearliplintmp}, \eqref{pretrivbound} and an immediate approximation argument easily yield that for every $x\in\XX$ and $r>0$ $$ \Vert\rho_v\Vert_{\Lp^\infty(|\nvect|\mressmall B_r(x))}\le \Vert v\Vert_{|B_r(x)} $$ so that, using also \eqref{buoncontrollo}, we get that for every $x\in\XX$, \begin{equation}\label{trivbound} \abs{L_\nvect(x)(v)}\le\abs{ v}_g(x)\quad\text{for every } v\in \VV_x. \end{equation} Fix $x\in\XX$ and consider the equivalence relation on $\VV$ given by the seminorm $\abs{\,\cdot\,}_g(x)$. Let then ${\WW}_x$ be the quotient space, endowed with the factorization of $\abs{\,\cdot\,}_g(x)$. Also, by \eqref{trivbound}, $L_\nvect(x)$ factorizes through the projection of $\VV_x$ into ${\WW}_x$. Then, using the Hahn-Banach theorem, we can extend the factorization of $L_\nvect(x)$ to ${\WW}_x$ and then, taking the composition with the projection, we obtain an extension of $L_\nvect(x)$ to a map defined on the whole of $\VV$ which still satisfies \eqref{trivbound} for every $v\in\VV$. Property $i)$ in Corollary \ref{thmapp} implies that for any $v\in\VV$, we have $x\in\leb(\rho_v)$ for $\abs{\nvect}$-a.e.\ $x$, so that, using also property $ii)$, $$L_\nvect(v)=\rho_v\quad\abs{\nvect}\text{-a.e.}$$ This implies that for every $v\in\VV$, $L_\nvect(v)$ is $\abs{\nvect}$-measurable and satisfies $$\nvect(A)(v)=\int_A \rho_v\dd{\abs{\nvect}}=\int_A L_\nvect(v)\dd{\abs{\nvect}}\quad \text{for every $A\subseteq\XX$ Borel}.$$ Now we prove the last claim. Set for brevity $g\defeq \esssup_{v\in\VV, \Vert v\Vert\le 1} L_\nvect(v)$.
As for every $B\subseteq \XX$ Borel and $v\in\VV$ it holds that $$\Big|\int_B L_\nvect(v)\dd{\abs{\nvect}}\Big|=\abs{\nvect(B)(v)}\le \abs{\nvect}(B)\Vert v\Vert,$$ we see that $g\le 1\ \abs{\nvect}$-a.e. Now, let $B\subseteq\XX$ be Borel with $\abs{\nvect}(B)<\infty$ and let $\pi$ be a finite Borel partition of $B$. If $\varepsilon>0$, we can find $\{v_A\}_{A\in\pi}\subseteq\VV$ such that $\Vert v_A\Vert\le 1$ for every $A\in\pi$ and $$\sum_{A\in\pi}\Vert \nvect(A)\Vert'\le\sum_{A\in\pi} \nvect(A)(v_A)+\varepsilon=\sum_{A\in\pi}\int_A L_\nvect(v_A)\dd{\abs{\nvect}}+\varepsilon\le \sum_{A\in\pi}\int_A g\dd{\abs{\nvect}}+\varepsilon.$$ Being $\varepsilon$ and $\pi$ arbitrary, it follows that $$ \abs{\nvect}(B)\le\int_B g \dd{\abs{\nvect}}$$ which, combined with $g\le 1\ \abs{\nvect}$-a.e.\ and the fact that $\abs{\nvect}$ is finite, implies $g =1\ \abs{\nvect}$-a.e. \end{proof} \begin{rem} As the proof shows, rather than imposing \eqref{inequalitygerm} to hold, we could only ask for the bound \begin{equation} \notag \abs{L_\nvect(x)(v)}\le \|v\|\qquad\text{for }\abs{\nvect}\text{-a.e.\ }x\in\XX \end{equation} to hold for every $v\in\VV$. Nevertheless, even with this weaker requirement, all the conclusions of the proposition remain in place, up to the fact that \eqref{inequalitygerm} has to be interpreted as $$ |L_\nvect(x)(v)|\le |v|_g(x)\qquad|\nvect|\text{-a.e.\ $x$, for every $v\in\VV$}, $$ where this last inequality is a consequence of \eqref{buoncontrollo} of Corollary \ref{thmapp} in the Appendix, as in the proof of Proposition \ref{allhaspolarabs}. We also point out that the use of the Hahn-Banach theorem is not really required in the proof.
In fact, the rather constructive argument gives a collection $(\VV_x,L_\nvect(x))$ indexed by $x\in\XX$, with $\VV_x$ a subspace of $\VV$ and $L_\nvect(x)\in \VV_x'$ for every $x\in\XX$, with the following properties: \begin{itemize} \item[i)] For every $v\in\VV$ the set of $x\in\XX$ such that $v\in \VV_x$ is Borel and with complement $|\nvect|$-negligible, \item[ii)] The map $x\mapsto L_\nvect(x)(v)$, set, say, to $0$ if $v\notin\VV_x$, is Borel and bounded in absolute value by $\|v\|$, \item[iii)] The identity \eqref{reprpolarabs} (which makes sense thanks to the above and $|\nvect|(\XX)<\infty$) holds for every $v\in\VV$. \end{itemize} These properties could also be used as a definition of polar decomposition: the price one pays for doing so is to keep track of the subspaces $\VV_x$ where the operators $L_\nvect(x)$ are defined, but in exchange one gains the Borel regularity stated in $\rm i)-ii)$ above, in place of $|\nvect|$-measurability, and a Choice-free proof. Notice in particular that the Axiom of Choice in a form stronger than Countable Dependent Choice - i.e.\ in the form of the general Hahn-Banach theorem - is used in the proof once for each point $x\in\XX$ in order to extend the operators $L_\nvect(x)$ to regions where they will almost surely never be applied; such extensions are irrelevant from the perspective of the defining formula \eqref{reprpolarabs}. \fr\end{rem} \begin{rem}\label{nonimportaR} Let $\VV$ be both a normed $\Rr$-module and a normed $\Rr'$-module, where $\Rr$ and $\Rr'$ approximate open sets. Assume also $\Rr\subseteq\Rr'$. Let $\nvect:\mathcal B(\XX)\rightarrow \VV'$ be a $\VV'$-valued measure. Then the following assertions are equivalent (notice that the local seminorm $\Vert\,\cdot\,\Vert_{|A}$ may not be independent of the choice of the subring $\Rr$ or $\Rr'$): \begin{enumerate} \item $\nvect$ is weakly local, considering $\VV$ as a normed $\Rr$-module, \item $\nvect$ is weakly local, considering $\VV$ as a normed $\Rr'$-module.
\end{enumerate} We now prove this assertion. Let us denote by $\Vert\,\cdot\,\Vert_{A,\Rr}$ and $\Vert\,\cdot\,\Vert_{A,\Rr'}$ the local seminorms induced by the structure of normed $\Rr$-module and normed $\Rr'$-module, respectively. Clearly, $\Vert\,\cdot\,\Vert_{A,\Rr}\le\Vert\,\cdot\,\Vert_{A,\Rr'}$ as $\Rr\subseteq\Rr'$, so that it is immediate to see that $(1)\Rightarrow (2)$. Conversely, assume that $\nvect$ is weakly local, considering $\VV$ as a normed $\Rr'$-module. Take $v\in\VV$ with $\Vert v\Vert_{A,\Rr}=0$, for some open set $A$. Now we write, exploiting Proposition \ref{allhaspolarabs}, $\nvect(A)(v)=\int_A L_\nvect(v)\dd{|\nvect|}$. Take now $\{f_k\}_k\subseteq \Rr$ as in item $(1)$ of Remark \ref{rem:Rpropr} and use Lemma \ref{foutside} together with dominated convergence to compute $$ \nvect(A)(v)=\lim_k \int_A f_k L_\nvect(v)\dd{|\nvect|}=\lim_k \int_A L_\nvect( f_k v)\dd{|\nvect|}=\lim_k \nvect(A)(f_k v)=0, $$ where we used that $\Vert v\Vert_{A,\Rr}=0$ in the last equality. The conclusion follows. Notice that we have in fact proved the following: if $\nvect$ satisfies one of the equivalent conditions above, then, for every $A\subseteq\XX$ open, $$ |\nvect(A)(v)|\le \abs{\nvect}(A)\Vert v\Vert_{A,\Rr}\le\abs{\nvect}(A)\Vert v\Vert_{A,\Rr'} $$ (the total variation of $\nvect$ is by definition independent of the choice of the subring $\Rr$ or $\Rr'$). This is due to the fact that we assume that $\Rr$ approximates open sets. We point out that if $\nvect=L_\nvect|\nvect|$ is the polar decomposition given by Proposition \ref{allhaspolarabs} for the $\Rr'$-normed module $\VV$, it may be false that (with the obvious notation) for every $x\in\XX$, $$ |L_\nvect(x)(v)|\le |v|_{g,\Rr}(x)\quad\text{for every $v\in\VV$.} $$ Clearly, this issue can be immediately solved by building the polar decomposition for the $\Rr$-normed module $\VV$ instead of considering the polar decomposition for the $\Rr'$-normed module $\VV$.
\fr \end{rem} If $\mu$ is a finite (signed) measure on the Polish space $\XX$, then \begin{equation} \label{eq:riesz} \varphi\quad\mapsto\quad F(\varphi):=\int \varphi\,\dd\mu \end{equation} is a continuous linear functional on $\Cb(\XX)$ with operator norm equal to $|\mu|(\XX)$. The classical and celebrated Riesz-Markov-Kakutani theorem ensures that if $\XX$ is compact, then all continuous linear functionals on $C(\XX)$ can be represented this way. The non-compact case is more delicate, and is handled by the Daniell-Stone theorem: if $\XX$ is Polish then finite measures correspond, via the map \eqref{eq:riesz}, to those functionals $F:\Cb(\XX)\to\RR$ such that \begin{equation} \label{eq:tightF} F(\varphi_n)\to 0\quad\text{ whenever $(\varphi_n)\subseteq\Cb(\XX)$ is such that $\varphi_n(x)\searrow 0$ for every $x\in\XX$.} \end{equation} We shall call property \eqref{eq:tightF} \emph{tightness} (in the literature it is also called `continuity in 0'). Given that we are now going to investigate the validity of a Riesz-like theorem in our setting, it is natural to look for a counterpart of tightness in the framework we are now working in. We propose the following: \begin{defn}[Tightness]\label{def:tight} Let $\VV$ be a normed $\mathcal R$-module, and $F\in\VV'$. We say that $F$ is tight with respect to $\mathcal{R}$ if for every $v\in\VV$ and every sequence \begin{equation}\label{defsequ} \{\varphi_n\}_n\subseteq\mathcal{R}\quad\text{with }\varphi_n(x)\searrow 0\text{ for every }x\in\XX, \end{equation} we have $F(\varphi_n v)\rightarrow 0$. We will dispense with specifying the ring $\mathcal{R}$ when it is clear from the context or when $\mathcal{R}=\Cb(\XX)$. \end{defn} \begin{rem}\label{tightrem} Notice that, if every $v\in\VV$ has compact support (i.e.\ support contained in a compact set - this occurs in particular if the space is compact), then every functional $F\in\VV'$ is tight. This follows easily from Dini's monotone convergence theorem.
In general, already the case $\VV=\Cb(\XX)$ shows that (if one assumes a sufficiently strong version of Choice) not every functional $F\in\VV'$ is tight: see e.g.\ the functional $\ell\in\VV'$ defined in Example \ref{localityvv}.\fr \end{rem} The link between the concept of tightness and that of measure (positive real-valued for the moment - this will be further clarified later) is given in the following key lemma. Notice that here the assumption that $\mathcal R$ approximates open sets is crucial. \begin{lem}\label{bbtightiffmeas} Let $\VV$ be a normed $\Rr$-module and $F\in\VV'$. Then $F$ is tight with respect to $\mathcal{R}$ if and only if \begin{equation}\label{mudef} \mu(A)\defeq\sup \left\{F(v):v\in\VV,\ \Vert v\Vert \le 1\text{ and } \supp v\subseteq A \right\} \end{equation} is the restriction to open sets of a finite Borel measure. \end{lem} \begin{proof} Assume that $\mu$ as in \eqref{mudef} is the restriction to open sets of a finite Borel measure. We prove that $F$ is tight with respect to $\mathcal{R}$. Fix then $v\in\VV$ and let $\{\varphi_n\}_n$ be as in \eqref{defsequ}. Fix also $\varepsilon>0$. We use the regularity of $\mu$ to find a compact set $K\subseteq\XX$ such that $\mu(\XX\setminus K)\le\varepsilon$. By Dini's monotone convergence theorem, up to discarding finitely many functions in $\{\varphi_n\}_n$, we can assume that $\varphi_{n}(x)< \varepsilon$ for every $x\in K$. By \eqref{eq:separK} we can take a sequence $\{\phi_n\}_{n}\subseteq\mathcal R$ valued in $[0,1]$ such that for every $n$, $\supp\phi_n\subseteq\{\varphi_n<\varepsilon\}$ and $\phi_n=1$ on a neighbourhood of $K$. Then, $$ \abs{F(\varphi_n v)}\le\abs{F(\varphi_n(1-\phi_n) v)}+\abs{F(\varphi_n\phi_n v)}\le \Vert v\Vert\left(\mu(\XX\setminus K)+\varepsilon\right)\le 2\varepsilon\Vert v\Vert,$$ where we used the definition of $\mu$ as $\supp (\varphi_n(1-\phi_n) v)\subseteq\XX\setminus K$. Then the conclusion follows as $\varepsilon>0$ is arbitrary.
Conversely, assume that $F$ is tight with respect to $\mathcal{R}$. We prove that $\mu$ as defined in \eqref{mudef} is the restriction to open sets of a finite Borel measure. To this aim, we can use the Carathéodory criterion (see e.g.\ \cite{AFP00}) and it is then enough to verify (all the sets in consideration are assumed to be open): \begin{enumerate} \item $\mu(A)\le\mu(B)$ if $A\subseteq B$, \item $\mu(A\cup B)\ge\mu(A)+\mu(B)$ if $\dist(A,B)>0$, \item $\mu(A)=\lim_k\mu(A_k)$ if $A_k\nearrow A$, \item$\mu(A\cup B)\le \mu(A)+\mu(B)$. \end{enumerate} We notice that $(1)$ and $(2)$ follow trivially from the definition of $\mu$ and that $(2)$ does not even need the sets to be well separated (so that we do not even need to consider the distance $\dist$). We prove now property $(3)$. Take $v\in\VV$ with $\Vert v\Vert \le 1$ and $\supp v\subseteq A$. Let then $C\subseteq A$ be a closed set with $\supp v\subseteq C$. By the fact that $\mathcal R$ approximates open sets, take $\{\psi_n\}_n\subseteq \mathcal R$ and, for every $k$, $\{\varphi^k_n\}_n\subseteq \mathcal R$, all valued in $[0,1]$, such that $\psi_n \nearrow \chi_{\XX\setminus C}$ and such that for every $k$, $\varphi_n^k\nearrow\chi_{A_k}$. We can, and will, assume that $\supp\psi_n\subseteq \XX\setminus C$ and that $\supp\varphi_n^k\subseteq A_k$ for every $n,k$. Let now $(k_i,n_i)_{i\in\NN}$ be an enumeration of $\NN^2$ and define \[ \xi_i:=\psi_i\vee\hat \varphi_i\in\mathcal R\qquad\text{ where }\qquad \hat\varphi_i:= \bigvee_{j\leq i} \varphi^{k_j}_{n_j}. \] It is clear that $\xi_i\nearrow 1$, hence by tightness we have \begin{equation} \label{eq:dat} F(v)=\lim_i F(\xi_i v). \end{equation} Also, the identity $\xi_i=\hat\varphi_i+(\psi_i-\hat\varphi_i)^+$, the inclusion $\supp((\psi_i-\hat\varphi_i)^+)\subseteq\supp \psi_i \subseteq \XX\setminus C$ and \eqref{eq:suppprod} give $\supp (\xi_i v)\subseteq A_{\hat k_i}$, where $\hat k_i:=\max_{j\leq i}k_j$.
Therefore, recalling \eqref{eq:dat}, we have $$F(v)=\lim_i F(\xi_i v)\le \limsup_i \mu(A_{\hat k_i})= \lim_k\mu(A_k)$$ and since this holds for every $v\in\VV$ with $\|v\|\leq 1$, we have proved $\mu(A)\le \lim_k \mu(A_k)$ and the claim. We pass to $(4)$. Take $v\in\VV$ with $\supp v\subseteq A\cup B$, say $\supp v\subseteq C\subseteq A\cup B$ for some $C\subseteq\XX$ closed. As $\mathcal R$ approximates open sets, take functions $\{\psi^A_n\}_n,\{\psi^B_n\}_n\subseteq\mathcal R$ that are valued in $[0,1]$, such that $\psi^A_n\nearrow \chi_A$ and for every $n$, $\supp\psi^A_n\subseteq A$, and such that the corresponding properties hold for $\{\psi^B_n\}_n$. Take $\{\psi_n\}_n\subseteq\mathcal R$ valued in $[0,1]$ such that $\psi_n\nearrow \chi_{\XX\setminus C}$ and for every $n$, $\supp\psi_n\subseteq\XX\setminus C$. Finally, let $\{\xi_n\}_n\subseteq\mathcal R$ be defined by $\xi_n\defeq \psi_n\vee\psi_n^A\vee \psi_n^B$. It is easy to verify that $\xi_n\nearrow 1$ and - arguing as before - that for every $n$, we can write $$\xi_n v=(\psi_n^A \vee \psi_n^B) v+ (\psi_n-(\psi_n^A \vee \psi_n^B))^+ v=\psi_n^A v+(\psi_n^B -\psi_n^A)^+ v.$$ Hence $$F(v)=\lim_n F(\xi_n v)\le \limsup_n \left(F(\psi_n^A v)+F((\psi_n^B -\psi_n^A)^+ v)\right)\le \mu(A)+\mu(B)$$ and taking the supremum among all $v$ as above we conclude. \end{proof} We then have the following version of the Riesz representation theorem (compare e.g.\ with \cite[Section 7.10]{Bogachev07}): \begin{thm}[Riesz's theorem for local vector measures]\label{weakder} Let $\VV$ be a normed $\Rr$-module and $F\in\VV'$ be tight. Then there exists a unique local vector measure $\nvect_F$ defined on $\VV$ such that \begin{equation}\label{defining} \nvect_F(\XX)(v)=F(v)\quad\text{for every }v\in\VV. \end{equation} Moreover, it holds that $\abs{\nvect_F}=\mu$, where $\mu$ is the finite Borel measure given by Lemma \ref{bbtightiffmeas}. \end{thm} \begin{proof} Let $\mu$ be the finite Borel measure given by Lemma \ref{bbtightiffmeas}.
Let $B\subseteq\XX$ be Borel. Given $\varepsilon>0$, take $K_\varepsilon\subseteq B\subseteq A_\varepsilon$, with $K_\varepsilon$ compact and $A_\varepsilon$ open, such that $\mu(A_\varepsilon\setminus K_\varepsilon)\le \varepsilon$. Let $\varphi_\varepsilon\in\mathcal R$ be as in \eqref{eq:separK} for $K_\varepsilon\subseteq A_\varepsilon$. Notice that \begin{equation}\label{riesztmp} \abs{F(\varphi_\varepsilon v)}\le\mu(A_\varepsilon)\Vert \varphi_\varepsilon v\Vert\le\mu(A_\varepsilon)\Vert v\Vert. \end{equation} We define the linear functional \begin{equation} \label{eq:apprN} \nvect_F(B)(v)\defeq\lim_{\varepsilon\searrow 0} F(\varphi_\varepsilon v) \end{equation} where $\varphi_\varepsilon$ is any function as above for that given $\varepsilon>0$. We claim that this limit exists and is independent of the choice of $\{K_\varepsilon,A_\varepsilon,\varphi_\varepsilon\}_{\varepsilon>0}$. Indeed, take $\varepsilon>0$ and let $\varphi_\varepsilon$ and $\varphi_\varepsilon'$ be as above. Then, with the obvious notation, $\varphi_\varepsilon=\varphi_{\varepsilon}'=1$ on a neighbourhood of $K_\varepsilon\cap K_{\varepsilon}'$. Thus $\supp( \varphi_\varepsilon-\varphi_{\varepsilon}')\subseteq( (A_\varepsilon\cup A_{\varepsilon}')\setminus( K_\varepsilon\cap K_{\varepsilon}'))$ implies \begin{equation}\label{rieszfund} \abs{F(\varphi_\varepsilon v)- F(\varphi_{\varepsilon} 'v)} \le \Vert v\Vert \mu((A_\varepsilon\cup A_{\varepsilon}')\setminus( K_\varepsilon\cap K_{\varepsilon}'))\le 2 \Vert v\Vert \varepsilon. \end{equation} Notice that the inequality above holds in particular if $\varphi_\varepsilon'=\varphi_{\varepsilon'}$ with $0<\varepsilon'<\varepsilon$. Also, by \eqref{riesztmp}, $\nvect_F(B)\in\VV'$ for every $B$ Borel. Notice that the very definition of $\nvect_F$ implies that \eqref{defining} holds (to see this, just argue as for \eqref{rieszfund} with $1$ in place of $\varphi_{\varepsilon}'$).
We show now that $\nvect_F$, as just defined, is indeed a local vector measure. By \eqref{riesztmp}, it holds $$\abs{\nvect_F(B)(v)}\le\mu(B)\Vert v\Vert\quad\text{for every $B\subseteq\XX$ Borel},$$ so that $\nvect_F(B)\in\VV'$ with \begin{equation}\label{fjsdnodwq} \Vert \nvect_F(B)\Vert'\le \mu(B)\qquad\text{for every }B\subseteq\XX\text{ Borel}. \end{equation} Finite additivity of $\nvect_F(\,\cdot\,)$ follows easily from its definition using a suitable choice of cut-off functions, while to prove $\sigma$-additivity one only has to use \eqref{fjsdnodwq} noticing that if $\{B_k\}_k$ is a sequence of pairwise disjoint Borel sets, it holds that $\sum_{k=n}^\infty\mu( B_k)\rightarrow 0$ as $n\rightarrow\infty$. Therefore $\nvect_F$ is a $\VV'$-valued measure. To show weak locality we pick $A\subseteq\XX$ open, $v\in\VV$ with $\|v\|_{|A}=0$ and notice that in the construction above we can pick $A_\varepsilon=A$ for every $\varepsilon>0$. With this choice we have $\varphi_\varepsilon v=0$ and thus $\nvect_F(A)(v)=0$, as desired. Also, by \eqref{fjsdnodwq}, $\abs{\nvect_F}\le \mu$ and, by \eqref{defining}, it is clear that $\mu(\XX)\le\abs{\nvect_F}(\XX)$ so that we have indeed $\mu=\abs{\nvect_F}$. Uniqueness follows immediately from \eqref{defining} and \eqref{conicidence} as they grant that \[ |\nvect-\tilde\nvect|(\XX)=\|\nvect-\tilde\nvect\|'(\XX)=\sup_{\|v\|\leq 1}(\nvect-\tilde\nvect)(\XX)(v)=0 \] for every $\nvect,\tilde\nvect$ satisfying the conclusion. \end{proof} \begin{rem} The standard proof of Riesz's theorem typically starts by taking $L\in C(\XX)'$ (say $\XX$ compact), decomposes it into its positive and negative parts to reduce to the case of positive functionals, then by monotonicity finds the value of the measure on open and/or compact sets and finally, by approximation, the value of the measure on any set.
Inspecting our arguments, we see that the mathematical principles behind the proof of Theorem \ref{weakder} are similar, even though the lack of an order on $\VV$ forces us to avoid arguments by monotonicity in favour of those based on approximation and domination (as in \eqref{eq:apprN}, \eqref{fjsdnodwq}). Let us illustrate how - somewhat conversely - one can recover from our statement the classical Riesz's theorem about the dual of $\Cc(\XX)$ for $\XX$ a locally compact metric space. Start by noticing that $\VV=\Cc(\XX)$ is a normed $\Cb(\XX)$-module and that, as seen in Remark \ref{tightrem}, any $F\in\Cc(\XX)'$ is automatically tight. Thus by Theorem \ref{weakder} and Proposition \ref{allhaspolarabs} we can represent $F$ as $L\abs{\nvect}$, so that $$ F(f)=\int_\XX L(f)\dd{\abs{\nvect}}\quad\text{for every }f\in\Cc(\XX). $$ Using local compactness and then separability, we build a countable sequence $\{\varphi_n\}_n\subseteq\Cc(\XX)$ such that $\varphi_n(x)\in[0,1]$ for every $x\in\XX$ and the interiors of $\{\varphi_n=1\}$ cover $\XX$ (to show this use e.g.\ the Lindel\"{o}f property of $\XX$). Then we define $\sigma:\XX\rightarrow\RR$ as $$\sigma(x)\defeq L(\varphi_n)(x)\quad\text{on the interior of $\{\varphi_n=1\}$}$$ (such a function is well defined up to $\nvect$-negligible sets and is a $|\nvect|$-measurable map). Building upon Lemma \ref{foutside}, it is easy to verify that the identity $L(f)(x)= f(x)\sigma(x)$ holds $\abs{\nvect}$-a.e.\ for every $f\in \Cc(\XX)$. On the other hand, the identity \eqref{essusp} in this case reads as $$ \esssup_{f\in \Cc(\XX),\ \Vert f\Vert\le 1} L(f)=1,$$ which in turn easily implies $\sigma(x)\in\{\pm 1\}$ $\abs{\nvect}$-a.e. Collecting what we have observed so far, we see that the measure $\mu\defeq\abs{\nvect}\mres \{\sigma=1\}-\abs{\nvect}\mres \{\sigma=-1\}$ satisfies $$ F(f)=\int_\XX f\dd{\mu}\quad\text{for every }f\in\Cc(\XX),$$ as desired.
\fr \end{rem} A direct consequence of this last result is the following isomorphism of Banach spaces: \begin{cor}\label{Rieszcor} Let $\VV$ be a normed $\Rr$-module and consider the Banach space \begin{align} {\sf T}&\defeq\left(\left\{F\in\VV': \text{$F$ is tight w.r.t.\ $\mathcal R$}\right\},\Vert\,\cdot\,\Vert'\right).\notag \end{align} Then the map \[ \Psi:\MM_\VV\rightarrow {\sf T}\quad\text{defined as}\quad \nvect\mapsto \nvect(\XX) \] is a bijective isometry. \end{cor} \begin{proof} It is easy to show that $\sf T$ is a Banach space. Also, we know that $\Psi$ is linear, takes values in $\VV'$, preserves the norm (by \eqref{conicidence}) and that its image contains ${\sf T}$ thanks to Theorem \ref{weakder}. Thus it only remains to prove that $\Psi(\nvect)$ is a tight element of $\VV'$ for any $\nvect\in \MM_\VV$. To this aim, fix $\nvect$, let $v\in\VV$ and let $\{\varphi_n\}_n$ be as in \eqref{defsequ}. Also, let $\varepsilon>0$ and take, by regularity of $|\nvect|$, a compact set $K\subseteq\XX$ such that $\abs{\nvect}(\XX\setminus K)<\varepsilon$. By Dini's monotone convergence Theorem, up to discarding finitely many functions of $\{\varphi_n\}_n$, we can assume that $\varphi_n<\varepsilon$ on $K$ for every $n$. By continuity, $\varphi_n<\varepsilon$ on an open neighbourhood of $K$, say $A_n$, for every $n$. Set also $S\defeq\sup_n\Vert \varphi_n\Vert_{\infty}<\infty.$ We can then compute, recalling \eqref{nearliplintmp} and using the trivial bound $\|\varphi_n v\|_{|A_n}\leq\varepsilon\|v\|$, $$\abs{\nvect(\XX)(\varphi_nv)}\le \abs{ \nvect(A_n)(\varphi_n v)}+\abs{\nvect(\XX\setminus A_n)(\varphi_nv)}\le\varepsilon\abs{\nvect}(A_n)\Vert v\Vert +S\varepsilon\Vert v\Vert\le\varepsilon \Vert v\Vert \left(\abs{\nvect}(\XX)+S\right),$$ so that the claim follows as $\varepsilon>0$ is arbitrary.
\end{proof} Another direct consequence of Theorem \ref{weakder}, this time in conjunction with the classical theorem by Alexandrov about weak sequential completeness of the space of measures, is the following result. Notice that in order to apply Alexandrov's theorem we need to work in the case $\mathcal R=\Cb(\XX)$. \begin{cor}[Alexandrov's theorem for local vector measures]\label{alex} Let $\VV$ be a normed $\Cb(\XX)$-module and $\{\nvect_n\}_n\subseteq \MM_\VV$ be a sequence such that for every $v\in\VV$ the sequence $n\mapsto\nvect_n(\XX)(v)\in\RR$ is Cauchy. Then there exists $\nvect\in \MM_\VV$ such that \[ \nvect(\XX)(v)=\lim_n\nvect_n(\XX)(v)\qquad\forall v\in\VV. \] \end{cor} \begin{proof} Define $F\in\VV'$ as $F(v):=\lim_n\nvect_n(\XX)(v)$ (linearity of $F$ is obvious, continuity follows from the Banach-Steinhaus Theorem). To conclude it is enough, by Theorem \ref{weakder}, to show that $F$ is tight. Thus fix $v$ and define $G_n,G:\Cb(\XX)\to\RR$ as $G_n(\,\cdot\,)\defeq \nvect_n(\XX)(\,\cdot\,v)$ and $G(\,\cdot\,)\defeq F(\,\cdot\,v)$: then the conclusion follows by the classical Alexandrov's Theorem. We give the details. By Riesz's Theorem (\cite[Theorem 7.10.1]{Bogachev07}), as $G_n$ is tight, it is induced by a Baire measure $\mu_n$, that is $$ G_n(f)=\int_\XX f\dd\mu_n\quad\text{for every }f\in\Cb(\XX).$$ Recall that $\mu_n$ is a Borel measure, as Baire measures and Borel measures coincide on metric (hence on Polish) spaces, see e.g.\ \cite{Bogachev07}. Now, the sequence $\{\mu_n\}_n$ is weakly fundamental (\cite[Definition 2.2.2]{bogaweak}) as $$ \int_\XX f\dd\mu_n=G_n(f)\rightarrow G( f), $$ hence by \cite[Theorem 2.3.9]{bogaweak}, we see that $\mu_n$ converges weakly to a Borel measure $\mu$, so that $G(f)=\int_\XX f\dd\mu$. It follows that $G$ is tight, by \cite[Theorem 7.10.1]{Bogachev07} again.
\end{proof} We also notice that another byproduct of Theorem \ref{weakder}, and in particular of the equality $|\nvect_F|=\mu$, is that \begin{equation} \label{eq:locmass} A\subseteq\XX \text{ open and $\nvect(A)(v)=0$ for every $v\in\VV$ with $\supp v\subseteq A$}\qquad\Rightarrow\qquad |\nvect|(A)=0. \end{equation} \bigskip In classical measure theory it often happens that one first defines a measure via its action on a certain class of regular functions (say Lipschitz) and then, once the measure is constructed, its action on more general functions (say continuous) is uniquely defined by some density argument. The following proposition establishes a construction of this sort in our setting. In the statement below notice that \eqref{eq:extn2} is not a direct consequence of \eqref{eq:extn}, because in computing the total variations of $\hat\nvect$ and $\nvect$ we use the norms in $\VV'$ and $\WW'$ respectively. \begin{prop}\label{lipbscb} Let $\VV$ be a normed $\Cb(\XX)$-module and let $\WW\subseteq\VV$ be a subspace that, with the inherited product and norm, is a normed $\mathcal R$-module. Assume also that the space $\Cb(\XX)\cdot\WW$ of $\Cb(\XX)$-linear combinations of elements of $\WW$ is dense in $\VV$ and let $\nvect$ be a local vector measure defined on $\WW$. Then there is a unique local vector measure $\hat\nvect$ defined on $\VV$ that extends $\nvect$, i.e.\ such that \begin{equation} \label{eq:extn} \hat\nvect(B)(v)=\nvect(B)(v)\qquad\forall v\in\WW,\ B\subseteq\XX\ \text{Borel} \end{equation} and such a measure $\hat\nvect$ also satisfies \begin{equation} \label{eq:extn2} \begin{split} |\hat\nvect|(B)=|\nvect|(B)\qquad\forall B\subseteq\XX\ \text{Borel}. \end{split} \end{equation} More explicitly, for every $A\subseteq\XX$ open and $v\in\VV$, \begin{equation}\label{eq:extn3} |\hat\nvect(A)(v)|\le |\nvect|(A)\Vert v\Vert_{|A}, \end{equation} where the local seminorm on the right hand side is with respect to the structure of normed $\Rr$-module on $\VV$.
\end{prop} \begin{proof}\ First notice that \eqref{eq:extn3} follows by a `self-improvement' argument based on Remark \ref{nonimportaR}, once we have proved the remaining parts of the statement.\\ {\bf Existence} Recalling Definition \ref{bartdef} we wish to define \begin{equation} \label{eq:defext} \hat\nvect(B)\big(\sum_if_iv_i\big):=\sum_if_i\nvect(B)(v_i) \end{equation} for any choice of $n\in\NN$, $f_i\in\Cb(\XX)$, $v_i\in\WW$, $i=1,\ldots,n$ and $B\subseteq \XX$ Borel. To prove that this is a well posed definition we claim that, with the same notations, it holds \begin{equation} \label{eq:claimxext} \big|\sum_if_i\nvect(B)(v_i)\big|\leq |\nvect|(B)\Big\|\sum_{i=1}^n f_i v_i\Big\|. \end{equation} To see this we fix $\varepsilon>0$ and find $K_\varepsilon\subseteq \XX$ compact so that $|\nvect|(\XX\setminus K_\varepsilon)\leq\varepsilon$. Then we use item (3) of Remark \ref{rem:Rpropr} to find, for any $i=1,\dots,n$, a function $\varphi^\varepsilon_i\in\mathcal R$ with $|f_i-\varphi^\varepsilon_i|<\varepsilon$ on $K_\varepsilon$, and thus on some neighbourhood $A_\varepsilon$ of $K_\varepsilon$ independent of $i$. Then find $\psi \in\Rr$ taking values in $[0,1]$ with support in $A_\varepsilon$ and identically equal to $1$ on $K_\varepsilon$.
With these choices we have \[ \begin{split} \big|\sum_if_i\nvect(B)(v_i)\big|&\leq\big|\sum_i(1-\psi)f_i\nvect(B)(v_i)\big|+\big|\sum_i\psi(f_i-\varphi^\varepsilon_i)\nvect(B)(v_i)\big|+\big|\nvect(B)\big(\sum_i\psi\varphi^\varepsilon_i v_i\big )\big|\\ &\leq \sum_i\big(|(1-\psi)f_i||\nvect|\big)(\XX) \|v_i\|+\sum_i\big(|\psi(f_i-\varphi^\varepsilon_i)||\nvect|\big)(\XX)\|v_i\|+|\nvect|(B)\big\|\sum_i\psi\varphi^\varepsilon_i v_i\big\|\\ &\leq \varepsilon\sum_i\|f_i\|_\infty\|v_i\|+\varepsilon|\nvect|(\XX)\sum_i\|v_i\|+|\nvect|(B)\big\|\sum_i\psi\varphi^\varepsilon_i v_i\big\| \end{split} \] and \[ \begin{split} \big\|\sum_i\psi\varphi^\varepsilon_i v_i\big\|&\leq \sum_i\|\psi(f_i-\varphi^\varepsilon_i) v_i\|+\big\|\psi\sum_i f_i v_i\big\|\leq \varepsilon \sum_i\|v_i\|+\big\|\sum_i f_i v_i\big\|. \end{split} \] These last two inequalities and the arbitrariness of $\varepsilon$ give the claim \eqref{eq:claimxext}. In turn, the bound \eqref{eq:claimxext} ensures that the right hand side of \eqref{eq:defext} depends only on $v:=\sum_{i=1}^n f_i v_i$ and not on the way $v$ is written as such a sum. In particular, the definition \eqref{eq:defext} is well posed and we have \begin{equation} \label{eq:boundhatn} |\hat\nvect(B)(v)|\leq |\nvect|(B)\|v\| \end{equation} for every $B\subseteq\XX$ Borel and $v\in \Cb(\XX)\cdot \WW$. The fact that $\hat\nvect$ is linear on the space of such $v$'s is obvious by definition and this last inequality shows continuity: together with the density assumption this ensures that $\hat\nvect(B)$ can be uniquely extended to an element of $\VV'$, still denoted $\hat\nvect(B)$. It is clear by definition that the map $B\mapsto \hat\nvect(B)$ is additive; $\sigma$-additivity follows trivially from the bound \eqref{eq:boundhatn} and the $\sigma$-additivity of $|\nvect|$. To prove weak locality of $\hat\nvect$ we pick $A\subseteq\XX$ open and $v\in \VV$ with $\|v\|_{|A}=0$.
Then by the very definition of support we can find $\varphi\in \Cb(\XX)$ with support in $\XX\setminus A$ and $\supp v\subseteq \{\varphi=1\}$. Then we pick $\{v_j\}_j\subseteq \Cb(\XX)\cdot \WW$ converging to $v$ and notice that $\varphi v_j\to\varphi v=v$ (recall \eqref{eq:c3}). On the other hand, writing $v_j=\sum_if_{ij}v_{ij}$ with $f_{ij}\in\Cb(\XX)$ and $v_{ij}\in\WW$ we see that \[ \sum_i\varphi f_{ij}\nvect(A)(v_{ij})=\sum_if_{ij}\nvect(A)(\varphi v_{ij})=0\qquad\forall j\in\NN \] by weak locality of $\nvect$ and the fact that $\supp(\varphi v_{ij})\subseteq \supp \varphi \subseteq \XX\setminus A$. Hence $\hat\nvect(A)(\varphi v_j)=0$ for every $j$ and letting $j\to\infty$ we conclude, by the arbitrariness of $A,v$, that $\hat\nvect$ is weakly local, as desired. It is then clear that \eqref{eq:extn} holds and that inequality \eqref{eq:boundhatn} gives $\leq$ in \eqref{eq:extn2}. The opposite inequality is trivial because, recalling \eqref{conicidence}, we see that $|\hat\nvect|(B)$ is the operator norm of $\hat\nvect(B)$ in $\VV'$, whereas $|\nvect|(B)$ is the operator norm of $\nvect(B)$ in $\WW'$, i.e.\ of the restriction of $\hat\nvect(B)$ to $\WW$. \noindent{\bf Uniqueness} Let $\hat\nvect$ be an extension of $\nvect$, $f\in \Cb(\XX)$, $\varphi\in\mathcal R$ and $v\in\WW$. Then for any $B\subseteq \XX$ Borel we have \[ |\hat\nvect(B)(fv)-\varphi\nvect(B)(v)|\stackrel{\eqref{claimfoutside}}=|(f-\varphi)\hat\nvect(B)(v)|\stackrel{\eqref{letotvar}}\leq\|v\|(|f-\varphi||\hat\nvect|)(B)\leq \|v\|\|f-\varphi\|_{\Lp^1(|\nvect|)}. \] Now observe that since $|\nvect|$ is a finite measure, item (3) of Remark \ref{rem:Rpropr} ensures that for any $f\in\Cb(\XX)$ there is a uniformly bounded sequence $\{\varphi_n\}_n\subseteq\mathcal R$ converging to $f$ pointwise. Thus the convergence is also in $\Lp^1(|\nvect|)$, hence the above and \eqref{eq:bbct} imply that any extension $\hat\nvect$ must satisfy $\hat\nvect(B)(fv)=f\nvect(B)(v)$ for any $B\subseteq\XX$ Borel, $v\in\WW$ and $f\in\Cb(\XX)$.
By linearity and the density of $\Cb(\XX)\cdot\WW$ in $\VV$ it follows that any extension $\hat\nvect$, if it exists, must satisfy \eqref{eq:defext}. Since such an equation defines the value of $\hat\nvect$ on $\Cb(\XX)\cdot\WW$ and this space is dense in $\VV$, uniqueness is proved. \end{proof} \bigskip We conclude the section by describing some operations on local vector measures. We begin with the push-forward through continuous maps, starting with the following definition: \begin{defn}[Push-forward module]\label{defpfmod} Let $(\XX,\dist)$ and $(\YY,\rho)$ be two complete and separable metric spaces, $\VV$ a normed $\Cb(\XX)$-module and $\varphi\in C(\XX,\YY)$. The normed $\Cb(\YY)$-module $\varphi_* \VV$ coincides with $\VV$ as a normed vector space and is equipped with the structure of algebraic module over $\Cb(\YY)$ by \begin{equation}\notag f v\defeq f\circ\varphi\, v\quad\text{for every }f\in\Cb(\YY)\text{ and }v\in\VV. \end{equation} \end{defn} It is easy to verify that $\varphi_*\VV$ is a normed $\Cb(\YY)$-module: we just have to notice that if $\{f_i\}_{i=1,\dots,n}\subseteq\Cb(\YY)$ have pairwise disjoint supports, then also $\{f_i\circ\varphi\}_{i=1,\dots,n}\subseteq\Cb(\XX)$ have the same property. The following proposition shows that, given a local vector measure on $\VV$, we can naturally build a local vector measure on $\varphi_*\VV$ via a push-forward operation: \begin{prop}[Push-forward of local vector measures] With the same notation and assumptions as in Definition \ref{defpfmod} the following holds. Let $\nvect$ be a local vector measure defined on $\VV$. Define a map by \begin{equation}\label{definingpushf} \varphi_*\nvect(B)(v)\defeq\nvect(\varphi^{-1}(B))(v)\quad\text{for every $B\subseteq\YY$ Borel and $v\in\varphi_*\VV$}. \end{equation} Then $\varphi_*\nvect$ is a local vector measure defined on $\varphi_*\VV$ and $\abs{\varphi_*\nvect}=\varphi_*\abs{\nvect}$. \end{prop} \begin{proof} We show that $\varphi_*\nvect$ is indeed a local vector measure.
The only non-trivial verification to be done is weak locality. Take then an open set $A\subseteq\YY$ and $v\in\varphi_*\VV$ with $\Vert v\Vert_{| A}=0$; in other words, $f\circ\varphi\, v=0$ for every $f\in\Cb(\YY)$ with $\supp f\subseteq A$. We have to show that $$ \nvect(\varphi^{-1}(A) )(v)=0.$$ Take any $\varepsilon>0$, then, by regularity, take a compact set $K\subseteq \varphi^{-1}(A)\subseteq\XX$ with \begin{equation}\label{solita} \abs{\nvect}(\varphi^{-1}(A)\setminus K)<\varepsilon. \end{equation} Then $\varphi(K)$ is compact and contained in $A$, hence there is $f\in\Cb(\YY)$ with $\supp f\subseteq A$ and $f(y)=1$ on a neighbourhood of $\varphi(K)$. Therefore, by \eqref{solita} and weak locality of $\nvect$, \begin{equation*} \abs{\nvect (\varphi^{-1}(A))(v)}\le \varepsilon\Vert v\Vert+ \abs{\nvect (K)(v)}\le \varepsilon\Vert v\Vert+\abs{\nvect(K)(f\circ\varphi\, v)}= \varepsilon\Vert v\Vert, \end{equation*} where the last equality holds because $f\circ\varphi\, v=0$, as $\supp f\subseteq A$. As $\varepsilon>0$ is arbitrary, this proves the claim. By \eqref{definingpushf} and \eqref{conicidence}, we conclude that for every $B\subseteq\YY$ Borel, \begin{equation*} \abs{\varphi_*\nvect}(B)= \abs{\nvect}(\varphi^{-1}(B))=\varphi_*\abs{\nvect }(B).\qedhere \end{equation*} \end{proof} \bigskip It may happen that we have a normed $\Cb(\XX)$-module $\VV$ and we want to consider its Cartesian product with itself. To do this, we first have to endow $\VV^k$ with a norm; the normed $\Cb(\XX)$-module operations will then be defined component-wise. Let $k\in\NN$, $k\ge 1$, and endow $\VV^k$ with a norm equivalent to the norm $$\Vert (v_1,\dots,v_k)\Vert\defeq \Vert (\Vert v_i\Vert)_{i=1,\dots,k}\Vert_e\quad\text{for every }v=(v_1,\dots,v_k)\in\VV^k.$$ Notice that the canonical map $\Phi:(\VV')^k\rightarrow (\VV^k)'$ defined by $$\Phi(\phi_1,\dots,\phi_k)(v_1,\dots,v_k)=\phi_1(v_1)+\cdots+\phi_k(v_k)$$ is an isomorphism.
If one has in mind that $\VV$ is some space of vector fields over a manifold, then $\VV^k$ would correspond to some related tensor field. Some matrix operations are then possible on the corresponding local vector measures: \begin{defn}\label{intmatrix} Let $\nvect$ be a local vector measure defined on $\VV^n$. Let moreover $m\in\NN$, $m\ge 1$ and let $f=(f_{i,j})_{1\le i\le m, 1\le j\le n}:\XX\rightarrow\RR^{m\times n}$ be a bounded Borel map. We define $f \nvect$ as the local vector measure defined on $\VV^m$ by \begin{equation}\notag f \nvect(A)\defeq\int_A f\dd{\nvect}\defeq\left(\sum_{j=1}^n\int_A f_{i,j}\dd{\nvect_j}\right)_i\quad\text{if $A\subseteq\XX$ is Borel}, \end{equation} where we exploited the canonical identification $(\VV')^k\simeq(\VV^k)'$ (for $k=n$ and $k=m$). \end{defn} Notice that if $f:\XX\rightarrow\RR$ is a bounded Borel function and $\nvect$ is a local vector measure defined on $\VV^n$, then $$ f{\nvect}= (f \Id_{n\times n})\nvect.$$ \subsection{Examples} \subsubsection{Currents in metric spaces}\label{se:curr} For this section, $(\XX,\dist)$ is a complete and separable metric space (here the distance matters) and we fix $n\in\NN$, $n\ge 1$ (the case $n=0$ being trivial). The following notions are extracted from \cite{AmbrosioKirchheim00}. \begin{defn}\label{corrdefn} An $n$-current is a multilinear map $$T:\LIPb(\XX)\times \LIP(\XX)^n\rightarrow\RR$$ such that \begin{enumerate}[label=\roman*)] \item there exists a finite (positive) measure $\mu$ such that \begin{equation}\label{eqmass} \abs{T(f,\pi_1,\dots,\pi_n)}\le \prod_{j=1}^n\Lip(\pi_j)\int_\XX \abs{f}\dd{\mu} \end{equation} for every $f\in\LIPb(\XX)$ and $\pi_1,\dots,\pi_n\in \LIP(\XX)$.
The minimal measure $\mu$ satisfying \eqref{eqmass} (which can be proved to exist) will be called the mass of $T$ and denoted by $\Vert T\Vert_{\AK}$; \item if $f\in\LIPb(\XX)$ and for $j=1,\dots,n$, $\{\pi_j^i\}_i\subseteq\LIP(\XX)$ is a sequence of equi-Lipschitz functions such that $\pi_j^i\rightarrow\pi_j$ pointwise, then \[ \lim_i T(f , \pi_1^i,\dots,\pi^i_n)=T(f , \pi_1,\dots,\pi_n); \] \item if $f\in\LIPb(\XX)$ and for some $j=1,\dots,n$ we have that $\pi_j$ is constant on a neighbourhood of $\{f\ne 0\}$, then $$T(f , \pi_1,\dots,\pi_n)=0.$$ \end{enumerate} \end{defn} When the dimension $n$ is clear from the context, we call $n$-currents simply currents. It is clear from \eqref{eqmass} that, if $T$ is a current, then it can be uniquely extended to a map $$T:\Cb(\XX)\times \LIP(\XX)^n\rightarrow\RR$$ still satisfying \eqref{eqmass} for every $f\in\Cb(\XX)$ and $\pi_1,\dots,\pi_n\in \LIP(\XX)$. As this extension is unique, we will not introduce a different notation for it. By \cite[(3.2)]{AmbrosioKirchheim00}, we have that a current is alternating in the last $n$ entries, so that we can set (all the algebraic operations are with respect to the field of real numbers $\RR$) \begin{equation}\notag \DD^n(\XX)\defeq \Cb(\XX)\otimes \bigwedge^n \LIP(\XX) \end{equation} and consider a current $T$ as a linear map $T: \DD^n(\XX)\rightarrow \RR$. We also have a natural map $$ \Cb(\XX)\times \LIP(\XX)^n\rightarrow \DD^n(\XX)$$ and we write (just as a notation) $f\diff \pi_1\wedge\cdots\wedge \diff\pi_n$ for the image of $(f,\pi_1,\dots,\pi_n)$ through this map. Therefore, by $T(f\diff\pi_1\wedge\cdots\wedge\diff\pi_n)$, we mean $T(f,\pi_1,\dots,\pi_n)$.
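To fix ideas, let us recall the prototypical example of a current, borrowed from \cite[Example 3.2]{AmbrosioKirchheim00}: for $\XX=\RR^n$ endowed with the Euclidean distance and $g\in\Lp^1(\RR^n)$, the map $$T(f\diff\pi_1\wedge\cdots\wedge\diff\pi_n)\defeq\int_{\RR^n} g\,f\,\det\big(\nabla\pi_1\,|\,\cdots\,|\,\nabla\pi_n\big)\dd{\LL^n}$$ is an $n$-current. Indeed, \eqref{eqmass} holds with $\mu=\abs{g}\LL^n$, as Hadamard's inequality gives $\abs{\det(\nabla\pi_1\,|\,\cdots\,|\,\nabla\pi_n)}\le\prod_{j=1}^n\Lip(\pi_j)$ $\LL^n$-a.e.\ (the gradients being defined $\LL^n$-a.e.\ by Rademacher's theorem), while items $ii)$ and $iii)$ can be checked using the weak$^*$ convergence in $\Lp^\infty$ of the gradients of equi-Lipschitz pointwise converging sequences and the fact that $\nabla\pi_j=0$ $\LL^n$-a.e.\ on any open set where $\pi_j$ is constant.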
Notice also that if $T$ is a current and $f\in\Cb(\XX)$, then $f T$ defines a current by (see the discussion below \cite[(2.5)]{AmbrosioKirchheim00}) $$f T(v)\defeq T(f v)\quad\text{for every $v\in \DD^n(\XX)$}$$ and by \eqref{eqmass} (cf.\ the key result \cite[(2.5)]{AmbrosioKirchheim00} that encodes locality) it holds, as measures, \begin{equation}\label{massmult} \Vert f T\Vert_{\AK}\le \abs{f}\Vert T\Vert_{\AK}. \end{equation} We define a seminorm on $\DD^n(\XX)$ as follows: \begin{equation}\label{defnsemi} \Vert v\Vert\defeq\sup_T T(v)\quad\text{for every }v\in\DD^n(\XX) \end{equation} where the supremum is taken among all currents $T$ with $\Vert T\Vert_{\AK}(\XX)\le 1$. We claim that if $v\in\DD^n(\XX)$ is such that $\Vert v\Vert=0$, then $\Vert f v\Vert=0$ for any $f\in\Cb(\XX)$. Indeed $$ \Vert f v\Vert=\sup_{T:\Vert T\Vert_{\AK}(\XX)\le 1} T(f v)=\sup_{T:\Vert T\Vert_{\AK}(\XX)\le 1} (f T)( v)\le \sup_{\tilde T:\Vert \tilde T\Vert_{\AK}(\XX)\le \Vert f\Vert_\infty} \tilde T(v)=0 $$ where the inequality above is due to \eqref{massmult}. We then identify elements of $\DD^n(\XX)$ that are equal up to an element of zero norm, so that we obtain a normed vector space $(\DD^n(\XX),\Vert\,\cdot\,\Vert)$. Notice that our claim grants that the structure of algebraic module over $\Cb(\XX)$ descends to the quotient. We show now that $(\DD^n(\XX),\Vert\,\cdot\,\Vert)$ is a normed $\Cb(\XX)$-module. The only non-trivial verification is that \eqref{compeq} holds and this in turn follows from \eqref{massmult}. Indeed, take $\{f_i\}_{i=1,\dots,m}\subseteq\Cb(\XX)$ with pairwise disjoint supports and $\{v_i\}_{i=1,\dots,m}\subseteq\DD^n(\XX)$.
We have to show that, for every current $T$ with $\Vert T\Vert_{\AK}(\XX)\le 1$, it holds that $$T(f_1 v_1+\cdots+f_m v_m)\le \max_j \Vert f_j\Vert_\infty\max_j\Vert v_j\Vert.$$ Now, using the definition of the norm on $\DD^n(\XX)$ and \eqref{massmult} we have \begin{align*} T\bigg(\sum_{j=1}^m f_j v_j\bigg)&\le\sum_{j=1}^m \abs{(f_j T)( v_j)}\le\sum_{j=1}^m \Vert f_j T\Vert_{\AK}(\XX)\Vert v_j \Vert \\&\le \max_j \Vert v_j\Vert \sum_j \left({\abs{f_j}} \Vert T\Vert_{\AK}\right)(\XX)\le \max_j\Vert v_j\Vert \max_j \Vert f_j\Vert_\infty\Vert T\Vert_{\AK}(\XX), \end{align*} so that the claim follows. The following proposition shows how metric currents fit in the framework of local vector measures: we show that metric currents are precisely those local vector measures defined on $\DD^n(\XX)$ that satisfy the weak continuity property stated in item $ii)$ of Definition \ref{corrdefn} and, moreover, that the two concepts of mass (the one for metric currents defined in \cite{AmbrosioKirchheim00} and the one for local vector measures) coincide. \begin{prop}\label{currismvm} Let $T$ be a current. Then $T$ is a tight element of $\DD^n(\XX)'$. In particular, $T$ induces a unique local vector measure $\nvect_T$ defined on $\DD^n(\XX)$ such that \begin{equation}\label{rieszcorr} \nvect_T(A)(v)=T(v)\quad\text{for every }A\subseteq \XX\text{ open and }v\in\DD^n(\XX)\text{ with }\supp v\subseteq A. \end{equation} Moreover, it holds that $\Vert T\Vert_{\AK}=\abs{\nvect_T}$. Conversely, every tight element of $\DD^n(\XX)'$ satisfying item $ii)$ of Definition \ref{corrdefn} is a current; in other words, every local vector measure $\nvect$ defined on $\DD^n(\XX)$ such that $\nvect(\XX)$ satisfies item $ii)$ of Definition \ref{corrdefn} is given by a current, in the sense that \eqref{rieszcorr} holds. \end{prop} \begin{proof} The fact that $T\in \DD^n(\XX)'$ follows directly from the very definition of the norm on $\DD^n(\XX)$.
Tightness is an immediate consequence of \eqref{eqmass} together with dominated convergence. Therefore we can apply Theorem \ref{weakder} and obtain a local vector measure $\nvect_T$ satisfying \eqref{rieszcorr}. By the very definition of norm on $\DD^n(\XX)$, for every $v\in\DD^n(\XX)$ we have that $|T(v)|\le \Vert T\Vert_{\AK}(\XX)\Vert v\Vert$ so that, using \eqref{conicidence} and \eqref{rieszcorr}, $$\abs{\nvect_T}(\XX)=\Vert T\Vert '\le \Vert T\Vert_{\AK}(\XX).$$ Take then an open set $A\subseteq\XX$. By \cite[Proposition 2.7]{AmbrosioKirchheim00} and the regularity of the measure $\Vert T\Vert_{\AK}$, we can show that \begin{equation}\label{rieszcorrproof} \Vert T\Vert_{\AK}(A)=\sup \sum_i T(f_i \diff\pi_1^i\wedge\cdots\wedge\diff\pi_n^i) \end{equation} where the supremum is taken among all finite collections $\{f_i\}_i\subseteq\Cb(\XX)$ with pairwise disjoint support, with $\supp f_i\subseteq A$ and $\Vert f_i\Vert_\infty\le 1$ for every $i$, and finite families $\{\pi_j^i\}_i\subseteq\LIP(\XX)$ of $1$-Lipschitz functions, for $j=1,\dots,n$. Now notice that if $\pi_1,\dots,\pi_n$ are $1$-Lipschitz, then for every current $T$ it holds $$T(1 \diff\pi_1\wedge\cdots\wedge\diff\pi_n)\le \Vert T\Vert_{\AK}(\XX)$$ so that \begin{equation}\label{normdiffs} \Vert 1 \diff\pi_1\wedge\cdots\wedge\diff\pi_n\Vert\le 1. \end{equation} We can now bound the right hand side of \eqref{rieszcorrproof} using \eqref{rieszcorr}, \eqref{compeq} and what we have noticed above, by $$\nvect_T(A)\left(\sum_i f_i \diff\pi_1^i\wedge\cdots\wedge\diff\pi_n^i\right)\le \abs{\nvect_T}(A)\Big\Vert \sum_if_i \diff\pi_1^i\wedge\cdots\wedge\diff\pi_n^i\Big\Vert\le\abs{\nvect_T}(A),$$ so that $\Vert T\Vert_{\AK}\le \abs{\nvect_T}$ as measures. Then, as we have already proved $\abs{\nvect_T}(\XX)\le\Vert T\Vert_{\AK}(\XX)$, we have $\Vert T\Vert_{\AK}= \abs{\nvect_T}$ as measures. 
Finally, let $\nvect$ be a local vector measure defined on $\DD^n(\XX)$ such that $\nvect(\XX)$ satisfies item $ii)$ of Definition \ref{corrdefn}. If $f\diff\pi_1\wedge\cdots\wedge\diff\pi_n$ is as in item $iii)$ of Definition \ref{corrdefn}, then $S(f\diff\pi_1\wedge\cdots\wedge\diff\pi_n)=0$ for every current $S$, so that $\Vert f\diff\pi_1\wedge\cdots\wedge\diff\pi_n\Vert=0$ and hence $\nvect(\XX)(f\diff\pi_1\wedge\cdots\wedge\diff\pi_n)=0$. Let now $f\in\LIPb(\XX)$ and let $\pi_1,\dots,\pi_n\in\LIP(\XX)$ be $1$-Lipschitz. Notice that the module structure of $\DD^n(\XX)$ ensures that $f\diff\pi_1\wedge\cdots\wedge\diff\pi_n=f(1\diff\pi_1\wedge\cdots\wedge\diff\pi_n)$ and thus Lemma \ref{foutside}, Proposition \ref{absfoutside} and \eqref{normdiffs} give \begin{align*} \abs{\nvect(\XX)(f\diff\pi_1\wedge\cdots\wedge\diff\pi_n)}= \abs{f\nvect(\XX)(1\diff\pi_1\wedge\cdots\wedge\diff\pi_n)}\leq \big(|f||\nvect|\big)(\XX)= \int_\XX \abs{f}\dd{\abs{\nvect}}. \end{align*} This proves that item $i)$ in Definition \ref{corrdefn} holds and concludes the proof that $\nvect(\XX)$ is a current. \end{proof} \begin{rem} The push forward operator $\varphi_*$ defined in the previous section has nothing to do with the push forward for currents defined in \cite[Definition 2.4]{AmbrosioKirchheim00} (notice that in particular the latter is only defined for $\varphi$ Lipschitz, as it defines the push-forward of a current by duality with the pullback of Lipschitz functions via the map $\varphi$).\fr \end{rem} \begin{rem} Proposition \ref{currismvm} allows us to consider $n$-currents as local vector measures defined on $\DD^n(\XX)$. Notice that, in general, not every element of $\DD^n(\XX)'$ is tight, hence not every element of $\DD^n(\XX)'$ corresponds to a current. Moreover, not every tight functional of $\DD^n(\XX)'$ is given by a current; that is to say, there are local vector measures defined on $\DD^n(\XX)$ that are not given by currents (which is not a surprise).
An example of this lack of coincidence is given in Example \ref{countcurr} below, in which we exhibit a tight functional of $\DD^n(\XX)'$ that does not satisfy axiom $ii)$ of Definition \ref{corrdefn}. Notice however that, by the proof of Proposition \ref{currismvm}, every tight functional of $\DD^n(\XX)'$ satisfies axioms $i)$ and $iii)$ of Definition \ref{corrdefn}. \fr \end{rem} \begin{ex}\label{countcurr} Let $(\XX,\dist,\mass)\defeq([-1,1],\dist_e,\LL^1)$. It is easy to see that for $l\in \Lpu$ the map $$\DD^1(\XX)=\Cb(\XX)\otimes \LIP(\XX)\rightarrow\RR\quad\text{defined by}\quad f\diff g\mapsto\int_\XX l f g'\dd{\mass}$$ defines a current (see \cite[Example 3.2]{AmbrosioKirchheim00}). Thus, if $f\diff g\in\Cb(\XX)\otimes C^1(\XX)\subseteq \DD^1(\XX) $, it holds that $\Vert f\diff g\Vert \ge \abs{f(0)g'(0)}$. Consider now the map $$\DD^1(\XX)\supseteq\Cb(\XX)\otimes C^1(\XX)\rightarrow\RR\quad\text{defined by}\quad f\diff g\mapsto f(0)g'(0).$$ Using the Hahn-Banach Theorem, we can extend the map above to a functional of $\DD^1(\XX)'$, which is automatically tight, as $\XX$ is compact (Remark \ref{tightrem}). However, we see that the map above cannot be a current: indeed, axiom $ii)$ of Definition \ref{corrdefn} is clearly violated.\fr \end{ex} We want to think of the space $\DD^n(\XX)$ defined above as the space of $n$-forms on the metric space $(\XX,\dist)$, and Proposition \ref{currismvm} corroborates this by showing that there is a natural duality between appropriate operators on $\DD^n(\XX)$ and currents. Still, in the classical smooth setting the space of differential forms has a natural algebra structure and it is natural to wonder whether the same holds in our setting. We are going to show that this is indeed the case.
Thus let $n,m\in\NN$, $n,m\ge 1$ and notice that we have a natural exterior product operation $$ \wedge: \DD^n(\XX)\times \DD^m(\XX)\rightarrow \DD^{n+m}(\XX)$$ defined as $$(f \diff\pi_1\wedge\cdots\wedge\diff\pi_n,g \diff\pi_{n+1}\wedge\cdots\wedge\diff\pi_{n+m})\mapsto f g \diff\pi_1\wedge\cdots\wedge\diff\pi_{n}\wedge\diff\pi_{n+1}\wedge\cdots\wedge\diff\pi_{n+m}$$ and extended by bilinearity. We also write $$v\wedge w\defeq \wedge(v,w)\quad\text{for every }v\in\DD^n(\XX), w \in \DD^m(\XX).$$ This `algebraic' structure is compatible with the norm on $\DD^n(\XX)$ imposed via `analytic' considerations: \begin{prop}\label{propwedge} For every $v\in\DD^n(\XX),w \in \DD^m(\XX)$, it holds that $$ \Vert v\wedge w\Vert\le \Vert v\Vert \Vert w\Vert.$$ \end{prop} \begin{proof} We define, for $k\in\NN$, $k\ge 1$, the set $B^k\subseteq\DD^k(\XX)$ as $$ B^k\defeq\left\{\sum_{i\in\NN} f_i \diff \pi_1^i\wedge\cdots\wedge\diff\pi^i_k:\{f_i\}_i\subseteq\Cb(\XX), \sum_{i\in\NN} \abs{f_i}\le 1, \Lip(\pi_j^i)\le 1\right\}.$$ Clearly, $B^n\wedge B^m\subseteq B^{n+m}$, in the sense that $\wedge(B^n,B^m)\subseteq B^{n+m}$. By \cite[Proposition 2.7]{AmbrosioKirchheim00}, it holds that for $k\in\NN$, $k\ge 1$, \begin{equation}\label{AKprop27} \Vert T\Vert_{\AK}(\XX)=\sup_{v\in B^k} T(v)\quad\text{for every $k$-current $T$.} \end{equation} Let now $T$ be a $(n+m)$-current. If $v\in \DD^n(\XX)$, we can define a $m$-current $T\mres v$ (see the discussion below \cite[(2.5)]{AmbrosioKirchheim00}) by $$ T\mres v(w)\defeq T(v\wedge w)\quad\text{for every }w\in\DD^m(\XX),$$ where we notice that the following discussion implies that this definition is well posed even after taking the quotient on $\DD^n(\XX)$ with respect to the seminorm defined by \eqref{defnsemi}. 
Using \eqref{AKprop27} repeatedly and what was noticed above, we have that for $v\in\DD^n(\XX)$, \begin{align*} \Vert T\mres v\Vert_{\AK}(\XX)&=\sup_{p\in B^m} {T\mres v(p)}=\sup_{p\in B^m}{T(v\wedge p)}=\sup_{p\in B^m}{T(p\wedge v)}\\&= \sup_{p\in B^m}{ T\mres p(v)}\le \Vert v\Vert\sup_{p\in B^m}\Vert{T\mres p}\Vert_{\AK}(\XX)\\&=\Vert v\Vert \sup_{p\in B^m}\sup_{q\in B^n} T\mres p(q)=\Vert v\Vert\sup_{p\in B^m}\sup_{q\in B^n} T (p\wedge q)\\&\le \Vert v\Vert\sup_{r\in B^{n+m}} T(r)=\Vert v\Vert \Vert T\Vert_{\AK}(\XX). \end{align*} Now, for $v,w$ as in the statement, $$ \Vert v\wedge w\Vert=\sup_T T(v\wedge w)=\sup_T (T\mres v) (w)\le \sup_T \Vert T\mres v\Vert_{\AK}(\XX) \Vert w\Vert,$$ where the suprema are taken among all $(n+m)$-currents $T$ with $\Vert T\Vert_{\AK}(\XX)\le 1$. Together with what was just remarked, this concludes the proof. \end{proof} \begin{rem} Notice that we have a natural surjective linear map $$ \bigwedge\nolimits^n \DD^1(\XX)\rightarrow \DD^n(\XX) $$ where the domain has to be seen as the algebraic wedge product. Moreover, if we endow the domain with the projective norm, i.e.\ $$ \bigwedge\nolimits^n \DD^1(\XX) \ni v\mapsto \Vert v\Vert\defeq\inf\left\{ \sum_{i=1}^k \Vert v_1^i\Vert_{\DD^1(\XX)} \cdots\Vert v_n^i\Vert_{\DD^1(\XX)}: v=\sum_{i=1}^k v_1^i\wedge\cdots\wedge v_n^i \right\} $$ and then take the quotient of $\bigwedge\nolimits^n \DD^1(\XX)$, we see that this map descends to the quotient and has norm bounded by $1$ thanks to Proposition \ref{propwedge}. \fr \end{rem} \begin{rem} Notice that in the case $n=1$, the space $\DD^1(\XX)$ can be seen as a sort of metric cotangent module. Indeed, we have a natural map $$\diff: \LIP(\XX)\rightarrow\DD^1(\XX)\quad f\mapsto 1 \diff f$$ satisfying $$\Vert \diff f\Vert\le \Lip(f)\quad\text{for every }f\in\LIP(\XX)$$ (here equality in general does not hold).
Also, the Leibniz rule $$ \diff (f g)=f\diff g+g\diff f\quad\text{for every }f,g\in\LIPb(\XX)$$ holds by \cite[Theorem 3.5]{AmbrosioKirchheim00} and, with a similar argument, we can prove that the chain rule $$ \diff (\phi\circ f)=\phi'\circ f\diff f\quad\text{for every }\phi\in C^1(\RR)\cap\LIP(\RR)\text{ and }f\in\LIP(\XX)$$ holds. Finally, if $\{f_i\}_i$ is a sequence of equi-Lipschitz functions pointwise convergent to $f$, then $$\diff f_i\stackrel{*}{\rightharpoonup} \diff f$$ thanks to the requirement $ii)$ of Definition \ref{corrdefn}, where we isometrically embedded $\DD^1(\XX)$ into the dual space of the space of $1$-currents.\fr \end{rem} In this section we have seen how currents on metric spaces can be seen as elements of the dual of a suitable normed $\Cb(\XX)$-module. In the literature, there have been other attempts to describe a pre-dual space of the space of currents (e.g.\ \cite{PankkaSoultanis,SchioppaCurrents,WilliamsCurrents}). We now briefly compare our approach to the one in \cite{PankkaSoultanis}. First, let us clarify that we do not exhibit a pre-dual space of the space of currents, as not every element of $\DD^n(\XX)'$ is a current (indeed, in order to be a current, an element of $\DD^n(\XX)'$ must be tight and satisfy item $ii)$ of Definition \ref{corrdefn} - we have not been able to find a norm on $\DD^n(\XX)$ which is compatible with such notions of convergence). On the other hand, in \cite{PankkaSoultanis} the space of currents is identified with the sequentially continuous dual of the space $\bar{\Gamma}_c^n(\XX)$ (see \cite{PankkaSoultanis} for the relevant definitions). We show now that the map $\widehat{\,\cdot\,}$ in \cite[Theorem 1.1]{PankkaSoultanis} is compatible with the notions developed here. In order to do so, the reader is assumed to be familiar with the machinery used and developed in \cite{PankkaSoultanis}. Let $T$ be a current, which then induces a local vector measure $\nvect_T={L}_T\abs{\nvect_T}$.
Take then $\widehat{T}\in \bar{\Gamma}_c^n(\XX)^*$. Now, the proofs of \cite[Theorem 7.1 and Theorem 1.1]{PankkaSoultanis} show that if $\omega\in\bar{\Gamma}_c^n(\XX) $ then we have $$ \widehat{T}(\omega)\defeq \int_\XX \bar{L}_T(x)(\omega(x))\dd{\abs{\nvect_T}(x)}$$ where the measurability of the integrand is part of the statement. The map $\bar{L}_T(x)$ is obtained by first considering the unique extension of $L_T(x)$ to the space $\overline{\rm Poly}^n(U)$, where $U$ is an open neighbourhood of $x$, and then by considering the induced map on the stalk over $x$, as by weak locality $L_T(x)(v)$ depends only on the germ of $v$ at $x$. \subsubsection{Differential of Sobolev functions}\label{se:diffsob} Recall that a metric measure space is a triplet $(\XX ,\dist,\mass)$ where $\XX$ is a set, $\dist$ is a (complete and separable) distance on $\XX$ and $\mass$ is a non-negative Borel measure that is finite on balls. We adopt the convention that metric measure spaces have full support, that is to say that for any $x\in\XX$, $r>0$, we have $\mass(B_r(x))>0$. The Cheeger energy (see \cite{Cheeger00,Shanmugalingam00,AmbrosioGigliSavare11,AmbrosioGigliSavare11-3}) associated to a metric measure space $(\XX,\dist,\mass)$ is the convex and lower semicontinuous functional defined on $\Lpt$ as \[ \Ch(f)\defeq\frac{1}{2}\inf\left\{\liminf_k \int_\XX\lip(f_k)^2\dd{\mass}:f_k\in\LIPb(\XX)\cap\Lpt,\ f_k\rightarrow f \text{ in }\Lpt\right\} \] where $\lip(f)$ is the so-called local Lipschitz constant $$ \lip(f)(x)\defeq \limsup_{y\rightarrow x}\frac{\abs{f(y)-f(x)}}{\dist(y,x)},$$ which has to be understood to be $0$ if $x$ is an isolated point. The finiteness domain of the Cheeger energy will be denoted as ${\rm W}^{1,2}(\XX)$ and will be endowed with the complete norm $\Vert f\Vert^2_{{\rm W}^{1,2}(\XX)}\defeq\Vert f\Vert_{\Lpt}^2+2 \Ch(f)$.
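Although elementary, an explicit computation may help fix ideas (the following example is only illustrative and is not needed in what follows). On $(\XX,\dist,\mass)\defeq([-1,1],\dist_e,\LL^1)$ consider $f(x)\defeq\abs{x}$: then $\lip(f)\equiv 1$, the constant sequence $f_k\defeq f$ is admissible in the definition of the Cheeger energy and, since in this setting the Cheeger energy is well known to coincide with the classical Dirichlet energy, we get
\[
\Ch(f)=\frac{1}{2}\int_{-1}^{1}\lip(f)^2\dd{\LL^1}=1,\qquad \Vert f\Vert^2_{{\rm W}^{1,2}(\XX)}=\int_{-1}^{1}x^2\dd{\LL^1(x)}+2\Ch(f)=\frac{2}{3}+2.
\]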
It is possible to identify a canonical object $\abs{\diff f}\in\Lpt$, called the minimal relaxed slope, providing the integral representation \begin{equation}\label{cdsnjoa} \Ch(f)=\frac{1}{2}\int_\XX\abs{\diff f}^2\dd{\mass}\quad\text{for every }f\in{\rm W}^{1,2}(\XX). \end{equation} We assume that the reader is familiar with the concepts of $\Lp^\infty/\Lp^0$-normed modules as developed in \cite{Gigli14}. Here we recall that one of the main results in \cite{Gigli14} is about existence and uniqueness of a `cotangent module' and of an associated notion of `differential of a Sobolev function', meaning that there exists a unique (up to unique isomorphism) couple $(\cotX,\diff)$ where $\cotX$ is a $\Lp^2$-normed $\Lp^\infty$-module and $\diff:{\rm W}^{1,2}(\XX)\rightarrow\cotX$ is linear and such that \begin{enumerate}[label=\roman*)] \item $\abs{\diff f}$ (as just above \eqref{cdsnjoa}) coincides with the pointwise norm of $\diff f$ $\mass$-a.e.\ for every $f\in{\rm W}^{1,2}(\XX)$, \item $\cotX$ is generated (in the sense of modules) by $\left\{\diff f:f\in{\rm W}^{1,2}(\XX)\right\}$. \end{enumerate} We define the tangent module $\tanX$ as the dual (in the sense of modules) of $\cotX$. We define $\cotanXzero$ as the $\Lp^0$-completion of the cotangent module $\cotX$ and also (this definition is canonically equivalent to the previous one if $p=2$) \begin{equation}\notag \cotanXp\defeq\left\{v\in\cotanXzero:\abs{v}\in\Lpp\right\}\quad\text{for $p\in[1,\infty]$}. \end{equation} Similarly, we define $\tanXzero$ as the $\Lp^0$-completion of $\tanX$ and \begin{equation} \label{eq:deftp} \tanXp\defeq\left\{v\in\tanXzero:\abs{v}\in\Lpp\right\}\quad\text{for $p\in[1,\infty]$}. \end{equation} \bigskip In this manuscript we proposed an axiomatization of the concept of module (that aims at being an abstract approach to the space of sections of a given bundle) and duality different from the one in \cite{Gigli14}.
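Although we will not need it, it may be useful to keep in mind the Euclidean picture: for $\XX=\RR^d$ equipped with the Euclidean distance and the Lebesgue measure it is well known (see \cite{Gigli14}) that $\cotX$ can be identified with the space of $\Lp^2$ vector fields on $\RR^d$, the differential $\diff f$ corresponding to the distributional gradient $\nabla f$ and the pointwise norm to the Euclidean one, so that for instance \eqref{eq:deftp} selects those measurable vector fields $v$ with $\abs{v}\in\Lpp$.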
It is therefore natural to wonder whether even in this new approach we have an existence \& uniqueness result like the above. The answer is `yes under mild conditions' and is given in the theorem below. We notice that: \begin{itemize} \item[i)] If $\mathscr M$ is an $\Lp^2$-normed module, then the subspace $\VV:=\{v\in\mathscr M:|v|\in \Lp^\infty(\mass)\}$ equipped with the norm $\|v\|:=\||v|\|_{\Lp^\infty}$ is a normed $\Cb(\XX)$-module, the product operation being the one inherited from the $\Lp^\infty(\mass)$-module structure. \item[ii)] It is perfectly natural to assume that the reference measure is finite, in order to have the integrability of $|\diff f|$ for every $f\in{\rm W}^{1,2}(\XX)$. The alternative would be to develop a theory for local vector measures with locally finite mass - and thus acting in duality with objects with bounded support. This is viable but we will not proceed in this direction. \item[iii)] The situation here - in particular for what concerns uniqueness - is more complicated than the one in \cite{Gigli14} because we have to build not only the local vector measure $\DIFF f$, but also the module $\VV$ on which it acts (as opposed to the construction of the differential in \cite{Gigli14} that `stands on its own'). Assuming $\cotX$ to be reflexive helps in getting the desired uniqueness. \end{itemize} \begin{thm}\label{exunSobo} Let $(\XX,\dist,\mass)$ be a metric measure space with finite mass and assume that $\cotX$ is reflexive.
Then there exists a unique couple $(\DIFF,\VV)$, where $\DIFF:{\rm W}^{1,2}(\XX)\rightarrow\MM_\VV$ is linear and $\VV$ is a normed $\Cb(\XX)$-module such that: \begin{enumerate}[label=\roman*)] \item $\abs{\DIFF f}=\abs{\diff f}\mass$ as measures, \item for every $v\in\VV$ we have \begin{equation}\label{sdcjkasd} \Vert v\Vert =\sup \nvect(\XX)(v), \end{equation} where the supremum is taken among all local vector measures $\nvect$ belonging to the $\Cb(\XX)$ module generated by the image of $\DIFF$ with $\abs{\nvect}(\XX)\le 1$, \item if $\{v_k\}_k\subseteq\VV$ is bounded and such that for every Borel set $A$ and $f\in{\rm W}^{1,2}(\XX)$, $\DIFF f(A)(v_k)$ has a finite limit, then there exists $v\in\VV$ such that $\DIFF f(A)(v_k)\rightarrow\DIFF f(A)(v)$. \end{enumerate} Uniqueness is intended up to unique isomorphism in the following sense: whenever $(\tilde{\DIFF},\tilde{\VV})$ is another such couple, there exists a unique couple of $\Cb(\XX)$ linear (bijective) isometries $(\Phi,\Psi)$ where $\Phi:\MM_\VV\rightarrow\MM_{\tilde{\VV}}$ and $\Psi: \VV\rightarrow\tilde{\VV}$ are such that $\Phi\circ{\DIFF}=\tilde{\DIFF}$ and $\Psi(v)\,\cdot\,\Phi(\nvect)=v\,\cdot\,\nvect$. Finally, the definitions \begin{equation}\label{defnvv} \VV\defeq\left\{v\in\tanX:\abs{v}\in\Lpi\right\}=\tanXinf \end{equation} and \begin{equation}\label{defndd} \DIFF f(A)(\,\cdot\,)\defeq \int_A\diff f(\,\cdot\,)\dd{\mass} \end{equation} provide a realization of the unique couple as above. \end{thm} \begin{proof} We divide the proof into three steps. \\\textsc{Step 1}. We verify that the couple $(\DIFF,\VV)$ given by \eqref{defnvv} and \eqref{defndd} satisfies the requirements. It is clear that $\DIFF f$ is a local vector measure for every $f\in{\rm W}^{1,2}(\XX)$ whose total variation is bounded from above by $\abs{\diff f}\mass$. The equality $\abs{\DIFF f}=\abs{\diff f}\mass$ follows from \cite[Corollary 1.2.16]{Gigli14}.
Also, ${\rm W}^{1,2}(\XX)\ni f\mapsto\DIFF f\in\MM_\VV$ is linear by linearity of $f\mapsto\diff f$. Notice that \eqref{sdcjkasd} is a consequence of the density (in the sense of $\Lp^p$-normed $\Lp^\infty$-modules) of the image of the map $\diff:{\rm W}^{1,2}(\XX)\rightarrow\cotX$ together with the definition of pointwise norm for $\tanX$ (\cite[Proposition 1.2.14]{Gigli14}) and an immediate approximation argument. We prove now item $iii)$: take $\{v_k\}_k\subseteq\VV$ as in the statement and notice that since $\mass(\XX)<\infty$ such a sequence is also bounded in $\tanX$. Since this space is reflexive, there is a non-relabelled subsequence weakly converging to a limit $v\in \tanX$ (e.g.\ by Eberlein-Smulian's Theorem, but in fact in our setting the reflexivity of $\cotX$ implies its separability - because it trivially implies the reflexivity of ${\rm W}^{1,2}(\XX)$, that in turn implies separability of ${\rm W}^{1,2}(\XX)$ - see \cite[Proposition 42]{ACM14} - that in turn trivially implies the separability of $\cotX$, so there is no need for the deep Eberlein-Smulian's Theorem). Now notice that the $\tanXinf$-norm is $\tanX$-lower semicontinuous to conclude that $v\in\VV$ as well. Now notice that for $f\in {\rm W}^{1,2}(\XX)$ and $A\subseteq\XX$ Borel we have $\chi_A\diff f\in \cotX$, hence the weak convergence implies $$ \lim_k \DIFF f(A)(v_k)=\lim_k \int_{A}\diff f(v_k)\,\dd\mass=\int_A \diff f(v)\,\dd\mass=\DIFF f(A)(v), $$ as desired. \\\textsc{Step 2}. We prove that the maps $(\Phi,\Psi)$, if they exist, are unique. Recall that we require $\Phi\circ{\DIFF}=\tilde\DIFF$ and $\Psi(v)\,\cdot\,\Phi(\nvect)=v\,\cdot\,\nvect$ for any $v\in\VV$.
Then, given $\{g_i\}_{i=1,\dots,n}\subseteq\Cb(\XX)$ and $\{f_i\}_{i=1,\dots,n}\subseteq{\rm W}^{1,2}(\XX)$ we have that for any $v\in\VV$ it holds $$\sum_{i=1}^n g_i\tilde{\DIFF} f_i(\XX)(\Psi (v)) =\Phi\left(\sum_{i=1}^n g_i\DIFF f_i\right)(\XX)(\Psi(v))=\sum_{i=1}^n g_i\DIFF f_i(\XX)(v),$$ which, thanks to item $ii)$, forces the uniqueness of $\Psi$. Uniqueness of $\Phi$ follows immediately from the request $\Psi(v)\,\cdot\,\Phi(\nvect)=v\,\cdot\,\nvect$, as $\Psi$ is required to be surjective. \\\textsc{Step 3}. We take a couple $(\DIFF,\VV)$ verifying items $i)$, $ii)$ and $iii)$ and we prove existence of the maps $(\Phi,\Psi)$ as in the statement, provided that the other couple verifying items $i)$, $ii)$ and $iii)$ is the canonical one given by \eqref{defnvv} and \eqref{defndd}. This will clearly be enough. Both maps will be denoted with $\hat\cdot$. For every $v\in\VV$, we define \begin{equation}\notag \abs{v}_*\defeq\mass-\esssup_{f\in{\rm W}^{1,2}(\XX)} L_f(v)\chi_{\{\diff f\ne 0\}}, \end{equation} where $\DIFF f=L_f\abs{\DIFF f}$ is the polar decomposition; notice that $\abs{\,\cdot\,}_*$ is well defined as $|\DIFF f|\ll\mass$ by item $i)$. Notice now that item $ii)$ together with an easy approximation argument based on Proposition \ref{absfoutside} yields that \begin{equation}\notag \Vert v\Vert=\sup \nvect(\XX)(v), \end{equation} where the supremum is taken among the local vector measures $\nvect$ with $\abs{\nvect}(\XX)\le 1$ and $$ \nvect\in\left\{\sum_{i} \chi_{A_i}\DIFF f_i: \{A_i\}_i \text{ is a Borel partition of $\XX$ and $\{f_i\}_i\subseteq{\rm W}^{1,2}(\XX)$}\right\}. $$ It then follows that $\Vert \abs{v}_*\Vert_{\Lpi}=\Vert v\Vert$. Given $v\in\VV$, we consider the map $$ \cotX\ni\sum_i \chi_{A_i}\diff f_i \mapsto \sum_i\chi_{A_i} L_{f_i}(v)\abs{\diff f_i}= \sum_i\chi_{A_i} \dv{(v\,\cdot\,\DIFF f_i)}{\mass}\in\Lpu, $$ where the equality is due to item $i)$.
By the trivial bound $ \sum_i\chi_{A_i}|\diff f_i|\Vert v\Vert$ for the right hand side, the fact that $\mass$ is finite and \cite[Proposition 1.4.8]{Gigli14}, we see that this map defines an element of $\tanX$, that we call $\hat{v}$, and which satisfies, by the definition of $\abs{\,\cdot\,}_*$, the identity $\abs{\hat{v}}=\abs{v}_*\ \mass$-a.e. Notice that the map $v\mapsto\hat{v}$ is $\Cb(\XX)$ linear and satisfies \begin{equation} \label{eq:ugu} \DIFF f(A)(v)=\int_A \diff f(\hat{v})\dd{\mass}\qquad\text{ or equivalently }\qquad v\cdot \DIFF f=\diff f(\hat{v}) \end{equation} for every $A\subseteq\XX$ Borel and $v\in\VV$. Also, if $\sum_i \chi_{A_i}\diff f_i\in\cotX$, it holds that \begin{equation}\label{pointwiseess} \bigg|{\sum_i \chi_{A_i}\diff f_i}\bigg|=\mass-\esssup_{v\in\VV,\Vert v\Vert\le 1} \sum_i \chi_{A_i}\diff f_i(\hat{v}), \end{equation} as, if $f\in{\rm W}^{1,2}(\XX)$, $$\abs{\diff f}=\mass-\esssup_{v\in\VV,\Vert v\Vert\le 1}L_f(v)\abs{\diff f}=\mass-\esssup_{v\in\VV,\Vert{v}\Vert\le 1}\diff f(\hat{v}),$$ where, as above, the second equality comes from item $i)$ (or, which is the same, from \eqref{eq:ugu}). We set now $M\defeq\left\{\hat{v}:v\in\VV\right\}$ and we claim that $M=\tanXinf$. We prove first that $M\subseteq\tanX$ is dense. If by contradiction $M$ were not dense, we could find a functional $Q\in(\tanX)^*=\cotX$ (by \cite[Proposition 1.2.13]{Gigli14} and the assumption that $\cotX$ is reflexive) such that $Q\ne 0$ but $Q(\hat{v})=0\ \mass$-a.e.\ for every $v\in\VV$. By density, we take $\{Q_k\}_k\subseteq \cotX$ with $Q_k\rightarrow Q$ in $\cotX$ and also $\abs{Q-Q_k}\rightarrow 0\ \mass$-a.e., with each $Q_k$ of the form $\sum_i \chi_{A_i^k}\diff f_i^k$. Now, if $v\in\VV$, $$\abs{Q_k(\hat{v})}\le \abs{Q_k-Q}(\hat{v})+ \abs{Q(\hat{v})}\le \abs{Q_k-Q}\Vert v\Vert \quad\mass\text{-a.e.}$$ so that, taking into account \eqref{pointwiseess}, $$ \abs{Q_k}\le \abs{Q-Q_k}\quad\mass\text{-a.e.}$$ which implies that $Q=0$, a contradiction.
Therefore we have proved that $M\subseteq\tanX$ is dense. Take now $v\in\tanX$ with $\abs{v}\in\Lpi$. By density, we take a sequence $\{v_n\}_n\subseteq\VV$ such that $\hat{v}_n\rightarrow v$ in $\tanX$ and also $\abs{\hat{v}_n-v}\rightarrow 0\ \mass$-a.e. As $M$ is stable under multiplication by characteristic functions of Borel subsets of $\XX$ (thanks to item $iii)$ and the $\Cb(\XX)$ linearity of the map $v\mapsto\hat{v}$) we can further assume that $\{v_n\}_n\subseteq\VV$ is bounded. Now, thanks to item $iii)$, we see that $v\in M$. Now, to any $\nvect\in\MM_\VV$ we associate $\hat{\nvect}\in\MM_M$ by $\hat{v}\,\cdot\,\hat{\nvect}\defeq v\,\cdot\, \nvect$. As $\VV$ is isometric to $$\left(M,\Vert\abs{\,\cdot\,}\Vert_{\Lpi}\right)$$ (via the $\Cb(\XX)$ linear isometry $\hat\cdot$), we see that the map $\nvect\mapsto\hat{\nvect}$ is a $\Cb(\XX)$ linear isometry. Also, $\hat{\DIFF f}(A)=\int_A\diff f(\,\cdot\,)\dd{\mass}$. Indeed, if $v\in\VV$, then $\hat{v}\,\cdot\,\hat{\DIFF f}=v\,\cdot\,\DIFF f=L_f(v)\abs{\diff f}=\diff f(\hat{v})$. \end{proof} \begin{rem} Theorem \ref{exunSobo} can be easily adapted to integrability exponents different from $2$ within the range $(1,\infty)$. \fr \end{rem} \subsubsection{Differential of BV functions}\label{sectBV} In this section, we build local vector measures that describe the distributional derivatives of functions of bounded variation. We study here the case of an arbitrary metric measure space and a real-valued function of bounded variation. Then, in the setting of an $\RCD(K,\infty)$ space, we considerably improve the result, see Section \ref{sectBVRCD}. \bigskip We assume that the reader is familiar with the theory of functions of bounded variation in metric measure spaces developed in \cite{amb00,amb01,MIRANDA2003}. We recall now the main notions. For $A\subseteq\XX$ open, $\LIPloc(A)$ denotes the space of Borel functions that are Lipschitz in a neighbourhood of $x$, for any $x\in A$.
If $(\XX,\dist)$ is locally compact, $\LIPloc(A)$ coincides with the space of functions that are Lipschitz on compact subsets of $A$. Fix a metric measure space $(\XX,\dist,\mass)$. Given $f\in\Lpu$, we define, for any $A\subseteq\XX$ open, \[ \abs{\DIFF f}(A)\defeq\inf\left\{\liminf_k \int_A\lip(f_k)\dd{\mass} :f_k\in\LIPloc(A)\cap\Lpu,\ f_k\rightarrow f \text{ in }\Lpu\right\}. \] We say that $f$ is a function of bounded variation, i.e.\ $f\in\BVv$, if $f\in\Lpu$ and $\abs{\DIFF f}(\XX)<\infty$. If this is the case, $\abs{\DIFF f}(\,\cdot\,)$ turns out to be the restriction to open sets of a finite Borel measure that we denote with the same symbol and we call the total variation. Notice that, by its very definition, the total variation $\abs{\DIFF f}(A)$ is lower semicontinuous with respect to $\Lpu$ convergence for $A$ open, is subadditive and $\abs{\DIFF (\phi \circ f)}\le L \abs{\DIFF f}$ whenever $f\in\BVv$ and $\phi$ is $L$-Lipschitz. Several classical results concerning BV calculus have been generalized to the abstract framework of metric measure spaces. Among them is the Fleming-Rishel coarea formula, which states that given $f\in\BVv$, the set $\{f>r\}$ has finite perimeter for $\LL^1$-a.e.\ $r\in\RR$ and \[ \int_\XX h\dd{\abs{\DIFF f}}=\int_\RR\dd{r} \int_\XX h\dd{\per(\{f>r\},\,\cdot\,)}\quad\text {for any Borel function $h:\XX\rightarrow[0,\infty].$} \] In particular, \begin{equation} \label{coareaeqdiff} {\abs{\DIFF f}}(A)=\int_\RR\dd{r} {\per(\{f>r\},A)}\quad\text {for any $A\subseteq \XX$ Borel}. \end{equation} Now we need the definition of divergence (\cite{Gigli14,buffa2020bv}). Notice that in the definition below the module $\mathrm{L}^\infty(T\XX)$ is defined as in \eqref{eq:deftp}, i.e.\ starting from the modules $\cotX$, $\tanX$ and algebraic operations; in particular, no notion of Sobolev function other than ${\rm W}^{1,2}(\XX)$ is required. \begin{defn}\label{divedefn} Let $p\in\{2,\infty\}$.
For $v\in\mathrm{L}^p(T\XX)$ we say that $v\in D(\dive^p)$ if there exists a function $g \in\Lpp$ such that \begin{equation}\label{divedefneq} \int_\XX \diff f(v)\dd{\mass}=-\int_\XX f g \dd{\mass}\quad\text{for every }f\in{\rm W}^{1,2}(\XX)\text{ with bounded support}, \end{equation} and such $g$, which is uniquely determined, will be denoted by $\dive\, v$. \end{defn} Notice that if $v\in D(\dive^2)\cap D(\dive^\infty)$, then the two objects $\dive\,v$ as above coincide; in particular, $\dive\,v\in\Lpt\cap\Lpi$. From \eqref{divedefneq} it follows that $\supp(\dive\,v)\subseteq C$ for any $C\subseteq\XX$ such that $\supp v\subseteq C$. Another direct consequence of the definition is that if $v\in D(\dive^p)$ has bounded support (i.e.\ support contained in a bounded set) then \begin{equation} \label{eq:intdiv} \int_\XX \dive\,v\,\dd\mass=0 \end{equation} as can be checked by picking $f$ in \eqref{divedefneq} identically equal to 1 on a set containing the support of $v$. Also, the following version of the Leibniz rule holds: if $v\in D(\dive^\infty)$ and $f\in\LIPb(\XX)$, then $f v\in D(\dive^\infty)$ and \begin{equation}\label{calcdiveeq} \dive(f v)=\diff f(v)+f\dive\, v. \end{equation} This follows from \eqref{divedefneq} and the fact that if $g\in{\rm W}^{1,2}(\XX)$ has bounded support and $f\in\LIPb(\XX)$, then $f g\in{\rm W}^{1,2}(\XX)$ has bounded support and satisfies $\diff(f g)=f\diff g+g\diff f$. In the case $p=2$, again from the algebra properties of bounded Sobolev functions together with an easy approximation argument, we have that if $v\in D(\dive^2)\cap\tanXinf$ and $f\in\Ss\cap\Lpi$, then $f v\in D(\dive^2)$ and the calculus rule above holds. In the case $p=2$, we will often omit the superscript $2$ when writing the divergence. \bigskip The following representation formula is basically proved in \cite{DiMarino14} (see also \cite{buffa2020bv} and \cite[Proposition 2.1]{BGBV} for what concerns this formulation).
\begin{prop}[Representation formula]\label{reprfordiffregularpre2} Let $(\XX,\dist,\mass)$ be a metric measure space and $f\in\BVv$. Then, for every open subset $A$ of $\XX$, it holds that \begin{equation}\label{intagainst} \abs{\DIFF f}(A)=\sup\left\{\int_A f \dive\, v\dd{\mass}\right\}, \end{equation} where the supremum is taken among all $v\in\mathcal{W}_A$, where \begin{equation}\label{qualivect1} \mathcal{W}_A\defeq\left\{v\in D(\dive^\infty):\abs{v}\le 1\ \mass\text{-a.e.},\ \supp v\subseteq A\right\}. \end{equation} \end{prop} This statement might appear surprising because it characterizes $\BVv$ functions via duality with vector fields that, in turn, are defined in duality with functions in ${\rm W}^{1,2}(\XX)$ (as discussed before Definition \ref{divedefn}). We thus make the following observations: \begin{itemize} \item[i)] Approaching Sobolev/BV functions via integration by parts in the general metric measure setting has been one of the main achievements in \cite{DiMarino14}. In that reference, the definition is given in duality with the notion of \emph{derivation} which is there defined as a suitable map from Lipschitz functions to $\Lp^0(\mass)$. \item[ii)] Since Lipschitz functions are always Sobolev, at least locally, vector fields as considered in \eqref{qualivect1} are included in the class of derivations as used in \cite{DiMarino14} to define BV functions. In particular, it is obvious a priori from the definitions in \cite{DiMarino14} that the inequality $\geq$ holds in \eqref{intagainst}. \item[iii)] The opposite inequality follows from the results in \cite{DiMarino14}. Specifically, it is trivial to notice that `$\Lp^\infty$ derivations with divergence in $\Lp^\infty$' are also `$\Lp^2$ derivations with divergence in $\Lp^2$' (at least locally) and these latter ones can be used - thanks to \cite{DiMarino14} - to define ${\rm W}^{1,2}$ functions.
It then follows by abstract machinery that these sorts of $\Lp^2$ derivations are (or better, uniquely induce) vector fields in $\tanX$, and if we actually start with an $\Lp^\infty$ derivation with divergence in $\Lp^\infty$, the corresponding vector field will be in $\mathrm{L}^\infty(T\XX)\cap D(\dive^\infty)$ with the same pointwise norm and divergence as the original derivation (see also \cite[Lemma 3.12]{buffa2020bv}). This line of thought gives $\leq$ in \eqref{intagainst}. \end{itemize} With this said, we have the following result: \begin{thm}\label{difffloccaomp} Let $(\XX,\dist,\mass)$ be a metric measure space and let $\VV$ be the subspace of $\tanXinf$ made of $\Cb(\XX)$-linear combinations of vector fields in $D( \dive^\infty)$. Then for every $f\in \BVv$ there exists a unique local vector measure $\DIFF f$ defined on $ \VV$ such that \begin{equation}\label{difffloccaompeq} \DIFF f(\XX)(v)=-\int_\XX f\dive \,v\,\dd\mass\qquad\text{for every }v\in D(\dive^\infty). \end{equation} \end{thm} \begin{proof} Start by noticing that \eqref{calcdiveeq} grants that $D(\dive^\infty)$ is a normed $\LIPb(\XX)$-module (we equip $D(\dive^\infty)$ with the norm of $\tanXinf$) and that $\LIPb(\XX)$ is a subring of $\Cb(\XX)$ that approximates open sets in the sense of Definition \ref{def:Rap}. Now define $\FF:D(\dive^\infty)\rightarrow\RR$ as \[ \FF(v)\defeq -\int_\XX f\dive\,v\dd{\mass}. \] Notice that Proposition \ref{reprfordiffregularpre2} shows that $\FF\in(D(\dive^\infty))'$ with $\Vert \FF\Vert'=\abs{\DIFF f}(\XX)$ and that $$\sup \left\{\FF(v):v\in D(\dive^\infty),\ \Vert v\Vert \le 1,\ \supp v\subseteq A\right\} =\abs{\DIFF f}(A).$$ In particular, the set function defined by the supremum on the left hand side of the equation above is the restriction to open sets of a finite Borel measure, so that by Lemma \ref{bbtightiffmeas} the functional $\FF$ is tight.
Therefore by Theorem \ref{weakder} there is a unique local vector measure $\DIFF f$ defined on $D(\dive^\infty)$ for which \eqref{difffloccaompeq} holds and by Proposition \ref{lipbscb} such a measure can be uniquely extended to a local vector measure on $\VV$. \end{proof} \begin{rem} Given $f\in\BVv$, we can take the polar decomposition of its distributional derivative $\DIFF f=L\abs{\DIFF f}$ given by Theorem \ref{difffloccaomp}. Therefore, taking into account also Lemma \ref{foutside}, for every $g\in\Cb(\XX)$ and $v\in D(\dive^\infty)$ such that $g v\in D(\dive^\infty)$, we have \[ \int_\XX f\dive(g v)\dd{\mass}=-\int_\XX g L(v)\dd{\abs{\DIFF f}} \] where, in particular, $\Vert L(v)\Vert_{\Lp^\infty(\abs{\DIFF f})}\le \Vert v\Vert$ by \eqref{inequalitygerm}. We can see this result as a particular case of \cite[Theorem 4.13]{buffa2020bv}. Following similar arguments, it is also possible to obtain the full result of \cite[Theorem 4.13]{buffa2020bv}, working with local vector measures defined on the $\Cb(\XX)$ module generated by $\mathcal{D M^\infty}(\XX)$ (\cite[Definition 4.1]{buffa2020bv}).\fr \end{rem} \bigskip We prove now the basic calculus rules for \textbf{continuous} functions of bounded variation. In the framework of $\RCD(K,N)$ spaces, we will have a much more powerful result, see Theorem \ref{volprop}. \begin{prop}[Chain rule]\label{propchaincont} Let $(\XX,\dist,\mass)$ be a metric measure space, let $f\in\BVv\cap C(\XX)$ and let $\phi\in\LIP(\RR)$ be such that $\phi(0)=0$. Then \begin{equation}\label{abspushf} |\DIFF f|(f^{-1}(N))=0\quad\text{for every Borel set $N\subseteq\RR$ such that $\LL^1(N)=0$}. \end{equation} In particular, $\phi$ is differentiable at $f(x)$ for $\abs{\DIFF f}$-a.e.\ $x$. Moreover $\phi\circ f\in\BVv$ and \begin{equation}\label{chaincont} \DIFF (\phi\circ f)=\phi'\circ f\DIFF f. \end{equation} \end{prop} \begin{proof} Take $N\subseteq\RR$ with $\LL^1(N)=0$.
Then we use \eqref{coareaeqdiff} and the fact that the perimeter of a set is concentrated on its topological boundary to compute \begin{equation*} \abs{\DIFF f} (f^{-1}(N))=\int_\RR \per({\{f>t\}},f^{-1}(N)) \dd{t} =\int_N \per({\{f>t\}},f^{-1}(N)) \dd{t}=0. \end{equation*} In particular, by Rademacher's Theorem, we have that $\phi$ is differentiable at $f(x)$ for $\abs{\DIFF f}$-a.e.\ $x$. With an easy approximation argument, we see that we can assume $\phi\in\LIP(\RR)\cap C^1(\RR)$ with $\phi(0)=0$. Indeed, let $\{\rho_n\}_n$ be a family of Friedrichs mollifiers and define $$\phi_n\defeq \phi\ast\rho_n- (\phi\ast\rho_n)(0)\in\LIP(\RR)\cap C^1(\RR).$$ For any $v\in D(\dive^\infty)$, we have on the one hand $$ \DIFF(\phi_n\circ f)(\XX)(v)=-\int_\XX \phi_n\circ f \dive\,v\dd{\mass}\rightarrow -\int_\XX \phi\circ f \dive\,v\dd{\mass}=\DIFF(\phi \circ f)(\XX)(v) $$ and on the other hand $$ \phi_n'\circ f\DIFF f(\XX)(v)=\int_\XX \phi_n'\circ f L_f(v)\dd{\abs{\DIFF f}}\rightarrow \int_\XX \phi'\circ f L_f(v)\dd{\abs{\DIFF f}}= \phi'\circ f\DIFF f(\XX)(v),$$ where we used that $\phi_n'\rightarrow\phi'\ \LL^1$-a.e.\ so that by \eqref{abspushf} it holds $\phi_n'\circ f\rightarrow\phi'\circ f\ \abs{\DIFF f}$-a.e. Then, if the chain rule holds for $\phi_n$, by the characterization of the differential given in Theorem \ref{difffloccaomp} above we obtain that it holds for $\phi$. We have thus proved that it suffices to prove the claim under the assumption $\phi\in \LIP(\RR)\cap C^1(\RR)$ with $\phi(0)=0$. If this is the case, we can take an approximating sequence $\{\tilde{\phi}_n\}_n$ as follows: for every $n$, $\tilde{\phi}_n$ is piecewise affine, at its points of non-differentiability $\tilde{\phi}_n$ coincides with $\phi$, $\tilde{\phi}_n\rightarrow\phi$ uniformly and $|{\tilde{\phi}_n'-\phi'}|\rightarrow 0\ \LL^1$-a.e. Let now $\{\hat{\phi}_n\}_n$ be defined as $\hat{\phi}_n\defeq\tilde{\phi}_n-\tilde{\phi}_n(0)$.
Arguing as above, we see that it suffices to check that \eqref{chaincont} holds for any $\hat{\phi}_n$ to conclude the proof. To conclude then we prove the chain rule under the assumption that $\phi\in\LIP(\RR)$ is piecewise affine and $\phi(0)=0$. Thus let $\{A_i\}_i\subseteq\XX$ be the at most countable collection of open sets of the form $f^{-1}(I)$ for $I\subseteq\RR$ an interval on which $\phi$ is affine. Then \eqref{abspushf} ensures that $|\DIFF f|(\XX\setminus\cup_iA_i)=0$ and then an argument based on the fact that $|\DIFF f|(\XX)<\infty$ shows that $\chi_{\cup_{i<n}A_i}\DIFF (\phi\circ f)(\XX)(v)\to \DIFF (\phi\circ f)(\XX)(v)$ and $\chi_{\cup_{i<n}A_i}\phi'\circ f\DIFF f (\XX)(v)\to\phi'\circ f \DIFF f (\XX)(v)$ as $n\to\infty$ for any $v\in D(\dive^\infty)$. Hence to conclude it is enough to check that $\chi_{A_i}\DIFF(\phi\circ f)=\chi_{A_i}\phi'\circ f\DIFF f$ for any $i$; then, again by an argument based on $|\DIFF f|(\XX)<\infty$, it is sufficient to prove that $\chi_{B}\DIFF(\phi\circ f)=\chi_{B}\phi'\circ f\DIFF f$ for any bounded open set $B$ contained in some of the $A_i$'s. In turn, \eqref{eq:locmass} (applied with $\VV:=D(\dive)$ but then we use Proposition \ref{lipbscb}) and \eqref{eq:defbart} show that to prove this latter statement it is sufficient to prove that \[ \DIFF(\phi\circ f)(B)(v)=\phi'\circ f\DIFF f(B)(v) \] for any $B$ as before and $v\in D(\dive^\infty)$ with $\supp v\subseteq B$. To see this, notice that \[ \begin{split} \DIFF(\phi\circ f) (B)(v)&=\DIFF(\phi\circ f)(\XX)(v)=-\int_\XX \phi\circ f\dive\,v\dd{\mass}\stackrel{\eqref{eq:intdiv}}=-\phi'_{| I}\int_\XX f\dive\,v\dd{\mass}=\phi'\circ f\DIFF f (B)(v), \end{split} \] where $\phi'_{|I}$ denotes the constant slope of $\phi$ on the interval $I$ such that $B\subseteq f^{-1}(I)$. The conclusion follows. \end{proof} The Leibniz rule is simply obtained by polarization of the chain rule with $\phi(t)=t^2$. \begin{prop}[Leibniz rule]\label{leibnizcontprop} Let $(\XX,\dist,\mass)$ be a metric measure space and $f,g\in\BVv\cap \Cb(\XX)$.
Then $f g \in\BVv$ and \[ \DIFF (f g)=f\DIFF g+g\DIFF f. \] In particular, \begin{equation}\label{csdcasdc} \abs{\DIFF(f g)}\le \abs{f}\abs{\DIFF g}+\abs{g}\abs{\DIFF f}. \end{equation} \end{prop} \begin{proof} Using the chain rule with $\phi\in\LIP(\RR)$ that coincides with $t\mapsto t^2$ on a sufficiently large neighbourhood of $0$, we see that \[ \begin{split} \DIFF (f+g)^2&=2(f+g)\DIFF (f+g),\\ \DIFF f^2&=2 f\DIFF f,\\ \DIFF g^2&=2 g\DIFF g. \end{split} \] The conclusion easily follows from the linearity of the differential, as $2\DIFF(fg)=\DIFF(f+g)^2-\DIFF f^2-\DIFF g^2=2f\DIFF g+2g\DIFF f$. \end{proof} \begin{rem} We wish to point out that the language of local vector measures is not necessary to achieve the inequality \eqref{csdcasdc}. We sketch an alternative proof. Take first $\phi\in\LIP(\RR)$ bi-Lipschitz (hence strictly monotone, say strictly increasing) and assume that $\phi(0)=0$. For $A\subseteq\XX$ open, we compute, by \eqref{coareaeqdiff} and the change of variables formula, \begin{align*} |\DIFF(\phi\circ f)|(A)&=\int_\RR \per(\{\phi\circ f>t\},A)\dd{t}=\int_\RR \per(\{ f>s\},A)\phi'(s)\dd{s}\\&=\int_\RR \int_A \phi'(f(x))\dd\per(\{ f>s\},\,\cdot\,)(x)\dd{s}=\int_A \phi'(f(x))\dd{\abs{\DIFF f}}, \end{align*} so that\begin{equation}\label{cdsvdscas} \abs{\DIFF (\phi\circ f)}\le \abs{\phi'\circ f}\abs{\DIFF f}. \end{equation} Now we notice that, using \eqref{coareaeqdiff}, \eqref{abspushf} and the regularity of the measures involved, we see that it is enough to check \eqref{csdcasdc} on $A$, where $A\subseteq\XX$ is a bounded open set such that $f,g\in(c,C)$ for some $c,C\in(0,\infty)$. Up to scaling, we can assume $c=2$. We compute then, on $A$, \begin{align*} \abs{\DIFF (f g)}=|{\DIFF e^{\log(f g)}}|\le e^{\log (f g)}\abs{\DIFF (\log f+\log g)}\le fg\abs{\DIFF (\log f)}+ fg\abs{\DIFF (\log g)}\le g\abs{\DIFF f}+f\abs{\DIFF g}, \end{align*} where we used \eqref{cdsvdscas} repeatedly.
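Here \eqref{cdsvdscas} is applied with $\phi$ a bi-Lipschitz modification of $\exp$ and of $\log$ on the relevant bounded ranges (legitimate since $f,g\in(c,C)$ on $A$, so that all the quantities involved range in compact subsets of $(0,\infty)$), namely in the form \[ |{\DIFF e^{u}}|\le e^{u}\abs{\DIFF u}\qquad\text{and}\qquad \abs{\DIFF (\log f)}\le \frac{1}{f}\abs{\DIFF f}, \] and analogously for $g$.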
If $f,g\in\BVv\cap\Lpi$ are not continuous, other versions of the inequality under investigation are available: if one denotes by $\bar{f},\bar{g}$ the precise representatives of $f,g$ (e.g.\ \eqref{veeandwedge} and the equation below \eqref{veeandwedge}), a reasonable claim would be $$ \abs{\DIFF (fg)}\le |{\bar{f}}|\abs{\DIFF g}+\abs{\bar{g}}\abs{\DIFF f}, $$ which is exactly what one obtains in the smooth context. On metric measure spaces this property may fail (see e.g.\ \cite{lahti2018sharp}, where an optimal bound on $\abs{\DIFF (f g)}$ for $\PI$ spaces was also provided), whereas for finite dimensional $\RCD$ spaces the sharp version has been proved in \cite{BGBV}. \end{rem} \subsubsection{Strongly local measures}\label{normedmodules} Let $(\XX,\dist,\mass)$ be a metric measure space (complete, separable, with measure finite on bounded sets), $\mathscr M$ an $\Lp^p(\mass)$-normed $\Lp^\infty(\mass)$-module over it and $\VV\subseteq{\mathscr M}$ a $\Cb(\XX)$-submodule (in the algebraic sense) such that for every $v\in\VV$, $\abs{v}\in\Lp^\infty(\mass)$. We have already noticed that $\VV$, equipped with the norm $$ \Vert v\Vert\defeq\Vert \abs{v}\Vert_{\Lp^\infty(\mass)}, $$ is a normed $\Cb(\XX)$-module. Local vector measures $\nvect$ defined on $\VV$ are, by definition, weakly local, i.e.\ they satisfy \begin{equation} \label{eq:wl} \nvect(A)(v)=0,\qquad\text{ for every $A\subseteq\XX$ open and $v\in\VV$ with $\|v\|_{|A}=0$.} \end{equation} In some sense, due to the nature of the definition of general normed $\Cb(\XX)$-modules, this is the most we can ask for when speaking about locality. In the current setting, however, the elements of $\VV$ are also elements of $\mathscr M$ and thus are `defined $\mass$-a.e.', in a sense (see also the discussion in \cite{Gigli14}).
In practice, not only can we say whether $\|v\|_{|A}=0$ for any open set $A$, but we can also ask whether $|v|=0$ $\mass$-a.e.\ on $B$ for $B\subseteq\XX$ Borel (and this certainly occurs if $B$ is open and $\|v\|_{|B}=0$). Because of this, we ask whether a given local vector measure $\nvect$ is local in the following sense, which we shall call \emph{strong locality}: \begin{equation} \label{eq:sl} \nvect(B)(v)=0,\qquad\text{ for every $B\subseteq\XX$ Borel and $v\in\VV$ with $|v|=0$ $\mass$-a.e.\ on $B$.} \end{equation} What we just said ensures that \eqref{eq:sl} implies \eqref{eq:wl}. There are two reasons why the converse implication might fail: \begin{itemize} \item[1)] It might be that $|\nvect|\not\ll\mass$. In this case, picking $B$ with $\mass(B)=0$ and $|\nvect|(B)>0$, we see that \eqref{eq:sl} cannot hold. \item[2)] It might be that $|\nvect|\ll\mass$ but \eqref{eq:sl} still fails, so that we cannot improve the locality information from open sets to Borel ones. In investigating this matter it might be worth noticing that the germ seminorm $|v|_g$ coincides with the $\mass$-essential upper semicontinuous envelope of the pointwise norm $|v|$ (because $\Vert v\Vert_{| A}=\Vert \abs{v}\chi_A\Vert_{\Lp^\infty(\mass)}$ for any $A\subseteq\XX$ open). \end{itemize} Using Hahn-Banach on $\Lp^\infty$, it is easy to build examples where these phenomena actually occur: \begin{ex} Let $(\XX,\dist,\mass)$ be the unit interval $[0,1]$ equipped with the usual distance and measure and $\VV:=\Lp^\infty(\mass)$. Also, let $V\subseteq\VV$ be the subspace of those functions that are $\mathcal L^1$-a.e.\ constant in a neighbourhood of 0 and $L:V\to\RR$ be the functional assigning to $f\in V$ the value it assumes a.e.\ in such a neighbourhood. Then clearly $L$ has norm 1 and can be extended, via Hahn-Banach, to a functional with norm 1 on $\VV$, still denoted $L$.
Now we define $\nvect:=L\delta_0$, i.e.\ we put $\nvect(B)(f):=\delta_0(B)L(f)$ for every $B\subseteq [0,1]$ Borel and $f\in \VV$. It is clear that $\nvect$ is a vector valued measure on $\VV'$; to check weak locality we notice that for $A\subseteq[0,1]$ open we have $\|f\|_{|A}=0$ if and only if $f=0$ $\mathcal L^1$-a.e.\ on $A$, whence the conclusion follows from the very definition of $L$. Since, rather clearly, we have $|\nvect|=\delta_0$, we have an example where $(1)$ above holds. A variation of this construction also gives an example where $(2)$ holds. Namely, let $(\XX,\dist,\mass)$ be the unit interval $[0,1]$ equipped with the usual distance and the measure $\mass:=\delta_0+\mathcal L^1$ and let $\VV:=\Lp^\infty(\mass)$. Also, let $V\subseteq\VV$ be the subspace of those functions that are $\mathcal L^1$-a.e.\ constant in a neighbourhood of 0 and $L:V\to\RR$ be the functional assigning to $f\in V$ the value it assumes a.e.\ in such a neighbourhood (notice that $L(f)$ might be different from $f(0)$). Then, as before, $L$ has norm 1 and can be extended, via Hahn-Banach, to a functional with norm 1 on $\VV$, still denoted $L$. As before, we define $\nvect:=L\delta_0$ and notice that the same arguments as above ensure that $\nvect$ is a local vector measure defined on $\VV$ with $|\nvect|=\delta_0\ll\mass$. To see that \eqref{eq:sl} fails, let $f\in \VV$ be identically 1 on $(0,1]$ and $f(0)=0$. Then the pointwise norm of $f$ at 0 is $0$ (notice that, as already discussed, the pointwise norm is not the same as the germ seminorm), so that if \eqref{eq:sl} were in place we would have $\nvect(\{0\})(f)=0$; but the fact that $f$ is in $V$ gives $L(f)=1$, so that $\nvect(\{0\})(f)=1$.
\fr \end{ex} With this said, our main result in this section, namely Proposition \ref{polreprmodules} below, concerns the characterization of strongly local vector measures and the extension of such measures initially defined only on appropriate subspaces, i.e.\ we are going to adapt Proposition \ref{lipbscb} to the strongly local case. Before coming to the actual statement, let us point out an easy-to-spot class of strongly local vector measures. Take $M\in\mathscr M^*$ (the dual in the sense of modules) with $|M|\in \Lp^\infty(\mass)$ and define $\nvect$ as \begin{equation} \label{eq:reprm} \nvect(A)(v)\defeq\int_A M(v)\dd\mass\quad\text{for $A\subseteq\XX$ Borel and $v\in\VV$,} \end{equation} where $\VV \defeq\{v\in\mathscr M: |v|\in\Lpi\}$. It is then easy to see that $\nvect$ is a local vector measure satisfying \eqref{eq:sl} and that $|\nvect|=|M|\mass$ (one would also like to say that the polar decomposition $\nvect=L|\nvect|$ of $\nvect$ is given by $\nvect=\frac{M}{|M|}|\nvect|$, but this requires a bit of technical care because $M$ is not a map from $\XX$ to $\VV'$ but rather a `local' map from $\VV$ to $\Lp^1(\mass)$). There are close links between the two notions, relying on Corollary \ref{thmapp}, but we won't discuss this topic further and rather refer to the upcoming \cite{GLP22}. One of the conclusions of Proposition \ref{polreprmodules} below is that, perhaps not surprisingly, in fact all strongly local vector measures are of this form. A more interesting question concerns the possibility of extending a strongly local vector measure that initially is defined only on some normed $\mathcal R$-module $\WW$ dense in $\VV$ in a suitable sense.
We point out that for $v\in\VV$ ($\VV$ seen as a normed $\Cb(\XX)$-module) or $v\in\WW$ ($\WW$ seen as a normed $\Rr$-module) it holds $$\Vert v\Vert_{|A}=\Vert \chi_A|v|\Vert_{\Lp^\infty(\mass)}\qquad\text{for every $A\subseteq\XX$ open},$$ in particular the local seminorm $\Vert\,\cdot\,\Vert_{|A}$ (and hence the notion of germ seminorm and support) is independent of the choice of $\Rr$. We have seen in Proposition \ref{lipbscb} that all (weakly) local vector measures admit a unique extension, thus uniqueness is also in place in the strongly local case. We have not been able to achieve an equally general conclusion as far as existence is concerned, nor to find counterexamples; this is the same as saying that we don't know whether the extension of a strongly local vector measure given by Proposition \ref{lipbscb} is still strongly local. Still, we identified a sufficient condition on $\WW$ for this to hold: it amounts to asking that \begin{equation}\label{stabilityfrac} \frac{1}{1\vee\abs{v}}v\in\WW\quad\text{for every }v\in\WW. \end{equation} A typical example of a situation where this happens is $\WW=\Lp^\infty(\XX)\cap {\rm W}^{1,2}(\XX)$ (in our applications in the $\RCD$ setting we will pick a space of bounded Sobolev vector fields, see Section \ref{se:RCD}). Notice also that the map $v\mapsto \frac{1}{1\vee\abs{v}}v$ is a sort of `truncation' operation, as it leaves $v$ unchanged on $\{|v|\leq 1\}$ and normalizes it to $\frac{v}{|v|}$ on $\{|v|>1\}$. Finally, we notice that once a representation like \eqref{eq:reprm} holds, one can easily extend the measure from the completion of $\Cb(\XX)\cdot\WW$ to $\{v\in\mathscr M:|v|\in\Lpi\}$: we shall use this observation in identifying `polar' and `representable' measures in Section \ref{se:RCD}, see Proposition \ref{equivrepr}.
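In formulas, the effect of this truncation on the pointwise norm reads \[ \Big|\frac{1}{1\vee\abs{v}}v\Big|=\frac{\abs{v}}{1\vee\abs{v}}= \begin{cases} \abs{v}&\text{on }\{\abs{v}\le 1\},\\ 1&\text{on }\{\abs{v}>1\}, \end{cases} \qquad\mass\text{-a.e.}, \] so that the truncated vector field always has pointwise norm at most $1$.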
\bigskip With this said, our main result here is: \begin{prop}\label{polreprmodules} Let $(\XX,\dist,\mass)$ be a metric measure space, $\mathscr M$ an $\Lp^p(\mass)$-normed $\Lp^\infty(\mass)$-module and $\VV\subseteq\mathscr M$ the normed $\Cb(\XX)$-module made of elements of $\mathscr M$ with pointwise norm in $\Lp^\infty(\mass)$. Also, let $\WW\subseteq \VV$ be a subspace that, with the inherited structure, is also a normed $\mathcal R$-module for some subring $\mathcal R\subseteq\Cb(\XX)$ that approximates open sets (Definition \ref{def:Rap}). Assume also that $\WW$ satisfies \eqref{stabilityfrac} and that $\WW$ generates, in the sense of modules, $\mathscr M$. Let $\nvect$ be a local vector measure defined on $\WW$ such that $|\nvect|\ll\mass$. Then the following assertions are equivalent: \begin{itemize} \item[i)] $\nvect$ is strongly local on $\WW$, i.e.\ for any $v\in\WW$ and $B\subseteq\XX$ Borel we have \[ \nvect(B)(v)=0\quad\text{whenever }\abs{v}=0\ \abs{\nvect}\text{-a.e.\ on $B$}. \] \item[ii)] there exists $M_\nvect$ in ${\mathscr M}^*$ (the dual in the sense of modules) such that $\abs{M_\nvect}=1\ \abs{\nvect}$-a.e.\ and, for every $v\in\WW$, it holds \begin{equation}\label{repr0} L_\nvect(x)(v)=M_\nvect(v)(x)\quad\text{for $\abs{\nvect}$-a.e.\ $x\in\XX$}. \end{equation} \end{itemize} Moreover, if these hold, the formula \begin{equation} \label{eq:extsl} \hat \nvect(B)(v):=\int_BM_\nvect(v)\,\dd|\nvect|,\qquad\forall B\subseteq\XX\text{ Borel},\ v\in\VV, \end{equation} provides the unique extension of $\nvect$ to a strongly local vector measure defined on $\VV$. \end{prop} \begin{proof} The implication $ii)\Rightarrow i)$ is obvious, so we turn to the opposite one. We start by showing that for every $v\in\WW$ we have \begin{equation}\label{todual0} \abs{L_\nvect(x)(v)}\le\abs{v}(x)\quad\text{for }\abs{\nvect}\text{-a.e.\ }x\in\XX. \end{equation} Let then $v\in\WW$ and let $B$ be a Borel subset of $\XX$.
If $\abs{v}=0\ \abs{\nvect}$-a.e.\ on $B$, then $\nvect(B)(v)=0$. Otherwise we set $$w_v\defeq\frac{\Vert \abs{v}\chi_B\Vert_{\Lp^\infty(\abs{\nvect})}}{{\Vert \abs{v}\chi_B\Vert_{\Lp^\infty(\abs{\nvect})}}\vee\abs{v}} v$$ and notice that $w_v\in\WW$ by our assumption \eqref{stabilityfrac} and that $|v-w_v|=0$ $|\nvect|$-a.e.\ on $B$. Having assumed $i)$, this implies $$\abs{\nvect(B)(v)}=\abs{\nvect(B)(w_v)}\le\abs{\nvect}(B)\Vert w_v\Vert \le\abs{\nvect}(B)\Vert \abs{v}\chi_B\Vert_{\Lp^\infty(\abs{\nvect})}.$$ Therefore, for every $B\subseteq\XX$ Borel we have $$\abs{\int_B L_\nvect(x)(v)\dd{\abs{\nvect}}}\le\abs{\nvect}(B)\Vert \abs{v}\chi_B\Vert_{\Lp^\infty(\abs{\nvect})}\quad\text{for every }v\in\WW,$$ so that \eqref{todual0} follows. By the fact that $\WW$ generates, in the sense of modules, $\mathscr M$ and with \eqref{todual0} in mind, we can apply \cite[Proposition 1.4.8 and Theorem 1.2.24]{Gigli14} to obtain existence and uniqueness of $M_\nvect\in{\mathscr M}^*$ such that \eqref{repr0} holds for every $v\in\WW$ and $\abs{M_\nvect}\le 1\ \abs{\nvect}$-a.e. Then, using \eqref{essusp}, we show that $\abs{M_\nvect}= 1\ \abs{\nvect}$-a.e. The fact that formula \eqref{eq:extsl} provides a strongly local extension of $\nvect$ is obvious. To see that it is the only one, use the implication $(i)\Rightarrow(ii)$ just proved with $\VV$ in place of $\WW$ to find a (unique) corresponding $M_{\hat\nvect}\in\mathscr M^*$ such that \eqref{repr0} holds. Then the uniqueness of both $M_{\hat\nvect}$ and $M_{\nvect}$ forces the equality $M_{\hat\nvect}=M_{\nvect}$ and gives the conclusion. \end{proof} \section{The theory for RCD spaces} In this section we treat the case of local vector measures defined on a particular class of $\Cb(\XX)$-normed modules, namely tangent modules on $\RCD$ spaces. The reason for dealing with $\RCD$ spaces is that we have at our disposal a fine tangent module (with respect to the capacity).
This fine tangent module is useful, in practice, as many relevant objects turn out to have total variation which is absolutely continuous with respect to the capacity (e.g.\ the distributional derivative of a function of bounded variation). Even though it is fairly easy to adapt the theory developed in this section to a more general context, we decided to stick to this particular case for the sake of clarity and to avoid overloading the paper with the axiomatization of the properties regarding the interplay of the modules involved, which are by now well known in the $\RCD$ setting. \subsection{Some useful knowledge} With the introduction above in mind, let us briefly introduce $\RCD$ metric measure spaces. An $\RCD(K,N)$ space is an infinitesimally Hilbertian space (\cite{Gigli12}) satisfying a lower Ricci curvature bound and an upper dimension bound (meaningful if $N<\infty$) in a synthetic sense according to \cite{Sturm06I,Sturm06II,Lott-Villani09}. General references on this topic are \cite{AmbICM,AmbrosioGigliMondinoRajala12,AmbrosioGigliSavare11,Ambrosio_2014,AmbrosioGigliSavare12,Gigli17,Gigli14,GP19,Villani2017} and we assume the reader to be familiar with this material. \bigskip Following \cite{Gigli14,Savare13} (with the additional request of an $\Lp^\infty$ bound on the Laplacian), we define the vector space of test functions on an $\RCD(K,\infty)$ space as \begin{equation}\notag \TestF(\XX)\defeq\{f\in\LIP(\XX)\cap\Lpi\cap D(\Delta): \Delta f\in \HSs\cap\Lpi\}, \end{equation} and the vector space of test vector fields as \begin{equation}\label{testVdef} \TestV(\XX)\defeq\left\{ \sum_{i=1}^n f_i\nabla g_i : f_i\in\Ss\cap\Lpi,g_i\in\TestF(\XX)\right\}. \end{equation} Notice that the original definition of $\TestV(\XX)$ given by the second named author was slightly different, namely it was $\left\{\sum_{i=1}^n f_i\nabla g_i: f_i,g_i\in\TestF(\XX)\right\}$.
Our choice is motivated by the desire to have \eqref{stabilityfrac} at our disposal without introducing further spaces of vector fields. In this direction we point out that, rather clearly from the studies in \cite{Gigli14}, for any $v\in \TestV(\XX)$ we have $|v|\in {\rm W}^{1,2}(\XX)\cap\Lp^\infty(\XX)$, thus $\frac1{1\vee|v|}\in {\rm W}^{1,2}(\XX)\cap\Lp^\infty(\XX)$ as well, so that our definition of $\TestV(\XX)$ ensures that such space has the property \eqref{stabilityfrac}. As for the definition of the spaces $\WHCSs, \WHHSs$ as closures of the space of test vector fields, having the enlarged space of vector fields makes no difference, as such enlargement is still, trivially, contained in the spaces $\WHCSs, \WHHSs$ as originally introduced, and therefore such spaces can be equivalently defined taking the closure (with respect to the relevant norm) of the space defined in \eqref{testVdef}. \bigskip We assume familiarity with the definition of capacitary modules, quasi-continuous functions and vector fields and the related material in \cite{debin2019quasicontinuous}. A summary of the material we will use can be found in \cite[Section 1.3]{bru2019rectifiability}. For the reader's convenience, we state the results that we will need most frequently. Exploiting Sobolev functions, we define the $2$-capacity (to which we shall simply refer as capacity) of any set $A\subseteq\XX$ as $$\capa(A)\defeq\inf\left\{ \Vert f\Vert_{\HSs}^2:f\in\HSs,\ f\ge 1\ \mass\text{-a.e.\ on some neighbourhood of $A$}\right\}. $$ An important object will be the fine tangent module, introduced as follows ($\qcr$ stands for `quasi-continuous representative'). \begin{thm}[{\cite[Theorem 2.6]{debin2019quasicontinuous}}]\label{tancapa} Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space.
Then there exists a unique couple $(\tanXcap,\nablatilde)$, where $\tanXcap$ is a $\Lp^0(\capa)$-normed $\Lp^0(\capa)$-module and $\nablatilde: \TestF(\XX) \rightarrow\tanXcap$ is a linear operator such that: \begin{enumerate}[label=\roman*)] \item $|{\nablatilde f}|=\qcr(\abs{\nabla f}) \ \capa$-a.e.\ for every $f\in\TestF(\XX)$, \item the set $\left\{\sum_{n} \chi_{E_n}\nablatilde f_n\right\}$, where $\{f_n\}_n\subseteq\TestF(\XX)$ and $\{E_n\}_n$ is a Borel partition of $\XX$, is dense in $\tanXcap$. \end{enumerate} Uniqueness is intended up to unique isomorphism, in the following sense: if another couple $(\tanXcap',\nablatilde')$ satisfies the same properties, then there exists a unique module isomorphism $\Phi:\tanXcap\rightarrow\tanXcap'$ such that $\Phi\circ \nablatilde=\nablatilde'$. Moreover, $\tanXcap$ is a Hilbert module, which we call the capacitary tangent module. \end{thm} Notice that we can, and will, extend the map $\qcr$ from $\HSs$ to $\Ss\cap\Lpi$ by a locality argument. We define \begin{equation}\notag \TestVbar(\XX)\defeq \left\{ \sum_{i=1}^n \qcr(f_i) \nablatilde g_i :f_i\in\Ss\cap\Lpi,g_i\in\TestF(\XX) \right\}. \end{equation} We also define the vector subspace of quasi-continuous vector fields, $\Cqcvf$, as the closure of $\TestVbar(\XX)$ in $\tanXcap$ and finally \begin{equation}\label{Cqcvfdef} \Cqcvfinf\defeq\left\{v\in\Cqcvf:\abs{v}\text{ is $\capa$-essentially bounded} \right\}. \end{equation} Recall now that, as $\mass\ll\capa$, we have a natural projection map \begin{equation}\notag \Pr:\Lpc\rightarrow\Lpo \quad \text{defined as}\quad[f]_{\Lpc}\mapsto [f]_{\Lpo}, \end{equation} where $[f]_{\Lpc}$ (resp.\ $[f]_{\Lpo}$) denotes the $\capa$ (resp.\ $\mass$) equivalence class of $f$. It turns out that $\Pr$, restricted to the set of quasi-continuous functions, is injective. We have the following projection map $\Prbar$, given by \cite[Proposition 2.9 and Proposition 2.13]{debin2019quasicontinuous}, which plays the role of $\Pr$ on vector fields.
\begin{prop}\label{prbardef} Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space. There exists a unique linear continuous map \begin{equation}\notag \Prbar :\tanXcap\rightarrow\tanXzero \end{equation} that satisfies \begin{enumerate}[label=\roman*)] \item $\Prbar (\nablatilde f)=\nabla f$ for every $f\in\TestF(\XX)$, \item $\Prbar (g v)=\Pr(g)\Prbar(v)$ for every $g\in\Lpc$ and $v\in\tanXcap$. \end{enumerate} Moreover, for every $v\in\tanXcap$, \begin{equation}\notag \abs{\Prbar(v)}=\Pr(\abs{v})\quad\mass\text{-a.e.} \end{equation} and $\Prbar$, when restricted to the set of quasi-continuous vector fields, is injective. \end{prop} We point out that if $v\in\Cqcvf$, \cite[Proposition 2.12]{debin2019quasicontinuous} shows that $\abs{v}\in\Lpc$ is quasi-continuous, in particular, $v\in \Cqcvfinf$ if and only if $\Prbar(v)\in\tanXinf$. In what follows, with a little abuse, we will often write, for $v\in\tanXcap$, $v\in D(\dive)$ if and only if $\Prbar (v)\in D(\dive)$ and, if this is the case, $\dive\, v=\dive(\Prbar(v))$. Similar notation will be used for other operators acting on subspaces of $\tanXzero$. The following theorem describes the analogue of the map $\qcr$ (defined on functions) in the case of vector fields. \begin{thm}[{\cite[Theorem 2.14 and Proposition 2.13]{debin2019quasicontinuous}}] Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space. Then there exists a unique map $\qcrbar:\WHCSs\rightarrow\tanXcap$ such that \begin{enumerate}[label=\roman*)] \item $\qcrbar (v)\in\Cqcvf$ for every $v\in\WHCSs$, \item $\Prbar\circ{\qcrbar}(v)=v$ for every $v\in\WHCSs$. \end{enumerate} Moreover, $\qcrbar$ is linear and satisfies \begin{equation}\notag \abs{\qcrbar(v)}=\qcr(\abs{v})\quad \capa\text{-a.e.\ for every }v\in\WHCSs, \end{equation} so that $\qcrbar$ is continuous as map from $\WHCSs$ to $\tanXcap$. \end{thm} We will often omit to write the $\qcrbar$ operator for simplicity of notation. 
This should cause no ambiguity thanks to the fact that \begin{equation}\label{qcrfactorizes} \qcrbar(g v)=\qcr(g)\qcrbar(v) \quad\text{for every }g\in\HSs\cap\Lpi\text{ and } v\in\WHCSs\cap\tanXinf. \end{equation} This can be proved easily: the continuity of the map $\qcr$ implies that $\qcr(g) \qcrbar(v)$ as above is quasi-continuous, and the injectivity of the map $\Prbar$ restricted to the set of quasi-continuous vector fields yields the conclusion. Again by locality, we have that \eqref{qcrfactorizes} holds even for $g\in\Ss\cap\Lpi$. \bigskip The following theorem, taken from \cite[Section 1.3]{bru2019rectifiability}, will be crucial in the construction of modules tailored to particular measures. \begin{thm} \label{finemodule} Let $(\XX,\dist,\mass)$ be a metric measure space and let $\mu$ be a Borel measure finite on balls such that $\mu\ll\capa$. Let also $\MM$ be a $\Lpc$-normed $\Lpc$-module. Define the natural (continuous) projection \begin{equation}\notag \pi_\mu:\Lpc\rightarrow\Lp^0(\mu). \end{equation} We define an equivalence relation $\sim_\mu$ on $\MM$ as \begin{equation}\notag v\sim_\mu w \text{ if and only if } \abs{v-w}=0 \quad \mu\text{-a.e.} \end{equation} Define the quotient space $\MM_{\mu}^0\defeq{\MM}/{\sim_\mu}$ with the natural (continuous) projection \begin{equation}\notag \pibar_\mu:\MM\rightarrow\MM_{\mu}^0. \end{equation} Then $\MM_{\mu}^0$ is a $\Lp^0(\mu)$-normed $\Lp^0(\mu)$-module, with the pointwise norm and product induced by the ones of $\MM$: more precisely, for every $v\in\MM$ and $g\in\Lpc$, \begin{equation}\label{defnproj} \begin{cases} \abs{\pibar_\mu(v)}\defeq\pi_\mu(\abs{v}),\\ \pi_\mu(g)\pibar_\mu(v)\defeq\pibar_\mu( g v). \end{cases} \end{equation} If $p\in[1,\infty]$, we set \begin{equation}\notag \MM^{p}_{\mu}\defeq\left\{ v\in\MM^0_{\mu}:\abs{v}\in\Lp^p(\mu)\right\}, \end{equation} which is a $\Lp^p(\mu)$-normed $\Lp^\infty(\mu)$-module.
Moreover, if $\MM$ is a Hilbert module, also $\MM_\mu^0$ and $\MM_\mu^2$ are Hilbert modules. \end{thm} In the particular case in which $\MM=\tanXcap$ and $\mu$ is a Borel measure finite on balls such that $\mu\ll\capa$, we set \begin{equation}\notag \tanbvXp{p}{\mu}\defeq(\tanXcap)_\mu^p\quad\text{for }p\in\{0\}\cup[1,\infty]. \end{equation} In the case $\mu=\mass$ notice that considering the map $$\dot{\nabla}:\TestF(\XX)\stackrel{\nablatilde}{\longrightarrow}\tanXcap \stackrel{\pibar_\mass }{\longrightarrow}(\tanXcap)_\mass^0 $$we can show that $(\tanXcap)_\mass^0$ is isomorphic to the usual $\Lp^0$ tangent module via a map that sends $\nabla f$ to $\dot{\nabla} f$ so that we have no ambiguity of notation and, by construction, the map $\pibar_\mass$ coincides with $\Prbar$ defined in Proposition \ref{prbardef}. We define the traces \begin{alignat}{5}\notag &\tr_\mu:\HSsloc\rightarrow\Lp^0( \mu)&&\quad\text{as}\quad &&&\tr_\mu\defeq\pi_{\mu}\circ\qcr,\\\notag &\trbar_\mu:\WHCSs\rightarrow\tanbvXzero{\mu}&&\quad\text{as}\quad&&&\trbar_\mu\defeq \pibar_{\mu}\circ\qcrbar. \end{alignat} To simplify the notation, we will often omit to write the trace operators. This should cause no ambiguity because from \eqref{qcrfactorizes} and \eqref{defnproj} it follows that \begin{equation}\label{qcrfactorizes2} \trbar_\mu(g v)=\tr_\mu (g )\trbar_\mu(v) \quad\text{for every }g\in\HSsloc\cap\Lpi\text{ and } v\in\WHCSs\cap\tanXinf. \end{equation} We define \begin{equation}\notag \TestV_\mu(\XX)\defeq\trbar_\mu(\TestV(\XX))\subseteq\tanbvXp{\infty}{\mu} \end{equation} and the proof of \cite[Lemma 2.7]{bru2019rectifiability} gives what follows. \begin{lem}\label{densitytestbargen} Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space and let $\mu$ be a finite Borel measure such that $\mu\ll\capa$. Then $\TestV_\mu(\XX)$ is dense in $\tanbvXp{p}{\mu}$ for every $p\in[1,\infty)$. \end{lem} \bigskip We will also need Cartesian products of normed modules. 
Fix $n\in\NN$, $n\ge 1$ and denote by $\Vert\,\cdot\,\Vert_e$ the Euclidean norm of $\RR^n$. Given a $\Lpo$-normed $\Lpo$-module $\mathcal{N} $, we can consider its Cartesian product $\mathcal{N}^n$ and endow it with the natural module structure and with the pointwise norm $$\abs{(v_1,\dots,v_n)}\defeq\Vert (\abs{v_1},\dots,\abs{v_n})\Vert_e,$$ which is induced by a scalar product if and only if the one of $\mathcal{N} $ is, and if this is the case, we will still denote the pointwise scalar product on $\mathcal{N}^n$ by $\,\cdot\,$. Similarly, if $\mathcal N$ is an $\Lp^p$-normed module, then $\mathcal{N}^n$ has a natural structure of $\Lp^p$-normed module as well, where for $v=(v_1,\dots,v_n)\in \mathcal N^n$ we have \begin{equation} \label{eq:lpnorm} \Vert v\Vert\defeq \| \abs{v}\|_{\Lp^p(\mass)}=\big\|\Vert (\abs{v_1},\dots,\abs{v_n})\Vert_e\big\|_{\Lp^p(\mass)}. \end{equation} It is then clear that a subspace $\mathcal{N}_1$ of $\mathcal{N}$ is dense if and only if $(\mathcal{N}_1)^n$ is dense in $\mathcal{N}^n$. Similar considerations hold if $\mass$ is replaced by a Borel measure finite on balls and (with the suitable interpretation) in the case of $\Lpc$-normed $\Lpc$-modules or if we alter the integrability exponent. It is clear that if $\MM$ is a $\Lpc$-normed $\Lpc$-module and $\mu$ is a Borel measure finite on balls such that $\mu\ll\capa$, then also $$(\MM_\mu^p)^n \cong(\MM^n)_\mu^p\quad\text{for }p\in\{0\}\cup[1,\infty].$$ Finally, we adopt the natural notation $$ \mathrm{L}_\mu^p(T^{ n}\XX)\defeq \tanbvXp{p}{\mu}^{ n}.$$ \subsection{Definitions and results}\label{se:RCD} Fix now an $\RCD(K,\infty)$ space $(\XX,\dist,\mass)$ and $n\in\NN$, $n\ge 1$. In this section, we often consider local vector measures defined on $\TestV(\XX)^n$, which is endowed with the structure inherited from $\tanXinfn$.
We recall that the {space $\TestV(\XX)$, defined in \eqref{testVdef}}, slightly differs from the one usually found in the literature, as discussed right after the definition \eqref{testVdef}, but with this definition it follows that $\TestV(\XX)^n$ is a normed $\Rr$-module, for $\Rr\defeq\LIPb(\XX)$, and we are going to exploit this property throughout. Also, $\TestV(\XX)^n$ has the property \eqref{stabilityfrac}, as one may readily check using \eqref{vndsjosocn} below. Notice that, according to the conventions discussed at the end of the last section, the norm of $\TestV(\XX)^n$ is given by formula \eqref{eq:lpnorm} (in particular, in general $\|v\|\neq \|(\||v_1|\|_{\Lp^\infty},\ldots,\||v_n|\|_{\Lp^\infty})\|_e$). We wish to remark that, \begin{equation}\label{vndsjosocn} \text{if $f_1, \dots, f_n\in\HSs$ and $\varphi\in\LIP(\RR^n;\RR)$ is such that $\varphi(0)=0$, then $\varphi(f_1,\dots,f_n)\in\HSs$} \end{equation} with $$\qcr(\varphi(f_1,\dots,f_n))=\varphi(\qcr(f_1),\dots,\qcr(f_n))\quad\capa\text{-a.e.}$$ This is trivial in the case $f_1,\dots,f_n\in\HSs\cap\LIPb(\XX)$ and the general case is proved by approximation. In particular, if $v=(v_1,\dots,v_n)\in\TestV(\XX)^n$, then we have the following compatibility relation $$\qcr(\abs{v})=\Vert(\qcr(\abs{v_1}),\dots,\qcr(\abs{v_n}))\Vert_e=|{\qcrbar(v)}|\quad\capa\text{-a.e.},$$ where we define the map $\qcrbar:\TestV(\XX)^n\rightarrow (\Cqcvfinf)^n$ componentwise. We will often omit to write the maps $\qcrbar$ and $\qcr$. If $\mu$ is a Borel measure finite on balls such that $\mu\ll\capa$, we define the trace map $$\trbar:\TestV(\XX)^n\rightarrow\tanbvXpn{0}{\mu}\quad\text{as}\quad\trbar\defeq\pibar_\mu\circ\qcrbar,$$ where $\pibar_\mu$ is given by Theorem \ref{finemodule}. Similarly as for $\qcr$ and $\qcrbar$, we shall often omit to explicitly write $\trbar$ and $\tr$.
\bigskip We now give the following two crucial definitions: \begin{defn}[Polar measures] Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space and let $\nvect$ be a local vector measure defined on $\TestV(\XX)^n$. We say that $\nvect$ is polar (or that $\nvect$ is a polar vector measure) if $\abs{\nvect}\ll\capa$ and for every $A\subseteq\XX$ Borel \[ \nvect(A)(v)=0\quad\text{for every }v\in\TestV(\XX)^n\text{ such that }\abs{v}=0\ \abs{\nvect}\text{-a.e.\ on }A. \] \end{defn} Here and in what follows, we naturally endow $\tanXcapinfn$ with the $\Lpcinf$ norm of the pointwise norm. Notice that from the trivial identity \[ |fv|=|f|\,|v|\qquad \capa\text{-a.e.}\qquad \forall v\in\tanXcapinfn,\ f\in\Cb(\XX) \] it follows that $\tanXcapinfn$ is a normed $\Cb(\XX)$-module. \begin{defn}[Representable measures]\label{repmeas} Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space and let $\nvect$ be a local vector measure defined on $\tanXcapinfn$. We say that $\nvect$ is representable (or that $\nvect$ is a representable vector measure) if there exist a finite measure $\mu_\nvect\ll\capa$ and $\nu_\nvect\in\tanXcapn$ with $\abs{\nu_\nvect}=1$ $\mu_\nvect$-a.e.\ such that $\nvect=\nu_\nvect\mu_\nvect$, in the sense that \begin{equation}\label{reprmeas} \nvect(A)(v)=\int_A v\,\cdot\,\nu_\nvect \dd{\mu_\nvect}\quad\text{for every }v\in\tanXcapinfn \end{equation} for every $A\subseteq\XX$ Borel. \end{defn} An immediate difference between polar and representable vector measures is their domain of definition: the former are defined on $\TestV(\XX)^n$ whereas the latter are defined on the whole $\tanXcapinfn$. We shall see in Proposition \ref{equivrepr} that this is basically the only difference between these notions (to this aim we shall exploit the results in Section \ref{normedmodules}). \begin{rem}\label{mvmrcdrem} It is easy to show what follows.
\begin{enumerate}[label=\roman*)] \item The representation of a representable vector measure is unique, in the sense that if $\nvect=\nu_\nvect \mu_\nvect=\nu_\nvect'\mu'_\nvect$, then $\mu_\nvect=\mu'_\nvect$ and $\nu_\nvect=\nu_\nvect'$ $\mu_\nvect$-a.e. Lemma \ref{densitytestbargen} shows moreover that if two representable vector measures coincide on $\XX$ on (the trace of) $\TestV(\XX)^n$, then they are equal. \item If $\nvect=\nu_\nvect{\mu_\nvect}$ is a representable vector measure defined on $\tanXcapinfn$, $\mvect=\nu_\mvect{\mu_\mvect}$ is a representable vector measure defined on $\tanXcapinfm$ and also $f\in\Lp^\infty(\mu_\nvect)^{k\times n}$ and $g\in\Lp^\infty(\mu_\mvect)^{k\times m}$, then $f \nvect +g \mvect $ is a representable vector measure defined on $\tanXcapinfk$. Indeed, we set $G\defeq \mu_\nvect+\mu_\mvect$ and $$\omega\defeq f \nu_\nvect\dv{\mu_\nvect}{G}+g \nu_\mvect\dv{\mu_\mvect}{G},$$ so that $$f \nvect+ g \mvect=\frac{\omega}{\abs{\omega}} \abs{\omega}G.$$ On the other hand, if $f\in\Lpcinf^{k\times n}$ and $\nvect$ is as above, $$f \nvect (A)((v_1,\dots,v_k))=\nvect(A)(f^T(v_1,\dots,v_k)),$$ where $\cdot^T$ denotes the transpose operator. \item In general, we don't know whether a local vector measure whose total variation is absolutely continuous with respect to $\capa$ is polar, unless further hypotheses are satisfied (cf.\ Proposition \ref{polreprmodules}). \item We remark that, if $\abs{\nvect}\ll\capa$, then, for every $v\in\TestV(\XX)^n$, $$\nvect(A)(v)=0\quad\text{for every $A\subseteq\XX$ Borel}\text{ such that }\abs{v}=0\ \abs{\nvect}\text{-a.e.\ on }A$$ if and only if $$\nvect(A)(v)=0\quad\text{for every $A\subseteq\XX$ Borel}\text{ such that }\abs{v}=0\ \capa\text{-a.e.\ on }A.$$ Indeed, if $\abs{v}=0\ {\capa}$-a.e.\ on $A$, then $\abs{v}=0\ \abs{\nvect}$-a.e.\ on $A$, so that the first line implies the second. Conversely, assume that $\abs{v}=0\ \abs{\nvect}$-a.e.\ on $A$.
Then we can split $A=A_1\cup A_2$, where $\abs{\nvect}(A_1)=0$ and $\abs{v}=0 \ \capa$-a.e.\ on $A_2$ (just fix a Borel representative of $|v|$ and put $A_2:=\{|v|=0\}$) and therefore we conclude. \item It may seem unnatural to include the requirement that $\abs{\nvect}$ is absolutely continuous with respect to the capacity in the definition of polar vector measure. However, it makes sense, as the quasi-continuous representative of a vector field is the finest representative at our disposal.\fr \end{enumerate} \end{rem} \bigskip Here we describe the polar decomposition of a representable vector measure. This will be crucial to exploit the main result of Section \ref{normedmodules}, i.e.\ Proposition \ref{polreprmodules}, whose consequence is the link between polar and representable vector measures (see Proposition \ref{equivrepr}). \begin{prop}\label{reprtopolar} Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space and let $\nvect=\nu\mu$ be a representable vector measure. Then $\abs{\nvect}=\mu$ and $\nvect$ admits the polar decomposition $L_\nvect\abs{\nvect}$, where $$L_\nvect(x)(v)=v\,\cdot\,\nu(x)\quad\text{for }\abs{\nvect}\text{-a.e.\ $x\in\XX$}\text{ for every $v\in\TestV(\XX)^n$}.$$ \end{prop} \begin{proof} The fact that $\abs{\nvect}=\mu$ follows immediately from the fact that $\nvect$ is defined on $\tanXcapinfn$. The second assertion follows from Proposition \ref{allhaspolarabs}, taking into account the uniqueness of the Radon-Nikodym derivative. \end{proof} For the following proposition, we use the fact that representable vector measures can be seen as polar vector measures: given a representable vector measure $\nvect=\nu\mu$, we can always define a polar vector measure ${\sf I}(\nvect)$ by restriction to $\TestV(\XX)^n\subseteq\tanXcapinfn$, namely $${\sf I}(\nvect)(A)(v)\defeq\int_A v\,\cdot\,\nu\dd{\mu},$$ where, as usual, we took the trace of $v$.
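To fix ideas, we record an elementary instance of the above notions (a standard fact, stated here informally): on $\XX=\RR$ with the Euclidean distance and the Lebesgue measure, and with $n=1$, singletons have positive capacity, so the Dirac mass $\delta_0$ is admissible as $\mu_\nvect$. Setting $\mu_\nvect\defeq\delta_0$ and $\nu_\nvect\defeq 1$ yields the representable vector measure
\[
\nvect(A)(v)=\int_A v\,\cdot\,\nu_\nvect\dd{\mu_\nvect}=
\begin{cases}
v(0)&\text{if }0\in A,\\
0&\text{otherwise,}
\end{cases}
\]
where the pointwise value $v(0)$ is meaningful because quasi-continuous representatives are defined up to $\capa$-null sets and $\capa(\{0\})>0$. Notice that $\abs{\nvect}=\delta_0$ is singular with respect to $\mass$ but absolutely continuous with respect to $\capa$, as required.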
\begin{prop}\label{equivrepr} Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space and consider the Banach spaces \begin{align} {\sf Rep}_n(\XX)&\defeq\left(\left\{\text{representable vector measures defined on $\tanXcapinfn$}\right\},\abs{\,\cdot\,}(\XX)\right)\notag\\ {\sf Pol}_n(\XX)&\defeq\left(\left\{\text{polar vector measures defined on $\TestV(\XX)^n$}\right\},\abs{\,\cdot\,}(\XX)\right)\notag. \end{align} Then the natural inclusion map \begin{align}\notag \sf{I}: {\sf Rep}_n(\XX)\rightarrow {\sf Pol}_n(\XX) \end{align} is a bijective isometry. \end{prop} \begin{proof} Thanks to Proposition \ref{vectvalbanach}, recalling \eqref{deflimit}, we easily see that ${\sf Pol}_n(\XX)$ is indeed a Banach space, and the fact that ${\sf Rep}_n(\XX)$ is also a Banach space will follow from the fact that $\sf I$ is an isometry. Notice that $\sf I$ is clearly linear. \\\textsc{Step 1}. We prove that $\sf I$ is surjective. Take then a polar vector measure $\nvect$ and notice that by restriction it induces a local vector measure, still denoted $\nvect$, on $\TestV(\XX)^n$. We are going to apply Proposition \ref{polreprmodules} with $|\nvect|$ in place of $\mass$, the (traces of) elements of $\TestV(\XX)^n$ in $\Lp^\infty(|\nvect|)$ in place of $\WW$ (recall that $\TestV(\XX)^n$ is a normed $\LIPb(\XX)$-module) and $\Lp^\infty_{|\nvect|}(T^n\XX)$ in place of $\VV$. The required density comes from Lemma \ref{densitytestbargen}. Taking also into account the Riesz theorem for Hilbert modules (\cite[Theorem 1.2.24]{Gigli14}), we thus find $\nu\in {\rm L}_{\abs{\nvect}}^2(T^n\XX)$ such that $\abs{\nu}=1\ \abs{\nvect}$-a.e.\ and \[ L_\nvect(v)=\nu\,\cdot\, v\qquad\abs{\nvect}\text{-a.e.}\qquad\forall v\in \TestV(\XX)^n.
\] It is then clear that formula \eqref{reprmeas} with $|\nvect|$ and $\nu$ in place of $\mu_\nvect,\nu_\nvect$ defines a representable vector measure whose image via ${\sf I}$ is precisely $\nvect$ (to be more precise, in Definition \ref{repmeas} we require $\nu$ to be in $\tanXcapn$ and then use its trace in formula \eqref{reprmeas}: this obviously makes no difference with what we have done, since, by the definition given in Theorem \ref{finemodule}, elements of $ {\rm L}_{\abs{\nvect}}^2(T^n\XX)$ are defined as traces of elements in $\tanXcapn$). \\\textsc{Step 2}. We prove that ${\sf I}$ is an isometry. Take then a representable vector measure $\nu\mu$ and let $\nvect\defeq{\sf I}(\nu\mu)$. If $A\subseteq\XX$ is Borel and $v\in\TestV(\XX)^n$, we can compute $$\abs{\nvect(A)(v)}=\abs{\int_A v\,\cdot\,\nu\dd{\mu}}\le \int_A \abs{v}\abs{\nu}\dd{\mu}\le \Vert v\Vert{}\mu(A)$$ and this shows that $\abs{\nvect}\le \mu$. Conversely, by Lemma \ref{densitytestbargen}, take $\{v_k\}_k\subseteq\TestV(\XX)^n$ such that $v_k\rightarrow \nu$ in $\tanbvXn{\mu}$. Set $$w_k\defeq\frac{1}{1\vee \abs{v_k}} {v_k}$$ and notice that $w_k\in\TestV(\XX)^n$, $\abs{w_k}\le 1\ \mass$-a.e.\ for every $k$ and $w_k\rightarrow \nu$ in $\tanbvXn{\mu}$. We can compute, by dominated convergence, recalling \eqref{qcrfactorizes2}, $$ \abs{\nvect}(\XX)\ge \nvect(\XX)(w_k)=\int_\XX w_k\,\cdot\,\nu\dd{\mu}\rightarrow\int_\XX\dd{\mu},$$ so that $\abs{\nvect}(\XX)\ge \mu(\XX)$. Then, as we have already shown that $\abs{\nvect}\le \mu$, we have $\abs{\nvect}=\mu$. \end{proof} The following theorem builds upon the theory of quasi-continuous functions to improve the conclusion of Theorem \ref{weakder}, under a mild additional assumption. Namely, it allows us to prove that, under an additional tightness condition (see \eqref{tightplus}), the (unique) local vector measure given by Theorem \ref{weakder} is polar.
The additional tightness condition just mentioned turns out to be rather manageable in practice, especially when one deals with differential objects. \begin{thm}\label{improvetopolar} Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space and let $F\in (\TestV(\XX)^n)'$ be tight. Assume that $F$ satisfies \begin{equation}\label{tightplus} \begin{split} \text{for every sequence }&\text{$\{f_k\}_k\subseteq\HSs$ equibounded in $\Lpi$ with $f_k\rightarrow 0$ in $\HSs$,}\\&\text{it holds that $F(f_k v)\rightarrow 0$ for every $v\in\TestV(\XX)^n$. } \end{split} \end{equation} Then there exists a unique \textbf{polar} vector measure $\nvect_F$ defined on $\TestV(\XX)^n$ such that $$ \nvect_F(\XX)(v)=F(v)\quad\text{for every }v\in\TestV(\XX)^n. $$ Moreover, it holds that $\abs{\nvect_F}=\mu$, where $\mu$ is the finite Borel measure given by Lemma \ref{bbtightiffmeas}. \end{thm} \begin{proof} We call $\nvect$ the unique local vector measure given by Theorem \ref{weakder}. Since $F$ satisfies \eqref{tightplus}, we just have to show that $\nvect$ is polar. \\\textsc{Step 1}. We claim that if $\{f_k\}_k$ is as in \eqref{tightplus} and $B\subseteq\XX$ is Borel, then it holds \begin{equation}\notag \nvect(B)(f_k v)\rightarrow 0\quad\text{ for every $v\in\TestV(\XX)^n$. } \end{equation} Indeed, let $A\subseteq\XX$ be open, let $K\subseteq A\cap B$ be compact and then take $\psi\in\LIPbs(\XX)$, with values in $[0,1]$, identically $1$ on a neighbourhood of $K$ and with support contained in $A$.
Then $\{\psi f_k\}_k$ is still as in \eqref{tightplus}, and by weak locality we have \begin{align*} \abs{\nvect(B)( f_kv)}&\le\abs{\nvect(A)(f_k\psi v)}+\big(\abs{\nvect}(B\setminus K)+\abs{\nvect}(A\setminus K)\big)\Vert f_k\Vert_{\Lpi}\Vert v\Vert \\ &= |F(f_k\psi v)|+\big(\abs{\nvect}(B\setminus K)+\abs{\nvect}(A\setminus K)\big)\Vert f_k\Vert_{\Lpi}\Vert v\Vert \end{align*} for any $k\in\NN$, so the conclusion follows by first letting $k\to\infty$ and then using the arbitrariness of $A,K$ in conjunction with the regularity of $|\nvect|$. \\\textsc{Step 2}. We claim that $\nvect\ll\capa$. By regularity of $\abs{\nvect}$, we just have to show that if $K$ is a compact set with $\capa(K)=0$, then $\abs{\nvect}(K)=0$. By \eqref{conicidence}, we conclude if we show that $\nvect(K)(v)=0$ for any $v\in\TestV(\XX)^n$. As $\capa(K)=0$, we can find a sequence $\{f_k\}_k$ as in \eqref{tightplus} such that $f_k(x)= 1$ for every $x$ in a neighbourhood of $K$. Thus by weak locality we have $\nvect(K)(v)=\nvect(K)(f_kv)$ for every $k\in\NN$ and then the conclusion follows from Step 1. \\\textsc{Step 3}. We now show that $\nvect$ is polar. Taking into account item $iv)$ of Remark \ref{mvmrcdrem} and the regularity of $\abs{\nvect}$, it is sufficient to show that if $v\in\TestV(\XX)^n$ and $K$ is a compact set such that $\abs{v}=0\ \capa$-a.e.\ on $K$, then $\nvect(K)(v)=0$. Fix then $v\in\TestV(\XX)^n$; we can assume with no loss of generality that $\Vert v\Vert=1$. Let now $\varepsilon>0$. By the quasi-continuity of $\abs{v}$, we can find an open set $A_\varepsilon$ such that $\abs{v}$ is continuous on $\XX\setminus A_\varepsilon$ and $\capa(A_\varepsilon)<\varepsilon$. We now fix a continuous version of $\abs{v}$ on $\XX\setminus A_\varepsilon$. Also, as $\abs{v}=0\ \capa$-a.e.\ on $K$, we can assume, up to slightly enlarging $A_\varepsilon$, that $\abs{v}(x)=0$ for every $x\in K\setminus A_\varepsilon $ (still $\capa(A_\varepsilon)<\varepsilon$).
Then, $\abs{v}< \varepsilon$ on $B_\varepsilon\setminus A_\varepsilon$, where $B_\varepsilon$ is a suitable open subset of $\XX\setminus A_\varepsilon$ containing $K\setminus A_\varepsilon$. Let now $f_\varepsilon\in\HSs$ be such that $f_\varepsilon(x)=1$ for every $x$ in $A_\varepsilon$, $f_\varepsilon(x)\in[0,1]$ for every $x\in\XX$ and $\Vert f_\varepsilon\Vert_{\HSs}<\varepsilon$. Now, by construction, $\abs{(1-f_\varepsilon) v}(x)<\varepsilon$ for every $x\in B_\varepsilon\cup A_\varepsilon$, which is an open set in $\XX$ containing $K$. We can thus compute $$ \abs{\nvect(K)(v)}\le\abs{ \nvect(K)( f_\varepsilon v)}+\abs{\nvect(K)(( 1-f_\varepsilon) v)}\le \abs{ \nvect(K)( f_\varepsilon v)}+\varepsilon\Vert v\Vert\abs{\nvect}(K) .$$ Notice now that $ \nvect(K)( f_\varepsilon v)\rightarrow 0$ as $\varepsilon\searrow 0$ by Step 1. The conclusion follows. \end{proof} \begin{cor}\label{corrcd} Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space and consider the Banach spaces \begin{align} {\sf Pol}_n(\XX)&\defeq\left(\left\{\text{polar vector measures defined on $\TestV(\XX)^n$}\right\},\abs{\,\cdot\,}(\XX)\right),\notag\\ {\sf Tig}_n(\XX)&\defeq\left(\left\{F\in(\TestV(\XX)^n)': \text{$F$ is tight and $F$ satisfies \eqref{tightplus}}\right\},\Vert\,\cdot\,\Vert'\right).\notag \end{align} Then the map \[ {\sf Pol}_n(\XX)\rightarrow {\sf Tig}_n(\XX)\quad\text{defined as}\quad \nvect\mapsto \nvect(\XX) \] is a bijective isometry. \end{cor} \begin{proof} Taking into account Corollary \ref{Rieszcor} and Theorem \ref{improvetopolar}, it is enough to show that for every polar vector measure $\nvect$, $F\defeq\nvect(\XX)$ satisfies \eqref{tightplus}. Let then $\nvect$ be a polar vector measure. Then we can use Proposition \ref{equivrepr} to represent $\nvect$ as $\nu_\nvect\mu_\nvect$ and hence compute, if $\{f_k\}_k$ is as in \eqref{tightplus} and $v\in\TestV(\XX)^n$, $$\nvect(\XX)(f_k v)=\int_\XX f_k v\,\cdot\,\nu_\nvect\dd{\mu_\nvect}.
$$ Now, we recall that if $\{f_k\}_k\subseteq\HSs$ is as above, up to taking a (non relabelled) subsequence, \cite[Theorem 1.20, Proposition 1.12 and Proposition 1.17]{debin2019quasicontinuous} show that the quasi-continuous representatives of $f_k$ converge to $0$ $\capa$-a.e. The claim then follows by standard arguments. \end{proof} \subsection{An example: improved results for the differential of BV functions}\label{sectBVRCD} In the previous section we developed the theory to deal with polar/representable vector measures and to recognize the local vector measures with this particularly nice behaviour. We give an application of this abstract theory: working on $\RCD(K,\infty)$ spaces, we are able to improve the description of the local vector measure giving the distributional differential of a $\BV$ function studied in Section \ref{sectBV}. This amounts to improving weak locality to `strong locality' (i.e.\ being polar) and hence it gives us the framework to state finer calculus rules. First, we recall the simple \cite[Remark 2.2]{BGBV}, based on the coarea formula, which allows us to give a meaning to the integrals $\int_\XX f\dive\, v\dd\mass$ even though $\dive\,v\notin\Lpi$. \begin{rem}\label{interpretationint} If $f\in\BVv$, $v\in D(\dive)\cap\Lpi$ and $\{n_k\}_k\subseteq(0,\infty)$, $\{m_k\}_k\subseteq(0,\infty)$ are two sequences with $\lim_k n_k=\lim_k m_k=+\infty$, then the limit \begin{equation}\label{defint} \lim_k \int_\XX(f\vee -m_k)\wedge n_k\dive\,v\dd{\mass} \end{equation} exists, is finite and does not depend on the particular choice of the sequences $\{n_k\}_k$ and $\{m_k\}_k$. Therefore, if $f\in\BVv$ and $v\in D(\dive)\cap\Lpi$, we can write $$\int_\XX f\dive\, v\dd{\mass}$$ with the convention that it has to be interpreted as the limit in \eqref{defint}.\fr \end{rem} For what follows, see \cite{BGBV} and the references therein. \begin{defn} Let $(\XX,\dist,\mass)$ be a metric measure space and $F\in\BVv^n$.
We define, for any $A$ open subset of $\XX$, \begin{equation}\label{defntvvector} \abs{\DIFF F}(A)\defeq \inf \left\{\liminf_k \int_A \Vert (\lip(F_{i,k}))_{i=1,\dots,n}\Vert_e\dd{\mass}\right\} \end{equation} where the infimum is taken among all sequences $\{F_{i,k}\}_k\subseteq\LIPloc(A)$ such that $F_{i,k}\rightarrow F_i$ in $\Lp^1(A,\mass)$ for every $i=1,\dots,n$. \end{defn} \begin{prop} Let $(\XX,\dist,\mass)$ be a metric measure space and $F\in\BVv^n$. Then $ \abs{\DIFF F}(\,\cdot\,)$ as defined in \eqref{defntvvector} is the restriction to open sets of a finite nonnegative Borel measure that we call the total variation of $F$ and still denote with the same symbol. \end{prop} In view of the following proposition, recall that the interpretation of the integral in \eqref{intbypartsvet} is given by Remark \ref{interpretationint}. \begin{prop}\label{reprvett1} Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space and $F\in\BVv^n$. Then, for every $A$ open subset of $\XX$, it holds that \begin{equation} \label{intbypartsvet} \abs{\DIFF F}(A)=\sup\left\{\sum_{i=1}^n\int_A F_i \dive\, v_i\dd{\mass}\right\}, \end{equation} where the supremum is taken among all $v=(v_1,\dots,v_n)\in\mathcal{W}_A^n$, where \[ \begin{split} \mathcal{{W}}_A^n\defeq\Big\{v=(v_1,\dots,v_n)\in\TestV(\XX)^n:\abs{v}\le 1\ \mass\text{-a.e.\ }\text{and } \supp \abs{v}\subseteq A\Big\}. \end{split} \] \end{prop} In Section \ref{sectBV} we built a local vector measure describing the distributional differential of a function of bounded variation. We now improve the result, in the framework of $\RCD(K,\infty)$ spaces. Indeed, here we show that we can treat vector-valued functions of bounded variation and also that we have a more powerful description of the local vector measure describing the weak derivative, as it turns out to be representable.
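For orientation, we recall the classical Euclidean picture that the results below generalize (this example is not needed in the sequel): if $\XX=\RR^d$ is endowed with the Euclidean distance and the Lebesgue measure, $n=1$ and $F=\chi_E$ for a set $E$ of finite perimeter, then De Giorgi's structure theorem provides the polar decomposition
\[
\DIFF F=\nu_E\,\abs{\DIFF F},\qquad \abs{\DIFF F}=\mathcal{H}^{d-1}\mres\partial^*E,
\]
where $\partial^*E$ is the reduced boundary of $E$ and $\nu_E$ is the measure-theoretic inner unit normal. The representability results of this section extend exactly this kind of decomposition to the $\RCD$ setting.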
With a slight abuse, we will denote the distributional differential of $F$ on an $\RCD(K,\infty)$ space by $\DIFF F$, even though the same notation has been used in Section \ref{sectBV} for the distributional differential on general metric measure spaces. As in this section we will work only on $\RCD(K,\infty)$ spaces, this should cause no confusion. Also, we justify again the notation $\DIFF F$ as we show that the total variation of the local vector measure $\DIFF F$ is (by construction) equal to the total variation of the $\BV$ function $F$. In view of the following theorem, recall that the interpretation of the integral in \eqref{difffloccaompeq2} is given by Remark \ref{interpretationint}. Recall also \eqref{Cqcvfdef}. \begin{thm}\label{weakder2} Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space and let $F\in\BVv^n$. Then there exists a unique representable vector measure $\DIFF F$ (hence defined on $\tanXcapinfn$) such that \begin{equation}\label{difffloccaompeq2} \sum_{i=1}^n\int_\XX F_i\dive\, v_i\dd\mass=-v\,\cdot\,\DIFF F(\XX)\quad\text{for every }v=(v_1,\dots,v_n)\in (\Cqcvfinf\cap D(\dive))^n. \end{equation} \end{thm} \begin{proof} Notice first that such a measure, if it exists, is unique, being representable (by $\rm i)$ of Remark \ref{mvmrcdrem}). We start with the case $F_i\in\Lpi$ for every $i=1,\dots,n$. Define $\mathcal{F}:\TestV(\XX)^n\rightarrow\RR$ as $$\mathcal{F}(v)\defeq-\sum_{i=1}^n\int_\XX F_i\dive\,v_i\dd{\mass}.$$ Notice now that from Proposition \ref{reprvett1} it follows that $$\sup \left\{\FF(v):v\in\TestV(\XX)^n,\ \Vert v\Vert \le 1, \supp v\subseteq A\right\} =\abs{\DIFF F}(A).$$ Now we want to argue as in the proof of Theorem \ref{difffloccaomp}, building upon Theorem \ref{improvetopolar} instead of Theorem \ref{weakder}. Take then $\{f_k\}_k$ as in \eqref{tightplus}.
We compute, if $v\in\TestV(\XX)^n$, $$\mathcal F(f_k v)=-\sum_{i=1}^n\int_\XX F_i \dive(f_k v_i)\dd\mass=-\sum_{i=1}^n\int_\XX F_i f_k\dive v_i\dd\mass-\sum_{i=1}^n\int_\XX F_i \nabla f_k\,\cdot\,v_i\dd\mass$$ and notice that the right-hand side converges to $0$ by the assumption $f_k\rightarrow 0$ in $\HSs$ and $v_i\in\TestV(\XX)$. We therefore obtain a polar vector measure $\DIFF F$ that satisfies \eqref{difffloccaompeq2} for $v\in\TestV(\XX)^n$ and whose total variation coincides with $\abs{\DIFF F}$. Then, by Proposition \ref{equivrepr}, $\DIFF F$ induces a unique representable vector measure (that we still call $\DIFF F$) defined on $\tanXcapinfn$, which still has total variation $\abs{\DIFF F}$ and still satisfies \eqref{difffloccaompeq2} for $v\in\TestV(\XX)^n$. By \cite[Lemma 3.2]{BGBV}, \eqref{difffloccaompeq2} holds for every $v\in (\WHHSs\cap\tanXinf)^n$. Then the very same argument as in \cite[Theorem 3.13]{BGBV} shows that \eqref{difffloccaompeq2} holds for every $v\in (\Cqcvfinf\cap D(\dive))^n.$ In the general case, we can define $F^m\in(\BVv\cap\Lpi)^n$ as $F^m_i\defeq (F_i\vee -m)\wedge m$ and therefore consider the sequence of polar vector measures $\{\DIFF F^m\}_m$ given by the paragraphs above. By uniqueness we have that $\DIFF F^l-\DIFF F^m=\DIFF (F^l-F^m)$ and by \eqref{coareaeqdiff}, $$|\DIFF (F^m-F)|(\XX)\stackrel{\eqref{intbypartsvet}}{\le}\sum_{i=1}^n|{\DIFF (F_i^m-F_i)}|(\XX)\rightarrow 0\quad\text{as }m\rightarrow\infty.$$ We therefore have that $\{\DIFF F^m\}_m$ is a Cauchy sequence that, thanks to Proposition \ref{vectvalbanach}, converges to a representable vector measure whose total variation is $\abs{\DIFF F}$ (see also Proposition \ref{equivrepr}). Also, taking into account \eqref{deflimit} and Remark \ref{interpretationint}, ${\DIFF F}$ still satisfies \eqref{difffloccaompeq2}.
\end{proof} Notice that $\DIFF F=\nu_F\abs{\DIFF F}$, characterized by \eqref{difffloccaompeq2}, is coherent with the notions developed in \cite{bru2019rectifiability,BGBV}. In particular, if $f=\chi_E$, where $E$ is a set of finite perimeter and finite mass, we obtain the Gauss-Green integration by parts formula stated in \cite{bru2019rectifiability}. This motivates the following definition. \begin{defn} Let $(\XX,\dist,\mass)$ be an $\RCD(K,\infty)$ space and $F\in\BVv^n$. We call the local vector measure $\DIFF F$ given by Theorem \ref{weakder2} the distributional derivative of $F$. \end{defn} \medskip We state now the Leibniz rule for bounded functions of bounded variation, which is \cite[Proposition 3.35]{BGBV}, and we encourage the reader to compare it with Proposition \ref{leibnizcontprop}. This calculus rule has been, in \cite{BGBV}, the building block to prove the chain rule for vector-valued functions of bounded variation, recalled in Theorem \ref{volprop} below. We recall that for a $\mass$-measurable function $f:\XX\rightarrow\RR$ it is customary to define \begin{equation}\label{veeandwedge} \begin{alignedat}{3} f^{\wedge}(x)&\defeq \apliminf_{y\rightarrow x} f(y)&&\defeq\sup&&\left\{t\in\bar{\RR}: \lim_{r\searrow 0} \frac{\mass(B_r(x)\cap\{f<t\})}{\mass(B_r(x))}=0\right\}, \\ f^{\vee}(x)&\defeq \aplimsup_{y\rightarrow x} f(y)&&\defeq\inf&&\left\{t\in\bar{\RR}:\lim_{r\searrow 0} \frac{\mass(B_r(x)\cap\{f>t\})}{\mass(B_r(x))}=0\right\} \end{alignedat} \end{equation} and finally $$\bar f\defeq\frac{f^\vee+f^\wedge}{2},$$ with the convention that $+\infty-\infty=0$. \begin{prop}[Leibniz rule]\label{leibnizrcd} Let $(\XX,\dist,\mass)$ be an $\RCD(K,N)$ space and let $f,g\in\BVv\cap\Lpi$. Then $f g \in\BVv$ and \begin{equation}\notag \DIFF (f g) =\bar{f}\DIFF g+\bar{g}\DIFF f. \end{equation} In particular, $\abs{\DIFF (f g)}\le \abs{\bar{f}}\abs{\DIFF g}+ \abs{\bar{g}}\abs{\DIFF f}$.
\end{prop} To state Theorem \ref{volprop}, which is a restatement of \cite[Theorem 3.38]{BGBV} in the language of local vector measures, we recall Definition \ref{intmatrix}. We also need the following proposition, extracted from \cite{BGBV}, to define the functions $F^l$ and $F^r$. \begin{prop}\label{normalvectval} Let $(\XX,\dist,\mass)$ be an $\RCD(K,N)$ space and $F\in\BVv^n$. Then there exists a pair of $\abs{\DIFF F}$-measurable functions $F^l, F^r: \XX\rightarrow\RR^n$ such that for $\abs{\DIFF F}\text{-a.e.\ }x$ the following holds. Either $F^l(x)=F^r(x)$ and then $$ \lim_{r\searrow 0}\dashint_{B_r(x)} \abs{F- F^r(x)}\dd{\mass}=0, $$ or there exists a Borel set $E\subseteq\XX$ with $$ \lim_{r\searrow 0} \frac{\mass(E\cap B_r(x))}{\mass(B_r(x))}=\frac{1}{2} $$ such that \begin{equation}\notag \lim_{r\searrow 0}\dashint_{B_r(x)\cap E} \abs{F- F^r(x)}\dd{\mass}=\lim_{r\searrow 0}\dashint_{B_r(x)\cap(\XX\setminus E)} |F- F^l(x)|\dd{\mass}=0. \end{equation} If $\tilde{F}^l, \tilde{F}^r: \XX\rightarrow\RR^n$ is another pair as above, then for $\abs{\DIFF F}\text{-a.e.\ }x$ either $(\tilde{F}^l(x), \tilde{F}^r(x))=({F}^l(x), {F}^r(x))$ or $(\tilde{F}^l(x), \tilde{F}^r(x))=({F}^r(x), {F}^l(x))$. \end{prop} We state now the following result about the chain rule in the $\BV$ setting. Notice that, requiring that the space is $\RCD(K,N)$, we can considerably improve what is stated in Proposition \ref{propchaincont}: not only do we treat vector-valued functions, but we also drop the continuity assumption on the $\BV$ function. \begin{thm}[Chain rule for vector-valued functions]\label{volprop} Let $(\XX,\dist,\mass)$ be an $\RCD(K,N)$ space and $F\in\BVv^n$. Let $\phi\in C^1(\RR^n;\RR^m)\cap\LIP(\RR^n;\RR^m)$ for some $m\in\NN$, $m\ge 1$, such that $\phi(0)=0$. Then $\phi\circ F\in\BVv^m$ and \[ \DIFF(\phi\circ F)=\left(\int_0^1 \nabla\phi(t F^r+(1-t) F^l)\dd{t}\right)\DIFF F, \] where $F^l,F^r$ are given by Proposition \ref{normalvectval}.
\end{thm} \section*{Appendix} \setcounter{equation}{0} \renewcommand\theequation{A.\arabic{equation}} In this appendix we show how, using Doob's martingale convergence theorem, we can construct Borel representatives of functions in $\Lp^1(\mass)$ that `linearly' depend on the given function. Notice that no choice principle other than Countable Dependent Choice is needed in the proof, so our construction differs from similar ones based on the concept of von Neumann lifting; the price that we pay for this is that the Borel representatives are defined only on subsets of full measure. Since we will need to distinguish between functions and representatives, for $f$ Borel and integrable, we denote by $[f]$ its equivalence class in the Lebesgue space $\Lpu$. Also, $\Lpuloc$ denotes the space of measurable functions $f$ such that for every $x\in\XX$, there exists a neighbourhood of $x$, $B_x$, with $f\in \Lp^1(\mass\mres B_x)$. As before, we denote by $[f]$ the equivalence class of $f$, for $f\in\Lpuloc$ Borel. \medskip We recall that a measure space is a triplet $(\XX,\FF,\mass)$ where $\XX$ is a set, $\FF$ is a $\sigma$-algebra and $\mass$ is a measure defined on $\FF$. We say that the measure space is separable if there exists a countable collection $\{A_n\}_{n\in\NN}\subseteq\FF$ such that for every $B\in\FF$ with $\mass(B)<\infty$ we can find a sequence $\{B_n\}_n\subseteq\{A_n\}_n$ with $\mass(B_n\Delta B)\rightarrow 0$. It is easy to see that for a measure space $(\XX,\FF,\mass)$ the following assertions are equivalent: \begin{itemize}[label=$\bullet$] \item $(\XX,\FF,\mass)$ is separable, say $\{A_n\}_n$ is a countable dense subset of $\FF$, \item $\Lp^p(\mass)$ is separable for \emph{some} $p\in[1,\infty)$, \item $\Lp^p(\mass)$ is separable for \emph{every} $p\in[1,\infty)$.
\end{itemize} Moreover, if one (hence all) of the items above is satisfied, a countable dense subset of $\Lp^p(\mass)$, for $p\in[1,\infty)$, can be obtained considering the linear span over $\mathbb Q$ of $\{\chi_{A_n}\}_n$, for $\{A_n\}_n$ as above. \begin{thma}[`Linear' choice of measurable representatives for measure spaces]\label{ultimo} Let $(\XX,\FF,\mass)$ be a separable measure space with $\mass$ $\sigma$-finite. Then there exist two maps\newline $\leb:\Lpu\rightarrow\FF$ and $\FF\mathrm{Rep}:\Lpu\rightarrow\left\{\text{$\FF$-measurable real valued maps on $\XX$}\right\}$ such that \begin{enumerate}[label=\roman*)] \item $\mass(\XX\setminus\leb([f]))=0$ for every $[f]\in\Lpu$, \item for every $[f]\in\Lpu$ and $f'\in[f]$, we have $f'=\FF\mathrm{Rep}([f])\ \mass$-a.e. \item for every $[f], [g]\in\Lpu$ and $\alpha,\beta\in\RR$, we have $$\leb([f])\cap \leb([g])\subseteq\leb([\alpha f+\beta g])$$ and for every $x\in \leb([f])\cap \leb([g])$, it holds $$\alpha\FF\mathrm{Rep}([f])(x)+\beta\FF\mathrm{Rep}([g])(x)=\FF\mathrm{Rep}([\alpha f+\beta g])(x).$$ \end{enumerate} \end{thma} \begin{proof} By a gluing argument, we can clearly assume that $\mass$ is finite. Let $\{A_n\}_n$ denote the countable dense subset of $\FF$. We take a sequence of finite $\FF$-measurable partitions of $\XX$, $\{{\mathcal{E}}^k\}_{k\in\NN}$, where ${\mathcal{E}}^k=\{{\mathcal{E}}^k_l\}_{l=1,\dots,n(k)}\subseteq\FF$, with the following properties: \begin{itemize}[label=\roman*)] \item[$a)$] ${\mathcal{E}}^{k+1}$ is a refinement of ${\mathcal{E}}^k$, in the sense that for every $l$, ${\mathcal{E}}^{k+1}_l\subseteq {\mathcal{E}}^k_m$ for some $m=m(l)$, \item[$b)$] for every $n$, there exists $k=k(n)$ such that $A_n$ can be written as a union of sets in ${\mathcal{E}}^{k}$.
\end{itemize} We build such a sequence as follows: first, let $\FF_k$ denote the $\sigma$-algebra generated by $\{A_1,\dots,A_k\}$ and then let $\mathcal E^k$ be the finest partition of $\XX$ whose sets belong to $\FF_k$. We then define a sequence of linear maps $\{P_k\}_{k\in\NN}$ $$P_k:\Lpu\rightarrow\left\{\text{$\FF$-measurable real valued maps on $\XX$}\right\}$$ as follows: \begin{equation}\label{defPk} P_k( [f])(x) \defeq \begin{cases} \dashint_{{\mathcal{E}}^k_l}f\dd{\mass}\quad&\text{if }x\in {\mathcal{E}}^k_l \text{ and }\mass( {\mathcal{E}}^k_l)>0,\\ 0&\text{otherwise}. \end{cases} \end{equation} Notice that for every $k$, $P_k:\Lpu\rightarrow\Lpu$ is $1$-Lipschitz, in particular $\Vert P_k([f])\Vert_{\Lpu}\le \Vert f \Vert_{\Lpu}$. We can easily check that the discrete stochastic process $\{P_k([f])\}_k$ is a martingale with respect to the filtration $\{\FF_k\}_k$. To this aim we use property $a)$ in the construction of $\{\mathcal E^k\}_k$. Therefore, by \cite[Theorem 2.2]{revyor}, $P_k([f])$ converges $\mass$-a.e.\ to a finite limit. We define then $$\leb([f])\defeq \left\{x: \lim_k P_k([f])(x)\text{ exists finite}\right\}$$ and then the $\FF$-measurable function $$ \FF\mathrm{Rep}([f])(x)\defeq \begin{cases} \lim_k P_k([f])(x)\quad&\text{if }x\in\leb([f]),\\ 0&\text{otherwise}. \end{cases} $$ We notice now that property $i)$ of the statement is trivially satisfied, while $iii)$ follows from the linearity of the integral. We only need to show property $ii)$ of the statement. First note that by requirement $b)$ in the construction of $\{\mathcal{E}^k\}_k$, property $ii)$ holds true if $f$ belongs to the span over $\mathbb Q$ of $\{\chi_{A_n}\}_n$. Indeed, in that case, $P_k([f])=f$ eventually. Notice now that as $\FF\mathrm{Rep}$ is defined as the pointwise limit of $P_k([\,\cdot\,])$ and the maps $P_k:\Lpu\rightarrow\Lpu$ are $1$-Lipschitz, it follows from Fatou's lemma (and a slight abuse of notation) that also $\FF\mathrm{Rep}:\Lpu\rightarrow\Lpu$ is $1$-Lipschitz.
Then the conclusion follows by density. \end{proof} \begin{cora}[`Linear' choice of Borel representatives for Polish spaces]\label{thmapp} Let $(\XX,\tau)$ be a Polish space and let $\mass$ be a $\sigma$-finite Borel measure on $\XX$. Then there exist two maps\newline $\leb:\Lpuloc\rightarrow\mathcal{B}(\XX)$ and $\borrep:\Lpuloc\rightarrow\left\{\text{Borel real valued maps on $\XX$}\right\}$ such that \begin{enumerate}[label=\roman*)] \item $\mass(\XX\setminus\leb([f]))=0$ for every $[f]\in\Lpuloc$, \item for every $[f]\in\Lpuloc$ and $f'\in[f]$, we have $f'=\borrep([f])\ \mass$-a.e. \item for every $[f], [g]\in\Lpuloc$ and $\alpha,\beta\in\RR$, we have $$\leb([f])\cap \leb([g])\subseteq\leb([\alpha f+\beta g])$$ and for every $x\in \leb([f])\cap \leb([g])$, it holds $$\alpha\borrep([f])(x)+\beta\borrep([g])(x)=\borrep([\alpha f+\beta g])(x),$$ \item for every $[f]\in \Lpuloc$, it holds \begin{equation}\label{buoncontrollo} |\borrep([f])(x)|\le\inf_{r>0} \Vert f\Vert_{\Lp^\infty(\mass\mressmall B_r(x))} \qquad\text{for every }x\in \leb([f]). \end{equation} \end{enumerate} \end{cora} \begin{proof} Fix a complete and separable distance $\dist$ on $\XX$ inducing the topology $\tau$. Notice first that, up to a $\mass$-negligible Borel set, $\XX=\bigcup_n K_n$, where $\{K_n\}_n$ is an increasing sequence of compact sets such that $\mass\mres K_n$ is a finite measure for every $n$. With a gluing argument, we see that it is enough to prove the theorem for the compact metric measure space $(K_n,\dist,\mass\mres K_n)$. In particular, $\Lp^1(\mass\mres K_n)=\Lploc^1(\mass\mres K_n)$. Notice also that by basic measure theory we can check that $(K_n,\mathcal B(K_n),\mass\mres K_n)$ is a separable measure space, where, as usual, $\mathcal B$ denotes the Borel $\sigma$-algebra. Now we apply Theorem \ref{ultimo}. It remains to show \eqref{buoncontrollo}.
To this aim, we have to slightly modify the partitions $\mathcal E^{k}$ used to prove Theorem \ref{ultimo} in order to ensure that $$ \lim_{k\rightarrow \infty} \sup_l\diam(\mathcal E_l^k)=0.$$ This can be easily done: using the notation of the proof of Theorem \ref{ultimo}, for $k\ge 2$ we just have to redefine $\FF_k$ as the $\sigma$-algebra generated by $\FF_{k-1}$, $\{A_1,\dots,A_k\}$ and a finite covering of $K_n$ by sets with diameter smaller than $k^{-1}$, and we leave $\mathcal F_1$ unchanged. \end{proof} \bibliographystyle{alpha} \bibliography{Biblio11} \end{document}
2206.14783v1
http://arxiv.org/abs/2206.14783v1
Homotopy groups of spectra and $p$-adic $L$-functions over totally real number fields
\documentclass[a4paper,9pt,twoside]{amsart} \usepackage[margin=10em]{geometry} \usepackage{amssymb} \usepackage[hyphens]{url} \urlstyle{same} \usepackage{amsmath} \usepackage{blindtext} \usepackage{mathtools} \usepackage{adjustbox} \usepackage[all,cmtip]{xy} \usepackage[shortlabels]{enumitem} \usepackage{multirow} \usepackage{eucal} \usepackage{xcolor} \usepackage{stackrel} \usepackage{epigraph} \usepackage{csquotes} \usepackage{url} \usepackage{tikz-cd} \usepackage{titletoc} \usepackage{setspace} \usepackage{chngcntr} \counterwithin{equation}{section} \tikzset{ curvarr/.style={ to path={ -- ([xshift=2ex]\tikztostart.east) |- (#1) [near end]\tikztonodes -| ([xshift=-2ex]\tikztotarget.west) -- (\tikztotarget)} } } \usepackage{hyperref} \hypersetup{colorlinks=true} \usepackage{tikz} \usetikzlibrary{matrix,arrows,positioning} \usepackage[bbgreekl]{mathbbol} \DeclareSymbolFontAlphabet{\mathbb}{AMSb} \DeclareSymbolFontAlphabet{\mathbbl}{bbold} \newcommand{\Prism}{\mathbbl{\Delta}} \setlength{\parindent}{0em} \setlength{\parskip}{0.5em} \theoremstyle{plain}\newtheorem{thm}{Theorem}[subsection] \newtheorem*{thm*}{Theorem} \newtheorem{thme}[thm]{Th\'eor\` eme} \newtheorem{goal*}{Goal} \newtheorem{dream*}{Dream} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \renewcommand{\thethm}{\Roman{section}.\arabic{subsection}.\arabic{thm}} \theoremstyle{definition} \newtheorem{dfn}[thm]{Definition} \newtheorem{dream}[thm]{Dream} \newtheorem{conj}[thm]{Conjecture} \newtheorem{exm}[thm]{Example} \newtheorem{cons}[thm]{Construction} \newtheorem{rem}[thm]{Remark} \newtheorem{que}[thm]{Question} \theoremstyle{remark} \newtheorem{note}{Note} \newtheorem{case}{Case} \newtheorem*{rem*}{Remark} \newtheorem*{prop*}{Proposition} \newcommand{\R}{\mathbf{R}} \newcommand{\Fcal}{\mathcal{F}} \newcommand{\pr}{\mathrm{pr}} \newcommand{\Tcal}{\mathcal{T}} \newcommand{\Hrm}{\mathrm{H}} \usepackage{stmaryrd} \newcommand{\mfrak}{\mathfrak{m}} 
\newcommand{\poly}{\mathrm{poly}} \newcommand{\sCAlg}{\mathrm{sCAlg}} \newcommand{\fppf}{\mathrm{fppf}} \newcommand{\F}{\mathbf{F}} \newcommand{\PSh}{\mathrm{PSh}} \newcommand{\qfrak}{\mathfrak{q}} \newcommand{\GL}{\mathrm{GL}} \newcommand{\BMS}{\mathrm{BMS}} \newcommand{\Ainf}{A_{\inf}} \newcommand{\TR}{\mathrm{TR}} \newcommand{\hatDelta}{\widehat{\boldsymbol{\Delta}}} \newcommand{\Tw}{\mathrm{Tw}} \newcommand{\stab}{\mathrm{stab}} \newcommand{\HH}{\mathrm{HH}} \newcommand{\Wfrak}{\mathfrak{W}} \newcommand{\Sh}{\mathrm{Sh}} \newcommand{\Fil}{\mathrm{Fil}} \newcommand{\Vcal}{\mathcal{V}} \newcommand{\Sym}{\mathrm{Sym}} \newcommand{\cofib}{\mathrm{cofib}} \newcommand{\Exc}{\mathrm{Exc}} \newcommand{\Hbf}{\mathbf{H}} \newcommand{\Ucal}{\mathcal{U}} \newcommand{\alg}{\mathrm{alg}} \newcommand{\QRSP}{\mathrm{QRSP}} \newcommand{\Spec}{\mathrm{Spec}} \newcommand{\fib}{\mathrm{fib}} \newcommand{\perf}{\mathrm{perf}} \newcommand{\Wcal}{\mathcal{W}} \newcommand{\Kcal}{\mathcal{K}} \newcommand{\G}{\mathbb{G}} \newcommand{\Lbb}{\mathbb{L}} \newcommand{\fin}{\mathrm{fin}} \newcommand{\Abb}{\mathbb{A}} \newcommand{\Trm}{\mathrm{T}} \newcommand{\Katz}{\mathrm{Katz}} \newcommand{\Xfrak}{\mathfrak{X}} \newcommand{\tr}{\mathrm{tr}} \newcommand{\Ocal}{\mathcal{O}} \newcommand{\HC}{\mathrm{HC}} \newcommand{\QRSPerfd}{\mathrm{QRSPerfd}} \newcommand{\Shv}{\mathrm{Shv}} \newcommand{\DF}{\mathrm{DF}} \newcommand{\compDF}{\widehat{\mathrm{DF}}} \newcommand{\ffrak}{\mathfrak{f}} \newcommand{\gfrak}{\mathfrak{g}} \newcommand{\HP}{\mathrm{HP}} \newcommand{\FilHKR}{\mathrm{Fil}_\mathrm{HKR}} \newcommand{\grHKR}{\mathrm{gr}_\mathrm{HKR}} \newcommand{\TP}{\mathrm{TP}} \newcommand{\Ncal}{\mathcal{N}} \newcommand{\crys}{\mathrm{crys}} \newcommand{\gr}{\mathrm{gr}} \newcommand{\Perf}{\mathrm{Perf}} \newcommand{\Lbf}{\mathbf{L}} \newcommand{\Ind}{\mathrm{Ind}} \newcommand{\E}{\mathbb{E}} \newcommand{\co}{\mathrm{co}} \newcommand{\Q}{\mathbf{Q}} \newcommand{\Sfrak}{\mathfrak{S}} \newcommand{\Eq}{\mathrm{Eq}}
\newcommand{\can}{\mathrm{can}} \newcommand{\TC}{\mathrm{TC}} \newcommand{\Cov}{\mathrm{Cov}} \newcommand{\Lcal}{\mathcal{L}} \newcommand{\NMot}{\mathrm{NMot}} \newcommand{\Gcal}{\mathcal{G}} \newcommand{\dR}{\mathrm{dR}} \newcommand{\THH}{\mathrm{THH}} \newcommand{\pfrak}{\mathfrak{p}} \newcommand{\Ak}{\mathrm{Ak}} \newcommand{\Smooth}{\mathrm{Smooth}} \newcommand{\ev}{\mathrm{ev}} \newcommand{\Frob}{\mathrm{Frob}} \newcommand{\map}{\mathrm{map}} \newcommand{\Fun}{\mathrm{Fun}} \newcommand{\Lex}{\mathrm{Lex}} \newcommand{\TCrm}{\mathrm{TC}} \newcommand{\Alg}{\mathrm{Alg}} \newcommand{\Ex}{\mathrm{Ex}} \newcommand{\CycSp}{\mathrm{CycSp}} \newcommand{\Scal}{\mathcal{S}} \newcommand{\Proj}{\mathrm{Proj}} \newcommand{\TF}{\mathrm{TF}} \newcommand{\C}{\mathbf{C}} \newcommand{\Jcal}{\mathcal{J}} \newcommand{\LEq}{\mathrm{LEq}} \newcommand{\D}{\mathbf{D}} \newcommand{\PSp}{\mathrm{PSp}} \newcommand{\Sp}{\mathrm{Sp}} \newcommand{\Xcal}{\mathcal{X}} \newcommand{\N}{\mathbb{N}} \newcommand{\Sbb}{\mathbb{S}} \newcommand{\Z}{\mathbf{Z}} \newcommand{\Ffrak}{\mathfrak{F}} \newcommand{\T}{\mathbb{T}} \newcommand{\Art}{\mathrm{Art}} \newcommand{\Acal}{\mathcal{A}} \newcommand{\Bcal}{\mathcal{B}} \newcommand{\Sbf}{\mathbf{S}} \newcommand{\Pbf}{\mathbf{P}} \newcommand{\Orm}{\mathrm{O}} \newcommand{\ch}{\mathrm{ch}} \newcommand{\Iw}{\mathrm{Iw}} \newcommand{\Gal}{\mathrm{Gal}} \newcommand{\Sel}{\mathrm{Sel}} \newcommand{\e}{\varepsilon} \newcommand{\Setcal}{{\mathcal{S}et}} \newcommand{\sSetcal}{{{\mathcal{S}et}_\Delta}} \newcommand{\Hcal}{\mathcal{H}} \newcommand{\Ccal}{\mathcal{C}} \newcommand{\Mod}{\mathcal{M}\mathrm{od}} \newcommand{\nttinf}{n\rightarrow\infty} \newcommand{\Hom}{\mathrm{Hom}} \newcommand{\Ext}{\mathrm{Ext}} \newcommand{\Tor}{\mathrm{Tor}} \newcommand{\Nat}{\mathrm{Nat}} \newcommand{\Nm}{\mathrm{Nm}} \newcommand{\Afrak}{\mathfrak{A}} \newcommand{\Catbf}{\mathbf{Cat}} \newcommand{\id}{\mathrm{id}} \newcommand{\Mor}{\mathrm{Mor}} 
\newcommand{\undundG}{\underline{\underline{G}}} \newcommand{\undG}{\underline{G}} \newcommand{\cyc}{\mathrm{cyc}} \newcommand{\Pcal}{\mathcal{P}} \newcommand{\op}{\mathrm{op}} \newcommand{\Cfrak}{\mathfrak{C}} \newcommand{\Ann}{\mathrm{Ann}} \newcommand{\Top}{\mathcal{T}\mathrm{op}} \newcommand{\imrm}{\mathrm{im}\phantom{.}} \newcommand{\Set}{\mathbf{Set}} \newcommand{\Dfrak}{\mathfrak{D}} \newcommand{\sSet}{\mathbf{sSet}} \newcommand{\Card}{\mathrm{Card}} \newcommand{\Cat}{\mathcal{C}at} \newcommand{\Ob}{\mathrm{Ob}} \newcommand{\FinZ}{\mathrm{Fin}(\Z)} \newcommand{\Dcal}{\mathcal{D}} \newcommand{\sk}{\mathrm{sk}} \newcommand{\Map}{\mathrm{Map}} \newcommand{\K}{\mathbf{K}} \newcommand{\KU}{\mathrm{KU}} \newcommand{\et}{\mathrm{\acute{e}t}} \newcommand{\Ab}{\mathcal{A}b} \newcommand{\CAlg}{\mathrm{CAlg}} \newcommand{\Nrm}{\mathrm{N}} \newcommand{\Ecal}{\mathcal{E}} \newcommand{\Gap}{\mathrm{Gap}} \newcommand{\Mat}{\mathrm{Mat}} \newcommand{\Mcal}{\mathcal{M}} \newcommand{\Ical}{\mathcal{I}} \newcommand{\cfrak}{\mathfrak{c}} \newcommand{\QSyn}{\mathrm{QSyn}} \newcommand{\acyc}{\mathrm{acyc}} \newcommand{\qSyn}{\mathrm{qSyn}} \newcommand{\colim}{\mathrm{colim}} \newcommand{\coker}{\mathrm{coker}} \newcommand{\Mrm}{\mathrm{M}} \newcommand{\Ch}{\mathrm{Ch}} \newcommand{\ZpT}{\Z_p\llbracket T\rrbracket} \newcommand{\ZpGamma}{\Z_p\llbracket \Gamma\rrbracket} \newcommand{\place}[2]{ \overset{\substack{#1\\\smile}}{#2}} \newcommand{\idbf}{\mathbf{1}} \newcommand{\rel}{\phantom{.}\mathrm{rel}\phantom{.}} \usepackage[backend=bibtex, style=alphabetic]{biblatex} \addbibresource{biblio.bib} \title{Homotopy groups of spectra and $p$-adic $L$-functions over totally real number fields} \author{Guillem Sala Fernandez} \begin{document} \maketitle \begin{abstract} The goal of this paper is to illustrate different approaches to understanding Euler characteristics in the setting of totally real commutative and non-commutative Iwasawa theory.
In addition to this, and in the spirit of \cite{Hess18} and \cite{Mitchell2005K1LocalHT}, we show how these objects interact with homotopy-theoretic invariants such as ($K(1)$-local) $K$-theory and topological cyclic homology. \end{abstract} \section*{Introduction} \subsection*{Summary of the article and structure} The main motivation for this paper was Hesselholt's expository note \cite{Hess18}. In his article, he argues that if $p$ is an odd prime and $F_p$ denotes the $p$-completion of the homotopy fiber of the cyclotomic trace map $\tr:K(\Z)\to\TC(\Z;p)$, then, among other results: \begin{enumerate} \item The value of the Kubota-Leopoldt $p$-adic $L$-function $L_p(\Q,\omega^{-2k}, 1+2k)$ is non-zero if and only if $\pi_{4k+1}(F_p)$ is zero, and \item If $L_p(\Q,\omega^{-2k}, 1+2k)\neq 0$, then $\pi_{4k}(F_p)$ and $\pi_{4k-1}(F_p)$ are finite and $$\left|L_p(\Q,\omega^{-2k}, 1+2k)\right|_p=\frac{\sharp\pi_{4k-1}(F_p)}{\sharp\pi_{4k}(F_p)},$$ where $|-|_p$ denotes the $p$-adic absolute value. \end{enumerate} These results follow from combining the Iwasawa main conjecture for $F=\Q$ with well-known results by, for example, Bayer-Neukirch \cite{Bayer1978} and Schneider \cite{Schneider1983}, together with further (homotopy) $K$-theoretic input. Our first goal -- and the original goal of this paper -- was to find an alternative proof of this result, as well as an extension to the case of the Deligne-Ribet $p$-adic $L$-function, that is, to the case where $F$ is a totally real field. As will be seen in \S\ref{CTRsection}, this was accomplished, thanks to results by Blumberg and Mandell in \cite{blumberg2015fiber}. The second goal was to see if similar results could be reached in the totally real and {\it non-commutative} setting. Unfortunately, it was not possible to find a formula relating the (conjectural) Artin $p$-adic $L$-function to homotopy-theoretic invariants as in the commutative case.
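For concreteness, the $p$-adic absolute value $|-|_p$ appearing in the second item above can be computed elementarily. The following is a minimal numerical sketch (the helper names \texttt{v\_p} and \texttt{abs\_p} are ours, not standard library functions):

```python
from fractions import Fraction

def v_p(n: int, p: int) -> int:
    """p-adic valuation of a non-zero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def abs_p(x: Fraction, p: int) -> Fraction:
    """p-adic absolute value |x|_p = p^(-v_p(x)), with |0|_p = 0."""
    if x == 0:
        return Fraction(0)
    v = v_p(x.numerator, p) - v_p(x.denominator, p)
    return Fraction(1, p**v) if v >= 0 else Fraction(p**(-v))

# The ultrametric inequality |x + y|_p <= max(|x|_p, |y|_p):
x, y, p = Fraction(9), Fraction(18), 3
assert abs_p(x + y, p) <= max(abs_p(x, p), abs_p(y, p))
```

In particular, a value of $|-|_p$ is large exactly when the argument is highly divisible by $p$ in the denominator, which is why the displayed formula compares orders of finite $p$-groups.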
It was possible, however, to complete the results involving Euler characteristics in \cite{burns2005leading}; that is, a cohomological description of the $p$-adic norm of certain values of the Artin $p$-adic $L$-function was found. The third goal was to see if similar results hold in other relevant number-theoretic settings, such as the case of imaginary quadratic number fields, or even algebraic function fields, where a $K$-theoretic description of the special values of Katz's $p$-adic $L$-function has already been found. These will be addressed in the second part of this series of two articles. Finally, one of the tangential motivations of this paper has been to introduce the number theorist to the field of higher algebra, and the higher algebraist to some of the main topics in Iwasawa theory. For this same reason, a brief but concise introduction to both fields is provided in \S\ref{NT background} and \S\ref{htpy theory bg section}. Afterwards, \S\ref{CTRsection} addresses the commutative and totally real case, and \S\ref{NCTR case} addresses the non-commutative counterpart. \subsection*{Acknowledgments} The author would like to thank Lars Hesselholt, Jonas McCandless, Andrew Blumberg, Dustin Clausen, Francesc Bars, Francesca Gatti, Armando Gutierrez, Oscar Rivero, Jeanine Van Order, and especially his thesis supervisor Victor Rotger for their help and comments on the article. In addition to this, the project was funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 682152). \section{Number theoretic background}\label{NT background} \subsection{$\Z_p^r$-extensions and the Iwasawa algebra} Let $F$ be a number field, and let $p$ be an odd prime. Recall that, given $r\geq 1$, a {\it $\Z_p^r$-extension} of $F$ is a profinite abelian field extension $F_\infty$ of $F$ such that $\Gamma=\Gal(F_\infty|F)\simeq\Z_p^r$.
\begin{exm}\label{totallyrealzpextension} Let $F=\Q$, and note that $\Gal(\Q(\mu_{p^{n+1}})|\Q)\simeq\Delta\times\Gamma_n$, where $\Delta$ is a cyclic group of order $p-1$, and $\Gamma_n$ is cyclic of order $p^n$. We can then let $\Q_n$ denote the fixed field of $\Q(\mu_{p^{n+1}})$ by $\Delta$, so that the direct limit $\Q^\cyc=\Q_\infty$ is a $\Z_p$-extension of $\Q$. By the Kronecker-Weber theorem, this is the unique $\Z_p$-extension of $\Q$. Furthermore, given any number field $F$, we can construct the {\it cyclotomic $\Z_p$-extension} of $F$ as $F^\cyc=F\cdot\Q^\cyc$. \end{exm} \begin{exm} Let $F=\Kcal$ be an imaginary quadratic number field. By class field theory, there exists a $\Z_p^2$-extension $\Kcal_\infty$ of $\Kcal$, sitting inside the maximal extension of $\Kcal$ that is unramified outside $p$, which is maximal among the $\Z_p^d$-extensions of $\Kcal$ for any $d\geq 1$. The Galois group $\Gal(\Kcal|\Q)$ acts on $\Gamma=\Gal(\Kcal_\infty|\Kcal)$ via conjugation, and if we denote the non-trivial element of $\Gal(\Kcal|\Q)$ by $\sigma$, its action gives us a decomposition $$\Gamma=\Gamma_+\oplus\Gamma_-$$ where $\sigma\cdot g=\sigma g\sigma^{-1}=g^{\pm 1}$, for each $g\in\Gamma_{\pm}$. We then recover the cyclotomic $\Z_p$-extension by taking the fixed field $\Kcal^\cyc$ of $\Kcal_\infty$ by $\Gamma_-$, and obtain the {\it anticyclotomic} $\Z_p$-extension by taking the fixed field of $\Kcal_\infty$ by $\Gamma_+$, which we denote by $\Kcal^\acyc$. Alternatively, if $p$ splits in $\Ocal_\Kcal$ as $p=\wp_+\wp_-$, then we can construct the $\Z_p$-extensions $\Kcal_\pm\subset\Kcal_\infty$ of $\Kcal$ which are unramified outside $\wp_\pm$. \end{exm} Now, let $\Ocal$ be the valuation ring of some finite algebraic extension of $\Q_p$.
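The decomposition $\Gal(\Q(\mu_{p^{n+1}})|\Q)\simeq\Delta\times\Gamma_n$ from Example \ref{totallyrealzpextension} reflects the elementary fact that $(\Z/p^{n+1})^\times$ is cyclic of order $(p-1)p^n$ for odd $p$, hence splits as a product of its subgroups of order $p-1$ and $p^n$. This can be checked numerically at a finite level; in the sketch below the choices $p=3$, $n=1$ are illustrative only:

```python
from math import gcd

p, n = 3, 1
m = p**(n + 1)
# Gal(Q(mu_m)|Q) is identified with (Z/m)^* via the cyclotomic character.
units = [a for a in range(1, m) if gcd(a, m) == 1]
assert len(units) == (p - 1) * p**n      # = (#Delta) * (#Gamma_n)

def order(a: int) -> int:
    """Multiplicative order of a modulo m."""
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

# (Z/m)^* is cyclic: some unit generates the whole group, so the group
# splits as (cyclic of order p-1) x (cyclic of order p^n).
assert any(order(a) == len(units) for a in units)
```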
The main goal of Iwasawa theory is to study the structure of modules over the {\it Iwasawa $\Ocal$-algebra} $\Lambda_\Ocal(\Gamma)=\Ocal\llbracket \Gamma\rrbracket$, which is defined as $$\Ocal\llbracket \Gamma\rrbracket = \lim_{U\unlhd\Gamma}\Ocal[\Gamma/U],$$ where the limit runs through all the open normal subgroups of $\Gamma$ with respect to the natural projection maps $\Ocal[\Gamma/V]\to\Ocal[\Gamma/U]$, for $V\leq U$. Note that whenever the base ring $\Ocal$ is understood, we will simply write $\Lambda(\Gamma)$, or even $\Lambda$. Modules over the Iwasawa algebra are also known as {\it Iwasawa modules}, and the next fundamental theorem aims at simplifying the description of such modules. \begin{thm} {\rm (Iwasawa-Serre, \autocite[Thm 2.3.9]{sharifi})} Let $G$ be a profinite group with $G\simeq\Z_p^r$, and fix a topological generating set $\{\gamma_i\}_{1\leq i\leq r}$ of $G$. Then, there exists a unique topological isomorphism $\Ocal\llbracket G\rrbracket\to\Ocal\llbracket T_1,\dots, T_r\rrbracket$ sending $\gamma_i-1$ to $T_i$. \end{thm} \begin{rem} The previous theorem allows us to think of modules over the Iwasawa $\Ocal$-algebra in terms of modules over a formal power series ring. For that reason, we will refer indistinctly to the Iwasawa algebra and its associated formal power series ring, taking into account that this identification carries the extra data of an isomorphism between both algebras. \end{rem} \subsection{The characteristic polynomial of an Iwasawa module} Now that we know how to think of modules over the Iwasawa algebra, we would like to find invariants that simplify their study. To do so, let $\varpi$ be a uniformizer for the valuation ring $\Ocal$, and let $M$ be a finitely-generated, torsion $\Lambda$-module.
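The Iwasawa-Serre isomorphism can be made concrete at a finite layer of the tower: under $\gamma\mapsto 1+T$, the relation $\gamma^{p^n}=1$ becomes $(1+T)^{p^n}-1=0$, and this polynomial is congruent to $T^{p^n}$ modulo $p$, since all its middle binomial coefficients are divisible by $p$. A quick numerical check of that divisibility (the values of $p$ and $n$ below are illustrative):

```python
from math import comb

p, n = 3, 2
N = p**n
# Coefficients of (1+T)^N - 1 - T^N, i.e. the middle binomial coefficients.
middle = [comb(N, k) for k in range(1, N)]
# Each is divisible by p, so (1+T)^{p^n} - 1 == T^{p^n} modulo p.
assert all(c % p == 0 for c in middle)
```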
It is a classical result that $\Lambda$ is a noetherian Krull domain whose height 1 primes are those generated by $\varpi$ and those generated by irreducible polynomials $f\in\Ocal[T_1,\dots,T_r]$ coprime to $\varpi$, so \autocite[Theorems 4 \& 5,\S VII.4.4]{n1998commutative} can be applied to obtain the existence of a map \begin{align}\label{structuretheorem}M\to\bigoplus_{n=1}^{s}\Lambda/(\varpi^{\mu_n})\oplus\bigoplus_{m=1}^{N} \Lambda/(f_m^{\lambda_m}),\end{align} which is a {\it quasi-isomorphism}, meaning that it has finite kernel and cokernel. \begin{rem} Classically, the sum of the exponents $\mu_n$ of $\varpi$ gives an invariant of $M$, the so-called {\it $\mu$-invariant}, which we denote by $\mu_{\Ocal,\Gamma}(M)$, or simply by $\mu(M)$ whenever the base Iwasawa algebra is understood. This invariant vanishes in all the cases that appear throughout this paper, although it need not vanish in general. \end{rem} \begin{dfn} The {\it characteristic ideal} of $M$ is the product of ideals \begin{align}\Ch_{\Lambda}(M)=(\varpi^{\mu(M)})\prod_{m=1}^N (f_m^{\lambda_m}).\end{align} \end{dfn} \begin{rem} The choice of a generator for $\Ch_\Lambda(M)$ provides a {\it characteristic element} for $M$, and we denote any such choice by $\ch_\Lambda(M)$. \end{rem} \subsection{Twists of Galois modules} From now on, we denote the {\it $p$-adic cyclotomic character} of a number field $F$ by $\chi_{\cyc,F}$, or simply by $\chi_\cyc$ whenever the base field $F$ is understood. Recall this is the map $\chi_\cyc:G_F\to\Z_p^\times$ defined by $\sigma(\zeta)=\zeta^{\chi_\cyc(\sigma)}$ for all $\zeta\in\mu_{p^\infty}$.
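At a finite level, $\chi_\cyc$ is simply the identification $\Gal(\Q(\mu_n)|\Q)\simeq(\Z/n)^\times$, $\sigma_a\mapsto a$, where $\sigma_a(\zeta)=\zeta^a$. Its multiplicativity can be verified numerically; in the sketch below, complex roots of unity stand in for $\mu_{p^\infty}$ and the values of $n$, $a$, $b$ are arbitrary:

```python
import cmath

n = 7
zeta = cmath.exp(2j * cmath.pi / n)    # a primitive n-th root of unity

def sigma(a: int, z: complex) -> complex:
    """The automorphism sigma_a sends each n-th root of unity z to z^a."""
    return z**a

# chi(sigma_a sigma_b) = chi(sigma_a) chi(sigma_b): composing the
# automorphisms raises zeta to the product of the exponents mod n.
a, b = 3, 5
assert abs(sigma(a, sigma(b, zeta)) - sigma((a * b) % n, zeta)) < 1e-9
```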
One can then define the {\it Tate module} $\Z_p(1)$ to be the $\Z_p[G_F]$-module with underlying group $\Z_p$ and action given by $$\sigma\cdot a = \chi_\cyc(\sigma)\cdot a,\qquad \forall\sigma\in G_F, \forall a\in\Z_p.$$ More generally, the {\it $n$-th twist} $\Z_p(n)$ of the Tate module is obtained by replacing $\chi_\cyc(\sigma)$ in the above definition by $\chi_\cyc(\sigma)^n$, and with this, given any $\Z_p[G_F]$-module $M$, one may define $$M(n)=M\otimes_{\Z_p}\Z_p(n),\qquad \sigma\cdot_{M(n)} m =\chi_\cyc(\sigma)^n(\sigma\cdot_M m),\qquad\forall\sigma\in G_F, \forall m\in M.$$ \begin{exm} Given a number field $F$ and a choice of a compatible system $\zeta_{p^n}$ of primitive $p^n$-th roots of unity, one obtains isomorphisms of $\Z_p[G_F]$-modules $\Z_p(1)\simeq \varprojlim \mu_{p^n}$ and $ \Q_p/\Z_p(1)\simeq \mu_{p^\infty}$. \end{exm} \begin{rem}\label{cyclotomic char remark} It is a well-known result that the cyclotomic character factors through $\Gal(F(\mu_{p^\infty})|F)$. Thus, given that there is an isomorphism $\Gal(F(\mu_{p^\infty})|F)\simeq\Delta\times\Gamma$, where $\Gamma$ is the Galois group of the cyclotomic $\Z_p$-extension from Example \ref{totallyrealzpextension}, whenever we are given a module $M$ over the Iwasawa algebra $\Lambda$, we can form $M(n)$ by twisting the action of $\Gamma$ by the $\Gamma$-part of $\chi_\cyc$. \end{rem} The ``twist'' provided by the cyclotomic character is the most commonly known, although from the definition one can tell that this can be extended to more general characters. For that reason, we provide the following definition. \begin{dfn} Let $\chi:G_F\to\Ocal^\times$ be a continuous character, possibly of infinite order, and let $\Ocal(\chi)$ be the free $\Ocal$-module of rank 1 on which $G_F$ acts through $\chi$. Then, for any $\Ocal[G_F]$-module $M$, we define $M(\chi)=M\otimes_{\Ocal}\Ocal(\chi)$.
\end{dfn} \begin{lem} \label{TorsionPreservation} {\rm (\autocite[\S IV, Lemma 1.2]{rubin2014euler})} Let $M$ be a finitely generated torsion $\Lambda$-module, and let $\chi:\Gamma\to\Ocal^\times$ be a character. Then, $M(\chi)$ is also a finitely generated torsion $\Lambda$-module. \end{lem} \subsection{Pontryagin Duality} Recall that one can define an endofunctor on the category of locally compact abelian groups, which is usually denoted by $\mathrm{LCA}$, via the rule $A\mapsto A^\vee:=\Hom_{\mathrm{cts}}(A,\R/\Z)$, where the image is endowed with the compact-open topology. We refer to $A^\vee$ as the {\it Pontryagin dual} of $A$. It is then a well-known result that the contravariant functor $(-)^\vee:\mathrm{LCA}\to\mathrm{LCA}$ produces an equivalence of categories (cf. \autocite[2.5.2]{sharifi}). \begin{rem} Note that if $A$ is pro-$p$ or discrete $p$-torsion, then $$A^\vee=\Hom_\mathrm{cts}(A,\Q_p/\Z_p),$$ and if $A$ is $\Z_p$-finitely generated, then since every $\Z_p$-linear homomorphism is continuous, we have that $$A^\vee=\Hom_{\Z_p}(A,\Q_p/\Z_p).$$ \end{rem} \begin{rem} If $A$ has the additional structure of a topological $G$-module, then $A^\vee$ can be endowed with the continuous action given by $$(g\cdot f)(a)=f(g^{-1}\cdot a),\qquad\forall g\in G,\forall a\in A,\forall f\in A^\vee.$$ \end{rem} \begin{exm} Clearly, one has that $\Z_p^\vee=\Q_p/\Z_p$. Furthermore, note that if we endow $\Z_p(1)^\vee$ with the action from the previous remark, then we obtain $\Z_p(1)^\vee=\Q_p/\Z_p(-1)$. \end{exm} \begin{lem} {\rm (\autocite[(5)]{howson2002euler})} Let $M$ be a finitely generated $\Lambda(G)$-module. Then, the Pontryagin dual induces a canonical isomorphism between group homology and group cohomology, namely, $$\Hrm^n(G,M^\vee)\cong\Hrm_n(G,M)^\vee.
$$ \end{lem} \subsection{Eigenspaces} \label{eigenspaces} Let us consider $\hat{\Delta}$, the group of $p$-adic characters of $\Delta$, where $\Delta$ is a finite abelian group, and let $\Ocal$ be the $\Z_p$-algebra generated by the roots of unity of order dividing the exponent of $\Delta$. Also, given $\psi\in\hat{\Delta}$, we denote the $\Z_p$-algebra generated by the values of $\psi$ by $\Ocal_\psi$. Note that $\psi$ extends to a map $\Z_p[\Delta]\to\Ocal_\psi$, which in turn extends to a map $\Ocal[\Delta]\to\Ocal$. In particular, given an $\Ocal[\Delta]$-module $A$, we define $A^\psi:=A\otimes_{\Ocal[\Delta]}\Ocal$ using the previous map, and define $$e_\psi=\frac{1}{\sharp\Delta}\sum_{\delta\in\Delta}\psi(\delta)\delta^{-1}\in\mathrm{Frac}(\Ocal_\psi)[\Delta].$$ It is then easy to see that $e_\psi\Ocal[\Delta]=e_\psi\Ocal$, and thus we obtain the following result. \begin{prop}{\rm (\autocite[Prop 2.8.6]{sharifi})} Assume $p$ does not divide $\sharp\Delta$. Then, for any $\Ocal[\Delta]$-module $A$, we have that $A^\psi=e_\psi A$, and there is a decomposition $$A\cong \bigoplus_{\psi\in\hat{\Delta}}A^\psi.$$ \end{prop} In the case $\Ocal=\Z_p$, we define $A^{(\psi)}:=A\otimes_{\Z_p[\Delta]}\Ocal_\psi$, and note that in order to be able to define an analogue of $e_\psi$, we need to work over a field containing the values of $\psi$. For that reason, we define $$\tilde{e}_\psi = \frac{1}{\sharp\Delta}\sum_{\delta\in\Delta}\mathrm{Tr}_{\mathrm{Frac}(\Ocal_\psi)|\Q_p}(\psi(\delta))\delta^{-1}\in\Q_p[\Delta].$$ We write $\psi\sim\varphi$ whenever there exists $\sigma\in G_{\Q_p}$ so that $\psi = \sigma\circ\varphi$. Note that under this notation, we have that if $\psi\sim\varphi$, then $\Ocal_\psi = \Ocal_\varphi$. Thus, we can now state the relationship between working with $\Z_p[\Delta]$-modules and $\Ocal[\Delta]$-modules. \begin{prop}{\rm (\autocite[Lem 2.8.14 \& Prop 2.8.15]{sharifi})} Let $A$ be a $\Z_p[\Delta]$-module.
\begin{enumerate} \item One has that $A^{(\psi)}\otimes_{\Ocal_\psi}\Ocal\cong (A\otimes_{\Z_p}\Ocal)^\psi$, and if $p$ does not divide $\sharp\Delta$, then $$A^{(\psi)}\otimes_{\Z_p}\Ocal\cong \bigoplus_{\varphi\in[\psi]}(A\otimes_{\Z_p}\Ocal)^\varphi.$$ \item If $p$ does not divide $\sharp\Delta$, then $$A\cong \bigoplus_{[\psi]\in\hat{\Delta}/\sim}A^{(\psi)}.$$ \end{enumerate} \end{prop} \subsection{Twists and characteristic ideals} Our goal now is to determine the effect of twisting a module on its characteristic ideal. In order to do so, let $\chi:G_F\to\Ocal^\times$ be a character (possibly of infinite order), and define the $\Ocal$-linear isomorphism $\Tw_\chi:\Lambda\to\Lambda$ given by the rule $\gamma\mapsto \chi(\gamma)\gamma$. We have the following result, which makes sense thanks to Lemma \ref{TorsionPreservation}. \begin{lem} {\rm (\autocite[\S VI, Lemma 1.2]{rubin2014euler})} Let $M$ be a finitely generated and torsion $\Lambda$-module. Then, $$\Tw_\chi\left(\Ch_\Lambda(M(\chi))\right)=\Ch_\Lambda(M).$$ \end{lem} \begin{exm}\label{characteristic power series twist description} Let us restrict our attention to the case where $F=\Q$. Then, if we let $M$ be a finitely generated torsion $\Lambda$-module, we can conclude from the previous result that, after a choice of a topological generator $\gamma$ of $\Gamma$, the Iwasawa-Serre isomorphism produces an equality $$\ch_{\Lambda}\left(M(-n)\right)(T)\equiv\ch_{\Lambda}(M)(u^n(T+1)-1)\mod\Z_p^\times,$$ which holds for any pair of characteristic elements for $M$ and $M(-n)$ respectively, where $u=\chi_\cyc(\gamma)$. \end{exm} \subsection{Euler characteristics of Iwasawa modules} Finally, we introduce Euler characteristics of Iwasawa modules. \begin{dfn} Let $M$ be a finitely-generated $\Lambda(G)$-module, for $G$ the Galois group of a $\Z_p^d$-extension. Then, $M$ is said to have {\it finite $G$-Euler characteristic} if all the (group) homology groups $\Hrm_i(G,M)$ are finite for all $0\leq i\leq d$.
Furthermore, if $M$ has finite $G$-Euler characteristic, then the {\it $G$-Euler characteristic of $M$} is defined as the product $$\mathrm{EC}(G,M):=\prod_{0\leq i\leq d}\left(\sharp\Hrm_i(G,M)\right)^{(-1)^i}.$$ \end{dfn} Note that given a finitely generated $\Lambda=\Lambda(\Gamma)$-module, one can analogously define its $\Gamma$-Euler characteristic, and the following result illustrates the relation between both Euler characteristics. The result can be found, e.g., in \autocite[(31), p.14]{coates2012noncommutative}. \begin{thm} Given a finitely generated $\Lambda(G)$-module $M$, write $G\simeq H\times\Gamma$ with $\Gamma\simeq\Z_p$, so that $H\simeq\Z_p^{d-1}$. Then, $M$ has finite $G$-Euler characteristic if and only if the $\Gamma$-modules $\Hrm_i(H,M)$ have finite $\Gamma$-Euler characteristic for all $0\leq i< d$. Whenever either of the two equivalent conditions holds, one also has the equality $$\mathrm{EC}(G,M)=\prod_{0\leq i<d}\mathrm{EC}(\Gamma,\Hrm_i(H,M))^{(-1)^i}.$$ \end{thm} \newpage \section{Background in homotopy theory}\label{htpy theory bg section} The following section serves to illustrate the subject of stable homotopy theory using the language of $\infty$-categories. By no means is this intended to be a precise introduction; our main goal here is to allow the reader who is not familiar with the language to follow the rest of this paper up to certain technicalities. The main references are \cite{lurie2009higher} and \cite{lurie2016higher}. \subsection{The language of $\infty$-categories} For our purposes in this paper, we can think of an {\it $\infty$-category} as a category that is weakly enriched over spaces, in the sense that there is a {\it space} of maps between every pair of objects, and the composition law is defined only up to {\it coherent} homotopy. Homotopy coherence is the problem one faces when trying to lift diagrams in the homotopy category of spaces to the ordinary category of spaces.
It is easy to see that the existence of such a lift is equivalent to the existence of certain higher-degree homotopies, and this is precisely the problem that $\infty$-categories tackle, as we are asking their sets of maps to be spaces, which therefore contain homotopical information in all degrees. Although the composition law is required to be unique only up to coherent homotopy -- a condition much stronger than uniqueness up to homotopy alone, yet weaker than strict uniqueness -- this is still sufficient to provide a robust theory of $\infty$-categories, in which all constructions for $1$-categories (functors, limits, colimits...) can also be performed. \begin{rem} One can also look at the special case of an {\it $\infty$-groupoid}, whose main difference with a general $\infty$-category is that all maps admit an inverse. As explained in \autocite[Example 1.1.1.4]{lurie2009higher}, one can think of $\infty$-groupoids as categories behaving like spaces, and from this one defines the $\infty$-category of spaces $\Scal$, which is the $\infty$-categorical analogue of the classical category of spaces. \end{rem} \subsection{Stable $\infty$-categories} We will be working with {\it stable $\infty$-categories} most of the time. These are pointed\footnote{An $\infty$-category is said to be pointed if it admits a zero object, that is, an object that is both initial and final.} $\infty$-categories with finite limits, and whose loop functor gives an equivalence\footnote{The condition of being pointed and having finite limits is enough to construct a loop space functor $\Omega$ and a suspension functor $\Sigma$. Furthermore, the stability condition implies that the adjunction $(\Sigma,\Omega)$ gives an equivalence.}. In addition, there is a precise notion of fiber and cofiber sequences just as in classical homotopy theory.
By \autocite[Theorem 1.1.2.14]{lurie2016higher}, the homotopy category of every stable $\infty$-category admits a triangulated structure, which illustrates another reason why one would be interested in working in the $\infty$-categorical setting: the complexity of triangulated categories could be justified by its being a ``decategorification'' of a structure that is natural in higher category theory. Moreover, the homotopy category of a stable $\infty$-category $\Ccal$ can be endowed with a $t$-structure, although it might not be unique. Roughly, a $t$-structure is the additional data in the homotopy category of $\Ccal$ that axiomatizes the phenomenon of truncation of complexes and spaces and, therefore, taking cohomology or homotopy groups. From a $t$-structure, one can obtain the {\it heart} $\Ccal^\heartsuit$ of $\Ccal$. This is an abelian subcategory of the homotopy category of $\Ccal$ which, in the case of the derived $\infty$-category\footnote{The derived $\infty$-category admits a canonical $t$-structure.}, can be regarded as those complexes concentrated in degree 0. A consequence of this construction is that fiber sequences in $\Ccal$ induce long exact sequences in $\Ccal^\heartsuit$ after taking homotopy groups. \subsection{The $\infty$-category of Spectra} For our purposes, we will think of the $\infty$-category of spectra, which is denoted by $\Sp$, as the higher analogue of the 1-category of abelian groups. More precisely, there exist adjoint functors $\Omega^\infty$ and $\Sigma^\infty_+$ giving rise to the following diagram $$ \xymatrix@C=8em{ \Ab \ar@{->}@/_/[r] \ar@/^/[d]^{\mathrm{forget}} & \Sp \ar@/_/[l]_{\pi_0} \ar@/^/[d]^{\Omega^\infty} \\ \Setcal \ar@/^/[u]^{\Z(-)} \ar@{->}@/_/[r] & \Scal. \ar@/^/[u]^{\Sigma^\infty_+} \ar@/_/[l]_{\pi_0} } $$ \begin{rem} The functor $\pi_0:\Sp\to\Ab$ comes from the fact that $\Sp$ admits a canonical $t$-structure, and its heart can be naturally identified with (the nerve of) the 1-category of abelian groups.
At the same time, this implies that any fiber sequence in $\Sp$ gives rise to a long exact sequence of abelian groups just as in classical homotopy theory. However, note that in this case the homotopy groups can be non-trivial in negative degrees.\end{rem} An alternative way to visualize $\Sp$ is given by Brown's representability theorem \autocite[Theorem 1.4.1.2]{lurie2016higher}, which roughly states that any generalized cohomology theory is representable by a spectrum. Thus, $\Sp$ can be seen as the $\infty$-category containing all cohomology theories, although we warn the reader that the embedding is not always faithful. Furthermore, given a space $X$ and a spectrum $E$, one can define $E^n(X)=[\Sigma^\infty_+X,\Sigma^n E]$, and check that the assignment $X\mapsto E^n(X)$ gives a generalized cohomology theory. In fact, if $X$ is itself a spectrum, one can define $E^n(X)=[X,\Sigma^n E]$. \begin{exm} We give a list of some spectra that we will use throughout this article. \begin{enumerate}[nosep] \item A central object in the study of spectra is the {\it sphere spectrum}, which is the $\infty$-suspension of the point, $\Sbb=\Sigma_+^\infty(\ast)$. This object is sometimes referred to as the {\it stable sphere}, and the reason is that its homotopy groups agree with the stable homotopy groups of spheres. \item As in the classical case, given an abelian group $G$, one can always construct its {\it Eilenberg-MacLane spectrum}, denoted by $HG$, although when the context of spectra is understood, we will simply denote it by $G$. It is characterized by the fact that its homotopy groups are concentrated in degree 0, where $\pi_0HG= G$. \item We will also use the {\it Moore spectrum} $SG$, which is characterized by having $\pi_iSG=0$ for $i<0$, $\pi_0SG=G$, and $\pi_i(SG\otimes H\Z)=H_i(SG;\Z)=0$ for $i>0$. The importance of this spectrum will become visible in the Bousfield localization section below, where we will work with {\it non-connective spectra}, that is, spectra with homotopical information in negative degrees.
\item Given a ring spectrum $X$, one can associate to it its {\it $K$-theory spectrum} $K(X)$. In particular, when $X=HR$ for some ring $R$, $\pi_n K(HR)\simeq K_n(R)$, where the right hand side denotes the classical $n$-th $K$-theory group of $R$. \end{enumerate} \end{exm} Finally, just as the category of abelian groups admits a symmetric monoidal category structure, one can endow $\Sp$ with a symmetric monoidal $\infty$-category structure given by the smash product $\otimes$ and whose unit is the sphere spectrum. This provides the possibility of forming algebraic structures over $\Sp$, whence the name {\it Higher Algebra}. \begin{center} \begin{tabular}{c|c} Ordinary Algebra & $\infty$-Categorical Algebra\\ \hline Set & Space \\ Abelian group & Spectrum \\ Tensor product of abelian groups & Smash product of spectra\\ Associative ring & $\E_1$-ring \\ Commutative ring & $\E_\infty$-ring \end{tabular}\footnote{This table was obtained from \autocite[Chapter 7]{lurie2016higher}.} \end{center} \subsection{Bousfield localization of spectra and Iwasawa theory} Let us fix a spectrum $E\in\Sp$. Another spectrum $X$ is said to be {\it $E$-acyclic} if $X\otimes E\simeq 0$. On the other hand, $X$ is said to be {\it $E$-local} if every map $Y\to X$ is null-homotopic (i.e., factors through a zero object) whenever $Y$ is $E$-acyclic. Then, there exists an endofunctor $L_E$ of $\Sp$, called {\it Bousfield localization}, which is characterized up to equivalence by the following two properties: $L_E X$ is $E$-local, and there is a natural map $X\to L_E X$ whose fiber is $E$-acyclic. We then define the {\it $p$-adic completion} $X_p^\wedge$ of a spectrum $X$ to be the Bousfield localization of $X$ with respect to the Moore spectrum $S\Z/p$. Now, let $X$ be a space, and let $\KU^\ast(X)$ be the ring of (periodic) complex $K$-theory of $X$. Since $\KU^\ast$ defines a generalized cohomology theory, it corresponds to a spectrum $\KU$ by Brown's theorem.
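On homotopy groups, the $p$-adic completion just defined has a familiar algebraic shadow: for a finite abelian group, the tower of quotients by powers of $p$ stabilizes at the $p$-primary part. A toy illustration of that stabilization (the group $\Z/12$ and the prime $p=2$ below are arbitrary choices):

```python
from math import gcd

p, m = 2, 12
# The quotient (Z/m)/p^k(Z/m) has order gcd(m, p^k); for large k this
# stabilizes at the p-part of m, the algebraic analogue of p-completion.
orders = [gcd(m, p**k) for k in range(1, 8)]
assert orders[-1] == orders[-2] == 4   # the 2-primary part of Z/12 is Z/4
```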
We refer to $\KU$ as the {\it complex $K$-theory spectrum}. Finally, we define the {\it $K(1)$-local $K$-theory spectrum} as $L_{K(1)}K=\KU_p^\wedge$. \begin{rem} In fact, the $K(1)$-local adjective comes from the first Morava $K$-theory spectrum, which satisfies that $L_{K(1)}K(X)\simeq\KU(X)_p^\wedge$. In general, one can construct the $n$-th Morava $K$-theory spectrum $K(n)$, which is a central tool in stable homotopy theory. \end{rem} As explained in \cite{Mitchell2005K1LocalHT}, $K(1)$-local $K$-theory already has a direct relation with Iwasawa theory: One can identify the group of Adams operations on $K(1)$-local $K$-theory with $\Gal(\Q_\infty|\Q)$ via the cyclotomic character and obtain $(L_{K(1)}K)^\ast(L_{K(1)}K)\simeq\Lambda(\Gamma)[\Delta]$. \newpage \section{The commutative and totally real case}\label{CTRsection} From now on, let us fix an odd prime $p$ and let $F$ be a totally real number field. Let $\Sigma$ denote a finite set of primes of $F$ containing all primes dividing $p$ and let $F_\Sigma$ denote the maximal extension of $F$ which is unramified outside $\Sigma$ and the infinite places of $F$. Also, given an extension $L$ of $F$ inside $F_\Sigma$, we denote the maximal abelian $p$-extension of $L$ contained inside $F_\Sigma$ by $M(L)$, and its Galois group over $L$ by $X(L)=\Gal(M(L)|L)$. \begin{dfn}\label{tradmissible} A Galois extension $F_\infty|F$ is said to be a {\it totally real admissible $p$-adic Lie extension} if \begin{enumerate}[nosep] \item $F_\infty$ is totally real. \item The Galois group of $F_\infty|F$ is a $p$-adic Lie group, that is, a profinite topological group of finite rank containing an open pro-$p$ subgroup. \item $F_\infty|F$ is unramified outside $\Sigma$. \item $F_\infty$ contains $F^\cyc$, the maximal $\Z_p$-extension of $F$ contained in $F(\mu_{p^\infty})$. \end{enumerate} \end{dfn} Finally, and following \autocite[p. 
6]{coates2012noncommutative}, we will work under the assumption that $G=\Gal(F_\infty|F)$ is an abelian $p$-adic Lie group of dimension equal to one -- which, according to Leopoldt's conjecture, should always be the case --, and denote $H=\Gal(F_\infty|F^\cyc)$, which is a finite group under our running hypotheses. \subsection{The main conjecture} Let us start by noting that if we denote $\Gamma=\Gal(F^\cyc|F)$, $G$ can be written as $G =H\times\Gamma$. If we denote by $K$ the fixed field of $\Gamma$, which satisfies that $K\cap F^\cyc=F$, then $F_\infty$ can be written as the composite field $K F^\cyc$. Moreover, we will refer to the $p$-adic cyclotomic character $\chi_\cyc:\Gal(F(\mu_{p^\infty})|F)\to\Z_p^\times$ in a slightly unconventional way, namely as a character of $G$ that is trivial on $H$, given that $\Gamma$ is a subgroup of $\Gal(F(\mu_{p^\infty})|F)$ and thus we can look at the restriction of $\chi_\cyc$ to $\Gamma$, as was commented in \ref{cyclotomic char remark}. Following the notations from \ref{eigenspaces}, let $\psi\in\hat{H}\setminus\{\idbf\}$ be of {\it type S}, that is, so that $F_\psi$, the field cut out by $\ker(\psi)$, or perhaps more precisely, the minimal extension of $F$ through which $\psi$ factors, is totally real and $F_\psi\cap F^\cyc=F$. For simplicity, we will also assume that the order of $\psi$ is coprime to $p$. Furthermore, fix a topological generator $\gamma\in\Gamma$, whose image under the cyclotomic character we denote by $u=\chi_\cyc(\gamma)$. Now, let $L^\Sigma(\psi,s)$ denote the $L$-function with the Euler factors at $\Sigma$ removed from the Euler product expression, whose rationality is justified in \autocite[(1.2)]{Wiles90} by using a result of Klingen-Siegel. We then have the following theorem. \begin{thm} \label{DR} {\rm (Deligne-Ribet \cite{Deligne1980}, Cassou-Nogu\`es \cite{Cassou1979}, Barsky \cite{GAU_1977-1978__5__A9_0})} Let $n\geq 1$ be an integer with $n\equiv 0\mod[F(\mu_p):F]$.
There exists a unique formal power series $L_p(\psi)\in\Ocal_\psi\llbracket T\rrbracket$ such that $$L_p(\psi,u^n-1)=L^\Sigma(\psi,1-n).$$ \end{thm} \begin{rem} As explained in \cite{Wiles90}, the theorem can also be stated for $\idbf$ and characters of type $W$, that is, characters for which $F_\psi\subseteq F^\cyc$, multiplying the left hand side by $(\psi(\gamma)u^n-1)^{-1}$, but for simplicity we only focus on the type $S$ case. \end{rem} \begin{rem} As explained in \autocite[p. 7]{coates2012noncommutative}, the theorem can also be stated regarding $L_p(\psi)$ as a {\it pseudo-measure} on $G$, so that if $n>0$ is an integer with $n\equiv 0\mod[F(\mu_p):F]$, $$L_p(\psi,\chi_\cyc^n):=\int_G \left(\psi\otimes\chi_\cyc^n\right) dL_p(\psi)=L^\Sigma(\psi,1-n).$$ This is why we will use the notations $L_p(\psi,\chi_\cyc^n)$ and $L_p(\psi,u^n-1)$ interchangeably. \end{rem} For the rest of this section, let us fix $\psi$ of type $S$; denote $X=X(F_\infty)$, and let $X_\psi=e_\psi(X\otimes_{\Z_p}\Ocal_\psi)$. By a classical theorem of Iwasawa, $X_\psi$ is finitely generated and torsion as an $\Ocal_\psi\llbracket\Gamma\rrbracket$-module, and therefore we can look at a characteristic power series $C_\psi:=\ch(X_\psi)\in\Ocal_\psi\llbracket T\rrbracket$ for $X_\psi$. It is easy to check that \begin{align}\label{DeterminantFormula}C_\psi(T)\Ocal_\psi\llbracket T\rrbracket =\varpi_\psi^{\mu_\psi}\det\left(T-(\gamma-1)|X_\psi\otimes F_\psi\right)\Ocal_\psi\llbracket T\rrbracket ,\end{align} where $\varpi_\psi$ is any fixed local parameter for $\Ocal_\psi$, and $\mu_\psi$ is the algebraic $\mu$-invariant, which by the work of Wiles agrees with the analytic invariant obtained after applying the Weierstrass preparation theorem to $L_p(\psi,T)$ above. With this, one can state the algebraic part of the IMC as follows.
\begin{thm} {\rm (IMC for totally real fields)} \label{AbelianIMCtotallyreal} For all $\psi\in\hat{H}$ of order prime to $p$, $$L_p(\psi,T)\Ocal_\psi\llbracket T\rrbracket = C_\psi(T)\Ocal_\psi\llbracket T\rrbracket.$$ \end{thm} \subsection{The group-cohomological Euler characteristic formula} Now, let us note that $G$ acts on $\Z_p(n)$ as long as $n\equiv 0\mod[F(\mu_p):F]$, so let $n$ be an integer -- positive or negative -- of that form. The determinant part in equation (\ref{DeterminantFormula}) admits a cohomological interpretation by the work of Bayer-Neukirch \autocite[Theorem 6.1]{Bayer1978} and Schneider \autocite[Theorem \S4.4]{Schneider1983}. Namely, the determinant does not vanish at $T=0$ if and only if the $\Gamma$-invariants (or equivalently, the $\Gamma$-coinvariants) of $X_\psi$ are finite. When one of these three equivalent conditions holds, one has \begin{align}\label{AbelianEulerCharFla}|C_\psi(u^{n}-1)|_p=\frac{\sharp \Hrm^0\left(\Gamma,X_\psi(-n)\right)}{\sharp \Hrm^1\left(\Gamma,X_\psi(-n)\right)},\end{align} where the $(-n)$-th Tate twist of $X_\psi$ appearing in the right hand side follows from \ref{characteristic power series twist description}. Now, combining \ref{DR}, \ref{AbelianIMCtotallyreal} and (\ref{AbelianEulerCharFla}), we obtain the following well-known result. \begin{prop}\label{TRgroupcohomologyformula} Let $n$ be an integer with $n\equiv 0\mod[F(\mu_p):F]$.
Then, $$|L_{p}(\psi,\chi_\cyc^n)|_p=\frac{\sharp \Hrm^0\left(\Gamma,X_\psi(-n)\right)}{\sharp \Hrm^1\left(\Gamma,X_\psi(-n)\right)}.$$ \end{prop} \begin{rem} In particular, note that when $n>0$ is an integer such that $n\equiv 0\mod[F(\mu_p):F]$, we have a group-cohomological interpretation of the $p$-adic valuation of the $L$-function, that is, $$\left|L^\Sigma(\psi,1-n)\right|_p=\frac{\sharp \Hrm^0\left(\Gamma,X_\psi(-n)\right)}{\sharp \Hrm^1\left(\Gamma,X_\psi(-n)\right)}.$$ \end{rem} \subsection{The \'etale-cohomological Euler characteristic formula} \label{etaleeulerchar} Let $n$ be an integer as in \ref{TRgroupcohomologyformula}. We can now use Shapiro's lemma to extend the formula above as follows \begin{align}\label{eqn1} |L_p(\psi,\chi_\cyc^n)|_p=\frac{\# \Hrm^0\left(\Gamma, X_\psi(-n) \right)}{\# \Hrm^1\left(\Gamma, X_\psi(-n) \right)}=\frac{\# \Hrm^0\left(G, X(-n)\right)}{\# \Hrm^1\left(G, X(-n)\right)}. \end{align} Let us now recall that we can identify $X$ with $\Hrm^1_\et(\Ocal_{F_\infty}[1/p];\Q_p/\Z_p)^\vee$. This is due to the fact that, since $X$ is the Galois group of the maximal abelian $p$-extension of $F_\infty$ that is unramified outside $p$, it can be identified with the maximal abelian pro-$p$ factor group of the algebraic fundamental group of $\Spec(\Ocal_{F_\infty}[1/p])$. Taking this into account, \begin{align*} \Hrm^\nu(G,X(-n))&=\Hrm^\nu\left(G,\Hrm^1_\et(\Ocal_{F_\infty}[1/p];\Q_p/\Z_p(n))^\vee\right)\\ &=\Hrm^{1-\nu}\left(G,\Hrm^1_\et(\Ocal_{F_\infty}[1/p];\Q_p/\Z_p(n))\right)^\vee,\qquad \nu=0,1,\end{align*} where the first equality follows from the fact that the Pontryagin dual inverts the action of the cyclotomic character, and the second equality follows from the proof of \autocite[Lemma \S I.2.4]{Schneider1983}.
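Let us also make explicit the twisting convention behind the first equality (a routine verification): since $n\equiv 0\mod[F(\mu_p):F]$ and $F^\cyc\subseteq F_\infty$, the character $\chi_\cyc^n$ is trivial on $G_{F_\infty}$, so the Tate twist can be pulled out of the cohomology over $\Ocal_{F_\infty}[1/p]$, and Pontryagin duality converts a twist by $n$ into a twist by $-n$:
$$\Hrm^1_\et(\Ocal_{F_\infty}[1/p];\Q_p/\Z_p(n))^\vee\simeq\left(\Hrm^1_\et(\Ocal_{F_\infty}[1/p];\Q_p/\Z_p)(n)\right)^\vee\simeq X(-n).$$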
Combining these equalities with \ref{eqn1}, we obtain $$|L_p(\psi,\chi_\cyc^n)|_p=\frac{\# \Hrm^1\left(G, \Hrm^1_\et\left(\Ocal_{F_\infty}[1/p],\Q_p/\Z_p(n)\right)\right)}{\# \Hrm^0\left(G, \Hrm^1_\et\left(\Ocal_{F_\infty}[1/p],\Q_p/\Z_p(n)\right)\right)}.$$ Following \autocite[Lemma \S II.4.1]{Schneider1983}, we can use the Hochschild-Serre spectral sequence $$E^{i,j}_2=\Hrm^i\left(G,\Hrm^j_\et\left(\Ocal_{F_\infty}[1/p],\Q_p/\Z_p(n)\right)\right)\Rightarrow \Hrm^{i+j}_\et\left(\Ocal_F[1/p],\Q_p/\Z_p(n)\right).$$ Schneider studies the groups appearing in this spectral sequence and observes that it degenerates at the second page; from there, using that the weak Leopoldt conjecture is known in the case of totally real number fields -- which is equivalent to $\Hrm^2_\et(\Ocal_{F_\psi}[1/p],\Q_p/\Z_p)=0$ --, he deduces the following classical result. \begin{thm}\label{etale_formula} {\rm (\autocite[Theorem 6.1]{Bayer1978},\autocite[Theorem \S4.4]{Schneider1983})}\label{BayerNeukirch} Let $n$ be an integer with $n\equiv 0\mod [F(\mu_p):F]$. Then, the following equality holds $$\left|L_p(\psi,\chi_\cyc^n)\right|_p=\frac{\#\Hrm^0_\et\left(\Ocal_F[1/p],\Q_p/\Z_p(n)\right)}{\#\Hrm^1_\et\left(\Ocal_F[1/p],\Q_p/\Z_p(n)\right)}.$$ \end{thm} \subsection{($K(1)$-local) $K$-theory and \'etale descent} \label{Ktheoandetalesection} Following the same notation as in the previous sections, as well as the one introduced in \S\ref{htpy theory bg section}, we can state the following theorem. \begin{thm}\label{BMTheo} {\rm (Blumberg-Mandell \cite{blumberg2015fiber})} The homotopy fiber $\mathrm{fib}(\kappa)$ of the completion map $$\kappa: L_{K(1)}K(\Ocal_F[1/p])\to \prod_{v\in \Sigma}L_{K(1)}K(F_v)$$ in $K(1)$-local algebraic $K$-theory is weakly equivalent to $\Omega I_{\Z_p}L_{K(1)}K(\Ocal_F[1/p])$, where $I_{\Z_p}$ denotes the Anderson duality functor for $p$-local spectra.
Furthermore, there is a canonical map of spectra from the homotopy fiber of the $p$-adic cyclotomic trace map $$\tr_p^\wedge:K(\Ocal_F;\Z_p)\to\TC(\Ocal_F;\Z_p)$$ to $\Omega I_{\Z_p}L_{K(1)}K(\Ocal_F)$ that induces an isomorphism of homotopy groups in degrees $\geq 2$. \end{thm} \begin{rem} Note that, thanks to \autocite[Theorem 1.1]{bhatt2020remarks}, we know that $L_{K(1)}K(\Ocal_F[1/p])\simeq L_{K(1)}K(\Ocal_F)$, which allows us to identify both fibers in the specified degrees. \end{rem} We would like to relate the \'etale cohomology groups showing up in Theorem \ref{BayerNeukirch} and ($K(1)$-local) $K$-theory. To do so, we use the Thomason descent spectral sequence which, given a ring $R$ that we assume to be either $\Ocal_F[1/p]$ or $F_v$ for any $v\in \Sigma$, has the following form $$E^{p,q}_2=\Hrm^p_\et(R,\Z_p(-q/2))\Rightarrow\pi_{-p-q}L_{K(1)}K(R),$$ where the groups on the left are assumed to be zero whenever $q$ is not divisible by $2$. Using Thomason's descent spectral sequence as in \cite{blumberg2015fiber} or \cite{dwyer1998k}, if we denote the Moore spectrum associated to $\Q_p/\Z_p$ by $M_{\Q_p/\Z_p}$, we have the following table.
\begin{center} \begin{tabular}{| c |c| c |} \hline $X$ & $\pi_{2k+1}X$ & $\pi_{2k}X$ \\ \hline \multirow{3}{*}{$L_{K(1)}K(R)$} & & $\Hrm^0_\et(R,\Z_p(k))$ \\ & $\Hrm^1_\et(R,\Z_p(k+1))$ & $\oplus$ \\ & & $\Hrm^2_\et(R,\Z_p(k+1))$ \\ \hline \multirow{3}{*}{$L_{K(1)}\left(K(R)\otimes M_{\Q_p/\Z_p}\right)$} & & $\Hrm^0_\et(R,\Q_p/\Z_p(k))$ \\ & $\Hrm^1_\et(R,\Q_p/\Z_p(k+1))$ & $\oplus$ \\ & & $\Hrm^2_\et(R,\Q_p/\Z_p(k+1))$ \\ \hline \end{tabular} \end{center} \begin{rem} As explained in Blumberg-Mandell's article, one can use \ref{BMTheo} to identify (non-canonically) the {\it Poitou-Tate exact sequence} \begin{center} \adjustbox{scale=0.97,center}{ \begin{tikzcd} 0\to \Hrm_\et^0(\Ocal_F[1/p],\Z_p(k)) \arrow[r,"\Pi_0"] & \prod_{v\in \Sigma}\Hrm^0_\et(F_v,\Z_p(k))\arrow[r]\arrow[d, phantom, ""{coordinate, name=W}] & \Hrm^2_\et(\Ocal_F[1/p],\Q_p/\Z_p(1-k))^\vee \arrow[dll,rounded corners=8pt,curvarr=W] \\ \Hrm_\et^1(\Ocal_F[1/p],\Z_p(k)) \arrow[r,"\Pi_1"] & \prod_{v\in \Sigma}\Hrm^1_\et(F_v,\Z_p(k))\arrow[r] \arrow[d, phantom, ""{coordinate, name=W}] & \Hrm^1_\et(\Ocal_F[1/p],\Q_p/\Z_p(1-k))^\vee \arrow[dll,rounded corners=8pt,curvarr=W] \\ \Hrm_\et^2(\Ocal_F[1/p],\Z_p(k)) \arrow[r,"\Pi_2"] & \prod_{v\in \Sigma}\Hrm^2_\et(F_v,\Z_p(k))\arrow[r] & \Hrm^0_\et(\Ocal_F[1/p],\Q_p/\Z_p(1-k))^\vee\to 0, \end{tikzcd}}\end{center} with the long exact sequence induced by taking homotopy groups of the fibration in $K(1)$-local $K$-theory \begin{center} \begin{tikzcd} \cdots\to\pi_l\,\mathrm{fib}(\kappa) \arrow[r] & \pi_l L_{K(1)}K(\Ocal_F[1/p])\arrow[r]\arrow[d, phantom, ""{coordinate, name=W}] & \prod_{v\in \Sigma}\pi_l L_{K(1)}K(F_v) \arrow[dll,rounded corners=8pt,curvarr=W] \\ \pi_{l-1}\,\mathrm{fib}(\kappa) \arrow[r] & \pi_{l-1}L_{K(1)}K(\Ocal_F[1/p])\arrow[r] & \prod_{v\in \Sigma}\pi_{l-1}L_{K(1)}K(F_v)\to\cdots.
\end{tikzcd} \end{center} This is due to Anderson duality, which provides an isomorphism \begin{align}\label{AndersonDuality} \left(\pi_{-1-l}\left(L_{K(1)}K(\Ocal_F[1/p])\otimes M_{\Q_p/\Z_p}\right)\right)^\vee\simeq \pi_{l+1}( I_{\Z_p}L_{K(1)}K(\Ocal_F[1/p])).\end{align} The right hand side can be identified with $\pi_{l}\,\mathrm{fib}(\kappa)$ by Theorem \ref{BMTheo}, and the first term can be compared with the Pontryagin dual of the corresponding \'etale cohomology group in the table above, thus giving the identification between both exact sequences. \end{rem} \subsection{The $K(1)$-local $K_\ast$-theoretic description of the special values} We can move on to applying all the machinery we have introduced to give another formula for the special values of the $p$-adic $L$-function, and the results we provide here are an extension, as well as an alternative proof, of a result by Hesselholt, who studied the case $F=\Q$ in \cite{Hess18}. \begin{cor}\label{hessgeneralization} Let $n$ be such that $n\equiv 0\mod[F(\mu_p):F]$. Then, $$|L_p(\psi,\chi_\cyc^n)|_p=\frac{\#\pi_{-2n}\,\mathrm{fib}(\kappa)}{\#\pi_{-1-2n}\,\mathrm{fib}(\kappa)}.$$ Furthermore, if $n \leq -1$, $$|L_p(\psi,\chi_\cyc^n)|_p=\frac{\#\pi_{-2n}\,\mathrm{fib}\left(\tr_p^\wedge\right)}{\#\pi_{-1-2n}\,\mathrm{fib}\left(\tr_p^\wedge\right)}.$$ \end{cor} \begin{proof} Let us start by observing that \begin{align*} \Hrm^0_\et(\Ocal_{F}[1/p],\Q_p/\Z_p(n))^\vee&\simeq\pi_{2n}L_{K(1)}\left(K(\Ocal_{F}[1/p])\otimes M_{\Q_p/\Z_p}\right)^\vee \\ &\simeq \pi_{-1-2n}\Omega I_{\Z_p}L_{K(1)}K(\Ocal_{F}[1/p])\\ &\simeq \pi_{-1-2n}\,\mathrm{fib}(\kappa), \end{align*} where the middle equality follows from (\ref{AndersonDuality}) above. On the other hand, and as mentioned in the previous section, the weak Leopoldt conjecture is true in the case of totally real and imaginary quadratic fields, and that is equivalent to the vanishing of $\Hrm^2_\et(\Ocal_F[1/p],\Q_p/\Z_p)$. Thanks to this, if we use the table above again, we obtain that \begin{align*} \Hrm^1_\et(\Ocal_F[1/p],\Q_p/\Z_p(n))^\vee &\simeq \pi_{2n-1}L_{K(1)}\left(K(\Ocal_F[1/p])\otimes M_{\Q_p/\Z_p}\right)^\vee\\ &\simeq \pi_{-2n}\,\mathrm{fib}(\kappa),\end{align*} and the identifications with the homotopy groups of $\mathrm{fib}(\tr_p^\wedge)$ come from Theorem \ref{BMTheo} above.
\end{proof} We can also use the $K(1)$-local $K$-theoretic Poitou-Tate duality of Blumberg-Mandell to give a description of the $p$-adic valuation of the special values without using the fiber. \begin{cor}\label{corollaryk1} Let $n$ be an integer with $n\equiv 0\mod[F(\mu_p):F]$. Then, $$ |L_p(\psi,\chi_\cyc^n)|_p=\frac{\#\pi_{-2n+1}L_{K(1)}K(\Ocal_F[1/p])}{\#\pi_{-2n}L_{K(1)}K(\Ocal_F[1/p])}\cdot\prod_{v\in\Sigma}\frac{\#\pi_{-2n}L_{K(1)}K(F_v)}{\#\pi_{-2n+1}L_{K(1)}K(F_v)}. $$ \end{cor} \begin{proof} We can use the table above, together with the fact that the \'etale cohomology group $\Hrm^2_\et(\Ocal_F[1/p],\Q_p/\Z_p)^\vee$ must vanish by the weak Leopoldt conjecture, to conclude that the classical long exact sequence on homotopy groups can be split as follows: \begin{center} \adjustbox{center}{ \begin{tikzcd} 0\to \pi_{1-2k}L_{K(1)}K(\Ocal_F[1/p])\arrow[r,"\Pi_1"] & \prod_{v\in \Sigma}\pi_{1-2k}L_{K(1)}K(F_v)\arrow[r] \arrow[d, phantom, ""{coordinate, name=W}] & \Hrm^1_\et(\Ocal_F[1/p],\Q_p/\Z_p(1-k))^\vee \arrow[dll,rounded corners=8pt,curvarr=W] \\ \pi_{2-2k}L_{K(1)}K(\Ocal_F[1/p]) \arrow[r,"\Pi_2"] & \prod_{v\in \Sigma}\pi_{2-2k}L_{K(1)}K(F_v)\arrow[r] & \Hrm^0_\et(\Ocal_F[1/p],\Q_p/\Z_p(1-k))^\vee\to 0. \end{tikzcd}} \end{center} Since all terms in the exact sequence are finite, we obtain the result. \end{proof} \subsection{A note on Hesselholt's expository note} According to Hesselholt, for $k>1$, $$\left|L_p\left(\psi,\chi_\cyc^{1-k}\right)\right|_p=\frac{\#\pi_{2k-2}\,\mathrm{fib}\left(\tr_p^\wedge:K(\Z;\Z_p)\to\TC(\Z;\Z_p)\right)}{\#\pi_{2k-3}\,\mathrm{fib}\left(\tr_p^\wedge:K(\Z;\Z_p)\to\TC(\Z;\Z_p)\right)}.$$ This formula is recovered for those values of $k$ in our result \ref{hessgeneralization}. The problem is that we would like to give the formula above in terms of the cyclotomic fiber for {\it all} $k$, instead of focusing on $k>1$.
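For bookkeeping purposes, note that if we match Hesselholt's variable with ours via $n=1-k$ (an identification we adopt here), then the range $k>1$ corresponds precisely to $n\leq -1$, and the homotopy degrees appearing in \ref{hessgeneralization} translate as
$$-2n=2k-2,\qquad -1-2n=2k-3,$$
so both statements concern the same homotopy groups of the fiber of the cyclotomic trace.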
To do so, observe that $$\left|L_p\left(F,\psi,\chi_\cyc^{n}\right)\right|_p=\frac{\# \Hrm^1_\et\left(\Ocal_F[1/p],\Q_p/\Z_p(n)\right)}{\#\Hrm^0_\et\left(\Ocal_F[1/p],\Q_p/\Z_p(n)\right)}=\frac{\# \Hrm^1_\et\left(\Ocal_F[1/p],\Q_p/\Z_p(n)\right)^\vee}{\#\Hrm^0_\et\left(\Ocal_F[1/p],\Q_p/\Z_p(n)\right)^\vee},$$ and let us also remember that, for all $n$, $$\Hrm^0_\et\left(\Ocal_F[1/p],\Q_p/\Z_p(n)\right)^\vee\simeq\pi_{-1-2n}\,\mathrm{fib}\left(L_{K(1)}\kappa:L_{K(1)}K(\Ocal_F[1/p])\to\prod_{v\in\Sigma}L_{K(1)}K(F_v)\right)$$ and $$\Hrm^1_\et\left(\Ocal_F[1/p],\Q_p/\Z_p(n)\right)^\vee\simeq\pi_{-2n}\,\mathrm{fib}\left(L_{K(1)}\kappa:L_{K(1)}K(\Ocal_F[1/p])\to\prod_{v\in\Sigma}L_{K(1)}K(F_v)\right).$$ On the other hand, the following diagram can be used to identify the fiber of $\kappa$ with the connective cover of the fiber of $\tr_p^\wedge$, where the equivalences are argued in \cite{Hess2018}: \begin{align*} \xymatrix{ \TC(\Z;\Z_p)\ar[r]^{\simeq } & \TC(\Z_p;\Z_p) \\ K(\Z;\Z_p) \ar[r]_{\kappa}\ar[u]^{\tr_p^\wedge} & K(\Z_p;\Z_p),\ar[u]_{\simeq \text{ in degrees }\geq0} } \end{align*} However, using Quillen's localization sequence will only give us information in degrees greater than or equal to $2$, and that is why we will only have the formula for positive values of $k$. Instead of following that path, we realize we also have the following diagram, obtained after $K(1)$-localization, and substituting $\Z$ and $\Z_p$ by their respective generalizations to $F$, namely \begin{align*} \xymatrix@C=4em{ & L_{K(1)}\TC(\Ocal_F)\ar[r]^-{\simeq\text{ by \cite{Hess2018}}} & \prod_{v\in\Sigma}L_{K(1)}\TC(\Ocal_v) & \prod_{v\in\Sigma}L_{K(1)}K(\Ocal_v) \ar[l]_{\simeq}^{(\ast)}\\ \mathrm{fib}\left(L_{K(1)}\kappa\right)\ar@{-->}[r] & L_{K(1)}K(\Ocal_F) \ar[r]_-{L_{K(1)}\kappa}\ar[u]^{L_{K(1)}\tr} \ar[d]_{\simeq}^{\text{ by \autocite[Thm. 1.1]{bhatt2020remarks}}} & \prod_{v\in\Sigma}L_{K(1)}K(\Ocal_v)\ar[u]_{\prod_{v\in\Sigma}L_{K(1)}\tr} \ar@{=}@/_1pc/[ur] \ar[d]_{\simeq}^{\text{ by \autocite[Thm. 1.1]{bhatt2020remarks}}} & \\ \mathrm{fib}\left(L_{K(1)}\tr\right) \ar@{-->}[ur]& L_{K(1)}K(\Ocal_F[1/p]) \ar[r] & \prod_{v\in\Sigma}L_{K(1)}K(F_v) & } \end{align*} where $(\ast)$ is an equivalence by \autocite[Cor. 1.3]{bhatt2020remarks}. With this, we obtain the desired equivalence $$\mathrm{fib}\left(L_{K(1)}\tr\right)\simeq\mathrm{fib}\left(L_{K(1)}\kappa\right),$$ which, when combined with \ref{hessgeneralization}, gives the following result. \begin{cor}\label{maincorollaryrealcase} Let $n$ be an integer with $n\equiv 0\mod[F(\mu_p):F]$. Then, $$\left|L_p(\psi,\chi_\cyc^n)\right|_p=\frac{\#\pi_{-2n}\,\mathrm{fib}\left(L_{K(1)}\tr:L_{K(1)}K(\Ocal_F)\to L_{K(1)}\TC(\Ocal_F)\right)}{\#\pi_{-1-2n}\,\mathrm{fib}\left(L_{K(1)}\tr:L_{K(1)}K(\Ocal_F)\to L_{K(1)}\TC(\Ocal_F)\right)}.$$ \end{cor} \newpage \section{The non-commutative and totally real case}\label{NCTR case} \subsection{Initial considerations} As in the commutative case, let $F_\infty|F$ be an arbitrary admissible $p$-adic Lie extension -- cf. \ref{tradmissible} -- with Galois group $G$. Following \cite{Coates2003}, there exists a closed normal subgroup $H$ such that $\Gamma:=G/H\simeq\Z_p$. \begin{rem} For simplicity and without loss of generality, we assume that $G$ contains no element of order $p$. This makes most discussions much simpler, given that under this hypothesis $\Lambda(G)$ has finite global dimension, and $G$ also has finite $p$-homological dimension, which agrees with the dimension of $G$ as a $p$-adic Lie group (cf. \autocite[Remark 1.4]{OnmainconjecturesinnoncommutativeIwasawatheoryandrelatedconjectures}). \end{rem} Let us define the following sets, following \autocite[Section 2]{coates2004gl2}: $$\Sfrak=\left\{f\in\Lambda(G):\Lambda(G)/(f)\in\Mod_{\Lambda(H)}^\mathrm{fg}\right\},\qquad \Sfrak^\ast =\bigcup_{n\geq 0} p^n\Sfrak,$$ where $\Mod^{\mathrm{fg}}_{\Lambda(H)}$ is the category of finitely generated $\Lambda(H)$-modules. From now on, we let $\Sfrak'\in\{\Sfrak,\Sfrak^\ast\}$. Then, the following properties are known: \begin{enumerate} \item \autocite[Proposition 2.3]{Coates2003} If $M$ is a finitely generated $\Lambda(G)$-module, then $M$ is finitely generated over $\Lambda(H)$ if and only if $M$ is $\Sfrak$-torsion.
\item \autocite[Theorem 2.4]{Coates2003} $\Sfrak'$ is multiplicatively closed, and is a (left or right) Ore set in $\Lambda(G)$. \item \autocite[Theorem 2.4]{Coates2003} The elements of $\Sfrak'$ are non-zero divisors in $\Lambda(G)$. \end{enumerate} Thus, one can define $\Mcal_{\Sfrak'}(G)$, the category of finitely generated $\Lambda(G)$-modules $M$ with the property that $\Lambda(G)_{\Sfrak'}\otimes_{\Lambda(G)} M$ vanishes or, equivalently, that are $\Sfrak'$-torsion. \begin{rem}\label{generalizedIwasawa} Generally, it is not known whether $X=X(F_\infty)$ lies in $\Mcal_{\Sfrak'}(G)$. For that reason, one works with extensions $F_\infty|F$ satisfying the {\it generalized Iwasawa conjecture}, meaning that there exists a finite extension $F'|F$ in $F_\infty$ such that $\Gal(F_\infty|F')$ is pro-$p$ and $X(F'^{\cyc})$ is a finitely generated $\Z_p$-module, or with the same hypotheses as Burns in \autocite[Theorem 9.1]{OnmainconjecturesinnoncommutativeIwasawatheoryandrelatedconjectures}. \end{rem} \subsection{$K$-theoretic characteristic elements} From now on, we use $\Lambda$ to refer to $\Lambda(G)$, unless otherwise stated. Before being able to state an Iwasawa main conjecture for the non-commutative case, the concept of a characteristic element must be introduced. To do so, recall that there is a localization sequence in $K$-theory of the form $$\cdots\to K_1(\Lambda)\to K_1(\Lambda_{\Sfrak'})\xrightarrow{\partial_G} K_0(\Lambda,\Lambda_{\Sfrak'})\to K_0(\Lambda)\to K_0(\Lambda_{\Sfrak'})\to 0,$$ where $K_0(\Lambda,\Lambda_{\Sfrak'})$ is the 0-th relative $K$-theory group of $\Lambda\to\Lambda_{\Sfrak'}$.
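To see what this machinery amounts to in the simplest commutative case (a sketch for orientation only), take $G=\Gamma\simeq\Z_p$ and $H=1$, so that $\Lambda=\Z_p\llbracket T\rrbracket$. A finitely generated $\Lambda$-module is then $\Sfrak$-torsion if and only if it is finitely generated over $\Z_p$, i.e., torsion with vanishing $\mu$-invariant, while passing to $\Sfrak^\ast$ allows arbitrary finitely generated torsion modules. Moreover, identifying $K_1$ of the commutative rings involved with their groups of units, the boundary map sends a non-zero divisor $f\in\Lambda_{\Sfrak^\ast}^\times$ to the class of its quotient module,
$$\partial_G(f)=\left[\Lambda/f\Lambda\right]\in K_0(\Lambda,\Lambda_{\Sfrak^\ast})\cong G_0(\Mcal_{\Sfrak^\ast}(G)),$$
so that a characteristic power series of a finitely generated torsion $\Lambda$-module $M$ in the classical sense gives rise to a preimage of $[M]$ under $\partial_G$.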
\begin{rem} As will be observed later, it is important to note that, as explained in \autocite[\S1.1.2]{OnmainconjecturesinnoncommutativeIwasawatheoryandrelatedconjectures}, under the hypothesis that $G$ has no element of order $p$, there is a natural isomorphism between $K_0(\Lambda,\Lambda_{\Sfrak'})$ and the Grothendieck group $G_0(\Mcal_{\Sfrak'}(G))$. \end{rem} As mentioned in \autocite[Remark 1.1]{OnmainconjecturesinnoncommutativeIwasawatheoryandrelatedconjectures}, under our hypotheses, the map $\partial_G$ is always surjective. For this reason, we can provide the following definition. \begin{dfn} Given $X\in K_0(\Lambda,\Lambda_{\Sfrak'})$, a {\it characteristic element} for $X$ is $\cfrak(X)\in K_1(\Lambda_{\Sfrak'})$ such that $\partial_G\cfrak(X)=X$. \end{dfn} \subsection{Evaluation at $K$-theoretic characteristic elements} This section is explained in more generality in \autocite[\S 1.1.3]{OnmainconjecturesinnoncommutativeIwasawatheoryandrelatedconjectures}. Of course, in order to link this definition to the point-wise property in the Iwasawa main conjectures, we need to define a way to evaluate the elements $\xi\in K_1(\Lambda_{\Sfrak'})$. To this end, let $\rho:G\to \GL_n(\Ocal)$ be a (continuous) Artin representation of $G$, where $\Ocal$ is the valuation ring of a finite extension of $\Q_p$. It is clear that $\rho$ extends canonically to a map $\Lambda_{\Sfrak'}\to\mathrm{M}_n(Q)$, where $Q=\mathrm{Frac}(\Lambda)$. If we then apply to it the functor $K_1$, we obtain a map $$\rho_\ast:K_1(\Lambda_{\Sfrak'})\to Q^\times.$$ Localizing at the kernel $\kappa=\ker(\pi)$ of the projection map $\pi:\Lambda\to\Z_p$ produces a map $\pi_\kappa:\Lambda_\kappa\to\Q_p$.
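In the commutative rank-one case, the evaluation formalized in the next definition is simply evaluation of power series (a sketch under the standard identifications): for $G=\Gamma$ and $\rho=\idbf$ the trivial representation, an element $\xi\in K_1(\Lambda_{\Sfrak})\simeq\Lambda_{\Sfrak}^\times$ is represented by a quotient $f/g$ of power series in $T=\gamma-1$, the kernel of the projection $\pi:\Lambda\to\Z_p$ is $\kappa=(T)$, and
$$\pi_\kappa\left(\frac{f}{g}\right)=\frac{f(0)}{g(0)}\in\Q_p,$$
which makes sense precisely when, after cancelling common factors, $g(0)\neq 0$; otherwise the value is declared to be $\infty$.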
\begin{dfn} Following the notation from the previous paragraph, and if $\xi\in K_1(\Lambda_{\Sfrak'})$ and $\rho\in\Art(G)$, the {\it image of $\xi$ at $\rho$} is defined by $$ \xi(\rho)=\begin{cases} \pi_\kappa\left(\rho_\ast(\xi)\right), & \rho_\ast(\xi)\in\Lambda_\kappa,\\ \infty & \text{otherwise.} \end{cases} $$ \end{dfn} \subsection{$p$-adic Artin $L$-functions} During this subsection, we follow \autocite[\S9.1]{OnmainconjecturesinnoncommutativeIwasawatheoryandrelatedconjectures} almost exactly. Let $\rho$ be a finite-dimensional complex representation of any finite quotient of the Galois group of a compact $p$-adic Lie extension $F_\infty|F$ that is unramified outside a finite set of places $\Sigma$ of $F$. Denote by $\Art^+(G)$ the subset of $\Art(G)$ of representations that are trivial on the Galois group of some intermediate field $F'$ of $F_\infty|F$ that is totally real. Also, let $\Sigma'$ be a finite set of places of $F$ and let $L^{\Sigma'}(s,\rho)$ be the Artin $L$-function of $\rho$ truncated by removing the Euler factors of all non-archimedean places in $\Sigma'$. \begin{dfn} For each $\rho\in\Art^+(G)$, the {\it $\Sigma$-truncated $p$-adic Artin $L$-function} $L_p^\Sigma(s,\rho)$ is defined as the unique $p$-adic meromorphic function on $\Z_p$ such that for all integers $n<0$ and all field isomorphisms $j:\C_p\to\C$, $$L^{\Sigma_p}\left(n,(\rho\otimes \omega_F^{n-1})^j\right) = j\left(L_p^\Sigma(n,\rho)\right),$$ where $\Sigma_p$ is the union of $\Sigma$ and all places of $F$ that lie above $p$, and $\omega_F$ denotes the Teichm\"uller character. \end{dfn} \subsection{Twisting in the non-commutative setting} Let $X$ be a perfect complex of $\Lambda$-modules, and let $\rho:G\to\GL_n(\Ocal)$ be a continuous representation of $G$.
Then, the complex $X(\rho):=\Ocal^n\otimes_{\Z_p}X$ can be endowed with a $G$-action by setting $$g(a\otimes x):=\rho(g)(a)\otimes g(x),\qquad g\in G, a\in\Ocal^n, x\in X^j.$$ \subsection{Bockstein cohomology} As mentioned in \autocite[\S3.1]{burns2005leading}, given a bounded complex of $\Lambda$-modules $X$, the choice of a generator $\gamma$ of $\Gamma$ induces a morphism $\theta_\gamma$ in $D^\flat(\Lambda)$ of the form $X\to X[1]$. Given a continuous representation $\rho:G\to\GL_n(\Ocal)$, where $\Ocal$ denotes the valuation ring of some finite extension $L$ of $\Q_p$, $\theta_\gamma$ induces $T_\rho\otimes_{\Lambda}^\Lbf X\to T_\rho\otimes_{\Lambda}^\Lbf X[1]$. \begin{dfn} The homomorphisms induced by $\theta_\gamma$ $$\beta_\nu:\Tor_\nu^{\Lambda}(T_\rho, X)\to\Tor_{\nu-1}^{\Lambda}(T_\rho, X)$$ are referred to as the {\it Bockstein homomorphisms} of $(X,T_\rho,\gamma)$. Moreover, these homomorphisms induce a complex, the {\it Bockstein complex associated to} $X\in D^\mathrm{perf}(\Lambda(G))$, and $X$ is said to be {\it semi-simple at $\rho$} if the cohomology of the complex is $\Z_p$-torsion in each degree. Additionally, the {\it Bockstein cohomology groups} $\Hrm_\beta^i(G,X(\rho))$ are defined as the cohomology groups of the complex given by the homomorphisms $$\beta_{-\nu}:\Tor_{-\nu}^{\Lambda}(T_\rho, X)\to\Tor_{-\nu-1}^{\Lambda}(T_\rho, X).$$ \end{dfn} \subsection{The totally real non-commutative IMC} Following \autocite[\S 9.2]{OnmainconjecturesinnoncommutativeIwasawatheoryandrelatedconjectures}, given a $G$-module $M$, $M^\sharp$ is the $G_F$-module on which $\sigma\in G_F$ acts by $\chi(\sigma)\overline{\sigma}^{-1}$, where $\overline{\sigma}$ is the image of $\sigma$ in $G$. We then define the derived complex $$C_{\infty,\Sigma}:=\R\Gamma_{\et, c}\left(\Spec(\Ocal_{F_\infty}[1/p]), \Lambda^\sharp(1)\right)\in D^\flat (\Lambda)$$ of \'{e}tale cohomology with compact support, which is in fact perfect as stated in the citation.
Moreover, according to [op.~cit., \S 9.3], the $n$-th cohomology groups of $C_{\infty,\Sigma}$ can be canonically identified as $$ \Hrm^n(C_{\infty,\Sigma})\cong\begin{cases} X(F_\infty), & n = 2, \\ \Z_p, & n = 3, \\ 0, & n\neq 2, 3. \end{cases} $$ \begin{thm} \label{NCTRIMC} {\rm (\autocite[Thm. 9.1]{OnmainconjecturesinnoncommutativeIwasawatheoryandrelatedconjectures})} There exists an element $\Lcal_p\in K_1(\Lambda_\Sfrak)$ such that \begin{enumerate} \item For all $\rho\in\Art(G)$, $$\Lcal_p(\rho\otimes \chi) = L_p^{\Sigma}(1,\rho\otimes\chi)\mod\Q_p^\times,$$ for almost all $\idbf\neq\chi\in\hat{\Gamma}$. \item The composition of the boundary map $\partial_G:K_1(\Lambda_\Sfrak)\to K_0(\Lambda, \Lambda_{\Sfrak^\ast})$ with the isomorphism $K_0(\Lambda, \Lambda_{\Sfrak^\ast})\cong G_0(\Mcal_{\Sfrak^\ast}(G))$ sends $\Lcal_p\mapsto [X(F_\infty)]-[\Z_p]$. \end{enumerate} \end{thm} \subsection{The Bockstein-cohomological Euler characteristic formula} Let $X\in\Mcal_{\Sfrak^\ast}(G)$ be semi-simple at $\rho\in\mathrm{Art}(G)$ and such that $\Q_p\otimes_{\Z_p}X(\rho)$ is acyclic. Then, it follows from \autocite[Proposition 3.16]{burns2005leading} that there is a commutative diagram $$ \xymatrix@C=3em@R=2em{ K_1(\Lambda, \Sigma_X)\ar[r]^-{\ch_{G,X}} \ar[d]_-{(-)(\rho)} & K_1(\Lambda_{\Sfrak^\ast}) \ar[d]^-{(-)(\rho)} \\ L^\times \ar@{=}[r] & L^\times, } $$ where the map $\ch_{G,X}$ and the subcategory $\Sigma_X$ of $C^\perf(\Lambda)$ are constructed in \autocite[\S 3.2.3]{burns2005leading}. For a detailed description of the map, we refer the reader to \autocite[Appendix A]{burns2011descent}. The following statement is the non-commutative counterpart of Corollary \ref{maincorollaryrealcase}, that is, the main result of the previous section.
\begin{cor} \label{BurnsEulerCharFla} {\rm (\autocite[Proposition 3.19]{burns2005leading} corrected using \autocite[Appendix C]{burns2011descent})} If $X$ is semi-simple at $\rho$, then for any $\xi\in\mathrm{Im}(\ch_{G,X})\leq K_1(\Lambda_{\Sfrak^\ast})$, $$\left|\xi(\rho)\right|_p^{[L:\Q_p]}=\prod_{\nu\in\Z}\sharp\Hrm^\nu_\beta\left(G,X(\mathrm{Ad}(\rho))\right)^{(-1)^\nu}.$$ \end{cor} \begin{rem} In the classical, commutative case, one has that $G$ is of rank one, and $\rho=\chi_\cyc$. Given a finitely generated and torsion $\Lambda$-module $M$, the differentials induced by the Hochschild-Serre spectral sequence $$ \Hrm^i(G,M^\vee)\to\Hrm^{i+1}(G,M^\vee) $$ are manipulated in order to obtain the equality in \ref{etale_formula}. According to the result above, however, one only needs to use a projective resolution $P$ of $M^\vee$ to obtain the same formula, or simply observe that $$(d^i)^\vee = \beta_{i+1}:\Tor_{i+1}^{\Lambda}(\Z_p,P)\to\Tor_i^{\Lambda}(\Z_p,P).$$ \end{rem} \begin{cor} For all $\rho\in\Art(G)$ and almost all $\idbf\neq \chi\in\hat{\Gamma}$, $$\left|\Lcal_p(\rho\otimes \chi)\right|_p =\left|L_p^{\Sigma}(1,\rho\otimes\chi)\right|_p =\prod_{\nu\in\Z}\sharp\Hrm^\nu_\beta\left(G, C_{\infty,\Sigma}\left(\mathrm{Ad}(\rho\otimes\chi)\right)\right)^{(-1)^\nu}.$$ \end{cor} \begin{proof} The proof of this result follows directly from combining the IMC in the non-commutative setting \ref{NCTRIMC} together with \ref{BurnsEulerCharFla}. \end{proof} \newpage \printbibliography \end{document}
2206.14653v1
http://arxiv.org/abs/2206.14653v1
Asymptotic analysis of Emden-Fowler type equation with an application to power flow models
\documentclass[a4paper,10pt]{article} \usepackage{a4wide} \usepackage{amsmath,amssymb} \usepackage[english]{babel} \usepackage{multirow} \usepackage{verbatim} \usepackage{float} \usepackage[section]{placeins} \usepackage{graphicx} \usepackage{pdfpages} \usepackage{xargs} \usepackage{mathtools} \usepackage{dsfont} \usepackage{textcomp} \usepackage[colorinlistoftodos,prependcaption,textsize=tiny]{todonotes} \usepackage{pgfplots} \pgfplotsset{compat = 1.14} \usepackage{tikz-network} \usetikzlibrary{positioning} \usepackage[title]{appendix} \bibliographystyle{amsplain} \usepackage{booktabs} \usepackage{cite} \usepackage{subcaption} \usepackage{pifont} \DeclarePairedDelimiter{\ceil}{\lceil}{\rceil} \DeclarePairedDelimiter{\floor}{\lfloor}{\rfloor} \newcommand*\CHECK{\ding{51}} \usepackage{parskip} \setlength{\parskip}{15pt} \definecolor{TomColBack}{HTML}{F62217} \definecolor{TomCol}{HTML}{16EAF5} \newcommand{\Tom}[1]{\todo[linecolor = TomCol, backgroundcolor = TomColBack!30, bordercolor = TomColBack]{#1}} \usepackage{fancyhdr} \setlength{\headheight}{15.0 pt} \setlength{\headsep}{0.5 cm} \pagestyle{fancy} \fancyfoot[L]{\emph{}} \fancyfoot[C]{} \fancyfoot[R]{\thepage} \renewcommand{\footrulewidth}{0.4pt} \newcommand\smallO{ \mathchoice {{\scriptstyle\mathcal{O}}} {{\scriptstyle\mathcal{O}}} {{\scriptscriptstyle\mathcal{O}}} {\scalebox{.6}{$\scriptscriptstyle\mathcal{O}$}} } \fancyhead[L]{\emph{}} \fancyhead[C]{} \renewcommand{\headrulewidth}{0.4pt} \usepackage{tikz} \usetikzlibrary{arrows} \usetikzlibrary{positioning} \usetikzlibrary{decorations} \definecolor{mylilas}{HTML}{CC0099} \usepackage{listings} \lstset{language=Matlab, basicstyle=\color{black}, breaklines=true, morekeywords={matlab2tikz}, keywordstyle=\color{blue}, morekeywords=[2]{1}, keywordstyle=[2]{\color{black}}, identifierstyle=\color{black}, stringstyle=\color{mylilas}, commentstyle=\color{mygreen}, showstringspaces=false, numbers=left, numberstyle={\tiny \color{black}}, numbersep=9pt, 
emph=[1]{for,while,if,else,elseif,end,break},emphstyle=[1]\color{red}, emph=[2]{gamma,format,clear}, emphstyle=[2]\color{black}, emph=[3]{compact,rat,all,long}, emphstyle=[3]\color{mylilas}, } \definecolor{mygreen}{rgb}{0,0.6,0} \definecolor{mybackground}{HTML}{F7BE81} \usepackage{caption} \DeclareCaptionFont{white}{\color{white}} \DeclareCaptionFormat{listing}{\colorbox{gray}{\parbox{\textwidth}{#1#2#3}}} \captionsetup[lstlisting]{format=listing,labelfont=white,textfont=white} \setlength\parindent{0pt} \usepackage{hyperref} \hypersetup{ pagebackref=true, colorlinks=true, citecolor=red, linkcolor=blue, linktoc=page, urlcolor=blue, pdfauthor= Mark Christianen, pdftitle=Stability-EV } \usepackage{aligned-overset} \definecolor{redText}{rgb}{1,0,0} \newcommand{\red}[1]{\textcolor{redText}{#1}} \definecolor{blueText}{HTML}{0080FF} \newcommand{\blue}[1]{\textcolor{blueText}{#1}} \definecolor{greenText}{HTML}{00e600} \newcommand{\green}[1]{\textcolor{greenText}{#1}} \def\changemargin#1#2{\list{}{\rightmargin#2\leftmargin#1}\item[]} \let\endchangemargin=\endlist \usetikzlibrary{chains,shapes.multipart} \usetikzlibrary{shapes,calc} \usetikzlibrary{automata,positioning} \usepackage{cases} \definecolor{myred}{RGB}{220,43,25} \definecolor{mygreen}{RGB}{0,146,64} \definecolor{myblue}{RGB}{0,143,224} \definecolor{MarkCol}{HTML}{B216FA} \definecolor{MarkColBack}{HTML}{5FFB17} \newcommandx{\Mark}[2][1=]{\todo[linecolor = MarkCol, backgroundcolor = MarkColBack!30, bordercolor = MarkColBack,#1]{#2}} \tikzset{ myshape/.style={ rectangle split, minimum height=1.5cm, rectangle split horizontal, rectangle split parts=8, draw, anchor=center, }, mytri/.style={ draw, shape=isosceles triangle, isosceles triangle apex angle=60, inner xsep=0.65cm } } \usepackage{algorithm} \usepackage[noend]{algpseudocode} \usepackage{amsthm} \newtheorem{theorem}{Theorem}[section] \usepackage{titling} \predate{} \postdate{} \newtheorem{lemma}{Lemma}[section] \newtheorem{proposition}{Proposition}[section] 
\newtheorem{corollary}{Corollary}[section] \newtheorem{example}{Example}[section] \newtheorem{remark}{Remark}[section] \newtheorem{assumption}{Assumption}[section] \DeclareMathOperator*{\argmax}{arg\,max} \usepackage{color, colortbl} \usepackage{authblk} \numberwithin{equation}{section} \usepackage{enumitem} \newtheorem{definition}{Definition} \newcommand{\Lim}[1]{\raisebox{0.5ex}{\scalebox{0.8}{$\displaystyle \lim_{#1}\;$}}} \newcommand{\Sup}[1]{\raisebox{0.5ex}{\scalebox{0.8}{$\displaystyle \sup_{#1}\;$}}} \begin{document} \author[1]{\small Christianen, M.H.M.} \author[1]{\small Janssen, A.J.E.M.} \author[1,2]{\small Vlasiou, M.} \author[1,3]{\small Zwart, B.} \affil[1]{\footnotesize Eindhoven University of Technology} \affil[2]{\footnotesize University of Twente} \affil[3]{\footnotesize Centrum Wiskunde \& Informatica} \title{Asymptotic analysis of Emden-Fowler type equation with an application to power flow models} \date{} \maketitle \begin{abstract} Emden-Fowler type equations are nonlinear differential equations that appear in many fields such as mathematical physics, astrophysics and chemistry. In this paper, we perform an asymptotic analysis of a specific Emden-Fowler type equation that emerges in a queuing theory context as an approximation of voltages under a well-known power flow model. Thus, we place Emden-Fowler type equations in the context of electrical engineering. We derive properties of the continuous solution of this specific Emden-Fowler type equation and study the asymptotic behavior of its discrete analog. We conclude that the discrete analog has the same asymptotic behavior as the classical continuous Emden-Fowler type equation that we consider. 
\end{abstract} \section{Introduction} Many problems in mathematical physics, astrophysics and chemistry can be modeled by an Emden-Fowler type equation of the form \begin{align} \frac{d}{dt}\left(t^{\rho}\frac{du}{dt} \right)\pm t^{\sigma}h(u) = 0,\label{eq:general_fowler_emden} \end{align} where $\rho,\sigma$ are real numbers, the function $u:\mathbb{R}\to\mathbb{R}$ is twice differentiable and $h: \mathbb{R}\to\mathbb{R}$ is some given function of $u$. For example, the choice $h(u)=u^n$ with $n\in\mathbb{R}$, $\rho=1$, $\sigma=0$ and the plus sign in \eqref{eq:general_fowler_emden} yields an equation that is important in the study of the thermal behavior of a spherical cloud of gas acting under the mutual attraction of its molecules and subject to the classical laws of thermodynamics \cite{Bellman1953, Davis}. Another example is known as \emph{Liouville's equation}, which has been studied extensively in mathematics \cite{Dubrovin1985}. This equation can be reduced to an Emden-Fowler type equation with $h(u)=e^u$, $\rho = 1,\sigma=0$ and the plus sign \cite{Davis}. For more information on the various applications of Emden-Fowler type equations, we refer the reader to \cite{Wong1975}. In this paper, we study the Emden-Fowler type equation with $h(u) = u^{-1}$, $\rho = 0$, $\sigma = 0$, the minus sign in \eqref{eq:general_fowler_emden}, and initial conditions $u(0)=k^{-1/2}, u'(0)=k^{-1/2}w$ for $w\geq 0$. For a constant $k>0$, the change of variables $u=k^{-1/2}f$ turns this equation into \begin{align} \frac{d^2f}{dt^2} = \frac{k}{f},\quad t\geq 0; \quad f(0)=1,f'(0)=w.\label{eq:voltages_approx} \end{align} This specific Emden-Fowler type equation \eqref{eq:voltages_approx} arises in a queuing model \cite{Christianen2021} that describes the queue of consumers (e.g.\ electric vehicles (EVs)) connected to the power grid. 
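Equation \eqref{eq:voltages_approx} has no elementary closed-form solution (cf.\ Section \ref{SEC:ASYMP_F(T)}), but it is straightforward to integrate numerically. The following minimal sketch (in Python; the values $k=1$, $w=0$ and the step size are illustrative choices, not values fixed by the model) uses a classical fourth-order Runge--Kutta scheme and monitors the conserved quantity $(f')^2-2k\ln f = w^2$, obtained by multiplying the equation by $f'$ and integrating:

```python
import math

# Sketch: integrate f'' = k / f, f(0) = 1, f'(0) = w with classical RK4.
# k, w and dt are illustrative choices, not values fixed by the model.
def integrate(k, w, t_end, dt=1e-3):
    """Return (f(t_end), f'(t_end))."""
    f, fp = 1.0, w
    for _ in range(int(round(t_end / dt))):
        k1f, k1p = fp, k / f
        k2f, k2p = fp + 0.5 * dt * k1p, k / (f + 0.5 * dt * k1f)
        k3f, k3p = fp + 0.5 * dt * k2p, k / (f + 0.5 * dt * k2f)
        k4f, k4p = fp + dt * k3p, k / (f + dt * k3f)
        f += dt * (k1f + 2 * k2f + 2 * k3f + k4f) / 6
        fp += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
    return f, fp

if __name__ == "__main__":
    k, w = 1.0, 0.0
    f, fp = integrate(k, w, t_end=10.0)
    # (f')^2 - 2 k ln(f) = w^2 is conserved along the flow; use it as a check.
    print(f, fp, fp * fp - 2.0 * k * math.log(f))
```

For $k=1$, $w=0$ the solution already grows well beyond the linear trend on $[0,10]$, in line with the superlinear asymptotics derived in Section \ref{SEC:ASYMP_F(T)}.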
The distribution of electric power to consumers leads to a resource allocation problem which must be solved subject to a constraint on the voltages in the network. These voltages are modeled by a power flow model known as the Distflow model; see Section \ref{subsec:background_voltages} for background. The Distflow model equations are given by a discrete version of the nonlinear differential equation \eqref{eq:voltages_approx} and can be described as \begin{align} V_{j+1}-2V_j+V_{j-1} = \frac{k}{V_j},\quad j=1,2,\ldots; \quad V_0 = 1, V_1 = 1+k.\label{eq:voltages_distflow} \end{align} In this paper, we study the asymptotic behavior and associated properties of the solution of \eqref{eq:voltages_approx} using differential and integral calculus, and validate the analysis numerically; that is, we show that the solutions of \eqref{eq:voltages_approx} have the asymptotic behavior \begin{align} f(t)\sim t\left(2k\ln(t)\right)^{1/2},\quad t\to\infty,\label{eq:continuous_asympt_behavior} \end{align} which can be used in the study of any of the aforementioned resource allocation problems. It is natural to expect that the discrete version \eqref{eq:voltages_distflow} of the Emden-Fowler type equation has asymptotic behavior of the form \eqref{eq:continuous_asympt_behavior} as well. However, proving \eqref{eq:discrete_asympt_behavior} below is considerably more challenging than in the continuous case, and this is the main technical challenge addressed in this work. We show that the asymptotic behavior of the discrete recursion \eqref{eq:voltages_distflow} is \begin{align} V_j \sim j\left(2k\ln(j)\right)^{1/2},\quad j\to\infty.\label{eq:discrete_asympt_behavior} \end{align} A large number of papers deal with various properties of solutions of Emden-Fowler differential equations \eqref{eq:general_fowler_emden}, especially in the cases $h(u)=u^n$ and $h(u)=\exp(nu)$ for $n\geq 0$. 
In this setting, for the asymptotic properties of solutions of an Emden-Fowler equation, we refer to \cite{Bellman1953}, \cite{Wong1975} and \cite{Fowler1930}. To the best of our knowledge, \cite{Mehta1971} is the only work that discusses asymptotic behavior in the case $n=-1$; however, it does not concern the same asymptotic behavior that we study in this paper. More precisely, the authors of \cite{Mehta1971} study the more general Emden-Fowler type equation with $h(u)=u^n,\ n\in\mathbb{R},\ \rho+\sigma = 0$ and the minus sign in \eqref{eq:general_fowler_emden}. In \cite{Mehta1971}, the more general equation appears in the context of the theory of diffusion and reaction governing the concentration $u$ of a substance disappearing by an isothermal reaction at each point $t$ of a slab of catalyst. When such an equation is normalized so that $u(t)$ is the concentration as a fraction of the concentration outside of the slab and $t$ the distance from the central plane as a fraction of the half thickness of the slab, the parameter $\sqrt{k}$ may be interpreted as the ratio of the characteristic reaction rate to the characteristic diffusion rate. This ratio is known in the chemical engineering literature as the Thiele modulus. In this context, it is natural to keep the range of $t$ finite and solve for the Thiele modulus as a function of the concentration of the substance $u$. Therefore, \cite{Mehta1971} studies the more general Emden-Fowler type equation for $u$ as a function of $\sqrt{k}$ and investigates asymptotic properties of the solution as $k\to\infty$. In contrast, we solve an Emden-Fowler equation in the special case $n=-1$ for any given Thiele modulus $k$, and study what happens to the concentration $u(t)$ as $t$ goes to infinity, rather than as $k$ goes to infinity. 
Although the literature devoted to continuous Emden-Fowler equations and their generalizations is very rich, there are not many papers related to the discrete Emden-Fowler equation \eqref{eq:voltages_distflow} or to more general second-order non-linear discrete equations of Emden-Fowler type in the following sense. Let $j_0$ be a natural number and let $\mathbb{N}(j_0)$ denote the set of all natural numbers greater than or equal to a fixed integer $j_0$, that is, \begin{align*} \mathbb{N}(j_0):=\{j_0,j_0+1,\ldots\}. \end{align*} Then, a second-order non-linear discrete equation of Emden-Fowler type \begin{align} \Delta^2 u(j)\pm j^{\alpha}u^m(j) = 0,\label{eq:general_discrete_emden_fowler} \end{align} is studied, where $u:\mathbb{N}(j_0)\to\mathbb{R}$ is an unknown solution, $\Delta u(j):=u(j+1)-u(j)$ is its first-order forward difference, $\Delta^2 u(j):= \Delta(\Delta u(j))=u(j+2)-2u(j+1)+u(j)$ is its second-order forward difference, and $\alpha,m$ are real numbers. A function $u^*:\mathbb{N}(j_0)\to\mathbb{R}$ is called a solution of \eqref{eq:general_discrete_emden_fowler} if the equality \begin{align*} \Delta^2 u^*(j)\pm j^{\alpha}(u^*(j))^m = 0 \end{align*} holds for every $j\in\mathbb{N}(j_0)$. The work in this area focuses on finding conditions that guarantee the existence of a solution of such discrete equations. In \cite{Diblik2009}, the authors consider the special case of \eqref{eq:general_discrete_emden_fowler} with $\alpha = -2$, write it as a system of two difference equations, and prove a general theorem giving sufficient conditions for the existence of at least one solution. In \cite{Akin-Bohnera2003, Erbe2012}, the authors replace the term $j^{\alpha}$ in \eqref{eq:general_discrete_emden_fowler} by $p(j)$, where the function $p(j)$ satisfies some technical conditions, and find conditions that guarantee the existence of a non-oscillatory solution. 
In \cite{Astashova2021,Migda2019}, the authors find conditions under which the nonlinear discrete equation \eqref{eq:general_discrete_emden_fowler}, with $m$ of the form $p/q$ for integers $p$ and $q$ such that the difference $p-q$ is odd, has solutions whose asymptotic behavior as $j\to\infty$ is similar to a power-type function, that is, \begin{align*} u(j)\sim a_{\pm}j^{-s},\quad j\to\infty, \end{align*} for constants $a_{\pm}$ and $s$ defined in terms of $\alpha$ and $m$. However, we study the case $m=-1$, which does not meet the condition that $m$ is of the form $p/q$ with $p-q$ odd. The paper is structured as follows. In Section \ref{subsec:background_voltages}, we present the application that motivated our study of the particular equations \eqref{eq:voltages_approx} and \eqref{eq:voltages_distflow}. We present the main results in two separate sections. In Section \ref{SEC:ASYMP_F(T)}, we present the asymptotic behavior and associated properties of the continuous solution of the differential equation \eqref{eq:voltages_approx}, while in Section \ref{SEC:DISCRETE_RESULTS}, we present the asymptotic behavior of the discrete recursion \eqref{eq:voltages_distflow}. The proofs of the main results in the continuous case (except for the results of Section \ref{SUBSEC:ASSOCIATED_PROPERTIES}) and in the discrete case can be found in Sections \ref{SEC:PROOFS_CONTINUOUS} and \ref{sec:proofs_discrete}, respectively. We finish the paper with a conclusion in Section \ref{sec:conclusion}. In the appendices, we gather the proofs of the results in Section \ref{SUBSEC:ASSOCIATED_PROPERTIES}. 
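Before turning to the application, the claimed discrete asymptotics \eqref{eq:discrete_asympt_behavior} are easy to probe experimentally: one can iterate \eqref{eq:voltages_distflow} directly and track the ratio $V_j / \bigl(j(2k\ln(j))^{1/2}\bigr)$. A minimal sketch (in Python; the value $k=1$ and the chosen indices are illustrative, not fixed by the model):

```python
import math

# Iterate the recursion V_{j+1} = 2 V_j - V_{j-1} + k / V_j with V_0 = 1,
# V_1 = 1 + k, and compare V_j with the claimed growth j * sqrt(2 k ln j).
def ratio_at(j_max, k=1.0):
    v_prev, v = 1.0, 1.0 + k          # V_0, V_1
    for _ in range(j_max - 1):        # after the loop, v equals V_{j_max}
        v_prev, v = v, 2.0 * v - v_prev + k / v
    return v / (j_max * math.sqrt(2.0 * k * math.log(j_max)))

if __name__ == "__main__":
    for j in (10, 10**2, 10**3, 10**4, 10**5):
        # The ratio approaches 1 only slowly: the error is of order ln ln j / ln j.
        print(j, ratio_at(j))
```

Even at $j=10^5$ the ratio is typically still a few percent away from 1, reflecting the slowly decaying $\ln\ln j/\ln j$ error term in \eqref{eq:discrete_asympt_behavior}.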
\section{Background on motivational application}\label{subsec:background_voltages} Equation \eqref{eq:voltages_approx} emerges in the process of charging electric vehicles (EVs) by considering their random arrivals, their stochastic demand for energy at charging stations, and the characteristics of the electricity \emph{distribution network}. This process can be modeled as a queue, with EVs representing \emph{jobs}, and charging stations classified as \emph{servers}, constrained by the physical limitations of the distribution network \cite{Aveklouris2019b,Christianen2021}. An electric grid is a connected network that transfers electricity from producers to consumers. It consists of generating stations that produce electric power, high-voltage transmission lines that carry power from distant sources to demand centers, and distribution lines that connect individual customers, e.g., houses, charging stations, etc. We focus on a network that connects a generator to charging stations with only distribution lines. Such a network is called a distribution network. In a distribution network, distribution lines have an impedance, which results in voltage loss during transportation. Controlling the voltage loss ensures that every customer receives safe and reliable energy \cite{Kerstinga}. Therefore, an important constraint in a distribution network is the requirement of keeping voltage drops on a line under control. In our setting, we assume that the distribution network, consisting of one generator, several charging stations and distribution lines with the same physical properties, has a line topology. The generator that produces electricity is called the \emph{root node}. Charging stations consume power and are called the \emph{load nodes}. Thus, we represent the distribution network by a graph (here, a line) with a root node, load nodes, and edges representing the distribution lines. Furthermore, we assume that EVs arrive at the same rate at each charging station. 
In order to model the power flow in the network, we use an approximation of the alternating current (AC) power flow equations \cite{Molzahn2019}. These power flow equations characterize the steady-state relationship between power injections at each node, the voltage magnitudes, and phase angles that are necessary to transmit power from generators to load nodes. We study a load flow model known as the \emph{branch flow model} or the \emph{Distflow model} \cite{Low2014d,BaranWu1989}. Due to the specific choice for the network as a line, the same arrival rate at all charging stations, distribution lines with the same physical properties, and the voltage drop constraint, the power flow model has a recursive structure, that is, the voltages at nodes $j=0,\ldots,N-1$, are given by recursion \eqref{eq:voltages_distflow}. Here, $N$ is the root node, and $V_0=1$ is chosen as normalization. This recursion leads to real-valued voltages and ignores line reactances and reactive power, which is a reasonable assumption in distribution networks. We refer to \cite{Christianen2021} for more detail. \section{Main results of continuous Emden-Fowler type equation}\label{SEC:ASYMP_F(T)} In this section, we study the asymptotic behavior of the solution $f$ of \eqref{eq:voltages_approx}. To do so, we present in Lemma \ref{lemma:solution_f} the solution of a more general differential equation. Namely, we consider a more general initial condition $f(0)=y>0$. The solution $f$ presented in Lemma \ref{lemma:solution_f} allows us to study the asymptotic behavior of $f_0(x)$, i.e., the solution of the differential equation in Lemma \ref{lemma:solution_f} where $k=1, y=1$ and $w=0$, or in other words, the solution of the differential equation $f''(x)=1/f(x)$ with initial conditions $f(0)=1$ and $f'(0)=0$; see Theorem \ref{THM:LIMITING_BEHAVIOR}. We can then derive the asymptotic behavior of $f$; see Corollary \ref{corollary:asymp_f}. 
The following theorem provides the limiting behavior of $f_0(x)$, i.e., the solution of Equation \eqref{eq:voltages_approx} where $k=1, y=1$ and $w=0$. \begin{theorem} Let $f_0(x)$ be the solution of \eqref{eq:voltages_approx} for $k=1, y=1$ and $w=0$. The limiting behavior of the function $f_0(x)$ as $x\to\infty$ is given by \begin{align*} f_0(x) = z(\ln(z))^{\frac{1}{2}}\left[1+\mathcal{O}\left(\frac{\ln(\ln(z))}{\ln(z)} \right) \right], \end{align*} where $z=x\sqrt{2}$. \label{THM:LIMITING_BEHAVIOR} \end{theorem} We first derive an implicit solution to Equation \eqref{eq:voltages_approx} where $k=1, y=1$ and $w=0$. Namely, we derive $f_0(x)$ in terms of a function $U(x)$; cf.\ Lemma \ref{lemma:solution_f}. We show, using Lemma \ref{lemma:ineq_I(y)}, that we can derive an approximation of $U(x)$ by iterating the following equation: \begin{align} \frac{\exp(U^2)-1}{2U} = \frac{x}{\sqrt{2}}.\label{eq:bound_iterative_method_behavior} \end{align} We can then use this approximation of $U(x)$ in the implicit solution of the differential equation to derive the asymptotic behavior of Theorem \ref{THM:LIMITING_BEHAVIOR}. The proofs of Theorem \ref{THM:LIMITING_BEHAVIOR} and Lemma \ref{lemma:ineq_I(y)} can be found in Section \ref{SEC:PROOFS_CONTINUOUS}. We now give the necessary lemmas for the proof of Theorem \ref{THM:LIMITING_BEHAVIOR}. 
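For concreteness, iterating \eqref{eq:bound_iterative_method_behavior} is elementary: rewriting it as $U=\bigl(\ln(1+\sqrt{2}\,xU)\bigr)^{1/2}$ and iterating gives an approximation of $U(x)$, and hence of $f_0(x)$; cf.\ Lemma \ref{lemma:solution_f}. A sketch (in Python; the starting value and the iteration count are illustrative choices):

```python
import math

# Approximate U(x) by iterating (exp(U^2) - 1) / (2U) = x / sqrt(2),
# rewritten as the fixed-point map U <- sqrt(log(1 + sqrt(2) * x * U)).
# The starting value and the iteration count are illustrative choices.
def approx_U(x, n_iter=60):
    u = 1.0
    for _ in range(n_iter):
        u = math.sqrt(math.log(1.0 + math.sqrt(2.0) * x * u))
    return u

if __name__ == "__main__":
    x = 1e4
    u = approx_U(x)
    f0 = math.exp(u * u)                       # f_0(x) = exp(U(x)^2)
    z = x * math.sqrt(2.0)
    # The ratio below is 1 + O(ln ln z / ln z), in line with the theorem above.
    print(f0 / (z * math.sqrt(math.log(z))))
```

The map is a contraction for large $x$, so a few dozen iterations already reproduce the fixed point of \eqref{eq:bound_iterative_method_behavior} to machine precision.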
\begin{lemma}[Lemma D.1 in \cite{Christianen2021}]\label{lemma:solution_f} For $t\geq 0,k>0,y>0,w\geq 0$, the nonlinear differential equation \begin{align*} f''(t) = \frac{k}{f(t)} \end{align*} with initial conditions $f(0)=y$ and $f'(0)=w$ has the unique solution \begin{align} f(t) = cf_0(a+bt).\label{eq:f} \end{align} Here, $f_0$ is given by \begin{align}\label{eq:f_0(x)} f_0(x) = \exp(U^2(x)),\quad \text{for}~x\geq 0, \end{align} where $U(x)$, for $x\geq 0$, is given by \begin{align}\label{eq:Ux} \int_0^{U(x)}\exp(u^2)~du = \frac{x}{\sqrt{2}}, \end{align} and where the constants $a,b,c$ are given by \begin{align} a & = \sqrt{2}\int_0^\frac{w}{\sqrt{2k}} \exp(u^2)~du, \label{eq:a}\\ b & = \frac{\sqrt{k}}{y}\exp\left(\frac{w^2}{2k}\right),\label{eq:b}\\ c & = y\exp\left(\frac{-w^2}{2k} \right).\label{eq:c} \end{align} \label{LEMMA:DIFF_EQ1} \end{lemma} Notice that $f_0(x)$ has no elementary closed form, since it is given in terms of $U(x)$, which is defined implicitly by \eqref{eq:Ux}. For $x\geq 0$, the left-hand side of \eqref{eq:Ux} is equal to $\frac{1}{2}\sqrt{\pi} \text{erfi}(U(x))$ where $\text{erfi}(z)$ is the imaginary error function, defined by \begin{align} \text{erfi}(z) = -\mathrm{i}\ \text{erf}(\mathrm{i}z), \end{align} where $\text{erf}(s) = \frac{2}{\sqrt{\pi}}\int_0^s \exp(-v^2)dv$ is the well-known error function. \begin{lemma}\label{lemma:ineq_I(y)} For $y\geq 0$, we have the inequalities \begin{align} \frac{\exp(y^2)-1}{2y}\leq \int_0^y \exp(u^2)du\leq \frac{\exp(y^2)-1}{y},\label{eq:inequalities_int_exp} \end{align} and \begin{align} \int_0^y \exp(u^2)du \leq \frac{\exp(y^2)-1}{2y}\left(1+\frac{2}{y^2} \right).\label{eq:inequality_exp} \end{align} \end{lemma} Now, we present the asymptotic behavior of the solution $f$ of \eqref{eq:voltages_approx}. 
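The bounds in Lemma \ref{lemma:ineq_I(y)} are also easy to check numerically; a small sketch (in Python, using composite Simpson quadrature; the sample points are illustrative):

```python
import math

# Numerical check of the bounds of the lemma above: for y > 0,
#   (e^{y^2}-1)/(2y)  <=  int_0^y e^{u^2} du  <=  (e^{y^2}-1)/y,
# together with the refined upper bound (e^{y^2}-1)/(2y) * (1 + 2/y^2).
def int_exp_sq(y, n=2000):
    """Composite Simpson approximation of the integral of exp(u^2) on [0, y]."""
    h = y / n
    s = 1.0 + math.exp(y * y)                  # endpoint terms exp(0), exp(y^2)
    for i in range(1, n):
        u = i * h
        s += (4.0 if i % 2 else 2.0) * math.exp(u * u)
    return s * h / 3.0

if __name__ == "__main__":
    for y in (0.5, 1.0, 2.0, 3.0):
        integral = int_exp_sq(y)
        lower = (math.exp(y * y) - 1.0) / (2.0 * y)
        upper = (math.exp(y * y) - 1.0) / y
        refined = lower * (1.0 + 2.0 / (y * y))
        print(y, lower, integral, upper, refined)
```

Note that the refined bound \eqref{eq:inequality_exp} is sharper than \eqref{eq:inequalities_int_exp} only for $y>\sqrt{2}$, which is the regime that matters in the proofs.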
\begin{corollary}\label{corollary:asymp_f} The limiting behavior of the function $f(t)$, defined in Equation \eqref{eq:f}, is given by \begin{align} f(t)=t\sqrt{2k\ln(t)}\left(1+\mathcal{O}\left(\frac{\ln(\ln(t))}{\ln(t)} \right)\right),\quad t\to\infty.\label{eq:f(t)_big_O} \end{align} \end{corollary} \begin{proof}[Proof of Corollary \ref{corollary:asymp_f}] In order to derive a limit result of the exact solution of \eqref{eq:voltages_approx}, i.e. for \eqref{eq:f} with initial conditions $f(0)=1$ and $f'(0)=w$, we use the limiting behavior of the function $f_0(x)$ and the definitions of $a,b$ and $c$ as in \eqref{eq:a}--\eqref{eq:c}. Denote $v = \ln(z)$. Then, by Theorem \ref{THM:LIMITING_BEHAVIOR}, we have \begin{align} f(t) = cf_0(a+bt) = czv^{\frac{1}{2}}\left(1+\mathcal{O}\left(\frac{\ln(v)}{v} \right) \right).\label{eq:put_together_ft} \end{align} In what follows, we carefully examine the quantities $czv^{\frac{1}{2}}$ and $\ln(v)/v$. First, observe that \begin{align*} v = \ln(z) = \ln((a+bt)\sqrt{2}) = \ln(t)+\mathcal{O}(1),\quad t>\exp(1), \end{align*} which yields \begin{align*} v^{\frac{1}{2}} & = \left(\ln(t)+\mathcal{O}(1)\right)^{\frac{1}{2}} \\ & = \ln(t)^{\frac{1}{2}}\left(1+\mathcal{O}\left(\frac{1}{\ln(t)}\right) \right),\quad t>\exp(1), \end{align*} and \begin{align*} \ln(v) & = \ln(\ln(t)+\mathcal{O}(1)) \\ & = \ln(\ln(t))+\mathcal{O}\left(\frac{1}{\ln(t)}\right),\quad t>\exp(1). 
\end{align*} Therefore, using that $cb=\sqrt{k}$, we get \begin{align} czv^{\frac{1}{2}} & = c(a+bt)\sqrt{2}\ln(t)^{\frac{1}{2}}\left(1+\mathcal{O}\left(\frac{1}{\ln(t)}\right) \right) \nonumber\\ & = (t+\mathcal{O}(1))\sqrt{2k\ln(t)}\left(1+\mathcal{O}\left(\frac{1}{\ln(t)}\right) \right) \nonumber \\ & = t\sqrt{2k\ln(t)}\left(1+\mathcal{O}\left(\frac{1}{\ln(t)}\right) \right),\quad t>\exp(1),\label{eq:czsqrt(v)} \end{align} and \begin{align} \frac{\ln(v)}{v} & = \frac{\ln(\ln(t))+\mathcal{O}\left(\frac{1}{\ln(t)} \right)}{\ln(t)+\mathcal{O}(1)} \nonumber \\ & = \frac{\ln(\ln(t))}{\ln(t)}\left(1+\mathcal{O}\left(\frac{1}{\ln(\ln(t))} \right) \right),\quad t>\exp(1).\label{eq:lnv_v} \end{align} Putting the results in \eqref{eq:czsqrt(v)} and \eqref{eq:lnv_v} together in \eqref{eq:put_together_ft} yields \begin{align*} f(t) = t\sqrt{2k\ln(t)}\left(1+\mathcal{O}\left(\frac{\ln(\ln(t))}{\ln(t)}\right) \right),\quad t>\exp(1). \end{align*} \end{proof} \subsection{Associated properties of the ratio between $f$ and its first-order approximation}\label{SUBSEC:ASSOCIATED_PROPERTIES} In this section, we study associated properties of the ratio between $f(t)$ and its first-order approximation. Using only the first term of the asymptotic expansion in \eqref{eq:f(t)_big_O}, we define \begin{align} g(t):= t\sqrt{2k\ln(t)}.\label{eq:f(t)_approx} \end{align} The reason for studying this ratio, and in particular the role of $k$, is twofold: (1) it provides useful insights for (the proof of) the asymptotic behavior in the discrete case in Section \ref{SEC:DISCRETE_RESULTS}, and (2) it clarifies the applicability of Equation \eqref{eq:voltages_approx} in our motivational application in cases where the parameter $k$ is small. 
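These properties can also be explored numerically before they are stated formally: choosing $w$ by bisection so that $f(1)=1+k$ (cf.\ Lemma \ref{lemma:existence_uniqueness_w} below) and tracking the ratio $f(t)/g(t)$ already reveals the dichotomy of Theorem \ref{thm:cases_k} below. A sketch (in Python; the step sizes, the horizon and the two values of $k$ are illustrative choices):

```python
import math

# Sketch: integrate f'' = k / f with f(0) = 1 and f'(0) = w chosen by bisection
# so that f(1) = 1 + k, then track the minimum of f(t) / g(t) with
# g(t) = t * sqrt(2 k ln t).  Step sizes and the horizon are illustrative.

def rk4_step(f, fp, k, dt):
    k1f, k1p = fp, k / f
    k2f, k2p = fp + 0.5 * dt * k1p, k / (f + 0.5 * dt * k1f)
    k3f, k3p = fp + 0.5 * dt * k2p, k / (f + 0.5 * dt * k2f)
    k4f, k4p = fp + dt * k3p, k / (f + dt * k3f)
    return (f + dt * (k1f + 2 * k2f + 2 * k3f + k4f) / 6,
            fp + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6)

def f_at_one(k, w, dt=1e-3):
    f, fp = 1.0, w
    for _ in range(int(round(1.0 / dt))):
        f, fp = rk4_step(f, fp, k, dt)
    return f

def shoot_w(k):
    """Bisection for the w >= 0 with f(1) = 1 + k; f(1) increases with w here."""
    lo, hi = 0.0, 2.0 + 2.0 * k        # f(1) >= 1 + w, so hi overshoots
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if f_at_one(k, mid) < 1.0 + k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def min_ratio(k, t_end=100.0, dt=1e-3):
    """Minimum of f(t) / g(t) over [2, t_end]."""
    f, fp, t = 1.0, shoot_w(k), 0.0
    best = float("inf")
    while t < t_end:
        f, fp = rk4_step(f, fp, k, dt)
        t += dt
        if t >= 2.0:
            best = min(best, f / (t * math.sqrt(2.0 * k * math.log(t))))
    return best

if __name__ == "__main__":
    print(min_ratio(0.1))   # dips below 1: small k
    print(min_ratio(2.0))   # stays above 1: large k
```

For the small value of $k$ the computed minimum drops below 1, while for the large value the ratio stays above 1 on the whole horizon, matching the two cases of Theorem \ref{thm:cases_k}.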
Considering the practical application for charging electric vehicles, the ratio of normalized voltages $V_j/V_0 = V_j, j=1,2,\ldots$ should be below a level $1/(1-\Delta)$, where the tolerance $\Delta$ is small (of the order $10^{-1}$), due to the voltage drop constraint. Therefore, the parameter $k$, comprising given charging rates and resistances at all stations, is normally small (of the order $10^{-3}$). Furthermore, to match the initial conditions $V_0=1$ and $V_1 = 1+k$ of the discrete recursion with the initial conditions of the continuous analog, we demand $f(0)=1$ and $f(1) = 1+k$. However, notice that in our continuous analog described by \eqref{eq:voltages_approx}, we have, in addition to the initial condition $f(0)=1$, the initial condition $f'(0)=w$, while nothing is assumed about the value $f(1)$. The question arises whether it is possible to connect the conditions $f'(0)=w$ and $f(1)=1+k$. To do so, we use an alternative representation of $f$ given in Lemma \ref{lemma:alternative_f}. Then, using this representation, we show in Lemma \ref{lemma:existence_uniqueness_w} that for every $k$ there exists a unique $w\geq 0$ such that the solution of \eqref{eq:voltages_approx} satisfies $f(1)=1+k$. The proofs of Lemmas \ref{lemma:alternative_f}--\ref{lemma:existence_uniqueness_w} can be found in Appendix \ref{sec:existence_uniqueness_w}. The importance of the role of the parameter $k$ becomes immediate from the comparison of the functions $f(t)$ and $g(t)$ in Theorem \ref{thm:cases_k}. \begin{theorem}\label{thm:cases_k} Let $f(t)$ be given by \eqref{eq:f} with initial conditions $f(0)=1$, $f'(0)=w$ such that $f(1)=1+k$, and let $g(t)$ be given by \eqref{eq:f(t)_approx}. 
Then, there is a unique $k_c = 1.0384\ldots$ such that \begin{enumerate}[label=(\alph*)] \item $k\geq k_c$ implies $f(t)\geq g(t)$ for all $t\geq 1$, \item $0<k<k_c$ implies that there are $t_1(k),t_2(k)$ with $1<t_1(k)<t_2(k)<\infty$ such that $f(t)<g(t)$ when $t_1(k)<t<t_2(k)$ and $f(t)>g(t)$ when $1\leq t<t_1(k)$ or $t>t_2(k)$. \end{enumerate} \end{theorem} In what follows, we introduce notation for the proof of Theorem \ref{thm:cases_k} and give a sketch of the proof. The theorem is proven in Appendix \ref{sec:existence_uniqueness_w}. Define the auxiliary function $\psi:(1,\infty)\to\mathbb{R}$ by \begin{align} \psi(t):=2k+\frac{k}{2\ln(t)}-k\ln(2k\ln(t)),\label{eq:psi} \end{align} and notice (this is also used in the proof of Lemma \ref{lemma:equivalence}) that the function $\psi(t)$ is strictly decreasing from $+\infty$ as $t\downarrow 1$ to $-\infty$ as $t\to\infty$. This follows easily from the definition of $\psi$ in \eqref{eq:psi}. Denote the unique solution $t>1$ of the equation $ \psi(t) = w^2$ by $t_0(k)$, i.e. \begin{align} \psi(t_0(k)) = w^2,\label{eq:equation_psi} \end{align} where $w$ comes from the initial condition $f'(0)=w\geq 0$. Additionally, define \begin{align} F(t,k) &:= \int_{(W^2+\ln(f(t)))^{\frac{1}{2}}}^{(W^2+\ln(g(t)))^{\frac{1}{2}}} \exp(v^2)dv \label{eq:def_F}\\ & = -t\sqrt{\frac{k}{2}}\exp(W^2)+\int_W^{(W^2+\ln(g(t)))^{\frac{1}{2}}}\exp(v^2)dv,\label{eq:F(t,k)} \end{align} where the second line is a consequence of Lemma \ref{lemma:alternative_f} with $y=1$. The proof of Theorem \ref{thm:cases_k} centers on the unique solution $t_0(k)$ of \eqref{eq:equation_psi}. First, from \eqref{eq:def_F}, we notice that $\max_{t\geq 1} F(t,k)\leq 0$ is equivalent to $f(t)\geq g(t)$ for all $t\geq 1$. In Lemma \ref{lemma:F(t,k)}, we show that $\max_{t\geq 1}F(t,k)$ is attained exactly at the point $t_0(k)$, i.e., $\max_{t\geq 1}F(t,k) = F(t_0(k),k)$. Notice that $F(t_0(k),k)$ is a function of the parameter $k$ only. 
In Lemma \ref{lemma:F(t,k)_decreasing}, we show that $F(t_0(k),k)$ is a strictly decreasing function of $k$. To prove Lemma \ref{lemma:F(t,k)_decreasing}, we make use of the auxiliary Lemma \ref{lemma:increasing_W}. Then, in Lemmas \ref{lemma:positive_small_k} and \ref{lemma:negative_large_k}, we show that $F(t_0(k),k)$ is positive for small $k$ and negative for large $k$, respectively. This allows us to conclude that $F(t_0(k),k)\leq 0$ is equivalent to $k\geq k_c$. In summary, to prove Theorem \ref{thm:cases_k}, we show \begin{align*} f(t)\geq g(t)~\text{for all}~t\geq 1 & \iff \max_{t\geq 1} F(t,k) = F(t_0(k),k) \leq 0 \iff k\geq k_c. \end{align*} Furthermore, in Lemma \ref{lemma:F(t,k)}, we show that $F(t,k)$ has only one extreme point, and in particular that this extreme point is a maximum and is attained at the point $t_0(k)$. Thus, in the case where $0<k<k_c$, there are $t_1(k),t_2(k)$ with $1<t_1(k)<t_2(k)<\infty$ such that $f(t)<g(t)$ when $t_1(k)<t<t_2(k)$ and $f(t)>g(t)$ when $1\leq t<t_1(k)$ or $t>t_2(k)$. The lemmas needed to prove Theorem \ref{thm:cases_k}, Lemmas \ref{lemma:F(t,k)}--\ref{lemma:negative_large_k}, are stated and proven in Appendix \ref{sec:existence_uniqueness_w}. A comparison of the approximation $g(t)$ in \eqref{eq:f(t)_approx} with the exact solution $f(t)$ of \eqref{eq:voltages_approx}, where $w$ is such that $f(1)=1+k$, for three values of $k$, is given in Figure \ref{fig:quotient_f_g_500_log_a}. \begin{figure}[h] \centering \includegraphics[scale=0.5]{quotient_f_g_500_log.png} \caption{Plot of quotient $f/g$ for three values of $k$.} \label{fig:quotient_f_g_500_log_a} \end{figure} However, in the setting where $k$ is small, the result in Theorem \ref{thm:cases_k}, case (b), leaves two practical questions: how small the ratio $f(t)/g(t)$ can be when $t_1(k)\leq t\leq t_2(k)$, and how large the ratio $f(t)/g(t)$ can be when $t\geq t_2(k)$. These practical questions are covered in Theorem \ref{thm:bounds_f/g}. 
\begin{theorem}\label{thm:bounds_f/g} Let $f(t)$ be given by \eqref{eq:f} with initial conditions $f(0)=1, f'(0)=w$ such that $f(1)=1+k$, and let $g(t)$ be given by \eqref{eq:f(t)_approx}. Then, for $0<k<k_c$, we have \begin{align} f(t)/g(t) \geq \frac{1}{2}\left(\ln\left(\sqrt{2/k} \right) \right)^{-\frac{1}{2}},\label{eq:lowerbound_f_g} \end{align} when $t_1(k)\leq t\leq t_2(k)$. Furthermore, we have \begin{align} f(t)/g(t)\leq 1.21 \label{eq:upperbound_f_g} \end{align} when $t\geq t_2(k)$. \end{theorem} The proof exploits properties from Lemma \ref{lemma:solution_f} and Theorem \ref{thm:cases_k}, such as the exact representations \eqref{eq:a}--\eqref{eq:c} and actual values such as that of $k_c$; most importantly, we use numerical results to compute bounds for the quantity $f_0(x)/g(x)$, where $f_0(x)$ is given in \eqref{eq:f_0(x)} and $g(x)$ is given in \eqref{eq:f(t)_approx}. The proofs of Theorem \ref{thm:bounds_f/g} and the supporting Lemmas \ref{lemma:maximum_ratio_f0_g} and \ref{lemma:t2(k)} can be found in Appendix \ref{sec:existence_uniqueness_w}. \section{Main results of discrete Emden-Fowler type equation}\label{SEC:DISCRETE_RESULTS} In this section, we present the asymptotic behavior of the discrete recursion \eqref{eq:voltages_distflow}. Thus, we consider the sequence $V_j, j=0,1,\ldots$ defined in \eqref{eq:voltages_distflow} and we let \begin{align} W_j = j\left(2k\ln(j) \right)^{\frac{1}{2}}=g(j),\quad j=1,2,\ldots,\label{eq:Wn_sequence} \end{align} denote the discrete analog of $g$, cf.\ \eqref{eq:f(t)_approx}, evaluated at the integer points $j=1,2,\ldots$. The asymptotic behavior of the discrete recursion \eqref{eq:voltages_distflow} is summarized in the following theorem. \begin{theorem}\label{thm:lim_V_j_W_j} Let $V_j,j=0,1,\ldots$ and $W_j,j=1,2,\ldots$ be as in \eqref{eq:voltages_distflow} and \eqref{eq:Wn_sequence}, respectively. Then, \begin{align*} \lim_{j\to\infty} \frac{V_j}{W_j} = 1. 
\end{align*} \end{theorem} The proof of Theorem \ref{thm:lim_V_j_W_j} relies on the following observations: there always exists a point $n\in\{1,2,\ldots\}$ such that either $V_j\geq W_j$ for all $j\geq n$ or $V_j\leq W_j$ for all $j\geq n$, and in either case the existence of such a point implies the desired asymptotic behavior of the sequence $V_j$. To show that there exists a point $n\in\{1,2,\ldots\}$ such that either $V_j\geq W_j$ for all $j\geq n$ or $V_j\leq W_j$ for all $j\geq n$, we rely on Lemmas \ref{lemma:upper_bound_W}, \ref{lemma:lower_bound_V} and \ref{lemma:equivalence}. Using the inequalities in Lemmas \ref{lemma:upper_bound_W} and \ref{lemma:lower_bound_V}, we show that \begin{align} V_{j+1}-V_j \geq W_{j+1}-W_j,\label{eq:ineq_V_W} \end{align} for $j\geq n_0(k)$, where $n_0(k)$ is appropriately chosen. Given \eqref{eq:ineq_V_W}, either there exists a point $n\geq n_0(k)$ such that $V_n\geq W_n$, or there does not. If there exists a point $n\geq n_0(k)$ such that $V_n\geq W_n$, then we show in Lemma \ref{lemma:equivalence} that $V_j\geq W_j$ for all $j\geq n$. If not, we have that $V_j<W_j$ for all $j\geq n_0(k)$. It then remains to show that either case implies the desired asymptotic behavior of $V_j$. This is done in Lemma \ref{lemma:V_j_geq_W_j}. We now give the necessary lemmas to prove Theorem \ref{thm:lim_V_j_W_j}. \begin{lemma}\label{lemma:upper_bound_W} Let $W_j, j=1,2,\ldots$ be as in \eqref{eq:Wn_sequence}. Then, \begin{align} W_{j+1}-W_j \leq (\psi(j+1)+2k\ln(W_{j+1}))^{\frac{1}{2}},\label{eq:difference_W_j} \end{align} where $\psi$ is defined in \eqref{eq:psi}. \end{lemma} \begin{lemma}\label{lemma:lower_bound_V} Let $V_j,j=0,1,\ldots$ be as in \eqref{eq:voltages_distflow}. Then, \begin{align} V_{j+1}-V_j & \geq \left(C+2k\ln(V_j) \right)^{\frac{1}{2}},\label{eq:difference_V_j} \end{align} for some constant $C$. 
\end{lemma} \begin{lemma}\label{lemma:equivalence} Let $V_j,j=0,1,\ldots$ and $W_j,j=1,2,\ldots$ be as in \eqref{eq:voltages_distflow} and \eqref{eq:Wn_sequence}, respectively. Then, the following two statements are equivalent. \begin{enumerate} \item There is $n\geq n_0(k)$ such that $V_n\geq W_n$. \item There is $n\geq n_0(k)$ such that $V_j\geq W_j$ for all $j\geq n$. \end{enumerate} \end{lemma} \begin{lemma}\label{lemma:V_j_geq_W_j} Let $V_j, j=0,1,\ldots$ and $W_j, j=1,2,\ldots$ be as in \eqref{eq:voltages_distflow} and \eqref{eq:Wn_sequence}, respectively. Suppose that either \begin{enumerate} \item there is a point $n\in\{1,2,\ldots\}$ such that $V_j\geq W_j$ for all $j\geq n$,\end{enumerate} or \begin{enumerate}[resume] \item there is a point $n\in\{1,2,\ldots\}$ such that $V_j\leq W_j$ for all $j\geq n$. \end{enumerate} Then, $V_j= W_j(1+\smallO(1))$ as $j\to\infty$. \end{lemma} The proofs of Lemmas \ref{lemma:upper_bound_W}--\ref{lemma:V_j_geq_W_j} are given in Section \ref{sec:proofs_discrete}. Theorem \ref{thm:lim_V_j_W_j} then follows from Lemmas \ref{lemma:upper_bound_W}--\ref{lemma:V_j_geq_W_j}. \begin{proof}[Proof of Theorem \ref{thm:lim_V_j_W_j}] Let $V_j,j=0,1,\ldots$ and $W_j,j=1,2,\ldots$ be as in \eqref{eq:voltages_distflow} and \eqref{eq:Wn_sequence}, respectively. On the one hand, as a result of Lemma \ref{lemma:lower_bound_V}, the first order finite differences of the sequence $V_j$ are bounded from below according to \eqref{eq:difference_V_j}, while on the other hand, as a result of Lemma \ref{lemma:upper_bound_W}, the first order finite differences of $W_j$ are bounded from above according to \eqref{eq:difference_W_j}. A minor issue is that \eqref{eq:difference_W_j} involves $\ln(W_{j+1})$, whereas \eqref{eq:difference_V_j} involves $\ln(V_j)$.
However, by \eqref{eq:Wn_sequence}, we write \begin{align} \ln(W_{j+1})-\ln(W_j) & = \ln\left((j+1)(2k\ln(j+1))^{\frac{1}{2}}\right)-\ln\left(j(2k\ln(j))^{\frac{1}{2}}\right) \nonumber\\ & = \ln\left(1+\frac{1}{j}\right)+\frac{1}{2}\ln\left(\frac{\ln(j+1)}{\ln(j)} \right)\label{eq:difference_logs_W} \end{align} and notice, since $\ln(j)$ is increasing in $j\geq 1$ and $\ln(j+1)-\ln(j)\leq \frac{1}{j}$, that $\frac{\ln(j+1)}{\ln(j)}\leq 1+\frac{1}{j}$ when $j>\exp(1)$. Using this last inequality in \eqref{eq:difference_logs_W} yields that $\ln(W_{j+1}) = \ln(W_j)+\mathcal{O}(1/j)$. Moreover, Equations \eqref{eq:difference_W_j} and \eqref{eq:difference_V_j} imply that there exists a point $n_0(k)$ such that $V_{j+1}-V_j\geq W_{j+1}-W_j$ when $j\geq n_0(k)$. To eliminate the effect of the term $\mathcal{O}(1/j)$ in $\ln(W_{j+1})=\ln(W_j)+\mathcal{O}(1/j)$, we let $n_0(k)$ be such that $\psi(n_0(k))\leq C-1$. We can now distinguish two cases: there exists either a point $n\geq n_0(k)$ such that $V_n\geq W_n$ or not, i.e., \begin{enumerate} \item There is $n\geq n_0(k)$ such that $V_{n}\geq W_{n}$, \item $V_{j}< W_j$ for all $j\geq n_0(k)$. \end{enumerate} By Lemma \ref{lemma:equivalence}, the existence of a point $n\geq n_0(k)$ such that $V_n\geq W_n$ implies that $V_j\geq W_j$ for all $j\geq n$, while the non-existence of such a point means that $V_j< W_j$ for all $j\geq n_0(k)$. This situation exactly fits the framework of Lemma \ref{lemma:V_j_geq_W_j}. We consider the two cases above. First, assume that (1) holds. Then by Lemma \ref{lemma:equivalence}, we have $V_j\geq W_j$ for all $j\geq n$.
From $V_j\geq W_j$, for all $j\geq n$, we have that Lemma \ref{lemma:V_j_geq_W_j}, item (1) holds, and so \begin{align*} V_j = W_j(1+\smallO(1)),\quad j\to\infty.\end{align*} Second, assume that (2) holds, so that $V_j< W_j$ for all $j>n_0(k)$. Then, Lemma \ref{lemma:V_j_geq_W_j}, item (2) holds, and so \begin{align*} V_j= W_j(1+\smallO(1)),\quad j\to\infty. \end{align*} Hence, any of the two cases yields \begin{align*} \lim_{j\to\infty} \frac{V_j}{W_j} = 1. \end{align*} \end{proof} Although we do not provide associated properties of the asymptotic behavior of $V_j$ as $j\to\infty$ as we did for the asymptotic behavior of $f(x)$ as $x\to\infty$, we compare the behavior of $V_j$ with the discrete counterpart of $g(t)$, i.e. $W_j$, for $j=1,\ldots,100$ in Figure \ref{fig:quotient_V_W_500_log_a}. \begin{figure}[h] \centering \includegraphics[scale=0.5]{quotient_V_W_100_log.png} \caption{Plot of quotient $V/W$ for three values of $k$.} \label{fig:quotient_V_W_500_log_a} \end{figure} \section{Proofs for Section \ref{SEC:ASYMP_F(T)}}\label{SEC:PROOFS_CONTINUOUS} The main result in Section \ref{SEC:ASYMP_F(T)}, i.e., Theorem \ref{THM:LIMITING_BEHAVIOR} follows from Lemmas \ref{lemma:solution_f} and \ref{lemma:ineq_I(y)}. In this section, we provide the proofs of both Theorem \ref{THM:LIMITING_BEHAVIOR} and Lemma \ref{lemma:ineq_I(y)}. For the proof of Lemma \ref{lemma:solution_f} we refer to \cite{Christianen2021}. 
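The limiting behavior proved below can also be probed numerically. The following sketch (ours, not part of the formal argument; the quadrature step count and bisection depth are illustrative choices) solves the integral representation $\int_0^U \exp(u^2)\,du = x/\sqrt{2}$, with $U=(\ln(f_0(x)))^{\frac{1}{2}}$, for $U$ by bisection and compares $f_0(x)=\exp(U^2)$ with the leading-order approximation $z(\ln(z))^{\frac{1}{2}}$, $z=x\sqrt{2}$.

```python
import math

def int_exp_sq(U, steps=5000):
    # Trapezoidal rule for int_0^U exp(u^2) du (step count is illustrative).
    if U <= 0.0:
        return 0.0
    h = U / steps
    s = 0.5 * (1.0 + math.exp(U * U))
    for i in range(1, steps):
        u = i * h
        s += math.exp(u * u)
    return s * h

def U_of_x(x):
    # Solve int_0^U exp(u^2) du = x / sqrt(2) for U by bisection;
    # the integral is strictly increasing in U, so the root is unique.
    target = x / math.sqrt(2.0)
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if int_exp_sq(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 1.0e4
z = x * math.sqrt(2.0)
f0 = math.exp(U_of_x(x) ** 2)      # f_0(x) = exp(U^2)
lead = z * math.sqrt(math.log(z))  # leading-order term z (ln z)^{1/2}
print(f0 / lead)  # close to 1, up to an O(ln ln z / ln z) deviation
```

The ratio approaches 1 only slowly, consistent with the $\mathcal{O}(\ln(\ln(z))/\ln(z))$ correction in the theorem.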
\subsection{Proof of Theorem \ref{THM:LIMITING_BEHAVIOR}}\label{subsec:thm_limiting_behavior} \begin{proof}[Proof of Theorem \ref{THM:LIMITING_BEHAVIOR}] Denoting $U=U(x)=\left(\ln(f_0(x))\right)^{\frac{1}{2}}$ for $x\geq 0$, we have by \eqref{eq:Ux} \begin{align} \int_0^U \exp(u^2)du = \frac{x}{\sqrt{2}}.\label{eq:intY} \end{align} We consider for $x\geq 0$ the equation \begin{align} \frac{\exp(y^2)-1}{2y} = \frac{x}{\sqrt{2}}.\label{eq:B20} \end{align} With $z=x\sqrt{2}$, we can write \eqref{eq:B20} as \begin{align*} y = h_z(y),\qquad h_z(y)=\left(\ln(1+zy) \right)^{\frac{1}{2}}.\end{align*} The function $h_z(y)$ is concave in $y\geq 0$ since \begin{align*} \frac{d}{dy}\left[h_z(y) \right] = \frac{z}{2(1+yz)(\ln(1+yz))^{\frac{1}{2}}}\end{align*} is decreasing in $y\geq 0$. Furthermore, when $z>\exp(1)-1$, \begin{align*} h_z(1) = \left(\ln(1+z) \right)^{\frac{1}{2}}>1,\qquad h_z(z)=(\ln(1+z^2))^{\frac{1}{2}}<z,\end{align*} where the first inequality follows from $z>\exp(1)-1$ and the second inequality follows from $\ln(1+z^2)<z^2$, $z>0$. Therefore, the equation $y=h_z(y)$ has for any $z>\exp(1)-1$ exactly one solution $y_{LB}\in[1,z]$; here ``LB'' refers to the lower bound in \eqref{eq:inequalities_int_exp}. Since $y_{LB}\in[1,z]$, we have \begin{align} y_{LB}=(\ln(1+zy_{LB}))^{\frac{1}{2}}\in \left[(\ln(1+z))^{\frac{1}{2}},(\ln(1+z^2))^{\frac{1}{2}} \right],\label{eq:B24} \end{align} so that $y_{LB}=\mathcal{O}((\ln(z))^{\frac{1}{2}})$, $z>\exp(1)-1$.
When we iterate \eqref{eq:B24} one more time, we get \begin{align} y_{LB} & =\left(\ln(z)+\ln\left(\frac{1}{z}+y_{LB}\right) \right)^{\frac{1}{2}}=(\ln(z))^{\frac{1}{2}}\left(1+\frac{\ln\left(\frac{1}{z}+y_{LB}\right)}{\ln(z)} \right)^{\frac{1}{2}} \nonumber \\ & = (\ln(z))^{\frac{1}{2}}\left(1+\mathcal{O}\left(\frac{\ln(\ln(z))}{\ln(z)} \right) \right), \quad z>\exp(1)-1.\label{eq:B25} \end{align} Observe that \begin{align} U = (\ln(f_0(x)))^{\frac{1}{2}}\leq y_{LB},\quad z>\exp(1)-1.\label{eq:B26} \end{align} Indeed, we have from \eqref{eq:intY} and the first inequality in \eqref{eq:inequalities_int_exp} \begin{align} \frac{\exp(U^2)-1}{2U}\leq \int_0^U \exp(u^2)du = \frac{x}{\sqrt{2}} = \frac{\exp(y_{LB}^2)-1}{2y_{LB}},\label{eq:B27} \end{align} and so $U\leq y_{LB}$ follows since the function $y\mapsto (\exp(y^2)-1)/2y$ is increasing in $y\geq 0$. In addition to the upper bound on $U$ in \eqref{eq:B26}, we also have the lower bound \begin{align} U\geq \left(\ln\left(\frac{z}{2}\right) \right)^{\frac{1}{2}},\quad z\geq 2.\label{eq:B28} \end{align} Indeed, from \eqref{eq:intY} and the second inequality in \eqref{eq:inequalities_int_exp}, \begin{align*} \frac{\exp(U^2)-1}{U}\geq \int_0^U \exp(u^2)du=\frac{x}{\sqrt{2}}=\frac{z}{2},\end{align*} while \begin{align} \frac{\exp(y^2)-1}{y}\bigg|_{y=(\ln(\frac{z}{2}))^{\frac{1}{2}}} = \frac{\frac{z}{2}-1}{(\ln(\frac{z}{2}))^{\frac{1}{2}}}\leq \frac{z}{2},\quad z\geq 2,\label{eq:B30} \end{align} where the inequality in \eqref{eq:B30} follows from $-\ln(w)\geq (1-w)^2$ with $w=\frac{2}{z}\in (0,1]$.
We have from \eqref{eq:B27} that \begin{align} \frac{\exp(U^2)-1}{2U}\leq \frac{x}{\sqrt{2}}.\label{eq:B31} \end{align} When we use \eqref{eq:B28} in \eqref{eq:inequality_exp} with $y=U$, we see that \begin{align} \frac{x}{\sqrt{2}}=\int_0^U \exp(u^2)du & \leq \frac{\exp(U^2)-1}{2U}\left(1+\frac{2}{U^2} \right) \nonumber \\ & = \frac{\exp(U^2)-1}{2U}\left(1+\mathcal{O}\left(\frac{1}{\ln(z)}\right) \right).\label{eq:B32} \end{align} From \eqref{eq:B31} and \eqref{eq:B32}, we then find that \begin{align} \frac{\exp(U^2)-1}{2U} = \frac{x}{\sqrt{2}}\left(1+\mathcal{O}\left(\frac{1}{\ln(z)}\right) \right).\label{eq:B33} \end{align} Observe that \eqref{eq:B33} coincides with \eqref{eq:B20} when we take $y=U$ and replace the right-hand side $\frac{x}{\sqrt{2}}$ by $\left(\frac{x}{\sqrt{2}}\right)\left(1+\mathcal{O}\left(\frac{1}{\ln(z)} \right) \right)$. Using then \eqref{eq:B25} with $z$ replaced by $z\left(1+\mathcal{O}\left(\frac{1}{\ln(z)} \right) \right)$, we find that \begin{align} U & = \left(\ln\left(z\left(1+\mathcal{O}\left(\frac{1}{\ln(z)} \right) \right) \right)\right)^{\frac{1}{2}}\left(1+\mathcal{O}\left(\frac{\ln(\ln(z\left(1+\mathcal{O}\left(\frac{1}{\ln(z)} \right) \right)))}{\ln(z\left(1+\mathcal{O}\left(\frac{1}{\ln(z)} \right) \right))} \right) \right)\nonumber\\ & = (\ln(z))^{\frac{1}{2}}\left(1+\mathcal{O}\left(\frac{\ln(\ln(z))}{\ln(z)} \right) \right).\label{eq:B34} \end{align} Then, finally, from \eqref{eq:B33} and \eqref{eq:B34}, \begin{align} f_0(x) & = \exp(U^2) = 1+zU\left(1+\mathcal{O}\left(\frac{1}{\ln(z)} \right) \right)\nonumber\\ & = 1+z(\ln(z))^{\frac{1}{2}}\left(1+\mathcal{O}\left(\frac{\ln(\ln(z))}{\ln(z)} \right) \right)\left(1+\mathcal{O}\left(\frac{1}{\ln(z)} \right) \right)\nonumber \\ & = z(\ln(z))^{\frac{1}{2}}\left(1+\mathcal{O}\left(\frac{\ln(\ln(z))}{\ln(z)} \right) \right) \nonumber, \end{align} as required. 
\end{proof} \subsection{Proof of Lemma \ref{lemma:ineq_I(y)}}\label{subsec:proof_lemma_ineq_I(y)} \begin{proof}[Proof of Lemma \ref{lemma:ineq_I(y)}] We require the inequalities \eqref{eq:inequalities_int_exp} and \eqref{eq:inequality_exp}. The inequalities in \eqref{eq:inequalities_int_exp} follow from expanding the three functions in \eqref{eq:inequalities_int_exp} as a series involving odd powers $y^{2l+1}, l=0,1,\ldots,$ of $y$ and comparing coefficients, i.e., \begin{align*} \frac{\exp(y^2)-1}{2y} & = \sum_{\ell=0}^{\infty} \frac{y^{2\ell+1}}{2(\ell+1)!} \\ & \leq \sum_{\ell=0}^{\infty} \frac{y^{2\ell+1}}{(2\ell+1)\ell!} = \int_0^y \exp(u^2)du \\ & \leq \sum_{\ell=0}^{\infty} \frac{y^{2\ell+1}}{(\ell+1)!} = \frac{\exp(y^2)-1}{y}. \end{align*} As to the inequality in \eqref{eq:inequality_exp}, we use partial integration according to \begin{align} \int_0^y \exp(u^2)du & = \int_0^y \frac{1}{2u}d(\exp(u^2)-1)\nonumber \\ & = \frac{\exp(y^2)-1}{2y}+\int_0^y \frac{\exp(u^2)-1}{2u^2}du.\label{eq:B18} \end{align} Now \begin{align} \int_0^y \frac{\exp(u^2)-1}{2u^2}du \leq \frac{\exp(y^2)-1-y^2}{y^3},\quad y\geq 0, \label{eq:B19} \end{align} as follows from expanding the two functions in \eqref{eq:B19} as a series involving odd powers $y^{2l+1}, l=0,1,\ldots,$ of $y$ and comparing coefficients. Then \eqref{eq:inequality_exp} follows from \eqref{eq:B18}--\eqref{eq:B19} upon deleting the $y^2$ in the numerator at the right-hand side of \eqref{eq:B19}. \end{proof} \section{Proofs for Section \ref{SEC:DISCRETE_RESULTS}}\label{sec:proofs_discrete} The main result in Section \ref{SEC:DISCRETE_RESULTS} follows from Lemmas \ref{lemma:upper_bound_W}--\ref{lemma:V_j_geq_W_j}. The proof of each Lemma can be found in \ref{subsubsec:W_j}--\ref{subsec:proof_lemma_V_j_geq_W_j}, respectively. 
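Before the formal proofs, the inequality of Lemma \ref{lemma:upper_bound_W} can be verified numerically. In the sketch below (ours, not part of the proofs; the value of $k$ and the range of $j$ are illustrative), $\psi$ is spelled out as $\psi(t)=2k+\frac{k}{2\ln(t)}-k\ln(2k\ln(t))$, which can be read off from the expression \eqref{eq:diff_g(t)} for $g'(t)$ derived in the proof.

```python
import math

k = 0.1  # illustrative coupling constant

def g(t):
    # g(t) = t (2k ln t)^{1/2}; W_j = g(j) at integer points
    return t * math.sqrt(2.0 * k * math.log(t))

def psi(t):
    # psi(t) = 2k + k/(2 ln t) - k ln(2k ln t)
    return 2.0 * k + k / (2.0 * math.log(t)) - k * math.log(2.0 * k * math.log(t))

# Check W_{j+1} - W_j <= (psi(j+1) + 2k ln W_{j+1})^{1/2} over a sample range.
for j in range(2, 2000):
    lhs = g(j + 1) - g(j)
    rhs = math.sqrt(psi(j + 1) + 2.0 * k * math.log(g(j + 1)))
    assert lhs <= rhs + 1e-12, (j, lhs, rhs)
```

The right-hand side equals $g'(j+1)$, so the check is exactly the mean-value/convexity argument in the proof, evaluated in floating point.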
\subsection{Proof of Lemma \ref{lemma:upper_bound_W}}\label{subsubsec:W_j} \begin{proof}[Proof of Lemma \ref{lemma:upper_bound_W}] By \eqref{eq:Wn_sequence} and the mean-value theorem, there is a $\xi\in[j,j+1]$ such that \begin{align} W_{j+1}-W_j & = g(j+1)-g(j) = g'(\xi) \leq g'(j+1) .\label{eq:W_j+1-W_j} \end{align} We have used here that $g(t)$ is convex in $t\geq \exp(1/2)$. We make the term $g'(j+1)$ explicit by differentiating $g(t)$, see \eqref{eq:f(t)_approx}, with respect to $t$ and rewriting it in terms of the function $g(t)$ itself (so, at integer points, in terms of $W_t$) and $\psi(t)$ as in \eqref{eq:psi}. Differentiation of $g(t)$ gives \begin{align} g'(t) & = (2k)^{\frac{1}{2}}\frac{d}{dt}(t(\ln(t))^{\frac{1}{2}})\nonumber \\ & = (2k)^{\frac{1}{2}}\left((\ln(t))^{\frac{1}{2}}+\frac{1}{2(\ln(t))^{\frac{1}{2}}} \right)\nonumber\\ & = (2k)^{\frac{1}{2}}\left(\frac{(2\ln(t)+1)^2}{4\ln(t)} \right)^{\frac{1}{2}}\nonumber\\ & = \left(2k\ln(t)+2k+\frac{k}{2\ln(t)} \right)^{\frac{1}{2}}.\label{eq:diff_g(t)_right_form} \end{align} However, \eqref{eq:diff_g(t)_right_form} does not contain the function $g(t)$ yet. Therefore, we rewrite the first term of the right-hand side of \eqref{eq:diff_g(t)_right_form} as follows: \begin{align} 2k\ln(t) & = 2k\ln(t(2k\ln(t))^{\frac{1}{2}})-k\ln(2k\ln(t)) \nonumber\\ & = 2k\ln(g(t))-k\ln(2k\ln(t)).\label{eq:diff_g(t)_step} \end{align} Then, after inserting \eqref{eq:diff_g(t)_step} and the definition of $\psi(t)$ in \eqref{eq:psi}, we get \begin{align} g'(t) & = \left(2k\ln(g(t))-k\ln(2k\ln(t))+2k+\frac{k}{2\ln(t)} \right)^{\frac{1}{2}}\nonumber\\ & = \left(\psi(t)+2k\ln(g(t)) \right)^{\frac{1}{2}},\quad t>1.\label{eq:diff_g(t)} \end{align} Then, combining the upper bound in \eqref{eq:W_j+1-W_j} with \eqref{eq:diff_g(t)} yields the desired upper bound for the finite differences of $W_j$ in \eqref{eq:difference_W_j}.
\end{proof} \subsection{Proof of Lemma \ref{lemma:lower_bound_V}}\label{subsubsecc:V_j} In this section, we prove a lower bound for the first order finite differences of $V_j$ that is similar to the upper bound we obtained in \eqref{eq:difference_W_j}. This result follows from Lemmas \ref{lemma:properties_Vn} and \ref{lemma:connect_to_ln}. In more detail, the proof of Lemma \ref{lemma:lower_bound_V} consists of algebraic manipulations of \eqref{eq:voltages_distflow}, but the key in the proof is the use of Lemma \ref{lemma:connect_to_ln} in these manipulations, which, in turn, builds on technical results established in Lemma \ref{lemma:properties_Vn}. We first state Lemmas \ref{lemma:properties_Vn} and \ref{lemma:connect_to_ln}. \begin{lemma}\label{lemma:properties_Vn} Let $V_j, j=0,1,\ldots,$ be as in \eqref{eq:voltages_distflow}. Then, \begin{enumerate} \item $V_j\geq jk+1$, \item $V_{j+1}-V_j = \sum_{i=0}^j \frac{k}{V_i} \to\infty$ as $j\to\infty$, \item $V_{j+1}-V_j \leq k+\ln\left(1+jk\right)$, \item $\frac{V_{j+1}-V_j}{V_j}=\mathcal{O}\left(\frac{\ln(j)}{j} \right)$. \end{enumerate} \end{lemma} \begin{lemma}\label{lemma:connect_to_ln} Let $V_j,j=0,1,\ldots$ be as in \eqref{eq:voltages_distflow}. Then, \begin{align*} \frac{V_{j+1}-V_{j-1}}{V_j} = \ln(V_{j+1})-\ln(V_{j-1})+\mathcal{O}\left(\left(\frac{\ln(j)}{j} \right)^3 \right). \end{align*} \end{lemma} Both Lemmas \ref{lemma:properties_Vn} and \ref{lemma:connect_to_ln} are proven later in this section. Here, we first discuss the efficacy of Lemma \ref{lemma:connect_to_ln} by numerical validation.
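The recursion is cheap to iterate, so the lemma can be probed directly. The sketch below (ours; the value of $k$ and the ranges are illustrative choices) generates $V_j$ from $(V_{j+1}-V_j)-(V_j-V_{j-1})=k/V_j$ with $V_0=1$, $V_1=1+k$, and compares $(V_{j+1}-V_{j-1})/V_j$ with $\ln(V_{j+1})-\ln(V_{j-1})$:

```python
import math

k = 0.01  # illustrative value, within the range shown in the figures
N = 200

# Iterate (V_{j+1} - V_j) - (V_j - V_{j-1}) = k / V_j with V_0 = 1, V_1 = 1 + k.
V = [1.0, 1.0 + k]
for j in range(1, N):
    V.append(2.0 * V[j] - V[j - 1] + k / V[j])

# Compare both sides of the lemma; the O((ln j / j)^3) error term is of
# relative size O((ln j / j)^2) compared with the left-hand side.
for j in range(10, N):
    lhs = (V[j + 1] - V[j - 1]) / V[j]
    rhs = math.log(V[j + 1]) - math.log(V[j - 1])
    assert abs(rhs / lhs - 1.0) < 0.005  # below 0.5% on this range
```

This reproduces the behavior reported in the figure: the two sides agree to well within half a percent already for moderate $j$.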
We approximate, \begin{align} \frac{V_{j+1}-V_{j-1}}{V_j} = \left(\frac{V_{j+1}}{V_j}-1\right)+\left(1-\frac{V_{j-1}}{V_j} \right)\label{eq:approx_diff} \end{align} by \begin{align} \approx & \ln\left(1+\left(\frac{V_{j+1}}{V_j}-1 \right) \right)-\ln\left(1-\left(1-\frac{V_{j-1}}{V_j} \right)\right)\nonumber\\ = & \ln\left(\frac{V_{j+1}}{V_{j}} \right)-\ln\left(\frac{V_{j-1}}{V_{j}} \right) = \ln(V_{j+1})-\ln(V_{j-1}).\label{eq:approx_ln} \end{align} The efficacy of the approximation \eqref{eq:approx_ln} of \eqref{eq:approx_diff} is illustrated for the cases $k=0.001,\ k=0.01$ and $k=0.1$ in Figure \ref{fig:approximation_diff_and_log}. For these cases, the approximation already yields relative errors smaller than $0.5\%$ for $j\geq 10$. \begin{figure}[h!] \centering \includegraphics[scale=0.5]{plot_approximation_diff_and_log.png} \caption{Illustration of efficacy of the approximation \eqref{eq:approx_ln} of \eqref{eq:approx_diff} by showing the quotient of \eqref{eq:approx_ln} and \eqref{eq:approx_diff}, for three values of $k$.} \label{fig:approximation_diff_and_log} \end{figure} Having Lemmas \ref{lemma:properties_Vn} and \ref{lemma:connect_to_ln} at our disposal, we are now ready to give the proof of Lemma \ref{lemma:lower_bound_V}. \begin{proof}[Proof of Lemma \ref{lemma:lower_bound_V}] In order to relate the recursion in \eqref{eq:voltages_distflow} to \eqref{eq:difference_W_j}, we write \eqref{eq:voltages_distflow} as \begin{align} (V_{j+1}-V_j)-(V_j-V_{j-1}) = \frac{k}{V_j},\quad j=1,2,\ldots,N-1\label{eq:Vn1} \end{align} and multiply both sides of \eqref{eq:Vn1} by \begin{align*} V_{j+1}-V_{j-1} = (V_{j+1}-V_j)+(V_j-V_{j-1}) \end{align*} to obtain \begin{align*} (V_{j+1}-V_j)^2-(V_j-V_{j-1})^2 = k\frac{V_{j+1}-V_{j-1}}{V_j}. 
\end{align*} Summing this over $j=1,2,\ldots,n$, we get \begin{align} (V_{n+1}-V_n)^2-(V_1-V_{0})^2 = k\sum_{j=1}^n \frac{V_{j+1}-V_{j-1}}{V_j}.\label{eq:connect_to_ln} \end{align} We proceed with rewriting Equation \eqref{eq:connect_to_ln} to an expression that is similar to the one we obtained for the sequence $W_j, j=1,2,\ldots,N$ in Equation \eqref{eq:W_j+1-W_j} using Lemma \ref{lemma:connect_to_ln}. Then, we have \begin{align} (V_{n+1}-V_n)^2 & = (V_1-V_{0})^2+k\sum_{j=1}^n \frac{V_{j+1}-V_{j-1}}{V_j} \nonumber\\ & = (V_1-V_{0})^2+k\sum_{j=1}^n \left\{\ln(V_{j+1})-\ln(V_{j-1})+\mathcal{O}\left(\left(\frac{\ln(j)}{j}\right)^3\right)\right\}.\label{eq:rewrite_V_j} \end{align} We observe a telescoping sum in the right-hand side of \eqref{eq:rewrite_V_j}, so we have \begin{align*} \sum_{j=1}^n \left\{\ln(V_{j+1})-\ln(V_{j-1})\right\} = \ln(V_{n+1})+\ln(V_n)-\left(\ln(V_1)+\ln(V_0)\right). \end{align*} Furthermore, we introduce the following notation: \begin{align*} w^2(k) & = (V_1-V_{0})^2-k\left(\ln(V_1)+\ln(V_{0}) \right)\nonumber \\ & = k^2-k\ln(1+k) \end{align*} and $R_j = \mathcal{O}\left(\left(\frac{\ln(j)}{j}\right)^3\right)$. Thus, we rewrite \eqref{eq:rewrite_V_j} to \begin{align*} (V_{n+1}-V_n)^2 & = w^2(k) + k(\ln(V_{n+1})+\ln(V_n))+\sum_{j=1}^n \mathcal{O}\left(\left(\frac{\ln(j)}{j}\right)^3\right) \\ & = w^2(k) + k(\ln(V_{n+1})+\ln(V_n))+\sum_{j=1}^n R_j. \end{align*} Recall that we want to derive a lower bound for the first order finite differences $V_{n+1}-V_n$. In order to do so, we use that $V_{n+1}\geq V_n$ (see \cite[Lemma 5.1]{Christianen2021}). Thus, \begin{align*} (V_{n+1}-V_n)^2 \geq w^2(k)+2k\ln(V_n)+\sum_{j=1}^n R_j. \end{align*} Since $\sum_{j=1}^{\infty}|R_j|<\infty$, we thus see that there is a constant $C$ such that \begin{align*} V_{n+1}-V_n \geq \left(C+2k\ln(V_n) \right)^{\frac{1}{2}}, \end{align*} as desired. 
\end{proof} To complete the proof of Lemma \ref{lemma:lower_bound_V}, we are left to prove Lemmas \ref{lemma:properties_Vn} and \ref{lemma:connect_to_ln}. This is done in Sections \ref{subsubsec:proof_lemma_properties_Vn} and \ref{subsubsec:proof_lemma_connect_to_ln}, respectively. \subsubsection{Proof of Lemma \ref{lemma:properties_Vn}}\label{subsubsec:proof_lemma_properties_Vn} \begin{proof}[Proof of Lemma \ref{lemma:properties_Vn}] The properties of the sequence $V_j, j=0,1,\ldots$ are established as follows. \begin{enumerate} \item We have from \eqref{eq:voltages_distflow} for $j=1,2,\ldots$, \begin{align} V_{j+1}-V_j = V_j-V_{j-1}+\frac{k}{V_j}\geq V_j-V_{j-1}.\label{eq:Vn_summing} \end{align} Hence, $V_{j+1}-V_j\geq V_1-V_0=(1+k)-1=k$ for $j=0,1,\ldots$. Then we get for $j=0,1,\ldots$ \begin{align*} V_{j+1} = V_j+(V_{j+1}-V_j)\geq V_j+k, \end{align*} and it follows from $V_0=1$ and induction that $V_j\geq 1+jk$ for $j=0,1,\ldots$. \item We have from the identity in \eqref{eq:Vn_summing} by summation that \begin{align*} V_{j+1}-V_j & = (V_1-V_{0}) + \sum_{i=1}^j \frac{k}{V_i}\\ & = \frac{k}{V_0} + \sum_{i=1}^j \frac{k}{V_i}\\ & = \sum_{i=0}^j \frac{k}{V_i}. \end{align*} If the latter expression remained bounded by some $B<\infty$ as $j\to\infty$, we would have $V_i\leq V_0 +iB$ for $i=0,1,\ldots$. But then $\sum_{i=0}^j \frac{k}{V_i}\geq \sum_{i=0}^{j}\frac{k}{V_0+iB}\to\infty$ as $j\to\infty$, contradicting the assumed boundedness. Hence, we must have that $\sum_{i=0}^j \frac{k}{V_i} \to \infty$ as $j\to\infty$. \item Combining the results of items (1) and (2) gives us the desired result.
Indeed, \begin{align*} V_{j+1}-V_j = \sum_{i=0}^j \frac{k}{V_i}\leq \sum_{i=0}^j \frac{k}{ik+1}, \end{align*} and \begin{align*} \sum_{i=0}^j \frac{k}{ik+1} & = k+\sum_{i=1}^j \frac{1}{i+1/k}\\ & \leq k + \int_{\frac{1}{2}+\frac{1}{k}}^{j+\frac{1}{2}+\frac{1}{k}}\frac{1}{x}dx \\ & = k+\left(\ln\left(j+\frac{1}{2}+\frac{1}{k}\right)-\ln\left(\frac{1}{2}+\frac{1}{k}\right) \right) \\ & = k+\ln\left(1+\frac{j}{\frac{1}{2}+\frac{1}{k}}\right)\\ & \leq k+\ln\left(1+jk\right). \end{align*} \item This is a direct consequence of the inequalities in items (1) and (3). Combining (1) and (3) gives \begin{align*} \frac{V_{j+1}-V_j}{V_j} \leq \frac{k+\ln(1+jk)}{1+jk}. \end{align*} Hence, $\frac{V_{j+1}-V_j}{V_j} = \mathcal{O}\left(\frac{\ln(j)}{j} \right)$. \end{enumerate} \end{proof} \subsubsection{Proof of Lemma \ref{lemma:connect_to_ln}}\label{subsubsec:proof_lemma_connect_to_ln} \begin{proof}[Proof of Lemma \ref{lemma:connect_to_ln}] We analyze the asymptotic behavior of $\frac{V_{j+1}-V_{j-1}}{V_j}$ as $j\to\infty$. Let, for $j=1,2,\ldots$, \begin{align*} X_j & = \frac{V_{j+1}}{V_j}-1 = \frac{V_{j+1}-V_j}{V_j}, \\ Y_j & = 1-\frac{V_{j-1}}{V_j} = \frac{V_j-V_{j-1}}{V_j}. \end{align*} Then, \begin{align} 0<X_j<1,\qquad 0<Y_j<1.\label{eq:bounds_X_Y} \end{align} Indeed, from Lemma \ref{lemma:properties_Vn}, items (1) and (3), \begin{align*} X_j = \frac{V_{j+1}-V_j}{V_j}\leq \begin{cases} & 1-\frac{1}{(k+1)^2}<1, \quad j=1, \\ & \frac{k}{1+jk}+\frac{\ln(1+jk)}{1+jk}\leq \frac{k}{1+jk} + \frac{1}{\exp(1)} \leq \frac{1}{2}+\frac{1}{\exp(1)}<1,\quad j=2,3,\ldots. \end{cases} \end{align*} Here it has been used that the function $y^{-1}\ln(y), y\geq 1,$ has a global maximum at $y=\exp(1)$ that equals $\exp(-1)$. The other inequalities follow since the sequence $V_j,j=0,1,\ldots$ is increasing (see \cite[Lemma 5.1]{Christianen2021}). Furthermore, we have \begin{align*} X_j+Y_j & = \frac{V_{j+1}-V_{j-1}}{V_j},\\ X_j-Y_j & = \frac{V_{j+1}-2V_j+V_{j-1}}{V_j} = \frac{k}{(V_j)^2}>0.
\end{align*} Therefore, \begin{align} \ln(V_{j+1})-\ln(V_{j-1}) & = \ln\left(\frac{V_{j+1}}{V_j}\right)-\ln\left(\frac{V_{j-1}}{V_j}\right) \nonumber\\ & = \ln(1+X_j)-\ln(1-Y_j)\nonumber\\ & = \left(X_j-\frac{X_j^2}{2}+\frac{X_j^3}{3}-\ldots \right) - \left( -Y_j-\frac{Y_j^2}{2}-\frac{Y_j^3}{3}-\ldots \right) \nonumber\\ & = (X_j+Y_j)-\frac{1}{2}(X_j^2-Y_j^2)+\sum_{i=2}^{\infty} \frac{1}{i+1}\left((-1)^i X_j^{i+1}+Y_j^{i+1} \right)\nonumber\\ & = \frac{V_{j+1}-V_{j-1}}{V_j}-\frac{k(V_{j+1}-V_{j-1})}{2(V_j)^3}+\sum_{i=2}^{\infty} \frac{1}{i+1}\left((-1)^i X_j^{i+1}+Y_j^{i+1} \right),\label{eq:ln_differences} \end{align} where the bounds in \eqref{eq:bounds_X_Y} ensure convergence of the infinite series. Since $0<Y_j<X_j$, we have \begin{align*} \sum_{i=2}^{\infty} \frac{1}{i+1}\left|(-1)^iX_j^{i+1}+Y_j^{i+1}\right| &\leq \sum_{i=2}^{\infty} \frac{2}{i+1}X_j^{i+1}\\ & = X_j^3 \sum_{i=2}^{\infty} \frac{2}{i+1}X_j^{i-2}\\ & = \mathcal{O}\left(X_j^3 \right) = \mathcal{O}\left(\left(\frac{V_{j+1}-V_j}{V_j}\right)^3 \right). \end{align*} Thus, we get \begin{align*} \frac{V_{j+1}-V_{j-1}}{V_j}& =\ln(V_{j+1})-\ln(V_{j-1})+\mathcal{O}\left(\frac{k(V_{j+1}-V_{j-1})}{2{V_j}^3} + \left(\frac{V_{j+1}-V_{j}}{V_j} \right)^3 \right)\\ & = \ln(V_{j+1})-\ln(V_{j-1})+\mathcal{O}\left(\left(\frac{\ln(j)}{j} \right)^3\right). \end{align*} In the last line, we used Lemma \ref{lemma:properties_Vn}, item (4). \end{proof} \subsection{Proof of Lemma \ref{lemma:equivalence}}\label{subsec:proof_lemma_equivalence} \begin{proof}[Proof of Lemma \ref{lemma:equivalence}] We establish the (non-trivial) implication from (1) to (2). Assume there is $n\geq n_0(k)$ such that $V_n\geq W_n$. We claim that $V_j\geq W_j$ for all $j\geq n$. Indeed, if there is an $n_2>n$ such that $V_{n_2}<W_{n_2}$, we let $n_3:=\max\{j:n\leq j\leq n_2,V_j\geq W_j \}$. Then $V_{n_3}\geq W_{n_3}$ and $V_j<W_j$ for $n_3< j\leq n_2$.
However, since $\psi(j)$ is strictly decreasing and $n_3+1>n_0(k)$, we have \begin{align*} V_{n_3+1}-V_{n_3} & \geq \left(C+2k\ln(V_{n_3}) \right)^{\frac{1}{2}}\\ & \geq \left(C-1+2k\ln(V_{n_3}) \right)^{\frac{1}{2}}\\ & \geq \left(\psi(n_0(k))+2k\ln(W_{n_3+1}) \right)^{\frac{1}{2}}\\ & \geq \left(\psi(n_3+1)+2k\ln(W_{n_3+1}) \right)^{\frac{1}{2}}\\ & \geq W_{n_3+1}-W_{n_3}, \end{align*} which implies $V_{n_3+1}\geq W_{n_3+1}$. This contradicts the definition of $n_3$. Since the choice of $n_2$ is arbitrary, we have that $V_j\geq W_j$ for all $j\geq n$. The implication from (2) to (1) is immediate. \end{proof} \subsection{Proof of Lemma \ref{lemma:V_j_geq_W_j}}\label{subsec:proof_lemma_V_j_geq_W_j} \begin{proof}[Proof of Lemma \ref{lemma:V_j_geq_W_j}] Let $n=2,3,\ldots$ and $j\geq n$. Then, \begin{align} V_j & = V_n + \sum_{i=n}^{j-1} (V_{i+1}-V_i)\nonumber\\ & = V_n + \sum_{i=n}^{j-1} \left(\sum_{l=n}^i \left[(V_{l+1}-V_l)-(V_l-V_{l-1})\right]+(V_n-V_{n-1}) \right)\nonumber\\ & = V_n + (j-n)(V_n-V_{n-1})+\sum_{i=n}^{j-1}\left(\sum_{l=n}^{i}(V_{l+1}-2V_l+V_{l-1}) \right) \nonumber\\ & = V_n + (j-n)(V_n-V_{n-1})+\sum_{i=n}^{j-1}\left(\sum_{l=n}^{i}\frac{k}{V_l} \right).\label{eq:V_j_expression} \end{align} Now suppose that there is a $n=2,3,\ldots$ such that \begin{align} V_j \geq W_j,\quad \text{for all}\ j\geq n.\label{eq:V_j_boundary} \end{align} Then, \begin{align} \frac{1}{V_l}\leq \frac{1}{W_l} = \frac{1}{l(2k\ln(l))^{\frac{1}{2}}},\quad l=n,n+1,\ldots,\label{eq:W_l} \end{align} and so by \eqref{eq:V_j_expression} for all $j\geq n$, \begin{align} V_j \leq V_n+(j-n)(V_n-V_{n-1})+\frac{k}{(2k)^{\frac{1}{2}}}\sum_{i=n}^{j-1}\left(\sum_{l=n}^{i} \frac{1}{l(\ln(l))^{\frac{1}{2}}} \right).\label{eq:V_j_expression_1} \end{align} We use the Euler-Maclaurin formula in its simplest form: for $h\in C^2[n,\infty)$, we have \begin{multline*} \sum_{l=n}^i h(l) = \int_n^i h(x)dx + \frac{1}{2}(h(i)+h(n)) + \frac{1}{12}(h'(i)-h'(n))-\int_n^i 
h''(x)\frac{B_2(x-\floor*{x})}{2}dx, \end{multline*} where $B_2(t) = (t-\frac{1}{2})^2-\frac{1}{12}$ is the Bernoulli polynomial of degree 2 that satisfies $|B_2(t)|\leq \frac{1}{6}, 0\leq t\leq 1$. Using this with $h(x) = \frac{1}{x(\ln(x))^{\frac{1}{2}}}, x\geq 2$, so that \begin{align*} h'(x) & = -\frac{1}{x^2(\ln(x))^{\frac{1}{2}}}-\frac{1}{2x^2(\ln(x))^{\frac{3}{2}}},\\ h''(x) & = \frac{2}{x^3(\ln(x))^{\frac{1}{2}}}+\frac{\frac{3}{2}}{x^3(\ln(x))^{\frac{3}{2}}}+\frac{\frac{3}{4}}{x^3(\ln(x))^{\frac{5}{2}}},\\ \int_n^i h(x) dx & = \int_n^i \frac{1}{x(\ln(x))^{\frac{1}{2}}}dx = 2(\ln(i))^{\frac{1}{2}}-2(\ln(n))^{\frac{1}{2}}, \end{align*} we get \begin{align} \sum_{l=n}^{i} \frac{1}{l(\ln(l))^{\frac{1}{2}}} = 2(\ln(i))^{\frac{1}{2}}-2(\ln(n))^{\frac{1}{2}}+\mathcal{O}\left(\frac{1}{n(\ln(n))^{\frac{1}{2}}} \right).\label{eq:euler_maclaurin_1} \end{align} Next, from the Euler-Maclaurin formula with $h(x)=(\ln(x))^{\frac{1}{2}}$, we have \begin{align} \sum_{i=n}^{j-1}(\ln(i))^{\frac{1}{2}} = \int_n^{j-1}(\ln(x))^{\frac{1}{2}}dx + \mathcal{O}\left((\ln(j))^{\frac{1}{2}} \right),\label{eq:euler_maclaurin_2} \end{align} and obviously \begin{align} \sum_{i=n}^{j-1}\left((\ln(n))^{\frac{1}{2}}+\mathcal{O}\left(\frac{1}{n(\ln(n))^{\frac{1}{2}}} \right) \right) = (j-n)\left((\ln(n))^{\frac{1}{2}}+\mathcal{O}\left(\frac{1}{n(\ln(n))^{\frac{1}{2}}} \right) \right).\label{eq:V_n_15} \end{align} Thus, from \eqref{eq:euler_maclaurin_1} and \eqref{eq:euler_maclaurin_2}, we can write the right-hand side of \eqref{eq:V_j_expression_1} as \begin{multline*} V_n+(j-n)(V_n-V_{n-1})+\\ \frac{k}{(2k)^{\frac{1}{2}}}\left(2\int_n^{j-1}(\ln(x))^{\frac{1}{2}}dx + \mathcal{O}\left((\ln(j))^{\frac{1}{2}} \right)+(j-n)\left((\ln(n))^{\frac{1}{2}}+\mathcal{O}\left(\frac{1}{n(\ln(n))^{\frac{1}{2}}}\right) \right) \right),\end{multline*} which simplifies to \begin{align} (2k)^{\frac{1}{2}}\int_n^{j-1}(\ln(x))^{\frac{1}{2}}dx + \mathcal{O}(j).\label{eq:V_n_16} \end{align} Next, we use the 
substitution $u := (\ln(x))^{\frac{1}{2}}$ and partial integration, to obtain \begin{align*} \int_n^{j-1} (\ln(x))^{\frac{1}{2}}dx & = \int_{(\ln(n))^{\frac{1}{2}}}^{(\ln(j-1))^{\frac{1}{2}}} u\cdot 2u\exp(u^2)du \\ & = \left[u\exp(u^2)\right]_{{(\ln(n))^{\frac{1}{2}}}}^{{(\ln(j-1))^{\frac{1}{2}}}}-\int_{{(\ln(n))^{\frac{1}{2}}}}^{{(\ln(j-1))^{\frac{1}{2}}}} \exp(u^2)du \\ & = (j-1)(\ln(j-1))^{\frac{1}{2}}-n(\ln(n))^{\frac{1}{2}}-\int_{(\ln(n))^{\frac{1}{2}}}^{(\ln(j-1))^{\frac{1}{2}}} \exp(u^2)du. \end{align*} Using the second inequality in \eqref{eq:inequalities_int_exp} of Lemma \ref{lemma:ineq_I(y)}, we conclude that \begin{align*} \int_n^{j-1} (\ln(x))^{\frac{1}{2}}dx = j(\ln(j))^{\frac{1}{2}}+\mathcal{O}\left(\frac{j}{(\ln(j))^{\frac{1}{2}}}\right),\quad j\to\infty. \end{align*} It thus follows from \eqref{eq:V_j_expression_1}, \eqref{eq:V_n_15} and \eqref{eq:V_n_16} that \begin{align} V_j \leq (2k)^{\frac{1}{2}}j(\ln(j))^{\frac{1}{2}}+\mathcal{O}(j).\label{eq:V_j_inequality_order_j} \end{align} Hence, from \eqref{eq:W_l} and \eqref{eq:V_j_inequality_order_j}, \begin{align*} V_j & = W_j\left(1+\mathcal{O}\left(\frac{1}{(2k\ln(j))^{\frac{1}{2}}}\right) \right)\\ & = W_j\left(1+\smallO(1)\right),\quad j\to\infty. \end{align*} In a similar fashion, if there is an $n=2,3,\ldots$ such that \begin{align*} V_j\leq j(2k\ln(j))^{\frac{1}{2}},\quad \text{for all}\ j\geq n,\end{align*} then \begin{align*} V_j &\geq j(2k\ln(j))^{\frac{1}{2}}+\mathcal{O}(j), \end{align*} which also yields \begin{align*} V_j &= W_j(1+\smallO(1)), \quad j\to\infty. \end{align*} \end{proof} \section{Conclusion}\label{sec:conclusion} Continuous and discrete Emden-Fowler type equations appear in many fields such as mathematical physics, astrophysics and chemistry, but also in electrical engineering, more specifically in a popular power flow model.
The specific Emden-Fowler equation we study appears as a discrete recursion that governs the voltages on a line network and as a continuous approximation of these voltages. We show that the asymptotic behavior of the solution of the continuous Emden-Fowler equation \eqref{eq:voltages_approx}, i.e., the approximation of the discrete recursion, and the asymptotic behavior of the solution of its discrete counterpart \eqref{eq:voltages_distflow} are the same. \section*{Acknowledgments} This research is supported by the Dutch Research Council through the TOP programme under contract number 613.001.801. \begin{appendices} \section{Proofs for Section \ref{SUBSEC:ASSOCIATED_PROPERTIES}}\label{sec:existence_uniqueness_w} \subsection{Proof of Lemma \ref{lemma:alternative_f}} \begin{lemma}\label{lemma:alternative_f} Let $f(t)$ be given by \eqref{eq:voltages_approx}. Then, we can alternatively characterize $f(t)$ by \begin{align} \frac{f'(t)}{(w^2+2k\ln(f(t)/y))^{\frac{1}{2}}} = 1,\quad t\geq 0,\label{eq:first_alternative_f} \end{align} and \begin{align} \int_{(W^2-\ln(y))^{\frac{1}{2}}}^{(W^2+\ln(f(t)/y))^{\frac{1}{2}}}\exp(v^2)dv = \frac{t}{y}\sqrt{\frac{1}{2}k}\exp(W^2),\quad t\geq 0,\label{eq:second_alternative_f} \end{align} where $W^2 := \frac{w^2}{2k}$. \end{lemma} \begin{proof} From \eqref{eq:voltages_approx}, we get \begin{align} f'(u)f''(u) = kf'(u)/f(u),\quad 0\leq u\leq t.\label{eq:f4} \end{align} Integrating Equation \eqref{eq:f4} over $u$ from 0 to $t$, using $f(0)=y, f'(0)=w$, we get \begin{align*} \int_0^t f'(u)f''(u)du = \frac{1}{2}(f'(t))^2 - \frac{1}{2}w^2 = \int_0^t \frac{kf'(u)}{f(u)}du = k\ln(f(t)/y). \end{align*} Hence, for $t>0$, \begin{align*} \frac{f'(t)}{\left(w^2+2k\ln(f(t)/y)\right)^{\frac{1}{2}}} = 1, \end{align*} as desired.
Integrating $f'(u)/(w^2+2k\ln(f(u)/y))^{\frac{1}{2}}=1$ from $u=0$ to $u=t$, while substituting $s=f(u)$, we get \begin{align} \int_0^t \frac{f'(u)}{(w^2+2k\ln(f(u)/y))^{\frac{1}{2}}} du & = \int_1^{f(t)} \frac{1}{(w^2+2k\ln(s/y))^{\frac{1}{2}}}ds = t.\label{eq:alternative_rep_f} \end{align} By introduction of $W^2 = \frac{w^2}{2k}$, the expression becomes \begin{align} \frac{1}{\sqrt{2k}}\int_1^{f(t)} \frac{1}{(W^2+\ln(s/y))^{\frac{1}{2}}}ds = t.\label{eq:integral_f} \end{align} Substituting $v=(W^2+\ln(s/y))^{\frac{1}{2}}, s=y\exp(v^2-W^2),ds=2s(W^2+\ln(s/y))^{\frac{1}{2}}dv$ in the integral \eqref{eq:integral_f}, we get \begin{align*} \int_{(W^2-\ln(y))^{\frac{1}{2}}}^{(W^2+\ln(f(t)/y))^{\frac{1}{2}}}\exp(v^2)dv = \frac{1}{2}\exp(W^2)\sqrt{2k}\frac{t}{y} = \frac{t}{y}\sqrt{\frac{1}{2}k}\exp(W^2),\quad t\geq 0, \end{align*} as desired. This concludes the proof. \end{proof} \subsection{Proof of Lemma \ref{lemma:existence_uniqueness_w}} \begin{lemma}\label{lemma:existence_uniqueness_w} Let $k>0$. There exists a unique $w\geq 0$ such that the solution of $f(t)f''(t)=k,\ t\geq 0; f(0)=1,f'(0)=w$ satisfies $f(1)=1+k$. \end{lemma} \begin{proof}[Proof] Again, we rely on the representation of $f$ in \eqref{eq:f(1)_condition_2}. Thus the condition $f(1)=1+k$ can be written as \begin{align} \int_1^{1+k}\frac{1}{(w^2+2k\ln(s))^{\frac{1}{2}}}ds = 1.\label{eq:decrease_in_w} \end{align} The left-hand side of \eqref{eq:decrease_in_w} decreases in $w\geq 0$ from a value greater than $\sqrt{2}$ to 0 as $w$ increases from $w=0$ to $w=\infty$. Indeed, as to $w=0$ we consider \begin{align*} F(k) = \int_1^{1+k}\frac{1}{(\ln(s))^{\frac{1}{2}}}ds,\quad k\geq 0. \end{align*} Then $F(0)=0$ and $F'(k_1) = (\ln(1+k_1))^{-\frac{1}{2}}>(k_1)^{-\frac{1}{2}},\quad k_1>0$, since $0<\ln(1+k_1)<k_1$ for $k_1>0$. Hence, \begin{align*} F(k)=F(0)+\int_0^k F'(k_1)dk_1 > \int_0^k \frac{1}{\sqrt{k_1}}dk_1 = 2\sqrt{k},\quad k>0.
\end{align*} This implies that \begin{align*} \int_1^{1+k} \frac{1}{\sqrt{2k\ln(s)}}ds = \frac{F(k)}{\sqrt{2k}}>\sqrt{2},\quad k>0. \end{align*} That the left-hand side of \eqref{eq:decrease_in_w} decreases strictly in $w\geq 0$, to the value 0 at $w=\infty$, is obvious. We conclude that for any $k>0$ there is a unique $w>0$ such that \eqref{eq:decrease_in_w} holds. \end{proof} \subsection{Proof of Theorem \ref{thm:cases_k}} \begin{proof} From the definition of $F$ in \eqref{eq:F(t,k)}, it follows that $F(t,k)=0$ if and only if $f(t)=g(t)$. Furthermore, we have \begin{align} f(t)\geq g(t), 1\leq t < \infty \iff \max_{t\geq 1} F(t,k)\leq 0\label{eq:equivalence_maximum}. \end{align} By Lemma \ref{lemma:F(t,k)}, we have, for any $k$, $\max_{t\geq 1}F(t,k) = F(t_0(k),k)$ and by Lemma \ref{lemma:F(t,k)_decreasing}, we have that $F(t_0(k),k)$ is a strictly decreasing function of $k$. Notice that, by \eqref{eq:F(t,k)}, we can alternatively write, \begin{align*} F(t_0(k),k) = \int_{(W^2+\ln(f(t_0(k))))^{\frac{1}{2}}}^{(W^2+\ln(g(t_0(k))))^{\frac{1}{2}}}\exp(v^2)dv. \end{align*} Thus, by Lemma \ref{lemma:positive_small_k}, we have on the one hand, for small $k$, that $F(t_0(k),k)>0$, and by Lemma \ref{lemma:negative_large_k}, we have on the other hand, for large $k$, that $F(t_0(k),k)\leq 0$. Therefore, we conclude that $F(t_0(k),k)\leq 0$ is equivalent to $k\geq k_c$. \end{proof} \subsection{Proof of Lemma \ref{lemma:F(t,k)}} \begin{lemma}\label{lemma:F(t,k)} Let $F(t,k)$ be given as in \eqref{eq:F(t,k)}. Then, for any $k$, \begin{align*} \max_{t\geq 1} F(t,k) = F(t_0(k),k), \end{align*} where $t_0(k)$ is given by \eqref{eq:equation_psi}. 
\end{lemma} \begin{proof} To find, for a given $k>0$, the maximum of $F(t,k)$ over $t\geq 1$, we compute from \eqref{eq:F(t,k)} \begin{align} \frac{\partial F}{\partial t}(t,k) & = -\sqrt{\frac{k}{2}}\exp(W^2)+\frac{d}{dt}\left((W^2+\ln(g(t)))^{\frac{1}{2}}\right)\exp(W^2+\ln(g(t)))\nonumber\\ & = \frac{1}{2}(W^2+\ln(g(t)))^{-\frac{1}{2}}\frac{g'(t)}{g(t)}\exp(W^2+\ln(g(t)))-\sqrt{\frac{k}{2}}\exp(W^2)\nonumber\\ & = \exp(W^2)\left(\frac{1}{2}(W^2+\ln(g(t)))^{-\frac{1}{2}}\frac{g'(t)}{g(t)}\exp(\ln(g(t)))-\sqrt{\frac{k}{2}} \right)\nonumber\\ & = \exp(W^2)\sqrt{\frac{k}{2}}\left(\left(\sqrt{\frac{1}{2k}}\sqrt{\frac{g'(t)^2}{W^2+\ln(g(t))}} \right)-1 \right)\nonumber\\ & = \exp(W^2)\sqrt{\frac{k}{2}}\left(\left(\sqrt{\frac{\left(\frac{g'(t)}{\sqrt{2k}}\right)^2}{W^2+\ln(g(t))}} \right)-1 \right).\label{eq:intermediate_partial_Ft} \end{align} Then, using \eqref{eq:diff_g(t)} in \eqref{eq:intermediate_partial_Ft}, we get \begin{align} \frac{\partial F}{\partial t}(t,k)& = \exp(W^2)\sqrt{\frac{k}{2}}\left(\left(\sqrt{\frac{\frac{\psi(t)}{2k}+\ln(g(t))}{W^2+\ln(g(t))}} \right)-1 \right).\label{eq:F(t,k)alternative} \end{align} Then, $\frac{\partial F}{\partial t}(t,k) = 0$ if and only if $\frac{\psi(t)}{2k} = W^2$ or in other words, if and only if $\psi(t)=w^2$. Recall from \eqref{eq:equation_psi} that the unique solution $t>1$ of the equation $\psi(t)=w^2$ is given by $t_0(k)$. Thus, we have \begin{align*} \frac{\partial F}{\partial t}(t_0(k),k) = 0.\end{align*} Since $\psi(t)$ is strictly decreasing in $t>1$, while $W^2$ does not depend on $t$, we have from \eqref{eq:F(t,k)alternative} that $\frac{\partial^2F}{\partial t^2}(t_0(k),k)<0$. Hence, for $k>0$, \begin{align*} \max_{t\geq 1} F(t,k) = F(t_0(k),k), \end{align*} which completes the proof. \end{proof} \subsection{Proof of Lemma \ref{lemma:F(t,k)_decreasing}} \begin{lemma}\label{lemma:F(t,k)_decreasing} Let $F(t,k)$ be given as in \eqref{eq:F(t,k)}. 
Then, $F(t_0(k),k)$ is a strictly decreasing function of $k$, i.e., \begin{align*} \frac{\partial F}{\partial k}(t_0(k),k)<0, \quad k>0. \end{align*} \end{lemma} \begin{proof} We compute $\frac{\partial F}{\partial k}(t,k)$ for any $t>1$, and set $t=t_0(k)$ in the resulting expression. Thus, from \eqref{eq:F(t,k)}, \begin{multline*} \frac{\partial F}{\partial k}(t,k) = \frac{1}{2}(W^2+\ln(g(t)))^{-\frac{1}{2}}\frac{d}{dk}\left(W^2+\ln(\sqrt{k}t(2\ln(t))^{\frac{1}{2}}) \right)\exp(W^2+\ln(g(t)))-\\ -W'\exp(W^2)-\frac{t}{2\sqrt{2k}}\exp(W^2)-t\sqrt{\frac{1}{2}k}(W^2)'\exp(W^2). \end{multline*} Simplifying this expression yields \begin{multline*} \frac{\partial F}{\partial k}(t,k) = \exp(W^2)\left(\frac{1}{2}g(t)(W^2+\ln(g(t)))^{-\frac{1}{2}}\left((W^2)'+\frac{1}{2k}\right)-W'-\frac{t}{2\sqrt{2k}}-t\sqrt{\frac{1}{2}k}(W^2)'\right). \end{multline*} From $(W^2)'=2WW'$, we then have \begin{align} \frac{\partial F}{\partial k}(t,k) & = \exp(W^2)\left(\frac{(WW'+\frac{1}{4k})g(t)}{(W^2+\ln(g(t)))^{\frac{1}{2}}}-W'-\frac{t}{2\sqrt{2k}}-t\sqrt{2k}WW' \right)\nonumber\\ & = \exp(W^2)\left(\left(\frac{g(t)}{(W^2+\ln(g(t)))^{\frac{1}{2}}}-t\sqrt{2k} \right)(WW'+\frac{1}{4k})-W' \right).\label{eq:partial_F_2} \end{align} We next take $t=t_0(k)$ in \eqref{eq:partial_F_2}, so that we can use that \begin{align*} g'(t_0(k))=(w^2+2k\ln(g(t_0(k))))^{\frac{1}{2}} \end{align*} and $W^2 = \frac{w^2}{2k}$, and observe that \begin{align*} \frac{g(t)}{(W^2+\ln(g(t)))^{\frac{1}{2}}}-t\sqrt{2k} & = \sqrt{2k}\left(\frac{g(t)}{(w^2+2k\ln(g(t)))^{\frac{1}{2}}}-t \right)\\ & = \sqrt{2k}\left(\frac{g(t)}{g'(t)}-t \right) = \frac{-t\sqrt{2k}}{1+2\ln(t)},\quad t = t_0(k). \end{align*} We claim that \begin{align*} \frac{\partial F}{\partial k}(t_0(k),k) & = -\exp(W^2)\left(\frac{t\sqrt{2k}}{1+2\ln(t)}(WW'+\frac{1}{4k}+W') \right),\quad t = t_0(k), \end{align*} is negative. Indeed, $W(k)>0$ and $W(k)$ increases strictly in $k>0$, so that $W'(k)>0$ and hence $WW'+\frac{1}{4k}+W'>0$, while the factor $\frac{t\sqrt{2k}}{1+2\ln(t)}$ is positive for $t>1$. The monotonicity of $W(k)$ is proven in Lemma \ref{lemma:increasing_W}.
We conclude that $F(t_0(k),k)$ is a strictly decreasing function of $k>0$. \end{proof} \begin{lemma}\label{lemma:increasing_W} Let $f(t)$ be given by \eqref{eq:f} with initial conditions $f(0)=1$ and $f'(0)=w$, where $w$ is such that $f(1)=1+k$. Furthermore, let $W(k)=\frac{w}{\sqrt{2k}}$. Then, $W(k)$ is a strictly increasing function of $k$. \end{lemma} \begin{proof} First, by Equation \eqref{eq:alternative_rep_f} with $y=1$, we get \begin{align} \int_1^{f(t)} \frac{1}{(w^2+2k\ln(s))^{\frac{1}{2}}}ds = t.\label{eq:third_alternative_f} \end{align} Second, from the fundamental theorem of calculus, we have \begin{align} f(1) = 1+w+\int_0^1\left(\int_0^s \frac{kdu}{f(u)} \right)ds. \label{eq:f(1)_condition} \end{align} Now, we derive the desired monotonicity property. We require $f(1)=1+k$. We get from \eqref{eq:f(1)_condition}, \begin{align} \frac{w}{k} = 1-\int_0^1\left(\int_0^s \frac{du}{f(u)} \right) ds.\label{eq:w_frac_k} \end{align} From, \eqref{eq:third_alternative_f}, with $t=1$ and $f(1)=1+k$, we get \begin{align} \int_{1}^{1+k} \frac{1}{(w^2+2k\ln(s))^{\frac{1}{2}}}ds = 1.\label{eq:f(1)_condition_2} \end{align} From \eqref{eq:f(1)_condition_2}, noting that $w=w(k)$, we get then \begin{align} 0 & = \frac{d}{dk}\left(\int_1^{1+k} \frac{1}{(w^2(k)+2k\ln(s))^{\frac{1}{2}}}ds \right)\nonumber\\ & = \frac{1}{(w^2(k)+2k\ln(1+k))^{\frac{1}{2}}}+\int_1^{1+k} -\frac{1}{2}\frac{2w(k)w'(k)+2\ln(s)}{(w^2(k)+2k\ln(s))^{\frac{3}{2}}}ds \nonumber\\ & = \frac{1}{(w^2(k)+2k\ln(1+k))^{\frac{1}{2}}}-w(k)w'(k)\int_1^{1+k} \frac{1}{(w^2(k)+2k\ln(s))^{\frac{3}{2}}}ds - \nonumber \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad - \int_1^{1+k}\frac{\ln(s)}{(w^2(k)+2k\ln(s))^{\frac{3}{2}}}ds.\label{eq:diff_calculus} \end{align} Hence, rewriting \eqref{eq:diff_calculus} yields, \begin{multline} w(k)w'(k)\int_1^{1+k} \frac{1}{(w^2(k)+2k\ln(s))^{\frac{3}{2}}}ds = \frac{1}{(w^2(k)+2k\ln(1+k))^{\frac{1}{2}}} \\ - \int_1^{1+k} 
\frac{\ln(s)}{(w^2(k)+2k\ln(s))^{\frac{3}{2}}}ds.\label{eq:wkw_primek} \end{multline} Consider the last term in \eqref{eq:wkw_primek}. We have for $1\leq s\leq 1+k$, \begin{align} \frac{\ln(s)}{(w^2(k)+2k\ln(s))^{\frac{3}{2}}} & = \frac{\ln(s)}{w^2(k)+2k\ln(s)}\frac{1}{(w^2(k)+2k\ln(s))^{\frac{1}{2}}} \nonumber\\ & \leq \frac{\ln(1+k)}{w^2(k)+2k\ln(1+k)}\frac{1}{(w^2(k)+2k\ln(s))^{\frac{1}{2}}}.\label{ineq:lns} \end{align} Therefore, by integrating over the inequality in \eqref{ineq:lns}, we get \begin{align*} \int_1^{1+k} \frac{\ln(s)}{(w^2(k)+2k\ln(s))^{\frac{3}{2}}}ds & \leq \frac{\ln(1+k)}{w^2(k)+2k\ln(1+k)}\int_1^{1+k}\frac{1}{(w^2(k)+2k\ln(s))^{\frac{1}{2}}}ds \nonumber\\ & = \frac{\ln(1+k)}{w^2(k)+2k\ln(1+k)}, \end{align*} where we used \eqref{eq:f(1)_condition_2}. Therefore, see \eqref{eq:wkw_primek}, \begin{align} w(k)w'(k)\int_1^{1+k} \frac{1}{(w^2(k)+2k\ln(s))^{\frac{3}{2}}}ds \geq \frac{1}{(w^2(k)+2k\ln(1+k))^{\frac{1}{2}}}-\frac{\ln(1+k)}{w^2(k)+2k\ln(1+k)}>0,\label{eq:wkw_prime_2} \end{align} where the latter inequality follows from \begin{align*} (w^2(k)+2k\ln(1+k))^{\frac{1}{2}} > (2k\ln(1+k))^{\frac{1}{2}} > \ln(1+k), \end{align*} since $u>\ln(1+u)$ for $u>0$. We conclude from \eqref{eq:wkw_prime_2} that $w(k)$ strictly increases in $k>0$. Next, we consider for a fixed $t>0$ the identity \eqref{eq:alternative_rep_f}. For any $s>1$, the integrand $\left(w^2(k)+2k\ln(s)\right)^{-\frac{1}{2}}$ decreases strictly in $k>0$, and hence $f(t)=f(t;k)$ increases strictly in $k>0$, since $t>0$ is fixed. As a consequence, we conclude from \eqref{eq:w_frac_k} that $w(k)/k$ strictly increases in $k>0$ since $1/f(u)$ strictly decreases in $k>0$ for any $u\in(0,1)$. \end{proof} \subsection{Proof of Lemma \ref{lemma:positive_small_k}} \begin{lemma}\label{lemma:positive_small_k} Let $F(t,k)$ be given as in \eqref{eq:F(t,k)}. Then, for small $k$, we have that $F(t_0(k),k)>0$. 
\end{lemma} \begin{proof} We have for $t>0$, \begin{align*} f(t)& =f(0)+tf'(0)+\frac{1}{2}t^2f''(\xi_t)\\ & = 1+tw+\frac{1}{2}t^2\frac{k}{f(\xi_t)}, \end{align*} where $\xi_t$ is a number between $0$ and $t$. Since $f(1)=1+k$ and $f(\xi_t)\geq 1 > 0$, it follows that $w\leq k$. Therefore, \begin{align*} f\left(\frac{1}{\sqrt{k}}\right) \leq 1 + \frac{w}{\sqrt{k}} + \frac{1}{2k}k \leq \frac{3}{2}+\sqrt{k}. \end{align*} On the other hand \begin{align*} g\left(\frac{1}{\sqrt{k}}\right) = \frac{1}{\sqrt{k}}\left(2k\ln(\frac{1}{\sqrt{k}}) \right)^{\frac{1}{2}} = (-\ln(k))^{\frac{1}{2}}, \end{align*} and this exceeds $\frac{3}{2}+\sqrt{k}$ when $k$ is small enough. Numerically, by solving the equation $(-\ln(k))^{\frac{1}{2}} = \frac{3}{2}+\sqrt{k}$ for $k>0$, we find that $k<0.05$ is small enough. We conclude from \eqref{eq:F(t,k)} that $F(t_0(k),k)\geq F\left(\frac{1}{\sqrt{k}},k\right)>0$ when $k$ is small. \end{proof} \subsection{Proof of Lemma \ref{lemma:negative_large_k}} \begin{lemma}\label{lemma:negative_large_k} Let $F(t,k)$ be given as in \eqref{eq:F(t,k)}. Then, for large $k$, we have that $F(t_0(k),k)\leq 0$. \end{lemma} \begin{proof} We show that $f(t)\geq g(t)$ for all $t\geq 1$ when $k$ is large enough. We have $f(1)=1+k>0=g(1)$. Now suppose that there is a $t>1$ such that $f(t)<g(t)$. Then there is also a $t_1>1$ such that $f(t_1)=g(t_1)$ and $f'(t_1)\leq g'(t_1)$. We infer, by the derivatives of the functions $f$ and $g$ given in Equations \eqref{eq:first_alternative_f} and \eqref{eq:diff_g(t)}, with \eqref{eq:psi}, from $f(t_1)=g(t_1)$ and $f'(t_1)\leq g'(t_1)$, that \begin{align} w^2 \leq 2k+\frac{k}{2\ln(t)}-k\ln(2k\ln(t)) = \psi(t) \quad \text{at}\ t=t_1.\label{eq:rhs_psi} \end{align} At the same time, we have by convexity of $f(t), 0\leq t<\infty$, and $f(1)=1+k$ that \begin{align*} f(t)\geq 1+tk,\quad t\geq 1. 
\end{align*} Hence, when $\frac{1}{2}k\geq \ln(t)$, we have \begin{align*} f(t)\geq 1+tk> tk = t\left(2k\cdot\frac{1}{2}k\right)^{\frac{1}{2}} \geq t(2k\ln(t))^{\frac{1}{2}} = g(t). \end{align*} Since $f(t_1)=g(t_1)$, we thus have that $t_1>\exp(\frac{1}{2}k)$. The right-hand side of \eqref{eq:rhs_psi} decreases in $t>1$, since the function $\psi(t)$ is strictly decreasing, and its value at $t=t_1$ is therefore less than \begin{align*} 2k+\frac{k}{2\cdot \frac{1}{2}k}-k\ln(2k\cdot\frac{1}{2}k) = 2k+1-2k\ln(k). \end{align*} Since $2k+1-2k\ln(k)<0$ for large $k$, \eqref{eq:rhs_psi} cannot hold for large $k$. Numerically, by solving the equation $2k+1-2k\ln(k)=0$ for $k>0$, we find that $k>3.2$ is large enough. This gives the result. \end{proof} \subsection{Proof of Theorem \ref{thm:bounds_f/g}} \begin{proof} Let $f(t)=f(t;k)$ be given by \eqref{eq:f} with initial conditions $f(0)=1,f'(0)=w$ such that $f(1)=1+k$, and let $g(t)=g(t;k)$ be given by \eqref{eq:f(t)_approx}. Before we turn to the proof of inequalities \eqref{eq:lowerbound_f_g} and \eqref{eq:upperbound_f_g}, we first state some numerical results obtained by Newton's method: the unique number $k_c$ at which $\max_{t\geq 1}F(t,k)$ changes sign (equivalently, the threshold above which $f(t)\geq g(t)$ for all $t\geq 1$) is given by $k_c=1.0384$, the corresponding value of $w$ such that $f(1)=1+k_c$ is given by $w(k_c)=0.6218$, and the corresponding solution to the equation $\psi(t_0(k_c)) = w(k_c)^2$ is given by $t_0(k_c)=t_2(k_c)=18.3798$. Furthermore, by Newton's method, we have that \begin{align*} \frac{f_0(x)}{g(x;1)}\leq 1, x_1\leq x\leq x_2;\quad \frac{f_0(x)}{g(x;1)}>1, 1\leq x<x_1\ \text{or}\ x>x_2, \end{align*} where $x_1 = 2.4556$ and $x_2 = 263.0304$, and $g(x;k)=(2k)^{\frac{1}{2}}x(\ln(x))^{\frac{1}{2}}$.
Additionally, the minimum of the ratio of $f_0(x)$ and $g(x;1)$ is given by \begin{align} \min_{x\geq 1} \frac{f_0(x)}{g(x;1)} = \min_{x_1\leq x\leq x_2} \frac{f_0(x)}{x(2\ln(x))^{\frac{1}{2}}} \approx 0.8829,\label{eq:minimum_ratio_f_g} \end{align} and is attained at $x_{\text{min}} = 5.7889$. The maximum of the ratio of $f_0(x)$ and $g(x;1)$ is given by \begin{align*} \max_{x\geq 1} \frac{f_0(x)}{g(x;1)} \approx 1.0223, \end{align*} and is attained at $x_{\text{max}} = 380223$. However, the computation of the maximum of the ratio of $f_0(x)$ and $g(x;1)$ is much more involved than the computation of the minimum, because evaluation of the function $f_0(x)$ for large arguments is difficult. In Lemma \ref{lemma:maximum_ratio_f0_g}, we content ourselves with a reasonably sharp upper bound on the maximum of the ratio of $f_0(x)$ and $g(x;1)$ over $x\geq x_2$. We now turn to the proof of inequality \eqref{eq:lowerbound_f_g}. We consider two regimes, i.e., $t_1(k)\leq t \leq \sqrt{2/k}$ and $t\geq \sqrt{2/k}$. We have for $t_1(k)\leq t\leq \sqrt{2/k}$, \begin{align} \frac{f(t;k)}{g(t;k)} & \geq \frac{1}{t(2k\ln(t))^{\frac{1}{2}}} \nonumber \\ & \geq \frac{1}{\sqrt{2/k}\left(2k\ln(\sqrt{2/k}) \right)^{\frac{1}{2}}} \nonumber \\ & = \frac{1}{2(\ln(\sqrt{2/k}))^{\frac{1}{2}}},\label{eq:first_regime} \end{align} where we used that $f(t;k)$, with $f(0;k)=1$, and $g(t;k)=(2k)^{\frac{1}{2}}t(\ln(t))^{\frac{1}{2}}$ are positive, increasing functions of $t>1$, so that in particular $f(t;k)\geq 1$. Next, we let $t\geq \sqrt{2/k}$. We have \begin{align} \frac{f(t;k)}{g(t;k)} & = \frac{cf_0(a+bt)}{t\sqrt{2k\ln(t)}} \nonumber \\ & = \frac{f_0(a+bt)}{g(a+bt;1)}\frac{ac+bct}{t\sqrt{k}}\sqrt{\frac{\ln(a+bt)}{\ln(t)}}. \label{eq:three_terms} \end{align} We consider each factor in the right-hand side of \eqref{eq:three_terms} separately. For the first factor, we use the numerical result that the minimum of the ratio of the functions $f_0$ and $g$ is given in \eqref{eq:minimum_ratio_f_g}.
For the second and third factor, we notice, from \eqref{eq:a}--\eqref{eq:c} and $t\geq \sqrt{2/k}$, that \begin{align} a>0, b>\sqrt{k}, bc=\sqrt{k}, a+bt\geq \sqrt{k}\sqrt{2/k} = \sqrt{2}.\label{eq:two_inequalities_a_b} \end{align} Hence, for the second factor, we get \begin{align*} \frac{ac+bct}{t\sqrt{k}} > \frac{\sqrt{k}t}{t\sqrt{k}} = 1, \end{align*} and for the third factor, \begin{align} \min_{t\geq \sqrt{2/k}}\frac{\ln(a+bt)}{\ln(t)} > \min_{t\geq \sqrt{2/k}} \frac{\ln(t\sqrt{k})}{\ln(t)}. \label{min_frac_of_ln} \end{align} However, the right-hand side of \eqref{min_frac_of_ln} is equal to $1$ when $k\geq 1$, and equal to $\frac{\ln{\sqrt{2}}}{\ln\sqrt{2/k}}$ when $0<k<1$. Therefore, \begin{align*} \min_{t\geq \sqrt{2/k}} \frac{\ln(t\sqrt{k})}{\ln(t)} \geq \frac{\ln(\sqrt{2/k_c})}{\ln(\sqrt{2/k})} \end{align*} when $0<k\leq k_c$. Hence, combining the inequalities for each factor in \eqref{eq:three_terms}, we get, for $t\geq \sqrt{2/k}$, \begin{align*} \frac{f(t;k)}{g(t;k)} \geq 0.8829\cdot 1\cdot \left(\frac{\ln\left(\sqrt{2/k_c}\right)}{\ln\left(\sqrt{2/k}\right)}\right)^{\frac{1}{2}} = \frac{0.5055}{\left(\ln(\sqrt{2/k})\right)^{\frac{1}{2}}}. \end{align*} Together with \eqref{eq:first_regime} this gives the desired result. Now, we turn to the proof of inequality \eqref{eq:upperbound_f_g}. We follow the same approach as in the proof of inequality \eqref{eq:lowerbound_f_g}. Thus, we consider each factor of the right-hand side of \eqref{eq:three_terms} separately. The right-hand side of \eqref{eq:three_terms} is now to be considered for $t\geq t_2(k)$, and so it is important to have specific information about $t_2(k)$. We claim that $t_2(k)\geq \exp(e^{2-k_c}/2k)$. This claim is proven in Lemma \ref{lemma:t2(k)} below. We consider $x=a+bt$ with $t\geq t_2(k)$. Now, by the first two inequalities in \eqref{eq:two_inequalities_a_b}, we get \begin{align*} a+bt_2(k)> \sqrt{k}\exp(e^{2-k_c}/2k). 
\end{align*} Notice that the function $k>0 \mapsto \sqrt{k}\exp(e^{2-k_c}/2k)$ is a decreasing function of $k$ for $0<k\leq k_c$, and therefore, \begin{align*} a+bt_2(k) > \sqrt{k_c}\exp(e^{2-k_c}/2k_c) \approx 3.5909>x_1. \end{align*} Hence $a+bt>x_1$ for all $t\geq t_2(k)$; since $f_0(x)/g(x;1)\leq 1$ for $x_1\leq x\leq x_2$, it is sufficient to bound the function $f_0(x)/g(x;1)$ for $x\geq x_2$. Furthermore, by Lemma \ref{lemma:maximum_ratio_f0_g}, we have \begin{align*} \frac{f_0(x)}{g(x;1)} &\leq 1.12. \end{align*} For the second factor, we notice, since $bc=\sqrt{k}$, that we have \begin{align*} \frac{ac+bct}{t\sqrt{k}} = 1+\frac{ac}{t\sqrt{k}}. \end{align*} Now, from \eqref{eq:a} and \eqref{eq:c}, \begin{align*} \frac{ac}{\sqrt{k}} & = \frac{1}{\sqrt{k}}\sqrt{2}\int_0^{\frac{w}{\sqrt{2k}}}\exp(v^2)dv \cdot \exp(-w^2/2k) \\ & \leq \frac{1}{\sqrt{k}}\sqrt{2}\cdot \frac{w}{\sqrt{2k}} = \frac{w}{k}\\ & \leq \frac{w(k_c)}{k_c} \approx 0.5988, \end{align*} where in the last line it has been used that $w/k$ is an increasing function of $k$; see Lemma \ref{lemma:increasing_W}. Hence, for all $k\leq k_c$ and all $t\geq t_2(k)\geq t_0(k)$, we can bound the second factor by \begin{align*} \frac{ac+bct}{t\sqrt{k}}\leq 1+\frac{w(k_c)}{k_ct_0(k_c)} \approx 1.0326. \end{align*} For the third factor, we have by \eqref{eq:a}, \begin{align*} a = \sqrt{2}\int_0^{\frac{w}{\sqrt{2k}}}\exp(v^2)dv \leq \frac{w}{\sqrt{k}}\exp(w^2/2k). \end{align*} Hence, by \eqref{eq:b}, \begin{align*} \ln(a+bt) &\leq \frac{w^2}{2k} + \ln\left(\left(\frac{w}{k}+t\right)\sqrt{k} \right) \\ &\leq \frac{w^2(k_c)}{2k_c}+\ln\left(\left(\frac{w(k_c)}{k_c}+t\right)\sqrt{k} \right).
\end{align*} Therefore, for all $t\geq t_2(k)$, \begin{align*} \left(\frac{\ln(a+bt)}{\ln(t)} \right)^{\frac{1}{2}} & \leq \left(\frac{\frac{w^2(k_c)}{2k_c}+\ln\left((\frac{w(k_c)}{k_c}+t)\sqrt{k} \right)}{\ln(t)} \right)^{\frac{1}{2}} \\ & \leq \left(1+\frac{\frac{w^2(k_c)}{2k_c}+\ln(\sqrt{k})+\frac{w(k_c)}{tk_c}}{\ln(t)} \right)^{\frac{1}{2}} \\ & \leq \left(1+\frac{\frac{w^2(k_c)}{2k_c}+\ln(\sqrt{k_c})+\frac{w(k_c)}{t_0(k_c)k_c}}{\ln(t_0(k_c))} \right)^{\frac{1}{2}} \approx 1.0400. \end{align*} Combining all inequalities for each factor in \eqref{eq:three_terms}, we get \begin{align*} \frac{f(t;k)}{g(t;k)} \leq 1.12 \cdot 1.0326\cdot 1.0400 \approx 1.2023. \end{align*} \end{proof} \begin{lemma}\label{lemma:maximum_ratio_f0_g} Let $f_0(x)$ be given by \eqref{eq:f_0(x)} and let $g(x;1)$ be given by $g(x;1)=x(2\ln(x))^{\frac{1}{2}}$. Then, \begin{align*} \frac{f_0(x)}{g(x;1)}\leq 1.12,\quad x\geq x_2. \end{align*} \end{lemma} \begin{proof} From \eqref{eq:f_0(x)},\eqref{eq:Ux} and the first inequality of \eqref{eq:inequalities_int_exp}, we have for $x>0$ \begin{align*} \frac{x}{\sqrt{2}} = \int_0^{(\ln(f_0(x)))^{\frac{1}{2}}}\exp(u^2)du \geq \frac{f_0(x)-1}{2(\ln(f_0(x)))^{\frac{1}{2}}}, \end{align*} i.e., \begin{align} f_0(x)\leq 1+x(2\ln(f_0(x)))^{\frac{1}{2}}.\label{eq:ineq_f0(x)} \end{align} Let $x>0$ be fixed and consider the mapping \begin{align*} G:z\geq 1 \mapsto 1+x(2\ln(z))^{\frac{1}{2}}. \end{align*} Then $G$ maps $[1,\infty)$ onto $[1,\infty)$ with $G(1)=1$ and $G(\infty)=\infty$, $G$ is strictly concave on $[1,\infty)$, and $G'(z)$ decreases from $\infty$ to $0$ as $z$ increases from $1$ to $\infty$. Therefore, $G$ has a unique fixed point $z(x)$ in $(1,\infty)$. We have for any $z_1>1,z_2>1$ that \begin{align} z_1\leq z(x) \iff z_1 \leq 1+x(2\ln(z_1))^{\frac{1}{2}},\quad z_2\geq z(x) \iff z_2\geq 1+x(2\ln(z_2))^{\frac{1}{2}}. \label{eq:z1,z2} \end{align} Note that $f_0(x)\leq z(x)$ by \eqref{eq:ineq_f0(x)}.
Now let $\alpha>1$ and consider $z_2 = 1+\alpha g(x;1)$, where we take $\alpha$ such that $z_2\geq 1+x(2\ln(z_2))^{\frac{1}{2}}$. An easy computation shows that with this $z_2 = 1+\alpha g(x;1)$, \begin{align} z_2\geq 1+x(2\ln(z_2))^{\frac{1}{2}} \iff g(x;1)\leq \frac{x^{\alpha^2}-1}{\alpha}.\label{eq:z2,g} \end{align} We consider all this for $x\geq x_2 = 263.0304$. We have \begin{align*} g(x_2;1)=x_2(2\ln(x_2))^{\frac{1}{2}} = \frac{x_2^{\alpha^2}-1}{\alpha} \end{align*} for $\alpha = \alpha_2 := 1.1115$. Furthermore, when we have an $x\geq x_2$ such that $g(x;1)\leq \frac{1}{\alpha}(x^{\alpha^2}-1)$, we have \begin{align*} \frac{d}{dx}\left[\frac{1}{\alpha}(x^{\alpha^2}-1) \right] & = \frac{\alpha}{x}x^{\alpha^2}\geq \frac{\alpha}{x}\left(1+\alpha g(x;1) \right) \\ & = \frac{\alpha}{x}+\frac{\alpha^2}{x}x(2\ln(x))^{\frac{1}{2}} \\ & > \alpha^2(2\ln(x))^{\frac{1}{2}} \\ & \geq (2\ln(x))^{\frac{1}{2}} + \frac{1}{(2\ln(x))^{\frac{1}{2}}} = g'(x;1), \end{align*} where the last inequality holds when $\alpha^2-1\geq \frac{1}{2\ln(x)}$. The latter inequality certainly holds for $\alpha = \alpha_2$ and $x\geq x_2$. Hence, \begin{align*} g(x_2;1) = \frac{x_2^{\alpha_2^2}-1}{\alpha_2};\quad g'(x;1)<\frac{d}{dx}\left[\frac{1}{\alpha_2}(x^{\alpha_2^2}-1) \right],\quad x\geq x_2, \end{align*} and we conclude that \begin{align*} g(x;1)\leq \frac{x^{\alpha_2^2}-1}{\alpha_2},\quad x\geq x_2. \end{align*} Then, by \eqref{eq:z1,z2} and \eqref{eq:z2,g} and $f_0(x)\leq z(x)$, we get that \begin{align*} z_2 = 1+\alpha_2g(x;1)\geq z(x)\geq f_0(x), \quad x\geq x_2. \end{align*} This implies that \begin{align*} \frac{f_0(x)}{g(x;1)} \leq \alpha_2 + \frac{1}{g(x;1)}\leq \alpha_2 + \frac{1}{g(x_2;1)} \leq 1.12, \quad x\geq x_2, \end{align*} since $\alpha_2\leq 1.112$ and $(g(x_2;1))^{-1}\leq 0.008$ as required.
\end{proof} \begin{lemma}\label{lemma:t2(k)} Let $f(t)$ be given by \eqref{eq:f} with initial conditions $f(0)=1, f'(0)=w$ such that $f(1)=1+k$, and let $g(t)$ be given by \eqref{eq:f(t)_approx} for $0<k\leq k_c$. Then, \begin{align*} t_2(k) \geq \exp\left(\frac{e^{2-k_c}}{2k} \right), \end{align*} where $t_2(k)$ is given as in Theorem \ref{thm:cases_k}, case $b$. \end{lemma} \begin{proof} Let $f(t)$ be given by \eqref{eq:f} with initial conditions $f(0)=1, f'(0)=w$ such that $f(1)=1+k$, and let $g(t)$ be given by \eqref{eq:f(t)_approx}. By Theorem \ref{thm:cases_k} case (b), we have $t_2(k)\geq t_0(k)$, where $t_0(k)$ is the unique root of the equation \begin{align} 2+\frac{1}{2\ln(t)}-\ln(2k\ln(t)) = \frac{w^2}{k},\label{eq:soly} \end{align} see \eqref{eq:equation_psi}. If we denote $y=2\ln(t_0(k))$, then the solution $y(k)$ of \eqref{eq:soly} satisfies \begin{align*} y(k) > \frac{1}{k}\exp\left(2-\frac{w^2}{k}\right)\geq \frac{1}{k}\exp(2-\frac{w^2(k_c)}{k_c}) \geq \frac{1}{k}\exp(2-k_c), \end{align*} since $\frac{w^2}{k}$ increases in $k$ and $w^2\leq k^2$. Now, using that $y=2\ln(t_0(k))$, we get \begin{align*} t_0(k) \geq \exp\left(\frac{e^{2-k_c}}{2k}\right). \end{align*} Since $t_2(k)\geq t_0(k)$, we have the desired result. \end{proof} \end{appendices} \bibliography{PhD-Asymptotic_Analysis} \end{document}
2206.14564v1
http://arxiv.org/abs/2206.14564v1
Online coloring of disk graphs
\documentclass[12pt]{article} \usepackage{amssymb,amsmath,amsthm,amsfonts} \usepackage[vlined, ruled, linesnumbered]{algorithm2e} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{enumerate} \usepackage{subfigure} \newtheorem{theorem}{Theorem} \newtheorem{cor}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}{Claim} \newtheorem{prop}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{defi}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \usepackage{gensymb} \usepackage{amssymb} \renewcommand{\angle}{\measuredangle} \newtheorem{problem}{Problem}[section] \renewcommand{\baselinestretch}{1.3} \newcommand{\RR}{\mathbb{R}^2} \newcommand{\dist}{\mathop{\mathrm{dist}}} \newcommand{\z}{\mathbb{Z}} \newcommand{\GS}{G_{[1,\sigma]}} \newcommand{\todo}[1]{{\bf \texttt{TODO: #1}}} \newcommand{\kjs}[1]{{\color{blue}\bf \texttt{kjs: #1}}} \newcommand{\js}[1]{{\color{red}\bf \texttt{js: #1}}} \usepackage{tikz} \usetikzlibrary{decorations.pathreplacing} \usetikzlibrary{calc} \begin{document} \title{Online coloring of disk graphs} \author{ Joanna Chybowska-Sokół$^1$\thanks{Supported by the National Science Center of Poland under Grant No. 2016/23/N/ST1/03181} \and Konstanty Junosza-Szaniawski$^1$ \\ $^1$ Faculty of Mathematics and Information Science,\\ Warsaw University of Technology, Poland\\ email: {[email protected], [email protected]} } \maketitle keywords: disk graphs, online coloring, online $L(2,1)$-labeling, online coloring geometric shapes \begin{abstract} In this paper, we give a family of online algorithms for the classical coloring problem of intersection graphs of discs with bounded diameter. Our algorithms make use of a geometric representation of such graphs and are inspired by an algorithm of Fiala {\em et al.}, but have better competitive ratios. The improvement comes from using two techniques of partitioning the set of vertices before coloring them. 
One of them is an application of a $b$-fold coloring of the plane. The method is more general, and we show how it can be applied to coloring other shapes on the plane, as well as how to adjust it for online $L(2,1)$-labeling. \end{abstract} \section{Introduction} Intersection graphs of families of geometric objects have attracted much attention from researchers, both for their theoretical properties and practical applications (cf. McKee and McMorris \cite{McKMcM}). For example, intersection graphs of families of discs, and in particular discs of unit diameter (called {\em unit disk intersection graphs}), play a crucial role in modeling radio networks. Apart from the classical coloring, other labeling schemes such as $T$-coloring and distance-constrained labeling of such graphs are applied to frequency assignment in radio networks \cite{hale, roberts}. In this paper, we consider the classical coloring. We say that a graph coloring algorithm is {\em online} if the input graph is not known a priori, but is given vertex by vertex (along with all edges adjacent to already revealed vertices). Each vertex is colored at the moment when it is presented, and its color cannot be changed later. On the other hand, {\em offline} coloring algorithms know the whole graph before they start assigning colors. Online coloring can be much harder than offline coloring, even for paths. For an offline coloring algorithm, by the {\em approximation ratio} we mean the worst-case ratio of the number of colors used by this algorithm to the chromatic number of the graph. For online algorithms, the same value is called the {\em competitive ratio}. A unit disk intersection graph $G$ can be colored offline in polynomial time with $3\omega(G)$ colors \cite{peeters} (where $\omega(G)$ denotes the size of a maximum clique) and online with $5\omega(G)$ colors.
The latter result comes from combining a result from \cite{capponi}, which states that the First-Fit algorithm applied to $K_{1,p}$-free graphs has a competitive ratio of at most $p-1$, with results in \cite{malesinska,peeters} saying that unit disk intersection graphs are $K_{1,6}$-free. Fiala, Fishkin, and Fomin \cite{FFF} presented a polynomial-time online algorithm that finds an $L(2,1)$-labeling of an intersection graph of discs of bounded diameter. The $L(2,1)$-labeling asks for a vertex labeling with non-negative integers, such that adjacent vertices get labels that differ by at least two, and vertices at distance two get different labels. The algorithm is based on a special coloring of the plane that resembles colorings studied by Exoo \cite{exoo}, inspired by the classical Hadwiger--Nelson problem \cite{hadwiger}. A similar idea of reserving a set of colors for upcoming vertices can be found in the paper by Kierstead and Trotter \cite{kierstead}. In \cite{udg} we described an algorithm for coloring unit disk intersection graphs. It is inspired by \cite{FFF}; however, we change the algorithm in such a way that a $b$-fold coloring of the plane (see \cite{ulamkowe}) is used instead of a classical coloring. For graphs with large $\omega$, such an approach lets us obtain a competitive ratio below $5\omega$. In this paper, we generalize the results from \cite{udg} to intersection graphs of disks with bounded diameter. Moreover, we improve the algorithm from \cite{udg} by a more uniform distribution of vertices into layers. Throughout the paper, we always assume that the input disk intersection graph is given along with its geometric representation. In the paper we consider a few algorithms for coloring $\sigma$-disk graphs (i.e., intersection graphs of disks with diameters within $[1,\sigma]$) and some other geometric shapes; see Table \ref{tab-algorithms}. We use a divide-and-conquer approach.
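The First-Fit rule referred to above admits a one-line description: each arriving vertex receives the smallest color not present among its already revealed neighbors. A minimal sketch, for illustration only (the input encoding as a stream of neighbor lists is our own choice, not taken from the cited papers):

```python
def first_fit_online(adjacency_stream):
    """Online First-Fit coloring: each newly presented vertex gets the
    smallest positive integer color not used by its already revealed
    neighbors; colors are never changed afterwards.

    adjacency_stream yields, for each new vertex, the indices of the
    previously presented vertices it is adjacent to.
    """
    colors = []
    for neighbors in adjacency_stream:
        used = {colors[j] for j in neighbors}
        c = 1
        while c in used:
            c += 1
        colors.append(c)
    return colors

# An adversarial presentation of the 2-colorable path a-c-d-b:
# a and b arrive first with no edges, then c (adjacent to a), then d
# (adjacent to b and c).  First-Fit is forced to use a third color on d.
print(first_fit_online([[], [], [0], [1, 2]]))  # [1, 1, 2, 3]
```

The example illustrates why online coloring can be harder than offline coloring even on paths: the adversary's presentation order forces three colors on a 2-colorable graph.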
The basis of our colorings is a coloring of the plane (except for the BranchFF algorithm, where First-Fit is used), but we use two types of division of the vertices of the $\sigma$-DG. It is easiest to get a grasp on how to color disks with a coloring of the plane by analyzing the SimpleColor algorithm. In terms of division, the algorithms BranchFF and FoldColor show the two methods we use. \emph{Branching} divides the vertices according to the size of the corresponding disks, while \emph{folding} focuses on the location of their centers. All of these algorithms can be used for online coloring of $\sigma$-disk intersection graphs, hence we first introduce them in terms of disk colorings. However, their application can be broader. Some can be easily adjusted for online coloring of intersection graphs of various geometric shapes, and some can be used for $L(2,1)$-labeling of disks or other shapes, where the only change necessary is the coloring of the plane that we use. In this paper, we concentrate mostly on the method itself rather than on specific parameters of the algorithms, such as the competitive ratio, since these highly depend on the input. Again, just like for disk graphs, we assume that the geometric representation of a graph is given rather than the graph itself. We also assume that some parameters of the geometric structure are known in advance, such as the bounds on the disk diameters. This knowledge is crucial, considering that our algorithms are online and some of them are used for $L(2,1)$-labeling.
\begin{table}[ht] \begin{tabular}{c|c|c|c} Algorithm & branching & folding & What is needed:\\\hline BranchFF\cite{erlebach} & Y & N & $-$\\ SimpleColor\cite{FFF} & N & N & a solid coloring of $G_{[1,\sigma]}$\\ BranchColor & Y & N & a solid coloring of $G_{[1,\textbf{2}]}$\\ FoldColor\cite{udg} & N & Y & a $b$-fold solid coloring of $G_{[1,\sigma]}$\\ FoldShadeColor & N & Y & a $b$-fold solid coloring of $G_{[1,\sigma]}$, shading $\eta$\\ BranchFoldColor & Y & Y & a $b$-fold solid coloring of $G_{[1,\textbf{2}]}$, shading $\eta$\\ \end{tabular}\caption{A short summary of the algorithms' inner methods.}\label{tab-algorithms} \end{table} \section{Preliminaries} We start by introducing some basic definitions, notations, and preliminary results. This section includes a method of coloring the plane, which will later be used in our algorithms. \subsection{Notation} For an integer $n$, we define $[n]:=\{1,\ldots, n\}$. For integers $n,b$, by $(n)_b$ we denote $n$ modulo $b$. By a \emph{graph} we mean a pair $G=(V,E)$, where $V$ is a finite set and $E$ is a subset of the set of all 2-element subsets of $V$. A function $c\colon V\to[k]$ is a $k$-coloring of $G=(V,E)$ if $c(x)\neq c(y)$ for any $xy\in E$. By $d(u,v)$ we denote the number of edges on a shortest $u$--$v$ path in $G$. For a sequence of disks in the plane $(D_i)_{i\in [n]}$ we define its intersection graph by $G((D_i)_{i\in [n]})=(\{v_i: i\in [n]\},E)$, where $v_i$ is the center of $D_i$ for every $i\in [n]$ and $v_iv_j\in E$ iff $D_{i}\cap D_{j}\neq \emptyset$. Since we have a sequence of disks rather than a set, we disregard the fact that two disks can have the same center and always treat vertices corresponding to different disks as separate entities. Any graph that admits a representation by intersecting disks is called a disk graph, or simply a DG.
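The definition above translates directly into a computation: two closed disks intersect if and only if the distance between their centers is at most the sum of their radii. A small sketch (the data layout, pairs of center and diameter, is our own choice for illustration):

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python >= 3.8

def disk_intersection_graph(disks):
    """Build the intersection graph of a sequence of disks.

    Each disk is a pair (center, diameter) with center = (x, y).
    Vertex i corresponds to disk i; i and j are adjacent iff the disks
    intersect, i.e. the distance between centers is at most the sum of
    the radii.  Disks are kept as a sequence, so duplicates (even with
    equal centers) give distinct vertices.
    """
    edges = set()
    for i, j in combinations(range(len(disks)), 2):
        (ci, di), (cj, dj) = disks[i], disks[j]
        if dist(ci, cj) <= (di + dj) / 2:
            edges.add((i, j))
    return list(range(len(disks))), edges

# Three unit disks on a line: consecutive centers at distance 0.9 give
# intersecting disks, while the outer pair (distance 1.8 > 1) does not.
V, E = disk_intersection_graph([((0, 0), 1), ((0.9, 0), 1), ((1.8, 0), 1)])
print(sorted(E))  # [(0, 1), (1, 2)]
```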
If the ratio between the largest and the smallest diameters of the disks is at most $\sigma$, then we call such a graph a $\sigma$-disk graph or $\sigma$-DG, for short. We can assume that all disks in a representation of a $\sigma$-DG are $\sigma$-disks, i.e.\ their diameters are within $[1,\sigma]$. By {\em UDG} we mean the class of graphs that admit a representation by intersecting \emph{unit} disks. All of our algorithms require $\sigma$ to be known in advance. Hence we always assume that $\sigma$ is given, and when we write disk graph we mean a $\sigma$-DG. For an online minimization algorithm $\mathrm{alg}$, by $\mathrm{cr(alg)}$ we denote its \emph{competitive ratio}, which is the supremum of $\frac{\mathrm{alg}(G)}{\mathrm{opt}(G)}$ over all instances $G$, where $\mathrm{alg}(G)$ is the value of the solution given by the algorithm for instance $G$ and $\mathrm{opt}(G)$ is the value of an optimal solution for instance $G$. For the classical coloring we use the fact that any coloring requires at least $\omega(G)$ colors, where $\omega(G)$ denotes the size of a largest clique of $G$. \subsection{Tilings and colorings of the plane}\label{sec:tilings} Our algorithms use colorings of the Euclidean plane as a base for $\sigma$-disk coloring. The coloring we use depends on the value of $\sigma$. In this section, we present a method of finding such colorings. We start by defining an infinite graph $G_{[1,\sigma]}$. \begin{defi} By $G_{[1,\sigma]}$ we denote the graph with $\mathbb{R}^2$ as the vertex set and edges between pairs of points at Euclidean distance within $[1,\sigma]$. \end{defi} A \emph{coloring} of $G_{[1,\sigma]}$ follows the usual definition of graph coloring. For $b$-fold colorings we include the definition in our notation.
\begin{defi} A function $\varphi=(\varphi_1,\ldots, \varphi_b)$ where $\varphi_i:\mathbb{R}^2 \to [k]$ for $i\in [b]$ is called a {\em $b$-fold coloring of $G_{[1,\sigma]}$ with color set} $[k]$ if \begin{itemize} \item for any point $p\in \mathbb{R}^2$ and $i,j\in [b]$, if $i\neq j$, then $\varphi_i(p)\neq \varphi_j(p)$, \item for any two points $p_1,p_2\in \mathbb{R}^2$ with $\dist(p_1,p_2)\in [1,\sigma]$ and any $i,j\in [b]$, it holds that $\varphi_i(p_1)\neq \varphi_j(p_2)$. \end{itemize} The function $\varphi_i$ for $i \in [b]$ is called the {\em $i$-th layer} of $\varphi$. \end{defi} Notice that a coloring of $G_{[1,\sigma]}$ is a 1-fold coloring of $G_{[1,\sigma]}$. Colorings and $b$-fold colorings of $G_{[1,\sigma]}$ have been the subject of various papers (\cite{exoo},\cite{hadwiger},\cite{falconer},\cite{ulamkowe}, \cite{soifer}). The case of $\sigma=1$ is the most studied, as the problem of coloring $G_{[1,1]}$ is known as the Hadwiger-Nelson problem. For many years the best bounds on $\chi(G_{[1,1]})$ were 4 and 7, both quite easy to obtain. A breakthrough came in 2018, when Aubrey de Grey \cite{deGrey} proved that $\chi(G_{[1,1]})\ge5$. Right after that, a paper by Exoo and Ismailescu \cite{exoo2018} claimed the same result (with a different proof). Since then there has been a spike of interest in various colorings of the plane. The size of a minimal subgraph of $G_{[1,1]}$ with chromatic number equal to 5 became one of the subjects of study as well \cite{heule}. In our case, we only consider colorings based on specific tilings of the plane (i.e.\ we partition the plane into congruent shapes called tiles). To emphasize this we will call such colorings \emph{solid}. The assumption that our plane coloring is solid will be clearly stated in our theorems.
In general, a coloring $\varphi$ of $G_{[1,\sigma]}$ is called {\em solid} if there exists a tiling such that each tile is monochromatic (so the diameter of a tile cannot exceed 1) and tiles in the same color are at distance greater than $\sigma$. A $b$-fold coloring $\varphi=(\varphi_1,\ldots,\varphi_b)$ of the plane is called {\em solid} if for all $i\in [b]$ the coloring $\varphi_i$ is solid and tiles in the same color (possibly from different tilings) are at distance greater than $\sigma$. An example of a solid 1-fold 7-coloring of the plane (by John Isbell \cite{soifer}) is presented in Figure \ref{7-kol}. Now we introduce our method of building solid $b$-fold colorings of the plane. Note that it is possible to construct other solid colorings that could be used in our algorithms, but finding them is not the main problem of this paper. We define an $h^2$-fold coloring of the plane based on hexagonal tilings, for any positive integer $h$. For $h=1$ it is the coloring of the plane presented in \cite{MYplaszczyzna}, and we follow the notation from both \cite{MYplaszczyzna} and \cite{udg}. From this point onward we assume $h$ to be a fixed positive integer and often omit it in further notation. Let us start by defining the tiling. Let $H_{0,0}$ be a hexagon with two vertical sides, center at $(0,0)$, diameter equal to one, and part of the boundary removed as in Figure \ref{heksagon}. Note that the width of $H_{0,0}$ equals $\frac{\sqrt{3}}{2}$. Then let $s_1=[\frac{\sqrt{3}}{2},0]$, and $s_2=[\frac{\sqrt{3}}{4},-\frac{3}{4}]$. For $i,j\in \mathbb{Z}$ let $H_{i,j}$ be a \emph{tile} created by shifting $H_{0,0}$ by the vector $i\cdot\frac{s_1}{h}+j\cdot\frac{s_2}{h}$, namely $H_{i,j}:=\{(x,y)+i\cdot\frac{s_1}{h}+j\cdot\frac{s_2}{h}\in\mathbb{R}^2:\ (x,y)\in H_{0,0}\}$. Notice that if $h=1$ then $\{H_{i,j}:i,j\in \mathbb{Z}\}$ forms a partition of the plane, which we call a hexagonal tiling.
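For $h=1$ the hexagons above are exactly the Voronoi cells of the triangular lattice of centers $i\cdot s_1+j\cdot s_2$, so a point can be assigned to its tile by picking the nearest center. The following Python sketch is our own illustration of this point location (boundary ties, which the removed parts of the boundary resolve, are simply ignored here):

```python
import math

S1 = (math.sqrt(3) / 2, 0.0)          # the vector s_1
S2 = (math.sqrt(3) / 4, -3.0 / 4)     # the vector s_2

def tile_of(x, y):
    """Return the index pair (i, j) of the hexagon H_{i,j} (case h = 1)
    containing the point (x, y), ignoring ties on tile boundaries.

    Solve (x, y) = a*s_1 + b*s_2 for real (a, b), then check the 3x3
    grid of integer candidates around (a, b) and pick the lattice
    point whose center is nearest -- valid because the hexagons are
    the Voronoi cells of the lattice of centers."""
    b = -4.0 * y / 3.0                     # from y = -b * 3/4
    a = 2.0 * x / math.sqrt(3) - b / 2.0   # from x = a*sqrt(3)/2 + b*sqrt(3)/4
    candidates = [(i, j)
                  for i in range(math.floor(a) - 1, math.floor(a) + 2)
                  for j in range(math.floor(b) - 1, math.floor(b) + 2)]
    return min(candidates,
               key=lambda ij: (x - ij[0] * S1[0] - ij[1] * S2[0]) ** 2
                            + (y - ij[0] * S1[1] - ij[1] * S2[1]) ** 2)

print(tile_of(0.0, 0.0))      # (0, 0)
print(tile_of(S1[0], S1[1]))  # (1, 0): the center of H_{1,0}
```

The same idea extends to $h>1$ by rescaling the lattice vectors by $\frac{1}{h}$; we keep $h=1$ to stay close to the partition case.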
More generally, for any $m\in [h^2]$ the set $L_m=\{H_{i,j}: 1+(i)_h+h(j)_h=m\}$ forms a tiling (see Figure \ref{tiling}); we call it the \emph{$m$-th layer}. \begin{figure}[h] \centering \subfigure[A tile $H_{i,j}$.]{\label{heksagon} \includegraphics[height=3.5cm]{obrazki2/heksagon} }\hfill \subfigure[Hexagonal tiling - the 1$^\text{st}$ layer.]{\label{tiling}\includegraphics[width=0.6\textwidth]{obrazki2/tiling}} \hfill \subfigure[7-coloring of the plane]{\label{7-kol}\includegraphics[width=0.8\textwidth]{obrazki2/7kol-0}} \caption{\cite{udg} Hexagonal tiling. } \end{figure} All points in a single tile will share a color. Now for a triple of non-negative integers $(h^2,p,q)$ we define a coloring such that for any $(i,j)\in \mathbb{Z}^2$ the hexagons $H_{i,j}, H_{i+p,j+q},H_{i+p+q,j-p}$ have the same color. Notice that the centers of these three hexagons form an equilateral triangle (see the three hexagons on the right side of Figure \ref{pattern}). By repeatedly applying this rule to single-colored triangles we obtain that sets of the form $\{H_{i+k\cdot p + l\cdot (p+q),j+k\cdot q-l\cdot p}:\ k,l\in \mathbb{Z}\}$ are monochromatic. When the maximal monochromatic sets are of this form, we call such a coloring an $(h^2,p,q)$-coloring. \begin{lemma}\label{lem:pqcolors} An $(h^2,p,q)$-coloring uses $p^2+p q+q^2$ colors. \end{lemma} \begin{proof} Let us denote the greatest common divisor of $p$ and $q$ by $d$ and let $p'=p/d$, $q'=q/d$. We call the set of $H_{i,j}$, $i\in\mathbb{Z}$, the $j$-th row of tiles. Let us call the color of $H_{0,0}$ blue and let $\mathcal{T}=\{H_{k\cdot p + l\cdot (p+q),k\cdot q-l\cdot p}:\ k,l\in \mathbb{Z}\}$ denote the set of all blue hexagons.
To give some insight into where $\mathcal{T}$ comes from, let $v$ denote the vector from $(0,0)$ to the center of $H_{p,q}$ ($v=p\cdot \frac{s_1}{h}+q\cdot \frac{s_2}{h}$) and $\overline{v}$ the vector obtained from $v$ by rotating it by $\frac{\pi}{3}$ ($\overline{v}=p\cdot \frac{s_1 - s_2}{h}+ q\cdot \frac{s_1}{h}=(p+q)\cdot \frac{s_1}{h}-p\cdot \frac{s_2}{h}$), see Figure \ref{pattern}. Then $\mathcal{T}$ is the set of all tiles created by shifting $H_{0,0}$ by $k v + l \overline{v}$, for $k,l\in \mathbb{Z}$. \begin{figure}[h] \center \includegraphics[scale=1]{obrazki2/pattern.pdf}\caption{A single color in a $(1,2,1)$-coloring. The angle between $v$ and $\overline{v}$ does not depend on the values of $h,p,q$.}\label{pattern} \end{figure} First let us notice that, by the definition of $\mathcal{T}$, blue appears in the rows numbered $k\cdot q - l\cdot p=(kq'- lp')\cdot d$, for $k,l\in \mathbb{Z}$. Since $p'$ and $q'$ are coprime, $\{kq'- lp':\ k,l\in \mathbb{Z}\}=\mathbb{Z}$. Hence blue appears in a row iff its number is divisible by $d$. Note that, since we used the same pattern (only shifted) for every color, we have the same number of colors in every row. Hence the total number of colors equals the number of colors used in a single row multiplied by $d$. To find the number of colors used in a row, it is enough to know how often a single color reappears in it (see the example in Figure \ref{7-kol}: in one row every 7$^{\text{th}}$ hexagon is blue and we use 7 colors in each row). Let $m$ be the smallest positive number such that $H_{m,0}$ is colored blue. Since $H_{m,0}\in\mathcal{T}$, there exist integers $k,l$ such that:\begin{align} k\cdot p + l\cdot (p+q)=m,\label{e1}\\ k\cdot q-l\cdot p=0.\label{e2} \end{align} From (\ref{e2}) we derive $kq'=lp'$. Since $p',\ q'$ are coprime, $k$ is divisible by $p'$ and $l$ by $q'$. Moreover $p',\ q'$ are both non-negative, hence either $k$ and $l$ are both non-positive or they are both non-negative.
Without loss of generality we assume the latter. Hence $m=d\cdot(k p' + l q' +l p')$ is minimal when $k=p'$ and $l=q'$. So the number of colors used in a row equals $m=d(p'p' + q'q' +p'q')$, and from our previous remarks we conclude that the total number of colors equals $d^2(p'p' + q'q' +p'q')=p^2+q^2+pq$. \end{proof} In the case $q=0$ we can easily express the number of colors we use in terms of $\sigma$, which was done in \cite{ulamkowe} (see Figure \ref{hkw}). It is enough to take $p=\lceil (\frac{2\sigma}{\sqrt{3}}+1)\cdot h \rceil$, as then the distance between the centers of $H_{0,0}$ and $H_{p,0}$ equals $|p\cdot\frac{s_1}{h}|=\lceil (\frac{2\sigma}{\sqrt{3}}+1) h \rceil\cdot \frac{\sqrt{3}}{2h}\ge \sigma+\frac{\sqrt{3}}{2}$. Since these tiles are in the same row, the distance between them is in fact the distance between their vertical sides and equals the distance between the centers minus $\frac{\sqrt{3}}{2}$. \begin{prop}[\cite{ulamkowe}]\label{prop:hkwadrat} For $h\in \mathbb{N}_+$ there exists a solid $h^2$-fold coloring $\varphi$ of $G_{[1,\sigma]}$ with $\left \lceil (\frac{2\sigma}{\sqrt{3}}+1)\cdot h \right \rceil^2$ colors. \end{prop} \begin{figure}[h] \includegraphics[width=\textwidth]{obrazki2/hkw} \caption{\cite{udg} $h^2$-fold coloring of the plane by Grytczuk, Junosza-Szaniawski, Sokół, Węsek}\label{hkw} \end{figure} In the more general case it is a bit hard to find the maximal $\sigma$ such that the $(h^2,p,q)$-coloring is a coloring of $G_{[1,\sigma]}$. For some values of $(h^2,p,q)$ such a $\sigma$ might not exist at all, since the distance between two tiles of the same color could be smaller than 1. It is enough to consider the distance between $H_{0,0}$ and $H_{p,q}$, but this value depends on which points in these two tiles minimize the distance. When computing the maximal $\sigma$ we consider the distances between any two points on the boundaries of $H_{0,0}$ and $H_{p,q}$.
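Lemma \ref{lem:pqcolors} can also be checked computationally. Tiles $H_{i,j}$ and $H_{i',j'}$ are monochromatic iff the difference of their index pairs lies in the lattice spanned by $(p,q)$ and $(p+q,-p)$; the pair of linear invariants below (our own device, obtained by inverting that basis modulo $D=p^2+pq+q^2$) is constant exactly on the color classes. A small Python sketch:

```python
def color_class(i, j, p, q):
    """An invariant that separates the color classes of an
    (h^2, p, q)-coloring: (i, j) and (i', j') are in the same class
    iff (i - i', j - j') = k*(p, q) + l*(p+q, -p) for integers k, l,
    which happens iff both linear forms below agree mod D."""
    D = p * p + p * q + q * q
    return ((p * i + (p + q) * j) % D, (q * i - p * j) % D)

def count_colors(p, q):
    """Count the distinct classes; all of them occur for 0 <= i, j < D
    because D * Z^2 is contained in the lattice of index D."""
    D = p * p + p * q + q * q
    return len({color_class(i, j, p, q) for i in range(D) for j in range(D)})

for p, q in [(2, 1), (1, 4), (0, 7)]:
    assert count_colors(p, q) == p * p + p * q + q * q

print(count_colors(2, 1))  # 7, matching the 7-coloring of Figure 7-kol
```

The count equals the index of the lattice in $\mathbb{Z}^2$, i.e.\ $|\det\begin{pmatrix}p & p+q\\ q & -p\end{pmatrix}| = p^2+pq+q^2$, in line with the lemma.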
The minimal distance we find is then the maximal such value of $\sigma$. However, we can easily find some values of $\sigma$ for which the $(h^2,p,q)$-coloring works. \begin{prop}\label{prop:pqsigma} For any fixed values of $h,p,q$, let $\sigma=\frac{\sqrt{3}}{2h}\sqrt{p^2+pq+q^2}-1$. If $\sigma\ge1$, then the $(h^2,p,q)$-coloring is a coloring of $G_{[1,\sigma]}$. \end{prop} \begin{proof} Let us consider the minimal distance between two points $P_1\in H_{0,0}$ and $P_2\in H_{p,q}$. Let us denote the centers of these tiles by $C_1$ and $C_2$. By the definition of $H_{i,j}$, the distance between $C_1$ and $C_2$ equals $|p\cdot \frac{s_1}{h} +q\cdot \frac{s_2}{h}|= |\frac{1}{h}[p\frac{\sqrt{3}}{2}+q\frac{\sqrt{3}}{4},-q\frac{3}{4}]| = \frac{1}{h}\sqrt{((2p+q)\frac{\sqrt{3}}{4})^2+(-q\frac{3}{4})^2}=\frac{\sqrt{3}}{2h}\sqrt{p^2+pq+q^2}$. The distance between $P_1$ and $C_1$ is at most $\frac{1}{2}$, and the same is true for $P_2$ and $C_2$. Hence the distance between any $P_1\in H_{0,0}$ and $P_2\in H_{p,q}$ is at least $\frac{\sqrt{3}}{2h}\sqrt{p^2+pq+q^2}-1=\sigma\ge 1$. Any two points of the same color are either both in one tile, in which case their distance is smaller than $1$, or in two tiles of the same color. The distance between any two tiles of the same color is at least as big as the distance between $H_{0,0}$ and $H_{p,q}$ (by the construction of the $(h^2,p,q)$-coloring). Hence in the latter case the distance between such points is at least $\frac{\sqrt{3}}{2h}\sqrt{p^2+pq+q^2}-1$, which concludes the proof. \end{proof} In most cases it is possible to use $(h^2,p,q)$-colorings for $\sigma$ larger than stated above, but the proposition gives the approximate value. Now by Lemma \ref{lem:pqcolors} and Proposition \ref{prop:pqsigma} we obtain the following. \begin{cor} For any fixed values of $h,p,q$, let $k=p^2+pq+q^2$ and $\sigma=\frac{\sqrt{3}}{2h}\sqrt{k}-1$. If $\sigma\ge1$, then the $(h^2,p,q)$-coloring is an $h^2$-fold $k$-coloring of $G_{[1,\sigma]}$.
\end{cor} By this corollary we get that $k\approx\frac{4h^2}{3}(\sigma+1)^2$. This is a worse result than the one from Proposition \ref{prop:hkwadrat}. However, if we consider the maximal values of $\sigma$ for particular $(h^2,p,q)$-colorings, we get better results. In particular, one of the best $h^2$-fold colorings of $G_{[1,2]}$ for 'small' values of $h$ is the $(8^2,1,26)$-coloring, which uses 703 colors. More examples of precise values of $\sigma$ are presented in Table \ref{tab:hpq}. It contains only the best $(h^2,p,q)$-colorings for $h\in\{1,2,3\}$ and $\sigma$ up to 3. \begin{table}[h] \begin{tabular}{c|c|c|c|c|c} $\sigma$&$\frac{k}{h^2}$&$h^2$&$k$&$p$&$q$\\\hline 1.01036& 4.77778&9&43&1&6\\ 1.08253& 5.25&4&21&1&4\\ 1.1547& 5.44444&9&49&0&7\\ 1.29904& 6.33333&9&57&1&7\\ 1.32288& 7.&9&63&3&6\\ 1.44338& 7.11111&9&64&0&8\\ 1.51554& 7.75&4&31&1&5\\ 1.58771& 8.11111&9&73&1&8\\ 1.60728& 8.77778&9&79&3&7\\ 1.63936& 9.25&4&37&3&4\\ 1.73205& 9.33333&9&84&2&8\\ 1.75& 9.75&4&39&2&5\\ 1.87639& 10.1111&9&91&1&9\\ 1.94856& 10.75&4&43&1&6\\ 2.02073& 11.1111&9&100&0&10\\ 2.02073& 12.1111&9&109&5&7\\ \end{tabular} \hspace{0.5cm} \begin{tabular}{c|c|c|c|c|c} $\sigma$&$\frac{k}{h^2}$&$h^2$&$k$&$p$&$q$\\\hline 2.04634& 12.25&4&49&3&5\\ 2.16506& 12.3333&9&111&1&10\\ 2.17945& 13.&9&117&3&9\\ 2.3094& 13.4444&9&121&0&11\\ 2.38157& 14.25&4&57&1&7\\ 2.45374& 14.7778&9&133&1&11\\ 2.46644& 15.4444&9&139&3&10\\ 2.59808& 16.3333&9&147&2&11\\ 2.61008& 16.75&4&67&2&7\\ 2.64575& 17.3333&9&156&4&10\\ 2.74241& 17.4444&9&157&1&12\\ 2.75379& 18.1111&9&163&3&11\\ 2.81458& 18.25&4&73&1&8\\ 2.88675& 18.7778&9&169&0&13\\ 2.92973& 20.1111&9&181&4&11\\ 3.03109& 20.3333&9&183&1&13\\ \end{tabular}\caption{A few records of $(h^2,p,q)$-colorings. $k$ denotes the number of colors and equals $p^2+pq+q^2$.}\label{tab:hpq} \end{table} The last thing we should know about our colorings is the number of subtiles in a tile.
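The closed-form distance between the centers of $H_{0,0}$ and $H_{p,q}$ used in the proof of Proposition \ref{prop:pqsigma}, and the color counts of Proposition \ref{prop:hkwadrat}, can be sanity-checked numerically. A Python sketch of ours (the values of $\sigma$ in Table \ref{tab:hpq} are maximal ones, so they exceed the conservative guarantee computed here):

```python
import math

S1 = (math.sqrt(3) / 2, 0.0)          # s_1
S2 = (math.sqrt(3) / 4, -3.0 / 4)     # s_2

def center_distance(h, p, q):
    """Distance between the centers of H_{0,0} and H_{p,q}."""
    dx = (p * S1[0] + q * S2[0]) / h
    dy = (p * S1[1] + q * S2[1]) / h
    return math.hypot(dx, dy)

def guaranteed_sigma(h, p, q):
    """sigma from Proposition prop:pqsigma: sqrt(3)/(2h)*sqrt(p^2+pq+q^2) - 1."""
    return math.sqrt(3) / (2 * h) * math.sqrt(p * p + p * q + q * q) - 1

def hkwadrat_colors(sigma, h):
    """Colors used by the h^2-fold coloring of Proposition prop:hkwadrat."""
    return math.ceil((2 * sigma / math.sqrt(3) + 1) * h) ** 2

# the closed form matches the explicit vector computation
for h, p, q in [(1, 2, 1), (2, 1, 4), (3, 1, 6)]:
    closed = math.sqrt(3) / (2 * h) * math.sqrt(p * p + p * q + q * q)
    assert abs(center_distance(h, p, q) - closed) < 1e-12

print(guaranteed_sigma(1, 2, 1))  # about 1.29: a (1,2,1)-coloring works up to that sigma
print(hkwadrat_colors(2.0, 8))    # 729, versus 703 colors of the (8^2,1,26) record
```

The last line illustrates the remark above: the generic $h^2$-fold construction needs $729$ colors for $\sigma=2$, $h=8$, while the tuned $(8^2,1,26)$-coloring gets by with $703$.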
Having $b$ tilings (in our case $b=h^2$), by a {\em subtile} we mean a non-empty intersection of $b$ tiles, one from each layer. By $\gamma=\gamma(\varphi)$ we denote the maximum number of subtiles in a single tile in a solid $b$-fold coloring of the plane $\varphi$. \begin{lemma}[\cite{udg}]\label{gamma} The number of subtiles in any tile in the set $\{H_{i,j}: i,j\in\mathbb{Z}\}$ is equal to: $\gamma =1$ for $h=1$, $\gamma =12$ for $h=2$, $\gamma =6h^2$ for $h>2$. \end{lemma} Notice that for any $h$, the number above is no larger than $6h^2$. In the next chapters, we will usually write about $b$-fold colorings rather than $h^2$-fold colorings, since one could apply colorings other than those mentioned in this section. However, when a bound on the number of colors is given, substituting $6b$ for $\gamma$ might give a bit of extra insight. \subsection{$L^*(2,1)$-labeling of the plane}\label{sec:planeL21} \begin{defi}\label{def:Lplane} A solid $b$-fold $k$-coloring $\varphi$ of $G_{[1,\sigma]}$ is called a {\em solid $b$-fold $L^*(2,1)$-labeling of $G_{[1,\sigma]}$ with $k$ labels}, if \begin{enumerate} \item all labels (colors) are in $\{1,2,\ldots,k\}$, or all are in $\{0,1,\ldots k-1\}$, \item for any two tiles $T_1,T_2$ with the same color, $T_1$ and $T_2$ are at point-to-point distance greater than $2\sigma$, \item for any two tiles $T_1,T_2$ with consecutive colors, $T_1$ and $T_2$ are at point-to-point distance greater than $\sigma$, \item for a tile $T_1$ with the smallest color and a tile $T_2$ with the largest color, $T_1$ and $T_2$ are at point-to-point distance greater than $\sigma$. \end{enumerate} \end{defi} Note that by this definition the distance between any two points of the same tile must still be smaller than $1$. Such labelings were briefly considered in \cite{FFF} (under the name 'circular labeling') and more thoroughly in the case of $\sigma=1$ in \cite{udg}.
Fiala, Fishkin, and Fomin state in \cite{FFF} that $L^*(2,1)$-labelings can always be found, give an estimate of the time needed to find one, and give an example of such a labeling for $\sigma=\frac{\sqrt{7}}{2}$. Grytczuk, Junosza-Szaniawski, Sokół, and Węsek in \cite{udg} concentrate on finding such $b$-fold labelings for $\sigma=1$ and, in particular, give a method of constructing them in the case $b=h^2$. \begin{theorem}[\cite{udg}]\label{th:hkwadratL21} For all $h\in \mathbb{N}$, there exists a solid $h^2$-fold $L^*(2,1)$-labeling of $G_{[1,1]}$ with $k=3\lceil h(\frac{2}{\sqrt{3}}+1)+1\rceil^2+1$ labels. \end{theorem} \begin{figure}[h]\begin{center} \includegraphics[width=\textwidth]{obrazki2/l21plane_n2.pdf} \caption{\cite{udg} A solid $h^2$-fold $L^*(2,1)$-labeling of $G_{[1,1]}$} \label{fig:l21G11} \end{center}\end{figure} The construction is quite similar to the one from Proposition \ref{prop:hkwadrat}. The key idea is to substitute a single color of the $(h^2,p,0)$-coloring, where $p=\lceil h(\frac{2}{\sqrt{3}}+1)+1\rceil$, with 3 consecutive labels (see Figure \ref{fig:l21G11}). Tiles of such a triplet of labels are monochromatic in the $(h^2,p,0)$-coloring, hence their distances are greater than 1. The tiles with the same label are at distance greater than 2. The fact that $p$ is larger by 1 than $\lceil h(\frac{2}{\sqrt{3}}+1)\rceil$ is enough to make sure that tiles with consecutive labels from different triplets are at distance greater than 1 (see labels $3\varrho-1$ and $3\varrho$ in Fig. \ref{fig:l21G11}). Naturally, in such a construction we use labels from 1 to $3p^2$, but the declared number of labels is $3p^2+1$, hence there is no conflict between the smallest and the largest labels. We can do the same for $\sigma\in [1,\frac{1}{4-2\sqrt{3}}]$ as well. The upper bound on $\sigma$ here comes from the restriction on tiles with the same label (by the second condition of the definition, their distance must be greater than $2\sigma$).
The centers of such tiles are at distance $|p\cdot\frac{s_1}{h}+p\cdot\frac{s_2}{h}|=\frac{p}{h}|s_1+s_2|=\frac{3p}{2h}$ and we need this value to be greater than $2\sigma+1$. \begin{prop}\label{prop:planesigmaL21} If $1\le\sigma\le \frac{1}{4-2\sqrt{3}}\approx 1.86603$, then for all $h\in \mathbb{N}$, there exists a solid $h^2$-fold $L^*(2,1)$-labeling of $G_{[1,\sigma]}$ with $k=3\lceil h(\frac{2\sigma}{\sqrt{3}}+1)+1\rceil^2+1$ labels. \end{prop} \begin{figure}[ht]\begin{center} \includegraphics[width=\textwidth]{obrazki2/l21planeslabe.pdf} \caption{A solid $h^2$-fold $L^*(2,1)$-labeling of $G_{[1,\sigma]}$} \label{fig:l21slabe} \end{center}\end{figure} The issue of the distance between two tiles with the same label can be fixed in two ways. The first one is to choose a larger $p$. The other is to simply label monochromatic sets with 6 consecutive labels, as in Figure \ref{fig:l21slabe}. It is easy to notice that in this case the distances between tiles with the same label are at least $2\sigma+\frac{\sqrt{3}}{2}>2\sigma$. \begin{cor}\label{cor:planel21slabe} For all $h\in \mathbb{N}$, there exists a solid $h^2$-fold $L^*(2,1)$-labeling of $G_{[1,\sigma]}$ with $k=6\lceil h(\frac{2\sigma}{\sqrt{3}}+1)+1\rceil^2+1$ labels. \end{cor} Increasing $p$ and labeling monochromatic sets with 3 labels should give a smaller number of labels $k$, but the full analysis of correctness is quite tedious. For now, we are content with Corollary \ref{cor:planel21slabe}, since our main aim is to show that some of our online coloring algorithms can be used for $L(2,1)$-labeling of disk graphs. The important fact to note is that a $b$-fold $L^*(2,1)$-labeling of $G_{[1,\sigma]}$ with $k$ labels can have a smaller ratio $k/b$ than the number of labels in a $1$-fold $L^*(2,1)$-labeling. It is also worth considering whether there is a more general method of constructing solid $b$-fold $L^*(2,1)$-labelings of $G_{[1,\sigma]}$, much like the $(h^2,p,q)$-colorings.
While it is possible to find a method more general than the one we have shown, it would still involve some rigid details. One way to do that is to keep labeling monochromatic sets of $(h^2,p,q)$-colorings with multiple labels, but without setting $q=0$. \section{Basic coloring algorithms} The algorithms are online, so they color the vertex $v_i$ corresponding to the disk $D_i$ knowing only the disks $D_1,\ldots, D_i$. Once the vertex $v_i$ is colored, it cannot be recolored. First let us introduce the algorithm by Erlebach and Fiala \cite{erlebach} (with a slight change in the assignment of $j$, described below), which divides the set of vertices of a disk graph according to the diameters of the corresponding disks. \begin{algorithm}[H]\label{Alg:BranchFF} \caption {$BranchFF((D_i)_{i\in [n]})$} \ForEach {$i\in [n]$}{Read $D_i$, let $v_i$ be the center and $\sigma_i$ the diameter of $D_i$\\ \eIf{$\sigma_i=\sigma=2^t,\ t\in\mathbb{Z}_{+}$}{$j\gets\lfloor\log_2\sigma_i\rfloor-1$} {$j\gets\lfloor\log_2\sigma_i\rfloor$} $B_j := B_j \cup \{v_i\}$\\ $F := \{c(v_k): 1 \leq k < i, v_k \in B_j, D_k \cap D_i \neq \emptyset \} \cup\ \{c(v_k): 1 \leq k < i, v_k \notin B_j \}$\ (the set of forbidden colors)\\ $c(v_i) := \min(\mathbb{N} \setminus F )$ } \Return {$c$} \end{algorithm} Notice that the algorithm uses a separate set of colors for each set $B_j$. For any $j$ the set $B_j$ consists of disks with diameters within $[2^j,2^{j+1}]$, so the ratio of the largest and the smallest diameters is at most 2. This holds in particular in the case $\sigma=2^t,\ t\in\mathbb{Z}_{+}$, when the maximal $j$ equals $t-1$ and $B_{t-1}$ consists of disks with diameters within $[2^{t-1},2^t]$. The unusual assignment of $j$ in this case lets us avoid creating a separate set $B_t$ consisting solely of disks with diameter equal to $\sigma$. Now we can apply the following.
\begin{lemma}[\cite{erlebach}] The FirstFit coloring algorithm is 28-competitive on disks of diameter ratio bounded by two.\end{lemma} Since $j$ takes values from $0$ to $\lfloor\log_2\sigma\rfloor$ (or $\lfloor\log_2\sigma\rfloor-1$ for $\sigma=2^t$), there are at most $\lceil\log_2\sigma\rceil$ sets $B_j$ with distinct sets of colors. Each $B_j$ is colored with no more than $28\chi_j\leq28\chi(G)$ colors, where $\chi_j$ stands for the minimal number of colors required to color the graph induced by $B_j$. Hence the following holds. \begin{lemma}\label{lemBranchFF} $BranchFF$ is $28\lceil\log_2\sigma\rceil$-competitive for $\sigma$-disk graphs.\end{lemma} Now we present another way of dividing the vertices of a disk graph, due to Fiala, Fishkin, and Fomin \cite{FFF}. The input of the algorithm is a solid coloring $\varphi$ of $G_{[1,\sigma]}$ and a sequence of disks with diameters within $[1,\sigma]$. The idea of the algorithm is the following: find the tile of $\varphi$ containing the center of $D_i$. The color of the vertex corresponding to $D_i$ is the smallest available color equal modulo $k$ to the color of the tile containing the center of $D_i$. \begin{algorithm}[H]\label{Alg:SimpleColor} \caption {$SimpleColor_\varphi((D_i)_{i\in [n]})$} \ForEach {$i\in [n]$}{Read $D_i$, let $v_i$ be the center of $D_i$\\ let $T(v_i)$ be the tile containing $v_i$\\ $t(v_i) \gets |\{v_1,\ldots v_{i-1}\}\cap T(v_i)|$\\ $c(v_i)\gets \varphi (v_i)+k\cdot t(v_i)$ \label{line:c} } \Return {$c$} \label{lineNo} \end{algorithm} The following theorem is a special case of a result from \cite{FFF}. It is also a special case of Theorem \ref{warstwycol} of this paper. The number of colors follows from the fact that all vertices from a single tile create a clique, hence $t(v_i)\leq \omega(G)-1$, for any $i\in [n]$. \begin{theorem}[Fiala, Fishkin, Fomin \cite{FFF}] Let $\varphi$ be a solid $k$-coloring of $G_{[1,\sigma]}$ and $(D_i)_{i\in [n]}$ a sequence of $\sigma$-disks.
Algorithm $SimpleColor_\varphi((D_i)_{i\in [n]})$ returns a coloring of $G=G((D_i)_{i\in [n]})$ with at most $k\cdot \omega(G)$ colors. \end{theorem} \begin{cor} For a unit disk graph $G=G((D_i)_{i\in [n]})$ and the 7-coloring of the plane $\varphi$ from Figure \ref{7-kol}, the competitive ratio of the algorithm $SimpleColor_\varphi((D_i)_{i\in[n]})$ is at most 7. \end{cor} \begin{proof} $\mathrm{cr}(SimpleColor_\varphi((D_i)_{i\in[n]}))\le \frac{7\omega(G)}{\omega(G)}=7$. \end{proof} \begin{cor} For a $\sigma$-disk graph $G=G((D_i)_{i\in [n]})$ and a coloring of $G_{[1,\sigma]}$ from Proposition \ref{prop:hkwadrat}, the competitive ratio of the algorithm $SimpleColor_\varphi((D_i)_{i\in[n]})$ is at most $\lceil\frac{2\sigma}{\sqrt{3}}+1\rceil^2$. \end{cor} In the case of UDGs, the competitive ratio of $SimpleColor$ is worse than that of the FirstFit algorithm. In the next part of the paper, we consider more complex algorithms, with a better ratio, that are based on the same principles as $BranchFF$ and $SimpleColor$. The next algorithm is a combination of the two above. Just like $BranchFF$, it divides the vertices according to the diameters, but instead of coloring each $B_j$ with the smallest safe color, as the FirstFit algorithm would, we color it according to the $SimpleColor$ algorithm. First we need a solid $12$-coloring $\varphi$ of $G_{[1,2]}$. Then for any $j\in [0,\log_2(\sigma)]\cap\mathbb{N}$ we define $\varphi_j$ to be a coloring of $G_{[2^j,2^{j+1}]}$ that is a scaled copy of $\varphi$. The color of a vertex is a pair $(j,c)$, where $j$ is the number assigned by branching and $c$ is the color based on $\varphi_j$. The algorithm could easily be changed so that the color is a single number, but in the case of graph coloring we only count how many colors are used, and we believe leaving the color as a pair makes the method clearer.
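Before moving on, the $SimpleColor$ routine can be sketched in Python for the unit-disk case $\sigma=1$, using an Isbell-style solid 7-coloring of the hexagonal tiling. This is our own illustration: the tile-location rule (nearest hexagon center, ignoring boundary ties) and the color rule $(i+5j)\bmod 7$, which places same-colored tiles at center distance $\frac{\sqrt{21}}{2}\approx 2.29$ and hence at tile distance greater than 1, are concrete choices made for this sketch.

```python
import math

S1 = (math.sqrt(3) / 2, 0.0)          # s_1
S2 = (math.sqrt(3) / 4, -3.0 / 4)     # s_2

def tile_of(x, y):
    """Hexagon H_{i,j} (h = 1) containing (x, y): nearest lattice center."""
    b = -4.0 * y / 3.0
    a = 2.0 * x / math.sqrt(3) - b / 2.0
    cand = [(i, j) for i in range(math.floor(a) - 1, math.floor(a) + 2)
                   for j in range(math.floor(b) - 1, math.floor(b) + 2)]
    return min(cand, key=lambda ij: (x - ij[0] * S1[0] - ij[1] * S2[0]) ** 2
                                  + (y - ij[0] * S1[1] - ij[1] * S2[1]) ** 2)

def phi7(x, y):
    """A solid 7-coloring of G_{[1,1]}: tile H_{i,j} gets color (i+5j) mod 7."""
    i, j = tile_of(x, y)
    return (i + 5 * j) % 7

def simple_color(points, k=7, phi=phi7):
    """Online SimpleColor: c(v) = phi(v) + k * t(v), where t(v) counts
    the earlier centers that fell into the same tile as v."""
    per_tile = {}
    colors = []
    for x, y in points:
        tile = tile_of(x, y)
        t = per_tile.get(tile, 0)
        per_tile[tile] = t + 1
        colors.append(phi(x, y) + k * t)
    return colors

pts = [(0.0, 0.0), (0.1, 0.1), (0.9, 0.0), (3.0, 3.0)]
c = simple_color(pts)
# proper coloring: unit disks intersect iff centers are within distance 1
for a in range(len(pts)):
    for b in range(a + 1, len(pts)):
        if math.dist(pts[a], pts[b]) <= 1:
            assert c[a] != c[b]
print(c)  # [0, 7, 1, 6]
```

The two points sharing the tile $H_{0,0}$ receive colors $0$ and $0+7$, exactly the "smallest color equal modulo $k$" rule described above.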
\begin{algorithm}[H]\label{Alg:BranchColor} \caption {$BranchColor_{(\varphi_j)}((D_i)_{i\in [n]})$} \ForEach {$i\in [n]$}{Read $D_i$, let $v_i$ be the center and $\sigma_i$ the diameter of $D_i$\\ \eIf{$\sigma_i=\sigma=2^t,\ t\in\mathbb{Z}_{+}$}{$j(v_i)\gets\lfloor\log_2\sigma_i\rfloor-1$} {$j(v_i)\gets\lfloor\log_2\sigma_i\rfloor$} $B_j := B_j \cup \{v_i\}$\\ let $T(v_i)$ be the tile of $\varphi_j$ containing $v_i$\\ $t(v_i) \gets |\{v_1,\ldots v_{i-1}\}\cap T(v_i)|$\\ $c(v_i)\gets (j,\varphi_j (v_i)+12\cdot t(v_i))$ \label{Blinej:c}} \Return {$c$} \end{algorithm} \begin{theorem}\label{thBranchColor} Let $(\varphi_j)$ be a sequence of solid $12$-colorings of $G_{[2^j,2^{j+1}]}$ for $j\in [0,\log_2(\sigma)]\cap\mathbb{N}$, and $(D_i)_{i\in [n]}$ a sequence of $\sigma$-disks. Algorithm $BranchColor_{(\varphi_j)}((D_i)_{i\in [n]})$ returns a coloring of $G=G((D_i)_{i\in [n]})$ with at most $12\lceil\log_2(\sigma)\rceil\cdot \omega(G)$ colors. \end{theorem} \begin{proof} First let us prove that $c$ is indeed a coloring of $G$. Consider any two centers $v_{i_1}$ and $v_{i_2}$ of intersecting disks. If $j(v_{i_1})\neq j(v_{i_2})$ then the first terms of $c(v_{i_1})$ and $c(v_{i_2})$ differ. Otherwise $j(v_{i_1})=j(v_{i_2})=:j$ and the distance between $v_{i_1}$ and $v_{i_2}$ is at most $2^{j+1}$. If $T_{j}(v_{i_1})=T_{j}(v_{i_2})$ then $\varphi_j(v_{i_1})=\varphi_j(v_{i_2})$ but $t(v_{i_1})\neq t(v_{i_2})$, and by the formula in line \ref{Blinej:c} we get $c(v_{i_1})\neq c(v_{i_2})$. If $T_{j}(v_{i_1})\neq T_{j}(v_{i_2})$ then these are two distinct tiles at distance at most $2^{j+1}$, while any two tiles of $\varphi_j$ with the same color are at distance greater than $2\cdot 2^{j}=2^{j+1}$; hence $\varphi_j(v_{i_1})\neq \varphi_j(v_{i_2})$ and $c(v_{i_1})\neq c(v_{i_2})$.\\ Now let us consider the number of colors. For a fixed $j$, all the vertices from $B_j$ have the same first term. Let us consider $v_i$ with the highest second term. Since $\varphi_j$ is a 12-coloring, $\varphi_j (v_i)+12\cdot t(v_i)\leq 12\cdot (t(v_i)+1)$.
Since all vertices from $T(v_i)$ create a clique, and there are already $t(v_i)+1$ of them when $v_i$ is being colored, we obtain $t(v_i)+1\leq \omega(G)$. Hence the second term of $c(v_i)$ is less than or equal to $12\omega(G)$. Since there are at most $\lceil \log_2(\sigma)\rceil$ values of $j$ (see the explanation above Lemma \ref{lemBranchFF}) and for each $j$ we have at most $12\omega(G)$ second terms of $c$, we use no more than $\lceil \log_2(\sigma)\rceil\cdot 12\omega(G)$ colors. \end{proof} \begin{cor} For any sequence of $\sigma$-disks $(D_i)_{i\in [n]}$ and $(\varphi_j)$ as in Theorem \ref{thBranchColor}, we have $cr(BranchColor_{(\varphi_j)}((D_i)_{i\in [n]}))\le12\lceil\log_2(\sigma)\rceil$. \end{cor} Of the three algorithms presented so far, $BranchColor$ clearly has the best competitive ratio for 'large' $\sigma$. Since the competitive ratio of $SimpleColor$ depends on the choice of the coloring of the plane, which in turn depends on $\sigma$, it is a bit hard to pinpoint the exact value of $\sigma$ at which $BranchColor$ becomes better than $SimpleColor$. However, $cr(SimpleColor)$ is quadratic in terms of $\sigma$, no matter the choice of the coloring of $G_{[1,\sigma]}$. Figure \ref{wykresBasicRatios} shows the competitive ratios of the algorithms, where for $SimpleColor$ we take the coloring of $G_{[1,\sigma]}$ from Proposition \ref{prop:hkwadrat}. \begin{figure}[h]\begin{center} \includegraphics[width=1\textwidth]{obrazki2/plotRatioBasics.pdf} \caption{A comparison of competitive ratios of algorithms $BranchFF$, $SimpleColor$ (with colorings of the plane from Proposition \ref{prop:hkwadrat}), $BranchColor$.} \label{wykresBasicRatios} \end{center}\end{figure} \section{Algorithms with \emph{folding}} The next algorithm, $FoldColor$, follows the same idea as $SimpleColor$, with the difference that it uses a $b$-fold coloring instead of a classical coloring of the plane.
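For reference, the three ratios plotted in Figure \ref{wykresBasicRatios} can be evaluated directly; a small Python sketch of ours (valid for $\sigma>1$, and using the $h=1$ case of Proposition \ref{prop:hkwadrat} for $SimpleColor$):

```python
import math

def cr_branch_ff(sigma):
    """28 * ceil(log2(sigma)), from Lemma lemBranchFF."""
    return 28 * math.ceil(math.log2(sigma))

def cr_simple_color(sigma):
    """ceil(2*sigma/sqrt(3) + 1)^2: SimpleColor with the h = 1
    coloring of G_{[1,sigma]} from Proposition prop:hkwadrat."""
    return math.ceil(2 * sigma / math.sqrt(3) + 1) ** 2

def cr_branch_color(sigma):
    """12 * ceil(log2(sigma)), from the corollary to Theorem thBranchColor."""
    return 12 * math.ceil(math.log2(sigma))

for sigma in (2, 4, 8, 16):
    print(sigma, cr_branch_ff(sigma), cr_simple_color(sigma), cr_branch_color(sigma))
```

The quadratic growth of $cr(SimpleColor)$ versus the logarithmic growth of the two branching ratios is visible already for these few values.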
It was introduced by Junosza-Szaniawski, Sokół, Rzążewski, and Węsek \cite{udg} as a coloring algorithm for unit disk graphs; however, it can also be applied in the more general case of $\sigma$-disk graphs, with different colorings of the plane. The algorithm goes as follows. We start with some fixed solid $b$-fold coloring $\varphi=(\varphi_1,\ldots,\varphi_b)$ of $G_{[1,\sigma]}$ with color set $[k]$. When a disk $D_i$ is read, it is assigned to a single tile of the coloring in two steps. First, we find all tiles that contain the center $v_i$ of $D_i$. Then we choose one of the $b$ layers of $\varphi$ for $D_i$ and call it $\ell(v_i)$ (we try to distribute the disks among the layers as uniformly as possible). By doing so we choose a single tile $T$ from that layer containing $v_i$. We count the number of previous vertices assigned to this tile, $t(v_i)$. Then $v_i$ is colored with the color of this tile plus $k$ times the number of previous vertices of $T$. Notice that for $b=1$ the algorithm $FoldColor_\varphi$ is the same as $SimpleColor_\varphi$. \begin{algorithm}[H]\label{Alg:FoldColor} \caption {$FoldColor_\varphi((D_i)_{i\in [n]})$} \ForEach {$i\in [n]$}{Read $D_i$, let $v_i$ be the center of $D_i$\\ \ForEach {$r\in [b]$}{ let $T_r(v_i)$ be the tile from the layer $r$ containing $v_i$} $\ell(v_i)\gets1+(|\{v_1,\ldots v_{i-1}\}\cap \bigcap _{r\in [b]}T_r(v_i)|)_b$ \label{layer}\\ $t(v_i) \gets |\{u\in \{v_1,\ldots v_{i-1}\}\cap T_{\ell(v_i)}(v_i): \ell(u)=\ell(v_i) \}|$\\ $c(v_i)\gets \varphi _{\ell(v_i)}(v_i)+k\cdot t(v_i)$\label{linej:c} } \Return {$c$} \label{lineNo_b} \end{algorithm} The following theorem can be found in the paper by Junosza-Szaniawski et al. \cite{udg}, where only colorings of unit disk graphs are considered. Hence we adjust the proof to our case of $\sigma$-DGs. The bound on the number of colors remains unchanged, as it relies directly on $k$ rather than on $\sigma$ (but naturally $k$ grows along with $\sigma$).
\begin{theorem}\label{poprawnoscTempFold} Let $\varphi=(\varphi_1,\ldots,\varphi_b)$ be a solid $b$-fold $k$-coloring of $G_{[1,\sigma]}$, and $(D_i)_{i\in [n]}$ a sequence of $\sigma$-disks. Algorithm $FoldColor_\varphi((D_i)_{i\in [n]})$ returns a coloring of $G=G((D_i)_{i\in [n]})$. \end{theorem} \begin{proof} Consider any two centers of disks $v_i$ and $v_j$ at Euclidean distance at most $\sigma$. If $T_{\ell(v_i)}(v_i)=T_{\ell(v_j)}(v_j)$ then $\varphi_{\ell(v_i)}(v_i)=\varphi_{\ell(v_j)}(v_j)$ but $t(v_i)\neq t(v_j)$, and by the formula in line \ref{linej:c} we get $c(v_i)\neq c(v_j)$. If $T_{\ell(v_i)}(v_i)\neq T_{\ell(v_j)}(v_j)$ then either $\ell(v_i)\neq \ell(v_j)$ or the distance between $v_i$ and $v_j$ is within $[1,\sigma]$. Hence $\varphi_{\ell(v_i)}(v_i)\neq \varphi_{\ell(v_j)}(v_j)$ and $c(v_i)\neq c(v_j)$. \end{proof} \begin{theorem}[Junosza-Szaniawski et al. \cite{udg}]\label{warstwycol} Let $\varphi=(\varphi_1,\ldots,\varphi_b)$ be a solid $b$-fold $k$-coloring of $G_{[1,\sigma]}$, and $(D_i)_{i\in [n]}$ a sequence of $\sigma$-disks. Algorithm $FoldColor_\varphi((D_i)_{i\in [n]})$ returns a coloring of $G=G((D_i)_{i\in [n]})$ with the largest color at most $k\cdot\left \lfloor \frac{\omega(G)+(b-1)\gamma}{b}\right \rfloor$, where $\gamma$ is the maximum number of subtiles in one tile of $\varphi$. \end{theorem} A proof of a slightly improved result is included below, hence we omit this one. Notice that, by Lemma \ref{gamma}, $k\cdot\left \lfloor \frac{\omega(G)+(b-1)\gamma}{b}\right \rfloor\leq k\cdot\left \lfloor \frac{\omega(G)}{b}+6(b-1)\right \rfloor\leq \frac{k}{b}\cdot \omega(G) +6k (b-1)$. It shows that the algorithm's competitive ratio is close to $\frac{k}{b}$ for graphs with large $\omega(G)$. But we are at a disadvantage if the graph has a small clique size, especially when $\sigma$ is big, as $\frac{k}{b}$ is quadratic in $\sigma$.
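The chain of inequalities above can be checked numerically for small parameters (an illustrative sanity check, not a proof; the parameter ranges and the function name are arbitrary choices of ours):

```python
def bound_chain_holds(b_max=9, w_max=200, k=100):
    """Check, over a grid, that whenever gamma <= 6b (Lemma "gamma"):
       k*floor((w+(b-1)*gamma)/b) <= k*floor(w/b + 6(b-1)) <= (k/b)*w + 6k(b-1),
    where w plays the role of omega(G). All arithmetic is kept in integers;
    the last comparison is multiplied through by b to avoid floats."""
    for b in range(1, b_max + 1):
        for gamma in range(1, 6 * b + 1):          # gamma <= 6b
            for w in range(1, w_max + 1):
                lhs = k * ((w + (b - 1) * gamma) // b)
                mid = k * (w // b + 6 * (b - 1))   # floor(w/b+6(b-1)) exactly
                if not (lhs <= mid and b * mid <= k * w + 6 * k * (b - 1) * b):
                    return False
    return True
```

Running `bound_chain_holds()` confirms the simplification used to read off the $\frac{k}{b}$ asymptotics.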
In the case of UDGs with relatively large clique size ($\omega(G) \geq 108901$ for the $25$-fold coloring from \cite{udg}) the algorithm $FoldColor$ has a competitive ratio lower than $5$, which was the previous best, obtained by $FirstFit$. The fact that $FirstFit$ can be beaten by $FoldColor$ is especially interesting for us, since later in this paper we replace the $FirstFit$ part of $BranchFF$ with $FoldColor$. Now we will slightly improve the $FoldColor$ algorithm by better distributing the vertices among layers. This change will be carried over to the next algorithm as well. Notice that in line \ref{layer} of the algorithm $FoldColor_\varphi((D_i)_{i\in [n]})$ the first vertex in each subtile is assigned to the first layer. Hence in each tile the numbers of vertices from the first and last layers can differ by $\gamma$. To lessen this difference we can distribute vertices among layers in a way closer to uniform. To do so we need the following definition. \begin{defi} Let $\varphi$ be a solid coloring of $G_{[1,\sigma]}$ with $\gamma$ subtiles in each tile. A function $\eta$ assigning a number from $[b]$ to each subtile is called a \emph{shading} of $\varphi$ if in every tile the numbers of subtiles with the same value of $\eta$ are the same. \end{defi} \begin{figure}[h] \center \includegraphics[scale=.8]{obrazki2/shades.pdf}\caption{Example of a shading for a tiling with $h=3$}\label{shade} \end{figure} Notice that the shading does not depend on the coloring $\varphi$ but rather on the tiling that it is based on. For the hexagonal tilings defined in section \ref{sec:tilings}, such a shading exists (see an example in Figure \ref{shade}). The construction for $h\ge3$ is based on the fact that the vertical borders of the hexagons create lines of the form $x=s\cdot \frac{\sqrt{3}}{4h}$, where $s\in \mathbb{Z}$ (this follows from the proof of Lemma \ref{gamma} presented in \cite{udg}).
This gives a partition of the plane into vertical stripes, and each stripe is assigned $h$ shades in a cyclic manner: choose any stripe to be the \emph{first} one; it receives shades $\{1,2,...,h\}$, the one next to it receives shades $\{h+1,...,2h\}$, and so on. Each stripe is divided into diamonds (again by the borders of the hexagons), each consisting of two subtiles. We assign shades to the diamonds in a cyclic manner. It is easy to align the stripes in such a way that every hexagon has exactly 6 triangular subtiles of each shade. For a given solid coloring $\varphi$ of $G_{[1,\sigma]}$ and its shading $\eta$ we can define an improved algorithm $FoldShadeColor_{\varphi,\eta}((D_i)_{i\in [n]})$. It is the algorithm obtained from $FoldColor_\varphi((D_i)_{i\in [n]})$ by replacing line \ref{layer} with \\ $\ell(v_i)\gets \eta(\bigcap _{r\in [b]}T_r(v_i)) +(|\{v_1,\ldots v_{i-1}\}\cap \bigcap _{r\in [b]}T_r(v_i)|)_b$ (so the very first vertex from a subtile with shade $t$ is assigned to the $t$-th layer). \begin{theorem}\label{th:FoldShade} Let $\varphi=(\varphi_1,\ldots,\varphi_b)$ be a solid $b$-fold $k$-coloring of $G_{[1,\sigma]}$, $\eta$ a shading of $\varphi$, and $(D_i)_{i\in [n]}$ a sequence of disks. Algorithm $FoldShadeColor_{\varphi,\eta}((D_i)_{i\in [n]})$ returns a coloring of $G=G((D_i)_{i\in [n]})$ with the largest color at most $k\cdot\left \lfloor \frac{\omega(G)+(b-1)\gamma/2}{b}\right \rfloor$, where $\gamma$ is the maximum number of subtiles in one tile of $\varphi$. \end{theorem} \begin{proof} The proof is analogous to the proof of Theorem \ref{warstwycol} \cite{udg}, with a difference in the estimation of $\omega(G)$. Let $v_i$ be a vertex that got the biggest color. Consider the moment in the course of the algorithm when vertex $v_i$ was colored. Let $\ell_i=\ell(v_i)$, and let $t(v_i), c(v_i)$ be defined as in the algorithm. Let $T=T_{\ell_i}$ be the tile from the $\ell_i$-th layer containing $v_i$. Let $S_1,\ldots, S_\gamma$ be the subtiles of $T$.
Let $s_q=|\{u\in \{v_1,\ldots v_i\}: u\in S_q\} |$ and $s_q(\ell_i)=|\{u\in \{v_1,\ldots v_i\}: u\in S_q, \ell(u)=\ell_i\} |$ for $q\in[\gamma]$. So the number of all vertices assigned to layer $\ell_i$ and tile $T$ before $v_i$ equals $t(v_i)=\sum_{q=1}^\gamma s_q (\ell_i)-1$. Now let us look closely at the values of $s_q$ in terms of $s_q(\ell_i)$. Recall that $(x)_b:=x \bmod b$. Vertices in $S_q$ are assigned to layers in a cyclic manner starting from $\eta(S_q)$, and there are $(\ell_i-\eta(S_q))_b$ of them before the first one is assigned to layer $\ell_i$ (if such a vertex exists). Then we know that there are $s_q(\ell_i)$ vertices in $S_q$ assigned to $\ell_i$, so, beyond those initial ones, there are at least $b(s_q(\ell_i)-1)+1$ further vertices in $S_q$ across all layers. Hence $s_q\ge b(s_q(\ell_i)-1)+(\ell_i-\eta(S_q))_b+1$ (if $s_q(\ell_i)=0$ the right side of the inequality is non-positive while the left side is non-negative, so it holds). Now we are ready to estimate the number of all vertices from $\{v_1,\ldots,v_i\}$ contained in $T$ (but not necessarily assigned to the layer $\ell_i$). Notice that these vertices are pairwise at distance less than one and hence they form a clique. After summing both sides of the last inequality over $q$ from 1 to $\gamma$ we obtain $$\omega(G) \ge\sum_{q=1}^\gamma s_q \ge \sum_{q=1}^\gamma \left[b(s_q(\ell_i)-1)+(\ell_i-\eta(S_q))_b+1\right]= b\sum_{q=1}^\gamma s_q(\ell_i)-b\gamma +\sum_{q=1}^\gamma (\ell_i-\eta(S_q))_b+\gamma$$ $$\overset{(*)}{=} b(t(v_i)+1)-(b-1)\gamma+\frac{\gamma}{b}\sum_{j=0}^{b-1} j= b(t(v_i)+1)-(b-1)\gamma+\frac{\gamma}{b}\cdot\frac{b(b-1)}{2}= b(t(v_i)+1)-(b-1)\gamma/2.$$ Notice that since $\eta$ is a shading of $\varphi$, the number of subtiles in the tile $T$ with each shade from 1 to $b$ is the same. Thus $(\ell_i-\eta(S_q))_b$ attains each of the values from 0 to $b-1$ for the same number of subtiles, namely $\gamma/b$. This explains equality $(*)$.
From the estimation above we obtain that $t(v_i)+1$ is at most $\left \lfloor \frac{\omega(G)+(b-1)\gamma/2}{b}\right \rfloor$. Since we chose $v_i$ to have the biggest color, the total number of colors is at most $c(v_i)= \varphi_{\ell_i}(v_i)+k\cdot t(v_i)\le k (t(v_i)+1)$, which gives the stated bound. \end{proof} In our algorithms it is crucial to construct good $b$-fold colorings of $G_{[1,\sigma]}$. We have already shown a method for constructing such colorings. However, the best possible $(h^2,p,q)$-coloring depends heavily on the value of $\sigma$. Thus it is hard to give the best attainable bounds on the competitive ratio without specifying $\sigma$. Hence we use Proposition \ref{prop:hkwadrat} to give some proper bounds on the competitive ratio of our algorithms, although for most values of $\sigma$ it is possible to lower them by adjusting the coloring of $G_{[1,\sigma]}$. \begin{cor} For the $h^2$-fold coloring $\varphi$ of the plane from Proposition \ref{prop:hkwadrat} and any sequence of $\sigma$-disks $(D_i)_{i\in [n]}$ let $\omega=\omega(G((D_i)_{i\in [n]}))$.
Then $$\mathrm{cr}(FoldColor_\varphi((D_i)_{i\in [n]})) \leq \frac{\left \lceil (\frac{2\sigma}{\sqrt{3}}+1)\cdot h\right \rceil^2}{\omega}\cdot\left \lfloor \frac{\omega+(h^2-1)6h^2}{h^2} \right \rfloor$$ and $$\mathrm{cr}(FoldShadeColor_{\varphi,\eta}((D_i)_{i\in [n]})) \leq \frac{\left \lceil (\frac{2\sigma}{\sqrt{3}}+1)\cdot h\right \rceil^2}{\omega}\cdot\left \lfloor \frac{\omega+(h^2-1)3h^2}{h^2} \right \rfloor.$$ Hence the ratios are of order $\left(\frac{2\sigma}{\sqrt{3}}+1\right)^2+O\left(\frac{1}{h}\right)+O\left(\frac{h^4}{\omega} \right).$ \end{cor} \begin{figure}[h] \center \includegraphics[width=\textwidth]{obrazki2/crDouble.pdf}\caption{Competitive ratio of $FoldShadeColor$ for $\sigma$-DG, $\sigma=1$ or $\sigma=2$, depending on $\omega$, with $\varphi$ as $h^2$-fold colorings where $h=1,2$ or $3$.}\label{wykres:ratio} \end{figure} \begin{figure}[h] \center \includegraphics[scale=.8]{obrazki2/cr_sigma.pdf}\caption{Competitive ratio of $FoldShadeColor$ for $\sigma$-DG with $\omega=10^9$ depending on $\sigma$, with $\varphi$ as $h^2$-fold colorings where $h=10$.}\label{wykres:ratio_sigma} \end{figure} So the algorithm $FoldShadeColor$ has a better competitive ratio than $FoldColor$; however, the competitive ratios of both algorithms are asymptotically equal. Notice that for $h=5$ and unit disk graphs $G$ with $\omega(G) \geq 108901$, the competitive ratio of the algorithm $FoldColor$ is less than $5$, hence lower than the ratio of $FirstFit$. For the algorithm $FoldShadeColor$ with $h=5$ the ratio is below 5 for UDGs with $\omega\ge 54450$ (and occasionally for smaller $\omega$, because of the floor function in our bound). See Figures \ref{wykres:ratio} and \ref{wykres:ratio_sigma} for the changes in the ratio depending on $\omega$, $\sigma$, and $h$. The last algorithm, $BranchFoldColor_{\varphi,\eta}$, combines all the techniques from the previous ones.
The vertices are divided in two steps as in $BranchColor$, but rather than coloring each $B_j$ using a regular coloring of the plane, we use a $b$-fold coloring of $G_{[1,2]}$ and a shading $\eta$, much like in the $FoldShadeColor_{\varphi,\eta}$ algorithm. \begin{algorithm}[H] \caption {$BranchFoldColor_{\varphi,\eta}((D_i)_{i\in [n]})$} \ForEach {$i\in [n]$}{Read $D_i$, let $v_i$ be the center of $D_i$\\ \eIf{$\sigma_i=\sigma=2^t,\ t\in\mathbb{Z}_{+}$}{$j(v_i)\gets\lfloor\log_2\sigma_i\rfloor-1$} {$j(v_i)\gets\lfloor\log_2\sigma_i\rfloor$} $B_j := B_j \cup \{v_i\}$\\ \ForEach {$r\in [b]$}{ let $T_r(v_i)$ be the tile from the layer $r$ containing $v_i$} $\ell(v_i)\gets \eta(\bigcap _{r\in [b]}T_r(v_i)) +(|\{v_1,\ldots v_{i-1}\}\cap \bigcap _{r\in [b]}T_r(v_i)|)_b$ \label{BFlayer}\\ $t(v_i) \gets |\{u\in \{v_1,\ldots v_{i-1}\}\cap T_{\ell(v_i)}(v_i): \ell(u)=\ell(v_i) \}|$\\ $c(v_i)\gets (j,\varphi _{\ell(v_i)}(v_i)+k\cdot t(v_i))$ \label{BFlinej:c} } \Return {$c$} \label{BFlineNo_b} \end{algorithm} \begin{theorem}\label{th:BranchFold} Let $\varphi=(\varphi_1,\ldots,\varphi_b)$ be a solid $b$-fold $k$-coloring of $G_{[1,2]}$, $\eta$ a shading of $\varphi$, and $(D_i)_{i\in [n]}$ a sequence of $\sigma$-disks. Algorithm $BranchFoldColor_{\varphi,\eta}((D_i)_{i\in [n]})$ returns a coloring of $G=G((D_i)_{i\in [n]})$ with the largest color at most $\lceil\log_2(\sigma)\rceil \cdot k\cdot\left \lfloor \frac{\omega(G)+(b-1)\gamma/2}{b}\right \rfloor$, where $\gamma$ is the maximum number of subtiles in one tile of $\varphi$. \end{theorem} \begin{proof} As in the previous algorithms, the branching partitions vertices into sets $B_j$, and each of these sets is colored separately. The first element of the color of a vertex is the index of its set, hence there cannot be a conflict of colors between vertices from different branching sets.
The second part of the color of a vertex comes from the same procedure as in algorithm $FoldShadeColor$, hence it gives a proper coloring of the graph induced by $B_j$, and the maximal value it takes is at most $k\cdot\left \lfloor \frac{\omega(G)+(b-1)\gamma/2}{b}\right \rfloor$. Since there are at most $\lceil\log_2(\sigma)\rceil$ sets $B_j$, we obtain the stated bound. \end{proof} The number of colors in this algorithm looks very similar to the one for $FoldShadeColor$, except for the multiplication by $\lceil\log_2(\sigma)\rceil$. But it can be significantly smaller, since the values of $k$ and $b$ correspond to a coloring of $G_{[1,2]}$ instead of $G_{[1,\sigma]}$ (and with the same $b$ the value of $k$ is quadratic in $\sigma$). By using the fact that $\gamma\le 6b$ we obtain the following simplified bound. \begin{cor} The competitive ratio of $BranchFoldColor$ is at most\\ $$\frac{\lceil\log_2(\sigma)\rceil\cdot k\lfloor\frac{\omega(G)}{b}+3(b-1)\rfloor}{\omega(G)}= O\left(\log_2(\sigma)\cdot \frac{k}{b}\right).$$ \end{cor} Let us consider a function $f(b)$ that gives the best possible value of $k/b'$ over all $b'$-fold $k$-colorings of $G_{[1,2]}$ with $b'\le b$. Not only is it non-increasing, but its value also decreases from time to time. However, if we want a small competitive ratio of the algorithm, then clearly $\omega(G)$ should be large compared to $b$. For that reason, we should only consider small values of $b$. In the table below, we have chosen three \emph{good} $b$-fold colorings of $G_{[1,2]}$. The last column shows the minimal value of $\omega(G)$ for which the upper bound on the number of colors is smaller than for the coloring from the previous row. Of course, the algorithm still works for smaller values of $\omega$ than the ones given below, but it is simply not the best option in such cases. \begin{table}[h]\caption{The first 3 columns correspond to the parameters of the chosen colorings of $G_{[1,2]}$ (the best we get for $b=h^2\le100$).
The last column shows the minimal values of $\omega(G)$ for which the competitive ratio of $BranchFoldColor_{\varphi,\eta}$ is smaller than for the previous row.} \begin{tabular}{c|c|c|c} $b$ & $k$ & $\frac{k}{b}$& $\omega$\\\hline 1&12 &12& \\ 9 & 100 &11.11 & 2700\\ 64& 703 &10.98 & $1.02944\cdot 10^6$ \end{tabular} \end{table} \section{Coloring various shapes} Many classes of intersection graphs of geometric shapes are considered in research papers. In particular, there is plenty of information available in the case of intersection graphs of axis-parallel rectangles. However, in this paper, we concentrate on the scaling aspect of shapes. In this section, we consider the online coloring of any convex shape. In fact, the methods we present could be applied to some non-convex shapes as well, but in general, this might not give the best number of colors. Let us first consider an intersection graph $G$ of similar copies of a convex shape $S$, where scaling by a factor of at most $\sigma$, translation, rotation, and reflection are allowed. We assume that $S$ is the smallest of all similar copies. One natural method of online coloring such graphs would be to first find the circumscribed circles of all shapes in $G$. Then we ``fill them in'': substitute the shapes with disks that contain them, and color the resulting supergraph $H$ of $G$, which is a disk graph. All these operations can be done in an online setting. Unfortunately, the offline parameters of the supergraph $H$ may differ from those of the original graph $G$. In particular, the clique number may increase significantly. Hence if we were to calculate the ratio of diameters of $H$ and use one of our algorithms, the number of colors would be expressed using $\omega(H)$, rather than $\omega(G)$, and we could not tell what the competitive ratio of said algorithm would be. Fortunately, we can avoid such problems if every shape is replaced by a pair of disks.
To adapt our method, for each shape $S$ we consider two disks with a common center: $ID(S)$, the inner disk, and $OD(S)$, the outer disk (described below, see Figure \ref{fig:shapedisk}). To be more precise, we start with the given shape $S$ and choose a \emph{center of $S$} to be an interior point $P$. It is best if the choice of $P$ minimizes the ratio between the diameters of the outer and inner disks with centers in $P$. Just as we did with disks, we will associate the centers of shapes with their corresponding vertices. The \emph{outer disk} is the smallest disk with center $P$ that contains the shape $S$. The \emph{inner disk} is the largest disk with center $P$ contained in the shape $S$. Let us call the diameters of $OD(S)$ and $ID(S)$ the outer and inner diameters, respectively, and denote their ratio by $\rho$. Without loss of generality, we can assume that the inner diameter of $S$ equals one and the outer equals $\rho$. When taking a similar copy of $S$, translation, rotation, and reflection do not affect the diameters. Since we allow scaling by a maximal factor of $\sigma$ and $S$ is the smallest of all copies, all inner diameters are in $[1,\sigma]$ and all outer ones in $[\rho,\rho\cdot\sigma]$. \begin{figure}[h]\begin{center} \includegraphics[width=0.5\textwidth]{obrazki2/shapeRotate} \caption{Inner and outer disks of $S$. A rotated copy of $S$ has the same disks.} \label{fig:shapedisk} \end{center}\end{figure} Notice that if two shapes intersect, so do their outer disks. On the other hand, if two inner disks intersect, so do their original shapes. Using these two simple facts we can find a coloring of the plane that can be used as a basis for an online coloring algorithm. If each tile of our coloring has diameter 1, as the inner diameter of $S$, then all shapes with centers in a single tile form a clique. Moreover, if tiles of the same color are at distance greater than $\rho\cdot\sigma$, then two shapes with centers in two same-colored tiles cannot intersect.
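For a convex polygon with a chosen center, the ratio $\rho$ can be computed directly (a sketch under the assumption that the chosen center lies strictly inside the polygon; the function name is ours). For the unit square with center at its centroid this gives $\rho=\sqrt{2}$:

```python
import math

def inner_outer_ratio(vertices, center):
    """Ratio rho of outer to inner diameter for a convex polygon and a
    chosen interior center point. Outer radius = distance to the farthest
    vertex; inner radius = distance to the nearest edge line."""
    cx, cy = center
    outer = max(math.hypot(x - cx, y - cy) for x, y in vertices)
    inner = float("inf")
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        # distance from the center to the line through this edge
        num = abs((x2 - x1) * (cy - y1) - (y2 - y1) * (cx - x1))
        inner = min(inner, num / math.hypot(x2 - x1, y2 - y1))
    return outer / inner

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rho = inner_outer_ratio(square, (0.5, 0.5))  # sqrt(2) ~ 1.4142
```

Since diameters are twice the radii, the radius ratio equals the diameter ratio, so the two radii suffice.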
Hence we are interested in $b$-fold colorings of $G_{[1,\rho\cdot\sigma]}$. With such a coloring, we can use the $FoldShadeColor$ algorithm. \begin{cor}\label{th:shapeRotate} Let $(S_i)_{i\in [n]}$ be a sequence of shapes similar to $S$ with the ratio of outer to inner diameter equal to $\rho$. Let $\varphi=(\varphi_1,\ldots,\varphi_b)$ be a solid $b$-fold $k$-coloring of $G_{[1,\rho\cdot\sigma]}$. Algorithm $FoldShadeColor_\varphi((S_i)_{i\in [n]})$ returns a coloring of $G=G((S_i)_{i\in [n]})$ with the largest color at most $k\cdot\left \lfloor \frac{\omega(G)+(b-1)\gamma/2}{b}\right \rfloor$, where $\gamma$ is the number of subtiles in one tile of $\varphi$. \end{cor} \begin{proof} The proof that such a coloring is proper is the same as in Theorem \ref{poprawnoscTempFold} (just change the word disk to shape and $\sigma$ to $\rho\cdot\sigma$). The number of colors is copied from Theorem \ref{th:FoldShade}. It follows from the estimation on $\omega(G)$, which is a result of the fact that all vertices from a single tile form a clique. This is still true for shapes, hence the value remains. \end{proof} Notice that if we take $\sigma=1$, there is no scaling involved, hence there is no way of branching. Yet we still need a $b$-fold coloring of $G_{[1,\rho]}$. That is the reason why we include a general method of finding such colorings of the plane instead of considering only colorings of $G_{[1,2]}$. It also shows that the number of colors in our online coloring of $G$ depends heavily on the value of $\rho$, which is why we try to minimize it with the choice of the center of $S$. For larger values of $\sigma$, however, the branching method is possible. Take $\sigma_i$ to be the inner diameter of the shape $S_i$, and use a $b$-fold coloring of $G_{[1,2\rho]}$. Then with branching on $\sigma_i$ we obtain the following. \begin{cor}\label{th:shapeRotateBranch} Let $(S_i)_{i\in [n]}$ be a sequence of shapes similar to $S$ with the ratio of outer to inner diameter equal to $\rho$.
Let $\varphi=(\varphi_1,\ldots,\varphi_b)$ be a solid $b$-fold $k$-coloring of $G_{[1,2\rho]}$. Algorithm $BranchFoldColor_\varphi((S_i)_{i\in [n]})$ returns a coloring of $G=G((S_i)_{i\in [n]})$ with the largest color at most $\lceil \log_2 (\sigma)\rceil\cdot k\cdot\left \lfloor \frac{\omega(G)+(b-1)\gamma/2}{b}\right \rfloor$, where $\gamma$ is the number of subtiles in one tile of $\varphi$. \end{cor} The last thing to point out is that our approach to online coloring of similar shapes does not require the shapes to be similar at all! In fact, it is enough that all shapes from our sequence $(S_i)_{i\in [n]}$ have centers and disks $ID(S_i)$ and $OD(S_i)$ with diameters in $[1,\sigma]$ (or in any interval $[m,M]$ with $m,M\in\mathbb{R}_+$, $m<M$, since we can scale them all). Hence branching and coloring based on a coloring of the plane is a very general method. \section{$L(2,1)$-labeling of disks} In this section, we consider online $L(2,1)$-labeling of $\sigma$-disk graphs, which is a special case of coloring with integer values. Let us first recall the definition (quite often the labeling is defined starting with the value 0, but it is more convenient for us to use positive integers only). \begin{defi} Formally, an $L(2,1)$-labeling of a graph $G$ is any function $c:V\to \{1,\ldots,k\}$ such that \begin{enumerate} \item $|c(v) - c(w)| \geq 1$ for all $v,w \in V(G)$ such that $d(v,w)=2$ (we call such pairs \emph{second neighbors}), \item $|c(v) - c(w)| \geq 2$ for all $v,w \in V(G)$ such that $vw \in E(G)$. \end{enumerate} The value $k-1$ is called the \emph{span} of the labeling. \end{defi} The {\em $L(2,1)$-span} of a graph $G$, denoted by $\lambda(G)$, is the minimum span of an $L(2,1)$-labeling of $G$. The number of available labels is $\lambda(G)+1$, but some may not be used.
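The two conditions of the definition can be checked mechanically (a minimal sketch for small graphs given as edge lists; the helper name \texttt{is\_l21\_labeling} is ours):

```python
from itertools import combinations

def is_l21_labeling(n, edges, c):
    """Check the L(2,1) conditions: labels differ by >= 2 on edges, and
    are distinct on pairs at graph distance exactly 2 (second neighbors)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for u, v in combinations(range(n), 2):
        if v in adj[u]:                    # adjacent vertices
            if abs(c[u] - c[v]) < 2:
                return False
        elif adj[u] & adj[v]:              # distance exactly 2
            if c[u] == c[v]:
                return False
    return True

# On the path on 3 vertices, labels 1-3-1 fail (the endpoints are second
# neighbors sharing a label), while 2-4-1 is a valid labeling with span 3.
```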
$L(2,1)$-labeling was introduced by Griggs and Yeh \cite{griggs}, who proved that $\lambda(G)\le \Delta^2+ 2\Delta$ and conjectured $\lambda(G)\le \Delta^2$, which became known as the $\Delta^2$-conjecture. $L(2,1)$-labeling was considered for various classes of graphs \cite{bodlaender, Cala, georges, heuvel}. Not all graphs can be effectively $L(2,1)$-labeled \emph{online}, since once two vertices receive the same label, the next vertex could be a neighbor of them both, and thus the labeling would be faulty. However, having a geometric representation of a graph with some conditions can exclude those situations from happening. In those cases, we consider the competitive ratio as the ratio of the highest labels of the online and offline labelings. Hence our maximal label is compared with $\lambda(G)+1$, and since $\lambda(G)+1\ge 2\omega(G)-1$, bounding the maximal label by a function of $\omega(G)$ is very convenient. Fiala, Fishkin and Fomin \cite{FFF} studied $L(2,1)$-labelings of disk graphs. One of their results states the following. \begin{lemma}\cite{FFF} There is no constant competitive online $L(2,1)$-labeling algorithm for the class of $\sigma$-disk graphs unless there is an upper bound on $\sigma$ and any $\sigma$-disk graph occurs as a sequence of disks in the online input. \end{lemma} This result follows from the fact that in online $L(2,1)$-labeling, once we label a vertex $v$ with $c(v)$, the label $c(v)$ is not available for any current or future second neighbors. For that reason, we need to know whether a pair of vertices might become second neighbors in the future, and this is where the location of the disks and the bounds on their diameters play a role. If the centers of two disks are $C_1, C_2$ and the diameters $d_1$ and $d_2$, they can become second neighbors if and only if $\mathrm{dist}(C_1,C_2)\le \frac{d_1+d_2}{2}+\sigma$. Now we will consider which of the presented online coloring algorithms may be adjusted for $L(2,1)$-labelings.
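The criterion above translates directly into a predicate (a sketch; the function name is ours, and `math.dist` computes the Euclidean distance between the centers):

```python
import math

def may_become_second_neighbors(c1, d1, c2, d2, sigma):
    """Two disks with centers c1, c2 and diameters d1, d2 can become
    second neighbors via some future disk of diameter at most sigma
    iff dist(c1, c2) <= (d1 + d2)/2 + sigma."""
    return math.dist(c1, c2) <= (d1 + d2) / 2 + sigma

# Two unit-diameter disks at center distance 3: a bridging disk of
# diameter 2 can touch both (threshold 1/2 + 1/2 + 2 = 3), but no disk
# of diameter 1 can (threshold 2 < 3).
```

The threshold arises because a bridging disk of diameter $\sigma$ reaches at most $d_i/2+\sigma/2$ from each center to its own center, giving $\frac{d_1+d_2}{2}+\sigma$ in total.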
Firstly, we know that $SimpleColor$ can be used for online $L(2,1)$-labeling, as was presented in \cite{FFF}. The algorithm itself remains unchanged, but we need to use a special coloring of the plane, which we call a solid $L(2,1)$-coloring of the plane. In the case of unit disks it was proven in \cite{udg} that $FoldColor_\varphi$ gives a proper $L(2,1)$-labeling as long as $\varphi$ is a solid $b$-fold $L^*(2,1)$-labeling of $G_{[1,1]}$ with $k$ labels. There is no reason for the algorithm to stop working properly for larger $\sigma$. Shading also does not spoil the labeling. Hence we get that $FoldShadeColor_{\varphi,\eta}$ 'works' for $L(2,1)$-labeling of $\sigma$-disk graphs. \begin{theorem}\label{th:warstwyL21} Let $\varphi=(\varphi_1,\ldots,\varphi_b)$ be a solid $b$-fold $L^*(2,1)$-labeling of $G_{[1,\sigma]}$ with $k$ labels, $\eta$ a shading of $\varphi$, and $(D_i)_{i\in [n]}$ a sequence of $\sigma$-disks. The algorithm $FoldShadeColor_{\varphi,\eta}((D_i)_{i\in [n]})$ returns an $L(2,1)$-labeling of $G=G((D_i)_{i\in [n]})$ with the largest label not exceeding $k\cdot\left \lfloor \frac{\omega(G)+(b-1)\gamma/2}{b}\right \rfloor$. \end{theorem} \begin{proof} First, we prove the correctness of the algorithm. Consider any two centers of disks $v_i$ and $v_j$ with the same label. Notice that $T_{\ell(v_i)}(v_i)\neq T_{\ell(v_j)}(v_j)$, because otherwise $t(v_i)\neq t(v_j)$ and thus $c(v_i)\neq c(v_j)$. Since $\varphi_{\ell(v_i)}(v_i)=\varphi_{\ell(v_j)}(v_j)$ and $\varphi$ is an $L^*(2,1)$-labeling of $G_{[1,\sigma]}$, the tiles $T_{\ell(v_i)}(v_i)$ and $T_{\ell(v_j)}(v_j)$ are at point-to-point distance greater than $2\sigma$. Hence $v_i$ and $v_j$ are at Euclidean distance greater than $2\sigma$, and thus they are neither neighbors nor have a neighbor in common. Now consider two centers of disks $v_i$ and $v_j$ labeled with consecutive numbers. Without loss of generality assume that $c(v_i)+1=c(v_j)$.
Then $\varphi_{\ell(v_i)}(v_i)+1=\varphi_{\ell(v_j)}(v_j)$, or $\varphi_{\ell(v_i)}(v_i)=k$ and $\varphi_{\ell(v_j)}(v_j)=1$. Since $\varphi$ is an $L^*(2,1)$-labeling of the plane, $T_{\ell(v_i)}(v_i)$ and $T_{\ell(v_j)}(v_j)$ are at point-to-point distance greater than $\sigma$. Hence $v_i$ and $v_j$ are at Euclidean distance greater than $\sigma$ and are not adjacent. The bound on the largest color follows directly from Theorem \ref{th:FoldShade}. \end{proof} The use of $b$-fold labelings of $G_{[1,\sigma]}$, instead of limiting ourselves to $1$-fold labelings, might give a significant improvement in the number of labels in a labeling of $G=G((D_i)_{i\in[n]})$, just as it did in the case of colorings. We have shown in section \ref{sec:planeL21} that the required solid labelings of $G_{[1,\sigma]}$ exist for any $\sigma$. Now let us consider the branching technique. While branching vertices, we partition them according to their diameters into sets $B_j$ and color each set separately. This approach will not work with $L(2,1)$-labeling, since we cannot easily reserve a set of labels for each $B_j$ in advance. Since the labels of adjacent vertices must differ by at least 2, and vertices from sets $B_i,\ B_j$, $i\neq j$, can be neighbors, we would need to make sure that there is no pair of consecutive labels between $B_i$ and $B_j$. In addition, second neighbors must have different labels, and their common neighbor might be in a different set $B_j$ than either of them. Hence, to the best of our knowledge, the $FoldShadeColor$ algorithm might be the best one for online $L(2,1)$-labeling of $\sigma$-disk graphs. Moreover, $FoldShadeColor$ can also be applied for online $L(2,1)$-labeling of shapes with bounded inner and outer diameters. It is necessary to choose the appropriate coloring of the plane, as we did in the case of the online coloring of shapes, namely a solid $L^*(2,1)$-labeling of $G_{[1,\rho\cdot\sigma]}$.
\section{Concluding remarks} We have considered a few algorithms which use some sort of coloring of the plane as a basis for online coloring of geometric intersection graphs. As we have shown, they can be applied not only to disk coloring but also to the online coloring of sequences of shapes. Since there are many ways of modifying the coloring of the plane, there are many online coloring problems that could be approached with our methods. $L(2,1)$-labeling of $\sigma$-disks and shapes is just one of them. Moreover, the same methods could be applied in other dimensions. One may consider solid $b$-fold colorings of a line as a basis for online coloring, as in \cite{shorty}. The same could be done in higher dimensions, where for instance a solid coloring of $\mathbb{R}^3$ could serve for coloring intersection graphs of 3-dimensional balls and other three-dimensional solid objects. We must, however, always have bounds on the size of the considered objects. \begin{thebibliography}{} \bibitem{bodlaender} H. L. Bodlaender, T. Kloks, R. B. Tan, J. van Leeuwen, Approximations for lambda-Colorings of Graphs, {\em Comput. J.}, 47(2): 193--204, 2004. \bibitem{Cala} T. Calamoneri, The $L(h,k)$-Labelling Problem: An Updated Survey and Annotated Bibliography, {\em Comput. J.}, 54: 1344--1371, 2011. \bibitem{capponi} A. Capponi, C. Pilloto, On-line coloring and on-line partitioning of graphs, manuscript, 2003. \bibitem{shorty} J. Chybowska-Sokół, G. Gutowski, K. Junosza-Szaniawski, P. Mikos, A. Polak, Online Coloring of Short Intervals, {\em Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)}, Leibniz International Proceedings in Informatics (LIPIcs), 52:1--52:18, 2020. \bibitem{MYplaszczyzna} J. Chybowska-Sokół, K. Junosza-Szaniawski, K. Węsek, Coloring distance graphs on the plane, in review. \bibitem{deGrey} A.D.N.J. de Grey, The chromatic number of the plane is at least 5, 2018. arXiv:1804.02385 \bibitem{erlebach} T. Erlebach, J.
Fiala, On-line coloring of geometric intersection graphs, {\em Computational Geometry}, 23: 243--255, 2002. \bibitem{exoo} G. Exoo, $\varepsilon$-Unit Distance Graphs, {\em Discrete Comput. Geom.}, 33: 117--123, 2005. \bibitem{falconer} K.J. Falconer, The realization of distances in measurable subsets covering $\mathbb{R}^n$, {\em J. Comb. Theory, Ser. A}, 31: 187--189, 1981. \bibitem{exoo2018} G. Exoo, D. Ismailescu, The chromatic number of the plane is at least 5 -- a new proof, 2018. arXiv:1805.00157 \bibitem{FFF} J. Fiala, A. Fishkin, F. Fomin, On distance constrained labeling of disk graphs, {\em Theoretical Computer Science}, 326: 261--292, 2004. \bibitem{georges} J.P. Georges, D.W. Mauro, On the size of graphs labeled with a condition at distance two, {\em J. Graph Theory}, 22: 47--57, 1996. \bibitem{Goncalves} D. Gon\c{c}alves, On the $L(p, 1)$-labelling of graphs, {\em Discrete Mathematics}, 308: 1405--1414, 2008. \bibitem{griggs} J.R. Griggs, R.K. Yeh, Labeling graphs with a condition at distance two, {\em SIAM J. Discrete Math.}, 5: 586--595, 1992. \bibitem{ulamkowe} J. Grytczuk, K. Junosza-Szaniawski, J. Sokół, K. Węsek, Fractional and $j$-fold coloring of the plane, {\em Disc. \& Comp. Geom.}, 55(3): 594--609, 2016. \bibitem{hadwiger} H. Hadwiger, Ungel\"oste Probleme, {\em Elemente der Mathematik}, 16: 103--104, 1961. \bibitem{hale} W.K. Hale, Frequency assignment: Theory and applications, {\em Proc. IEEE}, 68: 1497--1514, 1980. \bibitem{hochberg} R. Hochberg, P. O'Donnell, A large independent set in the unit distance graph, {\em Geombinatorics}, 3(4): 83--84, 1993. \bibitem{heule} M. J. H. Heule, Computing Small Unit-Distance Graphs with Chromatic Number 5, 2018. arXiv:1805.12181 \bibitem{heuvel} J. van den Heuvel, R.A. Leese, M.A. Shepherd, Graph labeling and radio channel assignment, {\em J. Graph Theory}, 29: 263--283, 1998. \bibitem{udg} K. Junosza-Szaniawski, J. Sokół, P.
Rzążewski, Online coloring and $L(2,1)$-labeling of unit disk intersection graphs, to appear. \bibitem{kierstead} H.A. Kierstead, W.T. Trotter, An extremal problem in recursive combinatorics, {\em Congr. Numerantium}, 33: 143--153, 1981. \bibitem{FirstFit5} H.A. Kierstead, D.A. Smith, W.T. Trotter, First-Fit coloring on interval graphs has performance ratio at least 5, {\em European Journal of Combinatorics}, 51: 236--254, 2016. \bibitem{McKMcM} T.A. McKee, F.R. McMorris, Topics in Intersection Graph Theory, {\em SIAM Monographs on Discrete Mathematics and Applications, vol. 2}, SIAM, Philadelphia, 1999. \bibitem{malesinska} E. Malesi\'nska, Graph theoretical models for frequency assignment problems, {\em Ph.D. Thesis}, Technical University of Berlin, Germany, 1997. \bibitem{Parts} J. Parts, The chromatic number of the plane is at least 5 -- a human-verifiable proof, 2020. arXiv:2010.12661 \bibitem{peeters} R. Peeters, On coloring $j$-unit sphere graphs, {\em Technical Report}, Department of Economics, Tilburg University, 1991. \bibitem{roberts} F.S. Roberts, $T$-colorings of graphs: Recent results and open problems, {\em Disc. Math.}, 93: 229--245, 1991. \bibitem{soifer} A. Soifer, The Mathematical Coloring Book, Springer, New York, 2008. \bibitem{SYPS} Z. Shao, R. Yeh, K.K. Poon, W.C. Shiu, The $L(2,1)$-labeling of $K_{1,n}$-free graphs and its applications, {\em Applied Math. Letters}, 21: 1188--1193, 2008. \end{thebibliography} \end{document}
2206.14511v2
http://arxiv.org/abs/2206.14511v2
Nonparametric signal detection with small values of type I and type II error probabilities
\documentclass{article} \usepackage[utf8]{inputenc} \title{Nonparametric signal detection with small values of type I and type II error probabilities} \author{Mikhail Ermakov} \date{October 2020} \usepackage{srcltx} \usepackage{graphicx,color,amssymb,latexsym,amsmath,amsfonts} \usepackage{mathrsfs} \RequirePackage{amsthm,amsmath,amssymb} \RequirePackage[numbers]{natbib} \numberwithin{equation}{section} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem{assumption}{Assumption}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{remark}{Remark}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{proposition}{Proposition}[section] \begin{document} \global\long\def\zb{\boldsymbol{z}} \global\long\def\ub{\boldsymbol{u}} \global\long\def\vb{\boldsymbol{v}} \global\long\def\wb{\boldsymbol{w}} \global\long\def\tb{\boldsymbol{t}} \global\long\def\eb{\boldsymbol{e}} \global\long\def\sb{\boldsymbol{s}} \global\long\def\0b{\boldsymbol{0}} \global\long\def\xb{\boldsymbol{x}} \global\long\def\xib{\boldsymbol{\xi}} \global\long\def\etab{\boldsymbol{\eta}} \global\long\def\omegab{\boldsymbol{\omega}} \global\long\def\yb{\boldsymbol{y}} \global\long\def\thetab{\boldsymbol{\theta}} \maketitle Institute of Problems of Mechanical Engineering RAS, Bolshoy pr., 61, VO, 199178 St. Petersburg and St. Petersburg State University, Universitetsky pr., 28, Petrodvoretz, 198504 St. Petersburg, RUSSIA AMS subject classification: 62F03, 62G10, 62G20 Keywords: Neyman test, consistency, signal detection, goodness-of-fit tests \footnote{Research has been supported by the RFFI Grant 20-01-00273.} \begin{abstract}We consider the problem of signal detection in Gaussian white noise. The test statistics are linear combinations of squares of estimators of Fourier coefficients or $\mathbb{L}_2$-norms of kernel estimators. We point out necessary and sufficient conditions for nonparametric sets of alternatives to have a given rate of exponential decay of type II error probabilities. 
\end{abstract} \section{Introduction} In hypothesis testing and confidence estimation we usually work with events having small error probabilities. In this paper we explore the probabilities of such events arising in problems of signal detection in Gaussian white noise. For tests of Neyman type and for tests generated by $\mathbb{L}_2$-norms of kernel estimators, we provide a comprehensive description of nonparametric sets of alternatives having given rates of convergence to zero of type II error probabilities. If the problem of nonparametric hypothesis testing is the problem of testing a hypothesis on the adequacy of a statistical model, the alternatives can be arbitrary. The only natural requirement is that functions from the set of alternatives do not belong to the hypothesis. Thus we need to explore the consistency of nonparametric sets of alternatives without any additional assumptions. We should not be limited by assumptions on their parametric structure or their membership in some class of smooth functions \cite{er90, ing02}. Such an approach has been developed in \cite{dal13, erm21, erm22}, and this line is continued in the present paper. In \cite{erm21,erm22} we established necessary and sufficient conditions of uniform consistency of nonparametric sets of alternatives for widespread nonparametric tests: Kolmogorov, Cram\'er--von Mises, chi-squared with the number of cells increasing with the sample size, Neyman, Bickel--Rosenblatt. The results were provided for nonparametric sets of alternatives defined both in terms of distribution functions and in terms of densities. The paper provides a comprehensive description of uniformly consistent nonparametric sets of alternatives having a given rate of convergence of type II error probabilities to zero as the noise power tends to zero. 
Such a description is provided both in terms of the distance method (the asymptotics of the distance between the hypothesis and the alternatives) and in terms of the rate of convergence to zero of the $\mathbb{L}_2$-norms of signals belonging to the sets of alternatives. As mentioned, we consider tests of Neyman type \cite{ney} and tests generated by $\mathbb{L}_2$-norms of kernel estimators \cite{bic, hor}. Suppose we observe a random process $Y_\varepsilon(t)$, $t \in [0,1]$, defined by the stochastic differential equation \begin{equation}\label{vv1} dY_\varepsilon(t) = S(t)\, dt + \varepsilon\, dw(t), \quad \varepsilon > 0, \end{equation} where $S \in \mathbb{L}_2(0,1)$ and $dw(t)$ is Gaussian white noise. We test the hypothesis \begin{equation}\label{i29} \mathbb{H}_0 \,\,:\,\, S(t) = 0, \quad t \in [0,1], \end{equation} versus the simple alternatives \begin{equation}\label{i30} \mathbb{H}_\varepsilon \,\,:\,\, S(t) = S_\varepsilon(t), \quad t \in [0,1], \quad \varepsilon > 0. \end{equation} It is clear that the asymptotics of type II error probabilities for an arbitrary family of sets of alternatives $\Psi_\varepsilon \subset \mathbb{L}_2(0,1)$, $\varepsilon > 0$, is unambiguously described by such a setup. We can always extract from the sets $\Psi_\varepsilon$, $\varepsilon > 0$, a family of simple alternatives $S_\varepsilon$, $\varepsilon > 0$, having the worst order of asymptotics of type II error probabilities and explore these asymptotics. The paper is organized as follows. In section \ref{sec2} we provide definitions of uniform consistency in the zones of large and moderate deviation probabilities. In section \ref{sec3} the asymptotics of type I and type II error probabilities in the large and moderate deviation zones are pointed out for test statistics that are linear combinations of squared estimates of Fourier coefficients of the signal. In section \ref{sec4} we introduce the notion of maxisets for the zones of large deviation probabilities. 
Maxisets for test statistics are the largest convex, center-symmetric sets such that, if we delete from such a maxiset the $\mathbb{L}_2$-balls centered at zero whose radii converge to zero at a given rate as the noise power tends to zero, we get a uniformly consistent family of sets of alternatives in the large deviation sense. In section \ref{sec5} we establish that the Besov bodies $\mathbb{B}^s_{2\infty}(P_0)$, $P_0 > 0$, are maxisets. The parameter $s$ depends on the rate of convergence of the $\mathbb{L}_2$-norms $\|S_\varepsilon\|$ to zero. We show that any family of simple alternatives $S_\varepsilon$, $\varepsilon > 0$, having a given rate of convergence of the $\mathbb{L}_2$-norms $\|S_\varepsilon\|$ to zero, is consistent in terms of large deviation probabilities, if and only if, each function $S_\varepsilon$ admits a representation as the sum of two functions: $S_{1\varepsilon} \in \mathbb{B}^s_{2\infty}(P_0) $ with the same rate of convergence of $\|S_{1\varepsilon}\|$ to zero as $\|S_\varepsilon\|$, and an orthogonal function $S_{2\varepsilon}$. The functions $S_{1\varepsilon}$ are smoother than $S_{2\varepsilon}$, and the functions $S_{2\varepsilon}$ are more rapidly oscillating. In section \ref{sec6} we establish similar results when the test statistics are $\mathbb{L}_2$-norms of kernel estimators of the signal. In section \ref{sec7} the proofs of the Theorems are provided. We use the letters $c$ and $C$ as generic notation for positive constants. Denote by ${\bf 1}_{\{A\}}$ the indicator of an event $A$. Denote by $[a]$ the integer part of a real number $a$. For any two sequences of positive real numbers $a_n$ and $b_n$, $a_n = O(b_n)$ and $a_n \asymp b_n$ mean respectively $a_n < C b_n$ and $ca_n \le b_n \le Ca_n$ for all $n$, while $a_n = o(b_n)$ and $a_n \ll b_n$ mean $a_n/b_n \to 0$ as $n \to \infty$. We also use the notation $a_n = \Omega(b_n)$, which means $b_n \le C a_n$ for all $n$. For any complex number $z$ denote by $\bar z$ its complex conjugate. 
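For intuition about the Besov bodies mentioned above, note that membership of a coefficient sequence in $\mathbb{\bar B}^s_{2\infty}(P_0)$ (see (\ref{vv})) amounts to the boundedness of the seminorm $\sup_{\lambda>0} \lambda^{2s} \sum_{j>\lambda} \theta_j^2$. The following sketch (an illustration only; the helper name \texttt{besov\_seminorm} is ours, not from the paper) evaluates this seminorm for a finitely supported sequence.

```python
import numpy as np

def besov_seminorm(theta, s):
    """sup_{lambda>0} lambda^{2s} sum_{j>lambda} theta_j^2 for a finitely
    supported sequence theta_1, ..., theta_n.  For lambda in (m-1, m) the sum
    is the tail over j >= m, so the supremum reduces to a maximum over
    integer thresholds m = 1, ..., n."""
    theta = np.asarray(theta, dtype=float)
    # tails[m-1] = sum_{j >= m} theta_j^2
    tails = np.cumsum((theta ** 2)[::-1])[::-1]
    m = np.arange(1, theta.size + 1)
    return float(np.max(m ** (2 * s) * tails))

# Coefficients decaying like j^{-(s+1/2)} have tails sum_{j>=m} j^{-2s-1}
# of order m^{-2s}, so the seminorm stays bounded as the truncation grows.
s = 0.5
vals = [besov_seminorm(np.arange(1.0, n + 1) ** (-(s + 0.5)), s)
        for n in (100, 1000)]
```

Here `vals` stays bounded as the truncation level increases, in agreement with membership in a Besov body of smoothness $s$.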
Define the standard normal distribution function $$ \Phi(x) = \frac{1}{\sqrt{2\pi}}\,\int_{-\infty}^x\,\exp\{-t^2/2\}\, dt, \quad x \in \mathbb{R}^1. $$ Let $\phi_j$, $1 \le j < \infty$, be an orthonormal system of functions in $\mathbb{L}_2(0,1)$. For each $P_0 > 0$ define the set \begin{equation}\label{vv} \mathbb{\bar B}^s_{2\infty}(P_0) = \Bigl\{S : S = \sum_{j=1}^\infty\theta_j\phi_j,\,\,\, \sup_{\lambda>0} \lambda^{2s} \sum_{j>\lambda} \theta_j^2 \le P_0,\,\, \theta_j \in \mathbb{R}^1 \Bigr\}. \end{equation} Under some conditions on the basis $\phi_j$, $1 \le j < \infty$, the functional space $$ \bar{\mathbb{ B}}^s_{2\infty} = \Bigl\{ S : S = \sum_{j=1}^\infty\theta_j\phi_j,\,\,\, \sup_{\lambda>0} \lambda^{2s} \sum_{j>\lambda}\, \theta_j^2 < \infty,\,\, \theta_j \in \mathbb{R}^1 \Bigr\} $$ is the Besov space $\mathbb{B}^s_{2\infty}$ (see \cite{rio}). In particular, this holds if $\phi_j$, $1 \le j < \infty$, is the trigonometric basis. If $\phi_j(x) = \exp\{2\pi i j x\}$, $x\in (0,1)$, $j = 0, \pm 1, \ldots$, denote $$ \mathbb{ B}^s_{2\infty}(P_0) = \Bigl\{S\,:\, S = \sum_{j=-\infty}^\infty \theta_j\phi_j,\,\,\, \sup_{\lambda>0} \lambda^{2s} \sum_{|j| >\lambda} |\theta_j|^2 \le P_0 \Bigr\}. $$ Since the functions $\phi_j$ are complex here, the $\theta_j$ are complex numbers as well, and $\theta_j = \bar\theta_{-j}$ for all $-\infty < j < \infty$. Denote by $\mathbb{ B}^s_{2\infty}$ the Banach space generated by the balls $\mathbb{ B}^s_{2\infty}(P_0)$, $P_0 > 0$. \section{Consistency in the large and moderate deviation zones \label{sec2} } For any test $L_\varepsilon$, $\varepsilon > 0$, denote by $\alpha(L_\varepsilon) = \mathbf{E}_0(L_\varepsilon)$ its type I error probability and by $\beta(L_\varepsilon,S)= \mathbf{E}_S(1 -L_\varepsilon)$ its type II error probability for an alternative $S \in \mathbb{L}_2(0,1)$. Let $r_\varepsilon \to \infty$ as $\varepsilon \to 0$. 
If we have \begin{equation}\label{us3} |\log \beta(L_\varepsilon,S_\varepsilon)|= \Omega(r_\varepsilon^2) \end{equation} for all tests $L_\varepsilon$, $\alpha(L_\varepsilon) = \alpha_\varepsilon$, such that $r_\varepsilon^{-2}\,\log\alpha(L_\varepsilon) \to 0$ as $\varepsilon \to 0$, then we say that the family of alternatives $S_\varepsilon$, $\varepsilon > 0$, is $r_\varepsilon^2$-consistent from the large deviation viewpoint ($r_\varepsilon^2 - LD$-consistent). Note that any $r_\varepsilon^2 - LD$-consistent family of alternatives $S_\varepsilon$ is consistent, while the converse does not necessarily hold. If (\ref{us3}) does not hold, we say that the family of alternatives is $r_\varepsilon^2 - LD$ inconsistent. \section{Asymptotics of type I and type II error probabilities of quadratic test statistics \label{sec3}} Using the orthonormal system of functions $\phi_j$, $1 \le j < \infty$, we can rewrite the stochastic differential equation (\ref{vv1}) in terms of the sequence model (see \cite{ib81}) \begin{equation}\label{q2} y_{\varepsilon j} = \theta_{ j} +\varepsilon \xi_j, \quad 1 \le j < \infty, \end{equation} where $$y_{\varepsilon j} = \int_0^1 \phi_j dY_\varepsilon(t), \quad \xi_j = \int_0^1\,\phi_j\,dw(t) \quad \mbox{ and} \quad \theta_j = \int_0^1 S\,\phi_j\,dt.$$ Denote $\yb_\varepsilon = \{y_{\varepsilon j}\}_{j=1}^\infty$ and $\thetab = \{\theta_j\}_{j=1}^\infty$. For the alternatives $S_\varepsilon$ we put $\theta_{\varepsilon j} = \int_0^1 S_\varepsilon\,\phi_j\,dt$. We can consider $\thetab$ as a vector in the Hilbert space $\mathbb{H}$ with the norm $\|\thetab\| = \Bigl(\sum_{j=1}^\infty \theta_j^2\Bigr)^{1/2}$. In what follows, we use the same notation $\| \cdot \|$ for the norms of the space $\mathbb{L}_2$ and of $\mathbb{H}$. The meaning of this notation will always be clear from the context. 
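The sequence model (\ref{q2}) is easy to simulate after truncation at a finite number of coefficients. The following sketch (our illustration, not from the paper) draws $y_{\varepsilon j} = \theta_j + \varepsilon \xi_j$ with i.i.d. standard normal $\xi_j$.

```python
import numpy as np

def observe_sequence(theta, eps, rng=None):
    """Draw y_j = theta_j + eps * xi_j, xi_j i.i.d. N(0,1):
    a finite truncation of the sequence model (q2)."""
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta, dtype=float)
    return theta + eps * rng.standard_normal(theta.shape)

# As the noise level eps decreases, the observations concentrate around theta.
theta = np.array([1.0, 0.5, 0.25, 0.0])
y = observe_sequence(theta, eps=0.01, rng=0)
```

With `eps=0` the observations reproduce the Fourier coefficients exactly, which is a convenient check of the simulation.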
We explore the test statistics of Neyman-type tests \begin{equation*} T_\varepsilon(Y_\varepsilon) = \varepsilon^{-2}\sum_{j=1}^\infty \varkappa_{\varepsilon j}^2 y_{\varepsilon j}^2 - \rho^2_\varepsilon, \end{equation*} where $\rho_\varepsilon^2 = \sum_{j=1}^\infty \varkappa_{\varepsilon j}^2$. Suppose that the coefficients $\varkappa_{\varepsilon j}^2$ satisfy the following assumptions. \noindent{\bf A1.} For any $\varepsilon>0$ the sequence $\varkappa^2_{\varepsilon j}$ is decreasing in $j$. \noindent{\bf A2.} There holds \begin{equation}\label{q5} C_1 < A_\varepsilon = \,\varepsilon^{-4}\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^4 < C_2. \end{equation} Denote $\varkappa_\varepsilon^2=\varkappa^2_{\varepsilon k_\varepsilon}$, where $k_\varepsilon = \sup\Bigl\{k: \sum_{j < k} \varkappa^2_{\varepsilon j} \le \frac{1}{2} \rho_\varepsilon^2 \Bigr\}$. \noindent{\bf A3.} There are $C_1$ and $\lambda >1$ such that, for any $\delta > 0$ and each $\varepsilon>0$, with $k_{\varepsilon(\delta)} = [(1+\delta)k_\varepsilon]$ we have $ \varkappa^2_{\varepsilon,k_{\varepsilon(\delta)}} < C_1(1 +\delta)^{-\lambda}\varkappa_\varepsilon^2. $ \noindent{\bf A4.} We have $\varkappa_{\varepsilon 1}^2 \asymp \varkappa_\varepsilon^2$ as $\varepsilon \to 0$. For any $c>1$ there is $C$ such that $\varkappa_{\varepsilon,[ck_\varepsilon]}^2 \ge C\varkappa_\varepsilon^2$ for all $\varepsilon > 0$. Define the test \begin{equation*} L_\varepsilon= \mathbf{1}_{( A^{-1/2}_\varepsilon T_\varepsilon(Y_\varepsilon) > x_{\alpha_\varepsilon})}. \end{equation*} Denote $D_\varepsilon(S_\varepsilon) = \varepsilon^{-4} \sum_{j=1}^\infty \varkappa_{\varepsilon j}^2 \,\theta_{\varepsilon j}^2 = \varepsilon^{-4} \left(T_\varepsilon(S_\varepsilon) + \varepsilon^2 \rho^2_\varepsilon\right)$ and \begin{equation*} B_\varepsilon(S_\varepsilon) = \frac{D_\varepsilon(S_\varepsilon)}{(2A_\varepsilon)^{1/2}}. \end{equation*} \begin{thm}\label{th1} Assume A1--A4. 
Let $x_{\alpha_\varepsilon} \to \infty $ and let $x_{\alpha_\varepsilon} = o(k_{\varepsilon}^{1/6})$ as $\varepsilon \to 0$. Then we have \begin{equation}\label{us1} \alpha(L_\varepsilon) = (1 - \Phi(x_{\alpha_\varepsilon}))(1 + o(1)). \end{equation} Let $\alpha_\varepsilon \to 0 $, $x_{\alpha_\varepsilon} = O(B_\varepsilon(S_\varepsilon))$ and let \begin{equation}\label{eth2} D_\varepsilon(S_\varepsilon) \to \infty, \quad D_\varepsilon(S_\varepsilon) = o(k_\varepsilon^{1/6}), \quad B_\varepsilon(S_\varepsilon) - x_{\alpha_\varepsilon} = \Omega(B_\varepsilon(S_\varepsilon)) \end{equation} as $\varepsilon \to 0$. Then we have \begin{equation}\label{us1n} \beta(L_\varepsilon,S_\varepsilon) = \Phi (x_{\alpha_\varepsilon}- B_\varepsilon(S_\varepsilon))(1 + o(1)). \end{equation} Let $x_{\alpha_\varepsilon} \to \infty $ and let $x_{\alpha_\varepsilon} = o(k_{\varepsilon}^{1/2})$ as $\varepsilon \to 0$. Then we have \begin{equation}\label{us2} \sqrt{2|\log\alpha(L_\varepsilon)|} =x_{\alpha_\varepsilon}(1 +o(1)), \quad \alpha_\varepsilon \to 0, \end{equation} as $\varepsilon \to 0$. Let $\alpha_\varepsilon \to 0 $, $x_{\alpha_\varepsilon} = O(B_\varepsilon(S_\varepsilon))$ and let \begin{equation} \label{eth3} D_\varepsilon(S_\varepsilon) \to \infty,\quad D_\varepsilon(S_\varepsilon) = o(k_\varepsilon^{1/2}), \quad B_\varepsilon(S_\varepsilon) - x_{\alpha_\varepsilon} = \Omega(B_\varepsilon(S_\varepsilon)) \end{equation} as $\varepsilon \to 0$. Then we have \begin{equation} \label{us2n} \sqrt{2|\log\beta(L_\varepsilon,S_\varepsilon)|} = ( B_\varepsilon(S_\varepsilon) - x_{\alpha_\varepsilon})(1 + o(1)). \end{equation} Let $\alpha_\varepsilon \to 0 $, $x_{\alpha_\varepsilon} = O(B_\varepsilon(S_\varepsilon))$ and let \begin{equation*} r_\varepsilon \to \infty, \quad \frac{B_\varepsilon(S_\varepsilon)}{r_\varepsilon} \to \infty, \quad B_\varepsilon(S_\varepsilon) - x_{\alpha_\varepsilon} = \Omega(B_\varepsilon(S_\varepsilon)) \end{equation*} as $\varepsilon \to 0$. 
Then we have $r_\varepsilon^{-2} |\log \beta(L_\varepsilon, S_\varepsilon)| \to \infty$ as $\varepsilon \to 0$.\end{thm} Denote \begin{equation*} \tau_\varepsilon^2= \varepsilon^4 D_\varepsilon(S_\varepsilon)=\sum_{j=1}^\infty \varkappa_{\varepsilon j}^2 \theta_{\varepsilon j}^2 = T_\varepsilon(S_\varepsilon) + \varepsilon^2\rho^2_\varepsilon. \end{equation*} \begin{thm}\label{th2} Assume A1--A4. Let $x_{\alpha_\varepsilon} \to \infty $ and let $k_\varepsilon = \Omega(x^2_{\alpha_\varepsilon})$ as $\varepsilon \to 0$. Then we have \begin{equation}\label{eth20} \log P_{0} (T_\varepsilon(Y_\varepsilon) > x_{\alpha_\varepsilon}) \le - \frac{1}{2} \varkappa_{\varepsilon 1}^{-2} x_{\alpha_\varepsilon} (1 + o(1)). \end{equation} Let, additionally, $k_\varepsilon = \Omega(D_\varepsilon^2(S_\varepsilon))$ and $\varepsilon(\tau_\varepsilon - x_{\alpha_\varepsilon}^{1/2}) \to \infty$ as $\varepsilon \to 0$. Then we have \begin{equation}\label{eth21} \log P_{S_\varepsilon} (T_\varepsilon (Y_\varepsilon) < x_{\alpha_\varepsilon}) \le - \frac{1}{2} \varkappa_{\varepsilon 1}^{-2} (\tau_{\varepsilon}-x^{1/2}_{\alpha_\varepsilon})^2(1 +o(1)). \end{equation} The equality in (\ref{eth21}) is attained for $S_{\varepsilon}= \{\theta_{\varepsilon j}\}_{j=1}^\infty$, where $\theta_{\varepsilon 1} = \varkappa_{\varepsilon 1}^{-1} \tau_\varepsilon$ and $\theta_{\varepsilon j} =0$ for $j >1$. \end{thm} In Theorems \ref{th1} and \ref{th2} the coefficients $\varkappa^2_{\varepsilon j}$ have a different normalization in comparison with the coefficients $\kappa^2_{\varepsilon j}$ in \cite{erm08}. At the same time, Theorems \ref{th1} and \ref{th2} are obtained by a simple modification of the proofs of Lemmas 2 and 3 in \cite{erm08}. 
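For concreteness, the normalized statistic $A^{-1/2}_\varepsilon T_\varepsilon(Y_\varepsilon)$ and the test $L_\varepsilon$ defined above can be evaluated numerically for finitely many coefficients. The sketch below is an illustration only; the particular weights $\varkappa^2_{\varepsilon j}$ are an assumption of ours, chosen merely to satisfy A1 and A2, and are not taken from the paper.

```python
import numpy as np

def normalized_statistic(y, kappa2, eps):
    """A_eps^{-1/2} * T_eps(Y_eps), where
    T_eps = eps^{-2} sum_j kappa2_j y_j^2 - rho2,
    rho2  = sum_j kappa2_j,
    A_eps = eps^{-4} sum_j kappa2_j^2."""
    y, kappa2 = np.asarray(y, float), np.asarray(kappa2, float)
    rho2 = kappa2.sum()
    T = (kappa2 * y ** 2).sum() / eps ** 2 - rho2
    A = (kappa2 ** 2).sum() / eps ** 4
    return T / np.sqrt(A)

def neyman_test(y, kappa2, eps, x_alpha):
    """L_eps = 1{ A_eps^{-1/2} T_eps(Y_eps) > x_alpha }."""
    return normalized_statistic(y, kappa2, eps) > x_alpha

# Decreasing weights (A1); kappa2_j of order eps^2 keeps A_eps bounded (A2).
eps = 0.1
kappa2 = eps ** 2 / np.arange(1.0, 101.0) ** 0.1
```

Under the hypothesis $S = 0$ and for moderate thresholds $x_{\alpha_\varepsilon}$, the statistic is approximately standard normal by Theorem \ref{th1}, which can be checked by Monte Carlo simulation over repeated draws of $y$.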
To use the notation $\varkappa^2_{\varepsilon j}$ instead of $\kappa^2_{\varepsilon j}$ in the proofs of Lemmas 2 and 3 in \cite{erm08}, we should put \begin{equation}\label{norm} \kappa^2_{\varepsilon j} = \frac{\sum_{i=1}^\infty \varkappa^2_{\varepsilon i} \theta^2_{\varepsilon i}}{\sum_{i=1}^\infty \varkappa^4_{\varepsilon i}} \,\varkappa^2_{\varepsilon j}, \quad 1 \le j < \infty. \end{equation} The normalization (\ref{q5}) was implemented in theorems on the asymptotic normality of the test statistics $T_\varepsilon(Y_\varepsilon)$ in \cite{er90, erm21}. \section{Maxisets in the large deviation zone \label{sec4} } The maxiset definition in the zone of large deviation probabilities is akin to the definition in \cite{erm21}, introduced for the exploration of uniform consistency in problems of nonparametric hypothesis testing. The only difference is the replacement of the notion of consistency with the notion of $\varepsilon^{-2\omega}- LD$ consistency, $0 < \omega \le 1$. Just as in \cite{erm21}, we explore maxisets for alternatives $S_\varepsilon$ approaching the hypothesis in the $\mathbb{L}_2$-norm with the rate of convergence $\varepsilon^{2r}$, $ 0 < r < 1/2$, as $\varepsilon \to 0$. Let $\Xi$, $\Xi \subset \mathbb{L}_2(0,1)$, be a Banach space with the norm $\|\cdot\|_\Xi$. Denote by $ U=\{S:\, \|S\|_\Xi \le 1,\, S \in \Xi\}$ the unit ball in $\Xi$. In what follows, we suppose that the set $U$ is compact in $\mathbb{L}_2(0,1)$ (see \cite{ib77, erm21}). Define subspaces $\Pi_k$, $1 \le k < \infty$, by induction. Denote $d_1= \max\{\|S\|,\, S \in U\}$ and define a function $e_1 \in U$ such that $\|e_1\|= d_1.$ Denote by $\Pi_1 \subset \mathbb{L}_2(0,1)$ the linear subspace generated by the function $e_1$. For $i=2,3,\ldots$ denote $d_i = \max\{\rho(S,\Pi_{i-1}), S \in U \}$, where $\rho(S,\Pi_{i-1})=\min\{\|S-g\|, g \in \Pi_{i-1} \}$. Define a function $e_i$, $e_i \in U$, such that $\rho(e_i,\Pi_{i-1}) = d_i$. Denote by $\Pi_i$ the linear space generated by the functions $e_1,\ldots,e_i$. 
For any function $S \in \mathbb{L}_2(0,1)$ denote by $S_{\Pi_i}$ the projection of $S$ onto the subspace $\Pi_i$ and put $\tilde S_i = S - S_{\Pi_i}$. The set $U$ is called a maxiset and the functional space $\Xi$ is called a maxispace if the following two statements hold: \vskip 0.2cm {\sl i.} any sequence of alternatives $S_\varepsilon \in U$, $ \|S_{\varepsilon}\| \asymp \varepsilon^{2r}$ as $\varepsilon \to 0$, is $\varepsilon^{-2\omega}- LD$ consistent. \vskip 0.2cm {\sl ii.} for any $S \in \mathbb{L}_2(0,1)$, $S \notin \Xi$, there are sequences $i_n$ and $j_{i_n}$, $i_n \to \infty$, $j_{i_n} \to \infty$ as $n \to \infty$, such that $c j_{i_n}^{-r}<\| \tilde S_{i_n}\| < C j_{i_n}^{-r}$ and the subsequence of alternatives $\tilde S_{i_n}$, observed in Gaussian white noise of power $j_{i_n}^{-1/2}$, is $j_{i_n}^{\omega}- LD$ inconsistent. \section{Necessary and sufficient conditions of $\varepsilon^{-2\omega}- LD$ - consistency of families of simple alternatives $S_\varepsilon$ converging to the hypothesis in $\mathbb{L}_2$ \label{sec5}} We explore necessary and sufficient conditions of $\varepsilon^{-2\omega}- LD$ - consistency of families of simple alternatives $S_\varepsilon$, $\varepsilon>0$, such that $\|S_\varepsilon\| \asymp \varepsilon^{2r}$ as $\varepsilon \to 0$. Here $0< r <1/2$ and $0 < 2\omega < 1 - 2r$. Thus $r_\varepsilon \asymp \varepsilon^{-\omega}$ as $\varepsilon \to 0$. We put $k_\varepsilon \asymp \varepsilon^{-4+ 8r +4\omega}$. For this setup, A1--A4 imply \begin{equation}\label{u1} \kappa_\varepsilon^2 \asymp \varepsilon^{4-4r-4\omega}, \quad \bar A_\varepsilon \doteq \varepsilon^{-4}\,\sum_{j=1}^\infty \kappa_{\varepsilon j}^4\asymp \varepsilon^{-4\omega} \end{equation} as $\varepsilon \to 0$. The Theorems of this section and the subsequent proofs are akin to the statements and the proofs of Theorems 4.1 and 4.4--4.10 in \cite{erm21} with the above orders of $\kappa_\varepsilon^2, \bar A_\varepsilon$ and $k_\varepsilon$. 
For the proofs it suffices to substitute the above orders in the proofs of the corresponding Theorems in \cite{erm21}. This will be demonstrated in the proof of property {\it ii.} of the maxiset in Theorem \ref{tq1}. All other proofs will be omitted. \subsection{Analytical form of necessary and sufficient conditions of $\varepsilon^{-2\omega}- LD$-consistency \label{s4.2}} The results are provided in terms of the Fourier coefficients of the functions $S= S_\varepsilon = \sum_{j=1}^\infty \theta_{\varepsilon j} \phi_j$. \begin{thm}\label{tq3} Assume {\rm A1-A4}. A family of alternatives $S_\varepsilon$, $\|S_\varepsilon\| \asymp \varepsilon^{2r}$, is $\varepsilon^{-2\omega}- LD$ consistent, if and only if, there are $c_1$, $c_2$ and $\varepsilon_0>0$ such that there holds \begin{equation}\label{con2} \sum_{|j| < c_2k_\varepsilon} |\theta_{\varepsilon j}|^2 > c_1 \varepsilon^{4r} \end{equation} for all $\varepsilon < \varepsilon_0$. \end{thm} Versions of Theorem \ref{tq3} and Theorem \ref{tq6}, given below, hold also for test statistics based on the $\mathbb{L}_2$-norm of kernel estimators. In that setup the index $j$ takes positive and negative values and the coefficients $\theta_{\varepsilon j}$ may be complex numbers. For this reason, we write $|j|$ instead of $j$ and $|\theta_{\varepsilon j}|$ instead of $\theta_{\varepsilon j}$ in (\ref{con2}) and (\ref{con19}). \subsection{Maxisets. Qualitative structure of consistent families of alternatives } Denote $s = \frac{r}{2 -4r-2\omega}$. Then $r = \frac{(2-2\omega)s}{1 + 4s}$. \begin{thm}\label{tq1} Assume {\rm A1-A4}. Then the balls $\mathbb{\bar B}^s_{2\infty}(P_0)$, $P_0 > 0$, are maxisets for the test statistics $T_\varepsilon(Y_\varepsilon)$. \end{thm} \begin{thm}\label{tq7} Assume {\rm A1-A4}. 
Then a family of alternatives $S_\varepsilon$, $ \|S_\varepsilon\| \asymp \varepsilon^{2r} $, is $\varepsilon^{-2\omega}- LD$ consistent, if and only if, there are a maxiset $\mathbb{\bar B}^s_{2\infty}(P_0)$, $P_0 > 0$, and a family of functions $S_{1\varepsilon} \in \mathbb{\bar B}^s_{2\infty}(P_0)$, $\|S_{1\varepsilon}\| \asymp \varepsilon^{2r}$, such that $S_{1\varepsilon}$ is orthogonal to $S_{\varepsilon} - S_{1\varepsilon}$, that is, we have \begin{equation} \label{ma1} \| S_\varepsilon\|^2 = \| S_{1\varepsilon}\|^2 + \|S_\varepsilon - S_{1\varepsilon}\|^2. \end{equation} \end{thm} \begin{thm}\label{tq11} Assume {\rm A1-A4}. Then, for any $\delta > 0$ and any $\varepsilon^{-2\omega}- LD$ - consistent family of alternatives $S_\varepsilon$, $ \|S_\varepsilon\| \asymp \varepsilon^{2r} $, there are a maxiset $\mathbb{\bar B}^s_{2\infty}(P_0)$, $P_0 > 0$, and a family of functions $S_{1\varepsilon}$, $ \|S_{1\varepsilon}\| \asymp \varepsilon^{2r} $, $S_{1\varepsilon} \in \mathbb{B}^s_{2\infty}(P_0)$, such that the following hold: \vskip 0.25cm $S_{1\varepsilon}$ is orthogonal to $S_\varepsilon - S_{1\varepsilon}$; \vskip 0.25cm for any tests $L_\varepsilon$ satisfying (\ref{eth3}), there is $\varepsilon_0 =\varepsilon_0(\delta) > 0$ such that, for $\varepsilon < \varepsilon_0$, there hold \begin{equation} \label{uuu} |\log \beta(L_\varepsilon,S_\varepsilon) - \log \beta(L_\varepsilon,S_{1\varepsilon})| \le \delta |\log \beta(L_\varepsilon,S_\varepsilon)| \end{equation} and \begin{equation} \label{uu1} |\log \beta(L_\varepsilon,S_\varepsilon - S_{1\varepsilon}) | \le \delta |\log \beta(L_\varepsilon,S_\varepsilon)|. \end{equation} \end{thm} \subsection{Interaction of $\varepsilon^{-2\omega}- LD$ consistent and $\varepsilon^{-2\omega}- LD$ inconsistent alternatives. 
Purely $\varepsilon^{-2\omega}- LD$ consistent families of alternatives} We say that an $\varepsilon^{-2\omega}- LD$-consistent family of alternatives $S_\varepsilon$, $\|S_\varepsilon\| \asymp \varepsilon^{2r}$, is {\sl purely $\varepsilon^{-2\omega}- LD$--consistent} if there is no inconsistent subsequence of alternatives $S_{1\varepsilon_i}$, $\varepsilon_i \to 0$ as $i \to \infty$, such that $S_{1\varepsilon_i}$ is orthogonal to $S_{\varepsilon_i}- S_{1\varepsilon_i}$ and $\|S_{1\varepsilon_i}\| > c_1\varepsilon_i^{2r}$. \begin{thm} \label{tq5} Assume {\rm A1-A4}. Let a family of alternatives $S_\varepsilon$, $\|S_\varepsilon\| \asymp \varepsilon^{2r}$, be $\varepsilon^{-2\omega}- LD$ consistent. Then for any $\varepsilon^{-2\omega}- LD$ inconsistent family of alternatives $S_{1\varepsilon}$, $\|S_{1\varepsilon}\| \asymp \varepsilon^{2r}$, there holds \begin{equation*} \lim_{\varepsilon \to 0} \frac{\log\beta(L_\varepsilon,S_\varepsilon) - \log\beta(L_\varepsilon,S_\varepsilon + S_{1\varepsilon})}{\log\beta(L_\varepsilon,S_\varepsilon)} = 0. \end{equation*} \end{thm} \begin{thm}\label{tq6} Assume {\rm A1-A4}. A family of alternatives $S_\varepsilon$, $\|S_\varepsilon\| \asymp \varepsilon^{2r}$, is purely $\varepsilon^{-2\omega}- LD$--consistent, if and only if, for any $\delta >0$ there is a constant $C_1= C_1(\delta)$ such that there holds \begin{equation}\label{con19} \sum_{|j| > C_1k_\varepsilon} |\theta_{\varepsilon j}|^2 \le \delta \varepsilon^{4r} \end{equation} for all $\varepsilon< \varepsilon_0(\delta)$. \end{thm} \begin{thm}\label{tq12} Assume {\rm A1-A4}. 
Then a family of alternatives $S_\varepsilon$, $ \|S_\varepsilon\| \asymp \varepsilon^{2r} $, is purely $\varepsilon^{-2\omega}- LD$--consistent, if and only if, for any $\delta > 0$ there are a maxiset $ \mathbb{\bar B}^s_{2\infty}(P_0)$ and a family of functions $S_{1\varepsilon} \in \mathbb{\bar B}^s_{2\infty}(P_0)$ such that $\|S_\varepsilon - S_{1\varepsilon}\| \le \delta \varepsilon^{2r}$ for all $\varepsilon< \varepsilon_0(\delta)$. \end{thm} \begin{thm}\label{tq8} Assume {\rm A1-A4}. Then a family of alternatives $S_\varepsilon$, $\|S_\varepsilon\| \asymp \varepsilon^{2r}$, is purely $\varepsilon^{-2\omega}- LD$--consistent, if and only if, for any $\varepsilon^{-2\omega}- LD$--inconsistent sequence of alternatives $S_{1\varepsilon_i}$, $ \|S_{1\varepsilon_i}\| \asymp \varepsilon_i^{2r}$, $\varepsilon_i \to 0$ as $i \to \infty$, there holds \begin{equation}\label{ma2} \|S_{\varepsilon_i} + S_{1\varepsilon_i}\|^2 = \|S_{\varepsilon_i} \|^2 + \| S_{1\varepsilon_i}\|^2 + o(\varepsilon_i^{4r}). \end{equation} \end{thm} \begin{remark}\label{rem1}{\rm Let $\varkappa_{\varepsilon j}^2 > 0$ for $j \le l_\varepsilon$, and let $\varkappa^2_{\varepsilon j} = 0$ for $j > l_\varepsilon$, where $l_\varepsilon \asymp \varepsilon^{-4+ 8r +4\omega}$ as $\varepsilon \to 0$. The analysis of the proofs shows that Theorems \ref{th1}, \ref{th2} and \ref{tq3} - \ref{tq8} remain valid for this setup if assumption A4 is replaced with \vskip 0.25cm \noindent{\bf A5.} For any $c$, $0 < c <1$, there is $c_1$ such that $\varkappa^2_{\varepsilon,[cl_\varepsilon]} \ge c_1 \varkappa^2_{\varepsilon 1}$ for all $\varepsilon > 0$. \vskip 0.25cm In all the corresponding proofs we put $\varkappa^2_\varepsilon = \varkappa_{\varepsilon 1}^2$ and $k_\varepsilon = l_\varepsilon$. Theorem \ref{tq6} remains correct with the following modification: it suffices to suppose that $C_1(\delta) < 1$. 
The proofs of the corresponding versions of Theorems \ref{th1}, \ref{th2} and \ref{tq3} - \ref{tq8} are similar and are omitted.} \end{remark} \section{Tests based on kernel estimators \label{sec6}} We explore the same problem of signal detection in Gaussian white noise (\ref{i29}), (\ref{i30}). Suppose additionally that the signal $S_\varepsilon$ belongs to $\mathbb{L}_2^{per}(\mathbb{R}^1)$, the set of 1-periodic functions such that $S_\varepsilon(t) \in \mathbb{L}_2(0,1)$, $t \in [0,1)$. This allows us to extend our model to the real line $\mathbb{R}^1$ by putting $w(t+j) = w(t)$ for all integer $j$ and $t \in [0,1)$ and to write the forthcoming integrals over the whole real line (see \cite{erm21}). Define the kernel estimators \begin{equation}\label{yy} \hat{S}_\varepsilon(t) = \frac{1}{h_\varepsilon} \int_{-\infty}^{\infty} K\Bigl(\frac{t-u}{h_\varepsilon}\Bigr)\, d\,Y_\varepsilon(u), \quad t \in (0,1), \end{equation} where $h_\varepsilon> 0$, $h_\varepsilon \to 0$ as $\varepsilon \to 0$. The kernel $K$ is a bounded function such that the support of $K$ is contained in $[-1/2,1/2]$, $K(t) = K(-t)$ for $t \in \mathbb{R}^1$ and $\int_{-\infty}^\infty K(t)\,dt = 1$. Denote $K_h(t) = \frac{1}{h} K\Bigl(\frac{t}{h}\Bigr)$, $t \in \mathbb{R}^1$, $h >0$. Define the test statistics $$T_{1\varepsilon}(Y_\varepsilon) = T_{1\varepsilon h_\varepsilon}(Y_\varepsilon) =\varepsilon^{-2}h_\varepsilon^{1/2}\gamma^{-1} (\|\hat S_{\varepsilon}\|^2- \varepsilon^2 h_\varepsilon^{-1}\|K\|^2),$$ where $$ \gamma^2 = 2 \int_{-\infty}^\infty \Bigl(\int_{-\infty}^\infty K(t-s)K(s) ds\Bigr)^2\,dt. $$ The statistic $\|\hat S_\varepsilon\|^2- \varepsilon^2 h_\varepsilon^{-1}\|K\|^2$ is an estimator of the functional $$ T_{\varepsilon}(S_\varepsilon) =\int_0^1\Bigl(\frac{1}{h_\varepsilon}\int K\Bigl(\frac{t-s}{h_\varepsilon}\Bigr)S_\varepsilon(s)\, ds\Bigr)^2 dt. 
$$ Denote $L_\varepsilon = \mathbf{1}_{\{T_{1\varepsilon}(Y_\varepsilon) > x_{\alpha_\varepsilon}\}}$, where $x_{\alpha_\varepsilon}$ is defined by the equation $\alpha(L_\varepsilon) = \alpha_\varepsilon$. The problem admits an interpretation in the framework of the sequence model. Suppose we observe a realization of the random process $Y_\varepsilon(t)$, $t \in [0,1]$. For $-\infty < j < \infty$, denote $$ \hat K(jh) = \int_{-1}^1 \exp\{2\pi ijt\}\,K_h(t)\, dt,\quad h > 0, $$ $$ y_{\varepsilon j} = \int_0^1 \exp\{2\pi ijt\}\, dY_\varepsilon(t), \quad \xi_j = \int_0^1 \exp\{2\pi ijt\}\, dw(t), $$ $$ \theta_{\varepsilon j} = \int_0^1 \exp\{2\pi ijt\}\, S_\varepsilon(t)\, dt. $$ In this notation we can define the estimators in the following form \begin{equation}\label{au1} \hat \theta_{\varepsilon j} = \hat K(jh_\varepsilon)\, y_{\varepsilon j} = \hat K(jh_\varepsilon)\, \theta_{\varepsilon j} + \varepsilon\, \hat K(jh_\varepsilon)\, \xi_j, \quad -\infty < j < \infty. \end{equation} Then the test statistic $T_{1\varepsilon}$ admits the following representation: \begin{equation}\label{au111} T_{1\varepsilon}(Y_\varepsilon) = \varepsilon^{-2} h_\varepsilon^{1/2}\gamma^{-1} \Bigl(\sum_{j=-\infty}^\infty |\hat \theta_{\varepsilon j}|^2 - \varepsilon^2 \sum_{j=-\infty}^\infty |\hat K(jh_\varepsilon)|^2\Bigr). \end{equation} If we put $|\hat K(jh_\varepsilon)|^2 = \kappa^2_{\varepsilon j}$, we get that the test statistics of this section and of sections \ref{sec3} and \ref{sec5} are almost indistinguishable. Results similar to those of section \ref{sec3} were obtained in \cite{erm11}. However, for obtaining results similar to those of section \ref{sec5}, one needs their interpretation in terms of Fourier coefficients. The main difference in the setup of section \ref{sec6} is the presence of heterogeneous Gaussian white noise in the model. \begin{thm}\label{tk3} Let $\|S_\varepsilon\| \asymp \varepsilon^{2r}$ and $h_\varepsilon \asymp \varepsilon^{4-8r-4\omega}$, $ 0 < 2 \omega < 1 - 2r$. 
Then Theorems \ref{tq3}--\ref{tq8} are valid for this setup with the only difference that $ \mathbb{\bar B}^s_{2\infty}(P_0)$ is replaced with $\mathbb{B}^s_{2\infty}(P_0)$. \end{thm} The proof of the Theorems is based on the following version of Theorem \ref{th1} (see Theorems 2.1 and 2.2 in \cite{erm11}). \begin{thm}\label{tk2} Let $h_\varepsilon \to 0$ as $\varepsilon \to 0$. Let $\alpha_\varepsilon \doteq \alpha(L_\varepsilon) = o(1)$. If \begin{equation*} 1 \ll \sqrt{2|\log \alpha_\varepsilon|} \ll h_\varepsilon^{-1/6}, \end{equation*} then $x_{\alpha_\varepsilon}$ is defined by the equation $\alpha_\varepsilon= (1 - \Phi(x_{\alpha_\varepsilon}))(1 + o(1))$ as $\varepsilon \to 0$. If \begin{equation}\label{metka} \varepsilon^{-2} h_\varepsilon^{1/2} T_\varepsilon(S_\varepsilon) - \sqrt{2|\log \alpha_\varepsilon|} > c \varepsilon^{-2} h_\varepsilon^{1/2} T_\varepsilon(S_\varepsilon) \end{equation} and \begin{equation*} \varepsilon^2 h_\varepsilon^{-1/2} \ll T_\varepsilon(S_\varepsilon) \ll \varepsilon^2 h_\varepsilon^{-2/3} = o(1) \end{equation*} as $\varepsilon \to 0$, then the following relation holds \begin{equation}\label{33} \beta(L_\varepsilon,S_\varepsilon) = \Phi(x_{\alpha_\varepsilon} - \gamma^{-1} \varepsilon^{-2} h_\varepsilon^{1/2} T_{\varepsilon}(S_\varepsilon))(1 + o(1)). \end{equation} If \begin{equation*} 1 \ll \sqrt{2|\log \alpha_\varepsilon|} \ll h_\varepsilon^{-1/2}, \end{equation*} then $x_{\alpha_\varepsilon} = \sqrt{2|\log\alpha_\varepsilon|}(1 + o(1))$. If \begin{equation*} \varepsilon^2 h_\varepsilon^{-1/2} \ll T_\varepsilon(S_\varepsilon) \ll \varepsilon^2 h_\varepsilon^{-1} = o(1) \end{equation*} and (\ref{metka}) holds, then we have \begin{equation}\label{34} 2\log\beta(L_\varepsilon,S_\varepsilon) = -(x_{\alpha_\varepsilon} - \gamma^{-1} \varepsilon^{-2} h_\varepsilon^{1/2} T_{\varepsilon}(S_\varepsilon))^2(1 + o(1)). 
\end{equation} If $\frac{\varepsilon^2 x_{\alpha_\varepsilon}}{h_\varepsilon^{1/2} T_{\varepsilon}(S_\varepsilon)} \to 0 $ as $\varepsilon \to 0$, then \begin{equation*} \lim_{\varepsilon \to 0} (\log \alpha_\varepsilon)^{-1} \log \beta (L_\varepsilon,S_\varepsilon) = \infty. \end{equation*} \end{thm} Note that \begin{equation}\label{z33} T_{\varepsilon}(S_\varepsilon) = \sum_{j=-\infty}^\infty |\hat K(jh_\varepsilon)|^2 |\theta_{\varepsilon j}|^2. \end{equation} \section{Proofs of Theorems \label{sec7}} \subsection{Proof of Theorem \ref{th2}} Denote $z^2 = \kappa^{2}_{\varepsilon} x_{\alpha_\varepsilon}$. Applying the exponential Chebyshev inequality, we have \begin{equation}\label{dth21} \begin{split} & \mathbf{P}_{S_\varepsilon}(T_\varepsilon - \tau_\varepsilon^2 < z^2 - \tau_\varepsilon^2)\\& \le \exp\{t(z^2 - \tau_\varepsilon^2)\} \mathbf{E}_0 \left[\exp\left\{-t\sum_{j=1}^\infty \kappa_j^2(y_j^2 - \varepsilon^2) + 2t\sum_{j=1}^\infty\kappa_j^2y_j\theta_j\right\}\right] \\&= \exp\{t(z^2 - \tau_\varepsilon^2)\} \\& \times \lim_{m\to \infty} (2\pi\varepsilon^2)^{-m/2}\int \left[\exp\left\{-\frac{1}{2}\sum_{j=1}^m\left(\frac{1 + 2t\kappa_j^2\varepsilon^2}{\varepsilon^2} y_j^2 \right)\right.\right.\\&\left.\left. + t \sum_{j=1}^m \kappa_j^2 \varepsilon^2+ 2t\sum_{j=1}^m\kappa_j^2y_j\theta_j \pm \sum_{j=1}^m \frac{2t^2\kappa_j^4\theta_j^2\varepsilon^2}{1 + 2t\kappa_j^2\varepsilon^2} \right\}\right]\,d\,y \\& = \exp\{t(z^2 - \tau_\varepsilon^2)\}\exp\left\{\frac{1}{2}\sum_{j=1}^\infty\log(1 + 2t\kappa_j^2\varepsilon^2) + \sum_{j=1}^\infty \frac{2t^2\kappa_j^4\theta_j^2\varepsilon^2}{1 + 2t\kappa_j^2\varepsilon^2} \right\}. \end{split} \end{equation} Note that \begin{equation*} \sum_{j=1}^\infty\log (1 + 2t\kappa_j^2\varepsilon^2) \asymp k_\varepsilon \ll \varepsilon^{-2}\tau_\varepsilon^2.
\end{equation*} It is easy to see that, if $\sum_{j=1}^\infty \kappa_j^2 \theta_j^2 = \mbox{const}$, then the supremum over $\theta_j$ of the expression under the second exponent is attained when $\kappa_\varepsilon^2 \theta_1^2 = \tau_\varepsilon^2$. Thus the problem is reduced to the maximization over $t$ of \begin{equation}\label{dth22} \exp\Bigl\{t(z^2 - \tau_\varepsilon^2) + \frac{2t^2\kappa_\varepsilon^2\tau_\varepsilon^2\varepsilon^2}{1 + 2t\kappa_\varepsilon^2\varepsilon^2}\Bigl\}. \end{equation} By straightforward calculations, we get that the extremum over $t$ is attained at \begin{equation}\label{dth23} 2t = \varepsilon^{-2}\kappa_\varepsilon^{-2}(\tau_\varepsilon z^{-1} - 1). \end{equation} Substituting (\ref{dth23}) into (\ref{dth22}), we get the right-hand side of (\ref{eth21}). \subsection{Proof of Theorem \ref{th1}} The test statistic $T_\varepsilon$ is a sum of independent random variables. Therefore the standard reasoning of the proof of the Cram\'er theorem applies \cite{os,petr}. This reasoning has been realized in the proof of Lemma 4 in \cite{erm08}. Under the hypothesis, the proof of Theorem \ref{th1} completely coincides with the proof of Lemma 4 in \cite{erm08}. Under the alternative, the difference is only in the evaluation of the residual terms arising in the proof of Lemma 4 in \cite{erm08}. Such differences arise first of all in the evaluation of the residual term in (3.51) in \cite{erm08}.
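The extremal calculation (\ref{dth22})--(\ref{dth23}) can be checked numerically. The following standalone sketch (the numerical values of $\varepsilon$, $\kappa_\varepsilon$, $\tau_\varepsilon$ and $z$ are arbitrary illustrative choices, not values from the paper) evaluates the exponent of (\ref{dth22}) and verifies that the closed-form point $2t = \varepsilon^{-2}\kappa_\varepsilon^{-2}(\tau_\varepsilon z^{-1}-1)$ is a stationary point, with the extremal value $-(\tau_\varepsilon - z)^2/(2\kappa_\varepsilon^2\varepsilon^2)$.

```python
# Numerical check of the stationary point (dth23) of the exponent in the bound (dth22).
# All parameter values below are illustrative.

def exponent(t, eps, kappa, tau, z):
    """Exponent of (dth22): t(z^2 - tau^2) + 2 t^2 kappa^2 tau^2 eps^2 / (1 + 2 t kappa^2 eps^2)."""
    a = kappa ** 2 * eps ** 2
    return t * (z ** 2 - tau ** 2) + 2 * t ** 2 * a * tau ** 2 / (1 + 2 * t * a)

def t_star(eps, kappa, tau, z):
    """Closed-form stationary point from (dth23): 2t = eps^{-2} kappa^{-2} (tau/z - 1)."""
    return (tau / z - 1) / (2 * kappa ** 2 * eps ** 2)

eps, kappa, tau, z = 0.1, 0.5, 2.0, 1.0   # requires z < tau so that t* > 0
ts = t_star(eps, kappa, tau, z)
f0 = exponent(ts, eps, kappa, tau, z)
# Closed-form extremal value: -(tau - z)^2 / (2 kappa^2 eps^2)
closed = -(tau - z) ** 2 / (2 * kappa ** 2 * eps ** 2)
```

At this point the exponent is minimized, i.e. $t^*$ gives the best Chernoff-type bound; the test below checks both the closed-form value and local minimality.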
If (\ref{eth3}) holds, then, using $\varkappa_\varepsilon^2 \asymp \varepsilon^2 k_\varepsilon^{-1/2}$, we have \begin{equation}\label{dth11a} \varepsilon^{-6}\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^4\, \theta_{\varepsilon j}^2 \asymp \varepsilon^{-6}\,\varkappa_\varepsilon^2\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^2\, \theta_{\varepsilon j}^2 \asymp k_\varepsilon^{-1/2} \,D_\varepsilon(S_\varepsilon), \end{equation} or, in the $\kappa_{\varepsilon j}$-notation, \begin{equation}\label{dth11} \varepsilon^{-6}\sum_{j=1}^\infty \kappa_{\varepsilon j}^4\, \theta_{\varepsilon j}^2 \asymp \varepsilon^{-6}\,\kappa_\varepsilon^2\,\sum_{j=1}^\infty \kappa_{\varepsilon j}^2\, \theta_{\varepsilon j}^2 = o\left(\varepsilon^{-4}\,\kappa_\varepsilon^2\,\sum_{j=1}^\infty \kappa_{\varepsilon j}^4\right). \end{equation} Using (\ref{dth11}), we can replace the remainder terms in (3.53), (3.55)--(3.57) in \cite{erm08} by the factor $(1 +o(1))$. This allows us to replace (3.43) (see Lemma 4 in \cite{erm08}) with (\ref{us2n}).
If (\ref{eth2}) holds, then we have \begin{equation}\label{dth12a} \begin{split} & ( B_\varepsilon(S_\varepsilon)-x_{\alpha_\varepsilon})\,\varepsilon^{-6}\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^4 \,\theta_{\varepsilon j}^2 \,\asymp \, \varepsilon^{-2}\varkappa_\varepsilon^2\left(\varepsilon^{-4}\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^2 \,\theta_{\varepsilon j}^2\right)^3 \\& \asymp \, k_\varepsilon^{-1/2}\,\left(\varepsilon^{-4}\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^2\, \theta_{\varepsilon j}^2\right)^3 = o(1), \end{split} \end{equation} or, in the $\kappa_{\varepsilon j}$-notation, \begin{equation}\label{dth12} \begin{split} & \varepsilon^{-6}\sum_{j=1}^\infty \kappa_{\varepsilon j}^4\, \theta_{\varepsilon j}^2 \,\asymp \, \varepsilon^{-6}\,\kappa_\varepsilon^2\,\sum_{j=1}^\infty \kappa_{\varepsilon j}^2 \, \theta_{\varepsilon j}^2 \\& \asymp \varepsilon^{-6}\,\kappa_\varepsilon^2\,\sum_{j=1}^\infty \, \kappa_{\varepsilon j}^4\asymp \varepsilon^{-6}\,k_\varepsilon\,\kappa_\varepsilon^6\, = \,o(1). \end{split} \end{equation} Using (\ref{dth12}), we can retain all estimates of the residual terms in the proof of Lemma 4 in \cite{erm08} and get the validity of these estimates for the proof of (\ref{us1n}). The proof of (\ref{us1}) and (\ref{us2}) for type I error probabilities is based on similar estimates. It suffices to put $x_{\alpha_\varepsilon} = \varepsilon^{-4}\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^2 \,\theta_{\varepsilon j}^2$ in the above-mentioned estimates. \subsection{Proof of Theorem \ref{tq1}} We verify only {\it ii.} in the definition of maxiset. The proof of {\it i.} is akin to \cite{erm21} and is omitted. Suppose the contrary. Then $S = \sum_{j=1}^\infty \tau_{j}\,\phi_j \notin \mathbb{\bar B}^s_{2\infty}$. This implies that there is a sequence $m_l$, $m_l \to \infty$ as $l \to \infty$, such that \begin{equation}\label{u5} m_l^{2s} \sum_{j=m_l}^\infty \tau_{j}^2 = C_l, \end{equation} where $C_l \to \infty$ as $l \to \infty$.
Define the sequence $\boldsymbol{\eta}_l = \{\eta_{lj}\}_{j=1}^\infty$ such that $\eta_{lj} = 0$ if $j < m_l$ and $\eta_{lj} = \tau_j$ if $j \ge m_l$. We put $\tilde S_l = \sum_{j=1}^\infty \eta_{lj}\,\phi_j$. For the alternatives $\tilde S_l$ we define a sequence $n_l$ such that \begin{equation}\label{u5b} \|\boldsymbol{\eta}_l\|^2 \asymp \varepsilon_l^{4r}= n_l^{-2r}\asymp m_l^{-2s} C_l . \end{equation} Then \begin{equation}\label{u7} n_l \asymp C_l^{-1/(2r)} m_l^{s/r} \asymp C_l^{-1/(2r)} m_l^{\frac{1}{2 - 4r- 2 \omega}}. \end{equation} Hence we have \begin{equation}\label{u10} m_l \asymp C_l^{(1-2r-\omega)/r}n_l^{2-4r-2\omega}. \end{equation} By A4, (\ref{u10}) implies \begin{equation}\label{u9} \kappa^2_{\varepsilon_l m_l} = o(\kappa^2_{\varepsilon_l}). \end{equation} Using (\ref{u1}), A2 and (\ref{u9}), we get \begin{equation}\label{u12} \begin{split}& A_{\varepsilon_l}(\boldsymbol{\eta}_l) = n_l^2 \sum_{j=1}^{\infty} \kappa_{\varepsilon_lj}^2\eta_{lj}^2 \le n_l^{2} \kappa^2_{\varepsilon_l m_l}\sum_{j=m_l}^{\infty} \eta_{lj}^2\\& \asymp n_l^{2-2r}\kappa^2_{\varepsilon_l m_l} \asymp \varepsilon_l^{-4\omega}\kappa^2_{\varepsilon_l m_l }\kappa^{-2}_{\varepsilon_l} = o(\varepsilon_l^{-4\omega}) . \end{split} \end{equation} By Theorem \ref{th1}, (\ref{u12}) implies the $\varepsilon^{-2\omega}- LD$-inconsistency of the sequence of alternatives $\tilde S_l$. \subsection{Proof of the version of Theorem \ref{tq1} for tests based on kernel estimators} We verify only {\it ii.} Suppose the contrary. Let $S = \sum_{j=-\infty}^\infty \tau_j \phi_j \notin \gamma U$ for all $\gamma >0$. Then there is a sequence $m_l$, $m_l \to \infty$ as $l \to \infty$, such that \begin{equation}\label{bb5} m_l^{2s} \sum_{|j|\ge m_l} |\tau_j|^2 = C_l, \end{equation} where $C_l \to \infty$ as $l \to \infty$.
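The mechanism behind (\ref{u5}) is easy to see numerically: for a coefficient sequence that decays too slowly to lie in $\mathbb{\bar B}^s_{2\infty}$, the normalized tails $m^{2s}\sum_{j>m}\tau_j^2$ grow without bound. A standalone sketch (the sequence $\tau_j = j^{-(s'+1/2)}$ with $s' < s$ is an assumed example, not one from the paper):

```python
# Normalized tails m^{2s} * sum_{j > m} tau_j^2 for tau_j = j^(-(s' + 1/2)) with s' < s.
# Then sum_{j > m} tau_j^2 ~ m^(-2 s') and the normalized tail grows like m^(2(s - s')).

s, s_prime = 1.0, 0.5      # s' < s, so the sequence lies outside the Besov body with smoothness s
N = 200_000                # truncation level for the infinite tail sum

def normalized_tail(m):
    tail = sum(j ** (-(2 * s_prime + 1)) for j in range(m + 1, N + 1))
    return m ** (2 * s) * tail

C10, C100 = normalized_tail(10), normalized_tail(100)
```

Here $C_{10} \approx 9.5$ and $C_{100} \approx 99$: the constants diverge and play the role of $C_l$ in (\ref{u5}).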
It is easy to see (see \cite{erm21}) that we can define a sequence $m_l$ such that \begin{equation}\label{gqq} \sum_{m_l \le |j| \le 2m_{l}} |\tau_j|^2 \asymp \varepsilon_l^{4r} = n_l^{-2r}> \delta C_l m_l^{-2s}, \end{equation} where $\delta$, $0< \delta <1/2$, does not depend on $l$. Define the sequence $\boldsymbol{\eta}_l = \{\eta_{lj}\}_{j=-\infty}^\infty$ such that $\eta_{lj} = \tau_j$ for $|j| \ge m_{l}$ and $\eta_{lj} = 0$ otherwise. Denote $$ \tilde S_l(x) = \sum_{j=-\infty}^\infty \eta_{lj} \exp\{2\pi ijx\}. $$ For the alternatives $\tilde S_l(x)$ we define a sequence $n_l$ such that $\|\tilde S_l(x)\| \asymp n_l^{-r}$. Then \begin{equation*} n_l \asymp C_l^{-1/(2r)} m_l^{s/r}. \end{equation*} We have $|\hat K(\omega)| \le \hat K(0) = 1$ for all $\omega \in \mathbb{R}^1$ and $|\hat K(\omega)| > c > 0$ for all $ |\omega| < b$. Therefore, if we put $h_l= h_{n_l} =2^{-1}b^{-1}m_l^{-1}$, then, by (\ref{gqq}), there is $C > 0$ such that for all $h> 0$ we have \begin{equation*} T_{\varepsilon_l}(\tilde S_l,h_l) = \sum_{j=-\infty}^\infty |\hat K(jh_l)\,\eta_{lj}|^2 > C \sum_{j=-\infty}^\infty |\hat K(jh)\,\eta_{lj}|^2 = C T_{\varepsilon_l}(\tilde S_l,h). \end{equation*} Therefore, we can put $h = h_l$ in the further reasoning. Using (\ref{gqq}), we get \begin{equation}\label{k101} T_{\varepsilon_l}(\tilde S_l) = \sum_{|j|>m_l}\, |\hat K(jh_l) \eta_{lj}|^2 \asymp \sum_{j=m_l}^{2m_l} |\eta_{lj}|^2 \asymp n_l^{-2r}. \end{equation} If in (\ref{u7}), (\ref{u10}) we put $k_l = [h_{\varepsilon_l}^{-1}]$ and $m_l = k_l$, then we get \begin{equation}\label{k102} h_{\varepsilon_l}^{1/2} \asymp C_l^{(2r-1+\omega)/(2r)}n_l^{2r-1+\omega}. \end{equation} Using (\ref{k101}) and (\ref{k102}), we get \begin{equation*} \varepsilon_l^{-2} T_{\varepsilon_l}(\tilde S_l)h_{\varepsilon_l}^{1/2} \asymp C_l^{-(1-2r)/2}. \end{equation*} By Theorem \ref{tk2}, this implies the inconsistency of the sequence of alternatives $\tilde S_l$. \begin{thebibliography}{99} \bibitem{bic} P.J. Bickel, M.
Rosenblatt, {\it On some global measures of deviation of density function estimates}.--- Ann. Statist., {\bf 1} (1973), 1071--1095. \bibitem{dal13} L. Comminges, A. Dalalyan, {\it Minimax testing of a composite null hypothesis defined via a quadratic functional in the model of regression}.--- Electronic Journal of Statistics, {\bf 7} (2013), 146--190. \bibitem{er90} M.S. Ermakov, {\it Minimax detection of a signal in a Gaussian white noise}.--- Theory Probab. Appl., {\bf 35} (1990), 667--679. \bibitem{erm08} M.S. Ermakov, {\it Testing nonparametric hypothesis for small type I and type II error probabilities}.--- Problems of Information Transmission, {\bf 44} (2008), 54--73. \bibitem{erm11} M.S. Ermakov, {\it Nonparametric signal detection with small type I and type II error probabilities}.--- Statistical Inference for Stochastic Processes, {\bf 14} (2011), 1--19. \bibitem{erm21} M.S. Ermakov, {\it On uniform consistency of nonparametric tests. I}.--- J. Math. Sci., {\bf 258} (2021), 802--837. \bibitem{erm22} M.S. Ermakov, {\it On uniform consistency of nonparametric tests. II}.--- arXiv:2004.07039 (2022), 1--21. \bibitem{hor} J.\,Horowitz, V.\,Spokoiny, {\it An adaptive, rate-optimal test of a parametric mean-regression model against a nonparametric alternative}.--- Econometrica, {\bf 69} (2001), 599--631. \bibitem{ib77} I.A. Ibragimov, R.Z. Has'minskii, {\it On the estimation of an infinite-dimensional parameter in Gaussian white noise}.--- Soviet Mathematics. Doklady, {\bf 18} (1977), 1307--1309. \bibitem{ib81} I.A. Ibragimov, R.Z. Has'minskii, {\it Statistical estimation: Asymptotic theory}, Springer, N.Y. (1981). \bibitem{ing02} Yu.I. Ingster, I.A. Suslina, {\it Nonparametric goodness-of-fit testing under Gaussian models}.--- Lecture Notes in Statistics {\bf 169}, Springer, N.Y. (2002). \bibitem{ker93} G. Kerkyacharian, D. Picard, {\it Density estimation by kernel and wavelet methods: optimality of Besov spaces}.--- Statist. Probab. Lett., {\bf 18} (1993), 327--336. \bibitem{ney} J.
Neyman, {\it Smooth test for goodness of fit}.--- Skand. Aktuarietidskr., {\bf 1--2} (1937), 149--199. \bibitem{os} L.V. Osipov, {\it On probabilities of large deviations for sums of independent random variables}.--- Theory Probab. Appl., {\bf 17} (1973), 309--331. \bibitem{petr} V.V. Petrov, {\it On the probabilities of large deviations for sums of independent random variables}.--- Theory Probab. Appl., {\bf 10} (1965), 287--298. \bibitem{rio} V. Rivoirard, {\it Maxisets for linear procedures}.--- Statist. Probab. Lett., {\bf 67} (2004), 267--275. \end{thebibliography} \end{document} \documentclass{zapiski} \originfo{}{}{}{2021} \usepackage{cite} \usepackage[cp1251]{inputenc}\usepackage[T2A]{fontenc}\usepackage[russian]{babel}\RequirePackage{amsthm,amsmath,amssymb} \RequirePackage[numbers]{natbib} \def \B1{\mathbf{ 1}\mspace{-4.5mu}{\mathrm I}} \newcommand{\bE}{{\mathbb E}} \newcommand{\bN}{{\mathbb N}} \newcommand{\bF}{{\mathbb F}} \newcommand{\bC}{{\mathbb C}} \newcommand{\bR}{{\mathbb R}} \newcommand{\bL}{{\mathbb L}} \newcommand{\bP}{{\mathbb P}} \newcommand{\bT}{{\mathbb T}} \newcommand{\bZ}{{\mathbb Z}} \def \mc{\mathcal} \def \msc{\mathscr} \def \R{{\rm I}\mspace{-4mu}{\rm R}} \def \1{\mathbf{1}} \def \N{{\rm I}\mspace{-3.5mu}{\rm N}} \def \Z{{\rm Z}\mspace{-6mu}{\rm Z}} \def \F{{\rm I}\mspace{-4mu}{\rm F}} \def \EE{\mathbf{ E}} \def \DD{\mathbf{ D}} \def \PP{\mathbf{ P}} \def \RR{\mathbf{ R}} \def \supp{\mathrm{supp}} \def \T{\scriptscriptstyle{T}\textstyle} \def \r{\scriptscriptstyle{r}\textstyle} \def \TT{\scriptstyle{T}\textstyle} \def \t{\scriptstyle{T+r}\textstyle} \def \s{\scriptscriptstyle{-T-r}\textstyle} \def \u{\scriptscriptstyle{T-r}\textstyle} \def \v{\scriptstyle{T-r}\textstyle} \usepackage{amssymb} \usepackage{amsthm} \usepackage{mathrsfs} \usepackage{newlfont} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{amscd} \def\leq{\leqslant} \def\le{\leqslant} \def\geq{\geqslant} \def\ge{\geqslant} \newtheorem{thm}{Theorem}[section]
\newtheorem{proposition}{Proposition}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{assumption}{Assumption}[section] \newtheorem{example}{Example}[section] \newtheorem{remark}{Remark}[section] \numberwithin{equation}{section} \begin{document} \global\long\def\zb{\boldsymbol{z}} \global\long\def\ub{\boldsymbol{u}} \global\long\def\vb{\boldsymbol{v}} \global\long\def\wb{\boldsymbol{w}} \global\long\def\tb{\boldsymbol{t}} \global\long\def\eb{\boldsymbol{e}} \global\long\def\sb{\boldsymbol{s}} \global\long\def\xib{\boldsymbol{\xi}} \global\long\def\etab{\boldsymbol{\eta}} \global\long\def\thetab{\boldsymbol{\theta}} \global\long\def\yb{\boldsymbol{y}} \global\long\def\eb{\boldsymbol{e}} \author{. . } \title[ ]{ } \footnote{This research was supported by grant 20-01-00273.} \maketitle \section{Introduction} For tests based on quadratic, $\mathbb{L}_2$-type test statistics we study the asymptotics of type I and type II error probabilities and describe the structure of the sequences of alternatives that such tests detect consistently. The minimax approach to signal detection in Gaussian white noise goes back to \cite{er90, ing02}. Uniform consistency of nonparametric tests was studied in \cite{dal13, erm21, erm22}. In \cite{erm21,erm22} necessary and sufficient conditions of uniform consistency were established for sequences of alternatives approaching the hypothesis, both in terms of the rates of their $\mathbb{L}_2$-norms and in terms of the concentration of their Fourier coefficients. Tests based on quadratic test statistics go back to the Neyman smooth tests \cite{ney}; tests based on $\mathbb{L}_2$-norms of kernel estimators were studied in \cite{bic,hor}. In the present paper we consider tests generated by quadratic forms of the observations (in particular, by $\mathbb{L}_2$-norms of kernel estimators) and study the large deviation asymptotics of their error probabilities. We observe a realization of the random process $Y_\varepsilon(t)$, $t \in [0,1]$, defined by the stochastic differential equation \begin{equation}\label{vv1} dY_\epsilon(t) = S(t)\, dt +\, \varepsilon\, d\,w(t), \quad \varepsilon > 0, \end{equation} where $S \in \mathbb{L}_2(0,1)$ and $d\,w(t)$ is Gaussian white noise. The problem is to test the hypothesis \begin{equation}\label{i29} \mathbb{H}_0 \,\,:\,\, S(t) = 0, \quad t \in [0,1], \end{equation} against the alternatives \begin{equation}\label{i30} \mathbb{H}_\epsilon \,\,:\,\, S(t) = S_\varepsilon(t), \quad t \in [0,1], \quad \varepsilon > 0, \end{equation} where the functions $S_\varepsilon$ belong to sets of alternatives $\Psi_\epsilon \subset \mathbb{L}_2(0,1)$. In what follows the sets $\Psi_\epsilon$ are generated by sequences of functions $S_\varepsilon$, $\varepsilon > 0$. The paper is organized as follows. In section \ref{sec2} we give the basic definitions. In section \ref{sec3} we study the asymptotics of type I and type II error probabilities of the tests. In section \ref{sec4} we recall the notion of maxiset: the maximal set of alternatives, in a given scale of Banach spaces embedded in $\mathbb{L}_2$, that the tests detect consistently. In section \ref{sec5} we show that the maxisets are the Besov bodies $\mathbb{B}^s_{2\infty}(P_0)$, $P_0 > 0$, of the space $\mathbb{B}^s_{2\infty}$. The parameter $s$ is determined by the rate of the $\mathbb{L}_2$-norms $\|S_\varepsilon\|$ of the alternatives. Moreover, we show that every consistently detectable sequence of alternatives $S_\varepsilon$, $\varepsilon > 0$, with a given rate of $\mathbb{L}_2$-norms $\|S_\varepsilon\|$ admits an orthogonal decomposition into a component $S_{1\varepsilon} \in \mathbb{B}^s_{2\infty}(P_0)$, whose norm has the same order as $\|S_{\varepsilon}\|$, and a remainder $S_{2\varepsilon}$. The tests detect the components $S_{1\varepsilon}$, while the sequences $S_{2\varepsilon}$ are inconsistent. In section \ref{sec6} similar results are established for tests based on $\mathbb{L}_2$-norms of kernel estimators. Section \ref{sec7} contains the proofs. In what follows, $c$ and $C$ denote generic positive constants. $\mathbf{1}_{\{A\}}$ denotes the indicator of an event $A$. $[a]$ denotes the integer part of a real number $a$.
For sequences $a_n$ and $b_n$ we write $a_n \asymp b_n$ if there are constants $c$ and $C$ such that $c < a_n/b_n < C$ for all $n$, and we write $a_n = o(b_n)$, or $a_n \ll b_n$, if $a_n/b_n \to 0$ as $n \to \infty$. We write $a_n = O(b_n)$ and $a_n = \Omega(b_n)$ if $a_n \le C b_n$ and $b_n \le Ca_n$ respectively. For a complex number $z$, we denote by $\bar z$ its complex conjugate. Denote by $$ \Phi(x) = \frac{1}{\sqrt{2\pi}}\,\int_{-\infty}^x\,\exp\{-t^2/2\}\, dt, \quad x \in \mathbb{R}^1, $$ the standard normal distribution function. Let $\phi_j$, $1 \le j < \infty$, be an orthonormal basis of $\mathbb{L}_2(0,1)$. For $P_0 > 0$ define the Besov body \begin{equation}\label{vv} \mathbb{\bar B}^s_{2\infty}(P_0) = \Bigl\{S : S = \sum_{j=1}^\infty\theta_j\phi_j,\,\,\, \sup_{\lambda>0} \lambda^{2s} \sum_{j>\lambda} \theta_j^2 \le P_0,\,\, \theta_j \in \mathbb{R}^1 \Bigr\}. \end{equation} For a given basis $\phi_j$, $1 \le j < \infty,$ the set $$ \bar{\mathbb{ B}}^s_{2\infty} = \Bigl\{ S : S = \sum_{j=1}^\infty\theta_j\phi_j,\,\,\, \sup_{\lambda>0} \lambda^{2s} \sum_{j>\lambda}\, \theta_j^2 < \infty,\,\, \theta_j \in \mathbb{R}^1 \Bigr\} $$ is defined similarly to the Besov space $\mathbb{B}^s_{2\infty}$ (see \cite{rio}). Note that the space $\mathbb{\bar B}^s_{2\infty}$ depends on the choice of the orthonormal basis $\phi_j$, $1 \le j < \infty$. If $\phi_j(t) = \exp\{2\pi i j x\}$, $x\in (0,1)$, $j = 0, \pm 1, \ldots$, we set $$ \mathbb{ B}^s_{2\infty}(P_0) = \Bigl\{S\,:\, S = \sum_{j=-\infty}^\infty \theta_j\phi_j,\,\,\, \sup_{\lambda>0} \lambda^{2s} \sum_{|j| >\lambda} |\theta_j|^2 \le P_0 \Bigr\}. $$ Since the functions $S$ are real-valued, the coefficients $\theta_j$ are complex-valued and satisfy $\theta_j = \bar\theta_{-j}$ for $-\infty < j < \infty$. The space $\mathbb{ B}^s_{2\infty}$ and the Besov bodies $\mathbb{ B}^s_{2\infty}(P_0)$, $P_0 > 0$, are defined similarly. \section{Basic definitions \label{sec2}} For a test $L_\epsilon$, $\epsilon > 0$, denote by $\alpha(L_\epsilon) = \mathbf{E}_0(L_\epsilon)$ its type I error probability and by $\beta(L_\epsilon,S)= \mathbf{E}_S(1 -L_\epsilon)$ its type II error probability for an alternative $S \in \mathbb{L}_2(0,1)$. Let $r_\varepsilon \to \infty$ as $\varepsilon \to 0$. We say that a sequence of alternatives $S_\varepsilon$, $\varepsilon > 0$, is $r_\varepsilon$-large-deviation consistent ($r_\varepsilon- LD$ consistent) if \begin{equation}\label{us3} |\log \beta(L_\epsilon,S_\varepsilon)|= \Omega(r_\varepsilon^2) \end{equation} for every family of tests $L_\varepsilon$ with $\alpha(L_\varepsilon) = \alpha_\varepsilon$ such that $r_\varepsilon^{-2} \log\alpha(L_\varepsilon) \to 0$ as $\varepsilon \to 0$. In this case we also say that the tests $L_\varepsilon$ are $r_\varepsilon- LD$ consistent for the alternatives $S_\varepsilon$. If (\ref{us3}) does not hold, the sequence of alternatives is called $r_\varepsilon- LD$ inconsistent.
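The defining quantity of the Besov body (\ref{vv}), $\sup_{\lambda>0}\lambda^{2s}\sum_{j>\lambda}\theta_j^2$, is easy to approximate numerically for a concrete coefficient sequence. A standalone sketch (the sequence $\theta_j = j^{-(s+1/2+\delta)}$ with a small $\delta > 0$ is an assumed example for which the supremum is finite):

```python
# Approximate sup over integer lambda of lambda^{2s} * sum_{j > lambda} theta_j^2
# for theta_j = j^(-(s + 1/2 + delta)).  The tail sum is ~ lambda^(-2s - 2 delta),
# so the normalized tails stay bounded and the sequence lies in a Besov body.

s, delta = 0.5, 0.25
N = 100_000                     # truncation level for the infinite sums

theta2 = [0.0] + [j ** (-2 * (s + 0.5 + delta)) for j in range(1, N + 1)]  # theta_j^2, 1-indexed

# suffix sums: tail[m] = sum_{j >= m} theta_j^2
tail = [0.0] * (N + 2)
for j in range(N, 0, -1):
    tail[j] = tail[j + 1] + theta2[j]

# sup over m of m^{2s} * sum_{j > m} theta_j^2
sup_val = max(m ** (2 * s) * tail[m + 1] for m in range(1, 10_000))
```

For these parameters the supremum is attained at $m = 1$ and is about $0.34$, so this sequence lies in $\mathbb{\bar B}^s_{2\infty}(P_0)$ for any $P_0 \ge 0.35$.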
\section{Asymptotics of type I and type II error probabilities \label{sec3}} Expanding the observations in the orthonormal basis $\phi_j$, $1 \le j < \infty$, we reduce the model (\ref{vv1}) to the sequence model (see \cite{ib81}) \begin{equation}\label{q2} y_{\varepsilon j} = \theta_j +\varepsilon \xi_j, \quad 1 \le j < \infty, \end{equation} where $$y_{\varepsilon j} = \int_0^1 \phi_j\, dY_\varepsilon(t), \quad \xi_j = \int_0^1\,\phi_j\,dw(t) \quad \mbox{ and } \quad \theta_j = \int_0^1 S\,\phi_j\,dt.$$ Set $\yb_\varepsilon = \{y_{\varepsilon j}\}_{j=1}^\infty$ and $\thetab = \{\theta_j\}_{j=1}^\infty$. For alternatives $S_\varepsilon$ we write $\theta_{\varepsilon j} = \int_0^1 S_\varepsilon\,\phi_j\,dt$. The vector $\thetab$ is treated as an element of the Hilbert space $\mathbb{H}$ with the norm $\|\thetab\| = \Bigl(\sum_{j=1}^\infty \theta_j^2\Bigr)^{1/2}$. We use the same notation $\| \cdot \|$ for the norms in $\mathbb{L}_2$ and in $\mathbb{H}$. The test statistics are defined as follows. Set \begin{equation*} T_\varepsilon(Y_\varepsilon) = \varepsilon^{-2}\sum_{j=1}^\infty \varkappa_{\varepsilon j}^2 y_{\varepsilon j}^2 - \rho^2_\varepsilon, \end{equation*} where $\rho_\varepsilon^2 = \sum_{j=1}^\infty \varkappa_{\varepsilon j}^2$, and define the tests \begin{equation*} L_\varepsilon= \mathbf{1}_{\{ A^{-1/2}_\varepsilon T_\varepsilon > x_{\alpha_\varepsilon}\}}. \end{equation*} We suppose that the coefficients $\varkappa_{\varepsilon j}^2$ satisfy the following conditions. \noindent{\bf A1.} For each $\varepsilon > 0$ the coefficients $\varkappa^2_{\varepsilon j}$ are nonincreasing in $j$. \noindent{\bf A2.} We have \begin{equation}\label{q5} C_1 < A_\varepsilon = \varepsilon^{-4}\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^4 < C_2. \end{equation} Set $\kappa_\varepsilon^2=\kappa^2_{\varepsilon k_\varepsilon}$, where $k_\varepsilon = \sup\Bigl\{k: \sum_{j < k} \varkappa^2_{\varepsilon j} \le \frac{1}{2} \rho_\varepsilon^2 \Bigr\}$. \noindent{\bf A3.} There are $C_1$ and $\lambda >1$ such that for all $\delta > 0$ and all $\varepsilon$, with $k_{\varepsilon(\delta)} = [(1+\delta)k_\varepsilon]$, we have $ \varkappa^2_{\varepsilon,k_{\varepsilon(\delta)}} < C_1(1 +\delta)^{-\lambda}\varkappa_\varepsilon^2. $ \noindent{\bf A4.} $\varkappa_{\varepsilon 1}^2 \asymp \varkappa_\varepsilon^2$ as $\varepsilon \to 0$. There are $c>1$ and $C$ such that $\varkappa_{\varepsilon,[ck_\varepsilon]}^2 \ge C\varkappa_\varepsilon^2$ for all $\varepsilon > 0$.
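For readers who wish to experiment, the following sketch simulates the sequence model (\ref{q2}) and the statistic $T_\varepsilon$ with flat weights $\varkappa^2_{\varepsilon j} = \varepsilon^2 k_\varepsilon^{-1/2}$ for $j \le k_\varepsilon$ (a toy choice for illustration only, giving $A_\varepsilon = 1$; it is not meant to satisfy all of A1--A4, and the alternative $\theta_j$ is arbitrary). It checks empirically that $\mathbf{E}_0 T_\varepsilon = 0$ and $\mathbf{E}_{S} T_\varepsilon = \varepsilon^2 D_\varepsilon(S)$, where $D_\varepsilon(S) = \varepsilon^{-4}\sum_j \varkappa^2_{\varepsilon j}\theta_j^2$.

```python
import random

random.seed(0)

eps = 0.1                                   # noise level
k = 50                                      # k_eps: number of active coefficients
vk2 = [eps ** 2 / k ** 0.5] * k             # flat weights varkappa^2_{eps j}; then A_eps = 1
rho2 = sum(vk2)
theta = [0.3 / (j + 1) for j in range(k)]   # an arbitrary alternative (illustration only)

def T(y):
    """T_eps = eps^{-2} sum_j vk2_j y_j^2 - rho^2."""
    return sum(v * yj * yj for v, yj in zip(vk2, y)) / eps ** 2 - rho2

def mean_T(signal, nsim=4000):
    """Monte Carlo estimate of E T_eps when y_j = signal_j + eps * xi_j."""
    total = 0.0
    for _ in range(nsim):
        y = [t + eps * random.gauss(0.0, 1.0) for t in signal]
        total += T(y)
    return total / nsim

mean_H0 = mean_T([0.0] * k)                 # should be close to 0
mean_H1 = mean_T(theta)                     # should be close to eps^2 * D_eps
D = sum(v * t * t for v, t in zip(vk2, theta)) / eps ** 4   # D_eps(S)
```

Under the hypothesis the simulated mean is close to zero, and under the alternative it is close to $\varepsilon^2 D_\varepsilon(S) \approx 0.02$, in line with the noncentrality used in Theorems \ref{th1} and \ref{th2}.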
Set $D_\varepsilon(S_\varepsilon) = \varepsilon^{-4} \sum_{j=1}^\infty \varkappa_{\varepsilon j}^2 \,\theta_{\varepsilon j}^2 = \varepsilon^{-4} \left(T_\varepsilon(S_\varepsilon) + \varepsilon^2 \rho^2_\varepsilon\right)$ and \begin{equation*} B_\varepsilon(S_\varepsilon) = \frac{D_\varepsilon(S_\varepsilon)}{(2A_\varepsilon)^{1/2}}. \end{equation*} \begin{thm}\label{th1} Assume A1--A4. Let $x_{\alpha_\varepsilon} \to \infty $ and $x_{\alpha_\varepsilon} = o(k_{\varepsilon}^{1/6})$ as $\varepsilon \to 0$. Then \begin{equation}\label{us1} \alpha(L_\varepsilon) = (1 - \Phi(x_{\alpha_\varepsilon}))(1 + o(1)). \end{equation} Let $\alpha_\varepsilon \to 0 $, $x_{\alpha_\varepsilon} = O(B_\varepsilon(S_\varepsilon))$ and \begin{equation}\label{eth2} D_\varepsilon(S_\varepsilon) \to \infty, \quad D_\varepsilon(S_\varepsilon) = o(k_\varepsilon^{1/6}), \quad B_\varepsilon(S_\varepsilon) - x_{\alpha_\varepsilon} = \Omega(B_\varepsilon(S_\varepsilon)) \end{equation} as $\varepsilon \to 0$. Then \begin{equation}\label{us1n} \beta(L_\varepsilon,S_\varepsilon) = \Phi (x_{\alpha_\varepsilon}- B_\varepsilon(S_\varepsilon))(1 + o(1)). \end{equation} Let $x_{\alpha_\varepsilon} \to \infty $ and $x_{\alpha_\varepsilon} = o(k_{\varepsilon}^{1/2})$ as $\varepsilon \to 0$. Then \begin{equation}\label{us2} \sqrt{2|\log\alpha(L_\varepsilon)|} =x_{\alpha_\varepsilon}(1 +o(1)), \quad \alpha_\varepsilon \to 0 \end{equation} as $\varepsilon \to 0$. Let $\alpha_\varepsilon \to 0 $, $x_{\alpha_\varepsilon} = O(B_\varepsilon(S_\varepsilon))$ and \begin{equation} \label{eth3} D_\varepsilon(S_\varepsilon) \to \infty,\quad D_\varepsilon(S_\varepsilon) = o(k_\varepsilon^{1/2}), \quad B_\varepsilon(S_\varepsilon) - x_{\alpha_\varepsilon} = \Omega(B_\varepsilon(S_\varepsilon)) \end{equation} as $\varepsilon \to 0$. Then \begin{equation} \label{us2n} \sqrt{2|\log\beta(L_\varepsilon,S_\varepsilon)|} = ( B_\epsilon(S_\varepsilon) - x_{\alpha_\varepsilon})(1 + o(1)).
\end{equation} Let $\alpha_\varepsilon \to 0 $, $x_{\alpha_\varepsilon} = O(B_\varepsilon(S_\varepsilon))$ and \begin{equation*} r_\varepsilon \to \infty, \quad \frac{B_\varepsilon(S_\varepsilon)}{r_\varepsilon} \to \infty, \quad B_\varepsilon(S_\varepsilon) - x_{\alpha_\varepsilon} = \Omega(B_\varepsilon(S_\varepsilon)) \end{equation*} as $\varepsilon \to 0$. Then $r_\varepsilon^{-2} |\log \beta(L_\varepsilon, S_\varepsilon)| \to \infty$ as $\varepsilon \to 0$.\end{thm} Set \begin{equation*} \tau_\varepsilon^2= \varepsilon^4 D_\varepsilon(S_\varepsilon)=\sum_{j=1}^\infty \varkappa_{\varepsilon j}^2 \,\theta_{\varepsilon j}^2 = T_\varepsilon(S_\varepsilon) + \varepsilon^2\rho^2_\varepsilon. \end{equation*} \begin{thm}\label{th2} Assume $A1-A4$. Let $x_{\alpha_\varepsilon} \to \infty $ and $k_\varepsilon = \Omega(x^2_{\alpha_\varepsilon})$ as $\varepsilon \to 0$. Then \begin{equation}\label{eth20} \log P_{0} (T_\varepsilon(Y_\varepsilon) > x_{\alpha_\varepsilon}) \le - \frac{1}{2} \varkappa_{\varepsilon 1}^{-2} x_{\alpha_\varepsilon} (1 + o(1)). \end{equation} If $k_\varepsilon = \Omega(D_\varepsilon^2(S_\varepsilon))$ and $\varepsilon(\tau_\varepsilon - x_{\alpha_\varepsilon}^{1/2}) \to \infty$ as $\varepsilon \to 0$, then \begin{equation}\label{eth21} \log P_{S_\varepsilon} (T_\varepsilon (Y_\varepsilon) < x_{\alpha_\varepsilon}) \le - \frac{1}{2} \varkappa_{\varepsilon 1}^{-2} (\tau_{\varepsilon}-x_{\alpha_\varepsilon}^{1/2})^2(1 +o(1)). \end{equation} The bound (\ref{eth21}) is attained for the alternatives $S_{\varepsilon}= \{\theta_{\varepsilon j}\}_{j=1}^\infty$ with $\theta_{\varepsilon 1} = \varkappa_{\varepsilon 1}^{-1} \tau_\varepsilon$ and $\theta_{\varepsilon j} =0$ for $j >1$. \end{thm} Theorems \ref{th1} and \ref{th2} are stated in terms of the coefficients $\varkappa^2_{\varepsilon j}$, whereas in \cite{erm08} the normalized coefficients $\kappa^2_{\varepsilon j}$ were used. Theorems \ref{th1} and \ref{th2} are versions of Theorems 2 and 3 in \cite{erm08}.
The coefficients $\varkappa^2_{\varepsilon j}$ and the coefficients $\kappa^2_{\varepsilon j}$ of Theorems 2 and 3 in \cite{erm08} are related by the normalization \begin{equation}\label{norm} \kappa^2_{\varepsilon j} = \frac{\sum_{j=1}^\infty \varkappa^2_{\varepsilon j} \theta^2_{\varepsilon j}}{\sum_{j=1}^\infty \varkappa^4_{\varepsilon j}} \,\varkappa^2_{\varepsilon j}, \quad 1 \le j < \infty. \end{equation} Under (\ref{q5}) the test statistics $T_\varepsilon(Y_\varepsilon)$ are normalized similarly to \cite{er90, erm21}. \section{Maxisets \label{sec4}} Following \cite{erm21}, we recall the notion of maxiset. We study $\varepsilon^{-2\omega}- LD$ consistency, $0 < \omega \le 1$, of sequences of alternatives. As in \cite{erm21}, we consider sequences of alternatives $S_\varepsilon$ whose $\mathbb{L}_2$-norms have the order $\varepsilon^{2r}$, $ 0 < r < 1/2$, as $\varepsilon \to 0$. Let $\Xi$, $\Xi \subset \mathbb{L}_2(0,1)$, be a Banach space with the norm $\|\cdot\|_\Xi$. Let $ U=\{S:\, \|S\|_\Xi \le 1,\, S \in \Xi\}$ be the unit ball in $\Xi$. We suppose that $U$ is compact in $\mathbb{L}_2(0,1)$ (see \cite{ib77, erm21}). Define subspaces $\Pi_k$, $1 \le k < \infty$, as follows. Let $d_1= \max\{\|S\|,\, S \in U\}$ and let $e_1 \in U$ be such that $\|e_1\|= d_1.$ Denote by $\Pi_1$ the subspace of $\mathbb{L}_2(0,1)$ generated by $e_1$. For $i=2,3,\ldots$ let $d_i = \max\{\rho(S,\Pi_{i-1}), S \in U \}$, where $\rho(S,\Pi_{i-1})=\min\{\|S-g\|, g \in \Pi_{i-1} \}$. Let $e_i$, $e_i \in U$, be such that $\rho(e_i,\Pi_{i-1}) = d_i$, and denote by $\Pi_i$ the subspace generated by $e_1,\ldots,e_i$. For $S \in \mathbb{L}_2(0,1)$ denote by $S_{\Pi_i}$ the projection of $S$ onto $\Pi_i$ and set $\tilde S_i = S - S_{\Pi_i}$. We call $U$ a {\sl maxiset} for the test statistics $T_\varepsilon$ and $\Xi$ a {\sl maxispace} if the following two conditions hold: \vskip 0.2cm {\sl i.} every sequence of alternatives $S_\varepsilon \in U$ with $ \|S_{\varepsilon}\| \asymp \varepsilon^{2r}$ as $\varepsilon \to 0$ is $\varepsilon^{-2\omega}- LD$ consistent; \vskip 0.2cm {\sl ii.} for every $S \in \mathbb{L}_2(0,1)$, $S \notin \Xi$, there are sequences $i_n$ and $j_{i_n}$, $i_n \to \infty$, $j_{i_n} \to \infty$ as $n \to \infty$, such that $c j_{i_n}^{-r}<\| \tilde S_{i_n}\| < C j_{i_n}^{-r}$ and the sequence $\tilde S_{i_n}$ is $j_{i_n}^{\omega}- LD$ inconsistent for the noise levels $\varepsilon_{i_n} = j_{i_n}^{-1/2}$. \section{$\varepsilon^{-2\omega}- LD$ consistency of sequences of alternatives $S_\varepsilon$ with given rates of $\mathbb{L}_2$-norms \label{sec5}} We study $\varepsilon^{-2\omega}- LD$ consistency of sequences of alternatives $S_\varepsilon$, $\varepsilon>0$, with given rates of their $\mathbb{L}_2$-norms, i.e. $\|S_\varepsilon\| \asymp \varepsilon^{2r}$. Let $0< r <1/2$ and $0 < 2\omega < 1 - 2r$. Set $r_\varepsilon \asymp \varepsilon^{-\omega}$ and $k_\varepsilon \asymp \varepsilon^{-4+ 8r +4\omega}$.
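The exponent calibration used here and below satisfies $s = r/(2-4r-2\omega)$ and, inversely, $r = (2-2\omega)s/(1+4s)$, and the exponent of $k_\varepsilon \asymp \varepsilon^{-4+8r+4\omega}$ can then be rewritten as $-2r/s$. A quick sanity check of this algebra in exact rational arithmetic (the sample values of $r$ and $\omega$ are arbitrary admissible choices):

```python
from fractions import Fraction

def s_of(r, w):
    """s = r / (2 - 4r - 2w)."""
    return r / (2 - 4 * r - 2 * w)

def r_of(s, w):
    """Inverse relation r = (2 - 2w) s / (1 + 4s)."""
    return (2 - 2 * w) * s / (1 + 4 * s)

r, w = Fraction(1, 4), Fraction(1, 8)   # admissible: 0 < r < 1/2, 0 < 2w < 1 - 2r
s = s_of(r, w)                          # here s = 1/3

assert r_of(s, w) == r                  # the two relations are mutually inverse
# exponent of k_eps:  -4 + 8r + 4w  ==  -2 r / s
assert -4 + 8 * r + 4 * w == -2 * r / s
```

Because the arithmetic is exact, the assertions verify the identities themselves, not a floating-point approximation.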
Then conditions A1--A4 imply \begin{equation}\label{u1} \kappa_\varepsilon^2 \asymp \varepsilon^{4-4r-4\omega}, \quad \bar A_\varepsilon\doteq\varepsilon^{-4}\,\sum_{j=1}^\infty \kappa_{\varepsilon j}^4 \asymp \varepsilon^{-4\omega} \end{equation} as $\varepsilon \to 0$. Theorems 4.1 and 4.4--4.10 in \cite{erm21} were stated in terms of the quantities $\kappa_\varepsilon^2, \bar A_\varepsilon$ and $k_\varepsilon$. The definitions of $\kappa_\varepsilon^2, \bar A_\varepsilon, k_\varepsilon$ here coincide with those in \cite{erm21}. The proof of part {\it ii.} of Theorem \ref{tq1} is given in section \ref{sec7}. \subsection{Sufficient conditions of $\varepsilon^{-2\omega}- LD$ consistency} Let $S= S_\varepsilon = \sum_{j=1}^\infty \theta_{\varepsilon j} \phi_j$. \begin{thm}\label{tq3} Assume {\rm A1-A4}. A sequence of alternatives $S_\varepsilon$ with $\|S_\varepsilon\| \asymp \varepsilon^{2r}$ is $\varepsilon^{-2\omega}- LD$ consistent if there are constants $c_1$, $c_2$ and $\varepsilon_0$ such that \begin{equation}\label{con2} \sum_{|j| < c_2k_\varepsilon} |\theta_{\varepsilon j}|^2 > c_1 \varepsilon^{4r} \end{equation} for all $\varepsilon < \varepsilon_0$. \end{thm} Theorems \ref{tq3} and \ref{tq6} are stated in a form covering also the setup of section \ref{sec6}. In section \ref{sec6} the index $j$ runs over all integers and the coefficients $\theta_{\varepsilon j}$ are complex-valued. For the present setup one should replace $|j|$ with $j$ and $|\theta_{\varepsilon j}|$ with $\theta_{\varepsilon j}$ in (\ref{con2}) and (\ref{con19}). \subsection{Maxisets. Structure of consistent sequences of alternatives} Set $s = \frac{r}{2 -4r-2\omega}$. Then $r = \frac{(2-2\omega)s}{1 + 4s}$. \begin{thm}\label{tq1} Assume {\rm A1-A4}. The Besov bodies $\mathbb{\bar B}^s_{2\infty}(P_0)$, $P_0 > 0$, are maxisets for the test statistics $T_\varepsilon(Y_\varepsilon)$. \end{thm} \begin{thm}\label{tq7} Assume {\rm A1-A4}. Let a sequence of alternatives $S_\varepsilon$ with $ \|S_\varepsilon\| \asymp \varepsilon^{2r} $ be $\varepsilon^{-2\omega}- LD$ consistent. Then there are $P_0 > 0$ and functions $S_{1\varepsilon} \in \mathbb{\bar B}^s_{2\infty}(P_0)$ with $\|S_{1\varepsilon}\| \asymp \epsilon^{2r}$ such that $S_{1\varepsilon}$ is orthogonal to $S_{\varepsilon} - S_{1\varepsilon}$, i.e. \begin{equation} \label{ma1} \| S_\varepsilon\|^2 = \| S_{1\varepsilon}\|^2 + \|S_\varepsilon - S_{1\varepsilon}\|^2. \end{equation} \end{thm} \begin{thm}\label{tq11} Assume {\rm A1-A4}.
Then, for every $\delta > 0$ and every $\varepsilon^{-2\omega}- LD$ consistent sequence of alternatives $S_\varepsilon$ with $\|S_\varepsilon\| \asymp \varepsilon^{2r}$, there are $P_0 > 0$ and functions $S_{1\varepsilon} \in \mathbb{\bar B}^s_{2\infty}(P_0)$ with $ \|S_{1\varepsilon}\| \asymp \varepsilon^{2r} $ such that the following holds: \vskip 0.25cm $S_{1\varepsilon}$ is orthogonal to $S_\varepsilon - S_{1\varepsilon}$, and \vskip 0.25cm for every family of tests $L_\varepsilon$ satisfying (\ref{eth3}) there is $\varepsilon_0 =\varepsilon_0(\delta) > 0$ such that for $\varepsilon < \varepsilon_0$ \begin{equation} \label{uuu} |\log \beta(L_\varepsilon,S_\varepsilon) - \log \beta(L_\varepsilon,S_{1\varepsilon})| \le \delta |\log \beta(L_\varepsilon,S_\varepsilon)|, \end{equation} and \begin{equation} \label{uu1} |\log \beta(L_\varepsilon,S_\varepsilon - S_{1\varepsilon}) | \le \delta |\log \beta(L_\varepsilon,S_\varepsilon)|. \end{equation} \end{thm} \subsection{Fine structure of consistent sequences of alternatives} We say that an $\varepsilon^{-2\omega}- LD$ consistent sequence of alternatives $S_\varepsilon$ with $\|S_\varepsilon\| \asymp \varepsilon^{2r}$ is {\sl purely $\varepsilon^{-2\omega}- LD$ consistent} if there is no sequence of functions $S_{1\varepsilon_i}$, $\varepsilon_i \to 0$ as $i \to \infty$, generating an $\varepsilon^{-2\omega}- LD$ inconsistent sequence of alternatives, such that $S_{1\varepsilon_i}$ is orthogonal to $S_{\varepsilon_i}- S_{1\varepsilon_i}$ and $\|S_{1\varepsilon_i}\| > c_1\varepsilon_i^{2r}$. \begin{thm} \label{tq5} Assume {\rm A1-A4}. Let the sequence of alternatives $S_\varepsilon$ with $\|S_\varepsilon\| \asymp \varepsilon^{2r}$ be purely $\varepsilon^{-2\omega}- LD$ consistent. Then, for every $\varepsilon^{-2\omega}- LD$ inconsistent sequence $S_{1\varepsilon}$ with $\|S_{1\varepsilon}\| \asymp \varepsilon^{2r}$, we have \begin{equation*} \lim_{\varepsilon \to 0} \frac{\log\beta(L_\varepsilon,S_\varepsilon) - \log\beta(L_\varepsilon,S_\varepsilon + S_{1\varepsilon})} {\log\beta(L_\varepsilon,S_\varepsilon)} = 0. \end{equation*} \end{thm} \begin{thm}\label{tq6} Assume {\rm A1-A4}. If a sequence of alternatives $S_\varepsilon$ with $\|S_\varepsilon\| \asymp \varepsilon^{2r}$ is purely $\varepsilon^{-2\omega}- LD$ consistent, then for every $\delta >0$ there is $C_1= C_1(\delta)$ such that \begin{equation}\label{con19} \sum_{|j| > C_1k_\varepsilon} |\theta_{\varepsilon j}|^2 \le \delta \varepsilon^{4r} \end{equation} for all $0<\varepsilon< \varepsilon_0(\delta)$. \end{thm} \begin{thm}\label{tq12} Assume {\rm A1-A4}.
Let a sequence of alternatives $S_\varepsilon$ with $ \|S_\varepsilon\| \asymp \varepsilon^{2r} $ be purely $\varepsilon^{-2\omega}- LD$ consistent. Then, for every $\delta > 0$, there are a Besov body $ \mathbb{\bar B}^s_{2\infty}(P_0)$ and functions $S_{1\varepsilon} \in \mathbb{\bar B}^s_{2\infty}(P_0)$ such that $\|S_\varepsilon - S_{1\varepsilon}\| \le \delta \varepsilon^{2r}$ for all $\varepsilon< \varepsilon_0(\delta)$. \end{thm} \begin{thm}\label{tq8} Assume {\rm A1-A4}. Let a sequence of alternatives $S_\varepsilon$ with $\|S_\varepsilon\| \asymp \varepsilon^{2r}$ be purely $\varepsilon^{-2\omega}- LD$ consistent. Then, for every $\varepsilon^{-2\omega}- LD$ inconsistent sequence $S_{1\varepsilon_i}$ with $ \|S_{1\varepsilon_i}\| \asymp \varepsilon_i^{2r}$, $\varepsilon_i \to 0$ as $i \to \infty$, we have \begin{equation}\label{ma2} \|S_{\varepsilon_i} + S_{1\varepsilon_i}\|^2 = \|S_{\varepsilon_i} \|^2 + \| S_{1\varepsilon_i}\|^2 + o(\varepsilon_i^{4r}). \end{equation} \end{thm} \begin{remark}\label{rem1}{\rm Let $\varkappa_{\varepsilon j}^2 > 0$ for $j \le l_\varepsilon$ and $\varkappa^2_{\varepsilon j} = 0$ for $j > l_\varepsilon$, where $l_\varepsilon \asymp \varepsilon^{-4+ 8r +4\omega}$ as $\varepsilon \to 0$. Then Theorems \ref{th1}, \ref{th2} and \ref{tq3} - \ref{tq8} remain valid if A4 is replaced with the following condition: \vskip 0.25cm \noindent{\bf A5.} There are $c$, $0 < c <1$, and $c_1$ such that $\varkappa^2_{\varepsilon,[cl_\varepsilon]} \ge c_1 \varkappa^2_{\varepsilon 1}$ for all $\varepsilon > 0$. \vskip 0.25cm In this case we set $\varkappa^2_\varepsilon = \varkappa_{\varepsilon 1}^2$ and $k_\varepsilon = l_\varepsilon$. Theorem \ref{tq6} holds in a modified form, with $C_1(\delta) < 1$. The proofs of Theorems \ref{th1}, \ref{th2} and \ref{tq3} - \ref{tq8} remain unchanged.} \end{remark} \section{Tests based on $\mathbb{L}_2$-norms of kernel estimators \label{sec6}} We consider the same hypothesis testing problem (\ref{i29}), (\ref{i30}). In this section it is convenient to treat the signals $S$ as elements of the space $\mathbb{L}_2^{per}(\mathbb{R}^1)$ of 1-periodic functions with $S \in \mathbb{L}_2(0,1)$. The Gaussian white noise is extended periodically to $\mathbb{R}^1$, with $w(t+j) = w(t)$ for all integer $j$ and $t \in [0,1)$ (see \cite{erm21}). Define the kernel estimator \begin{equation}\label{yy} \hat{S}_\varepsilon(t) = \frac{1}{h_\varepsilon} \int_{-\infty}^{\infty} K\Bigl(\frac{t-u}{h_\varepsilon}\Bigr)\, d\,Y_\varepsilon(u), \quad t \in (0,1), \end{equation} where $h_\varepsilon> 0$ is a bandwidth, $h_\varepsilon \to 0$ as $\varepsilon \to 0$. We suppose that the kernel $K$ is a bounded function supported on $[-1/2,1/2]$ with $\int_{-\infty}^\infty K(t)\,dt = 1$. Set $K_h(t) = \frac{1}{h} K\Bigl(\frac{t}{h}\Bigr)$ for $t \in \mathbb{R}^1$ and $h >0$.
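The variance constant $\gamma^2 = 2\int\bigl(\int K(t-s)K(s)\,ds\bigr)^2\,dt$ appearing in the next display can be computed numerically for a concrete kernel. A standalone sketch for the box kernel $K = \mathbf{1}_{[-1/2,1/2]}$ (an assumed example kernel, not prescribed by the paper; for it $K\ast K$ is the triangle $\max(0,1-|t|)$, so $\gamma^2 = 2\int_{-1}^1(1-|t|)^2\,dt = 4/3$):

```python
# gamma^2 = 2 * int ((K*K)(t))^2 dt, computed by midpoint rules for the box kernel.

def K(t):
    """Box kernel on [-1/2, 1/2] with integral 1."""
    return 1.0 if -0.5 <= t <= 0.5 else 0.0

def conv(t, n=2000):
    """(K*K)(t) = int K(t-s) K(s) ds, midpoint rule over s in [-1/2, 1/2]."""
    h = 1.0 / n
    return sum(K(t - (-0.5 + (i + 0.5) * h)) * h for i in range(n))

def gamma2(n=2000):
    """2 * int_{-1}^{1} conv(t)^2 dt, midpoint rule over t."""
    h = 2.0 / n
    return 2.0 * sum(conv(-1.0 + (i + 0.5) * h) ** 2 * h for i in range(n))

g2 = gamma2()   # should be close to 4/3
```

Computing $\gamma^2$ from $K$ directly, rather than from the closed-form convolution, mirrors how the constant would be obtained for a kernel without an explicit $K\ast K$.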
$$T_{1\varepsilon}(Y_\varepsilon) = T_{1\varepsilon h_\varepsilon}(Y_\varepsilon) =\varepsilon^{-2}h_\varepsilon^{1/2}\gamma^{-1} (\|\hat S_{\varepsilon}\|^2- \varepsilon^2 h_\varepsilon^{-1}\|K\|^2),$$ where $$ \gamma^2 = 2 \int_{-\infty}^\infty \Bigl(\int_{-\infty}^\infty K(t-s)K(s) ds\Bigr)^2\,dt. $$ The statistic $\varepsilon^2\, h_{\varepsilon}^{-1/2}\, \gamma \,T_{1\varepsilon}(Y_\varepsilon)$ estimates the functional $$ T_{\varepsilon}(S_\varepsilon) =\int_0^1\Bigl(\frac{1}{h_\varepsilon}\int K\Bigl(\frac{t-s}{h_\varepsilon}\Bigr)S_\varepsilon(s)\, ds\Bigr)^2 dt. $$ Denote $L_\varepsilon = \mathbf{1}_{\{T_{1\varepsilon}(Y_\varepsilon) > x_{\alpha_\varepsilon}\}}$, where $x_{\alpha_\varepsilon}$ is defined by the equation $\alpha(L_\varepsilon) = \alpha_\varepsilon$. The problem admits an interpretation in the framework of the sequence model. Suppose we observe a realization of the random process $Y_\varepsilon(t)$, $t \in [0,1]$. For $-\infty < j < \infty$, denote $$ \hat K(jh) = \int_{-1}^1 \exp\{2\pi ijt\}\,K_h(t)\, dt,\quad h > 0, $$ $$ y_{\varepsilon j} = \int_0^1 \exp\{2\pi ijt\}\, dY_\varepsilon(t), \quad \xi_j = \int_0^1 \exp\{2\pi ijt\}\, dw(t), $$ $$ \theta_{\varepsilon j} = \int_0^1 \exp\{2\pi ijt\}\, S_\varepsilon(t)\, dt. $$ In this notation we can write the estimator in the following form: \begin{equation}\label{au1} \hat \theta_{\varepsilon j} = \hat K(jh_\varepsilon)\, y_{\varepsilon j} = \hat K(jh_\varepsilon)\, \theta_{\varepsilon j} + \varepsilon\, \hat K(jh_\varepsilon)\, \xi_j, \quad -\infty < j < \infty. \end{equation} Then the test statistic $T_\varepsilon$ admits the following representation: \begin{equation}\label{au111} T_\varepsilon(Y_\varepsilon) = \varepsilon^{-2} h_\varepsilon^{1/2}\gamma^{-1} \Bigl(\sum_{j=-\infty}^\infty |\hat \theta_{\varepsilon j}|^2 - \varepsilon^2 \sum_{j=-\infty}^\infty |\hat K(jh_\varepsilon)|^2\Bigr). \end{equation} If we put $|\hat K(jh_\varepsilon)|^2 = \kappa^2_{\varepsilon j}$, we see that the test statistics $T_\varepsilon(Y_\varepsilon)$ of this section and of sections \ref{sec3} and \ref{sec5} are almost indistinguishable. The main difference of the setup of section \ref{sec6} is the presence of heterogeneous Gaussian white noise in the model. In what follows we additionally suppose that the function $\hat K(\omega)$, $\omega \in \mathbb{R}^1$, is real-valued. Results similar to those of section \ref{sec3} were obtained in \cite{erm11}. \begin{thm}\label{tk3} Let $\|S_\varepsilon\| \asymp \varepsilon^{2r}$ and $h_\varepsilon \asymp \varepsilon^{4-8r-4\omega}$, $ 0 < 2 \omega < 1 - 2r$. Then Theorems \ref{tq3} -\ref{tq8} are valid for this setup with the only difference that $ \mathbb{\bar B}^s_{2\infty}(P_0)$ is replaced with $\mathbb{B}^s_{2\infty}(P_0)$.
\end{thm} \ref{th1} (cf. Theorems 2.1 and 2.2 in \cite{erm11}). \begin{thm}\label{tk2} $h_\varepsilon \to 0$ $\varepsilon \to 0$. $\alpha_\varepsilon \doteq \alpha(L_\varepsilon) = o(1)$. \begin{equation*} 1 \ll \sqrt{2|\log \alpha_\varepsilon|} \ll h_\varepsilon^{-1/6}, \end{equation*} $x_{\alpha_\varepsilon}$ $\alpha_\varepsilon= (1 - \Phi(x_{\alpha_\varepsilon}))(1 + o(1))$ $\varepsilon \to 0$. \begin{equation}\label{metka} \varepsilon^{-2} h_\varepsilon^{1/2} T_\varepsilon(S_\varepsilon) - \sqrt{2|\log \alpha_\varepsilon|} > c \varepsilon^{-2} h_\varepsilon^{1/2} T_\varepsilon(S_\varepsilon) \end{equation} \begin{equation*} \varepsilon^2 h_\varepsilon^{-1/2} \ll T_\varepsilon(S_\varepsilon) \ll \varepsilon^2 h_\varepsilon^{-2/3} = o(1), \end{equation*} $\varepsilon \to 0$, \begin{equation}\label{33} \beta(L_\varepsilon,S_\varepsilon) = \Phi(x_{\alpha_\varepsilon} - \gamma^{-1} \varepsilon^{-2} h_\varepsilon^{1/2} T_{\varepsilon}(S_\varepsilon))(1 + o(1)). \end{equation} \begin{equation*} 1 \ll \sqrt{2|\log \alpha_\varepsilon|} \ll h_\varepsilon^{-1/2}, \end{equation*} $x_{\alpha_\varepsilon} = \sqrt{2|\log\alpha_\varepsilon|}(1 + o(1))$. \begin{equation*} \varepsilon^2 h_\varepsilon^{-1/2} \ll T_\varepsilon(S_\varepsilon) \ll \varepsilon^2 h_\varepsilon^{-1} = o(1), \end{equation*} (\ref{metka}), \begin{equation}\label{33a} 2\log\beta(L_\varepsilon,S_\varepsilon) = -(x_{\alpha_\varepsilon} - \gamma^{-1} \varepsilon^{-2} h_\varepsilon^{1/2} T_{\varepsilon}(S_\varepsilon))^2(1 + o(1)). \end{equation} $\frac{\varepsilon^2 x_{\alpha_\varepsilon}}{h_\varepsilon^{1/2} T_{\varepsilon}(S_\varepsilon)} \to 0 $ $\varepsilon \to 0$, \begin{equation*} \lim_{\varepsilon \to 0} (\log \alpha_\varepsilon)^{-1} \log \beta (L_\varepsilon,S_\varepsilon) = \infty. \end{equation*} \end{thm} , \begin{equation}\label{z33} T_{\varepsilon}(S_\varepsilon) = \sum_{j=-\infty}^\infty |\hat K(jh_\varepsilon)|^2 |\theta_{\varepsilon j}|^2.
\end{equation} \section{Proofs \label{sec7}} \subsection{Proof of Theorem \ref{th2}} , , $z^2 = \kappa^{2}_{\varepsilon} x_{\alpha_\varepsilon}$, \begin{equation}\label{dth21} \begin{split} & \mathbf{P}_s(T_\varepsilon - \tau_\varepsilon^2 < z^2 - \tau_\varepsilon^2)\\& \le \exp\{t(z^2 - \tau_\varepsilon^2)\} \mathbf{E}_0 \left[\exp\left\{-t\sum_{j=1}^\infty \kappa_j^2\,(y_j^2 - \varepsilon^2) + 2t\sum_{j=1}^\infty\kappa_j^2\,y_j\,\theta_j\right\}\right] \\&= \exp\{t(z^2 - \tau_\varepsilon^2)\} \lim_{m\to \infty} (2\pi\varepsilon^2)^{-m/2}\int \left[\exp\left\{-\frac{1}{2}\sum_{j=1}^m\left(\frac{1 + 2t\kappa_j^2\,\varepsilon^2}{\varepsilon^2} y_j^2 \right)\right.\right.\\&\left.\left. + t\, \sum_{j=1}^m \kappa_j^2\, \varepsilon^2+ 2t\sum_{j=1}^m\kappa_j^2y_j\,\theta_j \pm \sum_{j=1}^m \frac{2\,t^2\,\kappa_j^4\,\theta_j^2\,\varepsilon^2}{1 + 2t\,\kappa_j^2\,\varepsilon^2} \right\}\right]\,d\,y \\& = \exp\{t\,(z^2 - \tau_\varepsilon^2)\}\,\exp\left\{t\,\sum_{j=1}^\infty \kappa_j^2\,\varepsilon^2 - \frac{1}{2}\,\sum_{j=1}^\infty\log(1 + 2\,t\,\kappa_j^2\,\varepsilon^2) + \sum_{j=1}^\infty \frac{2\,t^2\,\kappa_j^4\,\theta_j^2\,\varepsilon^2}{1 + 2\,t\,\kappa_j^2\,\varepsilon^2} \right\}. \end{split} \end{equation} , \begin{equation*} \sum_{j=1}^\infty\log (1 + 2t\kappa_j^2\varepsilon^2) \asymp k_\varepsilon \ll \varepsilon^{-2}\tau_\varepsilon^2. \end{equation*} , $\sum_{j=1}^\infty \kappa_j^2 \theta_j^2 = \mbox{const}$ $\theta_j$ , $\kappa_\varepsilon^2 \theta_1^2 = \tau_\varepsilon^2$. $t$ \begin{equation}\label{dth22} \exp\Bigl\{t(z^2 - \tau_\varepsilon^2) + \frac{2t^2\kappa_\varepsilon^2\tau_\varepsilon^2\varepsilon^2}{1 + 2t\kappa_\varepsilon^2\varepsilon^2}\Bigr\}. \end{equation} , $t$ \begin{equation}\label{dth23} 2t = \varepsilon^{-2}\kappa_\varepsilon^{-2}(\tau_\varepsilon z^{-1} - 1). \end{equation} Substituting (\ref{dth23}) into (\ref{dth22}), the exponent becomes $tz(z - \tau_\varepsilon) = -(\tau_\varepsilon - z)^2/(2\varepsilon^2\kappa_\varepsilon^2)$, which yields (\ref{eth21}). \subsection{Proof of Theorem \ref{th1}} $T_\varepsilon$ . \cite{os,petr}. 4 \cite{erm08}. , \ref{th1} 4 \cite{erm08}. , 4 \cite{erm08}. (3.51) \cite{erm08}.
(\ref{eth3}), , $\varkappa_\varepsilon^2 \asymp \varepsilon^2 k_\varepsilon^{-1/2}$, \begin{equation}\label{dth11a} \varepsilon^{-6}\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^4\, \theta_{\varepsilon j}^2 \asymp \varepsilon^{-6}\,\varkappa_\varepsilon^2\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^2\, \theta_{\varepsilon j}^2 \asymp k_\varepsilon^{-1/2} \,D_\varepsilon(S_\varepsilon), \end{equation} , $\kappa_{\epsilon j} $-- , (\ref{dth11a}) \begin{equation}\label{dth11} \varepsilon^{-6}\sum_{j=1}^\infty \kappa_{\varepsilon j}^4\, \theta_{\varepsilon j}^2 \asymp \varepsilon^{-6}\,\kappa_\varepsilon^2\,\sum_{j=1}^\infty \kappa_{\varepsilon j}^2\, \theta_{\varepsilon j}^2 = o\left(\varepsilon^{-4}\,\kappa_\varepsilon^2\,\sum_{j=1}^\infty \kappa_{\varepsilon j}^4\right). \end{equation} (\ref{dth11}), (3.53), (3.55)--(3.57) \cite{erm08} $(1 +o(1))$. (3.43) 4 \cite{erm08} (\ref{us2n}). (\ref{eth2}), \begin{equation}\label{dth12a} \begin{split} & ( B_\varepsilon(S_\varepsilon)-x_{\alpha_\varepsilon})\,\varepsilon^{-6}\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^4 \,\theta_{\varepsilon j}^2 \,\asymp \, \varepsilon^{-2}\varkappa_\varepsilon^2\left(\varepsilon^{-4}\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^2 \,\theta_{\varepsilon j}^2\right)^3 \\& \asymp \, k_\varepsilon^{1/2}\,\left(\varepsilon^{-4}\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^2\, \theta_{\varepsilon j}^2\right)^3 = o(1), \end{split} \end{equation} , $\kappa_{\epsilon j} $-- , \begin{equation}\label{dth12} \begin{split} & \varepsilon^{-6}\sum_{j=1}^\infty \kappa_{\varepsilon j}^4\, \theta_{\varepsilon j}^2 \,\asymp \, \varepsilon^{-6}\,\kappa_\varepsilon^2\,\sum_{j=1}^\infty \kappa_{\varepsilon j}^2 \, \theta_{\varepsilon j}^2 \\& \asymp \varepsilon^{-6}\,\kappa_\varepsilon^2\,\sum_{j=1}^\infty \, \kappa_{\varepsilon j}^4\asymp \varepsilon^{-6}\,k_\varepsilon\,\kappa_\varepsilon^6\, = \,o(1). \end{split} \end{equation} (\ref{dth12}), 4 \cite{erm08} , (\ref{us1n}). (\ref{us1}) (\ref{us2}) . 
$x_{\alpha_\varepsilon} = \varepsilon^{-4}\,\sum_{j=1}^\infty \varkappa_{\varepsilon j}^2 \,\theta_{\varepsilon j}^2$. \subsection{Proof of Theorem \ref{tq1}} , {\it ii.} . {\it i.} \cite{erm21} . . $S = \sum_{j=1}^\infty \tau_{j}\,\phi_j \notin \mathbb{\bar B}^s_{2\infty}$. , $m_l$, $m_l \to \infty$ $l \to \infty$, \begin{equation}\label{u5} m_l^{2s} \sum_{j=m_l}^\infty \tau_{j}^2 = C_l, \end{equation} $C_l \to \infty$ as $l \to \infty$. $\etab_l = \{\eta_{lj}\}_{j=1}^\infty$ , $\eta_{lj} = 0$ if $j < m_l$ and $\eta_{lj} = \tau_j$ if $j \ge m_l$. $\tilde S_l = \sum_{j=1}^\infty \eta_{lj}\,\phi_j$. $\tilde S_l$ $n_l$ , \begin{equation}\label{u5b} \|\etab_l\|^2 \asymp \varepsilon_l^{4r}= n_l^{-2r}\asymp m_l^{-2s}\, C_l . \end{equation} \begin{equation}\label{u7} n_l \asymp C_l^{-1/(2r)} m_l^{s/r} \asymp C_l^{-1/(2r)} m_l^{\frac{1}{2 - 4r- 2 \omega}}. \end{equation} \begin{equation}\label{u10} m_l \asymp C_l^{(1-2r-\omega)/r}\,n_l^{2-4r-2\omega}. \end{equation} A4, (\ref{u10}) \begin{equation}\label{u9} \kappa^2_{n_l m_l} = o(\kappa^2_{n_l}). \end{equation} (\ref{u1}), A2 (\ref{u9}), \begin{equation}\label{u12} \begin{split}& A_{\varepsilon_l}(\etab_l) = n_l^2 \sum_{j=1}^{\infty} \kappa_{\varepsilon_lj}^2\,\eta_{lj}^2 \le n_l^{2} \,\kappa^2_{\varepsilon_l m_l}\,\sum_{j=m_l}^{\infty} \theta_{n_lj}^2\\& \asymp n_l^{2-2r}\,\kappa^2_{\varepsilon_l m_l} \asymp \varepsilon_l^{-4\omega}\,\kappa^2_{\varepsilon_l m_l }\,\kappa^{-2}_{\varepsilon_l} = o(\varepsilon_l^{-4\omega}) . \end{split} \end{equation} \ref{th1}, (\ref{u12}) $\varepsilon^{-2\omega}- LD$ -- $\tilde S_l$. \subsection{Proof of Theorem \ref{tq1} for kernel-based tests} $S = \sum_{j=-\infty}^\infty \tau_j \,\phi_j \notin \gamma\, U$ $\gamma >0$. $m_l$, $m_l \to \infty$ $l \to \infty$, , \begin{equation}\label{bb5} m_l^{2s} \sum_{|j|\ge m_l} |\tau_j|^2 = C_l, \end{equation} $C_l \to \infty$ as $l \to \infty$.
, $m_l$ , \begin{equation}\label{gqq} \sum_{m_l \le |j| \le 2m_{l}} |\tau_j|^2 \asymp \varepsilon_l^{4r} = n_l^{-2r}> \delta \,C_l \,m_l^{-2s}, \end{equation} $\delta$, $0< \delta <1/2$, $l$. $\etab_l = \{\eta_{lj}\}_{j=-\infty}^\infty$ , $\eta_{lj} = \tau_j$, $|j| \ge m_{l}$, $\eta_{lj} = 0$ . $$ \tilde S_l(x) = \sum_{j=-\infty}^\infty \eta_{lj} \exp\{2\pi ijx\}. $$ $\tilde S_l(x)$ $n_l$ , $\|\tilde S_l(x)\| \asymp n_l^{-r}$. \begin{equation*} n_l \asymp C_l^{-1/(2r)}\, m_l^{s/r}. \end{equation*} $|\hat K(\omega)| \le \hat K(0) = 1$ for all $\omega \in \mathbb{R}^1$ and $|\hat K(\omega)| > c > 0$ for $|\omega| < b$. , $h_l= h_{n_l} =2^{-1}b^{-1}m_l^{-1}$, (\ref{gqq}), $C > 0$ , $h> 0$, \begin{equation*} T_{\varepsilon_l}(\tilde S_l,h_l) = \sum_{j=-\infty}^\infty |\hat K(jh_l)\,\eta_{lj}|^2 > C \sum_{j=-\infty}^\infty |\hat K(jh)\,\eta_{lj}|^2 = C T_{\varepsilon_l}(\tilde S_l,h). \end{equation*} $h = h_l$. (\ref{gqq}), \begin{equation}\label{k101} T_{\varepsilon_l}(\tilde S_l) = \sum_{|j|>m_l}\, |\hat K(jh_l) \,\eta_{lj}|^2 \asymp \sum_{j=m_l}^{2m_l} |\eta_{lj}|^2 \asymp n_l^{-2r}. \end{equation} (\ref{u7}), (\ref{u10}), $k_l = [h_{\varepsilon_l}^{-1}]$ $m_l = k_l$, \begin{equation}\label{k102} h_{\varepsilon_l}^{1/2} \asymp C_l^{(2r-1+\omega)/(2r)}\,n_l^{2r-1+\omega}. \end{equation} (\ref{k101}) (\ref{k102}), \begin{equation*} \varepsilon_l^{-2}\, T_{\varepsilon_l}(\tilde S_l)\,h_{\varepsilon_l}^{1/2} \asymp C_l^{-(1-2r)/2}. \end{equation*} \ref{tk2}, $\tilde S_l$. \begin{thebibliography}{99} \bibitem{bic} P.J. Bickel, M. Rosenblatt, {\it On some global measures of deviation of density function estimates}.--- Ann. Statist. {\bf 1} (1973), 1071-1095. \bibitem{dal13} L. Comminges, A. Dalalyan, {\it Minimax testing of a composite null hypothesis defined via a quadratic functional in the model of regression}.--- Electronic Journal of Statistics, {\bf 7} (2013), 146-190. \bibitem{er90} . . , {\it }.--- . . {\bf 35} (1990), 704-715. \bibitem{erm08} M.S.
Ermakov, {\it Testing nonparametric hypothesis for small type I and type II error probabilities}.--- Problems of Information Transmission, {\bf 44} (2008), 54-73. \bibitem{erm11} M.S. Ermakov, {\it Nonparametric signal detection with small type I and type II error probabilities}.--- Statistical Inference for Stochastic Processes {\bf 14} (2011), 1-19. \bibitem{erm21} M.S. Ermakov, {\it On uniform consistency of nonparametric tests. I.}--- J. Math. Sci. {\bf 258} (2021), 802-837. \bibitem{erm22} M.S. Ermakov, {\it On uniform consistency of nonparametric tests. II}. --- . . . . {\bf 495} (2022), 133-164. \bibitem{hor} J.\,Horowitz, V.\,Spokoiny, {\it An adaptive, rate-optimal test of a parametric mean-regression model against a nonparametric alternative}.--- Econometrica, {\bf 69} (2001), 599-631. \bibitem{ib77} I.A. Ibragimov, R.Z. Hasminskii, {\it On the estimation of an infinite-dimensional parameter in Gaussian white noise}.--- Soviet Mathematics. Doklady, {\bf 18} (1977), 1307-1309. \bibitem{ib81} I.A. Ibragimov, R.Z. Hasminskii, {\it Statistical estimation: Asymptotic theory}, Springer, N.Y. (1981). \bibitem{ing02} Yu.I. Ingster, I.A. Suslina, {\it Nonparametric goodness-of-fit testing under Gaussian models.} Lecture Notes in Statistics {\bf 169} Springer: N.Y. (2002). \bibitem{ker93} G. Kerkyacharian, D. Picard, {\it Density estimation by kernel and wavelets methods: optimality of Besov spaces}.--- Statist. Probab. Lett. {\bf 18} (1993), 327-336. \bibitem{ney} J. Neyman, {\it Smooth test for goodness of fit}. --- Skand. Aktuarietidskr., {\bf 1--2} (1937), 149--199. \bibitem{os} L. V. Osipov, {\it On probabilities of large deviations for sums of independent random variables.}--- Theory Probab. Appl., {\bf 17} (1973), 309-331. \bibitem{petr} V. V. Petrov, {\it On the probabilities of large deviations for sums of independent random variables}.--- Theory Probab. Appl., {\bf 10} (1965), 287-298. \bibitem{rio} V. Rivoirard, {\it Maxisets for linear procedures}.--- Statist. Probab. Lett. {\bf 67} (2004), 267-275.
\end{thebibliography} \begin{abstract} We consider the problem of signal detection in Gaussian white noise. Test statistics are linear combinations of squares of estimators of Fourier coefficients or $\mathbb{L}_2$-norms of kernel estimators. We point out necessary and sufficient conditions under which nonparametric sets of alternatives have a given rate of exponential decay for the type II error probabilities. \end{abstract} \vskip 0.3cm {[email protected]} Key words: goodness of fit tests, consistency, signal detection, Bickel-Rosenblatt test, Neyman test, maxisets. AMS subject classification: 62F03, 62G10, 62G2 \end{document}
2206.14447v1
http://arxiv.org/abs/2206.14447v1
Transverse instability of high frequency weakly stable quasilinear boundary value problems
\documentclass[a4paper, 11pt]{amsart} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage[all]{xy} \usepackage{enumitem} \usepackage[dvipsnames, x11names]{xcolor} \usepackage{wrapfig, graphicx} \usepackage{amsfonts,amstext,amssymb,amsmath,amsthm, fancybox} \usepackage{pifont} \usepackage[left=2.5cm,right=2.5cm,top=2.5cm,bottom=2.5cm]{geometry} \usepackage[colorlinks=true, pdfstartview=FitV, linkcolor=altblue, citecolor=altred, urlcolor=blue,pagebackref=false]{hyperref} \usepackage{fourier-orns} \usepackage{array} \usepackage{fancyhdr} \usepackage{setspace} \DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png} \usepackage{tikz} \usetikzlibrary{patterns,shapes,decorations.pathmorphing} \usepackage{mathrsfs} \usepackage{dsfont} \usepackage{mathdots} \usepackage{cleveref} \usepackage[colorinlistoftodos, french, bordercolor=black, linecolor=black, textsize=small]{todonotes} \usepackage{yhmath} \newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}} \setlength{\doublerulesep}{\arrayrulewidth} \renewcommand{\arraystretch}{1.5} \renewcommand{\epsilon}{\varepsilon} \renewcommand{\phi}{\varphi} \newcommand\R{\mathbb{R}} \newcommand\Q{\mathbb{Q}} \newcommand\D{\mathcal{D}} \newcommand\C{\mathbb{C}} \renewcommand\S{\mathbb{S}} \newcommand\N{\mathbb{N}} \newcommand\Z{\mathbb{Z}} \newcommand\T{\mathbb{T}} \newcommand{\inv}[1]{\frac{1}{#1}} \renewcommand{\hat}{\widehat} \renewcommand{\tilde}{\widetilde} \renewcommand{\bar}{\overline} \renewcommand{\leq}{\leqslant} \renewcommand{\geq}{\geqslant} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \newcommand{\norme}[1]{\left\Vert #1\right\Vert} \newcommand{\normetriple}[1]{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \newcommand{\prodscalbis}[1]{\left\langle #1\right\rangle} \newcommand{\prodscal}[2]{\left\langle #1\,\middle|\,#2\right\rangle} \newcommand{\ensemble}[1]{\left\lbrace 
#1\right\rbrace} \newcommand{\valabs}[1]{\left| #1\right|} \newcommand{\systeme}[1]{\left\lbrace\begin{array}{l}#1\end{array}\right.} \newcommand{\indicatrice}{\mathds{1}} \newcommand{\privede}[1]{\setminus\ensemble{#1}} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\Id}{Id} \DeclareMathOperator{\divergence}{div} \DeclareMathOperator{\loc}{loc} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\Ima}{Im} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\diff}{d} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\Vect}{Vect} \DeclareMathOperator{\card}{card} \DeclareMathOperator{\com}{com} \DeclareMathOperator{\Span}{Span} \DeclareMathOperator{\GL}{GL} \newcommand{\cad}{c'est-à-dire } \newcommand{\CK}{Cauchy-Kovalevskaya } \newcommand\F{\mathcal{F}} \newcommand\E{\mathbf{E}} \renewcommand{\Q}{\mathbf{Q}} \newcommand{\A}{\mathbb{A}} \newcommand{\J}{\mathbb{J}} \renewcommand{\P}{\mathcal{P}} \newcommand{\U}{\mathcal{U}} \newcommand{\G}{\mathcal{G}} \renewcommand{\H}{\mathbf{H}} \newcommand\B{\mathcal{B}} \newcommand{\V}{\mathbf{V}} \newcommand{\X}{\mathbf{X}} \newcommand{\Y}{\mathbf{Y}} \renewcommand{\a}{\mathbf{a}} \renewcommand{\b}{\mathbf{b}} \renewcommand{\D}{\mathbf{D}} \newcommand{\freq}{\mathbf{n}\cdot\boldsymbol{\zeta}} \newcommand{\K}{\mathcal{K}} \newcommand{\crochet}[1]{\{#1\}} \newcommand{\red}[1]{\textcolor{red}{#1}} \DeclareMathOperator{\Car}{Car} \DeclareMathOperator{\osc}{osc} \DeclareMathOperator{\ev}{ev} \DeclareMathOperator{\app}{app} \DeclareMathOperator{\Dom}{Dom} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\inc}{in} \DeclareMathOperator{\out}{out} \DeclareMathOperator{\spectre}{sp} \DeclareMathOperator{\res}{res} \DeclareMathOperator{\diam}{diam} \DeclareMathOperator{\nc}{nc} \DeclareMathOperator{\Lop}{Lop} \DeclareMathOperator{\per}{per} \DeclareMathOperator{\bord}{b} \definecolor{altblue}{RGB}{0, 0, 180} \definecolor{altred}{RGB}{200, 0, 0} \definecolor{altorange}{RGB}{243, 136, 0} 
\definecolor{altorange2}{RGB}{190, 100, 0} \definecolor{altgreen}{RGB}{0, 130, 0} \definecolor{altpurple}{RGB}{169, 0, 243} \definecolor{altgrey}{RGB}{40, 55, 71} \definecolor{altpink}{RGB}{255, 48, 145} \definecolor{rouge1}{RGB}{128, 0, 0} \definecolor{rouge2}{RGB}{134, 45, 45} \definecolor{rouge3}{RGB}{77, 0, 0} \definecolor{orange1}{RGB}{153, 38, 0} \definecolor{marron1}{RGB}{102, 51, 0} \definecolor{jaune1}{RGB}{204, 122, 0} \definecolor{violet1}{RGB}{153, 0, 77} \definecolor{violet2}{RGB}{128, 0, 128} \definecolor{rose1}{RGB}{255, 0, 128} \definecolor{violet3}{RGB}{77, 0, 153} \definecolor{bleu1}{RGB}{0, 0, 102} \definecolor{bleu2}{RGB}{0, 45, 179} \definecolor{bleu3}{RGB}{0, 122, 153} \definecolor{vert1}{RGB}{32, 96, 64} \definecolor{vert2}{RGB}{0, 102, 0} \definecolor{vert3}{RGB}{77, 77, 0} \newcommand{\todocorentinmtn}[1]{\todo[backgroundcolor=pink]{#1}} \newcommand{\todocorentinlater}[1]{\todo[backgroundcolor=altgreen!20]{#1}} \newcommand{\todocommentaire}[1]{\todo[backgroundcolor=altblue!30]{#1}} \setlength{\marginparwidth}{2cm} \allowdisplaybreaks \newtheorem{theorem}{Theorem}[section] \crefname{theorem}{Theorem}{Theorems} \newtheorem{lemma}[theorem]{Lemma} \crefname{lemma}{Lemma}{Lemmas} \newtheorem{proposition}[theorem]{Proposition} \crefname{proposition}{Proposition}{Propositions} \newtheorem{corollary}[theorem]{Corollary} \crefname{corollary}{Corollary}{Corollaries} \newtheorem{definition}[theorem]{Definition} \crefname{definition}{Definition}{Definitions} \newtheorem{assumption}{Assumption} \crefname{assumption}{Assumption}{Assumptions} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \crefname{remark}{Remark}{Remarks} \newtheorem{example}[theorem]{Example} \crefname{example}{Example}{Examples} \numberwithin{equation}{section} \title[Transverse instability of weakly stable quasilinear boundary value problems]{Transverse instability of high frequency weakly stable quasilinear boundary value problems} \author{Corentin Kilque} 
\address{Institut de Mathématiques de Toulouse ; UMR5219 \\ Université de Toulouse ; CNRS \\ UPS, F-31062 Toulouse Cedex 9, France} \email{[email protected]} \begin{document} \maketitle \begin{abstract} This work intends to prove that strong instabilities may appear for high order geometric optics expansions of weakly stable quasilinear hyperbolic boundary value problems, when the forcing boundary term is perturbed by a small amplitude oscillating function, with a transverse frequency. Since the boundary frequencies lie in the locus where the so-called Lopatinskii determinant is zero, the amplifications on the boundary give rise to a highly coupled system of equations for the profiles. A simplified model for this system is solved in an analytical framework using the \CK theorem as well as a version of it ensuring analyticity in space and time for the solution. Then it is proven that, through resonances and amplification, a particular configuration for the phases may create an instability, in the sense that the small perturbation of the forcing term on the boundary interferes at the leading order in the asymptotic expansion of the solution. Finally we study the possibility for such a configuration of frequencies to happen for the isentropic Euler equations in space dimension three. \end{abstract} \tableofcontents This work investigates the (in)stability of multiphase geometric optics expansions for weakly stable quasilinear hyperbolic boundary value problems. The formal construction of such geometric optics expansions goes back to Majda, Artola, and Rosales, in \cite{MajdaRosales1983Machstem,MajdaRosales1984Machstem,ArtolaMajda1987Instabilities,MajdaArtola1988Mixed}. In this paper, we prove that, for a simplified model, infinitely accurate approximate solutions can be unstable, in the sense that a small perturbation of the boundary forcing term interferes at the leading order in the asymptotic expansion.
For uniformly stable problems, the construction of a multiphase asymptotic expansion is performed, for Cauchy problems, notably in \cite{HunterMajdaRosales1986Resonantly} for the linear case, and in \cite{JolyMetivierRauch1995Coherent} for the quasilinear one. In the case of boundary value problems, \cite{Williams1996Boundary} studies the semilinear case with multiple frequencies on the boundary, and the quasilinear case is treated in \cite{CoulombelGuesWilliams2011Resonant} for one phase on the boundary. The case of multiple phases on the boundary is addressed in a previous work of the author, \cite{Kilque2021Weakly}. In the weakly stable case, that is, when the weak Kreiss-Lopatinskii condition is satisfied, an amplification phenomenon occurs, as shown in works of Coulombel, Guès and Williams. Following the pioneering works by Majda and his collaborators, the first rigorous construction of a geometric optics expansion in the weakly stable case is performed in \cite{CoulombelGues2010Geometric} for linear boundary value problems. Nonlinear problems are treated in \cite{CoulombelGuesWilliams2014Semilinear,CoulombelWilliams2014Pulses} for the semilinear case, and \cite{CoulombelWilliams2017Mach} for the quasilinear one. In \cite{CoulombelGuesWilliams2014Semilinear,CoulombelWilliams2014Pulses,CoulombelWilliams2017Mach}, the authors consider one phase on the boundary, and the present work addresses the extension of \cite{CoulombelWilliams2017Mach} to the multiphase case. Here, allowing multiple phases on the boundary permits us to consider a particular configuration of frequencies on the boundary, which, thanks to the amplification phenomenon, will lead to an instability for the asymptotic expansion. We show, however, that once a locus of breaking of the Kreiss-Lopatinskii condition is fixed, this configuration of frequencies creating an instability cannot occur for the example of gas dynamics.
This leaves hope of justifying the validity of geometric optics expansions with one amplification for gas dynamics. This work is divided into three main parts: (i) the derivation of the equations satisfied by the profiles in the multiphase case, following \cite{CoulombelWilliams2017Mach}; (ii) the proof of existence of solutions to the obtained system in an analytical framework; and (iii) the proof of instability for this system, namely, that there exists a perturbation of the boundary forcing term interfering at the leading order in the expansion. The general system being out of our reach for the moment, both for existence and for the instability mechanism, we deal with simplified models of the general system of equations for the profiles. For the boundary value problem considered in this paper, the boundary condition is assumed to satisfy the weak Kreiss-Lopatinskii condition, namely that the Kreiss-Lopatinskii condition breaks on a certain locus of the frequency space. More precisely, we assume here that the locus where the Kreiss-Lopatinskii condition is not satisfied lies in the hyperbolic region (see \cite[Definition 2.1]{BenzoniSerre2007Multi}). For boundary frequencies for which the Kreiss-Lopatinskii condition is not satisfied, an amplification phenomenon occurs on the boundary. The idea is to consider a particular configuration of frequencies on the boundary which will turn this amplification into a strong instability. For this purpose, we consider a boundary forcing term $ G $ oscillating at a frequency $ \phi $, belonging to the locus where the Kreiss-Lopatinskii condition is not satisfied, and we perturb this boundary forcing term $ G $ with a perturbation term $ H $, oscillating at a \emph{transverse} frequency $ \psi $ also belonging to the locus where the Kreiss-Lopatinskii condition is not satisfied, of small amplitude compared to that of $ G $.
In \cite{CoulombelWilliams2017Mach}, the boundary frequency $ \phi $ is assumed to be non-resonant, in the sense that two interior frequencies lifted from $ \phi $ cannot resonate with each other. We make the same assumption here, as well as for the boundary frequency $ \psi $. We assume however that two well chosen resonance relations between $ \phi $ and $ \psi $ exist, which will allow the perturbation $ H $ to ascend towards the leading order, through repeated amplification and resonances. We study in this article the possibility for such a configuration of frequencies to happen for the isentropic compressible Euler equations in space dimension $ 3 $. We prove that for a particular choice of locus where the Kreiss-Lopatinskii condition is not satisfied, the configuration of boundary frequencies considered here is (thankfully) impossible for the Euler system. The derivation of equations for the amplitudes from the BKW cascade follows the one detailed in \cite{CoulombelWilliams2017Mach}. The main difference with the iterative process in the uniformly stable case, see e.g. \cite{JolyMetivierRauch1995Coherent,Williams1996Boundary,CoulombelGuesWilliams2011Resonant,Kilque2021Weakly}, is that, for boundary frequencies lying in the locus where the Kreiss-Lopatinskii condition is not satisfied, we cannot a priori determine a boundary condition for incoming profiles. Indeed, because of the weak Kreiss-Lopatinskii condition, for such boundary frequencies, the traces of incoming profiles are expressed through an unknown scalar function. For a given order, the evolution equations satisfied by these boundary terms are derived using equations on profiles of the next order. This is where amplification occurs. The main difference with \cite{CoulombelWilliams2017Mach} is that, because of resonances, equations for each profile and for boundary terms are coupled with each other.
Also, in comparison with \cite{CoulombelWilliams2017Mach}, in equations for the boundary term of a given order, there is a term involving the trace of a profile of the next order, which was proven to be zero in \cite{CoulombelWilliams2017Mach}, because resonances were absent in that work. This results in a highly coupled system. Nevertheless we discuss two points about this system, which are the existence of a solution to it and the creation of an instability. The existence of a solution to the system of equations for the profiles is proven here in an analytical setting. The aim is to use the abstract \CK theorem, whose proof can be found in \cite{Nirenberg1972CauchyKowalewski} and \cite{Nishida1977Nirenberg}. We use in this work the formulation of \cite{BaouendiGoulaouic1978Nishida}. The system of equations for the profiles is made of incoming and outgoing equations for interior profiles, in which the traces of incoming profiles are expressed through boundary terms that in turn satisfy coupled evolution equations on the boundary. As already mentioned, the general system is difficult to treat, so we consider two simplified models of increasing difficulty for the study of existence. Both retain only a few profiles (which are the ones of interest), and remove some couplings between the equations. The first consists only of coupled equations on the boundary, and we then extend this first simplified model into a second, more complex one by incorporating interior equations, whose traces on the boundary are given by the solutions to the equations on the boundary. For the first simplified model, containing only equations on the boundary, the formulation of \cite{BaouendiGoulaouic1978Nishida} can be applied, using a chain of spaces quantifying analyticity by means of the Fourier transform.
The only difficulty is to show that a certain bilinear operator appearing in the equations is semilinear in the considered spaces of functions, and this result is obtained by adapting a result of \cite{CoulombelWilliams2017Mach}. For the second simplified model, incorporating interior equations, the aim is to apply the \CK theorem to the interior equations, seen as propagation equations in the normal direction. Therefore, we need the boundary terms, which are solutions to boundary equations, to be analytic with respect to all their variables: both tangential space variables and \emph{time}. However, if we apply the classical \CK theorem to the boundary equations, we obtain a solution analytic only with respect to the tangential space variables, and not with respect to time. We therefore need to adapt the \CK theorem to obtain analyticity with respect to all variables for solutions to the boundary equations. This is done using the method of majorant series, and the phenomenon of regularization by integration in time introduced in \cite{Ukai2001CauchyKovalevskaya}, see also \cite{Metivier2009Optics,Morisse2020Elliptic}. We define for this purpose a chain of spaces of analytic functions, with a formulation adapted from \cite{BaouendiGoulaouic1978Nishida} to the framework of majorant series, and prove the result using a fixed point theorem. We also define a chain of functional spaces suited to apply the \CK theorem for interior equations, once we have constructed the analytic boundary terms. Applying the \CK theorem to interior equations in this chain of functional spaces then presents no difficulty. To prove that there is an instability, namely, that a small perturbation $ H $ of the boundary forcing term $ G $ interferes at the leading order, since the perturbation $ H $ is small compared to $ G $, we consider the linearized version of the general system, around the particular solution of this system when the perturbation $ H $ is zero.
We obtain a linearized system with a small boundary forcing term given by $ H $, and we prove that there exists a boundary term $ H $ such that this system admits a solution whose first order profiles are not all zero. This shows that the small perturbation $ H $ interferes at the leading order for the linearized system, which constitutes an instability. The existence of $ H $ is proven by contradiction: we assume that for all boundary terms $ H $, all leading profiles are zero, and we contradict a certain condition by constructing the second order correctors. As for the part about existence, we work here with simplified models, as the coupling of the general system of equations is too difficult to handle. The first simplified model allows us to construct explicitly the solution to the linearized system, solving the considered transport equations by the method of characteristics. For the second one, the coupling is more complex, preventing us from applying the latter method, and we use a perturbation method and solve equations with a fixed point theorem. This article is organized as follows. First we state the problem that we study here, make structural assumptions about it, and specify some assumptions and preliminary results about the oscillations at stake. Then, in a second part, the general system of equations for the profiles is derived, by detailing the iterative process for the leading profile and then the first corrector, and by writing down the general system satisfied by higher order correctors. We proceed in a third part with the proof of existence of a solution to simplified models of this general system. We start by detailing how the first simplified model is obtained, then defining the functional framework which will be used, specifying the simplified model according to this functional framework, and applying the \CK theorem for boundary equations.
Then we detail how the first simplified model is extended into the second one, we define additional functional spaces and specify the second simplified model accordingly, and finally we show existence and analyticity of solutions to boundary equations by proving a new version of the \CK theorem, leading to existence of solutions to interior equations, using a classical \CK theorem. The fifth part is devoted to the proof of instability, first by deriving the linearization of the general system around the particular solution where the perturbation $ H $ is zero, and then by proving, for two different simplified models, that an instability is created. Finally, in a sixth part, the example of isentropic compressible Euler equations in space dimension $ 3 $ is studied. Throughout the article, the letter $ C $ denotes a positive constant that may vary during the analysis, possibly without explicit mention. \bigskip \emph{Acknowledgments.} The author is particularly grateful to Jean-François Coulombel, whose brilliant idea is at the origin of this work, and for his numerous pieces of advice and his proofreading. \section{Notation and assumptions} \subsection{Position of the problem} Given a time $ T>0 $ and an integer $ d\geq 2 $, let $ \Omega_T $ be the domain $\Omega_T:=(-\infty,T]\times \R^{d-1}\times \R_+$ and $\omega_T:=(-\infty,T]\times \R^{d-1}$ its boundary. We denote by $ t\in(-\infty,T] $ the time variable, $ x=(y,x_d)\in\R^{d-1}\times\R_+ $ the space variable, with $ y\in\R^{d-1} $ the tangential variable and $ x_d\in\R_+ $ the normal variable, and finally $ z=(t,x)=(t,y,x_d) $. We also denote by $ z'=(t,y)\in\omega_T $ the variable of the boundary $ \ensemble{x_d=0} $. For $ i=1,\dots,d $, we denote by $ \partial_i $ the partial derivative operator with respect to $ x_i $. Finally we denote by $ \alpha\in\R^{d+1} $ and $ \zeta\in\R^d $ the dual variables of $ z\in\Omega_T $ and $ z'\in\omega_T $.
We consider the following problem \begin{equation}\label{eq systeme 1} \left\lbrace \begin{array}{lr} L(u^{\epsilon},\partial_z)\,u^{\epsilon}:=\partial_tu^{\epsilon}+\displaystyle\sum_{i=1}^dA_i(u^{\epsilon})\,\partial_iu^{\epsilon}=0&\qquad \mbox{in } \Omega_T, \\[5pt] B\,u^{\epsilon}_{|x_d=0}=\epsilon^2\, g^{\epsilon}+\epsilon^M\,h^{\epsilon}&\qquad \mbox{on } \omega_T, \\[10pt] u^{\epsilon}_{|t\leq 0}=0,& \end{array} \right. \end{equation} where the unknown $ u^{\epsilon} $ is a function from $ \Omega_T $ to an open set $ \mathcal{O} $ of $ \R^N $ containing zero, $ N\geq 1 $, the matrices $A_i$ are smooth functions on $\mathcal{O}$ with values in $\mathcal{M}_N(\R)$, and the matrix $B$ belongs to $\mathcal{M}_{\tilde{p},N}(\R)$ and is of maximal rank (the integer $ \tilde{p}\geq 1 $ will be made precise below). The boundary term is a superposition of a reference oscillating forcing term $ \epsilon^2\,g^{\epsilon} $ (of characteristic wavelength $ \epsilon $) and a smaller, transverse, oscillating term $ \epsilon^M\,h^{\epsilon} $ with $ M\geq 3 $, namely, for $ z'\in\omega_T $, \begin{subequations} \begin{align}\label{eq def g epsilon 1} g^{\epsilon}(z')&=G\left( z',\frac{z'\cdot\phi}{\epsilon}\right),\\[5pt] h^{\epsilon}(z')&=H\left( z',\frac{z'\cdot\psi}{\epsilon}\right), \end{align} \end{subequations} where $ G,H $ are functions of the Sobolev space of infinite regularity $ H^{\infty}(\R^d\times\T) $, vanish for negative times $ t $, and have boundary frequencies $ \phi,\psi $ given in $ \R^d\privede{0} $. The frequencies $ \phi $ and $ \psi $ are taken linearly independent over $ \R $, that is, $ \psi\notin\R\phi $. We denote by $ \boldsymbol{\zeta} $ the couple $ \boldsymbol{\zeta}:=(\phi,\psi) $. In this paper we wish to place ourselves in the framework of weakly nonlinear geometric optics. Usually, to obtain this framework, the amplitude of the boundary forcing term must be of order $ O(\epsilon) $.
Here, because we will assume that the Kreiss-Lopatinskii condition is not satisfied for $ \phi $ (and $ \psi $), an amplification phenomenon will happen at the boundary for this frequency, so a forcing term of amplitude of order $ O(\epsilon^2) $ should be chosen on the boundary. This scaling has been studied in \cite{ArtolaMajda1987Instabilities,MajdaRosales1983Machstem,MajdaRosales1984Machstem,CoulombelWilliams2017Mach}. Note that if we set $ h^{\epsilon}=0 $ in system \eqref{eq systeme 1}, we obtain the system studied in \cite{CoulombelWilliams2017Mach}. To simplify the equations and computations we assume that the coefficients are affine maps, that is, for $ j=1,\dots,d $, \begin{equation*} A_j(u)=A_j(0)+dA_j(0)\cdot u. \end{equation*} We make the following structural and classical assumption on the boundary. \begin{assumption}[noncharacteristic boundary]\label{hypothese bord non caract} The boundary is noncharacteristic, that is, the matrix $ A_d(0) $ is invertible. \end{assumption} To simplify the equations and the computations we will study here the case $ M=3 $, but there is no apparent obstacle to generalizing this analysis to any integer $ M\geq 4 $. For the same purpose we choose to work with the particular case of 3-dimensional vectors ($ N=3 $), since it is sufficient in this analysis to create instabilities. In this paper we study a geometric optics asymptotic expansion for system \eqref{eq systeme 1}, namely, we look for an approximate solution to \eqref{eq systeme 1} in the form of a formal series \begin{equation}\label{eq ansatz provisoire} u^{\epsilon,\app}(z)=\sum_{n\geq 1} \epsilon^n\, U_n\Big(z,\frac{\Phi(z)}{\epsilon}\Big), \end{equation} where the collection of phases $ \Phi $ will be made precise later. The approximate solution is expected to be of order $ O(\epsilon) $ because of the weakly nonlinear framework.
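To fix ideas on how system \eqref{eq systeme 1} determines the profiles, let us sketch the formal computation in the simplified case of a single scalar phase $ \varphi(z) $; the actual derivation below involves the whole collection of phases $ \Phi $ as well as the boundary conditions. Plugging the ansatz $ \sum_{n\geq 1}\epsilon^n\,U_n(z,\theta)_{|\theta=\varphi(z)/\epsilon} $ into the interior equation of \eqref{eq systeme 1} and using the affine structure of the coefficients $ A_j $, the collection of the two lowest powers of $ \epsilon $ formally yields
\begin{align*}
\epsilon^0:&\quad L\big(0,d\varphi(z)\big)\,\partial_{\theta}U_1=0,\\
\epsilon^1:&\quad L(0,\partial_z)\,U_1+L\big(0,d\varphi(z)\big)\,\partial_{\theta}U_2+\sum_{j=1}^{d}\big(dA_j(0)\cdot U_1\big)\,\partial_j\varphi\,\partial_{\theta}U_1=0,
\end{align*}
where we have set $ L\big(0,d\varphi\big):=\partial_t\varphi\,I+\sum_{j=1}^{d}\partial_j\varphi\,A_j(0) $. The first equation is a polarization condition on $ U_1 $, forcing $ d\varphi(z) $ to be a characteristic frequency for oscillating profiles, and the second one couples $ U_1 $ with the corrector $ U_2 $; the quadratic term is the one responsible for the resonances studied below.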
The aim is to show that, with a well chosen configuration of frequencies, there is an instability in this asymptotic expansion, in the sense that, despite its small amplitude of order $ O\big(\epsilon^3\big) $, the perturbation $ \epsilon^3\,h^{\epsilon} $ interferes at the leading order, i.e. in the construction of the leading profile $ U_1 $. In addition to this instability, we will study well-posedness for a simplified model associated with the equations for the profiles, and the possibility for such a frequency configuration to occur in the case of Euler equations in space dimension 3. We start by making a series of structural assumptions on system \eqref{eq systeme 1} and detailing the configuration of frequencies considered here. \bigskip The following definition introduces the notion of characteristic frequency. \begin{definition} For $\alpha=(\tau,\eta,\xi)\in\R\times\R^{d-1}\times\R$, the symbol $L(0,\alpha)$ associated with $L(0,\partial_z)$ is defined as \begin{equation*} L(0,\alpha):=\tau I+\sum_{i=1}^{d-1}\eta_iA_i(0)+\xi A_d(0). \end{equation*} Then we define its characteristic polynomial as $ p(\tau,\eta,\xi):=\det L\big(0,(\tau,\eta,\xi)\big) $. We say that $\alpha\in\R^{1+d}$ is a \emph{characteristic frequency} if it is a root of the polynomial $p$. \end{definition} The following assumption, called \emph{strict hyperbolicity} (see \cite[Definition 1.2]{BenzoniSerre2007Multi}), is made. Assumptions of hyperbolicity, whether strict or with constant multiplicity, are standard, see e.g. \cite{Williams1996Boundary,CoulombelGuesWilliams2011Resonant,JolyMetivierRauch1995Coherent}, and related to the structure of the problem. The assumption of hyperbolicity with constant multiplicity, which is more general than Assumption \ref{hypothese stricte hyp} of strict hyperbolicity below, is sometimes preferred, as in \cite{CoulombelGuesWilliams2011Resonant,JolyMetivierRauch1995Coherent}. We choose here to work with strict hyperbolicity for technical reasons.
Recall that we placed ourselves in the particular case where the size of the system is $ N=3 $. \begin{assumption}[strict hyperbolicity]\label{hypothese stricte hyp} There exist real functions $ \tau_1<\tau_2<\tau_3 $, analytic with respect to $ (\eta,\xi) $ in $\R^d\setminus\{0\}$, such that for all $(\eta,\xi)\in\R^d\setminus\ensemble{0}$ and for all $\tau\in\R$, the following factorization holds \[p(\tau,\eta,\xi)=\det\Big(\tau I+\sum_{i=1}^{d-1}\eta_iA_i(0)+\xi A_d(0)\Big)=\prod_{k=1}^3\big(\tau-\tau_k(\eta,\xi)\big),\] where the eigenvalues $-\tau_k(\eta,\xi)$ of the matrix $A(\eta,\xi):=\sum_{i=1}^{d-1}\eta_iA_i(0)+\xi A_d(0)$ are therefore simple. \end{assumption} \subsection{Weak Kreiss-Lopatinskii condition} We define the following spaces of frequencies \begin{align*} \Xi&:=\{\zeta=(\sigma=\tau-i\gamma,\eta)\in(\C\times\R^{d-1})\backslash\{0\} \mid \gamma\geq 0\},\\ \Sigma&:=\ensemble{\zeta\in\Xi\mid \tau^2+\gamma^2+|\eta|^2=1},\\ \Xi_0&:=\{\zeta\in\Xi \mid \gamma=0\},\\ \Sigma_0&:=\Xi_0\cap\Sigma. \end{align*} We also define the matrix valued symbol obtained when applying the Laplace-Fourier transform to the operator $L(0,\partial_z)$. For all $\zeta=(\sigma,\eta)\in\Xi$, let \[\mathcal{A}(\zeta):=-i\,A_d(0)^{-1}\Big(\sigma I+\sum_{j=1}^{d-1}\eta_j\, A_j(0)\Big).\] The Hersh lemma (\cite{Hersh1963Mixed}) ensures that for $ \zeta $ in $ \Xi\backslash\Xi_0 $, the matrix $\mathcal{A}(\zeta)\in\mathcal{M}_3(\C) $ has no eigenvalue of zero real part, and that the stable subspace associated with the eigenvalues of negative real part, denoted by $E_-(\zeta)$, is of constant dimension, denoted by $p$. Furthermore, the integer $ p $ is equal to the number of positive eigenvalues of the matrix $ A_d(0) $. We denote by $E_+(\zeta)$ the unstable subspace of $\mathcal{A}(\zeta)$ associated with the eigenvalues of positive real part, which is of dimension $3-p$.
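As an elementary illustration of the count of $ p $ (a simplified decoupled example, which is not meant to satisfy all the assumptions of this paper), assume that the matrices $ A_i(0)=\diag\big(b_i^1,b_i^2,b_i^3\big) $, $ i=1,\dots,d-1 $, and $ A_d(0)=\diag(a_1,a_2,a_3) $ are all diagonal, with $ a_1,a_2>0>a_3 $. Then $ \mathcal{A}(\zeta) $ is diagonal with entries $ -i\,\big(\sigma+\eta\cdot b^k\big)/a_k $, $ k=1,2,3 $, where $ b^k:=(b_1^k,\dots,b_{d-1}^k) $, so that for $ \gamma>0 $ the real part of the $ k $-th entry is
\begin{equation*}
\Re\Big(\frac{-i\,(\tau-i\gamma+\eta\cdot b^k)}{a_k}\Big)=-\frac{\gamma}{a_k},
\end{equation*}
which is negative if and only if $ a_k>0 $. Hence $ E_-(\zeta) $ is spanned by the first two vectors of the canonical basis of $ \C^3 $, and $ p=2 $ is indeed the number of positive eigenvalues of $ A_d(0) $.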
In \cite{Kreiss1970Initial} (see also \cite{ChazarainPiriou1982Introduction} and \cite{BenzoniSerre2007Multi}) it is shown that the stable and unstable subspaces $E_{\pm}$ extend continuously to the whole space $\Xi$ in the strictly hyperbolic case (Assumption \ref{hypothese stricte hyp}). We still denote by $ E_{\pm} $ the extensions to $ \Xi $. The \emph{hyperbolic region}, denoted by $ \mathcal{H} $, is defined as the set of frequencies $ \zeta $ such that the matrix $ \mathcal{A}(\zeta) $ has only purely imaginary eigenvalues. The following assumption is central to the structure of the problem: it is the one which allows amplification on the boundary, and thus instability. \begin{assumption}[weak Kreiss-Lopatinskii condition]\label{hypothese weak lopatinskii} \begin{itemize}[label=$\bullet$,leftmargin=20pt] \item For all $ \zeta\in\Xi\setminus\Xi_0 $, $ \ker B\cap E_-(\zeta)=\ensemble{0} $. \item The set $ \Upsilon:=\ensemble{\zeta\in\Sigma_0 \,\middle|\, \ker B\cap E_-(\zeta)\neq \ensemble{0}} $ is nonempty and included in the hyperbolic region $ \mathcal{H} $. \item There exist a neighborhood $ \mathcal{V} $ of $ \Upsilon $ in $ \Sigma $, a real valued $ \mathcal{C}^{\infty} $ function $ \kappa $ defined on $ \mathcal{V} $, a basis $ E_1(\zeta),\dots,E_p(\zeta) $ of $ E_-(\zeta) $ and a matrix $ P(\zeta)\in\GL_p(\C) $ which are of class $ \mathcal{C}^{\infty} $ with respect to $ \zeta\in\mathcal{V} $ such that, for all $ \zeta $ in $ \mathcal{V} $, \begin{equation*} B\big(E_1(\zeta)\cdots E_p(\zeta)\big)=P(\zeta)\,\diag\big(\gamma+i\kappa(\zeta),1,\dots,1\big). \end{equation*} \end{itemize} \end{assumption} \begin{remark} The first point of Assumption \ref{hypothese weak lopatinskii}, requiring that $ \ker B\cap E_-(\zeta)=\ensemble{0} $ for all $ \zeta\in\Xi\setminus\Xi_0 $, implies in particular that $ \tilde{p} $, the rank of $ B $, equals $ p $, the dimension of $ E_-(\zeta) $. These two equal integers will be denoted by $ p $ in the following.
Assumption \ref{hypothese ensemble frequences} below furthermore sets the integer $ p=\tilde{p} $ to be equal to 2. \end{remark} The so-called \emph{Kreiss-Lopatinskii condition} is the first point of Assumption \ref{hypothese weak lopatinskii}, which holds on $ \Xi\setminus \Xi_0 $, and the next two points detail how this condition breaks on the boundary $ \Xi_0 $ of $ \Xi $ (for the \emph{uniform Kreiss-Lopatinskii condition} to hold, the equality $ \ker B \cap E_-(\zeta)=\ensemble{0} $ is assumed to be satisfied everywhere in $ \Xi $, see \cite{Kreiss1970Initial}). The second point asserts that the Kreiss-Lopatinskii condition breaks only in the hyperbolic region $ \mathcal{H} $, and the third one ensures that when it breaks, the space $ \ker B \cap E_-(\zeta) $ is of dimension $ 1 $, and that the failure of injectivity of $ B $ on $ E_-(\zeta) $ is parameterized by the $ \mathcal{C}^{\infty} $ function $ \kappa $. In particular, $ \kappa $ must be zero on $ \Upsilon $, and nonzero on $ \Sigma_0\setminus\Upsilon $. Together with Assumptions \ref{hypothese bord non caract} and \ref{hypothese stricte hyp}, Assumption \ref{hypothese weak lopatinskii} ensures that for all $ \epsilon>0 $, system \eqref{eq systeme 1} is weakly well-posed locally in time (with an existence time depending on $ \epsilon $). A proof of a similar result, for characteristic free boundary problems, can be found in \cite{CoulombelSecchi2008Vortex}. Indeed, the three Assumptions \ref{hypothese bord non caract}, \ref{hypothese stricte hyp} and \ref{hypothese weak lopatinskii} are stable under small perturbations around the equilibrium, see \cite[Section 8.3]{BenzoniSerre2007Multi}. \subsection{Oscillations} The notion of incoming, outgoing and glancing frequencies is now introduced. \begin{definition}\label{def sortant rentrant alpha X alpha} Let $\alpha=(\tau,\eta,\xi)\in\R^{d+1}\backslash\ensemble{0}$ be a characteristic frequency, and let $ k $ be the integer between $ 1 $ and $ 3 $ such that $\tau=\tau_k(\eta,\xi)$.
The group velocity $ \mathbf{v}_{\alpha}\in\R^d $ associated with $ \alpha $ is defined as \begin{equation*} \mathbf{v}_{\alpha}:=\nabla_{\eta,\xi}\,\tau_k(\eta,\xi). \end{equation*} We shall say that $\alpha$ is glancing (resp. incoming, outgoing) if $\partial_{\xi}\tau_k(\eta,\xi)$ is zero (resp. negative, positive). Then the vector field $X_{\alpha}$ associated with $\alpha$ is defined as \begin{equation}\label{eq champ vecteur X_alpha} X_{\alpha}:=\frac{-1}{\partial_{\xi}\tau_k(\eta,\xi)}\Big(\partial_t-\mathbf{v}_{\alpha}\cdot\nabla_{x}\Big)=\frac{-1}{\partial_{\xi}\tau_k(\eta,\xi)}\Big(\partial_t-\nabla_{\eta}\tau_k(\eta,\xi)\cdot\nabla_{y}-\partial_{\xi}\tau_k(\eta,\xi)\,\partial_{x_d}\Big). \end{equation} \end{definition} The Lax lemma (see Lemma \ref{lemme Lax} below) ensures that these constant-coefficient scalar transport operators $ X_{\alpha} $ appear naturally in the equations satisfied by the profiles arising in weakly nonlinear asymptotic expansions (see \cite{Rauch2012Hyperbolic}). We now describe a decomposition of the stable subspace $ E_-(\zeta) $ for $ \zeta\in\Xi_0 $, which uses strict hyperbolicity (Assumption \ref{hypothese stricte hyp}). \begin{proposition}[{\cite{Williams1996Boundary}, Proposition 3.4}]\label{prop decomp E_-} Consider $ \zeta=(\tau,\eta)\in\Xi_0 $. We denote by $ i\,\xi_j(\zeta) $ for $ j=1,\dots,\mathcal{M}(\zeta) $ the distinct complex eigenvalues of the matrix $ \mathcal{A}(\zeta) $, and if $ \xi_j(\zeta) $ is real, we shall denote by $ \alpha_j(\zeta):=(\tau,\eta,\xi_j(\zeta)) $ the associated real characteristic frequency. If $ \xi_j(\zeta) $ is real, we also denote by $ k_j $ the integer between $ 1 $ and $ 3 $ such that $\tau=\tau_{k_j}(\eta,\xi_j(\zeta))$.
Then the set $\ensemble{1,2,\dots,\mathcal{M}(\zeta)}$ decomposes as the disjoint union \begin{equation}\label{eq union disjointe M(zeta)} \ensemble{1,2,\dots,\mathcal{M}(\zeta)}=\mathcal{G}(\zeta)\cup\mathcal{I}(\zeta)\cup\P(\zeta)\cup\mathcal{O}(\zeta)\cup\mathcal{N}(\zeta), \end{equation} where the sets $\mathcal{G}(\zeta)$, $\mathcal{I}(\zeta)$, $\P(\zeta)$, $\mathcal{O}(\zeta)$ and $\mathcal{N}(\zeta)$ correspond to indexes $j$ such that respectively $\alpha_j(\zeta)$ is glancing, $\alpha_j(\zeta)$ is incoming, $\Im(\xi_j(\zeta))$ is positive, $\alpha_j(\zeta)$ is outgoing and $\Im(\xi_j(\zeta))$ is negative. Then the following decomposition of $E_-(\zeta)$ holds \begin{equation}\label{eq decomp E_-(zeta)} E_-(\zeta)=\bigoplus_{j\in\mathcal{G}(\zeta)}E^j_-(\zeta)\oplus \bigoplus_{j\in\mathcal{I}(\zeta)}E^j_-(\zeta) \oplus \bigoplus_{j\in\P(\zeta)}E^j_-(\zeta), \end{equation} where for each index $j$, the subspace $E_-^j(\zeta)$ is precisely described as follows. \begin{enumerate}[label=\roman*)] \item If $j\in \P(\zeta)$, the space $E^j_-(\zeta)$ is the generalized eigenspace of $\mathcal{A}(\zeta)$ associated with the eigenvalue $i\,\xi_j(\zeta)$. \item If $j\in\mathcal{I}(\zeta)$, we have $E^j_-(\zeta)=\ker L\big(0,\alpha_j(\zeta)\big)$, which is of dimension 1. \item If $j\in\mathcal{G}(\zeta)$, we denote by $n_j$ the algebraic multiplicity of the imaginary eigenvalue $i\xi_j(\zeta)$. For small positive $ \gamma $, the multiple eigenvalue $i\,\xi_j(\zeta)$ splits into $n_j$ simple eigenvalues, denoted by $i\,\xi_j^k(\tau-i\gamma,\eta)$, $k=1,\dots,n_j$, all of nonzero real part. We denote by $\mu_j$ the number (independent of $\gamma>0$) of the eigenvalues $i\,\xi_j^k(\tau-i\gamma,\eta)$ of negative real part. Then $E_-^j(\zeta)$ is of dimension $ \mu_j $ and is generated by the vectors $w$ satisfying $[\mathcal{A}(\zeta)-i\xi_j(\zeta)]^{\mu_j}w=0$.
Furthermore, if $n_j$ is even, $\mu_j=n_j/2$ and if $n_j$ is odd, $\mu_j$ is equal to $(n_j-1)/2$ or $(n_j+1)/2$. \end{enumerate} Likewise, the unstable subspace $E_+(\zeta)$ decomposes as \begin{equation}\label{eq decomp E_+(zeta)} E_+(\zeta)=\bigoplus_{j\in\mathcal{G}(\zeta)}E^j_+(\zeta)\oplus \bigoplus_{j\in\mathcal{O}(\zeta)}E^j_+(\zeta) \oplus \bigoplus_{j\in \mathcal{N}(\zeta)}E^j_+(\zeta), \end{equation} with a similar description of the subspaces $E_+^j(\zeta)$. In particular, if the set $\mathcal{G}(\zeta)$ is empty, then \begin{equation*} \C^3=E_-(\zeta)\oplus E_+(\zeta). \end{equation*} \end{proposition} For $ \zeta\in\Xi_0 $, we denote by $ \mathcal{C}(\zeta) $ the set of indices such that $ \alpha_j(\zeta) $ is real characteristic, that is \begin{equation*} \mathcal{C}(\zeta):=\mathcal{I}(\zeta)\cup\mathcal{O}(\zeta)\cup \mathcal{G}(\zeta). \end{equation*} \begin{definition} A frequency $ \zeta $ in $ \Xi_0 $ is said to be \emph{glancing} if there exists $ j\in\ensemble{1,\dots,\mathcal{M}(\zeta)} $ such that $ \alpha_j(\zeta) $ is glancing, i.e. if $ \mathcal{G}(\zeta) $ is nonempty, \emph{hyperbolic} if $ \mathcal{A}(\zeta) $ has only purely imaginary eigenvalues, that is if $ \mathcal{P}(\zeta)\cup\mathcal{N}(\zeta) $ is empty, and \emph{mixed} if $ \mathcal{P}(\zeta)\cup\mathcal{N}(\zeta) $ is nonempty. We shall denote by $ \mathcal{G} $ (resp. $ \mathcal{H} $, $ \mathcal{EH} $) the set of glancing (resp. hyperbolic, mixed) frequencies. \end{definition} \begin{definition} For $ \zeta\in\Xi_0 $ not glancing, according to Proposition \ref{prop decomp E_-}, we have the following decomposition of $ \C^3 $: \begin{equation*} \C^3= \bigoplus_{j\in\mathcal{O}(\zeta)}E^j_+(\zeta) \oplus \bigoplus_{j\in \mathcal{N}(\zeta)}E^j_+(\zeta)\oplus \bigoplus_{j\in\mathcal{I}(\zeta)}E^j_-(\zeta) \oplus \bigoplus_{j\in\P(\zeta)}E^j_-(\zeta).
\end{equation*} In that case we denote by $ \Pi^{e}(\zeta) $ the projection from $ \C^3 $ onto the stable elliptic component $ E^e_-(\zeta):=\oplus_{j\in\P(\zeta)}E^j_-(\zeta) $ according to this decomposition. \end{definition} The following result is adapted from \cite[Lemma 3.2]{CoulombelGues2010Geometric} to the case of mixed frequencies. \begin{lemma}\label{lemme def Pj} For all $ \zeta\in\Xi_0 $ nonglancing, the following decompositions hold \begin{subequations} \begin{align} \C^3&=\bigoplus_{j\in \mathcal{C}(\zeta)}\ker L\big(0,\alpha_j(\zeta)\big)\oplus F_{\zeta}\label{eq decomp C N }\\ \C^3&=\bigoplus_{j\in \mathcal{C}(\zeta)}A_d(0)\,\ker L\big(0,\alpha_j(\zeta)\big)\oplus A_d(0)\,F_{\zeta},\label{eq decomp C N A d 0} \end{align} \end{subequations} where $ F_{\zeta} $ is the generalized eigenspace of $ \mathcal{A}(\zeta) $ associated with the eigenvalues of nonzero real part. Furthermore, if we denote by $ P_j(\zeta) $ and $ P_{F_{\zeta}} $ (resp. $ Q_j(\zeta) $ and $ Q_{F_{\zeta}} $) the projectors associated with the decomposition \eqref{eq decomp C N } (resp. \eqref{eq decomp C N A d 0}), then we have \begin{equation}\label{eq relation Qj} \Ima L\big(0,\alpha_j(\zeta)\big)=\ker Q_j(\zeta), \end{equation} for all $ j $. \end{lemma} In \cite{CoulombelGues2010Geometric}, the result is proven only for hyperbolic frequencies $ \zeta $, and the proof is slightly simpler, using directly the diagonalizability of the matrix $ \mathcal{A}(\zeta) $. Here the matrix is only block-diagonalizable, and we have to deal with eigenvalues of nonzero real part. \begin{proof} The two decompositions come from the block-diagonalizability of the matrix $ \mathcal{A}(\zeta) $, the fact that $ \zeta $ is not glancing and the invertibility of the matrix $ A_d(0) $.
Indeed, for any nonglancing frequency $ \zeta\in\Xi_0 $, there exists an invertible matrix $ T(\zeta) $ such that $ T(\zeta)\,\mathcal{A}(\zeta)\,T(\zeta)^{-1} $ is the block diagonal matrix \begin{equation*} T(\zeta)\,\mathcal{A}(\zeta)\,T(\zeta)^{-1}=\diag\big(i\xi_1(\zeta) ,\dots,i\xi_{m_{\zeta}}(\zeta) ,\mathcal{A}_{\pm}(\zeta)\big) \end{equation*} where the $ \xi_j(\zeta) $ are real scalars, $ m_{\zeta} $ denotes the number of purely imaginary eigenvalues of $ \mathcal{A}(\zeta) $, and the spectrum of the block $ \mathcal{A}_{\pm}(\zeta) $ is contained in $ \C\setminus i\R $. The proof consists of two main steps. First we construct a sequence of diagonalizable matrices converging to $ \mathcal{A}(\zeta) $, in order to be able to adapt the method used in \cite{CoulombelGues2010Geometric}. Then, using projectors defined for this sequence of matrices, analogous to $ P_j(\zeta) $ and $ Q_j(\zeta) $, we are able to prove relation \eqref{eq relation Qj}, using diagonalizability. \emph{Step 1.} We consider a sequence $ (\mathcal{A}^k_{\pm}(\zeta))_{k\geq 0} $ of diagonalizable matrices converging to $ \mathcal{A}_{\pm}(\zeta) $. For $ k\geq 0 $, we denote by $ \tilde{T}_k(\zeta) $ an invertible matrix such that \begin{equation*} \tilde{T}_k(\zeta)\,\mathcal{A}^k_{\pm}(\zeta)\,\tilde{T}_k(\zeta)^{-1}=\diag (i\lambda_1,\dots,i\lambda_{3-m_{\zeta}}). \end{equation*} We also denote by $ T_k(\zeta) $ the block diagonal matrix \begin{equation*} T_k(\zeta):=\diag(I_{m_{\zeta}},\tilde{T}_k(\zeta)), \end{equation*} and we finally define, for $ k\geq 0 $, the matrix $ \mathcal{A}^k(\zeta) $ as \begin{equation*} \mathcal{A}^k(\zeta):=T(\zeta)\,T_k(\zeta)\,\diag\big(i\xi_1(\zeta) ,\dots,i\xi_{m_{\zeta}}(\zeta) ,i\lambda_1,\dots,i\lambda_{3-m_{\zeta}}\big)\,T_k(\zeta)^{-1}\,T(\zeta)^{-1}. \end{equation*} Note that the sequence $ \big(\mathcal{A}^k(\zeta)\big)_{k\geq 0} $ is by definition a sequence of diagonalizable matrices which converges to $ \mathcal{A}(\zeta) $.
Using this diagonalizability we get the two following decompositions of $ \C^3 $, for $ k\geq 0 $: \begin{subequations} \begin{align} \C^3&=\bigoplus_{j=1}^{m_{\zeta}}\ker \big(\mathcal{A}^k(\zeta)-i\xi_j(\zeta) I\big)\oplus\bigoplus_{j=1}^{3-m_{\zeta}}\ker \big(\mathcal{A}^k(\zeta)-i\lambda_jI\big)\label{eq Lop decomp C 3 k}\\ &=\bigoplus_{j=1}^{m_{\zeta}}A_d(0)\ker \big(\mathcal{A}^k(\zeta)-i\xi_j(\zeta) I\big)\oplus\bigoplus_{j=1}^{3-m_{\zeta}}A_d(0)\ker \big(\mathcal{A}^k(\zeta)-i\lambda_jI\big).\label{eq Lop decomp C 3 k Ad} \end{align} \end{subequations} First we note that, by definition of the matrix $ \mathcal{A}^k(\zeta) $, the eigenspace $ \ker \big(\mathcal{A}^k(\zeta)-i\xi_j(\zeta) I\big) $ is equal to $ \ker L\big(0,\alpha_j(\zeta)\big) $ and that \begin{equation*} \bigoplus_{j=1}^{3-m_{\zeta}}\ker \big(\mathcal{A}^k(\zeta)-i\lambda_jI\big)=F_{\zeta}. \end{equation*} Thus we define the projectors $ P_{\pm}^{k,j}(\zeta) $ (resp. $ Q_{\pm}^{k,j}(\zeta) $) on $ \ker \big(\mathcal{A}^k(\zeta)-i\lambda_jI\big) $ (resp. $ A_d(0)\allowbreak\ker \big(\mathcal{A}^k(\zeta)-i\lambda_jI\big) $) associated with the decomposition \eqref{eq Lop decomp C 3 k} (resp. \eqref{eq Lop decomp C 3 k Ad}). According to the previous remark we then have \begin{subequations} \begin{align} I&=P_1(\zeta)+\cdots+P_{m_{\zeta}}(\zeta)+P_{\pm}^{k,1}(\zeta)+\cdots+P_{\pm}^{k,3-m_{\zeta}}(\zeta)\label{eq Lop decomp I P j}\\ &=Q_1(\zeta)+\cdots+Q_{m_{\zeta}}(\zeta)+Q_{\pm}^{k,1}(\zeta)+\cdots+Q_{\pm}^{k,3-m_{\zeta}}(\zeta). \end{align} \end{subequations} \emph{Step 2.} For $ j_0 $ between $ 1 $ and $ m_{\zeta} $, analogously to $ L\big(0,\alpha_{j_0}(\zeta)\big) $, we define \begin{equation*} L_k\big(0,\alpha_{j_0}(\zeta)\big):=iA_d(0)\big(\mathcal{A}^k(\zeta)-i\xi_{j_0}(\zeta)I\big).
\end{equation*} By definition, and since the relation \begin{equation*} L\big(0,\alpha_{j_0}(\zeta)\big)=iA_d(0)\big(\mathcal{A}(\zeta)-i\xi_{j_0}(\zeta)I\big) \end{equation*} is satisfied, the sequence $ \big(L_k\big(0,\alpha_{j_0}(\zeta)\big)\big)_{k\geq 0} $ converges to $ L\big(0,\alpha_{j_0}(\zeta)\big) $. We consider an element $ L_k\big(0,\alpha_{j_0}(\zeta)\big)\,X $ of $ \Ima L_k\big(0,\alpha_{j_0}(\zeta)\big) $, with $ X\in\C^3 $, and the aim is to prove that it belongs to $ \ker Q_{j_0}(\zeta) $. The latter is a closed space, so, since the sequence $ \big(L_k\big(0,\alpha_{j_0}(\zeta)\big)\,X\big)_{k\geq 0} $ converges to $ L\big(0,\alpha_{j_0}(\zeta)\big)\,X $, it will follow that $ \Ima L\big(0,\alpha_{j_0}(\zeta)\big)\subset \ker Q_{j_0}(\zeta) $, and the conclusion then follows from the equality of the dimensions of the two spaces. We have, by definition of the projectors $ P_j(\zeta) $ and $ P^{k,j}_{\pm}(\zeta) $ and because of the decomposition \eqref{eq Lop decomp I P j}, \begin{align*} L_k\big(0,\alpha_{j_0}(\zeta)\big)\,X&=iA_d(0)\big(\mathcal{A}^k(\zeta)-i\xi_{j_0}(\zeta)I\big)\Big\lbrace\sum_{j=1}^{m_{\zeta}}P_j(\zeta)\,X+\sum_{j=1}^{3-m_{\zeta}}P^{k,j}_{\pm}(\zeta)\,X\Big\rbrace\\ &=iA_d(0)\sum_{\substack{j=1\\j\neq j_0}}^{m_{\zeta}}\big(i\xi_j(\zeta)-i\xi_{j_0}(\zeta)\big)P_j(\zeta)\,X+iA_d(0)\sum_{j=1}^{3-m_{\zeta}}\big(i\lambda_j-i\xi_{j_0}(\zeta)\big)P^{k,j}_{\pm}(\zeta)\,X, \end{align*} and the last term belongs to \begin{equation*} \bigoplus_{\substack{j=1\\j\neq j_0}}^{m_{\zeta}}A_d(0)\ker \big(\mathcal{A}^k(\zeta)-i\xi_j(\zeta) I\big)\oplus\bigoplus_{j=1}^{3-m_{\zeta}}A_d(0)\ker \big(\mathcal{A}^k(\zeta)-i\lambda_jI\big)=\ker Q_{j_0}(\zeta), \end{equation*} concluding the proof. \end{proof} We now turn to the frequencies created on the boundary and then lifted inside the domain.
Recall that we considered a quasi-periodic boundary forcing term of frequencies $ \phi/\epsilon $ and $ \psi/\epsilon $, with $ \phi,\psi\in\R^{d}\privede{0} $. In the following we will make restrictive assumptions on $ \phi $ and $ \psi $ in order to obtain a particular frequency configuration, eventually creating an instability. By nonlinear interaction, the frequencies $ \phi $ and $ \psi $ on the boundary create the following lattice of frequencies on the boundary: \begin{equation*} \F_b:=\phi\,\Z\oplus\psi\,\Z. \end{equation*} To avoid the complications induced by the glancing modes, we assume that there is no glancing frequency in $ \F_{b}\privede{0} $. This is a common assumption, see \cite{CoulombelGues2010Geometric,CoulombelGuesWilliams2011Resonant}. \begin{assumption}\label{hypothese pas de glancing} We have \begin{equation*} \big(\F_{b}\privede{0}\big)\cap \mathcal{G}=\emptyset. \end{equation*} \end{assumption} To parameterize $ \F_b $ we introduce the following subset of $ \Z^2\privede{0} $: \begin{equation*} \mathcal{B}_{\Z^2}:=\ensemble{(n_1,n_2)\in\Z^2\privede{0}\,\middle|\,\begin{aligned} &n_1\wedge n_2=1,\\ &n_1>0 \ \ \mbox{or}\ \ n_1=0,n_2>0 \end{aligned}}, \end{equation*} the set of couples of coprime integers whose first nonzero coordinate is positive. Then, each frequency $ \zeta:=n_1\,\phi+n_2\,\psi $ of $ \F_b\privede{0} $ is parameterized in a unique way by $ \mathbf{n}_0:=(n^0_1,n^0_2)\in\B_{\Z^2} $ and $ \lambda\in\Z^* $ such that $ (n_1,n_2)=\lambda\,(n^0_1,n^0_2) $. In the following, we will freely alternate, without mentioning it, between the following two representations of a frequency $ \zeta\in\F_{b}\privede{0} $: $ \mathbf{n}=(n_1,n_2) $ in $ \Z^2\privede{0} $ such that $ \zeta=n_1\,\phi+n_2\,\psi $, and $ \mathbf{n}_0=(n^0_1,n^0_2) $ in $ \B_{\Z^2} $ and $ \lambda $ in $ \Z^* $ such that $ \mathbf{n}=\lambda\,\mathbf{n}_0 $.
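For instance (a purely illustrative choice of integers), the frequency $ \zeta=4\,\phi-6\,\psi $ of $ \F_b\privede{0} $ corresponds to $ \mathbf{n}=(4,-6) $, and is parameterized by
\begin{equation*}
\mathbf{n}_0=(2,-3)\in\B_{\Z^2} \quad\text{and}\quad \lambda=2, \qquad\text{since}\qquad (4,-6)=2\,(2,-3),
\end{equation*}
the coordinates of $ (2,-3) $ being coprime with first coordinate positive.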
Because of the hyperbolicity of the system, boundary frequencies $ \zeta $ of $ \F_{b} $ are lifted into frequencies $ (\zeta,\xi) $ inside the domain, which must be characteristic frequencies due to polarization conditions. Furthermore, frequencies $ (\zeta,\xi) $ with $ \Im \xi <0 $ are excluded to obtain bounded solutions, and we have already discarded glancing frequencies by Assumption \ref{hypothese pas de glancing}. Therefore, the set $ \F $ of frequencies inside the domain is given by \begin{equation*} \F:=\ensemble{0}\cup\ensemble{\big(\zeta,\xi_j(\zeta)\big) \mid\zeta\in \F_b\privede{0},j\in \mathcal{C}(\zeta)\cup\mathcal{P}(\zeta)}. \end{equation*} The following assumption details the configuration of frequencies which is assumed to hold in order to create an instability. It is a generalization to our case of \cite[Assumptions 1.7 and 1.9]{CoulombelWilliams2017Mach}, where the only frequency of the problem, $ \phi $, was assumed to be nonresonant, hyperbolic, and in $ \Upsilon $. In \cite{CoulombelWilliams2017Mach}, the authors explain that allowing the boundary frequency $ \phi $ to be resonant could lead to an over-determination of the system. Assumption \ref{hypothese ensemble frequences} below requires in particular that the frequencies $ \phi $ and $ \psi $ are nonresonant\footnote{In the sense that two frequencies lifted from $ \phi $ cannot resonate with each other, and the same for $ \psi $.}, hyperbolic, and in $ \Upsilon $. We additionally assume that two resonances hold between frequencies lifted from $ \phi $ and $ \psi $, which will eventually allow us to create an instability. \begin{assumption}\label{hypothese ensemble frequences} There exists a frequency $ \nu $ in $ \F_{b}\privede{0} $ defined by \begin{equation*} \lambda_{\phi}\,\phi+\lambda_{\psi}\,\psi+\nu=0 \end{equation*} with coprime integers $ \lambda_{\phi},\lambda_{\psi} $ such that $ (-\lambda_{\phi},-\lambda_{\psi}) $ is in $ \B_{\Z^2} $, and such that the following conditions hold.
\begin{enumerate}[label=\roman*.),leftmargin=30pt,itemsep=5pt] \item The frequencies $ \phi $, $ \psi $ and $ \nu $ are in the hyperbolic region $ \mathcal{H} $. \item The frequencies lifted from $ \phi,\psi,\nu $, denoted by $ \phi_j,\psi_j,\nu_j $, $ j=1,2,3 $, are such that $ \phi_j,\psi_j,\nu_j $ for $ j=1,3 $ are incoming frequencies and $ \phi_2,\psi_2,\nu_2 $ are outgoing frequencies. \item We have $ \F_b\cap\Upsilon=\ensemble{\phi,-\phi,\psi,-\psi} $ (so in particular we have $ \phi,\psi\in\Upsilon $ and $ \nu\in\Xi_0\setminus\Upsilon $). \item The following two resonances hold: \begin{subequations}\label{eq hyp res phi psi nu} \begin{align} \lambda_{\phi}\,\phi_1+\lambda_{\psi}\,\psi_1+\nu_2&=0\label{eq hyp res phi psi nu 1},\\ \lambda_{\phi}\,\phi_3+\lambda_{\psi}\,\psi_2+\nu_2&=0\label{eq hyp res phi psi nu 2}. \end{align} \end{subequations} \item There is no other resonance between frequencies inside the domain. More precisely, if there exists a resonance relation of the form \begin{equation*} \lambda_1\,\alpha_1+\lambda_2\,\alpha_2+\lambda_3\,\alpha_3=0, \end{equation*} with $ \lambda_1,\lambda_2,\lambda_3\in\Z^* $ and $ \alpha_1,\alpha_2,\alpha_3\in\F\privede{0} $ noncolinear, then there exists $ \lambda\in\Z^* $ such that, up to a renumbering, $ \lambda_1=\lambda\lambda_{\phi} $, $ \lambda_2=\lambda\lambda_{\psi} $, $ \lambda_3=\lambda $ and ($ \alpha_1=\phi_1 $, $ \alpha_2=\psi_1 $ and $ \alpha_3=\nu_2 $) or ($ \alpha_1=\phi_3 $, $ \alpha_2=\psi_2 $ and $ \alpha_3=\nu_2 $). \end{enumerate} \end{assumption} The frequencies lifted inside the domain from $ \phi $, $ \psi $ and $ \nu $ are depicted in Figure \ref{figure phi psi nu}. There is an amplification in the lifting of $ \phi $ and $ \psi $ because these frequencies are in the region $ \Upsilon $ where the Kreiss-Lopatinskii condition is not satisfied, in contrast to $ \nu $.
Amplification arises since, for a frequency in $ \Upsilon $, small amplitudes ascend toward larger ones, namely, a boundary source term of order $ O\big(\epsilon^{n+1}\big) $ occurs in the equations for the profile of order $ O\big(\epsilon^{n}\big) $. Therefore, when amplification occurs, interior profiles lifted from boundary terms of order $ O\big(\epsilon^{n+1}\big) $ are one order larger, namely $ O\big(\epsilon^{n}\big) $. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.8] \begin{scope} \fill[black!15] (-0.3,0.3) -- (4.3,0.3) -- (2.8,-1.2) -- (-2,-1.2)--cycle; \draw[line width = 1pt,->,>=latex] (-0.5,0) -- (4,0); \draw[line width = 1pt,dashed] (0,-0.5) -- (0,0); \draw[line width = 1pt,->,>=latex] (0,0) -- (0,3); \draw[line width = 1pt,->,>=latex] (0.3,0.3) -- (-1.2,-1.2); \draw[altred,fill=altred!30] (0.8,-0.6) ellipse (0.8cm and 0.3cm); \draw[above right] (0,3) node{\small $ x_d $}; \draw[below left] (-1.2,-1.2) node{\small $ y $}; \draw[below right] (4,0) node{\small $ t $}; \draw[altred,->,line width = 1pt,>=latex] (-0.6,2.2) -- (0.5,-0.7); \draw[altred] (-0.5,1) node{$ \phi_2 $}; \draw[altred,->,line width = 1pt,>=latex] (0.9,-0.7) -- (1.5,2.3); \draw[altred] (1,1.3) node{$ \phi_1 $}; \draw[altred,->,line width = 1pt,>=latex] (1.2,-0.7) -- (2.5,2.3); \draw[altred] (2.2,0.8) node{$ \phi_3 $}; \node[draw,altred,rounded corners=2pt,fill=altred!10] (phi) at (3.5,3) {$ \phi $}; \draw (3.3,1.5) node{\small $ \Omega_T $}; \draw (2.7,-0.8) node {\small $ \omega_T $}; \end{scope} \begin{scope}[shift={(6,0)}] \fill[black!15] (-0.3,0.3) -- (4.3,0.3) -- (2.8,-1.2) -- (-2,-1.2)--cycle; \draw[line width = 1pt,->,>=latex] (-0.5,0) -- (4,0); \draw[line width = 1pt,dashed] (0,-0.5) -- (0,0); \draw[line width = 1pt,->,>=latex] (0,0) -- (0,3); \draw[line width = 1pt,->,>=latex] (0.3,0.3) -- (-1.2,-1.2); \draw[altorange,fill=altorange!30] (1,-0.6) ellipse (0.8cm and 0.3cm); \draw[above right] (0,3) node{\small $ x_d $}; \draw[below left] (-1.2,-1.2) node{\small
$ y $}; \draw[below right] (4,0) node{\small $ t $}; \draw[altorange,->,line width = 1pt,>=latex] (-0.8,2.2) -- (0.7,-0.7); \draw[altorange] (-0.6,1) node{$ \psi_2 $}; \draw[altorange,->,line width = 1pt,>=latex] (1.1,-0.7) -- (1.6,2.3); \draw[altorange] (1.1,1.3) node{$ \psi_1 $}; \draw[altorange,->,line width = 1pt,>=latex] (1.4,-0.7) -- (2.8,2.3); \draw[altorange] (2.5,0.8) node{$ \psi_3 $}; \node[draw,altorange,rounded corners=2pt,fill=altorange!10] (phi) at (3.5,3) {$ \psi $}; \draw (3.3,1.5) node{\small $ \Omega_T $}; \draw (2.7,-0.8) node {\small $ \omega_T $}; \end{scope} \begin{scope}[shift={(12,0)}] \fill[black!15] (-0.3,0.3) -- (4.3,0.3) -- (2.8,-1.2) -- (-2,-1.2)--cycle; \draw[line width = 1pt,->,>=latex] (-0.5,0) -- (4,0); \draw[line width = 1pt,dashed] (0,-0.5) -- (0,0); \draw[line width = 1pt,->,>=latex] (0,0) -- (0,3); \draw[line width = 1pt,->,>=latex] (0.3,0.3) -- (-1.2,-1.2); \draw[above right] (0,3) node{\small $ x_d $}; \draw[below left] (-1.2,-1.2) node{\small $ y $}; \draw[below right] (4,0) node{\small $ t $}; \draw[altgreen,->,line width = 1pt,>=latex] (-0.8,2.2) -- (0.5,-0.7); \draw[altgreen] (-0.7,1) node{$ \nu_2 $}; \draw[altgreen,->,line width = 1pt,>=latex] (0.9,-0.7) -- (1.7,2.3); \draw[altgreen] (1.1,1.3) node{$ \nu_1 $}; \draw[altgreen,->,line width = 1pt,>=latex] (1.2,-0.7) -- (3,2.3); \draw[altgreen] (2.6,0.8) node{$ \nu_3 $}; \node[draw,altgreen,rounded corners=2pt,fill=altgreen!10] (phi) at (3.5,3) {$ \nu $}; \draw (3.3,1.5) node{\small $ \Omega_T $}; \draw (2.7,-0.8) node {\small $ \omega_T $}; \end{scope} \begin{scope}[shift={(5,-4)}] \draw[altred,fill=altred!30] (0,0) ellipse (0.6cm and 0.3cm); \draw[right] (1.2,0) node{\footnotesize Amplification}; \draw (1.5,1) node{\small \textbf{Legend}}; \draw[rounded corners,thick,black!50] (-1.3,1.5) rectangle (4.2, -0.6) {}; \end{scope} \end{tikzpicture} \caption{Frequencies lifted from $ \phi $, $ \psi $ and $ \nu $.} \label{figure phi psi nu} \end{figure} \begin{remark}\label{remark
numbering phi psi} \begin{itemize}[leftmargin=20pt] \item Point i.) of Assumption \ref{hypothese ensemble frequences} asserts that each of the frequencies $ \phi $, $ \psi $ and $ \nu $ is lifted into three real characteristic frequencies inside the domain. \item Point ii.) of Assumption \ref{hypothese ensemble frequences} implies in particular that the integer $ p $, which is the rank of $ B $ and the dimension of the stable subspace $ E_-(\zeta) $ for $ \zeta $ in $ \Xi $, is equal to 2. \item In relations \eqref{eq hyp res phi psi nu}, the numbering of the frequencies occurring in the resonances is arbitrary. For the first resonance \eqref{eq hyp res phi psi nu 1}, either of the two incoming frequencies lifted from $ \phi $ and from $ \psi $ can be chosen; this choice sets the numbering of the frequencies lifted from $ \phi $ and $ \psi $. Next, for the second resonance \eqref{eq hyp res phi psi nu 2}, there is no choice: the incoming frequency lifted from $ \phi $ which occurs in the resonance must be the one which did not occur in the first one, $ \phi_3 $ in our fixed numbering, since we already required that $ \lambda_{\phi}\,\phi_1+\nu_2=-\lambda_{\psi}\,\psi_1 $, and $ \psi_2 $ is the only outgoing frequency associated with $ \psi $. \item We choose a numbering of $ \alpha_j(\zeta) $ for $ \zeta=\phi,\psi,\nu $ such that, for any $ j=1,2,3 $, we have \begin{equation*} \alpha_j(\zeta)=\zeta_j, \end{equation*} where the $ \zeta_j $ are the hyperbolic frequencies defined in Assumption \ref{hypothese ensemble frequences}. \item The condition $ (-\lambda_{\phi},-\lambda_{\psi}) \in \B_{\Z^2} $ is not restrictive and amounts only to permuting the notation for $ \phi $ and $ \psi $ or for $ \phi $ and $ -\phi $. It is made to simplify notation in what follows. \end{itemize} \end{remark} A useful notation is now introduced for the resonances.
\begin{definition} For $ \zeta\in\ensemble{\phi,\psi,\nu} $, and $ j=1,2,3 $, the set $ \mathcal{R}(\zeta,j) $ is defined as the set of quadruples $ (\zeta_1,\zeta_2,j_1,j_2) $ in $ \ensemble{\phi,\psi,\nu}^2\times\ensemble{1,2,3}^2 $ such that the following resonance holds \begin{equation*} \lambda_{\zeta}\,\alpha_j(\zeta)+\lambda_{\zeta_1}\,\alpha_{j_1}(\zeta_1)+\lambda_{\zeta_2}\,\alpha_{j_2}(\zeta_2)=0, \end{equation*} where we have denoted $ \lambda_{\nu}:=1 $. \end{definition} For example, we have, according to Assumption \ref{hypothese ensemble frequences}, \begin{align*} \mathcal{R}(\phi,1)&=\ensemble{(\psi,\nu,1,2),(\nu,\psi,2,1)},\quad \mathcal{R}(\phi,2)=\emptyset,\quad\mathcal{R}(\phi,3)=\ensemble{(\psi,\nu,2,2),(\nu,\psi,2,2)},\quad \text{and}\quad\\[5pt] \mathcal{R}(\nu,2)&=\ensemble{(\phi,\psi,1,1),(\phi,\psi,3,2),(\psi,\phi,1,1),(\psi,\phi,2,3)}. \end{align*} \bigskip We now discuss formally how the configuration of the frequencies $ \phi_j $, $ \psi_j $ and $ \nu_j $, $ j=1,2,3 $, and the two resonances \eqref{eq hyp res phi psi nu} are expected to create an instability, as represented in Figure \ref{figure configuration resonance}. First, the boundary profiles $ \epsilon^2\,g^{\epsilon} $ and $ \epsilon^3\,h^{\epsilon} $ of frequencies $ \phi $ and $ \psi $ in \eqref{eq systeme 1} create, because of the amplification due to the breaking of the Kreiss-Lopatinskii condition for those frequencies, incoming interior profiles of frequencies $ \phi_1 $, $ \phi_3 $, and $ \psi_1 $, $ \psi_3 $ of orders respectively $ O\big(\epsilon\big) $ and $ O\big(\epsilon^2\big) $.
Then, because of the resonance relation \eqref{eq hyp res phi psi nu 1}, the profiles associated with $ \phi_1 $ and $ \psi_1 $ resonate to create a profile of outgoing frequency $ \nu_2 $ and of order $ O\big(\epsilon^2\big) $\footnote{One of the quadratic terms in the equations has a factor $ 1/\epsilon $ in front of it, because the product $ A_i(u^{\epsilon})\,\partial_iu^{\epsilon} $ involves a derivative, which counts as $ 1/\epsilon $ for wave packets oscillating at frequencies of order $ 1/\epsilon $.}. This profile interacts, through resonance relation \eqref{eq hyp res phi psi nu 2}, with the one of frequency $ \phi_3 $ and of order $ O\big(\epsilon\big) $, which is lifted from the boundary forcing term $ \epsilon^2\,g^{\epsilon} $. This resonance leads to a profile of frequency $ \psi_2 $ and amplitude $ O\big(\epsilon^2\big) $. Since this profile is outgoing, a reflection, and thus an amplification, occurs: it creates a boundary profile of frequency $ \psi $ and order $ O\big(\epsilon^2\big) $, and instability follows. Indeed, this boundary profile creates, through amplification on the boundary for $ \psi $, a profile of frequency $ \psi_1 $ and order $ O\big(\epsilon\big) $, which is one order larger than the profile of frequency $ \psi_1 $ we started with. Iterating this process leads to an explosion.
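The order bookkeeping of this mechanism can be summarized by the following heuristic chain (a schematic sketch only, not a statement of the construction; each arrow denotes an amplification at the boundary or a resonant interaction in the interior):
\begin{equation*}
\underbrace{\epsilon^{2}\,g^{\epsilon}}_{\text{boundary, freq. }\phi}
\longrightarrow
\underbrace{\epsilon\,\phi_{1},\ \epsilon\,\phi_{3}}_{\text{amplification}}
\longrightarrow
\underbrace{\epsilon^{2}\,\nu_{2}}_{\eqref{eq hyp res phi psi nu 1}\text{, with }\epsilon^{2}\,\psi_{1}}
\longrightarrow
\underbrace{\epsilon^{2}\,\psi_{2}}_{\eqref{eq hyp res phi psi nu 2}\text{, with }\epsilon\,\phi_{3}}
\longrightarrow
\underbrace{\epsilon\,\psi_{1}}_{\text{amplification}},
\end{equation*}
so that each iteration of the loop raises the profiles of frequencies lifted from $ \psi $ by one order in $ \epsilon $.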
\begin{figure}[h] \centering \begin{tikzpicture} \tikzstyle{etiquette}=[midway,fill=black!10] \tikzstyle{spring}=[thick,decorate,decoration={snake,pre length=0.0cm,post length=0.2cm,segment length=6}] \begin{scope}[scale=1.1] \fill[black!15] (-1.2,0.3) -- (10.3,0.3) -- (8.8,-2.2) -- (-3.7,-2.2)--cycle; \draw[line width = 1pt,->,>=latex] (-1.5,0) -- (10,0); \draw[line width = 1pt,dashed] (-1,-0.5) -- (-1,0); \draw[line width = 1pt,->,>=latex] (-1,0) -- (-1,3.5); \draw[line width = 1pt,->,>=latex] (-0.7,0.3) -- (-3.2,-2.2); \draw[altblue,dashed,fill=altblue!10] (1.6,2.4) circle (0.5cm); \node[altblue] (a) at (1.75,2.5) {\small $ \epsilon^{-1} $}; \draw[altblue,dashed,fill=altblue!10] (3.6,1) circle (0.5cm); \node[altblue] (a) at (3.7,1.1) {\small $ \epsilon^{-1} $}; \draw[altorange,fill=altorange!30] (7,-0.9) ellipse (0.6cm and 0.3cm); \draw[altred,fill=altred!30] (0,-0.6) ellipse (0.4cm and 0.25cm); \draw[altorange,fill=altorange!30] (0.7,-1) ellipse (0.4cm and 0.25cm); \draw[altred,fill=altred!30] (2.2,-1.3) ellipse (0.4cm and 0.25cm); \draw[right] (-1,3.5) node{\small $ x_d $}; \draw[below left] (-2.8,-2.1) node{\small $ y $}; \draw[below right] (10,0) node{\small $ t $}; \draw (6,2) node{\small $ \Omega_T $}; \draw (8.5,-1.2) node {\small $ \omega_T $}; \draw[spring,->,>=latex,altorange] (-1,-1.2) -- (0.7,-1) node[etiquette]{\small $ \epsilon^3 $}; \draw[altorange] (-0.8,-1.5) node{$ \psi $}; \draw[spring,->,>=latex,altorange] (4.9,-1.1) -- (6.9,-0.9) node[etiquette]{\small $ \epsilon^2 $}; \draw[altorange] (5.1,-1.4) node{$ \psi $}; \draw[spring,->,>=latex,altred] (1.3,-0.1) -- (-0,-0.6) node[etiquette]{\small $ \epsilon^2 $}; \draw[altred] (1.5,-0.2) node{$ \phi $}; \draw[spring,->,>=latex,altred] (3.5,-0.8) -- (2.3,-1.3) node[etiquette]{\small $ \epsilon^2 $}; \draw[altred] (3.5,-1.2) node{$ \phi $}; \draw[altred,->,line width = 1pt,>=latex] (-0.2,-0.7) -- (1.5,2.7) node[etiquette]{\small $ \epsilon^1 $}; \draw[altred] (0.7,1.8) node{$ \phi_1 $};
\draw[altorange,->,line width = 1pt,>=latex] (0.8,-1) -- (1.5,2.3) node[etiquette]{\small $ \epsilon^2 $}; \draw[altorange] (1.7,1.3) node{$ \psi_1 $}; \draw[altgreen,->,line width = 1pt,>=latex] (1.7,2.4) -- (3.5,1.1) node[etiquette]{\small $ \epsilon^2 $}; \draw[altgreen] (3.2,1.7) node{$ \nu_2 $}; \draw[altred,->,line width = 1pt,>=latex] (2.1,-1.3) -- (3.5,0.9) node[etiquette]{\small $ \epsilon^1 $}; \draw[altred] (2.8,0.5) node{$ \phi_3 $}; \draw[altorange,->,line width = 1pt,>=latex] (3.7,1) -- (4.8,-1.1) node[etiquette]{\small $ \epsilon^2 $}; \draw[altorange] (4.5,0.5) node{$ \psi_2 $}; \draw[altorange,->,line width = 1pt,>=latex] (7,-0.9) -- (7.7,2.3) node[etiquette]{\small $ \epsilon^1 $}; \draw[altorange] (7.1,1.3) node{$ \psi_1 $}; \draw[altorange,->,line width = 1pt,>=latex] (7.2,-0.9) -- (8.9,2.3) node[etiquette]{\small $ \epsilon^1 $}; \draw[altorange] (8.7,1.1) node{$ \psi_3 $}; \end{scope} \begin{scope}[shift={(-1.5,-4.3)},scale=0.9] \draw[altred,fill=altred!30] (0,0) ellipse (0.6cm and 0.2cm); \draw[right] (0.9,0) node{\footnotesize Amplification}; \draw[altblue,dashed,fill=altblue!10] (0,-0.8) circle (0.4cm); \draw[right] (0.9,-0.8) node{\footnotesize Resonance}; \draw[spring,altred,->,line width = 1pt,>=latex] (4,0)--(5.2,0); \draw[right] (5.3,0) node{\footnotesize Boundary profile}; \draw[altred,->,line width = 1pt,>=latex] (4,-0.8)--(5.2,-0.8); \draw[right] (5.3,-0.8) node{\footnotesize Interior profile}; \draw[altred,line width = 1pt] (8.6,0) -- (9.8,0) node[etiquette]{\small $ \epsilon^2 $}; \draw[right,align=left] (10,0) node{\footnotesize Order of profile's\\[-2pt]\footnotesize amplitude}; \draw (5.5,1) node{\small \textbf{Legend}}; \draw[rounded corners,thick,black!50] (-1.3,1.5) rectangle (13.3, -1.5) {}; \end{scope} \end{tikzpicture} \caption{Creation of instability through amplification.} \label{figure configuration resonance} \end{figure} \bigskip We now make a small divisors assumption, which is adapted from \cite[Assumption
1.9]{CoulombelWilliams2017Mach}. This assumption is needed only for frequencies for which the uniform Kreiss-Lopatinskii condition is not satisfied, so, in our case, for $ \phi $ and $ \psi $. Analogously to \cite[Assumption 1.9]{CoulombelWilliams2017Mach}, it requires a polynomial control of the determinant of the symbol associated with combinations of incoming frequencies, using the fact that frequencies lifted from $ \phi $ do not resonate among themselves, and similarly for $ \psi $. The formulation is simpler than that of \cite[Assumption 1.9]{CoulombelWilliams2017Mach} since in our case there are only two incoming frequencies, so the only possible combinations are $ \lambda_1\,\phi_1+\lambda_3\,\phi_3 $, with $ \lambda_1,\lambda_3\in\Z^* $, and similarly for $ \psi $. \begin{assumption}\label{hyp petits diviseurs} There exist a constant $ C>0 $ and a positive real number $ m_0 $ such that, for $ \zeta=\phi,\psi $ and for all $ \lambda_1,\lambda_3\in\Z^* $, \begin{equation*} \big|\det L\big(0,\lambda_1\,\zeta_1+\lambda_3\,\zeta_3\big)\big|\geq C \big|(\lambda_1,\lambda_3)\big|^{-m_0}. \end{equation*} \end{assumption} Finally, we define several vectors associated with the previously introduced eigenspaces. For $ \zeta $ in $ \F_b\privede{0} $ and $ j\in \mathcal{C}(\zeta) $, we denote by $ r_{\zeta,j} $ a unit column vector of the one-dimensional space $ \ker L\big(0,\alpha_j(\zeta)\big) $, and by $ \ell_{\zeta,j} $ a row vector such that \begin{equation}\label{eq relation l zeta j L alpha j} \ell_{\zeta,j}\,L\big(0,\alpha_j(\zeta)\big)=0 \end{equation} with the following normalization: for all $ \zeta $ in $ \F_{b}\privede{0} $ and for all $ j,j' $ in $ \mathcal{C}(\zeta) $, we have \begin{equation}\label{eq relation l zeta j normalisation} \ell_{\zeta,j'}\,A_d(0)\,r_{\zeta,j}=\delta^j_{j'}.
\end{equation} The projectors $ P_j(\zeta) $, $ Q_j(\zeta) $ (defined in Lemma \ref{lemme def Pj}) and the vectors $ r_{\zeta,j} $ and $ \ell_{\zeta,j} $ are chosen to be homogeneous of degree 0 with respect to $ \zeta $. Accordingly, we define the partial inverses $ R_{\zeta,j} $, which satisfy, for $ \zeta $ in $ \F_b\privede{0} $ represented by $ (\mathbf{n}_0,\lambda) $ in $ \B_{\Z^2}\times\Z^* $, \begin{equation}\label{eq relation R zeta j P} R_{\zeta,j}\,L\big(0,\alpha_j(\zeta)\big)=L\big(0,\alpha_j(\zeta)\big)\,R_{\zeta,j}=\lambda\,\big(I-P_{\zeta,j}\big). \end{equation} Consider $ \zeta\in\Upsilon $. Assumption \ref{hypothese weak lopatinskii} asserts that the space $ \ker B \cap E_-(\zeta) $ is one-dimensional, so we denote by $ e_{\zeta} $ a unit vector in this space. Now, since, according to the same assumption, $ \Upsilon $ is included in the hyperbolic region $ \mathcal{H} $, and because of Proposition \ref{prop decomp E_-}, we can decompose $ e_{\zeta} $ as \begin{equation}\label{eq decomp e zeta} e_{\zeta}=\sum_{j\in\mathcal{I}(\zeta)}e_{\zeta,j}, \end{equation} with $ e_{\zeta,j}\in\Span r_{\zeta,j} $ for $ j\in\mathcal{I}(\zeta) $. We also denote by $ b_{\zeta} $ a vector of $ \C^2 $ such that \begin{equation}\label{eq def b zeta} B\,E_-(\zeta)=\ensemble{X\in\C^2\,\middle|\, b_{\zeta}\cdot X=0}, \end{equation} that is, a nonzero vector of $ \ker \,^tB_{|E_-(\zeta)} $, which is of dimension 1. The notation $ b_{\zeta}\cdot X $ refers to the complex scalar product in $ \C^2 $. Using the vectors $ r_{\zeta,j} $ and $ \ell_{\zeta,j} $, we have the following lemma, analogous to that of \cite{Lax1957Asymptotic}. The proof of this particular result can be found in \cite{CoulombelGuesWilliams2011Resonant}, and is recalled here for the sake of clarity.
\begin{lemma}[{\cite[Lemma 2.11]{CoulombelGuesWilliams2011Resonant}}]\label{lemme Lax} For $ \zeta\in\F_{b}\privede{0} $ and $ j\in\mathcal{C}(\zeta) $, we have \begin{equation*} \ell_{\zeta,j}\,L(0,\partial_{z})\,r_{\zeta,j}=X_{\alpha_j(\zeta)}, \end{equation*} where $ X_{\alpha_j(\zeta)} $ is the vector field defined in Definition \ref{def sortant rentrant alpha X alpha}. \end{lemma} \begin{proof} Denote by $ k $ the integer between $ 1 $ and $ 3 $ such that, if $ \alpha_j(\zeta)=(\tau,\eta,\xi) $, then $ \tau=\tau_k(\eta,\xi) $. Since $ \zeta\in\F_b\privede{0} $, the frequency $ \zeta $ is not glancing, so, according to Definition \ref{def sortant rentrant alpha X alpha}, we have $ \partial_{\xi}\tau_k(\eta,\xi)\neq0 $. Therefore, according to the implicit function theorem, the function $ \zeta'\mapsto\xi_j(\zeta') $ is differentiable near $ \zeta $. Indeed, $ \xi_j $ is such that, if $ \zeta'=(\tau',\eta') $, \begin{equation}\label{eq inter1} \tau_k\big(\eta',\xi_j(\tau',\eta')\big)-\tau'=0. \end{equation} Therefore\footnote{Here we extend the definition of $ r_{\zeta,j} $ to any frequency $ \zeta $ in $ \Xi\setminus\mathcal{G} $.}, seen as a function of $ \zeta $, the vector $ r_{\zeta,j} $ is also differentiable with respect to $ \zeta $. Differentiating relation \eqref{eq inter1} moreover yields the following relations: \begin{equation}\label{eq inter2} \partial_{\tau}\xi_j(\tau,\eta)=\inv{\partial_{\xi}\tau_k(\eta,\xi)},\qquad \partial_{\eta_p}\xi_j(\tau,\eta)=\frac{-\partial_{\eta_p}\tau_k(\eta,\xi)}{\partial_{\xi}\tau_k(\eta,\xi)},\quad \forall p=1,\dots,d-1.
\end{equation} Now, differentiating the relation \begin{equation*} L\big(0,(\tau,\eta,\xi_j(\tau,\eta))\big)\,r_{\zeta,j}=0 \end{equation*} with respect to $ \tau $ and $ \eta_p $, $ p=1,\dots,d-1 $, and multiplying on the left by $ \ell_{\zeta,j} $, gives \begin{align*} \ell_{\zeta,j}\,r_{\zeta,j}+\partial_{\tau}\xi_j(\tau,\eta)\,\ell_{\zeta,j}\,A_d(0)\,r_{\zeta,j}&=0, \intertext{and, for $ p=1,\dots,d-1 $,} \ell_{\zeta,j}\,A_p(0)\,r_{\zeta,j}+\partial_{\eta_p}\xi_j(\tau,\eta)\,\ell_{\zeta,j}\,A_d(0)\,r_{\zeta,j}&=0. \end{align*} With relations \eqref{eq inter2}, the result follows. \end{proof} The following result can be seen as an analogue of the Lax lemma for the boundary. Indeed, it asserts that a certain operator appearing in the boundary equations is actually a linear transport operator with constant velocity. The result is due to \cite{CoulombelGues2010Geometric}, and its technical proof is not recalled here. \begin{lemma}[{\cite[Proposition 3.5]{CoulombelGues2010Geometric}}]\label{lemme Lax bord} Let $ \zeta=\phi,\psi $, and recall that $ \kappa $ is the scalar function of the weak Kreiss-Lopatinskii condition, see Assumption \ref{hypothese weak lopatinskii}. Then, there exists a nonzero real scalar $ \beta_{\zeta} $ such that \begin{equation*} b_{\zeta}\cdot B\,\Big(R_{\zeta,1}\,L(0,\partial_z)\,e_{\zeta,1}+R_{\zeta,3}\,L(0,\partial_z)\,e_{\zeta,3}\Big)=\beta_{\zeta}\left(\partial_{\tau}\kappa(\zeta)\,\partial_t+\sum_{j=1}^{d-1}\partial_{\eta_j}\kappa(\zeta)\,\partial_{x_j}\right). \end{equation*} Moreover, the coefficient $ \partial_{\tau}\kappa(\zeta) $ is equal to 1. \end{lemma} \begin{remark} In particular, the previous result ensures that the operator $ b_{\zeta}\cdot B\,\big(R_{\zeta,1}\,L(0,\partial_z)\,e_{\zeta,1}\linebreak+R_{\zeta,3}\,L(0,\partial_z)\,e_{\zeta,3}\big) $ is tangent to the boundary. \end{remark} \section{Derivation of the system} This section is devoted to the derivation of the general system studied in this article.
We start by detailing the ansatz we choose here, and by displaying the WKB cascade associated with system \eqref{eq systeme 1}. We then try to decouple this cascade for the profiles. \subsection{Ansatz and WKB cascade} The ansatz for each amplitude $ U_n $ of \eqref{eq ansatz provisoire} must allow us to consider both oscillating modes (associated with characteristic frequencies $ \alpha_j(\zeta) $ for $ \zeta\in\F_b\privede{0} $ and $ j\in\mathcal{C}(\zeta) $) and evanescent modes (associated with evanescent frequencies $ \alpha_j(\zeta) $ for $ \zeta\in\F_b\privede{0} $ and $ j\in\mathcal{P}(\zeta) $). For this purpose, we define the following spaces of profiles. We denote by $ \T $ the one-dimensional torus. \begin{definition}\label{def espaces profils} The space of evanescent profiles $ \P^{\ev}_{T} $ is defined as the set of functions $ U^{\ev} $ of $ \mathcal{C}_b\big(\R^+_{\chi_d},H^{\infty}(\Omega_T\times\T^2)\big) $ which converge to zero as $ \chi_d $ goes to infinity. The space of oscillating profiles $ \P^{\osc}_{T} $ is defined as the set of formal trigonometric functions in $ \chi_d $ with values in the Sobolev space $ H^{\infty}(\Omega_T\times\T^2) $, that is, formal series \begin{equation*} U^{\osc}(z,\theta_1,\theta_2,\chi_d)=\sum_{\xi\in\R}U^{\osc}_{\xi}(z,\theta_1,\theta_2)\,e^{i\,\xi\,\chi_d}, \end{equation*} with $ U^{\osc}_{\xi}\in H^{\infty}(\Omega_T\times\T^2) $ for $ \xi\in\R $. Finally, $ \P_{T} $ is defined as the direct sum \begin{equation*} \P_{T}:=\P^{\osc}_{T}\oplus\P_{T}^{\ev}.
\end{equation*} \end{definition} The ansatz is the following: we look for an approximate solution \begin{equation*} u^{\epsilon,\app}(z):=v^{\epsilon}\left( z,\frac{z'\cdot\phi}{\epsilon},\frac{z'\cdot\psi}{\epsilon},\frac{x_d}{\epsilon}\right) \end{equation*} where the formal series $ v^{\epsilon} $ is given by \begin{equation}\label{eq ansatz} v^{\epsilon}\big(z,\theta_1,\theta_2,\chi_d\big):=\sum_{n\geq 1}\epsilon^n\,U_n\big(z,\theta_1,\theta_2,\chi_d\big), \end{equation} where, for $ n\geq 1 $, $ U_n $ belongs to $ \P_{T} $. Formally plugging ansatz \eqref{eq ansatz} in system \eqref{eq systeme 1}, we obtain the following WKB cascade\footnote{We have used here the assumption that coefficients $ A_j $ are affine maps.} (see \cite{CoulombelWilliams2017Mach}) for the profiles $ (U_n)_{n\geq 1} $: \begin{subequations}\label{eq cascade int} \begin{align} \mathcal{L}(\partial_{\theta},\partial_{\chi_d})\,U_1&=0\label{eq cascade int U1}\\[5pt] \mathcal{L}(\partial_{\theta},\partial_{\chi_d})\,U_2+L(0,\partial_z)\,U_1+\mathcal{M}(U_1,U_1)&=0\label{eq cascade int U2}\\ \mathcal{L}(\partial_{\theta},\partial_{\chi_d})\,U_{n+1}+L(0,\partial_z)\,U_n+\sum_{k=1}^n\mathcal{M}(U_{n-k+1},U_k)+\sum_{k=1}^{n-1}\mathcal{N}(U_{n-k},U_k)&=0,\label{eq cascade int Un} \end{align} \end{subequations} where \eqref{eq cascade int Un} should hold for any $ n\geq 2 $. 
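These equations correspond to collecting the successive powers of $ \epsilon $; let us sketch this formal bookkeeping (the operators $ \mathcal{L} $, $ \mathcal{M} $ and $ \mathcal{N} $ are defined just below). By the chain rule, for the ansatz above,
\begin{equation*}
\partial_{z'}\,u^{\epsilon,\app}=\Big(\partial_{z'}+\frac{\phi}{\epsilon}\,\partial_{\theta_1}+\frac{\psi}{\epsilon}\,\partial_{\theta_2}\Big)\,v^{\epsilon},\qquad
\partial_{x_d}\,u^{\epsilon,\app}=\Big(\partial_{x_d}+\frac{1}{\epsilon}\,\partial_{\chi_d}\Big)\,v^{\epsilon},
\end{equation*}
so each fast derivative carries a factor $ \epsilon^{-1} $. At order $ \epsilon^{n} $, the term $ \mathcal{L}(\partial_{\theta},\partial_{\chi_d})\,U_{n+1} $ thus comes from the fast derivatives of $ \epsilon^{n+1}\,U_{n+1} $, the term $ L(0,\partial_z)\,U_n $ from the slow derivatives of $ \epsilon^{n}\,U_n $, and the quadratic terms $ \mathcal{M}(U_{n-k+1},U_k) $ and $ \mathcal{N}(U_{n-k},U_k) $ from the products of $ \epsilon^{n-k+1}\,U_{n-k+1} $ with the fast derivatives of $ \epsilon^{k}\,U_k $, and of $ \epsilon^{n-k}\,U_{n-k} $ with the slow derivatives of $ \epsilon^{k}\,U_k $, respectively.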
In \eqref{eq cascade int}, the fast operator $ \mathcal{L}(\partial_{\theta},\partial_{\chi_d}) $ and the quadratic operators $ \mathcal{M} $ and $ \mathcal{N} $ are defined by \begin{align*} \mathcal{L}(\partial_{\theta},\partial_{\chi_d}):=&\,L(0,\phi)\,\partial_{\theta_1}+L(0,\psi)\,\partial_{\theta_2}+A_d(0)\,\partial_{\chi_d},\\[5pt] \mathcal{M}(u,v):=&\,L_1(u,\phi)\,\partial_{\theta_1}v+L_1(u,\psi)\,\partial_{\theta_2}v+dA_d(0)\,\cdot u\,\partial_{\chi_d}v\\ =&\,\sum_{k=1}^{d-1}dA_k(0)\cdot u\, \big(\phi_k\,\partial_{\theta_1}+\psi_k\,\partial_{\theta_2}\big) v+dA_d(0)\cdot u\,\partial_{\chi_d}v,\\ \mathcal{N}(u,v):=&\,L_1(u,\partial_z)\,v:=\sum_{k=1}^d dA_k(0)\cdot u\,\partial_k v, \end{align*} where we have denoted, for $ X $ in $ \C^3 $ and $ \zeta=(\zeta_0,\dots,\zeta_{d-1})\in\R^{d} $, \begin{equation*} L(0,\zeta):=L\big(0,(\zeta,0)\big)=\zeta_0\,I+\sum_{k=1}^{d-1}\zeta_k A_k(0),\qquad L_1(X,\zeta):=\sum_{k=1}^{d-1}\zeta_k\, dA_k(0)\cdot X. \end{equation*} The boundary and initial conditions of \eqref{eq systeme 1} read \begin{subequations}\label{eq cascade bord} \begin{align} B\,\big(U_1\big)_{|x_d,\chi_d=0}(z',\theta_1,\theta_2)&=0\label{eq cascade bord U12}, &B\,\big(U_2\big)_{|x_d,\chi_d=0}(z',\theta_1,\theta_2)&=G(z',\theta_1)\\ B\,\big(U_3\big)_{|x_d,\chi_d=0}(z',\theta_1,\theta_2)&=H(z',\theta_2), &B\,\big(U_n\big)_{|x_d,\chi_d=0}(z',\theta_1,\theta_2)&=0,\quad\forall n\geq 4\label{eq cascade bord U3n}, \end{align} \end{subequations} and \begin{equation}\label{eq cascade initial} \big(U_n\big)_{|t\leq 0}=0,\quad \forall n\geq 1. \end{equation} The aim is now to decouple cascade \eqref{eq cascade int}. First we use polarization equation \eqref{eq cascade int U1} to obtain the form of the leading profile $ U_1 $, and proceed to show that the mean value $ U_1^* $ of $ U_1 $ is zero using evolution equation \eqref{eq cascade int U2}. Then we need to determine the oscillating part of $ U_1 $.
Equation \eqref{eq cascade int U2} leads to a transport equation for each mode. When the equation is outgoing (that is, when the frequency is outgoing), the transport equation can be solved, with a source term possibly depending on other leading profiles, due to resonances. When it is incoming (i.e. when the frequency is incoming), we need to determine a boundary condition from the first equation of \eqref{eq cascade bord U12}. Two cases may occur. If the associated boundary frequency $ \zeta $ is not in $ \Upsilon $, that is, for $ \zeta\neq \phi,\psi $, we can write boundary condition \eqref{eq cascade bord U12} for $ \zeta $ as $ B\,X=F $, where the source term $ F $ depends on the trace of the outgoing leading profile for this boundary frequency $ \zeta $, and where $ X $ (containing traces of incoming leading profiles) belongs to $ E_-(\zeta) $. Since $ B $ is invertible on $ E_-(\zeta) $, by Assumption \ref{hypothese weak lopatinskii} and the fact that $ \zeta\notin \Upsilon $, this boundary condition $ B\,X=F $ leads to a boundary condition for the traces of incoming leading profiles. The second case is more complicated. If $ \zeta\in\Upsilon $, that is, if $ \zeta=\phi,\psi $, the matrix $ B $ is no longer invertible on $ E_-(\zeta) $. Therefore, the boundary condition $ B\,X=F $ cannot be inverted, and leads to both a compatibility condition (which, as we shall see, will over-determine the system), and expressions for the traces of incoming leading profiles for $ \phi $ and $ \psi $ depending on unknown scalar functions $ a_{\phi}^1 $ and $ a_{\psi}^1 $. Then, to determine these functions $ a_{\phi}^1 $ and $ a_{\psi}^1 $, we need to investigate the first corrector $ U_2 $. According to equation \eqref{eq cascade int U2}, the first corrector $ U_2 $ is not polarized. But this equation allows us to determine a formula for its nonpolarized part, depending on the leading profiles.
We write the second equation of boundary condition \eqref{eq cascade bord U12} with these expressions for the nonpolarized parts, which leads to equations on the traces of the nonpolarized parts, and therefore to equations on the traces of the leading profiles, namely on $ a^1_{\phi} $ and $ a^1_{\psi} $. However, the system of equations obtained is still not closed, since the equations on $ a^1_{\phi} $ and $ a^1_{\psi} $ involve traces of the first corrector $ U_2 $. The next step is to obtain equations on the polarized part of the first corrector $ U_2 $, which is achieved using equation \eqref{eq cascade int Un} for $ n=3 $. Once again, depending on the frequency, we obtain incoming or outgoing transport equations with source terms depending on the leading profile and the first corrector. For incoming equations, when the associated frequency $ \zeta $ is not in $ \Upsilon $, boundary condition \eqref{eq cascade bord U12} can be inverted to obtain a closed system. Otherwise, when $ \zeta $ belongs to $ \Upsilon $, the same arguments as for the leading profiles lead to compatibility conditions (which are this time always satisfied, by the previous construction) and to expressions for the traces of incoming first corrector profiles for $ \phi $ and $ \psi $ depending on unknown scalar functions $ a_{\phi}^2 $ and $ a_{\psi}^2 $. Investigating the nonpolarized part of the second corrector $ U_3 $ leads, in turn, to equations on $ a_{\phi}^2 $ and $ a_{\psi}^2 $, depending once again on traces of the second corrector $ U_3 $, preventing us from closing the system. This method applies recursively at any order. \subsection{Rewriting the equations: leading profile and first corrector} This subsection is devoted to the almost-decoupling of the cascade \eqref{eq cascade int}, \eqref{eq cascade bord} and \eqref{eq cascade initial}. The computations are, for the most part, formal.
Except for the leading profile, we will not detail how the formulas for the evanescent part are obtained, as this part is not relevant to the instability analysis, since all three frequencies $ \phi $, $ \psi $ and $ \nu $ are hyperbolic. \subsubsection{Leading profile} We start by deriving the polarization condition for $ U_1 $ from \eqref{eq cascade int U1}, recalling the analysis of \cite{Lescarret2007Wave}. If we write $ U_1 $ in $ \P_{T} $ as \begin{align*} U_1(z,\theta,\chi_d)&=U_1^{\osc}(z,\theta,\chi_d)+U_1^{\ev}(z,\theta,\chi_d)\\[5pt] &=\sum_{\mathbf{n}\in\Z^2}\sum_{\xi\in\R}U^{1,\osc}_{\mathbf{n},\xi}(z)\,e^{i\,\mathbf{n}\cdot\theta}\,e^{i\,\xi\,\chi_d}+\sum_{\mathbf{n}\in\Z^2}U^{1,\ev}_{\mathbf{n}}(z,\chi_d)\,e^{i\,\mathbf{n}\cdot\theta}, \end{align*} equation \eqref{eq cascade int U1} reads \begin{multline*} \sum_{\mathbf{n}\in\Z^2}\sum_{\xi\in\R}iL\big(0,(\freq,\xi)\big)\,U^{1,\osc}_{\mathbf{n},\xi}(z)\,e^{i\,\mathbf{n}\cdot\theta}\,e^{i\,\xi\,\chi_d}\\ +\sum_{\mathbf{n}\in\Z^2}\big\{iL(0,\freq)+A_d(0)\,\partial_{\chi_d}\big\}\,U^{1,\ev}_{\mathbf{n}}(z,\chi_d)\,e^{i\,\mathbf{n}\cdot\theta}=0. \end{multline*} Therefore, on the one hand, for the oscillating part, we get $ L\big(0,(\freq,\xi)\big)\,U^{1,\osc}_{\mathbf{n},\xi}=0 $ for every $ \mathbf{n}\in\Z^2\privede{0} $ and $ \xi\in\R $, so, if $ (\freq,\xi) $ is noncharacteristic, $ U^{1,\osc}_{\mathbf{n},\xi}=0 $, and if $ \xi=\xi_j(\freq) $ for some $ j\in\mathcal{C}(\freq) $, we find that $ U^{1,\osc}_{\mathbf{n},\xi} $ belongs to $ \ker L\big(0,\alpha_j(\freq)\big)=\Span r_{\freq,j} $. Thus we write \begin{equation*} U^{1,\osc}_{\mathbf{n},\xi}=\sigma^1_{\mathbf{n}_0,j,\lambda} \,r_{\mathbf{n}_0\cdot\boldsymbol{\zeta},j}, \end{equation*} if $ \mathbf{n}=\lambda\,\mathbf{n}_0 $ with $ \mathbf{n}_0\in\B_{\Z^2} $ and $ \lambda\in\Z^* $, and where $ \sigma^1_{\mathbf{n}_0,j,\lambda} $ is a scalar function defined on $ \Omega_T $.
On the other hand, for the evanescent part, we get $ U^{1,\ev}_{0}=0 $, and, for $ \mathbf{n}\in\Z^2\privede{0} $, multiplying by $ A_d(0)^{-1} $, \begin{equation*} \partial_{\chi_d}\,U^{1,\ev}_{\mathbf{n}}-\mathcal{A}(\freq)\,U^{1,\ev}_{\mathbf{n}}=0. \end{equation*} Solving this differential equation in $ \mathcal{P}^{\ev}_{T} $ leads to \begin{equation*} U^{1,\ev}_{\mathbf{n}}(z,\chi_d)=e^{\chi_d\mathcal{A}(\mathbf{n}\cdot\boldsymbol{\zeta})}\,\Pi^e(\mathbf{n}\cdot\boldsymbol{\zeta})\,U^{1,\ev}_{\mathbf{n}}(z,0). \end{equation*} In short, polarization equation \eqref{eq cascade int U1} asserts that $ U_1 $ reads \begin{multline}\label{eq ecriture U 1} U_1(z,\theta,\chi_d)=U^*_1(z)+\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{j\in\mathcal{C}(\freq)}\sum_{\lambda\in\Z^*}\sigma^1_{\mathbf{n},j,\lambda}(z)\,e^{i\,\lambda\,\mathbf{n}\cdot\theta}\,e^{i\,\lambda\,\xi_j(\freq)\,\chi_d}\,r_{\freq,j} \\ +\sum_{\mathbf{n}\in\Z^2\privede{0}}e^{\chi_d\mathcal{A}(\mathbf{n}\cdot\boldsymbol{\zeta})}\,\Pi^e(\mathbf{n}\cdot\boldsymbol{\zeta})\,U^{1,\ev}_{\mathbf{n}}(z,0)\,e^{i\,\mathbf{n}\cdot\theta}. \end{multline} We start by showing that the mean value $ U^*_1 $ is zero, using equation \eqref{eq cascade int U2}. 
The oscillating part of $ L(0,\partial_z)\,U_1+\mathcal{M}(U_1,U_1) $ is given by $ L(0,\partial_z)\,U^{\osc}_1+\mathcal{M}(U^{\osc}_1,U^{\osc}_1) $ and according to \eqref{eq ecriture U 1} and the expression of the quadratic operator $ \mathcal{M} $, the latter reads \begin{subequations}\label{eq derivation L U1 + M U1} \begin{align}\label{eq derivation L U1 + M U1 a} &L(0,\partial_z)\,U^*_1+\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{j\in\mathcal{C}(\freq)}\sum_{\lambda\in\Z^*}L(0,\partial_z)\, \sigma^1_{\mathbf{n},j,\lambda}\, e^{i\lambda\,\mathbf{n}\cdot\theta}\, e^{i\lambda\xi_j(\mathbf{n}\cdot\boldsymbol{\zeta})\chi_d}\,r_{\mathbf{n}\cdot\boldsymbol{\zeta},j}\\\label{eq derivation L U1 + M U1 b} &+\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{j\in\mathcal{C}(\freq)}\sum_{\lambda\in\Z^*}L_1\big(U^*_1,i\,\lambda\,\alpha_j(\mathbf{n}\cdot\boldsymbol{\zeta})\big)\,r_{\mathbf{n}\cdot\boldsymbol{\zeta},j}\,\sigma^1_{\mathbf{n},j,\lambda}\, e^{i\lambda\,\mathbf{n}\cdot\theta}\, e^{i\lambda\xi_j(\mathbf{n}\cdot\boldsymbol{\zeta})\chi_d} \\\label{eq derivation L U1 + M U1 c} &+\sum_{\mathbf{n}_1,\mathbf{n}_2\in\B_{\Z^2}}\sum_{\substack{j_1\in\mathcal{C}(\mathbf{n}_1\cdot\boldsymbol{\zeta})\\j_2\in\mathcal{C}(\mathbf{n}_2\cdot\boldsymbol{\zeta})}}\sum_{\lambda_1,\lambda_2\in\Z^*}L_1\big(r_{\mathbf{n}_1\cdot\boldsymbol{\zeta},j_1},i\,\lambda_2\,\alpha_{j_2}(\mathbf{n}_2\cdot\boldsymbol{\zeta})\big)\,r_{\mathbf{n}_2\cdot\boldsymbol{\zeta},j_2}\,\\\nonumber &\qquad\qquad\sigma^1_{\mathbf{n}_1,j_1,\lambda_1}\,\sigma^1_{\mathbf{n}_2,j_2,\lambda_2}\,e^{i\,(\lambda_1\mathbf{n}_1+\lambda_2\mathbf{n}_2)\cdot\theta}\,e^{i\,(\lambda_1\xi_{j_1}(\mathbf{n}_1\cdot\boldsymbol{\zeta})+\lambda_2\xi_{j_2}(\mathbf{n}_2\cdot\boldsymbol{\zeta}))\,\chi_d}, \end{align} \end{subequations} where, for $ \alpha=(\alpha_0,\alpha_1,\dots,\alpha_d)\in\R^{d+1} $ and $ X\in\C^3 $, we have denoted \begin{equation*} L_1(X,\alpha):=\sum_{k=1}^d\alpha_k\,dA_k(0)\cdot X. 
\end{equation*} We now isolate the nonoscillating terms in \eqref{eq derivation L U1 + M U1}, to obtain a system satisfied by the mean value $ U^*_1 $. In equation \eqref{eq derivation L U1 + M U1}, the terms in the sums in \eqref{eq derivation L U1 + M U1 a} and \eqref{eq derivation L U1 + M U1 b} are always oscillating since $ \mathbf{n}\in\B_{\Z^2} $. The terms in the sum in \eqref{eq derivation L U1 + M U1 c} are not oscillating if and only if $ \lambda_1\mathbf{n}_1+\lambda_2\mathbf{n}_2=0 $ and $ \lambda_1\,\xi_{j_1}(\mathbf{n}_1\cdot\boldsymbol{\zeta})+\lambda_2\,\xi_{j_2}(\mathbf{n}_2\cdot\boldsymbol{\zeta})=0 $, that is, if $ \mathbf{n}_1=\mathbf{n}_2 $, $ \lambda_1=-\lambda_2 $ and $ j_1=j_2 $. Therefore, we deduce from \eqref{eq derivation L U1 + M U1} that we have \begin{equation}\label{eq derivation inter1} L(0,\partial_z)\,U^*_1+\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{j\in\mathcal{C}(\freq)}\sum_{\lambda\in\Z^*}L_1\big(r_{\mathbf{n}\cdot\boldsymbol{\zeta},j},-i\,\lambda\,\alpha_j(\mathbf{n}\cdot\boldsymbol{\zeta})\big)\,r_{\mathbf{n}\cdot\boldsymbol{\zeta},j}\,\sigma^1_{\mathbf{n},j,\lambda}\,\sigma^1_{\mathbf{n},j,-\lambda}=0. \end{equation} By the change of index $ \lambda\mapsto-\lambda $, we see that the second term in the left-hand side of \eqref{eq derivation inter1} is actually zero, so we have the following linear constant-coefficient equation \color{altpink} \begin{equation*} L(0,\partial_z)\,U^*_1=0. \end{equation*}\color{black} With the following boundary and initial conditions obtained from \eqref{eq cascade bord U12} and \eqref{eq cascade initial}, \color{altpink} \begin{equation*} B\big(U^*_1\big)_{|x_d,\chi_d=0}=0,\qquad \big(U^*_1\big)_{|t\leq0}=0, \end{equation*}\color{black} we get that the mean value $ U^*_1 $ satisfies a system which is weakly well-posed, see \cite{Coulombel2005Wellposedness}, with zero source term, boundary forcing term and initial datum, so the mean value $ U^*_1 $ is zero.
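For completeness, the cancellation invoked above can be checked directly: the summand in the second term of \eqref{eq derivation inter1} is odd in $ \lambda $. Indeed, by linearity of $ L_1 $ in its second argument, the substitution $ \lambda\mapsto-\lambda $ transforms the summand according to
\begin{equation*}
L_1\big(r_{\mathbf{n}\cdot\boldsymbol{\zeta},j},\,i\,\lambda\,\alpha_j(\mathbf{n}\cdot\boldsymbol{\zeta})\big)\,r_{\mathbf{n}\cdot\boldsymbol{\zeta},j}\,\sigma^1_{\mathbf{n},j,-\lambda}\,\sigma^1_{\mathbf{n},j,\lambda}
=-L_1\big(r_{\mathbf{n}\cdot\boldsymbol{\zeta},j},\,-i\,\lambda\,\alpha_j(\mathbf{n}\cdot\boldsymbol{\zeta})\big)\,r_{\mathbf{n}\cdot\boldsymbol{\zeta},j}\,\sigma^1_{\mathbf{n},j,\lambda}\,\sigma^1_{\mathbf{n},j,-\lambda},
\end{equation*}
so the sum over $ \lambda\in\Z^* $ is equal to its own opposite, hence vanishes.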
Since $ U^*_1 $ is zero, equation \eqref{eq cascade int U2} now reads, for each nonzero characteristic mode $ \lambda\,\alpha_j(\freq) $, with $ \mathbf{n}\in\B_{\Z^2} $, $ j\in\mathcal{C}(\freq) $ and $ \lambda\in\Z^* $, \begin{multline}\label{eq derivation L U1 + M U1 2} i\,L\big(0,\alpha_j(\freq)\big)\,U^{2,\osc}_{\mathbf{n},\xi_j(\freq)}+L(0,\partial_z)\, \sigma^1_{\mathbf{n},j,\lambda}\,r_{\mathbf{n}\cdot\boldsymbol{\zeta},j}\\ +\sum_{(\mathbf{n}_1,\mathbf{n}_2,j_1,j_2,\lambda_1,\lambda_2)}L_1\big(r_{\mathbf{n}_1\cdot\boldsymbol{\zeta},j_1},i\,\lambda_2\,\alpha_{j_2}(\mathbf{n}_2\cdot\boldsymbol{\zeta})\big)\,r_{\mathbf{n}_2\cdot\boldsymbol{\zeta},j_2}\sigma^1_{\mathbf{n}_1,j_1,\lambda_1}\,\sigma^1_{\mathbf{n}_2,j_2,\lambda_2}=0, \end{multline} where the sum is over the set of 6-tuples $ (\mathbf{n}_1,\mathbf{n}_2,j_1,j_2,\lambda_1,\lambda_2) $ in $ \big(\B_{\Z^2}\big)^2\times\mathcal{C}(\mathbf{n}_1\cdot\boldsymbol{\zeta})\times\mathcal{C}(\mathbf{n}_2\cdot\boldsymbol{\zeta})\times(\Z^*)^2 $ such that $ \lambda_1\alpha_{j_1}(\mathbf{n}_1\cdot\boldsymbol{\zeta})+\lambda_2\alpha_{j_2}(\mathbf{n}_2\cdot\boldsymbol{\zeta})=\lambda\alpha_{j}(\freq) $. There are two possibilities for that to happen. \begin{itemize}[leftmargin=20pt] \item Either frequencies $ \lambda_1\,\alpha_{j_1}(\mathbf{n}_1\cdot\boldsymbol{\zeta}) $ and $ \lambda_2\,\alpha_{j_2}(\mathbf{n}_2\cdot\boldsymbol{\zeta}) $ are colinear (therefore colinear to $ \alpha_j(\freq) $), that is to say $ \mathbf{n}_1=\mathbf{n}_2=\mathbf{n} $ and $ j_1=j_2=j $. This is called \emph{self-interaction} of frequency $ \alpha_{j}(\freq) $ with itself. Note that the obtained frequency $ \lambda_1\alpha_{j_1}(\mathbf{n}_1\cdot\boldsymbol{\zeta})+\lambda_2\alpha_{j_2}(\mathbf{n}_2\cdot\boldsymbol{\zeta}) $ is then always real characteristic. 
\item Or the frequencies $ \lambda_1\,\alpha_{j_1}(\mathbf{n}_1\cdot\boldsymbol{\zeta}) $ and $ \lambda_2\,\alpha_{j_2}(\mathbf{n}_2\cdot\boldsymbol{\zeta}) $ are noncolinear, in which case a true resonance in the sense of Assumption \ref{hypothese ensemble frequences} occurs, namely \eqref{eq hyp res phi psi nu 1} or \eqref{eq hyp res phi psi nu 2}. For example, if $ \alpha_j(\freq)=\psi_2 $ (i.e. if $ \mathbf{n}=(0,1) $ and $ j=2 $), then Assumption \ref{hypothese ensemble frequences} implies that $ \lambda=k\lambda_{\psi} $ for some $ k\in\Z^* $ and, up to a permutation, $ \mathbf{n}_1=(1,0) $, $ j_1=3 $, $ \lambda_1=-k\lambda_{\phi} $ and $ \mathbf{n}_2=(-\lambda_{\phi},-\lambda_{\psi}) $, $ j_2=2 $, $ \lambda_2=-k $. \end{itemize} Recall that, for a frequency $ \lambda\,\zeta=\lambda\,\mathbf{n}\cdot\boldsymbol{\zeta}\in\F_{b}\privede{0} $ with $ \mathbf{n}\in\B_{\Z^2} $ and $ \lambda\in\Z^* $, we alternate between the representations $ \lambda\,\zeta $ and $ (\lambda,\mathbf{n}) $, so we shall denote \begin{equation*} \sigma^1_{\zeta,j,\lambda}:=\sigma^1_{\mathbf{n},j,\lambda},\quad \forall j=1,2,3,\forall \lambda\in\Z^*. \end{equation*} According to the previous analysis, we can now write the system satisfied by the leading profiles. For example, for $ \psi_2 $, which is involved in resonance \eqref{eq hyp res phi psi nu 2}, multiplying equation \eqref{eq derivation L U1 + M U1 2} for $ \mathbf{n}=(0,1) $ and $ j=2 $ by the vector $ \ell_{\psi,2} $ cancels the first term of \eqref{eq derivation L U1 + M U1 2}, according to \eqref{eq relation l zeta j L alpha j}.
Thus we obtain, for $ \lambda\in\Z^* $, \begin{multline*} \ell_{\psi,2}\,L(0,\partial_z)\,r_{\psi,2}\,\sigma^1_{\psi,2,\lambda}+\ell_{\psi,2}\,L_1\big(r_{\psi,2},\psi_2\big)\,r_{\psi,2}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda_2\,\sigma^1_{\psi,2,\lambda_1}\,\sigma^1_{\psi,2,\lambda_2}\\ +\indicatrice_{\lambda=k\lambda_{\psi}} \ell_{\psi,2}\Big\{L_1\big(r_{\phi,3},-\nu_2\big)\,r_{\nu,2}+L_1\big(r_{\nu,2},-\lambda_{\phi}\phi_3\big)\,r_{\phi,3}\Big\}\,ik\,\sigma^1_{\phi,3,-k\lambda_{\phi}}\,\sigma^1_{\nu,2,-k}=0. \end{multline*} In the left-hand side of the previous equation, the first term, the transport one, corresponds to the second term of the left-hand side of \eqref{eq derivation L U1 + M U1 2}; the second one, the Burgers-type term, corresponds to the self-interaction part of the third term of the left-hand side of \eqref{eq derivation L U1 + M U1 2}; while the last one, the resonant term, corresponds to the resonance part of that same third term. This splitting into transport, self-interaction and resonance terms generalizes to any frequency.
For $ \zeta=\phi,\psi,\nu $, $ j=1,2,3 $ and $ \lambda\in\Z^* $, we have \begin{subequations}\label{eq evolution sigma 1} \begin{multline}\label{eq evolution sigma 1 phi psi nu} \color{altblue} X_{\alpha_j(\zeta)}\,\sigma^1_{\zeta,j,\lambda}+D_{\zeta,j}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda_2\,\sigma^1_{\zeta,j,\lambda_1}\,\sigma^1_{\zeta,j,\lambda_2}\\ \color{altblue} +\indicatrice_{\lambda=k\lambda_{\zeta}}\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2)\\\in\mathcal{R}(\zeta,j)}}J^{\zeta_1,j_1}_{\zeta_2,j_2}\,ik\,\sigma^1_{\zeta_1,j_1,-k\lambda_{\zeta_1}}\,\sigma^1_{\zeta_2,j_2,-k\lambda_{\zeta_2}}=0, \end{multline} \color{black} and for other frequencies $ \zeta\in\F_{b}\privede{0,\phi,\psi,\nu} $, $ j\in\mathcal{C}(\zeta) $ and $ \lambda\in\Z^* $, \begin{equation}\label{eq evolution sigma 1 autre} X_{\alpha_j(\zeta)}\,\sigma^1_{\zeta,j,\lambda}+D_{\zeta,j}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda_2\,\sigma^1_{\zeta,j,\lambda_1}\,\sigma^1_{\zeta,j,\lambda_2}=0. \end{equation} \end{subequations} We have denoted, for $ \zeta\in\F_{b} $ and $ j\in\mathcal{C}(\zeta) $, the vector field $ X_{\alpha_j(\zeta)} $ and the self-interaction coefficient $ D_{\zeta,j} $ as \begin{subequations}\label{eq prof princ def X D J} \begin{align} X_{\alpha_j(\zeta)}&:=\ell_{\zeta,j}\,L(0,\partial_z)\,r_{\zeta,j},\qquad D_{\zeta,j}:=\ell_{\zeta,j}\,L_1\big(r_{\zeta,j},\alpha_j(\zeta)\big)\,r_{\zeta,j} ,\label{eq def X D} \intertext{and, for $ (\zeta_1,\zeta_2,j_1,j_2)\in\mathcal{R}(\zeta,j) $ (in the case $ \zeta=\phi,\psi,\nu $), the resonance coefficient $ J_{\zeta_2,j_2}^{\zeta_1,j_1} $ as} J^{\zeta_1,j_1}_{\zeta_2,j_2}&:=\ell_{\zeta,j}\, L_1\big(r_{\zeta_1,j_1},\lambda_{\zeta_2}\,\alpha_{j_2}(\zeta_2)\big)\,r_{\zeta_2,j_2} .\label{eq def J} \end{align} \end{subequations} According to the Lax Lemma \ref{lemme Lax}, the operator $ \ell_{\zeta,j}\,L(0,\partial_z)\,r_{\zeta,j} $ of \eqref{eq def X D} is equal to the vector field \eqref{eq champ vecteur X_alpha} which has already been 
denoted by $ X_{\alpha_j(\zeta)} $, so the notation is consistent. In the following, for $ \zeta\in\F_b\privede{0} $ and $ j\in\mathcal{C}(\zeta) $, we denote by $ X_{\zeta,j} $ the vector field $ X_{\alpha_j(\zeta)} $ and by $ \mathbf{v}_{\zeta,j}:=\mathbf{v}_{\alpha_{j}(\zeta)} $ the velocity vector associated with it. \bigskip For all frequencies $ \zeta=\mathbf{n}\cdot\boldsymbol{\zeta} $ with $ \mathbf{n}\in\B_{\Z^2} $ except for $ \zeta=\phi,\psi,\nu $, and for all $ j\in\mathcal{C}(\zeta) $, since the frequency $ \alpha_j(\zeta) $ does not occur in any resonance, if we denote by $ \sigma^1_{\zeta,j} $ the series \begin{equation*} \sigma^1_{\zeta,j}(z,\Theta):=\sum_{\lambda\in\Z^*}\sigma^1_{\zeta,j,\lambda}(z)\,e^{i\lambda\Theta}, \end{equation*} then, according to \eqref{eq evolution sigma 1 autre}, we have the Burgers-type equation \begin{equation}\label{eq evolution S hors cas part} \color{altpurple}X_{\zeta,j}\,\sigma^1_{\zeta,j}+D_{\zeta,j}\,\sigma^1_{\zeta,j}\partial_{\Theta}\sigma^1_{\zeta,j}=0, \end{equation} along with the initial condition $ \color{altpurple}(\sigma^1_{\zeta,j})_{t=0}=0 $\color{black}, which is a nonlinear scalar transport equation in the half space $ \Omega_T $. If the frequency $ \alpha_j(\zeta) $ is outgoing, i.e. if the last component of $ \mathbf{v}_{\zeta,j} $ is positive, there is no need for a boundary condition, so we deduce from \eqref{eq evolution S hors cas part} that $ \sigma^1_{\zeta,j} $ is zero. Therefore, for $ \lambda\in\Z^* $, we have $ \sigma^1_{\zeta,j,\lambda}=0 $. The same arguments apply to the outgoing frequency $ \phi_2 $ since there is no resonance for it. If $ \alpha_j(\zeta) $ is incoming, we need a boundary condition to determine the trace of $ \sigma^1_{\zeta,j}(z,\Theta) $ at $ x_d=0 $.
From the boundary condition \eqref{eq cascade bord U12} for the frequency $ \lambda\zeta $, $ \lambda\in\Z^* $, the expression \eqref{eq ecriture U 1} of $ U_1 $ and the fact that all the amplitudes associated with outgoing frequencies for $ \zeta $ have been proved to be zero, we deduce that \begin{equation}\label{eq cond bord sigma 1 hors cas part} B\sum_{j\in\mathcal{I}(\zeta)}\big(\sigma^1_{\zeta,j,\lambda}\big)_{|x_d=0}r_{\zeta,j}+B\,\Pi^e(\zeta)\,\big(U^{1,\ev}_{\lambda\zeta}\big)_{|x_d,\chi_d=0}=0. \end{equation} Since $ \lambda\zeta\in\F_b\setminus\Upsilon $ (because for now we consider $ \zeta\neq \phi,\psi $), according to decomposition \eqref{eq decomp E_-(zeta)} of the stable subspace $ E_-(\zeta) $, the vector to which the matrix $ B $ applies in \eqref{eq cond bord sigma 1 hors cas part} lies in $ E_-(\lambda\zeta) $, on which $ B $ is invertible by Assumption \ref{hypothese weak lopatinskii}. Therefore we deduce from equation \eqref{eq cond bord sigma 1 hors cas part}, using the vectors $ \ell_{\zeta,j} $, that \color{altpurple} \begin{equation}\label{eq prof princ bord autre} \big(\sigma^1_{\zeta,j}\big)_{|x_d=0}=0,\quad \forall j\in\mathcal{I}(\zeta). \end{equation} \color{black} Combined with \eqref{eq evolution S hors cas part}, this gives $ \sigma^1_{\zeta,j,\lambda}=0 $ for all $ \lambda\in\Z^* $. We have therefore proven that for all frequencies $ \zeta=\mathbf{n}\cdot\boldsymbol{\zeta} $ with $ \mathbf{n}\in\B_{\Z^2} $ except for $ \zeta=\phi,\psi,\nu $, and for all $ j\in\mathcal{C}(\zeta) $ and $ \lambda\in\Z^* $, we have \begin{equation*} \sigma^1_{\zeta,j,\lambda}=0. \end{equation*} In the same way we deduce from \eqref{eq cond bord sigma 1 hors cas part}, using decomposition \eqref{eq decomp E_-(zeta)}, \begin{equation*} \big(U^{1,\ev}_{\lambda\zeta}\big)_{|x_d,\chi_d=0}=0, \end{equation*} so, according to \eqref{eq ecriture U 1}, we can set the evanescent part $ U^{1,\ev} $ of $ U^1 $ to be zero.
We now need to determine boundary conditions for the incoming frequencies $ \phi_j $, $ \psi_j $ and $ \nu_j $ for $ j=1,3 $ as well. For the frequency $ \nu $ we obtain, in the same fashion as before, since $ \nu $ is in the hyperbolic region $ \mathcal{H} $, for $ \lambda\in\Z^* $, \begin{equation*} B\,\Big(\big(\sigma_{\nu,1,\lambda}^1\big)_{|x_d=0}\,r_{\nu,1}+\big(\sigma_{\nu,2,\lambda}^1\big)_{|x_d=0}\,r_{\nu,2}+\big(\sigma_{\nu,3,\lambda}^1\big)_{|x_d=0}\,r_{\nu,3}\Big)=0, \end{equation*} so, for $ j=1,3 $, according to the normalization \eqref{eq relation l zeta j normalisation}, \begin{equation*} \big(\sigma_{\nu,j,\lambda}^1\big)_{|x_d=0}=-\ell_{\nu,j}\,A_d(0)\, \big(B_{|E_-(\nu)}\big)^{-1}\,B\big(\sigma_{\nu,2,\lambda}^1\big)_{|x_d=0}\,r_{\nu,2}. \end{equation*} We denote by $ \mu_{\nu,j} $, for $ j=1,3 $, the coefficient \begin{equation}\label{eq prof princ def mu nu} \mu_{\nu,j}:=-\ell_{\nu,j}\,A_d(0)\,\big(B_{|E_-(\nu)}\big)^{-1}\,B\,r_{\nu,2}, \end{equation} so that, for $ j=1,3 $, we have \begin{equation}\label{eq prof princ bord nu} \color{altgreen}\big(\sigma_{\nu,j,\lambda}^1\big)_{|x_d=0}\,r_{\nu,j}=\mu_{\nu,j}\,\big(\sigma_{\nu,2,\lambda}^1\big)_{|x_d=0}\,r_{\nu,j}. \end{equation} \color{black} For $ \phi $ we have, in a similar manner, since $ \sigma^1_{\phi,2,\lambda} $ is zero for $ \lambda\in\Z^* $, \begin{equation*} B\,\Big(\big(\sigma_{\phi,1,\lambda}^1\big)_{|x_d=0}\,r_{\phi,1}+\big(\sigma_{\phi,3,\lambda}^1\big)_{|x_d=0}\,r_{\phi,3}\Big)=0, \end{equation*} for every $ \lambda $ in $ \Z^* $, so, according to \eqref{eq decomp E_-(zeta)}, the vector multiplying $ B $ in the left-hand side belongs to $ \ker B \cap E_-(\phi) $.
But since $ \phi $ is in $ \Upsilon $, the latter space is of dimension $ 1 $ and reads $ \Span e_{\phi} $, so there exists a scalar function $ a^1_{\phi,\lambda} $ of $ \omega_T $ such that \begin{equation*} \big(\sigma_{\phi,1,\lambda}^1\big)_{|x_d=0}\,r_{\phi,1}+\big(\sigma_{\phi,3,\lambda}^1\big)_{|x_d=0}\,r_{\phi,3}=a_{\phi,\lambda}^1\,e_{\phi}. \end{equation*} Therefore, according to decomposition \eqref{eq decomp e zeta} of $ e_{\zeta} $, for $ j=1,3 $, \begin{equation}\label{eq prof princ bord phi} \color{altred} \big(\sigma_{\phi,j,\lambda}^1\big)_{|x_d=0}\,r_{\phi,j}=a_{\phi,\lambda}^1\,e_{\phi,j}. \end{equation} \color{black} Finally, the case of the phase $ \psi $ gathers the two previous ones. We have, for $ \lambda\in\Z^* $, \begin{equation*} B\,\Big(\big(\sigma_{\psi,1,\lambda}^1\big)_{|x_d=0}\,r_{\psi,1}+\big(\sigma_{\psi,2,\lambda}^1\big)_{|x_d=0}\,r_{\psi,2}+\big(\sigma_{\psi,3,\lambda}^1\big)_{|x_d=0}\,r_{\psi,3}\Big)=0. \end{equation*} In particular, because of \eqref{eq decomp E_-(zeta)}, this implies \begin{equation*} B\big(\sigma_{\psi,2,\lambda}^1\big)_{|x_d=0}\,r_{\psi,2}\in \Ima B_{|E_-(\psi)}=\Big(\ker \,^tB_{|E_-(\psi)}\Big)^{\perp}. \end{equation*} Therefore, according to the definition of $ b_{\psi} $, we obtain the following necessary condition: \begin{equation*} b_{\psi}\cdot B\big(\sigma_{\psi,2,\lambda}^1\big)_{|x_d=0}\,r_{\psi,2}=0. \end{equation*} But since the scalar $ b_{\psi}\cdot B\,r_{\psi,2} $ is not zero\footnote{the linear form $ b_{\psi}\cdot B $ is not uniformly zero and is already zero on two of the three vectors $ r_{\psi,1} $, $ r_{\psi,2} $ and $ r_{\psi,3} $ constituting a basis of $ \C^3 $, so it cannot vanish on the third one}, we necessarily have \begin{equation}\label{eq prof princ cond psi 2 zero} \big(\sigma_{\psi,2,\lambda}^1\big)_{|x_d=0}=0.
\end{equation} When it is satisfied we can write, in the same way as for $ \phi $, for $ j=1,3 $, \begin{equation}\label{eq prof princ bord psi} \color{altorange2}\big(\sigma_{\psi,j,\lambda}^1\big)_{|x_d=0}\,r_{\psi,j}=a_{\psi,\lambda}^1\,e_{\psi,j}, \end{equation} \color{black} with $ a^1_{\psi,\lambda} $ a scalar function of $ \omega_T $. At this point we have obtained a constant-coefficient equation for $ U_1^* $, and transport equations \eqref{eq evolution sigma 1 phi psi nu} and \eqref{eq evolution S hors cas part}, associated with (when incoming) boundary conditions \eqref{eq prof princ bord autre}, \eqref{eq prof princ bord nu}, \eqref{eq prof princ bord phi} and \eqref{eq prof princ bord psi}, but the last two are expressed through scalar functions $ a^1_{\zeta,\lambda} $ which are still to be determined, so the system is not closed at this stage. Also note that condition \eqref{eq prof princ cond psi 2 zero} might raise an issue of over-determination of the system. To determine the equations satisfied by the coefficients $ a^1_{\phi,\lambda} $ and $ a^1_{\psi,\lambda} $, we need to study the nonpolarized part of the first corrector $ U_2 $. \subsubsection{Nonpolarized part of the first corrector} For the first corrector we no longer have a polarization condition such as \eqref{eq cascade int U1}, so noncharacteristic modes may appear through quadratic interactions of characteristic modes. Thus the first corrector $ U_2 $ reads \begin{align*} U_2(z,\theta,\chi_d)&=U^*_2(z)+\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{j\in\mathcal{C}(\freq)}\sum_{\lambda\in\Z^*}U^{2,\osc}_{\mathbf{n},j,\lambda}(z)\,e^{i\,\lambda\,\mathbf{n}\cdot\theta}\,e^{i\,\lambda\,\xi_j(\freq)\,\chi_d} \\\nonumber &\quad+\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{\lambda\in\Z^*}U^{2,\ev}_{\mathbf{n},\lambda}(z,\chi_d)\,e^{i\,\lambda\,\mathbf{n}\cdot\theta}+U^{2,\nc}(z,\theta,\chi_d), \end{align*} where $ U^*_2 $ is the mean value of $ U_2 $ and $ U^{2,\nc} $ corresponds to the noncharacteristic modes.
According to \eqref{eq cascade int U2}, the noncharacteristic part $ U^{2,\nc} $ satisfies (since there are only characteristic modes in $ L(0,\partial_z)\,U_1 $), \begin{multline}\label{eq 1cor expr U 2 nc} \mathcal{L}(\partial_{\theta},\partial_{\chi_d})\,U^{2,\nc}=-\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2,\\\lambda_1,\lambda_2)\in\mathcal{NR}}}L_1(r_{\zeta_1,j_1},\alpha_{j_2}(\zeta_2))\,r_{\zeta_2,j_2}\,i\lambda_2\,\sigma_{\zeta_1,j_1,\lambda_1}^1\,\sigma_{\zeta_2,j_2,\lambda_2}^1\,\\ e^{i(\lambda_1\mathbf{n}_1+\lambda_2\mathbf{n}_2)\cdot\theta}\,e^{i(\lambda_1\xi_{j_1}(\zeta_1)+\lambda_2\xi_{j_2}(\zeta_2))\chi_d}, \end{multline} where $ \mathcal{NR} $ denotes the set of 6-tuples $ (\zeta_1,\zeta_2,j_1,j_2,\lambda_1,\lambda_2) $ such that the frequency \begin{equation*} \lambda_1\,\alpha_{j_1}(\zeta_1)+\lambda_2\,\alpha_{j_2}(\zeta_2) \end{equation*} is noncharacteristic (so that, in particular, no resonance occurs). Note that only the boundary frequencies $ \phi,\psi $ and $ \nu $ occur in \eqref{eq 1cor expr U 2 nc}, since for all the others the first profile is zero. Since all frequencies in $ U^{2,\nc} $ are noncharacteristic, equation \eqref{eq 1cor expr U 2 nc} determines $ U^{2,\nc} $ entirely. Indeed, for each mode of noncharacteristic frequency $ \lambda_1\,\alpha_{j_1}(\zeta_1)+\lambda_2\,\alpha_{j_2}(\zeta_2) $ with $ (\zeta_1,\zeta_2,j_1,j_2,\lambda_1,\lambda_2) \in\mathcal{NR} $, the operator $ \mathcal{L}(\partial_{\theta},\partial_{\chi_d}) $ reads $ i\,L\big(0,\lambda_1\alpha_{j_1}(\zeta_1)+\lambda_2\alpha_{j_2}(\zeta_2)\big) $, which is an invertible matrix.
Since for every boundary frequency $ \zeta=\mathbf{n}\cdot\boldsymbol{\zeta}\in\F_b\privede{0,\phi,\psi,\nu} $, the oscillating profile $ \sigma^1_{\zeta,j,\lambda} $ is zero for $ j\in\mathcal{C}(\zeta) $ and $ \lambda\in\Z^* $ and since there are no resonances generating these frequencies, according to \eqref{eq cascade int U2}, the profile $ U^{2,\osc}_{\mathbf{n},j,\lambda} $ satisfies \begin{equation*} L\big(0,\alpha_j(\zeta)\big)\,U^{2,\osc}_{\mathbf{n},j,\lambda}=0, \end{equation*} so it is polarized, and we denote by $ \sigma^2_{\zeta,j,\lambda} $ the scalar function of $ \Omega_T $ such that \begin{equation*} U^{2,\osc}_{\mathbf{n},j,\lambda}=\sigma^2_{\zeta,j,\lambda}\,r_{\zeta,j}. \end{equation*} With the same arguments as for the leading profile, we get the following polarization condition for the evanescent part: $ U^{2,\ev}_{0}=0 $, and, for $ \mathbf{n}\in\Z^2\privede{0} $, \begin{equation*} U^{2,\ev}_{\mathbf{n}}(z,\chi_d)=e^{\chi_d\mathcal{A}(\mathbf{n}\cdot\boldsymbol{\zeta})}\,\Pi^e(\mathbf{n}\cdot\boldsymbol{\zeta})\,U^{2,\ev}_{\mathbf{n}}(z,0). \end{equation*} Therefore, $ U_2 $ can be written in the following more precise way: \begin{align}\label{eq ecriture U 2} U_2(z,\theta,\chi_d)&=U^*_2(z)+\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{j\in\mathcal{C}(\freq)}\sum_{\lambda\in\Z^*}U^{2,\osc}_{\mathbf{n},j,\lambda}(z)\,e^{i\,\lambda\,\mathbf{n}\cdot\theta}\,e^{i\,\lambda\,\xi_j(\freq)\,\chi_d} \\\nonumber &\quad+\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{\lambda\in\Z^*}e^{\chi_d\mathcal{A}(\mathbf{n}\cdot\boldsymbol{\zeta})}\,\Pi^e(\mathbf{n}\cdot\boldsymbol{\zeta})\,U^{2,\ev}_{\mathbf{n}}(z,0)\,e^{i\,\lambda\,\mathbf{n}\cdot\theta}+U^{2,\nc}(z,\theta,\chi_d). \end{align} \bigskip Writing down the boundary equation \eqref{eq cascade bord U12} for $ U_2 $ will lead to equations on the boundary terms $ a_{\phi}^1 $ and $ a_{\psi}^1 $. Thus we need to determine the nonpolarized part of the amplitudes associated with frequencies lifted from $ \phi,\psi,\nu $.
For $ \zeta=\mathbf{n}\cdot\boldsymbol{\zeta}\in\ensemble{\phi,\psi,\nu} $, $ j=1,2,3 $, and $ \lambda\in\Z^* $, from polarization equation \eqref{eq cascade int U1}, we get, (using the notation $ U^{2,\osc}_{\zeta,j,\lambda}:=U^{2,\osc}_{\mathbf{n},j,\lambda} $), \begin{align*} i\,L\big(0,\alpha_j(\zeta)\big)\,U^{2,\osc}_{\zeta,j,\lambda}= &-L(0,\partial_z)\,\sigma_{\zeta,j,\lambda}^1\,r_{\zeta,j}-L_1\big(r_{\zeta,j},\alpha_j(\zeta)\big)\,r_{\zeta,j}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda_2\,\sigma_{\zeta,j,\lambda_1}^1\,\sigma_{\zeta,j,\lambda_2}^1\\ &-\indicatrice_{\lambda=k\lambda_{\zeta}}\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2)\\\in\mathcal{R}(\zeta,j)}}ik\,\Big\lbrace L_1\big(r_{\zeta_1,j_1},\lambda_{\zeta_2}\,\alpha_{j_2}(\zeta_2)\big)\,r_{\zeta_2,j_2}\\ &\qquad\qquad+L_1\big(r_{\zeta_2,j_2},\lambda_{\zeta_1}\,\alpha_{j_1}(\zeta_1)\big)\,r_{\zeta_1,j_1}\Big\rbrace\, \sigma^1_{\zeta_1,j_1,-k\lambda_{\zeta_1}}\,\sigma^1_{\zeta_2,j_2,-k\lambda_{\zeta_2}}. \end{align*} Then we multiply this equation on the left by the partial inverse $ R_{\zeta,j} $ to obtain, according to relation \eqref{eq relation R zeta j P}, \begin{align}\label{eq 1cor nonpola part U 2 zeta} i\lambda\,\big(I-&P_{\zeta,j}\big)\,U^{2,\osc}_{\zeta,j,\lambda}=\\[5pt]\nonumber &-R_{\zeta,j}\,L(0,\partial_z)\,\sigma_{\zeta,j,\lambda}^1\,r_{\zeta,j}-R_{\zeta,j}\,L_1\big(r_{\zeta,j},\alpha_j(\zeta)\big)\,r_{\zeta,j}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda_2\,\sigma_{\zeta,j,\lambda_1}^1\,\sigma_{\zeta,j,\lambda_2}^1\\\nonumber &-R_{\zeta,j}\,\indicatrice_{\lambda=k\lambda_{\zeta}}\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2)\\\in\mathcal{R}(\zeta,j)}}ik\,\Big\lbrace L_1\big(r_{\zeta_1,j_1},\lambda_{\zeta_2}\,\alpha_{j_2}(\zeta_2)\big)\,r_{\zeta_2,j_2}\\\nonumber &\qquad\qquad+L_1\big(r_{\zeta_2,j_2},\lambda_{\zeta_1}\,\alpha_{j_1}(\zeta_1)\big)\,r_{\zeta_1,j_1}\Big\rbrace\, \sigma^1_{\zeta_1,j_1,-k\lambda_{\zeta_1}}\,\sigma^1_{\zeta_2,j_2,-k\lambda_{\zeta_2}}. 
\end{align} We now write the boundary conditions for the first corrector $ U_2 $, for the frequencies $ \phi $ and $ \psi $. \bigskip Note that since $ \phi $ is in the hyperbolic region, the stable elliptic component $ E^e_-(\phi) $ is zero, so $ \Pi^e(\phi) $ is also zero. Therefore, boundary condition \eqref{eq cascade bord U12} for mode $ \phi $ reads, according to equation \eqref{eq ecriture U 2}, \begin{align}\label{eq 1cor boundary cond phi} B\,P_{\phi,1}\,\big(U^2_{\phi,1,\lambda}\big)_{|x_d,\chi_d=0}+B\,P_{\phi,3}\,\big(U^2_{\phi,3,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber +B\,(I-P_{\phi,1})\,\big(U^2_{\phi,1,\lambda}\big)_{|x_d,\chi_d=0}+B\,(I-P_{\phi,3})\,\big(U^2_{\phi,3,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber +B\,\big(U^2_{\phi,2,\lambda}\big)_{|x_d,\chi_d=0}+B\,\big(U^{2,\nc}_{\phi,\lambda}\big)_{|x_d,\chi_d=0}&=G_{\lambda}, \end{align} where we have expanded the source term $ G $ in Fourier series as \begin{equation*} G(z',\Theta)=\sum_{\lambda\in\Z}G_{\lambda}(z')\,e^{i\lambda\Theta}, \end{equation*} and where we have denoted by $ U^{2,\nc}_{\phi,\lambda} $ the sum of all the terms of $ U^{2,\nc} $ whose associated frequency has trace on the boundary equal to $ \lambda\phi $, namely, \begin{multline*} U^{2,\nc}_{\phi,\lambda}=-\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2,\\\lambda_1,\lambda_2)\in\mathcal{NR}\\\lambda_1\zeta_1+\lambda_2\zeta_2=\lambda\phi}}L\big(0,\lambda_1\alpha_{j_1}(\zeta_1)+\lambda_2\alpha_{j_2}(\zeta_2)\big)^{-1}\,L_1(r_{\zeta_1,j_1},\alpha_{j_2}(\zeta_2))\,r_{\zeta_2,j_2}\,\lambda_2\,\sigma^1_{\zeta_1,j_1,\lambda_1}\,\sigma^1_{\zeta_2,j_2,\lambda_2}\,\\ e^{i(\lambda_1\mathbf{n}_1+\lambda_2\mathbf{n}_2)\cdot\theta}\,e^{i(\lambda_1\xi_{j_1}(\zeta_1)+\lambda_2\xi_{j_2}(\zeta_2))\chi_d}. \end{multline*} We now investigate which frequencies occur in this sum.
If, for $ i=1,2 $, we write $ \zeta_i=\mathfrak{m}_i\phi+\mathfrak{n}_i\psi $ with $ (\mathfrak{m}_i,\mathfrak{n}_i)\in\B_{\Z^2} $, then, since only frequencies lifted from $ \phi,\psi,\nu $ occur in $ \mathcal{NR} $, we necessarily have ($ \mathfrak{m}_i=1 $ and $ \mathfrak{n}_i=0 $) or ($ \mathfrak{m}_i=0 $ and $ \mathfrak{n}_i=1 $) or ($ \mathfrak{m}_i=\lambda_{\phi} $ and $ \mathfrak{n}_i=\lambda_{\psi} $). In this notation, the condition $ \lambda_1\zeta_1+\lambda_2\zeta_2=\lambda\phi $ is equivalent to \begin{equation*} \left\lbrace\begin{array}{l} \lambda_1\,\mathfrak{m}_1+\lambda_2\,\mathfrak{m}_2=\lambda\\ \lambda_1\,\mathfrak{n}_1+\lambda_2\,\mathfrak{n}_2=0, \end{array}\right. \end{equation*} and using that $ \lambda_{\phi},\lambda_{\psi} $ are coprime integers, we find that this system admits the following solutions \begin{equation*} \left\lbrace\begin{array}{l} (\mathfrak{m}_1,\mathfrak{n}_1)=(1,0)\\ (\mathfrak{m}_2,\mathfrak{n}_2)=(1,0)\\ \lambda_1+\lambda_2=\lambda \end{array}\right., \quad \left\lbrace\begin{array}{l} (\mathfrak{m}_1,\mathfrak{n}_1)=(0,1)\\ (\mathfrak{m}_2,\mathfrak{n}_2)=(\lambda_{\phi},\lambda_{\psi})\\ (\lambda,\lambda_1,\lambda_2)=k\,(\lambda_{\phi},-\lambda_{\psi},1) \end{array}\right., \mbox{ and } \left\lbrace\begin{array}{l} (\mathfrak{m}_1,\mathfrak{n}_1)=(\lambda_{\phi},\lambda_{\psi})\\ (\mathfrak{m}_2,\mathfrak{n}_2)=(0,1)\\ (\lambda,\lambda_1,\lambda_2)=k\,(\lambda_{\phi},1,-\lambda_{\psi}) \end{array}\right..
\end{equation*} Selecting only 6-tuples of $ \mathcal{NR} $, we obtain that $ U^{2,\nc}_{\phi,\lambda} $ is equal to \begin{align}\label{eq 1cor U 2 nc phi} U^{2,\nc}_{\phi,\lambda}&=\indicatrice_{\lambda=k\lambda_{\phi}}\sum_{\substack{j_1,j_2=1,2,3 \\ (j_1,j_2)\neq(2,1), (2,2)}} L\big(0,k\nu_{j_1}-\lambda_{\psi}k\psi_{j_2}\big)^{-1}\,\big\lbrace \lambda_{\psi}\,L_1(r_{\nu,j_1},\psi_{j_2})\,r_{\psi,j_2}\\\nonumber &\qquad\qquad-L_1(r_{\psi,j_2},\nu_{j_1})\,r_{\nu,j_1}\big\rbrace \,k\,\sigma^1_{\nu,j_1,k}\,\sigma^1_{\psi,j_2,-\lambda_{\psi}k}\, e^{ik\lambda_{\phi}\theta_1}\,e^{i(k\xi_{j_1}(\nu)-\lambda_{\psi}k\xi_{j_2}(\psi))\chi_d}\\\nonumber &\quad-\sum_{\lambda_1+\lambda_2=\lambda} L\big(0,\lambda_1\phi_1+\lambda_2\phi_3\big)^{-1}\big\lbrace \lambda_1 \,L_1(r_{\phi,3},\phi_{1})\,r_{\phi,1}\\\nonumber &\qquad\qquad+ \lambda_2\, L_1(r_{\phi,1},\phi_{3})\,r_{\phi,3}\big\rbrace\,\sigma^1_{\phi,1,\lambda_1}\,\sigma^1_{\phi,3,\lambda_2}\, e^{i\lambda\,\theta_1}\,e^{i(\lambda_1\xi_1(\phi)+\lambda_2\xi_3(\phi))\chi_d}. \end{align} Since the vectors $ P_{\phi,1}\,\big(U^2_{\phi,1,\lambda}\big)_{|x_d,\chi_d=0} $ and $ P_{\phi,3}\,\big(U^2_{\phi,3,\lambda}\big)_{|x_d,\chi_d=0} $ in \eqref{eq 1cor boundary cond phi} are respectively in $ E_-^1(\phi) $ and $ E_-^3(\phi) $, by definition \eqref{eq def b zeta} of $ b_{\phi} $, we have \begin{equation*} b_{\phi}\cdot B\,P_{\phi,1}\,\big(U^2_{\phi,1,\lambda}\big)_{|x_d,\chi_d=0} = b_{\phi}\cdot B\, P_{\phi,3}\,\big(U^2_{\phi,3,\lambda}\big)_{|x_d,\chi_d=0} =0. 
\end{equation*} So if we take the scalar product of $ b_{\phi} $ with equality \eqref{eq 1cor boundary cond phi} multiplied by $ i\lambda $, using \eqref{eq 1cor nonpola part U 2 zeta}, \eqref{eq 1cor U 2 nc phi} and the boundary conditions \eqref{eq prof princ bord phi} for the leading profile associated with $ \lambda\phi $, we get the amplitude equation \begin{multline}\label{eq 1cor eq evol a phi} \color{altred}X^{\Lop}_{\phi}\,a^1_{\phi,\lambda} +D^{\Lop}_{\phi}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda_2\,a^1_{\phi,\lambda_1}\,a^1_{\phi,\lambda_2}+i\lambda\sum_{\lambda_1+\lambda_3=\lambda}\gamma_{\phi}(\lambda_1,\lambda_3)\,a^1_{\phi,\lambda_1}\,a^1_{\phi,\lambda_3}\\[5pt]\color{altred} +\indicatrice_{\lambda=k\lambda_{\phi}}\,\Gamma^{\phi}\,ik\,a^1_{\psi,-k\lambda_{\psi}}\,\big(\sigma^1_{\nu,2,-k}\big)_{|x_d=0}=i\lambda\,b_{\phi}\cdot B\,\big(U^2_{\phi,2,\lambda}\big)_{|x_d,\chi_d=0}-i\lambda\,b_{\phi}\cdot G_{\lambda}, \end{multline} \color{black} with \begin{subequations}\label{eq 1cor eq X lop v phi etc} \begin{align} X^{\Lop}_{\phi}&:=b_{\phi}\cdot B\,\Big(R_{\phi,1}\,L(0,\partial_z)\,e_{\phi,1}+R_{\phi,3}\,L(0,\partial_z)\,e_{\phi,3}\Big),\\ D^{\Lop}_{\phi}&:=b_{\phi}\cdot B\,\Big(R_{\phi,1}\,L_1\big(e_{\phi,1},\phi_1\big)\,e_{\phi,1}+R_{\phi,3}\,L_1\big(e_{\phi,3},\phi_3\big)\,e_{\phi,3}\Big)\label{eq 1cor eq v phi},\\[5pt]\label{eq 1cor eq gamma phi lambda 1 lambda 3} \gamma_{\phi}(\lambda_1,\lambda_3)&:=b_{\phi}\cdot B\,L\big(0,\lambda_1\phi_1+\lambda_3\phi_3\big)^{-1}\big\lbrace \lambda_1 \,L_1(e_{\phi,3},\phi_{1})\,e_{\phi,1}+ \lambda_3\, L_1(e_{\phi,1},\phi_{3})\,e_{\phi,3}\big\rbrace,\\[5pt] \Gamma^{\phi}&:=b_{\phi}\cdot B\,R_{\phi,1}\,\Big( L_1(e_{\psi,1},-\nu_2)\,r_{\nu,2}+L_1(r_{\nu,2},-\lambda_{\psi}\psi_1)\,e_{\psi,1}\Big)\label{eq 1cor eq gamma phi 1}\\ &\nonumber\quad+\lambda_{\phi}\,b_{\phi}\cdot B\sum_{\substack{j_1=1,3,j_2=1,2,3 \\ (j_1,j_2)\neq(1,2)}}\mu_{\nu,j_2}\, L\big(0,\nu_{j_2}-\lambda_{\psi}\psi_{j_1}\big)^{-1}\,\Big\lbrace
L_1(r_{\psi,j_1},\nu_{j_2})\,r_{\nu,j_2}\\ &\nonumber\qquad-L_1(r_{\nu,j_2},\psi_{j_1})\,r_{\psi,j_1}\,\lambda_{\psi}\Big\rbrace. \end{align} \end{subequations} Equation \eqref{eq 1cor eq evol a phi} differs from the analogous one of \cite[equation (2.19)]{CoulombelWilliams2017Mach} by the two terms $ \indicatrice_{\lambda=k\lambda_{\phi}}\,\Gamma^{\phi}\,ik\,a^1_{\psi,-k\lambda_{\psi}}\,\big(\sigma^1_{\nu,2,-k}\big)_{|x_d=0}$ and $ i\lambda\,b_{\phi}\cdot B\,\big(U^2_{\phi,2,\lambda}\big)_{|x_d,\chi_d=0} $. The first one appears here because of the resonances, and the second one was absent from \cite{CoulombelWilliams2017Mach} because the profiles $ U^2_{\phi,2,\lambda} $ were zero there\footnote{Note that here, with a more precise analysis, we could also show that every profile $ U^n_{\phi,2,\lambda} $ is zero, since frequency $ \phi_2 $ does not occur in any resonance. We choose however not to detail it, in order to simplify the whole analysis, since $ U^n_{\psi,2,\lambda} $ will not be zero, and since proving that $ U^n_{\phi,2,\lambda} $ is zero would not simplify the solving of the equations}. Computing $ L\big(0,\lambda_1\phi_1+\lambda_3\phi_3\big)^{-1} $ on $ r_{\phi,2} $ leads to the following alternative expression for $ \gamma_{\phi}(\lambda_1,\lambda_3) $: \begin{equation*} \gamma_{\phi}(\lambda_1,\lambda_3)=b_{\phi}\cdot B\,r_{\phi,2}\,\frac{i\lambda_1\,\ell_{\phi,2} \,E^{\phi}_{3,1}+ i\lambda_3\,\ell_{\phi,2}\, E^{\phi}_{1,3}}{\lambda_1\,\big(\xi_1(\phi)-\xi_2(\phi)\big)+\lambda_3\,\big(\xi_3(\phi)-\xi_2(\phi)\big)}, \end{equation*} where we have denoted \begin{equation*} E^{\phi}_{3,1}:=L_1(e_{\phi,3},\phi_{1})\,e_{\phi,1},\qquad E^{\phi}_{1,3}:=L_1(e_{\phi,1},\phi_{3})\,e_{\phi,3}. \end{equation*} This rewriting, which can be found in \cite{CoulombelWilliams2017Mach}, will be useful in the following to study the bilinear operator associated with the symbol $ \gamma_{\phi}(\lambda_1,\lambda_3) $ of \eqref{eq 1cor eq gamma phi lambda 1 lambda 3}.
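As an elementary sanity check of the frequency bookkeeping above, the linear system $ \lambda_1\,\mathfrak{m}_1+\lambda_2\,\mathfrak{m}_2=\lambda $, $ \lambda_1\,\mathfrak{n}_1+\lambda_2\,\mathfrak{n}_2=0 $ can also be solved by brute force; the short script below is a sketch using illustrative coprime values $ \lambda_{\phi}=2 $, $ \lambda_{\psi}=3 $ (these specific values are not taken from the text, only coprimality matters), and it recovers exactly the three solution families listed above.

```python
from itertools import product

# Illustrative coprime stand-ins for the abstract integers lambda_phi, lambda_psi.
lp, lq = 2, 3
pairs = [(1, 0), (0, 1), (lp, lq)]   # admissible (m_i, n_i) for phi, psi, nu
N = 12                               # brute-force search range for l1, l2

def in_family(p1, p2, l1, l2):
    """True iff a solution of l1*p1 + l2*p2 = (lam, 0) lies in one of the three families."""
    if p1 == (1, 0) and p2 == (1, 0):
        return True                   # self-interaction family: lam = l1 + l2
    if p1 == (0, 1) and p2 == (lp, lq):
        return l1 == -l2 * lq         # (lam, l1, l2) = k (lp, -lq, 1)
    if p1 == (lp, lq) and p2 == (0, 1):
        return l2 == -l1 * lq         # (lam, l1, l2) = k (lp, 1, -lq)
    return False

# Enumerate all solutions with nonzero l1, l2 and nonzero lam.
solutions = [
    (p1, p2, l1, l2)
    for p1, p2 in product(pairs, repeat=2)
    for l1 in range(-N, N + 1) if l1 != 0
    for l2 in range(-N, N + 1) if l2 != 0
    if l1 * p1[1] + l2 * p2[1] == 0 and l1 * p1[0] + l2 * p2[0] != 0
]

# Every solution found falls into one of the three families of the case analysis.
assert solutions and all(in_family(*s) for s in solutions)
print("all", len(solutions), "solutions fall into the three families")
```

Since only the coprimality of $ \lambda_{\phi} $ and $ \lambda_{\psi} $ enters the argument, other coprime choices yield the same classification.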
Finally, according to Lemma \ref{lemme Lax bord}, the operator $ X_{\phi}^{\Lop} $ is actually equal to the tangential vector field \begin{equation*} \beta_{\phi}\Big(\partial_t+\nabla_{\eta}\kappa(\phi)\cdot\nabla_y\Big), \end{equation*} which we still denote by $ X_{\phi}^{\Lop} $. \bigskip Similarly, for $ \psi $ we have \begin{align}\label{eq 1cor boundary cond psi} B\,P_{\psi,1}\,\big(U^2_{\psi,1,\lambda}\big)_{|x_d,\chi_d=0}+B\,P_{\psi,3}\,\big(U^2_{\psi,3,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber +B\,(I-P_{\psi,1})\,\big(U^2_{\psi,1,\lambda}\big)_{|x_d,\chi_d=0}+B\,(I-P_{\psi,3})\,\big(U^2_{\psi,3,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber +B\,\big(U^2_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0}+B\,\big(U^{2,\nc}_{\psi,\lambda}\big)_{|x_d,\chi_d=0}&=0, \end{align} where the vector $ U^{2,\nc}_{\psi,\lambda} $ is given by \begin{align}\label{eq 1cor U 2 nc psi} U^{2,\nc}_{\psi,\lambda}&=\indicatrice_{\lambda=k\lambda_{\psi}}\sum_{\substack{j_1=1,3,j_2=1,2,3 \\ (j_1,j_2)\neq(1,2), (3,2)}} L\big(0,k\nu_{j_2}-\lambda_{\phi}k\phi_{j_1}\big)^{-1}\,\big\lbrace \lambda_{\phi}\,L_1(r_{\nu,j_2},\phi_{j_1})\,r_{\phi,j_1}\\\nonumber &\qquad\qquad-L_1(r_{\phi,j_1},\nu_{j_2})\,r_{\nu,j_2}\big\rbrace\,k\,\sigma_{\nu,j_2,k}^1\,\sigma_{\phi,j_1,-\lambda_{\phi}k}^1\, e^{ik\lambda_{\psi}\theta_2}\,e^{i(k\xi_{j_2}(\nu)-\lambda_{\phi}k\xi_{j_1}(\phi))\chi_d}\\[5pt]\nonumber &\quad-\sum_{\substack{j_1,j_2=1,2,3\\ j_1\neq j_2}}\sum_{\lambda_1+\lambda_2=\lambda}L\big(0,\lambda_1\psi_{j_1}+\lambda_2\psi_{j_2}\big)^{-1}\,L_1(r_{\psi,j_1},\psi_{j_2})\,r_{\psi,j_2}\,\lambda_2\,\\\nonumber &\qquad\qquad\sigma^1_{\psi,j_1,\lambda_1}\,\sigma^1_{\psi,j_2,\lambda_2}\, e^{i\lambda\,\theta_2}\,e^{i(\lambda_1\xi_{j_1}(\psi)+\lambda_2\xi_{j_2}(\psi))\chi_d}.
\end{align} If we take the scalar product of $ b_{\psi} $ with equality \eqref{eq 1cor boundary cond psi} multiplied by $ i\lambda $, using \eqref{eq 1cor nonpola part U 2 zeta}, \eqref{eq 1cor U 2 nc psi} and the boundary conditions \eqref{eq prof princ bord psi} for the leading profile associated with $ \lambda\psi $, we get the second amplitude equation \begin{multline}\label{eq 1cor eq evol a psi} \color{altorange2}X^{\Lop}_{\psi}\,a^1_{\psi,\lambda} +v_{\psi}\,\sum_{\lambda_1+\lambda_2=\lambda}i\lambda_2\,a^1_{\psi,\lambda_1}\,a^1_{\psi,\lambda_2} +i\lambda\sum_{\lambda_1+\lambda_3=\lambda}\gamma_{\psi}(\lambda_1,\lambda_3)\,a^1_{\psi,\lambda_1}\,a^1_{\psi,\lambda_3}\\[5pt]\color{altorange2} +\indicatrice_{\lambda=k\lambda_{\psi}}\,\Gamma^{\psi}\,ik\,\big(\sigma^1_{\nu,2,k}\big)_{|x_d=0}\,a^1_{\phi,-\lambda_{\phi}k}=i\lambda\,b_{\psi}\cdot B\,\big(U^2_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0}, \end{multline} \color{black} with, using once again Lemma \ref{lemme Lax bord}, \begin{subequations}\label{eq 1cor eq X lop v psi etc} \begin{align} X^{\Lop}_{\psi}&:=b_{\psi}\cdot B\Big(R_{\psi,1}\,L(0,\partial_z)\,e_{\psi,1}+R_{\psi,3}\,L(0,\partial_z)\,e_{\psi,3}\Big)\nonumber\\ &\ =\beta_{\psi}\Big(\partial_t+\nabla_{\eta}\kappa(\psi)\cdot\nabla_y\Big),\\ v_{\psi}&:=b_{\psi}\cdot B\Big(R_{\psi,1}\,L_1\big(e_{\psi,1},\psi_1\big)\,e_{\psi,1}+R_{\psi,3}\,L_1\big(e_{\psi,3},\psi_3\big)\,e_{\psi,3}\Big),\label{eq 1cor eq v psi}\\[5pt] \gamma_{\psi}(\lambda_1,\lambda_3)&:= b_{\psi}\cdot B\,L\big(0,\lambda_1\psi_1+\lambda_3\psi_3\big)^{-1}\big\lbrace \lambda_1 \,L_1(e_{\psi,3},\psi_{1})\,e_{\psi,1}+ \lambda_3\, L_1(e_{\psi,1},\psi_{3})\,e_{\psi,3}\big\rbrace,\\[5pt] \Gamma^{\psi}&:=b_{\psi}\cdot B\,R_{\psi,1}\,\Big( L_1(e_{\phi,1},-\nu_2)\,r_{\nu,2}+L_1(r_{\nu,2},-\lambda_{\phi}\phi_1)\,e_{\phi,1}\Big)\\ &\quad+\lambda_{\psi}\,b_{\psi}\cdot B\sum_{\substack{j_1=1,3,j_2=1,2,3 \\ (j_1,j_2)\neq(1,2),
(3,2)}}\mu_{\nu,j_2}\,L\big(\nu_{j_2}-\lambda_{\phi}\phi_{j_1}\big)^{-1}\,\big\lbrace L_1(e_{\phi,j_1},\nu_{j_2})\,r_{\nu,j_2} \nonumber\\ &\qquad\qquad-\lambda_{\phi}\,L_1(r_{\nu,j_2},\phi_{j_1})\,e_{\phi,j_1}\big\rbrace.\nonumber \end{align} \end{subequations} Again, comparing to \cite[equation (2.19)]{CoulombelWilliams2017Mach}, equation \eqref{eq 1cor eq evol a psi} features two additional terms $ \indicatrice_{\lambda=k\lambda_{\psi}}\,\Gamma^{\psi}\,ik\,\big(\sigma_{\nu,2,k}\big)_{|x_d=0}\,a^1_{\phi,-\lambda_{\phi}k} $ and $ i\lambda\,b_{\psi}\cdot B\,\big(U^2_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0} $ for the same reason. There is no boundary forcing term here because the one for $ \psi $ is of order $ O(\epsilon^3) $. In the same way as for $ \phi $ we have \begin{equation*} \gamma_{\psi}(\lambda_1,\lambda_3)=b_{\psi}\cdot B\,r_{\psi,2}\,\frac{i\lambda_1\,\ell_{\psi,2} \,E^{\psi}_{3,1}+ i\lambda_3\,\ell_{\psi,2}\, E^{\psi}_{1,3}}{\lambda_1\,\big(\xi_1(\psi)-\xi_2(\psi)\big)+\lambda_3\,\big(\xi_3(\psi)-\xi_2(\psi)\big)}, \end{equation*} where we have denoted \begin{equation*} E^{\psi}_{3,1}:=L_1(e_{\psi,3},\psi_{1})\,e_{\psi,1},\qquad E^{\psi}_{1,3}:=L_1(e_{\psi,1},\psi_{3})\,e_{\psi,3}. \end{equation*} In conclusion, in this paragraph, we have determined the nonpolarized part of the first corrector, with \eqref{eq 1cor nonpola part U 2 zeta}, and the evolution equations \eqref{eq 1cor eq evol a phi} and \eqref{eq 1cor eq evol a psi} satisfied by $ a^1_{\phi,\lambda} $ and $ a^1_{\psi,\lambda} $. However, the obtained system of equations is still not closed, since equations \eqref{eq 1cor eq evol a phi} and \eqref{eq 1cor eq evol a psi} involve traces of the first corrector $ U_2 $, of which the polarized part is still undetermined.
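\bigskip At a purely informal level, this closure mechanism can be pictured as the diagram
\begin{equation*}
U_1\;\longrightarrow\;(I-P)\,U_2,\ U_2^*\;\longrightarrow\;P\,U_2\;\longrightarrow\;(I-P)\,U_3\;\longrightarrow\;\cdots
\end{equation*}
where each arrow indicates that the quantities on its left enter the (interior or boundary) equations determining the quantities on its right.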
Therefore we proceed inductively, by deriving equations for the polarized part of the first corrector $ U_2 $, and then studying the nonpolarized part of the second corrector $ U_3 $, in order to obtain evolution equations on the boundary terms for the polarized part of $ U_2 $. \subsubsection{Polarized part of the first corrector} For $ \zeta=\mathbf{n}\cdot\boldsymbol{\zeta}\in\F_b\privede{0} $ with $ \mathbf{n}\in\B_{\Z^2} $, for $ j\in\mathcal{C}(\zeta) $ and $ \lambda\in\Z^* $, we decompose the profile $ U^{2,\osc}_{\mathbf{n},j,\lambda} $ as $ U^{2,\osc}_{\mathbf{n},j,\lambda}=P_{\zeta,j}\,U^{2,\osc}_{\mathbf{n},j,\lambda}+(I-P_{\zeta,j})\,U^{2,\osc}_{\mathbf{n},j,\lambda} $. Recall that the nonpolarized part $ (I-P_{\zeta,j})\,U^{2,\osc}_{\mathbf{n},j,\lambda} $ is given by \eqref{eq 1cor nonpola part U 2 zeta}, so it remains to determine the polarized part, which is written as \begin{equation*} P_{\zeta,j}\,U^{2,\osc}_{\mathbf{n},j,\lambda}=\sigma_{\zeta,j,\lambda}^2\,r_{\zeta,j}, \end{equation*} with $ \sigma^2_{\zeta,j,\lambda} $ a scalar function of $ \Omega_T $. We start by determining the mean value $ U_2^* $, since, as in the general case of a corrector $ U_n $, the mean value $ U_n^* $ appears in the equations for the polarized components.
According to equation \eqref{eq cascade int U2}, $ U_2^* $ satisfies the equation \color{altpink} \begin{align*} L(0,\partial_z)\,U^*_2+\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{j\in\mathcal{C}(\freq)}\sum_{\lambda\in\Z^*}L_1\big(r_{\mathbf{n}\cdot\boldsymbol{\zeta},j},-i\,\lambda\,\alpha_j(\mathbf{n}\cdot\boldsymbol{\zeta})\big)\,r_{\mathbf{n}\cdot\boldsymbol{\zeta},j}\,\Big(\sigma^1_{\mathbf{n},j,\lambda}\,\sigma^2_{\mathbf{n},j,-\lambda}+\sigma^2_{\mathbf{n},j,\lambda}\,\sigma^1_{\mathbf{n},j,-\lambda}\Big)\\ +\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{j\in\mathcal{C}(\freq)}\sum_{\lambda\in\Z^*}L_1\big((I-P_{\freq,j})\,U^{2,\osc}_{\mathbf{n},j,\lambda},-i\,\lambda\,\alpha_j(\mathbf{n}\cdot\boldsymbol{\zeta})\big)\,\sigma^1_{\mathbf{n},j,-\lambda}\,r_{\freq,j}\\ +\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{j\in\mathcal{C}(\freq)}\sum_{\lambda\in\Z^*}L_1\big(\sigma^1_{\mathbf{n},j,\lambda},-i\,\lambda\,\alpha_j(\mathbf{n}\cdot\boldsymbol{\zeta})\big)\,(I-P_{\freq,j})\,U^{2,\osc}_{\mathbf{n},j,-\lambda}\,r_{\freq,j}\\ +\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{j\in\mathcal{C}(\freq)}\sum_{\lambda\in\Z^*}L_1\big(U^{1,\osc}_{\mathbf{n},j,\lambda},\partial_z\big)\,U^{1,\osc}_{\mathbf{n},j,-\lambda}&=0. \end{align*}\color{black} The change of index $ \lambda\mapsto-\lambda $ shows that the second term in the first line of the previous equation is zero. Boundary and initial conditions \eqref{eq cascade bord U12} and \eqref{eq cascade initial}, as well as writing \eqref{eq ecriture U 2} for $ U_2 $, lead to the following boundary and initial conditions for $ U_2^* $: \color{altpink} \begin{equation*} B\,\big(U_2^*\big)_{|x_d=0}=G_0-B\,\big(U^{2,\nc}_0\big)_{|x_d,\chi_d=0},\qquad B\,\big(U_2^*\big)_{|t\leq 0}=0, \end{equation*}\color{black} where $ G_0 $ is the mean value of the forcing term $ G $ and $ U^{2,\nc}_0 $ is the sum of all terms of $ U^{2,\nc} $ of which the trace on the boundary has zero frequency.
According to expressions \eqref{eq 1cor nonpola part U 2 zeta} of the nonpolarized parts and \eqref{eq 1cor expr U 2 nc} of $ U^{2,\nc} $, the mean value $ U_2^* $ satisfies an initial boundary value problem which is weakly well posed and of which the source and boundary terms depend only on the leading profile $ U_1 $ (with possibly a first order derivative applied to it). We now consider modes $ \lambda\,\alpha_j(\zeta) $ with $ \zeta=\mathbf{n}\cdot\boldsymbol{\zeta}\in\F_b\privede{0,\phi,\psi,\nu} $, $ \mathbf{n}\in\B_{\Z^2} $, $ j\in\mathcal{C}(\zeta) $ and $ \lambda\in\Z^* $. Recall that it has been proven in the previous part that for these modes, the profile $ U^{2,\osc}_{\mathbf{n},j,\lambda} $ is polarized, that is \begin{equation*} U^{2,\osc}_{\mathbf{n},j,\lambda} =\sigma^2_{\zeta,j,\lambda}\,r_{\zeta,j}. \end{equation*} Since there is no resonance generating frequency $ \alpha_j(\zeta) $, since the mean value $ U^*_1 $ is zero and since the profiles $ \sigma_{\zeta,j,\lambda}^1 $ associated with these modes are also zero, the terms $ \mathcal{M}(U_1,U_2) $, $ \mathcal{M}(U_2,U_1) $, and $ \mathcal{N}(U_1,U_1) $ contain no term of frequency $ \alpha_j(\zeta) $. Therefore, analogously to the leading profile, multiplying equation \eqref{eq cascade int Un} for $ n=3 $ on the left by $ \ell_{\zeta,j} $ leads to the following system of transport equations for the scalar functions $ \sigma_{\zeta,j,\lambda}^2 $: \begin{subequations}\label{eq 1cor systeme sigma zeta hors cas part} \begin{equation}\label{eq 1cor systeme sigma zeta hors cas part1} \color{altpurple}X_{\alpha_j(\zeta)}\,\sigma_{\zeta,j,\lambda}^2=0,\qquad \big(\sigma_{\zeta,j,\lambda}^2\big)_{|t\leq 0}=0. \end{equation} \color{black} When the frequency $ \alpha_j(\zeta) $ is outgoing, transport equation \eqref{eq 1cor systeme sigma zeta hors cas part1} leads to $ \sigma_{\zeta,j,\lambda}^2=0 $.
When the frequency $ \alpha_j(\zeta) $ is incoming, according to boundary condition \eqref{eq cascade bord U12} and decomposition \eqref{eq ecriture U 2} of $ U_2 $, since $ \zeta $ is not in $ \Upsilon $, since $ G $ does not contain any oscillation in $ \zeta $ and since the outgoing profiles $ \sigma^2_{\zeta,j,\lambda} $, $ j\in\mathcal{O}(\zeta) $, are zero, we have \begin{equation} \color{altpurple}\big(\sigma_{\zeta,j,\lambda}^2\big)_{|x_d=0}=-\ell_{\zeta,j}\,A_d(0)\,\big(B_{|E_-(\zeta)}\big)^{-1}B\,\big(U^{2,\nc}_{\zeta,\lambda}\big)_{|x_d,\chi_d=0}, \end{equation} \color{black} \end{subequations} where $ U^{2,\nc}_{\zeta,\lambda} $ is the sum of all terms of $ U^{2,\nc} $ of which the trace on the boundary is $ \lambda\,\zeta $. It is fully determined by \eqref{eq 1cor expr U 2 nc}, and thus depends only on $ \big(\sigma^1_{\zeta,j,\lambda}\big)_{|x_d=0} $ for $ \zeta=\phi,\psi,\nu $. Therefore, if the traces $ \big(\sigma^1_{\zeta,j,\lambda}\big)_{|x_d=0} $ for $ \zeta=\phi,\psi,\nu $ are determined, system \eqref{eq 1cor systeme sigma zeta hors cas part} allows us to construct the profiles $ \sigma^2_{\zeta,j,\lambda} $, for $ \zeta\in\F_b\privede{0,\phi,\psi,\nu} $ and $ j\in\mathcal{I}(\zeta) $. \bigskip We now turn to the modes $ \phi_j $, $ \psi_j $ and $ \nu_j $ for $ j=1,2,3 $.
Applying $ \ell_{\zeta,j} $ to equation \eqref{eq cascade int Un} for $ n=3 $ leads to the following equation for $ \sigma^2_{\zeta,j,\lambda} $, \begin{align}\label{eq 1cor evol sigma phi psi nu} &\color{altblue}X_{\zeta,j}\,\sigma^2_{\zeta,j,\lambda}+D_{\zeta,j}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda\,\sigma_{\zeta,j,\lambda_1}^2\,\sigma_{\zeta,j,\lambda_2}^1+\indicatrice_{\lambda=k\lambda_{\zeta}}\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2)\\\in\mathcal{R}(\zeta,j)}}J^{\zeta_2,j_2}_{\zeta_1,j_1}\,ik\,\sigma^1_{\zeta_1,j_1,-k\lambda_{\zeta_1}}\,\sigma^2_{\zeta_2,j_2,-k\lambda_{\zeta_2}}\\\nonumber\color{altblue} =&\color{altblue}-\ell_{\zeta,j}\,L(0,\partial_z)\,(I-P_{\zeta,j})\,U^{2}_{\zeta,j,\lambda} -\ell_{\zeta,j}\sum_{\lambda_1+\lambda_2=\lambda}L_1\big((I-P_{\zeta,j})\,U^2_{\zeta,j,\lambda_1},\zeta_j\big)\,r_{\zeta,j}\,i\lambda_2\,\sigma_{\zeta,j,\lambda_2}^1\\\nonumber &\color{altblue}-\ell_{\zeta,j}\sum_{\lambda_1+\lambda_2=\lambda}L_1\big(r_{\zeta,j},\zeta_j\big)\,(I-P_{\zeta,j})\,U^2_{\zeta,j,\lambda_2}\,i\lambda_2\,\sigma_{\zeta,j,\lambda_1}^1\\\nonumber &\color{altblue}-\indicatrice_{\lambda=k\lambda_{\zeta}}\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2)\\\in\mathcal{R}(\zeta,j)}}\ell_{\zeta,j}\,ik\,\Big\lbrace L_1\big(r_{\zeta_1,j_1},-\lambda_{\zeta_2}\alpha_{j_2}(\zeta_2)\big)\,(I-P_{\zeta_2,j_2})\,U^2_{\zeta_2,j_2,-k\lambda_{\zeta_2}}\,\sigma_{\zeta_1,j_1,-k\lambda_{\zeta_1}}^1\\ &\color{altblue}\nonumber\qquad+L_1\big((I-P_{\zeta_2,j_2})\,U^2_{\zeta_2,j_2,-k\lambda_{\zeta_2}},-\lambda_{\zeta_1}\alpha_{j_1}(\zeta_1)\big)\,r_{\zeta_1,j_1}\,\sigma_{\zeta_1,j_1,-k\lambda_{\zeta_1}}^1\Big\rbrace\\\nonumber &\color{altblue}-\ell_{\zeta,j}\sum_{\lambda_1+\lambda_2=\lambda}L_1\big(\sigma_{\zeta,j,\lambda_1}^1\,r_{\zeta,j},\partial_z\big)\,\sigma_{\zeta,j,\lambda_2}^1\,r_{\zeta,j}\\\nonumber &\color{altblue}-\indicatrice_{\lambda=k\lambda_{\zeta}}\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2)\\\in\mathcal{R}(\zeta,j)}}\ell_{\zeta,j}\, 
L_1\big(\sigma_{\zeta_1,j_1,-k\lambda_{\zeta_1}}^1\,r_{\zeta_1,j_1},\partial_z\big)\,\sigma_{\zeta_2,j_2,-k\lambda_{\zeta_2}}^1\,r_{\zeta_2,j_2}. \end{align} \color{black} Note that the source term on the right-hand side of equation \eqref{eq 1cor evol sigma phi psi nu} only depends on the leading profile $ U_1 $, according to formula \eqref{eq 1cor nonpola part U 2 zeta} for the nonpolarized parts, with possibly second order derivatives applied to it (since, in the expression of the nonpolarized part, first order derivatives are applied to $ U_1 $). For the incoming frequencies $ \phi_1 $, $ \phi_3 $, $ \psi_1 $, $ \psi_3 $, $ \nu_1 $ and $ \nu_3 $, boundary conditions must be determined to solve the above transport equations \eqref{eq 1cor evol sigma phi psi nu}. We have already seen that boundary equation \eqref{eq cascade bord U12} for $ U_2 $ reads, for mode $ \lambda\phi $, as \eqref{eq 1cor boundary cond phi}. From this boundary condition, according to decomposition \eqref{eq decomp E_-(zeta)} of $ E_-(\zeta) $ and relation \eqref{eq def b zeta} defining the vector $ b_{\phi} $, we get the following necessary solvability condition \begin{align*} b_{\phi}\cdot\Big(B\,(I-P_{\phi,1})\,\big(U^2_{\phi,1,\lambda}\big)_{|x_d,\chi_d=0}+B\,(I-P_{\phi,3})\,\big(U^2_{\phi,3,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber +B\,\big(U^2_{\phi,2,\lambda}\big)_{|x_d,\chi_d=0}+B\,\big(U^{2,\nc}_{\phi,\lambda}\big)_{|x_d,\chi_d=0}\Big)&=b_{\phi}\cdot G_{\lambda}, \end{align*} which is satisfied as soon as the scalar functions $ a^1_{\phi,\lambda} $ satisfy the evolution equation \eqref{eq 1cor eq evol a phi}, since these two equations are different writings of the same one.
Thus we obtain, for $ j=1,3 $, in a similar manner as for the leading profile, \begin{equation}\label{eq 1cor trace phi} \color{altred}\big(\sigma_{\phi,j,\lambda}^2\big)_{|x_d=0}\,r_{\phi,j}=a_{\phi,\lambda}^2\,e_{\phi,j} +\tilde{F}^2_{\phi,j,\lambda}, \end{equation} \color{black} with $ a^2_{\phi,\lambda} $ a scalar function defined on $ \omega_T $ and where, for $ j=1,3 $, we have denoted by $ \tilde{F}^2_{\phi,j,\lambda} $ the function \begin{align*} \tilde{F}^2_{\phi,j,\lambda}&:=\ell_{\phi,j}\cdot A_d(0)\,\big(B_{|E_-(\phi)}\big)^{-1}\Big(G_{\lambda}-B\,(I-P_{\phi,1})\,\big(U^2_{\phi,1,\lambda}\big)_{|x_d,\chi_d=0}\\ &\quad-B\,(I-P_{\phi,3})\,\big(U^2_{\phi,3,\lambda}\big)_{|x_d,\chi_d=0}-B\,(I-P_{\phi,2})\,\big(U^2_{\phi,2,\lambda}\big)_{|x_d,\chi_d=0}\\ &\quad-B\,\big(\sigma_{\phi,2,\lambda}^2\big)_{|x_d=0}\,r_{\phi,2}-B\,\big(U^{2,\nc}_{\phi,\lambda}\big)_{|x_d,\chi_d=0}\Big)\,r_{\phi,j}. \end{align*} In the same way, for $ \psi $, from \eqref{eq 1cor boundary cond psi}, the following condition must be satisfied: \begin{align*} b_{\psi}\cdot\Big(B\,(I-P_{\psi,1})\,\big(U^2_{\psi,1,\lambda}\big)_{|x_d,\chi_d=0}+B\,(I-P_{\psi,3})\,\big(U^2_{\psi,3,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber +B\,\big(U^2_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0}+B\,\big(U^{2,\nc}_{\psi,\lambda}\big)_{|x_d,\chi_d=0}\Big)&=0, \end{align*} and this is the case when the scalar functions $ a^1_{\psi,\lambda} $ satisfy equation \eqref{eq 1cor eq evol a psi}.
Therefore, for $ j=1,3 $, \begin{align}\label{eq 1cor trace psi} \color{altorange2} \big(\sigma_{\psi,j,\lambda}^2\big)_{|x_d=0}\,r_{\psi,j}=a_{\psi,\lambda}^2\,e_{\psi,j} +\tilde{F}^2_{\psi,j,\lambda}, \end{align} \color{black} with $ a^2_{\psi,\lambda} $ a scalar function of $ \omega_T $, and where we have denoted by $ \tilde{F}^2_{\psi,j,\lambda} $ the function \begin{align*} \tilde{F}^2_{\psi,j,\lambda}&:=-\ell_{\psi,j}\cdot A_d(0)\,\big(B_{|E_-(\psi)}\big)^{-1}\Big(B\,(I-P_{\psi,1})\,\big(U^2_{\psi,1,\lambda}\big)_{|x_d,\chi_d=0}\\ &\quad+B\,(I-P_{\psi,3})\,\big(U^2_{\psi,3,\lambda}\big)_{|x_d,\chi_d=0}+B\,(I-P_{\psi,2})\,\big(U^2_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0}\\ &\quad -B\,\big(\sigma_{\psi,2,\lambda}^2\big)_{|x_d=0}\,r_{\psi,2}+B\,\big(U^{2,\nc}_{\psi,\lambda}\big)_{|x_d,\chi_d=0}\Big). \end{align*} Note that expressions \eqref{eq 1cor trace phi} and \eqref{eq 1cor trace psi} of the incoming traces $ \big(\sigma_{\phi,j,\lambda}^2\big)_{|x_d=0} $ and $ \big(\sigma_{\psi,j,\lambda}^2\big)_{|x_d=0} $ for $ j=1,3 $ are coupled, respectively, to the outgoing traces $ \big(\sigma_{\phi,2,\lambda}^2\big)_{|x_d=0} $ and $ \big(\sigma_{\psi,2,\lambda}^2\big)_{|x_d=0} $ through the terms $ \tilde{F}^2_{\phi,j,\lambda} $ and $ \tilde{F}^2_{\psi,j,\lambda} $. Finally, for amplitudes associated with the boundary phase $ \nu $, we need to write a boundary condition for the first corrector.
Boundary condition \eqref{eq cascade bord U12} for $ U_2 $ reads \begin{align*} B\,P_{\nu,1}\,\big(U^2_{\nu,1,\lambda}\big)_{|x_d,\chi_d=0}+B\,P_{\nu,3}\,\big(U^2_{\nu,3,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber +B\,(I-P_{\nu,1})\,\big(U^2_{\nu,1,\lambda}\big)_{|x_d,\chi_d=0}+B\,(I-P_{\nu,3})\,\big(U^2_{\nu,3,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber +B\,\big(U^2_{\nu,2,\lambda}\big)_{|x_d,\chi_d=0}+B\,\big(U^{2,\nc}_{\nu,\lambda}\big)_{|x_d,\chi_d=0}&=0, \end{align*} where $ U^{2,\nc}_{\nu,\lambda} $ is the sum of all the terms of $ U^{2,\nc} $ of which the trace on the boundary of the associated frequency is equal to $ \lambda\,\nu $, which is fully determined by \eqref{eq 1cor expr U 2 nc}. Therefore, since $ \nu\in\F_{b}\setminus \Upsilon $, we get, for $ j=1,3 $, \begin{align}\label{eq 1cor trace nu} \color{altgreen}\big(\sigma_{\nu,j,\lambda}^2\big)_{|x_d=0}\,r_{\nu,j}=\mu_{\nu,j}\,\big(\sigma_{\nu,2,\lambda}^2\big)_{|x_d=0}\,r_{\nu,j}+\tilde{F}^2_{\nu,j,\lambda}, \end{align} \color{black} where $ \mu_{\nu,j} $ has been defined in equation \eqref{eq prof princ def mu nu} and where we have denoted by $ \tilde{F}^2_{\nu,j,\lambda} $ the function \begin{multline*} \tilde{F}^2_{\nu,j,\lambda}:=-\ell_{\nu,j}\cdot A_d(0)\,\big(B_{|E_-(\nu)}\big)^{-1}\Big(B\,(I-P_{\nu,1})\,\big(U^2_{\nu,1,\lambda}\big)_{|x_d,\chi_d=0}\\ +B\,(I-P_{\nu,3})\,\big(U^2_{\nu,3,\lambda}\big)_{|x_d,\chi_d=0}+B\,(I-P_{\nu,2})\,\big(U^2_{\nu,2,\lambda}\big)_{|x_d,\chi_d=0}+B\,\big(U^{2,\nc}_{\nu,\lambda}\big)_{|x_d,\chi_d=0}\Big). \end{multline*} In the same way as for the leading profile, we need to investigate the nonpolarized part of the second corrector to find equations on $ a_{\phi,\lambda}^2 $ and $ a_{\psi,\lambda}^2 $. \subsubsection{Nonpolarized part of the second corrector} We follow the same analysis as for the first corrector. 
With similar arguments we get that the second corrector $ U_3 $ reads \begin{align}\label{eq ecriture U 3} U_3(z,\theta,\chi_d)&=U^*_3(z)+\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{j\in\mathcal{C}(\freq)}\sum_{\lambda\in\Z^*}U^{3,\osc}_{\mathbf{n},j,\lambda}(z)\,e^{i\,\lambda\,\mathbf{n}\cdot\theta}\,e^{i\,\lambda\,\xi_j(\freq)\,\chi_d} \\\nonumber &\quad+\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{\lambda\in\Z^*}e^{\chi_d\mathcal{A}(\mathbf{n}\cdot\boldsymbol{\zeta})}\,\Pi^e(\mathbf{n}\cdot\boldsymbol{\zeta})\,U^{3,\ev}_{\mathbf{n}}(z,0)\,e^{i\,\lambda\,\mathbf{n}\cdot\theta}+U^{3,\nc}(z,\theta,\chi_d), \end{align} with $ U_3^* $ the mean value of $ U_3 $, $ U^{3,\nc} $ the noncharacteristic terms, and where, for $ \zeta=\freq\in\F_{b}\privede{0} $, $ \mathbf{n}\in\B_{\Z^2} $, $ j\in\mathcal{C}(\zeta) $ and $ \lambda\in\Z^* $, the profile $ U^{3,\osc}_{\mathbf{n},j,\lambda} $ decomposes as \begin{equation*} U^{3,\osc}_{\mathbf{n},j,\lambda}=\sigma^3_{\zeta,j,\lambda}\,r_{\zeta,j}+\big(I-P_{\zeta,j}\big)\,U^{3,\osc}_{\mathbf{n},j,\lambda}, \end{equation*} with $ \sigma_{\zeta,j,\lambda}^3 $ a scalar function of $ \Omega_T $.
Furthermore, according to \eqref{eq cascade int Un} for $ n=2 $, the noncharacteristic part $ U^{3,\nc} $ is given by \begin{align}\label{eq 2cor expr U 3 nc} \mathcal{L}(&\partial_{\theta},\partial_{\chi_d})\,U^{3,\nc}=\\\nonumber &-\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2,\\\lambda_1,\lambda_2)\in\mathcal{NR}}}\Big(L_1\big(U^{1,\osc}_{\zeta_1,j_1,\lambda_1},i\,\lambda_2\alpha_{j_2}(\zeta_2)\big)\,U^{2,\osc}_{\zeta_2,j_2,\lambda_2} +L_1\big(U^{2,\osc}_{\zeta_1,j_1,\lambda_1},i\,\lambda_2\alpha_{j_2}(\zeta_2)\big)\,U^{1,\osc}_{\zeta_2,j_2,\lambda_2}\\\nonumber &+L_1\big(U^{1,\osc}_{\zeta_1,j_1,\lambda_1},\partial_z\big)\,U^{1,\osc}_{\zeta_2,j_2,\lambda_2}\Big)\, e^{i(\lambda_1\mathbf{n}_1+\lambda_2\mathbf{n}_2)\cdot\theta}\,e^{i(\lambda_1\xi_{j_1}(\zeta_1)+\lambda_2\xi_{j_2}(\zeta_2))\chi_d}, \end{align} where the set $ \mathcal{NR} $ of nonresonant frequencies has already been defined. Since all frequencies in $ U^{3,\nc} $ are noncharacteristic, equation \eqref{eq 2cor expr U 3 nc} fully determines $ U^{3,\nc} $. Note that, contrary to what was done for the first corrector, it is no longer true that $ U^{3,\nc} $ contains only profiles of modes $ \phi_j $, $ \psi_j $ and $ \nu_j $, since now the second order profiles $ \sigma^2_{\zeta,j,\lambda} $ for $ \zeta\in\F_{b}\privede{0,\phi,\psi,\nu} $ are possibly nonzero. For the same reason, the profiles $ U^{3,\osc}_{\mathbf{n},j,\lambda} $ for $ \mathbf{n}\cdot\boldsymbol{\zeta}\in\F_{b}\privede{0,\phi,\psi,\nu} $ are not necessarily polarized. Therefore we now derive the nonpolarized part for each frequency $ \zeta=\freq\in\F_{b}\privede{0} $, $ \mathbf{n}\in\B_{\Z^2} $ and $ j\in\mathcal{C}(\zeta) $, $ \lambda\in\Z^* $.
Multiplying equation \eqref{eq cascade int Un} for $ n=2 $ and frequency $ \lambda\,\alpha_j(\zeta) $ on the left by the partial inverse $ R_{\zeta,j} $ leads to the relation \begin{align}\label{eq 2cor nonpola part U 3 zeta} i\lambda\,\big(I-&P_{\zeta,j}\big)\,U^{3,\osc}_{\mathbf{n},j,\lambda}=\\\nonumber &-R_{\zeta,j}\,L(0,\partial_z)\,\sigma_{\zeta,j,\lambda}^2\,r_{\zeta,j}-R_{\zeta,j}\,L_1\big(r_{\zeta,j},\alpha_j(\zeta)\big)\,r_{\zeta,j}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda\,\sigma_{\zeta,j,\lambda_1}^1\,\sigma_{\zeta,j,\lambda_2}^2\\\nonumber &-R_{\zeta,j}\,\indicatrice_{\lambda=k\lambda_{\zeta}}\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2)\\\in\mathcal{R}(\zeta,j)}}L_1\big(r_{\zeta_1,j_1},-\lambda_{\zeta_2}\alpha_{j_2}(\zeta_2)\big)\,r_{\zeta_2,j_2}\,ik\,\sigma_{\zeta_1,j_1,-k\lambda_{\zeta_1}}^1\,\sigma_{\zeta_2,j_2,-k\lambda_{\zeta_2}}^2\\\nonumber &-R_{\zeta,j}\,\partial_z\,\mbox{\emph{terms in }} \big(U_1,(I-P)\,U_2,U_2^*\big), \end{align} where the notation $ \partial_z\,\mbox{\emph{terms in }} \big(U_1,(I-P)\,U_2,U_2^*\big) $ refers to quadratic terms involving the leading profile $ U_1 $, the nonpolarized parts of the first corrector $ U_2 $ and the mean value $ U_2^* $, with possibly a first order derivative in front of them. The key point is that since the leading profile is polarized and since the mean value $ U_1^* $ is zero, all the terms involving a profile $ \sigma^2_{\zeta',j',\lambda'} $ depend only on leading order polarized profiles $ \sigma^1_{\zeta'',j'',\lambda''} $, and not on $ (I-P)\,U_2 $ or $ U_2^* $. We now write the boundary conditions for the second corrector, for the frequencies $ \phi $ and $ \psi $, which will lead to equations on the amplitudes $ a_{\phi}^2 $ and $ a^{2}_{\psi} $.
For $ \phi $ we have, according to boundary condition \eqref{eq cascade bord U3n} and writing \eqref{eq ecriture U 3} of $ U_3 $, since the elliptic component $ E^e_-(\phi) $ of the stable subspace $ E_-(\phi) $ is zero, \begin{align}\label{eq 2 cor boundary cond phi} B\,P_{\phi,1}\,\big(U^3_{\phi,1,\lambda}\big)_{|x_d,\chi_d=0}+B\,P_{\phi,3}\,\big(U^3_{\phi,3,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber +B\,(I-P_{\phi,1})\,\big(U^3_{\phi,1,\lambda}\big)_{|x_d,\chi_d=0}+B\,(I-P_{\phi,3})\,\big(U^3_{\phi,3,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber +B\,\big(U^{3,\osc}_{\phi,2,\lambda}\big)_{|x_d,\chi_d=0}+B\,\big(U^{3,\nc}_{\phi,\lambda}\big)_{|x_d,\chi_d=0}&=0, \end{align} where $ U^{3,\nc}_{\phi,\lambda} $ is the sum of all the terms of $ U^{3,\nc} $ of which the trace on the boundary of the associated frequency is equal to $ \lambda\,\phi $, namely, according to expression \eqref{eq 2cor expr U 3 nc} of $ U^{3,\nc} $, \begin{align}\label{eq 2cor U3 nc phi} U^{3,\nc}_{\phi,\lambda}&=\indicatrice_{\lambda=k\lambda_{\phi}}\sum_{\substack{j_1,j_2=1,2,3 \\ (j_1,j_2)\neq\\(2,1), (2,2)}} L\big(0,k\nu_{j_1}-\lambda_{\psi}k\psi_{j_2}\big)^{-1}\,\big\lbrace\lambda_{\psi} L_1(r_{\nu,j_1},\psi_{j_2})\,r_{\psi,j_2}-L_1(r_{\psi,j_2},\nu_{j_1})\,r_{\nu,j_1}\big\rbrace\\\nonumber &\qquad\qquad\,k\,\big\lbrace\sigma^1_{\nu,j_1,k}\,\sigma^2_{\psi,j_2,-\lambda_{\psi}k}+\sigma^2_{\nu,j_1,k}\,\sigma^1_{\psi,j_2,-\lambda_{\psi}k}\big\rbrace\, e^{ik\lambda_{\phi}\theta_1}\,e^{i(k\xi_{j_1}(\nu)-\lambda_{\psi}k\xi_{j_2}(\psi))\chi_d}\\[5pt]\nonumber &-\sum_{\lambda_1+\lambda_2=\lambda}L\big(0,\lambda_1\phi_1+\lambda_2\phi_3\big)^{-1}\big\lbrace \lambda_1 \,L_1(r_{\phi,3},\phi_{1})\,r_{\phi,1}+ \lambda_2\, L_1(r_{\phi,1},\phi_{3})\,r_{\phi,3}\big\rbrace\\\nonumber &\qquad\qquad\big\lbrace\sigma^1_{\phi,1,\lambda_1}\,\sigma^2_{\phi,3,\lambda_2}+\sigma^2_{\phi,1,\lambda_1}\,\sigma^1_{\phi,3,\lambda_2}\big\rbrace \,
e^{i\lambda\,\theta_1}\,e^{i(\lambda_1\xi_1(\phi)+\lambda_2\xi_3(\phi))\chi_d}\\[5pt]\nonumber &+\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,(I-P)\,U_2,(P\,U_2)_{\zeta\neq \phi,\psi,\nu},U_2^*\big), \end{align} where the notation $ \partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,(I-P)\,U_2,(P\,U_2)_{\zeta\neq \phi,\psi,\nu},U_2^*\big) $ refers to quadratic terms in $ U_1 $, the nonpolarized part of $ U_2 $, the polarized part of $ U_2 $ of which the associated modes are different from $ \lambda\phi_j $, $ \lambda\psi_j $ and $ \lambda\nu_j $, and the mean value $ U_2^* $, with possibly one derivative in front of them. Once again, the key point here is that since $ U_1 $ is polarized and of zero mean value, and since only the profiles $ \sigma^1_{\zeta,j,\lambda} $ for $ \zeta=\phi,\psi,\nu $ are nonzero, every term of $ U^{3,\nc}_{\phi,\lambda} $ involving $ \sigma^2_{\zeta,j,\lambda'} $ for $ \zeta=\phi,\psi,\nu $ is a quadratic term with a profile $ \sigma^1_{\zeta',j',\lambda''} $ for $ \zeta'=\phi,\psi,\nu $.
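This key point simply reflects the bilinearity of the quadratic terms of the cascade: denoting informally by $ Q $ any of the bilinear maps involved (for instance $ \mathcal{M} $ or $ \mathcal{N} $), we have
\begin{equation*}
Q\big(U_1+\epsilon\,U_2,U_1+\epsilon\,U_2\big)=Q(U_1,U_1)+\epsilon\,\big(Q(U_1,U_2)+Q(U_2,U_1)\big)+\epsilon^2\,Q(U_2,U_2),
\end{equation*}
so that, at the order considered here, every occurrence of a second order profile is paired with a factor coming from $ U_1 $, which involves only the profiles $ \sigma^1_{\zeta,j,\lambda} $ for $ \zeta=\phi,\psi,\nu $.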
Similarly to what was done for the leading profile, taking the scalar product of $ b_{\phi} $ with equation \eqref{eq 2 cor boundary cond phi} multiplied by $ i\lambda $, using \eqref{eq 2cor nonpola part U 3 zeta}, \eqref{eq 2cor U3 nc phi} and the expressions \eqref{eq 1cor trace phi}, \eqref{eq 1cor trace psi} and \eqref{eq 1cor trace nu} of the traces, we get \begin{align}\label{eq 2cor eq evol a phi} &\color{altred} X^{\Lop}_{\phi}\,a_{\phi,\lambda}^2+D^{\Lop}_{\phi}\,i\lambda\sum_{\lambda_1+\lambda_2=\lambda}a_{\phi,\lambda_1}^1\,a_{\phi,\lambda_2}^2+i\lambda\sum_{\lambda_1+\lambda_2=\lambda}\gamma_{\phi}(\lambda_1,\lambda_2)\big(a_{\phi,\lambda_1}^1\,a_{\phi,\lambda_2}^2+a_{\phi,\lambda_1}^2\,a_{\phi,\lambda_2}^1\big)\\\nonumber &\color{altred}+\indicatrice_{\lambda=k\lambda_{\phi}}\,\Gamma^{\phi}\,ik\,\Big\lbrace\big(\sigma_{\nu,2,-k}^1\big)_{|x_d=0}\,a_{\psi,-k\lambda_{\psi}}^2+\big(\sigma_{\nu,2,-k}^2\big)_{|x_d=0}\,a_{\psi,-k\lambda_{\psi}}^1\Big\rbrace\\[5pt]\nonumber &\color{altred}+\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\Big[\tilde{F}^2_{\phi},\big(\tilde{F}^2_{\phi},a^1_{\phi}\big),\big(\tilde{F}^2_{\psi},(\sigma^1_{\nu,2})_{|x_d=0}\big),\big((\sigma^2_{\phi})_{|x_d=0},a^1_{\phi}\big),\big((\sigma^2_{\psi})_{|x_d=0},(\sigma^1_{\nu,2})_{|x_d=0}\big)\Big] \\[5pt]\nonumber &\color{altred}=i\lambda\,b_{\phi}\cdot B\,\big(U^{3,\osc}_{\phi,2,\lambda}\big)_{|x_d,\chi_d=0}+\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,(I-P)\,U_2,(P\,U_2)_{\zeta\neq \phi,\psi,\nu},U_2^*\big)_{|x_d,\chi_d=0}, \end{align} \color{black} where $ X^{\Lop}_{\phi} $, $ D^{\Lop}_{\phi} $, $ \gamma_{\phi}(\lambda_1,\lambda_2) $ and $ \Gamma^{\phi} $ have already been defined in \eqref{eq 1cor eq X lop v phi etc}, and where \begin{equation*} \partial_{z,\theta}\,\mbox{\emph{terms
in}}\,\Big[\tilde{F}^2_{\phi},\big(\tilde{F}^2_{\phi},a^1_{\phi}\big),\big(\tilde{F}^2_{\psi},(\sigma^1_{\nu,2})_{|x_d=0}\big),\big((\sigma^2_{\phi})_{|x_d=0},a^1_{\phi}\big),\big((\sigma^2_{\psi})_{|x_d=0},(\sigma^1_{\nu,2})_{|x_d=0}\big)\Big] \end{equation*} refers to linear terms in $ \tilde{F}^2_{\phi,j,\lambda} $ for $ j=1,3 $ and $ \lambda\in\Z^* $, and quadratic terms in $ \big(\tilde{F}^2_{\phi,j,\lambda},a^1_{\phi,\lambda}\big) $, $ \big(\tilde{F}^2_{\psi,j,\lambda},(\sigma^1_{\nu,2,\lambda})_{|x_d=0}\big) $, $ \big((\sigma^2_{\phi,j,\lambda})_{|x_d=0},a^1_{\phi,\lambda}\big) $ or $ \big((\sigma^2_{\psi,j,\lambda})_{|x_d=0},(\sigma^1_{\nu,2,\lambda})_{|x_d=0}\big) $ for $ j=1,3 $ and $ \lambda\in\Z^* $, with, for all these terms, possibly one derivative in front of them. Recall that the terms $ \tilde{F}^2_{\phi} $ and $ \tilde{F}^2_{\psi} $ depend on the traces $ \big(\sigma^2_{\phi,2,\lambda}\big)_{|x_d=0} $ and $ \big(\sigma^2_{\psi,2,\lambda}\big)_{|x_d=0} $. For phase $ \psi $ we have the following boundary condition \begin{align}\label{eq 2 cor boundary cond psi} B\,P_{\psi,1}\,\big(U^3_{\psi,1,\lambda}\big)_{|x_d,\chi_d=0}+B\,P_{\psi,3}\,\big(U^3_{\psi,3,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber +B\,(I-P_{\psi,1})\,\big(U^3_{\psi,1,\lambda}\big)_{|x_d,\chi_d=0}+B\,(I-P_{\psi,3})\,\big(U^3_{\psi,3,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber +B\,\big(U^3_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0}+B\,\big(U^{3,\nc}_{\psi,\lambda}\big)_{|x_d,\chi_d=0}&=H_{\lambda}, \end{align} where we have expanded $ H $ in Fourier series as \begin{equation*} H(z',\Theta)=\sum_{\lambda\in\Z}H_{\lambda}(z')\,e^{i\lambda\Theta}, \end{equation*} and $ U^{3,\nc}_{\psi,\lambda} $ is the sum of all the terms of $ U^{3,\nc} $ of which the trace on the boundary of the associated frequency is equal to $ \lambda\psi $, that is, \begin{align}\label{eq 2cor U3 nc psi} U^{3,\nc}_{\psi,\lambda}&=\indicatrice_{\lambda=k\lambda_{\psi}}\sum_{\substack{j_1=1,3,j_2=1,2,3 \\ (j_1,j_2)\\\neq(1,2), (3,2)}}
L\big(0,k\nu_{j_2}-\lambda_{\phi}k\phi_{j_1}\big)^{-1}\,\big\lbrace \lambda_{\phi}\,L_1(r_{\nu,j_2},\phi_{j_1})\,r_{\phi,j_1}-L_1(r_{\phi,j_1},\nu_{j_2})\,r_{\nu,j_2}\big\rbrace\\\nonumber &\qquad\qquad\,k\,\big\lbrace\sigma_{\nu,j_2,k}^1\,\sigma_{\phi,j_1,-\lambda_{\phi}k}^2+\sigma_{\nu,j_2,k}^2\,\sigma_{\phi,j_1,-\lambda_{\phi}k}^1\big\rbrace\, e^{ik\lambda_{\phi}\theta_1}\,e^{i(k\xi_{j_2}(\nu)-\lambda_{\phi}k\xi_{j_1}(\phi))\chi_d}\\\nonumber &-\sum_{\substack{j_1,j_2=1,2,3\\ j_1\neq j_2}}\sum_{\lambda_1+\lambda_2=\lambda}L\big(0,\lambda_1\psi_{j_1}+\lambda_2\psi_{j_2}\big)^{-1}\,L_1(r_{\psi,j_1},\psi_{j_2})\,r_{\psi,j_2}\,\lambda_2\,\big\lbrace\sigma^1_{\psi,j_1,\lambda_1}\,\sigma^2_{\psi,j_2,\lambda_2}\,\\\nonumber &\qquad\qquad+\sigma^2_{\psi,j_1,\lambda_1}\,\sigma^1_{\psi,j_2,\lambda_2}\big\rbrace\,e^{i\lambda\,\theta_2}\,e^{i(\lambda_1\xi_{j_1}(\psi)+\lambda_2\xi_{j_2}(\psi))\chi_d}\\[5pt]\nonumber &+\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,(I-P)\,U_2,(P\,U_2)_{\zeta\neq \phi,\psi,\nu},U_2^*\big).
\end{align} If we take the scalar product of $ b_{\psi} $ with the equality \eqref{eq 2 cor boundary cond psi} multiplied by $ i\lambda $, using \eqref{eq 2cor nonpola part U 3 zeta}, \eqref{eq 2cor U3 nc psi} and the expressions \eqref{eq 1cor trace phi}, \eqref{eq 1cor trace psi} and \eqref{eq 1cor trace nu} of the traces, we get the amplitude equation \begin{align}\label{eq 2cor eq evol a psi} &\color{altorange2} X^{\Lop}_{\psi}\,a^2_{\psi,\lambda} +D^{\Lop}_{\psi}\,i\lambda\!\sum_{\lambda_1+\lambda_2=\lambda}a^1_{\psi,\lambda_1}\,a^2_{\psi,\lambda_2} +i\lambda\!\sum_{\lambda_1+\lambda_2=\lambda}\gamma_{\psi}(\lambda_1,\lambda_2)\big(a^1_{\psi,\lambda_1}\,a^2_{\psi,\lambda_2}+a^2_{\psi,\lambda_1}\,a^1_{\psi,\lambda_2}\big) \\\nonumber &\color{altorange2}+\indicatrice_{\lambda=k\lambda_{\psi}}\,\Gamma^{\psi}\,ik\,\big\lbrace\big(\sigma_{\nu,2,k}^1\big)_{|x_d=0}\,a^2_{\phi,-\lambda_{\phi}k}+\big(\sigma_{\nu,2,k}^2\big)_{|x_d=0}\,a^1_{\phi,-\lambda_{\phi}k}\big\rbrace\\\nonumber &\color{altorange2}+\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\Big[\tilde{F}^2_{\psi},\big(\tilde{F}^2_{\psi},a^1_{\psi}\big),\big(\tilde{F}^2_{\phi},(\sigma^1_{\nu,2})_{|x_d=0}\big),\big((\sigma^2_{\psi})_{|x_d=0},a^1_{\psi}\big),\big((\sigma^2_{\phi})_{|x_d=0},(\sigma^1_{\nu,2})_{|x_d=0}\big)\Big] \\[5pt]\nonumber &\color{altorange2}=i\lambda\,b_{\psi}\cdot B\,\big(U^{3,\osc}_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0}-i\lambda\,b_{\psi}\cdot H_{\lambda}\\\nonumber &\color{altorange2}+\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,(I-P)\,U_2,(P\,U_2)_{\zeta\neq \phi,\psi,\nu},U_2^*\big)_{|x_d,\chi_d=0}, \end{align} \color{black} where $ X^{\Lop}_{\psi} $, $ D^{\Lop}_{\psi} $, $ \gamma_{\psi}(\lambda_1,\lambda_2) $, and $ \Gamma^{\psi} $ have already been defined in \eqref{eq 1cor eq X lop v psi etc}, and where \begin{equation*} \partial_{z,\theta}\,\mbox{\emph{terms
in}}\,\Big[\tilde{F}^2_{\psi},\big(\tilde{F}^2_{\psi},a^1_{\psi}\big),\big(\tilde{F}^2_{\phi},(\sigma^1_{\nu,2})_{|x_d=0}\big),\big((\sigma^2_{\psi})_{|x_d=0},a^1_{\psi}\big),\big((\sigma^2_{\phi})_{|x_d=0},(\sigma^1_{\nu,2})_{|x_d=0}\big)\Big] \end{equation*} refers to linear terms in $ \tilde{F}^2_{\psi,j,\lambda} $ for $ j=1,3 $ and $ \lambda\in\Z^* $, and quadratic terms in $ \big(\tilde{F}^2_{\psi,j,\lambda},a^1_{\psi,\lambda}\big) $, $ \big(\tilde{F}^2_{\phi,j,\lambda},(\sigma^1_{\nu,2,\lambda})_{|x_d=0}\big) $, $ \big((\sigma^2_{\psi,j,\lambda})_{|x_d=0},a^1_{\psi,\lambda}\big) $ or $ \big((\sigma^2_{\phi,j,\lambda})_{|x_d=0},(\sigma^1_{\nu,2,\lambda})_{|x_d=0}\big) $ for $ j=1,3 $ and $ \lambda\in\Z^* $, with, for all these terms, possibly one derivative in front of them. \bigskip Note that equations \eqref{eq 2cor eq evol a phi} and \eqref{eq 2cor eq evol a psi} can be seen as linearizations, around the trace of the leading profile $ U_1 $, of equations \eqref{eq 1cor eq evol a phi} and \eqref{eq 1cor eq evol a psi}. This is usual in weakly nonlinear geometric optics, where the equations for the leading profile are nonlinear, and the equations for higher order profiles are linearizations of the former around the leading profile $ U_1 $. Again, the obtained system of equations is still not closed, since traces of the second corrector $ U_3 $ appear in amplitude equations \eqref{eq 2cor eq evol a phi} and \eqref{eq 2cor eq evol a psi}. With the obtained equations, one can nevertheless get an intuition of how lower order terms ascend toward higher order terms, eventually leading to an instability. In equation \eqref{eq 2cor eq evol a psi} for the amplitudes $ a^2_{\psi,\lambda} $, the boundary forcing term $ H $ occurs, and therefore this forcing term ascends to the first corrector profiles $ \sigma^2_{\psi,j,\lambda} $ for $ j=1,3 $ and $ \lambda\in\Z^* $ through boundary conditions \eqref{eq 1cor trace psi}.
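Schematically, this ascent mechanism may be summarized by the informal chain
\begin{equation*}
H\;\longrightarrow\;a^2_{\psi,\lambda}\;\longrightarrow\;\big(\sigma^2_{\psi,j,\lambda}\big)_{j=1,3}\;\longrightarrow\;\sigma^2_{\psi,2,\lambda}\;\longrightarrow\;a^1_{\psi,\lambda},
\end{equation*}
where the first arrow corresponds to amplitude equation \eqref{eq 2cor eq evol a psi}, the second one to boundary conditions \eqref{eq 1cor trace psi}, the third one to the resonant terms in transport equations \eqref{eq 1cor evol sigma phi psi nu}, and the last one to the trace term in amplitude equation \eqref{eq 1cor eq evol a psi}.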
Eventually, because of the resonances \eqref{eq hyp res phi psi nu} leading to resonance terms in the transport equations \eqref{eq 1cor evol sigma phi psi nu} for the first corrector profiles, the boundary term $ H $ arises in the profiles $ \sigma^2_{\psi,2,\lambda} $ for $ \lambda\in\Z^* $. In turn, these profiles $ \sigma^2_{\psi,2,\lambda} $, $ \lambda\in\Z^* $, enter the amplitude equation \eqref{eq 1cor eq evol a psi} for $ a^1_{\psi,\lambda} $, $ \lambda\in\Z^* $, because of the trace $ \big(U^2_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0} $. This reasoning can then be applied recursively to show that the boundary forcing term $ H $ enters the leading profiles $ \sigma^1_{\zeta,j,\lambda} $, for $ \zeta=\phi,\psi,\nu $, $ j=1,2,3 $ and $ \lambda\in\Z^* $. \subsection{General system} The above arguments can be extended recursively to any corrector $ U_n $, $ n\geq 3 $. Doing so, we get that the $ n $-th profile $ U_n $ reads \begin{multline}\label{eq ecriture U n} U_n(z,\theta,\chi_d)=U^*_n(z)+\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{j\in\mathcal{C}(\mathbf{n})}\sum_{\lambda\in\Z^*}U^{n,\osc}_{\mathbf{n},j,\lambda}\,e^{i\lambda\,\mathbf{n}\cdot\theta}\,e^{i\lambda\xi_j(\mathbf{n}\cdot\boldsymbol{\zeta})\chi_d}\\ +\sum_{\mathbf{n}\in\B_{\Z^2}}\sum_{\lambda\in\Z^*}e^{\chi_d\mathcal{A}(\mathbf{n}\cdot\boldsymbol{\zeta})}\,\Pi^e(\mathbf{n}\cdot\boldsymbol{\zeta})\,U^{n,\ev}_{\mathbf{n}}(z,0)\,e^{i\,\lambda\,\mathbf{n}\cdot\theta}+U^{n,\nc}(z,\theta,\chi_d), \end{multline} with $ U_n^* $ the mean value of $ U_n $, $ U^{n,\nc} $ the noncharacteristic terms, and where, for $ \zeta=\freq\in\F_{b}\privede{0} $, $ \mathbf{n}\in\B_{\Z^2} $, $ j\in\mathcal{C}(\zeta) $ and $ \lambda\in\Z^* $, the oscillating profile $ U^{n,\osc}_{\mathbf{n},j,\lambda} $ decomposes as \begin{equation*} U^{n,\osc}_{\mathbf{n},j,\lambda}=\sigma^n_{\zeta,j,\lambda}\,r_{\zeta,j}+\big(I-P_{\zeta,j}\big)\,U^{n,\osc}_{\mathbf{n},j,\lambda}, \end{equation*} with $ \sigma_{\zeta,j,\lambda}^n $ a scalar
function defined on $ \Omega_T $. According to equation \eqref{eq cascade int Un} for $ n-1 $, and since $ U_1 $ is polarized and of zero mean value, $ U^{n,\nc} $ is determined by the formula \begin{align}\label{eq ncor expr U n nc} \mathcal{L}(\partial_{\theta},\partial_{\chi_d})\,U^{n,\nc}&= -\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2,\\\lambda_1,\lambda_2)\in\mathcal{NR}}}L_1(r_{\zeta_1,j_1},\alpha_{j_2}(\zeta_2))\,r_{\zeta_2,j_2}\,i\lambda\,\sigma_{\zeta_1,j_1,\lambda_1}^1\,\sigma_{\zeta_2,j_2,\lambda_2}^{n-1}\,\\\nonumber & \qquad\qquad e^{i(\lambda_1\mathbf{n}_1+\lambda_2\mathbf{n}_2)\cdot\theta}\,e^{i(\lambda_1\xi_{j_1}(\zeta_1)+\lambda_2\xi_{j_2}(\zeta_2))\chi_d}\\\nonumber &\quad+\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,\dots,U_{n-2},(I-P)\,U_{n-1},U_{n-1}^*\big), \end{align} where the notation $ \partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,\dots,U_{n-2},(I-P)\,U_{n-1},U_{n-1}^*\big) $ refers to quadratic terms involving the profiles $ U_1,\dots,U_{n-2} $, the nonpolarized parts of the corrector $ U_{n-1} $ and the mean value $ U_{n-1}^* $, with possibly first order derivatives in front of them.
In turn, for $ \zeta=\freq\in\F_{b}\privede{0} $, $ \mathbf{n}\in\B_{\Z^2} $, $ j\in\mathcal{C}(\zeta) $ and $ \lambda\in\Z^* $, the nonpolarized part $ \big(I-P_{\zeta,j}\big)\,U^{n,\osc}_{\mathbf{n},j,\lambda} $ of $ U^{n,\osc}_{\mathbf{n},j,\lambda} $ is given by \begin{align}\label{eq ncor nonpola part U n zeta} i\lambda\,\big(I-&P_{\zeta,j}\big)\,U^{n,\osc}_{\mathbf{n},j,\lambda}=\\\nonumber &-R_{\zeta,j}\,L(0,\partial_z)\,\sigma_{\zeta,j,\lambda}^{n-1}\,r_{\zeta,j}-R_{\zeta,j}\,L_1\big(r_{\zeta,j},\alpha_j(\zeta)\big)\,r_{\zeta,j}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda\,\sigma_{\zeta,j,\lambda_1}^1\,\sigma_{\zeta,j,\lambda_2}^{n-1}\\\nonumber &-R_{\zeta,j}\,\indicatrice_{\lambda=k\lambda_{\zeta}}\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2)\\\in\mathcal{R}(\zeta,j)}}L_1\big(r_{\zeta_1,j_1},-\lambda_{\zeta_2}\alpha_{j_2}(\zeta_2)\big)\,r_{\zeta_2,j_2}\,ik\,\sigma_{\zeta_1,j_1,-k\lambda_{\zeta_1}}^1\,\sigma_{\zeta_2,j_2,-k\lambda_{\zeta_2}}^{n-1}\\\nonumber &-R_{\zeta,j}\,\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,\dots,U_{n-2},(I-P)\,U_{n-1},U_{n-1}^*\big). \end{align} This formula is obtained by multiplying equation \eqref{eq cascade int Un} for $ n-1 $ by the partial inverse $ R_{\zeta,j} $, and using that $ U_1 $ is polarized and of zero mean value. We now specify the equations satisfied by the mean value $ U_n^* $ and the polarized components $ \sigma^n_{\zeta,j,\lambda} $.
Since $ U_1 $ is polarized and of zero mean value, equations \eqref{eq cascade int Un}, \eqref{eq cascade bord U3n} for $ n $ and \eqref{eq cascade initial} lead to the following system for the mean value $ U_n^* $: \begin{equation}\label{eq systeme moyenne} \color{altpink}\begin{cases} L(0,\partial_z)\,U_n^*=\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,\dots,U_{n-1},(I-P)\,U_n\big)\\[5pt] B\,\big(U_n^*\big)_{|x_d=0}=\indicatrice_{n=3}\,H_0-B\,\big(U^{n,\nc}_{0}\big)_{|x_d,\chi_d=0}\\[5pt] B\,\big(U_n^*\big)_{|t\leq0}=0, \end{cases} \end{equation} where $ U^{n,\nc}_0 $ refers to the sum of all terms of $ U^{n,\nc} $ whose trace on the boundary is of zero frequency. For a frequency $ \zeta=\freq\in\F_{b}\privede{0,\phi,\psi,\nu} $, $ \mathbf{n}\in\B_{\Z^2} $, $ j\in\mathcal{C}(\zeta) $ and $ \lambda\in\Z^* $, multiplying equation \eqref{eq cascade int Un} by $ \ell_{\zeta,j} $ leads to, since all harmonics $ \sigma^1_{\zeta,j,\lambda'} $, $ \lambda'\in\Z^* $, are zero and since $ U_1 $ is polarized and of zero mean value, \begin{subequations}\label{eq systeme sigma hors cas part} \begin{align} \color{altpurple}X_{\alpha_j(\zeta)}\,\sigma^n_{\zeta,j,\lambda}&\color{altpurple}=\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,\dots,U_{n-1},(I-P)\,U_n,U_n^*\big)\\[5pt]\color{altpurple} \big(\sigma^n_{\zeta,j,\lambda}\big)_{|t\leq 0}&\color{altpurple}= 0, \intertext{\color{black}with initial condition \eqref{eq cascade initial}, and, if $ j\in\mathcal{I}(\zeta) $, boundary condition \eqref{eq cascade bord U3n} gives, since $ \zeta\in\F_{b}\setminus\Upsilon $,} \color{altpurple} \big(\sigma^n_{\zeta,j,\lambda}\big)_{|x_d=0}&\color{altpurple}=-\ell_{\zeta,j}\,A_d(0)\,\big(E_-(\zeta)\big)^{-1}\,B\,\big(U^{n,\nc}_{\zeta,\lambda}\big)_{|x_d,\chi_d=0}.
\end{align} \end{subequations} \color{black} Finally, for $ \zeta=\freq\in\ensemble{\phi,\psi,\nu} $, $ j\in\mathcal{C}(\zeta) $ and $ \lambda\in\Z^* $, multiplying equation \eqref{eq cascade int Un} by $ \ell_{\zeta,j} $ gives, with the same arguments, \begin{subequations}\label{eq systeme sigma} \begin{align}\label{eq systeme sigma evol} &\color{altblue}\quad X_{\zeta,j}\,\sigma^n_{\zeta,j,\lambda}+D_{\zeta,j}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda\,\sigma_{\zeta,j,\lambda_1}^n\,\sigma_{\zeta,j,\lambda_2}^1+\indicatrice_{\lambda=k\lambda_{\zeta}}\sum_{\substack{(\zeta_1,\zeta_2,j_1,j_2)\\\in\mathcal{R}(\zeta,j)}}J^{\zeta_2,j_2}_{\zeta_1,j_1}\,ik\,\sigma^1_{\zeta_1,j_1,-k\lambda_{\zeta_1}}\,\sigma^n_{\zeta_2,j_2,-k\lambda_{\zeta_2}}\\\nonumber &\color{altblue}=\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,\dots,U_{n-1},(I-P)\,U_n,U_n^*\big), \end{align} \color{black} where the notations are defined in \eqref{eq prof princ def X D J}. Equation \eqref{eq systeme sigma evol} is coupled with the following initial condition, \begin{equation}\label{eq systeme sigma init} \color{altblue}\big(\sigma^n_{\zeta,j,\lambda}\big)_{|t\leq 0}=0. \end{equation} \end{subequations} It remains to determine the traces on the boundary of the profiles associated with the incoming frequencies.
The trace of the amplitudes associated with the boundary phase $ \phi $ on the boundary is given by, for $ j=1,3 $, \begin{subequations}\label{eq systeme sigma bord} \begin{equation}\label{eq systeme sigma bord phi} \color{altred}\big(\sigma_{\phi,j,\lambda}^n\big)_{|x_d=0}\,r_{\phi,j}=a_{\phi,\lambda}^n\,e_{\phi,j} +\tilde{F}^n_{\phi,j,\lambda}, \end{equation} \color{black} with $ a^n_{\phi,\lambda} $ a scalar function defined on $ \omega_T $ and \begin{align*} \tilde{F}^n_{\phi,j,\lambda}&:=-\ell_{\phi,j}\cdot A_d(0)\,\big(B_{|E_-(\phi)}\big)^{-1}\Big(B\,(I-P_{\phi,1})\,\big(U^{n,\osc}_{\phi,1,\lambda}\big)_{|x_d,\chi_d=0}\\ &\quad+B\,(I-P_{\phi,3})\,\big(U^{n,\osc}_{\phi,3,\lambda}\big)_{|x_d,\chi_d=0}+B\,(I-P_{\phi,2})\,\big(U^{n,\osc}_{\phi,2,\lambda}\big)_{|x_d,\chi_d=0}\\ &\quad+B\,\big(\sigma_{\phi,2,\lambda}^n\big)_{|x_d=0}\,r_{\phi,2}+B\,\big(U^{n,\nc}_{\phi,\lambda}\big)_{|x_d,\chi_d=0}\Big)\,r_{\phi,j}. \end{align*} For $ \psi $ we have, for $ j=1,3 $, \begin{align}\label{eq systeme sigma bord psi} \color{altorange2} \big(\sigma_{\psi,j,\lambda}^n\big)_{|x_d=0}\,r_{\psi,j}=a_{\psi,\lambda}^n\,e_{\psi,j}+\tilde{F}^n_{\psi,j,\lambda}, \end{align} \color{black} with $ a^n_{\psi,\lambda} $ a scalar function of $ \omega_T $ and \begin{align*} \tilde{F}^n_{\psi,j,\lambda}&:=-\ell_{\psi,j}\cdot A_d(0)\,\big(B_{|E_-(\psi)}\big)^{-1}\Big(-\indicatrice_{n=3}\,H_{\lambda}+B\,(I-P_{\psi,1})\,\big(U^{n,\osc}_{\psi,1,\lambda}\big)_{|x_d,\chi_d=0}\\ &\quad+B\,(I-P_{\psi,3})\,\big(U^{n,\osc}_{\psi,3,\lambda}\big)_{|x_d,\chi_d=0}+B\,(I-P_{\psi,2})\,\big(U^{n,\osc}_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0}\\ &\quad+B\,\big(\sigma_{\psi,2,\lambda}^n\big)_{|x_d=0}\,r_{\psi,2}+B\,\big(U^{n,\nc}_{\psi,\lambda}\big)_{|x_d,\chi_d=0}\Big)\,r_{\psi,j}. 
\end{align*} Note that $ \tilde{F}^n_{\phi,j,\lambda} $ and $ \tilde{F}^n_{\psi,j,\lambda} $, for $ j=1,3 $ and $ \lambda\in\Z^* $, depend respectively on the traces $ \big(\sigma_{\phi,2,\lambda}^n\big)_{|x_d=0} $ and $ \big(\sigma_{\psi,2,\lambda}^n\big)_{|x_d=0} $. Finally, for $ \nu $, we have, for $ j=1,3 $, \begin{align}\label{eq systeme sigma bord nu} \color{altgreen}\big(\sigma_{\nu,j,\lambda}^n\big)_{|x_d=0}\,r_{\nu,j}=\mu_{\nu,j}\,\big(\sigma_{\nu,2,\lambda}^n\big)_{|x_d=0}\,r_{\nu,j}+\tilde{F}^n_{\nu,j,\lambda}, \end{align} \color{black} with \begin{multline*} \tilde{F}^n_{\nu,j,\lambda}:=-\ell_{\nu,j}\cdot A_d(0)\,\big(B_{|E_-(\nu)}\big)^{-1}\Big(B\,(I-P_{\nu,1})\,\big(U^{n,\osc}_{\nu,1,\lambda}\big)_{|x_d,\chi_d=0}\\ +B\,(I-P_{\nu,3})\,\big(U^{n,\osc}_{\nu,3,\lambda}\big)_{|x_d,\chi_d=0}+B\,(I-P_{\nu,2})\,\big(U^{n,\osc}_{\nu,2,\lambda}\big)_{|x_d,\chi_d=0}+B\,\big(U^{n,\nc}_{\nu,\lambda}\big)_{|x_d,\chi_d=0}\Big)\,r_{\nu,j}. \end{multline*} \end{subequations} The coefficients $ \mu_{\nu,j} $ have been introduced in \eqref{eq prof princ def mu nu}.
Scalar functions $ a^{n}_{\phi,\lambda} $ and $ a^n_{\psi,\lambda} $ satisfy the following equations, which are derived using boundary condition \eqref{eq cascade bord U3n} and formulas \eqref{eq ncor expr U n nc} and \eqref{eq ncor nonpola part U n zeta}, \begin{subequations}\label{eq systeme a} \begin{align}\label{eq systeme a phi} &\color{altred} X^{\Lop}_{\phi}\,a_{\phi,\lambda}^n+D^{\Lop}_{\phi}\,i\lambda\!\sum_{\lambda_1+\lambda_2=\lambda}\!a_{\phi,\lambda_1}^1\,a_{\phi,\lambda_2}^n+i\lambda\!\sum_{\lambda_1+\lambda_2=\lambda}\!\gamma_{\phi}(\lambda_1,\lambda_2)\big(a_{\phi,\lambda_1}^1\,a_{\phi,\lambda_2}^n+a_{\phi,\lambda_1}^n\,a_{\phi,\lambda_2}^1\big)\\\nonumber &\color{altred}+\indicatrice_{\lambda=k\lambda_{\phi}}\,\Gamma^{\phi}\,ik\,\Big\lbrace\big(\sigma_{\nu,2,-k}^1\big)_{|x_d=0}\,a_{\psi,-k\lambda_{\psi}}^n+\big(\sigma_{\nu,2,-k}^n\big)_{|x_d=0}\,a_{\psi,-k\lambda_{\psi}}^1\Big\rbrace\\[5pt]\nonumber &\color{altred}+\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\Big[\tilde{F}^n_{\phi},\big(\tilde{F}^n_{\phi},a^1_{\phi}\big),\big(\tilde{F}^n_{\psi},(\sigma^1_{\nu,2})_{|x_d=0}\big),\big((\sigma^n_{\phi})_{|x_d=0},a^1_{\phi}\big),\big((\sigma^n_{\psi})_{|x_d=0},(\sigma^1_{\nu,2})_{|x_d=0}\big)\Big] \\[5pt]\nonumber &\color{altred}=i\lambda\,b_{\phi}\cdot B\,\big(U^{n+1,\osc}_{\phi,2,\lambda}\big)_{|x_d,\chi_d=0}\\\nonumber &\color{altred}+\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,\dots,U_{n-1},(I-P)\,U_n,(P\,U_n)_{\zeta\neq \phi,\psi,\nu},U_n^*\big)_{|x_d,\chi_d=0}, \end{align} \color{black} and \begin{align}\label{eq systeme a psi} &\color{altorange2}X^{\Lop}_{\psi}\,a^n_{\psi,\lambda} +D^{\Lop}_{\psi}\,i\lambda\!\sum_{\lambda_1+\lambda_2=\lambda}\!a^1_{\psi,\lambda_1}\,a^n_{\psi,\lambda_2} +i\lambda\!\sum_{\lambda_1+\lambda_2=\lambda}\!\gamma_{\psi}(\lambda_1,\lambda_2)\big(a_{\psi,\lambda_1}^1\,a_{\psi,\lambda_2}^n+a_{\psi,\lambda_1}^n\,a_{\psi,\lambda_2}^1\big)\\\nonumber 
&\color{altorange2}+\indicatrice_{\lambda=k\lambda_{\psi}}\,\Gamma^{\psi}\,ik\,\big\lbrace\big(\sigma_{\nu,2,k}^1\big)_{|x_d=0}\,a^n_{\phi,-\lambda_{\phi}k}+\big(\sigma_{\nu,2,k}^n\big)_{|x_d=0}\,a^1_{\phi,-\lambda_{\phi}k}\big\rbrace\\\nonumber &\color{altorange2}+\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\Big[\tilde{F}^n_{\psi},\big(\tilde{F}^n_{\psi},a^1_{\psi}\big),\big(\tilde{F}^n_{\phi},(\sigma^1_{\nu,2})_{|x_d=0}\big),\big((\sigma^n_{\psi})_{|x_d=0},a^1_{\psi}\big),\big((\sigma^n_{\phi})_{|x_d=0},(\sigma^1_{\nu,2})_{|x_d=0}\big)\Big] \\[5pt]\nonumber &\color{altorange2}=i\lambda\,b_{\psi}\cdot B\,\big(U^{n+1,\osc}_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0}-\indicatrice_{n=3}\,b_{\psi}\cdot H_{\lambda}\\\nonumber &\color{altorange2}+\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,\dots,U_{n-1},(I-P)\,U_n,(P\,U_n)_{\zeta\neq \phi,\psi,\nu},U_n^*\big)_{|x_d,\chi_d=0}, \end{align} \color{black} where the notations have been defined in \eqref{eq 1cor eq X lop v phi etc} and \eqref{eq 1cor eq X lop v psi etc}. These two equations come with the following initial conditions \begin{equation}\label{eq systeme init a phi psi} \color{altred}\big(a^n_{\phi,\lambda}\big)_{|t\leq 0}=0,\qquad \color{altorange2} \big(a^n_{\psi,\lambda}\big)_{|t\leq 0}=0. \end{equation} \end{subequations} The system of equations \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma}, \eqref{eq systeme sigma bord} and \eqref{eq systeme a} is highly coupled. In all equations for the corrector of order $ n $, there are terms depending on $ U_1,\dots,U_{n-1},(I-P)\,U_n,(P\,U_n)_{\zeta\neq \phi,\psi,\nu},U_n^* $, but this is not a big issue: once the lower order correctors $ U_1,\dots,U_{n-1} $ are constructed, $ (I-P)\,U_n,(P\,U_n)_{\zeta\neq \phi,\psi,\nu},U_n^* $ can be determined with \eqref{eq ncor nonpola part U n zeta}, \eqref{eq systeme moyenne} and \eqref{eq systeme sigma hors cas part}.
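The order-by-order structure of the above system can be illustrated with a toy computation. In the following Python sketch (illustrative only, with made-up coefficients that are not taken from the text), harmonics are stored as maps $ \lambda\mapsto $ coefficient, and a quadratic source term for the order-$ n $ equation is assembled from the amplitudes of orders $ k<n $ by a double Cauchy product, mirroring the sums over $ \lambda_1+\lambda_2=\lambda $ and over lower orders appearing in the equations above:

```python
# Illustrative sketch (not from the paper): harmonics are stored as dicts
# {lambda: coefficient} with lambda in Z \ {0}; orders n = 1, 2, ... index a dict.

def cauchy_product(a, b):
    """Convolution over harmonics: (a * b)_lam = sum_{l1 + l2 = lam} a_l1 * b_l2."""
    out = {}
    for l1, c1 in a.items():
        for l2, c2 in b.items():
            lam = l1 + l2
            if lam != 0:  # the zero frequency is handled by the mean-value system
                out[lam] = out.get(lam, 0) + c1 * c2
    return out

def order_n_source(amplitudes, n):
    """Source term sum_{k=1}^{n-1} a^k a^{n-k}: it uses orders strictly below n,
    which is what makes the hierarchy triangular in n once the problematic
    trace couplings are discarded."""
    src = {}
    for k in range(1, n):
        for lam, c in cauchy_product(amplitudes[k], amplitudes[n - k]).items():
            src[lam] = src.get(lam, 0) + c
    return src

# amplitudes[n] holds the harmonics of the (toy) order-n amplitude
amplitudes = {1: {1: 1.0, -1: 1.0}, 2: {2: 0.5}}
```

For instance, the order-$2$ source involves only the order-$1$ amplitude, and the order-$3$ source only the orders $1$ and $2$.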
The coupling terms which seem the most problematic are the terms $ \lambda\,b_{\phi}\cdot B\,\big(U^{n+1,\osc}_{\phi,2,\lambda}\big)_{|x_d,\chi_d=0} $ and $ \lambda\,b_{\psi}\cdot B\,\big(U^{n+1,\osc}_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0} $ in \eqref{eq systeme a phi} and \eqref{eq systeme a psi}, which couple the evolution equations for $ a^n_{\phi,\lambda} $ and $ a^n_{\psi,\lambda} $ (and therefore the evolution equations for the corrector $ U_n $ of order $ n $) with the corrector of one order higher, $ U_{n+1} $. In equations \eqref{eq systeme a} there are also traces of profiles of order $ n $, which prevents solving these equations (once the lower order correctors have been determined) before solving the evolution equations for $ U_n $. In addition to being highly coupled, the system of equations \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma}, \eqref{eq systeme sigma bord} and \eqref{eq systeme a} also seems over-determined. Indeed, condition \eqref{eq prof princ cond psi 2 zero}, imposing that the \emph{outgoing} leading profile $ \sigma^1_{\psi,2,\lambda} $ has zero trace on the boundary, gives one more boundary condition than the two boundary conditions prescribed by the structure of the problem. It is therefore not clear at all that the system of equations \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma}, \eqref{eq systeme sigma bord} and \eqref{eq systeme a} admits a solution satisfying the additional condition \eqref{eq prof princ cond psi 2 zero}. \section{Existence of an analytic solution}\label{section existence} In this section we focus on the well-posedness of \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma}, \eqref{eq systeme sigma bord} and \eqref{eq systeme a}.
Because of both the high coupling and the over-determination of the system, we choose to concentrate on a simplified version of the general system and to prove well-posedness for it. This simplified model should focus on the profiles associated with the frequencies $ \phi_j $, $ \psi_j $ and $ \nu_j $: on one hand, this greatly reduces the number of equations, and therefore the complexity of the system and of the functional framework; on the other hand, it seems that, due to amplification and resonances, the equations on the profiles associated with $ \phi_j $, $ \psi_j $ and $ \nu_j $ carry the main difficulties of the system of equations \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma} and \eqref{eq systeme a}. Indeed, we already pointed out that once the profiles $ \sigma^n_{\zeta,j,\lambda} $ for $ n\geq 1 $, $ \zeta=\phi,\psi,\nu $, $ j=1,2,3 $ and $ \lambda\in\Z^* $ are determined, system \eqref{eq systeme moyenne}-\eqref{eq systeme sigma hors cas part} becomes upper triangular, and could be studied in a rather classical way, see for example \cite{Kilque2021Weakly}. Since we wish to study simplified versions of the system \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma}, \eqref{eq systeme sigma bord} and \eqref{eq systeme a} in an analytic setting, the initial conditions in \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma} and \eqref{eq systeme a}, which require that the profiles $ \sigma_{\zeta,j,\lambda} $ and their boundary terms $ a_{\zeta,\lambda} $ vanish for negative times $ t $, are not suited for analytic functions, since they would imply that these profiles and boundary terms are zero everywhere. Therefore, in the simplified models, we modify, in a non-equivalent way, these initial conditions into conditions requiring that the solutions vanish at $ t=0 $, which are adapted to analytic functions.
We start by describing a first, very simple, simplified model, which concentrates on the boundary equations; we detail the functional framework which will be used to prove its well-posedness, and then proceed with the proof. We then describe a second, more elaborate, simplified model, which incorporates the interior (incoming) equations; we introduce the additional functional framework, make some specifications on the simplified model with regard to this framework, and state the main result, before proving it. \subsection{First simplified model} For the first simplified model, we focus only on the boundary equations \eqref{eq systeme a}. In these equations, the terms \begin{equation*} X^{\Lop}_{\phi}\,a_{\phi,\lambda}^n,\quad D^{\Lop}_{\phi}\,i\lambda\!\sum_{\lambda_1+\lambda_2=\lambda}\!a_{\phi,\lambda_1}^1\,a_{\phi,\lambda_2}^n\quad \text{and}\quad i\lambda\!\sum_{\lambda_1+\lambda_2=\lambda}\!\gamma_{\phi}(\lambda_1,\lambda_2)\big(a_{\phi,\lambda_1}^1\,a_{\phi,\lambda_2}^n+a_{\phi,\lambda_1}^n\,a_{\phi,\lambda_2}^1\big) \end{equation*} as well as the analogous ones for $ \psi $ appear in the chosen first simplified model, and the last one is rewritten as a semilinear term.
On the contrary, the terms \begin{align*} &\indicatrice_{\lambda=k\lambda_{\phi}}\,\Gamma^{\phi}\,ik\,\Big\lbrace\big(\sigma_{\nu,2,-k}^1\big)_{|x_d=0}\,a_{\psi,-k\lambda_{\psi}}^n+\big(\sigma_{\nu,2,-k}^n\big)_{|x_d=0}\,a_{\psi,-k\lambda_{\psi}}^1\Big\rbrace,\\[5pt] &\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\Big[\tilde{F}^n_{\phi},\big(\tilde{F}^n_{\phi},a^1_{\phi}\big),\big(\tilde{F}^n_{\psi},(\sigma^1_{\nu,2})_{|x_d=0}\big),\big((\sigma^n_{\phi})_{|x_d=0},a^1_{\phi}\big),\big((\sigma^n_{\psi})_{|x_d=0},(\sigma^1_{\nu,2})_{|x_d=0}\big)\Big] \end{align*} and the analogous ones for $ \psi $ are removed in the simplified model, since they involve traces of outgoing interior profiles (recall that $ \tilde{F}^n_{\phi,j,\lambda} $ and $ \tilde{F}^n_{\psi,j,\lambda} $ depend respectively on $ \big(\sigma^n_{\phi,j,\lambda}\big)_{|x_d=0} $ and $ \big(\sigma^n_{\psi,j,\lambda}\big)_{|x_d=0} $). For the same reasons, the terms \begin{equation*} i\lambda\,b_{\phi}\cdot B\,\big(U^{n+1,\osc}_{\phi,2,\lambda}\big)_{|x_d,\chi_d=0}\quad \text{and}\quad i\lambda\,b_{\psi}\cdot B\,\big(U^{n+1,\osc}_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0} \end{equation*} are not kept. Finally, the source terms \begin{equation*} \partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,\dots,U_{n-1},(I-P)\,U_n,(P\,U_n)_{\zeta\neq \phi,\psi,\nu},U_n^*\big)_{|x_d,\chi_d=0} \end{equation*} involve quadratic terms in the traces of the profiles $ \sigma^k_{\zeta,j,\lambda} $ for $ 1\leq k\leq n $, $ \zeta\in\F_{b}\privede{0} $, $ j\in\mathcal{C}(\zeta) $ and $ \lambda\in\Z^* $, with possibly derivatives of order up to $ n $ in front of them.
They are simplified in three ways: we keep only traces of profiles associated with boundary frequencies $ \phi $, $ \psi $ and $ \nu $, we express traces only through functions $ a^k_{\zeta,\lambda} $ for $ \zeta=\phi,\psi $, and we choose only first order derivatives in $ \Theta $ (but we shall see in the following that considering derivatives in $ y $ would present no additional difficulty). Finally, boundary terms $ G $ and $ H $ are represented by functions $ H^n_{\zeta} $, belonging to a space specified later on. Multiplying equations \eqref{eq systeme a} by $ e^{i\lambda\Theta} $ for $ \Theta\in\T $ a periodic variable therefore leads to the following simplified model amplitude equations \begin{subequations}\label{eq ana1 a} \begin{align}\label{eq ana1 a phi} \color{altred}X^{\Lop}_{\phi}\,a_{\phi}^n+D^{\Lop}_{\phi}\,\partial_{\Theta}\big(a^1_{\phi}\,a^n_{\phi}\big) +\mathbf{w}_{\phi}\,\mathbb{F}_{\phi}^{\per}\big[\partial_{\Theta}\,a^1_{\phi},\partial_{\Theta}\,a^n_{\phi}\big]\color{altred}&\color{altred}=H^n_{\phi}+K^{\Lop}_{\phi}\sum_{k=1}^{n-1}\partial_{\Theta}\big(a^k_{\phi}\,a^{n-k}_{\psi}\big),\\\label{eq ana1 a psi} \color{altorange2}X^{\Lop}_{\psi}\,a_{\psi}^n+D^{\Lop}_{\psi}\,\partial_{\Theta}\big(a^1_{\psi}\,a^n_{\psi}\big) +\mathbf{w}_{\psi}\,\mathbb{F}_{\psi}^{\per}\big[\partial_{\Theta}\,a^1_{\psi},\partial_{\Theta}\,a^n_{\psi}\big]&\color{altorange2}=H^n_{\psi}+K^{\Lop}_{\psi}\sum_{k=1}^{n-1}\partial_{\Theta}\big(a^k_{\phi}\,a^{n-k}_{\psi}\big), \end{align} \end{subequations} where, for $ \zeta=\phi,\psi $, we have denoted by $ a^n_{\zeta} $ the function of $ \omega_T\times\T $ defined as \begin{equation*} a^n_{\zeta}(z,\Theta):=\sum_{\lambda\in\Z^*}a^n_{\zeta,\lambda}(z)\,e^{i\lambda\,\Theta}, \end{equation*} where, for $ \zeta=\phi,\psi $, the bilinear operator $ \mathbb{F}^{\per}_{\zeta} $ is defined as \begin{equation}\label{eq ana1 def F per} \mathbb{F}^{\per}_{\zeta}\big[a,b\big]:= 
\sum_{\lambda\in\Z^*}\sum_{\substack{\lambda_1+\lambda_3=\lambda\\\lambda_1\lambda_3\neq0}}\frac{i\,a_{\lambda_1}\,b_{\lambda_3}}{\lambda_1\,\delta^1_{\zeta}+\lambda_3\,\delta^3_{\zeta}}\,e^{i\lambda\,\Theta}, \end{equation} with $ \delta^1_{\zeta} $ and $ \delta^3_{\zeta} $ scalars defined as \begin{equation*} \delta^1_{\zeta}:=\frac{\xi_1(\zeta)-\xi_2(\zeta)}{\xi_3(\zeta)-\xi_1(\zeta)},\qquad \delta^3_{\zeta}:=\frac{\xi_3(\zeta)-\xi_2(\zeta)}{\xi_3(\zeta)-\xi_1(\zeta)}, \end{equation*} and where $ \mathbf{w}_{\phi},\mathbf{w}_{\psi},K^{\Lop}_{\phi},K^{\Lop}_{\psi}\in\R $. Here we have used the analysis of \cite[Section 3.1]{CoulombelWilliams2017Mach} to rewrite the terms involving the coefficients $ \gamma_{\zeta}(\lambda_1,\lambda_3) $ in \eqref{eq systeme a} as $ \mathbf{w}_{\phi}\,\mathbb{F}_{\phi}^{\per}\big[\partial_{\Theta}\,a^1_{\phi},\partial_{\Theta}\,a^n_{\phi}\big] $, up to changing the definition of the coefficients $ D^{\Lop}_{\zeta} $. Note that since $ \phi $ and $ \psi $ are nonresonant, the denominators in equation \eqref{eq ana1 def F per} defining $ \mathbb{F}^{\per}_{\zeta} $ are nonzero. Up to changing all notations by a harmless multiplicative constant, we can assume that, for $ \zeta=\phi,\psi $, the vector fields $ X_{\zeta}^{\Lop} $ read \begin{equation}\label{eq ana1 def X Lop} X_{\zeta}^{\Lop}=\partial_t-\mathbf{v}^{\Lop}_{\zeta}\cdot \nabla_y, \end{equation} with $ \mathbf{v}^{\Lop}_{\zeta}\in\R^{d-1} $. Equations \eqref{eq ana1 a} are coupled with the initial conditions \begin{equation}\label{eq ana1 a init} \color{altred}\big(a_{\phi}^n\big)_{|t=0}=0,\qquad \color{altorange2}\big(a_{\psi}^n\big)_{|t=0}=0. \end{equation} Again, these initial conditions \eqref{eq ana1 a init} are not the same as \eqref{eq systeme init a phi psi}, and are written in this (non-equivalent) form to be suited to the analytic framework.
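As a concrete illustration of definition \eqref{eq ana1 def F per}, the following Python sketch computes the Fourier coefficients of $ \mathbb{F}^{\per}_{\zeta}[a,b] $ for trigonometric polynomials. The values of $ \delta^1 $ and $ \delta^3 $ below are placeholders (in the text they are the ratios of the normal phases $ \xi_j(\zeta) $); an irrational ratio is chosen so that, as guaranteed by nonresonance in the text, no denominator vanishes:

```python
# Illustrative sketch of the bilinear operator F_per, acting on truncated
# Fourier series stored as dicts {lambda: complex coefficient}.
# delta1 and delta3 are placeholder values, not the ones of the paper; the
# nonresonance assumption is mimicked by taking an irrational ratio.

def F_per(a, b, delta1, delta3):
    """Coefficient of e^{i*lam*Theta}: sum over lam1 + lam3 = lam with
    lam1 * lam3 != 0 of i * a_{lam1} * b_{lam3} / (lam1*delta1 + lam3*delta3)."""
    out = {}
    for lam1, c1 in a.items():
        for lam3, c3 in b.items():
            lam = lam1 + lam3
            if lam == 0 or lam1 == 0 or lam3 == 0:
                continue  # only nonzero harmonics enter the sum
            out[lam] = out.get(lam, 0) + 1j * c1 * c3 / (lam1 * delta1 + lam3 * delta3)
    return out

# Single-mode example: a = b = e^{i*Theta}, placeholder delta1 = 1, delta3 = sqrt(2)
coeffs = F_per({1: 1.0}, {1: 1.0}, 1.0, 2 ** 0.5)
```

The single-mode example produces one output harmonic, at $ \lambda=2 $, with coefficient $ i/(\delta^1+\delta^3) $; the size of the divisors $ \lambda_1\delta^1+\lambda_3\delta^3 $ is what the small divisors assumption controls.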
Note that equations \eqref{eq ana1 a} are quasilinear for $ a^1_{\phi},a^1_{\psi} $ when $ n=1 $, and linear for $ a^n_{\phi},a^n_{\psi} $ when $ n\geq 2 $. As we will prove later that the terms $ \mathbb{F}_{\phi}^{\per}\big[\partial_{\Theta}\,a^1_{\phi},\partial_{\Theta}\,a^n_{\phi}\big] $ and $ \mathbb{F}_{\psi}^{\per}\big[\partial_{\Theta}\,a^1_{\psi},\partial_{\Theta}\,a^n_{\psi}\big] $ are semilinear, equations \eqref{eq ana1 a} are transport equations, with a Burgers type term (when $ n=1 $), a semilinear term, a source term and a convolution type term. The system of equations \eqref{eq ana1 a}, \eqref{eq ana1 a init} is a simplification of system \eqref{eq systeme a}, for which we now propose to set up the analytic tools needed to solve it. \bigskip The aim is to solve system \eqref{eq ana1 a}-\eqref{eq ana1 a init} with the following Cauchy-Kovalevskaya theorem. The first proofs of this kind of result are due to \cite{Nirenberg1972CauchyKowalewski} and then \cite{Nishida1977Nirenberg}, and the proof of the following formulation goes back to \cite{BaouendiGoulaouic1978Nishida}. \begin{theorem}[{\cite{BaouendiGoulaouic1978Nishida}}]\label{theorem Cauchy-Kovalevskaya} Let $ (B_r)_{r_0\leq r\leq r_1} $ be a decreasing family of Banach spaces (with $ 0\leq r_0\leq r_1\leq 1 $), i.e. such that, for $ r_0\leq r'<r\leq r_1 $, \begin{equation*} B_r\subset B_{r'},\quad \norme{.}_{r'}\leq \norme{.}_{r}.
\end{equation*} Let $ T,R,C $ and $ M $ be positive real numbers, and consider a continuous function $ F $ from $ [-T,T]\times\ensemble{u\in B_r\,\middle|\,\norme{u}_r<R} $ to $ B_{r'} $ for every $ r_0\leq r'<r\leq r_1 $ which satisfies \begin{equation}\label{eq ana hyp F 1} \sup_{|t|\leq T}\norme{F(t,u)-F(t,v)}_{r'}\leq \frac{C}{r-r'}\,\norme{u-v}_r \end{equation} for all $ r_0\leq r'<r\leq r_1 $, $ |t|<T $, and for all $ u,v $ in $ B_r $ such that $ \norme{u}_r\leq R $, $ \norme{v}_r\leq R $, and \begin{equation}\label{eq ana hyp F 2} \sup_{|t|\leq T}\norme{F(t,0)}_r\leq \frac{M}{r_1-r}, \end{equation} for every $ r_0\leq r<r_1 $. Then there exists a real number $ \delta $ in $ (0,T) $ and a unique function $ u $, belonging to $ \mathcal{C}^1\big((-\delta(r_1-r),\delta(r_1-r)), B_r\big) $ for every $ r_0\leq r\leq r_1 $, satisfying \begin{equation*} \sup_{|t|<\delta(r_1-r)}\norme{u(t)}_r<R, \end{equation*} and the system \begin{equation*} \begin{cases} u'(t)=F\big(t,u(t)\big) & \mbox{for } |t|<\delta(r_1-r)\\ u(0)=0. \end{cases} \end{equation*} \end{theorem} We therefore need to define a chain of Banach spaces of analytic functions adapted to our problem \eqref{eq ana1 a}. \subsection{Functional framework} \subsubsection{Functional spaces} For a function $ u $ of $ L^2(\R^{d-1}) $, the symbol $ \hat{u} $ refers to the Fourier transform of $ u $, with the following convention \begin{equation*} \hat{u}(\xi):=\int_{\R^{d-1}}u(y)\,e^{-i\,\xi\cdot y}\,dy, \quad \forall \xi\in\R^{d-1}. \end{equation*} For a complex vector $ X $, the notation $ |X| $ refers to the norm $ \sqrt{X\cdot X^*} $, and we denote by $ \prodscalbis{.} $ the Japanese bracket, that is, for a complex vector $ X $, \begin{equation*} \prodscalbis{X}:=\big(1+|X|^2\big)^{1/2}. \end{equation*} We set $ d^* $ to be an integer such that $ d^*>\tilde{m}_0+2+(d+1)/2 $, where $ \tilde{m}_0 $ is the nonnegative real number of Lemma \ref{lemma hyp petit diviseurs}.
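The $ C/(r-r') $ loss in hypothesis \eqref{eq ana hyp F 1} is exactly the cost of one derivative on a scale of spaces with exponential Fourier weights: the elementary calculus fact behind it is that, for $ a>0 $, $ \sup_{x\geq 0} x\,e^{-ax}=1/(ea) $, attained at $ x=1/a $, with $ a $ playing the role of the width loss $ s-s' $. A quick numerical sanity check of this fact (a sketch, independent of the paper's notation):

```python
import math

def sup_x_exp(a, n_grid=400_000, x_max=40.0):
    """Numerically approximate sup_{x >= 0} x * exp(-a * x) for a > 0
    by brute-force evaluation on a uniform grid."""
    best = 0.0
    for i in range(n_grid + 1):
        x = x_max * i / n_grid
        best = max(best, x * math.exp(-a * x))
    return best

# The supremum is attained at x = 1/a with value 1/(e*a): one derivative on
# the analytic scale costs a factor proportional to 1/(width loss).
a = 0.25
assert abs(sup_x_exp(a) - 1.0 / (math.e * a)) < 1e-6
```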
The following definition quantifies analyticity by means of an exponential decay of the Fourier transform. \begin{definition} For $ s\in(0,1) $, the space $ Y_{s} $ is defined as the space of all functions $ u $ of $ L^2(\R^{d-1}\times\T) $ such that, if their Fourier series expansion in $ \Theta $ reads \begin{equation*} u(y,\Theta)=\sum_{\lambda\in\Z}u_{\lambda}(y)\,e^{i\lambda\Theta}, \end{equation*} then \begin{equation*} \norme{u}^2_{Y_s}:=\int_{\R^{d-1}}\sum_{\lambda\in\Z}e^{2s|(\lambda,\xi)|}\prodscalbis{(\lambda,\xi)}^{2d^*}\big|\hat{u_{\lambda}}(\xi)\big|^2\,d\xi<+\infty. \end{equation*} \end{definition} The following results make precise how $ y $ and $ \Theta $ derivatives act on $ Y_s $, and assert that $ Y_s $ is a Banach algebra. \begin{lemma}\label{lemma ana dy dtheta Y s} There exists $ C>0 $ such that, for $ 0\leq s'<s\leq 1 $, for $ u $ in $ Y_s $, the functions $ \nabla_y\, u $ and $ \partial_{\Theta}u $ belong to $ Y_{s'} $, and we have \begin{equation}\label{eq ana est dt dtheta Ys} \norme{\nabla_y\, u }_{Y_{s'}}\leq \frac{C}{s-s'}\norme{u}_{Y_s}\quad\text{and}\quad \norme{\partial_{\Theta}u}_{Y_{s'}}\leq \frac{C}{s-s'}\norme{u}_{Y_s}. \end{equation} \end{lemma} \begin{proof} We prove the estimate for $ \nabla_y\, u $, the one for $ \partial_{\Theta}u $ being similar. We have, by definition of the $ Y_{s'} $-norm, \begin{multline*} \norme{\nabla_y\, u}_{Y_{s'}}^2=\int_{\R^{d-1}}\sum_{\lambda\in\Z}e^{2s'|(\lambda,\xi)|}\prodscalbis{(\lambda,\xi)}^{2d^*}|\xi|^2\big|\hat{u_{\lambda}}(\xi)\big|^2\,d\xi\\ \leq \frac{C^2}{(s-s')^2}\int_{\R^{d-1}}\sum_{\lambda\in\Z}e^{2s|(\lambda,\xi)|}\prodscalbis{(\lambda,\xi)}^{2d^*}\big|\hat{u_{\lambda}}(\xi)\big|^2\,d\xi= \frac{C^2}{(s-s')^2}\,\norme{u}^2_{Y_s}, \end{multline*} since $ |\xi|^2\exp(2s'|\xi|)\leq C^2\exp(2s|\xi|)/(s-s')^2 $ for $ \xi $ in $ \R^{d} $, with $ C>0 $ independent of $ s,s' $ and $ \xi $; one can take for instance $ C=2e^{-1} $.
\end{proof} \begin{lemma}\label{lemma Ys Banach algebra} For $ s\in(0,1) $, the space $ Y_s $ is a Banach algebra, up to a positive constant, that is, there exists $ C>0 $ (independent of $ s $) such that, for $ u,v $ in $ Y_s $, the function $ uv $ belongs to $ Y_s $ and we have \begin{equation*} \norme{uv}_{Y_s}\leq C\norme{u}_{Y_s}\norme{v}_{Y_s}. \end{equation*} \end{lemma} \begin{proof} Let $ s $ be in $ (0,1) $, and consider $ u,v $ in $ Y_s $. We have \begin{align*} &\quad\int_{\R^{d-1}}\sum_{\lambda\in\Z}e^{2s|(\lambda,\xi)|}\prodscalbis{(\lambda,\xi)}^{2d^*}\big|\hat{(uv)_{\lambda}}(\xi)\big|^2\,d\xi\\ &=\int_{\R^{d-1}}\sum_{\lambda\in\Z}e^{2s|(\lambda,\xi)|}\prodscalbis{(\lambda,\xi)}^{2d^*}\left|\int_{\R^{d-1}}\sum_{\mu\in\Z}\hat{u_{\mu}}(\eta)\hat{v_{\lambda-\mu}}(\xi-\eta)\,d\eta\right|^2\,d\xi\\ &\leq\int_{\R^{d-1}}\sum_{\lambda\in\Z}\left(\int_{\R^{d-1}}\sum_{\mu\in\Z}\frac{\prodscalbis{(\lambda,\xi)}^{2d^*}}{\prodscalbis{(\mu,\eta)}^{2d^*}\prodscalbis{(\lambda-\mu,\xi-\eta)}^{2d^*}}\,d\eta\right)\\ &\quad\int_{\R^{d-1}}\sum_{\mu\in\Z}e^{2s|(\mu,\eta)|}\prodscalbis{(\mu,\eta)}^{2d^*}\big|\hat{u_{\mu}}(\eta)\big|^2 e^{2s|(\lambda-\mu,\xi-\eta)|}\prodscalbis{(\lambda-\mu,\xi-\eta)}^{2d^*}\big|\hat{v_{\lambda-\mu}}(\xi-\eta)\big|^2\,d\eta\,d\xi\\ &\leq C \norme{u}^2_{Y_s}\norme{v}^2_{Y_s}, \end{align*} by the Cauchy-Schwarz inequality, provided that $ \int_{\R^{d-1}}\sum_{\mu\in\Z}\frac{\prodscalbis{(\lambda,\xi)}^{2d^*}}{\prodscalbis{(\mu,\eta)}^{2d^*}\prodscalbis{(\lambda-\mu,\xi-\eta)}^{2d^*}}\,d\eta $ is bounded uniformly with respect to $ (\lambda,\xi) $. We conclude by proving this last claim.
For $ \mu,\lambda\in\Z $ and $ \xi,\eta\in\R^{d-1} $, we have \begin{equation*} \big|(\lambda,\xi)\big|^2\leq 2 \big|(\mu,\eta)\big|^2+2 \big|(\lambda-\mu,\xi-\eta)\big|^2 \end{equation*} so \begin{equation*} \prodscalbis{(\lambda,\xi)}^2\leq 2 \prodscalbis{(\mu,\eta)}^2+2 \prodscalbis{(\lambda-\mu,\xi-\eta)}^2, \end{equation*} and \begin{equation*} \prodscalbis{(\lambda,\xi)}^{2d^*}\leq 2^{2d^*-1} \prodscalbis{(\mu,\eta)}^{2d^*}+2^{2d^*-1} \prodscalbis{(\lambda-\mu,\xi-\eta)}^{2d^*}. \end{equation*} Therefore, by the change of variables $ (\eta,\mu)\mapsto(\xi-\eta,\lambda-\mu) $, \begin{equation*} \int_{\R^{d-1}}\sum_{\mu\in\Z}\frac{\prodscalbis{(\lambda,\xi)}^{2d^*}}{\prodscalbis{(\mu,\eta)}^{2d^*}\prodscalbis{(\lambda-\mu,\xi-\eta)}^{2d^*}}\,d\eta \leq 2^{2d^*}\int_{\R^{d-1}}\sum_{\mu\in\Z}\inv{\prodscalbis{(\mu,\eta)}^{2d^*}}\,d\eta \leq C\,2^{2d^*} \end{equation*} with $ C $ depending only on $ d^* $, since $ d^* $ is such that $ d^*>d/2 $. \end{proof} As we work with sequences of functions $ (a_{\phi}^n)_{n\geq 1} $ and $ (a_{\psi}^n)_{n\geq 1} $, we define a functional space accordingly. We also specify the norm chosen on the product space, since we will work with couples of sequences. \begin{definition}\label{def Y s suite} For $ s\in(0,1) $, the space $ \Y_{s} $ is defined as the set of sequences $ \mathbf{a}=\big(a_n\big)_{n\geq 1} $ of elements of $ Y_{s} $ such that \begin{equation*} \normetriple{\mathbf{a}}^2_{\Y_{s}}:=\sum_{n\geq 1}e^{2sn}\prodscalbis{n}^{2d^*}\norme{a_n}^2_{Y_s}<+\infty. \end{equation*} For $ s\in(0,1) $, the norm on the product space $ \Y_s^2 $ is defined, for $ (\a,\b)\in\Y_s^2 $, as \begin{equation*} \normetriple{\big(\a,\b\big)}_{\Y_s^2}^2:=\normetriple{\a}_{\Y_s}^2+\normetriple{\b}_{\Y_s}^2.
\end{equation*} \end{definition} The space $ \Y_s $ satisfies properties analogous to those of Lemmas \ref{lemma ana dy dtheta Y s} and \ref{lemma Ys Banach algebra} (with convolution of sequences as the product), but they will not be used directly. \subsubsection{Specifications on the simplified model} We can now specify some properties of the system \eqref{eq ana1 a}, \eqref{eq ana1 a init} under study. The boundary source terms $ H^n_{\phi} $, $ H^n_{\psi} $ for $ n\geq 1 $ are taken such that, defining $ \mathbf{H}:=\big(\mathbf{H}_{\phi},\mathbf{H}_{\psi}\big):=\big(H^n_{\phi},H^n_{\psi}\big)_{n\geq 1} $, the function $ \mathbf{H} $ is in $ \mathcal{C}\big([-T,T],\Y^2_1\big) $. In Proposition \ref{prop WP bord CK} below, stating existence and uniqueness for system \eqref{eq ana1 a}, \eqref{eq ana1 a init}, there will be an additional assumption on $ \mathbf{H} $, requiring that there exists $ M>0 $ such that, for $ 0\leq s<1 $, \begin{equation*} \sup_{|t|<T}\normetriple{\mathbf{H}(t)}_{\Y^2_s}\leq \frac{M}{1-s}. \end{equation*} This assumption on $ \mathbf{H} $ is stronger than requiring $ H $ and $ G $ of \eqref{eq systeme 1} to be in $ H^{\infty}(\R^d\times\T) $, as it imposes analyticity with respect to the space variables, with a bound on the norm that grows with the regularity index. We denote by $ \gamma_0>0 $ a positive constant such that, for $ \zeta=\phi,\psi $, \begin{subequations}\label{eq ana1 hyp toy model} \begin{equation}\label{eq ana1 hyp toy model v D bord} \big|\mathbf{v}^{\Lop}_{\zeta}\big|\leq \gamma_0,\qquad \big|D^{\Lop}_{\zeta}\big|\leq \gamma_0, \quad |\mathbf{w}_{\zeta}|\leq\gamma_0^{1/2}\quad \text{and}\quad \big|K^{\Lop}_{\zeta}\big|\leq \gamma_0, \end{equation} and, for $ s\in(0,1) $, for $ u,v $ in $ Y_s $, and for $ \zeta=\phi,\psi $, \begin{equation}\label{eq ana1 hyp toy model F per} \norme{\mathbb{F}^{\per}_{\zeta}\big[\partial_{\Theta}\,u,\partial_{\Theta}\,v\big]}_{Y_s}\leq \gamma_0^{1/2}\,\norme{u}_{Y_s}\norme{v}_{Y_s}.
\end{equation} \end{subequations} All these estimates rely on the fact that the scalars $ D^{\Lop}_{\zeta} $, $ \mathbf{w}_{\zeta} $ and $ K^{\Lop}_{\zeta} $, the vectors $ \mathbf{v}^{\Lop}_{\zeta} $ and the operators $ \mathbb{F}^{\per}_{\zeta} $ are indexed by finite sets. As for estimate \eqref{eq ana1 hyp toy model F per}, which asserts that the operator $ \mathbb{F}^{\per}_{\zeta} $, composed with differentiation in $ \Theta $, acts as a semilinear operator, it is a result of \cite[Theorem 3.1]{CoulombelWilliams2017Mach}. The proof in our case is a straightforward adaptation of the one of \cite{CoulombelWilliams2017Mach} to our functional framework, which we detail here. \begin{proposition}[{\cite[Theorem 3.1]{CoulombelWilliams2017Mach}}]\label{prop F per semilin} There exists a constant $ C>0 $ such that for $ s\in[0,1] $, for $ u,v $ in $ Y_s $ and $ \zeta=\phi,\psi $, we have \begin{equation}\label{eq ana1 est prop F per} \norme{\mathbb{F}^{\per}_{\zeta}\big[\partial_{\Theta}\,u,\partial_{\Theta}\,v\big]}_{Y_s}\leq C\,\norme{u}_{Y_s}\norme{v}_{Y_s}. \end{equation} \end{proposition} \begin{proof} The result relies on the following lemma, which constitutes a reformulation of the small divisors Assumption \ref{hyp petits diviseurs}. Its proof is the same as the one in \cite{CoulombelWilliams2017Mach}, and is recalled here for the sake of completeness. \begin{lemma}[{\cite[Lemma 3.2]{CoulombelWilliams2017Mach}}]\label{lemma hyp petit diviseurs} There exists a constant $ C>0 $ and a nonnegative real number $ \tilde{m}_0 $ such that, for $ \lambda_1,\lambda_3\in\Z^* $, and for $ \zeta=\phi,\psi $, we have \begin{equation}\label{eq ana1 est lemme hyp petits div} \inv{\big|\lambda_1\,\delta^1_{\zeta}+\lambda_3\,\delta^3_{\zeta}\big|}\leq C \min\big(|\lambda_1|^{\tilde{m}_0},|\lambda_3|^{\tilde{m}_0}\big). \end{equation} \end{lemma} \begin{proof} Without loss of generality, we consider $ \zeta=\phi $. The aim is to use the bound of Assumption \ref{hyp petits diviseurs}.
Using the equality $ L(0,\phi_2)\,r_{\phi,2}=0 $, we get, \begin{equation*} L\big(0,\lambda_1\,\phi_1+\lambda_3\,\phi_3\big)\,r_{\phi,2}=\Big[\lambda_1\big(\xi_1(\phi)-\xi_2(\phi)\big)+\lambda_3\big(\xi_3(\phi)-\xi_2(\phi)\big)\Big]A_d(0)\,r_{\phi,2}, \end{equation*} so the quantity $ \lambda_1\big(\xi_1(\phi)-\xi_2(\phi)\big)+\lambda_3\big(\xi_3(\phi)-\xi_2(\phi)\big) $ is nonzero, since otherwise $ r_{\phi,2} $ would be a nonzero vector in the kernel of $ L\big(0,\lambda_1\,\phi_1+\lambda_3\,\phi_3\big) $, contradicting Assumption \ref{hypothese ensemble frequences} asserting that $ \lambda_1\,\phi_1+\lambda_3\,\phi_3 $ is never characteristic for $ \lambda_1,\lambda_3\in\Z^* $. Therefore we have \begin{equation*} \inv{\Big|\lambda_1\big(\xi_1(\phi)-\xi_2(\phi)\big)+\lambda_3\big(\xi_3(\phi)-\xi_2(\phi)\big)\Big|}\leq C \norme{L\big(0,\lambda_1\,\phi_1+\lambda_3\,\phi_3\big)^{-1}}, \end{equation*} with a constant $ C>0 $ independent of $ \lambda_1,\lambda_3 $. Using Assumption \ref{hyp petits diviseurs} and a polynomial bound on the transpose of the comatrix, we get that there exists a nonnegative real number $ \tilde{m}_0 $ such that \begin{equation*} \inv{\Big|\lambda_1\big(\xi_1(\phi)-\xi_2(\phi)\big)+\lambda_3\big(\xi_3(\phi)-\xi_2(\phi)\big)\Big|}\leq C \big|(\lambda_1,\lambda_3)\big|^{\tilde{m}_0}, \end{equation*} with a new constant $ C>0 $ independent of $ \lambda_1,\lambda_3 $. Up to changing the constant $ C>0 $, we obtain, \begin{equation}\label{eq ana inter 1} \inv{\big|\lambda_1\,\delta^1_{\phi}+\lambda_3\,\delta^3_{\phi}\big|}\leq C \big|(\lambda_1,\lambda_3)\big|^{\tilde{m}_0}. \end{equation} To get the formulation of \eqref{eq ana inter 1} with a minimum, we distinguish two cases.
Either $ \big|\lambda_1\,\delta^1_{\phi}+\lambda_3\,\delta^3_{\phi}\big|>|\delta^1_{\phi}| $, and in this case, with $ C\geq 1/|\delta^1_{\phi}| $, we have \begin{equation*} \inv{\big|\lambda_1\,\delta^1_{\phi}+\lambda_3\,\delta^3_{\phi}\big|}\leq \inv{\big|\delta^1_{\phi}\big|}\leq C \leq C \,|\lambda_1|^{\tilde{m}_0} \end{equation*} since $ \tilde{m}_0\geq 0 $. In the other case, if $ \big|\lambda_1\,\delta^1_{\phi}+\lambda_3\,\delta^3_{\phi}\big|\leq|\delta^1_{\phi}| $, we have \begin{equation*} |\lambda_3|\leq \inv{\big|\delta^3_{\phi}\big|}\big|\lambda_1\,\delta^1_{\phi}+\lambda_3\,\delta^3_{\phi}\big|+\inv{\big|\delta^3_{\phi}\big|}\big|\lambda_1\,\delta^1_{\phi}\big|\leq 2\frac{\big|\delta^1_{\phi}\big|}{\big|\delta^3_{\phi}\big|}|\lambda_1|, \end{equation*} so, up to changing the constant $ C $, estimate \eqref{eq ana inter 1} rewrites as \begin{equation*} \inv{\big|\lambda_1\,\delta^1_{\phi}+\lambda_3\,\delta^3_{\phi}\big|}\leq C \,|\lambda_1|^{\tilde{m}_0}. \end{equation*} Applying the same arguments for $ \lambda_3 $ leads to the desired estimate \eqref{eq ana1 est lemme hyp petits div}. \end{proof} The proof of Proposition \ref{prop F per semilin} also relies on the following technical result, whose formulation is the one of \cite{CoulombelWilliams2017Mach}. Its proof is an immediate adaptation of a result of \cite{RauchReed1982Microlocal}, and is not recalled here. \begin{lemma}[{\cite[Lemma 1.2.2]{RauchReed1982Microlocal}},{\cite[Lemma 3.3]{CoulombelWilliams2017Mach}}]\label{lemma technique Cou} Let $ \mathbb{K} : \R^{d-1}\times\Z\times \R^{d-1}\times\Z \rightarrow \C $ be a locally integrable measurable function such that, either \begin{equation*} \sup_{(\xi,\lambda)\in \R^{d-1}\times\Z}\int_{\R^{d-1}}\sum_{\mu\in\Z}\big|\mathbb{K}(\xi,\lambda,\eta,\mu)\big|^2\,d\eta<+\infty, \end{equation*} or \begin{equation*} \sup_{(\eta,\mu)\in \R^{d-1}\times\Z}\int_{\R^{d-1}}\sum_{\lambda\in\Z}\big|\mathbb{K}(\xi,\lambda,\eta,\mu)\big|^2\,d\xi<+\infty.
\end{equation*} Then the map \begin{equation*} (f,g)\mapsto \int_{\R^{d-1}}\sum_{\mu\in\Z}\mathbb{K}(\xi,\lambda,\eta,\mu)\,f(\xi-\eta,\lambda-\mu)\,g(\eta,\mu)\,d\eta \end{equation*} is bounded on $ L^2(\R^{d-1}\times\Z)\times L^2(\R^{d-1}\times\Z) $ with values in $ L^2(\R^{d-1}\times\Z) $. \end{lemma} We now proceed with the proof of Proposition \ref{prop F per semilin}, and we consider without loss of generality $ \zeta=\phi $. For $ \xi\in\R^{d-1} $, $ \lambda\in\Z $ and for $ u,v $ in $ Y_s $, the Fourier transform of the $ \lambda $-th term of the Fourier series expansion of $ \mathbb{F}^{\per}_{\zeta}\big[\partial_{\Theta}\,u,\partial_{\Theta}\,v\big] $ is given by \begin{equation*} \widehat{\mathbb{F}^{\per}_{\zeta}\big[\partial_{\Theta}\,u,\partial_{\Theta}\,v\big]_{\lambda}}(\xi)=-i\int_{\R^{d-1}}\sum_{\substack{\mu\in\Z \\ \mu\neq 0, \lambda}}\frac{\mu(\lambda-\mu)}{(\lambda-\mu)\delta^1_{\phi}+\mu\delta^3_{\phi}}\,\hat{u_{\lambda-\mu}}(\xi-\eta)\,\hat{v_{\mu}}(\eta)\,d\eta. \end{equation*} Therefore, to obtain inequality \eqref{eq ana1 est prop F per}, we have to estimate the quantity \begin{equation*} \int_{\R^{d-1}}\sum_{\lambda\in\Z}\left|\int_{\R^{d-1}}\sum_{\mu\in\Z}e^{s|(\lambda,\xi)|}\prodscalbis{(\lambda,\xi)}^{d^*}\mathbf{F}(\lambda,\mu)\,\hat{u_{\lambda-\mu}}(\xi-\eta)\,\hat{v_{\mu}}(\eta)\,d\eta\right|^2d\xi, \end{equation*} where we have denoted, for $ \lambda,\mu\in\Z $, \begin{equation*} \mathbf{F}(\lambda,\mu):=\begin{cases} \frac{\mu(\lambda-\mu)}{(\lambda-\mu)\delta^1_{\phi}+\mu\delta^3_{\phi}} & \text{if } \mu \neq 0,\lambda\\ 0 & \text{otherwise,} \end{cases} \end{equation*} and we will do it using Lemma \ref{lemma technique Cou}. 
We consider two nonnegative functions $ \chi_1,\chi_2 $ on $ \R^d\times\R^d $ such that $ \chi_1+\chi_2\equiv 1 $ and \begin{align*} \chi_1(\xi,\lambda,\eta,\mu)=0 &\quad \text{if } \prodscalbis{(\eta,\mu)}\geq (2/3)\prodscalbis{(\xi,\lambda)}\\ \chi_2(\xi,\lambda,\eta,\mu)=0 &\quad \text{if } \prodscalbis{(\eta,\mu)}\leq (1/3)\prodscalbis{(\xi,\lambda)}. \end{align*} We first consider the quantity \begin{subequations}\label{eq ana inter 2} \begin{align} &\int_{\R^{d-1}}\sum_{\mu\in\Z}\chi_1(\xi,\lambda,\eta,\mu)\,e^{s|(\lambda,\xi)|}\prodscalbis{(\lambda,\xi)}^{d^*}\mathbf{F}(\lambda,\mu)\,\hat{u_{\lambda-\mu}}(\xi-\eta)\,\hat{v_{\mu}}(\eta)\,d\eta, \intertext{rewritten as} &\int_{\R^{d-1}}\sum_{\mu\in\Z}\frac{\chi_1(\xi,\lambda,\eta,\mu)\,e^{s|(\lambda,\xi)|}\prodscalbis{(\lambda,\xi)}^{d^*}\mathbf{F}(\lambda,\mu)}{e^{s|(\lambda-\mu,\xi-\eta)|}\prodscalbis{(\lambda-\mu,\xi-\eta)}^{d^*} e^{s|(\mu,\eta)|}\prodscalbis{(\mu,\eta)}^{d^*}}\,\\\nonumber &\qquad\times\Big(e^{s|(\lambda-\mu,\xi-\eta)|}\prodscalbis{(\lambda-\mu,\xi-\eta)}^{d^*}\hat{u_{\lambda-\mu}}(\xi-\eta)\Big)\Big(e^{s|(\mu,\eta)|}\prodscalbis{(\mu,\eta)}^{d^*}\hat{v_{\mu}}(\eta)\Big)d\eta. \end{align} \end{subequations} We have \begin{equation*} e^{2s|(\lambda,\xi)|} \leq e^{2s|(\mu,\eta)|} e^{2s|(\lambda-\mu,\xi-\eta)|}, \end{equation*} and, on the support of $ \chi_1 $, \begin{equation*} \prodscalbis{(\lambda-\mu,\xi-\eta)}\geq \prodscalbis{(\lambda,\xi)}-\prodscalbis{(\mu,\eta)}\geq \inv{3}\prodscalbis{(\lambda,\xi)}, \end{equation*} so \begin{equation*} \int_{\R^{d-1}}\sum_{\mu\in\Z}\left|\frac{\chi_1(\xi,\lambda,\eta,\mu)\,e^{s|(\lambda,\xi)|}\prodscalbis{(\lambda,\xi)}^{d^*}\mathbf{F}(\lambda,\mu)}{e^{s|(\lambda-\mu,\xi-\eta)|}\prodscalbis{(\lambda-\mu,\xi-\eta)}^{d^*} e^{s|(\mu,\eta)|}\prodscalbis{(\mu,\eta)}^{d^*}}\right|^2d\eta\leq C\int_{\R^{d-1}}\sum_{\mu\in\Z}\left|\frac{\mathbf{F}(\lambda,\mu)}{\prodscalbis{(\mu,\eta)}^{d^*}}\right|^2d\eta. 
\end{equation*} Using Lemma \ref{lemma hyp petit diviseurs}, we get \begin{equation*} \left|\frac{\mu(\lambda-\mu)}{(\lambda-\mu)\delta^1_{\phi}+\mu\delta^3_{\phi}}\right|=\inv{\big|\delta^1_{\phi}\big|}\left|\mu-\frac{\delta^3_{\phi}\,\mu^2}{(\lambda-\mu)\delta^1_{\phi}+\mu\delta^3_{\phi}}\right|\leq C |\mu|^{\tilde{m}_0+2}, \end{equation*} so \begin{multline*} \sup_{(\xi,\lambda)\in \R^{d-1}\times\Z}\int_{\R^{d-1}}\sum_{\mu\in\Z}\left|\frac{\chi_1(\xi,\lambda,\eta,\mu)\,e^{s|(\lambda,\xi)|}\prodscalbis{(\lambda,\xi)}^{d^*}\mathbf{F}(\lambda,\mu)}{e^{s|(\lambda-\mu,\xi-\eta)|}\prodscalbis{(\lambda-\mu,\xi-\eta)}^{d^*} e^{s|(\mu,\eta)|}\prodscalbis{(\mu,\eta)}^{d^*}}\right|^2d\eta\\ \leq C\int_{\R^{d-1}}\sum_{\mu\in\Z}\frac{|\mu|^{2(\tilde{m}_0+2)}}{\prodscalbis{(\mu,\eta)}^{2d^*}}\,d\eta<+\infty, \end{multline*} since we chose $ d^*>\tilde{m}_0+2+(d+1)/2 $. Applying Lemma \ref{lemma technique Cou} to the quantity in \eqref{eq ana inter 2}, we obtain \begin{align*} &\int_{\R^{d-1}}\sum_{\lambda\in\Z}\left|\int_{\R^{d-1}}\sum_{\mu\in\Z}\chi_1(\xi,\lambda,\eta,\mu)\,e^{s|(\lambda,\xi)|}\prodscalbis{(\lambda,\xi)}^{d^*}\mathbf{F}(\lambda,\mu)\,\hat{u_{\lambda-\mu}}(\xi-\eta)\,\hat{v_{\mu}}(\eta)\,d\eta\right|^2d\xi\\[5pt] &\leq C \norme{u}^2_{Y_s}\norme{v}^2_{Y_s}. \end{align*} Applying similar arguments to $ \chi_2 $ leads to the analogous estimate, and combining the estimates for $ \chi_1 $ and $ \chi_2 $ gives the desired inequality \eqref{eq ana1 est prop F per}, concluding the proof. \end{proof} We are now able to prove well-posedness for system \eqref{eq ana1 a}, \eqref{eq ana1 a init}, using Theorem \ref{theorem Cauchy-Kovalevskaya}, with the above properties.
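As a side remark (not needed in the sequel), the $ 1/(s-s') $ factors appearing in the estimates below all stem from the same elementary computation, implicit in Lemma \ref{lemma ana dy dtheta Y s}: a derivative costs one factor of $ \big|(\lambda,\xi)\big| $ on the Fourier side, which is absorbed by the gap between the weights $ e^{s'|(\lambda,\xi)|} $ and $ e^{s|(\lambda,\xi)|} $. Indeed, since $ \sup_{x\geq 0} x\,e^{-(s-s')x}=\big(e\,(s-s')\big)^{-1} $, we have, for $ 0<s'<s $,
\begin{equation*}
\big|(\lambda,\xi)\big|\,e^{s'|(\lambda,\xi)|}
=\Big(\big|(\lambda,\xi)\big|\,e^{-(s-s')|(\lambda,\xi)|}\Big)\,e^{s|(\lambda,\xi)|}
\leq \frac{1}{e\,(s-s')}\,e^{s|(\lambda,\xi)|}.
\end{equation*}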
\subsection{A Cauchy-Kovalevskaya theorem for boundary equations} System \eqref{eq ana1 a}, \eqref{eq ana1 a init} reads \begin{equation}\label{eq ana1 dt a} \begin{cases} \partial_t \,\mathbf{a} = \mathbf{F}\big(t,\a\big):= L \,\mathbf{a} -\partial_{\Theta}\,\mathbf{D}^{\Lop}\big(\a,\a\big) - \mathbb{F}^{\per}\big(\partial_{\Theta}\,\a,\partial_{\Theta}\,\a\big)+\mathbf{H}+\mathbf{K}^{\Lop}\big(\a,\a\big),\\[5pt] \a(0)=0, \end{cases} \end{equation} where $ \a:=(\a_{\phi},\a_{\psi}):=\big(a^n_{\phi},a^n_{\psi}\big)_{n\geq 1} $, and, if $ \mathbf{c}:=\big(c^n_{\phi},c^n_{\psi}\big)_{n\geq 1} $, \begin{align*} L\,\a&:=\big(\mathbf{v}_{\phi}^{\Lop}\cdot\nabla_y\,a^n_{\phi},\mathbf{v}_{\psi}^{\Lop}\cdot\nabla_y\,a^n_{\psi}\big)_{n\geq 1}, \\[5pt] \mathbf{D}^{\Lop}(\a,\mathbf{c})&:=\big(D^{\Lop}_{\phi}\,a^1_{\phi}\,c^n_{\phi}, D^{\Lop}_{\psi}\,a^1_{\psi}\,c^n_{\psi}\big)_{n\geq 1}, \\[5pt] \mathbb{F}^{\per}(\a,\mathbf{c})&:=\big(\mathbf{w}_{\phi}\,\mathbb{F}^{\per}_{\phi}[a^1_{\phi},c^{n}_{\phi}], \mathbf{w}_{\psi}\,\mathbb{F}^{\per}_{\psi}[a^1_{\psi},c^{n}_{\psi}]\big)_{n\geq 1}, \\[5pt] \mathbf{K}^{\Lop}(\a,\mathbf{c})&:=\Big(K^{\Lop}_{\phi}\sum_{k=1}^{n-1}\partial_{\Theta}\big(a^k_{\phi}\,c^{n-k}_{\psi}\big),K^{\Lop}_{\psi}\sum_{k=1}^{n-1}\partial_{\Theta}\big(a^k_{\phi}\,c^{n-k}_{\psi}\big)\Big)_{n\geq 1}. \end{align*} System \eqref{eq ana1 dt a} is now in the right form to apply Theorem \ref{theorem Cauchy-Kovalevskaya}, and we prove the following result. \begin{proposition}\label{prop WP bord CK} For $ M_0>0 $, there exists $ \delta\in(0,T) $ such that for all $ \mathbf{H} $ in $ \mathcal{C}\big([-T,T],\Y_1^2\big) $ satisfying, for $ 0<s<1 $, \begin{equation}\label{eq ana1 hyp H WP} \sup_{|t|<T}\normetriple{\mathbf{H}(t)}_{\Y^2_s}\leq \frac{M_0}{1-s}, \end{equation} system \eqref{eq ana1 dt a} admits a unique solution $ \a $ in $ \mathcal{C}^1\big((-\delta(1-s),\delta(1-s)), \Y^2_s\big) $ for every $ 0<s\leq 1 $.
\end{proposition} The key estimates to prove this result are the following. They are classical, and their proof is recalled here for the sake of completeness. \begin{lemma}\label{lemma ana1 est LBK Y s} There exists a constant $ C>0 $ such that for $ 0<s'<s\leq 1 $ and for $ \b,\mathbf{c} $ in $ \Y_s^2 $, the following estimates hold \begin{subequations}\label{eq ana1 est LBK Y s} \begin{align} \label{eq ana1 est LBK Y s:L} \normetriple{L\,\b}_{\Y_{s'}^2}&\leq \frac{C\,\gamma_0\, \normetriple{\b}_{\Y_s^2}}{s-s'},\\[5pt] \label{eq ana1 est LBK Y s:B} \normetriple{\partial_{\Theta}\,\mathbf{D}^{\Lop}\big(\b,\mathbf{c}\big)}_{\Y_{s'}^2}&\leq\frac{C\,\gamma_0\, \normetriple{\b}_{\Y_s^2}\normetriple{\mathbf{c}}_{\Y_s^2}}{s-s'},\\[5pt] \label{eq ana1 est LBK Y s:F} \normetriple{\mathbb{F}^{\per}\big(\partial_{\Theta}\b,\partial_{\Theta}\mathbf{c}\big)}_{\Y_{s}^2}&\leq C\,\gamma_0\, \normetriple{\b}_{\Y_s^2}\normetriple{\mathbf{c}}_{\Y_s^2},\\[5pt] \label{eq ana1 est LBK Y s:K} \normetriple{\mathbf{K}^{\Lop}\big(\b,\mathbf{c}\big)}_{\Y_{s'}^2}&\leq\frac{C\,\gamma_0\, \normetriple{\b}_{\Y_s^2}\normetriple{\mathbf{c}}_{\Y_s^2}}{s-s'}. \end{align} \end{subequations} \end{lemma} \begin{proof} First, estimate \eqref{eq ana1 est LBK Y s:L} follows directly from estimate \eqref{eq ana1 hyp toy model v D bord} and Lemma \ref{lemma ana dy dtheta Y s}, since $ L $ acts componentwise as a first order derivative with bounded coefficients. For the second one \eqref{eq ana1 est LBK Y s:B}, we have, according to Lemma \ref{lemma Ys Banach algebra}, for $ \zeta=\phi,\psi $ and $ n\geq 1 $, \begin{equation*} \norme{D^{\Lop}_{\zeta}\,b^1_{\zeta}\,c^n_{\zeta}}_{Y_s}\leq C\,\gamma_0\norme{b^1_{\zeta}}_{Y_s}\norme{c^n_{\zeta}}_{Y_s}\leq C\,\gamma_0\normetriple{\b}_{\Y_s^2}\norme{c^n_{\zeta}}_{Y_s}.
\end{equation*} Therefore, according to Lemma \ref{lemma ana dy dtheta Y s}, \begin{equation*} \norme{\partial_{\Theta}\big(D^{\Lop}_{\zeta}\,b^1_{\zeta}\,c^n_{\zeta}\big)}_{Y_{s'}}\leq \frac{C}{s-s'}\norme{D^{\Lop}_{\zeta}\,b^1_{\zeta}\,c^n_{\zeta}}_{Y_s}\leq \frac{C\,\gamma_0}{s-s'}\normetriple{\b}_{\Y_s^2}\norme{c^n_{\zeta}}_{Y_s}. \end{equation*} Multiplying by $ e^{2s'n}\prodscalbis{n}^{2d^*} $, summing over $ n\geq 1 $ and $ \zeta=\phi,\psi $, and using $ e^{s'n}\leq e^{sn} $, gives the estimate \eqref{eq ana1 est LBK Y s:B} for the $ \Y_{s'}^2 $-norm. With \eqref{eq ana1 hyp toy model F per}, the proof of \eqref{eq ana1 est LBK Y s:F} is analogous but simpler, since the operator is semilinear. Finally, for \eqref{eq ana1 est LBK Y s:K}, according to Lemmas \ref{lemma ana dy dtheta Y s} and \ref{lemma Ys Banach algebra}, we have, for $ n\geq 1 $, \begin{equation*} \norme{\sum_{k=1}^{n-1}\partial_{\Theta}\big(b^k_{\phi}\,c^{n-k}_{\psi}\big)}_{Y_{s'}}\leq \frac{C}{s-s'} \sum_{k=1}^{n-1}\norme{b^k_{\phi}}_{Y_s}\norme{c^{n-k}_{\psi}}_{Y_s}. \end{equation*} Thus, by the Cauchy-Schwarz inequality and since $ e^{2s'n}\leq e^{2sn} $, \begin{align*} 2\sum_{n\geq 1}&e^{2s'n}\prodscalbis{n}^{2d^*}\norme{\sum_{k=1}^{n-1}\partial_{\Theta}\big(b^k_{\phi}\,c^{n-k}_{\psi}\big)}_{Y_{s'}}^2\\ &\leq \frac{C^2}{(s-s')^2}\sum_{n\geq 1}e^{2sn}\prodscalbis{n}^{2d^*}\left(\sum_{k=1}^{n-1}\norme{b^k_{\phi}}_{Y_s}\norme{c^{n-k}_{\psi}}_{Y_s}\right)^2\\ &\leq \frac{C^2}{(s-s')^2}\sum_{n\geq 1}\left(\sum_{k=1}^{n-1}\frac{\prodscalbis{n}^{2d^*}}{\prodscalbis{k}^{2d^*}\prodscalbis{n-k}^{2d^*}}\right)\\ &\quad\times\sum_{k=1}^{n-1}e^{2sk}\prodscalbis{k}^{2d^*}\norme{b^k_{\phi}}_{Y_s}^2e^{2s(n-k)}\prodscalbis{n-k}^{2d^*}\norme{c^{n-k}_{\psi}}_{Y_s}^2\\ &\leq \frac{C^2}{(s-s')^2}\normetriple{\b}_{\Y_s^2}^2\normetriple{\mathbf{c}}_{\Y_s^2}^2, \end{align*} since $ \sum_{k=1}^{n-1}\frac{\prodscalbis{n}^{2d^*}}{\prodscalbis{k}^{2d^*}\prodscalbis{n-k}^{2d^*}} $ is bounded uniformly with respect to $ n\geq 1 $ (for $ 1\leq k\leq n-1 $ we have $ \prodscalbis{n}\leq 2\max\big(\prodscalbis{k},\prodscalbis{n-k}\big) $, so the general term is bounded by $ 2^{2d^*}\min\big(\prodscalbis{k},\prodscalbis{n-k}\big)^{-2d^*} $, which is summable since $ 2d^*>1 $), and the result follows.
\end{proof} We proceed with the proof of Proposition \ref{prop WP bord CK}, which essentially amounts to verifying the assumptions of Theorem \ref{theorem Cauchy-Kovalevskaya}. \begin{proof}[Proof\emph{ (Proposition \ref{prop WP bord CK})}] We apply Theorem \ref{theorem Cauchy-Kovalevskaya} to system \eqref{eq ana1 dt a} with the scale of Banach spaces $ \big(\Y_s^2\big)_{0< s\leq 1} $. First note that assumption \eqref{eq ana hyp F 2} is satisfied as soon as assumption \eqref{eq ana1 hyp H WP} for $ \mathbf{H} $ is verified. Next we turn to the continuity assumption for $ \mathbf{F} $ and to assumption \eqref{eq ana hyp F 1}. For $ 0<s'<s\leq1 $, for $ t,t'\in(-T,T) $ and for $ \b,\mathbf{c} $ in $ \Y_s^2 $, we have \begin{align*} \mathbf{F}\big(t,\b\big)-\mathbf{F}\big(t',\mathbf{c}\big)&=L\big(\b-\mathbf{c}\big)-\partial_{\Theta}\,\mathbf{D}^{\Lop}\big(\b,\b-\mathbf{c}\big) -\partial_{\Theta}\,\mathbf{D}^{\Lop}\big(\b-\mathbf{c},\mathbf{c}\big) \\&\quad -\mathbb{F}^{\per}\big(\partial_{\Theta}\,\b,\partial_{\Theta}\,(\b-\mathbf{c})\big)- \mathbb{F}^{\per}\big(\partial_{\Theta}\,(\b-\mathbf{c}),\partial_{\Theta}\,\mathbf{c}\big)+\mathbf{H}(t)-\mathbf{H}(t')\\&\quad+\mathbf{K}^{\Lop}\big(\b,\b-\mathbf{c}\big)+\mathbf{K}^{\Lop}\big(\b-\mathbf{c},\mathbf{c}\big), \end{align*} so, according to the estimates of Lemma \ref{lemma ana1 est LBK Y s}, \begin{multline}\label{eq ana1 inter1} \normetriple{\mathbf{F}\big(t,\b\big)-\mathbf{F}\big(t',\mathbf{c}\big)}_{\Y_{s'}^2}\\ \leq \normetriple{\mathbf{H}(t)-\mathbf{H}(t')}_{\Y_s^2}+C\,\gamma_0\Big(1+3\normetriple{\b}_{\Y_s^2}+3\normetriple{\mathbf{c}}_{\Y_s^2}\Big)\frac{\normetriple{\b-\mathbf{c}}_{\Y_s^2}}{s-s'}.
\end{multline} Therefore, since $ \mathbf{H} $ is continuous from $ [-T,T] $ to $ \Y_1^2 $, for any fixed $ R>0 $ (which is arbitrary), estimate \eqref{eq ana1 inter1} yields both the continuity of $ \mathbf{F} $ from $ [-T,T]\times\ensemble{\b\in\Y_s^2\mid \normetriple{\b}_{\Y_s^2}<R} $ to $ \Y_{s'}^2 $ and (taking $ t'=t $) estimate \eqref{eq ana hyp F 1}, with constant given by $ C\,\gamma_0\big(1+6R\big) $. Theorem \ref{theorem Cauchy-Kovalevskaya} therefore applies and gives the sought result. \end{proof} Note that we used here the fact that system \eqref{eq ana1 a}, \eqref{eq ana1 a init} presents quadratic nonlinearities, but other types of nonlinearities could be treated by the same arguments. \subsection{Second simplified model} We now refine the previous simplified model by incorporating interior equations into it. Following the remarks in the introduction of this section, in the new simplified model we remove the coupling with profiles of frequencies different from $ \phi_j $, $ \psi_j $ and $ \nu_j $, which appeared in \eqref{eq systeme sigma evol} in the terms $ \partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,\dots,U_{n-1},(I-P)\,U_n,U_n^*\big) $. The latter terms are also simplified, since they carry derivatives of order higher than one\footnote{In expression \eqref{eq ncor nonpola part U n zeta} of nonpolarized parts, there are already derivatives.}, and we keep only first order derivatives in $ \Theta $ (once again, considering derivatives in $ y $ presents no additional difficulty). They are therefore represented through terms of the form \begin{equation*} \sum_{\zeta_1,\zeta_2=\phi,\psi,\nu}\,\sum_{j_1,j_2=1,3}\,\sum_{k=1}^{n-1}\partial_{y,\Theta}\big(\sigma^k_{\zeta_1,j_1}\,\sigma^{n-k}_{\zeta_2,j_2}\big).
\end{equation*} We also remove couplings with outgoing frequencies $ \phi_2 $, $ \psi_2 $ and $ \nu_2 $, since the incoming equations will be solved as equations of propagation in the normal variable, a form which is not suited to the outgoing equations. Finally, we multiply equations \eqref{eq systeme sigma} by $ e^{i\lambda\Theta} $ for $ \Theta\in\T $ a periodic variable. This leads to the following interior evolution equations under study, for $ n\geq 1 $, $ \zeta=\phi,\psi,\nu $ and $ j=1,3 $, \begin{multline}\label{eq ana sigma} \color{altblue}X_{\zeta,j}\,\sigma^n_{\zeta,j}+D_{\zeta,j}\,\partial_{\Theta}\big(\sigma_{\zeta,j}^n\,\sigma_{\zeta,j}^1\big)+\sum_{\substack{\zeta_1,\zeta_2\in\ensemble{\phi,\psi,\nu}\\j_1,j_2\in\ensemble{1,3}}}\partial_{\Theta}\,\J^{\zeta_2,j_2}_{\zeta_1,j_1}\big[\sigma^1_{\zeta_1,j_1},\sigma^n_{\zeta_2,j_2}\big]\\[-10pt] \color{altblue}=K_{\zeta,j}\sum_{\substack{\zeta_1,\zeta_2\in\ensemble{\phi,\psi,\nu}\\j_1,j_2\in\ensemble{1,3}}}\,\sum_{k=1}^{n-1}\partial_{\Theta}\big(\sigma^k_{\zeta_1,j_1}\,\sigma^{n-k}_{\zeta_2,j_2}\big), \end{multline} \color{black} where we have defined $ \sigma^n_{\zeta,j} $, a function on $ \Omega_T\times\T $, as \begin{equation*} \sigma^n_{\zeta,j}(z,\Theta):=\sum_{\lambda\in\Z^*}\sigma^n_{\zeta,j,\lambda}(z)\,e^{i\lambda\,\Theta}, \end{equation*} and where $ K_{\zeta,j}\in\R $ for $ \zeta=\phi,\psi,\nu $ and $ j=1,3 $. For $ (\zeta_1,\zeta_2,j_1,j_2)\in\ensemble{\phi,\psi,\nu}^2\times\ensemble{1,3}^2 $, the bilinear operator $ \J_{\zeta_1,j_1}^{\zeta_2,j_2} $ is defined as \begin{equation}\label{eq ana def J} \J_{\zeta_1,j_1}^{\zeta_2,j_2}\big[\sigma,\tau\big]=J_{\zeta_1,j_1}^{\zeta_2,j_2}\sum_{\lambda\in\Z^*}\sigma_{\lambda}\tau_{\lambda}\,e^{i\lambda\Theta}, \end{equation} with some coefficients $ J_{\zeta_1,j_1}^{\zeta_2,j_2} $.
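To illustrate definition \eqref{eq ana def J} on a simple (purely illustrative) example: the operator $ \J_{\zeta_1,j_1}^{\zeta_2,j_2} $ couples only matching harmonics, unlike the pointwise product, which convolves the spectra. For instance, taking $ \sigma(\Theta)=e^{i\Theta}+e^{2i\Theta} $ and $ \tau(\Theta)=e^{i\Theta} $, only the harmonics of index $ \lambda=1 $ match, so
\begin{equation*}
\J_{\zeta_1,j_1}^{\zeta_2,j_2}\big[e^{i\Theta}+e^{2i\Theta},\,e^{i\Theta}\big]
=J_{\zeta_1,j_1}^{\zeta_2,j_2}\,e^{i\Theta},
\end{equation*}
whereas the pointwise product $ \big(e^{i\Theta}+e^{2i\Theta}\big)e^{i\Theta}=e^{2i\Theta}+e^{3i\Theta} $ mixes the harmonics.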
As for the boundary equations, up to changing the notation by a harmless multiplicative constant, according to expression \eqref{eq champ vecteur X_alpha} of the vector field $ X_{\zeta,j} $, it can be assumed to read \begin{equation*} X_{\zeta,j}=\partial_{t}-\mathbf{v}_{\zeta,j}\cdot\nabla_x, \end{equation*} where the vector $ \mathbf{v}_{\zeta,j} $ has been defined in Definition \ref{def sortant rentrant alpha X alpha}. Recall that the last component of each vector $ \mathbf{v}_{\zeta,j} $ is positive. Equation \eqref{eq ana sigma} is not provided with an initial condition, as we will view it as an equation of propagation in the normal variable. Concerning the boundary conditions for the profiles $ \sigma^n_{\zeta,j} $, $ j=1,3 $, the coupling terms in $ \big(\sigma^n_{\zeta,2}\big)_{|x_d=0} $ (appearing in the terms $ \tilde{F}^n_{\zeta,j,\lambda} $ of the boundary conditions for profiles associated with $ \phi $ and $ \psi $) are not kept, since they would require trace estimates to solve the interior equations, and we do not have such estimates at our disposal. The terms $ \tilde{F}^n_{\zeta,j,\lambda} $ also convey first order derivatives of the lower order terms $ a^1_{\zeta},\dots,a^{n-1}_{\zeta} $. For the functional framework chosen later, these derivatives are an issue, and since the coupling with the lower order terms $ a^1_{\zeta},\dots,a^{n-1}_{\zeta} $ will be expressed in the evolution equations for $ a^n_{\zeta} $, the terms $ \tilde{F}^n_{\zeta,j,\lambda} $ are only represented in the study equations by boundary terms $ g^n_{\zeta,j} $, belonging to one of the spaces defined later on.
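Since the last component of $ \mathbf{v}_{\zeta,j} $ is positive, an equation $ X_{\zeta,j}\,\sigma=F $ can indeed be rewritten, schematically (writing $ \mathbf{v}_{\zeta,j}=(\mathbf{v}',v_d) $ with $ v_d>0 $, a decomposition introduced here only for illustration), as a propagation equation in the normal variable:
\begin{equation*}
X_{\zeta,j}\,\sigma=F
\quad\Longleftrightarrow\quad
\partial_{x_d}\,\sigma=\inv{v_d}\big(\partial_t-\mathbf{v}'\cdot\nabla_y\big)\,\sigma-\inv{v_d}\,F,
\end{equation*}
so that $ x_d $ plays the role of the evolution variable, and the boundary values at $ x_d=0 $ play the role of the initial data.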
This leads to the following boundary conditions under study, for $ j=1,3 $, \begin{subequations}\label{eq ana bord sigma} \begin{align}\label{eq ana bord sigma phi psu} \color{altred}\big(\sigma_{\phi,j}^n\big)_{|x_d=0}&\color{altred}=(e_{\phi,j}\cdot r_{\phi,j})\,a^n_{\phi}+g^n_{\phi,j},\qquad \color{altorange2}\big(\sigma_{\psi,j}^n\big)_{|x_d=0}=(e_{\psi,j}\cdot r_{\psi,j})\,a^n_{\psi}+g^n_{\psi,j},\\[5pt]\label{eq ana bord sigma nu} \color{altgreen}\big(\sigma_{\nu,j}^n\big)_{|x_d=0}&\color{altgreen}=g^n_{\nu,j}, \end{align} \end{subequations} where, for $ \zeta=\phi,\psi $, we have denoted by $ a^n_{\zeta} $ the function on $ \omega_T\times\T $ defined as \begin{equation*} a^n_{\zeta}(z,\Theta):=\sum_{\lambda\in\Z^*}a^n_{\zeta,\lambda}(z)\,e^{i\lambda\,\Theta}. \end{equation*} Finally, the equations for the boundary terms $ a^n_{\phi} $ and $ a^n_{\psi} $ are the same as for the first simplified model, namely, \begin{subequations}\label{eq ana a} \begin{align}\label{eq ana a phi} \color{altred}X^{\Lop}_{\phi}\,a_{\phi}^n+D^{\Lop}_{\phi}\,\partial_{\Theta}\big(a^1_{\phi}\,a^n_{\phi}\big) +\mathbf{w}_{\phi}\,\mathbb{F}_{\phi}^{\per}\big[\partial_{\Theta}\,a^1_{\phi},\partial_{\Theta}\,a^n_{\phi}\big]\color{altred}&\color{altred}=H^n_{\phi}+K^{\Lop}_{\phi}\sum_{k=1}^{n-1}\partial_{\Theta}\big(a^k_{\phi}\,a^{n-k}_{\psi}\big),\\\label{eq ana a psi} \color{altorange2}X^{\Lop}_{\psi}\,a_{\psi}^n+D^{\Lop}_{\psi}\,\partial_{\Theta}\big(a^1_{\psi}\,a^n_{\psi}\big) +\mathbf{w}_{\psi}\,\mathbb{F}_{\psi}^{\per}\big[\partial_{\Theta}\,a^1_{\psi},\partial_{\Theta}\,a^n_{\psi}\big]&\color{altorange2}=H^n_{\psi}+K^{\Lop}_{\psi}\sum_{k=1}^{n-1}\partial_{\Theta}\big(a^k_{\phi}\,a^{n-k}_{\psi}\big), \end{align} \end{subequations} coupled with the initial conditions \begin{equation}\label{eq ana a init} \color{altred}\big(a_{\phi}^n\big)_{|t= 0}=0,\qquad \color{altorange2}\big(a_{\psi}^n\big)_{|t= 0}=0.
\end{equation} The strategy to solve the above system of equations \eqref{eq ana sigma}, \eqref{eq ana bord sigma}, \eqref{eq ana a} and \eqref{eq ana a init} is to apply a Cauchy-Kovalevskaya theorem such as Theorem \ref{theorem Cauchy-Kovalevskaya} to the interior system \eqref{eq ana sigma}, \eqref{eq ana bord sigma}, seen as a propagation equation in the normal variable. In order to do that, we need the boundary terms in \eqref{eq ana bord sigma} to be analytic with respect to all variables (even with respect to time). For the first simplified model, in Proposition \ref{prop WP bord CK}, we obtained only continuity with respect to time. Therefore we need to refine this result to obtain analyticity with respect to all variables. In the next part we define the functional spaces which will be used for this purpose. \subsection{Additional functional framework} \subsubsection{Functional spaces} We define two different types of spaces, all of which are spaces of functions defined on $ \omega_T\times\T $, analytic with respect to all variables $ (t,y,\Theta) $. The first ones, which will be denoted by $ E_{\rho} $ and $ \E_{\rho} $, will be used to solve the boundary equations \eqref{eq ana a}-\eqref{eq ana a init}, which will be viewed as a fixed point problem in $ \E_{\rho} $. The second ones, denoted by $ X_{r} $ and $ \X_{r} $, are the ones fitted to the interior system \eqref{eq ana sigma}-\eqref{eq ana bord sigma}, where equation \eqref{eq ana sigma} will be seen as a differential equation with values in $ \X_r $. Features and relations of these spaces are summarized in Figure \ref{figure fct spaces}. In addition to defining the functional spaces, we have to describe the action of differentiation on them, and to prove that every function of $ \E_{\rho} $ is in $ \X_r $. The previously introduced spaces $ Y_s $, $ s\in(0,1) $, are used to define the spaces $ E_{\rho} $, $ \rho\in(0,1) $.
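As a simple illustrative example (not used in the sequel), a single harmonic with a Gaussian profile lies in every space of the scale $ (Y_s)_{s\in(0,1)} $: for $ u(y,\Theta)=e^{-|y|^2}\,e^{i\Theta} $, the only nonzero Fourier coefficient is $ u_1(y)=e^{-|y|^2} $, whose Fourier transform is again a Gaussian and thus decays faster than any exponential, so that, for every $ s\in(0,1) $,
\begin{equation*}
\norme{u}_{Y_s}^2=\int_{\R^{d-1}}e^{2s|(1,\xi)|}\prodscalbis{(1,\xi)}^{2d^*}\big|\hat{u_1}(\xi)\big|^2\,d\xi<+\infty.
\end{equation*}
By contrast, a profile whose Fourier transform decays only polynomially belongs to Sobolev-type spaces but to no space $ Y_s $ with $ s>0 $.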
If $ I\subset \R $ is an interval and $ E $ a Banach space, we denote by $ \mathcal{C}^{\omega}(I,E) $ the space of analytic functions from $ I $ to $ E $. \begin{definition} For $ \rho\in(0,1) $, the space $ \tilde{E}_{\rho} $ is defined as \begin{equation*} \tilde{E}_{\rho}:=\bigcap_{s\in(0,1)}\mathcal{C}^{\omega}\Big(\big(-\rho(1-s),\rho(1-s)\big),Y_s\Big). \end{equation*} \end{definition} In the next definition we use the \emph{Catalan numbers} (see \cite{Comtet1974Combinatorics}), defined, for $ n\geq 0 $, by \begin{equation*} \mathfrak{C}_n:=\inv{n+1}\binom{2n}{n}. \end{equation*} They satisfy, for $ n\geq 0 $, \begin{equation}\label{eq ana relation catalan} \sum_{i=0}^n\mathfrak{C}_i\,\mathfrak{C}_{n-i}=\mathfrak{C}_{n+1}. \end{equation} The Catalan numbers appear in the power series expansion of $ x\mapsto (1-x)^{-1/2} $: \begin{equation}\label{eq ana power series 1/sqrt(1-x)} \inv{\sqrt{1-x}}=\sum_{n\geq 0}\inv{n!}\frac{(n+1)!\,\mathfrak{C}_n}{4^n}\,x^n,\quad \forall |x|<1. \end{equation} The next definition is inspired by the method of majorant series (see for example \cite[Chapter II]{John1991PDE}), since, in this formalism, a function is required to admit a dilation of $ x\mapsto (1-x)^{-1/2} $ as a majorant series. \begin{definition}\label{def E rho} For $ \rho\in(0,1) $, the space $ E_{\rho} $ is defined as the set of functions $ a $ of $ \tilde{E}_{\rho} $ for which there exists $ M>0 $ such that for all $ s\in(0,1) $ and $ \nu\in\mathbb{N} $, \begin{equation}\label{eq ana est norme a} \norme{\partial_t^{\nu} a(0)}_{Y_s}\leq \frac{M}{(1-s)^{\nu+1}}\frac{(\nu+1)!\,\mathfrak{C}_{\nu}}{(4\rho)^{\nu}}. \end{equation} The infimum of all $ M $ satisfying condition \eqref{eq ana est norme a} is denoted by $ \normetriple{a}_{E_{\rho}} $.
\end{definition} If $ a $ is in $ E_{\rho} $ for some $ \rho\in(0,1) $, then, for $ s\in(0,1) $ and $ |t|<\rho(1-s) $, by expanding $ a $ in power series with respect to $ t $ at $ 0 $, using estimate \eqref{eq ana est norme a} and the power series expansion \eqref{eq ana power series 1/sqrt(1-x)} of $ x\mapsto(1-x)^{-1/2} $, we get \begin{equation*} \norme{a(t)}_{Y_s}\leq \frac{\normetriple{a}_{E_{\rho}}}{1-s}\left(1-\frac{|t|}{\rho(1-s)}\right)^{-1/2}. \end{equation*} We recover here the formulation of \cite{BaouendiGoulaouic1978Nishida}. Since we work with pairs of sequences of functions, we define a space accordingly, and we specify the norm used on the product space. \begin{definition}\label{def E rho suite} For $ \rho\in(0,1) $, the space $ \E_{\rho} $ is defined as the set of sequences $ \mathbf{a}=\big(a_n\big)_{n\geq 1} $ of elements of $ E_{\rho} $ such that \begin{equation*} \normetriple{\mathbf{a}}^2_{\E_{\rho}}:=\sum_{n\geq 1}e^{2\,\rho\, n}\prodscalbis{n}^{2d^*}\normetriple{a_n}^2_{E_{\rho}}<+\infty. \end{equation*} For $ \rho\in(0,1) $, the norm on the product space $ \E_{\rho}^2 $ is defined, for $ (\a,\b)\in\E_{\rho}^2 $, as \begin{equation*} \normetriple{\big(\a,\b\big)}_{\E_{\rho}^2}^2:=\normetriple{\a}_{\E_{\rho}}^2+\normetriple{\b}_{\E_{\rho}}^2. \end{equation*} \end{definition} The spaces $ E_{\rho} $ and $ \E_{\rho} $ are not normed algebras, nor do they satisfy a derivation property such as \eqref{eq ana est dt dtheta Ys}. Indeed, for a function $ a $ of $ E_{\rho} $ with $ \rho\in(0,1) $, we have, by Lemma \ref{lemma ana dy dtheta Y s}, for $ 0<s'<s<1 $, \begin{equation*} \norme{\partial_t^{\nu} \partial_{\Theta}\,a(0)}_{Y_{s'}}\leq \inv{s-s'} \norme{\partial_t^{\nu} a(0)}_{Y_s} \leq \inv{s-s'}\frac{\normetriple{a}_{E_{\rho}}}{(1-s)^{\nu+1}}\frac{(\nu+1)!\,\mathfrak{C}_{\nu}}{(4\rho)^{\nu}}.
\end{equation*} To recover an estimate of the form \eqref{eq ana est norme a}, it seems that we would need the existence of $ C>0 $ such that for all $ 0<s'<s<1 $ and $ \nu\geq 0 $, \begin{equation*} \inv{(s-s')(1-s)^{\nu+1}}\leq \frac{C}{(1-s')^{\nu+1}}, \end{equation*} which is false. However, as we shall see later, estimating $ t\mapsto \int_0^t \partial_{\Theta}\,a(s)\,ds $ instead of $ \partial_{\Theta}\,a $ solves the problem. This is what is referred to as \emph{regularization by integration in time}, see \cite{Ukai2001CauchyKovalevskaya, Morisse2020Elliptic}. These spaces seem well suited to prove existence of solutions to the boundary system \eqref{eq ana a} that are analytic with respect to all variables, but the absence of the above-mentioned properties prevents us from applying a Cauchy-Kovalevskaya theorem in these spaces for the interior system. This is why we need to define other, more appropriate spaces. The spaces for the interior equations are spaces in the $ (t,y,\Theta) $ variables, since the interior equations will be seen as propagation equations in $ x_d $, valued in these spaces. In the following, $ H^{d^*} $ denotes the Sobolev space $ H^{d^*}(\R^{d-1}_y\times\T_{\Theta}) $ of regularity $ d^* $. Recall that $ d^* $ has been chosen such that $ d^*>\tilde{m}_0+2+(d+1)/2 $, where $ \tilde{m}_0 $ is the nonnegative real number of Lemma \ref{lemma hyp petit diviseurs}. The next definition is based on the classical way to characterize analytic functions. For a $ (d+1) $-tuple $ \alpha=(\alpha_0,\dots,\alpha_d)\in\mathbb{N}^{d+1} $, the notation $ \alpha! $ refers to $ \alpha!:=\alpha_0!\cdots \alpha_d! $. \begin{definition} Consider $ \rho\in(0,1) $.
For $ r\in(0,1) $, the space $ X_{r} $ is defined as the set of smooth functions $ a $ of $ (t,y,\Theta)\in[-\rho/2,\rho/2]\times\R^{d-1}\times\T $ with values in $ \C $ such that there exists $ M>0 $ such that for every $ \alpha $ in $ \mathbb{N}^{d+1} $, \begin{equation*} \norme{\partial_{t,y,\Theta}^{\alpha}a(0,.,.)}_{H^{d^*}}\leq \frac{M\alpha!}{r^{|\alpha|}\,(|\alpha|^{2d+1}+1)}. \end{equation*} The infimum of such $ M>0 $ is denoted by $ \normetriple{a}_{X_r} $. \end{definition} Note that in the previous definition, the space $ X_{r} $ depends on the fixed constant $ \rho\in(0,1) $, but we choose not to include this dependence in the notation since, in the following, $ \rho $ will be fixed. The time interval of the form $ [-\rho/2,\rho/2] $ is required because, in the following, functions of $ X_r $ will come from functions of $ E_{\rho} $, which are defined on the time intervals $ \big(-\rho(1-s),\rho(1-s)\big) $ for $ s\in(0,1) $; we arbitrarily choose $ s=1/2 $. Analogously to $ E_{\rho} $, we define a space for sequences of $ X_{r} $. \begin{definition} For $ r\in(0,1) $, the space $ \X_{r} $ is defined as the set of sequences $ \mathbf{a}=\big(a_n\big)_{n\in\mathbb{N}} $ of $ X_{r} $ such that \begin{equation*} \normetriple{\mathbf{a}}^2_{\X_{r}}:=\sum_{n\geq 1}e^{2\,r\, n}\prodscalbis{n}^{2d^*}\normetriple{a_n}^2_{X_{r}}<+\infty. \end{equation*} For $ r\in(0,1) $, the norm on the product space $ \X_r^6 $ is defined, for $ \a=(\a_1,\dots,\a_6)\in\X_r^6 $, as \begin{equation*} \normetriple{\a}_{\X_r^6}^2:=\normetriple{\a_1}_{\X_r}^2+\cdots+\normetriple{\a_6}_{\X_r}^2.
\end{equation*} \end{definition} \begin{figure} \begin{tikzpicture} \draw[fill=black!5,black!5] (-4.5,2) rectangle (9.5,3.4); \draw[fill=black!5,black!5] (-4.5,0.3) rectangle (9.5,1.7); \draw[fill=black!5,black!5] (-4.5,-1.4) rectangle (9.5,0); \draw[fill=black!5,black!5] (-4.5,-3.1) rectangle (9.5,-1.7); \node[draw,circle,fill=altred!30] (YY) at (-3,-0.7) {$ \Y_s $}; \node[draw,circle,altred,thick,text=black,fill=white] (E) at (-0.1,1) {$ E_{\rho} $}; \node[draw,circle,altred,thick,text=black,fill=white] (EE) at (-0.1,2.7) {$ \E_{\rho} $}; \node [draw,circle,fill=altred!30](X) at (3.1,1) {$ X_r $}; \node[draw,circle,fill=altred!30] (XX) at (3.1,2.7) {$ \X_r $}; \node[draw,rounded corners=3pt,fill=white] (LL2) at (7,-0.7) {$ L^2(\R_{y}^{d-1}\times\T_{\Theta})^{\mathbb{N}} $}; \node[draw,rounded corners=3pt,fill=white] (L2) at (7,-2.4) {$ L^2(\R_{y}^{d-1}\times\T_{\Theta}) $}; \node[draw,rounded corners=3pt,fill=white] (CL2) at (7,1) {$ \mathcal{C}\big(I_t,L^2(\R_{y}^{d-1}\times\T_{\Theta})\big) $}; \node[draw,rounded corners=3pt,fill=white] (CCL2) at (7,2.7) {$ \mathcal{C}\big(I_t,L^2(\R_{y}^{d-1}\times\T_{\Theta})\big)^{\mathbb{N}} $}; \draw[align=center](-3,-4.1) node{\small Spaces for\\\small first study\\\small model}; \draw[align=center](3,-4.1) node{\small Spaces for\\\small interior\\\small equations}; \draw[align=center](0,-4.1) node{\small Spaces for\\\small boundary\\\small equations}; \draw[align=center](7,-4.1) node{\small Underlying\\\small spaces}; \draw[black!50] (4.5,-4.9) -- (4.5,3.6); \draw[dashed,black!50] (1.5,-4.9) -- (1.5,3.6); \draw[dashed,black!50] (-1.5,-4.9) -- (-1.5,3.6); \node[draw,circle,fill=altred!30] (Y) at (-1.5,-2.4) {$ Y_s $}; \draw[->,>=latex] (EE) to[bend right=15] (XX); \draw[->,>=latex] (E) to[bend left=15] (X); \draw[->,>=latex,dashed] (Y) to (E); \draw[->,>=latex,dashed] (E) to (EE); \draw[->,>=latex,dashed] (X) to (XX); \draw[->,>=latex,dashed] (Y) to (YY); \node[rounded corners,fill=black!20,align=center] (A) at (1.5,1.85) 
{\tiny with some \\\tiny $ r\in(0,1) $\\\tiny depending \\\tiny on $ \rho $}; \begin{scope}[shift={(-2,-1.9)}] \node[draw,circle,altred,thick,text=black] (Z) at (-0.2,-5.7) {\small $ Z $}; \node [draw,circle,fill=altred!30](Z) at (-0.2,-4.7) {\small $ Z $}; \draw[->,>=latex] (5,-4.7) to[bend left=15] (7,-4.7); \draw[->,>=latex,dashed] (5,-5.7) to (7,-5.7); \draw[right,align=left] (0.5,-5.7) node{\footnotesize regularization by\\[-2pt] \footnotesize integration in time}; \draw[right,align=left] (0.5,-4.7) node{\footnotesize derivation property\\[-2pt] \footnotesize and algebra}; \draw[right,align=left] (7.5,-4.7) node{\footnotesize continuous injection}; \draw[right,align=left] (7.5,-5.7) node{\footnotesize is used to define}; \draw (4.5,-3.9) node{\small \textbf{Legend}}; \draw[rounded corners,thick,black!50] (-0.9, -6.4) rectangle (10.8, -3.4) {}; \end{scope} \end{tikzpicture} \caption{Features of functional spaces and links between them} \label{figure fct spaces} \end{figure} The following result asserts that, for every $ \rho\in(0,1) $, there exists $ r\in(0,1) $ such that space $ \E_{\rho} $ is continuously injected in $ \X_r $, with a constant independent of $ \rho $. The proof is recalled here for the sake of clarity. \begin{lemma}\label{lemma Erho dans Xr} There exists $ C>0 $, such that, for $ \rho\in(0,1) $, if $ \a $ is in $ \E_{\rho} $ then there exists $ r\in(0,1) $ such that $ \a $ belongs to $ \X_r $ (for the same $ \rho $) and, furthermore, \begin{equation*} \normetriple{\a}_{\X_r}\leq C\normetriple{\a}_{\E_{\rho}}. \end{equation*} \end{lemma} \begin{proof} Let $ \a=(a^n)_{n\geq 1} $ be in $ \E_{\rho} $ for some $ \rho\in(0,1) $. 
For $ \alpha=(\alpha_0,\alpha',\beta)=(\alpha_0,\alpha_1,\dots,\alpha_{d-1},\beta) $ in $ \mathbb{N}\times\mathbb{N}^{d-1}\times\mathbb{N} $, we have, for $ s\in(0,1) $ and $ n\geq 1 $, \begin{align*} \norme{\partial_{t,y,\Theta}^{\alpha}a^n(0,.,.)}^2_{H^{d^*}}&=\int_{\R^{d-1}}\sum_{\lambda\in\Z}\xi_1^{2\alpha_1}\cdots\xi_{d-1}^{2\alpha_{d-1}}\lambda^{2\beta}\prodscalbis{(\xi,\lambda)}^{2d^*}\left|\partial_t^{\alpha_0}\hat{a^n_{\lambda}}(0,\xi)\right|^2\,d\xi\\ &\leq\int_{\R^{d-1}}\sum_{\lambda\in\Z}\frac{\alpha'!^2\beta!^2}{s^{2(|\alpha'|+\beta)}}\,e^{2s|(\xi,\lambda)|}\prodscalbis{(\xi,\lambda)}^{2d^*}\left|\partial_t^{\alpha_0}\hat{a^n_{\lambda}}(0,\xi)\right|^2\,d\xi\\ &=\frac{\alpha'!^2\beta!^2}{s^{2(|\alpha'|+\beta)}}\norme{\partial_t^{\alpha_0}a^n(0)}^2_{Y_s}, \end{align*} using the inequality \begin{equation*} \frac{(s\xi)^{\alpha'}(s\lambda)^{\beta}}{\alpha'!\beta!}\leq e^{s|(\xi,\lambda)|}. \end{equation*} Since $ a^n $ is in $ E_{\rho} $ (because $ \a $ is in $ \E_{\rho} $), we therefore have \begin{equation*} \norme{\partial_{t,y,\Theta}^{\alpha}a^n(0,.,.)}_{H^{d^*}}\leq \frac{\alpha'!\beta!}{s^{|\alpha'|+\beta}} \frac{\normetriple{a^n}_{E_{\rho}}\,(\alpha_0+1)!\,\mathfrak{C}_{\alpha_0}}{(1-s)^{\alpha_0+1}\,(4\rho)^{\alpha_0}}\leq \frac{\alpha'!\beta!}{s^{|\alpha'|+\beta}} \frac{\normetriple{a^n}_{E_{\rho}}\,\alpha_0!}{(1-s)^{\alpha_0+1}\,\rho^{\alpha_0}}, \end{equation*} using $ (\alpha_0+1)\,\mathfrak{C}_{\alpha_0}=\binom{2\alpha_0}{\alpha_0}\leq 4^{\alpha_0} $. Finally, if $ s\leq \min(\rho/3,2/3) $, then $ (1-s)^{-(\alpha_0+1)}\leq 3^{\alpha_0+1} $ and $ (3s)^{\alpha_0}\leq \rho^{\alpha_0} $, so that \begin{equation*} \norme{\partial_{t,y,\Theta}^{\alpha}a^n(0,.,.)}_{H^{d^*}}\leq \frac{3\,\normetriple{a^n}_{E_{\rho}}\,\alpha!}{s^{|\alpha|}}\leq \frac{C\,\normetriple{a^n}_{E_{\rho}}\,\alpha!}{s'^{|\alpha|}\,(|\alpha|^{2d+1}+1)}, \end{equation*} with $ s'<s $, because $ s'^{|\alpha|}\,(|\alpha|^{2d+1}+1)\leq Cs^{|\alpha|} $. Therefore, \begin{equation}\label{eq ana inter 3} \normetriple{a^n}_{X_{s'}}\leq C \normetriple{a^n}_{E_{\rho}}, \end{equation} with a constant $ C $ which does not depend on $ n\geq 1 $. Multiplying inequality \eqref{eq ana inter 3} by $ e^{2s'n}\prodscalbis{n}^{2d^*} $ and summing over $ n\geq 1 $ then leads to \begin{equation*} \normetriple{\a}_{\X_{s'}}\leq C \normetriple{\a}_{\E_{\rho}}, \end{equation*} so $ \a $ belongs to $ \X_r $ for any $ r<\min(\rho/3,2/3) $, which depends only on $ \rho $, concluding the proof. \end{proof} The following results state that partial derivatives with respect to $ t,y,\Theta $ act on $ \X_r $ in the same way as partial derivatives with respect to $ y,\Theta $ act on $ Y_s $, and that the spaces $ \X_r $ satisfy an algebra property. For the sake of completeness, we recall the proof of these classical results. \begin{lemma}\label{lemma ana dy dtheta Xr} There exists $ C>0 $ such that, for $ 0\leq r'<r\leq 1 $, for $ \a $ in $ \X_r $ and for $ e_j $ in $ \mathbb{N}^{d+1} $ with $ |e_j|=1 $, the function $ \partial_{t,y,\Theta}^{e_j}\a $ belongs to $ \X_{r'} $ and satisfies \begin{equation*} \normetriple{\partial_{t,y,\Theta}^{e_j}\a}_{\X_{r'}}\leq \frac{C}{r-r'}\,\normetriple{\a}_{\X_r}. \end{equation*} \end{lemma} \begin{proof} In the same way as for the previous Lemma \ref{lemma Erho dans Xr}, proving the estimate for the space $ X_r $ leads to the one for $ \X_r $ and the associated result, by multiplying by $ e^{2r'n}\prodscalbis{n}^{2d^*} $ and summing over $ n\geq 1 $. Let $ a $ be in $ X_r $. Without loss of generality, we make the proof for $ e_j=e_0=(1,0,\dots,0) $.
For $ \alpha=(\alpha_0,\alpha',\beta) $ in $ \mathbb{N}\times\mathbb{N}^{d-1}\times\mathbb{N} $, we have, by definition of the $ X_r $-norm, \begin{align*} \norme{\partial_{t,y,\Theta}^{\alpha}\partial_{t,y,\Theta}^{e_0}a(0,.,.)}_{H^{d^*}}&\leq\normetriple{a}_{X_r}\frac{(\alpha+e_0)!}{r^{|\alpha|+1}\,(|\alpha+e_0|^{2d+1}+1)}\\ &=\normetriple{a}_{X_r}\frac{\alpha!}{(r')^{|\alpha|}\,(|\alpha|^{2d+1}+1)}\frac{(|\alpha|^{2d+1}+1)}{(|\alpha+e_0|^{2d+1}+1)}(\alpha_0+1)\frac{(r')^{|\alpha|}}{r^{|\alpha|+1}}. \end{align*} Since $ (|\alpha|^{2d+1}+1)/(|\alpha+e_0|^{2d+1}+1) $ is bounded uniformly with respect to $ \alpha\in\mathbb{N}^{d+1} $ and since \begin{equation*} (\alpha_0+1)\frac{(r')^{|\alpha|}}{r^{|\alpha|+1}}\leq \big(|\alpha|+1\big)\frac{(r')^{|\alpha|}}{r^{|\alpha|+1}}\leq \inv{r-r'}, \end{equation*} the result follows. \end{proof} \begin{lemma}\label{lemma Xr Banach algebra} For $ r\in(0,1) $, the spaces $ X_r $ and $ \X_r $ are Banach algebras (the latter for the convolution on sequences), up to a positive constant, that is, there exists $ C>0 $ (independent of $ r\in(0,1) $) such that for $ \a,\b $ in $ \X_r $, the function $ \a\b $ belongs to $ \X_r $ and we have \begin{equation*} \normetriple{\a\b}_{\X_r}\leq C\normetriple{\a}_{\X_r}\normetriple{\b}_{\X_r}, \end{equation*} and the analogous estimate holds for $ X_r $. \end{lemma} \begin{proof} We make the proof for the spaces $ X_r $, and the result for $ \X_r $ follows using the same arguments as in the proof of Lemma \ref{lemma Ys Banach algebra}. Let $ r $ be in $ (0,1) $ and consider $ a,b $ in $ X_r $. We need to show that there exists $ C>0 $ such that for all $ \alpha\in\mathbb{N}^{d+1} $, we have \begin{equation*} \norme{\partial_{t,y,\Theta}^{\alpha}\big(ab\big)(0,.,.)}_{H^{d^*}}\leq \frac{C\normetriple{a}_{X_r}\normetriple{b}_{X_r}\,\alpha!}{r^{|\alpha|}\,(|\alpha|^{2d+1}+1)}. \end{equation*} So consider $ \alpha\in\mathbb{N}^{d+1} $.
Since $ d^*>d/2 $, the space $ H^{d^*} $ is an algebra, so we have \begin{align*} \norme{\partial_{t,y,\Theta}^{\alpha}\big(ab\big)(0,.,.)}_{H^{d^*}}&\leq C\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}\norme{\partial_{t,y,\Theta}^{\beta}a(0,.,.)}_{H^{d^*}}\norme{\partial_{t,y,\Theta}^{\alpha-\beta}b(0,.,.)}_{H^{d^*}}\\ &\leq C\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}\frac{\normetriple{a}_{X_r}\,\beta!}{r^{|\beta|}\,(|\beta|^{2d+1}+1)}\frac{\normetriple{b}_{X_r}\,(\alpha-\beta)!}{r^{|\alpha|-|\beta|}\,(|\alpha-\beta|^{2d+1}+1)} \\ &=\frac{C\,\normetriple{a}_{X_r}\normetriple{b}_{X_r}\alpha!}{r^{|\alpha|}\,(|\alpha|^{2d+1}+1)}\sum_{\beta\leq\alpha}\frac{(|\alpha|^{2d+1}+1)}{(|\beta|^{2d+1}+1)\,(|\alpha-\beta|^{2d+1}+1)}, \end{align*} and the result follows since $ \sum_{\beta\leq\alpha}\frac{(|\alpha|^{2d+1}+1)}{(|\beta|^{2d+1}+1)\,(|\alpha-\beta|^{2d+1}+1)} $ is bounded uniformly with respect to $ \alpha\in\mathbb{N}^{d+1} $. \end{proof} We summarize the main features of the functional spaces introduced in this section in Figure \ref{figure fct spaces}. The concept of \emph{regularization by integration in time} is detailed below, in Lemma \ref{lemma ana est LBK E rho}. \subsubsection{Specifications on the simplified model and main result} In view of the functional spaces defined above, we can now make the system under study \eqref{eq ana sigma}-\eqref{eq ana bord sigma}-\eqref{eq ana a}-\eqref{eq ana a init} precise. The boundary terms $ g^n_{\zeta,j} $ appearing in \eqref{eq ana bord sigma} are taken such that, if we define $ \mathbf{g}_{\zeta,j}:=\big(g^n_{\zeta,j}\big)_{n\in\mathbb{N}} $, then the function $ \mathbf{g}_{\zeta,j} $ is in $ \X_1 $ for $ \zeta=\phi,\psi,\nu $ and $ j=1,3 $. Analogously, the source terms $ H^n_{\zeta} $ of the boundary equations \eqref{eq ana a} are taken such that, defining $ \mathbf{H}_{\zeta}:=\big(H^n_{\zeta}\big)_{n\geq 1} $, the sequence $ \mathbf{H}_{\zeta} $ is in $ \E_1 $ for $ \zeta=\phi,\psi $.
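The uniform boundedness of the convolution weights appearing in the proof of Lemma \ref{lemma Xr Banach algebra} (and again in Lemma \ref{lemma J est} below) can also be observed numerically. The following short script is a purely illustrative sanity check, not part of the argument: it evaluates, for $ d=1 $ (so $ \alpha\in\mathbb{N}^2 $ and $ 2d+1=3 $) and over an arbitrary truncation range, the supremum of $ \sum_{\beta\leq\alpha}\frac{|\alpha|^{2d+1}+1}{(|\beta|^{2d+1}+1)(|\alpha-\beta|^{2d+1}+1)} $.

```python
# Illustrative numerical check (not part of the proof) that, for d = 1, the
# convolution weights
#   S(alpha) = sum_{beta <= alpha} (|alpha|^p + 1) / ((|beta|^p + 1) * (|alpha - beta|^p + 1)),
# with p = 2d + 1 = 3 and alpha in N^2, stay uniformly bounded.
import itertools

def w(m, p):
    # weight |alpha|^p + 1 as a function of the length m = |alpha|
    return m**p + 1

p = 3
sup = 0.0
for a0, a1 in itertools.product(range(12), repeat=2):   # arbitrary truncation range
    total = sum(
        w(a0 + a1, p) / (w(b0 + b1, p) * w(a0 - b0 + a1 - b1, p))
        for b0 in range(a0 + 1)
        for b1 in range(a1 + 1)
    )
    sup = max(sup, total)
print(sup)  # about 9.05 in this range, attained at alpha = (2, 2)
```

In this range the supremum is about $ 9.05 $, consistent with the uniform bound used in the proofs.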
Once again, the assumption that $ \mathbf{H}_{\zeta} $ is analytic with respect to all its variables is stronger than requiring $ H $ and $ G $ to be in $ H^{\infty}(\R^d\times\T) $. We also denote by $ \gamma_0>0 $ a positive constant such that, for $ \zeta=\phi,\psi,\nu $ and $ j=1,3 $, \begin{subequations}\label{eq ana hyp toy model} \begin{equation}\label{eq ana hyp toy model v D int} |\mathbf{v}_{\zeta,j}|\leq \gamma_0,\quad \big|D_{\zeta,j}\big|\leq \gamma_0 \quad \text{and}\quad \big|K_{\zeta,j}\big|\leq \gamma_0, \end{equation} for $ \zeta=\phi,\psi $, \begin{equation}\label{eq ana hyp toy model v D bord} \big|\mathbf{v}^{\Lop}_{\zeta}\big|\leq \gamma_0,\qquad \big|D^{\Lop}_{\zeta}\big|\leq \gamma_0, \quad |\mathbf{w}_{\zeta}|\leq\gamma_0^{1/2}, \quad \text{and}\quad \big|K^{\Lop}_{\zeta}\big|\leq \gamma_0, \end{equation} for $ r\in(0,1) $ and for $ \sigma,\tau $ in $ X_r $, for $ \zeta_1,\zeta_2=\phi,\psi,\nu $ and $ j_1,j_2=1,3 $, \begin{equation}\label{eq ana hyp toy model J} \normetriple{\J_{\zeta_1,j_1}^{\zeta_2,j_2}\big[\sigma,\tau\big]}_{X_r}\leq \gamma_0\,\normetriple{\sigma}_{X_r}\normetriple{\tau}_{X_r}, \end{equation} and, for $ s\in(0,1) $, for $ u,v $ in $ Y_s $, and for $ \zeta=\phi,\psi $, \begin{equation}\label{eq ana hyp toy model F per} \norme{\mathbb{F}^{\per}_{\zeta}\big[\partial_{\Theta}\,u,\partial_{\Theta}\,v\big]}_{Y_s}\leq \gamma_0^{1/2}\,\norme{u}_{Y_s}\norme{v}_{Y_s}. \end{equation} \end{subequations} All these estimates rely on the fact that the scalars $ D_{\zeta,j} $, $ K_{\zeta,j} $, $ D^{\Lop}_{\zeta} $, $ \mathbf{w}_{\zeta} $ and $ K^{\Lop}_{\zeta} $, the vectors $ \mathbf{v}_{\zeta,j} $ and $ \mathbf{v}^{\Lop}_{\zeta} $ and the operators $ \mathbb{J}^{\zeta_1,j_1}_{\zeta_2,j_2} $ and $ \mathbb{F}^{\per}_{\zeta} $ are indexed by finite sets. Estimate \eqref{eq ana hyp toy model F per} is the result of Proposition \ref{prop F per semilin}, and \eqref{eq ana hyp toy model J} is the result of the following lemma.
\begin{lemma}\label{lemma J est} There exists a constant $ C>0 $ such that, for $ r\in(0,1) $, for $ \sigma,\tau $ in $ X_r $, $ \zeta_1,\zeta_2 $ in $ \ensemble{\phi,\psi} $ and $ j_1,j_2 $ in $ \ensemble{1,3} $, we have \begin{equation}\label{eq ana1 est lemma J} \normetriple{\J_{\zeta_1,j_1}^{\zeta_2,j_2}\big[\sigma,\tau\big]}_{X_r}\leq C\,\normetriple{\sigma}_{X_r}\normetriple{\tau}_{X_r}. \end{equation} \end{lemma} \begin{proof} Without loss of generality, we make the proof for $ \zeta_1=\zeta_2=\phi $ and $ j_1=j_2=1 $. Recall that $ \mathbb{J}^{\phi,1}_{\phi,1} $ is defined by \eqref{eq ana def J} as \begin{equation*} \J^{\phi,1}_{\phi,1}\big[\sigma,\tau\big]=J^{\phi,1}_{\phi,1}\sum_{\lambda\in\Z^*}\sigma_{\lambda}\tau_{\lambda}\,e^{i\lambda\Theta}. \end{equation*} For $ \alpha=(\alpha',\alpha_d) $ in $ \mathbb{N}^{d}\times\mathbb{N} $, we want to estimate in $ H^{d^*} $ the function \begin{equation}\label{eq ana inter 4} \partial_{t,y,\Theta}^{\alpha}\J^{\phi,1}_{\phi,1}\big[\sigma,\tau\big](0,.,.) =J^{\phi,1}_{\phi,1}\sum_{\lambda\in\Z^*}\sum_{\beta'\leq \alpha'}\binom{\alpha'}{\beta'}\big(i\lambda\big)^{\alpha_d}\partial_{t,y}^{\beta'}\sigma_{\lambda}(0,.)\,\partial_{t,y}^{\alpha'-\beta'}\tau_{\lambda}(0,.)\,e^{i\lambda\Theta}. \end{equation} We first prove an intermediate result.
For all functions $ u,v $ defined on $ \R^{d-1}\times\T $, whose Fourier series expansions read \begin{equation*} u(y,\Theta):=\sum_{\lambda\in\Z}u_{\lambda}(y)\,e^{i\lambda\Theta},\qquad v(y,\Theta):=\sum_{\lambda\in\Z}v_{\lambda}(y)\,e^{i\lambda\Theta}, \end{equation*} we have \begin{align}\nonumber &\norme{(y,\Theta)\mapsto \sum_{\lambda\in\Z} u_{\lambda}(y)v_{\lambda}(y)\,e^{i\lambda\Theta}}^2_{H^{d^*}} \\\nonumber &\qquad=\sum_{\lambda\in\Z}\int_{\R^{d-1}}\prodscalbis{(\lambda,\xi)}^{2d^*}\left|\int_{\R^{d-1}}\hat{u_{\lambda}}(\eta)\,\hat{v_{\lambda}}(\xi-\eta)\,d\eta\right|^2d\xi\\\nonumber &\qquad\leq\sum_{\lambda\in\Z}\int_{\R^{d-1}}\left(\int_{\R^{d-1}}\frac{\prodscalbis{(\lambda,\xi)}^{2d^*}}{\prodscalbis{(\lambda,\eta)}^{2d^*}\prodscalbis{(\lambda,\xi-\eta)}^{2d^*}}\,d\eta\right)\\\nonumber &\qquad\quad\times\int_{\R^{d-1}}\prodscalbis{(\lambda,\eta)}^{2d^*}\big|\hat{u_{\lambda}}(\eta)\big|^2\,\prodscalbis{(\lambda,\xi-\eta)}^{2d^*}\big|\hat{v_{\lambda}}(\xi-\eta)\big|^2\,d\eta\,d\xi \\\label{eq ana inter 5} &\qquad\leq C\norme{(y,\Theta)\mapsto \sum_{\lambda\in\Z} u_{\lambda}(y)\,e^{i\lambda\Theta}}^2_{H^{d^*}}\norme{(y,\Theta)\mapsto \sum_{\lambda\in\Z} v_{\lambda}(y)\,e^{i\lambda\Theta}}^2_{H^{d^*}}, \end{align} with a constant $ C>0 $ independent of $ u $ and $ v $, since $ \int_{\R^{d-1}}\frac{\prodscalbis{(\lambda,\xi)}^{2d^*}}{\prodscalbis{(\lambda,\eta)}^{2d^*}\prodscalbis{(\lambda,\xi-\eta)}^{2d^*}}\,d\eta $ is bounded uniformly with respect to $ (\lambda,\xi) $.
We have, according to \eqref{eq ana inter 4}, by the triangle inequality, \begin{multline}\label{eq ana inter 6} \norme{\partial_{t,y,\Theta}^{\alpha}\J^{\phi,1}_{\phi,1}\big[\sigma,\tau\big](0,.,.)}_{H^{d^*}}\\ \qquad\leq \big|J^{\phi,1}_{\phi,1}\big|\sum_{\beta'\leq \alpha'}\binom{\alpha'}{\beta'} \norme{(y,\Theta)\mapsto \sum_{\lambda\in\Z^*}\big(i\lambda\big)^{\alpha_d}\partial_{t,y}^{\beta'}\sigma_{\lambda}(0,.)\,\partial_{t,y}^{\alpha'-\beta'}\tau_{\lambda}(0,.)\,e^{i\lambda\Theta}}_{H^{d^*}}, \end{multline} so, applying inequality \eqref{eq ana inter 5} (after taking square roots) to each term of the sum in \eqref{eq ana inter 6}, we get \begin{align*} &\norme{\partial_{t,y,\Theta}^{\alpha}\J^{\phi,1}_{\phi,1}\big[\sigma,\tau\big](0,.,.)}_{H^{d^*}}\\ &\qquad\leq C\,\big|J^{\phi,1}_{\phi,1}\big|\sum_{\beta'\leq \alpha'}\binom{\alpha'}{\beta'}\norme{(y,\Theta)\mapsto \sum_{\lambda\in\Z} \big(i\lambda\big)^{\alpha_d}\partial_{t,y}^{\beta'}\sigma_{\lambda}(0,y)\,e^{i\lambda\Theta}}_{H^{d^*}}\\ &\qquad\quad\times\norme{(y,\Theta)\mapsto \sum_{\lambda\in\Z} \partial_{t,y}^{\alpha'-\beta'}\tau_{\lambda}(0,y)\,e^{i\lambda\Theta}}_{H^{d^*}}\\ &\qquad= C\,\big|J^{\phi,1}_{\phi,1}\big|\sum_{\beta'\leq \alpha'}\binom{\alpha'}{\beta'}\norme{\partial_{t,y,\Theta}^{(\beta',\alpha_d)}\sigma(0,.,.)}_{H^{d^*}}\norme{ \partial_{t,y,\Theta}^{(\alpha'-\beta',0)}\tau(0,.,.)}_{H^{d^*}}. \end{align*} Therefore, by definition of the $ X_r $-norm, with a constant $ C $ absorbing $ \big|J^{\phi,1}_{\phi,1}\big| $, \begin{align*} &\norme{\partial_{t,y,\Theta}^{\alpha}\J^{\phi,1}_{\phi,1}\big[\sigma,\tau\big](0,.,.)}_{H^{d^*}}\\ &\qquad\leq C\sum_{\beta'\leq\alpha'}\binom{\alpha'}{\beta'}\frac{\normetriple{\sigma}_{X_r}\,\beta'!\alpha_d!}{r^{|\beta'|+\alpha_d}\,(|(\beta',\alpha_d)|^{2d+1}+1)}\frac{\normetriple{\tau}_{X_r}\,(\alpha'-\beta')!}{r^{|\alpha'|-|\beta'|}\,(|\alpha'-\beta'|^{2d+1}+1)} \\[5pt] &\qquad=\frac{C\,\normetriple{\sigma}_{X_r}\normetriple{\tau}_{X_r}\alpha!}{r^{|\alpha|}\,(|\alpha|^{2d+1}+1)}\sum_{\beta'\leq\alpha'}\frac{(|\alpha|^{2d+1}+1)}{(|(\beta',\alpha_d)|^{2d+1}+1)\,(|\alpha'-\beta'|^{2d+1}+1)}, \end{align*} and the result follows since $ \sum_{\beta'\leq\alpha'}\frac{(|\alpha|^{2d+1}+1)}{(|(\beta',\alpha_d)|^{2d+1}+1)\,(|\alpha'-\beta'|^{2d+1}+1)} $ is bounded uniformly with respect to $ \alpha\in\mathbb{N}^{d+1} $. \end{proof} The rest of the section is devoted to the proof of the following existence and uniqueness result. \begin{theorem}\label{theorem existence ana} For every $ M_0,M_1>0 $, there exist $ 0<r_1<1 $ and $ \delta>0 $ such that for every $ \mathbf{g} $ in $ \X^6_{r_1} $ and $ \mathbf{H} $ in $ \E^2_1 $ satisfying $ \normetriple{\mathbf{g}}_{\X^6_{r_1}}<M_1 $ and $ \normetriple{\mathbf{H}}_{\E_1^2}<M_0 $, the system of equations \eqref{eq ana sigma}, \eqref{eq ana bord sigma}, \eqref{eq ana a} and \eqref{eq ana a init} admits a unique solution $ (\boldsymbol{\sigma},\a) $, with $ \boldsymbol{\sigma} $ in $ \mathcal{C}^1\big((-\delta(r_1-r),\delta(r_1-r)),\X_r^6\big) $ for each $ 0<r<r_1 $ and $ \a $ in $ \X^2_{r_1} $, where we have denoted $ \boldsymbol{\sigma}:=\big(\sigma^n_{\phi,1},\sigma_{\phi,3}^n,\sigma^n_{\psi,1},\sigma^n_{\psi,3},\sigma_{\nu,1}^n,\sigma_{\nu,3}^n\big)_{n\geq1} $, $ \mathbf{g}:=\big(\mathbf{g}_{\phi,1},\mathbf{g}_{\phi,3},\mathbf{g}_{\psi,1},\mathbf{g}_{\psi,3},\mathbf{g}_{\nu,1},\mathbf{g}_{\nu,3}\big) $ and $ \mathbf{H}:=\big(\mathbf{H}_{\phi},\mathbf{H}_{\psi}\big) $.
\end{theorem} To prove this result, we start by proving existence for the boundary system \eqref{eq ana a}-\eqref{eq ana a init}, with the Banach fixed point theorem applied in a closed ball of $ \E^2_{\rho} $ for some $ \rho\in(0,1) $. We therefore obtain a solution $ \a $ of \eqref{eq ana a}-\eqref{eq ana a init} which is analytic with respect to both space and time. The strategy is to write equations \eqref{eq ana a} as a fixed point problem, through the change of variables $ \b:=\partial_t\,\a $, to obtain a problem similar to the one of \cite{BaouendiGoulaouic1978Nishida}. This allows us to prove that the operator at stake is a contraction, using the phenomenon of regularization by integration in time. Then we proceed with the existence of a solution for the interior system \eqref{eq ana sigma}-\eqref{eq ana bord sigma}, by applying a classical Cauchy-Kovalevskaya result in $ \X_r^6 $ for some $ r\in(0,1) $. For this purpose, equations \eqref{eq ana sigma} will be seen as propagation equations in the normal variable. Verifying the assumptions of Theorem \ref{theorem Cauchy-Kovalevskaya} for the interior equations presents no difficulty. Finally, Lemma \ref{lemma Erho dans Xr} will be used to assert that the obtained solution $ \a $ of \eqref{eq ana a}-\eqref{eq ana a init} in $ \E_{\rho}^2 $ is actually in $ \X_{r}^2 $ for some $ r\in(0,1) $. \subsection{Time analyticity on the boundary and Cauchy-Kovalevskaya theorem for incoming equations} \subsubsection{Existence and time analyticity for boundary equations} This part is devoted to solving the boundary system \eqref{eq ana a}-\eqref{eq ana a init}. The goal is to obtain solutions which are analytic not only with respect to $ (y,\Theta) $ but also with respect to time.
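The estimates of this part repeatedly use the Catalan-number identities \eqref{eq ana relation catalan} and \eqref{eq ana power series 1/sqrt(1-x)}. As a purely illustrative numerical check, outside of the proofs, both identities can be verified in a few lines:

```python
# Illustrative numerical check of the two Catalan-number identities:
#   (i)  sum_{i=0}^{n} C_i C_{n-i} = C_{n+1},
#   (ii) the n-th Taylor coefficient of (1-x)^{-1/2} equals
#        (n+1)! C_n / (n! 4^n) = binom(2n, n) / 4^n.
from math import comb, isclose

C = [comb(2 * n, n) // (n + 1) for n in range(25)]    # Catalan numbers

for n in range(24):                                   # identity (i)
    assert sum(C[i] * C[n - i] for i in range(n + 1)) == C[n + 1]

x = 0.3                                               # identity (ii), at a point |x| < 1
series = sum(comb(2 * n, n) / 4**n * x**n for n in range(80))
assert isclose(series, (1 - x) ** -0.5, rel_tol=1e-12)
print("Catalan identities verified")
```

The truncation at $ 80 $ terms and the evaluation point $ x=0.3 $ are arbitrary choices; any $ |x|<1 $ works, with slower convergence as $ |x|\to 1 $.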
In the same way as for the first simplified model, system \eqref{eq ana a}-\eqref{eq ana a init} can be written in the form \begin{equation}\label{eq ana dt a} \begin{cases} \partial_t \,\mathbf{a} = L \,\mathbf{a} -\partial_{\Theta}\,\mathbf{D}^{\Lop}\big(\a,\a\big) - \mathbb{F}^{\per}\big(\partial_{\Theta}\,\a,\partial_{\Theta}\,\a\big)+\mathbf{H}+\mathbf{K}^{\Lop}\big(\a,\a\big),\\[5pt] \a(0)=0, \end{cases} \end{equation} where $ \a:=(\a_{\phi},\a_{\psi}):=\big(a^n_{\phi},a^n_{\psi}\big)_{n\geq 1} $, $ \mathbf{H}:=\big(\mathbf{H}_{\phi},\mathbf{H}_{\psi}\big) $, and, if $ \mathbf{c}:=\big(c^n_{\phi},c^n_{\psi}\big)_{n\geq 1} $, \begin{align*} L\,\a&:=\big(\mathbf{v}_{\phi}^{\Lop}\cdot\nabla_y\,a^n_{\phi},\mathbf{v}_{\psi}^{\Lop}\cdot\nabla_y\,a^n_{\psi}\big)_{n\geq 1}, \\[5pt] \mathbf{D}^{\Lop}(\a,\mathbf{c})&:=\big(D^{\Lop}_{\phi}\,a^1_{\phi}\,c^n_{\phi}, D^{\Lop}_{\psi}\,a^1_{\psi}\,c^n_{\psi}\big)_{n\geq 1}, \\[5pt] \mathbb{F}^{\per}(\a,\mathbf{c})&:=\big(\mathbf{w}_{\phi}\,\mathbb{F}^{\per}_{\phi}[a^1_{\phi},c^{n}_{\phi}], \mathbf{w}_{\psi}\,\mathbb{F}^{\per}_{\psi}[a^1_{\psi},c^{n}_{\psi}]\big)_{n\geq 1}, \\[5pt] \mathbf{K}^{\Lop}(\a,\mathbf{c})&:=\Big(K^{\Lop}_{\phi}\sum_{k=1}^{n-1}\partial_{\Theta}\big(a^k_{\phi}\,c^{n-k}_{\psi}\big),K^{\Lop}_{\psi}\sum_{k=1}^{n-1}\partial_{\Theta}\big(a^k_{\phi}\,c^{n-k}_{\psi}\big)\Big)_{n\geq 1}. \end{align*} Setting $ \b:=\partial_t\a $, system \eqref{eq ana dt a} is equivalent to \begin{multline}\label{eq ana b = F(b)} \b(t)=L\int_0^t\b(\sigma)\,d\sigma-\partial_{\Theta}\,\mathbf{D}^{\Lop}\Big(\int_0^t\b(\sigma)\,d\sigma,\int_0^t\b(\sigma)\,d\sigma\Big)\\ \quad-\mathbb{F}^{\per}\Big(\partial_{\Theta}\int_0^t\b(\sigma)\,d\sigma,\partial_{\Theta}\int_0^t\b(\sigma)\,d\sigma\Big) +\mathbf{K}^{\Lop}\Big(\int_0^t\b(\sigma)\,d\sigma,\int_0^t\b(\sigma)\,d\sigma\Big)+\mathbf{H}(t), \end{multline} with $ \a(t)=\int_0^t\b(\sigma)\,d\sigma $.
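Before estimating the terms of \eqref{eq ana b = F(b)}, the structure of the integrated fixed point can be illustrated on a scalar toy model (the constants, the source term and the discretization below are illustrative choices, not taken from the text): for $ a'(t)=c\,a(t)^2+H(t) $ with $ a(0)=0 $ and $ b:=\partial_t a $, the Picard iteration $ b_{k+1}=c\big(\int_0^t b_k\big)^2+H $ converges on a small enough time interval.

```python
# Toy scalar analogue (illustrative only) of the integrated fixed point
# b = F(b): for a'(t) = c*a(t)^2 + H(t) with a(0) = 0 and b := a', we iterate
# b_{k+1} = c * (int_0^t b_k)^2 + H on a grid until the iterates stabilize.
import math

c, T, N = 0.5, 0.5, 2001              # illustrative constants and grid size
dt = T / (N - 1)
t = [i * dt for i in range(N)]
H = [math.cos(s) for s in t]          # illustrative analytic source term

def integrate(b):
    """Cumulative trapezoidal integral t_i -> int_0^{t_i} b(s) ds."""
    out = [0.0]
    for i in range(1, len(b)):
        out.append(out[-1] + 0.5 * (b[i] + b[i - 1]) * dt)
    return out

b = [0.0] * N
for _ in range(60):                    # Picard iteration
    a = integrate(b)
    b_new = [c * ai * ai + Hi for ai, Hi in zip(a, H)]
    if max(abs(x - y) for x, y in zip(b_new, b)) < 1e-14:
        b = b_new
        break
    b = b_new

a = integrate(b)
residual = max(abs(bi - (c * ai * ai + Hi)) for bi, ai, Hi in zip(b, a, H))
print(residual)                        # fixed-point residual, at round-off level
```

The residual of the fixed-point equation drops to round-off level after a few dozen iterations, a discrete counterpart of the contraction property established below for \eqref{eq ana b = F(b)}.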
The aim is to solve equation \eqref{eq ana b = F(b)} with a fixed point theorem in $ \E_{\rho}^2 $, so we start by proving the following key estimates, which will allow us to prove that the operator at stake is a contraction. Estimates \eqref{eq ana est LBK E rho:L}, \eqref{eq ana est LBK E rho:B} and \eqref{eq ana est LBK E rho:K} below constitute what we call \emph{regularization by integration in time}: composing differentiation in $ (y,\Theta) $ with integration in time leads to no loss of regularity. This phenomenon was introduced in \cite{Ukai2001CauchyKovalevskaya}, and can also be found in \cite{Metivier2009Optics,Morisse2020Elliptic}. \begin{lemma}\label{lemma ana est LBK E rho} There exists $ C>0 $ such that for $ \rho\in(0,1) $, for $ \b,\mathbf{c} $ in $ \E_{\rho}^2 $, the following estimates hold \begin{subequations}\label{eq ana est LBK E rho} \begin{align} \label{eq ana est LBK E rho:L} \normetriple{t\mapsto L\int_0^t\b(\sigma)\,d\sigma}_{\E_{\rho}^2}&\leq C\,\rho\,\gamma_0\, \normetriple{\b}_{\E_{\rho}^2},\\ \label{eq ana est LBK E rho:B} \normetriple{t\mapsto \partial_{\Theta}\,\mathbf{D}^{\Lop}\Big(\int_0^t\b(\sigma)\,d\sigma,\int_0^t\mathbf{c}(\sigma)\,d\sigma\Big)}_{\E_{\rho}^2}&\leq C\,\rho^2\,\gamma_0\, \normetriple{\b}_{\E_{\rho}^2}\normetriple{\mathbf{c}}_{\E_{\rho}^2},\\ \label{eq ana est LBK E rho:F} \normetriple{t\mapsto \mathbb{F}^{\per}\Big(\partial_{\Theta}\int_0^t\b(\sigma)\,d\sigma,\partial_{\Theta}\int_0^t\mathbf{c}(\sigma)\,d\sigma\Big)}_{\E_{\rho}^2}&\leq C\,\rho^2\,\gamma_0\, \normetriple{\b}_{\E_{\rho}^2}\normetriple{\mathbf{c}}_{\E_{\rho}^2},\\ \label{eq ana est LBK E rho:K} \normetriple{t\mapsto \mathbf{K}^{\Lop}\Big(\int_0^t\b(\sigma)\,d\sigma,\int_0^t\mathbf{c}(\sigma)\,d\sigma\Big)}_{\E_{\rho}^2}&\leq C\,\rho^2\,\gamma_0\, \normetriple{\b}_{\E_{\rho}^2}\normetriple{\mathbf{c}}_{\E_{\rho}^2}.
\end{align} \end{subequations} \end{lemma} \begin{proof} First note that since $ \b,\mathbf{c} $ are in $ \E_{\rho}^2 $, the functions which we wish to estimate are in $ \big(\tilde{E}_{\rho}^{\mathbb{N}^*}\big)^2 $. Throughout this proof, we denote \begin{equation*} \b:=(\b_{\phi},\b_{\psi}):=\big(b_{\phi}^n,b_{\psi}^n\big)_{n\geq 1},\qquad \mathbf{c}:=(\mathbf{c}_{\phi},\mathbf{c}_{\psi}):=\big(c_{\phi}^n,c_{\psi}^n\big)_{n\geq 1}. \end{equation*} For $ \nu\geq 1 $ and $ s\in(0,1) $, we define $ s_{\nu}:=s+\inv{\nu+1} (1-s) $, which is such that $ s_{\nu}>s $ and satisfies, for $ \nu\geq 1 $, \begin{equation}\label{eq ana relations sn} \frac{1-s}{s_{\nu}-s}=\nu+1\qquad \text{and}\qquad \frac{1-s}{1-s_{\nu}}=1+\inv{\nu}. \end{equation} We proceed with the proof of estimate \eqref{eq ana est LBK E rho:L}, dealing with the function $ \mathbf{V}:=L\int_0^t\b(\sigma)\,d\sigma $, which we denote $ \mathbf{V}:=(\mathbf{V}_{\phi},\mathbf{V}_{\psi}):=\big(V_{\phi}^n,V_{\psi}^n\big)_{n\geq 1} $. According to Definition \ref{def E rho}, the aim is to estimate, for $ s\in(0,1) $, $ \nu\geq 0 $, $ \zeta=\phi,\psi $ and $ n\geq 1 $, the $ Y_s $-norm of $ \partial_t^{\nu} \,V_{\zeta}^n(0) $. Fix $ s\in(0,1) $, $ \zeta=\phi,\psi $ and $ n\geq 1 $, and recall that \begin{equation*} V_{\zeta}^n(t)=\mathbf{v}^{\Lop}_{\zeta}\cdot\nabla_y\int_0^tb_{\zeta}^n(\sigma)\,d\sigma.
\end{equation*} Therefore we have $ V_{\zeta}^n(0)=0 $ and, for $ \nu\geq 1 $, \begin{equation*} \partial_t^{\nu}V_{\zeta}^n(0)=\mathbf{v}^{\Lop}_{\zeta}\cdot\nabla_y\,\partial_t^{\nu-1}b_{\zeta}^n(0), \end{equation*} so, for $ s\in(0,1) $, using \eqref{eq ana est dt dtheta Ys} with $ 0<s<s_{\nu}\leq 1 $, estimate \eqref{eq ana hyp toy model v D int} and definition \eqref{eq ana est norme a} of $ E_{\rho} $-norm, \begin{align*} \norme{\partial_t^{\nu}\,V_{\zeta}^n(0)}_{Y_s}&\leq \gamma_0\norme{\nabla_y\,\partial_t^{\nu-1}b_{\zeta}^n(0)}_{Y_s}\leq \frac{C\,\gamma_0}{s_{\nu}-s}\norme{\partial_t^{\nu-1}b_{\zeta}^n(0)}_{Y_{s_{\nu}}}\\ &\leq \frac{C\,\gamma_0}{s_{\nu}-s}\frac{1}{(1-s_{\nu})^{\nu}}\frac{\nu!\,\mathfrak{C}_{\nu-1}}{(4\rho)^{\nu-1}}\normetriple{b_{\zeta}^n}_{E_{\rho}}. \end{align*} Therefore, using relations \eqref{eq ana relations sn}, \begin{align*} \norme{\partial_t^{\nu}\,V_{\zeta}^n(0)}_{Y_s}&\leq C\,\gamma_0\,\normetriple{b_{\zeta}^n}_{E_{\rho}}\, \inv{(1-s)^{\nu+1}}\frac{(\nu+1)\,\nu!\,\mathfrak{C}_{\nu-1}}{(4\rho)^{\nu-1}}\left(1+\inv{\nu}\right)^{\nu}\\ &\leq C\,\rho\,\gamma_0\,\normetriple{b_{\zeta}^n}_{E_{\rho}}\,\frac{(\nu+1)!\,\mathfrak{C}_{\nu}}{(1-s)^{\nu+1}\,(4\rho)^{\nu}}, \end{align*} so that $ \normetriple{V^n_{\zeta}}_{E_{\rho}}\leq C\,\rho\,\gamma_0\,\normetriple{b_{\zeta}^n}_{E_{\rho}} $. Since this estimate is independent of $ n,\zeta $, multiplying it by $ e^{2\rho n}\prodscalbis{n}^{2d^*} $ and summing over $ n\geq 1 $ leads to the analogous one for $ \E_{\rho}^2 $, which reads as \eqref{eq ana est LBK E rho:L}. 
For $ \mathbf{V}:=\partial_{\Theta}\,\mathbf{D}^{\Lop}\Big(\int_0^t\b(\sigma)\,d\sigma,\int_0^t\mathbf{c}(\sigma)\,d\sigma\Big) $, which we decompose in $ \big(\tilde{E}_{\rho}^{\mathbb{N}^*}\big)^2 $ as $ \mathbf{V}=:(\mathbf{V}_{\phi},\mathbf{V}_{\psi})=:\big(V_{\phi}^n,V_{\psi}^n\big)_{n\geq 1} $, with, for $ n\geq 1 $ and $ \zeta=\phi,\psi $, \begin{equation*} V_{\zeta}^n:=D^{\Lop}_{\zeta}\,\partial_{\Theta}\Big(\int_0^tb_{\zeta}^1(\sigma)\,d\sigma\int_0^tc_{\zeta}^n(\sigma)\,d\sigma\Big), \end{equation*} we compute $ V_{\zeta}^n(0)=\partial_tV_{\zeta}^n(0)=0 $ and, for $ \nu\geq 2 $, \begin{equation*} \partial_t^{\nu}\,V_{\zeta}^n(0)=D^{\Lop}_{\zeta}\,\partial_{\Theta}\sum_{\mu=1}^{\nu-1}\binom{\nu}{\mu}\partial_t^{\mu-1}b^1_{\zeta}(0)\,\partial_t^{\nu-\mu-1}c^n_{\zeta}(0). \end{equation*} Therefore, for $ s\in(0,1) $, using \eqref{eq ana est dt dtheta Ys} with $ 0<s<s_{\nu}\leq 1 $ and estimates \eqref{eq ana hyp toy model v D bord} and \eqref{eq ana est norme a}, we have \begin{align*} \norme{\partial_t^{\nu}V_{\zeta}^n(0)}_{Y_s}&\leq \frac{C\,\gamma_0}{s_{\nu}-s}\sum_{\mu=1}^{\nu-1}\binom{\nu}{\mu}\norme{\partial_t^{\mu-1}b^1_{\zeta}(0)}_{Y_{s_{\nu}}}\norme{\partial_t^{\nu-\mu-1}c^n_{\zeta}(0)}_{Y_{s_{\nu}}}\\ &\leq \frac{C\,\gamma_0}{s_{\nu}-s} \frac{\normetriple{b^1_{\zeta}}_{E_{\rho}}\normetriple{c^n_{\zeta}}_{E_{\rho}}}{(4\rho)^{\nu-2}}\sum_{\mu=1}^{\nu-1}\binom{\nu}{\mu}\frac{\mu!\,\mathfrak{C}_{\mu-1}}{(1-s_{\nu})^{\mu}}\frac{(\nu-\mu)!\,\mathfrak{C}_{\nu-\mu-1}}{(1-s_{\nu})^{\nu-\mu}}.
\end{align*} Again with relations \eqref{eq ana relations sn} we get \begin{align*}\nonumber \norme{\partial_t^{\nu}V_{\zeta}^n(0)}_{Y_s}&\leq \frac{C\,\gamma_0\,(\nu+1)}{1-s}\frac{\normetriple{b^1_{\zeta}}_{E_{\rho}}\normetriple{c^n_{\zeta}}_{E_{\rho}}}{(4\rho)^{\nu-2}}\frac{\nu!}{(1-s)^\nu}\sum_{\mu=1}^{\nu-1}\mathfrak{C}_{\mu-1}\,\mathfrak{C}_{\nu-\mu-1}\left(1+\inv{\nu}\right)^{\nu}\\ &\leq C\,\rho^2\,\gamma_0\,e\,\normetriple{b^1_{\zeta}}_{E_{\rho}}\normetriple{c^n_{\zeta}}_{E_{\rho}}\frac{(\nu+1)!}{(1-s)^{\nu+1}\,(4\rho)^{\nu}}\sum_{\mu=1}^{\nu-1}\mathfrak{C}_{\mu-1}\,\mathfrak{C}_{\nu-\mu-1}.\label{eq ana preuve est LBK E rho:inter1} \end{align*} We can conclude using \eqref{eq ana relation catalan}: \begin{equation*} \sum_{\mu=1}^{\nu-1}\mathfrak{C}_{\mu-1}\,\mathfrak{C}_{\nu-\mu-1}=\sum_{\mu=0}^{\nu-2}\mathfrak{C}_{\mu}\,\mathfrak{C}_{\nu-2-\mu}=\mathfrak{C}_{\nu-1}\leq \mathfrak{C}_{\nu}. \end{equation*} The last inequality follows from relation \eqref{eq ana relation catalan} together with $ \mathfrak{C}_0=1 $. The proof of estimate \eqref{eq ana est LBK E rho:F} for $ \mathbf{V}:=\mathbb{F}^{\per}\Big(\partial_{\Theta}\int_0^t\b(\sigma)\,d\sigma,\partial_{\Theta}\int_0^t\mathbf{c}(\sigma)\,d\sigma\Big) $ follows the same argument as the previous one, but it is simpler since, according to \eqref{eq ana hyp toy model F per}, $ \mathbb{F}^{\per}_{\zeta}(\partial_{\Theta}\,\a,\partial_{\Theta}\,\a) $ acts as a semilinear term in $ \a $. It is therefore omitted.
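Incidentally, if, as relation \eqref{eq ana relation catalan} together with $ \mathfrak{C}_0=1 $ suggests, the $ \mathfrak{C}_{\nu} $ are the Catalan numbers $ 1,1,2,5,14,\dots $, the convolution identity above can be checked by hand for $ \nu=4 $: \begin{equation*} \sum_{\mu=1}^{3}\mathfrak{C}_{\mu-1}\,\mathfrak{C}_{3-\mu}=\mathfrak{C}_0\,\mathfrak{C}_2+\mathfrak{C}_1\,\mathfrak{C}_1+\mathfrak{C}_2\,\mathfrak{C}_0=2+1+2=5=\mathfrak{C}_3\leq \mathfrak{C}_4=14. \end{equation*}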
Finally, we consider $ \mathbf{V}:=\mathbf{K}^{\Lop}\Big(\int_0^t\b(\sigma)\,d\sigma,\int_0^t\mathbf{c}(\sigma)\,d\sigma\Big) $, which we decompose in $ \big(\tilde{E}_{\rho}^{\mathbb{N}^*}\big)^2 $ as $ \mathbf{V}=:(\mathbf{V}_{\phi},\mathbf{V}_{\psi})=:\big(V_{\phi}^n,V_{\psi}^n\big)_{n\geq 1} $, with, for $ n\geq 1 $ and $ \zeta=\phi,\psi $, \begin{equation*} V_{\zeta}^n:=K^{\Lop}_{\zeta}\sum_{k=1}^{n-1}\partial_{\Theta}\big(b^k_{\phi},c^{n-k}_{\psi}\big). \end{equation*} For $ n\geq 1 $ and $ \zeta=\phi,\psi $, we compute $ V_{\zeta}^n(0)=\partial_tV_{\zeta}^n(0)=0 $, and for $ \nu\geq 2 $, \begin{equation*} \partial_t^{\nu}V_{\zeta}^n(0)=K^{\Lop}_{\zeta}\sum_{k=1}^{n-1}\partial_{\Theta}\sum_{\mu=1}^{\nu-1}\binom{\nu}{\mu}\partial_t^{\mu-1}b^k_{\phi}(0)\,\partial_t^{\nu-\mu-1}c^{n-k}_{\psi}(0), \end{equation*} so, for $ s\in(0,1) $, using \eqref{eq ana hyp toy model v D bord} and \eqref{eq ana est dt dtheta Ys} with $ 0<s<s_{\nu}\leq 1 $ and then \eqref{eq ana est norme a}, \begin{align*} \norme{\partial_t^{\nu}V_{\zeta}^n(0)}_{Y_s}&\leq \frac{C\,\gamma_0}{s_{\nu}-s}\sum_{k=1}^{n-1}\sum_{\mu=1}^{\nu-1}\binom{\nu}{\mu}\norme{\partial_t^{\mu-1}b^k_{\phi}(0)}_{Y_{s_{\nu}}}\norme{\partial_t^{\nu-\mu-1}c^{n-k}_{\psi}(0)}_{Y_{s_{\nu}}}\\ &\leq \frac{C\,\gamma_0}{s_{\nu}-s} \inv{(4\rho)^{\nu-2}}\sum_{k=1}^{n-1}\normetriple{b^k_{\phi}}_{E_{\rho}}\normetriple{c^{n-k}_{\psi}}_{E_{\rho}}\\ &\quad\sum_{\mu=1}^{\nu-1}\binom{\nu}{\mu}\frac{\mu!\,\mathfrak{C}_{\mu-1}}{(1-s_{\nu})^{\mu}}\frac{(\nu-\mu)!\,\mathfrak{C}_{\nu-\mu-1}}{(1-s_{\nu})^{\nu-\mu}}\\ &\leq C\,\rho^2\,\gamma_0\,\frac{(\nu+1)!\,\mathfrak{C}_{\nu}}{(1-s)^{\nu+1}(4\rho)^{\nu}}\sum_{k=1}^{n-1}\normetriple{b^k_{\phi}}_{E_{\rho}}\normetriple{c^{n-k}_{\psi}}_{E_{\rho}}, \end{align*} using \eqref{eq ana relation catalan} as before.
Therefore we get \begin{equation*} \normetriple{V^n_{\zeta}}_{E_{\rho}}\leq C\,\rho^2\,\gamma_0\sum_{k=1}^{n-1}\normetriple{b^k_{\phi}}_{E_{\rho}}\normetriple{c^{n-k}_{\psi}}_{E_{\rho}}, \end{equation*} so that, multiplying by $ e^{2\,\rho\, n}\prodscalbis{n}^{2d^*} $ and summing over $ n\geq 1 $, we obtain \begin{align*} \sum_{\zeta=\phi,\psi}\sum_{n\geq 1}&e^{2\,\rho\, n}\prodscalbis{n}^{2d^*}\normetriple{V^n_{\zeta}}_{E_{\rho}}^2\\[-10pt] &\leq C^2\,\rho^4\,\gamma_0^2\sum_{\zeta=\phi,\psi}\sum_{n\geq 1}e^{2\,\rho\, n}\prodscalbis{n}^{2d^*}\left(\sum_{k=1}^{n-1}\normetriple{b^k_{\phi}}_{E_{\rho}}\normetriple{c^{n-k}_{\psi}}_{E_{\rho}}\right)^2\\ &\leq C^2\,\rho^4\,\gamma_0^2\sum_{n\geq 1}\left(\sum_{k=1}^{n-1}\frac{\prodscalbis{n}^{2d^*}}{\prodscalbis{k}^{2d^*}\prodscalbis{n-k}^{2d^*}}\right)\\ &\quad\sum_{k=1}^{n-1}e^{2\,\rho\, k}\prodscalbis{k}^{2d^*}\normetriple{b^k_{\phi}}_{E_{\rho}}^2e^{2\,\rho\, (n-k)}\prodscalbis{n-k}^{2d^*}\normetriple{c^{n-k}_{\psi}}_{E_{\rho}}^2, \end{align*} using the Cauchy-Schwarz inequality. Therefore, since $ \sum_{k=1}^{n-1}\frac{\prodscalbis{n}^{2d^*}}{\prodscalbis{k}^{2d^*}\prodscalbis{n-k}^{2d^*}} $ is bounded uniformly with respect to $ n\geq 1 $ (as follows from $ \prodscalbis{n}\leq 2\max\big(\prodscalbis{k},\prodscalbis{n-k}\big) $ together with the convergence of $ \sum_{k\geq 1}\prodscalbis{k}^{-2d^*} $), we get \begin{equation*} \normetriple{\mathbf{V}}^2_{\E^2_{\rho}}=\sum_{\zeta=\phi,\psi}\sum_{n\geq 1}e^{2\,\rho\, n}\prodscalbis{n}^{2d^*}\normetriple{V^n_{\zeta}}_{E_{\rho}}^2\leq C^2\,\rho^4\,\gamma_0^2\, \normetriple{\b}^2_{\E^2_{\rho}}\normetriple{\mathbf{c}}^2_{\E^2_{\rho}}, \end{equation*} which is the sought inequality \eqref{eq ana est LBK E rho:K}. \end{proof} We are now in a position to prove existence for \eqref{eq ana b = F(b)} (which is equivalent to \eqref{eq ana dt a}) in the space $ \E_{\rho}^2 $ of functions analytic with respect to $ (t,y,\Theta) $. We follow here the method of \cite{Ukai2001CauchyKovalevskaya,Morisse2020Elliptic}.
\begin{proposition}\label{prop existence bord} For every $ M_0>0 $, there exists $ \rho\in(0,1) $ such that for every $ \mathbf{H} $ in $ \E^2_{\rho} $ satisfying $ \normetriple{\mathbf{H}}_{\E^2_{\rho}}<M_0 $, equation \eqref{eq ana b = F(b)} admits a unique solution $ \b $ in $ \E_{\rho}^2 $. \end{proposition} \begin{proof} Throughout this proof, for $ R>0 $, $ B_{\rho}(0,R) $ denotes the closed ball of $ \E_{\rho}^2 $ centered at 0 and of radius $ R $. For $ \b $ and $ \mathbf{H} $ in $ \E_{\rho}^2 $, we denote \begin{multline*} F\big(\mathbf{H},\b\big):=L\int_0^t\b(\sigma)\,d\sigma-\partial_{\Theta}\,\mathbf{D}^{\Lop}\Big(\int_0^t\b(\sigma)\,d\sigma,\int_0^t\b(\sigma)\,d\sigma\Big)\\ \quad-\mathbb{F}^{\per}\Big(\partial_{\Theta}\int_0^t\b(\sigma)\,d\sigma,\partial_{\Theta}\int_0^t\b(\sigma)\,d\sigma\Big) +\mathbf{K}^{\Lop}\Big(\int_0^t\b(\sigma)\,d\sigma,\int_0^t\b(\sigma)\,d\sigma\Big)+\mathbf{H}, \end{multline*} so that solving \eqref{eq ana b = F(b)} amounts to finding a fixed point of $ F\big(\mathbf{H},.\big): \b \mapsto F\big(\mathbf{H},\b\big) $. Therefore, we will prove that there exist $ R>0 $ and $ \rho\in(0,1) $ such that for every $ \mathbf{H} $ in $ \E^2_{\rho} $ satisfying $ \normetriple{\mathbf{H}}_{\E^2_{\rho}}<M_0 $, the map $ F\big(\mathbf{H},.\big) $ is a contraction from the complete space $ B_{\rho}(0,R) $ to itself. Consider $ M_0>0 $ and $ \mathbf{H} $ in $ \E^2_{\rho} $ such that $ \normetriple{\mathbf{H}}_{\E^2_{\rho}}<M_0 $. First we need to show that there exist $ \rho\in(0,1) $ and $ R>0 $ such that $ F\big(\mathbf{H},.\big) $ maps $ B_{\rho}(0,R) $ to itself.
Lemma \ref{lemma ana est LBK E rho} asserts that $ F\big(\mathbf{H},.\big) $ is well defined from $ \E_{\rho}^2 $ to itself and that it satisfies, for $ \b $ in $ \E_{\rho}^2 $, \begin{equation*} \normetriple{F\big(\mathbf{H},\b\big)}_{\E_{\rho}^2}\leq C\,\rho\,\gamma_0\, \normetriple{\b}_{\E_{\rho}^2}+C\,\rho^2\,\gamma_0\, \normetriple{\b}_{\E_{\rho}^2}^2+M_0, \end{equation*} for a new positive constant $ C>0 $ independent of $ \rho $, $ \mathbf{H} $, $ M_0 $ and $ \b $. Therefore, setting $ R:=2M_0$, for $ 0<\rho<C(\gamma_0,M_0) $, with \begin{equation*} C(\gamma_0,M_0):=\min\left(\big[4\,C\,\gamma_0\big]^{-1},\big[8\,C\,\gamma_0\,M_0\big]^{-1/2}\right), \end{equation*} the map $ F\big(\mathbf{H},.\big) $ maps the ball $ B_{\rho}(0,R) $ to itself. Indeed, for $ \b $ in $ B_{\rho}(0,R) $, these conditions on $ \rho $ give $ C\,\rho\,\gamma_0\,\normetriple{\b}_{\E_{\rho}^2}\leq M_0/2 $ and $ C\,\rho^2\,\gamma_0\,\normetriple{\b}_{\E_{\rho}^2}^2\leq M_0/2 $, so that $ \normetriple{F\big(\mathbf{H},\b\big)}_{\E_{\rho}^2}\leq 2M_0=R $. Now we need to show that this map is a contraction, for $ \rho<C(\gamma_0,M_0) $ small enough. We compute, for $ 0<\rho<C(\gamma_0,M_0) $ and for $ \b,\mathbf{c} $ in $ B_{\rho}(0,R) $, \begin{align*} F\big(&\mathbf{H},\b\big)-F\big(\mathbf{H},\mathbf{c}\big)=\\\nonumber &L\int_0^t\big(\b-\mathbf{c}\big)(\sigma)\,d\sigma-\partial_{\Theta}\,\mathbf{D}^{\Lop}\left(\int_0^t\b(\sigma)\,d\sigma,\int_0^t\big(\b-\mathbf{c}\big)(\sigma)\,d\sigma\right)\\\nonumber &-\partial_{\Theta}\,\mathbf{D}^{\Lop}\left(\int_0^t\big(\b-\mathbf{c}\big)(\sigma)\,d\sigma,\int_0^t\mathbf{c}(\sigma)\,d\sigma\right)-\mathbb{F}^{\per}\left(\partial_{\Theta}\int_0^t\b(\sigma)\,d\sigma,\partial_{\Theta}\int_0^t\big(\b-\mathbf{c}\big)(\sigma)\,d\sigma\right)\\\nonumber &-\mathbb{F}^{\per}\left(\partial_{\Theta}\int_0^t\big(\b-\mathbf{c}\big)(\sigma)\,d\sigma,\partial_{\Theta}\int_0^t\mathbf{c}(\sigma)\,d\sigma\right)+\mathbf{K}^{\Lop}\left(\int_0^t\b(\sigma)\,d\sigma,\int_0^t\big(\b-\mathbf{c}\big)(\sigma)\,d\sigma\right)\\\nonumber &+\mathbf{K}^{\Lop}\left(\int_0^t\big(\b-\mathbf{c}\big)(\sigma)\,d\sigma,\int_0^t\mathbf{c}(\sigma)\,d\sigma\right).
\end{align*} Therefore, using the estimates of Lemma \ref{lemma ana est LBK E rho} and the fact that $ \b,\mathbf{c} $ are in $ B_{\rho}(0,R) $, we get \begin{align*} \normetriple{F\big(\mathbf{H},\b\big)-F\big(\mathbf{H},\mathbf{c}\big)}_{\E_{\rho}^2}&\leq C\,\rho\,\gamma_0\Big(1+\rho\,\big(\normetriple{\b}_{\E_{\rho}^2}+\normetriple{\mathbf{c}}_{\E_{\rho}^2}\big)\Big)\normetriple{\b-\mathbf{c}}_{\E_{\rho}^2}\\[5pt] &\leq C\,\rho\,\gamma_0\,\big(1+\rho\,R\big)\normetriple{\b-\mathbf{c}}_{\E_{\rho}^2}, \end{align*} where the constant $ C>0 $ may change from one line to the next. Thus, for $ \rho<\tilde{C}(\gamma_0,M_0) $, with \begin{equation*} \tilde{C}(\gamma_0,M_0):=\min\Big(\big[4\,\gamma_0\,M_0\big]^{-1},\big[8\,\gamma_0\,M_0^2\big]^{-1/2}, \big[2\,C\,\gamma_0\,\big(1+2\,M_0\big)\big]^{-1}\Big), \end{equation*} the map $ F\big(\mathbf{H},.\big) $ is a contraction from $ B_{\rho}(0,R) $ to itself. Since $ B_{\rho}(0,R) $ is a closed subspace of the Banach space $ \E^2_{\rho} $, the Banach fixed-point theorem gives a unique solution to \eqref{eq ana b = F(b)}. \end{proof} \subsubsection{A Cauchy-Kovalevskaya theorem for incoming interior equations} The aim is now to prove existence of a solution to \eqref{eq ana sigma}-\eqref{eq ana bord sigma} with the Cauchy-Kovalev\-skaya type Theorem \ref{theorem Cauchy-Kovalevskaya} using the chain of Banach spaces $ \big(\X_r\big)_{r\in(0,1)} $. We start by writing system \eqref{eq ana sigma}-\eqref{eq ana bord sigma} in a form suited to apply Theorem \ref{theorem Cauchy-Kovalevskaya}.
Up to multiplying \eqref{eq ana sigma} by a nonzero constant (which is the $ x_d $-component of $ -\mathbf{v}_{\zeta,j} $), system \eqref{eq ana sigma}-\eqref{eq ana bord sigma} can be written as \begin{equation}\label{eq ana Sigma inhom} \begin{cases} \partial_{x_d}\boldsymbol{\sigma}=L\,\boldsymbol{\sigma}-\partial_{\Theta}\,\mathbf{D}\big(\boldsymbol{\sigma},\boldsymbol{\sigma}\big)-\partial_{\Theta}\,\mathbb{J}\big(\boldsymbol{\sigma},\boldsymbol{\sigma}\big)+\partial_{\Theta}\,\mathbf{K}\big(\boldsymbol{\sigma},\boldsymbol{\sigma}\big)\\[5pt] \boldsymbol{\sigma}_{|x_d=0}=\tilde{\a} +\mathbf{g}, \end{cases} \end{equation} where $ \boldsymbol{\sigma}:=\big(\boldsymbol{\sigma}_{\phi,1},\boldsymbol{\sigma}_{\phi,3},\boldsymbol{\sigma}_{\psi,1},\boldsymbol{\sigma}_{\psi,3},\boldsymbol{\sigma}_{\nu,1},\boldsymbol{\sigma}_{\nu,3}\big):=\big(\sigma_{\phi,1}^n,\sigma_{\phi,3}^n,\sigma_{\psi,1}^n,\sigma_{\psi,3}^n,\sigma_{\nu,1}^n,\sigma_{\nu,3}^n\big)_{n\geq 1} $, function $ \a $ is the solution to \eqref{eq ana a}-\eqref{eq ana a init}, function $ \tilde{\a} $ is defined from $ \a $ by $ \tilde{\a}:=\big((e_{\phi,1}\cdot r_{\phi,1})\,\a_{\phi},(e_{\phi,3}\cdot r_{\phi,3})\,\a_{\phi},(e_{\psi,1}\cdot r_{\psi,1})\,\a_{\psi},(e_{\psi,3}\cdot r_{\psi,3})\,\a_{\psi},0,0\big) $, boundary term $ \mathbf{g} $ is defined as $ \mathbf{g}:=\big(\mathbf{g}_{\phi,1},\dots,\mathbf{g}_{\nu,3}\big) $, and, if $ \boldsymbol{\tau}:=\big(\tau_{\phi,1}^n,\dots,\tau_{\nu,3}^n\big)_{n\geq 1} $, \begin{align*} L\,\boldsymbol{\sigma}&:=\big(\mathbf{v}_{\phi,1}\cdot\nabla_{t,y}\,\boldsymbol{\sigma}_{\phi,1},\dots,\mathbf{v}_{\nu,3}\cdot\nabla_{t,y}\,\boldsymbol{\sigma}_{\nu,3}\big),\\[5pt] \mathbf{D} \big(\boldsymbol{\sigma},\boldsymbol{\tau}\big)&:=\big(D_{\phi,1}\,\sigma_{\phi,1}^n\,\tau_{\phi,1}^1,\dots,D_{\nu,3}\,\sigma_{\nu,3}^n\,\tau_{\nu,3}^1\big)_{n\geq 1},\\ 
\mathbb{J}\big(\boldsymbol{\sigma},\boldsymbol{\tau}\big)&:=\left(\sum_{\zeta_1,\zeta_2\in\ensemble{\phi,\psi,\nu}}\,\sum_{j_1,j_2\in\ensemble{1,3}}\mathbb{J}^{\zeta_2,j_2}_{\zeta_1,j_1}\big[\sigma^1_{\zeta_1,j_1},\tau^n_{\zeta_2,j_2}\big]\right)_{\substack{\zeta=\phi,\psi,\nu\\j=1,3\\n\geq 1}},\\[0pt] \mathbf{K}\big(\boldsymbol{\sigma},\boldsymbol{\tau}\big)&:=\left(K_{\zeta,j}\sum_{\zeta_1,\zeta_2\in\ensemble{\phi,\psi,\nu}}\,\sum_{j_1,j_2\in\ensemble{1,3}}\,\sum_{k=1}^{n-1}\sigma^k_{\zeta_1,j_1}\,\sigma^{n-k}_{\zeta_2,j_2}\right)_{\substack{\zeta=\phi,\psi,\nu\\j=1,3\\n\geq 1}}, \end{align*} with new\footnote{Due to the fact that we multiplied equation \eqref{eq ana sigma} by a nonzero coefficient to obtain a propagation equation in the normal variable.} $ \mathbf{v}_{\zeta,j} $, $ D_{\zeta,j} $, $ K_{\zeta,j} $ and $ \mathbb{J}_{\zeta_1,j_1}^{\zeta_2,j_2} $, satisfying the same assumptions as the old ones \eqref{eq ana hyp toy model v D int} and \eqref{eq ana hyp toy model J}. Note that there exists a constant $ C>0 $ such that for any $ \a $ in $ \X_r^2 $, we have $ \normetriple{\tilde{\a}}_{\X_r^6}\leq C \normetriple{\a}_{\X_r^2} $. For $ N_0>0 $, denote by $ \tilde{F} $ the function of $ [-N_0,N_0]\times\X_r^6 $ defined by, for $ |x_d|\leq N_0 $ and $ \boldsymbol{\sigma}\in\X_r^6 $, \begin{equation*} \tilde{F}(x_d,\boldsymbol{\sigma}):=L\,\boldsymbol{\sigma}-\partial_{\Theta}\,\mathbf{D}\big(\boldsymbol{\sigma},\boldsymbol{\sigma}\big)-\partial_{\Theta}\,\mathbb{J}\big(\boldsymbol{\sigma},\boldsymbol{\sigma}\big)+\partial_{\Theta}\,\mathbf{K}\big(\boldsymbol{\sigma},\boldsymbol{\sigma}\big), \end{equation*} and set $ \boldsymbol{\sigma}^0:=\tilde{\a}+\mathbf{g} $. 
Now system \eqref{eq ana Sigma inhom} is equivalent to the following one, with $ \boldsymbol{\tau}:=\boldsymbol{\sigma}-\boldsymbol{\sigma}^0 $, \begin{equation}\label{eq ana Tau hom} \begin{cases} \boldsymbol{\tau}'(x_d)=\tilde{F}\big(x_d,\boldsymbol{\tau}(x_d)+\boldsymbol{\sigma}^0\big)\\ \boldsymbol{\tau}(0)=0, \end{cases} \end{equation} which is, with $ F\big(x_d,\boldsymbol{\tau}(x_d)\big):=\tilde{F}\big(x_d,\boldsymbol{\tau}(x_d)+\boldsymbol{\sigma}^0\big) $ for $ |x_d|<N_0 $, in the right form to apply Theorem \ref{theorem Cauchy-Kovalevskaya}. Note that the operator $ F $ actually does not depend on $ x_d $, so all suprema in $ x_d $ in the assumptions of Theorem \ref{theorem Cauchy-Kovalevskaya} may be dropped below when verifying these assumptions for our particular problem \eqref{eq ana Tau hom}. We therefore omit the dependence of $ F $ on $ x_d $ and simply write $ F(\boldsymbol{\tau}) $. It remains to check the assumptions of Theorem \ref{theorem Cauchy-Kovalevskaya} to obtain existence of solutions to \eqref{eq ana Tau hom}. The key estimates for doing so are the following.
\begin{lemma}\label{lemma ana est LBR Xr} There exists $ C>0 $ such that for $ 0\leq r'<r\leq1 $, for $ \boldsymbol{\sigma},\boldsymbol{\tau} $ in $ \X_r^6 $, the following estimates hold: \begin{subequations}\label{eq ana est LBR Xr} \begin{align}\label{eq ana est LBR Xr:L} \normetriple{L\,\boldsymbol{\sigma}}_{\X^6_{r'}}&\leq \frac{C\,\gamma_0}{r-r'}\,\normetriple{\boldsymbol{\sigma}}_{\X_r^6}\\[5pt]\label{eq ana est LBR Xr:B} \normetriple{\mathbf{D}\big(\boldsymbol{\sigma},\boldsymbol{\tau}\big)}_{\X^6_r}&\leq C\gamma_0 \normetriple{\boldsymbol{\sigma}}_{\X_r^6}\normetriple{\boldsymbol{\tau}}_{\X_r^6}\\[5pt]\label{eq ana est LBR Xr:R} \normetriple{\mathbb{J}\big(\boldsymbol{\sigma},\boldsymbol{\tau}\big)}_{\X^6_r}&\leq C\gamma_0 \normetriple{\boldsymbol{\sigma}}_{\X_r^6}\normetriple{\boldsymbol{\tau}}_{\X_r^6}\\[5pt]\label{eq ana est LBR Xr:etoile} \normetriple{\mathbf{K}\big(\boldsymbol{\sigma},\boldsymbol{\tau}\big)}_{\X^6_r}&\leq C\gamma_0\normetriple{\boldsymbol{\sigma}}_{\X_r^6}\normetriple{\boldsymbol{\tau}}_{\X_r^6}. \end{align} \end{subequations} \end{lemma} \begin{proof} Estimate \eqref{eq ana est LBR Xr:L} follows directly from Lemma \ref{lemma ana dy dtheta Xr} and assumption \eqref{eq ana hyp toy model v D int} on $ \mathbf{v}_{\zeta,j} $, for $ \zeta=\phi,\psi,\nu $, and $ j=1,3 $. In turn, estimates \eqref{eq ana est LBR Xr:B} and \eqref{eq ana est LBR Xr:R} rely on the algebra property of $ X_r $ and assumptions \eqref{eq ana hyp toy model v D int} and \eqref{eq ana hyp toy model J} on $ D_{\zeta,j} $ and $ \mathbb{J}^{\zeta_2,j_2}_{\zeta_1,j_1} $. Finally, estimate \eqref{eq ana est LBR Xr:etoile} is proven using assumption \eqref{eq ana hyp toy model v D int} on $ K_{\zeta,j} $ for $ \zeta=\phi,\psi,\nu $, and $ j=1,3 $, and the same arguments as those used to prove the algebra property of $ \X_r $.
\end{proof} The main result of this part is the following one, which, along with Proposition \ref{prop existence bord}, will prove Theorem \ref{theorem existence ana}. \begin{proposition}\label{prop existence eq int} Consider $ 0<r_1<1 $, and $ \a $ a solution to system \eqref{eq ana a}-\eqref{eq ana a init} given in $ \X_{r_1}^2 $. Then, for every $ M_1>0 $, the following existence and uniqueness result holds: there exists $ \delta>0 $ such that for every $ \mathbf{g} $ in $ \X^6_{r_1} $ satisfying $ \normetriple{\mathbf{g}}_{\X^6_{r_1}}<M_1 $, system \eqref{eq ana Tau hom} admits a unique solution in $ \mathcal{C}^1\big((-\delta(r_1-r),\delta(r_1-r)),\X_r^6\big) $ for each $ r\in(0,r_1) $. \end{proposition} \begin{proof} The aim is to apply Theorem \ref{theorem Cauchy-Kovalevskaya} with the scale of Banach spaces $ \big(\X^6_r\big)_{0<r\leq r_1} $. Now fix a constant $ M_1>0 $ as well as\footnote{The constant $ R $ appears only in the proof, and can be chosen arbitrarily large.} $ R>0 $, and consider $ \mathbf{g} $ in $ \X_{r_1}^6 $ such that $ \normetriple{\mathbf{g}}_{\X_{r_1}^6}<M_1 $. We will now verify assumptions \eqref{eq ana hyp F 1} and \eqref{eq ana hyp F 2}.
We compute that for $ \boldsymbol{\tau} $ in $ \X_{r_1}^6 $, we have \begin{align*} F\big(\boldsymbol{\tau}\big)=&L\,\boldsymbol{\tau}+L\,\boldsymbol{\sigma^0}-\partial_{\Theta}\,\mathbf{D}\big(\boldsymbol{\tau},\boldsymbol{\tau}\big)-\partial_{\Theta}\,\mathbf{D}\big(\boldsymbol{\sigma^0},\boldsymbol{\tau}\big)-\partial_{\Theta}\,\mathbf{D}\big(\boldsymbol{\tau},\boldsymbol{\sigma^0}\big)-\partial_{\Theta}\,\mathbf{D}\big(\boldsymbol{\sigma^0},\boldsymbol{\sigma^0}\big)\\ &-\partial_{\Theta}\,\mathbb{J}\big(\boldsymbol{\tau},\boldsymbol{\tau}\big)-\partial_{\Theta}\,\mathbb{J}\big(\boldsymbol{\tau},\boldsymbol{\sigma^0}\big)-\partial_{\Theta}\,\mathbb{J}\big(\boldsymbol{\sigma^0},\boldsymbol{\tau}\big)-\partial_{\Theta}\,\mathbb{J}\big(\boldsymbol{\sigma^0},\boldsymbol{\sigma^0}\big)\\ &+\partial_{\Theta}\,\mathbf{K}\big(\boldsymbol{\tau},\boldsymbol{\tau}\big)+\partial_{\Theta}\,\mathbf{K}\big(\boldsymbol{\tau},\boldsymbol{\sigma^0}\big)+\partial_{\Theta}\,\mathbf{K}\big(\boldsymbol{\sigma^0},\boldsymbol{\tau}\big)+\partial_{\Theta}\,\mathbf{K}\big(\boldsymbol{\sigma^0},\boldsymbol{\sigma^0}\big). 
\end{align*} Therefore, \begin{equation*} F(0)=L\,\boldsymbol{\sigma^0}-\partial_{\Theta}\,\mathbf{D}\big(\boldsymbol{\sigma^0},\boldsymbol{\sigma^0}\big)-\partial_{\Theta}\,\mathbb{J}\big(\boldsymbol{\sigma^0},\boldsymbol{\sigma^0}\big)+\partial_{\Theta}\,\mathbf{K}\big(\boldsymbol{\sigma^0},\boldsymbol{\sigma^0}\big), \end{equation*} so, using Lemmas \ref{lemma ana dy dtheta Xr} and \ref{lemma ana est LBR Xr}, we get that there exists a constant $ C>0 $ such that for all $ 0<r'<r<r_1 $, \begin{equation*} \normetriple{F(0)}_{\X_{r'}^6}\leq \frac{C\,\gamma_0}{r-r'} \Big(\normetriple{\boldsymbol{\sigma^0}}_{\X_r^6}+\normetriple{\boldsymbol{\sigma^0}}_{\X_r^6}^2\Big), \end{equation*} and then, using the fact that $ \normetriple{\tilde{\a}}_{\X_r^6}\leq C \normetriple{\a}_{\X_r^2} $ and $ \normetriple{\mathbf{g}}_{\X_{r_1}^6}<M_1 $, \begin{equation*} \normetriple{F(0)}_{\X_{r'}^6}\leq \frac{C\,\gamma_0}{r-r'}\Big(\normetriple{\a}_{\X_r^2}+\normetriple{\a}^2_{\X_r^2}+M_1+M_1^2\Big), \end{equation*} so assumption \eqref{eq ana hyp F 2} is satisfied with $ M:=C\,\gamma_0\big(\normetriple{\a}_{\X_r^2}+\normetriple{\a}^2_{\X_r^2}+M_1+M_1^2\big) $.
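For completeness, the passage from the bound in terms of $ \boldsymbol{\sigma^0} $ to the one in terms of $ \a $ and $ M_1 $ uses $ \normetriple{\boldsymbol{\sigma^0}}_{\X_r^6}\leq C\,\normetriple{\a}_{\X_r^2}+M_1 $, which follows from $ \boldsymbol{\sigma}^0=\tilde{\a}+\mathbf{g} $, together with the elementary inequality $ (x+y)^2\leq 2\,(x^2+y^2) $, so that, up to enlarging the constant $ C $, \begin{equation*} \normetriple{\boldsymbol{\sigma^0}}_{\X_r^6}+\normetriple{\boldsymbol{\sigma^0}}_{\X_r^6}^2\leq C\,\Big(\normetriple{\a}_{\X_r^2}+\normetriple{\a}^2_{\X_r^2}+M_1+M_1^2\Big). \end{equation*}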
On the other hand, we have, for $ \boldsymbol{\tau},\boldsymbol{\omega} $ in $ \X_r^6 $, \begin{align*} F\big(\boldsymbol{\tau}\big)-F\big(\boldsymbol{\omega}\big)&=L\,\big(\boldsymbol{\tau}-\boldsymbol{\omega}\big)-\partial_{\Theta}\,\mathbf{D}\big(\boldsymbol{\tau}-\boldsymbol{\omega},\boldsymbol{\tau}\big)-\partial_{\Theta}\,\mathbf{D}\big(\boldsymbol{\omega},\boldsymbol{\tau}-\boldsymbol{\omega}\big)-\partial_{\Theta}\,\mathbf{D}\big(\boldsymbol{\sigma^0},\boldsymbol{\tau}-\boldsymbol{\omega}\big)\\ &\quad-\partial_{\Theta}\,\mathbf{D}\big(\boldsymbol{\tau}-\boldsymbol{\omega},\boldsymbol{\sigma^0}\big)-\partial_{\Theta}\,\mathbb{J}\big(\boldsymbol{\tau}-\boldsymbol{\omega},\boldsymbol{\tau}\big)-\partial_{\Theta}\,\mathbb{J}\big(\boldsymbol{\omega},\boldsymbol{\tau}-\boldsymbol{\omega}\big)-\partial_{\Theta}\,\mathbb{J}\big(\boldsymbol{\sigma^0},\boldsymbol{\tau}-\boldsymbol{\omega}\big)\\ &\quad-\partial_{\Theta}\,\mathbb{J}\big(\boldsymbol{\tau}-\boldsymbol{\omega},\boldsymbol{\sigma^0}\big) +\partial_{\Theta}\,\mathbf{K}\big(\boldsymbol{\tau}-\boldsymbol{\omega},\boldsymbol{\tau}\big) +\partial_{\Theta}\,\mathbf{K}\big(\boldsymbol{\omega},\boldsymbol{\tau}-\boldsymbol{\omega}\big) +\partial_{\Theta}\,\mathbf{K}\big(\boldsymbol{\sigma^0},\boldsymbol{\tau}-\boldsymbol{\omega}\big)\\ &\quad +\partial_{\Theta}\,\mathbf{K}\big(\boldsymbol{\tau}-\boldsymbol{\omega},\boldsymbol{\sigma^0}\big). 
\end{align*} Therefore, using Lemmas \ref{lemma ana dy dtheta Xr} and \ref{lemma ana est LBR Xr}, we get that there exists $ C>0 $ such that for all $ 0<r'<r<r_1 $, for each $ \boldsymbol{\tau},\boldsymbol{\omega} $ in $ \ensemble{\boldsymbol{\sigma}\in \X_r^6\middle| \normetriple{\boldsymbol{\sigma}}_{\X_r^6}<R} $, \begin{align*} \normetriple{F\big(\boldsymbol{\tau}\big)-F\big(\boldsymbol{\omega}\big)}_{\X_{r'}^6}&\leq C\,\gamma_0\Big(1+\normetriple{\boldsymbol{\tau}}_{\X_r^6}+\normetriple{\boldsymbol{\omega}}_{\X_r^6}+\normetriple{\boldsymbol{\sigma}^0}_{\X_r^6}\Big)\frac{\normetriple{\boldsymbol{\tau}-\boldsymbol{\omega}}_{\X_r^6}}{r-r'}\\ &\leq \frac{C\,\gamma_0\,\big(1+R+\normetriple{\a}_{\X_r^2}+M_1\big)}{r-r'}\,\normetriple{\boldsymbol{\tau}-\boldsymbol{\omega}}_{\X_r^6}. \end{align*} This estimate shows that both the continuity property of $ F $ and assumption \eqref{eq ana hyp F 1}, with $ C:=C\,\gamma_0\,\big(1+R+\normetriple{\a}_{\X_r^2}+M_1\big) $, are satisfied. We can therefore apply Theorem \ref{theorem Cauchy-Kovalevskaya}, which gives the sought result. \end{proof} The proof of Theorem \ref{theorem existence ana} is now straightforward. Fix two positive constants $ M_0 $ and $ M_1 $. Proposition \ref{prop existence bord} asserts the existence of $ \rho_1\in(0,1) $ such that for all $ \mathbf{H} $ in $ \E_1^2 $ satisfying $ \normetriple{\mathbf{H}}_{\E_1^2}<M_0 $, there exists a solution $ \a $ in $ \E_{\rho_1}^2 $ to system \eqref{eq ana a}-\eqref{eq ana a init}. Then, Lemma \ref{lemma Erho dans Xr} ensures that there exists $ r_1 $ (depending only on $ \rho_1 $) such that the solution $ \a $ to \eqref{eq ana a}-\eqref{eq ana a init} is in $ \X_{r_1} $.
Proposition \ref{prop existence eq int} gives the existence of $ \delta>0 $ such that for every $ \mathbf{g} $ in $ \X_{r_1}^6 $ satisfying $ \normetriple{\mathbf{g}}_{\X^6_{r_1}}<M_1 $, there exists a solution $ \boldsymbol{\sigma} $ in $ \mathcal{C}^1\big((-\delta(r_1-r),\delta(r_1-r)),\X_r^6\big) $, for each $ r\in(0,r_1) $, to system \eqref{eq ana sigma}-\eqref{eq ana bord sigma}. This is precisely the statement of Theorem \ref{theorem existence ana}. \bigskip To bring the simplified model \eqref{eq ana sigma}, \eqref{eq ana bord sigma}, \eqref{eq ana a} and \eqref{eq ana a init} closer to the general system \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma}, \eqref{eq systeme sigma bord} and \eqref{eq systeme a}, several aspects could be incorporated into it. Outgoing equations associated with boundary frequencies $ \phi $, $ \psi $ and $ \nu $ could be integrated into interior equations \eqref{eq ana sigma}. This mainly raises an issue of functional framework, as we solved incoming equations \eqref{eq ana sigma} as propagation equations in the normal variable, a framework which is not suited for outgoing equations. Then it would be possible to incorporate traces of outgoing profiles in boundary conditions \eqref{eq ana bord sigma} and boundary evolution equations \eqref{eq ana a}. For that we would need trace estimates for the chosen functional framework. In a longer-term perspective, we could integrate profiles associated with boundary frequencies different from $ \phi $, $ \psi $ and $ \nu $ in interior equations \eqref{eq ana sigma} and boundary equations \eqref{eq ana a}, which would require a total change of the functional framework, since we would have to work with almost-periodic functions.
We could also consider derivatives of order higher than one in the source terms of these equations \eqref{eq ana sigma} and \eqref{eq ana a}. \section{Instability} This section is devoted to the proof of instability. More precisely, the aim is to show that the perturbation $ H $ in \eqref{eq systeme 1} interferes at leading order in the asymptotic expansion \eqref{eq ansatz}. This is not the case in general, where the perturbation $ \epsilon^3\,h^{\epsilon} $ interferes only at order $ \epsilon^2 $ and higher, see \cite{MajdaArtola1988Mixed}. As the perturbation $ \epsilon^3\,h^{\epsilon} $ of $ \epsilon^2\,g^{\epsilon} $ in \eqref{eq systeme 1} is small, we will work with the linearization of system \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma}, \eqref{eq systeme sigma bord} and \eqref{eq systeme a} around the particular solution corresponding to a zero perturbation. To simplify the computations further, we will prove instability for simplified models of the linearized system. The first part of the section focuses on deriving the linearized system for the profiles. \subsection{Linearization around a particular solution} If the perturbation $ H $ is uniformly zero in \eqref{eq systeme 1}, then we are brought back to the case of \cite{CoulombelWilliams2017Mach}, and the solution obtained in that work is thus a solution to our cascade of equations \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma}, \eqref{eq systeme sigma bord} and \eqref{eq systeme a} in this particular case. Therefore, according to \cite{CoulombelWilliams2017Mach}, we have the following result. \begin{proposition}[{\cite[Theorem 1.10]{CoulombelWilliams2017Mach}}]\label{prop exist sol part} Let $ T_0>0 $, and consider $ G $ in $ \mathcal{C}^{\infty}\big((-\infty,T_0],\linebreak H^{\infty}(\R^{d-1}\times\T)\big) $, zero for negative times $ t $, and $ H\equiv 0 $.
Then there exist $ T\in(0,T_0] $ and unique sequences of functions $ \big(\bar{U}_n^*\big)_{n\geq 0} $, and $ \big(\bar{\sigma}^n_{\zeta,j,\lambda}\big)_{n\geq 0} $ for $ \zeta=\phi,\psi,\nu $ and $ j=1,2,3 $ in $ \mathcal{C}^{\infty}\big((-\infty,T_0], H^{\infty}(\R^{d-1}\times\R_+\times\T)\big) $ and sequences $ \big(\bar{a}_{\zeta,\lambda}^n \big)_{n\geq 1} $ for $ \zeta=\phi,\psi $ in $ \mathcal{C}^{\infty}\big((-\infty,T_0], \linebreak H^{\infty}(\R^{d-1}\times\T)\big) $, solving the cascade of equations \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma}, \eqref{eq systeme sigma bord} and \eqref{eq systeme a}. \end{proposition} Note that Theorem \ref{theorem existence ana} constitutes a version of Proposition \ref{prop exist sol part} in the case where $ H $ is possibly nonzero, but only on a simplified model of system \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma}, \eqref{eq systeme sigma bord} and \eqref{eq systeme a}, and with a different functional framework. Note also that since $ H $ is zero, we have, for $ n\geq 1 $, $ \lambda\in\Z^* $, for the solution of Proposition \ref{prop exist sol part}, \begin{equation}\label{eq insta sigma a bar zero} \bar{\sigma}^n_{\zeta,j,\lambda}=0 \quad \mbox{for $ \zeta\neq \phi $ and $ j\in\mathcal{C}(\zeta) $,}\qquad \mbox{and}\qquad \bar{a}^n_{\psi,\lambda}=0. \end{equation} The aim of this part is to derive the linearization of system \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma}, \eqref{eq systeme sigma bord} and \eqref{eq systeme a} around the particular solution of Proposition \ref{prop exist sol part}.
Schematically, in order to study the general problem of the form $ \mathcal{F}(u)=(G,H,0,\dots) $, we linearize this problem around the particular solution $ \bar{u} $ of $ \mathcal{F}(\bar{u})=(G,0,0,\dots) $ to obtain the linearized problem $ d\mathcal{F}(\bar{u})\cdot u =(0,H,0,\dots) $. We will also simplify the linearized system during its derivation, since some profiles are easily shown to be zero. We only detail the linearized equations for the orders we are interested in, namely the first and second orders, and only for the profiles of interest, that is, $ \sigma^n_{\psi,1,\lambda} $, $ \sigma^n_{\psi,2,\lambda} $, $ \sigma^n_{\nu,2,\lambda} $ and $ a^n_{\psi,\lambda} $, for $ \lambda\in\Z^* $ and $ n=1,2 $. Here, unlike in the formulation of \eqref{eq systeme sigma}, we write each equation separately, as they now differ since $ \bar{\sigma}_{\zeta,j,\lambda}^n $ is zero for $ \zeta\neq \phi $. We also adopt a new color code for these equations. For the leading profile, starting from equations \eqref{eq evolution sigma 1}, we get, for the phases $ \psi_1 $, $ \psi_2 $ and $ \nu_2 $, and for $ \lambda\in\Z^* $, \begin{subequations}\label{eq linearise eq prof princ} \begin{align}\label{eq linearise eq prof princ psi1} \color{altblue}X_{\psi,1}\,\sigma^1_{\psi,1,\lambda}+\indicatrice_{\lambda=k\lambda_{\psi}}\,J^{\phi,1}_{\nu,2}\,ik\,\bar{\sigma}^1_{\phi,1,-k\lambda_{\phi}}\,\sigma^1_{\nu,2,-k}&\color{altblue}=0,\\[5pt]\label{eq linearise eq prof princ psi2} \color{altpink}X_{\psi,2}\,\sigma^1_{\psi,2,\lambda}+\indicatrice_{\lambda=k\lambda_{\psi}}\,J^{\phi,3}_{\nu,2}\,ik\,\bar{\sigma}^1_{\phi,3,-k\lambda_{\phi}}\,\sigma^1_{\nu,2,-k}&\color{altpink}=0,\\[5pt]\label{eq linearise eq prof princ nu2} \color{altpurple}X_{\nu,2}\,\sigma^1_{\nu,2,\lambda}
+J^{\phi,1}_{\psi,1}\,i\lambda\,\bar{\sigma}^1_{\phi,1,-\lambda\lambda_{\phi}}\,\sigma^1_{\psi,1,-\lambda\lambda_{\psi}}+J^{\phi,3}_{\psi,2}\,i\lambda\,\bar{\sigma}^1_{\phi,3,-\lambda\lambda_{\phi}}\,\sigma^1_{\psi,2,-\lambda\lambda_{\psi}}&\color{altpurple}=0. \end{align} \end{subequations} In equations \eqref{eq linearise eq prof princ psi1} and \eqref{eq linearise eq prof princ psi2}, if $ \lambda\notin \lambda_{\psi}\Z $, no resonance happens, but if $ \lambda=k\lambda_{\psi} $ for some $ k\in\Z^* $, then, for example for the phase $ \psi_1 $, the resonance $ k\lambda_{\phi}\,\phi_1+k\lambda_{\psi}\,\psi_1+k\,\nu_2=0 $ occurs. This explains the presence of factors $ \indicatrice_{\lambda=k\lambda_{\psi}} $ in equations \eqref{eq linearise eq prof princ psi1} and \eqref{eq linearise eq prof princ psi2}. We also have, for $ j=1,3 $ and $ \lambda\in\Z^* $, the transport equation \begin{equation}\label{eq linearise eq prof princ phi} X_{\phi,j}\,\sigma^1_{\phi,j,\lambda}+D_{\phi,j}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda_2\,\bar{\sigma}^1_{\phi,j,\lambda_1}\,\sigma^1_{\phi,j,\lambda_2}=0. \end{equation} As for them, the linearized equations for boundary terms $ a^1_{\phi} $ and $ a^1_{\psi} $ read \begin{equation}\label{eq linearise eq bord prof princ phi} X^{\Lop}_{\phi}\,a^1_{\phi,\lambda} +v_{\phi}\,\sum_{\lambda_1+\lambda_2=\lambda}i\lambda_2\,\bar{a}^1_{\phi,\lambda_1}\,a^1_{\phi,\lambda_2}+\lambda\sum_{\lambda_1+\lambda_2=\lambda}\gamma_{\phi}(\lambda_1,\lambda_2)\,\big(\bar{a}^1_{\phi,\lambda_1}\,a^1_{\phi,\lambda_2}+a^1_{\phi,\lambda_1}\,\bar{a}^1_{\phi,\lambda_2}\big)=0, \end{equation} and \begin{equation}\label{eq linearise eq bord prof princ psi} \color{altorange2}X^{\Lop}_{\psi}\,a^1_{\psi,\lambda} +\indicatrice_{\lambda=k\lambda_{\psi}}\,\Gamma^{\psi}\,ik\,\big(\sigma^1_{\nu,2,k}\big)_{|x_d=0}\,\bar{a}^1_{\phi,-\lambda_{\phi}k}=i\lambda\,b_{\psi}\cdot B\,\big(U^2_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0}. 
\end{equation} There is no term in $ G $ in equation \eqref{eq linearise eq bord prof princ phi} since we linearize around the solution given by Proposition \ref{prop exist sol part} corresponding to the boundary term $ H=0 $, and we study the influence of a small boundary term $ H $ on the leading amplitudes $ \sigma^1 $. The equations for the boundary phase $ \phi $ are decoupled from the others, so they can be solved using, for example, \cite[Theorem 1.10]{CoulombelWilliams2017Mach}. From \eqref{eq linearise eq bord prof princ phi}, along with the initial condition $ \big(a^1_{\phi,\lambda}\big)_{|t\leq 0}=0 $, we obtain $ a^1_{\phi,\lambda}=0 $ for $ \lambda\in\Z^* $. Using boundary condition \eqref{eq prof princ bord phi} as well as the initial condition $ \big(\sigma^1_{\phi,j,\lambda}\big)_{|t\leq 0}=0 $, we also get $ \sigma^1_{\phi,j,\lambda}=0 $ for $ j=1,3 $ and $ \lambda\in\Z^* $. Summing up, we have \begin{equation}\label{eq linearise sigma a phi zero} \sigma_{\phi,j,\lambda}^1=0,\quad a^1_{\phi,\lambda}=0,\quad \forall\lambda\in\Z^*,\forall j=1,3. \end{equation} The other frequencies $ \psi_1 $, $ \psi_2 $ and $ \nu_2 $ are fully coupled through equations \eqref{eq linearise eq prof princ} and \eqref{eq linearise eq bord prof princ psi}, and to determine the function $ a^1_{\psi} $ on the boundary we need the outgoing amplitude $ \sigma_{\psi,2,\lambda}^2 $.
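The explicit resolutions of this section rely on the following elementary solution formula, stated here schematically: for a transport operator $ X=\partial_t+\mathbf{v}\cdot\nabla $ with constant velocity $ \mathbf{v} $, the solution of
\begin{equation*}
X\,u=f,\qquad u_{|t\leq 0}=0,
\end{equation*}
is given by the Duhamel formula $ u(t,x)=\int_0^t f\big(s,x-\mathbf{v}\,(t-s)\big)\,ds $, so that a zero source term together with zero initial data forces $ u $ to vanish identically.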
\bigskip In the same way as for the leading profile, for the first corrector, starting from \eqref{eq 1cor evol sigma phi psi nu}, we get, for the phases $ \psi_1 $, $ \psi_2 $ and $ \nu_2 $, \begin{subequations}\label{eq linearise evol psi nu} \begin{align} \color{altblue} X_{\psi,1}\,\sigma^2_{\psi,1,\lambda}+\indicatrice_{\lambda=k\lambda_{\psi}}\,J^{\phi,1}_{\nu,2}\,ik\,\big(\bar{\sigma}^1_{\phi,1,-k\lambda_{\phi}}\,\sigma^2_{\nu,2,-k}+\bar{\sigma}^2_{\phi,1,-k\lambda_{\phi}}\,\sigma^1_{\nu,2,-k}\big)&\color{altblue}\\\nonumber&\color{altblue}\hspace{-40pt}=\mbox{\emph{terms in}}\,\big(U_1,(I-P)\,U_2\big),\\ \color{altpink} X_{\psi,2}\,\sigma^2_{\psi,2,\lambda}+\indicatrice_{\lambda=k\lambda_{\psi}}J^{\phi,3}_{\nu,2}\,ik\,\big(\bar{\sigma}^1_{\phi,3,-k\lambda_{\phi}}\,\sigma^2_{\nu,2,-k}+\bar{\sigma}^2_{\phi,3,-k\lambda_{\phi}}\,\sigma^1_{\nu,2,-k}\big)& \\\nonumber&\color{altpink}\hspace{-40pt}=\mbox{\emph{terms in}}\,\big(U_1,(I-P)\,U_2\big),\\ \color{altpurple}X_{\nu,2}\,\sigma^2_{\nu,2,\lambda}+J^{\phi,1}_{\psi,1}\,i\lambda\,\big(\bar{\sigma}^1_{\phi,1,-\lambda\lambda_{\phi}}\,\sigma^2_{\psi,1,-\lambda\lambda_{\psi}}+\bar{\sigma}^2_{\phi,1,-\lambda\lambda_{\phi}}\,\sigma^1_{\psi,1,-\lambda\lambda_{\psi}}\big)& \\\nonumber\color{altpurple}&\color{altpurple}\hspace{-289pt}+J^{\phi,3}_{\psi,2}\,i\lambda\,\big(\bar{\sigma}^1_{\phi,3,-\lambda\lambda_{\phi}}\,\sigma^2_{\psi,2,-\lambda\lambda_{\psi}}+\bar{\sigma}^2_{\phi,3,-\lambda\lambda_{\phi}}\,\sigma^1_{\psi,2,-\lambda\lambda_{\psi}}\big)=\mbox{\emph{terms in}}\,\big(U_1,(I-P)\,U_2\big), \end{align} \end{subequations} where $ \mbox{\emph{terms in}}\,\big(U_1,(I-P)\,U_2\big) $ refers to quadratic terms involving $ U_1 $ or the nonpolarized parts of $ U_2 $, both taken at frequencies $ \zeta_j $, with $ \zeta=\phi,\psi,\nu $ and $ j=1,3 $; these terms therefore vanish if the corresponding profiles are zero.
The equations for the other profiles $ \sigma^2_{\zeta,j,\lambda} $, for $ \zeta\in\F_{b}\privede{\psi,\nu} $ and $ (\zeta,j)=(\psi,3),(\nu,1),(\nu,3) $, are of no interest here, so we do not write them down. For the boundary term $ a^2_{\psi} $, we have, according to \eqref{eq 2cor eq evol a psi}, \begin{align}\label{eq linearise eq bord 1cor} &\quad\color{altorange2}X^{\Lop}_{\psi}\,a^2_{\psi,\lambda} +\tilde{X}^{\Lop}_{\psi}\,\big(\sigma_{\psi,2,\lambda}^2\big)_{|x_d=0}\\\nonumber &\quad\color{altorange2}+\indicatrice_{\lambda=k\lambda_{\psi}}\,\Gamma^{\psi}\,ik\,\big\lbrace\big(\sigma_{\nu,2,k}^1\big)_{|x_d=0}\,\bar{a}^2_{\phi,-\lambda_{\phi}k}+\big(\sigma_{\nu,2,k}^2\big)_{|x_d=0}\,\bar{a}^1_{\phi,-\lambda_{\phi}k}\big\rbrace\\\nonumber &\color{altorange2}=i\lambda\,b_{\psi}\cdot B\,\big(U^{3,\osc}_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0}-i\lambda\,b_{\psi}\cdot H_{\lambda}\\\nonumber &\quad\color{altorange2}+\partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,(I-P)\,U_2,(P\,U_2)_{\zeta\neq \phi,\psi,\nu},U_2^*\big)_{|x_d,\chi_d=0}. \end{align} Here again, the equations for $ \psi_1 $, $ \psi_2 $ and $ \nu_2 $ are coupled. As the coupling is difficult to handle, especially with the term $ i\lambda\,b_{\psi}\cdot B\,\big(U^{3,\osc}_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0} $, we will simplify equations \eqref{eq linearise evol psi nu} and \eqref{eq linearise eq bord 1cor} to reduce the coupling, in order to study the instability. We have thus obtained the system \eqref{eq linearise eq prof princ}, \eqref{eq linearise eq bord prof princ psi}, \eqref{eq linearise evol psi nu} and \eqref{eq linearise eq bord 1cor}, which is the linearization of the system of equations \eqref{eq systeme sigma} and \eqref{eq systeme a} around the particular solution of \eqref{eq systeme moyenne}, \eqref{eq systeme sigma hors cas part}, \eqref{eq systeme sigma}, \eqref{eq systeme sigma bord} and \eqref{eq systeme a} given by Proposition \ref{prop exist sol part}, for which the boundary term $ H $ is zero.
\subsection{Instability on simplified models} The aim of this section is to show that the system \eqref{eq systeme 1} considered in this article is unstable, namely that a small perturbation $ H $ of the boundary term may affect the leading order. More precisely, we prove that there exists a boundary term $ H $ such that, for simplified models of the linearized system \eqref{eq linearise eq prof princ}, \eqref{eq linearise eq bord prof princ psi}, \eqref{eq linearise evol psi nu} and \eqref{eq linearise eq bord 1cor}, the leading perturbations $ \sigma^1_{\psi,j,\lambda} $ and $ \sigma^1_{\nu,j,\lambda} $ are not all zero. For this purpose, we argue by contradiction and assume that, for every boundary term $ H $, all amplitudes $ \sigma^1_{\psi,j,\lambda} $ and $ \sigma^1_{\nu,j,\lambda} $, for $ j=1,3 $ and $ \lambda\in\Z^* $, are zero, and we seek a contradiction. In particular, according to \eqref{eq prof princ bord psi}, this implies that $ a^1_{\psi,\lambda}=0 $ for all $ \lambda\in\Z^* $. Recall that we have shown above that, for the linearized system, the profiles $ \sigma^1_{\phi,j,\lambda} $ for $ j=1,3 $ and $ \lambda\in\Z^* $ are zero. Therefore, all leading profiles of frequencies $ \zeta_j $, $ \zeta=\phi,\psi,\nu $ and $ j=1,3 $ are zero. Furthermore, according to formula \eqref{eq 1cor nonpola part U 2 zeta} giving the nonpolarized parts of the first corrector, the nonpolarized parts of $ U_2 $ for frequencies $ \zeta_j $, $ \zeta=\phi,\psi,\nu $ and $ j=1,3 $ are then also zero. We can show in a similar manner that the mean value $ U_2^* $ and the polarized parts for frequencies other than $ \zeta_j $, $ \zeta=\phi,\psi,\nu $ and $ j=1,3 $, are also zero.
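Consequently, only the polarized part of the corrector $ U^2_{\psi,2,\lambda} $ remains in the right-hand side of \eqref{eq linearise eq bord prof princ psi}; writing this polarization schematically as $ U^2_{\psi,2,\lambda}=\sigma^2_{\psi,2,\lambda}\,r_{\psi,2} $, the right-hand side reads
\begin{equation*}
i\lambda\,b_{\psi}\cdot B\,\big(U^2_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0}=i\lambda\,\big(b_{\psi}\cdot B\,r_{\psi,2}\big)\,\big(\sigma^2_{\psi,2,\lambda}\big)_{|x_d=0},
\end{equation*}
while its left-hand side vanishes according to the previous observations.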
Therefore, equation \eqref{eq linearise eq bord prof princ psi} now reads \begin{equation}\label{eq insta eq trace psi 2 = 0} \big(\sigma^2_{\psi,2,\lambda}\big)_{|x_d=0}=0, \end{equation} since $ U^2_{\psi,2,\lambda} $ is polarized, $ a^1_{\psi,\lambda} $ is zero for $ \lambda\in\Z^* $ and the scalar $ b_{\psi}\cdot B\,r_{\psi,2} $ is nonzero. Equation \eqref{eq insta eq trace psi 2 = 0} is the condition which we wish to contradict. Since the general linearized equations \eqref{eq linearise eq prof princ}, \eqref{eq linearise eq bord prof princ psi}, \eqref{eq linearise evol psi nu} and \eqref{eq linearise eq bord 1cor} are too difficult to handle at this stage, two simplified models are investigated. \subsubsection{First simplified model} We first focus on a very simple model, for which the computations can easily be carried through to the end, and which reflects the general idea of the instability mechanism. In equations \eqref{eq linearise evol psi nu}, most of the resonant terms, which couple the equations, are removed. We also use the fact that both the leading profiles and the nonpolarized parts of $ U_2 $ for frequencies $ \zeta_j $, $ \zeta=\phi,\psi,\nu $ and $ j=1,3 $, are zero.
In the end we retain, for the phases $ \psi_1 $, $ \psi_2 $ and $ \nu_2 $, the incoming evolution equation \begin{subequations}\label{eq insta eq 1cor BB1} \begin{align} \color{altblue}\label{eq insta eq 1cor BB1 psi 1} X_{\psi,1}\,\sigma^2_{\psi,1,\lambda}&\color{altblue}=0, \intertext{and the two outgoing evolution equations with resonance terms} \color{altpink}\label{eq insta eq 1cor BB1 psi2} X_{\psi,2}\,\sigma^2_{\psi,2,\lambda}+\indicatrice_{\lambda=k\lambda_{\psi}}\,J^{\phi,3}_{\nu,2}\,ik\,\bar{\sigma}^1_{\phi,3,-k\lambda_{\phi}}\,\sigma^2_{\nu,2,-k}&\color{altpink}=0,\\[5pt] \color{altpurple}\label{eq insta eq 1cor BB1 nu2} X_{\nu,2}\,\sigma^2_{\nu,2,\lambda}+J^{\phi,1}_{\psi,1}\,i\lambda\,\bar{\sigma}^1_{\phi,1,-\lambda\lambda_{\phi}}\,\sigma^2_{\psi,1,-\lambda\lambda_{\psi}}&\color{altpurple}=0. \end{align} \end{subequations} As for the boundary amplitudes $ a^2_{\psi,\lambda} $, $ \lambda\in\Z^* $, we remove all traces of the first and second profiles as well as, as before, the term $ i\lambda\,b_{\psi}\cdot B\,\big(U^{3,\osc}_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0} $, and we use the fact that the terms of $ \partial_{z,\theta}\,\mbox{\emph{terms in}}\,\big(U_1,(I-P)\,U_2,(P\,U_2)_{\zeta\neq \phi,\psi,\nu},U_2^*\big)_{|x_d,\chi_d=0} $ are zero, to retain the simple forced transport equation \begin{equation}\label{eq insta eq 1cor apsi evol BB1} \color{altorange2}X^{\Lop}_{\psi}\,a^2_{\psi,\lambda} =-i\lambda\,b_{\psi}\cdot H_{\lambda}. \end{equation} According to the above remarks, the boundary condition \eqref{eq 1cor trace psi} for the incoming amplitude $ \sigma_{\psi,1,\lambda}^2 $ now reads \begin{equation}\label{eq insta eq 1cor apsi bord BB1} \color{altorange2}\big(\sigma_{\psi,1,\lambda}^2\big)_{|x_d=0}\,r_{\psi,1}=a^2_{\psi,\lambda}\,e_{\psi,1}.
\end{equation} Although the system \eqref{eq insta eq 1cor BB1}, \eqref{eq insta eq 1cor apsi evol BB1} and \eqref{eq insta eq 1cor apsi bord BB1} is coupled, it is in upper triangular form, so it can be solved using explicit formulas, since it only involves transport equations with constant coefficients. This is now made precise with the proof of the following result. \begin{theorem}\label{th il existe H BBmod 1} There exists a boundary term $ H $ in $ L^2\big((-\infty,T]_t\times\R^{d-1}_y\times\T_{\theta_2}\big) $ such that, if the sequence $ (\sigma^2_{\psi,1,\lambda},\sigma^2_{\psi,2,\lambda},\sigma^2_{\nu,2,\lambda})_{\lambda\in\Z^*} $ of tuples of $ \mathcal{C}\big(\R^+_{x_d},L^2\big((-\infty,T]_t\times\R^{d-1}_y\big)\big) $ and the sequence $ (a^2_{\psi,\lambda})_{\lambda\in\Z^*} $ of $ L^2\big((-\infty,T]_t\times\R^{d-1}_y\big) $ are solutions to the system \eqref{eq insta eq 1cor BB1}, \eqref{eq insta eq 1cor apsi evol BB1} and \eqref{eq insta eq 1cor apsi bord BB1}, then the trace $ \big(\sigma^2_{\psi,2,\lambda_{\psi}}\big)_{|x_d=0} $ is nonzero. \end{theorem} \begin{proof} We consider any boundary term $ H $ in $ L^2\big((-\infty,T]_t\times\R^{d-1}_y\times\T_{\theta_2}\big) $, and we look for an expression of the trace $ \big(\sigma^2_{\psi,2,\lambda}\big)_{|x_d=0} $ of the associated solution of \eqref{eq insta eq 1cor BB1}, \eqref{eq insta eq 1cor apsi evol BB1} and \eqref{eq insta eq 1cor apsi bord BB1}. First of all, the transport equation \eqref{eq insta eq 1cor apsi evol BB1} on the boundary $ \ensemble{x_d=0} $ can be solved to find \begin{equation*} a^2_{\psi,\lambda}(t,y)=\int_0^t -i\lambda\,b_{\psi}\cdot H_{\lambda}\big(s,y-\mathbf{v}^{\Lop}_{\psi}\,(t-s)\big)\,ds, \end{equation*} recalling the notation\footnote{Without loss of generality, we have set $ \beta_{\psi}=1 $ in Lemma \ref{lemme Lax bord}, to simplify the equations.} \begin{equation*} X^{\Lop}_{\psi}=\partial_t+\mathbf{v}^{\Lop}_{\psi}\cdot\nabla_y.
\end{equation*} According to the boundary condition \eqref{eq insta eq 1cor apsi bord BB1}, it follows, with the notation\footnote{\label{note1}We assume here that, with the notation of Definition \ref{def sortant rentrant alpha X alpha}, $ -1/\partial_{\xi}\tau_k(\eta,\xi) $ is equal to 1.} \begin{equation*} X_{\psi,1}=\partial_t-\mathbf{v}_{\psi,1}\cdot\nabla_x=:\partial_t-\mathbf{v}'_{\psi,1}\cdot\nabla_y+\partial_{x_d}, \end{equation*} and using the incoming transport equation \eqref{eq insta eq 1cor BB1 psi 1}, that \begin{equation*} \sigma_{\psi,1,\lambda}^2(t,y,x_d)= -\indicatrice_{x_d\leq t}\int_0^{t-x_d} i\lambda\,p_{\psi,1}\,b_{\psi}\cdot H_{\lambda}\Big(s,y+x_d\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(t-s)\Big)\,ds, \end{equation*} with a coefficient $ p_{\psi,1}\in\R $ such that $ e_{\psi,1}=p_{\psi,1}\,r_{\psi,1} $. To simplify notation, the coefficient $ p_{\psi,1} $ will be omitted in the following. Thus, with the notation\footnote{See footnote \ref{note1}.} \begin{equation*} X_{\nu,2}=\partial_t-\mathbf{v}_{\nu,2}\cdot\nabla_x=:\partial_t-\mathbf{v}'_{\nu,2}\cdot\nabla_y-\partial_{x_d}, \end{equation*} according to the outgoing transport equation \eqref{eq insta eq 1cor BB1 nu2}, we have \begin{align*} \sigma^2_{\nu,2,\lambda}(t,y,x_d)&=-\int_0^t J^{\phi,1}_{\psi,1}\,i\lambda\,\bar{\sigma}^1_{\phi,1,-\lambda\lambda_{\phi}}(s,x+\mathbf{v}_{\nu,2}(t-s))\,\indicatrice_{x_d\leq2s- t}\\ &\quad\times\int_0^{2s-x_d-t} i\lambda\lambda_{\psi}\,b_{\psi}\cdot H_{-\lambda\lambda_{\psi}}\Big(\tau, y+\mathbf{v}_{\nu,2}'\,(t-s)\\ &\quad+(x_d+t-s)\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(s-\tau)\Big)\,d\tau\,ds.
\end{align*} In the same way, with similar notation, the outgoing transport equation \eqref{eq insta eq 1cor BB1 psi2} leads to \begin{align*} \sigma^2_{\psi,2,\lambda}(t,y,x_d)&=\int_0^t \indicatrice_{\lambda=k\lambda_{\psi}}\,J^{\phi,3}_{\nu,2}\,ik\,\bar{\sigma}^1_{\phi,3,-k\lambda_{\phi}}\big(s,x+\mathbf{v}_{\psi,2}(t-s)\big)\\ &\quad\times\int_0^s J^{\phi,1}_{\psi,1}\,ik\,\bar{\sigma}^1_{\phi,1,k\lambda_{\phi}}\big(\tau,x+\mathbf{v}_{\psi,2}(t-s)+\mathbf{v}_{\nu,2}(s-\tau)\big)\,\indicatrice_{x_d\leq 2\tau-t}\\ &\quad\times\int_0^{2\tau-x_d-t} ik\lambda_{\psi}\,b_{\psi}\cdot H_{k\lambda_{\psi}}\Big(\sigma,y+\mathbf{v}_{\nu,2}'\,(s-\tau)+\mathbf{v}_{\psi,2}'\,(t-s) \\ &\qquad+(x_d+t-\tau)\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(\tau-\sigma)\Big)\,d\sigma\,d\tau\,ds. \end{align*} The trace of $ \sigma^2_{\psi,2,\lambda} $ on the boundary $ \ensemble{x_d=0} $ is therefore given by \begin{align}\label{eq insta trace sigma 2 lambda} \sigma^2_{\psi,2,\lambda}(t,y,0)&=-i\int_0^t\int_0^s\int_0^{2\tau-t} \indicatrice_{\lambda=k\lambda_{\psi}}\,J^{\phi,3}_{\nu,2}\,J^{\phi,1}_{\psi,1}\,k^3\,\lambda_{\psi}\,\bar{\sigma}^1_{\phi,3,-k\lambda_{\phi}}\big(s,y+\mathbf{v}_{\psi,2}(t-s)\big)\,\\\nonumber &\quad \times\bar{\sigma}^1_{\phi,1,k\lambda_{\phi}}\big(\tau,y+\mathbf{v}_{\psi,2}(t-s)+\mathbf{v}_{\nu,2}(s-\tau)\big)\\\nonumber &\quad \times b_{\psi}\cdot H_{k\lambda_{\psi}}\Big(\sigma,y+\mathbf{v}_{\nu,2}'\,(s-\tau)+\mathbf{v}_{\psi,2}'(t-s) \\\nonumber &\qquad+(t-\tau)\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(\tau-\sigma)\Big)\,d\sigma\,d\tau\,ds. \end{align} We now justify why the boundary term $ H $ can be chosen such that this trace is nonzero.
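Schematically, the computation we have just carried out follows the cascade
\begin{equation*}
H\;\longrightarrow\;a^2_{\psi}\;\longrightarrow\;\sigma^2_{\psi,1}\;\longrightarrow\;\sigma^2_{\nu,2}\;\longrightarrow\;\sigma^2_{\psi,2},
\end{equation*}
where each arrow corresponds to one equation of the simplified model \eqref{eq insta eq 1cor BB1}, \eqref{eq insta eq 1cor apsi evol BB1}, \eqref{eq insta eq 1cor apsi bord BB1}, solved through the Duhamel formula; each transport step contributes one time integral, whence the triple integral in \eqref{eq insta trace sigma 2 lambda}.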
We now focus on the trace $ \big(\sigma^2_{\psi,2,\lambda_{\psi}}\big)_{|x_d=0} $, which is given by formula \eqref{eq insta trace sigma 2 lambda} with $ \lambda=\lambda_{\psi} $ and therefore $ k=1 $, namely, \begin{align}\label{eq insta trace sigma 2 cas part} \sigma^2_{\psi,2,\lambda_{\psi}}(t,y,0)&=-i\int_0^t\int_0^s\int_0^{2\tau-t} J^{\phi,3}_{\nu,2}\,J^{\phi,1}_{\psi,1}\,\lambda_{\psi}\,\bar{\sigma}^1_{\phi,3,-\lambda_{\phi}}\big(s,y+\mathbf{v}_{\psi,2}(t-s)\big)\,\\\nonumber &\quad \times\bar{\sigma}^1_{\phi,1,\lambda_{\phi}}\big(\tau,y+\mathbf{v}_{\psi,2}(t-s)+\mathbf{v}_{\nu,2}(s-\tau)\big)\\\nonumber &\quad \times b_{\psi}\cdot H_{\lambda_{\psi}}\Big(\sigma,y+\mathbf{v}_{\nu,2}'\,(s-\tau)+\mathbf{v}_{\psi,2}'(t-s) \\\nonumber &\qquad+(t-\tau)\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(\tau-\sigma)\Big)\,d\sigma\,d\tau\,ds. \end{align} We start by constructing $ \bar{\sigma}^1_{\phi,1,\lambda_{\phi}} $ and $ \bar{\sigma}^1_{\phi,3,-\lambda_{\phi}} $ suited to our purpose. It is proven in \cite[section 2.2]{CoulombelWilliams2017Mach} that $ \bar{a}^1_{\phi,\lambda} $, solution to equation \eqref{eq systeme a phi} in the particular case where $ H $ is zero, is the solution to the following equation, for $ \lambda\in\Z^* $, \begin{equation}\label{eq insta eq transport a bar phi lambda} X^{\Lop}_{\phi}\,\bar{a}^1_{\phi,\lambda} +D^{\Lop}_{\phi}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda_2\,\bar{a}^1_{\phi,\lambda_1}\,\bar{a}^1_{\phi,\lambda_2}+i\lambda\sum_{\lambda_1+\lambda_2=\lambda}\gamma_{\phi}(\lambda_1,\lambda_2)\,\bar{a}^1_{\phi,\lambda_1}\,\bar{a}^1_{\phi,\lambda_2}=-i\lambda\,b_{\phi}\cdot G_{\lambda}.
\end{equation} We set $ G_{\lambda}=0 $ for $ \lambda\in\Z\privede{\lambda_{\phi},-\lambda_{\phi}} $ and $ G_{\lambda_{\phi}} $, $ G_{-\lambda_{\phi}} $ real, non-negative, and equal to one on the set $ [1/2,2]_{t}\times [-(h+2)\mathbf{V},(h+2)\mathbf{V}]^{d-1}_y $, where we have denoted $ \mathbf{V}:=\big|\mathbf{v}^{\Lop}_{\phi}\big| $ and where $ h\geq 1 $ is an integer chosen below. Solving the transport equation \eqref{eq insta eq transport a bar phi lambda}, we get $ \bar{a}^1_{\phi,\lambda}=0 $ for $ \lambda\in\Z\privede{\lambda_{\phi},-\lambda_{\phi}} $ and $ \bar{a}^1_{\phi,\lambda_{\phi}} $, $ \bar{a}^1_{\phi,-\lambda_{\phi}} $ real, non-negative, and greater than $ 1/2 $ on the set $ [1,2]_{t}\times [-(h+1)\mathbf{V},(h+1)\mathbf{V}]^{d-1}_y $. Now, according to the condition \eqref{eq insta sigma a bar zero} on the profiles $ \bar{\sigma}^1_{\zeta,j,\lambda} $ for $ \zeta=\psi,\nu $, $ j=1,2,3 $ and $ \lambda\in\Z^* $, there are no resonance terms in the evolution equation \eqref{eq evolution sigma 1 phi psi nu} for $ \bar{\sigma}^1_{\phi,j,\lambda} $ for $ j=1,3 $ and $ \lambda\in\Z^* $, so these profiles satisfy the following incoming transport equation \begin{subequations}\label{eq insta eq transport sigma bar phi lambda} \begin{equation} X_{\phi,j}\,\bar{\sigma}^1_{\phi,j,\lambda}+D_{\phi,j}\sum_{\lambda_1+\lambda_2=\lambda}i\lambda_2\,\bar{\sigma}^1_{\phi,j,\lambda_1}\,\bar{\sigma}^1_{\phi,j,\lambda_2}=0, \end{equation} together with the boundary condition \eqref{eq prof princ bord phi} \begin{equation} \big(\bar{\sigma}_{\phi,j,\lambda}^1\big)_{|x_d=0}\,r_{\phi,j}=\bar{a}_{\phi,\lambda}^1\,e_{\phi,j}.
\end{equation} \end{subequations} Solving system \eqref{eq insta eq transport sigma bar phi lambda}, seen as a propagation equation in the normal direction, with the notation\footnote{See footnote \ref{note1}.} \begin{equation*} X_{\phi,j}=:\partial_t-\mathbf{v}'_{\phi,j}\cdot\nabla_y+\partial_{x_d}, \end{equation*} we get $ \bar{\sigma}^1_{\phi,j,\lambda}=0 $ for $ j=1,3 $ and $ \lambda\in\Z\privede{\lambda_{\phi},-\lambda_{\phi}} $, and that $ \bar{\sigma}^1_{\phi,1,\lambda_{\phi}} $, $ \bar{\sigma}^1_{\phi,3,-\lambda_{\phi}} $ are real, non-negative, and larger than $ A/2 $ on the set $ [1+\mathbf{V}/(2\mathbf{w}),2+\mathbf{V}/(2\mathbf{W})]_t\times[-h\mathbf{V},h\mathbf{V}]_y\times[0,\mathbf{V}/\mathbf{W}]_{x_d} $, where $ A:=\min\big(|e_{\phi,1}\cdot r_{\phi,1}|,|e_{\phi,3}\cdot r_{\phi,3}|\big) $, $ \mathbf{w}:=\min\big(|\mathbf{v}'_{\phi,1}|,|\mathbf{v}'_{\phi,3}|\big) $ and $ \mathbf{W}:=\max\big(|\mathbf{v}'_{\phi,1}|,|\mathbf{v}'_{\phi,3}|\big) $. Now that $ \bar{\sigma}^1_{\phi,1,\lambda_{\phi}} $ and $ \bar{\sigma}^1_{\phi,3,-\lambda_{\phi}} $ have been constructed appropriately, we now specify a suitable choice of the boundary term $ H $. We denote $ \mathbf{t}:=1+\mathbf{V}/(2\mathbf{w}) $ and $ \mathbf{T}:=2+\mathbf{V}/(2\mathbf{W}) $, and we choose the integer $ h $ such that \begin{equation*} h\geq \frac{8(\mathbf{T}-\mathbf{t})}{\mathbf{V}}\,\max \Big(\big|\mathbf{v}'_{\psi,2}\big|,\big|\mathbf{v}'_{\nu,2}\big|,\big|\mathbf{v}'_{\psi,1}\big|,\big|\mathbf{v}^{\Lop}_{\psi}\big|\Big).
\end{equation*} We also take the boundary term $ H $ such that $ b_{\psi}\cdot H_{\lambda_{\psi}} $ is purely imaginary, with imaginary part of the same sign as $ J^{\phi,3}_{\nu,2}\,J^{\phi,1}_{\psi,1}\,\lambda_{\psi} $ and of modulus one on $ [\mathbf{t},\mathbf{T}]_t\times[-h\mathbf{V},h\mathbf{V}]_y $, namely, \begin{equation*} b_{\psi}\cdot H_{\lambda_{\psi}}=i\,\sign\big(J^{\phi,3}_{\nu,2}\,J^{\phi,1}_{\psi,1}\,\lambda_{\psi}\big)\quad \text{on}\quad [\mathbf{t},\mathbf{T}]_t\times[-h\mathbf{V},h\mathbf{V}]_y. \end{equation*} Then we note that, according to \eqref{eq insta trace sigma 2 cas part}, the trace $ \big(\sigma^2_{\psi,2,\lambda_{\psi}}\big)_{|x_d=0} $ satisfies, for $ t\in[\mathbf{t},\mathbf{T}] $ and $ y\in[-h\mathbf{V}/2,h\mathbf{V}/2] $, \begin{align*} \sigma^2_{\psi,2,\lambda_{\psi}}(t,y,0)&\geq -i\int_{\mathbf{t}}^t\int_{\mathbf{t}}^s\int_{\mathbf{t}}^{2\tau-t} J^{\phi,3}_{\nu,2}\,J^{\phi,1}_{\psi,1}\,\lambda_{\psi}\,\bar{\sigma}^1_{\phi,3,-\lambda_{\phi}}\big(s,y+\mathbf{v}_{\psi,2}(t-s)\big)\,\\\nonumber &\quad \times\bar{\sigma}^1_{\phi,1,\lambda_{\phi}}\big(\tau,y+\mathbf{v}_{\psi,2}(t-s)+\mathbf{v}_{\nu,2}(s-\tau)\big)\\\nonumber &\quad \times b_{\psi}\cdot H_{\lambda_{\psi}}\Big(\sigma,y+\mathbf{v}_{\nu,2}'\,(s-\tau)+\mathbf{v}_{\psi,2}'(t-s) \\\nonumber &\qquad+(t-\tau)\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(\tau-\sigma)\Big)\,d\sigma\,d\tau\,ds\\ &\geq \big|J^{\phi,3}_{\nu,2}\,J^{\phi,1}_{\psi,1}\,\lambda_{\psi}\big|\frac{A^2}{4}, \end{align*} since, for $ \mathbf{t}\leq \sigma\leq 2\tau-t $, $ \mathbf{t}\leq \tau \leq s \leq t \leq \mathbf{T} $ and $ y\in[-h\mathbf{V}/2,h\mathbf{V}/2] $, according to the assumption on $ h $, we have $ \sigma,\tau,s\in[\mathbf{t},\mathbf{T}] $ and \begin{align*} y+\mathbf{v}_{\psi,2}(t-s)&\in [-h\mathbf{V},h\mathbf{V}]_y\times[0,\mathbf{V}/\mathbf{W}]_{x_d} ,\\ y+\mathbf{v}_{\psi,2}(t-s)+\mathbf{v}_{\nu,2}(s-\tau)&\in
[-h\mathbf{V},h\mathbf{V}]_y\times[0,\mathbf{V}/\mathbf{W}]_{x_d} ,\\ y+\mathbf{v}_{\nu,2}'\,(s-\tau)+\mathbf{v}_{\psi,2}'(t-s)&\\ +(t-\tau)\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(\tau-\sigma)&\in [-h\mathbf{V},h\mathbf{V}]_y, \end{align*} because $ \mathbf{T}-\mathbf{t}\leq \mathbf{V}/(2\mathbf{W}) $, up to shrinking $ \mathbf{w} $. This concludes the proof of Theorem \ref{th il existe H BBmod 1}, since we have proved that there exists a choice of $ H $ such that the trace $ \big(\sigma^2_{\psi,2,\lambda_{\psi}}\big)_{|x_d=0} $ is nonzero. \end{proof} Theorem \ref{th il existe H BBmod 1}, stating that the trace $ \big(\sigma^2_{\psi,2,\lambda_{\psi}}\big)_{|x_d=0} $ is nonzero, contradicts condition \eqref{eq insta eq trace psi 2 = 0}, so the instability is proven. Indeed, we have assumed that, for all boundary terms $ H $, the associated amplitudes $ \sigma^1_{\psi,j,\lambda} $ and $ \sigma^1_{\nu,j,\lambda} $ for $ j=1,3 $ and $ \lambda\in\Z^* $ are zero, and, for the simplified model equations \eqref{eq insta eq 1cor BB1}, \eqref{eq insta eq 1cor apsi evol BB1} and \eqref{eq insta eq 1cor apsi bord BB1}, we have found a contradiction. Therefore, for this simplified model, there exists a boundary term $ H $ such that the leading profiles $ \sigma^1_{\psi,j,\lambda} $ and $ \sigma^1_{\nu,j,\lambda} $ for $ j=1,3 $ and $ \lambda\in\Z^* $ are not all zero. This proves that the boundary term $ H $ may affect the leading order, which constitutes an instability. \subsubsection{Second simplified model} The second simplified model that we shall consider features additional resonance coupling terms which introduce further difficulties.
This time we keep the formulation of the simplified models of section \ref{section existence} and multiply equations \eqref{eq linearise evol psi nu} and \eqref{eq linearise eq bord 1cor} by $ e^{i\lambda\Theta} $, to obtain, for the phases $ \psi_1 $, $ \psi_2 $ and $ \nu_2 $, the incoming evolution equation with a resonance term \begin{subequations}\label{eq insta eq 1cor BB2} \begin{align} \color{altblue}\label{eq insta eq 1cor BB2 psi 1} X_{\psi,1}\,\sigma^2_{\psi,1}+\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\nu,2}\big(\bar{\sigma}^1_{\phi,1},\sigma^2_{\nu,2}\big)&\color{altblue}=0, \intertext{and the two outgoing evolution equations with resonance terms} \color{altpink}\label{eq insta eq 1cor BB2 psi2} X_{\psi,2}\,\sigma^2_{\psi,2}+\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\nu,2}\big(\bar{\sigma}^1_{\phi,3},\sigma^2_{\nu,2}\big)&\color{altpink}=0,\\[5pt] \color{altpurple}\label{eq insta eq 1cor BB2 nu2} X_{\nu,2}\,\sigma^2_{\nu,2}+\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\big(\bar{\sigma}^1_{\phi,1},\sigma^2_{\psi,1}\big)+\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\psi,2}\big(\bar{\sigma}^1_{\phi,3},\sigma^2_{\psi,2}\big)&\color{altpurple}=0, \end{align} \end{subequations} where the operators $ \mathbb{J}_{\zeta_2,j_2}^{\zeta_1,j_1} $ have been defined in \eqref{eq ana def J}, and the equations on the boundary are kept as in the first simplified model \eqref{eq insta eq 1cor apsi evol BB1} and \eqref{eq insta eq 1cor apsi bord BB1}, namely, \begin{align}\label{eq insta eq 1cor apsi evol BB2} \color{altorange2}X^{\Lop}_{\psi}\,a^2_{\psi} &\color{altorange2}=-b_{\psi}\cdot \partial_{\Theta}\,H,\\[5pt]\label{eq insta eq 1cor apsi bord BB2} \color{altorange2}\big(\sigma_{\psi,1}^2\big)_{|x_d=0}\,r_{\psi,1}&\color{altorange2}=a^2_{\psi}\,e_{\psi,1}. \end{align} Note that, this time, the simplified model features all the resonance terms of the general equations.
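Recall that, in this formulation, the profiles are identified with their Fourier series in the variable $ \Theta $: schematically,
\begin{equation*}
\sigma^2_{\psi,1}(t,x,\Theta)=\sum_{\lambda\in\Z^*}\sigma^2_{\psi,1,\lambda}(t,x)\,e^{i\lambda\Theta},\qquad H(t,y,\Theta)=\sum_{\lambda\in\Z^*}H_{\lambda}(t,y)\,e^{i\lambda\Theta},
\end{equation*}
so that the operator $ \partial_{\Theta} $ acts on the $ \lambda $-th Fourier mode as multiplication by $ i\lambda $, linking this formulation with its mode-by-mode counterpart of the first simplified model.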
The obtained system is no longer triangular since, compared with \eqref{eq insta eq 1cor BB1}, the additional resonance terms in \eqref{eq insta eq 1cor BB2} couple each equation with the others, so we cannot solve it as a sequence of transport equations as before. We use a perturbation method and solve equations \eqref{eq insta eq 1cor BB2} with a fixed point theorem. We start by solving \eqref{eq insta eq 1cor apsi evol BB2} as a transport equation, and then we deduce, using the incoming transport equation \eqref{eq insta eq 1cor BB2 psi 1} and boundary condition \eqref{eq insta eq 1cor apsi bord BB2}, an expression of $ \sigma_{\psi,1}^2 $ depending on $ \sigma^2_{\nu,2} $. We use this expression in \eqref{eq insta eq 1cor BB2 nu2} to obtain an equation in $ \sigma^2_{\nu,2} $ with a source term depending only on $ \sigma^2_{\psi,2} $. This equation is solved with a fixed point method, using the fact that the perturbation term involving $ \sigma^2_{\nu,2} $ is ``small'', in a convenient topology, compared with the transport term, and we get an expression of $ \sigma^2_{\nu,2} $ depending on $ \sigma^2_{\psi,2} $. This expression is finally used in \eqref{eq insta eq 1cor BB2 psi2}, which is solved with the same fixed point method. The result is the following. \begin{theorem}\label{th il existe H BBmod 2} There exists a boundary term $ H $ in $ \mathcal{C}\big([0,T]_t,H^{\infty}(\R^{d-1}_y\times\T_{\theta_2})\big) $ such that, if $ \sigma^2_{\psi,1}, \sigma^2_{\psi,2},\sigma^2_{\nu,2} $ in $ \mathcal{C}^1\big([0,T]_t,H^{\infty}(\R^{d-1}_y\times\R^+_{x_d}\times\T_{\Theta})\big) $ and $ a^2_{\psi} $ in $ \mathcal{C}^1\big([0,T]_t,H^{\infty}(\R^{d-1}_y\times\T_{\Theta})\big) $ are solutions to the system \eqref{eq insta eq 1cor BB2}, \eqref{eq insta eq 1cor apsi evol BB2} and \eqref{eq insta eq 1cor apsi bord BB2}, then the trace $ \big(\sigma^2_{\psi,2}\big)_{|x_d=0} $ is nonzero.
\end{theorem} \begin{proof} As for the first simplified model, from equation \eqref{eq insta eq 1cor apsi evol BB2}, reusing the previous notation, we get \begin{equation*} a^2_{\psi}(t,y,\Theta)=-\int_0^t b_{\psi}\cdot \partial_{\Theta}\,H\big(s,y-\mathbf{v}^{\Lop}_{\psi}\,(t-s),\Theta\big)\,ds. \end{equation*} Then system \eqref{eq insta eq 1cor BB2 psi 1}, \eqref{eq insta eq 1cor apsi bord BB2}, seen as a transport equation with a source term depending on $ \sigma^2_{\nu,2} $, leads to \begin{align*} \sigma_{\psi,1}^2(t,y,x_d,\Theta)&= -\indicatrice_{x_d\leq t}\int_0^{t-x_d}b_{\psi}\cdot\partial_{\Theta}\, H\Big(s,y+x_d\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(t-s),\Theta\Big)\,ds\\ &\quad+\int_{\max(0,t-x_d)}^t\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\nu,2}\big(\bar{\sigma}^1_{\phi,1},\sigma^2_{\nu,2}\big)\big(s,x+\mathbf{v}_{\psi,1}\,(t-s),\Theta\big)\,ds. \end{align*} Therefore, equation \eqref{eq insta eq 1cor BB2 nu2} now reads \begin{subequations}\label{eq insta eq 1cor nu2bis BB2} \begin{align} \label{eq insta eq 1cor nu2bis BB2 1} &\color{altpurple}\quad X_{\nu,2}\,\sigma^2_{\nu,2}+\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\psi,2}\big(\bar{\sigma}^1_{\phi,3},\sigma^2_{\psi,2}\big)\\\label{eq insta eq 1cor nu2bis BB2 1 2} &\color{altpurple}-\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\left[\bar{\sigma}^1_{\phi,1},\indicatrice_{x_d\leq t}\int_0^{t-x_d}b_{\psi}\cdot\partial_{\Theta}\, H\Big(s,y+x_d\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(t-s),\Theta\Big)\,ds\right]\\\label{eq insta eq 1cor nu2bis BB2 1 3} &\color{altpurple}+\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\left[\bar{\sigma}^1_{\phi,1},\int_{\max(0,t-x_d)}^t\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\nu,2}\big(\bar{\sigma}^1_{\phi,1},\sigma^2_{\nu,2}\big)\big(s,x+\mathbf{v}_{\psi,1}\,(t-s),\Theta\big)\,ds\right]=0.
\end{align} \end{subequations} This is a transport equation, but with a perturbation term \eqref{eq insta eq 1cor nu2bis BB2 1 3} depending on the unknown $ \sigma^2_{\nu,2} $. It is solved using the following result. For $ s\in[0,+\infty) $, we denote by $ H^{s} $ the Sobolev space $ H^{s}(\R^{d-1}_y\times\R^+_{x_d}\times\T_{\Theta}) $ of regularity $ s $, and $ H^{\infty}:=\bigcap_{s\geq 0}H^s $. \begin{lemma}\label{lemma solu Xu+ int u =0} There exists $ T>0 $ such that, for any function $ f $ in $ \mathcal{C}([0,T],H^{\infty}) $, the transport equation \begin{subequations}\label{eq insta eq type A + epsilon B} \begin{align}\nonumber X_{\nu,2}\,\sigma^2_{\nu,2}&\\\label{eq insta eq type A + epsilon B evol} +\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\Big[\bar{\sigma}^1_{\phi,1},\int_{\max(0,t-x_d)}^t\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\nu,2}\big(\bar{\sigma}^1_{\phi,1},\sigma^2_{\nu,2}\big)\big(s,x+\mathbf{v}_{\psi,1}\,(t-s),\Theta\big)\,ds\Big]&=f(t,x),\\\label{eq insta eq type A + epsilon B init} \big(\sigma^2_{\nu,2}\big)_{|t=0}&=0, \end{align} \end{subequations} admits a unique solution $ \sigma^2_{\nu,2} $ in $ \mathcal{C}^1([0,T],H^{\infty}) $. If, for $ f $ in $ \mathcal{C}([0,T],H^{\infty}) $, we denote by $ \Psi f $ the solution $ \sigma^2_{\nu,2} $ of \eqref{eq insta eq type A + epsilon B} in $ \mathcal{C}^1([0,T],H^{\infty}) $, then, for any $ s\geq 0 $, there exists $ C_s>0 $ such that for $ f $ in $ \mathcal{C}([0,T],H^{s}) $, we have \begin{equation}\label{eq insta est Psi f} \norme{\Psi f}_{\mathcal{C}([0,T],H^s)}\leq C_s\,T\,\norme{f}_{\mathcal{C}([0,T],H^s)}.
\end{equation} \end{lemma} Before proving Lemma \ref{lemma solu Xu+ int u =0}, we prove the following preliminary result, asserting that the operators $ u\mapsto \partial_{\Theta}\,\mathbb{J}^{\phi,j}_{\zeta,k}\big[\bar{\sigma}^1_{\phi,j},u\big] $ for $ (j,\zeta,k)\in\ensemble{(1,\psi,1),(1,\nu,2),(3,\nu,2),(3,\psi,2)} $ are bounded from $ \mathcal{C}([0,T],H^s) $ to itself, for $ s\in[0,+\infty) $. \begin{lemma}\label{lemma insta J per bilin } For $ s\in[0,+\infty) $ and $ \bar{\sigma}^1_{\phi,1},\bar{\sigma}^1_{\phi,3}\in\mathcal{C}([0,T],H^{\infty}) $, there exists $ C_s>0 $ such that for $ u $ in $ \mathcal{C}([0,T],H^s) $, the functions $ \partial_{\Theta}\,\mathbb{J}^{\phi,j}_{\zeta,k}\big[\bar{\sigma}^1_{\phi,j},u\big] $ for $ (j,\zeta,k)\in\ensemble{(1,\psi,1),(1,\nu,2),(3,\nu,2),(3,\psi,2)} $ belong to $ \mathcal{C}([0,T],H^s) $ and satisfy \begin{equation}\label{eq insta est J per bilin} \norme{\partial_{\Theta}\,\mathbb{J}^{\phi,j}_{\zeta,k}\big[\bar{\sigma}^1_{\phi,j},u\big]}_{\mathcal{C}([0,T],H^s)}\leq C_s \norme{u}_{\mathcal{C}([0,T],H^s)}. \end{equation} \end{lemma} \begin{proof} We give the proof for the operator $ u\mapsto \partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\big[\bar{\sigma}^1_{\phi,1},u\big] $, namely, $ (j,\zeta,k)=(1,\psi,1) $. According to the expression \eqref{eq ana def J} of the operator $ \mathbb{J}^{\phi,1}_{\psi,1} $, we want to estimate, for $ t\in[0,T] $, the following quantity: \begin{equation*} \norme{\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\big[\bar{\sigma}^1_{\phi,1},u\big](t)}_{H^s}=\norme{(x,\Theta)\mapsto J^{\phi,1}_{\psi,1}\sum_{\lambda\in\Z^*}\lambda\,\bar{\sigma}^1_{\phi,1,\lambda}(t,x)\,u_{\lambda}(t,x)\,e^{i\lambda\Theta}}_{H^s}.
\end{equation*} Arguing as in the proof of estimate \eqref{eq ana inter 5}, we get \begin{align*} &\norme{\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\big[\bar{\sigma}^1_{\phi,1},u\big](t)}_{H^s}\\ &\qquad\leq\norme{(x,\Theta)\mapsto J_{\zeta_1,j_1}^{\zeta_2,j_2}\sum_{\lambda\in\Z^*}\lambda\,\bar{\sigma}^1_{\phi,1,\lambda}(t,x)\,e^{i\lambda\Theta}}_{H^s}\norme{(x,\Theta)\mapsto \sum_{\lambda\in\Z^*}u_{\lambda}(t,x)\,e^{i\lambda\Theta}}_{H^s}\\ &\qquad\leq C \norme{\bar{\sigma}^1_{\phi,1}(t)}_{H^{s+1}}\norme{u(t)}_{H^s}, \end{align*} which, taking the supremum in $ t\in[0,T] $, leads to the sought estimate \eqref{eq insta est J per bilin}, since $ \bar{\sigma}^1_{\phi,1} $ is in $ \mathcal{C}([0,T],H^{s+1}) $ according to Proposition \ref{prop exist sol part}. This is sufficient since we do not seek a tame estimate here. \end{proof} \begin{proof}[Proof \emph{(Lemma \ref{lemma solu Xu+ int u =0})}] From now on we fix $ s\geq 0 $, and the aim is to use the Banach fixed point theorem in the Banach space $ \mathcal{C}([0,T],H^s) $. For $ v $ in $ \mathcal{C}([0,T],H^s) $, we denote by $ \Phi\,v $ the solution in $ \mathcal{C}([0,T],H^s) $ of \begin{equation*} \begin{cases} X_{\nu,2}\,u+v=0,\\ u_{|t=0}=0, \end{cases} \end{equation*} which is therefore given by \begin{equation*}\label{eq insta expr Phi} \Phi\,v(t,x,\Theta)=-\int_0^tv\big(s,x+\mathbf{v}_{\nu,2}\,(t-s),\Theta\big)\,ds. \end{equation*} Note that $ \Phi $ is continuous from $ \mathcal{C}([0,T],H^s) $ to itself, and satisfies \begin{equation}\label{eq insta est Phi} \norme{\Phi\,v}_{\mathcal{C}([0,T],H^s)}\leq T \norme{v}_{\mathcal{C}([0,T],H^s)}.
\end{equation} Now, if $ \sigma^2_{\nu,2} $ is a solution to \eqref{eq insta eq type A + epsilon B}, it is in this notation a fixed point of the map $ \mathbf{F} $, where, for $ w $ in $ \mathcal{C}([0,T],H^{s}) $, $ \mathbf{F}(w) $ is defined by \begin{align*} \mathbf{F} (w) : (t,x,\Theta) \mapsto \Phi\,\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\Big[\bar{\sigma}^1_{\phi,1}, \int_{\max(0,t-x_d)}^t\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\nu,2}\big(\bar{\sigma}^1_{\phi,1},w\big)\big(s,x+\mathbf{v}_{\psi,1}\,(t-s),\Theta\big)\,ds\Big](t,x,\Theta)\\ -\Phi\,f(t,x,\Theta). \end{align*} We now derive an estimate on the difference $ \mathbf{F}(w) - \mathbf{F}(w') $ for $ w,w' $ in $ \mathcal{C}([0,T],H^s) $. By linearity of the operators, we have \begin{equation*} \mathbf{F}(w) - \mathbf{F}(w') =\Phi\,\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\Big[\bar{\sigma}^1_{\phi,1},\int_{\max(0,t-x_d)}^t\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\nu,2}\big(\bar{\sigma}^1_{\phi,1},w-w'\big)\big(s,x+\mathbf{v}_{\psi,1}\,(t-s),\Theta\big)\,ds\Big]. \end{equation*} Therefore, according to estimates \eqref{eq insta est Phi} and \eqref{eq insta est J per bilin}, we have \begin{align*} &\norme{\mathbf{F}(w) - \mathbf{F}(w')}_{\mathcal{C}([0,T],H^s)}\\ &\qquad\leq C_s\,T\,\norme{\int_{\max(0,t-x_d)}^t\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\nu,2}\big(\bar{\sigma}^1_{\phi,1},w-w'\big)\big(s,x+\mathbf{v}_{\psi,1}\,(t-s),\Theta\big)\,ds}_{\mathcal{C}([0,T],H^s)}\\ &\qquad\leq C_s\,T^2\,\norme{\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\nu,2}\big(\bar{\sigma}^1_{\phi,1},w-w'\big)}_{\mathcal{C}([0,T],H^s)}\\ &\qquad \leq C_s^2\,T^2\norme{w-w'}_{\mathcal{C}([0,T],H^s)}. \end{align*} Therefore, for $ T>0 $ small enough, $ \mathbf{F} $ is a contraction of $ \mathcal{C}([0,T],H^s) $, and the Banach fixed point theorem gives a unique solution $ \Psi f $ in $ \mathcal{C}([0,T],H^s) $.
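The contraction mechanism at work here can be illustrated numerically in a toy setting. The following Python sketch is an assumed simplification, not the paper's operators: the transport field is $X=\partial_t+c\,\partial_x$ on the torus, the $\mathbb{J}$ terms are replaced by a bounded integral perturbation $K$, and the data $f$, $a$ are made up. It runs the Picard iteration $w\mapsto\Phi(K[w])-\Phi f$ and records the sup-norms of successive differences, which decay geometrically for $T$ small.

```python
import numpy as np

# Toy model (assumed simplification): solve  X u + K[u] = f,  u|_{t=0} = 0,
# with X = d/dt + c d/dx on the torus and K[u](t,x) = a(x) int_0^t u(s,x) ds,
# by the Picard iteration u_{n+1} = F(u_n),  F(w) = Phi(K[w]) - Phi(f),  where
# Phi v(t,x) = -int_0^t v(s, x + c(t-s)) ds integrates along the characteristics
# of X, so that ||F(w) - F(w')|| <= C T^2 ||w - w'||  (contraction for small T).

c, T, nx, nt = 1.0, 0.5, 64, 80
dx = 2 * np.pi / nx
x = np.arange(nx) * dx
t = np.linspace(0.0, T, nt)
dt = t[1] - t[0]
k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)       # integer wavenumbers

f = np.cos(t)[:, None] * np.sin(x)[None, :]    # made-up smooth source
a = 0.3 * np.cos(x)                            # made-up coefficient of K

def shift(row, s):
    """Evaluate a grid function at x + s (periodic, spectral interpolation)."""
    return np.real(np.fft.ifft(np.fft.fft(row) * np.exp(1j * k * s)))

def Phi(v):
    """(Phi v)(t_n, .) = -int_0^{t_n} v(s, . + c (t_n - s)) ds (Riemann sum)."""
    out = np.zeros_like(v)
    for n in range(1, nt):
        rows = [shift(v[m], c * (t[n] - t[m])) for m in range(n + 1)]
        out[n] = -dt * np.sum(rows, axis=0)
    return out

def K(w):
    """K[w](t, x) = a(x) * int_0^t w(s, x) ds."""
    return a[None, :] * (dt * np.cumsum(w, axis=0))

Phif = Phi(f)
w = np.zeros((nt, nx))
diffs = []
for _ in range(6):                             # Picard iteration
    w_new = Phi(K(w)) - Phif
    diffs.append(np.max(np.abs(w_new - w)))
    w = w_new
print(["%.2e" % d for d in diffs])
```

The ratio of successive differences is bounded by (a discrete analogue of) $C\,T^2$, here about $0.075$, so the iterates converge rapidly to the fixed point.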
By linearity of system \eqref{eq insta eq type A + epsilon B}, the solution $ \Psi f $ may be extended to any time interval, so the time of existence $ T $ does not depend on the regularity $ s\in[0,+\infty) $. Finally, using equation \eqref{eq insta eq type A + epsilon B evol}, we obtain \begin{multline}\label{eq insta expr dt Psi} \partial_t\,\Psi f(t,x,\Theta)=f(t,x,\Theta)+\mathbf{v}_{\nu,2}\cdot\nabla_x\,\Psi f(t,x,\Theta)\\ -\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\Big[\bar{\sigma}^1_{\phi,1},\int_{\max(0,t-x_d)}^t\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\nu,2}\big(\bar{\sigma}^1_{\phi,1},\Psi f\big)\big(s,x+\mathbf{v}_{\psi,1}\,(t-s),\Theta\big)\,ds\Big](t,x,\Theta) \end{multline} so $ \partial_t \Psi f $ belongs to $ \mathcal{C}([0,T],H^{\infty}) $ and therefore $ \Psi f $ is actually in $ \mathcal{C}^1([0,T],H^{\infty}) $. We have proven the first part of Lemma \ref{lemma solu Xu+ int u =0}. We now turn to the boundedness of $ \Psi $. We have, since $ \Psi f = \mathbf{F}(\Psi f) $, \begin{align*} \Psi f(t,x,\Theta) =\Phi\,\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\Big[\bar{\sigma}^1_{\phi,1},\int_{\max(0,t-x_d)}^t\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\nu,2}\big(\bar{\sigma}^1_{\phi,1},\Psi f\big)\big(s,x+\mathbf{v}_{\psi,1}\,(t-s),\Theta\big)\,ds\Big](t,x,\Theta)\\ -\Phi\,f(t,x,\Theta), \end{align*} and therefore, using estimates \eqref{eq insta est Phi} and \eqref{eq insta est J per bilin}, we have, for $ s\in[0,+\infty) $, \begin{align*} \norme{\Psi f}_{\mathcal{C}([0,T],H^s)}&\leq C_s\,T\,\norme{\int_{\max(0,t-x_d)}^t\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\nu,2}\big(\bar{\sigma}^1_{\phi,1},\Psi f\big)\big(s,x+\mathbf{v}_{\psi,1}\,(t-s),\Theta\big)\,ds}_{\mathcal{C}([0,T],H^s)}\\ &\quad+T\norme{f}_{\mathcal{C}([0,T],H^s)}\\ &\leq C_s^2\, T^2\norme{\Psi f}_{\mathcal{C}([0,T],H^s)}+T\norme{f}_{\mathcal{C}([0,T],H^s)}.
\end{align*} Thus, for $ T $ small enough (depending on $ s\in[0,+\infty) $), we have \begin{equation}\label{eq insta inter 1} \norme{\Psi f}_{\mathcal{C}([0,T],H^s)}\leq C\,T\norme{f}_{\mathcal{C}([0,T],H^s)}. \end{equation} Once again, by linearity of system \eqref{eq insta eq type A + epsilon B}, the estimate \eqref{eq insta inter 1} is propagated to the whole interval $ [0,T] $, which concludes the proof. \end{proof} Returning to \eqref{eq insta eq 1cor nu2bis BB2} and using Lemma \ref{lemma solu Xu+ int u =0}, by linearity, the solution $ \sigma^2_{\nu,2} $ of equation \eqref{eq insta eq 1cor nu2bis BB2} reads \begin{multline}\label{eq insta expr sigma nu 2} \sigma^2_{\nu,2}=\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\left(\bar{\sigma}^1_{\phi,1},\indicatrice_{x_d\leq t}\int_0^{t-x_d}b_{\psi}\cdot\partial_{\Theta}\, H\Big(s,y+x_d\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(t-s),\Theta\Big)\,ds\right)\\[5pt] \quad-\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\psi,2}\big(\bar{\sigma}^1_{\phi,3},\sigma^2_{\psi,2}\big). 
\end{multline} \bigskip We now turn to equation \eqref{eq insta eq 1cor BB2 psi2}, which, according to the expression \eqref{eq insta expr sigma nu 2} of $ \sigma^2_{\nu,2} $, reads \begin{align}\label{eq insta eq type A + epsilon B 2} &\color{altpink} \quad X_{\psi,2}\,\sigma^2_{\psi,2}-\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\nu,2}\Big[\bar{\sigma}^1_{\phi,3},\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\psi,2}\big(\bar{\sigma}^1_{\phi,3},\sigma^2_{\psi,2}\big)\Big]\\\nonumber &\color{altpink} =-\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\nu,2}\left[\bar{\sigma}^1_{\phi,3},\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\left(\bar{\sigma}^1_{\phi,1},\vphantom{\int_0^{t-x_d} }\right.\right.\\\nonumber &\color{altpink} \hspace{50pt}\left.\left.\indicatrice_{x_d\leq t}\int_0^{t-x_d}b_{\psi}\cdot\partial_{\Theta}\, H\Big(s,y+x_d\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(t-s)\Big)\,ds\right)\right]. \end{align} This equation is solved using the same method as in Lemma \ref{lemma solu Xu+ int u =0}. For $ v $ in $ \mathcal{C}([0,T],H^{\infty}) $, we still denote by $ \Phi\,v $ the solution in $ \mathcal{C}([0,T],H^{\infty}) $ of \begin{equation*} \begin{cases} X_{\psi,2}\,u+v=0,\\ u_{|t=0}=0, \end{cases} \end{equation*} and we recall that it satisfies, for $ s\in[0,+\infty) $, \begin{equation}\label{eq insta est Phi 2} \norme{\Phi\,v}_{\mathcal{C}([0,T],H^s)}\leq T \norme{v}_{\mathcal{C}([0,T],H^s)}.
\end{equation} Now, $ \sigma^2_{\psi,2} $ is a solution to \eqref{eq insta eq type A + epsilon B 2} if and only if it is a fixed point of the map $ \mathbf{F} $ of $ \mathcal{C}([0,T],H^{\infty}) $ given by \begin{multline*} \mathbf{F} : w \mapsto -\Phi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\nu,2}\Big[\bar{\sigma}^1_{\phi,3},\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\psi,2}\big(\bar{\sigma}^1_{\phi,3},w\big)\Big] +\Phi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\nu,2}\left[\bar{\sigma}^1_{\phi,3},\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\left(\bar{\sigma}^1_{\phi,1},\vphantom{\int_0^{t-x_d} }\right.\right.\\\nonumber \left.\left.\indicatrice_{x_d\leq t}\int_0^{t-x_d}b_{\psi}\cdot\partial_{\Theta}\, H\Big(s,y+x_d\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(t-s),\Theta\Big)\,ds\right)\right]. \end{multline*} For $ w,w' $ in $ \mathcal{C}([0,T],H^{\infty}) $, the difference $ \mathbf{F}(w)-\mathbf{F}(w') $ is given by \begin{equation*} \mathbf{F}(w)-\mathbf{F}(w')=\Phi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\nu,2}\Big[\bar{\sigma}^1_{\phi,3},\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\psi,2}\big(\bar{\sigma}^1_{\phi,3},w'-w\big)\Big]. \end{equation*} Therefore, for $ s\in[0,+\infty) $, according to estimates \eqref{eq insta est J per bilin} and \eqref{eq insta est Psi f}, we have \begin{align*} \norme{\mathbf{F}(w)-\mathbf{F}(w')}_{\mathcal{C}([0,T],H^s)}&\leq C_s\,T\norme{\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\psi,2}\big(\bar{\sigma}^1_{\phi,3},w'-w\big)}_{\mathcal{C}([0,T],H^s)}\\ &\leq C_s \, T^2 \norme{w-w'}_{\mathcal{C}([0,T],H^s)}, \end{align*} so, for $ T>0 $ small enough, $ \mathbf{F} $ is a contraction of $ \mathcal{C}([0,T],H^{s}) $. 
The Banach fixed point theorem therefore gives a unique solution $ \sigma^2_{\psi,2} $ to \eqref{eq insta eq type A + epsilon B 2} in $ \mathcal{C}([0,T],H^{s}) $ for $ s\in[0,+\infty) $, which reads \begin{align*} \sigma^2_{\psi,2}(t,x)&= \Phi\,\mathbb{J}^{\phi,3}_{\nu,2}\Big[\bar{\sigma}^1_{\phi,3},\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\psi,2}\big(\bar{\sigma}^1_{\phi,3},\sigma^2_{\psi,2}\big)\Big]\\ &-\Phi\,\mathbb{J}^{\phi,3}_{\nu,2}\left[\bar{\sigma}^1_{\phi,3},\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\left(\bar{\sigma}^1_{\phi,1},\vphantom{\int_0^{t-x_d} }\right.\right.\\\nonumber &\hspace{50pt}\left.\left.\indicatrice_{x_d\leq t}\int_0^{t-x_d}b_{\psi}\cdot\partial_{\Theta}\, H\Big(s,y+x_d\,\big(\mathbf{v}^{\Lop}_{\psi}+\mathbf{v}_{\psi,1}'\big)-\mathbf{v}^{\Lop}_{\psi}\,(t-s),\Theta\Big)\,ds\right)\right]. \end{align*} Therefore, according to the expression of $ \Phi $ and taking the trace at $ x_d=0 $, we have \begin{align}\label{eq insta inter 2} &\quad\big(\sigma^2_{\psi,2}\big)_{|x_d=0}(t,y)- \Phi\,\mathbb{J}^{\phi,3}_{\nu,2}\Big[\bar{\sigma}^1_{\phi,3},\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\psi,2}\big(\bar{\sigma}^1_{\phi,3},\sigma^2_{\psi,2}\big)\Big]_{|x_d=0}(t,y)\\\nonumber &=\int_0^t\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\nu,2}\left[\bar{\sigma}^1_{\phi,3},\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,1}_{\psi,1}\left(\bar{\sigma}^1_{\phi,1},\vphantom{\int_0^{t-x_d} }\int_0^{t}b_{\psi}\cdot\partial_{\Theta}\, H\Big(s,y-\mathbf{v}^{\Lop}_{\psi}\,(t-s)\Big)\,ds\right)\right]\\\nonumber &\quad\big(s,y+\mathbf{v}_{\psi,2}\,(t-s)\big)\,ds. \end{align} Using arguments similar to those of Theorem \ref{th il existe H BBmod 1}, we can construct profiles $ \bar{\sigma}^1_{\phi,1} $ and $ \bar{\sigma}^1_{\phi,3} $ and choose a boundary term $ H $ such that the right-hand side of equation \eqref{eq insta inter 2} is nonzero.
Therefore, we obtain \begin{equation} \mathbf{C}:=\norme{\big(\sigma^2_{\psi,2}\big)_{|x_d=0}- \Phi\,\mathbb{J}^{\phi,3}_{\nu,2}\Big[\bar{\sigma}^1_{\phi,3},\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\psi,2}\big(\bar{\sigma}^1_{\phi,3},\sigma^2_{\psi,2}\big)\Big]_{|x_d=0}}_{\mathcal{C}([0,T],L^2(\R^{d-1}\times\T))}>0. \end{equation} Now note that, using the exact same arguments as those used to prove estimates \eqref{eq insta est Psi f}, \eqref{eq insta est J per bilin} and \eqref{eq insta est Phi 2}, we can prove that these estimates \eqref{eq insta est Psi f}, \eqref{eq insta est J per bilin} and \eqref{eq insta est Phi 2} still hold for the traces, in $ \mathcal{C}([0,T],L^2(\R^{d-1}\times\T)) $. Therefore, we get \begin{align*} \mathbf{C}&\leq \norme{\big(\sigma^2_{\psi,2}\big)_{|x_d=0}}_{\mathcal{C}([0,T],L^2)}+\norme{\Phi\,\mathbb{J}^{\phi,3}_{\nu,2}\Big[\bar{\sigma}^1_{\phi,3},\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\psi,2}\big(\bar{\sigma}^1_{\phi,3},\sigma^2_{\psi,2}\big)\Big]_{|x_d=0}}_{\mathcal{C}([0,T],L^2)}\\ &\leq \norme{\big(\sigma^2_{\psi,2}\big)_{|x_d=0}}_{\mathcal{C}([0,T],L^2)}+C\,T\norme{\Psi\,\partial_{\Theta}\,\mathbb{J}^{\phi,3}_{\psi,2}\big(\bar{\sigma}^1_{\phi,3},\sigma^2_{\psi,2}\big)_{|x_d=0}}_{\mathcal{C}([0,T],L^2)}\\ &\leq \norme{\big(\sigma^2_{\psi,2}\big)_{|x_d=0}}_{\mathcal{C}([0,T],L^2)}+C\,T^2\norme{\big(\sigma^2_{\psi,2}\big)_{|x_d=0}}_{\mathcal{C}([0,T],L^2)}\\ &= \big(1+C\,T^2\big)\norme{\big(\sigma^2_{\psi,2}\big)_{|x_d=0}}_{\mathcal{C}([0,T],L^2)}. \end{align*} In conclusion, we obtain that the norm $ \norme{\big(\sigma^2_{\psi,2}\big)_{|x_d=0}}_{\mathcal{C}([0,T],L^2(\R^{d-1}\times\T))} $ is positive, so, for $ T $ sufficiently small, the trace $ \big(\sigma^2_{\psi,2}\big)_{|x_d=0} $ is nonzero, which is the sought result, concluding the proof.
\end{proof} In the same manner as for the first simplified model, Theorem \ref{th il existe H BBmod 2} proves that an instability is created for the simplified model \eqref{eq insta eq 1cor BB2}, \eqref{eq insta eq 1cor apsi evol BB2} and \eqref{eq insta eq 1cor apsi bord BB2}. \bigskip Once again, as in section \ref{section existence}, it is conceivable to consider a more complex simplified model than \eqref{eq insta eq 1cor BB2}, \eqref{eq insta eq 1cor apsi evol BB2} and \eqref{eq insta eq 1cor apsi bord BB2}, by incorporating into the equations \eqref{eq insta eq 1cor BB2} coupling terms with profiles $ \sigma_{\zeta,j} $ with $ \zeta\neq \phi,\psi,\nu $. A further step would be to add, in equation \eqref{eq insta eq 1cor apsi evol BB2}, terms involving the traces of interior profiles. Among these terms, the one that seems to raise the most difficult issue is $ i\lambda\,b_{\psi}\cdot B\,\big(U^{3,\osc}_{\psi,2,\lambda}\big)_{|x_d,\chi_d=0} $, since it couples equation \eqref{eq insta eq 1cor apsi evol BB2} on the first corrector with the second corrector $ U^3 $. \section{The example of gas dynamics}\label{section Euler} We study here the example of the three-dimensional compressible isentropic Euler equations. The aim is to determine whether or not the configuration of frequencies considered in this work can occur for this system. For $ \mathcal{C}^1 $ solutions, away from vacuum, the equations read \begin{equation}\label{eq Euler systeme} \left\lbrace \begin{array}{lr} \partial_tV^{\epsilon}+A_1(V^{\epsilon})\,\partial_1V^{\epsilon}+A_2(V^{\epsilon})\,\partial_2V^{\epsilon}+A_3(V^{\epsilon})\,\partial_3V^{\epsilon}=0&\qquad \mbox{in } \Omega_T, \\[5pt] B\,V^{\epsilon}_{|x_3=0}=\epsilon^2\, g^{\epsilon}+\epsilon^M\,h^{\epsilon}&\qquad \mbox{on } \omega_T, \\[10pt] V^{\epsilon}_{|t=0}=0,& \end{array} \right.
\end{equation} with $ V^{\epsilon}=(v^{\epsilon},\mathbf{u}^{\epsilon})\in\R^*_+\times\R^3 $, where $ v^{\epsilon}\in\R^*_+ $ represents the fluid volume, and $ \mathbf{u}^{\epsilon}\in\R^3 $ its velocity, and where the functions $ A_j $, $ j=1,2,3 $ are defined on $ \R^*_+\times\R^3 $ as \begin{equation} A_j(V):=\left(\begin{array}{cc} \mathbf{u}_j & -v \,^te_j\\[5pt] -c(v)^2/v\,e_j & \mathbf{u}_j\,I_3 \end{array}\right)\in\mathcal{M}_4(\R), \end{equation} where $ e_j $ is the $ j $-th vector of the canonical basis, with $ c(v)>0 $ representing the sound velocity in the fluid, depending on its volume $ v $. We study here a perturbation of this system around the equilibrium $ V_0:=(v_0,0,0,u_0) $ with $ v_0>0 $ a fixed volume and $ (0,0,u_0) $ an incoming subsonic velocity, that is, such that $ 0<u_0<c(v_0) $. We denote by $ c_0:=c(v_0) $ the sound velocity in a fluid of the fixed volume $ v_0 $. In order to study the possibility of existence of a configuration of frequencies satisfying Assumption \ref{hypothese ensemble frequences}, we need to determine a matrix $ B $ satisfying Assumption \ref{hypothese weak lopatinskii}, for which we need to know the dimension of the stable subspace $ E_-(\zeta) $ and to construct a basis of it. \bigskip Although it will not be used in this part, we derive the expression of various quantities related to the hyperbolicity of the Euler system. For $ (\eta,\xi)\in\R^2\times\R $, the matrix $ A(\eta,\xi):=\eta_1\,A_1(V_0)+\eta_2\,A_2(V_0)+\xi\,A_3(V_0) $ associated with the system \eqref{eq Euler systeme} is given by \begin{equation*} A(\eta,\xi)=\left(\begin{array}{ccc} u_0 \,\xi & -v_0 \,^t\eta & -v_0\,\xi\\[5pt] -c_0^2/v_0\,\eta & u_0 \,\xi\,I_2 &0 \\[5pt] -c_0^2/v_0\,\xi & 0 & u_0 \,\xi \end{array}\right), \end{equation*} so the polynomial $ p $ defined as $ p(\tau,\eta,\xi):=\det\big(\tau\,I_4+A(\eta,\xi)\big) $ reads \begin{equation*} p(\tau,\eta,\xi)=(\tau+\xi\,u_0)^2\big((\tau+\xi\,u_0)^2-c_0^2(|\eta|^2+\xi^2)\big).
\end{equation*} Thus the matrix $ A(\eta,\xi) $ admits a double eigenvalue $ -\tau_2(\eta,\xi) $ and two simple eigenvalues $ -\tau_1(\eta,\xi) $ and $ -\tau_3(\eta,\xi) $ given by \begin{equation*} \tau_1(\eta,\xi)=-\xi\,u_0-c_0\sqrt{|\eta|^2+\xi^2},\quad \tau_2(\eta,\xi)=-\xi\,u_0,\quad \tau_3(\eta,\xi)=-\xi\,u_0+c_0\sqrt{|\eta|^2+\xi^2}. \end{equation*} Note that since $ -\tau_2(\eta,\xi) $ is a double eigenvalue, the Euler system is not strictly hyperbolic, but hyperbolic with constant multiplicity. Despite this difference with Assumption \ref{hypothese stricte hyp}, we study this system since Assumption \ref{hypothese stricte hyp} appears to be merely technical. Now, to determine the expression of the stable subspace $ E_-(\zeta) $, we need to study the eigenvalues of the matrix \begin{equation*} \mathcal{A}(\tau,\eta):=-i\,A_3(V_0)^{-1}\Big(\tau I+\eta_1\, A_1(V_0)+\eta_2\,A_2(V_0)\Big). \end{equation*} We determine that in this case, the hyperbolic region\footnote{That is, the region where $ \mathcal{A}(\tau,\eta) $ has only pure imaginary eigenvalues and is diagonalizable.} is given by \begin{equation*} \mathcal{H}:=\ensemble{\big(\tau,\eta\big)\,\middle|\,|\tau|>\sqrt{c_0^2-u_0^2}\,|\eta|}. \end{equation*} Then, for $ (\tau,\eta) $ in $ \mathcal{H} $, the eigenvalues of $ \mathcal{A}(\tau,\eta) $ are given by \begin{subequations}\label{eq ex Euler v.p. hyp} \begin{align} i\,\xi_1(\tau,\eta)&:=i\,\frac{\tau\, u_0-\sign(\tau)\,c_0\,\sqrt{\tau^2-|\eta|^2\,(c_0^2-u_0^2)}}{c_0^2-u_0^2},\\ i\,\xi_2(\tau,\eta)&:=i\,\frac{\tau \,u_0+\sign(\tau)\,c_0\,\sqrt{\tau^2-|\eta|^2\,(c_0^2-u_0^2)}}{c_0^2-u_0^2},\\\label{eq freq def xi 3} i\,\xi_3(\tau,\eta)&:=i\,\frac{-\tau}{u_0}, \end{align} \end{subequations} where $ \sign(x):=x/|x| $ for $ x\neq0 $. The eigenvalue $ i\,\xi_3 $ is double, while the two others are simple.
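Both the characteristic polynomial $p$ and the eigenvalues $i\,\xi_j$ of $\mathcal{A}(\tau,\eta)$ can be checked by a short computation. The following sketch (the numeric values $v_0=1$, $u_0=0.5$, $c_0=1$ and $(\tau,\eta)=(1,(0.3,0.4))$ are arbitrary choices of ours in the subsonic, hyperbolic regime) verifies the factorization of $\det(\tau I_4+A(\eta,\xi))$ symbolically and the spectrum of $\mathcal{A}(\tau,\eta)$ numerically.

```python
import numpy as np
import sympy as sp

# --- symbolic: det(tau I + A(eta, xi)) matches the displayed factorization ---
tau_, xi_, e1, e2, v0_, u0_, c0_ = sp.symbols('tau xi eta_1 eta_2 v_0 u_0 c_0')
A_sym = sp.Matrix([
    [u0_*xi_,         -v0_*e1, -v0_*e2, -v0_*xi_],
    [-c0_**2/v0_*e1,   u0_*xi_, 0,       0      ],
    [-c0_**2/v0_*e2,   0,       u0_*xi_, 0      ],
    [-c0_**2/v0_*xi_,  0,       0,       u0_*xi_],
])
p = (tau_*sp.eye(4) + A_sym).det()
factored = (tau_ + xi_*u0_)**2 * ((tau_ + xi_*u0_)**2 - c0_**2*(e1**2 + e2**2 + xi_**2))

# --- numeric: spectrum of calA(tau,eta) is {i xi_1, i xi_2, i xi_3 (double)} ---
v0, u0, c0 = 1.0, 0.5, 1.0                     # arbitrary subsonic values
eta = np.array([0.3, 0.4]); tau = 1.0          # |tau| > sqrt(c0^2-u0^2)|eta|

def A_mat(e, uj):
    """A_j(V_0) = [[u_j, -v0 e^T], [-c0^2/v0 e, u_j I3]]."""
    e = np.asarray(e, dtype=float)
    M = np.zeros((4, 4))
    M[0, 0] = uj; M[0, 1:] = -v0 * e
    M[1:, 0] = -(c0**2 / v0) * e; M[1:, 1:] = uj * np.eye(3)
    return M

A1, A2, A3 = A_mat([1, 0, 0], 0.0), A_mat([0, 1, 0], 0.0), A_mat([0, 0, 1], u0)
calA = -1j * np.linalg.inv(A3) @ (tau * np.eye(4) + eta[0] * A1 + eta[1] * A2)

disc = np.sqrt(tau**2 - eta @ eta * (c0**2 - u0**2))
xi1 = (tau * u0 - np.sign(tau) * c0 * disc) / (c0**2 - u0**2)
xi2 = (tau * u0 + np.sign(tau) * c0 * disc) / (c0**2 - u0**2)
xi3 = -tau / u0                                # double eigenvalue
spec = np.sort(np.linalg.eigvals(calA).imag)
expected = np.sort([xi1, xi2, xi3, xi3])
print(spec, expected)
```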
We determine that, if we denote $ \alpha_j(\tau,\eta):=\big(\tau,\eta,\xi_j(\tau,\eta)\big) $, the frequency $ \alpha_2(\tau,\eta) $ is outgoing, while the frequencies $ \alpha_1(\tau,\eta) $ and $ \alpha_3(\tau,\eta) $ are incoming. Since $ i\,\xi_3(\tau,\eta) $ is a double eigenvalue, the dimension $ p $ of the stable subspace $ E_-(\zeta) $ is equal to $ 3 $. This could also have been determined by the number of positive eigenvalues of $ A_3(V_0) $. We now construct a basis of $ E_-(\zeta) $, which, according to \eqref{eq decomp E_-(zeta)}, can be constructed with eigenvectors of $ \mathcal{A}(\zeta) $ associated with incoming frequencies. We determine that the eigenvectors associated with eigenvalues $ i\,\xi_1(\zeta) $ and $ i\,\xi_3(\zeta) $ are respectively given by \begin{equation*} \lambda\begin{pmatrix} |(\eta,\xi_1(\tau,\eta))|\,v_0 \\ c_0 \,\eta \\ c_0\,\xi_1(\tau,\eta) \end{pmatrix},\quad \begin{pmatrix} 0 \\ \mathbf{a} \end{pmatrix}, \end{equation*} with $ \lambda\in\R $ and where $ \mathbf{a} $ is any vector satisfying $ \mathbf{a}\cdot\big(\eta,\xi_3(\tau,\eta)\big)=0 $. For $ \mathbf{a} $ we can choose for example the two linearly independent vectors $ (\tau\,\eta,|\eta|^2\,u_0) $ and $ (c_0\,\eta_2,-c_0\,\eta_1,0) $ to obtain the following basis of the stable subspace $ E_-(\zeta) $: \begin{equation*} r_1(\zeta):= \begin{pmatrix} |(\eta,\xi_1(\tau,\eta))|\,v_0 \\ c_0 \,\eta \\ c_0\,\xi_1(\tau,\eta) \end{pmatrix},\quad r^1_3(\zeta):=\begin{pmatrix} 0 \\ \tau\,\eta \\ |\eta|^2\,u_0 \end{pmatrix},\quad r^2_3(\zeta):=\begin{pmatrix} 0 \\ c_0\,\eta_2 \\ -c_0\,\eta_1 \\ 0 \end{pmatrix}. \end{equation*} We now look for a matrix $ B $, of size $ 3\times 4 $, satisfying the weak Kreiss-Lopatinskii condition \ref{hypothese weak lopatinskii}. More precisely, we want a matrix $ B $ such that $ \ker B \cap E_-(\zeta) $ is nontrivial on the locus $ \tau=c_0\,|\eta| $.
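Two of the claims above are easy to confirm numerically (with our arbitrary subsonic values $v_0=1$, $u_0=0.5$, $c_0=1$ and a point $(\tau,\eta)$ in the hyperbolic region): the vectors $r^1_3$ and $r^2_3$ lie in the kernel of $\tau I+\eta_1 A_1+\eta_2 A_2+\xi_3 A_3$, so they are eigenvectors of $\mathcal{A}(\zeta)$ for $i\,\xi_3$, and $A_3(V_0)$ has exactly $3$ positive eigenvalues, matching $\dim E_-(\zeta)=3$.

```python
import numpy as np

v0, u0, c0 = 1.0, 0.5, 1.0                     # arbitrary subsonic values
eta = np.array([0.3, 0.4]); tau = 1.0          # (tau, eta) in the hyperbolic region

def A_mat(e, uj):
    """A_j(V_0) = [[u_j, -v0 e^T], [-c0^2/v0 e, u_j I3]]."""
    e = np.asarray(e, dtype=float)
    M = np.zeros((4, 4))
    M[0, 0] = uj
    M[0, 1:] = -v0 * e
    M[1:, 0] = -(c0**2 / v0) * e
    M[1:, 1:] = uj * np.eye(3)
    return M

A1, A2, A3 = A_mat([1, 0, 0], 0.0), A_mat([0, 1, 0], 0.0), A_mat([0, 0, 1], u0)

xi3 = -tau / u0                                # the double incoming root
L = tau * np.eye(4) + eta[0] * A1 + eta[1] * A2 + xi3 * A3

# r_3^1 = (0, tau*eta, |eta|^2 u0) and r_3^2 = (0, c0*eta2, -c0*eta1, 0):
# both have a = (second block) orthogonal to (eta, xi3), so L annihilates them
r31 = np.array([0.0, tau * eta[0], tau * eta[1], (eta @ eta) * u0])
r32 = np.array([0.0, c0 * eta[1], -c0 * eta[0], 0.0])
print(np.abs(L @ r31).max(), np.abs(L @ r32).max())

# number of positive eigenvalues of A3(V_0): u0 (double) and u0 +/- c0, so 3
n_pos = int(np.sum(np.linalg.eigvals(A3).real > 0))
print(n_pos)
```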
Note that here we make a restrictive choice of the locus where $ \ker B \cap E_-(\zeta) $ should be nontrivial; this choice makes the following computations easier. Since every quantity is homogeneous of degree 1, we can make the computations for $ |\eta|=1 $. For $ \tau=c_0\,|\eta| $ we have $ \xi_1(\tau,\eta)=0 $, so, denoting $ \eta=(\cos\theta,\sin\theta) $, the basis $ \ensemble{r_1(\zeta),r_3^1(\zeta),r_3^2(\zeta)} $ reads \begin{equation*} r_1(\zeta)= \begin{pmatrix} v_0 \\ c_0 \,\cos\theta\\ c_0 \,\sin\theta\\ 0 \end{pmatrix},\quad r^1_3(\zeta)=\begin{pmatrix} 0 \\ c_0 \,\cos\theta\\ c_0 \,\sin\theta \\ u_0 \end{pmatrix},\quad r^2_3(\zeta)=\begin{pmatrix} 0 \\ c_0 \,\sin\theta\\ -c_0 \,\cos\theta\\0 \end{pmatrix}. \end{equation*} The condition that $ \ker B \cap E_-(\zeta) $ is nontrivial is equivalent to the three vectors $ B\,r_1(\zeta), B\,r_3^1(\zeta),\linebreak B\,r_3^2(\zeta) $ being linearly dependent. To study this condition, we write $ B $ in column as \begin{equation*} B=\begin{pmatrix} b_1 & b_2 & b_3 & b_4 \end{pmatrix}, \end{equation*} and, since $ B $ has to be of rank 3, we can assume that column $ b_4 $ is a linear combination of the three linearly independent vectors $ b_1,b_2,b_3 $, which we choose to be the canonical basis of $ \R^3 $. We write $ b_4=\mu_1\,b_1+\mu_2\,b_2+\mu_3\,b_3 $, with $ \mu_1,\mu_2,\mu_3\in\R $. In this notation, the linear dependence of $ B\,r_1(\zeta), B\,r_3^1(\zeta), B\,r_3^2(\zeta) $ is equivalent to \begin{equation*} v_0\,c_0^2=\mu_1\,u_0\,c_0^2\qquad \mbox{and}\qquad \mu_2\,v_0\,c_0\,\cos\theta=\mu_3\,v_0\,c_0\,\sin\theta,\quad \forall\theta\in\T, \end{equation*} so $ \mu_1=v_0/u_0 $ and $ \mu_2=\mu_3=0 $.
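This elimination of $\mu_1,\mu_2,\mu_3$ can be reproduced symbolically. The following sympy sketch forms the $3\times3$ determinant $\det\big(B\,r_1,B\,r_3^1,B\,r_3^2\big)$ (which is linear in the $\mu_j$), imposes its vanishing at a few sample angles $\theta$, and recovers the unique solution $\mu_1=v_0/u_0$, $\mu_2=\mu_3=0$.

```python
import sympy as sp

v0, u0, c0, th = sp.symbols('v_0 u_0 c_0 theta', positive=True)
m1, m2, m3 = sp.symbols('mu_1 mu_2 mu_3')

# B = (e1 e2 e3 b4) with b4 = mu1 e1 + mu2 e2 + mu3 e3
B = sp.Matrix([[1, 0, 0, m1], [0, 1, 0, m2], [0, 0, 1, m3]])
r1  = sp.Matrix([v0, c0*sp.cos(th), c0*sp.sin(th), 0])
r31 = sp.Matrix([0,  c0*sp.cos(th), c0*sp.sin(th), u0])
r32 = sp.Matrix([0,  c0*sp.sin(th), -c0*sp.cos(th), 0])

# B r1, B r3^1, B r3^2 are linearly dependent iff this determinant vanishes
det = (B*r1).row_join(B*r31).row_join(B*r32).det()

# impose cancellation at enough sample angles to pin down mu_1, mu_2, mu_3
eqs = [det.subs(th, val) for val in (0, sp.pi, sp.pi/2, 3*sp.pi/2)]
sol = sp.solve(eqs, [m1, m2, m3], dict=True)[0]
print(sol)
```

Substituting the solution back, the determinant vanishes identically in $\theta$, so the cancellation indeed holds on the whole locus and not only at the sample angles.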
Multiplying $ B $ by the nonzero constant $ u_0 $, we obtain \begin{equation*} B=\begin{pmatrix} u_0& 0 & 0 & v_0 \\ 0 & u_0 & 0 & 0\\ 0 & 0 & u_0 & 0 \end{pmatrix}, \end{equation*} which gives an example of a matrix $ B $ for which $ \ker B \cap E_-(\zeta) $ is nonzero, and actually of dimension $ 1 $, on $ \tau=c_0\,|\eta| $. We now investigate whether $ \ker B \cap E_-(\zeta) $ is nontrivial only on $ \tau=c_0\,|\eta| $. For this purpose, we introduce a practical tool, the Lopatinskii determinant (see \cite[section 4.2.2]{BenzoniSerre2007Multi}), denoted by $ \Delta(\sigma,\eta) $ for $ (\sigma,\eta)\in\Xi $. It is a scalar function whose zeros are exactly the frequencies for which $ \ker B \cap E_-(\zeta) $ is nontrivial. Its construction can be found in \cite[section 4.6.1]{BenzoniSerre2007Multi}. If we write $ E_-(\sigma,\eta) $ as\footnote{Notation $ \,^{\perp} $ refers to the orthogonal complement relatively to the complex scalar product.} \begin{equation*} E_-(\sigma,\eta)=\ell(\sigma,\eta)^{\perp}, \end{equation*} the Lopatinskii determinant is given by the following block determinant: \begin{equation*} \Delta(\sigma,\eta)=\begin{vmatrix} B \\ \ell (\sigma,\eta) \end{vmatrix}. \end{equation*} Calculations made in \cite[section 14.3.1]{BenzoniSerre2007Multi} show that, for $ (\sigma,\eta)\in\Xi $, we can choose \begin{equation*} \ell(\sigma,\eta):=\big(a,-v_0\,u_0\,^{t}\eta,v_0\,\sigma\big) \end{equation*} with \begin{equation*} a:=u_0\,\sigma-\xi_-\,(c_0^2-u_0^2), \end{equation*} $ \xi_- $ being the root with negative real part of the following dispersion relation \begin{equation*} (\sigma+u_0\,\xi)^2-c_0^2\,(\xi^2+|\eta|^2)=0. \end{equation*} Thus the Lopatinskii determinant is given by \begin{equation*} \Delta(\sigma,\eta):=\begin{vmatrix} u_0& 0 & 0 & v_0 \\ 0 & u_0 & 0 & 0\\ 0 & 0 & u_0 & 0\\ a & -v_0\,u_0\,\eta_1 & -v_0\,u_0\,\eta_2 & v_0\,\sigma \end{vmatrix}=u_0^2\,v_0\big[u_0\,\sigma-a\big]=u_0^2\,v_0\,\xi_-(c_0^2-u_0^2).
\end{equation*} It is zero if and only if $ \xi_- $ is zero, and this is the case only when $ \sigma $ is real (i.e. for $ (\sigma,\eta)=(\tau,\eta)\in\Xi_0 $) and $ \tau=c_0\,|\eta| $. Therefore $ \ker B \cap E_-(\zeta) $ is nontrivial only on $ \tau=c_0\,|\eta| $, and thus the matrix $ B $ satisfies Assumption \ref{hypothese weak lopatinskii}, with \begin{equation*} \Upsilon:=\ensemble{(\tau,\eta)\,\middle|\,\tau=c_0\,|\eta|}. \end{equation*} \bigskip Now that we have determined a boundary condition $ B $ suited to our problem, we turn our attention to oscillations. Thus we consider two hyperbolic frequencies $ \phi $ and $ \psi $ on the boundary which will satisfy our assumptions. First, according to Assumption \ref{hypothese ensemble frequences}, the frequencies $ \phi $ and $ \psi $ must be zeros of the Lopatinskii determinant, and thus satisfy $ \tau=c_0|\eta| $. If we still take $ |\eta|=1 $, this leads us to consider \begin{equation*} \phi:=(c_0,\cos\theta_{\phi},\sin\theta_{\phi})\quad \text{and}\quad \psi:=(c_0,\cos\theta_{\psi},\sin\theta_{\psi}), \end{equation*} with $ \theta_{\phi},\theta_{\psi}\in[0,2\pi) $. An immediate computation then gives \begin{equation*} \xi_1(\phi)=\xi_1(\psi)=0,\quad \xi_2(\phi)=\xi_2(\psi)=\frac{2\,M}{1-M^2},\quad \xi_3(\phi)=\xi_3(\psi)=-\inv{M}, \end{equation*} where $ M:=u_0/c_0\in(0,1) $ is the Mach number. Therefore, in order to have no resonances between frequencies lifted from $ \phi $ and no resonances between frequencies lifted from $ \psi $, it is sufficient to assume that $ M^2 $ is irrational. We now look for a boundary frequency $ \nu:=-\lambda_{\phi}\,\phi-\lambda_{\psi}\,\psi $ with $ \lambda_{\phi},\lambda_{\psi}\in \Z^* $, which satisfies Assumption \ref{hypothese ensemble frequences}. The frequency $ \nu $ reads \begin{equation*} \nu=\big(-c_0(\lambda_{\phi}+\lambda_{\psi}),-\lambda_{\phi}\cos\theta_{\phi}-\lambda_{\psi}\cos\theta_{\psi},-\lambda_{\phi}\sin\theta_{\phi}-\lambda_{\psi}\sin\theta_{\psi}\big).
\end{equation*} First we determine in which case $ \nu $ is not in $ \Upsilon $. If we denote $ \nu=(\tau,\eta) $, we have \begin{equation*} \tau^2=c_0^2(\lambda_{\phi}+\lambda_{\psi})^2\quad \text{and}\quad c_0^2\,|\eta|^2=c_0^2(\lambda_{\phi}^2+\lambda_{\psi}^2)+2c_0^2\lambda_{\phi}\lambda_{\psi}\cos(\theta_{\phi}-\theta_{\psi}), \end{equation*} so, according to the description of $ \Upsilon $, the frequency $ \nu $ is not in $ \Upsilon $ if and only if $ \theta_{\phi}\neq\theta_{\psi} $. Generalizing this to any frequency $ \zeta=\lambda_1\,\phi+\lambda_2\,\psi\in\F_{b}\privede{0} $ shows that $ \F_b\cap\Upsilon=\ensemble{\phi,-\phi,\psi,-\psi} $ as required by Assumption \ref{hypothese ensemble frequences}. This assumption also requires $ \nu $ to be in the hyperbolic region. We have, if we still denote $ \nu=(\tau,\eta) $, \begin{equation*} \tau^2-|\eta|^2(c_0^2-u_0^2)=u_0^2(\lambda_{\phi}+\lambda_{\psi})^2+2\lambda_{\phi}\lambda_{\psi}(c_0^2-u_0^2)\big[1-\cos(\theta_{\phi}-\theta_{\psi})\big], \end{equation*} so the hyperbolicity condition $ \tau^2-|\eta|^2(c_0^2-u_0^2)>0 $ reads \begin{equation}\label{eq Euler hyp cond} M^2(\lambda_{\phi}+\lambda_{\psi})^2+2\lambda_{\phi}\lambda_{\psi}(1-M^2)\big[1-\cos(\theta_{\phi}-\theta_{\psi})\big]>0, \end{equation} which is satisfied for example when $ \lambda_{\phi} $ and $ \lambda_{\psi} $ are positive. We now turn to the resonance assumptions \eqref{eq hyp res phi psi nu 1} and \eqref{eq hyp res phi psi nu 2}.
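The explicit values of $\xi_j$ at a boundary frequency with $\tau=c_0\,|\eta|$, $|\eta|=1$, as well as the algebraic identity behind the hyperbolicity condition \eqref{eq Euler hyp cond}, can both be verified symbolically; the following sympy sketch (symbol names are ours) does so in terms of the Mach number $M=u_0/c_0$.

```python
import sympy as sp

c0, M = sp.symbols('c_0 M', positive=True)
u0 = M * c0                                # Mach number M = u0/c0 in (0,1)

# values of xi_1, xi_2, xi_3 at a boundary frequency with tau = c0, |eta| = 1
tau = c0
root = sp.sqrt(tau**2 - (c0**2 - u0**2))   # |eta| = 1, so the root equals M*c0
xi1 = (tau*u0 - c0*root) / (c0**2 - u0**2)         # sign(tau) = +1 here
xi2 = (tau*u0 + c0*root) / (c0**2 - u0**2)
xi3 = -tau / u0

# hyperbolicity computation for nu = (-c0(lp+lq), -lp eta_phi - lq eta_psi),
# with d the angle difference theta_phi - theta_psi
lp, lq, d = sp.symbols('lambda_phi lambda_psi delta', real=True)
tau_nu_sq = c0**2 * (lp + lq)**2
eta_nu_sq = lp**2 + lq**2 + 2*lp*lq*sp.cos(d)
lhs = tau_nu_sq - eta_nu_sq * (c0**2 - u0**2)
rhs = u0**2*(lp + lq)**2 + 2*lp*lq*(c0**2 - u0**2)*(1 - sp.cos(d))

print(sp.simplify(xi1), sp.simplify(xi2), sp.simplify(xi3), sp.simplify(lhs - rhs))
```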
We compute \begin{align*} \xi_1(\nu)&=\frac{-M(\lambda_{\phi}+\lambda_{\psi})}{1-M^2}\\ &\quad+\frac{\sign(\lambda_{\phi}+\lambda_{\psi})\sqrt{2\lambda_{\phi}\lambda_{\psi}\big[1-\cos(\theta_{\phi}-\theta_{\psi})\big]+M^2\big[\lambda_{\phi}^2+\lambda_{\psi}^2+2\lambda_{\phi}\lambda_{\psi}\cos(\theta_{\phi}-\theta_{\psi})\big]}}{1-M^2}\\[5pt] \xi_2(\nu)&=\frac{-M(\lambda_{\phi}+\lambda_{\psi})}{1-M^2}\\ &\quad-\frac{\sign(\lambda_{\phi}+\lambda_{\psi})\sqrt{2\lambda_{\phi}\lambda_{\psi}\big[1-\cos(\theta_{\phi}-\theta_{\psi})\big]+M^2\big[\lambda_{\phi}^2+\lambda_{\psi}^2+2\lambda_{\phi}\lambda_{\psi}\cos(\theta_{\phi}-\theta_{\psi})\big]}}{1-M^2}\\[5pt] \xi_3(\nu)&=\frac{\lambda_{\phi}+\lambda_{\psi}}{M}. \end{align*} Recalling Remark \ref{remark numbering phi psi} about the numbering of frequencies, we need to check the four possibilities for the couple of resonances, namely, \begin{subequations} \begin{align}\label{eq Euler res 1} \lambda_{\phi}\,\phi_1+\lambda_{\psi}\,\psi_1+\nu_2=0 & \quad \text{and} \quad \lambda_{\phi}\,\phi_3+\lambda_{\psi}\,\psi_2+\nu_2=0,\\\label{eq Euler res 2} \lambda_{\phi}\,\phi_1+\lambda_{\psi}\,\psi_3+\nu_2=0 & \quad \text{and} \quad \lambda_{\phi}\,\phi_3+\lambda_{\psi}\,\psi_2+\nu_2=0,\\\label{eq Euler res 3} \lambda_{\phi}\,\phi_3+\lambda_{\psi}\,\psi_1+\nu_2=0 & \quad \text{and} \quad \lambda_{\phi}\,\phi_1+\lambda_{\psi}\,\psi_2+\nu_2=0,\\\label{eq Euler res 4} \lambda_{\phi}\,\phi_3+\lambda_{\psi}\,\psi_3+\nu_2=0 & \quad \text{and} \quad \lambda_{\phi}\,\phi_1+\lambda_{\psi}\,\psi_2+\nu_2=0. \end{align} \end{subequations} \begin{itemize}[leftmargin=15pt] \item Since $ \xi_1(\phi)=\xi_1(\psi)=0 $, the relation $ \lambda_{\phi}\,\phi_1+\lambda_{\psi}\,\psi_1+\nu_2=0 $ implies that $ \xi_2(\nu)=0 $, and therefore $ \lambda_{\phi}+\lambda_{\psi}=0 $, which is impossible since it contradicts condition \eqref{eq Euler hyp cond}.
\item We determine that $ \lambda_{\phi}\,\phi_1+\lambda_{\psi}\,\psi_3+\nu_2=0 $ is equivalent to \begin{equation*} 2M^2\lambda_{\phi}\lambda_{\psi}\big[2-\cos(\theta_{\phi}-\theta_{\psi})\big]+M^4\big[\lambda_{\psi}^2+2\lambda_{\phi}\lambda_{\psi}\cos(\theta_{\phi}-\theta_{\psi})\big]-\lambda_{\psi}^2=0 \end{equation*} and \begin{equation}\label{eq Euler inter 1} (\lambda_{\phi}+\lambda_{\psi})(\lambda_{\phi}M^2-\lambda_{\psi})\geq 0. \end{equation} The corresponding second resonance is $ \lambda_{\phi}\,\phi_3+\lambda_{\psi}\,\psi_2+\nu_2=0 $, which is equivalent to \begin{equation*} 2M^2\lambda_{\phi}\lambda_{\psi}\big[2-\cos(\theta_{\phi}-\theta_{\psi})\big]+M^4\big[\lambda_{\phi}^2+2\lambda_{\phi}\lambda_{\psi}\cos(\theta_{\phi}-\theta_{\psi})\big]-\lambda_{\phi}^2=0 \end{equation*} and \begin{equation}\label{eq Euler inter 2} (\lambda_{\phi}+\lambda_{\psi})(\lambda_{\psi} M^2-\lambda_{\phi})\geq 0. \end{equation} Now conditions \eqref{eq Euler inter 1} and \eqref{eq Euler inter 2} are incompatible: the sum of their left-hand sides equals $ (\lambda_{\phi}+\lambda_{\psi})^2\,(M^2-1) $, which is negative since $ \lambda_{\phi}+\lambda_{\psi}\neq0 $ by condition \eqref{eq Euler hyp cond}. So the configuration of resonances \eqref{eq Euler res 2} is impossible. \item The case of \eqref{eq Euler res 3} is analogous, and is not detailed here. \item Finally, if for the first resonance we have $ \lambda_{\phi}\,\phi_3+\lambda_{\psi}\,\psi_3+\nu_2=0 $, then the second one must be $ \lambda_{\phi}\,\phi_1+\lambda_{\psi}\,\psi_2+\nu_2=0 $, which is equivalent to \begin{equation*} \big[1-\cos(\theta_{\phi}-\theta_{\psi})\big]+M^2\big[1+\cos(\theta_{\phi}-\theta_{\psi})\big]=0\quad \mbox{and}\quad (\lambda_{\phi}+\lambda_{\psi})(\lambda_{\psi}-\lambda_{\phi})\geq 0. \end{equation*} The first equation rewrites as \begin{equation*} \cos(\theta_{\phi}-\theta_{\psi})=\frac{1+M^2}{1-M^2}, \end{equation*} which cannot be satisfied for $ M^2\in(0,1) $, since the right-hand side is then greater than $ 1 $. Thus the fourth possibility \eqref{eq Euler res 4} is also impossible. \end{itemize} Therefore, in this case, a situation like the one described in Assumption \ref{hypothese ensemble frequences} \textbf{cannot happen}.
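The two impossibility arguments used above reduce to elementary algebra, which can be checked directly (a sympy sketch; symbol names are ours): the sum of the left-hand sides of the two sign conditions factors as $(\lambda_{\phi}+\lambda_{\psi})^2(M^2-1)<0$, and the cosine value required by the fourth configuration exceeds $1$.

```python
import sympy as sp

M, lp, lq = sp.symbols('M lambda_phi lambda_psi', real=True)

# sum of the left-hand sides of the two incompatible sign conditions:
# it factors as (lp + lq)^2 (M^2 - 1), negative for M^2 < 1 and lp + lq != 0,
# so the two conditions cannot hold simultaneously
s = (lp + lq)*(lp*M**2 - lq) + (lp + lq)*(lq*M**2 - lp)
print(sp.factor(s))

# the fourth configuration requires cos(theta_phi - theta_psi) = (1+M^2)/(1-M^2);
# this value minus 1 equals 2 M^2 / (1 - M^2) > 0 for 0 < M^2 < 1
excess = (1 + M**2)/(1 - M**2) - 1
print(sp.simplify(excess))
```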
To conclude the discussion of the Euler system, let us review where we have made a choice that restricts us to a particular case. The above analysis of the frequencies $ \phi $, $ \psi $ and $ \nu $ does not depend on $ B $, but only on the location of the zero of the Lopatinskii determinant. Thus the only restrictive choice we have made is to place this zero at $ \tau=c_0\,|\eta| $. Therefore, for the compressible isentropic Euler equations in space dimension 3, in this particular case, the configuration of frequencies considered in this work, which leads to an instability, cannot occur. We have considered the Euler system in space dimension 3 since, in space dimension 2, the condition $ \tau=c\,|\eta| $ reduces to $ \tau=\pm c\,\eta $, which prevents us from obtaining a transverse oscillation. One could also consider the shock problem for the Euler equations, which is the original problem of Majda and Rosales in \cite{MajdaRosales1983Machstem,MajdaRosales1984Machstem}. \bibliography{Bibliographie.bib} \bibliographystyle{alpha} \end{document}
2206.14252v1
http://arxiv.org/abs/2206.14252v1
Effective bounds on $S$-integral preperiodic points for polynomials
\documentclass[12pt]{amsart} \usepackage{amsmath,amssymb,amsbsy,amsfonts,amsthm,latexsym, amsopn,amstext,amsxtra,euscript,amscd,mathrsfs,color,bm,mathtools} \usepackage{float} \usepackage[english]{babel} \usepackage{url} \usepackage[colorlinks,linkcolor=blue,anchorcolor=blue,citecolor=blue,backref=page]{hyperref} \usepackage[margin=3.1cm]{geometry} \usepackage{color} \usepackage[norefs,nocites]{refcheck} \newcommand*{\QEDB}{\hfill\ensuremath{\square}} \renewcommand*{\backref}[1]{} \renewcommand*{\backrefalt}[4]{ \ifcase #1 (Not cited.) \or (p.\,#2) \else (pp.\,#2)} \begin{document} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{example}[theorem]{Example} \newtheorem{algol}{Algorithm} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{prop}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{question}[theorem]{Question} \newtheorem{problem}[theorem]{Problem} \newtheorem{remark}[theorem]{Remark} \newtheorem{conjecture}[theorem]{Conjecture} \newcommand{\llrrparen}[1]{ \left(\mkern-3mu\left(#1\right)\mkern-3mu\right)} \newcommand{\commM}[1]{\marginpar{\begin{color}{red} \vskip-\baselineskip \raggedright\footnotesize \itshape\hrule \smallskip M: #1\par\smallskip\hrule\end{color}}} \newcommand{\commA}[1]{\marginpar{\begin{color}{blue} \vskip-\baselineskip \raggedright\footnotesize \itshape\hrule \smallskip A: #1\par\smallskip\hrule\end{color}}} \def\xxx{\vskip5pt\hrule\vskip5pt} \def\cA{{\mathcal A}} \def\cB{{\mathcal B}} \def\cC{{\mathcal C}} \def\cD{{\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}} \def\cG{{\mathcal G}} \def\cH{{\mathcal H}} \def\cI{{\mathcal I}} \def\cJ{{\mathcal J}} \def\cK{{\mathcal K}} \def\cL{{\mathcal L}} \def\cM{{\mathcal M}} \def\cN{{\mathcal N}} \def\cO{{\mathcal O}} \def\cP{{\mathcal P}} \def\cQ{{\mathcal Q}} \def\cR{{\mathcal R}} \def\cS{{\mathcal S}} \def\cT{{\mathcal T}} \def\cU{{\mathcal U}} \def\cV{{\mathcal V}} \def\cW{{\mathcal W}} \def\cX{{\mathcal X}} 
\def\cY{{\mathcal Y}} \def\cZ{{\mathcal Z}} \def\C{\mathbb{C}} \def\D{\mathbb{D}} \def\I{\mathbb{I}} \def\F{\mathbb{F}} \def\H{\mathbb{H}} \def\K{\mathbb{K}} \def\G{\mathbb{G}} \def\Z{\mathbb{Z}} \def\R{\mathbb{R}} \def\Q{\mathbb{Q}} \def\N{\mathbb{N}} \def\M{\textsf{M}} \def\U{\mathbb{U}} \def\P{\mathbb{P}} \def\A{\mathbb{A}} \def\p{\mathfrak{p}} \def\n{\mathfrak{n}} \def\X{\mathcal{X}} \def\x{\textrm{\bf x}} \def\w{\textrm{\bf w}} \def\z{\mathrm{z}} \def\ovQ{\overline{\Q}} \def\rank#1{\mathrm{rank}#1} \def\wf{\widetilde{f}} \def\wg{\widetilde{g}} \def\comp{\hskip -2.5pt \circ \hskip -2.5pt} \def\({\left(} \def\){\right)} \def\[{\left[} \def\]{\right]} \def\<{\langle} \def\>{\rangle} \newcommand*\diff{\mathop{}\!\mathrm{d}} \def\Lip{\mathrm{Lip}} \def\BP{\mathbb{P}_{\mathrm{Berk}}^1} \def\BA{\mathbb{A}_{\mathrm{Berk}}^1} \def\cJBv{\mathcal{J}_{v,\mathrm{Berk}}} \def\cFBv{\mathcal{F}_{v,\mathrm{Berk}}} \def\gen#1{{\left\langle#1\right\rangle}} \def\genp#1{{\left\langle#1\right\rangle}_p} \def\genPs{{\left\langle P_1, \ldots, P_s\right\rangle}} \def\genPsp{{\left\langle P_1, \ldots, P_s\right\rangle}_p} \def\e{e} \def\eq{\e_q} \def\fh{{\mathfrak h}} \def\lcm{{\mathrm{lcm}}\,} \def\l({\left(} \def\r){\right)} \def\fl#1{\left\lfloor#1\right\rfloor} \def\rf#1{\left\lceil#1\right\rceil} \def\mand{\qquad\mbox{and}\qquad} \def\jt{\tilde\jmath} \def\ellmax{\ell_{\rm max}} \def\llog{\log\log} \def\diam{{\mathrm{diam}}} \def\m{{\mathfrak{m}}} \def\ud{{\mathrm{d}}} \def\ch{\hat{h}} \def\GL{{\rm GL}} \def\Orb{\mathrm{Orb}} \def\Per{\mathrm{Per}} \def\larg{\mathrm{larg}} \def\Preper{\mathrm{Preper}} \def \S{\mathcal{S}} \def\vec#1{\mathbf{#1}} \def\ov#1{{\overline{#1}}} \def\Gal{{\rm Gal}} \newcommand{\bfalpha}{{\boldsymbol{\alpha}}} \newcommand{\bfomega}{{\boldsymbol{\omega}}} \newcommand{\canheight}[1]{{\hat h_{#1}}} \newcommand{\loccanheight}[2]{{\hat h_{#2,#1}}} \newcommand{\relsupreper}[3]{{\cP_{#1,#2}(#3)}} \newcommand{\disunit}[3]{{\cA_{#1,#2}(#3)}} \def\rat{f} 
\def\pol{f} \def\unipol{f_c} \newcommand{\Ch}{{\operatorname{Ch}}} \newcommand{\Elim}{{\operatorname{Elim}}} \newcommand{\proj}{{\operatorname{proj}}} \newcommand{\h}{{\operatorname{h}}} \newcommand{\hh}{\mathrm{h}} \newcommand{\aff}{\mathrm{aff}} \newcommand{\Spec}{{\operatorname{Spec}}} \newcommand{\Res}{{\operatorname{Res}}} \numberwithin{equation}{section} \numberwithin{theorem}{section} \def\house#1{{ \setbox0=\hbox{$#1$} \vrule height \dimexpr\ht0+1.4pt width .5pt depth \dimexpr\dp0+.8pt\relax \vrule height \dimexpr\ht0+1.4pt width \dimexpr\wd0+2pt depth \dimexpr-\ht0-1pt\relax \llap{$#1$\kern1pt} \vrule height \dimexpr\ht0+1.4pt width .5pt depth \dimexpr\dp0+.8pt\relax}} \newcommand{\Address}{{ \bigskip \footnotesize \textsc{Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Cambridge CB3 0WB, UK}\par\nopagebreak \textit{E-mail address:} \texttt{[email protected]} }} \subjclass[2020]{37F10, 37P05, 11G50} \title[] {Effective bounds on $S$-integral preperiodic points for polynomials} \begin{abstract} Given a polynomial $\pol$ defined over a number field $K$, we make effective certain special cases of a conjecture of S. Ih, on the finiteness of $\pol$-preperiodic points which are $S$-integral with respect to a fixed non-preperiodic point $\alpha$. As an application, we obtain bounds on the number of $S$-units in the doubly indexed sequence $\{ \pol^n(\alpha) - \pol^m(\alpha) \}_{n > m \geq 0}$. In the case of a unicritical polynomial $\unipol(z)=z^2+c$, with $\alpha$ fixed to be the critical point 0, for parameters $c$ outside a small region, we give an explicit bound which depends only on the number of places of bad reduction for $\unipol$. As part of the proof, we obtain novel lower bounds for the $v$-adically smallest preperiodic point of $\unipol$ for each place $v$ of $K$. 
\end{abstract} \author {Marley Young} \maketitle \vspace{-0.5cm} \section{Introduction} \subsection{Motivation and background} Let $K$ be a number field with algebraic closure $\overline K$, and let $S$ be a finite set of places of $K$ including all the archimedean ones. Given two points $\alpha, \beta \in \P^1(\overline K)$, we say that $\beta$ is \emph{$S$-integral relative to $\alpha$} if $\alpha$ and $\beta$ have distinct reduction with respect to each place of $\overline K$ lying over the places of $K$ outside $S$. A notion of $S$-integrality can more generally be defined relative to an effective divisor on an abelian variety (see for example \cite{GI}). Problems of unlikely intersections have fueled recent development in arithmetic geometry and provided a deep connection to arithmetic dynamics. For example, S. Ih \cite[Conjecture~1.2]{BIR} has conjectured, given an abelian variety $A/K$ and an effective divisor $D$ on $A$, that the set of torsion points of $A(\overline K)$ which are $S$-integral relative to $D$ is not Zariski dense in $A$. This conjecture can be viewed as similar to the Manin-Mumford conjecture, proved by Raynaud \cite{Ra}, but integral points are considered since $A \setminus \mathrm{supp}(D)$ is non-compact. The cases of elliptic curves and the multiplicative group were proved by Baker, Ih and Rumely in \cite{BIR}. Links from arithmetic geometry to arithmetic dynamics are often made through an analogy between torsion points on abelian varieties and preperiodic points of rational maps. Indeed, in this manner, Ih further conjectured a general dynamical phenomenon. Given a self map $\rat: X \to X$ of a set $X$, let $\rat^n$ denote the $n$-fold composition given by $$ \rat^n = \underbrace{\rat \circ \cdots \circ \rat}_{n \text{ times}}. $$ A point $\alpha \in X$ is \emph{preperiodic} for $\rat$ if $\rat^{n+m}(\alpha) = \rat^m(\alpha)$ for some $n \geq 1$ and $m \geq 0$. 
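Over $\Q$, the notions of orbit and preperiodicity just defined are easy to experiment with numerically. The following sketch (the helper \texttt{is\_preperiodic} is ours, not part of the paper) decides preperiodicity of a rational starting point for $f_c(z)=z^2+c$ by exact iteration: an orbit is preperiodic precisely when it revisits a value, and the height cutoff used to declare escape is a heuristic stopping rule (reliable for integer parameters, where a bounded orbit takes finitely many values and must repeat).

```python
# Sketch (our helper, not from the paper): decide whether alpha is
# preperiodic for f_c(z) = z^2 + c over Q by exact iteration with
# fractions.Fraction.  An orbit is preperiodic exactly when it revisits
# a value; the cutoff declaring escape is heuristic for general rational
# parameters, but reliable for integer ones.
from fractions import Fraction

def is_preperiodic(c, alpha, max_height=10**6, max_iter=200):
    c, z = Fraction(c), Fraction(alpha)
    seen = {z}
    for _ in range(max_iter):
        z = z * z + c               # one step of f_c
        if z in seen:
            return True             # orbit repeats: preperiodic
        if abs(z) > max_height:
            return False            # escaped: certainly not preperiodic
        seen.add(z)
    return False                    # heuristic: no repetition detected

# The integer parameters for which 0 is preperiodic (c = 0, -1, -2),
# exactly the values excluded later for the uniform corollary:
print([c for c in range(-5, 6) if is_preperiodic(c, 0)])   # -> [-2, -1, 0]
```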
For $Y \subseteq X$, let $\mathrm{PrePer}(f,Y)$ denote the set of elements of $Y$ which are preperiodic for $f$. \begin{conjecture}[Ih] \label{conj:Ih} Let $K$ be a number field, and let $S$ be a finite set of places containing all the archimedean ones. If $\rat : \P^1 \to \P^1$ is a rational function of degree at least two defined over $K$, and $\alpha \in \P^1(\overline K)$ is a point which is not preperiodic, then there are at most finitely many preperiodic points $\beta \in \P^1(\overline K)$ that are $S$-integral with respect to $\alpha$. \end{conjecture} We say that a point $\alpha \in \P^1(\overline K)$ is \emph{totally Fatou} at a place $v$ of $K$ (with respect to the rational function $\rat$) if each of the $[K(\alpha):K]$ embeddings of $\alpha$ into $\P^1(\C_v)$ is contained in the $v$-adic Fatou set of $\rat$. Petsche \cite{P} proved Conjecture~\ref{conj:Ih} in the case where the non-preperiodic point $\alpha$ is totally Fatou at all places of $K$. In more generality, Conjecture~\ref{conj:Ih} has only been proved for very special maps, namely, those arising from endomorphisms of commutative algebraic groups for which the aforementioned \cite[Conjecture~1.2]{BIR} has been established. Baker, Ih and Rumely \cite{BIR} have proved the conjecture when $\rat$ is a Latt\`{e}s map associated to an elliptic curve or $\rat(z)=z^d$ for some $d \geq 2$. Also, in \cite{IT}, Ih and Tucker prove the conjecture for Chebyshev maps. A key tool in the proof of each of the above cases of Conjecture~\ref{conj:Ih} is the equidistribution of points of small height. Recently, we have seen the effectiveness of quantitative equidistribution techniques in answering questions in unlikely intersections, particularly with a view toward uniform results. For example, they are used by DeMarco, Krieger and Ye \cite{DKY} to obtain a uniform Manin-Mumford bound for a family of genus two curves.
In this paper we develop the tools to apply quantitative equidistribution to some cases of Conjecture~\ref{conj:Ih}, and in particular make Petsche's \cite{P} result effective when $\rat$ is a polynomial. The bounds we obtain depend substantially on the $v$-adic distance of the given totally Fatou, non-preperiodic point $\alpha$ to the nearest preperiodic point for $\pol$, at each place $v \in S$. Bounding these distances from below would give a quantitative version of the known discreteness of Fatou preperiodic points for a rational map (see for example \cite[Proposition~2]{P}). It is moreover interesting to study how these distances vary within a family of rational maps, and the extent to which we can obtain uniform bounds. Using various local methods, we do this in the case of a unicritical polynomial $\unipol(z)=z^2+c$, when $\alpha=0$ is the critical point. For parameters $c$ lying outside a small region, we obtain a bound for the number of preperiodic points for $\unipol$ which are $S$-integral with respect to 0, depending only on the number of primes for which $c$ is non-integral. We also relate Conjecture~\ref{conj:Ih} to bounding the number of $S$-units in the doubly indexed sequence $\{ \rat^n(\alpha)-\rat^m(\alpha) \}_{n > m \geq 0}$. This generalises the problem of bounding the number of $S$-units in orbits of $\pol$, which has been considered in \cite{KLS}. Throughout the paper, we will use the following notation. Let $M_K$ denote the set of places of $K$, and write $M_K^\infty$ and $M_K^0$ respectively for the sets of archimedean and non-archimedean places. For each $v \in M_K$, we normalise an absolute value $| \cdot |_v$, so that any $x \in K \setminus \{ 0 \}$ satisfies the \emph{product formula} $$ \prod_{v \in M_K} |x|_v = 1.
$$ For each $v \in M_K$, let $\C_v$ be a completion of an algebraic closure of $K_v$, and let $D(a,r) := \{ z \in \C_v : |z-a|_v < r \}$ and $\overline D(a,r) := \{ z \in \C_v : |z-a|_v \leq r \}$ denote respectively the open and closed disks in $\C_v$ centred at $a \in \C_v$ of radius $r \geq 0$. \subsection{Main results} For a rational function $\rat \in K(z)$ and a point $\alpha \in \overline K$, define \begin{equation} \label{eq:defrelsupreper} \relsupreper{\rat}{S}{\alpha} := \{ \beta \in \mathrm{PrePer}(\rat, \overline K) : \beta \text{ is $S$-integral relative to } \alpha \} \end{equation} and recall that our main goal is to obtain upper bounds for this quantity. We obtain the following. \begin{theorem} \label{thm:main} Let $K$ be a number field, and let $\pol \in K[z]$ be a polynomial of degree $d \geq 2$. Let $\canheight{\pol}$ denote the canonical height with respect to $\pol$ (see \textsection \textsection \ref{subsec:Heights}), and let $S$ be a finite set of places of $K$ containing all the archimedean ones, as well as all the places of bad reduction for $\pol$. Suppose that $\alpha \in \overline K$ is not preperiodic under $\pol$, and is totally Fatou at all places $v \in M_K$. Then there exists an effectively computable constant $$ P = P \left( \pol, \alpha, S \right) $$ such that $$ |\relsupreper{\pol}{S}{\alpha}| \leq P. $$ \end{theorem} It is likely that one can obtain this result without restricting to the case where $\rat$ is a polynomial. However, the decomposition of the canonical height associated to $\rat$ into local heights, which arises when $\rat$ is a polynomial, simplifies the discussion greatly when compared to the general case, where one must analyse dynamical Arakelov-Green's functions appearing for example in \cite{BR1}. 
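As a concrete illustration of the normalisation fixed above, for $K = \Q$ the product formula can be verified directly: with $|x|_p = p^{-v_p(x)}$ at a prime $p$ and the usual absolute value at the archimedean place, only primes dividing the numerator or denominator of $x$ contribute a factor different from 1. A small self-contained sketch (the helper names are ours):

```python
# Numerical check (not from the paper) of the product formula for K = Q,
# using exact rational arithmetic so the product is exactly 1.
from fractions import Fraction

def padic_abs(x, p):
    """|x|_p for a nonzero rational x: p^(-v_p(x))."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(p) ** (-v)

def product_over_places(x):
    """The product of |x|_v over all places of Q (finitely many factors != 1)."""
    x = Fraction(x)
    primes = set()
    for n in (abs(x.numerator), x.denominator):   # trial-divide both parts
        d = 2
        while d * d <= n:
            if n % d == 0:
                primes.add(d)
                while n % d == 0:
                    n //= d
            d += 1
        if n > 1:
            primes.add(n)
    prod = abs(x)                                 # archimedean factor
    for p in primes:
        prod *= padic_abs(x, p)
    return prod

print(product_over_places(Fraction(-360, 77)))    # -> 1
```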
For $\rat \in K(z)$ and $\alpha \in \overline K$, define \begin{equation} \label{eq:defdeltav} \delta_{\rat, v}(\alpha) := \min \left \{ 1, \inf_{z \in \mathrm{PrePer}(\rat,\C_v)} |z-\alpha|_v \right \} \end{equation} to be the $v$-adic distance of $\alpha$ to the nearest preperiodic point for $\rat$. \begin{remark} Since the set of preperiodic points in the Fatou set of $\pol$ is discrete (see \cite[Proposition~2]{P}), the condition of $\alpha$ being non-preperiodic and totally Fatou at a place $v$ is equivalent to having $\delta_{\pol,v}(\alpha) > 0$. \end{remark} We show that in Theorem~\ref{thm:main}, the dependence of the constant $P$ on $\pol$ and $\alpha$ is limited to the canonical height $\canheight{\pol}(\alpha)$ (in fact, the constant improves as $\canheight{\pol}(\alpha)$ grows), the quantities $\delta_{\pol,v}(\alpha)$ for $v \in S$, and certain constants of H\"{o}lder continuous potentials for the invariant probability measure associated to $\pol$ at each place (see Theorem~\ref{thm:main2}). In the case of a unicritical polynomial $\unipol(z)=z^2+c$, when $\alpha=0$ is the critical point, we are able to explicitly estimate the dependence on $\unipol$ (especially the quantities $\delta_{\unipol,v}(0)$), and in most instances relate it to the canonical height $\canheight{\unipol}(0)$. This allows us to obtain the following, more uniform, result for parameters $c$ whose conjugates all lie outside a small region about the boundary of the Mandelbrot set (see \textsection \textsection \ref{subsec:Mandelbrot}). Note that in this case, we do not need to enlarge $S$ to contain the places of bad reduction for $\unipol$. \begin{theorem} \label{thm:UniformMain} Let $K$ be a number field, $S$ a finite set of places of $K$ containing the archimedean ones, and let $\unipol(z)=z^2+c$, $c \in K$. Suppose that the critical point $0$ is not preperiodic under $\unipol$. 
Further, let $\varepsilon > 0$ and $t \geq 1$, and suppose that for all archimedean places $v \in M_K^\infty$, either $\loccanheight{v}{\unipol}(0) \geq \varepsilon$ or $c$ lies in a hyperbolic component of the Mandelbrot set of period at most $t$. Then $$ |\relsupreper{\unipol}{S}{0}| \ll 1, $$ where the implied constant depends only on $\varepsilon,t,K,S$ and the number of places of bad reduction for $\unipol$ (see Theorem~\ref{thm:UniformMainExplicit} for explicit values). \end{theorem} Note that Theorem~\ref{thm:UniformMain} can be extended to $\unipol(z)=z^d+c$ for arbitrary $d \geq 2$, but we restrict to the case $d=2$ to avoid messy computations. Also, for parameters $c$ outside the Mandelbrot set, nothing special about the critical point is used in the proof of Theorem~\ref{thm:UniformMain}. It is therefore possible to obtain an analogue of Theorem~\ref{thm:UniformMain} with the critical point 0 replaced by an arbitrary non-preperiodic point $\alpha$, for parameters $c$ lying outside the corresponding bifurcation locus. \begin{remark} For a parameter $c \in K$ and an archimedean place $v$, if $c$ lies in the $v$-adic Mandelbrot set, then $\loccanheight{v}{\unipol}(0) = 0$. For $c$ outside the Mandelbrot set, it would be of interest to give a uniform estimate of the $v$-adic escape rate $\loccanheight{v}{\unipol}(0)$ in terms of the distance to the Mandelbrot set. However, this seems currently intractable, as it would solve in the affirmative the well-known open problem of the Mandelbrot set being locally connected; see \cite[Chapter~VIII,~Theorem~4.2]{CG}. \end{remark} Theorem~\ref{thm:UniformMain} produces the following completely uniform bound when $c$ is an integer. \begin{corollary} \label{cor:IntMain} Let $K = \Q$, and let $\unipol(z) = z^2 + c$ with $c \in \Z$ such that $0$ is not preperiodic for $\unipol$, i.e.\ $c \neq 0,-1,-2$. Let $S$ be a finite set of places of $\Q$ containing the archimedean place $\infty$.
Then $$ | \relsupreper{\unipol}{S}{0} | \ll 1, $$ where the implied constant depends only on $S$ (see Corollary~\ref{cor:IntMain2} for explicit values). \end{corollary} Finally, we state our result on $S$-units in dynamical sequences. Let $\cO_S$ and $\cO_S^*$ denote respectively the sets of $S$-integers and $S$-units in $K$. In \cite{Si3}, Silverman showed that for any $\rat \in K(z)$ of degree at least two such that $\rat^2 \notin K[z]$, and any $\alpha \in K$, $\{ \rat^n(\alpha) : n \geq 0 \} \cap \cO_S$ is finite. However, the size of this intersection cannot be bounded independent of $\rat$, even if we restrict to functions of a fixed degree. Such a bound (i.e. depending only on $|S|$ and $\deg(\rat)$) may be possible if we replace $\cO_S$ with $\cO_S^*$. This was conjectured in \cite{KLS}, and proved in the case of a monic polynomial $\pol$ with coefficients in $\cO_S$. We proceed to consider, in place of the orbit of $\alpha$, the doubly-indexed sequence $\{ \rat^n(\alpha) - \rat^m(\alpha) \}_{n > m \geq 0}$. It is interesting to observe that we can relate the number of $S$-units in this sequence to the number of $\rat$-preperiodic points which are $S$-integral relative to $\alpha$. \begin{theorem} \label{thm:SunitBound} Let $K$ be a number field, let $\pol \in K[z]$ and let $\alpha \in K$. Let $S$ be a finite set of places of $K$ containing all the archimedean ones, and let $$ \disunit{\pol}{S}{\alpha} : = \{ \pol^n(\alpha) - \pol^m(\alpha) \}_{n > m \geq 0}. $$ Then $$ |\disunit{\pol}{S}{\alpha} \cap \cO_S^*| \leq \frac{(2 |\relsupreper{\pol}{S}{\alpha}|+1)^2-1}{8}, $$ where we recall $\relsupreper{\pol}{S}{\alpha}$ is defined in \eqref{eq:defrelsupreper}. \end{theorem} \begin{remark} In particular, for $\unipol(z) = z^2 + c$, $c \in \Z$, Corollary~\ref{cor:IntMain} implies $$ | \disunit{\unipol}{S}{0} \cap \cO_S^* | \ll 1, $$ with the implied constant depending only on $S$. 
\end{remark} \subsection{Proof strategy and outline of paper} In Section~\ref{sec:Prelim}, we introduce notation and results requisite for potential theory on the Berkovich projective line at each place of $K$, as well as various useful notions and propositions relevant to heights of algebraic numbers. This allows us to proceed in Section~\ref{sec:QuantEqui} to make explicit the constants in Favre and Rivera-Letelier's quantitative equidistribution theorem \cite[Theorem~7]{FRL} for the adelic measure associated to a polynomial $\pol$, and in particular for the unicritical polynomial $\unipol(z)=z^2+c$. To prove Theorem~\ref{thm:main}, we assume on the contrary that the set $\cP:=\relsupreper{\pol}{S}{\alpha}$ of preperiodic points for $\pol$ which are $S$-integral relative to $\alpha$ is very large. The idea is to then test these preperiodic points against an extension of $\log |z-\alpha|_v$ at each place $v \in S$ using the equidistribution theorem. Then the quantity $$ \Gamma := \frac{1}{|\cP|} \sum_{v \in M_K} \sum_{z \in \cP} \log | z - \alpha |_v, $$ which can be shown to be 0 using the product formula, is very close to the canonical height of $\alpha$ with respect to $\pol$, and is hence positive (since $\alpha$ is not preperiodic), a contradiction. The only problem with this approach is that our candidate test function has a logarithmic singularity at (embeddings of) $\alpha$. As done by Petsche in \cite{P}, we get around this by making the assumption that $\alpha$ is totally Fatou at all places. In this case, since at each place the measure associated to $\pol$ is supported on its Julia set, and the preperiodic points of $\pol$ are discrete in the Fatou set, it is possible to construct a suitable truncation function which is sufficiently regular to apply our quantitative equidistribution theorem, and agrees with $\log |z-\alpha|_v$ at all preperiodic points, and on the support of the measure associated to $\pol$. 
We construct this truncation function in Section~\ref{sec:Trunc}, essentially by cutting out a disk of radius $\delta_{\pol,v}(\alpha)$. We then give the proof of Theorem~\ref{thm:main} in Section~\ref{sec:MainProof}. Most of the rest of the paper concerns a unicritical polynomial $\unipol(z)=z^2+c$, and is devoted to obtaining lower bounds for $\delta_v = \delta_{\unipol,v}(0)$ of an appropriate form at each relevant place, in order to eventually prove Theorem~\ref{thm:UniformMain} in Section~\ref{sec:Unif}. In Section~\ref{sec:arch} we deal with the archimedean places $v$, treating separately two different cases. Firstly, when the parameter $c$ lies outside the $v$-adic Mandelbrot set, we use properties of the geometry of the Julia set of $f$, as well as the H\"{o}lder continuity of its Green function to bound $\delta_v$ in terms of the $v$-adic escape rate $\loccanheight{v}{\unipol}(0)$. When $c$ lies inside a hyperbolic component of the $v$-adic Mandelbrot set, we use distortion theorems from complex analysis together with uniformisation results from complex dynamics to bound $\delta_v$ in terms of the multiplier and the period of the component. We then remove the dependence on the multiplier using properties of the critical height found in \cite{I2}. In Section~\ref{sec:nonarch}, we apply techniques from non-archimedean analysis and dynamics to bound $\delta_v$ when $v$ is a finite place of good reduction. Finally, we prove Theorem~\ref{thm:SunitBound} in Section~\ref{sec:Sunit} using an elementary argument, together with a lower bound on the number of distinct roots of $\pol^n(z)-\pol^m(z)$, $n > m \geq 0$. \subsection{Acknowledgements} The author is indebted to his supervisor Holly Krieger for her support and guidance on this project, and to Rob Benedetto for useful discussions relevant to \textsection \ref{sec:nonarch}. 
\section{Preliminaries} \label{sec:Prelim} \subsection{Reduction of rational maps} \label{subsec:Reduction} Given a non-archimedean place $v$ of $K$, denote the \emph{ring of integers} and \emph{residue field} of $K$ with respect to $v$ respectively by $\cO_v$ and $k_v$. Given a rational function $\rat: \P^1 \to \P^1$ of degree $d$ defined over $K$, represented as $$ \rat([x,y]) = [g(x,y),h(x,y)], $$ where $g,h \in \cO_v[x,y]$ are homogeneous polynomials of degree $d$ with no common irreducible factors in $K[x,y]$ and at least one coefficient of $g$ or $h$ has $v$-adic absolute value 1, we say that $\rat$ has \emph{good reduction} at $v$ if the reductions of $g$ and $h$ modulo $v$ are relatively prime in $k_v[x,y]$. Otherwise we say that $\rat$ has \emph{bad reduction} at $v$. Let $M^{0,\rat}_{K,\mathrm{good}}$ denote the set of places of $K$ at which $\rat$ has good reduction, and let $M^{0,\rat}_{K,\mathrm{bad}} := M^0_K \setminus M^{0,\rat}_{K,\mathrm{good}}$. Note that the polynomial $\unipol(z) = z^d+c$, $c \in K$ has good reduction at $v$ if and only if $|c|_v \leq 1$. \subsection{Fatou and Julia sets} \label{subsec:FatouJulia} Let $v \in M_K$. The \emph{Fatou set} $\cF_v(\rat)$ associated to a rational function $\rat$ is the largest open subset of $\P^1(\C_v)$ such that $\{ \rat^n \}_{n=1}^\infty$ is equicontinuous with respect to the spherical metric at all $z \in \cF_v(\rat)$. The \emph{Julia set} $\cJ_v(\rat)$ is the complement of the Fatou set. In the case of a polynomial $\pol$, $\cJ_v(\pol)$ is the boundary of the \emph{filled Julia set} $$ \cK_v(\pol) := \{ z \in \P^1(\C_v) : |\pol^n(z)|_v \not \to \infty \} $$ for $\pol$. If $v \in M_K^0$ is a place at which $\rat$ has good reduction, then $\cJ_v(\rat) = \emptyset$. Note also that $\cJ_v(\rat)$ is contained in the closure of the set of all periodic points for $\rat$. In particular, $D(\alpha, \delta_{\rat,v}(\alpha)) \cap \cJ_v(\rat) = \emptyset$.
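For $\unipol(z) = z^2 + c$ with $c \in \Q$, the good-reduction criterion of \textsection \textsection \ref{subsec:Reduction} says that $\unipol$ has bad reduction exactly at the primes $p$ with $|c|_p > 1$, i.e.\ the primes dividing the denominator of $c$. A small sketch (the helper name is ours, not the paper's):

```python
# For f_c(z) = z^2 + c, c in Q: bad reduction occurs exactly at the
# primes dividing the denominator of c (the criterion |c|_p <= 1).
from fractions import Fraction

def bad_reduction_primes(c):
    """Primes of bad reduction for z^2 + c: prime factors of denom(c)."""
    den = Fraction(c).denominator
    primes, d = set(), 2
    while d * d <= den:                  # trial division of the denominator
        if den % d == 0:
            primes.add(d)
            while den % d == 0:
                den //= d
        d += 1
    if den > 1:
        primes.add(den)
    return sorted(primes)

print(bad_reduction_primes(Fraction(3, 20)))   # -> [2, 5]
print(bad_reduction_primes(-7))                # -> [] (good reduction everywhere)
```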
For details about the Fatou and Julia sets, see \cite{B} \textsection \textsection 1.6.2 and Chapter~5. \subsection{The Mandelbrot set} \label{subsec:Mandelbrot} The \emph{Mandelbrot set} (see \cite{DH}) $\cM \subset \C$ is defined as \begin{align*} \cM & = \{ c \in \C : |\unipol^n(0)| \text{ is bounded} \} \\ & = \{ c \in \C : \text{the Julia set of $\unipol$ is connected} \}, \end{align*} where $\unipol(z)=z^2+c$. We say that a connected component $\cH$ of the interior of $\cM$ is a \emph{hyperbolic component} of period $t$ if for all parameters $c \in \cH$, the map $\unipol(z)=z^2+c$ has an attracting periodic cycle of period $t$. Such a cycle, if it exists, must in fact be unique. For each archimedean place $v$ of $K$, by identifying $\C_v$ with $\C$, we obtain a $v$-adic Mandelbrot set $\cM_v \subset \C_v$. \subsection{The Riemann sphere} \label{subsec:RiemannSphere} At an archimedean place $v$ of $K$, we identify $\P^1(\C_v)$ with $\P^1(\C)$, which we consider as a Riemannian 2-manifold by equipping it with the standard atlas and the Fubini-Study metric (see for example \cite[\textsection 4.1]{NC}), which induces an inner product $\langle \cdot, \cdot \rangle_g$ on the tangent space, and we let $\mu$ denote the volume element on $\P^1(\C)$. We extend the induced operators $\nabla$ and $\Delta$ (gradient and Laplacian respectively) to arbitrary distributions on $\P^1(\C)$ in the usual way (see for example \cite[\textsection 5.2]{NC}). We say that a signed measure $\rho$ on $\P^1(\C)$ has \emph{continuous potentials} if there is a continuous function $h : \P^1(\C) \to \R$ such that $\Delta h = \rho - \lambda$ in the sense of distributions, where $\lambda$ is Lebesgue measure on the unit circle.
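Returning briefly to the Mandelbrot set of \textsection \textsection \ref{subsec:Mandelbrot}: its first characterisation yields the classical escape-time membership test, using the standard fact that once $|\unipol^n(0)| > \max(|c|,2)$ the critical orbit must tend to infinity. A sketch follows (not a rigorous test: the iteration cap makes a "bounded" verdict heuristic near the boundary of $\cM$, while an escape verdict is certified).

```python
# Escape-time test for membership of c in the Mandelbrot set: iterate
# the critical orbit of z^2 + c; once |z| > max(|c|, 2), escape to
# infinity is guaranteed.  "True" is heuristic near the boundary.

def in_mandelbrot(c, max_iter=500):
    z, R = 0j, max(abs(c), 2.0)
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > R:
            return False        # certified escape: c lies outside M
    return True                 # no escape observed: heuristically in M

print(in_mandelbrot(-1))        # c = -1: period-2 hyperbolic component -> True
print(in_mandelbrot(0.26))      # just beyond the cusp c = 1/4 -> False
```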
Given two signed measures $\rho$ and $\rho'$ on $\P^1(\C)$ such that $\log |z-w|$ is integrable with respect to $|\rho| \otimes |\rho'|$ on $\C \times \C \setminus \mathrm{Diag}$ (where $\mathrm{Diag} = \{ (z,z) : z \in \C \}$), we define the \emph{mutual energy} of $\rho$ and $\rho'$ by $$ (\rho,\rho')=-\int_{\C \times \C \setminus \mathrm{Diag}} \log |z-w| d\rho(z) d \rho'(w). $$ For any functions $f,h$ on $\P^1(\C)$ whose weak gradients exist, we define their \emph{Dirichlet form} by $$ \langle f, h \rangle = \frac{1}{2\pi} \int_{\P^1(\C)} \langle \nabla f, \nabla h \rangle_g d \mu, $$ which can be calculated in charts as (see \cite[Definition~2.17]{NC}) \begin{align} \label{eq:DirichletForm} \langle f, h \rangle = \int_{\overline D(0,1)} & \left( \frac{\partial f_0}{\partial x} \frac{\partial h_0}{\partial x} + \frac{\partial f_0}{\partial y} \frac{ \partial h_0}{\partial y} \right) dx dy \notag \\ & + \int_{D(0,1)} \left( \frac{\partial f_1}{\partial x} \frac{\partial h_1}{\partial x} + \frac{\partial f_1}{\partial y} \frac{ \partial h_1}{\partial y} \right) dx dy, \end{align} where $f_0(x,y)=f([x+iy,1])$ and $f_1(x,y)=f([1,x+iy])$, and $[ \cdot, \cdot]$ are homogeneous coordinates on $\P^1(\C)$. Let $H^1(\P^1(\C))$ denote the Sobolev space consisting of square integrable functions $\phi$ on $\P^1(\C)$ with weak gradients, satisfying $\langle \phi,\phi \rangle < \infty$ (equivalently in local coordinates, the weak partial derivatives of $\phi$ exist and are square integrable). We have the following Cauchy-Schwarz inequality. \begin{lemma} \label{lem:ArchCauchySchwarz} For every function $\phi \in H^1(\P^1(\C))$ and every signed measure $\rho$ with $\rho(\P^1(\C)) = 0$ and such that $|\rho|$ has continuous potentials, we have $$ \left| \int_{\P^1(\C)} \phi \, d \rho \right|^2 \leq \langle \phi, \phi \rangle (\rho, \rho).
$$ \end{lemma} \begin{proof} This is proved for $\phi \in \cC^1(\P^1(\C))$ in \cite[\textsection \textsection 5.5]{FRL} and \cite[Proposition~2.9]{NC}, but can easily be extended to $\phi \in H^1(\P^1(\C))$. \end{proof} Given a rational function $\rat$ of degree $d$, there is a canonical probability measure $\rho_{\rat,v}$ on $\P^1(\C_v)$, called the \emph{equilibrium measure} or \emph{measure of maximal entropy} associated to $\rat$ at $v$, whose support is the Julia set $\cJ_v(\rat)$, and which satisfies $\rat_*( \rho_{\rat,v}) = \rho_{\rat,v}$ and $\rat^*(\rho_{\rat,v}) = d \cdot \rho_{\rat,v}$ (see for example \cite{Br}). \subsection{The Berkovich projective line} \label{subsec:Berk} We recall details of this construction as presented in \cite[\textsection 3]{FRL} and \cite[\textsection 1]{RW}. For each place $v$ of $K$, the \emph{Berkovich affine line} $\BA(\C_v)$ is defined to be the set of multiplicative seminorms $\zeta = \| \cdot \|_\zeta$ on the polynomial ring $\C_v[T]$ that extend the absolute value $| \cdot |_v$ on $\C_v$, together with the weakest topology such that for every $f \in \C_v[T]$, the function $\zeta \mapsto \| f \|_\zeta$ is continuous (called the \emph{Gel'fand topology}). At archimedean places, we have $\BA(\C_v) \cong \C_v$ (this can be deduced, for example, from the Gelfand-Mazur theorem). Let $v$ be a non-archimedean place. Then the \emph{Berkovich projective line} $\BP(\C_v) = \BA(\C_v) \cup \{ \infty \}$ is a Hausdorff, compact and uniquely path connected topological space containing $\P^1(\C_v)$ as a dense subspace (see \cite[Chapter~6]{B}, or \cite[\textsection 3]{FRL} for details of its construction and properties). In particular, $\BP(\C_v)$ has the structure of an $\R$-tree (see \cite[Appendix~B]{B}). Given a disk $D(a,r) \subseteq \C_v$, there is a corresponding point $\zeta(a,r) \in \BA(\C_v)$, whose seminorm is given by $$ \| f \|_{\zeta(a,r)} = \sup \{ |f(z)|_v : z \in D(a,r) \}. 
$$ The point $\zeta_G := \zeta(0,1)$ is called the \emph{Gauss point}. By Berkovich's classification theorem \cite[Theorem~6.9]{B}, all points of $\BA(\C_v)$ either correspond to a disk $D(a,r)$ (and are respectively called Type I, II, and III points when $r=0$, $r \in |\C_v^\times|$ and $r \notin |\C_v^\times|$), or to (a cofinal equivalence class of) a sequence of nested disks $D(a_n,r_n)$ with empty intersection. A point $\zeta \in \BA(\C_v)$ of the latter kind, called Type IV, has seminorm given by $$ \| f \|_\zeta = \lim_{n \to \infty} \| f \|_{\zeta(a_n,r_n)}. $$ For $\zeta \in \BA(\C_v)$, the \emph{absolute value} of $\zeta$ is $|\zeta|:= \| T \|_\zeta$ and the \emph{diameter} of $\zeta$ is $$ \diam(\zeta) := \inf \{ \| T - a \|_\zeta : a \in \C_v \}. $$ As $\BP(\C_v)$ is uniquely path-connected, given any $x,y \in \BP(\C_v)$ we will denote by $[x,y]$ the unique path between $x$ and $y$, and denote by $(x,y)$, $[x,y)$ and $(x,y]$ the corresponding open and half-open paths. Moreover, if we fix a point $\zeta \in \BP(\C_v)$, the set $[x,\zeta] \cap [y,\zeta] \cap [x,y]$ consists of a single point (see \cite[Proposition~6.35]{B}) which we denote by $x \vee_\zeta y$. We write $\vee$ for $\vee_\infty$ and $\vee_G$ for $\vee_{\zeta_G}$. \subsection*{Distances on $\BP(\C_v)$} The set $\H_v : = \BP(\C_v) \setminus \P^1(\C_v)$ is called the \emph{hyperbolic space} over $\C_v$, and admits the following metric, called the \emph{logarithmic path distance}: \begin{equation} \label{eq:defPathDistance} d_{\H_v}(x,y) = 2 \log \diam(x \vee y) - \log \diam(x) - \log \diam(y). \end{equation} We can extend $d_{\H_v}$ to a singular metric on $\BP(\C_v)$ by declaring that for $x \in \P^1(\C_v)$ and $y \in \BP(\C_v)$, we have $d_{\H_v}(x,y) = \infty$ if $x \neq y$ and 0 otherwise. 
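The logarithmic path distance just defined can be computed explicitly between points of Type II or III: for $x = \zeta(a,r)$ and $y = \zeta(b,s)$ with $r,s > 0$, the join $x \vee y$ corresponds to the smallest disk containing both, so that $\diam(x \vee y) = \max(|a-b|_v, r, s)$. A small numerical sketch (the function name is ours):

```python
# d_H between zeta(a,r) and zeta(b,s), r,s > 0, given |a-b|_v and the
# radii: 2 log diam(join) - log r - log s, with
# diam(join) = max(|a-b|_v, r, s) in the non-archimedean setting.
from math import log

def path_distance(a_minus_b_abs, r, s):
    """Logarithmic path distance d_H(zeta(a,r), zeta(b,s))."""
    join_diam = max(a_minus_b_abs, r, s)
    return 2 * log(join_diam) - log(r) - log(s)

# Distance from the Gauss point zeta(0,1) to zeta(0, 1/5) (e.g. v 5-adic):
print(path_distance(0.0, 1.0, 1/5))            # log 5 = 1.609...

# Additivity along the segment through zeta(0, 1/5):
d1 = path_distance(0.0, 1.0, 1/5)
d2 = path_distance(0.0, 1/5, 1/25)
assert abs(path_distance(0.0, 1.0, 1/25) - (d1 + d2)) < 1e-12
```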
We can also define the diameter of a point $x \in \BP(\C_v)$ with respect to the Gauss point $\zeta_G$ by \begin{equation} \label{eq:defGaussDiam} \diam_G(x) = \exp(-d_{\H_v}(\zeta_G,x)), \end{equation} and subsequently obtain a metric $\ud$ proportional to that defined by Favre and Rivera-Letelier \cite[\textsection \textsection 4.7]{FRL} on $\BP(\C_v)$, given by \begin{equation} \label{eq:FRLDist} \ud(x,y) = 2 \, \diam_G(x \vee_G y) - \diam_G(x) - \diam_G(y). \end{equation} This satisfies $0 \leq \ud(x,y) \leq 2$ and $\ud(x,y) = 2 \| x,y \|$ when $x,y \in \P^1(\C_v)$ (see \cite[Proposition~1.1]{RW}), where $\| \cdot, \cdot \|$ denotes the spherical metric on $\P^1(\C_v)$. Both $d_{\H_v}$ and $\ud$ are additive on paths in the sense that if $z$ is any point in $[x,y]$, we have $d_{\H_v}(x,y) = d_{\H_v}(x,z) + d_{\H_v}(z,y)$, and similarly for $\ud$. The \emph{Gromov product} (with the Gauss point $\zeta_G$ as a base point) is the function $\langle \cdot, \cdot \rangle_G : \BP(\C_v) \times \BP(\C_v) \to [0,\infty]$ given by \begin{equation} \label{eq:GromovDef} \langle x, y \rangle_G = d_{\H_v}(x \vee_G y,\zeta_G). \end{equation} \subsection*{Potential Theory on $\BP(\C_v)$} Every rational function $\rat: \P^1(\C_v) \to \P^1(\C_v)$ has a unique continuous extension to $\BP(\C_v)$ \cite[\textsection 7.1]{B}. We also have a notion of the Julia set of $\rat$ on the Berkovich projective line. We say an open set $U \subseteq \BP(\C_v)$ is \emph{dynamically stable} under $\rat$ if $\bigcup_{n \geq 0} \rat^n(U)$ omits infinitely many points of $\BP(\C_v)$. The \emph{Berkovich Fatou set}, which we denote $\cF_{v,\mathrm{Berk}}(\rat)$, is the subset of $\BP(\C_v)$ consisting of all points $x \in \BP(\C_v)$ having a dynamically stable neighbourhood. Again, the \emph{Berkovich Julia set} is given by $\cJ_{v,\mathrm{Berk}}(\rat) = \BP(\C_v) \setminus \cF_{v,\mathrm{Berk}}(\rat)$.
When $v$ is a place of good reduction for $\rat$, we have $\cJ_{v,\mathrm{Berk}}(\rat) = \{ \zeta_G \}$. Denote by $M_{\mathrm{Rad}}$ the set of signed Radon measures on $\BP(\C_v)$, and let $\rho \in M_{\mathrm{Rad}}$. Then the \emph{potential} of $\rho$ based at $\zeta_G$ is the function $u_\rho : \H_v \to \R$ given by $$ u_\rho(x) := - \rho(\BP(\C_v)) - \int_{\BP(\C_v)} \langle x, y \rangle_G \, d \rho(y). $$ Note that the potential of a delta measure at a point $y$, which we denote $[y]$, is given by $u_{[y]}(x) = -1 - \langle x,y \rangle_G$. Let $\cQ$ denote the space of potentials $$ \cQ := \{ f: \H_v \to \R : f = u_\rho \text{ for some } \rho \in M_{\mathrm{Rad}} \}. $$ This is a vector space which contains all functions of the form $\langle \cdot, y \rangle_G$. With this framework, one can define a non-archimedean Laplacian operator on $\BP(\C_v)$ and develop the relevant notions in potential theory. Namely, define the function $\Delta : \cQ \to M_{\mathrm{Rad}}$ by $$ \Delta u_\rho := \rho - \rho(\BP(\C_v)) \cdot [\zeta_G]. $$ Importantly, we have \begin{equation} \label{eq:LaplacGromov} \Delta \langle \cdot, y \rangle_G = [\zeta_G] - [y]. \end{equation} For $\rho \in M_{\mathrm{Rad}}$, we say that $\rho$ has \emph{continuous potentials} if the associated potential function $u_\rho$ extends to a function on $\BP(\C_v)$ which is continuous with respect to the Gel'fand topology. Given a pair of measures $\rho, \rho' \in M_{\mathrm{Rad}}$, such that $\log \diam ( \cdot \vee \cdot )$ is integrable with respect to $\rho \otimes \rho'$ on $\BA(\C_v) \times \BA(\C_v) \setminus \mathrm{Diag}$ (where $\mathrm{Diag} = \{ (z,z) : z \in \C_v \}$ is the Type I diagonal), we define the \emph{mutual energy} of $\rho$ and $\rho'$ to be \begin{equation} \label{eq:defMutualEnergy} (\rho,\rho') = - \int_{\BA(\C_v) \times \BA(\C_v) \setminus \mathrm{Diag}} \log \diam( x \vee y) \, d \rho(x) d \rho'(y). 
\end{equation} Any rational map $\rat : \P^1(\C_v) \to \P^1(\C_v)$ of degree $d \geq 2$ extends naturally to a map $\rat: \BP(\C_v) \to \BP(\C_v)$. As in the archimedean case, we can associate to $\rat$ an \emph{equilibrium measure}, i.e., a probability measure $\rho_{\rat,v}$ on $\BP(\C_v)$ which satisfies $\rat_*(\rho_{\rat,v}) = \rho_{\rat,v}$ and $\rat^*(\rho_{\rat,v}) = d \cdot \rho_{\rat,v}$ (see \cite[Theorem~13.37]{B} or \cite[\textsection \textsection 6.1]{FRL} for details of its construction). Once again, we note that $\mathrm{supp}(\rho_{\rat,v}) = \cJ_{v,\mathrm{Berk}}(\rat)$. We say a continuous function $\phi : \BP(\C_v) \to \R$ is $\cC^k$ for $k \geq 1$ if it is locally constant off a finite and closed subtree $\cT \subset \H_v$, and $\phi$ is $\cC^k$ on $\cT$, where for $x \in \cT$ we can define the derivative $\partial \phi(x)$ as the left derivative of $\phi$ at $x$ on the path $[\zeta_G, x]$, parameterised by the distance $d_{\H_v}$, and $\partial \phi(x)=0$ for $x \notin \cT$. Now, for a $\cC^1$ function $\phi$, set $$ \langle \phi, \phi \rangle := \int_{\BP(\C_v)} (\partial \phi)^2 d \mu, $$ where $\mu$ is the measure on $\BP(\C_v)$ which is null on $\P^1(\C_v)$, and coincides with Hausdorff measure of dimension 1 on $\H_v$ with respect to the distance $d_{\H_v}$. As in the archimedean case, we have a kind of Cauchy-Schwarz inequality \cite[\textsection \textsection 5.5]{FRL}. \begin{lemma} \label{lem:nonArchCauchySchwarz} Let $\phi$ be a $\cC^1$ function on $\BP(\C_v)$, and let $\rho \in M_{\mathrm{Rad}}$ have continuous potentials and satisfy $\rho(\BP(\C_v)) = 0$. Then $$ \left| \int_{\BP(\C_v)} \phi \, d \rho \right|^2 \leq \langle \phi, \phi \rangle (\rho, \rho). $$ \end{lemma} For $\varepsilon > 0$, let $\pi_\varepsilon : \BP(\C_v) \to \BP(\C_v)$ be the map which sends $x \in \BP(\C_v)$ to the unique point $y$ in $[x,\infty]$ with $\diam(y) = \max \{ \diam(x),\varepsilon \}$.
We define the \emph{regularisation} of a measure $\rho \in M_{\mathrm{Rad}}$ to be $\rho_\varepsilon := (\pi_\varepsilon)_* \rho$. \subsection{Adelic measures and heights} \label{subsec:AdelMeasHei} When $v$ is an infinite place of $K$, let $\lambda_v$ denote the probability measure proportional to Lebesgue measure on the unit circle $S^1 \subset \P^1(\C_v)$. When $v$ is finite, let $\lambda_v$ denote the point mass at the Gauss point in $\BP(\C_v)$. An \emph{adelic measure} $\rho = \{ \rho_v \}_{v \in M_K}$ consists of a probability measure $\rho_v$ on $\BP(\C_v)$ with continuous potentials for each place $v \in M_K$, such that $\rho_v = \lambda_v$ for all but finitely many places $v$. To an adelic measure $\rho$ we can associate an \emph{adelic height}, given by \begin{equation} \label{eq:adelHeight} h_\rho(F) := \frac{1}{2} \sum_{v \in M_K} ( [F] - \rho_v, [F]-\rho_v )_v, \end{equation} for any finite set $F \subset \overline K$ invariant under $\Gal(\overline K/K)$. Note that in \cite[Definition~1.2]{FRL}, $\llrrparen{ \cdot , \cdot }_v$ is used in place of $( \cdot , \cdot )_v$, denoting a normalisation of the mutual energy, but this is consistent with the way we have initially normalised our absolute values on $K$. For $\alpha \in \P^1(\overline K)$, we let $h_\rho(\alpha) = h_\rho(F)$ where $F$ is the orbit of $\alpha$ under the action of $\Gal(\overline K/K)$. When $\rho = \{ \lambda_v \}_{v \in M_K}$, we have $h_\rho = h$, the usual height, whose definition we recall in the next subsection. We say that an adelic measure $\rho$ has \emph{H\"{o}lder continuous potentials} if for each place $v$, there exists a H\"{o}lder continuous (with respect to the spherical metric in the archimedean case, and the metric $\ud$ defined in \eqref{eq:FRLDist} in the non-archimedean case) function $g_v : \BP(\C_v) \to \R$ with $\Delta g_v = \rho_v - \lambda_v$. 
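As a consistency check on the normalisations (a standard computation, included here only for illustration), one can verify the identity $h_\rho = h$ directly when $F$ is a single rational point.
\begin{example}
Let $\rho = \{ \lambda_v \}_{v \in M_K}$ and $F = \{ \alpha \}$ with $\alpha \in K$, so that $F$ is $\Gal(\overline K/K)$-invariant. At a finite place $v$ we have $\lambda_v = [\zeta_G]$, and expanding the mutual energy \eqref{eq:defMutualEnergy} bilinearly (the diagonal term $([\alpha],[\alpha])_v$ contributes nothing, as the diagonal is excluded) gives
$$
( [\alpha] - \lambda_v, [\alpha] - \lambda_v )_v = 2 \log \diam( \alpha \vee \zeta_G ) - \log \diam( \zeta_G \vee \zeta_G ) = 2 \log^+ |\alpha|_v,
$$
since $\diam(\alpha \vee \zeta_G) = \max \{ 1, |\alpha|_v \}$ and $\diam(\zeta_G \vee \zeta_G) = 1$. At an archimedean place, where the mutual energy is the classical energy with kernel $-\log|x-y|_v$, the same expansion together with the identity $\int_{S^1} \log |z - \alpha|_v \, d\lambda_v(z) = \log^+ |\alpha|_v$ and the fact that $(\lambda_v,\lambda_v)_v = 0$ (the unit circle has logarithmic capacity one) again yields $2 \log^+ |\alpha|_v$. Hence \eqref{eq:adelHeight} gives
$$
h_\rho(F) = \frac{1}{2} \sum_{v \in M_K} 2 \log^+ |\alpha|_v = h(\alpha),
$$
as claimed above.
\end{example}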
It turns out that in the archimedean case, it suffices (and is more convenient) to make some calculations with the Euclidean distance, rather than the spherical one. Hence, given an adelic measure $\rho$ with H\"{o}lder continuous potentials, we will choose $g_v$ such that there exist constants $C_v \geq 0$ and $0 < \kappa_v \leq 1$ with \begin{equation} \label{eq:defHCP} \begin{cases} |g_v^*(z)-g_v^*(w)| \leq C_v |z-w|_v^{\kappa_v} \text{ for all } z,w \in \C_v, & \text{if } v \in M_K^\infty, \\ |g_v(z)-g_v(w)| \leq C_v \ud(z,w)^{\kappa_v} \text{ for all } z,w \in \BP(\C_v), & \text{if } v \in M_K^0, \end{cases} \end{equation} where for $v \in M_K^\infty$, $g_v^*(z) = g_v(z) + \log^+|z|_v$ so that $\Delta g_v^* = \rho_v$. Favre and Rivera-Letelier \cite[Theorem~4]{FRL} show that for a rational function $\rat$, $\rho_{\rat} = \{ \rho_{\rat,v} \}_{v \in M_K}$ is an adelic measure with H\"{o}lder continuous potentials, where $\rho_{\rat,v}$ is the equilibrium measure of $\rat$ at $v$, and that the adelic height $h_{\rho_{\rat}}$ coincides with the canonical height $\canheight{\rat}$ for $\rat$. \subsection{Heights of algebraic numbers} \label{subsec:Heights} Recall that the \emph{(absolute logarithmic) height} of an algebraic number $\alpha$ is given by $$ h(\alpha) = \sum_{v \in M_K} \log^+ |\alpha|_v, $$ where $K$ is any number field containing $\alpha$, and $\log^+x = \log \max \{ 1, x \}$. The height is independent of the choice of number field, invariant under Galois conjugation, and satisfies the following properties (see \cite[\textsection \textsection 1.5]{BG}).
For algebraic numbers $\alpha_1,\ldots,\alpha_r$, we have \begin{equation} \label{eq:HeightSumInequality1} h(\alpha_1 + \cdots + \alpha_r) \leq h(\alpha_1) + \cdots + h(\alpha_r) + \log r, \end{equation} and moreover \begin{align*} h(\alpha_1) & = h( (\alpha_1 + \cdots + \alpha_r) - \alpha_2 - \cdots - \alpha_r ) \\ & \leq h(\alpha_1 + \cdots + \alpha_r ) + h(-\alpha_2) + \cdots + h(-\alpha_r) + \log r, \end{align*} and so \begin{equation} \label{eq:HeightSumInequality2} h(\alpha_1 + \cdots+ \alpha_r) \geq h(\alpha_1) - h(\alpha_2) - \cdots - h(\alpha_r)-\log r. \end{equation} For a non-zero algebraic number $\alpha$ and a rational number $\lambda$, we have $h(\alpha^\lambda) = |\lambda| h(\alpha)$. For $K$ a number field, $S \subset M_K$ a finite set of places, and $\alpha \in K \setminus \{ 0 \}$, we have the fundamental inequality \begin{equation} \label{eq:FundamentalHeightInequality} -h(\alpha) \leq \sum_{v \in S} \log |\alpha|_v \leq h(\alpha). \end{equation} Now, we can associate to a given rational function $\rat : \P^1 \to \P^1$ a \emph{canonical height} (see \cite[\textsection 3.4]{Si}) $\canheight{\rat}$ given by $$ \canheight{\rat}(\alpha) = \lim_{n \to \infty} \frac{1}{d^n} h(\rat^n(\alpha)). $$ This satisfies $\canheight{\rat}(\alpha) = h(\alpha) + O(1)$, and $\canheight{\rat}(\rat(\alpha)) = d \canheight{\rat}(\alpha)$. When $\rat=\pol$ is a polynomial, $\canheight{\pol}$ has the following decomposition into \emph{local canonical heights} (or \emph{$v$-adic escape rates}) given by $$ \canheight{\pol}(\alpha) = \sum_{v \in M_K} \loccanheight{v}{\pol}(\alpha), $$ where $K$ is a number field containing $\alpha$, and $$ \loccanheight{v}{\pol}(\alpha) = \lim_{n \to \infty} \frac{1}{d^n} \log^+ |\pol^n(\alpha)|_v. $$ Note that if $v$ is a place of good reduction for $\pol$, then we have $\loccanheight{v}{\pol}(\alpha) = \log^+ |\alpha|_v$.
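To make the $v$-adic escape rate concrete, the following Python sketch (ours, purely illustrative) computes the finite-stage approximations $2^{-n} \log^+ |\pol^n(\alpha)|_v$ for $\pol(z) = z^2 + c$ at the $2$-adic place of $\Q$, using exact rational arithmetic. With $c = 1/2$ and $\alpha = 1/2$ we have $|\alpha|_2 = 2 > \max \{ 1, |c|_2^{1/2} \}$, so $|\pol(z)|_2 = |z|_2^2$ along the whole orbit and every approximation equals $\log |\alpha|_2 = \log 2$.

```python
from fractions import Fraction
from math import log

def v2(x: Fraction) -> int:
    """2-adic valuation of a nonzero rational number."""
    assert x != 0
    n, d, v = x.numerator, x.denominator, 0
    while n % 2 == 0:
        n //= 2
        v += 1
    while d % 2 == 0:
        d //= 2
        v -= 1
    return v

def escape_rate_stage(c: Fraction, alpha: Fraction, n: int) -> float:
    """Finite-stage approximation 2^{-n} log^+ |f^n(alpha)|_2 of the
    2-adic escape rate of f(z) = z^2 + c, where |x|_2 = 2^{-v2(x)}."""
    z = alpha
    for _ in range(n):
        z = z * z + c
    # log^+ |z|_2 = max(0, -v2(z)) * log 2 (with log^+ |0|_2 read as 0)
    return 0.0 if z == 0 else max(0, -v2(z)) * log(2) / 2 ** n

# Bad reduction at p = 2 (|c|_2 = 2 > 1); |alpha|_2 = 2 exceeds |c|_2^(1/2),
# so the approximations are constant, all equal to log |alpha|_2 = log 2.
print([escape_rate_stage(Fraction(1, 2), Fraction(1, 2), n) for n in range(5)])
```

The function and variable names here are ours; the computation uses only the defining limit of the escape rate together with the $2$-adic absolute value $|x|_2 = 2^{-v_2(x)}$.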
Also, we have the following formula for the local height (see the proof of \cite[Proposition~1.3]{FRL}): \begin{equation} \label{eq:MahlerFormula} \loccanheight{v}{\pol}(\alpha) = \int_{\BP(\C_v)} \log |z-\alpha|_v d \rho_{\pol,v}, \end{equation} where $\rho_{\pol,v}$ is the equilibrium measure associated to $\pol$ at the place $v$, and for non-archimedean places, we mean $$ \log |z-\alpha|_v := \log \diam(z \vee \alpha) $$ for $z \in \BP(\C_v)$. Note that at archimedean places, this implies \begin{equation} \label{eq:ArchPotLCH} \Delta \loccanheight{v}{\pol} = \rho_{\pol,v}. \end{equation} \subsection{Canonical heights of unicritical polynomials} We now collect a number of useful properties of the local and global canonical heights associated to a unicritical polynomial. \begin{lemma} \label{lem:BadReductHeight} Let $\unipol(z) = z^2 + c$, $c \in K$, and suppose that $v$ is a place of bad reduction for $\unipol$, or equivalently $|c|_v > 1$. Then $$ \loccanheight{v}{\unipol}(\alpha) = \log \max \{ |\alpha|_v, |c|_v^{1/2} \} $$ for all $\alpha \in \C_v$ with $|\alpha|_v \neq |c|_v^{1/2}$. \end{lemma} \begin{proof} This is an easy induction. \end{proof} \begin{remark} \label{rem:BadReductHeight} Note that the above result shows that at a place $v$ of bad reduction for $\unipol(z)=z^2+c$, any point $\beta$ which is preperiodic for $\unipol$ must satisfy $|\beta|_v = |c|_v^{1/2}$. Similarly, if $v$ is a place of good reduction for $\unipol$, then preperiodic points $\beta$ for $\unipol$ must satisfy $|\beta|_v \leq 1$. \end{remark} For archimedean places $v$, the local canonical height is precisely the Green function of the filled Julia set of $\unipol$, and satisfies the following properties (these are proved in \cite[Lemma~2.3]{BD}). \begin{lemma} \label{lem:LocalHeightProps} Let $\unipol(z)=z^2+c$, $c \in K$. Let $v$ be an archimedean place of $K$, and let $z \in \C_v$.
Then \begin{enumerate} \item[(a)] When $|c|_v \geq 1$ or $|z|_v \geq |c|_v^{1/2}$ we have $$ \loccanheight{v}{\unipol}(z) \leq \log 2 + \max \left \{ \frac{1}{2} \log |c|_v, \log |z|_v \right \}. $$ \item[(b)] $\max \{ \loccanheight{v}{\unipol}(z), \loccanheight{v}{\unipol}(0) \} \geq \log |z|_v - \log 4.$ \end{enumerate} \end{lemma} Ingram \cite[Theorem~2]{I} gives the following lower bound for the canonical height (with respect to $\unipol$) of a non-preperiodic point in terms of the height of the parameter $c$. \begin{theorem} \label{thm:IngramCanHeightBound} Let $c \in K$, let $s$ be the number of places $v$ of bad reduction for $\unipol(z)=z^2+c$ such that the valuation $v(c)$, normalised so that $|c|_v=\exp(-v(c))$, is divisible by $2$, let $r \leq [K:\Q]$ be the number of distinct archimedean valuations on $K$, and let $$ N = \frac{5^{r+s+1}-1}{2}. $$ Then for any $\alpha \in K$, either $\unipol^i(\alpha) = \unipol^j(\alpha)$ for some $i \neq j < N$, or else $$ \canheight{\unipol}(\alpha) \geq \frac{1}{2^{N+2}} (h(c) - 12\log 2). $$ \end{theorem} Note the following trivial bounds on the canonical height when the parameter $c$ is an integer. \begin{lemma} \label{lem:IntCanHeightBounds} Let $K = \Q$. For $c=1$, $$ \canheight{\unipol}(0) \geq \frac{1}{4} \log 2 $$ and for $c \in \Z$ such that $0$ is not preperiodic for $\unipol$, i.e.\ $c \neq -2,-1,0$, $$ \canheight{\unipol}(0) \geq \frac{1}{4} h(c). $$ \end{lemma} Combining some of the above results gives the following. \begin{corollary} \label{cor:CanHeightBound1} With notation as in Theorem~\ref{thm:IngramCanHeightBound}, we have $$ h(c) \leq 2^{N+2} \canheight{\unipol}(0) + 12 \log 2. $$ Moreover, in the case where $K = \Q$ and $c \in \Z$, we have $h(c) \leq 4 \canheight{\unipol}(0)$. \end{corollary} \begin{proof} The result follows from Theorem~\ref{thm:IngramCanHeightBound}. The integer case follows from Lemma~\ref{lem:IntCanHeightBounds}.
\end{proof} We also obtain a lower bound for the canonical height of any non-preperiodic point $\alpha$ for $\unipol$. \begin{corollary} \label{cor:CanHeightLowBound} For any $B > 0$, let $\cN(K,B)$ denote the number of elements of $K$ with height at most $B$. Then, for any $\alpha \in K$ which is not preperiodic for $\unipol$, we have $$ \canheight{\unipol}(\alpha) \geq C_0 \max \{ 1, h(c) \}, $$ where $$ C_0 \geq \min \left \{ \frac{1}{2^{N+3}}, \frac{2 \log 2}{2^{\cN(K, 25\log 2)}} \right \}. $$ Moreover, when $\alpha = 0$, $K = \Q$ and $c \in \Z$, we can take $$ C_0 \geq \frac{1}{4}. $$ \end{corollary} Note that $\cN(K,B)$ can be estimated. For example \cite[Theorem~1.1]{D} gives $$ \log \cN(K,B) < B[K:\Q]^2 + \frac{17 B [K : \Q]^2 \log \log [K : \Q]}{\log [K : \Q]} $$ for all $B > 2 [K : \Q]^{-1} \log [K : \Q]$ when $[K : \Q]$ is sufficiently large. \begin{proof} If $h(c) \geq 24 \log 2$, then immediately from Theorem~\ref{thm:IngramCanHeightBound} we have $$ \canheight{\unipol}(\alpha) \geq \frac{h(c)}{2^{N+3}} = \frac{1}{2^{N+3}} \max \{ 1, h(c) \}. $$ By \eqref{eq:HeightSumInequality1} and \eqref{eq:HeightSumInequality2}, \begin{equation*} |h(\unipol(\beta)) - 2h(\beta) | \leq h(c) + \log 2, \end{equation*} for any $\beta \in K$. Hence, for any $\beta \in K$, \begin{equation} \label{eq:CanHeightConst} |\canheight{\unipol}(\beta) - h(\beta)| \leq h(c)+\log 2. \end{equation} (see for example the proof of \cite[Theorem~3.20]{Si}). Suppose $h(c) < 24 \log 2$. Then \eqref{eq:CanHeightConst} with $\beta = \unipol^\ell(\alpha)$ gives $$ 2^\ell \canheight{\unipol}(\alpha) = \canheight{\unipol}(\unipol^\ell(\alpha)) \geq h(\unipol^\ell(\alpha)) - 25 \log 2 $$ for all $\ell \geq 1$. 
Since $\alpha$ is not preperiodic for $\unipol$, there exists $\ell \leq \cN(K, 25\log 2)$ such that $$ h(\unipol^\ell(\alpha)) \geq 50\log 2, $$ and hence \begin{align*} \canheight{\unipol}(\alpha) & \geq \frac{50 \log 2}{2^{\cN(K, 25\log 2)}} \\ & \geq \frac{2 \log 2}{2^{\cN(K, 25\log 2)}} \max \{1, h(c) \}. \end{align*} The integer case comes immediately from Lemma~\ref{lem:IntCanHeightBounds}. \end{proof} \section{Explicit quantitative equidistribution} \label{sec:QuantEqui} For any $v \in M_K$ and any finite subset $F \subset \BP(\C_v)$, we denote by $[F]$ the probability measure $$ [F] := \frac{1}{|F|} \sum_{z \in F} [z], $$ where we recall $[z]$ is the Dirac measure at $z$. For $z \in \BP(\C_v)$ and $\varepsilon > 0$, if $v$ is archimedean let $[z]_\varepsilon$ denote the probability measure proportional to Lebesgue measure on the circle about $z$ of radius $\varepsilon$. Otherwise, denote by $\pi_\varepsilon : \BP(\C_v) \to \BP(\C_v)$ the map which sends a point $x$ to the unique point $y \in [x,\infty]$ such that $\diam(y) = \max \{ \diam(x),\varepsilon \}$, and set $[z]_\varepsilon = (\pi_\varepsilon)_* [z]$ (see \cite[\textsection \textsection 4.6]{FRL}). For a finite subset $F \subset \BP(\C_v)$, define $$ [F]_\varepsilon = \frac{1}{|F|} \sum_{z \in F} [z]_\varepsilon. $$ Let us firstly make the constant in \cite[Theorem~7]{FRL} explicit in terms of the constants and exponents of H\"{o}lder continuity of the given measure. \begin{theorem} \label{thm:QuantEquid} Suppose $\rho = \{ \rho_v \}_{v \in M_K}$ is an adelic measure with H\"{o}lder continuous potentials $\{ g_v \}_{v \in M_K}$, with constants $\{ C_v \}_{v \in M_K}$ and exponents $\{ \kappa_v \}_{v \in M_K}$ as in \eqref{eq:defHCP}. Let $V = \{ v \in M_K : \rho_v \neq \lambda_v \}$, where $\lambda_v$ is defined as in \textsection \textsection \ref{subsec:AdelMeasHei}, and let $C, \kappa > 0$ be constants such that $C \geq \sum_{v \in V} C_v$ and $\kappa \leq \min_{v \in V} \kappa_v$. 
Then for any place $v \in M_K$, any test function $\phi$ on $\BP(\C_v)$ which is $\cC^1$ if $v$ is finite and otherwise Lipschitz on $\C_v$, and any finite $\Gal(\overline K/K)$-invariant subset $F$ of $\overline K$ with $|F| \geq (6C \kappa)/(|V|+1)$ we have \begin{align*} \left| \frac{1}{|F|} \sum_{\alpha \in F} \phi(\alpha) - \int_{\BP(\C_v)} \phi \, d \rho \right| & \leq \left( h_{\rho}(F) + \frac{2(|V|+1)}{\kappa} \frac{\log |F|}{|F|} \right)^{\frac{1}{2}} \langle \phi, \phi \rangle^{\frac{1}{2}} \\ & \qquad \qquad \qquad \qquad + \Lip(\phi) \left( \frac{|V|+1}{2C \kappa |F|} \right)^{\frac{1}{\kappa}}. \end{align*} \end{theorem} \begin{proof} Following the proof of \cite[Theorem~7]{FRL}, let $0 < \varepsilon < 1$ and let $F$ be a finite $\Gal(\overline K/ K)$-invariant subset of $\overline K$. Then for each place $v \in M_K$, we have \begin{align*} \sum_{w \in V \cup \{ v \}} & ( [F]_\varepsilon - \rho_w, [F]_\varepsilon - \rho_w)_w = \sum_{w \in V \cup \{ v \}} \Big( ([F]-\rho_w,[F]-\rho_w)_w \\ & + 2 \left( ([F],\rho_w)_w-([F]_\varepsilon,\rho_w)_w \right) + \left( ([F]_\varepsilon,[F]_\varepsilon)_w - ([F],[F])_w \right) \Big) \\ & \leq \sum_{w \in V \cup \{ v \}} \left( ([F]-\rho_w,[F]-\rho_w)_w + 2 \left( ([F],\rho_w)_w-([F]_\varepsilon,\rho_w)_w \right) - \frac{\log \varepsilon}{|F|} \right), \end{align*} where the inequality follows from \cite[Lemma~12]{F}, and \cite[Proposition~4.9]{FRL}. Now, let $w$ be an archimedean place so $\rho_w = \Delta g^*_w$ as in \eqref{eq:defHCP}. We have by \cite[Lemma~2.5]{FRL}, \begin{align*} \left| ([F]_\varepsilon,\rho_w)_w - ([F],\rho_w)_w \right| & \leq \frac{1}{|F|} \sum_{z \in F} \left| ([z]_\varepsilon, \rho_w )_w - ([z],\rho_w)_w \right| \\ & = \frac{1}{|F|} \sum_{z \in F} \left| \int_{\C_w} g^*_w d([z]_\varepsilon-[z]) \right| \\ & \leq \frac{1}{|F|} \sum_{z \in F} \int_0^1 |g^*_w(z+\varepsilon e^{2\pi i \theta}) - g^*_w(z)| d\theta \leq C_w \varepsilon^{\kappa_w}. 
\end{align*} The analogous inequality holds for non-archimedean places by \cite[Lemma~4.9]{FRL}. Also, for all $w \notin V$ we have $\rho_w=\lambda_w$ and \cite[Lemma~5.4]{FRL} gives $([F]-\rho_w,[F]-\rho_w)_w \geq 0$, and so \begin{align} \label{eq:EnergBd} ( [F]_\varepsilon - \rho_v, [F]_\varepsilon - \rho_v )_{v} & \leq h_{\rho}(F) + \sum_{w \in V \cup \{v\}} \left( 2C_w \varepsilon^{\kappa_w} - \frac{\log \varepsilon}{|F|} \right) \notag \\ & \leq 2 C \varepsilon^\kappa - \left( |V| + 1 \right) \frac{\log \varepsilon}{|F|}. \end{align} We choose $$ \varepsilon = \left( \frac{|V|+1}{2 C \kappa |F|} \right)^{1/\kappa} $$ in order to minimise the right hand side of \eqref{eq:EnergBd}. Then we have $$ ( [F]_\varepsilon - \rho_v, [F]_\varepsilon - \rho_v )_{v} \leq h_\rho(F) + \frac{|V|+1}{\kappa|F|} \left(1+ \log \frac{2C \kappa}{|V|+1} + \log |F| \right). $$ Hence, by Lemmas~\ref{lem:ArchCauchySchwarz} and \ref{lem:nonArchCauchySchwarz}, for our given test function $\phi$ we have $$ \left| \int_{\BP(\C_v)} \phi \, d([F]_\varepsilon - \rho_v) \right|^2 \leq \langle \phi,\phi \rangle \left( h_{\rho}(F) + \frac{|V|+1}{\kappa|F|} \left(\log \frac{6C \kappa}{|V|+1} + \log |F| \right) \right). $$ On the other hand, \begin{align*} \left| \int_{\BP(\C_v)} \phi \, d([F]_\varepsilon - [F]) \right| & \leq \mathrm{Lip}(\phi) \cdot \varepsilon \\ & = \mathrm{Lip}(\phi) \left( \frac{|V|+1}{2C \kappa |F|} \right)^{1/\kappa}. \end{align*} Combining these and recalling that $|F| \geq (6C\kappa)/(|V|+1)$ gives the result. \end{proof} Favre and Rivera-Letelier \cite[Proposition~6.5]{FRL} show that the adelic measure associated to a rational function has H\"{o}lder continuous potentials. We will give bounds for the constants $C_v$, $\kappa_v$ associated to $\rho_{\pol}$ for a polynomial $\pol \in K[z]$ (see \eqref{eq:defHCP}).
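Before estimating these constants, we illustrate the statement of Theorem~\ref{thm:QuantEquid} in the simplest possible case (this numerical sketch is ours and purely illustrative). Take $\pol(z) = z^2$ at an archimedean place, where the equilibrium measure is normalised Lebesgue measure on the unit circle; the sets $\mu_m$ of $m$-th roots of unity are finite, $\Gal(\overline \Q/\Q)$-invariant and of canonical height zero, so averages of a Lipschitz test function over $\mu_m$ converge to its integral against the equilibrium measure. For $\phi(z) = (\operatorname{Re} z)^2$ this integral is $1/2$, and the discrete averages already agree with $1/2$ for every $m \geq 3$.

```python
import cmath

def phi(z: complex) -> float:
    """Test function phi(z) = (Re z)^2, Lipschitz on the closed unit disk."""
    return z.real ** 2

def average_over_roots_of_unity(m: int) -> float:
    """Average of phi over mu_m, the set of m-th roots of unity."""
    return sum(phi(cmath.exp(2j * cmath.pi * k / m)) for k in range(m)) / m

# The integral of phi against the equilibrium measure of z^2 (the uniform
# measure on the unit circle) is 1/2; since sum_k cos^2(2 pi k / m) = m/2
# for m >= 3, the discrete averages match 1/2 up to floating-point error.
for m in (3, 8, 64):
    print(m, average_over_roots_of_unity(m))
```

All names here are ours; the only dynamical input is the classical fact that the equilibrium measure of $z \mapsto z^2$ at an archimedean place is the uniform measure on the unit circle.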
At archimedean places, the H\"{o}lder continuity of the local canonical height is a classical result, and we follow an approach that can be found in \cite[Theorem~1.1]{K} and \cite[Chapter~VIII,~Theorem~3.2]{CG}. \begin{lemma} \label{lem:HolArch} Let $\pol \in K[z]$ have degree $d \geq 2$. Let $v$ be an archimedean place of $K$, and let $F \subset \C_v$ be a compact, convex set containing the $v$-adic filled Julia set $\cK_v(\pol)$ of $\pol$. Then for any $\varepsilon > 0$, we have $$ |\loccanheight{v}{\pol}(z)-\loccanheight{v}{\pol}(w)| \leq C_{\pol,v}(\varepsilon)|z-w|_v^{\kappa_{\pol,v}(\varepsilon)}, $$ for all $z,w \in \C_v$, where $$ \kappa_{\pol,v}(\varepsilon) = \frac{\log d}{\log \max \{ |\pol'(z)|_v: z \in F_\varepsilon \} } $$ and $$ C_{\pol,v}(\varepsilon) = 2d \varepsilon^{-\kappa_{\pol,v}(\varepsilon)} \max \{ \loccanheight{v}{\pol}(z) : z \in F_\varepsilon \}. $$ Here $F_\varepsilon$ denotes the set of $z \in \C_v$ with $\mathrm{dist}(z,F) < \varepsilon$. \end{lemma} \begin{proof} Identify $\C_v$ with $\C$, write $\cK = \cK_v(\pol)$, $C(\varepsilon) =C_{\pol,v}(\varepsilon)$, and $\kappa(\varepsilon)=\kappa_{\pol,v}(\varepsilon)$. Let $z \in \C \setminus \cK$ and fix $z_0 \in \cK$ with $|z-z_0| = \mathrm{dist}(z,\cK)$. Note that the line segment $L:=[z_0,z]$ is contained in $F_\varepsilon$ since $F$ is convex. Moreover, since $z \notin \cK$, there exists a minimal integer $N$ such that $\pol^N(L) \not\subseteq F_\varepsilon$. Then, for all $n \leq N$ and all $w \in L$, by the chain rule, $$ |(\pol^n)'(w)| = |\pol'(\pol^{n-1}(w)) \cdots \pol'(w)| \leq \max \{ |\pol'(z)| : z \in F_\varepsilon \}^n.
$$ On the other hand, there exists $z_1 \in L$ such that $\pol^N(z_1) \notin F_\varepsilon$, and so, since $z_0 \in \cK$ gives $\pol^N(z_0) \in \cK \subset F$, we have \begin{align*} \varepsilon & < |\pol^N(z_1) - \pol^N(z_0)| = \left| \int_{[z_0,z_1]} (\pol^N)'(w)dw \right| \\ & \leq \int_L |(\pol^N)'(w)| \, |dw| \leq \max \{ |\pol'(z)| : z \in F_\varepsilon \}^N \mathrm{dist}(z,\cK), \end{align*} and so $$ \varepsilon^{-\kappa(\varepsilon)}\mathrm{dist}(z,\cK)^{\kappa(\varepsilon)} \geq \left(\max \{ |\pol'(z)| : z \in F_\varepsilon \}^{\kappa(\varepsilon)} \right)^{-N} = d^{-N}. $$ Thus, since $\pol^{N-1}(z) \in \pol^{N-1}(L) \subseteq F_\varepsilon$ by the minimality of $N$, \begin{align} \label{eq:HolJulBd} \loccanheight{v}{\pol}(z) & = d \loccanheight{v}{\pol}(\pol^{N-1}(z)) d^{-N} \notag \\ & \leq d \max \{ \loccanheight{v}{\pol}(u) : u \in F_\varepsilon \} \varepsilon^{-\kappa(\varepsilon)}\mathrm{dist}(z,\cK)^{\kappa(\varepsilon)} = \frac{C(\varepsilon)}{2} \mathrm{dist}(z,\cK)^{\kappa(\varepsilon)}. \end{align} Now, let $z,w \in \C$, and suppose $\mathrm{dist}(z,\cK) \geq \mathrm{dist}(w,\cK)$. If $|z-w| \geq \mathrm{dist}(z,\cK)$, then the result follows from \eqref{eq:HolJulBd}. Otherwise, $w$ lies in the disk centred at $z$ of radius $\mathrm{dist}(z,\cK)$, on which $\loccanheight{v}{\pol}$ is a positive harmonic function. Therefore by Harnack's inequality, $$ \frac{\mathrm{dist}(z,\cK)-|z-w|}{\mathrm{dist}(z,\cK)+|z-w|} \loccanheight{v}{\pol}(z) \leq \loccanheight{v}{\pol}(w) \leq \frac{\mathrm{dist}(z,\cK)+|z-w|}{\mathrm{dist}(z,\cK)-|z-w|} \loccanheight{v}{\pol}(z). $$ Rearranging this and applying \eqref{eq:HolJulBd} gives \begin{align*} |\loccanheight{v}{\pol}(z)-\loccanheight{v}{\pol}(w)| & \leq \frac{\loccanheight{v}{\pol}(z)+\loccanheight{v}{\pol}(w)}{\mathrm{dist}(z,\cK)} |z-w| \\ & \leq \frac{C(\varepsilon) \mathrm{dist}(z,\cK)^{\kappa(\varepsilon)}}{\mathrm{dist}(z,\cK) |z-w|^{\kappa(\varepsilon)-1}} |z-w|^{\kappa(\varepsilon)} \leq C(\varepsilon) |z-w|^{\kappa(\varepsilon)}, \end{align*} noting that $|z-w| < \mathrm{dist}(z,\cK)$ and $\kappa(\varepsilon) < 1$.
\end{proof} In particular, when $\pol = \unipol$ we have the following. \begin{corollary} \label{cor:ArchHolUni} Let $v$ be an archimedean place of $K$. Then for all $z,w \in \C_v$, $$ |\loccanheight{v}{\unipol}(z)-\loccanheight{v}{\unipol}(w)| \leq C_v |z-w|^{\kappa_v}, $$ where $C_v = 4 \log 6 + 2 \log^+ |c|_v$, and $$ \kappa_v = \frac{2 \log 2}{2 \log 6 + \log^+ |c|_v}. $$ \end{corollary} \begin{proof} Let $F$ be the disk centred at the origin of radius $R := 2 \max \{1, |c|_v^{1/2} \}$. Then it is easy to see that $F \supseteq \cK_v(\unipol)$. Note that $R+1 \leq 3 \max \{ 1, |c|_v^{1/2} \}$, so on $F_1$, $$ |\unipol'(z)|_v = \left| 2 z \right|_v \leq 6 \max \{ 1, |c|_v^{1/2} \}, $$ and by Lemma~\ref{lem:LocalHeightProps}~(a), $$ \loccanheight{v}{\unipol}(z) \leq \log 2 + \log \left( 3 \max \{ 1, |c|_v^{1/2} \} \right) . $$ The result follows from applying Lemma~\ref{lem:HolArch} with $\varepsilon = 1$. \end{proof} We now determine H\"{o}lder continuous potentials for the measure $\rho_{\rat,v}$ associated to a rational function $\rat \in K(z)$ at a non-archimedean place $v$, following \cite[Proposition~6.5]{FRL} (see also \cite[Theorem~10.35]{BR}). \begin{lemma} \label{lem:HoldPotGen} Let $\rat \in K(z)$ have degree $d \geq 2$ and let $v$ be a place of bad reduction for $\rat$. Then there exists $g_v : \BP(\C_v) \to \R$ such that $\Delta g_v = \rho_{\rat,v} - \lambda_v$, and $$ |g_v(z)-g_v(w)| \leq C_v \ud(z,w)^{\kappa_v} $$ for all $z,w \in \BP(\C_v)$, where $$ C_v = \frac{4d^2 \max_{0 \leq k < d} d_{\H_v}(\zeta_G,w_k)}{L_v \min_{0 \leq k < d} \diam_G(w_k)} + d \sum_{k=0}^{d-1} d_{\H_v}(\zeta_G,w_k) $$ and $$ \kappa_v = \frac{\log d}{\log L_v}. $$ Here $L_v := \max \{ 2d, \Lip_v(\rat) \}$, where $\Lip_v(\rat)$ is the Lipschitz constant of $\rat$ as a function on $\BP(\C_v)$, and $w_0,\ldots,w_{d-1} \in \BP(\C_v)$ are the preimages of the Gauss point $\zeta_G$ under $\rat$.
\end{lemma} \begin{proof} Fix a place $v$ of bad reduction and write $\lambda_v = \lambda$, $| \cdot |_v = | \cdot |$. Following the proof of \cite[Proposition~6.5]{FRL} (see also \cite[Theorem~10.35]{BR}), if we take a Lipschitz (with respect to $\ud$) potential $g$ (which we will specify later) such that $\Delta g = d^{-1} \rat^* \lambda - \lambda$, then since $\rho_{\rat,v} = \lim_{n \to \infty} d^{-n} (\rat^n)^* \lambda$, we can deduce that $\rho_{\rat,v} = \lambda + \Delta g_v$ where $g_v = \sum_{k=0}^\infty d^{-k} g \circ \rat^k$. Let $N \geq 0$ be an integer. We have for any $z,w \in \BP(\C_v)$, \begin{align*} |g_v(z)-g_v(w)| & \leq \sum_{k=0}^{N-1} d^{-k} |g \circ \rat^k(z) - g \circ \rat^k(w)| + \sup |g| \sum_{k=N}^\infty d^{-k} \\ & \leq \left( \sum_{k=0}^{N-1} \frac{\Lip(g) \, \Lip(\rat)^k}{d^k} \right) \ud(z,w) + \sup |g| \sum_{k=N}^\infty d^{-k}. \end{align*} Then, writing $E=\Lip(g)$, $L=\max \{2d, \Lip(\rat) \}$ and $B = \sup |g|$, we have \begin{align*} |g_v(z)-g_v(w)| & \leq \frac{E}{\frac{L}{d}-1} \left( \frac{L}{d} \right)^N \ud(z,w) + dB \left( \frac{1}{d} \right)^N \\ & = \left( \frac{E}{\frac{L}{d}-1} L^N \ud(z,w) + dB \right) \left( \frac{1}{d} \right)^N. \end{align*} If we take $N=\lfloor-\log(\ud(z,w))/\log(L) \rfloor$, then $N = -\log_L \ud(z,w) - \delta$ for some $0 \leq \delta < 1$, and we obtain \begin{align} \label{eq:HolderBound} |g_v(z)-g_v(w)| & \leq \left( \frac{E}{\frac{L}{d}-1} L^{-\log_L \ud(z,w)} \ud(z,w) L^{-\delta} + dB \right) d^{\frac{\log \ud(z,w)}{\log L} + \delta} \notag \\ & = d^{\delta} \left( \frac{dE}{L-d} L^{-\delta} + dB \right) \ud(z,w)^{\frac{\log d}{\log L}} \notag \\ & < \left(\frac{2d^2 E}{L}+d^2B \right) \ud(z,w)^{\frac{\log d}{\log L}}, \end{align} noting that $L \geq 2d$. Now let $w_0,\ldots,w_{d-1}$ be the preimages of the Gauss point $\zeta_G$ under $\rat$. With $\langle \cdot, \cdot \rangle_G$ defined as in \eqref{eq:GromovDef}, let $$ H_k(z) = \langle z, w_k \rangle_G. 
$$ That is, referring to \eqref{eq:GromovDef}, $$ H_k(z) = d_{\H_v}(m_z, \zeta_G), $$ where $d_{\H_v}$ is the path distance defined in \eqref{eq:defPathDistance}, and $m_z = z \vee_G w_k$ denotes the unique point of $[z,w_k] \cap [z, \zeta_G] \cap [w_k, \zeta_G]$. Clearly $H_k$ is locally constant off the path $[\zeta_G, w_k]$, and so \begin{equation} \label{eq:supHk} 0 = H_k(\zeta_G) \leq H_k(z) \leq H_k(w_k) = d_{\H_v}(\zeta_G, w_k). \end{equation} We have from \eqref{eq:LaplacGromov} $$ \Delta H_k = \lambda - [w_k], $$ so if $g = - \frac{1}{d} \sum_{0 \leq k < d} H_k$, then $\Delta g = d^{-1} \rat^* \lambda - \lambda$, as desired. Note first that from \eqref{eq:supHk}, $g$ is bounded with $$ B = \sup |g| \leq \frac{1}{d} \sum_{k=0}^{d-1} d_{\H_v} (\zeta_G, w_k). $$ Moreover, $g$ is Lipschitz with constant $$ \Lip(g) = E \leq \frac{2 \max_{0 \leq k < d} d_{\H_v}(\zeta_G,w_k)}{\min_{0 \leq k < d} \diam_G(w_k)}, $$ where $\diam_G$ is defined in \eqref{eq:defGaussDiam}. Indeed, it suffices to show that each $H_k$ is Lipschitz with constant $E$ for points $x,y \in \BP(\C_v)$ with $y \in [x,\zeta_G]$, since given arbitrary $x,y \in \BP(\C_v)$, putting $z = x \vee_G y$ gives \begin{align*} |H_k(x)-H_k(y)| & \leq |H_k(x)-H_k(z)|+|H_k(z)-H_k(y)| \\ & \leq E \ud(x,z)+ E \ud(z,y) \leq E \ud(x,y). \end{align*} Take $x,y \in \BP(\C_v)$ with $y \in [x,\zeta_G]$. Writing $[x,\zeta_G] = [x,m_x) \cup [m_x,\zeta_G]$, and noting that $[m_x,\zeta_G] \subseteq [w_k,\zeta_G]$ we see that if $y \in [x,m_x)$, then $m_y = m_x$ and so $H_k(x)=H_k(y)$, and otherwise $m_y = y$. Suppose we are in the latter case. We have \begin{align*} |H_k(x) - H_k(y)| & = |d_{\H_v}(m_x,\zeta_G) - d_{\H_v}(y,\zeta_G)| \\ & = d_{\H_v}(m_x,y) \leq d_{\H_v}(w_k,\zeta_G), \end{align*} and moreover $x \vee_G y = y$, whence $$ \ud(x,y) = \diam_G(y)-\diam_G(x). $$ Since $y \in [\zeta_G,w_k]$, $\diam_G(y) \geq \diam_G(w_k)$. Suppose that $$ \diam_G(x) \leq \frac{1}{2} \diam_G(w_k). 
$$ Then $$ \ud(x,y) \geq \frac{1}{2}\diam_G(w_k), $$ and so \begin{align*} |H_k(x)-H_k(y)| & \leq d_{\H_v}(w_k,\zeta_G) \\ & = \frac{2 d_{\H_v}(w_k,\zeta_G)}{\diam_G(w_k)} \cdot \frac{1}{2} \diam_G(w_k) \leq E \ud(x,y). \end{align*} On the other hand, if $\diam_G(x) \geq \diam_G(w_k)/2$, then, since $\log$ is Lipschitz on $[\diam_G(w_k)/2,1]$ with constant $2/\diam_G(w_k)$, we have \begin{align*} |H_k(x)-H_k(y)| = d_{\H_v}(m_x,y) & \leq d_{\H_v}(x,y) \\ & = |d_{\H_v}(\zeta_G,x)-d_{\H_v}(\zeta_G,y)| \\ & = |\log \diam_G(y) - \log \diam_G(x)|\\ & \leq \frac{2}{\diam_G(w_k)} |\diam_G(y)-\diam_G(x)| \\ & = \frac{2}{\diam_G(w_k)} \ud(x,y) \leq E \ud(x,y), \end{align*} as desired. Substituting these values for $E$ and $B$ into \eqref{eq:HolderBound} gives \begin{align*} |g_v(z)-g_v(w)| \leq \left( \frac{4d^2 \max_{0 \leq k < d} d_{\H_v}(\zeta_G,w_k)}{L \min_{0 \leq k < d} \diam_G(w_k)} + d \sum_{k=0}^{d-1} d_{\H_v}(\zeta_G,w_k) \right) \ud(z,w)^{\frac{\log d}{\log L}} \end{align*} completing the proof. \end{proof} The action of a unicritical polynomial $\unipol$ on Berkovich points is easy to compute using results found in \cite{B}. We can hence explicitly bound the quantities in Lemma~\ref{lem:HoldPotGen} and obtain the following. \begin{lemma} \label{lem:HoldPot} Let $\unipol(z)=z^2+c$. Then $\rho_{\unipol}$ has H\"{o}lder continuous potentials with exponents \begin{equation} \label{eq:HolderExponent} \kappa_v \geq \begin{cases} \frac{2 \log 2}{2 \log 6 + \log^+|c|_v} & v \in M_K^\infty \\ 1 & v \in M^{0,\unipol}_{K,\mathrm{good}} \\ \frac{\log 2}{\log 4 + 4 \log |c|_v} & v \in M^{0,\unipol}_{K,\mathrm{bad}} \end{cases} \end{equation} and constants \begin{equation} \label{eq:HolderConstant} C_v \leq \begin{cases} 4 \log 6 + 2 \log^+ |c|_v & v \in M_K^\infty \\ 0 & v \in M^{0,\unipol}_{K,\mathrm{good}} \\ 1 + 6 \log |c|_v & v \in M^{0,\unipol}_{K,\mathrm{bad}}.
\end{cases} \end{equation} \end{lemma} \begin{proof} The archimedean case follows immediately from Corollary~\ref{cor:ArchHolUni}, noting that $\rho_{\unipol,v} = \Delta \loccanheight{v}{\unipol}$ from \eqref{eq:ArchPotLCH}. Now suppose $v$ is a finite place of bad reduction, so we can apply Lemma~\ref{lem:HoldPotGen}. From \cite[Theorem~0.1]{RW}, $$ \Lip(\unipol) \leq \max \left \{ \frac{2}{|\mathrm{Res}(\unipol)|}, \frac{1}{|\mathrm{Res}(\unipol)|^2} \right \} < 4 |c|^{4}. $$ We now compute the preimages $w_0,w_1$ of the Gauss point $\zeta_G$ under $\unipol$. By \cite[Proposition~7.6]{B}, if $\zeta(a,r)$ is a point of Type II or III, and $\unipol(D(a,r)) = D(b,s)$, then $\unipol(\zeta(a,r)) = \zeta(b,s)$. Moreover, from \cite[Theorem~3.15]{B}, if $\unipol(z) = \sum_{n \geq 0} c_n (z-a)^n$, then $\unipol(D(a,r)) = D(c_0,t)$, where $t = \max_{i \geq 1} |c_i| r^i$. For $a \in \C_v$, we can write $$ \unipol(z) = z^2 + c = a^2+c + 2a(z-a)+(z-a)^2, $$ and so $$ \unipol(\zeta(a,r)) = \zeta \left( a^2+c, \max\{ |2a|r, r^2 \} \right). $$ From this we can see that $$ w_k = \zeta \left( (-1)^k (-c)^{\frac{1}{2}}, \min \left \{ 1, \frac{1}{|2||c|^{\frac{1}{2}}} \right \} \right), \qquad k=0,1, $$ are the preimages of the Gauss point under $f_c$. Now, we have (see for example \cite[Exercise~6.22]{B}) \begin{equation*} \diam( \zeta_G \vee w_k ) = \max \{ 1, \| T \|_{w_k} \} = |c|^{1/2}, \end{equation*} noting that $|c| > 1$. Thus $$ d_{\H_v}(\zeta_G,w_k) = 2 \log |c|^{1/2} - \log \min \left \{ 1, \frac{1}{|2||c|^{\frac{1}{2}}} \right \} \leq \frac{3}{2} \log |c|, $$ and so $$ \diam_G(w_k) \geq \frac{1}{|c|^{\frac{3}{2}}}. $$ Substituting these values into the equations for $C_v$ and $\kappa_v$ in Lemma~\ref{lem:HoldPotGen}, and noting again that $|c| > 1$, completes the proof. 
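The disk-image formula $\unipol(\zeta(a,r)) = \zeta \left( a^2+c, \max\{ |2a|r, r^2 \} \right)$ used above is easy to sanity-check numerically on rational points. The sketch below (illustrative only, not part of the proof) works over $\Q$ with the $2$-adic absolute value and the sample values $c = 1/2$ (so $|c|_2 = 2 > 1$, a place of bad reduction), $a = 3/4$ and $r = 1/4$: it verifies that $\unipol(a+s) - (a^2+c) = 2as + s^2$ lands in the disk of the predicted radius $\max\{|2a|_2 r, r^2\}$ for sampled $s$ with $|s|_2 \leq r$, and that this radius is attained.

```python
from fractions import Fraction

def v2(x):
    """2-adic valuation of a nonzero rational."""
    num, den = x.numerator, x.denominator
    v = 0
    while num % 2 == 0:
        num //= 2
        v += 1
    while den % 2 == 0:
        den //= 2
        v -= 1
    return v

def abs2(x):
    """2-adic absolute value |x|_2 on the rationals."""
    return 0.0 if x == 0 else 2.0 ** (-v2(x))

c = Fraction(1, 2)   # sample: |c|_2 = 2 > 1, bad reduction for z^2 + c at p = 2
a = Fraction(3, 4)   # sample centre of a disk D(a, r)
r = 0.25             # sample radius
t = max(abs2(2 * a) * r, r * r)   # predicted radius of the image disk

# f(a + s) - (a^2 + c) = 2as + s^2 should satisfy |2as + s^2|_2 <= t,
# with equality for some admissible s (so the image radius is exactly t)
attained = False
for num in range(1, 40):
    s = Fraction(num)
    if abs2(s) > r:
        continue
    w = 2 * a * s + s * s
    assert abs2(w) <= t
    attained = attained or abs2(w) == t
assert attained
```

All absolute values here are integer powers of $2$, so the floating-point comparisons are exact.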
\end{proof} \section{Truncation function} \label{sec:Trunc} In this section, for each place $v$ of $K$ we define a suitable test function with which to apply Theorem~\ref{thm:QuantEquid}, and bound its Lipschitz constant and Dirichlet form. \subsection{Archimedean case} Let $v$ be an archimedean place of $K$ and identify $\C_v$ with $\C$. Let $0 < \delta < 1$ and consider the function $\phi_\delta(z)$ given as the extension to $\P^1(\C)$ of $\log \max \{ \delta, |z-\alpha| \}$. We have that $\phi_\delta$ is Lipschitz continuous on $\C$ (with respect to the Euclidean distance) with Lipschitz constant \begin{equation} \label{eq:ArchLipOfTrunc} \mathrm{Lip}(\phi_\delta) = \frac{1}{\delta}. \end{equation} On a chart $U_i$ with coordinates $x,y$, where $u_i(\alpha)=(a_i,b_i)$, we have $$ \phi_\delta \circ u_i^{-1}(x,y) = \log \max \{ \delta, \sqrt{(x-a_i)^2+(y-b_i)^2} \}, $$ which has weak partial derivatives \begin{align*} \frac{\partial \phi_\delta}{\partial x}(x,y) & = \I \left \{\sqrt{(x-a_i)^2+(y-b_i)^2} > \delta \right \} \frac{x-a_i}{(x-a_i)^2+(y-b_i)^2}, \\ \frac{\partial \phi_\delta}{\partial y}(x,y) & = \I \left \{\sqrt{(x-a_i)^2+(y-b_i)^2} > \delta \right \} \frac{y-b_i}{(x-a_i)^2+(y-b_i)^2}, \end{align*} where $\I$ is an indicator function. Hence from \eqref{eq:DirichletForm}, we have (making a substitution to move $(a_i,b_i)$ to the origin) \begin{align} \label{eq:ArchDirFormOfTrunc} \langle \phi_\delta, \phi_\delta \rangle & = \int_{\overline D(0,1) \setminus \overline D(0,\delta)} \frac{dx dy}{x^2+y^2} + \int_{D(0,1) \setminus \overline D(0,\delta)} \frac{dx dy}{x^2+y^2} \notag \\ & = 2 \int_0^{2 \pi} \int_\delta^1 \frac{dr d \theta}{r} = -4 \pi \log \delta.
\end{align} \subsection{Non-archimedean case} When $v$ is a finite place of $K$, for $0 < \delta < 1$ we define $\phi_\delta : \BP(\C_v) \to \R$ by $$ \phi_\delta(x) = \log \max \{ \diam(x \vee \alpha), \delta \}, $$ which extends the function $\log \max \{ |x-\alpha|, \delta \}$ on $\P^1(\C_v)$ to $\BP(\C_v)$. Let $\xi := \zeta(\alpha, 1)$ and $\Lambda := [\zeta(\alpha,\delta), \xi]$. Then $\phi_\delta$ is locally constant outside of $\Lambda$. Indeed, it is easy to see that $\phi_\delta(x) = \phi_\delta(x \vee_\xi \zeta(\alpha,\delta))$. Moreover, since for $x \in \Lambda$ we have $$ \phi_\delta(x) = \log \diam(x), $$ $\phi_\delta$ is $\cC^1$ on $\Lambda$, with $$ \partial \phi_\delta(x) = \lim_{\substack{y \to x \\ y \in [\xi,x]}} \frac{\phi_\delta(x)-\phi_\delta(y)}{d_{\H_v}(x,y)} = 1. $$ Note that this implies $\phi_\delta$ is Lipschitz with \begin{equation} \label{eq:nonArchLipofTrunc} \Lip(\phi_\delta) = 1. \end{equation} We conclude that \begin{align} \label{eq:nonArchDirFormOfTrunc} \langle \phi_\delta, \phi_\delta \rangle & = \int_{\BP(\C_v)} (\partial \phi_\delta)^2 d\mu = \int_\Lambda d\mu \notag = \mu(\Lambda) \\ & = d_{\H_v}(\zeta(\alpha,\delta),\xi) = - \log \delta. \end{align} Note that for all places $v$, if $\delta \leq \delta_v(\alpha)$, then $\phi_\delta$ agrees with $\log | \cdot - \alpha|$ at all preperiodic points of $\unipol$, and on the support of $\rho_{\unipol,v}$. \section{Proof of Theorem~\ref{thm:main}} \label{sec:MainProof} We can now prove the following explicit form of Theorem~\ref{thm:main}. \begin{theorem} \label{thm:main2} Let $K$ be a number field, and let $\pol \in K[z]$ be a polynomial of degree $d \geq 2$. Let $S$ be a finite set of places of $K$ containing all the archimedean ones, as well as all the places of bad reduction for $\pol$. Suppose that $\alpha \in \overline K$ is not preperiodic under $\pol$, and further that $\alpha$ is totally Fatou at all places $v \in M_K$. 
Let $C_v$ and $\kappa_v$ be constants as in \eqref{eq:defHCP} for the adelic measure associated to $\pol$, and let $C \geq 1$ and $\kappa > 0$ be constants satisfying $C \geq \sum_{v \in V} C_v$ and $\kappa \leq \min_{v \in V} \kappa_v$, where $V = M_K^\infty \cup M_{K,\mathrm{bad}}^{0,\pol}$. For each $v \in S$, write $\delta_v = \delta_{\pol,v}(\alpha)$. Then \begin{align*} |\relsupreper{\pol}{S}{\alpha}| \ll \max \Bigg \{ C, & \: \: \left( \frac{|V| \sum_{v \in S} |\log \delta_v|^{1/2}}{\kappa \hat h(\alpha)^2} \right)^2, \\ & \qquad \frac{|V|}{\kappa} \left( \frac{2}{\hat h(\alpha)} \left( \sum_{v \in M_K^\infty} \frac{1}{\delta_v} + |S \setminus M_K^\infty| \right) \right)^{\kappa} \Bigg \}. \end{align*} \end{theorem} \begin{proof} First note that $\cP := \relsupreper{\pol}{S}{\alpha}$ is $\Gal(\overline K/K)$-invariant, and write $$ \cP = \bigcup_{\beta \in Y} \Gal(\overline K/K) \cdot \beta $$ as a union of Galois orbits. Define, for each $v \in M_K$, \begin{align*} \Gamma_v & := \frac{1}{|\cP|} \sum_{z \in \cP} \log |z-\alpha|_v \\ & = \frac{1}{|\cP|} \sum_{\beta \in Y} \sum_{\sigma : K(\beta)/K \hookrightarrow \overline K_v} \log |\sigma(\beta)-\alpha|_v, \end{align*} and set $\Gamma = \sum_{v \in M_K} \Gamma_v$. First consider a place $v \notin S$. By definition such a place is of good reduction, and so $\loccanheight{v}{\pol}(\alpha) = \log^+|\alpha|_v$. Note that as the points $z \in \cP$ are preperiodic, we have $0 = \loccanheight{v}{\pol}(z) = \log^+ |z|_v$, and so $|z|_v \leq 1$. Thus, if $|\alpha|_v > 1$, $\log |z-\alpha| = \log^+ |\alpha| = \loccanheight{v}{\pol}(\alpha)$ for all $z \in \cP$. If $|\alpha|_v \leq 1$, then for $z \in \cP$, $|z-\alpha|_v \leq \max \{ |z|_v, |\alpha|_v \} \leq 1$, and moreover the integrality hypothesis gives $|z-\alpha| \geq 1$. 
Hence again we have $\log |z-\alpha|_v = \loccanheight{v}{\pol}(\alpha)$, and so $$ \Gamma_v = \loccanheight{v}{\pol}(\alpha). $$ Therefore \begin{equation} \label{eq:GamForm} \Gamma = \canheight{\pol}(\alpha) + \sum_{v \in S} \left( \Gamma_v - \loccanheight{v}{\pol}(\alpha) \right). \end{equation} Since we have shown that $\Gamma_v=0$ for all but finitely many places, we may exchange sums in the following to obtain \begin{align} \label{eq:GamZero} \Gamma = \sum_{v \in M_K} \Gamma_v & = \frac{1}{|\cP|} \sum_{v \in M_K} \sum_{\beta \in Y} \sum_{\sigma : K(\beta)/K \hookrightarrow \overline K_v} \log |\sigma(\beta)-\alpha|_v \notag \\ & = \frac{1}{|\cP|} \sum_{\beta \in Y} \sum_{v \in M_K} \sum_{\sigma : K(\beta)/K \hookrightarrow \overline K_v} \log |\sigma(\beta)-\alpha|_v \notag \\ & = \frac{1}{|\cP|} \sum_{\beta \in Y} \sum_{w \in M_{K(\beta)}} \log |\beta-\alpha|_w = 0, \end{align} where the last equality follows from the product formula, noting that each $\beta \neq \alpha$, as $\alpha$ is not preperiodic for $\pol$ by assumption. Suppose $|\cP| \geq (6C\kappa)/(|V|+1)$. As $\cP$ consists of preperiodic points for $\pol$, $h_{\rho_{\pol}}(\cP) = 0$, so for each $v \in S$, applying Theorem~\ref{thm:QuantEquid} and \eqref{eq:MahlerFormula} to the truncation function $\phi_{\delta_v}$ (see \textsection \ref{sec:Trunc}) gives $$ |\Gamma_v - \loccanheight{v}{\pol}(\alpha) | \leq \left( \frac{2(|V|+1)}{\kappa} \frac{\log |\cP|}{|\cP|} \langle \phi_{\delta_v}, \phi_{\delta_v} \rangle \right)^{1/2} + \Lip(\phi_{\delta_v}) \left( \frac{|V|+1}{2C\kappa |\cP|} \right)^{1/\kappa}.
$$ Plugging this into \eqref{eq:GamForm}, from \eqref{eq:ArchLipOfTrunc}, \eqref{eq:ArchDirFormOfTrunc}, \eqref{eq:nonArchLipofTrunc} and \eqref{eq:nonArchDirFormOfTrunc} we have \begin{align*} \Gamma & \geq \canheight{\pol}(\alpha) - \sum_{v \in M_K^\infty} \left[ \left( 8\pi \frac{|V|+1}{\kappa} \frac{\log |\cP|}{|\cP|} |\log \delta_v| \right)^{1/2} + \frac{1}{\delta_v} \left( \frac{|V|+1}{2C\kappa |\cP|} \right)^{1/\kappa} \right] \\ & \qquad \qquad \qquad - \sum_{v \in S \setminus M_K^\infty} \left[ \left( \frac{|V|+1}{\kappa} \frac{\log |\cP|}{|\cP|} |\log \delta_v| \right)^{1/2} + \left( \frac{|V|+1}{2C\kappa |\cP|} \right)^{1/\kappa} \right] \\ & \geq \canheight{\pol}(\alpha) - \left(8\pi \frac{|V|+1}{\kappa} \frac{\log |\cP|}{|\cP|} \right)^{1/2} \left( \sum_{v \in S} |\log \delta_v|^{1/2} \right) \\ & \qquad \qquad - \left[ \left( \sum_{v \in M_K^\infty} \frac{1}{\delta_v} \right) + |S \setminus M_K^\infty| \right] \left( \frac{|V|+1}{2C\kappa |\cP|} \right)^{1/\kappa}. \end{align*} Recall that the Lambert function (see \cite{C}), denoted $W(z)$ for $z \in \C$, is defined as the function that satisfies $W(z) e^{W(z)} = z$. If $z \in \R$, then $W(z)$ can take two possible real values for $-1/e \leq z \leq 0$. Let $W_{-1}$ denote the branch whose values satisfy $W(z) \leq -1$. For $-1/e \leq z \leq 0$ we have $$ \frac{\log e^{-W_{-1}(z)}}{e^{-W_{-1}(z)}} = \frac{-W_{-1}(z)}{W_{-1}(z)/z} = -z, $$ and so if $|\cP| \geq e^{-W_{-1}(z)}$, then $(\log |\cP|)/|\cP| \leq -z$. By \cite[Theorem~1]{C}, $$ W_{-1}(z) > -1- \sqrt{2u}- u, $$ where $u = -\log(-z)-1$. 
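The role of the branch $W_{-1}$ here is only to convert the condition $(\log |\cP|)/|\cP| \leq -z$ into an explicit lower bound on $|\cP|$, and this is easy to check numerically. The sketch below (illustrative only, not part of the proof) computes $W_{-1}$ by bisection for the sample value $z = -0.01$, and verifies both the lower bound $W_{-1}(z) > -1-\sqrt{2u}-u$ from [C] and the implication that $|\cP| \geq e^{-W_{-1}(z)}$ forces $(\log |\cP|)/|\cP| \leq -z$.

```python
import math

def lambert_w_minus1(z, iters=200):
    """Lower real branch W_{-1} on (-1/e, 0), via bisection (illustrative)."""
    assert -1 / math.e < z < 0
    lo = -2.0
    while lo * math.exp(lo) <= z:   # push left until w e^w lies above z
        lo *= 2
    hi = -1.0                       # at w = -1, w e^w = -1/e <= z
    for _ in range(iters):          # w e^w is decreasing on (-inf, -1]
        mid = (lo + hi) / 2
        if mid * math.exp(mid) <= z:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

z = -0.01                           # sample value
w = lambert_w_minus1(z)
assert abs(w * math.exp(w) - z) < 1e-12

# the explicit lower bound W_{-1}(z) > -1 - sqrt(2u) - u with u = -log(-z) - 1
u = -math.log(-z) - 1
assert w > -1 - math.sqrt(2 * u) - u

# hence any |P| >= e^{-W_{-1}(z)} satisfies (log |P|)/|P| <= -z
n = math.exp(-w)
assert math.log(n) / n <= -z + 1e-12
```

Since $x \mapsto (\log x)/x$ is decreasing for $x > e$, the last inequality persists for every larger value of $|\cP|$, which is exactly how it is used below.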
Thus, if we suppose \begin{equation} \label{eq:PBound} |\cP| > \max \left \{ \frac{6C\kappa}{|V|+1}, e^{1+\sqrt{2u}+u}, \frac{|V|+1}{2C\kappa} \left[ \frac{2}{\canheight{\pol}(\alpha)} \left( \left( \sum_{v \in M_K^\infty} \frac{1}{\delta_v} \right) + |S \setminus M_K^\infty | \right) \right]^\kappa \right \}, \end{equation} where $$ u = - \log \min \left \{ \frac{1}{e}, \frac{\kappa \canheight{\pol}(\alpha)}{32 \pi (|V|+1) \left( \sum_{v \in S} |\log \delta_v |^{1/2} \right)^2} \right \} - 1, $$ then $$ \Gamma > \canheight{\pol}(\alpha) - \frac{\canheight{\pol}(\alpha)}{2} - \frac{\canheight{\pol}(\alpha)}{2} = 0, $$ which contradicts \eqref{eq:GamZero}. The result follows from noting that $C \geq 1$, $\kappa \leq 1$ and $e^{1+\sqrt{2u}+u} \ll e^{2u}$. \end{proof} \section{Bounds on the size of preperiodic points at archimedean places} \label{sec:arch} Let $v$ be an archimedean place of $K$, identify $\C_v$ with $\C$, writing $| \cdot |$ for $| \cdot |_v$, and $\cJ(\unipol)$ and $\cK(\unipol)$ respectively for the $v$-adic Julia set and filled Julia set of $\unipol$. \subsection{Parameters outside the Mandelbrot set} Suppose $c$ lies outside the $v$-adic Mandelbrot set. Then the Julia set $\cJ(\unipol)$ of $\unipol$ is a Cantor set which coincides with the filled Julia set $\cK(\unipol)$. As such, all preperiodic points for $\unipol$ lie in $\cJ(\unipol)$, and so the quantity $\delta_v = \delta_{\unipol,v}(0)$ defined in \eqref{eq:defdeltav} is precisely the distance $\mathrm{dist}(0,\cJ(\unipol))$ from 0 to the Julia set. We call $R$ an \emph{escape radius} for $\unipol(z)=z^2+c$ if for all $z \in \C$ with $|z| > R$, $|\unipol^n(z)| \to \infty$. A trivial escape radius is $R=1+|c|$, but there are better estimates. Namely, we have the following \cite[Corollary~3.3]{S}. \begin{lemma} Define $$ Q_c(R) = R^2-R-|c|, $$ and let $R_c$ be the largest real root of $Q_c$. Then $$ R_c = \frac{1}{2}+\sqrt{\frac{1}{4}+|c|} $$ is an escape radius for $\unipol$.
\end{lemma} Note that $|c| > R_c$ whenever $|c| > 2$. When $|c|$ is sufficiently large, it is easy to show that for a point $z$ close to 0, $\unipol(z)$ lies outside an escape radius of $\unipol$, so $z$ cannot be preperiodic for $\unipol$, and hence does not lie inside the Julia set. In particular we have the following. \begin{lemma} \label{lem:JuliaSetDistance} Suppose $|c| > 2$. Then the distance from 0 to $\cJ(\unipol)$ satisfies $$ \delta_v = \mathrm{dist}(0,\cJ(\unipol)) \geq \left( |c|-R_c \right)^{\frac{1}{2}}. $$ We also have $$ \delta_v = \mathrm{dist}(0, \cJ(\unipol)) \geq \frac{1}{2} $$ for $c \in [1,\infty)$. In particular, this means that for all $c \in \Z$ such that $0$ is not preperiodic for $\unipol$, we have $$ - \log \delta_v \leq \log 2. $$ \end{lemma} \begin{proof} Suppose $z \in \C$ with $|z| < (|c|-R_c)^{\frac{1}{2}}$. Then $$ |\unipol(z)| = |z^2+c| \geq |c|-|z|^2 > R_c, $$ whence $z \notin \cJ(\unipol)$. Now suppose $c \in [1, \infty)$, and note that the image of $\D(0,1/2)$ under $\unipol$ is $\D(c,1/4)$, so for any $z \in \D(0,1/2)$, $|\mathrm{Arg}(\unipol(z))| \leq \sin^{-1}(1/4)$. Hence $|\mathrm{Arg}(\unipol(z)^2)| \leq 2 \sin^{-1}(4^{-1}) < \pi/4$, and so $$ |\unipol^2(z)| > |c| + \frac{|\unipol(z)|^2}{\sqrt{2}} \geq |c| + \frac{(1-4^{-1})^2}{\sqrt{2}}. $$ It is easy then to check (just applying the reverse triangle inequality) that certainly $|\unipol^4(z)| > R_c$, and so $z \notin \cJ(\unipol)$. \end{proof} On the other hand, when $|c|$ is not large, and especially when $c$ lies close to the boundary of the Mandelbrot set, we cannot argue as above, as it could take many iterations under $\unipol$ for a point close to 0 to get outside an escape radius. This information is contained, however, in the local canonical height of 0 under $\unipol$. Indeed, we can still obtain a lower bound for $\mathrm{dist}(0,\cJ(\unipol))$ in terms of $\loccanheight{v}{\unipol}(0)$ by using the H\"{o}lder continuity of $\loccanheight{v}{\unipol}$.
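The two escape arguments above are easy to confirm numerically. The sketch below (illustrative only; the parameter $c = -3+i$ is an arbitrary sample with $|c| > 2$) checks that points of modulus below $(|c|-R_c)^{1/2}$ leave the escape radius in one step, and that beyond $R_c$ the moduli of iterates of $z^2+c$ increase strictly.

```python
import cmath
import math

def f(z, c):
    """The quadratic map f_c(z) = z^2 + c."""
    return z * z + c

def escape_radius(c):
    """R_c = 1/2 + sqrt(1/4 + |c|), the escape radius of the lemma above."""
    return 0.5 + math.sqrt(0.25 + abs(c))

c = -3 + 1j                       # sample parameter with |c| > 2
Rc = escape_radius(c)
inner = math.sqrt(abs(c) - Rc)    # lower bound for dist(0, J(f_c))

# points of modulus just below `inner` land outside the escape radius
for k in range(12):
    z = 0.99 * inner * cmath.exp(2j * math.pi * k / 12)
    assert abs(f(z, c)) > Rc

# beyond R_c, the orbit's moduli grow strictly (hence escape to infinity):
# |z^2 + c| >= |z|^2 - |c| > |z| exactly when Q_c(|z|) > 0, i.e. |z| > R_c
z = 1.01 * Rc
for _ in range(6):
    w = f(z, c)
    assert abs(w) > abs(z)
    z = w
```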
\begin{prop} \label{prop:Kosek} Let $v \in M_K^\infty$ and suppose $c \in K$ lies outside the $v$-adic Mandelbrot set. Then we have \begin{align*} \delta_v & = \mathrm{dist}(0, \cJ(\unipol)) \\ & \geq \left( \frac{\loccanheight{v}{\unipol}(0)}{2 \log 6 + \log^+ |c|_v} \right)^{\frac{2 \log 6 + \log^+|c|_v}{2 \log 2}}. \end{align*} \end{prop} \begin{proof} This follows immediately from rearranging \eqref{eq:HolJulBd} with $z=0$ and the constants from Corollary~\ref{cor:ArchHolUni}. \end{proof} We now reinterpret Lemma~\ref{lem:JuliaSetDistance} and Proposition~\ref{prop:Kosek} in order to give a bound of a form amenable to the proof of Theorem~\ref{thm:UniformMain}. \begin{corollary} \label{cor:ArchOutParamDeltavBound} Let $\varepsilon > 0$, $v \in M_K^\infty$ and let $c \in K$ be such that $\loccanheight{v}{\unipol}(0) \geq \varepsilon$. Then $$ -\log \delta_v \leq A_{\infty,1}, $$ where $$ A_{\infty,1} = \begin{cases} \max \left \{0, -\frac{1}{2} \log \left(e^{2\left(\varepsilon-\log 2 \right)}- R_{e^{2\left(\varepsilon-\log 2 \right)}} \right) \right\} & \varepsilon > \frac{3 \log 2}{2}, \\ \frac{\log 12 + \varepsilon}{\log 2} \log \left( \frac{\log 48 + 2 \varepsilon}{\varepsilon} \right) & \varepsilon > 0. \end{cases} $$ Note that we separate the case $\varepsilon > \frac{3 \log 2}{2}$ because it ensures the parameter $c$ satisfies $|c| > 2$, allowing us to apply Lemma~\ref{lem:JuliaSetDistance}. This produces a better bound than the latter, general case, for which we remark that $A_{\infty, 1} \ll -\log \varepsilon$ as $\varepsilon \to 0$. \end{corollary} \begin{proof} Suppose $\varepsilon > \frac{3 \log 2}{2}$. We must have $|c| \geq 1$ as if $|c| < 1$, an easy induction gives $$ |\unipol^n(0)| < 2^{2^{n-1}-1}, $$ and so $\loccanheight{v}{\unipol}(0) < (\log 2)/2$. 
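The induction bound $|\unipol^n(0)| < 2^{2^{n-1}-1}$ for $|c| < 1$ can be verified numerically; the sketch below (illustrative only, with the arbitrary sample $c = 0.9$) also checks that the escape-rate quotients $2^{-n} \log^+ |\unipol^n(0)|$, which approximate $\loccanheight{v}{\unipol}(0)$, stay below $(\log 2)/2$ along the orbit.

```python
import math

def orbit_of_zero(c, n):
    """The iterates f_c(0), f_c^2(0), ..., f_c^n(0) for f_c(z) = z^2 + c."""
    zs, z = [], 0.0
    for _ in range(n):
        z = z * z + c
        zs.append(z)
    return zs

c = 0.9                               # sample parameter with |c| < 1
zs = orbit_of_zero(c, 8)

# the bound |f_c^n(0)| < 2^(2^(n-1) - 1) from the easy induction
for n, z in enumerate(zs, start=1):
    assert abs(z) < 2.0 ** (2 ** (n - 1) - 1)

# the quotients 2^{-n} log+ |f_c^n(0)| stay below (log 2)/2, consistent
# with the conclusion that the local canonical height of 0 is < (log 2)/2
quotients = [math.log(max(1.0, abs(z))) / 2 ** n
             for n, z in enumerate(zs, start=1)]
assert max(quotients) < math.log(2) / 2
```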
Hence, by Lemma~\ref{lem:LocalHeightProps}~(a), we have $$ \loccanheight{v}{\unipol}(0) \leq \log 2 + \frac{1}{2} \log |c|, $$ and so $$ |c| \geq e^{2\left(\loccanheight{v}{\unipol}(0)-\log 2 \right)} \geq e^{2\left(\varepsilon-\log 2 \right)} > 2. $$ Hence, by Lemma~\ref{lem:JuliaSetDistance}, $$ \delta_v \geq (|c| - R_c)^{1/2}. $$ Note that $|c|-R_c$ increases with $|c|$ for $|c| \geq 1$. Thus we conclude this case by substituting $|c| \geq e^{2\left(\varepsilon-\log 2\right)}$. Now, for arbitrary $\varepsilon$, from Lemma~\ref{lem:LocalHeightProps}~(b), we have $$ 2 \loccanheight{v}{\unipol}(0) = \loccanheight{v}{\unipol}(c) = \max \{ \loccanheight{v}{\unipol}(c), \loccanheight{v}{\unipol}(0) \} \geq \log |c| - \log 4, $$ and so \begin{equation} \label{eq:ska1} \log^+ |c| \leq \log 4 + 2\varepsilon. \end{equation} The result follows from plugging \eqref{eq:ska1} into Proposition~\ref{prop:Kosek} and taking logarithms. \end{proof} \subsection{Parameters inside the hyperbolic locus} Now suppose $c$ lies in a period $t$ hyperbolic component of the $v$-adic Mandelbrot set. That is, $\unipol$ has an attracting cycle of period $t$. Let $U$ be the component of the immediate basin of this cycle containing 0, let $\alpha$ be the element of the cycle in this component, and let $\phi : U \to \D$ be the uniformizing map with $\phi(\alpha)=0$, appropriately normalized so that $g:= \phi \circ \unipol^t \circ \phi^{-1}$ is the Blaschke product $$ g(z) = z \frac{z+\lambda}{1+\bar \lambda z}, $$ where $\lambda = (\unipol^t)'(\alpha) \neq 0$ is the multiplier of $\alpha$ (see \cite[\textsection \textsection 25.3]{L}). Note that $\phi(0)$ is a critical point of $g$ lying in $\D$, and so \begin{equation} \label{eq:phi} \phi(0) = \frac{1-\sqrt{1-|\lambda|^2}}{\bar \lambda} = \frac{1-\sqrt{1-|\lambda|^2}}{|\lambda|^2} \lambda.
\end{equation} Suppose that $z \in \phi^{-1}(\D(\phi(0),\delta)) \subset U$ is preperiodic for $\unipol$, where \begin{equation} \label{eq:delta} \delta := |\lambda - \phi(0)| = \frac{|\lambda|^2-1+\sqrt{1-|\lambda|^2}}{|\lambda|}. \end{equation} Then $\unipol^{tk}(z)=\alpha$ for some $k \geq 1$, and so $g^k(\phi(z)) = \phi(\unipol^{tk}(z)) = \phi(\alpha)=0$. By the Schwarz lemma, $|g(w)| < |w|$ for all $w \in \D \setminus \{0 \}$, and so, since the only preimages of 0 under $g$ are $0$ and $-\lambda$, the disk centred at 0 of radius $|\lambda|$ contains no iterated preimages of $0$ under $g$ other than $0$ itself. But we have \begin{align*} |\phi(z)| & = |\phi(z)-\phi(0)+\phi(0)| \leq |\phi(z)-\phi(0)| + |\phi(0)| \\ & < \delta + |\phi(0)| = |\lambda|, \end{align*} a contradiction. Hence $\phi^{-1} (\D(\phi(0),\delta)) \subset U$ contains no preperiodic points under $\unipol$. Let $\eta : \D \to \D$ be given by $\eta(z) = \phi(d(\alpha,\partial U)z+\alpha)$. Then $\eta(0)=0$, so by the Schwarz lemma, $$ 1 \geq |\eta'(0)| = |d(\alpha, \partial U) \phi'(\alpha)|, $$ and hence by the inverse function theorem, \begin{equation} \label{eq:SchwazDistBound} \left|(\phi^{-1})'(0) \right| \geq d(\alpha, \partial U). \end{equation} Recall the Koebe distortion theorem \cite[\textsection 1.2]{Po}. \begin{theorem} Let $F : \D \to \C$ be univalent with $F(0)=0$ and $F'(0)=1$. Then for all $z \in \D$, $$ \frac{|z|}{(1+|z|)^2} \leq |F(z)| \leq \frac{|z|}{(1-|z|)^2} $$ and $$ \frac{1-|z|}{(1+|z|)^3} \leq |F'(z)| \leq \frac{1+|z|}{(1-|z|)^3}. $$ \end{theorem} Applying the Koebe distortion theorem to $$ F(z):=\frac{\phi^{-1}(z)-\alpha}{(\phi^{-1})'(0)} $$ at $z=\phi(0)$, we get \begin{equation} \label{eq:KobeDist} \left| (\phi^{-1})'(\phi(0)) \right| = \left| (\phi^{-1})'(0) F'(\phi(0)) \right| \geq \frac{1-|\phi(0)|}{(1+|\phi(0)|)^3} \left| (\phi^{-1})'(0) \right|.
\end{equation} Let $\psi : \D \to \D(\phi(0),\delta)$ be given by $$ \psi(z) = \delta z + \phi(0) $$ so that by the Koebe 1/4-theorem, $\phi^{-1}(\D(\phi(0),\delta))$ contains a disk about $\phi^{-1} \circ \psi(0) = 0$ of radius \begin{align} \label{eq:boundo} \left| \frac{(\phi^{-1} \circ \psi)'(0)}{4} \right| & = \frac{\delta}{4} \left| (\phi^{-1})'(\phi(0)) \right| \notag \\ & \geq \frac{\delta(1-|\phi(0)|)}{4(1+|\phi(0)|)^3} \left| (\phi^{-1})'(0) \right| \notag \\ & = \frac{|\lambda| \left( |\lambda|^2 - 1 + \sqrt{1-|\lambda|^2} \right) \left( |\lambda| - 1 + \sqrt{1- |\lambda|^2} \right)}{4 \left( |\lambda| + 1 - \sqrt{1-|\lambda|^2} \right)^3} \left| (\phi^{-1})'(0) \right| \notag \\ & \geq \frac{|\lambda| \left( |\lambda|^2 - 1 + \sqrt{1-|\lambda|^2} \right) \left( |\lambda| - 1 + \sqrt{1- |\lambda|^2} \right)}{4 \left( |\lambda| + 1 - \sqrt{1-|\lambda|^2} \right)^3} d(\alpha, \partial U), \end{align} where the first inequality comes from \eqref{eq:KobeDist}, the next equality results from substituting \eqref{eq:delta} and \eqref{eq:phi}, and the last inequality follows from \eqref{eq:SchwazDistBound}. Now, the quantity $d(\alpha, \partial U)$ is known as the \emph{convergence radius} for the periodic point $\alpha$. In \cite{S}, Stroh obtains some lower bounds for convergence radii. In particular the following will be useful \cite[Corollary~3.14]{S}. \begin{lemma} Let $P(z) = \sum_{i=0}^d a_i z^i$, and let $\tilde z$ be an attracting fixed point of $P$. Furthermore, let $\tilde P(z) = \sum_{i=0}^d b_i z^i = P(z+\tilde z)-\tilde z$ and $$ \tilde Q(r) = \left( \sum_{i=1}^d |b_i| r^{i-1} \right) - 1. $$ Then $\tilde r_{\tilde Q}$ defined by $$ \tilde r_{\tilde Q} = \min \{ r \in \R^+ : \tilde Q(r) = 0 \} $$ satisfies $D(\tilde z, \tilde r_{\tilde Q}) \subset A^*(\tilde z)$ (the immediate basin of $\tilde z$).
\end{lemma} In our case, with $P = \unipol^t$ and $\tilde z = \alpha$, so that $\tilde P(z) = \unipol^t(z+\alpha)-\alpha$, we can calculate $$ \tilde Q(r) = r^{2^t-1} + \left| \frac{(\unipol^t)^{(2^t-1)}(\alpha)}{(2^t-1)!} \right| r^{2^t-2} + \cdots + \left| \frac{(\unipol^t)''(\alpha)}{2!} \right| r + |\lambda| - 1. $$ Note that if $t=1$, we have \begin{equation} \label{eq:ConvRad1} d(\alpha, \partial U) \geq \tilde r_{\tilde Q} = 1-|\lambda|. \end{equation} Note that $|c| \leq 2$, since $c$ lies in the Mandelbrot set. Also, since $\alpha$ is periodic, $|\alpha| \leq R_c \leq 2$. For $t \geq 2$, the roots of $$ r^{2^t-1} \tilde Q \left( \frac{1}{r} \right) = (|\lambda| - 1) r^{2^t-1} + \left| \frac{(\unipol^t)''(\alpha)}{2!} \right| r^{2^t-2} + \cdots + \left| \frac{(\unipol^t)^{(2^t-1)}(\alpha)}{(2^t-1)!} \right| r + 1 $$ have modulus at most $$ 1 + \frac{H(\tilde P)}{1-|\lambda|}, $$ (see for example \cite[Lemma~3.5]{S}) where $H(f)$ denotes the naive height of a polynomial $f$, i.e.\ the maximum of the modulus of its coefficients. Hence $$ \tilde r_{\tilde Q} \geq \frac{1-|\lambda|}{1-|\lambda| + H(\tilde P)}. $$ Recall (see \cite[Lemma~1.2~(c)]{KPS}) that for two polynomials $f,g$ we have $$ H(f \circ g) \leq H(f)H(g)^{\deg f} 2^{(\deg f)(\deg g +1)}. $$ In particular, $$ H(\unipol^t) \leq H(\unipol^{t-1})H(\unipol)^{2^{t-1}}2^{2^t+2^{t-1}} = H(\unipol^{t-1})\max \{1, |c| \}^{2^{t-1}}2^{2^t+2^{t-1}}, $$ and so inductively we obtain \begin{align*} H(\unipol^t) & \leq \max \{1, |c| \}^{2^{t-1}+\cdots+2+1} 2^{(2^t+2^{t-1})+\cdots+(2^2+2)} \\ & = \max \{1, |c| \}^{2^t-1} 2^{2^{t+1} + 2^t - 6}. \end{align*} Hence, \begin{align*} H( \tilde P) & \leq H(\unipol^t(z+\alpha)) + |\alpha| \leq \max\{1,|\alpha|\} H(\unipol^t) 2^{2^t+1} + |\alpha| \\ & \leq 2 \left( \max \{1, |c| \}^{2^t-1} 2^{2^{t+1} + 2^t - 6} + 1 \right) \\ & \leq 2 \left(2^{2^{t+2}-7}+1 \right) \leq 2^{2^{t+2}-5} \end{align*} whence \begin{equation} \label{eq:ConvRad2} d(\alpha, \partial U) \geq \tilde r_{\tilde Q} \geq \frac{1-|\lambda|}{2^{2^{t+2}-5}}.
\end{equation} In conclusion, plugging \eqref{eq:ConvRad1} and \eqref{eq:ConvRad2} into \eqref{eq:boundo}, we obtain the following result. \begin{prop} \label{prop:InMandelBound} Let $\unipol(z)=z^2+c$, with $c$ lying in a hyperbolic component of the Mandelbrot set of period $t$, and suppose the period $t$ attracting cycle of $\unipol$ has multiplier $\lambda \neq 0$. Then there is a disk about 0 of radius at least $$ \frac{|\lambda| \left( |\lambda|^2 - 1 + \sqrt{1-|\lambda|^2} \right) \left( |\lambda| - 1 + \sqrt{1- |\lambda|^2} \right) (1- |\lambda|)}{ C_3 \left( |\lambda| + 1 - \sqrt{1-|\lambda|^2} \right)^3} $$ containing no preperiodic points for $\unipol$, where $$ C_3 = \begin{cases} 1 & \text{if } t=1, \\ 2^{2^{t+2}-5} & \text{if } t > 1. \end{cases} $$ \end{prop} Note that the bound given in Proposition~\ref{prop:InMandelBound} is asymptotically equivalent to $|\lambda|/(2C_3)$ as $|\lambda| \to 0$, and to $(1-|\lambda|)^2/(4C_3)$ as $|\lambda| \to 1$. In \cite[Theorem~2]{I2}, given a rational function $\rat$ with a periodic cycle, Ingram relates the \emph{critical height} of $\rat$ (in the case $\rat=\unipol$, this is proportional to the canonical height $\canheight{\unipol}(0)$ of the unique critical point) to the multiplier of the periodic cycle. We present below a version of this result, which will allow us to remove the dependence on $|\lambda|$ from Proposition~\ref{prop:InMandelBound}. \begin{lemma} \label{lem:CritHeightVSMultiplierHeight} Suppose $\unipol(z)=z^2+c$ has a $t$-cycle of multiplier $\lambda \neq 1$. Then $$ h(\lambda) \leq C_4 \canheight{\unipol}(0) + C_5, $$ where \begin{equation*} C_4 = t2^{2t} + C_1(2^{t+2}-1)(2^t+2)(2^t-1), \end{equation*} and \begin{align*} C_5 & = (2^{t+2}-1) ( (2^t + 2) (2^t-1) \left( C_2 + 4 \log 8 \right) \notag\\ & \qquad \quad + (2^t+1)^2 \log 2 + \log (2^t+1)(2^t+2) ) + 2 \log 2^{t+1}(2^{t+1}-1)! 
\notag \\ & \qquad \qquad + \log 2 + 2^{t+1} \log \mathrm{lcm}(1,\ldots,2^t) + 2 \log \max \left \{8, 3^{2^t-1} \right \}, \end{align*} where $C_1,C_2$ are the constants appearing in Corollary~\ref{cor:CanHeightBound1}. \end{lemma} \begin{proof} Given a rational function $\rat : \P^1 \to \P^1$ of degree at most $d$, written in the form $$ \rat(z) = \frac{c_0 + c_1 z + \cdots + c_d z^d}{c_{d+1} + c_{d+2}z + \cdots + c_{2d+1}z^d}, $$ we set $$ h_{\mathrm{Hom}_d}(\rat) := \sum_{v \in M_K} \log \max \{ |c_0|_v , \ldots, |c_{2d+1}|_v \}, $$ which is well-defined by the product formula. By \cite[Proposition~5]{HS}, we have $$ h_{\mathrm{Hom}_{d^n}}(\rat^n) \leq \left( \frac{d^n-1}{d-1} \right) h_{\mathrm{Hom}_d}(\rat) + d^2 \left( \frac{d^{n-1}-1}{d-1} \right) \log 8. $$ In particular, for the unicritical polynomial $\unipol(z)=z^2+c$ we have $h_{\mathrm{Hom}_d}(\unipol) = h(c)$, and so \begin{equation} \label{eq:HB1} h_{\mathrm{Hom}_{d^t}}(\unipol^t) \leq (2^t-1) h(c) + 4 ( 2^{t-1}-1) \log 8. \end{equation} Also, define the \emph{critical height} $$ \hat h_{\mathrm{crit}}(\rat) := \sum_{\alpha \in \P^1} (e_\alpha(\rat)-1) \canheight{\rat}(\alpha), $$ where $e_\alpha(\rat) = \mathrm{ord}_\alpha(\rat(z)-\rat(\alpha))$ is the \emph{ramification index} of $\rat$ at $\alpha$. By \cite[Proposition~6.25~(b)]{Si2}, we have $\hat h_{\mathrm{crit}}(\rat^n) = n \hat h_{\mathrm{crit}}(\rat)$ for any $n \geq 1$. Note also that $$ \hat h_{\mathrm{crit}}(\unipol) = \canheight{\unipol}(0), $$ and so \begin{equation} \label{eq:HB2} \hat h_{\mathrm{crit}}(\unipol^t) = t \canheight{\unipol}(0). \end{equation} Moreover, it is easy to show that $\hat h_{\mathrm{crit}}$ is invariant under change of coordinates. 
By \cite[Lemma~11]{I2}, there exists a rational function $\psi$, conjugate to $\unipol^t$ by a linear fractional transformation, such that $\psi(0)=0$ with multiplier $\lambda$, $\psi(\infty)=\infty$ and \begin{align} \label{eq:HB3} h_{\mathrm{Hom}_{2^t}}(\psi) & \leq (2^t+2) h_{\mathrm{Hom}_{2^t}}(\unipol^t) + (2^t+1)^2 \log 2 + \log(2^t+1)(2^t+2) \notag \\ & \leq (2^t+2) \left( (2^t-1) h(c) + 4 ( 2^{t-1}-1) \log 8 \right) \notag \\ & \qquad \qquad + (2^t+1)^2 \log 2 + \log(2^t+1)(2^t+2), \end{align} where the last inequality follows from \eqref{eq:HB1}. Now, following the proof of \cite[Lemma~12]{I2} with $k=1$ and using \eqref{eq:HB2}, we have \begin{align} \label{eq:HB4} t 2^{2t} \canheight{\unipol}(0) & = 2^{2t} \hat h_{\mathrm{crit}}(\unipol^t) = 2^{2t} \hat h_{\mathrm{crit}}(\psi) \notag \\ & \geq h(\lambda) - (2^{t+2}-1) h_{\mathrm{Hom}_{2^t}}(\psi) - 2 \log \left( 2^{t+1}(2^{t+1}-1)! \right) \notag \\ & \qquad \qquad - \log 2 - 4 \log \mathrm{lcm}(1,\ldots,2^t) - 2 \log \max \{ 8, 3^{2^t-1} \} \notag \\ & \geq h(\lambda) - (2^{t+2}-1) ( (2^t+2) \left( ( 2^t-1) h(c) + 4 ( 2^{t-1}-1) \log 8 \right) \notag \\ & \: + (2^t+1)^2 \log 2 + \log(2^t+1)(2^t+2) ) - 2 \log \left( 2^{t+1}(2^{t+1}-1)! \right) - \log 2 \notag \\ & \qquad \qquad - 4 \log \mathrm{lcm}(1,\ldots,2^t) - 2 \log \max \{ 8, 3^{2^t-1} \}, \end{align} with the last inequality coming from \eqref{eq:HB3}. Recall from Corollary~\ref{cor:CanHeightBound1} that $h(c) \leq C_1 \canheight{\unipol}(0) + C_2$. Plugging this into \eqref{eq:HB4} and rearranging completes the proof. \end{proof} \begin{prop} \label{prop:DeltavBoundParamIhHypComp} Suppose $v \in M_K^\infty$ is such that $c$ lies in a period $t$ hyperbolic component of the $v$-adic Mandelbrot set. Then $$ - \log \delta_v \leq A_{\infty,2} + B_{\infty,2} \canheight{\unipol}(0), $$ where $$ A_{\infty,2} := C_5 + \log(20) + \log(C_3) \quad \text{and} \quad B_{\infty,2} := C_4.
$$ \end{prop} \begin{proof} First suppose that $0.1 \leq |\lambda| \leq 0.7$. Define $$ g(x) := \frac{x(x^2-1+\sqrt{1-x^2})(x-1+\sqrt{1-x^2})(1-x)}{(x+1-\sqrt{1-x^2})^3}, \quad 0 < x \leq 1. $$ Using elementary calculus, it is easy to see that on $[0.1,0.7]$, $g(x)$ attains its minimum at $x=0.7$, and so by Proposition~\ref{prop:InMandelBound} $$ \delta_v \geq \frac{g(0.7)}{C_3} > \frac{1}{60C_3}, $$ whence $$ -\log \delta_v \leq \log(60)+\log(C_3) < A_{\infty,2} + B_{\infty,2} \canheight{\unipol}(0). $$ Note that $g(x)/x$ is decreasing on $(0,1]$, and $g(0.1)/0.1 > 1/3$, so if $|\lambda| < 0.1$, then $$ \delta_v \geq \frac{g(|\lambda|)}{C_3 |\lambda|} |\lambda| \geq \frac{g(0.1)}{0.1 C_3} |\lambda| \geq \frac{|\lambda|}{3C_3}. $$ It follows from \eqref{eq:FundamentalHeightInequality} and Lemma~\ref{lem:CritHeightVSMultiplierHeight} that \begin{align*} - \log \delta_v & \leq \log(3C_3)-\log |\lambda| \leq \log(3C_3)+h(\lambda) \\ & \leq C_4 \canheight{\unipol}(0) + C_5 + \log 3 + \log C_3 < A_{\infty,2} + B_{\infty,2} \canheight{\unipol}(0). \end{align*} Moreover, $g(x)/(1-x)^2$ is increasing on $(0,1]$, and $g(0.7)/0.3^2 > 1/5$, so if $|\lambda| > 0.7$, then $$ \delta_v \geq \frac{g(|\lambda|)}{C_3 (1-|\lambda|)^2} (1-|\lambda|)^2 \geq \frac{g(0.7)}{ 0.3^2 C_3} (1-|\lambda|)^2 \geq \frac{|\lambda-\xi|^2}{10C_3}, $$ where $\xi$ is a root of unity with $|\lambda - \xi| \leq \sqrt{2}(1-|\lambda|)$, which exists since the roots of unity are dense in the unit circle. Thus, using \eqref{eq:FundamentalHeightInequality}, \eqref{eq:HeightSumInequality1} and Lemma~\ref{lem:CritHeightVSMultiplierHeight}, \begin{align*} - \log \delta_v & \leq \log (10C_3) - \log |\lambda-\xi| \leq \log(10C_3) + h(\lambda-\xi) \\ & \leq \log(10C_3) + \log 2 + h(\xi) + h(\lambda) \\ & \leq C_4 \canheight{\unipol}(0) + C_5 + \log(20) + \log(C_3) \leq A_{\infty,2} + B_{\infty,2} \canheight{\unipol}(0), \end{align*} as desired.
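The elementary-calculus claims about $g$ in the proof above can be confirmed numerically. The sketch below (illustrative only, not part of the proof) samples $g$ on a grid and checks the minimum on $[0.1,0.7]$, the endpoint values $g(0.1)/0.1 > 1/3$ and $g(0.7)/0.3^2 > 1/5$, and the claimed monotonicity of $g(x)/x$ and $g(x)/(1-x)^2$ on the sampled range.

```python
import math

def g(x):
    """The auxiliary function from the proof above, for 0 < x <= 1."""
    s = math.sqrt(1 - x * x)
    return x * (x * x - 1 + s) * (x - 1 + s) * (1 - x) / (x + 1 - s) ** 3

xs = [0.1 + 0.6 * k / 1000 for k in range(1001)]   # grid on [0.1, 0.7]

# on [0.1, 0.7] the minimum of g is attained at the right endpoint 0.7,
# and g(0.7) > 1/60
assert min(g(x) for x in xs) >= g(0.7) - 1e-12
assert g(0.7) > 1 / 60

# the endpoint values used for the two remaining ranges of |lambda|
assert g(0.1) / 0.1 > 1 / 3
assert g(0.7) / 0.3 ** 2 > 1 / 5

# g(x)/x decreasing and g(x)/(1-x)^2 increasing on the sampled range
r1 = [g(x) / x for x in xs]
assert all(a >= b - 1e-12 for a, b in zip(r1, r1[1:]))
r2 = [g(x) / (1 - x) ** 2 for x in xs]
assert all(a <= b + 1e-12 for a, b in zip(r2, r2[1:]))
```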
\end{proof} \section{Bounds on preperiodic points at places of good reduction} \label{sec:nonarch} Let $v$ be a finite place of $K$ lying over a rational prime $p$, write $| \cdot | = | \cdot |_v$, $\cO = \cO_v$, $\m=\m_v$ and $k=k_v$. Note that $k$ is a finite field of characteristic $p$, say with $q$ elements. Assume $0$ is not preperiodic for $\unipol$, and that $|c| \leq 1$, so that $\unipol$ has good reduction at $v$. In this section, for $z \in \C_v$ we use the notation $O(z)$ to denote an element $x \in \C_v$ with $|x| \leq |z|$. Since there are only finitely many residue classes modulo $v$, the unit disk $D(0,1) \subset \C_v$ must be preperiodic under $\unipol$. In particular, we can take minimal $n \geq 1$, $m \geq 0$ with $n + m \leq q$ such that $\unipol^{n+m}(D(0,1)) = \unipol^m(D(0,1)) = D(x,1)$ where $x := \unipol^m(0)$ (the last equality follows for example from \cite[Theorem~3.15]{B}). Set $y := \unipol^n(x)-x$ so that $|y| < 1$, and on $D(x,1)$, write \begin{equation} \label{eq:NewtonPoly} \unipol^n(z)-x = c_0 + c_1(z-x)+\cdots + c_{2^n}(z-x)^{2^n}. \end{equation} Then $c_0 = y$, so $|c_0| < 1$, and \begin{equation} \label{eq:NewtonPolyC1} c_1 = (\unipol^n)'(x) = 2^n x \unipol(x) \cdots \unipol^{n-1}(x). \end{equation} \subsection{Attracting case} We have $|c_1| < 1$ if either $m = 0$ or $|2| < 1$. In this case, since $c_{2^n}=1$, $\unipol^n-x$ has Weierstrass degree at least 2 on $D(x,1)$, and so by \cite[Theorem~4.18]{B}, $D(x,1)$ is an attracting component for $\unipol$ with a unique attracting periodic point $b$ of exact period $n$. We first produce an iterate of $\unipol^n$ which sends $x$ sufficiently close to the attracting periodic point $b$. \begin{lemma} \label{lem:AttractSmallCoeffs} There exists an integer $\ell$ with $$ \ell \leq \left \lfloor \frac{\log \left( \frac{n \log |2|}{\log |y|}+ 1 \right)}{\log 2} \right \rfloor + 1 $$ such that $\delta := |\unipol^{ \ell n}(x)-b| < |(\unipol^n)'(b)|$.
In particular, we can take $\ell=1$ if $|2|=1$. Moreover, $\delta$ satisfies $$ -\log \delta \leq 2^{r_1}h(c)+(2^{r_1-1}+1)\log 2, $$ where $$ r_1 := \begin{cases} q-1 & |2|=1, \\ q \left( \left \lfloor \frac{\log \left( \frac{q \log |2|}{\log |\pi|}+1 \right)}{\log 2} \right \rfloor + 1 \right)-1 & |2| < 1. \end{cases} $$ \end{lemma} \begin{remark} \label{rem:AttractSmallCoeffs} Note that if we write $$ \unipol^{\ell n}(z) - \unipol^{\ell n}(x) = b_0 + b_1 (z - \unipol^{\ell n}(x)) + \cdots + b_{2^{\ell n}} (z - \unipol^{\ell n}(x))^{2^{\ell n}}, $$ then $b_0 = \unipol^{2 \ell n}(x) - \unipol^{\ell n}(x)$ and $$ b_1 = (\unipol^{\ell n})'(\unipol^{\ell n}(x)) = 2^{\ell n} \unipol^{\ell n}(x) \unipol^{\ell n +1}(x) \cdots \unipol^{2 \ell n -1}(x) $$ satisfies $|b_1| < 1$, and so $$ b - \unipol^{\ell n}(x) = \unipol^{\ell n}(b) - \unipol^{\ell n}(x) = b_0 + b_1(b-\unipol^{\ell n}(x)) + O \left( (b - \unipol^{\ell n}(x))^2 \right) $$ implies that we can calculate $$ |\unipol^{\ell n}(x) - b| = |\unipol^{2 \ell n}(x) - \unipol^{\ell n}(x)|. $$ \end{remark} \begin{proof} Looking at the Newton polygon of \eqref{eq:NewtonPoly}, we see that $|b-x|=|y|$. Write $x = b+w$ with $|w| = |y|$. Then $$ \unipol^n(x) = \unipol^n(b) + (\unipol^n)'(b)w + O(w^2) = b + (\unipol^n)'(b)w +O(w^2), $$ and so $$ |\unipol^n(x)-b| \leq \max \{ |(\unipol^n)'(b)||y|, |y|^2 \}. $$ Continuing by induction, we obtain \begin{equation} \label{eq:7.1Bound} |\unipol^{\ell n}(x)-b| \leq \max \{ |(\unipol^n)'(b)||y|, |y|^{2^\ell} \}. \end{equation} Now, if $m=0$, then $|b|=|y|<1$. For $1 \leq i < n$, we have $$ \unipol^i(b) = \unipol^i(0) + O(b^2), $$ which gives $|\unipol^i(b)| = 1$, since by definition $|\unipol^i(0)|=1$ for $i < n$. On the other hand, if $m > 0$, then $|b|=1$ and so $|\unipol^i(b)|=1$ for $0 \leq i < n$, as otherwise, say $|\unipol^i(b)| < 1$, we would have $$ \unipol^i(b) = \unipol^{n+i}(b) = \unipol^n(0) + O(\unipol^i(b)^2), $$ giving $|\unipol^i(b)|=1$, a contradiction.
In either case, we have $$ |(\unipol^n)'(b)| = |2|^n \left| b \unipol(b) \cdots \unipol^{n-1}(b) \right| \geq |2|^n |y|. $$ Note in particular that if $m > 0$, then $|(\unipol^n)'(b)|=|2|^n$, so by \eqref{eq:7.1Bound} we can take $\ell = 1$ if $|2|=1$. Moreover, if $m=0$, then $$ b = \unipol^n(b) = \unipol^n(0) + O(b^2) = \unipol^n(x)+O(b^2), $$ so $|\unipol^n(x)-b| \leq |y|^2$, and we can again take $\ell = 1$ if $|2|=1$. In general, by \eqref{eq:7.1Bound} for $$ \ell > \frac{\log \left( \frac{n \log |2|}{\log |y|}+1 \right)}{\log 2} $$ we have $$ |\unipol^{\ell n}(x)-b| \leq |2|^n |y| \leq |(\unipol^n)'(b)|. $$ Note that since $b$ is periodic for $\unipol$, $\canheight{\unipol}(b) = 0$ and so from \eqref{eq:CanHeightConst}, $$ h(b) \leq h(c)+\log 2. $$ Now, by \eqref{eq:FundamentalHeightInequality} and \eqref{eq:HeightSumInequality1}, \begin{align*} - \log \delta & \leq - \log | \unipol^{\ell n+m}(0) - b | \leq h(\unipol^{\ell n+m-1}(c)-b) \\ & \leq h(\unipol^{\ell n+m-1}(c)) + h(b) + \log 2 \\ & \leq (2^{\ell n+m-1}-1) h(c) + (2^{\ell n+m-2}-1) \log 2 + h(c)+2 \log 2 \\ & = 2^{\ell n+m-1} h(c) + (2^{\ell n+m-2}+1)\log 2. \end{align*} The result follows, noting that $$ \ell n+m = (\ell -1)n+(n+m) \leq \ell q \leq q \left( \left \lfloor \frac{\log \left( \frac{q \log |2|}{\log |\pi|}+1 \right)}{\log 2} \right \rfloor + 1 \right) $$ since $n+m \leq q$ and $|y| \geq |\pi|$. \end{proof} Now, for points $z$ close to 0, we can precisely control the distance from iterates of $z$ under $\unipol$ to $b$, ruling out the possibility that $z$ is preperiodic. \begin{prop} \label{prop:AttractNoPrePer} Let $\delta$ be as in Lemma~\ref{lem:AttractSmallCoeffs}. Then $D(0,\delta)$ contains no preperiodic points for $\unipol$. \end{prop} \begin{proof} Let $z \in D(0,\delta)$. Then $$ \unipol^{\ell n+m}(z) - b = \unipol^{\ell n+m}(0) - b + O(z^2), $$ and so $|\unipol^{\ell n+m}(z)-b| = |\unipol^{\ell n}(x)-b|=\delta$. Write $\unipol^{\ell n+m}(z) = b+w$ with $|w| = \delta$.
Then $$ \unipol^{(\ell +1)n+m}(z) = \unipol^n(b) + (\unipol^n)'(b)w + O(w^2). $$ Since $|w| = \delta < |(\unipol^n)'(b)|$, we have $$ |\unipol^{(\ell +1)n+m}(z)-b| = |(\unipol^n)'(b)| \delta. $$ We can therefore proceed by induction to obtain $$ |\unipol^{(\ell+j)n+m}(z)-b| = |(\unipol^n)'(b)|^j \delta \neq 0 $$ for any $j \geq 1$. But $\unipol^{(\ell+j)n+m}(z) \to b$ as $j \to \infty$ since $b$ is attracting, so we conclude that $z$ is not preperiodic for $\unipol$. \end{proof} \subsection{Indifferent case} The only case remaining is where $|2|=1$, and $D(0,1) \subset \C_v$ is strictly preperiodic, say with (minimal) $m,n \geq 1$ such that $\unipol^{m+n}(D(0,1)) = \unipol^m(D(0,1)) = D(x,1)$, where $x = \unipol^m(0)$. In this case, since $|\unipol^i(x)|=1$ for all $i$, referring again to \eqref{eq:NewtonPoly} and \eqref{eq:NewtonPolyC1}, we have $|c_1|=1$, and so $\unipol^n-x$ has Weierstrass degree 1 on $D(x,1)$. Hence by \cite[Theorem~4.18]{B}, $D(x,1)$ is an indifferent periodic component for $\unipol$, which is mapped bijectively onto itself by $\unipol^n$. For this case, we will use the following form of the Taylor expansion: Let $n \geq 1$, $z \in \C_v$, and set $w := \unipol^n(z)-z$. Then \begin{align*} \unipol^{2n}(z) = \unipol^n(z+w) & = \unipol^n(z) + (\unipol^n)'(z)w + O(w^2) \\ & = z + (1+ (\unipol^n)'(z))w + O(w^2). \end{align*} Continuing this inductively, we obtain for any integer $t \geq 1$, $$ \unipol^{tn}(z) = z + (1+(\unipol^n)'(z) + \cdots + (\unipol^n)'(z)^{t-1})w + O(w^2). $$ Note in particular that $\unipol^{tn}(z)-z=O(w)$, and so \begin{align*} \unipol^{tn+1}(z) = \unipol(z + \unipol^{tn}(z)-z) & = \unipol(z) + \unipol'(z) (\unipol^{tn}(z)-z) + O((\unipol^{tn}(z)-z)^2) \\ & = \unipol(z) + \unipol'(z) (1+(\unipol^n)'(z) + \cdots + (\unipol^n)'(z)^{t-1})w + O(w^2). 
\end{align*} Thus, by another induction, this time on $0 \leq s < n$, we obtain \begin{equation} \label{eq:NonArchTaylor} \unipol^{tn+s}(z) = \unipol^s(z) + (\unipol^s)'(z) (1+(\unipol^n)'(z) + \cdots + (\unipol^n)'(z)^{t-1})w + O(w^2), \end{equation} for all $t \geq 1$ and $0 \leq s < n$. Note also the following. \begin{lemma} \label{lem:GPVal} Let $z \in \C_v$. If $|z-1| < |p|^{1/(p-1)}$, then $$ \left| \frac{z^n-1}{z-1} \right| = |n| $$ for $n = p$ and all integers $n$ with $p \nmid n$. If $|z-1| \geq |p|^{1/(p-1)}$, then $$ \left| \frac{z^p-1}{z-1} \right| = |z-1|^{p-1}. $$ \end{lemma} \begin{proof} Write $z=1+u$. If $|u| < |p|^{1/(p-1)}$, then $$ |z^n-1| = |(1+u)^n - 1| = \left| nu +\binom{n}{2} u^2 + \cdots + n u^{n-1} + u^n \right|. $$ If $p \nmid n$, then for $k > 1$ $$ \left| \binom{n}{k} u^k \right| \leq |u|^k < |u| = |nu|, $$ and so $|z^n-1| = |nu| = |u|$ gives $|(z^n-1)/(z-1)|=1$. If $n=p$, then $$ \left| \binom{n}{k} u^k \right| = \begin{cases} |p| |u|^k, & 1 \leq k < n, \\ |u|^n, & k = n, \end{cases} $$ so $|z^p-1| = |p||u|$, since $|u| < |p|^{1/(p-1)}$. We conclude that $$ \left| \frac{z^p-1}{z-1} \right| = |p|. $$ On the other hand, if $|u| \geq |p|^{1/(p-1)}$, it is easy to see that $$ \max_{1 \leq k \leq p} \left|\binom{p}{k} \right| |u^k| = |u|^p, $$ and so $$ \left| \frac{z^p-1}{z-1} \right| = |z-1|^{p-1}. $$ \end{proof} The indifferent case is more complicated, but we can proceed along similar lines, in the spirit of how Rivera-Letelier \cite[\textsection 3.2]{RL} proved the discreteness of Fatou preperiodic points (see also \cite[\textsection 10.2]{B}). We say that a point $a \in \C_v$ is \emph{quasi-periodic} under $\unipol$ if there is a radius $r > 0$ and an integer $m \geq 1$ such that the \emph{iterative logarithm} of $\unipol$, $$ L_{\unipol}(z) := \lim_{\ell \to \infty} \frac{\unipol^{mp^\ell}(z)-z}{mp^\ell} $$ converges uniformly on $\overline D(a,r)$.
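As an aside, the first case of Lemma~\ref{lem:GPVal} is easy to sanity-check with exact rational arithmetic, here over $\Q_5$; the following sketch (the helper name \texttt{v\_p} is ours) confirms that $|(z^n-1)/(z-1)|_5 = |n|_5$ for $z = 1+5$ and small $n$.

```python
from fractions import Fraction

def v_p(x, p):
    """p-adic valuation of a nonzero rational number x."""
    x = Fraction(x)
    num, den, e = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        e += 1
    while den % p == 0:
        den //= p
        e -= 1
    return e

# Lemma GPVal, first case: for z = 1 + u with |u|_p < |p|^(1/(p-1))
# (for odd p this just means |u|_p < 1, e.g. u = p),
# |(z^n - 1)/(z - 1)|_p = |n|_p whenever p does not divide n, or n = p.
p, z = 5, 1 + 5
for n in [2, 3, 4, 7, p]:
    ratio = (z**n - 1) // (z - 1)  # exact: 1 + z + ... + z^(n-1)
    assert v_p(ratio, p) == v_p(n, p)
```

The integer division here is exact, since $z-1$ divides $z^n-1$.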
Rivera-Letelier proved that for a quasi-periodic point $a$, $L_{\unipol}(a) = 0$ if and only if $a$ is an indifferent periodic point, and moreover that $L_{\unipol}$ is a power series converging on a neighbourhood of $a$. Since the zeros of a power series are isolated, this can be used to prove the discreteness of preperiodic points in the Fatou set; see \cite[Proposition~2]{P}. In our case, all points in $D(x,1)$ are quasi-periodic, and so the appropriate analogues of Lemma~\ref{lem:AttractSmallCoeffs} and Proposition~\ref{prop:AttractNoPrePer} involve controlling the rate at which $\unipol^{mp^\ell}(z)$ converges to $z$ for some appropriately chosen $m$ and points $z$ sufficiently close to $x$. \begin{lemma} \label{lem:SmallCoeffs} There exist non-negative integers $j,\ell$ with $1 \leq j \leq q-1$, and $$ 0 \leq \ell \leq \left \lfloor \frac{\log \left( \sqrt{5} \frac{\log |p|}{\log |\pi|} + \frac{1}{2} \right)}{\log \phi} \right \rfloor, $$ where $\phi$ denotes the golden ratio, such that $|\unipol^{jnp^\ell}(x)-x| =: \delta^2 < |p|$ and $|(\unipol^{jnp^\ell})'(x)-1| < |p|^{1/(p-1)}$. Also, $\delta$ satisfies $$ -\log \delta \leq (2^{r_2}+2^{q-2}-1)h(c)+(2^{r_2-1}+2^{q-3}-1) \log 2, $$ where $$ r_2 := q(q-1) \left(\sqrt{5} \frac{\log |p|}{\log |\pi|}+\frac{1}{2} \right)^{\frac{\log p}{\log \phi}} - 2. $$ Moreover, in the case $K = \Q$ we can take $\ell = 1$, and we have $$ - \log \delta \leq \left( 2^{p^2(p-1)-2}+2^{p-2}-1 \right) h(c) + \left( 2^{p^2(p-1)-3}+2^{p-3}-1 \right) \log 2. $$ \end{lemma} \begin{proof} Recall that we write $\unipol^n(x)=x+y$ with $|y| \leq |\pi| < 1$. We have \begin{align*} \unipol^{2n}(x) = \unipol^n(x+y) & = \unipol^n(x) + (\unipol^n)'(x)y + O(y^2) \\ & = x + y(1+(\unipol^n)'(x)) + O(y^2). \end{align*} If $|(\unipol^n)'(x)-1| < 1$, let $j = 1$. Otherwise, let $j = q-1$.
Since $|(\unipol^n)'(x)|=1$, and \eqref{eq:NonArchTaylor} gives $$ \unipol^{s+tn}(x) \equiv \unipol^s(x) \pmod{\m} $$ for $0 \leq s < n$ and $1 \leq t < j$, we have \begin{align*} (\unipol^{jn})'(x) & = 2^{jn} x \unipol(x) \cdots \unipol^{jn-1}(x) \\ & \equiv 2^{(q-1)n} (x \unipol(x) \cdots \unipol^{n-1}(x))^{q-1} \\ & \equiv (2^n)^{q-1} (\unipol^n)'(x)^{q-1} \equiv 1 \pmod{\m}, \end{align*} recalling that the residue field $k = \cO/\m$ has $q$ elements. We show by induction that for all $i \geq 0$, $$ |\unipol^{p^i jn}(x)-x| \leq \max \left \{ |p|, |\pi|^{F_{i+1}} \right \}, \quad |(\unipol^{p^i jn})'(x)-1| \leq \max \left \{ |p|, |\pi|^{F_i} \right \}, $$ where $F_i$ denotes the $i$-th Fibonacci number. Note that in the case $K=\Q$, we have $|\pi|=|p|$, and so this implies we can take $\ell=1$. The basis case follows from above. Let $i \geq 0$ and write $\unipol^{p^i jn}(x)=x+\beta$ with $|\beta| \leq \max \{ |p|, |\pi|^{F_{i+1}} \}$ and $(\unipol^{p^i jn})'(x) = 1 + \alpha$ with $|\alpha| \leq \max \{ |p|, |\pi|^{F_i} \}$. Then using \eqref{eq:NonArchTaylor} with $s=0$ and $t=p$, but with $p^i jn$ in place of $n$, we have from Lemma~\ref{lem:GPVal} \begin{align*} \left| \unipol^{p^{i+1} jn}(x) - x \right| & = \left| \beta \left( \frac{(\unipol^{p^i jn})'(x)^p-1}{(\unipol^{p^i jn})'(x)-1} \right) + O(\beta^2) \right| \\ & \leq \max \{ |p|, |\beta||\alpha|^{p-1}, |\beta|^2 \} \\ & \leq \max \{ |p|, |\pi|^{F_{i+1} + (p-1)F_i}, |\pi|^{2F_{i+1}} \} \leq \max \{ |p|, |\pi|^{F_{i+2}} \}. 
\end{align*} Additionally, since \eqref{eq:NonArchTaylor} gives $\unipol^{s+tp^ijn}(x) = \unipol^s(x) + O(\beta)$ for $0 \leq s < p^i jn$, $1 \leq t < p$, we have \begin{align*} (\unipol^{p^{i+1}jn})'(x) & = 2^{p^{i+1}jn} x \unipol(x) \cdots \unipol^{p^{i+1}jn-1}(x) \\ & = 2^{p^{i+1}jn} \left(x \unipol(x) \cdots \unipol^{p^i jn-1}(x) \right)^{p} + O(\beta) \\ & = (\unipol^{p^i jn})'(x)^p + O(\beta), \end{align*} and so by Lemma~\ref{lem:GPVal}, \begin{align*} |(\unipol^{p^{i+1} jn})'(x)-1| & = |(\unipol^{p^i jn})'(x)^p - 1 + O(\beta)| \\ & \leq \max \{ |p||\alpha|, |\beta|, |\alpha|^p \} \\ & \leq \max \{ |p|, |\pi|^{F_{i+1}}, |\pi|^{p F_i} \} \leq \max \{ |p|, |\pi|^{F_{i+1}} \}. \end{align*} The bound on $\ell$ follows by noting that $$ F_\ell = \left \lfloor \frac{\phi^\ell}{\sqrt{5}} + \frac{1}{2} \right \rfloor. $$ Now, from \eqref{eq:FundamentalHeightInequality} and \eqref{eq:HeightSumInequality1} we have \begin{align*} - \log \delta^2 & = - \log | \unipol^{p^\ell jn}(x) - x | \leq h(\unipol^{p^\ell jn}(x)-x) \\ & \leq h(\unipol^{p^\ell j n + m - 1}(c)) + h(\unipol^{m-1}(c)) + \log 2 \\ & \leq \left( 2^{p^\ell jn + m - 1}+2^{m-1}-2 \right) h(c) + \left(2^{p^\ell jn+m-2}+2^{m-2}-2 \right) \log 2. \end{align*} The result follows, noting that $j \leq q - 1$, $n + m \leq q$ and $$ p^{\ell} \leq \left(\sqrt{5} \frac{\log |p|}{\log |\pi|}+\frac{1}{2} \right)^{\frac{\log p}{\log \phi}}. $$ The bound for the case $K=\Q$ comes from taking $\ell = 1$ and noting that $q=p$. \end{proof} \begin{prop} \label{prop:IndiffNoPer} Let $\delta$ be as in Lemma~\ref{lem:SmallCoeffs}. Then there are no preperiodic points for $\unipol$ in the disk $D(x,\delta^2)$. \end{prop} \begin{proof} Let $j,\ell$ be as in Lemma~\ref{lem:SmallCoeffs}, write $N = jnp^\ell$ and $$ \unipol^N(x) = x + a $$ with $|a| = \delta^2$. Let $z \in D(x,\delta^2)$ and write $z=x+w$ with $|w| < \delta^2$.
Then \begin{align*} \unipol^N(z) = \unipol^N(x+w) & = \unipol^N(x)+ (\unipol^N)'(x)w + O(w^2) \\ & = x+a+w+ \left( (\unipol^N)'(x)-1 \right)w + O(w^2) \\ & =: z + Y, \end{align*} with $|Y| = |a| = \delta^2$. Note that \begin{align*} (\unipol^N)'(z) & = 2^N z \unipol(z) \cdots \unipol^{N-1}(z) \\ & = 2^N (x+O(w))(\unipol(x)+O(w)) \cdots (\unipol^{N-1}(x)+O(w)) \\ & = (\unipol^N)'(x)+O(w), \end{align*} and so $$ |(\unipol^N)'(z)-1| = |(\unipol^N)'(x)-1+O(w)| \leq |p|^{1/(p-1)}. $$ Let $0 \leq s < N$. By \eqref{eq:NonArchTaylor} and Lemma~\ref{lem:GPVal}, for any integer $t > 0$ with $p \nmid t$ we have \begin{align*} |\unipol^{s+tN}(z)-\unipol^{s}(z)| & = \left|Y (\unipol^s)'(z) \left( \frac{(\unipol^N)'(z)^t-1}{(\unipol^N)'(z)-1} \right) + O(Y^2) \right| \\ & = |Y|. \end{align*} Furthermore, we have \begin{align*} |\unipol^{s+pN}(z)-\unipol^{s}(z)| & = \left|Y (\unipol^s)'(z) \left( \frac{(\unipol^N)'(z)^p-1}{(\unipol^N)'(z)-1} \right) + O(Y^2) \right| \\ & = |Y||p|, \end{align*} since $|Y| < |p|$. Now, \begin{align*} (\unipol^{pN})'(z) & = 2^{pN} z \unipol(z) \cdots \unipol^{pN-1}(z) \\ & = 2^{pN} \left( z \unipol(z) \cdots \unipol^{N-1}(z) \right)^{p}+O(Y) \\ & = (\unipol^N)'(z)^p + O(Y), \end{align*} so $$ |(\unipol^{pN})'(z)-1| = |(\unipol^N)'(z)^p-1+O(Y)| \leq |p|, $$ and hence we may proceed by induction to obtain $$ |\unipol^{s+tN}(z)-\unipol^{s}(z)| = |Y||p|^{e_t}, $$ where $p^{e_t} \mathrel\Vert t$. In particular $\unipol^{tN}(z) \neq z$ for all $t$, and so $z$ is not periodic for $\unipol$. Since $\unipol^n$ maps $D(x,1)$ bijectively onto itself, $D(x,1)$ contains no strictly preperiodic points, and so the proof is complete. \end{proof} \begin{corollary} \label{cor:IndiffNoPrePer} Let $\delta$ be as in Lemma~\ref{lem:SmallCoeffs}. Then there are no preperiodic points for $\unipol$ in the disk $D(0,\delta)$. \end{corollary} \begin{proof} By Proposition~\ref{prop:IndiffNoPer}, the preimage of $D(x,\delta^2)$ under $\unipol^m$ contains no preperiodic points.
This preimage contains a disk $D(0,\varepsilon)$ about 0, which maps onto $D(x,\delta^2)$ under $\unipol^m$. Write $$ \unipol^m(z) = a_0 + a_2z^2 + \cdots + a_{2^m} z^{2^m}, $$ and note that $|a_i| \leq 1$ for $0 \leq i \leq 2^m$. We have that the Weierstrass degree $w$ of $\unipol^m(z)-a_0$ on $D(0,\varepsilon)$ satisfies $w \geq 2$, and so by \cite[Theorem~3.15]{B} we have $$ \varepsilon = \left(\frac{\delta^2}{|a_w|}\right)^{1/w} \geq \delta, $$ as desired. \end{proof} The following is immediate from the above results, noting for the case $K=\Q$ that for a prime $p$ of good reduction, $q=p$ and $|\pi_p|_p = 1/p$. \begin{corollary} \label{cor:nonArchDeltavBound} Let $v$ be a place of good reduction for $\unipol$ lying over a prime $p$, and such that the residue field $k_v$ has $q$ elements. Then $$ - \log \delta_v \leq A_v + B_v \canheight{\unipol}(0) $$ with $$ A_v := (C_2+\log 2) 2^{r_v}, \qquad B_v := C_1 2^{r_v}, $$ where \begin{align*} r_v & = \max \Bigg \{ q \left( \frac{\log \left( \frac{q \log |2|_v}{\log |\pi_v|_v}+1 \right)}{\log 2} + 1 \right) - 1 , \\ & \qquad \qquad \qquad q(q-1) \left( \sqrt{5} \frac{\log |p|_v}{\log |\pi_v|_v} + \frac{1}{2} \right)^{\frac{\log p}{\log \phi}} - 1 \Bigg \}, \end{align*} and $C_1,C_2$ are the constants appearing in Corollary~\ref{cor:CanHeightBound1}. In the case where $K = \Q$ and $v=p$ is a prime of good reduction we can take $$ r_p = \max \left \{ p \left \lfloor \frac{\log \left( p v_p(2)+1 \right)}{\log 2} + 1 \right \rfloor - 1 , p^2(p-1)-1 \right \}, $$ where $v_p(2)$ denotes the $p$-adic order of $2$. \end{corollary} \section{Proof of Theorem~\ref{thm:UniformMain} and related results} \label{sec:Unif} Let us first summarise our results relating the quantities appearing in Theorem~\ref{thm:main} to the canonical height $\canheight{\unipol}(\alpha)$ in the case where $\alpha = 0$ is the critical point of a unicritical polynomial $\unipol(z)=z^2+c$.
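For $K = \Q$, the exponent $r_p$ of Corollary~\ref{cor:nonArchDeltavBound} is elementary to evaluate; the following sketch (function names are ours) computes it, and shows that for odd $p$ the second, indifferent-case term $p^2(p-1)-1$ dominates.

```python
from math import floor, log

def v_p(n, p):
    """p-adic order of a nonzero integer n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def r_p(p):
    """The exponent r_p of Corollary cor:nonArchDeltavBound for K = Q
    and a prime p of good reduction."""
    attracting = p * floor(log(p * v_p(2, p) + 1) / log(2) + 1) - 1
    indifferent = p ** 2 * (p - 1) - 1
    return max(attracting, indifferent)

# For odd p, v_p(2) = 0, so the first term is p - 1 and the second wins;
# for p = 2 both terms equal 3, giving r_2 = 3, r_3 = 17, r_5 = 99, ...
```

In particular $r_2 = 3$, so that $\sum_{p \in S \setminus \{\infty\}} 2^{r_p} = 8$ when $S = \{\infty, 2\}$.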
\begin{theorem} \label{thm:deltavUnifBounds} Let $\unipol(z)=z^2+c$, $c \in K$ and $v \in M^{0,\unipol}_{K,\mathrm{good}}$. Then there exist explicit constants $A_v, B_v > 0$, depending only on $K$ and $v$, such that $$ - \log \delta_{\unipol,v}(0) \leq A_v + B_v \canheight{\unipol}(0). $$ Moreover, let $\varepsilon > 0$, $t \geq 1$. Then there exist explicit constants $A_\infty, B_\infty > 0$, depending only on $K$, $\varepsilon$ and $t$ such that if $v$ is an archimedean place of $K$ such that either $\loccanheight{v}{\unipol}(0) \geq \varepsilon$ or $c$ lies in a hyperbolic component of the $v$-adic Mandelbrot set of period at most $t$, then $$ - \log \delta_{\unipol,v}(0) \leq A_\infty + B_\infty \canheight{\unipol}(0). $$ Finally, there exist constants $A_\kappa, B_\kappa$, and $C_0$ depending only on $K$ such that $$ \frac{4}{\log 2} \canheight{\unipol}(0) \leq \frac{1}{\kappa} \leq A_\kappa + B_\kappa \canheight{\unipol}(0), $$ and $$ \canheight{\unipol}(0) \geq C_0 $$ for all $c \in K$. \end{theorem} \begin{proof} For places of good reduction, see Corollary~\ref{cor:nonArchDeltavBound}. For archimedean places, see Corollary~\ref{cor:ArchOutParamDeltavBound} and Proposition~\ref{prop:DeltavBoundParamIhHypComp} and set $A_\infty := \max \{ A_{\infty, 1}, A_{\infty,2} \}$ and $B_\infty := B_{\infty,2}$. By \eqref{eq:HolderExponent}, for all $v \in M_K$ we have H\"{o}lder exponents $$ \kappa_v \geq \frac{\log 2}{\log 6 + 4 \log^+ |c|_v}, $$ so by \eqref{eq:FundamentalHeightInequality}, \begin{equation} \label{eq:Kap} \kappa := \frac{\log 2}{\log 6 + 4h(c)} \end{equation} satisfies $\kappa \leq \inf_{v \in V} \kappa_v$.
Then $$ \frac{1}{\kappa} = \frac{\log 6}{\log 2} + \frac{4 h(c)}{\log 2} \geq \frac{4}{\log 2} \canheight{\unipol}(0) $$ by \eqref{eq:CanHeightConst} with $\beta = c$, and we can take \begin{equation} \label{eq:aKapbKap} A_\kappa := 1 + \frac{\log 6 + 4 C_2}{\log 2}, \qquad B_\kappa := \frac{4 C_1}{\log 2}, \end{equation} with $C_1$ and $C_2$ as in Corollary~\ref{cor:CanHeightBound1}. To conclude, $C_0$ is found in Corollary~\ref{cor:CanHeightLowBound}. \end{proof} This leads to the following result, which immediately implies Theorem~\ref{thm:UniformMain}. \begin{theorem} \label{thm:UniformMainExplicit} Let $K$ be a number field, $S$ a finite set of places of $K$ containing the archimedean ones, and let $\unipol(z)=z^d+c$, $d \geq 2$, $c \in K$. Suppose that the critical point $0$ is not preperiodic under $\unipol$. Further, let $\varepsilon > 0$, $t \geq 1$ and suppose that for all archimedean places $v \in M_K^\infty$, either $\loccanheight{v}{\unipol}(0) \geq \varepsilon$ or $c$ lies in a hyperbolic component of the Mandelbrot set of period at most $t$. Then \begin{equation*} |\relsupreper{\unipol}{S}{0}| \leq \max \Bigg \{ 29 \log 2, e^{1+\sqrt{2u}+u}, \frac{7+4|V|}{12 \log 2} \left( \frac{2 | \widetilde S|}{C_0} \right)^{\frac{\log 2}{4C_0}} e^{\frac{\log 2}{4} \left( \frac{A_\infty}{C_0} + B_\infty \right)} \Bigg \}, \end{equation*} where $\widetilde S = S \setminus M^{0,\unipol}_{K,\mathrm{bad}}$, $V = M_K^\infty \cup M^{0,\unipol}_{K,\mathrm{bad}}$, $$ u \leq \log(32 \pi (|V|+1) |\widetilde S|) + \log \left( \frac{A_\kappa A}{C_0^2} + \frac{A_\kappa B + A B_\kappa}{C_0} + B_\kappa B \right) - 1, $$ and $$ A := \sum_{v \in S \setminus M_K^\infty} A_v + |M_K^\infty| A_\infty, \quad \qquad B := \sum_{v \in S \setminus M_K^\infty} B_v + |M_K^\infty| B_\infty, $$ with constants as in Theorem~\ref{thm:deltavUnifBounds}. \end{theorem} \begin{proof} Take $\kappa$ as in \eqref{eq:Kap}. 
By \eqref{eq:HolderConstant}, for all $v \in V$, $$ C_v \leq 4 \log 6 + 6 \log^+ |c|_v, $$ and $C_v = 0$ for $v \notin V$, so $$ C := 4|V| \log 6 + 6 h(c) \geq \sum_{v \in V} C_v. $$ We have $$ C\kappa = \frac{4|V| (\log 6)(\log 2) + 6(\log 2) h(c)}{\log 6 + 4h(c)} \leq 4|V| \log 2 + \frac{3}{2} \log 2, $$ so \begin{equation*} \frac{6C\kappa}{|V|+1} \leq \frac{24|V| \log 2}{|V|+1} + \frac{9 \log 2}{|V|+1} < 29 \log 2, \end{equation*} and note also that \begin{equation} \label{eq:CKap2} \frac{|V|+1}{2C\kappa} \leq \frac{|V|+1}{2} \left( \frac{1}{4|V| \log 2} + \frac{2}{3 \log 2} \right) \leq \frac{7+4|V|}{12 \log 2}. \end{equation} Now, referring to the proof of Theorem~\ref{thm:main}, note that if $v$ is a place of bad reduction for $\unipol$, by Remark~\ref{rem:BadReductHeight}, $|z|_v = |c|_v^{1/2}$ for all $z \in \cP = \relsupreper{\unipol}{S}{0}$, and so by Lemma~\ref{lem:BadReductHeight} $$ \Gamma_v = \log \max \{ |\alpha|_v, |c|_v^{1/2} \} = \loccanheight{v}{\unipol}(\alpha). $$ Thus, in \eqref{eq:GamForm} and subsequent equations, we may replace $S$ with $\widetilde S = S \setminus M^{0,\unipol}_{K,\mathrm{bad}}$. In particular, an upper bound for $|\cP|$ is given on the right-hand side of \eqref{eq:PBound}, with $\widetilde S$ in place of $S$. We have \begin{align*} u & = \log \left( \frac{32\pi(|V|+1) \left( \sum_{v \in \widetilde S} |\log \delta_v|^{1/2} \right)^2}{\kappa \canheight{\unipol}(0)^2} \right)-1 \\ & \leq \log(32\pi(|V|+1)|\widetilde S|) - 1 + \\ & \log \left( \frac{(A_\kappa + B_\kappa \canheight{\unipol}(0)) \left( \sum_{v \in \widetilde S \setminus M_K^\infty} (A_v + B_v \canheight{\unipol}(0)) + |M_K^\infty|(A_\infty + B_\infty \canheight{\unipol}(0)) \right)}{\canheight{\unipol}(0)^2} \right), \end{align*} where the inequality follows from Theorem~\ref{thm:deltavUnifBounds} and the generalized mean inequality.
Setting $$ A := \sum_{v \in \widetilde S \setminus M_K^\infty} A_v + |M_K^\infty| A_\infty, \quad \qquad B := \sum_{v \in \widetilde S \setminus M_K^\infty} B_v + |M_K^\infty| B_\infty, $$ we have \begin{align*} u & \leq \log(32 \pi (|V|+1) |\widetilde S|) + \log \left( \frac{A_\kappa A + (A_\kappa B + A B_\kappa) \canheight{\unipol}(0) + B_\kappa B \canheight{\unipol}(0)^2}{ \canheight{\unipol}(0)^2} \right) - 1 \\ & \leq \log(32 \pi (|V|+1) |\widetilde S|) + \log \left( \frac{A_\kappa A}{C_0^2} + \frac{A_\kappa B + A B_\kappa}{C_0} + B_\kappa B \right) - 1, \end{align*} recalling that $\canheight{\unipol}(0) \geq C_0$ for all $c \in K$. Moreover, from \eqref{eq:CKap2} we have \begin{align*} & \frac{|V|+1}{2C\kappa} \left[ \frac{2}{\canheight{\unipol}(0)} \left( \left( \sum_{v \in M_K^\infty} \frac{1}{\delta_v} \right) + |\widetilde S \setminus M_K^\infty| \right) \right]^\kappa \\ & \leq \frac{7+4|V|}{12 \log 2} \left( \frac{2}{\canheight{\unipol}(0)} \left( |M_K^\infty| e^{A_\infty + B_\infty \canheight{\unipol}(0)} + |\widetilde S \setminus M_K^\infty| \right) \right)^{\frac{\log 2}{4 \canheight{\unipol}(0)}} \\ & \leq \frac{7+4|V|}{12 \log 2} \left( \frac{2 | \widetilde S|}{C_0} \right)^{\frac{\log 2}{4C_0}} e^{\frac{\log 2}{4} \left( \frac{A_\infty}{C_0} + B_\infty \right)}, \end{align*} again using Theorem~\ref{thm:deltavUnifBounds} and the fact that $\canheight{\unipol}(0) \geq C_0$. This completes the proof. \end{proof} Moreover, we have an explicit form of Corollary~\ref{cor:IntMain}: \begin{corollary} \label{cor:IntMain2} Let $\unipol(z) = z^2 + c$ with $c \in \Z$ such that $0$ is not preperiodic for $\unipol$, i.e.\ $c \neq 0,-1,-2$. Let $S$ be a finite set of places of $\Q$ containing the archimedean place $\infty$.
Then there are at most $e^{1+\sqrt{2u}+u}$ preperiodic points for $\unipol$ which are $S$-integral relative to $0$, where $$ u \leq \log(64\pi |S|) + \log \Bigg( 16 \Bigg( 5 + \frac{20+5 \log 6}{\log 2} \Bigg) \sum_{p \in S \setminus \{ \infty \}} 2^{r_p} \Bigg)-1, $$ with $$ r_p := \max \left \{ p \left \lfloor \frac{\log \left( p v_p(2)+1 \right)}{\log 2} + 1 \right \rfloor - 1 , p^2(p-1)-1 \right \}, $$ where $v_p(2)$ denotes the $p$-adic order of $2$. \end{corollary} \begin{example} In the case $d = 2$ and $S = \{ \infty, 2 \}$, the above gives an upper bound of $451287434$ for the number of preperiodic points for a map of the form $\unipol(z)=z^2+c$, $c \in \Z$, which are $\{ \infty, 2 \}$-integral with respect to 0. \end{example} \begin{proof} In this case, in Corollary~\ref{cor:CanHeightBound1} we can take $C_1 = 4$ and $C_2 = 0$, and in Corollary~\ref{cor:CanHeightLowBound} we can take $C_0 = 1/4$. Thus, looking at \eqref{eq:aKapbKap}, we have \begin{align*} A_\kappa & = 1 + \frac{\log 6}{\log 2}, \\ B_\kappa & = \frac{16}{\log 2}. \end{align*} Now, all integers $c$ such that $0$ is not preperiodic for $\unipol$ lie outside the Mandelbrot set, so by Lemma~\ref{lem:JuliaSetDistance}, we can take $A_\infty = \log 2$ and $B_\infty = 0$. Thus, from Corollary~\ref{cor:nonArchDeltavBound}, we have \begin{align*} A & = \sum_{p \in S \setminus \{ \infty \}} A_p + A_\infty = \sum_{p \in S \setminus \{ \infty \} } (2^{r_p} \log 2) + \log 2, \\ B & = \sum_{p \in S \setminus \{ \infty \}} B_p + B_\infty = \sum_{p \in S \setminus \{ \infty \}} 2^{r_p+2} \geq A. \end{align*} The result follows from plugging these values into Theorem~\ref{thm:UniformMainExplicit}, and noting that $e^{1+\sqrt{2u}+u}$ attains the maximum of the bound therein.
\end{proof} \section{$S$-units in dynamical sequences} \label{sec:Sunit} To prove Theorem~\ref{thm:SunitBound}, we first observe that $S$-units in the sequence $\disunit{\pol}{S}{\alpha} = \{ \pol^n(\alpha)-\pol^m(\alpha) \}_{n > m \geq 0}$ correspond to $S$-integral (with respect to $\alpha$) preperiodic points of corresponding (pre)period. \begin{lemma} \label{lem:Suint} Let $K$ be a number field, let $\alpha \in K$, let $\pol \in K[z]$ be a polynomial of degree $d \geq 2$, and let $S$ be a finite set of places of $K$, containing all the archimedean ones and all the places of bad reduction for $\pol$. Then \begin{align*} |\relsupreper{\pol}{S}{\alpha}| \geq \big| \big \{ & \beta \in \overline K : \pol^n(\beta)-\pol^m(\beta) = 0 \text{ for some } n > m \geq 0 \\ & \text{such that } \pol^n(\alpha) - \pol^m(\alpha) \in \cO_S^* \big \} \big|. \end{align*} \end{lemma} \begin{proof} Write $$ \pol(z) = a_0 + a_1 z + \cdots + a_d z^d, \: a_i \in K, $$ and let $v \notin S$. Then, since $\pol$ has good reduction at $v$, $|a_i|_v \leq 1$ for all $i$, and $|a_d|_v = 1$. Hence, if $|\alpha|_v > 1$, then $$ |\pol(\alpha)|_v = |a_0 + a_1 \alpha + \cdots + a_d \alpha^d|_v = |\alpha|_v^d > 1, $$ and so continuing by induction, $|\pol^n(\alpha)|_v = |\alpha|_v^{d^n} > 1$ for all $n \geq 1$. Thus, for all $n > m \geq 0$, $|\pol^n(\alpha) - \pol^m(\alpha)|_v = |\alpha|_v^{d^n} > 1$, and so $\pol^n(\alpha)-\pol^m(\alpha)$ is not an $S$-unit. We hence assume that $|\alpha|_v \leq 1$ for all $v \notin S$. Now, let $n > m \geq 0$, and write \begin{align*} \pol^n(z)-\pol^m(z) & = b_0 + b_1 z + \cdots + b_{d^n} z^{d^n} \\ & = \pol^n(\alpha)-\pol^m(\alpha) + c_1 (z-\alpha) + \cdots + c_{d^n} (z-\alpha)^{d^n}. \end{align*} Note that $b_{d^n} = c_{d^n}$, and for $1 \leq k < d^n$ we have \begin{equation} \label{eq:Sstuff} c_{d^n-k} = b_{d^n-k} + \sum_{i = 1}^{k} (-1)^{i-1} \binom{d^n-k+i}{i} \alpha^i c_{d^n-k+i}. \end{equation} Let $v \notin S$.
Then $|b_i|_v \leq 1$ for all $i$, again using the fact that $\pol$ has good reduction at $v$, and so an easy induction with \eqref{eq:Sstuff} shows that $|c_i|_v \leq 1$ for all $i$. Thus, if $\pol^n(\alpha)-\pol^m(\alpha)$ is an $S$-unit, then for any root $\beta$ of $\pol^n(z)-\pol^m(z)$, we must have $|\beta - \alpha|_v \geq 1$ for all $v \notin S$. Indeed, if $|\beta-\alpha|_v < 1$ for some $v \notin S$, then $$ 0 = |\pol^n(\beta)-\pol^m(\beta)|_v = | \pol^n(\alpha)-\pol^m(\alpha) + c_1 (\beta-\alpha) + \cdots + c_{d^n} (\beta-\alpha)^{d^n} |_v = 1, $$ a contradiction. That is, such $\beta$ are $S$-integral relative to $\alpha$ (as the relevant $K$-embeddings of $\beta$ are also roots of $\pol^n(z)-\pol^m(z)$), and therefore belong to $\relsupreper{\pol}{S}{\alpha}$, as they are of course preperiodic for $\pol$. \end{proof} Hence, to relate $|\relsupreper{\pol}{S}{\alpha}|$ to $|\disunit{\pol}{S}{\alpha} \cap \cO_S^*|$, it will suffice to obtain a lower bound for the number of distinct zeros of $\pol^n-\pol^m$, $n > m \geq 0$. For any complex polynomial $f$, let $\z(f) := \deg \mathrm{rad}(f)$ denote the number of distinct zeros of $f$. Then we have the following. \begin{lemma} \label{lem:distRootBound} Let $K$ be a number field and let $\pol \in K[z]$ be a polynomial of degree $d \geq 2$. Then for all $n > m \geq 0$, we have $\z( \pol^n - \pol^m ) \geq \max \{ 1, n-m-2 \} \max \{ 1, d^{m-1} \} \geq n$. \end{lemma} \begin{proof} First note that by \cite[Theorem~2]{Ba}, $\z(\pol^k-\pol^0) \geq k$ for all $k \geq 1$, since $\pol$ has a cycle of exact period $k$ (with the exception of $k=2$ when $\pol$ is linearly conjugate to $z^2-3/4$, but one can manually check $\z(\pol^2-\pol^0) \geq 2$ in this case). Now, for polynomials $g,h$ with $\z(g) > 1$, \cite[Main~Theorem]{FP} gives $$ \z(g \circ h) \geq \max \{ 1, \z(g)-2 \} \deg h + 1. 
$$ Hence, if $(n,m) \neq (1,0)$ or $\z(\pol-\pol^0) > 1$, the result follows from the above with $g = \pol^{n-m}-\pol^0$, $h = \pol^m$. Otherwise, take $g = \pol^2-\pol$, $h = \pol^{m-1}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:SunitBound}] If $\disunit{\pol}{S}{\alpha} \cap \cO_S^*$ is non-empty, it must contain an $S$-unit of the form $\pol^n(\alpha)-\pol^m(\alpha)$ with $$ n \geq \sqrt{2 |\disunit{\pol}{S}{\alpha} \cap \cO_S^*| + \frac{1}{4}} - \frac{1}{2}. $$ Now, from Lemma~\ref{lem:Suint} and Lemma~\ref{lem:distRootBound}, we have $$ |\relsupreper{\pol}{S}{\alpha}| \geq \z(\pol^n-\pol^m) \geq n \geq \sqrt{2 |\disunit{\pol}{S}{\alpha} \cap \cO_S^*| + \frac{1}{4}} - \frac{1}{2}, $$ and rearranging completes the proof. \end{proof} On the other hand, it is worth noting that assuming the $abc$-conjecture, \cite[Corollary~1.6]{GNT15} can be adapted to number fields (see also \cite[Theorem~1.2]{GNT}, which assumes Vojta's conjecture): Given a polynomial $\unipol \in K[z]$ which is not linearly conjugate over $K$ to a polynomial of the form $z^d+c$, and a non-preperiodic point $\alpha$ for $\unipol$, there exists an effectively computable finite set $\cZ$ depending only on $K, \unipol, |S|$ (enlarging $S$ here to contain all places of bad reduction for $\unipol$) and the canonical height $\canheight{\unipol}(\alpha)$ of $\alpha$ with respect to $\unipol$, such that for every $(m,n) \in (\Z_{\geq 0} \times \Z^+) \setminus \cZ$, there is a place $v \notin S$ such that $\alpha$ has \emph{preperiodicity portrait} $(m,n)$ for $\unipol$ modulo $v$. That is, $|\unipol^{m+n}(\alpha)-\unipol^m(\alpha)|_v < 1$, while $|\unipol^{m+k}(\alpha)-\unipol^m(\alpha)|_v, |\unipol^{j+\ell}(\alpha)-\unipol^j(\alpha)|_v \geq 1$ for $1 \leq k < n$, $0 \leq j < m$ and $\ell \geq 1$.
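In the simplest situation, $K = \Q$ with $\alpha, c \in \Z$ and $v = p$ a prime of good reduction, the preperiodicity portrait of $\alpha$ modulo $p$ is just the tail length and cycle length of the orbit of $\alpha$ under $z \mapsto z^2+c$ in $\mathbb{F}_p$, so the conclusion above is easy to explore numerically; a minimal sketch (function names are ours):

```python
def portrait_mod_p(c, alpha, p):
    """Preperiodicity portrait (m, n) of alpha for z^2 + c modulo p:
    m is the tail length and n the cycle length of the residue orbit."""
    seen = {}
    z, i = alpha % p, 0
    while z not in seen:
        seen[z] = i
        z = (z * z + c) % p
        i += 1
    m = seen[z]
    return m, i - m

def primes_realising(c, alpha, m, n, bound):
    """Primes p < bound at which alpha has portrait (m, n) for z^2 + c."""
    is_prime = lambda k: k > 1 and all(k % d for d in range(2, int(k**0.5) + 1))
    return [p for p in range(2, bound) if is_prime(p)
            and portrait_mod_p(c, alpha, p) == (m, n)]

# For z^2 + 1 and alpha = 2, the orbit modulo 7 is 2 -> 5 -> 5 -> ...,
# so alpha has portrait (1, 1) at p = 7.
```

The result quoted above asserts (under $abc$) that, outside an effectively computable exceptional set of pairs $(m,n)$, such a prime can always be found.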
Looking at the Newton polygon of $\unipol^{m+n}-\unipol^m$ expanded about $\alpha$, we see this implies that for $(m,n) \notin \cZ$ the \emph{generalised dynatomic polynomial} $$ \Phi_{\unipol, m,n}(z) := \frac{\Phi_{\unipol,n}(\unipol^m(z))}{\Phi_{\unipol,n}(\unipol^{m-1}(z))} $$ (where $\Phi_{\unipol,n}$ denotes the \emph{dynatomic polynomial} $$ \Phi_{\unipol,n}(z) := \prod_{d \mid n} \left( \unipol^d(z)-z \right)^{\mu(n/d)}, $$ and here $\mu$ is the M\"{o}bius function) has a root $\beta$ satisfying $|\alpha-\beta|_v < 1$. Thus, if $\Phi_{\unipol,m,n}$ is irreducible, no root of $\Phi_{\unipol,m,n}$ is $S$-integral relative to $\alpha$. For fixed $\unipol$, the pairs $(m,n)$ such that $\Phi_{\unipol,m,n}$ is reducible appear, numerically, to be isolated, so perhaps it is the case that $\Phi_{\unipol,m,n}$ is irreducible for all sufficiently large $n$. This would prove Conjecture~\ref{conj:Ih} (assuming $abc$), for non-unicritical polynomials, even without the assumption of $\alpha$ being totally Fatou, but this irreducibility problem seems itself difficult. \begin{thebibliography}{30} \bibitem{Ba} I. N. Baker, \textit{Fixpoints of polynomials and rational functions}, J. London Math. Soc. \textbf{39} (1964), 615-622. \bibitem{BIR} M. Baker, S. Ih and R. Rumely, \textit{A finiteness property of torsion points}, Algebra Number Theory \textbf{2} (2008), no. 2, 217-248. \bibitem{BR1} M. Baker and R. Rumely, \textit{Equidistribution of small points, rational dynamics, and potential theory}, Ann. Inst. Fourier (Grenoble) \textbf{56} (2006), no. 3, 625-688. \bibitem{BR} M. Baker and R. Rumely, \textit{Potential theory and dynamics on the Berkovich projective line}, American Mathematical Society, 2010. \bibitem{B} R. Benedetto, \textit{Dynamics in one non-archimedean variable}, American Mathematical Society, 2019. \bibitem{BD} F. Berteloot and T.-C.
Dinh, \textit{The Mandelbrot set is the shadow of a Julia set}, Discrete and Continuous Dynamical Systems \textbf{40} (2020), no. 12, 6611-6633. \bibitem{BG} E. Bombieri and W. Gubler, \textit{Heights in diophantine geometry}, Cambridge University Press, 2006. \bibitem{Br} H. Brolin, \textit{Invariant sets under iteration of rational functions}, Arkiv f\"{o}r Matematik \textbf{6} (1966). \bibitem{CG} L. Carleson and T. W. Gamelin, \textit{Complex Dynamics}, Springer, New York, 1993. \bibitem{C} I. Chatzigeorgiou, \textit{Bounds on the Lambert function and their application to the outage analysis of user cooperation}, IEEE Communications Letters \textbf{17} (2013), no. 8, 1505-1508. \bibitem{DH} A. Douady and J. H. Hubbard, \textit{Exploring the Mandelbrot set}, The Orsay notes, \url{http://pi.math.cornell.edu/~hubbard/OrsayEnglish.pdf}. \bibitem{DKY} L. DeMarco, H. Krieger and H. Ye, \textit{Uniform Manin-Mumford for a family of genus 2 curves}, Ann. of Math. (2) \textbf{191} (2020), no. 3, 949-1001. \bibitem{D} A. Dubickas, \textit{Algebraic numbers with bounded degree and bounded Weil height}, Bull. Aust. Math. Soc. \textbf{98} (2018), 212-220. \bibitem{FRL} C. Favre and J. Rivera-Letelier, \textit{Equidistribution quantitative des points de petite hauteur sur la droite projective}, Math. Ann. \textbf{335} (2006), no. 2, 311-361. \bibitem{F} P. Fili, \textit{A metric of mutual energy and unlikely intersections for dynamical systems}, Preprint, arXiv:1708.08403v1 [math.NT]. \bibitem{FP} C. Fuchs and A. Peth\"{o}, \textit{Composite rational functions having a bounded number of zeros and poles}, Proc. Amer. Math. Soc. \textbf{139} (2011), no. 1, 31-38. \bibitem{GI} D. Grant and S. Ih, \textit{Integral division points on curves}, Compositio Math. \textbf{149} (2013), 2011-2035. \bibitem{GNT15} D. Ghioca, K. Nguyen and T. J. Tucker, \textit{Portraits of preperiodic points for rational maps}, Math. Proc. Camb. Phil. Soc. \textbf{159} (2015), 165-186.
\bibitem{GNT} D. Ghioca, K. Nguyen and T. J. Tucker, \textit{Squarefree doubly primitive divisors in dynamical sequences}, Math. Proc. Camb. Phil. Soc. \textbf{164} (2018), no. 3, 551-572. \bibitem{HS} L.-C. Hsia and J. H. Silverman, \textit{A quantitative estimate for quasi-integral points in orbits}, Pacific J. Math. \textbf{249} (2011), no. 2, 321-342. \bibitem{IT} S. Ih and T. Tucker, \textit{A finiteness property for preperiodic points of Chebyshev polynomials}, Int. J. Number Theory \textbf{6} (2010), no. 5, 1011-1025. \bibitem{I} P. Ingram, \textit{Lower bounds on the canonical height associated to the morphism $\varphi(z) = z^d+c$}, Monatsh. Math. \textbf{157} (2009), no. 1, 69-89. \bibitem{I2} P. Ingram, \textit{The critical height is a moduli height}, Duke Math. J. \textbf{167} (2018), no. 7, 1311-1346. \bibitem{K} M. Kosek, \textit{H\"{o}lder exponents of the Green function of planar polynomial Julia sets}, Ann. Mat. Pura Appl. \textbf{193} (2014), 359-368. \bibitem{KPS} T. Krick, L. M. Pardo and M. Sombra, \textit{Sharp estimates for the arithmetic Nullstellensatz}, Duke Math. J. \textbf{109} (2001), no. 3, 521-598. \bibitem{KLS} H. Krieger, A. Levin, Z. Scherr, T. J. Tucker, Y. Yasufuku and M. E. Zieve, \textit{Uniform boundedness of $S$-units in arithmetic dynamics}, Pacific Journal of Mathematics \textbf{274} (2015), no. 1, 97-105. \bibitem{L} M. Lyubich, \textit{Conformal Geometry and Dynamics of Quadratic Polynomials}, Book in preparation, \url{www.math.sunysb.edu/\~mlyubich/book.pdf}. \bibitem{NC} M. Narvaez-Clauss, \textit{Quantitative equidistribution of Galois orbits of points of small height on the algebraic torus}, Ph.D. thesis, Universitat de Barcelona, 2016, available at \url{http://diposit.ub.edu/dspace/bitstream/2445/112532/1/MNC_PhD_THESIS.pdf}. \bibitem{P} C. Petsche, \textit{S-integral preperiodic points for dynamical systems over number fields}, Bull. London Math. Soc. \textbf{40} (2008), no. 5, 749-758. \bibitem{Po} C.
Pommerenke, \textit{Univalent functions, with a chapter on quadratic differentials by Gerd Jensen}, Studia Math. Lehrb\"{u}cher, \textbf{15} (1975), Vandenhoeck and Ruprecht. \bibitem{Ra} M. Raynaud, \textit{Sous-vari\'{e}t\'{e}s d'une vari\'{e}t\'{e} ab\'{e}lienne et points de torsion}, `Arithmetic and geometry', Vol. I, 327-352, Progr. Math. 35, Birkh\"{a}user Boston, Boston, MA, 1983. \bibitem{RL} J. Rivera-Letelier, \textit{Dynamique des fonctions rationnelles sur des corps locaux} (French, with English and French summaries), Ast\'{e}risque \textbf{287} (2003), xv, 147-230. Geometric methods in dynamics. II. \bibitem{RW} R. Rumely and S. Winburn, \textit{The Lipschitz constant of a non-archimedean rational function}, arXiv:1512.01136 [math.DS]. \bibitem{Si} J. H. Silverman, \textit{The arithmetic of dynamical systems}, Springer, 2007. \bibitem{Si3} J. H. Silverman, \textit{Integer points, Diophantine approximation, and iteration of rational maps}, Duke Math. J. \textbf{71} (1993), no. 3, 793-829. \bibitem{Si2} J. H. Silverman, \textit{Moduli Spaces and Arithmetic Dynamics}, volume 30 of \textit{CRM Monograph Series}, AMS, 2012. \bibitem{S} C. M. Stroh, \textit{Julia sets of complex polynomials and their implementation on the computer}, Master's thesis, University of Linz, 1997. \end{thebibliography} \Address \end{document}
2206.14250v1
http://arxiv.org/abs/2206.14250v1
Spectral Analysis of the Kohn Laplacian on Lens Spaces
\documentclass[10pt]{amsart} \usepackage{amsfonts,amsmath,amscd,amssymb,amsthm,mathrsfs,esint} \usepackage{color,fancyhdr,framed,graphicx,verbatim,mathtools} \usepackage[margin=1in]{geometry} \usepackage{tikz} \usepackage{comment} \usetikzlibrary{matrix,arrows,decorations.pathmorphing} \newcommand{\R}[0]{\mathbb{R}} \newcommand{\C}[0]{\mathbb{C}} \newcommand{\N}[0]{\mathbb{N}} \newcommand{\Q}[0]{\mathbb{Q}} \newcommand{\Z}[0]{\mathbb{Z}} \newcommand{\F}[0]{\mathbb{F}} \newcommand{\calH}[0]{\mathcal{H}} \newcommand{\calP}[0]{\mathcal{P}} \newcommand{\tn}[1]{\text{#1}} \DeclareMathOperator{\spn}{span} \DeclareMathOperator{\Sym}{Sym} \DeclareMathOperator{\Skew}{Skew} \DeclareMathOperator{\divides}{div} \newcommand{\lrfl}[1]{\left\lfloor#1\right\rfloor} \newcommand{\lrceil}[1]{\left\lceil#1\right\rceil} \newcommand{\lrp}[1]{\left ( {#1} \right )} \newcommand{\lrb}[1]{\left [ {#1} \right ]} \newcommand{\lrsq}[1]{\left \{ {#1} \right \}} \newcommand{\lrabs}[1]{\left | {#1} \right |} \newcommand{\lrang}[1]{\left\langle #1 \right\rangle} \newtheorem{theorem}{Theorem}[section] \theoremstyle{definition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{answer}[theorem]{Answer} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{problem}[theorem]{Problem} \newtheorem{question}[theorem]{Question} \newtheorem{remark}[theorem]{Remark} \newtheorem{result}[theorem]{Result} \setcounter{theorem}{0} \setcounter{section}{1} \rfoot{Page \thepage \hspace{1pt}} \newtheorem*{proposition*}{Proposition} \title{Spectral Analysis of the Kohn Laplacian on Lens Spaces} \author{Colin Fan} \address[Colin Fan]{Rutgers University--New Brunswick, Department of Mathematics, Piscataway, NJ 08854, USA} \email{[email protected]} \author{Elena Kim} \address[Elena Kim]{Massachusetts Institute 
of Technology, Department of Mathematics, Cambridge, MA 02142, USA} \email{[email protected]} \author{Zoe Plzak} \address[Zoe Plzak]{Department of Mathematics, Boston University, Boston, MA 02215, USA} \email{[email protected]} \author{Ian Shors} \address[Ian Shors]{Department of Mathematics, Harvey Mudd College, Claremont, CA 91711, USA} \email{[email protected]} \author{Samuel Sottile} \address[Samuel Sottile]{Department of Mathematics, Michigan State University, East Lansing, MI 48910, USA} \email{[email protected]} \author{Yunus E. Zeytuncu} \address[Yunus E. Zeytuncu]{University of Michigan--Dearborn, Department of Mathematics and Statistics, Dearborn, MI 48128, USA} \email{[email protected]} \thanks{This work is supported by NSF (DMS-1950102 and DMS-1659203). The work of the second author is supported by the NSF Graduate Research Fellowship (\#1745302) and the work of the last author is also partially supported by a grant from the Simons Foundation (\#353525).} \subjclass[2010]{Primary 32W10; Secondary 32V20} \keywords{Kohn Laplacian, Weyl's Law, Lens Spaces} \renewcommand\qedsymbol{$\blacksquare$} \begin{document} \begin{abstract} We obtain an analog of Weyl's law for the Kohn Laplacian on lens spaces. We also show that two 3-dimensional lens spaces with fundamental groups of equal prime order are isospectral with respect to the Kohn Laplacian if and only if they are CR isometric. \end{abstract} \maketitle \setcounter{section}{-1} \section{Introduction} One of the most celebrated results in spectral geometry is due to Hermann Weyl in 1911: given any bounded domain $\Omega$ in $\mathbb{R}^n$, one can obtain its volume by understanding the eigenvalue distribution of the Laplacian on $\Omega$. This result, usually referred to as Weyl's law, has since been extended to the Laplace-Beltrami operator and other elliptic operators on Riemannian manifolds \cite{Stanton}.
However, as the Kohn Laplacian $\Box_b$ is not elliptic, it is an open problem to find an analog of Weyl's law for the Kohn Laplacian on functions on CR manifolds. Stanton and Tartakoff presented in \cite{Stanton1984TheHE} a version for $q$-forms that are neither functions nor forms of top degree. Motivated by this problem, in \cite{REU2020Weyl}, the authors prove an analog of Weyl's law for the Kohn Laplacian on functions on odd-dimensional spheres. Moreover, they conjecture that their result generalizes to compact strongly pseudoconvex embedded CR manifolds of hypersurface type. In this paper, we first generalize the aforementioned result to lens spaces. In particular, we show that one can hear the order of the fundamental group of a lens space. Moreover, the universal constant obtained in the asymptotic behavior of the eigenvalue counting function for the Kohn Laplacian matches the universal constant computed on spheres, providing further evidence for the conjecture above. The second portion of this paper is inspired by the 1979 result of Ikeda and Yamamoto \cite{ikeda1979}, which shows that if two 3-dimensional lens spaces with fundamental group of order $k$ are isospectral with respect to the standard Laplacian, then they are isometric as Riemannian manifolds. That is, one can hear the shape of a lens space from its standard Laplacian. We investigate the analogous question: does the spectrum of the Kohn Laplacian determine when two lens spaces are CR isometric? We say that two CR manifolds are CR isospectral if the spectra of the Kohn Laplacian on the two manifolds are identical. We suspect that, as in the Riemannian setting, one can hear the shape of a 3-dimensional lens space as a CR manifold from its Kohn Laplacian. As a partial result towards this conjecture, we show that two CR isospectral lens spaces must have fundamental groups of equal order. Moreover, we show that if the order is prime and the dimension is $3$, being CR isospectral is equivalent to being CR isometric.
The organization of this paper is as follows. We first prove the analog of Weyl's law for the Kohn Laplacian on lens spaces. The proof follows from a careful counting of solutions to a system of Diophantine equations and asymptotic analysis of binomial coefficients. Next, we prove the result on isospectral lens spaces by using a chain of equivalences and extending some techniques of Ikeda and Yamamoto. \section{Preliminaries} Let $S^{2n-1}$ denote the unit sphere in $\mathbb{C}^n$. We begin by formally defining lens spaces. \begin{definition}\label{def:lens_space} Let $k$ be a positive integer and let $\ell_1,\ldots, \ell_n$ be integers relatively prime to $k$. Let $\zeta = e^{2\pi i/k}$. Define the map $g: S^{2n-1}\to S^{2n-1}$ by \[g\left(z_1,\ldots, z_n\right) = \left(\zeta^{\ell_1} z_1,\ldots,\zeta^{\ell_n} z_n\right). \] Let $G$ be the cyclic subgroup of the unitary group $U\left(n\right)$ generated by $g$. The \emph{lens space} $L\left(k;\ell_1,\ldots,\ell_n\right)$ is the quotient space $G\setminus S^{2n-1}$. \end{definition} We now consider some preliminary results that are necessary in our analysis of the spectrum of the Kohn Laplacian on lens spaces. First we present Folland's calculation of the spectrum of $\Box_b$ on spheres. \begin{theorem}[\cite{Folland}]\label{thm:spectrum_box_b} The space of square integrable functions on $S^{2n-1}$ yields the following spectral decomposition for $\square_b$, \[L^2\left(S^{2n-1}\right) =\bigoplus_{p,q\geq 0} \calH_{p,q}\left(S^{2n-1}\right),\] where $\calH_{p,q}\left(S^{2n-1}\right)$ denotes the space of harmonic polynomials on the sphere of bidegree $\left(p,q\right)$. Furthermore, $\calH_{p,q}\left(S^{2n-1}\right)$ has corresponding eigenvalue $2q\left(p+n-1\right)$ and \begin{align*} \dim \calH_{p,q} \left(S^{2n-1}\right) =\left( \frac{p+q}{n-1}+1\right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}.
\end{align*} \end{theorem} For the remainder of the paper we will write $\mathcal{H}_{p,q}$ for $\calH_{p,q}\left(S^{2n-1}\right)$. Note that Theorem \ref{thm:spectrum_box_b} allows us to quickly compute the spectrum of $\Box_b$ on $S^{2n-1}$; each eigenvalue is a non-negative even integer and the multiplicity of an eigenvalue $\lambda$ is \[\sum_{2q(p+n-1) = \lambda} \dim\calH_{p,q}.\] It is also important to note that a basis for $\mathcal{H}_{p,q}$ can be computed explicitly. \begin{theorem}[\cite{REU18}] For $\alpha \in \mathbb{N}^n$, define $\left|\alpha\right| = \sum_{j=1}^n\alpha_j$. For $\alpha,\beta \in\mathbb{N}^n$ let \[\overline{D}^{\alpha} = \frac{\partial^{\left|\alpha\right|}}{\partial \overline{z}_1^{\alpha_1}\cdots \partial \overline{z}_n^{\alpha_n}} \quad \text{and} \quad D^{\beta} = \frac{\partial^{\left|\beta\right|}}{\partial z_1^{\beta_1}\cdots \partial z_n^{\beta_n}}. \] The following yields a basis for $\mathcal{H}_{p,q} \left(S^{2n-1}\right)$: \[ \left\{\overline{D}^{\alpha}D^{\beta}|z|^{2-2n} :\ |\alpha|=p,\ |\beta|=q,\ \alpha_1=0\textnormal{ or }\beta_1=0\right\}.\] \end{theorem} Note that $G$ acts naturally on $L^2\left(S^{2n-1}\right)$ by precomposition. In particular, note that each $\calH_{p,q}$ space is invariant under $G$. This yields the following theorem. \begin{theorem} The space of square integrable functions on $L(k;\ell_1,\dots,\ell_n)$ has the following spectral decomposition for $\square_b$, \[L^2\left(L\left(k;\ell_1,\dots,\ell_n\right)\right) = \bigoplus_{p,q\geq 0} \calH_{p,q}\left(L\left(k;\ell_1,\dots,\ell_n\right)\right) \cong \bigoplus_{p,q\geq 0} \calH^G_{p,q},\] where $\calH_{p,q}^G$ is the subspace of $\calH_{p,q}$ consisting of elements invariant under the action of $G$.
That is, \[\calH_{p,q}^G = \left\{f \in \calH_{p,q} : f \circ g = f\right\}.\] \end{theorem} Just as on the sphere, each $\calH_{p,q}^G$ (presuming it is non-trivial) is an eigenspace of $\Box_b$ with eigenvalue $2q\left(p+n-1\right)$. It follows that the multiplicity of an eigenvalue $\lambda$ in the spectrum of $\Box_b$ on the lens space $G \setminus S^{2n-1}$ is given by \[\sum_{2q\left(p+n-1\right) = \lambda} \dim\calH_{p,q}^G,\] where we further restrict $p,q$ such that $\calH_{p,q}^G \neq \left\{0\right\}$. This observation gives the following proposition. \begin{proposition} \label{prop:system} The dimension of $\calH^G_{p,q}$ is equal to the number of solutions $\left(\alpha,\beta\right)$ to the system \begin{align*} \left|\alpha\right| = p, \quad \left|\beta\right| = q,\\ \alpha_1 = 0 \quad \text{or} \quad \beta_1 = 0,\\ \sum_{j = 1}^{n} \ell_j\left(\alpha_j - \beta_j\right) \equiv 0\mod k, \end{align*} where $\alpha=\left(\alpha_1,\ldots,\alpha_n\right)$ and $\beta = \left(\beta_1,\ldots,\beta_n\right)$ are $n$-tuples of nonnegative integers and $\left|\cdot\right|$ denotes the sum of these integers. \end{proposition} \begin{proof} Note that the first two conditions of the system are given by the basis of $\mathcal{H}_{p,q}$. Now let $f_{\alpha,\beta} = \overline{D}^{\alpha}D^{\beta}|z|^{2-2n}$. Recall that $f_{\alpha,\beta} \in \mathcal{H}_{p,q}^G$ if and only if $ f_{\alpha,\beta} \circ g = f_{\alpha,\beta}$. By the chain rule and the fact that $g$ can be thought of as a unitary matrix, \begin{align*} f_{\alpha,\beta} \left(w\right) &= \left(\overline{D}^\alpha D^\beta \left|z\right|^{2- 2n}\right)\big|_{z=w}\\ &= \left(\overline{D}^\alpha D^\beta \left| g z\right|^{2- 2n}\right)\big|_{z=w}\\ &= \zeta^{\sum_{j=1}^n \ell_j \left(\beta_j - \alpha_j\right)} \left(\overline{D}^\alpha D^\beta \left|z\right|^{2 - 2n}\right)\big|_{z=gw}\\ &=\zeta^{\sum_{j=1}^n \ell_j \left(\beta_j - \alpha_j\right)} \left(f_{\alpha,\beta} \circ g\right)\left(w\right).
\end{align*} Since the collection of the $f_{\alpha,\beta}$ comprises a basis for $\calH_{p,q}$ on which $g$ acts diagonally, the dimension of the $G$-invariant subspace of $\calH_{p,q}$ is simply the number of these basis vectors that are fixed by $g$. Thus, $f_{\alpha,\beta}\circ g = f_{\alpha,\beta}$ if and only if the last condition in the proposition holds. \end{proof} \section{Results} We now outline the main results of this paper. For $\lambda>0$, let $N\left(\lambda\right)$ denote the number of positive eigenvalues, including multiplicity, of $\square_b$ on $L^2\left(S^{2n-1}\right)$ that are at most $\lambda$. Similarly, let $N_L\left(\lambda\right)$ denote the number of positive eigenvalues, including multiplicity, of $\square_b$ on $L^2\left(L(k;\ell_1,\ldots,\ell_n)\right)$ that are at most $\lambda$. \begin{theorem}\label{thm:oneoverk} Given a lens space $L\left(k;\ell_1,\ldots,\ell_n\right)$, we have \[\lim_{\lambda \rightarrow\infty}\frac{N_L(\lambda)}{N(\lambda)}=\frac{1}{k}.\] \end{theorem} Note that since the volume of the lens space is $\frac{1}{k}$ times the volume of the sphere, it is not surprising that the eigenvalue counting functions scale similarly. Furthermore, combining this result with the explicit calculations in \cite{REU2020Weyl} yields the following analog of Weyl's law. \begin{corollary} We have \[\lim_{\lambda\to\infty} \frac{N_L\left(\lambda\right)}{\lambda^n} = u_n \frac{ \operatorname{vol}\left(S^{2n-1}\right)}{k} = u_n \operatorname{vol}\left(L\left(k;\ell_1,\ldots,\ell_n\right)\right),\] where $u_n$ is a universal constant depending only on $n$, given by \[u_n = \frac{n - 1}{n \left(2\pi\right)^n \Gamma\left(n + 1\right)} \int_{-\infty}^{\infty} \left(\frac{x}{\sinh x}\right)^n e^{-(n - 2)x}\,dx.\] \end{corollary} As a consequence of Theorem \ref{thm:oneoverk}, the following corollary is immediate.
\begin{corollary} \label{cor:same_k} If the lens spaces $L\left(k;\ell_1, \ldots, \ell_n\right)$ and $L\left(k'; \ell_1', \ldots, \ell_n'\right)$ are CR isospectral, then $k = k'$. \end{corollary} With Corollary \ref{cor:same_k}, we obtain the following result on isospectral quotients of $S^3$. \begin{theorem} \label{thm:isospec} Let $k$ be an odd prime number. Let $L\left(k; \ell_1, \ell_2\right)$ and $L\left(k; \ell_1', \ell_2'\right)$ be the lens spaces generated by the groups $G$ and $G'$, respectively. The following are equivalent. \begin{enumerate} \item $L\left(k; \ell_1, \ell_2\right)$ and $L\left(k; \ell_1', \ell_2'\right)$ are CR isometric. \item $L\left(k; \ell_1, \ell_2\right)$ and $L\left(k; \ell_1', \ell_2'\right)$ are CR isospectral. \item $\dim \calH_{p,q}^G = \dim \calH_{p,q}^{G'}$ for all $p,q\geq 0$. \item There exists an integer $a$ and a permutation $\sigma$ such that $\left(\ell_1',\ell_2'\right) \equiv \left(a\ell_{\sigma\left(1\right)},a\ell_{\sigma\left(2\right)}\right)\mod k.$ \end{enumerate} \end{theorem} \section{Diophantine Equations and Proof of Theorem \ref{thm:oneoverk}} We begin our proof of Theorem \ref{thm:oneoverk} with a series of technical lemmas. \begin{lemma}\label{lem:lim=0} For $p,q \in \mathbb{Z}$, $p,q\geq 0$, define \[a_{p,q}=\binom{ p+n-2}{n-2}\binom{q+n-2}{n-2} \text{ and } b_{p,q}=\left(\frac{p+q}{n-1}+1\right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}.\] The following equality holds, \[\lim_{\lambda \rightarrow \infty} \frac{\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor}a_{p,q}}{\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor}b_{p,q}}=0.\] \end{lemma} \begin{proof} The claim is intuitive due to the fact that $b_{p,q} = \left(\frac{p+q}{n-1}+1\right) a_{p,q}$. 
Let $A_m = \sum_{p=0}^{ m - n + 1 }\sum_{q=1}^{\left\lfloor \frac{m}{p+n-1} \right\rfloor}a_{p,q}$ and $B_m = \sum_{p=0}^{ m - n + 1 }\sum_{q=1}^{\left\lfloor \frac{m}{p+n-1} \right\rfloor}b_{p,q}$. Note that it suffices to show $B_m/A_m\to\infty$ as $m\to\infty$. Fix $R>0$, and note that $B_m/A_m > R$ if and only if \[C_m = \sum_{p=0}^{m- n + 1} \sum_{q=1}^{\left\lfloor \frac{m}{p + n - 1}\right\rfloor} \left(\frac{p + q}{n-1} + 1 - R \right) a_{p,q} > 0.\] In particular, there is a negative part $N_R$ of the above sum contributed by indices where $\frac{p + q}{n-1}+ 1 - R < 0$. But for large enough $m$, this negative part is always dominated by the positive part. By taking $m \geq M + n - 1$ where $M$ is such that $\frac{M+1}{n-1} + 1 - R + N_R > 0$, the term given by $p = m - n + 1$ and $q = 1$ contributes enough to make $C_m$ positive. \end{proof} \subsection{Lower and Upper Bounds} In this section we establish a lower and upper bound that will be used to control the number of solutions to the Diophantine system in Proposition \ref{prop:system}. We state first the lower bound and then the analogous upper bound. \begin{lemma}\label{lem:lower_bound} If $N,m,d \in \mathbb{Z}_{\geq 0}$, $m,d > 0$, then \[ \sum_{r=0}^N\binom{r+n-3}{n-3}\sum_{j=0}^{\left\lfloor \frac{1}{m}(N-r+1)\right\rfloor - 1}\left\lfloor \frac{1}{d}(jm+1)\right \rfloor \geq \frac{1}{md}\sum_{q=0}^{N}\left(\frac{q+n-1}{n-1}-d-\frac{3}{2}m- \left( d + \frac{3}{2} m \right) \frac{n-2}{q + n - 2}\right) \binom{q + n - 2}{n-2}, \] where in the case that the upper index of a summation is $-1$, we take it to be equal to zero. \end{lemma} \begin{lemma}\label{lem:upper_bound} If $N,m,d \in \Z_{\geq 0}$, $m, d > 0$, then \[ \sum_{r=0}^N\binom{r+n-3}{n-3}\sum_{j=1}^{\left\lceil \frac{1}{m}(N-r+1)\right\rceil}\left\lceil \frac{jm}{d}\right\rceil \leq \frac{1}{md}\sum_{q=0}^N\left( \frac{q+n-1}{n-1}+d + \frac{3}{2}m + (m^2+md)\frac{n-2}{q+n-2}\right)\binom{q+n-2}{n-2}. 
\] \end{lemma} We prove only the lower bound as the proof for the upper bound follows similarly. \begin{proof} First, note that for any $M \geq -1$, \[\sum_{j=0}^M\left\lfloor \frac{jm+1}{d} \right\rfloor \geq \sum_{j=0}^M\frac{1}{d}\left(jm-d+1\right) =\frac{1}{d}(M+1)\left(\frac{mM}{2}+1-d\right).\] Thus, it suffices to give the claimed lower bound for the sum \[\sum_{r=0}^N\binom{r+n-3}{n-3}\frac{1}{d}\left\lfloor \frac{N-r+1}{m}\right\rfloor \left( \frac{m\left(\lfloor \frac{N-r+1}{m}\rfloor - 1 \right)}{2} + 1 - d \right).\] We see that for $r \leq N$, \begin{align*} \left\lfloor \frac{N-r+1}{m}\right\rfloor\left(\frac{m\left(\left\lfloor \frac{N-r+1}{m}\right\rfloor -1\right)}{2}+1-d\right) &\geq \frac{1}{m} \left(N - r + 1 - m\right) \left(\frac{N - r + 1}{2} - m + 1 - d\right)\\ &=\frac{1}{m} \left( \frac{\left(N - r + 2\right)^2}{2} - \left(d + \frac{3}{2}m\right)\left(N- r + 2 \right) - \left(1 + m\right)\left(\frac{1}{2} - m - d \right)\right)\\ &\geq \frac{1}{m} \left( \frac{\left(N - r + 2\right)^2}{2} - \left(d + \frac{3}{2}m\right)\left(N- r + 1 \right) - \left(d + \frac{3}{2}m\right) \right)\\ &\geq \frac{1}{m} \left( \binom{N - r + 2}{2} - \left(d + \frac{3}{2}m\right)\left(N- r + 1 \right) - \left(d + \frac{3}{2}m\right) \right). 
\end{align*} Now, the combinatorial identities, \[\sum_{q=0}^N\binom{q+n-1}{n-1}=\binom{N+n}{n}\quad \text{and} \quad \sum_{m=0}^n\binom{m}{j}\binom{n-m}{k-j}=\binom{n+1}{k+1}\text{ for }0 \leq j \leq k \leq n\] imply \[\sum_{r=0}^N\binom{r+n-3}{n-3}\binom{N-r+2}{2}=\sum_{q=0}^N\binom{q+n-1}{n-1}\text{ and }\sum_{r=0}^N\binom{r+n-3}{n-3}\binom{N-r+1}{1}=\sum_{q=0}^N\binom{q+n-2}{n-2}.\] Applying these identities and the lower bound above, the claim follows, \begin{align*} \sum_{r=0}^N\binom{r+n-3}{n-3}\frac{1}{d}\left\lfloor \frac{N-r+1}{m}\right\rfloor &\left( \frac{m\left(\lfloor \frac{N-r+1}{m}\rfloor - 1 \right)}{2} + 1 - d \right) \\ &\geq\sum_{r=0}^N\binom{r+n-3}{n-3}\frac{1}{md}\left( \binom{N - r + 2}{2} - \left(d + \frac{3}{2}m\right)\left(N- r + 1 \right) - \left(d + \frac{3}{2}m\right) \right)\\ &= \frac{1}{md} \sum_{q=0}^N \left( \frac{q + n - 1}{n - 1 } - d - \frac{3}{2} m - \left( d + \frac{3}{2} m \right) \frac{n-2}{q + n - 2}\right) \binom{q + n - 2}{n-2}. \end{align*} \end{proof} \subsection{Bounding the Number of Solutions to the Diophantine System} Given a lens space $L = L\left(k; \ell_1,\ldots, \ell_n\right)$, with eigenvalue counting function $N_L(\lambda)$, note \[N_L\left(2\lambda\right) = \sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor }\dim\mathcal{H}_{p,q}^G\] and \[N\left(2\lambda\right) = \sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor }\dim\mathcal{H}_{p,q} = \sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor } \left(\frac{p+q}{n-1} + 1\right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2},\] where $N\left(\lambda\right)$ is the eigenvalue counting function of the Kohn Laplacian on $S^{2n-1}$. These formulas are due to the fact that the eigenvalues corresponding to $\mathcal{H}_{p,q}$ and $\mathcal{H}_{p,q}^G$ are $2q \left(p + n - 1 \right)$.
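The quantities above can be sanity-checked by brute force for small parameters. The following sketch is our own illustration; the helper names \texttt{compositions}, \texttt{dim\_HG} and \texttt{N\_lens} are not from the paper. It enumerates the solutions $(\alpha,\beta)$ of the system in Proposition \ref{prop:system} to compute $\dim\mathcal{H}_{p,q}^G$, and sums them as in the formula for $N_L(2\lambda)$; taking $k=1$ recovers $\dim\mathcal{H}_{p,q}$ and $N(2\lambda)$.

```python
def compositions(total, n):
    """All n-tuples of nonnegative integers summing to `total` (stars and bars)."""
    if n == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, n - 1):
            yield (first,) + rest

def dim_HG(p, q, k, ells):
    """dim H^G_{p,q}: count pairs (alpha, beta) solving the Diophantine system."""
    count = 0
    for alpha in compositions(p, len(ells)):
        for beta in compositions(q, len(ells)):
            if (alpha[0] == 0 or beta[0] == 0) and \
               sum(l * (a - b) for l, a, b in zip(ells, alpha, beta)) % k == 0:
                count += 1
    return count

def N_lens(two_lam, k, ells):
    """N_L(2*lam): positive eigenvalues 2q(p+n-1) <= 2*lam, with multiplicity."""
    n, lam = len(ells), two_lam // 2
    return sum(dim_HG(p, q, k, ells)
               for p in range(lam - n + 2)
               for q in range(1, lam // (p + n - 1) + 1))
```

For instance, with $k=1$ and $n=2$ this recovers $\dim\mathcal{H}_{p,q} = p+q+1$ on $S^3$, and comparing \texttt{N\_lens} for a lens space against the sphere for growing $\lambda$ illustrates the limiting ratio $\frac{1}{k}$ of Theorem \ref{thm:oneoverk} numerically.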
Recall from Proposition \ref{prop:system}, $\dim\mathcal{H}_{p,q}^G$ is equal to the number of solutions $\left(\alpha,\beta\right)$ to the Diophantine system \[ \sum_{j=1}^{n}\ell_j\left(\alpha_j-\beta_j\right) \equiv 0 \tn{ mod } k,\] where $\alpha_j,\beta_j \geq 0$, $\alpha_1\beta_1 = 0$, and $\left|\alpha\right| = p$ and $\left|\beta\right| = q$. Since our goal is to study $N_L\left(2\lambda\right)$, we can replace the conditions $\left|\alpha\right| = p$ and $\left|\beta\right| = q$ with $0 < \left|\beta\right|\left(\left|\alpha\right|+n-1\right) \leq \lambda$. That is, for fixed $\lambda > 0$, we study the number of solutions $\left(\alpha,\beta\right)$ to the Diophantine equation \begin{align} \sum_{j=1}^{n}\ell_j\left(\alpha_j-\beta_j\right) \equiv 0 \tn{ mod } k, \label{eq:important!} \end{align} where $\alpha_j,\beta_j \geq 0$, $\alpha_1 \beta_1 = 0$, and $ 0 < \left|\beta\right|\left(\left|\alpha\right|+n-1\right) \leq \lambda $. In particular, we establish upper and lower bounds for the number of solutions to the system (\ref{eq:important!}) in the following two lemmas, where they are distinguished by fixing $\alpha_1 = 0$ or $\beta_1 = 0$. We will make use of Lemma \ref{lem:lower_bound} and Lemma \ref{lem:upper_bound} as well as the following fact, \begin{proposition*} Given $a,b,c,d \in \N$, with $a \leq b$, the number of solutions to the system \begin{align*} a &\leq x \leq b\\ x &\equiv c \tn{ mod }d \end{align*} is either $\left\lfloor \frac{b-a+1}{d} \right\rfloor$ or $\left\lceil \frac{b-a+1}{d}\right \rceil$. 
\end{proposition*} \begin{lemma} \label{lem:alpha1=0} For $\lambda>n$, when $\alpha_1=0$, the number of solutions to the system (\ref{eq:important!}) is greater than \[\frac{1}{k}\sum_{p=0}^{\left\lfloor\lambda - n + 1\right\rfloor}\binom{p+n-2}{n-2}\sum_{q=0}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor}\left(\frac{q+n-1}{n-1}-d-\frac{3}{2}m - \left(d + \frac{3}{2} m\right) \frac{n-2}{q + n - 2}\right)\binom{q+n-2}{n-2}\] and less than \[\frac{1}{k}\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\binom{p+n-2}{n-2}\sum_{q=0}^{\left\lfloor \frac \lambda {p+n-1} \right\rfloor}\left( \frac{q+n-1}{n-1}+ d + \frac{3}{2}m + \left(m^2+md\right)\frac{n-2}{q+n-2}\right)\binom{q+n-2}{n-2}\] where $m = \gcd \left(k,\ell_1-\ell_2\right)$ and $d = k/m$. \end{lemma} \begin{lemma} \label{lem:beta1=0} For $\lambda > n$, when $\beta_1=0$, the number of solutions to the system (\ref{eq:important!}) is greater than \[\frac{1}{k}\sum_{q=1}^{\left\lfloor \frac{\lambda}{n-1}\right\rfloor}\binom{q+n-2}{n-2}\sum_{p=0}^{\left\lfloor \frac{\lambda}{q}-n+1 \right\rfloor}\left(\frac{p+n-1}{n-1}-d-\frac{3}{2}m - \left(d + \frac{3}{2}m\right) \frac{n-2}{p + n - 2}\right)\binom{p+n-2}{n-2}\] and less than \[\frac{1}{k}\sum_{q=1}^{\left\lfloor \frac{\lambda}{n-1} \right\rfloor}\binom{q+n-2}{n-2}\sum_{p=0}^{\left\lfloor\frac{\lambda}{q}-n+1\right\rfloor}\left( \frac{p+n-1}{n-1} + d + \frac{3}{2}m + \left(m^2+md\right)\frac{n-2}{p+n-2} \right)\binom{p+n-2}{n-2}\] where $m = \gcd \left(k,\ell_1-\ell_2\right)$ and $d = k/m$. \end{lemma} We prove only Lemma \ref{lem:alpha1=0}, as the proof of Lemma \ref{lem:beta1=0} follows the same argument. \begin{proof} The outline of the argument is as follows. Recall that we want to estimate the number of $\left(\alpha,\beta\right)$ solving the Diophantine system, (\ref{eq:important!}). To do this, we first note that by stars and bars, we can find a number of candidates for $\alpha$. 
Thus, by fixing some $\alpha'$, it suffices to find the number of $\beta$ such that $\left(\alpha',\beta\right)$ is a solution to (\ref{eq:important!}). To reduce the problem further, by another application of stars and bars, we can find a number of values of $\beta_3,\ldots,\beta_n$ that may contribute to solutions of (\ref{eq:important!}). So for fixed $\alpha',\beta_3,\ldots,\beta_n$, by using the above proposition, we estimate the number of $\beta_1$ so that $\left(\alpha',\beta_1,\beta_2,\beta_3,\ldots,\beta_n\right)$ solve (\ref{eq:important!}). Note that $\beta_2$ is determined by $\beta_1,\beta_3,\ldots,\beta_n$. We now proceed with the details. For $p \in \Z_{\geq 0}$, there are $\binom{p+n-2}{n-2}$ values of $\alpha$ where $\alpha_1=0$ and $\left|\alpha\right|=p$. Similarly, for $r \in \Z_{\geq 0}$, there are $\binom{r+n-3}{n-3}$ values for $\beta_3,\ldots,\beta_n$ such that $\sum_{j=3}^n\beta_j=r$. Now we would like to compute bounds on the number of $\beta_1$ that yield a solution to (\ref{eq:important!}) along with $\alpha,\beta_3,\ldots,\beta_n$ for a fixed value of $\left|\alpha\right|=p$ and $\sum_{j=3}^n\beta_j=r$, and fixed $0 \leq p \leq \left\lfloor\lambda-n+1\right\rfloor$. Note that from this information, $q = \left|\beta\right|$ is not fixed and it can take different values as $\beta_1$ changes. However, we may write $\beta_2 = q - r - \beta_1$ and therefore, \[\left(\ell_1-\ell_2\right)\beta_1\equiv \sum_{j=2}^n\ell_j\alpha_j - \ell_2\left(q-r\right) - \sum_{j=3}^n\ell_j\beta_j \tn{ mod } k.\] Since $m = \tn{gcd}\left(k,\ell_1-\ell_2\right)$ and $dm = k$, there exists a $d'$ coprime to $d$ so that $d'm = \ell_1 - \ell_2$. 
It follows that, \[d'm\beta_1 \equiv \sum_{j=2}^n\ell_j\alpha_j - \ell_2\left(q-r\right) - \sum_{j=3}^n\ell_j\beta_j \tn{ mod } md.\] The existence of $\beta_1$ that satisfies this equation is equivalent to $m$ dividing $\sum_{j=2}^n \ell_j\alpha_j -\ell_2\left(q-r\right)-\sum_{j=3}^n \ell_j \beta_j$ as $d$ and $d'$ are coprime. Furthermore, this division condition is equivalent to \begin{equation}\label{eq:q_congruence} q \equiv \ell_2^{-1}\left(\ell_2r+\sum_{j=2}^n\ell_j\alpha_j-\sum_{j=3}^{n}\ell_j\beta_j\right) \tn{ mod } m. \end{equation} Since $q$ is subject to the conditions $r \leq q \leq \left\lfloor\frac{\lambda}{p+n-1}\right\rfloor$, by the above proposition, the number of solutions to (\ref{eq:q_congruence}) is bounded by \[\left\lfloor \frac{1}{m}\left( \left\lfloor \frac{\lambda}{p+n-1}\right\rfloor - r + 1\right) \right\rfloor\text{ and } \left\lceil \frac{1}{m}\left( \left\lfloor \frac{\lambda}{p+n-1}\right\rfloor - r + 1\right) \right\rceil.\] Note that if there is no $\beta_1$ that solves (\ref{eq:important!}) given $\alpha,\beta_3,\ldots,\beta_n$, then the lower bound is zero with no harm. This case corresponds to when the sum $\sum_{j=0}^{\left\lfloor \frac{1}{m} \left(N - r + 1 \right)\right\rfloor - 1} \left\lfloor \frac{1}{d} \left(jm + 1 \right)\right\rfloor$ has its upper index equal to $-1$ in Lemma \ref{lem:lower_bound}. In particular, we continue with our analysis by assuming that there is such a $\beta_1$, and therefore $q$, satisfying (\ref{eq:q_congruence}). Note that if (\ref{eq:q_congruence}) is satisfied, we can divide the equation by $m$ to obtain \begin{equation}\label{eq:b1_congruence} \beta_1 \equiv \left(d'\right)^{-1}\frac{1}{m}\left(\sum_{j=2}^n\ell_j\alpha_j-\ell_2\left(q-r\right)-\sum_{j=3}^n\ell_j\beta_j\right) \tn{ mod } d.
\end{equation} For a fixed value of $q$ which solves (\ref{eq:q_congruence}), since $0 \leq \beta_1 \leq q-r$, we can bound the number of solutions to (\ref{eq:b1_congruence}) by \[\left\lfloor \frac{1}{d}(q-r+1) \right\rfloor \text{ and }\left\lceil \frac{1}{d}(q-r+1) \right\rceil.\] First we look at the lower bound. The values for $q$ which solve (\ref{eq:q_congruence}) are of the form $r+c + jm$, where $0 \leq c < m$ is some integer and $j$ is also an integer ranging from $0$ to the number of solutions of (\ref{eq:q_congruence}) minus one. Note that for all $j$, \[\left\lfloor \frac1d((r+c+jm) - r + 1) \right\rfloor \geq \left\lfloor \frac1d(jm+1) \right\rfloor,\] and therefore a lower bound on the number of solutions to both (\ref{eq:q_congruence}) and (\ref{eq:b1_congruence}) is \[\sum_{j=0}^{\left\lfloor \frac{1}{m}\left( \left\lfloor \frac{\lambda}{p+n-1}\right\rfloor - r + 1\right) \right\rfloor-1}\left\lfloor\frac{1}{d}(jm+1)\right\rfloor.\] So a lower bound on the number of solutions to (\ref{eq:important!}) when $\alpha_1=0$ is \[\sum_{p=0}^{\lfloor\lambda - n + 1\rfloor}\binom{p+n-2}{n-2}\sum_{r=0}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor}\binom{r+n-3}{n-3}\sum_{j=0}^{\left\lfloor \frac{1}{m}\left( \left\lfloor \frac{\lambda}{p+n-1}\right\rfloor - r + 1\right) \right\rfloor-1}\left\lfloor\frac{1}{d}(jm+1)\right\rfloor.\] By Lemma \ref{lem:lower_bound}, the first part of the lemma follows. Now we examine the upper bound. 
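As an aside, the interval-counting fact invoked in both bounds, that among $N+1$ consecutive integers the number lying in a fixed residue class modulo $m$ is between $\left\lfloor \left(N+1\right)/m \right\rfloor$ and $\left\lceil \left(N+1\right)/m \right\rceil$, is easy to check by brute force. A minimal sketch (the helper names are ours, not part of the paper):

```python
# Brute-force sanity check of the floor/ceiling interval-counting bounds.
# `count_in_class` and `bounds_hold` are illustrative helpers, not from the paper.
def count_in_class(c, m, lo, hi):
    """Count integers x with lo <= x <= hi and x congruent to c modulo m."""
    return sum(1 for x in range(lo, hi + 1) if (x - c) % m == 0)

def bounds_hold(m, lo, hi):
    """Check floor(L/m) <= count <= ceil(L/m) for every residue class,
    where L = hi - lo + 1 is the number of integers in the interval."""
    L = hi - lo + 1
    return all(L // m <= count_in_class(c, m, lo, hi) <= -(-L // m)
               for c in range(m))
```

For example, `bounds_hold(7, 5, 30)` confirms the bounds for the interval $[5,30]$ and every residue class modulo $7$.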
Note that for all $j$, \[\left\lceil \frac1d((r+c+jm)-r+1)\right\rceil \leq \left\lceil \frac 1d(j+1)m \right\rceil.\] So an upper bound on the number of solutions to the system when $\alpha_1 = 0$ is \[\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\binom{p+n-2}{n-2}\sum_{r=0}^{\left\lfloor \frac \lambda {p+n-1} \right\rfloor}\binom{r+n-3}{n-3}\sum_{j=1}^{\left\lceil \frac{1}{m}(\lfloor \frac \lambda {p+n-1} \rfloor-r+1) \right\rceil}\left\lceil \frac{jm}{d}\right\rceil.\] By Lemma \ref{lem:upper_bound}, the second part of the lemma follows. \end{proof} \subsection{Proof of Theorem \ref{thm:oneoverk}} We now use Lemma \ref{lem:alpha1=0}, Lemma \ref{lem:beta1=0}, and our reformulation of $N_L\left(2\lambda\right)$ as the number of solutions to equation (\ref{eq:important!}) to prove Theorem \ref{thm:oneoverk}. \begin{proof} To obtain a lower bound on $N_L(2\lambda)$, we split the count into three parts. First, from Lemma \ref{lem:alpha1=0}, the part where $\alpha_1=0$: \begin{align*} \frac{1}{k}\sum_{p=0}^{\lfloor \lambda - n + 1 \rfloor}\sum_{q=0}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor}&\left(\frac{q+n-1}{n-1}-d-\frac{3}{2}m - \left(d + \frac{3}{2} m \right) \frac{n-2}{q + n - 2} \right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}\\ \hspace{-20mm}=& \frac{1}{k}\sum_{p=0}^{\lfloor \lambda - n + 1 \rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor}\left(\frac{q+n-1}{n-1}-d-\frac{3}{2}m - \left(d + \frac{3}{2} m \right) \frac{n-2}{q + n - 2} \right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}\\ &+\frac{1}{k}\sum_{p=0}^{\lfloor\lambda-n+1\rfloor}\left(1-2d-3m \right)\binom{p+n-2}{n-2}.
\end{align*} Next, from Lemma \ref{lem:beta1=0}, the part where $\beta_1=0$: \[\frac{1}{k}\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor}\left( \frac{p+n-1}{n-1} - d -\frac{3}{2}m - \left(d + \frac{3}{2} m \right) \frac{n-2}{p + n - 2}\right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}.\] Note that here we swapped the order of summation from the original statement of Lemma \ref{lem:beta1=0}. Finally, we have the part to be subtracted off, which is given by $\alpha_1=\beta_1=0$. This part is bounded above by a formula from stars and bars: \[\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor}\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}.\] Putting this together, a lower bound on $N_L \left(2\lambda\right)$ is \[ \frac{1}{k}\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor} \left(\frac{p + q}{n - 1} + A +\frac{B}{q + n - 2} + \frac{C}{p + n - 2}\right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2} + \frac{1}{k}\sum_{p=0}^{\lfloor\lambda-n+1\rfloor}D\binom{p+n-2}{n-2},\] where $A,B,C,D$ are constants depending only on $m$, $d$, and $n$. By Lemma \ref{lem:lim=0}, we know \[\lim_{\lambda \rightarrow \infty} \frac{\sum_{p=0}^{\lfloor \lambda - n + 1 \rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor}\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}}{\sum_{p=0}^{\lfloor \lambda - n + 1 \rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor}\left(\frac{p+q}{n-1}+1\right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}}=0.\] Therefore, $\lim_{\lambda\to\infty} \frac{N_L\left(\lambda\right)}{N \left(\lambda\right)}\geq \frac{1}{k}$. The upper bound follows similarly, except there is no need to consider the part where $\alpha_1 = \beta_1 = 0$.
\end{proof} \begin{remark} This proof shows furthermore that $N_L\left(2\lambda\right)$ has the asymptotic behavior \[u_n \operatorname{vol} \left(L\left(k;\ell_1,\ldots,\ell_n\right)\right) + O \left(\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor} \sum_{q = 1}^{\left\lfloor \frac{\lambda}{p + n - 1 } \right\rfloor} \binom{ p + n - 2}{n - 2} \binom{q + n - 2}{n - 2}\right).\] Based on some calculations, we conjecture that this remainder term can be bounded above by $O \left(\lambda^{n - 1} \log \lambda\right)$ but not $O \left(\lambda^{n-1}\right)$. If true, this would be similar to the remainder term on compact Heisenberg manifolds in \cite{strichartz2015}, except that the authors of that paper speculate that their estimate can be improved to $O\left(\lambda^{n}\right)$. \end{remark} \section{Isospectral lens spaces and Proof of Theorem \ref{thm:isospec}} In this section, we provide more details and a proof of Theorem \ref{thm:isospec}. For convenience, we restate it here. \noindent\textbf{Theorem \ref{thm:isospec}.} Let $k$ be an odd prime number. Let $L(k; \ell_1, \ell_2)$ and $L(k; \ell_1', \ell_2')$ be the lens spaces generated by the groups $G$ and $G'$, respectively. The following are equivalent. \begin{enumerate} \item $L(k; \ell_1, \ell_2)$ and $L(k; \ell_1', \ell_2')$ are CR isometric. \item $L(k; \ell_1, \ell_2)$ and $L(k; \ell_1', \ell_2')$ are CR isospectral. \item $\dim \calH_{p,q}^G = \dim \calH_{p,q}^{G'}$ for all $p,q\geq 0$. \item There exists an integer $a$ and a permutation $\sigma$ such that $\left(\ell_1',\ell_2'\right) \equiv \left(a\ell_{\sigma\left(1\right)},a\ell_{\sigma\left(2\right)}\right)\mod k$. \end{enumerate} \subsection{(2) implies (3)} Of the four implications we show, this is the only one that requires $k$ to be an odd prime. In particular, we need this assumption after Lemma \ref{Lem:dimension}. Fix a lens space $L(k;\ell_1,\ell_2)$ and let $G$ be the corresponding group. Let $d= \gcd(k, \ell_1- \ell_2)$.
We begin with some observations about $\dim\calH^G_{p,q}$. \begin{lemma} \label{lem:symmetric} $\dim\calH^G_{p,q} = \dim\calH^G_{q,p}$ for all $p,q$. \end{lemma} This follows from Proposition \ref{prop:system}, observing that $(\alpha,\beta)\leftrightarrow(\beta,\alpha)$ is a one-to-one correspondence between solutions to the system for $\calH^G_{p,q}$ and solutions to the system for $\calH^G_{q,p}$. \begin{lemma} \label{lem:doesnotdivide} If $d$ does not divide $p-q$, then $\dim\calH_{p,q}^G = 0$. \end{lemma} \begin{proof} From Proposition~\ref{prop:system}, we know $\dim\calH_{p,q}^G$ is the number of solutions to \[\ell_1(\alpha_1 - \beta_1) + \ell_2(\alpha_2 - \beta_2) \equiv 0\mod k\] such that $\alpha_1 \beta_1 = 0$, and $\alpha_1 + \alpha_2 = p$, $\beta_1+\beta_2 =q$. Suppose for contradiction $d$ does not divide $p-q$, but a solution exists. We then have \begin{align*} \ell_1\left(\alpha_1 - \beta_1\right) + \ell_2\left(\left(p - \alpha_1\right) - \left(q-\beta_1\right)\right) &\equiv 0 \mod k\\ \left(\ell_1 - \ell_2\right)\left(\alpha_1 - \beta_1\right) &\equiv - \ell_2\left(p-q\right) \mod k. \end{align*} Note that $d$ is coprime to $\ell_2$, since $\ell_2$ is coprime to $k$ and $d$ divides $k$. Since $d$ does not divide $p-q$, we see that $d$ does not divide the right hand side of the equation. However, $d$ divides $\ell_1 - \ell_2$, which is a contradiction, meaning no such solution exists. \end{proof} We now relate the dimensions of invariant subspaces. \begin{lemma}If $d$ divides $p-q$, then \[\dim \calH^G_{p+k,q} =\dim \calH^G_{p,q+k} = d+ \dim\calH^G_{p,q}. \] \end{lemma} \begin{proof} From Proposition~\ref{prop:system}, $\dim\calH_{p,q}^G$ is the number of solutions to \[\ell_1\left(\alpha_1 - \beta_1\right) + \ell_2\left(\alpha_2 - \beta_2\right) \equiv 0\mod k\] such that $\alpha_1 + \alpha_2 = p$, $\beta_1+\beta_2 =q$, and either $\alpha_1 = 0$ or $\beta_1 = 0$.
In the $\alpha_1 = 0$ case, this reduces to \begin{equation} \label{eq:alpha} \ell_1\left(-\beta_1\right) + \ell_2\left(p - q +\beta_1\right) \equiv 0\mod k\quad\text{ where } 0\leq \beta_1 \leq q. \end{equation} Similarly, in the $\beta_1 = 0$ case, we obtain \begin{equation} \label{eq:beta} \ell_1\left(\alpha_1\right) + \ell_2\left(p - q -\alpha_1\right) \equiv 0\mod k\quad\text{ where } 0\leq \alpha_1 \leq p. \end{equation} By inclusion-exclusion, $\dim\calH_{p,q}^G$ is the number of solutions to (\ref{eq:alpha}) plus the number of solutions to (\ref{eq:beta}) minus the number of solutions where $\alpha_1 = \beta_1 = 0$. Note that in the $\alpha_1 = \beta_1 = 0$ case, the Diophantine system reduces to $\ell_2\left(p-q\right) \equiv 0\mod k$, which has one solution if $k$ divides $p-q$, and zero solutions otherwise. For ease of notation, define the following indicator function for divisibility \[\divides\left(k, a\right) = \begin{cases} 1 & k \text{ divides } a\\ 0 & \text{otherwise} \end{cases}.\] Let $m_{p,q}$ denote the number of solutions to (\ref{eq:alpha}) and $n_{p,q}$ denote the number of solutions to (\ref{eq:beta}). Using this notation, the above inclusion-exclusion argument can be written as \[\dim\calH_{p,q}^G = m_{p,q} + n_{p,q} - \divides\left(k, p-q\right).\] Note that $m_{p+k,q}$ is the number of solutions to $\ell_1\left(-\beta_1\right) + \ell_2\left(p+k-q+\beta_1\right) \equiv 0\mod k$ for $0\leq \beta_1\leq q$. This equation is equivalent to $\ell_1\left(-\beta_1\right) + \ell_2\left(p-q+\beta_1\right) \equiv 0\mod k$, which implies $m_{p+k,q} = m_{p,q}$. By the same logic, we can see that $n_{p,q+k} = n_{p,q}$. We now show $m_{p,q+k} = m_{p,q}+d$ and $n_{p+k,q} = n_{p,q} + d$. Note that $m_{p,q+k} - m_{p,q}$ is the number of solutions to $\ell_1\left(-\beta_1\right) + \ell_2\left(p-q+\beta_1\right) \equiv 0\mod k$ where $q+1 \leq \beta_1\leq q+k$. Reducing modulo $k$, this is equal to the number of solutions where $1\leq \beta_1\leq k$.
The equivalence may be rewritten as \[\left(\ell_1 - \ell_2\right)\beta_1 \equiv \ell_2\left(p-q\right)\mod k, \] where $1 \leq \beta_1 \leq k$. Since $d$ divides $\ell_1 - \ell_2$ and $p-q$, we obtain \[\left(\frac{\ell_1-\ell_2}{d}\right)\beta_1 \equiv \ell_2\left(\frac{p-q}{d}\right)\mod{k/d}.\] Since $d = \gcd(k,\ell_1-\ell_2)$, the above equivalence has a unique solution for $1\leq \beta_1\leq k/d$. Thus, there are precisely $d$ solutions in $1\leq \beta_1 \leq k$. Hence, $m_{p,q+k} = m_{p,q} + d$. Similarly, we see that $n_{p+k,q} = n_{p,q} + d$. Combining these facts, we have \[ \dim\calH_{p+k,q}^G = m_{p+k,q} + n_{p+k,q} - \divides\left(k, p-q\right) = m_{p,q} + n_{p,q} + d - \divides\left(k, p-q\right) = \dim\calH_{p,q}^G +d\] and \[ \dim\calH_{p,q+k}^G = m_{p,q+k} + n_{p,q+k} - \divides\left(k, p-q\right)= m_{p,q}+d + n_{p,q} - \divides\left(k, p-q\right)= \dim\calH_{p,q}^G +d\] completing the proof. \end{proof} In what follows, it will be useful to distinguish between ``mod" as an equivalence relation and ``mod" as a binary operator. To that end, we will continue to use ``$\mod k$" to denote the equivalence relation modulo $k$, and we will use ``$\%$" to denote the binary modulo operator. That is to say, $p\% k$ denotes the smallest non-negative integer equivalent to $p$ modulo $k$. For example, $11\%5 = 1$, $12\%4 = 0$, and $1\%7 = 1$. Using this notation, the previous lemma implies \begin{lemma}[Dimension Lemma]\label{Lem:dimension} If $d$ divides $p-q$, then \[\dim \calH^G_{p,q} = \dim\calH^G_{p\% k,q\% k} + d\left(\left\lfloor\frac{p}{k}\right\rfloor + \left\lfloor\frac{q}{k}\right\rfloor\right). \] \end{lemma} Now, let $L(k;\ell_1',\ell_2')$ be another lens space generated by the group $G'$, and let $d' = \gcd(k,\ell_1' - \ell_2')$. \begin{proposition}\label{Propd=d'} Consider any two lens spaces $L\left(k;\ell_1,\ell_2\right)$ and $L\left(k;\ell_1',\ell_2'\right)$ where $d = 2$ and $d' = 4$ do not occur simultaneously.
If the lens spaces are CR isospectral, then $d = d'$. \label{prop:gcd} \end{proposition} \begin{remark} We suspect that the above statement for lens spaces with $d = 2$ and $d' = 4$ is also true. This case seems to require a more subtle analysis than the growth rate arguments considered below. However, since in the rest of this section we assume $k$ is prime, this restriction on $d$ and $d'$ does not impact the rest of the paper. \end{remark} \begin{proof} We show that if $d \neq d'$, then the spectra differ. {\bf Case 1:} Assume $2 < d < d'$. Let $r$ be a prime to be determined such that $r \equiv 1 \mod d d'$. Let $\lambda = 4r$. It follows that the only $\left(p,q\right)$ so that $ 2 q \left( p + 1 \right) = \lambda$ are $\left(2r-1,1\right)$, $\left(r-1,2\right)$, $\left(1,r\right)$, and $\left(0,2r\right)$. Since $r\equiv 1 \mod d$ and $r\equiv 1 \mod d'$, by Lemma \ref{lem:doesnotdivide} \begin{align*} \operatorname{mult}_G \left(\lambda\right) &= \dim \mathcal{H}^G_{2r - 1,1} + \dim \mathcal{H}^G_{1,r}\\ \operatorname{mult}_{G'} \left(\lambda\right) &= \dim \mathcal{H}^{G'}_{2r - 1,1} + \dim \mathcal{H}^{G'}_{1,r}. \end{align*} By the dimension lemma, \begin{align*} \operatorname{mult}_G \left(\lambda\right) &= \dim \mathcal{H}^G_{2r - 1\% k,1} + \dim \mathcal{H}^G_{1,r\% k} + d \left(\left\lfloor \frac{2r - 1}{k}\right\rfloor + \left\lfloor \frac{r}{k}\right\rfloor\right)\\ \operatorname{mult}_{G'} \left(\lambda\right) &= \dim \mathcal{H}^{G'}_{2r - 1\% k,1} + \dim \mathcal{H}^{G'}_{1,r\% k}+ d' \left(\left\lfloor \frac{2r - 1}{k}\right\rfloor + \left\lfloor \frac{r}{k}\right\rfloor\right). \end{align*} Since $d < d'$, the multiplicity will differ for sufficiently large $r$. {\bf Case 2:} Assume $2 = d < d'$ where $d' \neq 4$. Let $r$ be a prime to be determined such that $r \equiv 1 \mod d'$. Let $\lambda = 4r$. 
By Lemma \ref{lem:doesnotdivide}, we see that, \begin{align*} \operatorname{mult}_G \left(\lambda\right) &= \dim \mathcal{H}_{1,r}^G + \dim \mathcal{H}_{2r-1,1}^G + \dim \mathcal{H}_{0,2r}^G + \dim \mathcal{H}_{r-1,2}^G\\ \operatorname{mult}_{G'} \left(\lambda\right) &= \dim \mathcal{H}_{1,r}^{G'} + \dim \mathcal{H}_{2r-1,1}^{G'}. \end{align*} By the dimension lemma, \begin{align*} \operatorname{mult}_{G} \left(\lambda\right) &= \dim \mathcal{H}_{1,r\% k}^G + \dim \mathcal{H}_{2r-1 \% k, 1}^G + \dim \mathcal{H}_{0,2r\% k}^G + \dim\mathcal{H}_{r - 1 \% k, 2}^G + 2 \left(\left\lfloor \frac{r}{k}\right\rfloor + \left\lfloor \frac{2r - 1}{k}\right\rfloor + \left\lfloor \frac{2r}{k}\right\rfloor + \left\lfloor \frac{r - 1}{k}\right\rfloor\right)\\ \operatorname{mult}_{G'} \left(\lambda\right) &= \dim \mathcal{H}_{1,r\% k}^{G'} + \dim \mathcal{H}_{2r-1 \% k, 1}^{G'} + d' \left(\left\lfloor \frac{r}{k}\right\rfloor + \left\lfloor \frac{2r - 1}{k}\right\rfloor\right). \end{align*} This implies $\operatorname{mult}_G \left(\lambda\right)$ has growth rate $\frac{12}{k}r$ while $\operatorname{mult}_{G'} \left(\lambda\right)$ has growth rate $\frac{3d'}{k}r$. Since $d' \neq 4$, these growth rates are different, so for large enough $r$ the multiplicities will differ. {\bf Case 3:} Assume $1 = d < d'$. Let $r$ be any prime greater than $k$ so that $r\equiv 1 \mod d'$. Let $\lambda = 2r$. It follows that the only $\left(p,q\right)$ so that $2q \left(p+1\right) = \lambda$ are $\left(r-1,1\right)$ and $\left(0,r\right)$. Since $d'$ does not divide $r-2$ or $r$, $\calH_{r-1,1}^{G'}$ and $\calH_{0,r}^{G'}$ are both trivial by Lemma \ref{lem:doesnotdivide}. This implies $\operatorname{mult}_{G'}\left(\lambda\right) = 0$. Since $r > k$, by the dimension lemma, \[\operatorname{mult}_G\left(\lambda\right) = \dim\calH_{r-1,1}^{G} + \dim\calH_{0,r}^{G} = \dim\calH_{r-1\% k,1}^{G} + \dim\calH_{0,r\% k}^{G} + \lrfl{\frac {r-1}k} +\lrfl{\frac r k}> 0,\] which implies the multiplicities differ.
\end{proof} Recall that in this section we wish to show that (2) implies (3): if two CR isospectral lens spaces are given by groups $G$ and $G'$, then $\dim \mathcal{H}_{p,q}^G = \dim \mathcal{H}_{p,q}^{G'}$ for all $p,q\geq 0$. For each $p, q \geq 0$, define \[x_{p,q} = \dim \calH^G_{p,q}-\dim \calH^{G'}_{p,q}.\] By the dimension lemma and Proposition \ref{prop:gcd}, we have \[ x_{p,q} =\dim\calH^G_{p\% k,q\% k} + d\left(\lrfl{\frac{p}{k}} + \lrfl{\frac{q}{k}}\right)-\left(\dim\calH^{G'}_{p\% k,q\% k} + d\left(\lrfl{\frac{p}{k}} + \lrfl{\frac{q}{k}}\right)\right)= x_{p\% k,q\% k}.\] Thus, it suffices to show $x_{p,q} = 0$ for $0\leq p,q\leq k-1$. Define \[X = \begin{pmatrix} x_{0,0} & x_{0,1} & \cdots & x_{0,k-1} \\ x_{1,0} & x_{1,1} & \cdots & x_{1,k-1} \\ \vdots &\vdots & \ddots & \vdots \\ x_{k-1,0} & x_{k-1,1} & \cdots & x_{k-1,k-1}\\ \end{pmatrix}.\] By our observation above, $X = 0$ implies $\dim \calH^G_{p,q} = \dim \calH^{G'}_{p,q}$ for all $p,q\geq 0$. From Lemma \ref{lem:symmetric}, we see that $X$ is symmetric. Assuming our two lens spaces are CR isospectral, for every $\lambda \in 2\N$, we have $$\sum_{2q(p+1) = \lambda}\dim \calH^G_{p,q} = \sum_{2q(p+1) = \lambda}\dim \calH^{G'}_{p,q}.$$ This implies that \begin{align*} 0=\sum_{2q(p+1) = \lambda}\left(\dim \calH^G_{p,q} -\dim \calH^{G'}_{p,q}\right) =\sum_{2q(p+1) = \lambda}x_{p,q} =\sum_{2q(p+1) = \lambda}x_{p\% k,q\% k}. \end{align*} For integers $a,b$ so that $0\leq a,b\leq k - 1$, define \[c^\lambda_{a,b} = \#\left\{(p,q)\in \mathbb{Z}_{\geq 0}\times \mathbb{Z}_{\geq 0} : p \equiv a\mod k,\, q \equiv b\mod k,\, 2q \left(p+1\right) = \lambda\right\}.\] Note that $c^\lambda_{a,b}$ is the number of times $x_{a,b}$ appears in the above sum. Therefore, we have \[\sum_{0 \leq a,b \leq k-1} c^\lambda_{a,b} x_{a,b} = 0.
\] For each $\lambda$, define the $k \times k$ matrix $C^{\lambda}$: \[C^\lambda = \begin{pmatrix} c_{0,0}^\lambda & c_{0,1}^\lambda & \cdots & c_{0,k-1}^\lambda \\ c_{1,0}^\lambda & c_{1,1}^\lambda & \cdots & c_{1,k-1}^\lambda \\ \vdots & \vdots & \ddots & \vdots \\ c_{k-1,0}^\lambda & c_{k-1,1}^\lambda & \cdots & c_{k-1,k-1}^\lambda\\ \end{pmatrix}.\] Then, we may write the above equations more compactly as \[C^\lambda \cdot X = 0\text{ for all $\lambda \in 2\N$},\] where $\cdot$ denotes the Frobenius inner product. We will show that the only symmetric matrix $X$ that satisfies all of these equations is the zero matrix. \begin{definition} Define the operator $T$ on $k\times k$ matrices, which moves the top row to the bottom and shifts every other row up by one: \[T: \begin{pmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,k-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,k-1} \\ \vdots &\vdots & \ddots & \vdots\\ a_{k-1,0} & a_{k-1,1} & \cdots & a_{k-1,k-1} \\ \end{pmatrix} \mapsto \begin{pmatrix} a_{1,0} & a_{1,1} & \cdots & a_{1,k-1} \\ a_{2,0} & a_{2,1} & \cdots & a_{2,k-1} \\ \vdots &\vdots & \ddots & \vdots\\ a_{k-1,0} & a_{k-1,1} & \cdots & a_{k-1,k-1} \\ a_{0,0} & a_{0,1} & \cdots & a_{0,k-1} \\ \end{pmatrix}. \] \end{definition} Note that $T^{-1}$ moves the bottom row to the top and shifts every other row down by one: \[T^{-1}: \begin{pmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,k-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,k-1} \\ \vdots &\vdots & \ddots & \vdots\\ a_{k-1,0} & a_{k-1,1} & \cdots & a_{k-1,k-1} \\ \end{pmatrix} \mapsto \begin{pmatrix} a_{k-1,0} & a_{k-1,1} & \cdots & a_{k-1,k-1} \\ a_{0,0} & a_{0,1} & \cdots & a_{0,k-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,k-1} \\ \vdots &\vdots & \ddots & \vdots\\ a_{k-2,0} & a_{k-2,1} & \cdots & a_{k-2,k-1} \\ \end{pmatrix}. \] We now use $T$ to prove the following theorem about $\spn_\R\left\{C^\lambda\right\}_{\lambda\in2\N}$. \begin{theorem} Let $k$ be a prime number.
It follows that, \[\spn_\R\left\{C^\lambda\right\}_{\lambda\in2\N} = T\left(\Sym_k\right)\] where $\Sym_k$ denotes the space of real $k\times k$ symmetric matrices. \end{theorem} \begin{proof} We first show $\spn_\R\left\{C^\lambda\right\}_{\lambda\in2\N} \subseteq T\left(\Sym_k\right)$. Fix $\lambda\in 2\mathbb{N}$ and let \[S^\lambda_{a,b} = \left\{\left(p,q\right)\in \mathbb{Z}_{\geq 0} \times \mathbb{Z}_{\geq 0}: p \equiv a \mod{k},\, q \equiv b \mod{k},\, 2q \left(p+1\right) = \lambda\right\}.\] Note that $c^\lambda_{a,b} = \#S^\lambda_{a,b}$. The map \[(p,q) \mapsto (q-1,p+1)\] is a bijection from $S^\lambda_{a,b}$ to $S^{\lambda}_{b-1,a+1}$. Therefore $c^\lambda_{a,b} = c^\lambda_{b-1,a+1}$ for all $a,b$. Note that \[T^{-1}\left(C^\lambda\right) = \begin{pmatrix} c_{k-1,0}^{\lambda} & c_{k-1,1}^{\lambda} & c_{k-1,2}^{\lambda} & \cdots & c_{k-1,k-1}^{\lambda} \\ c_{0,0}^{\lambda} & c_{0,1}^{\lambda} & c_{0,2}^{\lambda} & \cdots & c_{0,k-1}^{\lambda} \\ c_{1,0}^{\lambda} & c_{1,1}^{\lambda} & c_{1,2}^{\lambda} &\cdots & c_{1,k-1}^{\lambda} \\ \vdots & \vdots &\vdots & \ddots & \vdots \\ c_{k-2,0}^{\lambda} & c_{k-2,1}^{\lambda} & c_{k-2,2}^{\lambda} & \cdots & c_{k-2,k-1}^{\lambda} \\ \end{pmatrix}.\] Since $c^\lambda_{a,b} = c^\lambda_{b-1,a+1}$, the above matrix is symmetric. Hence, $T^{-1}\left(C^\lambda\right)$ is in $\Sym_k$ for each $\lambda$. This implies $C^\lambda$ is in $T\left(\Sym_k\right)$ for each $\lambda$. We now show the other direction: $T\left(\Sym_k\right)\subseteq \spn_\R\left\{C^\lambda\right\}_{\lambda\in2\N}$. In particular, since $T$ is bijective, we show $\Sym_k\subseteq T^{-1}\left(\spn_\R\left\{C^\lambda\right\}_{\lambda\in2\N}\right)$. Let $E_{i,j}$ denote the $k\times k$ matrix whose $i,j$ entry is 1 and all other entries are 0. Note that $T^{-1}\left(E_{i,j}\right) = E_{\left(i+ 1\right) \% k , j}$ and the set \[\left\{E_{i,j} + E_{j,i} : 0\leq i \leq j \leq k-1\right\}\] is a basis for $\Sym_k$. 
We show that each of these basis elements is in $T^{-1}\left(\spn_\R\{C^\lambda\}_{\lambda\in2\N}\right)$. Let $0\leq i\leq j\leq k-1$. \textbf{Case 1:} Assume $0 = i = j $. Let $\lambda = 2k$. Since $k$ is prime, the only $\left(p,q\right)$ so that $2q\left(p+1\right) = \lambda$ are $\left(0,k\right)$ and $\left(k-1,1\right)$. This implies $C^{2k} = E_{0,0} + E_{k-1,1}$. Therefore, \begin{equation} \label{eq:twok} T^{-1}\left(C^{2k}\right) = E_{1,0} + E_{0,1}. \end{equation} Now let $\lambda = 2k^2$. Similarly, the only pairs $\left(p,q\right)$ so that $2q\left(p+1\right) = \lambda$ are $\left(0,k^2\right)$, $\left(k-1,k\right)$, and $\left(k^2-1,1\right)$. This implies $C^{2k^2} = E_{0,0} + E_{k-1,0} + E_{k-1,1}$. Therefore, \begin{equation} \label{eq:twoksquared} T^{-1}\left(C^{2k^2}\right) = E_{1,0} + E_{0,0} + E_{0,1}. \end{equation} From (\ref{eq:twok}) and (\ref{eq:twoksquared}), it follows that \[ E_{0,0} = T^{-1}\left(C^{2k^2}\right) - T^{-1}\left(C^{2k}\right).\] Thus, the basis vector $E_{0,0} + E_{0,0} = 2E_{0,0}$ is in $T^{-1}\left(\spn_\R\{C^{\lambda}\}_{\lambda\in2\N}\right)$. \textbf{Case 2:} Assume $0 = i < j$. Since $k$ is prime, $j$ is trivially coprime to $k$. By Dirichlet's theorem, there exists a prime $r$ so that $r \equiv j \mod k$. Let $\lambda = 2k r$. The only $\left(p,q\right)$ so that $2q\left(p+1\right) = \lambda$ are $\left(kr-1, 1\right)$, $\left(0, kr\right)$, $\left(k-1, r\right)$, and $\left(r-1, k\right)$. This implies, \begin{equation} \label{eq:twokr} T^{-1}\left(C^{2kr}\right) = E_{0,1} + E_{1,0} + E_{0,j} + E_{j,0}. \end{equation} From (\ref{eq:twok}) and (\ref{eq:twokr}) \[E_{0,j} + E_{j,0} = T^{-1}\left(C^{2k r}\right) - T^{-1}\left(C^{2k}\right),\] so $E_{0,j} + E_{j,0} \in T^{-1}\left(\spn_\R\{C^{\lambda}\}_{\lambda\in2\N}\right)$. \textbf{Case 3:} Assume $0 < i \leq j$. Since $k$ is prime, $i$ and $j$ are coprime to $k$. Let $r$, $s$, and $t$ be primes such that $r \equiv j \mod k$, $s\equiv i \mod k$, and $t\equiv ij \mod k$. Let $\lambda = 2 rs$.
Just as in case 2, we obtain the ordered pairs $\left(rs-1, 1\right)$, $\left(0, rs\right)$, $\left(s-1, r\right)$, and $\left(r-1, s\right)$, which implies \begin{equation} \label{eq:twors} T^{-1}\left(C^{2rs}\right) = E_{ij\%k,1} + E_{1,ij\%k} + E_{i,j} + E_{j,i}. \end{equation} Finally, let $\lambda = 2t$. As in case 1, the only ordered pairs are $\left(t-1,1\right)$ and $\left(0,t\right)$, so \begin{equation} \label{eq:twot} T^{-1}\left(C^{2t}\right) = E_{ij\%k,1} + E_{1,ij\%k}. \end{equation} From (\ref{eq:twors}) and (\ref{eq:twot}), we see that \[E_{i,j} + E_{j,i} = T^{-1}\left(C^{2rs}\right) - T^{-1}\left(C^{2 t}\right),\] so $E_{i,j} + E_{j,i} \in T^{-1}\left(\spn_\R\{C^{\lambda}\}_{\lambda\in2\N}\right)$. \end{proof} The following corollary is immediate once we regard the vector space of all real $k\times k$ matrices as an inner product space equipped with the Frobenius inner product and note that, under this inner product, the symmetric and skew-symmetric matrices are orthogonal and every $k\times k$ matrix is the sum of a symmetric matrix and a skew-symmetric matrix. \begin{corollary} If a matrix $X$ solves the system \[C^\lambda \cdot X = 0\text{ for all }\lambda = 2,4,6,\ldots,\] then $X\in T\left(\Sym_k\right)^\perp = T\left(\Skew_k\right)$, where $\Skew_k$ denotes the space of real $k\times k$ skew-symmetric (also known as antisymmetric) matrices. \end{corollary} We now complete the proof of (2) implies (3). Suppose we have two CR isospectral lens spaces $L\left(k;\ell_1,\ell_2\right)$ and $L\left(k;\ell_1',\ell_2'\right)$ where $k$ is an odd prime. Then there exists a symmetric matrix $X$ so that $C^\lambda\cdot X = 0$ for all $\lambda\in 2 \mathbb{N}$. By the above corollary, $X\in \Sym_k \cap T\left(\Skew_k\right)$. Equivalently, $T^{-1}\left(X\right)\in T^{-1}\left(\Sym_k\right) \cap \Skew_k$.
Since $T^{-1}\left(X\right)\in\Skew_k$, we have \[T^{-1}\left(X\right) = - T^{-1}\left(X\right)^t.\] Applying $T$ to both sides yields \[ X = - T\left(T^{-1}\left(X\right)^t\right).\] Since $X$ is symmetric, transposing both sides implies \[X = - T\left(T^{-1}\left(X\right)^t\right)^t.\] Writing this equation in matrix form, we obtain \[\begin{pmatrix} x_{0,0} & x_{0,1} & x_{0,2} & \cdots & x_{0,k-1} \\ x_{1,0} & x_{1,1} & x_{1,2} & \cdots & x_{1,k-1} \\ x_{2,0} & x_{2,1} & x_{2,2} & \cdots & x_{2,k-1} \\ \vdots &\vdots & \vdots & \ddots & \vdots\\ x_{k-1,0} & x_{k-1,1} & x_{k-1,2} & \cdots & x_{k-1,k-1} \\ \end{pmatrix} = -\begin{pmatrix} x_{k-1,1} & x_{k-1,2} & x_{k-1,3} & \cdots & x_{k-1,k-1} & x_{k-1,0} \\ x_{0,1} & x_{0,2} & x_{0,3} & \cdots & x_{0,k-1} & x_{0,0} \\ x_{1,1} & x_{1,2} & x_{1,3} & \cdots & x_{1,k-1} & x_{1,0} \\ \vdots &\vdots & \vdots & \ddots & \vdots & \vdots\\ x_{k-2,1} & x_{k-2,2} & x_{k-2,3} & \cdots & x_{k-2,k-1} & x_{k-2,0} \\ \end{pmatrix}.\] This means $x_{a,b} = -x_{\left(a-1\right)\% k,\left(b+1\right)\% k}$ for all $0\leq a,b\leq k-1$. Applying this fact $k$ times, we see that \[x_{a,b} = \left(-1\right)^k x_{\left(a-k\right)\% k,\left(b+k\right)\% k} = \left(-1\right)^k x_{a,b} = - x_{a,b},\] where the last equality uses that $k$ is odd. Therefore, we must have $x_{a,b} = 0$ for all $a,b$. Thus, $X = 0$. As noted above, this implies $\dim\calH_{p,q}^G = \dim\calH_{p,q}^{G'}$ for all $p,q$, completing the proof that (2) implies (3) in Theorem \ref{thm:isospec}. \subsection{(3) implies (4)} In this section, we extend techniques from \cite{ikeda1979} to the CR setting.
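The character-averaging computations that drive this subsection are straightforward to spot-check numerically for $n = 2$. A minimal sketch (the helper names are ours, not the paper's) computes $\dim\calH^G_{p,q}$ by averaging over the group the characters of the bidegree-$(p,q)$ polynomial spaces:

```python
# Sketch (n = 2): compute dim H^G_{p,q} by averaging characters over G.
# chi_tilde is the character of the bidegree-(p, q) polynomial space P_{p,q};
# dim H^G_{p,q} = (1/k) * sum_m (chi_tilde_{p,q} - chi_tilde_{p-1,q-1})(g^m).
# Names are illustrative, not from the paper.
import cmath

def chi_tilde(k, l1, l2, p, q, m):
    """Character of P_{p,q} at g^m; zero by convention if p or q is negative."""
    if p < 0 or q < 0:
        return 0.0
    zeta = cmath.exp(2j * cmath.pi / k)
    return sum(zeta ** (m * (l1 * (a1 - b1) + l2 * ((p - a1) - (q - b1))))
               for a1 in range(p + 1) for b1 in range(q + 1))

def dim_H_char(k, l1, l2, p, q):
    total = sum(chi_tilde(k, l1, l2, p, q, m)
                - chi_tilde(k, l1, l2, p - 1, q - 1, m)
                for m in range(k))
    return round((total / k).real)
```

For the trivial group ($k = 1$) this recovers $\dim\calH_{p,q}\left(S^3\right) = p + q + 1$.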
For a lens space $L(k;\ell_1,\ldots,\ell_n)$ corresponding to group $G$, consider the following generating function, defined for pairs of complex numbers of sufficiently small modulus, \[F_{\left(k; \ell_1,\ldots,\ell_n\right)}\left(z,w\right) = \sum_{p,q\geq 0} \left(\dim\calH_{p,q}^G\right)z^p w^q.\] We now present a closed form for this generating function that connects the CR spectrum of this lens space to its geometry. \begin{theorem} \label{thm:genfunc} Let $G$ be the group corresponding to the lens space $L\left(k;\ell_1,\ldots,\ell_n\right)$. It follows that, \[F_{\left(k; \ell_1,\ldots,\ell_n\right)}\left(z,w\right) = \frac1k\sum_{m = 0}^{k-1} \frac{1 - z w} {\prod_{i = 1}^n\left(z-\zeta^{-m\ell_i}\right)\left(w-\zeta^{m\ell_i} \right)}\] where $\zeta = e^{2\pi i/k}$. \end{theorem} \begin{proof} Let $\calP_{p,q}$ denote the space of polynomials of bidegree $\left(p,q\right)$ on $S^{2n-1}$. We may consider $\calH_{p,q}$ and $\calP_{p,q}$ as $G$-modules over $\mathbb{C}$. Note that $\calP_{p,q} \cong \calH_{p,q}\oplus |z|^2\calP_{p-1,q-1}$ \cite{klima2004}. Let $\chi_{p,q}$ be the character of $\calH_{p,q}$ and $\widetilde{\chi}_{p,q}$ be the character of $\calP_{p,q}$.
Note that \[\left\{z^{\alpha}\overline{z}^\beta : \alpha,\beta\in \Z_{\geq 0}^n, \left|\alpha\right| = p, \left|\beta\right| = q\right\}\] is a basis for $\calP_{p,q}$, and recall that \[g^m \cdot z^{\alpha} \overline{z}^\beta = \zeta^{m \left(\sum_{i=1}^n{\ell_i \left(\alpha_i - \beta_i\right)}\right)} z^{\alpha} \overline{z}^\beta.\] Since this is a basis on which $g^m$ acts diagonally, we see that \[\widetilde{\chi}_{p,q}\left(g^m\right) = \sum_{\left|\alpha\right| = p,\left|\beta\right| = q} \zeta^{m \left(\sum_{i=1}^n{\ell_i \left(\alpha_i - \beta_i\right)}\right)}.\] Notice that in \[\prod_{i = 1}^n \left(1 + \zeta^{m \ell_i}z + \zeta^{2m \ell_i}z^2 + \cdots\right) \left(1 + \zeta^{-m \ell_i}w + \zeta^{-2m \ell_i}w^2 + \cdots\right),\] the coefficient of $z^p w^q$ is precisely $\widetilde{\chi}_{p,q}\left(g^m\right)$. It follows that, \[\sum_{p,q\geq 0}\widetilde{\chi}_{p,q}\left(g^m\right) z^p w^q = \frac{1}{\prod_{i = 1}^n\left(1 - \zeta^{m \ell_i}z\right) \left(1 - \zeta^{-m \ell_i}w\right)},\] where we have used the fact that $1 + x + x^2 + \cdots = \left(1-x\right)^{-1}$. We now relate the characters to the dimension of eigenspaces. Since $\calP_{p,q} \cong \calH_{p,q} \oplus |z|^2\calP_{p-1,q-1}$, it follows that \[\chi_{p,q} = \widetilde\chi_{p,q}- \widetilde{\chi}_{p-1,q-1}.\] Note that when $p$ or $q$ is zero $\calP_{p,q}=\calH_{p,q}$; therefore, we set $\widetilde{\chi}_{0,-1}=\widetilde{\chi}_{-1,0}=\widetilde{\chi}_{-1,-1}=0$. 
Moreover, $\dim\calH_{p,q}^G$ can be obtained by averaging the character $\chi_{p,q}$ over the group $G$ as follows, \[\dim \mathcal{H}^G_{p,q} = \frac1k \sum_{m = 0}^{k-1} \chi_{p,q}\left(g^m\right).\] Combining all of these facts, we see that \begin{align*} \sum_{p,q\geq 0} \left(\dim\mathcal{H}_{p,q} ^G\right) z^p w^q &= \frac1k\sum_{p,q\geq 0} \lrp{\sum_{m = 0}^{k-1} \chi_{p,q}\left(g^m\right)} z^p w^q\\ &= \frac1k\sum_{p,q\geq 0} \lrp{\sum_{m = 0}^{k-1} \widetilde\chi_{p,q}\left(g^m\right) - \widetilde\chi_{p-1,q-1}\left(g^m\right)} z^p w^q\\ &= \frac1k\sum_{m = 0}^{k-1}\lrp{\lrp{ \sum_{p,q\geq 0} \widetilde\chi_{p,q}\left(g^m\right)z^p w^q} - z w \lrp{\sum_{p,q\geq 1}\widetilde\chi_{p-1,q-1}\left(g^m\right) z^{p-1} w^{q-1}}}\\ &= \frac1k\sum_{m = 0}^{k-1} \frac{1-zw} {\prod_{i = 1}^n\left(1-\zeta^{m\ell_i} z\right) \left(1-\zeta^{-m\ell_i} w\right)}= \frac1k\sum_{m = 0}^{k-1} \frac{1-zw} {\prod_{i = 1}^n\left(z-\zeta^{-m\ell_i}\right) \left(w-\zeta^{m\ell_i}\right)}. \end{align*} \end{proof} To complete the proof that $(3)$ implies $(4)$ in Theorem \ref{thm:isospec} in the $n=2$ case, we need the following. \begin{lemma} \label{lem:independence} Let $\zeta = e^{2\pi i/k}$. The set \[\left\{\frac{1}{\left(z-\zeta^i \right)\left(w-\zeta^{-i}\right)\left(z-\zeta^j \right)\left(w-\zeta^{-j}\right)}\ \middle |\ 0\leq i\leq j\leq k-1\right\}\] is linearly independent over $\C$. \end{lemma} \begin{proof} We will label the elements of the above set as \[f_{l,m} = \frac1{\left(z-\zeta^l\right)\left(w-\zeta^{-l}\right)\left(z-\zeta^m\right)\left(w-\zeta^{-m}\right)}.\] First, note that $\operatorname{span}\left\{f_{l,l}: 0 \leq l \leq k-1\right\} \cap \operatorname{span}\left\{f_{l,m}: 0 \leq l < m \leq k-1\right\} = \left\{0\right\}$ and the set $\left\{f_{l,l}: 0 \leq l \leq k-1\right\}$ is linearly independent.
Now for $\left\{f_{l,m}: 0 \leq l < m \leq k-1\right\}$, suppose there exist $a_{l,m}\in \mathbb{C}$ such that \[\sum_{l=0}^{k-1}\sum_{m=l+1}^{k-1}a_{l,m}f_{l,m}=\sum_{l=0}^{k-1}\sum_{m=l+1}^{k-1}\frac{a_{l,m}}{\left(z-\zeta^l\right)\left(w-\zeta^{-l}\right)\left(z-\zeta^m\right)\left(w-\zeta^{-m}\right)}=0.\] Multiplying by the factor $\left(z^k-1\right)\left(w^k-1\right)$ leaves the polynomial identity \[\left(z^k-1\right)\left(w^k-1\right)\sum_{l=0}^{k-1}\sum_{m=l+1}^{k-1}\frac{a_{l,m}}{\left(z-\zeta^l\right)\left(w-\zeta^{-l}\right)\left(z-\zeta^m\right)\left(w-\zeta^{-m}\right)}=0.\] Setting $z=\zeta^{l_0}$, $w=\zeta^{-m_0}$, where $0 \leq l_0 < m_0 \leq k-1$, implies $a_{l_0,m_0}=0$. Therefore, the set $\left\{f_{l,m}: 0 \leq l < m \leq k-1\right\}$ is linearly independent. \end{proof} \begin{corollary} Let $L\left(k; \ell_1,\ell_2\right)$ and $L\left(k; \ell_1',\ell_2'\right)$ be lens spaces with groups $G$ and $G'$ respectively. If $\dim\calH^G_{p,q} = \dim\calH^{G'}_{p,q}$ for all $p,q$, then there exists a permutation $\sigma$ and an integer $a$ such that $\left(\ell_1,\ell_2\right) \equiv \left(a\ell'_{\sigma\left(1\right)},a\ell'_{\sigma\left(2\right)}\right) \mod k$. \end{corollary} \begin{proof} Since $\dim\calH^G_{p,q} = \dim\calH^{G'}_{p,q}$ for all $p,q$, $F_{\left(k; \ell_1,\ell_2\right)}\left(x\right) = F_{\left(k; \ell_1',\ell_2'\right)}\left(x\right)$. Applying Theorem~\ref{thm:genfunc} and cancelling like terms, we obtain \[\sum_{m = 0}^{k-1} \frac{1} {\left(z-\zeta^{-m\ell_1}\right)\left(w-\zeta^{m\ell_1}\right)\left(z-\zeta^{-m\ell_2}\right)\left(w-\zeta^{m\ell_2}\right)} = \sum_{m = 0}^{k-1} \frac{1} {\left(z-\zeta^{-m\ell_1'}\right)\left(w-\zeta^{m\ell_1'}\right) \left(z-\zeta^{-m\ell_2'}\right)\left(w-\zeta^{m\ell_2'}\right)}.\] By Lemma \ref{lem:independence}, the terms of these sums are linearly independent, so we may equate each term on the left with a term on the right.
Considering the $m = 1$ term on the left, there exists some $m_0$ such that \[\frac{1}{\left(z-\zeta^{-\ell_1}\right)\left(w-\zeta^{\ell_1}\right) \left(z-\zeta^{-\ell_2}\right)\left(w-\zeta^{\ell_2}\right)} = \frac{1}{\left(z-\zeta^{-m_0\ell_1'}\right)\left(w-\zeta^{m_0\ell_1'}\right) \left(z-\zeta^{-m_0\ell_2'}\right)\left(w-\zeta^{m_0\ell_2'}\right)}.\] This implies that $\left(m_0\ell_1',m_0\ell_2'\right)$ is congruent modulo $k$ to a permutation of $\left(\ell_1, \ell_2\right)$. The claim follows after setting $a = m_0$. \end{proof} \subsection{(4) implies (1) and (1) implies (2)} We begin with the following theorem, which shows that (4) implies (1). \begin{theorem} Let $L\left(k;\ell_1,\ldots,\ell_n\right)$ and $L\left(k;\ell_1',\ldots,\ell_n'\right)$ be lens spaces. If there exists a permutation $\sigma$ and an integer $a$ such that $\left(\ell_1',\ldots,\ell_n'\right) \equiv \left(a\ell_{\sigma\left(1\right)},\ldots,a\ell_{\sigma\left(n\right)}\right) \mod k$, then $L\left(k;\ell_1,\ldots,\ell_n\right)$ and $L\left(k;\ell_1',\ldots,\ell_n'\right)$ are CR isometric. \end{theorem} \begin{proof} For any permutation $\sigma$, \[\left(z_1,z_2,\dots,z_n\right) \mapsto (z_{\sigma\left(1\right)},z_{\sigma\left(2\right)}, \dots, z_{\sigma\left(n\right)})\] is a CR isometry from $S^{2n-1}$ to itself. This induces a CR isometry from $L\left(k; \ell_1, \dots, \ell_n\right)$ to $L\left(k; \ell_{\sigma(1)}, \dots, \ell_{\sigma\left(n\right)}\right)$. Note that $a$ must be relatively prime to $k$ by the definition of a lens space. If $g$ denotes the action corresponding to $L\left(k; \ell_1, \dots, \ell_n\right)$, then $g^a$ generates the same subgroup as $g$. Hence $L\left(k; \ell_1, \dots, \ell_n\right) = L\left(k; a\ell_1, \dots, a\ell_n\right)$, which completes the proof. \end{proof} Finally, we need to prove that (1) implies (2) to conclude the proof of Theorem \ref{thm:isospec}.
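Both implications can be illustrated by computing the invariant dimensions directly. The sketch below (the lens spaces $L(5;1,2)$, $L(5;2,4)$ and $L(5;1,1)$ are illustrative choices, not from the text) obtains $\dim\mathcal{H}^G_{p,q}$ by averaging $\chi_{p,q}=\widetilde\chi_{p,q}-\widetilde\chi_{p-1,q-1}$ over the group:

```python
import cmath
import itertools

# dim H^G_{p,q} = (1/k) sum_m (chi~_{p,q}(g^m) - chi~_{p-1,q-1}(g^m)),
# computed by brute force over the monomial basis (n = 2, i.e. 3-dim lens spaces).
def invariant_dims(k, l, pmax=5, qmax=5):
    n = len(l)
    zeta = cmath.exp(2j * cmath.pi / k)
    def chi_t(p, q, m):
        if p < 0 or q < 0:
            return 0
        return sum(
            zeta ** (m * sum(l[i] * (a[i] - b[i]) for i in range(n)))
            for a in itertools.product(range(p + 1), repeat=n) if sum(a) == p
            for b in itertools.product(range(q + 1), repeat=n) if sum(b) == q
        )
    return [[round(sum((chi_t(p, q, m) - chi_t(p - 1, q - 1, m)).real
                       for m in range(k)) / k)
             for q in range(qmax)] for p in range(pmax)]

# (2, 4) = 2 * (1, 2) mod 5, so these lens spaces must share all dimensions ...
assert invariant_dims(5, (1, 2)) == invariant_dims(5, (2, 4))
# ... while (1, 1) is not a unit multiple of a permutation of (1, 2):
assert invariant_dims(5, (1, 1)) != invariant_dims(5, (1, 2))
```

The first assertion is the isospectrality guaranteed by the theorem just proved; the second shows the invariants do separate inequivalent weight vectors in this small case.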
This implication follows from the more general fact that the Kohn Laplacian commutes with CR isometries, and therefore isometric CR manifolds are isospectral. We refer to \cite[Section 4.4]{Canzani} for the proof in the Riemannian setting and argue similarly in the CR setting. \newcommand{\etalchar}[1]{$^{#1}$} \begin{thebibliography}{ABB{\etalchar{+}}19} \bibitem[ABB{\etalchar{+}}19]{REU18} John Ahn, Mohit Bansil, Garrett Brown, Emilee Cardin, and Yunus~E. Zeytuncu. \newblock Spectra of {K}ohn {L}aplacians on spheres. \newblock {\em Involve}, 12(5):855--869, 2019. \bibitem[BGS{\etalchar{+}}21]{REU2020Weyl} Henry Bosch, Tyler Gonzales, Kamryn Spinelli, Gabe Udell, and Yunus~E. Zeytuncu. \newblock {A Tauberian approach to Weyl's law for the Kohn Laplacian on spheres}. \newblock {\em Canadian Mathematical Bulletin}, pages 1--21, 2021. \bibitem[Can13]{Canzani} Yaiza Canzani. \newblock {\em Notes for Analysis on Manifolds via the Laplacian}. \newblock Lecture Notes. https://canzani.web.unc.edu/wp-content/uploads/sites/12623/2016/08/Laplacian.pdf, 2013. \bibitem[Fol72]{Folland} G.~B. Folland. \newblock The tangential {C}auchy-{R}iemann complex on spheres. \newblock {\em Trans. Amer. Math. Soc.}, 171:83--133, 1972. \bibitem[IY79]{ikeda1979} Akira Ikeda and Yoshihiko Yamamoto. \newblock On the spectra of {$3$}-dimensional lens spaces. \newblock {\em Osaka Math. J.}, 16(2):447--469, 1979. \bibitem[Kli04]{klima2004} Oldrich Klima. \newblock Analysis of a subelliptic operator on the sphere in complex n-space. \newblock Master's thesis, The University of New South Wales, April 2004. \bibitem[ST84]{Stanton1984TheHE} N.~K. Stanton and David~S. Tartakoff. \newblock The heat equation for the $\overline{\partial}_b$-{L}aplacian. \newblock {\em Communications in Partial Differential Equations}, 9:597--686, 1984. \bibitem[Sta84]{Stanton} Nancy~K. Stanton. \newblock {The heat equation in several complex variables}.
\newblock {\em Bulletin (New Series) of the American Mathematical Society}, 11(1):65--84, 1984. \bibitem[Str15]{strichartz2015} Robert Strichartz. \newblock Spectral asymptotics on compact {H}eisenberg manifolds. \newblock {\em The Journal of Geometric Analysis}, 26, 2015. \end{thebibliography} \end{document}
2206.14232v2
http://arxiv.org/abs/2206.14232v2
The arithmetic volume of hypersurfaces in toric varieties and Mahler measures
\documentclass[12pt]{amsart} \usepackage{xcolor} \usepackage[draft]{todonotes} \usepackage[english]{babel} \usepackage{geometry} \geometry{textwidth=15cm} \usepackage[T1]{fontenc} \usepackage[latin1]{inputenc} \usepackage{amsmath,amssymb,mathrsfs,amsthm, tikz-cd,mathrsfs} \usepackage{soul} \usepackage{epsfig} \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=blue, urlcolor=red, pdftitle={}, } \usepackage{graphicx} \usepackage{xypic} \usepackage{enumitem} \usepackage{datetime} \usepackage{xcolor} \usepackage{fancyhdr} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{claim}[theorem]{Claim} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{defprop}[theorem]{Definition-Proposition} \newtheorem{defthm}[theorem]{Definition-Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition}\newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \renewcommand{\contentsname}{\centerline{Table of contents}} \newcommand{\vc}{\|\cdot\|} \newcommand{\C}{\mathbb{C}} \newcommand{\mc}{{k\LL}} \newcommand{\mb}{\mathbb{C}} \newcommand{\hm}{\widehat{ch}(\mathcal{O}(mD),h_1)} \newcommand{\Z}{\mathbb{Z}} \newcommand{\s}{\mathbb{S}^1} \newcommand{\lra}{\longrightarrow} \newcommand{\al}{\alpha} \newcommand{\la}{\lambda} \newcommand{\A}{\mathcal{A}} \newcommand{\R}{\mathbb{R}} \newcommand{\cl}{\mathcal{C}^\infty} \newcommand{\p}{\mathbb{P}} \newcommand{\eps}{\varepsilon} \newcommand{\bei}{\begin{enumerate}[label=(\roman*)]} \newcommand{\T}{\mathbb{T}} \newcommand{\tf}{{}^t f} \newcommand{\vf}{\varphi} \newcommand{\h}{\mathcal{H}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\N}{\mathbb{N}} \newcommand{\z}{\overline{z}} \newcommand{\pt}{\partial} \newcommand{\dif}{\frac{\pt}{\pt z}} \newcommand{\K}{\mathrm{k}} \newcommand{\nt}{\rm{int}} \newcommand{\m}{\omega} \newcommand{\f}{\infty} \newcommand{\e}{\textbf{e}} \newcommand{\D}{\overline{D}} 
\newcommand{\E}{\overline{E}} \newcommand{\LL}{\mathcal{L}} \newcommand{\X}{\mathcal{X}} \newcommand{\ZZ}{\mathcal{Z}} \newcommand{\ra}{\rightarrow} \begin{document} \vskip 0.5cm \title{The arithmetic volume of hypersurfaces in toric varieties and Mahler measures} \author{Mounir Hajli} \address{School of Science, Westlake University, Hangzhou 310024, China} \email{[email protected]} \maketitle \begin{abstract} In this paper we determine the canonical arithmetic volume of hypersurfaces in smooth projective toric varieties. As a consequence, we prove a generalized Hodge index theorem on hypersurfaces in smooth projective toric varieties. \end{abstract} \begin{center} {\small Keywords: \textit{Arithmetic volume; Height; Mahler measure}}. \end{center} \begin{center} {\small MSC: \textit{14G40; 11G50. }} \end{center} \tableofcontents \section{Introduction} Let ${Z}$ be an arithmetic variety over $\mathrm{Spec}(\Z)$, that is, a projective, integral and flat scheme over $\Z$. Let $d+1$ be the absolute dimension of ${Z}$. Let $\overline{{\LL}}=(\LL,\|\cdot\|_\phi)$ be a Hermitian line bundle on ${Z}$, such that the norm $\|\cdot\|_\phi$ is defined by a continuous weight $\phi$. For any $k\in \N_{\geq 1}$, $k\overline{{\LL}}$ denotes $\overline{{\LL}}^{\otimes k}$.\\ The arithmetic volume $\widehat{\mathrm{vol}}(\overline{{\LL}})$ is defined by \[ \widehat{\mathrm{vol}}(\overline{{\LL}})= \underset{k\rightarrow \infty}{\limsup} \frac{\widehat{h}^0(\overline{H^0(Z,k\LL)}_{(\sup,k\phi)}) }{k^{d+1}/(d+1)!}, \] where $\widehat{h}^0(\overline{H^0(Z,k\LL)}_{(\sup,k\phi)}):=\log \# \{ s\in H^0({Z},k{\LL}) \mid \|s\|_{\sup,k\phi}\leq 1 \}$. This arithmetic invariant was introduced by Moriwaki in \cite{Moriwaki2}. One of the main results of \cite{Moriwaki2} can be stated as follows. Let $\overline{{\LL}}$ be a nef $\mathcal{C}^\infty$ Hermitian line bundle on $Z$ (see Section \ref{sec3}).
Then, the height of $Z$ with respect to $\overline{{\LL}}$ is equal to the arithmetic volume of $Z$ with respect to $\overline{{\LL}}$. Namely, \begin{equation}\label{voldeg} \widehat{\mathrm{vol}}(\overline{{\LL}})=h_{\overline{{\LL}}}(Z), \end{equation} (see \cite{Character} for the definition of the heights). The proof of this result is difficult and relies on the continuity of the arithmetic volume function proved in \cite{Moriwaki2}. \\ The explicit determination of the arithmetic volume is a very difficult problem. When $Z$ is a toric variety, and $\overline{\LL}$ is toric in the sense of Burgos-Philippon-Sombra \cite{Burgos2}, the arithmetic volume of $Z$ with respect to $\overline{\LL}$ possesses a nice integral representation, namely \begin{equation}\label{vol-rep} \widehat{\mathrm{vol}}(\overline \LL)=(d+1)! \int_{\Delta_\LL} \max(0,\vartheta_{\overline{{\LL}}})d\mathrm{vol}_M, \end{equation} where $M$ is a free $\Z$-module of rank $d$, $\Delta_\LL$ is a rational polytope in $M\otimes_\Z\R$ attached to $\LL$, $\vartheta_{\overline{{\LL}}}$ is a concave function defined in terms of the metric of $\LL$, and $d\mathrm{vol}_M$ is a normalized Lebesgue measure on $M\otimes_\Z\R$, see \cite{Burgos3, MounirKJM, Moriwaki}. We can prove \eqref{vol-rep} using three ingredients. Namely, the equation \[ \widehat{\mathrm{vol}}(\overline \LL)=\underset{k\rightarrow \infty}{\limsup} \frac{ \log \# \{ s\in H^0({Z},k{\LL}) \mid \|s\|_{\mu,k\phi}\leq 1 \}}{k^{d+1}/(d+1)!}, \] where $\|\cdot\|_{\mu,k\phi}$ is a Euclidean norm (see Section \ref{sec2}), the fact that the Euclidean lattice $\overline{H^0(Z,k\LL)}_{(\mu, k\phi)}$ possesses a natural orthonormal basis with respect to $\|\cdot\|_{\mu,k\phi}$, and some classical results from the geometry of numbers \cite{SouleLattice}.
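The geometry-of-numbers ingredient can be illustrated with a toy computation, not part of the paper's argument: for the lattice $\Z^r$ with the scaled norm $\|x\|=|x|/T$, the count behind $\widehat{h}^0$ is the number of integer points in the ball of radius $T$, which is asymptotic to $\mathrm{vol}(B_r)\,T^r$ (here $r=2$, so $\pi T^2$):

```python
import math
from itertools import product

# Toy check: log #{x in Z^r : |x| <= T} versus log(vol(B_r) * T^r).
def h0(r, T):
    R = int(T)
    count = sum(1 for x in product(range(-R, R + 1), repeat=r)
                if sum(t * t for t in x) <= T * T)
    return math.log(count)

def volume_term(r, T):
    vol_unit_ball = math.pi ** (r / 2) / math.gamma(r / 2 + 1)  # vol of unit r-ball
    return math.log(vol_unit_ball) + r * math.log(T)

for T in (10.0, 40.0):
    assert abs(h0(2, T) - volume_term(2, T)) < 0.05
```

This is exactly the kind of lattice-point-versus-volume comparison that, after normalizing by $k^{d+1}/(d+1)!$, drives limits such as \eqref{vol-rep}.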
Note that in \cite{Burgos3}, the authors bypass the use of the $L^2$-norm using the fact that the basis of toric sections is orthogonal with respect to the sup-norm for all places, whether Archimedean or ultrametric. Burgos, Moriwaki, Philippon and Sombra \cite{Burgos3} gave a combinatorial proof of \eqref{voldeg} in the toric setting.\\ Let $M$ be a free $\mathbb{Z}$-module of rank $d$ and $N$ its dual. We consider a fan $\Sigma$ on $M_\mathbb{R}= M\otimes_\mathbb{Z} \R$ and we denote by $Y $ the associated toric variety over $\mathbb{Z}$, see for instance \cite{Oda} or \cite[Paragraph 2.2]{Maillot}. { In the sequel, we assume that $Y$ is smooth and projective (this is equivalent to the fact that $\Sigma$ is nonsingular and the support of $\Sigma$ is $M_\mathbb{R}$, see \cite[Theorems 1.10 and 1.11]{Oda})}. We set $\mathbb{T}_M:=\mathrm{Hom}_\mathbb{Z}(M,\mathbb{C}^\ast)\simeq (\mathbb{C}^\ast)^d$ and we denote by $\mathbb{S}_N\simeq (\mathbb{S}^1)^d$ its compact torus. We have an open dense immersion $\mathbb{T}_M\hookrightarrow Y$ with an action of $\mathbb{T}_M$ on $Y$ which extends the action of $\mathbb{T}_M$ on itself by translations. There exists a canonical volume form on $\mathbb{S}_N$ (see \cite[Remarque 6.1.2, and p. 97]{Maillot}) which is denoted by $d\mu_\infty$. Let $s$ be a nonzero rational function on $Y$. The Mahler measure of $s$ is defined as follows. \[ m(s)=\int_{\mathbb{S}_N} \log|s|d\mu_\infty. \] Let $\LL$ be an equivariant line bundle on $Y$. It is well known that $\LL$ possesses a natural continuous Hermitian metric which we denote by $\|\cdot\|_{\phi_\infty}$ and which is given in terms of the combinatorial structure of $Y$ \cite[Paragraph 3.4]{Maillot}. This metric is called the canonical metric of $\LL$. We denote this Hermitian line bundle by $\overline {\LL}_{\phi_\infty}$. Let $\p^N$ be the projective space of dimension $N$. The canonical metric of the standard line bundle $\mathcal O(1)$ on $\p^N$ is given as follows.
\[ \|\cdot\|_{\phi_\infty}=\frac{|\cdot|}{\max(|x_0|,\ldots,|x_N|)}, \] where $x_0,x_1,\ldots,x_N$ are the standard homogeneous coordinates on $\p^N$. Let $\LL$ be an equivariant line bundle generated by its global sections on $Y$. Then $\LL$ defines an equivariant morphism $\psi_\LL$ on $Y$ with image in $\p^{h^0(Y, \LL)-1}$. We can show that $\overline{\LL}_{{\phi_\infty}}=\psi_\LL^\ast \overline{\mathcal O(1)}_{\phi_\infty}$, see \cite[Paragraph 3.3.3]{Maillot}. It is worth noting that, up to a positive multiplicative constant, $ \|\cdot\|_{\phi_\infty}$ is the unique toric metric on $\LL$ which has the Bernstein-Markov property (see Section \ref{sec2}) with respect to $\mu_\infty$, see Theorem \ref{BMtoric}.\\ Let $X$ be a hypersurface in $Y$ which is defined by a nonzero rational section $s$ of an equivariant line bundle $\mathcal E$ on $Y$. Let $D$ be an equivariant divisor on $Y$ such that $\mathcal E\simeq \mathcal O(D)$. Let $s_D$ be the rational function of $Y$ which corresponds to $s$ under this isomorphism. The canonical height of the hypersurface $X$ with respect to $\overline {\LL}_{\phi_\infty}$ is given in terms of the Mahler measure of $s_D$. Namely \[ h_{\overline { \LL}_{\phi_\infty}}(X)=\deg(\LL_\Q)\ m(s_D), \] see \cite[Proposition 7.2.1]{Maillot}. In the sequel, $\overline{\mathcal E}_{\psi_\infty}$ denotes the line bundle $\mathcal E$ endowed with its canonical metric $\|\cdot\|_{\psi_\infty}$.\\ Our first goal is to determine the canonical arithmetic volume of $X$ with respect to $\overline{\LL}_{{\phi_\infty}}$. Our first result is the following equation. \begin{equation}\label{mt} \widehat{\mathrm{vol}}_X({\overline {\LL}_{\phi_\infty}})= \mathrm{vol}(\LL_\Q) m(s_D), \end{equation} where $\widehat{\mathrm{vol}}_X({\overline {\LL}_{\phi_\infty}})$ is the arithmetic volume of $X$ with respect to $\overline {\LL}_{\phi_\infty}$ and $ \mathrm{vol}(\LL_\Q)$ is the geometric volume of $\LL_\Q$, see Theorem \ref{maintheorem}.
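Jensen's formula makes such Mahler measures computable in closed form in the one-dimensional case: $m(z-a)=\log\max(1,|a|)$ on $\mathbb{S}^1$. A quick numeric check (the polynomials $z-2$ and $z-\tfrac12$ are illustrative choices, not taken from the text):

```python
import cmath
import math

# m(z - a) = integral over the unit circle of log|z - a| d(theta)/2pi.
# The midpoint rule on a periodic analytic integrand converges very rapidly.
def mahler(a, N=4096):
    return sum(math.log(abs(cmath.exp(2j * math.pi * (j + 0.5) / N) - a))
               for j in range(N)) / N

assert abs(mahler(2.0) - math.log(2)) < 1e-9   # root outside the circle
assert abs(mahler(0.5) - 0.0) < 1e-9           # root inside: measure is 0
```

The same quadrature idea extends to the $d$-dimensional torus $\mathbb{S}_N$, which is how one would numerically explore the right-hand side of \eqref{mt}.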
As a first application, we deduce that \begin{equation}\label{volhinfty} \widehat{\mathrm{vol}}_X({\overline{ \LL}_{\phi_\infty}})=h_{ \overline{\LL}_{\phi_\infty}}({{X}}). \end{equation} In \cite[Proposition 1.5]{MounirKJM}, we proved that $\overline{\LL}_{\phi_\infty}$ is arithmetically nef but not big, and hence not arithmetically ample on $Y$ (see Section \ref{sec3} for the definitions of arithmetically nef, big and ample Hermitian line bundles). \\ Classically, \eqref{voldeg} can be used to prove \eqref{volhinfty}. Let us outline this argument. First, recall that $\overline{\LL}_{\phi_\infty}$ can be approximated by a sequence of nef $\mathcal{C}^\infty$ Hermitian line bundles \cite[Paragraph 2.1]{Zhang}. One then uses the fact that the arithmetic volume function and the height are continuous with respect to the variation of the metrics \cite[Proposition 3.2.2]{BoGS} and \cite[Proposition 4.2]{Moriwaki2}. Our method for the proof of \eqref{volhinfty} is different and more direct. As a consequence of this study, we show that the equation \[ \widehat{\mathrm{vol}}_X({\overline{\LL}_{\phi}})=h_{ \overline{ \LL}_{\phi}}({{X}}) \] holds for every Hermitian line bundle ${\overline{ \LL}_{\phi}}$ generated by its small sections on $Y$, see Theorem \ref{ThmVolDeg}. Thus we partially recover \eqref{voldeg}. \\ We proved a generalized Hodge index theorem on toric varieties, see \cite[Theorem 5.5]{HajliTAMS}. In this paper, we show that the methods of \cite{HajliTAMS} can be used to prove a generalized Hodge index theorem on hypersurfaces in $Y$. Let $\phi$ be a semipositive weight on $\LL$. We shall prove that \[ \widehat{\mathrm{vol}}_X(\overline{\LL}_\phi)\geq h_{\overline{\LL}_\phi}(X), \] see Theorem \ref{Hodge}.\\ The approach proposed here for the study of these problems relies on the combinatorial structure of the toric variety $Y$. The space $Y(\C)$ possesses a canonical measure which is denoted by $\mu_\infty$.
Its restriction to the compact torus of $Y(\C)$ is a Haar measure. To $\mu_\infty$, $\overline{\LL}_{\phi_\infty}$ and $\overline{\mathcal E}_{\psi_\infty}$ we attach an Euclidean lattice $ \overline{{H^0}\left({{Y}},k \LL+\mathcal E\right)}_{(\mu_\infty,k\phi_\infty+\psi_\infty)}$ for every $k\in \N$ (see Section \ref{sec2}). We call $ \overline{{H^0}\left({{Y}},k\LL+\mathcal E \right)}_{(\mu_\infty,k\phi_\infty+\psi_\infty)}$ the \textit{canonical Euclidean lattice} associated with $\mu_\infty$, $k\overline{\LL}_{\phi_\infty}$ and $\overline{\mathcal E}_{\psi_\infty}$. This lattice plays a central role in this paper. The Euclidean lattice $ \overline{{H^0}\left({{Y}},k\LL+\mathcal E \right)}_{(\mu_\infty,k\phi_\infty+\psi_\infty)}$ induces a structure of Euclidean lattice on ${H^0}\left({{X}},(k\LL+\mathcal E)_{|_X} \right)$, which we denote by $ \overline{{H^0}\left({{X}},(k\LL+\mathcal E)_{|_X} \right)}_{\mathrm{sq},(\mu_\infty,k\phi_\infty+\psi_\infty)}$ (see Section \ref{sec2} for more details on the construction). We are naturally led to study the following limits \[ \limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},(k\LL+ \mathcal{E})_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k{{\phi_\infty+\psi_\infty}})} )}{k^d/d!}, \] and \[ \limsup_{k\rightarrow \infty}\frac{\widehat{\deg}(\overline{{H^0}({{X}},(k\LL+ \mathcal{E})_{|_{{X}}}) }_{ \mathrm{sq}, (\mu_\infty,k\phi_\infty+\psi_\infty)} )}{k^d/d!}. \] In Proposition \ref{4.3} we prove that \[ \limsup_{k\rightarrow \infty} \frac{{{{\hat{h}^0}}} \left(\overline{{H^0}({{X}},(k\LL+\mathcal E)_{|_{{X}}}) }_{{\mathrm{sq}}, (\mu_\infty, k\phi_\infty+\psi_\infty)}\right) }{k^d/d!}=\limsup_{k\rightarrow \infty} \frac{{{{\hat{h}^0}}} \left(\overline{{H^0}({{X}},k\LL_{|_{{X}}}) }_{{\mathrm{sq}}, (\mu_\infty, k\phi_\infty)}\right) }{k^d/d!}. \] An important point to note here is that the $\chi$-arithmetic volume of canonical Euclidean lattices is zero. 
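The induced quotient norm entering the ``$\mathrm{sq}$'' lattices above can be made concrete in a finite-dimensional toy model, not taken from the paper: for a surjection $\pi:\R^3\to\R^2$ with Euclidean norms, $\|v\|_{\mathrm{sq}}=\inf\{\|m\| : \pi(m)=v\}$ is attained at the minimum-norm preimage, computable with the Moore-Penrose pseudoinverse.

```python
import numpy as np

# Toy quotient-norm computation: ||v||_sq = inf over the fiber pi^{-1}(v),
# attained at the pseudoinverse solution (the matrix and vector are illustrative).
P = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
v = np.array([3.0, 1.0])

m_star = np.linalg.pinv(P) @ v          # minimum-norm preimage of v
assert np.allclose(P @ m_star, v)

# Brute-force check over the fiber {m_star + t * ker(P)}.
kernel = np.array([-2.0, 1.0, 1.0])
kernel /= np.linalg.norm(kernel)
assert np.allclose(P @ kernel, 0.0)
ts = np.linspace(-5.0, 5.0, 2001)
fiber_norms = np.linalg.norm(m_star[None, :] + ts[:, None] * kernel[None, :], axis=1)
assert fiber_norms.min() >= np.linalg.norm(m_star) - 1e-9
```

The minimum is attained at $t=0$ precisely because the pseudoinverse solution is orthogonal to the kernel, which is the finite-dimensional shadow of admissibility of the exact sequences used below.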
We use the additivity of the $\chi$-arithmetic degree on admissible metrized sequences, and a theorem due to Szeg\"o and generalized by Deninger, to deduce the following inequality \[ \widehat{\mathrm{vol}}_X({\overline {\LL}_{\phi_\infty}})\geq \mathrm{vol}(\LL_\Q) m(s_D). \] Let $(\|\cdot\|_{\phi_p})_{p=1,2,\ldots}$ be a sequence of smooth Hermitian metrics on $\LL$ converging uniformly to $\|\cdot\|_{\phi_\infty}$. We shall show that \[ \lim_{p\rightarrow \infty} \limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},{k\LL}_{|_{{X}}}) }_{{\mathrm{sq}},(\mu,k{{\phi_p}})} )}{k^d/d!} =\limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},{k\LL}_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k{{\phi_\infty}})} )}{k^d/d!}, \] where $\mu$ is any smooth probability measure on $Y$ (see Proposition \ref{limit}). Using the Bernstein-Markov property (see Section \ref{sec2}), we shall deduce that \[ \lim_{p\rightarrow \infty} \widehat{\mathrm{vol}}_X(\overline{\LL}_{\phi_p})=\limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},{k\LL}_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k{{\phi_\infty}})} )}{k^d/d!}. \] It is not difficult to prove that \[ \lim_{p\rightarrow \infty} \widehat{\mathrm{vol}}_X(\overline{\LL}_{\phi_p})= \widehat{\mathrm{vol}}_X(\overline{\LL}_{\phi_\infty}). \] Using the technical Lemma \ref{h1}, we shall deduce the following \[ \lim_{k\rightarrow \infty}\frac{\widehat{\deg}(\overline{{H^0}({{X}},(k\LL+ \mathcal{E})_{|_{{X}}}) }_{ \mathrm{sq}, (\mu_\infty,k\phi_\infty+\psi_\infty)} )}{k^d/d!}= \limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},(k\LL+ \mathcal{E})_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k{{\phi_\infty+\psi_\infty}})})}{k^d/d!}. \] Gathering all these computations, we shall conclude the proof of \eqref{mt}.
\section{Preliminaries}\label{sec2} A normed $\Z$-module $\overline M=(M,\|\cdot\|)$ is a $\Z$-module of finite type endowed with a norm $\|\cdot\|$ on the $\C$-vector space $M_\C=M\otimes_\Z \C$. Let $M_{\mathrm{tors}}$ denote the torsion-module of $M$, $M_{\mathrm{free}}=M/M_{\mathrm{tors}}$, and $M_\R=M\otimes_\Z\R$. We let $B=\{m\in M_\R: \|m\|\leq 1\}$. There exists a unique Haar measure on $M_\R$ such that the volume of $B$ is $1$. We let \[ \hat{\chi}(M,\|\cdot\|)=\log \# M_{\mathrm{tors}}-\log \mathrm{vol}(M_\R/ (M/ M_{\mathrm{tors}}) ). \] Equivalently, we have \[ \hat{\chi}(M,\|\cdot\|)=\log \# M_{\mathrm{tors}}-\log\left( \frac{\mathrm{vol}(M_\R/(M/M_{\mathrm{tors}} ))}{\mathrm{vol}(B(M,\|\cdot\|))}\right), \] for any choice of a Haar measure on $M_\R$. \\ The arithmetic degree of $(M,\|\cdot\|)$ is defined as follows \[ \widehat{\deg}(M,\|\cdot\|)=\widehat{\deg} \overline M= \hat{\chi}(\overline M)-\hat{\chi}(\overline{\Z}^r), \] where $\hat{\chi}(\overline{\Z}^r)=-\log \left( \Gamma(\frac{r}{2}+1)\pi^{-\frac{r}{2}} \right)$ and $r$ is the rank of $M\otimes_\Z\Q$. \\ When the norm $\|\cdot\|$ is induced by a Hermitian product $(\cdot,\cdot)$, we say that $\overline M$ is an Euclidean lattice. In this situation, we have \[ \widehat{\deg}(\overline M)=\log \# M/(s_1,\ldots,s_r)-\log \sqrt{\det((s_i,s_j))_{1\leq i,j\leq r} }, \] where $s_1,\ldots,s_r$ are elements of $M$ such that their images in $M_\Q$ form a basis. \\ We define $\widehat{H}^0(\overline M)$ and $\widehat{h}^0(\overline M)$ to be \[ \widehat{H}^0(\overline M)=\left\{ m\in M : \|m\|\leq 1 \right\}\quad\text{and}\quad \widehat{h}^0(\overline M)=\log \# \widehat{H}^0(\overline M).
\] We let \[ \widehat{H}^1(\overline M):=\widehat{H}^0(\overline M^\vee) \quad\text{and}\quad \widehat{h}^1(\overline M):=\widehat{h}^0(\overline M^\vee), \] where $\overline M^\vee$ is the $\Z$-module $M^\vee=\mathrm{Hom}_\Z(M,\Z)$ endowed with the dual norm $\|\cdot\|^\vee$ defined as follows \[ \|f\|^\vee=\sup_{x\in M_\R\setminus\{0\}}\frac{ |f(x)|}{\|x\|}, \quad\forall f\in M^\vee. \] Gillet and Soul\'e \cite{SouleLattice} proved the following \begin{equation}\label{h0h1} -\log(6) \ \mathrm{rank}\ M \leq \widehat{h}^0(\overline M)-\widehat{\deg}(\overline M)-\widehat{h}^1(\overline M)\leq \log(\tfrac{3}{2}) \mathrm{rank}\, M+2\log ((\mathrm{rank}\ M)!), \end{equation} see also \cite[Proposition 2.1]{Moriwaki2}.\\ A short exact sequence of Euclidean lattices \[ 0\longrightarrow \overline{N} \overset{i}{\longrightarrow} \overline{M} \overset{\pi}{\longrightarrow } \overline{Q}\longrightarrow 0, \] is said to be admissible if $i_\R$ and the transpose of $\pi_\R$ are isometries with respect to the Euclidean norms on $N_\R, M_\R$ and $Q_\R$ defining the Euclidean lattices $\overline N, \overline M$ and $\overline Q$ (for more details see \cite{BostTheta}).\\ We denote by $\|\cdot\|_{{\mathrm{sq}}}$ the norm on $Q$ induced by $\overline M$. It is given by \begin{equation}\label{sqdef} \|v\|_{{\mathrm{sq}}}:=\inf_{\substack{m\in M_\R,\\ \pi_\R(m)=v}} \|m\|,\quad\forall v\in Q_\R. \end{equation} Let $Y$ be an arithmetic variety over $\mathrm{Spec}(\Z)$ of absolute dimension $d+1$. We assume that $Y_\Q$ is smooth. Let $\LL$ be a line bundle on $Y$.\\ A weight $\phi$ on $\LL(\C)$ is a locally integrable function on the complement of the zero-section in the total space of the dual line bundle $\mathcal{L}^{-1}(\C)$ satisfying the log-homogeneity property \[ \phi(\lambda v)=\log|\lambda|+\phi(v) \] for all non-zero $v\in \mathcal{L}^{-1}(\C) $ and $\lambda\in \C$. Let $\phi$ be a weight function on $ \LL$. 
The weight $\phi$ defines a Hermitian metric on $\LL$, which we denote by $\|\cdot\|_\phi$. We denote by $\overline{\LL}_\phi$ the line bundle $\LL$ endowed with the metric $\|\cdot\|_\phi$.\\ Let $\mu$ be a probability measure on $Y(\C)$. Let $\phi$ (resp. $\psi$) be a continuous weight function on $\LL$ (resp. $\mathcal E$). Let $k$ be a positive integer. We endow the space of global sections $H^0(Y,{k\LL+\mathcal E})\otimes_\Z\C $ with the $L^2$-norm given as follows \[ \|s\|_{(\mu, k\phi+\psi)}:=\left(\int_{Y(\C)} \|s(x)\|_{k \phi+\psi}^2\mu\right)^{\frac{1}{2}} \quad \forall \ s\in H^0(Y,{k\LL+\mathcal E}) \otimes_\Z\C. \] Let $(\cdot, \cdot)_{(\mu,k \phi+\psi)}$ denote the associated inner product. Also we consider the sup-norm defined by \[ \|s\|_{\sup, k\phi+\psi}:=\sup_{x\in Y(\C)}\|s(x)\|_{k\phi+\psi}\quad \forall \ s\in H^0(Y,{k\LL+\mathcal E}) \otimes_\Z\C. \] Let $X$ be a subvariety of $Y$. We let \[ \|s\|_{\sup, (k\phi+\psi)_{|_X} }:=\sup_{x\in X(\C)}\|s(x)\|_{(k\phi+\psi)_{|_X}}\quad \forall \ s\in H^0(X,{(k\LL+\mathcal E)}_{|_X}) \otimes_\Z \C, \] where $(k\phi+\psi)_{|_X}$ denotes the weight of the restriction of $\|\cdot\|_{k\phi+\psi}$ to $(k\LL+\mathcal E)_{|_X}$.\\ The Bergman distortion function $\rho(\mu,\phi)$ is by definition the function given at a point $x\in X$ by \[ \rho(\mu,\phi)(x):=\sup_{s\in H^0(X,\LL)_\C\setminus\{0\}}\frac{\|s(x)\|_{\phi}^2}{\|s\|_{(\mu, \phi)}^2}. \] If $\{s_1,\ldots,s_N\}$ is a $(\mu, \phi)$-orthonormal basis of $H^0(X,{\LL})_\C$, where $N=\dim_\C H^0(X,\LL)_\C$, then it is well known that \[ \rho(\mu,\phi)(x)=\sum_{j=1}^N \|s_j(x)\|_{\phi}^2\quad \forall\,x\in X, \] see \cite[p. 357]{BermanBoucksom}.\\ We say that $\mu$ has the Bernstein-Markov property with respect to $\|\cdot\|_{\phi}$ if for all $\eps>0$ we have \[ \sup_X \rho(\mu,k\phi)^\frac{1}{2}=O(e^{k\eps}).
\] \begin{remark} If $\mu$ is a smooth positive volume form and $\|\cdot\|_{\phi}$ is a continuous metric on $\LL$, then $\mu$ has the Bernstein-Markov property with respect to $\|\cdot\|_{\phi}$ (see \cite[Lemma 3.2]{BermanBoucksom}).\\ \end{remark} The following result provides a new characterization of canonical metrics on equivariant line bundles on toric varieties. \begin{theorem}\label{BMtoric} Let $Y_\C$ be a nonsingular complex projective toric variety. Let $L$ be an equivariant line bundle on $Y_\C$ generated by its global sections. Let $\|\cdot\|_\phi$ be a toric Hermitian metric on $L$. Then \bei \item $\|\cdot\|_{\phi_\infty}$ has the Bernstein-Markov property with respect to $\mu_\infty$. \item $\|\cdot\|_\phi$ has the Bernstein-Markov property with respect to $\mu_\infty$ if and only if $\|\cdot\|_\phi=\lambda\|\cdot\|_{\phi_\infty}$, where $\lambda$ is a positive constant. \end{enumerate} \end{theorem} \begin{proof} \bei \item Let $k\in \N_{\geq 1}$. It is clear that $ \|\chi^m\|_{\sup,k\phi_\infty}=1$ and $ \|\chi^m\|_{\mu_\infty,k\phi_\infty}=1$ for every $m\in k\Delta_L\cap M$. Using this, it is not difficult to see that \[ \|s\|_{\sup,k\phi_\infty}\leq \sum_{m\in k\Delta_L\cap M} |a_m| \leq \sqrt{\#( k\Delta_L\cap M)} \|s\|_{\mu_\infty,k\phi_\infty}\quad \forall s\in H^0(Y,kL)_\C, \] where the complex coefficients $a_m$ are such that $s=\sum_{m\in k\Delta_L\cap M} a_m \chi^m$. So we have proved (i). \item Let us assume that for every $\eps>0$ we have \[ \|s\|_{\sup,k\phi}\leq C e^{k\eps} \|s\|_{\mu_\infty,k\phi} \quad \forall k\in \N \] for every $s\in H^0(Y,kL)$, where $C$ is a positive constant. \\ For $s=\chi^m$ with $m\in k\Delta_L\cap M$, we get \begin{equation}\label{kg0} e^{-k \check{g}(\frac{m}{k}) } \leq C e^{k\eps} e^{k g(0)},
\end{equation} where $g(u):=\log \|s_L\|_\phi(e^{-u})$ for every $u\in N_\R$, where $s_L$ is an equivariant rational section of $L$ and $\check{g}$ is the Legendre-Fenchel transform of $g$ (see \cite{Burgos2} for the definition of the Legendre-Fenchel transform). We deduce from \eqref{kg0} the following \[ \check{g}(x)\geq -g(0)-\eps\quad\forall x\in \Delta_L. \] Observe that $\check{g}(x)\leq -g(0)$. It follows that \[ \check{g}(x)=-g(0)\quad\forall x\in \Delta_L. \] By \cite[Corollary 12.2.1]{convex}, we infer that $g=g_\infty-g(0)$ where $g_\infty(u)=\log\|s_L\|_{\phi_\infty}(e^{-u})$. That is, $\|\cdot\|_\phi=e^{-g(0)}\|\cdot\|_{\phi_\infty}$, which proves (ii). \end{enumerate} \end{proof} \section{Arithmetic volume of hypersurfaces in projective toric varieties} \label{sec3} For $X$ an irreducible hypersurface of $Y$, the arithmetic volume of $X$ with respect to ${\overline{\LL}_\phi}_{|_X}$ is denoted by $\widehat{\mathrm{vol}}_X( \overline{\LL}_\phi)$ or $\widehat{\mathrm{vol}}_X(\LL, \|\cdot\|_\phi)$. In other words, \[ \widehat{\mathrm{vol}}_X( \overline{\LL}_\phi):=\limsup_{k\rightarrow \infty} \frac{1}{k^d/d!} \hat{h}^0(\overline{H^0(X,k\LL_{|_X} )}_{(\sup,k\phi_{|_X})}). \] Unless otherwise stated, we assume that $Y$ is a smooth projective toric variety, and $\LL$ is an equivariant line bundle generated by its global sections on $Y$. Let $\mathcal E$ be a line bundle on $Y$ such that the defining equation of $X$ is given by a global section $s$ of $\mathcal E$. Note that the following sequence is exact. \[ 0\ra {H^0({{Y}}, k\LL)}\overset{i}\ra {H^0({{Y}}, {k\LL+\mathcal E})} \overset{\pi}\ra {H^0({{X}},(k\LL+\mathcal E)_{|_{{X}}})} \ra 0\ \ (\text{for}\ k=1,2,\ldots) \] where $i$ is multiplication by $s$. \\ Let $k\geq 1$. We consider the following admissible exact sequences.
\begin{equation}\label{exact} 0\ra\overline{{H^0({{Y}}, k\LL)}}_{(\mu,k\phi, s)} \overset{i}\ra\overline{H^0({{Y}}, {k\LL+\mathcal E})}_{(\mu,k\phi)}\overset{\pi} \ra \overline{H^0({{X}},(k\LL+\mathcal E)_{|_{{X}}})}_{{\mathrm{sq}},(\mu,k\phi)} \ra 0, \end{equation} and \begin{equation}\label{exactsup} 0\ra \overline{{H^0({{Y}}, k\LL)}}_{(\sup,k\phi, s)} \overset{i}\ra \overline{H^0({{Y}}, {k\LL+\mathcal E})}_{(\sup,k\phi)}\overset{\pi} \ra \overline{H^0({{X}},(k\LL+\mathcal E)_{|_{{X}}})}_{{\mathrm{sq}},(\sup,k\phi)} \ra 0, \end{equation} where the metrics of $ \overline{H^0({{Y}},k\LL)}_{(\sup,k\phi,s)}$ and $\overline{H^0({{X}},(k\LL+\mathcal E)_{|_{{X}}})}_{{\mathrm{sq}},(\sup,k\phi)}$ (resp. $ \overline{H^0({{Y}},k\LL)}_{(\mu,k\phi,s)}$ and $\overline{H^0({{X}},(k\LL+\mathcal E)_{|_{{X}}})}_{{\mathrm{sq}},(\mu,k\phi)}$ ) are induced by the norm considered on $\overline{H^0({{Y}}, {k\LL+\mathcal E})}_{(\sup,k\phi)}$ (resp. $\overline{H^0({{Y}}, {k\LL+\mathcal E})}_{(\mu,k\phi)}$). \begin{theorem}\label{thm4.3} Let $\phi$ be a weight on $\LL$. We assume that $\|\cdot\|_\phi$ is smooth and positive. Let $\mu$ be a smooth probability measure on $Y(\C)$. We have \bei \item \begin{equation}\label{Th2.2i} \limsup_{k\rightarrow \infty} \frac{{ \hat h^0 } \left(\overline{{H^0}({{X}},(k\LL+\mathcal E)_{|_{{X}}} )}_{{\mathrm{sq}}, (\mu, k\phi) } \right) }{k^d/d!}=\limsup_{k\rightarrow \infty} \frac{{ \hat h^0} \left(\overline{{H^0}({{X}},(k\LL+\mathcal E)_{|_{{X}}}) }_{{\mathrm{sq}}, (\sup, k\phi) } \right) }{k^d/d!}. \end{equation} \item \begin{equation}\label{Th2.2ii} \limsup_{k\rightarrow \infty} \frac{{{{\hat{h}^0}}} \left(\overline{{H^0}({{X}}, k\LL_{|_{{X}}}) }_{{\mathrm{sq}}, (\mu, k\phi)}\right) }{k^d/d!}= \widehat{\mathrm{vol}}_X(\overline{\LL}_\phi). \end{equation} \end{enumerate} \end{theorem} \begin{proof} Let $\phi$ be a weight on $\LL$ such that the metric $\|\cdot\|_\phi$ is smooth and positive.
By \cite[Theorem B, (2.7.3)]{Ran}, we know that for every $\eps>0$ and $k=1,2,\ldots$ \begin{equation}\label{18} \|s\|_{\mathrm{sq},(\sup,k\phi+\psi)}\leq C e^{k\eps} \|s\|_{\sup,(k\phi+\psi)_{|_X}},\quad \forall s\in H^0(X,(k\LL+\mathcal E)_{|_X})\otimes_\Z\C, \end{equation} where $C$ is a positive constant depending only on $\phi$ and $\mu.$ It is clear that \begin{equation}\label{19} \|s\|_{\sup,\phi_{|_{X}}}\leq \|s\|_{\mathrm{sq},(\sup,\phi)}\quad \forall s\in H^0(X,(k\LL+\mathcal E)_{|_X})\otimes_\Z\C. \end{equation} Combining \eqref{18} and \eqref{19}, and following the proof of \cite[Lemma 2.1]{Moriwaki}, it follows immediately that \[ \limsup_{k\rightarrow \infty} \frac{{{{\hat{h}^0}}}\left(\overline{{H^0}({{X}},(k\LL+\mathcal E)_{|_{{X}}}) }_{ (\sup,(k\phi+\psi)_{|_X}) }\right)}{k^d/d!}= \limsup_{k\rightarrow \infty} \frac{{{{\hat{h}^0}}}\left(\overline{{H^0}({{X}},(k\LL+\mathcal E)_{|_{{X}}}) }_{\mathrm{sq}, (\sup,k\phi+\psi) }\right)}{k^d/d!}. \] By Gromov's inequality, there exists a constant $C'$ such that for every $\eps>0$ and $k\in \N$, \[ \|s\|_{(\mu,k\phi+\psi)} \leq \|s\|_{(\sup, k\phi+\psi)} \leq C' e^{k\eps} \|s\|_{(\mu,k\phi+\psi)} \quad \forall \ s\in H^0(Y,k\LL+\mathcal E)\otimes_\Z\C. \] Hence \[ \|s\|_{{\mathrm{sq}}, (\mu,k\phi+\psi)} \leq \|s\|_{{\mathrm{sq}}, (\sup, k\phi+\psi)} \leq C' e^{k\eps} \|s\|_{{\mathrm{sq}}, (\mu,k\phi+\psi)} \quad \forall \ s\in H^0({{X}},(k\LL+\mathcal E)_{|_{{X}}})\otimes_\Z\C. \] So we deduce (i). The proof of (ii) follows from (i). \end{proof} Let $Z$ be an arithmetic variety over $\mathrm{Spec}(\Z)$ of dimension $N+1$. According to \cite{Moriwaki1}, there are three kinds of positivity for a Hermitian line bundle $\overline{{\LL}}=({{\LL}},\|\cdot\|)$ on $Z$.
\begin{itemize} \item \textit{ample} : $\overline{{\LL}}$ is ample if ${{\LL}}$ is ample on ${Z}$, the first Chern form $c_1({\overline{\LL}})$ is positive on ${Z}(\C)$ and, for a sufficiently large integer $k$, $H^0({Z}, k {\overline{\LL}})$ is generated by the set \[ \{s\in H^0({Z},k{\overline{\LL}})\mid \|s\|_{\sup}<1 \}, \] as a $\Z$-module. \item \textit{nef} : ${\overline{\LL}}$ is nef if the first Chern form $c_1( {{\overline{\LL}}})$ is semipositive and $\widehat{\deg}({\overline{\LL}}_{|_\Gamma})\geq 0$ for any $1$-dimensional closed subscheme $\Gamma$ in ${Z}$. \item \textit{big} : ${\overline{\LL}}$ is big if ${\overline{\LL}}_\Q$ is big on ${Z}_\Q$ and there is a positive integer $k$ and a non-zero section $s$ of $H^0({Z},k{\overline{\LL}})$ with $\|s\|_{\sup}<1$. \end{itemize} In the notation of \cite{Moriwaki1} we have \begin{equation}\label{kN1} \hat{h}^1(H^0(Z,k\overline{{\LL}}), \|\cdot\|_{\sup}^{k\overline{{\LL}}} )=o(k^{N+1}), \quad (k\rightarrow \infty), \end{equation} for every ample Hermitian line bundle $\overline{{\LL}}$ on $Z$; see \cite[p. 428]{Moriwaki2}.\\ The following lemma can be regarded as a slight generalization of \eqref{kN1}. \begin{lemma}\label{h1} Let $\mu$ be a smooth positive volume form on $Y$. Let $\LL$ be an equivariant line bundle generated by its global sections on $Y$. Let $\|\cdot\|_{\phi}$ be a continuous Hermitian metric on $\LL$ such that $\|\chi^m\|_{\sup,\phi}\leq 1$ for every $m\in \Delta_\LL\cap M$. Let $X$ be an irreducible hypersurface of $Y$. With the notation of the previous section, we have \[ \widehat h^1(\overline{{H^0}({{X}},{(k\LL+\mathcal E)}_{|_{{X}}}) }_{{\mathrm{sq}},(\sup,k\phi+\psi_\infty)})= o(k^d), \quad (k\rightarrow \infty), \] and \[ \widehat h^1(\overline{{H^0}({{X}},{(k\LL+\mathcal E)}_{|_{{X}}}) }_{{\mathrm{sq}},(\mu,k\phi+\psi_\infty)})= o(k^d), \quad (k\rightarrow \infty).
\] \end{lemma} \begin{remark}\label{remark3.2} Lemma \ref{h1} can be applied to $\overline{\LL}_{\phi_\infty}$, that is, to $\LL$ endowed with its canonical metric. \end{remark} \begin{proof}[Proof of Lemma \ref{h1}] To shorten notation, we write $\|\cdot\|$ and $\|\cdot\|_{{\mathrm{sq}}}$ instead of $\|\cdot\|_{\sup}$ and $\|\cdot\|_{_{{\mathrm{sq}},(\sup,k\phi +\psi_\infty)}}$ respectively. The proof of the second assertion can be deduced from the first one by using the Bernstein-Markov property.\\ Let $k\geq 1$, and let $e_m:=\chi^{m}$ for every $m\in \Delta_{k\LL+\mathcal E}\cap M$.\\ Let $\gamma \in \widehat H^1(\overline{{H^0}({{X}},(k\LL+\mathcal E)_{|_{{X}}}) }_{{\mathrm{sq}}})$. We have \begin{equation}\label{quotient} |\gamma(\pi(e_m))|\leq \|\pi(e_m)\|_{{\mathrm{sq}}}\leq \|e_m\|\leq 1, \ \forall m\in \Delta_{k\LL+\mathcal E}\cap M. \end{equation} Note that \[ \pi^\ast:H^0({{X}},(k\LL+\mathcal E)_{|_{{X}}})^\vee \longrightarrow H^0(Y, k\LL+\mathcal E)^\vee\] is injective. We consider $\pi^\ast (\gamma)\in H^0(Y, k\LL+\mathcal E)^\vee$. There exists a family of integers $(a_m)_{m\in \Delta_{k\LL+\mathcal E}\cap M}$ such that \[ \gamma \circ \pi=\sum_{m \in \Delta_{k\LL+\mathcal E}\cap M}a_m e_m^\vee, \] where $\{e_m^\vee\}_{m\in \Delta_{k\LL+\mathcal E}\cap M}$ denotes the dual basis of $\{e_m\}_{m\in \Delta_{k\LL+\mathcal E}\cap M}$. From \eqref{quotient} we see that \[ a_{m}\in \{-1,0,1\}, \quad \forall \;m\in \Delta_{k\LL+\mathcal E}\cap M. \] Let $f$ be a rational function which defines $X$, and write $f=\sum_{m\in \Delta_{\mathcal E}\cap M} b_m \chi^{m}$. We have \[ (\gamma\circ\pi)(f {\chi^{\mu}})=0\quad \forall \mu\in \Delta_{k\LL}\cap M. \] So \[ \begin{split} 0=&\sum_{\nu\in \Delta_{k\LL+\mathcal E}\cap M} a_\nu e_\nu^{\vee}(f{\chi^{\mu}})\\ =&\sum_{\nu\in \Delta_{k\LL+\mathcal E}\cap M} a_\nu \sum_{m\in \Delta_{\mathcal E}\cap M} b_m e_\nu^\vee(\chi^{m}\chi^{\mu})\\ =& \sum_{\nu\in \Delta_{k\LL+\mathcal E}\cap M} a_\nu \sum_{m\in \Delta_{\mathcal E}\cap M} b_m e_\nu^\vee(e_{m+\mu}).
\end{split} \] Hence \[ 0=\sum_{\nu\in \Delta_{k\LL+\mathcal E}\cap M} a_\nu b_{\nu-\mu},\quad \forall \mu \in \Delta_{k\LL}\cap M, \] where we have made the convention that $b_{\nu-\mu}=0$ whenever $\nu-\mu\notin \Delta_{\mathcal E}\cap M$.\\ Let us consider the matrix \[ C_k= (c_{\mu,m})_{\substack{\mu \in \Delta_{k\LL}\cap M,\\ m\in \Delta_{k\LL+\mathcal E}\cap M }}\] where $c_{\mu,m}=b_{m-\mu}$ for any $\mu\in \Delta_{k\LL}\cap M$ and $ m\in \Delta_{k\LL+\mathcal E}\cap M$. So $C_k$ is an $h^0(Y,k\LL)\times h^0(Y,k\LL+\mathcal E)$ matrix, whose $\mu$-th row is given by the coefficients of $f{\chi^{\mu}}$.\\ We claim that the rank of $C_k$ is $h^0(Y,k\LL)$. Indeed, let $y=(y_{m})_{m\in \Delta_{k\LL}\cap M} \in \R^{h^0(Y,k\LL)}$. By basic linear algebra, we observe that $C_k^t y=0$ (where $C_k^t$ is the transpose of $C_k$) if and only if $f \sum_{m\in \Delta_{k\LL}\cap M} y_m \chi^m=0$.\\ It follows that \[ \dim \ker C_k=h^0(Y,k\LL+\mathcal E)- h^0(Y,k\LL)=o(k^{d}), \] as $k\rightarrow \infty$. \\ Note that $(a_m)_{m\in \Delta_{k\LL+\mathcal E}\cap M}\in \ker C_k$ and recall that $a_m\in\{-1,0,1\}$, so we can conclude that \[ \# \widehat H^1(\overline{{H^0}({{X}},(k\LL+\mathcal E)_{|_{{X}}}) }_{{\mathrm{sq}}})\leq 3^{o(k^{d})}, \] and hence $\widehat h^1(\overline{{H^0}({{X}},(k\LL+\mathcal E)_{|_{{X}}}) }_{{\mathrm{sq}}})=o(k^{d})$. \end{proof} \section{Canonical arithmetic volume of hypersurfaces}\label{section 4} Assume that $\LL$ is generated by its global sections on $Y$. Let $(\phi_p)_{p=1,2,\ldots}$ be the sequence of continuous weights on $\LL$ given by \[ \|s(x)\|_{\phi_p}=\frac{|s(x)|}{\left(\sum_{v\in \Delta_\LL\cap M} |\chi^v(x)|^p \right)^{\frac{1}{p}}},\quad p=1,2,\ldots \] for every local section $s$ of $\LL$. It is well known that the sequence $(\|\cdot\|_{\phi_p})_{p=1,2,\ldots}$ converges uniformly to $\|\cdot\|_{\phi_\infty}$.\\ From now on, we assume moreover that the probability measure $\mu$ is invariant under the action of the compact torus of $Y(\C)$.
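For instance, the convergence of the weights $\phi_p$ can be seen explicitly on $Y=\p^1$ with $\LL=\mathcal O(1)$: there $\Delta_\LL\cap M=\{0,1\}$ and, for the local section $s=\chi^0$,
\[
\|s(x)\|_{\phi_p}=\frac{1}{\left(1+|\chi^1(x)|^p\right)^{\frac{1}{p}}}\ \longrightarrow\ \frac{1}{\max(1,|\chi^1(x)|)}=\|s(x)\|_{\phi_\infty}\quad (p\rightarrow \infty),
\]
and the convergence is uniform since $\max(1,t)\leq (1+t^p)^{1/p}\leq 2^{1/p}\max(1,t)$ for every $t\geq 0$.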
\begin{proposition}\label{limit} Let $\LL$ be an equivariant line bundle generated by its global sections. Under the above notation and assumptions, we have \begin{equation}\label{1706} \lim_{p\rightarrow \infty} \limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},{k\LL}_{|_{{X}}}) }_{{\mathrm{sq}},(\mu,k{{\phi_p}})} )}{k^d/d!} =\limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},{k\LL}_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k{{\phi_\infty}})} )}{k^d/d!}. \end{equation} \end{proposition} \begin{proof} There exists an equivariant map \[ \psi_\LL: Y\lra \p^{r_\LL},\ x\mapsto \psi_\LL(x)=(\chi^m(x))_{m\in \Delta_\LL\cap M} \] where $r_\LL:=\# (\Delta_\LL\cap M)-1$. We have \[ \|\cdot\|_{\overline{\LL}_{\phi_\infty}}=\psi_\LL^\ast \|\cdot\|_{\overline{\mathcal O(1)}_{\phi_\infty}}. \] Let $\delta\in [0,1]$. Let $v_0\in \Delta_\LL\cap M$. For every $v\in \Delta_\LL\cap M$, we let \[ E_{v,\delta}:=\left\{x\in Y(\C): \chi^{v_0}(x)\neq 0\; ,\; \delta \tfrac{|\chi^v(x)|}{|\chi^{v_0}(x)|}\leq \tfrac{|\chi^{v'}(x)|}{|\chi^{v_0}(x)|}\leq \tfrac{|\chi^v(x)|}{|\chi^{v_0}(x)|}\;\text{for every} \; v' \in (\Delta_\LL\cap M) \right\}. \] It is clear that \[ Y(\C)\setminus \mathrm{div}(\chi^{v_0})=\bigcup_{v\in \Delta_\LL\cap M} E_{v,0}. \] For $0<\delta<1$, and for every $p=1,2,\ldots$, $k=1,2,\ldots$, and $m\in k \Delta_\LL\cap M$, \[ \begin{split} ( \chi^m,\chi^m)_{(\mu,k \phi_p)} \geq &\sum_{v\in \Delta_\LL\cap M} \int_{E_{v,\delta}} \frac{|\chi^m(x)|^2}{ (\sum_{v'\in \Delta_\LL\cap M }|\chi^{v'}(x)|^p)^{\frac{2k}{p}}}\mu\geq \frac{\delta^{2k}}{(r_\LL+1)^{\frac{2k}{p}}} I_\delta, \end{split} \] where we have put $I_\delta:=\sum_{v\in \Delta_\LL\cap M} \int_{E_{v,\delta}} \mu.$ That is \begin{equation}\label{1011} ( \chi^m,\chi^m)_{(\mu, k {{\phi_p}}) }\geq \frac{\delta^{2k}}{(r_\LL+1)^{\frac{2k}{p}}} I_\delta.
\end{equation} On one hand, by noticing that the metrics are invariant under the action of the compact group $\mathcal{S}$, it is easy to check that \eqref{1011} gives the following \begin{equation}\label{inequality1} ( s,s)_{(\mu, k {{\phi_p}}) }\geq \frac{\delta^{2k} I_\delta}{(r_\LL+1)^{\frac{2k}{p}}} ( s,s)_{(\mu_\infty, k {{\phi_\infty}}) }, \quad \forall\ s\in H^0(Y,k\LL)\otimes_\Z \C. \end{equation} On the other hand, we have \begin{equation}\label{inequality2} (s, s)_{(\mu, k\phi_p)}\leq (s, s)_{(\mu_\infty, k\phi_\infty)}, \quad \forall\ s\in H^0(Y,k\LL)\otimes_\Z \C. \end{equation} In order to see this, let $s=\sum_{m \in k\Delta_\LL\cap M} c_m \chi^m$ be an element of $H^0(Y,k\LL)\otimes_\Z \C$. By the invariance of the metrics, we obtain that \[ \begin{split} (s, s)_{(\mu,k\phi_p)}=&\sum_{m\in k\Delta_\LL\cap M} |c_m|^2 \int_{Y(\C)} \|\chi^m\|_{k\phi_p}^2\mu\\ &\leq \sum_{m\in k\Delta_\LL\cap M} |c_m|^2\\ =& (s, s)_{(\mu_\infty,k\phi_\infty)}, \end{split} \] where we have used the fact that $\|\chi^m\|_{k\phi_p }\leq\|\chi^m\|_{k\phi_\infty }\leq 1$.\\ So, we have proved the following. \[ \frac{\delta^{2k} I_\delta}{(r_\LL+1)^{\frac{2k}{p}}} ( s,s)_{(\mu_\infty, k {{\phi_\infty}}) }\leq (s, s)_{(\mu,k\phi_p)}\leq (s, s)_{(\mu_\infty,k\phi_\infty)}\quad\forall\ s\in H^0(Y,k\LL)\otimes_\Z \C. 
\] That is \[ \frac{\delta^{k} I_\delta^{\frac{1}{2}}}{(r_\LL+1)^{\frac{k}{p}}} \|\cdot \|_{{\mathrm{sq}}, (\mu_\infty, k {{\phi_\infty}})}\leq \|\cdot \|_{{\mathrm{sq}}, (\mu, k {{\phi_p}})}\leq \|\cdot \|_{{\mathrm{sq}}, (\mu_\infty, k {{\phi_\infty}})} \quad \forall \ k\in \N.\] From these inequalities, we infer that \[ \begin{split} \limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}}, k\LL_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k{{\phi_\infty}})})}{k^d/d!} &\leq \limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},k\LL_{|_{{X}}}) }_{{\mathrm{sq}},(\mu,k{{\phi_p}})})}{k^d/d!} \\ &\leq \limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},k\LL_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k{{\phi_\infty}})})}{k^d/d!}\\ &-\log \frac{\delta^2}{(r_\LL+1)^{\frac{2}{p}}}. \end{split} \] By letting $\delta\rightarrow 1^-$, we obtain \begin{equation}\label{1111} \left| \limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},k\LL_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k{{\phi_\infty}})})}{k^d/d!} -\limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},k\LL_{|_{{X}}}) }_{{\mathrm{sq}},(\mu,k{{\phi_p}})})}{k^d/d!}\right| \leq \tfrac{2}{p}\log (r_\LL+1). \end{equation} The proposition follows from \eqref{1111} by letting $p\rightarrow \infty$. \end{proof} \begin{lemma}\label{Gamma} \[ \lim_{k\rightarrow \infty} \frac{1}{k^d} \left(\hat \chi(\overline{\Z}^{\#(\Delta_{k\LL+\mathcal E}\cap M)})- \hat \chi(\overline{\Z}^{\# (k\Delta_\LL\cap M)})\right)=0. \] \end{lemma} \begin{proof} This is a consequence of Stirling's asymptotic formula. \end{proof} \begin{proposition}\label{4.3} \begin{equation}\label{4.3-1} \limsup_{k\rightarrow \infty} \frac{{{{\hat{h}^0}}} \left(\overline{{H^0}({{X}},(k\LL+\mathcal E)_{|_{{X}}}) }_{{\mathrm{sq}}, (\mu_\infty, k\phi_\infty+\psi_\infty)}\right) }{k^d/d!}=\limsup_{k\rightarrow \infty} \frac{{{{\hat{h}^0}}} \left(\overline{{H^0}({{X}},k\LL_{|_{{X}}}) }_{{\mathrm{sq}}, (\mu_\infty, k\phi_\infty)}\right) }{k^d/d!}.
\end{equation} \end{proposition} \begin{proof} Let $v$ be a global section of $\mathcal E$ which does not vanish on the compact torus $\mathcal S$ of $Y$. We denote by $V$ the hypersurface defined by $v$. We can show that the following sequence is exact. \[ 0\longrightarrow {H^0({{X}}, k\LL_{|_X})}\overset{i_V}\longrightarrow {H^0({{X}}, {(k\LL+\mathcal E)_{|_X}})} \overset{\pi_V}\longrightarrow {H^0({{V}},(k\LL+\mathcal E)_{|_{{V}}})} \longrightarrow 0, \] where $i_V$ is the multiplication map by $v$ and $\pi_V$ is the natural projection map. Let us consider the following admissible metrized exact sequence \begin{equation}\label{XV} \begin{split} 0\longrightarrow \overline{H^0({{X}}, k\LL_{|_X})}_{\mathrm{sq}, (\mu_\infty,k\phi_\infty, v)}\overset{i_V}\longrightarrow & \overline{H^0({{X}}, {(k\LL+\mathcal E)_{|_X}})}_{\mathrm{sq}, (\mu_\infty,k\phi_\infty+\psi_\infty)} \\ & \overset{\pi_V} \longrightarrow \overline{H^0({{V}},(k\LL+\mathcal E)_{|_{{V}}})}_{\mathrm{sq}, (\mu_\infty,k\phi_\infty+\psi_\infty)} \longrightarrow 0. \end{split} \end{equation} On one hand, there exists a positive constant $c$ such that for every $t\in H^0(Y, k\LL)\otimes_\Z\C$ we have \[ \|vt\|_{\mu_\infty, k\phi_\infty+\psi_\infty}^2=\int_{\mathcal S} \|v(x)\|_{\psi_\infty}^2 \| t(x)\|^2_{k\phi_\infty}\mu_\infty\geq c \|t\|_{\mu_\infty,k\phi_\infty}^2. \] On the other hand, \[ \|vt\|_{\mu_\infty, k\phi_\infty+\psi_\infty}^2 \leq \|v\|_{\sup,\psi_\infty}^2 \|t\|_{\mu_\infty,k\phi_\infty}^2, \] for every $t\in H^0(Y, k\LL)\otimes_\Z\C$. \\ This leads to the following \[ c \|t\|_{\mathrm{sq},(\mu_\infty,k\phi_\infty)}^2 \leq \|vt\|_{\mathrm{sq},(\mu_\infty, k\phi_\infty+\psi_\infty)}^2 \leq \|v\|_{\sup,\psi_\infty}^2 \|t\|_{\mathrm{sq},(\mu_\infty,k\phi_\infty)}^2\quad \forall t \in H^0({{X}}, k\LL_{|_X})\otimes_\Z\C.
\] An easy adaptation of the proof of \cite[Theorem 4.1]{HajliTAMS} can be used to deduce that \[ \limsup_{k\rightarrow \infty}\frac{ \hat{h}^0\left(\overline{{H^0}({{X}},k\LL_{|_X} )}_{\mathrm{sq}, (\mu_\infty,k\phi_\infty, v)} \right)}{k^d/d!}= \limsup_{k\rightarrow \infty}\frac{ \hat{h}^0\left(\overline{{H^0}({{X}},k\LL_{|_X} )}_{\mathrm{sq}, (\mu_\infty,k\phi_\infty)} \right)}{k^d/d!}. \] From \eqref{XV}, using \cite[(3.3.2), (3.3.3), p. 59]{BostTheta} and \cite[Theorem 4.1]{HajliTAMS}, we infer that \[ \limsup_{k\rightarrow \infty} \frac{{{{\hat{h}^0}}} \left(\overline{{H^0}({{X}},(k\LL+\mathcal E)_{|_{{X}}}) }_{{\mathrm{sq}}, (\mu_\infty, k\phi_\infty+\psi_\infty)}\right) }{k^d/d!}= \limsup_{k\rightarrow \infty}\frac{ \hat{h}^0\left(\overline{{H^0}({{X}},k\LL_{|_X} )}_{\mathrm{sq}, (\mu_\infty,k\phi_\infty, v)} \right)}{k^d/d!}. \] This concludes the proof of the proposition. \end{proof} \begin{theorem}\label{maintheorem} We have \begin{enumerate}[label=\roman*)] \item \[ \lim_{k\rightarrow \infty} \frac{{{{\hat{h}^0}}} \left(\overline{{H^0}({{X}},(k\LL+\mathcal E)_{|_{{X}}}) }_{{\mathrm{sq}}, (\mu_\infty, k\phi_\infty+\psi_\infty)}\right) }{k^d/d!}= h_{ \overline{ \LL}_{\phi_\infty}}({{X}}), \] \item \[ \widehat{\mathrm{vol}}_X({\overline{ \LL}_{\phi_\infty}})=h_{ \overline{ \LL}_{\phi_\infty}}({{X}}). \] \end{enumerate} \end{theorem} \begin{remark} Chen \cite{Chen1} proved that the limsup in the definition of the arithmetic volume is in fact a limit. \end{remark} \begin{proof} From Theorem \ref{thm4.3}, we get for every $p=1,2,\ldots$ \[ \begin{split} \limsup_{k\rightarrow \infty} \frac{{ \hat h^0 }(\overline{{H^0}({{X}},(k\LL+ \mathcal{E})_{|_{{X}}}) }_{{\mathrm{sq}}, (\mu, k{{\phi_p}}) }) }{k^d/d!}=&\limsup_{k\rightarrow \infty} \frac{{\hat h^0}(\overline{{H^0}({{X}},(k\LL+ \mathcal{E})_{|_{{X}}}) }_{{\mathrm{sq}}, (\sup, k{{\phi_p}}) }) }{k^d/d!}\\ =&\widehat{\mathrm{vol}}_X(\overline{\LL}_{\phi_p}).
\end{split} \] We have \[ \begin{split} \lim_{p\ra \infty}\widehat{\mathrm{vol}}_X(\overline{\LL}_{\phi_p})=& \lim_{p\rightarrow \infty} \limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},{k\LL}_{|_{{X}}}) }_{{\mathrm{sq}},(\mu,k{{\phi_p}})} )}{k^d/d!} \quad \text{(by \eqref{Th2.2ii})}\\ =&\limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},{k\LL}_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k{{\phi_\infty}})} )}{k^d/d!} \quad \text{(by \eqref{1706})}\\ =&\limsup_{k\rightarrow \infty} \frac{{{{\hat{h}^0}}} \left(\overline{{H^0}({{X}},(k\LL+\mathcal E)_{|_{{X}}}) }_{{\mathrm{sq}}, (\mu_\infty, k\phi_\infty+\psi_\infty)}\right) }{k^d/d!}\quad\text{(by \eqref{4.3-1})}\\ \end{split} \] Hence \begin{equation}\label{1606} \limsup_{k\rightarrow \infty}\frac{{\hat h^0}\Bigl(\overline{{H^0}({{X}}, (k\LL+\mathcal E)_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k{{\phi_\infty}+\psi_\infty })}\Bigr) }{k^d/d!}=\widehat{\mathrm{vol}}_{{X}}( \overline{\LL}_{\phi_\infty}), \end{equation} where we have used that $\lim_{p\rightarrow \infty} \widehat{\mathrm{vol}}_{{X}}( \overline{\LL}_{\phi_p})=\widehat{\mathrm{vol}}_{{X}}( \overline{\LL}_{\phi_\infty}).$\\ Note that \begin{equation}\label{chiI} \hat{\chi}(\overline{{H^0}({{Y}},k\LL)}_{(\mu_\infty,k\phi_\infty, {}{s})} )=\det \left(\left<{{s}} {\chi^m},{{s}} \chi^{m'} \right>_{(\mu_\infty,k\phi_\infty)} \right)_{m,m'\in {}{k\Delta_\LL\cap M}}, \end{equation} (we recall that $\left<\cdot,\cdot\right>_{(\mu_\infty,k\phi_\infty)}$ is the scalar product associated with $\|\cdot\|_{(\mu_\infty,k \phi_\infty)}$).\\ We have \begin{equation}\label{deninger} \lim_{k\rightarrow \infty } \log \left( \det \Bigl( \int_{\mathbb{S}_N} \chi^{m} \overline{\chi^{m'}} |s|^2 {d\mu_{\infty}} \Bigr)_{m,m'\in (k\Delta_\LL)\cap M}\right)^{1/|k\Delta_\LL\cap M|} =\mathrm{vol}(\LL_\Q) \int_{\mathcal{S}}\log |s|^2 {d\mu_{\infty}}, \end{equation} see \cite[Theorem 4, p.49]{Deninger_Mahler}. 
\\ We obtain that \[ \lim_{k\rightarrow \infty}\frac{ \hat{\chi}(\overline{{H^0}({{Y}},k\LL )}_{(\mu_\infty,k\phi_\infty, s)} )}{k^d/d!}=\mathrm{vol}(\LL_\Q) \int_{\mathcal{S}}\log |s|^2 {d\mu_{\infty}}. \] Applying \cite[p. 81]{Ran} to \eqref{exact}, we obtain that \[ \begin{split} \widehat{\deg}(\overline{{H^0}({{X}},(k\LL+ \mathcal{E})_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k\phi_\infty+\psi_\infty)})=& \widehat{\deg}(\overline{{H^0}({{Y}},k\LL+ \mathcal{E})}_{(\mu_\infty,k\phi_\infty+\psi_\infty)})\\ &-\widehat{\deg}(\overline{{H^0}({{Y}}, k\LL)}_{(\mu_\infty,k\phi_\infty, s)}), \end{split} \] holds for every $k\in\N$.\\ An easy computation shows that \[ \hat{\chi}\left( \overline{{H^0}\left({{Y}},(k\LL+ \mathcal{E}) \right)}_{(\mu_\infty,k\phi_\infty+\psi_\infty)} \right)=0\quad \forall \ k\in \N. \] Hence, for all $k\in \N$, \[ \begin{split} \widehat{\deg}(\overline{{H^0}({{X}},(k\LL+ \mathcal{E})_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k\phi_\infty+\psi_\infty)})=& \hat{\chi}(\overline{{H^0}({{Y}}, k\LL+ \mathcal E)} _{(\mu_\infty,k\phi_\infty+\psi_\infty)})-\hat{\chi}(\overline{{H^0}({{Y}},k\LL)}_{(\mu_\infty,k\phi_\infty,s)})\\ &-\hat{\chi}(\overline{\Z}^{h^0(Y,(k\LL+ \mathcal{E}))})+\hat{\chi}(\overline{\Z}^{h^0(Y, k\LL)})\\ =& \det \left(\left<s {\chi^m},s \chi^{m'} \right> _{(\mu_\infty,k\phi_\infty)} \right)_{m,m'\in k\Delta_\LL\cap M}\\ &-\hat \chi(\overline{\Z}^{\#(\Delta_{k\LL+\mathcal E}\cap M)}) + \hat \chi(\overline{\Z}^{\# (k\Delta_\LL\cap M)}). \end{split} \] Since $\dim {H^0}\left({{Y}}, k\LL+\mathcal E \right)\otimes_\Z\Q=O(k^d)$ as $k\rightarrow \infty$, we can use Lemma \ref{Gamma} to conclude that \begin{equation}\label{mahler06} \lim_{k\rightarrow \infty}\frac{\widehat{\deg}(\overline{{H^0}({{X}},(k\LL+ \mathcal{E})_{|_{{X}}}) }_{ \mathrm{sq}, (\mu_\infty,k\phi_\infty+\psi_\infty)} )}{k^d/d!}=\mathrm{vol}(\LL_\Q)\int_{\mathcal{S}} \log |s|^2 {d\mu_{\infty}}.
\end{equation} It is clear that the metric $\|\cdot\|_{\phi_\infty}$ satisfies the conditions of Lemma \ref{h1}. Using \eqref{h0h1}, we get \[ \lim_{k\rightarrow \infty}\frac{\widehat{\deg}(\overline{{H^0}({{X}},(k\LL+ \mathcal{E})_{|_{{X}}}) }_{ \mathrm{sq}, (\mu_\infty,k\phi_\infty+\psi_\infty)} )}{k^d/d!}= \limsup_{k\rightarrow \infty}\frac{{\hat h^0}(\overline{{H^0}({{X}},(k\LL+ \mathcal{E})_{|_{{X}}}) }_{{\mathrm{sq}},(\mu_\infty,k{{\phi_\infty+\psi_\infty}})})}{k^d/d!}. \] So, by \eqref{1606} and \eqref{mahler06}, \[ \mathrm{vol}(\LL_\Q)\int_{\mathcal{S}} \log |s|^2 {d\mu_{\infty}}=\widehat{\mathrm{vol}}_{{X}}( \overline{\LL}_{\phi_\infty}). \] We conclude that \[ \widehat{\mathrm{vol}}_{{X}}( \overline{\LL}_{\phi_\infty})=h_{\overline{\LL}_{\phi_\infty}}(X), \] where we have used the fact that \[h_{\overline{\LL}_{\phi_\infty}}(X)=\mathrm{vol}(\LL_\Q)\int_{\mathcal{S}} \log |s|^2 {d\mu_{\infty}},\] see \cite[Proposition 7.2.1]{Maillot}. \end{proof} \section{A generalized Hodge index theorem on hypersurfaces in toric varieties} We introduced the theory of arithmetic theta invariants associated with Hermitian line bundles on arithmetic varieties in \cite[Section 4]{HajliTAMS}. One of the main results of \cite{HajliTAMS} is a generalized Hodge index theorem on toric varieties. In this section we show that the methods of \cite{HajliTAMS} can be extended to prove a generalized Hodge index theorem on hypersurfaces in toric varieties. \begin{theorem}\label{main} Let $Z$ be an arithmetic variety of dimension $n+1$ and with smooth generic fibre. Let $(\LL, \|\cdot\|_\phi)$ and $(\LL, \|\cdot\|_\psi)$ be two $\mathcal{C}^\infty$ semipositive Hermitian line bundles on $Z$. We assume $\psi\leq \phi$ and that $(\LL, \|\cdot\|_\psi)$ is generated by small sections. We have \[ \widehat{\mathrm{vol}}(\LL, \|\cdot\|_\phi)-\widehat{\deg}(\hat c_1( \LL, \|\cdot\|_{\phi})^{n+1}) = \widehat{\mathrm{vol}}(\LL, \|\cdot\|_\psi)-\widehat{\deg}(\hat c_1(\LL, \|\cdot\|_{\psi })^{n+1}).
\] \end{theorem} \begin{proof} See \cite[Theorem 4.11]{HajliTAMS}. \end{proof} \begin{theorem}\label{ThmVolDeg} Let $Y$ be a smooth toric variety over $\Z$ of dimension $d+1$. Let $\LL$ be an equivariant line bundle on $Y$. We assume that it admits a semipositive weight $\phi$ and that $\overline{\LL}_\phi$ is generated by small sections. Let $X$ be a hypersurface in $Y$. Then \[ \widehat{\mathrm{vol}}_X({\overline{ \LL}_{\phi}})=h_{ \overline{ \LL}_{\phi}}({{X}}). \] \end{theorem} \begin{proof} The arithmetic volume function is continuous with respect to the variation of the metric, see \cite[p. 513]{Moriwaki}. Thus we may assume that $\overline{\LL}_\phi$ is generated by strictly small sections. The proof can be obtained by the same method as in the proof of \cite[Theorem 5.4]{HajliTAMS}. Indeed, there exists $0<\alpha\leq 1$ such that \[ \alpha \|\cdot\|_\phi\leq \|\cdot\|_{\phi_\infty}. \] So by Theorem \ref{main} and a continuity argument, we can show that \[ \widehat{\mathrm{vol}}_X(\LL, \alpha\|\cdot\|_\phi)-\widehat{\deg}(\hat c_1(\LL_{|_X}, \alpha \|\cdot\|_{\phi })^{d}) = \widehat{\mathrm{vol}}_X(\LL, \|\cdot\|_{\phi_\infty})-\widehat{\deg}(\hat c_1(\LL_{|_X}, \|\cdot\|_{\phi_\infty})^{d}), \] and \[ \widehat{\mathrm{vol}}_X(\LL, \alpha\|\cdot\|_\phi)-\widehat{\deg}(\hat c_1(\LL_{|_X}, \alpha \|\cdot\|_{\phi })^{d})= \widehat{\mathrm{vol}}_X(\LL, \|\cdot\|_\phi)-\widehat{\deg}(\hat c_1(\LL_{|_X}, \|\cdot\|_{\phi })^{d}). \] Since \[ \widehat{\mathrm{vol}}_X(\overline{\LL}_{\phi_\infty})=h_{\overline{\LL}_{\phi_\infty}}(X), \] (see (ii) of Theorem \ref{maintheorem}) we deduce that \[ \widehat{\mathrm{vol}}_X(\LL, \|\cdot\|_\phi)=h_{\overline{\LL}_\phi}(X). \] \end{proof} \begin{theorem}[A generalized Hodge index theorem]\label{Hodge} Let $Y$ be a smooth toric variety over $\Z$ of dimension $d+1$. Let $\LL$ be an equivariant line bundle on $Y$. Let $\phi$ be a semipositive weight on $\LL$. Let $X$ be a hypersurface in $Y$.
We have \[ \widehat{\mathrm{vol}}_X(\overline{\LL}_\phi)\geq h_{\overline{\LL}_\phi}(X). \] \end{theorem} \begin{proof} The proof is similar to the proof of \cite[Theorem 5.5]{HajliTAMS}. The $\Z$-algebra $\bigoplus_{k\geq 0} H^0(X,{k\LL}_{|_X})$ is generated by the $\chi^m$ with $m\in k\Delta_\LL\cap M$, $k\geq 0$.\\ Let $\alpha$ be a positive real number such that \[ 0<\alpha<1\quad\text{and}\quad \alpha \|\chi^m\|_{\sup,\phi_{|_X}}<1\quad\text{for}\quad m\in k\Delta_\LL\cap M.\] It follows that $(\LL,\alpha \|\cdot\|_\phi)$ is ample.\\ From Theorem \ref{ThmVolDeg}, we get \[ \widehat{\mathrm{vol}}_X(\LL, \alpha\|\cdot\|_\phi)=\widehat{\deg}( \hat c_1(\LL, \alpha\|\cdot\|_{\phi_{|_X}} )^{d} ). \] By \cite[Proposition 4.6 and Theorem 4.7]{HajliTAMS} we see that \[ \widehat{\mathrm{vol}}_X(\LL, \|\cdot\|_\phi) -\widehat{\deg}(\hat c_1(\LL, \|\cdot\|_{\phi_{|_X}})^{d})\geq \widehat{\mathrm{vol}}_X(\LL, \alpha\|\cdot\|_{\phi})-\widehat{\deg}(\hat c_1(\LL, \alpha\|\cdot\|_{\phi_{|_X}})^{d}).\] Therefore \[ \widehat{\mathrm{vol}}_X(\LL, \|\cdot\|_\phi)\geq \widehat{\deg}(\hat c_1(\LL, \|\cdot\|_{\phi_{|_X}})^{d}). \] \end{proof} \bibliographystyle{plain} \bibliography{biblio} \end{document}
2206.14193v2
http://arxiv.org/abs/2206.14193v2
Computation of the least primitive root
\documentclass[12pt]{amsart} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{graphicx} \newtheorem{theorem}{Theorem} \def\Z{\mathbb{Z}} \def\eps{\varepsilon} \begin{document} \title{Computation of the Least Primitive Root} \author{Kevin J.~McGown} \address{Department of Mathematics and Statistics, California State University at Chico} \email{[email protected]} \author{Jonathan P.~Sorenson} \address{Computer Science and Software Engineering Department, Butler University, Indianapolis, IN 46208 USA} \email{[email protected]} \date{\today} \begin{abstract} Let $g(p)$ denote the least primitive root modulo $p$, and $h(p)$ the least primitive root modulo $p^2$. We computed $g(p)$ and $h(p)$ for all primes $p\le 10^{16}$. As a consequence we are able to prove that $g(p)<p^{5/8}$ for all primes $p>3$ and that $h(p)<p^{2/3}$ for all primes $p$. We also present some additional results. \end{abstract} \maketitle \section{Introduction and statement of results}\label{S:intro} Let $g(p)$ denote the least primitive root modulo $p$, and $h(p)$ denote the least primitive root modulo $p^2$. We calculated $g(p)$ for all primes $p\leq 10^{16}$ and checked the condition $g(p)=h(p)$. This allows us to extend several known results. Our computation took a total of roughly 645 days, wall time, on a Linux cluster with 192 cores, or roughly 3 million CPU hours. We stored the values of $g(p)$ whenever $g(p)\geq 100$.
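The test underlying such a computation is standard: $g$ is a primitive root modulo $p$ if and only if $g^{(p-1)/q}\not\equiv 1 \pmod{p}$ for every prime $q$ dividing $p-1$. A minimal C++ sketch of computing $g(p)$ this way (our own illustrative code, not the optimized implementation described in Section~\ref{S:computation}):

```cpp
#include <cstdint>
#include <vector>

// Modular exponentiation by repeated squaring, with 128-bit
// intermediates (a GCC/Clang extension) so moduli up to 2^63 are safe.
uint64_t pow_mod(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t r = 1 % m;
    base %= m;
    while (exp > 0) {
        if (exp & 1) r = (uint64_t)((__uint128_t)r * base % m);
        base = (uint64_t)((__uint128_t)base * base % m);
        exp >>= 1;
    }
    return r;
}

// Distinct prime factors of n by trial division (adequate for a demo;
// the actual computation reads the factorization of p-1 off a sieve).
std::vector<uint64_t> distinct_prime_factors(uint64_t n) {
    std::vector<uint64_t> f;
    for (uint64_t d = 2; d * d <= n; ++d)
        if (n % d == 0) { f.push_back(d); while (n % d == 0) n /= d; }
    if (n > 1) f.push_back(n);
    return f;
}

// g(p): least g >= 2 with g^((p-1)/q) != 1 (mod p) for all primes q | p-1.
uint64_t least_primitive_root(uint64_t p) {
    std::vector<uint64_t> qs = distinct_prime_factors(p - 1);
    for (uint64_t g = 2;; ++g) {
        bool ok = true;
        for (uint64_t q : qs)
            if (pow_mod(g, (p - 1) / q, p) == 1) { ok = false; break; }
        if (ok) return g;
    }
}
```

The record table below can be spot-checked this way; for example, the sketch gives $g(191)=19$ and $g(409)=21$.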
Here are the record values of $g(p)$: \begin{tabular}{c|c} $p$ & $g(p)$\\ \hline $3$&$2$\\ $7$&$3$\\ $23$&$5$\\ $41$&$6$\\ $71$&$7$\\ $191$&$19$\\ $409$&$21$\\ $2161$&$23$\\ $5881$&$31$\\ $36721$&$37$\\ $55441$&$38$\\ $71761$&$44$\\ \end{tabular}\hspace{4ex} \begin{tabular}{c|c} $p$ & $g(p)$\\ \hline $110881$&$69$\\ $760321$&$73$\\ $5109721$&$94$\\ $17551561$&$97$\\ $29418841$&$101$\\ $33358081$&$107$\\ $45024841$&$111$\\ $90441961$&$113$\\ $184254841$&$127$\\ $324013369$&$137$\\ $831143041$&$151$\\ $1685283601$&$164$\\ \end{tabular}\hspace{4ex} \begin{tabular}{c|c} $p$ & $g(p)$\\ \hline $6064561441$&$179$\\ $7111268641$&$194$\\ $9470788801$&$197$\\ $28725635761$&$227$\\ $108709927561$&$229$\\ $386681163961$&$263$\\ $1990614824641$&$281$\\ $44384069747161$&$293$\\ $89637484042681$&$335$\\ $358973066123281$&$347$\\ $2069304073407481$&$359$\\ $4986561061454281$&$401$\\ $6525032504501281$&$417$\\ \end{tabular} In order to show that the principal congruence subgroup $\Gamma(p)$ can be generated by $[1, p; 0, 1]$ and $p(p-1)(p+1)/12$ hyperbolic elements, and construct such generators, Grosswald employed the hypothesis that $g(p)<\sqrt{p}-2$ for all primes $p>409$, which is now referred to as Grosswald's conjecture (see~\cite{MR49937}). We remark that this conjecture is clearly true for $p$ large enough since, by the Burgess inequality, one knows that $g(p)\ll p^{\frac{1}{4}+\eps}$; however, explicit results where the upper bound is anywhere close to $p^{\frac{1}{4}}$ are difficult to come by, except when $p$ is ridiculously large. To illustrate this, we remark that in a subsequent paper, Grosswald showed that $g(p)\leq p^{0.499}$ when $p>e^{e^{24}}\approx 10^{10^{10}}$ (see~\cite{MR636957}). Nonetheless, in~\cite{MR3460815} it was shown that Grosswald's conjecture holds except possibly when $p\in(2.5\cdot 10^{15}, 3.4\cdot 10^{71})$.
In~\cite{MR4055934} the bound of $3.4\cdot 10^{71}$ was reduced to $10^{56}$, and in~\cite{MR3584569} the conjecture was proved completely under the Generalized Riemann Hypothesis. Together with these results, the list of records given in the table above allows one to easily verify that: \begin{theorem} Grosswald's conjecture holds except possibly when $p\in(10^{16},10^{56})$. \end{theorem} Previously it was verified in~\cite{MR2476579} that $g(p)=h(p)$ for all primes $p\leq 10^{12}$ except for $p=40487$ and $p=6692367337$. In~\cite{MR4125902} it was pointed out that there are likely infinitely many such primes $p$ for the simple reason that a lift of a generator of $(\Z/p\Z)^\times$ to $(\Z/p^2\Z)^\times$ has a $(p-1)/p$ chance of being a generator. However, we have verified that there are no additional examples where $g(p)\neq h(p)$ with $p\leq 10^{16}$. \begin{theorem} The only primes $p\leq 10^{16}$ satisfying $g(p)\neq h(p)$ are $40487$ and $6692367337$. \end{theorem} Perhaps this is not surprising. If one believed the heuristic in the previous paragraph and $x_n$ is the $n$-th odd prime $p$ where $g(p)\neq h(p)$, then one might expect the chances that $g(p)=h(p)$ for all primes $p$ with $x_n<p\leq x$ to be roughly $\prod_{x_n<p\leq x}(1-1/p)$; denoting $x_1=40487$, $x_2=6692367337$, this would lead one to expect that $x_3$ might lie between $10^{19}$ and $10^{20}$. As was the motivation for this work, our computation complements the results in~\cite{ChenRoots,MR4055934,MR4125902}, leading to the following: \begin{theorem}\label{T:3} One has $g(p)<p^{5/8}$ for all primes $p>3$. \end{theorem} \begin{theorem}\label{T:4} One has $h(p)<p^{2/3}$ for all primes $p$. \end{theorem} Previously, Pretorius \cite{pretorius2018smallest} showed that $g(p)<p^{0.68}$ for all primes $p$, and Chen \cite{ChenRoots} showed that $h(p)<p^{0.74}$ for all primes $p$. In Section~2 we describe the methods used in our computations with a few comments on their asymptotic time complexity. 
In Section~3 we describe some auxiliary programs used in the proofs of Theorems~\ref{T:3} and~\ref{T:4}. Finally, in Section~\ref{S:additional} we give some additional computational results. \section{Computational methods} \label{S:computation} We set a modulus $M=2\cdot3\cdots 29=6469693230$, and a core loop stepped through the integers $a\bmod M$ with $\gcd(a,M)=1$. The $\phi(M)=1021870080$ values of $a$ were shared evenly among 192 processor cores and processed independently. For each such $a$, the primes in the arithmetic progression $a\bmod M$ up to $10^{16}$ are found using the sieve of Eratosthenes. This requires sieving an interval of size only $10^{16}/M\approx 1.5\times10^6$, which fits in cache. Since the number of primes used to sieve the interval, $\pi(10^8)-10=5761445$, is of a similar magnitude, the work is reasonably balanced. In addition to this, the progression $a-1 \bmod M$ is sieved to generate complete factorizations, so that for each prime $p\equiv a\bmod M$ we find, we have handy the complete factorization of $p-1$. With this information, we then find $g(p)$ and check whether $g(p)=h(p)$, using Montgomery multiplication \cite{MR777282} to reduce the cost of modular exponentiations. For each of the integers $n\equiv a-1 \bmod M$ that we factor completely, the odd prime divisors of $n$ up to $10^8$ are packed, in binary, into a single 64-bit integer to save space. The 192-core cluster at Butler University runs Linux; the code was written in C++ with MPI to manage parallel execution and communication. The total wall time for the computation was 645 days, with two brief interruptions requiring restarts from saved checkpoints. If we generalize our method to an algorithm to find $g(p)$ and $h(p)$ for all primes $p\le n$, our algorithm takes $O(n)$ arithmetic operations under the assumption that the average values of $g(p)$ and $h(p)$ are bounded. (See, for example, \cite{EM97}.)
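The progression sieve at the heart of the core loop can be sketched as follows (illustrative code with toy parameters and names of our choosing; the production code instead locates the first multiple with a modular inverse and works in cache-sized blocks):

```cpp
#include <cstdint>
#include <vector>

// Find all primes p <= n with p ≡ a (mod M), gcd(a, M) = 1, by sieving
// only the ~n/M members of the progression with the primes up to sqrt(n).
std::vector<uint64_t> sieve_progression(uint64_t a, uint64_t M, uint64_t n) {
    // Ordinary sieve for the base primes up to sqrt(n).
    uint64_t lim = 1;
    while ((lim + 1) * (lim + 1) <= n) ++lim;
    std::vector<bool> comp(lim + 1, false);
    std::vector<uint64_t> base;
    for (uint64_t p = 2; p <= lim; ++p) {
        if (comp[p]) continue;
        base.push_back(p);
        for (uint64_t q = p * p; q <= lim; q += p) comp[q] = true;
    }
    // Index k represents the progression member a + k*M.
    uint64_t len = (n >= a) ? (n - a) / M + 1 : 0;
    std::vector<bool> is_prime(len, true);
    for (uint64_t p : base) {
        if (M % p == 0) continue;  // gcd(a,M)=1, so p | M divides no member
        uint64_t k0 = 0;           // least k with p | a + k*M; linear search
        while ((a + k0 * M) % p != 0) ++k0;  // kept simple for the sketch
        for (uint64_t k = k0; k < len; k += p)
            if (a + k * M != p) is_prime[k] = false;  // don't strike p itself
    }
    std::vector<uint64_t> primes;
    for (uint64_t k = 0; k < len; ++k)
        if (is_prime[k] && a + k * M > 1) primes.push_back(a + k * M);
    return primes;
}
```

For example, \verb|sieve_progression(7, 30, 200)| yields the primes $7, 37, 67, 97, 127, 157$; with the parameters above ($M=6469693230$, $n=10^{16}$) each call touches only the $\approx 1.5\times 10^6$ members of one progression.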
We assume that $M$, our product of small primes, satisfies $n^{1/3}\le M \le n^{2/3}$, say, so that $\phi(M)/M \approx e^{-\gamma}/\log\log n$. This serves the same role as a wheel in prime sieves, so that finding all primes up to $n$ and factoring all integers in the $a-1 \bmod M$ residue classes with $\gcd(a,M)=1$ take $O(n)$ operations each. (See \cite{Sorenson2015} for references on prime sieves.) Then, for each prime, we compute $g(p)$ and $h(p)$, which takes $O( h(p) \log p )$ arithmetic operations for each $p$, as a modular exponentiation takes $O(\log q)$ operations if the exponent is bounded by $q$. By the prime number theorem and our heuristic assumption that the average value of $h(p)$ is bounded, we get $O(n)$ operations here as well. \section{Proof of Theorems~\ref{T:3} and~\ref{T:4}}\label{S:proofs} Our proofs for both theorems are computational, and used a total of five programs. At a high level, our approach is not dissimilar to what Chen outlines in Remark 2.6 in \cite{ChenRoots}. \subsection{Theorem \ref{T:3}} To prove Theorem~\ref{T:3} we used Theorem~3 from \cite{MR4055934}, namely that if $$ H^2 - \frac{\pi^2}{6} \frac{B(H/h)^{2r-1}}{A(H/h)^{2r}} \left\{ \left( 2+\frac{s-1}{\delta}\right) 2^{\omega-s} \right\}^{2r} h \sqrt{p} W(p,h,r)>0 $$ then $g(p)<H$. Following the notation from that paper, we set $\alpha=5/8$, $H=p^\alpha$, $h=\lceil 2.01\cdot p^{2\alpha-1}\rceil$, and $\omega:=\omega(p-1)$, the number of distinct prime divisors of $p-1$. If we set $e$ to be an even divisor of $p-1$, then $s=\omega(p-1)-\omega(e)$, the number of distinct primes dividing $p-1$ that do not divide $e$. Let $p_1, p_2, \ldots, p_s$ be these primes. Then $\delta:=1-\sum_{i=1}^s 1/p_i$.
We also have $$ W(p,h,r) \le \sqrt{2p} \left( \frac{2r}{eh} \right)^r+2r-1 \quad \mbox{for $r\ge 1$,} $$ and, in the case $r=2$, also the bound $W(p,h,2) \le 3\left(1+\frac{\sqrt{p}}{h^2}\right)$. Our first program, written in C++, narrows the range of $p$ for which we cannot yet prove that $g(p)<p^\alpha$. From our computation above, together with the previous result that $g(p)<p^{5/8}$ for $p>10^{22}$ (Corollary 1 from \cite{MR4055934}), we need only consider primes $p$ with $10^{16} < p < 10^{22}$. We then tried a range of $r$ values from $1$ to $10$, bounded $\omega$ explicitly (Theorem~11 from \cite{omegabound}), and set $s=0$ (which gives $\delta=1$) to lower the upper bound on $p$, using bisection. This somewhat narrower range on $p$ then gives us a range for $\omega$, from $2$ to $18$ in this case. We handled each value of $\omega$ as a separate case. For each $\omega$, we can compute an optimal value of $s$ to use, by minimizing $(2+(s-1)/\delta)2^{\omega-s}$, as the value of $\omega-s$ can be used to give a lower bound on $\delta$. With these more refined parameters, we closed the gap on $p$ values except for when $10\le \omega \le 15$. To deal with the remaining values of $\omega$ we employed a tree algorithm, similar to what is described in Section~3.1 of~\cite{MR3584569}. Our version of the tree algorithm is to set a mask with a fixed number of bits -- 14 in this particular case. A bit of the mask was $1$ if the prime corresponding to that bit position divided $p-1$. For example, the mask 1010011 would mean that the primes $2,3,11,17$ divide $p-1$ and no other primes up to $43$ (the 14th prime) divide $p-1$. This extra information allows a better lower bound on $\delta$, and in all cases we found that setting $s=\omega-1$ worked.
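To make the mask bookkeeping concrete, here is a self-contained sketch (our own Python illustration with invented names, not the production code): \texttt{mask\_primes} decodes a mask, \texttt{delta} evaluates the definition $\delta = 1-\sum_{i=1}^s 1/p_i$, and \texttt{primes\_with\_known\_divisors} carries out the kind of enumeration the third program below performs when the mask pins down every prime divisor of $p-1$. The production code used the primality test cited below from \cite{CrandallPomerance}; here a deterministic Miller--Rabin check (valid for $n<3.3\cdot 10^{24}$, which covers the range $p<10^{22}$) stands in.

```python
from fractions import Fraction

# First 14 primes: bit i of a mask corresponds to PRIMES14[i].
PRIMES14 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43]

def mask_primes(mask):
    """Primes the mask asserts divide p-1 (no other prime <= 43 does)."""
    return [q for i, q in enumerate(PRIMES14) if (mask >> i) & 1]

def delta(excluded_primes):
    """delta = 1 - sum of reciprocals of the s primes dividing p-1 but not e."""
    return 1 - sum(Fraction(1, q) for q in excluded_primes)

def is_prime(n):
    """Deterministic Miller-Rabin, valid for n < 3.3*10^24 with these bases."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for q in bases:
        if n % q == 0:
            return n == q
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def primes_with_known_divisors(divisors, bound):
    """Primes p <= bound such that the primes dividing p-1 are exactly `divisors`."""
    found = []

    def extend(i, n):
        if i == len(divisors):
            if is_prime(n + 1):
                found.append(n + 1)
            return
        m = n * divisors[i]
        while m + 1 <= bound:
            extend(i + 1, m)
            m *= divisors[i]

    extend(0, 1)
    return sorted(found)
```

For the example mask from the text, \texttt{mask\_primes(0b1010011)} returns \texttt{[2, 3, 11, 17]}; if these are all the primes dividing $p-1$ and $e$ is a power of $2$, then $\delta = 1 - \tfrac13 - \tfrac1{11} - \tfrac1{17} = \tfrac{290}{561}$. The enumeration shows how a mask with all bits known converts into a finite list of candidate primes.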
Looping through all odd-valued masks up to $2^{14}$, we narrowed the gaps on $p$ significantly, and in cases where a gap remained, we wrote the upper and lower limits and the mask information to a text file to be read by a second program. If the mask value had enough bits set so that all primes dividing $p-1$ were known, the information was instead written to a different text file for our third program to process. This first program takes under a minute to run. The second program read in upper and lower bounds, together with a mask from which it built a modulus and an arithmetic progression to sieve for primes. For each prime found, its least primitive root was computed and checked. Processing all the work generated by the first program took about five hours. This second program was a SageMath script. (If it had taken longer, we would have rewritten it in C++.) The third program was given a complete list of the prime divisors of $p-1$, so it enumerated all possible exponents on each prime specified by the mask to create candidate values $p$ that were tested for primality (see Theorem~4.1.1 from \cite{CrandallPomerance}); primitive roots of the resulting primes were then found and checked. This program took only a few hours to do the work it was given by the first program. We reran the first program multiple times, trying different mask sizes and estimating the amount of work it produced for the other two programs, and we found 14 to be the best choice. \subsection{Theorem \ref{T:4}} Our proof of Theorem~\ref{T:4} is based on the recent work of Chen \cite{ChenRoots}, and consists of two programs that work in roughly the same way as above. The first program is based on equation~(2.5) from \cite{ChenRoots}: $$ \delta \frac{\phi(e)}{e} \left\{ p^\alpha - \left(2+ \frac{s-1}{\delta}\right) 2^{\omega(e)} A(p) \sqrt{p}\log p \right\} -\sqrt{5} \pi p^{\alpha-1/4} > 0.
$$ Note that although the $A(p)$ function here is from \cite{ChenRoots} and is not the same as the $A(H/h)$ function in the previous section, the other parameters $e, s, \delta$ are defined similarly. Our proof used $\alpha=0.661<2/3$, so what we proved is slightly stronger than the statement of the theorem. (We found that using $0.660$ would have required more work than is feasible -- $0.661$ seems to be the limit of this method for $h(p)$.) We required an explicit lower bound for $\phi(e)/e$, which can be found in \cite{RosserSchoenfeld}. Otherwise, the overall program structure is basically the same as for the first program in the proof of Theorem~\ref{T:3}. We had no upper bound on $p$ for our proof, so computing one was the first step, using explicit bounds for $\omega$ and $\phi(e)/e$; this gave $10^{233}$, which meant $\omega\le 105$. Proving the cases $\omega=2$ to $8$ and $17$ to $105$ was straightforward, but for $\omega$ between $9$ and $16$ inclusive we used a mask of 16 bits and handled cases as before. Note that $s=0$ worked for $\omega$ up to $6$, then $s=4,6$ for $\omega=7,8$ respectively, and we used $s=14$ for $\omega\ge17$. We used $s=\omega-2$ in the troublesome range $9$--$16$. The only change in the second program was to compute $h(p)$ instead of $g(p)$. As in the previous subsection, it was a SageMath script. We did not use a third program as in the previous subsection -- the second program was able to do the work instead. \section{Additional computational results}\label{S:additional} \begin{figure} \includegraphics[width=5in]{hist.png} \caption{Log-histogram of $g(p)$} \label{F:hist} \end{figure} We felt it would be of interest to include a log-histogram of the values of $g(p)$ for $g(p)>100$. See Figure~\ref{F:hist}.
\begin{table} \begin{tabular}{|r|c|c|c|} \hline $p$ & $g(p)$ & $F(p)$ & $g(p)/F(p)$\\ \hline 29418841 & 101 & 247.87 & 0.407471\\ 33358081 & 107 & 250.961 & 0.426361\\ 45024841 & 111 & 258.388 & 0.429586\\ 90441961 & 113 & 275.932 & 0.409521\\ 184254841 & 127 & 294.212 & 0.431661\\ 324013369 & 137 & 308.979 & 0.443396\\ 831143041 & 151 & 334.133 & 0.451916\\ 1685283601 & 164 & 353.416 & 0.464042\\ 6064561441 & 179 & 389.207 & 0.459909\\ 7111268641 & 194 & 393.733 & 0.492719\\ 9470788801 & 197 & 401.919 & 0.490148\\ 28725635761 & 227 & 434.111 & 0.522908\\ 108709927561 & 229 & 473.726 & 0.483402\\ 386681163961 & 263 & 512.476 & 0.513195\\ 1990614824641 & 281 & 563.874 & 0.498338\\ 44384069747161 & 293 & 665.223 & 0.440453\\ 89637484042681 & 335 & 688.859 & 0.486311\\ 358973066123281 & 347 & 736.23 & 0.47132\\ 2069304073407481 & 359 & 797.351 & 0.450241\\ 4986561061454281 & 401 & 828.577 & 0.483962\\ 6525032504501281 & 417 & 838.194 & 0.497498\\ \hline \end{tabular} \caption{Growth of record values of $g(p)$} \label{T:gF} \end{table} Let $\hat{g}(p)$ denote the least prime primitive root. (Clearly $g(p)\leq \hat{g}(p)$, but they may not be equal.) In~\cite{MR1433261}, based on a probabilistic model, it was conjectured that $$ \limsup_{p\to\infty}\frac{\hat{g}(p)}{e^\gamma \log p(\log \log p)^2}=1 $$ and numerical evidence to support this was given using a table of record values for $\hat{g}(p)$, $p<2^{31}\approx 2.1\cdot 10^9$. We carry out a similar computation for $g(p)$ using the record values given in Section~\ref{S:intro}. Set $F(p)=e^\gamma \log p(\log \log p)^2$. We certainly do not expect $\limsup g(p)/F(p)$ to equal one, but just for the sake of comparison we compute $g(p)/F(p)$ as $p$ ranges over the record values. See Table~\ref{T:gF}. The value of the ratio is somewhat close to $1/2$, but we dare not speculate solely based on this numerical data.
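The entries of Table~\ref{T:gF} are easy to reproduce (a quick check, assuming only the standard value of the Euler--Mascheroni constant $\gamma$):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def F(p):
    """The comparison function F(p) = e^gamma * log(p) * (log log p)^2."""
    return math.exp(GAMMA) * math.log(p) * math.log(math.log(p)) ** 2

# First row of the table: p = 29418841, g(p) = 101, F(p) ~ 247.87.
ratio = 101 / F(29418841)  # ~ 0.4075
```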
\begin{figure}[hb] \includegraphics[width=5in]{plot.png} \caption{Graph representation of Theorem~\ref{T:alphas}} \label{F:alpha} \end{figure} We conclude with the following theorem, which is a refinement of Theorem~\ref{T:3}, proven using methods similar to those discussed in Section~\ref{S:proofs}. \begin{theorem}\label{T:alphas} If $p>p_\alpha$ then $g(p)<p^\alpha$ for the values of $\alpha$ and $p_\alpha$ in the following table: \begin{quote} \begin{tabular}{|l|c||l|c|} \hline $\alpha$ & $p_\alpha$ & $\alpha$ & $p_\alpha$ \\ \hline $0.62$ & $1.91\times10^{23}$ & $0.58$ & $8.21\times10^{34}$ \\ $0.615$ & $1.97\times10^{24}$& $0.575$ & $1.77\times10^{37}$ \\ $0.61$ & $2.47\times10^{25}$ & $0.57$ & $7.98\times10^{39}$ \\ $0.605$ & $4.00\times10^{26}$& $0.565$ & $9.42\times10^{42}$ \\ $0.6 $ & $8.52\times10^{27}$& $0.56$ & $3.57\times10^{46}$ \\ $0.595$ & $2.53\times10^{29}$& $0.555$ & $6.08\times10^{50}$ \\ $0.59$ & $1.09\times10^{31}$& $0.55$ & $7.27\times10^{55}$ \\ $0.585$ & $7.24\times10^{32}$ && \\ \hline \end{tabular} \end{quote} \end{theorem} The graph in Figure~\ref{F:alpha} illustrates the table above. \nocite{*} \bibliographystyle{plain} \bibliography{primitive_root} \end{document}
\UseRawInputEncoding \documentclass[10pt]{article} \usepackage{hyperref} \usepackage{amsthm,amsmath,amssymb} \usepackage{enumerate} \usepackage{fullpage} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{claim}[theorem]{Claim} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{example}[theorem]{Example} \newcommand{\Cov}{\operatorname{Cov}} \newcommand{\diag}{\operatorname{diag}} \newcommand{\Var}{\operatorname{Var}} \newcommand{\pd }{\mathbf{S}^+} \newcommand{\lsc }{l.s.c.\ } \newcommand{\usc }{u.s.c.\ } \newcommand{\law }{\operatorname{law}} \newcommand{\psd }{\mathbf{S}_0^+} \newcommand{\sym }{\mathbf{S}} \newcommand{\Tr}{\operatorname{Tr}} \newcommand{\HS}{\operatorname{HS}} \newcommand{\id}{\operatorname{id}} \newcommand{\Pb}{\mathbb{P}} \newcommand{\lrb}[1]{\left( #1\right)} \newcommand{\la}{\lambda} \newcommand{\lrr}{\Longleftrightarrow} \newcommand{\rr}{\Rightarrow} \newcommand{\EE}{\mathbb{E}} \newcommand{\fa}{\ \forall \ } \renewcommand{\top}{T} \renewcommand{\tilde}{\widetilde} \newcommand{\ospan}{\operatorname{span}} \newcommand{\ocov}{\operatorname{Cov}} \newcommand{\ovar}{\operatorname{Var}} \newcommand{\R}{\mathbb{R}} \newcommand{\eR}{\overline{\mathbb{R}}} \newcommand{\cS}{\mathcal{S}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \title{Entropy Inequalities and Gaussian Comparisons} \author{Efe Aras and Thomas A.~Courtade\\University of California, Berkeley} \date{~~} \begin{document} \maketitle \begin{abstract} We establish a general class of entropy inequalities that take the concise form of Gaussian comparisons. 
The main result unifies many classical and recent results, including the Shannon--Stam inequality, the Brunn--Minkowski inequality, the Zamir--Feder inequality, the Brascamp--Lieb and Barthe inequalities, the Anantharam--Jog--Nair inequality, and others. \end{abstract} \section{Introduction} Entropy inequalities have been a core part of information theory since its inception, with their development driven largely by the role they serve in impossibility results for coding theorems. Many basic inequalities enjoyed by entropy, such as subadditivity, boil down to convexity of the logarithm, and hold in great generality. Others are decidedly more analytic in nature, and may be regarded as capturing some deeper geometric property of the specific spaces on which they hold. In the context of Euclidean spaces, a notable example of the latter is the Shannon--Stam entropy power inequality (EPI), stated in Shannon's original 1948 treatise \cite{shannon48} and later proved by Stam \cite{stam59}. Another example is the Zamir--Feder inequality \cite{ZamirFeder}, which can be stated as follows: Let $X = (X_1, \dots, X_k)$ be a random vector in $\mathbb{R}^k$ with independent coordinates $(X_i)_{i=1}^k$. If $Z = (Z_1, \dots, Z_k)$ is a Gaussian vector with independent coordinates $(Z_i)_{i=1}^k$ and entropies satisfying $h(Z_i) = h(X_i)$, $1\leq i \leq k$, then for any linear map $B: \mathbb{R}^k \to \mathbb{R}^n$, we have \begin{align} h( B X)\geq h( B Z). \label{eq:ZamirFederIneq} \end{align} Evidently, \eqref{eq:ZamirFederIneq} takes the form of a Gaussian comparison; so, too, does the EPI. The goal of this paper is to show that such Gaussian comparisons hold in great generality, thus unifying a large swath of known and new information-theoretic and geometric inequalities. For example, we'll see that \eqref{eq:ZamirFederIneq} holds when the $X_i$'s are random vectors of different dimensions, and, in fact, continues to hold even when the independence assumption is suitably relaxed.
As another example, we'll see how the EPI and the Brunn--Minkowski inequality emerge as different endpoints of a suitable Gaussian comparison, thus giving a clear and precise explanation for their formal similarity. This paper is organized as follows. Section \ref{sec:MainResult} presents the main result and a few short examples; Section \ref{sec:proofs} is dedicated to the proof. Sections \ref{sec:multimarginal} and \ref{sec:saddle} give further applications, and Section \ref{sec:closing} delivers closing remarks. \section{ Main Result} \label{sec:MainResult} Recall that a Euclidean space $E$ is a finite-dimensional Hilbert space over the real field, equipped with Lebesgue measure. For a probability measure $\mu$ on $E$, absolutely continuous with respect to Lebesgue measure, and a random vector $X\sim \mu$, we define the Shannon entropy $$ h(X) \equiv h(\mu) :=-\int_E \log\left( \frac{d\mu}{dx}\right)d\mu, $$ provided the integral exists. If $\mu$ is not absolutely continuous with respect to Lebesgue measure, we adopt the convention $h(\mu):=-\infty$. We let $\mathcal{P}(E)$ denote the set of probability measures on $E$ having finite entropies and second moments. When there is no cause for ambiguity, we adopt the hybrid notation where a random vector $X$ and its law $\mu$ are denoted interchangeably. So, for example, writing $X\in \mathcal{P}(E)$ means that $X$ is a random vector taking values in $E$, having finite entropy and finite second moments. We let $\mathcal{G}(E)$ denote the subset of $\mathcal{P}(E)$ that consists of Gaussian measures. The following notation will be reserved throughout. We consider a Euclidean space $E_0$ with a fixed orthogonal decomposition $E_0 = \oplus_{i=1}^k E_i$. There are no constraints on the dimensions of these spaces, other than that they are finite (by definition of Euclidean space), and $\dim(E_0) = \sum_{i=1}^k \dim(E_i)$ (by virtue of the stated decomposition). 
We let $\mathbf{d} = (d_j)_{j=1}^m$ be a collection of positive real numbers, and $\mathbf{B}=(B_j)_{j=1}^m$ be a collection of linear maps $B_j : E_0 \to E^j$, with common domain $E_0$ and respective codomains equal to Euclidean spaces $E^1, \dots, E^m$. Aside from linearity, no further properties of the maps in $\mathbf{B}$ are assumed. For given random vectors $X_i\in \mathcal{P}(E_i)$, $1\leq i \leq k$, we let $\Pi(X_1, \dots, X_k)$ denote the corresponding set of couplings on $E_0$. That is, we write $X\in \Pi(X_1, \dots, X_k)$ to indicate that $X$ is a random vector taking values in $E_0$ with $$ \pi_{E_i}(X) \overset{law}{=} X_i, ~~1\leq i\leq k, $$ where $\pi_{E_i} : E_0 \to E_i$ is the canonical projection. For $X\in \Pi(X_1,\dots, X_k)$ and $S\subset \{1,\dots,k\}$, we define the {${S}$-correlation}\footnote{The $S$-correlation $I_S$ seems to have no generally agreed-upon name, and has been called different things in the literature. Our choice of terminology reflects that of Watanabe \cite{Watanabe}, who used the term {\it total correlation} to describe $I_S$ when $S=\{1,\dots,k\}$. However, it might also be called $S$-information, to reflect the ``multi-information'' terminology preferred by some (see, e.g., \cite{CsiszarKorner}).} $$ I_S(X) := \sum_{i\in S}h(X_i) - h( \pi_{S}(X) ), $$ where we let $\pi_{S}$ denote the canonical projection of $E_0$ onto $\oplus_{i\in S}E_i$. To avoid ambiguity, we adopt the convention that $I_{\emptyset}(X) = 0$. Observe that $I_S$ is the relative entropy between the law of $\pi_{S}(X)$ and the product of its marginals, so is always nonnegative. For a given {constraint function} $\nu : 2^{\{1,\dots, k\}} \to [0,+\infty]$, and $X_i\in \mathcal{P}(E_i)$, $1\leq i \leq k$, we can now define the set of {\bf correlation-constrained couplings} \begin{align*} &\Pi(X_1, \dots, X_k ; \nu) := \big\{ X \in \Pi(X_1, \dots, X_k) : I_S(X)\leq \nu(S) \mbox{~for each~} S \in 2^{\{1,\dots, k\} } \big\}.
\end{align*} With notation established, our main result is the following. \begin{theorem}\label{thm:GaussianComparisonConstrained} Fix $(\mathbf{d},\mathbf{B})$ and $\nu : 2^{\{1,\dots, k\}} \to [0,+\infty]$. For any $X_i \in \mathcal{P}(E_i)$, $1\leq i \leq k$, there exist $Z_i \in \mathcal{G}(E_i)$ with $h(Z_i)= h(X_i)$, $1\leq i\leq k$ satisfying \begin{align} \max_{X\in \Pi(X_1, \dots, X_k;\nu)}\sum_{j=1}^m d_j h(B_j X) \geq \max_{Z\in \Pi(Z_1, \dots, Z_k;\nu)}\sum_{j=1}^m d_j h(B_j Z). \label{eq:maxEntComparisonConstrained} \end{align} \end{theorem} \begin{remark} The special case where $\dim(E_i) = 1$ for all $1\leq i \leq k$ appeared in the preliminary work \cite{ArasCourtadeISIT2021} by the authors. \end{remark} Let us give the two brief examples promised in the introduction; further applications are discussed in Sections \ref{sec:multimarginal} and \ref{sec:saddle}. First, observe that when $m=1$, $\nu\equiv 0$ and $\dim(E_i)=1$ for all $1\leq i \leq k$, we recover the Zamir--Feder inequality \eqref{eq:ZamirFederIneq}. Indeed, taking $\nu \equiv 0$ renders the set of couplings equal to the singleton consisting of the independent coupling, and the one-dimensional nature of the $E_i$'s means that the variances of the $Z_i$'s are fully determined by the entropy constraints. Hence, it is clear that Theorem \ref{thm:GaussianComparisonConstrained} generalizes the Zamir--Feder inequality \eqref{eq:ZamirFederIneq} in the directions noted in the introduction. That is, it continues to hold in the case where the $X_i$'s are multidimensional, and when the independence assumption is relaxed in a suitable manner. As a second and slightly more substantial example, we explain the connection between the EPI and the Brunn--Minkowski inequality alluded to in the introduction. Denote the {entropy power} of $X\in \mathcal{P}(\mathbb{R}^n)$ by $$ N(X):= e^{2 h(X)/n} . 
$$ For a coupling $X=(X_1,X_2)$, note that the {mutual information} $I(X_1;X_2)$ is equal to $I_S(X)$ with $S=\{1,2\}$. \begin{corollary}\label{thm:depEPI} For any $X_1,X_2 \in \mathcal{P}(\R^n)$ and $\zeta \in [0,+\infty]$, it holds that \begin{align} N(X_1) + N(X_2) + &2 \sqrt{(1 - e^{- 2 \zeta/n }) N(X_1)N(X_2)} \leq \!\!\! \max_{ \substack{X_1,X_2 :\\ I(X_1;X_2)\leq \zeta} } \!\!\! N(X_1+X_2) , \label{eq:depEPI} \end{align} where the maximum is over couplings of $X_1,X_2$ such that $I(X_1;X_2)\leq \zeta$. Equality holds for Gaussian $X_1, X_2$ with proportional covariances. \end{corollary} \begin{proof} We apply Theorem \ref{thm:GaussianComparisonConstrained} with $E_1= E_2=\R^n$ and $\nu(\{1,2\}) = \zeta$ to give existence of Gaussian $Z_1,Z_2$ satisfying $N(Z_i)=N(X_i)$ and $$ \max_{ \substack{(X_1,X_2) \in \Pi(X_1,X_2) : \\I(X_1;X_2)\leq \zeta} } N(X_1+X_2) \geq \max_{ \substack{(Z_1,Z_2) \in \Pi(Z_1,Z_2) : \\I(Z_1;Z_2)\leq \zeta} } N(Z_1+Z_2). $$ Now, suppose $Z_i\sim N(0,\Sigma_{i})$, $i\in \{1,2\}$ and consider the coupling $$ Z_1 = \rho \Sigma^{1/2}_{1} \Sigma^{-1/2}_{2} Z_2 + (1-\rho^2)^{1/2} W, $$ where $W\sim N(0,\Sigma_{1})$ is independent of $Z_2$, and $\rho := (1 - e^{-2 \zeta/n})^{1/2}$. This ensures $I(Z_1;Z_2) = \zeta$, and \begin{align*} N(Z_1+Z_2) &= (2\pi e) \det( \Sigma_{1}+\Sigma_{2} + \rho \Sigma_{1}^{1/2}\Sigma_{2}^{1/2} + \rho \Sigma_{2}^{1/2}\Sigma_{1}^{1/2} )^{1/n}\\ &\geq (2\pi e) \left( \det( \Sigma_{1} )^{1/n} + \det( \Sigma_{2} )^{1/n} + 2 \rho \det( \Sigma^{1/2}_{1} )^{1/n}\det( \Sigma^{1/2}_{2} )^{1/n}\right)\\ &=N(X_1) + N(X_2) + 2 \sqrt{(1 - e^{-2 \zeta/n}) N(X_1)N(X_2)}, \end{align*} where the inequality follows by Minkowski's determinant inequality. It is easy to see that we have equality throughout if $X_1, X_2$ are Gaussian with proportional covariances. 
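For completeness, let us verify that the coupling above meets the information constraint with equality. Since $\Sigma_1^{1/2}\Sigma_2^{-1/2}Z_2$ has covariance $\Sigma_1^{1/2}\Sigma_2^{-1/2}\Sigma_2\Sigma_2^{-1/2}\Sigma_1^{1/2}=\Sigma_1$, we get $\Cov(Z_1)=\rho^2\Sigma_1+(1-\rho^2)\Sigma_1=\Sigma_1$ as required, and conditionally on $Z_2$ the vector $Z_1$ is Gaussian with covariance $(1-\rho^2)\Sigma_1$. Hence
$$
I(Z_1;Z_2) = h(Z_1)-h(Z_1|Z_2) = \frac{1}{2}\log\frac{\det(2\pi e\, \Sigma_1)}{\det\big(2\pi e (1-\rho^2)\Sigma_1\big)} = -\frac{n}{2}\log(1-\rho^2) = \zeta,
$$
by the choice $\rho^2 = 1 - e^{-2\zeta/n}$.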
\end{proof} \begin{remark} Corollary \ref{thm:depEPI} may be considered as an extension of the EPI that holds for certain dependent random variables; it appeared in the preliminary work \cite{ArasCourtadeISIT2021} by the authors. We remark that Takano \cite{takano1995inequalities} and Johnson \cite{johnson2004conditional} have established that the EPI holds for dependent random variables which have positively correlated scores. However, given the different hypotheses, those results are not directly comparable to Corollary \ref{thm:depEPI}. \end{remark} Now, we observe that the EPI and the Brunn--Minkowski inequality naturally emerge from \eqref{eq:depEPI} by considering the endpoints of independence ($\zeta = 0$) and maximal dependence ($\zeta = +\infty$). Of course, \eqref{eq:depEPI} also gives a sharp inequality for the whole spectrum of cases in between. \begin{example}[Shannon--Stam EPI] Taking $\zeta = 0$ enforces the independent coupling in Corollary \ref{thm:depEPI}, and recovers the EPI in its usual form. For independent $X_1,X_2\in \mathcal{P}(\mathbb{R}^n)$, \begin{align} e^{2 h(X_1)/n}+ e^{2 h(X_2)/n}\leq e^{2 h(X_1+X_2)/n}.\label{eq:EPIstatement} \end{align} Hence, Corollary \ref{thm:depEPI} may be regarded as an extension of the EPI for certain dependent random variables with a sharp correction term. \end{example} \begin{example}[Brunn--Minkowski inequality] Taking $\zeta = +\infty$ in Corollary \ref{thm:depEPI} allows for unconstrained optimization over couplings, giving $$ e^{h(X_1)/n}+ e^{h(X_2)/n}\leq \sup_{(X_1,X_2) \in \Pi(X_1,X_2) } e^{h(X_1+X_2)/n}, $$ where we emphasize the change in exponent from $2$ to $1$, relative to \eqref{eq:EPIstatement}. This may be regarded as an entropic improvement of the Brunn--Minkowski inequality.
Indeed, if $X_1,X_2$ are uniform on compact subsets $K,L\subset \mathbb{R}^n$, respectively, we obtain the familiar Brunn--Minkowski inequality $$ \operatorname{Vol}_n(K)^{1/n} + \operatorname{Vol}_n(L)^{1/n} \leq \sup_{(X_1,X_2) \in \Pi(X_1,X_2) } e^{h(X_1+X_2)/n} \leq \operatorname{Vol}_n(K+L)^{1/n}, $$ where $K+L$ denotes the Minkowski sum of $K$ and $L$, and $\operatorname{Vol}_n(\cdot)$ denotes the $n$-dimensional Lebesgue volume. The last inequality follows since $X_1+X_2$ is supported on the Minkowski sum $K+L$, and hence the entropy is upper bounded by that of the uniform distribution on that set. \end{example} It has long been observed that there is a striking similarity between the Brunn--Minkowski inequality and the EPI (see, e.g., \cite{costa1984similarity} and citing works). It is well-known that each can be obtained from convolution inequalities involving R\'{e}nyi entropies (e.g., the sharp Young inequality \cite{brascamp1976best, lieb1978}, or rearrangement inequalities \cite{WangMadiman}), when the orders of the involved R\'{e}nyi entropies are taken to the limit $0$ or $1$, respectively. Quantitatively linking the Brunn--Minkowski and EPI using only Shannon entropies has proved elusive, and has been somewhat of a looming question. In this sense, Corollary \ref{thm:depEPI} provides an answer. In particular, the Brunn--Minkowski inequality and EPI are obtained as logical endpoints of a family of inequalities which involve only Shannon entropies instead of R\'enyi entropies of varying orders. In contrast to derivations involving R\'enyi entropies where summands are always independent (corresponding to the convolution of densities), the idea here is to allow dependence between the random summands.
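To spell out the endpoint algebra: at $\zeta=+\infty$ the left-hand side of \eqref{eq:depEPI} becomes a perfect square,
$$
N(X_1) + N(X_2) + 2\sqrt{N(X_1)N(X_2)} = \left(\sqrt{N(X_1)} + \sqrt{N(X_2)}\right)^2 = \left(e^{h(X_1)/n} + e^{h(X_2)/n}\right)^2,
$$
and taking square roots of both sides of \eqref{eq:depEPI} gives the inequality in the Brunn--Minkowski example, with the exponent $1$ in place of $2$; at the other endpoint $\zeta = 0$, the cross term vanishes and \eqref{eq:depEPI} reduces to the EPI \eqref{eq:EPIstatement}.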
We do not tackle the problem of characterizing equality cases in this paper, but we remark that equality is attained in the Brunn--Minkowski inequality when $K,L$ are positive homothetic convex bodies, which highlights that the stated conditions for equality in Corollary \ref{thm:depEPI} are sufficient, but not always necessary. Indeed, for $X_1,X_2$ equal in distribution, Cover and Zhang \cite{cover1994maximum} showed $$ h(2 X_1) \leq \max_{ (X_1,X_2) \in \Pi(X_1,X_2)} h(X_1+X_2), $$ with equality if and only if $X_1$ is log-concave. We expect that for $\zeta <+\infty$, the only extremizers in Corollary \ref{thm:depEPI} are Gaussian with proportional covariances. For $\zeta =+\infty$, the resulting entropy inequality is dual to the Pr\'ekopa--Leindler inequality, so the known equality conditions \cite{Dubuc} are likely to carry over. Namely, equality should be attained in this case iff $X_1$ is log-concave and $X_2 = \alpha X_1$ a.s.\ for some $\alpha \geq 0$. We remark that equality cases for \eqref{eq:maxEntComparisonConstrained} in the special case where $\nu \equiv 0$ follow from the main results in \cite{ArasCourtadeZhang}. \section{Proof of the Main Result}\label{sec:proofs} This section is dedicated to the proof of Theorem \ref{thm:GaussianComparisonConstrained}. There are several preparations to make before starting the proof; this is done in the first subsection. The second subsection brings everything together to prove an unconstrained version of Theorem \ref{thm:GaussianComparisonConstrained} where $\nu \equiv +\infty$. The third and final subsection proves Theorem \ref{thm:GaussianComparisonConstrained} on the basis of its unconstrained variation. \subsection{Preliminaries} Here we quote the preparatory results that we shall need, and the definitions required to state them. The various results are organized by subsection, and proofs are only given where necessary.
\subsubsection{Some additional notation} For a Euclidean space $E$, we let $\pd(E)$ denote the set of symmetric positive definite linear operators from $E$ to itself. That is, $A\in \pd(E)$ means $A = A^T$ and $x^T A x >0 $ for all nonzero $x\in E$. We let $\psd(E)$ denote the closure of $\pd(E)$, equal to those symmetric matrices which are positive semidefinite. The set $\sym(E)$ denotes the matrices which are symmetric. We let $\langle\cdot,\cdot\rangle_{\HS}$ denote the Hilbert--Schmidt (trace) inner product, and $\|\cdot\|_{\HS}$ denote the induced norm (i.e., the Frobenius norm). If $K_i \in \pd(E_i)$, $1\leq i \leq k$, then we let $\Pi(K_1, \dots, K_k)$ denote the subset of $\psd(E_0)$ consisting of those matrices $K$ such that $$ \pi_{E_i} K \pi_{E_i}^T = K_i, ~~~1\leq i \leq k. $$ Note that this overloaded notation is consistent with our notation for couplings. Indeed, if $X_i \sim N(0,K_i)$, $1\leq i \leq k$, then $X \sim N(0,K)$ is a coupling in $\Pi(X_1, \dots, X_k)$ if and only if $K \in \Pi(K_1, \dots, K_k)$. If $A_i : E_i \to E_i$, $1\leq i \leq k$, are linear maps, then we write the block-diagonal matrix $$ A = \operatorname{diag}(A_1, \dots, A_k) $$ to denote the operator direct sum $A = \oplus_{i=1}^k A_i : E_0 \to E_0$. For a set $V$, we let $\id_{V}: V\to V$ denote the identity map from $V$ to itself. So, for instance, we have $\id_{E_0} = \oplus_{i=1}^k \id_{E_i} \equiv \operatorname{diag}(\id_{E_1}, \dots, \id_{E_k})$. \subsubsection{The entropic forward-reverse Brascamp--Lieb inequalities} Define $$ D_g(\mathbf{c},\mathbf{d},\mathbf{B}) := \sup_{Z_i \in \mathcal{G}(E_i),1\leq i \leq k }\left( \sum_{i=1}^k c_i h(Z_i) - \max_{Z\in \Pi(Z_1, \dots, Z_k)}\sum_{j=1}^m d_j h(B_j Z) \right). $$ The following is a main result of \cite{CourtadeLiu21}, when stated in terms of entropies. \begin{theorem}\label{thm:FRBLentropy} Fix a datum $(\mathbf{c},\mathbf{d},\mathbf{B})$.
For random vectors $X_i \in \mathcal{P}(E_i)$, $1\leq i \leq k$, we have \begin{align} \sum_{i=1}^k c_i h(X_i) \leq \max_{X\in \Pi(X_1, \dots, X_k)}\sum_{j=1}^m d_j h(B_j X) + D_g(\mathbf{c},\mathbf{d},\mathbf{B}). \label{eq:MainEntropyCouplingInequality} \end{align} Moreover, the constant $D_g(\mathbf{c},\mathbf{d},\mathbf{B})$ is finite if and only if the following two conditions hold. \begin{enumerate}[(i)] \item {\bf Scaling condition:} It holds that \begin{align} \sum_{i=1}^k c_i \dim(E_i) = \sum_{j=1}^m d_j \dim(E^j). \label{eq:ScalingCond} \end{align} \item{\bf Dimension condition:} For all subspaces $T_i \subset E_i$, $1\leq i \leq k$, \begin{align} \sum_{i=1}^k c_i \dim(T_i ) \leq \sum_{j=1}^m d_j \dim(B_j T),\hspace{5mm}\mbox{where $T = \oplus_{i=1}^k T_i$.} \label{eq:DimCond} \end{align} \end{enumerate} \end{theorem} A datum $(\mathbf{c},\mathbf{d},\mathbf{B})$ is said to be {\bf extremizable} if $D(\mathbf{c},\mathbf{d},\mathbf{B})<\infty$ and there exist $X_i \in \mathcal{P}(E_i)$, $1\leq i \leq k$ which attain equality in \eqref{eq:MainEntropyCouplingInequality}. Likewise, a datum $(\mathbf{c},\mathbf{d},\mathbf{B})$ is said to be {\bf Gaussian-extremizable} if there exist Gaussian $X_i \in \mathcal{G}(E_i)$, $1\leq i \leq k$ which attain equality in \eqref{eq:MainEntropyCouplingInequality}. Necessary and sufficient conditions for Gaussian-extremizability of a datum $(\mathbf{c},\mathbf{d},\mathbf{B})$ can be found in \cite{CourtadeLiu21}. Clearly Gaussian-extremizability implies extremizability on account of Theorem \ref{thm:FRBLentropy}. We shall need the converse, which was not proved in \cite{CourtadeLiu21}. \begin{theorem}\label{thm:extImpliesGext} If a datum $(\mathbf{c},\mathbf{d},\mathbf{B})$ is extremizable, then it is Gaussian-extremizable. \end{theorem} The proof follows a doubling argument similar to what appears in \cite[Proof of Theorem 8]{liu2018forward}. We will need the following lemma.
\begin{lemma}\label{lem:W2ConvergenceCovariance} For each $1\leq i \leq k$, let $Z_i\sim N(0,K_i)$ and let $(X_{n,i})_{n\geq 1}$ be a sequence of zero-mean random vectors satisfying $$\lim_{n\to\infty} W_2(X_{n,i}, Z_i)= 0,$$ where $W_2: \mathcal{P}(E_i)\times \mathcal{P}(E_i)\to \mathbb{R}$ is the Wasserstein distance of order 2. For any $K\in \Pi(K_1, \dots, K_k)$, there exists a sequence of couplings $X_n \in \Pi(X_{n,1},\dots, X_{n,k})$, $n\geq 1$ such that $\|\Cov(X_n) - K\|_{\HS}\to 0$. \end{lemma} \begin{proof} Let $Z\sim N(0,K)$, and observe that $Z \in \Pi(Z_1, \dots, Z_k)$. Let $T_{n,i}$ be the optimal transport map sending $N(0,K_i)$ to $\law(X_{n,i})$ (see, e.g., \cite{villani2003topics}). Then $X_n = (T_{n,1}(Z_1), \dots, T_{n,k}(Z_k)) \in \Pi(X_{n,1},\dots, X_{n,k})$ satisfies \begin{align*} T_{n,i}(Z_{i})T_{n,i'}(Z_{i'})^T - Z_i Z_{i'}^T &= Z_i (T_{n,i'}(Z_{i'}) - Z_{i'} )^T + (T_{n,i}(Z_{i}) - Z_{i}) Z_{i'}^T \\ &\phantom{=}+ (T_{n,i}(Z_{i}) - Z_{i}) (T_{n,i'}(Z_{i'}) - Z_{i'} )^T . \end{align*} Taking expectations of both sides and applying Cauchy--Schwarz, we conclude $$ \|\Cov(X_n) - K\|_{\HS} \to 0 $$ since $\EE|T_{n,i}(Z_{i}) - Z_{i}|^2 = W_2(X_{n,i}, Z_i)^2 \to 0$ for each $1\leq i\leq k$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:extImpliesGext}] The approach will be to show that extremizers are closed under convolutions, and apply the entropic central limit theorem. Toward this end, let $X_i \sim \mu_i \in \mathcal{P}(E_i)$ be independent of $Y_i \sim \nu_i \in \mathcal{P}(E_i)$, $1\leq i \leq k$, both assumed to be extremal in \eqref{eq:MainEntropyCouplingInequality}. Define $$ Z_i^+ := X_i + Y_i, \hspace{5mm} Z_i^- := X_i - Y_i, \hspace{5mm}1\leq i \leq k, $$ and let $$ Z^+ \in \arg\max_{Z\in \Pi(Z_1^+, \dots, Z_k^+)} \sum_{j=1}^m d_j h(B_j Z). $$ Let $Z_i^-|z_i^+$ denote the random variable $Z_i^-$ conditioned on $\{Z_i^+= z_i^+\}$, which has law in $\mathcal{P}(E_i)$ for $\law(Z_i^+)$-a.e.~$z_i^+\in E_i$ by disintegration. 
Next, for $z^+ = (z_1^+, \dots, z_k^+)\in E_0$, let $$ Z^-|z^+ \in \arg\max_{Z\in \Pi(Z_1^-|z_1^+, \dots, Z_k^-|z_k^+)} \sum_{j=1}^m d_j h(B_j Z). $$ We can assume these couplings are such that $z^+\mapsto \law( Z^-|z^+)$ is Borel measurable (i.e., $\law( Z^-|z^+)$ is a regular conditional probability). This can be justified by measurable selection theorems, as in \cite[Cor. 5.22]{villani2008} and \cite[p. 42]{liu2017ITperspectiveBL}. With this assumption, definitions imply \begin{align*} \sum_{i=1}^k c_i h(Z^+_i) &\leq \sum_{j=1}^m d_j h(B_j Z^+) + D(\mathbf{c},\mathbf{d},\mathbf{B})\\ \sum_{i=1}^k c_i h(Z^-_i | z_i^+ ) &\leq \sum_{j=1}^m d_j h(B_j Z^-| z^+) + D(\mathbf{c},\mathbf{d},\mathbf{B}), \end{align*} where the latter holds for $\law(Z^+)$-a.e.~$z^+$. Integrating the second inequality against the distribution of $Z^+$ gives the inequality for conditional entropies: \begin{align*} \sum_{i=1}^k c_i h(Z^-_i | Z_i^+ ) &\leq \sum_{j=1}^m d_j h(B_j Z^-| Z^+) + D(\mathbf{c},\mathbf{d},\mathbf{B})\\ &\leq \sum_{j=1}^m d_j h(B_j Z^-| B_j Z^+) + D(\mathbf{c},\mathbf{d},\mathbf{B}), \end{align*} where the second inequality follows since conditioning reduces entropy. Now, define $$ X = \frac{1}{2}\left( Z^+ + (Z^-|Z^+) \right) , \hspace{5mm} Y = \frac{1}{2}\left( Z^+ - (Z^-|Z^+) \right). $$ Observe that $X\in \Pi(X_1, \dots, X_k)$ and $Y\in \Pi(Y_1, \dots, Y_k)$. 
So, using the above inequalities and definitions, we have \begin{align*} 2 D(\mathbf{c},\mathbf{d},\mathbf{B}) &\leq \sum_{i=1}^k c_i h(X_i,Y_i) - \sum_{j=1}^m d_j h(B_j X ) - \sum_{j=1}^m d_j h(B_j Y ) \\ &\leq \sum_{i=1}^k c_i h(X_i,Y_i) - \sum_{j=1}^m d_j h(B_j X,B_j Y) \\ &= \sum_{i=1}^k c_i h(Z^+_i) + \sum_{i=1}^k c_i h(Z^-_i | Z_i^+) \\ &\phantom{=}- \sum_{j=1}^m d_j h(B_j Z^+) - \sum_{j=1}^m d_j h(B_j Z^-|B_j Z^+) \\ &\leq 2 D(\mathbf{c},\mathbf{d},\mathbf{B}), \end{align*} where the equality uses the chain rule for entropy and the fact that the Jacobian terms arising from the change of variables $(X,Y) \leftrightarrow (Z^+,Z^-)$ cancel on account of the scaling condition \eqref{eq:ScalingCond}. Thus, we conclude $$ \sum_{i=1}^k c_i h(Z^+_i) = \sum_{j=1}^m d_j h(B_j Z^+) + D(\mathbf{c},\mathbf{d},\mathbf{B}), $$ showing that $Z_i^+ \sim \mu_i*\nu_i \in \mathcal{P}(E_i)$, $1\leq i\leq k$ are extremal in \eqref{eq:MainEntropyCouplingInequality} as desired. The scaling condition \eqref{eq:ScalingCond} is necessary for $D(\mathbf{c},\mathbf{d},\mathbf{B})<\infty$, so it follows by induction and scale invariance that, for every $n\geq 1$, the marginally specified random vectors $(Z_{n,i})_{i=1}^k$ are extremal in \eqref{eq:MainEntropyCouplingInequality}, where $$ Z_{n,i}:=\frac{1}{\sqrt{n}}\sum_{\ell=1}^n (X_{\ell,i}-\EE[X_i]), $$ and $(X_{\ell,i})_{\ell\geq 1}$ are i.i.d.\ copies of $X_i$. Define $K_i = \Cov(X_i)$ (which is positive definite since $h(X_i)$ is finite), and fix any $K \in \Pi(K_1, \dots, K_k)$. For any $\epsilon>0$, Lemma \ref{lem:W2ConvergenceCovariance} together with the central limit theorem for $W_2$ implies there exists $N \geq 1$ and a coupling $Z_N \in \Pi(Z_{N,1},\dots, Z_{N,k})$ such that $\|\Cov(Z_N)-K\|_{\HS}<\epsilon$. Letting $Z_N^{(n)}$ denote the standardized sum of $n$ i.i.d.\ copies of $Z_N$, we have $Z^{(n)}_N \in \Pi(Z_{nN,1},\dots, Z_{nN,k})$ for each $n\geq 1$.
Thus, by the entropic central limit theorem \cite{barronCLT, CarlenSoffer}, we have \begin{align*} \limsup_{n\to \infty} \max_{Z_n \in \Pi(Z_{n,1},\dots, Z_{n,k})} \sum_{j=1}^m d_j h(B_j Z_n) &\geq \lim_{n\to\infty} \sum_{j=1}^m d_j h(B_j Z^{(n)}_N )=\sum_{j=1}^m d_j h(B_j Z^{*}_N ) \end{align*} where $Z^{*}_N\sim N(0,\Cov(Z_N))$. Our arbitrary choice of $K$ and $\epsilon$ together with continuity of determinants implies \begin{align*} &\limsup_{n\to\infty} \max_{Z_n \in \Pi(Z_{n,1},\dots, Z_{n,k})} \sum_{j=1}^m d_j h(B_j Z_n) \geq\max_{K \in \Pi(K_1, \dots, K_k) }\sum_{j=1}^m \frac{d_j}{2}\log \left( (2\pi e)^{\dim(E^j)} \det( B_j K B_j^T )\right). \end{align*} Invoking the entropic central limit theorem, and using the fact that $(Z_{n,i})_{i=1}^k$ are extremal in \eqref{eq:MainEntropyCouplingInequality} for each $n\geq 1$, we conclude \begin{align*} \sum_{i=1}^k \frac{c_i}{2}\log \left( (2\pi e)^{\dim(E_i)} \det( K_i )\right) &= \lim_{n\to\infty}\sum_{i=1}^k c_i h(Z_{n,i})\\ &=\lim_{n\to\infty} \max_{Z_n \in \Pi(Z_{n,1},\dots, Z_{n,k})} \sum_{j=1}^m d_j h(B_j Z_n) + D(\mathbf{c},\mathbf{d},\mathbf{B})\\ &\geq\max_{K \in \Pi(K_1, \dots, K_k) }\sum_{j=1}^m \frac{d_j}{2}\log \left( (2\pi e)^{\dim(E^j)} \det( B_j K B_j^T )\right)+ D(\mathbf{c},\mathbf{d},\mathbf{B}). \end{align*} Thus, by definitions, we have equality throughout, and $(\mathbf{c},\mathbf{d},\mathbf{B})$ is Gaussian-extremizable. \end{proof} \subsubsection{Properties of the max-entropy term} Let us briefly make a few technical observations related to the max-entropy quantity that appears in \eqref{eq:MainEntropyCouplingInequality}. First, we quote a technical lemma that will be needed several times. A proof can be found in \cite[Lemma A2]{liu2018forward}. \begin{lemma} \label{lem:WeakSemicontH} Let $(\mu_n)_{n\geq 1} \subset\mathcal{P}(E)$ converge in distribution to $\mu$. If $\sup_{n\geq 1}\int_E |x|^2 d\mu_n < \infty$, then $$ \limsup_{n\to\infty}h(\mu_n) \leq h(\mu). 
$$ \end{lemma} Now, we point out that the max-entropy term is well-defined as a maximum. \begin{proposition}\label{prop:MaxEntropyCouplingExists} Fix $(\mathbf{d},\mathbf{B})$ and $X_i\in \mathcal{P}(E_i)$, $1\leq i \leq k$. The function $$ X \in \Pi(X_1,\dots, X_k)\longmapsto \sum_{j=1}^m d_j h(B_j X) $$ achieves a maximum at some $X^* \in \Pi(X_1,\dots, X_k)$. Moreover, if each $X_i$ is Gaussian, then $X^*$ is Gaussian.\end{proposition} \begin{proof} We have $\sup_{X \in \Pi(X_1,\dots, X_k)}\EE|B_j X|^2 < \infty$ for each $1\leq j \leq m$ since each $X_i$ has bounded second moments. The second moment constraint also implies $\Pi(X_1,\dots, X_k)$ is tight, and it is easily checked to be closed in the weak topology. Thus, Prokhorov's theorem ensures $\Pi(X_1,\dots, X_k)$ is sequentially compact. So, if $(X^{(n)})_{n\geq 1}\subset \Pi(X_1,\dots, X_k)$ is such that $$ \lim_{n\to\infty}\sum_{j=1}^m d_j h(B_j X^{(n)}) = \sup_{X \in \Pi(X_1,\dots, X_k)} \sum_{j=1}^m d_j h(B_j X), $$ we can assume $X^{(n)}\to X^* \in \Pi(X_1,\dots, X_k)$ weakly, by passing to a subsequence if necessary. This implies $B_j X^{(n)}\to B_jX^*$ weakly for each $1\leq j\leq m$. The first claim follows by an application of Lemma \ref{lem:WeakSemicontH}. The second claim now follows from the first, together with the fact that Gaussians maximize entropy under a covariance constraint. \end{proof} Next, if $X_i \sim N(0,K_i)$ for $K_i \in \pd(E_i)$, $1\leq i \leq k$, then the entropy maximization in \eqref{eq:MainEntropyCouplingInequality} is equivalent to the following optimization problem \begin{align} (K_i)_{i=1}^k \mapsto \max_{K \in \Pi(K_1, \dots, K_k) } \sum_{j=1}^md_j \log \det(B_j K B_j^T). \label{eq:maxCouplingsContPro} \end{align} This maximization enjoys a certain strong duality property, which is a consequence of the Fenchel--Rockafellar theorem. The following can be found in \cite[Theorem 2.8]{CourtadeLiu21}. 
\begin{theorem}\label{thm:FRdualQuadraticForms} Fix $(\mathbf{d},\mathbf{B})$. For any $K_i \in \pd(E_i)$, $1\leq i\leq k$, it holds that \begin{align} &\max_{K \in \Pi(K_1, \dots, K_k) }\sum_{j=1}^m d_j \log \det \left( B_j K B_j^T \right) + \sum_{j=1}^m d_j \dim(E^j) \notag \\ &=\inf_{(U_i,V_j)_{1\leq i\leq k, 1\leq j \leq m}} \left( \sum_{i=1}^k \langle U_i, K_i\rangle_{\HS} - \sum_{j=1}^m d_j \log \det V_j\right) , \label{FenchelMaxCouplingIntro} \end{align} where the infimum is over $U_i\in \pd(E_i),1\leq i\leq k$ and $V_j\in \pd(E^j), 1\leq j\leq m$ satisfying \begin{align} \sum_{j=1}^m d_j B_j^T V_j B_j \leq \operatorname{diag}( U_1, \dots, U_k). \label{eq:MinMaxOperatorHypothesisIntro} \end{align} \end{theorem} \begin{corollary}\label{cor:ContinuityOfMaxDet} The function in \eqref{eq:maxCouplingsContPro} is continuous on $\prod_{i=1}^k \pd(E_i)$. \end{corollary} \begin{proof} By \eqref{FenchelMaxCouplingIntro}, we see that the mapping in \eqref{eq:maxCouplingsContPro} is a pointwise infimum of functions that are affine in $(K_i)_{i=1}^k$, so it follows that it is upper semi-continuous on $\prod_{i=1}^k \pd(E_i)$. On the other hand, each $K\in \Pi(K_1, \dots, K_k)$ can be factored as $K= K^{1/2}_d \Sigma K^{1/2}_d$, for $K^{1/2}_d := \operatorname{diag}(K^{1/2}_1, \dots, K^{1/2}_k)$ and $\Sigma\in \Pi(\id_{E_1}, \dots, \id_{E_k})$. Since the map $K_i \mapsto K_i^{1/2}$ is continuous on $ \pd(E_i)$, and determinants are also continuous, it follows that \eqref{eq:maxCouplingsContPro} is a pointwise supremum of continuous functions. As such, it is lower semi-continuous, completing the proof. 
\end{proof} \subsubsection{Convexity properties of $D_g(\mathbf{c},\mathbf{d},\mathbf{B})$} \label{sec:GeoConvex} For $(\mathbf{d},\mathbf{B})$ fixed, define the function $F: \mathbb{R}^k \times\prod_{i=1}^k\pd (E_i) \to \mathbb{R}\cup\{-\infty\}$ via \begin{align*} F\left(\mathbf{c}, (K_i)_{i=1}^k\right) &:= \max_{K \in \Pi(K_1, \dots, K_k)} \sum_{j=1}^md_j \log \det(B_j K B_j^T)-\sum_{i=1}^k c_i \log \det(K_i) . \end{align*} The motivation for the above definition is that we have \begin{align} -2 D_g(\mathbf{c},\mathbf{d},\mathbf{B}) = \inf_{ (K_i)_{i=1}^k \in \prod_{i=1}^k\pd (E_i)} F\left(\mathbf{c}, (K_i)_{i=1}^k\right)\label{eq:DgFromF} \end{align} by definition of $D_g(\mathbf{c},\mathbf{d},\mathbf{B})$ and the fact that the scaling condition \eqref{eq:ScalingCond} is a necessary condition for finiteness of $D_g(\mathbf{c},\mathbf{d},\mathbf{B})$. The optimization problem above is not convex in the $K_i$'s, however it is \emph{geodesically-convex}. This property was mentioned to the second named author by Jingbo~Liu in a discussion of the geodesically convex formulation of the Brascamp--Lieb constant \cite{liu2019private,Sra2018}. We assume the following argument, which extends that for the Brascamp--Lieb constant, was what he had in mind, so we credit the observation to him. Let us first explain what is meant by geodesic convexity. Given a metric space $(M,\rho)$ and points $x,y\in M$, a geodesic is a path $\gamma : [0,1] \to M$ with $\gamma(0)=x$, $\gamma(1)=y$ and $$ d_M\left( \gamma(t_1),\gamma(t_2) \right) = |t_1-t_2| \rho(x,y), \hspace{5mm}\forall t_1,t_2\in [0,1]. $$ A function $f:M\to \mathbb{R}$ is {geodesically-convex} if, for any geodesic $\gamma$, $$ f(\gamma(t)) \leq t f(\gamma(0)) + (1-t) f(\gamma(1)), \hspace{5mm}\forall t\in [0,1]. $$ The space $(M,\rho)$ is a unique geodesic metric space if every two points $x,y\in M$ are joined by a unique geodesic. This is relevant to us as follows. 
For a Euclidean space $E$, the space $(\pd (E),\delta_2)$ is a unique geodesic metric space, where for $A,B\in \pd (E)$, $$ t\in [0,1] \mapsto A\#_tB: = A^{1/2}(A^{-1/2}B A^{-1/2})^t A^{1/2} $$ is the unique geodesic joining $A$ and $B$ with respect to the metric $$ \delta_2(A,B):= \left( \sum_{i=1}^{\dim(E)} \log(\lambda_i(A^{-1}B))^2 \right)^{1/2} . $$ The matrix $A\#B := A\#_{1/2} B$ is referred to as the {geometric mean} of $A,B\in \pd (E)$. The topology on $\pd (E)$ generated by $\delta_2$ is the usual one, in the sense that $\delta_2(A_n,A)\to 0$ if and only if $\|A_n - A\|_{\HS}\to 0$. Hence, there are no subtleties with regards to the notions of continuity, etc. In particular, if $f:\pd (E)\to \mathbb{R}$ is continuous and {geodesically midpoint-convex}, i.e., $$ f(A\#B) \leq \frac{1}{2} f(A) + \frac{1}{2} f(B), \hspace{5mm}A,B\in \pd (E), $$ then it is geodesically convex. \begin{theorem} \label{thm:FunctionalPropertiesDg}Fix $(\mathbf{d},\mathbf{B})$. \begin{enumerate}[(i)] \item The function $\mathbf{c} \mapsto D_g(\mathbf{c},\mathbf{d},\mathbf{B})$ is convex and lower semi-continuous. \item For fixed $\mathbf{c}$, the function $(K_i)_{i=1}^k \mapsto F\left(\mathbf{c}, (K_i)_{i=1}^k\right)$ is geodesically-convex and continuous on $\prod_{i=1}^k\pd (E_i)$. \end{enumerate} \end{theorem} \begin{remark} It may be the case that $D_g(\mathbf{c},\mathbf{d},\mathbf{B})=+\infty$ for each $\mathbf{c}$, e.g., if some $B_j$ fails to be surjective. \end{remark} Before the proof, we recall a few basic facts about the geometric mean $A\#B$. A linear transformation $\Phi : \sym(E)\to \sym(E')$ is said to be \emph{positive} if it sends $\pd(E)$ into $\pd(E')$. \begin{proposition}\label{prop:GeoMeanProperties} Let $E,E'$ be Euclidean spaces. For $A_1,A_2,B_1,B_2 \in \pd (E)$, the following hold. \begin{enumerate}[(i)] \item (Monotone Property) If $A_1\geq B_1$ and $A_2\geq B_2$, then $(A_1\#A_2)\geq (B_2\#B_2)$. 
\item (Cauchy--Schwarz) We have $$ \langle A_1,B_1 \rangle_{\HS}+ \langle A_2,B_2 \rangle_{\HS} \geq 2 \langle (A_1\#A_2), (B_1\#B_2)\rangle_{\HS}. $$ \item (Ando's inequality) If $\Phi : \sym (E)\to \sym (E')$ is a positive linear map, then $$ \Phi(A_1\#A_2) \leq \Phi(A_1)\#\Phi(A_2).$$ \item (Geodesic linearity of $\log\det$) It holds that $$ \log\det(A_1 \# A_2) = \frac{1}{2}\log\det(A_1) + \frac{1}{2}\log\det(A_2). $$ \end{enumerate} \end{proposition} \begin{proof} The monotonicity property can be found, e.g., in \cite[p.~802]{Lawson2001}. By a change of variables using \cite[Lem.~3.1]{Lawson2001} and \cite[Cor.~2.1(ii)]{ando79}, it suffices to prove (ii) under the assumption that $B_1 = \id_E$. In particular, Cauchy--Schwarz gives \begin{align*} |\langle (A_1\#A_2), (\id_E\#B_2)\rangle_{\HS} |^2&= |\langle (A_2^{-1/2} A_1 A_2^{-1/2})^{1/2}A_2^{1/2} , A_2^{1/2}B_2^{1/2}\rangle_{\HS} |^2\\ &\leq \| (A_2^{-1/2} A_1 A_2^{-1/2})^{1/2}A_2^{1/2} \|_{\HS}^2 \| A_2^{1/2}B_2^{1/2}\|_{\HS}^2 \\ &=\langle A_1, \id_E \rangle_{\HS} \langle A_2, B_2 \rangle_{\HS} . \end{align*} Thus, the claim follows by taking square roots of both sides and invoking the AM--GM inequality $\sqrt{ab}\leq (a+b)/2$ for $a,b\geq 0$. Ando's inequality can be found in \cite[Thm.~3(i)]{ando79}. Claim (iv) is trivial. \end{proof} Theorem \ref{thm:FunctionalPropertiesDg} now follows as an easy consequence of the above properties and Theorem \ref{thm:FRdualQuadraticForms}. \begin{proof}[Proof of Theorem \ref{thm:FunctionalPropertiesDg}] Claim (i) follows immediately from \eqref{eq:DgFromF}, since $-D_g(\mathbf{c},\mathbf{d},\mathbf{B})$ is a pointwise infimum of functions that are affine in $\mathbf{c}$. To prove (ii), we note that geodesic-linearity of $\log\det$ implies it suffices to show geodesic midpoint-convexity of the continuous (by Corollary \ref{cor:ContinuityOfMaxDet}) function \begin{align} (K_i)_{i=1}^k \mapsto \max_{K \in \Pi(K_1, \dots, K_k) } \sum_{j=1}^m d_j \log \det(B_j K B_j^T).
\label{eq:maxCouplingsCont} \end{align} Invoking Theorem \ref{thm:FRdualQuadraticForms}, this is the same as establishing geodesic midpoint-convexity of \begin{align} (K_i)_{i=1}^k \mapsto \inf_{(U_i,V_j)_{1\leq i\leq k, 1\leq j \leq m}} \left( \sum_{i=1}^k \langle U_i, K_i\rangle_{\HS} - \sum_{j=1}^m d_j \log \det V_j\right) , \label{FenchelMaxCouplingGC} \end{align} where the infimum is over $U_i\in \pd(E_i),1\leq i\leq k$ and $V_j\in \pd(E^j), 1\leq j\leq m$ satisfying \begin{align} \operatorname{diag}( U_1, \dots, U_k) \geq \sum_{j=1}^m d_j B_j^T V_j B_j . \label{eq:MinMaxOperatorHypothesisGC} \end{align} For $\ell\in \{1,2\}$, let $U^{(\ell)}_i\in \pd(E_i),1\leq i\leq k$ and $V^{(\ell)}_j\in \pd(E^j), 1\leq j\leq m$ satisfy \eqref{eq:MinMaxOperatorHypothesisGC} \emph{with strict inequality}. As such, there exists $\epsilon>0$ sufficiently small such that \begin{align*} \operatorname{diag}( U^{(\ell)}_1, \dots, U^{(\ell)}_k) \geq &\sum_{j=1}^m d_j B_j^T V^{(\ell)}_j B_j +\epsilon \sum_{j=1}^m \Tr(V^{(\ell)}_j) \id_{E_0}, \hspace{5mm}\ell\in \{1,2\}.\end{align*} Define the positive linear map $\Phi :\sym(E^0) \to \sym(E_0)$ via $$ \Phi(V) := \sum_{j=1}^m d_j B_j^T \pi_{E^j}V\pi_{E^j}^T B_j + \epsilon \Tr(V) \id_{E_0},\hspace{5mm}V\in \sym(E^0). $$ By the monotone property and Ando's inequality in Proposition \ref{prop:GeoMeanProperties}, \begin{align*} \operatorname{diag}( U^{(1)}_1\#U^{(2)}_1, \dots, U^{(1)}_k\#U^{(2)}_k) &\geq \Phi\left( \operatorname{diag}( V^{(1)}_1, \dots, V^{(1)}_m) \right) \#\Phi\left( \operatorname{diag}( V^{(2)}_1, \dots, V^{(2)}_m) \right) \\ &\geq \Phi\left( \operatorname{diag}( V^{(1)}_1\#V^{(2)}_1 , \dots, V^{(1)}_m\#V^{(2)}_m) \right) \geq \sum_{j=1}^m d_j B_j^T (V^{(1)}_j\#V^{(2)}_j) B_j . \end{align*} In particular, $(U^{(1)}_i\# U^{(2)}_i)\in \pd(E_i),1\leq i\leq k$ and $(V^{(1)}_j\# V^{(2)}_j)\in \pd(E^j)$, $1\leq j\leq m$ satisfy \eqref{eq:MinMaxOperatorHypothesisGC}.
Therefore, let $ (K^{(\ell)}_i)_{i=1}^k\in \prod_{i=1}^k\pd (E_i)$ and use Proposition \ref{prop:GeoMeanProperties} to write \begin{align*} &\frac{1}{2}\sum_{\ell\in \{1,2\}} \left( \sum_{i=1}^k \langle U^{(\ell)}_i, K^{(\ell)}_i\rangle_{\HS} - \sum_{j=1}^m d_j \log \det V^{(\ell)}_j\right)\\ &\geq \sum_{i=1}^k \langle ( U^{(1)}_i\#U^{(2) }_i ) , ( K^{(1)}_i\#K^{(2) }_i )\rangle_{\HS} - \sum_{j=1}^m d_j \log \det (V^{(1)}_j \# V^{(2)}_j) \\ &\geq \inf_{(U_i,V_j)_{1\leq i\leq k, 1\leq j \leq m}} \left( \sum_{i=1}^k \langle U_i, ( K^{(1)}_i\#K^{(2) }_i ) \rangle_{\HS} - \sum_{j=1}^m d_j \log \det V_j\right) . \end{align*} By continuity of the objective in \eqref{FenchelMaxCouplingGC} with respect to the $U_i$'s, the value of the infimum in \eqref{FenchelMaxCouplingGC} remains unchanged if we take the infimum over $U_i$'s and $V_j$'s satisfying \eqref{eq:MinMaxOperatorHypothesisGC} with strict inequality. Hence, by the arbitrary choice of $U^{(\ell)}_i\in \pd(E_i),1\leq i\leq k$ and $V^{(\ell)}_j\in \pd(E^j), 1\leq j\leq m$ subject to \eqref{eq:MinMaxOperatorHypothesisGC} with strict inequality, geodesic midpoint-convexity of \eqref{FenchelMaxCouplingGC} is proved. \end{proof} \subsubsection{Sion's theorem for geodesic metric spaces} We will need the following version of Sion's minimax theorem, found in \cite{Zhang2022}. \begin{theorem}[Sion's theorem in geodesic metric spaces]\label{thm:SionGeodesic} Let $(M,d_M)$ and $(N,d_N)$ be finite-dimensional unique geodesic metric spaces. Suppose $\mathcal{X}\subset M$ is a compact, geodesically convex set and $\mathcal{Y} \subset N$ is a geodesically convex set. If the following conditions hold for $f : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$: \begin{enumerate}[1.]
\item $f (\cdot, y)$ is geodesically-convex and \lsc for each $y\in \mathcal{Y}$; \item $f (x, \cdot)$ is geodesically-concave and \usc for each $x\in \mathcal{X}$, \end{enumerate} then $$ \min_{x\in \mathcal{X}} \sup_{y\in \mathcal{Y}} f(x,y) = \sup_{y\in \mathcal{Y}} \min_{x\in \mathcal{X}} f(x,y). $$ \end{theorem} \subsection{Unconstrained comparisons} With all the pieces in place, we can take a big step toward proving Theorem \ref{thm:GaussianComparisonConstrained} by first establishing the result in the unconstrained case. Namely, the goal of this section is to prove the following. \begin{theorem}\label{thm:GaussianComparisons} Fix $(\mathbf{d},\mathbf{B})$. For any $X_i \in \mathcal{P}(E_i)$, $1\leq i \leq k$, there exist $Z_i \in \mathcal{G}(E_i)$ with $h(Z_i)= h(X_i)$ for $1\leq i\leq k$ such that \begin{align} \max_{X\in \Pi(X_1, \dots, X_k)}\sum_{j=1}^m d_j h(B_j X) \geq \max_{Z\in \Pi(Z_1, \dots, Z_k)}\sum_{j=1}^m d_j h(B_j Z). \label{eq:maxEntComparison} \end{align} \end{theorem} \begin{remark} It is part of the theorem that each maximum is attained. \end{remark} Before we start the proof, let us first describe the high-level idea. To do this, recall that Lieb's form \cite{lieb1978} of the EPI is as follows: For independent random variables $X_1,X_2\in \mathcal{P}(\R)$ and any $\lambda\in (0,1)$, \begin{align} h(\sqrt{\lambda} X_1 + \sqrt{1-\lambda} X_2 )\geq \lambda h(X_1) + (1-\lambda) h(X_2). \label{eq:introLieb} \end{align} Motivated by the similarity between the entropy power inequality and the Brunn--Minkowski inequality, Costa and Cover \cite{costa1984similarity} reformulated \eqref{eq:introLieb} as the following concise Gaussian comparison\footnote{The comparison also holds in the multidimensional setting, distinguishing it from the Zamir--Feder inequality.}.
\begin{proposition}[Comparison form of Shannon--Stam inequality] For independent random variables $X_1, X_2 \in \mathcal{P}(\R)$, we have \begin{align} h(X_1 + X_2)\geq h(Z_1 + Z_2) ,\label{eq:EPIgaussComparison} \end{align} where $Z_1,Z_2$ are independent Gaussian random variables with variances chosen so that $h(Z_i) = h(X_i)$. \end{proposition} To understand how this comes about, observe that a change of variables in \eqref{eq:introLieb} yields the equivalent formulation $$ c h(X_1) + (1-c) h(X_2) + \frac{1}{2}h_2(c) \leq h(X_1 + X_2),\hspace{5mm}\mbox{for all $c\in [0,1]$,} $$ where $h_2(c):= - c\log(c) - (1-c)\log(1-c)$ is the binary entropy function. Since the RHS does not depend on $c$, we may maximize the LHS over $c\in [0,1]$, yielding \eqref{eq:EPIgaussComparison}. Now, we draw the reader's attention to the formal similarity to \eqref{eq:MainEntropyCouplingInequality}. Namely, we can apply the same logic to bound \begin{align} \sup_{\mathbf{c} \geq 0} \left\{ \sum_{i=1}^k c_i h(X_i) - D_g(\mathbf{c},\mathbf{d},\mathbf{B}) \right\} \leq \max_{X\in \Pi(X_1, \dots, X_k)}\sum_{j=1}^m d_j h(B_j X) . \label{eq:MainEntropyCouplingInequalityToOptimize} \end{align} The difficulty encountered is that, unlike $c\mapsto h_2(c)$, the function $\mathbf{c}\mapsto D_g(\mathbf{c},\mathbf{d},\mathbf{B})$ is not explicit, complicating the optimization problem to be solved. Nevertheless, the task can be accomplished with all the ingredients we have at hand. \begin{proof}[Proof of Theorem \ref{thm:GaussianComparisons}] We start by noting each maximum is attained due to Proposition \ref{prop:MaxEntropyCouplingExists}. 
Now, without loss of generality, we can assume $\mathbf{d}$ is scaled so that \begin{align} \sum_{j=1}^m d_j \dim(E^j) = 1.\label{eq:normalized} \end{align} Also, since there are no qualifications on the linear maps in $\mathbf{B}$, a simple rescaling argument reveals that we can assume without loss of generality that $h(X_i)=\frac{\dim(E_i)}{2}\log(2\pi e)$; this will allow us to consider $Z_i\sim N(0,K_i)$ with $\det(K_i)=1$ for each $1\leq i\leq k$. Thus, by Theorem \ref{thm:FRBLentropy}, we have \begin{align} \max_{X\in \Pi(X_1, \dots, X_k)}\sum_{j=1}^m d_j h(B_j X) &\geq \sum_{i=1}^k c_i h(X_i) - D_g(\mathbf{c},\mathbf{d},\mathbf{B})=\frac{1}{2}\log(2\pi e)\sum_{i=1}^k c_i \dim(E_i) - D_g(\mathbf{c},\mathbf{d},\mathbf{B}) \label{eq:quantityToBound} \end{align} for any $\mathbf{c}$. Define the simplex $$A := \left\{\mathbf{c}\geq 0 : \sum_{i=1}^k c_i \dim(E_i) = \sum_{j=1}^m d_j \dim(E^j) =1 \right\},$$ which is compact and convex. By Theorem \ref{thm:FRBLentropy}, we have $D_g(\mathbf{c},\mathbf{d},\mathbf{B})<\infty$ only if $\mathbf{c}\in A$, so our task in maximizing the RHS of \eqref{eq:quantityToBound} is to compute $$ \max_{\mathbf{c}\in A}- D_g(\mathbf{c},\mathbf{d},\mathbf{B}) = -\min_{\mathbf{c}\in A} D_g(\mathbf{c},\mathbf{d},\mathbf{B}), $$ where the use of $\max$ and $\min$ is justified, since $\mathbf{c} \mapsto D_g(\mathbf{c},\mathbf{d},\mathbf{B})$ is \lsc by Theorem \ref{thm:FunctionalPropertiesDg} and $A$ is compact. For $\mathbf{c}\in A$ and $(K_1,\dots,K_k)\in \prod_{i=1}^k\pd (E_i)$, define $$ F\left(\mathbf{c}, (K_i)_{i=1}^k\right) := \max_{K \in \Pi(K_1, \dots, K_k)} \sum_{j=1}^m d_j \log \det(B_j K B_j^T) - \sum_{i=1}^k c_i \log \det(K_i), $$ which is the same as that in \eqref{eq:DgFromF}. Theorem \ref{thm:FunctionalPropertiesDg} ensures that $F$ satisfies the hypotheses of Theorem \ref{thm:SionGeodesic}. 
Thus, by an application of the latter and definition of $D_g(\mathbf{c},\mathbf{d},\mathbf{B})$, we have \begin{align*} \max_{\mathbf{c}\in A}- 2D_g(\mathbf{c},\mathbf{d},\mathbf{B}) &= \max_{\mathbf{c}\in A}~~ \inf_{ (K_i)_{i=1}^k \in \prod_{i=1}^k\pd (E_i)} F\left(\mathbf{c}, (K_i)_{i=1}^k\right)\\ &=\!\!\!\inf_{ (K_i)_{i=1}^k \in \prod_{i=1}^k\pd (E_i)} ~\max_{\mathbf{c}\in A} F\left(\mathbf{c}, (K_i)_{i=1}^k\right)\\ &=\!\!\!\inf_{ (K_i)_{i=1}^k \in \prod_{i=1}^k\pd (E_i)} ~\max_{K \in \Pi(K_1, \dots, K_k)} \sum_{j=1}^m d_j \log \det(B_j K B_j^T) - \min_{1 \leq i \leq k}\!\!\frac{\log\det(K_i)}{\dim(E_i)}\\ &=\!\!\! \inf_{ \substack{ (K_i)_{i=1}^k \in \prod_{i=1}^k\pd (E_i) :\\ \min_{1\leq i \leq k} \det(K_i) = 1}} ~\max_{K \in \Pi(K_1, \dots, K_k)} \sum_{j=1}^m d_j \log \det(B_j K B_j^T) , \end{align*} where the last line made use of the observation that the function $$ (K_i)_{i=1}^k \mapsto \max_{K \in \Pi(K_1, \dots, K_k)} \sum_{j=1}^m d_j \log \det(B_j K B_j^T) - \min_{1 \leq i \leq k}\!\!\frac{\log\det(K_i)}{\dim(E_i)} $$ is invariant to rescaling $(K_i)_{i=1}^k \mapsto (\alpha K_i)_{i=1}^k$ for $\alpha >0$ by \eqref{eq:normalized}. Now, invoking Theorem \ref{thm:FRdualQuadraticForms}, we have \begin{align*} & \inf_{ \substack{ (K_i)_{i=1}^k \in \prod_{i=1}^k\pd (E_i) :\\ \min_{1\leq i \leq k} \det(K_i) = 1}} ~\max_{K \in \Pi(K_1, \dots, K_k)} \sum_{j=1}^m d_j \log \det(B_j K B_j^T)\\ &= \inf_{ \substack{ (K_i)_{i=1}^k \in \prod_{i=1}^k\pd (E_i) :\\ \min_{1\leq i \leq k} \det(K_i) = 1}} \inf_{(U_i)_{i=1}^k,(V_j)_{j=1}^m} \left( \sum_{i=1}^k \langle U_i, K_i\rangle_{\HS} - \sum_{j=1}^m d_j \log \det V_j\right), \end{align*} where the second infimum is over all $U_i\in \pd (E_i),1\leq i\leq k$ and $V_j\in \pd (E^j), 1\leq j\leq m$ satisfying \begin{align*} \sum_{j=1}^m d_j B_j^T V_j B_j \leq \operatorname{diag}( U_1, \dots, U_k). 
\end{align*} Written in this way, it evidently suffices to consider $\det(K_i) = 1$ for all $1\leq i\leq k$ in the last line, so we conclude \begin{align} \max_{\mathbf{c}\in A}- 2D_g(\mathbf{c},\mathbf{d},\mathbf{B}) = \inf_{ \substack{ (K_i)_{i=1}^k \in \prod_{i=1}^k\pd (E_i) :\\ \det(K_i) = 1, 1\leq i\leq k}} ~\max_{K \in \Pi(K_1, \dots, K_k)} \sum_{j=1}^m d_j \log \det(B_j K B_j^T). \label{matrixIdent} \end{align} Now, let $\mathbf{c^*} \in \arg\min_{\mathbf{c}\in A} D_g(\mathbf{c},\mathbf{d},\mathbf{B})$. By \eqref{eq:quantityToBound} and \eqref{eq:normalized}, we have \begin{align} \max_{X\in \Pi(X_1, \dots, X_k)}\sum_{j=1}^m d_j h(B_j X) &\geq \frac{1}{2}\log(2\pi e) - D_g(\mathbf{c^*},\mathbf{d},\mathbf{B}). \label{eq:DependsOnExtremizability} \end{align} If the LHS of \eqref{eq:DependsOnExtremizability} is equal to $-\infty$, then it is easy to see that one of the $B_j$'s must fail to be surjective. Indeed, suppose each $B_j$ is surjective and factor $B_j = R_j Q_j$, where $Q_j$ has orthonormal rows and $R_j$ is full rank. Letting $Q^{\perp}_j$ denote the matrix with orthonormal rows and rowspace equal to the orthogonal complement of the rowspace of $Q_j$, for the independent coupling $X$ we have $$ \sum_{i=1}^k h(X_i) = h(X) =h(Q_j X, Q_j^{\perp} X) \leq h(Q_j X) + h(Q_j^{\perp} X).$$ Since $h(Q_j^{\perp} X)$ is bounded from above due to finiteness of second moments and the LHS is finite by assumption, $h(Q_j X)$ is finite, and so is $h(B_j X)$. Therefore, \eqref{eq:maxEntComparison} holds trivially if the LHS of \eqref{eq:DependsOnExtremizability} is equal to $-\infty$, so we assume henceforth that the LHS of \eqref{eq:DependsOnExtremizability} is finite. 
If $(\mathbf{c^*},\mathbf{d},\mathbf{B})$ is extremizable, then by Theorem \ref{thm:extImpliesGext} and \eqref{matrixIdent}, there exist Gaussians $Z^*_i\sim N(0,K_i)$ with $\det(K_i)=1$ such that \begin{align*} \max_{X\in \Pi(X_1, \dots, X_k)}\sum_{j=1}^m d_j h(B_j X) &\geq \frac{1}{2}\log(2\pi e) - D_g(\mathbf{c^*},\mathbf{d},\mathbf{B})\\ &=\max_{Z\in \Pi(Z^*_1, \dots, Z^*_k)}\sum_{j=1}^m d_j h(B_j Z), \end{align*} where we used the identity $\frac{1}{2}\log(2\pi e) = \sum_{i=1}^k c_i^* h(X_i) = \sum_{i=1}^k c_i^* h(Z^*_i)$. On the other hand, if $(\mathbf{c^*},\mathbf{d},\mathbf{B})$ is not extremizable, then we have strict inequality in \eqref{eq:DependsOnExtremizability}, and it follows by \eqref{matrixIdent} that there are Gaussians $Z_i\sim N(0,K_i)$ with $\det(K_i)=1$ such that \eqref{eq:maxEntComparison} holds (with strict inequality, in fact). \end{proof} \subsection{Proof of Theorem \ref{thm:GaussianComparisonConstrained}} With Theorem \ref{thm:GaussianComparisons} at our disposal, it is a straightforward matter to self-strengthen it to produce Theorem \ref{thm:GaussianComparisonConstrained}. First, observe that lower semicontinuity of relative entropy implies $X\in \Pi(X_1, \dots, X_k) \mapsto I_S(X)$ is weakly lower semicontinuous, and therefore $\Pi(X_1, \dots, X_k;\nu)$ is a compact subset of $\Pi(X_1, \dots, X_k)$ when equipped with the weak topology. Hence, repeating the argument in the proof of Proposition \ref{prop:MaxEntropyCouplingExists}, we find that each maximum in the statement of the theorem is achieved.
Now, by the method of Lagrange multipliers, \begin{align*} \max_{X\in \Pi(X_1, \dots, X_k; \nu)} \sum_{j=1}^m d_j h(B_j X) &= \max_{X\in \Pi(X_1, \dots, X_k)} ~\inf_{\lambda\geq 0} \left( \sum_{j=1}^m d_j h(B_j X) - \sum_{S: \nu(S)<\infty } \lambda(S) (I_S(X) - \nu(S))\right) \\ &= \inf_{\lambda\geq 0}~\max_{X\in \Pi(X_1, \dots, X_k)} \underbrace{\left( \sum_{j=1}^m d_j h(B_j X) -\sum_{ S: \nu(S)<\infty } \lambda(S) (I_S(X) - \nu(S))\right)}_{=:G(\lambda, X)} , \end{align*} where the infimum is over functions $\lambda : 2^{\{1,\dots, k\}} \to [0,+\infty)$. The exchange of $\max$ and $\inf$ follows by an application of the classical Sion minimax theorem. Indeed, for any fixed $X\in \Pi(X_1, \dots, X_k )$, the function $\lambda \mapsto G(\lambda, X)$ is affine in $\lambda$. On the other hand, $\Pi(X_1, \dots, X_k)$ is a convex subset of $\mathcal{P}(E_0)$ that is compact with respect to the weak topology. For fixed $\lambda\geq 0$, the functional $X \mapsto G(\lambda, X)$ is concave and upper semicontinuous on $\Pi(X_1, \dots, X_k)$ by concavity of entropy and Lemma \ref{lem:WeakSemicontH}. Using the definition of $I_S$, for any $\lambda\geq 0$, Theorem \ref{thm:GaussianComparisons} applies to give existence of Gaussian $(Z_i)_{i=1}^k$ satisfying \begin{align*} &\max_{X\in \Pi(X_1, \dots, X_k)} \left( \sum_{j=1}^m d_j h(B_j X) - \sum_{ S: \nu(S)<\infty } \lambda(S) (I_S(X) - \nu(S))\right) \\ &\geq \max_{Z\in \Pi(Z_1, \dots, Z_k )} \left( \sum_{j=1}^m d_j h(B_j Z) - \sum_{ S: \nu(S)<\infty } \lambda(S) (I_S(Z) - \nu(S))\right) \\ &\geq \max_{Z\in \Pi(Z_1, \dots, Z_k;\nu )} \sum_{j=1}^m d_j h(B_j Z). \end{align*} The last inequality follows since we are taking the maximum over a smaller set and because $\lambda\geq 0$. This proves the theorem.
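To make the coefficient-optimization mechanism concrete in its simplest instance, the following standalone numerical sketch (NumPy; the helper names \texttt{binary\_entropy}, \texttt{best\_lower\_bound} and \texttt{gaussian\_sum\_entropy} are ours, introduced only for illustration) checks the scalar identity behind the Costa--Cover reformulation recalled earlier: maximizing $c\, h(X_1) + (1-c)\, h(X_2) + \tfrac{1}{2}h_2(c)$ over $c\in[0,1]$ recovers exactly $h(Z_1+Z_2) = \tfrac{1}{2}\log\big(e^{2h(X_1)} + e^{2h(X_2)}\big)$, the Gaussian entropy appearing in \eqref{eq:EPIgaussComparison}.

```python
import numpy as np

def binary_entropy(c):
    # h_2(c) = -c log c - (1-c) log(1-c), in nats.
    return -c * np.log(c) - (1 - c) * np.log(1 - c)

def best_lower_bound(h1, h2):
    # max over c in (0,1) of c*h1 + (1-c)*h2 + binary_entropy(c)/2,
    # approximated on a fine grid.
    c = np.linspace(1e-6, 1 - 1e-6, 200001)
    return np.max(c * h1 + (1 - c) * h2 + 0.5 * binary_entropy(c))

def gaussian_sum_entropy(h1, h2):
    # Independent Gaussians Z1, Z2 with h(Z_i) = h_i have variances
    # sigma_i^2 = e^{2 h_i} / (2 pi e), so
    # h(Z1 + Z2) = 0.5*log(2 pi e (sigma_1^2 + sigma_2^2))
    #            = 0.5*log(e^{2 h1} + e^{2 h2}).
    return 0.5 * np.log(np.exp(2 * h1) + np.exp(2 * h2))

# The optimized lower bound matches h(Z1 + Z2) to grid accuracy.
assert abs(best_lower_bound(0.3, 1.1) - gaussian_sum_entropy(0.3, 1.1)) < 1e-6
```

The grid search stands in for the closed-form optimizer $c^* = e^{2h(X_1)}/(e^{2h(X_1)}+e^{2h(X_2)})$; in the setting of the theorem just proved, the analogous optimization over $\mathbf{c}$ has no such closed form, which is why the minimax machinery above is needed.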
\section{Application: constrained multi-marginal inequalities} \label{sec:multimarginal} In this section, we introduce a constrained version of the multi-marginal inequality considered in \eqref{eq:MainEntropyCouplingInequality} and demonstrate how the results transfer almost immediately with the help of Theorem \ref{thm:GaussianComparisonConstrained}. Fix a datum $(\mathbf{c},\mathbf{d},\mathbf{B})$. For a constraint function $\nu: 2^{\{1,\dots, k\}}\to [0,+\infty]$, let $D(\mathbf{c},\mathbf{d},\mathbf{B};\nu)$ denote the smallest constant $D$ such that the inequality \begin{align} \sum_{i=1}^k c_i h(X_i) \leq \max_{X\in \Pi(X_1, \dots, X_k;\nu)}\sum_{j=1}^m d_j h(B_j X) + D \label{eq:multimarginalConst} \end{align} holds for all choices of $X_i \in \mathcal{P}(E_i)$, $1\leq i\leq k$. Call $(\mathbf{c},\mathbf{d},\mathbf{B};\nu)$ {\bf extremizable} if there are $X_i \in \mathcal{P}(E_i)$, $1\leq i\leq k$ which achieve equality in \eqref{eq:multimarginalConst} with $D = D(\mathbf{c},\mathbf{d},\mathbf{B};\nu)$. Similarly, let $D_g(\mathbf{c},\mathbf{d},\mathbf{B};\nu)$ denote the smallest constant $D$ such that \eqref{eq:multimarginalConst} holds for all Gaussian $X_i \in \mathcal{G}(E_i)$, $1\leq i\leq k$, and call $(\mathbf{c},\mathbf{d},\mathbf{B};\nu)$ {\bf Gaussian-extremizable} if there are $X_i \in \mathcal{G}(E_i)$, $1\leq i\leq k$ which achieve equality in \eqref{eq:multimarginalConst} with $D = D_g(\mathbf{c},\mathbf{d},\mathbf{B};\nu)$. The following generalizes Theorem \ref{thm:FRBLentropy} and \ref{thm:extImpliesGext} to the correlation-constrained setting. 
\begin{theorem}\label{thm:constrainedFRBLentropy} For any datum $(\mathbf{c},\mathbf{d},\mathbf{B})$ and constraint function $\nu$, \begin{enumerate}[(i)] \item $D(\mathbf{c},\mathbf{d},\mathbf{B};\nu)=D_g(\mathbf{c},\mathbf{d},\mathbf{B};\nu)$; \item $(\mathbf{c},\mathbf{d},\mathbf{B};\nu)$ is extremizable if and only if it is Gaussian-extremizable; and \item $D_g(\mathbf{c},\mathbf{d},\mathbf{B};\nu)$ is finite if and only if the scaling condition \eqref{eq:ScalingCond} and the dimension condition \eqref{eq:DimCond} hold. \end{enumerate} \end{theorem} \begin{proof} For any $X_i\in \mathcal{P}(E_i)$ and any $\mathbf{c}$, an application of Theorem \ref{thm:GaussianComparisonConstrained} ensures existence of $Z_i \in \mathcal{G}(E_i)$ with $h(Z_i)=h(X_i)$ satisfying \begin{align*} &\sum_{i=1}^k c_i h(X_i) - \max_{X\in \Pi(X_1, \dots, X_k;\nu)}\sum_{j=1}^m d_j h(B_j X)\\ &\leq \sum_{i=1}^k c_i h(Z_i) - \max_{Z\in \Pi(Z_1, \dots, Z_k;\nu)}\sum_{j=1}^m d_j h(B_j Z) \leq D_g(\mathbf{c},\mathbf{d},\mathbf{B};\nu), \end{align*} where the final inequality follows by definition of $D_g$. This establishes both (i) and (ii). As for finiteness, observe that definitions imply \begin{align} D_g(\mathbf{c},\mathbf{d},\mathbf{B}) \equiv D_g(\mathbf{c},\mathbf{d},\mathbf{B}; +\infty) \leq D_g(\mathbf{c},\mathbf{d},\mathbf{B};\nu)\leq D_g(\mathbf{c},\mathbf{d},\mathbf{B};0)\label{eq:finitenessIneq} \end{align} for any $\nu$. Now, for any $K\in \Pi(K_1, \dots, K_k)$ with $K_i\in \pd (E_i)$, $1\leq i \leq k$, observe that $$K\leq k \operatorname{diag}(K_1, \dots, K_k).$$ Indeed, for $Z\sim N(0,K)$ and $u=(u_1,\dots, u_k) \in E_0$, Jensen's inequality yields $$ u^T K u = \EE|u^T Z|^2 \leq k \sum_{i=1}^k \EE |u_i^T Z_i |^2 = k u^T \operatorname{diag}(K_1, \dots, K_k) u.
$$ This implies, for Gaussian $(Z_i)_{i=1}^k$, that $$ \max_{Z \in \Pi(Z_1,\dots, Z_k)} \sum_{j=1}^m d_j h(B_j Z) \leq \sum_{j=1}^m d_j h(B_j Z^{\mathrm{ind}}) + \log(k) \sum_{j=1}^m d_j\dim(E^j), $$ where $Z^{\mathrm{ind}}$ denotes the independent coupling of the $Z_i$'s. Thus, $$ D_g(\mathbf{c},\mathbf{d},\mathbf{B};0) \leq D_g(\mathbf{c},\mathbf{d},\mathbf{B})+ \log(k) \sum_{j=1}^m d_j\dim(E^j), $$ so that finiteness of $D_g(\mathbf{c},\mathbf{d},\mathbf{B};\nu)$ is equivalent to finiteness of $D_g(\mathbf{c},\mathbf{d},\mathbf{B})$ by \eqref{eq:finitenessIneq}. Invoking Theorem \ref{thm:FRBLentropy} completes the proof. \end{proof} When $\nu \equiv 0$, the only allowable coupling in \eqref{eq:multimarginalConst} is the independent one. Thus, we recover the main results of Anantharam, Jog and Nair \cite[Theorems 3 \& 4]{anantharam2019unifying}, which simultaneously capture the entropic Brascamp--Lieb inequalities and the EPI. When $\nu \equiv +\infty$, we immediately recover Theorems \ref{thm:FRBLentropy} and \ref{thm:extImpliesGext}. Of note, we recall from \cite{liu2018forward, CourtadeLiu21} that, by extending the duality for the Brascamp--Lieb inequalities \cite{carlen2009subadditivity}, Theorem \ref{thm:FRBLentropy} has the following equivalent functional form. \begin{theorem}\label{thm:FRBLfunctional} Fix a datum $(\mathbf{c},\mathbf{d},\mathbf{B})$. If measurable functions $f_i : E_i \to \R^+$, $1\leq i \leq k$ and $g_j : E^j \to \R^+$, $1\leq j\leq m$ satisfy \begin{align} \prod_{i=1}^k f_i^{c_i}(\pi_{E_i}(x)) \leq \prod_{j=1}^m g_j^{d_j}\left( B_j x \right)\hspace{1cm}\forall x\in E_0,\label{eq:majorization} \end{align} then \begin{align} \prod_{i=1}^k \left( \int_{E_i} f_i \right)^{c_i} \leq e^{ D_g(\mathbf{c},\mathbf{d},\mathbf{B}) } \prod_{j=1}^m \left( \int_{E^j} g_j \right)^{d_j}.\label{eq:frblFunctional} \end{align} Moreover, the constant $D_g(\mathbf{c},\mathbf{d},\mathbf{B})$ is best possible.
\end{theorem} By a suitable choice of datum $(\mathbf{c},\mathbf{d},\mathbf{B})$, this implies many geometric inequalities such as the Brascamp--Lieb inequalities \cite{brascamp1974general, brascamp1976best, lieb1990gaussian} (which include, e.g., H\"older's inequality, the sharp Young inequality, the Loomis--Whitney inequalities), the Barthe inequalities \cite{barthe1998reverse} (which include, e.g., the Pr\'ekopa--Leindler inequality, Ball's inequality \cite{ball1989volumes}), the sharp reverse Young inequality \cite{brascamp1976best}, the Chen--Dafnis--Paouris inequalities \cite{chen2015improved}, and a form of the Barthe--Wolff inequalities \cite{barthe2018positive}. Readers are referred to \cite{CourtadeLiu21} for a more detailed account of these implications and further references. The survey by Gardner also gives a clear depiction of the hierarchy implied by the Brascamp--Lieb and Barthe inequalities \cite[Fig. 1]{gardner2002brunn}. We remark that, while Theorem \ref{thm:FRBLentropy} admits the equivalent functional form given above, there is no obvious functional equivalent when $\nu$ induces nontrivial correlation constraints. In particular, the comparison \eqref{eq:maxEntComparisonConstrained} seems to be most naturally expressed in the language of entropies (even in the unconstrained case). \section{Application: Gaussian saddle point}\label{sec:saddle} The EPI has been successfully applied many times to prove coding theorems, particularly in the field of network information theory. However, it also provides the essential ingredient in establishing that a certain mutual information game admits a saddle point (see \cite{pinsker1956calculation, Ihara}, and also \cite[Problem 9.21]{coverThomas}). 
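For scalar Gaussian strategies the payoff of this game is simply $\tfrac12\log(1+P'/N')$ for signal power $P'\leq P$ and noise power $N'\leq N$, and the saddle property can be checked on a grid. A numerical sketch (powers and grids are illustrative, and only Gaussian strategies are scanned):

```python
import numpy as np

# Payoff of the mutual information game for scalar Gaussian strategies:
# I(X; X+Z) = 0.5*log(1 + P'/N') when X ~ N(0, P'), Z ~ N(0, N') independent.
P, N = 4.0, 1.0
Ps = np.linspace(1e-3, P, 400)  # admissible signal powers P' <= P
Ns = np.linspace(1e-3, N, 400)  # admissible noise powers  N' <= N
payoff = 0.5 * np.log1p(Ps[:, None] / Ns[None, :])

maxmin = payoff.min(axis=1).max()  # sup over P' of inf over N'
minmax = payoff.max(axis=0).min()  # inf over N' of sup over P'
value = 0.5 * np.log1p(P / N)      # saddle value, attained at (P, N)
```

Both orders of optimization return $\tfrac12\log(1+P/N)$, the Shannon capacity, with the optimum at full signal power and full noise power.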
Namely, for numbers $P,N\geq 0$, we have \begin{align} \sup_{P_X: \EE|X|^2\leq P} ~\inf_{P_Z: \EE|Z|^2\leq N} I(X;X+Z) = \inf_{P_Z: \EE|Z|^2\leq N} ~\sup_{P_X: \EE|X|^2\leq P} I(X;X+Z) , \notag \end{align} where the $\sup$ (resp.\ $\inf$) is over $X\sim P_X\in \mathcal{P}(\mathbb{R}^n)$ such that $\EE|X|^2\leq P$ (resp.\ $Z\sim P_Z\in \mathcal{P}(\mathbb{R}^n)$ such that $\EE|Z|^2\leq N$), and the mutual information is computed under the assumption that $X\sim P_X$ and $Z\sim P_Z$ are independent. It turns out that the game admits a Gaussian saddle point, which, together with Shannon's capacity theorem, implies that worst-case additive noise is Gaussian. In this section, we extend this saddle point property to a game with payoff given by $$ G_{\zeta}(P_X, P_Z) := \sup_{ \substack{ (X,Z)\in\Pi(P_X,P_Z):\\ I(X;Z)\leq \zeta}} I(X; X+Z), $$ for a parameter $\zeta\geq 0$, where the supremum is over couplings $(X,Z)$ with given marginals $X\sim P_X$ and $Z\sim P_Z$. Of course, by taking $\zeta = 0$, we will recover the classical saddle-point result above. This may be interpreted as a game where the signal and noise players fix their strategies $P_X$ and $P_Z$, but the signal player has the benefit during game-play of adapting their transmission using side information obtained about the noise player's action. \begin{theorem}\label{thm:SaddlePt} For $0< P,N < \infty$ and $\zeta\geq 0$, \begin{align*} &\sup_{P_{X}: \EE|X|^2\leq P} ~\inf_{P_{Z}: \EE|Z|^2\leq N} G_{\zeta}(P_X, P_Z) = \inf_{P_{Z}: \EE|Z|^2\leq N} ~\sup_{P_{X}: \EE|X|^2\leq P} G_{\zeta}(P_X, P_Z) . \end{align*} Moreover, $P_X = N\left(0,\tfrac{P}{n}\id_{\mathbb{R}^n}\right)$ and $P_Z = N\left(0,\tfrac{N}{n}\id_{\mathbb{R}^n}\right)$ is a saddle point. \end{theorem} \begin{proof}[Proof of Theorem \ref{thm:SaddlePt}] In a slight abuse of notation, we will write $ \Pi(X_1, X_2; \zeta)$ to denote couplings of $X_1,X_2$ satisfying $I(X_1;X_2)\leq \zeta$.
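As a numerical sanity check on the quantities about to appear (a sketch only: it restricts to scalar, jointly Gaussian couplings, for which $I(X;Z) = -\tfrac12\log(1-\rho^2)$, so the constraint $I(X;Z)\leq\zeta$ caps the correlation at $\rho_{\max}=\sqrt{1-e^{-2\zeta}}$, and the maximum over $\rho$ matches the lower bound of Theorem \ref{thm:depEPI} plus $\zeta$):

```python
import numpy as np

# G_zeta for X ~ N(0,P), Z ~ N(0,N), restricted to jointly Gaussian
# couplings parametrized by the correlation rho.
P, N, zeta = 4.0, 1.0, 0.5
rho_max = np.sqrt(1.0 - np.exp(-2.0 * zeta))       # I(X;Z) <= zeta cap
rhos = np.linspace(-rho_max, rho_max, 2001)

S = P + N + 2.0 * rhos * np.sqrt(P * N)            # Var(X + Z) per coupling
# I(X; X+Z) = h(X+Z) - h(Z) + I(X;Z) for each admissible coupling.
payoff = 0.5 * np.log(S / N) - 0.5 * np.log(1.0 - rhos**2)

grid_max = payoff.max()                            # attained at rho = rho_max
closed_form = 0.5 * np.log(1.0 + P / N
                           + 2.0 * np.sqrt((1.0 - np.exp(-2.0 * zeta)) * P / N)) + zeta
```

Both terms of the payoff increase with $\rho\geq 0$, so the supremum saturates the mutual information constraint, in line with the role Proposition \ref{prop:rearrangementArgument} plays below.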
Let $X$ and $Z$ be random variables with finite variance, and let $X^*,Z^*$ be centered isotropic Gaussians with $\EE|X^*|^2 = \EE|X|^2$ and $\EE|Z^*|^2 = \EE|Z|^2$. Now, observe that Theorem \ref{thm:depEPI} implies \begin{align*} \max_{ \Pi(X^*, Z; \zeta) } \left( h(X^*+ Z) - h(Z) \right) &\geq \frac{n}{2}\log\left( 1 + \frac{N(X^*)}{N(Z)} + 2 \sqrt{ (1 - e^{- \frac{2 \zeta}{n} }) \frac{N(X^*)}{N(Z)} }\right)\\ &\geq \frac{n}{2}\log\left( 1 + \frac{N(X^*)}{N(Z^*)} + 2 \sqrt{ (1 - e^{- \frac{2 \zeta}{n}}) \frac{N(X^*)}{N(Z^*)} }\right)\\ &=\max_{ \Pi(X^*, Z^*; \zeta) } \left( h(X^*+ Z^*) - h(Z^*) \right), \end{align*} where the second inequality follows since $h(Z) \leq h(Z^*)$, and the last equality follows by the equality conditions in Theorem \ref{thm:depEPI}. In particular, this gives \begin{align} \sup_{ \Pi(X^*, Z; \zeta) } I(X^*; X^*+ Z) &= \sup_{ \Pi(X^*, Z; \zeta) } \left( h(X^*+ Z) - h(Z) + I(X^*; Z)\right) \notag\\ &= \sup_{ \Pi(X^*, Z; \zeta) } \left( h(X^*+ Z) - h(Z) \right)+ \zeta \label{secondEquality} \\ &\geq \sup_{ \Pi(X^*, Z^*; \zeta) } \left( h(X^*+ Z^*) - h(Z^*) \right)+ \zeta \label{applyCor}\\ &=\sup_{ \Pi(X^*, Z^*; \zeta) } I(X^*; X^*+ Z^*), \notag \end{align} where \eqref{secondEquality} holds because the constraint $I(X^*;Z)\leq \zeta$ can be saturated without decreasing the supremum\footnote{This sounds obvious, but we don't know of a simple argument to justify the assertion. A proof is given in Proposition \ref{prop:rearrangementArgument}.}, and \eqref{applyCor} follows from the previous computation. For any pair $(X,Z^*)$, couple $(X^*, Z^*)$ to have the same covariance. By the max-entropy property of Gaussians, $I(X^*; Z^*)\leq I(X;Z^*)$ and $h(X+ Z^*) \leq h(X^*+ Z^*)$. As a result, we have \begin{align*} \sup_{ \Pi(X, Z^*; \zeta) }\!\! I(X; X+ Z^*) \leq \sup_{ \Pi(X^*, Z^*; \zeta) } \!\!\! I(X^*; X^*+ Z^*) \leq \sup_{ \Pi(X^*, Z; \zeta) } \!\!I(X^*; X^*+ Z) .
\end{align*} This implies \begin{align*} \inf_{P_{Z}: \EE|Z|^2\leq N} ~\sup_{P_{X}: \EE|X|^2\leq P} G_{\zeta}(P_X, P_Z) \leq &\sup_{P_{X}: \EE|X|^2\leq P} ~\inf_{P_{Z}: \EE|Z|^2\leq N} G_{\zeta}(P_X, P_Z), \end{align*} and the reverse direction follows by the max-min inequality. The fact that the asserted distributions coincide with the saddle point subject to the constraints follows by direct computation. \end{proof} We now tie up loose ends by justifying \eqref{secondEquality}, which is an easy consequence of the proposition below. \begin{proposition}\label{prop:rearrangementArgument} Let $X\sim N(0,\id_{\mathbb{R}^n})$ and $Z \in \mathcal{P}(\mathbb{R}^n)$ be jointly distributed with $I(X;Z) \leq \zeta < + \infty$. For any $\epsilon>0$, there is a coupling $(X',Z') \in \Pi(X,Z)$ with $h(X'+Z') \geq h(X+Z)-\epsilon$ and $I(X';Z')=\zeta$. \end{proposition} \begin{proof} We'll work in dimension $n=1$ for simplicity of exposition. It suffices to establish existence of $(X',Z') \in \Pi(X,Z)$ with $h(X'+Z') \geq h(X+Z)-\epsilon$ and $\zeta \leq I(X';Z') < +\infty$. Indeed, if there is such $(X',Z')$, then we can let $\pi_0$ denote the joint distribution of $(X,Z)$ and $\pi_1$ denote the joint distribution of $(X',Z')$. For $\theta\in[0,1]$ define the mixture $$ \pi_{\theta}= (1-\theta) \pi_0 + \theta \pi_1. $$ Evidently, $\pi_{\theta}\in \Pi(X,Z)$ for all $\theta\in[0,1]$. For $(X^{(\theta)}, Z^{(\theta)})\sim \pi_{\theta}$, concavity of entropy gives \begin{align*} h(X^{(\theta)}+ Z^{(\theta)}) &\geq (1-\theta)h(X+Z)+ \theta h(X'+Z') \geq h(X+Z) - \epsilon. \end{align*} Now, convexity of relative entropy ensures that $\theta \mapsto I(X^{(\theta)}; Z^{(\theta)})$ is continuous on $(0,1)$. Weak lower semicontinuity of mutual information together with finiteness of $I(X'; Z')$ establishes continuity at the endpoints, so that the above mapping is continuous on $[0,1]$. 
As a result, the intermediate value theorem ensures there is some $\theta \in [0,1]$ such that $h(X^{(\theta)}+ Z^{(\theta)}) \geq h(X+Z) - \epsilon$ and $I(X^{(\theta)}; Z^{(\theta)})=\zeta$. Toward establishing the above ansatz, fix $\epsilon>0$, and consider the interval $I:=(-\epsilon,\epsilon]$. Define $p(\epsilon):= \Pr\{X\in I\}$, and note that $p(\epsilon) = \Theta(\epsilon)$ since $X$ is assumed Gaussian. For fixed parameters $n\geq1$ and $\epsilon$, we'll rearrange the joint distribution of $(X, Z)$ on the event $\{X\in I\}$. To this end, consider two partitions $$ -\epsilon = t_0 < t_1 < \cdots < t_n = \epsilon $$ and $$ -\infty = s_0 < s_1 < \cdots < s_n = +\infty $$ such that $$ \Pr\{X \in (t_{i-1}, t_i]|X\in I\} = \Pr\{Z \in (s_{i-1}, s_i]|X\in I\} = \frac{1}{n}, ~~1\leq i \leq n. $$ This is always possible since $X$ and $Z$ are (marginally) continuous random variables. We now define a random variable $Z_n$, jointly distributed with $X$, by rearranging the distribution of $(X,Z)$ as follows. On the event $\{X\notin I\}$, we let $Z_n = Z$. Conditioned on the event $\{X\in I\}$, we let the joint density of $(X,Z_n)$ be supported on the union of rectangles $R:=\cup_{i=1}^n (t_{i-1}, t_{i}]\times (s_{i-1}, s_{i}]$, given explicitly by $$ f_{X,Z_n|X\in I}(x,z) = n f_{X|X\in I}(x) f_{Z|X\in I}(z) 1_{\{(x,z) \in R\}}. $$ This is well-defined since the conditional densities $f_{X|X\in I}, f_{Z|X\in I}$ exist by marginal continuity of $X$ and $Z$, and the fact that $\Pr\{X\in I\}>0$. Observe that this rearrangement preserves marginals, so $(X,Z_n)\in \Pi(X,Z)$. Further, note that $I(X; Z_n | X\in I) = \log n$ by construction, therefore \begin{align*} I(X; Z_n) &= p(\epsilon) I(X; Z_n | X\in I) + (1-p(\epsilon)) I(X; Z_n | X\notin I) + I(Z; 1_{\{X\in I\}})\\ &\leq p(\epsilon) \log n + I(X; Z) + O(\epsilon \log \epsilon).
\end{align*} By nonnegativity of mutual information, the first identity above also implies $$I(X; Z_n) \geq p(\epsilon) I(X; Z_n | X\in I) = p(\epsilon) \log n.$$ Since $I(X;Z)$ is finite by assumption, the combination of the above estimates implies \begin{align} I(X; Z_n) = \Theta(\epsilon \log n), \label{eq:IXZn} \end{align} where the asymptotics are understood in the sense that $\epsilon>0$ is fixed and $n$ is allowed to increase. For $x\in I$, let $k(x)$ denote the integer $k\in \{1,\dots, n\}$ such that $x \in (t_{k-1}, t_k]$. Observe that, conditioned on $\{X\in I\}$, the index $k(X)$ is almost surely equal to a function of $X+Z_n$. This follows since for any $c\in \mathbb{R}$, the line $\{(x,z) : x + z = c\} \subset \mathbb{R}^2$ intersects a unique rectangle of the form $$ (t_{i-1}, t_{i}]\times (s_{i-1}, s_{i}]. $$ Conditioned on $\{X\in I\}$, the distribution of $(X,Z_n)$ is supported on such rectangles by construction, so the claim follows. With the above observation together with the fact that $X$ and $Z_n$ are conditionally independent given $\{k(X), X\in I\}$ by construction, we have \begin{align*} h(X+ Z_n | X\in I) &= h(X+ Z_n | k(X), X\in I) + I(X+ Z_n ; k(X)| X\in I) \\ &\geq h(X | k(X), X\in I) + I(X; k(X)| X\in I) \\ &= h(X | X\in I) . \end{align*} In particular, \begin{align*} h(X+Z_n) &\geq (1-p(\epsilon))h(X+Z_n|X\notin I) + p(\epsilon) h(X+Z_n |X\in I) \\ &\geq (1-p(\epsilon))h(X+Z_n |X\notin I) + p(\epsilon) h(X |X \in I) \\ &= (1-p(\epsilon))h(X+Z |X\notin I) + O(\epsilon \log \epsilon), \end{align*} where the first line follows since conditioning reduces entropy, and the last line follows since $X$ is nearly uniform on $I$ for $\epsilon$ small.
Now, upper bounding entropy in terms of second moments, we have \begin{align*} h(X+Z |X\in I)&\leq \frac{1}{2}\log\left(2\pi e\EE[(X+Z)^2|X\in I] \right) \\ &\leq \frac{1}{2}\log\left(4\pi e( \EE[X^2|X\in I] +\EE[Z^2|X\in I] ) \right)\\ &\leq \frac{1}{2}\log\left(\frac{4\pi e}{p(\epsilon)}( \EE[X^2 ] +\EE[Z^2 ] ) \right). \end{align*} So, by finiteness of second moments, \begin{align*} p(\epsilon) h(X+Z |X\in I)&\leq O(\epsilon \log \epsilon). \end{align*} Since $I(X+Z; 1_{\{X\in I\}})\leq H(1_{\{X\in I\}}) = O(\epsilon \log \epsilon)$, we put everything together to find \begin{align*} h(X+Z) &= h(X+Z|1_{\{X\in I\}} ) +I(X+Z; 1_{\{X\in I\}}) \\ &\leq (1-p(\epsilon))h(X+Z |X\notin I) + O(\epsilon \log \epsilon)\\ &\leq h(X+Z_n) + O(\epsilon \log \epsilon). \end{align*} Combining with \eqref{eq:IXZn} establishes the ansatz, and completes the proof. \end{proof} \section{Closing Remarks}\label{sec:closing} Through the sequence of applications given, we have hopefully convinced the reader that Theorem \ref{thm:GaussianComparisonConstrained} offers significant generality and flexibility. This flexibility allows for easy modification. For example, the results hold verbatim over the complex field instead of the real field. Indeed, this follows directly by considering complex-valued random variables as two-dimensional real random variables, and using the matrix representation of complex numbers to implement the linear transformations\footnote{With a bit more work, which we do not detail here, it can be shown in the complex setting that it suffices to consider circularly symmetric complex Gaussians in the lower bound of the comparison \eqref{eq:maxEntComparisonConstrained}.}. By the same logic, the same comparison of Theorem \ref{thm:GaussianComparisonConstrained} can be seen to hold for other matrix algebras over $\mathbb{R}$. 
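The complex-to-real reduction just described can be made concrete: $a+bi$ acts on $\mathbb{R}^2\cong\mathbb{C}$ as the matrix $\left(\begin{smallmatrix} a & -b\\ b & a\end{smallmatrix}\right)$, and this representation is multiplicative, so complex-linear maps become real-linear ones. A minimal illustration (the numbers are arbitrary):

```python
import numpy as np

def as_real(z: complex) -> np.ndarray:
    """2x2 real matrix representing multiplication by z on R^2 ~ C."""
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z, w = 1.0 + 2.0j, 0.5 - 1.0j
# The representation is multiplicative: M(z*w) = M(z) @ M(w).
lhs = as_real(z * w)
rhs = as_real(z) @ as_real(w)

# Applying M(z) to (x, y) agrees with the complex product z*(x + iy).
v = np.array([3.0, -1.0])
img = as_real(z) @ v
zc = z * (v[0] + 1j * v[1])
```

This is why a complex-valued random variable can be treated as a two-dimensional real one without affecting the linear structure of the inequalities.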
Having said all this, we make no assertion that Theorem \ref{thm:GaussianComparisonConstrained} is a grand unification of all entropy inequalities on Euclidean space. Indeed, there are several important examples of inequalities that are not obviously subsumed. Results in \cite{LiuViswanath, GengNair, CourtadeStrongEPI} provide representative examples. We concede that there may be some clever application of Theorem \ref{thm:GaussianComparisonConstrained} that can recover some of these results, but we do not know of one at the time of this writing. Thus, at the moment, it seems that Theorem \ref{thm:GaussianComparisonConstrained} may be another piece in a larger puzzle still waiting to be put together. \subsection*{Acknowledgement} This work was supported in part by NSF grant CCF-1750430 (CAREER). \begin{thebibliography}{10} \bibitem{anantharam2019unifying} V.~Anantharam, V.~Jog, and C.~Nair. \newblock Unifying the {B}rascamp-{L}ieb inequality and the entropy power inequality. \newblock {\em arXiv preprint arXiv:1901.06619}, 2019. \bibitem{ando79} T.~Ando. \newblock Concavity of certain maps on positive definite matrices and applications to {H}adamard products. \newblock {\em Linear algebra and its applications} 26 (1979): 203--241. \bibitem{ArasCourtadeISIT2021} E.~Aras and T.~A.~Courtade. \newblock Sharp Maximum-Entropy Comparisons. \newblock In Proc. of the 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021. \bibitem{ArasCourtadeZhang} E.~Aras, T.~A.~Courtade and A.~Zhang. \newblock Equality cases in the Anantharam--Jog--Nair inequality. \newblock {\em preprint}, 2022. \bibitem{ball1989volumes} K.~Ball. \newblock Volumes of sections of cubes and related problems. \newblock {\em Geometric aspects of functional analysis}, ed. by J.~Lindenstrauss and V.~D.~Milman, Lecture Notes in Math. 1376, Springer, Heidelberg, 1989, pp. 251--260. \bibitem{barthe1998reverse} F.~Barthe. \newblock On a reverse form of the {B}rascamp--{L}ieb inequality.
\newblock {\em Inventiones mathematicae}, 134(2):335--361, 1998. \bibitem{barthe2018positive} F.~Barthe and P.~Wolff. \newblock Positive Gaussian kernels also have Gaussian minimizers. \newblock {\em arXiv preprint arXiv:1805.02455}, 2018. \bibitem{barronCLT} A.~R.~Barron. \newblock Entropy and the central limit theorem. \newblock The Annals of Probability (1986): 336--342. \bibitem{bennett2008brascamp} J.~Bennett, A.~Carbery, M.~Christ, and T.~Tao. \newblock The {B}rascamp-{L}ieb inequalities: finiteness, structure and extremals. \newblock {\em Geometric and Functional Analysis}, 17(5):1343--1415, 2008. \bibitem{brascamp1974general} H.~J. Brascamp, E.~H. Lieb, and J.~M. Luttinger. \newblock A general rearrangement inequality for multiple integrals. \newblock {\em Journal of functional analysis}, 17(2):227--237, 1974. \bibitem{brascamp1976best} H.~J. Brascamp and E.~H. Lieb. \newblock Best constants in {Y}oung's inequality, its converse, and its generalization to more than three functions. \newblock {\em Advances in Mathematics}, 20(2):151--173, 1976. \bibitem{CarlenSoffer} E.~Carlen and A.~Soffer. \newblock Entropy production by block variable summation and central limit theorems. \newblock {\em Commun. Math. Phys.}, vol. 140, no. 2, pp. 339--371, 1991. \bibitem{carlen2009subadditivity} E.~A. Carlen and D.~Cordero-Erausquin. \newblock Subadditivity of the entropy and its relation to {B}rascamp--{L}ieb type inequalities. \newblock {\em Geometric and Functional Analysis}, 19(2):373--405, 2009. \bibitem{chen2015improved} W.-K.~Chen, N.~Dafnis and G.~Paouris. \newblock Improved H{\"o}lder and reverse H{\"o}lder inequalities for Gaussian random vectors. \newblock {\em Advances in Mathematics}, 280: 643--689, 2015. \bibitem{costa1984similarity} M.~Costa and T.~M.~Cover. \newblock On the similarity of the entropy power inequality and the {B}runn--{M}inkowski inequality (corresp.). \newblock {\em IEEE Transactions on Information Theory}, vol.~30, no.~6, pp.~837--839, 1984.
\bibitem{CourtadeStrongEPI} T.~A.~Courtade. \newblock A strong entropy power inequality. \newblock IEEE Transactions on Information Theory 64.4 (2017): 2173--2192. \bibitem{CourtadeLiu21} T.~A.~Courtade and J.~Liu. \newblock Euclidean forward-reverse {B}rascamp--{L}ieb inequalities: {F}initeness, structure, and extremals. \newblock {\em The Journal of Geometric Analysis} 31.4 (2021): 3300--3350. \bibitem{coverThomas} T.~M.~Cover and J.~A.~Thomas. \newblock Elements of Information Theory, 2e. \newblock John Wiley \& Sons, 2012. \bibitem{cover1994maximum} T.~M.~Cover and Z.~Zhang. \newblock On the maximum entropy of the sum of two dependent random variables. \newblock IEEE Transactions on Information Theory 40.4 (1994): 1244--1246. \bibitem{CsiszarKorner} I.~Csisz\'ar and J.~K\"orner. \newblock Information theory: coding theorems for discrete memoryless systems, 2e. \newblock Cambridge University Press, 2011. \bibitem{Dubuc} S.~Dubuc. \newblock Crit\`eres de convexit\'e et in\'egalit\'es int\'egrales. \newblock Ann. Inst. Fourier Grenoble 27 (1) (1977) 135--165. \bibitem{gardner2002brunn} R.~Gardner. \newblock The {B}runn-{M}inkowski inequality. \newblock {\em Bulletin of the American Mathematical Society}, 39(3):355--405, 2002. \bibitem{GengNair} Y.~Geng and C.~Nair. \newblock The capacity region of the two-receiver {G}aussian vector broadcast channel with private and common messages. \newblock IEEE Transactions on Information Theory 60.4 (2014): 2087--2104. \bibitem{Ihara} S.~Ihara. \newblock On the capacity of channels with additive non-{G}aussian noise. \newblock {\em Information and Control} 37.1 (1978): 34--39. \bibitem{johnson2004conditional} O.~Johnson. \newblock A conditional entropy power inequality for dependent variables. \newblock {\em IEEE Transactions on Information Theory}, vol.~50, no.~8, pp.~1581--1583, 2004. \bibitem{Lawson2001} J.~D.~Lawson, Y.~Lim. \newblock The geometric mean, matrices, metrics, and more.
\newblock The American Mathematical Monthly 108(9): 797--812, 2001. \bibitem{lieb1978} E.~H.~Lieb. \newblock Proof of an entropy conjecture of {W}ehrl. \newblock Communications in Mathematical Physics 62.1: 35--41, 1978. \bibitem{lieb1990gaussian} E.~H. Lieb. \newblock {G}aussian kernels have only {G}aussian maximizers. \newblock {\em Inventiones mathematicae}, 102(1):179--208, 1990. \bibitem{liu2019private} J.~Liu. \newblock {\em Private communication}, 2019. \bibitem{liu2017ITperspectiveBL} J.~Liu, T.~A. Courtade, P.~Cuff, and S.~Verd{\'u}. \newblock Information-theoretic perspectives on {B}rascamp-{L}ieb inequality and its reverse. \newblock {\em arXiv preprint arXiv:1702.06260}, 2017. \bibitem{liu2018forward} J.~Liu, T.~A. Courtade, P.~Cuff, and S.~Verd{\'u}. \newblock A forward-reverse {B}rascamp-{L}ieb inequality: Entropic duality and {G}aussian optimality. \newblock {\em Entropy (special issue on information inequalities)}, 20(6):418, 2018. \bibitem{LiuViswanath} T.~Liu and P.~Viswanath. \newblock An extremal inequality motivated by multiterminal information-theoretic problems. \newblock IEEE Transactions on Information Theory 53.5 (2007): 1839--1851. \bibitem{pinsker1956calculation} M.~Pinsker. \newblock Calculation of the rate of information production by means of stationary random processes and the capacity of stationary channel. \newblock {\em Dokl. Akad. Nauk USSR}, vol.~111, pp.~753--756, 1956. \bibitem{shannon48} C.~E.~Shannon. \newblock A mathematical theory of communication. \newblock The Bell system technical journal 27.3: 379--423, 1948. \bibitem{Sra2018} S.~Sra, N.~K.~Vishnoi, and O.~Yildiz. \newblock On geodesically convex formulations for the {B}rascamp--{L}ieb constant. \newblock Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2018. \bibitem{stam59} A.~J.~Stam. 
\newblock Some inequalities satisfied by the quantities of information of {F}isher and {S}hannon. \newblock {\em Information and Control} 2.2: 101--112, 1959. \bibitem{takano1995inequalities} S.~Takano. \newblock The inequalities of {F}isher information and entropy power for dependent variables. \newblock {\em Proceedings of the 7th Japan-Russia Symposium on Probability Theory and Mathematical Statistics, Tokyo} (S.~Watanabe, M.~Fukushima, Y.~Prohorov, and A.~Shiryaev, eds.), pp.~460--470, World Scientific, 1995. \bibitem{villani2003topics} C.~Villani. \newblock {\em Topics in optimal transportation}. \newblock Number~58. American Mathematical Soc., 2003. \bibitem{villani2008} C.~Villani. \newblock {\em Optimal transport: old and new}. \newblock Vol. 338. Berlin: Springer, 2009. \bibitem{WangMadiman} L.~Wang and M.~Madiman. \newblock Beyond the entropy power inequality, via rearrangements. \newblock {\em IEEE Transactions on Information Theory}, 60.9: 5116--5137, 2014. \bibitem{Watanabe} S.~Watanabe. \newblock Information theoretical analysis of multivariate correlation. \newblock IBM Journal of Research and Development 4.1 (1960): 66--82. \bibitem{ZamirFeder} R.~Zamir and M.~Feder. \newblock A generalization of the entropy power inequality with applications. \newblock {\em IEEE transactions on Information Theory}, 39.5: 1723--1728, 1993. \bibitem{Zhang2022} P.~Zhang, J.~Zhang, and S.~Sra. \newblock Minimax in Geodesic Metric Spaces: {S}ion's Theorem and Algorithms. \newblock {\em arXiv preprint arXiv:2202.06950}, 2022. \end{thebibliography} \end{document}
2206.14174v1
http://arxiv.org/abs/2206.14174v1
The $(4,p)$-arithmetic hyperbolic lattices, $p\geq 2$, in three dimensions
\documentclass[12pt]{article} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{amssymb} \usepackage{graphicx} \usepackage{amsfonts} \usepackage{amsmath} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newcommand{\dis}{\displaystyle} \newcommand{\HS}[3]{\left(\frac{#1,#2}{#3}\right)} \newcommand{\ds}{\displaystyle} \newcommand{\bsno}{\bigskip\par\noindent} \newcommand{\IR}{\mathbb R} \newcommand{\IQ}{\mathbb Q} \newcommand{\IZ}{\mathbb Z} \newcommand{\IC}{\mathbb C} \newcommand{\IH}{\mathbb H} \newcommand{\IN}{\mathbb N} \newcommand{\G}{\Gamma} \newcommand{\IE}{\textsc 8} \def\IH{{\Bbb H}} \def\oH{\overline{{\Bbb H}}} \def\IR{{\Bbb R}} \def\oR{\overline {\IR}} \def\IS{{\Bbb S}} \def\IA{{\Bbb A}} \def\IK{{\Bbb K}} \def\IZ{{\Bbb Z}} \def\IN{{\Bbb N}} \def\IC{\Bbb C} \def\ID{{\Bbb D}} \def\tr{{\rm tr}} \def\oC{\hat{\IC}} \def\tr{\mbox{\rm{tr\,}}} \def\Norm{\mbox{\rm{Norm}}} \def\Hom{\mbox{\rm{Hom}}} \def\PSL{\mbox{\rm{PSL}}} \def\SL{\mbox{\rm{SL}}} \def\SO{\mbox{\rm{SO}}} \def\PGL{\mbox{\rm{PGL}}} \def\R{\mbox{\rm{R}}} \def\GL{\mbox{\rm{GL}}} \def\isomorphic{\cong} \def\qed{ $\Box$} \def\ab{\mbox{\rm{ab}}} \def\Te{{\cal T}(G)} \def\det{\mbox{\rm{det}}} \def\det{\mbox{\rm{det}}} \def\log{\mbox{\rm{log}}} \def\Isom{\mbox{\rm{Isom}}} \def\discr{\mbox{\rm{discr}}} \def\Tr{\mbox{\rm{Tr}}} \def\Gal{\mbox{\rm{Gal}}} \def\Vol{\mbox{\rm{Vol}}} \def\Covol{\mbox{\rm{Covol}}} \def\Int{\mbox{\rm{Int}}} \def\Isom{\mbox{\rm{Isom}}} \def\End{\mbox{\rm{End}}} \def\Aut{\mbox{\rm{Aut}}} \def\Ram{\mbox{\rm{Ram}}} \def\Ker{\mbox{\rm{Ker}}} \def\em{\it} \def\mod{\mbox{\rm{mod}}} \title{The $(4,p)$-arithmetic hyperbolic lattices, $p\geq 2$, in three dimensions. } \author{G.J. Martin, K. Salehi and Y. 
Yamashita \thanks{Research of all authors supported in part by the NZ Marsden Fund.}} \date{} \begin{document} \maketitle \begin{abstract} We identify the finitely many arithmetic lattices $\Gamma$ in the orientation preserving isometry group of hyperbolic $3$-space $\IH^3$ generated by an element of order $4$ and an element of order $p\geq 2$. Thus $\Gamma$ has a presentation of the form \[ \Gamma\cong\langle f,g: f^4=g^p=w(f,g)=\cdots=1 \rangle. \] We find that necessarily $p\in \{2,3,4,5,6,\infty\}$, where $p=\infty$ denotes that $g$ is a parabolic element, the total degree of the invariant trace field $k\Gamma=\IQ(\{\tr^2(h):h\in\Gamma\})$ is at most $4$, and each orbifold is either a two-bridge link of slope $r/s$ surgered with $(4,0)$, $(p,0)$ Dehn surgery (possibly a two-bridge knot if $p=4$) or a Heckoid group with slope $r/s$ and $w(f,g)=(w_{r/s})^r$ with $r\in \{1,2,3,4\}$. We give a discrete and faithful representation in $PSL(2,\IC)$ for each group and identify the associated number theoretic data. \end{abstract} \section{Introduction} In this paper we continue our long running programme to identify (up to conjugacy) all the finitely many arithmetic lattices $\Gamma$ in the group of orientation preserving isometries of hyperbolic $3$-space $Isom^+(\IH^3)\cong PSL(2,\IC)$ generated by two elements of finite order $p$ and $q$; we also allow $p=\infty$ or $q=\infty$ to mean that a generator is parabolic. In \cite{MM} it is proved that there are only finitely many such lattices. In fact it is widely expected that there are only finitely many two generator arithmetic lattices in total in $PSL(2,\IC)$, though this remains unknown. There are infinitely many distinct three generator arithmetic lattices. It is well known that for a lattice $\Gamma$ as above, $\IH^3/\Gamma$ is geometrically the $3$-sphere with a marked trivalent graph of singular points -- the vertices of the graph corresponding to the finite spherical triangle groups and dihedral groups.
The possible graphs for the cases at hand can be seen in Figure 1. These precisely follow the descriptions of all lattices in $PSL(2,\IC)$ generated by two parabolic elements as proved by Aimi, Lee, Sakai, and Sakuma, and also Akiyoshi, Ohshika, Parker, Sakuma and Yoshida, \cite{ALSS,AOPSY}, who resolved a conjecture of Agol to this effect. Our results suggest that this conjecture (modified in the obvious way as per Figure 1) is valid for all groups generated by two elements of finite order which are not freely generated by those two elements. \medskip In two dimensions a lattice in $Isom^+(\IH^2)$ generated by two elements of finite order is necessarily a triangle group. Takeuchi \cite[Theorem 3]{Take} has identified the $82$ arithmetic Fuchsian triangle groups. Here is a summary of what is known in dimension three. \begin{itemize} \item There are exactly $4$ arithmetic lattices generated by two parabolic elements. These are all knot and link complements and so torsion free. The explicit groups can be found in \cite{GMM}. \item There are exactly $14$ arithmetic lattices generated by a parabolic element and an elliptic of order $2\leq p < \infty$. Necessarily $p\in \{2,3,4,6\}$. Six of these groups have $p=2$, three have $p=3$, and three have $p=4$. The explicit groups can be found in \cite{CMMO}. \item There are exactly $16$ arithmetic lattices generated by two elements of finite orders $p$ and $q$ with $p,q \geq 6$. In each case $p=q$: twelve of these groups have $p=q=6$, two have $p=q=12$, and one each has $p=q=8$ and $p=q=10$. The explicit groups can be found in \cite{MM}. \end{itemize} Most of these groups are orbifold surgeries on two-bridge knots and links of ``small'' slope. Here we will meet examples of Dehn surgeries on knots of more than 12 crossings. \medskip From this information the reader can see that there are basically four cases left to deal with in order to complete the identification we seek.
Those are when one generator is elliptic of order $q=2,3,4,5$ and $2\leq p < \infty$. In fact a discrete subgroup of $Isom^+(\IH^3)$ generated by two elements of the same finite order (say $p$) admits a $\IZ_2$ extension to a group generated by an element of order two and an element of order $p$. As a finite extension remains an arithmetic lattice, there really are now only three cases to deal with: $q\in\{3,4,5\}$. Here we assume that $q=4$ and $2\leq p\leq \infty$. Below is a summary of our main result. \begin{theorem}\label{mainthm} Let $\Gamma=\langle f,g \rangle$ be an arithmetic lattice generated by $f$ of order $4$ and $g$ of finite order $p\geq 2$. Then $p\in \{2,3,4,5,6\}$ and the degree of the invariant trace field \[ k\Gamma=\IQ(\{\tr^2(h):h\in\Gamma\}) \] is at most $4$. \begin{enumerate} \item If $p=2$, then there are fifty-four groups. \item If $p=3$, then there are ten groups. \item If $p=4$, then there are twenty-seven groups. \item If $p=5$, then there is one group. \item If $p=6$, then there are five groups. \end{enumerate} \end{theorem} \medskip The groups appearing in Theorem \ref{mainthm} all fall into the pattern of Heckoid groups as described in \cite{ALSS,AOPSY}. The singular graph of $\IH^3/\Gamma$ is one of the following. \scalebox{0.5}{\includegraphics[viewport=-40 520 570 800]{HeckoidGroups}}\\ {\bf Figure 1.} {\em The Heckoid groups. When $p\neq 4$ the braid must have an even number of crossings in the first example. Here $m\geq 1$.} \bigskip In the tables that follow we set \begin{equation} \gamma =\gamma(f,g) = \tr [f,g]-2 \end{equation} and give the minimal polynomial for $\gamma$. With the orders of the generators, this completely determines the arithmetic structure of the group, the invariant trace field $k\Gamma=\IQ(\gamma)$ and the associated quaternion algebra -- we discuss these things below. Then $\gamma$ is an invariant of the Nielsen class of the generating pair $\{f,g\}$.
With the orders of the generators, which we shall always assume are primitive (that is, their traces are $\pm 2\cos \big(\frac{\pi}{p}\big)$), and $\gamma$, it is straightforward to construct a discrete and faithful representation of the group in $PSL(2,\IC)$, \cite{GM1}. We give this representation below at (\ref{XY}). Note that the very difficult general problem of proving discreteness is overcome by the arithmetic criterion as described in \cite{GMMR} and \cite{MMpq}. We recall those results, and in particular the identification theorem, below at Theorem \ref{idthm}. For each group $\Gamma$, as noted, we give the minimal polynomial for $\gamma$ and the discriminant of the invariant trace field $k\Gamma=\IQ(\gamma)$. Next, the column $(Farey,order)$ indicates that a particular word in the group $\langle f,g \rangle$ has a particular order (or is the identity). Briefly, this word is found as follows. The simple closed curves on the $4$ times punctured sphere $\IS_4$ separating one pair of points from another are enumerated by their slope, coming from the Farey expansion. The enumeration of these simple closed curves informs the deformation theory of Keen-Series-Maskit \cite{KS,KS2,KSM}, which relates this slope (and the geodesic in its homotopy class) to a bending deformation of $\IS_4$ along this geodesic. This deformation terminates on the boundary of moduli space as the length of the curve shrinks to zero (and so the associated word becomes parabolic). This bending locus is called a pleating ray. Bending further creates cone manifolds, some of which are discrete lattices when the cone angle becomes $2\pi/n$ for a positive integer $n$. The recent results of \cite{ALSS,AOPSY} show this process to describe all the discrete and faithful representations of groups generated by two parabolic elements which are not free. There is of course a strong connection with the Schubert normal form of a two-bridge knot or link, \cite{BZ}.
If one takes a two-bridge link, say with Schubert normal form $r/s$, and performs orbifold $(p,0)$ and $(q,0)$ Dehn surgery on the components of the link, and if the result is hyperbolic, then the orbifold fundamental group of the space so formed is a discrete subgroup of $PSL(2,\IC)$ generated by elliptic elements of orders $p$ and $q$, say $f$ and $g$ respectively - the images of the two meridians around each link component (or the images of the two parabolic generators of the group). The word associated with the slope remains the identity. However the value $\gamma(f,g)$ may not always lie on the pleating ray of slope $\frac{r}{s}$ (we give more details below) but possibly on the pleating ray of slope $1-\frac{r}{s}$ - this is simply because of the way we chose to enumerate slopes. This then also provides the Conway notation for the knot or link complement via the continued fraction expansion of the slope $r/s$. There is one further issue here, and that is that the exact choice of generators in $PSL(2,\IC)$ matters, since we classify all Nielsen pairs, and in some cases there may be different pairs generating the same group; obviously these will have different relations. In the description of the group (in terms of the pleating ray $\gamma$ lies on) we will have the following setup: If $X,Y\in PSL(2,\IC)$, representing $f$ and $g$, are given by \begin{equation}\label{XY} X= \left[\begin{array}{cc} \zeta & 1\\ 0 &\bar\zeta \end{array}\right], Y= \left[\begin{array}{cc} \eta & 0\\ \mu &\bar\eta \end{array}\right],\; \zeta = \cos\frac{\pi}{p}+i\sin\frac{\pi}{p},\;\; \eta = \cos\frac{\pi}{q}+i\sin\frac{\pi}{q},\end{equation} we compute \begin{equation}\label{mug} \gamma(f,g) = \mu (\mu -4 \sin \frac{\pi }{p}\; \sin \frac{\pi }{q}). \end{equation} Thus two different choices for $\mu$ lead to the same group up to M\"obius conjugacy.
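The identity (\ref{mug}) can be checked numerically from the matrices at (\ref{XY}); a minimal sketch (Python with NumPy, taking $p=4$, $q=3$ and an arbitrary, hypothetical test value of $\mu$):

```python
import numpy as np

p, q = 4, 3
mu = 0.7 - 1.3j                       # hypothetical test value
zeta = np.exp(1j * np.pi / p)         # zeta = cos(pi/p) + i sin(pi/p)
eta = np.exp(1j * np.pi / q)

X = np.array([[zeta, 1], [0, np.conj(zeta)]])
Y = np.array([[eta, 0], [mu, np.conj(eta)]])

# gamma(f,g) = tr[X,Y] - 2 versus the closed form mu(mu - 4 sin(pi/p) sin(pi/q)).
comm = X @ Y @ np.linalg.inv(X) @ np.linalg.inv(Y)
gam = np.trace(comm) - 2
closed = mu * (mu - 4 * np.sin(np.pi / p) * np.sin(np.pi / q))
assert abs(gam - closed) < 1e-12
```

Since the closed form is a quadratic in $\mu$ with the two roots summing to $4\sin\frac{\pi}{p}\sin\frac{\pi}{q}$, this makes explicit why exactly two choices of $\mu$ give the same $\gamma$.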
One of these values of $\mu$ lies on the pleating ray and gives the ``presentation'' we choose - invariably we choose \[ \mu = 1-\sqrt{1+\gamma},\quad \Im m(\mu)>0.\] We make this choice so as to give the simplest presentation of the underlying orbifold, though often it does not matter and both choices of $\mu$ lead to conjugate groups. There is a simple recipe for moving from a slope to a word and we give the algorithm for this in the appendix. For instance the first few Farey slopes (from $\frac{1}{2}$ to $1$) are \[ \left\{\frac{1}{2},\frac{4}{7},\frac{3}{5},\frac{5}{8},\frac{2}{3},\frac{5}{7},\frac{3}{4},\frac{4}{5},1\right\} \] and in the same order the words are, with $x=X^{-1}$ and $y=Y^{-1}$, \[ \{ XYxy ,\; XYxyXYxYXyxYXy ,\; XYxyXyxYXy, \; XYxyXyxYxyXYxYXy, \] \[ XYxYXy,\; XYxYXyXyxYxyXy,\; XYxYxyXy,\; XYxYxYXyXy,\; Xy\}\] Now let us describe how to read the tables by example. If we look at the $7^{th}$ entry of the $(4,3)$-arithmetic lattices table below we have $X=f$ has order $4$ and $Y=g$ has order $3$. We see $\gamma(f,g)\approx-2.55804+1.34707 i$, the complex root with positive imaginary part of its minimal polynomial $z^4+6z^3+13z^2+8z+1$ (there is only ever one conjugate pair of complex roots). The pair $(3/5,2)$ indicates the word $w_{3/5} = XYxyXyxYXy$ is elliptic of order two when $X,Y$ have the form at (\ref{XY}) and $\mu$ has one of the two values solving (\ref{mug}), in this case \begin{eqnarray*} \tr w_{3/5} &=& -\mu ^5+\sqrt{5 \sqrt{3}+38} \mu ^4-\left(2 \sqrt{3}+15\right) \mu ^3+\frac{\left(15 \sqrt{3}+7\right) \mu ^2}{\sqrt{2}}\\ && -\left(\sqrt{3}+10\right) \mu +\sqrt{\sqrt{3}+2}\end{eqnarray*} which is $0$ when $\mu=1.79696 \pm 1.17706 i$. The other possibility for $\mu$ so that (\ref{mug}) holds is $0.652527 - 1.17706 i$ and for that choice $w_{3/5}$ is loxodromic.
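This worked example can be reproduced numerically; a sketch (Python with NumPy) that recovers the root $\gamma\approx-2.55804+1.34707i$ quoted above, solves (\ref{mug}) for $\mu$, and checks that the word $w_{3/5}$ has trace $0$ for the matrices at (\ref{XY}):

```python
import numpy as np

p, q = 4, 3
zeta, eta = np.exp(1j * np.pi / p), np.exp(1j * np.pi / q)

# gamma: the root of z^4 + 6 z^3 + 13 z^2 + 8 z + 1 with positive imaginary part.
gam = next(z for z in np.roots([1, 6, 13, 8, 1]) if z.imag > 0.5)

# mu solves mu(mu - 4 sin(pi/p) sin(pi/q)) = gamma; take the root with Im(mu) > 0.
s = 4 * np.sin(np.pi / p) * np.sin(np.pi / q)
mu = next(m for m in np.roots([1, -s, -gam]) if m.imag > 0)
assert abs(mu - (1.79696 + 1.17706j)) < 1e-4

X = np.array([[zeta, 1], [0, np.conj(zeta)]])
Y = np.array([[eta, 0], [mu, np.conj(eta)]])
x, y = np.linalg.inv(X), np.linalg.inv(Y)

# w_{3/5} = XYxyXyxYXy is elliptic of order two, so its trace vanishes.
w = X @ Y @ x @ y @ X @ y @ x @ Y @ X @ y
assert abs(np.trace(w)) < 1e-6
```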
With minor additional work this gives us a presentation \[ \langle f,g:f^4=g^3=(fgf^{-1}g^{-1}fg^{-1}f^{-1}gfg^{-1})^2=1 \rangle \] This group is therefore an arithmetic generalised triangle group in the sense of \cite{HMR}; however it is not identified in that paper as they restrict themselves largely to the case where $w_{1/2}=[X,Y]$ is elliptic. Next, we have also given a co-volume approximation in some cases. This is obtained by adapting the Poincar\'e subroutine in Weeks' programme Snap to our setting of groups generated by two elliptic elements. This was previously done by Cooper \cite{Cooper} in his PhD thesis. This gives an approximation to the volume of $\IH^3/\langle f,g\rangle$. However, this approximation is enough to give the precise index of the group $\langle f,g\rangle$ in its maximal order, and the latter has an explicit volume formula due to Borel \cite{Bo} if further refinement is needed. \medskip We expect that the topological classification (the structure of the singular locus and the observation that $\gamma(f,g)$ lies on an extended pleating ray) suggested by our data from the arithmetic groups is also generally true for groups generated by two elements of finite order which are not freely generated. \medskip A major technical advance used in this article is provided by the development of the Keen-Series-Maskit deformation theory from the case of the Riley slice (specifically \cite{KS}) to the more general case of groups generated by two elements of finite order, \cite{EMS1,EMS2}. Using this we are able to further give defined neighbourhoods of pleating rays which lie entirely in these quasiconformal deformation spaces. This allows us to set up a practical algorithm, which we describe below and implement here, to decide whether or not a two generator discrete group -- identified solely by arithmetic criteria -- is free on the given generating pair, and if it is not, to identify a nontrivial word in the group.
This process is guaranteed to work if the discrete group is geometrically finite and certain conjectures are true, and of course the groups we seek will fall into this category, though the groups we test may not {\em a priori} be geometrically finite. This gives us an effective computational description of certain moduli spaces akin to the Riley slice \cite{KS}. We expect that any lattice we might find comes from a bending of a word of some slope that has become elliptic - it is just a matter of finding them. Since we additionally expect (hope) that the slope is not too big (denominator small), the possibilities might be few and so we simply search for them. However we will see that we in fact run into technical difficulties using this approach in a couple of cases where the slopes get up to $101/120$, and some guesses are needed to limit the search space (there are $4387$ slopes less than this, and we need to precisely generate a polynomial of degree depending on the numerator - when the degree is large these span pages of course). Here we are lucky because it turns out that the slopes we seek (as we found out with a lot of experimentation) do not have large integers in their continued fraction expansion. Note that $101/120$ has Conway notation $[6,3,5,1]$ and represents a link of about 15 crossings (we do not know what the crossing number is) and similarly with $7/102$ (Conway $[3,1,1,14]$), probably with about $19$ crossings. However we do know that $29/51$ below has Conway notation $[7,3,1,1]$ and does have exactly $12$ crossings ($\#762$ in Rolfsen's tables) - it is a surprising group discovered by Zhang \cite{Zhang} in her PhD thesis. It is quite remarkable that the invariant trace field is at most quartic for all of the above groups. \medskip We recall that we require that $f$ and $g$ are primitive elliptic elements, that is $\tr^2(f)=4\cos^2 \frac{\pi}{4}$ and $\tr^2(g)=4\cos^2 \frac{\pi}{p}$.
This now identifies the Heckoid group from which a complete presentation can be determined if required. However some of these can be quite long and so we have not given them. \subsection{The five $(4,6)$ arithmetic hyperbolic lattices.} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline No. & polynomial & (Farey,order) & $\gamma$ & Discr. $k\Gamma$ \\ \hline 1 & $z+1$ & $Tet(4,6;3)$ & $-1$ & $ -12 $ \\ \hline 2 & $z^2 +6$ & $(7/10,1)$ & $\sqrt{6} i$ & $-24$ \\ \hline 3 & $z^3 +3z^2 +6z+2$ & $(5/8,1)$ & $-1.2980+ 1.8073i$& $-216$ \\ \hline 4 & $ z^3 +5z^2 +9z+3 $ & $(7/12,1)$ & $-2.2873+ 1.3499i$ & $-204$ \\ \hline 5 & $ z^4+8z^2+6z+1$ & $(17/24,1)$ & $0.3620+2.8764i$& $-2412$ \\ \hline \end{tabular} \\ \end{center} \subsection{The (4,5) arithmetic hyperbolic lattice.} In the case $p=5$ there is one arithmetic lattice and it is cocompact, $Tet(4,5;3)$. It has $\gamma=-1$. This group has presentation \[ \langle x,y:x^4=y^5 =[x,y]^3=(x[y,x^{-1}])^2=(x[y,x^{-1}]y)^2=([y,x^{-1}]y)^2=1 \rangle \] and is the orientation preserving subgroup of the reflection group with Coxeter diagram $4-3-5$. The number field $k\Gamma$ has discriminant $-400$. \subsection{The ten $(4,3)$ arithmetic hyperbolic lattices.} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline No. & polynomial & (Farey , order)& $\gamma$ & Disc.
$k\Gamma$ \\ \hline 1 & $z+2$ & $GT(4,3;2)$ & $ -2 $ & $-24 $ \\ \hline 2 & $z^2 +4$ & $ (3/4;2)$ & $2i$ & $ -4 $ \\ \hline 3 & $z^2+4 z+6$ & $ (3/4;1)$ & $-2+\sqrt{2}i$ & $ -8 $ \\ \hline - & $z^3-z^2+z+1$ & $ (4/5;4)^*$ & $0.77184 + 1.11514 i$& $-44$ \\ \hline 4 & $z^3 +3z^2 +5z+1$ & $(2/3;2)$ & $-1.38546 + 1.56388 i$& $-140 $ \\ \hline 5& $ z^3+3z^2+9z+9$ & $ (17/24;2)$ & $-0.83626+2.46585 i$& $ -108 $ \\ \hline 6 & $ z^3+7z^2+17z+13$ & $(7/12;1)$ & $-2.77184 + 1.11514 i$& $-44 $ \\ \hline - & $ z^4+2z^3+3z^2+4z+1$ & $(3/4;3)^*$ & $-0.10176+1.4711 i$& $ -976$ \\ \hline - & $ z^4-2z^3-z^2+6z+1$ & $(6/7;3)^*$ & $1.78298 +1.08419 i$& $ -1424 $ \\ \hline 7 & $ z^4+6z^3+13z^2+8z+1$ & $(3/5;2)$ & $-2.55804+1.34707 i$& $ -3376 $ \\ \hline 8 & $ z^4+6z^3+13z^2+10z+1$ & $(3/5;4)$ \& $(8/13;3)$ & $-2.20711+0.97831 i $& $ -448$ \\ \hline 9 & $ z^4+4z^3+10z^2+12z+4$ & $(7/10;1)$ & $-1+2.05817 i $& $ -400$ \\ \hline 10 & $ z^4+8z^3+22z^2+22z+6$ & $(9/16;1)$ & $-3.11438+0.83097 i $ & $ -2096 $ \\ \hline \end{tabular} \\ \end{center} The group $\#8$ above is in fact the index two subgroup of the tetrahedral reflection group $T_5[2,3,3;2,3,4]$, \cite[pg. 228 \& Table 1]{MR2}. The three groups marked with an asterisk are infinite covolume web-groups with a hyperideal vertex, or of finite index in the truncated tetrahedral reflection groups enumerated in \cite{CM}. \subsection{The fifty-four $(4,2)$ arithmetic hyperbolic lattices.} \subsubsection{Real, quadratic.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline No. & polynomial & (Farey , order)& $\gamma$ & Disc. 
$k\Gamma$ & covolume\\ \hline {\bf Real} & $z+2$ & $GT(4,3;2)$ & $-2$ & $-24$ & \\ \hline {\bf Quad.} & &&& \\ \hline 2 & $z^2 +2z+2$ & $ (3/4;2) $ & $-1+ i$ & $ -4$&$ 0.457 $ \\ \hline 3 & $z^2 +z+1$ & $ (4/5;2) $ & $-0.5 + 0.86602 i$ & $ -3$ & $ 0.253$ \\ \hline 4 & $z^2 +3z+3$ & $ (7/10;1)$ & $-1.5 + 0.86602 i$& $-3$ & $ 0.253$\\ \hline 5 & $z^2 +1$ & $ (5/6;2)$ & $i$ & $ -4$ & $ 0.915 $\\ \hline 6 & $z^2 +4z+5$ & $ (2/3;4)$ & $-2+i$ & $-4 $& $ 0.915 $\\ \hline - & $z^2 +2z+3$ & $(3/4;3)$ & $-1+1.41421 i$& $ -8$& $ \infty $ \\ \hline 7 & $z^2 +z+2$ & $(19/24;1)$ & $-0.5 - 1.32288 i$& $ -7$ & $ 1.332$ \\ \hline 8 & $z^2 +3z+4$ & $(17/24;1)$ & $-1.5 +1.32288 i$& $ -7$& $ 1.332 $ \\ \hline 9 & $z^2 +5z+7$ & $(19/30;1) $ & $-2.5+ 0.86602 i$ & $-3 $ & $ 1.524$\\ \hline 10 & $z^2-z+1$ & $(13/15;2)$ & $0.5+ 0.86602 i$ & $-3 $ & $ 1.524$ \\ \hline \end{tabular} \\ \end{center} \newpage \subsubsection{Cubic.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline No. & polynomial & (Farey , order)& $\gamma$ & Disc. 
$k\Gamma$ & covolume\\ \hline 11 & $z^3-2z+2$ & $(43/48;1)$ & $0.884646 + 0.58974 i$ & $-76 $ & $ 1.985$\\ \hline 12 & $z^3+6z^2+10z+2$ & $(39/40;4)$ & $-2.884646 + 0.58974 i$ & $-76 $ & $ 1.985 $\\ \hline 13 & $z^3-z+1$ & $(8/9;2)$ & $0.662359 - 0.56228 i$ & $-23 $ & $ 0.824$ \\ \hline 14 & $z^3+6z^2+11z+5$ & $(11/18;1)$ & $-2.66236 + 0.56228 i$ & $ -23 $ & $ 0.824$ \\ \hline 15 & $z^3+z^2-z+1$ & $(7/8;3)$ & $0.419643 - 0.60629 i$ & $ -44$ & $ 0.264$\\ \hline 16 & $z^3+5z^2+7z+1$ & $(5/8;3)$ & $-2.41964 - 0.60629 i$ & $-44 $ & $ 0.264$ \\ \hline 17 & $z^3+z^2+1$ & $(6/7;2)$ & $0.232786 + 0.79255 i$ & $-31 $ & $0.595$\\ \hline 18 & $z^3+5z^2+8z+3$ & $(9/14;1)$ & $-2.23279+ 0.79255 i$ & $ -31$ & $0.595$\\ \hline 19 & $z^3+z^2+2$ & $(17/24;1)$ & $0.34781+1.02885i$ & $-116$ & $ 2.232 $ \\ \hline 20 & $z^3+5z^2+8z+2$ & $(17/24;1)$ & $-2.34781 +1.02885 i$ & $-116 $ & $ 2.232 $ \\ \hline 21 & $z^3+2z^2+z+1$ & $(5/6;3)$ & $-0.122561 + 0.74486 i$ & $ -23$& $ 0.137$ \\ \hline 22 & $z^3+ 4 z^2 + 5 z+1$ & $(2/3;3)$ & $-1.87744 - 0.74486 i$ & $ -23$& $ 0.137$ \\ \hline 23 & $z^3+z^2+z+2$ & $(101/120;1)$ & $0.17660+1.20282 i $& $-83 $ & $ 3.308$\\ \hline 24 & $z^3+5z^2+9z+4$ & $(79/120;1)$ & $-2.1766+ 1.20282i$ & $-83 $ & $ 3.308$ \\ \hline 25 & $z^3+2z^2+2z+3$ & $(49/60;1)$ & $-0.09473 + 1.28374 i$ & $ -139$& $ 2.597 $ \\ \hline 26 & $z^3+4z^2+6z+1$ & $(41/60;1)$ & $-1.90527 + 1.28374 i$ & $-139 $ & $ 2.597 $ \\ \hline 27 & $z^3+2z^2+2z+2$ & $(13/16;1)$ & $-0.22815+1.11514 i$ & $ -44$ & $0.793$ \\ \hline 28 & $z^3+4z^2+6z+2$ & $(11/16;1)$ & $-1.77184 + 1.11514 i$ & $ -44$& $ 0.793 $ \\ \hline 29 & $z^3+z^2+2z+1$ & $(17/21;2)$ & $-0.21508 + 1.30714 i$& $-23 $ & $ 0.824$ \\ \hline 30 & $z^3+5z^2+10z+7$ & (29/42;1) & $-1.78492+ 1.30714 i$ & $ -23 $& $ 0.824 $\\ \hline 31 & $z^3+2z^2+3z+1$ & $(7/9;2)$, & $-0.78492 + 1.30714 i$ & $ -23$ & $ 0.824 $ \\ \hline 32 & $z^3+4z^2+7z+5$ & $(13/18;1)$, & $-1.21508 + 1.30714 i$ & $-23 $& $ 0.824 $ \\ \hline 33 & $z^3+3z^2+5z+4$ & $ 
(31/40;1)$ & $-0.773301 + 1.46771 i$ & $ -59$ & $ 1.927 $\\ \hline 34 & $z^3+3z^2+5z+2$ & $ (29/40;1)$ & $-1.2267 + 1.46771 i$ & $ -59$ & $ 1.927$ \\ \hline 35 & $z^3+3z^2+4z+3$ & $(11/14;1)$ & $-0.658836-1.16154 i$ & $-31 $ & $0.593$ \\ \hline 36 & $z^3+3z^2+4z+1$ & $(5/7;2)$ & $-1.34116 + 1.16154 i$ & $-31 $ & $0.593$ \\ \hline \end{tabular} \\ \end{center} \newpage \subsubsection{Quartic.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline No. & polynomial & (Farey , order)& $\gamma$ & Disc. $k\Gamma$ & covol. \\ \hline \\ \hline 37& $ z^4 - 4 z^2 + z + 3 $ & $(7/102;1)$ & $1.36778 + 0.23154 i $ & $ -731$ & $ 2.9003 $ \\ \hline 38 & $ 1 + 15 z + 20 z^2 + 8 z^3 + z^4 $ & $(29/51;2)$ & $-3.36778+ 0.23154 i $ & $ -731$ & $ 2.9003 $ \\ \hline 39& $1 + z - 2 z^2 + z^4$ & $(77/85;2)$ & $1.00755 + 0.51311i $ & $ -283$ & $ 3.6673 $ \\ \hline 40 & $ 7 + 23 z + 22 z^2 + 8 z^3 + z^4 $ & $(101/170;1)$ & $-3.00755 + 0.51311i $ & $ -283$ & $ 3.6673 $ \\ \hline 41 & $z^4+z^3-3z^2-z+3$ & $(31/34;1)$ & $1.06115 + 0.38829 i$ & $ -491 $ & $ 1.542$ \\ \hline 42 & $z^4+7z^3+15z^2+9z+1$ & $(10/17;2) $ & $-3.06115+ 0.38829 i$ & $ -491$ & $ 1.542$\\ \hline - & $z^4+4z^3+7z^2+6z+1$ & $(3/4;5) $ & $-1+ 1.27202 i$ & $ -400 $ & $\infty$ \\ \hline - & $z^4+4z^3+8z^2+8z+2$ & $(3/4;4) $ & $-1+ 1.55377 i$ & $ -1024 $ & $\infty$ \\ \hline - & $z^4+4z^3+8z^2+8z+1$ & $(3/4;6) $ & $-1+1.65289 i$ & $ -4608 $ & $\infty$ \\ \hline 43 & $z^4+5z^3+11z^2+11z+3$ & $(40/51;2)$ & $-1.42057+ 1.45743 i$ & $ -731$ & $ 2.889 $\\ \hline 44 & $z^4+3z^3+5z^2+5z+1$ & $(40/51;2)$ & $-0.57943+ 1.45743 i$& $ -731$ & $ 2.889$ \\ \hline 45 & $z^4+2z^3+2z^2+3z+1$ & $(14/17;2)$ & $0.03640 +1.21238 i$ & $-491 $ & $ 1.540 $\\ \hline 46 & $z^4+6z^3+14z^2+13z+3$ & $(23/34;1)$ & $-2.03640 +1.21238 i$ & $ -491$& $ 1.540$ \\ \hline 47 & $z^4+6z^3+13z^2+10z+1$ & $(13/20;1)$ & $-2.20711 +0.97831 i$ & $ -448$ & $ 1.028 $ \\ \hline 48 & $z^4+2z^3+z^2+2z+1$ & $(17/20;1)$ & $0.20711 +0.97831 i$ & $ -448$ & $ 1.028 $ \\ \hline 49 &
$z^4+z^3-z^2+z+1$ & $(15/17;2)$ & $0.65138 + 0.75874 i$ & $-507 $ & $ 1.624 $ \\ \hline 50 & $z^4+7z^3+17z^2+15z+3$ & $(21/34;1)$ & $-2.65139 + 0.75874 i$ & $ -507$& $ 1.624$ \\ \hline 51 & $ z^4+3 z^3+4 z^2+4 z+1 $ & (4/5;3) & $ -0.40630 + 1.19616 i $ &$-331$ & $ 0.447 $\\ \hline 52 & $ z^4+5 z^3+10 z^2+8 z+1 $ & (7/10;3) & $ -1.59369 + 1.19616 i $ & $-331$&$ 0.447 $ \\ \hline 53 & $ z^4 + z^3 - 2 z^2+ 1 $ & (9/10;3) & $ 0.78810 + 0.40136 i $ &$-283$ & $ 0.347$ \\ \hline 54 & $ z^4+7 z^3+16 z^2+12 z+1 $ & (3/5;3) & $ -2.78810 + 0.40135i$ & $-283$&$ 0.347 $\\ \hline \end{tabular} \end{center} \newpage \subsection{The twenty-seven $(4,4)$ arithmetic hyperbolic lattices.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline No. & polynomial & (Farey,order)& $\gamma$ & Disc. $k\Gamma$ & covol.\\ \hline \\ \hline 1 & $z+2$ & $GT(4,4;2)$ & $-2$ & $-4$ &$0.91596$ \\ \hline -& $z+3$ & $Tet(4,4;3)$ & $-3$ & $-8$ & $\infty$ \\ \hline -& $1 + 3 z + z^2$ & $Tet(4,4;5)$ & $-2.61803$ & $ 5 $ & $\infty$ \\ \hline -& $2 + 4 z + z^2$ & $GT(4,4;4)$ & $-3.41421$ & $ 8$ & $\infty$ \\ \hline -& $1 + 4 z + z^2$ & $GT(4,4;6) $ & $-3.73205$ & $12$ & $\infty$ \\ \hline \\ \hline 2 & $3 + 3 z + z^2$ & $ (3/5,1) $ & $-1.5 + 0.866025 i$ & $ -3$ & $0.5067$ \\ \hline 3& $5 + 2 z + z^2$ & $ (2/3;2) $ & $-1 + 2 i$ & $ -4$ & $1.8319 $ \\ \hline 4& $8 + 5 z + z^2$ & $(7/12;1)$ & $-2.5 + 1.32288 i$ & $ -7$ & $2.6648$ \\ \hline 5& $7 - z + z^2$ & $(11/15;1)$ & $0.5 + 2.59808 i$ & $-3 $ & $3.0491$ \\ \hline \\ \hline 6& $3 + 8 z + 5 z^2 + z^3$ & $(4/7;1)$ & $-2.23278+0.79255 i$ & $ -31$ & $1.1878$ \\ \hline 7& $4 + 8 z - 4 z^2 + z^3$ & $(19/24;1)$ & $2.20409 + 2.22291 i$ & $-76$ & $3.9710$ \\ \hline 8& $5 + 3 z - 2 z^2 + z^3$ & $(7/9;1)$ & $1.44728 +1.86942 i$ & $-23 $ & $1.6481$ \\ \hline 9& $1 + 3 z - z^2 + z^3$ & $(3/4;3)$ & $0.647799 + 1.72143 i$ & $-44$ & $0.5295$ \\ \hline 10& $3 + 4 z + z^2 + z^3 $ & $(5/7;1)$ & $-0.108378 + 1.95409 i$ & $-31$ & $1.1900$ \\ \hline 11& $4 + 8 z + z^2 + z^3$ & 
$(17/24;1)$ & $-0.241944 + 2.7734 i $& $-116$ & $4.4654$ \\ \hline 12& $1 + 3 z + 2 z^2 + z^3$ & $(2/3,3)$ & $-0.78492 + 1.30714 i$ & $ -23$ & $0.2746$ \\ \hline 13& $8 + 11 z + 3 z^2 + z^3$ & $(41/60;1)$ & $-1.06238 + 2.83049 i$& $-83$ & $6.6173$ \\ \hline 14& $3 + 10 z + 4 z^2 + z^3$ & $(19/30;1)$ & $-1.82848 + 2.32426 i$ & $ -139$ & $5.1943$ \\ \hline 15& $4 + 8 z + 4 z^2 + z^3$ & $(5/8;1)$ & $-1.6478 + 1.72143 i $& $-44$ & $0.1322 $ \\ \hline 16& $7 + 12 z + 5 z^2 + z^3$ & $(5/9;1)$ & $-2.09252 + 2.052 i$& $-23 $ & $4.2441$ \\ \hline 17& $8 + 15 z + 7 z^2 + z^3$ & $(11/20;1)$ & $-3.10278 + 0.665457 i$ & $ -59$ & $3.8557$ \\ \hline 18& $3 + 8 z + 5 z^2 + z^3$ & $(4/7;1)$ & $-2.23278+0.79255 i$ & $ -31$ & $1.1878$ \\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline No. & polynomial & (Farey,order)& $\gamma$ & Disc. $k\Gamma$ & covol.\\ \hline \\ \hline 19& $z^4-8 z^3+12 z^2+23 z+3$ & $(44/51;1) $ & $4.55279 + 1.09648 i$ & $-731$ & $5.8007 $ \\ \hline 20& $7 + 15 z + 4 z^2 - 4 z^3 + z^4$ & $(69/85;1) $ & $2.76698 + 2.06021 i$ & $-283$ & $7.3346 $ \\ \hline 21& $3 + 13 z + 5 z^2 - 5 z^3 + z^4$ & $(14/17;1) $ & $3.09755 + 1.60068 i$ & $-491$ & $3.0852$ \\ \hline 22& $3 + 13 z + 17 z^2 + 7 z^3 + z^4$ & $(29/51;1)$ & $-2.94722 + 1.22589 i$ & $ -731$ & $5.7795$ \\ \hline 23& $3 + 11 z + 12 z^2 + 4 z^3 + z^4$ & $(11/17;1)$ & $-1.39573 + 2.51303 i$ & $-491 $ & $3.0815$ \\ \hline 24& $1 + 6 z + 7 z^2 + 2 z^3 + z^4$ & $(7/10;1)$ & $-0.5 + 2.36187 i$ & $-448$ & $2.0575$ \\ \hline 25& $3 + 9 z + 5 z^2 - z^3 + z^4$ & $(13/17;1)$ & $1.15139 - 2.50596 i$ & $-507$ & $3.2480$ \\ \hline 26& $z^4+5 z^3+10 z^2+6 z+1$ & $(3/5;3)$ & $-2.07833 + 1.4203 i$ & $-331$ & $0.8948$ \\ \hline 27& $z^4-3 z^3+2 z^2+6 z+1$ & $(4/5;3)$ & $2.03623 + 1.43534 i$ & $-283$ & $0.6951$ \\ \hline \end{tabular} \end{center} For completeness we recall Takeuchi's result in this case.
\begin{theorem} Let $\Gamma=\langle f,g \rangle$, with $f$ and $g$ of finite orders $4$ and $p$, be an arithmetic Fuchsian triangle group. Then there are eleven such groups and $\{4,p,q\}$ form one of the following triples. \begin{itemize} \item noncompact : $(2,4,\infty)$, $(4,4,\infty)$, \item compact: $(2,4,12)$, $(2,4,18)$, $(3,4,4)$, $(3,4,6)$, $(3,4,12)$, $(4,4,4)$, $(4,4,5)$, $(4,4,6)$, $(4,4,9)$. \end{itemize} \end{theorem} \section{Total degree bounds.} In this section we first outline the arithmetic criteria on the complex number \[ \gamma=\gamma(f,g) = \tr[f,g]-2 \] that are necessary in order for the group $\Gamma=\langle f,g:f^4=g^p= \cdots=1\rangle$ to be a discrete subgroup of an arithmetic group, following on from \cite{MM,MMpq}. This is encapsulated in the Identification Theorem \ref{idthm} below. It places rather stringent conditions on $\gamma=\gamma(f,g)$. Next, in order for $\Gamma$ to be a lattice, it is necessary that it is not free on its generators. We then use the ``disjoint isometric circles test'' and the Klein ``ping-pong'' lemma to obtain a coarse bound on $|\gamma|$. The arithmetic criteria and coarse bound then allow us to either directly give degree bounds or, in some cases, adapt the method of Flammang and Rhin \cite{FR} to obtain a total degree bound for the field $k\Gamma=\IQ(\gamma)$. It will turn out that the degree is actually no more than $4$ as we have noted above; however, a total degree bound of $7$ or $8$ brings us down into the range of feasible search spaces for integral monic polynomials with the root bounds we obtain exploiting arithmeticity and bounds on $|\gamma(f,g)|$, and so this is the first bound we will seek.
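A total degree bound of this kind immediately bounds $p$: for $p\geq 3$ the field $\IQ(\cos\frac{2\pi}{p})$ has degree $\phi(p)/2$ over $\IQ$ (with $\phi$ Euler's totient), and it sits inside $k\Gamma$, so a bound $[k\Gamma:\IQ]\leq 8$ forces $\phi(p)\leq 16$. A sketch of the resulting enumeration (Python, standard library only):

```python
from math import gcd

def phi(n):
    # Euler's totient by direct count; adequate for this small range.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# For p >= 3 the degree of Q(cos(2 pi/p)) over Q is phi(p)/2, so a bound
# [kGamma : Q] <= 8 forces phi(p) <= 16; p = 2 gives the rational field.
# Since phi(n) >= sqrt(n/2), it suffices to search n <= 512.
admissible = [2] + [p for p in range(3, 513) if phi(p) <= 16]
assert len(admissible) == 31 and max(admissible) == 60
```

This leaves $31$ admissible values of $p$, the largest being $60$.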
Note that this gives first bounds on $p$ since $\IQ(\cos\frac{2\pi}{p})\subset k\Gamma$ and \[ [k\Gamma:\IQ]=[k\Gamma:\IQ(\cos\frac{2\pi}{p})]\,[\IQ(\cos\frac{2\pi}{p}):\IQ]\leq 8.\] Then refining this degree bound down to $4$ or $5$ relies on us obtaining a coarse description of the moduli space ${\cal M}_{4,p}$ --- the deformation space of Riemann surfaces $\IS_{4,4,p,p}$ topologically homeomorphic to the sphere with two cone points of order $4$ and two cone points of order $p$, an analogue of the Riley slice for groups generated by two parabolics, where $\IS_{4,4,p,p}$ is replaced by the four times punctured sphere. Actually only the cases $(4,3)$ and $(4,4)$ have very large search spaces, for (as we shall see below) there is not necessarily an intermediate real field for us to work with. Indeed the $(4,2)$ case is actually covered by the methods of Flammang and Rhin \cite{FR}, who achieved a degree $9$ bound (for the $p=q=3$ case); their methods largely carry over, but matters are a bit more complicated since these moduli spaces when $q=4$ and $p\neq 2,4$ are not symmetric and do have significantly larger complements (where the groups we are looking for will lie). Next, the arithmetic criteria show that $\IQ(\gamma)$ has one complex conjugate pair of embeddings, and give good bounds on the real embeddings of the number $\gamma$ due to a ramification requirement. Thus the coefficients of the minimal polynomial for $\gamma$ are bounded once we have a bound on $|\gamma|$ (though very crudely). Given the degree bound we can now look through the space of polynomials with integer coefficients within these ranges to produce a list of possibilities. Without good bounds this can take quite a long time (many days). Then a factorisation criterion significantly reduces the possibilities further. This leaves us with a moderate number of values for $\gamma$ (perhaps several thousand). At this point there is no issue about whether the group is discrete. That is guaranteed.
The only question is if it is a lattice, and if so, which one. This is where we need the refined descriptions of moduli spaces. Our earlier work implemented a version of the Poincar\'e polyhedral theorem to construct a fundamental domain for the group acting on hyperbolic $3$-space, and thereby hoped to determine if this action is cocompact or not \cite{Cooper}. The case of cofinite volume but not cocompact is rather easier in the arithmetic setting, as then the invariant trace field $k\Gamma$ is quadratic \cite{MR}. This implementation worked well when there were no points quite close to the boundary (where the geometrically infinite groups are dense). However it fell over for points close to the boundary, and it required a case-by-case analysis, which was fine when there were only a few tens of groups to look at. Here we replace the last few steps by a finer description - and so tighter bounds - on the set where $\gamma$ must lie, giving smaller search spaces (a matter of minutes). Then we ``guess'' it is a Heckoid group and set about finding what it might be by enumerating the possible words and associated trace polynomials. This just happens to work --- and of course it is natural to conjecture that it always does. \subsection{The standard representation.} Define the two matrices \begin{equation}\label{AB} A = \begin{pmatrix} \cos \pi/p & i \sin \pi/p \\ i \sin \pi/p & \cos \pi/p \end{pmatrix}, \quad B = \begin{pmatrix} \cos \pi/q & i w \sin \pi/q \\ i w^{-1} \sin \pi/q & \cos \pi/q \end{pmatrix}. \end{equation} Then if $\G = \langle f,g\rangle $ is a non-elementary Kleinian group with $o(f)=p$ (where $o(f)$ denotes the order of $f$) and $o(g) = q$, with $p \geq q \geq 3$, then $\G$ can be normalised so that $f,g$ are represented by the matrices $A,B$ respectively. The parameter $\gamma$ is related to $w$ by \begin{equation}\label{2} \gamma = \sin^2 \frac{\pi}{p} \, \sin^2 \frac{\pi}{q} \; (w - \frac{1}{w})^2.
\end{equation} Given $\gamma$, we can further normalise and choose $w$ such that $| w | \leq 1$ and ${\rm Re}(w) \geq 0$. \subsection{Isometric circles.} The isometric circles of the M\"obius transformations determined by (\ref{AB}) are \[ \{z:|i \sin(\frac{\pi}{p}) z \pm \cos(\frac{\pi}{p}) |^2=1\}, \;\;\; \{z: |i w^{-1} \sin(\frac{\pi}{q}) z \pm \cos (\frac{\pi}{q}) |^2=1\} \] These circles are paired by the respective M\"obius transformations. With our normalisation on $w$ as above, these two pairs of circles are disjoint precisely when \begin{equation}\label{3} | i w \cot \pi/q + i \cot \pi/p | + \frac{| w |}{\sin \pi/q} \leq \frac{1}{\sin \pi/p}. \end{equation} If $\gamma$ is real, the isometric circles are disjoint if and only if $\gamma<-4$ or $\gamma > 4\big(\sqrt{2} \cos(\frac{\pi }{p})+1\big)+2\cos(\frac{2 \pi }{p})$. In the case of equality here the isometric circles are tangent. \bigskip With our normalisations, the well-known Klein ``ping-pong" argument quickly implies that, should the isometric circles be pairwise disjoint or at worst tangent, then the group generated by $A$ and $B$ is free on these two generators. It therefore cannot be an arithmetic lattice. In fact in these circumstances (with $p=4$ and $q\geq 3$) it is easy to see \[ \left[ \ID(i,\sqrt{2})\cap\ID(-i,\sqrt{2}) \right] \setminus \left[\ID\Big(i w\cot (\frac{\pi}{q}),\frac{|w|}{\sin(\frac{\pi}{q})}\Big) \cup \ID\Big(-i w\cot (\frac{\pi}{q}),\frac{|w|}{\sin(\frac{\pi}{q})}\Big)\right] \] is a nonempty fundamental domain for the action of $\Gamma=\langle f,g\rangle$ on $\oC\setminus \Lambda(\Gamma)$. \medskip We now need to turn the inequality (\ref{3}) into a condition on $\gamma$.
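As a numerical sanity check of (\ref{3}), consider the symmetric case $p=q=4$, where the stated real threshold $4\big(\sqrt{2}\cos\frac{\pi}{p}+1\big)+2\cos\frac{2\pi}{p}$ equals $8$; a sketch (Python with NumPy) confirming that tangency (equality in (\ref{3})) occurs at the real value $w=3-2\sqrt{2}$, exactly where $\gamma$ from (\ref{2}) hits this threshold:

```python
import numpy as np

def sides(w, p, q):
    # Left and right hand sides of the disjointness condition (3).
    lhs = abs(1j * w / np.tan(np.pi / q) + 1j / np.tan(np.pi / p)) \
        + abs(w) / np.sin(np.pi / q)
    rhs = 1 / np.sin(np.pi / p)
    return lhs, rhs

p = q = 4
w = 3 - 2 * np.sqrt(2)              # real w on the tangency locus
lhs, rhs = sides(w, p, q)
assert abs(lhs - rhs) < 1e-12       # equality in (3): tangent circles

# gamma from (2) equals the stated real threshold, which is 8 when p = 4.
gam = np.sin(np.pi / p) ** 2 * np.sin(np.pi / q) ** 2 * (w - 1 / w) ** 2
threshold = 4 * (np.sqrt(2) * np.cos(np.pi / p) + 1) + 2 * np.cos(2 * np.pi / p)
assert abs(gam - threshold) < 1e-12
```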
From (\ref{3}) we see with $q=4$ and $w=r e^{i\theta}$, $0\leq \theta\leq \pi/2$, that \begin{eqnarray*} | r \cos(\theta) + \cot \pi/p +i r\sin(\theta)| + \sqrt{2}r & \leq &\frac{1}{\sin \pi/p} \\ \Big( r \cos(\theta) + \cot (\frac{\pi}{p})\Big)^2 + r^2\sin^2(\theta) & \leq &(\frac{1}{\sin\frac{\pi}{p}}- \sqrt{2}r )^2 \\ 2 r \cos(\theta) \cot (\frac{\pi}{p}) & \leq &1+r^2 - \frac{2\sqrt{2}r}{\sin\frac{\pi}{p}} \\ \end{eqnarray*} Equality here gives a parametric equation for the curve bounding a region $\Omega_p$: \[\Omega_p\; :\;0=1+r^2 -2 r \; \frac{\sqrt{2}+ \cos(\theta) \cos (\frac{\pi}{p})}{\sin\frac{\pi}{p}}\] \scalebox{0.5}{\includegraphics[viewport=-50 250 600 750]{ParametricRegion}}\\ {\bf Figure 2.} {\em The smooth parametric regions $\Omega_p$ from smallest to largest, $p=3,6,9$ and $12$. If $\gamma \not\in \Omega_p$, then $\langle f,g\rangle$ is freely generated. Inset is the actual region (in the case $p=3$) where free generation occurs, illustrated by interior pleating rays landing on its boundary.} \medskip The $\gamma$ values for the groups we seek lie inside the regions $\Omega_p$. However, the region $\Omega_p$ arising from this simplest criterion (disjoint isometric circles) is apparently much larger than the region actually required to identify the groups that are freely generated. Nevertheless these regions are sufficient to provide initial bounds, especially for $p$ large enough that $[\IQ(\cos \frac{2\pi}{p}):\IQ]>17$; this leaves $31$ possible values of $p$, none more than $60$. \bigskip \subsection{Real points.} At this point we would like to remove the cases $\gamma\in \IR$ from further consideration. This result is primarily based on \cite{MM3}. \subsubsection{If $\gamma>0$,} then the group $\Gamma$ is a Fuchsian triangle group, or is freely generated, and is therefore not an arithmetic lattice.
However, the $(4,p,q)$-triangle groups $\langle a,b|a^4=b^p=(ab)^q =1\rangle$ have parameters \begin{eqnarray*}(\gamma,\tr^2(f)-4,\tr^2(g)-4)& = & (\gamma,-2,-4\sin^2(\frac{\pi}{p})), \\ \gamma &=& 4 \cos (\frac{\pi }{q})\big(\sqrt{2} \cos(\frac{\pi }{p})+\cos(\frac{\pi }{q})\big)+2\cos(\frac{2 \pi }{p})\end{eqnarray*} \subsubsection{If $\gamma<0$,} and if $\langle f,g\rangle$ is not free on generators, then $-4<\gamma <0$. It follows that $-2< \tr[f,g] < 2$ and $[f,g]$ is elliptic. All groups generated by two elements of finite order and whose commutator has finite order were identified in \cite{MM3}. As examples, the $(4,p,q)$-generalised triangle groups $\langle a,b|a^4=b^p=[a,b]^q=1 \rangle$ have parameters \[ (\gamma,-2,-4\sin^2(\frac{\pi}{p})), \quad \gamma =-2-2\cos(\frac{\pi}{q}) \] \bigskip From that classification we have the following result. \begin{theorem} Let $\Gamma=\langle f,g\rangle$ be a non-elementary Kleinian group where $f,g$ are elliptic elements with $o(f) =4$, $o(g) = q$ and $o([f,g]) = n$. Assume that $\Gamma$ does not have an invariant hyperbolic plane. Then the generating pairs give rise to the following sets of parameters. \begin{enumerate} \item {\em Generalised triangle groups} $\langle x,y|x^4=y^p=[x,y]^n=1\rangle$. \[ (\gamma,-2,\beta)= (-2-2 \cos(\pi/n),-2,-4 \sin^2(\pi/p)). \] \item {\em Tetrahedral groups with $n$ odd}. \begin{eqnarray*} \langle x,y|x^4 & = & y^p=[x,y]^n=1, (x[y, x^{-1}]^{(n-1)/2})^2 = 1, \\ && (x[y, x^{-1}]^{(n-1)/2}y)^2 = ([y, x^{-1}]^{(n-1)/2}y)^2 =1\rangle \end{eqnarray*} \[ (\gamma,-2,\beta)= (-2-2 \cos(2\pi/n),-2,-4 \sin^2(\pi/p)). \] \item {\em Tetrahedral groups with $n$ odd and $n\geq 7$}. In this case we have a group with the presentation as determined above, but additional possibilities for the value of $\gamma$. \[ (\gamma,-2,\beta)= (-2-2 \cos(4\pi/n),-2,-4 \sin^2(\pi/p)). \] \end{enumerate} The only cocompact group is the group $(4,p;n)=(4, 5; 3)$ which is arithmetic.
The only non-cocompact groups of finite covolume are $(4,p;n)=GT(4,3; 2), Tet(4, 6; 3), GT(4,4; 2)$, which are all arithmetic. \end{theorem} Further data on these groups can be found in \cite{MM3}. \medskip Notice here that if $n=2$, then $\tr[f,g]=0$ and $\gamma(f,g)=-2=\beta(f)$. Then $\gamma(f,gfg^{-1})=\gamma(f,g)(\gamma(f,g)-\beta(f))=0$ and $f$ and $gfg^{-1}$ share a fixed point on $\oC$. In this last case here $(4,p;n)=(4, 4; 2)$ is the $\IZ_2$ extension with parameters $(1+i,-2,-4)$. In summary, the possible values of $\gamma\in \IR$ for the arithmetic Kleinian groups we seek are \[ p=3:\ \gamma=-2,\quad p=4:\ \gamma=-2,\quad p=5:\ \gamma=-3,\quad p=6:\ \gamma=-3. \] \subsection{Arithmetic restrictions on the commutator $\gamma(f,g)$.} Having dispensed with the case $\gamma=\gamma(f,g)\in \IR$, in the rest of this paper we shall assume that $\gamma$ is complex. We require the following preliminaries. Let $\G$ be any non-elementary finitely-generated subgroup of $\PSL(2,\IC)$. Let $\G^{(2)} = \langle g^2 \mid g \in \G \rangle $ so that $\G^{(2)}$ is a subgroup of finite index in $\G$. Define \begin{equation} \left. \begin{array}{lll} k\G & = & \IQ(\{ \tr(h) \mid h \in \G^{(2)} \}) \\ A\G & = & \{ \sum a_i h_i \mid a_i \in k\G, h_i \in \G^{(2)} \} \end{array}\;\;\;\; \right\} \end{equation} where, with the usual abuse of notation, we regard elements of $\G$ as matrices, so that $A\Gamma \subset M_2(\IC)$. Then $A\G$ is a quaternion algebra over $k\G$ and the pair $(k\G, A\G)$ is an invariant of the commensurability class of $\G$. If, in addition, $\G$ is a Kleinian group of finite co-volume, then $k\G$ is a number field. We state the identification theorem as follows: \begin{theorem} \label{idthm} Let $\G$ be a subgroup of $\PSL(2,\IC)$ which is finitely-generated and non-elementary.
Then $\G$ is an arithmetic Kleinian group if and only if the following conditions all hold: \begin{enumerate} \item $k\G$ is a number field with exactly one complex place, \item for every $g \in \G$, $\tr(g)$ is an algebraic integer, \item $A\G$ is ramified at all real places of $k\G$, \item $\G$ has finite co-volume. \end{enumerate} It should be noted that the first three conditions together imply that $\G$ is Kleinian, and without the fourth condition, are sufficient to imply that $\G$ is a subgroup of an arithmetic Kleinian group. The first two conditions clearly depend on the traces of the elements of $\G$. In addition, we may also find a Hilbert symbol for $A\G$ in terms of the traces of elements of $\G$ so that the third condition also depends on the traces (for all this, see \cite{MR1},\cite[Chap. 8]{MR}). \subsection{The factorisation condition.}\label{factorisation} For an arithmetic group generated by elliptic elements of order $4$ and $p$, it was observed in \cite{MMpq} that, as a consequence of the Fricke identity, the number $\lambda= \tr f \tr g \tr fg$ is an algebraic integer and satisfies the quadratic equation \begin{equation}\label{eqn5} x^2 - 8\cos^2\frac{\pi}{p}\, x + 8\cos^2\frac{\pi}{p}(-2 + 4\cos^2\frac{\pi}{p} -\gamma) = 0. \end{equation} Thus \[ \lambda = 4 \cos ^2 \frac{\pi }{p} \pm 2 \sqrt{2} \cos \frac{\pi }{p} \sqrt{\gamma + 2\sin^2\frac{ \pi }{p}} \] and \begin{equation}\label{lp} \lambda_p= 2 \sqrt{2} \cos \frac{\pi }{p} \sqrt{\gamma + 2\sin^2\frac{ \pi }{p}} \end{equation} is an algebraic integer. Further, if $p$ is odd, then $2 \cos \frac{\pi }{p} $ is a unit and so \[ \alpha_p = \sqrt{2} \sqrt{\gamma +2\sin^2\frac{ \pi }{p}} =\sqrt{2\gamma -\beta} \] is an algebraic integer. Since the field $\IQ(\gamma)$ has one complex place we must have $\IQ(\gamma)=\IQ(\lambda_p)$ and when $p$ is odd $\IQ(\gamma)=\IQ(\alpha_p)$.
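The relation between (\ref{eqn5}) and the displayed roots is easily checked numerically. The following short Python sketch (not part of the formal argument; the value of $\gamma$ is an arbitrary sample, not one arising from a group) verifies that $\lambda$ is a root of the quadratic.

```python
from math import cos, sin, pi
from cmath import sqrt as csqrt

# Sample values: p = 7 and an arbitrary complex gamma (illustration only).
p, gamma = 7, complex(-1.0, 2.0)
c2 = cos(pi / p) ** 2

# lambda = 4 cos^2(pi/p) + 2 sqrt(2) cos(pi/p) sqrt(gamma + 2 sin^2(pi/p))
lam = 4 * c2 + 2 * csqrt(2) * cos(pi / p) * csqrt(gamma + 2 * sin(pi / p) ** 2)

# It should satisfy x^2 - 8cos^2(pi/p) x + 8cos^2(pi/p)(-2 + 4cos^2(pi/p) - gamma) = 0.
residue = lam ** 2 - 8 * c2 * lam + 8 * c2 * (-2 + 4 * c2 - gamma)
assert abs(residue) < 1e-12
```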
\medskip This factorisation criterion yielding $\IQ(\gamma)=\IQ(\lambda_p)$ is a powerful obstruction for an algebraic integer $\gamma$ to satisfy when $p\neq 2$. In particular we will apply it in the following form, which is easily seen to be equivalent in the case at hand, that of fields with one complex place. \begin{theorem} \label{samedegree} Let $\Gamma=\langle f,g\rangle$ be an arithmetic Kleinian group generated by elliptic elements of order $4$ and $p$, with $p\neq 2$. Then the minimal polynomial for $\lambda_p$ as at (\ref{lp}) has the same degree as the minimal polynomial for $\gamma=\gamma(f,g)$. \end{theorem} For $p\not\in\{2,3,4,6\}$ there is an intermediate real field, as $\cos(2\pi/p)\not \in\IQ$ lies in the invariant trace field. We set \[ L = \IQ(\cos(2\pi/p)),\quad\quad [\IQ(\gamma):\IQ]=[L:\IQ] \times [\IQ(\gamma):L] \] Now if $\G$ is arithmetic, $k\G$ will be a number field with one complex place if and only if $L(\gamma)$ has one complex place and the quadratic at (\ref{eqn5}) splits into linear factors over $L(\gamma)$. This implies that, if $\tau$ is any real embedding of $L(\gamma)$, then the image of the discriminant of (\ref{eqn5}), which is $32 \cos^2\frac{\pi}{p}(2\sin^2\frac{\pi}{p} + \gamma)$, under $\tau$ must be positive. Clearly this is equivalent to requiring that \begin{equation}\label{eqn6} \tau( 2\sin^2\frac{\pi}{p} + \gamma) > 0. \end{equation} Thus $k\G$ has one complex place if and only if (i) $\IQ(\gamma)$ has one complex place, (ii) $L \subset \IQ(\gamma)$, (iii) for all real embeddings $\tau$ of $\IQ(\gamma)$, (\ref{eqn6}) holds and (iv) the quadratic at (\ref{eqn5}) factorises over $\IQ(\gamma)$. Now, still in the cases where $p> 2$, we have (\cite[\S 3.6]{MR}) \begin{equation}\label{eqn7} A\G = \HS{-1}{ 2\cos^2(\frac{\pi}{p}) \,\gamma}{k\G}.
\end{equation} Then $A\G$ is ramified at all real places of $k\G$ if and only if, under every real embedding $\tau$ of $k\G$, \begin{equation}\label{eqn8} \tau(\gamma) < 0. \end{equation} Thus, summarising, we have the following theorem which we will use to determine the possible $\gamma$ values for the groups we seek. \begin{theorem}[The Identification Theorem] \label{2genthm} Let $\G = \langle f , g \rangle $ be a non-elementary subgroup of $\PSL(2,\IC)$ with $f$ of order $4$ and $g$ of order $p$, $p \geq 3$. Let $\gamma(f,g) = \gamma \in \IC \setminus \IR$. Then $\G$ is an arithmetic Kleinian group if and only if \begin{enumerate} \item $\gamma$ is an algebraic integer, \item $\IQ(\gamma) \supset L = \IQ(\cos 2 \pi/p)$ and $\IQ(\gamma)$ is a number field with exactly one complex place, \item if $\tau : \IQ(\gamma) \rightarrow \IR$ such that $\tau |_L = \sigma$, then \begin{equation}\label{eqn10} - \sigma( 2\sin^2 \frac{\pi}{p}) < \tau(\gamma) < 0, \end{equation} \item the algebraic integer $\lambda_p$ defined at (\ref{lp}) has the same degree as $\gamma$, \item $\G$ has finite co-volume. \end{enumerate} \end{theorem} \subsection{The possible values of $p$.} If the generator $f$ has order $4$, then Theorem \ref{2genthm} implies, first, that $\gamma$ is an algebraic integer, secondly, that $\IQ(\gamma)$ has exactly one complex place and, thirdly, that $\IQ(\gamma)$ must contain $L = \IQ(\cos 2 \pi/p)$. Let \begin{equation} [\IQ(\gamma) : L ] = r. \end{equation} Since the minimal polynomial for $\gamma$ factors over $L$ we know that $n=r\mu=[\IQ(\gamma):\IQ]$, where $\mu=[L:\IQ]$. Further, all real embeddings of $\gamma$ lie in the interval $[-2,0]$. Next, from the disjoint isometric circles criteria, we obtain the following information when the group $\Gamma=\langle f,g\rangle$ is not free on these two generators, a necessary condition if $\G$ is to be arithmetic.
\begin{lemma} \label{modlem} Let $\langle f,g\rangle$ be a discrete group with $o(f)=4$ and $o(g)=p$ and which is not free on these two generators. Let $\gamma=\gamma(f,g)$. Then \begin{enumerate} \item $\Re e(\gamma)>-4$ and this is sharp: for each $p\geq 2$, $GT(4,p;\infty)$ has $[f,g]$ parabolic and $\gamma=-4$. This group is free on generators. Further, $GT(4,p;n)$ has $\gamma=-2-2\cos(\frac{\pi}{n})$ which tends to $-4$ with $n$. None of these groups are free on generators, \cite{MM3}. \item For each $p\geq 2$ we have \begin{equation}\label{trivial bound} |\gamma|\leq 4 \big(\sqrt{2} \cos(\frac{\pi }{p})+1\big)+2\cos(\frac{2 \pi }{p}).\end{equation} This estimate is sharp and achieved by the $(4,p,\infty)$ triangle group. The $(4,p,n)$ triangle groups have \[\gamma=2 \left(\cos(\frac{2 \pi }{n})+2\cos(\frac{\pi }{p}) \left(\sqrt{2} \cos(\frac{\pi }{n})+\cos(\frac{\pi }{p})\right)\right)\] which tends to $4 \big(\sqrt{2} \cos(\frac{\pi }{p})+1\big)+2\cos(\frac{2 \pi }{p})$ as $n\to\infty$. The $(4,p,n)$-triangle groups are not free on generators. \end{enumerate} \end{lemma} \noindent{\bf Proof.} To obtain the bound $|\gamma|\leq 4 \big(\sqrt{2} \cos(\frac{\pi }{p})+1\big)+2\cos(\frac{2 \pi }{p})$ for each $p$ we simply find the point of maximum modulus on the region $\Omega_p$ using calculus. Next, there are points within $\Omega_p$ whose real part is less than $-4$. We have to show that these give rise to freely generated groups. We do this by a cut and paste argument to produce a fundamental domain from the isometric circles, which are no longer disjoint. We can normalise so that the isometric circles of $f$ and of $g$ are \[ I(f): |z\pm i|=\sqrt{2},\quad I(g): |z\pm i\omega\cot \frac{\pi}{p}|=\frac{1}{\sin\frac{\pi}{p}} \] and that $\Im m(i\omega)\geq 0$. Let $D_1=\{z:|z+i\omega\cot \frac{\pi}{p}|\leq\frac{1}{\sin\frac{\pi}{p}}\}$ and $D_2=\{z:|z-i\omega\cot \frac{\pi}{p}|\leq\frac{1}{\sin\frac{\pi}{p}}\}$ be the disks bounded by the isometric circles of $g$.
\begin{lemma}\label{fog} Suppose that $f^{-1}(D_1)\cap (D_1\cup D_2)=f(D_2)\cap (D_1\cup D_2)=\emptyset$. Then $\langle f,g \rangle$ is free on generators. \end{lemma} \noindent{\bf Proof.} We replace the fundamental domain $D(i,\sqrt{2})\cap D(-i,\sqrt{2})$ for $f$ with the domain \[ \big([D(i,\sqrt{2})\cap D(-i,\sqrt{2})]\cup D_1\cup D_2\big)\setminus (f^{-1}(D_1) \cup f(D_2)) \] By construction the exterior of the isometric circles of $g$ lies in this region, and this region is a fundamental domain for $f$. Thus the ``ping-pong'' hypotheses are satisfied. \hfill $\Box$ \medskip \scalebox{0.5}{\includegraphics[viewport=-100 300 500 730]{Nfog}} \medskip \noindent{\bf Figure 3.}{\em Isometric circles of $f$ and $g$ (in red). The blue disks are the images of the leftmost isometric circle of $g$ under $f^{-1}$, and the rightmost under $f$. The modified fundamental domain for $f$ consists of the intersection of the fundamental disks for $f$, together with the fundamental disks for $g$ (red), with their images (the two blue disks) then deleted.} \medskip This cut and paste technique for moving the fundamental domains around to a good configuration can be extended, but quickly becomes too complex to obtain useful information. It is apparent that this is only possible when the angle between the axes of $f$ and $g$ is small compared with the radius of the disk. We want to turn the hypotheses of Lemma \ref{fog} into a restriction on $\gamma$. Things have been set up so that \[ f(z) = \frac{z+i}{iz+1}. \] There is also an evident rotational symmetry. We therefore need to show that $f(D)$ is disjoint from the isometric circles of $g$, where $D$ is the left-most isometric circle of $g$.
We write \[\zeta=i\omega\cot \frac{\pi}{p}+\frac{e^{i\theta}}{\sin\frac{\pi}{p}}, \quad \mbox{that is $\zeta\in \partial D$} \] and we need to show \[ |f(\zeta)\pm i\omega\cot \frac{\pi}{p}|\geq \frac{1}{\sin\frac{\pi}{p}} \] These yield the two inequalities \[ \left| -i+i \omega \cot \frac{\pi }{p} +\frac{2}{-i+i \omega \cot \frac{\pi }{p} +e^{i \theta } \csc \frac{\pi }{p} }\right|\sin \frac{\pi }{p} \geq 1\] and \[\left| -i -i\omega \cot \frac{\pi }{p} +\frac{2}{-i+i \omega \cot \frac{\pi }{p} +e^{i \theta } \csc \frac{\pi }{p} }\right| \sin \frac{\pi }{p} \geq 1\] \subsection{First bounds.} We now make use of these facts, together with the inequalities that $\gamma$ and its conjugates must satisfy, to determine the triples $(4,p,r)$ for which there may exist a $\gamma$-parameter corresponding to an arithmetic Kleinian group which is not free, using the criteria above. Our first estimate is straightforward and is simply used to cut down the set of triples we search through. \begin{lemma}\label{easybound} Let $\langle f,g\rangle$ be a discrete group with $o(f)=4$ and $o(g)=p$ and which is not free on these two generators. Let $\gamma=\gamma(f,g)$. Then the degree $[\IQ(\gamma):\IQ]\leq 16$. \end{lemma} \noindent{\bf Proof.} Let $P(z)$ be the minimal polynomial for $1+\gamma$ of degree $n$. Then $P$ has complex roots $1+\gamma$ and $1+\bar\gamma$ and, as $\IQ(\gamma)$ has one complex place, $n-2$ real roots $r_i$ in the interval $[-1,1]$. As $P$ is irreducible and integral \begin{eqnarray*} 1 &\leq & |P(0)|^2|P(-1)||P(1)| = |1+\gamma|^4|\gamma|^2|\gamma+2|^2 \prod_{i=1}^{n-2} r_i^2(1-r_i^2) \\ &\leq & \left(7+4 \sqrt{2}\right)^4\left(6+4 \sqrt{2}\right)^2\left(8+4 \sqrt{2}\right)^2 4^{2-n} \end{eqnarray*} where we have used the trivial bounds on $|\gamma|$ from Lemma \ref{modlem}. This is not possible if $n\geq 17$. \hfill $\Box$ \medskip There is far more information to be had here when we identify a specific $p$.
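The final inequality in this proof is easily rechecked; a short Python sketch (a numerical sanity check only, not part of the proof):

```python
from math import sqrt

# Right-hand side of the inequality in the proof of Lemma (easybound):
# (7+4sqrt2)^4 (6+4sqrt2)^2 (8+4sqrt2)^2 4^(2-n)
def rhs(n):
    s = sqrt(2)
    return (7 + 4 * s) ** 4 * (6 + 4 * s) ** 2 * (8 + 4 * s) ** 2 * 4.0 ** (2 - n)

assert rhs(16) > 1   # n = 16 is not excluded by this estimate alone
assert rhs(17) < 1   # but 1 <= rhs(n) fails for every n >= 17
```

Since `rhs` is decreasing in $n$, the failure at $n=17$ rules out all larger degrees.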
For instance if $p=4$ and $P$ is the minimal polynomial for $\gamma$, following the above argument we see $r_i\in [-1,0]$ and \begin{eqnarray*} 1 &\leq & |P(0)||P(-1)| = |\gamma|^2|1+\gamma|^2 \prod_{i=1}^{n-2} |r_i |(1-|r_i|) \leq \left(10884 + 7696\sqrt{2}\right) 4^{2-n} \end{eqnarray*} This gives us a total degree bound of at most $9$. In fact this (where the orders of $f$ and $g$ are the same) is typically the worst case, and degree $9$ yields a feasible search space (as per \cite{FR}). \medskip The field $L$ is totally real and the embeddings $\sigma : L \rightarrow \IR$ are defined by \[ \sigma( \cos \frac{2 \pi}{p}) = \cos \frac{2 \pi j}{p}, \hskip15pt (j,p)=1, \;\; j< [p/2].\] Let us denote these embeddings by $\sigma_1, \sigma_2, \ldots , \sigma_{\mu}$, with $\sigma_1 = {\rm Id}$. Here $\mu=[L:\IQ]$ and from the Online Encyclopedia of Integer Sequences (sequence A055034) we find that \[ [L:\IQ] = \Phi[p]/2, \] where $\Phi$ is Euler's function. Since $\gamma$ is complex, our bound from Lemma \ref{easybound} implies that $\Phi[p] \leq 16$. Hence $p\leq 60$, and in fact only $30$ possibilities for $p$ remain. \subsection{Discriminant bounds.} We now follow \cite[\S 4.2]{MM} to obtain an estimate on the relative discriminant. The discriminant of the minimal polynomial for $\gamma$ is \[ disc(\gamma) = |\gamma-\bar\gamma|^2 \prod_{ i=1}^{n-2} |\gamma - r_i|^4 \prod_{1 \leq i < j \leq n-2} (r_i - r_j)^2, \] where the $n-2$ real roots $r_i\in [-2,0]$. Let \begin{equation} \mu = [L:\IQ] \end{equation} so that the total degree of the field $\IQ(\gamma)$ is $n=r\mu$. For $n \geq 2$, let $D_n$ denote the minimum absolute value of the discriminant of any field of degree $n$ over $\IQ$ with exactly one complex place. For small values of $n$ the number $D_n$ has been widely investigated (\cite{CDO,Di,DO}) and lower bounds for $D_n$ for all $n$ can be computed (\cite{Mull,Od,Rodgers,Stark}).
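Both counts above are easily verified. The sketch below rechecks the $p=4$ degree bound (using $|\gamma|\leq 6+4\sqrt{2}$ and $|1+\gamma|\leq 7+4\sqrt{2}$ as in the proof of Lemma \ref{easybound}) and the enumeration of orders with $\Phi[p]\leq 16$; it is a numerical aside, not part of the argument.

```python
from math import sqrt, log, gcd

# p = 4: 1 <= (6+4sqrt2)^2 (7+4sqrt2)^2 4^(2-n) = (10884 + 7696 sqrt2) 4^(2-n).
bound = (6 + 4 * sqrt(2)) ** 2 * (7 + 4 * sqrt(2)) ** 2
assert abs(bound - (10884 + 7696 * sqrt(2))) < 1e-6
assert int(2 + log(bound) / log(4)) == 9        # total degree at most 9

# Orders p >= 3 with Phi[p] <= 16: thirty values, the largest being 60.
def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

admissible = [p for p in range(3, 500) if phi(p) <= 16]
assert len(admissible) == 30 and max(admissible) == 60
```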
In \cite{Od}, the bound is given in the form $D_n > A^{n-2} B^2 \exp(-E)$ for varying values of $A,B$ and $E$. Choosing, by experimentation, suitable values from this table we obtain the bounds shown in Table 1. We will use more precise data later. \begin{table}[h] \begin{center} \begin{tabular}{clcl} Degree $n$ & Bound & Degree $n$ & Bound \\ 2 & 3 & 3 & 27 \\ 4 & 275 & 5 & 4511 \\ 6 & 92779 & 7 & 2306599 \\ 8 & 68856875* & 9 & $0.11063894 \times 10^{10} $ \\ 10 & $0.31503776 \times 10^{11}$ & 11 & $0.90315026 \times 10^{12}$ \\ 12 & $0.25891511 \times 10^{14}$ & 13 & $0.74225785 \times 10^{15}$ \\ 14 & $0.21279048 \times 10^{17}$& 15 & $0.61002775 \times 10^{18}$ \\ 16 & $0.17488275 \times 10^{20}$ & 17 & $0.50135388 \times 10^{21}$ \\ 18 & $0.14372813 \times 10^{23} $ & 19 & $0.41203981 \times 10^{24}$ \\ 20 & $0.11812357 \times 10^{26}$ \end{tabular} \caption{Discriminant Bounds} \end{center} \end{table} \subsection{Schur's bound.} We will need to use Schur's bound \cite{S} which gives that, if $-1 \leq x_1 < x_2 < \cdots < x_r \leq 1$ with $r \geq 3$ then \begin{equation}\label{eqn31} \prod_{1 \leq i < j \leq r} (x_i - x_j)^2 \leq M_r = \frac{2^2\,3^3\, \ldots r^r\, 2^2 \, 3^3 \, \ldots (r-2)^{r-2}}{3^3\, 5^5 \, \dots (2r-3)^{2r-3}}. \end{equation} The bounds we have at hand give \begin{equation}\label{disc} D_n \leq disc(\gamma) \leq |\gamma-\bar\gamma|^2 (2+|\gamma|)^{2(n-2)} M_{n-2}, \quad\quad n \geq \Phi[p] \end{equation} where $\Phi$ is the Euler phi function. However, this can be improved to capture the fact that the real roots of the minimal polynomial for $\gamma$ are not so well distributed as those of the sharp estimate for Schur's formula (roots of Chebychev polynomials). Following \cite{MM} we let $\Delta_1 = \delta_{\IQ(\gamma) \mid L}$, the relative discriminant of the field extension $\IQ(\gamma) \mid L$, and let $\Delta$ denote the discriminant of the basis $1, \gamma, \gamma^2, \ldots , \gamma^{r-1}$ over $L$. 
Then $$| N_{L \mid \IQ}(\Delta) | \geq | N_{L \mid \IQ}(\Delta_1) |.$$ \begin{equation}\label{eqn33} |N_{L \mid \IQ}(\delta_{\IQ(\gamma) \mid L})| = |\Delta_{\IQ(\gamma)}|/\Delta_L^r. \end{equation} Next set \begin{equation}\label{18} K(p,r) = M_{r-2}[2(\sqrt{2}+ \cos\frac{\pi}{p})^2]^{4(r-2)}\left(\sin\frac{\pi}{p} \right)^{2(r-2)(r-3)}|\gamma-\bar\gamma|^2 \end{equation} The discriminant $\Delta_p$ of the field $\IQ(\cos \frac{2\pi}{p})$ is given in \cite[(30)]{MM}. Then (see \cite[(31)]{MM}) we have the inequality \begin{equation}\label{19} K(p,r) \left(\sin\frac{\pi}{p}\right)^{-2r(r-1)}\left( \frac{\delta_{p}}{8}\right)^{\mu r(r-1)} M_r^{\mu-1} \geq \max\{1, D_n/\Delta_L^r\} \end{equation} where here \[ \delta_p = \left\{ \begin{array}{ll} 1 & {\rm if~}p \neq \pi^{\alpha}, \;\; \pi~{\rm a~prime} \\ \pi & {\rm if~}p = \pi^{\alpha}, \;\; \pi ~{\rm a~prime}, \end{array} \right. \] If (\ref{disc}) fails to eliminate a case, then we put together inequalities (\ref{18}) and (\ref{19}) to try to eliminate it. We obtain the following remaining possibilities. \begin{center} \begin{tabular}{|c|c|c|l|} \hline $p$ & $[\IQ(\gamma):L]$ & $\Delta_p$ & total degree $n$ \\ \hline 3 & 1 & $1$& $n\leq 11$ \\ \hline 4 & 1 & $1$&$n\leq 8$\\ \hline 5 & 2 & $5$&$n=4,6,8$\\ \hline 6 & 1& $1$&$n=2,3,4$ \\ \hline 7 & 3 & $49$&$n=6,9$ \\ \hline 8 & 2 & $8$&$n=4,6,8$ \\ \hline 9 & 3 & $81$&$n=6,9$\\ \hline 10 & 2 & $5$&$n=4,6$\\ \hline 11 & 5 & $14641$&$n=10$ \\ \hline 12 & 2& $12$&$n=4,6,8$ \\ \hline 15 & 4 & $1125$&$n=8$ \\ \hline 18 & 3 & $81$&$n=6,9$ \\ \hline 20 & 4 & $2000$&$n=8$ \\ \hline 24 & 4 & $2304$&$n=8$ \\ \hline 30 & 4 & $1125$& $n=8$ \\ \hline \end{tabular} \end{center} For all but small $p$ we find that the minimal polynomial for $\gamma$ is quadratic or cubic over the base field $\IQ(\cos 2\pi/p)$. 
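The constants $M_r$ appearing in Schur's bound (\ref{eqn31}) are rational and can be computed exactly; for example $M_3=4$, attained by the points $\{-1,0,1\}$. A short Python sketch (for checking purposes only):

```python
from fractions import Fraction

def schur_M(r):
    # M_r = (2^2 3^3 ... r^r)(2^2 3^3 ... (r-2)^(r-2)) / (3^3 5^5 ... (2r-3)^(2r-3))
    num = Fraction(1)
    for k in range(2, r + 1):
        num *= Fraction(k) ** k
    for k in range(2, r - 1):
        num *= Fraction(k) ** k
    den = Fraction(1)
    for k in range(3, 2 * r - 2, 2):
        den *= Fraction(k) ** k
    return num / den

assert schur_M(3) == 4                       # attained by {-1, 0, 1}
assert schur_M(4) == Fraction(4096, 3125)    # = 1.31072
```

The value $M_4=2^2 3^3 4^4\, 2^2/(3^3 5^5)$ is the one used for the quartic cases below.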
We will work through some explicit examples to find all these algebraic integers in a moment, but all these search spaces are feasible with a little work except $p=3$ for which we will find alternative arguments, and $p=4$ which will follow from \cite{FR} once we have better bounds on the shape of the moduli space. The case $p=5$ also looks troublesome. Typically we search for $1+\gamma$ as then all real roots are in $[-1,1]$ and it is easier to get bounds on the symmetric functions of the roots, and therefore the coefficients of the polynomials. \subsection{$p=7$, $p=9$, $p=11$ \& $p=18$.} There are four cases not yet eliminated where the total degree exceeds $8$: these are $p=11$ with total degree $n=10$ and $p=7, 9,18$ with total degree $6$ or $9$. Let us work through these cases, first with $p=11$. The minimal polynomial $P$ is quadratic over $\IQ(\cos(2\pi/11))$. It has the complex conjugate pair of roots $\gamma$ and $\bar\gamma$ and $8$ real roots which come in pairs, two in $[-2\sin^2(2\pi/11),0]$, two in $[-2\sin^2(3\pi/11),0]$, two in $[-2\sin^2(4\pi/11),0]$ and two in $[-2\sin^2(5\pi/11),0]$. Therefore the norm of the relative discriminant is bounded by \begin{eqnarray*} \lefteqn{|\gamma-\bar \gamma|^2 4^4 \sin^2(2\pi/11) \sin^2(3\pi/11) \sin^2(4\pi/11) \sin^2(5 \pi/11) \Delta_{11}^2}\quad\quad\\ & = & |\gamma-\bar \gamma|^2 \; \frac{11}{2\sin^2(\pi/11)} \; (14641)^2 \geq D_{10}= 0.31503776 \times 10^{11}. \end{eqnarray*} This implies \[ |\gamma-\bar \gamma|=2\Im m(\gamma) \geq 25.6581, \] and this of course is a contradiction. \medskip Next with $p=18$. The minimal polynomial $P$ is cubic over $\IQ(\cos(2\pi/18))$. It has the complex conjugate pair of roots $\gamma$ and $\bar\gamma$ and a real root in $[-2\sin^2(\pi/18),0]$ and $6$ real roots which come in triples, three in $[-2\sin^2(5\pi/18),0]$, and three in $[-2\sin^2(7\pi/18),0]$.
Therefore the discriminant is bounded by \begin{eqnarray*} \lefteqn{|\gamma-\bar \gamma|^2 (|\gamma|+ 2\sin^2\frac{\pi}{18})^4 16^{-2} (2\sin^2\frac{5\pi}{18})^6(2\sin^2\frac{7\pi}{18})^6 \Delta_{18}^3}\quad\quad\\ & = & |\gamma-\bar \gamma|^2 (|\gamma|+ 2\sin^2\frac{\pi}{18})^4 \times 164609 \geq D_{9}= 0.11063894 \times 10^{10} . \end{eqnarray*} This implies \[ |\gamma-\bar \gamma|^2 (|\gamma|+ 2\sin^2\frac{\pi}{18})^4 \geq 6721.32, \] which gives $|\gamma|\geq 14$ and this is a contradiction. We next consider the degree $6$ case to show in a simple case how we can use more refined information about the discriminant to eliminate a value. Following the above argument we would achieve the estimate \begin{equation}\label{deg6} {\cal N}(\IQ(\gamma)|L) \times 81^2 \geq D_{6}= 92779. \end{equation} The norm ${\cal N}$ is a rational integer but $92779$ is prime. This shows the inequality can never be sharp. It gives the estimate $|\gamma-\bar \gamma|\geq 1.8$. However, from the useful resource \url{https://hobbes.la.asu.edu/NFDB/}, \cite{JR}, we can identify all the polynomials giving number fields with one complex place, degree $6$ and discriminant less than $3\times 10^6$; there are $2352$ such fields, and this list is proven complete. The field we seek for $\IQ(\gamma)$ has $\IQ(\cos \frac{2\pi}{18})$ as a subfield, and the formula $(\ref{deg6})$ shows that the discriminant has $3^8$ as a factor and that the Galois group is $A_4\times C_2$. This leaves $19$ candidates. These are identified in the table below, with $\delta$ the distance between the smallest and largest real roots. The polynomials presented are unique up to integer translation.
\medskip \begin{tabular}{|c|c|l|c|} \hline & $-\Delta(\IQ(\gamma))$ & $p(z)$ & $\delta$ \\ \hline 1 &$ 3^8 \times 19 $ & $x^6 - 3x^4 - 2x^3 + 3x^2 + 3x - 1.$ & $2.76$\\ \hline 2 &$ 3^9 \times 17$ & $ x^6 - 3x^4 - 5x^3 + 3x + 1.$ & $2.91$ \\ \hline 3 &$ 3^8 \times 107 $ & $ x^6 - 3x^5 + 3x^4 - 9x^2 + 6x - 1.$ & $3.53$ \\ \hline 4 &$ 3^8 \times 2^6 $ & $ x^6 - 3x^2 + 1.$ & $2.47$ \\ \hline 5 &$ 3^9 \times 2^6$ & $x^6 - 3x^4 + 3.$& $ 3.18$\\ \hline 6 &$ 3^9 \times 37 $ & $ x^6 - 3x^5 + 3x^4 - x^3 - 3x^2 + 3x + 1.$ & $2.66$\\ \hline 7 &$ 3^9\times 53 $ & $ x^6 - 3x^4 - 4x^3 + 6x + 1.$ & $3.47$ \\ \hline 8 &$ 3^8 \times 1271 $ & $ x^6 - 3x^4 - 2x^3 - 6x^2 - 6x - 1.$ & $4.00$ \\ \hline 9 &$ 3^8\times 163 $ & $ x^6 - 3x^5 + 3x^4 + 6x^3 - 15x^2 + 6x + 1.$ & $3.10$ \\ \hline 10 &$ 3^8 \times 179 $ & $ x^6 - 3x^5 + 5x^3 - 3x^2 + 3.$ & $3.33$ \\ \hline 11 &$ 3^9\times 73 $ & $x^6 - 8x^3 + 9x + 1.$ & $2.59$\\ \hline 12 &$ 3^8 \times 251 $ & $ x^6 - 3x^5 + 6x^2 + 6x - 1.$ &$ 3.24$ \\ \hline 13 &$ 3^9\times 89 $ & $ x^6 - 3x^4 - 4x^3 - 9x^2 - 3x + 1.$ & $4.22$\\ \hline 14 &$ 3^8\times 271 $ & $ x^6 - 3x^5 - 3x^3 + 6x^2 + 6x + 1.$ & $3.54$ \\ \hline 15 &$ 3^8\times 17\times 19 $ & $x^6 - 3x^5 + 5x^3 - 12x^2 + 9x + 3.$ & $4.47$ \\ \hline 16 &$ 3^8\times 17\times 19 $ & $x^6 - 7x^3 - 6x^2 + 18x - 3.$ & $3.32$\\ \hline 17 &$ 3^9\times 109 $ & $x^6 - 3x^5 - 3x^4 + 3x^3 + 9x^2 + 9x + 3.$ &$4.06$\\ \hline 18 &$ 3^8\times 359 $ & $ x^6 - 6x^4 - 4x^3 - 3x^2 + 3.$ & $4.94$ \\ \hline 19 &$ 3^8\times 431 $ & $x^6 - 3x^5 + x^3 + 18x - 9.$ & $3.93$\\ \hline \end{tabular} \medskip None of these polynomials satisfy the root restrictions we require. In particular $\delta > 2$ implies no integral translate of any of these polynomials has all its real roots in an interval of length $2$. Therefore no integral translate can have the root distributions we have established as necessary. In fact in this case $\delta > 2 \sin^2(7\pi/18)\approx 1.766$ will suffice.
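Both steps in the $p=18$ analysis can be rechecked numerically. The sketch below recomputes the constant $164609$ in the degree-$9$ bound (with the real roots taken in $[-2\sin^2(5\pi/18),0]$ and $[-2\sin^2(7\pi/18),0]$, and $\Delta_{18}=81$), and the spread $\delta$ for entry $4$ of the table, $x^6-3x^2+1$.

```python
from math import sin, pi

# Degree 9: the constant in the discriminant bound for p = 18.
C = 16 ** -2 * (2 * sin(5 * pi / 18) ** 2) ** 6 * (2 * sin(7 * pi / 18) ** 2) ** 6 * 81 ** 3
assert abs(C - 164609) < 1
assert 0.11063894e10 / C > 6721          # the quoted bound 6721.32

# Degree 6: spread of the real roots of x^6 - 3x^2 + 1 (entry 4 of the table).
def f(x):
    return x ** 6 - 3 * x ** 2 + 1

roots = []
for i in range(6000):                     # scan [-3, 3] for sign changes
    a, b = -3 + i / 1000, -3 + (i + 1) / 1000
    if f(a) * f(b) < 0:
        for _ in range(60):               # refine by bisection
            m = (a + b) / 2
            a, b = (a, m) if f(a) * f(m) <= 0 else (m, b)
        roots.append((a + b) / 2)

delta = max(roots) - min(roots)
assert abs(delta - 2.47) < 0.01 and delta > 2
```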
We therefore have \begin{equation} |\gamma-\bar\gamma|^2\big(2\sin^2 \frac{5\pi}{18}\big)^2 \big(2\sin^2 \frac{7\pi}{18}\big)^2 81^2 \geq {\cal N}(\IQ(\gamma)|L) \times \Delta_p^2 \geq 3\times 10^6. \end{equation} Hence \[ \Im m(\gamma) \geq 5.15829 \] and this easily implies the group is free on its generators. \medskip The remaining cases in this subsection are $p=7,9$. As with $p=18$ there are two cases, according to whether the extension over $L$ is quadratic or cubic. In the cubic case the Galois group in question is the wreath product $S(3)\, wr\, 3$ (of order $2^3 3^4$, T28 in the notation of \cite{JR}). There are $285$ fields to consider with discriminant less than $3.84\times 10^{10}$. In the case $p=7$, we must have $7^6$ as a factor of the discriminant (the smallest discriminant here is $7^6 \times 22679$ ) and in the case $p=9$ we must have $3^{12}$ as a factor (the smallest discriminant here is $3^{12}\times53\times 163$). \begin{eqnarray*} |\gamma-\bar \gamma|^2 (|\gamma|+ 2\sin^2\frac{\pi}{9})^4 16^{-2} (2\sin^2\frac{2\pi}{9})^6(2\sin^2\frac{4\pi}{9})^6 \Delta_{9}^3\geq 3.84\times 10^{10}\\ |\gamma-\bar \gamma|^2 (|\gamma|+ 2\sin^2\frac{\pi}{7})^4 16^{-2} (2\sin^2\frac{2\pi}{7})^6(2\sin^2\frac{4\pi}{7})^6 \Delta_{7}^3 \geq 3.84\times 10^{10} . \end{eqnarray*} The first case $p=7$ quickly gives either \begin{itemize} \item If $|\gamma-\bar\gamma|\leq 8$, then $|\gamma|\geq 9.16471$, or \item if $|\gamma-\bar\gamma|\leq 5$, then $|\gamma|\geq 11.6923$. \end{itemize} and for $p=9$, simply $|\gamma|\geq 11.1919$. Together these inequalities imply that the associated group is free. In the quadratic cases we have a field of degree $6$. We enumerate the degree $6$ fields (actually we are enumerating the polynomials) with one complex place and discriminant less than $1.7 \times 10^6$. In $p=7$ there is a factor of $7^4$ and when $p=9$ a factor of $3^8$. The Galois group is $A_4\times C_2$ and there are $40$ such fields (polynomials).
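The cubic $p=9$ estimate above can also be rechecked numerically (with the real roots in $[-2\sin^2(2\pi/9),0]$ and $[-2\sin^2(4\pi/9),0]$, $\Delta_9=81$, and taking $|\gamma-\bar\gamma|\leq 8$ as in the $p=7$ case); this is a sanity check only:

```python
from math import sin, pi

# Constant in the p = 9 cubic discriminant bound (Delta_9 = 81).
C = 16 ** -2 * (2 * sin(2 * pi / 9) ** 2) ** 6 * (2 * sin(4 * pi / 9) ** 2) ** 6 * 81 ** 3
lhs = 3.84e10 / C                            # bound on |g - gbar|^2 (|g| + 2 sin^2(pi/9))^4
gamma_lb = (lhs / 64) ** 0.25 - 2 * sin(pi / 9) ** 2   # taking |g - gbar| <= 8
assert abs(gamma_lb - 11.1919) < 1e-2
```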
Then \begin{eqnarray*} |\gamma-\bar \gamma|^2 (2\sin^2\frac{2\pi}{9})^2(2\sin^2\frac{4\pi}{9})^2 \Delta_{9}^2\geq 2.2\times 10^6,\\ |\gamma-\bar \gamma|^2 (2\sin^2\frac{2\pi}{7})^2(2\sin^2\frac{4\pi}{7})^2 \Delta_{7}^2 \geq 2.2\times 10^6 . \end{eqnarray*} Together these inequalities yield $|\gamma-\bar \gamma|>10 $ and again this implies the group is discrete and free on its generators. \medskip Apart from the cases $p=3,4$ and $5$ we have now established that the total degree is at most $8$. We will actually run through searches to verify these results independently and we will discuss that later. \subsection{$p=15,20,24,30$, degree $8$.} These four cases arise from a quadratic polynomial over the base field $L$. In each case we can estimate the relative discriminant to obtain \begin{equation} |\gamma-\bar\gamma|^2 \; \delta_2^2\; \delta_3^2\; \delta_4^2 \; \Delta_p^2 \geq D_8 = 68856875 \end{equation} Here $\delta_i=\sigma_i(2\sin^2\frac{\pi}{p})$ as $i$ runs through the three non-identity Galois automorphisms $\sigma_i$ of $L$, since a pair of roots lies in the interval $[-2\sigma_i( \sin^2\frac{\pi}{p}),0]$. For $p=15$ we find \begin{eqnarray*} p=15,& & |\gamma-\bar\gamma|^2 \; 2^6 \; \sin^4 \frac{2\pi}{15} \sin^4 \frac{4\pi}{15} \sin^4 \frac{7\pi}{15} \times 1125^2 \geq 68856875.\\ \end{eqnarray*} Hence $\Im m(\gamma) \geq 5.10151$, and this implies the group is free. For the other cases the discriminant bound is not enough to eliminate them. Thus we seek a stronger bound. From the resource \url{https://hobbes.la.asu.edu/NFDB/}, \cite{JR}, once again we can identify all the polynomials giving number fields with one complex place, degree $8$ and discriminant less than $2\times 10^9$. When $p=20$, $2^8\times 5^6$ is a factor of the discriminant (smallest such is $2^8\times 5^6\times 79$), $p=24$ finds $2^{16} \times 3^4$ as a factor (smallest such is $2^{16}\times 3^4 \times 47$) and $p=30$ finds $3^4 \times 5^6$ as a factor (smallest such is $3^4 \times 5^6 \times 59$).
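With the root intervals $[-2\sigma_i(\sin^2\frac{\pi}{p}),0]$ as above, the degree-$8$ imaginary-part bounds of this subsection can all be recomputed by the same one-line estimate; the Python sketch below does this (using $D_8$ for $p=15$ and the discriminant bound $2\times 10^9$ for the other three cases):

```python
from math import sin, pi, sqrt

def im_bound(p, ks, delta_p, D):
    # Im(gamma) >= sqrt(D / (Delta_p^2 prod_k (2 sin^2(k pi/p))^2)) / 2
    C = delta_p ** 2
    for k in ks:
        C *= (2 * sin(k * pi / p) ** 2) ** 2
    return sqrt(D / C) / 2

assert abs(im_bound(15, (2, 4, 7), 1125, 68856875) - 5.10151) < 1e-3
assert abs(im_bound(20, (3, 7, 9), 2000, 2e9) - 8.75528) < 1e-3
assert abs(im_bound(24, (5, 7, 11), 2304, 2e9) - 5.29112) < 1e-3
assert abs(im_bound(30, (7, 11, 13), 1125, 2e9) - 6.94947) < 1e-3
```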
The Galois groups are either $[2^4]^4$ ($T27$ in \cite{JR}) when $p=20,30$ or $[2^4]E(4)$ ($T31$ in \cite{JR}) when $p=24$. These lists are proven complete. When $p=20$ we have the following $12$ possibilities. \medskip \begin{tabular}{|c|c|l|c|} \hline & $-\Delta$ & $p(z)$ &$\delta$ \\ \hline 1 &$2^8\,5^6\,79$ & $ x^8 - 2x^7 + x^6 - 6x^5 - x^4 + 12x^3 - x^2 - 4x + 1$ & $3.29$\\ \hline 2 &$2^8\,5^7\,19$ & $ x^8 - 4x^7 + 2x^6 + 8x^5 - 10x^4 + 2x^3 + 7x^2 - 6x + 1.$ &$3.55$\\ \hline 3 &$ 2^8\,5^6\,199 $ & $x^8 - 4x^7 + 3x^6 + 6x^4 - 8x^2 + 2x + 1.$ & $3.30$\\ \hline 4 &$ 2^8\,5^6\,239 $& $x^8 - 4x^7 + 10x^6 - 16x^5 - x^4 + 24x^3 - 10x^2 - 4x + 1.$ & $2.97$ \\ \hline 5 &$ 2^8\,5^7\,59$ & $x^8 - 2x^7 - 7x^6 + 16x^5 + 5x^4 - 24x^3 + 8x^2 + 8x - 4.$ & $4.75$ \\ \hline 6 &$2^{12}\,5^6\,19$ & $ x^8 - 2x^7 - 8x^6 + 10x^5 + 11x^4 - 10x^3 - 2x^2 + 4x + 1.$ &$5.53$\\ \hline 7 &$ 2^{12}\,5^6\,19 $ & $ x^8 - 8x^6 - 4x^5 + 8x^4 - 8x^3 - 12x^2 + 4x + 1.$ & $4.82$\\ \hline 8 &$ 2^8\,5^6\,359 $ & $x^8 - 9x^6 - 10x^5 + 11x^4 + 30x^3 + 11x^2 - 10x + 1.$ & $5.16$\\ \hline 9 &$ 2^8\,3^4\,5^7$ & $x^8 - 10x^4 + 15x^2 - 5.$ & $2.64$ \\ \hline 10 &$ 2^8\,3^4\,5^7$ & $x^8 - 5x^6 + 5x^4 + 5x^2 - 5.$ & $3.36$ \\ \hline 11&$2^8\,5^6\,439$ & $x^8 - 2x^7 - 3x^6 + x^4 + 10x^3 + 3x^2 - 6x + 1.$ & $4.08$ \\ \hline 12&$2^8\,5^6\,479$ & $x^8 - 3x^6 - 11x^4 - 30x^3 - 17x^2 + 1.$ & $4.13$\\ \hline \end{tabular} When $p=30$ there are $26$ possibilities (which we do not enumerate). Finally, when $p=24$, there are $7$ possibilities.
\begin{tabular}{|c|c|l|c|} \hline & $-\Delta$ & $p(z)$ & $\delta$ \\ \hline 1 &$2^{16}\,3^4\, 47 $ & $ x^8 - 8x^5 + x^4 + 12x^3 - 2x^2 - 4x + 1$ & $2.29$\\ \hline 2 &$2^{18}\,3^4\, 23 $ & $ x^8 - 6x^6 - 8x^5 + 4x^4 + 12x^3 - 2x^2 - 4x + 1.$ & $4.29$ \\ \hline 3 &$ 2^{16}\,3^4\,191 $ & $x^8 - 4x^6 - 11x^4 - 12x^3 + 30x^2 + 36x + 9.$ & $4.42$\\ \hline 4 &$ 2^{16}\,3^6\, 23 $& $ x^8 - 4x^7 + 2x^6 + 8x^5 - 14x^4 + 4x^3 + 14x^2 - 8x + 1.$ & $4.06$\\ \hline 5 &$ 2^{16}\,3^4\, 239$ & $x^8 - 6x^6 + 5x^4 - 12x^3 - 12x^2 + 1.$ & $4.49$ \\ \hline 6 &$ 2^{18}\,3^4\, 71 $ & $ x^8 - 4x^7 + 10x^6 - 12x^5 - 20x^4 + 40x^3 - 18x^2 + 1.$ & $3.84$\\ \hline 7 &$ 2^{20}\,3^4\, 23 $ & $ x^8 - 8x^6 - 4x^5 + 8x^4 - 8x^3 - 12x^2 + 4x + 1.$ & $4.82$\\ \hline \end{tabular} Hence we may use the bound $2\times 10^9$ to see \begin{eqnarray*} p=20,& & |\gamma-\bar\gamma|^2 \; 2^6 \; \sin^4 \frac{3\pi}{20} \sin^4 \frac{7\pi}{20} \sin^4 \frac{9\pi}{20} \times 2000^2 \geq 2\times 10^9,\\ p=24,& & |\gamma-\bar\gamma|^2 \; 2^6 \; \sin^4 \frac{5\pi}{24} \sin^4 \frac{7\pi}{24} \sin^4 \frac{11\pi}{24} \times 2304^2 \geq 2\times 10^9,\\ p=30,& & |\gamma-\bar\gamma|^2 \; 2^6 \; \sin^4 \frac{7\pi}{30} \sin^4 \frac{11\pi}{30} \sin^4 \frac{13\pi}{30} \times 1125^2 \geq 2\times 10^9. \end{eqnarray*} Then \begin{eqnarray*} p=20,& & \Im m(\gamma) \; \geq 8.75528,\\ p=24,& & \Im m(\gamma) \; \geq 5.29112,\\ p=30,& & \Im m(\gamma) \; \geq 6.94947. \end{eqnarray*} Thus all these groups are free as well. \begin{center} \scalebox{0.7}{\includegraphics[viewport=80 340 800 800]{Modulispace4p}} \end{center} \noindent{\bf Figure 4.} {\em Rough descriptions of the moduli space of $\;\IZ_4*\IZ_p$ for $p=5,6,7,8,10$ and $12$. These moduli spaces contain the convex hull of the points shown, which are roots of Farey polynomials of degree up to $120$ and give about $513$ boundary cusp groups.} \medskip From these rough descriptions of the moduli spaces one directly obtains concrete information such as the following.
We will give a careful (and sharper) proof for this later in the $(4,5)$ case and then again when we examine the $(3,4)$ and $(4,4)$ cases in considerable detail. The details for other $p$ are exactly the same except that, since we only give rough bounds, it is only necessary to examine pleating rays of small slope. \begin{lemma} Let $\Gamma=\langle f,g \rangle$ be an arithmetic lattice generated by $f$ of order $4$ and $g$ of finite order $p\in \{5,6,7,8,10,12\}$. Then \begin{equation} |\Im m(\gamma(f,g))|\leq 4. \end{equation} This bound is also true if $\Gamma$ is not a lattice and is not free. \end{lemma} The sharp bound here is nearly $3.5$. \medskip Next we consider the following remaining possibilities. \begin{center} \begin{tabular}{|c|c|c|l|} \hline $p$ & $[\IQ(\gamma):L]$ & $\Delta_p$ & total degree $n$ \\ \hline 3 & 1 & $1$& $n\leq 11$ \\ \hline 4 & 1 & $1$&$n\leq 8$\\ \hline 5 & 2 & $5$&$n=4,6,8$\\ \hline 6 & 1& $1$&$n=2,3,4$ \\ \hline 8 & 2 & $8$&$n=4,6,8$ \\ \hline 10 & 2 & $5$&$n=4,6$\\ \hline 12 & 2& $12$&$n=4,6,8$ \\ \hline \end{tabular} \end{center} \subsection{$p=5,8,10$ and $p=12$.} These are quadratic, cubic and quartic extensions over the base field, which is a degree two extension of $\IQ$. We next examine the limits of the approach we have used so far. \subsubsection{$p=12$ : quartic.} As always, we estimate the discriminant, using the relative discriminant. There are $4$ real roots in $[-2\sin^2\frac{5\pi}{12},0]$ and two in $[-2\sin^2\frac{\pi}{12},0]$. We use Schur's bound to deal with the real roots in $[-2\sin^2\frac{5\pi}{12},0]$: \begin{equation} |\gamma-\bar\gamma|^2 \big(|\gamma|+2\sin^2\frac{\pi}{12}\big)^8 \; \big(\sin^2\frac{5\pi}{12}\big)^{12} \; \frac{2^2 3^3 4^4 2^2}{3^3 5^5}\; 12^4 \geq D_8 \geq 68856875 \end{equation} We quickly see that in order to get useful information on $\gamma$ we will need to bound the discriminant by something of the order $10^{12}$. The Galois group in question is $[2^4]E(4)$ of order $64$.
However, there are (provably) $15,648$ polynomials which meet our criteria. \subsubsection{$p=12$ : cubic.} We have \begin{equation} |\gamma-\bar\gamma|^2 \big(|\gamma|+2\sin^2\frac{\pi}{12}\big)^4 \; 16^{-2} \big(2\sin^2\frac{5\pi}{12}\big)^{6} \; 12^3 \geq D_6. \end{equation} To get useful information we need a discriminant bound of about $10^8$ with a factor of $2^63^3$. The Galois group is $A_4C_2$ ($T6$ in \cite{JR}). There are (provably) $2319$ such polynomials. \subsubsection{$p=12$ : quadratic.} We have \begin{equation} |\gamma-\bar\gamma|^2 \big(2\sin^2\frac{5\pi}{12}\big)^{2} \; 12^2 \geq D_4. \end{equation} To get useful information we need a discriminant bound of about $4\times 10^5$ with a factor of $2^43^2$. The Galois group is $D_4$ ($T3$ in \cite{JR}). There are (provably) $3916$ such polynomials (the smallest discriminant is $2^43^223$, with polynomial $x^4 - 2x^3 - x^2 + 2x - 2$). With a bit of work this method will succeed here, but for quadratics it is far easier to search directly. \medskip The point of discussing these three cases is that the methods applied above produce too many polynomials to be really useful. It is often simpler to search for the polynomials directly. Most of the several thousand polynomials identified above will not have their real roots in the intervals we require, nor their complex roots bounded appropriately. Finally, the factorisation condition discussed in \S \ref{factorisation}, which we will come to rely on heavily to eliminate cases, does not seem to be directly applicable in any way. \section{Searches.} In this section we give a few pertinent examples of the searches we performed to eliminate cases. First is a case we have already dealt with, but which admits some useful features that are easier to follow. \subsection{The case $p=4$ and $q=7$, degree $2$ over $\IQ(2\cos\frac{\pi}{7})$, total degree $6$. } The intermediate field is \[ L=\IQ(2\cos\frac{\pi}{7}) \] of discriminant $49$. We suppose $\gamma$ is quadratic over $L$.
The minimal polynomial for $\gamma$ is irreducible over $\IZ$ and then factors as \[ (z^2+a_1z+a_0)(z^2+b_1z+b_0)(z^2+c_1z+c_0) \] where we may suppose that $\gamma$ and $\bar \gamma$ are roots of the first factor. In order for the arithmeticity conditions to be satisfied we must have \begin{itemize} \item $z^2+b_1z+b_0$ has two real roots, say $r_\pm= \frac{1}{2}\big(-b_1\pm \sqrt{b_1^2-4b_0}\big)$, in $[-2\sin^2\frac{2\pi}{7},0]$, thus $b_1> 0$ and \begin{equation}\label{T1} -4\sin^2\frac{2\pi}{7} \leq 2r_- = -b_1-\sqrt{b_1^2-4b_0}, \end{equation} and note the implication $b_1^2\geq 4b_0>0$, with strict inequality because of irreducibility. We will always take square roots with positive real part. \item $z^2+c_1z+c_0$ has two real roots (say $s_\pm$) in $[-2\sin^2\frac{3\pi}{7},0]$, thus $c_1>0$ and \begin{equation}\label{T2} -4\sin^2\frac{3\pi}{7} \leq 2s_-=-c_1-\sqrt{c_1^2-4c_0}, \end{equation} with $c_1^2\geq 4c_0>0$. \end{itemize} We also know from our criterion regarding free groups used above that \begin{eqnarray}\label{T3} 0 < |\gamma|^2 & = & a_0 <\left[ 2 \left(2+2 \sqrt{2} \cos \frac{\pi }{7} +\sin \frac{3 \pi }{14} \right)\right]^2 \approx 106.991. \end{eqnarray} It is efficient to improve this bound on $a_0$, and we can do this using the relative discriminant. The total degree of $\gamma$ is $6$ and $|\Delta(\IQ(2\cos\frac{\pi}{7}))|=49$. As above, there are $13$ polynomials yielding fields with one complex place and discriminant divisible by $7^4$ and less than $10^6$. None of these polynomials have the properties we seek.
Hence \[ |N(\gamma-\bar\gamma)| \times 49^2 \geq 10^6 \] where $N$ is the relative norm, \[ |N(\gamma-\bar\gamma)| =| \gamma-\bar\gamma|^2(r_+-r_-)^2(s_+-s_-)^2 \leq |\gamma-\bar\gamma|^2 \, 16 \sin^4\frac{2\pi}{7} \sin^4\frac{3\pi}{7}, \] and hence \begin{equation} \label{T4} \Im m[\gamma]> 4.39079. \end{equation} Although we know this is already enough to remove this case, we keep calculating, as this bound further implies \begin{equation} \label{T5} 19.279 \leq |\gamma|^2=a_0 < 106.7, \end{equation} an improvement on (\ref{T3}) alone. \medskip An integral basis for $\IQ(2\cos\frac{\pi}{7})$ can be found from $1$, $2\cos\frac{\pi}{7}$ and $2\cos\frac{3\pi}{7}$. We also use the remaining Galois conjugate $2\cos\frac{5\pi}{7}$. We can therefore write, for rational integers $p_i,q_i,r_i$, $i=0,1$, \begin{eqnarray*} a_0= p_0+ 2q_0\cos\frac{\pi}{7} +2r_0 \cos\frac{3\pi}{7},&& a_1= p_1+ 2q_1\cos\frac{\pi}{7} +2r_1 \cos\frac{3\pi}{7},\\ b_0=p_0+ 2q_0\cos\frac{3\pi}{7} +2r_0 \cos\frac{5\pi}{7}, && b_1=p_1+ 2q_1\cos\frac{3\pi}{7} +2r_1 \cos\frac{5\pi}{7} ,\\ c_0=p_0+ 2q_0\cos\frac{5\pi}{7} +2r_0 \cos\frac{\pi}{7}, && c_1=p_1+ 2q_1\cos\frac{5\pi}{7} +2r_1 \cos\frac{\pi}{7}. \end{eqnarray*} From (\ref{T1}) and (\ref{T2}) we also have the bounds \begin{eqnarray*} 0 < b_0< \left(2\sin^2\frac{2\pi }{7}\right)^2, && 0< b_1<4 \sin^2\frac{2\pi }{7} ,\\ 0 < c_0 < \left(2\sin^2\frac{3\pi }{7}\right)^2, &&0< c_1<4 \sin^2\frac{3\pi }{7}. \end{eqnarray*} We can solve for $p_0,q_0,r_0$ in terms of $a_0,b_0,c_0$.
\begin{eqnarray*} p_0&=& \frac{1}{14} (4 (b_0+c_0+(b_0-c_0) \cos\big(\frac{\pi }{7}\big))+c_0 \text{csc}\big(\frac{\pi }{14}\big)-a_0 (-4+\text{csc}\big(\frac{3 \pi }{14}\big)+4 \sin\big(\frac{\pi }{14}\big))),\\ q_0 & = & \frac{2}{7} (a_0 \cos\big(\frac{\pi }{7}\big)-b_0 \cos\big(\frac{\pi }{7}\big)+b_0 \sin\big(\frac{\pi }{14}\big)-c_0 \sin\big(\frac{\pi }{14}\big)+(a_0-c_0) \sin\big(\frac{3 \pi }{14}\big)),\\ r_0 &=&\frac{2}{7} (-b_0 \cos\big(\frac{\pi }{7}\big)+c_0 \cos\big(\frac{\pi }{7}\big)+a_0 \sin\big(\frac{\pi }{14}\big)-c_0 \sin\big(\frac{\pi }{14}\big)+(a_0-b_0) \sin\big(\frac{3 \pi }{14}\big)). \end{eqnarray*} From this we deduce that \begin{eqnarray*} 0\leq p_0 \leq 7,& 0\leq q_0\leq 21, &0\leq r_0\leq 12. \end{eqnarray*} In exactly the same way we find the bounds \begin{eqnarray*} -1\leq p_1 \leq 3,& -7\leq q_1\leq 3, &-4 \leq r_1\leq 2. \end{eqnarray*} This now gives us $880880$ cases to consider. Of these there is only one case which passes all the above tests, namely $p_0=3$, $q_0= 11$, $r_0= 6$, $p_1= 1$, $q_1= -2$ and $r_1= -1$. The first pair of real roots are $-0.89457$ and $-0.462326$, both greater than $-2\sin^2\frac{2\pi}{7} = -1.22252$, while the second pair is $-1.63397$ and $-0.0580492$, both greater than $-2\sin^2\frac{3\pi}{7} =-1.90097$. However, the complex roots $\gamma = 1.52446 \pm 4.81327 i$ are easily seen to lie in the space of groups which are freely generated, as the imaginary part of $\gamma$ is too large. \subsection{$p=12$} The intermediate field is \[ L=\IQ(2\cos\frac{2\pi}{4},2\cos \frac{2\pi}{12})=\IQ(\sqrt{3}) \] \begin{center} \scalebox{0.8}{\includegraphics[viewport=80 520 500 800]{MapDegree2}} \end{center} \noindent{\bf Figure 3.} {\em The moduli space of discrete groups generated by elliptics of order $p=4$ and $q=12$ obtained from identifying the roots of Farey polynomials.
The red points are those $\gamma$ satisfying the criteria: \begin{enumerate} \item the associated group is not obviously free, that is \begin{itemize} \item $0< \Im m(\gamma(f,g))< 4$, \item $\Re e(\gamma)\geq -4$ (achieved in the $GT(4,12,\infty)$-generalised triangle group), \item $\Re e(\gamma)\leq 3(2+\sqrt{3})$ (achieved in the $(4,12,\infty)$-triangle group); \end{itemize} \item $\gamma$ is the complex root of a quadratic over $L$; \item the two real embeddings of $\gamma$ lie in the interval $[-2\sin^2\frac{5\pi}{12},0]$. \end{enumerate}} \subsubsection{Degree 2 over $L=\IQ(\sqrt{3})$} The minimal polynomial for $\gamma$ has the form \[ p(z)=(z^2+a_1 z+ a_0)(z^2+b_1 z + b_0). \] We always assume the first polynomial has a complex conjugate pair of roots. Here $a_0,a_1,b_0,b_1\in \IQ(\sqrt{3})$ and $b_i$ is the Galois conjugate of $a_i$. We write \begin{eqnarray*} a_0=p_0+q_0\sqrt{3}, &&b_0=p_0-q_0\sqrt{3},\\ a_1=p_1+q_1\sqrt{3}, &&b_1=p_1-q_1\sqrt{3}. \end{eqnarray*} The first polynomial factor has roots $\gamma,\bar\gamma$ and the second factor has roots $r,s \in [-2\sin^2\frac{5\pi}{12},0]$. Thus \begin{eqnarray*} a_0=|\gamma|^2, &&b_0=rs,\\ a_1= - 2 \Re e[\gamma], &&b_1=-(r+s). \end{eqnarray*} From this we see that \begin{eqnarray*} 0< a_0 < 9 (7+4 \sqrt{3} ), && 0< b_0 < 4 \sin^4\frac{5\pi}{12} =\frac{7}{4}+\sqrt{3}, \\ -3(2+\sqrt{3}) < a_1< 8, && 0< b_1<2+\sqrt{3}. \end{eqnarray*} Adding these inequalities provides bounds on $p_i$ and $q_i$: \begin{eqnarray*} 0 < a_0 + b_0 = 2 p_0 < \frac{37}{4} \left(7+4 \sqrt{3}\right), \\ -3(2+\sqrt{3}) < a_1 + b_1=2p_1 < 10+ \sqrt{3}. \end{eqnarray*} Hence, as $q_0=(p_0-b_0)/\sqrt{3}$, we find the following bounds: \begin{eqnarray*} 1\leq p_0 \leq 64, && -5\leq p_1 \leq 5, \\ \big[\frac{p_0-\frac{7}{4}}{\sqrt{3}}\big] \leq q_0 \leq \big[\frac{p_0}{\sqrt{3}}\big], && \big[\frac{p_1-2 }{\sqrt{3}}\big] \leq q_1 \leq \big[\frac{p_1}{\sqrt{3}}\big]. \end{eqnarray*} This gives us $2,688$ cases to consider.
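An enumeration of this size can be carried out directly. The following Python sketch is ours (not the code used for the paper); it implements the ranges above, and its final test encodes a complex conjugate pair with $|\Im m(\gamma)|\leq 4$, the bound from the lemma, rather than any sharper estimate.

```python
import math

S3 = math.sqrt(3.0)
E = 2.0 + S3        # 2 + sqrt(3); the real-root interval is [-(2+sqrt(3))/2, 0]

def survives(p0, q0, p1, q1):
    # a_i and b_i are Galois conjugates in Z[sqrt(3)]
    a0, b0 = p0 + q0 * S3, p0 - q0 * S3
    a1, b1 = p1 + q1 * S3, p1 - q1 * S3
    if not (0 < b0 and 0 < b1 < E):
        return False
    disc = b1 * b1 - 4 * b0
    if disc <= 0:                       # second factor needs two real roots
        return False
    if b1 + math.sqrt(disc) >= E:       # both roots in [-(2+sqrt(3))/2, 0]
        return False
    return -64 < a1 * a1 - 4 * a0 < 0   # complex pair with |Im(gamma)| < 4

candidates = [(p0, q0, p1, q1)
              for p0 in range(1, 65)
              for q0 in range(math.floor((p0 - 1.75) / S3), math.floor(p0 / S3) + 1)
              for p1 in range(-5, 6)
              for q1 in range(math.floor((p1 - 2) / S3), math.floor(p1 / S3) + 1)
              if survives(p0, q0, p1, q1)]
print(len(candidates))
```

For instance, the values $\gamma_1$ and $\gamma_2$ listed shortly correspond to $(p_0,q_0,p_1,q_1)=(2,1,0,-1)$ and $(2,1,2,0)$ respectively, and both survive these tests.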
In higher degree searches we will want to break this up, but here this presents no computational problems. We have a few conditions to satisfy. First, $\gamma$ is complex, so $a_1^2<4 a_0$. Next, $b_1^2>4 b_0$ for two real roots and additionally, for the two real roots to lie in the correct interval, \[ -1-\frac{\sqrt{3}}{2} < \frac{-b_1}{2} \pm \frac{1}{2}\sqrt{ b_1^2-4 b_0 } <0. \] This gives us the condition \[ 2+\sqrt{3} > b_1+ \sqrt{ b_1^2-4 b_0 }. \] These simple tests reduce the search space to $202$ candidates. We next realise that $|\Im m(\gamma)|<3.5$, giving the simple additional test $-49< a_1^2-4a_0 $. There are now only $18$ values for $\gamma$ satisfying our criteria, and these are illustrated above in Figure 3. Of these, all but $7$ are well outside our region and can be shown to be free on their generators; alternatively one can check the factorisation, as we shall now do. The seven points and their minimal polynomials are as follows. \begin{eqnarray*} \gamma_1 =& \frac{1}{2} \left(\sqrt{3}+i \sqrt{5+4 \sqrt{3}}\right) ,& 1 + 6 z + z^2 + z^4.\\ \gamma_2 =& -1+i \sqrt{1+\sqrt{3}} , & 1 + 8 z + 8 z^2 + 4 z^3 + z^4.\\ \gamma_3 =& \frac{1}{2} \left(\sqrt{3}+i \sqrt{13+8 \sqrt{3}}\right) ,&4 + 12 z + 5 z^2 + z^4. \\ \gamma_4 =& -1+i \sqrt{3+2 \sqrt{3}}, & 4 + 16 z + 12 z^2 + 4 z^3 + z^4. \\ \gamma_5 =& 1+\sqrt{3}+i \sqrt{3+2 \sqrt{3}} ,&1 + 20 z + 6 z^2 - 4 z^3 + z^4 .\\ \gamma_6 =& \frac{1}{2} +\sqrt{3}+i \frac{1}{2}\sqrt{19+12\sqrt{3}}, & 16 + 32 z + 5 z^2 - 2 z^3 + z^4.\\ \gamma_7 =& \frac{1}{2}\left(3 +\sqrt{3}+i \sqrt{8+6\sqrt{3}}\right), & 13 + 42 z + 4 z^2 - 6 z^3 + z^4. \end{eqnarray*} Then we compute the minimal polynomial for each corresponding $\lambda_i$: \begin{eqnarray*} \lambda_1: & & 33 - 240 z^2 + 238 z^4 - 16 z^6 + z^8.\\ \lambda_2 : & & 1 - 180 z^2 + 182 z^4 + 12 z^6 + z^8.\\ \lambda_3 : & & 97 - 464 z^2 + 446 z^4 - 16 z^6 + z^8.
\\ \lambda_4 : & & 33 - 372 z^2 + 390 z^4 + 12 z^6 + z^8.\\ \lambda_5 : & & 193 - 1004 z^2 + 870 z^4 - 44 z^6 + z^8.\\ \lambda_6 : & & -11 - 588 z^2 + 890 z^4 - 36 z^6 + z^8.\\ \lambda_7 : & &-3 - 1032 z^2 + 1306 z^4 - 64 z^6 + z^8. \end{eqnarray*} Since these are all of degree eight we conclude that these groups are not arithmetic. \medskip \subsection{$p=10$} A similar situation occurs when $p=10$, where now \[ \lambda_{10}=\left(\sqrt{3}+1\right) \sqrt{\gamma -\frac{\sqrt{3}}{2}+1}. \] After a similar search there are three points we must consider, the complex roots of the three polynomials listed below. \begin{center} \begin{tabular}{|c|c|} \hline $\gamma$ polynomial & $\lambda$ polynomial \\ \hline $1+4 z+2 z^2+z^3+z^4$ & $-25 - 400 z^2 + 165 z^4 - 10 z^6 + z^8$ \\ \hline $1+12 z-2 z^2-3 z^3+z^4$ &$ -1775 - 700 z^2 + 485 z^4 - 40 z^6 + z^8 $\\ \hline $1+11 z+6 z^2+z^3+z^4$ & $-725 - 1000 z^2 + 385 z^4 - 10 z^6 + z^8$ \\ \hline \end{tabular} \\ \end{center} We again find there are no arithmetic lattices. \subsection{$p=8$} Here \[ \lambda_8=\sqrt{2 \left(\sqrt{2}+2\right) \gamma +2}.\] \begin{center} \begin{tabular}{|c|c|} \hline $\gamma$ polynomial & $\lambda$ polynomial \\ \hline $2+8 z+8 z^2+4 z^3+z^4$ & $16 - 224 z^2 + 120 z^4 + 8 z^6 + z^8$ \\ \hline $1+8 z+4 z^2+z^4$ & $ 272 - 704 z^2 + 328 z^4 - 16 z^6 + z^8$ \\ \hline $1+6 z+7 z^2+2 z^3+z^4$ & $ 496 - 736 z^2 + 256 z^4 + z^8 $ \\ \hline $1+10 z+13 z^2+6 z^3+z^4$ & $ 112 - 448 z^2 + 160 z^4 + 24 z^6 + z^8$ \\ \hline $7+14 z+3 z^2-2 z^3+z^4$ &$ 368 - 928 z^2 + 544 z^4 - 32 z^6 + z^8$ \\ \hline $7+20 z+14 z^2+4 z^3+z^4$ & $ 144 - 672 z^2 + 392 z^4 + 8 z^6 + z^8 $ \\ \hline $4+20 z+5 z^2-2 z^3+z^4$ & $ 112 - 1120 z^2 + 656 z^4 - 32 z^6 + z^8 $ \\ \hline $14+28 z+2 z^2-4 z^3+z^4$ & $ 16 - 1088 z^2 + 856 z^4 - 48 z^6 + z^8 $ \\ \hline $7+40 z+10 z^2-8 z^3+z^4$ & $ 656 - 2784 z^2 + 1480 z^4 - 72 z^6 + z^8 $ \\ \hline \end{tabular} \end{center} Again every $\lambda$ polynomial has degree eight, so no arithmetic lattices arise. \subsection{$p=6$} As noted earlier, this case is covered by the results of \cite{MM6}.
The methods used here are similar and lead to the same results. The candidates we find are \begin{center} \begin{tabular}{|c|c|} \hline $\gamma$ polynomial & $\lambda$ polynomial \\ \hline $z^2 +6$ & $15 - 6 z + z^2$ \\ \hline $z^3 +3z^2 +6z+2$ & $9 + 9 z - 3 z^2 + z^3$ \\ \hline $ z^3 +5z^2 +9z+3 $ & $-9 + 15 z - 3 z^2 + z^3$ \\ \hline $ z^4+8z^2+6z+1$ & $-9 + 18 z + 12 z^2 - 6 z^3 + z^4$ \\ \hline \end{tabular} \end{center} \subsection{The case $p=4$ and $q=5$, degree $2,3$ and $4$ over $\IQ(2\cos\frac{\pi}{5})$, total degree $4,6$ and $8$. } In this subsection we meet the first really challenging search, where there may well be some groups to find --- we have already found one with $\gamma$ real. We shall find that there are no more. The intermediate field is \[ L=\IQ(2\cos \frac{2\pi}{5})=\IQ(\sqrt{5}). \] We have the following absolute bound on the modulus of $\gamma$ for groups which are not free: \begin{equation} |\gamma|<4 \big(\sqrt{2} \cos\frac{\pi}{5}+1 \big) + 2\cos \frac{2\pi}{5} = 9.19453. \end{equation} This is achieved in the $(4,5,\infty)$-triangle group. We also use the bounds \begin{equation} |\Im m(\gamma)|\leq 4, \;\;\; \Re e(\gamma) \geq -4. \end{equation} We have already established, using the discriminant method, that the total degree is no more than $8$. The minimal polynomial for $\gamma$ now factors into two polynomials, each of degree $2$, $3$ or $4$, and we consider each case separately. \subsection{Degree 2 over $L=\IQ(\sqrt{5})$} The minimal polynomial for $\gamma$ has the form \[ p(z)=(z^2+a_1 z+ a_0)(z^2+b_1 z + b_0). \] We always assume the first polynomial has a complex conjugate pair of roots. Here $a_0,a_1,b_0,b_1\in \IQ(\sqrt{5})$ are algebraic integers and $b_i$ is the Galois conjugate of $a_i$.
\begin{eqnarray*} a_0=\frac{p_0+q_0\sqrt{5}}{2}, && b_0=\frac{p_0-q_0\sqrt{5}}{2},\\ a_1=\frac{p_1+q_1\sqrt{5}}{2}, && b_1=\frac{p_1-q_1\sqrt{5}}{2}, \end{eqnarray*} where $p_i,q_i\in\IZ$ have the same parity. We observe that \begin{eqnarray*} p_i = a_i+b_i,\quad q_i = (p_i-2b_i)/\sqrt{5}. \end{eqnarray*} Our number theoretic restrictions on $\gamma$ imply the first polynomial factor has roots $\gamma,\bar\gamma$ and the second factor has roots $r,s \in [-2\sin^2\frac{2\pi}{5},0]$. Thus \begin{eqnarray*} a_0=|\gamma|^2, &&b_0=rs,\\ a_1= - 2 \Re e[\gamma], &&b_1=-(r+s). \end{eqnarray*} From this we see that \begin{eqnarray*} 0< a_0 < 84.5393 , && 0< b_0 < 4 \sin^4\frac{2\pi}{5} =3.27254, \\ -18.3891 < a_1 < 8, && 0< b_1< 4 \sin^2\frac{2\pi}{5} =3.61803 . \end{eqnarray*} It is quite apparent that all these bounds can be improved with more work; this is really only helpful when the searches are very large, as we will see. Hence we find the following bounds. \begin{eqnarray*} 1\leq p_0=a_0+b_0 \leq 87, && \big[ \big(p_0-\frac{5}{4}(\sqrt{5}+3)\big)/\sqrt{5} \big] +1 \leq q_0 \leq \big[p_0/\sqrt{5} \big] \\ -18 \leq p_1 =a_1+b_1 \leq 11 ,&&\big[(p_1-\sqrt{5}-1)/\sqrt{5} \big] +1 \leq q_1 \leq \big[p_1/\sqrt{5} \big]. \end{eqnarray*} We are only interested in complex values for $\gamma$, and $|\Im m(\gamma)|<4$, so $a_1^2-4a_0< -16$. Also the condition that the second polynomial have two real roots in $[-2\sin^2\frac{2\pi}{5},0]$ gives the inequality \[ 0< b_1^2-4b_0 < (4\sin^2\frac{2\pi}{5} - b_1)^2. \] It is this root condition that will be the most challenging test for our polynomials. After additionally checking parity conditions this gives $20$ polynomials whose complex roots are illustrated below; they well indicate the problems we face. Figure 1 shows a rough picture of the exterior of the closed space of faithful discrete and free representations of the group $\IZ_4*\IZ_5$ in $PSL(2,\IC)$ and the two points we have found.
\begin{center} \scalebox{0.55}{\includegraphics[angle=-90,viewport=100 300 500 500]{map45}}\\ \end{center} \noindent{\bf Figure 4.} {\em The $20$ potential $\gamma$ values. Four identified as not free, or not discrete. One as a subgroup of an arithmetic group.} \medskip Of these $20$ points, all but $4$ are well outside our region where non-free groups must lie, and these $16$ are captured by neighbourhoods of pleating rays as per \cite{EMS2}. One of these $16$ is in fact a discrete free subgroup of an arithmetic group and is indicated by an arrow, as are the remaining $4$ points. Notice that we are faced with the very real possibility of a point lying close to the boundary, where we must decide whether the associated group is free or not. However, at this point we cannot confirm that any of these groups is discrete (other than the $16$ we identified as free) as we have not used, or satisfied, all the arithmetic information needed for the identification theorem. The remaining necessary and sufficient condition we need to check is the factorisation condition. In each case we set, removing the factor $2\cos \frac{\pi }{5}$ from (\ref{lp}) as it is a unit, \begin{equation}\label{lp} \lambda_i= \sqrt{2\gamma_i + 4\sin^2\frac{ \pi }{5}}. \end{equation} We know that it must be the case that $\IQ(\gamma_i)=\IQ(\lambda_i)$, and in particular the minimal polynomials for $\gamma_i$ and $\lambda_i$ must have the same degree, in this case degree $4$. This is quite straightforward to check and we do it for {\em all} $20$ points. The four points that might be in question are listed below with their minimal polynomials. \medskip {\small \noindent\begin{tabular}{|c|c|l|l|} \hline & $\gamma $ & min. polynomial $\gamma$ & min.
polynomial $\lambda$ \\ \hline 1 & $1.618 + 2.058 i $ & $1 + 8 x + 3 x^2 - 2 x^3 + x^4$ & $181 - 226 x^2 + 87 x^4 - 14 x^6 + x^8$ \\ \hline 2 & $-0.5 + 2.569 i $ & $1 + 7 x + 8 x^2 + 2 x^3 + x^4$ & $171 - 144 x^2 + 37 x^4 - 6 x^6 + x^8$ \\ \hline 3 & $-1.809 + 1.892i$ &$1 + 10 x + 12 x^2 + 5 x^3 + x^4$ & $71 - 70 x^2 + 3 x^4 + x^8$ \\ \hline 4 & $3.736 + 1.996i $ & $1 + 26 x + 7 x^2 - 6 x^3 + x^4$ & $251 - 452 x^2 + 173 x^4 - 22 x^6 + x^8$\\ \hline \end{tabular} } \medskip Surprisingly, there is the point $\gamma=1.61803 + 3.91487i$ (also indicated in Figure 4) with minimal polynomial $x^4-2 x^3+14 x^2+22 x+1$ and with $\lambda= \sqrt{2\gamma + 4\sin^2\frac{ \pi }{5}}$ having minimal polynomial $x^4-6 x^3+11 x^2+4 x-19$, both of degree $4$. It is clear that $\gamma\in \IQ(\lambda)$, and hence the group generated by elliptics of order $4$ and $5$ with this value $\gamma$ for the commutator parameter is indeed a subgroup of an arithmetic Kleinian group. However, as noted, it is captured by a pleating ray neighbourhood. We have established the following. \begin{corollary} There are no arithmetic Kleinian groups generated by elements of order $4$ and $5$ with invariant trace field of degree $4$. \end{corollary} We now turn to the case of degree $6$, but offer fewer details. We do note that the moduli space description and the capturing pleating ray neighbourhoods are independent of the degree of $\IQ(\gamma)$, so we may use the same picture to remove possible values of $\gamma$. \subsection{Degree 3 over $L=\IQ(\sqrt{5})$} The minimal polynomial for $\gamma$ has the form \begin{equation}\label{poly3} p(z)=(z^3+a_2 z^2 +a_1 z+ a_0)(z^3+b_2 z^2+b_1 z + b_0). \end{equation} We continue to assume the first polynomial has a complex conjugate pair of roots.
We write, for $p_i$ and $q_i$ of the same parity, \begin{eqnarray*} 2a_0=p_0+q_0\sqrt{5}, &&2b_0=p_0-q_0\sqrt{5}\\ 2a_1=p_1+q_1\sqrt{5}, &&2b_1=p_1-q_1\sqrt{5} \\ 2a_2=p_2+q_2\sqrt{5}, &&2b_2=p_2-q_2\sqrt{5} \end{eqnarray*} The first polynomial factor has roots $\gamma,\bar\gamma,r$ with $r\in (-2\sin^2\frac{\pi}{5},0)$ and the second factor has real roots $r_1,r_2,r_3 \in [-2\sin^2\frac{2\pi}{5},0]$. Thus \begin{eqnarray*} a_0=-|\gamma|^2 r, && b_0=-r_1r_2r_3.\\ a_1= |\gamma|^2+2 r \Re e[\gamma], &&b_1=r_1r_2+r_2r_3+r_1r_3.\\ a_2= - 2 \Re e[\gamma]-r, &&b_2=-(r_1+r_2+r_3). \end{eqnarray*} From this, with some easy estimates, we see that \begin{eqnarray*} 0< a_0 < 58.416, && 0< b_0 < 5.921\\ -1 < a_1< 97.246 , && 0< b_1<9.82 \\ -18.39 < a_2< 8.691, && 0< b_2<5.428 \end{eqnarray*} As before, adding these inequalities provides bounds on $p_i$ and $q_i$. \begin{eqnarray*} 1\leq p_0 \leq 65, \quad 0\leq p_1 \leq 107, \quad -18 \leq p_2 \leq 14 . \end{eqnarray*} \begin{eqnarray*} \left[ \frac{ p_0-12}{\sqrt{5}} \right]+1 \leq \frac{ p_0-2b_0}{\sqrt{5}}= q_0 \leq \left[ \frac{ p_0}{\sqrt{5}} \right] \\ \left[ \frac{ p_1-20}{\sqrt{5}} \right]+1 \leq \frac{p_1-2b_1}{\sqrt{5}}= q_1\leq \left[ \frac{ p_1}{\sqrt{5}} \right] \\ \left[ \frac{ p_2-11}{\sqrt{5}} \right]+1 \leq \frac{p_2-2b_2}{\sqrt{5}}= q_2 \leq \left[ \frac{ p_2}{\sqrt{5}} \right] \end{eqnarray*} Checking parity alone gives us $6793308$ cases to consider. Again we have a few conditions to satisfy. \begin{enumerate} \item The first factor must be negative at $-2\sin^2\frac{\pi}{5}$. 
\[ a_0+\frac{1}{4} (\sqrt{5}-5) a_1-\frac{5}{8}\big((\sqrt{5}-3) a_2 -2 \sqrt{5}+5\big) <0. \] \item The second factor must be negative at $-2\sin^2\frac{2\pi}{5}$ and its derivative must have two roots in the interval $[-2\sin^2\frac{2\pi}{5},0]$: \begin{eqnarray*} b_0-\frac{1}{4} (\sqrt{5}+5)b_1+\frac{5}{8} \left((\sqrt{5}+3)b_2-2 \sqrt{5}-5\right)<0,\\ \frac{3}{4} (\sqrt{5}+5 )> b_2+ \sqrt{b_2^2-3b_1}. \end{eqnarray*} \item The discriminant of the first factor must be negative and that of the second factor positive: \begin{eqnarray*} 0 > -27 a_0^2+a_1^2 \left(-4 a_1+a_2^2\right)+2a_0 \left(9 a_1a_2-2 a_2^3\right), \\ 0 < -27 b_0^2+b_1^2 \left(-4 b_1+b_2^2\right)+2b_0 \left(9 b_1b_2-2 b_2^3\right). \end{eqnarray*} \end{enumerate} When these tests were all passed, as they were by $114$ candidates, we numerically computed the roots to identify $\gamma$ and checked that all requirements were satisfied. This time there were only three points clearly lying outside the discrete free space. However, we computed the minimal polynomial in $\lambda$ for all these cases and found it to be of degree $12$. This proves \begin{corollary} There are no arithmetic Kleinian groups generated by elements of order $4$ and $5$ with invariant trace field of degree $6$. \end{corollary} \subsection{Degree 4 over $L=\IQ(\sqrt{5})$} The minimal polynomial for $\gamma$ has the form \begin{equation}\label{poly4} p(z)=(z^4+a_3z^3+a_2 z^2 +a_1 z+ a_0)(z^4+b_3z^3+b_2 z^2+b_1 z + b_0). \end{equation} We continue to assume the first polynomial has a complex conjugate pair of roots.
We write, for $p_i$ and $q_i$ of the same parity, \begin{eqnarray*} 2a_0=p_0+q_0\sqrt{5}, &&2b_0=p_0-q_0\sqrt{5}\\ 2a_1=p_1+q_1\sqrt{5}, &&2b_1=p_1-q_1\sqrt{5} \\ 2a_2=p_2+q_2\sqrt{5}, &&2b_2=p_2-q_2\sqrt{5} \\ 2a_3=p_3+q_3\sqrt{5}, &&2b_3=p_3-q_3\sqrt{5} \end{eqnarray*} Following our earlier notation, direct estimates yield \begin{eqnarray*} 0< b_0 < 10.71 \quad 0< b_1<23.69, \quad 0< b_2 <19.64 \quad 0< b_3 < 7.24 \\ 0< a_0< 40.364 , \quad -9.382<a_3<18.39 . \end{eqnarray*} It is worth a little time to improve the obvious bounds on $a_1$ and $a_2$. The polynomial \[ q(x)=x^4+a_3x^3+a_2x^2+a_1x+a_0 \] has a complex conjugate pair of roots and two roots in the interval $[-2\sin^2\frac{\pi}{5},0]$. Thus $q'(0)=a_1>0$ and $q'(-2\sin^2\frac{\pi}{5})<0$. These conditions are not sufficient. Using elementary calculus, which we leave the reader to consider, one also obtains \[ 0 \leq a_1 <112.69, \quad\quad -1.76 < a_2 < 88.03 \] Hence \begin{eqnarray*} 1\leq p_0 \leq 40, \quad 0\leq p_1 \leq 114, \quad -1 \leq p_2 \leq 90, \quad -9 \leq p_3 \leq 21. \end{eqnarray*} \begin{eqnarray*} \left[ \frac{ p_0-21.5}{\sqrt{5}} \right]+1 \leq \frac{ p_0-2b_0}{\sqrt{5}}= q_0 \leq \left[ \frac{ p_0}{\sqrt{5}} \right] \\ \left[ \frac{ p_1-48}{\sqrt{5}} \right]+1 \leq \frac{p_1-2b_1}{\sqrt{5}}= q_1\leq \left[ \frac{ p_1}{\sqrt{5}} \right] \\ \left[ \frac{ p_2-40}{\sqrt{5}} \right]+1 \leq \frac{p_2-2b_2}{\sqrt{5}}= q_2 \leq \left[ \frac{ p_2}{\sqrt{5}} \right] \\ \left[ \frac{ p_3-15}{\sqrt{5}} \right]+1 \leq \frac{p_3-2b_3}{\sqrt{5}}= q_3 \leq \left[ \frac{ p_3}{\sqrt{5}} \right] \end{eqnarray*} This search is now about $10^{10}$ cases after checking parity. To make this search more efficient we need to find tests on coefficients as soon as possible in the process. The most obvious target is the fact that the polynomial \[ p(z)=z^4+b_3z^3+b_2z^2+b_1z+b_0 \] has four real roots in the interval $I_5=[-2\sin^2\frac{2\pi}{5},0]$. 
Then \begin{eqnarray*} p'(z) & = & 4z^3+3 b_3z^2+2 b_2z +b_1, \quad \mbox{which must have three real roots in $I_5$,} \\ p''(z) & = & 12z^2+6 b_3z+2 b_2, \quad \mbox{which must have two real roots in $I_5$,} \\ p'''(z) & = & 24z+6 b_3, \quad \mbox{which must have a real root in $I_5$.} \end{eqnarray*} This last condition gives $-2\sin^2\frac{2\pi}{5}<-b_3/4<0$, which implies $0<b_3< 8\sin^2\frac{2\pi}{5}$, which we have by construction. The condition on $p''(z)$ implies \[ 0< \frac{b_3}{4} \pm \frac{\sqrt{9 b_3^2-24 b_2}}{12} < 2 \sin^2\frac{2\pi}{5}. \] The content here is that \begin{eqnarray*} 0 < & 3 b_3^2-8 b_2 & < 3\big(8 \sin^2\frac{2\pi}{5}-b_3\big)^2. \end{eqnarray*} Now the cubic $p'(z)$ has three real roots in $I_5$. Its discriminant must satisfy \[ 108 b_1^2+27b_1b_3^3+32 b_2^3<108 b_1b_2b_3+9b_2^2 b_3^2, \] and $p'(-2\sin^2\frac{2\pi}{5})<0$, that is, \[ b_1-\frac{1}{2} (\sqrt{5}+5 )b_2+\frac{5}{8} (3 \sqrt{5} b_3+9b_3-8 \sqrt{5}-20 ) <0. \] Finally, for the quartic itself we must have the discriminant positive and at least one real root. \medskip After this search we still do not have candidate polynomials, just a much shorter list of possibilities. It is hard to assert that the two real roots of the first polynomial lie in the correct interval without actually computing them. Now that we have a shorter list of possibilities we do just that. We compute the roots of $q(x)=x^4+a_3x^3+a_2x^2+a_1x+a_0$ and check that the imaginary parts of the complex roots are smaller than $5$ in absolute value and that the real roots lie in the shorter interval. This left us with $9$ possibilities.
{\small \begin{eqnarray*} \frac{1}{2} \left(7+3 \sqrt{5}\right)+\frac{1}{2} \left(45+19 \sqrt{5}\right) z+\frac{1}{2} \left(56+22 \sqrt{5}\right) z^2+\frac{1}{2} \left(-9-7 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(18+8 \sqrt{5}\right)+\frac{1}{2} \left(67+29 \sqrt{5}\right) z+\frac{1}{2} \left(56+22 \sqrt{5}\right) z^2+\frac{1}{2} \left(-9-7 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(7+3 \sqrt{5}\right)+\frac{1}{2} \left(37+15 \sqrt{5}\right) z+\frac{1}{2} \left(40+14 \sqrt{5}\right) z^2+\frac{1}{2} \left(-6-6 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(7+3 \sqrt{5}\right)+\frac{1}{2} \left(44+18 \sqrt{5}\right) z+\frac{1}{2} \left(47+17 \sqrt{5}\right) z^2+\frac{1}{2} \left(-6-6 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(3+\sqrt{5}\right)+\frac{1}{2} \left(28+10 \sqrt{5}\right) z+\frac{1}{2} \left(38+12 \sqrt{5}\right) z^2+\frac{1}{2} \left(-3-5 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(18+8 \sqrt{5}\right)+\frac{1}{2} \left(67+29 \sqrt{5}\right) z+\frac{1}{2} \left(70+28 \sqrt{5}\right) z^2+\frac{1}{2} \left(16+4 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(3+\sqrt{5}\right)+\frac{1}{2} \left(43+17 \sqrt{5}\right) z+\frac{1}{2} \left(75+29 \sqrt{5}\right) z^2+\frac{1}{2} \left(19+5 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(7+3 \sqrt{5}\right)+\frac{1}{2} \left(58+24 \sqrt{5}\right) z+\frac{1}{2} \left(86+34 \sqrt{5}\right) z^2+\frac{1}{2} \left(19+5 \sqrt{5}\right) z^3+z^4\\ \frac{1}{2} \left(7+3 \sqrt{5}\right)+\frac{1}{2} \left(55+23 \sqrt{5}\right) z+\frac{1}{2} \left(90+36 \sqrt{5}\right) z^2+\frac{1}{2} \left(19+5 \sqrt{5}\right) z^3+z^4 \end{eqnarray*} } All of these cases are well within the space of free representations, but in fact none of them satisfied the factorisation criterion. \begin{theorem} There are no arithmetic Kleinian groups generated by elements $f$ of order $4$ and $g$ of order $5$ with $\gamma(f,g)\not\in\IR$. 
\end{theorem} In fact we have proven rather more, as we have shown that every subgroup of an arithmetic Kleinian group generated by $f$ of order $4$ and $g$ of order $5$, if discrete and if $\gamma$ is complex, is free. We therefore have the following theorem. \begin{theorem} Let $\Gamma$ be an arithmetic Kleinian group. Suppose $f$ of order $4$ and $g$ of order $5$ lie in $\Gamma$. Then either $\langle f,g\rangle$ is free, or $\Gamma$ contains $Tet(4,5;3)$ with finite index. \end{theorem} \section{$p=2,3$ and $4$.} Apart from the case $p=6$ already covered, it is here that we will find many examples. Unfortunately there is not necessarily any intermediate field to help us in our searches. The fact that earlier we had a polynomial factor with all real roots was enormously helpful, and therefore the cases here are of a quite different nature. As we will see, the cases $p=2,4$ can be dealt with together as they will be commensurable, and the polynomials we seek are largely covered by the results of Flammang and Rhin \cite{FR}. The complexity of that case is in getting a very accurate description of the moduli space. That leaves the case $p=3$, which we now deal with. \subsection{$p=3$.} We first identify a useful total degree bound. This is obtained as a modification of the method of auxiliary functions introduced by Smyth and used in this context by Flammang and Rhin \cite{FR}. We will have more to say about this a bit later. Let $\alpha = 1+\gamma$ and note that then \begin{equation}\label{alphabounds} |\alpha-1|<3+2 \sqrt{2}, \quad |\alpha|<4+2 \sqrt{2}. \end{equation} \bigskip Let $t_0$ be the unique solution of the equation \[ \left|\frac{t}{1+t}\right|^t\frac{1}{1+t}=\frac{3}{2^{t+1}}. \] Then $t_0\approx 4.28291$. A little calculus reveals that $|x|^{t_0}|1-x|\leq 0.0770561$ on the interval $[-\frac{1}{2},1]$. The graph is illustrated below.
\scalebox{0.5}{\includegraphics[viewport=-40 520 570 800]{Graph1}}\\ \medskip \noindent {\bf Figure 5.} {\em The graph of $|x|^{t_0}|1-x|$ with $t_0=4.28291$.} \medskip Let \[ q_\alpha(z)=z^n+a_{n-1}z^{n-1}+\cdots+a_1z+a_0 = (z-\alpha)(z-\bar\alpha)(z-r_1) \cdots(z-r_{n-2}) \] be the minimal polynomial for $\alpha$. Then, as $r_i\in [-\frac{1}{2},1]$, \begin{eqnarray} |q_\alpha(0)|^{t_0}|q_\alpha(1)| &=& |\alpha|^{2t_0}|\alpha-1|^2 \prod_{i=1}^{n-2} |r_i|^{t_0} |1-r_{i}|\nonumber \\ & \leq & (3+2 \sqrt{2})^2\Big(4+2 \sqrt{2}\Big)^{2t_0} (0.0770561)^{n-2}. \label{a0bound} \end{eqnarray} As the left hand side here is at least one, we deduce that $n\leq 9$. Indeed, if $q_\alpha(0)\neq \pm 1$, then $n\leq 8$. In fact we have the following lemma, which will be useful in our high degree searches. \begin{lemma}\label{*lemma} Let $\IQ(\alpha)$ be a field with one complex place, with $\sigma(\alpha)\in [-\frac{1}{2},1] $ for all real embeddings $\sigma$, and satisfying (\ref{alphabounds}). Then the degree of this field is no more than $9$, and if it is $9$, then $\alpha$ is a unit and $1\leq q(1) \leq 7$, where $q$ is the minimal polynomial for $\alpha$. If the degree is $8$, then $1\leq |q(0)|\leq 2$, and $|q(0)|=2$ implies $|q(1)|\leq 5$. Further, in the degree $9$ case we also have $|\alpha|\geq 5.28769$, and in the degree $8$ case $|\alpha|\geq 4.1139$. \end{lemma} For the last two bounds we also use the fact that $|\alpha-1|=|\gamma|$, together with our bounds on the values $|\gamma|$ when $\gamma=\gamma(f,g)$ and $\langle f,g\rangle$ is not free. \bigskip Having these degree bounds now allows us to search for all the possible values of $\gamma$ satisfying the criteria determined by the Identification Theorem \ref{2genthm}. Some of these searches are at first sight barely feasible, for instance the degree $9$ case.
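The numerical content of the bound (\ref{a0bound}) is easy to check by machine. The following Python sketch is ours (the original computations were not done this way): it recomputes the maximum of $|x|^{t_0}|1-x|$ on $[-\frac{1}{2},1]$ on a grid, and finds the largest degree $n$ for which the right hand side of (\ref{a0bound}) is at least one.

```python
import math

t0 = 4.28291   # approximate root of |t/(1+t)|^t / (1+t) = 3 / 2^(t+1)

# grid maximum of |x|^{t0} |1-x| on [-1/2, 1]; quoted as 0.0770561 in the text
xs = (i * 1.5 / 200000 - 0.5 for i in range(200001))
M = max(abs(x) ** t0 * abs(1 - x) for x in xs)

def cap(n):
    # right hand side of the bound: (3+2 sqrt 2)^2 (4+2 sqrt 2)^{2 t0} M^{n-2}
    return (3 + 2 * math.sqrt(2)) ** 2 * (4 + 2 * math.sqrt(2)) ** (2 * t0) * M ** (n - 2)

# |q(0)|^{t0} |q(1)| >= 1 for an irreducible monic integer polynomial,
# so degree n is only possible while cap(n) >= 1
print(max(n for n in range(2, 15) if cap(n) >= 1))   # -> 9
```

The same computation with $|q_\alpha(0)|\geq 2$ in place of $1$ gives the degree $8$ statement of the lemma.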
However, the factorisation condition described earlier significantly reduces the possibilities by imposing strong parity conditions on the coefficients of the minimal polynomial of \begin{equation} \lambda=\sqrt{2\alpha+1}=\sqrt{2\gamma+3}, \quad \IQ(\lambda)=\IQ(\alpha)=\IQ(\gamma) \end{equation} as we now describe. We discuss briefly the low degree cases of degree $2$ and $3$ as they are straightforward. We then go into rather more detail in degrees $4$ and $5$. \section{The case of degree 2.} We have $\gamma^2+b\gamma+c=0$, and $\lambda=\sqrt{2\gamma+3}$ must also be a quadratic integer by the factorisation condition. Also, using easier bounds on the size of the moduli space, $\Im m(\gamma)<3$ and $-4<\Re e(\gamma) < 6$, and as $\gamma$ must be complex, $4c>b^2$. This gives us $\gamma=\frac{1}{2}(-b+\sqrt{b^2-4c})$ and hence \[ 8> b > -12, \quad 0> b^2-4c >- 36 \] This gives $162$ cases to consider, many of which are not in the moduli space. However once we implement the factorisation test that the minimal polynomial of $\lambda$ is degree 2 this quickly reduces to the three complex values $-2+\sqrt{2}i$, $2i$ and $-3+2i$. The first, $-2+\sqrt{2}i$, is found from $(3,0)$, $(4,0)$ orbifold Dehn surgery on the Whitehead link. The second, $2i$, is the associated Heckoid group with $(W_{3/4})^2=1$ and the last group is free on its generators. One can establish this latter fact directly or it is a consequence of the results described in Figure 6 below. \section{The case of degree 3.} We consider the polynomial $p(z)=z^3+az^2+bz+c$ which must have exactly one real root $r\in [-3/2,0]$. Then \[ p(z)= -\bar \gamma \gamma r + (\bar \gamma \gamma + \bar\gamma r + \gamma r )z - (\bar\gamma + \gamma + r)z^2+ z^3 \] With our bounds on $r$ and $\gamma$ we find \[ -9 \leq a \leq 10, \quad 1\leq b \leq 25, \quad 1\leq c \leq 36.\] As $p(0)=c>0$, $p'(0)=b>0$, $p(-3/2)<0$, $p'(-3/2)>0$, and the discriminant must be negative, a simple loop with these tests finds about $5,500$ candidates.
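The factorisation test is simplest to illustrate in the degree $2$ setting just described. The following sketch (our own code, not the search actually run) enumerates the $162$ cases and keeps those $\gamma$ for which $\lambda=\sqrt{2\gamma+3}$ has integer trace and norm, that is, for which $\lambda$ is a quadratic integer.

```python
import cmath

def near_int(x, tol=1e-6):
    return abs(x - round(x)) < tol

survivors, cases = [], 0
for b in range(-11, 8):                  # 8 > b > -12
    for D in range(-35, 0):              # D = b^2 - 4c, with 0 > D > -36
        if (b * b - D) % 4:              # c = (b^2 - D)/4 must be an integer
            continue
        cases += 1
        gamma = (-b + cmath.sqrt(D)) / 2         # the root in the upper half plane
        lam = cmath.sqrt(2 * gamma + 3)
        # lam is a quadratic integer iff its trace and norm are rational integers
        if near_int(2 * lam.real) and near_int(abs(lam) ** 2):
            survivors.append(gamma)
# cases == 162, and survivors are -2 + sqrt(2) i, 2i and -3 + 2i.
```

The norm here is $|\lambda|^2=|2\gamma+3|$, the square root of an integer, so the tolerance test only passes on exact perfect squares and no false positives arise.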
Once the factorisation condition is applied to these potential candidates only $17$ remain. This is illustrated below with the $4$ points outside the moduli space indicated. Proving that these are the only such points in the complement of the moduli space of groups free on generators follows from the arguments we provide in the next case of degree $4$, which will give a provable computational description of these spaces. \\ \scalebox{0.5}{\includegraphics[angle=-90,viewport=10 80 550 550]{degree3map}}\\ \noindent{\bf Figure 6.} {\em The possible $5,500$ candidate $\gamma$ values in the case of the degree $3$ search. Of these only $17$ satisfy the factorisation condition and only $4$ are in the complement of the moduli space of groups free on generators of order $3$ and $4$.} \bigskip We now describe the degree $4$ and $5$ cases in some detail for two reasons. First, in degree $4$ we find several cases which do turn out to generate arithmetic lattices, and then in degree $5$ there are none, but we need to develop more ideas to limit the search spaces. Finally we carefully eliminate the highest degree search, degree $9$. We have searched the cases of degree $6,7$ and $8$. They all go the same way and there is nothing to report. \section{The case of degree 4.} The minimal polynomial for $\gamma$ and hence $\lambda$ has degree $4$, \[\lambda^4+a_3\lambda^3+a_2\lambda^2+a_1\lambda+a_0 = 0 \] Thus \[(2\alpha+1)^2+a_2(2\alpha+1)+a_0 = -\sqrt{2\alpha+1}\big(a_1 +a_3(2\alpha+1)\big) \] Squaring both sides and rearranging gives us a polynomial for $\alpha$.
\begin{eqnarray*} 0& = & 16 \alpha ^4+\left(-8 a_3^2+16 a_2+32\right) \alpha ^3+\left(4 a_2^2+24 a_2-12 a_3^2+8 a_0-8 a_1 a_3+24\right) \alpha ^2 \\ && +\left(-2 a_1^2-8 a_3 a_1+4 a_2^2-6 a_3^2+8 a_0+4 a_0 a_2+12 a_2+8\right) \alpha \\&& +\left(a_0^2+2 a_2 a_0+2 a_0-a_1^2+a_2^2-a_3^2+2 a_2-2 a_1 a_3+1\right) \end{eqnarray*} There is a unique monic polynomial for the algebraic integer $\alpha$ over $\IQ$ and so all these coefficients are divisible by $16$. \begin{enumerate} \item degree 3 coefficient: $a_3^2$ is even, so \underline{$a_3$ is even}. \item degree 2 coefficient: $ a_2^2+2a_2+2 a_0+2 $ is divisible by $4$. Thus \underline{$a_2$ is even}. Then $2 a_0+2 $ is divisible by $4$, so \underline{$a_0$ is odd}. \item degree 1 coefficient: $- a_1^2-3 a_3^2+4 a_0+2 a_0 a_2+6 a_2+4$ is divisible by $8$. We reduce mod $2$ to see \underline{$a_1$ is even}. \item degree 0 coefficient $\left(1+a_0-a_1+a_2-a_3\right) \left(1+a_0+a_1+a_2+a_3\right)$. We reduce mod $2$ to see \underline{$a_0$ is odd} \end{enumerate} We now make the substitutions $a_i=2k_i$ or $2k_i+1$ as the case may be. We expand out the polynomial for $\alpha$ and check parities again. We find $k_2$ is even and $k_0$ is odd, and the sum $k_1+k_3$ is even. Again write $k_2=2m_2$, $k_0=2m_0-1$ and $k_3=2m_3-k_1$. Substitute back and expand out. We have \begin{align*} &16 \left(\left(m_0+m_2\right){}^2-m_3^2\right)\\ &+32 \left(\left(1+m_2\right) \left(m_0+m_2\right)+k_1 m_3-3 m_3^2\right) \alpha \\ &+16 \left(1+2 m_0+m_2 \left(4+m_2\right)-\left(k_1-6 m_3\right) \left(k_1-2 m_3\right)\right) \alpha ^2\\ & -32 \left(-1-m_2+\left(k_1-2 m_3\right){}^2\right) \alpha ^3 +16 \alpha ^4 \end{align*} as $16$ times the minimal polynomial. Further, we have proved the following. \begin{lemma} The minimal polynomial for $\lambda$ has the following coefficient structure. There are integers $n_0,n_1,n_2$ and $n_3$ such that \begin{itemize} \item $a_0=-1 + 4 n_0$. \item $a_1= 2n_1$. \item $a_2=4n_2$. 
\item $a_3=-2n_1 + 4 n_3$. \end{itemize} Moreover the coefficient structure of the minimal polynomial for $\alpha$ has (with the same $n_i$) \begin{itemize} \item $b_0=\left(n_0+n_2\right){}^2-n_3^2$. \item $b_1= 2 \left(\left(1+n_2\right) \left(n_0+n_2\right)+n_1 n_3-3 n_3^2\right)$. \item $b_2=1+2 n_0+n_2 \left(4+n_2\right)-\left(n_1-6 n_3\right) \left(n_1-2 n_3\right)$. \item $b_3=-2 \left(-1-n_2+\left(n_1-2 n_3\right){}^2\right)$. \end{itemize} Note that also $b_1$ and $b_3$ are even, $b_2$ has the same parity as $b_3/2$. \end{lemma} With this information we can now set up a simple search. To do so we use Vieta's formulas, first for $\alpha$, whose real roots are in $[-\frac{1}{2},1]$. Note that once we fix $\alpha$, the coefficients of its minimal polynomial are multilinear in the real roots $r_i$, and thus either increasing or decreasing in each. Thus the extrema occur at $r_i=1$ or $r_i=-\frac{1}{2}$, so it is just a small finite problem to find these. We replace $\alpha+\bar\alpha$ by either $|\alpha|=4+2\sqrt{2}$ or $-4$. This is unchallenging in low degree, but takes a bit of work in higher degree. \medskip \begin{tabular}{|l|c|} \hline $b_0=|\alpha|^2 r_1r_2$ & $-16 \leq b_0\leq 33$ \\ \hline $b_1=-|\alpha|^2(r_1+r_2)-(\alpha+\bar\alpha)r_1r_2 =2x_1$& $-53 \leq x_1\leq 24$ \\ \hline $b_2=|\alpha|^2+(\alpha+\bar\alpha)(r_1+r_2)+r_1r_2 $& $0 \leq b_2\leq 74$ \\ \hline $b_3 =-( \alpha+\bar\alpha+r_1+r_2)=2x_3 $ & $-7 \leq x_3 \leq 4$ \\ \hline \end{tabular} \medskip With the one other parity restriction we have found this is a search of about $1.5$ million cases. For the $\lambda$ search we may assume additionally that $\lambda+\bar\lambda \geq 0$ with the choice of square root to find the search bounds (this time $r_1,r_2\in [-\sqrt{3},\sqrt{3}]$). Recall $\lambda=\sqrt{2\alpha+1}$ and $|\lambda|<\sqrt{9+4\sqrt{2}}\approx 3.828$.
\medskip {\small \noindent \begin{tabular}{|l|c|c|} \hline $a_0=|\lambda|^2 r_1r_2$& $-43 \leq a_0\leq 43 $ & $ -10 \leq n_0 \leq 11 $ \\ \hline $a_1=-|\lambda|^2(r_1+r_2)-(\lambda+\bar\lambda)r_1r_2$& $-73 \leq a_1\leq 27$& $ -36 \leq n_1 \leq 13$\\ \hline $a_2=|\lambda|^2+(\lambda+\bar\lambda)(r_1+r_2)+r_1r_2 $& $-2 \leq a_2\leq 30$& $ -1 \leq n_2 \leq 7$ \\ \hline $a_3=-( \lambda+\bar\lambda+r_1+r_2)$& $-11\leq a_3\leq 3 $& $\Big[\frac{2n_1-11}{4}\Big] \leq n_3\leq\Big[ \frac{3+2n_1}{4}\Big] $ \\ \hline \end{tabular} } \medskip This is a search of about $22$ thousand cases, nearly two orders of magnitude smaller, so of course we choose to do the latter search. However as the degree grows, the symmetric functions start to get quite big quite quickly (for the $b_i$ they are evaluated at $-\frac{1}{2}$ or $1$, while for the $a_i$ they are evaluated at $\pm \sqrt{3}$) and from about degree $7$ or $8$ it is best to search through the coefficients $a_i$. For these larger degrees it is worthwhile obtaining much better coefficient bounds as well. \medskip Within these bounds on the coefficients we tested the real root condition, that $|\lambda|<4$, that $|\gamma|<5.85$ and that $|\Im m(\gamma)|<3.75$, and found $55$ possibilities for $\gamma=\alpha-1$ up to complex conjugacy. These are illustrated below. \scalebox{0.5}{\includegraphics[viewport=-40 520 570 800]{163}}\\ \medskip \noindent{\bf Figure 7.} {\em The degree $4$ candidate values for $\gamma$ satisfying all arithmetic criteria.} \medskip We now have to decide whether these groups are free. Each is definitely a discrete subgroup of an arithmetic Kleinian group at this point.
\scalebox{0.85}{\includegraphics[viewport=100 520 570 800]{mod163}}\\ \noindent{\bf Figure 8.} {\em The degree $4$ candidate values for $\gamma$ satisfying all arithmetic criteria mapped against the moduli space.} \medskip Now to eliminate these points we first examine the $17$ Farey polynomials and the pleating rays of low slopes where the denominator (which is the degree of the polynomial) is small. {\small \begin{equation}\label{slp}\{1/2, 3/5,4/7,6/7,5/8,5/9,7/11,8/13,9/14,11/17,13/20,16/21,15/26\}.\end{equation}} The inverse image of the half-space $H=\{z=x+i y: x\leq -2\}$ under the branch of the Farey polynomial $P_{r/s}$ which yields the rational pleating ray $r/s$ is proved in \cite{EMS2} to consist only of geometrically finite groups free on the generators of order $3$ and $4$. For these low slopes these preimages capture all but $4$ points easily. There are some computation issues in drawing the figure below for the last $4$ polynomial inverse images; it is really only for visual confirmation. In fact all we need to do is evaluate the associated polynomial with integer coefficients on the point in question, show that the image lies in $\{z=x+i y: x\leq -2\}$, and then just check that we have the right branch; this is determined by the pleating ray and path lifting, which are direct to check as we can numerically identify the other roots and critical points. \scalebox{0.5}{\includegraphics[angle=-90,viewport=-40 100 550 550]{PleatingCapture34}}\\ \noindent{\bf Figure 9.} {\em Neighbourhoods of the pleating rays with slopes given at (\ref{slp}) capturing all but $7$ points. } \medskip There are now $7$ points that remain and the associated groups are not going to be free on the two generators. These appear in the table below. We will explain how these groups are determined later.
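The evaluation step just described is elementary. A minimal sketch follows; the polynomial used is a placeholder, since the actual Farey polynomial coefficients are not reproduced here.

```python
def horner(coeffs, z):
    """Evaluate a polynomial at z; coeffs are listed from the leading
    coefficient down to the constant term."""
    acc = 0
    for c in coeffs:
        acc = acc * z + c
    return acc

def in_half_plane(coeffs, gamma):
    """Test whether the image of gamma lies in {z = x + iy : x <= -2}."""
    return horner(coeffs, gamma).real <= -2

# Placeholder integer polynomial z^2 + 2z + 2 (NOT an actual Farey polynomial):
assert horner([1, 2, 2], 1j) == 1 + 2j
```

The remaining work, as noted above, is checking that the correct branch of the inverse was taken, which is a separate path-lifting argument.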
\section{Degree $5$ search.} As these searches get bigger it is efficient to look for further simple tests to eliminate possible candidates before we numerically calculate the roots. We first look at the parity considerations as above. Following the above strategy, we assume the minimal polynomial for $\lambda$ has degree $5$, \[p(\lambda)=\lambda^5+a_4\lambda^4+a_3\lambda^3+a_2\lambda^2+a_1\lambda+a_0 = 0 \] Thus \[a_4(2\alpha+1)^2+a_2(2\alpha+1)+a_0 = -\sqrt{2\alpha+1}(a_1 +a_3(2\alpha+1)+(2\alpha+1)^2) \] Squaring both sides and rearranging gives us a polynomial for $\alpha$. {\small \begin{eqnarray}\label{mpa} p(\alpha)& = & 32 \alpha ^5+16 \left(-a_4^2+2 a_3+5\right) \alpha ^4+8 \left(a_3^2+8 a_3-4 a_4^2+2 a_1+2 a_2 a_4+10\right) \alpha ^3 \\ && \nonumber +4 \left(-a_2^2+2 a_4 a_2+3 a_3^2-6 a_4^2+12 a_3+2 a_1 \left(a_3+3\right)-2 a_0 a_4+10\right) \alpha ^2\\ && +\left(2 \left(a_1+a_3+1\right){}^2+4 \left(a_3+2\right) \left(a_1+a_3+1\right)+4 \left(a_0+a_2+a_4\right) \left(a_2-2 a_4\right)\right) \alpha \nonumber \\ && \nonumber+\left(\left(a_1+a_3+1\right){}^2-\left(a_0+a_2+a_4\right){}^2\right) \end{eqnarray}} There is a unique monic polynomial for the algebraic integer $\alpha$ over $\IQ$ and so all these coefficients are divisible by $32$. We reduce these coefficients modulo $2$ to determine the necessary parities. \begin{itemize} \item \underline{$a_4$ is odd}, $a_4=2k_4+1$. \item $a_3^2+8 a_3-4 a_4^2+2 a_1+2 a_2 a_4+10$ is divisible by $4$, so \underline{$a_3$ is even}, $a_3=2k_3$. Then $1+ a_1- a_2 a_4$ is even. Hence $ a_1+ a_2$ is odd. \item $-a_2^2+2 a_4 a_2+3 a_3^2-6 a_4^2+12 a_3+2 a_1 \left(a_3+3\right)-2 a_0 a_4+10$ is divisible by $8$. So \underline{$a_2$ is even}, $a_2=2k_2$ and \underline{$a_1$ is odd}, $a_1=2k_1+1$. Then \[-4 a_0 k_4-2 a_0-4 k_2^2+4 k_2+4 k_3^2+4 k_1+4 k_3+2\] is divisible by $8$, so \underline{$a_0$ is odd}, $a_0=2k_0+1$. Then \begin{equation}\label{*1} k_0 + k_1+ k_4 \end{equation} is even.
\item Next, making the substitutions identified above, we have \begin{equation}\label{*2} k_1^2+2 k_2^2+3 k_3^2-2 k_0+2 k_0 k_2+2 k_3-2 k_2 k_4-2 k_4+1 \end{equation} divisible by $4$. Thus $k_1^2 +3 k_3^2$ is odd and so \underline{$k_1+k_3$ is odd}. \item Next, expanding out the constant term gives \[ k_0+k_1+k_2+k_3+k_4 \] is even. Thus $k_0+k_2+k_4$ is odd. Hence (\ref{*1}) gives $k_1+k_2$ odd. \end{itemize} We now record this information. \begin{align} & \label{*5} a_0=2k_0+1, \quad a_1=2k_1+1,\quad a_2=2k_2, \quad a_3=2k_3,\quad a_4=2k_4+1. \\ & \label{*6} k_1+k_2 \mbox{ is odd}, \quad k_1+k_3 \mbox{ is odd},\quad k_0+k_2+k_4 \mbox{ is odd}. \end{align} Writing $k_2=2m_2+1-k_1, \; k_3=2m_3+1-k_1, \; k_4=2m_4-k_0-k_1$ now yields the polynomial \begin{align*} & 16 \Big((2-k_1+m_2+m_3+m_4) (k_1-m_2+m_3-m_4) \Big)\\ &+32 \Big(1-3 k_1^2+3 m_3 (2+m_3)-6 m_4+2 k_0 (1-k_1+m_2+m_4)\\ & \quad +k_1 (4+5 m_2-m_3+7 m_4)-2 (m_2^2+2 m_4^2+m_2 (2+3 m_4)) \Big) \alpha \\ &+32 \Big(4-2 k_0^2-6 k_1^2-5 m_2-2 m_2^2+13 m_3+6 m_3^2-(13+12 m_2) m_4-12 m_4^2 \\ & \quad +k_0 (6-8 k_1+6 m_2+10 m_4)+k_1 (5+8 m_2-4 m_3+18 m_4) \Big) \alpha ^2 \\&+32 \Big(-4 k_0^2-5 k_1^2+2 k_0 (3-5 k_1+2 m_2+8 m_4)+k_1 (2+4 m_2-4 m_3+20 m_4) \\ & \quad -2 (-3+m_2-2 m_3 (3+m_3)+6 m_4+4 m_2 m_4+8 m_4^2)\Big) \alpha ^3 \\ &-64 \Big(k_0^2+k_0 (-1+2 k_1-4 m_4)+(k_1-2 m_4){}^2-2 (1+m_3-m_4)\Big) \alpha ^4+32 \alpha ^5 \end{align*} Only the constant term \[ (2-k_1+m_2+m_3+m_4) (k_1-m_2+m_3-m_4) =(1+m_3)^2 -(1+m_2+m_4-k_1)^2 \] is now an issue. As it is a difference of squares it is not congruent to $2$ mod $4$. It also must be even, thus the constant term in $q_\alpha(z)$ is an even integer. Now (\ref{*5}) and (\ref{*6}) give necessary and sufficient conditions for the factorisation condition to hold. \medskip With a little simplification, putting together what is above, we now have the following information. 
\begin{eqnarray*} a_0=2k_0+1,\quad a_1=2k_1+1,\quad a_2=4m_2+2-2k_1, \\a_3=4m_3+2-2k_1,\quad a_4=4m_4-2k_0-2k_1+1. \end{eqnarray*} There are a couple more quick tests we can utilise to cut down the search space. The polynomial $p(z)$ is irreducible, has odd degree, and has all its real roots in the interval $[-\sqrt{3},\sqrt{3}]$. Thus \begin{eqnarray*} 0 &< & p(\sqrt{3}) = \sqrt{3} k_1+3 \sqrt{3} k_3+k_0+3 k_2+9 k_4+5 \sqrt{3}+5.\\ 0 &> & p(-\sqrt{3}) =-\sqrt{3} k_1-3 \sqrt{3} k_3+k_0+3 k_2+9 k_4-5 \sqrt{3}+5. \end{eqnarray*} Subtracting the second from the first and substituting, we find \begin{equation} k_1+3k_3+5=k_1+3(2m_3+1-k_1)+5>0 \end{equation} and so we can implement $m_3\geq (k_1-4)/3$ in the search we set up. \subsection{The search.} At this point we can begin our search. Let $\lambda,\bar\lambda, r,s,t$ be the roots of the minimal polynomial. We can choose $\lambda$ to have non-negative real part. Without any great attempt to get the best possible bounds we obtained \medskip {\small \begin{tabular}{|l|c|} \hline $a_0=|\lambda|^2 rst=2k_0+1$& $-33 \leq k_0\leq 32 $ \\ \hline $a_1=|\lambda|^2(rs+rt+st)=2k_1+1$& $-66 \leq k_1\leq 74$ \\ \hline $a_2=|\lambda|^2(r+s+t)+(\lambda+\bar\lambda)(rs+rt+st)+rst =2k_2$& $-32 \leq k_2\leq 67$ \\ \hline $a_3=|\lambda|^2+rs+rt+st+(\lambda+\bar\lambda)(r+s+t) =2k_3$& $-21\leq k_3\leq 29$ \\ \hline $a_4= \lambda+\bar\lambda+r+s+t =2k_4+1 $& $-3\leq k_4\leq 5$ \\ \hline \end{tabular} } \medskip With the parity checks and the simple tests suggested this loop has about $84$ million candidates. Many of these candidates will have $4$ complex roots. We remark that if there is exactly one complex conjugate pair of roots for the irreducible minimal polynomial for $\lambda$ of degree $5$, then the same can be said of $\alpha$ if it is complex, since $\alpha\in \IQ(\lambda)$, so $ \IQ(\alpha)\subset \IQ(\lambda)$, and since the only proper subfields of a field with one complex place are totally real, $ \IQ(\alpha)= \IQ(\lambda)$.
However $\alpha$ cannot be real as then \[ 5=[\IQ(\lambda):\IQ]= [\IQ(\lambda):\IQ(\alpha)]\,[\IQ(\alpha):\IQ] \] so $[\IQ(\alpha):\IQ]=1$, $\alpha\in \IZ$ and $\lambda$ is an integer in a quadratic field, contradicting $[\IQ(\lambda):\IQ]=5$. \medskip We could implement a further test of the discriminant in these searches, but polynomial root finding algorithms are quick enough that we simply computed the roots and checked that $\lambda$ was a viable candidate, that is, that all the real roots are in $[-\sqrt{3},\sqrt{3}]$. This gave us a list with about 5100 candidates. The first we found was $x^5-x^4+6x^3+22x^2-23x-51$ with real roots $-1.674, -1.490, 1.62$ and $\lambda=1.269+i3.308$. However, as we did not implement {\em all} the parity checks in the search to keep the structure simple, \[ \gamma=\frac{1}{2}(\lambda^2-3) = -6.168+i4.20 \] has a minimal polynomial of degree $5$ with real roots in $[-\frac{1}{2},1]$, but it is not monic. We then searched through these polynomials to find those minimal polynomials for $\gamma=(\lambda^2-3)/2$ which were monic, had $-4<\Re e(\gamma)$ and $|\Im(\gamma)|<4$. The values of $\gamma$ we found are illustrated below. \scalebox{0.7}{\includegraphics[viewport=50 430 770 850]{Degree5Map34}}\\ \noindent{\bf Figure 10.} {\em The degree $5$ candidates with $11/13$ pleating ray neighbourhood illustrated.} The most computationally challenging polynomial we found was \[ 1+7 x+12 x^2-3 x^4+x^5 \] with real roots $-1.29375, -0.378269, -0.240317$ and complex roots $2.45617 \pm i 1.57165$. This was captured by the $11/13$ pleating ray neighbourhood illustrated above in Figure 10. \subsection{Additional searches} As the degree grows the search space grows rapidly, but the fact that all the real roots of $\alpha$ are in $[-\frac{1}{2},1]$ means the symmetric functions grow slowly and we can obtain better bounds for our searches (although this does not happen in degree $5$). We now outline how this goes in this case and present data.
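The numerical filtering applied to the first candidate above can be sketched as follows. We use a simple Durand--Kerner iteration for the roots (our own helper, not the code actually used) and detect the failure of monicity through the constant term $\prod_i(-\gamma_i)$ over the conjugates, which must be a rational integer when $\gamma$ is an algebraic integer.

```python
def all_roots(coeffs, iters=400):
    """Durand-Kerner iteration: all complex roots of a monic polynomial,
    coefficients listed from the leading 1 down to the constant term."""
    n = len(coeffs) - 1
    def p(z):
        acc = 0
        for c in coeffs:
            acc = acc * z + c
        return acc
    zs = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        new = []
        for k, z in enumerate(zs):
            d = 1
            for j, w in enumerate(zs):
                if j != k:
                    d *= z - w
            new.append(z - p(z) / d)
        zs = new
    return zs

# The first candidate found in the degree 5 search:
rs = all_roots([1, -1, 6, 22, -23, -51])
reals = sorted(r.real for r in rs if abs(r.imag) < 1e-8)   # three real roots
lam = max(rs, key=lambda r: r.imag)                        # ~ 1.269 + 3.308i
gamma = (lam ** 2 - 3) / 2                                 # ~ -6.168 + 4.20i

# gamma is an algebraic integer only if the product of -gamma_i over the
# conjugates gamma_i = (r^2 - 3)/2 is a rational integer; here it is 3/8,
# so the minimal polynomial of gamma is not monic and the candidate fails.
const = 1
for r in rs:
    const *= -(r ** 2 - 3) / 2
```
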
\medskip Let $q_\alpha(z)=z^5+b_4z^4+b_3z^3+b_2z^2+b_1z+b_0$ be the minimal polynomial for $\alpha$. Equations (\ref{*5}) and (\ref{*6}) together with the difference of squares condition we found yield the following information on the coefficients $b_i$. \begin{itemize} \item $b_4$ is even, $b_4=2n_4$ and $n_4$ has the same parity as $m_2+m_3+m_4$. \item $b_3$ has the same parity as $m_2+m_3+m_4$. \item $b_2$ is even, $b_2=2n_2$. \item $b_1$ has the same parity as $1+m_3$. \item $b_0$ is even, $b_0=2n_0$ and $n_0$ has the same parity as $1+m_3$. \item $b_3$ has the same parity as $n_4$. \item $b_1$ has the same parity as $n_0$. \end{itemize} We now go about finding bounds on these quantities. \subsection{Loop bounds.} We recall $-8\leq \alpha+\bar\alpha \leq 6+4\sqrt{2}$, $|\alpha|\leq 3+2\sqrt{2}$, and $r,s,t\in [-\frac{1}{2},1]$. The following information follows from calculus. We set $x=|\alpha|$ and $2x=\alpha+\bar\alpha$, with $-4<x<3+2\sqrt{2}$, and calculate gradients to find that the extrema are on the boundary, and then optimise. So for instance, $-0.737 < rs+rt+st <3$. \begin{eqnarray*} -33.98 &< b_0 =&- |\alpha|^2 rst < |\alpha|^2/2 < 16.99 \\ -25.3 &< b_1=&|\alpha|^2(rs+rt+st)+(\alpha+\bar\alpha)(r+s+t) < 136.9 \\ -108.2 &< b_2=&-|\alpha|^2(r+s+t)-(\alpha+\bar\alpha)(rs+rt+st) < 55.3 \\ -24 &< b_3=&(\alpha+\bar\alpha)(r+s+t) < 34.9 \\ -14.7 &< b_4=&-(\alpha+\bar\alpha + r+s+t) < 9.5 \end{eqnarray*} Thus \begin{eqnarray*} -16 \leq n_0 \leq 8 & -25\leq b_1 \leq 136 & -54 \leq n_2\leq 26 \\ -23 \leq b_3 \leq 34 && -7 \leq n_4 \leq 4 \end{eqnarray*} We actually ran this search (which took significantly longer) to confirm our results. \section{The case of degree $6$ \& $7$} \subsection{Degree $6$ parity considerations} As before we obtain another expression for the minimal polynomial for $\alpha$ with the coefficients coming from the minimal polynomial for $\lambda$.
\begin{align} &\left(-1-a_0+a_1-a_2+a_3-a_4+a_5\right) \left(1+a_0+a_1+a_2+a_3+a_4+a_5\right) \nonumber \\ &+2 \left(-2 a_0 \left(3+a_2+2 a_4\right)-2 \left(1+a_2+a_4\right) \left(3+a_2+2 a_4\right)+\left(a_1+a_3+a_5\right) \left(a_1+3 a_3+5 a_5\right)\right) \alpha \nonumber \\ &+\left(8 a_1 a_3+12 a_3^2-8 \left(3+a_4\right) \left(1+a_0+a_2+a_4\right)-4 \left(3+a_2+2 a_4\right){}^2+24 a_1 a_5+48 a_3 a_5+40 a_5^2\right) \alpha ^2\nonumber \\ &+8 \left(a_3^2-2 \left(a_0+a_2 \left(4+a_4\right)+2 \left(5+a_4 \left(5+a_4\right)\right)\right)+2 a_1 a_5+8 a_3 a_5+10 a_5^2\right) \alpha ^3\nonumber \\&-16 \left(15+2 a_2+a_4 \left(10+a_4\right)-2 a_3 a_5-5 a_5^2\right) \alpha ^4\nonumber\\ &+32 \left(-6-2 a_4+a_5^2\right) \alpha ^5-64 \alpha ^6 \label{afor} \end{align} The necessary parity conditions in degree 6 are \begin{itemize} \item $a_0=2m_0+1$ \item $a_1=2m_1$ \item $a_2=4m_2+3$ \item $a_3=4m_3$ \item $a_4=8m_4-4m_2-2m_0+3$ \item $a_5=8m_5-4m_3-2m_1 $ \end{itemize} With the addition of another parity condition $m_4+m_5$ odd, these are sufficient as well. Flammang and Rhin give a method for bounding values of symmetric functions and coefficients using various identities. We adopt those in degree $7$ and $8$. However here we simply adopt the following strategy. We replace $|\lambda|$ by $x\in [0,\sqrt{7+4\sqrt{2}}]$ and $\lambda+\bar\lambda$ by $2x$. Calculate the gradient of the function of $5$ variables to see there are no interior extrema in the region $x\in[0,\sqrt{7+4\sqrt{2}}]$, $r_i\in [-\sqrt{3},\sqrt{3}]$. Then symmetry allows us to move the $r_i$ in an increasing or decreasing manner to the value $\pm \sqrt{3}$. We could improve these estimates by realising that $r_i\approx \pm \sqrt{3}$ implies the corresponding real embedding gives $\sigma_i(\alpha)\approx 1$ and we have seen that the real roots of the minimal polynomial for $\alpha$ cannot pile up near $1$.
\[ -113 \leq a_0 = |\lambda|^2 r_1 r_2 r_3 r_4 \leq 113 \] \[ a_1 =- |\lambda|^2 (r_1 r_2 r_3 +r_1r_2r_4+r_2r_3r_4)-(\lambda+\bar\lambda)(r_1r_2r_3r_4) \] \[ -261 \leq a_1\leq 129 \] \[ a_2 = |\lambda|^2 (r_1 r_2 +r_1r_3 +r_1r_4+r_2r_3+r_2r_4+r_3r_4)+(\lambda+\bar\lambda)(r_1 r_2 r_3 +r_1r_2r_4+r_2r_3r_4) \] \[ -188 \leq a_2 \leq 338 \] \[ a_3 = -|\lambda|^2 (r_1 +r_2+r_3 +r_4)-(\lambda+\bar\lambda)(r_1 r_2 +r_1r_3 +r_1r_4+r_2r_3+r_2r_4+r_3r_4) \] \[ -215 \leq a_3 \leq 43 \] \[ a_4=|\lambda|^2+(\lambda+\bar\lambda)(r_1+r_2+r_3+r_4)+(r_1r_2+r_1r_3+r_1r_4+ r_2r_3+ r_2r_4+r_3r_4) \] \[-11 \leq a_4\leq 79 \] \[ -14\leq a_5=-\lambda-\bar\lambda-r_1-r_2-r_3-r_4\leq 6 \] At first sight this loop seems to allow $20\times 91 \times 258\times 526\times 390\times 226 \approx 21\times 10^{12}$ possibilities. However the parity considerations reduce this to only $5\times 10^9$ possibilities. For instance, $a_5$ lies in an interval of width $20$ and so after having determined $m_1$ and $m_3$ the value $m_5$ must lie in an interval of length $20/8$ and so admits at most three values. There is more that can be said here. If we review the formula (\ref{afor}) we see that the coefficient for $\alpha^5$ there is $32 \left(-6-2 a_4+a_5^2\right)$, which must be divisible by $64$, and is, since $a_5$ is even. In terms of the real roots $s_i$ of the minimal polynomial for $\alpha$ we find that there is an integer $m$ so that \[ \alpha+\bar\alpha+s_1+s_2+s_3+s_4 = (-3- a_4+\frac{1}{2} a_5^2) m \] In the loop described above we can calculate $m_1,m_3$ and hence $m_5$ before we calculate $m_4$, and hence $a_4$. Now \[ -9\leq \alpha+\bar\alpha+s_1+s_2+s_3+s_4 \leq 17 \] Hence, given $a_5$, \[ -9\leq \big(-3- a_4+\frac{1}{2} a_5^2\big) m \leq 17 \] Hence $a_4$ lies in an interval of width $\frac{26}{|m|}$, and so $m_4$ lies in an interval of width $\frac{26}{8|m|}$. If $|m|=1$, then that gives at most $4$ possibilities for $a_4$, decreasing the search space by a factor of at least $3$.
If $|m|=2$, then there are only three possibilities, and as soon as $|m|\geq 4$ there is at most one possibility, with \[ 2 \leq - 4m_4+2m_2+m_0+\frac{1}{4} a_5^2 \leq 10 \] Other observations, coming from the additional parity condition, are that the constant term coefficient in the minimal polynomial for $\alpha$ is \[ \frac{1}{64} \left(-1-a_0+a_1-a_2+a_3-a_4+a_5\right) \left(1+a_0+a_1+a_2+a_3+a_4+a_5\right) \] \[ = \frac{1}{4} (3 + 4 m_4 - 4 m_5) (1 + m_4 + m_5) \] so we must have $m_4+m_5$ odd, and this halves the search space again. These, and a few other simple observations, reduce the search space by another order of magnitude and additionally give simple tests to eliminate candidates before we use a root finding algorithm. \scalebox{0.65}{\includegraphics[viewport=-10 350 670 850]{Degree6Map34}}\\ \noindent{\bf Figure 11.} {\em The degree $6$ candidates satisfying all arithmetic criteria.} \medskip We ran this search and found the $32$ points illustrated here that we should consider further. None of these were in the required region. \subsection{Degree $7$ parity considerations} \begin{align*} &128 \alpha ^7+64 \alpha ^6 \left(2 a_5-a_6 ^2+7\right)+32 \alpha ^5 (2 a_3-2 a_6 (a_4+3 a_6 )+a_5 (a_5+12)+21) \\ &+16 \alpha ^4 \left(2 a_1-2 a_2 a_6 +2 a_3 (a_5+5)-a_4^2-10 a_4 a_6 +5 a_5 (a_5+6)-15 a_6 ^2+35\right)\\ &+8 \alpha ^3 \left(-2 a_0a_6 +2 a_1 (a_5+4)-2 a_2 (a_4+4 a_6 )+a_3^2+4 a_3 (2 a_5+5) \right. \\ & \left. \quad -4 \left(a_4^2+5 a_4 a_6 +5 a_6 ^2\right)+10 a_5 (a_5+4)+35\right)\\ &+4 \alpha ^2 \left(-2 (a_4+3 a_6 ) (a_0+a_2+a_4+a_6 )+2 a_1 (a_3+3 a_5+6)-(a_2+2 a_4+3 a_6 )^2\right.\\ & \left. \quad+3 a_3^2+4 a_3 (3 a_5+5)+10 a_5^2+30 a_5+21\right)\\ &+\alpha \left(-4 (a_2+2 a_4+3 a_6 ) (a_0+a_2+a_4+a_6 )+2 (a_1+a_3+a_5+1)^2 \right. \\ & \left. \quad +4 (a_3+2 a_5+3) (a_1+a_3+a_5+1)\right) \\ & +\left((a_1+a_3+a_5+1)^2-(a_0+a_2+a_4+a_6 )^2\right) \end{align*} We continue to reduce these coefficients mod $2$ making substitutions as appropriate.
Then \underline{$a_6$ is odd}, $a_6=2k_6+1$. Then \underline{$a_5$ is odd}, $a_5=2k_5+1$. Hence $a_3+a_4$ is even, and subsequently \underline{$a_4$, hence $a_3$} are odd, $a_4=2k_4+1$, $a_3=2k_3+1$. It follows \underline{$a_0$ is odd}, $a_0=2k_0+1$. Subsequently \underline{$a_1$ and $a_2$ are odd}, $a_2=2k_2+1$, $a_1=2k_1+1$. Expanding out and looking at the coefficients in terms of $k_i$ we find \begin{itemize} \item $k_1+k_2+k_4$ is even. \item $k_0+k_1+k_2+k_4+k_5+k_6$ is odd. \item $k_2+k_3+k_6$ is even. \item $k_1+k_3+k_5$ is even. \item $k_0+k_1+k_2+k_3+k_4+k_5+k_6$ is even. \end{itemize} Hence $k_3$ is odd. \begin{itemize} \item $k_0+k_1+k_2$ is odd. $k_2=2m_2+1-k_0-k_1$ \item $k_0+k_4$ is odd. $k_4=2m_4+1-k_0$ \item $k_2+k_6$ is odd. $k_6=2\tilde{m_6}+1-k_2=2m_6+k_0+k_1$ \item $k_1+k_5$ is odd. $k_5=2m_5+1-k_1$ \end{itemize} At this point the coefficients of $\alpha^4,\alpha^5$ and $\alpha^6$ are divisible by $128$. We continue. Looking at the coefficient of $\alpha^2$ gives $k_3=2m_3+1$ odd and deals with the coefficient of $\alpha^3$. Next $m_3+m_5$ is even, as is $m_2+m_4+m_6$. Subsequently $m_4+m_5$ is even, as is $m_3+m_4$. Then we have \begin{itemize} \item $k_2=2m_2+1-k_0-k_1$ \item $k_3=2m_3+1$ \item $k_4=2(2n_4-m_3)+1-k_0$ \item $k_6=2(2n_6-(2n_4-m_3)-m_2)+k_0+k_1$ \item $k_5=2(2n_5-m_3)+1-k_1$ \end{itemize} Relabeling, we now have an inductive structure to determine the coefficients. \begin{itemize} \item $a_0=2n_0+1$ \item $a_1=2n_1+1$ \item $a_2=4n_2+3-2(n_0+n_1) $ \item $a_3=4n_3+3$ \item $a_4=8n_4-4n_3-2n_0+3$ \item $a_5=8n_5-4n_3-2n_1+3$ \item $a_6=8n_6- 8n_4-4n_3-4n_2+2n_0+2n_1+1 $ \end{itemize} \subsection{The case of degree $8$} Let \[ \lambda ^8+a_7\lambda ^7+a_6\lambda ^6+a_5\lambda ^5+a_4\lambda ^4+a_3 \lambda ^3+a_2\lambda ^2+a_1\lambda +a_0=0 \] be the minimal polynomial for $\lambda$.
Let \[ \alpha ^8+b_7\alpha ^7+b_6\alpha ^6+b_5\alpha ^5+b_4\alpha ^4+b_3 \alpha ^3+b_2\alpha ^2+b_1\alpha +b_0 = 0 \] be the minimal polynomial for $\alpha$. \\ \begin{lemma} The disk $\{|\alpha|<4.4960\}$ is excluded. Hence \[ 4.49 \leq |\alpha| \leq 4+2\sqrt{2} \] and \[ 4\leq \Re e(\alpha) \leq 4+2\sqrt{2} \] \end{lemma} \scalebox{0.75}{\includegraphics[viewport=40 450 470 800]{ExcludedDisk}}\\ \noindent {\bf Figure 13.} {\em The excluded disk in the $\alpha$-plane.} \medskip \noindent{\bf Proof.} Let $\alpha_0$ be the root of the equation \[ \alpha_0^{2 t_0} (\alpha_0 - 1)^2 (0.0770561)^6 = 1 \] Then, noting $\gamma=\alpha-1$ and $|\alpha|< \alpha_0$, (\ref{a0bound}) gives \[ |p(0)|^{t_0}|p(1)|\leq |\alpha|^{2 t_0}|\alpha - 1|^2 (0.0770561)^6 < 1 \] and this implies that $p(0)=0$ or $p(1)=0$, which is impossible. \hfill $\Box$ \medskip Thus the possible values of $\alpha$ that we seek are confined to a rather small region. It is rather unsurprising then that there are no degree $8$ cases. Against this, however, we do know there are infinitely many lattices in this region. We next recall Lemma \ref{*lemma} that implies $|p(0)|=|b_0|\leq 2$ in this case.
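The radius of the excluded disk in the lemma above can be recovered numerically. The following sketch (our own code) uses the values of $t_0$ and the maximum $0.0770561$ computed earlier and solves the defining equation by bisection.

```python
import math

t0, M = 4.28291, 0.0770561   # from the auxiliary function computation

def f(a):
    # log of a^(2 t0) * (a - 1)^2 * M^6, whose root is alpha_0
    return 2 * t0 * math.log(a) + 2 * math.log(a - 1) + 6 * math.log(M)

lo, hi = 2.0, 8.0            # f(2) < 0 < f(8), and f is increasing
for _ in range(80):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
alpha0 = (lo + hi) / 2       # ~ 4.4960, the radius of the excluded disk
```
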
We calculate as before to find $2^8$ times the minimal polynomial for $\alpha$ to be {\small \begin{align*} &(-1 - a_0 + a_1 - a_2 + a_3 - a_4 + a_5 - a_6 + a_7) (1 + a_0 + a_1 + a_2 + a_3 + a_4 + a_5 + a_6 + a_7) \\ &+ 2 (-2 a_0 (4 + a_2 + 2 a_4 + 3 a_6) - 2 (1 + a_2 + a_4 + a_6) (4 + a_2 + 2 a_4 + 3 a_6) \\ & + (a_1 + a_3 + a_5 + a_7) (a_1 + 3 a_3 + 5 a_5 + 7 a_7)) \alpha \\ & + 4 (2 a_1 a_3 + 3 a_3^2 + 6 a_1 a_5 + 12 a_3 a_5 + 10 a_5^2 - 2 (1 + a_0 + a_2 + a_4 + a_6) (6 + a_4 + 3 a_6) \\& - (4 + a_2 + 2 a_4 + 3 a_6)^2 + 12 a_1 a_7 + 20 a_3 a_7 + 30 a_5 a_7 + 21 a_7^2) \alpha^2 \\ & + 8 (a_3^2 + 2 a_1 a_5 + 8 a_3 a_5 + 10 a_5^2 - 2 (4 + a_6) (1 + a_0 + a_2 + a_4 + a_6)\\& - 2 (6 + a_4 + 3 a_6) (4 + a_2 + 2 a_4 + 3 a_6) + 8 a_1 a_7 + 20 a_3 a_7 + 40 a_5 a_7 + 35 a_7^2) \alpha^3 \\ &+ (-32 a_0 - 16 (70 + a_4^2 - a_5 (2 a_3 + 5 a_5) + 10 a_4 (3 + a_6) + 2 a_2 (5 + a_6) \\ & + 5 a_6 (14 + 3 a_6)) + 32 (a_1 + 5 (a_3 + 3 a_5)) a_7 + 560 a_7^2) \alpha^4 \\& + 32 (a_5^2 - 2 (28 + a_2 + a_4 (6 + a_6) + 3 a_6 (7 + a_6)) + 2 a_3 a_7 + 12 a_5 a_7 + 21 a_7^2) \alpha^5 \\ & - 64 (28 + 2 a_4 + a_6 (14 + a_6) - 2 a_5 a_7 - 7 a_7^2) \alpha^6 \\ &+ 128 (-8 - 2 a_6 + a_7^2) \alpha^7 - 256 \alpha^8 \end{align*}} We then inductively reduce modulo $2$ to gain the parity of the coefficients, make the appropriate substitutions and repeat until all the remaining coefficients are a multiple of $2^8$ (we carry out this process more carefully in the next subsection for degree $9$).
After this we find that there are integers $k_0,k_1,\ldots,k_8$ so that \begin{eqnarray*} a_0 & = & 1 + 2 k_0 \\ a_1 & = & -2+4 k_1 \\ a_2 & = & -2 k_0+4 k_2 \\ a_3 & = & 2-4 k_1+4 k_3 \\ a_4 & = & -2 \left(1+k_0+4 k_1-4 k_4\right) \\ a_5 & = & 2+4 k_1-8 k_4+8 k_5 \\ a_6 & = & -2 \left(7 k_0-4 k_1+2 k_2+4k_4-8 k_6\right) \\ a_7 & = & -2 \left(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\right) \end{eqnarray*} This gives us the minimal polynomial for $\alpha$ as \[ x^8+b_7x^7+b_6x^6+b_5x^5+b_4x^4+b_3x^3+b_2x^2+b_1 x+b_0 \] and where \begin{eqnarray*} b_0 &= &\big(k_0-k_6\big){}^2-\big(k_4-k_7\big){}^2,\\ b_1 & = & 2\big(\big(6 k_0-k_1+k_2+k_4-6 k_6\big) \big(k_0-k_6\big)\\ && \quad -\big(k_1+k_3+5 k_4+k_5-6 k_7\big) \big(k_4-k_7\big)-\big(k_4-k_7\big){}^2\big),\\ b_2 & =& 2 \big(-1+11 k_0-4 k_1+3 k_2+4 k_4-12 k_6\big) \big(k_0-k_6\big)\\ && \quad -\big(k_1+k_3+5 k_4+k_5-6 k_7\big){}^2-2 \big(1+2 k_1+3 k_3+8 k_4+4 k_5-12 k_7\big) \big(k_4-k_7\big)\\ && \quad -4 \big(k_1+k_3+5 k_4+k_5-6 k_7\big) \big(k_4-k_7\big)\\ && \quad +\big(6 k_0-k_1+k_2+k_4-6 k_6\big){}^2,\\ b_3 & = & 2\big(73 k_0^2+k_1^2-k_2+3 k_2^2-k_3\\ && \quad-4 k_3^2-9 k_4+7 k_2 k_4-41 k_3 k_4-81 k_4^2-k_5\\ && \quad -9 k_3 k_5-50 k_4 k_5-5 k_5^2+k_0 \big(-8-39 k_1+31 k_2+39 k_4-153 k_6\big)+8 k_6\\ && \quad-32 k_2 k_6-40 k_4 k_6+80 k_6^2-k_1 \big(7 k_2+7 k_3+42 k_4+8 k_5-40 k_6-42 k_7\big)+9 k_7\\ && \quad+50 k_3 k_7+220 k_4 k_7+60 k_5 k_7-140 k_7^2\big),\\ b_4& = & -2 k_0+2 \big(-2+7 k_0-4 k_1+2 k_2+4 k_4-8 k_6\big) \big(6 k_0-k_1+k_2+k_4-6 k_6\big)+2 k_6\\ && \quad+\big(1-11 k_0+4 k_1-3 k_2-4 k_4+12 k_6\big){}^2\\ && \quad-4 \big(\big(1+2 k_1+3 k_3+8 k_4+4 k_5-12 k_7\big) \big(k_1+k_3+5 k_4+k_5-6 k_7\big)\\ && \quad+\big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big) \big(k_4-k_7\big)\big)\\ && \quad-\big(1+2 k_1+3 k_3+8 k_4+4 k_5-12 k_7\big){}^2\\ && \quad-2 \big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big) \big(k_1+k_3+5 k_4+k_5-6 k_7\big) ,\\ b_5 & = & 2 \big(\big(-1+11 k_0-4 k_1+3 k_2+4 k_4-12 k_6\big) \big(-2+7 k_0-4 k_1+2
k_2+4 k_4-8 k_6\big) \\ && \quad -6 k_0+k_1-k_2-k_4+6 k_6-\big(1+2 k_1+3 k_3+8 k_4+4 k_5-12 k_7\big){}^2\\ && \quad-\big(1+2 k_1+3 k_3+8 k_4+4 k_5-12 k_7\big) \big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big)\\ && \quad-2 \big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big) \big(k_1+k_3+5 k_4+k_5-6 k_7\big)\big),\\ b_6&=& 2-22 k_0+8 k_1-6 k_2-8 k_4+24 k_6+\big(2-7 k_0+4 k_1-2 k_2-4 k_4+8 k_6\big){}^2\\ && \quad-4 \big(1+2 k_1+3 k_3+8 k_4+4 k_5-12 k_7\big) \big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big)\\ && \quad-\big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big){}^2,\\ b_7 & = & 2 \big(2-7 k_0+4 k_1-2 k_2-4 k_4+8 k_6-\big(1+2 k_1+2 k_3+4 k_4+4 k_5-8 k_7\big){}^2\big). \end{eqnarray*} Notice that $b_0$ is a difference of squares. Since $\pm 2$ is not a difference of two squares, $b_0\neq \pm 2$, and as $\alpha$ is a unit it follows that $b_0=\pm 1$; the following lemma then applies. \begin{lemma} If $a^2-b^2=\pm 1$, then \[ (a,b)\in \{(0,\pm1),(\pm 1,0) \}\] \end{lemma} \noindent{\bf Proof.} $a^2-b^2= (a+b)(a-b)=\pm 1$, so both factors are $\pm 1$. If $a-b=1$, then $a=1+b$ and $1+2b=\pm 1$, so either $b=0$ and $a=1$, or $b=-1$ and $a=0$. If $a-b=-1$, then $a=-1+b$ and $-1+2b=\pm 1$, so either $b=1$ and $a=0$, or $b=0$ and $a=-1$. \hfill $\Box$ \medskip We can now make the following assertions about the coefficients of the minimal polynomial of $\alpha$. \begin{lemma} Let \[ \alpha ^8+b_7\alpha ^7+b_6\alpha ^6+b_5\alpha ^5+b_4\alpha ^4+b_3 \alpha ^3+b_2\alpha ^2+b_1\alpha +b_0 = 0 \] be the minimal polynomial for $\alpha$. Then \begin{itemize} \item $b_0=\pm 1$. \item $b_7$, $b_5$, $b_3$ and $b_1$ are even. \item $b_6$ has the same parity as $\frac{1}{2} b_7$. \item $b_4$ has the opposite parity to $b_6$. \end{itemize} \end{lemma} \subsection{Degree $8$ coefficient bounds.} We want to use the information in the above lemma to prove that there are no such polynomials, as we did in degree $7$. As earlier, the coefficients in the search are determined by the root structure, but this time the search for $\lambda$ is quite a bit larger.
However we can use the excluded disk to improve the search bounds, since $4.49 \leq |\alpha| \leq 4+2\sqrt{2}$ and $\alpha+\bar\alpha>8.9$, recalling that all the real roots of the minimal polynomial for $\alpha$ lie in $[-1,1]$. A few elementary estimates then brought the search runtime down to about a day. We found no candidates. \subsection{The case of degree $9$} In light of our first calculation, the following lemma suffices to deal with this case. \begin{lemma} Let $\alpha$ be a complex root of a degree $9$ polynomial such that $\lambda=\sqrt{2\alpha+1}$ is an algebraic integer with $\IQ(\lambda)=\IQ(\alpha)$. Then $\alpha$ is not a unit. \end{lemma} The proof is simply a very long calculation. We repeatedly apply the process above to determine the parity of the coefficients of the minimal polynomial for $\lambda$, \[\lambda ^9+a_8\lambda ^8+a_7\lambda ^7+a_6\lambda ^6+a_5\lambda ^5+a_4\lambda ^4+a_3 \lambda ^3+a_2\lambda ^2+a_1\lambda +a_0.\] As before we find a multiple of the minimal polynomial for $\alpha$, \noindent {\tiny \begin{tabular}{|l|} \hline $ \left(1+a_1+a_3+a_5+a_7\right){}^2-\left(a_0+a_2+a_4+a_6+a_8\right){}^2 $ \\ \hline $ \left(2 \left(1+a_1+a_3+a_5+a_7\right){}^2+4 \left(1+a_1+a_3+a_5+a_7\right) \left(4+a_3+2 a_5+3 a_7\right)\right. $\\ $\left.-4 \left(a_0+a_2+a_4+a_6+a_8\right) \left(a_2+2 a_4+3 a_6+4 a_8\right)\right) \alpha $ \\ \hline $\left(8 \left(1+a_1+a_3+a_5+a_7\right) \left(6+a_5+3 a_7\right)+8 \left(1+a_1+a_3+a_5+a_7\right) \left(4+a_3+2 a_5+3 a_7\right) \right.$ \\ $\left. +4 \left(4+a_3+2 a_5+3 a_7\right){}^2-4 \left(a_2+2 a_4+3 a_6+4 a_8\right){}^2-8 \left(a_0+a_2+a_4+a_6+a_8\right) \left(a_4+3 a_6+6 a_8\right)\right) \alpha ^2$ \\ \hline $8 \left(84+a_3^2+70 a_5+10 a_5^2+112 a_7+40 a_5 a_7+35 a_7^2+2 a_1 \left(10+a_5+4 a_7\right)+4 a_3 \left(2 a_5+5 \left(2+a_7\right)\right) \right.
$ \\ $\left.-2 \left(a_0+a_2+a_4+a_6+a_8\right) \left(a_6+4 a_8\right)-2 \left(a_2+2 a_4+3 a_6+4 a_8\right) \left(a_4+3 a_6+6 a_8\right)\right) \alpha ^3$ \\ \hline $16 \left(126-a_4^2+70 a_5+5 a_5^2-2 a_2 a_6-10 a_4 a_6-15 a_6^2+140 a_7+30 a_5 a_7+35 a_7^2+2 a_1 \left(5+a_7\right)\right.$\\ $\left. +2 a_3 \left(15+a_5+5 a_7\right)-2 a_0 a_8-10 a_2 a_8-30 a_4 a_8-70 a_6 a_8-70 a_8^2\right) \alpha ^4$\\ \hline $32 \left(126+2 a_1+42 a_5+a_5^2-2 a_4 a_6-6 a_6^2+112 a_7+12 a_5 a_7+21 a_7^2+2 a_3 \left(6+a_7\right)-2 a_2 a_8-12 a_4 a_8-42 a_6 a_8-56 a_8^2\right) \alpha ^5$\\ \hline $64 \left(84+2 a_3-a_6^2+56 a_7+7 a_7^2+2 a_5 \left(7+a_7\right)-2 a_4 a_8-14 a_6 a_8-28 a_8^2\right) \alpha ^6$\\ \hline $128 \left(36+2 a_5+16 a_7+a_7^2-2 a_6 a_8-8 a_8^2\right) \alpha ^7$\\ \hline $256 \left(9+2 a_7-a_8^2\right) \alpha ^8 + 512\alpha ^9$ \\ \hline \end{tabular} } \bigskip We see \begin{itemize} \item $ 9+2 a_7-a_8^2$ even, so \underline{$a_8$ is odd.} \item $2 a_5+a_7^2-2 a_6$ divisible by $4$, so \underline{$a_7$ is even} and \underline{$a_5-a_6$ is even.} \item $2 a_3-2 a_4+6 a_5-6 a_6-a_6^2+4 a_5 k_7+4 k_7^2-4 a_4 k_8-4 a_6 k_8$ divisible by $8$, so \underline{$a_5$ and $a_6$ are even}. Further, $a_3-a_4+2 k_5-2 k_6-2 k_6^2+2 k_7^2-2 a_4 k_8$ is divisible by $4$. \item $3+a_1-a_2+6 a_3-6 a_4+2 k_5+2 k_5^2-2 k_6-2 a_4 k_6-4 k_6^2+2 a_3 k_7+2 k_7^2-2 a_2 k_8-4 a_4 k_8-4 k_6 k_8$ is divisible by $8$. So $a_1-a_2$ is odd. \item $24-2 a_0+10 a_1-10 a_2+30 a_3-30 a_4-a_4^2+12 k_5+4 a_3 k_5+20 k_5^2-12 k_6-4 a_2 k_6-20 a_4 k_6-28 k_6^2-8 k_7+4 a_1 k_7+20 a_3 k_7+24 k_5 k_7+12 k_7^2-24 k_8-4 a_0 k_8-20 a_2 k_8-28 a_4 k_8-24k_6 k_8-24 k_8^2$ is divisible by $32$, so \underline{$a_4$ is even}, and $a_0+a_3$ is odd. 
\item $84+a_3^2+140 k_5+40 k_5^2+224 k_7+160 k_5 k_7+140 k_7^2+4 a_1 \left(5+k_5+4 k_7\right)+8 a_3 \left(5+2 k_5+5 k_7\right)-4 \left(1+a_0+a_2+2 k_4+2 k_6+2 k_8\right) \left(2+k_6+4 k_8\right)-4 \left(3+k_4+3 k_6+6 k_8\right) \left(4+a_2+4 k_4+6 k_6+8 k_8\right)$ is divisible by $64$ and so \underline{$a_3$ is even, and $a_0$ is odd.} \item $-4+20 a_1-20 a_2-a_2^2-24 k_0+60 k_3+4 a_1 k_3+12 k_3^2-64 k_4-12 a_2 k_4-8 k_0 k_4-24 k_4^2+84 k_5+12 a_1 k_5+48 k_3 k_5+40 k_5^2-96 k_6-24 a_2 k_6-24 k_0 k_6-80 k_4 k_6-60 k_6^2+112 k_7+24 a_1 k_7+80 k_3 k_7+120 k_5 k_7+84 k_7^2-136 k_8-40 a_2 k_8-48 k_0 k_8-120 k_4 k_8-168 k_6 k_8-112 k_8^2$ is divisible by $128$, so \underline{$a_2$ is even and $a_1$ is odd}. \end{itemize} Subsequently we reduce the coefficients modulo $2$ to see that \begin{itemize} \item $k_1+k_2$ is even \item $k_1+k_3$ is even \item $k_1+k_4$ is odd \item $k_1+k_5$ is odd \item $k_1+k_6$ is even \item $k_1+ k_7$ is even \item $k_0+k_1+ k_8$ is even \end{itemize} Thus we substitute \[ k_2=2m_2+k_1;k_3=2m_3+k_1;k_4=2m_4+1+k_1;k_5=2m_5+1+k_1; \]\[k_6=2m_6+k_1;k_7=2m_7+k_1;k_8=2m_8+1+k_1;k_0=2m_0+1\] We expand out the coefficients of the polynomials once again to see \begin{itemize} \item $1+m_2+m_5+ m_6$ is even \item $1+ m_3+ m_5+ m_7$ is even \item $k_1+m_0+m_4+ m_8$ is even \item $k_1+m_3+m_7$ is even \item $k_1+m_2+m_6$ is even \end{itemize} We then make substitutions once again of the sort $m_8=2n_8-k_1-m_0-m_4$, $m_7=2n_7-m_3-k_1$, and $m_6=2n_6-m_2-k_1$ and simplify the coefficients once again checking parity. 
We find that \begin{itemize} \item $k_1+n_5+n_6+n_7+n_8$ is even \item $1+n_6+n_8$ is even \item $1+n_6+n_8+m_2+ m_3+ m_4$ is even \end{itemize} and make the substitutions \[ n_7= 2r_7-\left(1+k_1+ n_5\right); n_8= 2r_8-\left(1+ n_6\right); \] \[ m_4=2 n_4-\left(1+n_6+n_8+m_2+ m_3\right); n_6=\left(2r_6-\left(1+ 2r_8-\left(k_1+n_5\right)\right)\right) \] At this point every coefficient is an integral multiple of $512$ except the constant term, which is $256 \left(r_7^2-r_8^2\right)$. However $\pm 2$ is not a difference of squares, so the constant term is at least $1024$ in absolute value, and $\alpha$ is not a unit. \medskip We remark that a similar situation seems to arise in all odd degrees; we also checked degrees $3$, $5$ and $7$. \section{The cases $p=2$ and $p=4$.} These cases are quite different from our previous searches in that much of the work has been done in earlier work of Flammang and Rhin \cite{FR}. Motivated by a question of ours arising in the identification of all arithmetic Kleinian groups generated by an elliptic of order $2$ and an elliptic of order $3$, they proved the following. \begin{theorem} There are $15909$ algebraic integers $\alpha$ whose minimal polynomial has exactly one complex conjugate pair of roots in the ellipse \[ {\cal E} = \{z\in \IC: |z+1|+|z-2|\leq 5 \} \] and such that all the real roots of this polynomial lie in the interval $[-1,2]$. The degree of such an $\alpha$ is at most $10$, and there are \begin{itemize} \item $22$ polynomials in degree $2$, \item $206$ polynomials in degree $3$, \item $918$ polynomials in degree $4$, \item $2524$ polynomials in degree $5$, \item $4401$ polynomials in degree $6$, \item $4260$ polynomials in degree $7$, \item $2792$ polynomials in degree $8$, \item $600$ polynomials in degree $9$, and \item $186$ polynomials in degree $10$. \end{itemize} \end{theorem} In order to use this list there are a few things we must establish.
First, that the exterior of the moduli space of groups freely generated by elements of order $2$ and $4$, $\IC\setminus {\cal M}_{2,4}$, lies within this ellipse. In fact it does not; however, this space has a $\IZ_2$ symmetry in the line $\{\Re e(z)=-1\}$ which respects discreteness (however the symmetric pair of discrete groups are not necessarily isomorphic --- only having a common index two subgroup). Thus we need only show that $\{z:\Re e(z)\geq -1\} \setminus {\cal M}_{2,4}$ lies in the ellipse. Again it does not, but an integer translate of it by $1$ does! This observation was made by Zhang in her PhD thesis \cite{Zhang}. At that time we did not have the technology to decide whether a group had finite co-volume or not, though Cooper's PhD thesis \cite{Cooper} implemented a modified version of Weeks' SnapPea code to decide this for many of them. We additionally searched these spaces, using Flammang and Rhin's degree bounds, just to check that we had found all the possibilities. \medskip A rough description of the space $\IC\setminus {\cal M}_{2,4}$ can be obtained from the roots of the Farey polynomials, all of which lie in it. Such a description is illustrated below. \scalebox{0.75}{\includegraphics[viewport=50 460 470 780]{First44}}\\ {\bf Figure 12.} {\em The space $\IC\setminus {\cal M}_{2,4}$ found by roots of Farey polynomials. Notice the symmetry in $\Re e(z)=-1$.} \medskip We will now give a provable description of this space in order to get reasonable degree bounds for $\alpha=\gamma+1$, to limit the actual number of polynomials we have to consider. As previously, we enumerate the ``first'' $129$ Farey words. We compute the pleating rays corresponding to the following $14$ slopes.
\[ F_{rat}=\{\frac{1}{2},\frac{2}{3},\frac{3}{4},\frac{3}{5},\frac{4}{5},\frac{5}{6},\frac{4}{7},\frac{5}{7},\frac{6}{7},\frac{5}{8},\frac{7}{8},\frac{5}{9},\frac{7}{9},\frac{8}{9}\}.\] The associated Farey polynomials $p_{r/s}(z)$ are monic, with integer coefficients, and have degree no more than $9$. The pleating ray is a particular branch of $p_{r/s}^{-1}((-\infty,-2])$, namely the branch making angle $r\pi/s$ at $\infty$ with the positive real axis. We then compute $D_{r/s}$, the corresponding branch of $p_{r/s}^{-1}(\{\Re e(z)\leq -2\})$, the $r/s$ pleating ray neighbourhood, and the results of \cite{EMS1,EMS2} show that for all $r/s$, $D_{r/s}\subset \overline{{\cal M}_{2,4}}$, touching $\partial {\cal M}_{2,4}$ at a single point -- the $r/s$-cusp group. This is illustrated below. \scalebox{0.5}{\includegraphics[angle=-90,viewport=-40 50 600 620]{Region44}}\\ \noindent {\bf Figure 14.} {\em The space $\IC\setminus {\cal M}_{2,4}$ (grey) and pleating ray neighbourhoods $D_{r/s}$. The grey region is found by a conjectural description of the space of groups free on two generators given by Bowditch, \cite{Bowditch}. The bounded paraboloid shaped regions are proved to lie completely in ${\cal M}_{2,4}$.} \medskip This gives us an approximation to the set ${\cal M}_{2,4}$. Should we find a value $\gamma\in \{z:\Re e(z)>-1,\Im m(z)>0\}$ which does not lie in the region \[ \IC \setminus \Big\{ \{-1 < \Re e(z)<2 \}\cup \bigcup_{\frac{r}{s}\in F_{rat}} D_{r/s} \Big\}, \] then we know that the group generated by elements of order two and four with this particular commutator value is free on the two generators. \medskip We remark that it is quite straightforward to test if a point is in some $D_{r/s}$. We simply evaluate the Farey polynomial $p_{r/s}$ on it to see whether the image has real part at most $-2$ (we can guess the particular $r/s$ by inspection).
Then we check that this point lies in the same branch of $p_{r/s}^{-1}(\{\Re e(z)\leq -2\})$ as that determined by the pleating ray, by an elementary path lifting which we can also implement computationally. \bigskip \subsection{The symmetry of the space ${\cal M}_{2,4}$} First we describe the symmetry of the space ${\cal M}_{2,4}$. \begin{theorem} Let $\langle f,g\rangle$ be a Kleinian group generated by elements $g$ of order $2$ and $f$ of order $4$ and set $\gamma=\gamma(f,g)$. Then there is $\langle f,h\rangle$, a Kleinian group generated by elements of order $2$ and $4$ and such that $\gamma(f,h)=-2-\gamma$. The groups $\langle f,g\rangle$ and $\langle f,h\rangle$ have a common index two subgroup $\langle f,gfg^{-1}\rangle$, and so both are simultaneously lattices (or not). \end{theorem} \noindent{\bf Proof.} The axis of $g$ bisects (and is perpendicular to) the common perpendicular $\ell$ between the axes of $f$ and $gfg^{-1}$. Let $h$ be the elliptic of order two whose axis is perpendicular to both that of $g$ and $\ell$. Then evidently $hfh^{-1}$ is an element of the same order as $f$, its axis is the axis of $gfg^{-1}$, and it has the same trace; thus $hfh^{-1}=(gfg^{-1})^{\pm1}$. We calculate that $\gamma(f,hfh^{-1})=\gamma(f,h)(\gamma(f,h)+2) = \gamma(f,g)(\gamma(f,g)+2)$, and a moment's analysis shows $\gamma(f,g)\neq \gamma(f,h)$, so $\gamma(f,g)=-2-\gamma(f,h)$. The statement regarding index two follows immediately by looking at the coset decomposition, as $g$ and $h$ both have order two. \hfill $\Box$ The figure below clearly indicates, and it is not too difficult to prove using the combinatorics of the isometric circles to construct a fundamental domain, that \begin{equation} \IC \setminus {\cal M}_{2,4} \subset \{-2+{\cal E}\} \cup \{-1+{\cal E}\} \end{equation} \scalebox{0.73}{\includegraphics[viewport=40 420 770 750]{24Covered}}\\ Notice that the interval $[-2,0]$ lies in both $\{-2+{\cal E}\}$ and $\{-1+{\cal E}\}$.
Thus we find that the values of $\gamma$ that we seek are those points in $-1+{\cal E}$ whose minimal polynomial has real roots which actually lie inside the interval $[-2,0]$ (recall they do lie in $[-2,1]$). Once we have this list of points we go through the process of deciding if they are in or out of $\IC \setminus {\cal M}_{2,4}$. \medskip Now using this symmetry we may assume $0 \leq \Re e(\alpha) \leq 3$. We also have the elementary bounds $|\alpha|\leq 3$, $|\alpha+1|\leq 4$ and $|\alpha-1|\leq 2$. Now we obtain degree bounds as earlier, following (\ref{alphabounds}). Let \[ q_\alpha(z)=z^n+a_{n-1}z^{n-1}+\cdots+a_1z+a_0 = (z-\alpha)(z-\bar\alpha)(z-r_1) \cdots(z-r_{n-2}) \] be the minimal polynomial for $\alpha$. Then, as $r_i\in [-1,1]$, \begin{eqnarray} |q_\alpha(-1)||q_\alpha(0)||q_\alpha(1)| &=& |\alpha|^2|\alpha^2-1|^2\prod_{i=1}^{n-2} |r_i| |1-r_{i}^2|\nonumber \\ & \leq & 576 (0.3849)^{n-2}. \end{eqnarray} as $\max \{|x(x^2-1)|:x\in [-1,1]\} = 0.3849...$. As the left hand side here is a nonzero integer, hence at least one, we deduce that $n\leq 8$. Indeed if $q_\alpha(0)\neq \pm 1$, then $n\leq 7$. We therefore run through the $12331$ polynomials in the list above, along with the degree $8$ polynomials with constant coefficient $\pm 1$. We are only interested in those whose real roots lie in $[-1,1]$ and whose complex root has real part of absolute value no more than $\sqrt{3}$.
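The final numerical step here is elementary; as a check (Python, standard library only):

```python
import math

# max of |x(x^2 - 1)| on [-1, 1], attained at x = +-1/sqrt(3):
M = 2 / (3 * math.sqrt(3))                # = 0.3849...

# |q(-1)| |q(0)| |q(1)| is a nonzero integer, hence at least 1, while the
# root bounds give |q(-1)| |q(0)| |q(1)| <= 576 M^(n-2).  Therefore
# n - 2 <= log 576 / log(1/M).
bound = math.log(576) / math.log(1 / M)   # about 6.65, so n <= 8
```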
\subsection{Degree $2$} There are $12$ possible values, of which $6$ lie in the region $\IC \setminus \Big\{ \{-1 < \Re e(z)<2 \}\cup \bigcup_{\frac{r}{s}\in F_{rat}} D_{r/s} \Big\}$.\\ \scalebox{0.5}{\includegraphics[angle=-90,viewport=40 50 600 660]{Degree2Points}}\\ \subsection{Degree $3$} There are $175$ possible values, of which $13$ lie in the region $\IC \setminus \Big\{ \{-1 < \Re e(z)<2 \}\cup \bigcup_{\frac{r}{s}\in F_{rat}} D_{r/s} \Big\}$.\\ \scalebox{0.5}{\includegraphics[angle=-90,viewport=40 50 600 660]{Degree3Points}}\\ \subsection{Degree $4$} With rather coarse search bounds \[ -8\leq a \leq 1, -12\leq b \leq 22, -23\leq c \leq 23, -8\leq d \leq 8 \] we found $572$ possible values, of which $23$ lie in the region $\IC \setminus \Big\{ \{-1 < \Re e(z)<2 \}\cup \bigcup_{\frac{r}{s}\in F_{rat}} D_{r/s} \Big\}$. \\ \scalebox{0.75}{\includegraphics[viewport=50 500 750 800]{24Degree4}}\\ We then checked irreducibility of the polynomial and this left $13$ possibilities in the region $\{\Re e(z)\geq -1\}$ and one further point was eliminated by adding another pleating ray neighbourhood. This leaves the list we have presented. \subsection{The case $p=4$.} In this case we have a group generated by two elements of order $4$. The following theorem shows that if $\langle f,g\rangle$ is such a group, then there is a $\IZ_2$ extension to a group $\langle f,\Phi \rangle$ generated by elliptics of orders $2$ and $4$. Of course if $\langle f,g\rangle$ is a discrete subgroup of an arithmetic Kleinian group, then so is $\langle f,\Phi \rangle$. The only issue is the calculation of the relevant commutators, the arithmetic data, and identifying the slope. \begin{theorem} Let $\langle f,g\rangle$, generated by two elements of order $4$, be a discrete subgroup of an arithmetic Kleinian group and $\gamma=\gamma(f,g)$ with $\gamma$ complex. Suppose $\tilde{\gamma}(\tilde{\gamma}+2)=\gamma$. 
Then there is $\Phi$ of order two such that $\langle f,\Phi\rangle$ is a discrete subgroup of an arithmetic Kleinian group and $\tilde{\gamma}=\gamma(f,\Phi)$. \end{theorem} \noindent{\bf Proof.} One of the two elliptics of order two whose axis bisects the common perpendicular of the axes of $f$ and of $g$, and which interchanges these axes, is the element $\Phi$ we seek. Then $\Phi f\Phi^{-1}=g^{\pm 1}$ and $\langle f,\Phi\rangle$ is discrete as it contains a discrete group of index two. Similarly these groups are commensurable, and have the same invariant trace field. Arithmeticity is preserved by finite extensions. To see this more concretely, the factorisation condition must hold for $\gamma$; that condition is precisely that $\IQ(\tilde{\gamma})=\IQ(\gamma)$. We have already seen the trace identity \[ \gamma=\gamma(f,g)=\gamma(f,\Phi f\Phi)=\gamma(f,\Phi)(\gamma(f,\Phi)+2) =\tilde{\gamma}(\tilde{\gamma}+2). \] This completes the proof. \hfill $\Box$ \medskip This result now shows that we can find all the groups in the case $p=4$ from our analysis of the previous case $p=2$. \section{Finding the groups.} The recent solution of Agol's conjecture \cite{ALSS,AOPSY}, identifying all the Kleinian groups generated by two parabolic elements as two bridge knot and link groups and associated Heckoid groups, suggests a method for identifying our groups. We conjecture that the same is true for groups generated by two elements of finite order. This suggests the following approach to identifying these groups. Let $\Gamma=\langle f,g\rangle$ with $o(f)=4$ and $o(g)=p$. We only sketch the ideas here. Let $m/n$ and $r/s$ be two rational numbers such that $m$, $n$ (resp. $r$, $s$) are coprime. Define an operation $\oplus$ by $m/n\oplus r/s=(m + r)/(n + s)$; we call $\oplus$ Farey addition. Suppose $m/n < r/s$. Then $m/n < (m + r)/(n + s) < r/s$.
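Concretely, Farey addition is the mediant operation on lowest-terms fractions; a minimal illustration (Python, standard library; the function name is ours):

```python
from fractions import Fraction

def farey_add(a, b):
    """Farey addition (mediant): m/n (+) r/s = (m + r)/(n + s).

    Fraction keeps its arguments in lowest terms, matching the
    coprimality assumption on m, n and r, s in the text."""
    return Fraction(a.numerator + b.numerator,
                    a.denominator + b.denominator)

m = farey_add(Fraction(1, 2), Fraction(1, 1))   # the mediant 2/3
```

and indeed $1/2 < 2/3 < 1/1$, illustrating the betweenness property above.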
These Farey fractions enumerate the simple closed curves on the four times punctured sphere, and hence words in the free group of rank two. These words correspond to pleating rays. As examples: we start with a pair of rationals and associated rational words (in generators $X$ and $Y$, with $x=X^{-1}$ and $y=Y^{-1}$): \[ \{1/2, 1/1\} \mapsto \{\{X, Y, x, y\}, \{X, y\}\}\] and inductively create Farey words by addition: \begin{eqnarray*} \left\{\frac{1}{2},\frac{2}{3},1\right\} &\mapsto& \{\{X,Y,x,y\},\{X,Y,x,Y,X,y\},\{X,y\}\} \\ \left\{\frac{1}{2},\frac{3}{5},\frac{2}{3},\frac{3}{4},1\right\}&\mapsto&\{\{X,Y,x,y\},\{X,Y,x,y,X,y,x,Y,X,y\},\\ && \{X,Y,x,Y,X,y\},\{X,Y,x,Y,x,y,X,y\},\{X,y\}\} \end{eqnarray*} Here is some Mathematica code which does this (from Zhang \cite{Zhang}). \medskip {\tiny \noindent L1 = \{1/2, 1/1\};\\ L2 = \{\{X, Y, x, y\}, \{X, y\}\}; (*list of rational words*)\\ NF = 3; (*determines how many refinement passes to make*)\\ For[j = 1, j$\leq$NF, j++, (*each pass inserts the mediant of every adjacent pair*)\\ d = Length[L1];\\ For[i = 1, i$\leq$2 d - 3, i = i + 2,\\ N1 = (Numerator[L1[[i]]] + Numerator[L1[[i + 1]]])/(Denominator[L1[[i]]] + Denominator[L1[[i + 1]]]); (*Farey addition*)\\ L1 = Insert[L1, N1, i + 1]; (*orders the fractions*)\\ T2 = Join[L2[[i]], L2[[i + 1]]]; d1 = Denominator[N1] + 1;\\ R = Switch[T2[[d1]], x, X, X, x, y, Y, Y, y];\\ T2 = ReplacePart[T2, R, d1]; L2 = Insert[L2, T2, i + 1]]]} \medskip Each entry in the second list corresponds to a word $W_{r/s}$.
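The same construction, ported to Python for readers without Mathematica (our port; lower case letters denote inverse generators, as in the text):

```python
from fractions import Fraction

FLIP = {"X": "x", "x": "X", "Y": "y", "y": "Y"}  # invert a generator

def refine(fracs, words):
    """One pass of the loop above: between each adjacent pair of fractions
    insert their mediant, whose word is the concatenation of the two
    neighbouring words with the letter in position denominator + 1
    (1-based) inverted."""
    out_f, out_w = [fracs[0]], [words[0]]
    for f, w in zip(fracs[1:], words[1:]):
        med = Fraction(out_f[-1].numerator + f.numerator,
                       out_f[-1].denominator + f.denominator)
        t = out_w[-1] + w                 # Join[L2[[i]], L2[[i + 1]]]
        k = med.denominator               # 0-based index of 1-based position d + 1
        t = t[:k] + [FLIP[t[k]]] + t[k + 1:]
        out_f += [med, f]
        out_w += [t, w]
    return out_f, out_w

fracs = [Fraction(1, 2), Fraction(1, 1)]
words = [list("XYxy"), list("Xy")]
fracs, words = refine(fracs, words)       # inserts 2/3 with word X Y x Y X y
```

Two passes starting from $\{1/2,1/1\}$ reproduce the five words listed above for $\{1/2,3/5,2/3,3/4,1\}$.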
We take a representation of a group generated by elliptics of order $4$ and $p$ by assigning \[ X=\left(\begin{array}{cc}\frac{1+i}{\sqrt{2}}& 1 \\ 0 &\frac{1-i}{\sqrt{2}} \end{array} \right), \;\;\;\;\; Y=\left(\begin{array}{cc}\cos(\frac{\pi}{p})+i\sin(\frac{\pi}{p}) & 0 \\ -\mu &\cos(\frac{\pi}{p})-i\sin(\frac{\pi}{p}) \end{array} \right).\] Then \begin{equation} p_{r/s}(\mu)={\rm Trace}(W_{r/s}).\end{equation} Some examples (with $p=2$ and the first $9$ fractions): \begin{center} \begin{tabular}{|c|c|} \hline Farey & polynomial \\ \hline $\frac{1}{2}$ & $2-2 \sqrt{2} \mu +\mu ^2$ \\ \hline $\frac{4}{7}$ & $ -\sqrt{2}+25 \mu -58 \sqrt{2} \mu ^2+118 \mu ^3-65 \sqrt{2} \mu ^4+41 \mu ^5-7 \sqrt{2} \mu ^6+\mu ^7 $\\ \hline $\frac{3}{5}$ & $\sqrt{2}-13 \mu +17 \sqrt{2} \mu ^2-19 \mu ^3+5 \sqrt{2} \mu ^4-\mu ^5$ \\ \hline $\frac{5}{8}$ & $2-16 \sqrt{2} \mu +104 \mu ^2-144 \sqrt{2} \mu ^3+220 \mu ^4-100 \sqrt{2} \mu ^5+54 \mu ^6-8 \sqrt{2} \mu ^7+\mu ^8$ \\ \hline $\frac{2}{3}$ & $-\sqrt{2}+5 \mu -3 \sqrt{2} \mu ^2+\mu ^3$ \\ \hline $\frac{5}{7}$ & $\sqrt{2}-17 \mu +36 \sqrt{2} \mu ^2-84 \mu ^3+55 \sqrt{2} \mu ^4-39 \mu ^5+7 \sqrt{2} \mu ^6-\mu ^7$ \\ \hline $\frac{3}{4}$ & $2-4 \sqrt{2} \mu +10 \mu ^2-4 \sqrt{2} \mu ^3+\mu ^4$ \\ \hline $\frac{4}{5}$ & $-\sqrt{2}+5 \mu -11 \sqrt{2} \mu ^2+17 \mu ^3-5 \sqrt{2} \mu ^4+\mu ^5$ \\ \hline $\frac{1}{1}$ & $\sqrt{2}-\mu$ \\ \hline \end{tabular} \\ \end{center} Notice that the degree of the polynomial is the denominator of the fraction, and that $W_{1/2}=XYxy=[X,y]$, so $p_{1/2}(\mu)-2=\gamma(X,Y)$. To identify our groups we used the following procedure. From $p$ we construct a list of about $100$ fractions and polynomials $p_{r/s}(\mu)$. From a monic candidate polynomial $z^3 +5z^2 +7z+1$ we identify $\gamma=-2.41964+0.60629i$ and $\mu=\sqrt{2}+\sqrt{2+\gamma }=1.81275 + 0.76061 i$, from $-2 \sqrt{2} \mu +\mu ^2=\gamma$. We then evaluate all these polynomials on $\mu$.
\begin{center} \begin{tabular}{|c|c|} \hline Farey $r/s$& value $P_{r/s}(\mu)$\\ \hline $\frac{1}{2}$ & $-0.419643 - 0.606291 i$ \\ \hline $\frac{4}{7}$ & $-1.42553 + 1.59871 i$\\ \hline $\frac{3}{5}$ & $0.540536 + 1.03152 i$ \\ \hline $\frac{5}{8}$ & $1. - 2.84217 \times 10^{-14} i $ \\ \hline $\frac{2}{3}$ & $1.02696 - 0.838121 i$ \\ \hline $\frac{5}{7}$ & $-4.56052 + 1.21192 i$ \\ \hline $\frac{3}{4}$ & $2.6478 + 1.72143 i$ \\ \hline $\frac{4}{5}$ & $-3.39159 + 2.16591 i$ \\ \hline $\frac{1}{1}$ & $0.398566 - 0.760591 i$ \\ \hline \end{tabular} \\ \end{center} This list suggests that ${\rm Trace}(W_{5/8})=1$, and therefore that $W_{5/8}^3=1$. We deduce that this point $\gamma$ gives us the Heckoid group $(5/8;3)$: the two bridge link of slope $5/8$, with one component surgered by $(4,0)$ Dehn surgery and the other by $(2,0)$ Dehn surgery, and with the tunnelling word $W_{5/8}$ elliptic of order three. This data is presented with round-off error, and we must decide whether the relation is genuine. There are two fortunate things here. First, we know a priori from the Identification Theorem that this group generated by elements of order $2$ and $4$ with $\gamma$ as the commutator value {\em is} discrete; we are just trying to find out what it is. Secondly, $\gamma$ is presented as an algebraic integer, so we can use integer arithmetic to identify the minimal polynomial of $P_{r/s}(\gamma)$ and its roots. We can then assure ourselves that $1$ is indeed a root and that no others are close. Or we could go back and enter $\gamma$ and compute $W_{r/s}$ symbolically; these amount to the same thing of course. In the case at hand $\mu = \sqrt{2}+ \sqrt{2 + \gamma}$ and $\mu$ is a root of \[ 1 - 298 z^2 + 251 z^4 - 200 z^6 + 71 z^8 - 14 z^{10} + z^{12}. \] The minimal polynomial of $P_{5/8}(\mu)$ is then found to be $z-1$. This procedure works well in almost all cases. However some of the cases required searching through a few thousand slopes.
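These verifications are easy to reproduce. The sketch below (Python, standard library only; the Newton iteration for locating the root is our own device, and the square root is the principal branch) checks that $\mu=\sqrt{2}+\sqrt{2+\gamma}$ satisfies $\mu^2-2\sqrt{2}\mu=\gamma$ and is a root of the degree $12$ polynomial quoted above:

```python
import cmath
import math

# Locate the complex root of z^3 + 5 z^2 + 7 z + 1 in the upper half
# plane by Newton's method from a rough starting guess.
g = complex(-2.4, 0.6)
for _ in range(60):
    g -= (g**3 + 5 * g**2 + 7 * g + 1) / (3 * g**2 + 10 * g + 7)

mu = math.sqrt(2) + cmath.sqrt(2 + g)

# p_{1/2}(mu) - 2 = mu^2 - 2 sqrt(2) mu should recover gamma ...
residual = mu**2 - 2 * math.sqrt(2) * mu - g

# ... and mu should satisfy 1 - 298 z^2 + 251 z^4 - 200 z^6 + 71 z^8
# - 14 z^10 + z^12 = 0, a polynomial in u = z^2, evaluated by Horner.
u = mu * mu
val = (((((u - 14) * u + 71) * u - 200) * u + 251) * u - 298) * u + 1
```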
Doing this using integer arithmetic takes a very long time, while using numerical approximation introduces round-off errors, as there are approximately four times as many matrices to multiply together as the denominator of the slope (so sometimes around $500$ matrices). We found that working with $40$ digit precision (and hoping for a result accurate to a couple of decimal places) gave us an answer in a reasonable time, a couple of hours. Of course, once one has a possible answer it is much easier and quicker to verify that it is in fact correct. \begin{thebibliography}{9999} \bibitem{ALSS} S. Aimi, D. Lee, S. Sakai and M. Sakuma, {\em Classification of parabolic generating pairs of Kleinian groups with two parabolic generators}, Rend. Istit. Mat. Univ. Trieste, {\bf 52}, (2020), 477--511. \bibitem{AOPSY} H. Akiyoshi, K. Ohshika, J. Parker, M. Sakuma and H. Yoshida, {\em Classification of non-free Kleinian groups generated by two parabolic transformations}, Trans. Amer. Math. Soc., {\bf 374}, (2021), 1765--1814. \bibitem{ASWY} H. Akiyoshi, M. Sakuma, M. Wada and Y. Yamashita, {\em Punctured torus groups and two bridge knot groups (I)}, Lecture Notes in Mathematics 1909, Springer-Verlag, Berlin Heidelberg, 2007. \bibitem{Be} A. Beardon, {\em The geometry of discrete groups}, Springer--Verlag, 1983. \bibitem{BZ} G. Burde and H. Zieschang, {\em Knots}, de Gruyter Studies in Math., {\bf 5}, Walter de Gruyter, 1985. \bibitem{Bo} A. Borel, {\em Commensurability classes and volumes of hyperbolic three-manifolds}, Ann. Sc. Norm. Pisa, {\bf 8}, (1981), 1--33. \bibitem{Bowditch} B.H. Bowditch, {\em Markoff triples and quasi-Fuchsian groups}, Proc. London Math. Soc. (3), {\bf 77}, (1998), no. 3, 697--736. \bibitem{CDO} H. Cohen, F. Diaz Y Diaz and M. Olivier, {\em Tables of octic fields with a quartic subfield}, preprint. \bibitem{CM} M.D.E. Conder and G.J. Martin, {\em Cusps, triangle groups and hyperbolic $3$-folds}, J. Austral. Math. Soc. Ser. A, {\bf 55}, (1993), 149--182. \bibitem{CMMO} M.D.E. Conder, C.
Maclachlan, G.J. Martin and E.A. O'Brien, {\em $2$--generator arithmetic Kleinian groups III}, Math. Scand., {\bf 90}, (2002), no. 2, 161--179. \bibitem{Cooper} H. Cooper, {\em Discrete Groups and Computational Geometry}, Ph.D. Thesis, Massey University, New Zealand, 2013. \bibitem{Di} F. Diaz Y Diaz, {\em Discriminant minimal et petits discriminants des corps de nombres de degr\'e 7 avec cinq places r\'eelles}, J. London Math. Soc., {\bf 38}, (1988), 33--46. \bibitem{DO} F. Diaz Y Diaz and M. Olivier, {\em Corps imprimitifs de degr\'e 9 de petit discriminant}, preprint. \bibitem{EMS1} A. Elzenaar, G.J. Martin and J. Schillewaert, {\em Approximations to the Riley slice}, arXiv:2111.03230. \bibitem{EMS2} A. Elzenaar, G.J. Martin and J. Schillewaert, {\em One complex dimensional moduli spaces: Keen--Series deformation theory}, Matrix, to appear. \bibitem{FR} V. Flammang and G. Rhin, {\em Algebraic integers whose conjugates all lie in an ellipse}, Math. Comp., {\bf 74}, (2005), no. 252, 2007--2015. \bibitem{GM1} F.W. Gehring and G.J. Martin, {\em Commutators, collars and the geometry of M\"obius groups}, J. d'Analyse Math., {\bf 63}, (1994), 175--219. \bibitem{GMMsemmat} F.W. Gehring, C. Maclachlan and G.J. Martin, {\em On the discreteness of the free product of finite cyclic groups}, Mitt. Math. Sem. Giessen, {\bf 228}, (1996), 9--15. \bibitem{GMM} F.W. Gehring, C. Maclachlan and G.J. Martin, {\em $2$--generator arithmetic Kleinian groups II}, Bull. Lond. Math. Soc., {\bf 30}, (1998), 258--266. \bibitem{GMMR} F.W. Gehring, C. Maclachlan, G.J. Martin and A.W. Reid, {\em Arithmeticity, discreteness and volume}, Trans. Amer. Math. Soc., {\bf 349}, (1997), 3611--3643. \bibitem{HMR} M. Hagelberg, C. Maclachlan and G. Rosenberger, {\em On discrete generalised triangle groups}, Proc. Edinburgh Math. Soc., {\bf 38}, (1995), 397--412. \bibitem{JR} J.W. Jones and D.P.
Roberts, {\em A database of number fields}, LMS Journal of Computation and Mathematics, Cambridge University Press, 2014. \bibitem{KS} L. Keen and C. Series, {\em The Riley slice of Schottky space}, Proc. London Math. Soc. 69 (1994), 72--90. \bibitem{KS2} L. Keen and C. Series, {\em Pleating coordinates for the maskit embedding of the Teichm\"uller space of punctured tori}, Topology, {\bf 32}, (1993), 719--749. \bibitem{KSM} L. Keen, B. Maskit and C. Series, {\em Geometric finiteness and uniqueness for Kleinian groups with circle packing limit sets}, J. Reine Angew. Math., {\bf 436}, (1993), 209--219. \bibitem{MM} C. Maclachlan and G.J. Martin, {\em On 2--generator Arithmetic Kleinian groups.} J. Reine Angew. Math. {\bf 511}, (1999), 95--117 \bibitem{MM2} C. Maclachlan and G.J. Martin, {\em The non-compact generalised arithmetic triangle groups} Topology, {\bf 40}, (2001), 927--944. \bibitem{MM3} C. Maclachlan and G. J. Martin, {\em All Kleinian groups with two elliptic generators whose commutator is elliptic}, Math. Proc. Cambridge Philos. Soc. 135 (2003), no. 3, 413--420. \bibitem{MMpq} C. Maclachlan and G. J. Martin, {\em The $(p, q)$-arithmetic hyperbolic lattices; $p, q \geq 6$}, arXiv:1502.05453v1 \bibitem{MM6} C. Maclachlan and G. J. Martin, {\em The $(6,p)$-arithmetic hyperbolic lattices in dimension $3$}, Pure Appl. Math. Q., {\bf 7}, (2011), Special Issue: In honor of Frederick W. Gehring, Part 2, 365--382. \bibitem{MMM} C. Maclachlan, G.J. Martin and J. McKenzie, {\em Arithmetic 2-generator Kleinian groups with quadratic invariant trace field.} New Zealand J. Math., {\bf 29}, (2000), 203--209. \bibitem{MR} C. Maclachlan and A. W. Reid, {\em The arithmetic of hyperbolic 3-manifolds}, Graduate Texts in Maths. Springer, 2003. \bibitem{MR1} C. Maclachlan and A.W. Reid {\em Commensurability classes of arithmetic Kleinian groups and their Fuchsian subgroups}, Math. Proc. Camb. Phil. Soc., {\bf 102}, (1987), 251 -- 258. \bibitem{MR2} C. Maclachlan and A.W. 
Reid, {\em The arithmetic structure of tetrahedral groups of hyperbolic isometries}, Mathematika, {\bf 36}, (1989), 221--240. \bibitem{Maskit} B. Maskit, {\em Kleinian groups}, Springer-Verlag, 1989. \bibitem{MMarshall} T.H. Marshall and G.J. Martin, {\em Minimal co-volume hyperbolic lattices, II: Simple torsion in a Kleinian group}, Ann. of Math., {\bf 176}, (2012), 261--301. \bibitem{Mull} H.P. Mulholland, {\em The product of $n$ complex homogeneous linear forms}, J. London Math. Soc., {\bf 35}, (1960), 241--250. \bibitem{NR} W.D. Neumann and A.W. Reid, {\em Arithmetic of hyperbolic manifolds}, Topology '90 (Columbus, OH, 1990), 273--310, Ohio State Univ. Math. Res. Inst. Publ., 1, de Gruyter, Berlin, 1992. \bibitem{Od} A.M. Odlyzko, {\em Some analytic estimates of class numbers and discriminants}, Invent. Math., (1975). \bibitem{Rat} J.G. Ratcliffe, {\em Foundations of Hyperbolic Manifolds}, Springer-Verlag, 1994. \bibitem{RH} G. Rhin, {\em Approximants de Pad\'{e} et mesures effectives d'irrationalit\'{e}}, S\'{e}minaire de Th\'{e}orie des Nombres, Paris 1985--1986, Prog. Math., {\bf 71}, (1987), 155--164. \bibitem{Rodgers} C.A. Rogers, {\em The product of $n$ real homogeneous linear forms}, Acta Math., {\bf 82}, (1950), 185--208. \bibitem{Rolfsen} D. Rolfsen, {\em Knots and links}, corrected reprint of the 1976 original, Mathematics Lecture Series, 7, Publish or Perish, Inc., Houston, TX, 1990. \bibitem {CJ} C.J. Smyth, {\em The mean value of totally real algebraic numbers}, Math. Comp., {\bf 42}, (1984), 663--681. \bibitem{Tak} K. Takeuchi, {\em A characterization of arithmetic Fuchsian groups}, J. Math. Soc. Japan, {\bf 27}, (1975), 600--612. \bibitem{S} I. Schur, {\em \"{U}ber die Verteilung der Wurzeln bei gewissen algebraischen Gleichungen mit ganzzahligen Koeffizienten}, Math. Zeit., {\bf 1}, (1918), 377--402. \bibitem{Stark} H.M. Stark, {\em Some effective cases of the Brauer-Siegel Theorem}, Invent. Math., {\bf 23}, (1974), 135--152. \bibitem{Take} K.
Takeuchi, {\em Arithmetic triangle groups}, J. Math. Soc. Japan, {\bf 29}, (1977), 91--106. \bibitem{Vig} M-F. Vigneras, {\em Arithm{\'e}tique des Alg{\`e}bres de Quaternions.} Lecture Notes in Mathematics, No. 800. Springer-Verlag, 1980. \bibitem{Zhang} Q. Zhang, {\em Two Elliptic Generator Kleinian Groups}, PhD Thesis, Massey University, New Zealand, 2010. \end{thebibliography} \medskip \noindent G.J. Martin, Institute for Advanced Study, Massey University, New Zealand.\\ ([email protected])\\ K. Salehi, Institute for Advanced Study, Massey University, New Zealand.\\ Y. Yamishita, Department of Mathematics, Nara Women's University, Japan. \end{document}
2206.14843v1
http://arxiv.org/abs/2206.14843v1
Equal Coverings of Finite Groups
\documentclass[11pt]{article} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{mathrsfs} \usepackage{mathtools} \usepackage{graphicx} \usepackage[a4paper, total={6.5in, 9in}]{geometry} \usepackage{setspace} \usepackage{tikz} \usepackage{array} \usepackage{makecell} \usepackage{longtable} \usepackage[utf8]{inputenc} \renewcommand\theadalign{bc} \renewcommand\theadfont{\bfseries} \DeclareMathOperator{\lcm}{lcm} \title{Senior Thesis - Equal Coverings} \author{Andrew Velasquez-Berroteran} \date{\today} \begin{document} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \newtheorem{definition}{Definition} \newtheorem{proposition}{Proposition} \newtheorem{lemma}{Lemma} \newtheorem{corollary}{Corollary} \newtheorem{example}{Example} \newtheorem{theorem}{Theorem} \newtheorem{note}{Note} \newtheorem{conjecture}{Conjecture} \newtheorem{remark}{Remark} \onehalfspacing \begin{titlepage} \newcommand{\HRule}{\rule{\linewidth}{0.5mm}} \center \textsc{\LARGE Department of Mathematics \& Computer Science}\\[1.5cm] \HRule \\[0.4cm] { \huge \bfseries Equal Coverings of Finite Groups}\\[0.1cm] \HRule \\[2cm] \begin{minipage}{0.5\textwidth} \begin{flushleft} \large \emph{Author:}\\ \textsc{Andrew Velasquez-Berroteran}\\\vspace{20pt} \emph{Committee Members:}\\ \textsc{Tuval Foguel (advisor)}\\ \textsc{Joshua Hiller}\\ \textsc{Salvatore Petrilli}\\ \end{flushleft} \end{minipage}\\[1cm] {\large April 27th, 2022}\\[2cm] \vfill \end{titlepage} \tableofcontents \newpage \begin{abstract} In this thesis, we will explore the nature of when certain finite groups have an equal covering, and when finite groups do not. Not to be confused with the concept of a cover group, a covering of a group is a collection of proper subgroups whose set-theoretic union is the original group. 
We will discuss the history of what has been researched in the topic of coverings, as well as mention some findings in concepts related to equal coverings, such as that of an equal partition of a group. We develop some useful theorems that will aid us in determining whether a finite group has an equal covering or not. In addition, for when a theorem may not be entirely useful in examining a certain group, we will turn to using \texttt{GAP} (Groups, Algorithms, Programming) for computational arguments. \end{abstract} \textbf{Motivation}\vspace{5pt}\\ The question of determining how a group may possess an equal covering is an interesting one since, in addition to wondering whether a group can be the set-theoretic union of some of its proper subgroups, we would also like to see if there is such a collection with all members being the same size. As we will see soon, non-cyclic groups all possess some covering. If, however, we add the restriction mentioned above, then the problem of determining such groups becomes a lot more complicated. We hope to determine, from a selection of finite groups, which ones have an equal covering and which do not. Our plan will first proceed with familiarizing ourselves with useful definitions, such as that of the exponent of a group. Next, we will mention general research within the topic of coverings in hopes that some findings from within the past century may serve us. Afterwards, we will derive our own theorems related to equal coverings of groups. Following that, we will then utilize the theorems presented, as well as \texttt{GAP} for when the theorems alone do not help, in aiding us to determine which groups up to order 60 and which finite (non-cyclic) simple groups have equal coverings. \section{Introduction} The topic of coverings of groups is a relatively novel one, only having been researched within the past 120 years. Equal coverings, on the other hand, have not been researched as much and will be the focus of this paper.
Given a group $G$, if $\Pi$ is a covering of $G$, then it is an equal covering of $G$ if for all $H,K \in \Pi$ we have that $H$ and $K$ are of the same order. Now, one thing that must be clear is that not every group will have a covering, let alone an equal covering. In other words, when we know that $G$ has no covering at all, it is not worthwhile attempting to find an equal covering or determine if it has one. To begin this discussion, we will first take notice of a very important fact that distinguishes groups that have coverings from those that do not. From this point on, unless otherwise specified, we will be concerned with finite coverings of groups, or coverings that consist of finitely many proper subgroups of the original group.\vspace{5pt}\\ If $G$ is a group, let $\sigma(G)$ denote the smallest cardinality of any covering of $G$. If $G$ has no covering, then we simply write $\sigma(G) = \infty$. Below is a relatively simple but powerful well-known theorem. \begin{theorem}[\cite{scorza}]\label{Cyclic} Let $G$ be a group. $G$ has a covering if and only if $G$ is non-cyclic. \end{theorem} \begin{proof} Suppose $G$ has a covering. By definition, this is a collection of proper subgroups in which each element of $G$ must appear in at least one of the subgroups. If $x \in G$, then $x$ lies in some proper subgroup $H$ of $G$, so $\langle x \rangle \leq H < G$ and $G$ cannot be generated by $x$. Hence, $G$ is non-cyclic.\vspace{5pt}\\ Conversely, suppose $G$ is non-cyclic. Consider the collection of subgroups $\Pi = \{ \langle a \rangle: a \in G\}$. Since $G$ is non-cyclic, $\langle a \rangle$ is a proper subgroup of $G$ for all $a \in G$, and since every $a \in G$ lies in $\langle a \rangle$, $\Pi$ is a covering of $G$. \end{proof} \noindent A consequence of Theorem \ref{Cyclic} is that groups of prime order do not have a covering, since all groups of prime order are cyclic.
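Theorem \ref{Cyclic} is easy to check exhaustively for very small groups. The following Python sketch is purely our illustration (the thesis itself relies on \texttt{GAP} for computations, and the function names here are our own): it enumerates the proper subgroups of a group given by its multiplication rule and tests whether their union recovers the whole group.

```python
from itertools import combinations

def proper_subgroups(elems, op):
    """All proper subgroups, found as nonempty proper subsets closed under
    `op` (in a finite group, closure alone guarantees a subgroup)."""
    elems = list(elems)
    subs = []
    for r in range(1, len(elems)):
        for cand in combinations(elems, r):
            s = set(cand)
            if all(op(a, b) in s for a in s for b in s):
                subs.append(frozenset(s))
    return subs

def has_covering(elems, op):
    """True iff the group is the set-theoretic union of its proper
    subgroups; by Scorza's theorem this holds exactly for non-cyclic groups."""
    proper = proper_subgroups(elems, op)
    return set().union(*proper) == set(elems)

# Z_6 is cyclic: no covering exists.
z6 = list(range(6))
add6 = lambda a, b: (a + b) % 6
print(has_covering(z6, add6))    # False

# The Klein four-group V = Z_2 x Z_2 is non-cyclic: it has a covering.
v = [(a, b) for a in (0, 1) for b in (0, 1)]
xor = lambda p, q: ((p[0] + q[0]) % 2, (p[1] + q[1]) % 2)
print(has_covering(v, xor))      # True
```

The brute force is exponential in the group order, so this is only sensible for toy examples; it merely makes Theorem \ref{Cyclic} concrete.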
Since this means we will not take much interest in cyclic groups, we have limited the number of groups to analyze for having an equal covering, even if the proportion of groups is reduced by very little.\vspace{5pt}\\ In this investigation, we will work primarily with finite groups. Say $G$ is a finite non-cyclic group: would there be a way to determine $\sigma(G)$, or at the very least find bounds on $\sigma(G)$? In a moment we will look at what has been researched in the domain of coverings of groups, which will involve some work in answering this question for some groups. But before we do that, we will mention and prove two well-known theorems related to this question. \begin{theorem}\label{Union2} Let $G$ be a non-cyclic group. If $H$ and $K$ are proper subgroups of $G$, then $G$ cannot be the union of $H$ and $K$. In other words, $\sigma(G) \neq 2$ for any non-cyclic group $G$. \end{theorem} \begin{proof} Suppose $H$ and $K$ are proper subgroups such that $G = H \cup K$. Since neither $H \subseteq K$ nor $K \subseteq H$ is possible (otherwise $G$ would equal $K$ or $H$, respectively), there is some $h \in H$ with $h \notin K$, and there is some $k \in K$ with $k \notin H$. Since $hk \in G$, we have $hk \in H$ or $hk \in K$. Observe if $hk \in H$, then since $h^{-1} \in H$, we have $h^{-1}(hk) = (h^{-1}h)k = k \in H$, which is impossible. Similarly, if $hk \in K$ then $(hk)k^{-1} = h(kk^{-1}) = h \in K$. We have a contradiction, so $G$ cannot be the union of $H$ and $K$. \end{proof} \begin{proposition}\label{Bounds} If $G$ is a non-cyclic group of order $n$, then $2 < \sigma(G) \leq n - 1$. \end{proposition} \begin{proof} Suppose $G$ is a non-cyclic group of order $n$. Clearly no covering can consist of one element, since that element would have to contain $G$, which is not possible for a proper subgroup. Next, by Theorem \ref{Union2}, any covering must have more than two proper subgroups of $G$. So, $\sigma(G) > 2$.\\ Now, let $a_1$, $a_2$, ..., $a_{n-1}$ represent all $n-1$ nonidentity elements of $G$.
Since $G$ is non-cyclic, $\langle a_i \rangle < G$ for $1 \leq i \leq n-1$. If $\Pi = \{\langle a_i \rangle:\ 1 \leq i \leq n-1\}$, then $\Pi$ is a collection of at most $n-1$ proper subgroups of $G$. Furthermore, the union of all these subgroups is $G$, so $\Pi$ is a covering of $G$. It follows that $\sigma(G) \leq n-1$. Therefore, $2 < \sigma(G) \leq n-1$. \end{proof} We consider Proposition \ref{Bounds} above just a proposition and not a theorem since, as we will see in the history section, work has been done to find a smaller range for $\sigma(G)$ for different finite groups $G$, as well as specific values for certain groups.\vspace{5pt}\\ As mentioned before, we will only discuss finite groups in this paper, but as a brief mention, the possibility of infinite groups being a union of proper subgroups is a bit mystifying. In regard to Theorem \ref{Cyclic}, there is a reason we needed to state beforehand that the groups we refer to will need to be finite. Take for example the group $\mathbb{Q}^{+}$ under multiplication. While this group is not cyclic, Haber and Rosenfeld \cite{haber1959groups} demonstrated that it is actually impossible for $\mathbb{Q}^+$ to be a union of proper subgroups. So in addition to the overall complexity that comes with dealing with infinite groups, there will be theorems presented in this thesis that may not hold true for infinite groups satisfying the necessary assumptions. \section{History} \subsection*{On the General History of Group Coverings} \indent Before we continue with our discussion of equal coverings, let us take a look at some things that have been researched within the topic of coverings of groups, as well as briefly mention coverings of loops and equal partitions.\vspace{5pt}\\ \indent The first instance of a discussion of representing groups as a general union of proper subgroups appeared in a book by G. Scorza in 1926. Two decades prior, G.A.
Miller had actually touched on the concept of partitions, to which we will dedicate its own subsection later in this section. Although this was the first instance wherein a mathematician posed a problem relevant to the idea of coverings for groups, one source of great motivation for inquiry came from P. Erdös.\vspace{5pt}\\ \indent Erdös is said to be a very influential mathematician, with some arguing he is the most prolific one from the last century. He had done extensive work in various fields of mathematics, especially in the realm of algebra. Scorza had originally come up with the idea of coverings for groups in the 1920s, and less than half a century later, Erdös posed a somewhat related question. The question can ultimately be boiled down to the following \cite{neumann_1976}:\\ If $G$ is a group with no infinite subset of pairwise non-commuting elements, is there a finite bound on the size of such subsets? \\ While Erdös was essentially talking of coverings for groups, but by particular subsets and not proper subgroups, his question led mathematicians such as B.H. Neumann to look at groups with this property, and some other mathematicians, such as H.E. Bell and L.C. Kappe, to look at a ring theory problem analogous to Erdös' \cite{bell1997analogue}. Thus we can certainly say Erdös helped bring attention to the theory of coverings of groups, which Neumann and Kappe both looked more into, as we will see later in this section.\vspace{5pt}\\ \indent There was some work already done within this topic even prior to Erdös' involvement, so we will continue on from the relatively early twentieth century. Theorem \ref{Union2} has shown us it is impossible to write a group as a union of two proper subgroups, but it is possible for a group to be a union of three of its proper subgroups and, as it turns out, there is a theorem for this.
This theorem and Theorem \ref{Cyclic} have repeatedly been mentioned and proven in multiple papers, such as in \cite{haber1959groups} and \cite{bruckheimer}, but first appeared in Scorza's paper \cite{scorza}. \begin{theorem}[\cite{scorza}] If $G$ is a group, then $\sigma(G) = 3$ if and only if for some $N \vartriangleleft G$, $G/N \cong V$, the Klein 4-group. \end{theorem} An immediate consequence of this theorem is that the lower bound of the inequality given in Proposition \ref{Bounds} can be changed to 3, and so for any finite non-cyclic group $G$ we have $3 \leq \sigma(G) \leq n-1$. Immediately we see that the smallest non-cyclic group that has a covering is indeed $V$, and it should be evident that $\{\langle(0,1)\rangle, \langle (1,0)\rangle, \langle (1,1)\rangle\}$ forms a covering of $V$. In fact, it happens to be an equal covering of $V$. \begin{definition} Given a group $G$ and a covering $\Pi = \{H_1, H_2 ,..., H_n\}$, we say $\Pi$ is \textbf{irredundant} (or \textbf{minimal}) if for any $H_i \in \Pi$, $H_i$ is not contained in the union of the remaining $H$'s in $\Pi$. In other words, for each $i \in \{1,..., n\}$ there exists $x_i \in H_i$ such that $x_i \notin \bigcup\limits_{j\neq i}H_j$. \end{definition} Ideally, when we come up with a covering for a group, we want the fewest subgroups necessary. Haber and Rosenfeld \cite{haber1959groups} proved that if $\Pi = \{H_i\}$ is an irredundant covering of $G$ then for any $H_i \in \Pi$, $H_i$ contains the intersection of the remaining $H$'s in $\Pi$. Further in their paper they had shown the following two statements for any finite group $G$: \begin{theorem}[\cite{haber1959groups}]\label{haber} (i) If $p$ is the smallest prime divisor of $|G|$ then $G$ cannot be the union of $p$ or fewer proper subgroups.\\ (ii) If $p$ is the smallest prime divisor of $|G|$ and $\Pi = \{H_i\}$ is a covering of $p+1$ proper subgroups, there is some $H_i$ for which $[G:H_i] = p$.
If such an $H_i$ is normal, then all $H$'s in $\Pi$ have index $p$ and $p^2$ divides $|G|$. \end{theorem} As mentioned, Theorem 4 has been repeatedly mentioned in multiple papers, and in M. Bruckheimer et al. \cite{bruckheimer}, the authors explored a little more of when groups can be the union of three proper subgroups. As an example, they explained that all dihedral groups whose orders are divisible by 4, as well as all dicyclic groups, are `3-groups', which in the context of their paper means their covering number is 3. Additionally, they showed that if a group $G$ has the decomposition (or covering) $\{A,B,C\}$, then this is only possible if all three subgroups are abelian, all are non-abelian, or only one is abelian. They showed it was impossible for a covering of $G$ to have two abelian subgroups of $G$ and one non-abelian.\vspace{5pt}\\ \indent T. Foguel and M. Ragland \cite{foguel2008groups} investigate what they call `CIA'-groups, or groups that have a covering whose components are isomorphic abelian subgroups of $G$. They found many results, such as that every finite group can be a factor of a CIA-group, and that the (direct) product of two CIA-groups is a CIA-group. Among the other results they derived, they found which families of groups are CIA-groups and which ones are not. All dihedral groups and groups of square-free order are examples of non-CIA-groups, and generally any non-cyclic group with prime exponent is a CIA-group. Since isomorphic groups have the same order, any finite CIA-group by definition will have an equal covering, or covering by proper subgroups of the same order.\vspace{5pt}\\ \indent J.H.E. Cohn \cite{cohn1994n} provides us with plenty of nifty theorems and corollaries. Before presenting two superb theorems from his paper we must mention that in place of\ $\bigcup$, Cohn used summation notation, and so if $\{H_1, H_2, ..., H_n\}$ is a covering for $G$, with $|H_1| \geq |H_2| \geq \cdots \geq
|H_n|$, then he had written $G = \sum\limits_{i=1}^{n}H_i$. He also used $i_r$ to denote $[G:H_r]$, and if $\sigma(G) = n$ he said that $G$ is an $n$-sum group. \begin{theorem}[\cite{cohn1994n}]\label{cohn1} Let $G$ be a finite $n$-sum group. It follows: \begin{enumerate} \item $i_2 \leq n-1$ \item if $N \vartriangleleft G$ then $\sigma(G) \leq \sigma(G/N)$ \item $\sigma(H \times K) \leq \min\{\sigma(H), \sigma(K)\}$, where equality holds if and only if $|H|$ and $|K|$ are coprime. \end{enumerate} \end{theorem} Before we continue, we must mention that Theorem \ref{cohn1} was originally written so that \textit{1.} and \textit{2.} were lemmas and \textit{3.} was an immediate corollary. In our study of equal coverings, any one of these may prove to be useful, so we compiled all three statements into a theorem. Before we move on to the next theorem, we must note that Cohn defined a primitive $n$-sum group $G$ to be a group such that $\sigma(G) = n$ and $\sigma(G/N) > n$ for all nontrivial normal subgroups $N$ of $G$. The following theorem was written by \cite{bhargava2009groups}, with \textit{2.}-\textit{4.} coming originally from Theorem 5 of \cite{cohn1994n} and \textit{5.} coming from work developed later on in the same paper. \begin{theorem}[\cite{cohn1994n}, \cite{tomkinson}]\label{cohn2} \vspace{5pt} \begin{enumerate} \item There are no 2-sum groups. \item $G$ is a 3-sum group if and only if it has at least two subgroups of index 2. The only primitive 3-sum group is $V$. \item $G$ is a 4-sum group if and only if $\sigma(G) \neq 3$ and it has at least 3 subgroups of index 3. The only primitive 4-sum groups are $\mathbb{Z}_3^2$ and $S_3$. \item $G$ is a 5-sum group if and only if $\sigma(G) \neq 3$ or 4 and it has at least one maximal subgroup of index 4. The only primitive 5-sum group is $A_4$.
\item $G$ is a 6-sum group if and only if $\sigma(G) \neq 3$, 4, or 5 and there is a quotient isomorphic to $\mathbb{Z}_5^2$, $D_{10}$ (dihedral group of order 10) or $W = \mathbb{Z}_5 \rtimes \mathbb{Z}_4 = \langle a,b|\ a^5 = b^4 = e, ba = a^2b\rangle$. All three happen to be the only primitive 6-sum groups. \item There are no 7-sum groups, that is, no $G$ for which $\sigma(G) = 7$. \end{enumerate} \end{theorem} \noindent The last statement from Theorem \ref{cohn2} is interesting since 7 is only the third positive integer such that no group can be covered by that number of proper subgroups, and although Cohn did not have a proof of it, it was ultimately proven by M.J. Tomkinson \cite{tomkinson}. In M. Garonzi et al.'s paper \cite{garonzi2019integers}, one topic of the paper was to figure out which integers cannot be covering numbers. For a complete list of integers less than 129 that cannot be covering numbers, please see \cite{garonzi2019integers}. In particular, they found that integers which can be covering numbers are of the form $\frac{q^m-1}{q-1}$, where $q$ is a prime and $m \neq 3$. Additionally, something Cohn had also conjectured, and was then proven by Tomkinson, was that for every prime number $p$ and positive integer $n$ there exists a group $G$ for which $\sigma(G) = p^n + 1$, and moreover, such groups are non-cyclic solvable groups.\vspace{5pt}\\ \indent In addition to determining which integers smaller than 129 cannot be a covering number, \cite{garonzi2019integers} also attempted to look at covering numbers of small symmetric groups, linear groups, and some sporadic groups. Some of the results were based on the work of A. Maroti \cite{maroti2005covering}, with one result being that for all odd $n \geq 3$, except $n = 9$, $\sigma(S_n) = 2^{n-1}$. \cite{kappe2016covering} actually demonstrated that $\sigma(S_9) = 256$, so that formula in fact holds for all odd $n \geq 3$.
Additionally, when the exact covering number of a group was not attainable, they would find a lower bound, an upper bound, or both; for example, for the Janko group $J_1$ they found that $5316 \leq \sigma(J_1) \leq 5413$. \subsection*{Other Types of Coverings} Now, we have primarily talked thus far about groups that have a covering by general proper subgroups. One may ask what happens if we place restrictions on, or modify, the concept of a standard covering of a group with, say, a covering by proper normal subgroups, or a covering by proper subgroups with the restriction that any two given subgroups intersect trivially. \subsubsection*{Covering by Cosets} Neumann \cite{neumann1954groups} was interested in seeing what we can find out about when groups can be the union of cosets of subgroups. In other words, he was interested in when $G = \bigcup x_iH_i$. A powerful theorem he proved was the following: \begin{theorem}[\cite{neumann1954groups}] If $G = \bigcup x_iH_i$ is a union of cosets of subgroups, and if we remove any $x_iH_i$ for which $[G:H_i]$ is infinite, then the remaining union is still all of $G$. \end{theorem} \noindent If $G$ is a finite group then Theorem 8 will hold no matter which nontrivial subgroups $H_i$ we choose, but if we were dealing with infinite groups then this theorem can very well prove to be incredibly useful. \subsubsection*{Covering by Normal Subgroups and Conjugates of Subgroups} M. Bhargava \cite{bhargava2009groups} investigated coverings by normal subgroups and conjugates of subgroups. One type of covering was that of covering by normal subgroups. It was proven that any group that can be covered by three proper subgroups is actually covered by three normal proper subgroups. Additionally, $G$ can be written as the union of proper normal subgroups of $G$ if and only if there is some quotient group isomorphic to $\mathbb{Z}_{p}^2 = \mathbb{Z}_p \times \mathbb{Z}_p$ for some prime $p$.\\ Another type of covering is that by conjugate subgroups.
It turns out that there is no example of a finite group that is coverable by the conjugates of a single proper subgroup! In \cite{bhargava2009groups} there happens to be a theorem in regard to non-cyclic solvable groups. \begin{theorem}[\cite{bhargava2009groups}] Suppose $G$ is a finite non-cyclic solvable group. Then $G$ is either 1) a union of proper normal subgroups or 2) a union of conjugates of two proper subgroups. \end{theorem} \noindent Interestingly enough, the infinite group GL$_2(\mathbb{C})$, the group of all non-singular $2 \times 2$ matrices with complex entries, happens to be coverable by the set of all conjugates of the subgroup of upper triangular matrices \cite{bhargava2009groups}. \subsubsection*{Partitions \& Semi-Partitions} Now, regardless of what type of group covering we have, we only require that such a collection is indeed a covering for the parent group. We now introduce a special kind of covering for groups.\vspace{5pt}\\ As mentioned prior, G.A. Miller \cite{miller1906groups} began an investigation into a special type of covering known as a partition, and the purpose of this section is to highlight the many discoveries about partitionable groups. \begin{definition} Let $G$ be a group. If $\Pi$ is a covering of $G$ where any two distinct members of $\Pi$ intersect trivially, then $\Pi$ is a \textbf{partition} of $G$. We will say $G$ is partitionable if $G$ has a partition. \end{definition} \noindent First, \cite{miller1906groups} had shown two impressive statements: that any abelian partitionable group must be an elementary abelian $p$-group with order $\geq p^2$; and that if $|G| = p^m$ and $\Pi$ is a partition of $G$ then for any $H \in \Pi$ we have $|H| = p^a$ where $a$ divides $m$.\vspace{5pt}\\ Similar to how we defined the covering number of a group, we define $\rho(G)$ to be the smallest number of members for any partition of $G$. If $G$ has no partition, then we write $\rho(G) = \infty$.
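Miller's statement that a partitionable abelian group must be elementary abelian can be verified by brute force for small orders. The following Python sketch is our own illustration (not part of the original development): it searches for a partition directly, using the counting observation that a family of proper subgroups has pairwise trivial intersections and covers $G$ exactly when $\sum (|H_i| - 1) = |G| - 1$ and the union is all of $G$.

```python
from itertools import combinations

def proper_subgroups(elems, op):
    """Nonempty proper subsets closed under `op`; for a finite group these
    are exactly the proper subgroups."""
    elems = list(elems)
    subs = []
    for r in range(1, len(elems)):
        for cand in combinations(elems, r):
            s = set(cand)
            if all(op(a, b) in s for a in s for b in s):
                subs.append(frozenset(s))
    return subs

def has_partition(elems, op):
    """Brute force: is the group covered by proper subgroups, any two of
    which intersect trivially?  The candidate family partitions the n-1
    nonidentity elements exactly when sum(|H|-1) == n-1 and the union is G."""
    subs = proper_subgroups(elems, op)
    n = len(elems)
    for r in range(2, len(subs) + 1):
        for pick in combinations(subs, r):
            if (sum(len(H) - 1 for H in pick) == n - 1
                    and set().union(*pick) == set(elems)):
                return True
    return False

# V = Z_2 x Z_2: elementary abelian of order p^2, hence partitionable.
v = [(a, b) for a in (0, 1) for b in (0, 1)]
xor = lambda p, q: ((p[0] + q[0]) % 2, (p[1] + q[1]) % 2)
print(has_partition(v, xor))        # True

# Z_4 x Z_2: non-cyclic abelian but not elementary abelian, so no partition.
z4z2 = [(a, b) for a in range(4) for b in range(2)]
add42 = lambda p, q: ((p[0] + q[0]) % 4, (p[1] + q[1]) % 2)
print(has_partition(z4z2, add42))   # False
```

In the second example every subgroup of order 4 contains the element $(2,0)$, so no two of them can appear together in a partition, matching Miller's criterion.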
Clearly, when $G$ is partitionable, $\sigma(G) \leq \rho(G)$, and so a question may arise as to which groups satisfy $\sigma(G) < \rho(G)$ and when $\sigma(G) = \rho(G)$. T. Foguel and N. Sizemore \cite{sizemorepartition} look at partition numbers of some finite solvable groups, such as $D_{2n}$ (the dihedral group of order $2n$) and $E_{p^n} = \mathbb{Z}_{p}^n$ (the elementary abelian $p$-group of order $p^n$, where $p$ is prime). In this paper, they mentioned and proved many results, such as that when $n > 1$ we have $\rho(E_{p^n}) = 1 + p^{\lceil \frac{n}{2} \rceil}$, as well as that $\sigma(D_{2n}) = \rho(D_{2n})$ if and only if $n$ is prime, otherwise $\sigma(D_{2n}) < \rho(D_{2n})$. During the middle of the last century, work was done to classify all partitionable groups; such a classification was finally completed in 1961, due to the collective work of R. Baer \cite{baer1961partitionen}, O. Kegel \cite{kegel1961nicht}, and M. Suzuki \cite{suzuki1961finite}. \vspace{5pt}\\ Let us familiarize ourselves with notation that will be used for the following theorem. If $G$ is a $p$-group, then we define $H_p(G) = \langle x \in G:\ x^p \neq 1\rangle$, and a group is of Hughes-Thompson type if $G$ is a non-$p$-group where $H_p(G) \neq G$. For the classification mentioned above, please see Theorem 10.
\begin{theorem}[\cite{baer1961partitionen}, \cite{kegel1961nicht}, \cite{suzuki1961finite}] $G$ is a partitionable group if and only if $G$ is isomorphic to any of the following: \begin{enumerate} \item $S_4$ \item A $p$-group where $|G| > p$ and $H_p(G) < G$ \item A Frobenius group ($G = K \rtimes H$, where $H$ is the Frobenius complement and $K$ is the Frobenius kernel) \item A group of Hughes-Thompson type \item $\text{PSL}(2, p^n)$, $p$ is prime and $p^n \geq 4$ \item $\text{PGL}(2, p^n)$, $p$ is an odd prime and $p^n \geq 5$ \item $\text{Sz}(q)$, the Suzuki group of order $q^2(q^2+1)/(q-1)$ where $q = 2^{2n+1}, n\geq 1$ \end{enumerate} \end{theorem} After this work, G. Zappa \cite{zappa2003partitions} developed a more general concept of partitions, namely strict $S$-partitions. \begin{definition} If $G$ is a group and $\Pi$ is a covering of $G$ such that $H_i \cap H_j = S$ for all distinct $H_i, H_j \in \Pi$ and for some fixed $S < G$, then we say $\Pi$ is a \textbf{strict $S$-partition}. If, in addition, $|H_i| = |H_j|$ for all $H_i,H_j \in \Pi$, then we say $\Pi$ is an \textbf{equal strict $S$-partition} or an \textbf{$ES$-partition}. \end{definition} One powerful result of G. Zappa's was that if $N \leq S < G$ and $N \vartriangleleft G$, then $G$ has a strict $S$-partition $\{H_1, H_2, ..., H_n\}$ if and only if $\{H_1/N, H_2/N,..., H_n/N\}$ is a strict $S/N$-partition of $G/N$.\vspace{5pt}\\ Using Zappa's results and definitions, L. Taghvasani and M. Zarrin \cite{jafari2018criteria} proved, among many results, that a group $G$ is nilpotent if and only if for every subgroup $H$ of $G$, there is some $S \leq H$ such that $H$ has an $ES$-partition.\vspace{5pt}\\ In 1973, I.M. Isaacs \cite{isaacs1973equally} attempted to look at groups that were equally partitionable, or using Zappa's terminology, all $G$ that have an $E\{1\}$-partition.
He derived the following theorem: \begin{theorem}[\cite{isaacs1973equally}]\label{isaacstheorem} $G$ is a finite group with an equal partition if and only if $G$ is a finite non-cyclic $p$-group with exponent $p$, where $p$ is a prime. \end{theorem} \noindent Isaacs' result provides us an insight into at least one class of groups that have equal coverings, since an equal partition is an equal covering after all.\vspace{5pt}\\ \indent To close this subsection, we will talk briefly about \textit{semi-partitions} of groups, which are coverings of groups wherein the intersection of any three distinct components is trivial. Foguel et al. \cite{semi-partitions} analyze and look for properties of groups that have or do not possess a semi-partition, as well as determine the semi-partition number of a group, $\rho_s(G)$. Some results they found included that if $G$ has a semi-partition composed of proper normal subgroups, then $G$ is finite and solvable (\cite{semi-partitions}, Theorem 2.1), and that when $p$ is prime we have $\sigma(D_{2p^n}) = p + 1$, $\rho(D_{2p^n}) = p^n + 1$, and $\rho_s(D_{2p^n}) = p^n - p^{n-1} + 2$ (\cite{semi-partitions}, Proposition 4.2). \subsubsection*{Coverings of Loops} This last subsection on the history of coverings of groups is dedicated to looking over coverings of loops. Indeed, the concept of coverings of groups can be loosely translated to that of other algebraic structures such as loops, semigroups \cite{kappe2001analogue}, and rings \cite{bell1997analogue}. We will, however, focus on loops covered by subloops and even subgroups, as well as briefly mention loop partitions.\vspace{5pt}\\ Similar to how we defined a group covering, T. Foguel and L.C. Kappe \cite{foguel2005loops} define a subloop covering of a loop $\mathscr{L}$ to be a collection of proper subloops $\mathscr{H}_1,..., \mathscr{H}_n$ whose set-theoretic union is $\mathscr{L}$.
Using the terminology they had used, $\mathscr{L}$ is \textit{power-associative} if the subloop generated by $x$ forms a group for any $x \in \mathscr{L}$, and \textit{diassociative} if the subloop generated by $x$ and $y$ forms a group for any $x,y \in \mathscr{L}$.\\ Foguel and Kappe then defined the concept of an \textit{$n$-covering} for a loop. We say the collection of proper subloops $\{\mathscr{H}_i: i \in \Omega\}$ is an $n$-covering for $\mathscr{L}$ if for any collection of $n$ elements of $\mathscr{L}$, those elements lie in $\mathscr{H}_i$ for some $i \in \Omega$. Using this definition, they proved the following theorem. \begin{theorem}[\cite{foguel2005loops}] Given a loop $\mathscr{L}$ we have \begin{enumerate} \item $\mathscr{L}$ has a 1-covering (or just covering) if and only if $\mathscr{L}$ is power-associative \item $\mathscr{L}$ has a 2-covering if and only if $\mathscr{L}$ is diassociative \item $\mathscr{L}$ has a 3-covering if and only if $\mathscr{L}$ is a group \end{enumerate} \end{theorem} \noindent In the same paper, Foguel and Kappe showed that while a few ideas and properties of group coverings can be translated when talking about loops, in other instances we would need to place restrictions in order to obtain results or theorems analogous to the theorems of group coverings. Theorem 6.4 of \cite{foguel2005loops}, we would say, is almost the loop equivalent of Theorem 8 of this paper, which was originally derived by B.H. Neumann.\vspace{5pt}\\ In a separate paper, T. Foguel and R. Atanasov \cite{atanasov2014loops} go further with investigating the subject of loop partitions, which of course can be defined similarly to how we define group partitions. First, a \textit{group covering} of a loop $\mathscr{L}$ is a covering by subloops that are also subgroups.
A group covering is a group-partition (or $G$-partition) if every nonidentity element lies in exactly one subgroup of the covering, and is an equal group partition (or $EG$-partition) if such subgroups are of the same order. T. Foguel and R. Atanasov proved many results using these definitions, with one being of particular interest for this paper: \begin{theorem}[\cite{atanasov2014loops}] If $\mathscr{L}$ is a finite non-cyclic power-associative loop with the property $(ab)^n = a^nb^n$ for all $a,b \in \mathscr{L}$ and all $n \in \mathbb{N}$, then the following are equivalent: \begin{enumerate} \item $\mathscr{L}$ has a proper $G$-partition \item $\mathscr{L}$ has a proper diassociative partition \item $\mathscr{L}$ has exponent $p$, where $p$ is prime \end{enumerate} \end{theorem} \noindent Foguel and Atanasov also demonstrate that a certain type of finite non-cyclic loop has an $EG$-partition if and only if it has prime exponent (\cite{atanasov2014loops}, Theorem 6.7). \vspace{5pt}\\ \indent In this section of this thesis, I attempted to highlight the important theorems and results of mathematicians who have delved into the subject of coverings of groups and coverings of other algebraic structures since the time of G.A. Miller near the beginning of the last century. So much has been accomplished that a whole 20-plus-page thesis would be needed to cover the more general results of the papers mentioned in this section, and more. In the following section, we attempt to derive some theorems about groups that have equal coverings. One thing to note is that we may need to keep our eyes peeled for groups and loops of prime exponent, since there have been at least two separate instances where such groups seem to correlate with being the union of equal-order proper subgroups. \section{Preliminaries for Equal Coverings} Recall that if $G$ is a group, then an equal covering of $G$ is a collection of proper subgroups such that their union is $G$ and all such subgroups are of the same order.
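Before deriving theorems, the definition itself can be tested by brute force on small examples. The following Python sketch is purely our illustration (the actual computations later in the thesis use \texttt{GAP}): for each subgroup order $k$ occurring in a small group, it checks whether the proper subgroups of order $k$ already cover the group, which happens for some $k$ exactly when an equal covering exists.

```python
from itertools import combinations

def proper_subgroups(elems, op):
    """Nonempty proper subsets closed under `op`; in a finite group these
    are exactly the proper subgroups."""
    elems = list(elems)
    subs = []
    for r in range(1, len(elems)):
        for cand in combinations(elems, r):
            s = set(cand)
            if all(op(a, b) in s for a in s for b in s):
                subs.append(frozenset(s))
    return subs

def has_equal_covering(elems, op):
    """True iff for some order k the proper subgroups of order k cover the
    whole group, i.e. the group admits an equal covering."""
    subs = proper_subgroups(elems, op)
    for k in {len(H) for H in subs}:
        if set().union(*[H for H in subs if len(H) == k]) == set(elems):
            return True
    return False

# V = Z_2 x Z_2: its three subgroups of order 2 form an equal covering.
v = [(a, b) for a in (0, 1) for b in (0, 1)]
xor = lambda p, q: ((p[0] + q[0]) % 2, (p[1] + q[1]) % 2)
print(has_equal_covering(v, xor))   # True

# S_3, modeled as the dihedral group of order 6 with elements r^a s^b:
# it has a covering (subgroups of orders 2 and 3 mixed) but no equal covering.
s3 = [(a, b) for a in range(3) for b in range(2)]
def mul(x, y):
    a, b = x
    c, d = y
    # (r^a s^b)(r^c s^d) = r^{a + (-1)^b c} s^{b+d}
    return ((a + (c if b == 0 else -c)) % 3, (b + d) % 2)
print(has_equal_covering(s3, mul))  # False
```

The $S_3$ outcome agrees with what follows: $S_3 \cong D_6$ is a dihedral group with $n = 3$ odd, which will be shown to have no equal covering.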
Since cyclic groups do not have a covering at all, we will focus on non-cyclic groups for the remainder of this paper. So, unless otherwise specified, in future theorems we restrict ourselves to finite non-cyclic groups. The first theorem of this section is powerful, but first we must mention the concept of the exponent of a group. \begin{definition} If $G$ is a group, then the \textbf{exponent} of $G$ is the smallest positive integer $n$ for which $a^n = 1$ for all $a \in G$. We will use $\exp(G)$ to denote the exponent of $G$. \end{definition} \begin{remark} If $G$ is a finite group, then the exponent of $G$ is the least common multiple of the orders of the elements of $G$. \end{remark} \begin{theorem}\label{ExpTheorem} If $G$ has an equal covering $\Pi = \{H_i\}$, then $\exp(G)$ divides $|H_i|$ for all $H_i \in \Pi$. \end{theorem} \begin{proof} Let $\Pi = \{H_i\}$ be an equal covering of $G$ and suppose $x \in G$. Since $\Pi$ is a covering, $x \in H$ for some $H \in \Pi$, so $|x|$ divides $|H|$. Since $\Pi$ is an equal covering, $|x|$ then divides $|H_i|$ for all $H_i \in \Pi$. Thus the order of every element of $G$ divides the order of every $H_i \in \Pi$, and therefore $\exp(G)$, the least common multiple of those orders, divides $|H_i|$ for all $H_i \in \Pi$. \end{proof} \begin{corollary}\label{ExpCor} If $\exp(G) \nmid |K|$ for every maximal subgroup $K$ of $G$, then $G$ does not have an equal covering. \end{corollary} Now, recall $D_{2n}$ is our notation for the dihedral group of order $2n$. That is, let $D_{2n} = \langle r,s \rangle$, where the defining equations are $r^n = s^2 = 1$ and $srs = r^{-1}$. It turns out that there is a way to determine whether a dihedral group has an equal covering, and moreover we need only examine the parity of $n$: as we will see, $D_{2n}$ has an equal covering if and only if $n$ is even. \begin{lemma}\label{OrderDn} In $D_{2n}$, if $i \in \{1,2,...,n\}$ then $|r^is| = |sr^i| = 2$ and $|r^i| = \text{lcm}(n,i)/i$.
\end{lemma} \begin{proof} Using the fact that $srs = r^{-1}$, induction gives $(srs)^i = sr^is = r^{-i}$. Multiplying both sides of $sr^is = r^{-i}$ by $r^i$ yields $(r^is)(r^is) = (sr^i)(sr^i) = 1$, so $|r^is| = |sr^i| = 2$.\vspace{5pt}\\ For the rotations, $(r^i)^{\text{lcm}(i,n)/i} = r^{\text{lcm}(i,n)} = 1$, since $\text{lcm}(i,n)$ is divisible by $n$, the order of $r$. Conversely, if $(r^i)^k = 1$ then $n$ divides $ik$, so $ik$ is a common multiple of $i$ and $n$; hence $ik \geq \text{lcm}(i,n)$ and $k \geq \text{lcm}(i,n)/i$. Therefore $|r^i| = \text{lcm}(i,n)/i$. \end{proof} \begin{corollary}\label{ExpDn} If $n$ is odd then $\exp(D_{2n}) = 2n$; if $n$ is even then $\exp(D_{2n}) = n$. In other words, $\exp(D_{2n}) = \text{lcm}(n,2)$. \end{corollary} \begin{proof} By Lemma \ref{OrderDn}, $\exp(D_{2n})$ must be divisible by 2 and by $|r^i| = \text{lcm}(i,n)/i$ for all $i \in \{1,2,...,n\}$. Observe that when $i$ and $n$ are coprime, $\text{lcm}(i,n) = i\cdot n$, and so $|r^i| = i\cdot n/i = n$. Hence $\exp(D_{2n})$ is divisible by both $n$ and $2$, and therefore by $\text{lcm}(n,2)$. On the other hand, $x^{\text{lcm}(n,2)} = 1$ for every $x \in D_{2n}$, since every element has order dividing $n$ or equal to $2$. Therefore $\exp(D_{2n}) = \text{lcm}(n,2)$, which is $2n$ when $n$ is odd and $n$ when $n$ is even. \end{proof} \begin{theorem}\label{EqCovDn} (i) If $n$ is odd, $D_{2n}$ has no equal covering. (ii) If $n$ is even, then $D_{2n}$ has an equal covering $\Pi = \{\langle r \rangle, \langle r^2, s\rangle, \langle r^2, rs\rangle\}$. Consequently, $\sigma(D_{2n}) = 3$ for even $n$. \end{theorem} \begin{proof} (i) Let $n$ be odd and suppose $D_{2n}$ had an equal covering. By Theorem \ref{ExpTheorem} and Corollary \ref{ExpDn}, every subgroup in the covering would have order divisible by $\exp(D_{2n}) = 2n$. Since $2n$ is the order of $D_{2n}$ itself, no proper subgroup has such an order, and we reach a contradiction.\vspace{5pt}\\ (ii) Let $n$ be even, $A = \langle r^2, s\rangle$ and $B = \langle r^2, rs\rangle$. We will first prove any $d \in D_{2n}$ lies in at least one of $\langle r \rangle$, $A$, or $B$.
Then, we will show $|\langle r \rangle| = |A| = |B|$.\vspace{5pt}\\ For any $d \in D_{2n}$, we have three cases: $d$ is of the form $r^i$, $r^is$ with $i$ even, or $r^is$ with $i$ odd.\\ If $d = r^i$, then $d \in \langle r \rangle$.\\ If $d = r^is$ with $i$ even, then $d = r^{2k}s$ for some $k$. Since $r^{2k}s = (r^2)^ks \in A$, $d \in A$.\\ If $d = r^is$ with $i$ odd, then $d = r^{2k+1}s$ for some $k$. Since $r^{2k+1}s = (r^2)^k(rs) \in B$, $d \in B$. So, $\Pi$ is a covering of $D_{2n}$; since no group is the union of two proper subgroups, this gives $\sigma(D_{2n}) = 3$.\vspace{5pt}\\ We know $|\langle r \rangle| = n$, so we now show $|A| = |B| = n$.\\ Any element of $A$ is either an even power of $r$, or an even power of $r$ multiplied by $s$. Since $n$ is even and $n$ is the order of $r$, the number of even powers of $r$ is $\frac{n}{2}$. Multiplying each of these by $s$ yields $\frac{n}{2}$ further elements, so $|A| = \frac{n}{2} + \frac{n}{2} = n$.\\ Likewise, any element of $B$ is either an even power of $r$, or an even power of $r$ multiplied by $rs$. Multiplying the $\frac{n}{2}$ even powers of $r$ by $rs$ yields elements of the form $(r^{2k})(rs) = r^{2k+1}s$, odd powers of $r$ multiplied by $s$, giving $\frac{n}{2}$ further elements. It follows $|B| = \frac{n}{2} + \frac{n}{2} = n$.\vspace{5pt}\\ Therefore, $\Pi = \{\langle r \rangle, \langle r^2, s\rangle, \langle r^2, rs\rangle\}$ is an equal covering of $D_{2n}$ when $n$ is even. \end{proof} While we are in search of significant theorems about particular families of groups, such as Theorem \ref{EqCovDn}, we have developed some other useful theorems along the way. To start, as mentioned prior, Isaacs proved that non-cyclic $p$-groups of exponent $p$ have an equal partition.
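As a quick computational sanity check on Theorem \ref{EqCovDn} (independent of the proof above), the covering $\Pi = \{\langle r \rangle, \langle r^2, s\rangle, \langle r^2, rs\rangle\}$ can be verified for small even $n$. The following Python sketch models $D_{2n}$ by pairs $(i,\epsilon)$ standing for $r^i s^\epsilon$, with multiplication derived from $sr^js = r^{-j}$:

```python
def dihedral_mul(n):
    # Elements of D_{2n} are pairs (i, e) standing for r^i s^e,
    # with i mod n and e mod 2.  From s r^j s = r^{-j} one gets
    # r^i s^e * r^j s^f = r^{i + (-1)^e j} s^{e+f}.
    def mul(x, y):
        (i, e), (j, f) = x, y
        return ((i + (-1) ** e * j) % n, (e + f) % 2)
    return mul

def generated_subgroup(gens, mul):
    """Closure of gens under multiplication; in a finite group this
    is the subgroup they generate (inverses arise as powers)."""
    elems = {(0, 0)} | set(gens)
    while True:
        new = {mul(x, y) for x in elems for y in elems} - elems
        if not new:
            return elems
        elems |= new

def check_theorem(n):
    mul = dihedral_mul(n)
    G = {(i, e) for i in range(n) for e in (0, 1)}
    r, s = (1 % n, 0), (0, 1)        # reduce generators mod n so n = 2 works
    r2, rs = (2 % n, 0), (1 % n, 1)
    Pi = [generated_subgroup(g, mul) for g in ([r], [r2, s], [r2, rs])]
    proper = all(len(H) < len(G) for H in Pi)
    equal = len({len(H) for H in Pi}) == 1
    covers = set().union(*Pi) == G
    return proper and equal and covers

print(all(check_theorem(n) for n in (2, 4, 6, 8, 10)))  # True
```

For odd $n$ the check fails as expected: $r^2$ then generates all of $\langle r \rangle$, so $\langle r^2, s \rangle$ is no longer a proper subgroup.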
Since equal partitions are a particular type of equal covering, we will demonstrate that all non-cyclic $p$-groups, no matter their exponent, have an equal covering. From there, we will develop some more useful theorems that can aid us in determining whether a group has an equal covering. \begin{theorem}\label{pgroups} Every non-cyclic finite $p$-group has an equal covering. \end{theorem} \begin{proof} Let $G$ be a non-cyclic group of order $p^n$, where $p$ is prime and $n > 1$. If $x \in G$, then $x$ must lie in some maximal subgroup of $G$, say $M_x$. Since $G$ is a $p$-group, and thus nilpotent, $[G:M_x]$ is prime, and consequently $[G:M_x] = p$ (\cite{rotman2012introduction}, Theorem 4.6 (ii)). It follows that every $x \in G$ lies in a maximal subgroup $M_x$ with $|M_x| = p^{n-1}$, so the collection of all maximal subgroups forms an equal covering of $G$. \end{proof} \begin{theorem}\label{direct2} If $G$ or $H$ has an equal covering, then $G \times H$ has an equal covering. \end{theorem} \begin{proof} Without loss of generality, suppose $X = \{G_i: i \in I\}$ is an equal covering for $G$, and consider the set $\Pi = \{G_i \times H: i \in I\}$, a collection of proper subgroups of $G \times H$.\vspace{5pt}\\ Suppose $g \in G$ and $h \in H$. Since $X$ is a covering for $G$, $g \in G_i$ for some $i \in I$, and so $(g,h) \in G_i \times H$. Since $X$ is an equal covering for $G$, if $i,j \in I$ then $|G_i| = |G_j|$, and consequently $|G_i \times H| = |G_i||H| = |G_j||H| = |G_j \times H|$. It follows every subgroup of $G \times H$ in $\Pi$ is of the same order. Therefore, $\Pi$ is an equal covering of $G \times H$. \end{proof} \begin{corollary}\label{directarbitrary} If $G = \prod\limits_{i}H_i$ is a direct product of groups, and at least one $H_i$ has an equal covering, then $G$ has an equal covering.
\end{corollary} \noindent Corollary \ref{directarbitrary} is actually a powerful result: if we happen to know a certain group $G_1$ has an equal covering, then we can easily construct a new group $G_2$ with an equal covering by taking any direct product of finite groups with $G_1$, without needing to know anything about the remaining factors. \begin{theorem}\label{nilpotent} If $G$ is a finite non-cyclic nilpotent group then it must have an equal covering. \end{theorem} \begin{proof} Let $G$ be a finite non-cyclic nilpotent group. Then $G$ is the direct product of its Sylow $p$-subgroups (\cite{rotman2012introduction}, Theorem 5.39). Since $G$ is non-cyclic, at least one of these Sylow subgroups is non-cyclic, and so has an equal covering by Theorem \ref{pgroups}. By Corollary \ref{directarbitrary}, so does $G$. \end{proof} \begin{theorem}\label{distinctp} Suppose $G$ is a group whose order is the product of two or more distinct primes, each with multiplicity one. That is, $|G| = p_1p_2...p_n$, where $p_i = p_j$ only when $i=j$. Then $G$ cannot have an equal covering. \end{theorem} \begin{proof} Suppose $p_i$ is a prime divisor of $|G|$. By Cauchy's theorem, $G$ has some element $x_i$ with $|x_i| = p_i$. Hence each $p_i$ divides $\exp(G) = \text{lcm}\{|x|: x \in G\}$, and since $\exp(G)$ divides $|G| = p_1 p_2 \cdots p_n$, we must have $\exp(G) = |G|$. By Corollary \ref{ExpCor}, since no maximal subgroup $K$ can satisfy $|G| \mid |K|$, $G$ does not have an equal covering. \end{proof} \begin{theorem}\label{Quotient} If $H \vartriangleleft G$ and $G/H$ has an equal covering, then $G$ has an equal covering. \end{theorem} \begin{proof} Suppose $G/H$ has the equal covering $\Pi = \{N_1, N_2, ..., N_k\}$. Since $N_i < G/H$ and $H \vartriangleleft G$, $N_i = N_i^*/H$ for some $N_i^* < G$ by the Correspondence Theorem (\cite{rotman2012introduction}, Theorem 2.28). It follows that if $g \in G$ and $gH \in N_i$, then $g \in N_i^*$.
So, the set $\Gamma = \{N_1^*, N_2^*, ..., N_k^*\}$ is a covering of $G$. Finally, for any $N_i, N_j \in \Pi$, we have $|N_i| = |N_j|$, so $|N_i^*/H| = |N_j^*/H|$ and hence $|N_i^*| = |N_j^*|$. Thus, $\Gamma$ is an equal covering of $G$. \end{proof} Theorem \ref{Quotient} provides an alternate proof of Theorem \ref{direct2}: if $G = H \times K$ with $H$ having an equal covering, then $G/K \cong H$ has an equal covering, and by Theorem \ref{Quotient} so does $G$. The advantage of the original proof of Theorem \ref{direct2} is that if we know an explicit equal covering of $G$, we can easily write down one for $G \times H$. In a moment, we will define the semidirect product of two groups, as it turns out many groups can be written as a semidirect product. \begin{definition} Let $G$ be a group with subgroups $H$ and $K$. We say $G$ is the semidirect product of $H$ and $K$ if $H \vartriangleleft G$ and some $Q \cong K$ is a complement of $H$, that is, $HQ = G$ and $H \cap Q = \{1\}$. If $G$ is the semidirect product of $H$ and $K$, we denote this by $G = H \rtimes K$. \end{definition} \begin{remark}\label{QuotientSemi} Note that if $G = H \rtimes K$, then $K \cong G/H$. \end{remark} \begin{corollary}\label{covering-semi} If $G = H \rtimes K$ and $K$ has an equal covering, then $G$ has an equal covering. \end{corollary} \begin{proof} By Remark \ref{QuotientSemi}, $K \cong G/H$. By Theorem \ref{Quotient}, if $K$ has an equal covering then so does $G$. \end{proof} Let us recall that if $G$ is a group with a subgroup $H$ of index 2, then $H \vartriangleleft G$. Hence, if $|G| > 2$ and $G$ possesses such a subgroup, then $G$ cannot be simple. This leads to the following proposition. \begin{proposition}\label{simpleexp} If $G$ is a finite non-cyclic simple group with $\exp(G) = |G|/2$, then $G$ does not have an equal covering.
\end{proposition} \begin{proof} Suppose $G$ is a simple group with an equal covering $\Pi$ and $\exp(G) = |G|/2$. By Theorem \ref{ExpTheorem}, every subgroup in $\Pi$ has order divisible by $|G|/2$, and being proper, must have order exactly $|G|/2$. So $G$ has a subgroup $M$ with $[G:M] = 2$, and thus $M \vartriangleleft G$. This contradicts the simplicity of $G$. \end{proof} \begin{remark} Although $S_4$ is not simple ($A_4$ is a normal subgroup), it still has no equal covering: $\exp(S_4) = 12$, so every subgroup in an equal covering would need order divisible by 12, and $A_4$ is the only proper subgroup of order 12. \end{remark} We now have in this section a collection of useful theorems which help us deduce whether or not a group has an equal covering. \section{Possible Future Work} I took interest in this topic due to Dr. Tuval Foguel bringing it to my attention, and after having done the research in the literature and received his help in deriving the theorems in Section 3, I hope to continue researching this topic. The following are questions or ideas I would like to delve into more once I become familiar with more algebra concepts and definitions, as well as the literature related to this topic: \begin{enumerate} \item Develop more sophisticated \texttt{GAP} code that is faster and more efficient at determining which finite groups have an equal covering and which do not. \item For those that do have an equal covering, determine the equal covering number $\varepsilon(G)$, that is, \begin{equation*} \varepsilon(G) = \min\{|\Pi|\ : \Pi\ \text{is an equal covering of}\ G\} \end{equation*} \item In addition to developing more sophisticated code, I would like to derive more theorems that indicate which finite groups do (or do not) have an equal covering, and possibly develop a classification of which groups have an equal covering, similar to the existing classification of partitionable groups.
\item If a classification system is somehow not feasible, I would still like to examine certain types of groups, such as the finite simple groups one would find in the ATLAS \cite{conway1985finite}. I would love to either prove the conjecture at the end of Section 4 or determine the first counterexample to it. \end{enumerate} \section{Appendix} \subsection*{Equal Covering Statuses of Groups} In this section, we will attempt to state which groups up to order 60, as well as all finite non-cyclic simple groups up to the Mathieu group $M_{12}$, have equal coverings (or do not), based on the theorems presented in Section 3 as well as on \texttt{GAP} \cite{GAP4}. \texttt{GAP} is a software system that can be used to examine established groups or construct new ones from a computational standpoint, for example determining the order of a group or finding the conjugacy classes of a group.\\ We hope to examine groups of order 60 and below, as well as the first couple of groups in the ATLAS \cite{conway1985finite}. To start, we will list all orders up to 60 for which we already know whether groups of that order have equal coverings, and state the reason for the status, where T\# means Theorem \#.\vspace{5pt} \begin{center} \renewcommand{\arraystretch}{2} \begin{tabular}{|c|c|c|}\hline Order & Have An Equal Covering? & Reason \\\hline 1 & No & Trivial\\\hline All primes $\leq 60$ & No & T\ref{Cyclic}\\\hline 4, 8, 9, 16, 25, 27, 32, 49 & Yes (for non-cyclic groups) & T\ref{pgroups}\\ \hline \makecell{6, 10, 14, 15, 21, 22, 26, 30, 33, 34,\\ 35, 38, 39, 42, 46, 51, 55, 57, 58} & No & T\ref{distinctp}\\\hline \end{tabular}\vspace{5pt}\\ Table 1: Groups of Order 60 or Less with Equal Covering Status Based on Order \end{center}\vspace{5pt} As one will notice, we are missing the following numbers from the list of the first 60 natural numbers: 12, 18, 20, 24, 28, 36, 40, 44, 45, 48, 50, 52, 54, 56, 60.
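The order-based rows of Table 1 can be reproduced mechanically. The sketch below (plain Python with an ad hoc trial-division factorizer, independent of \texttt{GAP}) lists the prime-power orders covered by Theorem \ref{pgroups} and the squarefree composite orders covered by Theorem \ref{distinctp}; orders 1 and the primes are omitted since those groups are cyclic.

```python
def prime_factorization(m):
    """Return {prime: multiplicity} for m >= 2 by trial division."""
    factors, p = {}, 2
    while p * p <= m:
        while m % p == 0:
            factors[p] = factors.get(p, 0) + 1
            m //= p
        p += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return factors

prime_powers, squarefree_composites = [], []
for m in range(2, 61):
    f = prime_factorization(m)
    if len(f) == 1 and sum(f.values()) > 1:
        # Order p^k with k >= 2: every non-cyclic group of this order
        # has an equal covering (Theorem pgroups).
        prime_powers.append(m)
    elif len(f) > 1 and all(e == 1 for e in f.values()):
        # Squarefree with >= 2 distinct primes: no equal covering
        # (Theorem distinctp).
        squarefree_composites.append(m)

print(prime_powers)
print(squarefree_composites)
```

Running this prints exactly the two order lists appearing in the last two rows of Table 1.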
For each group of each of these orders, we will determine whether it has an equal covering using the theorems provided, as well as \texttt{GAP} and the ATLAS \cite{conway1985finite} for numerical arguments. The ATLAS is an encyclopedia of finite simple groups such as $A_{11}$ or the Mathieu group $M_{12}$, presenting the different families of such groups and, for each finite simple group, characteristics such as the classification of its maximal subgroups. For each order, the sequence in which we examine the groups follows the Group ID in \texttt{GAP}'s identification system. For example, the ID of $S_3$ in \texttt{GAP} is [6,1], indicating it is listed as the first group of order 6. Before we go into this part of the section, let us note two things: \begin{itemize} \item we omit $\mathbb{Z}_n$ for each $n$, since cyclic groups trivially have no covering; \item to identify a group's structure we will use \texttt{GAP} and other sources. Although \texttt{GAP}'s\\ \texttt{StructureDescription} function is useful for this, there are instances where groups with different IDs are given the same name. For example, in \texttt{GAP}, groups [20,1] and [20,3] are both called ``\texttt{C5 : C4}'' (or $\mathbb{Z}_5 \rtimes \mathbb{Z}_4$). In such cases we write (1) after the group name to indicate it is the first group with that name, (2) if it is the second group with that name, and so on. \textbf{In some instances, we may write the group in a way that makes clear which theorem or other statement was used to determine the equal covering status.} \end{itemize} \noindent \textbf{How to Read Table 2:} \begin{enumerate} \item The first column is the \textbf{Group ID}, the I.D. number of the corresponding group. We read $[m,n]$ to mean the $n$th group of order $m$. \item The second column is the \textbf{name(s)} of the group. \item The third column is the \textbf{Exponent} of the group.
\item The fourth column indicates whether the given group is \textbf{nilpotent} or not. \item The fifth column states whether the group has an \textbf{equal covering or not}. \item The sixth column is the reason why the group has or does not have an equal covering. We will write the Corollary(ies), Theorem(s), etc.\ mentioned in Section 3, writing C\# (for Corollary \#), T\# (for Theorem \#), R\# (for Remark \#), or P\# (for Proposition \#), or state \texttt{GAP} if primarily \texttt{GAP} code was used to determine the status. \end{enumerate} \begin{center} \begin{longtable}{| p{.1\textwidth} | p{.2\textwidth} | p{.1\textwidth} | p{.1\textwidth} | p{.2\textwidth} | p{.2\textwidth} |} \hline Group ID & Name(s) & Exponent & Nilpotent? & Equally Coverable? & Reason\\ \hline [12, 1] & $\mathbb{Z}_3 \rtimes \mathbb{Z}_4 \cong Q_{12}$ & 12 & No & No & C\ref{ExpCor} \\ \hline [12, 3] & $A_4$ & 6 & No & No & \texttt{GAP} \\ \hline [12, 4] & $D_{12}$ & 6 & No & Yes & T\ref{EqCovDn}\\ \hline [12, 5] & $\mathbb{Z}_2^2 \times \mathbb{Z}_3$ & 6 & Yes & Yes & C\ref{directarbitrary}\\ \hline Group ID & Name(s) & Exponent & Nilpotent? & Equally Coverable?
& Reason\\ \hline [18, 1] & $D_{18}$ & 18 & No & No & T\ref{EqCovDn} \\ \hline [18, 3] & $S_3 \times \mathbb{Z}_3$ & 6 & No & No & \texttt{GAP}\\ \hline [18, 4] & $\mathbb{Z}_3^2 \rtimes \mathbb{Z}_2$ & 6 & No & Yes & \texttt{GAP} \\ \hline [18, 5] & $\mathbb{Z}_3^2 \times \mathbb{Z}_2$ & 6 & Yes & Yes & C\ref{directarbitrary} \\ \hline [20, 1] & $Q_{20}$ & 20 & No & No & C\ref{ExpCor}\\ \hline [20, 3] & $\mathbb{Z}_5 \rtimes \mathbb{Z}_4$ & 20 & No & No & C\ref{ExpCor} \\ \hline [20, 4] & $D_{20}$ & 10 & No & Yes & T\ref{EqCovDn} \\ \hline [20, 5] & $\mathbb{Z}_2^2 \times \mathbb{Z}_5$ & 10 & Yes & Yes & C\ref{directarbitrary}\\ \hline [24, 1] & $\mathbb{Z}_3 \rtimes \mathbb{Z}_8$ & 24 & No & No & C\ref{ExpCor}\\ \hline [24, 3] & SL$(2,3) \cong Q_8 \rtimes \mathbb{Z}_3$ & 12 & No & No & \texttt{GAP} \\ \hline [24, 4] & $\mathbb{Z}_3 \rtimes Q_8 \cong Q_{24}$ & 12 & No & Yes & C\ref{covering-semi}\\ \hline [24, 5] & $S_3 \times \mathbb{Z}_4$ & 12 & No & Yes & \texttt{GAP} \\ \hline [24, 6] & $D_{24}$ & 12 & No & Yes & T\ref{EqCovDn} \\ \hline [24, 7] & $Q_{12} \times \mathbb{Z}_2$ & 12 & No & Yes & \texttt{GAP}\\ \hline [24, 8] & $(\mathbb{Z}_3 \times \mathbb{Z}_2^2) \rtimes \mathbb{Z}_2$ & 12 & No & Yes & \texttt{GAP}\\ \hline [24, 9] & $\mathbb{Z}_{12} \times \mathbb{Z}_2$ & 12 & Yes & Yes & T\ref{nilpotent}\\ \hline [24, 10] & $D_8 \times \mathbb{Z}_3$ & 12 & Yes & Yes & C\ref{directarbitrary}\\ \hline [24, 11] & $Q_8 \times \mathbb{Z}_3$ & 12 & Yes & Yes & T\ref{nilpotent} \\ \hline [24, 12] & $S_4$ & 12 & No & No & R3\\\hline [24, 13] & $A_4 \times \mathbb{Z}_2$ & 6 & No & No & \texttt{GAP}\\ \hline [24, 14] & $S_3 \times \mathbb{Z}_2^2$ & 6 & No & Yes & C\ref{directarbitrary}\\ \hline [24, 15] & $\mathbb{Z}_2^3 \times \mathbb{Z}_3$ & 6 & Yes & Yes & C\ref{directarbitrary}\\ \hline [28, 1] & $\mathbb{Z}_7 \rtimes \mathbb{Z}_4$ & 28 & No & No & C\ref{ExpCor}\\ \hline [28, 3] & $D_{28}$ & 14 & No & Yes & T\ref{EqCovDn} \\ \hline [28, 4] &
$\mathbb{Z}_2^2 \times \mathbb{Z}_7$ & 14 & Yes & Yes & C\ref{directarbitrary}\\ \hline [36, 1] & $\mathbb{Z}_9 \rtimes \mathbb{Z}_4$ & 36 & No & No & C\ref{ExpCor}\\ \hline [36, 3] & $\mathbb{Z}_2^2 \rtimes \mathbb{Z}_9$ & 18 & No & No & \texttt{GAP} \\ \hline [36, 4] & $D_{36}$ & 18 & No & Yes & T\ref{EqCovDn} \\ \hline [36, 5] & $\mathbb{Z}_2^2 \times \mathbb{Z}_9$ & 18 & Yes & Yes & T\ref{nilpotent}\\\hline [36, 6] & $(\mathbb{Z}_3 \rtimes \mathbb{Z}_4) \times \mathbb{Z}_3$ & 12 & No & No & \texttt{GAP} \\ \hline [36, 7] & $\mathbb{Z}_3^2 \rtimes \mathbb{Z}_4$(1) & 12 & Yes & Yes & \texttt{GAP} \\\hline [36, 8] & $\mathbb{Z}_3^2 \times \mathbb{Z}_4$ & 12 & Yes & Yes & C\ref{directarbitrary} \\ \hline [36, 9] & $\mathbb{Z}_3^2 \rtimes \mathbb{Z}_4$(2) & 12 & No & No & \texttt{GAP} \\\hline [36, 10] & $S_3 \times S_3$ & 6 & No & Yes & \texttt{GAP}\\ \hline [36, 11] & $A_4 \times \mathbb{Z}_3$ & 6 & No & Yes & \texttt{GAP}\\ \hline [36, 12] & $S_3 \times \mathbb{Z}_6$ & 6 & No & Yes & \texttt{GAP}\\ \hline Group ID & Name(s) & Exponent & Nilpotent? & Equally Coverable? 
& Reason\\ \hline [36, 13] & $(\mathbb{Z}_3^2 \rtimes \mathbb{Z}_2) \times \mathbb{Z}_2$ & 6 & No & Yes & \texttt{GAP}\\ \hline [36, 14] & $\mathbb{Z}_3^2 \times \mathbb{Z}_2^2$ & 6 & Yes & Yes & C\ref{directarbitrary}\\ \hline [40, 1] & $\mathbb{Z}_5 \rtimes \mathbb{Z}_8$(1) & 40 & No & No & C\ref{ExpCor}\\ \hline [40, 3] & $\mathbb{Z}_5 \rtimes \mathbb{Z}_8$(2) & 40 & No & No & C\ref{ExpCor}\\ \hline [40, 4] & $\mathbb{Z}_5 \rtimes Q_8$ & 20 & No & Yes & C\ref{covering-semi}\\ \hline [40, 5] & $D_{10} \times \mathbb{Z}_4 $ & 20 & No & Yes & \texttt{GAP}\\ \hline [40, 6] & $D_{40}$ & 20 & No & Yes & T\ref{EqCovDn}\\ \hline [40, 7] & $(\mathbb{Z}_5 \rtimes \mathbb{Z}_4) \times \mathbb{Z}_2$ & 20 & No & Yes & \texttt{GAP}\\ \hline [40, 8] & $(\mathbb{Z}_{2}^2 \times \mathbb{Z}_5) \rtimes \mathbb{Z}_2$ & 20 & No & Yes & \texttt{GAP}\\ \hline [40, 9] & $\mathbb{Z}_5 \times \mathbb{Z}_4 \times \mathbb{Z}_2$ & 20 & Yes & Yes & C\ref{directarbitrary}\\ \hline [40, 10] & $D_8 \times \mathbb{Z}_5$ & 20 & Yes & Yes & C\ref{directarbitrary}\\\hline [40, 11] & $Q_8 \times \mathbb{Z}_5$ & 20 & Yes & Yes & T\ref{nilpotent}\\\hline [40, 12] & $(\mathbb{Z}_5 \rtimes \mathbb{Z}_4) \times \mathbb{Z}_2$ & 20 & No & Yes & \texttt{GAP}\\\hline [40, 13] & $D_{10} \times \mathbb{Z}_2^2$ & 10 & No & Yes & C\ref{directarbitrary}\\\hline [40, 14] & $\mathbb{Z}_2^3\times \mathbb{Z}_5$ & 10 & Yes & Yes & C\ref{directarbitrary}\\\hline [44, 1] & $\mathbb{Z}_{11} \rtimes \mathbb{Z}_4$ & 44 & No & No & C\ref{ExpCor}\\\hline [44, 3] & $D_{44}$ & 22 & No & Yes & T\ref{EqCovDn}\\\hline [44, 4] & $\mathbb{Z}_2^2 \times \mathbb{Z}_{11}$ & 22 & Yes & Yes & C\ref{directarbitrary}\\\hline [45, 2] & $\mathbb{Z}_3^2 \times \mathbb{Z}_5$ & 15 & Yes & Yes & C\ref{directarbitrary}\\\hline [48, 1] & $\mathbb{Z}_3 \rtimes \mathbb{Z}_{16}$ & 48 & No & No & C\ref{ExpCor}\\\hline [48, 3] & $\mathbb{Z}_4^2 \rtimes \mathbb{Z}_3$ & 12 & No & Yes & \texttt{GAP}\\\hline [48, 4] & $S_3 \times \mathbb{Z}_8$ & 24 & No & 
Yes & \texttt{GAP}\\\hline [48, 5] & $\mathbb{Z}_{24} \rtimes \mathbb{Z}_2$(1) & 24 & No & Yes & \texttt{GAP}\\\hline [48, 6] & $\mathbb{Z}_{24} \rtimes \mathbb{Z}_2$(2) & 24 & No & Yes & \texttt{GAP}\\\hline [48, 7] & $D_{48}$ & 24 & No & Yes & T\ref{EqCovDn}\\\hline [48, 8] & $\mathbb{Z}_3 \rtimes Q_{16}$(1) & 24 & No & Yes & C\ref{covering-semi}\\\hline [48, 9] & $(\mathbb{Z}_3 \rtimes \mathbb{Z}_8) \times \mathbb{Z}_2$ & 24 & No & Yes & \texttt{GAP}\\\hline [48, 10] & $(\mathbb{Z}_3 \rtimes \mathbb{Z}_8) \rtimes \mathbb{Z}_2$(1) & 24 & No & Yes & \texttt{GAP}\\ \hline [48, 11] & $(\mathbb{Z}_3 \rtimes \mathbb{Z}_4) \times \mathbb{Z}_4$ & 12 & No & Yes & \texttt{GAP}\\\hline [48, 12] & $(\mathbb{Z}_3 \rtimes \mathbb{Z}_4) \rtimes \mathbb{Z}_4$ & 12 & No & Yes & \texttt{GAP}\\\hline [48, 13] & $\mathbb{Z}_{12} \rtimes \mathbb{Z}_4$ & 12 & No & Yes & \texttt{GAP}\\\hline [48, 14] & $(\mathbb{Z}_{12} \times \mathbb{Z}_2) \rtimes \mathbb{Z}_2$(1) & 12 & No & Yes & \texttt{GAP}\\\hline [48, 15] & $(D_8 \times \mathbb{Z}_3) \rtimes \mathbb{Z}_2$ & 24 & No & Yes & \texttt{GAP}\\\hline [48, 16] & $(\mathbb{Z}_3 \rtimes \mathbb{Z}_8) \rtimes \mathbb{Z}_2$(2) & 24 & No & Yes & \texttt{GAP}\\\hline [48, 17] & $(Q_8 \times \mathbb{Z}_3) \rtimes \mathbb{Z}_2$ & 24 & No & Yes & \texttt{GAP}\\\hline [48, 18] & $\mathbb{Z}_3 \rtimes Q_{16}$(2) & 24 & No & Yes & C\ref{covering-semi}\\\hline Group ID & Name(s) & Exponent & Nilpotent? & Equally Coverable? 
& Reason\\ \hline [48, 19] & $((\mathbb{Z}_3 \rtimes \mathbb{Z}_4) \times \mathbb{Z}_2) \rtimes \mathbb{Z}_2$ & 12 & No & Yes & \texttt{GAP}\\\hline [48, 20] & $\mathbb{Z}_4^2 \times \mathbb{Z}_3$ & 12 & Yes & Yes & C\ref{directarbitrary}\\\hline [48, 21] & $((\mathbb{Z}_4 \times \mathbb{Z}_2) \rtimes \mathbb{Z}_2) \times \mathbb{Z}_3$ & 12 & Yes & Yes & T\ref{nilpotent}\\\hline [48, 22] & $(\mathbb{Z}_4 \rtimes \mathbb{Z}_4) \times \mathbb{Z}_3$ & 12 & Yes & Yes & T\ref{nilpotent}\\\hline [48, 23] & $\mathbb{Z}_{8} \times \mathbb{Z}_3 \times \mathbb{Z}_2$ & 24 & Yes & Yes & T\ref{nilpotent}\\\hline [48, 24] & $(\mathbb{Z}_8 \rtimes \mathbb{Z}_2) \times \mathbb{Z}_3$ & 24 & Yes & Yes & T\ref{nilpotent}\\\hline [48, 25] & $D_{16} \times \mathbb{Z}_3$ & 24 & Yes & Yes & C\ref{directarbitrary}\\\hline [48, 26] & $QD_{16} \times \mathbb{Z}_3$ & 24 & Yes & Yes & C\ref{directarbitrary}\\\hline [48, 27] & $Q_{16} \times \mathbb{Z}_3$ & 24 & Yes & Yes & C\ref{directarbitrary}\\\hline [48, 28] & $\langle 2,3,4\rangle$ & 24 & No & No & \texttt{GAP} \\\hline [48, 29] & GL$(2,3)$ & 24 & No & No & \texttt{GAP}\\\hline [48, 30] & $A_4 \rtimes \mathbb{Z}_4$ & 12 & No & No & \texttt{GAP}\\\hline [48, 31] &$A_4 \times \mathbb{Z}_4$ & 12 & No & No & \texttt{GAP}\\\hline [48, 32] & SL$(2,3) \times \mathbb{Z}_2$ & 12 & No & No & \texttt{GAP}\\\hline [48, 33] & SL$(2,3) \rtimes \mathbb{Z}_2$ & 12 & No & No & \texttt{GAP} \\\hline [48, 34] & $(\mathbb{Z}_3 \rtimes Q_8) \times \mathbb{Z}_2$ & 12 & No & Yes & C\ref{covering-semi}\\\hline [48, 35] & $S_3 \times \mathbb{Z}_4 \times \mathbb{Z}_2$ & 12 & No & Yes & \texttt{GAP}\\\hline [48, 36] & $D_{24} \times \mathbb{Z}_2$ & 12 & No & Yes & C\ref{directarbitrary}\\\hline [48, 37] & $(\mathbb{Z}_{12} \times \mathbb{Z}_2) \rtimes \mathbb{Z}_2$(2) & 12 & No & Yes & \texttt{GAP}\\\hline [48, 38] & $D_8 \times S_3$ & 12 & No & Yes & C\ref{directarbitrary}\\\hline [48, 39] & $((\mathbb{Z}_3 \rtimes \mathbb{Z}_4) \times \mathbb{Z}_2) \rtimes 
\mathbb{Z}_2$ & 12 & No & Yes & \texttt{GAP}\\\hline [48, 40] & $Q_8 \times S_3$ & 12 & No & Yes & C\ref{directarbitrary}\\\hline [48, 41] & $(S_3 \times \mathbb{Z}_4) \rtimes \mathbb{Z}_2$ & 12 & No & Yes & \texttt{GAP}\\\hline [48, 42] & $(\mathbb{Z}_3 \rtimes \mathbb{Z}_4) \times \mathbb{Z}_2^2$ & 12 & No & Yes & C\ref{directarbitrary}\\\hline [48, 43] & $((\mathbb{Z}_6 \times \mathbb{Z}_2) \rtimes \mathbb{Z}_2) \times \mathbb{Z}_2$ & 12 & No & Yes & \texttt{GAP}\\\hline [48, 44] & $\mathbb{Z}_2^2 \times \mathbb{Z}_{12}$ & 12 & Yes & Yes & C\ref{directarbitrary}\\\hline [48, 45] & $D_8 \times \mathbb{Z}_6$ & 12 & Yes & Yes & C\ref{directarbitrary}\\\hline [48, 46] & $Q_8 \times \mathbb{Z}_6$ & 12 & Yes & Yes & C\ref{directarbitrary}\\\hline [48, 47] & $((\mathbb{Z}_4 \times \mathbb{Z}_2) \rtimes \mathbb{Z}_2) \times \mathbb{Z}_3$ & 12 & Yes & Yes & T\ref{nilpotent}\\\hline [48, 48] & $S_4 \times \mathbb{Z}_2$ & 12 & No & Yes & \texttt{GAP}\\\hline [48, 49] & $A_4 \times \mathbb{Z}_2^2$ & 6 & No & Yes & C\ref{directarbitrary}\\\hline [48, 50] & $\mathbb{Z}_2^4 \rtimes \mathbb{Z}_3$ & 6 & No & Yes & \texttt{GAP}\\\hline [48, 51] & $\mathbb{Z}_2^3 \times S_3$ & 6 & No & Yes & C\ref{directarbitrary}\\\hline [48, 52] & $\mathbb{Z}_2^3 \times \mathbb{Z}_3$ & 6 & Yes & Yes & C\ref{directarbitrary}\\\hline [50, 1] & $D_{50}$ & 50 & No & No & C\ref{ExpCor}\\\hline [50, 3] & $D_{10} \times \mathbb{Z}_5$ & 10 & No & No & \texttt{GAP}\\\hline Group ID & Name(s) & Exponent & Nilpotent? & Equally Coverable? 
& Reason\\ \hline [50, 4] & $\mathbb{Z}_5^2 \rtimes \mathbb{Z}_2$ & 10 & No & Yes & \texttt{GAP}\\\hline [50, 5] & $ \mathbb{Z}_5^2 \times \mathbb{Z}_2$ & 10 & Yes & Yes & C\ref{directarbitrary}\\\hline [52, 1] & $\mathbb{Z}_{13} \rtimes \mathbb{Z}_4$(1) & 52 & No & No & C\ref{ExpCor}\\\hline [52, 3] & $\mathbb{Z}_{13} \rtimes \mathbb{Z}_4$(2) & 52 & No & No & C\ref{ExpCor}\\\hline [52, 4] & $D_{52}$ & 26 & No & Yes & T\ref{EqCovDn}\\\hline [52, 5] & $\mathbb{Z}_2^2 \times \mathbb{Z}_{13}$ & 26 & Yes & Yes & C\ref{directarbitrary}\\\hline [54, 1] & $D_{54}$ & 54 & No & No & T\ref{EqCovDn}\\\hline [54, 3] & $D_{18} \times \mathbb{Z}_3$ & 18 & No & No & \texttt{GAP}\\\hline [54, 4] & $S_3 \times \mathbb{Z}_9$ & 18 & No & No & \texttt{GAP}\\\hline [54, 5] & $(\mathbb{Z}_3^2 \rtimes \mathbb{Z}_3) \rtimes \mathbb{Z}_2$(1) & 6 & No & No & \texttt{GAP}\\\hline [54, 6] & $(\mathbb{Z}_9 \rtimes \mathbb{Z}_3) \rtimes \mathbb{Z}_2$ & 18 & No & No & \texttt{GAP}\\\hline [54, 7] & $(\mathbb{Z}_9 \times \mathbb{Z}_3) \rtimes \mathbb{Z}_2$ & 18 & No & Yes & \texttt{GAP}\\\hline [54, 8] & $(\mathbb{Z}_3^2 \rtimes \mathbb{Z}_3) \rtimes \mathbb{Z}_2$(2) & 6 & No & Yes & \texttt{GAP}\\\hline [54, 9] & $\mathbb{Z}_{9} \times \mathbb{Z}_3 \times \mathbb{Z}_2$ & 18 & Yes & Yes & T\ref{nilpotent}\\\hline [54, 10] & $(\mathbb{Z}_3^2 \rtimes \mathbb{Z}_3) \times \mathbb{Z}_2$ & 6 & Yes & Yes & T\ref{nilpotent}\\\hline [54, 11] & $(\mathbb{Z}_9 \rtimes \mathbb{Z}_3) \times \mathbb{Z}_2$ & 18 & Yes & Yes & T\ref{nilpotent}\\\hline [54, 12] & $\mathbb{Z}_3^2 \times S_3$ & 6 & No & Yes & C\ref{directarbitrary}\\\hline [54, 13] & $(\mathbb{Z}_3^2 \rtimes \mathbb{Z}_2) \times \mathbb{Z}_3$ & 6 & No & Yes & C\ref{directarbitrary}\\\hline [54, 14] & $\mathbb{Z}_3^3 \rtimes \mathbb{Z}_2$ & 6 & No & Yes & \texttt{GAP}\\\hline [54, 15] & $\mathbb{Z}_3^3 \times \mathbb{Z}_2$ & 6 & Yes & Yes & C\ref{directarbitrary}\\\hline [56, 1] & $\mathbb{Z}_7 \rtimes \mathbb{Z}_8$ & 56 & No & No & 
C\ref{ExpCor}\\\hline [56, 3] & $\mathbb{Z}_7 \rtimes Q_8$ & 28 & No & Yes & C\ref{covering-semi}\\\hline [56, 4] & $D_{14} \times \mathbb{Z}_4$ & 28 & No & Yes & \texttt{GAP}\\\hline [56, 5] & $D_{56}$ & 28 & No & Yes & T\ref{EqCovDn}\\\hline [56, 6] & $(\mathbb{Z}_7 \rtimes \mathbb{Z}_4) \times \mathbb{Z}_4$ & 28 & No & Yes & \texttt{GAP}\\\hline [56, 7] & $(\mathbb{Z}_{14} \times \mathbb{Z}_2) \rtimes \mathbb{Z}_2$ & 28 & No & Yes & \texttt{GAP}\\\hline [56, 8] & $\mathbb{Z}_{7} \times \mathbb{Z}_4 \times \mathbb{Z}_2$ & 28 & Yes & Yes & T\ref{nilpotent}\\\hline [56, 9] & $D_8 \times \mathbb{Z}_7$ & 28 & Yes & Yes & C\ref{directarbitrary}\\\hline [56, 10] & $Q_8 \times \mathbb{Z}_7$ & 28 & Yes & Yes & C\ref{directarbitrary}\\\hline [56, 11] & $\mathbb{Z}_2^3 \rtimes \mathbb{Z}_7$ & 14 & No & Yes & \texttt{GAP}\\\hline [56, 12] & $D_{14} \times \mathbb{Z}_2^2$ & 14 & No & Yes & C\ref{directarbitrary}\\\hline [56, 13] & $\mathbb{Z}_2^3 \times \mathbb{Z}_7$ & 14 & Yes & Yes & C\ref{directarbitrary}\\\hline [60, 1] & $(\mathbb{Z}_3 \rtimes \mathbb{Z}_4) \times \mathbb{Z}_5$ & 60 & No & No & C\ref{ExpCor}\\\hline [60, 2] & $(\mathbb{Z}_5 \rtimes \mathbb{Z}_4) \times \mathbb{Z}_3(1)$ & 60 & No & No & C\ref{ExpCor}\\\hline [60, 3] & $\mathbb{Z}_{15} \rtimes \mathbb{Z}_4$ & 60 & No & No & C\ref{ExpCor}\\\hline [60, 5] & $A_5$ & 30 & No & No & P\ref{simpleexp}\\\hline Group ID & Name(s) & Exponent & Nilpotent? & Equally Coverable?
& Reason\\ \hline [60, 6] & $(\mathbb{Z}_5 \rtimes \mathbb{Z}_4) \times \mathbb{Z}_3(2)$ & 60 & No & No & C\ref{ExpCor}\\\hline [60, 7] & $\mathbb{Z}_{15} \rtimes \mathbb{Z}_4$ & 60 & No & No & C\ref{ExpCor}\\\hline [60, 8] & $D_{10} \times S_3$ & 30 & No & Yes & \texttt{GAP}\\\hline [60, 9] & $A_4 \times \mathbb{Z}_5$ & 30 & No & No & \texttt{GAP}\\\hline [60, 10] & $D_{10} \times \mathbb{Z}_6$ & 30 & No & Yes & \texttt{GAP}\\\hline [60, 11] & $S_3 \times \mathbb{Z}_{10}$ & 30 & No & Yes & \texttt{GAP}\\\hline [60, 12] & $D_{60}$ & 30 & No & Yes & T\ref{EqCovDn}\\\hline [60, 13] & $\mathbb{Z}_2^2 \times \mathbb{Z}_{15}$ & 30 & Yes & Yes & C\ref{directarbitrary}\\\hline \end{longtable} Table 2: Classification of Remaining Groups Up to Order 60 \end{center} \vspace{5pt} \begin{center} \begin{tabular}{|c|c|c|c|c|c|}\hline Group & Order & Exponent & Nilpotent? & Equally Coverable? & Reason\\\hline $A_5 \cong L_2(4) \cong L_2(5)$ & 60 & 30 & No & No & P\ref{simpleexp}\\\hline $A_6 \cong L_2(9) \cong S_4(2)'$ & 360 & 60 & No & No & \texttt{GAP}\\\hline $L_2(8) \cong R(3)'$ & 504 & 126 & No & No & \texttt{GAP}\\\hline $L_2(11)$ & 660 & 330 & No & No & P\ref{simpleexp}\\\hline $L_2(13)$ & 1092 & 546 & No & No & P\ref{simpleexp}\\\hline $L_2(17)$ & 2448 & 1224 & No & No & P\ref{simpleexp}\\\hline $A_7$ & 2520 & 420 & No & No & \texttt{GAP} \\\hline $L_2(19)$ & 3420 & 1710 & No & No & P\ref{simpleexp}\\\hline $L_2(16)$ & 4080 & 510 & No & No & \texttt{GAP}\\\hline $L_3(3)$ & 5616 & 312 & No & No & \texttt{GAP}\\\hline $U_3(3) \cong G_2(2)'$ & 6048 & 168 & No & No & \texttt{GAP}\\\hline $L_2(23)$ & 6072 & 3036 & No & No & P\ref{simpleexp}\\\hline $L_2(25)$ & 7800 & 780 & No & No & \texttt{GAP}\\\hline $M_{11}$ & 7920 & 1320 & No & No & \texttt{GAP}\\\hline $L_2(27)$ & 9828 & 546 & No & No & \texttt{GAP}\\\hline $L_2(29)$ & 12180 & 6090 & No & No & P\ref{simpleexp}\\\hline $L_2(31)$ & 14880 & 7440 & No & No & P\ref{simpleexp}\\\hline $A_8 \cong L_4(2)$ & 20160 & 420 & No & No 
& \texttt{GAP}\\\hline $L_3(4)$ & 20160 & 420 & No & No & \texttt{GAP}\\\hline $U_4(2) \cong S_4(3)$ & 25920 & 180 & No & No & \texttt{GAP}\\\hline $Sz(8)$ & 29120 & 1820 & No & No & \texttt{GAP}\\\hline $L_2(32)$ & 32736 & 2046 & No & No & \texttt{GAP}\\\hline $U_3(4)$ & 62400 & 780 & No & No & \texttt{GAP}\\\hline $M_{12}$ & 95040 & 1320 & No & No & \texttt{GAP}\\\hline \end{tabular}\vspace{5pt}\\ Table 3: Finite Simple Groups of Order Less than 100,000 \end{center}\newpage Something interesting to note from Table 3 is that none of the finite simple groups of order less than 100,000 listed there has an equal covering. The question now arises whether the following conjecture is true:\vspace{5pt}\\ \textbf{Conjecture:} If $G$ is a finite simple group, then $G$ has no equal covering. \subsection{\texttt{GAP} Code} In Tables 1 and 2, we determined the equal covering status of the non-cyclic groups up to order 60 that we were not able to settle using our theorems alone, and in Table 3 we did the same for the non-cyclic finite simple groups of order less than 100,000. In many instances, we relied on \texttt{GAP} for insight into whether or not a given group has an equal covering. Before demonstrating the code I used to determine whether a group has an equal covering, let us familiarize ourselves with some of the functions that appear in it.
\begin{enumerate} \item \texttt{DivisorsInt(n)} is a function whose output is the set of positive divisors of the integer \texttt{n} \item \texttt{SmallGroup(n,m)} returns the $m$th group of order $n$ in \texttt{GAP}'s library of groups, i.e.~the group with ID $[n,m]$ (primarily applicable to the groups in Table 1) \item \texttt{Subgroups(G)} provides the collection of all subgroups of the group \texttt{G} \item \texttt{Append(set1, set2)} is a function that takes two sets \texttt{set1} and \texttt{set2} as inputs and appends the elements of \texttt{set2} to the set of elements of \texttt{set1} \item \texttt{ShallowCopy(set)} makes a mutable copy of the set of elements in \texttt{set}. This is useful for working with sets that are immutable, that is, sets that cannot be modified in any way. \end{enumerate} Now that we know the parts of the code that one may not immediately recognize, we will now see the general code I used.\vspace{5pt}\\ Suppose that for a given group $G$ with ID $[n,m]$ we want to determine whether it has an equal covering. After computing the exponent of $G$ using \texttt{Exponent(G)}, we run the following algorithm over every positive integer $d$ that 1) is a proper divisor of the order of $G$ and 2) is divisible by the exponent of $G$:
\begin{verbatim}
(1)  G := SmallGroup(n,m); S := Subgroups(G);
(2)  D := DivisorsInt(Order(G)); orders := [];
(3)  for d in D do
(4)    if d < Order(G) and RemInt(d, Exponent(G)) = 0 then
(5)      Append(orders, [d]);
(6)    fi;
(7)  od;
(8)  truthvalues := [];
(9)  for d in orders do
(10)   union := [];
(11)   for s in S do
(12)     if Order(s) = d then
(13)       Append(union, ShallowCopy(Elements(s)));
(14)     fi;
(15)   od;
(16)   Append(truthvalues, [Elements(G) = Set(union)]);
(17) od;
(18) truthvalues;
\end{verbatim}
We can take a closer look at this code to see what it really does. In line:\vspace{5pt}\\ (1) I designate the group we are dealing with by \texttt{G} and its set of subgroups by \texttt{S}.
Note: Most of the groups in Table 3 could not be designated by a group ID as we did for the groups in Tables 1 and 2, and instead had to be set using group name functions in \texttt{GAP}. As an example, for $M_{11}$ I had to write \texttt{G := MathieuGroup(11)}.\vspace{5pt}\\ (2)-(7) I compute the set of integers that are proper divisors of the order of $G$ and are divisible by the exponent of $G$, and call this set \texttt{orders}.\vspace{5pt}\\ (8)-(17) I determine for each $d$ in \texttt{orders} whether the set of elements of $G$ equals the union of all subgroups in \texttt{S} of order $d$.\vspace{5pt}\\ After executing line 18, we obtain a list of truth values. The group has an equal covering if at least one \texttt{true} appears; if none appears, we can conclude that $G$ has no equal covering. \subsubsection*{The Case of $M_{12}$} When running the code above for the Mathieu group $M_{12}$, I ran into the issue of \texttt{GAP} not completing the computation due to the large number of subgroups $M_{12}$ possesses (214,871 to be exact). As \texttt{GAP} suggested, I used the \texttt{ConjugacyClassesSubgroups} function, which returns the set of all conjugacy classes of subgroups of the group, each given by a representative. As a work-around, I came up with the following code to determine whether $M_{12}$ possesses an equal covering.
\begin{verbatim}
(1)  G := MathieuGroup(12); C := ConjugacyClassesSubgroups(G);
     classes := [];
(2)  for i in [1..Size(C)] do
(3)    if RemInt(Order(C[i][1]), Exponent(G)) = 0
          and Order(C[i][1]) < Order(G) then
(4)      Add(classes, C[i]);
(5)    fi;
(6)  od;
\end{verbatim}
What I have done is compute the set of all conjugacy classes of subgroups of $M_{12}$ that consist of proper subgroups whose order is divisible by the exponent of $M_{12}$.
As a note, since any two subgroups in the same conjugacy class are isomorphic and therefore have the same order, we will take the set-theoretic union of all proper subgroups satisfying the property mentioned above.\vspace{5pt}\\ After line (5), we print the set \texttt{classes}, which tells us the classes that contain proper subgroups whose orders are divisible by the exponent of $M_{12}$. It turns out that \texttt{classes} contains 2 elements, the conjugacy classes of two proper subgroups. Next comes code similar to that above, where we take the union of all the subgroups in \texttt{classes[1]} and \texttt{classes[2]} (both of which are sets of subgroups of order 7920) and determine whether the union is indeed equal to the set of elements of $M_{12}$.
\begin{verbatim}
(6)  c1 := classes[1]; c2 := classes[2]; union := [];
(7)  for s in c1 do
(8)    Append(union, ShallowCopy(Elements(s)));
(9)  od;
(10) for s in c2 do
(11)   Append(union, ShallowCopy(Elements(s)));
(12) od;
(13) Elements(G) = Set(union);
\end{verbatim}
After we execute line 13, we obtain \texttt{false}, indicating that $M_{12}$ has no equal covering. \newpage \begin{thebibliography}{99} \bibitem{atanasov2014loops} R. Atanasov and T. Foguel. Loops that are partitioned by groups. {\em Journal of Group Theory}, 17(5):851-861, 2014. \bibitem{baer1961partitionen} R. Baer. Partitionen endlicher Gruppen. {\em Mathematische Zeitschrift}, 75:333-372, 1961. \bibitem{bell1997analogue} H.E. Bell, A.A. Klein, and L.-C. Kappe. An analogue for rings of a group problem of P. Erd{\H{o}}s and B.H. Neumann. {\em Acta Mathematica Hungarica}, 77(1):57-67, 1997. \bibitem{bhargava2009groups} M. Bhargava. Groups as unions of proper subgroups. {\em American Mathematical Monthly}, 116(5):413-422, 2009. \bibitem{bruckheimer} M. Bruckheimer, A.C. Bryan, and A. Muir. Groups which are the union of three subgroups. {\em American Mathematical Monthly}, 77(1):52-57, 1970. \bibitem{kappe2001analogue} L.-C. Kappe, J.C.
Lennox, and J. Wiegold. An analogue for semigroups of a group problem of P. Erd{\H{o}}s and B.H. Neumann. {\em Bulletin of the Australian Mathematical Society}, 63(1):59-66, 2001. \bibitem{kappe2016covering} L.-C. Kappe, D. Nikolova-Popova, and E. Swartz. On the covering number of small symmetric groups and some sporadic simple groups. {\em Groups Complexity Cryptology}, 8(2):135-154, 2016. \bibitem{cohn1994n} J.H.E. Cohn. On n-sum groups. {\em Mathematica Scandinavica}, 74:44-58, 1994. \bibitem{conway1985finite} J.H. Conway, R.T. Curtis, S.P. Norton, R.A. Parker, and R.A. Wilson. Atlas of Finite Groups. Oxford University Press, 1985. \bibitem{foguel2005loops} T. Foguel and L.-C. Kappe. On loops covered by subloops. {\em Expositiones Mathematicae}, 23(3):255-270, 2005. \bibitem{semi-partitions} T. Foguel, A. Mahmoudifar, A.R. Moghaddamfar, and J. Schmidt. Groups with semi-partitions. {\em Journal of Algebra and its Applications}, 2020. \bibitem{foguel2008groups} T. Foguel and M. Ragland. Groups with a finite covering by isomorphic abelian subgroups. {\em Contemporary Mathematics: Computational Group Theory and the Theory of Groups}, 470(2):75-88, 2008. \bibitem{GAP4} The GAP~Group, \emph{GAP -- Groups, Algorithms, and Programming}, Version $4.11.1$; 2021, \url{https://www.gap-system.org}. \bibitem{garonzi2019integers} M. Garonzi, L.-C. Kappe, and E. Swartz. On integers that are covering numbers of groups. {\em Experimental Mathematics}, pages 1-19, 2019. \bibitem{haber1959groups} S. Haber and A. Rosenfeld. Groups as unions of proper subgroups. {\em The American Mathematical Monthly}, 66(6):491-494, 1959. \bibitem{isaacs1973equally} I.M. Isaacs. Equally partitioned groups. {\em Pacific Journal of Mathematics}, 49(1):109-116, 1973. \bibitem{jafari2018criteria} L. Jafari Taghvasani and M. Zarrin. Criteria for nilpotency of groups via partitions. {\em Mathematische Nachrichten}, 291(17-18):2585-2589, 2018. \bibitem{kegel1961nicht} O. Kegel.
Nicht-einfache Partitionen endlicher Gruppen. {\em Archiv der Mathematik}, 12(1):170-175, 1961. \bibitem{maroti2005covering} A. Mar{\'o}ti. Covering the symmetric groups with proper subgroups. {\em Journal of Combinatorial Theory, Series A}, 110(1):97-111, 2005. \bibitem{miller1906groups} G.A. Miller. Groups in which all the operators are contained in a series of subgroups such that any two have only identity in common. {\em Bulletin of the American Mathematical Society}, 12(9):446-449, 1906. \bibitem{neumann_1976} B.H. Neumann. A problem of Paul Erd{\H{o}}s on groups. {\em Journal of the Australian Mathematical Society}, 21(4):467-472, 1976. \bibitem{neumann1954groups} B.H. Neumann. Groups covered by finitely many cosets. {\em Publ. Math. Debrecen}, 3(3-4):227-242, 1954. \bibitem{rotman2012introduction} J.J. Rotman. An Introduction to the Theory of Groups. Springer, 1995. \bibitem{scorza} G. Scorza. I gruppi che possono pensarsi come somma di tre loro sottogruppi. {\em Boll. Un. Mat. Ital.}, 1926. \bibitem{sizemorepartition} N. Sizemore and T. Foguel. Partition numbers of finite solvable groups. {\em Advances in Group Theory and Applications}, 2018. \bibitem{suzuki1961finite} M. Suzuki. On a finite group with a partition. {\em Archiv der Mathematik}, 12(1):241-254, 1961. \bibitem{tomkinson} M.J. Tomkinson. Groups as the union of proper subgroups. {\em Mathematica Scandinavica}, 81:191-198, 1997. \bibitem{zappa2003partitions} G. Zappa. Partitions and other coverings of finite groups. {\em Illinois Journal of Mathematics}, 47(1-2):571-580, 2003. \end{thebibliography} \end{document}
2206.14088v2
http://arxiv.org/abs/2206.14088v2
Poisson transform and unipotent complex geometry
\documentclass[12pt,letterpaper,titlepage,reqno]{amsart} \usepackage{amsmath, amssymb, amsthm, amsfonts,amscd,amsaddr,enumerate,mathtools} \usepackage{hyperref} \usepackage{backref} \usepackage[ paper=a4paper, portrait=true, textwidth=425pt, textheight=650pt, tmargin=3cm, marginratio=1:1 ]{geometry} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{cor}[theorem]{Corollary} \newtheorem{con}[theorem]{Conjecture} \newtheorem{prop}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \newtheorem{problem}[theorem]{Problem} \theoremstyle{definition} \newtheorem{ex}[theorem]{Example} \newtheorem{rmk}[theorem]{Remark} \numberwithin{equation}{section} \newtheorem*{theoremA*}{Theorem A} \newtheorem*{theoremB*}{Theorem B} \newtheorem*{theorem1*}{Theorem A'} \newtheorem*{theoremC*}{Theorem C} \newtheorem*{theoremD*}{Theorem D} \newtheorem*{theoremE*}{Theorem E} \newtheorem*{theoremF*}{Theorem F} \newtheorem*{theoremE2*}{Theorem E2} \newtheorem*{theoremE3*}{Theorem E3} \newcommand{\bs}{\backslash} \newcommand{\cc}{\mathcal{C}} \newcommand{\C}{\mathbb{C}} \newcommand{\G}{\mathbb{G}} \newcommand{\A}{\mathcal{A}} \newcommand{\Nc}{\mathcal{N}} \newcommand{\Lc}{\mathcal{L}} \newcommand{\E}{\mathcal{E}} \newcommand{\Hc}{\mathcal{H}} \newcommand{\Hb}{\mathbb{H}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\Lb}{\mathbf{L}} \newcommand{\Lbb}{\mathbb{L}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Zc}{\mathcal{Z}} \newcommand{\Sc}{\mathcal{S}} \newcommand{\Oc}{\mathcal{O}} \newcommand{\M}{\mathcal{M}} \newcommand{\Mf}{\mathfrak{M}} \newcommand{\Rc}{\mathcal{R}} \newcommand{\Ec}{\mathcal{E}} \newcommand{\Pc}{\mathcal{P}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\Pb}{\mathbb{P}} \newcommand{\Aut}{\operatorname{Aut}} \newcommand{\Dens}{\operatorname{Dens}} \newcommand{\Gr}{\operatorname{Gr}} \newcommand{\Sl}{\operatorname{SL}} 
\newcommand{\Ind}{\operatorname{Ind}} \newcommand{\id}{\operatorname{id}} \newcommand{\SO}{\operatorname{SO}} \newcommand{\PW}{\operatorname{PW}} \newcommand{\DPW}{\operatorname{\mathcal{D}PW}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\End}{\operatorname{End}} \newcommand{\Herm}{\operatorname{Herm}} \newcommand{\OO}{\operatorname{O}} \newcommand{\GL}{\operatorname{GL}} \newcommand{\SL}{\operatorname{SL}} \newcommand{\SP}{\operatorname{Sp}} \newcommand{\SU}{\operatorname{SU}} \newcommand{\tr}{\operatorname{tr}} \newcommand{\im}{\operatorname{Im}} \newcommand{\Sp}{\operatorname{Sp}} \newcommand{\Lie}{\operatorname{Lie}} \newcommand{\Ad}{\operatorname{Ad}} \newcommand{\ima}{\operatorname{im}} \newcommand{\ad}{\operatorname{ad}} \newcommand{\diag}{\operatorname{diag}} \newcommand{\ord}{\operatorname{ord}} \newcommand{\pr}{\operatorname{pr}} \newcommand{\Pol}{\operatorname{Pol}} \newcommand{\vol}{\operatorname{vol}} \newcommand{\res} {\operatorname{Res}} \newcommand{\Spec}{\operatorname{spec}} \newcommand{\supp}{\operatorname{supp}} \newcommand{\Span}{\operatorname{span}} \newcommand{\Spin}{\operatorname{Spin}} \newcommand{\Sym}{\operatorname{Sym}} \newcommand{\Skew}{\operatorname{Skew}} \newcommand{\err}{\operatorname{err}} \newcommand{\rank}{\operatorname{rank}} \newcommand{\PSO}{\operatorname{PSO}} \newcommand{\PSl}{\operatorname{PSl}} \newcommand{\re}{\operatorname{Re}} \newcommand{\sym}{\operatorname{sym}} \newcommand{\bnb}{\operatorname{{\bf\oline n}}} \newcommand{\ba}{\operatorname{{\bf a}}} \newcommand{\bm}{\operatorname{{\bf m}}} \newcommand{\bn}{\operatorname{{\bf n}}} \newcommand{\Mat}{\operatorname{Mat}} \def\sG{\mathsf{G}} \def\sN{\mathsf{N}} \def\sA{\mathsf{A}} \def\sP{\mathsf{P}} \def\sU{\mathsf{U}} \def\sA{\mathsf{A}} \def\sL{\mathsf{L}} \def\cP{\mathcal{P}} \def\hat{\widehat} \def\af{\mathfrak{a}} \def\bfrak{\mathfrak{b}} \def\e{\epsilon} \def\gf{\mathfrak{g}} \def\ff{\mathfrak{f}} \def\cf{\mathfrak{c}} \def\df{\mathfrak{d}} 
\def\ef{\mathfrak{e}} \def\hf{\mathfrak{h}} \def\kf{\mathfrak{k}} \def\lf{\mathfrak{l}} \def\mf{\mathfrak{m}} \def\nf{\mathfrak{n}} \def\of{\mathfrak{o}} \def\pf{\mathfrak{p}} \def\qf{\mathfrak{q}} \def\rf{\mathfrak{r}} \def\sf{\mathfrak{s}} \def\sl{\mathfrak{sl}} \def\gl{\mathfrak{gl}} \def\symp{\mathfrak{sp}} \def\so{\mathfrak{so}} \def\sp{\mathfrak{sp}} \def\su{\mathfrak{su}} \def\tf{\mathfrak{t}} \def\uf{\mathfrak{u}} \def\vf{\mathfrak{v}} \def\zf{\mathfrak{z}} \def\la{\langle} \def\ra{\rangle} \def\1{{\bf1}} \def\cS{\mathcal{S}} \def\U{\mathcal{U}} \def\Ac{\mathcal{A}} \def\B{\mathcal{B}} \def\Cc{\mathcal{C}} \def\Tc{\mathcal{T}} \def\D{\mathcal {D}} \def\Ic{\mathcal {I}} \def\G{\mathcal{G}} \def\Oc{\mathcal{O}} \def\P{\mathbb{P}} \def\cR{\mathcal{R}} \def\M{\mathcal{M}} \def\oline{\overline} \def\F{\mathcal{F}} \def\V{\mathcal{V}} \def\W{\mathcal{W}} \def\cN{\mathcal{N}} \def\BRG{B_{R,\gf}} \def\Unitary{\operatorname{U}} \def\Field{\mathbb{F}} \def\propertyUI{{\rm (I)}} \def\UIprime{{\rm (I*)}} \def\tilde{\widetilde} \def\Sphere{\mathbf{S}} \def\Sym{\mathrm{Sym}} \def\Pol{\operatorname{Pol}} \def\tilde{\widetilde} \def\Diff{\mathbb{D}} \hyphenation{hy-per-geo-me-tric} \def\oline{\overline} \def\la{\langle} \def\ra{\rangle} \usepackage[usenames]{color} \title[Poisson transform] {Poisson transform and unipotent complex geometry} \begin{document} \begin{abstract} Our concern is with Riemannian symmetric spaces $Z=G/K$ of the non-compact type and more precisely with the Poisson transform $\Pc_\lambda$ which maps generalized functions on the boundary $\partial Z$ to $\lambda$-eigenfunctions on $Z$. Special emphasis is given to a maximal unipotent group $N<G$ which naturally acts on both $Z$ and $\partial Z$. The $N$-orbits on $Z$ are parametrized by a torus $A=(\R_{>0})^r<G$ (Iwasawa) and letting the level $a\in A$ tend to $0$ on a ray we retrieve $N$ via $\lim_{a\to 0} Na$ as an open dense orbit in $\partial Z$ (Bruhat). 
For positive parameters $\lambda$ the Poisson transform $\Pc_\lambda$ is defined and injective for functions $f\in L^2(N)$ and we give a novel characterization of $\Pc_\lambda(L^2(N))$ in terms of complex analysis. For that we view eigenfunctions $\phi = \Pc_\lambda(f)$ as families $(\phi_a)_{a\in A}$ of functions on the $N$-orbits, i.e. $\phi_a(n)= \phi(na)$ for $n\in N$. The general theory then tells us that there is a tube domain $\Tc=N\exp(i\Lambda)\subset N_\C$ such that each $\phi_a$ extends to a holomorphic function on the scaled tube $\Tc_a=N\exp(i\Ad(a)\Lambda)$. We define a class of $N$-invariant weight functions ${\bf w}_\lambda$ on the tube $\Tc$, rescale them for every $a\in A$ to a weight ${\bf w}_{\lambda, a}$ on $\Tc_a$, and show that each $\phi_a$ lies in the $L^2$-weighted Bergman space $\B(\Tc_a, {\bf w}_{\lambda, a}):=\Oc(\Tc_a)\cap L^2(\Tc_a, {\bf w}_{\lambda, a})$. The main result of the article then describes $\Pc_\lambda(L^2(N))$ as those eigenfunctions $\phi$ for which $\phi_a\in \B(\Tc_a, {\bf w}_{\lambda, a})$ and $$\|\phi\|:=\sup_{a\in A} a^{\re\lambda -2\rho} \|\phi_a\|_{\B_{a,\lambda}}<\infty$$ holds. \end{abstract} \author[Gimperlein]{Heiko Gimperlein} \address{Engineering Mathematics\\ Leopold-Franzens-Universit\"at Innsbruck\\ 6020 Innsbruck, Austria\\ {\tt [email protected]}} \author[Kr\"otz]{Bernhard Kr\"otz} \address{Institut f\"ur Mathematik\\ Universit\"at Paderborn\\Warburger Str.
100, 33098 Paderborn, Germany \\ {\tt [email protected]}} \author[Roncal]{Luz Roncal} \address{BCAM - Basque Center for Applied Mathematics\\ 48009 Bilbao, Spain and\\ Ikerbasque Basque Foundation for Science, 48011 Bilbao, Spain and\\ Universidad del Pa\'is Vasco / Euskal Herriko Unibertsitatea, 48080 Bilbao, Spain\\ {\tt [email protected]}} \author[Thangavelu]{Sundaram Thangavelu} \address{Department of Mathematics\\ Indian Institute of Science\\ 560 012 Bangalore, India\\ {\tt [email protected]}} \maketitle \section{Introduction} This article considers range theorems for the Poisson transform on Riemannian symmetric spaces $Z$ in the context of horospherical complex geometry. We assume that $Z$ is of non-compact type and let $G$ be the semisimple Lie group of isometries of $Z$. Then $Z$ is homogeneous for $G$ and identified as $Z=G/K$, where $K\subset G$ is a maximal compact subgroup and stabilizer of a fixed base point $z_0\in Z$. Classical examples are the real hyperbolic spaces which will receive special explicit attention at the end of the article. \par The Poisson transform maps sections of line bundles over the compact boundary $\partial Z$ to eigenfunctions of the commutative algebra of $G$-invariant differential operators $\mathbb{D}(Z)$ on $Z$. Recall that $\partial Z = G/{ \oline P}$ is a real flag manifold for ${ \oline P =MA\oline N}$ a minimal parabolic subgroup originating from an Iwasawa decomposition ${ G=KA\oline N}$ of $G$. The line bundles we consider are parametrized by the complex characters $\lambda$ of the abelian group $A$, and we write $\Pc_\lambda$ for the corresponding Poisson transform. { We let $N $ be the unipotent radical of the parabolic subgroup $P=MAN$ opposed to $\oline P$. } \par The present paper initiates the study of the Poisson transform in terms of the $N$-geometry of both $Z$ and $\partial Z$. 
Identifying the contractible group $N$ with its open dense orbit in $\partial Z$, functions on $N$ correspond to sections of the line bundle via extension by zero. On the other hand $N\bs Z\simeq A$. Hence, given a function $f\in L^2(N)$ with Poisson transform $\phi { =} \Pc_\lambda(f)$, it is natural to consider the family $\phi_a$, $a\in A \simeq N\bs Z$, of functions restricted to the $N$-orbits $Na\cdot z_0\subset Z$. A basic observation then is that the functions $\phi_a$ extend holomorphically to $N$-invariant tubular neighborhoods $\Tc_a\subset N_\C$ of $N$. Our main result, Theorem \ref{maintheorem}, identifies for positive parameters $\lambda$ the image $\Pc_\lambda(L^2(N))$ with a class of families $\phi_a$ in weighted Bergman spaces $\B(\Tc_a, {\bf w}_{\lambda, a})$ on these tubes $\Tc_a$. \par Range theorems for the Poisson transform in terms of the $K$-geometry of both $\partial Z$ and $Z$ were investigated in \cite{I} for spaces of rank one. Note that $\partial Z\simeq K/M$ and that every line bundle over $K/M$ is trivial, so that sections can be identified with functions on $K/M$. On the other hand $K\bs Z\simeq A/W$ with $W$ the little Weyl group, a finite reflection group. Given a function $f \in L^2(K/M)$ the image $\phi=\Pc_\lambda(f)$ therefore induces a family of partial functions $\phi_a: K\to \C$ with $\phi_a(k):=\phi(ka\cdot z_0)$ on the $K$-orbits in $Z$ parametrized by $a\in A$. As $\phi$ is continuous, we have $\phi_a\in L^2(K)$, and \cite{I} characterizes the image $\Pc_\lambda(L^2(K/M))$ in terms of the growth of $\|\phi_a\|_{L^2(K)}$ and suitable maximal functions. Interesting follow up work includes \cite{BOS} and \cite{Ka}. \bigskip To explain our results in more detail, we first describe our perspective on eigenfunctions of the algebra $\mathbb{D}(Z)$. The Iwasawa decomposition $G=KAN$ allows us to identify $Z=G/K$ with the solvable group $S=NA$. 
Inside $\mathbb{D}(Z)$ one finds a distinguished element, the Laplace--Beltrami operator $\Delta_Z$. Upon identifying $Z$ with $S$ we use the symbol $\Delta_S$ instead of $\Delta_Z$. Now it is a remarkable fact that all $\Delta_S$-eigenfunctions extend to a universal $S$-invariant domain $\Xi_S\subset S_\C$. In fact, $\Xi_S$ is closely related to the crown domain $\Xi\subset Z_\C=G_\C/K_\C$ of $Z$, and we refer to Section~\ref{section crown} for details. In particular, there exists a maximal domain $0\in \Lambda\subset \nf = \rm{Lie}(N)$ such that \begin{equation} \label{XiS}\Xi_S \supset S \exp(i\Lambda)\,. \end{equation} The domain $\Lambda$ has its origin in the unipotent model of the crown domain \cite[Sect.~8]{KO} and, except in the rank one cases, its geometry is not known. Proposition \ref{prop bounded} implies that $\Lambda$ is bounded for a class of classical groups, including $G=\GL(n,\R)$. It is an interesting open problem whether $\Lambda$ is bounded or convex in general. \par Now let $\phi: S\to \C$ be an eigenfunction of $\Delta_S$. For each $a\in A$ we define the partial function $$\phi_a: N \to \C, \quad n\mapsto \phi(na)\, .$$ Because eigenfunctions extend to $\Xi_S$, we see from \eqref{XiS} that $\phi_a$ extends to a holomorphic function on the tube domain \begin{equation}\label{defta} \Tc_a:= N\exp(i\Lambda_a)\subset N_\C \end{equation} with \begin{equation}\label{deflambdaa} \Lambda_a= \Ad(a)\Lambda\, . \end{equation} The general perspective of this paper is to view an eigenfunction $\phi$ as a family of holomorphic functions $(\phi_a)_{a\in A}$ with $\phi_a$ belonging to $\Oc(\Tc_a)$, the space of all holomorphic functions on $\Tc_a$. \par We now explain the Poisson transform and how eigenfunctions of the algebra $\mathbb{D}(Z)$ can be characterized by their boundary values on $\partial Z$. Fix a minimal parabolic subgroup $P=MAN$ with $M=Z_K(A)$. 
If $\theta: G\to G$ denotes the Cartan involution with fixed point group $K$, we consider $\oline N=\theta(N)$ and the parabolic subgroup $\oline P= M A \oline N$ opposite to $P$. Because $N\oline P\subset G$ is open dense by the Bruhat decomposition, it proves convenient to identify $\partial Z$ with $G/\oline P$. In the sequel we view $N\subset \partial Z=G/\oline P$ as an open dense subset. \par For each $\lambda\in \af_\C^*$ one defines the Poisson transform (in the $N$-picture) as $$ \Pc_\lambda: C_c^\infty(N) \to C^\infty(S)\ , $$ \begin{equation} \label{Poisson0} \Pc_\lambda f(s)= \int_N f(x) {\bf a} (s^{-1} x)^{\lambda + \rho} \ dx\ \qquad (s\in S)\ , \end{equation} where ${\bf a}: KA\oline N \to A$ is the middle projection with respect to the opposite Iwasawa decomposition, { $a^\lambda:= e^{\lambda(\log a)}$ for $a\in A$} and $\rho { :=\frac{1}{2}\sum_{\alpha\in \Sigma^+} (\dim \gf^\alpha)\cdot \alpha}\in \af^*$ is the Weyl half sum with respect to $P$. In this article we restrict to parameters $\lambda$ with $\re \lambda (\alpha^\vee)>0$ for all positive co-roots $\alpha^\vee\in \af$, denoted in the following as $\re\lambda>0$. This condition ensures that the integral defining the Harish-Chandra ${\bf c}$-function $${\bf c}(\lambda):=\int_N {\bf a}(n)^{\lambda+\rho} \ dn$$ converges absolutely. \par Recall the Harish-Chandra isomorphism between $\mathbb{D}(Z)$ and the $W$-invariant polynomials on $\af_\C^*$, where $W$ is the Weyl group of the pair $(\gf, \af)$. In particular, $\Spec \mathbb{D}(Z)=\af_\C^*/ W$, and for each $[\lambda]=W\cdot \lambda$ we denote by $\E_{[\lambda]}(S)$ the corresponding eigenspace on $S\simeq Z$. The image of the Poisson transform consists of eigenfunctions, $\operatorname{im} \Pc_\lambda(C_c^\infty(N))\subset \E_{[\lambda]}(S)$. Because ${\bf a}(\cdot)^{\lambda+\rho}$ belongs to $L^1(N)$ for $\re\lambda>0$, $\Pc_\lambda$ extends from $C_c^\infty(N)$ to $L^2(N)$. 
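To place \eqref{Poisson0} in a familiar context, consider the hyperbolic plane $Z=\SO_e(2,1)/\SO(2)$ realized as the upper half plane $N\times A\simeq \R\times \R_{>0}$. With a suitable normalization of these identifications, formula \eqref{Poisson0} becomes the classical integral $$\Pc_\lambda f(x,y)=\int_\R f(t)\left(\frac{y}{(x-t)^2+y^2}\right)^{\lambda+\frac{1}{2}}\, dt \qquad (x\in\R,\ y>0)\, ,$$ and for $\lambda=\rho=\frac{1}{2}$ the kernel reduces to a multiple of the classical Poisson kernel of the upper half plane, so that $\Pc_\rho(f)$ is harmonic.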
The goal of this article is to characterize $\Pc_\lambda(L^2(N))$. As a first step towards this goal, for $f\in L^2(N)$ and $\phi=\Pc_\lambda(f)$ { in Lemma \ref{lemmaeasybound} we} note the estimate $$\|\phi_a\|_{L^2(N)} \leq a^{\rho -\re \lambda}{\bf c}(\re \lambda) \|f\|_{L^2(N)}$$ for all $a\in A$. The basic observation in this paper is that the kernel $n\mapsto {\bf a}(n)^{\lambda+\rho}$ underlying the Poisson transform \eqref{Poisson0} extends holomorphically to $\Tc^{-1}:=\exp(i\Lambda)N$ and remains $N$-integrable along every fiber, i.e.~for any fixed $y\in \exp(i\Lambda)$ the kernel $n\mapsto {\bf a}(yn)^{\lambda+\rho}$ is integrable over $N$. This allows us to formulate a condition for positive left $N$-invariant continuous weight functions ${\bf w}_\lambda$ on the tubes $\Tc=N\exp(i\Lambda)$, namely (see also \eqref{request w}) \begin{equation} \label{request intro w}\int_{\exp(i\Lambda)} {\bf w}_\lambda(y) \|\delta_{\lambda, y}\|^2_{ L^1(N)} \ dy <\infty\, ,\end{equation} { where the function $\delta_{\lambda, y}$ is defined in \eqref{deltadef}.} In the sequel we assume that ${\bf w}_\lambda$ satisfies condition \eqref{request intro w} and define rescaled weight functions $${\bf w}_{\lambda,a}: \Tc_a\to \R_{>0}, \ \ ny\mapsto {\bf w}_\lambda(\Ad(a^{-1})y)\qquad (y\in\exp(i\Lambda_a))$$ on the scaled tubes $\Tc_a$. The upshot then is that $\phi_a\in \Oc(\Tc_a)$ lies in the weighted Bergman space $$\B(\Tc_a, {\bf w}_{\lambda,a}):=\{ \psi\in \Oc(\Tc_a)\mid \|\psi\|^2_{\B_{a, \lambda}}:= \int_{\Tc_a} |\psi(z)|^2 {\bf w}_{\lambda,a}(z) dz <\infty\}$$ where $dz$ is the Haar measure on $N_\C$ restricted to $\Tc_a$. 
This motivates the definition of the following Banach subspace of $\E_{[\lambda]}(S)\subset \Oc(\Xi_S)$: $$\B(\Xi_S, \lambda):=\{ \phi \in \E_{[\lambda]}(S)\mid \|\phi\|:=\sup_{a\in A} a^{\re\lambda -2\rho} \|\phi_a\|_{\B_{a,\lambda}}<\infty\}\, .$$ It will be a consequence of Theorem \ref{maintheorem} below that $\B(\Xi_S, \lambda)$ as a vector space does not depend on the particular choice of the positive left $N$-invariant weight function ${\bf w}_\lambda$ satisfying \eqref{request intro w}. The main result of this article now reads: \begin{theorem}\label{maintheorem}Let $Z=G/K$ be a Riemannian symmetric space and $\lambda\in \af_\C^*$ be a parameter such that $\re \lambda>0$. Then $$\Pc_\lambda: L^2(N) \to \B(\Xi_S, \lambda)$$ is an isomorphism of Banach spaces, i.e.\ there exist $c,C>0$ depending on ${\bf w}_\lambda$ such that $$c \|\Pc_\lambda(f)\|\leq \|f\|_{L^2(N)} \leq C \|\Pc_\lambda(f)\|\qquad (f\in L^2(N))\, .$$ \end{theorem} Let us mention that the surjectivity of $\Pc_\lambda$ relies on the established Helgason conjecture (see \cite{K6,GKKS}) and the Bergman inequality. We now recall that $\Pc_\lambda$ is inverted by the boundary value map, that is $${1\over {\bf c}(\lambda)} \lim_{a\to \infty\atop a\in A^-} a^{\lambda-\rho} \Pc_\lambda f(na) = f(n)\qquad (n\in N)\, ,$$ where the limit is taken along a fixed ray in the interior of the negative Weyl chamber $A^-$. Define the positive constant \begin{equation} \label{def w const} w(\lambda):=\left[\int_{\exp(i\Lambda)} {\bf w}_\lambda(y) \ dy\right]^{1\over 2}. \end{equation} We observe that this constant is indeed finite, see Subsection \ref{sub:norm}.
{ There} we obtain a corresponding norm limit formula: \begin{theorem}\label{norm limit intro} For any $f\in L^2(N)$, $\phi=\Pc_\lambda(f)$ we have \begin{equation} \label{norm limit2} {1\over w(\lambda) |{\bf c}(\lambda)|} a^{\re \lambda - 2\rho} \|\phi_a\|_{\B_{a,\lambda}} \to \|f\|_{L^2(N)} \qquad (f\in L^2(N))\end{equation} for $a\to \infty$ on a ray in $A^-$. \end{theorem} Let us emphasize that the weight functions ${\bf w}_\lambda$ are not unique and it is natural to ask about the existence of optimal choices, i.e.~choices for which $\Pc_\lambda$ establishes an isometry between $L^2(N)$ and $\B(\Xi_S, \lambda)$, in other words whether a norm-sup identity holds: \begin{equation} \label{norm sup} \sup_{a\in A} {1\over w(\lambda) |{\bf c}(\lambda)|} a^{\re \lambda - 2\rho} \|\phi_a\|_{\B_{a,\lambda}} =\|f\|_{L^2(N)} \qquad (f\in L^2(N))\, .\end{equation} The answer is quite interesting in the classical example of the real hyperbolic space $$Z=\SO_e(n+1,1)/\SO(n+1)\simeq \R^n \times \R_{>0} = N\times A$$ where the study was initiated in \cite{RT} and is now completed in Section \ref{sect hyp}. Here $N=\nf=\R^n$ is abelian and we recall the classical formulas for the Poisson kernel and ${\bf c}$-function $${\bf a}(x)^{\lambda+\rho} = ( 1 +{ |x|}^2)^{-(\lambda +n/2)}\qquad (x\in N=\R^n)\, ,$$ $${\bf c}(\lambda)= \pi^{n/2} \frac{\Gamma(2\lambda)}{\Gamma(\lambda+n/2)}\, , $$ { where we write $|\cdot|$ for the Euclidean norm}. It is now easily seen that $\Lambda=\{ y \in \R^n \mid { |y|}<1\}$ is the open unit ball. 
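The verification is a pleasant exercise: the kernel extends holomorphically in $x$ by replacing $|x|^2$ with the holomorphic square $z\cdot z=\sum_j z_j^2$ for $z=x+iy\in\C^n$, and $$1+z\cdot z = 1+|x|^2-|y|^2+2i\, x\cdot y\, .$$ This expression vanishes if and only if $x\cdot y=0$ and $|y|^2=1+|x|^2\geq 1$. Hence $n\mapsto {\bf a}(n)^{\lambda+\rho}$ is holomorphic and zero-free on the tube over the open unit ball, whereas for $|y|=1$ the extension degenerates at $x=0$.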
A natural family of weights to consider is given by powers of the Poisson kernel parametrized by $\alpha>0$ \begin{equation} \label{special weight 1} {\bf w}_{\lambda}^\alpha(z) = (2\pi)^{-n/2} \frac{1}{\Gamma(\alpha)} \left(1-|y|^2\right)_+^{\alpha -1} \qquad (z=x+iy\in \Tc = \R^n +i\Lambda)\, ,\end{equation} where $(\,\cdot\,)_+$ denotes the positive part. These weights satisfy condition \eqref{request intro w} exactly for $ \alpha > \max\{2s-1,0\} $ where $s=\re \lambda$, see Lemma \ref{deltabound}. Moreover, in Theorem \ref{thm hyp} we establish the following: \begin{enumerate} \item \label{one} Condition \eqref{request intro w} is only sufficient and Theorems \ref{maintheorem}, \ref{norm limit intro} hold even for $\alpha>\max\{2s-\frac{n+1}{2}, 0\}$. \item For $\alpha$ as in \eqref{one} and $\lambda=s>0$ real the norm-sup identity \eqref{norm sup} holds. \end{enumerate} Let us stress that \eqref{norm sup} is a new feature and is not recorded (and perhaps not even true) for the range investigations with respect to the $K$-geometry in the rank one case: there, one verifies lim-sup identities which are even weaker than the norm-limit formula in Theorem \ref{norm limit intro}, see \cite{I}. \section{Notation} \label{sec:notation} Most of the notation used in this paper is standard for semisimple Lie groups and symmetric spaces and can be found for instance in \cite{H3} for the semisimple case and, for the general setting, in \cite{W1}. Let $G$ be the real points of a connected algebraic reductive group defined over $\R$ and let $\gf$ be its Lie algebra. Subgroups of $G$ are denoted by capitals. The corresponding subalgebras are denoted by the corresponding fraktur letter, i.e.~$\gf$ is the Lie algebra of $G$ etc. \par We denote by $\gf_\C=\gf\otimes_\R \C$ the complexification of $\gf$ and by $G_{\C}$ the group of complex points. We fix a Cartan involution $\theta$ and write $K$ for the maximal compact subgroup that is fixed by $\theta$.
We also write $\theta$ for the derived automorphism of $\gf$. We write $K_{\C}$ for the complexification of $K$, i.e.~$K_{\C}$ is the subgroup of $G_{\C}$ consisting of the fixed points for the analytic extension of $\theta$. The Cartan involution induces the infinitesimal Cartan decomposition $\gf =\kf \oplus\sf$. Let $\af\subset\sf$ be a maximal abelian subspace. The set of restricted roots of $\af$ in $\gf$ we denote by $\Sigma\subset \af^*\bs \{0\}$ and write $W$ for the Weyl group of $\Sigma$. We record the familiar root space decomposition $$\gf=\af\oplus\mf\oplus \bigoplus_{\alpha\in\Sigma} \gf^\alpha\ ,$$ with $\mf=\zf_\kf(\af)$. Let $A$ be the connected subgroup of $G$ with Lie algebra $\af$ and let $M=Z_{K}(\af)$. We fix a choice of positive roots $\Sigma^+$ of $\af$ in $\gf$ and write $\nf=\bigoplus_{\alpha\in\Sigma^+} \gf^\alpha$ with corresponding unipotent subgroup $N=\exp\nf\subset G$. As customary we set $\oline \nf =\theta(\nf)$ and accordingly $\oline N = \theta(N)$. For the Iwasawa decomposition $G=KA\oline N$ of $G$ we define the projections $\mathbf{k}:G\to K$ and $\mathbf{a}:G\to A$ by $$ g\in \mathbf{k}(g)\mathbf{a}(g)\oline N\qquad(g\in G). $$ Let $\kappa$ be the Killing form on $\gf$ and let $\tilde\kappa$ be a non-degenerate $\Ad(G)$-invariant symmetric bilinear form on $\gf$ such that its restriction to $[\gf,\gf]$ coincides with the restriction of $\kappa$ and $-\tilde\kappa(\,\cdot\,,\theta\,\cdot\,)$ is positive definite. We write $\|\cdot\|$ for the corresponding norm on $\gf$. \section{The complex crown of a Riemannian symmetric space}\label{section crown} The Riemannian symmetric space $Z=G/K$ can be realized as a totally real subvariety of the Stein symmetric space $Z_\C= G_\C/K_\C$: $$ Z=G/K \hookrightarrow Z_\C, \ \ gK\mapsto gK_\C\, .$$ In the following we view $Z\subset Z_\C$ and write $z_0=K\in Z$ for the standard base point. We define the subgroups $A_\C=\exp(\af_\C)$ and $N_\C=\exp(\nf_\C)$ of $G_\C$. 
We denote by $F:=[A_\C]_{2-\rm{tor}}$ the finite group of $2$-torsion elements and note that $F=A_\C \cap K$. Our concern is also with the solvable group $S=AN$ and its complexification $S_\C=A_\C N_\C$. Note that $S\simeq Z$ as transitive $S$-manifolds, but the natural morphism $S_\C\to Z_\C$ is neither onto nor injective. Its image $S_\C \cdot z_0$ is Zariski open in the affine variety $Z_\C$ and we have $S_\C/F \simeq S_\C\cdot z_0$. The maximal $G\times K_\C$-invariant domain in $G_\C$ containing $e$ and contained in $ N_\C A_\C K_\C$ is given by \begin{equation} \label{crown1} \tilde \Xi = G\exp(i\Omega)K_\C\ , \end{equation} where $\Omega=\{ Y\in \af\mid (\forall \alpha\in\Sigma) \alpha(Y)<\pi/2\}$. Note in particular that \begin{equation} \label{c-intersect} \tilde \Xi=\left[\bigcap_{g\in G} g N_\C A_\C K_\C\right]_0\end{equation} with $[\ldots ]_0$ denoting the connected component of $[\ldots]$ containing $e$. Taking right cosets by $K_\C$, we obtain the $G$-domain \begin{equation}\label{crown2} \Xi:=\tilde \Xi/K_\C \subset Z_\C=G_\C/K_\C\ ,\end{equation} commonly referred to as the {\it crown domain}. See \cite{Gi} for the origin of the notion, \cite[Cor.~3.3]{KS} for the inclusion $\tilde \Xi\subset N_\C A_\C K_\C$ and \cite[Th.~4.3]{KO} for the maximality. We recall that $\Xi$ is a contractible space. To be more precise, let $\hat\Omega=\Ad(K)\Omega$ and note that $\hat\Omega$ is an open convex subset of $\sf$. As a consequence of the Kostant convexity theorem it satisfies $\hat\Omega\cap\af=\Omega$ and $p_{\af}(\hat\Omega)=\Omega$, where $p_{\af}$ is the orthogonal projection $\sf\to\af$. The fiber map $$ G\times_{K}\hat\Omega\to\Xi; \quad [g,X]\mapsto g\exp(iX)\cdot K_{\C}\ , $$ is a diffeomorphism by \cite[Prop.~4, 5 and 7]{AG}. Since $G/K\simeq\sf$ and $\hat\Omega$ are both contractible, also $\Xi$ is contractible. In particular, $\Xi$ is simply connected. 
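For orientation we spell out the simplest example: $G=\SL(2,\R)$, $K=\SO(2)$, so that $Z=G/K$ is the hyperbolic plane. Here $\af=\R H$ with $H=\diag(1,-1)$, the restricted root system is $\Sigma=\{\pm\alpha\}$ with $\alpha(tH)=2t$, and therefore $$\Omega=\left\{ tH \mid |t|<\tfrac{\pi}{4}\right\}\,,$$ so that $\Xi=G\exp(i\Omega)\cdot K_\C$ is the classical crown of the hyperbolic plane inside $Z_\C$.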
\par As $\Xi\subset S_\C\cdot z_0$ we also obtain a realization of $\Xi$ in $S_\C/F$ which, by the contractibility of $\Xi$, lifts to an $S$-equivariant embedding $\Xi\hookrightarrow S_\C$. We denote the image by $\Xi_S$. Let us remark that $\Xi_S$ is not known explicitly in appropriate coordinates except when $Z$ has real rank one, in which case it was determined in \cite{CK}. \par We recall the middle projection ${\bf a}: G \to A$ of the Iwasawa decomposition $G=KA\oline{N}$ and note that ${\bf a}$ extends holomorphically to \begin{equation}\label{tilde Xi} \tilde \Xi^{-1} :=\{g^{-1}:g\in\tilde\Xi\}\ . \end{equation} Here we use that $\tilde \Xi\subset \oline N_\C A_\C K_\C$ as a consequence of $\Xi\subset N_\C A_\C K_\C$ and the $G$-invariance of $\Xi$. Moreover, the simple connectedness of $\Xi$ plays a role in obtaining a unique extension ${\bf a}: \tilde \Xi^{-1}\to A_\C$: a priori ${\bf a}$ is only defined as a map to $A_\C/F$. We denote the extension of ${\bf a}$ to $\tilde \Xi^{-1}$ by the same symbol. Likewise one remarks that $\mathbf{k}: G \to K$ extends holomorphically to $\tilde \Xi^{-1}$ as well. \subsection{Unipotent model for the crown} Let us define a domain $\Lambda\subset \nf$ by $$\Lambda:=\{ Y \in \nf\mid \exp(iY)\cdot z_0\in \Xi\}_0$$ where the index $\{\cdot\}_0$ refers to the connected component of $\{\cdot\}$ containing $0$. Then we have $$\Xi=G\exp(i\Lambda)\cdot z_0$$ by \cite[Th. 8.3]{KO}. In general the precise shape of $\Lambda$ is not known except for a few special cases, in particular if the real rank of $G$ is one (see \cite[Sect. 8.1 and 8.2]{KO}). \begin{prop} \label{prop bounded} For $G=\GL(n,\R)$ the domain $\Lambda\subset \nf$ is bounded. \end{prop} \begin{rmk}\label{rmk bounded} A general real reductive group $G$ can be embedded into $\GL(n,\R)$ with compatible Iwasawa decompositions. Then it happens in a variety of cases that the crown domain $\Xi=\Xi(G)$ for $G$ embeds into the one of $\GL(n,\R)$.
For example this is the case for $G=\SL(n,\R), \Sp(n,\R), \Sp(p,q), \SU(p,q)$, and we refer to \cite[Prop. 2.6]{KrSt} for a complete list. In all these cases $\Lambda$ is then bounded as a consequence of Proposition \ref{prop bounded}. \end{rmk} \begin{proof}[Proof of Proposition \ref{prop bounded}] Define $$\Lambda'=\{ Y \in \nf\mid \exp(iY)N \subset K_\C A_\C \oline N_\C\}_0$$ and note that $\Lambda'=-\Lambda$. Now \eqref{c-intersect} for $N$ replaced by $\oline N$ implies $\Lambda\subset \Lambda'$. We will show an even stronger statement by replacing $\Lambda$ by $\Lambda'$; in other words, we search for the largest tube domain $T_{N,\Lambda'}:=\exp(i\Lambda') N$ contained in $K_\C A_\C \oline N_\C$ and show that this tube has bounded base. As usual we let $K_\C= \SO(n,\C)$, $ A_\C=\diag(n, \C^*)$ and $\oline N_\C$ be the unipotent lower triangular matrices. We recall the construction of the basic $K_\C\times \oline N_\C$-invariant functions on $G_\C$. With $e_1, \ldots, e_n$ the standard basis of $\C^n$ we let $v_i:= e_{n-i+1}$, $1\leq i\leq n$. Now for $1\leq k\leq n-1$ we define a holomorphic function on $G_\C = \GL(n,\C)$ by $$f_k(g) = \det \left(\la g(v_i), g(v_j)\ra_{1\leq i,j\leq n-k}\right) \qquad (g\in G_\C)$$ where $\la z,w\ra = z^t w$ is the standard pairing of $\C^n$. As the standard pairing is $K_\C$-invariant we obtain that $f_k$ is left $K_\C$-invariant. Furthermore from $$f_k(g) =\la g(v_1)\wedge\ldots \wedge g(v_{n-k}), g(v_1)\wedge\ldots \wedge g(v_{n-k})\ra_{\bigwedge^{n-k}\C^n}$$ we see that $f_k$ is right-$\oline N_\C$-invariant. In particular we have $$f_k(\kappa a\oline n)= (a_{k+1} \cdot\ldots \cdot a_n)^2 \qquad (\kappa \in K_\C , \oline n\in \oline N_\C)$$ for $a=\diag(a_1, \ldots, a_n)\in A_\C$. Hence $f_k$ has no zeros on $K_\C A_\C \oline N_\C$, and in particular none on the tube domain $T_{N,\Lambda'}$, which is contained in $K_\C A_\C \oline N_\C$.
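The invariance properties just established can be illustrated numerically; the following sketch (an illustration only, not part of the proof; all helper names are ad hoc) checks them for $n=3$. Note that on the diagonal part the bilinear Gram determinant produces the squares $(a_{k+1}\cdots a_n)^2$.

```python
# Sanity check for n = 3: f_k is left SO(3)- and right lower-unitriangular-
# invariant, so on kappa * a * nbar it only sees the diagonal part,
# f_k(kappa a nbar) = (a_{k+1} * ... * a_n)^2.  Plain 3x3 list matrices;
# <z, w> = z^t w is the standard bilinear pairing from the text.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def col(A, j):
    return [A[i][j] for i in range(3)]

def pair(z, w):
    return sum(zi * wi for zi, wi in zip(z, w))

def f(g, k):
    # v_i = e_{n-i+1}: for n = 3 the vectors v_1, v_2 are e_3, e_2 (columns 2, 1)
    vecs = [col(g, 2 - i) for i in range(3 - k)]
    G = [[pair(u, v) for v in vecs] for u in vecs]
    return G[0][0] if k == 2 else G[0][0] * G[1][1] - G[0][1] * G[1][0]

c, s = math.cos(0.7), math.sin(0.7)
kappa = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]         # element of SO(3)
a1, a2, a3 = 2.0, 0.5, 3.0
a = [[a1, 0.0, 0.0], [0.0, a2, 0.0], [0.0, 0.0, a3]]         # element of A
nbar = [[1.0, 0.0, 0.0], [1.3, 1.0, 0.0], [-0.4, 2.1, 1.0]]  # lower unitriangular

g = matmul(matmul(kappa, a), nbar)
assert abs(f(g, 1) - (a2 * a3) ** 2) < 1e-9   # f_1 sees (a_2 a_3)^2
assert abs(f(g, 2) - a3 ** 2) < 1e-9          # f_2 sees a_3^2
```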
\par The functions $f_k$ are right semi-invariant under the maximal parabolic subgroup $\oline P_k = L_k \oline U_k$ with $L_k=\GL(k,\R)\times \GL(n-k,\R)$ embedded block-diagonally and $\oline U_k =\1_n+ \Mat_{(n-k)\times k}(\R)$ with $\Mat_{(n-k)\times k }(\R)$ sitting in the lower left corner. With $U_k= \1_n+ \Mat_{k\times (n-k)}(\R)$ we obtain an abelian subgroup of $N$ with Lie algebra $\uf_k = \Mat_{k \times (n-k)}(\R)$, and we record for $Z=X+iY\in \uf_{k,\C}$ that $$f_k(\exp(Z))= \det (\1_{n-k} + Z^t Z)\, .$$ From this we see that the largest $U_k$-invariant tube domain in $U_{k,\C}=\Mat_{k\times (n-k)}(\C)$ on which $f_k$ extends to a zero-free holomorphic function is given by $$T_k = \Mat_{k\times(n-k)}(\R) + i \Upsilon_k$$ where $$\Upsilon_k=\{ Y\in \Mat_{k\times(n-k)}(\R)\mid \1_{n-k}- Y^tY\ \hbox{is positive definite} \}$$ is bounded and convex. \par With $\nf_k = \lf_k\cap \nf$ we obtain a subalgebra of $\nf$ such that $\nf = \nf_k \ltimes \uf_k$ is a semi-direct product with abelian ideal $\uf_k$. Accordingly we have $N\simeq U_k \times N_k$ under the multiplication map and likewise we obtain, via Lemma \ref{lemma bipolar} below, for each $k$ a diffeomorphic polar map $$\Phi_k: \uf_k \times \nf_k \times N \to N_\C, \ \ (Y_1, Y_2, n)\mapsto \exp(iY_1)\exp(iY_2)n\, .$$ Note that $$\Phi_k^{-1}(T_{N,\Lambda'})=\Lambda_k'\times N$$ with $\Lambda_k'\subset \uf_k\times \nf_k$ a domain containing $0$. Now let $\Lambda_{k,1}'$ be the projection of $\Lambda_k'$ to $\uf_k$ and likewise we define $\Lambda_{k,2}'\subset \nf_k$. Note that $\Lambda_k'\subset \Lambda_{k,1}'\times \Lambda_{k,2}'$. We now claim that $\Lambda_{k,1}'\subset \Upsilon_k$. In fact, let $Y=Y_1+Y_2 \in \Lambda_k'$. Then $\exp(iY_1)\exp(iY_2)\in T_{N,\Lambda'}$ and thus, as $f_k$ is right $N_{k,\C}$-invariant, $$ 0\neq f_k(\exp(iY_1)\exp(iY_2))=f_k(\exp(iY_1))\,.$$ Our claim follows. \par To complete the proof we argue by contradiction and assume that $\Lambda'$ is unbounded.
We will show that this implies that $\Lambda_{k,1}'$ becomes unbounded, a contradiction to the claim above. Suppose now that there is an unbounded sequence $(Y^m)_{m\in \N}\subset \Lambda'$. We write elements $Y\in \nf$ in coordinates $Y=\sum_{1\leq i <j\leq n} Y_{i,j}$. Let now $1\leq k\leq n-1$ be maximal such that all $Y^{m}_{i,j}$ stay bounded for $j\leq k$. Our choice of parabolic subgroup then is $\oline P_k$. By assumption $Y^m_{i, k+1}$ becomes unbounded for some $1\leq i \leq k$; let $1\leq l\leq k$ be maximal with this property. We write elements $Y\in \nf$ as $Y_1+Y_2$ with $Y_1 \in \uf_k$ and $Y_2\in \nf_k$. Now for any $Y=Y_1+Y_2\in \nf$ we find unique $\tilde Y_1, X\in \uf_k$ such that \begin{equation} \label{triple exp} \exp(iY)=\exp(i(Y_1 +Y_2))= \exp(i\tilde Y_1) \exp(iY_2)\exp(X)\end{equation} as a consequence of the fact that $\Phi_k$ is diffeomorphic and the identity $$\exp(iY) U_{k, \C} = \exp(iY_2) U_{k,\C}$$ in the Lie group $N_\C/ U_{k,\C}$. By Dynkin's formula and the abelianness of $\uf_k$ we infer from \eqref{triple exp} $$iY= ((i\tilde Y_1*iY_2)*X)=i\tilde Y_1 +iY_2+X+\sum_{j=1}^{n-1} c_j i^{j+1} (\ad Y_2)^j \tilde Y_1 +\sum_{j=1}^{n-1} d_j i^j (\ad Y_2)^j X$$ for certain rational constants $c_j, d_j\in \Q$. In particular, comparing real and imaginary parts on both sides we obtain two equations: \begin{equation} \label{matrix1} Y_1 = \tilde Y_1 +\sum_{j=1}^{n_1} c_{2j}(-1)^j (\ad Y_2)^{2j} \tilde Y_1 +\sum_{j=0}^{n_2} d_{2j+1} (-1)^{j} (\ad Y_2)^{2j+1} X \end{equation} \begin{equation} \label{matrix2} X= \sum_{j=0}^{n_1} c_{2j+1}(-1)^j (\ad Y_2)^{2j+1} \tilde Y_1 -\sum_{j=1}^{n_2} d_{2j} (-1)^{j} (\ad Y_2)^{2j} X, \end{equation} where $n_1=\lfloor \frac{n-1}{2}\rfloor$ and $n_2=\lceil \frac{n-1}{2}-1\rceil$. Our claim now is that $(\tilde Y_1^m)_{l, k+1}$ is unbounded. If $l=k$, then we deduce from \eqref{matrix1} that $(Y_1^m)_{k, k+1}= (\tilde Y_1^m)_{k, k+1}$ is unbounded, i.e., our desired contradiction. Now suppose $l<k$.
We are interested in the entries of $\tilde Y_1$ in the first column and for that we let $\pi_1: \uf_{k,\C}=\Mat_{k\times (n-k)} (\C) \to \C^k$ be the projection onto the first column. We decompose $\lf_k=\lf_{k,1} +\lf_{k,2}$ with $\lf_{k,1}= \gl(k, \R)$ and $\lf_{k,2}=\gl(n-k,\R)$. Write $\uf_{k,j}=\R^k$ for the subalgebra of $\uf_k$ consisting of the $j$-th column and observe \begin{align} \label{pi1} \pi_1([\lf_{k,2}\cap \nf_k, \uf_k])&=\{0\}\\ \label{lfk1} [\lf_{k,1}, \uf_{k,j}]&\subset \uf_{k,j}. \end{align} Now write $Y_2 = Y_{2|1} + Y_{2|2}$ according to $\lf_{k}=\lf_{k,1}+\lf_{k,2}$. From \eqref{matrix1}--\eqref{matrix2} together with \eqref{pi1}--\eqref{lfk1} we then derive that \begin{align} \label{matrix3}\pi_1(Y_1) &= \pi_1(\tilde Y_1) +\sum_{j=1}^{n_1} c_{2j}(-1)^j (\ad Y_{2|1})^{2j} \pi_1(\tilde Y_1)\\ \notag & \quad +\sum_{j=0}^{n_2} d_{2j+1} (-1)^{j} (\ad Y_{2|1})^{2j+1} \pi_1(X) \end{align} and \begin{equation} \label{matrix4} \pi_1(X)= \sum_{j=0}^{n_1} c_{2j+1}(-1)^j (\ad Y_{2|1})^{2j+1} \pi_1(\tilde Y_1) -\sum_{j=1}^{n_2} d_{2j} (-1)^{j} (\ad Y_{2|1})^{2j} \pi_1(X) \, .\end{equation} We apply this now to $Y=Y^m$ and note that $Y_{2|1}^m$ is bounded by the construction of $\oline P_k$. From \eqref{matrix3} and \eqref{matrix4} we obtain that $X^m_{k, k+1}=0$ and $(\tilde Y_1^m )_{k, k+1}= (Y_1^m)_{k, k+1}$, and recursively we obtain that $X_{i, k+1}^m$ and $(\tilde Y_1^m)_{i, k+1}$ remain bounded for $l<i\leq k$. It then follows from \eqref{matrix3}, as $Y^m_{l, k+1}$ is unbounded, that $(\tilde Y_1^m)_{l, k+1}$ is unbounded. This is the desired contradiction and completes the proof of the proposition. \end{proof} \begin{lemma} \label{lemma bipolar}Let $\nf$ be a nilpotent Lie algebra, $N_\C$ a simply connected Lie group with Lie algebra $\nf_\C$ and $N=\exp(\nf)\subset N_\C$. Let further $\nf_1, \nf_2\subset \nf$ be subalgebras with $\nf=\nf_1 +\nf_2$ (not necessarily direct). Suppose that $\nf_1$ is abelian.
Then the 2-polar map $$\Phi: \nf_1 \times\nf_2 \times N \to N_\C, \ \ (Y_1, Y_2, n) \mapsto \exp(iY_1) \exp(iY_2) n $$ is onto. If, moreover, the sum $\nf_1+\nf_2$ is direct and $\nf_1$ is an ideal, then $\Phi$ is diffeomorphic. \end{lemma} \begin{proof} We prove the statement by induction on $\dim N$. Let $Z(N_\C)\subset N_\C$ be the center of $N_\C$. Note that $Z(N_\C)$ is connected and of positive dimension if $\dim \nf>0$. Set $\tilde \nf:=\nf/\zf(\nf)$, $\tilde \nf_i:= (\nf_i +\zf(\nf))/\zf(\nf)$ and $\tilde N_\C = N_\C/ Z(N_\C)$. Induction applies and we deduce that for every $n_\C \in N_\C$ we find elements $n\in N$, $Y_i\in \nf_i$ and $z_\C\in Z(N_\C)$ such that $$ n_\C = \exp(iY_1) \exp(iY_2) n z_\C. $$ We write $z_\C = z y $ with $z\in Z(N)$ and $y=\exp(iY)$ with $Y\in \zf(\nf)$. Write $Y=Y_1' +Y_2'$ with $Y_i'\in \nf_i$. As $Y$ is central, $Y_1'$ commutes with $Y_2'$ and so $y =\exp(iY_1')\exp(iY_2')$. Putting matters together we arrive at $$ n_\C = \exp(iY_1)\exp(iY_1') \exp(iY_2') \exp(iY_2) nz. $$ Now $nz\in N$ and $\exp(iY_1)\exp(iY_1')=\exp(i(Y_1+Y_1'))$. Finally, $\exp(iY_2')\exp(iY_2)= \exp(iY_2'')n_2$ for some $Y_2''\in \nf_2$ and $n_2\in N_2 =\exp(\nf_2)$. This proves that $\Phi$ is surjective. \par For the second part let us assume the further requirements. We confine ourselves to showing that $\Phi$ is injective. So suppose that $$\exp(iY_1)\exp(iY_2) n= \exp(iY_1') \exp(iY_2') n'$$ and reduce both sides mod the normal subgroup $N_{1,\C}$. Hence $Y_2=Y_2'$. Since we have $N\simeq N_1\times N_2$ under multiplication we may assume, by the same argument, that $n=n_1\in N_1$ and $n'=n_1'$. Now injectivity is immediate. \end{proof} \section{The Poisson transform and the Helgason conjecture} \subsection{Representations of the spherical principal series} Let $\oline P = M A\oline N$ and define for $\lambda\in \af_\C^*$ the normalized character $$\chi_\lambda: \oline P \to \C^*,\quad \oline p = ma\oline n \mapsto a^{\lambda-\rho}\,.
$$ Associated to this character is the line bundle $\Lc_\lambda= G\times_{\oline P} \C_\lambda\to G/\oline P$. The sections of this line bundle form the representations of the spherical principal series: we denote the $K$-finite sections by $V_\lambda$, the analytic sections by $V_\lambda^\omega$ and the smooth sections by $V_\lambda^\infty$. Note in particular that $$V_\lambda^\infty=\{ f\in C^\infty(G)\mid f(g\oline p ) = \chi_\lambda(\oline p)^{-1} f(g), \ \oline p\in \oline P, g \in G \}$$ and that $V_\lambda^\infty$ is a $G$-module under the left regular representation. Now given $f_1\in V_\lambda^\infty$ and $f_2\in V_{-\lambda}^\infty$ we obtain that $f:=f_1f_2$ is a smooth section of $\Lc_{-\rho}$, which identifies with the 1-density bundle of the compact flag variety $G/\oline P$. Hence we obtain a natural $G$-invariant non-degenerate pairing \begin{equation} \label{dual}V_\lambda^{\infty}\times V_{-\lambda}^\infty\to \C, \quad (f_1, f_2)\mapsto \la f_1, f_2\ra:=\int_{G/\oline P} f_1f_2\, .\end{equation} In particular, the Harish-Chandra module dual to $V_\lambda$ is isomorphic to $V_{-\lambda}$. The advantage of using the pairing \eqref{dual} is that it behaves well under trivializations of $\Lc_\lambda$, so that one readily obtains correct formulas in both the compact and the non-compact picture. Using this pairing we define the space of distribution vectors as the strong dual $V_\lambda^{-\infty}=(V_{-\lambda}^\infty)'$. Likewise we obtain the space of hyperfunction vectors $V_\lambda^{-\omega}$. Altogether we have the natural chain $$ V_\lambda\subset V_\lambda^\omega\subset V_\lambda^\infty\subset V_\lambda^{-\infty} \subset V_{\lambda}^{-\omega}\, .$$ We denote by $f_{\lambda, K}\in V_\lambda$ the $K$-fixed vector with $f_{\lambda, K}(\1)=1$ and normalize the identification of $\Lc_{-\rho}$ with the 1-density bundle such that $\int f_{-\rho, K}=1$.
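Let us also record how the pairing \eqref{dual} reads in the non-compact picture: since the open Bruhat cell $N\oline P$ has full measure in $G/\oline P$, one has, for a suitable normalization of the Haar measure $dn$ on $N$, $$\la f_1, f_2\ra=\int_N f_1(n)f_2(n)\ dn \qquad (f_1\in V_\lambda^\infty,\ f_2\in V_{-\lambda}^\infty)\,,$$ the integrand being the restriction to $N$ of the $1$-density $f_1f_2$.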
\subsection{Definition of the Poisson transform and Helgason's conjecture} We move on to the concept of the Poisson transform and the Helgason conjecture on $Z=G/K$, which was formulated in \cite{H1} and first established in \cite{K6}; see also \cite{GKKS} for a novel elementary treatment. We denote by ${\mathbb D}(Z)$ the commutative algebra of $G$-invariant differential operators and recall that the Harish-Chandra homomorphism for $Z$ asserts that ${\mathbb D}(Z)\simeq \Pol(\af_\C^*)^W$ with $W$ the Weyl group. In particular, $\Spec{\mathbb D}(Z)\simeq \af_\C^*/W$. For $\lambda\in \af_\C^*$ we denote by $\E_{[\lambda]}(Z)$ the ${\mathbb D}(Z)$-eigenspace attached to $[\lambda]=W\cdot \lambda\in \af_\C^*/W$. Note that all functions in $\E_{[\lambda]}(Z)$ are eigenfunctions of $\Delta_Z$ to the eigenvalue $\lambda^2 - \rho^2$, with $\lambda^2$ abbreviating the Cartan--Killing pairing $\kappa(\lambda, \lambda)$. In case $Z$ has real rank one, let us remark that this property characterizes $\E_{[\lambda]}(Z)$, i.e. $$ \E_{[\lambda]}(Z)=\{ f \in C^\infty(Z)\mid \Delta_Z f = (\lambda^2 -\rho^2)f\}\, . $$ For $\lambda\in \af_\C^*$ one defines the $G$-equivariant Poisson transform $$\Pc_\lambda: V_\lambda^{-\omega}\to C^\infty(G/K), \ \ f\mapsto (gK\mapsto \la f, g\cdot f_{-\lambda, K}\ra). $$ The Helgason conjecture then asserts that $\Pc_\lambda$ is onto the $\mathbb{D}(Z)$-eigenspace $\E_{[\lambda]}(Z)$ provided that $f_{-\lambda, K}$ is cyclic in $V_{-\lambda}$, i.e.~$\U(\gf)f_{-\lambda, K}= V_{-\lambda}$. The latter condition is always satisfied if Kostant's condition \cite[Th.~8]{Kos} holds: $\re \lambda(\alpha^\vee)\geq 0$ for all positive roots $\alpha$. In the sequel we abbreviate this condition as $\re \lambda \geq 0$.
If $\re \lambda >0$, then the Poisson transform is inverted by the boundary value map $$b_\lambda: \E_{[\lambda]}(Z) \to V_\lambda^{-\omega}, \ \ \phi\mapsto (g\mapsto {\bf c}(\lambda)^{-1}\lim_{a\to \infty\atop a\in A^-} a^{\lambda -\rho} \phi(ga))$$ where ${\bf c}(\lambda)$ is the Harish-Chandra ${\bf c}$-function: $${\bf c}(\lambda):=\int_N{\bf a}(n)^{\lambda +\rho} \ dn $$ with ${\bf a}: KA\oline N \to A$ the middle projection. In particular, we have \begin{equation} \label{boundary} b_\lambda(\Pc_\lambda(f)) = f \qquad (f \in V_\lambda^{-\omega}, \re \lambda >0)\, .\end{equation} \section{The Poisson transform in terms of $S$-geometry} As emphasized in the introduction, our focus in this article is on the $S=AN$-picture of $Z=G/K$, which we henceforth identify with $S$. In particular, we will write $\E_{[\lambda]}(S)$ instead of $\E_{[\lambda]}(Z)$ etc. \par We fix a parameter $\lambda$ such that $\re \lambda >0$. The goal is to identify subspaces of $V_\lambda^{-\omega}$ for which $\Pc_\lambda$ has a particularly nice image in terms of $S$-models. From what we already explained we have $$ \operatorname{im} \Pc_\lambda\subset \Oc(\Xi_S)$$ and, in particular, for all $\phi \in \operatorname{im} \Pc_\lambda$ and $a\in A$ we have $\phi_a\in \Oc(\Tc_a)$. The general problem here is that one wants to identify $V_\lambda^{-\omega}$ with a certain subspace of $C^{-\omega}(N)$, which is delicate and depends on the parameter $\lambda$. The compact models for the spherical principal series are much cleaner to handle, as the restriction maps $$\res_{K,\lambda} : V_\lambda^\infty \to C^\infty(K/M)=C^\infty(K)^M, \quad f\mapsto f|_K$$ are isomorphisms. In this sense we obtain a natural identification $V_\lambda^{-\omega} \simeq C^{-\omega}(K/M)$ as $K$-modules which is parameter independent. In contrast, the faithful restriction map $$\res_{N,\lambda} : V_\lambda^\infty \to C^\infty(N), \quad f\mapsto f|_N$$ is not onto, and the image depends on $\lambda$.
For a function $h\in C^\infty(N)$ we define a function $H_\lambda$ on the open Bruhat cell $NMA\oline N$ by $$H_\lambda(n ma\oline n) = h(n) a^{-\lambda+\rho}\, .$$ Then the image of $\res_{N,\lambda}$ is by definition given by $$ C_\lambda^\infty(N)=\{ h \in C^\infty(N)\mid H_\lambda\ \hbox{extends to a smooth function on $G$}\}\, .$$ In this sense $V_\lambda^{-\omega}$ corresponds in the non-compact model to $$C_\lambda^{-\omega}(N)= \{ h \in C^{-\omega}(N)\mid H_\lambda|_{K\cap N \oline P}\ \hbox{extends to a hyperfunction on $K$}\}\, .$$ Having said this, we take an element $f\in C_\lambda^{-\omega}(N)$ and observe that the Poisson transform in terms of $S$ is given by \begin{equation} \label{Poisson} \Pc_\lambda f(s)= \int_N f(x) {\bf a} (s^{-1} x)^{\lambda + \rho} \ dx\ \qquad (s\in S)\end{equation} with ${\bf a}: KA\oline N \to A$ the middle projection. In accordance with \eqref{boundary} we then have $${1\over {\bf c}(\lambda)} \lim_{a\to \infty\atop a\in A^-} a^{\lambda-\rho} \Pc_\lambda f(na) = f(n)\qquad (n\in N)\,.$$ Let us note that the Hilbert model $\Hc_\lambda=L^2(K/M)\subset C^{-\omega}(K/M)=V_\lambda^{-\omega}$ of $V_\lambda$ corresponds in the non-compact picture to $L^2(N, {\bf a}(n)^{2 \re \lambda} dn)\supset L^2(N)$ and hence $$L^2(N)\subset C^{-\omega}_\lambda(N)\qquad (\re \lambda\geq 0)\, .$$ \par The main objective now is to give a novel characterization of $\Pc_\lambda(L^2(N))$ for $\re \lambda>0$.
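Before turning to the precise estimates, it may be instructive to see the concentration behind the inversion formula numerically. The following sketch (an illustration only; it assumes the rank-one kernel $(1+|x|^2)^{-(s+1/2)}$ from the introduction with $N=\R$ and $s=\re\lambda$, and the function names are ad hoc) shows that the rescaled kernels have total mass $1$ and act as an approximate identity as the scaling parameter tends to infinity.

```python
# Illustration in the rank-one case N = R: the rescaled kernels
#     delta_t(x) = t * (1 + (t*x)^2)^(-(s+1/2)) / C_s,
#     C_s = sqrt(pi) * Gamma(s) / Gamma(s + 1/2)   (total mass of (1+u^2)^(-(s+1/2)))
# have mass 1 and concentrate at 0 as t -> infinity, so smoothing a test
# function against them recovers its value at the base point.
import math

s = 1.5                                    # stands for re(lambda) > 0
C = math.sqrt(math.pi) * math.gamma(s) / math.gamma(s + 0.5)

def delta(t, x):
    return t * (1.0 + (t * x) ** 2) ** (-(s + 0.5)) / C

def smoothed(func, t, h=5e-4, R=50.0):
    # midpoint rule for the integral of delta_t(x) * func(x) over [-R, R];
    # delta_t is even, so we fold the two half-lines together
    m = int(R / h)
    return h * sum(delta(t, x) * (func(x) + func(-x))
                   for x in ((k + 0.5) * h for k in range(m)))

vals = [smoothed(math.cos, t) for t in (1.0, 10.0, 50.0)]
assert abs(vals[-1] - 1.0) < 1e-2                  # close to cos(0) = 1
assert abs(vals[-1] - 1.0) < abs(vals[0] - 1.0)    # improves as t grows
```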
{ For a function $\phi$ on $S=NA$ and $a\in A$ we recall the partial functions on $N$ defined by $$\phi_a(n)= \phi(na)\qquad (n\in N)\, .$$} Now, given $f\in L^2(N)$ we let $\phi:=\Pc_\lambda(f)$ and rewrite \eqref{Poisson} as \begin{equation} \label{P rewrite} {1\over {\bf c}(\lambda)} a^{\lambda-\rho} \phi_a(n) = \int_N f(x)\delta_{\lambda, a}(n^{-1}x) \ dx \end{equation} with \begin{equation} \label{deltadefa} \delta_{\lambda, a}(x):= {1\over {\bf c}(\lambda)} a^{-2\rho} {\bf a} (a^{-1} x a)^{\lambda+\rho} \qquad (x \in N)\, .\end{equation} We first note that the condition $\re \lambda>0$ then implies that $\delta_{\lambda, a}$ is a Dirac-sequence on $N$ for $a\to \infty$ on a ray in the negative Weyl chamber. \begin{lemma}\label{lemmaeasybound} Let $\phi=\Pc_\lambda(f)$ for $f\in L^2(N)$. Then the following assertions hold: \begin{enumerate} \item $\phi_a\in L^2(N)$ for all $a\in A$. \item $\|\phi_a\|_{L^2(N)} \leq a^{\rho -\re \lambda}{\bf c}(\re \lambda) \|f\|_{L^2(N)}$. \end{enumerate} \end{lemma} \begin{proof} Both assertions are immediate from the fact that $\|\delta_{{ \lambda,a}}\|_{L^1(N)} \leq \frac {{\bf c}(\re \lambda)}{|{\bf c}(\lambda)|}$, \eqref{P rewrite} { and Young's convolution inequality}. \end{proof} \subsection{Partial holomorphic extensions of eigenfunctions} { Recall { $\Tc_a$ and $\Lambda_a$ from \eqref{defta}, resp.~\eqref{deflambdaa}, and} that the Poisson transform $\phi = \Pc_\lambda(f)$ belongs to $\Oc(\Xi_S)$ with all partial functions $\phi_a$ extending to holomorphic functions on $\Tc_a$. 
For $y\in\exp(i\Lambda_a)$ we can thus define $$\phi_{a,y}(n):=\phi_a(n y)\qquad (n \in N)\, .$$ Let $\delta_\lambda:=\delta_{\lambda, 1}$ and put \begin{equation}\label{deltadef}\delta_{\lambda, y}: N \to \C , \quad x \mapsto \delta_\lambda(y^{-1} x )\ .\end{equation}} \begin{lemma} The following assertions hold: \begin{enumerate} \item The function ${\bf v}_\lambda(y):=\sup_{k\in K} |{\bf a}(y^{-1} k)^{\lambda +\rho}|$ is finite for all $y \in \exp(i\Lambda)$. \item The function $\delta_{\lambda, y}$ is integrable with $L^1(N)$-norm \begin{equation} \label{delta bound2} v_\lambda(y):=\|\delta_{\lambda, y}\|_{ L^1(N)}\leq {\bf v}_\lambda(y)\frac{{\bf c}(\re \lambda)}{|{\bf c}(\lambda)|}\, .\end{equation} \end{enumerate} \end{lemma} \begin{proof} Part (1) is a consequence of the fact that ${\bf a}: G\to A$, considered as a map from $K\bs G \to A$, extends holomorphically to $\Xi^{-1}\to A_\C$ with $\Xi^{-1}$ considered as a subset of $K_\C \bs G_\C$, see \eqref{tilde Xi}. \\ For the proof of (2) we note the identity \begin{equation} \label{delta bound} \delta_\lambda(y^{-1} x )=\delta_\lambda(x) {\bf a}(y^{-1} {\bf k}(x))^{\lambda +\rho} \qquad (x \in N, y\in \exp(i\Lambda)),\end{equation} where ${\bf k}: G \to K$ is defined by the opposite Iwasawa decomposition $G=KA\oline N$. Combined with part (1), \eqref{delta bound} implies that for all $y \in \exp(i\Lambda)$ the function $\delta_{\lambda, y}$ is integrable on $N$, with the asserted estimate \eqref{delta bound2} for its norm.
\end{proof} For $g, x\in G_\C$ we use the standard abbreviation $x^g:=gxg^{-1}$. \begin{lemma}\label{lemma a bound} For $\re \lambda>0$, $f\in L^2(N)$ and $\phi=\Pc_\lambda(f)$ we have \begin{equation}\label{upper a-bound} \|\phi_{a, y}\|_{L^2(N)} \leq |{\bf c}(\lambda)| a^{\rho -\re \lambda}\|\delta_{\lambda, y^{a^{-1}}}\|_{L^1(N)} \|f\|_{L^2(N)}\qquad (y\in \exp(i\Lambda_a))\, .\end{equation} \end{lemma} \begin{proof} From \eqref{P rewrite} we obtain $$ {1\over {\bf c}(\lambda)} a^{\lambda-\rho} \phi_{a,y}(n) = \int_N f(x)\delta_{\lambda, a}(y^{-1}n^{-1}x) \ dx $$ and thus \begin{equation} \label{deltaest} {1\over |{\bf c}(\lambda)|} a^{\re \lambda-\rho}\| \phi_{a,y}\|_{L^2(N)} \leq \|\delta_{\lambda, a}(y^{-1}\cdot)\|_{L^1(N)} \|f\|_{L^2(N)}. \end{equation} Next we unwind the definitions \eqref{deltadefa} and \eqref{deltadef} and apply the change of variable $x\mapsto a^{-1}xa$ on $N$: \begin{align*} \|\delta_{\lambda, a}(y^{-1}\cdot)\|_{L^1(N)}&={a^{-2\rho} \over |{\bf c}(\lambda)|}\int_N \left|{\bf a} (a^{-1} y^{-1} x a)^{\lambda+\rho}\right|\ dx \\ &={a^{-2\rho} \over |{\bf c}(\lambda)|} \int_N \left|{\bf a} ((a^{-1} y^{-1}a) a^{-1} x a)^{\lambda+\rho}\right|\ dx \\ &={1\over |{\bf c}(\lambda)|} \int_N \left| {\bf a} ((y^{-1})^{a^{-1}} x )^{\lambda+\rho}\right|\ dx =\|\delta_{\lambda, y^{a^{-1}}}\|_{L^1(N)} \, .\end{align*} The assertion \eqref{upper a-bound} now follows from \eqref{deltaest}. \end{proof} \subsection{A class of weight functions}\label{subsection weight functions} We now let ${\bf w}_\lambda: \exp(i\Lambda)\to \R_{>0}$ be any positive continuous function such that \begin{equation} \label{request w} d(\lambda):=\int_{\exp(i\Lambda)} {\bf w}_\lambda(y) \|\delta_{\lambda, y}\|^2_{ L^1(N)} \ dy <\infty\end{equation} and define a left $N$-invariant function on the tube $\Tc_a$ by $${\bf w}_{\lambda,a}: \Tc_a\to \R_{>0}, \quad n y\mapsto {\bf w}_\lambda (\Ad(a^{-1})y)\qquad (y\in \exp(i\Lambda_a))\, .$$ \begin{rmk} In general we expect that
$\Lambda$ is bounded. In view of \eqref{delta bound2} one may then take $${\bf w}_\lambda \equiv 1\ ,$$ as ${\bf v}_\lambda^{-2}$ is bounded from below by a positive constant. Optimal choices for ${\bf w}_\lambda$ in special cases will be presented at the end of the article. \end{rmk} We now show that $\phi_a\in \Oc(\Tc_a)$ belongs to the weighted Bergman space $$\B(\Tc_a, {\bf w}_{\lambda,a}):=\{ \psi\in \Oc(\Tc_a)\mid \|\psi\|^2_{\B_{a, \lambda}}:= \int_{\Tc_a} |\psi(z)|^2 {\bf w}_{\lambda,a}(z) dz <\infty\},$$ where $dz$ is the Haar measure on $N_\C$ restricted to $\Tc_a$. More precisely, with $d(\lambda)$ from \eqref{request w} we record the following lemma. \begin{lemma} \label{lemma5.5} Let $\re \lambda>0$, $f\in L^2(N)$ and $\phi=\Pc_\lambda(f)$. Then we have the following inequality: \begin{equation}\label{normb1} \|\phi_a\|_{\B_{a,\lambda}} \leq |{\bf c}( \lambda)| \sqrt{d(\lambda)} a^{2\rho-\re \lambda}\|f\|_{L^2(N)}\, .\end{equation} \end{lemma} \begin{proof} Starting with \eqref{upper a-bound} the assertion follows from the estimate \begin{align*} \notag \|\phi_a\|_{\B_{a,\lambda}}&\leq a^{\rho- \re \lambda} |{\bf c}(\lambda)| \left[\int_{\exp(i\Lambda_a)} {\bf w}_\lambda(y^{a^{-1}}) \|\delta_{\lambda, y^{a^{-1}}}\|^2_{ L^1(N)} \ dy\right]^{1\over 2} \|f\|_{L^2(N)}\\ \notag&= |{\bf c}(\lambda)| \left[\int_{\exp(i\Lambda)} {\bf w}_\lambda(y) \|\delta_{\lambda, y}\|^2_{ L^1(N)} \ dy\right]^{1\over 2} a^{2\rho-\re \lambda}\|f\|_{L^2(N)}\\ &= |{\bf c}(\lambda)| \sqrt{d(\lambda)} a^{2\rho-\re \lambda}\|f\|_{L^2(N)}\,, \end{align*} as desired.
\end{proof} { Lemma \ref{lemma5.5}} motivates the definition of the following Banach subspace of $\E_{[\lambda]}({ S})\subset \Oc(\Xi_S)$: $$\B(\Xi_S, \lambda):=\{ \phi \in \E_{[\lambda]}({ S})\mid \|\phi\|:=\sup_{a\in A} a^{\re\lambda -2\rho} \|\phi_a\|_{\B_{a,\lambda}}<\infty\}\, .$$ Indeed, \eqref{normb1} implies \begin{equation} \label{P cont} \|\Pc_\lambda(f)\|\leq C \|f\|_{L^2(N)} \qquad (f\in L^2(N))\end{equation} with $C:={\bf c}(\re \lambda) \sqrt{d(\lambda)}$ and therefore the first inequality in { Theorem \ref{maintheorem}}. \begin{proof}[{ Proof of Theorem \ref{maintheorem}}] Since $\re\lambda>0$, the Poisson transform is injective. Further, \eqref{P cont} shows that $\Pc_\lambda$ takes values in $\B(\Xi_S, \lambda)$ and is continuous. In view of the open mapping theorem, it thus suffices to show that $\Pc_\lambda$ is surjective. Note now that the weight ${\bf w}_\lambda$ is uniformly bounded above and below by positive constants when restricted to a compact subset $\exp(i\Lambda_c)\subset \exp(i\Lambda)$. Hence the Bergman inequality implies the bound \begin{equation} \label{norm 1} \|\psi|_N\|_{L^2(N)} \leq C a^{-\rho} \|\psi\|_{\B_{a,\lambda}}\quad (\psi \in \B(\Tc_a, {\bf w}_{\lambda, a})).\end{equation} We apply this to $\psi=\phi_a$ for some $\phi\in \B(\Xi_S,\lambda)$ and obtain that $a^{\lambda -\rho} \phi_a|_N $ is bounded in $L^2(N)$. Hence we obtain for some sequence $(a_n)_{n\in \N}$ on a ray in $A^-$ that $a_n^{\lambda-\rho} \phi_{a_n}|_N \to h$ weakly for some $h \in L^2(N)$. By the Helgason conjecture we know that $\phi = \Pc_\lambda(f)$ for some $f\in C^{-\omega}_\lambda(N)$ and that \begin{equation} \label{limit} {\bf c}(\lambda)^{-1} a^{\lambda -\rho} \phi_a|_N \to f\end{equation} as appropriate hyperfunctions on $N$ for $a\to \infty$ in $A^-$ on a ray. Hence $h=f$ and we obtain the second inequality of the theorem. 
\end{proof} \subsection{The norm limit formula} \label{sub:norm} Define a positive constant \begin{equation} \label{def w const} w(\lambda):=\left[\int_{\exp(i\Lambda)} {\bf w}_\lambda(y) \ dy\right]^{1\over 2} .\end{equation} Note that $w(\lambda)$ is indeed finite. This will follow from \eqref{request w} provided we can show that $\|\delta_{\lambda, y}\|_1\geq 1$. Now, using Cauchy's theorem we see that \begin{equation} \label{cy} \int_N {\bf a} (y^{-1} n)^{\lambda +\rho} \ dn = {\bf c}(\lambda)\end{equation} does not depend on $y\in \exp(i\Lambda)$. The estimate $\|\delta_{\lambda, y}\|_{ L^1(N)}\geq 1$ follows. The purpose of this section is to prove the norm limit formula as stated in the introduction. \begin{proof}[{ Proof of Theorem \ref{norm limit intro}}] In the sequel we first note that for any integrable function $\psi$ on $\Tc_a$ we have $$ \int_{\Tc_a} |\psi(z)|^2\ dz = \int_{\Lambda_a} \int_N |\psi(yn)|^2 \ dn \ dY $$ with $y=\exp(iY)$ and $dY$ the Lebesgue measure on $\nf$. With that we rewrite the square of the left hand side of \eqref{norm limit2} as \begin{align*} &{1\over w(\lambda)^2|{\bf c}(\lambda)|^2} a^{2\re \lambda - 4\rho} \|\phi_a\|_{\B_{a,\lambda}}^2= \\ &= {1\over w(\lambda)^2|{\bf c}(\lambda)|^2} a^{2\re \lambda - 4\rho} \int_{\Lambda_a}\int_N |\phi_a(ny)|^2 {\bf w}_{\lambda, a} (y) \ dn \ dY \\ &= {1\over w(\lambda)^2|{\bf c}(\lambda)|^2}a^{2\re \lambda - 2\rho} \int_{\Lambda}\int_N |\phi_a(ny^a)|^2 {\bf w}_{\lambda, \1} (y) \ dn \ dY \\ &= {1\over w(\lambda)^2|{\bf c}(\lambda)|^2}a^{2\re \lambda - 2\rho} \int_{\Lambda}\int_N \left|\int_N f(x) {\bf a} (y^{-1} a^{-1} n^{-1} x)^{\lambda +\rho} \ dx \right|^2 {\bf w}_{\lambda, \1} (y) \ dn \ dY \\ &={1\over w(\lambda)^2|{\bf c}(\lambda)|^2} a^{-4\rho} \int_{\Lambda}\int_N \left|\int_N f(x) {\bf a} (y^{-1} a^{-1} n^{-1} xa)^{\lambda +\rho} \ dx \right|^2 {\bf w}_{\lambda, \1} (y) \ dn \ dY\, . 
\end{align*} Next we consider the function on $N$ $$\delta_{\lambda, y, a}(n):={1\over {\bf c}(\lambda)} a^{-2\rho} {\bf a} (y^{-1} a^{-1} na)^{\lambda +\rho} $$ and observe that for any fixed $y\in \exp(i\Lambda)$ this defines a Dirac-sequence when $a\in A^-$ moves along a fixed ray to infinity, see \eqref{cy} for $\int_N \delta_{\lambda, y, a}= 1$. We thus arrive at \begin{multline*} {1\over w(\lambda)^2|{\bf c}(\lambda)|^2} a^{2\re \lambda - 4\rho} \|\phi_a\|_{\B_{a,\lambda}}^2\\ = {1\over w(\lambda)^2} \int_{\Lambda}\int_N \left|\int_N f(x) \delta_{\lambda, y, a} (n^{-1}x) \ dx \right|^2 {\bf w}_{\lambda, \1} (y) dn \ dY \ . \end{multline*} We define a convolution type operator $$T_{\lambda, y, a}: L^2(N) \to L^2(N), \quad f\mapsto \left(n \mapsto \int_N f(x) \delta_{\lambda, y, a}(n^{-1}x)\ dx\right) $$ and note that by Young's convolution inequality $$\|T_{\lambda, y, a}(f)\|_{ L^2(N)} \leq \|\delta_{\lambda, y, a}\|_{ L^1(N)} \cdot \|f\|_{ L^2(N)}\ .$$ We continue with some standard estimates: \begin{align*} &\left|\int_N \left|\int_N f(x) \delta_{\lambda, y, a} (n^{-1}x) \ dx \right|^2 \ dn \ - \|f\|^2_{ L^2(N)}\right|= \left|\| T_{\lambda, y, a}(f)\|^2_{ L^2(N)} - \|f\|^2_{ L^2(N)}\right|\\ &\quad = \left| \|T_{\lambda, y, a}(f)\|_{ L^2(N)} - \|f\|_{ L^2(N)}\right| \cdot (\|T_{\lambda, y, a}(f) \|_{ L^2(N)} + \|f\|_{ L^2(N)})\\ &\quad \leq \|T_{\lambda, y, a}(f) - f\|_{ L^2(N)} \cdot \|f\|_{ L^2(N)}( 1+ \|\delta_{\lambda, y, a}\|_{ L^1(N)})\\ &\quad =\left\|\int_N (f(\cdot x) - f(\cdot)) \delta_{\lambda, y, a} (x) \ dx \right\|_{ L^2(N)} \cdot \|f\|_{ L^2(N)}( 1+ \|\delta_{\lambda, y, a}\|_{ L^1(N)})\\ &\quad \leq \|f\|_{ L^2(N)}( 1+ \|\delta_{\lambda, y, a}\|_{ L^1(N)})\int_N \|f(\cdot x) - f(\cdot)\|_{ L^2(N)} |\delta_{\lambda, y, a}(x)| \ dx \, .
\end{align*} Now note that $x\mapsto \|f(\cdot x) - f(\cdot)\|_{ L^2(N)}$ is a bounded continuous function and $\frac{|\delta_{\lambda, y, a}|}{\|\delta_{\lambda, y,a}\|_{ L^1(N)}}$ is a Dirac-sequence for $a\to \infty$ in $A^-$ on a ray. Hence we obtain a positive function $\kappa_f(a)$ with $\kappa_f(a) \to 0$ for $a\to \infty$ in $A^-$ on a ray such that $$\int_N \|f(\cdot x) - f(\cdot)\|_{ L^2(N)} |\delta_{\lambda, y, a}(x)| \ dx \leq \|\delta_{\lambda, y,a}\|_{ L^1(N)} \kappa_f (a)\, .$$ Putting matters together we have shown that \begin{align*}& \left|{1\over |{\bf c}(\lambda)|^2} a^{2\re \lambda - 4\rho} \|\phi_a\|_{\B_{a,\lambda}}^2 -\Big(\int_\Lambda {\bf w}_{\lambda,\1}\Big)\cdot \|f\|^2_{ L^2(N)}\right|\\ &\quad \le\kappa_f(a)\|f\|^2_{ L^2(N)} \int_{\Lambda} (1 +\|\delta_{\lambda, y, a}\|_{ L^1(N)}) \|\delta_{\lambda, y, a}\|_{ L^1(N)} {\bf w}_{\lambda, \1}(y) \ dy\ . \end{align*} Finally observe that $\|\delta_{\lambda, y, a}\|_{ L^1(N)} =\|\delta_{\lambda, y}\|_{ L^1(N)}$ and hence $$\int_{\Lambda} (1 +\|\delta_{\lambda, y, a}\|_{ L^1(N)}) \|\delta_{\lambda, y, a}\|_{ L^1(N)} {\bf w}_{\lambda, \1}(y) \ dy <\infty\ ,$$ by the defining condition \eqref{request w} for ${\bf w}_\lambda$. With that the proof of the norm limit formula \eqref{norm limit2}, i.e.~Theorem \ref{norm limit intro}, is complete. \end{proof} \section{The real hyperbolic space}\label{sect hyp} In this section we investigate how the main results of this article take shape in the case of real hyperbolic spaces. After recalling the explicit formula for the Poisson kernel we provide essentially sharp estimates for $\|\delta_{\lambda, y}\|_{ L^1(N)}$ which allow us to construct a family of explicit weight functions ${\bf w}_\lambda$ satisfying \eqref{request w}. These in turn have the property that for real parameters $\lambda=\re \lambda$ the weighted Bergman space $\B(\Xi_S, \lambda)$ becomes isometric to $L^2(N)$.
In particular, the Banach space $\B(\Xi_S, \lambda)$ is in fact a Hilbert space for the exhibited family of weights. \subsection{Notation} Our concern is with the real hyperbolic space $ \mathbf{H}_n(\R) = G/K $ where $ G = \SO_e(n+1,1)$ and $K = \SO(n+1)$ for $n\geq 1$. Here $\SO_e(n+1,1)$ is the identity component of the group $\SO(n+1,1)$. The Iwasawa decomposition $ G = KAN $ is given by $ N = \R^n$, $K = \SO(n+1) $ and $ A = \R_+$, and we can identify $ \mathbf{H}_n(\R) $ with the upper half-space $ \R^{n+1}_+ = \R^n \times \R_+ $ equipped with the Riemannian metric $ g = a^{-2} (|dx|^2+da^2 ).$ For any $ \lambda \in \C $ which is not a pole of $\Gamma(\lambda+n/2)$ we consider the normalized kernels $$ p_\lambda(x, a) = \pi^{-n/2} \frac{\Gamma(\lambda+n/2)}{\Gamma(2\lambda)} a^{\lambda+n/2}(a^2+|x|^2)^{-(\lambda+n/2)}, $$ which play the role of the normalized Poisson kernel when $ \mathbf{H}_n(\R) $ is identified with the group $ S = NA$, $N =\R^n$, $A = \R_+.$ In fact, with ${\bf a}: G \to A$ the Iwasawa projection with respect to $G=KA\oline N$ as in the main text we record for $x\in N=\R^n$ that $${\bf a}(x)^{\lambda+\rho} = ( 1 +|x|^2)^{-(\lambda +n/2)}\, .$$ Further we have $${\bf c}(\lambda)= \pi^{n/2} \frac{\Gamma(2\lambda)}{\Gamma(\lambda+n/2)} $$ so that $$ p_\lambda(x, a) = {1\over {\bf c}(\lambda)} {\bf a}(a^{-1} x)^{\lambda +\rho}. $$ In the sequel we assume that $s:=\re \lambda>0$ and note that $\rho=n/2$. The classical Poisson transform (\eqref{Poisson} normalized by ${1\over {\bf c}(\lambda)}$) of a function $ f \in L^2(\R^n) $ is then given by \begin{align*} \mathcal{P}_\lambda f(x,a) &= f*p_{\lambda}(\cdot, a)\\ &=\pi^{-n/2} \frac{\Gamma(\lambda+n/2)}{\Gamma(2\lambda)} a^{-(\lambda+n/2)} \int_{\R^n} f(u) (1+a^{-2} |x-u|^2)^{-\lambda-n/2} du \end{align*} with $\ast$ the convolution on $N=\R^n$.
It is easy to check that $ \mathcal{P}_\lambda f(x,a) $ is an eigenfunction of the Laplace--Beltrami operator $ \Delta $ with eigenvalue $\lambda^2- (n/2)^2.$ From the explicit formula for the Poisson kernel it is clear that for each fixed $ a \in A $, $ \mathcal{P}_\lambda f(x, a) $ has a holomorphic extension to the tube domain $$ \Tc_a:= \{ x+iy \in \C^n \mid |y| < a \} = N\exp(i\Lambda_a)\subset N_\C=\C^n, $$ where $ \Lambda_a = \{ y \in \R^n : |y| < a \}.$ Writing $ \phi_a(x) = \mathcal{P}_\lambda f(x,a) $ as in (\ref{P rewrite}) we see that $$ \delta_{\lambda, y}(x)= {1\over {\bf c}(\lambda)} (1+(x+iy)^2)^{-(\lambda+n/2)}. $$ A weight function $ {\bf w}_\lambda $ satisfying (\ref{request w}), namely $$ d(\lambda) = \int_{|y| <1} {\bf w}_\lambda(y) \|\delta_{\lambda,y}\|^2_{ L^1(\R^n)} \, dy < \infty,$$ can easily be found. Indeed, as $$ (1+z^2)^{-(n/2+\lambda)} = \frac{2^{-n-2\lambda}}{\Gamma(\lambda+n/2)} \int_0^\infty e^{-\frac{1}{4t} (1+z^2)} t^{-n/2-\lambda-1} dt $$ where $ z^2 = z_1^2+z_2^2+\cdots+z_n^2$, we have $$ |\delta_{\lambda,y}(x)| \leq c_\lambda \int_0^\infty e^{-\frac{1}{4t} (1-|y|^2+|x|^2)} t^{-n/2-s-1} dt $$ valid for $ |y| <1.$ From this it is immediate that we have the estimate $$ \|\delta_{\lambda,y}\|_{ L^1(\R^n)} \leq c_\lambda (1-|y|^2)_+^{-s}\, .$$ However this bound is not optimal, and we can do better with slightly more effort. This will be part of the next subsection.
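The constant in the subordination formula can be sanity checked numerically. The sketch below is illustrative only (sample values $n=1$, $\lambda=3/4$, $x=1.3$ are arbitrary); it evaluates the Gamma integral by plain trapezoidal quadrature after the substitution $t=e^u$ and compares with the left hand side.

```python
# Numerical sanity check of the subordination formula
#   (1+x^2)^{-(lam+n/2)} = 2^{-n-2*lam}/Gamma(lam+n/2) * int_0^oo e^{-(1+x^2)/(4t)} t^{-n/2-lam-1} dt
# for n = 1 and an arbitrary real sample value of lam.
import math

def gamma_integral(A, mu, lo=-40.0, hi=60.0, steps=100000):
    # int_0^oo e^{-A/(4t)} t^{-mu-1} dt via the substitution t = e^u and the trapezoidal rule
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        u = lo + k * h
        w = 1.0 if 0 < k < steps else 0.5  # trapezoidal end-point weights
        total += w * math.exp(-0.25 * A * math.exp(-u) - mu * u)
    return total * h

n, lam, x = 1, 0.75, 1.3
A = 1.0 + x * x
mu = lam + n / 2
lhs = A ** (-mu)
rhs = 2.0 ** (-n - 2 * lam) / math.gamma(mu) * gamma_integral(A, mu)
```

The tolerance in such a check only needs to absorb the quadrature error of the trapezoidal rule.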
\subsection{Bounding $\|\delta_{\lambda, y}\|_{ L^1(\R^n)}$ and special weights.} \begin{lemma}\label{deltabound} For $s=\re \lambda>0$ we have for a constant $C=C(\lambda, n)>0$ that $$\|\delta_{\lambda, y}\|_{ L^1(\R^n)} \asymp \begin{cases*} C & if $0<s<\frac{1}{2}$,\\ C |\log(1-|y|^2)_+| & if $s=\frac{1}{2}$,\\ C (1-|y|^2)_+^{-s+\frac{1}{2}} & if $s>\frac{1}{2}$, \end{cases*} \qquad \qquad (|y|<1).$$ \end{lemma} \begin{proof} To begin with we have \begin{align*} \|\delta_{\lambda,y}\|_{ L^1(\R^n)}&\asymp \int_{\R^n} |1+(x+iy)^2|^{-(n/2+s)}\ dx \\ &\asymp \int_{\R^n} (1-|y|^2 +|x|^2+ 2|\la x, y\ra|)^{-(n/2+s)}\ dx\ . \end{align*} With $\gamma=\sqrt{1-|y|^2}$ we find \begin{align*} \|\delta_{\lambda,y}\|_{ L^1(\R^n)}&\asymp \int_{\R^n} (\gamma^2 +|x|^2+ 2|\la x, y\ra|)^{-(n/2+s)}\ dx\\ &= \int_{\R^n} (\gamma^2 +\gamma^2|x|^2+ 2 \gamma |\la x, y\ra|)^{-(n/2+s)}\ \gamma^n dx\\ &= \gamma^{-2s}\int_{\R^n} (1 +|x|^2+ 2 |\la x, \gamma^{-1}y \ra|)^{-(n/2+s)}\ dx\ . \end{align*} Set $$I_n(s,\gamma):=\int_{\R^n} (1 +|x|^2+ 2 |\la x, \gamma^{-1}y \ra|)^{-(n/2+s)}\ dx\,. $$ Then it remains to show that \begin{equation} \label{Ins} I_n (s,\gamma) \asymp\begin{cases*} \gamma^{2s} & if $0<s<\frac{1}{2}$, \\ \gamma |\log \gamma| & if $s=\frac{1}{2}$, \\ \gamma & if $s>\frac{1}{2}$ \end{cases*} \, .\end{equation} We first reduce the assertion to the case $n=1$ and assume $n\geq 2$. By rotational symmetry we may assume that $y=y_1 e_1$ is a multiple of the first unit vector with $1/2<y_1 <1$. Further we write $x=(x_1,x')$ with $x'\in \R^{n-1}$. Introducing polar coordinates $r=|x'|$, we find \begin{align*} &I_n(s,\gamma) = \int_{\R^n} (1+ |x'|^2 +x_1^2+2 \gamma^{-1}|x_1|y_1 )^{-(n/2+s)}\ dx\\ & \asymp \int_0^\infty \int_0^\infty r^{n-2} (1+ r^2 +x_1^2+2 \gamma^{-1}x_1y_1 )^{-(n/2+s)} dx_1 \ dr \, . 
\end{align*} With $a^2:=1 + x_1^2 +2x_1 y_1 \gamma^{-1}$ this rewrites as $$I_n(s, \gamma)\asymp \int_0^\infty \int_0^\infty r^{n-2} (r^2 +a^2)^{-(n/2+s)} \ dr \ dx_1 $$ and with the change of variable $r=at$ we arrive at a splitting of integrals \begin{align*} I_n(s,\gamma) &\asymp \int_0^\infty \int_0^\infty t^{n-2} (1+t^2)^{-\frac{n}{2} -s} a^{- n - 2s} a^{n-2} a \ dt \ dx_1 \\ &= \underbrace{\left(\int_0^\infty t^{n-2} (1+t^2)^{-\frac{n}{2} -s} \ dt \right)}_{:=J_n(s)} \cdot \underbrace{\left (\int_0^\infty ( 1 + x_1^2 +2 \gamma^{-1} x _1 y_1)^{-s -\frac{1}{2}} \ dx_1\right)}_{=I_1(s,\gamma)}\, .\end{align*} Now $J_n(s)$ remains finite as long as $n\geq 2$ and $s>0$. Thus we have reduced the situation to the case $n=1$, which we finally address. \par It is easy to check that $ I_1(s,\gamma) \asymp \gamma^{2s}$ for $ 0 < s < 1/2 $ and $ I_1(s,\gamma) \asymp \gamma $ for $ s >1/2$. When $ s = 1/2 $ we can evaluate $ \gamma^{-1} I_1(1/2,\gamma) $ explicitly. Indeed, by a simple computation we see that $ \gamma^{-1} I_1(1/2,\gamma)$ is given by $$ 2 \int_0^\infty \frac{1}{(x_1+y_1)^2- (y_1^2-\gamma^2)} dx_1 = \frac{-1}{ \sqrt{y_1^2-\gamma^2}} \log \frac{y_1- \sqrt{y_1^2-\gamma^2}}{y_1 + \sqrt{y_1^2-\gamma^2}}. $$ This gives the claimed estimate. \end{proof} For $\alpha>0$ we now define special weight functions by \begin{equation} \label{special weight} {\bf w}_\lambda^\alpha(z) = (2\pi)^{-n/2} \frac{1}{\Gamma(\alpha)} \left(1-|y|^2\right)_+^{\alpha -1} \, \qquad (z=x+iy\in \Tc)\, .\end{equation} As a consequence of Lemma \ref{deltabound} we obtain \begin{corollary} The weight ${\bf w}_\lambda^\alpha$ satisfies the integrability condition \eqref{request w} precisely for $$\alpha>\max\{2s-1, 0\}\, .$$ \end{corollary} \begin{remark} Observe that ${\bf w}_{\lambda}^\alpha(z)$ is a power of the Iwasawa projection ${\bf a} (y)$. It would be interesting to explore this further in higher rank, i.e. whether one can find suitable weights which are of the form $${\bf w}_\lambda(ny)=|{\bf a}(y)^\alpha|\qquad (ny \in \Tc)$$ for some $\alpha=\alpha(\lambda)\in \af^*$. \end{remark} For later reference we also record the explicit expression \begin{equation} {\bf w}_{\lambda,a}^\alpha(z) = (2\pi)^{-n/2} \frac{1}{\Gamma(\alpha)} \left(1-\frac{|y|^2}{a^2}\right)_+^{\alpha -1}\end{equation} for the rescaled weights. In the next subsection we will show that the general integrability condition \eqref{request w} for the weight function is sufficient, but not sharp. By a direct use of the Plancherel theorem for the Fourier transform on $ \R^n $ we will show that one can do better for $\mathbf{H}_n(\R)$. \subsection{Isometric identities} Let $K_\lambda$ be the Macdonald Bessel function and $I_{\alpha+n/2-1}$ the modified Bessel function of the first kind, with $\alpha>0$. For $s:=\re \lambda >0$, we define non-negative weight functions \begin{equation}\label{weigh} w_\lambda^\alpha(\xi): = |\xi|^{2s} \left|K_{\lambda}( |\xi|)\right|^2 \frac{I_{\alpha+n/2-1}(2|\xi|)}{(2|\xi|)^{\alpha+n/2-1}}\qquad (\xi \in \R^n)\, .\end{equation} \begin{theorem} \label{thm level isometry} Let $\alpha>0$, $\lambda\in \C$, and $s=\re \lambda>0$. There exists an explicit constant $c_{n,\alpha,\lambda} >0$ such that for all $f \in L^2(\R^n)$ and $\phi_a=\Pc_\lambda f(\cdot, a)$ we have the identity \begin{equation} \label{level isometry} \int_{\Tc_a} |\phi_a(z)|^2 {\bf w}_{\lambda,a}^\alpha(z)\, dz =c_{n,\alpha,\lambda} \, a^{-2s+2n} \int_{\R^n} |\widehat{f}(\xi)|^2 \, w_{\lambda}^\alpha(a \xi) \, d\xi \qquad (a>0)\end{equation} where $ {\bf w}_\lambda^\alpha$ is as in \eqref{special weight}.
\end{theorem} \begin{proof} Let us set $$ \varphi_{\lambda,a}(x) = \pi^{-n/2} \frac{\Gamma(\lambda+n/2)}{\Gamma(2\lambda)} (a^2+|x|^2)^{-(\lambda+n/2)} $$ so that we can write $ \phi_a(z) = \mathcal{P}_\lambda f(z,a) = a^{\lambda+n/2} f \ast \varphi_{\lambda,a}(z).$ In view of the Plancherel theorem for the Fourier transform we have $$ \int_{\R^n} |\phi_a(x+iy)|^2 dx = a^{2s+n} \int_{\R^n} e^{-2 y \cdot \xi} |\widehat{f}(\xi)|^2 |\widehat{\varphi}_{\lambda,a}(\xi)|^2 \, d\xi\, . $$ Integrating both sides of the above against the weight function ${\bf w}_{\lambda,a}^\alpha(z)$ we obtain the identity \begin{equation} \label{main id} \int_{\Tc_a} |\phi_a(z)|^2 {\bf w}_{\lambda,a}^\alpha(z)dz = a^{2 s+n} \int_{\R^n} |\widehat{f}(\xi)|^2 \, v_a^\alpha(\xi) \, |\widehat{\varphi}_{\lambda,a}(\xi)|^2 \, d\xi\end{equation} where $ v_a^\alpha(\xi) $ is the function defined by $$ v_a^\alpha(\xi) = (2\pi)^{-n/2} \, \frac{1}{\Gamma(\alpha)} \, \int_{|y| < a} e^{-2 y \cdot \xi}\, \left(1-\frac{|y|^2}{a^2}\right)_+^{\alpha-1}\ dy.$$ Both functions $ v_a^\alpha(\xi) $ and $\widehat{\varphi}_{\lambda,a}(\xi)$ can be evaluated explicitly in terms of Bessel and Macdonald functions. We begin with $v_a^\alpha$ and recall that the Fourier transform of $(1-|y|^2)^{\alpha-1}_+$ is explicitly known in terms of $J$-Bessel functions { (see \cite[Ch.~II, \S2.5]{GS})}: $$ (2\pi)^{-n/2} \int_{\R^n} (1-|y|^2)^{\alpha-1}_+ e^{-i y\cdot \xi} dy = \Gamma(\alpha) 2^{\alpha-1} |\xi|^{-\alpha-n/2+1}J_{\alpha+n/2-1}(|\xi|). $$ As the $J$-Bessel functions analytically extend to the imaginary axis, it follows that \begin{equation} \label{FTweight} (2\pi)^{-n/2} \, a^{-n}\, \int_{\R^n} \left( 1-\frac{|y|^2}{a^2} \right)_+^{\alpha-1} e^{-2y\cdot \xi} dy = \Gamma(\alpha) 2^{\alpha-1} \, (2a |\xi|)^{-\alpha-n/2+1} I_{\alpha+n/2-1}(2 a |\xi|) \end{equation} where $ I_{\alpha+n/2-1}$ is the modified Bessel function of first kind. 
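In the simplest case $n=1$, $\alpha=1$ one has $I_{1/2}(z)=\sqrt{2/(\pi z)}\sinh z$, so \eqref{FTweight} can be confirmed by elementary means; a minimal numerical sketch (sample values of $a$ and $\xi$ chosen arbitrarily):

```python
# Numeric check of the Fourier identity for (1-|y|^2/a^2)_+^{alpha-1} in the
# simplest case n = 1, alpha = 1, using I_{1/2}(z) = sqrt(2/(pi z)) sinh z.
import math

def lhs(a, xi, steps=100000):
    # (2 pi)^{-1/2} a^{-1} \int_{-a}^{a} e^{-2 y xi} dy via the trapezoidal rule
    h = 2 * a / steps
    s = 0.0
    for k in range(steps + 1):
        y = -a + k * h
        w = 1.0 if 0 < k < steps else 0.5
        s += w * math.exp(-2 * y * xi)
    return s * h / (a * math.sqrt(2 * math.pi))

def rhs(a, xi):
    z = 2 * a * abs(xi)
    i_half = math.sqrt(2 / (math.pi * z)) * math.sinh(z)  # I_{1/2}(z)
    return i_half / math.sqrt(z)                          # Gamma(1) * 2^0 * z^{-1/2} I_{1/2}(z)

a, xi = 0.8, 1.1
left, right = lhs(a, xi), rhs(a, xi)
```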
We arrive at \begin{equation} \label{vsa} v_a^\alpha(\xi)=2^{\alpha-1} a^n (2a |\xi|)^{-\alpha-n/2+1} I_{\alpha+n/2-1}(2 a |\xi|)\, .\end{equation} \par Moving on to $\widehat{\varphi}_{\lambda,a}(\xi)$ we use the integral representation $$ \varphi_{\lambda,a}(x) = \frac{(4 \pi)^{-n/2} 2^{-2\lambda}}{\Gamma(2\lambda)} \int_0^\infty e^{-\frac{1}{4t}(a^2+|x|^2)} t^{-n/2-\lambda-1} \, dt $$ and calculate the Fourier transform as $$ \widehat{\varphi}_{\lambda,a}(\xi) = \frac{(2 \pi)^{-n/2} 2^{-2\lambda}}{\Gamma(2\lambda)} \int_0^\infty e^{-\frac{1}{4t}a^2} \, e^{-t|\xi|^2} \,t^{-\lambda-1} \, dt\, . $$ The Macdonald function of type $ \nu $ is given by the integral representation $$ r^\nu K_\nu(r) = 2^{\nu-1} \int_0^\infty e^{-t-\frac{r^2}{4t}} t^{\nu-1} dt,$$ for any $ r >0.$ In terms of this function we have \begin{equation} \label{phiK} \widehat{\varphi}_{\lambda,a}(\xi) = \frac{(2 \pi)^{-n/2} 2^{1-\lambda}}{\Gamma(2\lambda)} a^{-2\lambda} (a|\xi|)^\lambda K_\lambda(a|\xi|)\, .\end{equation} Using these explicit formulas we obtain from \eqref{main id} that $$ \int_{\Tc_a} |\phi_a(z)|^2 {\bf w}_{\lambda,a}^\alpha(z)dz =c_{n,\alpha,\lambda}\, a^{-2s+2n} \int_{\R^n} |\widehat{f}(\xi)|^2 \, w_{\lambda}^\alpha(a \xi) \, d\xi$$ for an explicit constant $c_{n,\alpha,\lambda} $ and \begin{equation} \notag w_\lambda^\alpha(\xi)= |\xi|^{2s} \left|K_{\lambda}( |\xi|)\right|^2 \frac{I_{\alpha+n/2-1}(2|\xi|)}{(2|\xi|)^{\alpha+n/2-1}}, \end{equation} by \eqref{vsa} and \eqref{phiK}. \end{proof} In the sequel we write $\B_\alpha(\Xi_S, \lambda)$ to indicate the dependence on $\alpha>0$. Now we are ready to state the main theorem of this section: \begin{theorem}\label{thm hyp} For $\alpha>\max\{2s-\frac{n+1}{2}, 0\}$ the Poisson transform establishes an isomorphism of Banach spaces $$\Pc_\lambda: L^2(N)\to \B_\alpha(\Xi_S, \lambda)\,.$$ If moreover $\lambda=s$ is real, then $\Pc_\lambda$ is an isometry up to positive scalar. 
\end{theorem} \begin{proof} The behaviour of $ w_s^\alpha(\xi) $ can be read off from the well known asymptotic properties of the functions $ K_\nu $ and $ I_\alpha.$ Indeed we can show (see Proposition \ref{prop monotone} below) that there exist $c_1, c_2>0$ such that $$ c_1 \, (1+|\xi|)^{-(\alpha+ \frac{n+1}{2}-2s)} \leq w_\lambda^\alpha(\xi) \leq c_2 \, (1+|\xi|)^{-(\alpha+\frac{n+1}{2}-2s)} \qquad (\xi \in \R^n)\, .$$ When $ 2s > \frac{n+1}{2},$ we can choose $ \alpha = (2s - \frac{n+1}{2}) >0 $ in Theorem \ref{thm level isometry} above. With this choice, note that $ w_\lambda^\alpha(\xi) \leq c_3 $ and consequently we have $$ \int_{\Tc_a} |\phi_a(z)|^2 {\bf w}_{\lambda,a}^\alpha(z)dz \leq c_{n,\alpha,\lambda}\, a^{-2s+2n} \int_{\R^n} |\widehat{f}(x)|^2 \, dx .$$ This inequality implies that $\Pc_\lambda$ is well defined. The surjectivity follows as in the proof of Theorem \ref{maintheorem}, and with that the first assertion is established. The second part is a consequence of Theorem \ref{thm level isometry} and the stated monotonicity in Proposition \ref{prop monotone}. \end{proof} \begin{proposition}\label{prop monotone} Let $\lambda = s>0$ be real and $\alpha>\max\{2s-\frac{n+1}{2}, 0\}$. Then $w_s^\alpha(r)$, $r >0 $, is positive and monotonically decreasing. Moreover, $w_s^\alpha(0) =2^{2s-\alpha-\frac{n}{2}-1} \frac{\Gamma(s)^2}{\Gamma(\alpha+\frac{n}{2})} $ and $w_s^\alpha(r) \sim c_\alpha \, r^{-(\alpha+\frac{n+1}{2}-2s)}$ for $ r \to \infty$. \end{proposition} \begin{proof} From the definition, we have $$ w_s^\alpha(r)= r^{2s} K_s(r)^2 \frac{I_{\alpha+n/2-1}(2r)}{(2r)^{\alpha+n/2-1}}.$$ Evaluating $ r^sK_s(r)$ and $ \frac{I_{\alpha+n/2-1}(2r)}{(2r)^{\alpha+n/2-1}}$ at $ r =0 $ by making use of the limiting forms close to zero (see \cite[10.30]{OlMax}) we obtain $ w_s^\alpha(0) = 2^{2s-\alpha-\frac{n}{2}-1} \frac{\Gamma(s)^2}{\Gamma(\alpha+\frac{n}{2})}$ as claimed in the proposition.
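For $s=\tfrac12$, $n=1$, $\alpha=1$ all Bessel factors reduce to elementary functions, $K_{1/2}(r)=\sqrt{\pi/(2r)}\,e^{-r}$ and $I_{1/2}(z)=\sqrt{2/(\pi z)}\sinh z$, so the limiting value $w_s^\alpha(0)=2^{2s-\alpha-n/2-1}\Gamma(s)^2/\Gamma(\alpha+n/2)$ and the monotone decay can be observed directly; an illustrative sketch:

```python
# For s = 1/2, n = 1, alpha = 1 the weight w_s^alpha has elementary closed form;
# check the limiting value 2^{2s-alpha-n/2-1} Gamma(s)^2 / Gamma(alpha+n/2)
# and the monotone decay on (0, oo).
import math

S, ALPHA, N = 0.5, 1.0, 1

def w(r):
    k_half = math.sqrt(math.pi / (2 * r)) * math.exp(-r)  # K_{1/2}(r)
    z = 2 * r
    i_half = math.sqrt(2 / (math.pi * z)) * math.sinh(z)  # I_{1/2}(2r)
    return r ** (2 * S) * k_half ** 2 * i_half / z ** (ALPHA + N / 2 - 1)

limit = 2 ** (2 * S - ALPHA - N / 2 - 1) * math.gamma(S) ** 2 / math.gamma(ALPHA + N / 2)
vals = [w(0.02 * k) for k in range(1, 500)]
```

In this special case $w_{1/2}^1(0)=\sqrt{\pi/2}$, which the check reproduces.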
The well known asymptotic properties of $ K_s(r) $ and $ I_\beta(r) $, see \cite[10.40]{OlMax}, prove the other claim. It therefore remains to show that $ w_s^\alpha(r) $ is monotonically decreasing, which will follow once we show that the derivative of $ w_s^\alpha(r)$ is negative at any $ r >0.$ Making use of the well known relations $$ \frac{d}{dr}\big( r^s K_s(r)\big) = -r^s K_{s-1}(r),\qquad \frac{d}{dr}\Big(\frac{I_\beta(r)}{r^\beta}\Big) = \frac{I_{\beta+1}(r)}{r^\beta}$$ a simple calculation shows that $$ \frac{d}{dr}\big( w_s^\alpha(r)\big) = 2 r^{2s} K_s(r) (2r)^{-(\alpha+n/2-1)} F(r) $$ where $ F(r) = \Big( K_s(r) I_{\alpha+n/2}(2r) - K_{s-1}(r) I_{\alpha+n/2-1}(2r) \Big).$ Thus we only need to check that $ F(r) < 0 $, or equivalently $$ \frac{I_{\alpha+n/2}(2r)}{I_{\alpha+n/2-1}(2r)} < \frac{K_{s-1}(r)}{K_s(r)}.$$ In order to verify this we make use of the following inequality proved by J.~Segura \cite[Theorem 1.1]{JS}: for any $ \nu\ge 0$ and $r > 0 $ one has $$ \frac{I_{\nu+1/2}(r)}{I_{\nu-1/2}(r)} < \frac{r}{\nu+\sqrt{\nu^2+r^2}} \leq \frac{K_{\nu-1/2}(r)}{K_{\nu+1/2}(r)}.$$ Replacing $ r $ by $ 2r $ in the first inequality and applying the second inequality with $\nu$ replaced by $\nu/2$, we deduce that $$ \frac{I_{\nu+1/2}(2r)}{I_{\nu-1/2}(2r)} < \frac{r}{\nu/2+\sqrt{\nu^2/4+r^2}} \leq \frac{K_{(\nu-1)/2}(r)}{K_{(\nu+1)/2}(r)}.$$ In the above we choose $ \nu= 2s -1 $ so that $ (\nu-1)/2 = s-1 $ and $ (\nu+1)/2 = s.$ With $ \beta = \alpha+(n-1)/2 $ we have $$ \frac{I_{\beta+1/2}(2r)}{I_{\beta-1/2}(2r)} < \frac{r}{\beta/2+\sqrt{\beta^2/4+r^2}} < \frac{r}{\nu/2+\sqrt{\nu^2/4+r^2}}\leq \frac{K_{(\nu-1)/2}(r)}{K_{(\nu+1)/2}(r)}$$ provided $ \beta > \nu$, i.e. $ \alpha+(n-1)/2 > 2s-1$, which is precisely the condition on $\alpha$ in the statement of the proposition.
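Segura's chain of inequalities can be observed directly for $\nu=1$, where all three members are elementary: $I_{3/2}(r)/I_{1/2}(r)=\coth r-1/r$ and $K_{1/2}(r)/K_{3/2}(r)=r/(r+1)$. A quick numerical illustration (sample grid only):

```python
# Segura's inequality for nu = 1 via half-integer closed forms:
#   I_{3/2}(r)/I_{1/2}(r) = coth r - 1/r  <  r/(nu + sqrt(nu^2 + r^2))  <=  K_{1/2}(r)/K_{3/2}(r) = r/(r+1)
import math

def chain(r, nu=1.0):
    i_ratio = math.cosh(r) / math.sinh(r) - 1.0 / r  # I_{3/2}(r)/I_{1/2}(r)
    mid = r / (nu + math.sqrt(nu * nu + r * r))
    k_ratio = r / (r + 1.0)                          # K_{1/2}(r)/K_{3/2}(r)
    return i_ratio, mid, k_ratio

samples = [chain(0.05 * k) for k in range(1, 401)]
```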
\end{proof} \section{Remarks on the extension problem} We recall that given a function $f \in L^2(N)$ the Poisson transform $\phi =\Pc_\lambda(f)$ for $\re \lambda>0$, viewed as a function on $S$, gives us a solution of $$\Delta_S \phi =( \lambda^2 -\rho^2) \phi$$ such that we can retrieve $f$ through the boundary value map $$b_\lambda(f)(n) = {\bf c}(\lambda)^{-1}\lim_{a\to \infty\atop a\in A^-} a^{\lambda -\rho} \phi(na)\, .$$ We normalize $\Pc_\lambda$ in the sequel by ${1\over {\bf c}(\lambda)} \Pc_\lambda$ and replace $\phi$ by $\psi(na) = a^{\lambda -\rho} \phi(na)$. Hence it is natural to ask about the differential equation which $\psi$ satisfies. To begin with, we derive a formula for $\Delta_S$ in $N\times A$-coordinates. Note that $\Delta_Z$ on $Z=G/K$ descends from the Casimir operator $\Cc$ on right $K$-invariant functions on $G$, and so we start with $\Cc$. We assume that $G$ is semisimple and let $\kappa:\gf\times \gf\to \R$ be the Cartan--Killing form. For an orthonormal basis $H_1, \ldots, H_n$ of $\af$ with respect to $\kappa|_{\af \times \af} $ we form the operator $\Delta_A= \sum_{j=1}^n H_j^2$ viewed as a left $G$-invariant differential operator on $G$. Likewise we define $\Delta_M$ with respect to an orthonormal basis of $\mf$ with respect to $-\kappa|_{\mf \times \mf}$. Now for each root space $\gf^\alpha$, $\alpha\in \Sigma^+$, we choose a basis $E_\alpha^j$, $1\leq j \leq m_\alpha=\dim \gf^\alpha$, such that with $F_\alpha^j:=-\theta(E_\alpha^j)\in \gf^{-\alpha}$ we have $\kappa(E_\alpha^j, F_\alpha^k)=\delta_{jk}$. Having said all that, we obtain $$\Cc=\Delta_A -\Delta_M+ \sum_{\alpha>0}\sum_{j=1}^{m_\alpha} E_\alpha^j F_\alpha^j + F_\alpha^j E_\alpha^j\, .$$ Now $\kappa|_{\af \times \af}$, extended complex linearly, identifies $\af_\C$ with $\af_\C^*$ and for $\lambda\in \af_\C^*$ we let $H_\lambda\in \af_\C$ be such that $\lambda = \kappa(\cdot, H_\lambda)$. We further recall that $[E_\alpha^j, F_\alpha^j]=H_\alpha$.
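The rewriting of $\Cc$ below rests on the elementary identity $E_\alpha^j F_\alpha^j + F_\alpha^j E_\alpha^j = 2E_\alpha^j F_\alpha^j - [E_\alpha^j, F_\alpha^j]$ together with $[E_\alpha^j, F_\alpha^j]=H_\alpha$; a toy matrix check in $\mathfrak{sl}(2,\R)$ (standard basis, nothing specific to the normalizations used here):

```python
# Toy check in sl(2,R) of the identities [E,F] = H and E F + F E = 2 E F - [E,F]
# that drive the rewriting of the Casimir element.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B, sign=1):
    return [[A[i][j] + sign * B[i][j] for j in range(2)] for i in range(2)]

E = [[0, 1], [0, 0]]
F = [[0, 0], [1, 0]]
H = [[1, 0], [0, -1]]

EF, FE = mul(E, F), mul(F, E)
bracket = add(EF, FE, sign=-1)   # [E, F]
anti = add(EF, FE)               # E F + F E
rewritten = add([[2 * EF[i][j] for j in range(2)] for i in range(2)], bracket, sign=-1)
```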
Thus we can rewrite $\Cc$ as $$\Cc=\Delta_A -\Delta_M -H_{2\rho} + \sum_{\alpha>0}\sum_{j=1}^{m_\alpha} 2E_\alpha^j F_\alpha^j \, .$$ Now note that $2E_\alpha^j F_\alpha^j = 2E_\alpha^j E_\alpha^j+ 2E_\alpha^j( F_\alpha^j - E_\alpha^j)$ with $F_\alpha^j -E_\alpha^j\in \kf$. Therefore $\Cc$ descends on right $K$-invariant smooth functions on $G$ to the operator $$\Delta_Z = \Delta_A -H_{2\rho} + \sum_{\alpha>0}\sum_{j=1}^{m_\alpha} 2E_\alpha^j E_\alpha^j \, .$$ Now if we identify $Z=G/K$ with $N\times A$, then we see that at a point $(n,a)$ the operator $\Delta_S$ is given in the separated form $$\Delta_S = \Delta_A -H_{2\rho} + \underbrace{\sum_{\alpha>0}\sum_{j=1}^{m_\alpha} 2a^{2\alpha}E_\alpha^j E_\alpha^j}_{=: \Delta_N^a}$$ with $\Delta_N^a$ acting on the right of $N$. Hence $$\Delta_S = \Delta_A -H_{2\rho} +\Delta_N^a\, .$$ Let now $\phi = \phi(n,a)$ be an eigenfunction of $\Delta_S$ to parameter $\lambda\in \af_\C^*$, say $$\Delta_S \phi =( \lambda^2 - \rho^2) \phi\, .$$ We think of $\phi=\Pc_\lambda(f)$ for some generalized function $f$ on $N$, so that we can retrieve $f$ from $\phi$ via the boundary value map \eqref{boundary}. As motivated above we replace $\phi$ by $\psi(n,a)= a^{\lambda -\rho} \phi(n,a)$ and see what differential equation $\psi$ satisfies. Since $\phi = a^{\rho-\lambda} \psi$ we have $$(\Delta_A -H_{2\rho})a^{\rho-\lambda} \psi = a^{\rho -\lambda} ( \lambda^2 -\rho^2 + \Delta_A - H_{2\rho} + \underbrace{2 H_{\rho -\lambda}}_{ =H_{2\rho} - H_{2\lambda}})\psi\, .$$ This means that $\psi$ satisfies the differential equation \begin{equation} \label{ext1}(\Delta_A - H_{2 \lambda} + \Delta_N^a)\psi(n,a)=0\, .\end{equation} If we assume now that $\re \lambda>0$ and $\phi=\Pc_\lambda f$, then we have \begin{equation} \label{ext2}\lim_{a\to \infty\atop a\in A^-} \psi(n,a)= f(n)\, .\end{equation} We refer to the pair of equations \eqref{ext1} and \eqref{ext2} as the extension problem with parameter $\lambda$ for the operator $\Delta_S$.
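In the rank one Euclidean model $N=\R^n$ with real parameter $s=\lambda$, the extension equation is of Caffarelli--Silvestre type, $\psi_{tt}+\frac{1-2s}{t}\psi_t+\Delta_x\psi=0$, with kernel solution $t^{2s}(t^2+|x|^2)^{-(n+2s)/2}$ (see \cite{CS}). A finite difference sanity check for $n=1$ at arbitrary sample points:

```python
# Finite difference sanity check (n = 1, arbitrary sample values) that the kernel
# psi(x,t) = t^{2s} (t^2 + x^2)^{-(n+2s)/2} satisfies the extension equation
# psi_tt + ((1-2s)/t) psi_t + psi_xx = 0 of Caffarelli--Silvestre type.
S, N_DIM, H = 0.3, 1, 1e-4

def psi(x, t):
    return t ** (2 * S) * (t * t + x * x) ** (-(N_DIM + 2 * S) / 2)

def residual(x, t):
    p_tt = (psi(x, t + H) - 2 * psi(x, t) + psi(x, t - H)) / H ** 2
    p_t = (psi(x, t + H) - psi(x, t - H)) / (2 * H)
    p_xx = (psi(x + H, t) - 2 * psi(x, t) + psi(x - H, t)) / H ** 2
    return p_tt + (1 - 2 * S) / t * p_t + p_xx

res = [residual(x, t) for (x, t) in [(0.4, 0.7), (-1.2, 0.9), (0.0, 1.5)]]
```

The residual is only zero up to the $O(h^2)$ discretization error of the central differences.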
Given $f$, it has a unique solution provided $\Delta_S$ generates all of $\mathbb{D}(Z)$, that is, the real rank of $G$ is one. It is instructive to see what this becomes classically for the real hyperbolic spaces with $N=\R^n$. Here we use the classical (i.e.~normalized) Poisson transform, and in the notation of the previous section the extension problem becomes \begin{equation} \label{ext1a} \big(\partial_t^2 + \frac{1-2\lambda}{t} \partial_t+ \Delta_{\R^n}\big)\psi(x,t)=0\qquad (x\in \R^n, t>0)\, .\end{equation} If we assume now that $\re \lambda>0$ and $\phi=\Pc_\lambda f$, then we have \begin{equation} \label{ext2a}\lim_{t\to 0} \psi(x,t)= f(x)\, .\end{equation} The general theory then tells us that there is a unique solution $\psi$ of the extension problem \eqref{ext1a} and \eqref{ext2a}; this is the extension problem considered by Caffarelli and Silvestre in \cite{CS}. \section*{Acknowledgement} The third author (L.~R.) is supported by Ikerbasque, by the Basque Government through the BERC 2022-2025 program, and by Agencia Estatal de Investigaci\'on through BCAM Severo Ochoa excellence accreditation CEX2021-001142-S/MCIN/AEI/10.13039/501100011033, RYC2018-025477-I, CNS2023-143893, and PID2023-146646NB-I00 funded by MICIU/AEI/10.13039/501100011033 and by ESF+. \begin{thebibliography}{99} \bibitem{AG} D. N. Akhiezer and S. G. Gindikin, {\it On Stein extensions of real symmetric spaces}, Math. Ann. {\bf 286} (1990), no. {\bf 1-3}, 1--12. \bibitem{BOS} S. Ben Sa\"{i}d, T. Oshima and N. Shimeno, {\it Fatou's theorems and Hardy-type spaces for eigenfunctions of the invariant differential operators on symmetric spaces}, Int. Math. Res. Not. 2003, no. {\bf 16}, 915--931. \bibitem{CS} L. Caffarelli and L. Silvestre, {\it An extension problem related to the fractional Laplacian}, Comm. Partial Differential Equations {\bf 32 (8)} (2007), 1245--1260. \bibitem{CK} R. Camporesi and B. Kr\"otz, {\it The complex crown for homogeneous harmonic spaces}, Trans. Amer. Math.
Soc. {\bf 364} (2012), no. {\bf 4}, 2227--2240. \bibitem{GS} I. M. Gel'fand and G. E. \v{S}ilov, \textit{Generalized functions. 1. Properties and operations}, Academic Press, Inc., Boston, MA, 1969. \bibitem{GKKS} H. Gimperlein, B. Kr\"otz, J. Kuit and H. Schlichtkrull, {\it A Paley--Wiener theorem for Harish-Chandra modules}, Cambridge Journal of Mathematics {\bf 10} (2022), 689--742. \bibitem{Gi} S. Gindikin, {\it Some remarks on complex crowns of Riemannian symmetric spaces}, The 2000 Twente Conference on Lie Groups (Enschede). Acta Appl. Math. {\bf 73} (2002), no. {\bf 1-2}, 95--101. \bibitem{H1} S. Helgason, {\it A duality for symmetric spaces with applications to group representations}, Advances in Math. {\bf 5} (1970), 1--154. \bibitem{H3} \bysame, ``Geometric Analysis on Symmetric Spaces", Math. Surveys and Monographs {\bf 39}, Amer. Math. Soc., 1994. \bibitem{I} A. Ionescu, {\it On the Poisson transform on symmetric spaces of real rank one}, J. Funct. Anal. {\bf 174} (2000), no. {\bf 2} , 513--523. \bibitem{K6} M. Kashiwara, A. Kowata, K. Minemura, K. Okamoto, T. Oshima, and M. Tanaka, {\it Eigenfunctions of invariant differential operators on a symmetric space}, Ann. of Math. {\bf (2) 107} (1978), no. {\bf 1}, 1--39. \bibitem{Ka} K. Kaizuka, {\it A characterization of the $L^2$-range of the Poisson transform related to Strichartz conjecture on symmetric spaces of noncompact type}, Adv. Math. {\bf 303} (2016), 464--501. \bibitem{Kos} B. Kostant, {\it On the existence and irreducibility of certain series of representations}, Bull. Amer. Math. Soc. {\bf 75} (1969), 627--642. \bibitem{KO} B. Kr\"otz and E. Opdam, {\it Analysis on the crown domain}, Geom. Funct. Anal. {\bf 18} (2008), no. {\bf 4}, 1326--1421. \bibitem{KS} B. Kr\"otz and H. Schlichtkrull, {\it Holomorphic extension of eigenfunctions}, Math. Ann. {\bf 345} (2009), no. {\bf 4}, 835--841. \bibitem{KrSt} B. Kr\"otz and R. Stanton, {\it Holomorphic extensions of representations. I. 
Automorphic functions}, Ann. of Math. {\bf (2) 159} (2004), no. {\bf 2}, 641--724. \bibitem{OlMax} F. W. J. Olver and L. C. Maximon, Bessel Functions, \textit{NIST handbook of mathematical functions} (edited by F. W. J. Olver, D. W. Lozier, R. F. Boisvert and C. W. Clark), Chapter~10, National Institute of Standards and Technology, Washington, DC, and Cambridge University Press, Cambridge, 2010. Available online at \url{http://dlmf.nist.gov/10} \bibitem{RT} L. Roncal and S. Thangavelu, {\it Holomorphic extensions of eigenfunctions on $NA$ groups}, arXiv:2005.09894 \bibitem{JS} J. Segura, {\it Bounds for ratios of modified Bessel functions and associated Tur\'an-type inequalities}, J. Math. Anal. Appl. {\bf 374} (2011), 516--528. \bibitem{W1} N. R. Wallach, ``Real reductive groups I", Pure and Applied Mathematics, {\bf 132-I}. Academic Press, Inc., Boston, MA, 1988. \end{thebibliography} \end{document} \subsection{Complex hyperbolic spaces} In the case of the complex hyperbolic space, $ \mathbf{H}_n(\C) = \SU(n+1,1)/\operatorname{U}(n), $ the Iwasawa decomposition of $ G = \SU(n+1,1) $ is explicitly given by $ N = \Hb^n$, $K = \operatorname{U}(n) $ and $ A = \R_+.$ Here $ \Hb^n $ is the Heisenberg group $\Hb^n=\C^n\times \R$ equipped with the group law \begin{equation} \label{law} (z,t)(z',t')=\big(z+z', t+t'+\frac12 \im (z\cdot \bar{z'})\big), \end{equation} where $z,z'\in \C^n$ and $t, t' \in \R$. It is convenient to use real coordinates: thus identifying $ \Hb^n $ with $ \R^{2n+1} $ and considering coordinates $ (x,u,\xi) $ we can write the group law as $$ (x,u,\xi)(y,v,\eta) = \big(x+y,u+v,\xi+\eta+\frac{1}{2}(u\cdot y-v\cdot x)\big).
$$ Note that $ \im \big((x+iu)\cdot(y-iv)\big)= u\cdot y-v\cdot x = [(x,u)(y,v)] $ is the symplectic form on $ \R^{2n}.$ The group $ \Hb^n $ admits a family of automorphisms indexed by $ A = \R_+ $ and given by the non-isotropic dilations $ \delta_r (z,t) = ( rz, r^2 t).$ Thus $ A $ acts on $ \Hb^n $ as automorphisms, and so we can form the semi-direct product $ S = NA = \Hb^n \times \R_+$ which we identify with the hyperbolic space $ \mathbf{H}_n(\C).$ The group law in $ S $ is given by $$ (z,t,a) (w,s,a') = ((z,t)\delta_{\sqrt{a}}(w,s), a a') = \big(z+\sqrt{a}w, t+ a s+\frac{1}{2} \sqrt{a} \im(z\cdot \bar{w}), a a'\big). $$ { To match with the main text, it would be better to use $\delta_a$ and not $\delta_{\sqrt{a}}$. The following part in blue can be skipped.} { This group turns out to be a solvable group which is non-unimodular. Indeed, the left Haar measure on $ S $ is $ a^{-n-2}\, dz\, dt\, da $ whereas the right Haar measure is $ a^{-1}\, dz\, dt\, da.$ The Lie algebra $ \mathfrak{s} $ of the Lie group $ S $ can be identified with $ \R^{2n+1} \times \R.$ In what follows, we give an explicit basis for the Lie algebra $ \mathfrak{s} .$\\ Consider the following $ (2n+1) $ left invariant vector fields on the Heisenberg group $ \Hb^n$: for $ j =1,2,..., n$ \begin{equation} \label{vfields} X_j = \frac{\partial}{\partial{x_j}}+\frac{1}{2}u_j \frac{\partial}{\partial \xi},\qquad Y_j = \frac{\partial}{\partial{u_j}}-\frac{1}{2}x_j \frac{\partial}{\partial \xi},\qquad T = \frac{\partial}{\partial \xi}. \end{equation} These vector fields form a basis for the Heisenberg Lie algebra $ \mathfrak{h}_n.$ It is easily checked that the only non-trivial Lie brackets in $ \mathfrak{h}_n $ are given by $ [ X_j, Y_j] = T$ as all other brackets vanish. 
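The group law in these real coordinates can be checked for consistency; associativity follows from the bilinearity of the symplectic cocycle, and can be observed directly. An illustrative sketch with arbitrary sample points for $n=2$:

```python
# Associativity check of the Heisenberg group law
#   (x,u,xi)(y,v,eta) = (x+y, u+v, xi+eta+(u.y - v.x)/2)
# in real coordinates, n = 2, at arbitrary sample points.
N_H = 2

def mult(p, q):
    x, u, xi = p[:N_H], p[N_H:2 * N_H], p[2 * N_H]
    y, v, eta = q[:N_H], q[N_H:2 * N_H], q[2 * N_H]
    sym = sum(u[j] * y[j] - v[j] * x[j] for j in range(N_H)) / 2
    return (tuple(x[j] + y[j] for j in range(N_H))
            + tuple(u[j] + v[j] for j in range(N_H))
            + (xi + eta + sym,))

a = (0.3, -1.2, 0.5, 2.0, 0.7)
b = (1.1, 0.4, -0.6, 0.2, -1.5)
c = (-0.9, 0.8, 1.3, -0.4, 0.25)
left, right = mult(mult(a, b), c), mult(a, mult(b, c))
```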
An easy calculation shows that the vector fields $ E_0 = a \partial_a$, $ E_j = \sqrt{a}X_j$, $E_{n+j} = \sqrt{a}Y_j $ for $ j = 1,2,...,n $ and $ E_{2n+1} = a T $ are left invariant on $ S.$ The non-zero Lie brackets are given by $$ [E_0, E_j] = \frac{1}{2} E_j, \qquad [E_0, E_{n+j}] = \frac{1}{2} E_{n+j}, \qquad [E_0, E_{2n+1}] = E_{2n+1}, $$ together with $[E_j, E_{n+j}] = E_{2n+1}$ for $j=1,2,...,n$. Equipping $ \mathfrak{s} $ with the standard inner product on $ \R^{2n+1} \times \R$ these $2n+2 $ vector fields form an orthonormal basis for $ \mathfrak{s}. $ This induces a left invariant Riemannian metric on $ S.$ The Laplace-Beltrami operator on the Riemannian manifold $ S $ can be expressed in terms of the vector fields $ E_j.$ Indeed, we have $$ \Delta_S = \sum_{j=0}^{2n+1} E_j^2-(n+1) E_0 = (a\partial_a)^2- a \mathcal{L}+a^2 \partial_\xi^2 -(n+1) a \partial_a $$ where $ \mathcal{L} = -\sum_{j=1}^n ( X_j^2 +Y_j^2) $ is the sublaplacian on $ \Hb^n.$\\} The Poisson transform $ \mathcal{P}_\lambda $ on $ \mathbf{H}_n(\C) $ which takes functions on $ \Hb^n $ into eigenfunctions of $ \Delta_S $ with eigenvalues $ \lambda^2- \frac{1}{4}(n+1)^2 $ is explicitly known, see . As in the case of real hyperbolic spaces, we define $$ \varphi_{\lambda, a}(x,u,\xi) = \frac{2^{-(n+1+2\lambda)} }{ \pi^{n+1}\Gamma(2\lambda)} \Gamma\big(\lambda +(n+1)/2\big)^2\big((a+\frac{1}{4}|(x,u)|^2)^2+\xi^2\big)^{-(\lambda+(n+1)/2)}$$ and assume from now on that $s:=\re \lambda>0$. The classical Poisson transform of $ f \in L^2(\Hb^n) $ is given by $$ \mathcal{P}_\lambda f((x,u,\xi),a) = a^{\lambda+(n+1)/2} f \ast \varphi_{\lambda, a}(x,u,\xi) $$ where the convolution is taken over $ \Hb^n.$ For the sake of simplicity of notation, let us write $ {\bf x} =(x,u,\xi), {\bf y} = (y,v,\eta) $ to stand for elements of $\Hb^n.
$ We will also use exponential coordinates: $ {\bf x} = e^X $ where $ X = \sum_{j=1}^n (x_j X_j+y_j Y_j)+ \xi T.$ Observe that $ \varphi_{\lambda,a}({\bf x}) $ has a holomorphic extension $ \varphi_{\lambda,a}({\bf z}) $ to the domain $\Omega_a $ defined by $$\Omega_a = \{ {\bf z} = {\bf x}+i {\bf y} \in \C^{2n+1}: |\eta|+\frac{1}{4}(|y|^2+|v|^2) < a \}.$$ It is therefore reasonable to expect that $ \phi_a({\bf x}) = \mathcal{P}_\lambda f({\bf x},a) $ has a holomorphic extension to a certain domain in $ \C^{2n+1}.$ \\ Indeed, as the $ \mathcal{P}_\lambda(f) $ are eigenfunctions of the Laplace-Beltrami operator, by the unipotent model for the complex crown, all the functions $ \phi_a({\bf x}) $ extend holomorphically to the domain $\Tc_a = N e^{i\Lambda_a} $ for some domain $ \Lambda_a \subset \mathfrak{h}_n.$ As expected, this domain is left invariant under the action of $ \Hb^n$ and we can describe it in terms of $ \Omega_a.$ Since $$ \mathcal{P}_\lambda f({\bf x},a) = a^{\lambda+(n+1)/2} \int_{\Hb^n} f({\bf x^\prime}) \varphi_{\lambda,a}({\bf x^\prime}^{-1} {\bf x} ) d{\bf x^\prime}, $$ in order to holomorphically extend $ \mathcal{P}_\lambda f({\bf x},a) $ we require a good estimate for the integral \begin{equation}\label{integral-esti} \int_{\Hb^n} |\varphi_{\lambda,a}({\bf x}e^{iY})| d{\bf x},\,\,\, Y \in \Lambda_a. \end{equation} At present, this seems to be difficult, but we can circumvent it by proceeding as follows. Replacing $ {\bf x} $ by $ {\bf x}e^{iY} $ we see that $$ \mathcal{P}_\lambda f({\bf x}e^{iY},a) = a^{\lambda+(n+1)/2} \int_{\Hb^n} f({\bf x}{\bf x^\prime}) \varphi_{\lambda,a}({\bf x^\prime}^{-1} e^{iY} ) d{\bf x^\prime}. $$ Defining $ \Phi_Y({\bf x^\prime}) = \varphi_{\lambda,a}({\bf x^\prime} e^{iY} ) $ we can rewrite the above as $$ \mathcal{P}_\lambda f({\bf x}e^{iY},a) = a^{\lambda+(n+1)/2} \int_{\Hb^n} f({\bf x}{\bf x^\prime}^{-1}) \Phi_Y({\bf x^\prime} ) d{\bf x^\prime} = a^{\lambda+(n+1)/2} f \ast \Phi_Y({\bf x}).
$$ For $ f \in L^2(\Hb^n), $ instead of using the estimate $ \| f \ast \Phi_Y \|_2 \leq \| \Phi_Y\|_1 \, \|f\|_2 $, which requires a good estimate on $ \|\Phi_Y\|_1 $, we use the following lemma.\\ Given $ f \in L^2(\Hb^n) $, let $ f^\tau(x,u)$ stand for the inverse Fourier transform of $ f $ in the central variable: $$ f^\tau(x,u) = \int_{-\infty}^\infty f(x,u,\xi) \, e^{i\xi \tau}\, d\xi.$$ \begin{lemma} For $ f, g \in L^2(\Hb^n) $ we have $$ \|f \ast g\|_2 \leq (2\pi)^{n/2} \, \sup_{\tau \in \R} \|g^\tau\|_{L^1(\R^{2n})} \| f\|_2.$$ \end{lemma} \begin{proof} The proof makes use of the Plancherel theorem for the Fourier transform on $ \Hb^n $, which we define using the Schr\"odinger representations $ \pi_\tau$, $\tau \in \R, \tau \neq 0.$ All of these are realized on $ L^2(\R^n) $, and the Fourier transform of $ f \in L^2(\Hb^n) $ is the operator valued function $$ \widehat{f}(\tau) = \int_{\Hb^n} f({\bf x}) \pi_\tau({\bf x}) d{\bf x} .$$ The Plancherel theorem for the Heisenberg groups reads $$ \int_{\Hb^n} |f({\bf x})|^2 \, d{\bf x} = (2\pi)^{-n-1} \int_{-\infty}^\infty \| \widehat{f}(\tau)\|_{HS}^2 |\tau|^n d\tau. $$ Since $ \widehat{f \ast g}(\tau) = \widehat{f}(\tau) \widehat{g}(\tau) $ we obtain the estimate $$ \| f \ast g \|_2^2 = (2\pi)^{-n-1} \int_{-\infty}^\infty \| \widehat{f}(\tau) \widehat{g}(\tau)\|_{HS}^2 |\tau|^n d\tau \leq \sup_{\tau \in \R} \|\widehat{g}(\tau) \|^2 \, \|f \|_2^2 . $$ From the explicit form of the representations $ \pi_\tau $ we infer that $$ \widehat{g}(\tau) = \int_{\R^{2n}} g^\tau(x,u) \pi_\tau(x,u,0) dx\, du. $$ Consequently, we get the estimate $ \| \widehat{g}(\tau)\| \leq \| g^\tau \|_{L^1(\R^{2n})}$, which proves the lemma.
\end{proof} In view of the above lemma, we need to estimate the $ L^1(\R^{2n}) $ norms of $ \Phi_Y^\tau$, $\tau \in \R.$ In order to do this, we make use of the following formula, proved in \cite{RT1} (see Proposition 4.2), which gives an integral representation of $ \varphi_{\lambda,a}:$ \begin{theorem} \label{thm:com} For any $ s, A > 0 $ and $ \xi \in \R $ we have $$ \int_0^\infty \Big( \int_{-\infty}^\infty e^{-i \xi \tau} \Big(\frac{\tau}{\sinh t \tau}\Big)^{n+s+1} e^{-\frac{1}{4}\tau \coth(t \tau) A}\, d\tau\,\Big) \,dt = c_{n,s} (A^2+16\xi^2)^{-\frac{(n+1+s)}{2}},$$ where $c_{n,s}$ is the explicit constant $$ c_{n,s}=2^{n-1+3s}\pi^{-n-1}\Gamma\Big(\frac{n+s+1}{2}\Big)^2. $$ \end{theorem} By analytic continuation, the above formula holds for all $ s \in \C,\, \re{s} > 0.$ Given $ {\bf x} = (x,u,\xi),$ taking $ A = 4a+|x|^2+|u|^2 $ we get the representation $$ \varphi_{\lambda,a}({\bf x}) = c_{n,\lambda}^\prime \int_0^\infty \Big( \int_{-\infty}^\infty e^{-i \xi \tau} \Big(\frac{\tau}{\sinh t \tau}\Big)^{n+2\lambda+1} e^{-\tau \coth(t \tau) \big(a+\frac{1}{4}(|x|^2+|u|^2)\big)}\, d\tau \Big) \, dt .$$ As this representation involves a Fourier inversion, we are not able to get a good estimate for the integral \eqref{integral-esti}. However, the partial Fourier transform of $ \varphi_{\lambda,a}({\bf x}e^{iY})$ in the $ \xi$ variable is explicit and can be estimated.
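The estimates that follow repeatedly integrate Gaussians in the $(x,u)$-variables; the only scalar ingredient is the identity $\int_{\R} e^{-ax^2+bx}\,dx = \sqrt{\pi/a}\,e^{b^2/(4a)}$ for $a>0$. A quick midpoint-rule sanity check of that identity (illustrative only; the cutoff `L` and step count are ad hoc choices of ours):

```python
import math

def gauss_int(a, b, L=40.0, steps=200000):
    # midpoint rule for int_{-L}^{L} exp(-a x^2 + b x) dx; for a > 0 and
    # moderate b the tail beyond |x| = L is negligible
    h = 2.0 * L / steps
    total = 0.0
    for i in range(steps):
        x = -L + (i + 0.5) * h
        total += math.exp(-a * x * x + b * x)
    return total * h

a, b = 0.7, 0.9
exact = math.sqrt(math.pi / a) * math.exp(b * b / (4.0 * a))
assert abs(gauss_int(a, b) - exact) < 1e-6
```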
For $ Y \in \mathfrak{h}_n $ with $ e^Y = (y,v,\eta) $ we define its homogeneous norm $ |Y| $ by $$ |Y|^2 = |\eta|+\frac{1}{2}(|y|^2+|v|^2).$$ Note that this norm is homogeneous of degree one with respect to the non-isotropic dilations on $ \Hb^n.$ \begin{proposition} For any $Y \in \mathfrak{h}_n $ with $ |Y| < \sqrt{a} $ we have the estimate $$ \int_{\R^{2n}} | \Phi_Y^\tau(x,u) | dx\, du \leq C_{n,\lambda} \, \big( a-|Y|^2\big)^{-2 s} $$ where the constant $ C_{n,\lambda} $ is independent of $ \tau \in \R.$ \end{proposition} \begin{proof} With $ {\bf x} =(x,u,\xi) $ and $ e^{iY} = i(y,v,\eta)$ we have $ {\bf x} e^{iY} = (z,w,\zeta+\frac{i}{2} \im (z \cdot \bar{w}))$ where we have written $ z = x+iy, w =u+iv, \zeta = \xi+i\eta.$ The integral representation of $ \varphi_{\lambda,a}({\bf x}e^{iY}) $ followed by Fourier inversion gives us $$ \Phi_Y^\tau(x,u) = C_{n,\lambda}^\prime \int_0^\infty e^{-i \tau (\zeta+\frac{i}{2} \im (z \cdot \bar{w}))} \Big(\frac{\tau}{\sinh t \tau}\Big)^{n+2\lambda+1} e^{-\tau \coth(t \tau) \big(a+\frac{1}{4}(z^2+w^2)\big)}\, \, dt .$$ This gives the pointwise estimate $$ | \Phi_Y^\tau(x,u)| \leq C_{n,\lambda}^\prime \int_0^\infty e^{\tau (\eta+\frac{1}{2} \im (z \cdot \bar{w}))} \Big(\frac{\tau}{\sinh t \tau}\Big)^{n+2s+1} e^{-\tau \coth(t \tau) \big(a+\frac{1}{4}\re(z^2+w^2)\big)}\, \, dt .$$ As we have $$ (2\pi)^{-n} \int_{\R^{2n}} e^{\frac{1}{2} \im (z \cdot \bar{w}) \tau} e^{- \frac{1}{4} \tau \coth(t \tau) (|x|^2+|u|^2)}\, dx\, du = (\tau \coth t\tau)^{-n} e^{ \frac{1}{4} \tau \tanh(t \tau) (|y|^2+|v|^2)},$$ the integral of $ | \Phi_Y^\tau(x,u)|$ over $ \R^{2n} $ is bounded by a constant multiple of $$ \int_0^\infty e^{\tau \eta} \Big(\frac{\tau}{\sinh t \tau}\Big)^{2s+1} (\cosh t\tau)^{-n} e^{-\tau \coth(t \tau) \big(a-\frac{1}{4}(|y|^2+|v|^2)\big)}\, e^{ \frac{1}{4} \tau \tanh(t \tau) (|y|^2+|v|^2)} \, dt .$$ As $ \tanh t\tau \leq 1 \leq \coth t \tau, $ the above integral is bounded by $$ \int_0^\infty \Big(\frac{\tau}{\sinh t 
\tau}\Big)^{2s+1} (\cosh t\tau)^{-n} e^{-\tau \coth(t \tau) \big(a-\frac{1}{2}(|y|^2+|v|^2)-|\eta|\big)}\, \, dt .$$ By making the change of variable $ t \rightarrow \tau^{-1} t $ followed by $ \sinh t \rightarrow t^{-1} $ we can estimate the above by a Gamma integral, which yields the stated estimate. \end{proof} Thus we see that for each $ a \in A, $ the function $ \phi_a({\bf x}) := \mathcal{P}_\lambda f({\bf x}, a) $ has a holomorphic extension to $ \Tc_a = Ne^{i\Lambda_a} $ with $ \Lambda_a = \{ Y \in \mathfrak{h}_n: |Y| < \sqrt{a} \}$ and satisfies the estimate $$ \Big( \int_{\Hb^n} |\phi_a({\bf x}e^{iY})|^2 \, d{\bf x} \Big)^{1/2} \leq C_{n,\lambda}\, a^{s+(n+1)/2} \big( a-|Y|^2\big)^{-2 s} \, \|f \|_2.$$ Consequently, with ${\bf w}_{\lambda,a}^\epsilon({\bf z}) = \big( 1- \frac{|Y|^2}{a} \big)_+^{4 s+\epsilon-1},\, \epsilon > 0, $ we obtain $$ \Big( \int_{\Tc_a} |\phi_a({\bf z})|^2 \, {\bf w}_{\lambda,a}^\epsilon({\bf z})\,d{\bf z} \Big)^{1/2} \leq C_{n,\lambda}\, a^{- s+(n+1)/2} \, \|f \|_2.$$ Thus in Theorem \ref{main theorem} we can take $ {\bf w}_{\lambda,a}^\epsilon $ as the weight function.
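The Gamma integral behind the last step is, in its model scalar form (valid for small $t$, where $\coth(t\tau)\approx (t\tau)^{-1}$ and $\tau/\sinh t\tau \approx t^{-1}$), the identity $\int_0^\infty t^{-(2s+1)} e^{-B/t}\,dt = \Gamma(2s)\,B^{-2s}$ for $B>0$, which is exactly what produces the factor $(a-|Y|^2)^{-2s}$. A numerical sanity check of this scalar identity (our own sketch; the logarithmic substitution $t=e^{y}$ and the grid bounds are ad hoc):

```python
import math

def gamma_tail_integral(s, B, y_min=-12.0, y_max=30.0, steps=200000):
    # computes int_0^inf t^{-(2s+1)} e^{-B/t} dt via t = e^y (midpoint rule):
    # the integrand becomes e^{-2 s y} * exp(-B e^{-y}), smooth and rapidly decaying
    h = (y_max - y_min) / steps
    total = 0.0
    for i in range(steps):
        y = y_min + (i + 0.5) * h
        total += math.exp(-2.0 * s * y) * math.exp(-B * math.exp(-y))
    return total * h

for s, B in [(1.0, 0.5), (0.75, 2.0)]:
    exact = math.gamma(2.0 * s) * B ** (-2.0 * s)
    assert abs(gamma_tail_integral(s, B) - exact) < 1e-4 * exact
```

The substitution $u=B/t$ turns the integral into $B^{-2s}\int_0^\infty u^{2s-1}e^{-u}\,du = \Gamma(2s)B^{-2s}$, so the $B^{-2s}$ scaling is exact, not just an upper bound, in this model case.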
2206.13949v2
http://arxiv.org/abs/2206.13949v2
$λ$-quiddité sur certains sous-groupes monogènes de $\mathbb{C}$
\documentclass{amsart} \usepackage[dvips,final]{graphics} \usepackage{array} \usepackage{arydshln} \usepackage[makeroom]{cancel} \usepackage[all]{xy} \usepackage{url} \usepackage{multirow, blkarray} \usepackage{booktabs} \usepackage{textcomp} \usepackage[final]{epsfig} \usepackage{color} \usepackage[T1]{fontenc} \usepackage[english,french]{babel} \usepackage[utf8]{inputenc} \usepackage{blindtext} \usepackage{amsfonts,amscd,array, mathdots, epigraph} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{mathrsfs} \usepackage{stmaryrd} \usepackage{slashbox} \usepackage{diagbox} \usepackage{enumitem} \usepackage{ulem} \usepackage{tikz} \usepackage{xcolor} \usepackage{multicol} \definecolor{ufogreen}{rgb}{0.24, 0.82, 0.44} \vfuzz2pt \hfuzz2pt \setlength{\textwidth}{16truecm} \setlength{\hoffset}{-1.5truecm} \begin{document} \newtheorem{theorem}{Théorème}[section] \newtheorem{theore}{Théorème} \newtheorem{definition}[theorem]{Définition} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollaire} \newtheorem*{con}{Conjecture} \newtheorem*{remark}{Remarque} \newtheorem*{remarks}{Remarques} \newtheorem*{pro}{Problème} \newtheorem*{examples}{Exemples} \newtheorem*{example}{Exemple} \newtheorem{lemma}[theorem]{Lemme} \numberwithin{equation}{section} \newcommand{\bZ}{\mathbb{Z}} \newcommand{\bC}{\mathbb{C}} \newcommand{\bN}{\mathbb{N}} \newcommand{\bM}{\mathbb{N^{*}}} \newcommand{\bR}{\mathbb{R}} \title{$\lambda$-quiddité sur certains sous-groupes monogènes de $\bC$} \author{Flavien Mabilat} \date{} \keywords{$\lambda$-quiddity; modular group; cyclic subgroup} \address{Laboratoire de Mathématiques de Reims, UMR9008 CNRS et Université de Reims Champagne-Ardenne, U.F.R. Sciences Exactes et Naturelles Moulin de la Housse - BP 1039 51687 Reims cedex 2, France} \email{[email protected]} \maketitle \selectlanguage{french} \begin{abstract} Au cours de l'étude des frises de Coxeter, M. 
Cuntz a défini la notion de $\lambda$-quiddité et a soulevé le problème de l'étude de celles-ci sur certains sous-ensembles de $\bC$. L'objectif de ce texte est de mener à bien cette étude dans le cas de quelques sous-groupes monogènes de ($\bC,+$). On s'intéressera tout particulièrement aux cas des sous-groupes monogènes engendrés par $\sqrt{k}$ et par $i\sqrt{k}$ avec $k \in \bN$. \\ \end{abstract} \selectlanguage{english} \begin{abstract} During the study of Coxeter friezes, M. Cuntz defined the concept of $\lambda$-quiddity and raised the problem of studying them on some subsets of $\bC$. The objective of this text is to carry out this study in the case of some cyclic subgroups of ($\bC,+$). In particular we will study the case of the cyclic subgroups generated by $\sqrt{k}$ and $i\sqrt{k}$, with $k \in \bN$. \\ \end{abstract} \selectlanguage{french} \thispagestyle{empty} \textbf{Mots clés :} $\lambda$-quiddité; groupe modulaire; sous-groupe monogène \\ \\ \indent \textbf{Classification :} 05A05 \\ \begin{flushright} \textit{\og L'efficacité symbolique des mots ne s'exerce jamais que dans la mesure où celui \\qui la subit reconnaît celui qui l'exerce comme fondé à l'exercer [...]. \fg} \\ Pierre Bourdieu, \textit{Ce que parler veut dire} \end{flushright} \section{Introduction} \label{Intro} Since their introduction at the beginning of the seventies, Coxeter friezes have been the object of numerous works, which have in particular brought to light multiple links with several other areas of mathematics (see for example \cite{Mo1}).
When studying the latter, one is led to the following equation: \[M_{n}(a_{1},\ldots,a_{n})=\begin{pmatrix} a_{n} & -1 \\[4pt] 1 & 0 \end{pmatrix} \begin{pmatrix} a_{n-1} & -1 \\[4pt] 1 & 0 \end{pmatrix} \cdots \begin{pmatrix} a_{1} & -1 \\[4pt] 1 & 0 \end{pmatrix}=-Id.\] \noindent Indeed, the solutions of this equation are precisely the ones that appear in the construction of Coxeter friezes (see \cite{BR} and \cite{CH}, Proposition 2.4). This naturally leads us to consider the generalization below: \begin{equation} \label{a} \tag{$E$} M_{n}(a_1,\ldots,a_n)=\pm Id. \end{equation} \noindent Note, moreover, that the matrices $M_{n}(a_{1},\ldots,a_{n})$ also appear in many other situations, such as the study of continued fractions or the resolution of discrete Sturm-Liouville equations (see for example \cite{O}). \\ \\ \indent The solutions of \eqref{a} are called $\lambda$-quiddities. Their study relies mainly on a notion of irreducibility, based on an operation on the $n$-tuples of elements of the set under consideration (see \cite{C} and the next section). The main objective is then to obtain the set of irreducible solutions of \eqref{a} when the $a_{i}$ belong to a fixed set. Such descriptions are available for $\bN$ (see \cite{C}, Theorem 3.1, and Section \ref{class}), $\mathbb{Z}$ (see \cite{CH}, Theorem 6.2, and Section \ref{class}) and $\bZ[\alpha]$ with $\alpha$ a transcendental complex number (see \cite{M2}, Theorem 2.7, and Section \ref{class}). Moreover, V. Ovsienko gave a recursive procedure allowing one to construct all the $\lambda$-quiddities over $\bM$ (see \cite{O}, Theorem 2). A number of facts are also known about the solutions of \eqref{a} over the rings $\bZ/N\bZ$ (see \cite{M1,M3,M4,M5}) and over finite fields (see \cite{Mo2}).
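Equation \eqref{a} is easy to experiment with numerically. The sketch below (helper names are ours; integer arithmetic, so the checks are exact) builds $M_{n}(a_{1},\ldots,a_{n})$ as the product $M(a_{n})\cdots M(a_{1})$ with $M(a)=\left(\begin{smallmatrix} a & -1 \\ 1 & 0 \end{smallmatrix}\right)$ and tests a few small solutions:

```python
def mat_M(tup):
    # M_n(a_1, ..., a_n) = M(a_n) ... M(a_1) with M(a) = [[a, -1], [1, 0]];
    # we left-multiply by M(a_j) as j runs from 1 to n
    m = ((1, 0), (0, 1))
    for a in tup:
        m = ((a * m[0][0] - m[1][0], a * m[0][1] - m[1][1]),
             (m[0][0], m[0][1]))
    return m

ID, MINUS_ID = ((1, 0), (0, 1)), ((-1, 0), (0, -1))

def is_lambda_quiddity(tup):
    return mat_M(tup) in (ID, MINUS_ID)

assert mat_M((0, 0)) == MINUS_ID          # the size-2 solution
assert mat_M((1, 1, 1)) == MINUS_ID       # a size-3 solution
assert mat_M((-3, 0, 3, 0)) == ID         # (-a, b, a, -b) with ab = 0
assert mat_M((1, 2, 1, 2)) == MINUS_ID    # (a, b, a, b) with ab = 2
assert not is_lambda_quiddity((1, 2, 3))
```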
\\ \\ \indent We now wish to work over subsets of $\bC$ other than those already studied, in connection with an open problem raised by M. Cuntz (see \cite{C}, Problem 4.1). To this end, we will solve \eqref{a} over a number of cyclic subgroups of $(\bC,+)$, that is, over a number of additive subgroups generated by a single element. We will pay particular attention to the cases of the cyclic subgroups generated by $\sqrt{k}$ and by $i\sqrt{k}$ (with $k \in \bN$). The statements of the two main results are presented in the next section and proved in Sections \ref{preuve25} and \ref{preuve26} respectively, while Section \ref{pre} contains preliminary results. \section{Definitions and main results} \label{RP} The objective of this section is twofold. On the one hand, we give the definitions of the notions essential to the study carried out below and, on the other hand, we state the main results of this text. Throughout what follows, $R$ is a sub-magma of $\bC$, that is, a subset of $\bC$ stable under addition. Moreover, if $w \in \bC$, we denote by $<w>$ the subgroup of $(\bC,+)$ generated by $w$. In other words, $<w>~=\mathbb{Z}w=\{kw, k \in \bZ\}$. We begin with the formal definition of the concept of $\lambda$-quiddity. \begin{definition}[\cite{C}, Definition 2.2] \label{21} Let $n \in \bM$. We say that the $n$-tuple $(a_{1},\ldots,a_{n})$ of elements of $R$ is a $\lambda$-quiddity over $R$ of size $n$ if $(a_{1},\ldots,a_{n})$ is a solution of \eqref{a}, that is, if $(a_{1},\ldots,a_{n})$ satisfies $M_{n}(a_{1},\ldots,a_{n})=\pm Id.$ When no ambiguity can arise, we will simply speak of a $\lambda$-quiddity.
\end{definition} \noindent In order to study $\lambda$-quiddities, we give the following two definitions: \begin{definition}[\cite{C}, Lemma 2.7] \label{22} Let $(n,m) \in (\bM)^{2}$, let $(a_{1},\ldots,a_{n})$ be an $n$-tuple of elements of $R$ and $(b_{1},\ldots,b_{m})$ an $m$-tuple of elements of $R$. We define the following operation: \[(a_{1},\ldots,a_{n}) \oplus (b_{1},\ldots,b_{m}):= (a_{1}+b_{m},a_{2},\ldots,a_{n-1},a_{n}+b_{1},b_{2},\ldots,b_{m-1}).\] The $(n+m-2)$-tuple thus obtained is called the sum of $(a_{1},\ldots,a_{n})$ with $(b_{1},\ldots,b_{m})$. \end{definition} \begin{examples} {\rm Here we take $R=\bN$. We have:} \begin{itemize} \item $(2,0,3) \oplus (1,1,0) = (2,0,4,1)$; \item $(2,3,4) \oplus (4,1,0,8) = (10,3,8,1,0)$; \item $(1,3,5,3) \oplus (3,2,2,5,4) = (5,3,5,6,2,2,5)$; \item for $n \geq 2$, $(a_{1},\ldots,a_{n}) \oplus (0,0) = (0,0) \oplus (a_{1},\ldots,a_{n})=(a_{1},\ldots,a_{n})$. \end{itemize} \end{examples} Unfortunately, the operation presented above is neither commutative nor associative (see \cite{WZ}, Example 2.1). However, it is very useful for the study of the solutions of \eqref{a}, because it has the following property: if $(b_{1},\ldots,b_{m})$ is a $\lambda$-quiddity over $R$, then the sum $(a_{1},\ldots,a_{n}) \oplus (b_{1},\ldots,b_{m})$ is a $\lambda$-quiddity over $R$ if and only if $(a_{1},\ldots,a_{n})$ is a $\lambda$-quiddity over $R$ (see \cite{C,WZ} and \cite{M1}, Proposition 3.7). \begin{definition}[\cite{C}, Definition 2.5] \label{23} Let $(a_{1},\ldots,a_{n})$ and $(b_{1},\ldots,b_{n})$ be two $n$-tuples of elements of $R$. We say that $(a_{1},\ldots,a_{n}) \sim (b_{1},\ldots,b_{n})$ if $(b_{1},\ldots,b_{n})$ is obtained by cyclic permutations of $(a_{1},\ldots,a_{n})$ or of $(a_{n},\ldots,a_{1})$. \end{definition} One shows that $\sim$ is an equivalence relation on the set of $n$-tuples of elements of $R$ (see \cite{WZ}, Lemma 1.7).
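The operation $\oplus$ is equally easy to implement. The sketch below (helper names ours) reproduces the examples above and illustrates the stated compatibility with $\lambda$-quiddities on a small case:

```python
def oplus(a, b):
    # (a_1,...,a_n) (+) (b_1,...,b_m) =
    # (a_1 + b_m, a_2, ..., a_{n-1}, a_n + b_1, b_2, ..., b_{m-1})
    return (a[0] + b[-1],) + a[1:-1] + (a[-1] + b[0],) + b[1:-1]

# the examples given above
assert oplus((2, 0, 3), (1, 1, 0)) == (2, 0, 4, 1)
assert oplus((2, 3, 4), (4, 1, 0, 8)) == (10, 3, 8, 1, 0)
assert oplus((1, 3, 5, 3), (3, 2, 2, 5, 4)) == (5, 3, 5, 6, 2, 2, 5)
assert oplus((7, 1, 4), (0, 0)) == (7, 1, 4)

def mat_M(tup):
    # M_n(a_1,...,a_n) = M(a_n) ... M(a_1), M(a) = [[a, -1], [1, 0]]
    m = ((1, 0), (0, 1))
    for a in tup:
        m = ((a * m[0][0] - m[1][0], a * m[0][1] - m[1][1]), (m[0][0], m[0][1]))
    return m

# summing the lambda-quiddity (0,0,0,0) with the lambda-quiddity (1,1,1)
# again yields a lambda-quiddity, as the compatibility property predicts
t = oplus((0, 0, 0, 0), (1, 1, 1))
assert t == (1, 0, 0, 1, 1) and mat_M(t) == ((1, 0), (0, 1))
```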
Moreover, if an $n$-tuple of elements of $R$ is a $\lambda$-quiddity over $R$, then every $n$-tuple of elements of $R$ equivalent to it is also a $\lambda$-quiddity over $R$ (see \cite{C}, Proposition 2.6). We can now define the notion of irreducibility announced in the introduction. \begin{definition}[\cite{C}, Definition 2.9] \label{24} A $\lambda$-quiddity $(c_{1},\ldots,c_{n})$ over $R$ with $n \geq 3$ is said to be reducible if there exist a $\lambda$-quiddity $(b_{1},\ldots,b_{l})$ over $R$ and an $m$-tuple $(a_{1},\ldots,a_{m})$ of elements of $R$ such that \begin{itemize} \item $(c_{1},\ldots,c_{n}) \sim (a_{1},\ldots,a_{m}) \oplus (b_{1},\ldots,b_{l})$, \item $m \geq 3$ and $l \geq 3$. \end{itemize} A $\lambda$-quiddity is said to be irreducible if it is not reducible. \end{definition} \begin{remark} {\rm If $0 \in R$ then $(0,0)$ is a $\lambda$-quiddity over $R$. However, it is never regarded as an irreducible $\lambda$-quiddity.} \end{remark} Several classification results for irreducible $\lambda$-quiddities are already available (see Section \ref{class}). Our objective here is to study the solutions of \eqref{a} over certain cyclic subgroups of $(\bC,+$) and to obtain, if possible, a classification of the irreducible $\lambda$-quiddities over these sets. We begin with the case $R=<\sqrt{k}>$, proving the result below: \begin{theorem} \label{25} Let $k \in \bN$. \\ \\i) If $k=0$, then $(0,0,0,0)$ is the only irreducible $\lambda$-quiddity over $<\sqrt{k}>$. \\ \\ii) If $k=1$, the irreducible $\lambda$-quiddities over $<\sqrt{1}>$ are: \[\{(1,1,1), (-1,-1,-1), (0,a,0,-a), (a,0,-a,0); a \in \bZ-\{\pm 1\} \}.\] \noindent iii) If $k=2$, the irreducible $\lambda$-quiddities over $<\sqrt{2}>$ are: \[\{(\sqrt{2},\sqrt{2},\sqrt{2},\sqrt{2}), (-\sqrt{2},-\sqrt{2},-\sqrt{2},-\sqrt{2}), (0,a\sqrt{2},0,-a\sqrt{2}), (a\sqrt{2},0,-a\sqrt{2},0); a \in \bZ\}.\] \noindent iv) If $k=3$.
The irreducible $\lambda$-quiddities over $<\sqrt{3}>$ are: \[\{(\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3}), (-\sqrt{3},-\sqrt{3},-\sqrt{3},-\sqrt{3},-\sqrt{3},-\sqrt{3}), (0,a\sqrt{3},0,-a\sqrt{3}), (a\sqrt{3},0,-a\sqrt{3},0); a \in \bZ\}.\] \noindent v) If $k \geq 4$, the irreducible $\lambda$-quiddities over $<\sqrt{k}>$ are: \[\{(0,a\sqrt{k},0,-a\sqrt{k}), (a\sqrt{k},0,-a\sqrt{k},0); a \in \bZ\}.\] \end{theorem} This theorem is proved in Section \ref{preuve25}. Observe, moreover, that the irreducible $\lambda$-quiddities over the subgroups $<\sqrt{k}>$ ($k \in \bM$) are quite different depending on the value of $k$, even though these subgroups are all isomorphic. We will then consider the cases $R=<i\sqrt{k}>$, with $k \in \bN$. In particular, we will prove the following result: \begin{theorem} \label{26} Let $k \in \bN$ and let $\Omega_{k}$ be the set of $\lambda$-quiddities over $<i\sqrt{k}>$. If $k \neq 1$, we denote by $\Xi_{k}$ the set of $\lambda$-quiddities over $<\sqrt{k}>$, and we denote by $\Xi_{1}$ the set of $\lambda$-quiddities of even size over $<\sqrt{1}>$. \\ \\i) $\Omega_{k}$ and $\Xi_{k}$ contain only elements of even size. The map \[\begin{array}{ccccc} \varphi_{k} & : & \Omega_{k} & \longrightarrow & \Xi_{k} \\ & & (a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i) & \longmapsto & (a_{1},-a_{2},\ldots,a_{2n-1},-a_{2n}) \\ \end{array}\] is a bijection. \\ \\ii) If $k \in \bN$, $k \neq 1$, then $\varphi_{k}$ gives a bijection between the set of irreducible $\lambda$-quiddities over $<i\sqrt{k}>$ and the set of irreducible $\lambda$-quiddities over $<\sqrt{k}>$. \end{theorem} \noindent This result is proved in Section \ref{preuve26}.
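Theorem \ref{26} can be illustrated numerically (floating-point, so only up to rounding). Below we take $k=3$: the tuple $(\sqrt{3},\ldots,\sqrt{3})$ of size 6 from Theorem \ref{25} iv) is a $\lambda$-quiddity over $<\sqrt{3}>$, and its preimage under $\varphi_{3}$, the alternating-sign tuple over $<i\sqrt{3}>$, is one as well. Helper names are ours:

```python
import math

def mat_M(tup):
    # complex-entry version of M_n(a_1, ..., a_n) = M(a_n) ... M(a_1)
    m = ((1, 0), (0, 1))
    for a in tup:
        m = ((a * m[0][0] - m[1][0], a * m[0][1] - m[1][1]), (m[0][0], m[0][1]))
    return m

def is_pm_id(m, eps=1e-9):
    # True if m is Id or -Id up to floating-point error
    for sign in (1, -1):
        if all(abs(m[i][j] - sign * (i == j)) < eps for i in range(2) for j in range(2)):
            return True
    return False

s = math.sqrt(3)
over_sqrt3 = (s,) * 6                                    # Theorem 2.5 iv)
over_isqrt3 = (1j*s, -1j*s, 1j*s, -1j*s, 1j*s, -1j*s)    # preimage under phi_3
assert is_pm_id(mat_M(over_sqrt3))
assert is_pm_id(mat_M(over_isqrt3))
```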
\section{Preliminary results and first elements on the case of cyclic subgroups} \label{pre} The objective of this section is to gather several results already known about $\lambda$-quiddities and to give some first elements concerning the case of cyclic subgroups. \subsection{Preliminary results} \label{Rp} We begin by recalling the solutions of \eqref{a} over $\bC$ for small values of $n$. \begin{proposition}[\cite{CH}, Example 2.7] \label{31} \begin{itemize} \item \eqref{a} has no solution of size 1. \item $(0,0)$ is the only solution of \eqref{a} of size 2. \item $(1,1,1)$ and $(-1,-1,-1)$ are the only solutions of \eqref{a} of size 3. \item The solutions of \eqref{a} for $n=4$ are the 4-tuples $(-a,b,a,-b)$ with $ab=0$ (that is, $a=0$ or $b=0$) and $(a,b,a,b)$ with $ab=2$. \end{itemize} \end{proposition} If we work over a sub-magma $R$ of $\bC$, it suffices to adapt the preceding result to the elements present in $R$. For example, if $1,-1 \notin R$, then there is no $\lambda$-quiddity of size 3 over $R$. \begin{proposition}[\cite{M4}, Proposition 3.5] \label{opposé} Let $R$ be a sub-magma of $\bC$ such that if $x \in R$ then $-x \in R$. Let $n$ be a natural number, $n \geq 2$. $(a_{1},\ldots,a_{n})$ is an (irreducible) solution of \eqref{a} over $R$ if and only if $(-a_{1},\ldots,-a_{n})$ is an (irreducible) solution of \eqref{a} over $R$. \end{proposition} \begin{proof} The proof given in \cite{M4} adapts naturally to the case of a sub-magma of $\bC$. \end{proof} \begin{proposition} \label{32} Let $G$ be a subgroup of $\bC$. \\i) A $\lambda$-quiddity of size at least 5 containing 0 is reducible. \\ii) If $1 \notin G$ then the $\lambda$-quiddities of size 4 are irreducible. \end{proposition} \begin{proof} i) Since $G$ is a group, we have $0 \in G$. Let $(a_{1},\ldots,a_{n})$ be a $\lambda$-quiddity over $G$ of size at least 5.
We suppose that there exists $i$ in $[\![1;n]\!]$ such that $a_{i}=0$. We have: \begin{eqnarray*} (a_{i+2},\ldots,a_{n},a_{1},\ldots,a_{i},a_{i+1}) &=& (a_{i+2},\ldots,a_{n},a_{1},\ldots,a_{i-1}+a_{i+1}) \\ &\oplus & (-a_{i+1},0,a_{i+1},0).\\ \end{eqnarray*} \noindent Now, $(-a_{i+1},0,a_{i+1},0)$ is a $\lambda$-quiddity over $G$ (see Proposition \ref{31}) and $(a_{i+2},\ldots,a_{n},a_{1},\ldots,a_{i-1}+a_{i+1})$ has size $n-2 \geq 3$. Hence, $(a_{1},\ldots,a_{n})$ is a reducible $\lambda$-quiddity. \\ \\ii) Suppose, for contradiction, that a $\lambda$-quiddity $(c_{1},\ldots,c_{4})$ is reducible. There exist two solutions $(a_{1},\ldots,a_{l})$ and $(b_{1},\ldots,b_{l'})$ of \eqref{a} such that $l, l' \geq 3$ and $(c_{1},\ldots,c_{4})=(a_{1},\ldots,a_{l}) \oplus (b_{1},\ldots,b_{l'})$. We have $6=l+l'$. Hence, $l=l'=3$. Now, since $1 \notin G$ and $-1 \notin G$ (because $G$ is a group), there is no $\lambda$-quiddity of size 3 over $G$ (by Proposition \ref{31}). This is a contradiction. \end{proof} This proposition shows that the presence of certain elements in $\lambda$-quiddities is an important asset for their study. It is therefore interesting to look for results guaranteeing the existence of particular numbers among the solutions of \eqref{a}. In this spirit, we have the following major result: \begin{theorem}[Cuntz-Holm, \cite{CH} Corollary 3.3] \label{33} Let $(a_{1},\ldots,a_{n}) \in \bC^{n}$ be a $\lambda$-quiddity. There exist $(i,j) \in [\![1;n]\!]^{2}$, $i \neq j$, such that $\left|a_{i}\right| < 2$ and $\left|a_{j}\right| < 2$. \end{theorem} \begin{remark} {\rm The constant 2 in the theorem above is optimal. Indeed, one can show that, for every $\epsilon \in ]0,2]$, there exists a $\lambda$-quiddity over $\bR$ all of whose components have modulus greater than $2-\epsilon$ (see \cite{M2}, Proposition 3.5).
} \end{remark} In the following sections we will also need a formula giving the entries of the matrix $M_{n}(a_{1},\ldots,a_{n})$ in terms of determinants. To this end, we recall the classical definition below: \\ \\We set $K_{-1}=0$, $K_{0}=1$. Let $n \in \bM$ and $(a_{1},\ldots,a_{n}) \in \bC^{n}$. We write \[K_{n}(a_{1},\ldots,a_{n})= \left| \begin{array}{cccccc} a_1&1&&&\\[4pt] 1&a_{2}&1&&\\[4pt] &\ddots&\ddots&\!\!\ddots&\\[4pt] &&1&a_{n-1}&\!\!\!\!\!1\\[4pt] &&&\!\!\!\!\!1&\!\!\!\!a_{n} \end{array} \right|.\] $K_{n}(a_{1},\ldots,a_{n})$ is the continuant of $a_{1},\ldots,a_{n}$. We have the following equality (see \cite{CO,MO}): for $n \in \bM$ and $(a_{1},\ldots,a_{n}) \in \bC^{n}$, \[M_{n}(a_{1},\ldots,a_{n})=\begin{pmatrix} K_{n}(a_{1},\ldots,a_{n}) & -K_{n-1}(a_{2},\ldots,a_{n}) \\ K_{n-1}(a_{1},\ldots,a_{n-1}) & -K_{n-2}(a_{2},\ldots,a_{n-1}) \end{pmatrix}.\] To compute the continuant polynomial, there is a simple algorithm called Euler's algorithm (or Euler's rule), which appeared in L. Euler's study of continued fractions. It can be described as follows (see for example \cite{CO}, Section 2). $K_{n}(a_{1},\ldots,a_{n})$ is the sum of all possible products of $a_{1},\ldots,a_{n}$ in which any number of disjoint pairs of consecutive terms are deleted, each such product being multiplied by $-1$ raised to the number of deleted pairs. We thus begin with the product $a_{1} \times \ldots \times a_{n}$. Then we subtract all products of the form $a_{1} \times \ldots \times a_{i-1} \times a_{i+2} \times \ldots \times a_{n}$. Next, we add all possible products of $a_{1},\ldots,a_{n}$ in which two disjoint pairs of consecutive terms have been deleted, and so on.
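Expanding the determinant above along its last row gives the recursion $K_{j}=a_{j}K_{j-1}-K_{j-2}$ (with $K_{-1}=0$, $K_{0}=1$), which yields a quick way to compute continuants and to check the matrix identity; a sketch (helper names ours):

```python
def continuant(tup):
    # K_j = a_j K_{j-1} - K_{j-2}, with K_{-1} = 0 and K_0 = 1
    km1, k = 0, 1
    for a in tup:
        km1, k = k, a * k - km1
    return k

def mat_M(tup):
    # M_n(a_1,...,a_n) = M(a_n) ... M(a_1), M(a) = [[a, -1], [1, 0]]
    m = ((1, 0), (0, 1))
    for a in tup:
        m = ((a * m[0][0] - m[1][0], a * m[0][1] - m[1][1]), (m[0][0], m[0][1]))
    return m

a = (2, -1, 3, 5, -4)
# the four entries of M_n(a_1,...,a_n) expressed through continuants
assert mat_M(a)[0][0] == continuant(a)
assert mat_M(a)[1][0] == continuant(a[:-1])
assert mat_M(a)[0][1] == -continuant(a[1:])
assert mat_M(a)[1][1] == -continuant(a[1:-1])

# Euler's rule for n = 3: K_3 = a1 a2 a3 - a3 - a1
a1, a2, a3 = 4, 7, 9
assert continuant((a1, a2, a3)) == a1*a2*a3 - a3 - a1
```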
\begin{examples} {\rm By applying this algorithm, we obtain:} \begin{itemize} \item $K_{2}(a_{1},a_{2})=a_{1}a_{2}-1$; \item $K_{3}(a_{1},a_{2},a_{3})=a_{1}a_{2}a_{3}-a_{3}-a_{1}$; \item $K_{4}(a_{1},a_{2},a_{3},a_{4})=a_{1}a_{2}a_{3}a_{4}-a_{3}a_{4}-a_{1}a_{4}-a_{1}a_{2}+1$; \item $K_{5}(a_{1},a_{2},a_{3},a_{4},a_{5})=a_{1}a_{2}a_{3}a_{4}a_{5}-a_{3}a_{4}a_{5}-a_{1}a_{4}a_{5}-a_{1}a_{2}a_{5}-a_{1}a_{2}a_{3}+a_{5}+a_{3}+a_{1}$. \\ \end{itemize} \end{examples} \subsection{Classification results} \label{class} Classification results for $\lambda$-quiddities over certain subsets of $\bC$, notably over $\bN$ and $\bZ$, are already available; they were obtained with the help of Theorem \ref{33}. \begin{theorem}[Cuntz, \cite{C}, Theorem 3.1] \label{34} The irreducible $\lambda$-quiddities over $\mathbb{N}$ are $(1,1,1)$ and $(0,0,0,0)$. \end{theorem} \begin{theorem}[Cuntz-Holm, \cite{CH} Theorem 6.2] \label{35} The irreducible $\lambda$-quiddities over the ring $\mathbb{Z}$ are: \[\{(1,1,1), (-1,-1,-1), (0,m,0,-m), (m,0,-m,0); m \in \mathbb{Z}-\{\pm 1\} \}.\] \end{theorem} For this case, there is an elegant combinatorial description of the solutions. Before giving it, we define the notion of admissible labeling introduced in \cite{CH}. \begin{definition}[\cite{CH}, Definition 7.1] \label{36} Let $P$ be a convex polygon with $n$ vertices. We consider a triangulation of this polygon by diagonals meeting only at the vertices, and we assign an integer to each triangle. We say that this labeling is admissible if the set of triangles labeled with an integer $m \neq \pm 1$ can be written as a disjoint union of sets each containing two triangles that share a common side and carry the labels $a$ and $-a$. To each vertex of $P$ we associate the integer $c$ obtained by summing the labels of the triangles using this vertex.
We go through the vertices, starting from any one of them, clockwise or counterclockwise, to obtain the $n$-tuple $(c_{1},\ldots,c_{n})$. This $n$-tuple is the quiddity of the admissible labeling of $P$. \end{definition} \begin{theorem}[Cuntz-Holm, \cite{CH} Theorem 7.3] \label{37} The $\lambda$-quiddities of size $n$ over $\mathbb{Z}$ are quiddities of admissible labelings of convex polygons with $n$ vertices, and conversely. \end{theorem} \begin{examples} {\rm Here are some examples of admissible labelings with their quiddities:} $$ \shorthandoff{; :!?} \xymatrix @!0 @R=0.50cm @C=0.65cm { && 1 \ar@{-}[rrdd]\ar@{-}[lldd]\ar@{-}[ldddd]\ar@{-}[rdddd]& \\ \\ 1 \ar@{-}[rdd]&1&&1& 1 \ar@{-}[ldd] \\ &&-1 \\ &0\ar@{-}[rr]&& 0 } \qquad \qquad \xymatrix @!0 @R=0.50cm @C=0.65cm { && 1 \ar@{-}[rrdd]\ar@{-}[lldd]& \\ &&1 \\ 1 \ar@{-}[rdd]\ar@{-}[rrrdd]\ar@{-}[rrrr]&&&& 1 \ar@{-}[ldd] \\ &0&&0 \\ &0\ar@{-}[rr]&& 0 } \qquad \qquad \xymatrix @!0 @R=0.40cm @C=0.5cm { &&1 \ar@{-}[rrdd]\ar@{-}[lldd]& \\ &&1& \\ 2\ar@{-}[dd]\ar@{-}[rrrr]&&&& 3 \ar@{-}[dd] \\ &1 && 1 \\ 1\ar@{-}[rrrr]\ar@{-}[rrrruu]&&&& 0 \\ &&-1& \\ &&-1 \ar@{-}[lluu]\ar@{-}[rruu] } \qquad \qquad \xymatrix @!0 @R=0.40cm @C=0.5cm { &&1\ar@{-}[lldd] \ar@{-}[rr]&&3\ar@{-}[rrdd]\ar@{-}[dddddd]\ar@{-}[lldddddd]\ar@{-}[rrdddd]& \\ &&& \\ 5\ar@{-}[dd]\ar@{-}[rrrruu]&&&&&& 1 \ar@{-}[dd] \\ &&3 \\ 1&&&&&& 2 \\ &&&-3 \\ &&1 \ar@{-}[rr]\ar@{-}[lluuuu]\ar@{-}[lluu] &&-2 \ar@{-}[rruu] } $$ \noindent {\rm In the last example, all triangles are labeled with 1, except the two triangles already labeled in the figure. Note that the two pentagons have different admissible labelings leading to the same quiddity.} \end{examples} \begin{remark} {\rm The initial version of the theorem as it appears in \cite{CH} concerns only the solutions of $M_{n}(a_{1},\ldots,a_{n})=-Id$ and introduces for this purpose a notion of sign of the labeling.
} \end{remark} \indent We also have the classification result stated below. It concerns the case of the subrings $\bZ[\alpha]$ with $\alpha$ a transcendental complex number, that is, a complex number which is not a root of any nonzero polynomial with coefficients in $\bZ$. \begin{theorem}[\cite{M2}, Theorem 2.7] \label{38} Let $\alpha$ be a transcendental complex number. The irreducible $\lambda$-quiddities over the ring $\bZ[\alpha]$ are: \[\{(1,1,1), (-1,-1,-1), (0,P(\alpha),0,-P(\alpha)), (P(\alpha),0,-P(\alpha),0); P \in \bZ[X]-\{\pm 1\} \}.\] \end{theorem} \subsection{First elements on cyclic subgroups} \label{mono} We now return to our initial problem by giving some classification results for the irreducible $\lambda$-quiddities over $<w>$ for certain complex numbers $w$. \begin{proposition} \label{39} Let $w$ be a complex number satisfying $\left|w\right| \geq 2$. The set of irreducible $\lambda$-quiddities over $<w>$ is: \[\{(0,kw,0,-kw), (kw,0,-kw,0); k \in \bZ\}.\] \end{proposition} \begin{proof} Let $(k_{1}w,\ldots,k_{n}w)$ be a $\lambda$-quiddity over $<w>$. By Theorem \ref{33}, there exists $i$ in $[\![1;n]\!]$ such that $\left|k_{i}w\right| < 2$. Now, $\left|k_{i}w\right|=\left|k_{i}\right|\left|w\right| \geq 2\left|k_{i}\right|$. Hence, $k_{i}=0$. By Proposition \ref{32} i), the $\lambda$-quiddities over $<w>$ of size at least 5 are reducible. \\ \\By Proposition \ref{31}, there is no $\lambda$-quiddity of size 3 over $<w>$ (since $1 \notin <w>$). Hence, by Proposition \ref{32} ii), the irreducible $\lambda$-quiddities over $<w>$ are exactly the $\lambda$-quiddities of size 4. Moreover, $ab=2$ has no solution in $<w>$. Thus, the solutions of \eqref{a} of size 4 are of the form $(0,kw,0,-kw)$ or $(kw,0,-kw,0)$ (with $k \in \bZ$). \end{proof} \begin{corollary} \label{310} Let $a \in \bZ$. \\ \\i) If $a=0$.
$(0,0,0,0)$ is the only irreducible $\lambda$-quiddity over $<a>$. \\ \\ii) If $a=\pm 1$: the set of irreducible $\lambda$-quiddities over $<a>$ is \[\{(1,1,1), (-1,-1,-1), (0,m,0,-m), (m,0,-m,0); m \in \bZ-\{\pm 1\} \}.\] \noindent iii) If $\left|a\right| \geq 2$: the set of irreducible $\lambda$-quiddities over $<a>$ is \[\{(0,ka,0,-ka), (ka,0,-ka,0); k \in \bZ\}.\] \end{corollary} \begin{proof} i) $<a>=\{0\}$. By Proposition \ref{32} i), the $\lambda$-quiddities over $<a>$ of size at least 5 are reducible. By Proposition \ref{31}, the only $\lambda$-quiddities over $<a>$ of size at most 4 are $(0,0)$ and $(0,0,0,0)$. By Proposition \ref{32} ii), $(0,0,0,0)$ is the only irreducible $\lambda$-quiddity over $<a>$. \\ \\ii) $<a>=\bZ$. The result is already known (see Theorem \ref{35}). \\ \\iii) This is a consequence of the preceding proposition. \end{proof} \noindent We now turn to the cases where $w$ is a transcendental complex number. \begin{proposition} \label{311} Let $\alpha$ be a transcendental complex number. The set of irreducible $\lambda$-quiddities over $<\alpha>$ is: \[\{(0,k\alpha,0,-k\alpha), (k\alpha,0,-k\alpha,0); k \in \bZ\}.\] \end{proposition} \begin{proof} Let $n \in \bN$, $n \geq 2$, and let $(k_{1}\alpha,\ldots,k_{n}\alpha)$ be a $\lambda$-quiddity over $<\alpha>$. There exists $\epsilon$ in $\{-1, 1\}$ such that \[M_{n}(k_{1}\alpha,\ldots,k_{n}\alpha)=\begin{pmatrix} K_{n}(k_{1}\alpha,\ldots,k_{n}\alpha) & -K_{n-1}(k_{2}\alpha,\ldots,k_{n}\alpha) \\ K_{n-1}(k_{1}\alpha,\ldots,k_{n-1}\alpha) & -K_{n-2}(k_{2}\alpha,\ldots,k_{n-1}\alpha) \end{pmatrix}=\epsilon Id.\] \noindent Suppose by contradiction that $n$ is odd. By Euler's algorithm, $K_{n-1}(k_{1}\alpha,\ldots,k_{n-1}\alpha)$ is a polynomial in $\alpha$ with coefficients in $\bZ$ (depending on the $k_{i}$) whose constant coefficient is $\pm 1$ (since $n-1$ is even). Now, $P(\alpha)=K_{n-1}(k_{1}\alpha,\ldots,k_{n-1}\alpha)=0$.
Thus, since $\alpha$ is transcendental, $P$ is the zero polynomial. This is absurd, since the constant coefficient of $P$ is $\pm 1$. \\ \\We deduce that $n$ is even. \\ \\By Euler's algorithm, $K_{n-1}(k_{1}\alpha,\ldots,k_{n-1}\alpha)$ is a polynomial in $\alpha$ with coefficients in $\bZ$ (depending on the $k_{i}$) in which the coefficient of $\alpha^{n-1}$ is $\prod_{i=1}^{n-1} k_{i}$. Now, $P(\alpha)=K_{n-1}(k_{1}\alpha,\ldots,k_{n-1}\alpha)=0$. Thus, since $\alpha$ is transcendental, $P$ is the zero polynomial. Hence, $\prod_{i=1}^{n-1} k_{i}=0$, that is, there exists $i$ in $[\![1;n-1]\!]$ such that $k_{i}=0$. By Proposition \ref{32} i), the $\lambda$-quiddities over $<\alpha>$ of size at least 5 are reducible. \\ \\Since $\alpha$ is transcendental, $1 \notin <\alpha>$ and $ab=2$ has no solution in $<\alpha>$. Hence, by Proposition \ref{32} ii), the irreducible $\lambda$-quiddities over $<\alpha>$ are of the form $(0,k\alpha,0,-k\alpha)$ or $(k\alpha,0,-k\alpha,0)$ ($k \in \bZ$). \end{proof} This proposition covers a great many cases. Indeed, the set of algebraic complex numbers is countable (see for instance \cite{Ca} and \cite{G} Corollary III.57), whereas $\bC$ is uncountable (Cantor's theorem, see \cite{G} Corollary III.54). Moreover, many results provide specific transcendental numbers. For example: \begin{itemize} \item Liouville numbers (1844): if $(a_{n})_{n \in \bN}$ is a sequence of integers between 0 and 9 with infinitely many nonzero $a_{n}$, then the series with general term $\frac{a_{n}}{10^{n!}}$ converges and $\sum_{n=0}^{+\infty} \frac{a_{n}}{10^{n!}}$ is transcendental (see \cite{L} and \cite{G} Exercise III.11). \\ \item Hermite's theorem (1873): $e$ is transcendental (see \cite{H} and \cite{G} Theorem III.59). \\ \item Lindemann's theorem (1882): $\pi$ is transcendental (see \cite{G} Theorem III.60).
\\ \item Gelfond-Schneider theorem (1934): if $a$ is an algebraic number different from 0 and 1, and $b$ is an irrational algebraic number, then $a^{b}$ is transcendental, where $a^{b}$ is understood as ${\rm exp}(b{\rm log}(a))$ with ${\rm log}(a)$ any determination of the complex logarithm of $a$ (see \cite{Ge}). \\ \end{itemize} In the following sections, we turn to the cases of certain algebraic complex numbers, proving the two theorems presented in Section \ref{RP}. \section{Proof of Theorem \ref{25}} \label{preuve25} The aim of this section is to prove Theorem \ref{25}, that is, to treat the cases $w=\sqrt{k}$ with $k \in \bN$. Point i) corresponds exactly to point i) of Corollary \ref{310}. Likewise, point ii) is identical to Theorem \ref{35}. As for point v), it follows directly from Proposition \ref{39}. It therefore remains to consider the cases of the subgroups $<\sqrt{k}>$ with $k=2$ and $k=3$. \\ \\In what follows, if $(a_{1},\ldots,a_{n})$ is a solution of \eqref{a}, we regard $a_{1}$ and $a_{n}$ as consecutive entries (because of the invariance of the solutions of \eqref{a} under cyclic permutations). \subsection{The case $k=2$} Let $m$ be a positive integer and $(a_{1},\ldots,a_{m})$ a solution of \eqref{a} with, for all $j$ in $[\![1;m]\!]$, $a_{j}=k_{j}\sqrt{2}$ and $k_{j} \in \mathbb{Z}$. There exists $\epsilon \in \{\pm 1 \}$ such that \[\epsilon Id=M_{m}(a_{1},\ldots,a_{m})=\begin{pmatrix} K_{m}(a_{1},\ldots,a_{m}) & -K_{m-1}(a_{2},\ldots,a_{m}) \\ K_{m-1}(a_{1},\ldots,a_{m-1}) & -K_{m-2}(a_{2},\ldots,a_{m-1}) \end{pmatrix}.\] \noindent Suppose $m$ is odd. We have $K_{m}(a_{1},\ldots,a_{m})=\epsilon \in \mathbb{Z}$. \\ \\By Euler's algorithm, $K_{m}(a_{1},\ldots,a_{m})$ is a sum each of whose terms is a product of an odd number of the $a_{j}$ (and possibly of $-1$).
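To illustrate the mechanism on the smallest odd case, for $m=3$ one has
\[K_{3}(k_{1}\sqrt{2},k_{2}\sqrt{2},k_{3}\sqrt{2})=2\sqrt{2}\,k_{1}k_{2}k_{3}-k_{1}\sqrt{2}-k_{3}\sqrt{2}=\sqrt{2}\left(2k_{1}k_{2}k_{3}-k_{1}-k_{3}\right) \in \sqrt{2}\,\mathbb{Z}.\]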
We group the factors $\sqrt{2}$ appearing in each $a_{j}$. Each term of the sum is then the product of an odd power of $\sqrt{2}$ by integers. Factoring out $\sqrt{2}$, we can therefore write $K_{m}(a_{1},\ldots,a_{m})$ as the product of $\sqrt{2}$ by an integer. \\ \\Thus, $K_{m}(a_{1},\ldots,a_{m}) \in \mathbb{Z} \cap \sqrt{2}\mathbb{Z}$, that is, $K_{m}(a_{1},\ldots,a_{m})=0$. This is absurd, since $\epsilon \in \{\pm 1 \}$. \\ \\Hence, there is no $\lambda$-quiddity of odd size over $<\sqrt{2}>$. \\ \\Suppose now that $m$ is even, $m=2n$, with $m \geq 4$. Our aim is to show that if all the $k_{i}$ are nonzero then two consecutive $k_{i}$ are equal to 1 or to $-1$. \\ \\By Euler's algorithm, $K_{2n}(k_{1}\sqrt{2},k_{2}\sqrt{2},\ldots,k_{2n-1}\sqrt{2},k_{2n}\sqrt{2})$ is a sum of products of the $k_{j}\sqrt{2}$ (and possibly of $-1$). In each term of this sum, every $k_{j}$ with $j$ odd is followed by a $k_{l}$ with $l$ even. We then perform the following manipulation: \[k_{j}\sqrt{2}k_{l}\sqrt{2}=2k_{j}k_{l}=k_{j}(2k_{l}).\] \noindent With this manipulation, we get \[K_{2n}(k_{1}\sqrt{2},k_{2}\sqrt{2},\ldots,k_{2n-1}\sqrt{2},k_{2n}\sqrt{2})=K_{2n}(k_{1},2k_{2},\ldots,k_{2n-1},2k_{2n}).\] \noindent We proceed analogously for $K_{2n-2}(k_{2}\sqrt{2},\ldots,k_{2n-1}\sqrt{2})$ (since every $k_{j}$ with $j$ odd is preceded by a $k_{l}$ with $l$ even). \\ \\$K_{2n-1}(k_{1}\sqrt{2},k_{2}\sqrt{2},\ldots,k_{2n-1}\sqrt{2})$ is a sum of products of the $k_{j}\sqrt{2}$ (and possibly of $-1$). For each term of the sum, there are two possibilities: \begin{itemize} \item the term is of the form $k_{j}\sqrt{2}$ with $j$ odd; \item the term is a product of $2u+1$ elements of the form $k_{j}\sqrt{2}$, with $u+1$ odd indices $j$. Every $k_{m}$ with $m$ even is preceded by a $k_{l}$ with $l$ odd.
We then perform the following manipulation: \[k_{l}\sqrt{2}k_{m}\sqrt{2}=2k_{l}k_{m}=k_{l}(2k_{m}).\] \end{itemize} \noindent This gives $0=K_{2n-1}(k_{1}\sqrt{2},k_{2}\sqrt{2},\ldots,k_{2n-1}\sqrt{2})=\sqrt{2}K_{2n-1}(k_{1},2k_{2},\ldots,k_{2n-1})$. In particular, $0=K_{2n-1}(k_{1},2k_{2},\ldots,k_{2n-1})$. We proceed analogously for $K_{2n-1}(k_{2}\sqrt{2},\ldots,k_{2n}\sqrt{2})$. \\ \\Hence, $(k_{1},2k_{2},\ldots,k_{2n-1},2k_{2n})$ is a $\lambda$-quiddity over $\mathbb{Z}$. \\ \\Suppose that $k_{i} \neq 0$ for all $i \in [\![1;2n]\!]$. Then, by Theorem \ref{33}, there exists $i \in [\![1;2n]\!]$ with $i$ odd such that $k_{i}=\pm 1$. \\ \\A direct computation shows that $M_{3}(a,1,b)=M_{2}(a-1,b-1)$ and $M_{3}(a,-1,b)=-M_{2}(a+1,b+1)$. If we ``reduce'' in this way all the $\pm 1$ appearing in $(k_{1},2k_{2},\ldots,k_{2n-1},2k_{2n})$, we obtain a new solution. This new $\lambda$-quiddity can only contain elements from the following list: \begin{itemize} \item $k_{i}$ with $i$ odd and $\left|k_{i}\right| \geq 2$; \item $2k_{i}$ with $i$ even; \item $2k_{i}-1$ with $i$ even; \item $2k_{i}+1$ with $i$ even; \item $2k_{i}-2$ with $i$ even; \item $2k_{i}+2$ with $i$ even. \\ \end{itemize} \noindent By Theorem \ref{33}, one of these elements is necessarily equal to 0, 1 or $-1$. It cannot be an element of the first two categories. If there exists $i \in [\![1;2n]\!]$ such that $2k_{i}-1 \in \{0, 1, -1\}$, then necessarily $2k_{i}-1=1$, that is, $k_{i}=1$. Moreover, to reach $2k_{i}-1$ during the reduction, a 1 adjacent to $2k_{i}$ must have been used. Thus, two consecutive $k_{i}$ are equal to 1. We proceed analogously if there exists $i \in [\![1;2n]\!]$ such that $2k_{i}+1 \in \{0, 1, -1\}$. Suppose now that there exists $i \in [\![1;2n]\!]$ such that $2k_{i}-2 \in \{0, 1, -1\}$. Then necessarily $2k_{i}-2=0$, that is, $k_{i}=1$.
Moreover, to reach $2k_{i}-2$ during the reduction, a 1 adjacent to $2k_{i}$ on both the left and the right must have been used. Thus, two consecutive $k_{i}$ are equal to 1. We proceed analogously if there exists $i \in [\![1;2n]\!]$ such that $2k_{i}+2 \in \{0, 1, -1\}$. \\ \\Thus, if $(a_{1},\ldots,a_{n})$ is a $\lambda$-quiddity over $<\sqrt{2}>$, then $n$ is even and one of the following two conditions necessarily holds: \begin{itemize} \item one of the $a_{i}$ is zero; \item two consecutive $a_{i}$ are equal, with common value $\sqrt{2}$ or $-\sqrt{2}$. \\ \end{itemize} \noindent By Proposition \ref{31}, $(\sqrt{2},\sqrt{2},\sqrt{2},\sqrt{2})$ and $(-\sqrt{2},-\sqrt{2},-\sqrt{2},-\sqrt{2})$ are $\lambda$-quiddities over $<\sqrt{2}>$. Hence, they can be used to reduce every solution of \eqref{a} of size at least 5 in which two consecutive entries are equal to $\sqrt{2}$ or to $-\sqrt{2}$. \\ \\Therefore, the $\lambda$-quiddities over $<\sqrt{2}>$ of size at least 5 are reducible (by Proposition \ref{32} i) and the preceding discussion). By Propositions \ref{31} and \ref{32} ii), the irreducible $\lambda$-quiddities over $<\sqrt{2}>$ are those given in the statement. \qed \subsection{The case $k=3$} Let $m$ be a positive integer and $(a_{1},\ldots,a_{m})$ a solution of \eqref{a} with, for all $j$ in $[\![1;m]\!]$, $a_{j}=k_{j}\sqrt{3}$ and $k_{j} \in \mathbb{Z}$. Proceeding as in the previous case, we obtain that $m$ is even, say $m=2n$, and that $(k_{1},3k_{2},\ldots,k_{2n-1},3k_{2n})$ is a solution of \eqref{a} over $\mathbb{Z}$. \\ \\By Proposition \ref{31}, the only $\lambda$-quiddities over $<\sqrt{3}>$ of size at most 4 are those given in the statement. By Proposition \ref{32} ii), these are irreducible. Suppose now that $m$ is even with $m \geq 6$.
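As an illustration of the correspondence obtained above, the size-4 solution $(0,\sqrt{3},0,-\sqrt{3})$ over $<\sqrt{3}>$ is sent to $(0,3,0,-3)$, and a direct computation confirms that the latter is indeed a $\lambda$-quiddity over $\mathbb{Z}$, in accordance with the family $(0,m,0,-m)$ of Theorem \ref{35}:
\[M_{4}(0,3,0,-3)=\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 3 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} -3 & -1 \\ 1 & 0 \end{pmatrix}=Id.\]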
Our aim is to show that if all the $k_{i}$ are nonzero then four consecutive $k_{i}$ are equal to 1 or to $-1$. \\ \\Suppose that $k_{i} \neq 0$ for all $i \in [\![1;2n]\!]$. Then, by Theorem \ref{33}, there exists $i \in [\![1;2n]\!]$ with $i$ odd such that $k_{i}=\pm 1$. \\ \\A direct computation shows that $M_{3}(a,1,b)=M_{2}(a-1,b-1)$ and $M_{3}(a,-1,b)=-M_{2}(a+1,b+1)$. If we ``reduce'' in this way all the $\pm 1$ appearing in $(k_{1},3k_{2},\ldots,k_{2n-1},3k_{2n})$, we obtain a new solution. This new $\lambda$-quiddity can only contain elements from the following list: \begin{itemize} \item $k_{i}$ with $i$ odd and $\left|k_{i}\right| \geq 2$; \item $3k_{i}$ with $i$ even; \item $3k_{i}-1$ with $i$ even; \item $3k_{i}+1$ with $i$ even; \item $3k_{i}-2$ with $i$ even; \item $3k_{i}+2$ with $i$ even. \\ \end{itemize} \noindent By Theorem \ref{33}, one of these elements is necessarily equal to 0, 1 or $-1$. Since $k_{i} \neq 0$ for all $i$ in $[\![1;2n]\!]$, it cannot be an element of the first four categories. If there exists $i \in [\![1;2n]\!]$ such that $3k_{i}-2 \in \{0, 1, -1\}$, then necessarily $3k_{i}-2=1$, that is, $k_{i}=1$. Moreover, to reach $3k_{i}-2$ during the reduction, a 1 adjacent to $3k_{i}$ on both the left and the right must have been used. Thus, three consecutive $k_{i}$ are equal to 1. We proceed analogously if there exists $i \in [\![1;2n]\!]$ such that $3k_{i}+2 \in \{0, 1, -1\}$. \\ \\We now ``reduce'' again all the $\pm 1$ appearing in the $\lambda$-quiddity obtained above.
We obtain a new solution which can only contain elements from the following list: \begin{itemize} \item $k_{i}$ with $i$ odd and $\left|k_{i}\right| \geq 2$; \item $3k_{i}$ with $i$ even; \item $3k_{i}-1$ with $i$ even; \item $3k_{i}+1$ with $i$ even; \item $3k_{i}-2$ with $i$ even; \item $3k_{i}+2$ with $i$ even; \item $3k_{i}-3$ with $i$ even; \item $3k_{i}+3$ with $i$ even; \item $3k_{i}-4$ with $i$ even; \item $3k_{i}+4$ with $i$ even. \\ \end{itemize} \noindent By Theorem \ref{33}, one of these elements is necessarily equal to 0, 1 or $-1$. Since $k_{i} \neq 0$ for all $i$ in $[\![1;2n]\!]$, it cannot be an element of the first four categories. \\ \\Suppose now that $3k_{i}-2 \in \{0, 1, -1\}$ for some even $i$. Then necessarily $3k_{i}-2=1$, that is, $k_{i}=1$. Moreover, to reach $3k_{i}-2$ during the reduction, a 1 adjacent to the left or to the right of a $3k_{i}$ or of a $3k_{i}-1$ must have been used. By the above, this 1 came from a triple of consecutive $k_{i}$ equal to 1. We therefore have at least four consecutive 1's. \\ \\Suppose now that $3k_{i}+2 \in \{0, 1, -1\}$ for some even $i$. Then necessarily $3k_{i}+2=-1$, that is, $k_{i}=-1$. Moreover, to reach $3k_{i}+2$ during the reduction, a $-1$ adjacent to the left or to the right of a $3k_{i}$ or of a $3k_{i}+1$ must have been used. By the above, this $-1$ came from a triple of consecutive $k_{i}$ equal to $-1$. We therefore have at least four consecutive $-1$'s. \\ \\Suppose now that $3k_{i}+3 \in \{0, 1, -1\}$ for some even $i$. Then necessarily $3k_{i}+3=0$, that is, $k_{i}=-1$. Moreover, to reach $3k_{i}+3$ during the reduction, a $-1$ adjacent to the left or to the right of a $3k_{i}+1$ or of a $3k_{i}+2$ must have been used. By the above, this $-1$ came from a triple of consecutive $k_{i}$ equal to $-1$. We therefore have at least four consecutive $-1$'s.
We proceed analogously if $3k_{i}-3 \in \{0, 1, -1\}$ for some even $i$. \\ \\Suppose now that $3k_{i}-4 \in \{0, 1, -1\}$ for some even $i$. Then necessarily $3k_{i}-4=-1$, that is, $k_{i}=1$. Moreover, to reach $3k_{i}-4$ during the reduction, a 1 adjacent to both the left and the right of a $3k_{i}-2$ must have been used. By the above, this 1 came from a triple of consecutive $k_{i}$ equal to 1. We therefore have at least four consecutive 1's. We proceed analogously if $3k_{i}+4 \in \{0, 1, -1\}$ for some even $i$. \\ \\Thus, if $(a_{1},\ldots,a_{n})$ is a $\lambda$-quiddity over $<\sqrt{3}>$, then $n$ is even and one of the following two conditions necessarily holds: \begin{itemize} \item one of the $a_{i}$ is zero; \item four consecutive $a_{i}$ are equal, with common value $\sqrt{3}$ or $-\sqrt{3}$. \\ \end{itemize} \noindent A direct computation shows that $(\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3})$ and $(-\sqrt{3},-\sqrt{3},-\sqrt{3},-\sqrt{3},-\sqrt{3},-\sqrt{3})$ are $\lambda$-quiddities over $<\sqrt{3}>$. Hence, they can be used to reduce every solution of \eqref{a} of size at least 7 in which four consecutive entries are equal to $\sqrt{3}$ or to $-\sqrt{3}$. \\ \\Therefore, the $\lambda$-quiddities over $<\sqrt{3}>$ of size at least 7 are reducible (by Proposition \ref{32} i) and the preceding discussion). \\ \\Let $(a_{1},\ldots,a_{6})$ be a $\lambda$-quiddity over $<\sqrt{3}>$ of size 6. If this solution contains 0, then, by Proposition \ref{32} i), it is reducible. If it does not contain 0, then, by the above, it contains four consecutive entries equal to $\sqrt{3}$ (or $-\sqrt{3}$). Up to cyclic permutation, the solution is of the form $(a\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3},b\sqrt{3})$ or $(-a\sqrt{3},-\sqrt{3},-\sqrt{3},-\sqrt{3},-\sqrt{3},-b\sqrt{3})$.
If $(a\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3},b\sqrt{3})$ is a solution of \eqref{a}, then \[0=K_{5}(a\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3})=a\sqrt{3}K_{4}(\sqrt{3},\sqrt{3},\sqrt{3},\sqrt{3})-K_{3}(\sqrt{3},\sqrt{3},\sqrt{3})=a\sqrt{3}-\sqrt{3}.\] \noindent Hence, $a=1$. One shows in the same way that $b=1$. The case where $(-a\sqrt{3},-\sqrt{3},-\sqrt{3},-\sqrt{3},-\sqrt{3},-b\sqrt{3})$ is a solution is treated analogously. \\ \\Thus, the irreducible $\lambda$-quiddities over $<\sqrt{3}>$ are those given in the statement. \qed \section{Proof of Theorem \ref{26}} \label{preuve26} \subsection{Proof of the bijections} We begin by establishing point i) of the theorem. For this, we start with the following result: \begin{proposition} \label{51} Let $k \in \bN$. There is no $\lambda$-quiddity of odd size over $<i\sqrt{k}>$. \end{proposition} \begin{proof} Let $n$ be an odd positive integer. Suppose that there exists a solution $(a_{1},\ldots,a_{n})$ of \eqref{a} with, for all $j$ in $[\![1;n]\!]$, $a_{j}=i k_{j}\sqrt{k}$ and $k_{j} \in \mathbb{Z}$. There exists $\epsilon \in \{\pm 1 \}$ such that \[\epsilon Id=M_{n}(a_{1},\ldots,a_{n})=\begin{pmatrix} K_{n}(a_{1},\ldots,a_{n}) & -K_{n-1}(a_{2},\ldots,a_{n}) \\ K_{n-1}(a_{1},\ldots,a_{n-1}) & -K_{n-2}(a_{2},\ldots,a_{n-1}) \end{pmatrix}.\] \noindent In particular, $K_{n}(a_{1},\ldots,a_{n})=\epsilon \in \mathbb{R}$. \\ \\By Euler's algorithm, $K_{n}(a_{1},\ldots,a_{n})$ is a sum each of whose terms is a product of an odd number of the $a_{j}$ (and possibly of $-1$). We group the factors $i$ appearing in each $a_{j}$. Each term of the sum is then the product of an odd power of $i$ by real numbers. Since odd powers of $i$ equal $\pm i$, $K_{n}(a_{1},\ldots,a_{n})$ is purely imaginary. \\ \\Thus, $K_{n}(a_{1},\ldots,a_{n}) \in \mathbb{R} \cap i\mathbb{R}$, that is, $K_{n}(a_{1},\ldots,a_{n})=0$.
This is absurd, since $\epsilon \in \{\pm 1 \}$. \\ \\Hence, \eqref{a} has no solution of odd size over $<i\sqrt{k}>$. \end{proof} \noindent For the moment, we focus on the equation over $\mathbb{N}i\sqrt{k}$. We begin with the following result: \begin{lemma} \label{52} Let $k \in \bN$. The solutions of \eqref{a} over $\mathbb{N}i\sqrt{k}$ contain a 0. \end{lemma} \begin{proof} Let $(a_{1},\ldots,a_{n})$ be a solution of \eqref{a} over $\mathbb{N}i\sqrt{k}$. Suppose first that $n$ is divisible by 4. For all $j$ in $[\![1;n]\!]$, $a_{j}=i k_{j}\sqrt{k}$ with $k_{j} \in \mathbb{N}$. There exists $\epsilon \in \{\pm 1 \}$ such that \[\epsilon Id=M_{n}(a_{1},\ldots,a_{n})=\begin{pmatrix} K_{n}(a_{1},\ldots,a_{n}) & -K_{n-1}(a_{2},\ldots,a_{n}) \\ K_{n-1}(a_{1},\ldots,a_{n-1}) & -K_{n-2}(a_{2},\ldots,a_{n-1}) \end{pmatrix}.\] \noindent In particular, $K_{n}(a_{1},\ldots,a_{n})=\epsilon$. \\ \\We group the factors $i$ appearing in each term of the sum defining the continuant $K_{n}(a_{1},\ldots,a_{n})$ (Euler's algorithm). The terms in which an even number of pairs of consecutive $a_{j}$ have been removed are multiplied by $i\sqrt{k}$ raised to a power that is a multiple of 4, that is, by an even power of $k$. The terms in which an odd number of pairs of consecutive $a_{j}$ have been removed are multiplied by $-1$ and by $i\sqrt{k}$ raised to a power divisible by 2 but not by 4, that is, by $(-1) \times (-1)=1$ times a power of $k$. \\ \\Thus, $K_{n}(a_{1},\ldots,a_{n})$ is equal to 1 (the term obtained by removing all possible pairs) plus a sum of nonnegative integers. Hence, necessarily $K_{n}(a_{1},\ldots,a_{n})=1$, and one of the $a_{j}$ is zero (since otherwise we would have $K_{n}(a_{1},\ldots,a_{n}) >1$). \\ \\We proceed analogously if $n$ is not divisible by 4 (in that case, the opposite of $K_{n}(a_{1},\ldots,a_{n})$ is equal to 1 plus a sum of nonnegative integers).
\end{proof} \noindent These results allow us to classify the irreducible $\lambda$-quiddities over $\mathbb{N}i\sqrt{k}$. \begin{theorem} \label{53} The $\lambda$-quiddities over $\mathbb{N}i\sqrt{k}$ are the $2n$-tuples containing only 0's. In particular, $(0,0,0,0)$ is the only irreducible $\lambda$-quiddity over $\mathbb{N}i\sqrt{k}$. \end{theorem} \begin{proof} By Proposition \ref{51}, the solutions of \eqref{a} over $\mathbb{N}i\sqrt{k}$ have even size. Suppose by contradiction that \eqref{a} has nonzero solutions over $\mathbb{N}i\sqrt{k}$. Let $(a_{1},\ldots,a_{n})$ be a $\lambda$-quiddity over $\mathbb{N}i\sqrt{k}$ with a nonzero entry. By the preceding lemma, there exists $j \in [\![1;n]\!]$ such that $a_{j}=0$. \\ \\Now, we have \[\begin{pmatrix} a & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} b & -1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} -a-b & 1 \\ -1 & 0 \end{pmatrix}= -\begin{pmatrix} a+b & -1 \\ 1 & 0 \end{pmatrix}.\] \noindent Using this, $(a_{1},\ldots,a_{j-2},a_{j-1}+a_{j+1},a_{j+2},\ldots,a_{n})$ is a solution of \eqref{a} over $\mathbb{N}i\sqrt{k}$. This solution therefore contains a zero and a nonzero entry, and we can iterate the process. \\ \\Proceeding in this way, we arrive at a solution of size 2 with a nonzero entry. This is absurd by Proposition \ref{31}. \\ \\Hence, the $\lambda$-quiddities over $\mathbb{N}i\sqrt{k}$ are $2n$-tuples containing only 0's. Moreover, every $2n$-tuple containing only 0's is a $\lambda$-quiddity over $\mathbb{N}i\sqrt{k}$. In particular, $(0,0,0,0)$ is the only irreducible $\lambda$-quiddity over $\mathbb{N}i\sqrt{k}$ (Proposition \ref{32} i)). \end{proof} \noindent We now return to the case of $\mathbb{Z}i\sqrt{k}$: \begin{proof}[Proof of Theorem \ref{26}] Let $k \in \bN$.
$\Omega_{k}$ and $\Xi_{k}$ contain only elements of even size (Proposition \ref{51} and the previous section). Let $(a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i)$ be a $\lambda$-quiddity over $<i\sqrt{k}>$. There exists $\epsilon \in \{\pm 1 \}$ such that \begin{eqnarray*} \epsilon Id &=& M_{2n}(a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i) \\ &=& \begin{pmatrix} K_{2n}(a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i) & -K_{2n-1}(a_{2}i,\ldots,a_{2n-1}i,a_{2n}i) \\ K_{2n-1}(a_{1}i,a_{2}i,\ldots,a_{2n-1}i) & -K_{2n-2}(a_{2}i,\ldots,a_{2n-1}i) \end{pmatrix}.\\ \end{eqnarray*} \noindent By Euler's algorithm, $K_{2n}(a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i)$ is a sum of products of the $a_{j}i$. In each term of this sum, every $a_{j}$ with $j$ odd is followed by an $a_{k}$ with $k$ even. We then perform the following manipulation: \[a_{j}ia_{k}i=a_{j}a_{k}i^{2}=a_{j}(-a_{k}).\] With this manipulation, we get \[K_{2n}(a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i)=K_{2n}(a_{1},-a_{2},\ldots,a_{2n-1},-a_{2n}).\] \noindent We proceed analogously for $K_{2n-2}(a_{2}i,\ldots,a_{2n-1}i)$ (since every $a_{j}$ with $j$ odd is preceded by an $a_{l}$ with $l$ even). $K_{2n-1}(a_{1}i,a_{2}i,\ldots,a_{2n-1}i)$ is a sum of products of the $a_{j}i$. For each term of the sum, there are two possibilities: \begin{itemize} \item the term is of the form $a_{j}i$ with $j$ odd; \item the term is a product of $2u+1$ elements of the form $a_{j}i$, with $u+1$ odd indices $j$. Every $a_{m}$ with $m$ even is preceded by an $a_{l}$ with $l$ odd. We then perform the following manipulation: \[a_{l}ia_{m}i=a_{l}a_{m}i^{2}=a_{l}(-a_{m}).\] \end{itemize} This gives $0=K_{2n-1}(a_{1}i,a_{2}i,\ldots,a_{2n-1}i)=iK_{2n-1}(a_{1},-a_{2},\ldots,a_{2n-1})$. In particular, \[0=K_{2n-1}(a_{1},-a_{2},\ldots,a_{2n-1}).\] \noindent We proceed analogously for $K_{2n-1}(a_{2}i,\ldots,a_{2n}i)$. \\ \\Hence, $(a_{1},-a_{2},\ldots,a_{2n-1},-a_{2n})$ is a $\lambda$-quiddity over $<\sqrt{k}>$.
Thus, $\varphi_{k}$ is well defined. \\ \\ Since $\varphi_{k}$ is injective, it remains to show that $\varphi_{k}$ is surjective. \\ \\ Let $(a_{1},a_{2},\ldots,a_{2n-1},a_{2n})$ be a $\lambda$-quiddity of even size over $<\sqrt{k}>$. By the same arguments as above, $(a_{1}i,-a_{2}i,\ldots,a_{2n-1}i,-a_{2n}i)$ is a $\lambda$-quiddity over $<i\sqrt{k}>$, and its image under $\varphi_{k}$ is $(a_{1},a_{2},\ldots,a_{2n-1},a_{2n})$. Hence, $\varphi_{k}$ is surjective. \\ \\Thus, $\varphi_{k}$ gives a bijection between $\Omega_{k}$ and $\Xi_{k}$. \\ \\ We now consider the irreducibility of the solutions of \eqref{a} over $<i\sqrt{k}>$ when $k \neq 1$. \\ \\Let $(a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i)$ be a $\lambda$-quiddity over $<i\sqrt{k}>$. Suppose that $\varphi_{k}((a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i))$ is reducible. Since $k \neq 1$, there exist two $\lambda$-quiddities over $<\sqrt{k}>$ of even size at least 4, $(b_{1},b_{2},\ldots,b_{2l-1},b_{2l})$ and $(c_{1},c_{2},\ldots,c_{2l'-1},c_{2l'})$, such that: \begin{eqnarray*} (a_{1},-a_{2},\ldots,a_{2n-1},-a_{2n}) &=& \varphi_{k}((a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i)) \\ &\sim& (b_{1},b_{2},\ldots,b_{2l-1},b_{2l}) \oplus (c_{1},c_{2},\ldots,c_{2l'-1},c_{2l'}).\\ \end{eqnarray*} \noindent Thus, $(a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i) \sim (b_{1}i,-b_{2}i,\ldots,b_{2l-1}i,-b_{2l}i) \oplus (-c_{1}i,c_{2}i,\ldots,-c_{2l'-1}i,c_{2l'}i)$. Now, we have shown that $(b_{1}i,-b_{2}i,\ldots,b_{2l-1}i,-b_{2l}i)$ and $(c_{1}i,-c_{2}i,\ldots,c_{2l'-1}i,-c_{2l'}i)$ are $\lambda$-quiddities over $<i\sqrt{k}>$. By Proposition \ref{opposé}, $(-c_{1}i,c_{2}i,\ldots,-c_{2l'-1}i,c_{2l'}i)$ is a $\lambda$-quiddity over $<i\sqrt{k}>$. Hence, the image under $\varphi_{k}$ of an irreducible solution over $<i\sqrt{k}>$ is an irreducible solution over $<\sqrt{k}>$. \\ \\Let $(a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i)$ be a reducible $\lambda$-quiddity over $<i\sqrt{k}>$.
There exist two $\lambda$-quiddities over $<i\sqrt{k}>$ of size at least 3, $(b_{1}i,b_{2}i,\ldots,b_{2l-1}i,b_{2l}i)$ and $(c_{1}i,c_{2}i,\ldots,c_{2l'-1}i,c_{2l'}i)$, such that \[(a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i) \sim (b_{1}i,b_{2}i,\ldots,b_{2l-1}i,b_{2l}i) \oplus (c_{1}i,c_{2}i,\ldots,c_{2l'-1}i,c_{2l'}i).\] \noindent We have \begin{eqnarray*} \varphi_{k}((a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i)) &=& (a_{1},-a_{2},\ldots,a_{2n-1},-a_{2n}) \\ &\sim& (b_{1},-b_{2},\ldots,b_{2l-1},-b_{2l}) \oplus (-c_{1},c_{2},\ldots,-c_{2l'-1},c_{2l'}). \\ \end{eqnarray*} \noindent Now, $(c_{1},-c_{2},\ldots,c_{2l'-1},-c_{2l'})=\varphi_{k}((c_{1}i,c_{2}i,\ldots,c_{2l'-1}i,c_{2l'}i))$. Hence, $(c_{1},-c_{2},\ldots,c_{2l'-1},-c_{2l'})$ is a solution of \eqref{a} over $<\sqrt{k}>$. By Proposition \ref{opposé}, $(-c_{1},c_{2},\ldots,-c_{2l'-1},c_{2l'})$ is a solution of \eqref{a} over $<\sqrt{k}>$. Hence, $\varphi_{k}((a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i))$ is reducible. Thus, $\varphi_{k}$ gives a surjection from the irreducible solutions of $\Omega_{k}$ onto the irreducible solutions of $\Xi_{k}$. \\ \\Therefore, $\varphi_{k}$ gives a bijection between the irreducible solutions over $<i\sqrt{k}>$ and the irreducible solutions over $<\sqrt{k}>$. \end{proof} Unfortunately, $\varphi_{1}$ does not give a bijection between the irreducible solutions over $<i>$ and the irreducible solutions over $\bZ$. Indeed, there are irreducible solutions of size 3 over $\bZ$ (see Theorem \ref{35}), whereas this is not the case over $<i>$ (see Proposition \ref{51}). \subsection{Evenly irreducible solutions} We now give some results on the irreducibility of $\lambda$-quiddities over $\mathbb{Z}i$. We begin with the following definition, which concerns the solutions of even size over $\bZ$.
\begin{definition} \label{44} A $\lambda$-quiddity over $\mathbb{Z}$ of even size is said to be evenly reducible if it can be written as a sum of two $\lambda$-quiddities over $\bZ$ of even size at least 4. Otherwise, the $\lambda$-quiddity is said to be evenly irreducible. \end{definition} \begin{examples} {\rm \begin{itemize} \item The $\lambda$-quiddities of size 4 are evenly irreducible; \item $(2,2,1,4,1,2)$ is evenly reducible, since $(2,2,1,4,1,2)=(1,2,1,2) \oplus (2,1,2,1)$; \item $(1,2,1,2,1,2,1,2)$ is evenly reducible, since $(1,2,1,2,1,2,1,2)=(1,2,1,2) \oplus (0,1,2,1,2,0)$; \item $(1,1,1,1,1,1)$ is evenly irreducible, since otherwise it would be the sum of two $\lambda$-quiddities of size 4, and there is no $\lambda$-quiddity of the form $(a,1,1,b)$. \end{itemize} } \end{examples} \noindent Graphically, the notion of an evenly reducible $\lambda$-quiddity over $\mathbb{Z}$ means that there exists an admissible labeling leading to this quiddity and having the following two properties: \begin{itemize} \item by cutting along a diagonal, one can obtain two sub-polygons each having an even number of vertices; \item the admissible labeling of the initial polygon induces an admissible labeling of each of the two sub-polygons.
\end{itemize} \noindent For example, this gives (with all triangles labeled 1): $$ \shorthandoff{; :!?} \xymatrix @!0 @R=0.45cm @C=0.8cm { &2\ar@{-}[ld]\ar@{-}[rd]& \\ 2\ar@{-}[dd]\ar@{-}[]&&2\ar@{-}[dd] \\ &&& \longmapsto \\ 1\ar@{-}[]&&1\ar@{-}[] \\ &4\ar@{-}[lu]\ar@{-}[ru]\ar@{-}[uuuu]\ar@{-}[ruuu]\ar@{-}[luuu]& } \xymatrix @!0 @R=0.45cm @C=0.8cm { &1\ar@{-}[ld]& \\ 2\ar@{-}[dd] \\ &&\bigoplus \\ 1\ar@{-}[]&& \\ &2\ar@{-}[lu]\ar@{-}[uuuu]\ar@{-}[luuu]& } \xymatrix @!0 @R=0.45cm @C=0.8cm { 1\ar@{-}[rd]& \\ &2\ar@{-}[dd] \\ \\ &1\ar@{-}[] \\ 2\ar@{-}[ru]\ar@{-}[uuuu]\ar@{-}[ruuu] }$$ \noindent This graphical interpretation must nevertheless be handled with care. Indeed, a single quiddity may admit two different admissible labelings, one not satisfying the two preceding conditions and another satisfying them. For example, for $(1,2,1,2,1,2,1,2)$ we have: $$ \shorthandoff{; :!?} \xymatrix @!0 @R=0.40cm @C=0.5cm { &&1\ar@{-}[lldd] \ar@{-}[rr]&&2\ar@{-}[rrdd]\ar@{-}[lldddddd]\ar@{-}[rrdddd]& \\ &&& \\ 2\ar@{-}[dd]\ar@{-}[rrrruu]&&&&&& 1 \ar@{-}[dd] \\ &&0&&0 \\ 1&&&&&& 2 \\ &&& \\ &&2 \ar@{-}[rr]\ar@{-}[rrrruu]\ar@{-}[lluuuu]\ar@{-}[lluu] &&1 \ar@{-}[rruu] } \qquad \qquad \xymatrix @!0 @R=0.40cm @C=0.5cm { &&1\ar@{-}[lldd] \ar@{-}[dddddd]\ar@{-}[rrrrdddd]\ar@{-}[rr]&&2\ar@{-}[rrdd]\ar@{-}[rrdddd]& \\ &&& \\ 2\ar@{-}[dd]&&&&&& 1 \ar@{-}[dd] \\ &&&-1 \\ 1&&&&&& 2 \\ &&& \\ &&2 \ar@{-}[rr]\ar@{-}[rrrruu]\ar@{-}[lluuuu]\ar@{-}[lluu] &&1 \ar@{-}[rruu] } $$ \noindent The triangles not labeled in the figure are labeled 1. \\ \\This notion of evenly reducible solution is related to our initial problem via the following result: \begin{proposition} \label{52} $(a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i)$ is an irreducible $\lambda$-quiddity over $<i>$ if and only if $\varphi_{1}(a_{1}i,a_{2}i,\ldots,a_{2n-1}i,a_{2n}i)=(a_{1},-a_{2},\ldots,a_{2n-1},-a_{2n})$ is evenly irreducible.
\end{proposition} \begin{proof} The proof is analogous to that of Theorem \ref{26}. \end{proof} \noindent We conclude with the following conjecture: \begin{con} There exist evenly irreducible solutions of arbitrarily large size. \end{con} \end{document}
2206.13912v1
http://arxiv.org/abs/2206.13912v1
On simple evolution algebras of dimension two and three. Constructing simple and semisimple evolution algebras
\documentclass{amsart} \makeatletter \@namedef{subjclassname@2020}{ \textup{2020} Mathematics Subject Classification} \makeatother \usepackage{latexsym} \usepackage{amssymb} \usepackage{graphicx} \usepackage{wrapfig} \usepackage{amsthm} \usepackage{float} \usepackage{epsfig} \usepackage{multirow} \usepackage{xcolor} \usepackage{caption} \usepackage[toc,page]{appendix} \usepackage{old-arrows} \usepackage{caption} \usepackage{subcaption} \usepackage{lmodern, mathtools} \usepackage{mathtools} \usepackage{hyperref} \RequirePackage{color} \usepackage{tikz} \usepackage{amscd,enumerate} \usepackage{amsfonts} \hypersetup{ colorlinks, linkcolor=blue, filecolor=green, urlcolor=blue, citecolor=blue } \usepackage{helvet} \usepackage{bigstrut} \usepackage{float, caption, booktabs, cellspace} \setlength{\cellspacetoplimit}{6pt} \setlength{\cellspacebottomlimit}{6pt} \usepackage{calrsfs} \DeclareMathAlphabet{\pazocal}{OMS}{zplm}{m}{n} \newcommand{\Lim}{\displaystyle\lim} \addtolength{\textwidth}{4cm} \addtolength{\oddsidemargin}{-2cm} \addtolength{\evensidemargin}{-2cm} \textheight=22.15truecm \makeatletter \@addtoreset{subfigure}{framenumber}\makeatother \usepackage{etoolbox}\AtBeginEnvironment{figure}{\setcounter{subfigure}{0}} \setcounter{secnumdepth}{4} \renewcommand{\thefigure}{\Roman{figure}} \renewcommand{\thesubfigure}{\arabic{subfigure}} \newtheorem{lemma}{Lemma}[section] \newtheorem{corollary}[lemma]{Corollary} \newtheorem{theorem}[lemma]{Theorem} \newtheorem{proposition}[lemma]{Proposition} \newtheorem{remark}[lemma]{Remark} \newtheorem{definition}[lemma]{Definition} \newtheorem{definitions}[lemma]{Definitions} \newtheorem{example}[lemma]{Example} \newtheorem{examples}[lemma]{Examples} \newtheorem{problem}[lemma]{Problem} \newtheorem{conjecture}[lemma]{Conjecture} \newtheorem{question}[lemma]{Question} \newtheorem{notation}[lemma]{Notation} \newtheorem{corolary}[lemma]{Corollary} \input xygraph \input xy \xyoption{all} \usepackage{array} 
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \usepackage{multicol} \newcommand{\bK}{\mathbb{K}} \def\l{\lambda} \def\a{\alpha} \def\d{\delta} \def\b{\beta} \def\g{\gamma} \def\o{\omega} \def\s{\sigma} \def\m{\mu} \def\gsquare{gsquare} \def\com{\mathop{\text{Com}}} \def\GL{\mathop{\hbox{\rm GL}}} \def\w{\omega} \def\W{\mathcal{W} } \def\B{\mathcal{B} } \def\E{{A}} \def\id{\mathop{\text{Id}}} \def\supp{\mathop{\hbox{\rm supp}}} \def\f{\phi} \def\esc#1{\langle #1\rangle} \def\remove#1{} \newcommand{\Supp}{{\rm supp}} \newcommand{\sgn}{\text{sgn}} \newcommand{\C}{{\mathcal{C}}} \newcommand{\U}{{\mathcal{U}}} \newcommand{\CU}{{\mathcal{CU}}} \newcommand{\M}{{\mathcal{M}}} \newcommand{\G}{{\mathcal{G}}} \newcommand{\Ker}{{\rm{Ker}}} \newcommand{\dsum}{\sum} \newcommand{\dbigcup}{\bigcup} \newcommand{\N}{{\mathbb{N}}} \newcommand{\Z}{{\mathbb{Z}}} \newcommand{\K}{{\mathbb{K}}} \newcommand{\KN}{{\mathbb{K}^{\times}}} \newcommand{\Q}{{\mathbb{Q}}} \newcommand{\R}{{\mathbb{R}}} \newcommand{\complex}{{\mathbb{C}}} \newcommand{\dos}{\mathbf{II}} \newcommand{\tres}{\mathbf{III}} \newcommand{\cuatro}{\mathbf{IV}} \def\ch{\mathop{\hbox{\rm char}}} \def\diag{\mathop{\hbox{\rm diag}}} \newcommand{\D}{\mathfrak{Diag}} \title{On simple evolution algebras of dimension two and three. Constructing simple and semisimple evolution algebras} \author[Y. Cabrera]{Yolanda Cabrera Casado} \author[D. Mart\'{\i}n]{Dolores Mart\'{\i}n Barquero} \author[C. Mart\'{\i}n]{C\'andido Mart\'{\i}n Gonz\'alez} \author[A. Tocino]{Alicia Tocino} \address{Departamento de Matem\'atica Aplicada, E.T.S. Ingenier\'\i a Inform\'atica, Universidad de M\'alaga, Campus de Teatinos s/n. 29071 M\'alaga. Spain. 
} \email{[email protected]} \address{Departamento de Matem\'atica Aplicada, Escuela de Ingenier\'\i as Industriales, Universidad de M\'alaga, Campus de Teatinos s/n. 29071 M\'alaga. Spain.} \email{[email protected]} \address{Departamento de \'Algebra, Geometr\'{\i}a y Topolog\'{\i}a, Fa\-cultad de Ciencias, Universidad de M\'alaga, Campus de Teatinos s/n. 29071 M\'alaga. Spain.} \email{candido\[email protected]} \address{Departamento de Matem\'atica Aplicada, E.T.S. Ingenier\'\i a Inform\'atica, Universidad de M\'alaga, Campus de Teatinos s/n. 29071 M\'alaga. Spain. } \email{[email protected]} \subjclass[2020] {17A60, 17D92, 05C25.} \keywords{Evolution algebra, simple algebra, directed graph, strongly connected, tensor product, moduli set.} \thanks{ The authors are supported by the Spanish Ministerio de Ciencia e Innovaci\'on through project PID2019-104236GB-I00 and by the Junta de Andaluc\'{\i}a through projects FQM-336 and UMA18-FEDERJA-119, all of them with FEDER funds. } \begin{document} \begin{abstract} This work classifies three-dimensional simple evolution algebras over arbitrary fields. For this purpose, we use tools such as the associated directed graph, the moduli set, the inductive limit group, the Zariski topology and the dimension of the diagonal subspace. Explicitly, in the three-dimensional case we construct models $_i\tres_{\l_1,\ldots,\l_n}^{p,q}$ of such algebras with $1\le i\le 4$, $\l_i\in\K^\times$, $p,q\in\N$, such that any algebra is isomorphic to one (and only one) of those given by the models, and we further investigate the isomorphism question within each family. Moreover, we show how to construct simple evolution algebras of higher dimension from known simple evolution algebras of smaller dimension. \end{abstract} \maketitle \section{Introduction} The classification of three-dimensional evolution algebras over fields (under some hypotheses on the ground field) is achieved in \cite{CSV2}.
In particular, the classification of evolution algebras of dimension three over the real field is obtained in \cite{BMV} and over the complex field in \cite{IM}. For more information on how advances in the classification of this type of algebras have emerged, one can see \cite[Section 4.7]{ceballos}. The number of cases arising in the classification is so high that it is worth re-thinking the strategy and extending the classification to general fields. We have divided the job into two tasks. The first one is the classification of non-simple algebras, which is done in \cite{CCGMM}. Having completed the classification of non-simple evolution algebras of dimension three over arbitrary fields in the aforementioned reference, we tackle in this paper the corresponding classification in the case of simple algebras. The novelty in our study is the use of some tools like the diagonal subspace and the moduli sets, which systematize the organization of different classes of simple algebras into families parametrized by suitable sets, usually orbit sets under the action of appropriate groups. Roughly speaking, a moduli set for a class of algebras $C$ (over the same ground field) is a set $S$ which parametrizes the isomorphism classes of elements in $C$. So, we can say that the isomorphism classes of elements of $C$ are in one-to-one correspondence with the elements of the moduli set. If we denote by $\hbox{iso}(C)$ the set whose elements are the isomorphism classes of algebras of $C$, then there is a bijection $\omega\colon \hbox{iso}(C)\to S$ in such a way that from each $s\in S$ we can construct the multiplication table of any $A\in\omega^{-1}(s)$. The moduli set may result from having some extra algebraic or geometric structure. For instance, there is a family of simple three-dimensional algebras $A_\l$ depending on a nonzero $\l\in\K^\times $ whose isomorphism condition is $A_\l\cong A_\m$ if and only if $\m=\l r^7$ for some $r\in\K^\times$.
So, a moduli set for this class of algebras is the quotient group $G=\K^\times/(\K^\times)^{[7]}$ where $(\K^\times)^{[7]}:=\{r^7\colon r\in \K^\times\}$. In this case, the moduli set is a group and the isomorphism classes of such algebras are in one-to-one correspondence with $G$. To give an example of a moduli set with a geometric flavour, recall that a bundle in the category of sets is a surjective map $\pi\colon E\to B$ where $B$ is the base set and $E$ the total set. The sets $\pi^{-1}(b)$ are called the fibers and, of course, we have $E=\sqcup_{b\in B} \pi^{-1}(b)$. There is another class $C'$ of three-dimensional simple algebras $B_{a,b}$ depending on $a,b\in\K^\times$ such that $B_{a,b}\cong B_{x,y}$ if and only if there is some $t\in\K^\times$ with $x=t^3 a$ and $y=t^7 b$. Then the parametrized curve $c_{a,b}:=\footnotesize\begin{cases}x=t^3 a\\ y=t^7 b\end{cases}$ contains all the points $(x,y)$ such that $B_{a,b}\cong B_{x,y}$. Furthermore, the set $(\K^\times\times\K^\times)/\Delta_{3,7}$, whose elements are the different curves $c_{a,b}$, is a moduli set for the class $C'$. In this case, there is a bundle (in the category of sets) $C'\to (\K^\times\times\K^\times)/\Delta_{3,7}$ such that $B_{a,b}\mapsto c_{a,b}$. This moduli set is the base set of the bundle whose total set is $C'$ and whose fibers are the isomorphism classes of $C'$. Thus, this work aims to classify simple three-dimensional evolution algebras over an arbitrary field of scalars, giving explicit constructions of moduli sets for the different classes of algebras. To carry out the classification over arbitrary fields, it has become necessary to use the Frobenius normal form in just one case. The paper is structured in the following way. In the preliminaries section, we recall all the definitions and properties that will be used in the task of classification. In Section \ref{simple}, we introduce the concept of diagonal subspace for $\K$-evolution algebras with a unique natural basis.
It will be used to classify evolution algebras in terms of the dimension of their diagonal subspace. In the case of two-dimensional evolution algebras, we have three types of simple evolution algebras, which are collected in Theorem \ref{teorema1}. In the case of three-dimensional simple evolution algebras, we have twenty-seven types, which are reflected in the main Theorem \ref{teorema2}. In the last section, we obtain new simple evolution algebras of finite dimension as the tensor product of two simple evolution algebras (see Theorem \ref{cor:strong}) in terms of the categorical product of their associated directed graphs, using \cite{McANDREW}. Another way of constructing new simple evolution algebras is described in Remark \ref{nuevas}. Moreover, if $A$ is a finite evolution algebra with an ideal $I$ of dimension $1$, we prove that $A/I$ is not simple (see Theorem \ref{cociente}). \section{Preliminaries} A \emph{directed graph} is a $4$-tuple $E=(E^0, E^1, r_E, s_E)$ consisting of two disjoint sets $E^0$, $E^1$ and two maps $r_E, s_E: E^1 \to E^0$. The elements of $E^0$ are called \emph{vertices} and the elements of $E^1$ are called \emph{edges} of $E$. Further, for $e\in E^1$, $r_E(e)$ and $s_E(e)$ are the \emph{range} and the \emph{source} of $e$, respectively. If there is no confusion with respect to the graph we are considering, we simply write $r(e)$ and $s(e)$. A \emph{path} $\mu$ of length $m$ is a finite chain of edges $\mu=f_1\ldots f_m$ such that $r(f_i)=s(f_{i+1})$ for $i=1,\ldots,m-1$. We denote by $s(\mu):=s(f_1)$ the source of $\mu$ and $r(\mu):=r(f_m)$ the range of $\mu$. If $r(\mu)=s(\mu)$, then $\mu$ is called a \emph{closed path}. If $E$ is a directed graph and $S\subset E^0$, then we denote by $T(S)$ the tree of $S$, where $$T(S)=\{v\in E^0 \colon \text{there exist }\lambda \in \text{Path}(E) \text{ and } u \in S \text{ with } s(\lambda)=u, r(\lambda)=v\}.$$ In this paper, when we say graph we mean directed graph.
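The tree $T(S)$ is simply the set of vertices reachable from $S$, so it can be computed by breadth-first search. The following sketch is ours, not part of the paper; the graph encoding is hypothetical, and we adopt the common convention that vertices are trivial paths, so that $S\subseteq T(S)$.

```python
from collections import deque

def tree(adj, S):
    """Compute T(S): all vertices reachable from the vertex set S.

    adj maps each vertex to the list of ranges of its outgoing edges.
    Vertices are treated as trivial paths, so S itself is contained in T(S).
    """
    reached = set(S)
    frontier = deque(S)
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, []):
            if v not in reached:
                reached.add(v)
                frontier.append(v)
    return reached

# Hypothetical graph on E^0 = {1, 2, 3} with edges 1->2, 2->1, 2->3, 3->3.
adj = {1: [2], 2: [1, 3], 3: [3]}
print(sorted(tree(adj, {3})))  # [3]
print(sorted(tree(adj, {1})))  # [1, 2, 3]
```

By the ideal property recalled below, $\bigoplus_{i\in T(S)}\K e_i$ is an ideal of an evolution algebra with this associated graph; here, for instance, $T(\{3\})=\{3\}$ yields a one-dimensional ideal.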
An \emph{evolution algebra} over a field $\bK$ is a $\bK$-algebra $A$ which has a basis $\B=\{e_i\}_{i\in \Lambda}$ such that $e_ie_j=0$ for every $i, j \in \Lambda$ with $i\neq j$. Such a basis is called a \emph{natural basis}. From now on, all the evolution algebras we will consider will be finite-dimensional, and $\Lambda$ will denote a finite set $\{1, \dots, n\}$. Let $A$ be an evolution algebra with a natural basis $\B=\{e_i\}_{i\in \Lambda}$. Denote by $M_\B=(\omega_{ij})$ the \emph{structure matrix} of $A$ relative to $\B$, i.e., $e_i^2 = \sum_{j\in \Lambda} \omega_{ji}e_j$. In the particular case that $\det(M_\B) \neq 0$, we say that the evolution algebra is \emph{perfect}. Let $u=\sum_{i\in \Lambda}\alpha_ie_i$ be an element of $A$. Recall that the \emph{support of} $u$ \emph{relative to} $\B$, denoted $\Supp_\B(u)$, is defined as the set $\Supp_\B(u)=\{i\in \Lambda\ \vert \ \alpha_i \neq 0\}$. We recall the definition of the directed graph associated to an evolution algebra $A$ relative to a natural basis $\B=\{e_1,\ldots,e_n\}$. We draw an edge from the vertex labelled $i$ to the one labelled $j$ if the $j$-th coordinate of $e_i^2$ relative to $\B$ is nonzero. An interesting property is that \begin{equation}\label{first} I:=\bigoplus_{i\in T(S)}\K e_i\triangleleft A\end{equation} for any $S\subset E^0$. To prove this, we only need to check that $e_i^2\in I$ whenever $i\in T(S)$: but $e_i^2=\sum_j\o_{ji} e_j$ and if $\o_{ji}\ne 0$ we have $j\in T(i)\subset T(S)$. One of the ideas in this work is to construct three-dimensional simple evolution algebras from algebraic constructions starting from two-dimensional evolution algebras. We will need several algebraic tools. \remove{The very first of all is the tensor product of algebras. If $A$ and $B$ are finite-dimensional vector spaces over a field $\K$ and $\{e_i\}_{i=1}^n$, $\{u_j\}_{j=1}^m$ are bases of $A$ and $B$ respectively, then $\{e_i\otimes u_j\}_{i=1,j=1}^{n,m}$ is a basis of $A\otimes B$.
Assume furthermore that $A$ and $B$ are algebras, with structure constants $\omega_{ij}^k$ and $\sigma_{pq}^n$. Thus $e_ie_j=\sum_k\omega_{ij}^k e_k$ and $u_pu_q=\sum_r\sigma_{pq}^r u_r$. Then we have $$(e_i\otimes u_p)(e_j\otimes u_q)=\sum_{k,r}\omega_{ij}^k\sigma_{pq}^r e_k\otimes u_r.$$ If $A$ and $B$ are evolution algebras, then $\omega_{ij}^k=\delta_{ij}\omega_{ii}^k$ and $\sigma_{pq}^r=\delta_{pq}\omega_{pp}^r$. Denoting $\omega_{ii}^k:=\omega_i^k$ and $\sigma_{pp}^r:=\sigma_p^r$ we may write $$(e_i\otimes u_p)(e_j\otimes u_q)=\delta_{ij}\delta_{pq}\sum_{k,r}\omega_{i}^k\sigma_{p}^r e_k\otimes u_r,$$ which proves that $A\otimes B$ is an evolution algebra and its structure matrix relative to the basis $\{e_i\otimes u_p\}$ is $(\o_i^j)_{i,j}\otimes(\sigma_p^r)_{p,r}$. For instance, if $A$ has structure matrix $\tiny\begin{pmatrix}\o_1^1 & \o_1^2\cr\o_2^1&\o_2^2\end{pmatrix}$ and $B$ $\tiny\begin{pmatrix}\sigma_1^1 & \sigma_1^2\cr\sigma_2^1&\sigma_2^2\end{pmatrix}$ then $A\otimes B$ has structure matrix $$\begin{pmatrix}\o_1^1 & \o_1^2\cr\o_2^1&\o_2^2\end{pmatrix}\otimes \begin{pmatrix}\sigma_1^1 & \sigma_1^2\cr\sigma_2^1&\sigma_2^2\end{pmatrix}= \begin{pmatrix}\o_1^1\sigma_1^1 & \o_1^1\sigma_1^2 & \o_1^2\sigma_1^1 & \o_1^2\sigma_1^2 \cr \o_1^1\sigma_2^1 & \o_1^1\sigma_2^2 & \o_1^2\sigma_2^1 & \o_1^2\sigma_2^2 \cr \o_2^1\sigma_1^1 & \o_2^1\sigma_1^2 & \o_2^2\sigma_1^1 & \o_2^2\sigma_1^2 \cr \o_2^1\sigma_2^1 & \o_2^1\sigma_2^2 & \o_2^2\sigma_2^1 & \o_2^2\sigma_2^2 \cr \end{pmatrix}.$$ If $M$ is an $n\times m$ matrix and $N$ is $k\times l$ then $M\otimes N$ is a $nk \times ml $ matrix. So if $M$ and $N$ are square matrices, then $M\otimes N$ is also a square matrix and it is well known that $\det(M\otimes N)=\det(M)^m\det(N)^n$ where $M$ is $n\times n$ and $N$ is $m\times m$. 
In particular, the tensor product of nonsingular matrices is again a nonsingular matrix (from this we deduce that the tensor product of perfect evolution algebras is a perfect evolution algebra). It is also remarkable that the directed graph of the tensor product of evolution algebras is the product of the graphs defined in the following way: if $E=(E^0,E^1,r_E,s_E)$ and $F=(F^0,F^1,r_F,s_F)$ then $E\times F:=(E^0\times F^0, E^1\times F^1,r,s)$ where $r(f,g)=(r_E(f),r_F(g))$ and similarly $s(f,g)=(s_E(f),s_F(g))$ for $(f,g)\in E^1\times F^1$. } It is known that, for a perfect evolution algebra, the number of edges in its directed graph does not depend on the natural basis chosen to get the graph. Also, the number of loops (more precisely: edges whose source and range agree) is an invariant (see \cite[Corollary 4.5]{EL1}). Moreover, recall that, for perfect evolution algebras, non-isomorphic graphs correspond to non-isomorphic evolution algebras. \begin{definition}\rm Let $A$ be a perfect evolution algebra with structure matrix $(\o_{ij})_{i,j=1}^n$. The cardinality of the set $\{i \ \colon \o_{ii}\ne 0\}$, which does not depend on the chosen natural basis, will be called the {\it $l$-number of} $A$ and denoted $l(A)$. The cardinality of $\{(i,j)\colon \o_{ij}\ne 0\}$, which is another invariant, will be called the {\it $e$-number of} $A$ (denoted $e(A)$). \end{definition} Given $n\in\N$ with $n\ge 1$, and a field $\K$, we will use the notation $G_n(\K)$ for the multiplicative group $G_n(\K)=\K^\times/(\K^\times)^{[n]}$ where $(\K^\times)^{[n]}:=\{x^n\colon x\in \K^\times\}$. When the reference to the field $\K$ is not necessary, we shall use $G_n$ instead of $G_n(\K)$. If $n\ge 2$, then there is a group action of $\Z_q$ on $G_n$ which we can describe in the following way: consider $\Z_q$ with multiplicative notation, that is $\Z_q=\{1,\pi,\ldots,\pi^{q-1}\}$ where $\pi^q=1$.
Then define $\Z_q\times G_n\to G_n$ by $\pi^m \bar k=\bar k^{j^m}$ where $j$ is a fixed integer satisfying $j \neq 1$ and $j^q\equiv 1 (\hbox{mod } n)$. To prove that the above map is indeed a group action, consider $$\pi^m(\pi^l \bar k)=\pi ^m(\overline{k}^{j^l})=\overline{k}^{j^{l+m}}=\pi^{m+l}\bar k,\quad k\in\K^\times.$$ \begin{definition}\label{moduli}\rm Let $\G$ be a group and $X$ a $\G$-set. That is, $X$ is a set, and there is a group action $\G\times X\to X$, $(m,x)\mapsto mx$. Then we say that the couple $(\G,X)$ is a {\it moduli set} for a class $C$ of algebras over the field $\K$ if there is a one-to-one correspondence between the isomorphism classes of algebras in $C$ and the set of orbits $X/\G$. Accordingly, the \emph{set of orbits of a moduli set} $(\G,X)$ is defined as the orbit set $X/\G$. If $(\G,X)$ is a $\G$-set and $(\G',X')$ a $\G'$-set, we can define a homomorphism $f\colon (\G,X)\to (\G',X')$ as a couple $f=(f_1,f_2)$ where $f_1\colon \G\to \G'$ is a group homomorphism and $f_2\colon X\to X'$ is a map satisfying $f_2(gx)=f_1(g)f_2(x)$ for any $g\in\G$ and $x\in X$. This allows a natural notion of isomorphism. If $(\G,X)$ is a moduli set for a class of algebras $C$ and there is an isomorphism $(\G,X)\cong(\G',X')$, then $(\G',X')$ is again a moduli set for $C$. This allows a kind of simplification for certain moduli sets. An example of this situation arises when $\G$ and $\G'$ are conjugate subgroups of $\GL_n(\K)$ and we have an action $\GL_n(\K)\times \K^n\to \K^n$ which induces an action $\G\times X\to X$ where $X\subset\K^n$. Let us denote the action as $g\cdot x$ for $g\in\G$ and $x\in X$. If $\G'=p\G p^{-1}$ for a fixed $p\in\GL_n(\K)$, then we can define a new action $\G'\times X'\to X'$, where $X':=p\cdot X$, denoted by $g'\bullet x'$. The new action is given by $(pgp^{-1})\bullet (p\cdot x):=p\cdot(g\cdot x)$ for any $g\in\G$ and $x\in X$.
Furthermore, the maps $f_1\colon \G\to\G'$ given by $f_1(g)=pgp^{-1}$ and $f_2\colon X\to X'$ such that $f_2(x):=p\cdot x$ give rise to an isomorphism $(f_1,f_2)\colon (\G,X)\cong(\G',X')$. Thus, if $(\G,X)$ is a moduli set for $C$, then so is $(\G',X')$. A particular case is the following: assume that $\G$ is the subgroup of $\GL_n(\K)$ generated by a matrix $m$ and consider an action $\G\times X\to X$ for some subset $X\subset\K^n$. If $m'$ is some canonical form associated to $m$ (the Jordan normal form or, alternatively, the Frobenius normal form) so that $m'=pmp^{-1}$ for some invertible matrix $p\in\GL_n(\K)$, then the subgroup $\G'$ of $\GL_n(\K)$ generated by $m'$ induces a new action on $X'=p\cdot X$ as above. So the isomorphism $(\G,X)\cong (\G',X')$ tells us that the moduli set $(\G,X)$ can be replaced with the new one $(\G',X')$, and if $(\G,X)$ classifies the algebras of $C$, then $(\G',X')$ also classifies that class of algebras. \end{definition} \begin{notation}\label{mrqpp}\rm Let $\K$ be a field and $G_n=G_n(\K)$ the group defined above. Then, if $j^q\equiv 1 (\hbox{mod } n)$, we denote by $\Omega_{q,n}$ the couple $\Omega_{q,n}=(\Z_q,G_n(\K))$ with the action $\pi^l \bar k:=\overline{k}^{j^l}$ for any $k\in\K^{\times}$. \end{notation} For example, we will use $(\Z_2,G_3)$ as a moduli set. Explicitly, the action is $\Z_2\times G_3\to G_3$ where $\pi\bar k=\overline{k}^2$. Thus, $\bar \m$ and $\bar \l$ are in the same orbit if and only if there is some $k\in\K^{\times}$ such that $\m=k^3\l$ or $\m=k^3\l^2$ or $\l=k^3\m^2$. Another instance of this kind of moduli set is $(\Z_2,G_3({\mathbb F}_4))$. In this case, $({\mathbb F}_4^\times)^{[3]}$ is trivial, so that $G_3({\mathbb F}_4)={\mathbb F}_4^\times$. The action $\Z_2\times {\mathbb F}_4^{\times}\to {\mathbb F}_4^{\times}$ is given by $\pi\a=\b$ and $\pi\b=\a$. Thus, the orbit set ${\mathbb F}_4^\times/\Z_2$ has two elements: the orbit of $1$, which is a singleton, and the orbit of $\a$, which is $\{\a,\b\}$.
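To make the preceding examples concrete, here is a small computation of ours, not from the paper, of the orbit set of the moduli set $(\Z_2,G_3)$ over the prime field $\mathbb F_7$ (this choice of field is only for illustration):

```python
# Illustration (ours): the moduli set (Z_2, G_3) over F_7.
# G_3 = F_7^x / (F_7^x)^[3], and Z_2 acts by squaring a class: pi . kbar = kbar^2.
p = 7
units = range(1, p)                       # F_7^x
cubes = {pow(x, 3, p) for x in units}     # (F_7^x)^[3] = {1, 6}

def coset(k):
    """The class kbar of k in G_3, represented as a frozenset of units."""
    return frozenset(k * c % p for c in cubes)

G3 = {coset(k) for k in units}            # three cosets: {1,6}, {2,5}, {3,4}

def orbit(kbar):
    """Orbit {kbar, kbar^2} of a class under the Z_2-action."""
    rep = next(iter(kbar))                # any representative has the same square class
    return frozenset({kbar, coset(rep * rep % p)})

orbits = {orbit(g) for g in G3}
print(len(G3), len(orbits))               # 3 2
```

Here the class of $1$ is fixed while the two remaining classes of $G_3(\mathbb F_7)$ are swapped, so the orbit set has two elements, in complete analogy with the $\mathbb F_4$ example above.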
If $G$ is an abelian group and we consider the direct system $G\to G\to\cdots\to G\to\cdots$ where every homomorphism is given by $g\mapsto g^m$ (for fixed $m$), then we can construct the inductive limit group $\Lim_{\to m} G$ whose elements are the equivalence classes $[g]=\{g^{m^k}\colon k\in\N\}$. Thus $[g]=[h]$ if and only if $g^{m^r}=h^{m^s}$ for some naturals $r$ and $s$. We will apply this construction when $G=G_n(\K)$. The canonical homomorphism $\K^{\times} \to G_n(\K)\to \Lim_{\to m}G_n(\K)$ allows us to say that two elements $\l,\m\in\K^\times$ have the same image in $\Lim_{\to m}G_n(\K)$ when $\l^{m^r}=k^n\m^{m^s}$ for some $r,s\in\N$ and $k\in\K^\times$. For instance, in the moduli set $(\Z_2,G_3)$, whose set of orbits is $G_3/\Z_2$, we have $G_3/\Z_2=\Lim_{\to 2}G_3$ and two elements $\l,\m\in\K^\times$ have the same image in $\Lim_{\to 2}G_3(\K)$ if and only if $\l^{2^r}=k^3\m^{2^s}$. \begin{definition} \rm Let $\K$ be a field and $n_1,\ldots,n_q \in \Z$. We define a map $\phi: \K^{\times} \to (\K^{\times})^{[n_1]} \times \ldots \times (\K^{\times})^{[n_q]} $ such that $\phi(\lambda)=(\lambda^{n_1},\ldots,\lambda^{n_q})$. Observe that $\phi$ is a group homomorphism. We set $\Delta_{n_1,\ldots,n_q}:=\phi(\K^{\times})$. \end{definition} \begin{definition}\rm \label{action1} We define an action $\GL_n(\K)\times(\K^\times)^n\to (\K^\times)^n$ such that $$(m_{i,j})\cdot (v_i):=(\prod_j v_j^{m_{1,j}},\ldots,\prod_j v_j^{m_{n,j}})^T.$$ Then, for any subgroup $S$ of $\GL_n(\K)$, we can restrict to an action $S\times(\K^\times)^n\to (\K^\times)^n$. Denote the orbit of an element $v\in(\K^\times)^n$ by $[v]$. For a fixed matrix $M=(m_{i,j}) \in \GL_n(\K)$, we can consider the subgroup $S=\{M^k\colon k\in\Z\}$ of $\GL_n(\K)$ and the action $S\times (\K^\times)^n\to (\K^\times)^n$ as before. This gives rise to the orbit space $(\K^\times)^n/S$, which will depend on $S$ (and, of course, on $\K$). In some cases, there can be a power of $M$ which equals the identity.
Then, the group $S$ will be finite. Otherwise, we will have $S\cong \Z$. \end{definition} \begin{notation}\rm Let $\K$ be an arbitrary field. If $p \in \K[x_1,\ldots,x_n]$, we denote by $D(p):=\{(\l_1,\ldots \l_n) \in \K^{n} \colon p(\l_1,\ldots \l_n) \neq 0\}$. Observe that $D(p)$ is a basic open set in the Zariski topology of $\K^{n}$. \end{notation} \section{Classification of two and three dimensional simple evolution algebras}\label{simple} A simple $\K$-algebra $A$ is an algebra such that $A^2\ne 0$ and its only ideals are $0$ and $A$. We recall also that a directed graph $E$ is said to be \emph{strongly connected} if given any two different vertices $u$ and $v$ in $E^0$, there exists a path $\mu$ such that $s(\mu)=u$ and $r(\mu)=v$. We know that a perfect finite evolution algebra is simple if and only if its associated directed graph is strongly connected (see \cite[Proposition 2.7]{CKS1}). \remove{\begin{lemma}\label{merengue} Let $A$ be a perfect finite-dimensional evolution algebra and $E$ its associated directed graph relative to a natural basis $\B=\{e_1,\ldots,e_n\}$. If there is a closed path in $E$ containing all the vertices of the directed graph, then $A$ is simple. \end{lemma} \begin{proof} Assume now that $A$ is a perfect evolution algebra, $\B$ a natural basis and $E$ the directed graph of $A$ relative to $\B$ with $E^0=\{1,\ldots,n\}$. Assume further that there is an edge from $i$ to $j$. Then $e_j^2\in (e_i^2)$ (the ideal generated by $e_i^2$). Consequently, if there is a closed path in $E$ containing all the vertices of the graph we conclude that $(e_j^2)= (e_i ^2)$ for any $i$ and $j$. Thus $(e_i^2)=A$ for any $i$, also for any nonzero $x\in A$ we have $(x)=A$, because we may write $x=\sum x^i e_i$ and some scalar $x^i$ is nonzero, so $(x)\ni x e_i=x^i e_i^2$ implying $A=(e_i^2)\subset (x)$. Consequently, any nonzero element $x$ satisfies $(x)=A$ implying that $A$ is a simple algebra.
\end{proof} So we claim: \begin{proposition}\label{yaemp} Let $A$ be a perfect finite-dimensional evolution algebra and $E$ its directed graph relative to a natural basis. Then the following assertions are equivalent: \begin{enumerate} \item The directed graph $E$ has a closed path containing all the vertices. \item $A$ is simple. \end{enumerate} \end{proposition} \begin{proof} By Lemma \ref{merengue}, it remains to prove that the simplicity of $A$ implies the existence of the closed path. Let $E$ be the directed graph of $A$ relative to a natural basis $\B=\{e_1,\ldots,e_n\}$ and $E^0=\{1,\ldots,n\}$. Take any vertex $i\in E^0$, then $A=\oplus_{j\in T(i)}\K e_j$ by \eqref{first} and simplicity of $A$. So $T(i)=E^0$ for any vertex $i \in E^0$. Thus given $i,j$ different vertices we have $j\in T(i)$ and $i\in T(j)$ hence there is a closed path $\lambda$ passing through both vertices $i$ and $j$. Take such a closed path $\l$ of maximum length, that is, the cardinal of $\l^0$ is maximum. Then one can prove that $\l^0=E^0$ hence we have found a closed path containing all the vertices of $E$. \end{proof} } \begin{remark} \label{yaemp}\rm It is easy to check that a directed graph is strongly connected if and only if it has a closed path containing all the vertices. So, a perfect evolution algebra whose associated directed graph has a closed path containing all the vertices is simple. In particular, if $A$ is a perfect evolution algebra with structure matrix $(\omega_{ij})$ satisfying $\omega_{ij}\ne 0$ for $i \neq j$, then $A$ is simple. \end{remark} Assume that $A$ is an evolution $\K$-algebra such that any two natural bases $\B_1$ and $\B_2$ are related in the sense that for any $e\in \B_1$, there is a nonzero scalar $k \in \K$ and an $f\in \B_2$ such that $f=k e$. When this happens, we say that \textit{$A$ has a unique natural basis} (see \cite[Definition 2.1]{boudi20}). Note that $\sum_{e\in \B_1}(eA)e=\sum_{f\in \B_2}(fA)f$; hence the following definition does not depend on the chosen natural basis.
\begin{definition}\rm In the previous conditions, we define the \emph{diagonal subspace} given by $$\D(A):=\sum_{e\in \B} (eA)e.$$ \end{definition} Observe that, for perfect evolution $\K$-algebras, the above definition applies (and, in particular, for simple algebras). Furthermore, if $f\colon A_1\to A_2$ is an isomorphism of algebras and $A_1$ is in the above conditions, then $f(\D(A_1))=\D(A_2)$. The diagonal subspace is then an invariant for simple evolution $\K$-algebras, and we can classify the simple three-dimensional algebras $A$ into $4$ disjoint classes depending on $\dim(\D(A))$. Note also that $\D(A)=\oplus_{i}\K e_i^2$ (extended to those indices $i$ such that $\omega_{ii}\ne 0$). In other words, $\dim(\D(A))=l(A)$ (the number of loops in the associated directed graph). \begin{notation}\rm We will consider the notation $\dos^{l(A),e(A)}_{\Gamma}$ for $A$ a two-dimensional evolution algebra, where $\Gamma$ is the set of nonzero parameters that appear in its corresponding structure matrix. Similarly, we will consider $_n\tres^{l(A),e(A)}_{\Gamma}$ for $A$ a three-dimensional evolution algebra, where $n \in \N^*$ helps us to distinguish the cases in which the evolution algebras have the same $l(A)$, $e(A)$ and $\Gamma$. \end{notation} \subsection{Two-dimensional case} The case of two-dimensional simple evolution algebras $A$ is useful to illustrate the above ideas. Since the graph must contain a cycle visiting the two vertices, the graph will contain the subgraph: $$\xymatrix{{\bullet}^{1} \ar@/^0.5pc/ [r] & {\bullet}^{2} \ar@/^0.5pc/[l]}$$ \noindent so that $\dim(\D(A))=l(A)=0,1,2$. \begin{enumerate} \item If $\dim(\D(A))=0$ (implying $e(A)=2$), we have $e_1^2=\a e_2$ and $e_2^2=\b e_1$. Without loss of generality, we may assume $e_1^2=e_2$ and $e_2^2=\l e_1$ with $\l\in\K^\times$. Thus, the structure matrix relative to $\{e_1,e_2\}$ is $\tiny\begin{pmatrix} 0 & \l\cr 1 & 0\end{pmatrix}$. This algebra is $\dos_{\l}^{0,2}$.
Now, we focus on natural bases whose structure matrix is of the kind above (zeros in the diagonal, $1$ in the $(2,1)$-entry and a nonzero element in the $(1,2)$-entry). We can see that any other natural basis of this kind is either $\{k e_1,k^2 e_2\}$ or $\{k e_2,k^2\l e_1\}$ with $k\in\K^\times$. So, here we have an action of the group $\K^\times\times\Z_2$ on the set of natural bases. The structure matrix relative to $\{k e_1,k^2 e_2\}$ is $\tiny\begin{pmatrix} 0 & k^3\l\cr 1 & 0\end{pmatrix}$ while the corresponding one relative to $\{k e_2,k^2\l e_1\}$ is $\tiny\begin{pmatrix} 0 & k^3\l^2\cr 1 & 0\end{pmatrix}$. Therefore, the algebras $\dos_{\l}^{0,2}$ and $\dos_{\m}^{0,2}$ with structure matrices $\tiny\begin{pmatrix} 0 & \l\cr 1 & 0\end{pmatrix}$ and $\tiny\begin{pmatrix} 0 & \m\cr 1 & 0\end{pmatrix}$ are isomorphic if and only if there is a $k\in\K^\times$ such that $\m^{2^{r}}=k^3\l^{2^{s}}$ for some $r, s \in \N$. Thus, the isomorphism classes of algebras with structure matrix of type $\tiny\begin{pmatrix} 0 & \l\cr 1 & 0\end{pmatrix}$ are classified by the moduli set $\Omega_{2,3}=(\Z_2,G_3)$ (see Notation \ref{mrqpp}). Equivalently, $\dos_{\l}^{0,2}\cong \dos_{\m}^{0,2}$ if and only if $\l$ and $\m$ have the same image in $\Lim_{\to 2}G_3(\K)$. \item If $\dim(\D(A))=1$, then $e(A)=3$. We may choose our natural basis to have structure matrix $\tiny\begin{pmatrix}1 & \l\cr 1 & 0\end{pmatrix}$ with $\l\in\K^{\times}$. This algebra is $\dos_{\l}^{1,3}$. \bigskip $$ \tiny\xymatrix{\bullet^{1} \ar@(dl,ul) \ar@/^.5pc/[r] & \bullet^{2} \ar@/^.5pc/[l]}$$ \bigskip It is easy to check that if $\dos_{\m}^{1,3}\cong \dos_{\l}^{1,3}$, then $\m=\l$. Thus, the isomorphism classes of algebras of this kind are in one-to-one correspondence with the set $\K^\times$. We may take the acting group to be the trivial one. So, the moduli set is $(1,\K^\times)$.
\item If $\dim(\D(A))=2$ (so $e(A)=4$), we can choose the matrix relative to a natural basis of the form $\tiny\begin{pmatrix}1 & \l\cr \mu & 1\end{pmatrix}$ with $\l,\m \in\K^\times$ and $\l\m\ne 1$ for $\dos_{\l,\m}^{2,4}$. \bigskip $$ \tiny\xymatrix{\bullet^{1} \ar@(dl,ul) \ar@/^.5pc/[r] & \bullet^{2} \ar@(dr,ur)\ar@/^.5pc/[l]}$$ \bigskip If $\dos_{\l,\m}^{2,4}\cong \dos_{\l',\m'}^{2,4}$ then we can see that $(\l',\m')=(\l,\m)$ or $(\l',\m')=(\m,\l)$. We can identify the set of matrices $\tiny\begin{pmatrix}1 & \l\cr \mu & 1\end{pmatrix}$ with $\l,\m\ne 0$, $\l\m\ne 1$ with the subset $P$ of the two-dimensional affine space $A_2(\K)$ of those points $(x,y)$ such that $x,y\ne 0$, $xy\ne 1$. So $P$ is the Zariski open subset of the affine space obtained by removing both axes $x=0$, $y=0$ and the points of the \lq\lq hyperbola\rq\rq\ $xy=1$. We have an action of the group $\Z_2$ on $P$ such that $$ 0\cdot (\l,\m):=(\l,\m) \hbox{ and } 1\cdot (\l,\m):=(\m,\l).$$ In this case, the isomorphism classes of algebras are classified by the moduli set $(\Z_2,P)$. In fact, $P=(\K^\times)^2\cap D(\l\m-1)$.
\end{enumerate} We can summarize the results in the following table and theorem: \begin{table}[H] \renewcommand{\arraystretch}{2} \begin{tabular}{|c|C{3cm}|c|c|c|} \hline Type algebra & Graph & Structure matrix& Moduli set & Orbit set\cr \hline \multirow{2}{*}{$\dos_{\l}^{0,2}$} & \multirow{2}{*}{$\tiny\xymatrix{\bullet^{1} \ar@/^0.5pc/ [r] & \bullet^{2} \ar@/^0.5pc/[l]}$} & \multirow{2}{*}{$\tiny\begin{pmatrix} 0 & \l\cr 1 & 0\end{pmatrix}$}& \multirow{2}{*}{$\Omega_{2,3}$} & \multirow{2}{*}{$G_3/\Z_2\cong \Lim_{\to 2}G_3(\K)$} \cr \multirow{3}{*}{$\dos_{\l}^{1,3}$} & \multirow{3}{*}{$ \tiny\xymatrix{\bullet^{1} \ar@(dl,ul) \ar@/^.5pc/[r] & \bullet^{2} \ar@/^.5pc/[l]}$}& \multirow{3}{*}{$\tiny\begin{pmatrix}1 & \l\cr 1 & 0\end{pmatrix}$} & \multirow{3}{*}{$(1,\K^\times)$} & \multirow{3}{*}{$\K^\times$} \cr \multirow{4}{*}{$\begin{array}{c} \dos_{\l,\m}^{2,4} \end{array}$} & \multirow{4}{*}{$ \tiny\xymatrix{\bullet^{1} \ar@(dl,ul) \ar@/^.5pc/[r] & \bullet^{2} \ar@(dr,ur)\ar@/^.5pc/[l]}$} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \l\cr \mu & 1\end{pmatrix}$} & \multirow{4}{*}{$(\Z_2,(\K^\times)^2\cap D(\l\m-1))$} & \multirow{4}{*}{$\frac{(\K^\times)^2\cap D(\l\m-1)}{\Z_2}$} \cr &&&&\\ &&&&\\ \hline \end{tabular} \caption{\footnotesize Moduli sets of simple two-dimensional evolution algebras.}\label{table:1} \end{table} \begin{theorem}\label{teorema1} Let $A$ be a two-dimensional simple evolution $\K$-algebra. Then, $A$ is isomorphic to one and only one algebra of type $\dos_{\l}^{0,2}$, $\dos_{\l}^{1,3}$ or $\dos_{\l,\mu}^{2,4}$ (see Table \ref{table:1}). \end{theorem} \subsection{Three-dimensional case} Consider a simple evolution algebra $A$ with $\dim(A)=3$. From Remark \ref{yaemp}, we see that there is a closed path going through the three vertices. Thus, $\dim(\D(A))=l(A)=0,1,2,3$. 
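The criterion in Remark \ref{yaemp} is easy to mechanize: for a perfect evolution algebra one only has to test whether the directed graph attached to the structure matrix is strongly connected. The following is a minimal Python sketch (the helper names are ours; as in the examples above, we take an edge $i\to j$ exactly when the $(j,i)$ entry of the structure matrix is nonzero, i.e. when $e_i^2$ has a nonzero $e_j$-component):

```python
from itertools import product

def reachable(adj, s):
    # vertices reachable from s along directed paths (iterative DFS)
    seen, stack = set(), [s]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def strongly_connected(omega):
    # omega[j][i] != 0 means e_i^2 has a nonzero e_j-component: an edge i -> j
    n = len(omega)
    adj = {i: [j for j in range(n) if omega[j][i] != 0] for i in range(n)}
    return all(v in reachable(adj, u) for u, v in product(range(n), repeat=2))
```

For a perfect algebra, `strongly_connected` returning `True` is exactly the simplicity criterion; for instance the matrix $\tiny\begin{pmatrix}0&\l\cr 1&0\end{pmatrix}$ passes, while an upper triangular structure matrix fails.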
\subsubsection{$\boldsymbol{\dim(\D(A))=0}$} Then $e(A)=3,4,5,6$, so we can consider the following cases: \begin{enumerate} \item If $e(A)=3$ the associated directed graph is \begin{center} \resizebox{2.5cm}{.25cm}{ \xymatrix{ & \bullet^{2} \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll]}} \end{center} \bigskip Without loss of generality, the structure matrix relative to a natural basis of the evolution algebra can be taken to be ${\tiny\begin{pmatrix} 0 & 0 &\l\cr 1 & 0 & 0\cr 0 & 1 & 0\end{pmatrix}}$. If ${\tiny\begin{pmatrix} 0 & 0 &\mu \cr 1 & 0 & 0\cr 0 & 1 & 0\end{pmatrix}}$ is the structure matrix relative to another natural basis, we get $\l^{2^{r}}= k^7\mu^{2^{s}}$ for some $k\in\K^\times$ and $r,s \in \N$. Thus, the isomorphism classes of these algebras are in one-to-one correspondence with the orbits of the action of $\Z_2$ on $G_7(\K)$, described by the moduli set $\Omega_{2,7}=(\Z_2,G_7(\K))$. That is, $\tres^{0,3}_{\l}\cong \tres^{0,3}_{\m}$ if and only if $\l$ and $\m$ have the same image in $\Lim_{\to 2}G_7(\K)$. If $\root 7 \of \lambda\in\K$, then one can see that the image of $\l$ in $\Lim_{\to 2}G_7(\K)$ is the same as the image of $1$, hence $\tres^{0,3}_{\l}\cong \tres^{0,3}_{1}$. This happens, for instance, if $\K=\R$ or $\mathbb C$. However, if $\K=\Q$ we have $\tres^{0,3}_{2}\not\cong \tres^{0,3}_{3}$. \item If $e(A)=4$, the graph of $A$ has two possibilities: \begin{center} \resizebox{6cm}{.25cm}{ \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1} \ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll]} \hskip2cm \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1} \ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ul]}} \end{center} \bigskip But note that the second graph does not correspond to a perfect algebra.
In the first case, there are nonzero scalars $\l,\m$ such that the structure matrix relative to a natural basis can be chosen to be ${\tiny\begin{pmatrix} 0 & \l &\m\cr 1 & 0 & 0\cr 0 & 1 & 0\end{pmatrix}}$ and we denote this evolution algebra by $\tres_{\l,\m}^{0,4}$. If $\tres_{\l,\m}^{0,4} \cong \tres_{\l',\m'}^{0,4}$, then we find that $\l'=k^3\l$ and $\m'=k^7\m$ for some $k\in\K^{\times}$. Thus, we can define the group action of $\K^{\times}$ on the set $(\K^{\times})^2$ such that $k\cdot (\l,\m):=(k^3\l,k^7\m)$. The moduli set $(\K^{\times},(\K^{\times})^2)$ classifies the algebras in this item. The orbit set is the underlying set of the quotient group $(\K^{\times})^2/\Delta_{3,7}$ where $\Delta_{3,7}=\{(k^3,k^7): k\in \K^{\times}\}$. \item If $e(A)=5$ the graph of $A$ is \begin{center} \resizebox{2.5cm}{.25cm}{ \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1} \ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll]\ar@/^.4pc/[ul]}} \end{center}\smallskip \noindent and there are nonzero scalars $\l,\m,\gamma$ such that the structure matrix can be chosen to be ${\tiny\begin{pmatrix} 0 & \l &\m\cr 1 & 0 & \gamma\cr 0 & 1 & 0\end{pmatrix}}$. Let us denote $A$ by $\tres_{\l,\m,\gamma}^{0,5}$. If $\tres_{\l,\m,\gamma}^{0,5} \cong \tres_{\l',\m',\gamma'}^{0,5}$, then we find that $\l'=k^3\l$, $\m'=k^7\m$ and $\g'=k^6\g$, whence we can define the action of $\K^\times$ on $(\K^\times)^3$ such that $k(\l,\m,\g)=(k^3\l,k^7\m,k^6\g)$ and consider the moduli set $(\K^\times,(\K^\times)^3)$ which classifies the algebras in this item. The orbit set in this case is given by the quotient group $(\K^\times)^3/\Delta_{3,7,6}$.
\item If $e(A)=6$ the graph of $A$ is \begin{center} \resizebox{2.5cm}{.25cm}{ \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1} \ar@/^.4pc/[ur] \ar@/^.5pc/[rr] & &\bullet^{3} \ar@/^.4pc/[ll]\ar@/^.4pc/[ul]}} \end{center}\smallskip and there are nonzero scalars $\l,\m,\gamma, \delta$ such that the structure matrix relative to a natural basis can be chosen to be ${\tiny\begin{pmatrix} 0 & \l &\m \\ 1 & 0 &\gamma \\ \delta & 1 & 0\end{pmatrix}}$. We denote this evolution algebra by $\tres_{\l,\m,\gamma, \delta}^{0,6}$. If $\tres_{\l,\m,\gamma, \delta}^{0,6} \cong \tres_{\l',\m',\gamma', \delta'}^{0,6}$, then we find that $\delta'=k^{-2}\delta$, $\l'=k^3\l$, $\gamma'=k^6\gamma$ and $\m'=k^7\m$, whence we can define the action of $\K^\times$ on $(\K^\times)^4$ such that $k(\delta,\l,\gamma,\mu)=(k^{-2}\delta,k^3\l,k^6\gamma, k^7 \m)$. This action restricts to an action $\K^\times\times((\K^{\times})^4\cap D(\m-\l\delta\gamma))\to ((\K^{\times})^4\cap D(\m-\l\delta\gamma))$, so we can consider the moduli set $(\K^\times,(\K^{\times})^4\cap D(\m-\l\delta\gamma))$ which classifies the algebras in this item. The orbit set in this case is given by the quotient group $$\dfrac{(\K^\times)^4\cap D(\m-\l\delta\gamma)}{\Delta_{-2,3,6,7}}.$$ \end{enumerate} Thus, we can collect the previous results in Table \ref{table:2}.
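As a sanity check that the weighted action above really restricts to $(\K^{\times})^4\cap D(\m-\l\delta\gamma)$, note that the defining polynomial is rescaled by exactly $k^7$, so its non-vanishing is preserved. A short Python verification over exact rationals (the helper names are ours):

```python
from fractions import Fraction

def act(k, p):
    # k . (delta, lam, gam, mu) = (k^{-2} delta, k^3 lam, k^6 gam, k^7 mu)
    delta, lam, gam, mu = p
    return (delta / k**2, k**3 * lam, k**6 * gam, k**7 * mu)

def defining_poly(p):
    # polynomial whose non-vanishing cuts out D(mu - lam*delta*gam)
    delta, lam, gam, mu = p
    return mu - lam * delta * gam
```

Since the polynomial only picks up the factor $k^7$, the quotient $((\K^\times)^4\cap D(\m-\l\delta\gamma))/\Delta_{-2,3,6,7}$ is well defined.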
\begin{table}[h] \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c|c|c|c|c|} \hline Type algebra & Graph & Structure matrix & Moduli set & Orbit set\cr \hline \multirow{4}{*}{$\tres^{0,3}_{\l}$} & \resizebox{2.5cm}{.25cm}{ \xymatrix{ & \bullet^{2} \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll]}} & \multirow{4}{*}{${\tiny\begin{pmatrix} 0 & 0 &\l\cr 1 & 0 & 0\cr 0 & 1 & 0\end{pmatrix}}$} & \multirow{4}{*}{$\Omega_{2,7}$} & \multirow{4}{*}{$G_7/\Z_2\cong \Lim_{\to 2}G_7(\K)$}\cr \multirow{4}{*}{$\tres_{\l,\m}^{0,4}$} & \resizebox{2.5cm}{.25cm}{ \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1} \ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll]}} & \multirow{4}{*}{${\tiny\begin{pmatrix} 0 & \l &\m\cr 1 & 0 & 0\cr 0 & 1 & 0\end{pmatrix}}$} & \multirow{4}{*}{$(\K^{\times},(\K^{\times})^2)$} & \multirow{4}{*}{$(\K^\times)^2/\Delta_{3,7}$}\cr \multirow{4}{*}{ $\tres_{\l,\m,\gamma}^{0,5}$} & \resizebox{2.5cm}{.25cm}{ \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1} \ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll]\ar@/^.4pc/[ul]}} & \multirow{4}{*}{${\tiny\begin{pmatrix} 0 & \l &\m\cr 1 & 0 & \gamma\cr 0 & 1 & 0\end{pmatrix}}$} & \multirow{4}{*}{$(\K^{\times},(\K^{\times})^3)$} & \multirow{4}{*}{$(\K^\times)^3/\Delta_{3,7,6}$}\cr \multirow{4}{*}{$\begin{matrix} \tres_{\l,\m,\gamma, \delta}^{0,6} \vspace*{-0.15cm} \end{matrix}$} & \resizebox{2.5cm}{.25cm}{ \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1} \ar@/^.4pc/[ur] \ar@/^.5pc/[rr] & &\bullet^{3} \ar@/^.4pc/[ll]\ar@/^.4pc/[ul]}} & \multirow{4}{*}{${\tiny\begin{pmatrix} 0 & \l &\m \\ 1 & 0 &\gamma \\ \delta & 1 & 0\end{pmatrix}}$} & \multirow{4}{*}{$(\K^\times,(\K^{\times})^4\cap D(\m-\l\delta\gamma))$}& \multirow{4}{*}{$\dfrac{(\K^\times)^4\cap D(\m-\l\delta\gamma)}{\Delta_{-2,3,6,7}}$} \cr &&&&\\ \hline \end{tabular} \caption{\footnotesize Simple three-dimensional algebras with $\dim(\D(A))=0$.}\label{table:2} \end{table}
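The exponent $7$ governing the case $e(A)=3$ can be recovered concretely: rescaling the natural basis to $f_1=ke_1$, $f_2=k^2e_2$, $f_3=k^4e_3$ keeps the two entries equal to $1$ and multiplies the remaining parameter by $k^7$. The computation can be checked with a small Python sketch (the helper names are ours; recall that in an evolution algebra $e_ie_j=0$ for $i\neq j$):

```python
from fractions import Fraction as F

def multiply(u, v, omega):
    # evolution product in coordinates: e_i e_j = 0 (i != j),
    # e_i^2 = sum_j omega[j][i] e_j (the columns of omega are the squares)
    n = len(u)
    return [sum(u[i] * v[i] * omega[j][i] for i in range(n)) for j in range(n)]

lam, k = F(5), F(2, 3)
omega = [[0, 0, lam], [1, 0, 0], [0, 1, 0]]   # e1^2=e2, e2^2=e3, e3^2=lam*e1
f1, f2, f3 = [k, 0, 0], [0, k**2, 0], [0, 0, k**4]
```

Indeed $f_1^2=f_2$, $f_2^2=f_3$ and $f_3^2=(k^7\l)f_1$, which is where the factor $k^7$ in the relation $\l^{2^{r}}=k^7\m^{2^{s}}$ comes from.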
\subsubsection{$\boldsymbol{\dim(\D(A))=1}$} Then $e(A)=4,5,6,7$. Without loss of generality, we can take a natural basis for each type of evolution algebra so that the structure matrix relative to it is of the form shown in Table \ref{l1}. In most cases, we could have chosen another natural basis with a different structure matrix (i.e. with a different distribution of $1$'s and parameters). The advantage of the selected bases is that the isomorphism conditions derived from this choice are valid for any field. Observe that, when the graphs have the same number of edges, as for instance in the cases $_1\tres_{\l,\mu}^{1,5}$, $_2\tres_{\l,\mu}^{1,5}$, $_3\tres_{\l,\mu}^{1,5}$ and $_4\tres_{\l,\mu}^{1,5}$, the associated evolution algebras are non-isomorphic since the graphs are non-isomorphic. For each of the type algebras appearing in Table \ref{l1}, except for $\tres^{1,7}_{\l,\m,\delta,\nu}$, the algebras are isomorphic if and only if the parameters coincide. For the evolution algebras of type $\tres^{1,7}_{\l,\m,\delta,\nu}$, we obtain $\tres^{1,7}_{\l,\m,\delta,\nu}\cong \tres^{1,7}_{\l',\m',\delta',\nu'} $ if and only if $[(\l,\m,\delta,\nu)]=[(\l',\m',\delta',\nu')]$ in $((\K^{\times})^4\cap D(\mu+\l\delta\nu))/S_1$ with $S_1=\{M_1^k\,\colon\,k\in\Z\}$ (if $\ch({\K})=p$ with $p$ prime, then $S_1=\Z_{2p}$, and if $\ch({\K})=0$, then $S_1=\Z$) and $M_1=\footnotesize\begin{pmatrix} 0 & 1 &\phantom{-} 2 & \phantom{-}5\\ 1 & 0 & \phantom{-}2 &\phantom{-} 4\\ 0 & 0 &\phantom{-} 2 &\phantom{-} 3\\ 0 & 0 & -1 & -2 \end{pmatrix}$ with the action defined in Definition \ref{action1}. \subsubsection{$\boldsymbol{\dim(\D(A))=2}$ } Then $e(A)=5,6,7,8$. Analogously to the previous case, we can consider a natural basis for each type of evolution algebra such that its structure matrix is of the form displayed in Table \ref{l2}. We could have chosen another natural basis with a different structure matrix (i.e.
with a different distribution of $1$'s and parameters); with this choice, the isomorphism conditions for the algebras presented in the table are valid for any field. As before, the evolution algebras $_i\tres_{\l,\mu,\delta}^{2,6}$ with $i\in\{1,2,3,4\}$ are non-isomorphic because their associated directed graphs are non-isomorphic. The same happens for $_i\tres_{\l,\mu,\delta,\nu}^{2,7}$ with $i\in\{1,2,3\}$. Again, for each of the type algebras appearing in Table \ref{l2}, except for $_4\tres^{2,6}_{\l,\m,\delta}$ and $\tres^{2,8}_{\l,\m,\delta,\nu,\xi}$, the algebras are isomorphic if and only if the parameters coincide. For the evolution algebras of type $_4\tres^{2,6}_{\l,\m,\delta}$, we get $_4\tres^{2,6}_{\l,\m,\delta}\cong\, _4\tres^{2,6}_{\l',\m',\delta'} $ if and only if $[(\l,\m,\delta)]=[(\l',\m',\delta')]$ in $((\K^{\times})^3\cap D(\l\m+\delta))/S_2$ where $M_2=\footnotesize\begin{pmatrix} -1 & 0 & 0 \\ \phantom{-}2 & 0 & 1 \\ \phantom{-}2 & 1 & 0 \end{pmatrix}$ and $S_2=\{M_2^k\,\colon\,k\in\Z\}=\{Id,M_2\}$ (isomorphic to $\Z_2$) and with the action defined in Definition \ref{action1}.
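The group-theoretic claims about $S_1$ and $S_2$ can be confirmed by direct matrix computation: $M_2$ squares to the identity, while no small power of $M_1$ is the identity (in characteristic $0$ the even powers of $M_1$ are nontrivial unipotent block matrices, so its order is infinite). A quick Python check (the helper names are ours):

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][t] * b[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def mat_order(m, limit=12):
    # smallest t >= 1 with m^t = Id, or None if none is found up to `limit`
    n = len(m)
    ident = [[int(i == j) for j in range(n)] for i in range(n)]
    p = m
    for t in range(1, limit + 1):
        if p == ident:
            return t
        p = matmul(p, m)
    return None

M1 = [[0, 1, 2, 5], [1, 0, 2, 4], [0, 0, 2, 3], [0, 0, -1, -2]]
M2 = [[-1, 0, 0], [2, 0, 1], [2, 1, 0]]
```

Thus $S_2=\{Id,M_2\}\cong\Z_2$, in agreement with the discussion above, while no power of $M_1$ up to the search limit is the identity, consistent with $S_1\cong\Z$ when $\ch(\K)=0$.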
\begin{center} \begin{table}[h] \renewcommand{\arraystretch}{0.8} \begin{tabular}{|c|C{2.3cm}|C{2cm}|c|} \hline &&&\\ Type algebra & Graph & Structure matrix & Orbit set \cr &&& \\ \hline &&&\\[-0.2cm] \multirow{4}{*} {$\tres_{\l}^{1,4}$} & \resizebox{2cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & 0 & \l\cr 1 & 0 & 0\cr 0 & 1 & 0\end{pmatrix}$} & \multirow{4}{*}{\footnotesize{$\K^\times$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$_1\tres_{\l,\mu}^{1,5}$} & \centerline{ \resizebox{2cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll]}}} & \multirow{4}{*}{ $\tiny\begin{pmatrix}1 & \m & \l\cr 1 & 0 & 0\cr 0 & 1 & 0\end{pmatrix}$}& \multirow{4}{*}{\footnotesize{$(\K^\times)^2$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$_2\tres_{\l,\mu}^{1,5}$} & \resizebox{2cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] \ar@/^.5pc/[rr] & &\bullet^{3} \ar@/^.4pc/[ll]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & 0 & \l\cr 1 & 0 & 0\cr \m & 1 & 0\end{pmatrix}$} &\multirow{4}{*}{\footnotesize{$(\K^\times)^2$}}\cr &&&\\[-0.2cm] &&&\\ \multirow{4}{*}{$\begin{matrix} _3\tres_{\l,\m}^{1,5} \end{matrix}$} & \resizebox{2cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll] \ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & 0 & \l\cr 1 & 0 & \mu\cr 0 & 1 & 0\end{pmatrix}$}& \multirow{4}{*}{\footnotesize{$(\K^\times)^2\cap D(\l-\m)$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$_4\tres_{\l,\m}^{1,5}$} & \resizebox{2cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \l & 0\cr 1 & 0 & \mu\cr 0 & 1 & 
0\end{pmatrix}$} &\multirow{4}{*}{\footnotesize{$(\K^\times)^2$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$\begin{matrix} _1\tres_{\l,\m,\delta}^{1,6} \end{matrix}$} & \resizebox{2cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll] \ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \l & \mu\cr 1 & 0 & \delta\cr 0 & 1 & 0\end{pmatrix}$}&\multirow{4}{*}{\footnotesize{$(\K^\times)^3\cap D(\mu-\delta)$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$\begin{matrix} _2\tres_{\l,\m,\delta}^{1,6} \end{matrix}$} & \resizebox{2cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] \ar@/^.1pc/[rr] & &\bullet^{3} \ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \l & 0\cr 1 & 0 & \m\cr \delta & 1 & 0\end{pmatrix}$}& \multirow{4}{*}{\footnotesize{$(\K^\times)^3\cap D(\l\delta-1)$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$_3\tres_{\l,\m,\delta}^{1,6}$}& \resizebox{2cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] \ar@/^.5pc/[rr] & &\bullet^{3} \ar@/^.4pc/[ll] \ar@/^.1pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \m & \delta\cr \l & 0 & 1\cr 1 & 0 & 0\end{pmatrix}$} &\multirow{4}{*}{\footnotesize{$(\K^\times)^3$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$\begin{matrix} \tres_{\l,\m,\delta,\nu}^{1,7} \end{matrix}$} & \resizebox{2cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1} \ar@(dl,ul) \ar@/^.4pc/[ur] \ar@/^.5pc/[rr] & &\bullet^{3} \ar@/^.4pc/[ll]\ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \l & \m\cr 1 & 0 & \delta\cr \nu & 1 & 0\end{pmatrix}$}& \multirow{4}{*}{$\frac{(\K^\times)^4\cap D(\mu+\l\delta\nu)}{S_1}$}\cr & & & \\ \hline \end{tabular} \caption{\footnotesize Simple three-dimensional algebras with $\dim(\D(A))=1$ and $\l$, $\delta$, $\mu$ and $\nu$ nonzero.
\\ \centering The acting group is $\KN$.}\label{l1} \end{table} \end{center} For the evolution algebras of type $\tres_{\l,\m,\delta,\nu,\xi}^{2,8}$, we obtain that $\tres_{\l,\m,\delta,\nu,\xi}^{2,8}\cong \tres_{\l',\m',\delta',\nu',\xi'}^{2,8} $ if and only if $[(\l,\m,\delta,\nu,\xi)]=[(\l',\m',\delta',\nu',\xi')]$ in $((\K^{\times})^5\cap D(\xi(\delta\m-1)-\nu(\m-\l)))/S_3$, where $M_3=\footnotesize\begin{pmatrix} 0 & \phantom{-}0 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0\\ 1 & \phantom{-}0 & 0 & 0 & 0\\ 0 & \phantom{-}2 & 0 & 0 & 1\\ 0 & \phantom{-}2 & 0 & 1 & 0 \end{pmatrix}$ and $S_3=\{M_3^k\,\colon\,k\in\Z\}=\{Id,M_3\}$ (isomorphic to $\Z_2$) with the action defined in Definition \ref{action1}. \subsubsection{$\boldsymbol{\dim(\D(A))=3}$} Then $e(A)=6,7,8,9$. Again, we can choose a natural basis for each type of evolution algebra so that its structure matrix is of the form given in Table \ref{l3}. As before, we could have chosen another natural basis with a different structure matrix (i.e. with a different distribution of $1$'s and parameters); however, the isomorphism conditions for the algebras presented in the table are valid for any field. For evolution algebras of type $_1\tres^{3,7}_{\l,\m,\delta,\nu}$ and $\tres^{3,8}_{\l,\m,\delta,\nu,\xi}$, the algebras are isomorphic if and only if the parameters coincide. For the evolution algebras of type $\tres_{\l,\m,\delta}^{3,6}$, we have $\tres_{\l,\m,\delta}^{3,6}\cong \tres_{\l',\m',\delta'}^{3,6} $ if and only if $[(\l,\m,\delta)]=[(\l',\m',\delta')]$ in $((\K^\times)^3\cap D(\l\m+\delta))/S_4$, where $M_4=\footnotesize\begin{pmatrix} 0 & 1 & -2 \\ 2 & 0 & \phantom{-}1 \\ 1 & 0 & \phantom{-}0 \end{pmatrix}$ and $S_4=\{M_4^k\,\colon\,k\in\Z\}=\{Id,M_4,M_4^2\}$ (isomorphic to $\Z_3$) with the action defined in Definition \ref{action1}. In this case, the Frobenius normal form is $F_{4}=\footnotesize\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$ in any field.
The moduli set is isomorphic to $((\K^{\times})^3\cap D(\l\m-\delta^3))/\Z_3$ with the action $F_{4}\cdot(\l,\m,\delta)=(\m,\delta,\l)$. For the evolution algebras of type $_2\tres_{\l,\m,\delta,\nu}^{3,7}$, we obtain that $_2\tres_{\l,\m,\delta,\nu}^{3,7}\cong\, _2\tres_{\l',\m',\delta',\nu'}^{3,7}$ if and only if $[(\l,\m,\delta,\nu)]=[(\l',\m',\delta',\nu')]$ in $((\K^{\times})^4\cap D(\lambda\mu\nu-\nu +\delta))/S_5$, where $M_5=\footnotesize\begin{pmatrix} 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & \phantom{-}1 \\ 1 & 2 & 0 & \phantom{-}0\\ 0 & 1 & 0 & \phantom{-}0 \end{pmatrix}$ and $S_5=\{M_5^k\,\colon\,k\in\Z\}=\{Id,M_5\}$ (isomorphic to $\Z_2$) with the action defined in Definition \ref{action1}. For the evolution algebras of type $\tres_{\l,\m,\delta,\nu,\xi,\gamma}^{3,9}$, we get that $\tres_{\l,\m,\delta,\nu,\xi,\gamma}^{3,9}\cong \tres_{\l',\m',\delta',\nu',\xi',\gamma'}^{3,9} $ if and only if there is an element $X$ in the subgroup generated by $M_6$ and $M_7$ such that $X\cdot (\l,\m,\delta,\nu,\xi,\gamma)=(\l',\m',\delta',\nu',\xi',\gamma')$ \medskip \noindent with $M_6=\footnotesize\begin{pmatrix} 0 & \phantom{-}0 & 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 1 & \phantom{-}0 & 0 & 0 & 0 & 0\\ 0 & \phantom{-}2 & 0 & 0 & 1 & 0\\ 0 & \phantom{-}2 & 0 & 1 & 0 & 0\\ 0 & \phantom{-}1 & 0 & 0 & 0 & 1 \end{pmatrix}$ and $M_{7}=\footnotesize\begin{pmatrix} \phantom{-}0 & 0 & 0 & 1 & 0 & -2 \\ -1 & 0 & 0 & 0 & 1 & -2 \\ \phantom{-}0 & 1 & 0 & 0 & 0 & \phantom{-}1\\ \phantom{-}2 & 0 & 0 & 0 & 0 & \phantom{-}1\\ \phantom{-}2 & 0 & 1 & 0 & 0 & \phantom{-}0\\ \phantom{-}1 & 0 & 0 & 0 & 0 & \phantom{-}0 \end{pmatrix}.$ The moduli set is isomorphic to ${((\K^\times)^6\cap D(\xi(\delta\mu-1)+\nu(\l-\mu)- \gamma(\delta\l-1)))}/{\mathbb{S}_3}$, where $\mathbb{S}_3$ is the symmetric group on three letters. In fact, this group is generated by the element $M_6$, which is of order $2$, and $M_7$, which is of order $3$. Moreover, $M_6M_7=M_7^2M_6$.
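The stated relations between $M_6$ and $M_7$ are easily confirmed by machine; the sketch below (the helper names are ours) also closes the set $\{M_6,M_7\}$ under multiplication and finds exactly six group elements, as expected for $\mathbb{S}_3$:

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][t] * b[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

M6 = [[0, 0, 1, 0, 0, 0],
      [0,-1, 0, 0, 0, 0],
      [1, 0, 0, 0, 0, 0],
      [0, 2, 0, 0, 1, 0],
      [0, 2, 0, 1, 0, 0],
      [0, 1, 0, 0, 0, 1]]
M7 = [[ 0, 0, 0, 1, 0,-2],
      [-1, 0, 0, 0, 1,-2],
      [ 0, 1, 0, 0, 0, 1],
      [ 2, 0, 0, 0, 0, 1],
      [ 2, 0, 1, 0, 0, 0],
      [ 1, 0, 0, 0, 0, 0]]
I6 = [[int(i == j) for j in range(6)] for i in range(6)]

# close {I6, M6, M7} under right multiplication to enumerate the whole group
group = [I6]
frontier = [I6]
while frontier:
    g = frontier.pop()
    for h in (M6, M7):
        gh = matmul(g, h)
        if gh not in group:
            group.append(gh)
            frontier.append(gh)
```

One checks $M_6^2=Id$, $M_7^3=Id$, $M_6M_7=M_7^2M_6$ and $|{\rm group}|=6$, which pins down the group as $\mathbb{S}_3$.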
This is the only case where the acting group is nonabelian. \medskip In conclusion, we can summarize the classification of three-dimensional simple evolution algebras over arbitrary fields in the theorem below. \begin{theorem}\label{teorema2} Let $A$ be a three-dimensional simple evolution $\K$-algebra. We distinguish several cases: \begin{enumerate}[\rm (i)] \item If $\dim(\D(A))=0$, then $A$ is isomorphic to one and only one of the type evolution algebras $\tres_{\l}^{0,3}$, $\tres_{\l,\m}^{0,4}$, $\tres_{\l,\m,\gamma}^{0,5}$ and $\tres_{\l,\m,\gamma,\delta}^{0,6}$. \item If $\dim(\D(A))=1$, then $A$ is isomorphic to one and only one of the type evolution algebras $\tres_{\l}^{1,4}$, $_i\tres_{\l,\m}^{1,5}$ with $i\in \{1,2,3,4\}$, $_j\tres_{\l,\m,\delta}^{1,6}$ for $j\in \{1,2,3\}$ and $\tres_{\l,\m,\delta, \nu}^{1,7}$. \item If $\dim(\D(A))=2$, then $A$ is isomorphic to one and only one of the type evolution algebras $\tres_{\l,\mu}^{2,5}$, $_i\tres_{\l,\m,\delta}^{2,6}$ with $i\in \{1,2,3,4\}$, $_j\tres_{\l,\m,\delta,\nu}^{2,7}$ for $j\in \{1,2,3\}$ and $\tres_{\l,\m,\delta,\nu, \xi}^{2,8}$. \item If $\dim(\D(A))=3$, then $A$ is isomorphic to one and only one of the type evolution algebras $\tres_{\l,\m,\delta}^{3,6}$, $_i\tres_{\l,\m,\delta,\nu}^{3,7}$ with $i\in\{1,2\}$, $\tres_{\l,\m,\delta,\nu,\xi}^{3,8}$ and $\tres_{\l,\m,\delta,\nu,\xi,\gamma}^{3,9}$.
\end{enumerate} \end{theorem} \begin{center} \begin{table}[H] \renewcommand{\arraystretch}{0.8} \begin{tabular}{|c|C{2cm}|C{2.5cm}|c|} \hline &&& \\ Type algebra & Graph & Structure matrix & Orbit set \cr &&& \\ \hline &&& \\[-0.2cm] &&& \\[-0.2cm] &&& \\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*} {$\tres_{\l,\m}^{2,5}$} & \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@(ur,ul) \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & 0 & \mu\cr \lambda & 1 & 0\cr 0 & 1 & 0\end{pmatrix}$} & \multirow{4}{*}{\footnotesize{$(\K^\times)^2$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \hline &&&\\[-0.2cm] &&& \\[-0.2cm] &&& \\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$\begin{matrix} _1\tres_{\l,\mu,\delta}^{2,6} \end{matrix}$} & \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@(ur,ul) \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] \ar@/^.5pc/[rr] & &\bullet^{3} \ar@/^.4pc/[ll]}} & \multirow{4}{*}{ $\tiny\begin{pmatrix}1 & 0 & \delta\cr \lambda & 1 & 0\cr \mu & 1 & 0\end{pmatrix}$} & \multirow{4}{*}{\footnotesize{$(\K^\times)^3\cap D(\l-\m)$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \hline &&&\\[-0.2cm] &&& \\[-0.2cm] &&& \\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$_2\tres_{\l,\mu,\delta}^{2,6}$} & \centerline{ \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@(ur,ul) \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll]}}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \mu & \delta\cr \l & 1 & 0\cr 0 & 1 & 0\end{pmatrix}$} & \multirow{4}{*}{\footnotesize{$(\K^\times)^3$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \hline &&&\\[-0.2cm] &&& \\[-0.2cm] &&& \\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{ $_3\tres_{\l,\m,\delta}^{2,6}$} & \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@(ur,ul) \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \mu & 0\cr \l & 1 & 
\delta\cr 0 & 1 & 0\end{pmatrix}$} & \multirow{4}{*}{\footnotesize{$(\K^\times)^3$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \hline &&&\\[-0.2cm] &&& \\[-0.2cm] &&& \\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$\begin{matrix} _4\tres_{\l,\m,\delta}^{2,6} \end{matrix}$} & \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2}\ar@(ur,ul) \ar@/^.4pc/[dr] & \cr \bullet^{1} \ar@(dl,ul) \ar@/^.5pc/[rr] & &\bullet^{3} \ar@/^.4pc/[ll]\ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & 0 & \mu\cr 0 & 1 & \delta\cr \l & 1 & 0\end{pmatrix}$} & \multirow{4}{*}{$\frac{(\K^\times)^3\cap D(\l\m+\delta)}{\Z_2}$}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \hline &&&\\[-0.2cm] && &\\[-0.2cm] && &\\[-0.2cm] &&&\\[-0.2cm] \multirow{5}{*}{$\begin{matrix} _1\tres_{\l,\m,\delta,\nu}^{2,7} \end{matrix}$} & \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@(ur,ul) \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] & &\bullet^{3} \ar@/^.4pc/[ll] \ar@/^.4pc/[ul]}} & \multirow{5}{*}{$\tiny\begin{pmatrix}1 & \mu & \delta\cr \l & 1 & \nu\cr 0 & 1 & 0\end{pmatrix}$} & \multirow{5}{*}{\footnotesize{$(\K^\times)^4\cap D(\l\delta-\nu)$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \hline &&&\\[-0.2cm] && &\\[-0.2cm] && &\\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$\begin{matrix} _2\tres_{\l,\m,\delta,\nu}^{2,7} \end{matrix}$} & \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@(ur,ul) \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] \ar@/^.1pc/[rr] & &\bullet^{3} \ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \delta & 0\cr \l & 1 & \nu\cr \m & 1 & 0\end{pmatrix}$} & \multirow{4}{*}{\footnotesize{$(\K^\times)^4\cap D(\m\delta\nu-\nu)$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \hline &&&\\[-0.2cm] && &\\[-0.2cm] &&&\\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$\begin{matrix} _3\tres_{\l,\m,\delta,\nu}^{2,7} \end{matrix}$}& \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2}\ar@(ur,ul) \ar@/^.4pc/[dr] & \cr \bullet^{1} \ar@(dl,ul) \ar@/^.4pc/[ur] \ar@/^.5pc/[rr] & &\bullet^{3} 
\ar@/^.4pc/[ll]\ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & 0 & \delta\cr \l & 1 & \nu\cr \m & 1 & 0\end{pmatrix}$} & \multirow{4}{*}{\footnotesize{$(\K^\times)^4\cap D(\delta(\l-\m)-\nu)$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \hline &&&\\[-0.2cm] &&& \\[-0.2cm] &&& \\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$\begin{matrix} \tres_{\l,\m,\delta,\nu,\xi}^{2,8} \end{matrix}$} & \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2}\ar@(ur,ul) \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1} \ar@(dl,ul) \ar@/^.4pc/[ur] \ar@/^.5pc/[rr] & &\bullet^{3} \ar@/^.4pc/[ll]\ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \delta & \nu\cr \l & 1 & \xi\cr \mu & 1 & 0\end{pmatrix}$}& \multirow{4}{*}{$\frac{(\K^\times)^5\cap D(\xi(\delta\m-1)-\nu(\m-\l))}{\Z_2}$}\cr & & &\\ \hline \end{tabular} \caption{\footnotesize Simple three-dimensional algebras with $\dim(\D(A))=2$ and $\l$, $\delta$, $\mu$,$\nu$ and $\xi$ nonzero. \\ \centering The acting group is $\KN$.}\label{l2} \end{table} \end{center} \begin{center} \begin{table}[h] \renewcommand{\arraystretch}{0.8} \begin{tabular}{|c|C{2cm}|C{2cm}|c|} \hline &&& \\ Type algebra & Graph & Structure matrix & Orbit set \cr &&& \\ \hline &&& \\[-0.2cm] &&& \\[-0.2cm] &&& \\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*} {$\begin{matrix} \tres_{\l,\m,\delta}^{3,6} \end{matrix}$} & \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@(ur,ul) \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] & &\bullet^{3}\ar@(dr,ur) \ar@/^.4pc/[ll]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & 0 & \mu\cr \lambda & 1 & 0\cr 0 & 1 & \delta\end{pmatrix}$} & \multirow{4}{*}{$\frac{(\K^\times)^3\cap D(\l\m+\delta)}{\Z_3}$}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \hline &&&\\[-0.2cm] &&& \\[-0.2cm] &&& \\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$\begin{matrix} _1\tres_{\l,\mu,\delta,\nu}^{3,7} \end{matrix}$} & \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@(ur,ul) \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] \ar@/^.5pc/[rr] & 
&\bullet^{3}\ar@(dr,ur) \ar@/^.4pc/[ll]}} & \multirow{4}{*}{ $\tiny\begin{pmatrix}1 & 0 & \delta\cr \lambda & 1 & 0\cr \mu & 1 & \nu\end{pmatrix}$} & \multirow{4}{*}{\footnotesize{$(\K^\times)^4\cap D(\m-\l-1)$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \hline &&&\\[-0.2cm] &&& \\[-0.2cm] &&& \\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{ $\begin{matrix} _2\tres_{\l,\m,\delta,\nu}^{3,7} \end{matrix}$} & \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@(ur,ul) \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] & &\bullet^{3}\ar@(dr,ur) \ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \mu & 0\cr \l & 1 & \delta\cr 0 & 1 & \nu\end{pmatrix}$} & \multirow{4}{*}{$\frac{(\K^{\times})^4\cap D(\l\m\nu-\nu+\delta)}{\Z_2}$}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \hline &&&\\[-0.2cm] && &\\[-0.2cm] && &\\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$\begin{matrix} \tres_{\l,\m,\delta,\nu,\xi}^{3,8} \end{matrix}$} & \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2} \ar@(ur,ul) \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1}\ar@(dl,ul)\ar@/^.4pc/[ur] \ar@/^.1pc/[rr] & &\bullet^{3}\ar@(dr,ur) \ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \delta & 0\cr \l & 1 & \xi\cr \m & 1 & \nu\end{pmatrix}$} & \multirow{4}{*}{\footnotesize{$(\K^\times)^5\cap D(\xi(\delta\mu-1)-\nu(\delta\l-1))$}}\cr &&&\\[-0.2cm] &&&\\[-0.2cm] \hline &&&\\[-0.2cm] &&& \\[-0.2cm] &&& \\[-0.2cm] &&&\\[-0.2cm] \multirow{4}{*}{$\begin{matrix} \tres_{\l,\m,\delta,\nu,\xi,\gamma}^{3,9} \end{matrix}$} & \resizebox{1.7cm}{.2cm}{ \xymatrix{ & \bullet^{2}\ar@(ur,ul) \ar@/^.5pc/[dl] \ar@/^.4pc/[dr] & \cr \bullet^{1} \ar@(dl,ul) \ar@/^.4pc/[ur] \ar@/^.5pc/[rr] & &\bullet^{3}\ar@(dr,ur) \ar@/^.4pc/[ll]\ar@/^.4pc/[ul]}} & \multirow{4}{*}{$\tiny\begin{pmatrix}1 & \delta & \nu\cr \l & 1 & \xi\cr \mu & 1 & \gamma\end{pmatrix}$}& \multirow{4}{*}{$\frac{(\K^\times)^6\cap D(\xi(\delta\mu-1)+\nu(\l-\mu)- \gamma(\delta\l-1))}{\mathbb{S}_3}$}\cr && & \\ \hline \end{tabular} \caption{\footnotesize Simple 
three-dimensional algebras with $\dim(\D(A))=3$ and $\l$, $\delta$, $\mu$, $\nu$, $\xi$ and $\gamma$ nonzero. \\ \centering The acting group is $\KN$.}\label{l3} \end{table} \end{center} \vspace*{-1cm} \section{Simple and semisimple evolution algebras arising as the tensor product of two simple ones} Let $E=(E^0,E^1,r_E,s_E)$ and $F=(F^0,F^1,r_F,s_F)$ be two directed graphs. We recall that the \textit{categorical product} of $E$ and $F$ is the directed graph defined by $E \times F := (E^0 \times F^0, E^1 \times F^1, r, s)$ where $s(f,g) =(s(f), s(g))$ and $r(f,g)=(r(f), r(g))$ for any $(f,g)\in E^1 \times F^1$. We know, by \cite{CMMT}, that the tensor product of two evolution algebras is an evolution algebra as well. Moreover, if $E=(E^0,E^1,r_E,s_E)$ and $F=(F^0, F^1,r_F,s_F)$ are the directed graphs associated to the evolution algebras $A_1$ and $A_2$, respectively, then the directed graph associated to $A_1 \otimes A_2$ is the graph $E \times F$. Following \cite{McANDREW}, let $E=(E^0,E^1,r_E,s_E)$ be a directed graph and $u, v\in E^0$. We say that $u$ and $v$ are \textit{strongly connected} if either $u=v$ or there exist a path from $u$ to $v$ and a path from $v$ to $u$. This is an equivalence relation, and we define a \textit{component} $C$ of $E$ to be a subgraph whose vertices, $C^0$, are the vertices in an equivalence class with respect to this relation and whose edges are those $f \in E^1$ with $s(f), r(f) \in C^0$. One can find, in \cite{McANDREW}, the following result concerning strongly connected directed graphs (involving the categorical product of graphs). \begin{theorem}[\cite{McANDREW}, Theorem 1.(ii)]\label{thm:strong} Let $G_1$ and $G_2$ be strongly connected directed graphs. Let $$d_1=d(G_1)=\gcd\{\text{\rm lengths of the closed paths in } G_1\},$$ $$d_2=d(G_2)=\gcd\{\text{\rm lengths of the closed paths in } G_2\}$$ and $$d_3=\gcd(d_1,d_2).$$ Then the number of components of $G_1\times G_2$ is $d_3$.
\end{theorem} \begin{remark}\label{remark:strongly}\rm The previous theorem implies that if $E$ and $F$ are strongly connected graphs, then the connected components of $E\times F$ are strongly connected. \end{remark} Thus we obtain a result that improves \cite[item (i) Corollary 4.3]{CMMT}. \begin{theorem}\label{cor:strong} Let $A_i$ ($i=1,2$) be finite-dimensional simple evolution algebras over a field $\K$ whose associated directed graphs are $E$ and $F$, respectively. We have: \begin{enumerate}[\rm (i)] \item The evolution algebra $A_1\otimes A_2$ is simple if and only if $1=\gcd(d_1,d_2)$ where $d_1$ and $d_2$ are as in Theorem \ref{thm:strong}. In particular, if $E$ and $F$ have closed paths of coprime length, then $A_1\otimes A_2$ is simple. \item The evolution algebra $A_1\otimes A_2$ is semisimple (a direct sum of simple evolution algebras). \end{enumerate} \end{theorem} \begin{proof} For item (i), if $A_1\otimes A_2$ is simple, then the graph $E\times F$ has one component, hence $\gcd(d_1,d_2)=d_3=1$. Conversely, if $1=d_3=\gcd(d_1,d_2)$ then $E\times F$ has one component which is $(E\times F)^0$. Recall that any component is strongly connected (Remark \ref{remark:strongly}); moreover, the algebra $A_1\otimes A_2$ is perfect since both $A_1$ and $A_2$ are perfect (being simple) by \cite[item (i) Proposition 2.14]{CMMT}. So, the graph $E\times F$ is strongly connected and $A_1\otimes A_2$ is perfect. Consequently, $A_1\otimes A_2$ is simple by \cite[Proposition 2.7]{CKS1}. The second assertion is straightforward. For item (ii), each of the $d_3$ components gives rise to a simple evolution algebra. \end{proof} \begin{corollary}\label{patatas} Let $A_1$ and $A_2$ be two finite-dimensional simple evolution algebras over a field $\K$. If $\dim(\D({A_i})) > 0$ for some $i \in \{1,2\}$, then $A_1 \otimes A_2$ is a simple evolution algebra. \end{corollary} \begin{proof} Since $\dim(\D({A_i})) > 0$, for some $i \in \{1,2\}$, we get that $A_1$ or $A_2$ has at least one loop.
This means that $d_1$ or $d_2$ is $1$, so $\gcd(d_1,d_2)=1$ and the result follows from item (i) of Theorem \ref{cor:strong}. \end{proof} The next issue is how to construct simple tensorially decomposable evolution algebras. \begin{remark}\label{nuevas}\rm Note that item (i) of Theorem \ref{cor:strong} and Corollary \ref{patatas} allow us to construct more complex models of simple evolution algebras. For instance, taking as model the simple algebra $\tres^{1,4}_\l$, whose structure matrix is $$\tiny\begin{pmatrix}1 & 0 &\l\\ 1 & 0 & 0\\ 0 & 1 & 0\end{pmatrix},$$ and replacing each $1$ in the matrix with an $n\times n$ structure matrix $M$ of another simple evolution algebra (and replacing each $\l$ with $\l M$), we get the structure matrix $\tiny\begin{pmatrix}M & 0 &\l M\\ M & 0 & 0\\ 0 & M & 0\end{pmatrix}$, which represents a $3n$-dimensional simple evolution algebra. So, we have many models of $3n$- and $2n$-dimensional simple algebras from our classification theorems. Observe, however, that not every simple tensorially decomposable evolution algebra requires the factors to have all closed paths of coprime length. We illustrate this fact.
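Before the example, a brief computational aside: in terms of structure matrices, the block substitution described above is precisely the Kronecker product of the $3\times 3$ matrix with $M$. The following Python sketch is purely illustrative; the sample values of $\l$ and of the $2\times 2$ matrix $M$ are ours and hypothetical, not taken from the classification tables.

```python
def kron(p, m):
    """Kronecker product of square matrices given as lists of lists:
    each entry p[i][j] of p is replaced by the block p[i][j] * m."""
    n, q = len(p), len(m)
    return [[p[i // q][j // q] * m[i % q][j % q] for j in range(n * q)]
            for i in range(n * q)]

lam = 2                      # hypothetical sample value for lambda
P = [[1, 0, lam],            # structure matrix of the model algebra
     [1, 0, 0],
     [0, 1, 0]]
M = [[1, 1],                 # sample 2x2 structure matrix of a second algebra
     [1, 0]]

# 6x6 structure matrix: each 1 of P becomes the block M, lam becomes lam*M
A = kron(P, M)
```

Each $1$ of the scalar matrix becomes a copy of $M$ and each $\l$ becomes $\l M$, which is exactly the block pattern displayed above.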
Let $A_1$ and $A_2$ be two simple finite-dimensional evolution algebras over a field $\K$ with associated directed graphs $E$ and $F$ as follows: \begin{figure}[h] \centering \begin{subfigure}[h]{0.45\textwidth} \centering $\resizebox{.6\textwidth}{!}{\xymatrix{ & \bullet^{1}\ar@/^.2pc/[rr] & & \bullet^{2}\ar@/^.2pc/[dr] \ar@/^.2pc/[ddll] &\cr \bullet^{6} \ar@/^.2pc/[ur] & & & &\bullet^{3} \ar@/^.2pc/[dl]\cr & \bullet^{5} \ar@/^.2pc/[ul] & & \bullet^{4} \ar@/^.2pc/[ll] & } } $ \caption{\footnotesize Directed graph $E$ associated to the evolution algebra $A_1$.} \end{subfigure} \begin{subfigure}[h]{0.5\textwidth} \centering $ \resizebox{0.99\textwidth}{!}{\xymatrix{\bullet^{1} \ar@/^.2pc/[rr] & & \bullet^{2}\ar@/^.2pc/[rr] & & \bullet^{3}\ar@/^.2pc/[rr] & & \bullet^{4}\ar@/^.2pc/[rr] & & \bullet^{5}\ar@/^.2pc/[dd] \cr &&&&&&&& \cr \bullet^{9} \ar@/^.2pc/[uu]& & & & \bullet^{8} \ar@/^.1pc/[uu]\ar@/^.2pc/[llll] & & \bullet^{7}\ar@/^.2pc/[ll] & & \bullet^{6} \ar@/^.2pc/[ll] \cr}}$ \caption{\footnotesize Directed graph $F$ associated to the evolution algebra $A_2$.} \end{subfigure} \end{figure} In the notation of Theorem \ref{thm:strong}, $d_1=\gcd\{4,6\}=2$ and $d_2=\gcd\{6,9\}=3$. Hence $d_3=\gcd\{d_1,d_2\}=1$, so $E\times F$ has one strongly connected component and $A_1\otimes A_2$ is simple. $$\includegraphics[width=8cm]{grafo2}$$ \begin{center} \footnotesize {\rm (3)} Directed graph $E\times F$ associated to the evolution algebra $A_1\otimes A_2.$ \end{center} \end{remark} Recall the definitions of tensorially decomposable and tensorially indecomposable $\K$-algebras. \begin{definition}[\cite{CMMT}, Definition 2.9] \rm We say that a $\K$-algebra $A$ is \textit{tensorially decomposable} if it is isomorphic to $A_1\otimes A_2$ where $A_1$ and $A_2$ are $\K$-algebras with $\dim(A_1),\dim(A_2)> 1$. Otherwise we say that $A$ is \textit{tensorially indecomposable}.
\end{definition} One of the motivations of the following theorem is to decide whether one can construct simple evolution algebras arising as quotients of tensor products of other evolution algebras. Here we prove that if the ideal realizing the quotient is $1$-dimensional, then the quotient algebra is not simple. \begin{theorem}\label{cociente} Let $A$ be a finite-dimensional evolution algebra over a field $\K$ with natural basis $B$. \begin{enumerate}[\rm (i)] \item If $I\triangleleft A$ is an ideal with $I=\K u$ and $|\Supp_B(u)|> 1$, then $A/I$ is not simple. \item If $A= A_1 \otimes A_2$ is tensorially decomposable and $0\ne I\triangleleft A$ is an ideal with $I=\K u$, then $A/I$ is not simple. \end{enumerate} \end{theorem} \begin{proof} For (i), write $u=\sum_h \l_h e_h$ where $B=\{e_h\}_{h=1}^n$ is a natural basis of $A$. We may assume that $\l_1,\l_2\ne 0$. Then $ue_i=\l_i e_i^2\in I$ for $i=1,2$, so $e_1^2,\ e_2^2\in I$; thus $A^2\subset I+\sum_{h=3}^n \K e_h^2$ and $\dim(A^2)\le n-1$. Since $\dim[(A^2+I)/I]=\dim(A^2/A^2\cap I)\le n-2$ we have a nonzero proper ideal $(A^2+I)/I\triangleleft A/I$. For (ii), suppose that $B_1=\{a_i\}_{i=1}^n$, $B_2=\{b_j\}_{j=1}^q$ are natural bases of $A_1$ and $A_2$ respectively, and $B_1\otimes B_2=\{e_h\}_{h=1}^{n q}$ is a natural basis of $A=A_1\otimes A_2$. So, $u=\sum_{h=1}^{nq} \lambda_h e_h$. If $|\Supp_B(u)|>1$ we apply item (i) and if $|\Supp_B(u)|=1$ then $u=\lambda a_1\otimes b_1$, $\lambda\neq 0$ (so we may assume without loss of generality $\l=1$). Since $I=\K u$, we have either $u^2=0$ or $u^2\in \K^{\times} u$. In the latter case, we can take $u$ to be idempotent (re-scaling if necessary). \begin{enumerate} \item In the case $u^2=u$ we have $ a_1\otimes b_1=u= u^2=a_1^2\otimes b_1^2=\sum_{i=1}^n\omega_{i1}a_i\otimes \sum_{j=1}^q\sigma_{j1}b_j=\sum_{i,j}\omega_{i1}\sigma_{j1}a_i\otimes b_j$. So $\omega_{11}\sigma_{11}\ne 0$ and $\omega_{i1}\sigma_{j1}=0$ if $(i,j)\neq (1,1)$.
Now, if $\omega_{i1}\neq 0$ for some $i\neq 1$, then $\sigma_{j1}=0$ for all $j\neq 1$. Hence $b_1^2=\sigma_{11}b_1$. Thus $A_2$ has a $1$-dimensional ideal $J=\K b_1$ and therefore $A$ has an ideal $A_1\otimes J$ with $\dim(A_1\otimes J)=\dim(A_1)$. Then $A/I$ has the ideal $(A_1\otimes J+I)/I$ which is nonzero and proper. If, on the contrary, $\omega_{i1}=0$ for all $i\neq 1$, then $a_1^2=\omega_{11}a_1$, so $A_1$ has the $1$-dimensional ideal $\K a_1$ and the same argument applies with the roles of $A_1$ and $A_2$ exchanged. \item If $u^2=0$ we deduce, as before, $\omega_{i1}\sigma_{j1}=0$ for any $i$ and $j$. Then, if for instance $\omega_{i1}\ne 0$ for some $i$, we have $\sigma_{j1}=0$ for every $j$, so $A_2$ has a $1$-dimensional ideal and we conclude as above. \end{enumerate} \end{proof} \section*{Acknowledgments} The four authors are supported by the Junta de Andaluc\'{\i}a through projects UMA18-FEDERJA-119 and FQM-336 and by the Spanish Ministerio de Ciencia e Innovaci\'on through project PID2019-104236GB-I00, all of them with FEDER funds. {\bf Data Availability Statement:} The authors confirm that the data supporting the findings of this study are available within the article. \begin{thebibliography}{99} \bibitem{boudi20} Nadia Boudi, Yolanda Cabrera Casado and Mercedes Siles Molina. {Natural families in evolution algebras.} \textit{Publicacions Matemàtiques.} \textbf{66} (1) (2022). \bibitem{YC} Yolanda Cabrera Casado. {Evolution algebras}. \textit{ Doctoral dissertation. Universidad de M\'alaga (2016)}. http://hdl.handle.net/10630/14175. \bibitem{CCGMM} Yolanda Cabrera Casado, Maria Inez Cardoso Gon\c calves, Daniel Gon\c calvez, Dolores Mart\'in Barquero and C\'andido Mart\'in Gonz\'alez. {Chains in evolution algebras.} \textit{Linear Algebra Appl.} \textbf{622} (2021) 104-149. \bibitem{CKS1} Yolanda Cabrera Casado, M\"uge Kanuni and Mercedes Siles Molina. {Basic ideals in evolution algebras}. \textit{Linear Algebra Appl.} \textbf{570} (2019) 148-180. \bibitem{CMMT} Yolanda Cabrera Casado, Dolores Martín Barquero, Cándido Martín González and Alicia Tocino. {Tensor product of evolution algebras}. In Press. \textit{Mediterranean Journal of Mathematics}.
ArXiv:2111.06114 (2021). \bibitem{YE} Yolanda Cabrera Casado and Elkin Quintero Vanegas. {Absorption Radical of an evolution algebra}. {Preprint.} \bibitem{CSV2} Yolanda Cabrera Casado, Mercedes Siles Molina and M. Victoria Velasco. Classification of three dimensional evolution algebras. \textit{Linear Algebra Appl.} \textbf{524} (2017) 68-108. \bibitem{ceballos} Manuel Ceballos González, Raúl M. Falcon Ganfornina, Juan Nuñez Valdes and Ángel F. Tenorio Villalón. {A historical perspective of Tian's evolution algebras}. In Press. \textit{Expositiones Mathematicae.} (2021). \url{https://doi.org/10.1016/j.exmath.2021.11.004} \bibitem{BMV} M. Eugenia Celorrio and M. Victoria Velasco. {Classifying Evolution Algebras of Dimensions Two and Three.} \textit{Mathematics.} \textbf{7} (12) (2019). \bibitem{EL1} Alberto Elduque and Alicia Labra. {Evolution algebras and graphs}. \textit{J. Algebra Appl.} \textbf{14} (7) (2015), 1550103, 10 pp. \bibitem{IM} A.N. Imomkulov. {Classification of a family of three dimensional real evolution algebras.} \textit{TWMS J. Pure Appl. Math.} \textbf{10} (2) (2019) 225-238. \bibitem{McANDREW} M. H. Mc Andrew. On the product of directed graphs. \textit{Proceedings of the American Mathematical Society} Vol. 14, No. 4 (Aug., 1963), 600-606. \end{thebibliography} \end{document}
2206.13877v2
http://arxiv.org/abs/2206.13877v2
Pattern avoiding alternating involutions
\documentclass[a4paper,12pt]{article} \usepackage[latin1]{inputenc} \usepackage[english]{babel} \usepackage{amssymb} \usepackage{amsmath} \usepackage{latexsym} \usepackage{amsthm} \usepackage[pdftex]{graphicx} \usepackage{pgf, tikz} \usepackage{hyperref,url} \usepackage{pgfplots} \usepackage{mathtools} \pgfplotsset{width=6.6cm,compat=1.7} \usepackage{array} \usepackage{fancyhdr} \usepackage{color} \usetikzlibrary{arrows} \usetikzlibrary{cd} \usetikzlibrary{decorations.pathreplacing} \DeclareMathOperator*{\diag}{diag} \DeclareMathOperator*{\std}{std} \DeclareMathOperator*{\Des}{Des} \DeclareMathOperator*{\des}{des} \newcommand{\seqnum}[1]{\href{http://oeis.org/#1}{\underline{#1}}} \newcommand{\todo}[1]{\vspace{2 mm}\par\noindent \marginpar{\textsc{ToDo}}\framebox{\begin{minipage}[c]{0.95 \textwidth} \tt #1 \end{minipage}}\vspace{2 mm}\par} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{conj}[thm]{Conjecture} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{example}[thm]{Example} \newtheorem{examples}[thm]{Examples} \newtheorem{axiom}[thm]{Axiom} \newtheorem{notation}[thm]{Notation} \theoremstyle{remark} \newtheorem{oss}[thm]{Observation} \hyphenation{previous} \title{Pattern avoiding alternating involutions} \date{} \author{Marilena Barnabei\\ Dipartimento di Matematica \\ Universit\`a di Bologna, 40126, ITALY \\ \texttt{[email protected]}\and Flavio Bonetti \\ P.A.M. \\ Universit\`a di Bologna, 40126, ITALY \\ \texttt{[email protected]}\and Niccol\`o Castronuovo \\ Liceo ``A. Einstein'', Rimini, 47923, ITALY \\ \texttt{[email protected]}\and Matteo Silimbani \thanks{corresponding author} \\ Istituto Comprensivo ``E. 
Rosetti'', Forlimpopoli, 47034, ITALY \\ \texttt{[email protected]}} \begin{document} \maketitle \begin{abstract} We enumerate and characterize some classes of alternating and reverse alternating involutions avoiding a single pattern of length three or four. If on one hand the case of patterns of length three is trivial, on the other hand the length four case is more challenging and involves sequences of combinatorial interest, such as Motzkin and Fibonacci numbers. \end{abstract} 2020 msc: 05A05, 05A15 (primary), 05A19 (secondary). Keywords: permutation pattern, involution, alternating permutation. \section{Introduction} A permutation $\pi$ avoids a pattern $\tau$ whenever $\pi$ does not contain any subsequence order-isomorphic to $\tau.$ The theory of permutation patterns goes back to the work of Knuth \cite{Kn2}, who, in the 1970's, introduced the definition of pattern avoidance in connection with the stack sorting problem. The first systematic study of these objects appears in the paper by Simion and Schmidt (\cite{Si}). Nowadays, the theory is very rich and widely expanded, with hundreds of papers published in the last decades (see e.g. \cite{Ki} and references therein). More recently, permutation patterns have been studied over particular subsets of the symmetric group. In particular, pattern avoidance has been studied over involutions (see e.g. \cite{Ba6,Bona,Jaggard,Ki}) and over alternating permutations (see e.g. \cite{Bonaf,ChenChenZhou,Gowravaram,Ki,Lewis,Lewis2,Lewis3,XuYan,Yan}), i.e., permutations $\pi=\pi_1\ldots \pi_n$ such that $\pi_i<\pi_{i+1}$ if and only if $i$ is odd. The enumeration of alternating involutions is due to Stanley (see \cite{Stanleyalt1} and also his survey on alternating permutations \cite{Stanleyalt2}). However, to the best of our knowledge, pattern avoiding alternating involutions have not been studied so far. In this paper we consider alternating involutions that avoid some patterns of length three or four.
If on one hand the case of patterns of length three is trivial, on the other hand the length four case is more challenging and involves sequences of combinatorial interest, such as Motzkin and Fibonacci numbers. \section{Preliminaries} \subsection{Permutations} Let $S_n$ be the symmetric group over the symbols $\{1,2,\ldots,n\}.$ We will often write permutations in $S_n$ in one-line notation as $\pi=\pi_1\ldots \pi_n.$ An \textit{involution} is a permutation $\pi$ such that $\pi=\pi^{-1}.$ The \textit{reverse} of a permutation $\pi=\pi_1\pi_2\ldots \pi_n$ is $\pi^r=\pi_n\pi_{n-1}\ldots \pi_1$ and the \textit{complement} of $\pi$ is $\pi^c=(n+1-\pi_1)(n+1-\pi_2)\ldots (n+1-\pi_n).$ The \textit{reverse-complement} of $\pi$ is $\pi^{rc}=(\pi^r)^c=(\pi^c)^r.$ Notice that $(\pi^{-1})^{rc}=(\pi^{rc})^{-1}.$ In particular, the reverse-complement of an involution is an involution. A \textit{descent} in a permutation $\pi$ is an index $i$ such that $\pi_i>\pi_{i+1}.$ Denote by $\Des(\pi)$ the set of descents of $\pi$ and by $\des(\pi)$ its cardinality. Recall that a \textit{left-to-right minimum} (ltr minimum from now on) of a permutation $\pi$ is a value $\pi(i)$ such that $\pi(j)>\pi(i)$ for every $j<i.$ A \textit{left-to-right maximum} (ltr maximum from now on) is a value $\pi(i)$ such that $\pi(j)<\pi(i)$ for every $j<i.$ The definition of right-to-left (rtl) minimum and maximum is analogous. \begin{lem}\label{mininv} In any involution $\pi$ the ltr minima form an involution themselves, i.e., $m$ is a ltr minimum at position $i$ if and only if $i$ is a ltr minimum at position $m.$ The same is true for rtl maxima. \end{lem} \proof We will consider only the case of ltr minima, since the other case is analogous. Denote by $m_1, m_2, \ldots, m_k$ the left-to-right minima of $\pi$, and by $i_1,i_2,\ldots,i_k$ their respective positions.
We want to show that $$\{m_1, m_2, \ldots, m_k\}=\{i_1,i_2,\ldots,i_k\}.$$ Suppose on the contrary that there exists an index $j$ such that $i_j$ is not the value of a left-to-right minimum. Since $\pi$ is an involution, the value $i_j$ occupies position $m_j.$ Then, to the left of $i_j$ in $\pi$ there is a symbol $a$ less than $i_j.$ In other terms, there exist two integers $a, h$ such that $\pi(h)=a,$ $a<i_j$ and $h<m_j.$ Again because $\pi$ is an involution, $\pi(a)=h,$ so the value $h$ appears at position $a<i_j,$ that is, to the left of $m_j.$ In this situation, $m_j$ is not a left-to-right minimum, since it is preceded by $h$ and $h<m_j,$ a contradiction. \endproof \begin{example} Consider the involution $\pi=7\,9\,4\,3\,5\,6\,1\,10\,2\,8.$ The ltr minima of $\pi$ are 7,4,3,1 and they form an involution themselves. The same is true for the rtl maxima 8 and 10. \end{example} Given a word $w=w_1\ldots w_j$ whose letters are distinct numbers, the \textit{standardization} of $w$ is the unique permutation $\pi$ in $S_j$ order-isomorphic to $w.$ If two words $w$ and $u$ have the same standardization we write $u\sim w.$ The \textit{decomposition into connected components} of a permutation $\pi$ is the finest way to write $\pi$ as $\pi=w_1w_2\ldots w_k,$ where each $w_i$ is a permutation of the symbols from $|w_1|+|w_2|+\ldots+|w_{i-1}|+1$ to $|w_1|+|w_2|+\ldots+|w_{i-1}|+|w_i|.$ Each $w_i$ is called \textit{a connected component} of $\pi.$ A permutation is said to be \textit{connected} if it consists of a single connected component. \begin{example} The decomposition into connected components of the permutation $\pi=34215786$ is $\pi=w_1w_2w_3,$ where $w_1=3421,$ $w_2=5,$ $w_3=786.$ \end{example} \subsection{Alternating permutations} A permutation $\pi=\pi_1\ldots \pi_n$ is said to be \textit{alternating} if $\pi_i<\pi_{i+1}$ if and only if $i$ is odd and \textit{reverse alternating} if $\pi_i>\pi_{i+1}$ if and only if $i$ is odd. Equivalently, a permutation $\pi$ is alternating whenever $\Des(\pi)=\{2,4,6,\ldots\}.$ \begin{example} The permutation $\pi=4615273$ is alternating, while $\sigma=5372614$ is reverse alternating.
\end{example} Denote by $S_n$ ($I_n,$ $A_n,$ $RA_n,$ $AI_n$ and $RAI_n,$ respectively) the set of permutations (involutions, alternating permutations, reverse alternating permutations, alternating involutions, reverse alternating involutions, respectively) of length $n.$ The following lemma will be useful in the sequel. \begin{lem}\label{str} \begin{itemize} \item[i)] Let $\pi=\pi_1\ldots\pi_n \in RAI_{n}.$ Then $1$ is in even position, $n$ is in odd position, hence $\pi_1$ is even and $\pi_{n}$ is odd. \item[ii)] Let $\pi=\pi_1\ldots\pi_n \in AI_n.$ Then $1$ is in odd position, $n$ is in even position, hence $\pi_1$ is odd and $\pi_n$ is even. \end{itemize} \end{lem} \subsection{Pattern avoidance} A permutation $\sigma=\sigma_1\ldots\sigma_n \in S_n$ \textit{avoids} the pattern $\tau \in S_k$ if there are no indices $i_1<i_2<\ldots<i_k$ such that the subsequence $\sigma_{i_1}\sigma_{i_2}\ldots \sigma_{i_k}$ is order-isomorphic to $\tau.$ Denote by $S_n(\tau)$ the set of permutations of length $n$ avoiding $\tau$ and let $S(\tau):=\bigcup_n S_n(\tau).$ We will keep this notation also when $S_n$ is replaced by other subsets of permutations, such as $I_n,$ $A_n,$ etc. Notice that an involution avoids $\tau$ if and only if it avoids $\tau^{-1}.$ The next trivial lemma will be useful in the sequel. \begin{lem}\label{rc} The reverse-complement map is a bijection between $AI_{2n+1}(\tau)$ and $RAI_{2n+1}(\tau^{rc}),$ between $AI_{2n}(\tau)$ and $AI_{2n}(\tau^{rc})$ and between $RAI_{2n}(\tau)$ and $RAI_{2n}(\tau^{rc}).$ \end{lem} \subsection{Motzkin paths} A \textit{Motzkin path} of length $n$ is a lattice path starting at $(0,0),$ ending at $(n,0),$ consisting of up steps $U$ of the form $(1,1),$ down steps $D$ of the form $(1,-1),$ and horizontal steps $H$ of the form $(1,0),$ and lying weakly above the $x$-axis.
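The constraints in this definition are easy to check by brute force. The following Python sketch (ours, purely illustrative, not part of the paper) enumerates all step sequences of length $n$ over $\{U,D,H\}$ and keeps those that stay weakly above the $x$-axis and end at height $0$; the counts recover the first Motzkin numbers.

```python
from itertools import product

def is_motzkin(word):
    """A step sequence is a Motzkin path iff the running height
    (U = +1, D = -1, H = 0) never drops below 0 and ends at 0."""
    h = 0
    for step in word:
        h += {'U': 1, 'D': -1, 'H': 0}[step]
        if h < 0:
            return False
    return h == 0

def motzkin_paths(n):
    """All Motzkin paths of length n, encoded by their step letters."""
    return [''.join(w) for w in product('UDH', repeat=n) if is_motzkin(w)]

print([len(motzkin_paths(n)) for n in range(6)])  # [1, 1, 2, 4, 9, 21]
```

This brute-force enumeration is only practical for small $n$ (there are $3^n$ candidate words), but it is handy for checking enumerative statements on small cases.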
As usual, a Motzkin path can be identified with a \textit{Motzkin word}, namely, a word $w = w_1w_2\ldots w_n$ of length $n$ in the alphabet $\{U, D, H\}$ with the constraint that the number of occurrences of the letter $U$ is equal to the number of occurrences of the letter $D$ and, for every $i,$ the number of occurrences of $U$ in the subword $w_1w_2\ldots w_i$ is not smaller than the number of occurrences of $D.$ In the following we will not distinguish between a Motzkin path and the corresponding word. Denote by $\mathcal{M}_n$ the set of Motzkin paths of length $n$ and by $M_n$ its cardinality, the $n$-th Motzkin number (see sequence \seqnum{A001006} in \cite{Sl}). The \textit{diod decomposition} of a Motzkin path $m$ of even length $2n$ is the decomposition of $m$ as $m=d_1d_2\ldots d_n,$ where each $d_i$ is a subword of $m$ of length two. Each $d_i$ in the diod decomposition of a Motzkin path is called a \textit{diod} of $m.$ \section{General results} In this section we prove two general results which will be used in the paper. \begin{lem}\label{BWX} Let $\tau$ be any permutation of $\{3,\ldots ,m\},$ $m\geq 3.$ Then $$|AI_{n}(12\tau)|=|AI_n(21\tau)|.$$ \end{lem} \proof We closely follow \cite{Ouc}, where a similar result is proved for doubly alternating permutations, i.e., alternating permutations whose inverse is also alternating. Our goal is to find a bijection between $AI_{n}(12\tau)=AI_n(12\tau,12\tau^{-1})$ and $AI_{n}(21\tau)=AI_n(21\tau,21\tau^{-1}).$ We will use the diagrammatic representation of a permutation, i.e., we will identify a permutation $\pi$ in $S_n$ with an $n\times n$ diagram with a dot at position $(i,\pi_i),$ for every $i$ (notice that Ouchterlony \cite{Ouc} uses a slightly different definition for the diagram of a permutation). A dot, $d,$ in the diagram of a permutation $\pi$ is called \textit{active} if $d$ is the 1 or 2 in any $12\tau,$ $12\tau^{-1},$ $21\tau$ or $21\tau^{-1}$ pattern in $\pi,$ and \textit{inactive} otherwise.
Also the pair of dots $(d_1,d_2)$ is called an \textit{active pair} if $d_1 d_2$ is the $12$ in a $12\tau$ or $12\tau^{-1}$ pattern or the $21$ in a $21\tau$ or $21\tau^{-1}$ pattern. We now define a Young diagram, $\lambda_{\pi},$ consisting of the part of the diagram of $\pi$ which contains the active dots. For any two dots $d_1,d_2,$ let $R_{d_1,d_2}$ be the smallest rectangle with bottom left coordinates (1, 1), such that $d_1,d_2\in R_{d_1,d_2}.$ Define $$\lambda_{\pi}=\bigcup R_{d_1,d_2},$$ where the union is over all active pairs $(d_1,d_2)$ of $\pi.$ It is clear from the definition that $\lambda_{\pi}$ is indeed a Young diagram. Since $\pi$ is an involution, its diagram is symmetric with respect to the main diagonal, and for every active dot in position $(i,j)$ there is also an active dot in position $(j,i),$ hence $\lambda_{\pi}$ is a Young diagram symmetric with respect to the main diagonal. A \textit{rook placement} of a Young diagram $\lambda$ is a placement of dots in its boxes, such that all rows and columns contain exactly one dot. If some of the rows or columns are empty we call it a \textit{partial rook placement}. Furthermore, we say that a rook placement on $\lambda$ avoids the pattern $\tau$ if no rectangle, $R\subseteq \lambda,$ contains $\tau.$ Notice that the rook placement on $\lambda_\pi$ induced by an involution $\pi$ is symmetric. 
Now, it has been proved by Jaggard \cite[Theorem 4.2]{Jaggard}, that the number of symmetric rook placements on the self-conjugate shape $\mu$ avoiding the patterns $12\tau$ and $12\tau^{-1}$ is equal to the number of symmetric rook placements on the self-conjugate shape $\mu$ avoiding the patterns $21\tau$ and $21\tau^{-1}.$ We call two permutations $\pi$ and $\sigma$ of the same size \textit{a-equivalent} if they have the same inactive dots, and write $\pi\sim_a \sigma.$ In the sequel we will need the following three facts that are immediate consequences of Lemma 6.4, Lemma 6.3 and Lemma 6.2 in \cite{Ouc}, respectively. \begin{itemize} \item[\textbf{(I)}] $\pi \sim_a \sigma \Rightarrow \lambda_\pi=\lambda_\sigma.$ \item[\textbf{(II)}] If $\pi\in AI_n(12\tau,12\tau^{-1})\cup AI_n(21\tau,21\tau^{-1})$ and $\pi\sim_a \sigma$ then $\sigma$ is doubly alternating. \item[\textbf{(III)}] If $\pi\in AI_n$ and $rp(\lambda_\pi)$ is the partial rook placement on $\lambda_\pi$ induced by $\pi,$ then $$\pi \in AI_n(12\tau,12\tau^{-1}) \Leftrightarrow rp(\lambda_\pi) \mbox{ is }12\mbox{-avoiding}$$ and $$\pi \in AI_n(21\tau,21\tau^{-1}) \Leftrightarrow rp(\lambda_\pi) \mbox{ is }21\mbox{-avoiding}.$$ \end{itemize} Now we are ready to construct a bijection $$\phi:AI_n(12\tau,12\tau^{-1})\to AI_n(21\tau,21\tau^{-1}).$$ Let $\pi \in AI_n(12\tau,12\tau^{-1}),$ so that the restriction of $\pi$ to $\lambda_\pi$ is a partial $12$-avoiding symmetric rook placement. By Jaggard's theorem (ignoring the empty rows and columns) and by point \textbf{(I)} there exists a unique $21$-avoiding (partial) symmetric rook placement on $\lambda_{\pi}$ with the same rows and columns empty, which we combine with the inactive dots of $\pi$ to get $\phi(\pi).$ By point \textbf{(II)}, $\phi(\pi)$ is doubly alternating and, since its diagram is symmetric, it is an involution. By point \textbf{(III)} it avoids $21\tau$ (and hence also $21\tau^{-1}$). 
It is also clear from Jaggard's theorem that $\phi$ is indeed a bijection. \endproof \begin{example} Consider the permutation $\pi=593716482\in AI_{9}(12435).$ Notice that $\pi$ contains the pattern $21435.$ Here $\tau=435.$ The diagram of $\pi$ is the following \centering \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.5cm,y=0.5cm] \draw[dotted,thin] (1,1)--(10,10); \draw [] (1.,1.)-- (10.,1.); \draw [] (10.,1.)-- (10.,10.); \draw [] (10.,10.)-- (1.,10.); \draw [] (1.,10.)-- (1.,1.); \draw [] (9.,10.) -- (9.,1.); \draw [] (10.,9.) -- (1.,9.); \draw (2.,1.)-- (2.,10.); \draw (3.,1.)-- (3.,10.); \draw (4.,1.)-- (4.,10.); \draw (5.,1.)-- (5.,10.); \draw (6.,1.)-- (6.,10.); \draw (7.,1.)-- (7.,10.); \draw (8.,1.)-- (8.,10.); \draw (1.,8.)-- (10.,8.); \draw (1.,7.)-- (10.,7.); \draw (1.,6.)-- (10.,6.); \draw (1.,5.)-- (10.,5.); \draw (1.,4.)-- (10.,4.); \draw (1.,3.)-- (10.,3.); \draw (1.,2.)-- (10.,2.); \draw (1.,1.)-- (10.,1.); \draw (1.,10.)-- (10.,10.); \draw (1.,10.)-- (1.,1.); \draw (10.,10.)-- (10.,1.); \draw [color=red] (1.,6.) -- (4.,6.) -- (4.,4.) -- (6.,4.) -- (6.,1.); \draw [color=red] (6.,1.) -- (1.,1.) 
-- (1.,6.); \begin{scriptsize} \draw [color=blue,fill=blue] (1.5,5.5) circle (3.5pt); \draw [] (2.5,9.5) circle (3.5pt); \draw [color=blue,fill=blue] (3.5,3.5) circle (3.5pt); \draw [] (4.5,7.5) circle (3.5pt); \draw [color=blue,fill=blue] (5.5,1.5) circle (3.5pt); \draw [] (6.5,6.5) circle (3.5pt); \draw [] (7.5,4.5) circle (3.5pt); \draw [] (8.5,8.5) circle (3.5pt); \draw [] (9.5,2.5) circle (3.5pt); \end{scriptsize} \end{tikzpicture} \vspace{0.5cm} The blue dots are the active dots of $\pi$ and the red Young diagram is $\lambda_{\pi}.$ Now applying to $\lambda_{\pi}$ the procedure described in the above proof we get $\phi(\pi)=195736482\in AI_9(21435).$ \end{example} It follows from the previous lemma and Lemma \ref{rc} that if $\tau$ is any permutation of $\{1,\ldots ,k-2\}$ then $$|AI_{2n}(\tau\, k-1\, k)|=|AI_{2n}(\tau\, k\,k-1)|$$ and $$|RAI_{2n+1}(\tau\, k-1\, k)|=|RAI_{2n+1}(\tau\, k\,k-1)|.$$ Notice that similar relations do not hold for $RAI_{2n}$; in fact, numerical computations show that, for general $n,$ $|RAI_{2n}(1234)|\neq |RAI_{2n}(1243)|$ and $|RAI_{2n}(1234)|\neq |RAI_{2n}(2134)|.$ When $\tau$ is an increasing sequence it is possible to provide a more explicit bijection $f$ between $AI_{2n}(\tau\, k-1\, k)$ and $AI_{2n}(\tau\, k\,k-1).$ Such a bijection was first defined by J. West \cite{West} for permutations and has been used by M. B\'ona \cite{Bonaf} to prove a conjecture by J. B. Lewis about alternating permutations. To this aim, B\'ona proved that $f$ preserves the alternating property when the length of the permutation is even.
Here we recall the definition of the map $f:S_{t}(12\ldots \, k-1\, k)\to S_t(12\ldots\, k\,k-1).$ Consider a permutation $\pi \in S_{t}(12\ldots \, k-1\, k)$ and define the \textit{rank} of the element $\pi_i$ to be the maximum length of an increasing subsequence ending at $\pi_i.$ Since $\pi$ avoids $12\ldots \, k-1\, k,$ the maximal rank of an element is $k-1.$ Let $R$ be the set of elements of $\pi$ whose rank is $k-1,$ and $P$ the set of their positions. The permutation $\rho=f(\pi)$ is obtained as follows. \begin{itemize} \item if $j\notin P,$ $\rho_j=\pi_j,$ \item if $j\in P,$ $\rho_j$ is the smallest unused element of $R$ that is larger than the closest entry of rank $k-2$ to the left of $\pi_j.$ \end{itemize} Notice that, if $k=3,$ the map $f$ reduces to the classic Simion-Schmidt bijection (\cite{Si}, \cite{West}). In the following lemma we prove that $f$ also preserves the property of being an involution and we describe what happens in the odd case. \begin{lem}\label{Bonabij} The map $f$ is a bijection between $AI_{2n}(12\ldots\, k-1\, k)$ and $AI_{2n}(12\ldots\, k\,k-1).$ Moreover, $f$ maps bijectively the subset of $AI_{2n+1}(1234)$ of those permutations having $2n+1$ at position 2 to the set $AI_{2n+1}(1243).$ \end{lem} \proof First of all we notice that, in an involution $\pi,$ all the elements of a given rank form an involution. To prove this fact it is sufficient to observe that elements of $\pi$ of rank 1 are the ltr minima of the permutation, which form an involution (see Lemma \ref{mininv}). We can consider the involution obtained by removing from $\pi$ the elements of rank 1 and proceed inductively. Now we want to prove that $f$ sends involutions of length $n$ to involutions of length $n$ (for every length $n$). Also in this case we can obtain the result by induction on $k.$ When $k=3,$ we noticed above that $f$ coincides with the Simion-Schmidt map (\cite{Si}), and it is well-known that this map sends involutions to involutions.
If $k>3,$ we can delete from $\pi$ the elements of rank 1 and standardize, obtaining an involution $\pi'$ which avoids $12\ldots\, k-2\;\, k-1.$ If we apply the map $f,$ we obtain an involution avoiding $12\ldots\, k-1\;\, k-2$ which can be obtained from $f(\pi)$ by removing the elements of rank 1 and standardizing. Hence also $f(\pi)$ is an involution. The fact that $f$ preserves the alternating property (when the length is even) has been proved by B\'ona \cite[Theorem 1]{Bonaf}. Now we turn our attention to the second assertion. We want to prove that $f^{-1}$ maps bijectively the set $AI_{2n+1}(1243)$ onto the set $Y_{2n+1}=AI_{2n+1}(1234)\cap \{\pi\in S_{2n+1}\,|\, \pi(2)=2n+1\}.$ The fact that the injective map $f^{-1}$ preserves the alternating property has already been observed by B\'ona \cite[Corollary 1]{Bonaf}. Moreover, $f^{-1}$ preserves the involutory property, as proved above. Now consider $\pi\in AI_{2n+1}(1243).$ The symbol $2n+1$ appears at even position $a.$ Suppose that $a\geq 4.$ Then $\pi_2=2n,$ otherwise $\pi_1\;\,\pi_2\;\,2n+1\;\,2n\sim 1243.$ As a consequence, $\pi_{2n+1}=a>2=\pi_{2n},$ which is impossible since $\pi$ is alternating. Hence $a=2$ and $\pi_2=2n+1.$ Since for every $\pi\in AI_{2n+1}(1243),$ $\pi(2)=2n+1,$ the symbols $2n+1$ and $2$ have rank $2.$ The map $f^{-1}$ fixes the elements of rank $k-2$ or less, hence it fixes the positions of $2n+1$ and $2.$ With similar arguments one can show that $f(Y_{2n+1})\subseteq AI_{2n+1}(1243)$ and conclude that $f(Y_{2n+1})= AI_{2n+1}(1243).$ \endproof \begin{example} Consider the permutation $\pi=5\,9\,7\,10\,1\,8\,3\,6\,2\,4\in AI_{10}(1234).$ The elements of rank 1 are 5 and 1, the elements of rank 2 are 9,7,3 and 2, the elements of rank 3 are 10, 8, 6 and 4. Then $f(\pi)=5\,9\,7\,8\,1\,10\,3\,4\,2\,6\in AI_{10}(1243).$ \end{example} \section{Patterns of length three}\label{ltr} The following proposition is an easy consequence of Proposition 3.1 in \cite{Ouc}. See also \cite{Oucth}.
\begin{prop} $$|AI_n(123)|=|AI_n(213)|=|AI_n(231)|=|AI_n(312)|=$$ $$|RAI_n(132)|=|RAI_n(231)|=|RAI_n(312)|=|RAI_n(321)|=1$$ $$|AI_n(132)|=|RAI_n(213)|=\begin{cases} 1& \mbox{ if }n\mbox{ is even or }n=1\\ 0& \mbox{ otherwise}.\end{cases} $$ $$|AI_n(321)|=|RAI_n(123)|=\begin{cases} 2& \mbox{ if }n\mbox{ is even and }n\geq 4\\ 1& \mbox{ otherwise}.\end{cases} $$ \end{prop} \section{Patterns 4321 and 1234}\label{sec4321} We recall the definition of a bijection $\Phi$ between the set $I_n(4321)$ of $4321$-avoiding involutions of the symmetric group $S_n$ and the set $\mathcal M_n$ of Motzkin paths of length $n$ (see \cite{Ba6}), that is essentially a restriction to the set $I_n(4321)$ of the bijection appearing in \cite{BIANE}. Consider an involution $\tau$ avoiding $4321$ and determine the set $\mathrm{exc}(\tau)=\{i \mid \tau(i) > i\}$ of its excedances. Start from the empty path and create the Motzkin path $\Phi(\tau)$ by adding a step for every integer $1 \leq i \leq n$ as follows: \begin{itemize} \item if $\tau(i)=i,$ add a horizontal step at the $i$-th position; \item if $\tau(i)>i,$ add an up step at the $i$-th position; \item if $\tau(i)<i,$ add a down step at the $i$-th position. \end{itemize} The map $\Phi$ is a bijection whose inverse can be described as follows. Given a Motzkin path $M,$ let $A=(a_1,\ldots,a_k)$ be the list of positions of up steps in $M,$ written in increasing order, and let $B=(b_1,\ldots, b_k)$ be the analogous list of positions of down steps.
Then $\Phi^{-1}(M)$ is the involution $\tau$ given by the product of the cycles $(a_i,b_i)$ for $1\leq i\leq k.$ \begin{lem} $\Phi$ maps the set $RAI_{2n}(4321)$ onto the set $\widehat{\mathcal M}_{2n}$ of Motzkin paths of length $2n$ whose diods are either of the form $UH,$ $HD$ or $UD.$ \end{lem} \proof Let $\tau \in RAI_{2n}(4321).$ For every integer $k=1,2,\ldots,2n-1,$ if $k$ is odd, and hence a descent of $\tau$ , $\tau_k\tau_{k+1}$ is mapped by $\Phi$ to one of the following pairs: $UH,$ $UD,$ $HD.$ \endproof \begin{thm}\label{RAI4321even} $$|RAI_{2n}(4321)|=M_n.$$ \end{thm} \proof The previous lemma allows us to define a bijection $\hat{\Phi}:RAI_{2n}(4321)\to \mathcal{M}_n.$ Let $m=d_1d_2\ldots d_{n}$ be a Motzkin path (of even length) whose diods $d_i$ are either of the form $UH,$ $UD$ or $HD.$ Define $\Delta(m):=s_1\ldots s_n\in \mathcal M_n$ where $$s_i=\begin{cases} U & \mbox{ if }d_i=UH\\ D & \mbox{ if }d_i=HD\\ H & \mbox{ if }d_i=UD. \end{cases}$$ Set now $\hat{\Phi}=\Delta \circ \Phi.$ \endproof \begin{example} Consider $\pi=6\,2\,8\,4\,5\,1\,10\,3\,9\,7\in RAI_{10}(4321).$ Then $$\Phi(\pi)=\begin{tikzpicture}[every node/.style={draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black}] \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (10) at (10/2,0) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (9) at (9/2,1/2) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (8) at (8/2,1/2) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (7) at (7/2,2/2) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (6) at (6/2,1/2) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (5) at (5/2,1) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (4) at (4/2,2/2) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (3) at (3/2,2/2) {}; \node[draw,shape=circle,minimum size=1mm, inner 
sep=0mm, fill=black] (2) at (2/2,1/2) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (1) at (1/2,1/2) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (0) at (0,0) {}; \draw[] (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) --(8)--(9)--(10); \end{tikzpicture} $$ and $$\widehat{\Phi}(\pi)=\begin{tikzpicture}[every node/.style={draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black}] \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (5) at (5/2,0) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (4) at (4/2,1/2) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (3) at (3/2,1/2) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (2) at (2/2,2/2) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (1) at (1/2,1/2) {}; \node[draw,shape=circle,minimum size=1mm, inner sep=0mm, fill=black] (0) at (0,0) {}; \draw[] (0) -- (1) -- (2) -- (3) -- (4) -- (5); \end{tikzpicture}$$ \end{example} \begin{thm}\label{RAI4321} $$|RAI_{2n-1}(4321)|=M_n-M_{n-2}.$$ \end{thm} \proof Consider a permutation $\pi \in RAI_{2n}(4321).$ By Lemma \ref{str}, $\pi_{2n}$ is odd. Now consider the element $2n-1$ in $\pi.$ Notice that this element is at position $2n$ or $2n-1,$ otherwise $\pi$ would contain the pattern $2n\;\, 2n-1\;\, \pi(2n-1)\;\, \pi(2n) \sim 4321$ (notice that $2n$ appears before $2n-1$ because $\pi$ is an involution and $\pi_{2n-1}>\pi_{2n}$). If $\pi_{2n}=2n-1,$ the permutation $\pi'$ obtained from $\pi$ by removing the element $2n-1$ and standardizing is an arbitrary element of $RAI_{2n-1}(4321)$ which ends with its maximum element. If $\pi_{2n-1}=2n-1,$ write $\pi$ as $\pi=\left(\begin{smallmatrix} \cdots & k & \cdots & 2n-2 & 2n-1 & 2n\\ \cdots & 2n & \cdots & j & 2n-1 & k \end{smallmatrix}\right)$. If $k>j,$ consider the permutation $\pi'$ obtained from $\pi$ by removing the element $2n-1$ and standardizing. 
$\pi'$ is an arbitrary element of $RAI_{2n-1}(4321)$ which does not end with its maximum element. If $k<j,$ we must have $k=2n-3$ and $j=2n-2,$ otherwise $\pi$ would contain $4321.$ Hence we can write $\pi=\pi'\;\,2n\;\,2n-2\;\,2n-1\;\,2n-3,$ where $\pi'$ is an arbitrary permutation in $RAI_{2n-4}(4321).$ As a consequence, $|RAI_{2n}(4321)|=|RAI_{2n-1}(4321)|+|RAI_{2n-4}(4321)|.$ Since $|RAI_{2n}(4321)|=M_n$ and $|RAI_{2n-4}(4321)|=M_{n-2}$ by Theorem \ref{RAI4321even}, we get $|RAI_{2n-1}(4321)|=M_n-M_{n-2}.$ \endproof The preceding theorem implies that the sequence $(|RAI_{2n-1}(4321)|)_{n\geq 1}$ coincides with sequence \seqnum{A102071} in \cite{Sl}. \begin{thm}\label{robs} $$|AI_{n}(1234)|=|RAI_{n}(4321)|$$ and $$|RAI_{n}(1234)|=|AI_{n}(4321)|.$$ \end{thm} \proof The \textit{Robinson-Schensted} map $RS$ (see e.g. \cite{vL}) associates an involution $\pi$ with a standard Young tableau whose descent set $D$ is equal to the descent set of $\pi.$ If we apply the inverse of the map $RS$ to the transposed tableau, we get another involution $\pi'$ whose descent set is $[n-1]\setminus D.$ In particular, $\pi$ is alternating if and only if $\pi'$ is reverse alternating. Moreover, by the properties of the map $RS,$ $\pi$ avoids $1234$ if and only if the corresponding tableau has at most three columns, hence the transposed tableau has at most three rows and $\pi'$ avoids $4321.$ \endproof Theorems \ref{RAI4321even}, \ref{RAI4321} and \ref{robs} and Lemma \ref{rc} imply the following result. \begin{cor}\label{enum1234} $$|RAI_{2n}(4321)|=|AI_{2n}(1234)|=M_n$$ and $$|RAI_{2n-1}(1234)|=|AI_{2n-1}(4321)|=|RAI_{2n-1}(4321)|=$$ $$|AI_{2n-1}(1234)|=M_n-M_{n-2}.$$ \end{cor} We now give an explicit formula for the cardinalities of the sets $RAI_{2n}(1234)$ and $AI_{2n}(4321).$ \begin{thm} $|RAI_{2n}(1234)|=|AI_{2n}(4321)|=M_{n+1}-2M_{n-1}+M_{n-3}.$ \end{thm} \proof We proceed as in the proof of Theorem \ref{RAI4321}. 
Consider a permutation $\pi\in AI_{2n+1}(4321).$ The symbol $2n$ is either at position $2n$ or $2n+1,$ otherwise $\pi$ would contain $4321.$ If $\pi_{2n+1}=2n,$ the permutation $\pi'$ obtained from $\pi$ by removing the element $2n+1$ and standardizing is an arbitrary element of $AI_{2n}(4321)$ which ends with its maximum element. If $\pi_{2n}=2n,$ set $\pi=\left(\begin{smallmatrix} \cdots & k & \cdots & 2n-1 & 2n & 2n+1\\ \cdots & 2n+1 & \cdots & j & 2n & k \end{smallmatrix}\right)$. If $j<k,$ the permutation $\pi'$ obtained from $\pi$ by removing the element $2n$ and standardizing is an arbitrary element of $AI_{2n}(4321)$ which does not end with its maximum element. If $j>k,$ the element $j=2n-1$ must be fixed and $k=2n-2,$ otherwise $\pi$ would contain an occurrence of $4321.$ Removing from $\pi$ the last four elements we get an arbitrary permutation in $AI_{2n-3}(4321).$ Hence $|AI_{2n+1}(4321)|=|AI_{2n}(4321)|+|AI_{2n-3}(4321)|,$ that is, by Corollary \ref{enum1234}, $|AI_{2n}(4321)|=|AI_{2n+1}(4321)|-|AI_{2n-3}(4321)|=(M_{n+1}-M_{n-1})-(M_{n-1}-M_{n-3}),$ and the assertion follows. \endproof \section{Pattern 3412} \begin{thm} $$|RAI_{2n}(3412)|=|AI_{2n+2}(3412)|=|AI_{2n+1}(3412)|=|RAI_{2n+1}(3412)|=M_n.$$ \end{thm} \proof We recall that the set $I_n(3412)$ also corresponds bijectively to the set $\mathcal M_n$ via a map $\Psi$ whose definition coincides with the definition of the map $\Phi$ of Section \ref{sec4321} (see \cite{Ba6}). It is easily seen that $\Psi$ maps the set $RAI_{2n}(3412)$ to the set of Motzkin paths in $\mathcal{M}_{2n}$ whose diods are either of the form $UH,$ $HD,$ or $UD.$ Hence the map $\widehat \Psi=\Delta \circ \Psi $ is a bijection between $RAI_{2n}(3412)$ and $\mathcal{M}_{n},$ where $\Delta$ is the map defined in the proof of Theorem \ref{RAI4321even}. 
We observe that any permutation in $AI_{t}(3412)$ begins with $1;$ hence a permutation $\pi \in AI_{2n+1}(3412)$ can be written as $\pi=1\,\pi',$ where $\pi'$ is an arbitrary permutation in $RAI_{2n}(3412).$ Moreover, any permutation $\pi$ in $AI_{2n+2}(3412)$ ends with its maximum element, hence it can be written as $\pi=1\,\pi'\,2n+2$ where $\pi'$ is an arbitrary permutation in $RAI_{2n}(3412).$ The last equality follows from Lemma \ref{rc}. \endproof \section{Patterns 2143, 2134 and 1243} \begin{thm}\label{enum1243one} $$|AI_{2n}(1243)|=|AI_{2n}(2143)|=|AI_{2n}(2134)|=M_n$$ and $$|AI_{2n-1}(2134)|=|RAI_{2n-1}(1243)|=M_n-M_{n-2}.$$ \end{thm} \proof It is an immediate consequence of Lemma \ref{rc}, Lemma \ref{BWX} and Corollary \ref{enum1234}. \endproof \begin{thm} $$|AI_{2n+1}(1243)|=|AI_{2n+1}(2143)|=|RAI_{2n+1}(2143)|=|RAI_{2n+1}(2134)|=M_n.$$ \end{thm} \proof By Lemmas \ref{rc}, \ref{BWX} and \ref{Bonabij}, it suffices to show that $|AI_{2n+1}(1234)\cap \{\sigma \in S_{2n+1} \,|\, \sigma(2)=2n+1\}|=M_n.$ If $\pi$ is a permutation in this last set, removing $2n+1$ and $2$ from $\pi$ and standardizing we get an arbitrary permutation $\widehat{\pi}$ in $I_{2n-1}(1234)$ with either $\Des(\widehat{\pi})= \{3,5,7,\ldots, 2n-3\}$ or $\Des(\widehat{\pi})= \{1,3,5,7,\ldots, 2n-3\}.$ In other words, this permutation is either reverse alternating or reverse alternating to the right of the second position. Hence we have $$|AI_{2n+1}(1234)\cap \{\sigma \in S_{2n+1} \,|\, \sigma(2)=2n+1\}|=$$ $$|I_{2n-1}(1234)\cap \{\sigma \in S_{2n-1} \,|\, \Des(\sigma)= \{3,5,7,\ldots, 2n-3\}\}|+|RAI_{2n-1}(1234)|.$$ We want to show that the sum on the right hand side of the previous equality is equal to $M_n.$ We proceed by induction on $n$. 
The assertion is trivial when $n=2$ or $n=3.$ Assume that the assertion is true for every $m<n.$ We know by Corollary \ref{enum1234} that $|RAI_{2n-1}(1234)|=M_n-M_{n-2}.$ Consider a permutation $\rho$ in $I_{2n-1}(1234)\cap \{\sigma \in S_{2n-1} \,|\, \Des(\sigma)= \{3,5,7,\ldots, 2n-3\}\}.$ Notice that $\rho(2)=2n-2$ and $\rho(3)=2n-1,$ otherwise the permutation would contain the subsequence $\rho(1)\;\,\rho(2)\;\, 2n-2\;\,2n-1 \sim 1234.$ Removing $2,3,2n-2$ and $2n-1$ and standardizing we get an arbitrary permutation in $I_{2n-5}(1234)\cap \{\sigma \in S_{2n-5} \,|\, \Des(\sigma)= \{3,5,7,\ldots, 2n-7\}\}$ or in $RAI_{2n-5}(1234).$ By the inductive hypothesis we have $$|I_{2n-5}(1234)\cap \{\sigma \in S_{2n-5} \,|\, \Des(\sigma)= \{3,5,7,\ldots, 2n-7\}\}|+$$ $$|RAI_{2n-5}(1234)|=M_{n-2}.$$ This completes the proof. \endproof \begin{example} Consider $\pi=5\,11\,9\,10\,1\,7\,6\,8\,3\,4\,2\in AI_{11}(1234).$ If we remove 11 and 2 from $\pi$ and standardize we get the involution $$\widehat{\pi}=4\,8\,9\,1\,6\,5\,7\,2\,3$$ with $\Des(\widehat{\pi})=\{3,5,7\}.$ Removing from $\widehat{\pi}$ the symbols 8, 9, 2 and 3 we get the involution $2\,1\,4\,3\,5\in RAI_5(1234).$ Consider now the permutation $\pi=9\,11\,7\,8\,5\,10\,3\,4\,1\,6\,2.$ If we remove 11 and 2 from $\pi$ and standardize we get the involution $$\widehat{\pi}=8\,6\,7\,4\,9\,2\,3\,1\,5\in RAI_9(1234).$$ \end{example} \begin{thm} $$|RAI_{2n}(2143)|=M_{n-1}.$$ \end{thm} \proof Let $\pi$ be a permutation in $RAI_{2n}(2143).$ The maximum of $\pi$ is in position $1,$ otherwise the symbols $2$ and $1$ (which appear in this order in $\pi,$ since $\pi$ is an involution) together with $\pi_{2n-1}$ and $\pi_{2n}$ would form an occurrence of $2143$ in $\pi.$ Write $\pi$ as $2n\,\widehat{\pi}\,1.$ Then the standardization of $\widehat{\pi}$ is an arbitrary permutation in $AI_{2n-2}(2143).$ The assertion now follows from Theorem \ref{enum1243one}. \endproof \section{Patterns 3421 and 4312} Notice that these two patterns are inverse of each other. 
Hence $AI_{n}(3421)=AI_n(4312)=AI_n(3421,4312)$ and the same is true for reverse alternating involutions. The following theorem shows that all these classes are enumerated by the Fibonacci numbers $F_n$ (sequence \seqnum{A000045} in \cite{Sl}). \begin{thm} $$|AI_n(3421,4312)|=|RAI_n(3421,4312)|=F_{n-1},$$ where $F_k$ is the $k$-th Fibonacci number. \end{thm} \proof Let $\pi \in RAI_{2n}(3421,4312).$ The last element of $\pi$ is odd and the position of $2n$ is odd by Lemma \ref{str}. Notice that the symbol $2n-1$ can be either at position $2n$ or $2n-1,$ otherwise $\pi$ would contain the subsequence $2n\;\,2n-1\;\,\pi(2n-2)\;\,\pi(2n-1)$ whose standardization is $4312.$ If $\pi_{2n}=2n-1,$ then $\pi=\pi'\;\,2n\;\,2n-1,$ where $\pi'$ is an arbitrary permutation in $RAI_{2n-2}(3421,4312).$ If $\pi_{2n-1}=2n-1,$ let $y=\pi_{2n-2}$ and $x=\pi_{2n}.$ If $x>y,$ removing from $\pi$ the symbol $2n-1$ and standardizing, we obtain an arbitrary permutation in $RAI_{2n-1}(3421,4312)$ which does not end with its maximum. If $x<y,$ notice that either $\pi(2n-2)=2n-2$ or $\pi(2n-3)=2n-2$ (otherwise $\pi$ would contain one of the two patterns). If $\pi(2n-2)=2n-2,$ then $\pi(2n-3)$ is forced to be $2n$ (because $\pi$ is reverse alternating), so we can write $\pi=\tau\;\,2n\;\,2n-2\;\,2n-1\;\,2n-3,$ where $\tau\in RAI_{2n-4}(3421,4312).$ We associate to $\pi$ the permutation $\pi'=\tau\;\, 2n-2\;\,2n-3\;\,2n-1.$ Such a $\pi'$ is an arbitrary permutation in $RAI_{2n-1}(3421,4312)$ which ends with its maximum and in which $2n-3$ precedes the maximum. If $\pi(2n-3)=2n-2,$ we write $\pi=\sigma' \;\, 2n \;\, \sigma'' \;\, 2n-2 \;\, 2n-3 \;\, 2n-1 \;\, x$ and we associate to $\pi$ the permutation $\pi'= \sigma' \;\, 2n-2 \;\, \sigma'' \;\, 2n-3 \;\, x \;\, 2n-1 $ which is an arbitrary permutation in $RAI_{2n-1}(3421,4312)$ which ends with its maximum and in which $2n-3$ follows the maximum. 
As a consequence $$|RAI_{2n}(3421,4312)|=|RAI_{2n-1}(3421,4312)|+|RAI_{2n-2}(3421,4312)|.$$ Let $\pi \in RAI_{2n+1}(3421,4312).$ The last element of $\pi$ is odd and the position of $2n+1$ is odd by Lemma \ref{str}. Notice that the symbol $2n+1$ is either at position $2n+1$ or $2n-1,$ otherwise $\pi$ would contain the subsequence $2n\;\,2n+1\;\,\pi(2n-1)\;\,\pi(2n)\sim 3421.$ If $\pi_{2n+1}=2n+1,$ then $\pi=\pi'\,2n+1,$ where $\pi'$ is an arbitrary permutation in $RAI_{2n}(3421,4312).$ If $\pi_{2n-1}=2n+1,$ write $\pi=\tau \;\, 2n+1 \;\, x \;\, 2n-1.$ Then $\pi'=\tau \;\, 2n-1 \;\, x $ is an arbitrary permutation in $RAI_{2n}(3421,4312)$ with $2n-1$ as a fixed point. We observed above that such permutations are in bijection with $RAI_{2n-1}(3421,4312).$ As a consequence \begin{equation}\label{eq1} |RAI_{2n+1}(3421,4312)|=|RAI_{2n}(3421,4312)|+|RAI_{2n-1}(3421,4312)|. \end{equation} Hence we can conclude that $$|RAI_{m}(3421,4312)|=F_{m-1}.$$ By Lemma \ref{rc}, we have also $|AI_{2n+1}(3421,4312)|=|RAI_{2n+1}(3421,4312)|.$ Consider now $\pi\in AI_{2n}(3421,4312).$ By Lemma \ref{str}, $\pi_1$ is odd and the symbol 1 is in odd position. Notice that either $\pi(1)=1$ or $\pi(3)=1$ otherwise $\pi$ would contain the subsequence $\pi_1 \, \pi_2\, 2\, 1,$ whose standardization is 3421. 
If $\pi_1=1,$ the standardization of $\pi_2\ldots\pi_{2n}$ is an arbitrary permutation in $RAI_{2n-1}(3421,4312).$ If $\pi(3)=1,$ write $\pi=3\,\pi_2\,1\,\tau.$ The standardization of the word $\pi_2\,3\,\tau$ is an arbitrary permutation $\pi'$ in $RAI_{2n-1}(3421,4312)$ which fixes the symbol $2.$ Observe that, if $\pi\in RAI_{2n-1}(3421,4312),$ then $\pi(1)=2$ or $\pi(2)=2,$ namely $RAI_{2n-1}(3421,4312)=A\cup B$ where $A=RAI_{2n-1}(3421,4312)\cap \{\pi \in S_{2n-1}\,|\, \pi(2)=1\}$ and $B=RAI_{2n-1}(3421,4312)\cap \{\pi \in S_{2n-1}\,|\, \pi(2)=2\}.$ The set $A$ corresponds bijectively to the set $RAI_{2n-3}(3421,4312)$ (remove the first two elements and standardize), hence, by equation (\ref{eq1}), the set $B$ has the same cardinality as the set $RAI_{2n-2}(3421,4312).$ Hence, $$|AI_{2n}(3421,4312)|=|RAI_{2n-1}(3421,4312)|+|RAI_{2n-2}(3421,4312)|$$ and $$|AI_{m}(3421,4312)|=F_{m-1}.$$ \endproof \begin{example} Consider the set $RAI_6(3421,4312),$ whose elements are $$\alpha=2\,1\,6\,4\,5\,3\quad\quad \beta=2\,1\,4\,3\,6\,5 \quad \quad \gamma=4\,2\,6\,1\,5\,3$$ $$\delta=4\,2\,3\,1\,6\,5 \quad \quad \epsilon=6\,2\,4\,3\,5\,1.$$ \end{example} The permutations $\beta$ and $\delta$ correspond to $2\,1\,4\,3$ and $4\,2\,3\,1,$ namely, the elements of the set $RAI_4(3421,4312),$ whereas the permutations $\alpha, \gamma$ and $\epsilon$ correspond to $2\,1\,4\,3\,5,$ $4\,2\,5\,1\,3$ and $5\,2\,3\,4\,1,$ respectively, namely, the elements of $RAI_5(3421,4312).$ \section{Patterns 2431, 4132, 3241 and 4213.} Notice that the two patterns $2431$ and $4132$ are inverse of each other. 
Hence $AI_{n}(2431)=AI_n(4132)=AI_n(2431,4132)$ and the same is true for reverse alternating involutions and for the pair of patterns $(3241,4213).$ \begin{thm}\label{theorem0} $$|RAI_{2n}(2431,4132)|=|RAI_{2n}(3241,4213)|=|RAI_{2n+1}(3241,4213)|=$$ $$|AI_{2n+1}(2431,4132)|= 2^{n-1}.$$ \end{thm} \proof Consider a permutation $\pi\in RAI_{2n}(2431,4132)$ and let $i$ be the position of $2n$ in $\pi$ (so that $\pi_{2n}=i,$ since $\pi$ is an involution). We want to show that $\pi=\tau' \, 2n\,\tau'',$ where $\tau'$ is a permutation of the symbols $1,2,\ldots,i-1$ and $\tau''$ is a permutation of the symbols $i,\ldots,2n-1.$ Suppose on the contrary that there exists an element $b$ in $\tau'$ such that $b>i,$ so that $\pi=\ldots b\ldots 2n\ldots i.$ Let $a=\pi_{2n-1}.$ Since $\pi$ is reverse alternating, we have $a>i,$ hence $2n-1$ follows $2n$ in $\pi$ and the subsequence $b\, 2n\, 2n-1\, i$ is order-isomorphic to 2431. As a consequence, the word $2n\,\tau''$ is itself (up to standardization) a reverse alternating involution avoiding 132, and there exists only one such permutation, as observed in Section \ref{ltr}, while the word $\tau'$ is an arbitrary permutation in $RAI_{j}(2431,4132),$ $j$ even. We can conclude that $$|RAI_{2n}(2431,4132)|=\sum_{j \mbox{ even, }j<2n}|RAI_{j}(2431,4132)|=2^{n-1},$$ where the last equality follows from the previous one by induction. The facts that $$|RAI_{2n}(2431,4132)|=|RAI_{2n}(3241,4213)|$$ and $$|RAI_{2n+1}(3241,4213)|=|AI_{2n+1}(2431,4132)|$$ follow by Lemma \ref{rc}. Consider now a permutation $\pi$ in $RAI_{2n+1}(4213,3241).$ Write $\pi=m_1\, w_1\,m_2\, w_2\ldots m_k\, w_k,$ where the $m_i$'s are the ltr maxima of $\pi$ and the $w_i$'s are words. Since $\pi$ is reverse alternating and avoids the patterns 4213 and 3241, it follows that for every $i=1,2,\ldots, k-1$ the word $w_i$ is nonempty and that $w_i<w_{i+1},$ namely, every symbol in $w_i$ is smaller than any symbol in $w_{i+1}.$ As a consequence $m_iw_i$ is an interval for every $i.$ Obviously $m_k=2n+1$ and its position is $\pi_{2n+1},$ since $\pi$ is an involution. 
Suppose that $\pi_{2n+1}\neq 2n+1.$ Since $\pi_{2n}<\pi_{2n+1},$ the symbol $2n$ should appear to the left of $2n+1$ in $\pi,$ but this contradicts the fact that $m_kw_k$ is an interval. Hence $2n+1$ is the last element of $\pi$ and removing it from $\pi$ we get an arbitrary permutation in $RAI_{2n}(4213,3241),$ so we have $$|RAI_{2n}(4213,3241)|=|RAI_{2n+1}(4213,3241)|.$$ \endproof \begin{thm}\label{theorem1} $$|RAI_{2n+1}(2431,4132)|=|AI_{2n+1}(3241,4213)| =\left\lfloor \dfrac{2^{n-1}\cdot 5}{3} \right\rfloor.$$ \end{thm} To prove this theorem we need the following lemma. \begin{lem}\label{Lemma1} The number of connected permutations in $RAI_{2l+1}(4132,2431)$ is $$\left \lfloor \frac{2l+1}{4}\right \rfloor.$$ \end{lem} \proof Let $\tau\in RAI_{2l+1}(4132,2431)$ be a connected permutation. Consider the position $i$ of $2l+1$ in $\tau.$ This position is odd by Lemma \ref{str} and $i\neq 2l+1$ since $\tau$ is connected. If $i=2l-1,$ we can remove from $\tau$ the symbols $2l+1$ and $2l-1,$ obtaining an arbitrary connected permutation $\tau'$ in $RAI_{2l-1}(2431,4132).$ If $i<2l-1,$ let $a=\tau_{2l},$ $b=\tau_{2l-1}$ and $c=\tau_{2l-2}.$ We can write $\tau=\left(\begin{smallmatrix} \cdots & a & \cdots & i & \cdots & b & \cdots & 2l-2 & 2l-1 & 2l &2l+1\\ w_1 & 2l & w_2 & 2l+1 & w_3 & 2l-1 & w_4 & c & b & a & i \end{smallmatrix}\right)$ where, since $\tau$ is reverse alternating, $c<b>a<i.$ If $bai\sim 312,$ since $\tau$ avoids 2431, the elements of $w_1$ and $w_2$ are smaller than the elements of $w_4\,c\,b\,a\,i$ that, in turn, are smaller than the elements of $w_3.$ But now, if $a\neq i-1,$ we would have an occurrence of 4132. Hence $a=i-1,$ and this is impossible since $\tau$ is connected ($w_1$ would be a connected component of $\tau$). 
Hence $bai\sim 213.$ Notice that we must have $cba\sim 231$ and we can write $\tau=\left(\begin{smallmatrix} \cdots & a & \cdots & c & \cdots & b &\cdots & i & \cdots & 2l-2 & 2l-1 & 2l &2l+1\\ w_1 & 2l & w_{2_1} & 2l-2 & w_{2_2} & 2l-1 & w_3 & 2l+1 & w_4 & c & b & a & i \end{smallmatrix}\right)$ where, as above, $w_1<w_4<w_3<w_{2_2}<w_{2_1}.$ The word $w_1$ is empty, since $\tau$ is connected. If $i\neq 2l-3,$ we have $\tau_{2l-3}>\tau_{2l-2},$ since $\tau$ is reverse alternating. Hence $2l-3$ must follow $2l-2.$ So we can conclude that also $w_{2_1}$ is empty and $c=2.$ Since $\tau_2<\tau_3,$ 3 must follow 2 in $\tau,$ so $b=3$ and $w_{2_2}$ is also empty. Iterating these arguments one can show that $$\tau= 2l\;2l-2\;2l-1\;2l-4\;2l-3\;\ldots l-1\;2l+1\; l-3\;l-2\;\ldots 2\;3\;1\;l+1.$$ In particular $i=l+1.$ Since $i$ is odd, this implies that $2l+1\equiv 1\,\mbox{ mod }4.$ As a consequence, the number of connected permutations in $RAI_{2l+1}(4132,2431)$ is the same as the number of connected permutations in $RAI_{2l-1}(4132,2431)$ if $2l+1\equiv 3\mbox{ mod }4$ and increases by one if $2l+1\equiv 1\mbox{ mod }4.$ As an example, when $2l+1=21$ such permutations are $$4\; 2\; 6\; 1\; 8\; 3\; 10\; 5\; 12\; 7\; 14\; 9\; 16\; 11\; 18\; 13\; 20\; 15\; 21\; 17\; 19$$ $$8\; 6\; 7\; 4\; 10\; 2\; 3\; 1\; 12\; 5\; 14\; 9\; 16\; 11\; 18\; 13\; 20\; 15\; 21\; 17\; 19$$ $$12\; 10\; 11\; 8\; 9\; 6\; 14\; 4\; 5\; 2\; 3\; 1\; 16\; 7\; 18\; 13\; 20\; 15\; 21\; 17\; 19$$ $$16\; 14\; 15\; 12\; 13\; 10\; 11\; 8\; 18\; 6\; 7\; 4\; 5\; 2\; 3\; 1\; 20\; 9\; 21\; 17\; 19$$ $$20\; 18\; 19\; 16\; 17\; 14\; 15\; 12\; 13\; 10\; 21\; 8\; 9\; 6\; 7\; 4\; 5\; 2\; 3\; 1\; 11$$ (notice that all of them but the last one have 21 at position 19 and the last one has 21 in position 11). 
When $2l+1=23$ such permutations are $$4\; 2\; 6\; 1\; 8\; 3\; 10\; 5\; 12\; 7\; 14\; 9\; 16\; 11\; 18\; 13\; 20\; 15\; 22\; 17\; 23\; 19\; 21$$ $$8\; 6\; 7\; 4\; 10\; 2\; 3\; 1\; 12\; 5\; 14\; 9\; 16\; 11\; 18\; 13\; 20\; 15\; 22\; 17\; 23\; 19\; 21$$ $$12\; 10\; 11\; 8\; 9\; 6\; 14\; 4\; 5\; 2\; 3\; 1\; 16\; 7\; 18\; 13\; 20\; 15\; 22\; 17\; 23\; 19\; 21$$ $$16\; 14\; 15\; 12\; 13\; 10\; 11\; 8\; 18\; 6\; 7\; 4\; 5\; 2\; 3\; 1\; 20\; 9\; 22\; 17\; 23\; 19\; 21$$ $$20\; 18\; 19\; 16\; 17\; 14\; 15\; 12\; 13\; 10\; 22\; 8\; 9\; 6\; 7\; 4\; 5\; 2\; 3\; 1\; 23\; 11\; 21$$ (all of them have the symbol 23 in position 21). Since when $2l+1=5$ there is only one such permutation, the number of connected permutations in $RAI_{2l+1}(4132,2431)$ is $\left \lfloor \frac{2l+1}{4}\right \rfloor.$ \endproof Now we turn to the proof of Theorem \ref{theorem1}. \vspace{0.3 cm} \noindent \textit{Proof of Theorem }\ref{theorem1}. Let $\pi\in RAI_{2n+1}(2431,4132).$ Consider the last connected component $\tau$ of $\pi$ and write $\pi=\sigma\tau.$ Now, $\sigma$ is an arbitrary permutation in $RAI_{2k}(2431,4132),$ with $k\geq 0$ (notice that $\sigma$ has even length, since $\tau$ is connected and $\pi$ is reverse alternating) and $\tau',$ the standardization of $\tau,$ is an arbitrary connected permutation in $RAI_{2n+1-2k}(2431,4132).$ Hence, by Lemma \ref{Lemma1} and by Theorem \ref{theorem0}, we have $$|RAI_{2n+1}(2431,4132)|=2^{n-1}+\sum_{k=1}^{n-1}2^{k-1}\left\lfloor \frac{2n+1-2k}{4} \right \rfloor+\left\lfloor\frac{2n+1}{4} \right \rfloor, $$ where the first summand of the right hand side corresponds to the case $\tau=2n+1$ and the third summand corresponds to the case $|\tau|=2n+1.$ Now it can be shown by induction that $$|RAI_{2n+1}(2431,4132)|=\left \lfloor \frac{2^{n-1}\cdot 5}{3}\right \rfloor.$$ The fact that $|AI_{2n+1}(3241,4213)|=|RAI_{2n+1}(2431,4132)|$ follows by Lemma \ref{rc}. 
\endproof Notice that $\{|RAI_{2n+1}(2431,4132)|\}_{n}$ is sequence \seqnum{A081254} in \cite{Sl}. On the contrary, the sequence enumerating $AI_{2n}(3241,4213)$ and $AI_{2n}(2431,4132),$ whose first terms are $1,2,5,9,17,31,59,\ldots,$ is not present in \cite{Sl}. \section{Patterns 2413 and 3142} In this section we show that the set $AI_n(2413,3142)$ coincides with the set of alternating Baxter involutions of length $n.$ We recall that a \textit{Baxter permutation} of length $n$ is a permutation $\pi\in S_n$ such that, for every $1\leq i\leq j\leq k\leq l\leq n,$ $$\mbox{if }\pi_i+1=\pi_l\mbox{ and }\pi_j>\pi_l\mbox{ then }\pi_k>\pi_l\mbox{ and}$$ $$\mbox{if }\pi_l+1=\pi_i\mbox{ and }\pi_k>\pi_i\mbox{ then }\pi_j>\pi_i.$$ In \cite{Ouc} the author shows that the set of doubly alternating Baxter permutations coincides with the set of doubly alternating permutations avoiding 2413 and $3142=2413^{-1}.$ An alternating involution is clearly doubly alternating, since it coincides with its inverse. As a consequence, the set $AI_n(2413,3142)$ coincides with the set of alternating Baxter involutions of length $n.$ A recurrence relation for this sequence has been found in \cite{Min} and the corresponding sequence in \cite{Sl} is \seqnum{A347546}. Moreover we have $$|AI_{2n+1}(2413,3142)|=|RAI_{2n+1}(2413,3142)|=|RAI_{2n}(2413,3142)|,$$ where the first equality follows from Lemma \ref{rc} and the second one is a consequence of the fact that each permutation in $RAI_{2n+1}(2413,3142)$ ends with its maximum (see \cite{Min}) and this maximum can be removed, hence obtaining any permutation in $RAI_{2n}(2413,3142).$ \section{Patterns 4123, 2341} Since $4123=2341^{-1}$ we have $AI_n(4123)=AI_n(2341)=AI_n(4123, 2341)$ and the same is true for reverse alternating involutions. To enumerate these classes we need the following lemma. \begin{lem} The number of connected permutations in $RAI_{n}(4123, 2341)$ is $\begin{cases} 1 & \mbox{ if }n\mbox{ is odd and }n\neq 3\\ 2 & \mbox{ if }n\mbox{ is even and }n\geq 6\\ 0 & \mbox{ if }n=3\\ 1 & \mbox{ if }n=2,4. 
\end{cases}.$\\ The number of connected permutations in $AI_n(4123,2341)$ is $\begin{cases} 1 & \mbox{ if }n\geq 4\mbox{ or }n=1\\ 0 & \mbox{ if }n=2\mbox{ or }3\\ \end{cases}.$ \end{lem} \proof The small cases are trivial, so we can assume that the length is at least $5.$ Let $\pi\in RAI_{2n}(4123, 2341)$ be connected. Write $\pi=w_k\,m_k\, w_{k-1} m_{k-1}\ldots m_2\,w_1\,m_1$ where the $m_i$'s are the rtl maxima of $\pi.$ Since $\pi$ is reverse alternating and avoids 4123, every $w_i$ with $2\leq i\leq k-1$ must have length 1. For the same reason, $w_1$ must be empty ($\pi$ ends with a descent) and the length of $w_k$ is 0 or 2. If $|w_k|=0,$ $$\pi=2n\,2n-2\,2n-1\,2n-4\,2n-3\ldots 4\,5\,2\,3\,1, $$ if $|w_k|=2,$ $$\pi=2n-2\,2n-4\,2n\,2n-6\,2n-1\,2n-8\,2n-3\ldots 2\,7\,1\,5\,3.$$ In a similar way it is possible to prove that the only connected permutation in $RAI_{2n+1}(4123,2341)$ is $$\pi=2n\,2n-2\,2n+1\,2n-4\,2n-1\,2n-6\,2n-3\ldots 9\,4\,7\,2\,5\,1\,3. $$ The classification of the connected permutations in $AI_n(4123,2341)$ is fully analogous. \endproof \begin{example} There are two connected permutations in $RAI_{10}(4123,2341),$ namely, $$8\, 6\, 10\, 4\, 9\, 2\, 7\, 1\, 5\, 3 \mbox{ and } 10\, 8\, 9\, 6\, 7\, 4\, 5\, 2\, 3\, 1,$$ while the only connected permutation in $RAI_{11}(4123,2341)$ is $$10\, 8\, 11\, 6\, 9\, 4\, 7\, 2\, 5\, 1\, 3.$$ The only connected permutation in $AI_{10}(4123,2341)$ is $$9\, 10\, 7\, 8\, 5\, 6\, 3\, 4\, 1\, 2$$ and the only connected permutation in $AI_{11}(4123,2341)$ is $$9\, 11\, 7\, 10\, 5\, 8\, 3\, 6\, 1\, 4\, 2.$$ \end{example} In the following theorem we find the ordinary generating functions that enumerate reverse alternating involutions avoiding 4123 and 2341 of even and odd length (sequences \seqnum{A052980} and \seqnum{A193641} in \cite{Sl}, respectively). We also find the corresponding generating function for the alternating even case. This last sequence does not appear in \cite{Sl}. 
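These generating functions are rational, so their coefficients can be extracted mechanically by formal power-series division. As an illustrative sketch (the helper \texttt{series} is ours), the following Python code expands the two generating functions for the reverse alternating case in the variable $t=x^2,$ recovering the first terms of the sequences \seqnum{A052980} and \seqnum{A193641} just mentioned:

```python
def series(num, den, terms):
    """First `terms` coefficients of the power series num(t)/den(t),
    where num and den are coefficient lists and den[0] == 1."""
    c = []
    for k in range(terms):
        nk = num[k] if k < len(num) else 0
        s = sum(den[j] * c[k - j] for j in range(1, min(k, len(den) - 1) + 1))
        c.append(nk - s)
    return c

# (1 - t)/(1 - 2t - t^3): even-length counts |RAI_{2n}(4123, 2341)|
even = series([1, -1], [1, -2, 0, -1], 7)
# (1 - t + t^2)/(1 - 2t - t^3): odd-length counts |RAI_{2n+1}(4123, 2341)|
odd = series([1, -1, 1], [1, -2, 0, -1], 7)
print(even)  # [1, 1, 2, 5, 11, 24, 53]
print(odd)   # [1, 1, 3, 7, 15, 33, 73]
```

The common denominator $1-2t-t^3$ encodes the linear recurrence $a_n=2a_{n-1}+a_{n-3},$ valid for $n$ beyond the degree of the numerator.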
\begin{thm} We have $$\sum_{n\geq 0 }|AI_{2n+1}(4123, 2341)|x^{2n+1}=\sum_{n\geq 0 }|RAI_{2n+1}(4123, 2341)|x^{2n+1}=$$ $$\frac{x^5-x^3+x}{1-2x^2-x^6}, $$ $$ \sum_{n\geq 0 }|RAI_{2n}(4123, 2341)|x^{2n}=\frac{1-x^2}{1-2x^2-x^6},$$ and $$\sum_{n\geq 0 }|AI_{2n}(4123, 2341)|x^{2n}=1+\frac{x^4}{1-x^2}+\frac{(x^5-x^3+x)^2}{(1-2x^2-x^6)\cdot (1-x^2)}.$$ \end{thm} \proof The fact that $|RAI_{2n+1}(4123, 2341)|=|AI_{2n+1}(4123, 2341)|$ follows from Lemma \ref{rc}. Let $\pi \in RAI_{2n+1}(4123, 2341).$ Decompose $\pi$ in connected components as $\pi=\tau_1\tau_2\ldots \tau_k.$ Denote by $\widehat \tau_i$ the standardization of $\tau_i.$ The $\widehat \tau_i$'s are arbitrary connected permutations with $\widehat \tau_i\in \cup_{j\geq 1} RAI_{2j}(4123, 2341)$ for $1\leq i\leq k-1$ (because $\pi$ is reverse alternating) and $\widehat \tau_k\in \cup_{j\geq 0} RAI_{2j+1}(4123, 2341).$ By the previous lemma it follows that the ordinary generating function that counts connected permutations in $\cup_{j\geq 1} RAI_{2j}(4123, 2341)$ (by length) is $F(x)=x^2+x^4+\frac{2x^6}{1-x^2}$ and the ordinary generating function that counts connected permutations in $\cup_{j\geq 0} RAI_{2j+1}(4123, 2341)$ is $G(x)=x+\frac{x^5}{1-x^2}.$ As a consequence $$\sum_{n\geq 0 }|RAI_{2n+1}(4123, 2341)|x^{2n+1}=\frac{G(x)}{1-F(x)}.$$ Trivial algebraic manipulations lead to the desired generating function. Similarly, given $\pi \in RAI_{2n}(4123, 2341)$ we can decompose $\pi$ in connected components as $\pi=\tau_1\tau_2\ldots \tau_k$ where $\widehat \tau_i\in \cup_{j\geq 1} RAI_{2j}(4123, 2341)$ for $1\leq i\leq k.$ As a consequence $$\sum_{n\geq 0 }|RAI_{2n}(4123, 2341)|x^{2n}=\frac{1}{1-F(x)}.$$ The decomposition in connected components of a permutation $\pi \in AI_{2n}(4123, 2341)$ is more subtle than the above ones because these components are not necessarily only of even or odd length. 
However such a permutation $\pi$ can only be \begin{itemize} \item the empty one, \item connected itself, \item of the form $\sigma\tau,$ where $\sigma$ is an arbitrary permutation in $\cup_{j\geq 0} AI_{2j+1}(4123, 2341)$ and $\tau$ is a connected permutation in $\cup_{j\geq 0} RAI_{2j+1}(4123, 2341).$ \end{itemize} By the above results and by the previous lemma we get $$\sum_{n\geq 0 }|AI_{2n}(4123, 2341)|x^{2n}=1+\frac{x^4}{1-x^2}+\frac{x^5-x^3+x}{1-2x^2-x^6}\cdot \frac{x^5-x^3+x}{1-x^2}$$ where the three summands on the right hand side correspond to the three cases above. \endproof \section{Other patterns} The following three conjectures are based on numerical evidence. The first one covers the last open case about the patterns 1243 and 2134 and the second covers the patterns 1432 and 3214. \begin{conj}\label{conjecture1} $$|RAI_{2n}(1243)|=|RAI_{2n}(2134)|=M_n.$$ \end{conj} \begin{conj}\label{conjecture2} $$|AI_{2n}(1432)|=|AI_{2n}(3214)|=|RAI_{2n}(1432)|=|RAI_{2n}(3214)|=M_n,$$ $$|AI_{2n+1}(1432)|=|RAI_{2n+1}(3214)|=M_n,$$ $$|AI_{2n-1}(3214)|=|RAI_{2n-1}(1432)|=M_{n}-M_{n-2}.$$ \end{conj} \begin{conj} Let $\tau$ be any permutation of $\{4,\ldots,m\},$ $m\geq 4.$ Then $$|AI_n(123\tau)|=|AI_n(321\tau)|.$$ \end{conj} A classical subject in pattern avoidance is the study of Wilf-equivalent patterns. Let $\sigma$ and $\tau$ be two patterns and $P_n$ a given subset of $S_n.$ Then $\sigma$ and $\tau$ are said to be \textit{Wilf-equivalent} on $P_n$ if $|P_n(\sigma)|=|P_n(\tau)|.$ A proof of Conjectures \ref{conjecture1} and \ref{conjecture2} would conclude the Wilf-classification of alternating and reverse alternating involutions avoiding a pattern of length 4. In fact, trivial numerical experiments show that the only Wilf-equivalences among all the other patterns not mentioned in the paper are the trivial ones given by the reverse complement map and the inverse map. 
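The numerical evidence supporting Conjectures \ref{conjecture1} and \ref{conjecture2} is easy to reproduce. As an illustrative sketch (our own brute-force code, not part of the paper; reverse alternating means $\pi_1>\pi_2<\pi_3>\cdots$), the following Python script checks the first cases of Conjecture \ref{conjecture1} against the Motzkin numbers:

```python
from itertools import combinations, permutations

def motzkin(n):
    # Motzkin numbers M_0, M_1, ... = 1, 1, 2, 4, 9, 21, 51, ...
    M = [1, 1]
    for k in range(2, n + 1):
        M.append(M[k - 1] + sum(M[i] * M[k - 2 - i] for i in range(k - 1)))
    return M[n]

def standardize(w):
    # replace the i-th smallest entry of w by i
    order = sorted(w)
    return tuple(order.index(x) + 1 for x in w)

def avoids(p, patt):
    return all(standardize(w) != patt for w in combinations(p, len(patt)))

def count_RAI(n, patt):
    # reverse alternating involutions of length n avoiding patt
    total = 0
    for p in permutations(range(1, n + 1)):
        if (all(p[p[i] - 1] == i + 1 for i in range(n))          # involution
                and all(p[i] > p[i + 1] if i % 2 == 0 else p[i] < p[i + 1]
                        for i in range(n - 1))                   # reverse alternating
                and avoids(p, patt)):
            total += 1
    return total

# Conjectured: |RAI_{2n}(1243)| = |RAI_{2n}(2134)| = M_n
counts_1243 = [count_RAI(2 * n, (1, 2, 4, 3)) for n in range(1, 5)]
counts_2134 = [count_RAI(2 * n, (2, 1, 3, 4)) for n in range(1, 5)]
print(counts_1243, counts_2134, [motzkin(n) for n in range(1, 5)])
```

Lengths up to $8$ match the Motzkin numbers, as reported in the conjectures; analogous loops handle the statements of Conjecture \ref{conjecture2}.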
\section*{Acknowledgments} We thank the anonymous referee for his very detailed revision and his valuable suggestions. \addcontentsline{toc}{section}{Bibliography} \bibliographystyle{plain} \begin{thebibliography}{10} \bibitem{Ba6} M.~Barnabei, F.~Bonetti, and M.~Silimbani. \newblock Restricted involutions and {M}otzkin paths. \newblock {\em Adv. in Appl. Math.}, 47(1):102--115, 2011. \bibitem{BIANE} P.~Biane. \newblock Permutations suivant le type d'exc\'edance et le nombre d'inversions et interpr\'etation combinatoire d'une fraction continue de {Heine}. \newblock {\em European J. Combin.}, 14(4):277--284, 1993. \bibitem{Bonaf} M.~B\'{o}na. \newblock On a family of conjectures of {J}oel {L}ewis on alternating permutations. \newblock {\em Graphs Combin.}, 30(3):521--526, 2014. \bibitem{Bona} M.~B\'{o}na, C.~Homberger, J.~Pantone, and V.~Vatter. \newblock Pattern-avoiding involutions: exact and asymptotic enumeration. \newblock {\em Australas. J. Combin.}, 64:88--119, 2016. \bibitem{ChenChenZhou} J.~N. Chen, W.~Y.~C. Chen, and R.~D.~P. Zhou. \newblock On pattern avoiding alternating permutations. \newblock {\em European J. Combin.}, 40:11--25, 2014. \bibitem{Gowravaram} N.~Gowravaram and R.~Jagadeesan. \newblock Beyond alternating permutations: pattern avoidance in {Y}oung diagrams and tableaux. \newblock {\em Electron. J. Combin.}, 20(4):Paper 17, 28, 2013. \bibitem{Jaggard} A.~D. Jaggard. \newblock Prefix exchanging and pattern avoidance by involutions. \newblock {\em Electron. J. Combin.}, 9:Paper 16, 24, 2002/03. \newblock Permutation patterns (Otago, 2003). \bibitem{Ki} S.~Kitaev. \newblock {\em Patterns in Permutations and Words}. \newblock Monographs in Theoretical Computer Science. Springer, 2011. \bibitem{Kn2} D.~E. Knuth. \newblock {\em The Art of Computer Programming}, volume~3. \newblock Addison-Wesley, 1973. \bibitem{Lewis} J.~B. Lewis. \newblock Alternating, pattern-avoiding permutations. \newblock {\em Electron. J. Combin.}, 16(1):Note 7, 8, 2009. 
\bibitem{Lewis2} J.~B. Lewis. \newblock Pattern avoidance for alternating permutations and {Y}oung tableaux. \newblock {\em J. Combin. Theory Ser. A}, 118(4):1436--1450, 2011. \bibitem{Lewis3} J.~B. Lewis. \newblock Generating trees and pattern avoidance in alternating permutations. \newblock {\em Electron. J. Combin.}, 19(1):Paper 21, 21, 2012. \bibitem{Min} S.~Min. \newblock The enumeration of involutions of doubly alternating {B}axter permutations. \newblock {\em J. Chungcheong Math. Soc.}, 34(3):253--257, 2021. \bibitem{Oucth} E.~Ouchterlony. \newblock {\em On {Y}oung tableau involutions and patterns in permutations}. \newblock PhD thesis, Link\"opings Universitet, 2005. \bibitem{Ouc} E.~Ouchterlony. \newblock Pattern avoiding doubly alternating permutations. \newblock {\em Proceedings FPSAC (San Diego 2006)}, 2006. \bibitem{Si} R.~Simion and F.~Schmidt. \newblock Restricted permutations. \newblock {\em European J. Combin.}, 6:383--406, 1985. \bibitem{Sl} N.~J.~A. Sloane. \newblock {The On-Line Encyclopedia of Integer Sequences}. \newblock \url{https://oeis.org/}. \bibitem{Stanleyalt1} R.~P. Stanley. \newblock Alternating permutations and symmetric functions. \newblock {\em J. Combin. Theory Ser. A}, 114(3):436--460, 2007. \bibitem{Stanleyalt2} R.~P. Stanley. \newblock A survey of alternating permutations. \newblock In {\em Combinatorics and graphs}, volume 531 of {\em Contemporary Mathematics}, pages 165--196. Amer. Math. Soc., Providence, RI, 2010. \bibitem{vL} M.~A.~A. van Leeuwen. \newblock The {Robinson-Schensted} and {Sch\"{u}tzenberger} algorithms, an elementary approach. \newblock {\em Electron. J. Combin., Foata Festschrift}, 3, 1996. \bibitem{West} J.~West. \newblock {\em Permutations with restricted subsequences and stack-sortable permutations}. \newblock PhD thesis, M.I.T., 1990. \bibitem{XuYan} Y.~Xu and S.~H.~F. Yan. \newblock Alternating permutations with restrictions and standard {Y}oung tableaux. \newblock {\em Electron. J. 
Combin.}, 19(2):Paper 49, 16, 2012. \bibitem{Yan} S.~H.~F. Yan. \newblock On {W}ilf equivalence for alternating permutations. \newblock {\em Electron. J. Combin.}, 20(3):Paper 58, 19, 2013. \end{thebibliography} \end{document}
2206.13797v3
http://arxiv.org/abs/2206.13797v3
Existence-Uniqueness for nonlinear integro-differential equations with drift in $\mathbb{R}^d$
\documentclass[notitlepage,11pt,reqno]{amsart} \usepackage{amsopn,esint} \usepackage[final]{hyperref} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc}\usepackage{apptools}\AtAppendix{\counterwithin{lemma}{section}} \usepackage[margin=1in]{geometry} \raggedbottom \newcommand*\dif{\,d} \newcommand*\C{\mathbb{C}} \newcommand{\grad}{\triangledown} \newcommand{\calpha}{C^\alpha} \newcommand{\xo}{x_{0}} \newcommand{\yo}{y_{0}} \newcommand{\A}{\alpha} \newcommand{\B}{\beta} \newcommand{\xbar}{\bar{x}} \newcommand{\ybar}{\bar{y}} \newcommand{\laplask}{\mathcal{I}_{\delta}} \newcommand{\laplasK}{\mathcal{I}^{\delta}} \newcommand{\laplas}{(-\bigtriangleup)^{1/2}} \newcommand{\N}{\mathbb{N}} \newcommand{\Rn}{\mathbb{R}^n} \newcommand{\De} {\Delta} \newcommand{\sA}{\mathscr{A}} \newcommand{\SB}{\mathcal{T}} \usepackage{graphicx,enumitem,dsfont,upgreek} \usepackage[dvips]{epsfig} \usepackage[mathscr]{eucal} \usepackage{amscd} \usepackage{amssymb} \usepackage{amsthm} \usepackage{amsmath} \usepackage{latexsym} \usepackage{dsfont} \usepackage{upref} \usepackage{hyperref} \usepackage{color} \theoremstyle{plain} \newtheorem{prob}{Exercise}[section] \newtheorem{thm}{Theorem}[section] \theoremstyle{plain} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{example}{Example}[section] \newtheorem{assumption}{Assumption}[section] \theoremstyle{definition} \newtheorem{defi}{Definition}[section] \newtheorem{rem}{Remark}[section] \newtheorem*{maintheorem*}{Main Theorem} \newtheorem*{maincorollary*}{Main Corollary} \newenvironment{Assumptions}{\setcounter{enumi}{0} \renewcommand{\theenumi}{(\textbf{A}.\arabic{enumi})} \renewcommand{\labelenumi}{\theenumi} \begin{enumerate}}{\end{enumerate} } \newenvironment{Assumptions2} { \setcounter{enumi}{0} \renewcommand{\theenumi}{(\textbf{B}.\arabic{enumi})} \renewcommand{\labelenumi}{\theenumi} \begin{enumerate}} {\end{enumerate} } \newenvironment{Definitions}{\setcounter{enumi}{0}
\renewcommand{\theenumi}{(\textbf{D}.\arabic{enumi})} \renewcommand{\labelenumi}{\theenumi} \begin{enumerate}}{\end{enumerate} } \newcommand{\norm}[1]{\ensuremath{\left\|#1\right\|}} \newcommand{\abs}[1]{\ensuremath{\left|#1\right|}} \newcommand{\Om}{\ensuremath{\Omega}} \newcommand{\pOm}{\ensuremath{\partial \Om}} \newcommand{\TOm}{\ensuremath{Q_T}} \newcommand{\disp}{\ensuremath{\displaystyle}} \newcommand{\bM}{\ensuremath{\mathbf{M}}} \newcommand{\Gw}{\ensuremath{\Gamma_{w}}} \newcommand{\Gs}{\ensuremath{\Gamma_{y}}} \newcommand{\Ga}{\ensuremath{\Gamma_{0}}} \newcommand{\Gb}{\ensuremath{\Gamma_{1}}} \newcommand{\uv}{\ensuremath{\underline{v}}} \newcommand{\ov}{\ensuremath{\overline{v}}} \newcommand{\oG}{\ensuremath{\overline{\gamma}}} \newcommand{\Do}{\ensuremath{D}} \newcommand{\DoGs}{\ensuremath{\Do\bigcup\Gs}} \newcommand{\cDo}{\ensuremath{\overline{\Do}}} \newcommand{\DoT}{\ensuremath{(0,T)\times\Do}} \newcommand{\DoTGs}{\ensuremath{(0,T)\times\left(\Do\bigcup\Gs\right)}} \newcommand{\cDoT}{\ensuremath{(0,T)\times\overline{\Do}}} \newcommand{\DOT}{\ensuremath{[0,T]\times\Do}} \newcommand{\cDOT}{\ensuremath{[0,T]\times\overline{\Do}}} \newcommand{\Oset}{\ensuremath{\mathcal{O}}} \newcommand{\OT}{\ensuremath{\Oset_T}} \newcommand{\Jet}{\ensuremath{\mathcal{P}}} \newcommand{\cJet}{\ensuremath{\overline{\mathcal{P}}}} \newcommand{\Eset}{\ensuremath{\mathcal{E}}} \newcommand{\Kset}{\ensuremath{\mathcal{K}}} \newcommand{\sK}{\mathscr{K}} \newcommand{\oKset}{\ensuremath{\overline{\mathcal{K}}}} \newcommand{\cH}{\ensuremath{\mathcal{H}}} \newcommand{\Gop}{\ensuremath{\mathcal{G}}} \newcommand{\cL}{\ensuremath{\mathcal{L}}} \newcommand{\cI}{\ensuremath{\mathcal{I}}} \newcommand{\sV}{\ensuremath{\mathscr{V}}} \newcommand{\trace}{\rm tr} \newcommand{\Iopsub}{\ensuremath{\mathcal{I}_{\kappa}}} \newcommand{\Iopsup}{\ensuremath{\mathcal{I}^{\kappa}}} \newcommand{\R}{\ensuremath{\mathbb{R}}} \newcommand{\Sy}{\ensuremath{\mathbb{S}}} \newcommand{\E}{\ensuremath{E}} 
\newcommand{\loc}{\mathrm{loc}} \newcommand{\meas}{\mathrm{meas}} \newcommand{\Lenloc}{L_{\mathrm{loc}}^1} \newcommand{\Div}{\mathrm{div}\,} \newcommand{\Grad}{\mathrm{\nabla}} \newcommand{\bA}{\mathbf{A}} \newcommand{\sorder}{\mathfrak{o}} \newcommand{\sL}{\mathscr{L}} \newcommand{\Usm}{\mathscr{U}} \newcommand{\sJ}{\mathscr{J}} \newcommand{\Lsloc}{\ensuremath{L_{\mathrm{loc}}^s}} \newcommand{\Linf}{\ensuremath{L^\infty}} \newcommand{\Linfloc}{\ensuremath{L^\infty_{\mathrm{loc}}}} \newcommand{\rd}{\ensuremath{\R^d}} \newcommand{\dx}{\ensuremath{\, dx}} \newcommand{\dy}{\ensuremath{\, dy}} \newcommand{\dz}{\ensuremath{\, dz}} \newcommand{\ds}{\ensuremath{\, ds}} \newcommand{\dr}{\ensuremath{\, dr}} \newcommand{\supp}{\ensuremath{\mathrm{supp}\,}} \newcommand{\ftn}[1]{\footnote{\bfseries\color{red}#1}} \def\e{{\text{e}}} \def\N{{I\!\!N}} \numberwithin{equation}{section} \allowdisplaybreaks \DeclareMathOperator*{\Argmin}{arg\,min} \usepackage{cancel,pdfsync} \begin{document} \title[Existence-Uniqueness results]{Existence-Uniqueness for nonlinear integro-differential equations with drift in $\rd$} \author{Anup Biswas \& Saibal Khan} \address{Indian Institute of Science Education and Research-Pune, Dr.\ Homi Bhabha Road, Pashan, Pune 411008. Email: {\tt [email protected], [email protected]}} \dedicatory{Dedicated to the memory of Ari Arapostathis (1954-2021)} \begin{abstract} In this article we consider a class of nonlinear integro-differential equations of the form $$\inf_{\tau \in\mathcal{T}} \bigg\{\int_{\mathbb{R}^d} (u(x+y)+u(x-y)-2u(x))\frac{k_{\tau}(x,y)}{|y|^{d+2s}} \,dy+ b_{\tau}(x) \cdot \nabla u(x)+g_{\tau}(x) \bigg\}-\lambda^*=0\quad \text{in} \hspace{2mm} \mathbb{R}^d,$$ where $0<\lambda(2-2s)\leq k_{\tau}\leq \Lambda (2-2s)$ , $s\in (\frac{1}{2},1)$. 
The above equation appears in the study of ergodic control problems in $\rd$ when the controlled dynamics is governed by pure-jump L\'evy processes characterized by the kernels $k_{\tau}\,|y|^{-d-2s}$ and the drift $b_\tau$. Under a Foster-Lyapunov condition, we establish the existence of a unique solution pair $(u, \lambda^*)$ satisfying the above equation, provided we set $u(0)=0$. The results are then extended to cover HJB equations of mixed local-nonlocal type, which significantly improves the results in \cite{ACGZ}. \end{abstract} \keywords{Ergodic control problem, Liouville theorem, regularity, mixed local-nonlocal operators, controlled jump diffusions} \subjclass[2010]{Primary: 35Q93, 35F21, Secondary: 93E20, 35B53} \maketitle \section{Introduction} Our chief goal in this article is to find a pair $(u,\lambda^*)$ that satisfies \begin{align} \label{ergodic_HJB} \inf_{\tau \in\mathcal{T}} \bigg\{\int_{\mathbb{R}^d}\delta (u,x,y)\frac{k_{\tau}(x,y)}{|y|^{d+2s}} \,dy + b_{\tau}(x) \cdot \nabla u(x)+g_{\tau}(x) \bigg\}-\lambda^*=0\quad \text{in} \hspace{2mm} \mathbb{R}^d, \end{align} where $\delta(u,x,y):=u(x+y)+u(x-y)-2 u(x)$ and $\mathcal{T}$ is an indexing set. We impose the following assumptions on the kernel: $x\mapsto k_{\tau}(x,y)$ is continuous uniformly in $(y,\tau)$ and satisfies \begin{align*} k_\tau(x,y)=k_\tau(x,-y),\quad (2-2s)\lambda \leq k_{\tau}(x,y)\leq (2-2s)\Lambda \quad \forall\, x, y \in \mathbb{R}^d, \end{align*} where $s\in (\frac{1}{2},1)$ and $0< \lambda \leq \Lambda$. Let us introduce the following notation. \begin{align}\label{lin-opt} I_{\tau}[u](x)=\int_{\mathbb{R}^d}\delta (u,x,y)\frac{k_{\tau}(x,y)}{|y|^{d+2s}} \,dy;\quad \mathcal{L}_{\tau}[u](x)=I_{\tau}[u](x)+ b_{\tau}(x) \cdot \nabla u(x). \end{align} Also, we write $\omega_s(y)=\frac{1}{1+|y|^{d+2s}}$. \begin{rem} The symmetry property of the kernel $k_\tau$ is not used in this article, but we still make this assumption due to the following reason.
In general, the nonlocal operator of L\'evy type does not have a symmetrized form (cf. \cite[Chapter~3]{Apple}) and this symmetrization (that is, the form involving $\delta(u,x,y)$) is possible when the kernel $k_\tau$ is symmetric in the second variable. Therefore, keeping the assumption of symmetry makes the model physically relevant. \end{rem} One of the main motivations to study \eqref{ergodic_HJB} comes from stochastic ergodic control problems where the random noise in the controlled dynamics corresponds to some $2s$-stable process. More precisely, suppose that the action set (or control set) $\mathcal{T}$ is a metric space and $\Usm$ denotes the collection of all stationary Markov controls, that is, the collection of all Borel measurable functions $v:\mathbb{R}^d\to \mathcal{T}$. This class of functions plays a central role in the study of optimal control problems. Let us also assume that the martingale problem corresponding to the operator $\mathcal{L}^v$, defined by (see \eqref{lin-opt}) $$\mathcal{L}^v [f](x)=\mathcal{L}_{v(x)}[f](x),$$ is well-posed. In particular, for every $v\in\Usm$ there exists a family of probability measures $\{\mathbb{P}^v_x\}_{x\in\mathbb{R}^d}$ on $\mathbb{D}([0, \infty), \mathbb{R}^d)$, the space of c\`{a}dl\`{a}g functions on $[0, \infty)$ taking values in $\mathbb{R}^d$, such that $(\mathbb{P}^v_x, X^v)$, where $X^v$ denotes the canonical coordinate process, solves the martingale problem. One nevertheless needs to impose certain regularity hypotheses on the kernels $k_\tau$ to guarantee well-posedness of the martingale problem, see for instance \cite{CKS12,Kom84}. Let $\rd\times\mathcal{T}\ni (x, \tau)\mapsto g_\tau(x)$ denote the running cost; the goal is to minimize the ergodic cost criterion $$\sJ[v]:=\limsup_{T\to\infty}\, \frac{1}{T} \mathbb{E}^v_x\left[\int_0^T g_{v}(X^v_t)\, dt\right],$$ over $\Usm$. We denote the optimal value by $\lambda^*$. It is then expected that the optimal value $\lambda^*$ would satisfy \eqref{ergodic_HJB} (cf.
\cite{ACGZ,red-book,Bor89,GM92,FS06}) and the measurable selectors of \eqref{ergodic_HJB} would be the optimal controls in $\Usm$. Though the analogous problem for the local case (that is, $s=1$) has been investigated extensively (see \cite{red-book} and references therein), the study of equation \eqref{ergodic_HJB} has remained open. Recently, the ergodic control problem in $\rd$ with a dispersal type nonlocal kernel was considered by Br\"{a}ndle \& Chasseigne in \cite{BC19}, whereas Barles et al. \cite{BCCI14} studied the ergodic control problem for mixed integro-differential operators in the periodic setting. In this article, we establish the existence and uniqueness of a solution to \eqref{ergodic_HJB} under a Foster-Lyapunov type condition. We say a function $f:\rd\to \R$ is inf-compact (or coercive) if for every $\kappa\in\R$ the sublevel set $\{f\leq \kappa\}$ is either empty or compact. \begin{assumption} We make the following assumptions on the coefficients. \begin{itemize}\label{assumptions} \item [(\hypertarget{A1}{A1})] There exist $V\in C^2(\mathbb{R}^d)$, $V\geq 0$, and a function $0\leq h\in C(\mathbb{R}^d)$, both inf-compact, such that \begin{align}\label{Lyap} \sup_{\tau \in\mathcal{T}}\mathcal{L}_{\tau} [V](x)\leq k_0-h(x) \quad x\in\rd, \end{align} for some $k_0>0$. This is called the Foster-Lyapunov stability condition. The above condition also implies that $V\in L^1(\omega_s)$.
\item[(\hypertarget{A2}{A2})] $\sup_{\tau\in\mathcal{T}}|g_{\tau}(x)|\leq h(x)$ and $\sup_{\tau \in\mathcal{T}}|g_{\tau}|\in\sorder(h)$, that is, $$\lim_{|x|\to\infty} \frac{1}{1+h(x)}\sup_{\tau \in\mathcal{T}}|g_\tau(x)|=0.$$ \item[(\hypertarget{A3}{A3})] For some $\upmu\geq 0$ we have $V^{1+\upmu}\in L^1(\omega_s)$ and \begin{align} \sup_{\tau \in\mathcal{T}}\left[\frac{|b_\tau|}{(1+V)^{(2s-1)\upmu}} + \frac{|g_{\tau}|}{(1+V)^{1+2s\upmu}}\right]&\leq C,\label{EA.3A} \\ \sup_{x, y}\frac{V(x+y)}{(1+V(x))(1+V(y))} + \limsup_{|x|\to\infty}\; \frac{1}{1+V(x)}\sup_{|y-x|\leq 1} V(y)& \leq C.\label{EA.3B} \end{align} \item[(\hypertarget{A4}{A4})] The map $x\mapsto k_{\tau}(x,y)$ is uniformly continuous, uniformly in $\tau$ and $y$, that is, \begin{align*} |k_{\tau}(x_1,y)-k_{\tau}(x_2,y)|\leq \varrho(|x_1-x_2|)\quad \forall x_1,x_2 \in \mathbb{R}^d,\; \tau\in \mathcal{T}, y\in\rd, \end{align*} for some modulus of continuity $\varrho$. \end{itemize} \end{assumption} Note that \eqref{EA.3A} allows $b_\tau, g_{\tau}$ to be unbounded. Since $V$ is inf-compact, \eqref{EA.3A} holds for bounded $b_\tau, g_{\tau}$. This condition will be useful to find a growth bound on the H\"{o}lder norm of the solution $u$ in the unit ball $B_1(x)$. Condition \eqref{EA.3B} requires $V$ to have polynomial growth. For $C^{2s+\upkappa}$ regularity of solutions we shall often impose the following condition on the coefficients. \begin{itemize} \item[(\hypertarget{A5}{A5})] For every compact set $K\subset\rd$, there exists $\tilde\alpha\in (0, 1)$ so that $$\sup_{\tau\in \mathcal{T}}\abs{b_\tau(x)-b_\tau(x')} + \sup_{\tau\in \mathcal{T}}\abs{g_{\tau}(x)-g_{\tau}(x')} + \sup_{\tau \in\mathcal{T}}\,\sup_{y\in\rd} |k_{\tau}(x, y)-k_{\tau}(x', y)|\leq C_K|x-x'|^{\tilde\alpha}.$$ \end{itemize} It can be easily seen that Assumption~\ref{assumptions} holds for $2s$-fractional Ornstein-Uhlenbeck type operators.
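As a small numerical illustration of the drift part of (\hyperlink{A1}{A1}) in a one-dimensional Ornstein-Uhlenbeck-type model (the concrete choices $b(x)=-x$, $V(x)=|x|^{\upgamma}$ for $|x|\geq 1$, and the candidate $h$ below are ours, purely for illustration): the drift term alone already produces an inf-compact $h$, while the nonlocal term is only of order $|x|^{\upgamma-2s}$ by the computation in Example~\ref{Eg1.1} and can be absorbed into the constant $k_0$.

```python
import numpy as np

# Illustrative 1-d Ornstein-Uhlenbeck-type data (choices are ours):
# b(x) = -x and V(x) = |x|^g for |x| >= 1, with g in (0, 2s), s in (1/2, 1).
s, g = 0.9, 1.5

def V(x):  return np.abs(x) ** g
def dV(x): return g * np.sign(x) * np.abs(x) ** (g - 1)
def b(x):  return -x

x = np.linspace(1.0, 50.0, 500)   # outside the unit ball, where V(x) = |x|^g
drift = b(x) * dV(x)              # = -g |x|^g
h = 0.5 * g * V(x)                # candidate inf-compact h
k0 = 1.0

# Drift part of the Foster-Lyapunov inequality: b . grad V <= k0 - h.
assert np.all(drift <= k0 - h)

# The nonlocal part is O(|x|^(g - 2s)), of lower order than h ~ |x|^g,
# so it can be absorbed into k0 (cf. Example 1.1 below).
print(np.max(x ** (g - 2 * s)))   # -> 1.0, attained at the left endpoint x = 1
```

Only the local term is checked here; the full inequality additionally needs the bound on $I_\tau[V]$ worked out in Example~\ref{Eg1.1}.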
Such operators are used by Fujita, Ishii \& Loreti \cite{FIL06} to study the large time behaviour of solutions to certain (local) Hamilton-Jacobi equations. Recently, Chasseigne, Ley \& Nguyen \cite{CLN19} consider an Ornstein-Uhlenbeck type drift with a tempered $2s$-stable nonlocal kernel to study Lipschitz regularity of the solutions. In Example~\ref{Eg1.1} below we provide a large class of functions satisfying (\hyperlink{A1}{A1})--(\hyperlink{A3}{A3}). One of the main results of this article is as follows. \begin{thm}\label{T1.1} Suppose that (\hyperlink{A1}{A1})--(\hyperlink{A5}{A5}) hold. Then there exists a unique pair $(u, \lambda^*)\in (C(\rd)\cap \sorder(V))\times\mathbb{R}$ satisfying \begin{equation}\label{ET1.1A} \inf_{\tau \in\mathcal{T}}(\cL_{\tau} u + g_{\tau}) -\lambda^*=0\quad \text{in}\; \rd, \quad u(0)=0. \end{equation} \end{thm} By a solution we shall always mean a viscosity solution in the sense of Caffarelli-Silvestre \cite{CS09} (see also \cite{BI08,BCI08}), unless mentioned otherwise. \begin{defi}[Viscosity solution] A function $u:\rd\to \R$, upper (lower) semicontinuous on $\bar{D}$ for a domain $D$ and with $u\in L^1(\omega_s)$, is said to be a viscosity subsolution (supersolution) to $$\inf_{\tau \in\mathcal{T}}(\cL_{\tau} u + g_{\tau})=0\quad \text{in}\; D,$$ written as $\inf_{\tau \in\mathcal{T}}(\cL_{\tau} u + g_{\tau})\geq 0$ ($\inf_{\tau \in\mathcal{T}}(\cL_{\tau} u + g_{\tau})\leq 0$), if the following holds: whenever $x\in D$ is a point, $N_x$ is a neighbourhood of $x$ in $D$, and $\varphi\in C^2(\bar{N}_x)$ is a function such that $\varphi-u$ attains its minimum value $0$ in $N_x$ at the point $x$, then, letting \[ v(y):=\left\{ \begin{array}{ll} \varphi(y) & \text{for}\; y\in N_x, \\[2mm] u(y) & \text{otherwise}, \end{array} \right. \] we have $\inf_{\tau \in\mathcal{T}}(\cL_{\tau} v + g_{\tau})\geq 0$ ($\inf_{\tau \in\mathcal{T}}(\cL_{\tau} v + g_{\tau})\leq 0$, resp.). We say $u$ is a viscosity solution if it is both a subsolution and a supersolution.
\end{defi} In practice, there are two types of situations that are normally considered to study ergodic control problems. These are (i) the near-monotone condition, and (ii) the stability condition (cf. \cite{ACGZ,red-book}). Under the near-monotone setting, it is assumed that the limiting value of $\inf_{\tau\in\mathcal{T}}g_\tau$ at infinity is strictly bigger than $\lambda^*$ (cf. (2.3) in \cite{ACGZ}). The classical linear-quadratic problems fall under this setting. Due to this near-monotone condition we expect the optimal Markov control, if it exists, to be {\it stable}. In the second situation (that is, under the stability condition), a blanket stability condition (same as \eqref{Lyap}) is imposed on the drifts but no growth condition at infinity is imposed on the running cost $g_\tau$. Though \eqref{ET1.1A} has been extensively studied in the local case, the nonlocal version remained unsettled, mainly due to the unavailability of certain regularity estimates and an appropriate Harnack type inequality for general nonlinear integro-differential operators. To understand the difficulty in proving Theorem~\ref{T1.1} we recall the three main steps of the proof: \begin{itemize} \item[(a)] Find a solution $w_\alpha$ to the $\alpha$-discounted problem for $\alpha\in (0,1)$; \item[(b)] Establish the convergence of $\bar{w}_\alpha(x):= w_\alpha(x)-w_\alpha(0)$ to a solution $u$ of \eqref{ET1.1A}; \item[(c)] Show that $(u, \lambda^*)$ is unique. \end{itemize} For (a), we consider a more general class of discounted problems given by $$\cI w(x)=\inf_{\tau \in\mathcal{T}}\bigl(\cL_{\tau} w + c_{\tau}(x) w(x)+ g_{\tau}(x)\bigr)=0\quad \text{in}\; \rd. $$ Fixing $c_\tau=-\alpha$ we obtain the $\alpha$-discounted problem. It is possible to weaken Assumption~\ref{assumptions} substantially to solve the above equation (see Assumption~\ref{A2.1} in Section~\ref{S-alpha}). In particular, we prove the following result. \begin{thm}\label{T1.2} Suppose that (\hyperlink{B1}{B1})--(\hyperlink{B4}{B4}) hold.
Then there exists $w\in C(\rd)\cap\sorder(\sV)$, $\sV$ given by \eqref{Lyap-alpha}, satisfying \begin{equation}\label{ET1.2A} \inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} w + c_{\tau} w + g_{\tau}\Bigr) =0\quad \text{in}\; \rd. \end{equation} In addition, if (\hyperlink{B5}{B5}) holds, then $w\in C^{2s+}_{\rm loc}(\rd)$ and it is the unique solution in the class $\sorder(\sV)$. \end{thm} Compared with the existing literature, the existence, uniqueness and regularity results in Theorem~\ref{T1.2} appear to be new. For instance, most of the existence results in unbounded domains consider the periodic setting, see Barles et al. \cite{BCCI14}, Ciomaga, Ghilli \& Topp \cite{CGT22}. Under the assumption that the coefficients are Lipschitz with sublinear growth, Jakobsen \& Karlsen \cite{JK06} establish a comparison principle for bounded viscosity solutions to mixed L\'evy-It\^{o} type Isaacs equations. Biswas, Jakobsen \& Karlsen \cite{BJK10} study the existence-uniqueness of viscosity solutions for parabolic integro-differential equations of L\'evy-It\^{o} type under the assumption that the L\'evy kernel has an exponentially decaying tail and the coefficients of \eqref{ET1.2A} are Lipschitz. The existence-uniqueness result in Theorem~\ref{T1.2} is obtained for a purely nonlocal equation that is not of L\'evy-It\^{o} type. A similar existence-uniqueness result for mixed integro-differential operators is established in Section~\ref{S-mixed}. In a recent work, Meglioli \& Punzo \cite{MP22} consider a linear equation involving the $2s$-fractional Laplacian and a $C^1$ drift and study the uniqueness of solutions. The methodology in \cite{MP22} is variational in nature and cannot be adapted for operators of the form \eqref{ET1.2A} (see Remark~\ref{R2.1} for a detailed comparison). The regularity result plays a crucial role in obtaining the uniqueness.
Two main ingredients to find $C^{2s+}$ regularity are $C^{1, \gamma}$ regularity of $w$ and the growth bound on the H\"{o}lder norm of $w$ in $B_1(x)$ (see Lemma~\ref{L2.6}). We establish the $C^{1, \gamma}$ regularity for Isaacs type equations without requiring the coefficients to be H\"{o}lder continuous (see Theorem~\ref{c_1,gamma_regularity} below for more detail). There is a large body of work dealing with the H\"{o}lder regularity of Isaacs type integro-differential equations. Many of these works are based on the Ishii-Lions method and therefore, they require the coefficients to be H\"{o}lder continuous and $c_\tau$ to be negative, see for instance, \cite[Corollary~4.1]{BCI11}, \cite{BCCI12,CLN19}, \cite[Theorem~3.1]{CGT22}. We do not require any such condition since our method is based on the scaling argument and Liouville theorem used by Serra in \cite{Serra2015,Serra_Parabolic}. Since $c_\tau$ is negative (by (\hyperlink{B1}{B1})) we can solve \eqref{ET1.2A} in bounded domains, for instance, the ball $B_n(0)$, with a Dirichlet exterior condition. Then, applying the local H\"{o}lder estimate from \cite{Schwab_Silvestre} one can pass to the limit, as $n\to\infty$, to obtain a solution of \eqref{ET1.2A}, provided the solutions in the bounded domains are dominated by a fixed barrier function. This is required to pass to the limit inside the nonlocal operator. We show that $\sV$ (see \eqref{Lyap-alpha}) can be used as a barrier function. To establish uniqueness, we first show that $w\in C^{2s+}_{\rm loc}(\rd)$ and therefore, it is a classical solution to \eqref{ET1.2A}. Recall that the $C^{2s+}$ estimate of $w$ requires global H\"{o}lder regularity of $w$ and local H\"{o}lder regularity of $\grad w$ (cf. \cite{Serra2015}). To attain this goal we first establish a $C^{1, \gamma}$ estimate for $w$. This is done in Section~\ref{S-regu}. Once we have $C^{2s+}$ regularity (see Lemma~\ref{L2.6}) we can couple two solutions and use the barrier function $\sV$ to establish uniqueness.
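Throughout, the nonlocal part of the operator is handled through the symmetrized difference $\delta(u,x,y)=u(x+y)+u(x-y)-2u(x)$. The following small numerical sketch (one dimension, constant kernel $k\equiv 2-2s$, test function $\cos$; all concrete choices are ours and purely illustrative) shows the operator acting on $\cos$ as multiplication by a negative constant, in line with the Fourier symbol $-c\,|\xi|^{2s}$ of a constant-coefficient fractional operator.

```python
import numpy as np

s = 0.75
k = 2 - 2 * s        # constant kernel within the bounds (2-2s)*lambda <= k <= (2-2s)*Lambda

def nonlocal_op(u, x):
    """Quadrature for  k * int_R delta(u,x,y) / |y|^{1+2s} dy  in one dimension.
    delta(u,x,y) = u(x+y) + u(x-y) - 2u(x) ~ u''(x) y^2 near y = 0, so the
    integrand is integrable; by symmetry in y we integrate over y > 0 and double."""
    y = np.logspace(-6, 3, 20000)
    delta = u(x + y) + u(x - y) - 2 * u(x)
    f = delta / y ** (1 + 2 * s)
    return 2 * k * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

# delta(cos, x, y) = 2 cos(x)(cos(y) - 1), so the operator returns -c * cos(x)
# for a fixed c > 0: the ratio below is a negative constant independent of x.
r1 = nonlocal_op(np.cos, 0.3) / np.cos(0.3)
r2 = nonlocal_op(np.cos, 1.1) / np.cos(1.1)
print(r1, r2)
```

The exact factorization $\delta(\cos,x,y)=2\cos(x)(\cos y-1)$ makes the ratio independent of $x$ up to quadrature round-off, whatever the truncation of the integral.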
Coming back to Theorem~\ref{T1.1}, we apply Theorem~\ref{T1.2} to obtain a solution $w_\alpha$ of the $\alpha$-discounted problem and define the normalized function $\bar{w}_\alpha(x)=w_\alpha(x)-w_\alpha(0)$. Note that $$ \inf_{\tau \in\mathcal{T}}(\cL_{\tau} \bar{w}_\alpha + g_{\tau}) -\alpha \bar{w}_\alpha -\alpha w_\alpha(0)=0\quad \text{in}\; \rd, \quad \bar{w}_\alpha(0)=0. $$ Thus to complete step (b) we only need to find a convergent subsequence of $\{\bar{w}_\alpha\}$, as $\alpha\to 0$, and pass to the limit in the above equation. The equicontinuity of the family $\{\bar{w}_\alpha\}$ is generally obtained by employing a generalized Harnack type estimate (cf. \cite[Theorem~3.3]{ACGZ},\cite[Lemma~3.6.3]{red-book}). But, to the best of our knowledge, no Harnack type estimate is available for nonlinear integro-differential operators with gradient and zeroth order terms. Therefore, we devise a different method. We again use the Lyapunov function $V$ in \eqref{Lyap} and show that $|\bar{w}_\alpha|\leq \kappa + V$ in $\rd$ for some suitable constant $\kappa$ (see Lemma~\ref{L3.1}). Now applying the regularity result of \cite{Schwab_Silvestre} we can establish the equicontinuity of $\{\bar{w}_\alpha\}$. For step (c), we again show that $u\in C^{2s+}_{\rm loc}(\rd)$ and then use the fact that $u\in\sorder(V)$. It is worth pointing out that equation \eqref{ET1.1A} is not strictly monotone, which makes the uniqueness tricky. There are only a few works dealing with the uniqueness of non-monotone operators in bounded domains, see Caffarelli \& Silvestre \cite{CS09}, Mou \& \'{S}wi\k{e}ch \cite{Mou-Swiech}. The $C^{2s+}$ regularity property is crucially used in the proof of uniqueness. Note that once we have $C^{1, \gamma}$ regularity, the $C^{2s+}$ estimate follows from Serra \cite{Serra2015} since \eqref{ET1.1A} is of concave type. A similar approach does not work for Isaacs type equations.
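The vanishing-discount passage in steps (a)-(b) has an elementary finite-state analogue that may help fix ideas (the two-state chain and costs below are our toy choices, unrelated to the operators of this paper). For a discrete-time Markov reward process with transition matrix $P$ and running cost $g$, the $\beta$-discounted value solves $w=g+\beta Pw$; as $\beta\uparrow 1$, $(1-\beta)w$ converges to the average cost $\lambda^*$ and the normalized value $w-w(0)\mathbf{1}$ to a solution of the discrete Poisson equation, mirroring the normalization $u(0)=0$ above.

```python
import numpy as np

# Toy two-state Markov reward process (all data are illustrative choices).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
g = np.array([1.0, 3.0])

# Stationary distribution and average (ergodic) cost lambda*.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi = pi / pi.sum()
lam_star = float(pi @ g)            # = 5/3 for this chain

for beta in (0.99, 0.999, 0.9999):
    w = np.linalg.solve(np.eye(2) - beta * P, g)   # discounted value: w = g + beta P w
    w_bar = w - w[0]                               # normalization "u(0) = 0"
    print(beta, (1 - beta) * w[0], w_bar[1])

# As beta -> 1: (1 - beta) w -> lambda*, and w_bar approximately solves the
# (discrete) Poisson equation  w_bar + lambda* = g + P w_bar.
```

With controls present, the linear solve is replaced by value iteration over the infimum, which is the finite-state shadow of solving the $\alpha$-discounted HJB equation in step (a).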
Interestingly, the above approach can be generalized to study HJB equations for a larger family of integro-differential operators. In particular, we consider the operator $$\cI u(x) =\inf_{\tau\in\mathcal{T}}\left[ {\rm tr}{(a_\tau(x)D^2u)} + \breve I_\tau [u](x) + b_\tau(x)\cdot\grad u + g_\tau \right],$$ where $\breve I_\tau$ is a general L\'evy type nonlocal operator (see Section~\ref{S-mixed}). We show that the above approach extends to this class of operators and we obtain the following result in Section~\ref{S-mixed}. \begin{thm}\label{T1.3} Suppose that Assumptions~\ref{A4.1} and ~\ref{A4.2} hold. Then there exists a unique pair $(u, \lambda^*)\in\sorder(V)\times\mathbb{R}$ satisfying \begin{equation}\label{ET4.1A} \cI u(x) -\lambda^*=0\quad \text{in}\; \rd, \quad u(0)=0. \end{equation} \end{thm} Theorem~\ref{T1.3} should be compared with Arapostathis et al. \cite{ACGZ} where a similar problem is considered for nonlocal kernels having compact support and finite measure. We conclude the introduction with an example satisfying Assumption~\ref{assumptions}. This example is inspired by \cite{ABC16}. \begin{example}\label{Eg1.1} Consider a function $0\leq V\in C^2(\mathbb{R}^d)$ satisfying $V(x)=|x|^\upgamma$ for $|x|\geq 1$, where $\upgamma\in (0,2s)$. We observe that for any $|x|\leq 1$, $|V(x)|\leq C$ and therefore $V(x)\leq C+|x|^\upgamma$ for all $x \in \mathbb{R}^d$. All the inequalities below hold up to a constant. \noindent\textbf{Case-1.} Let $x\in B_R$ with $R>2$. Then \begin{align*} \bigg|\int_{B_{R+1}}\delta(V,x,y)\frac{k_{\tau}(x,y)}{|y|^{d+2s}} \,dy\bigg|\leq \norm{D^2V}_{L^\infty(B_{2R+1})} \int_{{B_{R+1}}}\frac{|y|^2}{|y|^{d+2s}}\,dy =\norm{D^2V}_{L^\infty(B_{2R+1})} (R+1)^{(2-2s)}. \end{align*} Now observe that for $|y|\geq R+1$ and $|x|\leq R$, we have $|x\pm y|\geq 1$.
Therefore \begin{align*} \bigg|\int_{B^c_{R+1}}\delta(V,x,y)\frac{k_{\tau}(x,y)}{|y|^{d+2s}} \,dy\bigg|=&\bigg|\int_{B^c_{R+1}}\frac{|x+y|^\upgamma+|x-y|^\upgamma-2V(x)}{|y|^{d+2s}} \,dy\bigg| \\ \lesssim &\int_{B^c_{R+1}}\frac{2|y|^\upgamma +2(|x|^\upgamma+|V(x)|)}{|y|^{d+2s}} \,dy \\ \lesssim & (R+1)^{\upgamma -2s} + (R^\upgamma +\norm{V}_{L^\infty(B_{R})})R^{-2s}. \end{align*} \noindent\textbf{Case-2.} Let $x \in B^c_R$. Then \begin{align}\label{E1.1A} \bigg|\int_{B_{\frac{R}{2}}}\delta(V,x,y)\frac{k_{\tau}(x,y)}{|y|^{d+2s}} \,dy\bigg|\leq \norm{D^2V}_{L^\infty(\mathbb{R}^d)}C_R. \end{align} Now to compute the integration on $B^c_{\frac{R}{2}}$ we define \begin{align*} J_{\tau}:=\int_{\frac{R}{2}\leq |y|\leq \frac{|x|}{2}}\delta(V,x,y) \frac{k_{\tau}(x,y)}{|y|^{d+2s}} \,dy . \end{align*} Note that $|y|\leq \frac{|x|}{2}\implies |x\pm y|\geq |x|-\frac{|x|}{2}\geq \frac{|x|}{2}\geq \frac{R}{2} \geq 1$. Therefore, \begin{align*} J_{\tau}&=\int_{\frac{R}{2}\leq |y|\leq \frac{|x|}{2}}(|x+ y|^\upgamma+|x-y|^\upgamma -2|x|^\upgamma) \frac{k_{\tau}(x,y)}{|y|^{d+2s}} \,dy \\ &= |x|^{\upgamma-2s}\int_{ \frac{R}{2|x|}\leq |z|<\frac{1}{2}}\bigg(\bigg|\frac{x}{|x|}+ z\bigg|^\upgamma+\bigg|\frac{x}{|x|}-z\bigg|^\upgamma -2\bigg) \frac{k_{\tau}(x,|x|z)}{|z|^{d+2s}} \,dz. \end{align*} Now observe that for any $|z|<\frac{1}{2}$ we have $\big|\frac{x}{|x|}\pm z\big|\geq 1 -|z|\geq \frac{1}{2}$. Therefore we can apply Taylor's formula to conclude \begin{align}\label{E1.1B} J_{\tau}\leq \tilde{C}|x|^{\upgamma-2s} \int_{ |z|<\frac{1}{2}}\frac{|z|^2}{|z|^{d+2s}}\,dz\leq C |x|^{\upgamma-2s}. \end{align} Let us denote by $\tilde{\delta}(x,y)=|x+y|^\upgamma+|x-y|^\upgamma-2|x|^\upgamma$ and observe that \begin{align}\label{E1.1C} \int_{\frac{|x|}{2}\leq |y|}\delta(V,x,y)\frac{k_{\tau}(x,y)}{|y|^{d+2s}}\,dy=\int_{\frac{|x|}{2}\leq |y|}\Big(\delta(V,x,y)-\tilde{\delta}(x,y)\Big) \frac{k_{\tau}(x,y)}{|y|^{d+2s}}\,dy+\int_{\frac{|x|}{2}\leq |y|}\tilde{\delta}(x,y)\frac{k_{\tau}(x,y)}{|y|^{d+2s}}\,dy.
\end{align} Furthermore, as for $|x|<2 |y|$ we have $||x\pm y|^\upgamma-|x|^\upgamma|\leq 9 |y|^\upgamma$, it can be easily seen that \begin{align*} \bigg|\int_{\frac{|x|}{2}\leq |y|}\tilde{\delta}(x,y)\frac{k_{\tau}(x,y)}{|y|^{d+2s}}\,dy\bigg|\leq C \int_{\frac{|x|}{2}\leq |y|} \frac{|y|^\upgamma}{|y|^{d+2s}}\,dy\leq C|x|^{\upgamma-2s}. \end{align*} Now to calculate the first integral on the right-hand side of \eqref{E1.1C}, we observe that if both $|x+y|$ and $|x-y|$ are bigger than $1$, then $\delta(V,x,y)-\tilde{\delta}(x,y)=0$. Again, if $|x + y|\leq 1$ (or $|x - y|\leq 1$) then \begin{align*} \frac{|x|}{2}\leq |y|\leq |x|+|x+y|\leq |x|+1\leq \frac{3|x|}{2}, \end{align*} and hence, \begin{align}\label{E1.1D} \int_{\frac{|x|}{2}\leq |y|}\Big|\delta(V,x,y)-\tilde{\delta}(x,y)\Big|\frac{k_{\tau}(x,y)}{|y|^{d+2s}}\,dy\leq &\int_{\frac{|x|}{2}\leq |y|\leq \frac{3|x|}{2}}|\delta(V,x,y)-\tilde{\delta}(x,y)|\frac{k_{\tau}(x,y)}{|y|^{d+2s}}\,dy\nonumber \\ &\lesssim |x|^{\upgamma-2s-d} \int_{\frac{|x|}{2}\leq |y|\leq \frac{3|x|}{2}} \,dy \leq C|x|^{\upgamma-2s}. \end{align} Choose $\theta\geq 0, \upgamma\in (s+\frac{1}{2}, 2s)$ satisfying $$\theta+\upgamma-1>0,\quad \theta<(2s-\upgamma)(2s-1) <(\upgamma-1)(2s-1).$$ Set $\upmu=\frac{\theta}{\upgamma(2s-1)}$. Now suppose that $b_{\tau}(x)\cdot x\leq -|x|^{\theta+1}$ outside a fixed compact set independent of $\tau$ and $$\sup_{\tau\in \mathcal{T}}\abs{b_\tau}\leq C(1+|x|^\theta),\quad \sup_{\tau\in \mathcal{T}}|g_{\tau}|\leq C(1+ |x|^{\frac{2s\theta}{2s-1}}),$$ then \begin{equation}\label{E1.1E} \limsup_{|x|\to\infty}\sup_{\tau \in\mathcal{T}}\frac{b_{\tau}(x)\cdot \nabla V(x)} {|x|^{\theta+\upgamma-1}}<0. \end{equation} Thus, fixing $R=4$ in the above calculations, we get from \eqref{E1.1A}, \eqref{E1.1B}, \eqref{E1.1D} and \eqref{E1.1E} that \begin{align*} \sup_{\tau \in\mathcal{T}}\mathcal{L}_{\tau} [V](x)\leq k_0-k_1 |x|^{\theta+\upgamma-1}, \end{align*} for some suitable constants $k_0, k_1$.
Therefore to arrive at \eqref{Lyap}, we can take $h(x)=k_1 |x|^{\theta+\upgamma-1}$. Moreover, since $\upgamma(1+\upmu)<2s$ we have $V^{1+\upmu}\in L^1(\omega_s)$. By our choice of $\theta, \upmu$ it is easily seen that \eqref{EA.3A} holds and $V$ satisfies \eqref{EA.3B}. \end{example} \section{ \texorpdfstring{$\alpha$}{a}-discounted HJB Equation with Rough Kernel}\label{S-alpha} In this section we prove the existence of a unique classical solution for the discounted problem. The results of this section will be proved under a weaker setting compared to Assumption~\ref{assumptions}. In this section we consider operators of the form \begin{equation}\label{E2.1} \cI u(x)=\inf_{\tau \in\mathcal{T}}\bigl(\cL_{\tau} u + c_{\tau}(x) u(x)+ g_{\tau}(x)\bigr). \end{equation} Note that $c_{\tau}=-\alpha$ corresponds to the $\alpha$-discounted problem. \begin{assumption}\label{A2.1} We impose the following assumptions on the coefficients of equation \eqref{E2.1}. \begin{itemize} \item[(\hypertarget{B1}{B1})] For some positive constant $c_\circ$ we have $$\sup_{\tau\in \mathcal{T}}c_{\tau}(x)\leq -c_\circ\quad \forall\, x\in\rd.$$ \item[(\hypertarget{B2}{B2})] There exist an inf-compact $\sV\in C^2(\mathbb{R}^d)$, $\sV\geq 0$, and a positive inf-compact function $h\in C(\mathbb{R}^d)$ such that \begin{align}\label{Lyap-alpha} \sup_{\tau \in\mathcal{T}} (\mathcal{L}_{\tau}\sV(x) + c_{\tau}(x) \sV(x)) \leq k_0 1_{\sK} - h(x),\quad x\in\rd, \end{align} for some $k_0>0$ and a compact set $\sK$. In addition, $\sup_{\tau \in\mathcal{T}}|g_{\tau}|\leq h$ and \begin{equation}\label{EA2.1A} \lim_{|x|\to\infty}\,\frac{\sup_{\tau \in\mathcal{T}}|g_{\tau}(x)|}{h(x)}=0.
\end{equation} \item[(\hypertarget{B3}{B3})] For some $\upmu\geq 0$ we have $\sV^{1+\upmu}\in L^1(\omega_s)$ and \begin{align} \sup_{\tau \in\mathcal{T}}\left[\frac{|b_\tau|}{(1+\sV)^{(2s-1)\upmu}} + \frac{|g_{\tau}|}{(1+\sV)^{1+2s\upmu}} + \frac{|c_{\tau}|}{(1+\sV)^{2s\upmu}}\right]&\leq C,\label{EA2.1B} \\ \sup_{x, y}\frac{\sV(x+y)}{(1+\sV(x))(1+\sV(y))} + \limsup_{|x|\to\infty}\; \frac{1}{1+\sV(x)}\sup_{|y-x|\leq 1} \sV(y)& \leq C.\label{EA2.1C} \end{align} \item[(\hypertarget{B4}{B4})] The maps $x\mapsto k_{\tau}(x,y), b_\tau(x), g_{\tau}(x), c_{\tau}(x)$ are locally uniformly continuous and locally bounded, uniformly in $\tau, y$. \item[(\hypertarget{B5}{B5})] For every compact set $K\subset\rd$, there exist $\tilde\alpha\in (0, 1)$ and a constant $C_K$ so that \begin{align*} \sup_{\tau\in \mathcal{T}}\abs{b_\tau(x)-b_\tau(x')} + \sup_{\tau\in \mathcal{T}}(\abs{g_{\tau}(x)-g_{\tau}(x')}+ \abs{c_{\tau}(x)-c_{\tau}(x')}) &+ \sup_{\tau\in \mathcal{T}}\,\sup_{y\in\rd} |k_{\tau}(x, y)-k_{\tau}(x', y)| \\ &\leq C_K|x-x'|^{\tilde\alpha}. \end{align*} \end{itemize} \end{assumption} Assumptions (\hyperlink{B1}{B1}) and (\hyperlink{B4}{B4}) are standard, whereas (\hyperlink{B5}{B5}) is generally imposed to obtain $C^{2s+\upkappa}$ regularity of the solutions. Let us now present a class of examples that satisfy \eqref{Lyap-alpha}, \eqref{EA2.1B} and \eqref{EA2.1C}. \begin{example} We recall the notations from Example~\ref{Eg1.1}. Suppose that \begin{equation}\label{Eg2.1A} \sup_{\tau\in \mathcal{T}}(b_\tau\cdot x)_+\leq C_0 |x|^{\upsigma + 1}\quad \text{where}\; \upsigma\in [0, 1]. \end{equation} Choose $\upgamma\in (0, 2s)$, requiring additionally that $c_\circ > \upgamma C_0$ when $\upsigma=1$, where $c_\circ$ is given by (\hyperlink{B1}{B1}). Let $0\leq \sV\in C^2(\mathbb{R}^d)$ be such that $\sV(x)=|x|^\upgamma$ for $|x|\geq 1$.
Then the calculation in Example~\ref{Eg1.1} reveals that \begin{equation}\label{Eg2.1B} \sup_{\tau\in \mathcal{T}}\int_{\rd}\delta(\sV, x, y)\frac{k_{\tau}(x, y)}{|y|^{d+2s}}\,dy\leq C(1+|x|^{\upgamma-2s})\quad x\in B^c_4. \end{equation} Therefore, if we set $h(x)=\kappa |x|^{\upgamma}$ for $|x|\geq 1$, from \eqref{Eg2.1A} and \eqref{Eg2.1B} it is easily seen that \eqref{Lyap-alpha} holds for a suitable $\kappa$ and compact set $\sK$. For (\hyperlink{B3}{B3}) to hold, we can choose $\upmu\in [0, \frac{2s}{\upgamma}-1)$ and restrict the family $\{b_\tau\}_{\tau\in \mathcal{T}}$ further to satisfy $$\sup_{\tau\in \mathcal{T}}|b_\tau(x)|\leq C(1+|x|)^{\upgamma\upmu(2s-1)}.$$ \end{example} It is quite possible for $b_\tau$ to have super-linear growth, as we show in the example below. \begin{example} Let $\upsigma, \theta$ be positive numbers satisfying $$\upsigma-1<\theta<4s^2, \quad \upsigma< 2s(2s-1).$$ Consider families $\{b_\tau\}, \{c_{\tau}\}$ satisfying $$\sup_{\tau\in \mathcal{T}}|b_\tau(x)|\leq C(1+|x|^\upsigma), \quad \sup_{\tau\in \mathcal{T}} c_{\tau}(x)\leq -c_\circ(1+|x|^\theta).$$ Then we can choose $\upgamma\in (0, 2s)$ and $\upmu\in (0, \frac{2s}{\upgamma}-1)$ so that $$\upsigma\leq \upgamma \upmu (2s-1),\quad \text{and}\quad \theta\leq 2s \upgamma\upmu.$$ Now let $\sV$ satisfy $\sV(x)=|x|^\upgamma$ for $|x|\geq 1$. It can be easily checked that \eqref{Lyap-alpha}, \eqref{EA2.1B} and \eqref{EA2.1C} hold for $h(x)\asymp |x|^{\theta+\upgamma}$. \end{example} By $C^{\eta+}_{\rm loc}(\rd)$ we denote the set of functions that are in $C^{\eta+\upkappa}(K)$ for every compact $K$ and for some $\upkappa>0$, possibly depending on $K$. More precisely, a function $\ell$ belongs to $C^{\eta+}_{\rm loc}(\rd)$ if and only if for every compact set $K$, there exists $\upkappa>0$ such that $\ell\in C^{\eta+\upkappa}(K)$. The remaining part of this section is devoted to the proof of Theorem~\ref{T1.2}. We first solve \eqref{ET1.2A} in bounded domains (Theorem~\ref{T2.2} below).
Then, using $\sV$ as a barrier function and the stability estimates from \cite{Schwab_Silvestre}, we find a subsequence of solutions, as the domains increase to $\rd$, that converges to a solution of \eqref{ET1.2A}. This is done in Lemma~\ref{L2.3}. In Lemma~\ref{L2.6} we then show that this solution, obtained as a limit, is in the class $C^{2s+}_{\rm loc}(\rd)\cap\sorder(\sV)$. Combining these results we then complete the proof of Theorem~\ref{T1.2}. We begin with a comparison principle that will be used in several places. \begin{lem}\label{L2.1} Let $\Omega$ be a bounded domain. Suppose that (\hyperlink{B1}{B1}) holds. Let $u\in C(\rd)\cap L^1(\omega_s)$ be a viscosity solution to $$\inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} u + c_{\tau} u + g_{\tau}\Bigr) \geq 0\quad \text{in}\; \Omega,$$ and let $v\in C(\rd)\cap L^1(\omega_s)$ be a viscosity solution to $$\inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} v+ c_{\tau} v + g_{\tau}\Bigr)\leq 0\quad \text{in}\; \Omega.$$ Furthermore, assume that either $u\in C^{2}(\Omega)$ or $v\in C^{2}(\Omega)$. Then, if $v\geq u$ in $\Omega^c$, we have $v\geq u$ in $\rd$. \end{lem} \begin{proof} Without any loss of generality, assume that $v\in C^{2}(\Omega)$; therefore, it is a classical supersolution. Suppose, to the contrary, that $\sup_\Omega(u-v)>0$. Define $$t_0=\inf\{t>0\; :\; v + t> u\;\; \text{in}\; \rd\}.$$ It is evident that $t_0\leq 2 \sup_\Omega(u-v)$. Again, since $\sup_\Omega(u-v)>0$, we must have $t_0>0$. Let $\psi(x)=v(x)+ t_0$. From the definition we have $\psi\geq u$. Since $\psi-u\geq t_0$ in $\Omega^c$ and $\Omega$ is bounded, it follows that $\psi(x)=u(x)$ for some $x\in \Omega$. Therefore, $\psi$ is a valid test function at $x$.
Hence, from the definition of viscosity subsolution, we must have $$0\leq \inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} \psi(x) + c_{\tau}(x)\psi(x) + g_{\tau}(x)\Bigr) \leq\inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} v(x) + c_{\tau}(x) v(x) + g_{\tau}(x)\Bigr)-c_\circ t_0\leq -c_\circ t_0<0.$$ But this is a contradiction. Hence we must have $v\geq u$ in $\rd$. \end{proof} The next result is a $C^{1, \gamma}$ regularity estimate. It is a special case of Theorem~\ref{c_1,gamma_regularity}, which we prove in Section~\ref{S-regu}. \begin{thm}\label{T-reg} Let $u$ be a viscosity solution to $$\inf_{\tau \in\mathcal{T}} \bigg[I_{\tau}[u](x)+b_{\tau}(x) \cdot \nabla u(x)+g_{\tau}(x) \bigg]=0 \quad \text{in}\; B_1. $$ Let $\sup_{\tau}\norm{b_{\tau}}_{L^\infty(B_1)}\leq C_0$ and $\gamma\in (0, 2s-1)$. Suppose that for some modulus of continuity $\varrho$ we have \begin{align*} |k_{\tau}(x_1,y)-k_{\tau}(x_2,y)|\leq \varrho(|x_1-x_2|)\quad \forall x_1,x_2 \in B_1,\; \tau\in \mathcal{T}, y\in\rd. \end{align*} Then we have \begin{align*} \norm{u}_{C^{1,\gamma}(B_{\frac{1}{2}})}\leq C\left(\norm{u}_{L^\infty(\mathbb{R}^d)}+\sup_{\tau\in\mathcal{T}}\norm{g_{\tau}}_{L^\infty(B_1)}\right), \end{align*} where the constant $C$ depends on $d,s,C_0,\varrho,\lambda, \Lambda$. \end{thm} Next we prove an existence result in bounded domains. \begin{thm}\label{T2.2} Let $\Omega$ be a bounded $C^1$ domain. Assume that (\hyperlink{B1}{B1}) and (\hyperlink{B4}{B4}) hold. Then there exists a viscosity solution $W$ satisfying \begin{equation}\label{ET2.2A} \inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} W + c_{\tau} W + g_{\tau}\Bigr)=0\quad \text{in}\; \Omega, \quad \text{and}\quad W=0\quad \text{in}\; \Omega^c. \end{equation} In addition, if (\hyperlink{B5}{B5}) holds, then $W\in C^{2s+}_{\rm loc}(\Omega)\cap C^\upkappa(\rd)$ for some $\upkappa>0$ and $W$ is the unique solution to \eqref{ET2.2A}. \end{thm} \begin{proof} Existence of a viscosity solution follows from \cite[Corollary~5.7]{MOU-2017}.
Next we show that $W\in C^\upkappa(\rd)$ for some $\upkappa>0$. Let $M=c_\circ^{-1}\sup_{\tau\in \mathcal{T}}\norm{g_{\tau}}_{L^\infty(\Omega)}$. Using Lemma~\ref{L2.1} we obtain $W\leq M$. An analogous argument gives $W\geq -M$. Therefore, using the barrier function of \cite[Lemma~5.10]{MOU-2017}, it is standard to show that $|W|\leq CM\delta^\upkappa(x)$, where $\delta(x)={\rm dist}(x, \Omega^c)$ (see, for instance, \cite[Theorem~2.6]{B20}). Again, by \cite[Theorem~7.2]{Schwab_Silvestre}, $W$ is H\"{o}lder continuous in the interior. It is now quite standard to show that $W\in C^\upkappa(\rd)$ for some $\upkappa>0$ (see again the proof of \cite[Theorem~2.6]{B20}). From Theorem~\ref{T-reg} we also see that $W\in C^{1, \gamma}(\Omega_\sigma)$ for some $\gamma\in (0,2s-1)$, where $$\Omega_\sigma=\{x\in\Omega\; :\; {\rm dist}(x, \Omega^c)>\sigma\}, \quad \sigma>0.$$ Therefore, by (\hyperlink{B5}{B5}), the function $$x\mapsto b_\tau(x)\cdot \grad W(x) + c_{\tau}(x) W(x) + g_\tau(x)$$ is H\"{o}lder continuous in $\Omega_\sigma$, uniformly in $\tau$, for every $\sigma>0$. Thus, using \cite[Theorem~1.3]{Serra2015}, we obtain that $W\in C^{2s+\upkappa}(\Omega_\sigma)$ for some $\upkappa>0$. Hence $W\in C^{2s+}_{\rm loc}(\Omega)\cap C^\upkappa(\rd)$. In particular, $W$ is a classical solution to \eqref{ET2.2A}. To establish uniqueness, let $\tilde{W}$ be a viscosity solution to \eqref{ET2.2A}. The preceding argument shows that $\tilde{W}$ is a classical solution. Hence, from the ellipticity property, it follows that $$\sup_{\tau\in \mathcal{T}}(\cL_{\tau}(W-\tilde{W}) + c_{\tau}(W-\tilde{W}))\geq 0 \quad \text{in}\; \Omega.$$ Since $0$ is a supersolution to the above equation, applying Lemma~\ref{L2.1} it follows that $W\leq \tilde{W}$ in $\rd$. By symmetry we also have $\tilde{W}\leq W$, giving us $W=\tilde{W}$. This completes the proof. \end{proof} \begin{rem} In several places below we use the H\"{o}lder estimate from \cite[Theorem~7.2]{Schwab_Silvestre}.
Though the results of \cite{Schwab_Silvestre} are proved for nonlocal parabolic operators, we can still apply them in our setting by treating the solutions as stationary functions of the time variable. \end{rem} Our next step is to construct a solution for \eqref{ET2.2A} in the whole space. \begin{lem}\label{L2.3} Suppose that (\hyperlink{B1}{B1}), (\hyperlink{B2}{B2}) and (\hyperlink{B4}{B4}) hold. Let $W_n$ be a viscosity solution to \eqref{ET2.2A} in the ball $\Omega=B_n$. Then there exists a subsequence $W_{n_k}$ such that $W_{n_k}\to w$, uniformly on compacts, and $w\in C(\rd)\cap L^1(\omega_s)$ satisfies \begin{equation}\label{EL2.3A} \inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} w + c_{\tau} w+ g_{\tau}\Bigr) =0\quad \text{in}\; \rd. \end{equation} Furthermore, $|w(x)|\leq\frac{k_0}{c_\circ} + \sV(x)$ in $\rd$. \end{lem} \begin{proof} Define $\tilde{\sV}=\frac{k_0}{c_\circ}+ \sV$, where $k_0$ is given by \eqref{Lyap-alpha}. It then follows from \eqref{Lyap-alpha} that \begin{equation}\label{EL2.3B} \sup_{\tau\in \mathcal{T}}\Bigl(\cL_{\tau} \tilde{\sV}+ c_{\tau} \tilde\sV + |g_\tau| \Bigr)\leq \sup_{\tau\in \mathcal{T}} (\cL_{\tau} \sV + c_{\tau} \sV) + h -k_0\leq 0\quad \text{in}\; \rd. \end{equation} Let $W_{n}$ be a viscosity solution to \eqref{ET2.2A} with $\Omega=B_n$, that is, \begin{equation}\label{EL2.3C} \inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} W_n + c_{\tau} W_n + g_{\tau}\Bigr) =0\quad \text{in}\; B_n, \quad \text{and}\quad W_n=0\quad \text{in}\; B_n^c. \end{equation} Applying Lemma~\ref{L2.1} to \eqref{EL2.3B} and \eqref{EL2.3C}, we see that $|W_{n}|\leq \tilde{\sV}$ in $\rd$, for all $n$. Next we show that for any compact set $\Kset$, $\{W_{n}\}_{n\geq n_0}$ is equicontinuous on $\Kset$, where $\Kset\Subset B_{n_0-4}$. Let $R^\prime$ be such that $\Kset\subset B_{R'}$. Without any loss of generality, we may assume that $R'+4 <n_0$.
Consider a smooth cut-off function $0\leq \chi \leq 1$ such that $\chi=1$ in the ball $B_{R^\prime+2}$ and $\chi=0$ outside $B_{R^\prime +3}$. Let $\tilde{\psi}_{n}=\chi W_{n}$ in $\mathbb{R}^d$. From \eqref{EL2.3C} we see that \begin{equation}\label{EL2.3D} \inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau}\tilde\psi_n + c_{\tau}\tilde\psi_n + g_{\tau} + I_{\tau}[(1-\chi)W_n]\Bigr) =0\quad \text{in}\; B_{R'+2}, \end{equation} for all $n\geq n_0$. We claim that \begin{equation}\label{EL2.3E} \sup_{B_{R'+1}}\, \sup_{\tau\in \mathcal{T}} |I_{\tau}[(1-\chi)W_n]| \leq C. \end{equation} Note that $|x+y|\geq R'+2$ and $|x|\leq R'+1$ imply that $|y|\geq 1$. Thus, for any $x\in B_{R'+1}$, \begin{align*} \sup_{\tau\in \mathcal{T}} |I_{\tau}[(1-\chi)W_n]| &\leq (2-2s)\int_{\rd} |\delta((1-\chi)W_n, x, y)|\frac{\Lambda}{|y|^{d+2s}} \dy \\ &\leq 2 (2-2s) \int_{\abs{y}\geq 1} |W_n(x+y)| \frac{\Lambda}{|y|^{d+2s}} \dy \\ &\leq 2 (2-2s) \int_{\abs{y}\geq 1} \tilde{\sV}(x+y) \frac{\Lambda}{|y|^{d+2s}} \dy \\ &\leq C_{R'} \int_{\rd} \tilde{\sV}(x+y) \frac{\Lambda}{1+|x+y|^{d+2s}} \dy \leq C. \end{align*} This establishes \eqref{EL2.3E}. Also, $$ \sup_{\tau\in \mathcal{T}}\sup_{B_{R'+1}}|c_{\tau} W_n|\leq C_{R'}(1+ \sup_{B_{R'+1}}\sV).$$ Thus, applying \cite[Theorem~7.2]{Schwab_Silvestre} to \eqref{EL2.3D}, we obtain that for some $\alpha>0$ \begin{equation*} \sup_{n\geq n_0}\, \norm{W_n}_{C^\alpha(B_{R'})} = \sup_{n\geq n_0}\, \norm{\tilde\psi_n}_{C^\alpha(B_{R'})}\leq C. \end{equation*} Hence, $\{W_{n}\}_{n\geq n_0}$ is equicontinuous on $\Kset$. Applying the Arzel\`{a}--Ascoli theorem and a standard diagonalization argument, we have $W_{n_k}\to w\in C(\rd)$, uniformly on compact sets, along some subsequence $n_k\to\infty$. Since $|W_{n_k}|\leq \tilde{\sV}$ for all $n_k$, we also get \begin{equation}\label{AB001} |w|\leq \tilde{\sV}\quad \text{and}\quad \lim_{n_k\to\infty} \norm{W_{n_k}-w}_{L^1(\omega_s)}=0.
\end{equation} Finally, applying the stability property of viscosity solutions \cite[Lemma~5]{CS11b}, we see that $w$ solves \eqref{EL2.3A}. In fact, the stability property can be seen as follows: Suppose $\varphi\in C^2(B_{2\kappa}(x))$ touches $w$ (strictly) from above at $x$ in $B_{2\kappa}(x)$, for some $\kappa>0$. From the local uniform convergence above, we can find a sequence $(d_{n_k}, x_{n_k})\in \R\times B_{\kappa}(x)$ such that $x_{n_k}\to x$, $d_{n_k}\to 0$ and $\varphi+d_{n_k}$ touches $W_{n_k}$ from above at the point $x_{n_k}$. Then letting \[ v_{n_k}(y):=\left\{ \begin{array}{ll} \varphi(y) + d_{n_k} & \text{for}\; y\in B_\kappa(x), \\[2mm] W_{n_k}(y) & \text{otherwise}, \end{array} \right. \] we obtain from \eqref{EL2.3C} and for large $n_k$ that $$\inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} v_{n_k}(x_{n_k}) + c_{\tau} v_{n_k}(x_{n_k}) + g_{\tau}(x_{n_k})\Bigr) \geq 0.$$ Using Assumption~\ref{A2.1}(\hyperlink{B4}{B4}) and \eqref{AB001}, we can let $n_k\to\infty$ to get that $$\inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} v(x) + c_{\tau} v(x) + g_{\tau}(x)\Bigr) \geq 0,$$ where $v$ is defined in the same fashion replacing $W_{n_k}$ by $w$ and $d_{n_k}$ by $0$. Similarly, we can also check that $w$ is a supersolution. This completes the proof. \end{proof} Now we show that the solution $w$ obtained in Lemma~\ref{L2.3} belongs to the class $\sorder(\sV)\cap C^{2s+}_{\rm loc}(\rd)$. \begin{lem}\label{L2.6} Let $w$ be the solution to \eqref{EL2.3A} obtained in Lemma~\ref{L2.3}. It holds that $w\in\sorder(\sV)$. In addition, if (\hyperlink{B3}{B3})--(\hyperlink{B5}{B5}) hold, then any solution $\upsilon$ of \eqref{EL2.3A} satisfying $\upsilon\in\mathcal{O}(\sV)$ is in $C^{2s+}_{\rm loc}(\rd)$. In particular, $w$ is a classical solution to \eqref{EL2.3A}. \end{lem} \begin{proof} First we show that the solution $w$ obtained in Lemma~\ref{L2.3} is in $\sorder(\sV)$. Recall that $w=\lim_{n_k\to\infty} W_{n_k}$ where $W_{n_k}$ solves \eqref{ET2.2A} in the ball $B_{n_k}$. Fix $\varepsilon>0$.
By \eqref{EA2.1A} we have $$ (\varepsilon h(x)- \sup_{\tau \in\mathcal{T}}|g_{\tau}|(x))\geq 0 $$ for all $|x|$ large. Thus we can find a compact set $\Kset_\varepsilon\Supset \sK$ so that \begin{equation}\label{ET2.4B} \begin{split} (\varepsilon h(x)- \sup_{\tau \in\mathcal{T}}|g_{\tau}|(x)) & \geq 0 \quad \text{in} \; \Kset^c_\varepsilon, \\ \sup_{\tau\in \mathcal{T}}(\cL_{\tau} \varphi_\varepsilon + c_{\tau} \varphi_\varepsilon + |g_{\tau}|) \leq \sup_{\tau\in \mathcal{T}}(\cL_{\tau} \varphi_\varepsilon + c_{\tau} \varphi_\varepsilon + \varepsilon h) &\leq 0\quad \text{in}\; \Kset^c_\varepsilon, \end{split} \end{equation} where $\varphi_\varepsilon=\varepsilon \sV$. Let $\kappa=\sup_{n_k}\,\max_{\Kset_\varepsilon}|W_{n_k}|<\infty$. From (\hyperlink{B1}{B1}) and \eqref{ET2.4B} it follows that $$\sup_{\tau\in \mathcal{T}}(\cL_{\tau} (\kappa+\varphi_\varepsilon) + c_{\tau} (\kappa+\varphi_\varepsilon)+ |g_{\tau}|) \leq 0\quad \text{in}\; \Kset^c_\varepsilon.$$ Applying Lemma~\ref{L2.1} in the domain $B_{n_k}\setminus \Kset_\varepsilon$ we obtain \begin{equation*} |W_{n_k}|\leq \kappa + \varphi_\varepsilon= \kappa + \varepsilon \sV \quad \text{in}\; \rd, \end{equation*} for all large $n_k$. Hence, letting $n_k\to\infty$, we get $|w|\leq \kappa + \varepsilon \sV$ in $\rd$, which, in turn, implies $$\limsup_{|x|\to\infty} \frac{1}{1+ \sV(x)} |w|\leq \varepsilon.$$ From the arbitrariness of $\varepsilon$ we have $w\in\sorder(\sV)$. Now we prove the second part. Let $\upsilon$ solve \begin{equation}\label{EL2.6A} \inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} \upsilon + c_{\tau} \upsilon+ g_{\tau}\Bigr) =0\quad \text{in}\; \rd, \end{equation} and $\upsilon\in \mathcal{O}(\sV)$, that is, $|\upsilon|\leq C(1+\sV)$ in $\rd$. Fix a point $x_0\in \rd$ and let $r=[1+ \sV(x_0)]^{-\upmu}$, where $\upmu$ is given by (\hyperlink{B3}{B3}). Define $v(x)= \upsilon(x_0+ rx)$.
It then follows from \eqref{EL2.6A} that \begin{equation}\label{ET2.4C} \inf_{\tau \in\mathcal{T}}\left(\tilde I_{\tau}[v] + r^{2s-1} b_\tau(x_0+rx)\cdot \grad v + r^{2s} c_{\tau}(x_0+rx) v + r^{2s} g_{\tau}(x_0+ r x) \right) =0 \quad \text{in}\; \rd, \end{equation} where $$\tilde I_{\tau}[u](x)=\int_{\rd}\delta(u, x, y) \frac{k_{\tau}(x_0+rx, ry)}{|y|^{d+2s}}\dy.$$ Now consider a smooth cut-off function $\xi:\rd\to[0, 1]$ satisfying $\xi=1$ in $B_{\frac{3}{2}}$ and $\xi=0$ in $B^c_2$. We can re-write \eqref{ET2.4C} as \begin{equation}\label{ET2.4D} \inf_{\tau \in\mathcal{T}}\left(\tilde I_{\tau}[\psi] + r^{2s-1} b_\tau(x_0+rx)\cdot \grad \psi + r^{2s} c_{\tau}(x_0+rx) \psi + r^{2s} g_{\tau}(x_0+ r x) + \tilde I_{\tau}[(1-\xi)v] \right) =0 \quad \text{in}\; B_1, \end{equation} for $\psi=\xi v$. Since $|\upsilon|\leq C(1+\sV)$, from \eqref{EA2.1B} we have \begin{equation}\label{ET2.4E} \sup_{\tau\in \mathcal{T}}\, \sup_{B_1}(r^{2s-1} |b_\tau(x_0+r\cdot)| + r^{2s} \frac{|g_{\tau}(x_0+r\cdot)|}{1+\sV(x_0)}) \leq C,\quad \sup_{\tau\in \mathcal{T}}\,\sup_{B_1} r^{2s}\frac{|c_{\tau}(x_0+r\cdot)v|}{1+\sV(x_0)}\leq C, \end{equation} where $C$ is independent of $r$ and $x_0$. From \eqref{EA2.1C} we also have $\norm{\psi}_{L^\infty(\rd)}\leq C (1+\sV(x_0))$. 
For $x\in B_1$ let us now compute \begin{align}\label{ET2.4F} \sup_{\tau\in \mathcal{T}}|\tilde I_\tau[(1-\xi)v]| &\leq 2(2-2s)\int_{\rd} |(1-\xi(x+y))v(x+y)|\frac{1}{|y|^{d+2s}}\dy \nonumber \\ &\leq 2(2-2s)\int_{|y|\geq 1/2} |v(x+y)|\frac{1}{|y|^{d+2s}}\dy \nonumber \\ &\leq C \int_{\rd} |\upsilon(x_0+ rx+ry)|\frac{1}{1+|y|^{d+2s}}\dy \nonumber \\ &\leq C \int_{\rd} (1 + \sV(x_0+ rx + ry))\frac{1}{1+|y|^{d+2s}}\dy \nonumber \\ &\leq C \int_{|y|\leq r^{-1}} \frac{(1 + \sV(x_0))}{1+|y|^{d+2s}}\dy + C \int_{|y|> r^{-1}} \frac{(1 + \sV(x_0+ rx + ry))}{1+|y|^{d+2s}}\dy \nonumber \\ &\leq C(1+\sV(x_0)) + C \int_{|z|> 1} (1 + \sV(x_0+ rx + z)) \frac{r^{2s}}{r^{d+2s}+|z|^{d+2s}}\dz \nonumber \\ &\leq C(1+\sV(x_0)) + C (1+\sV(x_0))\int_{|z|> 1}\frac{1+\sV(z)}{1+|z|^{d+2s}}\dz \nonumber \\ &\leq C(1+\sV(x_0)), \end{align} where in the fifth and seventh lines we use \eqref{EA2.1C}. Thus, using \eqref{ET2.4D}, \eqref{ET2.4E}, \eqref{ET2.4F} and Theorem~\ref{T-reg}, we obtain $$\sup_{B_{\frac{1}{2}}}|\grad v|\leq C(1+\sV(x_0)).$$ Computing the derivative at $x=0$ gives us \begin{equation}\label{ET2.4G} |\grad \upsilon(x_0) |\leq C(1+ \sV(x_0))^{1+\upmu} , \quad x_0\in\rd. \end{equation} This gives an estimate on the growth of $|\grad \upsilon|$. Next we fix a point $x_0$ and let $\zeta(x)=\upsilon(x_0 +x)$. Let $\xi$ be the cut-off function as chosen above. Then, letting $\varphi=\xi\zeta$, we obtain \begin{equation}\label{ET2.4H} \inf_{\tau \in\mathcal{T}}\left(\widehat I_{\tau}[\varphi] + b_\tau(x_0+x)\cdot \grad \varphi + c_{\tau}(x_0+x)\varphi + g_{\tau}(x_0+ x) + \widehat I_\tau[(1-\xi)\zeta] \right) =0 \quad \text{in}\; B_1, \end{equation} where $$\widehat I_{\tau}[u](x)=\int_{\rd}\delta(u, x, y) \frac{k_{\tau}(x_0+x, y)}{|y|^{d+2s}}\dy.$$ Let $x, x'\in B_1$.
Using (\hyperlink{B5}{B5}) we get \begin{align*} &|\widehat I_{\tau}[(1-\xi)\zeta](x)-\widehat I_{\tau}[(1-\xi)\zeta](x')| \\ &\leq \int_{|y|\geq \frac{1}{2}}|\delta((1-\xi)\zeta, x, y)| \frac{|k_{\tau}(x_0+x, y)-k_{\tau}(x_0+x', y)|}{|y|^{d+2s}}\dy \\ &\quad + 2(2-2s) \Lambda\int_{|y|\geq \frac{1}{2}}|(1-\xi(x+y))\zeta(x+y)-(1-\xi(x'+y))\zeta(x'+y)| \frac{1}{|y|^{d+2s}}\dy \\ &\leq C |x-x'|^{\tilde\alpha} + C \int_{|y|\geq \frac{1}{2}}|(\xi(x'+y)-\xi(x+y))\zeta(x+y)| \frac{1}{|y|^{d+2s}}\dy \\ &\quad+ \int_{|y|\geq \frac{1}{2}}|\zeta(x+y)-\zeta(x'+y)|\frac{1}{|y|^{d+2s}}\dy \\ &\leq C |x-x'|^{\tilde\alpha} + C |x-x'| (1+\sV(x_0)) + C |x-x'|\int_{|y|\geq \frac{1}{2}}(1+\sV(x+y))^{1+\upmu}\frac{1}{|y|^{d+2s}}\dy \\ &\leq C (1+\sV(x_0))^{1+\upmu}\max\{|x-x'|^{\tilde\alpha}, |x-x'|\} \end{align*} for all $\tau\in \mathcal{T}$, where in the third line we use \eqref{EA2.1C} and \eqref{ET2.4G}, and in the last line we use the fact that $\sV^{1+\upmu}\in L^1(\omega_s)$. The above estimate gives a H\"{o}lder continuity estimate of $\widehat I_{\tau}[(1-\xi)\zeta]$, uniform in $\tau\in \mathcal{T}$. From \eqref{ET2.4H} and Theorem~\ref{T-reg} we first observe that $\upsilon\in C^{1, \gamma}(B_1(x_0))$ for some $\gamma\in (0, 2s-1)$. Therefore, using the above estimates, we can apply \cite[Theorem~1.3]{Serra2015} to obtain that $\upsilon\in C^{2s+\upkappa}(B_{1/2}(x_0))$ for some $\upkappa>0$. Hence $\upsilon\in C^{2s+}_{\rm loc}(\rd)$, completing the proof. \end{proof} Now we can complete the proof of Theorem~\ref{T1.2}. \begin{proof}[Proof of Theorem~\ref{T1.2}] Existence and regularity follow from Lemma~\ref{L2.3} and Lemma~\ref{L2.6}. So we only prove uniqueness. Let $\tilde{w}\in\sorder(\sV)$ be a viscosity solution to \eqref{EL2.3A}. From Lemma~\ref{L2.6} we see that $\tilde{w}\in C^{2s+}_{\rm loc}(\rd)$. Hence $w, \tilde{w}$ are classical solutions to \eqref{ET1.2A}.
Now suppose, to the contrary, that $w\neq \tilde{w}$ and, without any loss of generality, that $(\tilde{w}-w)_+\gneq 0$. Letting $v=\tilde{w}-w$, we obtain from \eqref{ET1.2A} that \begin{equation}\label{ET2.4L} \sup_{\tau\in \mathcal{T}}(\cL_{\tau} v + c_{\tau} v)\geq 0\quad \text{in}\; \rd. \end{equation} If $\sup_{\rd} (\tilde{w}-w)_+>0$ is attained in $\rd$, say at a point $x_0$, then we get from \eqref{ET2.4L} that $0>-c_\circ v(x_0)\geq 0$, which is a contradiction. So we consider the situation where \begin{equation}\label{EL2.4J} \sup_{B_m} v_+\nearrow \sup_{\rd} v, \end{equation} as $m\to\infty$ and the sequence is strictly increasing. Define $\Kset=\sK\cup\{\sV\leq 1\}$ (see \eqref{Lyap-alpha}) and let $\kappa=\max_{\Kset}v_+$. Letting $\psi=v-\kappa$, we have from \eqref{ET2.4L} that \begin{equation}\label{ET2.4K} \sup_{\tau\in \mathcal{T}}(\cL_{\tau} \psi + c_{\tau} \psi)\geq 0\quad \text{in}\; \rd. \end{equation} Using \eqref{EL2.4J}, we see that $\psi>0$ somewhere in $\Kset^c$. Define $$t_0=\inf\{t>0\; :\; t\sV>\psi\quad \text{in}\; \Kset^c\}.$$ It is evident that $t_0$ is finite, and since $\psi(z)>0$ for some $z\in \Kset^c$, we also have $t_0>0$. We claim that $t_0\sV$ must touch $\psi$ from above. Suppose that $t_0\sV>\psi$ in $\Kset^c$. Since $\psi\in\sorder(\sV)$, there exists a compact set $\Kset_1\Supset\Kset$ such that $|\psi|<\frac{t_0}{2}\sV$ in $\Kset^c_1$. Again, on $\partial\Kset$, we have $\psi\leq 0 <t_0\leq t_0\sV$. Therefore $(t_0\sV-\psi)>0$ in $\overline{\Kset_1\setminus\Kset}$. Thus we can find $\varepsilon>0$ small enough so that $(t_0-\varepsilon)\sV>\psi$ in $\Kset^c$, contradicting the definition of $t_0$. Hence $t_0\sV$ must touch $\psi$ at some point $x_0\in\Kset^c$. Since $\psi\leq 0$ in $\Kset$, we also have $t_0 \sV\geq \psi$ in $\rd$.
Applying $t_0\sV$ as a test function at $x_0$ in \eqref{ET2.4K} and using \eqref{Lyap-alpha} we get $$0\leq t_0 \sup_{\tau\in \mathcal{T}} (\cL_{\tau}\sV(x_0) + c_{\tau}\sV(x_0)) \leq - t_0 h(x_0)<0,$$ which is a contradiction. Therefore our assumption fails and $(\tilde{w}-w)_+=0$. This gives us uniqueness, completing the proof. \end{proof} \begin{rem}\label{R2.1} The proof of Theorem~\ref{T1.2} leads to the following Liouville type result. Suppose that there exist an inf-compact $C^2$ function $\sV\geq 0$ and a compact set $\sK$ satisfying \begin{align*} \sup_{\tau \in\mathcal{T}} (\mathcal{L}_{\tau}\sV(x) + c_{\tau}(x) \sV(x)) < 0 \quad x\in\sK^c, \end{align*} and $c_\tau\leq -c_0<0$ for all $\tau\in\mathcal{T}$. Then, if $u\in\sorder(\sV)$ solves $$\inf_{\tau\in\mathcal{T}}(\mathcal{L}_{\tau} u(x) + c_{\tau}(x) u(x))=0\quad \text{in}\; \rd,$$ then $u\equiv 0$. The argument works for every $s\in (0, 1)$, and $u, b_\tau, c_\tau$ are only required to be continuous. This result should be compared with Meglioli \& Punzo \cite[Theorem~2.7]{MP22}. Unlike \cite{MP22}, the above equation is nonlinear and requires no additional regularity on the drift. \end{rem} \section{HJB for ergodic control problem}\label{Erg-HJB} The main result of this section is an existence-uniqueness result for the ergodic HJB problem, that is, to find a solution $(u, \lambda^*)$ satisfying \begin{equation}\label{erg} \inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} u + g_{\tau}\Bigr)-\lambda^*=0\quad \text{in}\; \rd. \end{equation} Letting $c_{\tau}=-\alpha\in (-1, 0)$ in (\hyperlink{B1}{B1}), we see that $V$ in Assumption~\ref{assumptions} satisfies (\hyperlink{B2}{B2})--(\hyperlink{B3}{B3}). Thus Theorem~\ref{T1.2} is applicable under Assumption~\ref{assumptions}. Let $w_\alpha$ be the unique solution to \begin{equation}\label{E3.1} \inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} w_\alpha + g_{\tau}\Bigr)-\alpha w_\alpha =0\quad \text{in}\; \rd, \end{equation} satisfying $w_\alpha\in \sorder(V)\cap C^{2s+}_{\rm loc}(\rd)$.
Define $$\bar{w}_\alpha(x)=w_\alpha(x)-w_\alpha(0).$$ We claim that for some compact ball $B$ we have \begin{equation}\label{E3.2} |\bar{w}_\alpha(x)|\leq \max_{B}|\bar{w}_\alpha| + V(x) \quad x\in\rd. \end{equation} Since $w_\alpha\in\sorder(V)$ and $V$ is inf-compact, the function $V-w_\alpha$ is inf-compact. Thus, there exists $x_\alpha\in \rd$ satisfying $(V-w_\alpha)(x_\alpha)=\min_{\rd}(V-w_\alpha)$, implying $w_\alpha\leq V + w_\alpha(x_\alpha)-V(x_\alpha)$. From \eqref{E3.1} we then have $$0\leq \inf_{\tau \in\mathcal{T}}(\cL_{\tau} V(x_\alpha) + g_{\tau}(x_\alpha)) -\alpha w_\alpha(x_\alpha)\leq k_0 - (h(x_\alpha)-\sup_{\tau \in\mathcal{T}}|g_{\tau}|(x_\alpha))+\alpha (V(0)-w_\alpha(0) - V(x_\alpha)),$$ by \eqref{Lyap}. Since $V$ is non-negative and $\alpha |w_\alpha|\leq k_0 + \alpha V$ (by Lemma~\ref{L2.3}), the above inequality gives $$(h(x_\alpha)-\sup_{\tau\in \mathcal{T}}|g_{\tau}|(x_\alpha)) \leq 2k_0 + 2\alpha V(0).$$ Since $\sup_{\tau\in \mathcal{T}}|g_{\tau}|\in\sorder(h)$ by (\hyperlink{A2}{A2}), there exists a compact ball $B$, independent of $\alpha\in (0,1)$, so that $x_\alpha\in B$. Hence $$\bar{w}_\alpha(x)\leq \bar{w}_\alpha(x)-V(x) + V(x)\leq \bar{w}_\alpha(x_\alpha)-V(x_\alpha) + V(x) \leq \max_{B} \bar{w}_\alpha + V(x).$$ Again, since $(V+w_\alpha)$ is inf-compact, an argument similar to the one above gives that $$\bar{w}_\alpha(x)\geq \min_{B} \bar{w}_\alpha-V,$$ for some compact ball $B$. Combining the above estimates we get \eqref{E3.2}. \begin{lem}\label{L3.1} It holds that $\sup_{\alpha\in(0,1)}\, \max_{B}|\bar{w}_\alpha|<\infty$. \end{lem} \begin{proof} Suppose, to the contrary, that $$\sup_{\alpha\in(0,1)}\, \max_{B}|\bar{w}_\alpha|=\infty.$$ Then we can find a sequence $\alpha_n\to\alpha_0\in [0,1]$ such that $$\max_{B}|\bar{w}_{\alpha_n}|\to \infty\quad \text{as}\; n\to\infty.$$ Since $|w_\alpha|\leq \frac{k_0}{\alpha} + V$, we must have $\alpha_0=0$. Denote $\rho_n=\max_{B}|\bar{w}_{\alpha_n}|$.
From \eqref{E3.2} we observe that \begin{align}\label{EL3.1A} |\bar{w}_{\alpha_n}(x)|\leq \rho_n + V(x) \quad\text{in}\; \rd. \end{align} Letting $\psi_n=\frac{1}{\rho_n} \bar{w}_{\alpha_n}$, we obtain from \eqref{E3.1} that \begin{align}\label{EL3.1B} \inf_{\tau \in\mathcal{T}}[\mathcal{L}_{\tau} {\psi}_n + \rho^{-1}_n g_{\tau}]-\alpha_n \psi_n -\frac{\alpha_n}{\rho_n} w_{\alpha_n}(0)=0 \quad \text{in}\; \rd. \end{align} Following the arguments of Lemma~\ref{L2.3} and using \eqref{EL3.1A}, it can be easily seen that for every compact $\Kset$, $\{\psi_n\}_{n\geq 1}$ is bounded in $C^{\upkappa}(\Kset)$ for some $\upkappa>0$, uniformly in $n$. Hence there exists a $\psi\in C(\rd)$ such that $$\psi_n\to \psi\quad \text{uniformly on compacts},$$ along some subsequence. From \eqref{EL3.1A} we also have $|\psi|\leq 1$, and by the definition of $\rho_n$, $\max_{B}|\psi|=1$. Using the stability property of viscosity solutions in \eqref{EL3.1B}, we obtain \begin{equation}\label{EL3.1C} \inf_{\tau \in\mathcal{T}}\cL_{\tau}\psi=0\quad \text{in}\; \rd. \end{equation} The argument of Lemma~\ref{L2.6} gives us $\psi\in C^{2s+\upkappa}_{\rm loc}(\rd)$. Hence $\psi$ is a classical solution to \eqref{EL3.1C}. Since $|\psi|$ attains its maximum in $B$, there exists $z\in B$ satisfying $|\psi(z)|=1$. Suppose that $\psi(z)=1$. In view of \eqref{EL3.1C} and the strong maximum principle, this implies $\psi\equiv 1$, which contradicts the fact that $\psi(0)=0$. We arrive at a similar contradiction when $\psi(z)=-1$. This completes the proof. \end{proof} Next we prove the existence of a solution to the ergodic control problem. \begin{thm}\label{T3.2} Suppose that (\hyperlink{A1}{A1})--(\hyperlink{A4}{A4}) hold. Then for some sequence $\alpha_n\to 0$ we have $$\lim_{n\to\infty} \bar{w}_{\alpha_n}=u,\quad \lim_{n\to\infty}\alpha_n w_{\alpha_n}(0)=\lambda^*$$ for some $(u, \lambda^*)\in C(\rd)\times\mathbb{R}$, and \begin{equation}\label{ET3.2A} \inf_{\tau \in\mathcal{T}}(\cL_{\tau} u + g_{\tau}) -\lambda^*=0\quad \text{in}\; \rd.
\end{equation} Moreover, if (\hyperlink{A5}{A5}) holds, then $u\in \sorder(V)\cap C^{2s+}_{\rm loc}(\rd)$. \end{thm} \begin{proof} From \eqref{E3.1} we note that \begin{equation}\label{ET3.2B} \inf_{\tau \in\mathcal{T}} \Bigl(\cL_{\tau} \bar{w}_\alpha + g_{\tau}\Bigr)-\alpha \bar{w}_\alpha - \alpha w_\alpha(0)=0\quad \text{in}\; \rd. \end{equation} From Lemma~\ref{L3.1}, \eqref{E3.2} and the proof of Lemma~\ref{L2.3}, we see that the family $\{\bar{w}_\alpha\, :\, \alpha\in (0, 1)\}$ is locally equicontinuous. Also note that $\alpha|w_\alpha(0)|\leq k_0 + \alpha V(0)$. Hence we can find a sequence $\alpha_n\to 0$ such that \begin{align*} \lim_{n\to\infty}\alpha_n w_{\alpha_n}(0)&=\lambda^* \\ \lim_{n\to\infty}\bar{w}_{\alpha_n} & = u \quad \text{uniformly over compacts}. \end{align*} Using the stability property of viscosity solutions and passing to the limit in \eqref{ET3.2B}, we obtain \begin{equation}\label{ET3.2C} \inf_{\tau \in\mathcal{T}}(\cL_{\tau} u + g_{\tau}) -\lambda^*=0\quad \text{in}\; \rd. \end{equation} This gives us \eqref{ET3.2A}. From \eqref{E3.2} we also have $|u|\leq C+ V$ in $\rd$. Thus we can follow the proof of Lemma~\ref{L2.6} to conclude that $u\in C^{2s+}_{\rm loc}(\rd)$. It remains to show that $u\in\sorder(V)$. Fix an $\varepsilon>0$. Recall from Lemma~\ref{L2.6} that $w_{\alpha_n}\in\sorder(V)$. Thus, the functions $\varepsilon V-w_{\alpha_n}$ and $\varepsilon V+w_{\alpha_n}$ are inf-compact. Since $\sup_{\tau\in \mathcal{T}}|g_{\tau}|\in\sorder(h)$, we also have $\varepsilon h - \sup_{\tau\in \mathcal{T}}|g_{\tau}|$ inf-compact.
Thus the proof of \eqref{E3.2} works with $V$ replaced by $\varepsilon V$ and we obtain a compact $B(\varepsilon)$ satisfying $$|\bar{w}_{\alpha_n}(x)|\leq \max_{B(\varepsilon)}|\bar{w}_{\alpha_n}| + \varepsilon V(x) \quad x\in\rd.$$ Letting $\alpha_n\to 0$ in the above equation we obtain $$|u(x)|\leq \max_{B(\varepsilon)}|u| + \varepsilon V(x) \quad x\in\rd,$$ which, in turn, gives $$\limsup_{|x|\to\infty} \frac{|u(x)|}{1+ V(x)}\leq \varepsilon.$$ From the arbitrariness of $\varepsilon$ it follows that $u\in\sorder(V)$, completing the proof. \end{proof} Now we complete the proof of Theorem~\ref{T1.1}. \begin{proof}[Proof of Theorem~\ref{T1.1}] Existence of $(u, \lambda^*)$ follows from Theorem~\ref{T3.2}. So we only prove uniqueness. Suppose that $(\hat{u}, \hat{\rho})\in\sorder(V)\times\mathbb{R}$ is a solution to \eqref{ET1.1A}. The proof of Lemma~\ref{L2.6} gives us that $u, \hat{u}\in C^{2s+}_{\rm loc}(\rd)$, and therefore, both $u, \hat{u}$ are classical solutions. Now suppose, to the contrary, that $\lambda^*\neq \hat{\rho}$. Assume, without loss of generality, that $\hat{\rho}< \lambda^*$ and define $\hat{u}_\epsilon:= \hat{u}+\epsilon V$ for $\epsilon>0$. Since $\hat{u}$ is a classical solution to \eqref{ET1.1A} and $V$ satisfies \eqref{Lyap}, we obtain \begin{align*} &\inf_{\tau \in\mathcal{T}} [\mathcal{L}_{\tau} \hat{u}_\epsilon + g_{\tau}(x)] \leq \inf_{\tau \in\mathcal{T}}(\mathcal{L}_{\tau} \hat{u} +g_{\tau}(x)) + \sup_{\tau\in \mathcal{T}} \mathcal{L}_{\tau}(\epsilon V) \leq \hat{\rho} + \epsilon (k_0-h)\leq \hat{\rho} + \epsilon k_0. \end{align*} We choose $\epsilon$ small enough such that $\hat{\rho} + \epsilon k_0 < \lambda^*$, which, in turn, gives \begin{equation}\label{ET3.3B} \inf_{\tau \in\mathcal{T}} (\mathcal{L}_{\tau} \hat{u}_\epsilon + g_{\tau}(x)) < \lambda^*.
\end{equation} Define $\phi_\epsilon= \epsilon V + \hat{u}-u$. Since $u, \hat{u}\in\sorder(V)$, we have $\phi_\epsilon(x)\longrightarrow +\infty $ as $|x|\longrightarrow +\infty$. In other words, $\phi_\epsilon$ is inf-compact, and so it attains its minimum at some point $y_\epsilon\in\rd$. Denote by $\kappa_\epsilon= (\epsilon V + \hat{u}-u)(y_\epsilon)$ and note that $u+\kappa_\epsilon\leq \hat{u}_\epsilon$. From \eqref{ET1.1A} and \eqref{ET3.3B} we then obtain \begin{align*} \lambda^*=\inf_{\tau \in\mathcal{T}} (\mathcal{L}_{\tau} u(y_\epsilon) + g_{\tau}(y_\epsilon)) = \inf_{\tau \in\mathcal{T}} (\mathcal{L}_{\tau} (u+\kappa_\epsilon)(y_\epsilon) + g_{\tau}(y_\epsilon)) \leq \inf_{\tau \in\mathcal{T}} (\mathcal{L}_{\tau} \hat{u}_\epsilon (y_\epsilon) + g_{\tau}(y_\epsilon))<\lambda^*. \end{align*} But this is a contradiction. Hence we must have $\hat{\rho}=\lambda^*$. Next we show that $u=\hat{u}$. Denote by $v=\hat{u}-u$. Since $\hat{u}, u$ are classical solutions and $\hat{\rho}=\lambda^*$, we have \begin{equation}\label{ET3.3C} \sup_{\tau\in \mathcal{T}}\cL_{\tau} v \geq 0\quad \text{in}\; \rd. \end{equation} Let $\Kset=\{h\leq k_0\}\cup\{V\leq 1\}$. We claim that \begin{equation}\label{ET3.3D} \sup_{\rd} v = \max_{\Kset} v. \end{equation} Denote by $\hat{\kappa}=\max_{\Kset} v$ and $\hat{v}=v-\hat{\kappa}$. It is evident from \eqref{ET3.3C} that $$\sup_{\tau\in \mathcal{T}}\cL_{\tau} \hat{v} \geq 0\quad \text{in}\; \rd.$$ Let $$t_0=\inf\{t>0\; :\; \hat{v}< t V\quad \text{in}\; \Kset^c\}.$$ If $t_0=0$, we get the claim \eqref{ET3.3D}. For $t_0>0$, the argument in Theorem~\ref{T1.2} gives that $t_0V\geq \hat{v}$ in $\rd$ and that for some point $z\in\Kset^c$, $t_0 V(z)=\hat{v}(z)$. Then $$0\leq \sup_{\tau\in \mathcal{T}}\cL_{\tau} \hat{v}(z) \leq t_0 \sup_{\tau\in \mathcal{T}}\cL_{\tau} V(z) \leq t_0 (k_0-h(z)) <0.$$ This is a contradiction. Hence we must have $t_0=0$, establishing \eqref{ET3.3D}.
Let $\bar{z}\in\Kset$ be such that $\sup_{\rd} v= v(\bar{z})$. Computing \eqref{ET3.3C} at the point $\bar{z}$ gives us $$ 0\leq\sup_{\tau\in \mathcal{T}}\cL_{\tau} v(\bar{z}) \leq \lambda \int_{\rd}\delta(v, \bar{z}, y) \frac{1}{|y|^{d+2s}}\dy\leq 0. $$ Hence $v$ is constant, which gives us $u=\hat{u}$ since $u(0)=0=\hat{u}(0)$. This completes the proof. \end{proof} \section{HJB equation with mixed local-nonlocal operators}\label{S-mixed} In this section we generalize the results of Section~\ref{Erg-HJB} to a more general class of integro-differential operators involving both local and nonlocal terms. More precisely, we consider the following operator $$\cI u(x) =\inf_{\tau\in\mathcal{T}}\left[ {\rm tr}{(a_\tau(x)D^2u)} + \breve I_\tau [u](x) + b_\tau(x)\cdot\grad u + g_\tau \right],$$ where $\mathcal{T}$ is an indexing set, $a_\tau$ is a positive definite matrix and $$ \breve{I}_{\tau} [u](x)=\int_{\rd} (u(x+y)-u(x)- \grad u(x)\cdot y 1_{B_1}(y)) K_\tau(x, y) \dy.$$ \begin{assumption}\label{A4.1} We impose the following standard assumptions on the coefficients. \begin{itemize} \item[(i)] $\{a_\tau\}_{\tau\in\mathcal{T}}$ is locally bounded, and $$ \lambda |\xi|^2\leq \xi a_\tau(x) \cdot\xi\leq \Lambda |\xi|^2\quad \text{for all}\; x, \xi\in\rd,$$ for some $0<\lambda\leq \Lambda$.
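A simple family satisfying (i) is given, for instance, by
$$a_\tau(x)=\sigma_\tau(x)\sigma^{\top}_\tau(x),$$
where the matrices $\sigma_\tau(x)$ are bounded on compact sets, uniformly in $\tau$, and have all their singular values in $[\sqrt{\lambda}, \sqrt{\Lambda}]$; indeed, in this case $\xi a_\tau(x)\cdot \xi=|\sigma^{\top}_\tau(x)\xi|^2\in [\lambda|\xi|^2, \Lambda|\xi|^2]$ for all $x, \xi\in\rd$.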
\item[(ii)] There exists a measurable $K:\rd\setminus\{0\} \to (0, \infty)$, locally bounded in $\rd\setminus\{0\}$, such that $$0\leq K_\tau(x, y)\leq K(y) \quad \text{for all}\; x\in\rd, y\in \rd\setminus\{0\},\, \tau\in\mathcal{T},$$ and $$\int_{\rd} \min\{|y|^2, 1\} K(y)\, \dy<\infty.$$ \item[(iii)] For every compact set $\Kset$, there exist $\tilde\alpha\in (0, 1)$ and a constant $C_{\Kset}>0$ such that $$\sup_{\tau\in\mathcal{T}}\norm{a_\tau(x)-a_\tau(x')} + \sup_{\tau\in\mathcal{T}}\abs{b_\tau(x)-b_\tau(x')} + \sup_{\tau\in\mathcal{T}}\abs{g_\tau(x)-g_\tau(x')} \leq C_{\Kset}|x-x'|^{\tilde\alpha},$$ and $$ \sup_{\tau\in \mathcal{T}}|K_\tau(x, y)-K_\tau(x', y)|\leq C_{\Kset} |x-x'|^{\tilde\alpha} K(y)\quad y\in\rd\setminus\{0\},$$ for all $x, x'\in\Kset$. \end{itemize} \end{assumption} Note that Assumption~\ref{A4.1}(i) is the non-degeneracy condition and Assumption~\ref{A4.1}(iii) is normally used to establish $C^{2, \upkappa}$ regularity. As before, we also need a Foster--Lyapunov type condition, which we state below. Denote by $\breve{K}(y)=1_{B^c_1}(y) K(y)$. \begin{assumption}\label{A4.2} The following hold. \begin{itemize} \item[(i)] There exist inf-compact functions $V\in C^2(\mathbb{R}^d)$, $V\geq 0$, and $h\in C(\mathbb{R}^d)$, $h\geq 0$, such that \begin{align}\label{Lyap-ellip} \sup_{\tau \in\mathcal{T}} \left[ {\rm tr}{(a_\tau(x)D^2V)} + \breve I_\tau [V](x) + b_\tau(x)\cdot\grad V \right] \leq k_0-h(x) \quad x\in\rd, \end{align} for some $k_0>0$. \item[(ii)] $\sup_{\tau\in \mathcal{T}}|g_{\tau}(x)|\leq h(x)$ and $\sup_{\tau \in\mathcal{T}}|g_{\tau}|\in\sorder(h)$. \item[(iii)] For some $\upmu\geq 0$ we have $V^{1+\upmu}\in L^1(\breve{K})$ and \begin{align} \sup_{\tau \in\mathcal{T}}\,\frac{|g_{\tau}|}{(1+V)^{1+2\upmu}}&\leq C,\label{EA4.2A} \\ \sup_{x, y}\frac{V(x+y)}{(1+V(x))(1+V(y))} &+ \limsup_{|x|\to\infty}\; \frac{1}{1+V(x)}\sup_{|y-x|\leq 1} V(y) \leq C.\label{EA4.2B} \end{align} Moreover, one of the following holds.
\begin{itemize} \item[(a)] There exists a kernel $\mathscr{J}:\rd\setminus\{0\}\to (0, \infty)$, locally bounded in $\rd\setminus\{0\}$, such that \begin{equation}\label{EA4.2C} r^{d+2} K(ry)\leq \mathscr{J}(y) \quad \text{for all}\; r\in (0,1], \; y\in \rd\setminus\{0\}, \quad \int_{\rd} (|y|^2\wedge 1)\mathscr{J}(y)\dy<\infty, \end{equation} and \begin{equation}\label{EA4.2D} \sup_{\tau \in\mathcal{T}} \frac{|b_\tau|}{(1+V)^{\upmu}}\leq C, \end{equation} where $\upmu$ is the same as above. \item[(b)] $\sup_{\tau\in \mathcal{T}}\norm{b_\tau}_{L^\infty(\rd)} <\infty$. \end{itemize} \end{itemize} \end{assumption} In the spirit of Theorem~\ref{T1.1}, we can now complete the proof of Theorem~\ref{T1.3}. \begin{proof}[Proof of Theorem~\ref{T1.3}] Since the proof is analogous to the proof of Theorem~\ref{T1.1}, we only provide the outline. \medskip \noindent\underline{$\alpha$-discounted problem:} Applying \cite[Theorem~5.7]{Mou2019} we find a viscosity solution $W_n$ of \begin{equation*}\cI W_n -\alpha W_n=0\quad \text{in}\; B_n, \quad \text{and}\quad W_n=0\quad \text{in}\; B^c_n, \end{equation*} where $\alpha\in (0,1)$. Now by Assumption~\ref{A4.2} and the proof of Lemma~\ref{L2.3} we get $|W_n|\leq \frac{k_0}{\alpha} + V$. Given $R'>0$ we choose $n_0$ large enough so that $n_0\geq R'+4$. Then, for $n\geq n_0$, using the notations of Lemma~\ref{L2.3} we have \begin{equation*}\inf_{\tau\in\mathcal{T}} \Bigl({\rm tr}{(a_\tau(x)D^2\tilde\psi_n)} + \breve{I}_\tau[\tilde\psi_n]+ b_\tau\cdot\grad\tilde\psi_n + g_\tau + \breve{I}_\tau[(1-\chi)W_n]\Bigr) =0\quad \text{in}\; B_{R'+2}, \end{equation*} where $\tilde\psi_n=\chi W_n$. Using Assumption~\ref{A4.1}(ii) and \eqref{EA4.2B} it can be easily checked that $$\sup_{x\in B_{R'+1}}\sup_{\tau\in\mathcal{T}} |\breve{I}_\tau[(1-\chi)W_n]|<\infty.$$ Thus, by \cite[Theorem~4.2]{Mou2019}, $\{W_{n}\}_{n\geq n_0}$ is $\upkappa$-H\"{o}lder continuous in $B_{R'}$, uniformly in $n$.
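For the reader's convenience, we sketch this computation (assuming, as in the notation of Lemma~\ref{L2.3}, that the cut-off $\chi$ equals $1$ on $B_{R'+2}$, so that for $x\in B_{R'+1}$ the function $(1-\chi)W_n$ vanishes on $B_1(x)$). Using $|W_n|\leq \frac{k_0}{\alpha}+V$, Assumption~\ref{A4.1}(ii) and \eqref{EA4.2B}, we have, for $x\in B_{R'+1}$,
$$\sup_{\tau\in\mathcal{T}}|\breve{I}_\tau[(1-\chi)W_n](x)|\leq \int_{|y|\geq 1} |W_n(x+y)|\, \breve{K}(y)\, \dy \leq C_\alpha\int_{|y|\geq 1} \bigl(1+V(x+y)\bigr)\, \breve{K}(y)\, \dy \leq C_{\alpha}\, (1+V(x))\Bigl(\norm{\breve{K}}_{L^1(\rd)} + \norm{V}_{L^1(\breve{K})}\Bigr),$$
and the right-hand side is bounded on $B_{R'+1}$, uniformly in $n$.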
Therefore, we can repeat the proof of Lemma~\ref{L2.3} and find a subsequence $W_{n_k}\to w_\alpha\in \sorder(V)$ such that \begin{equation}\label{ET4.1B} \cI w_\alpha -\alpha w_\alpha=0\quad \text{in}\; \rd. \end{equation} \medskip \noindent\underline{Unique solution of \eqref{ET4.1A}:} As before, we denote by $\bar{w}_\alpha(x)=w_\alpha(x)-w_\alpha(0)$. The argument of Lemma~\ref{L3.1} goes through. Therefore, we can repeat the proof of Theorem~\ref{T3.2} to obtain that, along some subsequence $\alpha_n\to 0$, we have $\bar{w}_{\alpha_n}\to u$, $\alpha_n w_{\alpha_n}(0)\to \lambda^*$ and (passing to the limit in \eqref{ET4.1B}) \begin{equation}\label{ET4.1C} \cI u -\lambda^*=0\quad \text{in}\; \rd, \quad u(0)=0. \end{equation} The proof of Theorem~\ref{T3.2} also gives us $u\in \sorder(V)$. Next we argue that $u\in C^{2+}_{\rm loc}(\rd)$. To attain this goal we need an estimate analogous to \eqref{ET2.4G}. We use the notations of Lemma~\ref{L2.6}. Fix $x_0\in\rd$ and let $r=[1+V(x_0)]^{-\upmu},\, v(x)=u(x_0 + rx)$ and $\psi=\xi v$. Also, define $$K^r_\tau(x, y)= r^{d+2} K_\tau(x_0 + r x, ry), \quad \text{and}\quad \breve{I}^r_\tau \xi(x)=\int_{\rd} (\xi(x+y)-\xi(x)-\grad\xi(x)\cdot y 1_{B_{\frac{1}{r}}}(y)) K^r_\tau (x, y)\dy.$$ It is easy to check from \eqref{ET4.1C} that \begin{equation}\label{ET4.1D} \inf_{\tau\in\mathcal{T}}\left[ {\rm tr}{(a_\tau(x_0+rx)D^2\psi)} + \breve{I}^r_\tau [\psi] + r b_\tau(x_0+rx)\cdot\grad \psi + r^2 g_\tau(x_0+rx) + \breve{I}^r_\tau[(1-\xi)v] \right]-\lambda^*=0 \quad \text{in}\; \rd. \end{equation} Recall that $|v(x)|\leq C(1+V(x_0+rx))$.
Therefore, for $x\in B_1$, \begin{align*} \sup_{\tau\in\mathcal{T}}|\breve{I}^r_\tau[(1-\xi)v]| &\leq C r^{d+2}\int_{|y|\geq 1/2} (1 + V(x_0+rx+ry)) K(ry)\dy \\ &= C r^2 \int_{|y|\geq \frac{1}{2r}} (1 + V(x_0+rx+y)) K(y)\dy \\ &\leq C r^2 (1+V(x_0))\int_{\frac{1}{2r}\leq |y|\leq 1} K(y)\dy + C (1+V(x_0)) \int_{\rd} (1+V(y))\breve{K}(y)\dy \\ &\leq C(1+V(x_0))\left[1 + \int_{|y|\leq 1} |y|^2 K(y) \dy + \norm{V}_{L^1(\breve{K})} \right] \end{align*} for some constant $C$ independent of $x_0, r$, where in the third line we use \eqref{EA4.2B}. Under Assumption~\ref{A4.2}(iii)(a), we can apply \cite[Theorem~4.2]{Mou2019} on \eqref{ET4.1D} to obtain $\eta\in (0,1)$ satisfying \begin{equation}\label{ET4.1E} |u(x_0)-u(x_0 + rx)|\leq C(1+V(x_0)) |x|^\eta\quad \text{for all}\; |x|\leq \frac{1}{2}, \end{equation} and for all $x_0\in \rd$. Under Assumption~\ref{A4.2}(iii)(b), we apply \cite[Lemma~2.1]{MZ21} to obtain \eqref{ET4.1E}. Now consider $|x|\leq 1$. If $|x|\leq r$, then \eqref{ET4.1E} gives $$|u(x_0)-u(x_0+x)|\leq C(1+V(x_0)) |x|^\eta.$$ Now suppose $|x|> r$ and let $e=\frac{x}{|x|}$, $k=\lfloor \frac{2|x|}{r}+\frac{1}{2}\rfloor$. Note that $$k\leq \frac{2|x|}{r}+\frac{1}{2}\leq k+1\; \Rightarrow \bigl|\frac{2|x|}{r}-k\bigr| \leq \frac{1}{2}.$$ Using \eqref{EA4.2B} and \eqref{ET4.1E} we have \begin{align*} |u(x_0)-u(x_0+ x)|&\leq \sum_{i=1}^k|u(x_0 + \frac{r}{2}(i-1)e) -u(x_0 + \frac{r}{2}ie)| + |u(x_0 + \frac{r}{2}ke)-u(x_0+ x)| \\ &\leq C(1+V(x_0)) \frac{k+1}{2^\eta} \leq C_1(1+V(x_0)) \frac{|x|}{r}\leq C_1(1+V(x_0))^{1+\upmu}|x|^\eta. \end{align*} Combining the above estimates we get $$|u(x_0)-u(x_0+ x)|\leq C(1+V(x_0))^{1+\upmu}|x|^\eta \quad \text{for all}\; |x|\leq 1, \; \text{and}\; x_0\in\rd. $$ Now using Assumption~\ref{A4.1}(iii), \cite[Theorem~5.3]{MZ21} and the proof of Lemma~\ref{L2.6} we see that $u\in C^{2+}_{\rm loc}(\rd)$. Hence $u$ is a classical solution to \eqref{ET4.1C}. To complete the proof we only need to show uniqueness.
Suppose that $(\hat{u}, \hat{\varrho})\in \sorder(V)\times\mathbb{R}$ is a solution to \eqref{ET4.1C}. From the above argument we have $\hat{u}\in C^{2+}_{\rm loc}(\rd)$. Therefore, the arguments in the proof of Theorem~\ref{T1.1} give that $\hat{\varrho}=\lambda^*$. The equality $u=\hat{u}$ can be established by a similar argument, provided we can show that, for $v=u-\hat{u}$, the existence of $\bar{z}\in\rd$ with $\sup_{\rd} v= v(\bar{z})$ implies $v=0$. We note from \eqref{ET4.1A} that $$ \inf_{\tau\in\mathcal{T}}\left[ {\rm tr}{(a_\tau(x)D^2(-v))} + \breve I_\tau [-v](x) + b_\tau(x)\cdot\grad (-v)\right]\leq \cI \hat{u}-\cI u=0.$$ Letting $\hat{v}=v(\bar{z})-v$, we obtain from above that $$\inf_{\tau\in\mathcal{T}}\left[ {\rm tr}{(a_\tau(x)D^2\hat{v})} + \breve I_\tau [\hat{v}](x) + b_\tau(x)\cdot\grad\hat{v}\right]\leq 0 \quad \text{in}\; \rd.$$ Thus $\hat{v}$ is a non-negative supersolution. Fix any bounded domain $\Omega$ containing $\bar{z}$. Choose $M$ large enough so that $\hat{v}<M$ in $\Omega$. Defining $v_{M}=\hat{v}\wedge M$ we get from above that $$\inf_{\tau\in\mathcal{T}}\left[ {\rm tr}{(a_\tau(x)D^2 v_M)} + \breve I_\tau [v_M](x) + b_\tau(x)\cdot\grad v_M\right]\leq 0 \quad \text{in}\; \Omega,$$ and $v_M(\bar{z})=0$. From the weak Harnack principle \cite[Theorem~3.12]{Mou2019}, it then follows that $v_M=0$ in $\Omega$. Since $\Omega, M$ are arbitrary, we get $\hat{v}=0$ in $\rd$, that is, $v\equiv v(\bar{z})$; as $v(0)=u(0)-\hat{u}(0)=0$, we conclude that $v=0$ in $\rd$. This completes the proof. \end{proof} \section{\texorpdfstring{$C^{1, \gamma}$}{c} regularity}\label{S-regu} The main goal of this section is to establish $C^{1, \gamma}$ regularity of the viscosity solutions to the equation \begin{equation}\label{nonlinear_rough kernel} \sA u(x):= \inf_{\tau \in\mathcal{T}} \sup_{\iota\in\mathfrak{I}} \bigg[I_{\tau \iota}[u](x)+b_{\tau \iota}(x) \cdot \nabla u(x)+g_{\tau \iota}(x) \bigg]=0, \end{equation} where $\mathcal{T}, \mathfrak{I}$ are some index sets and \begin{align*} I_{\tau \iota}[u](x)=\int_{\mathbb{R}^d}\delta (u,x,y)\frac{k_{\tau \iota}(x,y)}{|y|^{d+2s}} \,dy.
\end{align*} Note that \eqref{nonlinear_rough kernel} is more general than the one used in Theorem~\ref{T-reg}. For each $\tau \in \mathcal{T}, \iota \in\mathfrak{I}$, $k_{\tau \iota}$ is symmetric in $y$, that is, $k_{\tau \iota}(x, y)=k_{\tau \iota}(x, -y)$ and $$(2-2s)\lambda \leq k_{\tau \iota}(x, y)\leq (2-2s)\Lambda \quad \text{for all}\; x, y, \quad 0<\lambda\leq \Lambda.$$ By $\cL_0(s)$ we denote the class of all kernels $k$ satisfying the above relation. The Pucci extremal operators with respect to the class $\cL_0(s)$ are defined as follows. \begin{align*} M^+u(x)&=\sup_{I\in \mathcal{L}_0(s)} I u(x) = (2-2s)\int_{\rd} \frac{\Lambda \delta^+(u, x, y)- \lambda \delta^-(u, x, y)}{|y|^{d+2s}}\, \dy, \\ M^-u(x)&=\inf_{I\in \mathcal{L}_0(s)} Iu(x) =(2-2s)\int_{\rd} \frac{\lambda \delta^+(u, x, y)- \Lambda \delta^-(u, x, y)}{|y|^{d+2s}}\, \dy. \end{align*} We impose the following assumptions on the coefficients. \begin{itemize} \item[(\hypertarget{H1}{H1})] $b_{\tau \iota}$, $g_{\tau \iota}$ are continuous and $$\sup_{\tau\in \mathcal{T},\iota \in\mathfrak{I}}\norm{b_{\tau \iota}}_{L^\infty}<\infty, \quad \text{and}\quad \sup_{\tau\in \mathcal{T},\iota \in \mathfrak{I}}\norm{g_{\tau \iota}}_{L^\infty}<\infty.$$ \item[(\hypertarget{H2}{H2})] The map $x\mapsto k_{\tau \iota}(x,y)$ is uniformly continuous, uniformly in $\tau, \iota$ and $y$, that is, \begin{align*} |k_{\tau \iota}(x_1,y)-k_{\tau \iota}(x_2,y)|\leq \varrho(|x_1-x_2|)\quad \forall x_1,x_2 \in \mathbb{R}^d,\; \tau\in \mathcal{T}, \iota \in\mathfrak{I}, y\in\rd, \end{align*} where $\varrho$ is a modulus of continuity. \end{itemize} As before, we denote by $\omega_s(y)$ the weight function $[1+|y|^{d+2s}]^{-1}$. Let us define the following collection of test functions, which will be used to define a norm on $I$: let $M>0$, $\Omega$ be a fixed domain and $x \in \Omega$.
Let us consider the following set of functions: \begin{align*} \mathcal{D}^x_M:=\{\phi\in L^1(\mathbb{R}^d,\omega) &|\; \phi \in C^2(x), x\in \Omega,\hspace{1mm} \norm{\phi}_{L^1(\omega)}\leq M, \quad \text{and} \\ &\quad |\phi(y)-\phi(x)-(y-x)\cdot \nabla\phi(x)|\leq M|y-x|^2, \forall y \in B_1(x)\}, \end{align*} where $\omega$ is a weight function and $\phi\in C^2(x)$ means that there is a quadratic polynomial $q$ such that $\phi(y)=q(y) + \sorder(|x-y|^2)$ for $y$ sufficiently close to $x$. Let $\Omega = B_N$ be a ball of fixed radius $N$ and take the particular weight function $\omega_{s}(y)=(1+|y|^{d+2s})^{-1}$. Now observe that if $\phi \in \mathcal{D}^x_M$, then \begin{align} \label{bound_on_phi} &\int_{B_1(x)}|\phi(y)-\phi(x)-(y-x)\cdot \nabla\phi(x)|\,dy\leq M\int_{B_1(x)}|x-y|^2\,dy\leq M|B_1| \notag \\ \implies & \bigg|\int_{B_1(x)}\phi(y)\,dy-\int_{B_1(x)}\phi(x)\,dy-\int_{B_1(x)}(y-x)\cdot \nabla\phi(x)\,dy\bigg|\leq M|B_1| \notag \\ \implies & \bigg|\int_{B_1(x)}\phi(y)\,dy-\phi(x)|B_1|\bigg|\leq M|B_1|\implies |\phi(x)||B_1| \leq M|B_1|+\int_{B_1(x)}\frac{|\phi(y)|(1+|y|^{d+2s})}{1+|y|^{d+2s}}\,dy \notag \\ \implies &|\phi(x)||B_1| \leq M|B_1|+M(1+(1+N)^{d+2s})\implies |\phi(x)|\leq M C_{d, s, \Omega}. \end{align} It is noteworthy that the right-hand side of the last inequality does not depend on any particular choice of $\phi$ or $x$. In other words, we get the same bound as long as $x\in \Omega$ and $\phi \in \mathcal{D}^x_M$ for a fixed $M>0$. \begin{defi} Let $\Omega$ be a domain and $\omega$ a weight function. Then for any nonlocal operator $I$, the norm $\norm{I}_\omega$ with respect to the weight function $\omega$ is defined as follows. \begin{align}\label{norm} ||I||_\omega=\sup_{x,M}\Big\{\frac{I[\phi](x)}{1+M}\; \big|\; \phi \in \mathcal{D}^x_M ,\; x \in \Omega, M>0\Big\}. \end{align} \end{defi} Our main result of this section is the following $C^{1, \gamma}$ regularity estimate of the solutions to \eqref{nonlinear_rough kernel}.
\begin{thm}\label{c_1,gamma_regularity} Let $s\in (1/2, 1)$. Assume that \hyperlink{H1}{(H1)}--\hyperlink{H2}{(H2)} hold in $B_1$ and $$\sup_{\tau, \iota}\norm{b_{\tau \iota}}_{L^\infty(B_1)}\leq C_0.$$ Then there exists $\gamma\in(0, 2s-1)$ such that for any bounded viscosity solution $u\in C(\mathbb{R}^d)$ to \eqref{nonlinear_rough kernel} in $B_1$ we have \begin{align*} \norm{u}_{C^{1,\gamma}(B_{\frac{1}{2}})}\leq C\left(\norm{u}_{L^\infty(\mathbb{R}^d)}+\sup_{\tau, \iota}\norm{g_{\tau \iota}}_{L^\infty(B_1)}\right), \end{align*} where the constant $C$ depends on $d,s,C_0,\varrho,\lambda, \Lambda$. \end{thm} We mainly follow the ideas in \cite{Serra2015} and \cite{Serra_Parabolic} for the proof of Theorem~\ref{c_1,gamma_regularity}. \begin{lem} \label{Rescaled_Operator} Let $s>\frac{1}{2}$ and let $I$ be an integro-differential operator, elliptic with respect to the class $\mathcal{L}_0(s)$, in particular an operator of the form $I_{\tau\iota}$ in \eqref{nonlinear_rough kernel}. Given $x_0\in \mathbb{R}^d$, $r>0$, $c>0$, and $l(x)=a\cdot x+b$, define $\tilde{I}$ by \begin{align*} \tilde{I}\bigg( \frac{w(x_0+r\cdot)-l(x_0+r\cdot)}{c}\bigg)(x)=\frac{r^{2s}}{c}I(cw)(x_0+rx). \end{align*} Then $\tilde{I}$ is elliptic with respect to $\mathcal{L}_0(s)$ with the same ellipticity constants. \end{lem} \begin{proof} To see this, let us introduce the notation $\tau_r(x)=x_0+r\cdot x$.
Now from the definition of $\tilde{I}$ we observe that \begin{align*} &\tilde{I}((w-l)\circ \tau_r)(x)=\frac{r^{2s}}{c}I(cw)(\tau_r(x)) \\ \implies &\tilde{I}(w)(x)=\tilde{I}(w\circ \tau^{-1}_r\circ \tau_r)(x)=\frac{r^{2s}}{c}I(c(w+l)\circ \tau_r^{-1})(\tau_r(x)). \end{align*} Now it is easy to see that \begin{align*} \tilde{I}(u)(x)-\tilde{I}(v)(x)\leq &\frac{r^{2s}}{c}\Big[I(c(u+l)\circ \tau_r^{-1})(\tau_r(x))-I(c(v+l)\circ \tau_r^{-1})(\tau_r(x))\Big] \\ \leq &\frac{r^{2s}}{c}M^+(c(u-v)\circ \tau^{-1}_r)(\tau_r(x))=M^+(u-v)(x), \end{align*} using the fact that the extremal operators $M^+$ and $M^-$ are translation invariant. This completes the proof. \end{proof} The proof of the following result can be found in \cite[Lemma~4.3]{Serra2015}. \begin{lem}\label{Least_square_{tau}pproximation} Let $s> \frac{1}{2}$, $\beta \in (1,2s)$, and define, for any $z\in \mathbb{R}^d$ and $r>0$, the affine function \begin{align*} l_{r,z}(x)=a^* \cdot (x-z)+b^*, \end{align*} where \begin{align*} (a^*,b^*)=\Argmin_{(a,b)\in\rd\times\R} \int_{B_r(z)} (u(x)-a\cdot(x-z)-b)^2 \,dx. \end{align*} If for some constant $C_0$ we have \begin{align*} \sup_{r>0}\sup_{z\in B_{\frac{1}{2}}} r^{-\beta} ||u-l_{r,z}||_{L^\infty(B_r(z))}\leq C_0, \end{align*} then \begin{align*} \norm{u}_{C^\beta(B_{\frac{1}{2}})}\leq C(\norm{u}_{L^\infty(\mathbb{R}^d)}+C_0), \end{align*} where $C$ depends on the exponent $\beta$. \end{lem} For our next result we need to introduce a class of scaled operators. For $m\in \mathbb{N}$, let $z_m\in B_{\frac{1}{2}}$ and \begin{equation}\label{E5.4} \tilde{I}^m[w](x):= \inf_{\tau\in \mathcal{T}_m}\sup_{\iota \in\mathfrak{I}_m}\int_{\mathbb{R}^d} \delta(w,x,y) \frac{k_{\tau \iota}(z_m + r_m x, r_m y)}{|y|^{d+2s}}\,dy, \end{equation} where $r_m>0$ and $k_{\tau \iota}\in\cL_0(s)$ for all $\tau\in \mathcal{T}_m, \iota \in\mathfrak{I}_m$. Now we recall the weak convergence of operators from \cite{CS11b}.
A sequence of operators $I_m$ is said to be weakly convergent to $I$ (with respect to a weight function $\omega$) if for every small $\varepsilon>0$, every point $x_0\in\Omega$, and every test function $\phi\in L^1(\omega)$ which is a quadratic polynomial in $B_\varepsilon(x_0)$, we have $$I_m[\phi](x)\to I[\phi](x)\quad \text{uniformly in}\; B_{\varepsilon/2}(x_0).$$ We need the following result on weak convergence. \begin{lem}\label{L5.4} Let $z_m\to z_0$ and $r_m\to 0$ as $m\to\infty$. Let $\tilde{I}^{m}$ be defined as above and let the family $\{k_{\tau \iota}\; :\; \tau\in \mathcal{T}_m, \iota \in\mathfrak{I}_m\}$ satisfy \hyperlink{H2}{(H2)} with the same modulus of continuity $\varrho$. Then there exists a subsequence $\tilde{I}^{m_k}$ that converges weakly to a translation invariant operator $I_0$ with $I_0(0)=0$, where $I_0$ is elliptic with respect to the class $\cL_0(s)$. \end{lem} \begin{proof} To prove the lemma, let us first define a translation invariant operator $\hat{I}^m$ as follows: \begin{align*} \hat{I}^m[\phi](x)= \inf_{\tau\in \mathcal{T}_m} \sup_{\iota \in\mathfrak{I}_m}\int_{\mathbb{R}^d} \delta(\phi,x,y) \frac{k_{\tau \iota}(z_0,r_my)}{|y|^{d+2s}}\,dy. \end{align*} We claim that for every bounded domain $\Omega$ we have \begin{equation}\label{EL5.4A} ||\tilde{I}^{m}-\hat{I}^m||_{\omega_{s}}\xrightarrow{m \rightarrow \infty} 0, \end{equation} where $\norm{\cdot}_{\omega_s}$ is given by \eqref{norm}. To prove the convergence, consider a test function $\phi\in \mathcal{D}^x_M$, $x\in\Omega$, and observe from \hyperlink{H2}{(H2)} that \begin{align*} |\tilde{I}^{m}[\phi](x)-\hat{I}^m[\phi](x)| & \leq \sup_{(\tau, \iota)\in\mathcal{T}_m\times\mathfrak{I}_m}|\tilde{I}_{\tau \iota}^{m}[\phi](x)-\hat{I}_{\tau \iota}^m[\phi](x)| \\ \leq &\, 2M\varrho(|z_m+r_mx-z_0|) \int_{B_1}\frac{|y|^2}{|y|^{d+2s}} \,dy + \varrho(|z_m+r_mx-z_0|) \int_{B^c_1}\frac{|\delta(\phi,x,y)|}{|y|^{d+2s}}\,dy.
\end{align*} Next we carefully estimate the second integral. Using \eqref{bound_on_phi} we see that \begin{align*} &\bigg|\int_{B^c_1}\frac{\delta(\phi,x,y)}{|y|^{d+2s}}\,dy\bigg|\leq \int_{B^c_1}\frac{|\phi(x+y)+\phi(x-y)-2\phi(x)|}{|y|^{d+2s}}\,dy \\ \leq &\int_{B^c_1}\frac{1+|y|^{d+2s}}{|y|^{d+2s}}(|\phi(x+y)|+|\phi(x-y)|)\omega_{s}(y)\,dy+ C(s, \Omega)\frac{M}{2s} \leq MC_{s, \Omega}. \end{align*} Thus, it follows from \eqref{norm} that \begin{align*} \norm{\tilde{I}^{m}-\hat{I}^m}_{\omega_s} \leq C_{s, \Omega}\, \sup_{x\in\Omega}\varrho(|z_m+r_m x-z_0|), \end{align*} which gives us \eqref{EL5.4A}. Again, by \cite[Theorem~42]{CS11b}, there exists a subsequence $\hat{I}^{m_k}$ that converges weakly to some $I_0$. Combining with \eqref{EL5.4A} it is easily seen that $\tilde{I}^{m_k}$ converges weakly to $I_0$. It is also evident that $I_0(0)=0$ and $I_0$ is elliptic with respect to the class $\cL_0(s)$ (cf. \cite[Lemma~4.1]{Serra_Parabolic}). \end{proof} Now we can complete the proof of Theorem~\ref{c_1,gamma_regularity} with the help of Lemmas~\ref{Rescaled_Operator},~\ref{Least_square_{tau}pproximation} and \ref{L5.4}. \begin{proof}[Proof of Theorem~\ref{c_1,gamma_regularity}] We prove the theorem by contradiction. First we fix the choice of $\gamma$. Let $\upgamma_\circ$ be the H\"{o}lder exponent obtained with respect to the Pucci operators $M^{\pm}$ in \cite[Theorem~12.1]{CS09} (see also \cite[Theorem~2.1]{Serra_Parabolic}). Fix $\gamma\in (0, \min\{2s-1,\upgamma_\circ\})$. Now suppose that there exist $I_k,u_k, b^k_{\tau \iota}$ and $\{g^k_{\tau \iota}\}_k$ satisfying \begin{align*} & \sup_{\tau, \iota}|b^k_{\tau \iota}|\leq C_0 \quad \text{and} \quad ||u_k||_{L^\infty(\mathbb{R}^d)} + \sup_{\tau, \iota}\norm{g^k_{\tau \iota}}_{L^\infty(B_1)} = 1, \\ & \text{but}\quad ||u_k||_{C^\beta(B_{\frac{1}{2}})}\xrightarrow{k \rightarrow \infty} +\infty, \quad \text{where}\quad \beta=1+\gamma.
\end{align*} In view of Lemma \ref{Least_square_{tau}pproximation}, there exist $a^*(k,r,z)$ and $b^*(k,r,z)$ such that \begin{align}\label{E3.5} \sup_k \sup_{r>0}\sup_{z\in B_{\frac{1}{2}}}r^{-\beta} ||u_k -l_{k,r,z}||_{L^\infty(B_r(z))}= +\infty, \end{align} where \begin{align*} &(a^*(k,r,z),b^*(k,r,z))=\Argmin_{(a,b)\in \mathbb{R}^d\times \mathbb{R}}\int_{B_r(z)}(u_k(x)-a\cdot(x-z)-b)^2\,dx \\ &\text{and} \quad l_{k,r,z}(x)=a^*(k,r,z)\cdot (x-z)+ b^*(k,r,z). \end{align*} Now for any $r>0$, define $\Theta$ as follows: \begin{align} \label{Theta} \Theta(r):=\sup_k \sup_{r^\prime \geq r}\sup_{z\in B_{\frac{1}{2}}} (r^\prime)^{-\beta} ||u_k -l_{k,r^\prime, z}||_{L^\infty(B_{r^\prime}(z))}. \end{align} Since $\sup_k\norm{u_k}_{L^\infty(\rd)}\leq 1$, we see that $\Theta (r)< \infty$ for all $r> 0$, and therefore, \eqref{Theta} is well-defined. Furthermore, from \eqref{E3.5} we observe that for any $M>0$, however large, there exist $\tilde{r}>0$, $\tilde{k}\in \mathbb{N}$ and $\tilde{z}\in B_\frac{1}{2}$ such that \begin{align*} (\tilde{r})^{-\beta} ||u_{\tilde{k}}-l_{\tilde{k},\tilde{r},\tilde{z}}||_{L^\infty(B_{\tilde{r}}(\tilde{z}))}> M. \end{align*} Therefore, since $\Theta(r)\geq \Theta(\tilde{r})>M$ for any $0<r \leq \tilde{r}$, we get that $\Theta(r)\uparrow \infty $ as $r \downarrow 0$. As $\Theta(\frac{1}{m})\uparrow +\infty$ with $m\uparrow +\infty$, there exist $r_m \geq \frac{1}{m}$, $k_m \in \mathbb{N}$ and $z_m \in B_{\frac{1}{2}}$ such that \begin{align*} \frac{1}{2}\Theta(r_m)\leq\frac{1}{2}\Theta(\frac{1}{m})< (r_m)^{-\beta} ||u_{k_m}-l_{k_m,r_m,z_m}||_{L^\infty(B_{r_m}(z_m))}. \end{align*} It is easily seen that $r_m$ converges to zero. With this, let us define a new function $v_m$ as follows \begin{align} v_m(x):= \bigg(\frac{u_{k_m}-l_{k_m,r_m,z_m}}{r_m^\beta \Theta(r_m)}\bigg)(z_m +r_mx).
\end{align} It is easy to see from the minimality condition and the choice of $(k_m, r_m, z_m)$ that \begin{align}\label{E3.8} \int_{B_1} v_m \,dx=0; \quad \int_{B_1} v_m x_i \,dx=0;\quad \text{and}\quad ||v_m||_{L^\infty(B_1)}\geq 1/2. \end{align} We next claim that \begin{align}\label{AB00} ||v_m||_{L^\infty(B_R)} \leq CR^\beta , \quad \forall R\geq 1. \end{align} To prove the above growth bound, we first observe that for any $z\in B_{\frac{1}{2}}, k\in \mathbb{N}$ and $r^\prime \geq r>0$ \begin{align*} &||u_k-l_{k,r^\prime,z}||_{L^\infty(B_{r^\prime}(z))}\leq (r^\prime)^\beta \Theta(r^\prime)\quad \text{and}\quad ||u_k-l_{k,2r^\prime,z}||_{L^\infty(B_{2r^\prime}(z))}\leq (2r^\prime)^\beta \Theta(2r^\prime) \\ \implies &||l_{k, 2r^\prime , z}-l_{k, r^\prime, z}||_{L^\infty(B_{r^\prime}(z))}\leq ||u_k-l_{k,r^\prime,z}||_{L^\infty(B_{r^\prime}(z))} +||u_k-l_{k,2r^\prime,z}||_{L^\infty(B_{r^\prime}(z))} \\ &\hspace{11em} \leq (r^\prime)^\beta \Theta(r^\prime)+(2r^\prime)^\beta \Theta(2r^\prime). \end{align*} Now by the monotonicity property of $\Theta$, we conclude for any $r^\prime \geq r>0$ \begin{align*} ||l_{k, 2r^\prime , z}-l_{k, r^\prime, z}||_{L^\infty(B_r(z))} \leq &(2r^\prime)^{\beta} \Theta(2 r^\prime)+(r^\prime)^{\beta} \Theta(r^\prime) \leq (2^\beta+1) (r^\prime)^\beta \Theta(r).
\end{align*} This implies that \begin{align*} \sup_{k,z}|b^*(k,2r^\prime, z)-b^*(k,r^\prime,z)|\leq C \Theta(r) (r^\prime)^\beta \quad \text{and}\quad \sup_{k,z}|a^*(k,2r^\prime,z)-a^*(k,r^\prime,z)|\leq C \Theta(r)(r^\prime)^{\beta-1}. \end{align*} Now if we take $R=2^N$, $N\geq 1$, and $r^\prime_j=2^{j}r$, $j=0,1,\ldots, N$, in the above inequalities, then \begin{align} \label{R_Growth_on_{tau}^*} \frac{|a^*(k,Rr,z)-a^*(k,r,z)|}{r^{\beta-1} \Theta(r)}\leq & \sum^N_{j=1}\frac{|a^*(k,r^\prime_j,z)-a^*(k,r^\prime_{j-1},z)|}{r^{\beta-1} \Theta(r)} \notag \\ =& \sum^N_{j=1}2^{(\beta-1)(j-1)}\frac{|a^*(k,r^\prime_j,z)-a^*(k,r^\prime_{j-1},z)|}{2^{(\beta-1)(j-1)}r^{\beta-1} \Theta(r)} \notag \\ \leq & C \sum_{j=1}^N 2^{(j-1)(\beta-1)}\leq C \frac{(2^{\beta-1})^N-1}{2^{\beta-1}-1} \leq \frac{C}{2^{\beta-1}-1}R^{\beta-1}. \end{align} Similarly, we can also prove that \begin{align} \label{R_growth_on_b^*} \frac{|b^*(k, Rr,z)-b^*(k,r,z)|}{r^\beta \Theta(r)}\leq CR^\beta. \end{align} Now, observe that \begin{align*} ||v_m||_{L^\infty(B_R)}\leq \frac{||u_{k_m}-l_{k_m,r_m,z_m}||_{L^\infty(B_{Rr_m}(z_m))}}{r_m^\beta \Theta(r_m)}\leq & R^\beta \frac{\Theta(Rr_m)}{\Theta(r_m)}+ \frac{||l_{k_m,Rr_m,z_m}-l_{k_m,r_m,z_m}||_{L^\infty(B_{Rr_m}(z_m))}}{r_m^\beta \Theta(r_m)}. \end{align*} Moreover, for any $x\in B_{Rr_m}(z_m)$ \begin{align*} |l_{k_m,Rr_m,z_m}-l_{k_m,r_m,z_m}| &\leq |a^*(k_m,Rr_m,z_m)-a^*(k_m,r_m,z_m)||x-z_m| \\ &\qquad + |b^*(k_m,Rr_m,z_m)-b^*(k_m,r_m,z_m)| \\ &\leq Rr_m|a^*(k_m,Rr_m,z_m)-a^*(k_m,r_m,z_m)| \\ &\qquad + |b^*(k_m,Rr_m,z_m)-b^*(k_m,r_m,z_m)|.
\end{align*} By using \eqref{R_Growth_on_{tau}^*} and \eqref{R_growth_on_b^*} we have \begin{align} \label{R_growth_on_l} \frac{\norm{l_{k_m,Rr_m,z_m}-l_{k_m,r_m,z_m}}_{L^\infty(B_{Rr_m}(z_m))}}{r_m^\beta \Theta(r_m)}&\leq R\frac{|a^*(k_m,Rr_m,z_m)-a^*(k_m,r_m,z_m)|}{r_m^{\beta-1}\Theta(r_m)}\notag \\ &\qquad + \frac{|b^*(k_m,Rr_m,z_m)-b^*(k_m,r_m,z_m)|}{r_m^\beta\Theta(r_m)} \notag \\ \leq &C(\beta)R^\beta + C(\beta)R^\beta. \end{align} Ultimately, using the monotonicity property of $\Theta$ and \eqref{R_growth_on_l}, we obtain \begin{align}\label{AB01} ||v_m||_{L^\infty(B_R)}\leq (1+2C(\beta))R^\beta. \end{align} This gives us \eqref{AB00}. Now, observe that, by Lemma \ref{Rescaled_Operator}, $v_m$ satisfies the following inequalities in the viscosity sense \begin{equation}\label{AB02} \begin{split} M^- v_m - r_m^{2s-1}C_0|\nabla v_m|- r_m^{2s-\beta}\frac{|b^{k_m}_{\tau \iota}\cdot a_m^*|+|g^{k_m}_{\tau \iota}|}{ \Theta(r_m)} \leq 0 \hspace{2mm} \text{in} \hspace{2mm} B_R, \\ M^+ v_m + r_m^{2s-1}C_0|\nabla v_m|+r_m^{2s-\beta}\frac{|b^{k_m}_{\tau \iota}\cdot a_m^*|+|g^{k_m}_{\tau \iota}|}{ \Theta(r_m)} \geq 0 \hspace{2mm} \text{in} \hspace{2mm} B_R, \end{split} \end{equation} where $a^*_m:=a^*(k_m,r_m, z_m)$. We first claim that \begin{align*} |a^*_m|\leq C(1+\Theta(r_m)). \end{align*} To see this, choose $l_m \in \mathbb{N}$ such that $2^{-l_m}\leq r_m < 2^{-(l_m-1)}$ and observe that \begin{align*} |a^*(k_m, r_m, z_m)-a^*(k_m, 2^{(l_m-1)} r_m, z_m)| &\leq \sum^{l_m-1}_{j=1}|a^*( k_m, 2^jr_m, z_m)-a^*(k_m, 2^{(j-1)}r_m, z_m)| \\ &\leq \sum^{l_m-1}_{j=1}(2^{(j-1)}r_m)^{\beta-1}\frac{|a^*( k_m, 2^jr_m, z_m)-a^*(k_m, 2^{(j-1)}r_m, z_m)|}{(2^{(j-1)}r_m)^{(\beta-1)}} \\ &\leq C\sum^{l_m-1}_{j=1}(2^{(j-1)}r_m)^{\beta-1}\Theta(2^{j-1}r_m) \leq C\Theta(r_m)r^{(\beta-1)}_m \frac{2^{(\beta-1)(l_m-1)}-1}{2^{\beta-1}-1} \\ &\leq C\Theta(r_m) \frac{(r_m2^{l_m-1})^{(\beta-1)}}{2^{\beta-1}-1} \leq \frac{C\,\Theta(r_m)}{2^{\beta-1}-1}.
\end{align*} Again, from the definition and monotonicity of $\Theta$, \begin{align*} |u_{k_m}(x)-l_{k_m, 2^{(l_m-1)} r_m, z_m}(x)|&\leq \Theta (2^{l_m-1} r_m)(2^{l_m-1} r_m)^\beta \leq \Theta({1}/{2})(2^{l_m-1} r_m)^\beta \quad \forall x\in B_{2^{l_m-1}r_m}(z_m) \\ \implies|u_{k_m}(z_m)-b^*(k_m, 2^{(l_m-1)}r_m,z_m)|&\leq \Theta({1}/{2})\left(2^{l_m-1} r_m\right)^\beta, \hspace{2mm} [\text{by substituting }\hspace{1mm} x=z_m] \end{align*} which, in turn, implies that for small $r_m$ \begin{align*} |a^*(k_m, 2^{(l_m-1)} r_m, z_m)|\leq C \Theta(\frac{1}{2}). \end{align*} This gives the desired bound: \begin{align*} |a^*(k_m, r_m, z_m)|\leq |a^*(k_m, r_m, z_m)-a^*(k_m, 2^{(l_m-1)} r_m, z_m)|+|a^*(k_m, 2^{(l_m-1)} r_m, z_m)|\leq C(\Theta(r_m)+1). \end{align*} Therefore, as the $b_{\tau \iota}, g_{\tau \iota}$ are uniformly bounded, $\Theta(r_m)\to\infty$ and $2s-\beta=2s-1-\gamma>0$, we have \begin{align}\label{E5.16} r_m^{2s-\beta}\frac{|b^{k_m}_{\tau \iota}\cdot a_m^*|+|g^{k_m}_{\tau \iota}|}{ \Theta(r_m)} \xrightarrow{m \rightarrow \infty } 0. \end{align} Applying \cite[Theorem~7.2]{Schwab_Silvestre}, it then follows from \eqref{AB01}--\eqref{AB02} that the family $\{v_m\,:\, m\geq 1\}$ is locally H\"{o}lder continuous, uniformly in $m$. By the Arzel\`{a}-Ascoli theorem we can extract a convergent subsequence of $v_m$. Let $v_m\to v\in C(\rd)$ along some subsequence, as $m\to\infty$. It is also evident from \eqref{AB01} that $$\norm{v}_{L^\infty(B_R)}\leq C(1+R^\beta).$$ We claim that there exists a translation invariant operator $I_0$, elliptic with respect to $\cL_0(s)$, such that \begin{equation}\label{AB03} I_0(v)=0\quad \text{in} \hspace{2mm} \mathbb{R}^d. \end{equation} Once we have established \eqref{AB03}, it then follows from \cite[Theorem~3.1]{Serra_Parabolic} that $v(x)=a\cdot x+ b$. Passing to the limit in the first two equations of \eqref{E3.8} gives $a=0$, $b=0$, which contradicts the third condition in \eqref{E3.8}, namely $\norm{v}_{L^\infty(B_{1})}\geq 1/2$. Hence we have a contradiction to \eqref{E3.5}.
It remains to show \eqref{AB03}. Choosing a further subsequence, if required, we may assume that $z_m\to z_0$ and $v_m\to v$ uniformly on compacts. Recall the operator $\tilde{I}^m$ from \eqref{E5.4}. Now observe that for any bounded domain $\Omega$ we have \begin{equation}\label{AB04} \begin{split} \tilde{I}^m v_m - r_m^{2s-1}K_0|\nabla v_m|- r_m^{2s-\beta}\frac{|b^k_{\tau \iota}\cdot a_m^*|+|g^k_{\tau \iota}|}{ \Theta(r_m)} \leq 0 \hspace{2mm} \text{in} \hspace{2mm} \Omega, \\ \tilde{I}^m v_m + r_m^{2s-1}K_0|\nabla v_m|+r_m^{2s-\beta}\frac{|b^k_{\tau \iota}\cdot a_m^*|+|g^k_{\tau \iota}|}{ \Theta(r_m)} \geq 0 \hspace{2mm} \text{in} \hspace{2mm} \Omega. \end{split} \end{equation} By Lemma~\ref{L5.4}, $\tilde{I}^{m_k}$ converges weakly to a translation invariant operator $I_0$ which is elliptic with respect to $\cL$. Thus \eqref{AB03} follows from \eqref{E5.16} and \eqref{AB04}. \end{proof} \subsection*{Acknowledgement} We thank the reviewers for helpful comments. The research of Anup Biswas was supported in part by a SwarnaJayanti fellowship DST/SJF/MSA-01/2019-20. \bibliographystyle{plain} \bibliography{ref.bib} \end{document}
2206.13662v3
http://arxiv.org/abs/2206.13662v3
Toward Jordan Decompositions of Tensors
\documentclass[12pt,oneside, reqno]{amsart} \usepackage{graphicx} \usepackage{placeins} \usepackage{hhline} \usepackage{amsmath,amsthm,amscd,amssymb,mathrsfs} \usepackage{xspace} \usepackage[all]{xypic} \usepackage{booktabs} \usepackage{physics} \usepackage{array} \newcolumntype{C}{>{$}c<{$}} \usepackage{hyperref} \usepackage[lite,initials]{amsrefs} \usepackage{verbatim} \usepackage{amscd} \usepackage[all]{xy} \usepackage{youngtab} \usepackage{ytableau} \usepackage{nicefrac} \usepackage{xfrac} \usepackage{longtable} \newcommand\mathcenter[1]{\begin{array}{@{}c@{}}#1\end{array}} \newcommand\yngfrac[2]{\left.\mathcenter{#1}\,\middle/\,\mathcenter{#2}\right.} \usepackage{mathdots} \usepackage{tikz} \DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png} \newcommand{\ccircle}[1]{* + <1ex>[o][F-]{#1}} \newcommand{\ccirc}[1]{\xymatrix@1{* + <1ex>[o][F-]{#1}}} \usepackage[T1]{fontenc} \usepackage{cleveref} \numberwithin{equation}{section} \topmargin=-0.3in \evensidemargin=0in \oddsidemargin=0in \textwidth=6.5in \textheight=9.0in \headsep=0.4in \usepackage{color} \makeatletter \newtheorem{rep@theorem}{\rep@title} \newcommand{\newreptheorem}[2]{\newenvironment{rep#1}[1]{ \def\rep@title{#2 \ref{##1}} \begin{rep@theorem}} {\end{rep@theorem}}} \makeatother \newtheorem{theorem}{Theorem}[section] \newreptheorem{theorem}{Theorem} \newreptheorem{lemma}{Lemma} \newtheorem{theoremst}[theorem]{Theorem$^{*}$} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{lem}[theorem]{Lemma} \newtheorem{prop}[theorem]{Proposition} \newtheorem{obs}[theorem]{Observation} \newtheorem{notation}[theorem]{Notation} \newtheorem{cor}[theorem]{Corollary} \newtheorem{question}[theorem]{Question} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{examplex}{Example} \newenvironment{example} {\pushQED{\qed}\renewcommand{\qedsymbol}{$\diamondsuit$}\examplex} {\popQED\endexamplex} \theoremstyle{remark} 
\newtheorem{remark}[theorem]{Remark} \newcommand{\defi}[1]{\textsf{#1}} \newcommand{\isom}{\cong} \newcommand{\im}{\operatorname{im}} \newcommand{\Id}{\text{Id}} \newcommand{\pr}{\text{pr}} \newcommand{\Proj}{\operatorname{Proj}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\End}{\operatorname{End}} \newcommand{\Gr}{\operatorname{Gr}} \newcommand{\SL}{\operatorname{SL}} \newcommand{\GL}{\operatorname{GL}} \newcommand{\GG}{\operatorname{\text{GME}^{grass}}} \newcommand{\St}{\operatorname{St}} \newcommand{\Osh}{{\mathcal O}} \newcommand{\A}{\mathcal{A}} \newcommand{\B}{\mathcal{B}} \newcommand{\C}{\mathcal{C}} \newcommand{\K}{\mathcal{K}} \newcommand{\E}{\mathcal{E}} \newcommand{\F}{\mathcal{F}} \newcommand{\I}{\mathcal{I}} \newcommand{\J}{\mathcal{J}} \newcommand{\R}{\mathcal{R}} \newcommand{\U}{\mathcal{U}} \newcommand{\V}{{\mathcal V}} \def \S{\mathfrak{S}} \newcommand{\codim}{\operatorname{codim}} \newcommand{\diag}{\operatorname{diag}} \newcommand{\PP}{\mathbb{P}} \newcommand{\FF}{\mathbb{F}} \renewcommand{\SS}{\mathbb{S}} \newcommand{\TT}{\mathbb{T}} \newcommand{\RR}{\mathbb{R}} \newcommand{\NN}{\mathbb{N}} \newcommand{\CC}{\mathbb{C}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\Sym}{\operatorname{Sym}} \newcommand{\Spec}{\operatorname{Spec}} \newcommand{\Chow}{\operatorname{Chow}} \newcommand{\Seg}{\operatorname{Seg}} \newcommand{\Sub}{\operatorname{Sub}} \def\bw#1{{\textstyle\bigwedge^{\hspace{-.2em}#1}}} \def\o{ \otimes } \def\phi{ \varphi } \def\ep{ \varepsilon} \def \a{\alpha} \def \b{\beta} \def \n{\mathfrak{n}} \def \h{\mathfrak{h}} \def \d{\mathfrak{d}} \def \z{\mathfrak{z}} \def \fb{\mathfrak{b}} \def \c{\mathfrak{c}} \def \s{\mathfrak{s}} \def \ga{\gamma} \def \g{\mathfrak{g}} \def \fa{\mathfrak{a}} \def \e{\mathfrak{e}} \def \gl{\mathfrak{gl}} \def \sl{\mathfrak{sl}} \def \diag{\textrm{diag}} \newcommand{\Ad}{\operatorname{Ad}} \newcommand{\ad}{\operatorname{ad }} \newcommand{\Mat}{\operatorname{Mat}} 
\renewcommand{\span}{\operatorname{span}} \newcommand{\brank}{\defi{Brank }} \newcommand{\Frank}{\operatorname{F-Rank}} \newcommand{\Prank}{\operatorname{P-Rank}} \newcounter{nameOfYourChoice} \def\red#1{{\textcolor{red}{#1}}} \def\blue#1{{\textcolor{blue}{#1}}} \def\white#1{{\textcolor{white}{ #1}}} \newcommand{\luke}[1]{{\color{red} [\sf Luke: [#1]]}} \begin{document} \date{\today} \author{Fr\'ed\'eric Holweck}\email{[email protected]} \address{Laboratoire Interdisciplinaire Carnot de Bourgogne, ICB/UTBM, UMR 6303 CNRS, Universit\'e Bourgogne Franche-Comt\'e, 90010 Belfort Cedex, France } \address{Department of Mathematics and Statistics, Auburn University, Auburn, AL, USA } \author{Luke Oeding}\email{[email protected]} \address{Department of Mathematics and Statistics, Auburn University, Auburn, AL, USA } \title[Toward Jordan Decompositions of Tensors]{Toward Jordan Decompositions for Tensors} \begin{abstract} We expand on an idea of Vinberg to take a tensor space and the natural Lie algebra that acts on it and embed their direct sum into an auxiliary algebra. Viewed as endomorphisms of this algebra, we associate adjoint operators to tensors. We show that the group actions on the tensor space and on the adjoint operators are consistent, which means that the invariants of the adjoint operator of a tensor, such as the Jordan decomposition, are invariants of the tensor. We show that there is an essentially unique algebra structure that preserves the tensor structure and has a meaningful Jordan decomposition. We utilize aspects of these adjoint operators to study orbit separation and classification in examples relevant to tensor decomposition and quantum information. \end{abstract} \maketitle \section{Introduction} The classical Jordan decomposition of square matrices is the following: \begin{theorem}[Jordan Canonical Form] Let $\FF$ be an algebraically closed field. 
Every $A\in \Mat_{n\times n }(\FF)$ is similar to its Jordan canonical form, which is a decomposition: \[ A \sim J_{k_1}(\lambda_1) \oplus \cdots \oplus J_{k_d}(\lambda_d) \oplus {\bf 0 } ,\] where the $k\times k $ Jordan blocks are $J_k(\lambda) = \left(\begin{smallmatrix} \lambda & 1 \\[-1ex] & \lambda & \small{\ddots} \\[-1ex] & & \ddots & 1 \\ & & & \lambda \end{smallmatrix}\right) $. The algebraic multiplicity of the eigenvalue $\lambda$ is the sum $\sum_{\lambda_j = \lambda} k_j$, and the geometric multiplicity of $\lambda$ is the number $s$ of blocks with eigenvalue $\lambda$, $s = \sum_{\lambda_j = \lambda} 1$. \end{theorem} One may view the JCF as an expression $A = S+N$ with $S$ diagonal (semisimple) and $N$ upper triangular (nilpotent), where the two parts commute. The JCF answers an orbit-classification problem: The group $\SL(V)$ acts on the vector space $V\otimes V^{*}$, and the JCF gives canonical representatives for the orbits. Let $V \cong \FF^n$ be a vector space, which naturally carries the action of the special linear group, denoted $\SL(V)$, of invertible linear transformations with determinant 1. Let $A \in \Mat_{n\times n} (\FF)$. Then $A$ represents a linear mapping $T_A \colon V \to V$ with respect to the standard basis of $V$, and $T_A$ is viewed as an element of the algebra of endomorphisms $\End(V)$. As a representation of $\SL(V)$, we have $\End(V) = V \otimes_\FF V^*$. Writing this as a tensor product of $\SL(V)$-modules and keeping track of the dual indicates the induced action, conjugation. The analogous statements hold for operators $T_A \in \End(V)$, and one has the \defi{Jordan-Chevalley decomposition}, which is the expression $T_A = T_S + T_N$ where $T_S$ is semisimple (diagonalizable), $T_N$ is nilpotent, and $T_S$ and $T_N$ commute.
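As an illustration (not part of the paper's development), the Jordan--Chevalley splitting of a concrete matrix can be computed with SymPy's \texttt{jordan\_form}; the matrix below is an arbitrary choice with one $2\times 2$ block and one $1\times 1$ block:

```python
import sympy as sp

# An arbitrary matrix with a 2x2 Jordan block for eigenvalue 3
# and a 1x1 block for eigenvalue 1, conjugated to hide the structure.
P0 = sp.Matrix([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
A = P0 * sp.Matrix([[3, 1, 0], [0, 3, 0], [0, 0, 1]]) * P0.inv()

P, J = A.jordan_form()                        # A = P * J * P^{-1}
S_J = sp.diag(*[J[i, i] for i in range(3)])   # diagonal (semisimple) part of J
N_J = J - S_J                                 # strictly upper-triangular (nilpotent) part

S = P * S_J * P.inv()                         # pull back: A = S + N
N = P * N_J * P.inv()

assert sp.simplify(A - S - N) == sp.zeros(3, 3)
assert N**2 == sp.zeros(3, 3)                 # N is nilpotent
assert S * N == N * S                         # the two parts commute
```

The exact rational arithmetic makes the commutation check $SN = NS$ exact rather than numerical.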
Recall that a linear operator $T$ is \defi{nilpotent} if any of the following equivalent conditions holds: $T^r = 0$ for some finite positive integer $r$; $T$ has no non-zero eigenvalues; or the characteristic polynomial of $T$ is $\chi_T(t) = t^n$. An operator $T_S$ is semisimple if it is diagonalizable, which one can check by verifying that, for each eigenvalue, the geometric and algebraic multiplicities agree. Like many notions for matrices, there are many possible generalizations to higher-order tensors. Our starting point is to note that the vector space $V\otimes V^{*}$ is also a Lie algebra of endomorphisms and is the Lie algebra of the group $\GL(V)$ acting on it. So, in this case, the orbit problem concerns a Lie algebra acting on itself. Since conjugating by scalar multiples of the identity is trivial, we work with $\SL(V)$ instead since it has the same action as $\GL(V)$ in this case. For tensors, we consider a tensor space $W$, like $U_{1}\otimes U_{2} \otimes U_{3}$ or $\bw{k} V$, the natural group $G$ acting on it, like $\SL(U_{1})\times \SL(U_{2})\times \SL(U_{3})$ or $\SL(V)$ respectively, and the Lie algebra $\mathfrak g$, and attempt to build a graded algebra starting from their direct sum $\fa = \g \oplus W$ with enough properties so that elements of $W$ can be viewed as adjoint operators acting on $\fa$ whose Jordan decomposition is both non-trivial and invariant under the $G$-action. This method, which is inspired by the work of Vinberg, Kac, and others \cites{Kac80, Vinberg-Elashvili, GattiViniberghi}, invites comparison to the work of Landsberg and Manivel \cite{LM02}, which constructed all complex simple Lie algebras. Our goal, rather than to classify algebras that arise from this construction, is to explore the construction of this adjoint operator and its Jordan decomposition and the consequences for tensors. In particular, the Jacobi identity is not necessary to compute the adjoint operators or their Jordan forms.
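For concreteness, the equivalent characterizations of nilpotency recalled above can be checked on a single Jordan block $J_4(0)$; a small SymPy sketch (illustrative only, not from the paper):

```python
import sympy as sp

n = 4
T = sp.zeros(n, n)            # the nilpotent Jordan block J_4(0)
for i in range(n - 1):
    T[i, i + 1] = 1

# (i) T^r = 0 for some r
assert T**n == sp.zeros(n, n)
# (ii) no non-zero eigenvalues
assert set(T.eigenvals()) == {0}
# (iii) the characteristic polynomial is t^n
t = sp.symbols('t')
assert T.charpoly(t).as_expr() == t**n

# The ranks of successive powers strictly decrease to 0,
# another witness of nilpotency.
ranks = [(T**k).rank() for k in range(1, n + 1)]
assert ranks == [3, 2, 1, 0]
```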
We also note the work of Levy \cite{levy2014rationality}, which studied generalizations of Vinberg's method and the associated Jordan decompositions \cite{GattiViniberghi}. We consider the adjoint operator of a tensor $T \in W$ in a particular embedding into an algebra $\fa$. One would like to take the Jordan decomposition of the adjoint operator and pull it back to a decomposition of the tensor, but this process might not be possible, especially since the algebra $\fa$ typically has much larger dimension than $W$. Hence, when $\ad(T) = S + N$ as a sum of a semi-simple $S$ and nilpotent $N$, one cannot necessarily expect to be able to find preimages $s, n\in W$ such that $\ad(s) = S$ and $\ad(n) = N$ with $[s,n]=0$. We study this issue in the case of $\sl_6 \oplus \bw{3} \CC^6$ in Example~\ref{ex:g36}, and show that pure semi-simple elements do not exist in $\bw{3} \CC^6$, but we can construct pure semisimple elements that are not concentrated in a single grade of the algebra. Understanding this fully would require more work beyond the scope of the present article, but it could be a nice future direction to pursue. Our aim is to set a possible stage where this operation could occur. We note that in the Vinberg cases where the algebra is a Lie algebra, this process actually does pull back to provide Jordan decompositions of tensors for these special formats. We will see what happens when we relax the Lie requirement on the auxiliary algebra, which has the advantage of being definable for any tensor format, and the disadvantage of not being well-understood in the non-Lie regime. This article provides several initial steps in this direction.
\subsection{Linear algebra review}\label{sec:LA} Let $V^*$ denote the vector space dual to $V$, that is, the set of linear functionals $\{ V\to \FF\}$, which is also the dual $\SL(V)$-module to $V$ with the right action: \[\begin{matrix} \SL(V) \times V^* &\to& V^* \\ (g,\alpha) &\mapsto & \alpha g^{-1} .\end{matrix} \] Then the natural (linear) action of $\SL(V)$ on $V\otimes V^*$ is obtained by defining the action on simple elements and extending by linearity: Simple elements of $V\otimes V^*$ are of the form $v\otimes \a$, so the action is induced from \[\begin{matrix} \SL(V) \times V\otimes V^* &\to& V\otimes V^* \\ (g, v\otimes \alpha) &\mapsto & (gv)\otimes( \alpha g^{-1} ) .\end{matrix} \] Hence, the natural action of $\SL(V)$ on $\End(V)$ is by conjugation. The matrix $A = (a_{ij})$ is obtained from $T_A$ by expanding $T_A$ in the standard basis $\{e_1,\ldots, e_n\}$ of $V$ and dual basis $\{f_1,\ldots, f_n\}$ of $V^*$, i.e., by extracting the coefficients in the expression \[ T_A = \sum_{ij} a_{ij} e_i \otimes f_j .\] So for a vector $x = \sum_i x_i e_i\in V$, $T_A(x) = \sum_{ij} a_{ij} e_i \otimes f_j \sum_k x_ke_k = \sum_{ij,k} a_{ij} x_k e_i \otimes f_j ( e_k) = \sum_{ij} a_{ij} x_j e_i = \sum_i (\sum_{j} a_{ij} x_j) e_i $, i.e. the usual matrix-vector product $Ax$. The natural $\SL_n(\FF)$-action on matrices is also conjugation (since scalar multiplication commutes with matrix product and linear operators): \[ g.T_A = \sum_{ij} a_{ij} g.(e_i \otimes f_j) = \sum_{ij} a_{ij} (ge_i) \otimes (f_j g^{-1}) .\] Evaluating $g.T_A$ on a vector $x$ we see that $(g.T_A)(x) = g(T_A(g^{-1}x))$. Replacing $T_A$ and $g$ with the matrices $A$ and $M$ that represent them with respect to the standard basis, and using associativity, we obtain $g.T_A(x) = MAM^{-1} x$. So, the standard matrix representative of the coordinate-changed transformation $g.T_A$ is $MAM^{-1}$.
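A quick numerical sanity check (our own illustration, not from the paper) that the conjugated representative $MAM^{-1}$ has the same spectrum, trace, and determinant as $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# A random change of basis M, rescaled so that det(M) = 1, i.e. M is in SL_3(R).
M = rng.standard_normal((3, 3))
M = M / np.cbrt(np.linalg.det(M))
assert np.isclose(np.linalg.det(M), 1.0)

B = M @ A @ np.linalg.inv(M)        # matrix representative of g.T_A

# Conjugation preserves the spectrum, hence the trace, the determinant,
# and the whole characteristic polynomial.
evA = np.sort_complex(np.linalg.eigvals(A))
evB = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(evA, evB, atol=1e-8)
assert np.isclose(np.trace(A), np.trace(B))
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
```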
If the field $\FF$ is algebraically closed, then the operator $T_A$ has generalized eigen-pairs $(\lambda, v) \in \FF\times \FF^n$ such that for some $k\in \NN$ \[ (A - \lambda I)^k v = 0, \quad\text{but } (A - \lambda I)^{k-1} v \neq 0. \] The subspace of generalized eigenvectors associated to a fixed $\lambda$ is $G$-invariant, and a Jordan chain of linearly independent generalized eigenvectors provides a basis such that the operator is in almost-diagonal form, the Jordan canonical form referenced above. \begin{remark} Morozov's theorem \cite{MR0007750} is key to studying nilpotent orbits such as in \cite{Vinberg-Elashvili, Antonyan}. It says that every nilpotent element is part of an $\sl_2$-triple. In the matrix case, in adapted coordinates, these triples consist of a matrix with a single 1 above the diagonal (which may be taken as part of a Jordan block), its transpose, and their commutator, which is on the diagonal. This triple forms a 3-dimensional Lie algebra isomorphic to $\sl_2$. \end{remark} \section{Historical Remarks on Jordan Decomposition for Tensors}\label{sec:history} Concepts from linear algebra often have several distinct generalizations to the multi-linear or tensor setting. It seems natural that any generalization of Jordan decomposition to tensors should involve the concepts of eigenvalues and the conjugation of operators (simultaneous change of basis of source and target) by a group that respects the tensor structure. In their seminal work that reintroduced hyperdeterminants (one generalization of the determinant to tensors), Gelfand, Kapranov, and Zelevinski \cite{GKZ} wondered what a spectral theory of tensors might look like. This was fleshed out for the first time simultaneously in the works \cite{Lim05_evectors, Qi05_eigen} (see also \cite{qi2018tensor}). One definition of an eigen-pair of a tensor $T \in (\CC^{n})^{\otimes d}$ is a vector $v \in \CC^n$ and a number $\lambda$ such that $T(v^{\otimes d-1}) = \lambda v$.
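The tensor eigenpair condition $T(v^{\otimes d-1}) = \lambda v$ can be checked numerically on a small symmetric example (an illustration we supply here, not taken from the paper):

```python
import numpy as np

# A symmetric 3rd-order tensor T = v⊗v⊗v + 2·w⊗w⊗w for orthonormal v, w.
v = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, 0.0])
T = np.einsum('i,j,k->ijk', v, v, v) + 2 * np.einsum('i,j,k->ijk', w, w, w)

def contract(T, x):
    """T(x^{⊗ d-1}) for d = 3: contract the last two slots of T with x."""
    return np.einsum('ijk,j,k->i', T, x, x)

# (1, v) and (2, w) are eigenpairs: T(x^{⊗2}) = λ x.
assert np.allclose(contract(T, v), 1 * v)
assert np.allclose(contract(T, w), 2 * w)
```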
Cartwright and Sturmfels computed the number of eigenvectors of a symmetric tensor \cite{CartwrightSturmfels2011} and Ottaviani gave further computations using Chern classes of vector bundles, see for example \cite{OedOtt13_Waring}. Additional considerations in this vein appeared in \cite{Gnang2011}, which gave a different product, and hence algebra structure, on tensor spaces. While this concept of eigenvector has been fundamental, it doesn't seem immediately amenable to a Jordan decomposition because of the following features: \begin{enumerate} \item The operator $T \colon S^{d-1} V \to V$ is not usually square. \item Contracting $T \in V^{\otimes d}$ with the same vector $v\in V$ $d-1$ times has a symmetrization effect, which collapses the group that acts to $\SL(V)$ versus $\SL(V)^{\times d}$. \item It's not immediately apparent that a conjugation group action on $T$ makes sense because of the (typically) non-square matrices coming from contractions or flattenings. \end{enumerate} See \cite{MR3841899} for a multilinear generalization of a Jordan decomposition, which considered 3rd-order tensors and focused on approximating a tensor with one that is block upper triangular. The generalization we will offer follows the following line of research. Any semisimple Lie algebra has a Jordan decomposition \cite{kostant1973convexity}, and Jordan decomposition is preserved under any representation \cite[Th.~9.20]{FultonHarris}. See also \cite{HuangKim} for recent work in this direction. The existence of such a Jordan decomposition has been the key to answering classical questions like orbit classification. Specifically, given a vector space $V$ with a connected linear algebraic group $G \subset \GL(V)$ acting on it, what are all the orbits? Which pairs $(G, V)$ have finitely many orbits \cite{Kac80, Kac85}, or if not, which are of \emph{tame} or \emph{wild} representation type? These questions have intrigued algebraists for a long time. 
Dynkin \cite{dynkin1960semisimple, dynkin2000selected} noted that to call something a \emph{classification} of representations, one must have a set of ``characteristics'' satisfying the following: \begin{itemize} \item The characteristics must be invariant under inner automorphisms so that the characteristics of equivalent representations coincide. \item They should be complete: If two representations have the same characteristics, they must be equivalent. \item They should be compact and easy to compute. \end{itemize} Indeed, a requirement for finitely many orbits is that the dimension of the group $G$ must be at least that of the vector space $V$. However, for tensors, this is rare \cite{venturelli2019prehomogeneous}. Yet, in some cases, Vinberg's method \cite{Vinberg75} can classify orbits even in the tame setting and, in fact, does so by embedding the question of classifying orbits for the pair $(G, V)$ into the question of classifying nilpotent orbits in the adjoint representation $V'$ of an auxiliary group $G'$. Part of the classification comes down to classifying the subalgebras of the Lie algebra $\g'$, which was done by Dynkin \cite{dynkin1960semisimple, dynkin2000selected}. Another crucial part of this classification is the characteristics, which are computed utilizing Morozov $\sl_2$-triples in the auxiliary algebra $\fa$. One can follow these details in \cite{Vinberg-Elashvili}, for instance, in the case of $(\SL(9), \bw{3}\CC^9)$, whose orbit classification relied upon the connection to the adjoint representation of $\e_8$, or for the case of $(\SL(8), \bw{4}\CC^8)$ in \cite{Antonyan}.
However, Vinberg and {\`E}la{\v{s}}vili comment that while they did succeed in classifying the orbits for $(\SL(9), \bw{3}\CC^9)$, which involved 7 families of semisimple elements depending on 4 continuous parameters, and up to as many as 102 nilpotent parts for each, such a classification of orbits for $(\SL(n), \bw{3}\CC^n)$, ``if at all possible, is significantly more complicated.'' One reason for this is that all the orbits for case $n$ are nilpotent for case $n+1$ (being non-concise); hence, for $n\geq 10$, even the nilpotent orbits will depend on parameters. In addition, it is not clear what should come next after the sequence $E_6, E_7, E_8$. We offer an option for the next step by being agnostic to the algebra classification problem: we focus on putting tensor spaces into naturally occurring graded algebras whose product (bracket) is compatible with the group action on the tensor space. Though we rarely compute a full Jordan decomposition in this article, we emphasize that all the features we study (like matrix block ranks and eigenvalues) are consequences of the existence of a Jordan decomposition compatible with the tensor structure. Unless otherwise noted, all the computations we report take at most a few seconds on a 2020 desktop computer. We implemented these computations in a package in Macaulay2 \cite{M2} called \texttt{ExteriorExtensions} \cite{oeding2023exteriorextensions}. We include this package and example computations in the ancillary files of the arXiv version of this article. \section{Constructing an algebra extending tensor space} Now we will generalize a part of the Vinberg construction (discussed briefly in Section~\ref{sec:history}), embedding tensor spaces into a graded algebra and obtaining adjoint operators for tensors such that the group action on the tensor space is consistent with the Jordan decomposition of the operator.
This involves structure tensors of algebras (see \cite{bari2022structure, ye2018fast} for recent studies). \subsection{A graded algebra extending a $G$-module}\label{sec:requirements} Let $M$ denote a finite-dimensional $G$-module, with $\g$ the Lie algebra of $G$, considered an algebra over $\FF$. We wish to give the vector space $\fa = \g \oplus M$ the structure of an algebra that is compatible with the $G$-action. In order to have closure, we may need to extend $M$ to a larger $G$-module, i.e., we may also consider the vector space $\fa' = \g \oplus M \oplus M^*$ in Section~\ref{sec:Z3}, or more in Section~\ref{sec:Zm}. We will attempt to define a bracket on $\fa$ with the following properties: \begin{equation} [\;,\;] \colon \fa \times \fa \to \fa \end{equation} \begin{enumerate} \item The bracket is bi-linear and hence equivalent to a structure tensor $B \in \fa^*\otimes \fa^* \otimes \fa$. \item The bracket is \emph{interesting}, i.e., the structure tensor is non-zero. \item The bracket respects the grading, i.e., $[\;,\;] \colon \fa_i \times \fa_j \to \fa_{i+j}$. \item The bracket agrees with the $\g$-action on $\g$, and on $M$. This ensures that the Jordan decomposition respects the $G$-action on $M$. \item\label{prop:equi} The structure tensor $B$ is $G$-invariant, so that the $G$-action on elements of $\fa$ is conjugation for adjoint operators, and hence Jordan decomposition makes sense. \setcounter{nameOfYourChoice}{\value{enumi}} \end{enumerate} Additionally, we could ask for the following properties that would make $\fa$ into a Lie algebra. \begin{enumerate} \setcounter{enumi}{\value{nameOfYourChoice}} \item The bracket is globally skew-commuting, i.e., $[T,S] = -[S,T]$ for all $S,T \in \fa$. Note it must be skew-commuting for the products $\g \times \g \to \g$ and $\g \times M \to M$ if it is to respect the grading and the $G$-action on $\g$ and on $M$. \item The bracket satisfies the Jacobi criterion, making $\fa$ a Lie algebra. 
\end{enumerate} We will see that these last two criteria may not always be possible to impose. We may study potential connections to Lie superalgebras in future work. However, the conditions (1)-(5) are enough to define the following. \begin{definition} Given an element $T$ in an algebra $\fa$, we associate its \defi{adjoint form} \[\ad_T :=[T,\;] \colon \fa \to \fa.\] \end{definition} \begin{prop} Suppose $\fa$ is a $G$-module. Then the structure tensor $B$ of an algebra $\fa$ is $G$-invariant if and only if the operation $T \mapsto \ad_T$ is $G$-equivariant in the sense that \begin{equation} \ad_{gT} = g(\ad_T)g^{-1}, \end{equation} with $gT$ denoting the $G$-action on $\fa$ on the LHS and juxtaposition standing for the matrix product on the RHS. In particular, the Jordan form of $\ad_T$ is a $G$-invariant for $T\in \fa$. \end{prop} \begin{proof} Let $B = \sum_{i,j,k} B_{i,j,k} \a_i \otimes \a_j \otimes a_k \in \fa^* \otimes \fa^* \otimes \fa$ represent a potential bracket $[\;,\;]$. For $T\in \fa$ we have that $\ad_T = B(T) \in \FF\otimes \fa^* \otimes \fa$ is the contraction in the first factor. Since the $G$-action is a linear action, it suffices to work with rank-one tensors such as $\a \otimes \b \otimes a$, for which the contraction with $T$ is $\a(T)\cdot \b\otimes a$, where $\cdot$ denotes the scalar product. For $g\in G$ the $G$-action on the contraction is \[ g.(\a(T)\b\otimes a )= \a(T)\cdot(g.\b)\otimes (g.a), \] because $G$ acts as the identity on $\FF$. Extending this by linearity and noting that $g.\b(v) = \b(g^{-1}v)$ (the dual action), we have that \begin{equation}\label{g.bt1} g.(B(T)) = g(B(T))g^{-1}, \end{equation} where no dot (juxtaposition) means matrix product. Then we compute: \[\begin{matrix} g.(\a\otimes \b \otimes a) = (g.\a)\otimes (g.\b) \otimes (g.a), \text{ and} \\[1ex] (g.(\a\otimes \b \otimes a))(T) = (g.\a)(T) \cdot (g.\b)\otimes (g.a) = \a(g^{-1} T) \cdot (g.\b)\otimes (g.a).
\end{matrix}\] This implies that \begin{equation}\label{g.bt2} (g.B)(T) = g.(B(g^{-1}T)) = g(B(g^{-1}T))g^{-1}, \end{equation} where the second equality is by \eqref{g.bt1}. If we assume that $g.B = B$ we can conclude from \eqref{g.bt2} that \[B(T) = g(B(g^{-1} T))g^{-1},\] or, replacing $T$ with $gT$, \[B(gT) = g(B( T))g^{-1}.\] Hence, the construction of the adjoint operators is $G$-equivariant. This argument is also reversible since \eqref{g.bt2} holds for any tensor $B$, and if \[B(T) = g(B(g^{-1}T))g^{-1}\] holds for all $T$, then $B(T) = (g.B)(T)$ for all $T$, which implies that $B = g.B$. \end{proof} \begin{definition}\label{def:GJD} We will say that a graded algebra $\mathfrak{a} = \g \oplus W$ has a Jordan decomposition consistent with the $G$-action, where $\g = \text{Lie}(G)$ (or say $\mathfrak{a}$ has GJD for short), if its structure tensor is $G$-invariant (and non-trivial). An element $T\in \fa$ is called \defi{ad-nilpotent} or simply \defi{nilpotent}, respectively \defi{ad-semisimple} or \defi{semisimple}, if $\ad_T$ is nilpotent (resp. semisimple). \end{definition} After we posted a draft of this article to the arXiv, Mamuka Jibladze asked us about the commuting condition for the Jordan-Chevalley decomposition, which led us to the following. \begin{remark} If $\fa$ has GJD and $T\in W$, consider $\ad_T \in \End(\fa)$ and its Jordan decomposition $\ad_T = (\ad_T)_S + (\ad_T)_N$ for semisimple $(\ad_T)_S$ and nilpotent $(\ad_T)_N$ with $[ (\ad_T)_S , (\ad_T)_N] =0$. We ask if we can find corresponding $s,n \in \fa$ so that $\ad_s = (\ad_T)_S$ and $\ad_n = (\ad_T)_N$; we do not require that $s,n$ commute in $\fa$, but rather that their corresponding adjoint operators do. Notice that if we did have $[\ad_s,\ad_n] = 0$ in $\End(\fa)$ and $[s,n]= 0$ in $\fa$, then the elements $s,n$ would have to satisfy the Jacobi identity $\ad_{[s,n]} = [\ad_s, \ad_n]$, which may not hold for all elements of $\fa$.
The question of the existence of such $s,n$ concerns the adjoint map itself, $\ad\colon \fa \to \End (\fa)$, and we can try to solve for them by solving a system of equations on the image of the adjoint map. We report on this and other related computations in Example~\ref{ex:g36}. \end{remark} \subsection{Invariants from adjoint operators} Since conjugation preserves eigenvalues, the elementary symmetric functions of the eigenvalues, including the trace and determinant, and hence also the characteristic polynomial of the adjoint form of a tensor, are all invariant under the action of $G$. Hence, $x$ and $y$ are not in the same orbit if they do not have the same values for each invariant. We list three such invariants for later reference. \begin{definition}For generic $T \in \g$ the adjoint operator $\ad_T$ induces \defi{trace-power} invariants: \[ f_k(T) := \tr((\ad_T)^k). \] \end{definition} The ring $\mathcal F := \CC[f_1,\ldots,f_n]$ is a subring of the invariant ring $\CC[V]^G$, though $\F$ is not likely to be freely generated by the $f_k$, and could be a proper subring; see also \cite{wallach2005hilbert}. This opens a slew of interesting commutative algebra problems, such as computing the syzygies of the $f_k$'s, or even the entire minimal free resolution (over $\CC[V]$) of such; obtaining a minimal set of basic invariants; and finding expressions of other known invariants in terms of the $f_{k}$; for example, one could ask for an expression of the hyperdeterminant, as was done in \cite{BremnerHuOeding, HolweckOedingE8}. Invariance also holds for the set of algebraic multiplicities of the roots of the adjoint form of the tensor, as well as the ranks of the adjoint form and its blocks induced from the natural grading. This leads us to the following invariants that are easy to compute and often sufficient to distinguish orbits.
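Before turning to these, here is a toy instance of the trace-power invariants (our own illustration, using $\sl_2$ in place of a tensor space): one builds the adjoint matrices from the structure constants and evaluates $f_k = \tr((\ad_T)^k)$.

```python
import numpy as np

# Structure of sl_2 in the basis (e, h, f): [h,e]=2e, [h,f]=-2f, [e,f]=h.
def bracket(X, Y):
    return X @ Y - Y @ X

e = np.array([[0, 1], [0, 0]], float)
h = np.array([[1, 0], [0, -1]], float)
f = np.array([[0, 0], [1, 0]], float)
basis = [e, h, f]

def coords(X):
    # expand the traceless matrix X = a·e + b·h + c·f
    return np.array([X[0, 1], X[0, 0], X[1, 0]])

def ad(X):
    # matrix of ad_X = [X, ·] in the basis (e, h, f)
    return np.column_stack([coords(bracket(X, B)) for B in basis])

# f_2(h) = tr(ad_h^2) = 8, the Killing form value K(h,h).
assert np.isclose(np.trace(ad(h) @ ad(h)), 8.0)
# e is ad-nilpotent, so all of its trace powers vanish.
assert np.allclose(np.linalg.matrix_power(ad(e), 3), 0)
assert all(np.isclose(np.trace(np.linalg.matrix_power(ad(e), k)), 0)
           for k in range(1, 4))
```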
\begin{definition}\label{def:profiles} Suppose $T\in M$, $\fa$ is an algebra containing $\g \oplus M$, and $\ad_T$ is the adjoint operator of $T$. The \defi{adjoint root profile} of $T$ is the list of roots (with multiplicities) of the characteristic polynomial of $\ad_{T}$. \end{definition} \begin{definition} We list the ranks of the blocks of $\ad_{T}$ and its powers and call this the \defi{adjoint rank profile}. We depict the rank profile by a table whose rows correspond to the power $k$ on $(\ad_T)^k$ and whose columns are labeled by the blocks of $(\ad_T)^k$, with the last column corresponding to the total rank. \end{definition} The ranks of powers of $\ad_T$ indicate the dimensions of the generalized eigenspaces for the 0-eigenvalue, and this can be computed without knowing the rest of the eigenvalues of $\ad_T$. In particular, if these ranks are not constant, $\ad_T$ is not semi-simple, and if the rank sequence does not go to $0$, then $\ad_T$ is not nilpotent. Recall that the \defi{null cone} $\mathcal{N}_G$ is the common zero locus of the invariants, i.e., the generators of $\CC[V]^G$. We have the following straightforward conclusion: \begin{prop} Suppose $\fa$ has a Jordan decomposition consistent with the $G$-action on $M$. If a tensor $T$ is in the null cone $\mathcal{N}_G$, then $\ad_T$ is a nilpotent operator. Moreover, if the trace powers $f_k$ generate the invariant ring $\CC[V]^G$, then null-cone membership and nilpotency are equivalent. \end{prop} \begin{proof} If $T\in \mathcal{N}_G$, then every invariant vanishes, in particular all elementary symmetric polynomials in the eigenvalues of $\ad_T$; hence all the eigenvalues of $\ad_T$ are zero, so null-cone membership always implies nilpotency. Conversely, if $\ad_T$ is nilpotent, then all the eigenvalues of $\ad_T$ are zero, and since the trace powers are symmetric functions in the eigenvalues of $\ad_T$, they all vanish.
If these trace powers generate the invariant ring, then nilpotent tensors are in the null cone. \end{proof} \subsection{Algebra structures on $\End(V)_0$} The vector space of traceless endomorphisms, denoted $\g = \End(V)_0$ (or $\sl_n$ or $\sl(V)$ when we imply the Lie algebra structure), can be given more than one $G$-invariant algebra structure, as we will now show. We attempt to define a $G$-equivariant bracket \[ \g \times \g \to \g. \] The following result implies that, up to re-scaling, there are two canonical $G$-equivariant product structures on $\g$, one commuting and one skew-commuting. \begin{prop} Let $\g = \End(V)_0$ denote the vector space of traceless endomorphisms of a finite-dimensional vector space $V$. Then $\g^* \otimes \g^* \otimes \g$ contains 2 copies of the trivial representation, and more specifically, each of $\bw{2} \g^* \otimes \g$ and $S^2 \g^* \otimes \g$ contains a 1-dimensional space of invariants. \end{prop} \begin{proof} Since $\g$ is an irreducible $G$-module, we only need to show that there is an isomorphic copy of $\g^*$ in each of $\bw{2} \g^*$ and $S^2\g^*$; then each of $\bw{2} \g^*\otimes \g$ and $S^2\g^*\otimes \g$ will have a non-trivial space of invariants. This is just a character computation, but we can also see it as an application of the Pieri rule and the algebra of Schur functors. We do the case of $S^2 \g^*$ since the other case is quite similar, and we already know that $\End(V)_0$ has the structure of the Lie algebra $\sl (V)$ with a skew-commuting product. Recall that $V \otimes V^* = \End(V)_0 \oplus \CC$, where the trivial factor is the trace, and as a Schur module $\End(V)_0 = S_{2,1^{n-2}}V$. Also $S^2(A\oplus B) = S^2A \oplus (A\otimes B) \oplus S^2 B$, so we can compute $S^2 \g$ by computing $S^2 (V\otimes V^*)$ and taking a quotient.
We have \[ S^2(V\otimes V^*) = (S^2 V \otimes S^2 V^*) \oplus (\bw2 V \otimes \bw 2 V^*) .\] Now apply the Pieri rule (let exponents denote repetition in the Schur functors) \[ = (S_{2^n}V \oplus \underline{S_{3,2^{n-2},1}V} \oplus S_{4,2^{n-2}} V) \oplus (\bw n V \oplus \underline{S_{2,1^{n-2}} V }\oplus S_{2,2, 1^{n-3}}V ) ,\] where we have underlined the copies of $\g$. Since $S^2(V\otimes V^*)$ contains 2 copies of $\g $, and only one copy can occur in the complement of $S^2 \sl(V)$ (which is $\g \otimes \CC \oplus S^2 \CC$), we conclude that there must be precisely one copy of $\g$ in $S^2 \g$. \end{proof} \begin{remark} Note that the traceless commuting product is defined via: \[\begin{matrix} \g \times \g &\to& \g \\ (A,B ) & \mapsto & (AB+ BA) - \frac{1}{n}\tr(AB+ BA)\cdot I, \end{matrix} \] where subtracting the multiple of the identity makes the result traceless. The trace is invariant under conjugation, the $G$-action is linear, and moreover \[g(AB+BA)g^{-1} = (gAg^{-1})(gBg^{-1})+(gBg^{-1})(gAg^{-1}),\] so $g.[A, B] = [g.A, g.B]$, i.e., the product is $G$-equivariant. \end{remark} \begin{remark} Since both $\bw{2} \sl_n \otimes \sl_n$ and $S^2 \sl_n \otimes \sl_n$ have a space of non-trivial invariants, we could put a commuting or a skew-commuting product on $\sl_n$ and obtain different algebra structures on $\fa$. However, if we want the product to agree with the action of $\sl_n$ on itself and with the action of $\sl_n$ on $M$ (and hence obtain a Jordan decomposition), then we should insist that we choose the bracket that is skew-commuting on $\sl_n$. This structure is inherited from viewing $\g$ as the adjoint representation of $G$. \end{remark} \subsection{A $\ZZ_2$ graded algebra from a $G$-module.}\label{sec:Z2} For a $G$-module $M$ we define $\fa = \g \oplus M$ and attempt to construct a bracket \[ [\;,\;] \colon \fa \times \fa \to \fa, \] viewed as an element of a tensor product $B\in \fa^* \otimes \fa^* \otimes \fa$, with the requirements in Section~\ref{sec:requirements}. 
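Before imposing the grading conditions, we note that the equivariance and tracelessness of the commuting product on $\g$ from the remark above are easy to confirm numerically. The following sketch (illustrative only; all names are ours, and we use the normalization that subtracts $\tfrac{1}{n}\tr(\cdot)\, I$) checks both properties on random matrices.

```python
import numpy as np

def traceless_sym_product(A, B):
    """Commuting product (A, B) -> AB + BA, projected onto traceless matrices."""
    n = A.shape[0]
    C = A @ B + B @ A
    return C - (np.trace(C) / n) * np.eye(n)

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
g = rng.standard_normal((n, n)) + n * np.eye(n)  # generically invertible
gi = np.linalg.inv(g)

P = traceless_sym_product(A, B)                      # lands in sl_n
lhs = g @ P @ gi                                     # g.(A * B)
rhs = traceless_sym_product(g @ A @ gi, g @ B @ gi)  # (g.A) * (g.B)
assert abs(np.trace(P)) < 1e-9 and np.allclose(lhs, rhs)
```

The equivariance is exact (up to floating-point error) because the trace is conjugation-invariant, so the subtracted multiple of the identity transforms correctly.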
For the bracket on $\fa$ to respect the $\ZZ_2$ grading $\fa_0 = \g$, $\fa_1 = M$, its structure tensor must respect the following decomposition: \[\fa^* \otimes \fa^* \otimes \fa = (\fa_0^* \oplus \fa_1^*) \otimes (\fa_0^* \oplus \fa_1^*) \otimes (\fa_0 \oplus \fa_1) \] \[\begin{matrix} &= & \fa_0^* \otimes \fa_0^* \otimes \fa_0 &\oplus & \fa_0^* \otimes \fa_0^* \otimes \fa_1 & \oplus& \fa_0^* \otimes \fa_1^* \otimes \fa_0 &\oplus & \fa_0^* \otimes \fa_1^* \otimes \fa_1 \\ && \oplus\; \fa_1^* \otimes \fa_0^* \otimes \fa_0 &\oplus & \fa_1^* \otimes \fa_0^* \otimes \fa_1 &\oplus& \fa_1^* \otimes \fa_1^* \otimes \fa_0 &\oplus & \fa_1^* \otimes \fa_1^* \otimes \fa_1 \end{matrix} \] Correspondingly, denote by $B_{ijk}$ the graded pieces of $B$, i.e., $B_{ijk}$ is the restriction of $B$ to $\fa_i^* \otimes \fa_j^* \otimes \fa_k$. Respecting the grading requires that $B_{001} =0$, $B_{010} =0$, $B_{100} =0$, and $B_{111} =0$. So $B$ must have the following structure: \[ B \in \begin{matrix} \fa_0^* \otimes \fa_0^* \otimes \fa_0 &\oplus & \fa_0^*\otimes \fa_1^* \otimes \fa_1 &\oplus & \fa_1^*\otimes \fa_0^* \otimes \fa_1 & \oplus& \fa_1^* \otimes \fa_1^* \otimes \fa_0 .\end{matrix} \] For $X\in \g$ write $B(X) = \ad_X$; likewise, for $T\in M$ write $B(T) = \ad_T$, and correspondingly with the graded pieces of each. So, the adjoint operators have formats: \begin{equation}\label{eq:block2} B(X) = \begin{pmatrix} B_{000}(X) & 0 \\ 0 & B_{011}(X) \end{pmatrix}, \quad \quad \text {and}\quad\quad B(T) = \begin{pmatrix} 0 & B_{110}(T) \\ B_{101}(T) & 0 \end{pmatrix} ,\end{equation} where each of the blocks is a map \[\begin{matrix} B_{000}(X) \colon \fa_0 \to \fa_0, & \quad & B_{011}(X) \colon \fa_1 \to \fa_1, \\ B_{110}(T) \colon \fa_1 \to \fa_0, & \quad & B_{101}(T)\colon \fa_0 \to \fa_1, \end{matrix} \] depending linearly on its argument ($X$ or $T$). 
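Because $B(T)$ in \eqref{eq:block2} is block anti-diagonal, its odd powers are block anti-diagonal and its even powers are block diagonal; this is the source of the checkerboard pattern of zero blocks in the rank-profile tables below. A small numerical sketch (toy block sizes of our choosing, not tied to any particular module):

```python
import numpy as np

rng = np.random.default_rng(1)
d0, d1 = 3, 5  # toy dimensions for the graded pieces a_0 and a_1
B110 = rng.standard_normal((d0, d1))  # block mapping a_1 -> a_0
B101 = rng.standard_normal((d1, d0))  # block mapping a_0 -> a_1
adT = np.block([[np.zeros((d0, d0)), B110],
                [B101, np.zeros((d1, d1))]])

P = adT.copy()
for k in range(1, 7):
    if k % 2 == 1:
        assert np.allclose(P[:d0, :d0], 0)  # odd powers: anti-diagonal blocks
    else:
        assert np.allclose(P[:d0, d0:], 0)  # even powers: diagonal blocks
    P = P @ adT
```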
The construction is linear: if $X\in \g$ and $T\in M$, then \[ B(X+T) = B(X) + B(T), \] respecting the matrix decompositions in \eqref{eq:block2}. Agreement with the $\g$-action would require that $B_{000}$ be the usual commutator on $\g$ and that $B_{011}$ be the standard $\g$-action on $M$; neither is an obstruction. The remaining requirement is a $G$-invariant in $\fa_1^*\otimes \fa_1^* \otimes \fa_0$ (respectively in $\fa_0^*\otimes \fa_1^* \otimes \fa_1$), which will allow for an invariant $B_{110}$ (respectively $B_{101}$). Formally: \begin{prop} The vector space $\fa = \g \oplus M = \fa_0 \oplus \fa_1$ has a $G$-invariant structure tensor, and hence elements of the corresponding graded algebra have a non-trivial Jordan decomposition consistent with the $G$-action on $T\in M$, if and only if the spaces of $G$-invariants in $\fa_1^*\otimes \fa_1^* \otimes \fa_0$ and in $\fa_0^*\otimes \fa_1^* \otimes \fa_1$ are non-trivial. \end{prop} Skew-symmetry would force the maps $B_{000}$ and $B_{110}$ to be skew-symmetric in their first two arguments, and $B_{101} = -B_{011}^\top$. On the level of modules, this is \[ B \in \begin{matrix} \bw{2}\fa_0^* \otimes \fa_0 &\oplus & \fa_0^*\wedge \fa_1^* \otimes \fa_1 & \oplus& \bw{2} \fa_1^* \otimes \fa_0 ,\end{matrix} \] where we have encoded the condition that $B_{101} = -B_{011}^\top$ by replacing $ \left(\fa_0^* \otimes \fa_1^* \otimes \fa_1 \right) \oplus \left( \fa_1^* \otimes \fa_0^* \otimes \fa_1 \right) $ with $ \fa_0^* \wedge \fa_1^* \otimes \fa_1 $. We record this condition as follows: \begin{prop} The algebra $\fa = \g \oplus M = \fa_0 \oplus \fa_1$ has a skew-commuting product with a non-trivial Jordan decomposition consistent with the $G$-action on $T\in M$ if and only if the spaces of $G$-invariants in $\bw{2}\fa_1^* \otimes \fa_0$ and in $\fa_0^* \wedge \fa_1^* \otimes \fa_1$ are non-trivial. 
\end{prop} \begin{example}[Trivectors on a 6-dimensional space] \label{ex:g36} Consider $M = \bw 3\CC^6$, and $\g = \sl_6$. Note that $M\cong M^*$ as $G$-modules, and likewise $\g^* \cong \g$. We ask if there is a non-trivial invariant \[ B \in \begin{matrix} \fa_0^* \otimes \fa_0^* \otimes \fa_0 &\oplus & \fa_0^*\otimes \fa_1^* \otimes \fa_1 &\oplus & \fa_1^*\otimes \fa_0^* \otimes \fa_1 & \oplus& \fa_1^* \otimes \fa_1^* \otimes \fa_0 .\end{matrix} \] Noting the self-dualities and permuting tensor factors, we can check for invariants in \[ \begin{matrix} \fa_0 \otimes \fa_0 \otimes \fa_0 &\oplus & \fa_0\otimes \fa_1 \otimes \fa_1 .\end{matrix} \] By the Pieri rule we have $M\otimes M = \bw6 \CC^6 \oplus S_{2,1,1,1,1}\CC^6 \oplus S_{2,2,1,1}\CC^6 \oplus S_{2,2,2}\CC^6$. Since $\sl_6$ is irreducible, the existence of a non-trivial space of invariants in $M^*\otimes M^* \otimes \sl_6$ requires that a summand of $M\otimes M$ be isomorphic to $\sl_6$, which is the case since $\sl_6 \cong S_{2,1,1,1,1}\CC^6$ as a $G$-module. Note also (by \cite[Ex.~15.32]{FultonHarris}) $\bw{2} M = \bw6 \CC^6 \oplus S_{2,2,1,1}\CC^6$. So it is impossible to have a $G$-invariant structure tensor for a globally skew-commuting product in this example. But (by the same exercise), since $\sl_6 \subset S^2 \bw{3}\CC^6$, we see that $\fa$ does have a non-trivial $G$-invariant structure tensor that is commuting on $M$. We give up skew-symmetry and the Jacobi identity but retain Jordan decompositions of adjoint operators. The orbits of $\SL_6(\CC)$ in $\PP \bw3 \CC^6$ were classified in the 1930s by Schouten \cites{Schouten31,GurevichBook}. Their closures are linearly ordered. Table~\ref{tab:w3c6} shows that the adjoint rank profiles separate the orbits. In the last case, we stop the table early since the form is not nilpotent. 
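Rank profiles of this kind are computed by listing ranks of successive powers of the adjoint operator. A generic sketch of that computation follows, run on a small toy nilpotent matrix (not on the $55$-dimensional adjoint operators of Table~\ref{tab:w3c6}); the rank sequence terminates at $0$ exactly when the operator is nilpotent.

```python
import numpy as np

def rank_sequence(A, max_power=60):
    """Ranks of A, A^2, A^3, ... until they stabilize (reaching 0 iff A is nilpotent)."""
    ranks, P = [], A.copy()
    for _ in range(max_power):
        ranks.append(int(np.linalg.matrix_rank(P)))
        if ranks[-1] == 0 or (len(ranks) > 1 and ranks[-1] == ranks[-2]):
            break
        P = P @ A
    return ranks

# toy example: the nilpotent matrix J_3(0) + J_2(0) in Jordan form
N = np.zeros((5, 5))
N[0, 1] = N[1, 2] = N[3, 4] = 1.0
assert rank_sequence(N) == [3, 1, 0]       # ranks reach 0: nilpotent
assert rank_sequence(np.eye(5)) == [5, 5]  # ranks constant: semisimple
```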
\begin{table} \scalebox{.9}{ \begin{tabular}{l||l||l||l} \begin{tabular}{l}Grassmannian:\\ $e_0 e_1 e_2$ \end{tabular} & \begin{tabular}{l} Restricted Chordal:\\ $e_0 e_1 e_2 + e_0 e_3 e_4$\end{tabular} & \begin{tabular}{l} Tangential: \\$e_0 e_1 e_2 + e_0 e_3 e_4 + e_1e_3e_5$\end{tabular} & \begin{tabular}{l} Secant (general): \\$e_0 e_1 e_2 + e_3 e_4e_5$\end{tabular} \\ $ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0 & 10 & 10 & 0 & 20 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ \end{smallmatrix} \right|$ &$ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0 &15 &15 &0 &30 \\ 10 &0 &0 &6 &16 \\ 0 &1 &1 &0 &2 \\ 1 &0 &0 &0 &1 \\ 0 &0 &0 &0 &0 \\ \end{smallmatrix}\right|$ & $ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0 &19 &19 &0 &38 \\ 18 &0 &0 &11 &29 \\ 0 &10 &10 &0 &20 \\ 9 &0 &0 &2 &11 \\ 0 &1 &1 &0 &2 \\ 0 &0 &0 &1 &1 \\ 0 &0 &0 &0 &0 \\ \end{smallmatrix}\right|$ & $ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0 &19 &19 &0 &38 \\ 18 &0 &0 &19 &37 \\ 0 &18 &18 &0 &36 \\ \end{smallmatrix}\right|$\\ \end{tabular} } \caption{Normal forms and adjoint rank profiles of orbits in $\PP \bw3 \CC^6$.}\label{tab:w3c6} \end{table} The characteristic polynomials for the nilpotent elements are $t^{55}$. For the (non-nilpotent) element $T = e_0 e_1 e_2 + e_3 e_4e_5$ we have $\chi_T(t) = t^{19}\left(3\,t^{2}-4\right)^{9}\left(3\,t^{2}+4\right)^{9}$, with root profile $(19_0, (9_\CC)^2,(9_\RR)^2)$; i.e., there is a root $0$ of multiplicity 19, together with two complex roots and two real roots, each with multiplicity 9. For the nilpotent normal forms, the trace-power invariants are zero. For the general point, the trace powers of order $4k$ are non-zero. 
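The trace powers $\Tr(\ad_T^k)$ are the power sums of the roots in the root profile, so the claim that only the trace powers of order $4k$ survive can be checked directly from the characteristic polynomial displayed above; the roots of $3t^2-4$ and $3t^2+4$ are $\pm 2/\sqrt{3}$ and $\pm 2i/\sqrt{3}$. A quick sketch:

```python
from math import sqrt

r = 2 / sqrt(3)
# roots of chi_T(t) = t^19 (3t^2 - 4)^9 (3t^2 + 4)^9, with multiplicities
roots = [(0.0, 19), (r, 9), (-r, 9), (1j * r, 9), (-1j * r, 9)]

def trace_power(k):
    """Tr(ad_T^k) computed as the k-th power sum of the eigenvalues."""
    return sum(m * lam**k for lam, m in roots)

for k in range(1, 13):
    if k % 4 == 0:
        assert abs(trace_power(k)) > 1     # orders 4, 8, 12 are non-zero
    else:
        assert abs(trace_power(k)) < 1e-9  # all other trace powers vanish
```

In particular $\Tr(\ad_T^4) = 36\,(2/\sqrt{3})^4 = 64$ for this normal form.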
Since the ring of invariants is generated in degree 4, the invariant $\Tr(\ad_{T}^4)$ must be a scalar multiple of the degree-4 generator, known as the hyperpfaffian; the latter has value $36$ on the form $e_0 e_1 e_2 + e_3 e_4e_5$. Now we can ask for the JCF of the adjoint operators of these normal forms. Let us work out the example with $S = e_0 e_1 e_2 + e_3 e_4e_5$. The kernel of the adjoint operator $\ad_S$ is spanned by the following elements from $\sl_6$: $h_1 = E_{0,0}- E_{1,1}$, $h_2 = E_{1,1}- E_{2,2}$, $h_4 = E_{3,3}- E_{4,4}$, $h_5 = E_{4,4}- E_{5,5}$, together with the 12 elements of the form $E_{i,j}$ with $i \neq j$ where both $i$ and $j$ come from the same block of the partition $\{0,1,2\}\cup \{3,4,5\}$, and the element $S_- = -e_0 e_1 e_2 + e_3 e_4e_5$. The kernel of $(\ad_S)^2$ increases by 1 dimension and includes the new vector $h_3$. The kernel of $(\ad_S)^3$ increases by 1 dimension again: instead of the single vector $S_-$, it contains the span of the two elements $e_0 e_1 e_2$ and $e_3 e_4e_5$. So we can start a Jordan chain as: \[v_1 = h_{3}+e_{3}e_{4}e_{5},\] \[v_2 = \ad_S v_1 = \frac{1}{2}h_{1}+h_{2}+\frac{3}{2}h_{3}+h_{4}+\frac{1}{2}h_{5}-e_{0}e_{1}e_{2}+e_{3}e_{4}e_{5},\] \[v_3 = \ad_S v_2 = -\frac{3}{2}e_{0}e_{1}e_{2}+\frac{3}{2}e_{3}e_{4}e_{5}.\] Then we complete to a basis of the generalized $0$-eigenspace by adding the following elements from the kernel of $\ad_S$: \[\begin{matrix} h_{1},& h_{2},& h_{4},& h_{5}, & E_{0,\:1},& E_{0,\:2},& E_{1,\:2},& E_{3,\:4},\\ & E_{3,\:5},& E_{4,\:5},& E_{1,\:0},& E_{2,\:0},& E_{2,\:1},& E_{4,\:3},& E_{5,\:3},& E_{5,\:4}. \end{matrix} \] The other eigenspaces have dimensions equal to their algebraic multiplicities, so choosing the remaining basis vectors of $\fa$ to be full sets of eigenvectors corresponding to the eigenvalues $\pm 1, \pm i$ of $\ad_S$, one obtains a matrix $Q$ whose columns correspond to these basis vectors, and the final matrix $Q^{-1}\ad_S Q$ is in JCF, with only one non-diagonal block, a $3\times 3$ Jordan block $J_3(0)$. 
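The chain construction just carried out is generic: one picks $v_1$ in $\ker(\ad_S)^3 \setminus \ker(\ad_S)^2$ and pushes it down by $\ad_S$. The same mechanics on a toy $3\times 3$ nilpotent matrix (our illustration, not $\ad_S$ itself):

```python
import numpy as np

rng = np.random.default_rng(2)
J3 = np.eye(3, 3, 1)  # the Jordan block J_3(0)
# hide J_3(0) behind a random change of basis
S = rng.standard_normal((3, 3)) + 3 * np.eye(3)
N = S @ J3 @ np.linalg.inv(S)

# pick v in ker(N^3) \ ker(N^2), i.e. with N^2 v != 0
v = rng.standard_normal(3)
assert np.linalg.norm(N @ N @ v) > 1e-8
Q = np.column_stack([N @ N @ v, N @ v, v])  # the Jordan chain, deepest vector first
Jrec = np.linalg.inv(Q) @ N @ Q
assert np.allclose(Jrec, J3, atol=1e-6)     # conjugation recovers the JCF
```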
As a final comment in this example, we mention that while none of the $\SL_6$-orbit representatives in $\bw3 \CC^6$ listed above appear to be semisimple, we checked that the mixed vector $v_1$ above is, in fact, semisimple. It seems that there are many more things to discover about this algebra. \end{example} \begin{example}[4-vectors on an 8-dimensional space] Now consider $M = \bw 4\CC^8$ and $\g = \sl_8$. Note that $M\cong M^*$ as $G$-modules. By the Pieri rule $M\otimes M = \bw8 \CC^8 \oplus S_{2,1,1,1,1,1,1}\CC^8 \oplus S_{2,2,1,1,1,1}\CC^8 \oplus S_{2,2,2,1,1}\CC^8 \oplus S_{2,2,2,2}\CC^8$. Since $\sl_8$ is irreducible, $M^*\otimes M^* \otimes \sl_8$ has a non-trivial space of invariants if and only if a summand in $M\otimes M$ is isomorphic to $\sl_8$, which is the case since $\sl_8 \cong S_{2,1,1,1,1,1,1}\CC^8$ as a $G$-module. Note also (by \cite[Ex.~15.32]{FultonHarris}) $\bw{2} M = \bw8 \CC^8 \oplus S_{2,1,1,1,1,1,1}\CC^8\oplus S_{2,2,2,1,1}\CC^8$, which contains a copy of $\sl_8$. So $\fa$ has a non-trivial $G$-invariant structure tensor for a skew-commuting product in this case. One checks that this product (which is unique up to scalar) also satisfies the Jacobi identity. Antonyan \cite{Antonyan} noticed that this algebra is a copy of the Lie algebra $\mathfrak{e}_7$ and carried out Vinberg's method \cite{Vinberg75}, which says, essentially, that since $\mathfrak{e}_7$ is a semisimple Lie algebra, the nilpotent orbits can be classified by utilizing Dynkin's classification of subalgebras of semisimple Lie algebras \cite{dynkin1960semisimple}. Antonyan uses a modification of Dynkin's \emph{Characteristics} to separate nilpotent orbits. The appendix in \cite{oeding2022} provides normal forms for each nilpotent orbit. The adjoint rank profiles can also distinguish these orbits, and they have the advantage that they do not require one to use the group action to put a given tensor into its normal form; in that sense, the computation is automatic. 
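Antonyan's identification of $\fa = \sl_8 \oplus \bw{4}\CC^8$ with $\mathfrak{e}_7$ is consistent with a dimension count, since $\dim \mathfrak{e}_7 = 133$; similarly, $\fa = \sl_6 \oplus \bw{3}\CC^6$ from the previous example has dimension $55$, matching the degree of the characteristic polynomials there. A trivial check:

```python
from math import comb

# Z_2-graded algebra of the present example: sl_8 + wedge^4 C^8
assert 8**2 - 1 == 63      # dim sl_8
assert comb(8, 4) == 70    # dim wedge^4 C^8
assert 63 + 70 == 133      # = dim e_7

# Z_2-graded algebra of the previous example: sl_6 + wedge^3 C^6
assert (6**2 - 1) + comb(6, 3) == 55  # degree of the characteristic polynomials
```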
It is interesting to consider normal forms of nilpotent orbits whose stabilizers have type associated with the full Lie algebra $\mathfrak{e}_7$, and respectively $\mathfrak{e}_7(a_1)$ and $\mathfrak{e}_7(a_2)$. The respective normal forms, orbit numbers (from Antonyan), and adjoint rank profiles are listed in Table \ref{tab:e7s}. \begin{table} \[ \begin{matrix} \text{\textnumero } 83: & e_{1345}+e_{1246}+e_{0356}+e_{1237}+e_{0247}+e_{0257}+e_{0167} \\ \text{\textnumero } 86: & e_{1245}+e_{1346}+e_{0256}+e_{1237}+e_{0347}+e_{0157}+e_{0167} \\ \text{\textnumero } 88: & e_{2345}+e_{1346}+e_{1256}+e_{0356}+e_{1237}+e_{0247}+e_{0157} \end{matrix} \] \[\begin{matrix} \text{\textnumero } 83: \hfill \\ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&62&62&0&124\\ 54&0&0&61&115\\ 0&53&53&0&106\\ 46&0&0&52&98\\ 0&45&45&0&90\\ 38&0&0&44&82\\ 0&37&37&0&74\\ 31&0&0&36&67\\ 0&30&30&0&60\\ 24&0&0&29&53\\ 0&23&23&0&46\\ 19&0&0&22&41\\ 0&18&18&0&36\\ 14&0&0&17&31\\ 0&13&13&0&26\\ 10&0&0&12&22\\ 0&9&9&0&18\\ 6&0&0&9&15\\ 0&6&6&0&12\\ 4&0&0&6&10\\ 0&4&4&0&8\\ 2&0&0&4&6\\ 0&2&2&0&4\\ 1&0&0&2&3\\ 0&1&1&0&2\\ 0&0&0&1&1\\ 0&0&0&0&0\\ \end{smallmatrix}\right| \end{matrix}\quad \begin{matrix} \text{\textnumero } 86: \hfill\\ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&61&61&0&122\\ 52&0&0&59&111\\ 0&50&50&0&100\\ 43&0&0&48&91\\ 0&41&41&0&82\\ 34&0&0&39&73\\ 0&32&32&0&64\\ 26&0&0&30&56\\ 0&24&24&0&48\\ 18&0&0&23&41\\ 0&17&17&0&34\\ 13&0&0&16&29\\ 0&12&12&0&24\\ 8&0&0&11&19\\ 0&7&7&0&14\\ 5&0&0&6&11\\ 0&4&4&0&8\\ 2&0&0&4&6\\ 0&2&2&0&4\\ 1&0&0&2&3\\ 0&1&1&0&2\\ 0&0&0&1&1\\ 0&0&0&0&0\\ \end{smallmatrix}\right| \end{matrix} \quad \begin{matrix} \text{\textnumero } 88: \hfill\\ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&63&63&0&126\\ 56&0&0&63&119\\ 0&56&56&0&112\\ 50&0&0&56&106\\ 0&50&50&0&100\\ 44&0&0&50&94\\ 0&44&44&0&88\\ 38&0&0&44&82\\ 0&38&38&0&76\\ 
32&0&0&38&70\\ 0&32&32&0&64\\ 27&0&0&32&59\\ 0&27&27&0&54\\ 22&0&0&27&49\\ 0&22&22&0&44\\ 18&0&0&22&40\\ 0&18&18&0&36\\ 14&0&0&18&32\\ 0&14&14&0&28\\ 11&0&0&14&25\\ 0&11&11&0&22\\ 8&0&0&11&19\\ 0&8&8&0&16\\ 6&0&0&8&14\\ 0&6&6&0&12\\ 4&0&0&6&10\\ 0&4&4&0&8\\ 3&0&0&4&7\\ 0&3&3&0&6\\ 2&0&0&3&5\\ 0&2&2&0&4\\ 1&0&0&2&3\\ 0&1&1&0&2\\ 0&0&0&1&1\\ 0&0&0&0&0\\ \end{smallmatrix}\right|\end{matrix}\] \caption{Some normal forms of orbits in $\bw4 \CC^8$ and their adjoint rank profiles.}\label{tab:e7s} \end{table} These orbits are also distinguishable by their dimensions (seen in the first row of the adjoint rank profiles by Remark~\ref{rem:conical}). We also highlight orbits \textnumero 65, \textnumero 67, and \textnumero 69, which all have the same dimension (60). Their normal forms and adjoint rank profiles are listed in Table \ref{tab:60s}. Here, two of them even appear to have the same tensor rank (though the actual rank could be smaller). \begin{table} \[ \begin{matrix} \text{\textnumero } 65: & e_{2345}+e_{0246}+e_{1356}+e_{0237}+e_{1237}+e_{0147}+e_{0157}\\ \text{\textnumero } 67: &e_{1345}+e_{1246}+e_{0346}+e_{0256}+e_{1237}+e_{0247}+e_{0167}\\ \text{\textnumero } 69: &e_{1345}+e_{1246}+e_{0356}+e_{1237}+e_{0247}+e_{0157} \end{matrix} \] \[\begin{matrix} \text{\textnumero } 65: \hfill\\ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&60&60&0&120\\ 50&0&0&57&107\\ 0&47&47&0&94\\ 39&0&0&44&83\\ 0&36&36&0&72\\ 28&0&0&34&62\\ 0&26&26&0&52\\ 20&0&0&24&44\\ 0&18&18&0&36\\ 12&0&0&17&29\\ 0&11&11&0&22\\ 8&0&0&10&18\\ 0&7&7&0&14\\ 4&0&0&6&10\\ 0&3&3&0&6\\ 2&0&0&2&4\\ 0&1&1&0&2\\ 0&0&0&1&1\\ 0&0&0&0&0\\ \end{smallmatrix}\right|\end{matrix}\quad \begin{matrix} \text{\textnumero } 67: \hfill\\ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&60&60&0&120\\ 50&0&0&57&107\\ 0&47&47&0&94\\ 39&0&0&44&83\\ 0&36&36&0&72\\ 29&0&0&33&62\\ 0&26&26&0&52\\ 20&0&0&24&44\\ 0&18&18&0&36\\ 13&0&0&16&29\\ 0&11&11&0&22\\ 
8&0&0&10&18\\ 0&7&7&0&14\\ 4&0&0&6&10\\ 0&3&3&0&6\\ 1&0&0&3&4\\ 0&1&1&0&2\\ 0&0&0&1&1\\ 0&0&0&0&0\\ \end{smallmatrix}\right|\end{matrix}\quad \begin{matrix} \text{\textnumero } 69: \hfill\\ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&60&60&0&120\\ 52&0&0&58&110\\ 0&50&50&0&100\\ 43&0&0&48&91\\ 0&41&41&0&82\\ 34&0&0&39&73\\ 0&32&32&0&64\\ 25&0&0&30&55\\ 0&23&23&0&46\\ 18&0&0&22&40\\ 0&17&17&0&34\\ 13&0&0&16&29\\ 0&12&12&0&24\\ 8&0&0&11&19\\ 0&7&7&0&14\\ 4&0&0&6&10\\ 0&3&3&0&6\\ 2&0&0&3&5\\ 0&2&2&0&4\\ 1&0&0&2&3\\ 0&1&1&0&2\\ 0&0&0&1&1\\ 0&0&0&0&0\\ \end{smallmatrix}\right|\end{matrix}\] \caption{More normal forms of orbits in $\bw4 \CC^8$ and their adjoint rank profiles.}\label{tab:60s} \end{table} Notice that even the ranks of the powers may not distinguish orbits, but the blocks for \textnumero 65 and \textnumero 67 do have some different ranks starting at the 6-th power. \end{example} The previous two examples are special cases of the following straightforward generalization: \begin{theorem} The vector space $\fa = \sl_{2m} \oplus \bw{m} \CC^{2m}$ has a $\ZZ_2$-graded algebra structure with a Jordan decomposition consistent with the $G$-action. There is a unique (up to scale) equivariant bracket product that agrees with the $\g$-action on $M= \bw{m}\CC^{2m}$. Moreover, it must satisfy the property that the restriction to $M \times M \to \g$ must be commuting when $m$ is odd and skew-commuting when $m$ is even. \end{theorem} \begin{proof} Note first that $\sl_{2m}$ is an irreducible $\g = \sl_{2m}$-module (the adjoint representation), and hence a non-zero invariant structure tensor exists if and only if there is a copy of $\g$ in $M\otimes M$. Moreover, the number of such is determined by the multiplicity of $\g$ in $M\otimes M$. 
Indeed, by \cite[Ex.~15.32]{FultonHarris}, for $M =\bw{m} \CC^{2m}$ precisely one copy of $\g = S_{2,1^{2m-2}} \CC^{2m}$ occurs in $M \otimes M$, and it lies in $S^2 \bw{m} \CC^{2m}$ when $m$ is odd and in $\bw2 \bw{m} \CC^{2m}$ when $m$ is even. \end{proof} \subsection{A $\ZZ_3$ graded algebra from a $\g$-module}\label{sec:Z3} At the risk of notational confusion, for this subsection let $\fa = \fa_0 \oplus \fa_1 \oplus \fa_{-1}$ with $\fa_0 = \g$ and $\fa_1 = M$ as before, but also $\fa_{-1} = M^*$, the dual $\g$-module. For the bracket on $\fa$ to respect the $\ZZ_3$ grading, its structure tensor must respect the following decomposition: \[\begin{array}{rcl} \fa^* \otimes \fa^* \otimes \fa &=& (\fa_0^* \oplus \fa_1^*\oplus \fa_{-1}^*) \otimes (\fa_0^* \oplus \fa_1^*\oplus \fa_{-1}^*) \otimes (\fa_0 \oplus \fa_1\oplus \fa_{-1}) \\ &=& \bigoplus_{i,j,k \in \{0,1,-1\}} \fa_i^*\otimes \fa_j^* \otimes \fa_k \end{array} \] Correspondingly, denote by $B_{ijk}$ the graded pieces of $B$, i.e., $B_{ijk}$ is the restriction of $B$ to $\fa_i^* \otimes \fa_j^* \otimes \fa_k$, and we equate $1$ with $+$ and $-1$ with $-$ for notational ease. Respecting the $\ZZ_3$ grading now requires the following vanishing: $B_{ijk} = 0 $ if $k \not\equiv i+j \pmod 3$. 
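The vanishing condition can be tabulated mechanically: for each pair of input grades $(i,j)$ exactly one output grade $k$ survives, leaving nine non-zero blocks. A quick enumeration (illustrative only):

```python
from itertools import product

grades = [0, 1, -1]
nonzero = [(i, j, k) for i, j, k in product(grades, repeat=3)
           if (i + j - k) % 3 == 0]

assert len(nonzero) == 9         # one surviving block per input pair (i, j)
assert (1, 1, -1) in nonzero     # B_{++-}: the bracket of M with M lands in M^*
assert (1, -1, 0) in nonzero     # B_{+-0}: the bracket of M with M^* lands in g
assert (0, 0, 1) not in nonzero  # e.g. B_{001} must vanish
```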
Thus, the only non-zero blocks of $B$ must be: \[ \begin{matrix} B_{000} & B_{0++} & B_{0--} \\ B_{+0+} & B_{+-0} & B_{++-} \\ B_{-0-} & B_{--+} & B_{-+0} \end{matrix} \] \noindent So $B$ must have the following structure: \[ B \in \begin{matrix} && \fa_0^* \otimes \fa_0^* \otimes \fa_0 &\oplus & \fa_0^*\otimes \fa_1^* \otimes \fa_1 &\oplus & \fa_0^*\otimes \fa_{-1}^* \otimes \fa_{-1} \\ &\oplus & \fa_1^*\otimes \fa_0^* \otimes \fa_1 &\oplus & \fa_1^*\otimes \fa_{-1}^* \otimes \fa_0 &\oplus & \fa_1^*\otimes \fa_1^* \otimes \fa_{-1} \\ & \oplus& \fa_{-1}^* \otimes \fa_0^* \otimes \fa_{-1} & \oplus& \fa_{-1}^* \otimes \fa_{-1}^* \otimes \fa_1 & \oplus& \fa_{-1}^* \otimes \fa_1^* \otimes \fa_0 \end{matrix} \] Correspondingly, there are three types of adjoint operators: for $X\in \g$ write $B(X) = \ad_X$; likewise, for $T\in M$ write $B(T) = \ad_T$, and for $\tau \in M^*$ write $B(\tau) = \ad_\tau$, and correspondingly with the graded pieces of each. So, the adjoint operators have formats \begin{equation}\label{eq:block3} \begin{matrix} B(X) = \left(\begin{smallmatrix} B_{000}(X) & 0 & 0 \\ 0 & B_{0++}(X) &0 \\ 0& 0 & B_{0--}(X)\\ \end{smallmatrix}\right), & B(T) = \left(\begin{smallmatrix} 0 & 0 & B_{+-0}(T) \\ B_{+0+}(T) &0 & 0 \\ 0& B_{++-}(T) &0\\ \end{smallmatrix}\right),\\\\ B(\tau) = \left(\begin{smallmatrix} 0 & B_{-+0}(\tau) & 0 \\ 0 &0 & B_{--+}(\tau) \\ B_{-0-}(\tau)& 0 &0\\ \end{smallmatrix}\right). \end{matrix} \end{equation} The linearity of the construction and the grading of the bracket are apparent. Note that each block is a map that depends linearly on its argument ($X, T$, or $\tau$). \begin{theorem} The vector space $\fa = \sl_{n} \oplus \bw{k} \CC^{n} \oplus \bw{n-k} \CC^{n}$ has an essentially unique non-trivial $\ZZ_3$-graded algebra structure with a Jordan decomposition consistent with the $G$-action precisely when $n = 3k$. 
Any non-trivial equivariant bracket product must satisfy the property that the restriction $M \times M \to M^*$ is skew-commuting when $k$ is odd and commuting when $k$ is even. \end{theorem} \begin{proof} Agreement with the $\g$-action requires that $B_{000}$ be the usual commutator on $\g$, that $B_{0++}$ be the usual $\g$-action on $M$, and that $B_{0--}$ be the usual $\g$-action on $M^*$. More care must be given to ensure that the other blocks come from an invariant tensor. For $B(T)$ we seek a non-zero invariant tensor in each block of $ \fa_1^*\otimes \fa_0^* \otimes \fa_1 \oplus \fa_1^*\otimes \fa_{-1}^* \otimes \fa_0 \oplus \fa_1^*\otimes \fa_1^* \otimes \fa_{-1} $, or, noting that $\fa_{-1}^* = \fa_1$, we seek a non-zero tensor in each block of $ \fa_1^*\otimes \fa_0^* \otimes \fa_1 \oplus \fa_1^*\otimes \fa_{1} \otimes \fa_0 \oplus \fa_1^*\otimes \fa_1^* \otimes \fa_{1}^* $. For the last block, note that $(\bw{k} \CC^n)^{\otimes 3}$ decomposes by an iterated application of Pieri's rule, and an invariant corresponds to a rectangular Young diagram all of whose columns have height $n$. Since the diagram has $3k$ boxes and each tensor factor contributes a single column of height $k \leq n$, the rectangle can have either 0, 1, 2, or 3 columns, so the corresponding possibilities for $k$ are (in order) $k=0$, $3k =n$, $3k=2n$, or $3k = 3n$. The middle two are the only non-trivial ones, and they correspond to the modules $M = \bw{k} \CC^{3k}$ and $M^* =\bw{2k} \CC^{3k}$. Hereafter $n=3k$. Now we look for invariants in $ \fa_1^*\otimes \fa_0^* \otimes \fa_1 \oplus \fa_1^*\otimes \fa_{1} \otimes \fa_0 $. Since $\sl_{3k}$ is an irreducible $\g = \sl_{3k}$-module (the adjoint representation), an interesting non-zero invariant structure tensor exists if and only if there is a copy of $\g^*$ in $M^*\otimes M$. One sees a copy of $\g^*$ in $M^* \otimes M$ by taking the transpose and noting that $\g \cong \g^*$ as $\g$-modules. \begin{lemma}\label{lem:dualBrackets} Consider $M = \bw{k} \CC^n$ and $M^* = \bw{n-k} \CC^n$. 
There is precisely one copy of $\sl_n = \g = S_{2,1^{n-2}}$ in $M \otimes M^*$. Moreover, if $n=2k$, then the copy of $\g$ lives in $S^2 M$ if $k$ is odd and in $\bw 2 M$ if $k$ is even. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:dualBrackets}] By the Pieri rule, for $M =\bw{k} \CC^{n}$ and $M^* = \bw{n-k} \CC^{n}$ there is always a copy of $\g = S_{2,1^{n-2}} \CC^{n}$ in $M \otimes M^*$, obtained by adding to the column of height $k$ one box in the second column and the remaining $n-k-1$ boxes in the first column; the total number of boxes is $n$. This decomposition is multiplicity-free, so there is only one copy of $\g $ in $M\otimes M^*$. The ``moreover'' statement follows from \cite[Ex.~15.32]{FultonHarris}. \end{proof} Similarly, if we wish to define the bracket for elements $\tau$ of $M^*$, we must find an interesting non-zero invariant tensor in $\fa_{-1}^* \otimes \fa_0^* \otimes \fa_{-1} \oplus \fa_{-1}^* \otimes \fa_{-1}^* \otimes \fa_1 \oplus \fa_{-1}^* \otimes \fa_1^* \otimes \fa_0 $. This is the same computation as in the case of $T$ in $M$, so we obtain no additional obstructions. The question of whether this bracket can be symmetric or skew-symmetric for $T \in M$ (respectively for $\tau \in M^*$) comes down to the existence of a non-zero invariant in $\bw{2} (\fa_1^*) \otimes \fa_{-1}$ or in $S^{2} (\fa_1^*) \otimes \fa_{-1}$. Again, this amounts to finding a copy of $\fa_{-1}$ in $\bw 2\fa_1$ or in $S^2 \fa_1$. By \cite[Ex.~15.32]{FultonHarris}, when $k$ is even there is only a copy of $\fa_{-1}$ in $S^2\fa_1$; hence the bracket must be commuting on this summand. When $k$ is odd, there is only a copy of $\fa_{-1}$ in $\bw2\fa_1$; hence the bracket must be skew-commuting on this summand. Moreover, since these decompositions are multiplicity-free, the structure tensors are essentially unique. 
\end{proof} \begin{remark} Up to a scalar multiple, the map $\fa_{-1}\times \fa_{1} \to \fa_0$ must be contraction, and the map $\fa_1 \times \fa_1 \to \fa_{-1}$ must be the exterior product. Note that the product of two $k$-forms is skew-symmetric when $k$ is odd and symmetric when $k$ is even. Similarly, the map $\fa_{-1} \times \fa_{-1} \to \fa_{1}$ must be contraction with the volume form, followed by the product, and then contraction again. \end{remark} Skew-symmetry would require that the maps $B_{000}$, $B_{++-}$, and $B_{--+}$ be skew-symmetric, and that $B_{0++} = -B_{+0+}^\top$, $B_{0--} = -B_{-0-}^\top$, $B_{+-0} = -B_{-+0}^\top$; hence, this would force $B$ to lie in \[ \begin{matrix} \bw{2}\fa_0^* \otimes \fa_0 & \oplus \bw{2} \fa_1^* \otimes \fa_{-1} & \oplus \bw{2} \fa_{-1}^* \otimes \fa_{1} &\oplus \fa_0^*\wedge \fa_1^* \otimes \fa_1 &\oplus \fa_0^*\wedge \fa_{-1}^* \otimes \fa_{-1} &\oplus \fa_1^*\wedge \fa_{-1}^* \otimes \fa_0 ,\end{matrix} \] where, for example, we have again encoded the condition that $B_{+0+} = -B_{0++}^\top$ by replacing $ \left(\fa_0^* \otimes \fa_1^* \otimes \fa_1 \right) \oplus \left( \fa_1^* \otimes \fa_0^* \otimes \fa_1 \right) $ with $ \fa_0^* \wedge \fa_1^* \otimes \fa_1 $. \begin{example}[Trivectors on a 9-dimensional space] In Table~\ref{tab:w3c9}, we report the adjoint rank profiles of points in $\bw{3} \CC^9$ of each rank (as sums of points of the Grassmannian $\Gr(3,9)$). We note that the rank profiles distinguish ranks up to 5, but not 6. 
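In this case $\fa = \sl_9 \oplus \bw{3}\CC^9 \oplus \bw{6}\CC^9$, and a dimension count gives $80 + 84 + 84 = 248 = \dim \mathfrak{e}_8$, in line with the setting of \cite{Vinberg-Elashvili}. We also observe that the generic total rank $240$ appearing in Table~\ref{tab:w3c9} leaves corank $8$, which equals the rank of $\mathfrak{e}_8$. A trivial check:

```python
from math import comb

dim_sl9 = 9**2 - 1       # 80
dim_w3 = comb(9, 3)      # 84 = dim wedge^3 C^9 = dim wedge^6 C^9
dim_a = dim_sl9 + 2 * dim_w3

assert dim_a == 248      # = dim e_8
assert dim_a - 240 == 8  # generic corank = rank of e_8
```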
\begin{table} \begin{tabular}{l||l} Rank 1: & Rank 2:\\ $\left|\begin{smallmatrix} B_{00} & B_{01} & B_{02} & B_{10} & B_{11} & B_{12} & B_{20} & B_{21} & B_{22} & B \\[.5ex] \hline\\[.5ex] 0 &0 &19 &19 &0 &0 &0 &20 &0 &58 \\ 0 &0 &0 &0 &0 &1 &0 &0 &0 &1 \\ 0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \end{smallmatrix} \right|$ & $\left|\begin{smallmatrix} B_{00} & B_{01} & B_{02} & B_{10} & B_{11} & B_{12} & B_{20} & B_{21} & B_{22} & B \\[.5ex] \hline\\[.5ex] 0 &0 &38 &38 &0 &0 &0 &38 &0 &114 \\ 0 &19 &0 &0 &0 &20 &19 &0 &0 &58 \\ 0 &0 &0 &0 &1 &0 &0 &0 &1 &2 \\ 0 &0 &0 &0 &0 &0 &0 &1 &0 &1 \\ 0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ \end{smallmatrix}\right| $ \\[4ex] \hline Rank 3: &Rank 4:\\ $\left|\begin{smallmatrix} B_{00} & B_{01} & B_{02} & B_{10} & B_{11} & B_{12} & B_{20} & B_{21} & B_{22} & B \\[.5ex] \hline\\[.5ex] 0 &0 &56 &56 &0 &0 &0 &56 &0 &168 \\ 0 &56 &0 &0 &0 &56 &56 &0 &0 &168 \\ \end{smallmatrix}\right|$ & $\left|\begin{smallmatrix} B_{00} & B_{01} & B_{02} & B_{10} & B_{11} & B_{12} & B_{20} & B_{21} & B_{22} & B \\[.5ex] \hline\\[.5ex] 0 &0 &72 &72 &0 &0 &0 &72 &0 &216 \\ 0 &72 &0 &0 &0 &72 &72 &0 &0 &216 \\ \end{smallmatrix}\right|$ \\[2ex] \hline Rank 5:& Rank 6:\\ $\left|\begin{smallmatrix} B_{00} & B_{01} & B_{02} & B_{10} & B_{11} & B_{12} & B_{20} & B_{21} & B_{22} & B \\[.5ex] \hline\\[.5ex] 0 &0 &80 &80 &0 &0 &0 &80 &0 &240 \\ 0 &80 &0 &0 &0 &80 &80 &0 &0 &240 \\ \end{smallmatrix}\right|$ & $\left|\begin{smallmatrix} B_{00} & B_{01} & B_{02} & B_{10} & B_{11} & B_{12} & B_{20} & B_{21} & B_{22} & B \\[.5ex] \hline\\[.5ex] 0 &0 &80 &80 &0 &0 &0 &80 &0 &240 \\ 0 &80 &0 &0 &0 &80 &80 &0 &0 &240 \\ \end{smallmatrix}\right|$\\[2ex] \end{tabular} \caption{Adjoint rank profiles for each tensor rank in $\bw{3} \CC^9$.}\label{tab:w3c9} \end{table} We are curious about nilpotent orbits \textnumero 79, \textnumero 82, and \textnumero 87 (in the notation of \cite{Vinberg-Elashvili}), with respective types $D_4$, $A_3+A_1$, and $A_3+A_1$. 
Orbit \textnumero 79 has normal form with indices $129\, 138\, 237\, 456$ and the following adjoint rank profile: \[ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{02} & B_{10} & B_{11} & B_{12} & B_{20} & B_{21} & B_{22} & B \\[.5ex] \hline\\[.5ex] 0&56&0&0&0&56&56&0&0&168\\ 0&0&46&46&0&0&0&48&0&140\\ 36&0&0&0&38&0&0&0&38&112\\ 0&28&0&0&0&29&28&0&0&85\\ 0&0&19&19&0&0&0&20&0&58\\ 9&0&0&0&11&0&0&0&11&31\\ 0&1&0&0&0&2&1&0&0&4\\ 0&0&1&1&0&0&0&1&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right|\] \noindent Changing the normal form to $129\, 138\, 237\, 458$ (which should be of type $A_3+A_1$) produces the following rank profile: \[ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{02} & B_{10} & B_{11} & B_{12} & B_{20} & B_{21} & B_{22} & B \\[.5ex] \hline\\[.5ex] 0&55&0&0&0&54&55&0&0&164\\ 0&0&33&33&0&0&0&38&0&104\\ 16&0&0&0&21&0&0&0&21&58\\ 0&6&0&0&0&10&6&0&0&22\\ 0&0&2&2&0&0&0&0&0&4\\ 1&0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0 \end{smallmatrix}\right|\] This agrees with the rank profile of orbit \textnumero 82 and not with that of \textnumero 87, which has rank profile: \[ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{02} & B_{10} & B_{11} & B_{12} & B_{20} & B_{21} & B_{22} & B \\[.5ex] \hline\\[.5ex] 0&52&0&0&0&60&52&0&0&164\\ 0&0&33&33&0&0&0&38&0&104\\ 16&0&0&0&21&0&0&0&21&58\\ 0&8&0&0&0&6&8&0&0&22\\ 0&0&1&1&0&0&0&2&0&4\\ 1&0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0 \end{smallmatrix}\right|\] The differences are subtle and are recognized not by the total ranks but only by the block ranks. The rank profiles and trace powers of all $\SL_9$ orbits in $\bw3\CC^9$ take too much space to include here, so we include them as an ancillary file accompanying the arXiv version of this article. 
Two other interesting cases are two different representatives for orbit \textnumero 9: \[E_1 = e_{3}e_{4}e_{5}+e_{0}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{2}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{2}e_{4}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8},\] \[ E_2 = e_{3}e_{4}e_{5}+e_{0}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{2}e_{4}e_{6}+e_{2}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{1}e_{5}e_{7}+e_{0}e_{2}e_{8}.\] Note that \cite{Vinberg-Elashvili} point out that these representatives have different minimal ambient regular semisimple subalgebras, but since they have the same characteristic, they lie on the same $\SL_9$-orbit. They both have the same rank profile (see below); hence, the rank profiles seem to give the same information as the characteristics. \[ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{02} & B_{10} & B_{11} & B_{12} & B_{20} & B_{21} & B_{22} & B \\[.5ex] \hline\\[.5ex] 0&76&0&0&0&74&76&0&0&226\\ 0&0&66&66&0&0&0&72&0&204\\ 58&0&0&0&62&0&0&0&62&182\\ 0&54&0&0&0&56&54&0&0&164\\ 0&0&48&48&0&0&0&50&0&146\\ 41&0&0&0&44&0&0&0&44&129\\ 0&37&0&0&0&38&37&0&0&112\\ 0&0&31&31&0&0&0&35&0&97\\ 24&0&0&0&29&0&0&0&29&82\\ 0&22&0&0&0&26&22&0&0&70\\ 0&0&19&19&0&0&0&20&0&58\\ 15&0&0&0&17&0&0&0&17&49\\ 0&13&0&0&0&14&13&0&0&40\\ 0&0&10&10&0&0&0&11&0&31\\ 6&0&0&0&8&0&0&0&8&22\\ 0&4&0&0&0&8&4&0&0&16\\ 0&0&4&4&0&0&0&2&0&10\\ 3&0&0&0&2&0&0&0&2&7\\ 0&1&0&0&0&2&1&0&0&4\\ 0&0&1&1&0&0&0&1&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0 \end{smallmatrix}\right|\] In the Appendix we provide the trace-power invariants and rank profiles for all the orbits in \cite{Vinberg-Elashvili}. \end{example} \subsection{Extending the exterior algebra}\label{sec:Zm} \begin{theorem} The vector space $\fa = \sl_{n} \oplus \bigoplus_{k=1 \ldots n-1}\bw{k} \CC^{n}$ has a $\ZZ_n$-graded algebra structure with a Jordan decomposition consistent with the $G = \SL(V)$-action. There is a unique (up to scale) equivariant bracket product that agrees with the $\g$-action on each $\g_i$. 
If $n = 2k$, the restriction of the equivariant bracket to $\bw{k} \CC^{n} \times \bw{k} \CC^{n} \to \g$ must be commuting when $k$ is odd and skew-commuting when $k$ is even. For any $k$ such that $2k \neq n$, the bracket $\bw{k} \CC^{n} \times \bw{k} \CC^{n} \to \bw{2k \mod n}\CC^n$ must be skew-commuting when $k$ is odd and commuting when $k$ is even. \end{theorem} \begin{proof} To construct this algebra, we view the exterior algebra $\bw{\bullet} \CC^n$ as a graded algebra and attach $\sl_n$ to it, aiming for a Jordan decomposition as well. We do this by replacing $\bw0\CC^n = \bw n\CC^n$ with $\sl_n = \fa_0$. We define the brackets $\sl_n \times \bw{k} \CC^n \to \bw{k}\CC^n$ via the usual Lie algebra action on $\bw{k} \CC^n$. We use the usual exterior algebra products or their contracted forms to define the brackets $[\,,\,] \colon \bw{i} \CC^n \times \bw{j} \CC^n \to \bw{i+j \mod n} \CC^n$ for $i+j \not \equiv 0 \mod n$. When $i+j = n$ we utilize Lemma~\ref{lem:dualBrackets} to define the bracket $[\,,\,] \colon \bw{k} \CC^n \times \bw{n-k} \CC^n \to \sl_n$, and in the case $n=2k$ we have a ``middle map'', which must be commuting when $k$ is odd and skew-commuting when $k$ is even, again by Lemma~\ref{lem:dualBrackets}. There is an equivariant bracket $\bw{k} \CC^{n} \times \bw{k} \CC^{n} \to \bw{2k \mod n}\CC^n$ if and only if there is an invariant in $(\bw{k} \CC^{n})^* \otimes (\bw{k} \CC^{n})^* \otimes \bw{2k \mod n}\CC^n$, or equivalently if and only if there is a copy of $ \bw{2k \mod n}\CC^n$ in $\bw{k} \CC^{n} \otimes \bw{k} \CC^{n}$. This holds, again by the Pieri rule. Uniqueness comes from the fact that the Pieri rule not only gives the decomposition of $\bw{k}\CC^n \otimes \bw{\ell} \CC^n$ but also shows that the decomposition into irreducible representations is multiplicity-free.
In particular, the representations $\bw{n}\CC^n$ and $\sl_n(\CC)$ can each occur only once, so the dimension of the space of choices of brackets is at most one; hence there is a unique bracket up to scale (in each graded piece). \end{proof} \begin{remark} We see from this result that $\fa = \sl_{n} \oplus \bigoplus_{k=1}^{n-1}\bw{k} \CC^{n}$ cannot be a Lie algebra whenever the conditions of the theorem force it to have commuting products, unless one takes those graded products to be the trivial product (choosing the scalars to be zero). \end{remark} \subsection{Rank subadditivity and semi-continuity} Because of the linearity of the construction, the rank of the adjoint operator $B(T)$ gives a bound for the additive rank and border rank of the element $T$. In the $\ZZ_{2}$-graded and $\ZZ_{3}$-graded cases, we have the respective block structures seen at \eqref{eq:block2} and \eqref{eq:block3}, and these blocks also provide rank and border rank bounds. In general, the following statement mimics prior work of Landsberg and Ottaviani (see \cite{LanOtt11_Equations, galkazka2017vector}). \begin{prop}[Landsberg-Ottaviani \cite{LanOtt11_Equations}]\label{prop:LanOtt} Suppose the operation $T \mapsto F_T$ assigns to each tensor $T$ a linear map, depending linearly on $T$, and let $k$ be the maximal value of $\rank(F_S)$ for $S \in X$. Then \[ X\text{-}\rank(T) \leq r \Rightarrow \rank(F_T)\leq kr, \] and the contrapositive is the tensor rank bound: \[ \rank(F_T)> kr \Rightarrow X\text{-}\rank(T) > r .\] Moreover, semi-continuity also allows us to conclude that \[ \rank(F_T)> kr \Rightarrow X\text{-border-}\rank(T) > r .\] \end{prop} In our case, this result translates to the following. \begin{prop} Suppose $M$ is a graded $\g$-module such that $\g \oplus M$ has a meaningful Jordan decomposition, and that $X$ is a $G$-variety, where $G$ is the connected component of the identity of the Lie group $\exp \g$.
The $X$-rank of a tensor $T$ is bounded below by \[ X\text{-rank}(T) \geq \rank(B(T)) / k, \quad \text{and} \quad X\text{-rank}(T) \geq \rank(B_{I}(T)) / k_{I} ,\] where $k$ is the maximal possible rank of $B(S)$ for $S\in X$, and $k_{I}$ is the maximal possible rank of $B_{I}(S)$ for $S\in X$, for each relevant index $I$. \end{prop} We also note that the semi-continuity of matrix rank implies that the rank profiles of adjoint forms can be used as a negative test for orbit closure containment. \subsection{Dimensions of conical orbits from adjoint rank profiles} Suppose $\fa = \fa_0\oplus \fa_1 \oplus \cdots$ with $\g = \fa_0$ and a tensor $T\in M = \fa_1$, as constructed above, so that $T$ has a meaningful Jordan decomposition. The rank of the block matrix $B_{101}(T)$ is connected to the dimension of the cone over the orbit closure $\PP(G.T)\subset\PP(M)$. More precisely, we have the following: \begin{prop} Notation as above. If $T$ is such that $G.T\subset M$ is conical, i.e., invariant by scalar multiplication, then $\rank(B_{101}(T))=\dim\left(\PP(\overline{G.T})\subset\PP M \right)+1$. If $G.T$ is not conical, then $\rank(B_{101}(T))=\dim\left( \PP(\overline{G.T})\subset\PP M \right)$. \end{prop} \begin{proof} Consider the projection $\pi:M\to \PP M$ and let $p$ be a point of the orbit $G.T\subset M$. Our proof comes down to an interpretation of the tangent space $\widehat{T}_p G.T$. Without loss of generality, we may assume $p=T$ since the dimension of the tangent space of an orbit is constant along the orbit. The tangent space $\widehat{T}_T G.T$ projects to $T_{\pi(T)}\pi (G.T)$ and we have: \begin{equation} \widehat{T}_T G.T=T+[\mathfrak{g},T]. \end{equation} If $G.T$ is conical then $T\in [\mathfrak{g},T]$, while if $G.T$ is not conical then $T\notin [\mathfrak{g},T]$. Combining this with the fact that the image of $B_{101}(T)$ is the space $[\mathfrak{g},T]$, the result follows.
\end{proof} \begin{remark}\label{rem:conical} Since nilpotent elements are conical, we can read the dimension of the orbit from the block of the first row in the Table of Example \ref{ex:g36}. For the Grassmannian, the restricted chordal, and the tangential variety, which are all nilpotent and thus conical, the first entries of the table provide the dimensions of the cones over the orbits ($10$, $15$, $19$, respectively), i.e., the dimension of the projective variety plus one. However, the secant variety is not conical, and therefore the value provided by the table, i.e., $19$, is the dimension of the secant variety $\sigma_2(\Gr(3,6))\subset \PP \bw3 \CC^6$. \end{remark} \subsection{Some approachable examples} As the exterior algebra $\bw{\bullet}\CC^n$ has dimension $2^n$, this can get unwieldy quickly. Here, we study several small cases that are still manageable. \begin{example}[$4\times 4\times 4$ tensors and matrix multiplication]\label{ex:444} Notice that we have a containment $\bw 3 \CC^{12} \supset \CC^4 \otimes \CC^4\otimes \CC^4$, so we can work with the algebra $\fa = \sl_{12}\oplus \bw 3 \CC^{12} \oplus \bw 6\CC^{12} \oplus \bw 9 \CC^{12} $, and consider tensors in $\CC^4 \otimes \CC^4\otimes \CC^4$ as elements in $\fa$, which has dimension 1507, and yields adjoint operators represented by $1507 \times 1507$ matrices. We might also use $\fa^{(4,4,4)} = \sl_{4}^{\times 3}\oplus (\CC^4 \otimes \CC^4\otimes \CC^4) \oplus (\bw 2 \CC^4 \otimes \bw 2 \CC^4\otimes \bw 2\CC^4) \oplus (\bw 3 \CC^4 \otimes \bw 3 \CC^4\otimes \bw 3\CC^4) $, which produces adjoint operators with matrix size $389 \times 389$. The $2\times 2$ matrix multiplication tensor, viewed in $\bw 3 \CC^{12}$, is computed via $\trace(ABC^\top)$ for generic $2\times 2$ matrices $A,B,C$ (so that the tensor encodes $AB = C$): the term $\lambda a_ib_j c_k$ is present in $\trace(ABC^\top)$ if $\lambda a_i b_j$ appears in the entry corresponding to $c_k$, where the indices $i,j,k$ are double indices.
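A quick way to see this construction concretely is to build the tensor programmatically. A minimal numpy sketch, using the flat double-index convention $\mathrm{idx}(r,s)=2r+s$ (our choice of labeling, which may differ from the $e_i$ numbering used elsewhere in this example):

```python
import numpy as np

# Build the 2x2 matrix multiplication tensor from trace(A B C^T):
# trace(A B C^T) = sum_{r,s,t} A[r,s] B[s,t] C[r,t],
# so T[idx(r,s), idx(s,t), idx(r,t)] = 1 with idx(r,s) = 2*r + s.
n = 2
idx = lambda r, s: n * r + s
T = np.zeros((n * n, n * n, n * n))
for r in range(n):
    for s in range(n):
        for t in range(n):
            T[idx(r, s), idx(s, t), idx(r, t)] = 1

# Sanity checks: the tensor has 8 terms, and contracting it against
# vec(A) and vec(B) recovers vec(AB).
assert T.sum() == 8
A = np.random.rand(n, n)
B = np.random.rand(n, n)
AB = np.einsum('ijk,i,j->k', T, A.ravel(), B.ravel()).reshape(n, n)
assert np.allclose(AB, A @ B)
```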
In Table~\ref{tab:mmult} we list the rank profiles for adjoint operators in $\fa$ for several tensors, starting with the matrix multiplication tensor, then tensors of increasing tensor ranks. The adjoint form for a random tensor of rank $7$ has the same rank profile as the rank $6$ case, so the rank profiles over $\fa$ do not distinguish rank $6$ from rank $7$. We also tried restricting to just the sub-algebra $\fa^{(4,4,4)}$ and found that we get less information: ranks $5$ and $6$ have the same profile in $\fa^{(4,4,4)}$, but their rank profiles in $\fa$ are distinct. \begin{table} \[\text{mmult}:\; e_{0}e_{4}e_{8}+e_{2}e_{5}e_{8}+e_{1}e_{4}e_{9}+e_{3}e_{5}e_{9}+e_{0}e_{6}e_{10}+e_{2}e_{7}e_{10}+e_{1}e_{6}e_{11}+e_{3}e_{7}e_{11}\] \[\begin{array}{rl} \ad_T^\fa: & \left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30}&B_{01}&B_{11}&B_{21}&B_{31}&B_{02}&B_{12}&B_{22}&B_{32}&B_{03}&B_{13}&B_{23}&B_{33}&B \\ \hline\\[.5ex] 0&132&0&0&0&0&219&0&0&0&0&219&132&0&0&0&702\\ 0&0&132&0&0&0&0&0&132&0&0&0&0&132&0&0&396\\ 0&0&0&0&0&0&0&0&0&132&0&0&0&0&132&0&264\\ 0&0&0&0&0&0&0&0&0&0&132&0&0&0&0&0&132\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right| \end{array} \] \[\text{Rank 1: } e_{0}e_{4}e_{8}\] \[\begin{array}{rl} \ad_T^\fa: & \left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30}&B_{01}&B_{11}&B_{21}&B_{31}&B_{02}&B_{12}&B_{22}&B_{32}&B_{03}&B_{13}&B_{23}&B_{33}&B \\ \hline\\[.5ex] 0&28&0&0&0&0&84&0&0&0&0&84&28&0&0&0&224\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&1\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right| \end{array} \] \[\text{Rank 2: } e_{0}e_{4}e_{8}+e_{1}e_{5}e_{9}\] \[\begin{array}{rl} \ad_T^\fa: & \left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30}&B_{01}&B_{11}&B_{21}&B_{31}&B_{02}&B_{12}&B_{22}&B_{32}&B_{03}&B_{13}&B_{23}&B_{33}&B \\ \hline\\[.5ex] 0&56&0&0&0&0&147&0&0&0&0&147&56&0&0&0&406\\ 0&0&37&0&0&0&0&0&37&0&0&0&0&20&0&0&94\\ 0&0&0&0&0&0&0&0&0&1&0&0&0&0&1&0&2\\ 0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
\end{smallmatrix}\right| \end{array} \] \[\text{Rank 3: } e_{0}e_{4}e_{8}+e_{1}e_{5}e_{9}+e_{2}e_{6}e_{10}\] \[\begin{array}{rl} \ad_T^\fa: & \left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30}&B_{01}&B_{11}&B_{21}&B_{31}&B_{02}&B_{12}&B_{22}&B_{32}&B_{03}&B_{13}&B_{23}&B_{33}&B \\ \hline\\[.5ex] 0&84&0&0&0&0&192&0&0&0&0&192&84&0&0&0&552\\ 0&0&83&0&0&0&0&0&83&0&0&0&0&57&0&0&223\\ 0&0&0&0&0&0&0&0&0&56&0&0&0&0&56&0&112\\ 0&0&0&0&0&0&0&0&0&0&56&0&0&0&0&0&56\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right| \end{array} \] \[\text{Rank 4: } e_{0}e_{4}e_{8}+e_{1}e_{5}e_{9}+e_{2}e_{6}e_{10}+e_{3}e_{7}e_{11}\] \[\begin{array}{rl} \ad_T^\fa: & \left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30}&B_{01}&B_{11}&B_{21}&B_{31}&B_{02}&B_{12}&B_{22}&B_{32}&B_{03}&B_{13}&B_{23}&B_{33}&B \\ \hline\\[.5ex] 0&111&0&0&0&0&219&0&0&0&0&219&111&0&0&0&660\\ 0&0&111&0&0&0&0&0&111&0&0&0&0&111&0&0&333\\ 0&0&0&0&0&0&0&0&0&111&0&0&0&0&111&0&222\\ 0&0&0&0&0&0&0&0&0&0&111&0&0&0&0&0&111\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right| \end{array} \] \[\text{Rank 5: }e_{0}e_{4}e_{8}+e_{1}e_{5}e_{9}+e_{2}e_{6}e_{10}+e_{3}e_{7}e_{11} + abc\] \[\begin{array}{rl} \ad_T^\fa: & \left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30}&B_{01}&B_{11}&B_{21}&B_{31}&B_{02}&B_{12}&B_{22}&B_{32}&B_{03}&B_{13}&B_{23}&B_{33}&B \\ \hline\\[.5ex] 0&135&0&0&0&0&219&0&0&0&0&219&135&0&0&0&708\\ 0&0&135&0&0&0&0&0&135&0&0&0&0&135&0&0&405\\ 0&0&0&0&0&0&0&0&0&135&0&0&0&0&135&0&270\\ 0&0&0&0&0&0&0&0&0&0&135&0&0&0&0&0&135\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right| \end{array} \] \[\text{Rank 6: } e_{0}e_{4}e_{8}+e_{1}e_{5}e_{9}+e_{2}e_{6}e_{10}+e_{3}e_{7}e_{11} + a_1b_1c_1 + a_2b_2c_2\] \[\begin{array}{rl} \ad_T^\fa: & \left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30}&B_{01}&B_{11}&B_{21}&B_{31}&B_{02}&B_{12}&B_{22}&B_{32}&B_{03}&B_{13}&B_{23}&B_{33}&B \\ \hline\\[.5ex] 0&141&0&0&0&0&219&0&0&0&0&219&141&0&0&0&720\\ 0&0&141&0&0&0&0&0&141&0&0&0&0&141&0&0&423\\ 
0&0&0&0&0&0&0&0&0&141&0&0&0&0&141&0&282\\ 0&0&0&0&0&0&0&0&0&0&141&0&0&0&0&0&141\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right|\\ \end{array} \] \caption{Some adjoint rank profiles of tensors in $\CC^4\otimes \CC^4\otimes \CC^4$ viewed as elements of $\fa = \sl_{12}\oplus \bw{3}\CC^{12}$.}\label{tab:mmult} \end{table} We were curious to see if there was a difference in the rank-detection power between $\fa = \sl_{12} \oplus \bigoplus_{k=1}^{3} \bw{3k}\CC^{12}$ and its parent $\tilde \fa = \sl_{12} \oplus \bigoplus_{k=1}^{11} \bw{k}\CC^{12}$. Since the latter algebra is much larger (dimension $4237$), the computations take longer, but we found they are still within reach. However, even this larger operator is unable to distinguish a general tensor of rank 6 from a general tensor of rank 7. Since there are 144 blocks in each adjoint operator corresponding to this $\ZZ_{12}$-graded algebra, we only report the ranks of the powers of these operators for each rank of tensor in Table~\ref{tab:fullC12}: \begin{table} \begin{tabular}{r||l|l|l|l|l|l|l|l|} ad powers & Rank 1 & Rank 2 & Rank 3 & Rank 4 & Rank 5 & Rank 6 & Rank 7 & mmult \\ \hline $\!\begin{array}{c} A\\ A^2\\ A^3\\ A^4\\ A^5 \end{array}\!$ & $\!\begin{array}{c} 572\\ 1\\ 0\\ 0\\ 0 \end{array}\!$ & $\!\begin{array}{c} 1\,018\\ 118\\ 14\\ 1\\ 0 \end{array}\!$ & $\!\begin{array}{c} 1\,368\\ 259\\ 130\\ 56\\ 0 \end{array}\!$ & $\!\begin{array}{c} 1\,644\\ 381\\ 246\\ 111\\ 0 \end{array}\!$ & $\!\begin{array}{c} 1\,824 \\ 453\\ 294\\ 135\\ 0 \end{array}\!$ & $ \!\begin{array}{c} 1\,860\\ 471\\ 306\\ 141\\ 0 \end{array}\! $ & $\!\begin{array}{c} 1\,860\\ 471\\ 306\\ 141\\ 0 \end{array}\! $ & $\!\begin{array}{c} 1\,812\\ 444\\ 288\\ 132\\ 0 \end{array}\!
$ \end{tabular} \caption{Ranks of powers of adjoint operators $A^k$ for tensors of various ranks and for the matrix multiplication tensor, all living in $(\CC^4)^{\otimes 3} \subset \bw{3}\CC^{12}$, considered as elements of the algebra $\sl_{12} \oplus \bigoplus_{d=1}^{11} \bw{d}\CC^{12}$.}\label{tab:fullC12} \end{table} These computations seem to suggest that the matrix multiplication tensor should be somehow below rank 5, which seems to contradict the well-known result that the border rank of the $2\times 2$ matrix multiplication tensor is 7. However, the rank semi-continuity arguments actually go in the other direction. If the rank of the linear operator of a given tensor is greater than that for a known tensor rank, then that provides a lower bound for the border rank. But when the rank is lower, we get no conclusion unless we could say that the rank conditions on the linear operator were necessary and sufficient for that particular border rank. This appears not to be the case. So all we can conclude is that this tool shows that matrix multiplication has border rank at least 5 (not new) and suggests that it is not generic among tensors of rank 5. We are curious whether the Jordan form for these adjoint operators on this algebra sheds any additional light on the orbit closure issue that arises here. At present, we do not have any additional information to report. \end{example} \begin{remark}Note that on a 2020-era desktop Macintosh computer running v.1.21 of Macaulay2, we have the following approximate run times: For the matrix multiplication tensor, the adjoint matrix took approximately 4.3s to construct, and the rank profile was computed in 3s. Up until tensor rank 5, building the adjoint matrix takes about 2--3s, and computing the rank profile takes approximately 4--5s. For tensor ranks 6 and 7, building the matrix took 6.5--7s, but finding the rank profile took 108s and 360s respectively over $\QQ$; reducing to $\ZZ_{10000000019}$ gives the same answers in under 3s.
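The finite-field shortcut is standard; a toy Python sketch of rank computation modulo a prime versus over $\QQ$ (a small prime stands in for $10000000019$, and plain Gaussian elimination stands in for Macaulay2's implementation):

```python
from fractions import Fraction
import random

def rank_mod_p(rows, p):
    """Rank of an integer matrix over Z/p via Gaussian elimination."""
    M = [[x % p for x in row] for row in rows]
    rank, ncols = 0, len(M[0])
    for col in range(ncols):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)  # inverse mod prime p
        M[rank] = [(v * inv) % p for v in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def rank_exact(rows):
    """Rank over Q via elimination with exact Fractions (can be slow)."""
    M = [[Fraction(x) for x in row] for row in rows]
    rank, ncols = 0, len(M[0])
    for col in range(ncols):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        M[rank] = [v / M[rank][col] for v in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

random.seed(0)
A = [[random.randint(-5, 5) for _ in range(8)] for _ in range(8)]
p = 1009
# For a random integer matrix, the rank mod a large prime agrees with
# the rank over Q (it can only drop for unlucky primes).
assert rank_mod_p(A, p) == rank_exact(A)
```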
The reason for the increase in complexity in computing the rank over $\QQ$ is that for the low-rank tensors we selected normal forms, which are sparse. However, for higher-rank forms we must add on random rank-1 elements, which produces denser adjoint matrices and, hence, greater growth of intermediate expressions. Reducing modulo a large prime mitigates the coefficient explosion. \end{remark} \begin{example}[Possible semi-simple-like elements for $4\times 4\times 4$ tensors]\label{ex:quasi-semisimple} There is a special set of elements in $\CC^4 \otimes \CC^4 \otimes \CC^4$ that mimics part of Nurmiev's \cite{nurmiev} classification of elements in $\CC^3 \otimes \CC^3 \otimes \CC^3$ from the Vinberg-Elashvili classification \cite{Vinberg-Elashvili} of trivectors in $\bw{3}\CC^9$. A basis $e_1,\ldots, e_{12}$ of $\CC^{12}$, a partition \[P = \{\{1,2,3,4\},\{5,6,7,8\}, \{9,10,11,12\}\},\] and the corresponding splitting $\CC^{12} = \CC^4 \oplus \CC^4 \oplus \CC^4 $ induce an inclusion $\CC^4 \otimes \CC^4 \otimes \CC^4 \subset \bw{3} \CC^{12}$. We then ask for the combinatorial designs consisting of quadruples of pairwise disjoint lines in $[12]$, where each line has 3 points and contains precisely one element from each part of the partition $P$, and no two lines (from any of the quadruples) intersect in more than one point.
By exhaustion, we find 4 quadruples of lines, which lead to the following basic elements, which we call \emph{quasi-semisimple}: \[ \begin{matrix} p_1 = e_{1}\otimes e_{5}\otimes e_{9} + e_{2}\otimes e_{6}\otimes e_{10} + e_{3}\otimes e_{7}\otimes e_{11} + e_{4}\otimes e_{8}\otimes e_{12},\\ p_2 = e_{1}\otimes e_{6}\otimes e_{11} + e_{2}\otimes e_{5}\otimes e_{12} + e_{3}\otimes e_{8}\otimes e_{9} + e_{4}\otimes e_{7}\otimes e_{10}, \\ p_3 = e_{1}\otimes e_{7}\otimes e_{12} + e_{2}\otimes e_{8}\otimes e_{11} + e_{3}\otimes e_{5}\otimes e_{10} + e_{4}\otimes e_{6}\otimes e_{9},\\ p_4 = e_{1}\otimes e_{8}\otimes e_{10} + e_{2}\otimes e_{7}\otimes e_{9} + e_{3}\otimes e_{6}\otimes e_{12} + e_{4}\otimes e_{5}\otimes e_{11}. \end{matrix} \] These tensors have the same adjoint rank profiles as those of rank 4 listed in Example~\ref{ex:444}. The elements that come from $\fa_1$ appear to all be nilpotent; hence there would be no ad-semisimple elements that live entirely in $\fa_1$. Moreover, these elements do not commute. However, the combinatorial structure seems interesting and merits further study. We also note that the matrix multiplication tensor has an expression that looks like the sum of 2 of these semi-simple-like elements, and moreover, we checked that the adjoint rank profile of $p_i + p_j$ is identical to that of mmult. Moreover, as in Example~\ref{ex:g36}, we are curious to understand adjoint elements that are not purely in a single graded piece of $\fa$ and which could be semisimple. A potential candidate could be $h_0 + e_0 e_4 e_8$, with $h_0$ the first basis vector of the Cartan subalgebra in $\sl V$, which appears to have constant ranks for powers of the adjoint operator. So it seems that there are ad-semisimple elements in this algebra, just not concentrated in a single grade.
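The defining incidence conditions for the four quadruples of lines underlying $p_1,\dots,p_4$ can be verified mechanically; a small Python sketch (indices $1,\dots,12$ as above):

```python
from itertools import combinations

# The four quadruples of lines underlying p1, p2, p3, p4,
# read off from the index triples of the tensors above.
quadruples = [
    [{1, 5, 9}, {2, 6, 10}, {3, 7, 11}, {4, 8, 12}],   # p1
    [{1, 6, 11}, {2, 5, 12}, {3, 8, 9}, {4, 7, 10}],   # p2
    [{1, 7, 12}, {2, 8, 11}, {3, 5, 10}, {4, 6, 9}],   # p3
    [{1, 8, 10}, {2, 7, 9}, {3, 6, 12}, {4, 5, 11}],   # p4
]
parts = [set(range(1, 5)), set(range(5, 9)), set(range(9, 13))]

lines = [line for quad in quadruples for line in quad]
# Each line meets each part of the partition P in exactly one point.
assert all(len(line & part) == 1 for line in lines for part in parts)
# No two of the 16 lines share more than one point.
assert all(len(a & b) <= 1 for a, b in combinations(lines, 2))
# The four lines within each quadruple are pairwise disjoint.
assert all(not (a & b)
           for quad in quadruples for a, b in combinations(quad, 2))
```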
\end{example} \begin{example}[Trivectors on a 10-dimensional space]\label{ex:wedge3C10} It is reasonable to expect that this method can handle the $\ZZ_{10}$-graded algebra for $\bw{3}\CC^{10}$. Here we show that our tools can be constructed for $\bw{3}\CC^{10}$, where $\fa = \g_0 \oplus \cdots \oplus \g_9$, with the degrees in the order $0, 3, 6, 9, 12 \equiv 2, 5, 8, 11 \equiv 1, 4, 7$, that is, $\g_i = \bw{3i}\CC^{10}$, $\g_{3+i} = \bw{3i-1}\CC^{10}$, and $\g_{6+i} = \bw{3i-2}\CC^{10}$, for $i=1,2,3$. We were able to compute adjoint operators in this algebra. There are many blocks, so we do not report the entire block adjoint rank profile. Here are the ranks of powers for the first 5 tensor ranks; the results for rank 5 are the same as for sums of more than 5 rank-1 elements. Rank 1: $\left\{176,\:1,\:0\right\}$ Rank 2: $\left\{322,\:94,\:14,\:1,\:0\right\}$ Rank 3: $\left\{444,\:223,\:130,\:56,\:0\right\}$ Rank 4: $\left\{536,\:294,\:178,\:79,\:0\right\}$ Rank 5: $\left\{566,\:337,\:218,\:99,\:0\right\}$ The point of this example is to say that computations in this algebra are in the computable regime, and we encourage future research here. \end{example} \subsection{Connection to geometric constructions} The preceding two examples seem relevant for many other geometric constructions, such as K3 surfaces, K\"ahler and hyper-K\"ahler manifolds, and more, which we explain briefly now. Hitchin \cite{hitchin2001stable} was interested in the existence of special $G$-structures, metrics with special holonomy. A $p$-form is called \emph{stable} if it lies in an open orbit. The existence of open orbits is governed by the classification of prehomogeneous vector spaces \cite{sato1977classification}, which relies on the so-called castling transforms (see \cite{venturelli2019prehomogeneous} for a modern treatment in the case of tensors). A strong necessary condition is that the group $G$ must have dimension at least that of the vector space $W$ (in our case $W = \bw p V$ with $\dim V = n$) for there to be an open orbit.
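The dimension count in this necessary condition is easy to script; a small sketch comparing $\dim \SL(V) = n^2 - 1$ with $\dim \bw{p}V = \binom{n}{p}$:

```python
from math import comb

# Necessary condition for SL(V) acting on W = Lambda^p V to have an
# open orbit: dim SL(V) = n^2 - 1 must be at least dim W = C(n, p).
def can_have_open_orbit(n, p):
    return n**2 - 1 >= comb(n, p)

# The 3-form cases on C^6, C^7, C^8 pass the dimension test...
assert all(can_have_open_orbit(n, 3) for n in (6, 7, 8))
# ...but for 3-forms on C^9 and C^10 the condition fails
# (80 < 84 and 99 < 120), so no open orbit is possible there.
assert not can_have_open_orbit(9, 3)
assert not can_have_open_orbit(10, 3)
```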
Hitchin considered the prehomogeneous cases $(n,p)$ (with $n=\dim V$): $(2m, 2)$, $(6,3)$, $(7,3)$, $(8,3)$, and their duals. Stable $p$-forms determine interesting $G$-structures in these cases. After this we can consider \emph{semi-stable} $p$-forms, which are those that do not contain $0$ in their orbit closure. Semi-stable $p$-forms govern many interesting geometric objects (K3 surfaces, K\"ahler and hyper-K\"ahler manifolds, genus 2 curves, the Coble cubic); see for instance \cite{rains2018invariant, bernardara2021nested, benedetti2023hecke, bernardara2024even, swann1990hyperkahler}, which explore connections to tensors in $\bw 3 \CC^9$. This notion of semi-stability is the same as the notion of a form being not \emph{nilpotent}. In the case where a Jordan decomposition exists for $\bw p V$, the nicest semi-stable forms are those that are semi-simple. A slight relaxation of this concept, which we advocate, applies when we may not have a Jordan decomposition for elements of $\bw p V$ but still have GJD: we can ask for forms that are ad-semisimple, i.e., elements $T \in \bw p V$ such that $\ad(T)$ is semi-simple. We find it interesting to note that, as seen in Example \ref{ex:wedge3C10}, it seems that every element of $\bw{3}\CC^{10}$ is ad-nilpotent, and our experiments suggest likewise for $\bw{3}\CC^{11}$ and $\bw 3 \CC^{12}$. For those cases, in Example~\ref{ex:quasi-semisimple} we suggested a possible replacement for the notion of ad-semisimple. It would be interesting to understand the implications for geometry in these cases, where it seems that the traditional notions of semi-stable and semi-simple $3$-forms or $4$-forms may not be available, and we ask specifically whether the notion of quasi-semisimple could be a meaningful replacement.
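The ad-nilpotency observations above rest on a simple numerical criterion: an operator is nilpotent exactly when the ranks of its successive powers decay to zero, while a semisimple operator has constant ranks of powers. A toy numpy sketch of this test (generic small matrices standing in for actual adjoint operators):

```python
import numpy as np

def rank_profile_of_powers(A, tol=1e-9):
    """Ranks of A, A^2, A^3, ... until the rank stabilizes."""
    ranks, P = [], A.copy()
    while True:
        r = int(np.linalg.matrix_rank(P, tol=tol))
        ranks.append(r)
        if len(ranks) > 1 and r == ranks[-2]:
            return ranks
        P = P @ A

N = np.diag([1.0, 1.0, 1.0], k=1)    # nilpotent: a 4x4 Jordan block
S = np.diag([1.0, 2.0, 3.0, 4.0])    # semisimple: distinct eigenvalues

assert rank_profile_of_powers(N) == [3, 2, 1, 0, 0]  # decays to zero
assert rank_profile_of_powers(S) == [4, 4]           # constant ranks
```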
Finally, we remark that we found elements of $\bw 4 \CC^{10}$ that are not ad-nilpotent, and 4-vectors like these could be an interesting place to start a new investigation of semi-stable $4$-forms on a 10-dimensional space. In that case, on a 2018 Mac laptop, it takes approximately one minute to compute adjoint operators when working over a field like $\ZZ_{1009}$. For example, the form \[e_{0}e_{2}e_{3}e_{7}+e_{1}e_{3}e_{6}e_{8}+e_{0}e_{4}e_{6}e_{8}+e_{2}e_{4}e_{5}e_{9}+e_{1}e_{5}e_{7}e_{9}\] is not ad-nilpotent; it has the following rank profile: \[\left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30}&B_{40}&B_{01}&B_{11}&B_{21}&B_{31}&B_{41}&B_{02}&B_{12}&B_{22}&B_{32}&B_{42}&B_{03}&B_{13}&B_{23}&B_{33}&B_{43}&B_{04}&B_{14}&B_{24}&B_{34}&B_{44}& B \\[.5ex] \hline\\[.5ex] 0&88&0&0&0&0&0&45&0&0&0&0&0&26&0&0&0&0&0&45&88&0&0&0&0&292\\ 0&0&30&0&0&0&0&0&26&0&0&0&0&0&26&30&0&0&0&0&0&88&0&0&0&200\\ 0&0&0&16&0&0&0&0&0&26&16&0&0&0&0&0&30&0&0&0&0&0&30&0&0&118\\ 0&0&0&0&16&16&0&0&0&0&0&16&0&0&0&0&0&6&0&0&0&0&0&16&0&70\\ 6&0&0&0&0&0&16&0&0&0&0&0&6&0&0&0&0&0&6&0&0&0&0&0&16&50\\ 0&6&0&0&0&0&0&6&0&0&0&0&0&6&0&0&0&0&0&6&6&0&0&0&0&30\\ 0&0&6&0&0&0&0&0&6&0&0&0&0&0&6&6&0&0&0&0&0&6&0&0&0&30\\ 0&0&0&6&0&0&0&0&0&6&6&0&0&0&0&0&6&0&0&0&0&0&6&0&0&30 \end{smallmatrix}\right|\] As in \cite{oeding2022}, we can make a Carter diagram recording the coincidences among the indices of the basic 4-forms (the edge labels record the shared indices), which is the following for the candidate above: \[ \xymatrix{ \ccircle{1368} \ar@{=}_{68}[dd] \ar@{-}^1[rr]&&\ccircle{1579}\ar@{=}^{59}[dd]\\ & \ccircle{0237}\ar@{-}_3[ul] \ar@{-}^7[ur] \ar@{-}_2[dr] \ar@{-}^0[dl] \\ \ccircle{0468}\ar@{-}_4[rr] && \ccircle{2459} } \] The Carter diagram may suggest ways of generalizing and constructing new interesting examples of forms that are not ad-nilpotent.
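The edge labels of the Carter diagram above can be recomputed mechanically from the five index sets; a small Python sketch:

```python
from itertools import combinations

# Index sets of the five basic 4-forms in the candidate above.
terms = {
    "0237": {0, 2, 3, 7},
    "1368": {1, 3, 6, 8},
    "0468": {0, 4, 6, 8},
    "2459": {2, 4, 5, 9},
    "1579": {1, 5, 7, 9},
}

# Edge labels of the Carter diagram: coincidences between index sets.
edges = {frozenset((a, b)): sorted(terms[a] & terms[b])
         for a, b in combinations(terms, 2)}

# Double edges (two coincidences) and single edges, as in the diagram:
assert edges[frozenset(("1368", "0468"))] == [6, 8]  # double edge "68"
assert edges[frozenset(("1579", "2459"))] == [5, 9]  # double edge "59"
assert edges[frozenset(("0237", "1368"))] == [3]     # single edge "3"
assert edges[frozenset(("0237", "1579"))] == [7]     # single edge "7"
assert edges[frozenset(("0237", "0468"))] == [0]     # single edge "0"
assert edges[frozenset(("0237", "2459"))] == [2]     # single edge "2"
```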
\section{Applications to quantum information} The classical problem of classifying and distinguishing $G$-orbits on a module $M$ has regained popularity in the context of quantum information theory in the past twenty years. In this section, we explain the connection and how our Jordan decomposition can be used as an efficient tool to study entanglement. While the examples we consider have been addressed before, our point of departure is that we handle all of them by a single construction (an adjoint operator) and straightforward computations (matrix ranks and eigenvalues). A standard reference for quantum information is \cite{nielsen2002quantum}. For an introduction for mathematicians, see \cite{Landsberg2019}, and for an algebraic geometry perspective on entanglement classification, see \cite{holweck_entanglement}. \subsection{General theory} In quantum computation, information is encoded in quantum states. More precisely, the quantum analog of the classical bit $\{0,1\}$ of information theory is the quantum bit, or qubit. In Dirac's notation, a qubit reads \begin{equation} \ket{\psi}=\alpha\ket{0}+\beta\ket{1}=\alpha\begin{pmatrix} 1 \\ 0 \end{pmatrix}+\beta\begin{pmatrix} 0\\1 \end{pmatrix}=\begin{pmatrix} \alpha\\ \beta \end{pmatrix}\in \CC^2, \quad\text{with }|\alpha|^2+|\beta|^2=1. \end{equation} Mathematically, a qubit is just a normalized vector in $\CC^2$. The idea is that with the resources of quantum physics, the information can be encoded not only in a classical state of the system, $\ket{0}$ or $\ket{1}$, but also in a superposition of the basis states. A system made of $n$ qubits is a normalized tensor $\ket{\psi}\in (\CC^2)^{\otimes n}$. Normalized tensors of rank $1$ are said to be {\em separable}, while all other states are said to be \emph{entangled}.
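For two qubits, separability can be tested directly: reshaping $\ket{\psi} \in \CC^4$ into a $2\times 2$ matrix, the state is separable exactly when that matrix has rank $1$, equivalently when its determinant vanishes. A minimal numpy sketch:

```python
import numpy as np

# A 2-qubit state is a normalized vector in C^2 (x) C^2 = C^4; reshaped
# to a 2x2 matrix, it is separable iff that matrix has rank 1,
# i.e., zero determinant.
def is_separable(psi, tol=1e-12):
    return abs(np.linalg.det(psi.reshape(2, 2))) < tol

ket00 = np.array([1, 0, 0, 0], dtype=complex)               # |00>
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00>+|11>)/sqrt(2)

assert np.isclose(np.linalg.norm(bell), 1.0)  # normalized
assert is_separable(ket00)                    # product state
assert not is_separable(bell)                 # entangled: det = 1/2
```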
From a physical point of view, a separable state $\psi=\psi_1\otimes\psi_2\otimes \dots \otimes \psi_n$ can be fully described from the knowledge of its individual components, while entangled states share non-classical correlations among their different components. This property of entanglement is a non-classical resource in quantum information that can be exploited to perform non-classical tasks (quantum teleportation, super-dense coding). It is also considered a resource that could explain the speed-up of some quantum algorithms over their classical counterparts. There are different frameworks in which to study the entanglement classification problem. One such framework considers equivalence of quantum states up to Stochastic Local Operations with Classical Communication (SLOCC). The quantum physics operations corresponding to the SLOCC group include local unitary transformations, measurements, and classical communication. This set of operations comprising the SLOCC group is the group of local invertible transformations \cite{dur2000}. The following cases are of interest to the quantum community. We denote by $M$ the module (also called a Hilbert space in quantum physics) where the quantum states live, and by $G$ the corresponding SLOCC group: \begin{enumerate} \item $M=(\CC^2)^{\otimes n}$, the Hilbert space of $n$-qubit quantum states, with $G=\SL_2(\CC)^{\times n}$. In the projective space $\PP M$, the variety of separable states is the Segre variety $X=\PP^1\times\dots\times \PP^1\subset \PP M$. \item $M=S^k \CC^n$, the Hilbert space of $k$-symmetric (bosonic) quantum states, with $G=\SL_n(\CC)$. Here, a single boson is a normalized vector of $\CC^n$. In the projective space $\PP M$, the variety of separable states is the Veronese variety $X=v_k(\PP^{n-1})\subset \PP(S^k \CC^n)$. \item $M=\bw{k} \CC^n$, the Hilbert space of $k$ fermions, where a single fermion is an $n$-state particle, i.e., a normalized vector of $\CC^n$, with $G=\SL_n(\CC)$.
In the projective space $\PP M$, the variety of separable states is the Grassmann variety $X=\Gr(k,n)\subset \PP(\bigwedge^k \CC^n)$. \end{enumerate} \subsection{Application to QI for small numbers of qubits} \subsubsection{3-qubits} A very famous case in quantum information is the three-qubit SLOCC classification, i.e., the classification of $G=\SL_2\CC\times \SL_2\CC\times \SL_2\CC$ orbits of normalized tensors in $\CC^2\otimes\CC^2\otimes \CC^2$. Even though this classification was mathematically known for a long time \cite{lepaige81, GKZ}, the result received a lot of attention in the context of quantum physics, as it proved for the first time that quantum states can be genuinely entangled in two non-equivalent ways \cite{dur2000}. The orbit structure of the three-qubit case is similar to that of other tripartite systems, like three bosonic qubits ($v_3 \PP^1\subset \PP^4$) or three fermions with $6$ single-particle states ($\Gr(3,6)\subset \PP \bigwedge^3\CC^6$). The rank profile allows us to easily distinguish those orbits (up to qubit permutation) because they have different dimensions; see Example~\ref{ex:g36}. \subsubsection{4-qubits} Another important classification is the four-qubit classification, i.e., $M=(\CC^2)^{\otimes 4}$ and $G=(\SL_2\CC)^{\times 4}$. Here, the number of orbits is infinite, and a description depends on parameters. A classification was first established by Verstraete et al. \cite{verstraete02} and later corrected by \cite{ChtDjo:NormalFormsTensRanksPureStatesPureQubits}; see also \cite{HolweckLuquePlanat}. A complete and irredundant classification was given by Dietrich et al. \cite{ dietrich2022classification}. Note that $\sl_8 \oplus \bw{4} \CC^8$ contains $\sl_2^{\oplus 4} \oplus (\CC^2)^{\otimes 4}$ as a subalgebra, and we can utilize the rank profiles in $\sl_8 \oplus \bw{4} \CC^8$ to distinguish orbits.
Though the complete irredundant classification from \cite{dietrich2022classification} is more complicated, to demonstrate proof of concept we only handle random elements from the 9-family classification \cite{ChtDjo:NormalFormsTensRanksPureStatesPureQubits}. We constructed the adjoint operators for random tensors (generic choices of parameters) from each of the $9$ families of the Verstraete classification (named 1, 2, 3, 6, 9, 10, 12, 14, 16 in \cite{ChtDjo:NormalFormsTensRanksPureStatesPureQubits}) and computed their rank profiles, which allows us to distinguish the different families (Table~\ref{tab:ChtDjo}). We also note that while the drops in ranks are related to the Jordan form associated with the 0-eigenspace, the final ranks in the block rank profiles distinguish the semi-simple parts. See \cite{dietrich2022classification} for further discussion on the semi-simple and nilpotent parts of these orbits. \begin{table}[htbp] \begin{tabular}[t]{lll} Family & Form & Rank Profile \\ \hline \\[-2ex] 1. & \begin{tabular}{l}$\frac{a+d}{2}(\ket{0000} + \ket{1111}) + \frac{a-d}{2}(\ket{0011} + \ket{1100})$ \\ $+\frac{b+c}{2}(\ket{0101}+\ket{1010}) +\frac{b-c}{2}(\ket{0110} + \ket{1001})$ \end{tabular} & $ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&60&60&0&120\\ 60&0&0&60&120\\ 0&60&60&0&120\\ 60&0&0&60&120 \end{smallmatrix}\!\right|$ \\\\ 2. & \begin{tabular}{l} $\frac{a+c-i}{2}(\ket{0000}+\ket{1111})+\frac{a-c+i}{2}(\ket{0011}+\ket{1100})$ \\ $+\frac{b+c+i}{2}(\ket{0101}+\ket{1010})+\frac{b-c-i}{2}(\ket{0110}+\ket{1001})$ \\ $ +\frac{i}{2}(\ket{0001}+\ket{0111}+\ket{1000}+\ket{1110} $ \\ $-\ket{0010}-\ket{0100}-\ket{1011}-\ket{1101})$ \end{tabular} & $ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&60&60&0&120\\ 59&0&0&60&119\\ 0&59&59&0&118\\ 59&0&0&59&118\\ 0&59&59&0&118 \end{smallmatrix}\!\right|$ \\\\ 3.
& \begin{tabular}{l} $ \frac{a}{2}(\ket{0000}+\ket{1111} +\ket{0011}+\ket{1100}) + \frac{b+1}{2}(\ket{0101}+\ket{1010})$ \\ $ +\frac{b-1}{2}(\ket{0110}+\ket{1001})+\frac{1}{2}(\ket{1101}+\ket{0010}-\ket{0001}-\ket{1110})$ \end{tabular} & $ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&56&56&0&112\\ 52&0&0&54&106\\ 0&50&50&0&100\\ 50&0&0&50&100\\ 0&50&50&0&100 \end{smallmatrix}\!\right|$ \\\\ 6. & \begin{tabular}{l} $\frac{a+b}{2}(\ket{0000}+\ket{1111})+b(\ket{0101}+\ket{1010})+i(\ket{1001}-\ket{0110})$ \\ $+\frac{a-b}{2}(\ket{0011}+\ket{1100})+\frac{1}{2}(\ket{0010}+\ket{0100}+\ket{1011}+\ket{1101}$ \\ $-\ket{0001}-\ket{0111}-\ket{1000}-\ket{1110})$ \end{tabular} & $ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&60&60&0&120\\ 58&0&0&60&118\\ 0&58&58&0&116\\ 57&0&0&58&115\\ 0&57&57&0&114\\ 57&0&0&57&114\\ 0&57&57&0&114 \end{smallmatrix}\!\right|$ \\\\ 9. &\begin{tabular}{l} $a(\ket{0000}+\ket{0101}+\ket{1010}+\ket{1111})$ \\ $-2i(\ket{0100}-\ket{1001}-\ket{1110})$ \end{tabular} & $ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&56&56&0&112\\ 51&0&0&54&105\\ 0&49&49&0&98\\ 45&0&0&47&92\\ 0&43&43&0&86\\ 42&0&0&43&85\\ 0&42&42&0&84\\ 42&0&0&42&84\\ 0&42&42&0&84 \end{smallmatrix}\!\right|$ \\\\ 10.& \begin{tabular}{l} $ \frac{a+i}{2}(\ket{0000}+\ket{1111} +\ket{0011}+\ket{1100})+\frac{a-i+1}{2}(\ket{0101}+\ket{1010})$ \\ $ +\frac{a-i-1}{2}(\ket{0110}+\ket{1001})+\frac{i+1}{2}(\ket{1101}+\ket{0010})$ \\ $+\frac{i-1}{2}(\ket{0001}+\ket{1110}) -\frac{i}{2}(\ket{0100}+\ket{0111}+\ket{1000}+\ket{1011})$ \end{tabular} & $ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&48&48&0&96\\ 39&0&0&42&81\\ 0&33&33&0&66\\ 33&0&0&33&66\\ 0&33&33&0&66 \end{smallmatrix}\!\right|$ \\\\ 12. 
& \begin{tabular}{l} $(\ket{0101}-\ket{0110}+\ket{1100}+\ket{1111})+(i+1)(\ket{1001}+\ket{1010})$ \\ $-i(\ket{0100}+\ket{0111}+\ket{1101}-\ket{1110})$ \end{tabular} & $ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&48&48&0&96\\ 38&0&0&42&80\\ 0&32&32&0&64\\ 23&0&0&26&49\\ 0&17&17&0&34\\ 8&0&0&11&19\\ 0&2&2&0&4\\ 1&0&0&2&3\\ 0&1&1&0&2\\ 0&0&0&1&1\\ 0&0&0&0&0\\ \end{smallmatrix}\!\right|$ \\\\ 14.& \begin{tabular}{l} $ \frac{i+1}{2}(\ket{0000}+\ket{1111}-\ket{0010}-\ket{1101})$ \\ $ +\frac{i-1}{2}(\ket{0001}+\ket{1110}-\ket{0011}-\ket{1100})$ \\ $ +\frac{1}{2}(\ket{0100}+\ket{1001}+\ket{1010}+\ket{0111})$ \\ $+\frac{1-2i}{2}(\ket{1000}+\ket{0101}+\ket{0110}+\ket{1011})$ \end{tabular} & $ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&47&47&0&94\\ 30&0&0&34&64\\ 0&17&17&0&34\\ 9&0&0&10&19\\ 0&2&2&0&4\\ 0&0&0&2&2\\ 0&0&0&0&0\\ \end{smallmatrix}\!\right|$ \\\\ 16. & \begin{tabular}{l} $\frac{1}{2}(\ket{0}+\ket{1})\otimes(\ket{000}+\ket{011}+\ket{100}+\ket{111}$ \\ $ +i(\ket{001}+\ket{010}-\ket{101}-\ket{110}))$ \end{tabular} & $ \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&33&33&0&66\\ 14&0&0&20&34\\ 0&1&1&0&2\\ 1&0&0&0&1\\ 0&0&0&0&0\\ \end{smallmatrix}\!\right|$ \end{tabular} \caption{ $\text{SLOCC}^*$--orbits, \cite{ChtDjo:NormalFormsTensRanksPureStatesPureQubits} with (random) $a= 1,b = 2,c = \frac{8}{5},d =\frac{1}{2}$. }\label{tab:ChtDjo} \end{table} Setting all parameters ($a,b,c,d$) in the generic expressions to $0$, one obtains representatives of the $9$ orbits of the null-cone (up to qubit permutation). Here again, those orbits can also be distinguished by the rank profile. It furnishes an efficient and easy way to identify the class of a nilpotent element. 
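The rank-profile computation itself is elementary. As a minimal numerical sketch of the idea (in Python, purely illustrative and not our M2 code; the helper name \texttt{rank\_profile} is our own): the ranks of successive powers of a nilpotent operator decay to zero, while those of a diagonalizable operator stabilize immediately.

```python
import numpy as np

def rank_profile(M, tol=1e-9):
    """Ranks of successive powers M, M^2, ... until the sequence stabilizes."""
    profile, P = [], M.astype(float)
    while True:
        profile.append(int(np.linalg.matrix_rank(P, tol=tol)))
        if len(profile) > 1 and profile[-1] == profile[-2]:
            return profile
        P = P @ M

# Nilpotent: a single 3x3 Jordan block with eigenvalue 0 decays to rank 0.
J = np.diag([1.0, 1.0], k=1)
# Semisimple (diagonalizable): the profile stabilizes at the rank.
D = np.diag([1.0, 2.0, 0.0])
```

Mirroring the tables above, a profile that stabilizes at a nonzero value signals a nonzero semi-simple part, while a profile that decays to zero signals a nilpotent element.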
An algorithm based on invariants and covariants was published in the quantum information literature to classify four-qubit nilpotent elements \cite{HLT14_atlas} and was later implemented in the study of Grover's quantum search algorithm and Shor's quantum algorithm \cite{JH19}. Here, because the nilpotent orbits of the four-qubit case have different dimensions, the rank profile can be used as an alternative to distinguish the nilpotent orbits (up to qubit permutation); see Table~\ref{tab:ChtDjo}. We computed that the root profile of the adjoint operators in $\sl_2^{\times 4} \oplus (\CC^2)^{\otimes 4}$ (which are $28\times 28$ matrices) for the generic semi-simple states of \cite{dietrich2022classification}, denoted $G_{abcd}$ in \cite{verstraete02} and family 1 in \cite{ChtDjo:NormalFormsTensRanksPureStatesPureQubits}, consists of $24$ simple roots together with the root $0$ of multiplicity $4$. This is a direct consequence of the fact that the generic elements $G_{abcd}$ are the elements of $(\CC^2)^{\otimes 4}$ on which the $2\times 2\times 2\times 2$ Cayley hyperdeterminant, a degree $24$ invariant polynomial, does not vanish. Recall that the Cayley hyperdeterminant is the defining equation of the dual of $X=(\PP^1)^4\subset \PP((\CC^2)^{\otimes 4})$. As shown in \cite{holweck_4qubit2}, it is the restriction to $(\CC^2)^{\otimes 4}$ of the adjoint discriminant $\Delta_{\mathfrak{g}}$ for $\mathfrak{g}=\mathfrak{s}\mathfrak{o}(8)$. The adjoint discriminant of a Lie algebra $\mathfrak{g}$ \cite[p.~$4605$]{TevelevJMS} is \begin{equation} \Delta_{\mathfrak{g}}(x)=\Delta\left(\frac{1}{t^n}\chi_{\ad_{x}}\right), \end{equation} where $n$ is the rank of the Lie algebra and $\Delta$ is the discriminant with respect to $t$. Here we consider again the algebra $\sl_8 \oplus \bw{4} \CC^8$, which contains $\sl_2^{\oplus 4} \oplus (\CC^2)^{\otimes 4}$.
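As a toy illustration of this recipe (a sympy sketch for $\mathfrak{g}=\mathfrak{sl}_2$, of rank $n=1$, rather than $\mathfrak{so}(8)$; not part of our computations), the adjoint discriminant of $x=aH+bE+cF$ comes out as $16(a^2+bc)$, a scalar multiple of the familiar $\mathfrak{sl}_2$ discriminant $a^2+bc$:

```python
import sympy as sp

a, b, c, t = sp.symbols('a b c t')

# ad_x for x = a*H + b*E + c*F in the ordered basis (H, E, F) of sl_2,
# using [H,E] = 2E, [H,F] = -2F, [E,F] = H.
ad = sp.Matrix([
    [0,    -c,    b],
    [-2*b, 2*a,   0],
    [2*c,  0,  -2*a],
])

chi = ad.charpoly(t).as_expr()          # t**3 - 4*(a**2 + b*c)*t
reduced = sp.cancel(chi / t)            # divide by t**n, with n = rank(sl_2) = 1
adj_disc = sp.discriminant(reduced, t)  # 16*a**2 + 16*b*c
```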
In particular, if one restricts to $x=T\in M=(\CC^2)^{\otimes 4}$, the adjoint root profile should consist of $24$ single roots (and $0$ with order $4$) for tensors $T$ outside the dual variety. In other words, when $\mathfrak{g}\oplus M$ is a Lie algebra, the adjoint root profile allows us to decide if a tensor $T$ belongs to the dual variety of the highest weight vector $G$-orbit of $M$, $X_{G, M}=G.[v]\subset \PP(M)$. Note that when $\mathfrak{g}\oplus M$ is not a Lie algebra, as in Example~\ref{ex:g36} for $\bw{3}\CC^6$, we still have a root profile, but the connection with the dual of the adjoint variety is not clear at present. \subsubsection{5 qubits} For larger quantum systems, few results are known. The $5$-qubit classification is considered intractable because the number of orbits is infinite, as is the number of nilpotent orbits. However, our method already provides a criterion to decide if two SLOCC orbits are distinct. Indeed, for a Hilbert space $M$ with SLOCC group $G$ and corresponding Lie algebra $\mathfrak{g}$, quantum states $\psi_1,\psi_2\in M$ are not SLOCC equivalent if the rank profiles of $\ad_{\psi_1}$ and $\ad_{\psi_2}$ are distinct. In \cite{Osterloh06}, the notion of an entanglement filter is used to distinguish some five-qubit states in $M=(\CC^2)^{\otimes 5}$. We list these states and their adjoint rank profiles in Table~\ref{tab:Osterloh}. The fact that these $5$-qubit quantum states are not SLOCC equivalent was also obtained by \cite{LT05} using covariants. The rank profiles in Table~\ref{tab:Osterloh} lead to the same conclusion. As already noted, $\sl_{10} \oplus \bw{ 5} \CC^{10}$ does not have a Lie algebra structure, but it does have GJD. Notice that $\CC^{10} = (\CC^2) ^{\oplus 5}$, and correspondingly $ (\CC^2)^{\otimes 5}\subset \bw5 \CC^{10}$. This set of tensors is realized by grouping basis vectors $\{e_i, e_{i+5}\} = \CC^2_i$.
Therefore, we can consider the subalgebra and the Jordan decomposition for the $\ZZ_2$-graded algebra we have constructed without any more effort. As seen in Table~\ref{tab:C2o5}, the rank profiles appear to detect border ranks up to 4 but not higher. In \cite{OedingSam}, we proved that the set of border rank 5 tensors is defined by invariants of degrees 6 and 16. The trace power invariants of this adjoint form cannot produce these invariants for border rank 5 since the block structure of the adjoint operator implies that the only powers that can have a non-zero trace are multiples of 4, and the degree 6 invariant cannot be produced this way. This feature is also apparent in the characteristic polynomials in Table~\ref{tab:C2o5}. Here is an analogy to the Vinberg cases, which we label as a proposition for later reference even if the idea is still in its initial stages. \begin{prop} The algebra $\fa = \sl_{10} \oplus \bw{ 5} \CC^{10}$ has a 10-dimensional Cartan-like subalgebra that lives entirely in $(\CC^2)^{\otimes 5}\subset \bw 5\CC^{10}$. A set of basic almost ad-semisimple elements is the following: \[ \begin{matrix} p_{0,\pm} = e_{1}e_{2}e_{4}e_{6}e_{8} \pm e_{0}e_{3}e_{5}e_{7}e_{9},& p_{1,\pm} = e_{0}e_{3}e_{4}e_{6}e_{8} \pm e_{1}e_{2}e_{5}e_{7}e_{9},\\ p_{2,\pm} = e_{0}e_{2}e_{5}e_{6}e_{8} \pm e_{1}e_{3}e_{4}e_{7}e_{9},& p_{3,\pm} = e_{0}e_{2}e_{4}e_{7}e_{8} \pm e_{1}e_{3}e_{5}e_{6}e_{9},\\ p_{4,\pm} = e_{1}e_{3}e_{5}e_{7}e_{8} \pm e_{0}e_{2}e_{4}e_{6}e_{9}. \end{matrix} \] \end{prop} \begin{proof} First, a word of warning on notation. By \defi{Cartan-like} subalgebra, we mean an abelian subalgebra $\mathfrak{h} \subset \fa$ of almost diagonalizable elements. Since the bracket is commuting on this grade 1 piece of $\fa$, it is not a restriction to ask that this set be abelian. 
So we asked, instead, whether $[a,b]=0$ for independent elements $a$ and $b$ of our basis of match vectors; we do not have, for instance, $[a, a] = 0$ necessarily, so the algebra $\mathfrak{h}$ is not nilpotent. Indeed, we noticed that for basis elements $x$ we have $[x,x] = \sum_{i=1}^5 \pm h_i \neq 0$. By almost diagonalizability we mean that the dimension of the $0$-eigenspace is 249 versus its algebraic multiplicity of 251, while the other 4 eigenspaces have the correct dimension of 25 each. The extra kernel seems to be due to the following facts: \begin{itemize} \item $[p_{i,+}, p_{i,-}] = 0$ and $p_{i,-}$ is in the kernel of $(\ad_{p_{i,+}})^2$ but not the kernel of $(\ad_{p_{i,+}})$, \item $[p_{i,+}, p_{i,+}] = k:= \sum_{j=1}^5 \pm h_j $ is in the kernel of $(\ad_{p_{i,+}})^3$ but not the kernel of $(\ad_{p_{i,+}})^2$. \end{itemize} Also, we did not prove that this subalgebra is maximal in $\fa$, but we will explain in what sense the algebra we construct is maximal. Let us consider some elements that have a chance to be basic semisimple elements, by analogy with the $\bw{4}\CC^8$ and $\bw3\CC^9$ cases, whose basic semisimple elements are governed by certain combinatorial designs (Steiner quadruple and triple systems, respectively). The bipartition $\{\{0,1,2,3,4\}, \{5,6,7,8,9\}\}$ indicates a copy of $(\CC^2)^{\otimes 5}$ in $\bw{5}\CC^{10}$. The 16 matchings on the bipartite graph associated to that bipartition are the following.
\[\begin{matrix} \left\{\left\{0,\:2,\:4,\:6,\:8\right\},\:\left\{1,\:3,\:5,\:7,\:9\right\}\right\}, & \left\{\left\{0,\:2,\:4,\:6,\:9\right\},\:\left\{1,\:3,\:5,\:7,\:8\right\}\right\},\\ \left\{\left\{0,\:2,\:4,\:7,\:8\right\},\:\left\{1,\:3,\:5,\:6,\:9\right\}\right\}, & \left\{\left\{0,\:2,\:4,\:7,\:9\right\},\:\left\{1,\:3,\:5,\:6,\:8\right\}\right\},\\ \left\{\left\{0,\:2,\:5,\:6,\:8\right\},\:\left\{1,\:3,\:4,\:7,\:9\right\}\right\}, & \left\{\left\{0,\:2,\:5,\:6,\:9\right\},\:\left\{1,\:3,\:4,\:7,\:8\right\}\right\},\\ \left\{\left\{0,\:2,\:5,\:7,\:8\right\},\:\left\{1,\:3,\:4,\:6,\:9\right\}\right\}, & \left\{\left\{0,\:3,\:4,\:6,\:8\right\},\:\left\{1,\:2,\:5,\:7,\:9\right\}\right\},\\ \left\{\left\{0,\:3,\:4,\:6,\:9\right\},\:\left\{1,\:2,\:5,\:7,\:8\right\}\right\}, & \left\{\left\{0,\:3,\:4,\:7,\:8\right\},\:\left\{1,\:2,\:5,\:6,\:9\right\}\right\},\\ \left\{\left\{0,\:3,\:5,\:6,\:8\right\},\:\left\{1,\:2,\:4,\:7,\:9\right\}\right\}, & \left\{\left\{1,\:2,\:4,\:6,\:8\right\},\:\left\{0,\:3,\:5,\:7,\:9\right\}\right\},\\ \left\{\left\{1,\:2,\:4,\:6,\:9\right\},\:\left\{0,\:3,\:5,\:7,\:8\right\}\right\}, & \left\{\left\{1,\:2,\:4,\:7,\:8\right\},\:\left\{0,\:3,\:5,\:6,\:9\right\}\right\},\\ \left\{\left\{1,\:2,\:5,\:6,\:8\right\},\:\left\{0,\:3,\:4,\:7,\:9\right\}\right\}, & \left\{\left\{1,\:3,\:4,\:6,\:8\right\},\:\left\{0,\:2,\:5,\:7,\:9\right\}\right\}. \end{matrix} \] Each of these matchings corresponds to two elements, call them \defi{match vectors}, of our distinguished copy of $(\CC^2)^{\otimes 5} \subset \bw{5}\CC^{10}$, for instance in the first case we have the vectors $e_{0}e_{2}e_{4}e_{6}e_{8}\pm e_{1}e_{3}e_{5}e_{7}e_{9}$. Individually, the rank profiles and characteristic polynomials of match vectors are all identical; see Table~\ref{tab:C2o5ss}. The fact that the adjoint rank profile stabilizes led us to believe that match vectors are not ad-nilpotent and might be ad-semisimple. 
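The enumeration of these 16 pairs can be sketched in a few lines of Python (an illustration, not part of our computations). Observe that each listed $5$-set selects one index from each consecutive pair $\{2j, 2j+1\}$; pairing each selection with its complement gives $2^5/2 = 16$ unordered pairs.

```python
from itertools import product

def match_pairs():
    """The 16 unordered pairs {S, complement of S}, where S picks one
    index from each of the five pairs {2j, 2j+1} inside {0, ..., 9}."""
    pairs = set()
    for bits in product((0, 1), repeat=5):
        S = frozenset(2*j + bit for j, bit in enumerate(bits))
        pairs.add(frozenset({S, frozenset(range(10)) - S}))
    return pairs
```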
We checked the geometric multiplicities of the eigenvalues ($4^\text{th}$ roots of unity) found by factoring the characteristic polynomial and found that all 4 non-zero eigenspaces have geometric multiplicity 25, which agrees with their algebraic multiplicity, so the adjoint forms of match vectors are all almost diagonalizable. \begin{remark}When working in Macaulay2 (M2), we noted that computing dimensions of eigenspaces via matrix kernels was problematic over the inexact field $\CC$, but when we made the symbolic field extension by hand and worked in exact arithmetic, we could verify that the geometric and algebraic multiplicities of the eigenvalues of our adjoint forms were indeed the same. \end{remark} The fact that the match vectors all have the same adjoint characteristic polynomial means they have the same adjoint eigenvalues, counted with multiplicity. However, they are not necessarily simultaneously diagonalizable unless they also commute, which seems not to be the case. The next step is to attempt to find a maximal subset of these. We checked that a subset of 5 of these forms ($p_{0,+},\ldots,p_{4,+}$, for instance) satisfies $[a,b]=0$ for $a\neq b$; we then tried to add $p_{i,-}$ for each $p_{i,+}$ in our set. Adding any other basic match vector to the set violated $[a,b]=0$ for some $a\neq b$, so in this sense our set is maximal.
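The multiplicity comparison described in the remark can be sketched in exact arithmetic as follows (a generic Python/sympy illustration of the check, not our M2 code; the name \texttt{is\_diagonalizable\_check} is our own):

```python
import sympy as sp

def is_diagonalizable_check(M):
    """Compare geometric vs. algebraic multiplicity of each eigenvalue exactly."""
    t = sp.symbols('t')
    n = M.shape[0]
    for lam, alg_mult in sp.roots(M.charpoly(t).as_expr(), t).items():
        geo_mult = n - (M - lam*sp.eye(n)).rank()  # dimension of the eigenspace
        if geo_mult != alg_mult:
            return False
    return True
```

Working over an exact field (e.g. after an explicit symbolic field extension) avoids the kernel-dimension issues encountered over the inexact field $\CC$.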
\end{proof} \begin{table} \begin{tabular}{CCCC} \scalebox{.8}{normal form}& \scalebox{.9}{Rank Profile in $\bw{5}\CC^{10}$ }&\scalebox{.9}{Rank Profile in $(\CC^{2})^{\otimes 5}$} & \scalebox{.9}{$\chi_{T}(t)$, $\chi_{T}(t)_{\mid(\CC^{2})^{\otimes 5}}$}\\ \hline\\ \begin{matrix} e_{1}e_{2}e_{4}e_{6}e_{8}+e_{0}e_{2}e_{5}e_{6}e_{8}\\ +e_{1}e_{3}e_{4}e_{7}e_{9}+e_{0}e_{3}e_{5}e_{7}e_{9} \end{matrix} & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&72&72&0&144\\ 66&0&0&72&138\\ 0&66&66&0&132\\ 66&0&0&66&132 \end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&10&10&0&20\\ 6&0&0&10&16\\ 0&6&6&0&12\\ 6&0&0&6&12 \end{smallmatrix}\right| & \begin{smallmatrix} \left(t\right)^{219}\left(t^4-1\right)^{24}\left(t^{4}-4\right)^{9},\\ \left(t\right)^{35}\left(t^{4}-4\right)^{3} \end{smallmatrix} \\\\ \begin{matrix}e_{1}e_{2}e_{4}e_{6}e_{8}+e_{0}e_{3}e_{5}e_{7}e_{8} \\ +e_{1}e_{2}e_{4}e_{6}e_{9}+e_{0}e_{3}e_{5}e_{7}e_{9} \end{matrix} & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&51&51&0&102\\ 18&0&0&34&52\\ 0&1&1&0&2\\ 1&0&0&0&1\\ 0&0&0&0&0 \end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&11&11&0&22\\ 2&0&0&10&12\\ 0&1&1&0&2\\ 1&0&0&0&1\\ 0&0&0&0&0 \end{smallmatrix}\right| & \begin{matrix} \left(t\right)^{351},& \left(t\right)^{47} \end{matrix} \\\\ 2\,e_{1}e_{2}e_{4}e_{6}e_{8} & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&26&26&0&52\\ 0&0&0&1&1\\ 0&0&0&0&0 \end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&6&6&0&12\\ 0&0&0&1&1\\ 0&0&0&0&0 \end{smallmatrix}\right| & \begin{matrix} \left(t\right)^{351},& \left(t\right)^{47} \end{matrix} \\\\ \begin{matrix}e_{1}e_{2}e_{4}e_{6}e_{8}+e_{0}e_{3}e_{4}e_{6}e_{8}\\ -e_{1}e_{2}e_{5}e_{7}e_{9}+e_{0}e_{3}e_{5}e_{7}e_{9} 
\end{matrix} & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&76&76&0&152\\ 56&0&0&76&132\\ 0&56&56&0&112\\ 56&0&0&56&112 \end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&12&12&0&24\\ 4&0&0&12&16\\ 0&4&4&0&8\\ 4&0&0&4&8 \end{smallmatrix}\right| & \begin{smallmatrix} \left(t\right)^{239}\left(t^4-1\right)^{24}\left(t^{4}-4\right)^{4},\\ \left(t\right)^{39}\left(t^{4}-4\right)^{2} \end{smallmatrix} \\\\ \begin{matrix} e_{1}e_{2}e_{4}e_{6}e_{8}+e_{0}e_{2}e_{4}e_{7}e_{8}\\ -e_{1}e_{3}e_{5}e_{6}e_{9}+e_{0}e_{3}e_{5}e_{7}e_{9} \end{matrix} & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&76&76&0&152\\ 56&0&0&76&132\\ 0&56&56&0&112\\ 56&0&0&56&112 \end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&12&12&0&24\\ 4&0&0&12&16\\ 0&4&4&0&8\\ 4&0&0&4&8 \end{smallmatrix}\right| & \begin{smallmatrix} \left(t\right)^{239}\left(t^4-1\right)^{24}\left(t^{4}-4\right)^{4},\\ \left(t\right)^{39}\left(t^{4}-4\right)^{2} \end{smallmatrix} \\\\ \begin{matrix} e_{1}e_{2}e_{4}e_{6}e_{8}+e_{1}e_{2}e_{4}e_{7}e_{8}\\ -e_{0}e_{3}e_{5}e_{6}e_{9}+e_{0}e_{3}e_{5}e_{7}e_{9} \end{matrix} & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&51&51&0&102\\ 50&0&0&51&101\\ 0&50&50&0&100\\ 50&0&0&50&100 \end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&11&11&0&22\\ 10&0&0&11&21\\ 0&10&10&0&20\\ 10&0&0&10&20 \end{smallmatrix}\right| & \begin{matrix} \left(t\right)^{251}\left(t^{4}-4\right)^{25},\\ \left(t\right)^{27}\left(t^{4}-4\right)^{5} \end{matrix} \\\\ \end{tabular} \caption{Adjoint rank profiles and characteristic polynomials for some sums of semisimple-like tensors in $(\CC^2)^{\otimes 5}$.}\label{tab:C2o5ss} \end{table} We are curious about the behavior of the ad-semisimple elements coming 
from the match vectors and linear combinations of such. When we add specific pairs, we obtain other forms that seem like semisimple forms and some that are nilpotent. We list these types as rows 2 and 3 in Table~\ref{tab:C2o5ss}. As in the standard Vinberg theory, the ranks and characteristic polynomials depend on the coefficients we apply when combining semisimple elements. For example, the form $a (e_{1}e_{2}e_{4}e_{6}e_{8}+e_{0}e_{3}e_{5}e_{7}e_{9}) +b(e_{1}e_{2}e_{4}e_{6}e_{9}+e_{0}e_{3}e_{5}e_{7}e_{8})$ has adjoint rank 144 when $a^2-b^2 \neq 0$ and 102 when $a^2-b^2 =0$ (compare the first two rows of Table~\ref{tab:C2o5ss}). This collapse comes from a factorization: \[ a (e_{1}e_{2}e_{4}e_{6}e_{8}+e_{0}e_{3}e_{5}e_{7}e_{9}) +b(e_{1}e_{2}e_{4}e_{6}e_{9}+e_{0}e_{3}e_{5}e_{7}e_{8}) = e_{1}e_{2}e_{4}e_{6}(a e_{8} + b e_9) +e_{0}e_{3}e_{5}e_{7}(b e_{8} + a e_9) .\] This form is equivalent to the semisimple-like form \[ e_{1}e_{2}e_{4}e_{6} \tilde e_8+e_{0}e_{3}e_{5}e_{7}\tilde e_9,\] when $\tilde e_8 = a e_{8} + b e_9$ and $\tilde e_9 = b e_{8} + a e_9$ are linearly independent, i.e. when $\det \left( \begin{smallmatrix} a & b \\ b & a\end{smallmatrix}\right) = a^2 - b^2 \neq 0$. Otherwise, the form is not concise and becomes a point on a restricted chordal variety \cite{BidlemanOeding}: \[ (e_{1}e_{2}e_{4}e_{6} +e_{0}e_{3}e_{5}e_{7})(\tilde e_8).\] As in the Vinberg situations, the non-concise tensors are nilpotent. We can also make linear combinations of semisimple forms with higher rank. For instance, when adding pairs $0,3,4,5$ we obtain the form in row 4 of Table~\ref{tab:C2o5ss}. Indeed, this appears to be a rich theory and indicates a new way to construct tensors of high rank.
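The collapse condition can be verified symbolically in a line or two (a sympy sketch of the $2\times 2$ conciseness test, for illustration only):

```python
import sympy as sp

a, b = sp.symbols('a b')

# Coefficient matrix of the substitution appearing in the conciseness
# condition det != 0; its determinant is a**2 - b**2.
C = sp.Matrix([[a, b], [b, a]])
```

At $a=\pm b$ the matrix drops to rank 1, the substituted basis vectors coincide up to scale, and the tensor fails to be concise.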
\begin{table} \begin{tabular}{CCCC} \scalebox{.8}{Tensor Rank }& \scalebox{.9}{Rank Profile in $\bw{5}\CC^{10}$ }&\scalebox{.9}{Rank Profile in $(\CC^{2})^{\otimes 5}$} & \scalebox{.9}{$\chi_{T}(t)_{\mid(\CC^{2})^{\otimes 5}}$}\\ \hline\\ 1 & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&26&26&0&52\\ 0&0&0&1&1\\ 0&0&0&0&0 \end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&6&6&0&12\\ 0&0&0&1&1\\ 0&0&0&0&0 \end{smallmatrix}\right| & t^{47} \\[5ex] 2& \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&51&51&0&102\\ 50&0&0&51&101\\ 0&50&50&0&100 \end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&11&11&0&22\\ 10&0&0&11&21\\ 0&10&10&0&20 \end{smallmatrix}\right| &\begin{smallmatrix} \left(t\right)^{27}\left(t^{4}-a^2\right)^{5} \end{smallmatrix} \\[5ex] 3 & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&75&75&0&150\\ 50&0&0&75&125\\ 0&50&50&0&100\\ \end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&15&15&0&30\\ 10&0&0&15&25\\ 0&10&10&0&20 \end{smallmatrix}\right| &\begin{smallmatrix} \left(t\right)^{27}\left(t^{4}-a\right)^{5} \end{smallmatrix} \\[5ex] \geq 4 & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&95&95&0&190\\ 90&0&0&95&185\\ 0&90&90&0&180 \end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&15&15&0&30\\ 10&0&0&15&25\\ 0&10&10&0&20 \end{smallmatrix}\right| & \begin{smallmatrix}\left(t\right)^{27} \left(t^{4}-a\right)\left(t^{4}-b\right)\left(t^{4}-c\right)\left(t^{4}-d\right)\left(t^{4}-e\right) \end{smallmatrix}\\\\ \end{tabular} \caption{Adjoint rank profiles and characteristic polynomials for random tensors of each tensor rank in $(\CC^2)^{\otimes 
5}$ ($a,b,\ldots, e$ denote random rational numbers).}\label{tab:C2o5} \end{table} In Table~\ref{tab:Osterloh} we report our computations for the examples in Osterloh \cite{Osterloh06}, which are the following in the physics notation: \begin{eqnarray*} \ket{\Psi_2}&=&\frac{1}{\sqrt{2}}(\ket{00000}+\ket{11111}), \\ \ket{\Psi_4}&=&\frac{1}{2}(\ket{11111}+\ket{11100}+\ket{00010}+\ket{00001}), \\ \ket{\Psi_5}&=&\frac{1}{\sqrt{6}}(\sqrt{2}\ket{11111}+\ket{11000}+\ket{00100}+\ket{00010}+\ket{00001}), \\ \ket{\Psi_6}&=&\frac{1}{2\sqrt{2}}(\sqrt{3}\ket{11111}+\ket{10000}+\ket{01000}+\ket{00100}+\ket{00010}+\ket{00001}). \end{eqnarray*} Our computations show that the single construction of the adjoint operators and their ranks and characteristic polynomials are enough to distinguish these orbits. \begin{table} \begin{tabular}{CCCC} \scalebox{.9}{Normal Form in $\bw{5}\CC^{10}$ }& \scalebox{.9}{Rank Profile in $\bw{5}\CC^{10}$ }&\scalebox{.9}{Rank Profile in $(\CC^{2})^{\otimes 5}$} & \scalebox{.9}{$\chi_{T}(t)_{\mid(\CC^{2})^{\otimes 5}}$}\\ \hline\\ \begin{smallmatrix} \Psi_2 &=& {e}_{0}{e}_{2}{e}_{4}{e}_{6}{e}_{8} \\ &&+{e}_{1}{e}_{3}{e}_{5}{e}_{7}{e}_{9} \end{smallmatrix} & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&51&51&0&102\\ 50&0&0&51&101\\ 0&50&50&0&100 \end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&11&11&0&22\\ 10&0&0&11&21\\ 0&10&10&0&20\end{smallmatrix}\right| &t^{27}\left({t^{4}-1}\right)^{2} \\[4ex]\begin{smallmatrix} \Psi_4 &=& e_{1}e_{3}e_{5}e_{6}e_{8}\\ &&+e_{0}e_{2}e_{4}e_{7}e_{8}\\ &&+e_{0}e_{2}e_{4}e_{6}e_{9}\\ &&+e_{1}e_{3}e_{5}e_{7}e_{9} \end{smallmatrix} & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&76&76&0&152\\ 56&0&0&76&132\\ 0&56&56&0&112\end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&12&12&0&24\\ 4&0&0&12&16\\ 
0&4&4&0&8\end{smallmatrix}\right| & \begin{smallmatrix}t^{39}\left({t^{4}-2}\right)^{2}\end{smallmatrix} \\[4ex] \begin{smallmatrix} \Psi_5 &=& \sqrt2 e_{1}e_{3}e_{5}e_{7}e_{9}\\ &&+e_{1}e_{3}e_{4}e_{6}e_{8}\\ &&+e_{0}e_{2}e_{5}e_{6}e_{8}\\ &&+e_{0}e_{2}e_{4}e_{7}e_{8}\\ &&+e_{0}e_{2}e_{4}e_{6}e_{9} \end{smallmatrix} & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&84&84&0&168\\ 42&0&0&84&126\\ 0&42&42&0&84\\ 9&0&0&42&51\\ 0&9&9&0&18\\ 0&0&0&9&9\\ 0&0&0&0&0\end{smallmatrix} \right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&14&14&0&28\\ 6&0&0&14&20\\ 0&6&6&0&12\\ 3&0&0&6&9\\ 0&3&3&0&6\\ 0&0&0&3&3\\ 0&0&0&0&0 \end{smallmatrix}\right| &t^{47} \\[6ex]\begin{smallmatrix} \Psi_6 &=& \sqrt3 e_{1}e_{3}e_{5}e_{7}e_{9}\\ && +e_{1}e_{2}e_{4}e_{6}e_{8}\\ &&+e_{0}e_{3}e_{4}e_{6}e_{8}\\ &&+e_{0}e_{2}e_{5}e_{6}e_{8}\\ &&+e_{0}e_{2}e_{4}e_{7}e_{8}\\ &&+e_{0}e_{2}e_{4}e_{6}e_{9} \end{smallmatrix} & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&75&75&0&150\\ 50&0&0&75&125\\ 0&50&50&0&100\\ 25&0&0&50&75\\ 0&25&25&0&50\\ 0&0&0&25&25\\ 0&0&0&0&0\end{smallmatrix}\right| & \left|\begin{smallmatrix} B_{00} & B_{01} & B_{10}& B_{11}& B \\[.5ex] \hline\\[.5ex] 0&15&15&0&30\\ 10&0&0&15&25\\ 0&10&10&0&20\\ 5&0&0&10&15\\ 0&5&5&0&10\\ 0&0&0&5&5\\ 0&0&0&0&0 \end{smallmatrix}\right| &t^{47} \\\\ \end{tabular} \caption{Adjoint rank profiles and characteristic polynomials for the examples for 5 qubits from \cite{Osterloh06}.}\label{tab:Osterloh} \end{table} \begin{example}[$5\times 5\times 5$ tensors] Now we consider tensors in $\bw3 W$, with $\dim W = 15$, and we do computations over $\ZZ/1000000007$ to avoid coefficient explosion for computing ranks of matrices. In particular, we're interested in tensors in $V_1\otimes V_2 \otimes V_3 \subset \bw3 W$, with $\dim V_j= 5$. Respectively we take the bases $\{e_{0+5*(j-1)},\ldots,e_{4 + 5*(j-1)}\}$ of $V_j$ for $j=1,2,3$. 
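Rank computations over $\ZZ/1000000007$ amount to Gaussian elimination modulo a large prime. Here is a self-contained Python sketch of the idea (our actual computations use Macaulay2's built-in linear algebra over this finite field):

```python
def rank_mod_p(rows, p=1_000_000_007):
    """Rank of an integer matrix over Z/p via Gaussian elimination (p prime)."""
    A = [[x % p for x in row] for row in rows]
    m, n = len(A), len(A[0])
    rank = 0
    for col in range(n):
        # find a pivot in this column below the current rank
        piv = next((i for i in range(rank, m) if A[i][col]), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        inv = pow(A[rank][col], p - 2, p)  # Fermat inverse, valid since p is prime
        A[rank] = [x * inv % p for x in A[rank]]
        for i in range(m):
            if i != rank and A[i][col]:
                f = A[i][col]
                A[i] = [(x - f * y) % p for x, y in zip(A[i], A[rank])]
        rank += 1
        if rank == m:
            break
    return rank
```

Working modulo a word-sized prime keeps every entry bounded, avoiding the coefficient explosion of exact rational elimination on these large adjoint matrices.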
The rank profiles of the adjoint operators for forms of each rank are recorded in Tables~\ref{tab:W3C15} (for normal forms for low rank) and \ref{tab:W3C15b} (for randomized forms of a given rank). \begin{table} \begin{tabular}{l} $e_{0}e_{5}e_{10}$ \\ $\left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30} & B_{40}&B_{01}&B_{11}&B_{21}&B_{31}&B_{41}&B_{02}&B_{12}&B_{22}&B_{32}&B_{42}&B_{03}&B_{13}&B_{23}&B_{33}&B_{43}&B_{04}&B_{14}&B_{24}&B_{34}&B_{44}&B \\ \hline\\[.5ex] 0&37&0&0&0&0&0&220&0&0&0&0&0&924&0&0&0&0&0&220&37&0&0&0&0&1\,438\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right|$ \\ -- used 229.807 seconds to compute the block ranks \\[1ex] \hline $e_{0}e_{5}e_{10}+e_{1}e_{6}e_{11}$ \\ $\left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30} & B_{40}&B_{01}&B_{11}&B_{21}&B_{31}&B_{41}&B_{02}&B_{12}&B_{22}&B_{32}&B_{42}&B_{03}&B_{13}&B_{23}&B_{33}&B_{43}&B_{04}&B_{14}&B_{24}&B_{34}&B_{44}&B \\ \hline\\[.5ex] 0&74&0&0&0&0&0&355&0&0&0&0&0&1\,680&0&0&0&0&0&355&74&0&0&0&0&2\,538\\ 0&0&55&0&0&0&0&0&0&0&0&0&0&0&0&55&0&0&0&0&0&20&0&0&0&130\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&1&0&0&2\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0 \end{smallmatrix}\right|$ \\ -- used 265.469 seconds to compute the block ranks \\[1ex] \hline $e_{0}e_{5}e_{10}+e_{1}e_{6}e_{11}+e_{2}e_{7}e_{12}$ \\ $\left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30} & B_{40}&B_{01}&B_{11}&B_{21}&B_{31}&B_{41}&B_{02}&B_{12}&B_{22}&B_{32}&B_{42}&B_{03}&B_{13}&B_{23}&B_{33}&B_{43}&B_{04}&B_{14}&B_{24}&B_{34}&B_{44}&B \\ \hline\\[.5ex] 0&111&0&0&0&0&0&427&0&0&0&0&0&2\,310&0&0&0&0&0&427&111&0&0&0&0&3\,386\\ 0&0&110&0&0&0&0&0&0&0&0&0&0&0&0&110&0&0&0&0&0&57&0&0&0&277\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&56&0&0&0&0&0&56&0&0&112\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&56&0&0&0&0&0&0&0&56\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0 \end{smallmatrix}\right|$ 
\\ -- used 310.79 seconds to compute the block ranks \\[1ex] \hline $e_{0}e_{5}e_{10}+e_{1}e_{6}e_{11}+e_{2}e_{7}e_{12}+e_{3}e_{8}e_{13}$ \\ $\left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30} & B_{40}&B_{01}&B_{11}&B_{21}&B_{31}&B_{41}&B_{02}&B_{12}&B_{22}&B_{32}&B_{42}&B_{03}&B_{13}&B_{23}&B_{33}&B_{43}&B_{04}&B_{14}&B_{24}&B_{34}&B_{44}&B \\ \hline\\[.5ex] 0&148&0&0&0&0&0&454&0&0&0&0&0&2\,850&0&0&0&0&0&454&148&0&0&0&0&4\,054\\ 0&0&147&0&0&0&0&0&0&0&0&0&0&0&0&147&0&0&0&0&0&112&0&0&0&406\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&111&0&0&0&0&0&111&0&0&222\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&111&0&0&0&0&0&0&0&111\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0 \end{smallmatrix}\right|$ \\ -- used 338.107 seconds to compute the block ranks \\[1ex] \hline $e_{0}e_{5}e_{10}+e_{1}e_{6}e_{11}+e_{2}e_{7}e_{12}+e_{3}e_{8}e_{13}+e_{4}e_{9}e_{14}$ \\ $\left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30} & B_{40}&B_{01}&B_{11}&B_{21}&B_{31}&B_{41}&B_{02}&B_{12}&B_{22}&B_{32}&B_{42}&B_{03}&B_{13}&B_{23}&B_{33}&B_{43}&B_{04}&B_{14}&B_{24}&B_{34}&B_{44}&B \\ \hline\\[.5ex] 0&184&0&0&0&0&0&454&0&0&0&0&0&3\,336&0&0&0&0&0&454&184&0&0&0&0&4\,612\\ 0&0&184&0&0&0&0&0&0&0&0&0&0&0&0&184&0&0&0&0&0&184&0&0&0&552\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&184&0&0&0&0&0&184&0&0&368\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&184&0&0&0&0&0&0&0&184\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0 \end{smallmatrix}\right|$ \\ -- used 359.908 seconds to compute the block ranks \\[1ex] \hline \end{tabular} \caption{Forms of each rank in $\bw{3}\CC^{15}$ and their adjoint rank profiles in $\fa = \sl_{15} \oplus \bigoplus_k \bw{3k}\CC^{15}$} \label{tab:W3C15} \end{table} \begin{table} \begin{tabular}{l} \text{random rank 6} -- used 142.025 seconds to construct the adjoint form \\ $\left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30} & B_{40}&B_{01}&B_{11}&B_{21}&B_{31}&B_{41}&B_{02}&B_{12}&B_{22}&B_{32}&B_{42}&B_{03}&B_{13}&B_{23}&B_{33}&B_{43}&B_{04}&B_{14}&B_{24}&B_{34}&B_{44}&B \\ \hline\\[.5ex] 
0&216&0&0&0&0&0&454&0&0&0&0&0&3\,796&0&0&0&0&0&454&216&0&0&0&0&5\,136\\ 0&0&216&0&0&0&0&0&0&0&0&0&0&0&0&216&0&0&0&0&0&216&0&0&0&648\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&216&0&0&0&0&0&216&0&0&432\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&216&0&0&0&0&0&0&0&216\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0 \end{smallmatrix}\right|$ \\ -- used 277.98 seconds to compute the block ranks \\[1ex] \hline \text{random rank 7} -- used 139.479 seconds to construct the adjoint form \\ $\left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30} & B_{40}&B_{01}&B_{11}&B_{21}&B_{31}&B_{41}&B_{02}&B_{12}&B_{22}&B_{32}&B_{42}&B_{03}&B_{13}&B_{23}&B_{33}&B_{43}&B_{04}&B_{14}&B_{24}&B_{34}&B_{44}&B \\ \hline\\[.5ex] 0&222&0&0&0&0&0&454&0&0&0&0&0&4\,252&0&0&0&0&0&454&222&0&0&0&0&5\,604\\ 0&0&222&0&0&0&0&0&0&0&0&0&0&0&0&222&0&0&0&0&0&222&0&0&0&666\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&222&0&0&0&0&0&222&0&0&444\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&222&0&0&0&0&0&0&0&222\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0 \end{smallmatrix}\right|$ \\ -- used 298.66 seconds to compute the block ranks \\[1ex] \hline \text{random rank 8} -- used 135.61 seconds to construct the adjoint form \\ $\left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30} & B_{40}&B_{01}&B_{11}&B_{21}&B_{31}&B_{41}&B_{02}&B_{12}&B_{22}&B_{32}&B_{42}&B_{03}&B_{13}&B_{23}&B_{33}&B_{43}&B_{04}&B_{14}&B_{24}&B_{34}&B_{44}&B \\ \hline\\[.5ex] 0&222&0&0&0&0&0&454&0&0&0&0&0&4\,464&0&0&0&0&0&454&222&0&0&0&0&5\,816\\ 0&0&222&0&0&0&0&0&0&0&0&0&0&0&0&222&0&0&0&0&0&222&0&0&0&666\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&222&0&0&0&0&0&222&0&0&444\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&222&0&0&0&0&0&0&0&222\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0 \end{smallmatrix}\right|$ \\ -- used 291.446 seconds to compute the block ranks \\[1ex] \hline \text{random rank 9} -- used 133.797 seconds to construct the adjoint form \\ $\left|\begin{smallmatrix} B_{00}&B_{10}&B_{20}&B_{30} & 
B_{40}&B_{01}&B_{11}&B_{21}&B_{31}&B_{41}&B_{02}&B_{12}&B_{22}&B_{32}&B_{42}&B_{03}&B_{13}&B_{23}&B_{33}&B_{43}&B_{04}&B_{14}&B_{24}&B_{34}&B_{44}&B \\ \hline\\[.5ex] 0&222&0&0&0&0&0&454&0&0&0&0&0&4\,476&0&0&0&0&0&454&222&0&0&0&0&5\,828\\ 0&0&222&0&0&0&0&0&0&0&0&0&0&0&0&222&0&0&0&0&0&222&0&0&0&666\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&222&0&0&0&0&0&222&0&0&444\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&222&0&0&0&0&0&0&0&222\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0 \end{smallmatrix}\right|$ \\ -- used 293.198 seconds to compute the block ranks \end{tabular} \caption{Randomized forms of each rank in $\bw{3}\CC^{15}$ and their adjoint rank profiles in $\fa = \sl_{15} \oplus \bigoplus_k \bw{3k}\CC^{15}$} \label{tab:W3C15b} \end{table} For rank 10 and up, we obtain the same adjoint rank profile as for rank 9, making rank 9 the largest rank that these rank profiles can detect. We also note that for rank 7 and larger, the only block whose rank changes is $B_{32}$. Landsberg points out \cite{landsberg2015nontriviality} that the determinant of a particular exterior flattening gives non-trivial equations for border rank 8, so ours is not the first tool that can distinguish border rank 8 from 9 (the generic border rank). However, we remark that in the case of exterior flattenings, one has to construct each flattening separately and check for non-trivial equations, whereas the adjoint operator in the extension of the exterior algebra contains all of these exterior flattenings in a single construction. The fact that this case is still within our computational range tells us that we can use the adjoint operators to study this type of tensor further. \end{example} \section*{Acknowledgements} Holweck thanks Prof. Ash Abebe and the staff of the AU Mathematics and Statistics Department for making his visiting year at Auburn possible, as well as Prof. Ghislain Montavon from UTBM for his support in this project.
Holweck was partially supported by the Conseil R\'egional de Bourgogne Franche-Comt\'e (GeoQuant project). Oeding and Holweck acknowledge partial support from the Thomas Jefferson Foundation. Oeding acknowledges partial support from the University of Bourgogne for summer salary that supported part of this work. We also thank Laurent Manivel, Sasha Elashvili, Mamuka Jibladze, Skip Garibaldi, and Ian Tan for helpful discussions. An anonymous referee's remarks provided valuable insight for a revision of this work. Statement: During the preparation of this work the authors used Grammarly in order to identify and correct grammatical issues. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication. \FloatBarrier \newcommand{\arxiv}[1]{\href{http://arxiv.org/abs/#1}{{\tt arXiv:#1}}} \bibliography{main_bibfile} \newpage \include{trivectors} \end{document} \section*{Appendix}\label{sec:appendix} In this appendix we provide the trace power invariants for general semisimple elements and the adjoint rank profiles of all orbits in Vinberg and {\`E}la{\v{s}}vili's classification for $\bw3 \CC^9$ \cite{Vinberg-Elashvili}, as an ancillary file to our article ``Jordan Decompositions of Tensors.'' The orbits of $\SL_9$ in $\bw{3}\CC^9$ occur in 7 families depending on the form of their semi-simple parts. Recall the basic semi-simple elements: \[p_1 = e_{0}e_{1}e_{2}+e_{3}e_{4}e_{5}+e_{6}e_{7}e_{8}, \quad p_2 = e_{0}e_{3}e_{6}+e_{1}e_{4}e_{7}+e_{2}e_{5}e_{8}\] \[p_3 = e_{1}e_{5}e_{6}+e_{2}e_{3}e_{7}+e_{0}e_{4}e_{8}, \quad p_4 = e_{2}e_{4}e_{6}+e_{0}e_{5}e_{7}+e_{1}e_{3}e_{8}\] Below we give the adjoint rank profile for random elements of each of the different families of semi-simple elements. We note that each of these families has non-zero values for its trace powers in degrees 12, 18, 24, 30, etc.
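The trace-power invariants $f_k$ listed below were computed in M2. The recipe itself, $f_k = \mathrm{tr}(\ad_x^{\,k})$, is elementary; as a self-contained toy illustration (a Python/sympy sketch for $\mathfrak{sl}_2$ rather than $\bw{3}\CC^9$, with the same basis conventions as above), $f_2 = 8(a^2+bc)$ recovers the Killing form:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

# ad_x for x = a*H + b*E + c*F in the basis (H, E, F) of sl_2.
ad = sp.Matrix([[0, -c, b], [-2*b, 2*a, 0], [2*c, 0, -2*a]])

def trace_power(M, k):
    """f_k = tr(M^k); odd powers vanish since the spectrum is symmetric."""
    return sp.expand((M**k).trace())
```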
\subsection*{Family one} Basic semisimple form $\sum_{i=1}^4 \lambda_i p_i$. Block Ranks: $\left(\begin{smallmatrix} 0&0&80&80&0&0&0&80&0&240\\ 0&80&0&0&0&80&80&0&0&240\\ \end{smallmatrix}\right)$ The trace powers on generic semi-simple elements are: $f_{12} = ( \lambda_{1}^{12}+22\, \lambda_{1}^{6} \lambda_{2}^{6}+ \lambda_{2}^{12}-220\, \lambda_{1}^{6} \lambda_{2}^{3} \lambda_{3}^{3}-220\, \lambda_{1}^{3} \lambda_{2}^{6} \lambda_{3}^{3}+22\, \lambda_{1}^{6} \lambda_{3}^{6}+220\, \lambda_{1}^{3} \lambda_{2}^{3} \lambda_{3}^{6}+22\, \lambda_{2}^{6} \lambda_{3}^{6}+ \lambda_{3}^{12}-220\, \lambda_{1}^{6} \lambda_{2}^{3} \lambda_{4}^{3}+220\, \lambda_{1}^{3} \lambda_{2}^{6} \lambda_{4}^{3}-220\, \lambda_{1}^{6} \lambda_{3}^{3} \lambda_{4}^{3}+220\, \lambda_{2}^{6} \lambda_{3}^{3} \lambda_{4}^{3}-220\, \lambda_{1}^{3} \lambda_{3}^{6} \lambda_{4}^{3}+220\, \lambda_{2}^{3} \lambda_{3}^{6} \lambda_{4}^{3}+22\, \lambda_{1}^{6} \lambda_{4}^{6}-220\, \lambda_{1}^{3} \lambda_{2}^{3} \lambda_{4}^{6}+22\, \lambda_{2}^{6} \lambda_{4}^{6}+220\, \lambda_{1}^{3} \lambda_{3}^{3} \lambda_{4}^{6}+220\, \lambda_{2}^{3} \lambda_{3}^{3} \lambda_{4}^{6}+22\, \lambda_{3}^{6} \lambda_{4}^{6}+ \lambda_{4}^{12})\left(4536\right)$ $f_{18} = ( \lambda_{1}^{18}-17\, \lambda_{1}^{12} \lambda_{2}^{6}-17\, \lambda_{1}^{6} \lambda_{2}^{12}+ \lambda_{2}^{18}+170\, \lambda_{1}^{12} \lambda_{2}^{3} \lambda_{3}^{3}+1870\, \lambda_{1}^{9} \lambda_{2}^{6} \lambda_{3}^{3}+1870\, \lambda_{1}^{6} \lambda_{2}^{9} \lambda_{3}^{3}+170\, \lambda_{1}^{3} \lambda_{2}^{12} \lambda_{3}^{3}-17\, \lambda_{1}^{12} \lambda_{3}^{6}-1870\, \lambda_{1}^{9} \lambda_{2}^{3} \lambda_{3}^{6}-7854\, \lambda_{1}^{6} \lambda_{2}^{6} \lambda_{3}^{6}-1870\, \lambda_{1}^{3} \lambda_{2}^{9} \lambda_{3}^{6}-17\, \lambda_{2}^{12} \lambda_{3}^{6}+1870\, \lambda_{1}^{6} \lambda_{2}^{3} \lambda_{3}^{9}+1870\, \lambda_{1}^{3} \lambda_{2}^{6} \lambda_{3}^{9}-17\, \lambda_{1}^{6} \lambda_{3}^{12}-170\, \lambda_{1}^{3} \lambda_{2}^{3} 
\lambda_{3}^{12}-17\, \lambda_{2}^{6} \lambda_{3}^{12}+ \lambda_{3}^{18}+170\, \lambda_{1}^{12} \lambda_{2}^{3} \lambda_{4}^{3}-1870\, \lambda_{1}^{9} \lambda_{2}^{6} \lambda_{4}^{3}+1870\, \lambda_{1}^{6} \lambda_{2}^{9} \lambda_{4}^{3}-170\, \lambda_{1}^{3} \lambda_{2}^{12} \lambda_{4}^{3}+170\, \lambda_{1}^{12} \lambda_{3}^{3} \lambda_{4}^{3}-170\, \lambda_{2}^{12} \lambda_{3}^{3} \lambda_{4}^{3}+1870\, \lambda_{1}^{9} \lambda_{3}^{6} \lambda_{4}^{3}-1870\, \lambda_{2}^{9} \lambda_{3}^{6} \lambda_{4}^{3}+1870\, \lambda_{1}^{6} \lambda_{3}^{9} \lambda_{4}^{3}-1870\, \lambda_{2}^{6} \lambda_{3}^{9} \lambda_{4}^{3}+170\, \lambda_{1}^{3} \lambda_{3}^{12} \lambda_{4}^{3}-170\, \lambda_{2}^{3} \lambda_{3}^{12} \lambda_{4}^{3}-17\, \lambda_{1}^{12} \lambda_{4}^{6}+1870\, \lambda_{1}^{9} \lambda_{2}^{3} \lambda_{4}^{6}-7854\, \lambda_{1}^{6} \lambda_{2}^{6} \lambda_{4}^{6}+1870\, \lambda_{1}^{3} \lambda_{2}^{9} \lambda_{4}^{6}-17\, \lambda_{2}^{12} \lambda_{4}^{6}-1870\, \lambda_{1}^{9} \lambda_{3}^{3} \lambda_{4}^{6}-1870\, \lambda_{2}^{9} \lambda_{3}^{3} \lambda_{4}^{6}-7854\, \lambda_{1}^{6} \lambda_{3}^{6} \lambda_{4}^{6}-7854\, \lambda_{2}^{6} \lambda_{3}^{6} \lambda_{4}^{6}-1870\, \lambda_{1}^{3} \lambda_{3}^{9} \lambda_{4}^{6}-1870\, \lambda_{2}^{3} \lambda_{3}^{9} \lambda_{4}^{6}-17\, \lambda_{3}^{12} \lambda_{4}^{6}+1870\, \lambda_{1}^{6} \lambda_{2}^{3} \lambda_{4}^{9}-1870\, \lambda_{1}^{3} \lambda_{2}^{6} \lambda_{4}^{9}+1870\, \lambda_{1}^{6} \lambda_{3}^{3} \lambda_{4}^{9}-1870\, \lambda_{2}^{6} \lambda_{3}^{3} \lambda_{4}^{9}+1870\, \lambda_{1}^{3} \lambda_{3}^{6} \lambda_{4}^{9}-1870\, \lambda_{2}^{3} \lambda_{3}^{6} \lambda_{4}^{9}-17\, \lambda_{1}^{6} \lambda_{4}^{12}+170\, \lambda_{1}^{3} \lambda_{2}^{3} \lambda_{4}^{12}-17\, \lambda_{2}^{6} \lambda_{4}^{12}-170\, \lambda_{1}^{3} \lambda_{3}^{3} \lambda_{4}^{12}-170\, \lambda_{2}^{3} \lambda_{3}^{3} \lambda_{4}^{12}-17\, \lambda_{3}^{6} \lambda_{4}^{12}+ \lambda_{4}^{18})\left(-117936\right)$ $f_{24} 
=(111\, \lambda_{1}^{24}+506\, \lambda_{1}^{18} \lambda_{2}^{6}+10166\, \lambda_{1}^{12} \lambda_{2}^{12}+506\, \lambda_{1}^{6} \lambda_{2}^{18}+111\, \lambda_{2}^{24}-5060\, \lambda_{1}^{18} \lambda_{2}^{3} \lambda_{3}^{3}-206448\, \lambda_{1}^{15} \lambda_{2}^{6} \lambda_{3}^{3}-1118260\, \lambda_{1}^{12} \lambda_{2}^{9} \lambda_{3}^{3}-1118260\, \lambda_{1}^{9} \lambda_{2}^{12} \lambda_{3}^{3}-206448\, \lambda_{1}^{6} \lambda_{2}^{15} \lambda_{3}^{3}-5060\, \lambda_{1}^{3} \lambda_{2}^{18} \lambda_{3}^{3}+506\, \lambda_{1}^{18} \lambda_{3}^{6}+206448\, \lambda_{1}^{15} \lambda_{2}^{3} \lambda_{3}^{6}+4696692\, \lambda_{1}^{12} \lambda_{2}^{6} \lambda_{3}^{6}+12300860\, \lambda_{1}^{9} \lambda_{2}^{9} \lambda_{3}^{6}+4696692\, \lambda_{1}^{6} \lambda_{2}^{12} \lambda_{3}^{6}+206448\, \lambda_{1}^{3} \lambda_{2}^{15} \lambda_{3}^{6}+506\, \lambda_{2}^{18} \lambda_{3}^{6}-1118260\, \lambda_{1}^{12} \lambda_{2}^{3} \lambda_{3}^{9}-12300860\, \lambda_{1}^{9} \lambda_{2}^{6} \lambda_{3}^{9}-12300860\, \lambda_{1}^{6} \lambda_{2}^{9} \lambda_{3}^{9}-1118260\, \lambda_{1}^{3} \lambda_{2}^{12} \lambda_{3}^{9}+10166\, \lambda_{1}^{12} \lambda_{3}^{12}+1118260\, \lambda_{1}^{9} \lambda_{2}^{3} \lambda_{3}^{12}+4696692\, \lambda_{1}^{6} \lambda_{2}^{6} \lambda_{3}^{12}+1118260\, \lambda_{1}^{3} \lambda_{2}^{9} \lambda_{3}^{12}+10166\, \lambda_{2}^{12} \lambda_{3}^{12}-206448\, \lambda_{1}^{6} \lambda_{2}^{3} \lambda_{3}^{15}-206448\, \lambda_{1}^{3} \lambda_{2}^{6} \lambda_{3}^{15}+506\, \lambda_{1}^{6} \lambda_{3}^{18}+5060\, \lambda_{1}^{3} \lambda_{2}^{3} \lambda_{3}^{18}+506\, \lambda_{2}^{6} \lambda_{3}^{18}+111\, \lambda_{3}^{24}-5060\, \lambda_{1}^{18} \lambda_{2}^{3} \lambda_{4}^{3}+206448\, \lambda_{1}^{15} \lambda_{2}^{6} \lambda_{4}^{3}-1118260\, \lambda_{1}^{12} \lambda_{2}^{9} \lambda_{4}^{3}+1118260\, \lambda_{1}^{9} \lambda_{2}^{12} \lambda_{4}^{3}-206448\, \lambda_{1}^{6} \lambda_{2}^{15} \lambda_{4}^{3}+5060\, \lambda_{1}^{3} \lambda_{2}^{18} 
\lambda_{4}^{3}-5060\, \lambda_{1}^{18} \lambda_{3}^{3} \lambda_{4}^{3}+5060\, \lambda_{2}^{18} \lambda_{3}^{3} \lambda_{4}^{3}-206448\, \lambda_{1}^{15} \lambda_{3}^{6} \lambda_{4}^{3}+206448\, \lambda_{2}^{15} \lambda_{3}^{6} \lambda_{4}^{3}-1118260\, \lambda_{1}^{12} \lambda_{3}^{9} \lambda_{4}^{3}+1118260\, \lambda_{2}^{12} \lambda_{3}^{9} \lambda_{4}^{3}-1118260\, \lambda_{1}^{9} \lambda_{3}^{12} \lambda_{4}^{3}+1118260\, \lambda_{2}^{9} \lambda_{3}^{12} \lambda_{4}^{3}-206448\, \lambda_{1}^{6} \lambda_{3}^{15} \lambda_{4}^{3}+206448\, \lambda_{2}^{6} \lambda_{3}^{15} \lambda_{4}^{3}-5060\, \lambda_{1}^{3} \lambda_{3}^{18} \lambda_{4}^{3}+5060\, \lambda_{2}^{3} \lambda_{3}^{18} \lambda_{4}^{3}+506\, \lambda_{1}^{18} \lambda_{4}^{6}-206448\, \lambda_{1}^{15} \lambda_{2}^{3} \lambda_{4}^{6}+4696692\, \lambda_{1}^{12} \lambda_{2}^{6} \lambda_{4}^{6}-12300860\, \lambda_{1}^{9} \lambda_{2}^{9} \lambda_{4}^{6}+4696692\, \lambda_{1}^{6} \lambda_{2}^{12} \lambda_{4}^{6}-206448\, \lambda_{1}^{3} \lambda_{2}^{15} \lambda_{4}^{6}+506\, \lambda_{2}^{18} \lambda_{4}^{6}+206448\, \lambda_{1}^{15} \lambda_{3}^{3} \lambda_{4}^{6}+206448\, \lambda_{2}^{15} \lambda_{3}^{3} \lambda_{4}^{6}+4696692\, \lambda_{1}^{12} \lambda_{3}^{6} \lambda_{4}^{6}+4696692\, \lambda_{2}^{12} \lambda_{3}^{6} \lambda_{4}^{6}+12300860\, \lambda_{1}^{9} \lambda_{3}^{9} \lambda_{4}^{6}+12300860\, \lambda_{2}^{9} \lambda_{3}^{9} \lambda_{4}^{6}+4696692\, \lambda_{1}^{6} \lambda_{3}^{12} \lambda_{4}^{6}+4696692\, \lambda_{2}^{6} \lambda_{3}^{12} \lambda_{4}^{6}+206448\, \lambda_{1}^{3} \lambda_{3}^{15} \lambda_{4}^{6}+206448\, \lambda_{2}^{3} \lambda_{3}^{15} \lambda_{4}^{6}+506\, \lambda_{3}^{18} \lambda_{4}^{6}-1118260\, \lambda_{1}^{12} \lambda_{2}^{3} \lambda_{4}^{9}+12300860\, \lambda_{1}^{9} \lambda_{2}^{6} \lambda_{4}^{9}-12300860\, \lambda_{1}^{6} \lambda_{2}^{9} \lambda_{4}^{9}+1118260\, \lambda_{1}^{3} \lambda_{2}^{12} \lambda_{4}^{9}-1118260\, \lambda_{1}^{12} \lambda_{3}^{3} 
\lambda_{4}^{9}+1118260\, \lambda_{2}^{12} \lambda_{3}^{3} \lambda_{4}^{9}-12300860\, \lambda_{1}^{9} \lambda_{3}^{6} \lambda_{4}^{9}+12300860\, \lambda_{2}^{9} \lambda_{3}^{6} \lambda_{4}^{9}-12300860\, \lambda_{1}^{6} \lambda_{3}^{9} \lambda_{4}^{9}+12300860\, \lambda_{2}^{6} \lambda_{3}^{9} \lambda_{4}^{9}-1118260\, \lambda_{1}^{3} \lambda_{3}^{12} \lambda_{4}^{9}+1118260\, \lambda_{2}^{3} \lambda_{3}^{12} \lambda_{4}^{9}+10166\, \lambda_{1}^{12} \lambda_{4}^{12}-1118260\, \lambda_{1}^{9} \lambda_{2}^{3} \lambda_{4}^{12}+4696692\, \lambda_{1}^{6} \lambda_{2}^{6} \lambda_{4}^{12}-1118260\, \lambda_{1}^{3} \lambda_{2}^{9} \lambda_{4}^{12}+10166\, \lambda_{2}^{12} \lambda_{4}^{12}+1118260\, \lambda_{1}^{9} \lambda_{3}^{3} \lambda_{4}^{12}+1118260\, \lambda_{2}^{9} \lambda_{3}^{3} \lambda_{4}^{12}+4696692\, \lambda_{1}^{6} \lambda_{3}^{6} \lambda_{4}^{12}+4696692\, \lambda_{2}^{6} \lambda_{3}^{6} \lambda_{4}^{12}+1118260\, \lambda_{1}^{3} \lambda_{3}^{9} \lambda_{4}^{12}+1118260\, \lambda_{2}^{3} \lambda_{3}^{9} \lambda_{4}^{12}+10166\, \lambda_{3}^{12} \lambda_{4}^{12}-206448\, \lambda_{1}^{6} \lambda_{2}^{3} \lambda_{4}^{15}+206448\, \lambda_{1}^{3} \lambda_{2}^{6} \lambda_{4}^{15}-206448\, \lambda_{1}^{6} \lambda_{3}^{3} \lambda_{4}^{15}+206448\, \lambda_{2}^{6} \lambda_{3}^{3} \lambda_{4}^{15}-206448\, \lambda_{1}^{3} \lambda_{3}^{6} \lambda_{4}^{15}+206448\, \lambda_{2}^{3} \lambda_{3}^{6} \lambda_{4}^{15}+506\, \lambda_{1}^{6} \lambda_{4}^{18}-5060\, \lambda_{1}^{3} \lambda_{2}^{3} \lambda_{4}^{18}+506\, \lambda_{2}^{6} \lambda_{4}^{18}+5060\, \lambda_{1}^{3} \lambda_{3}^{3} \lambda_{4}^{18}+5060\, \lambda_{2}^{3} \lambda_{3}^{3} \lambda_{4}^{18}+506\, \lambda_{3}^{6} \lambda_{4}^{18}+111\, \lambda_{4}^{24})\left(28728\right)$ $f_{30} = (584\, \lambda_{1}^{30}-435\, \lambda_{1}^{24} \lambda_{2}^{6}-63365\, \lambda_{1}^{18} \lambda_{2}^{12}-63365\, \lambda_{1}^{12} \lambda_{2}^{18}-435\, \lambda_{1}^{6} \lambda_{2}^{24}+584\, \lambda_{2}^{30}+4350\, 
\lambda_{1}^{24} \lambda_{2}^{3} \lambda_{3}^{3}+440220\, \lambda_{1}^{21} \lambda_{2}^{6} \lambda_{3}^{3}+6970150\, \lambda_{1}^{18} \lambda_{2}^{9} \lambda_{3}^{3}+25852920\, \lambda_{1}^{15} \lambda_{2}^{12} \lambda_{3}^{3}+25852920\, \lambda_{1}^{12} \lambda_{2}^{15} \lambda_{3}^{3}+6970150\, \lambda_{1}^{9} \lambda_{2}^{18} \lambda_{3}^{3}+440220\, \lambda_{1}^{6} \lambda_{2}^{21} \lambda_{3}^{3}+4350\, \lambda_{1}^{3} \lambda_{2}^{24} \lambda_{3}^{3}-435\, \lambda_{1}^{24} \lambda_{3}^{6}-440220\, \lambda_{1}^{21} \lambda_{2}^{3} \lambda_{3}^{6}-29274630\, \lambda_{1}^{18} \lambda_{2}^{6} \lambda_{3}^{6}-284382120\, \lambda_{1}^{15} \lambda_{2}^{9} \lambda_{3}^{6}-588153930\, \lambda_{1}^{12} \lambda_{2}^{12} \lambda_{3}^{6}-284382120\, \lambda_{1}^{9} \lambda_{2}^{15} \lambda_{3}^{6}-29274630\, \lambda_{1}^{6} \lambda_{2}^{18} \lambda_{3}^{6}-440220\, \lambda_{1}^{3} \lambda_{2}^{21} \lambda_{3}^{6}-435\, \lambda_{2}^{24} \lambda_{3}^{6}+6970150\, \lambda_{1}^{18} \lambda_{2}^{3} \lambda_{3}^{9}+284382120\, \lambda_{1}^{15} \lambda_{2}^{6} \lambda_{3}^{9}+1540403150\, \lambda_{1}^{12} \lambda_{2}^{9} \lambda_{3}^{9}+1540403150\, \lambda_{1}^{9} \lambda_{2}^{12} \lambda_{3}^{9}+284382120\, \lambda_{1}^{6} \lambda_{2}^{15} \lambda_{3}^{9}+6970150\, \lambda_{1}^{3} \lambda_{2}^{18} \lambda_{3}^{9}-63365\, \lambda_{1}^{18} \lambda_{3}^{12}-25852920\, \lambda_{1}^{15} \lambda_{2}^{3} \lambda_{3}^{12}-588153930\, \lambda_{1}^{12} \lambda_{2}^{6} \lambda_{3}^{12}-1540403150\, \lambda_{1}^{9} \lambda_{2}^{9} \lambda_{3}^{12}-588153930\, \lambda_{1}^{6} \lambda_{2}^{12} \lambda_{3}^{12}-25852920\, \lambda_{1}^{3} \lambda_{2}^{15} \lambda_{3}^{12}-63365\, \lambda_{2}^{18} \lambda_{3}^{12}+25852920\, \lambda_{1}^{12} \lambda_{2}^{3} \lambda_{3}^{15}+284382120\, \lambda_{1}^{9} \lambda_{2}^{6} \lambda_{3}^{15}+284382120\, \lambda_{1}^{6} \lambda_{2}^{9} \lambda_{3}^{15}+25852920\, \lambda_{1}^{3} \lambda_{2}^{12} \lambda_{3}^{15}-63365\, \lambda_{1}^{12} 
\lambda_{3}^{18}-6970150\, \lambda_{1}^{9} \lambda_{2}^{3} \lambda_{3}^{18}-29274630\, \lambda_{1}^{6} \lambda_{2}^{6} \lambda_{3}^{18}-6970150\, \lambda_{1}^{3} \lambda_{2}^{9} \lambda_{3}^{18}-63365\, \lambda_{2}^{12} \lambda_{3}^{18}+440220\, \lambda_{1}^{6} \lambda_{2}^{3} \lambda_{3}^{21}+440220\, \lambda_{1}^{3} \lambda_{2}^{6} \lambda_{3}^{21}-435\, \lambda_{1}^{6} \lambda_{3}^{24}-4350\, \lambda_{1}^{3} \lambda_{2}^{3} \lambda_{3}^{24}-435\, \lambda_{2}^{6} \lambda_{3}^{24}+584\, \lambda_{3}^{30}+4350\, \lambda_{1}^{24} \lambda_{2}^{3} \lambda_{4}^{3}-440220\, \lambda_{1}^{21} \lambda_{2}^{6} \lambda_{4}^{3}+6970150\, \lambda_{1}^{18} \lambda_{2}^{9} \lambda_{4}^{3}-25852920\, \lambda_{1}^{15} \lambda_{2}^{12} \lambda_{4}^{3}+25852920\, \lambda_{1}^{12} \lambda_{2}^{15} \lambda_{4}^{3}-6970150\, \lambda_{1}^{9} \lambda_{2}^{18} \lambda_{4}^{3}+440220\, \lambda_{1}^{6} \lambda_{2}^{21} \lambda_{4}^{3}-4350\, \lambda_{1}^{3} \lambda_{2}^{24} \lambda_{4}^{3}+4350\, \lambda_{1}^{24} \lambda_{3}^{3} \lambda_{4}^{3}-4350\, \lambda_{2}^{24} \lambda_{3}^{3} \lambda_{4}^{3}+440220\, \lambda_{1}^{21} \lambda_{3}^{6} \lambda_{4}^{3}-440220\, \lambda_{2}^{21} \lambda_{3}^{6} \lambda_{4}^{3}+6970150\, \lambda_{1}^{18} \lambda_{3}^{9} \lambda_{4}^{3}-6970150\, \lambda_{2}^{18} \lambda_{3}^{9} \lambda_{4}^{3}+25852920\, \lambda_{1}^{15} \lambda_{3}^{12} \lambda_{4}^{3}-25852920\, \lambda_{2}^{15} \lambda_{3}^{12} \lambda_{4}^{3}+25852920\, \lambda_{1}^{12} \lambda_{3}^{15} \lambda_{4}^{3}-25852920\, \lambda_{2}^{12} \lambda_{3}^{15} \lambda_{4}^{3}+6970150\, \lambda_{1}^{9} \lambda_{3}^{18} \lambda_{4}^{3}-6970150\, \lambda_{2}^{9} \lambda_{3}^{18} \lambda_{4}^{3}+440220\, \lambda_{1}^{6} \lambda_{3}^{21} \lambda_{4}^{3}-440220\, \lambda_{2}^{6} \lambda_{3}^{21} \lambda_{4}^{3}+4350\, \lambda_{1}^{3} \lambda_{3}^{24} \lambda_{4}^{3}-4350\, \lambda_{2}^{3} \lambda_{3}^{24} \lambda_{4}^{3}-435\, \lambda_{1}^{24} \lambda_{4}^{6}+440220\, \lambda_{1}^{21} \lambda_{2}^{3} 
\lambda_{4}^{6}-29274630\, \lambda_{1}^{18} \lambda_{2}^{6} \lambda_{4}^{6}+284382120\, \lambda_{1}^{15} \lambda_{2}^{9} \lambda_{4}^{6}-588153930\, \lambda_{1}^{12} \lambda_{2}^{12} \lambda_{4}^{6}+284382120\, \lambda_{1}^{9} \lambda_{2}^{15} \lambda_{4}^{6}-29274630\, \lambda_{1}^{6} \lambda_{2}^{18} \lambda_{4}^{6}+440220\, \lambda_{1}^{3} \lambda_{2}^{21} \lambda_{4}^{6}-435\, \lambda_{2}^{24} \lambda_{4}^{6}-440220\, \lambda_{1}^{21} \lambda_{3}^{3} \lambda_{4}^{6}-440220\, \lambda_{2}^{21} \lambda_{3}^{3} \lambda_{4}^{6}-29274630\, \lambda_{1}^{18} \lambda_{3}^{6} \lambda_{4}^{6}-29274630\, \lambda_{2}^{18} \lambda_{3}^{6} \lambda_{4}^{6}-284382120\, \lambda_{1}^{15} \lambda_{3}^{9} \lambda_{4}^{6}-284382120\, \lambda_{2}^{15} \lambda_{3}^{9} \lambda_{4}^{6}-588153930\, \lambda_{1}^{12} \lambda_{3}^{12} \lambda_{4}^{6}-588153930\, \lambda_{2}^{12} \lambda_{3}^{12} \lambda_{4}^{6}-284382120\, \lambda_{1}^{9} \lambda_{3}^{15} \lambda_{4}^{6}-284382120\, \lambda_{2}^{9} \lambda_{3}^{15} \lambda_{4}^{6}-29274630\, \lambda_{1}^{6} \lambda_{3}^{18} \lambda_{4}^{6}-29274630\, \lambda_{2}^{6} \lambda_{3}^{18} \lambda_{4}^{6}-440220\, \lambda_{1}^{3} \lambda_{3}^{21} \lambda_{4}^{6}-440220\, \lambda_{2}^{3} \lambda_{3}^{21} \lambda_{4}^{6}-435\, \lambda_{3}^{24} \lambda_{4}^{6}+6970150\, \lambda_{1}^{18} \lambda_{2}^{3} \lambda_{4}^{9}-284382120\, \lambda_{1}^{15} \lambda_{2}^{6} \lambda_{4}^{9}+1540403150\, \lambda_{1}^{12} \lambda_{2}^{9} \lambda_{4}^{9}-1540403150\, \lambda_{1}^{9} \lambda_{2}^{12} \lambda_{4}^{9}+284382120\, \lambda_{1}^{6} \lambda_{2}^{15} \lambda_{4}^{9}-6970150\, \lambda_{1}^{3} \lambda_{2}^{18} \lambda_{4}^{9}+6970150\, \lambda_{1}^{18} \lambda_{3}^{3} \lambda_{4}^{9}-6970150\, \lambda_{2}^{18} \lambda_{3}^{3} \lambda_{4}^{9}+284382120\, \lambda_{1}^{15} \lambda_{3}^{6} \lambda_{4}^{9}-284382120\, \lambda_{2}^{15} \lambda_{3}^{6} \lambda_{4}^{9}+1540403150\, \lambda_{1}^{12} \lambda_{3}^{9} \lambda_{4}^{9}-1540403150\, \lambda_{2}^{12} 
\lambda_{3}^{9} \lambda_{4}^{9}+1540403150\, \lambda_{1}^{9} \lambda_{3}^{12} \lambda_{4}^{9}-1540403150\, \lambda_{2}^{9} \lambda_{3}^{12} \lambda_{4}^{9}+284382120\, \lambda_{1}^{6} \lambda_{3}^{15} \lambda_{4}^{9}-284382120\, \lambda_{2}^{6} \lambda_{3}^{15} \lambda_{4}^{9}+6970150\, \lambda_{1}^{3} \lambda_{3}^{18} \lambda_{4}^{9}-6970150\, \lambda_{2}^{3} \lambda_{3}^{18} \lambda_{4}^{9}-63365\, \lambda_{1}^{18} \lambda_{4}^{12}+25852920\, \lambda_{1}^{15} \lambda_{2}^{3} \lambda_{4}^{12}-588153930\, \lambda_{1}^{12} \lambda_{2}^{6} \lambda_{4}^{12}+1540403150\, \lambda_{1}^{9} \lambda_{2}^{9} \lambda_{4}^{12}-588153930\, \lambda_{1}^{6} \lambda_{2}^{12} \lambda_{4}^{12}+25852920\, \lambda_{1}^{3} \lambda_{2}^{15} \lambda_{4}^{12}-63365\, \lambda_{2}^{18} \lambda_{4}^{12}-25852920\, \lambda_{1}^{15} \lambda_{3}^{3} \lambda_{4}^{12}-25852920\, \lambda_{2}^{15} \lambda_{3}^{3} \lambda_{4}^{12}-588153930\, \lambda_{1}^{12} \lambda_{3}^{6} \lambda_{4}^{12}-588153930\, \lambda_{2}^{12} \lambda_{3}^{6} \lambda_{4}^{12}-1540403150\, \lambda_{1}^{9} \lambda_{3}^{9} \lambda_{4}^{12}-1540403150\, \lambda_{2}^{9} \lambda_{3}^{9} \lambda_{4}^{12}-588153930\, \lambda_{1}^{6} \lambda_{3}^{12} \lambda_{4}^{12}-588153930\, \lambda_{2}^{6} \lambda_{3}^{12} \lambda_{4}^{12}-25852920\, \lambda_{1}^{3} \lambda_{3}^{15} \lambda_{4}^{12}-25852920\, \lambda_{2}^{3} \lambda_{3}^{15} \lambda_{4}^{12}-63365\, \lambda_{3}^{18} \lambda_{4}^{12}+25852920\, \lambda_{1}^{12} \lambda_{2}^{3} \lambda_{4}^{15}-284382120\, \lambda_{1}^{9} \lambda_{2}^{6} \lambda_{4}^{15}+284382120\, \lambda_{1}^{6} \lambda_{2}^{9} \lambda_{4}^{15}-25852920\, \lambda_{1}^{3} \lambda_{2}^{12} \lambda_{4}^{15}+25852920\, \lambda_{1}^{12} \lambda_{3}^{3} \lambda_{4}^{15}-25852920\, \lambda_{2}^{12} \lambda_{3}^{3} \lambda_{4}^{15}+284382120\, \lambda_{1}^{9} \lambda_{3}^{6} \lambda_{4}^{15}-284382120\, \lambda_{2}^{9} \lambda_{3}^{6} \lambda_{4}^{15}+284382120\, \lambda_{1}^{6} \lambda_{3}^{9} 
\lambda_{4}^{15}-284382120\, \lambda_{2}^{6} \lambda_{3}^{9} \lambda_{4}^{15}+25852920\, \lambda_{1}^{3} \lambda_{3}^{12} \lambda_{4}^{15}-25852920\, \lambda_{2}^{3} \lambda_{3}^{12} \lambda_{4}^{15}-63365\, \lambda_{1}^{12} \lambda_{4}^{18}+6970150\, \lambda_{1}^{9} \lambda_{2}^{3} \lambda_{4}^{18}-29274630\, \lambda_{1}^{6} \lambda_{2}^{6} \lambda_{4}^{18}+6970150\, \lambda_{1}^{3} \lambda_{2}^{9} \lambda_{4}^{18}-63365\, \lambda_{2}^{12} \lambda_{4}^{18}-6970150\, \lambda_{1}^{9} \lambda_{3}^{3} \lambda_{4}^{18}-6970150\, \lambda_{2}^{9} \lambda_{3}^{3} \lambda_{4}^{18}-29274630\, \lambda_{1}^{6} \lambda_{3}^{6} \lambda_{4}^{18}-29274630\, \lambda_{2}^{6} \lambda_{3}^{6} \lambda_{4}^{18}-6970150\, \lambda_{1}^{3} \lambda_{3}^{9} \lambda_{4}^{18}-6970150\, \lambda_{2}^{3} \lambda_{3}^{9} \lambda_{4}^{18}-63365\, \lambda_{3}^{12} \lambda_{4}^{18}+440220\, \lambda_{1}^{6} \lambda_{2}^{3} \lambda_{4}^{21}-440220\, \lambda_{1}^{3} \lambda_{2}^{6} \lambda_{4}^{21}+440220\, \lambda_{1}^{6} \lambda_{3}^{3} \lambda_{4}^{21}-440220\, \lambda_{2}^{6} \lambda_{3}^{3} \lambda_{4}^{21}+440220\, \lambda_{1}^{3} \lambda_{3}^{6} \lambda_{4}^{21}-440220\, \lambda_{2}^{3} \lambda_{3}^{6} \lambda_{4}^{21}-435\, \lambda_{1}^{6} \lambda_{4}^{24}+4350\, \lambda_{1}^{3} \lambda_{2}^{3} \lambda_{4}^{24}-435\, \lambda_{2}^{6} \lambda_{4}^{24}-4350\, \lambda_{1}^{3} \lambda_{3}^{3} \lambda_{4}^{24}-4350\, \lambda_{2}^{3} \lambda_{3}^{3} \lambda_{4}^{24}-435\, \lambda_{3}^{6} \lambda_{4}^{24}+584\, \lambda_{4}^{30})\left(-147420\right)$ \subsection*{Family two} Basic semi-simple form $p = \lambda_1p_1 + \lambda_2p_2 - \lambda_3p_3$. 
We randomize the $\lambda$'s and add the different nilpotent parts $e$: Here are the corresponding adjoint rank profiles: \textnumero 3 $e=0$: $\left(\begin{smallmatrix} 0&0&78&78&0&0&0&78&0&234\\ 0&78&0&0&0&78&78&0&0&234\\ \end{smallmatrix}\right)$ \textnumero 2 $e=e_1 e_6e_8$: $\left(\begin{smallmatrix} 0&0&79&79&0&0&0&80&0&238\\ 0&78&0&0&0&79&78&0&0&235\\ 78&0&0&0&78&0&0&0&78&234\\ \end{smallmatrix}\right)$ \textnumero 1 $e=e_1 e_6e_8 + e_2e_4e_9$: $\left(\begin{smallmatrix} 0&0&80&80&0&0&0&80&0&240\\ 0&79&0&0&0&80&79&0&0&238\\ 78&0&0&0&79&0&0&0&79&236\\ 0&0&78&78&0&0&0&79&0&235\\ 0&78&0&0&0&78&78&0&0&234\\ \end{smallmatrix}\right)$ \subsection*{Family three} Basic semi-simple form $p = \lambda_1p_1 + \lambda_2p_2 $. \textnumero 1 $e = e_{2}e_{6}e_{7}+e_{1}e_{6}e_{8}+e_{2}e_{4}e_{9}+e_{1}e_{5}e_{9}$: $\left(\begin{smallmatrix} 0&0&80&80&0&0&0&80&0&240\\ 0&78&0&0&0&80&78&0&0&236\\ 76&0&0&0&78&0&0&0&78&232\\ 0&0&76&76&0&0&0&78&0&230\\ 0&76&0&0&0&76&76&0&0&228\\ \end{smallmatrix}\right)$ \textnumero 2 $e = e_{1}e_{6}e_{8}+e_{2}e_{4}e_{9}+e_{1}e_{5}e_{9}$: $\left(\begin{smallmatrix} 0&0&79&79&0&0&0&80&0&238\\ 0&77&0&0&0&79&77&0&0&233\\ 76&0&0&0&77&0&0&0&77&230\\ 0&0&76&76&0&0&0&77&0&229\\ 0&76&0&0&0&76&76&0&0&228\\ \end{smallmatrix}\right)$ \textnumero 3 $e = e_{2}e_{6}e_{7}+e_{1}e_{6}e_{8}+e_{1}e_{5}e_{9}$: $\left(\begin{smallmatrix} 0&0&79&79&0&0&0&80&0&238\\ 0&77&0&0&0&79&77&0&0&233\\ 76&0&0&0&77&0&0&0&77&230\\ 0&0&76&76&0&0&0&77&0&229\\ 0&76&0&0&0&76&76&0&0&228\\ \end{smallmatrix}\right)$ \textnumero 4 $e =e_{1}e_{6}e_{8}+e_{1}e_{5}e_{9}$: $\left(\begin{smallmatrix} 0&0&78&78&0&0&0&80&0&236\\ 0&76&0&0&0&78&76&0&0&230\\ 76&0&0&0&76&0&0&0&76&228\\ \end{smallmatrix}\right)$ \textnumero 5 $e = e_{2}e_{6}e_{7}+e_{1}e_{5}e_{9}$: $\left(\begin{smallmatrix} 0&0&78&78&0&0&0&78&0&234\\ 0&77&0&0&0&78&77&0&0&232\\ 76&0&0&0&77&0&0&0&77&230\\ 0&0&76&76&0&0&0&77&0&229\\ 0&76&0&0&0&76&76&0&0&228\\ \end{smallmatrix}\right)$ \textnumero 6 $e = e_{1}e_{6}e_{8}+e_{2}e_{4}e_{9}$: 
$\left(\begin{smallmatrix} 0&0&78&78&0&0&0&78&0&234\\ 0&77&0&0&0&78&77&0&0&232\\ 76&0&0&0&77&0&0&0&77&230\\ 0&0&76&76&0&0&0&77&0&229\\ 0&76&0&0&0&76&76&0&0&228\\ \end{smallmatrix}\right)$ \textnumero 7 $e = e_{1}e_{5}e_{9}$: $\left(\begin{smallmatrix} 0&0&77&77&0&0&0&78&0&232\\ 0&76&0&0&0&77&76&0&0&229\\ 76&0&0&0&76&0&0&0&76&228\\ \end{smallmatrix}\right)$ \textnumero 8 $e = e_{1}e_{6}e_{8}$: $\left(\begin{smallmatrix} 0&0&77&77&0&0&0&78&0&232\\ 0&76&0&0&0&77&76&0&0&229\\ 76&0&0&0&76&0&0&0&76&228\\ \end{smallmatrix}\right)$ $e = 0$: $\left(\begin{smallmatrix} 0&0&76&76&0&0&0&76&0&228\\ 0&76&0&0&0&76&76&0&0&228\\ \end{smallmatrix}\right)$ \subsection*{Family four} $e_{2}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&80&80&0&0&0&80&0&240\\ 0&78&0&0&0&80&78&0&0&236\\ 76&0&0&0&78&0&0&0&78&232\\ 0&0&76&76&0&0&0&77&0&229\\ 0&75&0&0&0&76&75&0&0&226\\ 73&0&0&0&75&0&0&0&75&223\\ 0&0&73&73&0&0&0&74&0&220\\ 0&73&0&0&0&73&73&0&0&219\\ 72&0&0&0&73&0&0&0&73&218\\ 0&0&72&72&0&0&0&73&0&217\\ 0&72&0&0&0&72&72&0&0&216\\ \end{smallmatrix}\right)$ $e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&79&79&0&0&0&80&0&238\\ 0&77&0&0&0&78&77&0&0&232\\ 74&0&0&0&76&0&0&0&76&226\\ 0&0&74&74&0&0&0&75&0&223\\ 0&73&0&0&0&74&73&0&0&220\\ 72&0&0&0&73&0&0&0&73&218\\ 0&0&72&72&0&0&0&72&0&216\\ \end{smallmatrix}\right)$ $e_{0}e_{3}e_{6}+e_{1}e_{4}e_{7}$ $\left(\begin{smallmatrix} 0&0&78&78&0&0&0&78&0&234\\ 0&75&0&0&0&76&75&0&0&226\\ 72&0&0&0&73&0&0&0&73&218\\ 0&0&72&72&0&0&0&73&0&217\\ 0&72&0&0&0&72&72&0&0&216\\ \end{smallmatrix}\right)$ $e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{0}e_{3}e_{7}$ $\left(\begin{smallmatrix} 0&0&77&77&0&0&0&78&0&232\\ 0&74&0&0&0&75&74&0&0&223\\ 72&0&0&0&73&0&0&0&73&218\\ 0&0&72&72&0&0&0&72&0&216\\ \end{smallmatrix}\right)$ $e_{0}e_{3}e_{6}$ $\left(\begin{smallmatrix} 0&0&75&75&0&0&0&76&0&226\\ 0&72&0&0&0&73&72&0&0&217\\ 72&0&0&0&72&0&0&0&72&216\\ 
\end{smallmatrix}\right)$ $e=0$: $\left(\begin{smallmatrix} 0&0&72&72&0&0&0&72&0&216\\ 0&72&0&0&0&72&72&0&0&216\\ \end{smallmatrix}\right)$ \subsection*{Family five} $e_{0}e_{1}e_{2}+e_{3}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&80&80&0&0&0&80&0&240\\ 0&77&0&0&0&80&77&0&0&234\\ 74&0&0&0&77&0&0&0&77&228\\ 0&0&74&74&0&0&0&76&0&224\\ 0&73&0&0&0&74&73&0&0&220\\ 71&0&0&0&73&0&0&0&73&217\\ 0&0&71&71&0&0&0&72&0&214\\ 0&71&0&0&0&71&71&0&0&213\\ 70&0&0&0&71&0&0&0&71&212\\ 0&0&70&70&0&0&0&71&0&211\\ 0&70&0&0&0&70&70&0&0&210\\ \end{smallmatrix}\right)$ $e_{0}e_{1}e_{2}+e_{3}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&79&79&0&0&0&80&0&238\\ 0&76&0&0&0&78&76&0&0&230\\ 72&0&0&0&75&0&0&0&75&222\\ 0&0&72&72&0&0&0&74&0&218\\ 0&71&0&0&0&72&71&0&0&214\\ 70&0&0&0&71&0&0&0&71&212\\ 0&0&70&70&0&0&0&70&0&210\\ \end{smallmatrix}\right)$ $e_{0}e_{1}e_{2}+e_{2}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&79&79&0&0&0&80&0&238\\ 0&76&0&0&0&79&76&0&0&231\\ 74&0&0&0&76&0&0&0&76&226\\ 0&0&74&74&0&0&0&75&0&223\\ 0&73&0&0&0&74&73&0&0&220\\ 71&0&0&0&73&0&0&0&73&217\\ 0&0&71&71&0&0&0&72&0&214\\ 0&71&0&0&0&71&71&0&0&213\\ 70&0&0&0&71&0&0&0&71&212\\ 0&0&70&70&0&0&0&71&0&211\\ 0&70&0&0&0&70&70&0&0&210\\ \end{smallmatrix}\right)$ $e_{0}e_{1}e_{2}+e_{3}e_{4}e_{5}+e_{0}e_{3}e_{6}+e_{1}e_{4}e_{7}$ $\left(\begin{smallmatrix} 0&0&78&78&0&0&0&78&0&234\\ 0&74&0&0&0&76&74&0&0&224\\ 70&0&0&0&72&0&0&0&72&214\\ 0&0&70&70&0&0&0&72&0&212\\ 0&70&0&0&0&70&70&0&0&210\\ \end{smallmatrix}\right)$ $e_{0}e_{1}e_{2}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&78&78&0&0&0&80&0&236\\ 0&75&0&0&0&77&75&0&0&227\\ 72&0&0&0&74&0&0&0&74&220\\ 0&0&72&72&0&0&0&73&0&217\\ 0&71&0&0&0&72&71&0&0&214\\ 70&0&0&0&71&0&0&0&71&212\\ 
0&0&70&70&0&0&0&70&0&210\\ \end{smallmatrix}\right)$ $e_{2}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&78&78&0&0&0&78&0&234\\ 0&76&0&0&0&78&76&0&0&230\\ 74&0&0&0&76&0&0&0&76&226\\ 0&0&74&74&0&0&0&75&0&223\\ 0&73&0&0&0&74&73&0&0&220\\ 71&0&0&0&73&0&0&0&73&217\\ 0&0&71&71&0&0&0&72&0&214\\ 0&71&0&0&0&71&71&0&0&213\\ 70&0&0&0&71&0&0&0&71&212\\ 0&0&70&70&0&0&0&71&0&211\\ 0&70&0&0&0&70&70&0&0&210\\ \end{smallmatrix}\right)$ $e_{0}e_{1}e_{2}+e_{3}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{0}e_{3}e_{7}$ $\left(\begin{smallmatrix} 0&0&77&77&0&0&0&78&0&232\\ 0&73&0&0&0&75&73&0&0&221\\ 70&0&0&0&72&0&0&0&72&214\\ 0&0&70&70&0&0&0&71&0&211\\ 0&70&0&0&0&70&70&0&0&210\\ \end{smallmatrix}\right)$ $e_{0}e_{1}e_{2}+e_{0}e_{3}e_{6}+e_{1}e_{4}e_{7}$ $\left(\begin{smallmatrix} 0&0&77&77&0&0&0&78&0&232\\ 0&73&0&0&0&75&73&0&0&221\\ 70&0&0&0&71&0&0&0&71&212\\ 0&0&70&70&0&0&0&71&0&211\\ 0&70&0&0&0&70&70&0&0&210\\ \end{smallmatrix}\right)$ $e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&77&77&0&0&0&78&0&232\\ 0&75&0&0&0&76&75&0&0&226\\ 72&0&0&0&74&0&0&0&74&220\\ 0&0&72&72&0&0&0&73&0&217\\ 0&71&0&0&0&72&71&0&0&214\\ 70&0&0&0&71&0&0&0&71&212\\ 0&0&70&70&0&0&0&70&0&210\\ \end{smallmatrix}\right)$ $e_{0}e_{1}e_{2}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{0}e_{3}e_{7}$ $\left(\begin{smallmatrix} 0&0&76&76&0&0&0&78&0&230\\ 0&72&0&0&0&74&72&0&0&218\\ 70&0&0&0&71&0&0&0&71&212\\ 0&0&70&70&0&0&0&70&0&210\\ \end{smallmatrix}\right)$ $e_{0}e_{3}e_{6}+e_{1}e_{4}e_{7}$ $\left(\begin{smallmatrix} 0&0&76&76&0&0&0&76&0&228\\ 0&73&0&0&0&74&73&0&0&220\\ 70&0&0&0&71&0&0&0&71&212\\ 0&0&70&70&0&0&0&71&0&211\\ 0&70&0&0&0&70&70&0&0&210\\ \end{smallmatrix}\right)$ $e_{0}e_{1}e_{2}+e_{3}e_{4}e_{5}+e_{0}e_{3}e_{6}$ $\left(\begin{smallmatrix} 0&0&75&75&0&0&0&76&0&226\\ 0&71&0&0&0&73&71&0&0&215\\ 70&0&0&0&71&0&0&0&71&212\\ 0&0&70&70&0&0&0&71&0&211\\ 0&70&0&0&0&70&70&0&0&210\\ 
\end{smallmatrix}\right)$ $e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{0}e_{3}e_{7}$ $\left(\begin{smallmatrix} 0&0&75&75&0&0&0&76&0&226\\ 0&72&0&0&0&73&72&0&0&217\\ 70&0&0&0&71&0&0&0&71&212\\ 0&0&70&70&0&0&0&70&0&210\\ \end{smallmatrix}\right)$ $e_{0}e_{1}e_{2}+e_{0}e_{3}e_{6}$ $\left(\begin{smallmatrix} 0&0&74&74&0&0&0&76&0&224\\ 0&70&0&0&0&72&70&0&0&212\\ 70&0&0&0&70&0&0&0&70&210\\ \end{smallmatrix}\right)$ $e_{0}e_{3}e_{6}$ $\left(\begin{smallmatrix} 0&0&73&73&0&0&0&74&0&220\\ 0&70&0&0&0&71&70&0&0&211\\ 70&0&0&0&70&0&0&0&70&210\\ \end{smallmatrix}\right)$ $e_{0}e_{1}e_{2}+e_{3}e_{4}e_{5}$ $\left(\begin{smallmatrix} 0&0&72&72&0&0&0&72&0&216\\ 0&71&0&0&0&72&71&0&0&214\\ 70&0&0&0&71&0&0&0&71&212\\ 0&0&70&70&0&0&0&71&0&211\\ 0&70&0&0&0&70&70&0&0&210\\ \end{smallmatrix}\right)$ $e_{0}e_{1}e_{2}$ $\left(\begin{smallmatrix} 0&0&71&71&0&0&0&72&0&214\\ 0&70&0&0&0&71&70&0&0&211\\ 70&0&0&0&70&0&0&0&70&210\\ \end{smallmatrix}\right)$ $e=0$: $\left(\begin{smallmatrix} 0&0&70&70&0&0&0&70&0&210\\ 0&70&0&0&0&70&70&0&0&210\\ \end{smallmatrix}\right)$ \subsection*{Family six} $e_{2}e_{3}e_{6}+e_{1}e_{5}e_{6}+e_{1}e_{4}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{3}e_{8}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&80&80&0&0&0&80&0&240\\ 0&77&0&0&0&80&77&0&0&234\\ 74&0&0&0&77&0&0&0&77&228\\ 0&0&74&74&0&0&0&75&0&223\\ 0&72&0&0&0&74&72&0&0&218\\ 69&0&0&0&72&0&0&0&72&213\\ 0&0&69&69&0&0&0&70&0&208\\ 0&67&0&0&0&69&67&0&0&203\\ 64&0&0&0&67&0&0&0&67&198\\ 0&0&64&64&0&0&0&66&0&194\\ 0&63&0&0&0&64&63&0&0&190\\ 61&0&0&0&63&0&0&0&63&187\\ 0&0&61&61&0&0&0&62&0&184\\ 0&60&0&0&0&61&60&0&0&181\\ 58&0&0&0&60&0&0&0&60&178\\ 0&0&58&58&0&0&0&60&0&176\\ 0&58&0&0&0&58&58&0&0&174\\ 57&0&0&0&58&0&0&0&58&173\\ 0&0&57&57&0&0&0&58&0&172\\ 0&57&0&0&0&57&57&0&0&171\\ 56&0&0&0&57&0&0&0&57&170\\ 0&0&56&56&0&0&0&57&0&169\\ 0&56&0&0&0&56&56&0&0&168\\ \end{smallmatrix}\right)$ $e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{1}e_{4}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{3}e_{8}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&79&79&0&0&0&80&0&238\\ 
0&76&0&0&0&78&76&0&0&230\\ 72&0&0&0&75&0&0&0&75&222\\ 0&0&71&71&0&0&0&73&0&215\\ 0&69&0&0&0&70&69&0&0&208\\ 66&0&0&0&68&0&0&0&68&202\\ 0&0&65&65&0&0&0&66&0&196\\ 0&63&0&0&0&65&63&0&0&191\\ 60&0&0&0&63&0&0&0&63&186\\ 0&0&60&60&0&0&0&62&0&182\\ 0&59&0&0&0&60&59&0&0&178\\ 58&0&0&0&59&0&0&0&59&176\\ 0&0&58&58&0&0&0&58&0&174\\ 0&57&0&0&0&58&57&0&0&172\\ 56&0&0&0&57&0&0&0&57&170\\ 0&0&56&56&0&0&0&57&0&169\\ 0&56&0&0&0&56&56&0&0&168\\ \end{smallmatrix}\right)$ $e_{2}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{3}e_{8}+e_{1}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&78&78&0&0&0&78&0&234\\ 0&73&0&0&0&76&73&0&0&222\\ 68&0&0&0&71&0&0&0&71&210\\ 0&0&66&66&0&0&0&69&0&201\\ 0&64&0&0&0&64&64&0&0&192\\ 62&0&0&0&62&0&0&0&62&186\\ 0&0&60&60&0&0&0&60&0&180\\ 0&58&0&0&0&60&58&0&0&176\\ 56&0&0&0&58&0&0&0&58&172\\ 0&0&56&56&0&0&0&58&0&170\\ 0&56&0&0&0&56&56&0&0&168\\ \end{smallmatrix}\right)$ $e_{1}e_{4}e_{6}+e_{2}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&78&78&0&0&0&80&0&236\\ 0&75&0&0&0&77&75&0&0&227\\ 70&0&0&0&74&0&0&0&74&218\\ 0&0&69&69&0&0&0&72&0&210\\ 0&67&0&0&0&68&67&0&0&202\\ 64&0&0&0&66&0&0&0&66&196\\ 0&0&63&63&0&0&0&64&0&190\\ 0&61&0&0&0&63&61&0&0&185\\ 58&0&0&0&61&0&0&0&61&180\\ 0&0&58&58&0&0&0&60&0&176\\ 0&57&0&0&0&58&57&0&0&172\\ 57&0&0&0&57&0&0&0&57&171\\ 0&0&57&57&0&0&0&56&0&170\\ 0&56&0&0&0&57&56&0&0&169\\ 56&0&0&0&56&0&0&0&56&168\\ \end{smallmatrix}\right)$ $e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{2}e_{3}e_{7}+e_{0}e_{5}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&77&77&0&0&0&78&0&232\\ 0&72&0&0&0&75&72&0&0&219\\ 68&0&0&0&70&0&0&0&70&208\\ 0&0&66&66&0&0&0&67&0&199\\ 0&63&0&0&0&64&63&0&0&190\\ 60&0&0&0&61&0&0&0&61&182\\ 0&0&58&58&0&0&0&60&0&176\\ 0&57&0&0&0&58&57&0&0&172\\ 56&0&0&0&57&0&0&0&57&170\\ 0&0&56&56&0&0&0&57&0&169\\ 0&56&0&0&0&56&56&0&0&168\\ \end{smallmatrix}\right)$ $e_{2}e_{4}e_{6}+e_{1}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 
0&0&77&77&0&0&0&78&0&232\\ 0&73&0&0&0&75&73&0&0&221\\ 68&0&0&0&71&0&0&0&71&210\\ 0&0&66&66&0&0&0&68&0&200\\ 0&64&0&0&0&64&64&0&0&192\\ 61&0&0&0&62&0&0&0&62&185\\ 0&0&60&60&0&0&0&60&0&180\\ 0&58&0&0&0&60&58&0&0&176\\ 56&0&0&0&58&0&0&0&58&172\\ 0&0&56&56&0&0&0&57&0&169\\ 0&56&0&0&0&56&56&0&0&168\\ \end{smallmatrix}\right)$ $e_{2}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&76&76&0&0&0&78&0&230\\ 0&71&0&0&0&73&71&0&0&215\\ 66&0&0&0&68&0&0&0&68&202\\ 0&0&63&63&0&0&0&65&0&191\\ 0&60&0&0&0&62&60&0&0&182\\ 58&0&0&0&59&0&0&0&59&176\\ 0&0&57&57&0&0&0&58&0&172\\ 0&56&0&0&0&57&56&0&0&169\\ 56&0&0&0&56&0&0&0&56&168\\ \end{smallmatrix}\right)$ $e_{2}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&76&76&0&0&0&76&0&228\\ 0&72&0&0&0&74&72&0&0&218\\ 68&0&0&0&70&0&0&0&70&208\\ 0&0&66&66&0&0&0&67&0&199\\ 0&63&0&0&0&64&63&0&0&190\\ 59&0&0&0&61&0&0&0&61&181\\ 0&0&57&57&0&0&0&58&0&172\\ 0&57&0&0&0&57&57&0&0&171\\ 56&0&0&0&57&0&0&0&57&170\\ 0&0&56&56&0&0&0&57&0&169\\ 0&56&0&0&0&56&56&0&0&168\\ \end{smallmatrix}\right)$ $e_{0}e_{3}e_{6}+e_{0}e_{4}e_{7}+e_{1}e_{4}e_{7}+e_{1}e_{5}e_{8}$ $\left(\begin{smallmatrix} 0&0&76&76&0&0&0&74&0&226\\ 0&67&0&0&0&74&67&0&0&208\\ 60&0&0&0&65&0&0&0&65&190\\ 0&0&58&58&0&0&0&65&0&181\\ 0&58&0&0&0&56&58&0&0&172\\ 58&0&0&0&56&0&0&0&56&170\\ 0&0&56&56&0&0&0&56&0&168\\ \end{smallmatrix}\right)$ $e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&75&75&0&0&0&76&0&226\\ 0&69&0&0&0&70&69&0&0&208\\ 62&0&0&0&64&0&0&0&64&190\\ 0&0&60&60&0&0&0&61&0&181\\ 0&57&0&0&0&58&57&0&0&172\\ 56&0&0&0&57&0&0&0&57&170\\ 0&0&56&56&0&0&0&56&0&168\\ \end{smallmatrix}\right)$ $e_{2}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&75&75&0&0&0&78&0&228\\ 0&71&0&0&0&72&71&0&0&214\\ 64&0&0&0&68&0&0&0&68&200\\ 0&0&63&63&0&0&0&65&0&191\\ 0&60&0&0&0&62&60&0&0&182\\ 
58&0&0&0&59&0&0&0&59&176\\ 0&0&57&57&0&0&0&56&0&170\\ 0&56&0&0&0&57&56&0&0&169\\ 56&0&0&0&56&0&0&0&56&168\\ \end{smallmatrix}\right)$ $e_{1}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&75&75&0&0&0&74&0&224\\ 0&67&0&0&0&72&67&0&0&206\\ 60&0&0&0&65&0&0&0&65&190\\ 0&0&58&58&0&0&0&62&0&178\\ 0&58&0&0&0&56&58&0&0&172\\ 57&0&0&0&56&0&0&0&56&169\\ 0&0&56&56&0&0&0&56&0&168\\ \end{smallmatrix}\right)$ $e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&74&74&0&0&0&74&0&222\\ 0&66&0&0&0&69&66&0&0&201\\ 60&0&0&0&63&0&0&0&63&186\\ 0&0&58&58&0&0&0&60&0&176\\ 0&57&0&0&0&56&57&0&0&170\\ 56&0&0&0&56&0&0&0&56&168\\ \end{smallmatrix}\right)$ $e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{2}e_{3}e_{7}+e_{0}e_{5}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&73&73&0&0&0&76&0&222\\ 0&66&0&0&0&69&66&0&0&201\\ 62&0&0&0&62&0&0&0&62&186\\ 0&0&58&58&0&0&0&60&0&176\\ 0&56&0&0&0&58&56&0&0&170\\ 56&0&0&0&56&0&0&0&56&168\\ \end{smallmatrix}\right)$ $e_{1}e_{3}e_{6}+e_{0}e_{4}e_{7}+e_{0}e_{5}e_{8}$ $\left(\begin{smallmatrix} 0&0&73&73&0&0&0&74&0&220\\ 0&67&0&0&0&71&67&0&0&205\\ 60&0&0&0&65&0&0&0&65&190\\ 0&0&58&58&0&0&0&60&0&176\\ 0&57&0&0&0&56&57&0&0&170\\ 57&0&0&0&56&0&0&0&56&169\\ 0&0&56&56&0&0&0&56&0&168\\ \end{smallmatrix}\right)$ $e_{1}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{0}e_{4}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&72&72&0&0&0&74&0&218\\ 0&64&0&0&0&66&64&0&0&194\\ 58&0&0&0&60&0&0&0&60&178\\ 0&0&57&57&0&0&0&57&0&171\\ 0&56&0&0&0&56&56&0&0&168\\ \end{smallmatrix}\right)$ $e_{0}e_{4}e_{6}+e_{1}e_{5}e_{6}+e_{0}e_{3}e_{7}+e_{1}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&72&72&0&0&0&72&0&216\\ 0&66&0&0&0&68&66&0&0&200\\ 60&0&0&0&62&0&0&0&62&184\\ 0&0&58&58&0&0&0&60&0&176\\ 0&56&0&0&0&56&56&0&0&168\\ \end{smallmatrix}\right)$ $e_{0}e_{3}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{1}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&72&72&0&0&0&74&0&218\\ 0&63&0&0&0&68&63&0&0&194\\ 
60&0&0&0&59&0&0&0&59&178\\ 0&0&56&56&0&0&0&59&0&171\\ 0&56&0&0&0&56&56&0&0&168\\ \end{smallmatrix}\right)$ $e_{0}e_{4}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&71&71&0&0&0&72&0&214\\ 0&63&0&0&0&65&63&0&0&191\\ 58&0&0&0&59&0&0&0&59&176\\ 0&0&56&56&0&0&0&57&0&169\\ 0&56&0&0&0&56&56&0&0&168\\ \end{smallmatrix}\right)$ $e_{0}e_{3}e_{6}+e_{1}e_{4}e_{7}$ $\left(\begin{smallmatrix} 0&0&70&70&0&0&0&70&0&210\\ 0&63&0&0&0&64&63&0&0&190\\ 56&0&0&0&57&0&0&0&57&170\\ 0&0&56&56&0&0&0&57&0&169\\ 0&56&0&0&0&56&56&0&0&168\\ \end{smallmatrix}\right)$ $e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{0}e_{3}e_{7}$ $\left(\begin{smallmatrix} 0&0&69&69&0&0&0&70&0&208\\ 0&60&0&0&0&61&60&0&0&181\\ 56&0&0&0&57&0&0&0&57&170\\ 0&0&56&56&0&0&0&56&0&168\\ \end{smallmatrix}\right)$ $e_{0}e_{3}e_{6}+e_{0}e_{4}e_{7}+e_{0}e_{5}e_{8}$ $\left(\begin{smallmatrix} 0&0&67&67&0&0&0&74&0&208\\ 0&58&0&0&0&65&58&0&0&181\\ 58&0&0&0&56&0&0&0&56&170\\ 0&0&56&56&0&0&0&56&0&168\\ \end{smallmatrix}\right)$ $e_{0}e_{3}e_{6}+e_{0}e_{4}e_{7}$ $\left(\begin{smallmatrix} 0&0&66&66&0&0&0&68&0&200\\ 0&58&0&0&0&60&58&0&0&176\\ 56&0&0&0&56&0&0&0&56&168\\ \end{smallmatrix}\right)$ $e_{0}e_{3}e_{6}$ $\left(\begin{smallmatrix} 0&0&63&63&0&0&0&64&0&190\\ 0&56&0&0&0&57&56&0&0&169\\ 56&0&0&0&56&0&0&0&56&168\\ \end{smallmatrix}\right)$ $e=0$: $\left(\begin{smallmatrix} 0&0&56&56&0&0&0&56&0&168\\ 0&56&0&0&0&56&56&0&0&168\\ \end{smallmatrix}\right)$ Here are the rank profiles of the nilpotent elements: 1: $e_{3}e_{4}e_{5}+e_{2}e_{4}e_{6}+e_{1}e_{5}e_{6}+e_{2}e_{3}e_{7}+e_{1}e_{4}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&80&80&0&0&0&80&0&240\\ 0&76&0&0&0&80&76&0&0&232\\ 72&0&0&0&76&0&0&0&76&224\\ 0&0&72&72&0&0&0&73&0&217\\ 0&69&0&0&0&72&69&0&0&210\\ 65&0&0&0&69&0&0&0&69&203\\ 0&0&65&65&0&0&0&66&0&196\\ 0&62&0&0&0&65&62&0&0&189\\ 58&0&0&0&62&0&0&0&62&182\\ 0&0&58&58&0&0&0&59&0&175\\ 0&55&0&0&0&58&55&0&0&168\\ 51&0&0&0&55&0&0&0&55&161\\ 0&0&51&51&0&0&0&52&0&154\\ 
0&48&0&0&0&51&48&0&0&147\\ 44&0&0&0&48&0&0&0&48&140\\ 0&0&44&44&0&0&0&46&0&134\\ 0&42&0&0&0&44&42&0&0&128\\ 38&0&0&0&42&0&0&0&42&122\\ 0&0&38&38&0&0&0&40&0&116\\ 0&36&0&0&0&38&36&0&0&110\\ 32&0&0&0&36&0&0&0&36&104\\ 0&0&32&32&0&0&0&34&0&98\\ 0&30&0&0&0&32&30&0&0&92\\ 27&0&0&0&30&0&0&0&30&87\\ 0&0&27&27&0&0&0&28&0&82\\ 0&25&0&0&0&27&25&0&0&77\\ 22&0&0&0&25&0&0&0&25&72\\ 0&0&22&22&0&0&0&24&0&68\\ 0&21&0&0&0&22&21&0&0&64\\ 18&0&0&0&21&0&0&0&21&60\\ 0&0&18&18&0&0&0&20&0&56\\ 0&17&0&0&0&18&17&0&0&52\\ 14&0&0&0&17&0&0&0&17&48\\ 0&0&14&14&0&0&0&16&0&44\\ 0&13&0&0&0&14&13&0&0&40\\ 11&0&0&0&13&0&0&0&13&37\\ 0&0&11&11&0&0&0&12&0&34\\ 0&10&0&0&0&11&10&0&0&31\\ 8&0&0&0&10&0&0&0&10&28\\ 0&0&8&8&0&0&0&10&0&26\\ 0&8&0&0&0&8&8&0&0&24\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&6&6&0&0&0&8&0&20\\ 0&6&0&0&0&6&6&0&0&18\\ 4&0&0&0&6&0&0&0&6&16\\ 0&0&4&4&0&0&0&6&0&14\\ 0&4&0&0&0&4&4&0&0&12\\ 3&0&0&0&4&0&0&0&4&11\\ 0&0&3&3&0&0&0&4&0&10\\ 0&3&0&0&0&3&3&0&0&9\\ 2&0&0&0&3&0&0&0&3&8\\ 0&0&2&2&0&0&0&3&0&7\\ 0&2&0&0&0&2&2&0&0&6\\ 1&0&0&0&2&0&0&0&2&5\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 2: $e_{3}e_{4}e_{5}+e_{2}e_{4}e_{6}+e_{1}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{1}e_{4}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&79&79&0&0&0&80&0&238\\ 0&75&0&0&0&78&75&0&0&228\\ 70&0&0&0&74&0&0&0&74&218\\ 0&0&69&69&0&0&0&71&0&209\\ 0&66&0&0&0&68&66&0&0&200\\ 61&0&0&0&65&0&0&0&65&191\\ 0&0&60&60&0&0&0&62&0&182\\ 0&57&0&0&0&59&57&0&0&173\\ 52&0&0&0&56&0&0&0&56&164\\ 0&0&51&51&0&0&0&53&0&155\\ 0&48&0&0&0&50&48&0&0&146\\ 44&0&0&0&47&0&0&0&47&138\\ 0&0&43&43&0&0&0&44&0&130\\ 0&40&0&0&0&42&40&0&0&122\\ 36&0&0&0&39&0&0&0&39&114\\ 0&0&35&35&0&0&0&37&0&107\\ 0&33&0&0&0&34&33&0&0&100\\ 29&0&0&0&32&0&0&0&32&93\\ 0&0&28&28&0&0&0&30&0&86\\ 0&26&0&0&0&28&26&0&0&80\\ 22&0&0&0&26&0&0&0&26&74\\ 0&0&22&22&0&0&0&24&0&68\\ 0&20&0&0&0&22&20&0&0&62\\ 17&0&0&0&20&0&0&0&20&57\\ 0&0&17&17&0&0&0&18&0&52\\ 
0&15&0&0&0&17&15&0&0&47\\ 12&0&0&0&15&0&0&0&15&42\\ 0&0&12&12&0&0&0&14&0&38\\ 0&11&0&0&0&12&11&0&0&34\\ 9&0&0&0&11&0&0&0&11&31\\ 0&0&9&9&0&0&0&10&0&28\\ 0&8&0&0&0&9&8&0&0&25\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&6&6&0&0&0&7&0&19\\ 0&5&0&0&0&6&5&0&0&16\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&4&4&0&0&0&4&0&12\\ 0&3&0&0&0&4&3&0&0&10\\ 2&0&0&0&3&0&0&0&3&8\\ 0&0&2&2&0&0&0&3&0&7\\ 0&2&0&0&0&2&2&0&0&6\\ 1&0&0&0&2&0&0&0&2&5\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 3: $e_{2}e_{3}e_{5}+e_{2}e_{4}e_{6}+e_{1}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{1}e_{4}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&78&78&0&0&0&80&0&236\\ 0&74&0&0&0&76&74&0&0&224\\ 68&0&0&0&72&0&0&0&72&212\\ 0&0&66&66&0&0&0&69&0&201\\ 0&63&0&0&0&64&63&0&0&190\\ 57&0&0&0&61&0&0&0&61&179\\ 0&0&55&55&0&0&0&58&0&168\\ 0&52&0&0&0&54&52&0&0&158\\ 46&0&0&0&51&0&0&0&51&148\\ 0&0&45&45&0&0&0&48&0&138\\ 0&42&0&0&0&44&42&0&0&128\\ 37&0&0&0&41&0&0&0&41&119\\ 0&0&36&36&0&0&0&38&0&110\\ 0&33&0&0&0&35&33&0&0&101\\ 28&0&0&0&32&0&0&0&32&92\\ 0&0&27&27&0&0&0&30&0&84\\ 0&25&0&0&0&26&25&0&0&76\\ 21&0&0&0&24&0&0&0&24&69\\ 0&0&20&20&0&0&0&22&0&62\\ 0&18&0&0&0&20&18&0&0&56\\ 14&0&0&0&18&0&0&0&18&50\\ 0&0&14&14&0&0&0&16&0&44\\ 0&12&0&0&0&14&12&0&0&38\\ 10&0&0&0&12&0&0&0&12&34\\ 0&0&10&10&0&0&0&10&0&30\\ 0&8&0&0&0&10&8&0&0&26\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&6&6&0&0&0&7&0&19\\ 0&5&0&0&0&6&5&0&0&16\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&4&4&0&0&0&4&0&12\\ 0&3&0&0&0&4&3&0&0&10\\ 2&0&0&0&3&0&0&0&3&8\\ 0&0&2&2&0&0&0&2&0&6\\ 0&1&0&0&0&2&1&0&0&4\\ 1&0&0&0&1&0&0&0&1&3\\ 0&0&1&1&0&0&0&0&0&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 4: $e_{3}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{2}e_{4}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&78&78&0&0&0&78&0&234\\ 0&72&0&0&0&76&72&0&0&220\\ 66&0&0&0&70&0&0&0&70&206\\ 0&0&64&64&0&0&0&66&0&194\\ 0&60&0&0&0&62&60&0&0&182\\ 
54&0&0&0&58&0&0&0&58&170\\ 0&0&52&52&0&0&0&54&0&158\\ 0&48&0&0&0&50&48&0&0&146\\ 42&0&0&0&46&0&0&0&46&134\\ 0&0&40&40&0&0&0&43&0&123\\ 0&37&0&0&0&38&37&0&0&112\\ 33&0&0&0&35&0&0&0&35&103\\ 0&0&31&31&0&0&0&32&0&94\\ 0&28&0&0&0&29&28&0&0&85\\ 24&0&0&0&26&0&0&0&26&76\\ 0&0&22&22&0&0&0&24&0&68\\ 0&20&0&0&0&20&20&0&0&60\\ 17&0&0&0&18&0&0&0&18&53\\ 0&0&15&15&0&0&0&16&0&46\\ 0&13&0&0&0&15&13&0&0&41\\ 10&0&0&0&13&0&0&0&13&36\\ 0&0&10&10&0&0&0&11&0&31\\ 0&8&0&0&0&10&8&0&0&26\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&6&6&0&0&0&6&0&18\\ 0&4&0&0&0&6&4&0&0&14\\ 2&0&0&0&4&0&0&0&4&10\\ 0&0&2&2&0&0&0&4&0&8\\ 0&2&0&0&0&2&2&0&0&6\\ 1&0&0&0&2&0&0&0&2&5\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 5: $e_{3}e_{4}e_{5}+e_{0}e_{4}e_{6}+e_{1}e_{5}e_{6}+e_{2}e_{3}e_{7}+e_{1}e_{4}e_{7}+e_{0}e_{5}e_{7}+e_{0}e_{2}e_{8}+e_{1}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&77&77&0&0&0&78&0&232\\ 0&71&0&0&0&74&71&0&0&216\\ 64&0&0&0&68&0&0&0&68&200\\ 0&0&61&61&0&0&0&63&0&185\\ 0&56&0&0&0&58&56&0&0&170\\ 50&0&0&0&53&0&0&0&53&156\\ 0&0&47&47&0&0&0&48&0&142\\ 0&42&0&0&0&45&42&0&0&129\\ 36&0&0&0&40&0&0&0&40&116\\ 0&0&34&34&0&0&0&36&0&104\\ 0&30&0&0&0&32&30&0&0&92\\ 26&0&0&0&28&0&0&0&28&82\\ 0&0&24&24&0&0&0&24&0&72\\ 0&20&0&0&0&22&20&0&0&62\\ 16&0&0&0&18&0&0&0&18&52\\ 0&0&14&14&0&0&0&17&0&45\\ 0&13&0&0&0&12&13&0&0&38\\ 10&0&0&0&11&0&0&0&11&32\\ 0&0&8&8&0&0&0&10&0&26\\ 0&7&0&0&0&8&7&0&0&22\\ 4&0&0&0&7&0&0&0&7&18\\ 0&0&4&4&0&0&0&6&0&14\\ 0&3&0&0&0&4&3&0&0&10\\ 2&0&0&0&3&0&0&0&3&8\\ 0&0&2&2&0&0&0&2&0&6\\ 0&1&0&0&0&2&1&0&0&4\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 6: $e_{3}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&77&77&0&0&0&78&0&232\\ 0&72&0&0&0&75&72&0&0&219\\ 66&0&0&0&70&0&0&0&70&206\\ 0&0&64&64&0&0&0&66&0&194\\ 0&60&0&0&0&62&60&0&0&182\\ 54&0&0&0&58&0&0&0&58&170\\ 
0&0&52&52&0&0&0&54&0&158\\ 0&48&0&0&0&50&48&0&0&146\\ 42&0&0&0&46&0&0&0&46&134\\ 0&0&40&40&0&0&0&42&0&122\\ 0&37&0&0&0&38&37&0&0&112\\ 33&0&0&0&35&0&0&0&35&103\\ 0&0&31&31&0&0&0&32&0&94\\ 0&28&0&0&0&29&28&0&0&85\\ 24&0&0&0&26&0&0&0&26&76\\ 0&0&22&22&0&0&0&24&0&68\\ 0&20&0&0&0&20&20&0&0&60\\ 16&0&0&0&18&0&0&0&18&52\\ 0&0&15&15&0&0&0&16&0&46\\ 0&13&0&0&0&15&13&0&0&41\\ 10&0&0&0&13&0&0&0&13&36\\ 0&0&10&10&0&0&0&11&0&31\\ 0&8&0&0&0&10&8&0&0&26\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&6&6&0&0&0&6&0&18\\ 0&4&0&0&0&6&4&0&0&14\\ 2&0&0&0&4&0&0&0&4&10\\ 0&0&2&2&0&0&0&3&0&7\\ 0&2&0&0&0&2&2&0&0&6\\ 1&0&0&0&2&0&0&0&2&5\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 7: $e_{2}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{0}e_{2}e_{7}+e_{0}e_{3}e_{7}+e_{1}e_{4}e_{7}+e_{1}e_{3}e_{8}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&76&76&0&0&0&76&0&228\\ 0&68&0&0&0&72&68&0&0&208\\ 60&0&0&0&64&0&0&0&64&188\\ 0&0&56&56&0&0&0&59&0&171\\ 0&51&0&0&0&52&51&0&0&154\\ 44&0&0&0&47&0&0&0&47&138\\ 0&0&40&40&0&0&0&42&0&122\\ 0&35&0&0&0&37&35&0&0&107\\ 28&0&0&0&32&0&0&0&32&92\\ 0&0&25&25&0&0&0&28&0&78\\ 0&21&0&0&0&22&21&0&0&64\\ 18&0&0&0&18&0&0&0&18&54\\ 0&0&15&15&0&0&0&14&0&44\\ 0&11&0&0&0&14&11&0&0&36\\ 8&0&0&0&10&0&0&0&10&28\\ 0&0&7&7&0&0&0&9&0&23\\ 0&6&0&0&0&6&6&0&0&18\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&3&3&0&0&0&4&0&10\\ 0&2&0&0&0&3&2&0&0&7\\ 0&0&0&0&2&0&0&0&2&4\\ 0&0&0&0&0&0&0&2&0&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 8: $e_{1}e_{3}e_{5}+e_{1}e_{3}e_{6}+e_{2}e_{4}e_{6}+e_{2}e_{3}e_{7}+e_{1}e_{4}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&76&76&0&0&0&78&0&230\\ 0&70&0&0&0&72&70&0&0&212\\ 62&0&0&0&66&0&0&0&66&194\\ 0&0&58&58&0&0&0&62&0&178\\ 0&54&0&0&0&54&54&0&0&162\\ 47&0&0&0&50&0&0&0&50&147\\ 0&0&43&43&0&0&0&46&0&132\\ 0&39&0&0&0&41&39&0&0&119\\ 32&0&0&0&37&0&0&0&37&106\\ 0&0&30&30&0&0&0&33&0&93\\ 0&26&0&0&0&28&26&0&0&80\\ 22&0&0&0&24&0&0&0&24&70\\ 
0&0&20&20&0&0&0&20&0&60\\ 0&16&0&0&0&19&16&0&0&51\\ 12&0&0&0&15&0&0&0&15&42\\ 0&0&11&11&0&0&0&13&0&35\\ 0&9&0&0&0&10&9&0&0&28\\ 7&0&0&0&8&0&0&0&8&23\\ 0&0&6&6&0&0&0&6&0&18\\ 0&4&0&0&0&6&4&0&0&14\\ 2&0&0&0&4&0&0&0&4&10\\ 0&0&2&2&0&0&0&3&0&7\\ 0&1&0&0&0&2&1&0&0&4\\ 1&0&0&0&1&0&0&0&1&3\\ 0&0&1&1&0&0&0&0&0&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 9(1): $e_{3}e_{4}e_{5}+e_{0}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{2}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{2}e_{4}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&76&76&0&0&0&74&0&226\\ 0&66&0&0&0&72&66&0&0&204\\ 58&0&0&0&62&0&0&0&62&182\\ 0&0&54&54&0&0&0&56&0&164\\ 0&48&0&0&0&50&48&0&0&146\\ 41&0&0&0&44&0&0&0&44&129\\ 0&0&37&37&0&0&0&38&0&112\\ 0&31&0&0&0&35&31&0&0&97\\ 24&0&0&0&29&0&0&0&29&82\\ 0&0&22&22&0&0&0&26&0&70\\ 0&19&0&0&0&20&19&0&0&58\\ 15&0&0&0&17&0&0&0&17&49\\ 0&0&13&13&0&0&0&14&0&40\\ 0&10&0&0&0&11&10&0&0&31\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&4&4&0&0&0&8&0&16\\ 0&4&0&0&0&2&4&0&0&10\\ 3&0&0&0&2&0&0&0&2&7\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 9(2): $e_{3}e_{4}e_{5}+e_{0}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{2}e_{4}e_{6}+e_{2}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{1}e_{5}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&76&76&0&0&0&74&0&226\\ 0&66&0&0&0&72&66&0&0&204\\ 58&0&0&0&62&0&0&0&62&182\\ 0&0&54&54&0&0&0&56&0&164\\ 0&48&0&0&0&50&48&0&0&146\\ 41&0&0&0&44&0&0&0&44&129\\ 0&0&37&37&0&0&0&38&0&112\\ 0&31&0&0&0&35&31&0&0&97\\ 24&0&0&0&29&0&0&0&29&82\\ 0&0&22&22&0&0&0&26&0&70\\ 0&19&0&0&0&20&19&0&0&58\\ 15&0&0&0&17&0&0&0&17&49\\ 0&0&13&13&0&0&0&14&0&40\\ 0&10&0&0&0&11&10&0&0&31\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&4&4&0&0&0&8&0&16\\ 0&4&0&0&0&2&4&0&0&10\\ 3&0&0&0&2&0&0&0&2&7\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 10: 
$e_{3}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&75&75&0&0&0&76&0&226\\ 0&67&0&0&0&70&67&0&0&204\\ 58&0&0&0&62&0&0&0&62&182\\ 0&0&54&54&0&0&0&56&0&164\\ 0&48&0&0&0&50&48&0&0&146\\ 41&0&0&0&44&0&0&0&44&129\\ 0&0&37&37&0&0&0&38&0&112\\ 0&32&0&0&0&33&32&0&0&97\\ 26&0&0&0&28&0&0&0&28&82\\ 0&0&23&23&0&0&0&24&0&70\\ 0&19&0&0&0&20&19&0&0&58\\ 15&0&0&0&17&0&0&0&17&49\\ 0&0&13&13&0&0&0&14&0&40\\ 0&10&0&0&0&11&10&0&0&31\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&5&5&0&0&0&6&0&16\\ 0&3&0&0&0&4&3&0&0&10\\ 1&0&0&0&3&0&0&0&3&7\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 11: $e_{1}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&75&75&0&0&0&78&0&228\\ 0&70&0&0&0&71&70&0&0&211\\ 62&0&0&0&66&0&0&0&66&194\\ 0&0&58&58&0&0&0&62&0&178\\ 0&54&0&0&0&54&54&0&0&162\\ 46&0&0&0&50&0&0&0&50&146\\ 0&0&43&43&0&0&0&46&0&132\\ 0&39&0&0&0&41&39&0&0&119\\ 32&0&0&0&37&0&0&0&37&106\\ 0&0&30&30&0&0&0&33&0&93\\ 0&26&0&0&0&28&26&0&0&80\\ 21&0&0&0&24&0&0&0&24&69\\ 0&0&20&20&0&0&0&20&0&60\\ 0&16&0&0&0&19&16&0&0&51\\ 12&0&0&0&15&0&0&0&15&42\\ 0&0&11&11&0&0&0&12&0&34\\ 0&9&0&0&0&10&9&0&0&28\\ 7&0&0&0&8&0&0&0&8&23\\ 0&0&6&6&0&0&0&6&0&18\\ 0&4&0&0&0&6&4&0&0&14\\ 2&0&0&0&4&0&0&0&4&10\\ 0&0&2&2&0&0&0&2&0&6\\ 0&1&0&0&0&2&1&0&0&4\\ 1&0&0&0&1&0&0&0&1&3\\ 0&0&1&1&0&0&0&0&0&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 12: $e_{3}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{5}e_{6}+e_{0}e_{3}e_{7}+e_{1}e_{4}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&75&75&0&0&0&74&0&224\\ 0&66&0&0&0&71&66&0&0&203\\ 58&0&0&0&62&0&0&0&62&182\\ 0&0&54&54&0&0&0&55&0&163\\ 0&48&0&0&0&50&48&0&0&146\\ 41&0&0&0&44&0&0&0&44&129\\ 0&0&37&37&0&0&0&38&0&112\\ 0&31&0&0&0&34&31&0&0&96\\ 24&0&0&0&29&0&0&0&29&82\\ 
0&0&22&22&0&0&0&25&0&69\\ 0&19&0&0&0&20&19&0&0&58\\ 15&0&0&0&17&0&0&0&17&49\\ 0&0&13&13&0&0&0&14&0&40\\ 0&10&0&0&0&11&10&0&0&31\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&4&4&0&0&0&7&0&15\\ 0&4&0&0&0&2&4&0&0&10\\ 2&0&0&0&2&0&0&0&2&6\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 13: $e_{2}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&75&75&0&0&0&76&0&226\\ 0&68&0&0&0&71&68&0&0&207\\ 60&0&0&0&64&0&0&0&64&188\\ 0&0&56&56&0&0&0&58&0&170\\ 0&51&0&0&0&52&51&0&0&154\\ 44&0&0&0&47&0&0&0&47&138\\ 0&0&40&40&0&0&0&42&0&122\\ 0&35&0&0&0&37&35&0&0&107\\ 28&0&0&0&32&0&0&0&32&92\\ 0&0&25&25&0&0&0&27&0&77\\ 0&21&0&0&0&22&21&0&0&64\\ 16&0&0&0&18&0&0&0&18&52\\ 0&0&14&14&0&0&0&14&0&42\\ 0&11&0&0&0&13&11&0&0&35\\ 8&0&0&0&10&0&0&0&10&28\\ 0&0&7&7&0&0&0&8&0&22\\ 0&6&0&0&0&6&6&0&0&18\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&3&3&0&0&0&4&0&10\\ 0&2&0&0&0&3&2&0&0&7\\ 0&0&0&0&2&0&0&0&2&4\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 14: $e_{2}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&74&74&0&0&0&76&0&224\\ 0&66&0&0&0&68&66&0&0&200\\ 56&0&0&0&60&0&0&0&60&176\\ 0&0&51&51&0&0&0&53&0&155\\ 0&44&0&0&0&46&44&0&0&134\\ 36&0&0&0&39&0&0&0&39&114\\ 0&0&31&31&0&0&0&32&0&94\\ 0&26&0&0&0&27&26&0&0&79\\ 20&0&0&0&22&0&0&0&22&64\\ 0&0&17&17&0&0&0&18&0&52\\ 0&13&0&0&0&14&13&0&0&40\\ 9&0&0&0&11&0&0&0&11&31\\ 0&0&7&7&0&0&0&8&0&22\\ 0&5&0&0&0&6&5&0&0&16\\ 2&0&0&0&4&0&0&0&4&10\\ 0&0&2&2&0&0&0&3&0&7\\ 0&1&0&0&0&2&1&0&0&4\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 15: $e_{2}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{3}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{2}e_{3}e_{7}+e_{1}e_{4}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&74&74&0&0&0&74&0&222\\ 0&64&0&0&0&69&64&0&0&197\\ 54&0&0&0&59&0&0&0&59&172\\ 
0&0&49&49&0&0&0&53&0&151\\ 0&43&0&0&0&44&43&0&0&130\\ 35&0&0&0&38&0&0&0&38&111\\ 0&0&30&30&0&0&0&32&0&92\\ 0&24&0&0&0&28&24&0&0&76\\ 16&0&0&0&22&0&0&0&22&60\\ 0&0&14&14&0&0&0&19&0&47\\ 0&11&0&0&0&12&11&0&0&34\\ 9&0&0&0&9&0&0&0&9&27\\ 0&0&7&7&0&0&0&6&0&20\\ 0&4&0&0&0&6&4&0&0&14\\ 2&0&0&0&3&0&0&0&3&8\\ 0&0&1&1&0&0&0&3&0&5\\ 0&1&0&0&0&0&1&0&0&2\\ 1&0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 16: $e_{3}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{1}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&74&74&0&0&0&74&0&222\\ 0&65&0&0&0&69&65&0&0&199\\ 58&0&0&0&61&0&0&0&61&180\\ 0&0&54&54&0&0&0&55&0&163\\ 0&48&0&0&0&50&48&0&0&146\\ 41&0&0&0&44&0&0&0&44&129\\ 0&0&37&37&0&0&0&38&0&112\\ 0&31&0&0&0&33&31&0&0&95\\ 24&0&0&0&28&0&0&0&28&80\\ 0&0&22&22&0&0&0&24&0&68\\ 0&19&0&0&0&20&19&0&0&58\\ 15&0&0&0&17&0&0&0&17&49\\ 0&0&13&13&0&0&0&14&0&40\\ 0&10&0&0&0&11&10&0&0&31\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&4&4&0&0&0&6&0&14\\ 0&3&0&0&0&2&3&0&0&8\\ 1&0&0&0&2&0&0&0&2&5\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 17(1): $e_{2}e_{3}e_{4}+e_{2}e_{3}e_{5}+e_{0}e_{2}e_{6}+e_{1}e_{3}e_{6}+e_{0}e_{3}e_{7}+e_{1}e_{5}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&73&73&0&0&0&74&0&220\\ 0&63&0&0&0&66&63&0&0&192\\ 52&0&0&0&56&0&0&0&56&164\\ 0&0&45&45&0&0&0&50&0&140\\ 0&39&0&0&0&38&39&0&0&116\\ 32&0&0&0&32&0&0&0&32&96\\ 0&0&25&25&0&0&0&26&0&76\\ 0&19&0&0&0&23&19&0&0&61\\ 12&0&0&0&17&0&0&0&17&46\\ 0&0&10&10&0&0&0&14&0&34\\ 0&7&0&0&0&8&7&0&0&22\\ 6&0&0&0&5&0&0&0&5&16\\ 0&0&4&4&0&0&0&2&0&10\\ 0&1&0&0&0&4&1&0&0&6\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 17(2) $e_{2}e_{3}e_{4}+e_{2}e_{3}e_{5}+e_{0}e_{2}e_{6}+e_{1}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&73&73&0&0&0&74&0&220\\ 0&63&0&0&0&66&63&0&0&192\\ 
52&0&0&0&56&0&0&0&56&164\\ 0&0&45&45&0&0&0&50&0&140\\ 0&39&0&0&0&38&39&0&0&116\\ 32&0&0&0&32&0&0&0&32&96\\ 0&0&25&25&0&0&0&26&0&76\\ 0&19&0&0&0&23&19&0&0&61\\ 12&0&0&0&17&0&0&0&17&46\\ 0&0&10&10&0&0&0&14&0&34\\ 0&7&0&0&0&8&7&0&0&22\\ 6&0&0&0&5&0&0&0&5&16\\ 0&0&4&4&0&0&0&2&0&10\\ 0&1&0&0&0&4&1&0&0&6\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 18: $e_{2}e_{4}e_{5}+e_{3}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&73&73&0&0&0&74&0&220\\ 0&63&0&0&0&66&63&0&0&192\\ 52&0&0&0&56&0&0&0&56&164\\ 0&0&46&46&0&0&0&48&0&140\\ 0&38&0&0&0&40&38&0&0&116\\ 30&0&0&0&33&0&0&0&33&96\\ 0&0&25&25&0&0&0&26&0&76\\ 0&20&0&0&0&21&20&0&0&61\\ 14&0&0&0&16&0&0&0&16&46\\ 0&0&11&11&0&0&0&12&0&34\\ 0&7&0&0&0&8&7&0&0&22\\ 4&0&0&0&6&0&0&0&6&16\\ 0&0&3&3&0&0&0&4&0&10\\ 0&2&0&0&0&2&2&0&0&6\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 19: $e_{1}e_{3}e_{5}+e_{2}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{1}e_{2}e_{7}+e_{0}e_{5}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&73&73&0&0&0&74&0&220\\ 0&64&0&0&0&67&64&0&0&195\\ 54&0&0&0&59&0&0&0&59&172\\ 0&0&49&49&0&0&0&53&0&151\\ 0&43&0&0&0&44&43&0&0&130\\ 34&0&0&0&38&0&0&0&38&110\\ 0&0&30&30&0&0&0&32&0&92\\ 0&24&0&0&0&28&24&0&0&76\\ 16&0&0&0&22&0&0&0&22&60\\ 0&0&14&14&0&0&0&17&0&45\\ 0&11&0&0&0&12&11&0&0&34\\ 8&0&0&0&9&0&0&0&9&26\\ 0&0&7&7&0&0&0&6&0&20\\ 0&4&0&0&0&6&4&0&0&14\\ 2&0&0&0&3&0&0&0&3&8\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&0&1&0&0&2\\ 1&0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 20: $e_{3}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{2}e_{3}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&73&73&0&0&0&76&0&222\\ 0&65&0&0&0&69&65&0&0&199\\ 58&0&0&0&61&0&0&0&61&180\\ 0&0&54&54&0&0&0&55&0&163\\ 0&48&0&0&0&50&48&0&0&146\\ 41&0&0&0&44&0&0&0&44&129\\ 0&0&37&37&0&0&0&38&0&112\\ 
0&31&0&0&0&33&31&0&0&95\\ 26&0&0&0&27&0&0&0&27&80\\ 0&0&22&22&0&0&0&24&0&68\\ 0&19&0&0&0&20&19&0&0&58\\ 15&0&0&0&17&0&0&0&17&49\\ 0&0&13&13&0&0&0&14&0&40\\ 0&10&0&0&0&11&10&0&0&31\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&4&4&0&0&0&6&0&14\\ 0&2&0&0&0&4&2&0&0&8\\ 1&0&0&0&2&0&0&0&2&5\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 21: $e_{2}e_{3}e_{5}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&72&72&0&0&0&74&0&218\\ 0&63&0&0&0&65&63&0&0&191\\ 52&0&0&0&56&0&0&0&56&164\\ 0&0&45&45&0&0&0&48&0&138\\ 0&38&0&0&0&38&38&0&0&114\\ 29&0&0&0&32&0&0&0&32&93\\ 0&0&24&24&0&0&0&26&0&74\\ 0&18&0&0&0&20&18&0&0&56\\ 12&0&0&0&15&0&0&0&15&42\\ 0&0&10&10&0&0&0&11&0&31\\ 0&7&0&0&0&8&7&0&0&22\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&3&3&0&0&0&2&0&8\\ 0&1&0&0&0&3&1&0&0&5\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 22: $e_{2}e_{4}e_{5}+e_{3}e_{4}e_{5}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&72&72&0&0&0&74&0&218\\ 0&62&0&0&0&65&62&0&0&189\\ 52&0&0&0&55&0&0&0&55&162\\ 0&0&45&45&0&0&0&47&0&137\\ 0&38&0&0&0&38&38&0&0&114\\ 30&0&0&0&32&0&0&0&32&94\\ 0&0&25&25&0&0&0&26&0&76\\ 0&19&0&0&0&21&19&0&0&59\\ 12&0&0&0&16&0&0&0&16&44\\ 0&0&10&10&0&0&0&12&0&32\\ 0&7&0&0&0&8&7&0&0&22\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&3&3&0&0&0&2&0&8\\ 0&1&0&0&0&2&1&0&0&4\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 23: $e_{2}e_{4}e_{5}+e_{3}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{1}e_{5}e_{6}+e_{0}e_{2}e_{7}+e_{1}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&72&72&0&0&0&74&0&218\\ 0&62&0&0&0&65&62&0&0&189\\ 52&0&0&0&55&0&0&0&55&162\\ 0&0&45&45&0&0&0&47&0&137\\ 0&37&0&0&0&40&37&0&0&114\\ 30&0&0&0&32&0&0&0&32&94\\ 0&0&25&25&0&0&0&26&0&76\\ 0&19&0&0&0&21&19&0&0&59\\ 14&0&0&0&15&0&0&0&15&44\\ 0&0&10&10&0&0&0&12&0&32\\ 
0&7&0&0&0&8&7&0&0&22\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&2&2&0&0&0&4&0&8\\ 0&1&0&0&0&2&1&0&0&4\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 24: $e_{0}e_{3}e_{4}+e_{2}e_{3}e_{6}+e_{1}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{2}e_{4}e_{7}+e_{0}e_{5}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&72&72&0&0&0&70&0&214\\ 0&58&0&0&0&65&58&0&0&181\\ 46&0&0&0&51&0&0&0&51&148\\ 0&0&39&39&0&0&0&45&0&123\\ 0&33&0&0&0&32&33&0&0&98\\ 26&0&0&0&26&0&0&0&26&78\\ 0&0&19&19&0&0&0&20&0&58\\ 0&13&0&0&0&17&13&0&0&43\\ 6&0&0&0&11&0&0&0&11&28\\ 0&0&4&4&0&0&0&10&0&18\\ 0&3&0&0&0&2&3&0&0&8\\ 3&0&0&0&1&0&0&0&1&5\\ 0&0&1&1&0&0&0&0&0&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 25: $e_{3}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{2}e_{3}e_{7}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&72&72&0&0&0&72&0&216\\ 0&65&0&0&0&68&65&0&0&198\\ 58&0&0&0&61&0&0&0&61&180\\ 0&0&54&54&0&0&0&55&0&163\\ 0&48&0&0&0&50&48&0&0&146\\ 41&0&0&0&44&0&0&0&44&129\\ 0&0&37&37&0&0&0&38&0&112\\ 0&31&0&0&0&33&31&0&0&95\\ 24&0&0&0&27&0&0&0&27&78\\ 0&0&22&22&0&0&0&24&0&68\\ 0&19&0&0&0&20&19&0&0&58\\ 15&0&0&0&17&0&0&0&17&49\\ 0&0&13&13&0&0&0&14&0&40\\ 0&10&0&0&0&11&10&0&0&31\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&4&4&0&0&0&6&0&14\\ 0&2&0&0&0&2&2&0&0&6\\ 1&0&0&0&2&0&0&0&2&5\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 26: $e_{1}e_{3}e_{5}+e_{1}e_{4}e_{5}+e_{2}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{0}e_{3}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&71&71&0&0&0&74&0&216\\ 0&61&0&0&0&63&61&0&0&185\\ 50&0&0&0&53&0&0&0&53&156\\ 0&0&42&42&0&0&0&45&0&129\\ 0&34&0&0&0&36&34&0&0&104\\ 26&0&0&0&28&0&0&0&28&82\\ 0&0&20&20&0&0&0&22&0&62\\ 0&14&0&0&0&17&14&0&0&45\\ 10&0&0&0&11&0&0&0&11&32\\ 0&0&7&7&0&0&0&8&0&22\\ 0&4&0&0&0&6&4&0&0&14\\ 2&0&0&0&3&0&0&0&3&8\\ 0&0&1&1&0&0&0&2&0&4\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 27: 
$e_{2}e_{3}e_{4}+e_{2}e_{3}e_{5}+e_{0}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{5}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&71&71&0&0&0&70&0&212\\ 0&58&0&0&0&63&58&0&0&179\\ 46&0&0&0&51&0&0&0&51&148\\ 0&0&39&39&0&0&0&43&0&121\\ 0&33&0&0&0&32&33&0&0&98\\ 25&0&0&0&26&0&0&0&26&77\\ 0&0&19&19&0&0&0&20&0&58\\ 0&13&0&0&0&16&13&0&0&42\\ 6&0&0&0&11&0&0&0&11&28\\ 0&0&4&4&0&0&0&8&0&16\\ 0&3&0&0&0&2&3&0&0&8\\ 2&0&0&0&1&0&0&0&1&4\\ 0&0&1&1&0&0&0&0&0&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 28: $e_{2}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&71&71&0&0&0&70&0&212\\ 0&58&0&0&0&63&58&0&0&179\\ 46&0&0&0&51&0&0&0&51&148\\ 0&0&39&39&0&0&0&41&0&119\\ 0&30&0&0&0&32&30&0&0&92\\ 20&0&0&0&24&0&0&0&24&68\\ 0&0&15&15&0&0&0&18&0&48\\ 0&11&0&0&0&11&11&0&0&33\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&4&4&0&0&0&6&0&14\\ 0&3&0&0&0&2&3&0&0&8\\ 1&0&0&0&1&0&0&0&1&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 29: $e_{2}e_{4}e_{5}+e_{3}e_{4}e_{5}+e_{0}e_{4}e_{6}+e_{1}e_{5}e_{6}+e_{0}e_{2}e_{7}+e_{1}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&71&71&0&0&0&72&0&214\\ 0&62&0&0&0&64&62&0&0&188\\ 52&0&0&0&55&0&0&0&55&162\\ 0&0&45&45&0&0&0&47&0&137\\ 0&37&0&0&0&38&37&0&0&112\\ 30&0&0&0&32&0&0&0&32&94\\ 0&0&25&25&0&0&0&26&0&76\\ 0&19&0&0&0&21&19&0&0&59\\ 12&0&0&0&15&0&0&0&15&42\\ 0&0&10&10&0&0&0&12&0&32\\ 0&7&0&0&0&8&7&0&0&22\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&2&2&0&0&0&2&0&6\\ 0&1&0&0&0&2&1&0&0&4\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 30: $e_{2}e_{3}e_{5}+e_{1}e_{4}e_{5}+e_{1}e_{2}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{6}e_{7}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&71&71&0&0&0&74&0&216\\ 0&64&0&0&0&66&64&0&0&194\\ 54&0&0&0&59&0&0&0&59&172\\ 0&0&49&49&0&0&0&53&0&151\\ 0&43&0&0&0&44&43&0&0&130\\ 33&0&0&0&38&0&0&0&38&109\\ 0&0&30&30&0&0&0&32&0&92\\ 0&24&0&0&0&28&24&0&0&76\\ 16&0&0&0&22&0&0&0&22&60\\ 
0&0&14&14&0&0&0&16&0&44\\ 0&10&0&0&0&12&10&0&0&32\\ 8&0&0&0&9&0&0&0&9&26\\ 0&0&7&7&0&0&0&6&0&20\\ 0&4&0&0&0&6&4&0&0&14\\ 2&0&0&0&3&0&0&0&3&8\\ 0&0&1&1&0&0&0&1&0&3\\ 0&1&0&0&0&0&1&0&0&2\\ 1&0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 31(1): $e_{0}e_{1}e_{5}+e_{2}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{4}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&70&70&0&0&0&68&0&208\\ 0&54&0&0&0&60&54&0&0&168\\ 40&0&0&0&44&0&0&0&44&128\\ 0&0&30&30&0&0&0&38&0&98\\ 0&24&0&0&0&20&24&0&0&68\\ 20&0&0&0&14&0&0&0&14&48\\ 0&0&10&10&0&0&0&8&0&28\\ 0&4&0&0&0&10&4&0&0&18\\ 0&0&0&0&4&0&0&0&4&8\\ 0&0&0&0&0&0&0&4&0&4\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 31(2) $e_{0}e_{1}e_{5}+e_{2}e_{3}e_{5}+e_{3}e_{4}e_{5}+e_{0}e_{2}e_{6}+e_{1}e_{3}e_{6}+e_{0}e_{3}e_{7}+e_{1}e_{4}e_{7}+e_{2}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&70&70&0&0&0&68&0&208\\ 0&54&0&0&0&60&54&0&0&168\\ 40&0&0&0&44&0&0&0&44&128\\ 0&0&30&30&0&0&0&38&0&98\\ 0&24&0&0&0&20&24&0&0&68\\ 20&0&0&0&14&0&0&0&14&48\\ 0&0&10&10&0&0&0&8&0&28\\ 0&4&0&0&0&10&4&0&0&18\\ 0&0&0&0&4&0&0&0&4&8\\ 0&0&0&0&0&0&0&4&0&4\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 31(3) $e_{0}e_{1}e_{5}+e_{2}e_{3}e_{5}+e_{0}e_{2}e_{6}+e_{3}e_{4}e_{6}+e_{0}e_{3}e_{7}+e_{1}e_{4}e_{7}+e_{1}e_{3}e_{8}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&70&70&0&0&0&68&0&208\\ 0&54&0&0&0&60&54&0&0&168\\ 40&0&0&0&44&0&0&0&44&128\\ 0&0&30&30&0&0&0&38&0&98\\ 0&24&0&0&0&20&24&0&0&68\\ 20&0&0&0&14&0&0&0&14&48\\ 0&0&10&10&0&0&0&8&0&28\\ 0&4&0&0&0&10&4&0&0&18\\ 0&0&0&0&4&0&0&0&4&8\\ 0&0&0&0&0&0&0&4&0&4\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 31(4) $e_{0}e_{1}e_{5}+e_{2}e_{3}e_{5}+e_{3}e_{4}e_{6}+e_{0}e_{2}e_{7}+e_{0}e_{3}e_{7}+e_{1}e_{4}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&70&70&0&0&0&68&0&208\\ 0&54&0&0&0&60&54&0&0&168\\ 40&0&0&0&44&0&0&0&44&128\\ 0&0&30&30&0&0&0&38&0&98\\ 0&24&0&0&0&20&24&0&0&68\\ 20&0&0&0&14&0&0&0&14&48\\ 0&0&10&10&0&0&0&8&0&28\\ 
0&4&0&0&0&10&4&0&0&18\\ 0&0&0&0&4&0&0&0&4&8\\ 0&0&0&0&0&0&0&4&0&4\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 31(5) $e_{0}e_{1}e_{5}+e_{2}e_{3}e_{5}+e_{3}e_{4}e_{5}+e_{0}e_{2}e_{6}+e_{0}e_{3}e_{7}+e_{1}e_{4}e_{7}+e_{1}e_{3}e_{8}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&70&70&0&0&0&68&0&208\\ 0&54&0&0&0&60&54&0&0&168\\ 40&0&0&0&44&0&0&0&44&128\\ 0&0&30&30&0&0&0&38&0&98\\ 0&24&0&0&0&20&24&0&0&68\\ 20&0&0&0&14&0&0&0&14&48\\ 0&0&10&10&0&0&0&8&0&28\\ 0&4&0&0&0&10&4&0&0&18\\ 0&0&0&0&4&0&0&0&4&8\\ 0&0&0&0&0&0&0&4&0&4\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 32: $e_{2}e_{3}e_{4}+e_{1}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&70&70&0&0&0&74&0&214\\ 0&60&0&0&0&61&60&0&0&181\\ 46&0&0&0&51&0&0&0&51&148\\ 0&0&40&40&0&0&0&43&0&123\\ 0&32&0&0&0&34&32&0&0&98\\ 24&0&0&0&27&0&0&0&27&78\\ 0&0&19&19&0&0&0&20&0&58\\ 0&14&0&0&0&15&14&0&0&43\\ 8&0&0&0&10&0&0&0&10&28\\ 0&0&6&6&0&0&0&6&0&18\\ 0&2&0&0&0&4&2&0&0&8\\ 1&0&0&0&2&0&0&0&2&5\\ 0&0&1&1&0&0&0&0&0&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 33: $e_{1}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{2}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{0}e_{3}e_{7}+e_{1}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&70&70&0&0&0&70&0&210\\ 0&58&0&0&0&62&58&0&0&178\\ 46&0&0&0&50&0&0&0&50&146\\ 0&0&39&39&0&0&0&42&0&120\\ 0&33&0&0&0&32&33&0&0&98\\ 24&0&0&0&26&0&0&0&26&76\\ 0&0&19&19&0&0&0&20&0&58\\ 0&13&0&0&0&16&13&0&0&42\\ 6&0&0&0&10&0&0&0&10&26\\ 0&0&4&4&0&0&0&7&0&15\\ 0&3&0&0&0&2&3&0&0&8\\ 1&0&0&0&1&0&0&0&1&3\\ 0&0&1&1&0&0&0&0&0&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 34: $e_{2}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{2}e_{3}e_{7}+e_{0}e_{5}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&70&70&0&0&0&70&0&210\\ 0&58&0&0&0&62&58&0&0&178\\ 46&0&0&0&50&0&0&0&50&146\\ 0&0&39&39&0&0&0&41&0&119\\ 0&30&0&0&0&32&30&0&0&92\\ 20&0&0&0&24&0&0&0&24&68\\ 0&0&14&14&0&0&0&16&0&44\\ 0&11&0&0&0&11&11&0&0&33\\ 
6&0&0&0&8&0&0&0&8&22\\ 0&0&4&4&0&0&0&6&0&14\\ 0&2&0&0&0&2&2&0&0&6\\ 1&0&0&0&1&0&0&0&1&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 35: $e_{2}e_{3}e_{5}+e_{1}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{2}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{2}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&69&69&0&0&0&70&0&208\\ 0&55&0&0&0&58&55&0&0&168\\ 40&0&0&0&44&0&0&0&44&128\\ 0&0&32&32&0&0&0&34&0&98\\ 0&22&0&0&0&24&22&0&0&68\\ 14&0&0&0&17&0&0&0&17&48\\ 0&0&9&9&0&0&0&10&0&28\\ 0&6&0&0&0&6&6&0&0&18\\ 2&0&0&0&3&0&0&0&3&8\\ 0&0&1&1&0&0&0&2&0&4\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 36(1): $e_{0}e_{3}e_{5}+e_{1}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{1}e_{3}e_{7}+e_{2}e_{4}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&69&69&0&0&0&68&0&206\\ 0&54&0&0&0&59&54&0&0&167\\ 40&0&0&0&44&0&0&0&44&128\\ 0&0&30&30&0&0&0&35&0&95\\ 0&24&0&0&0&20&24&0&0&68\\ 17&0&0&0&14&0&0&0&14&45\\ 0&0&10&10&0&0&0&8&0&28\\ 0&4&0&0&0&8&4&0&0&16\\ 0&0&0&0&4&0&0&0&4&8\\ 0&0&0&0&0&0&0&3&0&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 36(2) $e_{2}e_{3}e_{5}+e_{1}e_{4}e_{5}+e_{0}e_{4}e_{6}+e_{1}e_{3}e_{7}+e_{2}e_{4}e_{7}+e_{0}e_{1}e_{8}+e_{1}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&69&69&0&0&0&68&0&206\\ 0&54&0&0&0&59&54&0&0&167\\ 40&0&0&0&44&0&0&0&44&128\\ 0&0&30&30&0&0&0&35&0&95\\ 0&24&0&0&0&20&24&0&0&68\\ 17&0&0&0&14&0&0&0&14&45\\ 0&0&10&10&0&0&0&8&0&28\\ 0&4&0&0&0&8&4&0&0&16\\ 0&0&0&0&4&0&0&0&4&8\\ 0&0&0&0&0&0&0&3&0&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 37: $e_{2}e_{3}e_{4}+e_{1}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{0}e_{5}e_{7}+e_{1}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&69&69&0&0&0&70&0&208\\ 0&57&0&0&0&60&57&0&0&174\\ 46&0&0&0&50&0&0&0&50&146\\ 0&0&39&39&0&0&0&42&0&120\\ 0&32&0&0&0&32&32&0&0&96\\ 24&0&0&0&26&0&0&0&26&76\\ 0&0&19&19&0&0&0&20&0&58\\ 0&13&0&0&0&15&13&0&0&41\\ 6&0&0&0&10&0&0&0&10&26\\ 0&0&4&4&0&0&0&6&0&14\\ 0&2&0&0&0&2&2&0&0&6\\ 1&0&0&0&1&0&0&0&1&3\\ 0&0&1&1&0&0&0&0&0&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ 
\end{smallmatrix}\right)$ 38: $e_{2}e_{3}e_{5}+e_{1}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{2}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&68&68&0&0&0&70&0&206\\ 0&55&0&0&0&57&55&0&0&167\\ 40&0&0&0&44&0&0&0&44&128\\ 0&0&31&31&0&0&0&33&0&95\\ 0&22&0&0&0&24&22&0&0&68\\ 13&0&0&0&16&0&0&0&16&45\\ 0&0&9&9&0&0&0&10&0&28\\ 0&5&0&0&0&6&5&0&0&16\\ 2&0&0&0&3&0&0&0&3&8\\ 0&0&1&1&0&0&0&1&0&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 39: $e_{2}e_{3}e_{5}+e_{1}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{2}e_{4}e_{6}+e_{1}e_{2}e_{7}+e_{0}e_{4}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&68&68&0&0&0&68&0&204\\ 0&53&0&0&0&57&53&0&0&163\\ 40&0&0&0&43&0&0&0&43&126\\ 0&0&30&30&0&0&0&33&0&93\\ 0&22&0&0&0&20&22&0&0&64\\ 14&0&0&0&14&0&0&0&14&42\\ 0&0&9&9&0&0&0&8&0&26\\ 0&4&0&0&0&6&4&0&0&14\\ 0&0&0&0&3&0&0&0&3&6\\ 0&0&0&0&0&0&0&2&0&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 40: $e_{0}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{1}e_{2}e_{7}+e_{0}e_{3}e_{8}+e_{1}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&68&68&0&0&0&68&0&204\\ 0&54&0&0&0&58&54&0&0&166\\ 40&0&0&0&44&0&0&0&44&128\\ 0&0&30&30&0&0&0&33&0&93\\ 0&23&0&0&0&20&23&0&0&66\\ 16&0&0&0&14&0&0&0&14&44\\ 0&0&9&9&0&0&0&8&0&26\\ 0&4&0&0&0&7&4&0&0&15\\ 0&0&0&0&4&0&0&0&4&8\\ 0&0&0&0&0&0&0&2&0&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 41: $e_{1}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{1}e_{2}e_{7}+e_{0}e_{3}e_{7}+e_{0}e_{1}e_{8}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&67&67&0&0&0&68&0&202\\ 0&52&0&0&0&55&52&0&0&159\\ 38&0&0&0&41&0&0&0&41&120\\ 0&0&29&29&0&0&0&30&0&88\\ 0&20&0&0&0&20&20&0&0&60\\ 12&0&0&0&13&0&0&0&13&38\\ 0&0&7&7&0&0&0&8&0&22\\ 0&3&0&0&0&4&3&0&0&10\\ 0&0&0&0&2&0&0&0&2&4\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 42: $e_{2}e_{3}e_{4}+e_{0}e_{2}e_{6}+e_{1}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{0}e_{4}e_{7}+e_{1}e_{5}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&67&67&0&0&0&68&0&202\\ 0&51&0&0&0&57&51&0&0&159\\ 
38&0&0&0&41&0&0&0&41&120\\ 0&0&28&28&0&0&0&32&0&88\\ 0&19&0&0&0&22&19&0&0&60\\ 12&0&0&0&13&0&0&0&13&38\\ 0&0&6&6&0&0&0&10&0&22\\ 0&3&0&0&0&4&3&0&0&10\\ 2&0&0&0&1&0&0&0&1&4\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 43: $e_{1}e_{2}e_{5}+e_{3}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{2}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{0}e_{1}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&67&67&0&0&0&70&0&204\\ 0&53&0&0&0&57&53&0&0&163\\ 40&0&0&0&43&0&0&0&43&126\\ 0&0&30&30&0&0&0&33&0&93\\ 0&20&0&0&0&24&20&0&0&64\\ 14&0&0&0&14&0&0&0&14&42\\ 0&0&8&8&0&0&0&10&0&26\\ 0&4&0&0&0&6&4&0&0&14\\ 2&0&0&0&2&0&0&0&2&6\\ 0&0&0&0&0&0&0&2&0&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 44: $e_{2}e_{3}e_{5}+e_{1}e_{4}e_{5}+e_{1}e_{2}e_{6}+e_{0}e_{4}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&67&67&0&0&0&68&0&202\\ 0&53&0&0&0&56&53&0&0&162\\ 40&0&0&0&43&0&0&0&43&126\\ 0&0&30&30&0&0&0&32&0&92\\ 0&21&0&0&0&20&21&0&0&62\\ 13&0&0&0&13&0&0&0&13&39\\ 0&0&8&8&0&0&0&8&0&24\\ 0&4&0&0&0&6&4&0&0&14\\ 0&0&0&0&3&0&0&0&3&6\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 45: $e_{0}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{6}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&67&67&0&0&0&74&0&208\\ 0&57&0&0&0&60&57&0&0&174\\ 46&0&0&0&50&0&0&0&50&146\\ 0&0&39&39&0&0&0&42&0&120\\ 0&31&0&0&0&34&31&0&0&96\\ 24&0&0&0&26&0&0&0&26&76\\ 0&0&19&19&0&0&0&20&0&58\\ 0&13&0&0&0&15&13&0&0&41\\ 8&0&0&0&9&0&0&0&9&26\\ 0&0&4&4&0&0&0&6&0&14\\ 0&1&0&0&0&4&1&0&0&6\\ 1&0&0&0&1&0&0&0&1&3\\ 0&0&1&1&0&0&0&0&0&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 46: $e_{2}e_{3}e_{5}+e_{1}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{1}e_{2}e_{7}+e_{0}e_{3}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&66&66&0&0&0&68&0&200\\ 0&51&0&0&0&53&51&0&0&155\\ 36&0&0&0&39&0&0&0&39&114\\ 0&0&26&26&0&0&0&27&0&79\\ 0&17&0&0&0&18&17&0&0&52\\ 9&0&0&0&11&0&0&0&11&31\\ 0&0&5&5&0&0&0&6&0&16\\ 0&2&0&0&0&3&2&0&0&7\\ 
0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 47: $e_{2}e_{3}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{0}e_{2}e_{7}+e_{1}e_{4}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&66&66&0&0&0&66&0&198\\ 0&51&0&0&0&54&51&0&0&156\\ 36&0&0&0&39&0&0&0&39&114\\ 0&0&28&28&0&0&0&30&0&86\\ 0&19&0&0&0&20&19&0&0&58\\ 11&0&0&0&13&0&0&0&13&37\\ 0&0&5&5&0&0&0&6&0&16\\ 0&3&0&0&0&3&3&0&0&9\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 48: $e_{1}e_{2}e_{5}+e_{3}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{2}e_{4}e_{6}+e_{0}e_{1}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&66&66&0&0&0&66&0&198\\ 0&53&0&0&0&56&53&0&0&162\\ 40&0&0&0&43&0&0&0&43&126\\ 0&0&30&30&0&0&0&33&0&93\\ 0&20&0&0&0&20&20&0&0&60\\ 14&0&0&0&14&0&0&0&14&42\\ 0&0&8&8&0&0&0&8&0&24\\ 0&4&0&0&0&6&4&0&0&14\\ 0&0&0&0&2&0&0&0&2&4\\ 0&0&0&0&0&0&0&2&0&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 49: $e_{1}e_{3}e_{5}+e_{2}e_{4}e_{5}+e_{1}e_{2}e_{6}+e_{0}e_{5}e_{6}+e_{0}e_{3}e_{7}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&66&66&0&0&0&70&0&202\\ 0&53&0&0&0&56&53&0&0&162\\ 40&0&0&0&43&0&0&0&43&126\\ 0&0&30&30&0&0&0&32&0&92\\ 0&20&0&0&0&22&20&0&0&62\\ 11&0&0&0&14&0&0&0&14&39\\ 0&0&8&8&0&0&0&8&0&24\\ 0&4&0&0&0&6&4&0&0&14\\ 2&0&0&0&2&0&0&0&2&6\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 50: $e_{0}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{6}e_{7}+e_{1}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&66&66&0&0&0&68&0&200\\ 0&57&0&0&0&59&57&0&0&173\\ 46&0&0&0&50&0&0&0&50&146\\ 0&0&39&39&0&0&0&42&0&120\\ 0&31&0&0&0&32&31&0&0&94\\ 24&0&0&0&26&0&0&0&26&76\\ 0&0&19&19&0&0&0&20&0&58\\ 0&13&0&0&0&15&13&0&0&41\\ 6&0&0&0&9&0&0&0&9&24\\ 0&0&4&4&0&0&0&6&0&14\\ 0&1&0&0&0&2&1&0&0&4\\ 1&0&0&0&1&0&0&0&1&3\\ 0&0&1&1&0&0&0&0&0&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 51: $e_{2}e_{3}e_{5}+e_{1}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{1}e_{2}e_{7}+e_{0}e_{3}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 
0&0&65&65&0&0&0&66&0&196\\ 0&48&0&0&0&51&48&0&0&147\\ 32&0&0&0&36&0&0&0&36&104\\ 0&0&22&22&0&0&0&24&0&68\\ 0&13&0&0&0&14&13&0&0&40\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&3&3&0&0&0&4&0&10\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 52: $e_{2}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{0}e_{2}e_{7}+e_{0}e_{3}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&65&65&0&0&0&66&0&196\\ 0&49&0&0&0&52&49&0&0&150\\ 36&0&0&0&39&0&0&0&39&114\\ 0&0&28&28&0&0&0&29&0&85\\ 0&19&0&0&0&20&19&0&0&58\\ 10&0&0&0&12&0&0&0&12&34\\ 0&0&5&5&0&0&0&6&0&16\\ 0&2&0&0&0&2&2&0&0&6\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 53: $e_{1}e_{3}e_{5}+e_{2}e_{4}e_{5}+e_{1}e_{2}e_{6}+e_{0}e_{3}e_{7}+e_{0}e_{4}e_{8}$ $\left(\begin{smallmatrix} 0&0&65&65&0&0&0&66&0&196\\ 0&53&0&0&0&55&53&0&0&161\\ 40&0&0&0&43&0&0&0&43&126\\ 0&0&30&30&0&0&0&32&0&92\\ 0&20&0&0&0&20&20&0&0&60\\ 11&0&0&0&12&0&0&0&12&35\\ 0&0&8&8&0&0&0&8&0&24\\ 0&4&0&0&0&6&4&0&0&14\\ 0&0&0&0&2&0&0&0&2&4\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 54: $e_{2}e_{3}e_{4}+e_{0}e_{2}e_{5}+e_{1}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&64&64&0&0&0&68&0&196\\ 0&48&0&0&0&51&48&0&0&147\\ 34&0&0&0&35&0&0&0&35&104\\ 0&0&21&21&0&0&0&26&0&68\\ 0&12&0&0&0&16&12&0&0&40\\ 8&0&0&0&7&0&0&0&7&22\\ 0&0&3&3&0&0&0&4&0&10\\ 0&0&0&0&0&3&0&0&0&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 55: $e_{2}e_{3}e_{4}+e_{1}e_{4}e_{5}+e_{0}e_{5}e_{6}+e_{1}e_{2}e_{7}+e_{0}e_{3}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&64&64&0&0&0&66&0&194\\ 0&48&0&0&0&50&48&0&0&146\\ 30&0&0&0&34&0&0&0&34&98\\ 0&0&22&22&0&0&0&24&0&68\\ 0&12&0&0&0&14&12&0&0&38\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&2&2&0&0&0&2&0&6\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 56: $e_{0}e_{4}e_{5}+e_{0}e_{1}e_{6}+e_{1}e_{2}e_{6}+e_{2}e_{3}e_{7}+e_{0}e_{2}e_{8}+e_{1}e_{3}e_{8}$ $\left(\begin{smallmatrix} 
0&0&64&64&0&0&0&68&0&196\\ 0&48&0&0&0&54&48&0&0&150\\ 38&0&0&0&38&0&0&0&38&114\\ 0&0&28&28&0&0&0&29&0&85\\ 0&19&0&0&0&20&19&0&0&58\\ 12&0&0&0&11&0&0&0&11&34\\ 0&0&4&4&0&0&0&8&0&16\\ 0&1&0&0&0&4&1&0&0&6\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 57: $e_{0}e_{1}e_{2}+e_{3}e_{4}e_{5}+e_{1}e_{4}e_{6}+e_{2}e_{5}e_{7}+e_{0}e_{6}e_{7}+e_{3}e_{6}e_{7}$ $\left(\begin{smallmatrix} 0&0&64&64&0&0&0&56&0&184\\ 0&36&0&0&0&56&36&0&0&128\\ 16&0&0&0&28&0&0&0&28&72\\ 0&0&8&8&0&0&0&28&0&44\\ 0&8&0&0&0&0&8&0&0&16\\ 8&0&0&0&0&0&0&0&0&8\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 58: $e_{2}e_{3}e_{4}+e_{0}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{1}e_{2}e_{7}+e_{0}e_{3}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&63&63&0&0&0&66&0&192\\ 0&46&0&0&0&48&46&0&0&140\\ 30&0&0&0&33&0&0&0&33&96\\ 0&0&20&20&0&0&0&21&0&61\\ 0&11&0&0&0&12&11&0&0&34\\ 4&0&0&0&6&0&0&0&6&16\\ 0&0&2&2&0&0&0&2&0&6\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 59: $e_{0}e_{2}e_{5}+e_{1}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{7}+e_{1}e_{2}e_{8}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&63&63&0&0&0&68&0&194\\ 0&48&0&0&0&50&48&0&0&146\\ 28&0&0&0&35&0&0&0&35&98\\ 0&0&21&21&0&0&0&26&0&68\\ 0&12&0&0&0&14&12&0&0&38\\ 8&0&0&0&7&0&0&0&7&22\\ 0&0&3&3&0&0&0&0&0&6\\ 0&0&0&0&0&3&0&0&0&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 60: $e_{2}e_{3}e_{4}+e_{1}e_{3}e_{5}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{0}e_{1}e_{7}+e_{1}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&63&63&0&0&0&66&0&192\\ 0&45&0&0&0&50&45&0&0&140\\ 32&0&0&0&32&0&0&0&32&96\\ 0&0&19&19&0&0&0&23&0&61\\ 0&10&0&0&0&14&10&0&0&34\\ 6&0&0&0&5&0&0&0&5&16\\ 0&0&1&1&0&0&0&4&0&6\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 61: $e_{0}e_{1}e_{5}+e_{3}e_{4}e_{5}+e_{2}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{1}e_{2}e_{7}+e_{0}e_{4}e_{7}$ $\left(\begin{smallmatrix} 0&0&63&63&0&0&0&56&0&182\\ 0&36&0&0&0&50&36&0&0&122\\ 16&0&0&0&28&0&0&0&28&72\\ 0&0&8&8&0&0&0&21&0&37\\ 
0&8&0&0&0&0&8&0&0&16\\ 5&0&0&0&0&0&0&0&0&5\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 62: $e_{2}e_{4}e_{5}+e_{0}e_{2}e_{6}+e_{0}e_{3}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&63&63&0&0&0&64&0&190\\ 0&48&0&0&0&51&48&0&0&147\\ 36&0&0&0&38&0&0&0&38&112\\ 0&0&28&28&0&0&0&29&0&85\\ 0&19&0&0&0&20&19&0&0&58\\ 10&0&0&0&11&0&0&0&11&32\\ 0&0&4&4&0&0&0&6&0&14\\ 0&1&0&0&0&2&1&0&0&4\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 63: $e_{2}e_{3}e_{4}+e_{1}e_{3}e_{5}+e_{1}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{0}e_{2}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&62&62&0&0&0&64&0&188\\ 0&45&0&0&0&48&45&0&0&138\\ 28&0&0&0&32&0&0&0&32&92\\ 0&0&17&17&0&0&0&18&0&52\\ 0&9&0&0&0&10&9&0&0&28\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&1&1&0&0&0&2&0&4\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 64: $e_{2}e_{3}e_{4}+e_{0}e_{3}e_{5}+e_{1}e_{4}e_{6}+e_{0}e_{2}e_{7}+e_{1}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&62&62&0&0&0&64&0&188\\ 0&45&0&0&0&47&45&0&0&137\\ 30&0&0&0&32&0&0&0&32&94\\ 0&0&19&19&0&0&0&21&0&59\\ 0&10&0&0&0&12&10&0&0&32\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&1&1&0&0&0&2&0&4\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 65: $e_{2}e_{3}e_{4}+e_{0}e_{2}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{7}+e_{1}e_{4}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&61&61&0&0&0&62&0&184\\ 0&42&0&0&0&44&42&0&0&128\\ 22&0&0&0&25&0&0&0&25&72\\ 0&0&14&14&0&0&0&16&0&44\\ 0&5&0&0&0&6&5&0&0&16\\ 2&0&0&0&3&0&0&0&3&8\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 66: $e_{0}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{1}e_{2}e_{7}+e_{0}e_{3}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&61&61&0&0&0&66&0&188\\ 0&45&0&0&0&47&45&0&0&137\\ 28&0&0&0&33&0&0&0&33&94\\ 0&0&19&19&0&0&0&21&0&59\\ 0&10&0&0&0&12&10&0&0&32\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&2&2&0&0&0&0&0&4\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 67: $e_{0}e_{3}e_{4}+e_{1}e_{3}e_{5}+e_{2}e_{4}e_{6}+e_{0}e_{5}e_{6}+e_{1}e_{2}e_{7}$ 
$\left(\begin{smallmatrix} 0&0&61&61&0&0&0&56&0&178\\ 0&36&0&0&0&47&36&0&0&119\\ 16&0&0&0&26&0&0&0&26&68\\ 0&0&8&8&0&0&0&17&0&33\\ 0&7&0&0&0&0&7&0&0&14\\ 3&0&0&0&0&0&0&0&0&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 68: $e_{3}e_{4}e_{5}+e_{1}e_{2}e_{6}+e_{0}e_{3}e_{6}+e_{0}e_{1}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&61&61&0&0&0&62&0&184\\ 0&46&0&0&0&49&46&0&0&141\\ 36&0&0&0&38&0&0&0&38&112\\ 0&0&28&28&0&0&0&29&0&85\\ 0&19&0&0&0&20&19&0&0&58\\ 9&0&0&0&11&0&0&0&11&31\\ 0&0&3&3&0&0&0&4&0&10\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 69: $e_{2}e_{3}e_{4}+e_{1}e_{2}e_{5}+e_{0}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{2}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&60&60&0&0&0&62&0&182\\ 0&40&0&0&0&42&40&0&0&122\\ 22&0&0&0&25&0&0&0&25&72\\ 0&0&12&12&0&0&0&13&0&37\\ 0&5&0&0&0&6&5&0&0&16\\ 1&0&0&0&2&0&0&0&2&5\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 70: $e_{2}e_{3}e_{4}+e_{0}e_{2}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{7}+e_{1}e_{4}e_{7}$ $\left(\begin{smallmatrix} 0&0&60&60&0&0&0&56&0&176\\ 0&35&0&0&0&43&35&0&0&113\\ 16&0&0&0&24&0&0&0&24&64\\ 0&0&8&8&0&0&0&15&0&31\\ 0&5&0&0&0&0&5&0&0&10\\ 2&0&0&0&0&0&0&0&0&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 71: $e_{0}e_{3}e_{4}+e_{1}e_{5}e_{6}+e_{0}e_{2}e_{7}+e_{1}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&59&59&0&0&0&62&0&180\\ 0&45&0&0&0&46&45&0&0&136\\ 28&0&0&0&32&0&0&0&32&92\\ 0&0&19&19&0&0&0&21&0&59\\ 0&8&0&0&0&10&8&0&0&26\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&1&1&0&0&0&0&0&2\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 72: $e_{1}e_{3}e_{4}+e_{2}e_{3}e_{5}+e_{0}e_{4}e_{5}+e_{1}e_{2}e_{6}+e_{0}e_{3}e_{6}+e_{0}e_{1}e_{7}+e_{0}e_{2}e_{8}$ $\left(\begin{smallmatrix} 0&0&58&58&0&0&0&60&0&176\\ 0&37&0&0&0&39&37&0&0&113\\ 20&0&0&0&22&0&0&0&22&64\\ 0&0&10&10&0&0&0&11&0&31\\ 0&3&0&0&0&4&3&0&0&10\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 73: 
$e_{2}e_{3}e_{4}+e_{1}e_{2}e_{5}+e_{0}e_{3}e_{6}+e_{0}e_{4}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&58&58&0&0&0&62&0&178\\ 0&39&0&0&0&41&39&0&0&119\\ 20&0&0&0&24&0&0&0&24&68\\ 0&0&11&11&0&0&0&11&0&33\\ 0&4&0&0&0&6&4&0&0&14\\ 1&0&0&0&1&0&0&0&1&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 74: $e_{2}e_{3}e_{4}+e_{1}e_{2}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{5}e_{6}+e_{0}e_{1}e_{7}$ $\left(\begin{smallmatrix} 0&0&58&58&0&0&0&56&0&172\\ 0&34&0&0&0&41&34&0&0&109\\ 16&0&0&0&22&0&0&0&22&60\\ 0&0&7&7&0&0&0&12&0&26\\ 0&4&0&0&0&0&4&0&0&8\\ 1&0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 75: $e_{0}e_{2}e_{3}+e_{1}e_{4}e_{5}+e_{0}e_{6}e_{7}+e_{1}e_{6}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&57&57&0&0&0&62&0&176\\ 0&35&0&0&0&43&35&0&0&113\\ 22&0&0&0&21&0&0&0&21&64\\ 0&0&8&8&0&0&0&15&0&31\\ 0&2&0&0&0&6&2&0&0&10\\ 2&0&0&0&0&0&0&0&0&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 76: $e_{2}e_{3}e_{4}+e_{0}e_{2}e_{5}+e_{1}e_{4}e_{5}+e_{1}e_{3}e_{6}+e_{0}e_{4}e_{6}+e_{0}e_{1}e_{7}$ $\left(\begin{smallmatrix} 0&0&56&56&0&0&0&56&0&168\\ 0&31&0&0&0&36&31&0&0&98\\ 14&0&0&0&17&0&0&0&17&48\\ 0&0&5&5&0&0&0&8&0&18\\ 0&2&0&0&0&0&2&0&0&4\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 77: $e_{1}e_{2}e_{3}+e_{1}e_{4}e_{5}+e_{0}e_{3}e_{6}+e_{0}e_{2}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&56&56&0&0&0&60&0&172\\ 0&35&0&0&0&39&35&0&0&109\\ 18&0&0&0&21&0&0&0&21&60\\ 0&0&8&8&0&0&0&10&0&26\\ 0&2&0&0&0&4&2&0&0&8\\ 1&0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 78: $e_{0}e_{2}e_{3}+e_{1}e_{4}e_{5}+e_{0}e_{6}e_{7}+e_{1}e_{6}e_{7}$ $\left(\begin{smallmatrix} 0&0&56&56&0&0&0&54&0&166\\ 0&35&0&0&0&42&35&0&0&112\\ 16&0&0&0&21&0&0&0&21&58\\ 0&0&8&8&0&0&0&15&0&31\\ 0&2&0&0&0&0&2&0&0&4\\ 2&0&0&0&0&0&0&0&0&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 79: $e_{3}e_{4}e_{5}+e_{1}e_{2}e_{6}+e_{0}e_{2}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&56&56&0&0&0&56&0&168\\ 0&46&0&0&0&48&46&0&0&140\\ 
36&0&0&0&38&0&0&0&38&112\\ 0&0&28&28&0&0&0&29&0&85\\ 0&19&0&0&0&20&19&0&0&58\\ 9&0&0&0&11&0&0&0&11&31\\ 0&0&1&1&0&0&0&2&0&4\\ 0&1&0&0&0&1&1&0&0&3\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 80: $e_{1}e_{2}e_{4}+e_{1}e_{3}e_{5}+e_{0}e_{4}e_{5}+e_{0}e_{3}e_{6}+e_{0}e_{2}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&55&55&0&0&0&58&0&168\\ 0&32&0&0&0&34&32&0&0&98\\ 14&0&0&0&17&0&0&0&17&48\\ 0&0&6&6&0&0&0&6&0&18\\ 0&1&0&0&0&2&1&0&0&4\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 81: $e_{1}e_{3}e_{4}+e_{0}e_{3}e_{5}+e_{2}e_{4}e_{5}+e_{1}e_{2}e_{6}+e_{0}e_{2}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&55&55&0&0&0&56&0&166\\ 0&37&0&0&0&38&37&0&0&112\\ 18&0&0&0&20&0&0&0&20&58\\ 0&0&10&10&0&0&0&11&0&31\\ 0&1&0&0&0&2&1&0&0&4\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 82: $e_{1}e_{3}e_{4}+e_{1}e_{2}e_{5}+e_{0}e_{2}e_{6}+e_{0}e_{5}e_{7}$ $\left(\begin{smallmatrix} 0&0&55&55&0&0&0&54&0&164\\ 0&33&0&0&0&38&33&0&0&104\\ 16&0&0&0&21&0&0&0&21&58\\ 0&0&6&6&0&0&0&10&0&22\\ 0&2&0&0&0&0&2&0&0&4\\ 1&0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 83: $e_{0}e_{2}e_{4}+e_{0}e_{3}e_{5}+e_{1}e_{2}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{1}e_{8}+e_{2}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&54&54&0&0&0&60&0&168\\ 0&30&0&0&0&38&30&0&0&98\\ 20&0&0&0&14&0&0&0&14&48\\ 0&0&4&4&0&0&0&10&0&18\\ 0&0&0&0&0&4&0&0&0&4\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 84: $e_{2}e_{3}e_{4}+e_{1}e_{3}e_{5}+e_{0}e_{4}e_{5}+e_{0}e_{1}e_{6}+e_{0}e_{2}e_{7}$ $\left(\begin{smallmatrix} 0&0&54&54&0&0&0&54&0&162\\ 0&30&0&0&0&33&30&0&0&93\\ 12&0&0&0&15&0&0&0&15&42\\ 0&0&4&4&0&0&0&6&0&14\\ 0&1&0&0&0&0&1&0&0&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 85: $e_{0}e_{2}e_{4}+e_{0}e_{3}e_{5}+e_{1}e_{2}e_{6}+e_{1}e_{3}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&53&53&0&0&0&56&0&162\\ 0&30&0&0&0&33&30&0&0&93\\ 14&0&0&0&14&0&0&0&14&42\\ 0&0&4&4&0&0&0&6&0&14\\ 0&0&0&0&0&2&0&0&0&2\\ 
0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 86: $e_{0}e_{1}e_{4}+e_{2}e_{3}e_{5}+e_{0}e_{2}e_{6}+e_{1}e_{3}e_{7}$ $\left(\begin{smallmatrix} 0&0&52&52&0&0&0&52&0&156\\ 0&30&0&0&0&32&30&0&0&92\\ 8&0&0&0&10&0&0&0&10&28\\ 0&0&4&4&0&0&0&6&0&14\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 87: $e_{1}e_{2}e_{3}+e_{0}e_{4}e_{5}+e_{0}e_{6}e_{7}+e_{0}e_{1}e_{8}$ $\left(\begin{smallmatrix} 0&0&52&52&0&0&0&60&0&164\\ 0&33&0&0&0&38&33&0&0&104\\ 16&0&0&0&21&0&0&0&21&58\\ 0&0&8&8&0&0&0&6&0&22\\ 0&1&0&0&0&2&1&0&0&4\\ 1&0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 88: $e_{0}e_{2}e_{4}+e_{1}e_{3}e_{5}+e_{1}e_{2}e_{6}+e_{0}e_{3}e_{6}+e_{0}e_{1}e_{7}$ $\left(\begin{smallmatrix} 0&0&51&51&0&0&0&52&0&154\\ 0&25&0&0&0&27&25&0&0&77\\ 8&0&0&0&10&0&0&0&10&28\\ 0&0&2&2&0&0&0&3&0&7\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 89: $e_{1}e_{2}e_{3}+e_{0}e_{4}e_{5}+e_{0}e_{1}e_{6}+e_{0}e_{2}e_{7}+e_{0}e_{3}e_{8}$ $\left(\begin{smallmatrix} 0&0&49&49&0&0&0&56&0&154\\ 0&24&0&0&0&29&24&0&0&77\\ 8&0&0&0&10&0&0&0&10&28\\ 0&0&3&3&0&0&0&1&0&7\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 90: $e_{0}e_{1}e_{2}+e_{3}e_{4}e_{5}+e_{0}e_{3}e_{6}+e_{1}e_{4}e_{6}+e_{2}e_{5}e_{6}$ $\left(\begin{smallmatrix} 0&0&49&49&0&0&0&56&0&154\\ 0&21&0&0&0&35&21&0&0&77\\ 14&0&0&0&7&0&0&0&7&28\\ 0&0&0&0&0&0&0&7&0&7\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 91: $e_{1}e_{2}e_{3}+e_{0}e_{4}e_{5}+e_{0}e_{6}e_{7}$ $\left(\begin{smallmatrix} 0&0&49&49&0&0&0&50&0&148\\ 0&33&0&0&0&37&33&0&0&103\\ 16&0&0&0&21&0&0&0&21&58\\ 0&0&4&4&0&0&0&6&0&14\\ 0&1&0&0&0&0&1&0&0&2\\ 1&0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 92: $e_{0}e_{3}e_{4}+e_{1}e_{2}e_{5}+e_{0}e_{1}e_{6}+e_{0}e_{2}e_{7}$ $\left(\begin{smallmatrix} 0&0&48&48&0&0&0&50&0&146\\ 0&22&0&0&0&24&22&0&0&68\\ 6&0&0&0&8&0&0&0&8&22\\ 0&0&1&1&0&0&0&1&0&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 93: $e_{0}e_{1}e_{4}+e_{2}e_{3}e_{5}+e_{0}e_{2}e_{6}+e_{1}e_{3}e_{6}$ $\left(\begin{smallmatrix} 
0&0&48&48&0&0&0&50&0&146\\ 0&21&0&0&0&26&21&0&0&68\\ 8&0&0&0&7&0&0&0&7&22\\ 0&0&0&0&0&0&0&3&0&3\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 94: $e_{0}e_{2}e_{3}+e_{1}e_{4}e_{5}+e_{0}e_{1}e_{6}$ $\left(\begin{smallmatrix} 0&0&45&45&0&0&0&46&0&136\\ 0&19&0&0&0&21&19&0&0&59\\ 4&0&0&0&5&0&0&0&5&14\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 95: $e_{1}e_{2}e_{3}+e_{0}e_{1}e_{4}+e_{0}e_{2}e_{5}+e_{0}e_{3}e_{6}$ $\left(\begin{smallmatrix} 0&0&42&42&0&0&0&44&0&128\\ 0&14&0&0&0&16&14&0&0&44\\ 2&0&0&0&3&0&0&0&3&8\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 96: $e_{0}e_{1}e_{2}+e_{3}e_{4}e_{5}$ $\left(\begin{smallmatrix} 0&0&38&38&0&0&0&38&0&114\\ 0&19&0&0&0&20&19&0&0&58\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&1&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 97: $e_{0}e_{1}e_{3}+e_{0}e_{2}e_{4}+e_{1}e_{2}e_{5}$ $\left(\begin{smallmatrix} 0&0&37&37&0&0&0&38&0&112\\ 0&10&0&0&0&11&10&0&0&31\\ 0&0&0&0&1&0&0&0&1&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 98: $e_{0}e_{1}e_{2}+e_{0}e_{3}e_{4}+e_{0}e_{5}e_{6}+e_{0}e_{7}e_{8}$ $\left(\begin{smallmatrix} 0&0&36&36&0&0&0&56&0&128\\ 0&8&0&0&0&28&8&0&0&44\\ 8&0&0&0&0&0&0&0&0&8\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 99: $e_{0}e_{1}e_{2}+e_{0}e_{3}e_{4}+e_{0}e_{5}e_{6}$ $\left(\begin{smallmatrix} 0&0&35&35&0&0&0&42&0&112\\ 0&8&0&0&0&15&8&0&0&31\\ 2&0&0&0&0&0&0&0&0&2\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 100: $e_{0}e_{1}e_{2}+e_{0}e_{3}e_{4}$ $\left(\begin{smallmatrix} 0&0&30&30&0&0&0&32&0&92\\ 0&4&0&0&0&6&4&0&0&14\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$ 101: $e_{0}e_{1}e_{2}$ $\left(\begin{smallmatrix} 0&0&19&19&0&0&0&20&0&58\\ 0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)$
2206.13649v5
http://arxiv.org/abs/2206.13649v5
Critical points of discrete periodic operators
\documentclass[12pt]{amsart} \usepackage{amsfonts, amssymb, amsmath, latexsym, mathtools, amscd} \usepackage{hyperref,graphicx} \usepackage[dvipsnames]{xcolor} \usepackage{url} \usepackage{enumitem} \usepackage{wrapfig} \usepackage{ulem} \DeclareSymbolFont{bbold}{U}{bbold}{m}{n} \DeclareSymbolFontAlphabet{\mathbbold}{bbold} \headheight=8pt \topmargin=0pt \textheight=613pt \textwidth=466pt \oddsidemargin=1pt \evensidemargin=1pt \newtheorem{Theorem}{Theorem}[section] \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Conjecture}[Theorem]{Conjecture} \newtheorem{Fact}[Theorem]{Fact} \theoremstyle{remark} \newtheorem{Remark}[Theorem]{Remark} \newtheorem{Example}[Theorem]{Example} \newtheorem{Definition}[Theorem]{Definition} \newcommand{\CC}{{\mathbb C}} \newcommand{\NN}{{\mathbb N}} \newcommand{\PP}{{\mathbb P}} \newcommand{\RR}{{\mathbb R}} \newcommand{\TT}{{\mathbb T}} \newcommand{\ZZ}{{\mathbb Z}} \newcommand{\calA}{{\mathcal A}} \newcommand{\calC}{{\mathcal C}} \newcommand{\calE}{{\mathcal E}} \newcommand{\calF}{{\mathcal F}} \newcommand{\calG}{{\mathcal G}} \newcommand{\calN}{{\mathcal N}} \newcommand{\calV}{{\mathcal V}} \newcommand{\boldOne}{{\mathbbold{1}}} \DeclareMathOperator\supp{supp} \DeclareMathOperator\Newt{N} \DeclareMathOperator\N{N} \DeclareMathOperator\Vol{Vol} \DeclareMathOperator\MVol{MV} \DeclareMathOperator\conv{conv} \newcommand{\vol}{{\rm vol}} \newcommand{\Var}{{\it Var}} \newcommand{\bfe}{{\bf e}} \newcommand{\bfz}{{\bf 0}} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\defcolor}[1]{{\color{blue}#1}} \newcommand{\demph}[1]{\defcolor{{\sl #1}}} \title{Critical points of Discrete Periodic Operators} \author{M.~Faust} \address{Matthew Faust, Department of Mathematics, Texas A\&M University, College Station, Texas 77843, USA} \email{[email protected]} \urladdr{https://mattfaust.github.io} \author{F.~Sottile} \address{Frank Sottile, Department of Mathematics, 
Texas A\&M University, College Station, Texas 77843, USA} \email{[email protected]} \urladdr{https://franksottile.github.io} \thanks{Research supported in part by Simons Collaboration Grant for Mathematicians 636314 and NSF grants DMS-2246031, DMS-2201005, DMS-2052572, and DMS-2000345.} \subjclass[2010]{81U30, 81Q10, 14M25} \keywords{Bloch variety, Schr\"odinger operator, Kushnirenko Theorem, Toric variety, Newton polytope} \begin{document} \begin{abstract} We study the spectra of operators on periodic graphs using methods from combinatorial algebraic geometry. Our main result is a bound on the number of complex critical points of the Bloch variety, together with an effective criterion for when this bound is attained. We show that this criterion holds for $\ZZ^2$- and $\ZZ^3$-periodic graphs with sufficiently many edges and use our results to establish the spectral edges conjecture for some $\ZZ^2$-periodic graphs. \end{abstract} \maketitle \section*{Introduction} The spectrum of a $\ZZ^d$-periodic self-adjoint discrete operator $L$ consists of intervals in $\RR$. Floquet theory reveals that the spectrum is the image of the coordinate projection to $\RR$ of the \demph{Bloch variety} (also known as the dispersion relation), an algebraic hypersurface in $(S^1)^d\times\RR$. This coordinate projection defines a function $\lambda$ on the Bloch variety, which is our main object of study. When the operator is discrete, the complexification of the Bloch variety is an algebraic variety in $(\CC^\times)^d\times\CC$. Thus techniques from algebraic geometry and related areas may be used to address some questions in spectral theory. In the 1990's Gieseker, Kn\"orrer, and Trubowitz~\cite{GKT} used algebraic geometry to study the Schr\"odinger operator on the square lattice $\ZZ^2$ with a periodic potential. They established a number of results, including Floquet isospectrality and the irreducibility of its Fermi varieties, and determined the density of states.
Recently there has been a surge of interest in using algebraic methods in spectral theory. This includes investigating the irreducibility of Bloch and Fermi varieties~\cite{FLM,FLM23,LiSh,Liu22}, Fermi isospectrality~\cite{Liu+}, density of states~\cite{Kravaris}, and extrema and critical points of the projection $\lambda$ on Bloch varieties~\cite{Berk,DKS,Liu22}. We use techniques from combinatorial algebraic geometry and geometric combinatorics~\cite{GBCP} to study critical points of the function $\lambda$ on the Bloch variety of a discrete periodic operator. We now discuss motivation and sketch our results. Some background on spectral theory is sketched in Section~\ref{S:one}, and Section~\ref{Sec:AG} gives some background from algebraic geometry. An old and widely believed conjecture in mathematical physics concerns the structure of the Bloch variety near the edges of the spectral bands. Namely, that for a sufficiently general operator $L$ (as defined in Section~\ref{S:oneone}), the extrema of the band functions $\lambda_j$ on the Bloch variety are nondegenerate in that their Hessians are nondegenerate quadratic forms. This \demph{spectral edges nondegeneracy conjecture} is stated in~\cite[Conj.\ 5.25]{KuchBAMS}, and it also appears in~\cite{CdV,KuchBook,Nov81,Nov83}. Important notions, such as effective mass in solid state physics, the Liouville property, Green's function asymptotics, Anderson localization, homogenization, and many other assumed properties in physics, depend upon this conjecture. The spectral edges conjecture states that for generic parameters, each extreme value is attained by a single band, the extrema are isolated, and the extrema are nondegenerate. We discuss progress for discrete operators on periodic graphs. In 2000, Klopp and Ralston~\cite{RP} proved that for Laplacians with generic potential each extreme value is attained by a single band. 
In 2015, Filonov and Kachkovskiy~\cite{FILI} gave a class of two-dimensional operators for which the extrema are isolated. They also show~\cite[Sect.\ 6]{FILI} that the spectral edges conjecture may fail for a Laplacian with general potential, which does not have generic parameters in the sense of Section~\ref{S:oneone}. Most recently, Liu~\cite{Liu22} proved that the extrema are isolated for the Schr\"{o}dinger operator acting on the square lattice. We consider a property which implies the spectral edges nondegeneracy conjecture: A family of operators has the \demph{critical points property} if for almost all operators in the family, all critical points of the function $\lambda$ (not just the extrema) are nondegenerate. Algebraic geometry was used in~\cite{DKS} to prove the following dichotomy: For a given algebraic family of discrete periodic operators, either the critical points property holds for that family, or almost all operators in the family have Bloch varieties with degenerate critical points. In~\cite{DKS}, this dichotomy was used to establish the critical points property for the family of Laplace-Beltrami difference operators on the $\ZZ^2$-periodic diatomic graph of Figure~\ref{F:denseDimer}. Bloch varieties for these operators were shown to have at most 32 critical points. A single example was computed to have 32 nondegenerate critical points. Standard arguments from algebraic geometry (see Section~\ref{S:final}) implied that, for this family, the critical points property, and therefore also the spectral edges nondegeneracy conjecture, holds. We extend part of that argument to operators on many periodic graphs. Let $L$ be a discrete operator on a $\ZZ^d$-periodic graph $\Gamma$ (see Section~\ref{S:one}). Its (complexified) Bloch variety is a hypersurface in the product $(\CC^\times)^d\times\CC$ of a complex torus and the complex line defined by a Laurent polynomial $D(z,\lambda)$. 
The last coordinate $\lambda$, corresponding to projection onto the spectral axis, is the function on the Bloch variety whose critical points we study. Accordingly, we will call critical points of the function $\lambda$ on the Bloch variety ``critical points of the Bloch variety.'' One contribution of this paper is to shift focus from spectral band functions $\lambda_j$ defined on a compact torus to a global function on the complex Bloch variety. Another is to use the perspective of nonlinear optimization to address a question concerning the spectrum of a discrete periodic operator. We state our first result. Let $\Gamma$ be a connected $\ZZ^d$-periodic graph (as in Section~\ref{S:oneone}). Fix a fundamental domain \defcolor{$W$} for the $\ZZ^d$-action on the vertices of $\Gamma$. The support \defcolor{$\calA(\Gamma)$} of $\Gamma$ records the local connectivity between translates of the fundamental domain. It is the set of $a \in \ZZ^d$ such that $\Gamma$ has an edge with endpoints in both $W$ and $a{+}W$. \medskip \noindent{\bf Theorem A.} \ {\it The function $\lambda$ on the Bloch variety of a discrete operator on $\Gamma$ has at most \[ d!\,|W|^{d+1}\vol(\calA(\Gamma)) \] isolated critical points. Here, $\vol(\calA(\Gamma))$ is the Euclidean volume of the convex hull of $\calA(\Gamma)$.}\medskip This bound uses an outer approximation for the Newton polytope of $D(z,\lambda)$ (see Lemma~\ref{L:genericNP}) and a study of the equations defining critical points of the function $\lambda$ on the Bloch variety, called the \demph{critical point equations}~\eqref{Eq:CPE}. Corollary~\ref{C:Bound} is a strengthening of Theorem A. When the bound is attained, all critical points are isolated.\medskip \noindent{\bf Example.} We illustrate Theorem A on the example from~\cite[Sect.\ 4]{DKS}.
Figure~\ref{F:denseDimer} shows a periodic graph $\Gamma$ with $d=2$ whose fundamental domain $W$ has \begin{figure}[htb] \centering \includegraphics{images/denseDimer_graph} \qquad\qquad \raisebox{14.5pt}{\includegraphics{images/diamond}} \caption{A dense periodic graph $\Gamma$ with the convex hull of $\calA(\Gamma)$.}\label{F:denseDimer} \end{figure} two vertices and its support $\calA(\Gamma)$ consists of the columns of the matrix $(\begin{smallmatrix}0&1&0&-1&0\\0&0&1&0&-1\end{smallmatrix})$. Figure~\ref{F:denseDimer} also displays the convex hull of $\calA(\Gamma)$. As $|W|=2$ and $\vol(\calA(\Gamma))=2$, Theorem A implies that any Bloch variety for an operator on $\Gamma$ has at most \[ d!\,|W|^{d+1}\vol(\calA(\Gamma))\ =\ 2!\cdot 2^{2+1}\cdot 2=32 \] critical points, which is the bound demonstrated in~\cite{DKS}. \hfill$\diamond$\medskip The bound of Corollary~\ref{C:Bound} arises as follows. There is a natural compactification of $(\CC^\times)^d \times \CC$ by a projective toric variety $X$ associated to the Newton polytope, $P$, of $D(z,\lambda)$~\cite[Ch.~5]{GKZ}. The critical point equations become linear equations on $X$ whose number of solutions is the degree of $X$. By Kushnirenko's Theorem~\cite{Kushnirenko}, this degree is the normalized volume of $P$, $(d{+}1)!\vol(P)$. This bound is attained exactly when there are no solutions at infinity, which is the set $\defcolor{\partial X}\vcentcolon=X \smallsetminus\bigl( (\CC^\times)^d \times \CC\bigr)$ of points added in the compactification. The compactified Bloch variety is a hypersurface in $X$. A \demph{vertical face} of $P$ is one that contains a segment parallel to the $\lambda$-axis. Corollary~\ref{C:SingThmB} shows that when $P$ has no vertical faces, any solution on $\partial X$ to the critical point equations is a singular point of the intersection of this hypersurface with $\partial X$.
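The arithmetic in the example above is easy to check mechanically. The following Python sketch (our own encoding of the data, not taken from the paper's software) computes the Euclidean volume of $\conv(\calA(\Gamma))$ by the shoelace formula and then evaluates the bound of Theorem A:

```python
from math import factorial

# Support A(Γ) of the graph in Figure 1: columns of
# ( 0 1 0 -1  0 )
# ( 0 0 1  0 -1 )
support = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1)]

def shoelace_area(pts):
    """Euclidean area of a polygon whose vertices are listed in cyclic order."""
    total = 0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        total += x0 * y1 - x1 * y0
    return abs(total) / 2

# Vertices of conv(A(Γ)) in counterclockwise order; (0,0) is interior.
hull = [(1, 0), (0, 1), (-1, 0), (0, -1)]

d = 2          # rank of the Z^d-action
W_size = 2     # |W|, the number of vertices in the fundamental domain
vol = shoelace_area(hull)                        # vol(A(Γ)) = 2.0
bound = factorial(d) * W_size ** (d + 1) * vol   # 2! * 2^3 * 2 = 32
print(bound)   # 32.0
```

This reproduces the count of at most $32$ critical points obtained in~\cite{DKS} for the graph of Figure~\ref{F:denseDimer}.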
We state a simplified version of Corollary~\ref{C:SingThmB}.\medskip \noindent{\bf Theorem B.} \ {\it If $P$ has no vertical faces, then the bound of Corollary~\ref{C:Bound} is attained exactly when the compactified Bloch variety is smooth along $\partial X$.}\medskip We give a class of graphs whose typical Bloch variety is smooth at infinity and whose Newton polytopes have no vertical faces. A periodic graph $\Gamma$ is \demph{dense} if it has every possible edge, given its support $\calA(\Gamma)$ and fundamental domain $W$ (see Section~\ref{S:C}). The following is a consequence of Corollary~\ref{C:SingThmB} and Theorem~\ref{Th:denseDense}. \medskip \noindent{\bf Theorem C.} \ {\it When $d=2$ or $3$ the Bloch variety of a generic operator on a dense periodic graph is smooth along $\partial X$, its Newton polytope has no vertical faces, and the bound of Theorem A is attained.}\medskip Theorem C is an example of a recent trend in applications of algebraic geometry in which a highly structured optimization problem is shown to unexpectedly achieve a combinatorial bound on the number of critical points. A first instance was~\cite{DKS}, which inspired~\cite{EDD} and~\cite{LNRW}. Section~\ref{S:one} presents background on the spectrum of an operator on a periodic graph, and formulates our goal to bound the number of critical points of the function $\lambda$ on the Bloch variety. At the beginning of Section~\ref{S:A}, we recast extrema of the spectral band functions using the language of constrained optimization. Theorems A, B, and C are proven in Sections~\ref{S:A}, \ref{S:B}, and \ref{S:C}. In Section~\ref{S:final}, we use these results to prove the spectral edges conjecture for operators on three periodic graphs. \section{Operators on periodic graphs}\label{S:one} Let $d$ be a positive integer.
We write $\defcolor{\CC^\times}\vcentcolon=\CC\smallsetminus\{0\}$ for the multiplicative group of nonzero complex numbers and $\defcolor{\TT}\vcentcolon=\{z\in\CC^\times\mid |z|=1\}$ for its maximal compact subgroup. Note that if $z\in\TT$, then $\overline{z}=z^{-1}$. We write edges of a graph as pairs, $(u,v)$ with $u,v$ vertices, and understand that $(u,v)=(v,u)$. \subsection{Operators on periodic graphs}\label{S:oneone} For more, see~\cite[Ch.\ 4]{BerkKuch}. A (\defcolor{$\ZZ^d$}-)\demph{periodic graph} is a simple (no multiple edges or loops) connected undirected graph $\Gamma$ with a free cocompact action of $\ZZ^d$. Thus $\ZZ^d$ acts freely on the vertices, \defcolor{$\calV(\Gamma)$}, and edges, \defcolor{$\calE(\Gamma)$}, of $\Gamma$ preserving incidences, and $\ZZ^d$ has finitely many orbits on each of $\calV(\Gamma)$ and $\calE(\Gamma)$. Figure~\ref{F:first_graphs} \begin{figure}[htb] \centering \includegraphics{images/graphene_graph} \qquad \ \includegraphics{images/K4_graph} \caption{Two $\ZZ^2$-periodic graphs.} \label{F:first_graphs} \end{figure} shows two $\ZZ^2$-periodic graphs. One is the honeycomb lattice and the other is an abelian cover of $K_4$, the complete graph on four vertices. It is useful but not necessary to consider $\Gamma$ immersed in $\RR^d$ so that $\ZZ^d$ acts on $\Gamma$ via translations. The graphs in Figure~\ref{F:first_graphs} are each immersed in $\RR^2$, and for each we show two independent vectors that generate the $\ZZ^2$-action. Choose a fundamental domain for this $\ZZ^d$-action whose boundary does not contain a vertex of $\Gamma$. In Figure~\ref{F:first_graphs}, we have shaded the fundamental domains. Let \defcolor{$W$} be the vertices of $\Gamma$ lying in the fundamental domain. Then $W$ is a set of representatives of $\ZZ^d$-orbits of $\calV(\Gamma)$. Every $\ZZ^d$-orbit of edges contains one or two edges incident on vertices in $W$. An edge incident on $W$ has the form $(u,a{+}v)$ for some $u,v\in W$ and $a\in\ZZ^d$. 
(If $a=0$, then $u\neq v$ as $\Gamma$ has no loops, and there are no restrictions when $a\neq 0$.) The support $\defcolor{\calA(\Gamma)}$ of $\Gamma$ is the set of $a\in\ZZ^d$ such that $(u,a{+}v)\in\calE(\Gamma)$ for some $u,v\in W$. This finite set depends on the choice of fundamental domain and it is centrally symmetric in that $\calA(\Gamma)=-\calA(\Gamma)$. As $\Gamma$ is connected, the $\ZZ$-span of $\calA(\Gamma)$ is $\ZZ^d$. For both graphs in Figure~\ref{F:first_graphs}, this set consists of the columns of the matrix $(\begin{smallmatrix}0&1&0&-1&\hspace{3pt}0\\0&0&1&\hspace{3pt}0&-1\end{smallmatrix})$. A \demph{labeling} of $\Gamma$ is a pair of functions $\defcolor{e}\colon\calE(\Gamma)\to\RR$ (edge weights) and $\defcolor{V}\colon\calV(\Gamma)\to\RR$ (potential) that is $\ZZ^d$-invariant (constant on orbits). The set of labelings is the finite-dimensional vector space $\RR^E\times\RR^W$, where $E$ is the set of $\ZZ^d$-orbits on $\calE(\Gamma)$. Given a labeling $c=(e,V)$, we have the discrete operator \defcolor{$L_c$} acting on functions $f$ on $\calV(\Gamma)$. Then $L_c(f)$ is defined by its value at $u\in\calV(\Gamma)$, \[ L_c(f)(u)\ \vcentcolon=\ V(u)f(u)\ +\ \sum_{(u,v)\in \calE(\Gamma)} e_{(u,v)}(f(u)-f(v))\,. \] We call $L_c$ a \demph{discrete periodic operator} on $\Gamma$, and may often omit the subscript $c$. It is a bounded self-adjoint operator on the Hilbert space $\ell^2(\Gamma)$ of square-summable functions on $\calV(\Gamma)$, and has real spectrum. \subsection{Floquet theory} As the action of $\ZZ^d$ on $\Gamma$ commutes with the operator $L$, we may apply the Floquet transform, which reveals important structure of its spectrum. References for this \demph{Floquet theory} include~\cite{BerkKuch, KuchBook, KuchBAMS}.
The Floquet (Fourier) transform is a linear isometry $\ell^2(\Gamma)\xrightarrow{\,\sim\,}L^2(\TT^d,\CC^W)$, from $\ell^2(\Gamma)$ to square-inte\-grable functions on $\defcolor{\TT^d}$, the compact torus, with values in the vector space $\CC^W$. The torus $\TT^d$ is the group of unitary characters of $\ZZ^d$. For $z\in\TT^d$ and $a\in\ZZ^d$, the corresponding character value is the Laurent monomial \[ \defcolor{z^a}\ \vcentcolon=\ z_1^{a_1} z_2^{a_2}\dotsb z_d^{a_d}\,. \] The Floquet transform \defcolor{$\hat{f}$} of a function $f$ on $\calV(\Gamma)$ is a function on $\TT^d\times\calV(\Gamma)$ such that for $z\in\TT^d$ and $u\in\calV(\Gamma)$, \begin{equation}\label{Eq:FloquetTransform} \hat{f}(z,a{+}u)\ =\ z^a \hat{f}(z,u)\qquad\mbox{ for }a\in\ZZ^d\,. \end{equation} Thus $\hat{f}$ is determined by its values at the vertices $W$ in the fundamental domain. Let $\hat{f}\in L^2(\TT^d,\CC^W)$. Then for $u\in W$, $\hat{f}(u)$ is a function on $\TT^d$. The action of the operator $L$ on the Floquet transform $\hat{f}$ is given by the formula \begin{equation}\label{Eq:Laurent_operator} L(\hat{f})(u)\ =\ V(u)\hat{f}(u)\ +\ \sum_{(u,a+v)\in\calE(\Gamma)} e_{(u,a+v)} \bigl(\hat{f}(u) - z^a \hat{f}(v)\bigr)\,, \end{equation} as $\hat{f}(a{+}v)=z^a\hat{f}(v)$. The exponents $a$ which appear lie in the support $\calA(\Gamma)$ of $\Gamma$. The simplicity of this expression is because $L$ commutes with the $\ZZ^d$-action. Thus in the standard basis for $\CC^W$, the operator $L$ becomes multiplication by a square matrix whose rows and columns are indexed by elements of $W$. Writing \defcolor{$\delta_{u,v}$} for the Kronecker delta function, the matrix entry in position $(u,v)$ is the function \begin{equation}\label{Eq:matrix-entry} \delta_{u,v} \Bigl( V(u)\ +\ \sum_{(u,w)\in\calE(\Gamma)} e_{(u,w)} \Bigr)\ -\ \sum_{(u,a+v)\in\calE(\Gamma)} e_{(u,a+v)} z^a\ . 
\end{equation} \begin{Example}\label{Ex:Graphene_operator} Let $\Gamma$ be the hexagonal lattice from Figure~\ref{F:first_graphs}. Figure~\ref{F:localGraphene} shows a labeling in a neighborhood of its fundamental domain. \begin{figure}[htb] \centering \begin{picture}(105,92)(0,-1) \put(0,0){\includegraphics{images/graphene_FD}} \put(31,31){\small$u$} \put(95,31){\small$u$} \put(64,89){\small$u$} \put(69,53){\small$v$} \put( 5,53){\small$v$} \put(37,-2){\small$v$} \put(54,15){\small$x$} \put(29,62){\small$y$} \put(52,39){\small$\alpha$} \put(32,14){\small$\gamma$} \put(68,73){\small$\gamma$} \put(14,40){\small$\beta$} \put(86,44){\small$\beta$} \end{picture} \caption{A labeling of the hexagonal lattice.} \label{F:localGraphene} \end{figure} Thus $W=\{u,v\}$ consists of two vertices and there are three (orbits of) edges, with labels $\alpha,\beta,\gamma$. Let $(x,y)\in\TT^2$. The operator $L$ is \begin{align*} L(\hat{f})(u)\ &=\ V(u)\hat{f}(u)\ +\ \alpha(\hat{f}(u)-\hat{f}(v))\ +\ \beta(\hat{f}(u)-x^{-1}\hat{f}(v))+\gamma(\hat{f}(u)-y^{-1}\hat{f}(v))\ ,\\ L(\hat{f})(v)\ &=\ V(v)\hat{f}(v)\ +\ \alpha(\hat{f}(v)-\hat{f}(u))\ +\ \beta(\hat{f}(v)-x\hat{f}(u))\ \ \ \ + \gamma(\hat{f}(v)-y\hat{f}(u))\ . \end{align*} Collecting coefficients of $\hat{f}(u),\hat{f}(v)$, we represent $L$ by the $2\times 2$-matrix, \begin{equation}\label{Eq:Graphene_Matrix} L\ =\ \left( \begin{matrix} V(u)+\alpha+\beta+\gamma &-\alpha-\beta x^{-1}-\gamma y^{-1}\\ -\alpha-\beta x-\gamma y & V(v)+\alpha+\beta+\gamma \end{matrix}\right)\ , \end{equation} whose entries are Laurent polynomials in $x,y$. Notice that the support $\calA(\Gamma)$ of $\Gamma$ equals the set of exponents of monomials which appear in $L$. Observe that for $(x,y)\in\TT^2$, $L^T=\overline{L}$, so that $L$ is Hermitian, showing again that the operator $L$ is self-adjoint.\hfill$\diamond$ \end{Example} What we saw in Example~\ref{Ex:Graphene_operator} holds in general. 
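As a symbolic sanity check (ours, not part of the paper), the matrix~\eqref{Eq:Graphene_Matrix} can be assembled in sympy and the identity $L(z)^T=L(z^{-1})$ verified directly; on the torus, where $\overline{z^a}=z^{-a}$, this identity is the Hermitian property.

```python
import sympy as sp

x, y, alpha, beta, gamma, Vu, Vv = sp.symbols('x y alpha beta gamma V_u V_v')

# The 2x2 matrix (Eq:Graphene_Matrix) representing the hexagonal-lattice operator
L = sp.Matrix([
    [Vu + alpha + beta + gamma, -alpha - beta/x - gamma/y],
    [-alpha - beta*x - gamma*y, Vv + alpha + beta + gamma],
])

# L(z)^T = L(z^{-1}) as an identity of Laurent polynomials
Linv = L.subs({x: 1/x, y: 1/y}, simultaneous=True)
assert (L.T - Linv).applyfunc(sp.simplify) == sp.zeros(2, 2)
```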
In the standard basis for $\CC^W$, $L=L_c$ is multiplication by a $|W|\times|W|$-matrix \defcolor{$L(z)=L_c(z)$} with each entry~\eqref{Eq:matrix-entry} a finite sum of monomials with exponents from $\calA(\Gamma)$ (a \demph{Laurent polynomial with support $\calA(\Gamma)$}). Note that $(u,a{+}v)\in\calE(\Gamma)$ if and only if $(-a{+}u,v)\in \calE(\Gamma)$, these edges have the same label, and for $z\in\TT^d$, $\overline{z^a}=z^{-a}$. Thus for $z\in\TT^d$, the matrix is Hermitian, as $L(z)^T=L(z^{-1})=\overline{L(z)}$. \subsection{Critical points of the Bloch variety} As $L(z)$ is Hermitian for $z\in\TT^d$, its spectrum is real and consists of its $|W|$ eigenvalues \begin{equation}\label{Eq:bandFunctions} \lambda_1(z)\ \leq\ \lambda_2(z)\ \leq\ \dotsb\ \leq\ \lambda_{|W|}(z)\,. \end{equation} These eigenvalues vary continuously with $z\in\TT^d$, and $\lambda_j(z)$ is called the $j$th \demph{spectral band function}, $\lambda_j\colon\TT^d\to\RR$. Its image is an interval in $\RR$, called the \demph{$j$th spectral band}. The eigenvalues~\eqref{Eq:bandFunctions} are the roots of the characteristic polynomial \begin{equation}\label{Eq:charPoly} \defcolor{D(z,\lambda)}\ =\ D_c(z,\lambda)\ \vcentcolon=\ \det(L_c(z)\ -\ \lambda I)\,, \end{equation} which we call the \demph{dispersion function}. Its vanishing defines a hypersurface \begin{equation} \defcolor{\Var(D_c(z,\lambda))} = \{(z,\lambda) \in \TT^d\times\RR \mid D(z,\lambda)=0 \}\,, \end{equation} called the \demph{Bloch variety} of the operator $L$\footnote{This is also called the dispersion relation in the literature. We use the term Bloch variety as it is an algebraic variety and our perspective is to use methods from algebraic geometry in spectral theory.}. The Bloch variety is the union of $|W|$ branches with the $j$th branch equal to the graph of the $j$th spectral band function. The image of the Bloch variety under the projection to $\RR$ is the spectrum \defcolor{$\sigma(L)$} of the operator $L$. 
This projection is a function \defcolor{$\lambda$} on the Bloch variety. Identifying the $j$th branch/graph with $\TT^d$, the restriction of $\lambda$ to that branch gives the corresponding spectral band function $\lambda_j$. Figure~\ref{F:smooth_graphene} shows this for the operator $L$ on the hexagonal lattice with edge weights $6,3,2$ and zero potential $V$---for this we unfurl $\TT^2$, representing it by $[-\frac{\pi}{2},\frac{3\pi}{2}]^2\subset\RR^2$, which is a fundamental domain in its universal cover. \begin{figure}[htb] \centering \begin{picture}(178,138)(-39,0) \put(0,0){\includegraphics[height=140.4pt]{images/graphene_dispersion}} \put(130,15){\small$x$} \put(-1,31){\small$y$} \put(47.5,129){\small$\RR$} \put(134,28){\small$\TT^2$} \put(-39,49){\small$\sigma(L)$} \thicklines \put(-14,50){{\color{white}\line(3,-1){52}}} \put(-14,49.5){{\color{white}\line(3,-1){52}}} \put(-14,49){{\color{white}\line(3,-1){52}}} \put(-14,48.5){{\color{white}\line(3,-1){52}}} \put(-14,48){{\color{white}\line(3,-1){52}}} \thinlines \put(40,31.){\color{white}{\circle*{1.5}}} \put(39,31.3333){\color{white}{\circle*{2}}} \put(38.5,31.5){\color{white}{\circle*{2}}} \put(37,32){\color{white}{\circle*{3.5}}} \put(-14,54){\vector(3,1){54}} \put(-14,49){\vector(3,-1){54}} \end{picture} \caption{A Bloch variety and spectral bands for the hexagonal lattice.} \label{F:smooth_graphene} \end{figure} (That is, by quasimomenta in $[-\frac{\pi}{2},\frac{3\pi}{2}]^2$.) It has two branches with each the graph of the corresponding spectral band function. An endpoint of a spectral band (\demph{spectral edge}) is the image of an extremum of some band function $\lambda_j(z)$. For the hexagonal lattice at these parameters, each band function has two nondegenerate extrema, and these give the four spectral edges. These are also local extrema of the function $\lambda$ on the Bloch variety. 
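The spectral bands in Figure~\ref{F:smooth_graphene} can be recomputed numerically. A sketch (ours, not part of the paper), assuming the assignment $\alpha,\beta,\gamma=6,3,2$ of the stated edge weights and zero potential (any assignment of these three weights gives the same two bands):

```python
import numpy as np

# Eigenvalues of the Hermitian matrix (Eq:Graphene_Matrix) on a grid of T^2,
# with (assumed) labels alpha, beta, gamma = 6, 3, 2 and zero potential.
alpha, beta, gamma = 6.0, 3.0, 2.0
s = alpha + beta + gamma
theta = 2 * np.pi * np.arange(40) / 40        # grid containing 0 and pi

TX, TY = np.meshgrid(theta, theta)
c = alpha + beta * np.exp(-1j * TX) + gamma * np.exp(-1j * TY)

# The off-diagonal entries are -c and -conj(c), so the eigenvalues are s -+ |c|
lam1, lam2 = s - np.abs(c), s + np.abs(c)

# Two bands, [0, 10] and [12, 22]; their endpoints are the four spectral edges
print(lam1.min(), lam1.max(), lam2.min(), lam2.max())
```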
The \demph{spectral edges conjecture}~\cite[Conj.\ 5.25]{KuchBAMS} for a periodic graph $\Gamma$ asserts that for generic values of the parameters $(e,V)$, each spectral edge is attained by a single band, the extrema on the Bloch variety are isolated, and all extrema are nondegenerate (the spectral band function $\lambda_j$ has a full rank Hessian matrix). Here, generic means that there is a nonconstant polynomial $p(e,V)$ in the parameters such that when $p(e,V)\neq 0$, these desired properties hold. The entries in the matrix $L(z)$ and the function~\eqref{Eq:charPoly} defining the Bloch variety are all (Laurent) polynomials. In this setting it is natural to allow complex parameters, $e\colon\calE(\Gamma)\to\CC$, $V\colon\calV(\Gamma)\to\CC$ and variables $z\in(\CC^\times)^d$, $\lambda\in\CC$. With complex parameters and variables, $L_c(z)$ is no longer Hermitian, but it does satisfy $L_c(z)^T=L_c(z^{-1})$ and the Bloch variety is the complex algebraic hypersurface $\Var(D_{c}(z,\lambda))$ in $(\CC^\times)^d\times\CC$ defined by the vanishing of the dispersion function $D_{c}(z,\lambda)$ of $L_c(z)$~\eqref{Eq:charPoly}. In passing to the complex Bloch variety we may no longer distinguish branches $\lambda_j(z)$ of $\lambda$. At a smooth point $(z_0,\lambda_0)$ whose projection $z$ to $(\CC^\times)^d$ is regular (in that $\frac{\partial D}{\partial\lambda}(z_0,\lambda_0)\neq 0$), there is a locally defined function $f$ of $z$ with $\lambda_0=f(z_0)$ and $D(z,f(z))=0$ on its domain, but this is not necessarily a global function of $z$. Consequently, we will consider the projection to the last coordinate to be a function $\lambda$ on the Bloch variety, and then study its differential geometry, including its critical points. Nondegeneracy of spectral edges is implied by the stronger condition that all critical points of the function $\lambda$ on the complex Bloch variety are nondegenerate. Understanding the critical points of $\lambda$ is a first step. 
Our aim is to bound the number of (isolated) critical points of $\lambda$ on the Bloch variety of a given operator $L$, give criteria for when the bound is attained, prove that it is attained for generic operators on a class of graphs, and finally to use these results to prove the spectral edges conjecture for three graphs. We treat these in the following four sections. \section{Bounding the number of critical points}\label{S:A} We first recast extrema of spectral band functions in terms of constrained optimization. The complex Bloch variety is the hypersurface $\Var(D(z,\lambda))$ in $(\CC^\times)^d\times\CC$ defined by the vanishing of the dispersion function $D(z,\lambda)$. \defcolor{Critical points} of the function $\lambda$ on the Bloch variety are points of the Bloch variety where the gradients in $(\CC^\times)^d\times\CC$ of $\lambda$ and $D(z,\lambda)$ are linearly dependent. That is, a critical point is a point $(z,\lambda)\in(\CC^\times)^d\times\CC$ with $D(z,\lambda)=0$ such that either the gradient $\nabla D(z,\lambda)$ vanishes or we have $\frac{\partial D}{\partial z_i}(z,\lambda)=0$ for $i=1,\dotsc,d$ and $\frac{\partial D}{\partial\lambda}(z,\lambda)\neq0$ (as $\nabla\lambda=(0,\dotsc,0,1)$). In either case, we have \[ D(z,\lambda)\ =\ 0 \qquad\mbox{and}\qquad \frac{\partial D}{\partial z_i}\ =\ 0 \qquad\mbox{for } i=1,\dots,d\,. \] Since $z_i\neq 0$, we obtain the equivalent system \begin{equation}\label{Eq:CPE} D(z,\lambda)\ =\ z_1\frac{\partial D}{\partial z_1}\ =\ \ \dotsb\ \ =\ z_d\frac{\partial D}{\partial z_d}\ =\ 0 \ , \end{equation} which we call the \demph{critical point equations}. \begin{Proposition}\label{P:CPE} A point $(z,\lambda)\in(\CC^\times)^d\times\CC$ is a critical point of the function $\lambda$ on the Bloch variety $\Var(D(z,\lambda))$ if and only if~\eqref{Eq:CPE} holds. \end{Proposition} \begin{proof} We already showed that at a critical point of $\lambda$, the equations~\eqref{Eq:CPE} hold.
Suppose now that $(z,\lambda)\in(\CC^\times)^d\times\CC$ is a solution to~\eqref{Eq:CPE}. As $D(z,\lambda)=0$, the point lies on the Bloch variety. As $z\in(\CC^\times)^d$, no coordinate $z_i$ vanishes, which implies that $\frac{\partial D}{\partial z_i}(z,\lambda)=0$ for $i=1,\dotsc,d$. Thus the gradients $\nabla \lambda$ and $\nabla D$ are linearly dependent at $(z,\lambda)$, showing that it is a critical point. \end{proof} \begin{Remark} A point $(z_0,\lambda_0)\in\TT^d\times\RR$ such that $\lambda_0=\lambda_j(z_0)$ is an extreme value of the spectral band function $\lambda_j$ is also a critical point of the Bloch variety. Indeed, either the gradient $\nabla D$ vanishes at $(z_0,\lambda_0)$ or it does not. If $\nabla D(z_0,\lambda_0)=\bfz$, then $(z_0,\lambda_0)$ is a critical point. If $\nabla D(z_0,\lambda_0)\neq 0$, then the Bloch variety is smooth at $(z_0,\lambda_0)$, which is thus a smooth point of the graph of $\lambda_j$. As $\lambda_0=\lambda_j(z_0)$ is an extreme value of $\lambda_j$, the tangent plane is horizontal at $(z_0,\lambda_0)$. This implies that $\lambda_j$ is differentiable near $z_0$ (by the implicit function theorem) and that $\frac{\partial \lambda_j}{\partial z_i}(z_0)=0$ for $i=1,\dotsc,d$. Thus the gradients of $\lambda$ and $D$ at $(z_0,\lambda_0)$ are linearly dependent, showing that it is a critical point.\hfill$\diamond$ \end{Remark} B\'ezout's Theorem~\cite[Sect.\ 4.2.1]{Shaf13} gives an upper bound on the number of isolated critical points: We may multiply each Laurent polynomial in~\eqref{Eq:CPE} by a monomial to clear denominators and obtain ordinary polynomials. The product of their degrees is an upper bound for the number of common zeroes that are isolated in the complex domain. Polyhedral bounds that exploit the structure of the Laurent polynomials are typically much smaller. Sources for these are~\cite[Ch.\ 7]{CLOII}, \cite[Ch.\ 5]{GKZ}, and~\cite[Ch.\ 3]{IHP}.
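The remark above can be checked concretely on the hexagonal lattice. With the (assumed) labels $\alpha,\beta,\gamma=6,3,2$ and zero potential, the spectral edge $\lambda=10$ is attained at $x=y=-1$, and that point satisfies the critical point equations~\eqref{Eq:CPE}; a sympy sketch (ours, not part of the paper):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')

# Dispersion function D = det(L(x,y) - lam*I) for the hexagonal lattice
# with (assumed) labels alpha, beta, gamma = 6, 3, 2 and zero potential
s = 6 + 3 + 2
D = (s - lam)**2 - (6 + 3/x + 2/y) * (6 + 3*x + 2*y)

# Critical point equations (Eq:CPE): D = x dD/dx = y dD/dy = 0
system = [D, x * sp.diff(D, x), y * sp.diff(D, y)]

# The band extremum lambda = 10 at (x, y) = (-1, -1) solves all three equations
pt = {x: -1, y: -1, lam: 10}
assert all(sp.simplify(f.subs(pt)) == 0 for f in system)
```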
The results just cited bound the number of isolated common zeroes, counted with multiplicities. An isolated common zero $z_0$ of polynomials $f_1,\dots,f_{d+1}$ on $(\CC^\times)^d\times\CC$ has multiplicity 1 exactly when the gradients of $f_1,\dots,f_{d+1}$ span the cotangent space at $z_0$; otherwise its multiplicity exceeds 1 (see~\cite[Ch.\ 4\ Def.\ 2.1]{CLOII} and~\cite[Ch.\ 8.7 \ Def.\ 8]{CLO}). Let \defcolor{$\CC[z^{\pm},\lambda]$} be the ring of Laurent polynomials in $z_1,\dotsc,z_d,\lambda$ in which $\lambda$ occurs with only nonnegative exponents. Note that $D(z,\lambda)\in \CC[z^{\pm},\lambda]$. The \demph{support} $\defcolor{\calA(\psi)}\subset\ZZ^d\times\NN$ of a polynomial $\psi\in\CC[z^{\pm},\lambda]$ is the set of exponents of monomials in $\psi$. The \demph{Newton polytope} $\defcolor{\calN(\psi)}\vcentcolon=\conv(\calA(\psi))$ of $\psi$ is the convex hull of its support. Write \defcolor{$\vol(\calN(\psi))$} for the $(d{+}1)$-dimensional Euclidean volume of the Newton polytope of $\psi$. \begin{Example}\label{Ex:GraphenePolytope} We continue the example of the hexagonal lattice. Writing \defcolor{$\ell$} for $\alpha{+}\beta{+}\gamma {-}\lambda$, the dispersion function \defcolor{$D(x,y;\lambda)$} of the matrix~\eqref{Eq:Graphene_Matrix} is \[ (V(u)+\ell)(V(v)+\ell)\ -\ (-\alpha-\beta x^{-1}-\gamma y^{-1})(-\alpha-\beta x-\gamma y)\,. \] In Figure~\ref{F:GraphenePolytope}, the monomials in $D(x,y;\lambda)$ label the columns of a $3\times 9$ array; each column is the exponent vector of the monomial labeling it.
Figure~\ref{F:GraphenePolytope} also shows its Newton polytope, which has volume 2.\hfill$\diamond$ \begin{figure}[htb] \centering \begin{tabular}{ccccccccc} $x$&$xy^{-1}$&$y^{-1}$&$x^{-1}$&$x^{-1}y$&$y$&$1$&$\lambda$&$\lambda^2$\\ $1$& $1$ & $0$ & $-1$ & $-1$ &$0$&$0$&$0$& $0$ \\ $0$& $-1$ & $-1$ & $0$ & $1$ &$1$&$0$&$0$& $0$ \\ $0$& $0$ & $0$ & $0$ & $0$ &$0$&$0$&$1$& $2$ \end{tabular} \qquad\qquad \raisebox{-45pt}{\begin{picture}(85,100)(-5,0) \put(-5,-5){\includegraphics{images/GraphenePolytope}} \put(43,88){\small$\lambda^2$} \put(68,23){\small$x$} \put(41,2){\small$xy^{-1}$} \put(10,4){\small$y^{-1}$} \put(-3,36){\small$x^{-1}$} \put(62,39){\small$y$} \end{picture}} \caption{Support and Newton polytope of the hexagonal lattice operator.} \label{F:GraphenePolytope} \end{figure} \end{Example} \begin{Theorem}\label{Th:Bound} For a polynomial $\psi\in\CC[z^{\pm},\lambda]$, the critical point equations for $\psi$ \begin{equation}\label{Eq:psi_system} \psi(z,\lambda)\ =\ z_1 \frac{\partial\psi}{\partial z_1}\ =\ \dotsb\ =\ z_d \frac{\partial\psi}{\partial z_d}\ =\ 0 \end{equation} have at most $(d{+}1)!\vol(\calN(\psi))$ isolated solutions in $(\CC^\times)^d\times\CC$, counted with multiplicity. When the bound is attained, all solutions are isolated. \end{Theorem} We prove this at the end of the section. As the Bloch variety is defined by the dispersion function $D(z,\lambda)=\det(L(z)-\lambda I)$, we deduce the following from Theorem~\ref{Th:Bound}. \begin{Corollary}\label{C:Bound} The number of isolated critical points of the function $\lambda$ on the Bloch variety for an operator $L$ on a discrete periodic graph is at most $(d{+}1)!\vol(\calN(D))$. \end{Corollary} Theorem A follows from this and Lemma~\ref{L:genericNP}, which asserts that \[ \calN(D)\ \subset\ |W|\conv(\calA(\Gamma)\cup\{\bfe\})\,,\] where $\bfe=(0,\dotsc,0,1)$.
This containment implies the inequality \[ (d{+}1)!\vol(\calN(D))\ \leq \ (d{+}1)!|W|^{d+1}\vol(\conv(\calA(\Gamma)\cup\{\bfe\}))\ =\ d!\,|W|^{d+1}\vol(\calA(\Gamma))\,. \] We prove Theorem~\ref{Th:Bound} and Corollary~\ref{C:Bound} after developing some preliminary results. \subsection{A little algebraic geometry}\label{Sec:AG} For more on algebraic geometry, see~\cite{CLO,Shaf13}. An (affine) \demph{variety} is the set of common zeroes of some polynomials $f_1,\dotsc,f_r\in\CC[x_1,\dotsc,x_n]$, \[ \defcolor{\Var(f_1,\dotsc,f_r)}\ \vcentcolon=\ \{x\in\CC^n\mid f_1(x)=\dotsb=f_r(x)=0\}\,. \] We also call this the set of \demph{solutions} to the system $f_1=\dotsb=f_r=0$. We may replace any factor $\CC$ in $\CC^n$ by $\CC^\times$, and then allow the corresponding variable to have negative exponents. The complement of a variety $X$ is a (Zariski) open set. This defines the \demph{Zariski topology} in which varieties are the closed sets. A variety is irreducible if it is not the union of two proper subvarieties. For an irreducible variety, any nonempty open set is dense (even in the classical topology) and any nonempty classically open set is dense in the Zariski topology. Maps $f\colon X\to Y\subset\CC^m$ of varieties are given by $m$ polynomials on $X$, and the image $f(X)$ contains an open subset of its closure. Suppose that $X=\Var(f_1,\dotsc,f_r)$. The \demph{smooth (nonsingular)} locus of $X$ is the open subset of points of $X$ where the Jacobian of $f_1,\dotsc,f_r$ has maximal rank on $X$. Let $f$ be a single polynomial. A point $x$ is a smooth point of the hypersurface $\Var(f)$ defined by $f$ if $f(x)=0$, so that $x\in\Var(f)$, and the gradient $\nabla f(x)=(\frac{\partial f}{\partial x_1}(x),\dotsc,\frac{\partial f}{\partial x_n}(x))$ is nonzero, so that some partial derivative of $f$ does not vanish at $x$. The point $x\in\Var(f)$ is \demph{singular} if all partial derivatives of $f$ vanish at $x$.
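These definitions can be made concrete with a small computation (ours, not an example from the paper): for the cuspidal curve $\Var(\lambda^2-z^3)$, the only singular point is the origin.

```python
import sympy as sp

z, lam = sp.symbols('z lambda')

# The cuspidal curve Var(f) for f = lambda^2 - z^3
f = lam**2 - z**3
grad = [sp.diff(f, z), sp.diff(f, lam)]      # gradient (-3 z^2, 2 lambda)

# A point of Var(f) is singular when both partial derivatives also vanish
sing = sp.solve([f] + grad, [z, lam], dict=True)
print(sing)                                  # the origin is the only singular point
```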
The kernel of the Jacobian at $x\in X=\Var(f_1,\dotsc,f_r)$ is the (Zariski) tangent space at $x$. The dimension of an irreducible variety is the dimension of a tangent space at any smooth point. An isolated point $x$ of $X$ has multiplicity one exactly when it is nonsingular. \begin{Remark} Our definition of smooth and singular points of a variety depends upon its defining polynomials. For example, the variety defined by $(z-\lambda)^2$ is singular at every point. This {\sl scheme-theoretic} notion of singularity is essential to our arguments in Sections~\ref{S:B} and~\ref{S:C}, and is standard in algebraic geometry.\hfill$\diamond$ \end{Remark} If $X$ is irreducible, then any proper subvariety has smaller dimension. If $f\colon X\to Y$ is a map of varieties with $f(X)$ dense in $Y$, then there is an open subset $U$ of $Y$ such that if $y\in U$, then $\dim f^{-1}(y) + \dim Y = \dim X$. We also have \demph{Bertini's Theorem}: if $X$ is smooth, then $U$ may be chosen so that for every $y\in U$, $f^{-1}(y)$ is smooth. Projective space \defcolor{$\PP(\CC^n)$} is the set of one-dimensional linear subspaces (lines) of $\CC^n$ and is compact. It has dimension $n{-}1$ and subvarieties are given by homogeneous polynomials. The set \defcolor{$U_0$} of lines spanned by vectors whose initial coordinate is nonzero is isomorphic to $\CC^{n-1}$ under $v\mapsto\mbox{span}(1,v)$ and $\PP(\CC^n)$ is a compactification of $U_0\simeq\CC^{n-1}$. \subsection{Polyhedral bounds}\label{Sec:bounds} The expression $(d{+}1)!\vol(\calN(\psi))$ of Theorem~\ref{Th:Bound} is the \demph{normalized volume} of $\calN(\psi)$. This is Kushnirenko's bound~\cite[Ch.\ 6, Thm.\ 2.2]{GKZ} for the number of isolated solutions in $(\CC^\times)^{d+1}$ to a system of $d{+}1$ polynomial equations, all with Newton polytope $\calN(\psi)$. 
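For the hexagonal-lattice dispersion function of Example~\ref{Ex:GraphenePolytope}, this normalized volume can be computed numerically from the support displayed in Figure~\ref{F:GraphenePolytope}; a sketch (ours, not part of the paper) using scipy:

```python
from math import factorial
from scipy.spatial import ConvexHull

# The nine exponent vectors of D(x, y; lambda) for the hexagonal lattice
# (the columns of the array in Figure F:GraphenePolytope)
support = [(1, 0, 0), (1, -1, 0), (0, -1, 0), (-1, 0, 0), (-1, 1, 0),
           (0, 1, 0), (0, 0, 0), (0, 0, 1), (0, 0, 2)]

vol = ConvexHull(support).volume     # Euclidean volume of the Newton polytope
d = 2
bound = factorial(d + 1) * vol       # Kushnirenko's bound (d+1)! vol(N(psi))
print(vol, bound)                    # approximately 2.0 and 12.0
```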
To prove Theorem~\ref{Th:Bound}, we first explain why Kushnirenko's bound applies to the system~\eqref{Eq:psi_system}, and then why it bounds the number of isolated solutions on the larger space $(\CC^\times)^d\times\CC$. For a monomial $z^a\lambda^j$ in $\CC[z^{\pm},\lambda]$, we have $a\in\ZZ^d$ and $j\in\NN$. For each $i=1,\dotsc,d$, this monomial is an eigenvector for the operator $z_i\frac{\partial}{\partial z_i}$ with eigenvalue $a_i$. Thus $\calA(z_i\frac{\partial}{\partial z_i}\psi)\subset\calA(\psi)$, giving the inclusion $\calN(z_i\frac{\partial}{\partial z_i}\psi)\subset\calN(\psi)$. A refined version of Kushnirenko's Theorem in which the polynomials may have different Newton polytopes is Bernstein's Theorem~\cite[Sect.\ 7.5]{CLOII}, which is stated in terms of a quantity called the mixed volume, whose properties are developed in~\cite[Ch.\ IV]{Ewald}. The mixed volume of polytopes is monotone under inclusion of polytopes, and it equals the normalized volume when all polytopes coincide. It follows that the theorems of Bernstein and Kushnirenko together give the bound of $(d{+}1)!\vol(\calN(\psi))$ for the number of isolated solutions to the system~\eqref{Eq:psi_system} in $(\CC^\times)^{d+1}$. To extend this to solutions in the larger space $(\CC^\times)^d\times\CC$, we develop some theory of projective toric varieties. \subsection{Projective toric varieties}\label{Sec:Toric} For Kushnirenko's Theorem and our extension, we replace the nonlinear equations~\eqref{Eq:psi_system} on $(\CC^\times)^d\times\CC$ by linear equations on a projective variety. We follow the discussion of~\cite[Ch.\ 3]{IHP}. Let $\defcolor{f}\in\CC[z^{\pm},\lambda]$ be a polynomial with support $\calA=\calA(f)$. To simplify the presentation, we will at times assume that the origin $\bfz$ lies in $\calA$. The results hold without this assumption, as explained in~\cite[Ch.\ 3]{IHP}.
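To preview the construction with a standard toric example (ours, not from the paper): for $\calA=\{(0,0),(1,0),(0,1),(1,1)\}$, the monomial map sends $(z_1,z_2)$ to $(1,z_1,z_2,z_1z_2)\in\PP^3$, and the closure of its image is the quadric surface $x_0x_3=x_1x_2$, whose degree $2$ equals the normalized volume $2!\,\vol(\conv(\calA))$ of the unit square.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

# Monomial map for A = {(0,0), (1,0), (0,1), (1,1)}: coordinates (1, z1, z2, z1*z2)
coords = [sp.Integer(1), z1, z2, z1 * z2]

# Every point of the image satisfies the quadric x0*x3 - x1*x2 = 0,
# which cuts out the closure X_A; its degree is 2 = 2! * area(unit square)
quadric = coords[0] * coords[3] - coords[1] * coords[2]
assert sp.expand(quadric) == 0
```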
Writing \defcolor{$\CC^\calA$} for the vector space with basis indexed by elements of $\calA$, consider the map \begin{eqnarray*} \varphi_\calA\ \colon\ (\CC^\times)^d\times\CC &\longrightarrow& \CC^\calA\\ (z,\lambda)&\longmapsto & (z^a\lambda^j \mid (a,j)\in\calA)\ . \end{eqnarray*} This map linearizes nonlinear polynomials. Indeed, write $f$ as a sum of monomials, \[ f\ =\ \sum_{(a,j)\in\calA} c_{(a,j)}z^a\lambda^j \ . \] If $\{x_{(a,j)}\mid (a,j)\in\calA\}$ are variables (coordinate functions) on $\CC^\calA$, then \begin{equation}\label{Eq:LinearForm} \defcolor{\Lambda_f}\ \vcentcolon=\ \sum_{(a,j)\in\calA} c_{(a,j)} x_{(a,j)} \end{equation} is a linear form on $\CC^\calA$, and we have $f(z,\lambda)=\Lambda_f(\varphi_\calA(z,\lambda))=\vcentcolon\defcolor{\varphi^*_\calA(\Lambda_f)}$. Since $\bfz\in\calA$, the corresponding coordinate $x_{\bfz}$ of $\varphi_\calA$ is 1 and so the image of $\varphi_\calA$ lies in the principal affine open subset $U_{\bfz}$ of the projective space $\defcolor{\PP^\calA}\vcentcolon=\PP(\CC^\calA)=\PP^{|\calA|-1}$. This is the subset of $\PP^\calA$ where $x_{\bfz}\neq 0$ and it is isomorphic to the affine space $\CC^{|\calA|-1}$. We define \defcolor{$X_\calA$} to be the closure of the image $\varphi_\calA((\CC^\times)^{d+1})$ in the projective space $\PP^\calA$, which is a projective toric variety. Because the map $\varphi_{\calA}$ is continuous on $(\CC^\times)^d\times\CC$, $X_\calA$ is also the closure of the image $\varphi_\calA((\CC^\times)^d\times\CC)$. The map $\varphi_\calA$ is not necessarily injective; we describe its fibers. Let $\ZZ\calA\subset\ZZ^{d+1}$ be the sublattice generated by all differences $\alpha{-}\beta$ for $\alpha,\beta\in\calA$. When $\bfz\in\calA$ this is the sublattice generated by $\calA$, and it has full rank $d+1$ if and only if $\conv(\calA)$ has full dimension $d+1$. Let \defcolor{$G_\calA$} be $\mbox{Hom}(\ZZ^{d+1}/\ZZ\calA,\CC^\times)\subset(\CC^\times)^{d+1}$, which acts on $(\CC^\times)^d\times\CC$.
The fibers of $\varphi_\calA$ are exactly the orbits of $G_\calA$ on $(\CC^\times)^d\times\CC$. If $\conv(\calA)$ does not have full dimension, then $G_\calA$ has positive dimension, as do all fibers of $\varphi_\calA$; otherwise, $G_\calA$ is a finite group and $\varphi_\calA$ has finite fibers. On the torus $(\CC^\times)^{d+1}$, $G_\calA$ acts freely and $\varphi_\calA((\CC^\times)^{d+1})$ is identified with $(\CC^\times)^{d+1}/G_\calA$. To describe the fibers of $\varphi_\calA$ on $(\CC^\times)^d\times\{0\} = ((\CC^\times)^d\times\CC)\smallsetminus(\CC^\times)^{d+1}$, note that $(\CC^\times)^{d+1}$ acts on this set through the homomorphism $\pi$ that sends its last ($\lambda$) coordinate to $\{1\}$. Thus the fibers of $\varphi_\calA$ on $(\CC^\times)^d\times\{0\}$ are exactly the orbits of $\pi(G_\calA)\subset(\CC^\times)^d$. \begin{Proposition} The dimension of $X_\calA$ is the dimension of $\conv(\calA)$. The fibers of $\varphi_\calA$ on $(\CC^\times)^{d+1}$ are the orbits of $G_\calA$ and its fibers on $(\CC^\times)^d\times\{0\}$ are the orbits of $\pi(G_\calA)$. \end{Proposition} We return to the situation of Theorem~\ref{Th:Bound}. Let $\psi\in\CC[z^{\pm},\lambda]$ be a polynomial with support $\calA$. As each polynomial in~\eqref{Eq:psi_system} has support a subset of $\calA$, each corresponds to a linear form on $\PP^\calA$ as in~\eqref{Eq:LinearForm}. The corresponding system of linear forms defines a linear subspace \defcolor{$M_\psi$} of $\PP^\calA$. We have the following proposition (a version of~\cite[Lemma 3.5]{IHP}). \begin{Proposition}\label{P:bijection} The solutions to~\eqref{Eq:psi_system} are the inverse images under $\varphi_\calA$ of points in the linear section $\varphi_\calA((\CC^\times)^d\times\CC)\cap M_\psi$. When $\varphi_\calA$ is an injection, it is a bijection between solutions to~\eqref{Eq:psi_system} on $(\CC^\times)^d\times\CC$ and points in $\varphi_\calA((\CC^\times)^d\times\CC)\cap M_\psi$.
\end{Proposition} \begin{proof}[Proof of Theorem~\ref{Th:Bound}] When $\vol(\calN(\psi))=0$, so that $\calN(\psi)$ does not have full dimension $d+1$, then each fiber of $\varphi_{\calA}$ is positive-dimensional and so by Proposition~\ref{P:bijection} there are no isolated solutions to~\eqref{Eq:psi_system}. Suppose that $\vol(\calN(\psi))>0$. Then every fiber of $\varphi_\calA$ is an orbit of the finite group $G_\calA$. Over points of $\varphi_\calA((\CC^\times)^{d+1})$, each fiber consists of $|G_\calA|$ points and over $\varphi_\calA((\CC^\times)^d\times\{0\})$ each fiber consists of $|\pi(G_\calA)|\leq|G_\calA|$ points. As $X_\calA$ is the closure of $\varphi_\calA((\CC^\times)^d\times\CC)$, the number of isolated points in $X_\calA\cap M_\psi$ is at least the number of isolated points in $\varphi_\calA((\CC^\times)^d\times\CC)\cap M_\psi$, both counted with multiplicity. The degree of the projective variety $X_\calA$ is an upper bound for the number of isolated points in $X_\calA\cap M_\psi$, as is explained in~\cite[Ch.\ 3.3]{IHP}. There, the product of $|G_\calA|$ and the degree of $X_\calA$ is shown to be $(d{+}1)!\vol(\calN(\psi))$, the normalized volume of the Newton polytope of $\psi$. This gives the bound of Theorem~\ref{Th:Bound}. That all points are isolated when the degree bound is attained is Proposition~\ref{P:sharp} in the next section. \end{proof} \section{Proof of Theorem B}\label{S:B} We give conditions for when the upper bound of Corollary~\ref{C:Bound} is attained. By Proposition~\ref{P:CPE}, the critical points of the function $\lambda$ on the Bloch variety $\Var(D)$ are the solutions in $(\CC^\times)^d\times\CC$ to the critical point equations~\eqref{Eq:CPE}. Let $\defcolor{\calA}=\calA(D)$ be the support of the polynomial $D$.
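The bound of Theorem~\ref{Th:Bound} can be seen in a toy case (a sketch using sympy; the choice $\psi=\lambda-z-1/z$, the dispersion function of the discrete Laplacian on $\ZZ$, is ours for illustration). Its Newton polytope is the triangle with vertices $(-1,0)$, $(1,0)$, $(0,1)$, of normalized volume $2!\cdot 1=2$, and the critical point equations attain this bound:

```python
import sympy as sp

z, lam = sp.symbols('z lam')

# d = 1 illustration: psi = lam - z - 1/z, whose Newton polytope is the
# triangle conv{(-1,0), (1,0), (0,1)} with normalized volume 2! * 1 = 2
psi = lam - z - 1/z

# critical point equations: psi = 0 and z * dpsi/dz = 0
sols = sp.solve([psi, z * sp.diff(psi, z)], [z, lam], dict=True)
print(len(sols))  # 2 critical points, attaining the bound
```

The two critical points are $(z,\lambda)=(\pm 1,\pm 2)$, the band edges of this one-dimensional operator.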
The critical points are $\varphi_\calA^{-1}(X_\calA\cap M_{D})$, where $X_\calA\subset\PP^\calA$ is the closure of $\varphi_\calA((\CC^\times)^d\times\CC)$ and $M_{D}$ is the subspace of $\PP^\calA$ defined by linear forms corresponding (as in~\eqref{Eq:LinearForm}) to the polynomials in~\eqref{Eq:CPE}. For the bound of Theorem~\ref{Th:Bound} and Corollary~\ref{C:Bound}, note that the number of isolated points of $X_\calA\cap M_{D}$ is at most the product of the degree of $X_\calA$ with the cardinality of a fiber of $\varphi_\calA$, which is $(d{+}1)!\vol(\calN(D))$. We establish Theorem B concerning the sharpness of this bound by characterizing when the inequality of Theorem~\ref{Th:Bound} is strict and then interpreting that for the critical point equations. \begin{Remark}\label{R:excess} Let $X\subset\PP^n$ be a variety of dimension $d$ and $M\subset\PP^n$ a linear subspace of codimension $d$. The number of points in $X\cap M$ does not depend on $M$ when the intersection is transverse; it is the \demph{degree} of $X$~\cite[p.\ 234]{Shaf13}. When the intersection is not transverse, intersection theory gives a refinement~\cite[Ch.\ 6]{Fulton}. For each irreducible component $Z$ of the intersection $X\cap M$, there is a positive integer---the intersection multiplicity along $Z$---such that the sum of these multiplicities is the degree of $X$. When $Z$ is positive-dimensional this number is the degree of a zero-cycle constructed on $Z$ (it is at least the degree of $Z$) and when $Z$ is zero-dimensional (a point), it is the local multiplicity~\cite[Ch.\ 4]{Shaf13}. \hfill$\diamond$ \end{Remark} A consequence of Remark~\ref{R:excess} is the following. \begin{Proposition}\label{P:sharp} Let $X,M$ be as in Remark~\ref{R:excess}. The number (counted with multiplicity) of isolated points of $X\cap M$ is strictly less than the degree of $X$ if and only if the intersection has a positive-dimensional component.
\end{Proposition} Write $\defcolor{X^\circ_\calA}\vcentcolon=\varphi_\calA((\CC^\times)^d\times\CC)$ for the image of $\varphi_\calA$ and $\defcolor{\partial X_\calA}\vcentcolon=X_\calA\smallsetminus X^\circ_\calA$, the points of $X_\calA$ added to $X^\circ_\calA$ when taking the closure. This is the \demph{boundary} of $X_\calA$. In the Introduction, points of $\partial X_\calA$ were referred to as `lying at infinity'. \begin{Corollary}\label{C:MeetBoundary} For a polynomial $\psi\in\CC[z^{\pm},\lambda]$, the inequality of Theorem~\ref{Th:Bound} is strict if and only if $\partial X_\calA\cap M_\psi\neq\emptyset$. \end{Corollary} \begin{proof} The inequality of Theorem~\ref{Th:Bound} is strict if either of the following holds. \begin{enumerate} \item $X_\calA\cap M_\psi$ has an isolated point not lying in $X^\circ_\calA$. \item $X_\calA\cap M_\psi$ contains a positive-dimensional component $Z$. \end{enumerate} In (1), $X_\calA\cap M_\psi$ has isolated points in $\partial X_\calA\cap M_\psi$, so the intersection is nonempty. In (2), $Z$ is a projective variety of dimension at least one. The set $X^\circ_\calA$ is an affine variety, and we cannot have $Z\subset X^\circ_\calA$, as the only projective varieties that are also subvarieties of an affine variety are points. Thus $Z\cap \partial X_\calA\neq\emptyset$, which completes the proof. \end{proof} \subsection{Facial systems}\label{Sec:facial} We return to the general case of a toric variety. Let $\calA\subset\ZZ^n$ be a finite set of points with corresponding projective toric variety $X_\calA\subset\PP^\calA$. We have the following description of the points of its boundary, $X_\calA\smallsetminus\varphi_\calA((\CC^\times)^n)$. Let $\defcolor{P}\vcentcolon=\conv(\calA)$, the convex hull of $\calA$. The dot product with a nonzero vector $w\in\RR^n$, $a\mapsto w\cdot a$, defines a linear function on $\RR^n$. For $w\in\RR^n$, set $\defcolor{h(w)}\vcentcolon=\min\{w\cdot a\mid a\in P\}$.
The set $F=\{p\in P\mid w\cdot p=h(w)\}$ of minimizers is the \demph{face} of $P$ \demph{exposed} by $w$. We have that $F=\conv(F\cap\calA)$, and may write \defcolor{$\calF$} for $F\cap\calA$. As $\calA\subset\ZZ^n$, we only need integer vectors $w\in\ZZ^n$ to expose all faces of $P$. If $\dim F = \dim P-1$, then $F$ is a \demph{facet}. For each face $F$ of $P$, there is a corresponding coordinate subspace \defcolor{$\PP^\calF$} of $\PP^\calA$---this is the set of points $z=[z_a\mid a\in\calA]\in\PP^\calA$ such that $a\not\in F$ implies that $z_a=0$. The image of the map $\varphi_{\calF}\colon(\CC^\times)^n\to\PP^{\calF}\subset\PP^\calA$ has closure the toric variety $X_{\calF}$. Its dimension is equal to the dimension of the face $F$. Write \defcolor{$X_{\calF}^\circ$} for the image of $\varphi_{\calF}$. This description and the following proposition are essentially~\cite[Prop.\ 5.1.9]{GKZ}. \begin{Proposition} The boundary of the toric variety $X_\calA$ is the disjoint union of the sets $X_{\calF}^\circ$ for all the proper faces $F$ of $\conv(\calA)$. \end{Proposition} Let $f=\sum_{a\in\calA} c_a x^a$ be a polynomial with support $\calA$. We observed that if $\Lambda$ is the corresponding linear form~\eqref{Eq:LinearForm} on $\PP^\calA$, then the variety $\Var(f)\subset(\CC^\times)^n$ of $f$ is the pullback along $\varphi_\calA$ of $X^\circ_\calA\cap M$, where $\defcolor{M}\vcentcolon=\Var(\Lambda)$ is the hyperplane defined by $\Lambda$. Let $F$ be a proper face of $P$. Then $X^\circ_{\calF}\cap M$ pulls back along $\varphi_{\calF}$ to the variety of \[ \varphi^{*}_{\calF}(\Lambda)\ =\ \sum_{a\in \calF} c_a x^a \] in $(\CC^\times)^n$. This sum of the terms of $f$ whose exponents lie in $F$ is a \demph{facial form} of $f$ and is written \defcolor{$f|_F$}. Given a system $\Phi\colon f_1=\dotsb=f_n=0$ involving Laurent polynomials with support $\calA$, the system $f_1|_F=\dotsb=f_n|_F=0$ of their facial forms is the \demph{facial system} \defcolor{$\Phi|_F$} of $\Phi$.
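Passing from $f$ to its facial form $f|_F$ is a simple computation: keep exactly the terms whose exponents minimize the dot product with the exposing vector $w$. A minimal sketch (using sympy; the polynomial and the weight vectors are illustrative choices, not taken from the text):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

# an illustrative polynomial, stored as {exponent vector: coefficient}
terms = {(0, 0): 3, (2, 0): 1, (1, 1): -5, (0, 2): 1}

def facial_form(terms, w):
    """Keep the terms whose exponents minimize the dot product with w."""
    dot = lambda a: sum(wi * ai for wi, ai in zip(w, a))
    h = min(dot(a) for a in terms)                    # the value h(w)
    return sp.Add(*(c * z1**a[0] * z2**a[1]
                    for a, c in terms.items() if dot(a) == h))

print(facial_form(terms, (1, 1)))    # vertex exposed by w=(1,1): the term 3
print(facial_form(terms, (-1, -1)))  # edge exposed by w=(-1,-1): z1**2 - 5*z1*z2 + z2**2
```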
\begin{Corollary}\label{C:Sings} Let $M$ be the intersection of the hyperplanes given by the polynomials in a system $\Phi$ of Laurent polynomials with support $\calA$. For each face $F$ of $\conv(\calA)$, the points of $X^\circ_{\calF}\cap M$ pull back under $\varphi_{\calF}$ to the solutions of the facial system $\Phi|_F$. If no facial system $\Phi|_F$ has a solution, then the number of solutions to $\Phi=0$ on $(\CC^\times)^n$ is $n!\vol(\conv(\calA))$. \end{Corollary} \begin{proof} The first statement follows from the observation about a single polynomial $f$ and its facial form $f|_F$, and the second is a consequence of a version of Corollary~\ref{C:MeetBoundary} for $X_\calA\smallsetminus\varphi_\calA((\CC^\times)^n)$. \end{proof} The second statement is essentially~\cite[Thm.\ B]{Bernstein} and is also explained in~\cite[Sect.\ 3.4]{IHP}. \subsection{Facial systems of the critical point equations}\label{S:facial} We prove Theorem B from the Introduction by interpreting the facial systems of the critical point equations. It is useful to introduce the following notion. A polynomial $f(x)$ in $x\in(\CC^\times)^n$ is \demph{quasi-homogeneous} with \demph{quasi-homogeneity} $w\in\ZZ^n$ if there is a number $0\neq w_f$ such that \[ a\ \in\ \calA(f)\ \Longrightarrow\ w\cdot a\ =\ w_f\,. \] Equivalently, $f$ is quasi-homogeneous if its support $\calA(f)$ lies on a hyperplane not containing the origin. The quasi-homogeneities of $f$ are those $w\in\ZZ^n$ whose dot product with the elements of $\calA(f)$ is a nonzero constant. For $t\in \CC^\times$ and $w\in\ZZ^n$, let $\defcolor{t^w}\vcentcolon=(t^{w_1},\dotsc,t^{w_n})\in(\CC^\times)^n$. \begin{Lemma}\label{L:quasi-homogeneous} Suppose that $f$ has a quasi-homogeneity $w\in\ZZ^n$. Then \begin{enumerate} \item For $t\in\CC^\times$ and $x\in(\CC^\times)^n$, we have $f(t^w\cdot x)=t^{w_f} f(x)$. \item We have \[ w_f \; f\ =\ \sum_{i=1}^n w_i\,x_i \frac{\partial f}{\partial x_i}\,.
\] \end{enumerate} \end{Lemma} \begin{proof} Note that for $a\in\ZZ^n$, $(t^w\cdot x)^a = t^{w\cdot a} x^a$. The first statement follows. For the second, note that $a_ix^a= x_i \frac{\partial}{\partial x_i} x^a$. \end{proof} Let $\psi\in\CC[z^{\pm},\lambda]$ have support $\calA\subset\ZZ^d\times\NN$ and Newton polytope $\defcolor{P}\vcentcolon=\conv(\calA)$. We will assume that $P$ has dimension $d{+}1$, and also that the points of $\calA\cap(\ZZ^d\times\{0\})$ span a facet of $P$; this set is the \demph{base} of $\calA$. Let~\eqref{Eq:psi_system} be the critical point equations for $\lambda$ on $\psi$ and $M_\psi\subset\PP^\calA$ the corresponding linear subspace of codimension $d{+}1$. Let $\defcolor{\bfz}\vcentcolon= 0^d$ in $\ZZ^d$ and $\defcolor{\bfe}\vcentcolon=(\bfz,1)$. The base of $\calA$ is exposed by $\bfe$ and it is the support of $\psi(z,0)$. A main difference between the sparse equations of Section~\ref{Sec:facial} and the critical point equations~\eqref{Eq:CPE} is that the critical point equations allow solutions with $\lambda=0$, which corresponds to the component of the boundary of the toric variety given by the base of $\calA$. A face $F$ of $P$ is \demph{vertical} if it contains a vertical line segment, one parallel to $\bfe$. \begin{Lemma}\label{L:Singularities} Suppose that $F$ is a proper face of $P$ that is not the base of $P$ and is not vertical. Then the corresponding facial system of the critical point equations has a solution if and only if the hypersurface $\Var(\psi|_F)$ defined by $\psi|_F$ in $(\CC^\times)^{d+1}$ is singular. \end{Lemma} \begin{proof} Let $0\neq w\in\ZZ^{d+1}$ be an integer vector that exposes the face $F$. As $F$ is not vertical we may assume that $w_{d+1}$ is nonzero. As $F$ is not the base, it lies on an affine hyperplane that does not contain the origin, so that $\psi|_F$ is quasi-homogeneous with quasi-homogeneity $w$. Write \defcolor{$w_F$} for the constant $w\cdot a$ for $a\in F$.
By Lemma~\ref{L:quasi-homogeneous} (2), we have \begin{equation}\label{Eq:Euler} w_F\,\psi|_F\ =\ \sum_{i=1}^d w_i\; z_i\frac{\partial\psi|_F}{\partial z_i} \ +\ w_{d+1}\; \lambda\frac{\partial\psi|_F}{\partial \lambda}\ . \end{equation} Suppose now that $(z,\lambda)$ is a solution of the restriction of the critical point equations to the face $F$. That is, at $(z,\lambda)$, \[ \psi|_F\ =\ \Bigl(z_1\frac{\partial\psi}{\partial z_1}\Bigr)\Bigm|_F\ =\ \dotsb \ =\ \Bigl(z_d\frac{\partial\psi}{\partial z_d}\Bigr)\Bigm|_F\ =\ 0\,. \] Observe that $(z_i\frac{\partial\psi}{\partial z_i})|_F= z_i\frac{\partial\psi|_F}{\partial z_i}$ (and the same for $\lambda$). Since $w_{d+1}\neq 0$, these equations and~\eqref{Eq:Euler} together imply that $(\lambda\frac{\partial\psi}{\partial\lambda})|_F=0$, which implies that $(z,\lambda)$ is a singular point of the hypersurface $\Var(\psi|_F)$ defined by $\psi|_F$. \end{proof} We deduce the following theorem. \begin{Theorem}\label{Th:SingTheoremA} If the Newton polytope $\calN(\psi)$ of $\psi$ has no vertical faces and the restriction of $\psi$ to each face that is not the base of $\calN(\psi)$ defines a smooth variety, then the critical point equations have exactly $(d{+}1)!\vol(\calN(\psi))$ solutions in $(\CC^\times)^d\times \CC$. \end{Theorem} We apply this when $\psi$ is the dispersion function $D(z,\lambda)$. Recall that the boundary of the variety $X_\calA$ ($X_D$) corresponds to all proper faces of its Newton polytope $\calN(D)$, except for its base. We deduce the following precise version of \textbf{Theorem B}. \begin{Corollary}\label{C:SingThmB} Let $L$ be an operator on a periodic graph and set $D=\det(L(z)-\lambda I)$. If $\calN(D)$ has no vertical faces and if for each face $F$ that is not its base, $\Var(D|_F)$ is smooth, then the Bloch variety has exactly $(d{+}1)!\vol(\conv(\calA(D)))$ critical points. \end{Corollary} \begin{Example}\label{Ex:K4} The restriction on vertical faces is necessary.
General operators on the second graph in Figure~\ref{F:first_graphs} (an abelian cover of $K_4$) have the following Newton polytope: \[ \includegraphics[height=100pt]{images/K4Polytope} \] It has base $[-1,1]^2$, apex $(0,0,4)$, and the remaining vertices are at $(\pm 1,0,1)$ and $(0,\pm 1,1)$. It has volume $20/3$, so we expect $40=3!\cdot 20/3$ critical points. However, there are at most 32 critical points, as direct computation shows that the critical point equations have two solutions on each of its four vertical faces.\hfill$\diamond$ \end{Example} \section{Newton polytopes and dense periodic graphs} \label{S:C} The Newton polytope $\calN(D)$ of the dispersion function of an operator on a periodic graph is central to our results. In Section~\ref{Sec:41} we associate a polytope $\calN(\Gamma)$ to any periodic graph $\Gamma$ such that $\calN(D)\subset\calN(\Gamma)$ for any operator on $\Gamma$, with equality for almost all parameter values. We call $\calN(\Gamma)$ the \demph{Newton polytope} of $\Gamma$. A periodic graph $\Gamma$ is dense if it has every possible edge, given its support $\calA(\Gamma)$ and fundamental domain $W$. Every periodic graph is a subgraph of a minimal dense periodic graph. We identify the Newton polytope of a dense periodic graph and show that when $d=2$ or $3$, a general operator on $\Gamma$ satisfies Corollary~\ref{C:SingThmB}, which implies Theorem C. Let $\Gamma$ be a connected $\ZZ^d$-periodic graph with fundamental domain $W$. Its support $\calA(\Gamma)$ is the finite set of points $a\in\ZZ^d$ such that there is an edge between $W$ and $a{+}W$. The integer span of $\calA(\Gamma)$ is $\ZZ^d$, as $\Gamma$ is connected. The graph $\Gamma$ is \demph{dense} if for every $a\in\calA(\Gamma)$, there is an edge in $\Gamma$ between every pair of vertices in the union of $W$ and $a{+}W$. In particular, the restriction of $\Gamma$ to $W$ is the complete graph on $W$.
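The volume claimed in Example~\ref{Ex:K4} can be verified by decomposing the polytope into the pyramid over its base together with four corner tetrahedra, one for each vertex $(\pm 1,0,1)$, $(0,\pm 1,1)$ (a sketch in Python with exact rational arithmetic; the particular decomposition is our own bookkeeping, not taken from the text):

```python
from fractions import Fraction

def tet_volume(p0, p1, p2, p3):
    """|det(p1-p0, p2-p0, p3-p0)| / 6, computed exactly."""
    v = [[Fraction(q[i] - p0[i]) for i in range(3)] for q in (p1, p2, p3)]
    det = (v[0][0] * (v[1][1] * v[2][2] - v[1][2] * v[2][1])
         - v[0][1] * (v[1][0] * v[2][2] - v[1][2] * v[2][0])
         + v[0][2] * (v[1][0] * v[2][1] - v[1][1] * v[2][0]))
    return abs(det) / 6

apex = (0, 0, 4)
# the pyramid over the base square [-1,1]^2, split into two tetrahedra
tets = [((1, 1, 0), (1, -1, 0), (-1, -1, 0), apex),
        ((1, 1, 0), (-1, -1, 0), (-1, 1, 0), apex)]
# four tetrahedra capping the vertices (+-1, 0, 1) and (0, +-1, 1)
tets += [((1, 0, 1), (1, 1, 0), (1, -1, 0), apex),
         ((-1, 0, 1), (-1, 1, 0), (-1, -1, 0), apex),
         ((0, 1, 1), (1, 1, 0), (-1, 1, 0), apex),
         ((0, -1, 1), (1, -1, 0), (-1, -1, 0), apex)]

volume = sum(tet_volume(*t) for t in tets)
print(volume)       # 20/3
print(6 * volume)   # normalized volume 3! * vol = 40
```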
The graphs of Figures~\ref{F:denseDimer} and~\ref{F:New-dense} are dense, \begin{figure}[htb] \centering \raisebox{-17pt}{\includegraphics[height=140pt]{images/newDense}} \qquad \includegraphics{images/wideDiamond} \caption{A dense graph $\Gamma$ and its support $\calA(\Gamma)$ with convex hull.} \label{F:New-dense} \end{figure} while those of Figure~\ref{F:first_graphs} are not dense. The set of parameters $(e,V)$ for operators on a periodic graph $\Gamma$ is $Y=\CC^E\times\CC^W$, where $E$ is the set of orbits of edges. We observed that for any $c\in Y$, each entry of $L_c(z)$ has support a subset of $\calA(\Gamma)$. Consequently, each diagonal entry of $L_c(z)-\lambda I$ has support a subset of $\calA(\Gamma)\cup\{\bfe\}$ and its Newton polytope is a subpolytope of $\defcolor{Q}\vcentcolon=\conv(\calA(\Gamma)\cup\{\bfe\})$. Let $\defcolor{m}\vcentcolon=|W|$, the number of orbits of vertices. \begin{Lemma}\label{L:genericNP} The Newton polytope $\calN(D_c)$ is a subpolytope of the dilation $mQ$ of $Q$. \end{Lemma} \begin{proof} The dispersion function $D_c$ is a sum of products of $m$ entries of the $m\times m$ matrix $L_c(z)-\lambda I$. Each such product has Newton polytope a subpolytope of $mQ$, as the Newton polytope of a product is the Minkowski sum of the Newton polytopes of the factors. \end{proof} Figure~\ref{F:NewtonPolytopes} shows $mQ=2Q$ for the dense graphs of Figures~\ref{F:denseDimer} and~\ref{F:New-dense}. \begin{figure}[htb] \centering \includegraphics{images/Pyramid} \qquad \includegraphics{images/widePyramid} \caption{Newton polytopes of dense graphs.} \label{F:NewtonPolytopes} \end{figure} Observe that $mQ$ is a pyramid with base $m\conv(\calA(\Gamma))$ and apex $m\bfe$, and it has no vertical faces. \begin{Theorem}\label{Th:denseDense} Let $\Gamma$ be a dense $\ZZ^d$-periodic graph. There is a nonempty Zariski open subset $U$ of the parameter space $Y$ such that for $c\in U$, the Newton polytope of $D_c(z,\lambda)$ is the pyramid $m Q$.
When $d=2$ or $3$, we may choose $U$ so that for every $c\in U$ and face $F$ of $m Q$ that is not its base, $\Var(D_c|_F)$ is smooth. \end{Theorem} Together with Corollary~\ref{C:SingThmB}, this implies Theorem C from the Introduction. We prove Theorem~\ref{Th:denseDense} in the following two subsections.\medskip \subsection{The Newton polytope of $\Gamma$}~\label{Sec:41} For a periodic graph $\Gamma$, the space of parameters $(e,V)$ for operators on $\Gamma$ is $Y=\CC^E\times\CC^W$. Treating parameters as indeterminates gives the \demph{generic dispersion function} \defcolor{$D(e,V,z,\lambda)$}, which is a polynomial in $z,\lambda$ whose coefficients are polynomials in the parameters $e,V$. The \demph{Newton polytope} \defcolor{$\calN(\Gamma)$} of $\Gamma$ is the convex hull of the monomials in $z,\lambda$ that appear in $D(e,V,z,\lambda)$. \begin{Lemma}\label{L:NewtonPolytope} For $c\in Y$, $\calN(D_c(z,\lambda))$ is a subpolytope of $\calN(\Gamma)$. The set of $c\in Y$ such that $\calN(D_c(z,\lambda))=\calN(\Gamma)$ is a dense open subset $U$. When $\Gamma$ is a dense periodic graph, $\calN(\Gamma)=mQ$. \end{Lemma} \begin{proof} For any $c=(e,V)\in Y$, $D_c(z,\lambda)$ is the evaluation of the generic dispersion function $D(e,V,z,\lambda)$ at the point $(e,V)$. Thus $\calN(D_c)\subset\calN(\Gamma)$. The coefficient \defcolor{$C_{(a,j)}$} of a monomial $z^a\lambda^j$ in $D(e,V,z,\lambda)$ is a polynomial in $(e,V)$. For any $c=(e,V)\in Y$, $z^a\lambda^j$ appears in $D_c$ if and only if $C_{(a,j)}(e,V)\neq 0$. Thus, we have the equality $\calN(D_c)=\calN(\Gamma)$ of Newton polytopes if and only if $C_{(a,j)}(e,V)\neq 0$ for every vertex $(a,j)$ of $\calN(\Gamma)$, which defines a dense open subset $U\subset Y$. When $\Gamma$ is dense and no coordinate of the parameter $c$ vanishes, then every diagonal entry of $L_c(z)-\lambda I$ has support $\calA(\Gamma)\cup\{\bfe\}$. This implies that $\calN(\Gamma)=mQ$.
\end{proof} \subsection{Smoothness of the Bloch variety at infinity} Let $\Gamma$ be a dense periodic graph with $d=2$ or $3$. Let $U\subset Y$ be the subset of Lemma~\ref{L:NewtonPolytope}. We show that for each face $F$ of $\calN(\Gamma)$ that is not its base, there is a nonempty open subset $U_F$ of $U$ such that for $c\in U_F$, the restriction $D_c|_F$ to the monomials in $F$ defines a smooth hypersurface. Then for parameters $c$ in the intersection of the $U_F$, the operator satisfies the hypotheses of Corollary~\ref{C:SingThmB}, which proves Theorem~\ref{Th:denseDense} and Theorem C. Let $F$ be a face of $\calN(\Gamma)$ that is not its base and let $c\in U$. We may assume that $F$ is not a vertex, for then $D_c|_F$ is a single term and $\Var(D_c|_F)=\emptyset$. Since $\calN(\Gamma)=mQ$, there is a unique face \defcolor{$G$} of $Q$ such that $F=mG$. We have that \[ D_c(z,\lambda)|_F\ =\ \det\bigl( (L_c(z)-\lambda I)|_G \bigr)\ , \] where each entry of the matrix $(L_c(z)-\lambda I)|_G$ is the facial form $f|_G$ of the corresponding entry $f$ of $L_c(z)-\lambda I$. Since $G$ is not the base of $Q$ (and thus does not contain the origin), we make the following observation, which follows from the form of the operator $L_c$~\eqref{Eq:Laurent_operator}. If the apex $\bfe=(\bfz,1)$ of $Q$ lies in $G$ and $f$ is a diagonal entry of $(L_c(z)-\lambda I)|_G$, then $f$ contains the term $-\lambda$. Any other integer point $a\in G$ is nonzero and lies in the support $\calA(\Gamma)$ of $\Gamma$, and the coefficient of $z^a$ in $f$ is $-e_{(u,a+v)}$, where $f$ is the entry in row $u$ and column $v$. Consequently, except possibly for terms $-\lambda$, all coefficients of entries in $(L_c(z)-\lambda I)|_G$ are distinct parameters. Suppose that the fundamental domain is $W=\{v_1,\dotsc,v_m\}$ so that we may index the rows and columns of $L_c(z)$ by $1,\dotsc,m$. 
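The argument below reduces $(L_c(z)-\lambda I)|_G$ to a matrix whose only nonzero entries are the diagonal, the first superdiagonal, and the lower-left corner, and uses that such a cyclic band matrix with diagonal $f_1,\dotsc,f_m$ and off-diagonal entries $g_1,\dotsc,g_m$ has determinant $\prod_i f_i-(-1)^m\prod_i g_i$. This identity can be checked symbolically (a sketch using sympy, with $m=4$ chosen for illustration):

```python
import sympy as sp

m = 4  # illustrative size
f = sp.symbols(f'f1:{m + 1}')
g = sp.symbols(f'g1:{m + 1}')

# diagonal f_1..f_m, first superdiagonal g_1..g_{m-1}, lower-left corner g_m
M = sp.zeros(m, m)
for i in range(m):
    M[i, i] = f[i]
for i in range(m - 1):
    M[i, i + 1] = g[i]
M[m - 1, 0] = g[m - 1]

identity = sp.expand(M.det() - (sp.prod(f) - (-1)**m * sp.prod(g)))
print(identity)   # 0
```

The only nonzero terms in the Leibniz expansion of the determinant come from the identity permutation and the long cycle, which explains the two products and the sign.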
Let $\defcolor{Y'}\subset Y$ be the set of parameters $c$ where \[ e_{(v_i,a+v_j)}\ =\ 0\qquad \mbox{if $a\in G$ \ and \ $j\neq i, i{+}1$.} \] (Here, $m{+}1$ is interpreted to be 1.) For $c\in Y'$, all entries of $L_c(z)|_G$ are zero, except on the diagonal, the first superdiagonal, and the lower-left entry. The same arguments as in the proof of Lemma~\ref{L:NewtonPolytope} show that there exist parameters $c\in Y'$ such that $D_c(z,\lambda)$ has Newton polytope $\calN(\Gamma)$. Thus $Y'\cap U\neq\emptyset$, where $U\subset Y$ is the set of Lemma~\ref{L:NewtonPolytope}. \begin{Theorem}\label{Th:smoothFaces} There exists an open subset $U'$ of $Y'$ with $U'\subset U$ such that if $c\in U'$, then $\Var(D_c(z,\lambda)|_F)$ is a smooth hypersurface in $(\CC^\times)^{d+1}$. \end{Theorem} Since smoothness of $\Var(D_c(z,\lambda)|_F)$ is an open condition on the space $Y$ of parameters, this will complete the proof of Theorem~\ref{Th:denseDense}, and thus also of Theorem C. \begin{proof} Let us write \defcolor{$\psi_c(z,\lambda)$} for the facial polynomial $D_c(z,\lambda)|_F$. We will show that the set of $c\in Y'$ such that $\Var(\psi_c(z,\lambda))$ is singular is a finite union of proper algebraic subvarieties. As $c\in Y'$, the only nonzero entries in the matrix $(L_c(z)-\lambda I)|_G$ are its diagonal entries $f_1(z,\lambda),\dotsc,f_m(z,\lambda)$ and the entries $g_1(z),\dotsc,g_m(z)$, which are in positions $(1,2),\dotsc,(m{-}1,m)$ and $(m,1)$, respectively. Thus \[ \psi_c(z,\lambda)\ =\ D_c(z,\lambda)|_F\ =\ \det((L_c(z)-\lambda I)|_G)\ =\ \prod_{i=1}^m f_i(z,\lambda)\ -\ (-1)^m\prod_{i=1}^m g_i(z)\,. \] For a polynomial $f$ in the variables $(z,\lambda)$, write \defcolor{$\nabla_\TT$} for the toric gradient operator, \[ \nabla_\TT f \ \vcentcolon=\ \Bigl(z_1\frac{\partial f}{\partial z_1},\dotsc, z_d\frac{\partial f}{\partial z_d}, \lambda\frac{\partial f}{\partial\lambda}\Bigr)\,.
\] Note that \begin{equation}\label{Eq:nablaPsi} \nabla_\TT \psi_c\ =\ \sum_{i=1}^m (\nabla_\TT f_i) f_1\dotsb\widehat{f_i}\dotsb f_m \ -\ (-1)^m\sum_{i=1}^m (\nabla_\TT g_i) g_1\dotsb\widehat{g_i}\dotsb g_m\ . \end{equation} Here $\widehat{f_i}$ indicates that $f_i$ does not appear in the product, and the same for $\widehat{g_i}$. Let $(z,\lambda)\in\Var(\psi_c)$ be a singular point. Then $\psi_c(z,\lambda)=0$ and $\nabla_\TT \psi_c(z,\lambda)=\bfz$. There are five cases that depend upon the number of polynomials $f_i,g_j$ vanishing at $(z,\lambda)$. \begin{enumerate}[label=(\roman*)] \item At least two polynomials $f_p$ and $f_q$ and two polynomials $g_r$ and $g_s$ vanish at $(z,\lambda)$. Thus $\psi_c(z,\lambda)=0$ and by~\eqref{Eq:nablaPsi} this implies that $\nabla_\TT \psi_c(z,\lambda)=\bfz$. \item At least two polynomials $f_p$ and $f_q$ and exactly one polynomial $g_s$ vanish at $(z,\lambda)$. Thus $\psi_c(z,\lambda)=0$ and by~\eqref{Eq:nablaPsi} if $\nabla_\TT \psi_c(z,\lambda)=\bfz$, then $\nabla_\TT g_s(z,\lambda)=\bfz$. \item Exactly one polynomial $f_p$ and at least two polynomials $g_r$ and $g_s$ vanish at $(z,\lambda)$. Thus $\psi_c(z,\lambda)=0$ and by~\eqref{Eq:nablaPsi} if $\nabla_\TT \psi_c(z,\lambda)=\bfz$, then $\nabla_\TT f_p(z,\lambda)=\bfz$. \item Exactly one polynomial $f_p$ and one polynomial $g_r$ vanish at $(z,\lambda)$. Thus $\psi_c(z,\lambda)=0$ and by~\eqref{Eq:nablaPsi} if $\nabla_\TT \psi_c(z,\lambda)=\bfz$, then, after reindexing so that $p=r=1$, we have \begin{equation}\label{Eq:iv_consequence} \nabla_\TT f_1(z,\lambda)\cdot \prod_{i=2}^m f_i(z,\lambda)\ -\ (-1)^m \nabla_\TT g_1(z,\lambda)\cdot \prod_{i=2}^m g_i(z,\lambda)\ =\ \bfz\,. \end{equation} \item No polynomials $f_i$ or $g_i$ vanish at $(z,\lambda)$. \end{enumerate} In each case, we will show that the set of parameters $c\in Y'$ such that there exist $(z,\lambda)$ satisfying these conditions lies in a proper subvariety of $Y'$.
Cases (i)--(iv) use arguments based on the dimension of fibers and images of a map and are proven in the rest of this section. Case (v) is proven in Section~\ref{S:Case(v)} and it uses Bertini's Theorem. \end{proof} Let us write \defcolor{$X$} for the space $(\CC^\times)^{d+1}$ and \defcolor{$x$} for a point $(z,\lambda)\in X$. We first derive consequences of some vanishing statements. For a finite set $\calF\subset\ZZ^{d+1}$, let \defcolor{$\CC^\calF$} be the space of coefficients of polynomials in $x\in X$ with support $\calF$. This is the parameter space for polynomials with support $\calF$. \begin{Lemma}\label{L:linearEquations} We have the following. \begin{enumerate} \item For any $x\in X$, the equation $f(x)=0$ is a nonzero homogeneous linear equation on $\CC^\calF$. \item For any $x\in X$, $\{\nabla_\TT f(x) \mid f\in\CC^\calF\}$ is the linear span \defcolor{$\CC\calF$} of $\calF$. \end{enumerate} Suppose that the affine span of $\calF$ does not contain the origin. Then \begin{enumerate} \setcounter{enumi}{2} \item For any $f\in\CC^\calF$ and $x\in X$, $\nabla_\TT f(x)=\bfz$ implies that $f(x)=0$. \item For any $x\in X$, the equation $\nabla_\TT f(x)=\bfz$ defines a linear subspace of $\CC^\calF$ of codimension $\dim \CC\calF$. \end{enumerate} \end{Lemma} \begin{proof} Writing $f=\sum_{a\in\calF} c_a x^a$, the first statement is obvious. We have $\nabla_\TT f=\sum_{a\in\calF} a\, c_a x^a$. As the coefficients $c_a$ are independent complex numbers and $x^a\neq 0$, Statement (2) is immediate. The hypothesis that the affine span of $\calF$ does not contain the origin implies that any $f\in\CC^\calF$ is quasi-homogeneous. Statement (3) follows from Equation~\eqref{Eq:Euler}. The last statement follows from the observation that the set of $f$ such that $\nabla_\TT f(x)=\bfz$ is the kernel of a surjective linear map $\CC^\calF\twoheadrightarrow \CC\calF$.
\end{proof} Let $\defcolor{\calF}\vcentcolon=G\cap(\calA(\Gamma)\cup\{\bfe\})$, where $\bfe=(\bfz,1)$, be the (common) support of the diagonal polynomials $f_i$ and let $\defcolor{\calG}\vcentcolon=G\cap\calA(\Gamma)$ be the (common) support of the polynomials $g_j$. We either have that $\calF=\calG$ or $\calF=\calG\cup\{\bfe\}$. Also, $|\calF|>1$ as $G$ is not a vertex, and as $G$ is a proper face of $Q=\conv(\calA(\Gamma)\cup\{\bfe\})$, but not its base, the polynomials $f_i, g_j$ are quasi-homogeneous with a common quasi-homogeneity. The parameter space for the entries of $(L_c(z)-\lambda I)|_G$ is \[ \defcolor{Z}\ \vcentcolon=\ \bigl(\CC^\calF\bigr)^{\oplus m} \oplus \bigl(\CC^\calG\bigr)^{\oplus m}\,. \] We write $c=\defcolor{(f_\bullet, g_\bullet)}=(f_1,\dotsc,f_m, g_1,\dotsc,g_m)$ for points of $Z$. This is a coordinate subspace of the parameter space $Y'$. As $Z$ contains exactly those parameters that can appear in the facial polynomial $\psi_c(x)$, it suffices to show that the set of parameters $c=(f_\bullet, g_\bullet)\in Z$ such that $\Var(\psi_c(x))$ is singular lies in a proper subvariety of $Z$. The same case distinctions (i)--(v) in the proof of Theorem~\ref{Th:smoothFaces} apply. After reindexing, Case (i) in the proof of Theorem~\ref{Th:smoothFaces} follows from the next lemma. \begin{Lemma}\label{L:i} The set \[ \defcolor{\Theta}\ \vcentcolon=\ \{c\in Z\mid \exists x\in X\mbox{ with }f_1(x)=f_2(x)=g_1(x)=g_2(x)=0\} \] lies in a proper subvariety of $Z$. \end{Lemma} \begin{proof} Consider the incidence correspondence, \[ \defcolor{\Upsilon}\ \vcentcolon=\ \{(x,f_\bullet,g_\bullet)\in X\times Z \mid f_1(x)=f_2(x)=g_1(x)=g_2(x)=0\}\,. \] This has projections to $X$ and to $Z$ and its image in $Z$ is the set $\Theta$. Consider the projection $\pi_X\colon\Upsilon\to X$. By Lemma~\ref{L:linearEquations}(1), for $x\in X$, each condition $f_i(x)=0$, $g_i(x)=0$ for $i=1,2$ is a linear equation on $\CC^\calF$ or $\CC^\calG$.
These are independent on $Z$ as they involve different variables. Thus the fiber $\pi_X^{-1}(x)$ is a vector subspace of $Z$ of codimension 4, and $\dim(\Upsilon)=\dim(Z)-4+\dim(X)=\dim(Z)+d-3$. Consider the projection $\pi_Z$ to $Z$ and let $(f_\bullet,g_\bullet)\in\pi_Z(\Upsilon)$. Then there is an $x\in X$ such that $f_1(x)=f_2(x)=g_1(x)=g_2(x)=0$. Let $w\in\ZZ^{d+1}$ be a common quasi-homogeneity of the polynomials $f_i,g_j$. By Lemma~\ref{L:quasi-homogeneous} (1), for any $t\in\CC^\times$, each of $f_1,f_2,g_1,g_2$ vanishes at $t^w\cdot x$. Thus the fiber $\pi_Z^{-1}(f_\bullet,g_\bullet)$ has dimension at least one. By the theorem on the dimension of the image and fibers of a map~\cite[Theorem 1.25]{Shaf13}, the image $\pi_Z(\Upsilon)$ has dimension at most $\dim(Z)+d-4<\dim(Z)$, which establishes the lemma. \end{proof} After reindexing and possibly interchanging $f$ with $g$, Cases (ii) and (iii) in the proof of Theorem~\ref{Th:smoothFaces} follow from the next lemma. \begin{Lemma}\label{L:ii} The set \[ \Theta\ \vcentcolon=\ \{c\in Z\mid \exists x\in X\mbox{ with }f_1(x)=f_2(x)=g_1(x)=0\mbox{ and }\,\nabla_\TT g_1(x)=\bfz\} \] lies in a proper subvariety of $Z$. \end{Lemma} \begin{proof} Consider the incidence correspondence, \[ \Upsilon\ \vcentcolon=\ \{(x,f_\bullet,g_\bullet)\in X\times Z \mid f_1(x)=f_2(x)=g_1(x)=0 \mbox{ and }\nabla_\TT g_1(x)=\bfz\}\,. \] Let $x\in X$ and consider the fiber $\pi_X^{-1}(x)$. As in the proof of Lemma~\ref{L:i}, the conditions $f_1(x)=f_2(x)=0$ are two independent linear equations on $Z$. By Lemma~\ref{L:linearEquations} (3), $\nabla_\TT g_1(x)=\bfz$ implies that $g_1(x)=0$, and by Lemma~\ref{L:linearEquations} (4), the condition $\nabla_\TT g_1(x)=\bfz$ is $\dim \CC\calG$ further independent linear equations on $Z$. If $|\calG|=1$, so that $g_1=c_ax^a$ is a single term, then $g_1(x)=0$ implies that $c_a=0$. Consequently, the image $\Theta$ of $\Upsilon$ in $Z$ lies in a proper subvariety.
Otherwise, $|\calG|>1$, which implies that $\dim \CC\calG\geq 2$, and thus the fiber has codimension at least 4. As in the proof of Lemma~\ref{L:i}, this implies that $\Theta$ lies in a proper subvariety of $Z$. \end{proof} Case (iv) in the proof of Theorem~\ref{Th:smoothFaces} is more involved. \begin{Lemma}\label{L:iv} The set \[ \defcolor{\Theta}\ \vcentcolon=\ \{c\in Z\mid \exists x\in X\mbox{ with }f_1(x)=g_1(x)=0\mbox{ and }\,\nabla_\TT \psi_c(x)=\bfz\} \] lies in a proper subvariety of $Z$. \end{Lemma} \begin{proof} The set $\Theta$ includes the sets of Lemmas~\ref{L:i} and~\ref{L:ii}. Let $\defcolor{\Theta^\circ}\subset\Theta$ be the set of $c=(f_\bullet,g_\bullet)$ that have a witness $x\in X$ ($f_1(x)=g_1(x)=0$ and $\nabla_\TT \psi_c(x)=\bfz$) such that none of $\nabla_\TT f_1(x)$, $\nabla_\TT g_1(x)$, or $f_i(x)g_i(x)$ for $i>1$ vanish. It will suffice to show that $\Theta^\circ$ lies in a proper subvariety of $Z$. For this, we use the incidence correspondence, \begin{multline*} \qquad\defcolor{\Upsilon}\ \vcentcolon=\ \{(y,x,f_\bullet,g_\bullet)\in \CC^\times \times X\times Z \mid f_1(x)=g_1(x)=0\,,\\ y\prod_{i=2}^m f_i(x)\ -\ (-1)^m\prod_{i=2}^m g_i(x)=0\,,\mbox{ and } \nabla_\TT f_1(x)\ -\ (-1)^my \nabla_\TT g_1(x)=\bfz\}\,.\qquad \end{multline*} We show that $\Theta^\circ\subset\pi_Z(\Upsilon)$. Let $c=(f_\bullet,g_\bullet)\in\Theta^\circ$ with witness $x\in X$, meaning that $f_1(x)=g_1(x)=0$ and $\nabla_\TT \psi_c(x)=\bfz$, but none of $\nabla_\TT f_1(x)$, $\nabla_\TT g_1(x)$, or $f_i(x)g_i(x)$ for $i>1$ vanish. There is a unique $\defcolor{y}\in\CC^\times$ satisfying \[ y \prod_{i=2}^m f_i(x)\ -\ (-1)^m \prod_{i=2}^m g_i(x)\ =\ 0\,. \] Dividing~\eqref{Eq:iv_consequence} by $\prod_{i=2}^m f_i(x)$ gives $\nabla_\TT f_1(x)-(-1)^my \nabla_\TT g_1(x)=\bfz$, and thus $(y,x,f_\bullet,g_\bullet)\in\Upsilon$. We now determine the dimension of $\Upsilon$. Let $(y,x)\in\CC^\times \times X$ and consider the fiber $\pi^{-1}(y,x)\subset Z$ above it in $\Upsilon$.
The two linear and one nonlinear equations \begin{equation}\label{Eq:Three} f_1(x)\ =\ g_1(x)\ =\ y\prod_{i=2}^m f_i(x)-(-1)^m\prod_{i=2}^m g_i(x)\ =\ 0 \end{equation} are independent on $Z$ as they involve disjoint sets of variables, and thus define a subvariety $\defcolor{T}\subset Z$ of codimension 3. Consider the remaining equation, $\nabla_\TT f_1(x)-(-1)^my \nabla_\TT g_1(x)=\bfz$. Note that if $\bfe=(\bfz,1)$ lies in the support $\calF$ of $f_1$, so that $\calF=\calG\cup\{\bfe\}$, then $\nabla_\TT f_1(x)$ contains the term $-\bfe$ and thus cannot lie in the span $\CC\calG$ of $\calG$, which contains $\nabla_\TT g_1(x)$ by Lemma~\ref{L:linearEquations}(2). In this case the fiber is empty and $\Theta^\circ=\emptyset$. Suppose that $\calF=\calG$ and $(f_\bullet,g_\bullet)\in T$. Let $w\in\ZZ^{d+1}$ be any homogeneity for $f_1$ (or $g_1$). Then there exists $w_\calF\neq 0$ such that $w\cdot a=w_\calF$ for all $a\in\calF$. Equation~\eqref{Eq:Euler} implies that \[ w\cdot \nabla_\TT f_1(x)\ =\ w_\calF\, f_1(x)\ =\ 0\,, \] and the same for $g_1$. Thus $\nabla_\TT f_1(x)$ and $\nabla_\TT g_1(x)$ are annihilated by all homogeneities and so lie in the affine span of $\calF$---the linear span of differences $a{-}b$ for $a,b\in\calF$. This has dimension $\dim \CC\calF-1$. Consequently, $\nabla_\TT f_1(x)-(-1)^my \nabla_\TT g_1(x)=\bfz$ consists of $\dim \CC\calF-1$ independent linear equations on the subset of $\CC^\calF\oplus\CC^\calF$ consisting of pairs $f_1,g_1$ such that $f_1(x)=g_1(x)=0$. These are independent of the third equation in~\eqref{Eq:Three}. Thus the fiber $\pi^{-1}(y,x)\subset Z$ has codimension $3+\dim\CC\calF-1=2+\dim\CC\calF$ and so \[ \dim\Upsilon\ =\ \dim(\CC^\times\times X)+\dim Z-\dim\CC\calF-2\ =\ \dim Z+d-\dim\CC\calF\,. \] Let $(f_\bullet,g_\bullet)\in\pi_Z(\Upsilon)$ have witness $(y,x)$. That is, the equations~\eqref{Eq:Three} hold, as well as $\nabla_\TT f_1(x)-(-1)^my \nabla_\TT g_1(x)=\bfz$. 
As in the proof of Lemma~\ref{L:i}, if $w\in\ZZ^{d+1}$ is a quasi-homogeneity for polynomials of support $\calF$, then $(y, t^w\cdot x)$ also satisfies these equations. We have $\calF=\calG= G\cap\calA(\Gamma)$, so that $G$ is a face of the base of $Q$. Thus there are at least two (in fact the codimension of $G$ in $Q$) independent homogeneities, which implies that the fiber $\pi_Z^{-1}(f_\bullet,g_\bullet)$ has dimension at least two. This implies that the image $\Theta^\circ$ has dimension at most $\dim Z+d-\dim\CC\calF-2$. Since $G$ is not a vertex, $\dim \CC\calF\geq 2$, which shows that $\dim\Theta^\circ<\dim Z$ and completes the proof. \end{proof} \subsection{Case (v)}\label{S:Case(v)} For $\alpha\in\CC^\times$, define $\defcolor{\Psi(\alpha,f_\bullet, g_\bullet)}\subset X$ to be the set \[ \Big\{x\in X \;\Big|\; \mbox{none of $f_i(x)g_i(x)$ for $i\geq 1$ vanish} \mbox{ \ and \ } \prod_{i=1}^m f_i(x)\ -\ (-1)^m\alpha \prod_{i=1}^m g_i(x)\ =\ 0\Big\}\ . \] Case (v) in the proof of Theorem~\ref{Th:smoothFaces} follows from the next lemma. \begin{Lemma}\label{L:A} There is a dense open subset $U_1\subset Z$ such that if $(f_\bullet,g_\bullet)\in U_1$, then $\Psi(1,f_\bullet,g_\bullet)$ is smooth. \end{Lemma} We will deduce this from a weaker lemma. \begin{Lemma}\label{L:B} There is a dense open subset $U\subset \CC^\times\times Z$ such that if $(\alpha,f_\bullet,g_\bullet)\in U$, then $\Psi(\alpha,f_\bullet,g_\bullet)$ is smooth. \end{Lemma} \begin{proof}[Proof of Lemma~\ref{L:A}] If we knew that the set $U$ of Lemma~\ref{L:B} contained a point $(1,f_\bullet,g_\bullet)$, then $U_1\vcentcolon=U\cap(\{1\}\times Z)$ would be a dense open subset of $Z$, which would complete the proof. As we do not know this, we must instead argue indirectly. Suppose that there is no such open set $U_1$ as in Lemma~\ref{L:A}. Then the set $\defcolor{\Xi}\subset Z$ consisting of $(f_\bullet,g_\bullet)$ such that $\Psi(1,f_\bullet,g_\bullet)$ is singular is dense in $Z$. 
For $\alpha\in\CC^\times$ and $(f_\bullet,g_\bullet)\in Z$, define \defcolor{$\alpha.(f_\bullet,g_\bullet)$} to be $(f_\bullet,\alpha.g_\bullet)$ where \[ \alpha.(g_1,g_2,\dotsc,g_m)\ =\ (\alpha g_1,g_2,\dotsc,g_m)\,. \] This is a $\CC^\times$-action on $Z$. Consequently, $\alpha.\Xi$ is dense in $Z$ for all $\alpha\in\CC^\times$. Let $U\subset\CC^\times\times Z$ be the set of Lemma~\ref{L:B}. As it is nonempty, let $(\alpha,f'_\bullet, g'_\bullet)\in U$. Then $\defcolor{U_\alpha}\vcentcolon= U\cap (\{\alpha\}\times Z)$ is nonempty and open in $\{\alpha\}\times Z$. As $\alpha.\Xi$ is dense, we have \[ U_\alpha \bigcap \big(\{\alpha\}\times \alpha.\Xi\big)\ \neq\ \emptyset\,. \] This is a contradiction, for if $(\alpha,f_\bullet,g_\bullet)\in U_\alpha$, then $\Psi(\alpha,f_\bullet, g_\bullet)$ is smooth, but if $(f_\bullet,g_\bullet)\in \alpha.\Xi$, then $(f_\bullet,\alpha^{-1}g_\bullet)\in\Xi$ and $\Psi(1,f_\bullet,\alpha^{-1}g_\bullet)$ is singular. The contradiction follows from the equality of sets $\Psi(\alpha,f_\bullet, g_\bullet)=\Psi(1,f_\bullet,\alpha^{-1}g_\bullet)$. \end{proof} \begin{proof}[Proof of Lemma~\ref{L:B}] Let $\defcolor{T}\subset X\times Z$ be the set of $(x,f_\bullet,g_\bullet)$ such that none of $f_i(x)g_i(x)$ for $i\geq 1$ vanish. Define $\varphi\colon T\to \CC^\times\times Z$ by \[ \varphi(x,f_\bullet,g_\bullet)\ =\ \bigl((-1)^m{\textstyle \prod_{i=1}^m f_i(x)/ \prod_{i=1}^mg_i(x)} \;,\; f_\bullet\,,\,g_\bullet \bigr)\,. \] Notice that $\varphi^{-1}(\alpha,f_\bullet,g_\bullet)=\Psi(\alpha,f_\bullet,g_\bullet)$ for $(\alpha,f_\bullet,g_\bullet)\in\CC^\times\times Z$. We claim that $\varphi(T)$ is dense in $\CC^\times\times Z$. For this, recall that the polynomials $f_i$ have support $\calF$, which is $G\cap(\calA(\Gamma)\cup\{\bfe\})$ for some face $G$ of $Q=\conv(\calA(\Gamma)\cup\{\bfe\})$ that is neither its base nor a vertex, and the polynomials $g_i$ have support $\calG=G\cap\calA(\Gamma)$. 
Since $G$ is not a vertex, there are $a,b\in\calF$ with $a\neq b$ and $b\in\calA(\Gamma)$. Let $\defcolor{f_i}\vcentcolon=x^a$ and $\defcolor{g_i}\vcentcolon=x^b$ for $i=1,\dotsc,m$. Then $X\times\{(f_\bullet,g_\bullet)\}\subset T$ and, for $x\in X$, $\varphi(x,f_\bullet,g_\bullet)=((-1)^m x^{ma-mb},f_\bullet,g_\bullet)$. The map $X=(\CC^\times)^{d+1}\to\CC^\times$ given by $x\mapsto (-1)^m x^{ma-mb}$ is surjective as $ma-mb\neq 0$. This implies that the differential $d\varphi$ is surjective at any point of $X\times\{(f_\bullet,g_\bullet)\}$, and therefore $\varphi(T)$ is dense in $\CC^\times\times Z$. Since $T$ is an open subset of the smooth variety $X\times Z$, it is smooth. Then Bertini's Theorem~\cite[Thm.\ 2.27, p.\ 139]{Shaf13} implies that there is a dense open subset $U\subset \CC^\times\times Z$ such that for $(\alpha,f_\bullet,g_\bullet)\in U$, $\varphi^{-1}(\alpha,f_\bullet,g_\bullet)=\Psi(\alpha,f_\bullet,g_\bullet)$ is smooth. \end{proof} \section{Critical points property}\label{S:final} We illustrate our results, using them to establish the critical points property (and thus the spectral edges nondegeneracy conjecture) for three periodic graphs. We first state this property. Let $\Gamma$ be a connected $\ZZ^d$-periodic graph with parameter space $Y=\CC^E\times\CC^W$ for discrete operators on $\Gamma$. We say that $\Gamma$ has the \demph{critical points property} if there is a dense open subset $U\subset Y$ such that if $c\in U$, then every critical point of the function $\lambda$ on the Bloch variety $\Var(D_c(z,\lambda))$ is nondegenerate in that the Hessian determinant \begin{equation}\label{Eq:Hessian} \det \left( \left( \frac{\partial^2\lambda}{\partial z_i\partial z_j}\right)_{i,j=1}^{d} \right) \end{equation} is nonzero at that critical point. Here, the derivatives are implicit, using that $D(z,\lambda)=0$.
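To make this definition concrete, here is a minimal sketch (ours, not part of the paper) for a toy one-variable dispersion function $D(z,\lambda)=\lambda-z-1/z$, checking at its two critical points ($D=0$ and $\partial D/\partial z=0$) that the Jacobian determinant of the critical point system is nonzero exactly when the Hessian $\lambda''$ is nonzero.

```python
from fractions import Fraction as F

# A toy d = 1 dispersion function (ours, purely illustrative):
# lambda(z) = z + 1/z, so D(z, lam) = lam - z - 1/z.
D      = lambda z, lam: lam - z - F(1) / z
D_z    = lambda z, lam: F(1) / z**2 - 1          # dD/dz
D_lam  = lambda z, lam: F(1)                     # dD/dlam
D_zz   = lambda z, lam: F(-2) / z**3             # d^2 D / dz^2
D_zlam = lambda z, lam: F(0)                     # d^2 D / (dz dlam)
lam_zz = lambda z: F(2) / z**3                   # Hessian of lambda(z) = z + 1/z

# The critical point equations D = 0, dD/dz = 0 hold at (z, lam) = (1, 2) and (-1, -2).
for z0, l0 in [(F(1), F(2)), (F(-1), F(-2))]:
    assert D(z0, l0) == 0 and D_z(z0, l0) == 0
    # Jacobian determinant of the critical point system in the variables (z, lam):
    detJ = D_z(z0, l0) * D_zlam(z0, l0) - D_lam(z0, l0) * D_zz(z0, l0)
    # Regularity of the solution is equivalent to nondegeneracy of the critical point,
    assert (detJ != 0) == (lam_zz(z0) != 0)
    # and, up to sign, det J = (dD/dlam)^(d+1) * lambda'' with d = 1 here.
    assert abs(detJ) == abs(D_lam(z0, l0)) ** 2 * abs(lam_zz(z0))
```

Exact rational arithmetic makes the checks unconditional rather than numerical.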
\subsection{Reformulation of Hessian condition} Let $D=\det(L_c(z)-\lambda I)$ be the dispersion function for an operator $L_c$ on a periodic graph $\Gamma$. In Section~\ref{S:A} we derived the equations for the critical points of the function $\lambda$ on the Bloch variety $\Var(D(z,\lambda))$, \begin{equation}\label{Eq:CPE_nonToric} D(z,\lambda)\ =\ 0 \qquad\mbox{and}\qquad \frac{\partial D}{\partial z_i}\ =\ 0 \qquad\mbox{for } i=1,\dots,d\,. \end{equation} Implicit differentiation of $D=0$ gives $\frac{\partial D}{\partial z_j}+\frac{\partial D}{\partial\lambda}\cdot\frac{\partial \lambda}{\partial z_j}=0$. If $\frac{\partial D}{\partial\lambda}\neq 0$, then $\frac{\partial\lambda}{\partial z_j}=0$. If $\frac{\partial D}{\partial\lambda}=0$, then $(z,\lambda)$ is a singular point hence is also a critical point of the function $\lambda$ and so we again have $\frac{\partial\lambda}{\partial z_j}=0$. Differentiating again we obtain, \[ 0\ =\ \frac{\partial}{\partial z_i}\left(\frac{\partial D}{\partial z_j} +\frac{\partial D}{\partial\lambda}\cdot\frac{\partial \lambda}{\partial z_j}\right) \ =\ \frac{\partial^2D}{\partial z_i\partial z_j} + \frac{\partial^2D}{\partial z_i\partial\lambda}\cdot\frac{\partial \lambda}{\partial z_j} + \frac{\partial D}{\partial\lambda}\cdot \frac{\partial^2\lambda}{\partial z_i\partial z_j}\,. \] At a critical point (so that $\frac{\partial\lambda}{\partial z_j}=0$), we have \[ \frac{\partial^2D}{\partial z_i\partial z_j} \ =\ -\frac{\partial D}{\partial\lambda}\cdot \frac{\partial^2\lambda}{\partial z_i\partial z_j}\,. \] Thus \[ \det \left( \left( \frac{\partial^2 D}{\partial z_i\partial z_j}\right)_{i,j=1}^{d} \right) \ =\ \left(-\frac{\partial D}{\partial\lambda}\right)^d\cdot \det \left( \left( \frac{\partial^2\lambda}{\partial z_i\partial z_j}\right)_{i,j=1}^{d} \right)\ . 
\] Consider now the Jacobian matrix of the critical point equations~\eqref{Eq:CPE_nonToric}, \[ J\ =\ \left(\begin{array}{cccc} \frac{\partial D}{\partial z_1} & \dotsc & \frac{\partial D}{\partial z_d} & \frac{\partial D}{\partial\lambda} \\ \frac{\partial^2 D}{\partial z_1^2} & \dotsc & \frac{\partial^2 D}{\partial z_d\partial z_1} \rule{0pt}{14pt} & \frac{\partial^2 D}{\partial\lambda\partial z_1} \\ \vdots & \ddots & \vdots & \vdots\\ \frac{\partial^2 D}{\partial z_1\partial z_d} & \dotsc & \frac{\partial^2 D}{\partial z_d^2} & \frac{\partial^2 D}{\partial\lambda\partial z_d} \end{array}\right)\ . \] At a critical point, the first row is $(0\; \dotsb\; 0\; \frac{\partial D}{\partial\lambda})$; expanding $\det(J)$ along this row gives the cofactor sign $(-1)^d$, and thus \[ \det(J)\ =\ (-1)^d\,\frac{\partial D}{\partial\lambda} \det \left( \left( \frac{\partial^2 D}{\partial z_i\partial z_j}\right)_{i,j=1}^{d} \right) \ =\ \left(\frac{\partial D}{\partial\lambda}\right)^{d+1}\cdot \det \left( \left( \frac{\partial^2\lambda}{\partial z_i\partial z_j}\right)_{i,j=1}^{d} \right)\ . \] A solution $\zeta$ of a system of polynomial equations on $\CC^n$ is \demph{regular} if the Jacobian of the system at $\zeta$ has full rank $n$. Regular solutions are isolated and have multiplicity 1. We deduce the following lemma. \begin{Lemma}\label{L:nondegenerate} A nonsingular critical point $(z,\lambda)$ on $\Var(D_c(z,\lambda))$ is nondegenerate if and only if it is a regular solution of the critical point equations~\eqref{Eq:CPE_nonToric}. \end{Lemma} The following theorem is adapted from arguments in~\cite[Sect.\ 5.4]{DKS}. \begin{Theorem}\label{Th:singleCalculation} Let $\Gamma$ be a $\ZZ^d$-periodic graph. If there is a parameter value $c\in Y$ such that the critical point equations have $(d{+}1)!\vol(\calN(\Gamma))$ regular solutions, then the critical points property holds for $\Gamma$. \end{Theorem} \begin{proof} Let $Y$ be the parameter space for operators $L$ on $\Gamma$.
Consider the variety \[ \defcolor{CP}\ \vcentcolon=\ \{(c,z,\lambda)\in Y\times (\CC^\times)^d\times\CC \mid \ \mbox{the critical point equations~\eqref{Eq:CPE} hold}\}\,, \] which is the incidence variety of critical points on all Bloch varieties for operators on $\Gamma$. Let $\pi$ be its projection to $Y$. For any $c\in Y$, the fiber $\pi^{-1}(c)$ is the set of critical points of the function $\lambda$ on the corresponding Bloch variety for $D_c$. By Corollary~\ref{C:Bound}, there are at most $(d{+}1)!\vol(\calN(D_c))$ isolated points in the fiber. Let $c\in Y$ be a point such that the critical point equations have $(d{+}1)!\vol(\calN(\Gamma))$ regular solutions. Then $(d{+}1)!\vol(\calN(\Gamma))\leq(d{+}1)!\vol(\calN(D_c))$. By Lemma~\ref{L:NewtonPolytope}, $\calN(D_c)$ is a subpolytope of $\calN(\Gamma)$, so that $\vol(\calN(D_c))\leq\vol(\calN(\Gamma))$. We conclude that both polytopes have the same volume and are therefore equal. In particular, the corresponding Bloch variety has the maximum number of critical points, and each is a regular solution of the critical point equations~\eqref{Eq:CPE}. Because they are regular solutions, the implicit function theorem implies that there is a neighborhood \defcolor{$U_c$} of $c$ in the classical topology on $Y$ such that the map $\pi^{-1}(U_c)\to U_c$ is proper (it is a $(d{+}1)!\vol(\calN(\Gamma))$-sheeted cover). The set \defcolor{$DC$} of degenerate critical points is the closed subset of $CP$ given by the vanishing of the Hessian determinant~\eqref{Eq:Hessian}. Since $\pi$ is proper over $U_c$, if $\defcolor{DP}=\pi(DC)$ is the image of $DC$ in $Y$, then $DP\cap U_c$ is closed in $U_c$. 
As the points of $\pi^{-1}(c)$ are regular solutions, Lemma~\ref{L:nondegenerate} implies they are all nondegenerate and thus $c\not\in DP$, so that $U_c\smallsetminus DP$ is a nonempty classically open subset of $Y$ consisting of parameter values $c'$ with the property that all critical points on the corresponding Bloch variety are nondegenerate. This implies that there is a nonempty Zariski open subset of $Y$ consisting of parameters such that all critical points on the corresponding Bloch variety are nondegenerate, which completes the proof. \end{proof} By Theorem~\ref{Th:singleCalculation}, it suffices to find a single Bloch variety with the maximum number of isolated critical points to establish the critical points property for a periodic graph. The following examples use such a computation to establish the critical points property for $2^{19}+2$ graphs $\Gamma$. Computer code and output are available at the github repository\footnote{\url{https://mattfaust.github.io/CPODPO}.}. \begin{Example}\label{Ex:DenseGraph} Let us consider the dense $\ZZ^2$-periodic graph $\Gamma$ of Figure~\ref{F:New-dense}. It has $m=2$ points in its fundamental domain and the convex hull of its support $\calA(\Gamma)$ has area 4. By Theorem~C, a general operator on $\Gamma$ has $2!\cdot 2^{2+1}\cdot 4 = 64$ critical points. 
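The count just quoted has the shape $d!\cdot m^{d+1}\cdot \mathrm{area}$; the following one-line sketch (ours; the function name is hypothetical, not from the repository) records this arithmetic.

```python
from math import factorial

def dense_critical_point_count(d: int, m: int, area: int) -> int:
    """Pattern of the count stated in the example: d! * m^(d+1) * area."""
    return factorial(d) * m ** (d + 1) * area

# Dense graph of this example: d = 2, m = 2 points in the fundamental domain, area 4.
assert dense_critical_point_count(2, 2, 4) == 64
```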
\begin{figure}[htb] \centering \includegraphics{images/newDense} \qquad \begin{picture}(212,156)(-2,-25) \put(0,0){\includegraphics[height=110pt]{images/widePyramid}} \thicklines \thinlines \put( 56,91){\footnotesize$(0,0,2)$} \put(161, 52){\footnotesize$(-4,0,0)$} \put(-1,39){\footnotesize$(0,-2,0)$} \put(14, 3.5){\footnotesize$(4,0,0)$} \put(118, 16){\footnotesize$(0,2,0)$} \end{picture} \caption{Dense periodic graph and its polytope from Figure~\ref{F:New-dense}.} \label{F:newDensewidePyramid} \end{figure} There are 13 edges and two vertices in $W$, and independent computations in the computer algebra systems Macaulay2~\cite{M2} and Singular~\cite{Singular} find a point $c\in Y=\CC^{15}$ such that the critical point equations have 64 regular solutions on $(\CC^\times)^2\times\CC$. By Theorem~\ref{Th:singleCalculation}, the critical points property holds for $\Gamma$. These computations are independent in that the code, authors, and parameter values for each are distinct.\hfill$\diamond$ \end{Example} \begin{Example}\label{Ex:FullNotDense} The graph $\Gamma$ in Figure~\ref{F:FullNotDense} is not dense. Its restriction to the fundamental domain is not the complete graph on 3 vertices and there are three and not nine edges between any two adjacent translates of the fundamental domain. Altogether, it has $3\cdot 6 + 1=19$ fewer edges than the corresponding dense graph. Its support $\calA(\Gamma)$ forms the columns of the matrix $(\begin{smallmatrix}0&1&1&0&-1&-1&0\\0&0&1&1&0&-1&-1\end{smallmatrix})$ whose convex hull is a hexagon of area 3. 
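The area of this hexagon is easily verified with the shoelace formula; the sketch below (ours) lists the hull vertices from the support matrix in counterclockwise order, noting that the column $(0,0)$ is an interior point of the hull.

```python
def twice_signed_area(vertices):
    """Shoelace formula: twice the signed area of a polygon given in cyclic order."""
    n = len(vertices)
    total = 0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        total += x0 * y1 - x1 * y0
    return total

# The six nonzero columns of the support matrix, counterclockwise around the hexagon:
hexagon = [(1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1), (0, -1)]
assert twice_signed_area(hexagon) == 6                    # area 3
# This area feeds the bound 2! * 3^(2+1) * 3 = 162 quoted from Theorem A below.
assert 2 * 3 ** 3 * (twice_signed_area(hexagon) // 2) == 162
```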
\begin{figure}[htb] \centering \includegraphics{images/full_not-dense} \qquad \begin{picture}(235,160)(-10,0) \put(0,0){\includegraphics[height=150pt]{images/full_Pyramid}} \thicklines \put(127,10){{\color{white}\line(0,1){41}}} \put(128,10){{\color{white}\line(0,1){41}}} \put(39.5,10){{\color{white}\line(2,3){33}}} \put(40.5,10){{\color{white}\line(2,3){33}}} \put(178.5,104.5){{\color{white}\line(-1,-2){17.5}}} \put(179.5,104.5){{\color{white}\line(-1,-2){17.5}}} \thinlines \put( 72,125){\footnotesize$(0,0,3)$} \put( 10,120.5){\footnotesize$(-1,-1,1)$} \put( 39.5,117.5){\vector(1,-2){16}} \put(164,107.5){\footnotesize$( 1,2,1)$} \put(179,104.5){\vector(-1,-2){19.5}} \put(-13, 69){\footnotesize$(-3,-3,0)$} \put(-12, 38){\footnotesize$(0,-3,0)$} \put(190, 48){\footnotesize$(0,3,0)$} \put(59.5, 16.5){\footnotesize$(3,0,0)$} \put(162, 10){\footnotesize$(3,3,0)$} \put(14, 0){\footnotesize$(1,-1,1)$} \put(40,10){\vector(2,3){35.5}} \put(111.5,0){\footnotesize$(2,1,1)$} \put(127.5,10){\vector(0,1){43.5}} \end{picture} \caption{Sparse graph with the same Newton polytope as the corresponding dense graph.} \label{F:FullNotDense} \end{figure} Despite $\Gamma$ not being dense, its Newton polytope $\calN(\Gamma)$ is equal to the Newton polytope of the dense graph with the same parameters, $\calA(\Gamma)$ and $W$. Figure~\ref{F:FullNotDense} displays the Newton polytope, along with elements of the support of the dispersion function that are visible. Observe that on each triangular face, there are four and not ten monomials. By Theorem~A (Corollary~\ref{C:Bound}), there are at most $2!\cdot 3^{2+1}\cdot 3=162$ critical points. There are eleven edges and three vertices in $W$, and independent computations in Macaulay2 and Singular find a point $c\in Y=\CC^{14}$ such that the critical point equations have 162 regular solutions on $(\CC^\times)^2\times\CC$. By Theorem~\ref{Th:singleCalculation}, the critical points property holds for $\Gamma$. 
Let $\Gamma'$ be a graph that has the same vertex set and support as $\Gamma$, and contains all the edges of $\Gamma$---then~\cite[Thm.\ 22]{DKS} implies that the critical points property also holds for $\Gamma'$. This establishes the critical points property for an additional $2^{19}-1$ periodic graphs.\hfill$\diamond$ \end{Example} \begin{Example}\label{Ex:NotPyramid} The graph $\Gamma$ of Figure~\ref{F:notPyramidGraphPolytope} has only ten edges but the same fundamental domain $W$ and support $\calA(\Gamma)$ as the graph of Figure~\ref{F:FullNotDense}, which has eleven edges. Its Newton polytope is smaller, as it is missing the vertices $(3,3,0)$ and $(-3,-3,0)$. \begin{figure}[htb] \centering \includegraphics{images/notPyramidGraph} \qquad \begin{picture}(245,159)(-8,-2) \put(0,0){\includegraphics[height=150pt]{images/notPyramid}} \thicklines \put( 38.5,10){{\color{white}\line(2,3){24.5}}} \put(39.5,10){{\color{white}\line(2,3){24.5}}} \put(178.5,110.5){{\color{white}\line(-1,-2){20}}} \put(179.5,110.5){{\color{white}\line(-1,-2){20}}} \put(133, 8){{\color{white}\line(-1,2){17}}} \put(134, 8){{\color{white}\line(-1,2){17}}} \thinlines \put( 75,132){\footnotesize$(0,0,3)$} \put(155.5,113.5){\footnotesize$(-1,1,1)$} \put(179,110.5){\vector(-1,-2){21}} \put(197, 56){\footnotesize$(-3,0,0)$} \put( 21, 0){\footnotesize$(2,1,1)$} \put( 39,10){\vector(2,3){26.5}} \put( 67, 2){\footnotesize$(2,2,0)$} \put(120, -2){\footnotesize$(1,2,1)$} \put(133.5, 8){\vector(-1,2){19}} \put(-11, 28){\footnotesize$(3,0,0)$} \put(166, 12.5){\footnotesize$(0,3,0)$} \end{picture} \caption{A periodic graph and its Newton polytope.} \label{F:notPyramidGraphPolytope} \end{figure} This polytope has volume $70/3$ and normalized volume $3!\cdot 70/3=140$. Independent computations in Macaulay2 and Singular find a point $c\in Y=\CC^{13}$ such that the critical point equations have 140 regular solutions on $(\CC^\times)^2\times\CC$.
Thus there are no critical points at infinity, and Theorem B implies that the Bloch variety is smooth at infinity. As before, achieving the bound of Corollary~\ref{C:Bound} with regular solutions implies that all critical points are nondegenerate and the critical points property holds for $\Gamma$. \hfill$\diamond$ \end{Example} \section{Conclusion} We considered the critical points of the complex Bloch variety for an operator on a periodic graph. We gave a bound on the number of critical points---the normalized volume of a Newton polytope---together with a criterion for when that bound is attained. We presented a class of graphs (dense periodic graphs) and showed that this criterion holds for general discrete operators on a dense graph. Lastly, we used these results to find $2^{19}+2$ graphs on which the spectral edges conjecture holds for general discrete operators when $d=2$. \bibliographystyle{amsplain} \def\cprime{$'$} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \begin{thebibliography}{10} \bibitem{Berk} G.~Berkolaiko, Y.~Canzani, G.~Cox, and J.L. Marzuola, \emph{A local test for global extrema in the dispersion relation of a periodic graph}, Pure and Applied Analysis \textbf{4} (2022), no.~2, 257--286. \bibitem{BerkKuch} G.~Berkolaiko and P.~Kuchment, \emph{Introduction to quantum graphs}, Mathematical Surveys and Monographs, vol. 186, American Mathematical Society, Providence, RI, 2013. \bibitem{Bernstein} D.~N. Bernstein, \emph{The number of roots of a system of equations}, Functional Anal. Appl. \textbf{9} (1975), no.~3, 183--185. \bibitem{EDD} P.~Breiding, F.~Sottile, and J.~Woodcock, \emph{Euclidean distance degree and mixed volume}, Found. Comput. Math. (2021).
\bibitem{CdV} Y.~Colin~de Verdi\`ere, \emph{Sur les singularit\'{e}s de van {H}ove g\'{e}n\'{e}riques}, Analyse globale et physique math\'{e}matique (Lyon, 1989), no.~46, Soc. Math. France, 1991, pp.~99--110. \bibitem{CLOII} D.~Cox, J.~Little, and D.~O'Shea, \emph{Using algebraic geometry}, second ed., Graduate Texts in Mathematics, vol. 185, Springer, New York, 2005. \bibitem{CLO} \bysame, \emph{Ideals, varieties, and algorithms}, third ed., Undergraduate Texts in Mathematics, Springer, New York, 2007. \bibitem{Singular} W.~Decker, G-M. Greuel, G.~Pfister, and H.~Sch\"onemann, \emph{{\sc Singular} {4-3-0} --- {A} computer algebra system for polynomial computations}, \url{http://www.singular.uni-kl.de}, 2022. \bibitem{DKS} N.~Do, P.~Kuchment, and F.~Sottile, \emph{Generic properties of dispersion relations for discrete periodic operators}, J. Math. Phys. \textbf{61} (2020), no.~10, 103502, 19. \bibitem{Ewald} G.~Ewald, \emph{Combinatorial convexity and algebraic geometry}, Graduate Texts in Mathematics, vol. 168, Springer-Verlag, New York, 1996. \bibitem{FLM} J.~Fillman, W.~Liu, and R.~Matos, \emph{Irreducibility of the {B}loch variety for finite-range {S}chr\"odinger operators}, Journal of Functional Analysis \textbf{283} (2022), no.~10, 109670. \bibitem{FLM23} \bysame, \emph{Algebraic properties of the {F}ermi variety for periodic graph operators}, 2023, {\tt arXiv.org/2305.06471}. \bibitem{FILI} N.~Filonov and I.~Kachkovskiy, \emph{On the structure of band edges of 2-dimensional periodic elliptic operators}, Acta Math. \textbf{221} (2018), no.~1, 59--80. \MR{3877018} \bibitem{Fulton} Wm. Fulton, \emph{Intersection theory}, second ed., Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge., vol.~2, Springer-Verlag, Berlin, 1998. \bibitem{GKZ} I.~M. Gel\cprime~fand, M.~M. Kapranov, and A.~V. Zelevinsky, \emph{Discriminants, resultants, and multidimensional determinants}, Mathematics: Theory \& Applications, Birkh\"{a}user, Boston, MA, 1994.
\bibitem{GKT} D.~Gieseker, H.~Kn\"{o}rrer, and E.~Trubowitz, \emph{The geometry of algebraic {F}ermi curves}, Perspectives in Mathematics, vol.~14, Academic Press, Inc., Boston, MA, 1993. \bibitem{M2} D.R. Grayson and M.E. Stillman, \emph{Macaulay2, a software system for research in algebraic geometry}, Available at \url{http://www.math.uiuc.edu/Macaulay2/}. \bibitem{RP} F.~Klopp and J.~Ralston, \emph{Endpoints of the spectrum of periodic operators are generically simple}, Methods Appl. Anal. \textbf{7} (2000), no.~3, 459--463. \bibitem{Kushnirenko} A.~G. Kouchnirenko, \emph{Poly\`edres de {N}ewton et nombres de {M}ilnor}, Invent. Math. \textbf{32} (1976), no.~1, 1--31. \bibitem{Kravaris} C.~Kravaris, \emph{On the density of eigenvalues on periodic graphs}, SIAM Journal on Applied Algebra and Geometry, to appear. {\tt arXiv.org/2103.12734}, 2021. \bibitem{KuchBook} P.~Kuchment, \emph{Floquet theory for partial differential equations}, Operator Theory: Advances and Applications, vol.~60, Birkh\"{a}user Verlag, Basel, 1993. \bibitem{KuchBAMS} \bysame, \emph{An overview of periodic elliptic operators}, Bull. Amer. Math. Soc. (N.S.) \textbf{53} (2016), no.~3, 343--414. \bibitem{LiSh} W.~Li and S.P. Shipman, \emph{Irreducibility of the {F}ermi surface for planar periodic graph operators}, Lett. Math. Phys. \textbf{110} (2020), no.~9, 2543--2572. \bibitem{LNRW} J.~Lindberg, N.~Nicholson, J.L. Rodriguez, and Z.~Wang, \emph{The maximum likelihood degree of sparse polynomial systems}, SIAM Journal on Applied Algebra and Geometry \textbf{7} (2023), no.~1. \bibitem{Liu22} W.~Liu, \emph{Irreducibility of the {F}ermi variety for discrete periodic {S}chr\"{o}dinger operators and embedded eigenvalues}, Geom. Funct. Anal. \textbf{32} (2022), no.~1, 1--30. \bibitem{Liu+} \bysame, \emph{Fermi isospectrality of discrete periodic {S}chr\"{o}dinger operators with separable potentials on {$\mathbb{Z}^2$}}, 2023, pp.~1139--1149. \MR{4576766} \bibitem{Nov81} S.~P.
Novikov, \emph{Bloch functions in the magnetic field and vector bundles. {T}ypical dispersion relations and their quantum numbers}, Dokl. Akad. Nauk SSSR \textbf{257} (1981), no.~3, 538--543. \bibitem{Nov83} \bysame, \emph{Two-dimensional {S}chr\"{o}dinger operators in periodic fields}, Current problems in mathematics, {V}ol. 23, Itogi Nauki i Tekhniki, Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow, 1983, pp.~3--32. \bibitem{Shaf13} I.~R. Shafarevich, \emph{Basic algebraic geometry 1}, third ed., translated from the 2007 Russian edition, Springer, Heidelberg, 2013. \bibitem{IHP} F.~Sottile, \emph{Real solutions to equations from geometry}, University Lecture Series, vol.~57, American Mathematical Society, Providence, RI, 2011. \bibitem{GBCP} B.~Sturmfels, \emph{Gr\"obner bases and convex polytopes}, University Lecture Series, vol.~8, American Mathematical Society, Providence, RI, 1996. \end{thebibliography} \end{document}
2206.13640v3
http://arxiv.org/abs/2206.13640v3
The first homology group with twisted coefficients for the mapping class group of a non-orientable surface of genus three with two boundary components
\documentclass[11pt]{amsart} \usepackage{graphicx} \usepackage{amssymb} \usepackage{amscd} \usepackage{cite} \usepackage[all]{xy} \usepackage{multicol} \usepackage{url} \newtheorem{tw}{Theorem}[section] \newtheorem{prop}[tw]{Proposition} \newtheorem{lem}[tw]{Lemma} \newtheorem{wn}[tw]{Corollary} \theoremstyle{remark} \newtheorem{uw}[tw]{Remark} \theoremstyle{definition} \newtheorem*{df}{Definition} \newcommand{\Cal}[1]{\mathcal{#1}} \newcommand{\bez}{\setminus} \newcommand{\podz}{\subseteq} \newcommand{\fii}{\varphi} \newcommand{\sig}{\sigma} \newcommand{\eps}{\varepsilon} \newcommand{\ro}{\varrho} \newcommand{\fal}[1]{\widetilde{#1}} \newcommand{\dasz}[1]{\widehat{#1}} \newcommand{\kre}[1]{\overline{#1}} \newcommand{\gen}[1]{\langle #1 \rangle} \newcommand{\map}[3]{#1\colon #2\to #3} \newcommand{\field}[1]{\mathbb{#1}} \newcommand{\zz}{\field{Z}} \newcommand{\kk}{\field{K}} \newcommand{\rr}{\field{R}} \newcommand{\KL}[1]{ \bigl\{ #1 \bigr\}} \newcommand{\st}{\;|\;} \newcommand{\cM}{{\Cal M}} \newcommand{\SR}[1]{\underset{\rightarrow}{[#1]}} \newcommand{\SL}[1]{\underset{\leftarrow}{[#1]}} \newcommand{\ukre}[1]{\underline{#1}} \newcommand{\lst}[2]{{#1}_1,\dotsc,{#1}_{#2}} \newcommand{\Mob}{M\"{o}bius strip} \makeatletter \@namedef{subjclassname@2020}{\textup{2020} Mathematics Subject Classification} \makeatother \begin{document} \numberwithin{equation}{section} \title[The first homology group with twisted coefficients \ldots] {The first homology group with twisted coefficients for the mapping class group of a non--orientable surface of genus three with two boundary components } \author{Piotr Pawlak \hspace{1em} Micha\l\ Stukow} \address[]{ Institute of Mathematics, Faculty of Mathematics, Physics and Informatics, University of Gda\'nsk, 80-308 Gda\'nsk, Poland } \email{[email protected], [email protected]} \keywords{Mapping class group, Homology of groups, Non--orientable surface} \subjclass[2020]{Primary 57K20; Secondary 20J06, 55N25, 22F50} \begin{abstract} We determine the first homology group with coefficients in $H_1(N;\zz)$ for the mapping class
group of a non--orientable surface $N$ of genus three with two boundary components. \end{abstract} \maketitle \section{Introduction} Let $N_{g,s}^n$ be a smooth, non--orientable, compact surface of genus $g$ with $s$ boundary components and $n$ punctures. If $s$ and/or $n$ is zero, then we omit it from the notation. If we do not want to emphasise the numbers $g,s,n$, we simply write $N$ for a surface $N_{g,s}^n$. Recall that $N_{g}$ is a connected sum of $g$ projective planes and $N_{g,s}^n$ is obtained from $N_g$ by removing $s$ open discs and specifying a set $\Sigma=\{z_1,\ldots,z_n\}$ of $n$ distinguished points in the interior of~$N$. Let ${\textrm{Diff}}(N)$ be the group of all diffeomorphisms $\map{h}{N}{N}$ such that $h$ is the identity on each boundary component. By ${\Cal{M}}(N)$ we denote the quotient group of ${\textrm{Diff}}(N)$ by the subgroup consisting of maps isotopic to the identity, where we assume that isotopies are the identity on each boundary component. ${\Cal{M}}(N)$ is called the \emph{mapping class group} of $N$. Let ${\Cal{PM}}^+(N)$ be the subgroup of ${\Cal{M}}(N)$ consisting of elements which fix $\Sigma$ pointwise and preserve a local orientation around the punctures $\{z_1,\ldots,z_n\}$. \subsection{Main results} We showed in \cite{Stukow_homolTopApp} that if $N_{g,s}$ is a non--orientable surface of genus $g\geq 3$ with $s\leq 1$ boundary components, then \begin{equation*}H_1({\Cal{M}}(N_{g,s});H_1(N_{g,s};\zz))\cong \begin{cases} \zz_2\oplus\zz_2\oplus\zz_2 &\text{if $g\in\{3,4,5,6\},$}\\ \zz_2\oplus\zz_2&\text{if $g\geq 7$.} \end{cases}\end{equation*} It is quite natural to extend this result to surfaces with more than one boundary component. However, this is quite challenging, because we do not have a simple presentation for the mapping class group in this case.
Nevertheless, in the forthcoming paper \cite{PawlakStukowHomoPunctured} we managed (among other homological results) to compute the first homology group $H_1({\Cal{M}}(N_{g,s});H_1(N_{g,s};\zz))$ for any $s\geq 0$ by bounding it both from above and below. However, it turned out that the proof of nontriviality of two important families of homology generators (denoted by $a_{1,3+j}$ and $u_{1,3+j}$ in \cite{PawlakStukowHomoPunctured}) is beyond the methods used in \cite{PawlakStukowHomoPunctured}. By a simple reduction, the required argument in \cite{PawlakStukowHomoPunctured} boils down to the exceptional case of a non--orientable surface $N_{3}^2$ of genus 3 with two punctures. The main purpose of this paper is to fill this gap, and to prove the following two theorems. \begin{tw}\label{MainThm1} If $N_{3,2}$ is a non--orientable surface of genus $g=3$ with 2 boundary components, then \[H_1({\Cal{M}}(N_{3,2});H_1(N_{3,2};\zz))\cong \zz_2^{6}.\] \end{tw} \begin{tw}\label{MainThm2} If $N_{3}^2$ is a non--orientable surface of genus $g=3$ with 2 punctures, then \[H_1({\Cal{PM}^+}(N_{3}^2);H_1(N_{3}^2;\zz))\cong \zz_2^{5}.\] \end{tw} \subsection{Homology of groups} Let us briefly review how to compute the first homology of a group with twisted coefficients -- for more details see Section~5 of \cite{Stukow_HiperOsaka} and the references therein. For a given group $G$ and a $G$-module $M$ (that is, a $\zz G$-module) we define $C_2(G)$ and $C_1(G)$ as the free $G$-modules generated respectively by symbols $[h_1|h_2]$ and $[h_1]$, where $h_i\in G$. We also define $C_0(G)$ as the free $G$-module generated by the empty bracket $[\cdot]$.
Then the first homology group $H_1(G;M)$ is the first homology group of the complex \[\xymatrix@C=3pc@R=3pc{C_2(G)\otimes_G M\ar[r]^{\ \partial_2\otimes_G {\rm id}}&C_1(G)\otimes_G M \ar[r]^{\ \partial_1\otimes_G {\rm id}}&C_0(G)\otimes_G M},\] where \begin{equation}\label{diff:formula} \begin{aligned} \partial_2([h_1|h_2])&=h_1[h_2]-[h_1h_2]+[h_1],\\ \partial_1([h])&=h[\cdot]-[\cdot]. \end{aligned} \end{equation} For simplicity, we write $\otimes_G=\otimes$ and $\partial\otimes {\rm id}=\kre{\partial}$ henceforth. If the group $G$ has a presentation $G=\langle X\,|\,R\rangle$ and \[\gen{\kre{X}}=\gen{[x]\otimes m\st x\in X, m\in M}\podz C_1(G)\otimes M,\] then $H_1(G;M)$ is a quotient of $\gen{\kre{X}}\cap \ker\kre{\partial}_1$. The kernel of this quotient corresponds to relations in $G$ (that is elements of $R$). To be more precise, if $r\in R$ has the form $x_1\cdots x_k=y_1\cdots y_n$ and $m\in M$, then $r$ gives the relation (in $H_1(G;M)$) \begin{equation} \kre{r}\otimes m\!:\ \sum_{i=1}^{k}x_1\cdots x_{i-1}[x_i]\otimes m=\sum_{i=1}^{n}y_1\cdots y_{i-1}[y_i]\otimes m.\label{eq_rew_rel} \end{equation} Then \[H_1(G;M)=\gen{\kre{X}}\cap \ker\kre{\partial}_1/\gen{\kre{R}},\] where \[\kre{R}=\{\kre{r}\otimes m\st r\in R,m\in M\}.\] \section{Presentations for the groups ${\Cal{PM}}^{+}(N_{3}^2)$ and ${\Cal{M}}(N_{3,2})$} Represent the surface $N_{3,2}$ as a sphere with three crosscaps $\mu_1,\mu_2,\mu_3$ and two boundary components (Figure \ref{r01}). Let \[\alpha_1,\alpha_{2},\eps_1,\eps_2, \delta_1,\delta_2,\beta_1,\beta_2,\beta_3\] be two--sided circles indicated in Figure \ref{r01}. \begin{figure}[h] \begin{center} \includegraphics[width=0.88\textwidth]{R01.pdf} \caption{Surface $N_{3,2}$ as a sphere with three crosscaps.}\label{r01} \end{center} \end{figure} Small arrows in that figure indicate directions of Dehn twists \[a_1,a_{2},e_1,e_2,d_1,d_2, b_1,b_2,b_3 \] associated with these circles. 
Let $u$ be a \emph{crosscap transposition}, that is the map which interchanges the first two crosscaps (see Figure \ref{r03}). \begin{figure}[h] \begin{center} \includegraphics[width=0.62\textwidth]{R03.pdf} \caption{Crosscap transposition $u$.}\label{r03} \end{center} \end{figure} \begin{tw}\label{tw:pres:two:holes} The mapping class group ${\Cal{M}}(N_{3,2})$ of a non--orientable surface of genus 3 with two boundary components admits a presentation with generators $\{a_1,a_2,e_1,e_2,d_1,d_2,b_1,b_2,b_3, u\}$. The defining relations are \begin{multicols}{2} \begin{itemize} \item[(1a)] $a_1e_1=e_1a_1$, \item[(1b)] $a_1e_2=e_2a_1$, \item[(1c)] $e_1e_2=e_2e_1$, \item[(2a)] $a_1a_2a_1=a_2a_1a_2$, \item[(2b)] $e_1a_2e_1=a_2e_1a_2$, \item[(2c)] $e_2a_2e_2=a_2e_2a_2$, \item[(3)] $a_1ua_1=u$, \item[(4)] $ue_2=a_2ue_2a_2$, \item[(5)] $ub_1=b_1u$, \item[(6)] $ub_3=b_3u$, \item[(7)] $a_2b_2=b_2a_2$, \item[(8)] $(e_1u)^2=d_1b_1$, \item[(9a)] $b_3=(e_2u)^2$, \item[(9b)] $(e_2u)^2=(a_2e_2a_1^2)^3$, \item[(10)] $ue_1u^{-1}b_2u=a_2b_1^{-1}a_2^{-1}ue_1$, \item[(11)] $b_3b_1(b_2u)^2=d_2^2d_1u^2$, \item[(12)] $(a_2e_2e_1a_1)^3=d_1d_2$, \item[(13a)] $d_1a_1=a_1d_1$, \item[(13b)] $d_1a_2=a_2d_1$, \item[(13c)] $d_1e_1=e_1d_1$, \item[(13d)] $d_1u=ud_1$, \item[(13e)] $d_2u=ud_2$. 
\end{itemize} \end{multicols} \end{tw} \begin{proof} By Theorem 7.18 of \cite{Szep_curv}, the mapping class group ${\Cal{M}}(N_{3,2})$ admits a presentation with generators $\{A_1,A_2,A_3,B,D_1,D_2,D_3,U, C_1,C_2\}$ and relations \begin{itemize} \item[(S1)] $A_iA_j=A_jA_i$, $i,j=1,2,3$, \item[(S2)] $A_iBA_i=BA_iB$, $i=1,2,3$, \item[(S3)] $UA_1U^{-1}=A_1^{-1}$, \item[(S4)] $UBU^{-1}=A_3^{-1}B^{-1}A_3$, \item[(S5)] $UD_1=D_1U$, \item[(S6)] $UD_3=D_3U$, \item[(S7)] $BD_2=D_2B$, \item[(S8)] $(UA_2)^2=D_1C_1$, \item[(S9)] $(A_1^2A_3B)^3=(UA_3)^2=D_3$, \item[(S10)] $A_2^{-1}UD_2U^{-1}A_2=UB^{-1}D_1^{-1}BU^{-1}$, \item[(S11)] $(UD_2)^2D_1D_3=U^2C_1C_2^2$, \item[(S12)] $(A_1A_2A_3B)^3=C_1C_2=C_2C_1$, \item[(S13)] $C_iA_j=A_jC_i$, $C_iD_j=D_jC_i$, $C_iB=BC_i$, $C_iU=UC_i$, $i=1,2$, $j=1,2,3$. \end{itemize} Moreover, the topological model for the surface $N_{3,2}$ can be chosen in such a way that \[\begin{aligned} A_1&=a_1^{-1},\ A_2=e_1^{-1},\ A_3=e_2^{-1},\\ B&=a_2^{-1},\ U=u^{-1},\\ D_1&=b_1^{-1},\ D_2=b_2^{-1},\ D_3=b_3^{-1},\\ C_1&=d_1^{-1},\ C_2=d_2^{-1}. \end{aligned} \] To see this correspondence, simply reflect Figure 14 of \cite{Szep_curv} across the vertical axis passing through the second crosscap. Therefore, we can easily rewrite Relations (S1)--(S13) into our set of generators. In most cases this is straightforward, but note the following remarks. \begin{itemize} \item Generators $A_3,D_1,D_2,D_3$ and $C_2$ are superfluous in the above presentation -- they can be easily removed by use of Relations (S2)\&(S4), (S8), (S10), (S9) and (S12) respectively. \item By the above remark, to ensure that $C_1$ and $C_2$ are central in ${\Cal{M}}(N_{3,2})$ (Relation (S13)), it is enough to have commutativity of these elements with $A_1,A_2,B$ and~$U$.
\item In order to show that Relations (13a)--(13e) fully replace Relations (S13), we still need to show that $d_2=d_1^{-1}(a_2e_2e_1a_1)^3$ commutes with $a_1,a_2$ and $e_1$, or equivalently, that $(a_2e_2e_1a_1)^3$ commutes with $a_1,a_2$ and $e_1$. \end{itemize} Let us first show this in the most difficult of these cases -- the case of the commutativity with $a_2$. The computations repeatedly make use of Relations (1a)--(2c). \[\begin{aligned} a_2^{-1}(a_2e_2e_1a_1)^3a_2&= \SR{a_2^{-1}}a_2e_2e_1a_1a_2\SR{e_2}e_1a_1a_2e_2e_1a_1a_2\\ &=a_2a_2^{-1}e_2e_1a_1a_2\SR{e_1}a_1[e_2a_2e_2]e_1a_1a_2\\ &=a_2a_2^{-1}e_2e_1[a_1a_2a_1]e_1a_2e_2a_2e_1a_1a_2\\ &=a_2a_2^{-1}e_2e_1a_2a_1[a_2e_1a_2]e_2a_2e_1a_1a_2\\ &=a_2a_2^{-1}e_2e_1a_2a_1\SL{e_1}a_2\SR{e_1}e_2a_2e_1a_1a_2\\ &=a_2a_2^{-1}e_2[e_1a_2e_1]a_1a_2e_2[e_1a_2e_1]a_1a_2\\ &=a_2[a_2^{-1}e_2a_2]e_1a_2a_1a_2e_2a_2e_1[a_2a_1a_2]\\ &=a_2e_2a_2\SR{e_2^{-1}}e_1[a_2a_1a_2]e_2a_2\SR{e_1}a_1a_2a_1\\ &=a_2e_2a_2e_1\SR{e_2^{-1}}a_1a_2a_1\SL{e_2}a_2a_1e_1a_2a_1\\ &=a_2e_2a_2e_1a_1[e_2^{-1}a_2e_2][a_1a_2a_1]e_1a_2a_1\\ &=a_2e_2a_2e_1a_1a_2e_2\ukre{a_2}^{-1}\ukre{a_2}\SL{a_1}[a_2e_1a_2]a_1\\ &=a_2e_2a_2e_1[a_1a_2a_1]e_2e_1a_2e_1a_1\\ &=a_2e_2[a_2e_1a_2]a_1a_2e_2\SL{e_1}a_2e_1a_1\\ &=a_2e_2e_1a_2\SR{e_1}a_1a_2e_1e_2a_2e_1a_1\\ &=a_2e_2e_1a_2a_1[e_1a_2e_1]e_2a_2e_1a_1\\ &=a_2e_2e_1[a_2a_1a_2]e_1[a_2e_2a_2]e_1a_1\\ &=(a_2e_2e_1a_1)a_2\SR{a_1}e_1\SL{e_2}(a_2e_2e_1a_1)=(a_2e_2e_1a_1)^3. \end{aligned} \] Now we show the commutativity of $(a_2e_2e_1a_1)^3$ with $a_1$. \[\begin{aligned} a_1^{-1}(a_2e_2e_1a_1)^3a_1&= a_1^{-1}a_2e_2e_1\SL{a_1}a_2e_2e_1a_1a_2e_2e_1\SL{a_1}a_1\\ &=[a_1^{-1}a_2a_1]\SR{e_2}e_1a_2e_2e_1[a_1a_2a_1]e_2e_1a_1\\ &=a_2a_1a_2^{-1}e_1[e_2a_2e_2]e_1a_2a_1(a_2e_2e_1a_1)\\ &=a_2a_1[a_2^{-1}e_1a_2]e_2[a_2e_1a_2]a_1(a_2e_2e_1a_1)\\ &=a_2a_1e_1a_2\ukre{e_1^{-1}}e_2\ukre{e_1}a_2e_1a_1(a_2e_2e_1a_1)\\ &=a_2a_1e_1[a_2e_2a_2]e_1a_1(a_2e_2e_1a_1)\\ &=a_2\SR{a_1}e_1\SL{e_2}(a_2e_2e_1a_1)(a_2e_2e_1a_1)=(a_2e_2e_1a_1)^3. 
\end{aligned} \] And finally, we show the commutativity of $(a_2e_2e_1a_1)^3$ with $e_1$. \[\begin{aligned} e_1^{-1}(a_2e_2e_1a_1)^3e_1&= e_1^{-1}a_2e_2\SL{e_1}a_1a_2e_2\SR{e_1}a_1a_2e_2e_1a_1\SL{e_1}\\ &=[e_1^{-1}a_2e_1]\SR{e_2}a_1a_2e_2a_1[e_1a_2e_1]e_2e_1a_1\\ &=a_2e_1a_2^{-1}a_1[e_2a_2e_2]a_1a_2e_1(a_2e_2e_1a_1)\\ &=a_2e_1[a_2^{-1}a_1a_2]e_2[a_2a_1a_2]e_1(a_2e_2e_1a_1)\\ &=a_2e_1a_1a_2\ukre{a_1^{-1}}e_2\ukre{a_1}a_2a_1\SL{e_1}(a_2e_2e_1a_1)\\ &=a_2e_1a_1[a_2e_2a_2]e_1a_1(a_2e_2e_1a_1)\\ &=a_2e_1\SR{a_1}\SL{e_2}(a_2e_2e_1a_1)(a_2e_2e_1a_1)=(a_2e_2e_1a_1)^3. \end{aligned} \] \end{proof} \begin{uw} As we observed in the proof of Theorem \ref{tw:pres:two:holes}, the generators $e_2$, $b_1$, $b_2$, $b_3$ and $d_2$ are superfluous in the above presentation. However, we decided to leave these generators in order to have simpler relations. This simplifies the computations in the proofs of Theorems \ref{MainThm1} and \ref{MainThm2}. \end{uw} \begin{tw}\label{tw:pres:two:punct} The mapping class group ${\Cal{PM}^+}(N_{3}^2)$ of a non--orientable surface of genus 3 with two punctures admits a presentation with generators $\{a_1,a_2,e_1,e_2,b_1,b_2,b_3, u\}$. The defining relations are Relations (1a)--(7), (9a), (9b), (10) from the statement of Theorem \ref{tw:pres:two:holes} and additionally \begin{itemize} \item[(8')] $(e_1u)^2=b_1$, \item[(11')] $b_3b_1(b_2u)^2=u^2$, \item[(12')] $(a_2e_2e_1a_1)^3=1$. \end{itemize} \end{tw} \begin{proof} By Theorem 7.17 of \cite{Szep_curv}, the mapping class group ${\Cal{PM}^+}(N_{3}^2)$ admits a presentation with Relations (S1)--(S7), (S9), (S10) from the proof of Theorem \ref{tw:pres:two:holes} and additionally \begin{itemize} \item[(S8')] $(UA_2)^2=D_1$, \item[(S11')] $(UD_2)^2D_1D_3=U^2$, \item[(S12')] $(A_1A_2A_3B)^3=1$. \end{itemize} If we rewrite Relations (S8'), (S11') and (S12') into our set of generators, we get Relations (8'), (11') and (12') respectively.
\end{proof} \section{Action of ${\Cal{M}}(N_{3,s}^n)$ on $H_1(N_{3,s}^n;\zz)$, where $(s,n)=(0,2)$ or $(s,n)=(2,0)$} In the rest of the paper we assume that $(s,n)=(0,2)$ or $(s,n)=(2,0)$. Let $\gamma_1,\gamma_2,\gamma_3,\delta_1,\delta_2$ be the circles indicated in Figure \ref{r01}. Note that $\gamma_1,\gamma_2,\gamma_3$ are one--sided, $\delta_1,\delta_2$ are two--sided and the $\zz$-module $H_1(N_{3,s}^n;\zz)$ is freely generated by homology classes $[\gamma_1],[\gamma_2],[\gamma_3],[\delta_1]$. By abuse of notation, we will not distinguish between the curves $\gamma_1,\gamma_2,\gamma_{3},\delta_1$ and their homology classes. The mapping class group ${\Cal{M}}(N_{3,s}^n)$ acts on $H_1(N_{3,s}^n;\zz)$, hence we have a representation \[\map{\psi}{{\Cal{M}}(N_{3,s}^n)}{\textrm{Aut}(H_1(N_{3,s}^n;\zz))}. \] It is straightforward to check that \begin{equation}\begin{aligned}\label{eq:psi:1} \psi(a_1)&=\begin{bmatrix} 0&1&0&0\\ -1&2&0&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix},\psi(a_1^{-1})=\begin{bmatrix} 2&-1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{bmatrix} \\ \psi(a_2)&=\begin{bmatrix} 1&0&0&0\\ 0&0&1&0\\ 0&-1&2&0\\ 0&0&0&1\end{bmatrix}, \psi(a_2^{-1})=\begin{bmatrix} 1&0&0&0\\ 0&2&-1&0\\ 0&1&0&0\\ 0&0&0&1 \end{bmatrix}\\ \psi(u)&=\psi(u^{-1})=\begin{bmatrix} 0&1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{bmatrix} \end{aligned} \end{equation} \begin{equation}\begin{aligned} \label{eq:psi:2} \psi(e_1)&= \begin{bmatrix}0&1&0&0\\ -1&2&0&0\\ 0&0&1&0\\ -1&1&0&1 \end{bmatrix}, \psi(e_1^{-1})= \begin{bmatrix}2&-1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 1&-1&0&1 \end{bmatrix}\\ \psi(e_2)&= \begin{bmatrix}2&-1&0&0\\ 1&0&0&0\\ 2&-2&1&0\\ 0&0&0&1 \end{bmatrix}, \psi(e_2^{-1})= \begin{bmatrix}0&1&0&0\\ -1&2&0&0\\ -2&2&1&0\\ 0&0&0&1 \end{bmatrix}\\ \psi(b_j^{\pm 1})&=\psi(d_j^{\pm 1})= I_4\\ \end{aligned}\end{equation} where $I_4$ is the identity matrix of rank 4.
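Since every computation below hinges on these matrices, a quick machine check is worthwhile. The following Python sketch is our own verification aid, not part of the argument; it assumes the action is by left multiplication on column vectors, so that $\psi(gh)=\psi(g)\psi(h)$, and confirms that the displayed matrices are mutually inverse and satisfy sample defining relations of Theorem \ref{tw:pres:two:holes}:

```python
from functools import reduce

# psi-matrices copied from (1) and (2); "i" suffix marks the inverse.
def mm(*Ms):  # product of 4x4 matrices, left to right
    return reduce(lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(4))
                                 for j in range(4)] for i in range(4)], Ms)

I4  = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
A1  = [[0, 1, 0, 0], [-1, 2, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
A1i = [[2, -1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
A2  = [[1, 0, 0, 0], [0, 0, 1, 0], [0, -1, 2, 0], [0, 0, 0, 1]]
A2i = [[1, 0, 0, 0], [0, 2, -1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
E1  = [[0, 1, 0, 0], [-1, 2, 0, 0], [0, 0, 1, 0], [-1, 1, 0, 1]]
E1i = [[2, -1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [1, -1, 0, 1]]
E2  = [[2, -1, 0, 0], [1, 0, 0, 0], [2, -2, 1, 0], [0, 0, 0, 1]]
E2i = [[0, 1, 0, 0], [-1, 2, 0, 0], [-2, 2, 1, 0], [0, 0, 0, 1]]
U   = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# the displayed matrices really are mutually inverse (psi(u) is an involution)
for P, Q in [(A1, A1i), (A2, A2i), (E1, E1i), (E2, E2i), (U, U)]:
    assert mm(P, Q) == I4 and mm(Q, P) == I4

assert mm(A1, E1) == mm(E1, A1)            # (1a)
assert mm(A1, E2) == mm(E2, A1)            # (1b)
assert mm(E1, E2) == mm(E2, E1)            # (1c)
assert mm(A1, A2, A1) == mm(A2, A1, A2)    # (2a)
assert mm(E1, A2, E1) == mm(A2, E1, A2)    # (2b)
assert mm(E2, A2, E2) == mm(A2, E2, A2)    # (2c)
assert mm(A1, U, A1) == U                  # (3)
assert mm(U, E2) == mm(A2, U, E2, A2)      # (4)
```

Relations involving only $b_j$ and $d_j$ hold automatically under $\psi$, since $\psi(b_j^{\pm 1})=\psi(d_j^{\pm 1})=I_4$.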
\section{Computing $\gen{\kre{X}}\cap \ker\kre{\partial}_1$} Let $G={\Cal{M}}(N_{3,s}^n)$ and $M=H_1(N_{3,s}^n;\zz)$ as in the previous section, and define \[\xi_i=\begin{cases} \gamma_i&\text{for $i=1,2,3$,}\\ \delta_{1}&\text{for $i=4$.} \end{cases} \] If $h\in G$, then \[\kre{\partial}_1([h]\otimes\xi_i)=(h-1)[\cdot]\otimes\xi_i=(\psi(h)^{-1}-I_4)\xi_i,\] where we identified $C_0(G)\otimes M$ with $M$ by the map $[\cdot]\otimes m\mapsto m$. Let us denote \[[a_j]\otimes \xi_i,\ [u]\otimes\xi_i,\ [e_j]\otimes\xi_i,\ [b_j]\otimes\xi_i,\ [d_j]\otimes\xi_i\] respectively by \[a_{j,i},\ u_{i},\ e_{j,i},\ b_{j,i},\ d_{j,i}.\] Using the formulas \eqref{eq:psi:1} and \eqref{eq:psi:2}, we obtain \begin{equation*}\begin{aligned}\kre{\partial}_1(a_{j,i})&=\begin{cases} \gamma_j+\gamma_{j+1}&\text{if $i=j$}\\ -\gamma_j-\gamma_{j+1}&\text{if $i=j+1$}\\ 0&\text{otherwise,} \end{cases} \\ \kre{\partial}_1(u_{i})&=\begin{cases} -\gamma_1+\gamma_{2}&\text{if $i=1$}\\ \gamma_1-\gamma_{2}&\text{if $i=2$}\\ 0&\text{otherwise,} \end{cases}\\ \kre{\partial}_1(e_{1,i})&=\begin{cases} \gamma_1+\gamma_2+\delta_1&\text{if $i=1$}\\ -\gamma_1-\gamma_2-\delta_1&\text{if $i=2$}\\ 0&\text{otherwise,} \end{cases} \\ \kre{\partial}_1(e_{2,i})&=\begin{cases} -\gamma_1-\gamma_2-2\gamma_3&\text{if $i=1$}\\ \gamma_1+\gamma_2+2\gamma_3&\text{if $i=2$}\\ 0&\text{otherwise,} \end{cases}\\ \kre{\partial}_1(b_{j,i})&=\kre{\partial}_1(d_{j,i})=0. \end{aligned}\end{equation*} The above formulas show that all of the following elements are contained in $\ker\kre{\partial}_1$ \begin{enumerate} \item[(K1)] $a_{j,i}$ for $j=1,2$ and $i\in\{1,2,3,4\}\bez\{j,j+1\}$, \item[(K2)] $a_{j,j}+a_{j,j+1}$ for $j=1,2$, \item[(K3)] $u_{i}$ for $i=3,4$, \item[(K4)] $u_{1}+u_{2}$, \item[(K5)] $e_{j,i}$ for $j=1,2$ and $i=3,4$, \item[(K6)] $e_{j,1}+e_{j,2}$ for $j=1,2$, \item[(K7)] $e_{2,1}+2a_{2,2}-u_{1}$, \item[(K8)] $b_{j,i}$ for $j=1,2,3$ and $i=1,2,3,4$, \item[(K9)] $d_{j,i}$ for $j=1,2$ and $i=1,2,3,4$. 
\end{enumerate} \begin{prop}\label{prop:kernel:1} Let $G={\Cal{M}}(N_{3,2})$. Then $\gen{\kre{X}}\cap \ker\kre{\partial}_1$ is the abelian group generated freely by Generators (K1)--(K9). \end{prop} \begin{proof} By Theorem \ref{tw:pres:two:holes}, $\gen{\kre{X}}$ is generated freely by $\{a_{j,i},u_{i},e_{j,i},b_{j,i},d_{j,i}\}$. Suppose that $h\in\gen{\kre{X}}\cap\ker\kre{\partial}_1$. We will show that $h$ can be uniquely expressed as a linear combination of Generators (K1)--(K9) specified in the statement of the proposition. We decompose $h$ as follows: \begin{itemize} \item $h=h_0=h_1+h_2$, where $h_1$ is a combination of Generators (K1)--(K2) and $h_2$ does not contain $a_{j,i}$ with $i\neq j$; \item $h_2=h_3+h_4$, where $h_3$ is a combination of Generators (K3)--(K4) and $h_4$ does not contain $u_i$ with $i\neq 1$; \item $h_4=h_5+h_6$, where $h_5$ is a combination of Generators (K5)--(K7) and $h_6$ does not contain $e_{j,i}$ for $(i,j)\neq (1,1)$; \item $h_6=h_7+h_8$, where $h_7$ is a combination of Generators (K8) and $h_8$ does not contain $b_{j,i}$; \item $h_8=h_9+h_{10}$, where $h_9$ is a combination of Generators (K9) and $h_{10}$ does not contain $d_{j,i}$. \end{itemize} Observe also that for each even $k$ with $0\leq k\leq 8$, $h_{k+1}$ and $h_{k+2}$ are uniquely determined by $h_k$. The element $h_{10}$ has the form \[h_{10}=k_1a_{1,1}+k_2a_{2,2}+l e_{1,1}+m u_{1}\] for some integers $k_1,k_2,l,m$. Hence \[\begin{aligned} 0=&\kre{\partial}_1(h_{10})=k_1(\gamma_1+\gamma_2)+k_2(\gamma_2+\gamma_3)+l(\gamma_1+\gamma_2+\delta_1)+m(-\gamma_1+\gamma_2). \end{aligned}\] This implies that $l=0$ and $k_2=0$, and then $k_1=m=0$; thus $h_{10}=0$. \end{proof} By an analogous argument, Theorem \ref{tw:pres:two:punct} implies the following. \begin{prop}\label{prop:kernel:2} Let $G={\Cal{PM}}^+(N_{3}^2)$. Then $\gen{\kre{X}}\cap \ker\kre{\partial}_1$ is the abelian group generated freely by Generators (K1)--(K8).
\end{prop} \section{Computing $H_1({\Cal{M}}(N_{3,2});H_1(N_{3,2};\zz))$} The goal of this section is to prove the following. \begin{prop}\label{prop:h1:1} The abelian group $H_1({\Cal{M}}(N_{3,2});H_1(N_{3,2};\zz))$ has a presentation with generators \[a_{1,3},\ a_{1,4},\ a_{1,1}+a_{1,2},\ u_3,\ u_4,\ d_{1,1},\] and relations \[2a_{1,3}=2a_{1,4}=2\left(a_{1,1}+a_{1,2}\right)=2u_3=2u_4=2d_{1,1}=0.\] \end{prop} \begin{proof} We start with the generators provided by Proposition \ref{prop:kernel:1}, and using the formula \eqref{eq_rew_rel}, we rewrite the relations from Theorem \ref{tw:pres:two:holes} as relations in $H_1({\Cal{M}}(N_{3,2});H_1(N_{3,2};\zz))$. \subsection*{(3)} Relation (3) gives \[\begin{aligned} 0&=([a_1]+a_1[u]+a_1u[a_1]-[u])\otimes \xi_i\\ &=[a_1]\otimes (I_4+\psi(u^{-1}a_1^{-1}))\xi_i+[u]\otimes(\psi(a_1^{-1})-I_4)\xi_i\\ &=[a_1]\otimes \begin{bmatrix} 2&0&0&0\\ 2&0&0&0\\ 0&0&2&0\\ 0&0&0&2 \end{bmatrix}\xi_i+[u]\otimes \begin{bmatrix} 1&-1&0&0\\ 1&-1&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix}\xi_i. \end{aligned} \] We conclude from this relation that Generator (K4) is trivial: \begin{equation}u_1+u_2=0\label{k4:0},\end{equation} and each of the generators $a_{1,3},\ a_{1,4},\ a_{1,1}+a_{1,2}$ is of order at most 2: \begin{equation}\label{a1:mod2} 2a_{1,3}=2a_{1,4}=2(a_{1,1}+a_{1,2})=0. \end{equation} \subsection*{(2a)--(2c)} Relation (2a) gives \[\begin{aligned}0=& ([a_1]+a_{1}[a_{2}]+a_{1}a_{2}[a_{1}]-[a_{2}]-a_{2}[a_{1}]-a_{2}a_{1}[a_{2}])\otimes \xi_i\\ =&[a_1]\otimes (I_4+\psi(a_{2}^{-1}a_1^{-1})-\psi(a_{2}^{-1}))\xi_i\\ &+[a_{2}]\otimes(\psi(a_1^{-1})-I_4-\psi(a_1^{-1}a_{2}^{-1}))\xi_i\\ =& [a_1]\otimes \begin{bmatrix} 2&-1&0&0\\ 2&-1&0&0\\ 1&-1&1&0\\ 0&0&0&1 \end{bmatrix}\xi_i+[a_2]\otimes \begin{bmatrix} -1&1&-1&0\\ 0&-1&0&0\\ 0&-1&0&0\\ 0&0&0&-1 \end{bmatrix}\xi_i. \end{aligned}\] We conclude from this relation that \begin{equation} \label{a24:a14} \begin{aligned} a_{2,4}&=a_{1,4},\\ a_{2,1}&=a_{1,3}.
\end{aligned} \end{equation} Hence generators $a_{2,1}$ and $a_{2,4}$ are superfluous. Moreover, \begin{equation}\label{a22:a23} a_{2,2}+a_{2,3}=a_{1,1}+a_{1,2}, \end{equation} which together with the formula \eqref{a1:mod2} imply that Generators (K2) generate a cyclic group of order at most 2. Relation (2b) gives \[\begin{aligned}0=& ([e_1]+e_{1}[a_{2}]+e_{1}a_{2}[e_{1}]-[a_{2}]-a_{2}[e_{1}]-a_{2}e_{1}[a_{2}])\otimes \xi_i\\ =&[e_1]\otimes (I_4+\psi(a_{2}^{-1}e_1^{-1})-\psi(a_{2}^{-1}))\xi_i\\ &+[a_{2}]\otimes(\psi(e_1^{-1})-I_4-\psi(e_1^{-1}a_{2}^{-1}))\xi_i\\ =&[e_1]\otimes\begin{bmatrix} 2&-1&0&0\\ 2&-1&0&0\\ 1&-1&1&0\\ 1&-1&0&1 \end{bmatrix} \xi_i +[a_2]\otimes\begin{bmatrix} -1&1&-1&0\\ 0&-1&0&0\\ 0&-1&0&0\\ 0&1&-1&-1 \end{bmatrix} \xi_i. \end{aligned}\] This, the formulas \eqref{a1:mod2}, \eqref{a24:a14} and \eqref{a22:a23} imply that \begin{equation} \label{e1:s} \begin{aligned} e_{1,4}&=a_{2,4}=a_{1,4},\\ e_{1,3}&=a_{2,1}+a_{2,4}=a_{1,3}+a_{1,4},\\ e_{1,1}+e_{1,2}&=(a_{2,2}+a_{2,3})+a_{2,4}=(a_{1,1}+a_{1,2})+a_{1,4} \end{aligned} \end{equation} and Generators (K5) and (K6) are superfluous for $j=1$. Relation (2c) gives \[\begin{aligned}0=& ([e_2]+e_{2}[a_{2}]+e_{2}a_{2}[e_{2}]-[a_{2}]-a_{2}[e_{2}]-a_{2}e_{2}[a_{2}])\otimes \xi_i\\ =&[e_2]\otimes (I_4+\psi(a_{2}^{-1}e_2^{-1})-\psi(a_{2}^{-1}))\xi_i\\ &+[a_{2}]\otimes(\psi(e_2^{-1})-I_4-\psi(e_2^{-1}a_{2}^{-1}))\xi_i\\ =&[e_2]\otimes \begin{bmatrix} 0&1&0&0\\ 0&1&0&0\\ -1&1&1&0\\ 0&0&0&1 \end{bmatrix}\xi_i+[a_2]\otimes \begin{bmatrix} -1&-1&1&0\\ 0&-3&2&0\\ 0&-3&2&0\\ 0&0&0&-1 \end{bmatrix}\xi_i. \end{aligned}\] This relation, the formulas \eqref{a1:mod2}, \eqref{a24:a14} and \eqref{a22:a23} imply that \begin{equation}\label{e2:s} \begin{aligned} e_{2,4}&=a_{2,4}=a_{1,4},\\ e_{2,3}&=-a_{2,1}=a_{1,3},\\ e_{2,1}+e_{2,2}&=a_{2,2}+a_{2,3}=a_{1,1}+a_{1,2}. \end{aligned} \end{equation} Thus Generators (K5) and (K6) are superfluous for $j=2$. 
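The coefficient matrices displayed in the rewritings of Relations (2a)--(2c) above can be recomputed mechanically. The Python sketch below is a verification aid with our own variable names; the $\psi$-matrices are copied from \eqref{eq:psi:1} and \eqref{eq:psi:2}, and all six displayed matrices are reproduced:

```python
from functools import reduce

def mm(*Ms):  # product of 4x4 matrices, left to right
    return reduce(lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(4))
                                 for j in range(4)] for i in range(4)], Ms)

def madd(A, B):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

def msub(A, B):
    return [[x - y for x, y in zip(r, s)] for r, s in zip(A, B)]

I4  = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
A1i = [[2, -1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]   # psi(a1^-1)
A2i = [[1, 0, 0, 0], [0, 2, -1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]   # psi(a2^-1)
E1i = [[2, -1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [1, -1, 0, 1]]  # psi(e1^-1)
E2i = [[0, 1, 0, 0], [-1, 2, 0, 0], [-2, 2, 1, 0], [0, 0, 0, 1]]  # psi(e2^-1)

# Relation (2a): coefficients of [a1] and [a2]
assert msub(madd(I4, mm(A2i, A1i)), A2i) == \
    [[2, -1, 0, 0], [2, -1, 0, 0], [1, -1, 1, 0], [0, 0, 0, 1]]
assert msub(msub(A1i, I4), mm(A1i, A2i)) == \
    [[-1, 1, -1, 0], [0, -1, 0, 0], [0, -1, 0, 0], [0, 0, 0, -1]]

# Relation (2b): coefficients of [e1] and [a2]
assert msub(madd(I4, mm(A2i, E1i)), A2i) == \
    [[2, -1, 0, 0], [2, -1, 0, 0], [1, -1, 1, 0], [1, -1, 0, 1]]
assert msub(msub(E1i, I4), mm(E1i, A2i)) == \
    [[-1, 1, -1, 0], [0, -1, 0, 0], [0, -1, 0, 0], [0, 1, -1, -1]]

# Relation (2c): coefficients of [e2] and [a2]
assert msub(madd(I4, mm(A2i, E2i)), A2i) == \
    [[0, 1, 0, 0], [0, 1, 0, 0], [-1, 1, 1, 0], [0, 0, 0, 1]]
assert msub(msub(E2i, I4), mm(E2i, A2i)) == \
    [[-1, -1, 1, 0], [0, -3, 2, 0], [0, -3, 2, 0], [0, 0, 0, -1]]
```

Reading off the columns of these matrices (applied to $\xi_1,\ldots,\xi_4$) gives exactly the formulas \eqref{a24:a14}--\eqref{e2:s}.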
\subsection*{(1a)--(1c)} Relation (1a) gives \[\begin{aligned} 0&=([a_1]+a_1[e_1]-[e_1]-e_1[a_1])\otimes \xi_i\\ &=[a_1]\otimes (I_4-\psi(e_1^{-1}))\xi_i+[e_1]\otimes(\psi(a_1^{-1})-I_4)\xi_i\\ &=[a_1]\otimes \begin{bmatrix} -1&1&0&0\\ -1&1&0&0\\ 0&0&0&0\\ -1&1&0&0 \end{bmatrix}\xi_i+[e_1]\otimes \begin{bmatrix} 1&-1&0&0\\ 1&-1&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix}\xi_i. \end{aligned} \] This relation gives no new information. Relation (1b) gives \[\begin{aligned} 0&=([a_1]+a_1[e_2]-[e_2]-e_2[a_1])\otimes \xi_i\\ &=[a_1]\otimes (I_4-\psi(e_2^{-1}))\xi_i+[e_2]\otimes(\psi(a_1^{-1})-I_4)\xi_i\\ &=[a_1]\otimes \begin{bmatrix} 1&-1&0&0\\ 1&-1&0&0\\ 2&-2&0&0\\ 0&0&0&0 \end{bmatrix}\xi_i+[e_2]\otimes \begin{bmatrix} 1&-1&0&0\\ 1&-1&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix}\xi_i. \end{aligned} \] This relation gives no new information. Relation (1c) gives \[\begin{aligned} 0&=([e_1]+e_1[e_2]-[e_2]-e_2[e_1])\otimes \xi_i\\ &=[e_1]\otimes (I_4-\psi(e_2^{-1}))\xi_i+[e_2]\otimes(\psi(e_1^{-1})-I_4)\xi_i\\ &=[e_1]\otimes \begin{bmatrix} 1&-1&0&0\\ 1&-1&0&0\\ 2&-2&0&0\\ 0&0&0&0 \end{bmatrix}\xi_i+[e_2]\otimes \begin{bmatrix} 1&-1&0&0\\ 1&-1&0&0\\ 0&0&0&0\\ 1&-1&0&0 \end{bmatrix}\xi_i. \end{aligned} \] This relation gives no new information. \subsection*{(4)} Relation (4) gives \[\begin{aligned} 0=&([u]+u[e_2]-[a_2]-a_2[u]-a_2u[e_2]-a_2ue_2[a_2])\otimes \xi_i\\ =&[u]\otimes (I_4-\psi(a_2^{-1}))\xi_i+[e_2]\otimes(\psi(u^{-1})-\psi(u^{-1}a_2^{-1}))\xi_i\\ &+[a_2]\otimes(-I_4-\psi(e_2^{-1}u^{-1}a_2^{-1}))\xi_i\\ =&[u]\otimes \begin{bmatrix} 0&0&0&0\\ 0&-1&1&0\\ 0&-1&1&0\\ 0&0&0&0 \end{bmatrix}\xi_i+[e_2]\otimes \begin{bmatrix} 0&-1&1&0\\ 0&0&0&0\\ 0&-1&1&0\\ 0&0&0&0 \end{bmatrix}\xi_i\\ &+[a_2]\otimes \begin{bmatrix} -2&0&0&0\\ -2&1&-1&0\\ -2&3&-3&0\\ 0&0&0&-2 \end{bmatrix}\xi_i. 
\end{aligned} \] This relation, the formulas \eqref{k4:0}, \eqref{a1:mod2}, \eqref{a22:a23} and \eqref{e2:s} give \begin{equation} \begin{aligned} e_{2,1}+2a_{2,2}-u_1&=3(a_{2,2}+a_{2,3})-(u_1+u_2)-u_3-e_{2,3}\\ &=(a_{1,1}+a_{1,2})-u_3+a_{1,3} \end{aligned}\label{rel_e4} \end{equation} which implies that Generator (K7) is superfluous. \subsection*{(9b)} If we denote \[\begin{aligned} M&=I+\psi(u^{-1}e_2^{-1})=\begin{bmatrix} 0&2&0&0\\ 0&2&0&0\\ -2&2&2&0\\ 0&0&0&2 \end{bmatrix} \\ N&=I+\psi(a_1^{-2}e_2^{-1}a_2^{-1})+\psi(a_1^{-2}e_2^{-1}a_2^{-1})^2=\begin{bmatrix} 3&-1&1&0\\ 3&-1&1&0\\ 3&-1&1&0\\ 0&0&0&3 \end{bmatrix} \end{aligned} \] then Relation (9b) gives \[\begin{aligned} 0=&[e_2]\otimes \left(M-\psi(a_2^{-1})N\right)\xi_i+[u]\otimes \psi(e_2^{-1})M\xi_i-[a_2]\otimes N\xi_i\\ &-[a_1]\otimes\left(\psi(e_2^{-1}a_2^{-1})N+\psi(a_1^{-1}e_2^{-1}a_2^{-1})N\right)\xi_i\\ =&[e_2]\otimes \begin{bmatrix} -3&3&-1&0\\ -3&3&-1&0\\ -5&3&1&0\\ 0&0&0&-1 \end{bmatrix}\xi_i+[u]\otimes \begin{bmatrix} 0&2&0&0\\ 0&2&0&0\\ -2&2&2&0\\ 0&0&0&2 \end{bmatrix}\xi_i\\ &+[a_2]\otimes \begin{bmatrix} -3&1&-1&0\\ -3&1&-1&0\\ -3&1&-1&0\\ 0&0&0&-3 \end{bmatrix}\xi_i+[a_1]\otimes \begin{bmatrix} -6&2&-2&0\\ -6&2&-2&0\\ -6&2&-2&0\\ 0&0&0&-6 \end{bmatrix}\xi_i. \end{aligned} \] The last two columns of this relation, the formulas \eqref{a1:mod2}, \eqref{a24:a14}, \eqref{a22:a23} and \eqref{e2:s} imply that \begin{equation}\label{u34:mod2} \begin{aligned} 2u_{3}=&(e_{2,1}+e_{2,2}+a_{2,2}+a_{2,3})+(a_{2,1}-e_{2,3})+\\ &+2(a_{1,1}+a_{1,2})+2a_{1,3}=0,\\ 2u_{4}=&e_{2,4}+3a_{2,4}+6a_{1,4}=0. 
\end{aligned} \end{equation} As for the first two columns of this relation, \begin{equation*} \begin{aligned} -2u_3=&3(e_{2,1}+e_{2,2}+a_{2,2}+a_{2,3})+5(a_{2,1}+e_{2,3})+\\ &+6(a_{1,1}+a_{1,2})+6a_{1,3}-2a_{2,1}=0,\\ -2u_{3}=&(3e_{2,1}+3e_{2,2}+2a_{1,1}+2a_{1,2}+a_{2,2}+a_{2,3})+3(a_{2,1}+e_{2,3})+\\ &+2(u_{1}+u_{2})+2a_{1,3}-2a_{2,1}=0,\\ \end{aligned} \end{equation*} they do not give any additional information. \subsection*{(9a)} Let $M=I+\psi(u^{-1}e_2^{-1})$ as above. Then Relation (9a) gives \[\begin{aligned} 0&=[e_2]\otimes M\xi_i+[u]\otimes(\psi(e_2^{-1})M)\xi_i-[b_3]\otimes \xi_i=\\ &=[e_2]\otimes \begin{bmatrix} 0&2&0&0\\ 0&2&0&0\\ -2&2&2&0\\ 0&0&0&2 \end{bmatrix}\xi_i+[u]\otimes \begin{bmatrix} 0&2&0&0\\ 0&2&0&0\\ -2&2&2&0\\ 0&0&0&2 \end{bmatrix}\xi_i-[b_3]\otimes \xi_i. \end{aligned} \] This, the formulas \eqref{k4:0}, \eqref{a1:mod2}, \eqref{e2:s} and \eqref{u34:mod2} imply that generators \begin{equation}\label{b3:s} \begin{aligned} b_{3,4}&=2e_{2,4}+2u_4=0,\\ b_{3,3}&=2e_{2,3}+2u_3=0,\\ b_{3,1}&=-2e_{2,3}-2u_{3}=0,\\ b_{3,2}&=2(e_{2,1}+e_{2,2})+2e_{2,3}+2(u_1+u_2)+2u_3=0 \end{aligned} \end{equation} are trivial. \subsection*{(12)} Let \[M=I+\psi(a_1^{-1}e_1^{-1}e_2^{-1}a_2^{-1})+\psi(a_1^{-1}e_1^{-1}e_2^{-1}a_2^{-1})^2=\begin{bmatrix} 3&-1&1&0\\ 3&-1&1&0\\ 3&-1&1&0\\ 0&-1&1&3 \end{bmatrix}.\] Relation (12), the formulas \eqref{a1:mod2}, \eqref{a24:a14}, \eqref{a22:a23}, \eqref{e1:s} and \eqref{e2:s} give \[\begin{aligned} 0=&[a_2]\otimes M\xi_i+[e_2]\otimes(\psi(a_2^{-1})M)\xi_i+[e_1]\otimes(\psi(e_2^{-1}a_2^{-1})M)\xi_i\\ &+[a_1]\otimes(\psi(e_1^{-1}e_2^{-1}a_2^{-1})M)\xi_i-[d_1]\otimes\xi_i-[d_2]\otimes\psi(d_1^{-1})\xi_i\\ =&([a_2]+[e_2]+[e_1]+[a_1])\otimes\begin{bmatrix} 3&-1&1&0\\ 3&-1&1&0\\ 3&-1&1&0\\ 0&-1&1&3 \end{bmatrix}\xi_i-[d_1]\otimes\xi_i-[d_2]\otimes\xi_i=\\ =&-[d_1]\otimes\xi_i-[d_2]\otimes\xi_i.
\end{aligned} \] This implies that \begin{equation}\label{d2:s} d_{2,i}=-d_{1,i},\quad\text{for $i=1,2,3,4$.} \end{equation} \subsection*{(13a)--(13e)} Relation (13a) gives \[\begin{aligned} 0&=([d_1]+d_1[a_1]-[a_1]-a_1[d_1])\otimes \xi_i\\ &=[d_1]\otimes (I_4-\psi(a_1^{-1}))\xi_i+[a_1]\otimes(\psi(d_1^{-1})-I_4)\xi_i\\ &=[d_1]\otimes \begin{bmatrix} -1&1&0&0\\ -1&1&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix}\xi_i. \end{aligned} \] Thus \[d_{1,2}=-d_{1,1}.\] By a similar argument, Relation (13d) gives \[\begin{aligned} 0&=[d_1]\otimes \begin{bmatrix} 1&-1&0&0\\ -1&1&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix}\xi_i. \end{aligned} \] Hence \begin{equation}\label{d12:s} d_{1,2}=d_{1,1}, \end{equation} and \begin{equation}\label{d11:mod2} 2d_{1,1}=0. \end{equation} Analogously, Relation (13b) gives \[\begin{aligned} 0&=[d_1]\otimes \begin{bmatrix} 0&0&0&0\\ 0&-1&1&0\\ 0&-1&1&0\\ 0&0&0&0 \end{bmatrix}\xi_i, \end{aligned} \] which implies that \[d_{1,3}=-d_{1,2}.\] This together with the formulas \eqref{d12:s} and \eqref{d11:mod2} imply that \begin{equation}\label{d13:s} d_{1,3}=d_{1,1}. \end{equation} Relation (13c) gives \[\begin{aligned} 0&=([d_1]+d_1[e_1]-[e_1]-e_1[d_1])\otimes \xi_i\\ &=[d_1]\otimes (I_4-\psi(e_1^{-1}))\xi_i+[e_1]\otimes(\psi(d_1^{-1})-I_4)\xi_i\\ &=[d_1]\otimes \begin{bmatrix} -1&1&0&0\\ -1&1&0&0\\ 0&0&0&0\\ -1&1&0&0 \end{bmatrix}\xi_i. \end{aligned} \] Hence, by the formulas \eqref{d12:s} and \eqref{d11:mod2} \begin{equation}\label{d14:s} d_{1,4}=-d_{1,1}-d_{1,2}=0 \end{equation} and we conclude that Generators (K9) generate a cyclic group of order at most 2. Relation (13e) \[\begin{aligned} 0&=[d_2]\otimes \begin{bmatrix} 1&-1&0&0\\ -1&1&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix}\xi_i \end{aligned} \] gives no new information. 
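The auxiliary matrices $M$ and $N$ of Relations (9a), (9b) and (12), the collapse $\psi(a_2^{-1})M=\psi(e_2^{-1}a_2^{-1})M=\psi(e_1^{-1}e_2^{-1}a_2^{-1})M=M$ used in rewriting Relation (12), and the coefficient matrices of Relations (13a) and (13d) admit the same kind of machine check. The Python sketch below (our verification aid; names are ours) confirms each claim:

```python
from functools import reduce

def mm(*Ms):  # product of 4x4 matrices, left to right
    return reduce(lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(4))
                                 for j in range(4)] for i in range(4)], Ms)

def madd(*Ms):  # entrywise sum of matrices
    return [[sum(vals) for vals in zip(*rows)] for rows in zip(*Ms)]

def msub(A, B):
    return [[x - y for x, y in zip(r, s)] for r, s in zip(A, B)]

I4  = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
A1i = [[2, -1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]   # psi(a1^-1)
A2i = [[1, 0, 0, 0], [0, 2, -1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]   # psi(a2^-1)
E1i = [[2, -1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [1, -1, 0, 1]]  # psi(e1^-1)
E2i = [[0, 1, 0, 0], [-1, 2, 0, 0], [-2, 2, 1, 0], [0, 0, 0, 1]]  # psi(e2^-1)
U   = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]    # psi(u^-1)

# (9a)/(9b): M = I + psi(u^-1 e2^-1)
M9 = madd(I4, mm(U, E2i))
assert M9 == [[0, 2, 0, 0], [0, 2, 0, 0], [-2, 2, 2, 0], [0, 0, 0, 2]]
assert mm(E2i, M9) == M9  # so the [u]-coefficient in (9a) equals M, as displayed

# (9b): N = I + P + P^2 with P = psi(a1^-2 e2^-1 a2^-1)
P = mm(A1i, A1i, E2i, A2i)
N9 = madd(I4, P, mm(P, P))
assert N9 == [[3, -1, 1, 0], [3, -1, 1, 0], [3, -1, 1, 0], [0, 0, 0, 3]]

# (12): M = I + Q + Q^2 with Q = psi(a1^-1 e1^-1 e2^-1 a2^-1)
Q = mm(A1i, E1i, E2i, A2i)
M12 = madd(I4, Q, mm(Q, Q))
assert M12 == [[3, -1, 1, 0], [3, -1, 1, 0], [3, -1, 1, 0], [0, -1, 1, 3]]

# every prefix appearing in the rewriting of (12) fixes M
assert mm(A2i, M12) == M12
assert mm(E2i, A2i, M12) == M12
assert mm(E1i, E2i, A2i, M12) == M12

# (13a), (13d): the [d1]-coefficients I - psi(a1^-1) and I - psi(u^-1)
assert msub(I4, A1i) == [[-1, 1, 0, 0], [-1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
assert msub(I4, U) == [[1, -1, 0, 0], [-1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```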
\subsection*{(8)} Let \[M=I+\psi(u^{-1}e_1^{-1})=\begin{bmatrix} 2&0&0&0\\ 2&0&0&0\\ 0&0&2&0\\ 1&-1&0&2 \end{bmatrix}.\] Relation (8) gives \[\begin{aligned} 0&=[e_1]\otimes M\xi_i+[u]\otimes(\psi(e_1^{-1})M)\xi_i-[d_1]\otimes \xi_i-[b_1]\otimes \xi_i\\ &=[e_1]\otimes \begin{bmatrix} 2&0&0&0\\ 2&0&0&0\\ 0&0&2&0\\ 1&-1&0&2 \end{bmatrix}\xi_i+[u]\otimes \begin{bmatrix} 2&0&0&0\\ 2&0&0&0\\ 0&0&2&0\\ 1&-1&0&2 \end{bmatrix}\xi_i-[d_1]\otimes \xi_i-[b_1]\otimes \xi_i. \end{aligned} \] This, the formulas \eqref{k4:0}, \eqref{a1:mod2}, \eqref{a22:a23}, \eqref{e1:s}, \eqref{u34:mod2}, \eqref{d13:s} and \eqref{d14:s} imply that generators \begin{equation}\label{b1:s} \begin{aligned} b_{1,1}&=e_{1,4}+u_4-d_{1,1}=a_{1,4}+u_4+d_{1,1},\\ b_{1,2}&=-e_{1,4}-u_4-d_{1,2}=a_{1,4}+u_4+d_{1,1},\\ b_{1,3}&=2e_{1,3}+2u_3-d_{1,3}=d_{1,1},\\ b_{1,4}&=2e_{1,4}+2u_4-d_{1,4}=0 \end{aligned} \end{equation} are superfluous. \subsection*{(5), (6)} Relations (5) and (6) give \[\begin{aligned} 0&=([u]+u[b_k]-[b_k]-b_k[u])\otimes \xi_i\\ &=[u]\otimes (I_4-\psi(b_k^{-1}))\xi_i+[b_k]\otimes(\psi(u^{-1})-I_4)\xi_i\\ &=[b_k]\otimes \begin{bmatrix} -1&1&0&0\\ 1&-1&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix}\xi_i \end{aligned} \] for $k=1,3$. This relation gives no new information.
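The matrix $M$ of Relation (8), and the fact used in the display above that $\psi(e_1^{-1})M=M$, can be confirmed in the same way (a Python sketch, our own check; names are ours):

```python
from functools import reduce

def mm(*Ms):  # product of 4x4 matrices, left to right
    return reduce(lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(4))
                                 for j in range(4)] for i in range(4)], Ms)

def madd(A, B):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

I4  = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
E1i = [[2, -1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [1, -1, 0, 1]]  # psi(e1^-1)
U   = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]   # psi(u^-1)

M8 = madd(I4, mm(U, E1i))  # M = I + psi(u^-1 e1^-1)
assert M8 == [[2, 0, 0, 0], [2, 0, 0, 0], [0, 0, 2, 0], [1, -1, 0, 2]]
assert mm(E1i, M8) == M8   # hence the [u]-coefficient equals M, as displayed
```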
\subsection*{(10)} Observe first that if $g\in G$, then the formulas \eqref{diff:formula}, \eqref{eq_rew_rel} and the relation $g^{-1}g=1$ imply that \[[g^{-1}]\otimes\xi_i=-g^{-1}[g]\otimes \xi_i.\] Hence Relation (10) gives \[\begin{aligned} 0=&[u]\otimes\left(I_4+\psi(b_2^{-1}ue_1^{-1}u^{-1})-\psi(a_2b_1a_2^{-1})\right)\xi_i+[u^{-1}]\otimes\psi(e_1^{-1}u^{-1})\xi_i\\ &+[e_1]\otimes\left(\psi(u^{-1})-\psi(u^{-1}a_2b_1a_2^{-1})\right)\xi_i\\ &+[b_2]\otimes\psi(ue_1^{-1}u^{-1})\xi_i- [b_1^{-1}]\otimes \psi(a_2^{-1})\xi_i\\ &-[a_2]\otimes \xi_i-[a_2^{-1}]\otimes\psi(b_1a_2^{-1})\xi_i\\ =&[u]\otimes\left(I_4+\psi(b_2^{-1}ue_1^{-1}u^{-1})-\psi(a_2b_1a_2^{-1})-\psi(ue_1^{-1}u^{-1})\right)\xi_i\\ &+[e_1]\otimes\left(\psi(u^{-1})-\psi(u^{-1}a_2b_1a_2^{-1})\right)\xi_i\\ &+[b_2]\otimes\psi(ue_1^{-1}u^{-1})\xi_i+ [b_1]\otimes \psi(b_1a_2^{-1})\xi_i\\ &+[a_2]\otimes\left(\psi(a_2b_1a_2^{-1})-I_4\right) \xi_i. \end{aligned} \] Recall that $\psi(b_j^{\pm 1})=I_4$, hence the above formula takes the form \[\begin{aligned} 0=&[b_2]\otimes\psi(ue_1^{-1}u^{-1})\xi_i+ [b_1]\otimes \psi(a_2^{-1})\xi_i\\ =&[b_2]\otimes \begin{bmatrix} 0&1&0&0\\ -1&2&0&0\\ 0&0&1&0\\ -1&1&0&1 \end{bmatrix}\xi_i+[b_1]\otimes \begin{bmatrix} 1&0&0&0\\ 0&2&-1&0\\ 0&1&0&0\\ 0&0&0&1 \end{bmatrix}\xi_i. \end{aligned} \] This, the formulas \eqref{a1:mod2}, \eqref{u34:mod2} and \eqref{b1:s} imply that generators \begin{equation}\label{b2:s} \begin{aligned} b_{2,4}&=-b_{1,4}=0,\\ b_{2,3}&=b_{1,2}=a_{1,4}+u_4+d_{1,1},\\ b_{2,2}&=b_{1,1}-b_{2,4}=a_{1,4}+u_4+d_{1,1},\\ b_{2,1}&=-2b_{1,2}-b_{1,3}-2b_{2,2}-b_{2,4}=-d_{1,1} \end{aligned} \end{equation} are superfluous. \subsection*{(7)} Relation (7) gives \[\begin{aligned} 0&=([a_2]+a_2[b_2]-[b_2]-b_2[a_2])\otimes \xi_i\\ &=[a_2]\otimes (I_4-\psi(b_2^{-1}))\xi_i+[b_2]\otimes(\psi(a_2^{-1})-I_4)\xi_i\\ &=[b_2]\otimes \begin{bmatrix} 0&0&0&0\\ 0&1&-1&0\\ 0&1&-1&0\\ 0&0&0&0 \end{bmatrix}\xi_i. \end{aligned} \] This relation gives no new information.
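The two coefficient matrices that survive in Relation (10) after applying $\psi(b_j^{\pm 1})=I_4$ can be recomputed as follows (a Python sketch, our own check; names are ours):

```python
from functools import reduce

def mm(*Ms):  # product of 4x4 matrices, left to right
    return reduce(lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(4))
                                 for j in range(4)] for i in range(4)], Ms)

E1i = [[2, -1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [1, -1, 0, 1]]  # psi(e1^-1)
A2i = [[1, 0, 0, 0], [0, 2, -1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]   # psi(a2^-1)
U   = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]    # psi(u^{±1})

# coefficient of [b2]: psi(u e1^-1 u^-1), as displayed
assert mm(U, E1i, U) == [[0, 1, 0, 0], [-1, 2, 0, 0],
                         [0, 0, 1, 0], [-1, 1, 0, 1]]
# coefficient of [b1]: psi(b1 a2^-1) = psi(a2^-1), since psi(b1) = I_4;
# reading off the xi_4 columns gives b_{2,4} + b_{1,4} = 0, as in (b2:s)
assert [row[3] for row in mm(U, E1i, U)] == [0, 0, 0, 1]
assert [row[3] for row in A2i] == [0, 0, 0, 1]
```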
\subsection*{(11)} Relation (11) gives \[\begin{aligned} 0=&[b_3]\otimes\xi_i+[b_1]\otimes (\psi(b_3^{-1}))\xi_i+[b_2]\otimes (\psi(b_1^{-1}b_3^{-1})+\psi(u^{-1}b_2^{-1}b_1^{-1}b_3^{-1}))\xi_i\\ &+[u]\otimes(\psi(b_2^{-1}b_1^{-1}b_3^{-1})+ \psi(b_2^{-1}u^{-1}b_2^{-1}b_1^{-1}b_3^{-1})-\psi(d_1^{-1}d_2^{-2}) -\psi(u^{-1}d_1^{-1}d_2^{-2}))\xi_i\\ &-[d_2]\otimes(I_4+\psi(d_2^{-1}))\xi_i-[d_1]\otimes(\psi(d_2^{-2}))\xi_i\\ =&[b_3]\otimes\xi_i+[b_1]\otimes \xi_i+[b_2]\otimes (I_4+\psi(u^{-1}))\xi_i-2[d_2]\otimes\xi_i-[d_1]\otimes\xi_i\\ =&[b_3]\otimes\xi_i+[b_1]\otimes \xi_i+[b_2]\otimes\begin{bmatrix} 1&1&0&0\\ 1&1&0&0\\ 0&0&2&0\\ 0&0&0&2 \end{bmatrix}\xi_i-2[d_2]\otimes\xi_i-[d_1]\otimes\xi_i. \end{aligned} \] This relation gives no new information. In order to sum up the above computations, recall that $H_1({\Cal{M}}(N_{3,2});H_1(N_{3,2};\zz))$ is the quotient of the free abelian group with basis specified in the statement of Proposition \ref{prop:kernel:1} by the relations obtained above.
If we remove the superfluous generators \begin{itemize} \item $u_1+u_2$ by the formula \eqref{k4:0}, \item $a_{2,1}$ and $a_{2,4}$ by the formula \eqref{a24:a14}, \item $a_{2,2}+a_{2,3}$ by the formula \eqref{a22:a23}, \item $e_{1,1}+e_{1,2}$, $e_{1,3}$ and $e_{1,4}$ by the formula \eqref{e1:s}, \item $e_{2,1}+e_{2,2}$, $e_{2,3}$ and $e_{2,4}$ by the formula \eqref{e2:s}, \item $e_{2,1}+2a_{2,2}-u_1$ by the formula \eqref{rel_e4}, \item $d_{1,2}$, $d_{1,3}$, $d_{1,4}$, $d_{2,1}$, $d_{2,2}$, $d_{2,3}$ and $d_{2,4}$ by the formulas \eqref{d12:s}, \eqref{d13:s}, \eqref{d14:s} and \eqref{d2:s}, \item $b_{1,1}$, $b_{1,2}$, $b_{1,3}$ and $b_{1,4}$ by the formula \eqref{b1:s}, \item $b_{2,1}$, $b_{2,2}$, $b_{2,3}$ and $b_{2,4}$ by the formula \eqref{b2:s}, \item $b_{3,1}$, $b_{3,2}$, $b_{3,3}$ and $b_{3,4}$ by the formula \eqref{b3:s}, \end{itemize} then $H_1({\Cal{M}}(N_{3,2});H_1(N_{3,2};\zz))$ is generated by the homology classes \[a_{1,3},\ a_{1,4},\ a_{1,1}+a_{1,2},\ u_3,\ u_4,\ d_{1,1},\] subject to the relations obtained in the formulas \eqref{a1:mod2}, \eqref{u34:mod2} and \eqref{d11:mod2}: \[2a_{1,3}=2a_{1,4}=2\left(a_{1,1}+a_{1,2}\right)=2u_3=2u_4=2d_{1,1}=0.\] \end{proof} This concludes the proof of Theorem \ref{MainThm1}, which is an immediate consequence of Proposition \ref{prop:h1:1}. \section{Computing $H_1({\Cal{PM}^+}(N_{3}^2);H_1(N_{3}^2;\zz))$} By Theorems \ref{tw:pres:two:holes} and \ref{tw:pres:two:punct}, the presentation of the group ${\Cal{PM}^+}(N_{3}^2)$ can be obtained from the presentation of ${\Cal{M}}(N_{3,2})$ by adding the relations $d_1=d_2=1$.
Hence, it is straightforward to conclude from Proposition \ref{prop:kernel:2} and the proof of Proposition \ref{prop:h1:1} that \begin{prop}\label{prop:h1:2} The abelian group $H_1({\Cal{PM}^+}(N_{3}^2);H_1(N_{3}^2;\zz))$ has a presentation with generators \[ a_{1,3},\ a_{1,4},\ a_{1,1}+a_{1,2},\ u_3,\ u_4,\] and relations \[2a_{1,3}=2a_{1,4}=2\left(a_{1,1}+a_{1,2}\right)=2u_3=2u_4=0.\qed\] \end{prop} This concludes the proof of Theorem \ref{MainThm2}. \section*{Acknowledgements} The author wishes to thank the referee for his/her helpful suggestions. \begin{thebibliography}{1} \bibitem{PawlakStukowHomoPunctured} {\sc P.~Pawlak and M.~{S}tukow}, { The first homology group with twisted coefficients for the mapping class group of a non--orientable surface with boundary}. \newblock arXiv:2206.13642, 2022. \bibitem{Stukow_homolTopApp} {\sc M.~Stukow}, { The first homology group of the mapping class group of a nonorientable surface with twisted coefficients}, Topology Appl. {\bf 178} (2014), 417--437. \bibitem{Stukow_HiperOsaka} {\sc M.~Stukow}, { A finite presentation for the hyperelliptic mapping class group of a nonorientable surface}, Osaka J. Math. {\bf 52} (2015), 495--515. \bibitem{Szep_curv} {\sc B.~Szepietowski}, { A presentation for the mapping class group of a non-orientable surface from the action on the complex of curves}, Osaka J. Math. {\bf 45} (2008), 283--326. \end{thebibliography} \end{document}
2206.13625v1
http://arxiv.org/abs/2206.13625v1
On Mixed Concatenations of Fibonacci and Lucas Numbers Which are Fibonacci Numbers
\documentclass{article} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{enumerate} \newtheorem{theorem}{Theorem} \newtheorem{acknowledgement}[theorem]{Acknowledgement} \newtheorem{algorithm}[theorem]{Algorithm} \newtheorem{axiom}[theorem]{Axiom} \newtheorem{case}[theorem]{Case} \newtheorem{claim}[theorem]{Claim} \newtheorem{conclusion}[theorem]{Conclusion} \newtheorem{condition}[theorem]{Condition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{criterion}[theorem]{Criterion} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{notation}[theorem]{Notation} \newtheorem{problem}[theorem]{Problem} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \newtheorem{solution}[theorem]{Solution} \newtheorem{summary}[theorem]{Summary} \newtheorem{question}[theorem]{Question} \newenvironment{proof}[1][Proof]{\textbf{#1.} } \begin{document} \title{On Mixed Concatenations of Fibonacci and Lucas Numbers Which are Fibonacci Numbers} \author{Alaa ALTASSAN$^{1}$ and Murat ALAN$^{2}$ \\ $^{1}$King Abdulaziz University, Department of Mathematics,\\ P.O. Box 80203, Jeddah 21589, Saudi Arabia\\ e-mail: [email protected]\\ $^{2}$Yildiz Technical University\\ Mathematics Department, 34210, Istanbul, Turkey.\\ e-mail: [email protected] } \maketitle \begin{abstract} Let $(F_n)_{n\geq 0}$ and $(L_n)_{n\geq 0}$ be the Fibonacci and Lucas sequences, respectively. In this paper we determine all Fibonacci numbers which are mixed concatenations of a Fibonacci and a Lucas number. By mixed concatenations of $ a $ and $ b $, we mean both concatenations $\overline{ab}$ and $\overline{ba}$ together, where $ a $ and $ b $ are any two nonnegative integers.
So, the mathematical formulation of this problem leads us to search for the solutions of the two Diophantine equations $ F_n=10^d F_m +L_k $ and $ F_n=10^d L_m+F_k $ in nonnegative integers $ (n,m,k) ,$ where $ d $ denotes the number of digits of $ L_k $ and $ F_k $, respectively. We use lower bounds for linear forms in logarithms and a reduction method from Diophantine approximation to get the results. \end{abstract} \section{Introduction} Let $(F_n)_{n\geq 0}$ and $(L_n)_{n\geq 0}$ be the Fibonacci and Lucas sequences given by $F_0=0$, $F_1=1$, $L_0=2$, $L_1=1$, $F_{n+2}=F_{n+1}+F_{n}$ and $L_{n+2}=L_{n+1}+L_{n}$ for $n\geq 0,$ respectively. In recent years, numbers in the Fibonacci, Lucas and some similar sequences which are concatenations of two or more repdigits have been investigated in a series of papers \cite{Alahmadi2,Dam1,Dam2, EK2,Qu,Trojovsky}. In the case of concatenations of binary recurrent sequences, there is a general result due to Banks and Luca \cite{Banks}. In \cite{Banks}, they proved that if $ u_n $ is any binary recurrent sequence of integers, then, under some mild hypotheses on $ u_n ,$ only finitely many terms of the sequence $ u_n $ can be written as concatenations of two or more terms of the same sequence. In particular, they proved that 13, 21 and 55 are the only Fibonacci numbers which are nontrivial concatenations of two Fibonacci numbers. In \cite{Alan}, Fibonacci and Lucas numbers which can be written as concatenations of two terms of the other sequence were also investigated, and it was shown that 13, 21 and 34 (respectively 1, 2, 3, 11, 18 and 521) are the only Fibonacci (respectively Lucas) numbers which are concatenations of two Lucas (respectively Fibonacci) numbers. In this paper, we study the mixed concatenations of these two famous sequences which form a Fibonacci number. By mixed concatenations of $ a $ and $ b $, for any two nonnegative integers $ a $ and $ b ,$ we mean both concatenations $\overline{ab} $ and $ \overline{ba}$ together.
So, we search for all Fibonacci numbers of the form $\overline{F_mL_k}$, that is, the concatenation of $ F_m $ and $ L_k $, as well as of the form $\overline{L_mF_k},$ that is, the concatenation of $L_m $ and $ F_k.$ In other words, as the mathematical expression of this problem, we solve the Diophantine equations \begin{equation} F_n=10^d F_m +L_k \label{FcFL} \end{equation} and \begin{equation} F_n=10^d L_m+F_k \label{FcLF} \end{equation} in nonnegative integers $ (n,m,k) ,$ where $ d $ denotes the number of digits of $ L_k $ and $ F_k $, respectively, and we get the following results. \begin{theorem} \label{main1} The only Fibonacci numbers which are concatenations of a Fibonacci and a Lucas number are 1, 2, 3, 13, 21 and 34. \end{theorem} \begin{theorem} \label{main2} The only Fibonacci numbers which are concatenations of a Lucas and a Fibonacci number are 13 and 21. \end{theorem} In the next section, we give some details of the methods we use to prove the above theorems. In fact, we mainly use two powerful tools. The first one is the theory of lower bounds for nonzero linear forms in logarithms of algebraic numbers, due to Matveev \cite{Matveev}, and the second one is a reduction method based on the theory of continued fractions given in \cite{DP}, which is a version of the Baker-Davenport lemma \cite{Baker-Davenport}. In the third section, we give the proofs of the above theorems. All calculations and computations are made with the help of the software \textsf{Maple}. \section{Preliminaries} Let $F_n$ and $L_n$ be the Fibonacci and Lucas numbers, respectively.
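Before introducing the analytic tools, the statements of Theorems \ref{main1} and \ref{main2} can be checked by brute force over a small index range. The following sketch is ours, not the paper's \textsf{Maple} program (which covers $0\leq m,k<200$), and all function names in it are hypothetical:

```python
# Brute-force check of Theorems 1 and 2 over a small index range
# (a sketch; the paper's own search covers 0 <= m, k < 200).

def fib_lucas(n_terms):
    """First n_terms Fibonacci and Lucas numbers."""
    F, L = [0, 1], [2, 1]
    while len(F) < n_terms:
        F.append(F[-1] + F[-2])
        L.append(L[-1] + L[-2])
    return F, L

def mixed_concat_solutions(idx_max=40):
    """Fibonacci values of the forms F_n = 10^d F_m + L_k and
    F_n = 10^d L_m + F_k, for all m, k < idx_max."""
    F, L = fib_lucas(400)                         # long list, so membership tests cannot miss
    fib_set = set(F)
    sols_FL, sols_LF = set(), set()
    for m in range(idx_max):
        for k in range(idx_max):
            v1 = 10 ** len(str(L[k])) * F[m] + L[k]   # \overline{F_m L_k}
            v2 = 10 ** len(str(F[k])) * L[m] + F[k]   # \overline{L_m F_k}
            if v1 in fib_set:
                sols_FL.add(v1)
            if v2 in fib_set:
                sols_LF.add(v2)
    return sorted(sols_FL), sorted(sols_LF)
```

Over this range the two solution sets come out as $\{1, 2, 3, 13, 21, 34\}$ and $\{13, 21\}$, matching the theorems.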
The Binet formulas for the Fibonacci and Lucas numbers are $$ F_n = \dfrac{\alpha^n-\beta^n}{\sqrt{5}}, \qquad L_n=\alpha^n+\beta^n \quad \quad n\geq 0, $$ where \[ \alpha=\frac{1+\sqrt{5}}{2} \quad \text{and} \quad \beta=\frac{1-\sqrt{5}}{2} \] are the roots of the equation $x^2-x-1=0.$ Using the Binet formulas, one can show by induction that \begin{equation} \label{F1} \alpha^{n-2} \leq F_n \leq \alpha^{n-1} \end{equation} and \begin{equation} \label{L1} \alpha^{n-1}\leq L_n \leq 2\alpha^{n} \end{equation} hold for all $n\geq 1$ and $n\geq 0,$ respectively. Let $\eta$ be an algebraic number of degree $d$ with minimal polynomial \[ a_0x^d+a_1x^{d-1}+\cdots+a_d=a_0\prod_{i=1}^{d}(x-\eta^{(i)}), \] where the $a_i$'s are relatively prime integers with $a_0>0$ and the $\eta^{(i)}$'s are the conjugates of $\eta$. Recall that the logarithmic height of $\eta$ is defined by \[ h(\eta)=\frac{1}{d}\left(\log a_0+\sum_{i=1}^{d}\log\left(\max\{|\eta^{(i)}|,1\}\right)\right). \] In particular, for a rational number $p/q$ with $\gcd(p,q)=1$ and $q>0$, $h(p/q)=\log \max \{|p|,q\}$. The logarithmic height $ h(\eta) $ has the following properties: \begin{itemize} \item[$ \bullet $] $h(\eta\pm\gamma)\leq h(\eta) + h(\gamma)+\log 2$. \item[$ \bullet $] $h(\eta\gamma^{\pm 1})\leq h(\eta)+h(\gamma)$. \item[$ \bullet $] $h(\eta^{s})=|s|h(\eta),$ $ s \in \mathbb{Z} $. \end{itemize} \begin{theorem}[Matveev's Theorem] \label{Matveev} Assume that $\eta_1, \ldots, \eta_t$ are positive real algebraic numbers in a real algebraic number field $\mathbb{K}$ of degree $ d_\mathbb{K} $, $b_1,\ldots,b_t$ are rational integers, and \[ \Lambda:=\eta_1^{b_1} \cdots \eta_t^{b_t}-1, \] is not zero.
Then \[ |\Lambda|>\exp\left(-1.4\cdot 30^{t+3}\cdot t^{4.5}\cdot d_\mathbb{K}^2(1+\log d_\mathbb{K})(1+\log B)A_1\cdots A_t\right), \] where $B\geq \max\{|b_1|,\ldots,|b_t|\},$ and $A_i\geq \max\{d_\mathbb{K}h(\eta_i),|\log \eta_i|, 0.16\},$ for all $i=1,\ldots,t.$ \end{theorem} We cite the following lemma from \cite{DP}, which is a version of the reduction method based on the Baker-Davenport lemma \cite{Baker-Davenport}, and we use it to reduce some upper bounds on the variables. Recall that, for a real number $ \theta, $ we put $ ||\theta||=\min\{ |\theta -n | : n \in\mathbb{Z} \}, $ the distance from $ \theta $ to the nearest integer. \begin{lemma} \label{reduction} Let $M$ be a positive integer, $p/q$ be a convergent of the continued fraction of the irrational $\gamma$ such that $q>6M$, and let $A,B,\mu$ be some real numbers with $A>0$ and $B>1$. If $\epsilon:=||\mu q||-M||\gamma q|| >0$, then there is no solution to the inequality \[ 0< | u\gamma-v+\mu | <AB^{-w}, \] in positive integers $u,v$ and $w$ with \[ u\leq M \quad\text{and}\quad w\geq \frac{\log(Aq/\epsilon)}{\log B}. \] \end{lemma} \section{Proofs of Theorems \ref{main1} and \ref{main2}} \textbf{Proof of Theorem \ref{main1}:} Assume that the equation \eqref{FcFL} holds. We will need relations among the variables $ n, m, k $ and $ d $ throughout this section.
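The reduction step of Lemma \ref{reduction}, which will be applied repeatedly below, can be sketched in code as follows. This is our standard-library illustration, not the paper's \textsf{Maple} program; the 120-digit working precision and all function names are our own choices:

```python
# Sketch of the Baker-Davenport reduction (the lemma above): expand gamma
# into a continued fraction, take the first convergent denominator q > 6M,
# and evaluate eps = ||mu q|| - M ||gamma q||.  The 120-digit precision is
# an assumption, adequate for the sizes of M appearing in this paper.
from decimal import Decimal, getcontext

getcontext().prec = 120

def dist_to_int(x):
    """||x||: the distance from x to the nearest integer."""
    return abs(x - round(x))

def continued_fraction(x, n_terms):
    """First n_terms partial quotients of x > 0."""
    quotients = []
    for _ in range(n_terms):
        a = int(x)
        quotients.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return quotients

def reduction_epsilon(gamma, mu, M, n_terms=300):
    """First convergent denominator q of gamma with q > 6M, together with
    eps = ||mu q|| - M ||gamma q|| from the lemma."""
    p1, p0, q1, q0 = 1, 0, 0, 1          # seeds of the convergent recurrences
    for a in continued_fraction(gamma, n_terms):
        p1, p0 = a * p1 + p0, p1
        q1, q0 = a * q1 + q0, q1
        if q1 > 6 * M:
            break
    eps = dist_to_int(mu * q1) - M * dist_to_int(gamma * q1)
    return q1, eps

# The quantities used in the proofs below:
alpha = (1 + Decimal(5).sqrt()) / 2
tau = Decimal(10).ln() / alpha.ln()      # gamma = log 10 / log alpha
```

If $\epsilon>0$, the lemma then bounds $w$ by $\log(Aq/\epsilon)/\log B$; this is exactly how the bounds on $n-k$ and $2k$ are reduced below.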
Note that we may write the number of digits of $ L_k $ as $ d=\lfloor \log_{10}L_k \rfloor +1 ,$ where $ \lfloor \theta\rfloor $ denotes the floor of $ \theta ,$ that is, the greatest integer less than or equal to $ \theta .$ Thus, \begin{align*} d=\lfloor \log_{10}L_k \rfloor +1 \leq 1+\log_{10}L_k & \leq 1+ \log_{10}{ (2\alpha^{k} )}= 1+ {k}\log_{10}{ \alpha } +\log_{10}{ 2 } \\ \end{align*} and $$ d=\lfloor \log_{10}L_k \rfloor +1 > \log_{10}L_k \geq \log_{10}{ \alpha^{k-1} } \geq {(k-1)} \log_{10}{ \alpha } .$$ From the above relations we may get more explicit bounds for $ d $ as \begin{equation} \label{d} \frac{k-1}{5} < d < \frac{k+6}{4}, \end{equation} by using the facts that $ (1/5)< \log_{10}{ \alpha } \approx 0.208\ldots <(1/4) $ and $ \log_{10}{ 2 } < 0.31 .$ In particular, $$ L_k = 10^{ \log_{10}L_k } <10^d \leq 10 L_k .$$ From the last inequality together with \eqref{FcFL} we write $$\alpha^{n-2} \leq F_n=10^d F_m +L_k \leq 10 L_k F_m +L_k <11 F_mL_k < 22 \alpha^{m+k-1} \leq \alpha^{m+k+6} $$ and $$ \alpha^{n-1} \geq F_n=F_m 10^d +L_k > F_m L_k+L_k > F_m L_k \geq \alpha^{m+k-3}.$$ Hence, we have that \begin{equation} \label{n} m+k-2 < n < m+k+8. \end{equation} Before further calculations, we wrote a short computer program to search for solutions $ (n,m,k) $ of \eqref{FcFL} in the range $ 0\leq m,k <200 ,$ and we found only the Fibonacci numbers given in Theorem \ref{main1}. So from now on we may assume that $ \max \{m,k\} \geq 200. $ Note that we may assume $ n-k \geq 4 .$ Indeed, using the well-known fact $ L_k=F_{k+1}+F_{k-1} $, see for example \cite{Koshy}, we may write equation \eqref{FcFL} as $$ F_n= 10^d F_m+F_{k+1}+F_{k-1} .$$ Then, clearly $ n \neq k $ and $ n \neq k+1 .$ If $ m=0 ,$ then the case $ F_n=L_k $ is possible only for $ L_k \in \{ 1,2,3 \}, $ that is, $ \max \{m,k\} < 3, $ a contradiction.
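The digit bounds \eqref{d} and the inequality $L_k < 10^d \leq 10L_k$ used above are easy to confirm numerically; a short sketch (ours, with hypothetical function names):

```python
# Check the digit bounds (k-1)/5 < d < (k+6)/4 and L_k < 10^d <= 10 L_k,
# where d is the number of decimal digits of the Lucas number L_k.

def lucas(k):
    """The k-th Lucas number (L_0 = 2, L_1 = 1)."""
    a, b = 2, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def check_digit_bounds(k_max=300):
    for k in range(k_max):
        Lk = lucas(k)
        d = len(str(Lk))                        # d = floor(log10 L_k) + 1
        assert (k - 1) / 5 < d < (k + 6) / 4    # the explicit bounds above
        assert Lk < 10 ** d <= 10 * Lk          # 10^d is the concatenation shift
    return True
```

Equality on the right occurs only for $L_1=1$, which is why the code asserts $10^d \leq 10L_k$ rather than a strict inequality.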
So $ m \neq 0 $ and hence from the inequality $$ F_n=10^d F_m +L_k \geq 10^d +L_k > 2L_k = 2 (F_{k+1}+F_{k-1}) ,$$ we see that the cases $ n = k+2 $ and $ n = k+3 $ are also not possible. So we get that \begin{equation} \label{nk4} n-k \geq 4. \end{equation} Using the Binet formulas for the Fibonacci and Lucas sequences, we rewrite equation \eqref{FcFL} as $$ \dfrac{\alpha^{n} -\beta^{n}}{\sqrt{5}} = \dfrac{\alpha^{m}}{\sqrt{5}} 10^d - \dfrac{\beta^{m}}{\sqrt{5}} 10^d +L_k,$$ $$ \dfrac{\alpha^{n}}{\sqrt{5}} - \dfrac{ 10^d \alpha^{m}}{\sqrt{5}} = \dfrac{\beta^{n}}{\sqrt{5}} - \dfrac{ 10^d \beta^{m}}{\sqrt{5}} + L_k. $$ Multiplying both sides of the above equation by $ {\sqrt{5}}/{\alpha^{n}} $ and taking absolute values of both sides, we get that $$ \left| 1- \dfrac{10^{d} }{ \alpha^{n-m}} \right| \leq \dfrac{ |\beta^{n}| }{\alpha^n} + \dfrac{ 10^d |\beta^{m}|}{ \alpha^n } + \dfrac{L_k \sqrt{5} }{\alpha^n} $$ $$ \qquad \qquad \qquad \leq \dfrac{ 1}{\alpha^{2n}} + \dfrac{ 10 L_k }{ \alpha^{n+m} } + \dfrac{2 \alpha^k \sqrt{5} }{\alpha^n} $$ $$ \qquad \qquad \qquad < \dfrac{ 1}{\alpha^{2n}} + \dfrac{ 20 \alpha^k }{ \alpha^{n+m} } + \dfrac{2 \sqrt{5} }{\alpha^{n-k}}. $$ Since $ 20 < \alpha^{7} ,$ we find that \begin{equation} \label{1ineq} \Lambda_1 := \left| 1- \dfrac{10^{d} }{ \alpha^{n-m}} \right| < \dfrac{ 3}{ \alpha^{n-k-7}}. \end{equation} Let $(\eta_1, b_1)=(10, d)$ and $(\eta_2, b_2)=(\alpha, -(n-m)),$ where $ \eta_1, \eta_2 \in \mathbb{K}=\mathbb{Q}(\sqrt{5}).$ We take $ d_\mathbb{K}=2 ,$ the degree of the real number field $ \mathbb{K}.$ Since $ h(\eta_1)=\log{10} $ and $h(\eta_2)=\dfrac{1}{2} \log {\alpha} ,$ we take $ A_1 = 2\log{10} $ and $A_2= \log {\alpha}.$ Suppose that $ n-m < d.$ Then from the two relations \eqref{d} and \eqref{n}, we get that $$ k-2<n-m<d<(k+6)/4,$$ which implies $ k \leq 4 .$ Since $ L_4=7 ,$ we have $ d=1 .$ Hence $ n=m,$ that is, $ F_n=10 F_n+L_k ,$ which is clearly false.
So we take $$ B: = \max\{ |b_i | \} = \max\{ d, n-m\} = n-m. $$ In \eqref{1ineq}, $ \Lambda_1 \neq 0 .$ Indeed, if $ \Lambda_1=0 ,$ then we get that $ \alpha^{n-m}= 10^d \in \mathbb{Q} $, which is possible only for $ n=m ,$ and we know that this is not the case. So $ \Lambda_1 \neq 0.$ Now we apply Theorem \ref{Matveev} to $ \Lambda_1 $ and we get that $$ \log{( \Lambda_1 ) } > -1.4 \cdot 30^5 \cdot 2^{4.5} \cdot 2^2 (1+\log 2)(1+\log {(n-m)}) \cdot 2 \log{10}\cdot \log \alpha . $$ On the other hand, taking the logarithm of both sides of \eqref{1ineq}, we also get $$ \log{( \Lambda_1 ) } < \log 3 - (n-k-7) \log \alpha. $$ Combining the last two inequalities, we get that \begin{equation} \label{n-k} n-k-7 < 2.41 \cdot 10^{10}\cdot (1+\log {(n-m)}). \end{equation} We will return to \eqref{n-k} later. Now we rewrite \eqref{FcFL} as $$ \dfrac{\alpha^{n}}{\sqrt{5}} - \alpha^k - 10^d F_m = \dfrac{\beta^{n}}{\sqrt{5}}+\beta^k ,$$ $$ \dfrac{ \alpha^n}{\sqrt{5}} \left( 1- \alpha^{k-n} \sqrt{5} \right) - 10^d F_m = \dfrac{\beta^{n}}{\sqrt{5}}+\beta^k. $$ Note that, by \eqref{nk4}, $ 1- \alpha^{k-n} \sqrt{5} >0 $ and $ 1< \dfrac{1}{1 - \alpha^{k-n} \sqrt{5} } <2 $ for $ n-k \geq 4 .$ So we may divide, and then take the absolute value of both sides of the last equality, to get that $$ \left| 1- \dfrac{ F_m 10^d \sqrt{5} }{ { \alpha^n} \left( 1- \alpha^{k-n} \sqrt{5} \right) } \right| < \left( \dfrac{1}{1 - \alpha^{k-n} \sqrt{5} } \right) \left( \dfrac{ |\beta^n | }{\alpha^n} + \dfrac{ |\beta^k | \sqrt{5} }{\alpha^n } \right) $$ $$ \qquad \qquad < 2 \left( \dfrac{ 1 }{\alpha^{2n}} + \dfrac{ \sqrt{5} }{\alpha^{n+k} } \right). $$ So \begin{equation} \label{2ineq} \left| \Lambda_2 \right | := \left| 1- \dfrac{ 10^d F_m \sqrt{5} }{ { \alpha^n} \left( 1- \alpha^{k-n} \sqrt{5} \right) } \right| < \dfrac{7}{\alpha^{2k}}.
\end{equation} Let $(\eta_1, b_1)=(10, d)$, $(\eta_2, b_2)=(\alpha, -n)$ and $(\eta_3, b_3)= \left( \dfrac{ F_m \sqrt{5} }{ \left( 1- \alpha^{k-n} \sqrt{5} \right) } , 1 \right) $. Again, $ \eta_1, \eta_2$ and $ \eta_3 $ all belong to the real quadratic number field $ \mathbb{K}=\mathbb{Q}(\sqrt{5}) .$ So we take $ d_\mathbb{K}=2 ,$ the degree of $ \mathbb{K}, $ $h(\eta_1)= \log{10}$, $h(\eta_2)=(1/2) \log(\alpha)$, and for $ h(\eta_3) $ we need the properties of the logarithmic height given in the Preliminaries, so that \begin{align*} h(\eta_3)= h \left( \dfrac{ F_m \sqrt{5} }{ 1- \alpha^{k-n} \sqrt{5} } \right) & \leq h( F_m ) +h( \sqrt{5}) +h( 1- \alpha^{k-n} \sqrt{5} ) \\ & \leq \log (F_m)+ 2h( \sqrt{5}) + h(\alpha^{k-n} ) +\log 2 \\ & \leq (m-1) \log(\alpha) + \dfrac{ |k-n| }{2} \log \alpha + \log 10. \\ \end{align*} Since $ m-1<n-k+1 $ from \eqref{n}, we write $ h(\eta_3)< \log 10+\dfrac{3(n-k)+2}{2} \log(\alpha).$ So we take $$ A_1 = 2\log {10} ,\quad A_2= \log \alpha \quad \text{and} \quad A_3= 2\log 10+ {(3(n-k)+2)} \log(\alpha).$$ By \eqref{d}, if $ n<d<(k+6)/4 ,$ then we find that $ 4n-6<k $ and hence $ m+3n<8 $ from \eqref{n}, a contradiction since $ \max\{m, k \} \geq 200. $ So $$ B:=\max\{ |b_i | \} = \max\{ d, n, 1 \}= n.$$ We show that $ \Lambda_2 \neq 0. $ For this purpose, assume that $ \Lambda_2 = 0. $ Then we get that $$ \alpha^n- \alpha^{k} \sqrt{5} = 10^d F_m \sqrt{5}. $$ Conjugating this expression in $ \mathbb{K} $ we obtain $$ \beta^n+ \beta^{k} \sqrt{5} = - 10^d F_m \sqrt{5}. $$ From these last two equalities, we find that $$ 10 \sqrt{5} \leq 10^d F_m \sqrt{5} = | \beta^n+ \beta^{k} \sqrt{5} | \leq 2 \max \{ |\beta|^n, |\beta|^{k} \sqrt{5} \} \leq 2 \sqrt{5}, $$ a contradiction. So we conclude that $ \Lambda_2 \neq 0.
$ Thus, we apply Theorem \ref{Matveev} to $ \Lambda_2 $ given in \eqref{2ineq} and we get that \begin{equation} \label{23} \log{( \Lambda_2 ) } > -1.4 \cdot 30^6 \cdot 3^{4.5} \cdot 2^2 (1+\log 2)(1+\log n) 2\log{10} \log \alpha \cdot [ 2\log 10+ {(3(n-k)+2)} \log(\alpha)]. \end{equation} On the other hand, from \eqref{2ineq}, we know that \begin{equation} \label{24} \log{( \Lambda_2 ) } < \log{7} -2k \log \alpha. \end{equation} Combining \eqref{23} and \eqref{24} we get that \begin{equation} \label{2k} k < 2.24 \cdot 10^{12} \cdot (1+\log n) [2\log 10+ (3(n-k)+2) \log(\alpha) ] . \end{equation} Now, we focus on the two inequalities \eqref{2k} and \eqref{n-k} to get a bound for $ n $ by examining the cases $ k \leq m $ and $ m <k $ separately. First, assume that $ k \leq m. $ Then $ n-m \leq n-k $ and therefore, from \eqref{n-k}, we find $$ n-k-7 < 2.41 \cdot 10^{10}\cdot (1+\log {(n-k)}) $$ which means that $ n-k < 8 \cdot 10^{11}.$ So it follows that \begin{equation*} n< 1.7 \cdot 10^{12}, \end{equation*} since from the left side of \eqref{n}, we know that $ m-2<n-k ,$ and from the right side of \eqref{n}, $ n<2m+8.$ Now let $ m \leq k .$ Then $ n<2k+8. $ From \eqref{n-k}, in particular we have that \begin{equation} \label{n-k2} n-k-7 < 2.41 \cdot 10^{10}\cdot (1+\log {n}). \end{equation} Substituting \eqref{n-k2} into \eqref{2k} and using the fact that $ (n-8)/2<k ,$ we find that \begin{equation} \label{n2} n< 7 \cdot 10^{26}. \end{equation} So, in either case, we have that $ n< 7 \cdot 10^{26}. $ Now, we reduce this upper bound on $ n. $ \begin{lemma} \label{mbound} If the equation \eqref{FcFL} holds, then $ m \leq 150.
$ In particular, if $ k \leq m ,$ then the equation \eqref{FcFL} has solutions only for $ F_n \in \{ 1, 2, 3, 13, 21, 34 \} .$ \end{lemma} \begin{proof} Suppose that $ m>150.$ Let $\Gamma_1 :=d \log {10} -(n-m) \log \alpha.$ Since $ 141< m-9 < n-k-7,$ from \eqref{1ineq} $$\left| \Lambda_1 \right|:=\left| \exp ({\Gamma_1}) -1 \right| <\dfrac{3}{\alpha^{n-k-7}} <\dfrac{1}{2}.$$ Recall that, when $ v <\dfrac{1}{2} ,$ the inequality $$ \left| \exp (u) -1 \right| < v \quad \text{implies that} \quad \left| u \right| < 2 v.$$ So, it follows that $ \left| \Gamma_1 \right| < \dfrac{6}{\alpha^{n-k-7}}.$ Moreover, $\Gamma_1 \neq 0,$ since $ \Lambda_1 \neq 0 .$ Thus, we write \begin{equation} \label{r1} 0< \left| \dfrac{ d }{n-m } - \dfrac{ \log \alpha }{\log 10 } \right | < \dfrac{6}{ (n-m) \alpha^{n-k-7} \log 10}. \end{equation} Note that, we have $$ \dfrac{6}{ (n-m) \alpha^{n-k-7} \log 10} < \dfrac{1}{2(n-m)^2}, $$ for otherwise $$ \dfrac{6}{ (n-m) \alpha^{n-k-7} \log 10} \geq \dfrac{1}{2(n-m)^2} $$ implies that $$ n>n-m> \dfrac{ \alpha^{n-k-7} \log 10 }{ 12 } > \dfrac{ \alpha^{141} \log 10 }{ 12 } > 5.6 \cdot 10^{28}, $$ which is a contradiction because of \eqref{n2}. Then we have that $$ 0< \left| \dfrac{ \log \alpha }{\log 10 } - \dfrac{ d }{n-m } \right | < \dfrac{1}{2(n-m)^2}, $$ which implies that $ \dfrac{d}{n-m} $ is a convergent of the continued fraction of $ \dfrac{ \log \alpha }{\log 10 } $, say $ \dfrac{ d }{n-m } = \dfrac{p_i}{q_i} .$ Since $\gcd(p_i, q_i)=1$, we have $ q_i \leq n-m \leq n <7 \cdot 10^{26} .$ Let $[a_1,a_2,a_3,a_4,a_5,\ldots]=[0, 4, 1, 3, 1, \ldots ]$ be the continued fraction expansion of $ \log \alpha / \log 10 $. With the help of Maple, we find that $ i < 54 $ and $ \max\{a_i\} =a_{37}=106 $ for $ i=1,2, \ldots , 54.
$ So, by the well-known property of continued fractions, see for example \cite[Theorem 1.1.(iv)]{Hen}, we get that $$ \dfrac{1}{108 (n-m)^2} \leq \dfrac{1}{(a_i +2) (n-m)^2} < \left| \dfrac{ \log \alpha }{\log 10 } - \dfrac{ d }{n-m } \right | < \dfrac{6}{ (n-m) \alpha^{n-k-7} \log 10} $$ which means $$ n> n-m > \dfrac{ \alpha^{n-k-7} \log 10 }{6 \cdot 108} > 10^{27}. $$ But this is also a contradiction, since $ n < 7 \cdot 10^{26} .$ Therefore, we conclude that $ m \leq 150. $ \end{proof} By Lemma \ref{mbound}, from now on, we assume that $ m \leq k $ and hence $ n<2k+8 .$ Moreover, we also write $ n-k<m+8 \leq 158.$ By substituting this upper bound for $ n-k $ into \eqref{2k}, we get a better bound for $ n $ as \begin{equation} \label{2k2} n-8 < 2k < 2 \cdot 2.24 \cdot 10^{12} \cdot (1+\log n) ( 476 \log(\alpha) + 2\log 10 ) . \end{equation} So from \eqref{2k2}, it follows that \begin{equation*} n< 4.5 \cdot 10^{16} . \end{equation*} Let \begin{equation} \label{gama2} \Gamma_2 :=d \log {10} -n \log \alpha + \log \left( { \dfrac{ F_m \sqrt{5}}{ 1- \alpha^{k-n} \sqrt{5} } } \right) \end{equation} so that $ \left | \Lambda_2 \right |:=\left | \exp{(\Gamma_2)}-1 \right | <\dfrac{7}{\alpha^{2k}}.$ Then $ \left | \Gamma_2 \right | < \dfrac{14} {\alpha^{2k}},$ since $ \dfrac{7}{\alpha^{2k}} < \dfrac{1}{2}.$ So, from \eqref{gama2}, \begin{equation*} 0 < \left | d \dfrac{\log {10}}{\log \alpha} -n + \dfrac{ \log \left( { \dfrac{ F_m \sqrt{5}}{ 1- \alpha^{k-n} \sqrt{5} } } \right) }{\log \alpha} \right | < \dfrac{14}{\alpha^{2k} \log \alpha }. \end{equation*} Now, we take $$ M:=4.5 \cdot 10^{16} > n > d \qquad \text{and} \qquad \tau:=\dfrac{ \log{ 10}}{\log \alpha}. $$ Then, in the continued fraction expansion of the irrational $ \tau$, we take $ q_{60} ,$ the denominator of the $ 60 $th convergent of $ \tau,$ which exceeds $ 6M.
$ Now, with the help of Maple, we calculate $$ \epsilon_{m, n-k} := ||\mu_{m, n-k} q_{60} || -M || \tau q_{60} || $$ for each $ 1 \leq m \leq 150 $ and $ 4 \leq n-k <m+8, $ where $$ \mu_{m, n-k} := \dfrac{ \log \left( { \dfrac{ F_m \sqrt{5}}{ 1- \alpha^{k-n} \sqrt{5} } } \right) }{\log \alpha}, \qquad q_{60}=2568762252997982327345614176552 $$ and we see that $$ 0.00034 < \epsilon_{50, 20} \leq \epsilon_{m, n-k}, \qquad \text{for all} \quad m, n-k . $$ Let $ A:= \dfrac{14}{\log \alpha} ,$ $ B:=\alpha $ and $ \omega :=2k .$ Thus from Lemma \ref{reduction}, we find that $$ 2k < \dfrac{ \log{ \left( Aq_{60} /{0.00034} \right) }}{\log B} < 170 .$$ So we get that $ k < 85 ,$ which is a contradiction since $ k \geq 200 .$ Thus, we conclude that the numbers 1, 2, 3, 13, 21 and 34 are the only Fibonacci numbers which are expressible as a concatenation of a Fibonacci and a Lucas number. \textbf{Proof of Theorem \ref{main2}:} Assume that the equation \eqref{FcLF} holds. As $ d=\lfloor \log_{10}F_k \rfloor +1 $ is the number of digits of $ F_k ,$ we have that $$ d=\lfloor \log_{10}F_k \rfloor +1 \leq 1+\log_{10}F_k \leq 1+ \log_{10}{ \alpha^{k-1} }= 1+ {(k-1)} \log_{10}{ \alpha }< \frac{k+3}{4} $$ and $$ d=\lfloor \log_{10}F_k \rfloor +1 > \log_{10}F_k \geq \log_{10}{ \alpha^{k-2} } = {(k-2)} \log_{10}{ \alpha }> \frac{k-2}{5} $$ since $ (1/5)< \log_{10}{ \alpha } \approx 0.208\ldots <(1/4) .$ Hence, we have that \begin{equation} \label{bd2} \frac{k-2}{5} < d < \frac{k+3}{4}, \end{equation} and $$ F_k < 10^d = 10^{1+\lfloor \log_{10}F_k \rfloor} \leq 10 F_k .$$ From the last inequality together with \eqref{FcLF}, we can find a range of $ n $ depending on $ m $ and $ k. $ More precisely, $$\alpha^{n-2} \leq F_n=L_m 10^d +F_k \leq 10 L_m F_k+F_k \leq 11 L_m F_k < 22 \alpha^{m+k-1} < \alpha^{m+k+6} $$ and $$ \alpha^{n-1} \geq F_n=L_m 10^d +F_k \geq L_m F_k+F_k > L_m F_k \geq \alpha^{m+k-3}.$$ Hence \begin{equation} \label{bn2} m+k-2 < n < m+k+8.
\end{equation} When $ k,m \leq 200 $, by a quick computation, we see that the only solutions of \eqref{FcLF} are those given in Theorem \ref{main2}. So from now on, we assume that $ \max\{m,k \} >200. $ Using the Binet formulas for the Fibonacci and Lucas sequences, we rewrite equation \eqref{FcLF} as $$ \dfrac{\alpha^{n}}{\sqrt{5}} - \dfrac{\beta^{n}}{\sqrt{5}} = \left( {\alpha^{m} + \beta^{m}} \right) 10^d +F_k,$$ $$ \dfrac{\alpha^{n}}{\sqrt{5}} - \alpha^{m} 10^d = \dfrac{\beta^{n}}{\sqrt{5}} + \beta^{m} 10^d +F_k.$$ Now, we multiply both sides of the above equation by $ {\sqrt{5}}/{\alpha^{n}} $ and take the absolute value of both sides. Thus, we get \begin{align*} \left| 1- \dfrac{10^{d} }{ \alpha^{n-m} / \sqrt{5} } \right| & \leq \dfrac{\left| \beta \right |^{n}}{\alpha^n} + \dfrac{\left| \beta \right |^{m}10^d \sqrt{5} }{\alpha^n} + \dfrac{F_k \sqrt{5} }{\alpha^n} \\ & < \dfrac{ 1}{ \alpha^{2n}} + \dfrac{ 10 F_k \sqrt{5} }{ \alpha^{n+m}}+ \dfrac{\alpha^{k-1} \alpha^{2}}{\alpha^n} \\ & < \dfrac{ 1}{ \alpha^{2n}} + \dfrac{ \alpha^{5} \alpha^{k-1} \alpha^{2} }{ \alpha^{n+m}}+ \dfrac{1}{\alpha^{n-k-1}} \\ & < \dfrac{ 3}{ \alpha^{n-k-6}}. \end{align*} So, we have that \begin{equation} \label{3ineq} \Lambda_3 := \left| 1- \dfrac{10^{d} \sqrt{5} }{ \alpha^{n-m} } \right| < \dfrac{ 3}{ \alpha^{n-k-6}}. \end{equation} Now, we may apply Theorem \ref{Matveev} to the left side of the inequality \eqref{3ineq} with $(\eta_1, b_1)=(10, d)$, $(\eta_2, b_2)=(\alpha, -(n-m))$ and $(\eta_3, b_3)=(\sqrt{5} , 1)$. Since $ \eta_1,$ $\eta_2$ and $ \eta_3 $ belong to the real quadratic number field $ \mathbb{K}=\mathbb{Q}(\sqrt{5}) $, we take $ d_\mathbb{K}=2 ,$ the degree of the number field $ \mathbb{K}.$ Since $ h(\eta_1)=\log{10},$ $h(\eta_2)=\dfrac{1}{2} \log {\alpha},$ and $h(\eta_3)=\log \sqrt{5} ,$ we take $ A_1 = 2\log{10},$ $ A_2= \log {\alpha}$ and $A_3= \log 5.$ Assume that $ \Lambda_3=0.
$ Then, we get that $ \alpha^{n-m} = 10^d \sqrt{5},$ that is, $ \alpha^{2(n-m)} = 5 \cdot 10^{2d} \in \mathbb{Q} ,$ but this is false for $ n-m \neq 0 ,$ and one can see that the case $ n-m=0 $ is not possible from \eqref{FcLF}. So $ \Lambda_3 \neq 0.$ Now, we claim that $ \max\{d, n-m \}=n-m .$ Indeed, if $ n-m < d$ then from the inequalities \eqref{bd2} and \eqref{bn2}, we get that $ k-2<n-m<d<(k+3)/4.$ Thus $ k \leq 3 ,$ which means that $ d=1 $ and hence $ n-m<1 .$ But since $ L_m=F_{m-1}+F_{m+1} \geq F_{m+1} ,$ we have $ F_n = 10^d L_m+F_k > F_{m+1} ,$ so $ n>m+1 ,$ a contradiction. So $$ B: = \max\{ |b_i | \} = \max\{ d, n-m, 1 \} = n-m. $$ Now, we are ready to apply Theorem \ref{Matveev} to $ \Lambda_3 $ given in \eqref{3ineq}, and we get that $$ \log{( \Lambda_3 ) } > -1.4 \cdot 30^6 \cdot 3^{4.5} \cdot 2^2 (1+\log 2)(1+\log {(n-m)}) \cdot 2 \log{10} \cdot \log \alpha \cdot \log 5. $$ Combining this inequality with the one directly obtained from \eqref{3ineq}, namely $ \log( \Lambda_3 )< \log 3 -(n-k-6)\log \alpha ,$ we get that \begin{equation} \label{bn-k2} n-k-6 < 7.19 \cdot 10^{12} (1+\log {(n-m)}). \end{equation} Now, we rewrite \eqref{FcLF} as $$\dfrac{\alpha^{n}}{\sqrt{5}} - L_m 10^d - \dfrac{\alpha^{k}}{\sqrt{5}} = \dfrac{\beta^{n}}{\sqrt{5}} + \dfrac{\beta^{k}}{\sqrt{5}} ,$$ $$\dfrac{\alpha^{n}}{\sqrt{5}} \left( 1- \alpha^{k-n} \right) - L_m 10^d = \dfrac{\beta^{n}}{\sqrt{5}} + \dfrac{\beta^{k}}{\sqrt{5}} .$$ After dividing both sides of the last equation by $ \dfrac{\alpha^{n}}{\sqrt{5}} \left( 1- \alpha^{k-n} \right) $ and taking the absolute value of both sides, we get that $$ \left| 1- \dfrac{ L_m 10^d \sqrt{5} }{ \alpha^n ( 1-\alpha^{k-n} ) } \right| \leq \left( \dfrac{1}{ 1-\alpha^{k-n} } \right) \left( \dfrac{ |\beta^n | }{\alpha^n } + \dfrac{ |\beta^k |}{\alpha^n } \right).
$$ Taking into account $ 1< \dfrac{1}{ 1-\alpha^{k-n} } <2 $ for $ n-k \geq 2,$ we get that \begin{equation} \label{4ineq} \left| \Lambda_4 \right | := \left| 1- \dfrac{ L_m 10^d \sqrt{5} }{ \alpha^n \left( 1- \alpha^{k-n} \right) } \right| < \dfrac{4}{\alpha^{2k}}. \end{equation} Let $(\eta_1, b_1)=(10, d)$, $(\eta_2, b_2)=(\alpha, -n)$ and $(\eta_3, b_3)=\left( \dfrac{ L_m \sqrt{5} }{ 1- \alpha^{k-n} } , 1 \right) .$ All of $ \eta_1, \eta_2$ and $ \eta_3 $ belong to the real quadratic number field $ \mathbb{K}=\mathbb{Q}(\sqrt{5}) $, which has degree $ d_\mathbb{K}=2 .$ Since $h(\eta_1)= \log{10},$ $h(\eta_2)=(1/2)\log(\alpha)$ and $$ h(\eta_3) \leq h( L_m ) + h( \sqrt{5} ) + h( 1- \alpha^{k-n} ) \leq \log(L_m) + \log(\sqrt{5}) + h( \alpha^{k-n}) +\log 2 $$ $$ < \log (2\alpha^m) + \log(\sqrt{5}) + \dfrac{|k-n|}{2} \log \alpha +\log 2, $$ we get $$ h(\eta_3) < \dfrac{3(n-k)+4}{2} \log(\alpha) + \log(4\sqrt{5}), $$ where we used the fact that $ m<n-k+2 $ from \eqref{bn2}. So we take $ A_1 = 2\log {10},$ $A_2= \log \alpha $ and $A_3= (3(n-k)+4) \log(\alpha) + \log(80).$ Clearly, $ B:=\max\{ |b_i | \} = \max\{ d, n, 1 \}= n.$ Also $ \Lambda_4 \neq 0.$ To show this, assume that $ \Lambda_4=0. $ Then, we get that $\alpha^{n} - \alpha^{k} = 10^d L_m \sqrt{5}. $ Conjugating in $ \mathbb{K} ,$ we find $\beta^{n} - \beta^{k} = - 10^d L_m \sqrt{5}. $ Adding the last two equalities side by side, we obtain $ L_n-L_k=0, $ that is, $ n=k, $ a contradiction. So $ \Lambda_4 \neq 0.$ Thus, we apply Theorem \ref{Matveev} to $ \Lambda_4 $ given in \eqref{4ineq}, and we get that \begin{equation} \label{b232} \log{( \Lambda_4 ) } > -1.4 \cdot 30^6 \cdot 3^{4.5} \cdot 2^2 (1+\log 2)(1+\log n) \cdot 2\log{10} \cdot \log \alpha \cdot A_3. \end{equation} On the other hand, from \eqref{4ineq}, taking into account that $\log{( \Lambda_4 ) } < \log 4 -2k \log \alpha ,$ we get that \begin{equation} \label{b2k} k < 2.24 \cdot 10^{12} \cdot (1+\log n) [ (3(n-k)+4) \log(\alpha) + \log(80) ] .
\end{equation} Now, we use the two inequalities \eqref{bn-k2} and \eqref{b2k} to get an initial bound on the variable $n$. To do this, first assume that $ k \leq m. $ Then $ n-m \leq n-k $ and hence, from \eqref{bn-k2}, we write $$ n-k-6 < 7.19 \cdot 10^{12} (1+\log {(n-k)}) $$ which implies that $ n-k < 3 \cdot 10^{14}.$ So it follows that \begin{equation*} n< 10^{15}, \end{equation*} since from \eqref{bn2}, we know that $ n<2m+8$ and $ m<n-k+2.$ Now let $ m \leq k .$ From \eqref{bn-k2}, in particular, we have that \begin{equation} \label{bn-k23} n-k-6 < 7.19 \cdot 10^{12}(1+\log {n}). \end{equation} Substituting \eqref{bn-k23} into \eqref{b2k} and using the fact that $ n<2k+8 ,$ we find that \begin{equation} \label{bn23} n< 2.5 \cdot 10^{29}. \end{equation} So, in either case, we have that $ n< 2.5 \cdot 10^{29}. $ \begin{lemma} \label{bmbound} If the equation \eqref{FcLF} holds, then $ m \leq 168. $ In particular, if $ k \leq m ,$ then the equation has solutions only for $ F_n \in \{ 13, 21 \} .$ \end{lemma} \begin{proof} Suppose that $ m>168.$ Let \begin{equation} \label{gama3} \Gamma_3 :=d \log {10} -(n-m) \log \alpha + \log \sqrt{5} . \end{equation} Since $ 160<m-8< n-k-6,$ $ \left| \Lambda_3 \right|:=\left| \exp {(\Gamma_3)} -1 \right| <\dfrac{3}{\alpha^{n-k-6}} < \dfrac{1}{2}.$ Hence, we get that $ \left| \Gamma_3 \right| < \dfrac{6}{\alpha^{n-k-6}}.$ Thus from \eqref{gama3}, we write \begin{equation*} 0< \left| d \dfrac{ \log {10} }{\log \alpha } - (n-m) + \dfrac{\log \sqrt{5}}{\log \alpha} \right | < \dfrac{6}{ \alpha^{n-k-6} \log \alpha}.
\end{equation*} Let $ M:= 2.5 \cdot 10^{29} > n > d$ and $\tau = \dfrac{\log {10}}{\log \alpha}.$ Then, in the continued fraction expansion of the irrational $ \tau$, we see that $ q_{60},$ the denominator of the $ 60 $th convergent of $ \tau,$ exceeds $ 6M.$ With the help of Maple, we calculate $$ \epsilon := ||\mu q_{60} || -M || \tau q_{60} || $$ where $\mu = \dfrac{\log \sqrt{5}}{\log \alpha} ,$ and we find that $0.017775 < \epsilon .$ Let $ A:= \dfrac{6}{\log \alpha} ,$ $ B:=\alpha $ and $ \omega :=n-k-6 .$ From Lemma \ref{reduction}, we find that $$ n-k-6 < \dfrac{ \log{ \left( Aq_{60}/ \epsilon \right) } }{\log B} < 160 .$$ But this contradicts the fact that $ 160<m-8< n-k-6.$ So, we conclude that $ m \leq 168 .$ \end{proof} By Lemma \ref{bmbound}, from now on, we deal only with the case $ m<k. $ Since $ n-k<m+8 \leq 176 ,$ by substituting this upper bound for $ n-k $ into \eqref{b2k}, we get that \begin{equation} \label{b2k2} n <2k +8 < 2 \cdot 2.24 \cdot 10^{12} \cdot (1+\log n) [ 532 \log(\alpha) + \log(80) ] +8 . \end{equation} So from \eqref{b2k2}, it follows that \begin{equation*} n< 4.6 \cdot 10^{16} . \end{equation*} Let \begin{equation} \label{gama4} \Gamma_4 :=d \log {10} -n \log \alpha + \log { \left( \dfrac{ L_m \sqrt{5}}{ 1- \alpha^{k-n}} \right) }. \end{equation} So $$ \left | \Lambda_4 \right |:=\left | \exp{\Gamma_4}-1 \right | <\dfrac{4}{\alpha^{2k}}. $$ Then, $\left | \Gamma_4 \right | < \dfrac{8}{\alpha^{2k}}, $ since $ \dfrac{4}{\alpha^{2k}} < \dfrac{1}{2}.$ Thus, from \eqref{gama4}, \begin{equation} \label{br2} 0 < \left | \dfrac{\Gamma_4}{\log \alpha} \right | < \dfrac{8}{\alpha^{2k} \log \alpha }. \end{equation} Now, we take $ M:=4.6 \cdot 10^{16} > n > d $ and $\tau:=\dfrac{ \log{ 10}}{\log \alpha} ,$ which is irrational. Then, in the continued fraction expansion of $ \tau$, we take $ q_{91} ,$ the denominator of the $ 91 $st convergent of $ \tau,$ which exceeds $ 6M.
$ Now, with the help of Maple we calculate $$ \epsilon_{m, n-k} := ||\mu_{m, n-k} q_{91} || -M || \tau q_{91} || $$ for each $ 0 \leq m \leq 168 $ and $ 3 \leq n-k <m+7 ,$ where $$ \mu_{m, n-k} := \dfrac{ \log { \left( \dfrac{ L_m \sqrt{5}}{ 1- \alpha^{k-n}} \right) } }{\log \alpha} ,$$ except for $ n-k=4$ and $ n-k=8 .$ For these two values, $ \epsilon_{m, 4}<0 $ and $ \epsilon_{m, 8} <0 .$ To overcome this problem, we use the periodicity of the Fibonacci sequence. With an easy computation, one can see that there is no positive integer $ t $ in the range $ 1 \leq t \leq 20 $ which satisfies either $ F_t \equiv F_{t+4} \pmod 5 $ or $ F_t \equiv F_{t+8} \pmod 5 .$ Since the period of the Fibonacci sequence modulo 5 is 20 \cite[Theorem 35.6.]{Koshy}, we conclude that there is no positive integer which satisfies either of these two congruences. So, from \eqref{FcLF}, we conclude that $ n-k \neq 4 $ and $ n-k \neq 8. $ Let $ A:= \dfrac{8}{\log \alpha} ,$ $ B:=\alpha $ and $ \omega :=2k .$ Thus, from Lemma \ref{reduction}, we get that the inequality \eqref{br2} has solutions only for $$ 2k \leq \dfrac{ \log{ \left( Aq_{91} /{0.000109} \right) }}{\log B} \leq 233 .$$ So, we get that $m< k < 117, $ which is a contradiction because of the bound on $ k .$ Thus, we conclude that the equation \eqref{FcLF} has no solution when $ F_n \not\in \{ 13, 21 \} .$ This completes the proof. \textbf{Discussion:} In this paper, we investigated mixed concatenations of numbers belonging to different sequences, in the particular case of the Fibonacci and Lucas sequences, and we found that there are only a handful of Fibonacci numbers that can be written as a mixed concatenation of a Fibonacci and a Lucas number.
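The congruence check used in the proof above is easy to reproduce. The following Python sketch (ours, purely illustrative) verifies that no $ t $ with $ 1 \leq t \leq 20 $ satisfies $ F_t \equiv F_{t+4} \pmod 5 $ or $ F_t \equiv F_{t+8} \pmod 5 $; by the periodicity of the Fibonacci sequence modulo 5, testing one period suffices.

```python
# Check that F_t is never congruent to F_{t+4} or F_{t+8} modulo 5.
# Since the Fibonacci sequence mod 5 is periodic with period 20,
# it is enough to test t = 1, ..., 20.
def fib_mod(n, m):
    """Return F_n mod m, with F_0 = 0 and F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, (a + b) % m
    return a

bad = [t for t in range(1, 21)
       if fib_mod(t, 5) == fib_mod(t + 4, 5) or fib_mod(t, 5) == fib_mod(t + 8, 5)]
assert bad == []  # no t in 1..20 satisfies either congruence
```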
The results of this paper, together with those of \cite{Alan, Banks}, raise the question of whether there could exist a nondegenerate \emph{binary} recurrence sequence containing infinitely many terms that are expressible as a mixed concatenation of, say, a Fibonacci and a Lucas number. Let us define the sequence $ u_n = 10F_n+1 .$ Then, clearly, every term of this sequence is a concatenation of a Fibonacci number and $ 1. $ So, in the above question, we cannot omit the expression "nondegenerate \emph{binary}". As a final remark, it is also worth noting that, using only some properties and identities of Fibonacci and Lucas numbers, we could obtain only a few partial results; therefore, we used the method of this paper, an effective combination of Baker's method with the reduction method. \begin{thebibliography}{} \bibitem{Alan} M. Alan, On Concatenations of Fibonacci and Lucas Numbers. Bull. Iran. Math. Soc. (2022). https://doi.org/10.1007/s41980-021-00668-7 \bibitem{Alahmadi2} A. Alahmadi, A. Altassan, F. Luca, H. Shoaib, Fibonacci numbers which are concatenations of two repdigits, \emph{Quaestiones Mathematicae}, 44:2, 281-290, (2021). \bibitem{Baker-Davenport} A. Baker and H. Davenport, The equations $3x^2-2 =y^2$ and $8x^2-7=z^2$, \emph{Quart. J. Math. Oxford Ser.} {\bf 20}, 129-137, (1969). \bibitem{Banks} W. D. Banks and F. Luca, Concatenations with binary recurrent sequences, \emph{Journal of Integer Sequences}, 8, 05.1.3, (2005). \bibitem{Dam1} M. Ddamulira, Padovan numbers that are concatenations of two distinct repdigits, \emph{Mathematica Slovaca}, 71(2), (2021). \bibitem{Dam2} M. Ddamulira, Tribonacci numbers that are concatenations of two repdigits, \emph{Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat.}, RACSAM 114(4), 203, (2020). \bibitem{DP} A. Dujella and A. Peth\H o, A generalization of a theorem of Baker and Davenport, \emph{Quart. J. Math. Oxford Ser.} \textbf{49}, no. 195, 291-306, (1998). \bibitem{EK2} F. Erduvan and R.
Keskin, Lucas numbers which are concatenations of three repdigits, \emph{Results in Mathematics}, 76:13, (2021). https://doi.org/10.1007/s00025-020-01314-0 \bibitem{Hen} D. Hensley, \emph{Continued Fractions}, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, (2006). \bibitem{Koshy} T. Koshy, \emph{Fibonacci and Lucas Numbers with Applications}, Wiley-Interscience Publications, 2001. \bibitem{kramer} J. Kramer and V. E. Hoggatt Jr., Special cases of Fibonacci periodicity, \emph{Fib. Quart.}, 10:519--522, (1972). \bibitem{Matveev} E. M. Matveev, An explicit lower bound for a homogeneous rational linear form in the logarithms of algebraic numbers, II, \emph{Izv. Ross. Akad. Nauk Ser. Mat.} \textbf{64} (2000), 125-180. Translation in \emph{Izv. Math.} \textbf{64}, 1217-1269, (2000). \bibitem{Qu} Y. Qu and J. Zeng, Lucas Numbers Which Are Concatenations of Two Repdigits, \emph{Mathematics}, 8(8), 1360, (2020). https://doi.org/10.3390/math8081360 \bibitem{Trojovsky} P. Trojovsky, On Fibonacci numbers with a prescribed block of digits, \emph{Mathematics}, 8(4), 639, (2020). \end{thebibliography} \end{document}
2206.13592v3
http://arxiv.org/abs/2206.13592v3
Successive vertex orderings of fully regular graphs
\documentclass{article} \usepackage{amsthm} \usepackage{amsmath} \usepackage{amssymb} \newtheorem{theorem}{Theorem}[section] \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{remark}[theorem]{Remark} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{maintheorem}[theorem]{Main Theorem} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{notation}[theorem]{Notation} \newtheorem{problem}[theorem]{Problem} \newtheorem{question}[theorem]{Question} \newtheorem{example}[theorem]{Example} \newtheorem{observation}{Observation} \title{Successive vertex orderings of fully regular graphs} \author{Lixing Fang \thanks{Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China. Email: [email protected].} \and Hao Huang \thanks{Department of Mathematics, National University of Singapore. Email: [email protected]. Research supported in part by a start-up grant at NUS and an MOE Academic Research Fund (AcRF) Tier 1 grant.} \and J\'anos Pach \thanks{R\'enyi Institute, Budapest and IST Austria. Research partially supported by National Research, Development and Innovation Office (NKFIH) grant K-131529 and ERC Advanced Grant ``GeoScape.'' Email: [email protected].} \and G\'abor Tardos \thanks{R\'enyi Institute, Budapest. Research partially supported by National Research, Development and Innovation Office (NKFIH) grants K-132696, SSN-135643, and ERC Advanced Grant ``GeoScape.'' Email: [email protected]. } \and Junchi Zuo \thanks{Qiuzhen College, Tsinghua University, Beijing, China. Email: [email protected].}} \date{} \begin{document} \maketitle \begin{abstract} A graph $G=(V,E)$ is called {\em fully regular} if for every independent set $I\subset V$, the number of vertices in $V\setminus I$ that are not connected to any element of $I$ depends only on the size of $I$. 
A linear ordering of the vertices of $G$ is called \emph{successive} if for every $i$, the first $i$ vertices induce a connected subgraph of $G$. We give an explicit formula for the number of successive vertex orderings of a fully regular graph. As an application of our results, we give alternative proofs of two theorems of Stanley and Gao \& Peng, determining the number of linear \emph{edge} orderings of complete graphs and complete bipartite graphs, respectively, with the property that the first $i$ edges induce a connected subgraph. As another application, we give a simple product formula for the number of linear orderings of the hyperedges of a complete 3-partite 3-uniform hypergraph such that, for every $i$, the first $i$ hyperedges induce a connected subgraph. We found similar formulas for complete (non-partite) 3-uniform hypergraphs and in another closely related case, but we managed to verify them only when the number of vertices is small. \end{abstract} \section{Introduction} In preparation for a computing contest, the first-named author bumped into the following question. In how many different ways can we arrange the first $mn$ positive integers in an $m\times n$ matrix so that for each entry $i$ different from $1$, there is a smaller entry either in the same row or in the same column? After some computation, he accidentally found the formula $$(mn)!\cdot\frac{m+n}{\binom{m+n}{m}}$$ for this quantity, which he was able to verify by computer up to $m,n\le 2000$. It turns out that at about the same time, the same question was asked by S. Palcoux on MathOverflow~\cite{Pa18}, which has led to interesting results by Stanley \cite{Stanley} and by Gao and Peng \cite{GaoPeng}. We also posed the question as Problem 4 at the 2019 Mikl\'os Schweitzer Memorial Competition in Hungary, see~\cite{Sch19}. \smallskip Many outstanding mathematicians contemplated what makes a mathematical formula beautiful. 
One of the often proposed criteria was that, even if we somehow hit upon it, there is no easy way to verify it; see, e.g., ~\cite{Tu77}. The above formula seems to meet this criterion. \smallskip First, we reformulate the above question in graph-theoretic terms. A \emph{shelling} of a graph $G$ (regarded as a 1-dimensional simplicial complex) is a linear ordering of its edges such that, for every $i$, the first $i$ edges induce a connected subgraph in $G$. Clearly, the number of different ways to enumerate the $mn$ positions of an $m\times n$ matrix with the required properties is equal to the number of shellings of $K_{m,n}$, a complete bipartite graph with $m$ and $n$ vertices in its classes. Stanley and Gao and Peng were the first to establish the following formulas. \begin{theorem}\label{thm1} {\bf (i)} {\rm (Stanley, \cite{Stanley})} The number of shellings of the complete graph $K_n$ on $n\ge2$ vertices is $$\binom{n}{2}!\cdot \frac{n!}{2 \cdot (2n-3)!!}$$ {\bf (ii)} {\rm(Gao-Peng~\cite{GaoPeng})} The number of shellings of the complete bipartite graph $K_{m,n}$ with $m\ge1$ and $n\ge 1$ vertices in its classes is $$(mn)! \cdot \frac{m+n}{\binom{m+n}{m}}.$$ \end{theorem} The aim of the present note is to approach the above problem from a slightly different angle, by counting \emph{vertex orders} rather than edge orders. \begin{definition} Let $G$ be a graph with vertex set $V(G)$. A \emph{linear ordering} $\pi: V(G)\rightarrow \{1,2,\ldots,|V(G)|\}$ of $V(G)$ is said to be \emph{successive} if, for every $i\ge1$, the subgraph of $G$ induced by the vertices $v\in V(G)$ with $\pi(v)\le i$ is connected. \end{definition} Equivalently, $\pi$ is a successive vertex ordering if and only if for every vertex $v\in V(G)$ with $\pi(v)>1$, there is an adjacent vertex $v'\in V(G)$ with $\pi(v')<\pi(v)$. \smallskip Let $\sigma(G)$ denote the number of successive linear orderings of $V(G)$. 
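For small graphs, $\sigma(G)$ is easy to compute by exhaustive search directly from this characterization. A brute-force Python sketch (ours, purely illustrative; the function name is not from the paper):

```python
from itertools import permutations

def sigma(n, edges):
    """Count successive orderings of a graph with vertices 0, ..., n-1 by brute force."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    count = 0
    for perm in permutations(range(n)):
        # every vertex except the first must have a neighbor among the earlier ones
        if all(adj[perm[i]] & set(perm[:i]) for i in range(1, n)):
            count += 1
    return count

print(sigma(3, [(0, 1), (1, 2)]))  # the path on 3 vertices has 4 successive orderings
```

For the path $0$--$1$--$2$, the valid orderings are $012$, $102$, $120$, and $210$, in agreement with the printed count.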
In a probabilistic framework, it is often more convenient to calculate the probability $\sigma'(G)$ that a randomly and uniformly chosen linear ordering of $V(G)$ is successive. Obviously, we have $\sigma'(G)=\sigma(G)/|V(G)|!$. For an arbitrary graph $G$, it is usually hopelessly difficult to determine these quantities. We need to restrict our attention to some special classes of graphs. A set of vertices $I\subseteq V(G)$ is \emph{independent} if no two elements of $I$ are adjacent. The size of the largest independent set in $G$ is denoted by $\alpha(G)$. \begin{definition} A graph $G$ is called \emph{fully regular} if for every independent set $I\subseteq V(G)$, the number of vertices in $V(G)\setminus I$ not adjacent to any element of $I$ is determined by the size of $I$. \end{definition} Clearly, a graph $G$ is fully regular if and only if there exist numbers $a_0, a_1,\ldots, a_{\alpha(G)}$ such that for any independent set $I\subseteq V(G)$, the number of vertices in $V(G)\setminus I$ not adjacent to any element of $I$ is $a_{|I|}$. We call the numbers $a_i$ the \emph{parameters} of the fully regular graph $G$. We must have $a_0=|V(G)|$ and $a_{\alpha(G)}=0$. \smallskip In Section~\ref{sec2}, we use the inclusion-exclusion principle to prove the following formula for the number of successive orderings of a fully regular graph. \begin{theorem}\label{main} Let $G$ be a fully regular graph with parameters $a_0,a_1,\dots,a_\alpha$, where $\alpha=\alpha(G)$. We have $$\sigma'(G)=\sum_{i=0}^{\alpha}\prod_{j=1}^i\frac{-a_j}{a_0-a_j},$$ $$\sigma(G)=a_0!\sum_{i=0}^{\alpha}\prod_{j=1}^i\frac{-a_j}{a_0-a_j}.$$ \end{theorem} Here and in some other formulas in this paper, we have empty products, such as $\prod_{j=1}^0a_j$. These products should be interpreted as having value $1$. The terms corresponding to $i=\alpha$ in the sums vanish, because we have $a_\alpha=0$. Thus, the upper limit $\alpha$ in the sums can be replaced by $\alpha-1$.
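As a quick sanity check of Theorem~\ref{main} (ours, not part of the paper): the $5$-cycle $C_5$ is fully regular with parameters $a_0=5$, $a_1=2$, $a_2=0$, and the formula yields $\sigma'(C_5)=1-\frac23=\frac13$, i.e., $\sigma(C_5)=5!/3=40$. A short Python sketch evaluating the sum exactly:

```python
from fractions import Fraction

def sigma_prime(a):
    """Evaluate sigma'(G) = sum_i prod_{j=1}^{i} (-a_j)/(a_0 - a_j) for parameters a."""
    total = Fraction(0)
    for i in range(len(a)):
        term = Fraction(1)
        for j in range(1, i + 1):
            term *= Fraction(-a[j], a[0] - a[j])
        total += term
    return total

print(sigma_prime([5, 2, 0]))  # 1/3 for the 5-cycle, so sigma(C5) = 120/3 = 40
```

Indeed, the $40$ successive orderings of $C_5$ can also be counted directly: $5$ choices for the first vertex, then $2$, $2$, $2$, and $1$ choices for extending the path.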
\smallskip The \emph{line graph} of a hypergraph $H$ is the graph whose vertex set is the set of hyperedges of $H$, in which two hyperedges are adjacent if and only if their intersection is nonempty \cite{Berge}, \cite{Bermond}. \smallskip It is easy to see that the line graph of every \emph{complete} $r$-uniform hypergraph and every \emph{complete $r$-partite} $r$-uniform hypergraph is fully regular, for any integer $r\ge 2$. We can generalize these examples as follows. Fix a sequence $d_1,\dots,d_t$ of positive integers, let $d=\sum_{j=1}^td_j$, and let $V_1,\ldots,V_t$ be pairwise disjoint sets. Consider the $d$-uniform hypergraph $H$ on the vertex set $V=\cup_{j=1}^tV_j$, consisting of all hyperedges $e$ such that $|e\cap V_j|=d_j$ for every $j$. The number of hyperedges of $H$ is $\prod_{j=1}^t\binom{|V_j|}{d_j}$. We claim that the line graph $L(H)$ of $H$ is fully regular. To see this, take an independent set $I$ of size $i$ in $L(H)$. Obviously, the hyperedges of $H$ that do not intersect any hyperedge in $I$ form a hypergraph of the same type on a smaller number of vertices. The number of these hyperedges (vertices of $L(H)$) is $a_i:=\prod_{j=1}^t\binom{|V_j|-id_j}{d_j}$. This number depends only on $i=|I|$, proving that $L(H)$ is fully regular. \smallskip The case $d=2$, where $H$ is a \emph{graph} (2-uniform hypergraph), is especially interesting, because a successive vertex ordering of its line graph $L(H)$ is the same as a \emph{shelling} of $H$. Unfortunately, such a direct connection fails to hold for $d>2$. \smallskip For $d=2$, we have two possibilities: (i) the case $t=1$, $d_1=2$ yields complete graphs $H=K_n$; (ii) the case $t=2$, $d_1=d_2=1$ yields complete bipartite graphs $H=K_{m,n}$ for some $m$ and $n$. In case (i), we have that $L(H)$ is fully regular with parameters $a_i=\binom{n-2i}2$, for every $0\le i\le\lfloor n/2\rfloor=\alpha(K_n)$.
In case (ii), we obtain $a_i=(m-i)(n-i)$ for every $0\le i\le\min(m,n)=\alpha(K_{m,n})$. A direct application of Theorem~\ref{main} to $L(H)$ gives \begin{corollary}\label{thm1'} {\bf (i)} The number of shellings of the complete graph $K_n$ on $n\ge2$ vertices is $$\binom{n}{2}!\cdot \sum_{i=0}^{\lfloor n/2\rfloor}\prod_{j=1}^i\frac{1}{1-\binom{n}{2}/\binom{n-2j}{2}}.$$ {\bf (ii)} The number of shellings of the complete bipartite graph $K_{m,n}$ with $m\ge1$ and $n\ge 1$ vertices in its classes is $$(mn)! \cdot \sum_{i=0}^{\min(m,n)}\prod_{j=1}^i\frac{1}{1-mn/((m-j)(n-j))}.$$ \end{corollary} In Section~\ref{sec3}, we prove that the summation formulas in Corollary~\ref{thm1'} are equal to the product formulas in Theorem~\ref{thm1}, obtained by Richard Stanley~\cite{Stanley} and by Yibo Gao and Junyao Peng~\cite{GaoPeng}. Thereby, we provide alternative proofs for the latter results. It is always interesting when a summation formula can be turned into a nice product formula. If this is possible, it often yields some deeper insights. We were able to turn the summation formula of Theorem~\ref{main} into a product formula in yet another case: applying it to the line graph of a complete 3-partite 3-uniform hypergraph. In this case, we have $t=3$ and $d_1=d_2=d_3=1$. In Section~\ref{sec4}, we establish the following result. \begin{theorem}\label{new} Let $K_{m,n,p}$ denote the complete 3-partite 3-uniform hypergraph with $m, n,$ and $p$ elements in its vertex classes, and let $G$ denote its line graph. Set $b_i=mn+np+mp-i(m+n+p-i)$. Then the number of successive orderings of the vertices of $G$ is $$\sigma(G)=\frac{(mnp-1)!\prod_{i=1}^{m+n+p-1}b_i}{\prod_{i=1}^{m-1}b_i\prod_{i=1}^{n-1}b_i\prod_{i=1}^{p-1}b_i}=(mnp)!\cdot\frac{\prod_{i=m}^{m+p}b_i}{mnp\prod_{i=1}^{p-1}b_i},$$ where the fractions should be evaluated disregarding all zero factors in both the numerator and the denominator.
\end{theorem} We found similar product formulas for the number of successive vertex orderings of the line graph of a complete 3-uniform hypergraph $K_n^{(3)}$ (where $t=1$ and $d_1=3$), and the line graph of $K_{m,n}^{(1,2)}$ (where $t=2$, $d_1=1$, $d_2=2$) but we were unable to verify them. We state them as conjectures in the last section, together with other open problems and remarks. \section{Successive vertex orderings\\--Proof of Theorem~\ref{main}}\label{sec2} In this section, we apply the inclusion-exclusion principle to establish Theorem~\ref{main}. \begin{proof} It is enough to prove the first formula. Consider a uniform random linear ordering $\pi$ of $V(G)$. For any vertex $v\in V(G)$, let $B_v$ denote the \emph{``bad''} event that $v$ is not the first vertex, but $v$ comes before all vertices adjacent to it. In other words, we have $\pi(v) \neq 1$ and $\pi(v)<\pi(v')$ for every vertex $v'$ adjacent to $v$. Note that if two vertices, $v$ and $v'$, are adjacent, then $B_v$ and $B_{v'}$ are mutually exclusive events, i.e., we have $$\mathbb{P}(B_{v} \wedge B_{v'})=0.$$ Indeed, the inequalities $\pi(v)<\pi(v')$ and $\pi(v')<\pi(v)$ cannot hold simultaneously. Therefore, the vertices $v$ for which a bad event occurs always form an independent set $I$. The linear order $\pi$ is successive if and only if this independent set is empty. By the inclusion-exclusion formula, we have \begin{equation}\label{eq0} \sigma'(G)=\sum_{i=0}^{\alpha} (-1)^i \cdot \sum_{I: |I|=i} \mathbb{P}(\bigwedge_{v \in I} B_{v}). \end{equation} Here, the second sum is taken over all independent sets $I$ of size $i$ in $G$. We also use the convention that empty intersection of events returns the universal event of probability $1$. \smallskip For a given independent set $I$, denote by $N(I)$ the \emph{neighborhood} of $I$, that is, the set of vertices either in $I$ or adjacent to at least one vertex that belongs to $I$. 
Clearly, we have $$|N(I)|=|V(G)|-a_{|I|}=a_0-a_{|I|}.$$ \smallskip We start by evaluating the probability of the event $B_I:=\bigwedge_{v \in I} B_{v}$ for an independent set $I$ of size $i$. Let $\rho$ be an enumeration of $I$, that is, $I=\{\rho(1),\rho(2),\dots,\rho(i)\}$. Consider first the event $C_\rho$ that $B_I$ happens and we also have $\pi(\rho(1))<\pi(\rho(2))< \cdots <\pi(\rho(i))$. Clearly, $C_\rho$ occurs if and only if $\pi^{-1}(1)\notin N(I)$ and $\rho(j)$ is minimal among the vertices in $N(\{\rho(j),\rho(j+1),\dots,\rho(i)\})$ for $1\le j\le i$. These $i+1$ events are mutually independent and we clearly have $$\mathbb P(\pi^{-1}(1)\notin N(I))=\frac{|V(G)\setminus N(I)|}{|V(G)|}=\frac{a_i}{a_0},$$ $$\mathbb P(\rho(j)\hbox{ is minimal in }N(\{\rho(j),\rho(j+1),\dots,\rho(i)\}))$$$$=\frac1{|N(\{\rho(j),\rho(j+1),\dots,\rho(i)\})|}=\frac1{a_0-a_{i-j+1}}.$$ Therefore, we have $$\mathbb P(C_\rho)=\frac{a_i}{a_0}\prod_{j=1}^i\frac1{a_0-a_j}.$$ Now the event $B_I$ is the disjoint union of the events $C_\rho$ where $\rho$ runs over the $i!$ possible enumerations of $I$, so we have \begin{equation}\label{u} \mathbb P(B_I)=i!\frac{a_i}{a_0}\prod_{j=1}^i\frac1{a_0-a_j}. \end{equation} \smallskip As $\mathbb P(B_I)$ does not depend on the independent set $I$ beyond its size $i$, we can evaluate Equation~\ref{eq0} by simply counting the independent sets in $G$ of any given size. The first vertex $v_1$ of an independent set can be any one of the $|V(G)|=a_0$ vertices. After choosing $v_1,\dots,v_j$, the next vertex of an independent set must be outside $N(\{v_1,\dots,v_j\})$, so we have $a_j$ choices. This implies that the number of size $i$ independent sets in $G$ is $$\frac{\prod_{j=0}^{i-1}a_j}{i!}.$$ We had to divide by $i!$, because the vertices of an independent set can be selected in an arbitrary order. Plugging this formula and Equation~\ref{u} into Equation~\ref{eq0} proves the theorem.
\end{proof} \section{Shelling the edges of a graph\\ --Alternative proof of Theorem~\ref{thm1}}\label{sec3} The aim of this section is to give alternative proofs of parts (i) and (ii) of Theorem~\ref{thm1}, by establishing a common generalization of the two statements; see Theorem~\ref{together} below. \smallskip As explained in the Introduction, a shelling of a graph is the same as a successive ordering of the vertices of its line graph. We also noted that the line graphs of complete graphs and complete bipartite graphs are fully regular. Thus, we can apply Theorem~\ref{main} to obtain Corollary~\ref{thm1'} for the number of shellings of $K_n$ and $K_{m,n}$. In this way, however, we obtain \emph{summation} formulas, while Theorem~\ref{thm1} gives much nicer \emph{product} formulas with low degree factors. \smallskip Two distinct \emph{edges} of a graph $G$ are called \emph{adjacent} if they share a vertex. Otherwise, we call them \emph{independent}. A family of pairwise independent edges is called a \emph{matching}. The \emph{matching number} $\nu(G)$ of $G$ is the size of the largest matching in $G$. Matchings in $G$ correspond to independent sets in the line graph $L(G)$ of $G$, so $\nu(G)=\alpha(L(G))$. First, we characterize all graphs whose line graphs are fully regular. \begin{lemma}\label{obs} The line graph of a graph $G$ is fully regular if and only if every edge of $G$ is adjacent to the same number of other edges and every pair of independent edges is connected by the same number of edges. \end{lemma} \begin{proof} If the line graph of $G$ is fully regular with parameters $a_0,a_1,\dots$, then $a_0=|E(G)|$ and for any edge $e$ of $G$, the number of edges in $G$ distinct from and not adjacent to $e$ is $a_1$. Thus, $e$ is adjacent to $d:=a_0-a_1-1$ edges. If $e$ and $e'$ are independent, then the number of edges distinct from both and not adjacent to either of them is $a_2$.
Therefore, we have $a_2=a_0-2-2d+\lambda$, where $\lambda$ is the number of edges connecting $e$ and $e'$ (and, hence, adjacent to both). Thus, we have $\lambda=a_2-a_0+2d+2=a_0-2a_1+a_2$, proving the ``only if'' part of the lemma. \smallskip To show the ``if'' part, we assume that every edge is adjacent to $d$ other edges in $G$ and every pair of independent edges is connected by $\lambda$ edges. We need to show that the line graph of $G$ is fully regular, that is, for any independent set $I$ in the line graph (i.e., for any matching $I$ in $G$), the number of vertices in the line graph at distance at least 2 from $I$ is determined by the size $i:=|I|$. We have $i$ vertices at distance zero. We have $d$ adjacent edges for each edge in $I$, yielding $id$ vertices at distance 1, but we have to subtract from this the edges that we counted twice. Note that no edge is counted more than twice, and that in the line graph any pair of edges that belong to $I$ have exactly $\lambda$ common neighbors. So, in the line graph there are exactly $id-\binom i2\lambda$ vertices at distance $1$ from $I$. Hence, $a_i=|E(G)|-i(d+1)+\binom i2\lambda$, which completes the proof. \end{proof} The main result of this section is the following. \begin{theorem}\label{together} Let $G$ be a graph with matching number $\nu$. Suppose that every edge of $G$ is adjacent to $d$ other edges, and every pair of independent edges is connected by exactly $\lambda$ edges in $G$. The number $N$ of shellings of $G$ satisfies $$\frac N{|E(G)|!}=\frac\nu{\binom{\frac{d+1}{\lambda/2}}{\nu-1}}.$$ \end{theorem} In the last theorem, we used binomial coefficients of the form $a\choose b$, where $a$ is not necessarily an integer. They should be interpreted in the usual way: as a polynomial of $a$ with degree $b$. For $G=K_n$, $n\ge2$, the conditions of the theorem are satisfied with $\nu=\lfloor n/2 \rfloor$, $d=2n-4$, and $\lambda=4$.
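These values can be cross-checked by exhaustive enumeration for small $n$: by Theorem~\ref{thm1}(i), $K_3$ has $6$ shellings and $K_4$ has $\binom42!\cdot 4!/(2\cdot 5!!)=576$. A brute-force Python sketch (ours, purely illustrative):

```python
from itertools import permutations

def shellings(n):
    """Count edge orderings of K_n in which every prefix induces a connected graph."""
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
    count = 0
    for order in permutations(edges):
        covered = set(order[0])
        ok = True
        for u, v in order[1:]:
            # the prefix stays connected iff each new edge touches a covered vertex
            if u in covered or v in covered:
                covered.add(u)
                covered.add(v)
            else:
                ok = False
                break
        if ok:
            count += 1
    return count

print(shellings(3), shellings(4))  # 6 576
```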
Therefore, Theorem~\ref{together} implies that the number $N$ of shellings of the complete graph $K_n$ satisfies $$\frac N{{\binom{n}{2}!}}=\frac{\lfloor n/2\rfloor}{\binom{\frac{2n-3}2}{\lfloor n/2 \rfloor-1}}=\frac{2^{\lfloor n/2\rfloor-1}\lfloor n/2\rfloor!(2n-2\lfloor n/2\rfloor-1)!!}{(2n-3)!!}=\frac{n!}{2 \cdot (2n-3)!!},$$ giving an alternative proof of part (i) of Theorem~\ref{thm1}. \smallskip For $G=K_{m, n}$ with $1\le m\le n$, we have $\nu=m$, $d=m+n-2$, and $\lambda=2$. Hence, the number $N$ of shellings of the complete bipartite graph $K_{m,n}$ satisfies $$\frac N{(mn)!}=\frac m{\binom{m+n-1}{m-1}}=\frac{m+n}{\binom{m+n}{m}},$$ providing an alternative proof of part (ii) of Theorem~\ref{thm1}. \smallskip We need the following identity for binomial coefficients. \begin{lemma}\label{lem_identity} Let $\alpha$ be a non-negative integer and let $\beta$ and $\gamma$ be reals. Then we have $$\sum_{t=0}^\alpha(-1)^t\frac{\binom\alpha t \binom\beta t}{\binom\gamma t} = \frac{\binom{\gamma-\beta}\alpha}{\binom\gamma\alpha},$$ unless $\gamma$ is a non-negative integer smaller than $\alpha$. In the latter case, neither side of the identity is defined. 
\end{lemma} \begin{proof} We start with the following simple equations that hold for non-negative integers $t\le\alpha$ and $v$, respectively: $$\frac{\binom\alpha t}{\binom\gamma t}=\frac{\binom{\gamma-t}{\alpha-t}}{\binom\gamma\alpha},$$ $$\binom uv=(-1)^v\binom{v-u-1}v.$$ Using them we obtain $$(-1)^t\frac{\binom\alpha t \binom\beta t}{\binom\gamma t}=(-1)^t\frac{\binom{\gamma-t}{\alpha-t}\binom\beta t}{\binom\gamma\alpha}=\frac{(-1)^\alpha}{\binom\gamma\alpha}\binom{\alpha-\gamma-1}{\alpha-t}\binom\beta t.$$ Using Vandermonde's identity we obtain $$\sum_{t=0}^\alpha(-1)^t\frac{\binom\alpha t \binom\beta t}{\binom\gamma t}=\frac{(-1)^\alpha}{\binom\gamma\alpha}\sum_{t=0}^\alpha\binom{\alpha-\gamma-1}{\alpha-t}\binom\beta t=\frac{(-1)^\alpha\binom{\alpha+\beta-\gamma-1}\alpha}{\binom\gamma\alpha}=\frac{\binom{\gamma-\beta}\alpha}{\binom\gamma\alpha}.$$ \end{proof} Now we are ready to present the proof of Theorem~\ref{together}. \begin{proof}[Proof of Theorem~\ref{together}:] Let $L(G)$ denote the line graph of $G$. The independence number $\alpha(L(G))$ is equal to the matching number $\nu$ of $G$. By Lemma~\ref{obs}, $L(G)$ is fully regular. Its associated parameters were calculated as $a_j=|E(G)|-j(d+1)+\binom j2\lambda$, for $0\le j\le\nu$. Note that $0=a_\nu=|E(G)|-\nu(d+1)+\binom\nu2\lambda$ and, hence, $a_0=|E(G)|=\nu(d+1)-\binom\nu2\lambda$. The number $N$ of shellings of $G$ is equal to $\sigma(L(G))$.
Therefore, \begin{eqnarray*} \frac N{|E(G)|!}=\sigma'(L(G))&=&\sum_{i=0}^{\nu}\prod_{j=1}^i\frac{-a_j}{a_0-a_j}\\ &=&\sum_{i=0}^{\nu-1}\prod_{j=1}^i\frac{(j(d+1)-\binom j2\lambda)-(\nu(d+1)-\binom\nu2\lambda)}{j(d+1)-\binom j2\lambda}\\ &=&\sum_{i=0}^{\nu-1}(-1)^i\prod_{j=1}^i\frac{(\nu-j)(d+1-(\nu+j-1)\lambda/2)}{j(d+1-(j-1)\lambda/2)}\\ &=&\sum_{i=0}^{\nu-1}(-1)^i\prod_{j=1}^i\frac{(\nu-j)(\frac{d+1}{\lambda/2}-\nu+1-j)}{j(\frac{d+1}{\lambda/2}+1-j)}\\ &=&\sum_{i=0}^{\nu-1}(-1)^i\frac{\binom{\nu-1}i\binom{\frac{d+1}{\lambda/2}-\nu}i}{\binom{\frac{d+1}{\lambda/2}}i}\\ &=&\frac{\binom{\nu}{\nu-1}}{\binom{\frac{d+1}{\lambda/2}}{\nu-1}}=\frac\nu{\binom{\frac{d+1}{\lambda/2}}{\nu-1}}, \end{eqnarray*} where the first line comes from Theorem~\ref{main}. The second line was obtained by substituting the values of $a_j$ as computed above and removing the vanishing summand for $i=\nu$. We used Lemma~\ref{lem_identity} to obtain the last line. This completes the proof of Theorem~\ref{together}. \end{proof} \section{Complete 3-uniform 3-partite hypergraphs\\ --Proof of Theorem~\ref{new}}\label{sec4} For any positive integers $m$, $n$, and $p$, let $K_{m,n,p}$ denote the complete 3-uniform 3-partite hypergraph with $m$, $n$, and $p$ vertices in its three vertex classes, and let $G=G_{m,n,p}$ denote the line graph of $K_{m,n,p}$. As we have seen in the Introduction, $G$ is a fully regular graph with parameters $a_i=(m-i)(n-i)(p-i)$. Theorem~\ref{main} gives a summation formula for $\sigma(G)$, the number of successive orderings of the vertices of $G$. In this section, we prove Theorem~\ref{new}, which is a nice product formula for the same quantity. \smallskip We need some notation. The parameters $a_i$ of the fully regular graph $G$ are defined \emph{a priori} only for $i\le\alpha(G)=\min(m,n,p)$. However, here we define $$a_i:=(m-i)(n-i)(p-i),$$ for all $i$.
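Before working towards the proof, the two formulas can be compared numerically. The Python sketch below (ours, illustrative only) checks, for small $m$, $n$, $p$ with $p=\min(m,n,p)$ (so that no zero factors occur), that the summation formula of Theorem~\ref{main} with $a_i=(m-i)(n-i)(p-i)$ agrees with the second product formula of Theorem~\ref{new}:

```python
from fractions import Fraction
from math import factorial

def sigma_sum(m, n, p):
    """sigma(G_{m,n,p}) via the summation formula, with a_i = (m-i)(n-i)(p-i)."""
    a = lambda i: (m - i) * (n - i) * (p - i)
    total = Fraction(0)
    for i in range(min(m, n, p) + 1):
        term = Fraction(1)
        for j in range(1, i + 1):
            term *= Fraction(-a(j), a(0) - a(j))
        total += term
    return factorial(m * n * p) * total

def sigma_prod(m, n, p):
    """sigma(G_{m,n,p}) via the second product formula, assuming p = min(m, n, p)."""
    b = lambda i: m * n + n * p + m * p - i * (m + n + p - i)
    num = Fraction(1)
    for i in range(m, m + p + 1):
        num *= b(i)
    den = Fraction(m * n * p)
    for i in range(1, p):
        den *= b(i)
    return factorial(m * n * p) * num / den

for (m, n, p) in [(2, 2, 2), (3, 2, 2), (3, 3, 2), (4, 3, 2)]:
    assert sigma_sum(m, n, p) == sigma_prod(m, n, p)
```

For instance, for $m=n=p=2$ both formulas give $\sigma'(G)=6/7$, hence $\sigma(G)=8!\cdot 6/7$.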
Notice that the parameters $b_i$ defined in the theorem satisfy $$b_i=\frac{a_0-a_i}i.$$ For fixed $i$, $a_i$ and $b_i$ are multilinear polynomials of the parameters $m$, $n$, and $p$. \smallskip In Theorem~\ref{new}, zero factors show up if $b_i=0$ for some $i$, which may occasionally happen (for instance, in the case $m=8$, $n=p=2$, we have $b_6=0$). Looking at the definition of $b_i$, one can see that this happens for at most two values of $i$ (symmetric about $(m+n+p)/2$), and they must be smaller than $\max(m,n,p)$, but larger than the sum of the other two parameters among $m$, $n$, and $p$. This is a nuisance, which can be avoided by using the second formula in the theorem and choosing $p$ not to be the single largest of the three parameters. Note also that when zero factors do show up in one or both of the fractions in the theorem, then they have the same number of them (one or two) in both the numerator and the denominator. \smallskip We start with a polynomial equality. \begin{lemma}\label{poly} For positive integers $m$, $n$, and $p$ and the numbers $a_j$, $b_j$ depending on them, we have $$\sum_{i=0}^{p-1}\frac{(-1)^i}{i!}\prod_{j=1}^ia_j\prod_{j=i+1}^{p-1}b_j=p\prod_{j=m+1}^{m+p-1}b_j.$$ \end{lemma} \begin{proof} Let $P(m,n,p)$ and $Q(m,n,p)$ denote the left-hand side and the right-hand side, respectively, of the equation to be verified. Recall that $a_j$ and $b_j$ are multilinear polynomials of $m$, $n$, and $p$. Thus, for a fixed $p$, both $P$ and $Q$ are polynomials in $m$ and $n$, whose degree is at most $p-1$ in either variable. \smallskip We prove the equality $P(m,n,p)=Q(m,n,p)$ by induction on $p$. We assume that the equation holds for any positive integer less than $p$ and for every $m$ and $n$, and we will prove that it also holds for $p$. For fixed $m$ and $p$, both $P(m,n,p)$ and $Q(m,n,p)$ are polynomials in $n$ of degree less than $p$. So, it is enough to find $p$ distinct values of $n$, where these polynomials agree.
We will do that for $n=0,1,\dots,p-1$. \smallskip In case $n=0$, we have $a_0=mnp=0$ and $b_j=(a_0-a_j)/j=-a_j/j$. For any $0\le i\le p-1$, we have $$\frac{(-1)^i}{i!}\prod_{j=1}^ia_j\prod_{j=i+1}^{p-1}b_j=\prod_{j=1}^ib_j\prod_{j=i+1}^{p-1}b_j=\prod_{j=1}^{p-1}b_j.$$ Therefore, $$P(m,0,p)=\sum_{i=0}^{p-1}\prod_{j=1}^{p-1}b_j=p\prod_{j=1}^{p-1}b_j=p\prod_{j=m+1}^{m+p-1}b_j=Q(m,0,p),$$ as required. Here we used that $b_j=b_{m+n+p-j}$, which is clear from our formula for $b_j$. \smallskip Suppose now that $1\le n<p$. We have $a_n=0$, so all terms in $P(m,n,p)$ corresponding to $i\ge n$ vanish. We can collect $\prod_{j=n}^{p-1}b_j$ from the remaining terms to obtain $$P(m,n,p)=\prod_{j=n}^{p-1}b_j\sum_{i=0}^{n-1}\frac{(-1)^i}{i!}\prod_{j=1}^ia_j\prod_{j=i+1}^{n-1}b_j=\prod_{j=n}^{p-1}b_jP(m,p,n),$$ where we used that permuting the parameters $m$, $n$, and $p$ (in this case, switching the roles of $n$ and $p$) has no effect on the values $a_j$ and $b_j$. \smallskip We have $P(m,p,n)=Q(m,p,n)$, by the induction hypothesis. This yields \begin{eqnarray*} P(m,n,p)&=&Q(m,p,n)\prod_{j=n}^{p-1}b_j\\ &=&n\prod_{j={m+1}}^{m+n-1}b_j\prod_{j=m+n+1}^{m+p}b_j\\ &=&n\frac{b_{m+p}}{b_{m+n}}\prod_{j=m+1}^{m+p-1}b_j\\ &=&p\prod_{j=m+1}^{m+p-1}b_j=Q(m,n,p). \end{eqnarray*} In the second line, we used that $b_j=b_{m+n+p-j}$ and in the last line we used the facts $b_{m+p}=mp$ and $b_{m+n}=mn.$ Since we found $p$ distinct values of $n$ for which $P(m,n,p)=Q(m,n,p)$, the two polynomials must agree for all $n$. This completes the induction step and the proof of the lemma. \end{proof} \begin{proof}[Proof of Theorem~\ref{new}] Notice that the two expressions for $\sigma(G)$ in the theorem are equal. Moreover, the second fraction can be obtained from the first by cancelling equal terms. This is immediate using the symmetry $b_i=b_{m+n+p-i}$, which implies $\prod_{i=1}^{n-1}b_i=\prod_{i=m+p+1}^{m+n+p-1}b_i$.
It remains to prove that $\sigma'(G_{m,n,p})=\prod_{i=m}^{m+p}b_i/(mnp\prod_{i=1}^{p-1}b_i)$, as implied by the second expression in the theorem. Using the symmetry of the first expression, we can assume without loss of generality that $p\le m$ and $p\le n$. With this assumption, there are no zero factors in the fraction to worry about, and we know that the independence number of $G$ is $p$. \smallskip By Theorem~\ref{main}, we have \begin{eqnarray*} \sigma'(G)&=&\sum_{i=0}^p\prod_{j=1}^i\frac{-a_j}{a_0-a_j}\\ &=&\sum_{i=0}^{p-1}\frac{(-1)^i}{i!}\prod_{j=1}^i\frac{a_j}{b_j}\\ &=&\frac{\sum_{i=0}^{p-1}\frac{(-1)^i}{i!}\prod_{j=1}^ia_j\prod_{j=i+1}^{p-1}b_j}{\prod_{j=1}^{p-1}b_j}. \end{eqnarray*} By Lemma~\ref{poly}, the numerator of the last expression can be written as $p\prod_{j=m+1}^{m+p-1}b_j$. Substituting $b_m=np$ and $b_{m+p}=mp$, the theorem follows. \end{proof} \section{Comments and open problems} \noindent{\bf A.} Theorem~\ref{together} appears to be more general than its special cases, the two parts of Theorem~\ref{thm1}. However, it is not hard to show by a case analysis that the only connected graphs with fully regular line graphs are complete graphs, complete bipartite graphs, and the cycle $C_5$. \smallskip \noindent{\bf B.} Every $d$-uniform hypergraph $H$ (more precisely, the closure of its hyperedges under containment) can be regarded as a $(d-1)$-dimensional simplicial complex. An enumeration $E_1,E_2,\dots, E_n$ of the hyperedges of $H$ is called a \emph{shelling} if, for every $1\le i<j\le n$, there exists $1\le k<j$ with $E_i\cap E_j\subseteq E_k\cap E_j$ and $|E_k\cap E_j|=d-1$. See \cite{Ziegler}. In the case $d=3$, this means that for every $E_j$ with $j>1$, (i) there is a hyperedge $E_k$ preceding it which meets $E_j$ in two points, and (ii) either $E_j\not\subseteq\cup_{i<j}E_i$ or there are two preceding hyperedges that meet $E_j$ in distinct point pairs.
If only condition (i) is satisfied, then the ordering is called a \emph{weak shelling}. As we have remarked in the Introduction, for $d>2$ it is not true that every successive vertex ordering of the line graph of a $d$-uniform hypergraph $H$ corresponds to a shelling of $H$. Nevertheless, every complete $d$-uniform hypergraph and every complete $d$-uniform $d$-partite hypergraph admits a shelling. It would be interesting to compute the number of shellings and the number of weak shellings of these hypergraphs, but Theorem~\ref{main} is not applicable here. Unfortunately, we have no evidence that either of these questions has an answer that can be expressed by a nice-looking summation or product formula. Observe that every weak shelling of the complete 3-uniform 3-partite hypergraph $K_{m,n,p}$ with $m, n,$ and $p$ vertices in its classes corresponds to an arrangement of the first $mnp$ positive integers in the 3-dimensional $m\times n\times p$ matrix such that for each entry larger than $1$, there is a smaller entry in one of the 3 \emph{rows} passing through it, parallel to a side of the matrix. Therefore, counting the number of weak shellings in this case can be regarded as another natural 3-dimensional generalization of the question described at the beginning of the Introduction, which is different from the generalization given in Theorem~\ref{new}. (Note that the successive vertex orderings of the line graph of $K_{m,n,p}$, which were counted by Theorem~\ref{new}, correspond to arrangements of the first $mnp$ integers in the same matrix such that for each entry larger than $1$, there is a smaller entry in one of the 3 coordinate \emph{planes} passing through it.) \smallskip \noindent{\bf C.} Let $K_n^{(3)}$ stand for the complete 3-uniform hypergraph on $n$ vertices. We saw that the line graph $L(K_n^{(3)})$ is fully regular with parameters $\alpha=\lfloor n/3\rfloor$ and $a_i=\binom{n-3i}3$. 
Therefore, Theorem~\ref{main} gives a summation formula for $\sigma'(L(K_n^{(3)}))$. In the conjecture below, we propose a nice product formula instead. \begin{conjecture} $$\sigma'(L(K_n^{(3)}))=\left\lfloor\frac n3\right\rfloor\cdot\frac{\prod\limits_{k=n+1\atop3\nmid k}^{n+\lfloor n/2\rfloor-2}c_k}{\prod\limits_{k=3\atop3\mid k}^{n-3}c_k},$$ where $$c_k=6\frac{\binom n3-\binom{n-k}3}k=3n^2-6n+2-k(3n-3-k).$$ \end{conjecture} We verified this conjecture for all $n\le100$. It would be interesting to find similar product formulas for the number of successive vertex orderings of the line graphs of complete $d$-uniform or complete $d$-uniform $d$-partite hypergraphs also for $d>3$. \smallskip \noindent {\bf D.} Consider the $3$-uniform ``bipartite'' hypergraph $K_{m, n}^{(1,2)}$ on the vertex set $V = V_1 \cup V_2$ with $|V_1|=m$, $|V_2|=n$, consisting of all subsets $e\subseteq V$ such that $|e \cap V_1| =1$ and $|e \cap V_2|=2$. We showed in the Introduction that its line graph is fully regular with parameters $\alpha=\min\{m, n/2\}$ and $a_i=(m-i)\binom{n-2i}{2}$. Therefore, Theorem~\ref{main} gives a summation formula for $\sigma'(L(K_{m,n}^{(1,2)}))$. As in the case of $K_n^{(3)}$, we conjecture that the following product formula holds. \begin{conjecture} Let $d_i=(a_0-a_i)/i$. Then $$\sigma'(L(K_{m,n}^{(1,2)}))=m \cdot \prod_{i=1}^{m-1} \frac{mn-\binom{m+1}{2}+\binom{i}{2}}{d_i},$$ where the fractions should be evaluated disregarding all zero factors in both the numerator and the denominator. \end{conjecture} We verified this conjecture for $m, n \le 50$. \begin{thebibliography}{99} \bibitem{Berge} C. Berge: \emph{Hypergraphs: Combinatorics of Finite Sets}, North-Holland, Amsterdam, 1989. \bibitem{Bermond} J. C. Bermond, M. C. Heydemann, and D. Sotteau: Line graphs of hypergraphs I, \emph{Discrete Mathematics} {\bf 18} no. 3 (1977), 235--241. \bibitem{GaoPeng} J. Gao and J. Peng: Counting shellings of complete bipartite graphs and trees, arXiv:1809.10263. \bibitem{Pa18} S.
Palcoux: Number of collinear ways to fill a grid, MathOverflow, https://mathoverflow.net/questions/297385/number-of-collinear-ways-to-fill-a-grid \bibitem{Sch19} Problems of the 2019 Mikl\'os Schweitzer Memorial Competition in Mathematics, http://www.math.u-szeged.hu/$\sim$mmaroti/schweitzer/schweitzer-2019-eng.pdf \bibitem{Stanley} R. Stanley: Counting ``connected'' edge orderings (shellings) of the complete graph, MathOverflow, https://mathoverflow.net/questions/2974 \bibitem{Tu77} P. Tur\'an: An unusual life, Ramanujan, I and II, \emph{K\"oz\'episk. Mat. Lapok} {\bf 55} (1977), 49--54 and 97--106 [in Hungarian]. Also: Ein sonderbarer Lebensweg, Ramanujan, in: \emph{Grosse Augenblicke aus der Geschichte der Mathematik (R. Freud, Hrsg.)}, B.I. Wissenschaftsverlag, Mannheim, 1990. \bibitem{Ziegler} G. Ziegler: \emph{Lectures on Polytopes}, Graduate Texts in Mathematics \textbf{152}, Springer, Berlin, 1995 (Chapter 8). \end{thebibliography} \end{document}
2206.13579v1
http://arxiv.org/abs/2206.13579v1
Hermitian-Yang-Mills connections on some complete non-compact Kähler manifolds
\documentclass[12pt,reqno]{amsart} \usepackage[shortlabels]{enumitem} \usepackage{esint} \usepackage{physics} \usepackage{relsize} \usepackage[backref]{hyperref} \usepackage{mathtools} \usepackage{amsfonts} \usepackage[all]{xy} \usepackage{geometry} \geometry{margin=2.5cm} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \theoremstyle{corollary} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{conjecture} \newtheorem{conjecture}{Conjecture} \newtheorem*{claim}{Claim} \theoremstyle{assumption} \newtheorem{assumption}{Assumption} \theoremstyle{proposition} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \numberwithin{equation}{section} \newcommand{\olsi}[1]{\,\overline{\!{#1}}} \newcommand{\ols}[1]{\mskip.5\thinmuskip\overline{\mskip-.5\thinmuskip {#1} \mskip-.5\thinmuskip}\mskip.5\thinmuskip} \newcommand{\ii}{\ensuremath{\sqrt{-1}}} \newcommand{\pp}{\bar\partial} \everymath{\displaystyle} \date{\today} \begin{document} \title[\resizebox{5.5in}{!}{Hermitian-Yang-Mills connections on some complete non-compact K\"ahler manifolds}]{Hermitian-Yang-Mills connections on some complete non-compact K\"ahler manifolds} \author{Junsheng Zhang} \address{Department of Mathematics\\ University of California\\ Berkeley, CA, USA, 94720\\} \curraddr{} \email{[email protected]} \thanks{} \begin{abstract}We give an algebraic criterion for the existence of projectively Hermitian-Yang-Mills metrics on a holomorphic vector bundle $E$ over some complete non-compact K\"ahler manifolds $(X,\omega)$, where $X$ is the complement of a divisor in a compact K\"ahler manifold and we impose some conditions on the cohomology class and the asymptotic behaviour of the K\"ahler form $\omega$. We introduce the notion of stability with respect to a pair of $(1,1)$-classes which generalizes the standard slope stability. 
We prove that this new stability condition is both sufficient and necessary for the existence of projectively Hermitian-Yang-Mills metrics in our setting. \end{abstract} \maketitle \section{Introduction} The celebrated Donaldson-Uhlenbeck-Yau theorem \cite{donaldson1985,uhlenbeck-yau} says that on a compact K\"ahler manifold $(X,\omega)$, an irreducible holomorphic vector bundle $E$ admits a Hermitian-Yang-Mills (HYM) metric if and only if it is $\omega$-stable. After this pioneering work, there have been several results aiming to generalize this to non-compact K\"ahler manifolds \cite{simpson,bando,ni-ren,ni,jacob-walpuski,mochizuki}. A key issue is to understand what role stability plays in the existence of projectively Hermitian-Yang-Mills (PHYM) metrics. An interesting special case in the non-compact setting is when both $X$ and $E$ can be compactified, i.e. $X$ is the complement of a divisor in a compact K\"ahler manifold $\olsi{X}$ and $E$ is the restriction of a holomorphic vector bundle $\olsi{E}$ on $\olsi{X}$, and when the K\"ahler metric has a known asymptotic behaviour. Under these assumptions, one wants to build a relation between the existence of PHYM metrics on $E$ and some algebraic data on $\olsi E$. In this paper, we prove a result in this setting. Let $\olsi{X}$ be an $n$-dimensional ($n\geq 2$) compact K\"ahler manifold, $D$ be a smooth divisor and $X=\olsi{X}\backslash D$ denote the complement of $D$ in $\olsi{X}$. Let $\olsi{E}$ be a holomorphic vector bundle on $\olsi{X}$, which we always assume to be irreducible unless otherwise mentioned. Let $E$ and $\olsi{E}|_D$ denote its restrictions to $X$ and $D$, respectively. Suppose the normal bundle $N_D$ of $D$ in $\olsi X$ is ample. On $X$ we consider complete K\"ahler metrics $\omega=\omega_0+dd^c\varphi$ satisfying \textit{Assumption 1} (see Section \ref{assumption on the asymptotic behaviour} for a precise definition).
Roughly speaking, we assume $\omega_0$ is a smooth closed $(1,1)$-form on $\olsi{X}$ vanishing when restricted to $D$, $\varphi$ is a smooth function on $X$, and $\omega$ is asymptotic to some model K\"ahler metrics given explicitly on the punctured disc bundle of $N_D$. Typical examples satisfying these assumptions are Calabi-Yau metrics on the complement of an anticanonical divisor of a Fano manifold and its generalizations \cite{tian-yau1,hsvz1,hsvz2} (see Section \ref{examples} for a sketch). To state our theorem, we need two ingredients: the existence of a good initial hermitian metric on $E$ and a definition of stability with respect to a pair of classes. The following lemma is proved in Section \ref{existence of good initial metric}. \begin{lemma}\label{good initial metric} If $\olsi{E}|_{D}$ is $c_1(N_D)$-polystable, then there is a hermitian metric $H_0$ on $E$ satisfying: \begin{itemize} \item [(1).] there is a hermitian metric $\olsi{H}_0$ on $\olsi{E}$ and a function $f\in C^{\infty}(X)$ such that $H_0={e}^f\olsi{H}_0$, \item [(2).] $|\Lambda_{\omega}F_{H_0}|=O(r^{-N_0})$, where $r$ denotes the distance function to a fixed point induced by the metric $\omega$ and $N_0$ is the number in \textit{Assumption 1}-(3). \end{itemize} \end{lemma} We call $H_0$ conformal to a smooth extendable metric if it satisfies the first condition in Lemma \ref{good initial metric}. A key feature we use in this paper is that the induced metric on $\operatorname {End}(E)$ is conformally invariant with respect to metrics on $E$. Therefore the two hermitian metrics $H_0$ and $\olsi{H}_0$ induce the same metric on $\operatorname{End}(E)$, and this is the norm used in Lemma \ref{good initial metric}-(2).
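As a quick sanity check of this conformal invariance, the following finite-dimensional sketch verifies it numerically at a single fibre; the fibre dimension and the randomly generated metric $H$ and endomorphism $A$ are purely illustrative and not part of the setting above.

```python
import numpy as np

rng = np.random.default_rng(0)

def end_norm_sq(A, H):
    # |A|_H^2 = Re tr(A A^{*H}), where A^{*H} = H^{-1} A^dagger H is the
    # adjoint of the endomorphism A with respect to the hermitian metric H
    A_star = np.linalg.inv(H) @ A.conj().T @ H
    return np.trace(A @ A_star).real

# a hermitian positive-definite metric H on a fibre C^3 (random example)
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = B.conj().T @ B + 3 * np.eye(3)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# rescaling H -> e^f H leaves H^{-1} A^dagger H, hence the norm, unchanged
print(abs(end_norm_sq(A, H) - end_norm_sq(A, np.exp(1.7) * H)) < 1e-8)
```

The cancellation of the conformal factor in $H^{-1}A^{\dagger}H$ is exactly why $H_0$ and $\olsi{H}_0$ induce the same norm on $\operatorname{End}(E)$.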
Then naturally (following \cite{simpson}) one wants to find a PHYM metric in the following set \begin{equation}\label{define of P_H} \mathcal P_{H_0}=\left\{H_0 e^s: s\in C^{\infty}(X,\ii \mathfrak{su}(E,H_0)), \left\Vert s\right \Vert_{L^{\infty}}+\left\Vert \olsi{\partial}s\right\Vert_{L^{2}}< \infty\right\}. \end{equation} Here we use $\ii\mathfrak {su}(E,H_0)$ to denote the subbundle of $\operatorname{End}(E)$ consisting of the trace-free and self-adjoint endomorphisms with respect to $H_0$. Though $H_0$ in general is not unique, we will show that if we fix the induced metric on $\det E$, then the set $\mathcal P_{H_0}$ is uniquely determined as long as $H_0$ satisfies conditions in Lemma \ref{good initial metric} (see Proposition \ref{uniqueness of P_H}). Next we define stability with respect to a pair of (1,1)-classes, which generalizes the standard slope stability defined for K\"ahler classes in \cite[Chapter 5]{kobayashi-book}. In the following, we use $\mu_{\alpha}(S)$ to denote the slope of a torsion-free coherent sheaf with respect to a class $\alpha\in H^{1,1}(M)$ on a compact K\"ahler manifold $M$ (see Section \ref{definition of stability with respect to a pair} for a more detailed discussion), i.e. \begin{equation*} \mu_{\alpha}(S):=\frac{1}{\rank (S)}\int_{M} c_1(\det S)\wedge \alpha^{n-1}. \end{equation*} \begin{definition}\label{definition of alpha beta stability} Let $M$ be a compact K\"ahler manifold, $\alpha,\beta\in H^{1,1}(M)$ be two classes, $E$ be a holomorphic vector bundle over $M$. We say $E$ is $(\alpha,\beta)$-stable if every coherent reflexive subsheaf $S$ of $E$ with $0<\rank (S)<\rank (E)$ satisfies either of the following conditions: \begin{itemize} \item [(a)] $\mu_{\alpha}(S)< \mu_{\alpha}(E)$, or \item [(b)] $\mu_{\alpha}(S)= \mu_{\alpha}(E)$ and $\mu_{\beta}(S)<\mu_{\beta}(E)$. 
\end{itemize} \end{definition} The main result of this paper is \begin{theorem}\label{main theorem} Let $\omega=\omega_0+dd^c\varphi$ be a K\"ahler metric satisfying \textit{Assumption 1} in Section \ref{assumption on the asymptotic behaviour}. Suppose $\olsi{E}|_{D}$ is $c_1(N_D)$-polystable and $\mathcal P_{H_0}$ is defined by (\ref{define of P_H}). Then there exists a unique $\omega$-PHYM metric in $\mathcal P_{H_0}$ if and only if $\olsi{E}$ is $\left(c_1(D),[\omega_0]\right)$-stable. \end{theorem} By the definition of $(\alpha,\beta)$-stability in Definition \ref{definition of alpha beta stability}, we have the following consequence. \begin{corollary} Suppose $\omega$ and $\olsi{E}$ satisfy the conditions in Theorem \ref{main theorem}. Then we have \begin{itemize} \item [(1).] if $[\omega_0]=0$, then there exists a unique $\omega$-PHYM metric in $\mathcal P_{H_0}$ if and only if $\olsi{E}$ is $c_1(D)$-stable; \item [(2).] if $\olsi{E}|_{D}$ is $c_1(N_D)$-stable, then there exists a unique $\omega$-PHYM metric in $\mathcal P_{H_0}$. \end{itemize} \end{corollary} Now let us give a brief outline of the proof of Theorem \ref{main theorem}. For the ``if'' direction, we follow the argument in \cite{simpson,mochizuki} by solving Dirichlet problems on a sequence of domains exhausting $X$. A key issue here is to prove a uniform $C^0$-estimate. For this we rely on a weighted Sobolev inequality in \cite[Proposition 2.1]{tian-yau1} and Lemma \ref{key observation}, which builds a relation between weakly holomorphic projection maps over $X$ and coherent subsheaves over $\olsi X$. For the ``only if'' direction, we use integration by parts to show that the curvature form on $E$ can be used to compute the degree of $\olsi E$ with respect to $[\omega_0]$ (see Lemma \ref{degree coincide}). For both directions, the asymptotic behaviour of the K\"ahler metric $\omega$ plays an essential role.
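Conditions (a) and (b) of Definition \ref{definition of alpha beta stability} amount to requiring that the slope pair $\left(\mu_{\alpha}(S),\mu_{\beta}(S)\right)$ of every proper subsheaf be lexicographically smaller than that of $E$. The following toy sketch makes this comparison explicit; the integer inputs below are hypothetical stand-ins for the intersection numbers $\int c_1\wedge\alpha^{n-1}$ and $\int c_1\wedge\beta^{n-1}$, not data from any actual bundle.

```python
from fractions import Fraction

def slope_pair(deg_alpha, deg_beta, rank):
    # slope pair (mu_alpha, mu_beta) of a sheaf, given the two degrees
    # (hypothetical integer stand-ins for c_1.alpha^{n-1} and c_1.beta^{n-1})
    return (Fraction(deg_alpha, rank), Fraction(deg_beta, rank))

def is_alpha_beta_stable(E, subsheaves):
    # E and every S are triples (deg_alpha, deg_beta, rank); conditions
    # (a)/(b) say exactly: slope pair of S < slope pair of E, lexicographically
    mu_E = slope_pair(*E)
    return all(slope_pair(*S) < mu_E for S in subsheaves)

# mu(E) = (2, 1); the second subsheaf ties on mu_alpha and loses on mu_beta,
# i.e. it is excluded by case (b) of the definition
print(is_alpha_beta_stable((4, 2, 2), [(1, 3, 1), (2, 0, 1)]))  # True
print(is_alpha_beta_stable((4, 2, 2), [(2, 1, 1)]))             # False
```

In particular, when $[\omega_0]=0$ the second coordinate never decides, which recovers the usual $c_1(D)$-stability of the corollary above.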
Let us now compare Theorem \ref{main theorem} with existing results in the literature. In \cite{simpson} and \cite{mochizuki}, by assuming some conditions on the base K\"ahler manifold $(X,\omega)$ and an initial hermitian metric on $E$, it was proved that for an irreducible vector bundle $E$ the existence of a PHYM metric is equivalent to a stability condition called analytic stability. In our case, since we assume that $E$ has a compactification $\olsi{E}$ on $\olsi{X}$, the existence of good initial metrics is guaranteed by the polystability assumption on $\olsi{E}|_D$. Moreover, the stability we use in Theorem \ref{main theorem} is for $\olsi{E}$, which is purely algebraic, i.e. independent of the choice of metrics. In \cite{bando}, for asymptotically conical K\"ahler metrics on $X$, it was proved that if $\olsi{E}|_D$ is $c_1(N_D)$-polystable, then there exist PHYM metrics on $E$. No extra stability condition is needed in this case. Therefore the necessity of stability conditions depends on the geometry of $(X,\omega)$ at infinity. Another typical example of such a phenomenon is the problem of the existence of bounded solutions of the Poisson equation on noncompact manifolds. See Section \ref{relation with some results} for a brief discussion. The paper is organized as follows. In Section \ref{assumption on the asymptotic behaviour}, we discuss the assumptions on the K\"ahler manifold $(X,\omega)$ and prove a weighted mean value inequality for nonnegative almost subharmonic functions. In Section \ref{basics on vector bundle}, we give a brief review of some standard results for hermitian holomorphic vector bundles and give a detailed discussion of the $(\alpha,\beta)$-stability used in Theorem \ref{main theorem}. In Section \ref{existence of good initial metric}, Lemma \ref{good initial metric} is proved and we also show that the assumption in Lemma \ref{good initial metric} is necessary.
In Section \ref{proof of the main theorem}, we prove Theorem \ref{main theorem} and give an example which does not satisfy the stability assumption. In Section \ref{relation with some results}, we discuss some other results on the existence of PHYM metrics. In Section \ref{examples}, we discuss some Calabi-Yau metrics satisfying \textit{Assumption 1}. In Section \ref{general normal bundles}, we prove a counterpart of Theorem \ref{main theorem} in a different setting where $\olsi{X}$ is a compact K\"ahler surface and $c_1(N_D)$ is trivial. In Section \ref{open problems}, we discuss some problems for further study. \subsection*{Notations and conventions} \begin{itemize} \item $d^c=\frac{\ii}{2}(-\partial+\pp)$, so $dd^c=\ii \partial\pp$. \item $\Delta=\ii \Lambda\pp\partial$, so in local normal coordinates $\Delta f=-\sum_{i=1}^{n}\frac{\partial^2f}{\partial z_i\partial \bar{z_i}}$. \item $B_r(p)$ denotes the geodesic ball centered at $p$ with radius $r$ and if the basepoint $p$ is clear from the context, we will just write it as $B_r$. \item In this paper, we identify a holomorphic vector bundle with the sheaf formed by its holomorphic sections. \item When we integrate on a Riemannian manifold $(M,g)$, typically we will omit the volume element $d V_g$. \item Let $(M,\omega)$ be a K\"ahler manifold and $(E,H)$ be a hermitian holomorphic vector bundle over $M$. We use $C^{\infty}(M,E)$ to denote smooth sections of $E$; \begin{equation*} \text{$W^{k,p}(M,E;\omega,H)$ (respectively $W_{loc}^{k,p}(M,E;\omega,H)$)} \end{equation*} to denote sections of $E$ which are $W^{k,p}$ (respectively $W_{loc}^{k,p}$) with respect to the metric $\omega$ and $H$. If bundles or metrics are clear from the text, we will omit them. \end{itemize} \subsection*{Acknowledgements} The author wants to express his deep and sincere gratitude to his advisor Song Sun, for the patient guidance, inspiring discussions and valuable suggestions. 
He thanks Charles Cifarelli for careful reading of the draft and numerous suggestions. He thanks Sean Paul for his interest in this work. The author is partially supported by NSF grant DMS-1810700 and NSF grant DMS-2004261. \section{On the asymptotic behaviour of $\omega$}\label{assumption on the asymptotic behaviour} As mentioned in the introduction, the asymptotic behaviour of the K\"ahler metric on the base manifold is crucial to make the argument in this paper work. In this section we will discuss these assumptions. Let $\olsi{X}$ be an $n$-dimensional ($n\geq 2$) compact K\"ahler manifold, $D$ be a smooth divisor and $X=\olsi{X}\backslash D$ denote the complement of $D$ in $\olsi{X}$. Let $L_D$ denote the holomorphic line bundle determined by $D$. From now on, we assume the normal bundle of $D$, i.e. $N_D=L_D|_D$, is ample unless otherwise mentioned. Then we know that $c_1(D)$ is nef and big. We fix a hermitian metric $h_D$ on $N_D$ such that $$\omega_D:=\ii \Theta_{h_D}$$ is a K\"ahler form on $D$, where $\Theta_{h_D}$ denotes the curvature of $h_D$. Let $\mathcal D$ (respectively $\mathcal C$) denote the (respectively punctured) disc bundle of $N_D$ under the metric $h_D$, i.e. \begin{equation}\label{definition of model space} \begin{aligned} \mathcal C&:=\left\{\xi\in N_D:0<\left|\xi\right|_{h_D}\leq1/2\right\},\\ \mathcal D&:=\left\{\xi\in N_D:\left|\xi\right|_{h_D}\leq1\right\}. \end{aligned} \end{equation} We are mainly interested in the region where $|\xi|_{h_D}$ is small, which will be viewed as a model of $X$ at infinity. Then we have a well-defined positive smooth function on $\mathcal C$ \begin{equation}\label{definition of t} t=-\log|\xi|^2_{h_D} \end{equation} such that $\ii\partial\pp t=\omega_D$, where using the obvious projection map $p:\mathcal C\rightarrow D$, we view $\omega_D$ as a form on $\mathcal C$.
Then for every smooth function $F:(0,\infty)\longrightarrow \mathbb R$ with $F'>0$ and $F''>0$, \begin{equation}\label{explicit expression of kahler form} \omega_{ F}=\ii\partial \pp F(t)=F'(t)\ii\partial\pp t+F''(t)\ii\partial t\wedge \pp t \end{equation} defines a K\"ahler form on $\mathcal C$. Let $g_{F}$ denote the corresponding Riemannian metric and $r_{F}$ denote the induced distance function to a fixed point $p$. We need a diffeomorphism to identify a neighborhood of $D$ in $N_D$ with a neighborhood of $D$ in $\olsi{X}$. For this, we use the following definition introduced by Conlon and Hein in \cite[Definition 4.5]{conlon-hein2}. \begin{definition}\label{definition of expoenntial map} An exponential-type map is a diffeomorphism $\Phi$ from a neighborhood of $D$ in $N_D$ to a neighborhood of $D$ in $\olsi{X}$ such that \begin{itemize} \item [(1)] $\Phi(p)=p$ for all $p\in D$, \item [(2)] $d\Phi_p$ is complex linear for all $p\in D$, \item [(3)] $\pi(d\Phi_p(v))=v$ for all $p\in D$ and $v\in N_{D,p}\subset T^{1,0}_pN_D$, where $\pi$ denotes the projection $T^{1,0}_p\olsi{X}\rightarrow T^{1,0}_p\olsi{X}/T^{1,0}_pD=N_{D,p}$. \end{itemize} \end{definition} Now we can state the assumptions for the K\"ahler metric $\omega$ on $X$. We consider a special class of potentials: \begin{equation}\label{conditions on potentials} \mathcal H:=\left\{F(t):F(t)=A t^a \text{ for some constant $A>0$ and $a\in \Big(1,\frac{n}{n-1}\Big]$}\right\} \end{equation} \begin{assumption} \label{assumption on kahler metrics} Let $\omega$ be a K\"ahler form on $X$ and $g$ be the corresponding Riemannian metric. 
We assume that \begin{itemize} \item [(1)] the sectional curvature of $g$ is bounded, \item [(2)] $\omega$ can be written as $\omega_0+\ii\partial\pp \varphi$, where $\varphi$ is a smooth function on $X$ and $\omega_0$ is a smooth $(1,1)$-form on $\olsi{X}$ with $\omega_0|_D=0$, \item [(3)] there exists an exponential-type map $\Phi$ from a neighborhood of $D$ in $N_D$ to a neighborhood of $D$ in $\olsi{X}$ and a potential $F\in \mathcal H$ such that \begin{equation}\label{assumption on asymptotics} \left|\Phi^*(\omega)-\omega_{F}\right|_{g_{F}}=O(r_F^{-N_0}) \text{ for some number $N_0\geq 8$.} \end{equation} \end{itemize} \end{assumption} \begin{remark} There are (lots of) other potentials $F$ besides those given in (\ref{conditions on potentials}) that make the argument in this paper work, but for simplicity of the statement and some computations we only consider potentials in $\mathcal H$. The order in (\ref{assumption on asymptotics}) is not optimal either, and again we just choose the number 8 for a neat statement. From now on, unless otherwise mentioned, $N_0$ denotes the number in (\ref{assumption on asymptotics}). \end{remark} Here are the main properties we will use for K\"ahler metrics defined by the potentials in $\mathcal H$. For simplicity of notation, we omit the subscript for the dependence on $F$. \begin{proposition}\label{basic geometric property} For the K\"ahler metric defined by a potential $F=At^a\in \mathcal H$, we have \begin{itemize} \item [(1)] the metric is complete as $|\xi|_{h_D}\rightarrow 0$, \item [(2)] $r\sim t^{\frac{a}{2}}$, \item [(3)] $\operatorname {Vol}(B_r(p))\sim r^{2n(1-\frac{1}{a})}$, \item [(4)] if $\theta$ is a smooth form on $\mathcal D$ with $\theta|_D=0$, then $\left|\theta\right|_g=O(e^{-\delta t})$ for some $\delta>0$.\end{itemize} \end{proposition} Conditions (1)--(3) follow directly from (\ref{explicit expression of kahler form}) and (\ref{conditions on potentials}).
Condition (4) can be proved by a direct computation in local coordinates on $\mathcal D$ as in \cite[Section 3]{hsvz1}. For completeness and later reference, we include some details. \textit{Proof of (4)}: We choose local holomorphic coordinates $\underline{z}=\{z_i\}_{i=1}^{n-1}$ on the smooth divisor $D$ and fix a local holomorphic trivialization $e_0$ of $N_D$ with $|e_0|^2_{h_D}=e^{-\psi}$, where $\psi$ is a smooth function on $D$ satisfying $\ii \partial\pp \psi=\omega_D$. Then we get local holomorphic coordinates $\{z_1,\cdots,z_{n-1},w\}$ on $\mathcal C$ by writing a point $\xi=we_0(\underline z)$. Then in these coordinates we can write (\ref{explicit expression of kahler form}) as \begin{equation}\label{model metric expansion} \omega_{F}= \ii F'(t) \psi_{i\bar j} dz_i\wedge d\bar z_{ j}+F''(t) \sqrt{-1}\left(\frac{d w}{w}- \psi_idz_i\right) \wedge\left(\frac{d \bar{w}}{\bar{w}}-\psi_{\bar j}d\bar z_{ j}\right). \end{equation} Then it is easy to check the following estimates: \begin{equation}\label{estimate for 1-forms} \begin{aligned} &|dz_i|^2_{\omega_{F}}=|\Lambda_{\omega_F}(dz_i\wedge d\bar z_i)| \sim t^{1-a},\\ &|dw|^2_{\omega_{F}}\sim \frac{|w|^2}{F''(t)}\leq Ce^{-\delta t} \text{ for some $\delta>0$},\\ &\omega_{F}^n \sim \frac{F'(t)^{n-1}F''(t)}{|w|^2}\ii^n\left(\prod_{i=1}^{n-1}dz_i\wedge d\bar z_i\right)\wedge dw\wedge d\bar w . \end{aligned} \end{equation} Then (4) follows directly from (\ref{estimate for 1-forms}). \qed \begin{remark} Actually, from the proof of Proposition \ref{basic geometric property}-(4), we can give an effective lower bound for $\delta$. For example, for 2-forms, $\delta$ can be chosen to be any positive number sufficiently close to (and less than) $1/2$. However, $\delta>0$ is sufficient for our later use.
\end{remark} \begin{remark} Although not needed in this paper, we mention that following the computation in \cite[Section 4]{tian-yau1} or \cite[Section 3]{biquard-guenancia}, we can show that $\Vert Rm\Vert\leq C r^{2(\frac{1}{a}-1)}$. \end{remark} In \textit{Assumption 1}, we only assume the asymptotics of the K\"ahler forms. To get the asymptotic behaviour of the corresponding Riemannian metrics, we need to show that the complex structures of $\olsi{X}$ and $\mathcal D$ are sufficiently close under the metric $g_F$. When $D$ is an anticanonical divisor, the following result is proved in \cite[Proposition 3.4]{hsvz1}. For a general smooth divisor $D$, the author learned the following proof from Song Sun. \begin{lemma} Let $J_{\mathcal D}$ and $J_{\ols{X}}$ denote the complex structures on $\mathcal D$ and $\olsi{X}$, respectively, and let $\Phi^*J_{\olsi{X}}:=(d\Phi)^{-1}\circ J_{\olsi{X}}\circ d\Phi$ denote the pullback of $J_{\olsi{X}}$ under an exponential-type map $\Phi$. Then we have \begin{equation}\label{complex structure is close} \left|\nabla^k_{g_F}(\Phi^*J_{\olsi{X}}-J_{\mathcal D})\right|_{g_F}=O(e^{(-\frac{1}{2}+\epsilon) t}) \text{ for all $k\geq 0$ and $\epsilon>0$}. \end{equation} \end{lemma} \begin{proof} Since $d\Phi_p$ is complex linear for all $p\in D$, we know $\Phi^*J_{\olsi{X}}-J_{\mathcal D}$ is a smooth section of $\operatorname{End}(T\mathcal D)$ vanishing on $D$. But this is not enough to get the bound claimed in (\ref{complex structure is close}). We will use the integrability of $\Phi^*J_{\olsi{X}}$ and property (3) in Definition \ref{definition of expoenntial map}. In the following, we ignore the pull-back notation. Around a fixed point in $D$ we can choose local holomorphic coordinates $\left\{w,z_1, \cdots, z_{n-1}\right\}$ of the total space of $N_{D}$ so that the zero section is given by $w=0$.
Then we can write for $\alpha=1, \cdots, n-1$ that \begin{equation*} J_{\olsi{X}} \partial_{z_{\alpha}}=\ii \partial_{z_{\alpha}}+P_{\alpha} \partial_{\bar{w}}+Q_{\alpha \beta} \partial_{\bar{z}_{\beta}}+O\left(\left|w\right|^{2}\right), \end{equation*} where $P_{\alpha}$ and $Q_{\alpha \beta}$ are linear functions of $w$ and $\olsi{w}$, i.e. there are smooth functions $p_{\alpha}$ and $p_{\olsi{\alpha}}$ of $\{z_i\}$ such that $P_{\alpha}=p_{\alpha}w+p_{\ols{\alpha}}\ols{w}$ and a similar expression for $Q_{\alpha\beta}$. There are no type $(1,0)$ vectors in the linear term of the right-hand side because $J_{\olsi{X}}^2=-\operatorname{id}$. Since $J_{\olsi{X}}$ is integrable, we know that \begin{equation*} \left[\partial_{w}-\ii J_{\olsi{X}} \partial_{w}, \partial_{z_{\alpha}}-\ii J_{\olsi{X}} \partial_{z_{\alpha}}\right]=-2 \ii \partial_{w} P_{\alpha} \partial_{\bar{w}}-2 \ii \partial_{w} Q_{\alpha \beta} \partial_{\bar{z}_{\beta}}+O\left(\left|w\right|\right) \end{equation*} is still of type $(1,0)$ with respect to $J_{\olsi{X}}$, which coincides with $J_{\mathcal D}$ when restricted to $D$. Therefore $$\partial_{w} P_{\alpha}=p_{\alpha}=0.$$ By properties (2) and (3) in Definition \ref{definition of expoenntial map} and the following standard exact sequence of holomorphic vector bundles on $D$ \begin{equation*} 0\longrightarrow T^{1,0}D\longrightarrow T^{1,0}\olsi{X}\longrightarrow N_D\longrightarrow 0, \end{equation*} we know that on $D$, the $d\olsi{z}_{\alpha}$ component of $\pp_{J_{\olsi{X}}}\partial_{w}$ is tangential to $D$. Note that by definition we have $\pp_{J_{\olsi{X}}}\partial_{w}=\mathcal L_{\partial_{w}}J_{\olsi{X}}$, therefore we know that \begin{equation*} \pp_{J_{\olsi{X}}}\partial_{w}(\partial_{\olsi{z}_{\alpha}})=[\partial_{w},J_{\olsi{X}}\partial_{\olsi{z}_{\alpha}}]=\bar{p}_{\olsi{\alpha}} \partial_{w}+\partial_{w}\olsi{Q}_{\alpha \beta}\partial_{z_{\beta}} +O\left(\left|w\right|\right).
\end{equation*} Since on $D$, the $d\olsi{z}_{\alpha}$ component of $\pp_{J_{\olsi{X}}}\partial_{w}$ is tangential to $D$, we obtain $p_{\olsi{\alpha}}=0$. So we have for $\alpha=1,\cdots, n-1$ \begin{equation}\label{difference of complex structure} J_{\olsi{X}} \partial_{z_{\alpha}}=\ii \partial_{z_{\alpha}}+Q_{\alpha \beta} \partial_{\bar{z}_{\beta}}+O\left(\left|w\right|^{2}\right). \end{equation} Now on $\mathcal{D}$ we consider the local basis of holomorphic vector fields (with respect to $J_{\mathcal{D}}$): $$ e_{n}=w \partial_{w}, e_{\alpha}=\partial_{z_{\alpha}}, \alpha=1, \cdots, n-1 $$ and correspondingly $\bar{e}_{n}, \bar{e}_{\alpha}$ the conjugate vector fields, and $e^{n}, e^{\alpha}$ the dual frame, etc. Then we can write \begin{equation}\label{difference after 2.6} J_{\olsi{X}}-J_{\mathcal{D}}=\sum J_{i}^{j} e^{i} \otimes e_{j}, \end{equation} where $i, j$ range over $1, \cdots, n, \overline{1}, \cdots, \bar{n}$. Then (\ref{difference of complex structure}) implies that we have $\left|J_{i}^{j}\right|=O\left(\left|w\right|\right)$ for all $i, j$. Then the lemma follows from the explicit expression of the K\"ahler metric on $\mathcal D$, see (\ref{estimate for 1-forms}). \end{proof} From the assumption (\ref{assumption on asymptotics}) on the K\"ahler form and (\ref{complex structure is close}) on the complex structure asymptotics, we obtain that for the corresponding Riemannian metric \begin{equation}\label{closeness of metric tensor} \left|\Phi^*g-g_F\right|_{g_F}=O(r_F^{-N_0}). \end{equation} It is also useful to write down the Riemannian metric $g_F$ explicitly in real coordinates. Note that the set $\left\{\xi\in N_D:0<\left|\xi\right|_{h_D}<1\right\}$ is diffeomorphic to $\mathbb{R}_{+}\times Y$, where $Y$ is a smooth $(2n-1)$-dimensional $S^1$ bundle over $D$. Let $F(t)=At^a\in \mathcal H$.
Then we can write the Riemannian metric $g_{F}$ as follows \begin{equation}\label{metric tensors on real cosrdinates} g_{F}=dr^2+C_1r^{2(1-\frac{1}{a})}g_D+C_2r^{2(1-\frac{2}{a})}\theta^2, \end{equation} where $g_D$ is the corresponding Riemannian metric for $\omega_D$ and $\theta$ is a connection 1-form on $Y$ such that $d\theta=\omega_D$. Here, up to constants, $r\sim t^{a/2}$, since $dr=\sqrt{F''(t)}\,dt$ and $F''(t)\sim t^{a-2}$. From the asymptotics of the Riemannian metric tensor (\ref{closeness of metric tensor}), the explicit expression of the Riemannian metric $g_F$ in (\ref{metric tensors on real cosrdinates}) and the conditions in \textit{Assumption 1}, one can directly show the following result. \begin{lemma}\label{vanishing at infinity decay} Suppose $(X,\omega,g)$ satisfies \textit{Assumption 1}; then \begin{itemize} \item [(1)] the volume growth of $g$ is at most 2, i.e. there exists a constant $C>0$ such that $\operatorname{Vol}(B_R(p))\leq CR^2$ for all $R$ sufficiently large; \item [(2)] for some large number $K$, $\alpha=2$ and $\beta=\frac{4}{a}-2$, $(X,\omega)$ is of $(K,\alpha,\beta)$-polynomial growth as defined in \cite[Definition 1.1]{tian-yau1}; \item [(3)] if $\theta$ is a smooth form on $\olsi{X}$ vanishing when restricted to $D$, then $$\left|\theta\right|_{g}=O(r^{-N_0}).$$ \end{itemize} \end{lemma} That $(X,g)$ is of $(K,\alpha,\beta)$-polynomial growth is important for us since we need the weighted Sobolev inequality in \cite[Proposition 2.1]{tian-yau1} to prove a weighted mean value inequality in the next subsection. \subsection{A weighted mean value inequality} In this subsection, using a weighted Sobolev inequality in \cite{tian-yau1}, we prove a weighted mean value inequality for nonnegative functions which are almost subharmonic. This is important when we run Simpson's argument to get a uniform $C^0$-estimate. As usual, $r$ denotes the distance function to a fixed base point induced by a Riemannian metric.
\begin{lemma}\label{weighted sobolev inequality} Let $(X,g)$ be a Riemannian manifold which is of $(K,\alpha,\beta)$-polynomial growth as defined in \cite{tian-yau1}. Let $u$ be a nonnegative compactly supported Lipschitz function satisfying $\Delta u \leq f$ in the weak sense. Suppose that $|f|=O(r^{-N})$ for some $N\geq 2+\alpha+\beta$; then there exist $C_i=C_i(n, N)$ such that \begin{equation}\label{mean value inequality} \left\Vert u\right \Vert_{L^{\infty}}\leq C_1\int(1+r)^{-N}u+C_2. \end{equation} \end{lemma} \begin{proof} The following argument is the standard Moser iteration with the help of the weighted Sobolev inequality in \cite[Proposition 2.1]{tian-yau1}. Let $\gamma=\frac{2n+1}{2n-1}$. Note that we have $\int u^p\Delta u \leq \int u^pf$ for any $p\geq 1$. Integrating by parts and using that $|f|=O(r^{-N})$, we have \begin{equation*} \int |\nabla u^{\frac{p+1}{2}}|^2 \leq Cp\int u^p(1+r)^{-N}. \end{equation*} Let $d\mu=(1+r)^{-N}dV_g$; without loss of generality, we may assume $d\mu$ has total mass 1. Then the weighted Sobolev inequality shows that \begin{equation*} \left(\int \left|u^{\frac{p+1}{2}}-\int u^{\frac{p+1}{2}}d\mu\right|^{2\gamma}d\mu\right)^{\frac{1}{2\gamma}}\leq C\left(\int |\nabla u^{\frac{p+1}{2}}|^2\right)^{\frac{1}{2}}\leq C p^{\frac{1}{2}}\left(\int u^p d\mu\right)^{\frac{1}{2}}. \end{equation*} Applying the triangle inequality and the H\"{o}lder inequality, we get \begin{equation*} \begin{aligned} \left(\int u^{(p+1)\gamma}d\mu\right)^{\frac{1}{2\gamma}}&\leq C_1\int u^{\frac{p+1}{2}}d\mu+C_2p^{\frac{1}{2}}\left(\int u^pd\mu\right)^{\frac{1}{2}}\\ &\leq C_1\left(\int u^{p+1}d\mu\right)^{\frac{1}{2}}+C_2p^{\frac{1}{2}}\left(\int u^{p+1}d\mu\right)^{\frac{p}{2(p+1)}}. \end{aligned} \end{equation*} Let $p_i=\gamma^i$, $i=0,1,\cdots$. We have for any $i$ \begin{equation*} \left(\int u^{p_{i+1}}d\mu\right)^{\frac{1}{\gamma}}\leq C_1 \int u^{p_i}d\mu+C_2p_i\left(\int u^{p_i}d\mu\right)^{\frac{p_i}{p_{i+1}}}.
\end{equation*} Either there exists a sequence $p_{i_j}\rightarrow \infty$ such that $\int u^{p_{i_j}}d \mu \leq 1$, which implies that $\Vert u\Vert_{L^{\infty}}\leq 1$, or there exists a smallest $i_0$ such that $\int u^{p_{i}}d\mu > 1$ for $i\geq i_0$. In the second case, we have \begin{equation*} \begin{aligned} &\Vert u\Vert_{L^{p_{i_0}}}\leq \max\left\{ \Vert u\Vert_{L^1(d\mu)}, C p_{i_0}^{\frac{1}{p_{i_0}}}\right\}\leq C_1\Vert u\Vert_{L^1(d\mu)}+C_2\\ &\left(\int u^{p_{i+1}}d\mu\right)^{\frac{1}{\gamma}}\leq C p_i\int u^{p_i}d\mu, \ \text{for}\ i\geq i_0. \end{aligned} \end{equation*} Iterating gives that \begin{equation*} \Vert u\Vert_{L^{\infty}}=\lim_{i\rightarrow \infty}\Vert u\Vert_{L^{p_i}(d\mu)}\leq C \Vert u\Vert_{L^{p_{i_0}}}\leq C_1\Vert u\Vert_{L^1(d\mu)}+C_2. \end{equation*} \end{proof} \subsection{The assumption on the degree $a$} The only reason why we need to assume $a\leq \frac{n}{n-1}$ is to ensure that the volume growth of the corresponding Riemannian metric is at most 2. In fact we have the following easy but useful degree vanishing property for Riemannian manifolds with at most quadratic volume growth. \begin{lemma}\label{exact integral imply degree 0} Let $(M,g)$ be a complete Riemannian manifold with volume growth order at most 2. Let $u\in C^{\infty}(M)$ satisfy $|\nabla u|\in L^2$ and $\Delta u\in L^1$; then \begin{equation*} \int_{M} \Delta u \ dV_{g}=0. \end{equation*} \end{lemma} \begin{proof} By the Cauchy-Schwarz inequality and the assumption on the volume growth, we have \begin{equation*} \frac{1}{R}\int_{B_{2R}\backslash B_R}|\nabla u|dV_g\leq C \left(\int_{B_{2R}\backslash B_R}|\nabla u|^2dV_g\right)^{\frac{1}{2}}\rightarrow 0 \text{ as $R\rightarrow \infty$}. \end{equation*} Therefore there is a sequence $R_i\rightarrow \infty$ such that $\int_{\partial B_{R_i}}|\nabla u|\ dS\rightarrow 0$.
Since $\Delta u$ is integrable, $\int_{M} \Delta u \ dV_{g}=\lim_{i\rightarrow\infty}\int_{B_{R_i}} \Delta u\ dV_g $ for any sequence $R_i$ going to infinity. Using Stokes' theorem, we have \begin{equation*} \left|\int_{B_{R_i}}\Delta u\ dV_g\right|\leq \int_{\partial B_{R_i}}|\nabla u|\ dS\rightarrow 0 \text{ as $R_i\rightarrow\infty$}. \end{equation*} \end{proof} \subsection{Assumption on $\Phi$ and $\omega$} By Proposition \ref{basic geometric property} and the assumption on the decomposition $\omega=\omega_0+\ii \partial\pp \varphi$, we know that (\ref{assumption on asymptotics}) is equivalent to saying that \begin{equation}\label{approximate for exact part} \left|\Phi^*(\ii\partial\pp \varphi)-\omega_{F}\right|_{g_{F}}=O(r_{F}^{-N_0}). \end{equation} Writing $\Phi^*(\ii\partial\pp \varphi)-\omega_{F}$ as $d(\Phi^*d^c\varphi-d^cF)$ and integrating this exact 2-form, we can show the following result, whose proof is similar to that given in \cite[Lemma 3.7]{hsvz1}. \begin{lemma}\label{a good 1-form potential} There exists a real 1-form $\eta$ outside a compact set of $\mathcal C$ with $$|\eta|_{g_F}=O(r_{F}^{-N_0+1+\frac{1}{a}})$$ such that \begin{equation*} \Phi^*(\ii\partial\pp \varphi)-\omega_{F}=d\eta. \end{equation*} \end{lemma} \begin{proof}Choose a cut-off function $\chi$ which equals 1 on $\{0<|\xi|_{h_D}< \delta\}$ and 0 on $\{|\xi|_{h_D}> 2\delta\}$ for some $\delta>0$. Let \begin{equation*} \theta=d(\chi(\Phi^*(d^c \varphi)-d^cF)). \end{equation*} It suffices to write $\theta=d\eta$ with $|\eta|_{g_F}=O(r_{F}^{-N_0+1+\frac{1}{a}})$. We identify $\mathcal C$ with $\mathbb{R}_{+}\times Y$ in such a way that the Riemannian metric $g_F$ can be written as $dr^2+g_r$, where $r$ is the coordinate function on $\mathbb R_{+}$ and $g_r$ is a metric on $\{r\} \times Y^{2n-1}$ that depends on $r$. Then $\theta$ is supported on the region $\{r>r_0\}$ for some $r_0>0$.
Then there exist a 1-form $\alpha$ and a 2-form $\beta$ supported on the region $\{r>r_0\}$ such that $\left.\partial_{r}\right\lrcorner \alpha=0$, $\left.\partial_{r}\right\lrcorner \beta=0$ and \begin{equation*} \theta=dr\wedge\alpha+\beta. \end{equation*} Then we define \begin{equation*} \eta=\int_{r_0}^r\alpha\ dr. \end{equation*} Since $\theta$ is closed, we have $d\alpha=\partial_r\beta$, and then one can directly check that $\theta=d\eta$. Since $dr\wedge \alpha$ is perpendicular to $\beta$ and $|\theta|_{g_F}=O(r_F^{-N_0})$, we obtain that $|\alpha|_{g_F}=O(r_F^{-N_0})$. Fix a smooth background Riemannian metric $\bar g$ on $Y$. Then from (\ref{metric tensors on real cosrdinates}) and (\ref{closeness of metric tensor}), we obtain the following estimate: \begin{equation*} C^{-1}r^{2(1-\frac{2}{a})}\bar g\leq g_r \leq C r^{2(1-\frac{1}{a})}\bar g. \end{equation*} Then the estimate for $|\eta|_{g_F}$ follows from a direct computation. \end{proof} \begin{remark}\label{a new 1-form potential} A similar argument can be applied to $dd^c\varphi$ directly on $X$ (using \textit{Assumption 1}), and we obtain that there exist a cut-off function $\chi$ supported on a compact set and a smooth real 1-form $\psi$ supported outside a compact set satisfying $|\psi|=O(r^{1+\frac{1}{a}})$ such that \begin{equation*} dd^c\varphi=dd^c(\chi\varphi)+d\psi. \end{equation*} This is quite useful when we want to integrate by parts on $X$. \end{remark} We assumed that $\omega_0$ is a closed (1,1)-form on $\olsi{X}$ which vanishes when restricted to $D$. In particular, $\int_{\olsi{X}}c_1(D)\wedge\omega_0^{n-1}=0$. Then by the Lelong-Poincar\'e formula, we obtain the following. \begin{lemma}\label{poincare lelong} Let $S\in H^0(\olsi{X},L_D)$ be a defining section of $D$ and $h$ be any smooth hermitian metric on $L_D$.
Let $f=\log |S|^2_h$; then we have \begin{equation*} \int_X dd^cf\wedge \omega_0^{n-1}=0. \end{equation*} \end{lemma} \section{Hermitian holomorphic vector bundles} \label{basics on vector bundle} First let us recall the definition of projectively Hermitian-Yang-Mills metrics. Given a hermitian metric $H$ on a holomorphic vector bundle $E$, there is a unique connection compatible with these two structures, called the \textit{Chern connection} of $(E,H)$. Let $F_H$ denote the curvature of the Chern connection; we call it the \textit{Chern curvature} of $(E,H)$. Let $E$ be a holomorphic vector bundle on a K\"ahler manifold $(X,\omega)$. A hermitian metric $H$ is called an $\omega$-projectively Hermitian-Yang-Mills metric ($\omega$-PHYM) if \begin{equation}\label{definition of hym} \Lambda_{\omega}F_H=\frac{\tr (\Lambda_{\omega}F_H)}{\rank (E)}\mathrm{id}_E. \end{equation} Accordingly the Chern connection is called an $\omega$-PHYM connection if (\ref{definition of hym}) is satisfied. A hermitian metric $H$ is called an $\omega$-Hermitian-Yang-Mills ($\omega$-HYM) metric if $$\Lambda_{\omega}F_H=\lambda \mathrm{id}_E$$ for some constant $\lambda$. We also use the notation $F^{\perp}_H$ to denote the trace-free part of the curvature form, i.e. $F_H^{\perp}=F_H-\frac{\tr (F_H)}{\rank (E)}\mathrm{id}_E$. Then (\ref{definition of hym}) is equivalent to saying that $\Lambda_{\omega}F^{\perp}_H=0$. \begin{remark} Note that the PHYM property is conformally invariant, i.e. if a hermitian metric $H_0$ satisfies (\ref{definition of hym}), then $H=H_0e^f$ also satisfies (\ref{definition of hym}) for every smooth function $f$. Moreover, to get a HYM metric from a PHYM metric, it suffices to solve a Poisson equation, which is always solvable for the constant $\lambda$ such that $\int_X(\tr(\Lambda_{\omega}F_H)-\lambda)\omega^n=0$ when $X$ is compact.
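Indeed (a sketch of the standard computation): under a conformal change the Chern curvature changes only by a scalar term, namely if $H=H_0e^f$ then $F_H=F_{H_0}+\pp\partial f\otimes \mathrm{id}_E$. Hence if $H_0$ is $\omega$-PHYM, the metric $H=H_0e^f$ is $\omega$-HYM with constant $\lambda$ if and only if \begin{equation*} \Lambda_{\omega}\pp\partial f=\lambda-\frac{\tr(\Lambda_{\omega}F_{H_0})}{\rank(E)}, \end{equation*} which is a Poisson equation for $f$; on a compact manifold it is solvable precisely when the right hand side integrates to zero against $\omega^n$, and this determines $\lambda$ as above.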
\end{remark} \subsection{Basic differential inequalities} Let $E$ be a holomorphic vector bundle and $H,K$ be two hermitian metrics on $E$; then we have an endomorphism $h$ defined by \begin{equation*} \left<s,t\right>_H=\left<h(s),t\right>_K. \end{equation*} We will write this as $H=Kh$ and $h=K^{-1}H$ interchangeably. Note that $h$ is positive and self-adjoint with respect to both $H$ and $K$. Let $\partial_H$ and $\partial_K$ denote the $(1,0)$ part of the Chern connection determined by $H$ and $K$ respectively. By abuse of notation, we use the same notation to denote the induced connection on $\operatorname{End}( E)$. Simpson showed the following. \begin{lemma}[{\cite{simpson}}]\label{basic differential inequlaties} Let $H=Kh$, then we have \begin{itemize} \item [(1)] $\partial_H=\partial_K+h^{-1}\partial_K(h)$; \item [(2)] $\Delta_{K}h=h\ii(\Lambda F_H-\Lambda F_K)+\ii \Lambda \pp (h) h^{-1}\partial_K(h)$, where $\Delta_K=\ii\Lambda\pp\partial_K$; \item [(3)] $\Delta \log \operatorname{tr}(h) \leq 2\left(\left|\Lambda F_{H}\right|_{H}+\left|\Lambda F_{K}\right|_{K}\right)$. \end{itemize} Moreover in (2) and (3), if $\det(h)=1$ then the curvatures can be replaced by the trace-free curvatures $F^{\perp}$. \end{lemma} \subsection{Slope stability}\label{definition of stability with respect to a pair} If $ \left|\tr(\Lambda_{\omega}F_H)\right| \in L^1$, the $\omega$-degree and $\omega$-slope of $(E,H)$ are defined to be \begin{equation}\label{definition of slope} \begin{aligned} \deg_{\omega}(E,H)&=\frac{\ii}{2n\pi}\int_M \tr(\Lambda_{\omega}F_H)\omega^n=\frac{\ii}{2\pi}\int_M\tr(F_H)\wedge\omega^{n-1},\\ \mu_{\omega}(E,H)&=\frac{\deg_{\omega}(E,H)}{\rank (E)}. \end{aligned} \end{equation} Now let us assume $M$ is compact. Integration by parts shows that the degree defined above is independent of the metric $H$ and only depends on the cohomology class $[\omega]\in H^2(M,\mathbb R)$, i.e.
by the Chern-Weil theory, \begin{equation*} \deg_{\omega}(E)=\int_Mc_1(E)\wedge\omega^{n-1}. \end{equation*} For any coherent subsheaf $S$ of $E$, one can define its $\omega$-degree as follows (see \cite[Chapter 5]{kobayashi-book}). It is shown there that $\det S:=(\wedge^rS)^{**}$ is a line bundle, where $r$ is the rank of $S$, and one defines \begin{equation}\label{definition of degree of sheaf} \deg_{\omega}(S)=\int_Mc_1(\det S)\wedge\omega^{n-1}. \end{equation} As before, we define $\mu_{\omega}(S)$, the $\omega$-slope of $S$, to be $\frac{\deg_{\omega}(S)}{\rank (S)}$. Note that for the definition of $\omega$-degree and $\omega$-slope, we do not need $\omega$ to be a K\"ahler form at all; a real closed $(1,1)$-form is enough. That is, for every real closed $(1,1)$-form $\alpha$, we can define \begin{equation*} \deg_{\alpha}(S)=\int_Mc_1(\det S)\wedge \alpha^{n-1}. \end{equation*} The slope $\mu_{\alpha}(S)$ is defined similarly as before, and we will use the notation $\mu(S,\alpha)$ and $\mu_{\alpha}(S)$ interchangeably. We have the following definition, which generalizes the standard slope stability defined for K\"ahler classes in \cite[Chapter 5]{kobayashi-book}. \begin{definition}\label{definition of pair stability} Let $M$ be a compact K\"ahler manifold, $\alpha,\beta\in H^{1,1}(M)$ be two cohomology classes, and $E$ be a holomorphic vector bundle over $M$. \begin{itemize} \item [(1)] We say $E$ is $\alpha$-stable if for every coherent reflexive subsheaf $S$ of $E$ with $0<\rank(S)<\rank(E)$, we have $\mu_{\alpha}(S)<\mu_{\alpha}(E)$; $E$ is $\alpha$-polystable if it is the direct sum of $\alpha$-stable vector bundles with the same $\alpha$-slope; $E$ is $\alpha$-semistable if for every coherent reflexive subsheaf $S$ of $E$ with $0<\rank(S)<\rank(E)$, we have $\mu_{\alpha}(S)\leq\mu_{\alpha}(E)$.
\item [(2)] We say $E$ is $(\alpha,\beta)$-stable if every coherent reflexive subsheaf $S$ of $E$ with $0<\rank (S)<\rank (E)$ satisfies one of the following conditions: \begin{itemize} \item [(a)] $\mu_{\alpha}(S)< \mu_{\alpha}(E)$, or \item[(b)] $\mu_{\alpha}(S)= \mu_{\alpha}(E)$ and $\mu_{\beta}(S)<\mu_{\beta}(E)$. \end{itemize} \end{itemize} \end{definition} From the definition, we know that if $\beta=0$, then $E$ is $(\alpha,\beta)$-stable if and only if it is $\alpha$-stable, and if $E$ is $\alpha$-stable, then it is $(\alpha,\beta)$-stable for any class $\beta$. In applications, typically the first class $\alpha$ has some positivity. For example, in our Theorem \ref{main theorem}, $\alpha=c_1(D)$ is nef and big. \begin{remark} For every coherent subsheaf $S$ of a holomorphic vector bundle $E$, we have an exact sequence of sheaves: \begin{equation*} 0\rightarrow S\rightarrow S^{**}\rightarrow S^{**}/S\rightarrow0, \end{equation*} where $S^{**}/S$ is a torsion sheaf supported on an analytic set of codimension at least 2. Then by \cite[Section 5.6]{kobayashi-book}, we know $\det S=\det (S^{**})$. In particular, $E$ is $\alpha$-stable (respectively $(\alpha,\beta)$-stable) if and only if the conditions in (1) (respectively (2)) hold for every coherent subsheaf of $E$. \end{remark} \subsection{Coherent subsheaves and weakly holomorphic projection maps} Let $(E,H)$ be a hermitian holomorphic vector bundle over a K\"ahler manifold $(M,\omega)$. Suppose $S$ is a coherent subsheaf of $E$. Since $E$ is torsion-free, $S$ is torsion-free and hence locally free outside a closed analytic set $\Sigma$ of codimension at least 2. Moreover, on $M\backslash \Sigma$ we have an induced orthogonal projection map $\pi=\pi^H_S$ satisfying \begin{equation}\label{weakly holomophic map condition} \pi=\pi^{\star}=\pi^{2}, \quad(\mathrm{id}-\pi) \circ \pp \pi=0.
\end{equation} Outside the singular set $\Sigma$, the Chern curvature of $(S,H|_S)$ is related to the Chern curvature of $(E,H)$ by \begin{equation}\label{curvature form for subhseaf} F_{S,H}=F_{E,H}|_S-\partial \pi\wedge\pp\pi. \end{equation} Let us mention a result from the theory of currents: \begin{theorem}[{\cite{harvey1974removable}}]\label{current lemma} Let $\Sigma$ be a closed analytic subset of codimension at least 2 in a K\"ahler manifold $(M,\omega)$. Assume $T$ is a closed positive current on $M\backslash \Sigma$ of bidegree $(1,1)$, i.e. a $(1,1)$-form with distribution coefficients; then the mass of $T$ is locally finite in a neighborhood of $\Sigma$. More precisely, every $p\in \Sigma$ has a neighborhood $U\subseteq M$ such that \begin{equation*} \int_U T\wedge\omega^{n-1}<\infty. \end{equation*} \end{theorem} Applying the above theorem to $\tr(\ii\partial\pi\wedge\pp \pi)$, one gets \begin{equation}\label{L^2} \pi\in W^{1,2}_{loc}(M,\operatorname{End}( E);\omega,H). \end{equation} In general, we call $\pi\in W^{1,2}_{loc}(M,\operatorname{End}( E);\omega,H)$ a weakly holomorphic projection map if it satisfies (\ref{weakly holomophic map condition}) almost everywhere. By the discussion above, we know that for a coherent subsheaf $S$ of $E$, $\pi_{S}^H$ is a weakly holomorphic projection map. A highly nontrivial result due to Uhlenbeck and Yau \cite{uhlenbeck-yau} is that the converse is also true (see also \cite{popovici}). \begin{theorem}[{\cite{uhlenbeck-yau}}]\label{uhlenbeckyau} If $\pi$ is a weakly holomorphic projection map, then there exists a coherent subsheaf $S$ of $E$ such that $\pi=\pi^H_{S}$ almost everywhere. \end{theorem} If $M$ is compact, $\deg_{\omega}(S)$ defined in (\ref{definition of degree of sheaf}) can be computed using the curvature form $F_{S,H}$. The following result is well-known, see \cite[Section 5.8]{kobayashi-book}. We include a simple proof using Theorem \ref{current lemma}.
\begin{proposition} Let $(E,H)$ be a hermitian holomorphic vector bundle over a compact K\"ahler manifold $(M,\omega)$ and $S$ be a coherent subsheaf of $E$. Then \begin{equation}\label{degree of coherent sheaf} \deg_{\omega}(S)=\frac{\ii}{2\pi}\int_{M\backslash\Sigma} \tr(F_{S,H})\wedge\omega^{n-1}, \end{equation} where $\deg_{\omega}(S)$ and $F_{S,H}$ are defined in (\ref{definition of degree of sheaf}) and (\ref{curvature form for subhseaf}) respectively. \end{proposition} \begin{proof} Let $r$ denote the rank of $S$. Since $S$ is a subsheaf of $E$, there is a natural sheaf homomorphism \begin{equation*} \Phi:(\wedge^rS)^{**}\longrightarrow (\wedge^r E)^{**}=\wedge^r E. \end{equation*} Note that in general $\Phi$ is only injective on $M\backslash \Sigma$. Let $\wedge^rH$ denote the metric on $\wedge^r E$ induced from $H$; then $\Phi^*(\wedge^rH)$ defines a singular hermitian metric on $(\wedge^rS)^{**}$ which is smooth outside $\Sigma$ and whose curvature form is equal to $\tr(F_{S,H})$. Since $\Phi$ is a holomorphic bundle map, by choosing local holomorphic bases of $(\wedge^rS)^{**}$ and $\wedge^r E$, it is easy to show that $\Phi^*(\wedge^rH)=fK$, where $K$ is a smooth hermitian metric on $(\wedge^rS)^{**}$ and the function $f$ is positive and smooth outside $\Sigma$ and converges to 0 polynomially along $\Sigma$. Then by Theorem \ref{current lemma}, it suffices to prove the following: for every smooth positive function $f$ on $M\backslash \Sigma$ satisfying $\Delta \log f\in L^1$ and $|f|=O(\operatorname {dist}(\cdot, \Sigma)^{k})$ for some $k$, one has \begin{equation}\label{another average of laplacian is 0} \int_M\Delta \log f\ \omega^n=0. \end{equation} Since $|\log f|\in L^2$, (\ref{another average of laplacian is 0}) follows from the Cauchy-Schwarz inequality and the existence of good cut-off functions.
More precisely, since $\Sigma$ has real codimension at least 4, it is well-known that there exists a sequence of cut-off functions $\chi_{\epsilon}$ such that $1-\chi_{\epsilon}$ is supported in the $\epsilon$-neighborhood of $\Sigma$ and we have a uniform $L^2$ bound on $\Delta \chi_{\epsilon}$. We briefly describe a construction of these cut-off functions. Let $s$ be a regularized distance function to $\Sigma$, in the sense that $s:M\rightarrow \mathbb R_{\geq0}$ is smooth and there exist positive constants $C_k$ such that \begin{equation*} C_0^{-1}\operatorname{dist}(x,\Sigma)\leq s(x)\leq C_0\operatorname {dist}(x,\Sigma) \text{ and } |\nabla^{k}s|\leq C_k\operatorname{dist}(x,\Sigma)^{1-k} \text{ for all } k\geq0. \end{equation*} The existence of such a regularized distance function can be derived from \cite[Theorem 2 on page 171]{Stein}. After a rescaling, we may assume $s<1$ on $M$. For every $\epsilon>0$, let $\rho_{\epsilon}$ be a smooth function which is equal to one on the interval $(-\infty, \epsilon^{-1})$ and zero on $(2+\epsilon^{-1}, \infty)$. Moreover we may arrange that $|\rho^{\prime}_{\epsilon}|+|\rho^{\prime\prime}_{\epsilon}|\leq 10$. Then we define \begin{equation*} \chi_{\epsilon}=\rho_{\epsilon}(\log(-\log s)), \end{equation*} and one can directly check that these functions satisfy the desired properties. \end{proof} Motivated by the above result, Simpson \cite{simpson} uses the right-hand side of (\ref{degree of coherent sheaf}) to define an analytic $\omega$-degree of a coherent subsheaf on a noncompact K\"ahler manifold. Typically one needs to assume $|\Lambda_{\omega}F_H|\in L^1$ to ensure that the first term of (\ref{curvature form for subhseaf}) is integrable. Then the degree of a coherent subsheaf is either a finite number or $-\infty$, depending on whether $|\pp\pi|\in L^2$. In general, this analytic degree depends on the choice of the background metric $H$.
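Concretely, substituting (\ref{curvature form for subhseaf}) into the right-hand side of (\ref{degree of coherent sheaf}) and using the normalization $\Lambda_{\omega}\theta=n\theta\wedge\omega^{n-1}/\omega^n$ together with $\partial\pi=(\pp\pi)^{*}$, the analytic degree can be written (schematically, up to this normalization) as \begin{equation*} \deg_{\omega}(S,H)=\frac{\ii}{2\pi}\int_{M\backslash\Sigma}\tr(F_{E,H}\circ \pi)\wedge\omega^{n-1}-\frac{1}{2\pi n}\int_{M\backslash\Sigma}|\pp\pi|^2_{H}\,\omega^{n}. \end{equation*} The first integral is finite once $|\Lambda_{\omega}F_H|\in L^1$, while the second term is nonpositive, which explains why the analytic degree is either finite or $-\infty$ according to whether $|\pp\pi|\in L^2$.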
And a key observation in this paper is that when $E$ has a compactification and $H$ is conformal to a smooth extendable hermitian metric, this analytic degree does have an algebraic interpretation, see Lemma \ref{degree coincide} and Lemma \ref{key observation}. \subsection{Dirichlet problem} We have the following important theorem of Donaldson: \begin{theorem} [{\cite{donaldsonboudary}}]\label{donaldson dirichlet} Given a hermitian holomorphic vector bundle $(E,H_0)$ over $(Z,\omega)$, a compact K\"ahler manifold with boundary, there is a unique hermitian metric $H$ on $E$ such that \begin{itemize} \item [(1)] $H|_{\partial Z}=H_0|_{\partial Z}$, \item [(2)] $\ii\Lambda_{\omega}F_H=0$. \end{itemize} \end{theorem} As observed in \cite{mochizuki}, one can make a conformal change to $H$ to fix the induced metric on $\det E$ and still obtain a projectively Hermitian-Yang-Mills metric. \begin{proposition}[{\cite{mochizuki}}] Given a hermitian holomorphic vector bundle $(E,H_0)$ over $(Z,\omega)$, a compact K\"ahler manifold with boundary, there is a unique hermitian metric $H$ on $E$ such that \begin{itemize} \item [(1)] $H|_{\partial Z}=H_0|_{\partial Z}$ and $\det H=\det H_0$, \item [(2)] $\ii\Lambda_{\omega}F_H^{\perp}=0$. \end{itemize} \end{proposition} \subsection{Donaldson functional for manifolds with boundary}\label{definition of new endmorphisms} Next we recall Simpson's construction \cite{simpson} of the Donaldson functional. We follow the exposition in \cite[Subsection 2.5]{mochizuki} and focus on compact K\"ahler manifolds with boundary. Let $(Z,\omega)$ be a compact K\"ahler manifold with boundary and $(E,H_0)$ be a hermitian holomorphic vector bundle. Let $b$ be a smooth section of $\operatorname{End}(E)$ which is self-adjoint with respect to $H_0$.
Then for any smooth functions $f:\mathbb R\rightarrow \mathbb R$ and $\Phi:\mathbb R\times\mathbb R\rightarrow \mathbb R$, we can define \begin{equation*} \text{$f(b)\in C^{\infty}(\operatorname{End}( E))$ and $\Phi(b)\in C^{\infty}(\operatorname{End}(\operatorname{End}(E)))$ } \end{equation*} as follows: at each point $p\in Z$, choose an orthonormal basis $\{e_i\}$ of $E$ such that $b(e_i)=\lambda_ie_i$. Let $\{e_i^{\vee}\}$ denote its dual basis, then set \begin{equation*} \text{$f(b)(e_i)=f(\lambda_i)e_i$ and $\Phi(b)(e_i^{\vee}\otimes e_j)=\Phi(\lambda_i,\lambda_j)e_i^{\vee}\otimes e_j$.} \end{equation*} Recall that for any two hermitian metrics $H_1$ and $H_2$, there is a smooth section $s$ of $\operatorname{End}( E)$, self-adjoint with respect to both $H_1$ and $H_2$, such that $H_2=H_1e^s$, and $\det H_2=\det H_1$ if and only if $\tr(s)=0$. Let $\mathcal P_{H_0}$ denote the space of hermitian metrics $H$ of $E$ such that \begin{equation*} \text{$\det H=\det H_0$ and $H|_{\partial Z}=H_0|_{\partial Z}$.} \end{equation*} Let $\Psi(\lambda_1,\lambda_2)$ denote the smooth function \begin{equation}\label{definition of Phi} \Psi(\lambda_1,\lambda_2)=\left\{ \begin{aligned} &\frac{e^{\lambda_{2}-\lambda_{1}}-\left(\lambda_{2}-\lambda_{1}\right)-1}{\left(\lambda_{2}-\lambda_{1}\right)^{2}}\ \ \text{if $\lambda_1\neq \lambda_2$} \\ &\frac{1}{2}\qquad \qquad\qquad \qquad\qquad\text{if $\lambda_1= \lambda_2$}. \end{aligned} \right. \end{equation} Then for $H_2=H_1e^s$ put \begin{equation*} \mathcal M_{\omega} (H_1,H_2)=\ii\int_Z\tr(s\Lambda_{\omega}F_{H_1})\omega^n+\int_Z \left<\Psi(s)(\pp s),\pp s\right>_{H_1}\omega^n. \end{equation*} Mochizuki proved the following important result. \begin{theorem}[{\cite{mochizuki}}]\label{phym metrics minimizing donaldson functional} If $H\in \mathcal P_{H_0}$ is an $\omega$-PHYM metric, then $\mathcal M_{\omega}(H_0,H)\leq 0$.
\end{theorem} \subsection{Bando-Siu's interior estimate} The following result shows that to get a local uniform bound for a sequence of PHYM metrics, it suffices to have a uniform $C^0$-bound. \begin{theorem}[{\cite{bando-siu}\cite{jacob-walpuski}}]\label{bando-siu interior estimate} Let $ B_2(p)\subseteq (M,\omega)$ be a geodesic ball of radius 2 contained in a K\"ahler manifold such that $\ols{B_2(p)}$ is compact. Let $(E,H_0)$ be a hermitian holomorphic vector bundle over $B_2(p)$. Suppose $H=H_0e^s$ is an $\omega$-PHYM metric with $\tr(s)=0$; then for any $k\in \mathbb N$ and $q\in(1,\infty)$ there exists a smooth function $\mathcal F_{k,q}$ which vanishes at the origin and depends only on the geometry of $B_2(p)$ such that \begin{equation*} \left\Vert\nabla_{H_0}^{k+2}s\right\Vert_{L^q(B_1(p))} \leq \mathcal F_{k,q}\left(\left\Vert s\right\Vert_{L^{\infty}}+\sum_{i=0}^{k}\left\Vert\nabla_{H_0}^{i}F_{H_0}\right \Vert_{L^{\infty}(B_2(p))} \right). \end{equation*} \end{theorem} \section{Existence of a good initial metric}\label{existence of good initial metric} In this section, we continue to use the notation of Section \ref{assumption on the asymptotic behaviour} and always assume that the K\"ahler metric on $X$ satisfies \textit{Assumption 1}. We begin by working on the model space $(\mathcal C,\omega_{\mathcal C})$, where $\omega_{\mathcal C}=dd^c F(t)$ for some potential $F(t)\in \mathcal H$. Using the explicit expression of $\omega_{\mathcal C}$ in (\ref{explicit expression of kahler form}), it is easy to show the following. \begin{lemma} Let $E$ be a holomorphic vector bundle on $D$ and $p^*(E)$ be its pullback to $\mathcal C$. Suppose $H_D$ is a metric on $E$ satisfying $\ii\Lambda_{\omega_D}F_{H_D}=\lambda \mathrm{id}_E$ for some constant $\lambda$; then $H=e^{-\frac{\lambda}{n-1} \log |\xi|^2_{h_D}}p^*(H_D)$ defines a metric on $p^*(E)$ satisfying $\ii\Lambda_{\omega_{\mathcal C}}F_H=0$.
\end{lemma} \begin{remark}\label{n=2 initial metric} If $n=2$, the metric $H$ actually is a flat metric on $p^*(E)$. \end{remark} Let $\olsi E$ be a holomorphic vector bundle on $\mathcal D$. We still use $p$ to denote the projection map from $\mathcal D$ to $D$. Then we can compare the holomorphic structures on $\olsi E$ and $p^*(\olsi E|_D)$ as follows. In a neighborhood of $D$, which we may assume to be $\mathcal C$, we fix a bundle map $\Phi:\olsi E\rightarrow p^*(\olsi E|_D)$ such that $\Phi|_D$ is the canonical identity map and $\Phi$ is an isomorphism of complex vector bundles. Then $\Phi$ pulls back the holomorphic structure on $p^*(\olsi E|_D)$ to $\olsi E$. Now we have two holomorphic structures on $\olsi E$; denote them by $\pp_1$ and $\pp_2$. Then the difference $$\beta=\pp_2-\pp_1$$ is a smooth section of $\mathcal A^{0,1}(\operatorname{End}( \olsi E))$ which vanishes when restricted to $D$. Locally near a point in $D$, choose holomorphic coordinates $\{z_1,\cdots,z_{n-1},w\}$ such that $D=\{w=0\}$. Using these coordinates, $\beta$ can be written as $f_i\dd \bar z^i+h\dd \bar w$, where $f_i$ and $h$ are smooth sections of $\operatorname{End}( \olsi E)$ and $f_i|_{w=0}=0$. Now suppose we have a Hermitian metric $H$ on $p^*(\olsi E|_D)|_{\mathcal C}$; then via $\Phi$ we view it as a metric on $\olsi E|_{\mathcal C}$. Let $\partial_i$ denote the $(1,0)$ part of the Chern connection determined by $\pp_i$ and $H$. Then one can check that $$\mu=\partial_2-\partial_1=-\beta^{*_H},$$ where $\beta^{*_H}$ denotes the smooth section of $\mathcal A^{1,0}(\operatorname{End}( \olsi E))$ obtained from $\beta$ by taking the metric adjoint for the $\operatorname{End}( \olsi E)$ part and taking the conjugate of the 1-form part. Locally $\mu=-(f_i^{*_H}\dd z^i+h^{*_H}\dd w)$. Since $H$ is only defined over $\mathcal C$, a priori $f_i^{*_H}$ and $h^{*_H}$ are only defined on $\mathcal C$.
Note that the operator $*_H$ is conformally invariant with respect to the choice of metric $H$, so if $H= e^f\olsi H$ for some function $f\in C^{\infty}(\mathcal C)$ and metric $\olsi H$ on $\olsi E$, then $f_i^{*_H}$ and $h^{*_H}$ are smooth sections over $\mathcal D$. Then we can compute the difference of the curvatures of $(\pp_1,H)$ and $(\pp_2, H)$: \begin{equation}\label{difference of curvature} F_{\pp_2,H}=\pp_2\partial_2+\partial_2\pp_2=F_{\pp_1,H}+\partial_1\beta+\pp_1\mu+[\beta,\mu], \end{equation} where we abuse notation and use the same symbols $\partial$ and $\pp$ to denote the induced connections on $\operatorname{End}( \olsi E)$. Again note that the induced metric (and hence the Chern connection) on $\operatorname{End}( \olsi E)$ is invariant under conformal changes of the metric $H$ on $\olsi E$. Therefore $F_{\pp_2,H}-F_{\pp_1,H}$ is a smooth $\operatorname{End}( \olsi E)$-valued $(1,1)$-form over $\mathcal D$ and $i^*_D(F_{\pp_2,H}-F_{\pp_1,H})=0$. Then by Proposition \ref{basic geometric property}, we obtain that there exists $\delta>0$ such that \begin{equation*} \left|F_{\pp_2,H}-F_{\pp_1,H}\right|_{\omega_{\mathcal C},H}=O(e^{-\delta t}). \end{equation*} Combining this with the previous lemma, we have proved the following. \begin{proposition} \label{model case} Let $\olsi E$ be a holomorphic vector bundle on $\mathcal D$. Suppose there is a metric $H_D$ on $\olsi E|_D$ satisfying $\ii\Lambda_{\omega_D}F_{H_D}=\lambda \operatorname{id}$ for some constant $\lambda$; then there is a Hermitian metric $H$ on $\olsi E|_{\mathcal C}$ satisfying: \begin{itemize} \item [(1).] there is a hermitian metric $\olsi{H}$ on $\olsi{E}$ and a function $f\in C^{\infty}(\mathcal C)$ such that $H=e^f\olsi{H}$, and \item [(2).] $|\Lambda_{\omega_{\mathcal C}}F_{H}|=O( e^{-\delta t})$ for some $\delta>0$. \end{itemize} \end{proposition} Motivated by this, we can give the proof of Lemma \ref{good initial metric}.
\noindent\textit{Proof of Lemma \ref{good initial metric}.} By the Donaldson-Uhlenbeck-Yau theorem there exists a hermitian metric $H_D$ on $\olsi{E}|_D$ such that \begin{equation}\label{hermitian einstein condition} \ii\Lambda_{\omega_D}F_{H_D}=\lambda \mathrm{id}. \end{equation} Extend $H_D$ smoothly to get a hermitian metric $\olsi{H}_0$ on $\olsi{E}$. Using the diffeomorphism $\Phi$ given in \textit{Assumption 1}-(3), we get a positive smooth function on $X$, by abuse of notation still denoted by $t$, which is equal to $(\Phi^{-1})^{*}t$ outside a compact set of $X$. Define a hermitian metric on $E$ by \begin{equation}\label{definition of H_0} H_0=e^{\frac{\lambda}{n-1}t}\olsi{H}_0. \end{equation} Then we claim that \begin{equation}\label{decay of contraction of curvature} |\Lambda_{\omega}F_{H_0}|=O(r^{-N_0}). \end{equation} From the construction, $F_{H_0}=F_{\olsi{H}_0}-\frac{\lambda dd^c t}{n-1}\operatorname{id}$, where $F_{\olsi{H}_0}$ is a smooth bundle-valued $(1,1)$-form on $\olsi{X}$. Recall that for a 2-form $\theta$, \begin{equation*} \Lambda_{\omega}\theta=\frac{n\theta\wedge\omega^{n-1}}{\omega^n}. \end{equation*} Since we assume that $|\Phi^*(\omega)-\omega_{\mathcal C}|=O(r^{-N_0})$, (\ref{decay of contraction of curvature}) follows from the following estimate on $\mathcal C$: there exists a $\delta>0$ such that \begin{equation}\label{estimate on model case} \left|\Lambda_{\omega_{\mathcal C}}\left(\Phi^*(F_{\olsi{H_0}})-\frac{\lambda}{n-1}\Phi^*dd^c t\right)\right|=O(e^{-\delta t}). \end{equation} By (\ref{complex structure is close}) and (\ref{difference after 2.6}), we can easily show that there exists a $\delta>0$ such that \begin{equation}\label{estimate for exact part} \left|\Phi^*dd^ct-dd^ct\right|=|d((\Phi^*J_{\olsi{X}}-J_{\mathcal D})\circ dt)|=O(e^{-\delta t}).
\end{equation} Using the same argument as we did before Proposition \ref{model case}, we can show that there exists a $\delta>0$ such that \begin{equation}\label{estimate for smooth bundle part} \left|\Phi^*(F_{\olsi{H}_0})-p^*(F_{\olsi{H}_0}|_D)\right|=O(e^{-\delta t}). \end{equation} Then (\ref{estimate on model case}) follows from (\ref{hermitian einstein condition}), (\ref{estimate for exact part}) and (\ref{estimate for smooth bundle part}). \qed \begin{remark}\label{t can be viewed} Recall that we use $S\in H^0(\olsi{X},L_D)$ to denote a defining section of $D$. Then from the definition of $t$, we know that there exists a smooth hermitian metric $h$ on $L_D$ such that $t=-\log|S|_{h}^2$. \end{remark} \begin{remark}\label{full curvature decay} From the above discussion, we also obtain that $|F_{H_0}|=O(r^{1-a})$. In general, we cannot expect a higher decay order for the full curvature tensor $F_{H_0}$ since it has non-vanishing components along the directions tangential to $D$, but if $n=2$ we actually proved that \begin{equation}\label{dim 2 case} |F_{H_0}|_{\omega}=O(r^{-N_0}). \end{equation} \end{remark} From the proof given above, the assumption that $E|_D$ is $c_1(N_D)$-polystable is used crucially to obtain a good initial metric $H_0$ satisfying (1) and (2) in Lemma \ref{good initial metric}, both of which are important for the proof. We now show that the assumption that $E|_D$ is $c_1(N_D)$-polystable is also necessary, subject to the conditions in Lemma \ref{good initial metric}. More precisely, we have \begin{proposition}\label{necessarity} Suppose there is a Hermitian metric $H_0$ on $E$ satisfying: \begin{itemize} \item [(1).] $|\Lambda_{\omega}F_{H_0}|=O(r^{-N_0})$, where as before $N_0$ is the number in (\ref{assumption on asymptotics}), and \item[(2).] there is a hermitian metric $\olsi{H}_0$ on $\olsi{E}$ and a function $f\in C^{\infty}(X)$ such that $H_0=e^f\olsi{H}_0$.
\end{itemize} Then $\olsi H_0|_D$ defines a PHYM metric with respect to $\omega_D\in c_1(N_D)$, i.e., $\olsi E|_D$ is $c_1(N_D)$-polystable. \end{proposition} \begin{proof} By these two assumptions, we have $\left|\ii\Lambda_{\omega}F_{\olsi H_0}+\Delta_{\omega}f \operatorname{id} \right|=O(r^{-N_0})$. In particular, the trace-free part of $\Lambda_{\omega}F_{\olsi H_0}$ decays like $r^{-N_0}$. Since $$F_{\olsi H_0}^{\perp}=F_{\olsi H_0}-\frac{\tr (F_{\olsi H_0})}{\rank E} \operatorname{id}$$ is a smooth bundle-valued (1,1)-form on $\olsi X$, its pull-back under $\Phi$ to $\mathcal D$ is a smooth bundle-valued 2-form whose restriction to $D$ is of type $(1,1)$ and which satisfies $$|\Lambda_{\omega_{F}}\Phi^*(F_{\olsi{H}_0}^{\perp})|=O(r^{-N_0}).$$ From the explicit expression of the K\"ahler form $\omega_{F}$ in (\ref{explicit expression of kahler form}) and the assumption on the potential $F$ in \textit{Assumption 1}, we know that $$\Lambda_{\omega_D}(F_{\olsi{H}_0}^{\perp}|_D)=0.$$ \end{proof} Next we show that the set $\mathcal{P}_{H}$ defined in (\ref{define of P_H}) is unique if we fix the induced metric on the determinant line bundle. More precisely, we have \begin{proposition}\label{uniqueness of P_H} Suppose we have two metrics $H_0$ and $H_1$ satisfying the conditions (1) and (2) in Lemma \ref{good initial metric} and $\det H_0=\det H_1$. Then \begin{equation*} \mathcal P_{H_0}=\mathcal P_{H_1}. \end{equation*} In particular, there exists a constant $C>0$ such that $C^{-1}H_0\leq H_1\leq CH_0$. \end{proposition} \begin{proof} By condition (1) in Lemma \ref{good initial metric}, we know that there are smooth hermitian metrics $\olsi{H}_0$ and $\olsi{H}_1$ and smooth functions $f_0$ and $f_1$ on $X$ such that for $i=0,1$ \begin{equation*} H_i=\olsi{H}_ie^{f_i}. \end{equation*} By a conformal change, we may assume $\det \olsi{H}_0=\det \olsi{H}_1$. Let $h=\olsi{H}_0^{-1}\olsi{H}_1$. Since $\det H_0=\det H_1$, we have \begin{equation*} H_1=H_0h.
\end{equation*} From the proof of Proposition \ref{necessarity}, we know that $\olsi{H}_i|_D$ are PHYM. By the uniqueness of PHYM metrics on compact K\"ahler manifolds, we know that \begin{equation*} \nabla (h|_D)=0, \end{equation*} where $\nabla$ denotes the induced connection on $\operatorname{End}(\olsi{E})$ from the Chern connection on $(\olsi{E},\olsi{H}_0)$. From this and noting that $H_0$ is conformal to an extendable metric, we can check directly that \begin{equation*} |\nabla h|\in L^2(X;\omega,H_0). \end{equation*} Then from the definition of $\mathcal P_{H_0}$, we obtain that $\mathcal P_{H_0}=\mathcal P_{H_1}.$ \end{proof} \section{Proof of the main theorem}\label{proof of the main theorem} We first prove a lemma on the degree vanishing property. \begin{lemma} \label{L1 imply degree 0} Let $(X^n, \omega)$ be a complete K\"{a}hler manifold and $\beta$ be a $d$-closed $(k,k)$-form with $\displaystyle \int_{X} |\beta|\omega^n$ finite for some $1\leq k\leq n-1$. Suppose $\omega=d\eta$ for some smooth 1-form $\eta$ with $|\eta|=O(r)$. Then $\displaystyle\int_X \beta\wedge \omega^{n-k}=0$. \end{lemma} \begin{proof} Fix a base point $p\in X$ and let $\rho_R$ be a smooth cut-off function which is 1 on $B_R(p)$, 0 outside $B_{2R}(p)$ and satisfies $|\nabla \rho_R|\leq \frac{C}{R}$, where $C$ is a constant independent of $R$. Integrating by parts, we have \begin{equation*} \begin{aligned} \left|\int_X \beta\wedge\omega^{n-k}\right|&=\left|\lim_{R\rightarrow\infty}\int_{B_{2R}(p)} \rho_R\beta\wedge\omega^{n-k}\right|\\ &\leq C\lim_{R\rightarrow\infty}\int_{B_{2R}(p)\backslash B_R(p)}\frac{1}{R} \left|\beta\wedge \omega^{n-k-1}\wedge \eta\right|\omega^n, \end{aligned} \end{equation*} which, since $|\eta|=O(r)$, is bounded by $C\lim_{R\rightarrow\infty}\int_{B_{2R}(p)\backslash B_R(p)}\left|\beta\right|\omega^n$. This limit is 0 since $|\beta| \in L^1$.
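In more detail, the integration by parts above uses $d\beta=0$, $d\omega=0$ and $\omega=d\eta$: since $\rho_R\,\beta\wedge\eta\wedge\omega^{n-k-1}$ has compact support, Stokes' theorem gives
\begin{equation*}
\int_X \rho_R\,\beta\wedge\omega^{n-k}=\int_X \rho_R\,\beta\wedge d\eta\wedge\omega^{n-k-1}=-\int_X d\rho_R\wedge\beta\wedge\eta\wedge\omega^{n-k-1},
\end{equation*}
and $d\rho_R$ is supported in $B_{2R}(p)\backslash B_R(p)$ with $|d\rho_R|\leq \frac{C}{R}$.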
\end{proof} From now on, we assume $\omega=\omega_0+dd^c\varphi$ is a K\"ahler form satisfying \textit{Assumption 1} in Section \ref{assumption on the asymptotic behaviour}. Note that we only proved that $dd^c\varphi=d\psi$ for a smooth form $\psi$ with $|\psi|=O(r^{1+\frac{1}{a}})$ (see Remark \ref{a new 1-form potential}). Therefore we cannot apply Lemma \ref{L1 imply degree 0} directly. Typically, however, we have a definite decay order for $|\beta|$, so we can still use integration by parts to show some degree vanishing properties. More precisely, we have \begin{lemma}\label{a more precise degree vanishing} Let $\beta$ be a $d$-closed $(k,k)$-form for some $1\leq k\leq n-1$, satisfying $|\beta|=O(r^{-N_0})$. Then \begin{equation*} \int_X\beta\wedge(dd^c\varphi)^{n-k}=0. \end{equation*} \end{lemma} \begin{proof} By a similar integration by parts argument as in the proof of Lemma \ref{L1 imply degree 0}, it suffices to show that \begin{equation*} \lim_{R\rightarrow\infty}\frac{1}{R} \int_{B_{2R}(p)\backslash B_R(p)}\left|\beta\wedge (dd^c\varphi)^{n-k-1}\wedge \psi\right|(dd^c\varphi)^n=0. \end{equation*} This follows from the facts that $|\beta|=O(r^{-N_0})$, $|\psi|=O(r^{1+\frac{1}{a}})$ and that the volume growth order of $\omega$ is at most 2. \end{proof} The following two lemmas are crucial for us since they relate information on $\olsi X$ to that on $X$. \begin{lemma}\label{degree coincide} Let $H_0$ be the metric constructed in Lemma \ref{good initial metric}. One has the following equality: \begin{equation}\label{degree equality} \int_{X}\frac{\ii}{2\pi}\tr(F_{H_0})\wedge \omega^{n-1}=\int_{\olsi X}c_1(\olsi E)\wedge {[\omega_0]}^{n-1}. \end{equation} \end{lemma} \begin{proof} Firstly, recall that $$n\tr(F_{H_0})\wedge \omega^{n-1}=\Lambda_{\omega}\tr(F_{H_0})\omega^n.$$ By the construction in Lemma \ref{good initial metric}, we know that $|\Lambda_{\omega}\tr(F_{H_0})|=O(r^{-N_0})$.
Since the volume growth order of $\omega$ is at most 2, we know that $\Lambda_{\omega}\tr(F_{H_0})$ is absolutely integrable. Therefore the left hand side of (\ref{degree equality}) is well-defined. By Chern-Weil theory, for any smooth hermitian metric $\olsi{H}_0$ on $\olsi{E}$ we have \begin{equation*} \int_{\olsi X}c_1(\olsi E)\wedge {[\omega_0]}^{n-1}=\int_{\olsi X}\frac{\ii}{2\pi}\tr (F_{\olsi H_0})\wedge \omega_0^{n-1}. \end{equation*} By the construction \eqref{definition of H_0}, $H_0=e^{Ct}\olsi{H}_0$ for some constant $C$, with $t$ defined in Section \ref{existence of good initial metric}. Moreover, by Remark \ref{t can be viewed}, $t=-\log |S|^2_h$ for some smooth hermitian metric $h$ on $L_D$. By Lemma \ref{poincare lelong}, we obtain that $\int_X dd^ct\wedge \omega_0^{n-1}=0$. So we have \begin{equation}\label{a middle equality} \int_{\olsi X}c_1(\olsi E)\wedge {[\omega_0]}^{n-1}=\int_{X}\frac{\ii}{2\pi}\tr (F_{H_0})\wedge \omega_0^{n-1}. \end{equation} Using (\ref{a middle equality}) and $\omega=\omega_0+dd^c \varphi$, to prove (\ref{degree equality}) it suffices to show that for any $k=1,\cdots,n-1$, \begin{equation}\label{to be proved} \int_X\tr(F_{H_0})\wedge \omega_0^{n-1-k}\wedge(dd^c\varphi)^{k}=0. \end{equation} \noindent\textit{Case 1.} $1\leq k\leq n-2$. Since $\omega_0$ vanishes when restricted to $D$, by Lemma \ref{vanishing at infinity decay} we know that $|\omega_0|=O(r^{-N_0})$. Combining this with Remark \ref{full curvature decay}, we know that $\tr(F_{H_0})\wedge \omega_0^{n-1-k}$ is a closed $(n-k,n-k)$-form with decay order at least $r^{-N_0}$. Therefore Lemma \ref{a more precise degree vanishing} implies that its integral is 0.\\ \noindent\textit{Case 2. }$k=n-1$. If $n=2$, then by (\ref{dim 2 case}) we can still apply Lemma \ref{a more precise degree vanishing}. If $n\geq 3$, note that though $|\Lambda_{\omega}\tr(F_{H_0})|=O(r^{-N_0})$, $|\tr(F_{H_0})|$ is not in $L^1$ in general.
So we cannot apply Lemma \ref{a more precise degree vanishing} directly. Instead we shall use the asymptotic behaviour of $\tr(F_{H_0})$ obtained from the construction. Integrating by parts and pulling back via $\Phi$, we know that \begin{equation*} \begin{aligned} \int_X\tr(F_{H_0})\wedge(dd^c\varphi)^{n-1}&=\lim_{\epsilon_i\rightarrow0}\int_{\Phi(|\xi|_{h_D}=\epsilon_i)}\tr(F_{H_0})\wedge (dd^c \varphi)^{n-2}\wedge d^c\varphi\\ &=\lim_{\epsilon_i\rightarrow0}\int_{|\xi|_{h_D}=\epsilon_i}\Phi^*\left(\tr(F_{H_0})\wedge(dd^c\varphi)^{n-2}\wedge d^c\varphi\right). \end{aligned} \end{equation*} Then by (\ref{assumption on asymptotics}), Lemma \ref{a good 1-form potential} and the assumption $N_0>8$, we obtain that the right hand side of the above equality equals \begin{equation*} \lim_{\epsilon_i\rightarrow0}\int_{|\xi|_{h_D}=\epsilon_i}\Phi^*(\tr(F_{H_0}))\wedge(dd^cF(t))^{n-2}\wedge d^cF(t). \end{equation*} By (\ref{estimate for exact part}) and (\ref{estimate for smooth bundle part}), we know that it equals \begin{equation}\label{almost done} \lim_{\epsilon_i\rightarrow0}\int_{|\xi|_{h_D}=\epsilon_i}\left(p^*\tr(F_{\olsi{H}_0})-\frac{\lambda \rank(E)}{n-1} dd^ct\right) \wedge(dd^cF(t))^{n-2}\wedge d^cF(t). \end{equation} Note that when restricted to a level set of $t$, $$(dd^cF(t))^{n-2}=F'(t)^{n-2}\omega_D^{n-2}.$$ Therefore $\left(p^*\tr(F_{\olsi{H}_0})-\frac{\lambda \rank(E)}{n-1} dd^ct\right) \wedge(dd^cF(t))^{n-2}=0$ by (\ref{hermitian einstein condition}). \end{proof} \begin{lemma}\label{key observation} Suppose $\olsi{E}|_D$ is $c_1(N_D)$-polystable and let $H_0$ be the metric constructed in Lemma \ref{good initial metric}. \begin{itemize} \item [(1).] Let $\olsi S$ be a coherent reflexive subsheaf of $\olsi E$. If $\olsi S|_D$ is locally free and a splitting factor of $\olsi E|_D$, then $\pp \pi_S^{H_0}\in L^2(X;\omega, H_0)$. \item [(2).] Let $\pi\in W^{1,2}_{loc}(X,\olsi E^*\otimes\olsi E;\omega,H_0)$ be a weakly holomorphic projection map.
If $\pp \pi \in L^2(X;\omega,H_0)$, then there exists a coherent reflexive subsheaf $\olsi S$ of $\olsi E$ such that $\pi=\pi^{H_0}_S$ a.e. and $\olsi S|_D$ is a splitting factor of $\olsi E|_D$. \end{itemize} \end{lemma} \begin{proof} A crucial point here is that $H_0$ is conformal to a smooth extendable metric $\olsi H_0$. In particular, for a coherent subsheaf $S$ of $E$, the projections induced by $H_0$ and $\olsi H_0$ are the same. Note that by \cite[Lemma 3.23 and Remark 3.25]{chen-sun}, for every coherent reflexive subsheaf $\olsi{S}$ of $\olsi{E}$, $\olsi{S}|_D$ is torsion free and can be naturally viewed as a subsheaf of $\olsi{E}|_D$. (1) Let $\pi=\pi_{\olsi S}^{\olsi H_0}$. Then $\pi$ is smooth in a neighborhood of $D$ and $\pp \pi|_D=0$ by assumption. Note that $\pi_S^{H_0}=\pi|_X$, so it suffices to show $\pp \pi \in L^2(X,\omega, \olsi{H}_0)$. Fix small balls $U_i$ of $\olsi X$ covering $D$ such that there are holomorphic coordinates $\{z_1,\cdots,z_{n-1},w\}$ on each $U_i$ with $D\cap U_i=\{w=0\}$ and $\olsi E$ is trivial on each ball $U_i$. Under these coordinates and trivializations we can write $$\pp \pi=\pp_z\pi \dd \bar z+\pp_w\pi \dd \bar w,$$ where we view $\pp_z \pi$ and $\pp_w \pi$ as matrices of smooth functions and $\pp_z \pi|_{w=0}=0$. So we have $|\pp_z\pi|\leq C|w|$ and $|\pp_w\pi|\leq C$. Then the result follows from the explicit estimate given in (\ref{estimate for 1-forms}). (2) Given a projection map $\pi\in W^{1,2}_{loc}(X,\olsi E^*\otimes\olsi E;\omega,H_0)$ with $ \pp \pi \in L^2(X;\omega,H_0)$, we first prove the following: \begin{claim} $\pi\in W^{1,2}(\olsi X;\omega_{\olsi X}, \olsi H_0)$ for a fixed (hence any) smooth K\"ahler metric $\omega_{\olsi X}$ on $\olsi X$. \end{claim} Since $|\pi|_{\olsi H_0}\leq 1$ and by \cite[Lemma 7.3]{demailly-big-book}, it suffices to show $\pp\pi\in L^2(\olsi X; \omega_{\olsi X}, \olsi H_0)$. 
By (\ref{complex structure is close}) and (\ref{closeness of metric tensor}), we may assume that in local coordinates around $D$ the K\"ahler metric $\omega$ is exactly given by the model space. We choose local holomorphic coordinates $\underline{z}=\{z_i\}_{i=1}^{n-1}$ on the smooth divisor $D$ and fix a local holomorphic trivialization $e_0$ of $N_D$ with $|e_0|_{h_D}=e^{-\psi}$, where $\psi$ is a smooth function on $D$ satisfying $\ii \partial\pp \psi=\omega_D$. Then we get local holomorphic coordinates $\{z_1,\cdots,z_{n-1},w\}$ on $\mathcal C$ by writing a point $\xi=we_0(\underline z)$. Choose a basis of $(0,1)$-forms $\dd \bar z_1, \cdots,\dd \bar z_{n-1}, \frac{\dd \bar w}{\bar w}-\pp_{z_i} \psi\dd \bar z_i$. Then we can write \begin{equation*} \pp \pi=f_i \dd \bar z_i+h(\frac{\dd \bar w}{\bar w}-\pp_{z_i} \psi \dd \bar z_i), \end{equation*} where $f_i$ and $h$ are sections of $\operatorname {End}(E)$. Notice that the $\dd \bar z_i$ are perpendicular to $\frac{\dd \bar w}{\bar w}-\pp_{z_j} \psi\dd \bar z_j$. Since $\pp\pi$ is in $L^2$ with respect to $\omega$, by (\ref{estimate for 1-forms}) we know that \begin{equation}\label{integrability} \mathlarger\int \left(|f_i|^2(-\log |w|)^{1-a}+|h|^2(-\log |w|)^{2-a}\right)\frac{(-\log |w|)^{(n-1)(a-1)+a-2}}{|w|^2}d\lambda< \infty. \end{equation} Then we know that $f_i-h\pp_{z_i}\psi,\ \frac{h}{\bar w }$ are all $L^2$-integrable with respect to the Lebesgue measure. Therefore the claim is proved: $$\pp\pi=(f_i-h\pp_{z_i}\psi)\dd \bar z_i+\frac{h}{\bar w}\dd \bar w\in L^2(\olsi X;\omega_{\olsi X},\olsi H_0).$$ Then Uhlenbeck-Yau's result (Theorem \ref{uhlenbeckyau}) implies that there exists a coherent subsheaf $\olsi S$ of $\olsi E$ such that $\pi=\pi^{\olsi H_0}_{\olsi S}$ outside the singular set of $\olsi S$. Taking the double dual, we may assume $\olsi S$ is reflexive.
By the integrability condition (\ref{integrability}), $\pp \pi_{\olsi S}^{\olsi H_0}|_D=0$, which means that $\olsi S|_D$ is a splitting factor of $\olsi E|_D$ since $\olsi E|_D$ is polystable. \end{proof} Now we are ready to prove the main theorem. We decompose it into two propositions. \begin{proposition}\label{only if part} Suppose there exists an $\omega$-PHYM metric $H$ in $\mathcal P_{H_0}$. Then $\olsi{E}$ is $\left(c_1(D),[\omega_0]\right)$-stable. \end{proposition} \begin{proof} Suppose there is a reflexive subsheaf $\olsi S$ of $\olsi E$ with $0< \rank(\olsi S)< \rank (\olsi E)$ such that $\mu(\olsi S,c_1(D))\geq \mu(\olsi E,c_1(D))$; we need to show that $\mu(\olsi S, [\omega_0])<\mu(\olsi E, [\omega_0])$. By \cite{mistretta}, for any coherent reflexive sheaf $\mathcal E$ on $\olsi X$ we have \begin{equation*} \mu(\mathcal E,c_1(D))=\mu(\mathcal E|_D,c_1(D)|_D)=\mu(\mathcal E|_D,c_1(N_D)). \end{equation*} (When $\mathcal E$ is a vector bundle, this follows from the fact that the first Chern class $c_1(D)$ is the Poincar\'e dual of the homology class defined by the divisor $D$. For a general reflexive sheaf, the key point is to show that $c_1(\mathcal E)|_D=c_1(\mathcal E|_D)$ using the fact that $\mathcal E$ is locally free outside an analytic set of (complex) codimension at least 3.) Therefore we have \begin{equation}\label{degree exceeding} \mu(\olsi S|_D,c_1(N_D))\geq \mu(\olsi E|_D,c_1(N_D)). \end{equation} By assumption, $\olsi E|_D$ is $c_1(N_D)$-polystable, so (\ref{degree exceeding}) implies that $\olsi S|_D$ is locally free and is a splitting factor of $\olsi E|_D$. Then by Lemma \ref{key observation}, we have \begin{equation*} \pp\pi_S^{H_0}\in L^2(X,\omega, H_0). \end{equation*} \begin{claim} $\pp\pi_S^H\in L^2(X,\omega,H_0)=L^2(X,\omega,H)$. \end{claim} For simplicity of notation, in the following we omit the dependence on $S$.
By the definition of $\mathcal P_{H_0}$ and $H\in \mathcal P_{H_0}$, we know that $H=H_0 e^s$ with $\left\Vert s\right\Vert_{L^{\infty}}+\left\Vert \pp s\right\Vert_{L^2}<\infty$. The claim follows directly from the following pointwise inequality (outside the singular set $\Sigma$ of $S$): \begin{equation}\label{projection with respect to different metrics} |\pp\pi^H|\leq C \left(|\pp s|+|\pp \pi^{H_0}|\right), \end{equation} where $C$ is a constant independent of the point and all the norms are with respect to $H_0$. Let $r_0$, $r$ denote the ranks of $S$ and $E$ respectively. Near any given point $p\in X\backslash \Sigma$, we can find a local holomorphic basis $\{e_1,\cdots,e_{r_0},e_{r_0+1},\cdots,e_r\}$ of $E$ such that \begin{equation*} \begin{aligned} &S=\operatorname{Span}\{e_1,\cdots,e_{r_0}\},\\ &\left<e_i,e_j\right>_{H_0}(p)=\delta_{ij},\\ &\pp \left<e_i,e_j\right>_{H_0}(p)=0\ \text{for $1\leq i,j\leq r_0$ and $r_0+1\leq i,j\leq r$ }. \end{aligned} \end{equation*} In the following we use the Einstein summation convention and use $i,j$ to denote indices from 1 to $r$ and $\alpha,\beta$ to denote indices from 1 to $r_0$. Under this basis $\pi^{H_0}$ can be written as $$e_{\alpha}^{\vee}\otimes e_{\alpha}+H_{0,i\beta}\widetilde H_0^{\beta\alpha}e_i^{\vee}\otimes e_{\alpha},$$ where we view $H_0=(H_{0,ij})=(\left<e_i,e_j\right>_{H_0})$ as a matrix and $\widetilde H_0=(\widetilde H_{0,\alpha\beta})=(\left<e_{\alpha},e_{\beta}\right>_{H_0})$ as a submatrix of $H_0$. Then \begin{equation*} |\pp\pi^{H_0}|(p)=\sum_{i,\alpha}|\pp H_{0,i\alpha}|(p). \end{equation*} Similarly, $\pi^H$ can be written as $e_{\alpha}^{\vee}\otimes e_{\alpha}+H_{i\beta}\widetilde H^{\beta\alpha}e_i^{\vee}\otimes e_{\alpha}$. Note that as a matrix $H=H_0h$, where $h$ is the matrix representation of $e^s$ under the basis $\{e_i\}_{i=1}^r$.
Since $\left\Vert s\right\Vert_{L^{\infty}}<\infty$, we have \begin{equation*} \begin{aligned} |\pp\pi^H|(p)&\leq C\left(\sum|\pp H_{i\alpha}|(p)+|\pp h_{ij}|(p)\right)\\ &\leq C\left(\sum |\pp H_{0,i\alpha}|(p)+|\pp h_{ij}|(p)\right)\\ &\leq C\left(|\pp\pi^{H_0}|(p)+|\pp s|(p)\right), \end{aligned} \end{equation*} which gives (\ref{projection with respect to different metrics}). Let $\pi=\pi^H_S$. Using the Chern-Weil formula and the fact that $H\in \mathcal P_{H_0}$ is PHYM, we have \begin{equation*} \begin{aligned} \Lambda_{\omega}\tr(F_{S,H})&=\Lambda_{\omega}\tr(F_{E,H}\circ\pi)-|\pp\pi|^2\\ &=\frac{\rank (S)}{\rank (E)}\Lambda_{\omega}\tr(F_{E,H_0})-|\pp\pi|^2, \end{aligned} \end{equation*} and consequently $\Lambda_{\omega}\tr(F_{S,H})$ is in $L^1$. \begin{claim} \begin{equation*} \frac{1}{\rank (S)}\int_X\tr(F_{S,H})\wedge \omega^{n-1}=\mu(\olsi S,{[\omega_0]}). \end{equation*} \end{claim} Assume this for a moment; then by Lemma \ref{degree coincide}, we know that \begin{equation*} \mu(\olsi S,[\omega_0])\leq \mu(\olsi E,[\omega_0]), \end{equation*} and equality holds if and only if $\pp\pi =0$. Suppose $\pp \pi=0$. Since $\left|\pi\right|_{\olsi H_0}=\left|\pi\right|_{H_0}\leq C\left|\pi\right|_H\leq C$, again by \cite[Lemma 7.3]{demailly-big-book} there is a global holomorphic section of $\operatorname {End}(\olsi E)$, which is still denoted by $\pi$, such that $\pi=\pi^H_S$ a.e. and $\pi^2=\pi$. Note that since $\rank (\pi)=\tr(\pi)$ is real valued and holomorphic, it follows that $\rank \pi$ is a constant. Thus $\olsi E$ holomorphically splits as the direct sum of $\ker \pi$ and $\Im \pi$, which contradicts our assumption that $\olsi E$ is irreducible. Therefore we have proved that \begin{equation*} \mu(\olsi S,{[\omega_0]})< \mu(\olsi E,{[\omega_0]}).
\end{equation*} \noindent \textit{Proof of the claim}: Since $H\in \mathcal P_{H_0}$, we have \begin{equation*} \tr(F_{S,H})-\tr(F_{S,H_0})=\partial \pp u, \end{equation*} for a bounded real valued smooth function $u$ with $|\nabla u|\in L^2$. By Lemma \ref{exact integral imply degree 0}, \begin{equation*} \int\tr(F_{S,H})\wedge\omega^{n-1}=\int\tr(F_{S,H_0})\wedge\omega^{n-1}. \end{equation*} By the same argument as in Lemma \ref{degree coincide}, we can show \begin{equation*} \frac{1}{\rank (S)}\int\tr(F_{S,H_0})\wedge\omega^{n-1}=\mu(\olsi S,{[\omega_0]}). \end{equation*} This completes the proof of the claim. \end{proof} \begin{proposition}\label{existence result} Let $H_0$ be the metric constructed in Lemma \ref{good initial metric}. Suppose $\olsi E$ is $\left(c_1(D),{[\omega_0]}\right)$-stable. Then there exists a unique $\omega$-PHYM metric $H$ in $\mathcal P_{H_0}$. \end{proposition} \begin{proof} We first prove uniqueness. Suppose we have two $\omega$-PHYM metrics $H_1$, $H_2\in \mathcal P_{H_0}$. Let $h=H_1^{-1}H_2$. By the definition of $\mathcal P_{H_0}$, we know that $\det h=1$, $h$ is bounded from above and below, and $|\pp h| \in L^2$. Then by taking the trace of the differential equality in Lemma \ref{basic differential inequlaties}-(2), we get \begin{equation*} \Delta_{\omega}\tr(h)=-|\pp hh^{-\frac{1}{2}}|^2. \end{equation*} By Lemma \ref{exact integral imply degree 0}, $$\int|\pp h h^{-\frac{1}{2}}|^2=-\int\Delta_{\omega} \tr(h)=0.$$ Therefore $\pp h=0$, and since $h$ is self-adjoint with respect to $H_i$, it is parallel with respect to the Chern connection determined by $(\pp,H_i)$. Then its eigenspaces give a holomorphic decomposition of $\olsi E$, which contradicts the assumption that $\olsi E$ is irreducible unless $h$ is a multiple of the identity map. Since $\det h=1$, it must be that $h$ is the identity map, i.e. $H_1=H_2$. For the existence part, we follow Simpson and Mochizuki's argument \cite{simpson,mochizuki}.
For completeness, we include some details. Let $\{X_i\}$ be an exhaustion of $X$ by compact domains with smooth boundary, and solve the Dirichlet problem on each $X_i$ using Donaldson's theorem (Theorem \ref{donaldson dirichlet}). Then we have a sequence of PHYM metrics $H_i$ on $E|_{X_i}$ such that $H_i|_{\partial X_i}=H_0|_{\partial X_i}$ and $\det H_i=\det H_0$. Let $s_i$ be the endomorphism determined by $H_i=H_0h_i=H_0e^{s_i}$. Then we have $s_i|_{\partial X_i}=0$ and $\tr(s_i)=0$. We argue by contradiction to prove a uniform $C^0$-estimate for $s_i$. First note that by Lemma \ref{basic differential inequlaties}, $e^{s_i}$ satisfies the elliptic differential inequality \begin{equation} \Delta\log(\tr(e^{s_i}))\leq |\Lambda F_{H_0}^{\perp}|, \end{equation} and therefore $\tr(e^{s_i})$ satisfies the weighted mean value inequality in Lemma \ref{weighted sobolev inequality}. Since $\tr(e^{s_i})$ and $|s_i|$ are mutually bounded, we know that $|s_i|$ also satisfies the weighted mean value inequality (\ref{mean value inequality}). Lemma \ref{weighted sobolev inequality} plays an essential role since it ensures that after normalization we can have a nontrivial limit in $W^{1,2}_{loc}$. Suppose there is a sequence $s_i$ such that $\sup_{X_i}| s_i|_{H_0}\rightarrow \infty$ as $i\rightarrow \infty$. Then by Lemma \ref{weighted sobolev inequality}, we obtain $$l_i=\int_{X_i}|s_i|_{H_0}(1+r)^{-N_0}\rightarrow\infty.$$ Let $u_i=l_i^{-1}s_i$. Then by Lemma \ref{weighted sobolev inequality} again, we obtain that there is a constant $C$ independent of $i$ such that \begin{equation}\label{nonzero condition} \text{$\int_{X_i}|u_i|(1+r)^{-N_0}=1$ and $|u_i|\leq C$}, \end{equation} where the norms are with respect to the background metric $H_0$. Then following Simpson's argument, we can show the following: \begin{lemma} After passing to a subsequence, $u_i$ converge weakly in $W^{1,2}_{loc}$ to a nonzero limit $u_{\infty}$.
The limit $u_{\infty}$ satisfies the following property: if $\Phi: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ is a positive smooth function such that $\Phi\left(\lambda_{1}, \lambda_{2}\right)<\left(\lambda_{1}-\lambda_{2}\right)^{-1}$ whenever $\lambda_{1}>\lambda_{2}$, then \begin{equation}\label{inequality for limits} \sqrt{-1} \int_{X} \tr(u_{\infty} \Lambda F_{H_0})+\int_{X}\left<\Phi\left(u_{\infty}\right)\left(\pp u_{\infty}\right), \pp u_{\infty}\right>_{H_0} \leq 0. \end{equation} \end{lemma} \begin{proof} By Theorem \ref{phym metrics minimizing donaldson functional}, \begin{equation}\label{inequality of X_i} \sqrt{-1} \int_{X_i} \tr(u_{i} \Lambda F_{H_0})+ l_{i} \int_{X_i}\left<\Psi\left(l_{i} u_{i}\right)\left(\pp u_{i}\right), \pp u_{i}\right>_{H_0} \leq 0. \end{equation} By the definition of $\Psi$ in (\ref{definition of Phi}), we know that as $l \rightarrow \infty$, $l \Psi\left(l \lambda_{1}, l \lambda_{2}\right)$ increases monotonically to $\left(\lambda_{1}-\lambda_{2}\right)^{-1}$ if $\lambda_{1}>\lambda_{2}$ and to $\infty$ if $\lambda_{1} \leq \lambda_{2}$. Fix a $\Phi$ as in the statement of the lemma. We know that for all $A>0$ there exists $l_A$ such that if $|\lambda_i|\leq A$ and $l>l_A$, then we have \begin{equation}\label{property for functions} \Phi\left(\lambda_{1}, \lambda_{2}\right)<l \Psi\left(l \lambda_{1}, l \lambda_{2}\right). \end{equation} Since $\sup\left|u_{i}\right|$ is bounded, the eigenvalues of $u_i$ are also bounded. Then by (\ref{inequality of X_i}) and (\ref{property for functions}), we obtain that for $i$ sufficiently large \begin{equation}\label{inequality for u_i} \sqrt{-1} \int_{X_i} \tr(u_{i} \Lambda F_{H_0})+ \int_{X_i}\left<\Phi\left(u_{i}\right)\left(\pp u_{i}\right), \pp u_{i}\right>_{H_0} \leq 0. \end{equation} Again since $\sup|u_i|$ is bounded, we can find $\Phi$ satisfying the assumption in the lemma and $\Phi(u_i)=c_0$ for all $i$, where $c_0$ is a fixed small positive number.
Then by (\ref{inequality for u_i}) and the construction of $H_0$, there exists a positive constant $C$ such that $$\int_{X_i}|\pp u_i|^2\leq C.$$ Therefore by a diagonal sequence argument and after passing to a subsequence, we may assume $u_i$ converge weakly in $W^{1,2}_{loc}$ to a limit $u_{\infty}$ with $\int_X|\pp u_{\infty}|^2\leq C$. We claim that $u_{\infty}\neq 0$. Indeed by (\ref{nonzero condition}), there exists a compact set $K\subseteq X$ independent of $i$ such that \begin{equation*} \int_{K}|u_i|(1+r)^{-N_0}\geq \frac{1}{2}. \end{equation*} Since on compact sets the embedding from $W^{1,2}$ to $L^1$ is compact, after taking the limit we get $\int_K|u_{\infty}|(1+r)^{-N_0}\geq \frac{1}{2}$. In particular $u_{\infty}\neq 0$. Next we prove (\ref{inequality for limits}). By the uniform boundedness of $u_i$, the $O(r^{-N_0})$ decay property of $|\Lambda F_{H_0}|$ and the nonnegativity of the second term of the left hand side in (\ref{inequality for u_i}), we know that there exists $\epsilon_i\rightarrow 0$ such that for any $j\geq i$, we have \begin{equation*} \sqrt{-1} \int_{X_i} \tr(u_{j} \Lambda F_{H_0})+ \int_{X_i}\left<\Phi\left(u_{j}\right)\left(\pp u_{j}\right), \pp u_{j}\right>_{H_0} \leq \epsilon_i. \end{equation*} Note that $\left<\Phi\left(u_{j}\right)\left(\pp u_{j}\right), \pp u_{j}\right>_{H_0}=|\Phi^{\frac{1}{2}}(u_j)(\pp u_j)|_{H_0}^2.$ By \cite[Proposition 4.1]{simpson}, we know that on each fixed $X_i$, $\Phi^{\frac{1}{2}}(u_j)\rightarrow \Phi^{\frac{1}{2}}(u_{\infty})$ in $\operatorname{Hom}\left(L^{2}, L^{q}\right)$ for any $q<2$. Since $\pp u_j$ converge weakly in $L^2(X_i)$ to $\pp u_{\infty}$, we obtain that $\Phi^{\frac{1}{2}}(u_j)(\pp u_j)$ converge weakly to $\Phi^{\frac{1}{2}}(u_{\infty})(\pp u_{\infty})$ in $L^q(X_i)$ for any $q<2$.
Then we know that for any $q<2$, \begin{equation*} \begin{aligned} \left\Vert\Phi^{\frac{1}{2}}(u_{\infty})(\pp u_{\infty})\right\Vert_{L^q(X_i)}^2&\leq \liminf_{j\rightarrow\infty}\left\Vert\Phi^{\frac{1}{2}}(u_{j})(\pp u_{j})\right\Vert_{L^q(X_i)}^2\\ &\leq\operatorname{Vol}(X_i)^{\frac{2}{q}-1}\liminf_{j\rightarrow \infty}\left\Vert\Phi^{\frac{1}{2}}(u_{j})(\pp u_{j})\right\Vert_{L^2(X_i)}^2\\ &\leq\operatorname{Vol}(X_i)^{\frac{2}{q}-1}\left(\epsilon_i-\lim_{j\rightarrow \infty}\ii\int_{X_i}\tr(u_j\Lambda F_{H_0})\right)\\ &\leq \operatorname{Vol}(X_i)^{\frac{2}{q}-1}\left(\epsilon_i-\ii\int_{X_i}\tr(u_{\infty}\Lambda F_{H_0})\right). \end{aligned} \end{equation*} Letting $q\rightarrow2$, we obtain \begin{equation*} \ii\int_{X_i}\tr(u_{\infty}\Lambda F_{H_0})+\left\Vert\Phi^{\frac{1}{2}}(u_{\infty})(\pp u_{\infty})\right\Vert_{L^2(X_i)}^2\leq \epsilon_i. \end{equation*} Letting $i\rightarrow\infty$, the inequality (\ref{inequality for limits}) is proved. \end{proof} Simpson's argument in \cite[Lemma 5.5 and Lemma 5.6]{simpson} can be applied verbatim to the infinite volume case, so we have \begin{lemma}[{\cite{simpson}}] Let $u_{\infty}$ be a limit obtained in the previous lemma. Then we have \begin{itemize} \item [(1)] The eigenvalues of $u_{\infty}$ are constant and not all equal. \item [(2)] Let $\Phi: \mathbb{R} \times \mathbb{R} \longrightarrow[0, \infty)$ be a $C^{\infty}$-function such that $\Phi\left(\lambda_{i}, \lambda_{j}\right)=0$ if $\lambda_{i}>\lambda_{j}$. Then $\Phi\left(u_{\infty}\right)\left(\bar{\partial} u_{\infty}\right)=0$. \end{itemize} \end{lemma} Let $\lambda_1 \leq \lambda_2\leq \cdots\leq \lambda_{\rank(E)}$ denote the eigenvalues of $u_{\infty}$. Let $\gamma$ be an open interval between the eigenvalues (since the eigenvalues of $u_{\infty}$ are not all equal by the previous lemma, there exists such a nonempty interval).
We choose a $C^{\infty}$-function $p_{\gamma}: \mathbb{R} \longrightarrow[0, \infty)$ such that $p_{\gamma}\left(\lambda_{i}\right)=1$ if $\lambda_{i}<\gamma$, and $p_{\gamma}\left(\lambda_{i}\right)=0$ if $\lambda_{i}>\gamma$. Set $\pi_{\gamma}:=p_{\gamma}\left(u_{\infty}\right)$; see Section \ref{definition of new endmorphisms} for the definition. Then one can easily show that \cite{simpson, mochizuki} \begin{itemize} \item [(1)] $\pi_{\gamma}^2=\pi_{\gamma}$, $(\operatorname{id}-\pi_{\gamma})\circ\pi_{\gamma} =0$ and $\pi_{\gamma}$ is self-adjoint with respect to $H_0$. \item [(2)] $\int_X|\pp\pi_{\gamma}|^2<\infty$. \end{itemize} Moreover using (\ref{inequality for limits}), Simpson proved that \begin{lemma}[{\cite{simpson}}]\label{degree exceed} There exists at least one $\gamma$ such that \begin{equation*} \frac{1}{\tr(\pi_{\gamma})}\left(\ii\int_X\tr(\pi_{\gamma}\Lambda F_{H_0})-\int_X|\pp\pi_{\gamma}|^2\right)\geq \frac{1}{\rank(E)}\ii\int_X\tr(\Lambda F_{H_0}). \end{equation*} \end{lemma} By Lemma \ref{key observation}, we get a filtration of $\olsi E$ by coherent reflexive subsheaves $\olsi S_i$ whose restrictions to $D$ are splitting factors of $\olsi E|_D$. Since we assume that $\olsi{E}|_D$ is $c_1(N_D)$-polystable, we know that for every $i$ \begin{equation*} \mu(\olsi S_i|_D,c_1(N_D))=\mu(\olsi E|_D, c_1(N_D)). \end{equation*} Then again by \cite{mistretta}, we have \begin{equation}\label{first degree coincides} \mu(\olsi S_i,c_1(D))=\mu(\olsi E, c_1(D)). \end{equation} Note that Lemma \ref{degree exceed} is equivalent to the statement that there exists at least one $\olsi S_i$ such that \begin{equation*} \int_X\tr(F_{S_i,H_0})\wedge\omega^{n-1}\geq \int_X\tr(F_{E,H_0})\wedge\omega^{n-1}. \end{equation*} Then by Lemma \ref{degree coincide}, \begin{equation} \mu(\olsi S_i,{[\omega_0]})\geq \mu(\olsi E,{[\omega_0]}), \end{equation} which contradicts the $(c_1(D),{[\omega_0]})$-stability assumption.
Therefore we do have a uniform $C^0$-estimate for $s_i$. Bando-Siu's interior regularity result (Theorem \ref{bando-siu interior estimate}) can be applied to get local uniform estimates for all derivatives of $s_i$. Then we can take limits to get a smooth section $s\in \operatorname{End}(E)$, which is self-adjoint with respect to $H_0$, satisfies $\tr(s)=0$ and, more importantly, \begin{equation*} \Vert s\Vert_{L^{\infty}}<\infty\ \text{and $H=H_0e^{s}$ is a PHYM metric}. \end{equation*} Then we use Mochizuki's argument in \cite[Section 2.8]{mochizuki} to show that \begin{equation*} |\pp s|\in L^2(X,\omega, H_0). \end{equation*} Indeed, taking the trace of the equality in Lemma \ref{basic differential inequlaties}-(2) and noting that $H_i=H_0h_i$ is PHYM, we have \begin{equation}\label{to show L^2} \Delta \tr(h_i)=-\tr(h_i\ii\Lambda F_{H_0}^{\perp})-|h_i^{\frac{1}{2}}\pp(h_i)|^2. \end{equation} Since $\det h_i=1$ and $h_i|_{\partial X_i}=\operatorname{id}$, we know that $\nabla_{\nu_i}\tr(h_i)\leq 0$, where $\nu_i$ denotes the outward unit normal vector of $\partial X_i$. Integrating (\ref{to show L^2}) over $X_i$ and using Stokes' theorem on the left hand side, we obtain \begin{equation*} \int_{X_i}|h_i^{\frac{1}{2}}\pp(h_i)|^2\leq -\int_{X_i}\tr(h_i\ii\Lambda F_{H_0}^{\perp}). \end{equation*} Since we have a uniform $C^0$-estimate for $s_i=\log h_i$, there exist constants $C_1$ and $C_2$ independent of $i$ such that \begin{equation*} \int_{X_i}|\pp s_i|^2\leq C_1\int_{X_i}|\pp h_i|^2\leq C_2. \end{equation*} Letting $i\rightarrow \infty$, we have $\int_X|\pp s|^2\leq C_2$. \end{proof} \noindent\textit{On the stability condition.} Note that global semistability is known \cite{mistretta} if we assume the restriction to $D$ is semistable. There do exist irreducible holomorphic vector bundles which are polystable when restricted to $D$ but not globally stable, even under the more restrictive assumptions that $\olsi{X}$ is Fano and $D\in |K_{\olsi{X}}^{-1}|$.
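The dimension count in the example below can be double-checked mechanically. The following script (a sanity check added here, not part of the original argument) assumes only the standard formulas $h^0(\mathbb P^1,\mathcal O(d))=\max(d+1,0)$ and $h^1(\mathbb P^1,\mathcal O(d))=\max(-d-1,0)$, the K\"unneth formula, and the intersection form of $\mathbb{CP}^1\times\mathbb{CP}^1$:

```python
# Sanity check for the example below, on M = P^1 x P^1.
# Standard facts on P^1: h^0(O(d)) = max(d+1, 0), h^1(O(d)) = max(-d-1, 0).
def h0(d): return max(d + 1, 0)
def h1(d): return max(-d - 1, 0)

# Kunneth: for L = p1^* O(a) (x) p2^* O(b),
#   H^1(M, L) = H^0(O(a)) (x) H^1(O(b))  (+)  H^1(O(a)) (x) H^0(O(b)).
def h1_kunneth(a, b):
    return h0(a) * h1(b) + h1(a) * h0(b)

assert h1_kunneth(2, -2) == 3  # dim H^1(M, L) = 3 for L = p1^*O(2) (x) p2^*O(-2)

# Intersection form on M: writing a (1,1)-class as x*h1 + y*h2, with
# h1^2 = h2^2 = 0 and h1.h2 = 1:
def intersect(c, d):
    (x1, y1), (x2, y2) = c, d
    return x1 * y2 + y1 * x2

L_class = (2, -2)   # c1(L)
D_class = (2, 2)    # D in |-K_M| = |O(2,2)|
assert intersect(L_class, D_class) == 0   # deg(L|_D) = 0
```

In particular $\dim H^1(M,L)=3>1\geq\dim H^1(D,L|_D)$, so the restriction map on $H^1$ has a nontrivial kernel, which is the mechanism behind the example.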
\begin{example}\label{example of nonsplitting} Recall that for holomorphic vector bundles $S$, $Q$ over a complex manifold $M$, all exact sequences $0\rightarrow S\rightarrow E\rightarrow Q\rightarrow 0$ of holomorphic vector bundles are classified by elements $\beta\in H^1(M,\operatorname {Hom}(Q,S))$; in particular, the exact sequence splits holomorphically if and only if the corresponding element $\beta=0$. Now take $M$ to be $\mathbb {CP}^1\times \mathbb {CP}^1$ and $D$ to be a smooth anticanonical divisor. Then $c_1(D)$ is a K\"ahler class and $D$ itself is an elliptic curve. Choose $\operatorname {Hom}(Q,S)=L=p_1^*(\mathcal O(2))\otimes p_2^*(\mathcal O(-2))$. Then by the K\"unneth formula, $\dim H^1(M,L)=3$. Note that $\deg (L|_D)=0$, which by Serre duality implies $\dim H^1(D,L|_D)=\dim H^0(D,(L|_D)^*)\leq 1$. Since $\dim H^1(M,L)>\dim H^1(D,L|_D)$, the restriction map $H^1(M,L)\rightarrow H^1(D,L|_D)$ has a nontrivial kernel, so there exists a nonzero class $\beta \in H^1(M,L)$ corresponding to a non-splitting exact sequence of holomorphic vector bundles whose restriction to $D$ splits as a direct sum of two line bundles with the same degree. Therefore $E$ itself is not $c_1(D)$-stable but $E|_D$ is $c_1(N_D)$-polystable. Such an $E$ is irreducible, because if $E=L_1 \oplus L_2$, then $\deg (L_i,c_1(D))=\deg (L_i|_D)=0$ since $E|_D$ is polystable of degree 0, which implies that $S$ has to be one of the $L_i$ and $Q$ is the other one. This contradicts the construction of $E$. \end{example} \section{Discussion} \label{miscellaneous discussion} \subsection{More results on the existence of PHYM metrics}\label{relation with some results} By Donaldson's theorem on the solvability of the Dirichlet problem (Theorem \ref{donaldson dirichlet}), the elliptic differential inequality (Lemma \ref{basic differential inequlaties}-(3)), the maximum principle and Bando-Siu's interior estimate (Theorem \ref{bando-siu interior estimate}), we get the following well-known existence result.
\begin{theorem} \label{general existence result}Let $(M,\omega,g)$ be a complete K\"ahler manifold, $E$ be a holomorphic vector bundle on $M$. Suppose there exists a smooth hermitian metric $H_0$ on $E$ such that the equation \begin{equation}\label{laplace equaiton} \Delta u=|\Lambda F_{H_0}^{\perp}| \end{equation} admits a positive solution $u$. Then there exists a smooth hermitian metric $H=H_{0}e^s$ satisfying \begin{equation*} \tr(s)=0,\ |s|_{H_0}\leq C_1 u \text{ and } \Lambda F_H^{\perp}=0. \end{equation*} Moreover, if $u$ is bounded and $|\Lambda F_{H_0}|\in L^1$ then $|\pp s|\in L^2$. \end{theorem} There are many examples for which (\ref{laplace equaiton}) admits a positive solution, and even a bounded one \cite{bando,ni,ni-shi-tam}. \begin{itemize} \item [(1)] Suppose $(M,g)$ is asymptotically conical and $|\Lambda F_{H_0}^{\perp}|=O(r^{-2-\epsilon})$ for some $\epsilon>0$; then (\ref{laplace equaiton}) admits a solution $u$ with $|u|=O(r^{-\epsilon})$. \item [(2)] Suppose $(M,g)$ is non-parabolic (i.e.\ admits a positive Green's function) and $|\Lambda F_{H_0}^{\perp}|\in L^1$; then (\ref{laplace equaiton}) admits a positive solution. \item [(3)] Suppose $(M,g)$ has nonnegative Ricci curvature, $|\Lambda F_{H_0}^{\perp}|=O(r^{-2})$ and \begin{equation*} \frac{1}{\operatorname{Vol}(B_r)}\int_{B_r} |\Lambda F_{H_0}^{\perp}|=O(r^{-2-\epsilon}), \end{equation*} for some $\epsilon>0$; then (\ref{laplace equaiton}) admits a bounded solution. In particular, if $(M,g)$ has nonnegative Ricci curvature, volume growth order greater than 2, $|\Lambda F_{H_0}^{\perp}|=O(r^{-2})$ and $|\Lambda F_{H_0}^{\perp}|\in L^1$, then (\ref{laplace equaiton}) admits a bounded solution. \end{itemize} Theorem \ref{general existence result} cannot be applied to $(X,\omega,g)$ satisfying \textit{Assumption 1}, since we do not know whether (\ref{laplace equaiton}) admits a positive solution (here the volume growth order being at most 2 is a key issue).
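Item (1) above can be illustrated concretely in the flat model. On $\mathbb R^n$ the radial Laplacian is $\partial_r^2+\frac{n-1}{r}\partial_r$, and one has $\Delta(r^{-\epsilon})=\epsilon(\epsilon+2-n)r^{-\epsilon-2}$, so for $0<\epsilon<n-2$ a suitable positive multiple of $r^{-\epsilon}$ solves the model equation with right hand side of size $r^{-2-\epsilon}$ and decays like $O(r^{-\epsilon})$, matching the stated rate. The following numerical check of this radial identity is purely illustrative (Euclidean model only, with sample values $n=4$, $\epsilon=1/2$ chosen here):

```python
# Check Delta(r^-eps) = eps*(eps + 2 - n) * r^(-eps - 2) numerically,
# where Delta acts on radial functions on R^n as u'' + (n-1)/r * u'.
n, eps = 4, 0.5  # sample values with 0 < eps < n - 2

def u(r):
    return r ** (-eps)

def radial_laplacian(f, r, h=1e-4):
    # central finite differences for f'' + (n-1)/r * f'
    d2 = (f(r + h) - 2 * f(r) + f(r - h)) / h ** 2
    d1 = (f(r + h) - f(r - h)) / (2 * h)
    return d2 + (n - 1) / r * d1

for r in (1.0, 2.0, 5.0):
    expected = eps * (eps + 2 - n) * r ** (-eps - 2)
    assert abs(radial_laplacian(u, r) - expected) < 1e-5
```

Note the sign: $\epsilon(\epsilon+2-n)<0$ here, so one rescales $r^{-\epsilon}$ by the appropriate negative constant to match the sign convention of (\ref{laplace equaiton}).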
Theorem \ref{main theorem} actually tells us that there are obstructions to the existence of $\omega$-PHYM metrics which are mutually bounded with the initial metric. Such a phenomenon also appears when we seek a bounded solution of the Poisson equation \begin{equation}\label{poission equation} \Delta u=f \end{equation} on a complete noncompact Riemannian manifold $(M,g)$ with nonnegative Ricci curvature. For simplicity, suppose $f$ is compactly supported; then we know that \begin{itemize} \item [(1)] if the volume growth order is greater than 2, i.e.\ there is a constant $c>0$ such that $\operatorname {Vol}(B_r)\geq c r^{2+\epsilon}$ for some $\epsilon>0$, then (\ref{poission equation}) admits a bounded solution. (Since by Li-Yau \cite{li-yau}, $(M,g)$ admits a positive Green's function which is $O(r^{-\epsilon})$ at infinity, a bounded solution of (\ref{poission equation}) is obtained by convolution with the Green's function.) \item [(2)] if the volume growth order does not exceed 2, i.e.\ there is a constant $C>0$ such that $\operatorname {Vol}(B_r)\leq C (r+1)^{2}$, then (\ref{poission equation}) admits a bounded solution if and only if $\int_Mf=0$. (For the ``if'' direction, see \cite[Theorem 1.5]{hein2011}. For the ``only if'' direction, suppose we have a bounded function $u$ and a compactly supported function $f$ such that $\Delta u=f$. Then by Cheng-Yau's gradient estimate \cite{cheng-yau}, we obtain $|\nabla u|\leq \frac{C}{r}$ for some $C>0$ independent of $r$. Multiplying both sides of (\ref{poission equation}) by $u$ and integrating by parts, we obtain that $|\nabla u|\in L^2$. Then Lemma \ref{exact integral imply degree 0} implies $\int_Mf=0$.) \end{itemize} Next we discuss another result whose proof is similar to the proof of Theorem \ref{main theorem}. Let $(\olsi{X},\ols{\omega})$ be an $n$-dimensional ($n\geq 2$) compact K\"ahler manifold and $D$ be a smooth divisor.
Let $\ols \omega_D=\ols{\omega}|_D$ denote the restriction of $\ols{\omega}$ to $D$ and $X=\olsi{X}\backslash D$ denote the complement of $D$ in $\olsi{X}$. Let $L_D$ be the line bundle determined by $D$ and $S\in H^0(\olsi{X},L_D)$ be a defining section of $D$. Fix a hermitian metric $h$ on $L_D$. Then after scaling $h$, the function $t=-\log |S|_h^2$ is smooth and positive on $X$. For any smooth function $F:(0,\infty)\longrightarrow \mathbb R$ with $|F'(t)|\rightarrow 0$ as $t\rightarrow \infty$ and $F''(t)\geq0$, there exists a large constant $A$ such that \begin{equation}\label{kahler forms used} \omega=A\ols{\omega}+dd^cF(t) \end{equation} is a K\"ahler form on $X$. By scaling $\ols{\omega}$ we may assume $A=1$. One can easily check that $\omega$ is complete if and only if $\int_{1}^{\infty} \sqrt{F''}=\infty$, and it always has finite volume. In the following, we always assume the function $F$ satisfies $|F'(t)|\rightarrow 0$ as $t\rightarrow \infty$ and $F''(t)\geq0$. Then we can state assumptions on $\omega$. \begin{assumption} Let $\omega$ be the K\"ahler form defined by (\ref{kahler forms used}) and $g$ be the corresponding Riemannian metric. We assume that \begin{itemize} \item [(1)] the sectional curvature of $g$ is bounded. \item [(2)] $C^{-1}t^{-2+\epsilon}\leq F''(t)\leq C$ for some constants $C,\epsilon>0$ and $t$ sufficiently large. \end{itemize} \end{assumption} A consequence of these assumptions is that $(X,g)$ is complete and of $(K,\alpha,\beta)$-polynomial growth as defined in \cite[Definition 1.1]{tian-yau1}, so we can use the weighted Sobolev inequality as we did for the proof of Lemma \ref{weighted sobolev inequality}. Let $\olsi{E}$ be an irreducible holomorphic vector bundle on $\olsi{X}$ such that $\olsi{E}|_D$ is $\ols\omega_D$-polystable. Then by the Donaldson-Uhlenbeck-Yau theorem, there exists a hermitian metric $H_D$ on $\olsi{E}|_D$ such that \begin{equation}\label{hym condition} \Lambda_{\ols \omega_D}F_{H_D}^{\perp}=0.
\end{equation} Extend $H_D$ smoothly to get a smooth hermitian metric $H_0$ on $\olsi{E}$. Then by (\ref{hym condition}) and \textit{Assumption 2}-(2), one can easily show that \begin{lemma}\label{good initial metric in general setting} There exists a $\delta>0$ such that $|\Lambda_{\omega} F_{H_0}^{\perp}|=O(e^{-\delta t})$. \end{lemma} Then we have the following result. \begin{theorem}\label{theorem for general divisor} Suppose $(X,\omega)$ satisfies \textit{Assumption 2} and $\olsi{E}|_D$ is $\ols\omega_D$-polystable. Let $H_0$ be a hermitian metric as above and $\mathcal P_{H_0}$ be defined by (\ref{define of P_H}). Then there exists an $\omega$-PHYM metric in $\mathcal P_{H_0}$ if and only if $\olsi{E}$ is $\ols{\omega}$-stable. \end{theorem} Using the argument in Proposition \ref{only if part}, the ``only if'' direction follows from Lemma \ref{exact integral imply degree 0} and the following lemma. \begin{lemma} For every smooth closed (1,1)-form $\theta$ on $\olsi{X}$, we have \begin{equation}\label{degree equals 2} \int_X \theta\wedge \omega^{n-1}=\int_X\theta\wedge \ols{\omega}^{n-1}. \end{equation} \end{lemma} \begin{proof} First, note that since there exists a positive number $c>0$ such that $\omega> c\ols{\omega}$ and $\int \omega^n<\infty$, the left hand side of (\ref{degree equals 2}) is well-defined. Therefore it suffices to show that for any $1\leq k\leq n-1$ \begin{equation*} \int_X\theta\wedge \ols{\omega}^{n-1-k}\wedge (dd^cF)^{k}=0. \end{equation*} Let $S_{\epsilon}$ denote the level set $\{|S|_h=\epsilon\}$. By integration by parts, it suffices to show that \begin{equation}\label{to be proved for any k} \lim_{\epsilon\rightarrow 0}\int_{S_{\epsilon}}\theta\wedge \ols{\omega}^{n-1-k}\wedge (dd^cF)^{k-1}\wedge d^c F=0. \end{equation} \noindent\textit{Case 1.} $k=1$. Note that with respect to the smooth background metric $\ols{\omega}$, $\operatorname{Vol}(S_{\epsilon})=O(\epsilon)$ and $|d^c F|\leq C|F'(t)|\epsilon^{-1}$ on $S_{\epsilon}$.
Then (\ref{to be proved for any k}) follows from the assumption that $|F'|\rightarrow 0$ as $t\rightarrow\infty$. \noindent\textit{Case 2.} $2\leq k\leq n-1$. Then (\ref{to be proved for any k}) follows from the fact that $|F'(t)|\rightarrow 0$ as $t\rightarrow \infty$ and $d^ct\wedge d^c t=0$. \end{proof} For the ``if'' direction, the argument in Proposition \ref{existence result} applies. We will not give the details, and just point out the following two observations which make the argument work in this setting. The key points are \begin{itemize} \item [(1)] \textit{Assumption 2} and Lemma \ref{good initial metric in general setting} ensure that we can apply the weighted mean value inequality proved in Lemma \ref{weighted sobolev inequality}. \item [(2)] We have $L^2(X,\omega)\subset L^2(\olsi{X},\ols{\omega})$ since $\omega\geq c\ols{\omega}$ for some $c>0$; therefore by Uhlenbeck-Yau's theorem (Theorem \ref{uhlenbeckyau}), a weakly holomorphic projection map $\pi$ of $E$ over $X$ with $|\pp \pi|\in L^2(X,\omega)$ defines a coherent torsion-free subsheaf $\olsi{S}$ of $\olsi{E}$. \end{itemize} \subsection{Calabi-Yau metrics satisfying \textit{Assumption 1}}\label{examples} As mentioned in the Introduction, there do exist interesting K\"ahler metrics satisfying \textit{Assumption 1}, which include Calabi-Yau metrics on the complement of an anticanonical divisor of a Fano manifold and their generalizations \cite{tian-yau1, hsvz1, hsvz2}. We will call them Tian-Yau metrics. Here we give a sketch of the construction of these Calabi-Yau metrics and refer to \cite[Section 3]{hsvz2} for more details. Let $\olsi{X}$ be an $n$-dimensional ($n\geq 2$) projective manifold, $D\in |K^{-1}_{\olsi{X}}|$ be a smooth divisor and $X=\olsi{X}\backslash D$ be the complement of $D$ in $\olsi{X}$. Suppose that the normal bundle of $D$ in $\olsi{X}$, $N_D=K^{-1}_{\olsi{X}}|_D$, is ample.
Fix a defining section $S\in H^0(\olsi X, K^{-1}_{\olsi X})$ of the divisor $D$; its inverse can be viewed as a holomorphic volume form $\Omega_X$ on $X$ with a simple pole along $D$. Let $\Omega_D$ be the holomorphic volume form on $D$ given by the residue of $\Omega_X$ along $D$. Using Yau's theorem \cite{yau1978}, there is a hermitian metric $h_D$ on $K^{-1}_{\olsi X}|_D$ such that its curvature form is a Ricci-flat K\"ahler metric $\omega_D$ with $$\omega_D^{n-1}=(\ii)^{(n-1)^2}\Omega_D\wedge\ols \Omega_D$$ by rescaling $S$ if necessary. One can show that the hermitian metric $h_D$ extends to a global hermitian metric $h_{\olsi X}$ on $K^{-1}_{\olsi X}$ such that its curvature form is nonnegative and positive in a neighborhood of $D$. By gluing a smooth positive constant on a compact set, we get a global positive smooth function $z$ which is equal to $(-\log |S|^2_{h_{\olsi X}})^{\frac{1}{n}}$ outside a compact set. For any $A\in \mathbb R$, we denote $h_A=h_{\olsi X}e^{-A}$ and $v_A=\frac{n}{n+1}(-\log |S|_{h_A}^2)^{\frac{n+1}{n}}$, which is viewed as a smooth function defined outside a compact set on $X$. We denote by $H^2_{c,+}(X)$ the subset of $\Im(H^2_c(X,\mathbb R)\rightarrow H^2(X,\mathbb R))$ consisting of classes $\mathfrak k$ such that $\int_Y\mathfrak{k}^p>0$ for any compact analytic subset $Y$ of $X$ of pure dimension $p>0$. Then Hein-Sun-Viaclovsky-Zhang proved the following result. \begin{theorem}[{\cite{hsvz2}}]\label{generalized tianyau} For every class $\mathfrak{k} \in H_{c,+}^{2}(X)$, there is a unique K\"ahler metric $\omega\in \mathfrak k $ such that \begin{itemize} \item [(1)] $\omega^n=(\ii)^{n^2} \Omega_X\wedge\ols{\Omega}_X$, and \item [(2)] $|\nabla^l_{\omega}(\omega-\ii\partial\pp v_A)|_{\omega}=O\left(e^{-\delta z^{\frac{n}{2}}}\right)$ for some $\delta,A>0$ and all $l\geq 0$.
\end{itemize} \end{theorem} From the construction in \cite[Section 3]{hsvz2}, we have the decomposition $\omega=\omega_0+dd^c\varphi$, where $\omega_0$ is a smooth (1,1)-form on $\olsi{X}$ vanishing when restricted to $D$. By Theorem \ref{generalized tianyau} and the estimate in \cite[Proposition 3.4]{hsvz1}, one can directly check that these K\"ahler metrics satisfy \textit{Assumption 1}. \begin{remark} It was proved in \cite{hsvz1} that Tian-Yau metrics $\omega_{TY}$ can be realized as the rescaled pointed Gromov-Hausdorff limits of a sequence of Calabi-Yau metrics $\omega_k$ on a $K3$ surface. We expect that the $\omega_{TY}$-PHYM connections obtained in this paper give models for the limits of $\omega_k$-HYM connections on the $K3$ surface. \end{remark} \subsection{On the ampleness assumption of the normal bundle $N_D$}\label{general normal bundles} In this subsection, we first explain why we assume the normal bundle of $D$ is ample and then discuss the case where the normal bundle is trivial on compact K\"ahler surfaces. Let us start with a question. Suppose we have a nontrivial nef class $\alpha$ in $H^{1,1}(\olsi{X})$ and a smooth divisor $D\subset \olsi{X}$; when does \begin{equation} \mu(\mathcal E,\alpha)=\mu(\mathcal E|_D,\alpha|_D) \end{equation} hold for every coherent reflexive sheaf $\mathcal E$ on $\olsi{X}$? A sufficient condition is that \begin{equation*} \alpha^{n-2}\wedge(\alpha-c_1(D))=0 \text{ in $H^{n-1,n-1}(\olsi{X})$}. \end{equation*} In order to have the above equality, a natural (possibly the only reasonable) choice is that $\alpha=c_1(D)$. To make the argument in this paper work, we also need the following property: \textit{if a vector bundle $F$ on $D$ is polystable with respect to $\alpha|_D$ and $S$ is a coherent subsheaf of $F$ with the same $\alpha|_D$-degree as $F$, then $S$ is a vector bundle and is a splitting factor of $F$}. (Note that this does not follow from the definition since $\alpha|_D$ may not be a K\"ahler class.
For example, if $\alpha|_D=0$, then this property certainly fails.) In general, in order to have this property we need $\alpha|_D$ to be a K\"ahler class. This is one of the reasons why we assume that the normal bundle of $D$ is ample, i.e.\ $c_1(D)|_D$ is a K\"ahler class. Another reason is that by assuming $N_D$ is ample, on the punctured disc bundle $\mathcal C$ we have explicit exact K\"ahler forms, which give models of the K\"ahler forms on $X$. However, if $\olsi{X}$ is a compact complex surface, in which case the divisor $D$ is a smooth Riemann surface, then the property mentioned above always holds. Note that on a Riemann surface $D$, the slope of a vector bundle is canonically defined and independent of the choice of cohomology classes on $D$. \begin{lemma}Let $\olsi{X}$ be a compact K\"ahler surface and $D$ be a smooth divisor. Suppose $\olsi{E}|_D$ is polystable. Let $\olsi{S}$ be a coherent reflexive subsheaf of $\olsi{E}$. Then $\mu(\olsi{S},c_1(D))=\mu(\olsi{E},c_1(D))$ if and only if $\olsi{S}|_D$ is a splitting factor of $\olsi{E}|_D$. \end{lemma} Using this, most of the arguments in Section \ref{proof of the main theorem} can be modified to work for divisors $D$ with $c_1(N_D)=0$ in complex dimension 2. In the following, we assume $c_1(N_D)=0$ in $H^2(D,\mathbb R)$. Then it is easy to see that $c_1(D)$ is nef, and by the global $\partial\pp$-lemma on $D$, we know that there exists a hermitian metric $h_D$ on $N_D$ with vanishing curvature. Let $L_D$ be the line bundle determined by $D$ and $S\in H^0(\olsi{X},L_D)$ be a defining section of $D$. Then we can extend $h_D$ smoothly to get a smooth hermitian metric $h$ on $L_D$, and after a rescaling, we may assume that $t=-\log |S|_h^2$ is positive on $X$. In this case, we can consider (at least) all monomial potentials of degree greater than 1 \begin{equation} \mathcal H:=\left\{F(t)=At^a: A>0 \text{ is a constant and } a>1\right\}.
\end{equation} \begin{assumption} Let $\omega$ be a K\"ahler form on $X$ and $g$ be the corresponding Riemannian metric. We assume that \begin{itemize} \item [(1)] the sectional curvature of $g$ is bounded. \item [(2)] the form $\omega$ can be written as $\omega_0+\ii\partial\pp F(t)$ for some $F\in \mathcal H$, where $\omega_0$ is a smooth closed (1,1)-form on $\olsi{X}$. \end{itemize} \end{assumption} Suppose $(X,\omega,g)$ satisfies \textit{Assumption 3}. Then we have the following consequences: \begin{itemize} \item the Riemannian metric $g$ is complete and has volume growth order at most 2, \item $(X,g)$ is of $(K,2,\beta)$-polynomial growth as defined in \cite[Definition 1.1]{tian-yau1} for some positive constants $K$ and $\beta$. \end{itemize} Let $\olsi{E}$ be an irreducible holomorphic vector bundle over $\olsi{X}$ such that $\olsi{E}|_D$ is polystable with degree 0. Then by the Donaldson-Uhlenbeck-Yau theorem (for Riemann surfaces this was first proved by Narasimhan and Seshadri \cite{narasimhan}), there exists a hermitian metric $H_D$ on $\olsi{E}|_D$ such that \begin{equation*} \Lambda_{\omega_D}F_{H_D}=0. \end{equation*} Since $D$ is a Riemann surface, this is equivalent to saying that $H_D$ gives a flat metric on $\olsi{E}|_D$, i.e. \begin{equation}\label{PHYM condition} F_{H_D}=0. \end{equation} Extend $H_D$ smoothly to get a hermitian metric $H_0$ on $\olsi{E}$. Then by (\ref{PHYM condition}) and the proof of Lemma \ref{good initial metric}, we know that $H_0$ is already a good initial metric in the following sense: \begin{equation}\label{again good initial metric} |F_{H_0}|=O(e^{-\delta t}). \end{equation} Then we have the following result, whose proof is essentially the same as that for Theorem \ref{main theorem}. We just point out the difference. \begin{theorem}\label{theorem for trivial normal bundle} Suppose $(X,\omega)$ satisfies \textit{Assumption 3} and $\olsi{E}|_D$ is flat.
Let $H_0$ be a hermitian metric as above and $\mathcal P_{H_0}$ be defined by (\ref{define of P_H}). Then there exists an $\omega$-PHYM metric in $\mathcal P_{H_0}$ if and only if $\olsi{E}$ is $\left(c_1(D),[\omega_0]\right)$-stable. \end{theorem} The argument in Section \ref{proof of the main theorem} can be applied if Lemma \ref{degree coincide} still holds. The analog of Lemma \ref{degree coincide} in this case is the following lemma, for which we need to assume $\olsi{E}|_D$ is flat. \begin{lemma}Suppose $(X,\omega)$ satisfies \textit{Assumption 3} and $\olsi{E}|_D$ is flat. Let $H_0$ be a hermitian metric as above. Then we have the following equality: \begin{equation*} \int_{X}\frac{\ii}{2\pi}\tr(F_{H_0})\wedge \omega=\int_{\olsi X}c_1(\olsi E)\wedge [\omega_0]. \end{equation*} \end{lemma} \begin{proof} By Chern-Weil theory, it suffices to show that \begin{equation}\label{again degree vanish} \int_{X}\frac{\ii}{2\pi}\tr(F_{H_0})\wedge dd^c\varphi=0. \end{equation} The argument in Lemma \ref{a good 1-form potential} can be used again to show that there exists a cut-off function $\chi$ supported on a compact set and a smooth 1-form $\psi$ supported outside a compact set such that \begin{equation*} dd^c \varphi=d d^c(\chi\varphi)+d\psi. \end{equation*} Moreover, $|\psi|$ grows at most at a polynomial rate in $r$. Then (\ref{again degree vanish}) follows from integration by parts and \eqref{again good initial metric}. \end{proof} \begin{example} Let $\olsi{X}=\mathbb {CP}^1\times D$, where $D$ is a compact Riemann surface. Then $D=\{\infty\}\times D$ is a smooth divisor with trivial normal bundle. Fix a K\"ahler form $\omega_D$ on $D$ and also view it as a form on $\mathbb {CP}^1\times D$ via pull-back along the obvious projection map. Note that up to a scaling, $[\omega_D]\in c_1(\mathbb {CP}^1)$ in $H^{1,1}(\olsi{X})$.
We can consider asymptotically cylindrical metrics on $X=\mathbb C\times D$ given by the K\"ahler forms \begin{equation*} \omega=\omega_D+\ii\partial \pp \Phi(-\log |z|^2)=\omega_D+\frac{\varphi}{|z|^2}\ii d z\wedge d \olsi{z}, \end{equation*} where $z$ denotes the coordinate function on $\mathbb C$ and $\varphi=\Phi''$ is a positive smooth function defined on $\mathbb R$ such that $\varphi(t)=e^{t}$ when $t$ is sufficiently negative and $\varphi(t)=1$ for $t$ sufficiently positive. Then one can easily check that $(X,\omega)$ satisfies \textit{Assumption 3} with $F(t)=t^2$. Let $\olsi{E}$ be an irreducible holomorphic vector bundle on $\mathbb {CP}^1\times D$ such that $\olsi{E}|_D$ is flat. Then by Theorem \ref{theorem for trivial normal bundle}, we know that \begin{equation*} \text{$E$ admits an $\omega$-PHYM metric in $\mathcal P_{H_0}$ if and only if $\olsi{E}$ is $\left(c_1(D),c_1(\mathbb {CP}^1)\right)$-stable.} \end{equation*} \end{example} Examples similar to Example \ref{example of nonsplitting} show that the condition of $\left(c_1(D),c_1(\mathbb {CP}^1)\right)$-stability is non-trivial. More specifically, let $D$ be a Riemann surface with genus $g\geq 1$ and $k\geq 2$ be an integer. Then an argument similar to that in Example \ref{example of nonsplitting} shows that there exists a non-splitting extension \begin{equation*} 0\longrightarrow\mathcal O\longrightarrow E\longrightarrow p_1^*(\mathcal O_{\mathbb {P}^1}(-k))\longrightarrow0 \end{equation*} whose restriction to $D$ splits. Then one can easily check that $E$ is irreducible and not $\left(c_1(D),c_1(\mathbb {CP}^1)\right)$-stable. \subsection{Some problems for further study}\label{open problems} Let $(X,\omega)$ satisfy \textit{Assumption 1}. As illustrated by Theorem \ref{generalized tianyau}, it is more natural to assume a stronger condition on the background K\"ahler metric $\omega$.
More precisely, we assume that in (\ref{assumption on asymptotics}) the right hand side is replaced by $O(e^{-\delta_0 r^{\alpha_0}})$ for some $\delta_0, \alpha_0 >0$ and we also have the same bound for higher order derivatives. Under these assumptions and motivated by the result of Hein \cite{hein2012} for solutions of complex Monge-Amp\`ere equations, we make the following conjecture. \begin{conjecture} The solution $s$ obtained in Proposition \ref{existence result} decays exponentially, i.e.\ $|\nabla^k s|=O(e^{-\delta r^{\alpha}})$ for some $\delta, \alpha >0$ and all $k\geq 0$. \end{conjecture} Note that the key issue is to prove that $|s|$ decays exponentially, since all of the higher order estimates will follow from standard elliptic estimates. It is also an interesting problem to study the notion of $(\alpha,\beta)$-stability. From the definition, we have the following consequence: \begin{proposition}\label{pair stability} Let $\alpha,\beta \in H^{1,1}(M)$ be two classes on a compact K\"ahler manifold $M$. Suppose $\alpha\in H^2(M,\mathbb Z)$ and $\alpha\wedge \beta=0$. Then a holomorphic vector bundle $E$ is $(\alpha,\beta)$-stable if and only if $E$ is $\alpha$-semistable and there exists an $\epsilon_0>0$ such that $E$ is $(\alpha+\epsilon \beta)$-stable for all $0<\epsilon < \epsilon_0$. \end{proposition} It is natural to consider the following problem. Let $\olsi{E}$ be a $(c_1(D),[\omega_0])$-stable holomorphic vector bundle on $\olsi{X}$. Then by Proposition \ref{pair stability}, we know that $\olsi{E}$ is $\left([\omega_0]+\epsilon^{-1}c_1(D)\right)$-stable for $\epsilon$ positive and sufficiently small. The Donaldson-Uhlenbeck-Yau theorem says that for every K\"ahler form $\omega$ in $[\omega_0]+\epsilon^{-1}c_1(D)$, there exists an $\omega$-Hermitian-Yang-Mills metric on $\olsi{E}$.
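At the level of slopes, Proposition \ref{pair stability} reflects an elementary fact about linear functions of $\epsilon$: along the ray $\alpha+\epsilon\beta$ the slope of a subsheaf is $\mu(S,\alpha+\epsilon\beta)=(\deg_{\alpha}(S)+\epsilon\deg_{\beta}(S))/\operatorname{rank}(S)$, so when the $\alpha$-slopes of $S$ and $E$ agree, the comparison for small $\epsilon>0$ is decided by the $\beta$-slopes. The toy computation below (with made-up degrees, purely illustrative) records this:

```python
# Toy illustration of stability along the ray alpha + eps*beta.
# mu(S, alpha + eps*beta) = (deg_alpha(S) + eps * deg_beta(S)) / rank(S).
def slope(deg_a, deg_b, rk, eps):
    return (deg_a + eps * deg_b) / rk

# Hypothetical data: the subsheaf S has the SAME alpha-slope as E but a
# strictly smaller beta-slope.  Here S fails to destabilize E for every
# eps > 0; in general, equality of alpha-slopes only guarantees this for
# eps sufficiently small.
E = {'deg_a': 2, 'deg_b': 0, 'rk': 2}   # mu_alpha(E) = 1, mu_beta(E) = 0
S = {'deg_a': 1, 'deg_b': -1, 'rk': 1}  # mu_alpha(S) = 1, mu_beta(S) = -1

for eps in (0.5, 0.1, 0.01, 0.001):
    assert slope(**S, eps=eps) < slope(**E, eps=eps)
```

This is only the slope bookkeeping behind the proposition; the sheaf-theoretic content (which subsheaves must be tested) is as in the text.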
In our setting, it is natural to consider the following K\"ahler forms \begin{equation} \omega_{\epsilon}=\omega_0+\epsilon^{-1}\theta_{\epsilon}+dd^c(\chi_{\epsilon} \varphi) \in [\omega_0]+\epsilon^{-1}c_1(D), \end{equation} where $\theta_{\epsilon}$ is a closed nonnegative (1,1)-form in $c_1(D)$, supported and positive in an $\epsilon$-neighborhood of $D$, and $\chi_{\epsilon}$ is a cut-off function which equals 1 outside an $\epsilon$-neighborhood of $D$ and $0$ in a smaller neighborhood of $D$. The K\"ahler forms are chosen such that \begin{equation*} \omega_{\epsilon}\longrightarrow \omega=\omega_0+dd^c\varphi \text{ in $C^{\infty}_{loc}(X)$}. \end{equation*} \begin{conjecture} Suppose $\olsi{E}|_D$ is $c_1(N_D)$-polystable and $\olsi{E}$ is $(c_1(D),[\omega_0])$-stable. Let $H_{\epsilon}$ denote the Hermitian-Yang-Mills metric on $\olsi{E}$ with respect to the K\"ahler form $\omega_{\epsilon}$. Then there exist smooth functions $f_{\epsilon}$ on $X$ such that \begin{equation*} H_{\epsilon}e^{f_{\epsilon}}\longrightarrow H \text{ in $C^{\infty}_{loc}(X)$}, \end{equation*} where $H$ is the hermitian metric constructed in Theorem \ref{main theorem}. \end{conjecture} \bibliographystyle{plain} \bibliography{ref.bib} \end{document}
2206.13409v3
http://arxiv.org/abs/2206.13409v3
Homomesies on permutations -- an analysis of maps and statistics in the FindStat database
\documentclass{amsart} \usepackage{amsmath,amssymb,fullpage} \usepackage{amsthm} \usepackage{amsfonts,newlfont,url,xspace} \usepackage{ stmaryrd } \usepackage[dvipsnames]{xcolor} \usepackage{graphics,graphicx,verbatim} \usepackage{float} \usepackage{hyperref} \usepackage{soul} \usepackage[foot]{amsaddr} \usepackage{tikz} \usetikzlibrary {arrows.meta} \include{pythonlisting} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{conj}[thm]{Conjecture} \newtheorem{quest}[thm]{Question} \theoremstyle{definition} \newtheorem{definition}[thm]{Definition} \newtheorem{example}[thm]{Example} \newtheorem{remark}[thm]{Remark} \newtheorem{prob}[thm]{Problem} \DeclareMathOperator{\lcm}{lcm} \usepackage{ifxetex,ifluatex} \ifxetex \usepackage{fontspec} \else \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{lmodern} \fi \newcommand{\K}{\mathcal{K}} \renewcommand\L{\mathcal{L}} \newcommand{\F}{\mathcal{F}} \newcommand{\M}{\mathcal{M}} \newcommand{\C}{\mathcal{C}} \newcommand{\R}{\mathcal{R}} \newcommand{\sage}{{SageMath}\xspace} \newcommand{\Findstat}{{FindStat}\xspace} \newcommand{\sinv}{\sigma^{-1}} \newcommand{\inv}{\textnormal{inv}} \newcommand{\Inv}{\textnormal{Inv}} \newcommand{\maj}{\textnormal{maj}} \newcommand{\exc}{\textnormal{exc}} \newcommand{\fp}{\textnormal{fp}(\sigma)} \newcommand{\Des}{\textnormal{Des}} \newcommand{\des}{\textnormal{des}} \newcommand{\wdec}{\textnormal{wdec}} \newcommand{\rank}{\textnormal{rank}} \newcommand{\Stat}{\textnormal{Stat}} \newcommand{\fix}{\textnormal{fix}} \newcommand{\bb}{\textbf} \title[Homomesies on permutations]{Homomesies on permutations: an analysis of maps and statistics in the FindStat database} \author[1]{Jennifer Elder$^1$} \address[1]{Rockhurst University. \href{mailto:[email protected]}{[email protected]}} \author[2]{Nadia Lafreni\`ere$^2$} \address[2]{Corresponding author. Dartmouth College, 6188 Kemeny Hall, 27 N.
Main Street, Hanover, NH, 03755. \href{mailto:[email protected]}{[email protected]}} \author[3]{Erin McNicholas$^3$} \address[3]{Willamette University. \href{mailto:[email protected]}{[email protected]}} \author[4]{Jessica Striker$^4$} \address[4]{North Dakota State University. \href{mailto:[email protected]}{[email protected]}} \author[5]{Amanda Welch$^5$} \address[5]{Eastern Illinois University. \href{mailto:[email protected]}{[email protected]}} \begin{document} \maketitle \begin{abstract} In this paper, we perform a systematic study of permutation statistics and bijective maps on permutations in which we identify and prove 122 instances of the homomesy phenomenon. Homomesy occurs when the average value of a statistic is the same on each orbit of a given map. The maps we investigate include the Lehmer code rotation, the reverse, the complement, the Foata bijection, and the Kreweras complement. The statistics studied relate to familiar notions such as inversions, descents, and permutation patterns, and also more obscure constructs. Besides the many new homomesy results, we discuss our research method, in which we used SageMath to search the FindStat combinatorial statistics database to identify potential homomesies. \end{abstract} \textbf{Keywords}: Homomesy, permutations, permutation patterns, dynamical algebraic combinatorics, FindStat, Lehmer code, Kreweras complement, Foata bijection \section{Introduction} Dynamical algebraic combinatorics is the study of objects important in algebra and combinatorics through the lens of dynamics. In this paper, we focus on permutations, which are fundamental objects in algebra, combinatorics, representation theory, geometry, probability, and many other areas of mathematics. They are intrinsically dynamical, acting on sets by permuting their components. Here, we study bijections on permutations $f:S_n\rightarrow S_n$, so the $n!$ elements being permuted are themselves permutations.
In particular, we find and prove many instances of \emph{homomesy}~\cite{PR2015}, an important phenomenon in dynamical algebraic combinatorics that occurs when the average value of some \emph{statistic} (a map $g:S_n\rightarrow \mathbb{Z}$) is the same over each orbit of the action. Homomesy occurs in many contexts, notably that of {rowmotion} on order ideals of certain families of posets and {promotion} on various sets of tableaux. See Subsection~\ref{sec:homomesy} for more specifics on homomesy and~\cite{Roby2016,Striker2017,SW2012} for further discussion. A prototypical example of our homomesy results is as follows. Consider the Kreweras complement map $\K:S_n\rightarrow S_n$ (from Definition \ref{def:krew}). We show in Proposition \ref{Khom lastentry} that the last entry statistic of a permutation exhibits homomesy with respect to the Kreweras complement. See Figure~\ref{fig:ex} for an example of this result in the case $n=3$. \begin{figure}[ht] \begin{tikzpicture} \node [anchor=west] at (1,10) {\text{orbit of size $1$:}}; \node[anchor=east] at (5,10) {$31{\color{red}2}$}; \draw [->] (4.8,10.2) .. controls (5.5,10.7) and (5.5,9.3) .. (4.8,9.8); \node [anchor=west] at (5.3,10) {$\mathcal{K}$}; \node [anchor =west] at (10,10) {\text{average of last entry} $={\color{red}2}$}; \node[anchor=west] at (1,8) {\text{orbit of size $2$:}}; \node[anchor=east] at (5,8) {$12{\color{red}3}$}; \draw [->] (5,8) -- (6,8); \node[anchor=south] at (5.5,8) {$\mathcal{K}$}; \node [anchor=west] at (6,8) {$23{\color{red}1}$}; \draw [<-] (4.8,7.8) .. controls (5.25,7.4) and (5.75,7.4) .. (6.2,7.8); \node[anchor=west] at (10,8) {\text{average of last entries} $=\frac{{\color{red}3+1}}{2}$}; \node[anchor=west] at (1,6) {\text{orbit of size $3$:}}; \node[anchor=east] at (5,6) {$13{\color{red}2}$}; \draw [->] (5,6) -- (6,6); \node[anchor=center] at (6.5,6) {$21{\color{red} 3}$}; \draw [->] (7,6) -- (8,6); \node[anchor=west] at (8,6) {$32{\color{red} 1}$}; \draw [<-] (4.8,5.8) .. 
controls (6,5) and (7.2,5) .. (8.4,5.8); \node[anchor=south] at (5.5,6) {$\mathcal{K}$}; \node[anchor=south] at (7.5,6) {$\mathcal{K}$}; \node[anchor=north] at (6.5,5.2) {$\mathcal{K}$}; \node[anchor=west] at (10,6) {\text{average of last entries} $=\frac{{\color{red} 2+3+1}}{3}$}; \end{tikzpicture} \caption{Orbit decomposition of $S_3$ under the action of the Kreweras complement. The last entry of each permutation is highlighted. Calculating the averages of these last entries over each orbit, we observe an instance of homomesy.}\label{fig:ex} \end{figure} Rather than pick actions and statistics at random to test for homomesy, we used FindStat~\cite{FindStat}, the combinatorial statistics database, which (at the time of writing) included 387 permutation statistics and 19 bijective maps on permutations. Using the interface with SageMath~\cite{sage} computational software, we tested all combinations of these maps and statistics, finding 117 potential instances of homomesy, involving 68 statistics. We highlight here some of the most interesting results. One initial finding was that homomesies occurred in only nine of the 19 examined maps. Among the maps that do not have any homomesic statistics, we find the well-known inverse map, as well as the first fundamental transform and cactus evacuation. Of the nine maps exhibiting homomesy, four maps (all related to the \textbf{Foata bijection} map) have only one homomesic statistic. Even more intriguing is the large number of homomesies found for the \textbf{Lehmer code rotation} map. Despite its presence in FindStat, we could not find any occurrence of the Lehmer code rotation in the literature on combinatorial actions. The study in this paper suggests that this map is worthy of further investigation. Many of the homomesic statistics are related to inversions and descents, but other notable statistics include several \emph{permutation patterns} as well as the \emph{rank} of a permutation. 
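The orbit decomposition in Figure~\ref{fig:ex} is small enough to replicate with a few lines of code. The Python sketch below is an illustration only: it assumes the usual description of the Kreweras complement as $\sigma \mapsto c \circ \sigma^{-1}$, where $c$ is the long cycle sending $i$ to $(i \bmod n)+1$ (the map of Definition~\ref{def:krew}); the helper function names are ours.

```python
from itertools import permutations

def kreweras(sigma):
    """Kreweras complement (assumed form): sigma -> c o sigma^{-1},
    with c the long cycle i -> (i mod n) + 1, in one-line notation."""
    n = len(sigma)
    inverse = [0] * n
    for i, v in enumerate(sigma):      # values are 1..n
        inverse[v - 1] = i + 1         # build sigma^{-1}
    return tuple(v % n + 1 for v in inverse)  # then apply c

def orbits(elements, f):
    """Partition `elements` into orbits of the bijection f."""
    seen, orbs = set(), []
    for x in elements:
        if x in seen:
            continue
        orb, y = [], x
        while y not in seen:
            seen.add(y)
            orb.append(y)
            y = f(y)
        orbs.append(orb)
    return orbs

# Average of the last-entry statistic over each Kreweras orbit of S_3:
for orb in orbits(list(permutations(range(1, 4))), kreweras):
    avg = sum(p[-1] for p in orb) / len(orb)
    print(orb, avg)   # every orbit averages to 2, as in Figure 1
```

Running the loop reproduces the three orbits of Figure~\ref{fig:ex} (sizes $1$, $2$, and $3$), each with last-entry average $2$.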
As we worked through the proofs for the homomesic statistics for the \textbf{reverse} and \textbf{complement} maps, we found that the global averages are often the same. Using the relationship between the two maps (see Lemma~\ref{lem:C&R_relation}), we were able to prove many of the shared homomesies. Given this strong relationship, it is also of interest that there are several statistics that are homomesic for only one of the two maps. In addition to exhibiting homomesies, the action of the \textbf{Kreweras complement} map generates an interesting orbit structure on $S_n$. Examining this orbit structure, we were able to characterize the distribution of all orbits. Our main results are Theorems~\ref{thm:LC}, \ref{thmboth}, \ref{onlycomp}, \ref{onlyrev}, \ref{thm:foata} and \ref{Thm:Kreweras}, in which we prove all 117 of these homomesies. In addition, we prove homomesy for 5 statistics not in the database, in Theorems \ref{thm:LC_inversions_at_entry}, \ref{thm:descents_at_i_LC}, \ref{thm:inversion_positions_RC}, \ref{Thm:Kreweras} and Proposition~\ref{thm:ith_entry_comp}, for a grand total of 122 homomesic statistics. Furthermore, we prove theorems on the orbit structure of the maps, chiefly Theorems \ref{Thm: L-Orbit cardinality}, \ref{Prop:K even orbs} and \ref{thm:Orbit_generators_K}. We also give one open problem (Problem~\ref{prob:pp_LRC}). This paper is organized as follows. In Section~\ref{sec:method}, we describe in detail our method of searching for potential homomesies. Section~\ref{sec:background} contains background material on homomesy and permutations. Sections~\ref{sec:lehmer} through \ref{sec:krew} contain our main results, namely, homomesies involving one or more related maps. Each section begins by defining the map(s), followed by any additional results on properties of the maps. It then states as a theorem all the homomesic statistics, organized by theme.
Finally, many propositions proving specific homomesies are given, which together prove the main theorem(s) for the section. Below is a list of the map(s) for each section, along with the number of homomesic statistics from the FindStat database included in the corresponding theorem: \begin{enumerate} \item[\ref{sec:lehmer}] Lehmer code rotation (45 homomesic statistics) \item[\ref{sec:comp_rev}] Complement and reverse (22 statistics homomesic for both maps, 5 statistics homomesic for reverse but not complement, and 13 statistics homomesic for complement but not reverse) \item[\ref{sec:foata}] Foata bijection and variations (4 maps all having the same single homomesic statistic) \item[\ref{sec:krew}] Kreweras complement and inverse Kreweras complement (3 homomesic statistics) \end{enumerate} \subsection*{Acknowledgements} The genesis for this project was the Research Community in Algebraic Combinatorics workshop, hosted by ICERM and funded by the NSF. In addition to thanking ICERM and the organizers of this workshop, we wish to thank the developers of FindStat~\cite{FindStat}, especially moderators Christian Stump and Martin Rubey for their helpful and timely responses to our questions. We also thank the developers of SageMath~\cite{sage} software, which was useful in this research, and the CoCalc~\cite{SMC} collaboration platform. The anonymous referees suggested several improvements to this paper, for which we are grateful. We thank Joel Brewster Lewis for the description of the orbits of odd sizes under the Kreweras complement (Proposition~\ref{prop:orbits_of_odd_length}, Proposition~\ref{prop:orbits_of_size_d_odd}, and Theorem~\ref{thm:orbit_count_odd_sizes}), and we thank Sergi Elizalde for suggesting examples of statistics that exhibit homomesy under the inverse map. JS was supported by a grant from the Simons Foundation/SFARI (527204, JS).
\section{Summary of methods} \label{sec:method} \Findstat~\cite{FindStat} is an online database of combinatorial statistics developed by Chris Berg and Christian Stump in 2011 and highlighted as an example of a {fingerprint database} in the Notices of the American Mathematical Society~\cite{fingerprint}. \Findstat is not only a searchable database that collects information; it also yields dynamic information about connections between combinatorial objects. \Findstat takes statistics input by a user (via the website \cite{FindStat} or the \sage interface), uses \sage to apply combinatorial maps, and outputs corresponding statistics on other combinatorial objects. \Findstat has grown expansively to a total of 1787 statistics on 23 combinatorial collections with 249 maps among them (as of April 27, 2022). For this project, we analyzed all combinations of bijective maps and statistics on one combinatorial collection: permutations. At the time of this investigation, there were 387 statistics and 19 bijective maps on permutations in \Findstat. For each map/statistic pair, our empirical investigations either suggested a possible homomesy or provided a counterexample in the form of two orbits with differing averages. We then set about finding proofs for the experimentally identified potential homomesies. These homomesy results are the main theorems of this paper: Theorems \ref{thm:LC}, \ref{thmboth}, \ref{onlycomp}, \ref{onlyrev}, \ref{thm:foata} and \ref{Thm:Kreweras}. Thanks to the existing interface between SageMath \cite{sage} and FindStat, we were able to automatically search for pairs of maps and statistics that exhibited homomesic behavior.
For each value of $2 \leq n \leq 6$, we ran the following code to identify potential homomesies: \begin{figure}[H] \begin{python} sage: from sage.databases.findstat import FindStatMaps, FindStatStatistics # Access to the FindStat methods ....: findstat()._allow_execution = True # To run all the code from Findstat ....: for map in FindStatMaps(domain="Cc0001", codomain= "Cc0001"): # Cc0001 is the Permutation Collection ....: if map.properties_raw().find('bijective') >= 0: # The map is bijective ....: F = DiscreteDynamicalSystem(Permutations(n), map) # Fix n ahead of time ....: for stat in FindStatStatistics("Permutations"): ....: if F.is_homomesic(stat): ....: print(map.id(), stat.id()) \end{python} \end{figure} Note that the choice of running the verification of homomesies for permutations of $2$ to $6$ elements is not arbitrary: at $n=6$, the computational results stabilize. We did not find any false positives when using the data from $n=6$. On the other hand, testing only smaller values of $n$ would have given us many false positives. For example, several statistics in the FindStat database involve the number of occurrences of some permutation patterns of length $5$. These statistics evaluate to $0$ for any permutation of fewer than $5$ elements, which misleadingly makes them appear $0$-mesic if not tested for values of $n$ at least $5$. In the statements of our results, we refer to statistics and maps by their name as well as their FindStat identifier (ID). There is a webpage on FindStat associated to each statistic or map; for example, the URL for the inversion number, which has FindStat ID 18, is: \url{http://www.findstat.org/StatisticsDatabase/St000018}. We often refer to statistics as ``Statistic 18'' (or simply ``Stat 18'') and similarly for maps. 
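The Sage method \texttt{is\_homomesic} used above amounts to a direct orbit computation. For readers without access to Sage, the following self-contained Python sketch performs the same test for a single map/statistic pair; the complement map paired with the inversion number (Statistic 18) serves as an illustrative example, and the function names are ours rather than FindStat's.

```python
from fractions import Fraction
from itertools import permutations

def inv(sigma):
    """Statistic 18: the number of inversions of a permutation."""
    n = len(sigma)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if sigma[i] > sigma[j])

def complement(sigma):
    """The complement map: sigma_i -> n + 1 - sigma_i."""
    n = len(sigma)
    return tuple(n + 1 - v for v in sigma)

def is_homomesic(elements, f, stat):
    """True iff `stat` has the same average on every orbit of the bijection f."""
    seen, averages = set(), set()
    for x in elements:
        if x in seen:
            continue
        orb, y = [], x
        while y not in seen:           # collect the orbit of x under f
            seen.add(y)
            orb.append(y)
            y = f(y)
        averages.add(Fraction(sum(stat(p) for p in orb), len(orb)))
    return len(averages) == 1

# Mirror the Sage search for one map/statistic pair, n = 2, ..., 6:
for n in range(2, 7):
    print(n, is_homomesic(list(permutations(range(1, n + 1))), complement, inv))
    # prints True for every n: each complement orbit averages n(n-1)/4
```

Exact rational arithmetic (\texttt{Fraction}) avoids any floating-point ambiguity when comparing orbit averages.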
The FindStat database assigns IDs to statistics and maps sequentially; at the time of our investigation, the maximum ID in the FindStat database for a statistic on permutations was 1778, and the maximum ID for a bijective map on permutations was 241. There were four statistics for which we could not disprove homomesy, because the database provided neither values for them on permutations of at least $5$ items nor code to evaluate these statistics. Those are Statistics 1168, 1171, 1582 and 1583; they all correspond to the dimension of some vector spaces. A visual summary of our results is given in Figure~\ref{fig:table}. \begin{figure}[ht] \includegraphics[width=10cm]{findstattable} \caption{Each column corresponds to one of the 19 bijective maps on permutations stored in FindStat, and is labeled using the map's FindStat identifier. The rows correspond to the 387 statistics on permutations. Green boxes correspond to the $117$ proven homomesies. The maps that have several homomesic statistics are the reverse map (64), the complement map (69), the Kreweras complement (88) and its inverse (89), and the Lehmer code rotation (149). The Lehmer-code to major-code bijection (62) and its inverse (73), and the Foata bijection (67) and its inverse (175) also exhibit homomesy when paired with the statistic that is computed as the major index minus the number of inversions (1377). }\label{fig:table} \end{figure} \section{Background} \label{sec:background} This section gives background definitions and properties regarding the two main topics in our title: permutations (Subsection~\ref{sec:permutations}) and homomesy (Subsection~\ref{sec:homomesy}). It also discusses prior work on homomesy for maps on permutations in Subsection~\ref{sec:prior}. \subsection{Permutations} \label{sec:permutations} Permutations are a central object in combinatorics, and many statistics on them are well-studied. We define here a few classical ones.
Readers familiar with statistics on permutations may skip this subsection without loss of continuity. \begin{definition} \label{def:basic_stats} Let $[n] = \{1,2,\ldots, n\}$. A \textbf{permutation} $\sigma$ of $[n]$ is a bijection from $[n]$ to $[n]$ in which the image of $i \in [n]$ is $\sigma_i$. We use the one-line notation, which means we write $\sigma = \sigma_1\sigma_2\ldots\sigma_n$. Permutations form a group called the \textbf{symmetric group}; we write $S_n$ for the set of all permutations of $[n]$. For a permutation $\sigma=\sigma_1\sigma_2\ldots\sigma_n$, we say that $(i,j)$ is an \textbf{inversion} of $\sigma$ if $i<j$ and $\sigma_j < \sigma_i$. We write $\Inv(\sigma)$ for the set of inversions in $\sigma$. We say that $(\sigma_i, \sigma_j)$ is an \textbf{inversion pair} of $\sigma$ if $(i, j)$ is an inversion of $\sigma$. This also corresponds to pairs $(\sigma_i, \sigma_j)$ with $\sigma_i > \sigma_j$ and $\sigma_i$ positioned to the left of $\sigma_j$ in $\sigma = \sigma_1\sigma_2 \ldots\sigma_n$. The \textbf{inversion number} of a permutation $\sigma$, denoted $\inv(\sigma)$, is the number of inversions. We say that $i$ is a \textbf{descent} exactly when $(i, i+1)$ is an inversion. At times we will instead say $\sigma$ has a descent at $i$. This also corresponds to the indices $i \in [n-1]$ such that $\sigma_i > \sigma_{i+1}$. We write $\Des(\sigma)$ for the set of descents in $\sigma$, and $\des(\sigma)=\#\Des(\sigma)$ for the number of descents. If $i \in [n-1]$ is not a descent, we say that it is an \textbf{ascent}. We call a \textbf{peak} an ascent that is followed by a descent, and a \textbf{valley} a descent that is followed by an ascent. The \textbf{major index} of a permutation is the sum of its descents. We write $\maj(\sigma)$ to denote the major index of the permutation $\sigma$. A \textbf{run} in a permutation is a contiguous increasing sequence. 
Runs in permutations are separated by descents, so the number of runs is one more than the number of descents. \end{definition} \begin{example} For the permutation $\sigma = 216354$, the inversions are $\{(1,2), (3,4), (3,5), (3,6), (5,6)\}$, and the descents are $\{1,3,5\}$. There are two peaks ($2$ and $4$), and two valleys ($1$ and $3$), and the major index is $1+3+5=9$. The permutation has four runs, here separated with vertical bars: $2|16|35|4$. \end{example} \begin{definition}[Patterns in permutations]\label{def:patterns} We say that a permutation $\sigma$ of $[n]$ contains the \textbf{pattern} $abc$, with $\{a,b,c\} = \{1,2,3\}$, if there is a triplet $\{i_1 < i_2 < i_3\} \subseteq [n]$ such that $abc$ and $\sigma_{i_1}\sigma_{i_2}\sigma_{i_3}$ are in the same relative order. We call each such triplet an \textbf{occurrence} of the pattern $abc$ in $\sigma$. \end{definition} \begin{example} The permutation $\sigma=1324$ contains one occurrence of the pattern $213$, since $\sigma_2\sigma_3\sigma_4=324$ is in the same relative order as $213$. The permutation $415236$ contains four occurrences of the pattern $312$, because $412$, $413$, $423$ and $523$ all appear in the relative order $312$ in $415236$. \end{example} More generally, patterns of any length can be defined as subsequences of a permutation that appear in the same relative order as the pattern. The definition above is for what we call \textit{classical} patterns. We can refine this notion by putting additional constraints on the triplets of positions that form the occurrences. \begin{definition}\label{def:consecutive_patterns} The \textbf{consecutive pattern} (or \textbf{vincular pattern}) $a-bc$ (resp. $ab-c$) is a pattern in which $c$ occurs right after $b$ (resp. in which $b$ occurs right after $a$). The \textit{classical} pattern $abc$ corresponds to the pattern $a-b-c$. 
\end{definition} For example, the pattern $13-2$ means that we need to find three entries in the permutation such that \begin{itemize} \item The smallest is immediately followed by the largest; \item The median entry comes after the smallest and the largest, but not necessarily immediately after. \end{itemize} \begin{example} The permutation $415236$ contains two occurrences of the pattern $3-12$, because $423$ and $523$ appear in the relative order $312$ in $415236$, with the last entries being adjacent in the permutation. On the other hand, $\{1,2,5\}$ is an occurrence of the pattern $312$ (since $413$ appears in the right order) that is not an occurrence of $3-12$. Inversions correspond to the classical pattern $21$ and to the consecutive pattern $2-1$, whereas descents correspond to the consecutive pattern $21$. \end{example} Unless otherwise specified, ``patterns'' refer to classical patterns. \subsection{Homomesy} \label{sec:homomesy} First defined in 2015 by James~Propp and Tom~Roby~\cite{PR2015}, homomesy relates the average of a given statistic over some set to the averages over orbits formed by a bijective map. Note that in this paper, we use the word \textbf{map} instead of function or action, to match the terminology in FindStat. \begin{definition} Given a finite set $S$, an element $x\in S$, and an invertible map $\mathcal{X}:S \rightarrow S$, the \textbf{orbit} $\mathcal{O}(x)$ is the sequence consisting of $y_i\in S$ such that $y_i=\mathcal{X}^i(x) \textrm{ for some } i\in\mathbb{Z}$. That is, $\mathcal{O}(x)$ contains the elements of $S$ reachable from $x$ by applying $\mathcal{X}$ or $\mathcal{X}^{-1}$ any number of times. The \textbf{size} of an orbit is the number of distinct elements in the sequence, denoted $|\mathcal{O}(x)|$. The \textbf{order} of $\mathcal{X}$ is the least common multiple of the sizes of the orbits.
\end{definition} \begin{definition}[\cite{PR2015}] Given a finite set $S$, a bijective map $\mathcal{X}:S \rightarrow S$, and a statistic $f:S \rightarrow \mathbb{Z}$, we say that $(S, \mathcal{X}, f)$ exhibits \textbf{homomesy} if there exists $c \in \mathbb{Q}$ such that for every orbit $\mathcal{O}$, \begin{center} $\displaystyle\frac{1}{|\mathcal{O}|} \sum_{x \in \mathcal{O}} f(x) = c$ \end{center} where $|\mathcal{O}|$ denotes the size of $\mathcal{O}$. If such a $c$ exists, we say the triple is \textbf{$c$-mesic}. \end{definition} When the set $S$ is clear from context, we may say a statistic is \textbf{homomesic with respect to $\mathcal{X}$} rather than explicitly stating the triple. When the map $\mathcal{X}$ is also implicit, we may simply say a statistic is \textbf{homomesic}. Homomesy may be generalized beyond the realms of bijective actions and integer statistics, but we will not address these generalizations in this paper. \begin{remark}\label{global_avg} Note that whenever a statistic is homomesic, the orbit-average value is indeed the global average. \end{remark} We end this subsection with two general lemmas about homomesy that will be used later. In the interest of making this paper self-contained, we include proofs, though the results are well-known. \begin{lem} \label{lem:inverse} If a triple $(S,\mathcal{X},f)$ is $c$-mesic, then so is $(S,\mathcal{X}^{-1},f)$. \end{lem} \begin{proof} A bijective map and its inverse have exactly the same elements in their orbits; thus the orbit-averages for a given statistic are also equal. \end{proof} \begin{lem} \label{lem:sum_diff_homomesies} For a given action, linear combinations of homomesic statistics are also homomesic. \end{lem} \begin{proof} Suppose $f,g$ are homomesic statistics with respect to a bijective map $\mathcal{X}:S \rightarrow S$, where $S$ is a finite set.
So $\displaystyle\frac{1}{|\mathcal{O}|} \sum_{x \in \mathcal{O}} f(x) = c$ and $\displaystyle\frac{1}{|\mathcal{O}|} \sum_{x \in \mathcal{O}} g(x) = d$ for some $c,d\in\mathbb{C}$. Let $a,b\in\mathbb{C}$. Then \[\displaystyle\frac{1}{|\mathcal{O}|} \sum_{x \in \mathcal{O}} (af+bg)(x) = a\displaystyle\frac{1}{|\mathcal{O}|} \sum_{x \in \mathcal{O}} f(x) + b\displaystyle\frac{1}{|\mathcal{O}|} \sum_{x \in \mathcal{O}} g(x)=ac+bd.\] Thus, $af+bg$ is homomesic with respect to $\mathcal{X}$ with average value $ac+bd$. \end{proof} \subsection{Prior work on homomesy on permutations} \label{sec:prior} Since the homomesy phenomenon was defined, mathematicians have looked for it on natural combinatorial objects. Permutations naturally arose as such a structure, and some recent work initiated the study of homomesic statistics on permutations. Michael La\,Croix and Tom Roby \cite{LaCroixRoby} focused on the statistic counting the number of fixed points of a permutation, which they studied under compositions of Foata's first fundamental transform with what they call ``dihedral actions'': a few maps that include the complement, the inverse, and the reverse, all discussed in Section~\ref{sec:comp_rev}. Simultaneously, Elizabeth Sheridan-Rossi considered a wider range of statistics for the same maps, as well as for the compositions of dihedral actions with the Foata bijection \cite{Sheridan-Rossi}. Our approach differs from the previous studies by being more systematic. As previously mentioned, we proved or disproved homomesy for all 7,345 combinations of a bijective map and a statistic on permutations that were in the FindStat database. It is worth noting that the interesting maps described in \cite{LaCroixRoby} and \cite{Sheridan-Rossi} are compositions of FindStat maps, but they are not listed as single maps in FindStat. We did not consider compositions of FindStat maps; this would be an interesting avenue for further study.
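To close this background section, the basic statistics of Definition~\ref{def:basic_stats} are all straightforward to compute. The short Python sketch below (function names are ours) reproduces the values worked out above for $\sigma = 216354$: its inversion set, descent set, major index, and number of runs.

```python
def inversions(sigma):
    """Inv(sigma): pairs (i, j) with i < j and sigma_j < sigma_i (1-indexed)."""
    n = len(sigma)
    return {(i + 1, j + 1) for i in range(n) for j in range(i + 1, n)
            if sigma[j] < sigma[i]}

def descents(sigma):
    """Des(sigma): positions i with sigma_i > sigma_{i+1} (1-indexed)."""
    return {i + 1 for i in range(len(sigma) - 1) if sigma[i] > sigma[i + 1]}

def major_index(sigma):
    """maj(sigma): the sum of the descents."""
    return sum(descents(sigma))

def num_runs(sigma):
    """Runs are maximal increasing factors: one more than the number of descents."""
    return len(descents(sigma)) + 1

sigma = (2, 1, 6, 3, 5, 4)            # the permutation 216354
print(sorted(inversions(sigma)))      # [(1, 2), (3, 4), (3, 5), (3, 6), (5, 6)]
print(sorted(descents(sigma)))        # [1, 3, 5]
print(major_index(sigma))             # 9
print(num_runs(sigma))                # 4
```

These helpers agree with FindStat's Statistics 18 (via the size of the inversion set), 21, 4, and 470 on this example.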
\section{Lehmer code rotation} \label{sec:lehmer} An important way to describe a permutation is through its inversions. The Lehmer code of a permutation (defined below) completely characterizes it, so we have a bijection between Lehmer codes and permutations. In this section, after describing the Lehmer code, we define the Lehmer code rotation map. We then state Theorem~\ref{thm:LC}, which lists the 45 statistics in FindStat that are homomesic for this map. Before proving this theorem, in Subsection~\ref{subsec:lehmer_orbit} we describe the orbits of the Lehmer code rotation and make connections with actions on other combinatorial objects in Remark~\ref{remark:lehmer_connections}. The homomesies are then proved, starting with statistics related to inversions (Subsection \ref{subsec:lehmer_inv}), then those related to descents (Subsection \ref{subsec:lehmer_des}), to permutation patterns (Subsection \ref{subsec:lehmer_pp}), and finishing with a few other statistics (Subsection \ref{subsec:lehmer_misc}). We also give one open problem related to homomesic permutation patterns for the Lehmer code rotation (Problem~\ref{prob:pp_LRC}). \begin{definition} \label{def:lehmercode} The \textbf{Lehmer code} of a permutation $\sigma\in S_n$ is: \[L(\sigma )=(L(\sigma )_{1},\ldots, L(\sigma )_{n})\quad {\text{where}}\quad L(\sigma )_{i}=\#\{j>i\mid \sigma _{j}<\sigma _{i}\}.\] \end{definition} It is well known (see, for example, \cite[p.12]{Knuth_AOCP3}) that there is a bijection between permutations of $[n]$ and tuples of length $n$ whose entry at position $i$ is an integer between $0$ and $n-i$. Hence, the Lehmer code uniquely defines a permutation. \begin{example} The Lehmer code of the permutation $31452$ is $L(31452)= (2,0,1,1,0)$, whereas the Lehmer code of $42513$ is $L(42513) = (3,1,2,0,0)$.
\end{example} Since the entries of the Lehmer code count the number of inversions that start at each entry of the permutation, the following observation is immediate: \begin{prop}\label{prop:LC_num_inv_is_sum} The number of inversions in the permutation $\sigma$ is given by $\sum_{i=1}^n L(\sigma)_i$. \end{prop} \begin{definition} The \textbf{Lehmer code rotation} (FindStat map 149) is a map that sends a permutation $\sigma$ to the unique permutation $\tau$ (of the same set) such that every entry in the Lehmer code of $\tau$ is cyclically (modulo $n+1-i$) one larger than the corresponding entry of the Lehmer code of $\sigma$. In symbols: \begin{align*} \L : \sigma & \mapsto \tau \\ L(\sigma)_i & \mapsto L(\tau)_i = L(\sigma)_i+1 \mod (n-i+1). \end{align*} \end{definition} An example is illustrated in Figure \ref{fig:LCR}. \begin{figure} \centering \includegraphics{Lehmer_code_rotation} \caption{The Lehmer code rotation applied to the permutation $5371246$ yields the permutation $6413572$. The step-by-step process is illustrated by this picture.}\label{fig:LCR} \end{figure} \begin{example} The permutation $\sigma = 31452$ has Lehmer code $L(\sigma) = (2,0,1,1,0)$. Hence, \[L(\L(\sigma)) = (2+1 \mod 5,\ 0+1 \mod 4,\ 1+1\mod 3,\ 1+1\mod 2,\ 0+1 \mod 1) = (3,1,2,0,0).\] Because $(3,1,2,0,0)$ is the Lehmer code of the permutation $42513$, $\L(31452) = 42513$. \end{example} \begin{remark} \label{remark:lehmer_connections} Despite its presence in FindStat, we could not find the Lehmer code rotation map in the literature. However, we did find six similar maps in a paper by Vincent Vajnovszki \cite{vajnovszki}. Although those were not in the FindStat database, we tested them and found they did not exhibit interesting homomesies. Also, the Lehmer code rotation on permutations is equivalent to the toggle group action of \emph{rowmotion}~\cite{SW2012} on the poset constructed as the disjoint union of chains of $i$ elements for $1\leq i\leq n-1$.
As noted by Martin Rubey (personal communication), the distributive lattice of order ideals of this poset forms an $\omega$-sorting order in the sense of \cite{Armstrong2009} with sorting word $\omega=[1,\ldots,n,1,\ldots,n-1,1,\ldots,n-2,\ldots,1,2,1]$. \end{remark} The main theorem of this section is the following. \begin{thm}\label{thm:LC} The Lehmer code rotation map exhibits homomesy for the following $45$ statistics found in the FindStat database: \begin{itemize} \item Statistics related to inversions: \begin{itemize} \rm \item \hyperref[18_246_LC]{\Stat ~$18$}: The number of inversions of a permutation $(${\small average: $\frac{n(n-1)}{4}$}$)$ \item \hyperref[18_246_LC]{\Stat ~$246$}: The number of non-inversions of a permutation $(${\small average: $\frac{n(n-1)}{4}$}$)$ \item \hyperref[495_836_837_LC]{\Stat ~$495$}: The number of inversions of distance at most $2$ of a permutation $(${\small average: $\frac{2n-3}{2}$}$)$ \item \hyperref[54_1556_1557_LC]{\Stat ~$1556$}: The number of inversions of the third entry of a permutation $(${\small average: $\frac{n-3}{2}$}$)$ \item \hyperref[54_1556_1557_LC]{\Stat ~$1557$}: The number of inversions of the second entry of a permutation $(${\small average: $\frac{n-2}{2}$}$)$ \end{itemize} \item Statistics related to descents: \begin{itemize} \rm \item \hyperref[4_21_245_833_LC]{\Stat ~$4$}: The major index of a permutation $(${\small average: $\frac{n(n-1)}{4}$}$)$ \item \hyperref[4_21_245_833_LC]{\Stat ~$21$}: The number of descents of a permutation $(${\small average: $\frac{n-1}{2}$}$)$ \item \hyperref[23_353_365_366_LC]{\Stat ~$23$}: The number of inner peaks of a permutation $(${\small average: $\frac{n-2}{3}$}$)$ \item \hyperref[35_92_99_483_834_LC]{\Stat ~$35$}: The number of left outer peaks of a permutation $(${\small average: $\frac{2n-1}{6}$}$)$ \item \hyperref[35_92_99_483_834_LC]{\Stat ~$92$}: The number of outer peaks of a permutation $(${\small average: $\frac{n+1}{3}$}$)$ \item
\hyperref[35_92_99_483_834_LC]{\Stat ~$99$}: The number of valleys of a permutation, including the boundary $(${\small average: $\frac{n+1}{3}$}$)$ \item \hyperref[4_21_245_833_LC]{\Stat ~$245$}: The number of ascents of a permutation $(${\small average: $\frac{n-1}{2}$}$)$ \item \hyperref[23_353_365_366_LC]{\Stat ~$353$}: The number of inner valleys of a permutation $(${\small average: $\frac{n-2}{3}$}$)$ \item \hyperref[23_353_365_366_LC]{\Stat ~$365$}: The number of double ascents of a permutation $(${\small average: $\frac{n-2}{6}$}$)$ \item \hyperref[23_353_365_366_LC]{\Stat ~$366$}: The number of double descents of a permutation $(${\small average: $\frac{n-2}{6}$}$)$ \item \hyperref[325_470_LC]{\Stat ~$470$}: The number of runs in a permutation $(${\small average: $\frac{n+1}{2}$}$)$ \item \hyperref[35_92_99_483_834_LC]{\Stat ~$483$}: The number of times a permutation switches from increasing to decreasing or decreasing to increasing $(${\small average: $\frac{2n-4}{3}$}$)$ \item \hyperref[638_LC]{\Stat ~$638$}: The number of up-down runs of a permutation $(${\small average: $\frac{4n+1}{6}$}$)$ \item \hyperref[4_21_245_833_LC]{\Stat ~$833$}: The comajor index of a permutation $(${\small average: $\frac{n(n-1)}{4}$}$)$ \item \hyperref[35_92_99_483_834_LC]{\Stat ~$834$}: The number of right outer peaks of a permutation $(${\small average: $\frac{2n-1}{6}$}$)$ \item \hyperref[495_836_837_LC]{\Stat ~$836$}: The number of descents of distance $2$ of a permutation $(${\small average: $\frac{n-2}{2}$}$)$ \item \hyperref[495_836_837_LC]{\Stat ~$837$}: The number of ascents of distance $2$ of a permutation $(${\small average: $\frac{n-2}{2}$}$)$ \item \hyperref[1114_1115_LC]{\Stat ~$1114$}: The number of odd descents of a permutation $(${\small average: $\frac{1}{2}\lceil\frac{n-1}{2}\rceil$}$)$ \item \hyperref[1114_1115_LC]{\Stat ~$1115$}: The number of even descents of a permutation $(${\small average: $\frac{1}{2}\lfloor\frac{n-1}{2} \rfloor$}$)$ \end{itemize}
\item Statistics related to permutation patterns: \begin{itemize} \rm \item \hyperref[355_to_360_LC]{\Stat ~$355$}: The number of occurrences of the pattern $21-3$ $(${\small average: $\frac{(n-1)(n-2)}{12}$ }$)$ \item \hyperref[355_to_360_LC]{\Stat ~$356$}: The number of occurrences of the pattern $13-2$ $(${\small average: $\frac{(n-1)(n-2)}{12}$ }$)$ \item \hyperref[355_to_360_LC]{\Stat ~$357$}: The number of occurrences of the pattern $12-3$ $(${\small average: $\frac{(n-1)(n-2)}{12}$ }$)$ \item \hyperref[355_to_360_LC]{\Stat ~$358$}: The number of occurrences of the pattern $31-2$ $(${\small average: $\frac{(n-1)(n-2)}{12}$ }$)$ \item \hyperref[355_to_360_LC]{\Stat ~$359$}: The number of occurrences of the pattern $23-1$ $(${\small average: $\frac{(n-1)(n-2)}{12}$ }$)$ \item \hyperref[355_to_360_LC]{\Stat ~$360$}: The number of occurrences of the pattern $32-1$ $(${\small average: $\frac{(n-1)(n-2)}{12}$ }$)$ \item \hyperref[423_435_437_LC]{\Stat ~$423$}: The number of occurrences of the pattern $123$ or of the pattern $132$ in a permutation $(${\small average: $\frac{1}{3}\binom{n}{3}$ }$)$ \item \hyperref[423_435_437_LC]{\Stat ~$435$}: The number of occurrences of the pattern $213$ or of the pattern $231$ in a permutation $(${\small average: $\frac{1}{3}\binom{n}{3}$ }$)$ \item \hyperref[423_435_437_LC]{\Stat ~$437$}: The number of occurrences of the pattern $312$ or of the pattern $321$ in a permutation $(${\small average: $\frac{1}{3}\binom{n}{3}$ }$)$ \item \hyperref[709_LC]{\Stat ~$709$}: The number of occurrences of $14-2-3$ or $14-3-2$ $(${\small average: $\frac{1}{12}\binom{n-1}{3}$ }$)$ \item \hyperref[1084_LC]{\Stat ~$1084$}: The number of occurrences of the vincular pattern $|1-23$ in a permutation $(${\small average: $\frac{n-2}{6}$ }$)$ \end{itemize} \item Other statistics: \begin{itemize} \rm \item \hyperref[7_991_LC]{\Stat ~$7$}: The number of saliances (right-to-left maxima) of the permutation $(${\small average: $H_n = \sum_{i=1}^n 
\frac{1}{i}$}~$)$ \item \hyperref[20_LC]{\Stat ~$20$}: The rank of the permutation (among the permutations, in lexicographic order) $(${\small average: $\frac{n!+1}{2}$ }$)$ \item \hyperref[54_1556_1557_LC]{\Stat ~$54$}: The first entry of the permutation $(${\small average: $\frac{n+1}{2}$ }$)$ \item \hyperref[325_470_LC]{\Stat ~$325$}: The width of a tree associated to a permutation $(${\small average: $\frac{n+1}{2}$ }$)$ \item \hyperref[692_796_LC]{\Stat ~$692$}: Babson and Steingrímsson's statistic stat of a permutation $(${\small average: $\frac{n(n-1)}{4}$ }$)$ \item \hyperref[692_796_LC]{\Stat ~$796$}: Babson and Steingrímsson's statistic stat' of a permutation $(${\small average: $\frac{n(n-1)}{4}$ }$)$ \item \hyperref[7_991_LC]{\Stat ~$991$}: The number of right-to-left minima of a permutation $(${\small average: $H_n = \sum_{i=1}^n \frac{1}{i}$ }$)$ \item \hyperref[1377_1379_LC]{\Stat ~$1377$}: The major index minus the number of inversions of a permutation $(${\small average: $0$ }$)$ \item \hyperref[1377_1379_LC]{\Stat ~$1379$}: The number of inversions plus the major index of a permutation $(${\small average: $\frac{n(n-1)}{2}$ }$)$ \item \hyperref[1640_LC]{\Stat ~$1640$}: The number of ascent tops in the permutation such that all smaller elements appear before $(${\small average: $1-\frac{1}{n}$ }$)$ \end{itemize} \end{itemize} \end{thm} \subsection{Orbit structure} \label{subsec:lehmer_orbit} Before beginning the proof of Theorem~\ref{thm:LC}, we show a few results on the orbit structure of the Lehmer code rotation. \begin{thm}[Orbit cardinality]\label{Thm: L-Orbit cardinality} All orbits of the Lehmer code rotation have size $\lcm(1,2,\ldots, n)$. \end{thm} \begin{proof} Since there is a bijection between permutations and their Lehmer codes, one can look at the orbit of the map directly on the Lehmer code. We know that $L(\L(\sigma))_i = L(\sigma)_i + 1 \mod (n+1-i)$. 
Therefore, the minimum $k >0$ such that $L(\sigma)_i = L(\L^k(\sigma))_i$ is $k = n+1-i$. Looking at all values $1 \leq i \leq n$, we need $\lcm(1,2,3,\ldots, n)$ iterations of $\L$ to get back to the original Lehmer code. \end{proof} The following five lemmas about entries of the Lehmer code over an orbit will be useful to prove homomesies related to inversions and descents. \begin{lem}\label{lem:equioccurrences_Lehmer_code} Over one orbit of the Lehmer code rotation, the numbers $\{0, 1, 2, \ldots, n-i\}$ all appear equally often as $L(\sigma)_i$, the $i$-th entry of the Lehmer code. \end{lem} \begin{proof} We know that $L(\L(\sigma))_i = L(\sigma)_i + 1 \mod (n+1-i)$, which also means that $L(\L^j(\sigma))_i = L(\sigma)_i + j \mod (n+1-i)$. Using the fact that each orbit has size $\lcm(1, \ldots, n)$, the $i$-th entry of the Lehmer code has each value in $\{0, \ldots, n-i\}$ appearing exactly $\frac{\lcm(1,\ldots,n)}{n+1-i}$ times. \end{proof} \begin{lem}\label{lem:equioccurrences_of_pairs_Lehmer_code} Over one orbit of the Lehmer code rotation, the pairs $\{(a,b) \mid a \in \{0,\ldots, n-i\}, b \in \{0,\ldots, n-i-1\}\}$ all appear equally often as $(L(\sigma)_i, L(\sigma)_{i+1})$. That is, pairs of possible adjacent entries are all equally likely over any given orbit. \end{lem} \begin{proof} Let $(L(\sigma)_i,L(\sigma)_{i+1}) = (a,b)$. Then $\Big(L(\L(\sigma))_i, L(\L(\sigma))_{i+1}\Big) = \Big(a+1 \mod (n-i+1), b+1 \mod (n-i)\Big)$. Since $n-i$ and $n-i+1$ are coprime, the successive application of $\L$ spans all the possibilities for $(a,b)$ exactly once before returning to $(a,b)$ in $(n-i)(n-i+1)$ steps. \end{proof} The latter statement can be expanded to qualify the independence of non-adjacent entries, as explained by the next two lemmas.
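The orbit structure described above lends itself to a direct computational check. The following Python sketch is not part of the original text: it is our own sanity check, with hypothetical function names, implementing the Lehmer code, its inverse, and the Lehmer code rotation $\L$, and verifying Theorem \ref{Thm: L-Orbit cardinality} and Lemma \ref{lem:equioccurrences_Lehmer_code} for $n=4$.

```python
from math import lcm

def lehmer_code(sigma):
    # L(sigma)_i = #{ j > i : sigma_j < sigma_i }  (written 0-indexed)
    n = len(sigma)
    return tuple(sum(1 for j in range(i + 1, n) if sigma[j] < sigma[i])
                 for i in range(n))

def from_lehmer(code):
    # Invert the Lehmer code: entry c_i selects the (c_i+1)-st smallest
    # value not yet used.
    vals = list(range(1, len(code) + 1))
    return tuple(vals.pop(c) for c in code)

def rotation(sigma):
    # Lehmer code rotation: add 1 to the i-th code entry modulo n+1-i
    # (modulo n-i in 0-indexed terms).
    n = len(sigma)
    code = lehmer_code(sigma)
    return from_lehmer(tuple((code[i] + 1) % (n - i) for i in range(n)))

def orbit(sigma):
    # The forward orbit of sigma under the rotation.
    orb, cur = [sigma], rotation(sigma)
    while cur != sigma:
        orb.append(cur)
        cur = rotation(cur)
    return orb

# Orbit cardinality theorem: every orbit has size lcm(1, ..., n).
assert len(orbit((1, 2, 3, 4))) == lcm(1, 2, 3, 4)

# Equioccurrence lemma: over an orbit, the i-th code entry takes each
# value in {0, ..., n-1-i} equally often.
codes = [lehmer_code(s) for s in orbit((1, 2, 3, 4))]
for i in range(4):
    counts = [sum(c[i] == v for c in codes) for v in range(4 - i)]
    assert len(set(counts)) == 1
```

Since each code entry advances independently by $1$ modulo its own modulus, the orbit length is forced to be the least common multiple of the moduli, which is what the assertions above confirm for a small case.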
\begin{lem}\label{lem:equioccurrences_of_distant_pairs_Lehmer_code} Over one orbit of the Lehmer code rotation, the pairs $\{(a,b) \mid a \in \{0,\ldots, n-i\}, b \in \{0,\ldots, n-j\}\}$ all appear equally often as $(L(\sigma)_i, L(\sigma)_{j})$ if $n-i+1$ and $n-j+1$ are coprime. \end{lem} \begin{proof} Let $(L(\sigma)_i,L(\sigma)_{j}) = (a,b)$. Then $\Big(L(\L(\sigma))_i, L(\L(\sigma))_{j}\Big) = \Big(a+1 \mod (n-i+1), b+1 \mod (n-j+1)\Big)$. Since $n-i+1$ and $n-j+1$ are coprime, the successive application of $\L$ spans all the possibilities for $(a,b)$ exactly once before returning to $(a,b)$ in $(n-i+1)(n-j+1)$ steps. \end{proof} The lemma above applies when $n-i+1$ and $n-j+1$ are coprime. The case where they are not is considered in the following two lemmas. \begin{lem}\label{lem:pairs_in_Lehmer_code_with_same_parities} Over one orbit of the Lehmer code rotation, the quantity $L(\sigma)_i-L(\sigma)_j \mod k$ is constant, for any values $i$ and $j$ such that $k$ is a divisor of both $n-i+1$ and $n-j+1$. \end{lem} \begin{proof} For any $m$, it suffices to show that $$L(\sigma)_i - L(\sigma)_j \mod k = L(\L^m(\sigma))_i - L(\L^m(\sigma))_j \mod k.$$ We know that $$L(\L^m(\sigma))_i - L(\L^m(\sigma))_j \mod k = \big(L(\sigma)_i +m \mod (n-i+1)\big)- \big(L(\sigma)_j+m \mod (n-j+1)\big) \mod k .$$ Since $k$ divides both $n-i+1$ and $n-j+1$, the latter is equal to $$L(\sigma)_i +m - L(\sigma)_j-m \mod k = L(\sigma)_i - L(\sigma)_j \mod k.$$ Putting the pieces together, we conclude that $$L(\sigma)_i - L(\sigma)_j \mod k = L(\L^m(\sigma))_i - L(\L^m(\sigma))_j \mod k.$$ \end{proof} \begin{lem}\label{lem:equioccurrences_pairs_distance_2} If $n-i$ is even, there exists for each orbit of the Lehmer code rotation a value $r \in \{0,1\}$ such that the pairs $\{(a,b) \mid a \in \{0, \ldots, n-i+1\}, b \in \{0, \ldots, n-i-1\}, \text{ with }a - b = r \mod 2\}$ all appear equally often as $(L(\sigma)_{i-1}, L(\sigma)_{i+1})$.
\end{lem} \begin{proof} When $n-i$ is even, both $n-i+2$ and $n-i$ are even, meaning that their greatest common divisor is $2$. Thanks to Lemma \ref{lem:pairs_in_Lehmer_code_with_same_parities}, we know that $L(\sigma)_{i-1}-L(\sigma)_{i+1}$ has a constant value modulo $2$ over each orbit. What is left to prove is that all pairs $(a,b)$ that satisfy this constraint are equally likely to occur as $(L(\sigma)_{i-1}, L(\sigma)_{i+1})$. Acting $m$ times with the Lehmer code rotation, we look at the evolution of the pair $\big(L(\sigma)_{i-1},L(\sigma)_{i+1}\big)$: \[ \big(L(\L^m(\sigma))_{i-1}, L(\L^m(\sigma))_{i+1}\big) = \big(a + m \mod (n-i+2), b + m \mod (n-i) \big), \] and the orbit spans all possible combinations satisfying the parity condition before returning back to $(a,b)$ in $m = \frac{(n-i+2)(n-i)}{2}$ steps. \end{proof} \subsection{Statistics related to inversions} \label{subsec:lehmer_inv} In this subsection, we state and prove propositions giving the homomesies related to inversions of Theorem~\ref{thm:LC}, as well as homomesy for a family of statistics that do not appear in FindStat (see Theorem \ref{thm:LC_inversions_at_entry}). Recall inversions of a permutation from Definition \ref{def:basic_stats}. \begin{prop}[Statistics 18, 246]\label{18_246_LC} The number of inversions is $\frac{n(n-1)}{4}$-mesic with respect to the Lehmer code rotation for permutations of $[n]$. Similarly, the number of noninversions is also $\frac{n(n-1)}{4}$-mesic. \end{prop} \begin{proof} Following Proposition \ref{prop:LC_num_inv_is_sum}, the number of inversions of $\sigma$ is the sum of the entries of the Lehmer code $L(\sigma)$. It suffices to observe, as we did in Lemma \ref{lem:equioccurrences_Lehmer_code}, that each number in $\{0, \ldots, n-i\}$ occurs equally often as $L(\sigma)_i$. Therefore, the average value of $L(\sigma)_i$ is $\frac{n-i}{2}$.
Thus, the average number of inversions is the sum of the average values at each position of the Lehmer code, which is: \[\sum_{i=1}^{n} \frac{n-i}{2} = \sum_{k=0}^{n-1}\frac{k}{2} = \frac{n(n-1)}{4}. \] Since the noninversions are exactly the pairs $(i,j)$ that are not inversions, there are $\frac{n(n-1)}{2}-\inv(\sigma)$ noninversions in a permutation $\sigma \in S_n$, so the number of noninversions is also $\frac{n(n-1)}{4}$-mesic. \end{proof} \begin{thm}\label{thm:LC_inversions_at_entry} The number of inversions starting at the $i$-th entry of a permutation is $\frac{n-i}{2}$-mesic for the Lehmer code rotation on permutations of $[n]$. \end{thm} \begin{proof} This follows from Lemma \ref{lem:equioccurrences_Lehmer_code}, since, for each $i$, the number of inversions starting at the $i$-th entry of a permutation $\sigma$ is exactly $L(\sigma)_i$. Hence, $L(\sigma)_i$ is $\frac{n-i}{2}$-mesic. \end{proof} \begin{cor}[Statistics 54, 1556, 1557] \label{54_1556_1557_LC} The following FindStat statistics are homomesic: \begin{itemize} \item The first entry of the permutation is $\frac{n+1}{2}$-mesic. \item The number of inversions of the second entry of a permutation is $\frac{n-2}{2}$-mesic. \item The number of inversions of the third entry of a permutation is $\frac{n-3}{2}$-mesic. \end{itemize} \end{cor} \begin{proof} The second and third statements are special cases of Theorem \ref{thm:LC_inversions_at_entry}. As for the first entry of a permutation $\sigma$, its value is $L(\sigma)_1+1$; following Theorem \ref{thm:LC_inversions_at_entry}, this is $\frac{n+1}{2}$-mesic. \end{proof} \begin{remark} While the above corollary shows that the first entry of the permutation is a homomesic statistic, this does not hold for the other entries of a permutation.
For example, statistic 740 (the last entry of a permutation) is not homomesic under the Lehmer code rotation; indeed, the number of inversions starting at the last entry of a permutation is always $0$, regardless of the value of the last entry. \end{remark} \subsection{Statistics related to descents} \label{subsec:lehmer_des} In this subsection, we state and prove propositions giving the homomesies of Theorem~\ref{thm:LC} related to descents. Furthermore, Theorem \ref{thm:descents_at_i_LC} proves some homomesies that do not correspond to FindStat statistics, but are helpful in proving some of them. Recall that descents of a permutation, as well as affiliated concepts and notations, are given in Definition~\ref{def:basic_stats}. We begin with the following lemma, which will be helpful in proving Theorem~\ref{thm:descents_at_i_LC}. \begin{lem}\label{lem:descents_correspondence_in_Lehmer_code} Descents in a permutation correspond exactly to strict descents of the Lehmer code; formally, $i$ is a descent of $\sigma$ if and only if $L(\sigma)_{i} > L(\sigma)_{i+1}$. \end{lem} \begin{proof} If $i$ is a descent of $\sigma$, then $\sigma_i > \sigma_{i+1}$, and \[ L(\sigma)_i = \#\{ j>i\mid \sigma_i > \sigma_j \} = \#\{ j>i+1 \mid \sigma_i > \sigma_j \} + 1 \geq \#\{ j>i+1 \mid \sigma_{i+1} > \sigma_j \} + 1 = L(\sigma)_{i+1}+1. \] Therefore, $L(\sigma)_i > L(\sigma)_{i+1}$. We prove the converse by contrapositive: we assume $\sigma_i <\sigma_{i+1}$ (so $i$ is an ascent). Then, \[ L(\sigma)_i = \#\{ j>i \mid \sigma_{i} > \sigma_{j} \} = \#\{ j>i+1 \mid \sigma_i > \sigma_j \} \leq \#\{ j>i+1 \mid \sigma_{i+1} > \sigma_j \} = L(\sigma)_{i+1}. \] \end{proof} \begin{thm}\label{thm:descents_at_i_LC} The number of descents at position $i$ is $\frac{1}{2}$-mesic under the Lehmer code rotation, for $1\leq i< n$.
\end{thm} \begin{proof} The key to this result is Lemma \ref{lem:descents_correspondence_in_Lehmer_code}, saying that $i$ is a descent of $\sigma$ if and only if $L(\sigma)_{i+1} < L(\sigma)_i$. Therefore, counting descents at position $i$ in permutations corresponds to counting strict descents at position $i$ in the Lehmer code. We look at all possible adjacent pairs of entries in the Lehmer code. The entry at position $i$ can take any value in $\{0, \ldots, n-i\}$, so there are $n-i+1$ options for the $i$-th entry and $n-i$ for the $(i+1)$-st. Since $n-i+1$ and $n-i$ are coprime, all possible pairs of adjacent entries are equally likely to occur in each orbit of the Lehmer code rotation (this is the result of Lemma \ref{lem:equioccurrences_of_pairs_Lehmer_code}). Hence, the proportion of strict descents in the Lehmer code at position $i$ is \[\frac{1}{(n-i)(n-i+1)}\sum_{k=1}^{n-i+1}(k-1) = \frac{(n-i)(n-i+1)}{2(n-i)(n-i+1)} = \frac{1}{2}. \] \end{proof} \begin{prop}[Statistics 4, 21, 245, 833]\label{4_21_245_833_LC} The number of descents in a permutation of $[n]$ is $\frac{n-1}{2}$-mesic under Lehmer code rotation, and the number of ascents is $\frac{n-1}{2}$-mesic. The major index of a permutation of $[n]$ is $\frac{n(n-1)}{4}$-mesic under Lehmer code rotation. Similarly, the comajor index is $\frac{n(n-1)}{4}$-mesic. \end{prop} \begin{proof} Theorem \ref{thm:descents_at_i_LC} states that the number of descents at position $i$ is $\frac{1}{2}$-mesic for $1 \leq i< n$. Hence, the number of descents is the sum of $n-1$ homomesic statistics, each with average $\frac{1}{2}$. Therefore, there are on average $\frac{n-1}{2}$ descents over each orbit of the Lehmer code rotation. Ascents are the positions that are not descents; there are therefore on average $n-1-\frac{n-1}{2} = \frac{n-1}{2}$ ascents over each orbit of the Lehmer code rotation.
Recall that the major index of a permutation $\sigma$ is the sum of its descent positions, that is $\maj(\sigma)=\sum_{i \in \text{Des}(\sigma)} i$. We already observed that, for each position in $\{1, \ldots, n-1\}$ under the Lehmer code rotation, the number of descents at this position is $\frac{1}{2}$-mesic. Therefore, the average value of the major index over each orbit is \[ \sum_{i=1}^{n-1} \frac{1}{2} i = \frac{n(n-1)}{4}. \] The comajor index of a permutation $\sigma$ is defined as $\sum_{i \in \text{Des}(\sigma)} (n-i) = n \des(\sigma) - \text{maj}(\sigma)$. Since the number of descents is $\frac{n-1}{2}$-mesic and the major index is $\frac{n(n-1)}{4}$-mesic, the comajor index is homomesic with average value $n \cdot \frac{n-1}{2} - \frac{n(n-1)}{4}=\frac{n(n-1)}{4}$. \end{proof} \begin{prop}[Statistics 325, 470]\label{325_470_LC} The number of runs in a permutation of $[n]$ is $\frac{n+1}{2}$-mesic under Lehmer code rotation. The width of a tree associated to a permutation of $[n]$ is also $\frac{n+1}{2}$-mesic under Lehmer code rotation. \end{prop} \begin{proof} Since runs are maximal contiguous increasing sequences, runs are separated by descents, and, in each permutation, there is one more run than there are descents. The result follows from the number of descents being $\frac{n-1}{2}$-mesic. It is shown in \cite{Luschny}, where permutation trees are defined, that the width of the permutation tree of $\sigma$ is the number of runs of $\sigma$. Therefore, we do not define permutation trees here, as we can prove the homomesy of the statistics by using the homomesy result regarding runs. \end{proof} \begin{prop}[Statistics 1114, 1115]\label{1114_1115_LC} The number of odd descents of a permutation of $[n]$ is $\frac{1}{2}\lceil \frac{n-1}{2}\rceil$-mesic under Lehmer code rotation, and the number of even descents is $\frac{1}{2}\lfloor\frac{n-1}{2}\rfloor$-mesic.
\end{prop} \begin{proof} The proof also follows from Theorem \ref{thm:descents_at_i_LC}, which says that the average number of descents at $i$, $1\leq i\leq n-1$, is $\frac{1}{2}$ over each orbit. Therefore, the number of odd descents is, on average, \[\sum_{i=1}^{\left\lceil\frac{n-1}{2}\right\rceil} \frac{1}{2} = \frac{1}{2} \lceil\frac{n-1}{2}\rceil.\] Similarly, the average number of even descents is \[\sum_{i=1}^{\lfloor\frac{n-1}{2}\rfloor} \frac{1}{2} = \frac{1}{2} \lfloor\frac{n-1}{2}\rfloor.\] \end{proof} \begin{prop}[Statistics 23, 353, 365, 366]\label{23_353_365_366_LC} The number of double descents and the number of double ascents are each $\frac{n-2}{6}$-mesic under Lehmer code rotation. The number of (inner) valleys and the number of (inner) peaks are each $\frac{n-2}{3}$-mesic. \end{prop} \begin{proof} The permutation $\sigma$ has a peak at $i$ if $\sigma_i < \sigma_{i+1} > \sigma_{i+2}$. Following Lemma \ref{lem:descents_correspondence_in_Lehmer_code}, this is exactly when $L(\sigma)_i \leq L(\sigma)_{i+1} > L(\sigma)_{i+2}$. If $n-i$ is even, then $n-i-1$, $n-i$ and $n-i+1$ are pairwise coprime, meaning that all combinations of values for $L(\sigma)_i$, $L(\sigma)_{i+1}$ and $L(\sigma)_{i+2}$ are possible and equally likely over a single orbit (following Lemmas \ref{lem:equioccurrences_of_pairs_Lehmer_code} and \ref{lem:equioccurrences_of_distant_pairs_Lehmer_code}). The argument is therefore the same as for (single) descents. We first prove that peaks are $\frac{n-2}{3}$-mesic. For a given value $i \in \{1,\ldots, n-2\}$, there are $n-i$ choices for $L(\sigma)_{i+1}$. Let $k = L(\sigma)_{i+1}$. Then, the proportion of options for the Lehmer code that represent a peak at position $i$ (i.e.\ $i$ is an ascent and $i+1$ is a descent) is $\frac{k(k+1)}{(n-i+1)(n-i-1)}$.
This is because the options $\{0,1,\ldots, k\}$ are valid entries for $L(\sigma)_i$, out of the $n-i+1$ options, and the valid entries for $L(\sigma)_{i+2}$ are $\{0,1,\ldots, k-1\}$, out of a total of $n-i-1$ possible entries. Averaging over all values of $k$, we obtain: \[ \frac{1}{n-i} \sum_{k=1}^{n-i-1} \frac{k(k+1)}{(n-i+1)(n-i-1)} = \frac{1}{3}.\] If $n-i$ is odd, $L(\sigma)_i$ and $L(\sigma)_{i+2}$ either always have the same parity, or always have distinct parities, over one given orbit, following Lemma \ref{lem:pairs_in_Lehmer_code_with_same_parities}. We recall from Lemma \ref{lem:equioccurrences_pairs_distance_2} that, the parity condition aside, the entries of the Lehmer code $L(\sigma)_{i}$ and $L(\sigma)_{i+2}$ are independent. We first consider what happens when $L(\sigma)_i$ and $L(\sigma)_{i+2}$ have the same parity. We must treat separately the cases of $L(\sigma)_i$ and $L(\sigma)_{i+2}$ being both odd and both even. The probability of having a peak at $i$ when $L(\sigma)_{i+1} = k$ is: \[ \frac{\lceil\frac{k}{2}\rceil\lceil\frac{k+1}{2}\rceil}{\frac{n-i-1}{2}\frac{n-i+1}{2}} + \frac{\lfloor\frac{k}{2}\rfloor\lfloor\frac{k+1}{2}\rfloor}{\frac{n-i-1}{2}\frac{n-i+1}{2}}, \] where the first summand corresponds to $L(\sigma)_i$ and $L(\sigma)_{i+2}$ being even, and the second to both of them being odd. The above is always equal to $\frac{k(k+1)}{(n-i+1)(n-i-1)}$, so the proportion of peaks at position $i$ is also $\frac{1}{3}$, which is the same as the proportion of peaks when $n-i$ is even. Then, when $L(\sigma)_i$ and $L(\sigma)_{i+2}$ have different parities, the probability of having a peak at $i$ when $L(\sigma)_{i+1}=k$ is: \[ \frac{\lfloor\frac{k}{2}\rfloor\lceil\frac{k+1}{2}\rceil}{\frac{n-i-1}{2}\frac{n-i+1}{2}} + \frac{\lceil\frac{k}{2}\rceil\lfloor\frac{k+1}{2}\rfloor}{\frac{n-i-1}{2}\frac{n-i+1}{2}} = \frac{k(k+1)}{(n-i+1)(n-i-1)}.
\] Therefore, by what is above, the probability of having a peak at position $i$ is always $\frac{1}{3}$, and the number of peaks is $\frac{n-2}{3}$-mesic. An ascent at position $i$ is a double ascent if $i \leq n-2$ and $i$ is not a peak. Knowing that peaks are $\frac{n-2}{3}$-mesic and that ascents at positions $1, \ldots, n-2$ are $\frac{n-2}{2}$-mesic, double ascents are $\frac{n-2}{2}-\frac{n-2}{3}=\frac{n-2}{6}$-mesic (using Lemma \ref{lem:sum_diff_homomesies} and Theorem \ref{thm:descents_at_i_LC}). The proof for valleys is the same as the proof for peaks, and the proof for double descents follows from the one for double ascents. \end{proof} \begin{prop}[Statistic 638]\label{638_LC} The number of up-down runs is $\frac{4n+1}{6}$-mesic under Lehmer code rotation. \end{prop} \begin{proof} Up-down runs are defined as maximal monotone contiguous subsequences, or the first entry alone if it is a descent. Since peaks and valleys mark the end of a monotone contiguous subsequence (except for the last one that ends at the end of a permutation), the statistic is counted by one more than the number of peaks and valleys (added together), plus one if $1$ is a descent. Since peaks, valleys, and descents at each position are homomesic statistics, by Lemma~\ref{lem:sum_diff_homomesies} their sum is a homomesic statistic, with average $1 + \frac{n-2}{3} + \frac{n-2}{3} + \frac{1}{2} = \frac{4n+1}{6}$. \end{proof} \begin{definition} We define a few variants of peaks and valleys: \begin{enumerate} \item A \textbf{left outer peak} is either a peak, or 1 if it is a descent. Similarly, a \textbf{right outer peak} is either a peak, or $n-1$ if it is an ascent. An \textbf{outer peak} is either a left or a right outer peak. \item A \textbf{valley of a permutation, including the boundary,} is either a valley, 1 if it is an ascent, or $n-1$ if it is a descent.
\end{enumerate} \end{definition} \begin{prop}[Statistics 35, 92, 99, 483, 834]\label{35_92_99_483_834_LC} The following variants of valleys and peaks are homomesic under the Lehmer code rotation: \begin{itemize} \item The number of left outer peaks of a permutation is $\frac{2n-1}{6}$-mesic; \item the number of outer peaks of a permutation is $\frac{n+1}{3}$-mesic; \item the number of valleys of a permutation, including the boundary, is $\frac{n+1}{3}$-mesic; \item the number of times a permutation switches from increasing to decreasing or decreasing to increasing is $\frac{2n-4}{3}$-mesic; \item the number of right outer peaks of a permutation is $\frac{2n-1}{6}$-mesic. \end{itemize} \end{prop} \begin{proof} Recall that the number of ascents (respectively, descents) at each position is $\frac{1}{2}$-mesic. Then, we can express these statistics as sums of homomesic statistics under the Lehmer code rotation. We know that sums of homomesic statistics are also homomesic, following Lemma \ref{lem:sum_diff_homomesies}. \begin{itemize} \item The number of left outer peaks of a permutation is the number of peaks, plus $1$ if there is a descent at position $1$; it is therefore homomesic with an average of $\frac{n-2}{3} + \frac{1}{2} = \frac{2n-1}{6}$. \item The number of outer peaks of a permutation is the number of left outer peaks, plus one if there is an ascent at position $n-1$; it is homomesic with an average of $\frac{2n-1}{6} + \frac{1}{2} = \frac{n+1}{3}$. \item Since the number of valleys of a permutation, including the boundary, is the sum of the number of valleys, plus $1$ if there is an ascent at position $1$ and plus $1$ if there is a descent at position $n-1$, the statistic is homomesic with an average of $\frac{n-2}{3}+2\cdot\frac{1}{2} = \frac{n+1}{3}$. \item The number of times a permutation switches from increasing to decreasing or decreasing to increasing is the sum of the number of valleys and peaks, so it is $\frac{2n-4}{3}$-mesic.
\item For the same reasons as the number of left outer peaks, the number of right outer peaks is $\frac{2n-1}{6}$-mesic. \end{itemize} \end{proof} We can also define variants of descents. \begin{definition} A \textbf{descent of distance $2$} is an index $i$ such that $\sigma_i>\sigma_{i+2}$. If $i \in \{1, \ldots, n-2\}$ is not a descent of distance $2$, it is an ascent of distance $2$. \end{definition} \begin{lem}\label{lem:desents_distance_2_in_Lehmer_code} The permutation $\sigma$ has a descent of distance $2$ at $i$ if and only if either $L(\sigma)_i > L(\sigma)_{i+2}+1$, or both $L(\sigma)_{i} = L(\sigma)_{i+2}+1$ and $L(\sigma)_{i}\leq L(\sigma)_{i+1}$. \end{lem} \begin{proof} First, note that the condition $L(\sigma)_{i}\leq L(\sigma)_{i+1}$ is equivalent to $i$ being an ascent by Lemma \ref{lem:descents_correspondence_in_Lehmer_code}. We prove three things: \begin{enumerate} \item If $L(\sigma)_i > L(\sigma)_{i+2}+1$, then $\sigma$ has a descent of distance $2$ at $i$ (we prove the contrapositive). \item If $L(\sigma)_{i} = L(\sigma)_{i+2}+1$ and $i$ is an ascent of $\sigma$, then $\sigma$ has a descent of distance $2$ at $i$ (using contradiction). \item If $\sigma$ has a descent of distance $2$ at $i$, then either $L(\sigma)_i > L(\sigma)_{i+2}+1$, or $L(\sigma)_{i} = L(\sigma)_{i+2}+1$ and $L(\sigma)_{i}\leq L(\sigma)_{i+1}$.\\ \end{enumerate} \begin{enumerate} \item The contrapositive of the statement we want to prove is that if $\sigma$ has an ascent of distance $2$ at $i$, then $L(\sigma)_i \leq L(\sigma)_{i+2}+1$. We prove this below. The function $\delta_{(i,j) \textit{ is an inversion}}$ takes value $1$ if $(i,j)$ is an inversion, and $0$ otherwise.
If $\sigma_i < \sigma_{i+2}$, then $\{j > i+2 \mid \sigma_j < \sigma_i \} \subseteq \{j > i+2 \mid \sigma_j < \sigma_{i+2} \}.$ Also, \begin{align*} L(\sigma)_i & = \#\{j > i+2 \mid \sigma_j < \sigma_i \} + \delta_{(i,i+1) \textit{ is an inversion}} + \underbrace{\delta_{(i,i+2) \textit{ is an inversion}}}_0 \\ & \leq \underbrace{\#\{j > i+2 \mid \sigma_j < \sigma_{i+2} \}}_{L(\sigma)_{i+2}} +\ \delta_{(i,i+1) \textit{ is an inversion}} \\ & \leq L(\sigma)_{i+2}+1. \end{align*} \item Let $L(\sigma)_{i} = L(\sigma)_{i+2}+1$ and let $i$ be an ascent of $\sigma$. Assume, for now, that $\sigma_i < \sigma_{i+2}$ (this will lead to a contradiction). Then, \begin{align*} L(\sigma)_i & = \#\{j > i+2 \mid \sigma_j < \sigma_i \} + \underbrace{\delta_{(i,i+1) \textit{ is an inversion}}}_{0} + \underbrace{\delta_{(i,i+2) \textit{ is an inversion}}}_0 \\ & \leq \#\{j > i+2 \mid \sigma_j < \sigma_{i+2} \} \\ & = L(\sigma)_{i+2}, \end{align*} which contradicts the hypothesis that $L(\sigma)_i = L(\sigma)_{i+2}+1$. Therefore, the assumption that $\sigma_i < \sigma_{i+2}$ is false, and $i$ is a descent of distance $2$ of $\sigma$. \item If $\sigma$ has a descent of distance $2$ at $i$, then $\sigma_i > \sigma_{i+2}$. Then, \[ \{j > i+2 \mid \sigma_j < \sigma_i \} \supseteq \{ j > i+2 \mid \sigma_j < \sigma_{i+2} \}. \] Going back to the Lehmer code: \begin{align*} L(\sigma)_i & = \delta_{(i,i+1) \textit{ is an inversion}}+\underbrace{\delta_{(i,i+2) \textit{ is an inversion}}}_{1} + \#\{ j > i+2 \mid \sigma_j < \sigma_{i} \} \\ & \geq \delta_{(i,i+1) \textit{ is an inversion}}+ 1 + \underbrace{\#\{ j > i+2 \mid \sigma_j < \sigma_{i+2}\}}_{L(\sigma)_{i+2}} \\ & \geq L(\sigma)_{i+2} + 1. \end{align*} Moreover, if $i$ is not an ascent, then $\delta_{(i,i+1) \textit{ is an inversion}}=1$ and $L(\sigma)_{i} \geq L(\sigma)_{i+2} + 2$. Hence, either $L(\sigma)_{i} > L(\sigma)_{i+2} + 1$, or $L(\sigma)_{i} = L(\sigma)_{i+2} + 1$ and $i$ is an ascent. 
\end{enumerate} \end{proof} \begin{prop}[Statistics 495, 836, 837]\label{495_836_837_LC} The number of descents (respectively, ascents) of distance $2$ are $\frac{n-2}{2}$-mesic under the Lehmer code rotation. The inversions of distance at most $2$ are $\frac{2n-3}{2}$-mesic under the Lehmer code rotation. \end{prop} \begin{proof} We first prove that, over the course of one orbit, exactly half of the permutations have a descent of distance $2$ at position $i \leq n-2$, which proves these homomesy results for ascents and descents. Following Lemma \ref{lem:desents_distance_2_in_Lehmer_code}, we need to count the frequency of $L(\sigma)_i > L(\sigma)_{i+2}+1$ and of $L(\sigma)_i = L(\sigma)_{i+2} +1$ with $L(\sigma)_i \leq L(\sigma)_{i+1}$, over the course of a given orbit. There are three cases, according to Lemma \ref{lem:pairs_in_Lehmer_code_with_same_parities}: \begin{enumerate} \item $n-i$ is even; \item $n-i$ is odd, and, over that orbit, the parity of $L(\sigma)_i$ and $L(\sigma)_{i+2}$ are the same; \item $n-i$ is odd, and, over that orbit, the parity of $L(\sigma)_i$ and $L(\sigma)_{i+2}$ are distinct. \end{enumerate} For the following proofs, let $k = L(\sigma)_i$. \begin{enumerate} \item When $n-i$ is even. In this case, any triplet $(L(\sigma)_i, L(\sigma)_{i+1}, L(\sigma)_{i+2})$ occurs equally often over the course of one orbit of $\L$, following Lemmas \ref{lem:equioccurrences_of_pairs_Lehmer_code} and \ref{lem:equioccurrences_of_distant_pairs_Lehmer_code}. Then, over one orbit, the probability of having $L(\sigma)_i > L(\sigma)_{i+2}+1$ is \[ \frac{1}{n-i+1} \sum_{k=2}^{n-i}\frac{k-1}{n-i-1} = \frac{n-i}{2(n-i+1)}. 
\] The probability of having both $L(\sigma)_i = L(\sigma)_{i+2} + 1$ and $ L(\sigma)_i \leq L(\sigma)_{i+1}$ is \[ \frac{1}{n-i+1}\sum_{k=1}^{n-i-1} \frac{1}{n-i-1}\frac{n-i-k}{n-i} = \frac{1}{2(n-i+1)}.\] The two events being disjoint, we can sum the probabilities of getting them, and we obtain that the probability of getting a descent of distance $2$ at $i$ is $\frac{n-i+1}{2(n-i+1)}=\frac{1}{2}$. The next two cases are when $n-i$ is odd, which means that the entries $L(\sigma)_i$ and $L(\sigma)_{i+2}$ are not independent, as exhibited by Lemma \ref{lem:pairs_in_Lehmer_code_with_same_parities}. \item When $n-i$ is odd, and, over that orbit, the parity of $L(\sigma)_i$ and $L(\sigma)_{i+2}$ are the same. In this case, one cannot have $L(\sigma)_i = L(\sigma)_{i+2} +1$, and we only need to count how often $L(\sigma)_i > L(\sigma)_{i+2}+1$. Since the entries of the Lehmer code $L(\sigma)_{i}$ and $L(\sigma)_{i+2}$ are independent except for the parity constraint (as shown in Lemma \ref{lem:equioccurrences_pairs_distance_2}), $L(\sigma)_i > L(\sigma)_{i+2}+1$ occurs with probability \begin{align*} \frac{1}{n-i+1} \bigg( \sum_{k=2\text{, $k$ even}}^{n-i-1} & \frac{\frac{k}{2}}{\frac{n-i-1}{2}} + \sum_{k=3\text{, $k$ odd}}^{n-i} \frac{\frac{k-1}{2}}{\frac{n-i-1}{2}} \bigg) \\ & = \frac{1}{(n-i+1)(n-i-1)}\left( \sum_{k=2\text{, $k$ even}}^{n-i-1} k + \sum_{k=3\text{, $k$ odd}}^{n-i} k-1 \right) \\ & = \frac{1}{(n-i+1)(n-i-1)}\left( \sum_{m=1\ (m=\frac{k}{2})}^{\frac{n-i-1}{2}} 2m + \sum_{m=1\ (m=\frac{k-1}{2})}^{\frac{n-i-1}{2}} 2m \right) \\ & = \frac{(n-i-1)(n-i+1)}{2(n-i+1)(n-i-1)} = \frac{1}{2}. \\ \end{align*} \item When $n-i$ is odd, and, over that orbit, the parity of $L(\sigma)_i$ and $L(\sigma)_{i+2}$ are distinct. We first count how often $L(\sigma)_i > L(\sigma)_{i+2}+1$. 
This occurs with probability \begin{align*} \frac{1}{n-i+1} \bigg( \sum_{k=4\text{, $k$ even}}^{n-i-1} & \frac{\frac{k-2}{2}}{\frac{n-i-1}{2}} + \sum_{k=3\text{, $k$ odd}}^{n-i} \frac{\frac{k-1}{2}}{\frac{n-i-1}{2}} \bigg) \\ & = \frac{1}{(n-i+1)(n-i-1)}\left( \sum_{k=4\text{, $k$ even}}^{n-i-1} k-2 + \sum_{k=3\text{, $k$ odd}}^{n-i} k-1 \right) \\ & = \frac{1}{(n-i+1)(n-i-1)}\left( \sum_{m=1\ (m=\frac{k-2}{2}) }^{\frac{n-i-3}{2}} 2m + \sum_{m=1\ (m=\frac{k-1}{2})}^{\frac{n-i-1}{2}} 2m \right) \\ & = \frac{n-i-3+n-i+1}{4(n-i+1)} = \frac{n-i-1}{2(n-i+1)}. \\ \end{align*} We then count the frequency of triplets $(L(\sigma)_{i}, L(\sigma)_{i+1}, L(\sigma)_{i+2})$ such that $L(\sigma)_{i}= L(\sigma)_{i+2}+1$ and $L(\sigma)_{i}\leq L(\sigma)_{i+1}$. We get \begin{align*} \frac{1}{n-i+1}\bigg( \sum_{k=1,\ k\text{ odd}}^{n-i-2} \frac{1}{\frac{n-i-1}{2}}\frac{n-i-k}{n-i} + \sum_{k=2,\ k\text{ even}}^{n-i-1} \frac{1}{\frac{n-i-1}{2}}\frac{n-i-k}{n-i} \bigg) = \frac{1}{n-i+1}. \end{align*} Hence, summing the probability of the two events, we get $\frac{n-i-1+2}{2(n-i+1)} = \frac{1}{2}$. \end{enumerate} This completes the proof that a descent of distance $2$ occurs at position $i$ ($1\leq i \leq n-2$) in half of the permutations of any given orbit, hence proving that descents of distance $2$ are $\frac{n-2}{2}$-mesic. As for inversions of distance at most $2$, they are exactly descents and descents of distance $2$. Therefore, their number is the sum of the number of descents and of descents of distance $2$; by Lemma \ref{lem:sum_diff_homomesies}, it is homomesic with average $\frac{n-1}{2}+\frac{n-2}{2} = \frac{2n-3}{2}$. \end{proof} It is worth noting that, unlike inversions of distance at most $2$, inversions of distance at most $3$ (statistic 494 in FindStat) are not homomesic under the Lehmer code rotation. We found a counterexample when $n=6$, where the average over one orbit can be $\frac{119}{20}$, $6$, or $\frac{121}{20}$.
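The homomesy for descents of distance $2$ proved above can also be confirmed by exhaustive computation for small $n$. The following Python sketch is our own check, not part of the original text (all function names are ours): it decomposes $S_n$ into orbits of the Lehmer code rotation and verifies that descents of distance $2$ average $\frac{n-2}{2}$ on every orbit.

```python
from fractions import Fraction
from itertools import permutations

def lehmer_code(sigma):
    # L(sigma)_i = #{ j > i : sigma_j < sigma_i }  (0-indexed)
    n = len(sigma)
    return tuple(sum(1 for j in range(i + 1, n) if sigma[j] < sigma[i])
                 for i in range(n))

def rotation(sigma):
    # Lehmer code rotation: add 1 to each code entry modulo its range,
    # then invert the code back to a permutation.
    n = len(sigma)
    code = [(c + 1) % (n - i) for i, c in enumerate(lehmer_code(sigma))]
    vals = list(range(1, n + 1))
    return tuple(vals.pop(c) for c in code)

def orbits(n):
    # Partition S_n into orbits of the rotation.
    remaining = set(permutations(range(1, n + 1)))
    while remaining:
        start = min(remaining)
        orb, cur = [start], rotation(start)
        while cur != start:
            orb.append(cur)
            cur = rotation(cur)
        remaining -= set(orb)
        yield orb

def desc_dist_2(sigma):
    # Number of descents of distance 2: positions i with sigma_i > sigma_{i+2}.
    return sum(1 for i in range(len(sigma) - 2) if sigma[i] > sigma[i + 2])

# Proposition: descents of distance 2 are (n-2)/2-mesic.
for n in (4, 5):
    for orb in orbits(n):
        avg = Fraction(sum(desc_dist_2(s) for s in orb), len(orb))
        assert avg == Fraction(n - 2, 2)
```

Exact rational arithmetic via `Fraction` avoids any floating-point ambiguity when comparing orbit averages.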
\subsection{Statistics related to permutation patterns} \label{subsec:lehmer_pp} In this subsection, we state and prove propositions giving the homomesies of Theorem~\ref{thm:LC} related to permutation patterns, and ask whether one can characterize the patterns that are homomesic under the Lehmer code rotation (Problem~\ref{prob:pp_LRC}). Recall that permutation patterns and consecutive patterns were defined in Definitions \ref{def:patterns} and \ref{def:consecutive_patterns}. \begin{prop}[Statistics 355 to 360]\label{355_to_360_LC} The number of occurrences of the pattern $ab-c$, with $\{a,b,c\}=\{1,2,3\}$, is $\frac{(n-1)(n-2)}{12}$-mesic. \end{prop} \begin{proof} The proof is analogous for all six cases. We do the proof here for $13-2$. In the Lehmer code, an occurrence corresponds to two adjacent entries such that the first is at most as large as the second (because they represent an ascent), together with an inversion starting at the second of the two adjacent positions that is not an inversion starting at the first; such inversions account for the entry playing the role of the $2$. Therefore, the number of occurrences in the Lehmer code is: \[ \sum_{i=2}^n \max(L(\sigma)_i-L(\sigma)_{i-1}, 0). \] Since adjacent entries in the Lehmer code are independent, all possibilities appear equally often over each orbit (see Lemma \ref{lem:equioccurrences_of_pairs_Lehmer_code}). Therefore, over one orbit, the average is given by the following (where $j$ is the value of $L(\sigma)_i$, and $k$ is the value of $L(\sigma)_{i-1}$): \begin{align*} \sum_{i=2}^{n}\Bigg(\frac{1}{n+1-i} & \sum_{j=0}^{n-i} \left(\frac{1}{n+2-i}\sum_{k=0}^j (j-k)\right)\Bigg) \\ & = \sum_{i=2}^{n}\left(\frac{1}{(n+1-i)(n+2-i)}\sum_{j=0}^{n-i} \frac{j(j+1)}{2}\right) \\ & = \frac{(n-2)(n-1)}{12}. \end{align*} \end{proof} \begin{prop}[Statistics 423, 435, 437]\label{423_435_437_LC} The total number of occurrences of the patterns $123$ and $132$ (respectively $213$ and $231$, or $312$ and $321$) in a permutation is $\Big(\frac{1}{3}\binom{n}{3}\Big)$-mesic.
\end{prop} \begin{proof} The number of occurrences of the pattern 312 or of the pattern 321 in a permutation $\sigma$ is counted by the number of pairs of inversions that have the same first position. Therefore, in the Lehmer code, this is given by \[ \sum_{i=1}^n \binom{L(\sigma)_i}{2}. \] Given that every possible number appears equally often in the Lehmer code over each orbit (Lemma \ref{lem:equioccurrences_Lehmer_code}), the average over one orbit is \[ \sum_{i=1}^n \frac{1}{n-i+1} \sum_{j=0}^{n-i} \binom{j}{2} = \sum_{i=1}^{n} \frac{(n-i)(n-i-1)}{6} = \frac{1}{3}\binom{n}{3}.\] Similarly, occurrences of the patterns 132 and 123 correspond to two non-inversions starting at the same position. Given a permutation $\sigma$, their number is given by \[ \sum_{i=1}^n \binom{n-i-L(\sigma)_i}{2}. \] Averaging over one orbit, this is \[ \sum_{i=1}^n \frac{1}{n-i+1} \sum_{j=0}^{n-i} \binom{n-i-j}{2} = \sum_{i=1}^{n} \frac{1}{n-i+1}\sum_{k=0}^{n-i} \binom{k}{2} = \frac{1}{3}\binom{n}{3},\] where the first equality is obtained by the change of variable $k= n-i-j$.\\ Finally, to obtain the number of occurrences of the pattern $213$ or of the pattern $231$, we consider all occurrences of patterns of length 3 in a permutation (there are exactly $\binom{n}{3}$ of them, a constant), and we subtract the number of occurrences of the other patterns of length 3: $123$, $132$, $312$ and $321$. Therefore, it is a difference of homomesic statistics, so by Lemma~\ref{lem:sum_diff_homomesies} it is also homomesic. \end{proof} \begin{remark} Note that classical patterns of length at least $3$ (Statistics 2, 119, 217, 218, 219, 220) are not homomesic under the Lehmer code rotation, nor are the sums of other pairs of classical patterns (statistics 424 to 434, as well as 436). \end{remark} \begin{prop}[Statistic 709]\label{709_LC} The number of occurrences of the vincular patterns $14-2-3$ or $14-3-2$ is $\Big(\frac{1}{12}\binom{n-1}{3}\Big)$-mesic.
\end{prop} \begin{proof} These two patterns correspond to two consecutive positions $i-1$ and $i$, and two positions $\ell, m > i$ for which $\sigma_{i-1} < \sigma_\ell, \sigma_m < \sigma_i$. In the Lehmer code, $\ell$ and $m$ are counted among the inversions starting at position $i$ that are not inversions starting at position $i-1$. Therefore, the number of such patterns in the permutation $\sigma$ is \[\sum_{i=2}^n \binom{L(\sigma)_i - L(\sigma)_{i-1}}{2}.\] On average, over one orbit (here, again, using Lemma \ref{lem:equioccurrences_of_pairs_Lehmer_code}), we have (where $j$ is the value of $L(\sigma)_i$, and $k$ is the value of $L(\sigma)_{i-1}$): \begin{align*} \sum_{i=2}^n \left(\frac{1}{n+1-i}\sum_{j=0}^{n-i}\left(\frac{1}{n+2-i}\sum_{k=0}^j \binom{j-k}{2}\right)\right) & =\sum_{i=2}^n \left(\frac{1}{n+1-i}\sum_{j=0}^{n-i}\frac{1}{n+2-i}\frac{j^3-j}{6}\right) \\ & =\sum_{i=2}^n \frac{(n-i)((n-i)(n-i+1)-2)}{24(n+2-i)} \\ & = \sum_{i=2}^n \frac{(n-i)(n-i-1)}{24} \\ & = \frac{1}{12}\binom{n-1}{3}. \end{align*} \end{proof} \begin{definition} An occurrence of the \textbf{vincular pattern $|1-23$} is an occurrence of the pattern $123$ in which the first two matched entries are the first two entries of the permutation. \end{definition} \begin{prop}[Statistic 1084]\label{1084_LC} The number of occurrences of the vincular pattern $|1-23$ in a permutation is $\frac{n-2}{6}$-mesic under the Lehmer code rotation. \end{prop} \begin{proof} The condition that the first two matched entries must be the first two entries of the permutation means that we only need to consider $L(\sigma)_1$ and $L(\sigma)_2$ to count the number of occurrences of this pattern. More specifically, we need $L(\sigma)_1 \leq L(\sigma)_2$ (so that they form an ascent), and we then multiply by the number of non-inversions starting at position $2$ (which is $n-2-L(\sigma)_2$).
Since all combinations of the first two entries of the Lehmer code appear equally often over each orbit of the Lehmer code rotation (see Lemma \ref{lem:equioccurrences_of_pairs_Lehmer_code}), the average number of occurrences of the pattern over one orbit is (with $j = L(\sigma)_2$) \[ \frac{1}{n-1}\sum_{j=0}^{n-2} \frac{j+1}{n} (n-2-j) = \frac{n-2}{6}, \] where $\frac{j+1}{n}$ is the probability that the first entry of the Lehmer code is at most $L(\sigma)_2 = j$. \end{proof} Despite the evidence that the numbers of occurrences of many permutation patterns are homomesic for the Lehmer code rotation, we have found permutation patterns listed in FindStat that are not homomesic, including patterns as simple as $123$ (i.e.\ increasing subsequences of length $3$). This suggests the following problem: \begin{prob}\label{prob:pp_LRC} Characterize the permutation patterns that are homomesic for the Lehmer code rotation. \end{prob} \subsection{Miscellaneous statistics} \label{subsec:lehmer_misc} A few statistics not directly related to descents, inversions or permutation patterns are also homomesic for the Lehmer code rotation. They appear in this subsection. \begin{definition} A \textbf{left-to-right maximum} in a permutation is an entry that is larger than all the entries read before it, from left to right: this is $\sigma_i$ such that $\sigma_j < \sigma_i$ for all $j<i$. Similarly, a \textbf{left-to-right minimum} is an entry that is smaller than all the entries read before it: this is $\sigma_i$ such that $\sigma_j > \sigma_i$ for all $j<i$. We define a \textbf{right-to-left maximum (resp. minimum)} analogously: this is $\sigma_i$ such that $\sigma_j < \sigma_i$ (resp. $\sigma_j > \sigma_i$) for all $j>i$. \end{definition} \begin{prop}[Statistics 7, 991]\label{7_991_LC} The number of right-to-left maxima and the number of right-to-left minima are each $H_n$-mesic, where $H_n = \sum_{i=1}^n \frac{1}{i}$ is the $n$-th harmonic number.
\end{prop} \begin{proof} Right-to-left minima are represented by zeros in the Lehmer code, since there is no inversion starting at those positions. The average number of zeros at position $i$ is $\frac{1}{n+1-i}$, following Lemma \ref{lem:equioccurrences_Lehmer_code}. Therefore, the average number of right-to-left minima is $\sum_{i=1}^n \frac{1}{n+1-i} = \sum_{k=1}^n \frac{1}{k} = H_n$. Similarly, a right-to-left maximum at position $i$ corresponds to the entry $n-i$ in the Lehmer code, which means that $(i,j)$ is an inversion for all $j > i$. Hence, the average number of entries $n-i$ at position $i$ is also $\frac{1}{n+1-i}$. We therefore obtain the same result as for right-to-left minima. \end{proof} Note that the numbers of left-to-right minima and maxima are not homomesic. Counter-examples for the number of left-to-right minima can be found at $n=6$, where the orbit average ranges from $\frac{71}{30}$ to $\frac{5}{2}$. Note that, unlike right-to-left extrema, left-to-right extrema do not correspond to a specific value of given entries in the Lehmer code. \begin{definition} The \textbf{rank} of a permutation of $[n]$ is its position among the $n!$ permutations, ordered lexicographically. This is an integer between $1$ and $n!$. \end{definition} Before we prove homomesy for the rank under the Lehmer code rotation, we give a lemma describing the connection between the rank and the Lehmer code. This seems to be a known fact, but we could not find a proof in the literature. \begin{lem}\label{lem:rank_and_lehmer_code} For a permutation $\sigma$ of $[n]$, the rank of $\sigma$ is given directly by the Lehmer code $L(\sigma)$ as: \begin{equation} \rank(\sigma)=1+\sum_{i=1}^{n-1}L(\sigma)_i(n-i)!.\label{eqn:rank} \end{equation} \end{lem} \begin{proof} We prove this lemma by induction on $n$. The base case is when $n=1$: the only permutation has rank $1$, which satisfies Equation \eqref{eqn:rank}.
Assuming Equation \eqref{eqn:rank} holds for permutations of $[n]$, we prove it holds for permutations of $[n+1]$ in the following way. The key is to notice that the first entry of the permutation gives a range for the rank. The rank of a permutation $\sigma$ of $[n+1]$ is between $(\sigma_1-1)n!+1$ and $\sigma_1n!$. More specifically, it is given by $(\sigma_1-1)n!+\rank(\sigma_2\ldots\sigma_{n+1})$, where the word $\sigma_2\ldots\sigma_{n+1}$ is standardized to a permutation of $[n]$; this standardization does not change its Lehmer code. Using the induction hypothesis and the fact that $\sigma_1 - 1 = L(\sigma)_1$, \[ \rank(\sigma) = (\sigma_1-1)n!+\rank(\sigma_2\ldots\sigma_{n+1}) = L(\sigma)_1 n! + 1 + \sum_{i=1}^{n-1}L(\sigma)_{i+1}(n+1-(i+1))! = 1 + \sum_{i=1}^{n}L(\sigma)_{i}(n+1-i)!, \] which proves Equation \eqref{eqn:rank}. \end{proof} We now have the tools to prove Proposition \ref{20_LC}. \begin{prop}[Statistic 20]\label{20_LC} The rank of the permutation is $\frac{n!+1}{2}$-mesic under the Lehmer code rotation. \end{prop} \begin{proof} We use Lemma \ref{lem:rank_and_lehmer_code} to compute the rank directly from the Lehmer code. Let $m$ be the orbit size under the Lehmer code rotation. By Theorem \ref{Thm: L-Orbit cardinality}, $m=\lcm(1,2,\ldots,n).$ Acting on $\sigma$ by the Lehmer code rotation yields the permutation whose Lehmer code is $L(\sigma)+\textbf{1}$, where addition in the $i$-th component is done modulo $n-i+1$. Thus if we act on $\sigma$ by the Lehmer code rotation $k$ times, the resulting permutation has rank $$\rank(\L^k(\sigma))=1+\sum_{i=1}^{n-1}[L(\sigma)_i+k]_{n-i+1}(n-i)!.$$ Calculating the average over an orbit of the Lehmer code rotation we find $$\frac{1}{m}\sum_{k=1}^{m} \rank(\L^k(\sigma))=1+\frac{1}{m}\sum_{i=1}^{n-1}(n-i)!\sum_{k=1}^m\left[L(\sigma)_i+k\right]_{n-i+1}.$$ Using the fact that $n-i+1$ is a divisor of $m$, it follows that $\sum_{k=1}^m\left[L(\sigma)_i+k\right]_{n-i+1}$ is the sum of the residues modulo $n-i+1$, each repeated $\frac{m}{n-i+1}$ times.
Thus, \begin{eqnarray*} \frac{1}{m}\sum_{k=1}^{m} \rank(\L^k(\sigma))&=&1+\frac{1}{m}\sum_{i=1}^{n-1}(n-i)!\frac{m}{n-i+1}\sum_{j=0}^{n-i}j\\ &=&1+\frac{1}{2}\sum_{i=1}^{n-1}(n-i)!(n-i)=\frac{n!+1}{2}. \end{eqnarray*} This shows that the rank is $\frac{n!+1}{2}$-mesic for the Lehmer code rotation. \end{proof} \begin{definition}\label{BabsonSteingrimsson} Eric Babson and Einar Steingr\'imsson defined a few statistics in terms of occurrences of permutation patterns, including the statistics that they name \textbf{stat} and \textbf{stat$'$} \cite[Proposition 9]{BabsonSteingrimsson}. The statistic stat is the sum of the number of occurrences of the consecutive permutation patterns $13-2$, $21-3$, $32-1$ and $21$, while stat$'$ is the sum of the number of occurrences of $13-2$, $31-2$, $32-1$ and $21$. \end{definition} \begin{prop}[Statistics 692, 796]\label{692_796_LC} The Babson--Steingr\'imsson statistics stat and stat$'$ are $\frac{n(n-1)}{4}$-mesic. \end{prop} \begin{proof} We showed in Proposition \ref{355_to_360_LC} that the consecutive patterns of the form $ab-c$ for $\{a,b,c\} = \{1,2,3\}$ are $\frac{(n-1)(n-2)}{12}$-mesic, and we also showed that the number of descents is $\frac{n-1}{2}$-mesic (Proposition \ref{4_21_245_833_LC}). Following Lemma \ref{lem:sum_diff_homomesies}, the sum of homomesic statistics is homomesic, and the average of both stat and stat$'$ over one orbit of the Lehmer code rotation is $3\cdot\frac{(n-1)(n-2)}{12}+\frac{n-1}{2} = \frac{n(n-1)}{4}.$ \end{proof} \begin{prop}[Statistics 1377, 1379]\label{1377_1379_LC} The major index minus the number of inversions of a permutation is $0$-mesic. The number of inversions plus the major index of a permutation is $\frac{n(n-1)}{2}$-mesic. \end{prop} \begin{proof} Recall from Lemma \ref{lem:sum_diff_homomesies} that linear combinations of homomesic statistics are homomesic. Both the major index and the number of inversions are $\frac{n(n-1)}{4}$-mesic.
Therefore, their difference is $0$-mesic and their sum is $\frac{n(n-1)}{2}$-mesic. \end{proof} \begin{definition} An \textbf{ascent top} is a position $i$ for which $\sigma_{i-1} < \sigma_{i}$. In other words, $i$ is an ascent top exactly when $i-1$ is an ascent. \end{definition} \begin{prop}[Statistic 1640]\label{1640_LC} The number of ascent tops in the permutation such that all smaller elements appear before is $\big(1-\frac{1}{n}\big)$-mesic under the Lehmer code rotation. \end{prop} \begin{proof} Given an index $i$, if all elements smaller than $\sigma_i$ appear before position $i$, then $L(\sigma)_i = 0$. By the proof of Proposition \ref{4_21_245_833_LC}, $i$ being an ascent top means that $L(\sigma)_{i-1} \leq L(\sigma)_i = 0$. Therefore, this happens exactly when the Lehmer code has two consecutive zero entries at positions $i-1$ and $i$. We then use the fact that all possible choices for adjacent entries of the Lehmer code occur with the same frequency in any given orbit (Lemma \ref{lem:equioccurrences_of_pairs_Lehmer_code}). Hence, in each orbit, the average number of times position $i$ is such an ascent top is $\frac{1}{n+1-i}\frac{1}{n+2-i}$. Summing over all positions, the average over each orbit is \[ \sum_{i=2}^{n} \frac{1}{n+1-i}\frac{1}{n+2-i} = \sum_{j=1}^{n-1}\frac{1}{j}\frac{1}{j+1} = \sum_{j=1}^{n-1}\left(\frac{1}{j}-\frac{1}{j+1}\right) = 1-\frac{1}{n}. \] \end{proof} This concludes the proof of Theorem \ref{thm:LC}, showing that the 45 statistics listed are homomesic under the Lehmer code rotation. \section{Complement and Reverse Maps} \label{sec:comp_rev} In this section, we prove homomesies for the reverse and complement maps. Because these maps behave similarly, there are many statistics that exhibit the same behavior on the orbits of both maps. For that reason, we have divided this section into four parts. Subsection~\ref{sec:comp} discusses the differences and similarities of the two maps and includes lemmas that will be helpful in our later proofs.
In Subsection~\ref{sec:both}, we prove homomesies for both the complement and the reverse map. In Subsection~\ref{sec:complement}, we prove homomesies for the complement map, and provide examples to show that they are not homomesic for the reverse. In Subsection~\ref{sec:rev}, we prove homomesies for the reverse map, and provide examples to show that they are not homomesic for the complement. Many of the statistics that are homomesic under both maps are proven using one of two methods. The first method is to count all possible occurrences of the statistic and divide by two, since each occurrence appears in exactly one of the permutation and its reverse (or complement). The other method is to use the relationship between the reverse and complement maps seen in Lemma \ref{lem:C&R_relation}. However, there are a few statistics that use different proof techniques. While these statistics are not themselves of more interest than our other results, the proofs are noteworthy for being distinct. For the reverse, these are the \hyperref[prop:R_446]{disorder} of a permutation and the \hyperref[prop:R_304]{load} of a permutation. And for the complement, these are the \hyperref[prop:C_1114_1115]{number of odd descents}, the \hyperref[prop:C_1114_1115]{number of even descents}, and \hyperref[prop:C_692]{the Babson--Steingr\'imsson statistic stat}. First, we introduce the maps and main theorems. \begin{definition} If $\sigma = \sigma_1 \ldots \sigma_n$, then the \textbf{reverse} of $\sigma$ is $\R(\sigma) = \sigma_n\ldots \sigma_1$. That is, $\R(\sigma)_i=\sigma_{n+1-i}$. \end{definition} \begin{definition} If $\sigma = \sigma_1 \ldots \sigma_n$, then the \textbf{complement} of $\sigma$ is $\C(\sigma) = (n+1-\sigma_1)\ldots(n+1-\sigma_n)$. That is, $\C(\sigma)_i =n+1 - \sigma_i$. \end{definition} \begin{remark} It is useful to note that when viewing the reverse and complement as actions on permutation matrices, they are seen as horizontal and vertical reflections, respectively.
\end{remark} \begin{example}\label{revcompex} Let $\sigma = 52134$. Then $\R(\sigma) = 43125$ and $\C(\sigma) = 14532$. \end{example} \begin{figure}[h!] \centering \begin{minipage}[c]{0.25\linewidth} \begin{center} \begin{tabular}{c} $\begin{tikzpicture}[fill=cyan, scale=0.25, baseline={([yshift=-.5ex]current bounding box.center)}, cell30/.style={fill}, cell21/.style={fill}, cell02/.style={fill}, cell13/.style={fill}, cell44/.style={fill}, ] \foreach \i in {0,...,4} \foreach \j in {0,...,4} \path[cell\i\j/.try] (\i,\j) rectangle +(1,1); \draw grid (5,5); \end{tikzpicture}$\\ - - - - - - - \\ $\begin{tikzpicture}[fill=red, scale=0.25, baseline={([yshift=-.5ex]current bounding box.center)}, cell02/.style={fill}, cell11/.style={fill}, cell23/.style={fill}, cell40/.style={fill}, cell34/.style={fill}, ] \foreach \i in {0,...,4} \foreach \j in {0,...,4} \path[cell\i\j/.try] (\i,\j) rectangle +(1,1); \draw grid (5,5); \end{tikzpicture}$ \end{tabular} \end{center} \end{minipage} \begin{minipage}[c]{0.25\linewidth} \[ \begin{tikzpicture}[fill=cyan, scale=0.25, baseline={([yshift=-.5ex]current bounding box.center)}, cell30/.style={fill}, cell21/.style={fill}, cell02/.style={fill}, cell13/.style={fill}, cell44/.style={fill}, ] \foreach \i in {0,...,4} \foreach \j in {0,...,4} \path[cell\i\j/.try] (\i,\j) rectangle +(1,1); \draw grid (5,5); \draw[thick,dashed] (6,6) -- (6,-1); \end{tikzpicture} \ \begin{tikzpicture}[fill=red, scale=0.25, baseline={([yshift=-.5ex]current bounding box.center)}, cell04/.style={fill}, cell33/.style={fill}, cell42/.style={fill}, cell21/.style={fill}, cell10/.style={fill}, ] \foreach \i in {0,...,4} \foreach \j in {0,...,4} \path[cell\i\j/.try] (\i,\j) rectangle +(1,1); \draw grid (5,5); \end{tikzpicture}\] \end{minipage}\hfill \caption{The Reverse and the Complement} \end{figure} While the inverse map shares many similarities with the reverse and the complement, it is interesting to note that it does not exhibit homomesy on any of the statistics 
found in FindStat. We conjecture that this is due to the number of fixed points under the inverse map. For each permutation $\sigma$ fixed under a map, the value of the statistic evaluated at $\sigma$ has to be the global average of the statistic. Thus, each fixed point of a map adds another constraint on a statistic being homomesic under that map. \begin{example} Let $\sigma = 52134$. Then the inverse of $\sigma$ is $\mathcal{I}(\sigma) = 32451.$ \begin{figure}[h!] \begin{center} \begin{tabular}{c} $\begin{tikzpicture}[fill=cyan, scale=0.25, baseline={([yshift=-.5ex]current bounding box.center)}, cell30/.style={fill}, cell21/.style={fill}, cell02/.style={fill}, cell13/.style={fill}, cell44/.style={fill}, ] \foreach \i in {0,...,4} \foreach \j in {0,...,4} \path[cell\i\j/.try] (\i,\j) rectangle +(1,1); \draw grid (5,5); \draw[thick,dashed] (6,6) -- (12,-1); \end{tikzpicture} \ \begin{tikzpicture}[fill=red, scale=0.25, baseline={([yshift=-.5ex]current bounding box.center)}, cell24/.style={fill}, cell13/.style={fill}, cell32/.style={fill}, cell41/.style={fill}, cell00/.style={fill}, ] \foreach \i in {0,...,4} \foreach \j in {0,...,4} \path[cell\i\j/.try] (\i,\j) rectangle +(1,1); \draw grid (5,5); \end{tikzpicture}$ \end{tabular} \end{center} \caption{The Inverse}\label{fig:inv} \end{figure} \end{example} \begin{remark} One could manufacture statistics where the inverse map does exhibit homomesy. Sergi Elizalde suggested two such examples. The number of exceedances plus $\frac{1}{2}$ the number of fixed points, and the number of deficiencies plus $\frac{1}{2}$ the number of fixed points, are both $\frac{n}{2}$-mesic (see Definition \ref{def:exceedance} for the definition of exceedances and deficiencies).
We see this as the number of exceedances equals the number of filled boxes above the main diagonal, the number of deficiencies equals the number of filled boxes below the main diagonal, the number of fixed points equals the number of filled boxes in the main diagonal, and the inverse acts on a permutation matrix by reflecting it along the main diagonal. \end{remark} The main theorems of this section are as follows. \begin{thm}\label{thmboth} The reverse map and the complement map are both homomesic under the following statistics: \begin{itemize} \rm \item Statistics related to inversions: \begin{itemize} \item \hyperref[prop:RC_18_246]{$\Stat~18$}: The number of inversions of a permutation $(${\small average: $\frac{n(n-1)}{4}$ }$)$ \item \hyperref[prop:RC_55_341]{$\Stat~55$}: The inversion sum of a permutation $(${\small average: $\frac{1}{2}\binom{n+1}{3}$ }$)$ \item \hyperref[prop:RC_18_246]{$\Stat$ $246$}: The number of non-inversions of a permutation $(${\small average: $\frac{n(n-1)}{4}$ }$)$ \item \hyperref[prop:RC_55_341]{$\Stat$ $341$}: The non-inversion sum of a permutation $(${\small average: $\frac{1}{2}\binom{n+1}{3}$ }$)$ \item \hyperref[prop:RC_495]{$\Stat$ $494$}: The number of inversions of distance at most $3$ of a permutation $(${\small average: $\frac{3n-6}{2}$ }$)$ \item \hyperref[prop:RC_495]{$\Stat$ $495$}: The number of inversions of distance at most $2$ of a permutation $(${\small average: $\frac{2n-3}{2}$ }$)$ \item \hyperref[prop:RC_538_539]{$\Stat$ $538$}: The number of even inversions of a permutation $(${\small average: $\frac{1}{2}\cdot \lfloor \frac{n}{2}\rfloor\lfloor\frac{n-1}{2}\rfloor$ }$)$ \item \hyperref[prop:RC_538_539]{$\Stat$ $539$}: The number of odd inversions of a permutation $(${\small average: $\frac{1}{2}\lfloor\frac{n^2}{4}\rfloor$ }$)$ \item \hyperref[prop:RC_677]{$\Stat$ $677$}: The standardized bi-alternating inversion number of a permutation $(${\small average: $\frac{\lfloor\frac{n}{2}\rfloor^2}{2}$ }$)$ 
\end{itemize} \item Statistics related to descents: \begin{itemize} \item \hyperref[prop:RC_21_245_et]{$\Stat$ $21$}: The number of descents of a permutation $(${\small average: $\frac{n-1}{2}$ }$)$ \item \hyperref[prop:RC_21_245_et]{$\Stat$ $245$}: The number of ascents of a permutation $(${\small average: $\frac{n-1}{2}$ }$)$ \item \hyperref[prop:RC_21_245_et]{$\Stat$ $470$}: The number of runs in a permutation $(${\small average: $\frac{n+1}{2}$ }$)$ \item \hyperref[prop:RC_21_245_et]{$\Stat$ $619$}: The number of cyclic descents of a permutation $(${\small average: $\frac{n}{2}$ }$)$ \item \hyperref[prop:RC_836_837]{$\Stat$ $836$}: The number of descents of distance $2$ of a permutation $(${\small average: $\frac{n-2}{2}$ }$)$ \item \hyperref[prop:RC_836_837]{$\Stat$ $837$}: The number of ascents of distance $2$ of a permutation $(${\small average: $\frac{n-2}{2}$ }$)$ \item \hyperref[prop:RC_836_837]{$\Stat$ $1520$}: The number of strict $3$-descents of a permutation $(${\small average: $\frac{n-3}{2}$ }$)$ \end{itemize} \item Statistics related to other permutation properties: \begin{itemize} \item \hyperref[prop:RC_21_245_et]{$\Stat$ $325$}: The width of the tree associated to a permutation $(${\small average: $\frac{n+1}{2}$ }$)$ \item \hyperref[prop:RC_342]{$\Stat$ $342$}: The cosine of a permutation $(${\small average: $\frac{(n+1)^2n}{4}$ }$)$ \item \hyperref[prop:RC_354]{$\Stat$ $354$}: The number of recoils of a permutation $(${\small average: $\frac{n-1}{2}$ }$)$ \item \hyperref[prop:RC_457]{$\Stat$ $457$}: The number of occurrences of one of the patterns $132$, $213$, or $321$ in a permutation $(${\small average: $\frac{\binom{n}{3}}{2}$ }$)$ \item \hyperref[prop:RC_21_245_et]{$\Stat$ $824$}: The sum of the number of descents and the number of recoils of a permutation $(${\small average: $n-1$ }$)$ \item \hyperref[prop:RC_828]{$\Stat$ $828$}: The Spearman’s rho of a permutation and the identity permutation $(${\small average: $\binom{n+1}{3}$ }$)$ 
\end{itemize} \end{itemize} \end{thm} \begin{thm}\label{onlycomp} The complement map is homomesic under the following statistics, but the reverse map is not: \begin{itemize} \rm \item Statistics related to inversions: \begin{itemize} \item \hyperref[prop:C_1557_1556]{$\Stat$ $1556$}: The number of inversions of the second entry of a permutation $(${\small average: $\frac{n-2}{2}$ }$)$ \item \hyperref[prop:C_1557_1556]{$\Stat$ $1557$}: The number of inversions of the third entry of a permutation $(${\small average: $\frac{n-3}{2}$ }$)$ \end{itemize} \item Statistics related to descents: \begin{itemize} \item \hyperref[prop:C_4]{$\Stat$ $4$}: The major index $(${\small average: $\frac{n(n-1)}{4}$ }$)$ \item \hyperref[prop:C_1114_1115]{$\Stat$ $1114$}: The number of odd descents of a permutation $(${\small average: $\frac{1}{2}\lceil \frac{n-1}{2}\rceil$ }$)$ \item \hyperref[prop:C_1114_1115]{$\Stat$ $1115$}: The number of even descents of a permutation $(${\small average: $\frac{1}{2}\lfloor\frac{n-1}{2}\rfloor$ }$)$ \end{itemize} \item Statistics related to other permutation properties: \begin{itemize} \item \hyperref[prop:C_20]{$\Stat$ $20$}: The rank of a permutation $(${\small average: $\frac{n!+1}{2}$ }$)$ \item \hyperref[prop:C_54_740]{$\Stat$ $54$}: The first entry of the permutation $(${\small average: $\frac{n+1}{2}$ }$)$ \item \hyperref[prop:C_662]{$\Stat$ $662$}: The staircase size of a permutation $(${\small average: $\frac{n-1}{2}$ }$)$ \item \hyperref[prop:C_692]{$\Stat$ $692$}: Babson and Steingr\'imsson’s statistic stat of a permutation $(${\small average: $\frac{n(n-1)}{4}$ }$)$ \item \hyperref[prop:C_54_740]{$\Stat$ $740$}: The last entry of a permutation $(${\small average: $\frac{n+1}{2}$ }$)$ \item \hyperref[prop:C_1332]{$\Stat$ $1332$}: The number of steps on the non-negative side of the walk associated with a permutation $(${\small average: $\frac{n-1}{2}$ }$)$ \item \hyperref[prop:C_1377_1379]{$\Stat$ $1377$}: The major index minus the number 
of inversions of a permutation $(${\small average: $0$ }$)$ \item \hyperref[prop:C_1377_1379]{$\Stat$ $1379$}: The number of inversions plus the major index of a permutation $(${\small average: $\frac{n(n-1)}{2}$ }$)$ \item \hyperref[thm:ith_entry_comp]{$i$-th entry}: The $i$-th entry of a permutation $(${\small average: $\frac{n+1}{2}$ }$)$ \end{itemize} \end{itemize} \end{thm} \begin{thm}\label{onlyrev} The reverse map is homomesic under the following statistics, but the complement map is not: \begin{itemize} \rm \item \hyperref[prop:R_304]{$\Stat$ $304$}: The load of a permutation $(${\small average: $\frac{n(n-1)}{4}$ }$)$ \item \hyperref[prop:R_305]{$\Stat$ $305$}: The inverse major index $(${\small average: $\frac{n(n-1)}{4}$ }$)$ \item \hyperref[prop:R_446]{$\Stat$ $446$}: The disorder of a permutation $(${\small average: $\frac{n(n-1)}{4}$ }$)$ \item \hyperref[prop:R_616]{$\Stat$ $616$}: The inversion index of a permutation $(${\small average: $\binom{n+1}{3}$ }$)$ \item \hyperref[prop:R_798]{$\Stat$ $798$}: The makl of a permutation $(${\small average: $\frac{n(n-1)}{4}$ }$)$ \end{itemize} \end{thm} \subsection{Comparing and contrasting the reverse and complement} \label{sec:comp} Before we provide proofs for our main theorems, we introduce some general lemmas which show how the two maps are similar and how they differ. First, we note that both the complement and reverse maps are involutions and, for $n \geq 2$, their orbits always have size $2$. One of the main differences between these two maps is illustrated by the following lemma. \begin{lem} Whenever $n>2$ is odd, we note the following: \begin{enumerate} \item $\R(\sigma)$ has a fixed point: $\sigma_{\frac{n+1}{2}} =\R(\sigma)_{\frac{n+1}{2}}.$ \item $\C(\sigma)$ has a fixed point: If $\sigma_i=\frac{n+1}{2}$, then $\C(\sigma)_i =\frac{n+1}{2}$. \end{enumerate} When $n$ is even, $\R(\sigma)$ and $\C(\sigma)$ have no fixed points.
\end{lem} \begin{proof} The proof follows directly from the definitions of the reverse and complement map. Let $n>2$ be an odd integer. \begin{enumerate} \item Since $\R(\sigma)_i = \sigma_{n + 1 - i}$, we have $\R(\sigma)_{\frac{n+1}{2}} = \sigma_{n + 1 - \frac{n+1}{2}} = \sigma_{\frac{n+1}{2}}$. \item Since $\C(\sigma)_i = n+1-\sigma_i$, $\sigma_i = \frac{n+1}{2}$ implies that $\C(\sigma)_i = n+1-\frac{n+1}{2} = \frac{n+1}{2}.$ \end{enumerate} Let $n$ be an even integer. Then $\frac{n+1}{2}$ is not an integer, so a permutation $\sigma$ has neither an entry $\sigma_{\frac{n+1}{2}}$ nor an entry equal to $\frac{n+1}{2}$. \end{proof} \begin{example} Continuing Example \ref{revcompex}, let $\sigma = 52134$. Then $\R(\sigma) = 43125$, and we see $\R(\sigma)$ has a fixed point $\sigma_3 = 1 = \R(\sigma)_3$. Additionally, $\C(\sigma) = 14532$, and we see $\C(\sigma)$ has a fixed point $\sigma_4 = 3 = \C(\sigma)_4$. \end{example} The following lemma exhibits the relationship between the complement and the reverse maps, which will be used in the proofs of our main results. \begin{lem} \label{lem:C&R_relation} Let $\sigma \in S_n$. Then, \begin{enumerate} \item $\C(\sigma)^{-1} = \R(\sigma^{-1})$ and $\C(\sigma^{-1}) = \R(\sigma)^{-1}.$ \item $(\R\circ \C)^2=e$, where $e$ is the identity map on permutations. \item $(\R\circ \mathcal{I})^4= e$, where $\mathcal{I}$ is the map that sends $\sigma$ to its inverse, $\sigma^{-1}$. \end{enumerate} \end{lem} \begin{proof} Each of these equations becomes clear when the maps are viewed as actions on permutation matrices, as the reverse map is equivalent to a horizontal reflection, the complement map is equivalent to a vertical reflection, and the inverse map is equivalent to a reflection along the main diagonal. \end{proof} Now we are ready to prove our main theorems.
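As a quick sanity check before the proofs, the identities above and the first homomesy claims are easy to verify computationally for small $n$. The following Python sketch (function names are ours) implements the reverse, complement, and inverse maps and checks the first two items of Lemma \ref{lem:C&R_relation}, together with the fact that every pair $(a,b)$ is an inversion of exactly one of $\sigma$ and $\R(\sigma)$ (resp. $\C(\sigma)$), which yields the $\frac{n(n-1)}{4}$-mesy of inversions over the $2$-element orbits.

```python
from itertools import permutations


def reverse(p):
    # R(p)_i = p_{n+1-i}
    return p[::-1]


def complement(p):
    # C(p)_i = n + 1 - p_i
    n = len(p)
    return tuple(n + 1 - x for x in p)


def inverse(p):
    # q = p^{-1}, i.e. q_{p_i} = i
    q = [0] * len(p)
    for i, x in enumerate(p, start=1):
        q[x - 1] = i
    return tuple(q)


def inversions(p):
    # pairs (i, j) with i < j and p_i > p_j
    n = len(p)
    return sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))


def check(n):
    for p in permutations(range(1, n + 1)):
        # Lemma: C(p)^{-1} = R(p^{-1}), and R∘C has order 2
        assert inverse(complement(p)) == reverse(inverse(p))
        assert reverse(complement(reverse(complement(p)))) == p
        # each of the n(n-1)/2 pairs is an inversion of exactly one of p, R(p) (resp. C(p))
        assert inversions(p) + inversions(reverse(p)) == n * (n - 1) // 2
        assert inversions(p) + inversions(complement(p)) == n * (n - 1) // 2
    return True
```

Running `check(n)` for several small values of $n$ exercises both odd and even cases of the fixed-point lemma's setting.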
\subsection{Statistics homomesic for both the reverse and the complement} \label{sec:both} In this subsection, we prove homomesy of the complement and reverse maps for the statistics listed in Theorem \ref{thmboth}. First, we consider statistics related to inversions. We will use the following lemma and definition. \begin{lem}\label{inversion_pairs} The permutation $\sigma$ has $(a,b)\in \Inv (\sigma)$ if and only if $(a,b) \notin \Inv(\C(\sigma))$ and $(n+1-b, n+1-a) \notin \Inv(\R(\sigma))$. \end{lem} \begin{proof} Suppose that $(i, j)$ is a pair such that $1 \leq i < j \leq n$. If $\sigma_i>\sigma_j$, then $\C(\sigma)_i = n+1-\sigma_i<n+1-\sigma_j = \C(\sigma)_j$ and $\R(\sigma)_{n+1-i} = \sigma_i > \sigma_j = \R(\sigma)_{n+1-j}$. So $(i, j)$ is an inversion of $\sigma$ if and only if $(i, j)$ is not an inversion of $\C(\sigma)$ and $(n + 1 - j, n + 1 - i)$ is not an inversion of $\R(\sigma)$. \end{proof} \begin{definition} An inversion $(i,j)$, where $i<j$, is said to be an \bb{odd inversion} if $i\not\equiv j \pmod 2$. An inversion is said to be an \bb{even inversion} if $i\equiv j \pmod 2$. \end{definition} \begin{prop}[Statistics 538, 539]\label{prop:RC_538_539} The number of even inversions of a permutation is $\Big(\frac{1}{2}\cdot \lfloor \frac{n}{2}\rfloor\lfloor\frac{n-1}{2}\rfloor\Big)$-mesic, and the number of odd inversions of a permutation is $\Big(\frac{1}{2}\lfloor\frac{n^2}{4}\rfloor\Big)$-mesic under the complement and reverse maps. \end{prop} \begin{proof} Using Lemma \ref{inversion_pairs}, if $(i,j)$ is an inversion of $\sigma$, then $(n + 1 - j, n + 1 - i)$ is not an inversion of $\R(\sigma)$. \begin{itemize} \item If $(i,j)$ is an odd inversion, then so is $(n + 1 - j, n + 1 - i)$. \item If $(i,j)$ is an even inversion, then so is $(n + 1 - j, n + 1 - i)$. \end{itemize} In either case, each odd or even inversion of $\sigma$ is matched with an odd or even inversion that is not present in $\R(\sigma)$.
Similarly, if $(i,j)$ is an odd or even inversion of $\sigma$, it is not an inversion pair for $\C(\sigma)$. There are $\lfloor \frac{n}{2} \rfloor \cdot \lfloor \frac{n+1}{2} \rfloor = \lfloor\frac{n^2}{4}\rfloor$ ways to choose an odd inversion, and $\lfloor \frac{n}{2}\rfloor\lfloor\frac{n-1}{2}\rfloor$ ways to choose an even inversion. Therefore, the number of odd inversions of a permutation is $\Big(\frac{1}{2}\cdot \lfloor\frac{n^2}{4}\rfloor\Big)$-mesic, and the number of even inversions of a permutation is $\Big(\frac{1}{2}\cdot \lfloor \frac{n}{2}\rfloor\lfloor\frac{n-1}{2}\rfloor\Big)$-mesic. \end{proof} \begin{prop}[Statistics 18, 246]\label{prop:RC_18_246} The number of inversions and the number of non-inversions of a permutation are both $\frac{n(n-1)}{4}$-mesic for the complement and reverse maps. \end{prop} \begin{proof} First note that the number of inversions of a permutation is the sum of the even and odd inversions of that permutation. Using Lemma \ref{lem:sum_diff_homomesies}, we see that the number of inversions is homomesic for both the complement and reverse. Similarly, the number of non-inversions of a permutation is given by $\frac{n(n-1)}{2}-\inv(\sigma)$. Since we have proven that $\inv(\sigma)$ is homomesic, we see that the number of non-inversions is homomesic for both maps as well. Between $\sigma$ and $\C(\sigma)$, or $\sigma$ and $\R(\sigma)$, we count all the possible inversion or non-inversion pairs: $\frac{n(n-1)}{2}$. Thus the number of inversions or non-inversions is $\frac{n(n-1)}{4}$-mesic for both maps. \end{proof} \begin{definition} The \bb{inversion sum} of a permutation is given by $\displaystyle \sum_{(a, b) \in \Inv(\sigma)} (b - a)$. \end{definition} \begin{prop}[Statistics 55, 341]\label{prop:RC_55_341} The inversion sum of a permutation and the non-inversion sum of a permutation are both $\left(\frac{1}{2}\binom{n+1}{3}\right)$-mesic under the complement and reverse maps. 
\end{prop} \begin{proof} Using the result from Lemma \ref{inversion_pairs}, when we add the inversion sum for $\sigma $ with that of $\R(\sigma)$, we have: \[ \sum_{(a,b) \in \Inv (\sigma)}(b-a) + \sum_{(a,b) \notin \Inv (\sigma)}\left((n+1-a)-(n+1-b)\right) = \sum_{1\leq a < b \leq n} (b - a). \] This is also the result from adding the inversion sum for $\sigma $ with that of $\C(\sigma)$: \[ \sum_{(a,b) \in \Inv (\sigma)}(b-a) + \sum_{(a,b) \notin \Inv (\sigma)} (b-a) = \sum_{1\leq a < b \leq n} (b - a). \] From here, we find \[ \sum_{1\leq a < b \leq n} (b - a) = \sum_{i=1}^{n-1}i(n-i) = \frac{(n-1)n^2}{2} - \frac{(n-1)n(2n-1)}{6} = \frac{n(n-1)(n+1)}{6} = \binom{n+1}{3}.\] Hence, the average is $\frac{1}{2}\binom{n+1}{3}$. \end{proof} \begin{definition} The \bb{sign of an integer} is given by \[ \mbox{sign}(n) =\begin{cases} ~~1 & \mbox{ if } n >0 \\ ~~0 & \mbox{ if } n=0 \\ -1 & \mbox{ if } n <0 \end{cases}. \] \end{definition} \begin{definition}\cite{Even_Zohar_2016} The \textbf{standardized bi-alternating inversion number} of a permutation $\sigma = \sigma_1 \sigma_2 \ldots \sigma_n$ is defined as \[ \frac{j(\sigma) + \lfloor \frac{n}{2} \rfloor^2}{2} \] where \[ j(\sigma) = \sum_{1\leq y<x \leq n} (-1)^{x+y}\mbox{sign}(\sigma_x-\sigma_y). \] \end{definition} \begin{prop}[Statistic 677]\label{prop:RC_677} The standardized bi-alternating inversion number of a permutation is $\displaystyle \ \frac{\lfloor\frac{n}{2}\rfloor^2}{2}$-mesic under the complement and the reverse maps. \end{prop} \begin{proof} For the complement, we have: \[ j(\C(\sigma)) = \sum_{1\leq y<x \leq n} (-1)^{x+y}\mbox{sign}((n+1-\sigma_x)-(n+1-\sigma_y)) = \sum_{1\leq y<x \leq n} (-1)^{x+y}\mbox{sign}(-\sigma_x+\sigma_y ). \] Since $\mbox{sign}(\sigma_x-\sigma_y)$ and $\mbox{sign}(-\sigma_x +\sigma_y )$ are always opposites, while the factor $(-1)^{x+y}$ is unchanged, we obtain $j(\sigma)+j(\C(\sigma)) = 0$. 
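This term-by-term cancellation is easy to confirm exhaustively; a brief Python sketch (the helper names are ours):

```python
from itertools import permutations

def sign(m):
    """Sign of an integer: 1, 0, or -1."""
    return (m > 0) - (m < 0)

def j(p):
    """j(σ) = Σ_{1 ≤ y < x ≤ n} (-1)^{x+y} sign(σ_x - σ_y)."""
    n = len(p)
    return sum((-1) ** (x + y) * sign(p[x - 1] - p[y - 1])
               for y in range(1, n + 1) for x in range(y + 1, n + 1))

def complement(p):
    """C(σ)_i = n + 1 - σ_i."""
    n = len(p)
    return tuple(n + 1 - v for v in p)

# j(σ) + j(C(σ)) = 0 on every complement orbit, so Statistic 677,
# (j(σ) + ⌊n/2⌋²)/2, averages to ⌊n/2⌋²/2 over each orbit.
for n in (3, 4, 5):
    for p in permutations(range(1, n + 1)):
        assert j(p) + j(complement(p)) == 0
```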
For the reverse, we have: \[ j(\R(\sigma)) = \sum_{1\leq y<x \leq n} (-1)^{x+y}\mbox{sign}(\sigma_{n+1-x}-\sigma_{n+1-y})\\ =\sum_{1\leq y<x \leq n} (-1)^{x+y}\mbox{sign}(\sigma_y-\sigma_x). \] As with the complement, $j(\R(\sigma)) + j(\sigma) = 0$, and the average over the orbit for both maps is given by $\displaystyle \frac{\lfloor\frac{n}{2}\rfloor^2}{2}$. \end{proof} Recall Definition \ref{def:basic_stats} for the definition of an inversion pair. \begin{definition} The number of \bb{recoils} of a permutation $\sigma$ is defined as the number of inversion pairs of $\sigma$ of the form $(i+1, i)$. Alternatively, the number of recoils of a permutation $\sigma$ is the number of descents of $\sigma^{-1}$. \end{definition} \begin{prop}[Statistic 354]\label{prop:RC_354} The number of recoils of a permutation is $\frac{n-1}{2}$-mesic under the complement and the reverse maps. \end{prop} \begin{proof} Using Lemma \ref{lem:C&R_relation}, note that $\des(\C(\sigma)^{-1}) + \des(\sigma^{-1}) = \des(\R(\sigma^{-1})) + \des(\sigma^{-1}) = n-1$ and $\des(\sigma^{-1}) + \des(\R(\sigma)^{-1})= \des(\sigma^{-1}) + \des(\C(\sigma^{-1}))=n-1$. Therefore, the number of recoils is $\frac{n-1}{2}$-mesic for both maps. \end{proof} Before we look at statistics related to descents and ascents, we consider Lemmas \ref{lem:positionofdes_C&R} and \ref{lem:numberofdes_C&R} related to the position and number of descents and ascents for both maps. \begin{lem}\label{lem:positionofdes_C&R} If $\sigma$ has a descent at position $i$, then $\C(\sigma)$ has an ascent at position $i$ and $\R(\sigma)$ has an ascent at position $n - i$. Similarly, if $\sigma$ has an ascent at position $i$, then $\C(\sigma)$ has a descent at position $i$ and $\R(\sigma)$ has a descent at position $n - i$. \end{lem} \begin{proof} Let $\sigma\in S_n$. If $\sigma_i> \sigma_{i+1}$, then $\C(\sigma)_i=n+1-\sigma_i< n+1-\sigma_{i+1}=\C(\sigma)_{i+1}$ and $\R(\sigma)_{n-i} = \sigma_{i+1} < \sigma_i = \R(\sigma)_{n-i+1}$. 
This means that a descent at position $i$ is mapped to an ascent at position $i$ under the complement and an ascent at position $n - i$ under the reverse. Similarly, we can see an ascent at position $i$ is mapped to a descent at position $i$ under the complement and a descent at position $n-i$ under the reverse. \end{proof} \begin{lem}\label{lem:numberofdes_C&R} If $\sigma$ has $k$ descents and $n-1-k$ ascents, then $\C(\sigma)$ and $\R(\sigma)$ both have $k$ ascents and $n-1-k$ descents. \end{lem} \begin{proof} In Lemma \ref{lem:positionofdes_C&R} we showed that every descent in $\sigma$ contributes an ascent in $\C(\sigma)$ and $\R(\sigma)$, and every ascent in $\sigma$ contributes a descent in $\C(\sigma)$ and $\R(\sigma)$, so the result follows. \end{proof} \begin{prop}\label{prop:RC_21_245_et} For both the complement and the reverse maps, \begin{itemize} \item \textnormal{(Statistics 21, 245)} The number of descents of a permutation, and the number of ascents of a permutation, are $\frac{n-1}{2}$-mesic; \item \textnormal{(Statistic 619)} The number of cyclic descents of a permutation is $\frac{n}{2}$-mesic; \item \textnormal{(Statistic 470)} The number of runs in a permutation is $\frac{n+1}{2}$-mesic; \item \textnormal{(Statistic 325)} The width of the tree associated to a permutation is $\frac{n+1}{2}$-mesic; \item \textnormal{(Statistic 824)} The sum of the number of descents and the number of recoils of a permutation is $(n-1)$-mesic. \end{itemize} \end{prop} \begin{proof} There are $n-1$ possible ascents or descents in a permutation $\sigma$. From Lemma \ref{lem:positionofdes_C&R}, we note that between $\sigma$ and $\C(\sigma)$, and $\sigma$ and $\R(\sigma)$, we have all possible ascents and descents. Therefore, the number of ascents and the number of descents of a permutation are $\frac{n-1}{2}$-mesic. Cyclic descents only differ from standard descents by allowing a descent at position $n$ if $\sigma_n<\sigma_1$. 
Using the same argument from Lemma \ref{lem:numberofdes_C&R}, if $\sigma_n<\sigma_1$, we have $\C(\sigma)_1<\C(\sigma)_n$ and $\R(\sigma)_1< \R(\sigma)_n$. So between $\sigma$ and $\C(\sigma)$, or $\sigma$ and $\R(\sigma)$, we have all possible cyclic descents. Thus the average is $\frac{n}{2}$. The width of the tree associated to a permutation is the same as the number of runs, as stated in \cite{Luschny} (and in the proof of Proposition \ref{325_470_LC}). The number of runs is the same as the number of descents, plus 1. Since we just showed that the number of descents is $\frac{n-1}{2}$-mesic, this gives an average of $\frac{n+1}{2}$. The sum of the number of recoils and the number of descents is also homomesic by Proposition \ref{prop:RC_354} and Lemma \ref{lem:sum_diff_homomesies}. Using these results, this sum is $(n-1)$-mesic for both maps. \end{proof} \begin{prop}\label{prop:RC_836_837} For both the complement and the reverse maps, \begin{itemize} \item \textnormal{(Statistics 836, 837)} The number of descents of distance $2$ of a permutation and the number of ascents of distance $2$ of a permutation are $\frac{n-2}{2}$-mesic. \item \textnormal{(Statistic 1520)} The number of strict $3$-descents of a permutation is $\frac{n-3}{2}$-mesic. \end{itemize} \end{prop} \begin{proof} If $\sigma$ has a descent of distance $2$, it is mapped to an ascent of distance $2$ in both $\C(\sigma)$ and $\R(\sigma)$. We have all possible ascents and descents of distance $2$ present in either $\sigma$ or $\R(\sigma)$, and all possible ascents and descents of distance $2$ in either $\sigma$ or $\C(\sigma)$. There are a total of $n-2$ possible descents or ascents of distance $2$, so the average is $\frac{n-2}{2}$. What is called a strict $3$-descent in FindStat is a descent of distance $3$. The argument for descents of distance $3$ follows similarly. Because there are a total of $n-3$ possible descents of distance $3$, the average is $\frac{n-3}{2}$. 
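Both counts are easily confirmed exhaustively; a small Python sketch (the helper names are ours):

```python
from itertools import permutations

def complement(p):
    """C(σ)_i = n + 1 - σ_i."""
    n = len(p)
    return tuple(n + 1 - v for v in p)

def reverse(p):
    """R(σ)_i = σ_{n+1-i}."""
    return tuple(reversed(p))

def descents_distance(p, d):
    """Number of positions i with p_i > p_{i+d} (descents of distance d)."""
    return sum(p[i] > p[i + d] for i in range(len(p) - d))

# On each two-element orbit {σ, C(σ)} and {σ, R(σ)}, the distance-d
# descents of the two elements together cover all n - d positions,
# so the orbit average is (n - d)/2.
n = 5
for p in permutations(range(1, n + 1)):
    for d in (2, 3):
        for f in (complement, reverse):
            assert descents_distance(p, d) + descents_distance(f(p), d) == n - d
```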
\end{proof} \begin{prop} \label{prop:RC_495} For both the complement and reverse maps, \begin{itemize} \item \textnormal{(Statistic 495)} The number of inversions of distance at most $2$ of a permutation is $\frac{2n-3}{2}$-mesic; \item \textnormal{(Statistic 494)} The number of inversions of distance at most $3$ of a permutation is $\frac{3n-6}{2}$-mesic. \end{itemize} \end{prop} \begin{proof} First note that the number of inversions of distance at most $i$ is the sum of the number of descents, the number of descents of distance $2$, and so on up to the number of descents of distance $i$. In Propositions \ref{prop:RC_21_245_et} and \ref{prop:RC_836_837}, we showed that the number of descents, and the numbers of descents of distance $2$ and $3$ are homomesic for both the reverse and the complement. By Lemma \ref{lem:sum_diff_homomesies}, the sums of these statistics are also homomesic. The average number of inversions of distance at most $2$ is $\frac{(n-1)+(n-2)}{2} = \frac{2n-3}{2}$. Similarly, the average number of inversions of distance at most $3$ is $\frac{(n-1)+(n-2)+(n-3)}{2 } = \frac{3n-6}{2}$. \end{proof} Lastly, we prove homomesy for statistics related to permutation properties other than inversions or descents and ascents. \begin{prop}[Statistic 457]\label{prop:RC_457} The total number of occurrences of the patterns $132$, $213$, and $321$ in a permutation is $\frac{\binom{n}{3}}{2}$-mesic for both the complement and reverse maps. \end{prop} \begin{proof} These three patterns constitute half of the six patterns of length $3$, and the set contains neither the reverse nor the complement of any of its elements. So for a triple of positions $a<b<c$ and a permutation $\sigma$, exactly one of $\sigma_a\sigma_b\sigma_c$ and $\R(\sigma)_{n+1-c}\R(\sigma)_{n+1-b}\R(\sigma)_{n+1-a}$ is a pattern in the list. Similarly, for a triple of positions $a<b<c$, exactly one of $\sigma_a\sigma_b \sigma_c$ and $\C(\sigma)_a\C(\sigma)_b\C(\sigma)_c$ is a pattern in the list. 
Hence, $\frac{\binom{n}{3}}{2}$ is the average number of occurrences of these patterns in each orbit. \end{proof} \begin{definition}\cite{https://doi.org/10.48550/arxiv.1106.1995} The \bb{cosine of $\sigma$} is defined as $\cos(\sigma)=\displaystyle \sum_{i = 1}^{n} i\sigma_i$. \end{definition} The name of the statistic is due to the following construction, found in \cite{https://doi.org/10.48550/arxiv.1106.1995}: we treat the permutations $\sigma$ and $e$, the identity, as vectors. Then the dot product of the two vectors is calculated as \[ e\cdot \sigma =\displaystyle \sum_{i = 1}^{n} i\sigma_i \] or, alternatively, \[ e\cdot \sigma = |e||\sigma|\cos(\theta)=|\sigma|^2 \cos(\theta) = \frac{n(n+1)(2n+1)}{6} \cos(\theta), \] where $\theta$ is the angle between the vectors. Thus the dot product only relies on the cosine of the angle between the vectors, which is where the statistic derives its name. \begin{prop}[Statistic 342]\label{prop:RC_342} The cosine of a permutation is $\frac{(n+1)^2n}{4}$-mesic under both the complement and reverse maps. \end{prop} \begin{proof} By definition, \[ \cos(\sigma) + \cos(\C(\sigma)) = (n+1) \sum_{i = 1}^{n} i = (n+1) \frac{(n+1)n}{2}, \] and \[ \cos(\sigma) + \cos(\R(\sigma)) = \sum_{i = 1}^{n} i\sigma_i + \sum_{i = 1}^n (n + 1 - i) \sigma_i = \sum_{i = 1}^n (n+1) \sigma_i = (n+1) \frac{(n+1)n}{2}. \] Therefore, the average is \[ \frac{(n+1)^2n}{4}. \] \end{proof} In statistics, Spearman's rho is used as a test to determine the relationship between two variables. In the study of permutations, it can be used as a measure for a distance between $\sigma$ and the identity permutation. \begin{definition}\cite{ChatterjeeDiaconis} The \textbf{Spearman's rho of a permutation and the identity permutation} is given by $\displaystyle \sum_{i = 1}^n (\sigma_i - i)^2$. \end{definition} \begin{prop}[Statistic 828]\label{prop:RC_828} The Spearman's rho of a permutation and the identity permutation is $\binom{n+1}{3}$-mesic for both the complement and the reverse maps. 
\end{prop} \begin{proof} Under the reverse map, the average is calculated by \begin{align*} & \frac{1}{2}\sum_{i = 1}^n \left((\sigma_i - i)^2 + (\sigma_{n + 1 - i} - i)^2\right) \\ & = \frac{1}{2}\sum_{i = 1}^n \left((\sigma_i - i)^2 + (\sigma_{i} - (n + 1 - i))^2\right) \\ & = \frac{1}{2}\sum_{i = 1}^n \left((\sigma_i - i)^2 + (\sigma_i + i)^2 - 2(n+1)(\sigma_i + i) + (n+1)^2\right) \\ & = \frac{1}{2}\sum_{i = 1}^n \left(2\sigma_i^2 + 2i^2- 2(n+1)(\sigma_i + i) + (n+1)^2\right) \\ & = \frac{1}{2}\sum_{i = 1}^n \left(4i^2- 4(n+1)(i) + (n+1)^2\right) \\ & = 2\sum_{i = 1}^n i^2 - 2(n+1)\sum_{i = 1}^n i + \frac{n(n+1)^2}{2} \\ & = \frac{n(n+1)(2n+1)}{3} - n(n+1)^2 + \frac{n(n+1)^2}{2} \\ & = \frac{n(n+1)(2n+1)}{3} - \frac{n(n+1)^2}{2} \\ & = \frac{n(n+1)(n-1)}{6} \\ & = \binom{n+1}{3}. \end{align*} For the complement, we find that \[ \sum_{i = 1}^n (\C(\sigma)_i - i)^2 = \sum_{i = 1}^n (n+1-\sigma_i - i)^2 = \sum_{i = 1}^n (\sigma_i - (n-i+1))^2. \] So the average is \[ \frac{1}{2} \left( \sum_{i = 1}^n (\sigma_i - i)^2 + \sum_{i = 1}^n (\sigma_i - (n-i+1))^2\right), \] which means that the average for both the reverse and the complement is given by $\binom{n+1}{3}$. \end{proof} This concludes the proof of Theorem \ref{thmboth}, showing the statistics listed there exhibit homomesy for both the reverse and the complement maps. \subsection{Statistics homomesic for the complement but not the reverse} \label{sec:complement} In this subsection, we prove that the statistics listed in Theorem \ref{onlycomp} are homomesic under the complement map and provide examples illustrating that they are not homomesic under the reverse map. Note that it is enough to provide an example of an orbit whose average under the statistic does not match that of the global average, following Remark~\ref{global_avg}. \begin{prop}[Statistic 4]\label{prop:C_4} The major index is $\frac{n(n-1)}{4}$-mesic for the complement, but is not homomesic for the reverse. 
\end{prop} \begin{proof} For $\sigma$, recall from Definition \ref{def:basic_stats} that \[ \maj (\sigma) = \sum_{\sigma_i>\sigma_{i+1}} i. \] For the complement, we know from Lemma \ref{lem:positionofdes_C&R} that $\C(\sigma)_i<\C(\sigma)_{i+1}$ whenever $i$ is a descent for $\sigma$, and $\C(\sigma)_j>\C(\sigma)_{j+1}$ whenever $j$ is an ascent for $\sigma$. Thus \[ \maj (\sigma) +\maj (\C(\sigma)) = \sum_{i=1}^{n-1} i = \frac{n(n-1)}{2}. \] Thus the average over an orbit is $\frac{n(n-1)}{4}$. To see that it is not homomesic under the reverse, we exhibit an orbit with an average that differs from the global average of $\frac{n(n-1)}{4}$. Consider the orbit $(\sigma, \R(\sigma))$ where $\sigma = 132$ and $\R(\sigma) = 231$. The major index of $\sigma$ is 2 and the major index of $\R(\sigma)$ is 2, so the average over the orbit is $2$, not $\frac{3(3-1)}{4} = \frac{3}{2}.$ \end{proof} \begin{cor}[Statistics 1377, 1379]\label{prop:C_1377_1379} The major index minus the number of inversions of a permutation is $0$-mesic, and the number of inversions plus the major index of a permutation is $\frac{n(n-1)}{2}$-mesic under the complement, but neither is homomesic for the reverse. \end{cor} \begin{proof} Both of these are combinations of the major index and the number of inversions, which are both homomesic under the complement. The major index is $\frac{n(n-1)}{4}$-mesic, as is the number of inversions. This means that their difference is 0-mesic, and their sum is $\frac{n(n-1)}{2}$-mesic. The number of inversions is homomesic under the reverse map but the major index is not, so neither the major index minus the number of inversions nor the number of inversions plus the major index can be homomesic under the reverse. \end{proof} \begin{thm}\label{thm:inversion_positions_RC} The number of inversions of the $i$-th entry of a permutation is $\frac{n-i}{2}$-mesic under the complement. 
\end{thm} \begin{proof} In general, if $\sigma_i>\sigma_j$ when $i<j$, we have $\C(\sigma)_i<\C(\sigma)_j$. There are $n-i$ possible inversions for the $i$-th entry. Each of those $n-i$ inversions is present in either $\sigma$ or $\C(\sigma)$. Thus we have an average of $\frac{n-i}{2}$ over one orbit. \end{proof} \begin{cor}[Statistics 1557, 1556]\label{prop:C_1557_1556} The number of inversions of the second entry of a permutation is $\frac{n-2}{2}$-mesic, and the number of inversions of the third entry of a permutation is $\frac{n-3}{2}$-mesic under the complement, but neither is homomesic for the reverse. \end{cor} \begin{proof} From Theorem \ref{thm:inversion_positions_RC}, we have the desired homomesies for the complement. To see that the number of inversions of the second entry is not homomesic under the reverse, we exhibit an orbit with an average that differs from the global average of $\frac{n-2}{2}$. Consider the orbit $(\sigma, \R(\sigma))$ where $\sigma = 132$ and $\R(\sigma) = 231$. The number of inversions of the second entry of $\sigma$ is 1 and the number of inversions of the second entry of $\R(\sigma)$ is 1, so the average over the orbit is 1, not $\frac{3-2}{2} = \frac{1}{2}$. To see that the number of inversions of the third entry is not homomesic under the reverse, we exhibit an orbit with an average that differs from the global average of $\frac{n-3}{2}$. Consider the orbit $(\sigma, \R(\sigma))$ where $\sigma = 1243$ and $\R(\sigma) = 3421$. The number of inversions of the third entry of $\sigma$ is 1 and the number of inversions of the third entry of $\R(\sigma)$ is 1, so the average over the orbit is 1, not $\frac{4-3}{2} = \frac{1}{2}$. \end{proof} \begin{prop}[Statistics 1114, 1115]\label{prop:C_1114_1115} The number of odd descents of a permutation and the number of even descents of a permutation are both homomesic under the complement, but not the reverse. 
The average number of odd descents over one orbit is $\frac{1}{2}\lceil \frac{n-1}{2}\rceil$, and the average number of even descents is $\frac{1}{2}\lfloor\frac{n-1}{2}\rfloor$. \end{prop} \begin{proof} An odd descent in $\sigma$ is an odd ascent in $\C(\sigma)$ and vice versa. The sum of the numbers of odd descents in $\sigma$ and in $\C(\sigma)$ is the number of possible odd descent positions in a permutation of length $n$, namely $\lceil \frac{n-1}{2}\rceil$, so the average is $\frac{1}{2}\lceil \frac{n-1}{2}\rceil$. To see that it is not homomesic under the reverse, we exhibit an orbit with an average that differs from the global average of $\frac{1}{2}\lceil \frac{n-1}{2}\rceil$. Consider the orbit $(\sigma, \R(\sigma))$ where $\sigma = 132$ and $\R(\sigma) = 231$. Then the number of odd descents of $\sigma$ is 0 and the number of odd descents of $\R(\sigma)$ is 0, so the average over the orbit is $0$, not $\frac{1}{2}\lceil \frac{3-1}{2}\rceil = \frac{1}{2}$. The possible even descent positions form the complement of the possible odd descent positions among all $n-1$ descent positions, so the average number of even descents is $\frac{1}{2}\lfloor\frac{n-1}{2}\rfloor$. As the number of descents is homomesic for the reverse map, but the number of odd descents is not, the number of even descents is not homomesic. \end{proof} \begin{prop}\label{thm:ith_entry_comp} The $i$-th entry of the permutation is $\frac{n+1}{2}$-mesic under the complement. \end{prop} \begin{proof} Since $\sigma_i + \C(\sigma)_i = n+1$, the average of the $i$-th entry is $\frac{n+1}{2}$. \end{proof} \begin{cor}[Statistics 54, 740]\label{prop:C_54_740} The first entry and the last entry of a permutation are $\frac{n+1}{2}$-mesic under the complement, but not the reverse. \end{cor} \begin{proof} From Proposition \ref{thm:ith_entry_comp}, we have the desired homomesies under the complement. To see that it is not homomesic under the reverse, we exhibit an orbit with an average that differs from the global average of $\frac{n+1}{2}$. Consider the orbit $( \sigma, \R(\sigma))$ where $\sigma = 132$ and $\R(\sigma) = 231$. 
The average over the orbit for the first entry (and last entry) is $\frac{3}{2}$, not $\frac{3+1}{2} = 2$. \end{proof} \begin{prop}[Statistic 20] \label{prop:C_20} The rank of a permutation is $\frac{n!+1}{2}$-mesic under the complement, but not the reverse. \end{prop} \begin{proof} From Lemma \ref{lem:rank_and_lehmer_code}, we know that the rank of a permutation can be found by \[ \rank(\sigma) = 1 + \sum_{i=1}^{n-1} L(\sigma)_i (n-i)!. \] Since $\C(\sigma)_i = n+1-\sigma_i$, the definition of the Lehmer code implies that the sum of the $i$-th entries of the Lehmer codes of $\sigma$ and its complement is $L(\sigma)_i+L(\C(\sigma))_i = n-i$. This allows us to find the following. \begin{align*} \rank (\sigma) + \rank (\C(\sigma)) & =2+ \sum_{i=1}^{n-1} L(\sigma)_i (n-i)! + \sum_{i=1}^{n-1} L(\C(\sigma))_i (n-i)! \\ & = 2+ \sum_{i=1}^{n-1}(n-i)! \left ( L(\sigma)_i + L(\C(\sigma))_i\right) \\ & =2+ \sum_{i=1}^{n-1}(n-i)! (n-i). \end{align*} This means that, as seen in Proposition \ref{20_LC}, the average is \[ 1+\frac{1}{2} \sum_{i=1}^{n-1}(n-i)! (n-i) = \frac{n!+1}{2}. \] To see that it is not homomesic under the reverse, we exhibit an orbit with an average that differs from the global average of $\frac{n!+1}{2}$. Consider the orbit $( \sigma, \R(\sigma))$ where $\sigma = 132$ and $\R(\sigma) = 231$. The rank of $\sigma$ is 2 and the rank of $\R(\sigma)$ is 4, so the average over the orbit for the rank is $3$, not $\frac{3!+1}{2} = \frac{7}{2}$. \end{proof} The next proposition examines Babson and Steingr\'imsson's statistic stat, which is defined in Definition \ref{BabsonSteingr\'imsson}. \begin{prop}[Statistic 692]\label{prop:C_692} Babson and Steingr\'imsson's statistic stat of a permutation is $\frac{n(n-1)}{4}$-mesic under the complement, but not the reverse. \end{prop} \begin{proof} In terms of generalized patterns, this statistic is given by the sum of the number of occurrences of each of the patterns $13-2$, $21-3$, $32-1$ and $21$. 
Numbers in the pattern which are not separated by a dash must appear consecutively, as explained in Definition \ref{def:consecutive_patterns}. The patterns $13-2$, $21-3$, $32-1$ and $21$ have complements, respectively, $31-2$, $23-1$, $12-3$ and $12$. The sum of the number of occurrences of each of the patterns $13-2$, $21-3$, $32-1$, $21$, $31-2$, $23-1$, $12-3$ and $12$ is the total number of pairs of adjacent entries, plus the number of triples consisting of a pair of adjacent entries together with a third entry to their right. This is also $\text{stat}(\sigma)+\text{stat}(\C(\sigma))$. Hence, over one orbit, the statistic has average \[ \frac{1}{2}\left(\text{stat}(\sigma)+\text{stat}(\C(\sigma))\right) = \frac{1}{2}\left( (n-1) + \sum_{i=1}^{n-2} (n-1-i) \right) = \frac{n(n-1)}{4}. \] The statistic is hence $\frac{n(n-1)}{4}$-mesic. To see that it is not homomesic under the reverse, we exhibit an orbit with an average that differs from the global average of $\frac{n(n-1)}{4}$. Consider the orbit $(\sigma, \R(\sigma))$ where $\sigma = 2314$ and $\R(\sigma) = 4132$. The statistic on $\sigma$ is 2 and the statistic on $\R(\sigma)$ is 3, so the average over the orbit is $\frac{5}{2}$, not $\frac{4(4-1)}{4} = 3$. \end{proof} \begin{prop}[Statistic 1332]\label{prop:C_1332} The number of steps on the non-negative side of the walk associated with a permutation is $\frac{n-1}{2}$-mesic under the complement, but not the reverse. \end{prop} \begin{proof} Consider the walk taking an up step for each ascent, and a down step for each descent of the permutation. Then this statistic is the number of steps that begin and end at non-negative height. The complement of a permutation flips the path upside down. Since the path takes $n-1$ steps, the statistic is $\frac{n-1}{2}$-mesic. To see that it is not homomesic under the reverse, we exhibit an orbit with an average that differs from the global average of $\frac{n-1}{2}$. 
Consider the orbit $( \sigma, \R(\sigma))$ where $\sigma = 132$ and $\R(\sigma) = 231$. The statistic on $\sigma$ is 2 and the statistic on $\R(\sigma)$ is 2, so the average over the orbit is $2$, not $\frac{3-1}{2} = 1$. \end{proof} \begin{definition} The \textbf{staircase size} of a permutation $\sigma$ is the largest index $k$ for which there exist indices \mbox{$i_k < i_{k-1}< \ldots < i_1$} with $L(\sigma)_{i_j}\geq j$, where $L(\sigma)_i$ is the $i$-th entry of the Lehmer code of $\sigma$ (see Section \ref{sec:lehmer} for a definition of the Lehmer code). \end{definition} \begin{example}\label{ex:staircase_size} In this example, we give the Lehmer code of a permutation and its complement, in which we highlight (in bold) the entries of the Lehmer code at the positions that form the staircases. Notice that the set of highlighted positions in $L(\C(\sigma))$ is the complement (in $[n-1]$) of the set of highlighted positions in $L(\sigma)$. This fact will be used in the proof of the next proposition. \begin{align*} \sigma = 15286347,\qquad L(\sigma) = & (0, \mathbf{3}, 0, \mathbf{4}, \mathbf{2}, 0, 0, 0), \quad \text{staircase size is 3} \\ \C(\sigma) = 84713652,\quad L(\C(\sigma)) = & (\mathbf{7}, 3, \mathbf{5}, 0, 1, \mathbf{2}, \mathbf{1}, 0), \quad \text{staircase size is 4}. \end{align*} \end{example} \begin{prop}[Statistic 662]\label{prop:C_662} The staircase size of the code of a permutation is $\frac{n-1}{2}$-mesic under the complement, but not the reverse. \end{prop} \begin{proof} Following Lemma \ref{inversion_pairs}, we know that $(i,j)$ is an inversion of $\sigma$ exactly when it is a noninversion of $\C(\sigma)$. Therefore, $L(\C(\sigma))_i = n-i-L(\sigma)_i$. 
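The identity $L(\C(\sigma))_i = n-i-L(\sigma)_i$, and the staircase-size homomesy it leads to, can be sanity-checked by brute force. A short Python sketch (the helper names, and the greedy right-to-left scan for the staircase size, are our implementation choices):

```python
from itertools import permutations

def lehmer(p):
    """Lehmer code: L(p)_i = #{j > i : p_j < p_i}."""
    n = len(p)
    return tuple(sum(p[j] < p[i] for j in range(i + 1, n)) for i in range(n))

def complement(p):
    """C(σ)_i = n + 1 - σ_i."""
    n = len(p)
    return tuple(n + 1 - v for v in p)

def staircase_size(code):
    """Largest k admitting indices i_k < ... < i_1 with code_{i_j} >= j,
    found by scanning right to left and growing the staircase greedily."""
    k = 0
    for v in reversed(code):
        if v >= k + 1:
            k += 1
    return k

for n in (2, 3, 4, 5):
    for p in permutations(range(1, n + 1)):
        L, Lc = lehmer(p), lehmer(complement(p))
        # L(C(σ))_i = n - i - L(σ)_i (positions 1-indexed):
        assert all(Lc[i] == n - (i + 1) - L[i] for i in range(n))
        # staircase sizes over a complement orbit sum to n - 1:
        assert staircase_size(L) + staircase_size(Lc) == n - 1
```

On the permutation of Example \ref{ex:staircase_size}, `lehmer` and `staircase_size` reproduce the codes and sizes $3$ and $4$ shown there.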
To show homomesy, it is enough to prove the following claim: \underline{Claim}: There exists a subset $I = \{i_k<\ldots<i_1\}\subseteq [n-1]$ such that $L(\sigma)_{i_j} \geq j$ for all $j \in [k]$, and the numbers $[n-1]-I$ form a sequence $l_{n-1-k} < \ldots < l_1$ with $L(\C(\sigma))_{l_j} \geq j$ for all $j\in [n-1-k]$. In other terms, we partition the numbers $[n-1]$ into two sets that form the ``staircases'' of $\sigma$ and $\C(\sigma)$. An example of a partition into two staircases appears in Example \ref{ex:staircase_size}. We prove the claim by induction on $n$, the size of the permutations. The base case is when $n=2$. There is only one orbit, formed of $12$, with Lehmer code $(0,0)$, and $21$, which has Lehmer code $(\mathbf{1},0)$. The former has staircase size $0$, whereas the latter's staircase is $\{1\}$. For the induction step, we let $\sigma$ be a permutation of $[n+1]$ with Lehmer code $L(\sigma)$, and we define $\sigma'$ to be the unique permutation of $n$ elements with Lehmer code $(L(\sigma)_2, L(\sigma)_3, \ldots, L(\sigma)_{n+1})$. We know that $\C(\sigma')$ has Lehmer code $(n-1-L(\sigma)_2, n-2-L(\sigma)_3, \ldots, n-n-L(\sigma)_{n+1})$, which corresponds to the whole Lehmer code of $\C(\sigma)$ except the first entry. By the induction hypothesis, there is a set $I \subseteq [n-1]$ of size $|I| = k$ that corresponds to the staircase of $\sigma'$, such that $[n-1]-I$ corresponds to the staircase of $\C(\sigma')$. Denote $I\oplus 1 = \{i+1 \mid i \in I\}$. For $\sigma$, there are two cases:\begin{itemize} \sloppy \item If $L(\sigma)_1 \geq k+1$, then the staircase of $\sigma$ is $(I\oplus 1) \cup \{1\}$, and has size $k+1$. Then \mbox{$L(\C(\sigma))_1 = (n+1) - 1 - L(\sigma)_1 \leq n-k-1$}, and $\C(\sigma)$ has staircase $([n-1]-I)\oplus 1$, of size $n-1-k$. The union of the staircases of $\sigma$ and $\C(\sigma)$ is $[n]$. 
\item If $L(\sigma)_1 \leq k$, then the staircase of $\sigma$ is $I\oplus 1$, and has size $k$, but $L(\C(\sigma))_1 = (n+1) - 1 - L(\sigma)_1 \geq n-k$, so the staircase of $\C(\sigma)$ is $\{1\}\cup \left(([n-1]-I)\oplus 1\right)$, of size $n-k$, and the staircases' union over the orbit is again $[n]$. \end{itemize} This concludes the proof of the claim, which means that the staircase size of a permutation is $\frac{n-1}{2}$-mesic under the complement. To see that it is not homomesic under the reverse, we exhibit an orbit with an average that differs from the global average of $\frac{n-1}{2}$. Consider the orbit $( \sigma, \R(\sigma))$ where $\sigma = 21453$ and $\R(\sigma) = 35412$. The statistic on $\sigma$ is 1 and the statistic on $\R(\sigma)$ is 2, so the average over the orbit is $\frac{3}{2}$, not $\frac{5-1}{2} = 2$. \end{proof} This concludes the proofs of Theorem~\ref{onlycomp}, showing the statistics listed there are homomesic for the complement map but not for the reverse map. \subsection{Statistics homomesic for the reverse but not the complement} \label{sec:rev} In this subsection, we prove the statistics listed in Theorem \ref{onlyrev} are homomesic under the reverse map and provide examples illustrating that they are not homomesic under the complement map. Note that it is enough to provide an example of an orbit whose average under the statistic does not match that of the global average, following Remark \ref{global_avg}. We begin with the inversion index, which is defined based on inversion pairs (see Definition \ref{def:basic_stats} for the definition of an inversion pair). \begin{definition} The \textbf{inversion index} of a permutation $\sigma$ is given by summing all $\sigma_i$ where $(\sigma_i, \sigma_j)$ is an inversion pair for $\sigma$. \end{definition} \begin{prop}[Statistic 616]\label{prop:R_616} The inversion index is $\binom{n+1}{3}$-mesic for the reverse, but not homomesic for the complement. 
\end{prop} \begin{proof} Since any pair $(\sigma_i, \sigma_j)$ with $\sigma_i > \sigma_j$ is either an inversion pair for $\sigma$ or $\R(\sigma)$, the inversion index of $\sigma$ added to the inversion index of $\R(\sigma)$ is $n(n - 1) + (n-1)(n - 2) + \ldots + 2(1)$, and the average over the orbit is the sum divided by 2, which is $\binom{n+1}{3}$. To see that it is not homomesic under the complement, we exhibit an orbit with an average that differs from the global average of $\binom{n+1}{3}$. Consider the orbit $( \sigma, \C(\sigma))$ where $\sigma = 132$ and $\C(\sigma) = 312$. The inversion index for $\sigma$ is 3 and the inversion index for $\C(\sigma)$ is 6, so the average over the orbit is $\frac{9}{2}$, not $\binom{3+1}{3} = 4$.\end{proof} The disorder of a permutation is defined by Emeric~Deutsch in the comments of the OEIS page for sequence A008302~\cite{OEIS}. \begin{definition} Given a permutation $\sigma = \sigma_1\sigma_2 \ldots \sigma_n$, cyclically pass through the permutation left to right and remove the numbers $1, 2, \ldots, n$ in order. The \textbf{disorder} of the permutation is then defined by counting the number of times a position is not selected and summing that over all the positions. \end{definition} \begin{example} Let $\sigma = 12543$. In the first pass, $54$ remains. In the second pass only $5$ remains. In the third pass, nothing remains. Thus the disorder of $\sigma$ is 3. \end{example} \begin{prop}[Statistic 446]\label{prop:R_446} The disorder of a permutation is $\frac{n(n-1)}{4}$-mesic for the reverse, but not homomesic for the complement. \end{prop} \begin{proof} Each pass through the permutation ends when encountering an inversion pair of the form $(i + 1, i)$ (meaning $i + 1$ is to the left of $i$ in $\sigma = \sigma_1\sigma_2 \ldots \sigma_n$). To count the disorder, we break the sum into parts based on those inversion pairs. 
Any inversion pair $(i + 1, i)$ contributes $n - i$ to the disorder because when $i$ is removed, $i + 1, i + 2, \ldots, n$ remain. As we are summing the disorder over both $\sigma$ and $\R(\sigma)$, we encounter every possible inversion pair of the form $(i + 1, i)$ exactly once. Thus the disorder over $\sigma$ and $\R(\sigma)$ can be found by the sum $\sum_{i = 1}^{n-1} n - i = 1 + 2 + \ldots + (n-1)$, and the average over the orbit is $\frac{n(n-1)}{4}$. To see that it is not homomesic under the complement, we exhibit an orbit with an average that differs from the global average of $\frac{n(n-1)}{4}$. Consider the orbit $( \sigma, \C(\sigma))$ where $\sigma = 132$ and $\C(\sigma) = 312$. The disorder of $\sigma$ is 1 and the disorder of $\C(\sigma)$ is 1, so the average over the orbit is $1$, not $\frac{3(3-1)}{4} = \frac{3}{2}$.\end{proof} The makl of a permutation was first defined in \cite{ClarkeSteingrimssonZeng} as the sum of the descent bottoms of the permutation with the left embracing sum of the permutation. In this paper, we use the alternative definition in \cite{BabsonSteingrimsson} that defines the makl of the permutation in terms of summing the occurrence of certain patterns. \begin{definition}\cite{BabsonSteingrimsson} The \textbf{makl} of a permutation is the sum of the number of occurrences of the patterns $1-32, 31-2, 32-1, 21,$ where letters without a dash appear side by side in the pattern, as explained in Definition \ref{def:consecutive_patterns}. \end{definition} \begin{example} Let $\sigma = 12543$. The pattern $1-32$ appears 4 times, the pattern $31-2$ appears 0 times, the pattern $32-1$ appears 1 time, and the pattern $21$ appears 2 times. Thus the makl of $\sigma$ is 7. \end{example} \begin{prop}[Statistic 798]\label{prop:R_798} The makl of a permutation is $\frac{n(n-1)}{4}$-mesic for the reverse, but not homomesic for the complement. 
\end{prop} \begin{proof} Summing the number of occurrences of the patterns $1-32, 31-2, 32-1, 21$ in the reverse permutation is the same as summing the number of occurrences of the patterns $23-1, 2-13, 1-23, 12$ in the original permutation. The total number of occurrences of all eight patterns is equal to $1 + 2 + \ldots + (n-1) = \frac{n(n-1)}{2}$, as each pair $(\sigma_i, \sigma_j)$ in the permutation falls under exactly one of these patterns, as explained below. Let $\sigma = \sigma_1 \sigma_2 \ldots \sigma_n$ be a permutation of $[n]$ and let $i, j$ be such that $1\leq i < j \leq n$. \begin{enumerate} \item If $j = i + 1$, then $(\sigma_i, \sigma_j)$ either has the pattern $12$ or $21$. \item If $j > i + 1$ and $\sigma_i < \sigma_j$, then either \begin{itemize} \item $\sigma_i < \sigma_{j-1} <\sigma_j$ and $(\sigma_i, \sigma_j)$ has pattern $1-23.$ \item $\sigma_{j-1} < \sigma_i < \sigma_j$ and $(\sigma_i, \sigma_j)$ has pattern $2-13.$ \item $\sigma_i < \sigma_j < \sigma_{j-1}$ and $(\sigma_i, \sigma_j)$ has pattern $1-32.$ \end{itemize} \item If $j > i + 1$ and $\sigma_j < \sigma_i$, then either \begin{itemize} \item $\sigma_i > \sigma_{i + 1} > \sigma_j$ and $(\sigma_i, \sigma_j)$ has pattern $32-1.$ \item $\sigma_i > \sigma_j > \sigma_{i+1}$ and $(\sigma_i, \sigma_j)$ has pattern $31-2.$ \item $\sigma_{i+1} > \sigma_i > \sigma_j$ and $(\sigma_i, \sigma_j)$ has pattern $23-1.$ \end{itemize} \end{enumerate} Hence the makl of $\sigma$ plus the makl of $\R(\sigma)$ equals $\frac{n(n-1)}{2}$, so the average over the orbit is $\frac{n(n-1)}{4}$. To see that it is not homomesic under the complement, we exhibit an orbit with an average that differs from the global average of $\frac{n(n-1)}{4}$. Consider the orbit $(\sigma, \C(\sigma))$ where $\sigma = 132$ and $\C(\sigma) = 312$. The makl of $\sigma$ is 2 and the makl of $\C(\sigma)$ is 2, so the average over the orbit is $2$, not $\frac{3(3-1)}{4} = \frac{3}{2}$.
\end{proof} For the next proposition, we examine the inverse major index of a permutation $\sigma$, which is the major index of $\sigma^{-1}$ (see Definition \ref{def:basic_stats} for the definition of major index). \begin{prop}[Statistic 305]\label{prop:R_305} The inverse major index is $\frac{n(n-1)}{4}$-mesic for the reverse, but not homomesic for the complement. \end{prop} \begin{proof} As $\R(\sigma)^{-1} = \C(\sigma^{-1})$ (see Lemma \ref{lem:C&R_relation}), the average over an orbit is $\frac{1}{2}(\maj(\sigma^{-1})+\maj(\R(\sigma)^{-1})) = \frac{1}{2}(\maj(\sigma^{-1})+\maj(\C(\sigma^{-1}))) = \frac{n(n-1)}{4}$, where the last equality holds since the major index is homomesic for the complement (proven in Proposition \ref{prop:C_4}). To see that it is not homomesic under the complement, we exhibit an orbit with an average that differs from the global average of $\frac{n(n-1)}{4}$. Consider the orbit $(\sigma, \C(\sigma))$ where $\sigma = 132$ and $\C(\sigma) = 312$. The inverse major index of $\sigma$ is 2 and the inverse major index of $\C(\sigma)$ is 2, so the average over the orbit is $2$, not $\frac{3(3-1)}{4} = \frac{3}{2}$.\end{proof} Lastly, we examine the load of a permutation, which is defined in \cite{LascouxSchutzenberger} for finite words in a totally ordered alphabet; for a permutation $\sigma$ it reduces to the major index of $\R(\sigma^{-1})$. \begin{prop}[Statistic 304]\label{prop:R_304} The load of a permutation is $\frac{n(n-1)}{4}$-mesic under the reverse, but not homomesic for the complement.
As we know the inverse major index is homomesic for the reverse map, this will prove that the load of a permutation is also homomesic for the reverse map and attains the same average over each orbit as is attained by the inverse major index, which is $\frac{n(n-1)}{4}$. Note that $(\mathcal{I} \circ \R \circ \mathcal{I})(\beta^{-1}) = (\R(\beta))^{-1}$ and $(\R \circ \mathcal{I} \circ \R \circ \mathcal{I} \circ \R)(\R(\sigma^{-1})) = \R(\R(\sigma)^{-1})$. Using Lemma \ref{lem:C&R_relation}, this shows that if $\beta^{-1} = \R(\sigma^{-1})$, then $(\R(\beta))^{-1} = \R(\R(\sigma)^{-1})$. So summing the major index of $\R(\sigma^{-1})$ and the major index of $\R(\R(\sigma)^{-1})$ is equal to summing the major index of $\beta^{-1}$ and the major index of $(\R(\beta))^{-1}$, which is the same as summing the inverse major index of $\beta$ and $\R(\beta)$. To see that it is not homomesic under the complement, we exhibit an orbit with an average that differs from the global average of $\frac{n(n-1)}{4}$. Consider the orbit $(\sigma, \C(\sigma))$ where $\sigma = 132$ and $\C(\sigma) = 312$. The load of $\sigma$ is 2 and the load of $\C(\sigma)$ is 2, so the average over the orbit is $2$, not $\frac{3(3-1)}{4} = \frac{3}{2}$. \end{proof} This concludes the proof of Theorem \ref{onlyrev}, showing the statistics listed there are homomesic for the reverse map but not for the complement map. Thus we have proven Theorems \ref{thmboth}, \ref{onlycomp}, \ref{onlyrev}, illustrating homomesy for the reverse map with 27 of the statistics found in FindStat and for the complement map with 35. \section{Foata bijection and variations} \label{sec:foata} This section examines homomesies of the following statistic under the bijection of Foata \cite{Foata} (also appearing in Foata and Sch\"utzenberger~\cite{FoataSchutzenberger1978}) and related maps. Recall the inversion number and major index from Definition~\ref{def:basic_stats}. 
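As a concrete companion to Definition~\ref{def:basic_stats}, both statistics (and their difference, Statistic 1377) can be computed directly from one-line notation. The Python sketch below uses our own helper names; it is an illustration, not part of the formal development.

```python
def maj(p):
    # Major index: sum of the positions i (1-indexed) where p has a descent.
    return sum(i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def inv(p):
    # Inversion number: number of pairs i < j with p[i] > p[j].
    return sum(1 for i in range(len(p))
                 for j in range(i + 1, len(p)) if p[i] > p[j])

def maj_minus_inv(p):
    # Statistic 1377: maj - inv.
    return maj(p) - inv(p)

sigma = [3, 1, 5, 4, 2]
print(maj(sigma), inv(sigma), maj_minus_inv(sigma))  # 8 5 3
```

For $\sigma = 31542$ the descents are at positions $1$, $3$, $4$, giving $\maj(\sigma) = 8$, while the five inversion pairs give $\inv(\sigma) = 5$.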
\begin{definition}(Statistic 1377) $\maj-\inv$ denotes the statistic equal to the difference of the major index and the inversion number: $(\maj-\inv)(\sigma)=\maj(\sigma)-\inv(\sigma)$. \end{definition} We turn our attention to defining the maps under which this statistic is homomesic, starting with the Foata bijection. \begin{definition} The \textbf{Foata bijection} $\F$ (Map 67) is defined recursively on $n$: Given a permutation $\sigma=\sigma_1 \sigma_2 \ldots \sigma_n$, compute the image inductively by starting with $\F(\sigma_1) = \sigma_1$. At the $i$-th step, if $\F(\sigma_1 \sigma_2 \ldots \sigma_i) = \tau_1 \tau_2 \ldots \tau_i$, define $\F(\sigma_1 \sigma_2 \ldots \sigma_i \sigma_{i+1})$ by placing $\sigma_{i+1}$ at the end of $\tau_1 \tau_2 \ldots \tau_i$ and breaking into blocks as follows: \begin{itemize} \item Place a vertical line to the left of $\tau_1$. \item If $\sigma_{i+1} \geq \tau_i$, place a vertical line to the right of each $\tau_k$ for which $\sigma_{i+1} > \tau_k$. \item If $\sigma_{i+1} < \tau_i$, place a vertical line to the right of each $\tau_k$ for which $\sigma_{i+1} < \tau_k$. \end{itemize} Now, within each block between vertical lines, cyclically shift the entries one place to the right. \end{definition} \begin{example} To compute $\F(31542)$, the sequence of words is: \begin{align*} 3 & \to 3 \\ |3|1 & \to 31 \\ |3|1|5 & \to 315 \\ |315|4 & \to 5314 \\ |5|3|14|2 & \to 53412. \end{align*} In total, this gives $\F(31542) = 53412$. \end{example} The Foata bijection also behaves nicely with respect to major index and inversions; see the reference \cite{Foata} for the proof. \begin{lem}[\protect{\cite[Theorem 4.3]{Foata}}] \label{lem:Foata} The Foata bijection sends the major index to the number of inversions. That is, $\maj(\sigma)=\inv(\F(\sigma))$. \end{lem} Other maps relevant to Theorem~\ref{thm:foata} include the Lehmer code to major code bijection and its inverse. The major code is defined below. 
See Definition~\ref{def:lehmercode} for the definition of the Lehmer code. \begin{definition}\label{def:foata} Given $\sigma \in S_n$, the \textbf{major code} is the sequence $M(\sigma)=(M(\sigma)_1, M(\sigma)_2, \ldots, M(\sigma)_n)$ where $M(\sigma)_i$ is defined as follows. Let $\operatorname{del}_i(\sigma)$ be the permutation obtained by removing all $\sigma_j < i$ from $\sigma$ and then normalizing, with the convention that $\operatorname{del}_{n+1}(\sigma)$ is the empty permutation. $M(\sigma)_i$ is given by $M(\sigma)_i = \operatorname{maj}(\operatorname{del}_i(\sigma)) - \operatorname{maj}(\operatorname{del}_{i+1}(\sigma))$. \end{definition} \begin{example} For the permutation $31542$, $\operatorname{del}_1(31542)=31542$. To obtain $\operatorname{del}_2(31542)$, we first remove $1$, obtaining $3542$. Then we normalize so that the values are in the interval $[1,4]$, obtaining $\operatorname{del}_2(31542)=2431$. Similarly, $\operatorname{del}_3(31542)=132$, $\operatorname{del}_4(31542)=21$, $\operatorname{del}_5(31542)=1$. Thus, $31542$ has major code $(3,3,1,1,0)$, since: \[\maj(31542)=8, \quad \maj(2431)=5, \quad \maj(132)=2, \quad \maj(21)=1, \quad \maj(1) = 0.\] \end{example} One can recover the permutation from the major code by inverting the process in Definition \ref{def:foata}. The sum of the major code of $\sigma$ equals the major index of $\sigma$, that is, $\sum_i M(\sigma)_i=\maj(\sigma)$. Analogously, the sum of the Lehmer code of $\sigma$ equals the inversion number of $\sigma$, that is, $\sum_i L(\sigma)_i=\inv(\sigma)$ (see Proposition~\ref{prop:LC_num_inv_is_sum}). \begin{definition} The \textbf{Lehmer-code-to-major-code} map $\M$ (Map 62) sends a permutation to the unique permutation such that the Lehmer code is sent to the major code. The \textbf{major-code-to-Lehmer-code} map (Map 73) is its inverse. \end{definition} The following lemma is clear from construction. \begin{lem} \label{lem:invtomaj} The Lehmer-code-to-major-code map sends the number of inversions to the major index.
That is, $\inv(\sigma)=\maj(\M(\sigma))$. \end{lem} \begin{thm} \label{thm:foata} The statistic $\maj-\inv$ \textnormal{(Stat 1377)} is 0-mesic with respect to each of the following maps: \begin{itemize} \rm \item \textnormal{Map} $62$: The Lehmer-code-to-major-code bijection, \item \textnormal{Map} $73$: The major-code-to-Lehmer-code bijection, \item \textnormal{Map} $67$: The Foata bijection, and \item \textnormal{Map} $175$: The inverse Foata bijection. \end{itemize} \end{thm} \begin{proof} By Lemma~\ref{lem:invtomaj}, $\inv(\sigma)=\maj(\M(\sigma))$, that is, the Lehmer-code-to-major-code bijection $\M$ sends inversion number to major index. Therefore, the sum over an orbit of $\M$ is: \[\sum_{\sigma} (\maj-\inv)(\sigma)=\sum_{\sigma} \maj(\sigma)- \sum_{\sigma}\inv(\sigma) = \sum_{\sigma} \maj(\sigma)- \sum_{\sigma}\maj(\M(\sigma)) = \sum_{\sigma} \maj(\sigma)- \sum_{\sigma}\maj(\sigma)=0.\] By Lemma~\ref{lem:Foata}, $\maj(\sigma)=\inv(\F(\sigma))$, that is, the Foata bijection $\F$ sends major index to inversion number. Therefore, the sum over an orbit of $\F$ is: \[\sum_{\sigma} (\maj-\inv)(\sigma)=\sum_{\sigma} \maj(\sigma)- \sum_{\sigma}\inv(\sigma) = \sum_{\sigma} \inv(\F(\sigma))- \sum_{\sigma}\inv(\sigma) = \sum_{\sigma} \inv(\sigma)- \sum_{\sigma}\inv(\sigma)=0.\] Thus, $\maj-\inv$ is $0$-mesic with respect to $\M$ (Map 62) and $\F$ (Map 67), and their inverses (Map 73 and Map 175). \end{proof} \section{Kreweras and inverse Kreweras complements.} \label{sec:krew} The Kreweras complement was introduced in 1972 as a bijection on noncrossing partitions \cite{kreweras1972partitions}. In this section, we first describe the Kreweras complement map. We then state Theorem~\ref{Thm:Kreweras} which lists the statistics we show are homomesic, three from the FindStat database and one not in FindStat at the time of the investigation. Before proving this theorem we describe in detail the orbit structure of the Kreweras complement in Subsection~\ref{sec:KrewOrb}. 
The homomesies are then proved in Subsection~\ref{sec:KrewHom}. Given a noncrossing partition on $n$ elements, the Kreweras complement $\K$ can be understood geometrically as rotating an associated noncrossing matching on $2n$ elements and finding the resulting noncrossing partition~\cite{gobet2016noncrossing}. The action of the Kreweras complement may be extended to all permutations as follows. \begin{definition}\label{def:krew} Let $\sigma$ be a permutation of $n$ elements. The \textbf{Kreweras complement} of $\sigma$ and its inverse (Maps 88 and 89 in the FindStat database) are defined as $$\K(\sigma)=c\circ\sigma^{-1}\qquad\mbox{ and }\qquad \K^{-1}(\sigma)=\sigma^{-1}\circ c,$$ where $c$ is the long cycle with one-line notation $234\ldots n1$, that is, the permutation sending $i$ to $i+1$ modulo $n$. \end{definition} Here composition is understood from right to left. Note that in the literature, the definitions of $\K$ and $\K^{-1}$ are often swapped as compared to the above. But since all our results describe orbit sizes and homomesy, which are both invariant under taking the inverse map (see Lemma~\ref{lem:inverse}), this convention choice is immaterial. We chose the above convention to match the code for these maps in FindStat. \begin{example} Consider $\sigma=43152.$ By definition, $\K(43152)=23451\circ 35214=41325$. \end{example} In \cite{EinsteinFGJMPR16}, the number of disjoint sets in a noncrossing partition of $n$ elements is shown to be $\frac{n+1}{2}$-mesic under a large class of operations which can be realized as compositions of toggles, including the Kreweras complement. In this section, we study the generalized action of the Kreweras complement on permutations, proving the following homomesy results.
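As an informal check on Definition~\ref{def:krew} (function names below are ours), the map is easy to compute: invert $\sigma$ and add one to each value modulo $n$. The sketch reproduces the example $\K(43152)=41325$ and lets one confirm the order of $\K$ on small symmetric groups by brute force.

```python
from itertools import permutations

def kreweras(p):
    # K(sigma) = c o sigma^{-1}, where c sends each value v to v+1 mod n.
    n = len(p)
    inverse = [0] * n
    for position, value in enumerate(p, start=1):
        inverse[value - 1] = position      # one-line notation of sigma^{-1}
    return tuple(v % n + 1 for v in inverse)

print(kreweras((4, 3, 1, 5, 2)))  # (4, 1, 3, 2, 5), i.e. K(43152) = 41325

def order_of_K(n):
    # Smallest j >= 1 such that K^j is the identity on S_n.
    perms = list(permutations(range(1, n + 1)))
    images, j = perms, 0
    while True:
        images = [kreweras(q) for q in images]
        j += 1
        if images == perms:
            return j

print(order_of_K(4))  # 8, i.e. 2n
```

The brute-force order computation agrees with the order statement proved later in this subsection.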
\begin{thm} \label{Thm:Kreweras} The Kreweras complement and its inverse exhibit homomesy for the following statistics: \begin{itemize} \item \hyperref[KHom exc]{$\Stat$ $155$}: The number of exceedances of a permutation $(${\small average: $\frac{n-1}{2}$}$)$ \item \hyperref[KHom exc]{$\Stat$ $702$}: The number of weak deficiencies of a permutation $(${\small average: $\frac{n+1}{2}$}$)$ \item \hyperref[Khom lastentry]{$\Stat$ $740$}: The last entry of the permutation $(${\small average: $\frac{n+1}{2}$}$)$ \item \hyperref[Khom lastentry]{$\frac{n}{2}$-th element}: When $n$ is even, the $\frac{n}{2}$-th element of the permutation $(${\small average: $\frac{n+1}{2}$}$)$ \end{itemize} \end{thm} As we prove in Corollary \ref{KC: no entry but}, the last two homomesies are the only $i$-th entry homomesies possible for the Kreweras complement. \subsection{Kreweras complement orbit structure} \label{sec:KrewOrb} In this subsection, we examine the action of the Kreweras complement and its orbit structure, finding the order of the map in Theorem~\ref{thm:K_order}, and completely characterizing the distribution of orbits in Theorems~\ref{thm:orbit_count_odd_sizes} and~\ref{Prop:K even orbs}. We give explicit generators for orbits of certain sizes in Theorem~\ref{thm:Orbit_generators_K}. The following lemma will be used to prove several results in this section. \begin{lem}\label{ithentry} Let $\sigma = \sigma_1\sigma_2 \ldots \sigma_n$. Then for all integer values of $j$, the $i$-th entry of $\K^j(\sigma)$ is given by: \[\K^{j}(\sigma)_i =\begin{cases} \sigma_{i - \frac{j}{2}} + \frac{j}{2} \pmod{n} & \textnormal{if } j \textnormal{ is an even integer}, \\ \sigma^{-1}_{i - \frac{j-1}{2}} + \frac{j + 1}{2} \pmod{n} & \textnormal{if } j \textnormal{ is an odd integer}. \end{cases}\] Note that the operation in the subscripts is also modulo $n$, and in both cases $n$ is used as the representative of the equivalence class of $0$.
\end{lem} \begin{proof} From the definition of $\K$ and $\K^{-1}$, \begin{equation}\label{Eqn:k power m} \K^j(\sigma)= \begin{cases} c^{\frac{j}{2}}\circ\sigma \circ c^{-\frac{j}{2}} & \mbox{if $j$ is an even integer,} \\ c^{\frac{j+1}{2}}\circ\sigma^{-1}\circ c^{-\left(\frac{j-1}{2}\right)} & \mbox{if $j$ is an odd integer.} \end{cases} \end{equation} Thus, if $j$ is even, $\K^j(\sigma)$ is found by rotating $\sigma$ cyclically $\frac{j}{2}$ units and adding $\frac{j}{2}$ to each entry modulo $n$, while if $j$ is odd, $\K^j(\sigma)$ is found by rotating $\sigma^{-1}$ cyclically $\frac{j-1}{2}$ units and adding $\frac{j+1}{2}$ to each entry modulo $n$. \end{proof} \begin{thm}[Order] \label{thm:K_order} For all $n>2,$ $\K$ and $\K^{-1}$ have order $2n$ as elements of $S_{S_n}.$ \end{thm} \begin{proof} Let $e$ be the identity permutation $123\ldots n$ in $S_n$. By equation (\ref{Eqn:k power m}), when $j$ is odd, $\K^j(e)=c\neq e$. Thus, as an element of $S_{S_n},$ the order of $\K$ must be even. Also by equation (\ref{Eqn:k power m}), when $j$ is even, $\K^j$ is an element of $\mbox{Inn}(S_n),$ the group of inner automorphisms of $S_n$ (automorphisms defined by conjugation). Since for all $n>2$, $\mbox{Inn}(S_n)\cong S_n$, it follows that $\K^j$ acts as the identity automorphism only when $c^{\frac{j}{2}}=e,$ i.e.\ when $j$ is a multiple of $2n$. Thus the order of $\K$, and equivalently $\K^{-1},$ is $2n.$ \end{proof} It follows by the orbit-stabilizer theorem that the orbit sizes under the action of $\K$ must be divisors of the order of $\K.$ The distribution of orbit sizes for $n\le 10$ is listed below. The exponents denote the number of orbits of a given size.
\begin{itemize} \item $n=2:$ $[2]$ \item $n=3:$ $[1,2,3]$ \item $n=4:$ $[2^{(2)},4,8^{(2)}]$ \item $n=5: [1, 2^{(2)},5^{(5)},10^{(9)}]$ \item $n=6: [2^{(3)}, 4^{(3)}, 6^{(7)}, 12^{(55)}]$ \item $n=7: [1, 2^{(3)}, 7^{(33)}, 14^{(343)}]$ \item $n=8: [2^{(4)}, 4^{(6)}, 8^{(44)}, 16^{(2496)}]$ \item $n=9: [1, 2^{(4)}, 3^{(3)}, 6^{(24)}, 9^{(290)}, 18^{(20006)}]$ \item $n=10: [2^{(5)}, 4^{(10)}, 10^{(383)}, 20^{(181246)}]$ \end{itemize} Since the orbit of a given permutation under the action of $\K$ is the same as that under the action of $\K^{-1}$, just generated in the opposite order, these results also hold for $\K^{-1}$. The patterns observed in these results are captured in Theorems~\ref{thm:orbit_count_odd_sizes} and~\ref{Prop:K even orbs}. We start by examining elements of $S_n$ belonging to orbits of odd size, and thank Joel Brewster Lewis for contributing the proofs of Proposition~\ref{prop:orbits_of_odd_length}, Proposition~\ref{prop:orbits_of_size_d_odd}, and Theorem~\ref{thm:orbit_count_odd_sizes}. \begin{prop}\label{prop:orbits_of_odd_length} A permutation $\sigma \in S_n$ with $n$ odd belongs to an orbit of odd size under the Kreweras complement if and only if $\sigma = c^{\frac{n + 1}{2}} \tau$ for some involution $\tau$ in $S_n$. Hence, there is a bijection between permutations of $S_n$ that are involutions and permutations that belong to an orbit of odd size. \end{prop} \begin{proof} Suppose $n$ is odd. Since the order of $\K$ is $2n$, an element $\sigma$ belongs to an orbit of odd size under the action of $\K$ if and only if $\K^n(\sigma) = \sigma$. For any $\sigma$, \[ \K^n(\sigma) = \K\left(\K^{2 \cdot \frac{n - 1}{2}}(\sigma)\right) = \K(c^{\frac{n - 1}{2}} \sigma c^{-\frac{n - 1}{2}}) = c^{\frac{n +1}{2}} \sigma^{-1} c^{-\frac{n - 1}{2}}. \] Therefore, $\sigma = \K^n(\sigma)$ if and only if \[ \sigma = c^{\frac{n + 1}{2}} \sigma^{-1} c^{-\frac{n - 1}{2}}.
\] Multiplying through by $c^{\frac{n - 1}{2}} = c^{-\frac{n + 1}{2}}$ on the left, we have that $ \sigma = \K^n(\sigma)$ if and only if \[ c^{\frac{n - 1}{2}} \sigma = \sigma^{-1} c^{-\frac{n - 1}{2}}. \] Setting $\tau := c^{\frac{n - 1}{2}} \sigma$, we have $\sigma = \K^n(\sigma)$ if and only if $\tau = \tau^{-1}$, in other words exactly when $\tau$ is an involution. Multiplying through by $c^{-\frac{n - 1}{2}} = c^{\frac{n + 1}{2}}$ on the left gives the result. \end{proof} It is possible to refine the last result, by describing the permutations in orbits of size $d$ for any divisor $d$ of $n$. \begin{prop}\label{prop:orbits_of_size_d_odd} Let $n$ be odd and let $d$ be a divisor of $n$. The number of permutations in $S_n$ that belong to an orbit of size dividing $d$ under the Kreweras complement is \[ \sum_{\pi} \left(\frac{n}{d}\right)^{\frac{d - \#\operatorname{Fix}(\pi)}{2}}, \] where the sum runs over the involutions $\pi$ of $S_d$ and $\operatorname{Fix}(\pi)$ is the set of fixed points of $\pi$: $\{x \in [d] \mid \pi(x) = x\}$. \end{prop} \begin{proof} Let $n$ be odd, $d$ be a divisor of $n$ and $\sigma \in S_n$ be in an orbit of odd size that divides $d$ for $\K$. Hence, $\K^n(\sigma) = \K^d(\sigma) = \sigma$. Therefore, we know from Proposition \ref{prop:orbits_of_odd_length} that there exists an involution $\tau \in S_n$ for which $\sigma = c^{\frac{n+1}{2}} \tau$. Repeating the same trick as before, we have \begin{equation}\label{eq:K_orbit_q} \sigma = \K^d(\sigma) = c^{\frac{d+1}{2}} \sigma^{-1} c^{-\frac{d-1}{2}}. \end{equation} Using that $\sigma = c^{\frac{n+1}{2}} \tau$ for some involution $\tau$, we rewrite Equation \eqref{eq:K_orbit_q} as \[ \sigma = c^{\frac{n+1}{2}} \tau = c^{\frac{d+1}{2}} \tau c^{-\frac{n+1}{2}} c^{-\frac{d-1}{2}} = c^{\frac{d+1}{2}} \tau c^{-\frac{n+d}{2}} = c^{\frac{d+1}{2}} \tau c^{\frac{n-d}{2}} \] (where in the last step we use $c^n = e$). Multiplying $c^{\frac{n+1}{2}} \tau =c^{\frac{d+1}{2}} \tau c^{\frac{n-d}{2}}$ through by $c^{-\frac{d+1}{2}}$ on the left, this becomes \[ c^{\frac{n-d}{2}} \tau = \tau c^{\frac{n-d}{2}}, \] so $\tau$ commutes with $c' := c^{\frac{n-d}{2}}$, and it remains to show that the number of involutions commuting with $c'$ is $\sum_{\pi} \left(\frac{n}{d}\right)^{\frac{d - \#\operatorname{Fix}(\pi)}{2}}$.\\ Since $\gcd(n, \frac{n-d}{2}) = \gcd(n, n - d) = d$ and $c$ is an $n$-cycle, the element $c'$ has $d$ cycles, each of order $\frac{n}{d}$.
Suppose that the involution $\tau$ commutes with $c'$, and let $a \in [n]$. We consider an exhaustive list of cases for $\tau(a)$: $\tau(a) = a$; $\tau(a) \neq a$ and $\tau(a)$ belongs to the same cycle in $c'$ as $a$; or $\tau(a)$ belongs to a different cycle of $c'$ than $a$ does. In the first case, we have $\tau(c'(a)) = c'(\tau(a)) = c'(a)$ and so $\tau$ also fixes $c'(a)$; and thus $\tau$ fixes the entire cycle of $a$ pointwise. In the second case, let $b := \tau(a) = (c')^k(a)$ for some $k$. Applying $\tau$ to this equation and using the hypothesis gives $a = \tau(b) = \tau( (c')^k(a)) = (c')^k(\tau(a)) = (c')^k(b)$, and so $a = (c')^{2k}(a)$. However, since all cycles of $c'$ are of odd length, this can only happen if actually $a = (c')^k(a)$ -- which is the previous case $\tau(a) = a$. Thus, this case never occurs. Finally, in the third case, suppose $(a_1 \cdots a_{n/d})$ and $(b_1 \cdots b_{n/d})$ are two cycles of $c'$, and that $\tau(a_1) = b_m$. Then (by the hypothesis) $\tau(a_{i + 1}) = \tau( (c')^i(a_1)) = (c')^i(\tau(a_1)) = (c')^i(b_m) = b_{m + i}$ -- so the value of $\tau$ is determined on the entire cycle by the choice of the image of $a = a_1$. Because $\tau$ is an involution, $\tau(b_m) = a$, and the value of $\tau$ on the cycle $(b_1 \cdots b_{n/d})$ is also determined by the image of $a$. Hence, an involution $\tau$ commuting with $c'$ amounts to an involution $\pi$ of the $d$ cycles of $c'$, together with, for each $2$-cycle of $\pi$, a choice of one of the $\frac{n}{d}$ possible images of a distinguished element; since $\pi$ has $\frac{d - \#\operatorname{Fix}(\pi)}{2}$ two-cycles, the number of such $\tau$ is $\sum_{\pi} \left(\frac{n}{d}\right)^{\frac{d - \#\operatorname{Fix}(\pi)}{2}}$. \end{proof} This allows us to compute the orbit sizes when $n$ is odd, as follows. \begin{thm}\label{thm:orbit_count_odd_sizes} Let $n$ be an odd number and $d$ be an (odd) divisor of $n$. Then the number of orbits of $\K$ of order $d$ is \[\frac{1}{d} \sum_{q \mid d} \mu\left(\frac{d}{q}\right)\sum_{j = 0}^{\lfloor q/2 \rfloor} \binom{q}{2j} \cdot (2j - 1)!!
\cdot \left(\frac{n}{q}\right)^j, \] where the double factorial $k!!$ is the product of the integers from $1$ to $k$ that have the same parity as $k$, and $\mu$ is the number-theoretic M\"obius function. \end{thm} \begin{proof} The proof technique is to compute all permutations $\sigma$ for which $\K^d(\sigma) = \sigma$ and then use inclusion-exclusion to get those that belong to an orbit of size $d$ for each divisor $d$ of $n$. By Proposition \ref{prop:orbits_of_size_d_odd}, the number of permutations $\sigma$ for which $\K^d(\sigma)=\sigma$ is \[ \sum_{\pi} \left(\frac{n}{d}\right)^{\frac{d - \#\operatorname{Fix}(\pi)}{2}}, \] where the sum runs over the involutions $\pi$ of $S_d$ and $\operatorname{Fix}(\pi)$ is the set of fixed points of $\pi$. We group the involutions according to the number $j$ of their $2$-cycles, so that the coefficient of $\left(\frac{n}{d}\right)^j$ is the cardinality of the set \[ \left\{ \pi \textnormal{ an involution of } S_d \;\middle|\; \frac{d - \#\operatorname{Fix}(\pi)}{2} = j \right\}.\] Involutions of $[d]$ with $d-2j$ fixed points are counted in the following way: we first pick the $2j$ non-fixed points (in $\binom{d}{2j}$ ways), and then match them so they come in pairs. The matchings are counted by $(2j-1)(2j-3)(2j-5)\ldots1 = (2j-1)!!$. Consequently, our sum becomes \[ \sum_{j = 0}^{\lfloor d/2 \rfloor} \binom{d}{2j} \cdot (2j - 1)!! \cdot \left(\frac{n}{d}\right)^j. \] This is the number of elements in $S_n$ that are fixed by $\K^d$, i.e.\ that belong to orbits of $\K$ of size \emph{dividing} $d$. In order to find the number of elements that belong to orbits of size \emph{exactly} $d$, we do a M\"obius inversion, finding that the number of such elements is \[ \sum_{q \mid d} \mu\left(\frac{d}{q}\right) \sum_{j = 0}^{\lfloor q/2 \rfloor} \binom{q}{2j} \cdot (2j - 1)!! \cdot \left(\frac{n}{q}\right)^j, \] where $\mu$ is the number-theoretic M\"obius function. Finally, to get the number of orbits, we divide by the common size $d$ of the orbits, as needed. \end{proof} Calculating the number of orbits of even size is even more straightforward.
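The counting results of this subsection can be cross-checked by brute force for small $n$; the Python sketch below (helper names are ours) regenerates the orbit-size distributions tabulated earlier, e.g.\ $[1, 2^{(2)}, 5^{(5)}, 10^{(9)}]$ for $n=5$.

```python
from itertools import permutations

def kreweras(p):
    # K(sigma) = c o sigma^{-1}, where c sends each value v to v+1 mod n.
    n = len(p)
    inverse = [0] * n
    for position, value in enumerate(p, start=1):
        inverse[value - 1] = position
    return tuple(v % n + 1 for v in inverse)

def orbit_size_distribution(n):
    # Map each K-orbit size on S_n to the number of orbits of that size.
    seen, counts = set(), {}
    for p in permutations(range(1, n + 1)):
        if p in seen:
            continue
        orbit = {p}
        q = kreweras(p)
        while q != p:
            orbit.add(q)
            q = kreweras(q)
        seen |= orbit
        counts[len(orbit)] = counts.get(len(orbit), 0) + 1
    return dict(sorted(counts.items()))

print(orbit_size_distribution(5))  # {1: 1, 2: 2, 5: 5, 10: 9}
```

For $n=5$ this agrees with the M\"obius-inversion count above: for instance, the formula gives $\frac{1}{5}(26 - 1) = 5$ orbits of size $5$, where $26$ is the number of involutions in $S_5$.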
\begin{thm}[Even sized orbit cardinality]\label{Prop:K even orbs} For each even divisor $2k$ of $2n$, the number of orbits of size $2k$ under the action of the Kreweras complement is equal to $$\frac{\left(\frac{n}{k}\right)^kk!-T}{2k},$$ where $\left(\frac{n}{k}\right)^kk!$ is the number of elements in $S_n$ fixed by $\K^{2k}$ and $T$ is the total number of elements in orbits whose size is a proper divisor of $2k.$ \end{thm} \begin{proof} Let $\sigma\in S_n$ be an arbitrary element fixed under the action of $\K^{2k}$. By Equation \eqref{Eqn:k power m}, $c^k\circ\sigma\circ c^{-k}=\sigma,$ or equivalently $\sigma\circ c^k\circ \sigma^{-1}=c^k.$ Thus $\sigma$ is fixed by $\K^{2k}$ if and only if it is in the centralizer of $c^k$ under the action of the inner automorphism group $\mbox{Inn}(S_n).$ By the orbit-stabilizer theorem, the number of elements in the centralizer of $c^k$ is equal to $\frac{|\mbox{Inn}(S_n)|}{|\mbox{Orb}(c^k)|}$ where $|\mbox{Orb}(c^k)|$ is the number of elements in the orbit of $c^k$ under conjugation, or equivalently the number of elements in $S_n$ with the same cycle structure as $c^k$. As $c^k$ consists of $k$ cycles, each of length $\frac{n}{k}$, and there are exactly $\frac{n!}{(\frac{n}{k})^kk!}$ permutations in $S_n$ with this cycle structure, it follows that the number of elements in the centralizer of $c^k$ is $\left(\frac{n}{k}\right)^kk!.$ Elements fixed by the action of $\K^{2k}$ include those in orbits of size $q$ for all $q\mid 2k.$ To find the number of elements in orbits of size $2k$ we need to subtract out elements in orbits of size equal to a proper divisor of $2k$. Dividing the result by $2k$ gives the number of distinct orbits of size $2k$ under the action of $\K.$ \end{proof} \begin{remark} Consider $\sigma$ in the centralizer of $c^k$ under the action of the inner automorphism group.
Since $\sigma\circ c^k \circ\sigma^{-1}=c^k$, conjugation by $\sigma$ has the effect of permuting the disjoint cycles $c_1,c_2,\ldots, c_k$ of $c^k.$ Furthermore, writing $c_i=(c_{i1}c_{i2}\ldots c_{i\frac{n}{k}})$, we have $\sigma\circ c^k\circ \sigma^{-1}=\sigma c_1c_2\ldots c_k \sigma^{-1}=\sigma c_1\sigma^{-1}\,\sigma c_2\sigma^{-1}\ldots \sigma c_k \sigma^{-1}$ and $\sigma c_i\sigma^{-1}=(\sigma(c_{i1})\sigma(c_{i2})\ldots \sigma(c_{i\frac{n}{k}})),$ so it follows that elements of the centralizer of $c^k$ are completely determined by how they permute the disjoint cycles of $c^k$ and where they map $c_{i1}$ for each $1\leq i\leq k.$ Put another way, for each permutation $\pi$ in $S_k$, we can construct an element of the centralizer of $c^k$ from a word $w$ of length $k$ on the alphabet $\{1,2,\ldots,\frac{n}{k}\}$ by setting $\sigma(c_{i1})=c_{\pi(i)w_i}.$ Thus, similar to the proof of Proposition~\ref{prop:orbits_of_size_d_odd}, we can count the number of elements in the centralizer of $c^k$ using a bijection between these elements and pairs consisting of a permutation in $S_k$ and a word of length $k$ on the alphabet $\{1,2,\ldots,\frac{n}{k}\}.$ \end{remark} \begin{remark} As we prove in Corollary \ref{Korb odd divisor even n}, there are no orbits of odd size when $n$ is even. Thus, Theorem \ref{Prop:K even orbs} completely characterizes the distribution of orbits when $n$ is even, allowing us to calculate the number of orbits of size $2k$ using the M\"obius inversion formula. By Theorem \ref{Prop:K even orbs}, the number of elements fixed by $\K^{2k}$, or equivalently in orbits of size dividing $2k$, is $\left(\frac{n}{k}\right)^kk!.$ When $n$ is even, the number of elements in orbits of size exactly $2k$ (when $k$ divides $n$) is $$\sum_{q|k}\mu\left(\frac{k}{q}\right)\cdot\left(\frac{n}{q}\right)^q\cdot q!.$$ \end{remark} Below we examine orbits of size $1$, $2$, $n$, and $2n$, providing explicit generators in each case.
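Before turning to those generators, we note that the centralizer count $\left(\frac{n}{k}\right)^k k!$ at the heart of Theorem~\ref{Prop:K even orbs} is easy to confirm by brute force for small parameters; the Python sketch below (function names are ours) counts the permutations commuting with $c^k$.

```python
from itertools import permutations
from math import factorial

def compose(p, q):
    # One-line notation: (p o q)(i) = p(q(i)).
    return tuple(p[q[i] - 1] for i in range(len(p)))

def centralizer_of_ck(n, k):
    # Number of sigma in S_n with sigma o c^k = c^k o sigma,
    # i.e. (by the proof above) the number of elements fixed by K^{2k}.
    c = tuple(i % n + 1 for i in range(1, n + 1))
    ck = c
    for _ in range(k - 1):
        ck = compose(c, ck)
    return sum(1 for p in permutations(range(1, n + 1))
               if compose(p, ck) == compose(ck, p))

for n, k in [(6, 2), (6, 3)]:
    print(centralizer_of_ck(n, k), (n // k) ** k * factorial(k))  # 18 18, then 48 48
```

For $n=6$, $k=3$, the element $c^3=(14)(25)(36)$ has centralizer of size $2^3\cdot 3!=48$, matching the formula.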
\begin{thm}\label{thm:Orbit_generators_K} The following permutations generate orbits of a given size for the Kreweras complement and its inverse: \begin{itemize} \item When $n$ is odd, $\left\{\frac{n+3}{2}\frac{n+5}{2}\ldots n123\ldots\frac{n+1}{2}\right\}$ is the unique orbit of size $1$. There is no orbit of size $1$ when $n$ is even. \item For all $n$, there are $\lfloor \frac{n}{2}\rfloor$ orbits of size $2,$ one of which is generated by the identity permutation. \item For all $n$, an orbit of size $n$ is generated by $n(n-1)\ldots 321$. \item For even values of $n>3$, an orbit of size $2n$ is generated by $13\ldots(n-1)24\ldots n$, and for odd values of $n>3$ an orbit of size $2n$ is generated by $13\ldots n24\ldots (n-1).$ \end{itemize} \end{thm} We prove each part of this theorem separately through Propositions \ref{Prop:K fixed points}--\ref{Prop:K_orbit_size_2n}. \begin{prop}\label{Prop:K fixed points} There are no fixed points in $S_n$ under the action of the Kreweras complement when $n$ is even and exactly one at \[\sigma = \frac{n+3}{2}\frac{n+5}{2}\ldots n123\ldots\frac{n+1}{2}\] when $n$ is odd. \end{prop} \begin{proof} Let $\sigma\in S_n$ be a fixed point under the action of the Kreweras complement, i.e.\ $\K(\sigma)=c\circ\sigma^{-1}=\sigma,$ so that $\sigma^2=c$ and $\sigma(\sigma(j))=j+1$ modulo $n$ for all $j$. Thus, if $\sigma_1=i,$ then $\sigma_i=2$ and $\sigma_2=i+1$ and so on with $\sigma_j=i+j-1$ and $\sigma_{(i+j-1)}=j+1$, in other words $\sigma=i(i+1)\ldots n 1 2 \ldots (i-1)$. Examining $\sigma$ we find $n$ appears in the $n-(i-1)$ spot, that is, $\sigma_{n-(i-1)}=n$. However, since $\sigma_i=2$ and $n$ appears two positions to the left of $2,$ $n$ is in the $(i-2)$ spot, i.e.\ $\sigma_{i-2}=n.$ Setting the indices $n-(i-1)$ and $i-2$ equal and solving for $i$ we find $i=\frac{n+3}{2}$, which is an integer only when $n$ is odd. Hence there are no fixed points when $n$ is even, and when $n$ is odd the unique fixed point is the stated permutation. \end{proof} \begin{prop} The Kreweras complement has $\lfloor \frac{n}{2} \rfloor $ orbits of size $2$. In particular, the identity permutation generates an orbit of size $2$.
\end{prop} \begin{proof} By direct computation, we find that $\{1234\ldots n, 234\ldots n1\}$ is an orbit of size 2 under the Kreweras complement. By Theorem \ref{Prop:K even orbs}, there are $\frac{n-T}{2}$ distinct orbits of size 2, where $T$ is the number of orbits of size 1. By Proposition \ref{Prop:K fixed points}, $T$ is zero when $n$ is even and $1$ when $n$ is odd, thus $\frac{n-T}{2}=\lfloor \frac{n}{2} \rfloor.$ \end{proof} The following proposition shows the reverse of the identity permutation is in an orbit of size~$n$. \begin{prop}\label{prop:orbit_size_n} The Kreweras complement has an orbit of size $n$ of the form $\{ n(n-1) \ldots 321, 1n(n-1) \ldots 32, 21n \ldots 43, 321n \ldots 54, \ldots, (n-1)(n-2)\ldots 21n\}$. \end{prop} \begin{proof} Let $\sigma=n(n-1) \ldots 321$. We start by noting that the $i$-th entry of $\sigma$ is given by $\sigma_i=n-(i-1)$ and $\sigma=\sigma^{-1}$. By Lemma \ref{ithentry}, $\K^j(\sigma)_i$ is equal to $n-(i-\frac{j}{2}-1)+\frac{j}{2}\pmod{n}$ when $j$ is even, and $n-(i-\frac{j-1}{2}-1)+\frac{j+1}{2}\pmod{n}$ when $j$ is odd. In both cases $\K^j(\sigma)_i=n-(i-1)+j\pmod{n},$ which is only equal to $\sigma_i$ when $j$ is a multiple of $n$, and gives the specified orbit when evaluated at $0\le j<n$. \end{proof} Finally, the proposition below exhibits an orbit of size $2n$, completing the proof of Theorem~\ref{thm:Orbit_generators_K}. \begin{prop}\label{Prop:K_orbit_size_2n} For $n > 3$, the permutation $\sigma = 1 3 \ldots (n-1)2 4 \ldots n$ when $n$ is even, and $\sigma = 1 3 \ldots (n)2 4 \ldots (n-1)$ when $n$ is odd, generates an orbit of size $2n$ under the Kreweras complement. \end{prop} \begin{proof} Let $\sigma$ be the permutation on $n$ elements given in the proposition, i.e.\ \[\sigma_i= \begin{cases} 2(i-1)+1 & \mbox{for } 1\leq i\leq\lceil\frac{n}{2}\rceil \\ 2(i-\lceil\frac{n}{2}\rceil) & \mbox{for } \lceil\frac{n}{2}\rceil<i\leq n \end{cases}. 
\] Direct calculation shows that the orbit of $\sigma$ under the Kreweras complement has size $n$ for $n=2$ and $3$, and size $2n$ for $n=4$ and $5$. For $n>5$, we use the fact that all odd entries of $\sigma$ precede all even entries to show that $\K^{j}(\sigma) \neq \sigma$ for $0 < j < 2n.$ Consider $\sigma^{-1}=1(\frac{n}{2}+1)2(\frac{n}{2}+2)3(\frac{n}{2}+3)\ldots \frac{n}{2} n$ for $n$ even, and $\sigma^{-1}=1(\lceil\frac{n}{2}\rceil+1)2 (\lceil\frac{n}{2}\rceil+2)3( \lceil\frac{n}{2}\rceil+3)\ldots \lceil\frac{n}{2}\rceil$ for $n$ odd. Unlike in $\sigma$, even and odd entries of $\sigma^{-1}$ appear in alternating pairs, potentially bookended on either or both sides by a single even or odd entry. To illustrate this point, when $n=4$, $\sigma^{-1}=1324$; when $n=5$, $\sigma^{-1}=14253$; when $n=6$, $\sigma^{-1}=142536$; and when $n=7$, $\sigma^{-1}=1526374.$ As conveyed in Lemma \ref{ithentry}, when $j$ is odd, $\K^j(\sigma)$ is equivalent to rotating $\sigma^{-1}$ by some set amount and then adding a constant amount to each entry modulo $n$. This process preserves the grouping of even and odd entries of $\sigma^{-1}$ when viewed as a cyclic ordering, and for $n>5$ never results in all odd entries followed by all even entries, which is the pattern of $\sigma$. Thus, for $n>5,$ $\K^j(\sigma)\neq \sigma$ when $j$ is odd. Next, consider $j$ even, in which case the action of $\K^j$ can be realized as acting on $\sigma$ by $\K^2$ some number of times. By Lemma \ref{ithentry}, the action of $\K^2$ is the same as cyclically shifting each entry of $\sigma$ one place to the right and adding one modulo $n$. When $n$ is even, this process alternates the parity of every entry and thus preserves the string of $\frac{n}{2}$ even and $\frac{n}{2}$ odd entries (though cyclically shifted).
For the permutation to return to a state where the first $\frac{n}{2}$ entries (and thus also the last $\frac{n}{2}$ entries) have the same parity, we have to shift the entries some multiple of $\frac{n}{2}$ times, i.e.\ $j$ must be some multiple of $n$. For $n$ even, $\K^n(\sigma) = (2 + \frac{n}{2})(4 + \frac{n}{2}) \ldots (n + \frac{n}{2})(1 + \frac{n}{2})(3 + \frac{n}{2}) \ldots (n - 1 + \frac{n}{2})$, where the entries are calculated modulo $n$. If $\K^n(\sigma) = \sigma$, then $(2 + \frac{n}{2}) \bmod{n} = 1$. But $2 + \frac{n}{2} \leq n$ for $n \geq 4$, so $(2 + \frac{n}{2}) \bmod{n} = 2 + \frac{n}{2} \neq 1$. Thus $\K^n(\sigma) \neq \sigma$ and the smallest $j$ such that $\K^j(\sigma) = \sigma$ is $2n$. For $n$ odd, the case where $j$ is even is slightly more complicated. $\K^j(\sigma)$ can still be thought of as repeatedly acting on $\sigma$ by $\K^2$; however, each time we shift and add one to the entries of $\sigma$, every entry \emph{except $n$} alternates in parity. To show that $\K^j(\sigma)\neq\sigma$ for $0< j<2n$ when $n$ is odd, we examine the position of $1$ in $\K^{j}(\sigma).$ Suppose $1$ appears in the $i$-th position of $\K^j(\sigma).$ By Lemma \ref{ithentry}, $\K^j(\sigma)_i=\sigma_{i-\frac{j}{2}}+\frac{j}{2}\pmod{n}.$ Thus, if $\K^j(\sigma)_i=1$, it follows that $\sigma_{i-\frac{j}{2}}=1-\frac{j}{2}\pmod{n}.$ Let $j=2k$; then $\sigma_{i-k}=1-k=n+1-k\pmod{n}.$ When $k$ is even, $n+1-k$ is even and appears in the $n+1-\frac{k}{2}$ position by the definition of $\sigma$. Setting the indices equal we have $i-k=n+1-\frac{k}{2}\pmod{n}$, which implies $i=1+\frac{k}{2}\pmod{n}.$ Thus, when $k$ is even, the first time $i=1$ is when $k=2n.$ By a similar argument, if $k$ is odd, $\sigma_{i-k}=n+1-k$ is odd and appears at position $\frac{n-k}{2}+1$ in $\sigma$.
Again, setting indices equal and solving for $i$ we have $i=\frac{n+k}{2}+1.$ In this case the first time $i=1$ is when $k=n.$ Thus, if $n$ is odd, the first time $1$ returns to the first position (i.e.\ $i=1$), and therefore $\K^{j}(\sigma)$ might equal $\sigma,$ is when $k=n$, or equivalently $j=2n.$ \end{proof} \subsection{Kreweras complement homomesies} \label{sec:KrewHom} Empirical investigations using the FindStat database suggested three potential homomesies for the Kreweras complement and its inverse. In this subsection, we prove those homomesies, as well as a fourth that was not included in the FindStat database at the time of our investigation. We begin with the relevant definitions and lemmas that will be useful in proving Theorem~\ref{Thm:Kreweras}. \begin{definition} \label{def:exceedance} Given a permutation $\sigma\in S_n$, an index $i$ is said to be an \textbf{exceedance} of $\sigma$ if $\sigma_i>i$, a \textbf{deficiency} of $\sigma$ if $\sigma_i < i$, and a \textbf{weak deficiency} or \textbf{anti-exceedance} if $\sigma_i\le i.$ The number of exceedances $\#\{i:\sigma_i> i\}$ is denoted $\exc(\sigma)$, while the number of points fixed by $\sigma$ is denoted $\fp$. \end{definition} \begin{lem}\label{exc_sinv} Given a permutation $\sigma\in S_n$, $$\exc(\sigma^{-1})=n-\exc(\sigma)-\fp.$$ \end{lem} \begin{proof} Examining the weak deficiencies of $\sigma^{-1}$ we note that $\sigma^{-1}_i=m\le i$ if and only if $\sigma_m=i\ge m$. Thus the number of weak deficiencies of $\sigma^{-1}$ is the sum of the number of exceedances and fixed points of $\sigma$. As the exceedances of $\sigma^{-1}$ are the complement of the weak deficiencies of $\sigma^{-1}$, the result follows. 
\end{proof} \begin{lem}\label{exc_K} Given $\sigma\in S_n$, the following formula gives the number of exceedances in the image under $\K$ and $\K^{-1}.$ $$\exc(\K(\sigma))=n-\exc(\sigma)-1=\exc(\K^{-1}(\sigma)).$$ \end{lem} \begin{proof} For all $i<n,$ if $i$ is a fixed point of $\sigma^{-1}$, then $(c\circ\sigma^{-1})_i=i+1$ and $i$ is an exceedance of $\K(\sigma)=c\circ\sigma^{-1}.$ If $n$ is a fixed point, $(c\circ\sinv)_n=1$ and $n$ is not an exceedance of $\K(\sigma).$ Similarly, if $i$ is an exceedance of $\sigma^{-1}$ with $\sinv_i\neq n$, then $(c\circ\sigma^{-1})_i=\sigma^{-1}_i+1$ is an exceedance. If $\sigma^{-1}_i=n,$ then $(c\circ\sinv)_i=1$ and $i$ is not an exceedance of $\K(\sigma).$ Thus, if $n$ is a fixed point of $\sigma^{-1}$, $$\exc(\K(\sigma))=\exc(c\circ\sinv)=\exc(\sinv)+(\fp-1),$$ while if $n$ is not a fixed point $$\exc(\K(\sigma))=(\exc(\sinv)-1)+\fp.$$ Therefore by Lemma \ref{exc_sinv}, $\exc(\K(\sigma))=n-\exc(\sigma)-1.$ To prove the result for $\K^{-1}$, let $\tau=\K^{-1}(\sigma)$. Then $\K(\tau) = \sigma$ and we know that $\exc(\sigma) = \exc(\K(\tau)) = n-\exc(\tau)-1 =n-1-\exc(\K^{-1}(\sigma))$. Solving for $\exc(\K^{-1}(\sigma))$, we find $\exc(\K^{-1}(\sigma)) = n-\exc(\sigma)-1.$ \end{proof} We are now ready to prove the homomesy results stated in Theorem \ref{Thm:Kreweras}. \begin{prop}[Statistics 155 and 702]\label{KHom exc} For the Kreweras complement and its inverse acting on $S_n$, the number of exceedances is $\frac{n-1}{2}$-mesic while the number of weak deficiencies is $\frac{n+1}{2}$-mesic. Furthermore, the number of exceedances and weak deficiencies is constant over orbits of odd size. 
\end{prop} \begin{proof} We start by observing that, as a consequence of Lemma \ref{exc_K},$$\exc(\K^{-2}(\sigma))=\exc(\K^2(\sigma))=n-\exc(\K(\sigma))-1=n-(n-\exc(\sigma)-1)-1=\exc(\sigma).$$ More generally, $$\exc(\K^{\pm m}(\sigma))= \left\{ \begin{array}{lr} \exc(\sigma) & \mbox{when $m$ is even} \\ n-\exc(\sigma)-1 & \mbox{when $m$ is odd} \end{array}\right . $$ Let $\ell$ be the size of the orbit of $\sigma$ under the action of $\K$, i.e.\ $\K^{\ell}(\sigma)=\sigma.$ Thus $\exc(\K^{\ell}(\sigma))=\exc(\sigma).$ If $\ell$ is odd, it follows that $$\exc(\K^{\ell}(\sigma))=n-\exc(\sigma)-1=\exc(\sigma)$$ and solving for $\exc(\sigma)$ we find $$\exc(\sigma)=\frac{n-1}{2}.$$ Thus, when $\ell$ is odd, the number of exceedances is constant over the orbit and equal to $\frac{n-1}{2}$. When $\ell$ is even, the average number of exceedances over the orbit is \begin{eqnarray*} \frac{1}{\ell}\sum_{i=0}^{\ell-1} \exc(\K^i(\sigma))&=&\frac{1}{\ell}\sum_{j=0}^{\frac{\ell}{2}-1}\exc(\K^{2j}(\sigma))+\exc(\K^{2j+1}(\sigma))\\ &=&\frac{1}{\ell}\sum_{j=0}^{\frac{\ell}{2}-1}\exc(\sigma)+(n-\exc(\sigma)-1)\\ &=&\frac{1}{\ell}\sum_{j=0}^{\frac{\ell}{2}-1}n-1=\frac{n-1}{2}. \end{eqnarray*} Thus, the number of exceedances is $\frac{n-1}{2}$-mesic under the action of the Kreweras complement. The result for $\K^{-1}$ follows from Lemma \ref{lem:inverse}. By definition, the number of weak deficiencies is equal to $n-\exc(\sigma)$ for all $\sigma\in S_n.$ Thus, since $\exc$ is $\frac{n-1}{2}$-mesic under the action of the Kreweras complement and its inverse, the number of weak deficiencies is $n-\frac{n-1}{2}=\frac{n+1}{2}$-mesic. \end{proof} Our homomesy result for the number of exceedances explains the observed lack of odd sized orbits under the action of the Kreweras complement on $S_n$ when $n$ is even. \begin{cor}\label{Korb odd divisor even n} If $n$ is even, there are no orbits of odd size under the Kreweras complement or its inverse acting on $S_n$. 
\end{cor} \begin{proof} Consider orbits of odd size under the Kreweras complement and its inverse. As shown in the proof of Proposition \ref{KHom exc}, the number of exceedances for each element of such an orbit is the constant value $\frac{n-1}{2}$. Since the number of exceedances must be an integer, we arrive at a contradiction when $n$ is even. \end{proof} \begin{definition} Given a permutation $\sigma\in S_n$, define the lower middle element to be $\sigma_{\frac{n}{2}}$ when $n$ is even, and $\sigma_{\frac{n+1}{2}}$ when $n$ is odd. \end{definition} \begin{prop}[Statistic 740]\label{Khom lastentry} The last entry of a permutation, and when $n$ is even, the lower middle element, are $\frac{n+1}{2}$-mesic under the Kreweras complement and its inverse. \end{prop} \begin{proof} We start by showing that the average of the set $\{ \K^j(\sigma)_i \ | \ j = 0, 1, \ldots , 2n-1\}$ is $\frac{n + 1}{2}$ when $i=n$ and when $i=\frac{n}{2}$ for even $n$. By Lemma \ref{ithentry}, running over even values of $j$, the last entry of $\K^j(\sigma)$, i.e.\ $\K^j(\sigma)_n,$ takes the form $\sigma_n, \sigma_{n-1} + 1, \dots, \sigma_{1} + (n - 1) $; and running over odd values of $j$, $\K^j(\sigma)_n$ takes the forms $\sigma^{-1}_n + 1, \sigma^{-1}_{n-1} + 2 , \dots , \sigma^{-1}_1$. In each case, the entries are calculated modulo $n$ with $n$ used for the $0$-th equivalence class. To find the sum of $\K^j(\sigma)_n$ for $0\leq j\leq 2n-1$, we show that these last terms can be partitioned into pairs which sum to $n+1.$ Suppose $\sigma_{n-k}=m.$ Breaking up the sum as follows $$\sum_{j=0}^{2n-1}\K^j(\sigma)_n=\sum_{k=0}^{n-1}[\sigma_{n-k}+k]_n+[\sigma^{-1}_m+(n-m+1)]_n,$$ we observe that \begin{itemize} \item if $\sigma_{n-k}+k=m+k>n$, then $\sigma^{-1}_m+(n-m+1)=n-k+(n-m+1)\leq n$. 
Since both terms are between $1$ and $2n$, it follows $[\sigma_{n-k}+k]_n+[\sigma^{-1}_m+(n-m+1)]_n=(m+k-n)+(n-k+(n-m+1))=n+1.$ \item Similarly, if $\sigma_{n-k}+k\leq n$, then $\sigma^{-1}_m+(n-m+1)> n$, and $[\sigma_{n-k}+k]_n+[\sigma^{-1}_m+(n-m+1)]_n=(m+k)+(n-k+(n-m+1)-n)=n+1.$ \end{itemize} Thus, the average of the last entries is $$\frac{1}{2n}\sum_{j=0}^{2n-1}\K^j(\sigma)_n=\frac{1}{2n}\sum_{k=0}^{n-1}(n+1)=\frac{n+1}{2}.$$ An analogous argument works to find the sum of the lower middle elements, $\{\K^j(\sigma)_{\frac{n}{2}}| 0\leq j\leq 2n-1\},$ when $n$ is even. By Lemma \ref{ithentry}, running over the even values of $j,$ $\K^j(\sigma)_{\frac{n}{2}}$ takes the form $\sigma_{[\frac{n}{2}]_n}, \sigma_{[\frac{n}{2}-1]_n} + 1, \sigma_{[\frac{n}{2}-2]_n} + 2, \ldots, \sigma_{[\frac{n}{2}-(n-1)]_n} + (n-1).$ Running over the odd values of $j,$ $\K^j(\sigma)_{\frac{n}{2}}$ takes the form $\sigma^{-1}_{[\frac{n}{2}]_n} + 1, \sigma^{-1}_{[\frac{n}{2}-1]_n} + 2, \ldots, \sigma^{-1}_{[\frac{n}{2}-(n-1)]_n} + n$. As before, values are calculated modulo $n$ with $n$ used for the $0$-th equivalence class. 
Suppose $\sigma_{[\frac{n}{2}-k]_n}=m$, and break up the sum $$\sum_{j=0}^{2n-1}\K^j(\sigma)_{\frac{n}{2}}=\sum_{k=0}^{n-1}[\sigma_{[\frac{n}{2}-k]_n}+k]_n+[\sigma^{-1}_m+\frac{n}{2}-m+1]_n,$$ observing that \begin{itemize} \item if $\sigma_{[\frac{n}{2}-k]_n}+k=m+k>n,$ then $\sigma^{-1}_m+(\frac{n}{2}-m+1)=\frac{n}{2}-k+(\frac{n}{2}-m+1)\leq 0$, and since both terms are between $-n$ and $n$, $[\sigma_{[\frac{n}{2}-k]_n}+k]_n+[\sigma^{-1}_m+(\frac{n}{2}-m+1)]_n=(m+k-n)+(n-k-m+1+n)=n+1.$ \item If $\sigma_{[\frac{n}{2}-k]_n}+k\leq n,$ then $n \geq \sigma^{-1}_m+(\frac{n}{2}-m+1)\geq 1$, and $[\sigma_{[\frac{n}{2}-k]_n}+k]_n+[\sigma^{-1}_m+(\frac{n}{2}-m+1)]_n=(m+k)+(n-k-m+1)=n+1.$ \end{itemize} Thus, the average of the $\frac{n}{2}$-th entries is $$\frac{1}{2n}\sum_{j=0}^{2n-1}\K^j(\sigma)_{\frac{n}{2}}=\frac{1}{2n}\sum_{k=0}^{n-1}(n+1)=\frac{n+1}{2}.$$ Next, we consider an orbit of size $l$ where $0 < l \leq 2n$. As $\K^{2n}$ is the identity map, $l$ must be a divisor of $2n$, so there is a positive integer $k$ such that $2n = kl$. Therefore $$\displaystyle \sum_{j = 0}^{2n - 1} \K^{j}(\sigma)_i = k \sum_{j = 0}^{l - 1} \K^{j}(\sigma)_i.$$ So, the average over the orbit when $i=n,$ and when $i=\frac{n}{2}$ for even $n$, is $$\frac{\sum_{j = 0}^{l - 1} \K^{j}(\sigma)_i}{l} = \frac{k \sum_{j = 0}^{l - 1} \K^{j}(\sigma)_i}{k l} = \frac{\sum_{j = 0}^{2n - 1} \K^{j}(\sigma)_i}{2n} = \frac{n + 1}{2}.$$ The same results for $\K^{-1}$ follow by Lemma \ref{lem:inverse}. \end{proof} When we ran the experiment, the only entries of a permutation that were statistics in FindStat were Statistic 740 (the last entry) and Statistic 54 (the first entry). Our homomesy result for the lower middle element was found analytically. Since then, Statistic 1806 (the upper middle entry of a permutation, $\sigma_{\lceil\frac{n+1}{2}\rceil}$) and Statistic 1807 (the lower middle entry, $\sigma_{\lfloor\frac{n+1}{2}\rfloor}$) have been added to the FindStat database.
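The orbit sizes of Theorem~\ref{thm:Orbit_generators_K} and the last-entry homomesy above are easy to confirm experimentally. The following Python sketch (the helper names are ours) implements the Kreweras complement in one-line notation as $\K(\sigma)=c\circ\sigma^{-1}$ with $c=23\ldots n1$, then checks the orbit sizes of the identity and reverse permutations and the last-entry orbit averages for $n=6$.

```python
from itertools import permutations

def kreweras(sigma):
    # K(sigma) = c o sigma^{-1} in one-line notation, where c = 23...n1,
    # i.e. each entry of sigma^{-1} is increased by 1 modulo n (with n for 0).
    n = len(sigma)
    inv = [0] * n
    for pos, val in enumerate(sigma, start=1):  # build sigma^{-1}
        inv[val - 1] = pos
    return tuple(v % n + 1 for v in inv)

def orbit(sigma):
    # Orbit of sigma under repeated application of the Kreweras complement.
    orb, cur = [sigma], kreweras(sigma)
    while cur != sigma:
        orb.append(cur)
        cur = kreweras(cur)
    return orb

n = 6
identity = tuple(range(1, n + 1))
reverse = tuple(range(n, 0, -1))
# The identity lies in an orbit of size 2, the reverse permutation
# in an orbit of size n, as the theorem asserts.
print(len(orbit(identity)), len(orbit(reverse)))  # 2 6

# Homomesy check: the last entry averages (n+1)/2 over every orbit.
for sigma in permutations(range(1, n + 1)):
    orb = orbit(sigma)
    assert sum(p[-1] for p in orb) / len(orb) == (n + 1) / 2
```

The same loop, with `p[-1]` replaced by `p[n // 2 - 1]`, verifies the lower middle element homomesy for even $n$.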
As noted in the following corollary, the homomesies from Proposition \ref{Khom lastentry} are the only $i$-th entry homomesies possible for the Kreweras complement. \begin{cor}\label{KC: no entry but} No entry except the last entry, and the $\frac{n}{2}$-th entry when $n$ is even, is homomesic with respect to the Kreweras complement. \end{cor} \begin{proof} Consider the set of $i$-th entries of each permutation in $S_n$. As each number between $1$ and $n$ is equally likely to appear in the $i$-th position, the global average of the $i$-th entry is $\frac{1+2+\ldots+n}{n}=\frac{n+1}{2}$. By Theorem~\ref{thm:Orbit_generators_K}, for all $n$, $\{1234\ldots n, 234\ldots n1\}$ is an orbit of the Kreweras complement. In this orbit, the entries in a given position have to sum to $n+1$ in order for the orbit average to equal the global average. This is clearly true for the last entry. The sum of the $i$-th entries over these two permutations is $i+(i+1)=2i+1$ for all $1\leq i\leq n-1$. As $2i+1=n+1$ implies $i=\frac{n}{2}$, this orbit shows that only the $n$-th and $\frac{n}{2}$-th entries can be homomesic.
\end{proof}

\bibliographystyle{plain}
\bibliography{master.bib}

\end{document}